WorldWideScience

Sample records for regional model calibration

  1. Regional calibration models for predicting loblolly pine tracheid properties using near-infrared spectroscopy

    Science.gov (United States)

    Mohamad Nabavi; Joseph Dahlen; Laurence Schimleck; Thomas L. Eberhardt; Cristian Montes

    2018-01-01

    This study developed regional calibration models for the prediction of loblolly pine (Pinus taeda) tracheid properties using near-infrared (NIR) spectroscopy. A total of 1842 pith-to-bark radial strips, aged 19–31 years, were acquired from 268 trees from 109 stands across the southeastern USA. Diffuse reflectance NIR spectra were collected at 10-mm...

  2. Regional Calibration of SCS-CN L-THIA Model: Application for Ungauged Basins

    OpenAIRE

    Jeon, Ji-Hong; Lim, Kyoung; Engel, Bernard

    2014-01-01

    Estimating surface runoff for ungauged watersheds is an important issue. The Soil Conservation Service Curve Number (SCS-CN) method, developed from long-term experimental data, is widely used to estimate surface runoff from gauged or ungauged watersheds. Many modelers have used the documented SCS-CN parameters without calibration, sometimes resulting in significant errors in estimated surface runoff. Several methods for regionalization of SCS-CN parameters were evaluated. The regionalization met...

  3. Regional Calibration of SCS-CN L-THIA Model: Application for Ungauged Basins

    Directory of Open Access Journals (Sweden)

    Ji-Hong Jeon

    2014-05-01

    Full Text Available Estimating surface runoff for ungauged watersheds is an important issue. The Soil Conservation Service Curve Number (SCS-CN) method, developed from long-term experimental data, is widely used to estimate surface runoff from gauged or ungauged watersheds. Many modelers have used the documented SCS-CN parameters without calibration, sometimes resulting in significant errors in estimated surface runoff. Several methods for regionalization of SCS-CN parameters were evaluated: (1) average; (2) land use area weighted average; (3) hydrologic soil group area weighted average; (4) combined land use and hydrologic soil group area weighted average; (5) spatial nearest neighbor; (6) inverse distance weighted average; and (7) global calibration. Model performance for each method was evaluated with application to 14 watersheds located in Indiana: eight watersheds were used for calibration and six for validation. For the validation results, the spatial nearest neighbor method provided the highest average Nash-Sutcliffe (NS) value, 0.58, for the six watersheds, but it also included the lowest single NS value and the highest variance of NS values. The global calibration method provided the second highest average NS value, 0.56, with low variation of NS values. Although the spatial nearest neighbor method provided the highest average NS value, it was not statistically different from the other methods, whereas the global calibration method was significantly different from all other methods except the spatial nearest neighbor method. Therefore, we conclude that the global calibration method is appropriate for regionalizing SCS-CN parameters for ungauged watersheds.
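
    The two building blocks of the evaluation above are the SCS-CN runoff equation and the Nash-Sutcliffe efficiency. A minimal sketch of both (rainfall depths are illustrative, not from the Indiana watersheds):

```python
# SCS-CN direct runoff and Nash-Sutcliffe efficiency, minimal versions.
import numpy as np

def scs_cn_runoff(p_mm, cn):
    """Direct runoff Q (mm) from rainfall P (mm) for curve number CN."""
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = 0.2 * s                      # initial abstraction (standard 0.2*S)
    return np.where(p_mm > ia, (p_mm - ia) ** 2 / (p_mm - ia + s), 0.0)

def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rain = np.array([10.0, 25.0, 50.0, 80.0])
observed = scs_cn_runoff(rain, cn=75)    # pretend CN=75 is "truth"
simulated = scs_cn_runoff(rain, cn=70)   # an uncalibrated documented CN
ns = nash_sutcliffe(observed, simulated)
print(f"NS for CN=70 against CN=75 truth: {ns:.2f}")
```

    Regionalization methods like those numbered above differ only in how the CN value for an ungauged watershed is assembled from calibrated neighbors.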

  4. A global model for residential energy use: Uncertainty in calibration to regional data

    International Nuclear Information System (INIS)

    van Ruijven, Bas; van Vuuren, Detlef P.; de Vries, Bert; van der Sluijs, Jeroen P.

    2010-01-01

    Uncertainties in energy demand modelling allow for the development of different models, but also leave room for different calibrations of a single model. We apply an automated model calibration procedure to analyse the calibration uncertainty of residential-sector energy use modelling in the TIMER 2.0 global energy model. This model simulates energy use on the basis of changes in useful energy intensity, technology development (AEEI) and price responses (PIEEI). We find that different implementations of these factors yield different model results. Model calibration uncertainty is identified as an influential source of variation in future projections, amounting to 30% to 100% around the best estimate. Energy modellers should systematically account for this and communicate calibration uncertainty ranges. (author)

  5. An Exact Confidence Region in Multivariate Calibration

    OpenAIRE

    Mathew, Thomas; Kasala, Subramanyam

    1994-01-01

    In the multivariate calibration problem using a multivariate linear model, an exact confidence region is constructed. It is shown that the region is always nonempty and is invariant under nonsingular transformations.

  6. Development of regional scale soil erosion and sediment transport model; its calibration and validations

    International Nuclear Information System (INIS)

    Rehman, M.H.; Akhtar, M.N.

    2005-01-01

    Although many soil erosion models have been developed over the past five decades, including empirically based models such as USLE and RUSLE and process-based soil erosion and sediment transport models such as WEPP, EUROSEM and SHETRAN, the applicability of these models at regional scales has remained questionable. To address this problem, a process-based soil erosion and sediment transport model has been developed to estimate soil erosion, deposition, transport and sediment yield at the regional scale. Soil erosion is modeled as detachment of soil by raindrop impact over the entire grid and detachment by overland flow only within the equivalent channels, with sediment routed to the downstream grid according to the transport capacity of the flow. The loss of heterogeneity in the spatial information of the topography due to the slope-averaging effect is reproduced by adopting a fractal analysis approach. The model has been calibrated for the Nan river basin (N.13A) and validated for the Yom river basin (Y.6) and Nam Mae Klang river basin (P.24A) of Thailand; simulated results show good agreement with the observed sediment discharge data. With a few new components, the developed model can also be applied to predict the sediment discharges of the river Indus. (author)

  7. Calibration of a Distributed Hydrological Model using Remote Sensing Evapotranspiration data in the Semi-Arid Punjab Region of Pakistan

    Science.gov (United States)

    Becker, R.; Usman, M.

    2017-12-01

    A SWAT (Soil and Water Assessment Tool) model is applied in the semi-arid Punjab region of Pakistan. The physically based hydrological model is set up to simulate hydrological processes and water resources demands under future land use, climate change and irrigation management scenarios. To run the model successfully, detailed focus is laid on the calibration procedure. The study deals with the following calibration issues: (i) lack of reliable calibration/validation data; (ii) the difficulty of accurately modeling a highly managed system with a physically based hydrological model; and (iii) the use of alternative, spatially distributed data sets for model calibration. In our study area field observations are rare, and the entirely human-controlled irrigation system renders central calibration parameters (e.g., runoff/curve number) unsuitable, as it cannot be assumed that they represent the natural behavior of the hydrological system. From evapotranspiration (ET), however, principal hydrological processes can still be inferred. Usman et al. (2015) derived satellite-based monthly ET data for our study area using SEBAL (Surface Energy Balance Algorithm for Land) and created a reliable ET data set, which we use in this study to calibrate our SWAT model. The initial SWAT model performance is evaluated with respect to the SEBAL results using correlation coefficients, RMSE, Nash-Sutcliffe efficiencies and mean differences. Particular focus is laid on the spatial patterns, investigating the potential of a spatially differentiated parameterization instead of spatially uniform calibration data. A sensitivity analysis reveals the most sensitive parameters with respect to changes in ET, which are then selected for the calibration process. Using the SEBAL ET product, we calibrate the SWAT model for the period 2005-2006 using a dynamically dimensioned global search algorithm to minimize RMSE.
The model improvement after the calibration procedure is finally evaluated based
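
    The "dynamically dimensioned" search mentioned above refers to a family of greedy global optimizers in which the number of perturbed parameters shrinks as the search matures. A hedged sketch of such an algorithm, minimizing a toy RMSE objective (the objective, bounds, and settings are placeholders, not the study's):

```python
# Minimal dynamically dimensioned search (DDS) sketch.
import math, random

def dds(objective, lo, hi, max_iter=500, r=0.2, seed=1):
    rng = random.Random(seed)
    best = [rng.uniform(l, h) for l, h in zip(lo, hi)]
    best_f = objective(best)
    for i in range(1, max_iter + 1):
        # probability of perturbing each dimension decays with iteration
        p = 1.0 - math.log(i) / math.log(max_iter)
        dims = [d for d in range(len(best)) if rng.random() < p] or \
               [rng.randrange(len(best))]
        cand = best[:]
        for d in dims:
            cand[d] += rng.gauss(0.0, r * (hi[d] - lo[d]))
            # reflect at the bounds to stay inside the search box
            if cand[d] < lo[d]:
                cand[d] = min(hi[d], 2 * lo[d] - cand[d])
            if cand[d] > hi[d]:
                cand[d] = max(lo[d], 2 * hi[d] - cand[d])
        f = objective(cand)
        if f < best_f:              # greedy acceptance
            best, best_f = cand, f
    return best, best_f

# toy RMSE objective with optimum at (3, 3)
rmse = lambda x: math.sqrt(sum((xi - 3.0) ** 2 for xi in x) / len(x))
x_best, f_best = dds(rmse, lo=[0, 0], hi=[10, 10])
print(x_best, f_best)
```

    In a SWAT calibration, `objective` would run the model and return the RMSE between simulated ET and the SEBAL ET maps.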

  8. Calibrated Properties Model

    International Nuclear Information System (INIS)

    Ahlers, C.F.; Liu, H.H.

    2001-01-01

    The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the AMR Development Plan for U0035 Calibrated Properties Model REV00 (CRWMS M and O 1999c). These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models as well as Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions

  9. Calibrated Properties Model

    International Nuclear Information System (INIS)

    Ahlers, C.; Liu, H.

    2000-01-01

    The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the ''AMR Development Plan for U0035 Calibrated Properties Model REV00''. These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models as well as Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions

  10. Calibrated Properties Model

    International Nuclear Information System (INIS)

    Ghezzehej, T.

    2004-01-01

    The purpose of this model report is to document the calibrated properties model that provides calibrated property sets for unsaturated zone (UZ) flow and transport process models (UZ models). The calibration of the property sets is performed through inverse modeling. This work followed, and was planned in, ''Technical Work Plan (TWP) for: Unsaturated Zone Flow Analysis and Model Report Integration'' (BSC 2004 [DIRS 169654], Sections 1.2.6 and 2.1.1.6). Direct inputs to this model report were derived from the following upstream analysis and model reports: ''Analysis of Hydrologic Properties Data'' (BSC 2004 [DIRS 170038]); ''Development of Numerical Grids for UZ Flow and Transport Modeling'' (BSC 2004 [DIRS 169855]); ''Simulation of Net Infiltration for Present-Day and Potential Future Climates'' (BSC 2004 [DIRS 170007]); ''Geologic Framework Model'' (GFM2000) (BSC 2004 [DIRS 170029]). Additionally, this model report incorporates errata of the previous version and closure of the Key Technical Issue agreement TSPAI 3.26 (Section 6.2.2 and Appendix B), and it is revised for improved transparency

  11. Application of an automatic approach to calibrate the NEMURO nutrient-phytoplankton-zooplankton food web model in the Oyashio region

    Science.gov (United States)

    Ito, Shin-ichi; Yoshie, Naoki; Okunishi, Takeshi; Ono, Tsuneo; Okazaki, Yuji; Kuwata, Akira; Hashioka, Taketo; Rose, Kenneth A.; Megrey, Bernard A.; Kishi, Michio J.; Nakamachi, Miwa; Shimizu, Yugo; Kakehi, Shigeho; Saito, Hiroaki; Takahashi, Kazutaka; Tadokoro, Kazuaki; Kusaka, Akira; Kasai, Hiromi

    2010-10-01

    The Oyashio region in the western North Pacific supports high biological productivity and has been well monitored. We applied the NEMURO (North Pacific Ecosystem Model for Understanding Regional Oceanography) model to simulate the nutrient, phytoplankton, and zooplankton dynamics. Determination of parameter values is very important, yet ad hoc calibration methods are often used. We used the automatic calibration software PEST (model-independent Parameter ESTimation), which has been used previously with NEMURO but in a system without ontogenetic vertical migration of the large zooplankton functional group. Determining the performance of PEST with vertical migration, and obtaining a set of realistic parameter values for the Oyashio, will likely be useful in future applications of NEMURO. Five identical twin simulation experiments were performed with the one-box version of NEMURO. The experiments differed in whether monthly snapshot or averaged state variables were used, in whether state variables were model functional groups or were aggregated (total phytoplankton, small plus large zooplankton), and in whether vertical migration of large zooplankton was included or not. We then applied NEMURO to monthly climatological field data covering 1 year for the Oyashio, and compared model fits and parameter values between PEST-determined estimates and values used in previous applications to the Oyashio region that relied on ad hoc calibration. We substituted the PEST and ad hoc calibrated parameter values into a 3-D version of NEMURO for the western North Pacific, and compared the two sets of spatial maps of chlorophyll-a with satellite-derived data. The identical twin experiments demonstrated that PEST could recover the known model parameter values when vertical migration was included, and that over-fitting can occur as a result of slight differences in the values of the state variables. PEST recovered known parameter values when using monthly snapshots of aggregated
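
    The essence of an identical twin experiment is to generate pseudo-observations from a model with known parameter values and then check whether the calibration recovers them. A minimal sketch with a toy logistic phytoplankton-growth model as a stand-in (far simpler than NEMURO, and with an invented "true" rate):

```python
# Identical twin experiment: recover a known growth rate from
# pseudo-observations generated by the same model.
import numpy as np
from scipy.optimize import minimize_scalar

def simulate(mu, p0=0.1, k=1.0, dt=0.1, steps=100):
    p = np.empty(steps)
    x = p0
    for t in range(steps):
        x += dt * mu * x * (1.0 - x / k)   # logistic growth, Euler step
        p[t] = x
    return p

truth = 0.7
pseudo_obs = simulate(truth)               # "twin" data, no added noise

cost = lambda mu: np.sum((simulate(mu) - pseudo_obs) ** 2)
fit = minimize_scalar(cost, bounds=(0.1, 2.0), method="bounded")
print(f"recovered mu = {fit.x:.4f} (truth {truth})")
```

    PEST performs the analogous minimization over many parameters at once, which is where the over-fitting risk noted in the abstract arises.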

  12. Observation models in radiocarbon calibration

    International Nuclear Information System (INIS)

    Jones, M.D.; Nicholls, G.K.

    2001-01-01

    The observation model underlying any calibration process dictates the precise mathematical details of the calibration calculations. Accordingly it is important that an appropriate observation model is used. Here this is illustrated with reference to the use of reservoir offsets where the standard calibration approach is based on a different model to that which the practitioners clearly believe is being applied. This sort of error can give rise to significantly erroneous calibration results. (author). 12 refs., 1 fig

  13. CALIBRATED HYDRODYNAMIC MODEL

    Directory of Open Access Journals (Sweden)

    Sezar Gülbaz

    2015-01-01

    Full Text Available The land development and increase in urbanization in a watershed affect water quantity and water quality. On one hand, urbanization provokes the adjustment of the geomorphic structure of streams and ultimately raises the peak flow rate, which causes floods; on the other hand, it diminishes water quality, which results in an increase in Total Suspended Solids (TSS). Consequently, sediment accumulation downstream of urban areas is observed, which is not preferred for a longer life of dams. In order to overcome the sediment accumulation problem in dams, the amount of TSS in streams and in watersheds should be kept under control. Low Impact Development (LID) is a Best Management Practice (BMP) which may be used for this purpose. It is a land planning and engineering design method applied in managing storm water runoff in order to reduce flooding as well as simultaneously improve water quality. LID includes techniques to predict suspended solid loads in surface runoff generated over impervious urban surfaces. In this study, the impact of LID-BMPs on surface runoff and TSS is investigated by employing a calibrated hydrodynamic model for Sazlidere Watershed, which is located in Istanbul, Turkey. For this purpose, a calibrated hydrodynamic model was developed using the Environmental Protection Agency Storm Water Management Model (EPA SWMM). For model calibration and validation, we set up a rain gauge and a flow meter in the field and obtained rainfall and flow rate data. We then selected several LID types, such as retention basins, vegetative swales and permeable pavement, and obtained their influence on peak flow rate and pollutant buildup and washoff for TSS. Consequently, we observe the possible effects of LID on surface runoff and TSS in Sazlidere Watershed.

  14. SURF Model Calibration Strategy

    Energy Technology Data Exchange (ETDEWEB)

    Menikoff, Ralph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-10

    SURF and SURFplus are high-explosive reactive burn models for shock initiation and propagation of detonation waves. They are engineering models motivated by the ignition-and-growth concept of hot spots and, for SURFplus, a second slow reaction for the energy release from carbon clustering. A key feature of the SURF model is a partial decoupling between model parameters and detonation properties. This enables reduced sets of independent parameters to be calibrated sequentially for the initiation and propagation regimes. Here we focus on a methodology for fitting the initiation parameters to Pop plot data, based on 1-D simulations to compute a numerical Pop plot. In addition, the strategy for fitting the remaining parameters for the propagation regime and failure diameter is discussed.
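
    A Pop plot is close to linear in log(run distance to detonation) versus log(input shock pressure), so fitting one reduces to a line fit in log-log space. A sketch with invented data points (not SURF calibration values):

```python
# Fit the Pop plot power law: log10(run distance) vs log10(pressure).
import numpy as np

pressure_gpa = np.array([5.0, 8.0, 12.0, 16.0])   # input shock pressure
run_dist_mm = np.array([12.0, 4.5, 1.8, 0.9])     # run distance to detonation

slope, intercept = np.polyfit(np.log10(pressure_gpa),
                              np.log10(run_dist_mm), 1)
print(f"log10(x*) = {slope:.2f} * log10(P) + {intercept:.2f}")
```

    In the SURF methodology, the "data" for this fit would come from 1-D simulations rather than experiments, producing the numerical Pop plot mentioned above.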

  15. Development of a generic auto-calibration package for regional ecological modeling and application in the Central Plains of the United States

    Science.gov (United States)

    Wu, Yiping; Liu, Shuguang; Li, Zhengpeng; Dahal, Devendra; Young, Claudia J.; Schmidt, Gail L.; Liu, Jinxun; Davis, Brian; Sohl, Terry L.; Werner, Jeremy M.; Oeding, Jennifer

    2014-01-01

    Process-oriented ecological models are frequently used for predicting potential impacts of global changes such as climate and land-cover changes, which can be useful for policy making. It is critical but challenging to automatically derive optimal parameter values at different scales, especially the regional scale, and to validate model performance. In this study, we developed an automatic calibration (auto-calibration) function for a well-established biogeochemical model—the General Ensemble Biogeochemical Modeling System (GEMS)-Erosion Deposition Carbon Model (EDCM)—using the Shuffled Complex Evolution algorithm and a model-inversion R package, the Flexible Modeling Environment (FME). The new functionality supports multi-parameter and multi-objective auto-calibration of EDCM at both the pixel and regional levels. We also developed a post-processing procedure for GEMS that provides options to save the pixel-based or aggregated county-land cover specific parameter values for subsequent simulations. In our case study, we successfully applied the updated model (EDCM-Auto) to a single crop pixel with a corn-wheat rotation and to a large ecological region (Level II), the Central USA Plains. The evaluation results indicate that EDCM-Auto is applicable at multiple scales and capable of handling land cover changes (e.g., crop rotations). The model also performs well in capturing the spatial pattern of grain yield production for crops and net primary production (NPP) for other ecosystems across the region, a good example of implementing calibration and validation of ecological models with readily available survey data (grain yield) and remote sensing data (NPP) at regional and national levels. The developed platform for auto-calibration can readily be expanded to incorporate other model-inversion algorithms and R packages, and can also be applied to other ecological models.

  16. Impacts of Spatial Climatic Representation on Hydrological Model Calibration and Prediction Uncertainty: A Mountainous Catchment of Three Gorges Reservoir Region, China

    Directory of Open Access Journals (Sweden)

    Yan Li

    2016-02-01

    Full Text Available Sparse climatic observations represent a major challenge for hydrological modeling of mountain catchments, with implications for decision-making in water resources management. Employing elevation bands in the Soil and Water Assessment Tool-Sequential Uncertainty Fitting (SWAT2012-SUFI2) model enabled representation of precipitation and temperature variation with altitude in the Daning river catchment (Three Gorges Reservoir Region, China), where meteorological inputs are limited in spatial extent and are derived from observations at relatively low-lying locations. Inclusion of elevation bands produced better model performance for 1987–1993, with the Nash–Sutcliffe efficiency (NSE) increasing by at least 0.11 prior to calibration. During calibration, prediction uncertainty was greatly reduced. With similar R-factors in the earlier calibration iterations, a further 11% of observations were included within the 95% prediction uncertainty (95PPU) compared to the model without elevation bands. For behavioral simulations defined in SWAT calibration using an NSE threshold of 0.3, an additional 3.9% of observations were within the 95PPU, while uncertainty was reduced by 7.6% in the model with elevation bands. The calibrated model with elevation bands reproduced observed river discharges, with the performance in the calibration period changing to “very good” from “poor” without elevation bands. The output uncertainty of the calibrated model with elevation bands was satisfactory, with 85% of flow observations included within the 95PPU. These results clearly demonstrate the requirement to account for orographic effects on precipitation and temperature in hydrological models of mountainous catchments.
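
    The elevation-band mechanism amounts to extrapolating gauge precipitation and temperature to each band with lapse rates. A minimal sketch of that adjustment; the lapse-rate values and elevations below are illustrative, not the calibrated Daning ones:

```python
# Extrapolate gauge climate to an elevation band with lapse rates.
def adjust_to_band(p_gauge_mm, t_gauge_c, band_elev_m, gauge_elev_m,
                   plaps_mm_per_km=10.0, tlaps_c_per_km=-6.0):
    dz_km = (band_elev_m - gauge_elev_m) / 1000.0
    p_band = max(0.0, p_gauge_mm + plaps_mm_per_km * dz_km)  # precip lapse
    t_band = t_gauge_c + tlaps_c_per_km * dz_km              # temp lapse
    return p_band, t_band

# gauge at 300 m, band centred at 1800 m
p, t = adjust_to_band(20.0, 15.0, band_elev_m=1800, gauge_elev_m=300)
print(p, t)
```

    In SWAT the two lapse rates (often called PLAPS and TLAPS) are themselves calibration parameters, which is what SUFI-2 adjusts.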

  17. Calibration of the maximum carboxylation velocity (Vcmax) using data mining techniques and ecophysiological data from the Brazilian semiarid region, for use in Dynamic Global Vegetation Models

    Directory of Open Access Journals (Sweden)

    L. F. C. Rezende

    Full Text Available Abstract The semiarid region of northeastern Brazil, the Caatinga, is extremely important due to its biodiversity and endemism. Measurements of plant physiology are crucial to the calibration of Dynamic Global Vegetation Models (DGVMs) that are currently used to simulate the responses of vegetation to global changes. In field work carried out in an area of preserved Caatinga forest located in Petrolina, Pernambuco, measurements of carbon assimilation (in response to light and CO2) were performed on 11 individuals of Poincianella microphylla, a native species that is abundant in this region. These data were used to calibrate the maximum carboxylation velocity (Vcmax) used in the INLAND model. The calibration techniques used were Multiple Linear Regression (MLR) and data mining techniques such as the Classification And Regression Tree (CART) and K-MEANS. The results were compared to the uncalibrated model. It was found that simulated Gross Primary Productivity (GPP) reached 72% of observed GPP when using the calibrated Vcmax values, whereas the uncalibrated approach accounted for 42% of observed GPP. Thus, this work shows the benefits of calibrating DGVMs using field ecophysiological measurements, especially in areas where field data are scarce or non-existent, such as in the Caatinga.

  18. Calibration and Evaluation of Different Estimation Models of Daily Solar Radiation in Seasonally and Annual Time Steps in Shiraz Region

    Directory of Open Access Journals (Sweden)

    Hamid Reza Fooladmand

    2017-06-01

    Measured data from 2006 to 2008 were used for calibrating fourteen models for estimating solar radiation in seasonal and annual time steps, and the measured data of 2009 and 2010 were used for evaluating the results. The equations used in this study fall into three groups: (1) equations based only on sunshine hours; (2) equations based only on air temperature; and (3) equations based on both sunshine hours and air temperature. Statistical comparison was then carried out to select the best equation for estimating solar radiation in seasonal and annual time steps. For this purpose, in the validation stage a combination of statistical equations and linear correlation was used, and the mean square deviation (MSD) was calculated to evaluate the different models in the mentioned time steps. Results and Discussion: The mean MSD values of the fourteen models for estimating solar radiation were 24.16, 20.42, 4.08 and 16.19 for spring through winter, respectively, and 15.40 for the annual time step. The results therefore showed that the equations were highly accurate for autumn but had low accuracy for the other seasons, so the annual-time-step equations were more appropriate than the seasonal ones. Also, the mean MSD values of the equations based only on sunshine hours, only on air temperature, and on the combination of sunshine hours and air temperature were 14.82, 17.40 and 14.88, respectively. The results thus indicated that the models based only on air temperature were the worst for estimating solar radiation in the Shiraz region, and that using sunshine hours for estimating solar radiation is necessary.
Conclusions: In this study for estimating solar radiation in seasonal and annual time steps in the Shiraz region
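
    Representative members of the first two model families above are the Angstrom-Prescott sunshine-hours equation and a Hargreaves-type temperature-based equation. The sketch below uses the common textbook default coefficients, not the Shiraz-calibrated values:

```python
# Two daily solar radiation estimators, textbook-default coefficients.
import math

def angstrom_prescott(ra, n_sunshine, n_daylength, a=0.25, b=0.50):
    """Rs = (a + b * n/N) * Ra, radiation terms in MJ m-2 day-1."""
    return (a + b * n_sunshine / n_daylength) * ra

def hargreaves_radiation(ra, tmax_c, tmin_c, krs=0.16):
    """Rs = krs * sqrt(Tmax - Tmin) * Ra (interior-location krs)."""
    return krs * math.sqrt(tmax_c - tmin_c) * ra

ra = 30.0  # extraterrestrial radiation, MJ m-2 day-1 (illustrative)
rs_sun = angstrom_prescott(ra, n_sunshine=9.0, n_daylength=12.0)
rs_temp = hargreaves_radiation(ra, tmax_c=32.0, tmin_c=18.0)
print(rs_sun, rs_temp)
```

    Calibration, as in the study, would refit `a`, `b` and `krs` to local measurements per season or per year before comparing MSD values.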

  19. Model Calibration in Option Pricing

    Directory of Open Access Journals (Sweden)

    Andre Loerx

    2012-04-01

    Full Text Available We consider calibration problems for models of pricing derivatives which occur in mathematical finance. We discuss various approaches such as using stochastic differential equations or partial differential equations for the modeling process. We discuss the development in the past literature and give an outlook into modern approaches of modelling. Furthermore, we address important numerical issues in the valuation of options and likewise the calibration of these models. This leads to interesting problems in optimization, where, e.g., the use of adjoint equations or the choice of the parametrization for the model parameters play an important role.
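
    The simplest instance of the calibration problem described above is recovering a Black-Scholes volatility by least squares against quoted call prices. The quotes below are synthetic (generated at sigma = 0.2), not market data:

```python
# Least-squares calibration of Black-Scholes volatility to call quotes.
import math
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def bs_call(s, k, t, r, sigma):
    d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * norm.cdf(d1) - k * math.exp(-r * t) * norm.cdf(d2)

s0, r, t = 100.0, 0.01, 0.5
strikes = [90.0, 100.0, 110.0]
quotes = [bs_call(s0, k, t, r, 0.2) for k in strikes]   # synthetic "market"

loss = lambda sig: sum((bs_call(s0, k, t, r, sig) - q) ** 2
                       for k, q in zip(strikes, quotes))
fit = minimize_scalar(loss, bounds=(0.05, 1.0), method="bounded")
print(f"calibrated sigma = {fit.x:.4f}")
```

    Richer models (local volatility, stochastic volatility) replace `bs_call` with a PDE or Monte Carlo pricer, which is where the adjoint and parametrization issues mentioned above become important.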

  20. Model Calibration in Watershed Hydrology

    Science.gov (United States)

    Yilmaz, Koray K.; Vrugt, Jasper A.; Gupta, Hoshin V.; Sorooshian, Soroosh

    2009-01-01

    Hydrologic models use relatively simple mathematical equations to conceptualize and aggregate the complex, spatially distributed, and highly interrelated water, energy, and vegetation processes in a watershed. A consequence of process aggregation is that the model parameters often do not represent directly measurable entities and must, therefore, be estimated using measurements of the system inputs and outputs. During this process, known as model calibration, the parameters are adjusted so that the behavior of the model approximates, as closely and consistently as possible, the observed response of the hydrologic system over some historical period of time. This Chapter reviews the current state-of-the-art of model calibration in watershed hydrology with special emphasis on our own contributions in the last few decades. We discuss the historical background that has led to current perspectives, and review different approaches for manual and automatic single- and multi-objective parameter estimation. In particular, we highlight the recent developments in the calibration of distributed hydrologic models using parameter dimensionality reduction sampling, parameter regularization and parallel computing.

  1. LLNL's Regional Model Calibration and Body-Wave Discrimination Research in the Former Soviet Union using Peaceful Nuclear Explosions (PNEs)

    International Nuclear Information System (INIS)

    Bhattacharyya, J.; Rodgers, A.; Swenson, J.; Schultz, C.; Walter, W.; Mooney, W.; Clitheroe, G.

    2000-01-01

    Long-range seismic profiles from Peaceful Nuclear Explosions (PNEs) in the Former Soviet Union (FSU) provide a unique data set for investigating several important issues in regional Comprehensive Nuclear-Test-Ban Treaty (CTBT) monitoring. The recording station spacing (∼15 km) allows for extremely dense sampling of the propagation from the source out to ∼3300 km, which allows us to analyze the waveforms at local, near- and far-regional and teleseismic distances. These data are used to: (1) study the evolution of regional phases and phase amplitude ratios along the profile; (2) infer one-dimensional velocity structure along the profile; and (3) evaluate the spatial correlation of regional and teleseismic travel times and regional phase amplitude ratios. We analyzed waveform data from four PNEs (mb = 5.1-5.6) recorded along profile KRATON, an east-west trending profile located in northern Siberia. Short-period regional discriminants, such as P/S amplitude ratios, will be essential for seismic monitoring of the CTBT at small magnitudes

  2. Calibrating and Validating a Simulation Model to Identify Drivers of Urban Land Cover Change in the Baltimore, MD Metropolitan Region

    Directory of Open Access Journals (Sweden)

    Claire Jantz

    2014-09-01

    Full Text Available We build upon much of the accumulated knowledge of the widely used SLEUTH urban land change model and offer advances. First, we use SLEUTH’s exclusion/attraction layer to identify and test different urban land cover change drivers; second, we leverage SLEUTH’s self-modification capability to incorporate a demographic model; and third, we develop a validation procedure to quantify the influence of land cover change drivers and assess uncertainty. We found that, contrary to our a priori expectations, new development is not attracted to areas serviced by existing or planned water and sewer infrastructure. However, information about where population and employment growth is likely to occur did improve model performance. These findings point to the dominant role of centrifugal forces in post-industrial cities like Baltimore, MD. We successfully developed a demographic model that allowed us to constrain the SLEUTH model forecasts and address uncertainty related to the dynamic relationship between changes in population and employment and urban land use. Finally, we emphasize the importance of model validation. In this work the validation procedure played a key role in rigorously assessing the impacts of different exclusion/attraction layers and in assessing uncertainty related to population and employment forecasts.

  3. Calibration and simulation of Heston model

    Directory of Open Access Journals (Sweden)

    Mrázek Milan

    2017-05-01

    Full Text Available We calibrate the Heston stochastic volatility model to real market data using several optimization techniques. We compare global and local optimizers for different weights, showing remarkable differences even for data (DAX options) from two consecutive days. We provide a novel calibration procedure that incorporates the usage of an approximation formula and significantly outperforms other existing calibration methods.

  4. Calibration of PMIS pavement performance prediction models.

    Science.gov (United States)

    2012-02-01

Improve the accuracy of TxDOT's existing pavement performance prediction models by calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). : Ensure logical performance superiority patte...

  5. Error-in-variables models in calibration

    Science.gov (United States)

    Lira, I.; Grientschnig, D.

    2017-12-01

    In many calibration operations, the stimuli applied to the measuring system or instrument under test are derived from measurement standards whose values may be considered to be perfectly known. In that case, it is assumed that calibration uncertainty arises solely from inexact measurement of the responses, from imperfect control of the calibration process and from the possible inaccuracy of the calibration model. However, the premise that the stimuli are completely known is never strictly fulfilled and in some instances it may be grossly inadequate. Then, error-in-variables (EIV) regression models have to be employed. In metrology, these models have been approached mostly from the frequentist perspective. In contrast, not much guidance is available on their Bayesian analysis. In this paper, we first present a brief summary of the conventional statistical techniques that have been developed to deal with EIV models in calibration. We then proceed to discuss the alternative Bayesian framework under some simplifying assumptions. Through a detailed example about the calibration of an instrument for measuring flow rates, we provide advice on how the user of the calibration function should employ the latter framework for inferring the stimulus acting on the calibrated device when, in use, a certain response is measured.
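
For a linear calibration function with inexact stimuli, one frequentist EIV technique is orthogonal distance regression, which weights residuals in both variables; the Bayesian treatment discussed in the paper would instead place priors on the true stimuli and the coefficients. A minimal sketch with synthetic data:

```python
import numpy as np
from scipy import odr

# Synthetic calibration: y = b0 + b1*x, with measurement error on BOTH
# the stimuli x (sd 0.05) and the responses y (sd 0.1).
rng = np.random.default_rng(0)
x_true = np.linspace(0.0, 10.0, 20)
x_obs = x_true + rng.normal(0.0, 0.05, x_true.size)   # inexact stimuli
y_obs = 1.0 + 2.0 * x_true + rng.normal(0.0, 0.1, x_true.size)

model = odr.Model(lambda beta, x: beta[0] + beta[1] * x)
data = odr.RealData(x_obs, y_obs, sx=0.05, sy=0.1)     # uncertainties in x and y
fit = odr.ODR(data, model, beta0=[0.0, 1.0]).run()
b0, b1 = fit.beta
```

Ignoring the stimulus errors (ordinary least squares on `x_obs`) would bias the slope toward zero; the EIV fit accounts for them explicitly.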

  6. A single model procedure for estimating tank calibration equations

    International Nuclear Information System (INIS)

    Liebetrau, A.M.

    1997-10-01

A fundamental component of any accountability system for nuclear materials is a tank calibration equation that relates the height of liquid in a tank to its volume. Tank volume calibration equations are typically determined from pairs of height and volume measurements taken in a series of calibration runs. After raw calibration data are standardized to a fixed set of reference conditions, the calibration equation is typically fit by dividing the data into several segments--corresponding to regions in the tank--and independently fitting the data for each segment. The estimates obtained for individual segments must then be combined to obtain an estimate of the entire calibration function. This process is tedious and time-consuming. Moreover, uncertainty estimates may be misleading because it is difficult to properly model run-to-run variability and between-segment correlation. In this paper, the authors describe a model whose parameters can be estimated simultaneously for all segments of the calibration data, thereby eliminating the need for segment-by-segment estimation. The essence of the proposed model is to define a suitable polynomial to fit to each segment and then extend its definition to the domain of the entire calibration function, so that it (the entire calibration function) can be expressed as the sum of these extended polynomials. The model provides defensible estimates of between-run variability and yields a proper treatment of between-segment correlations. A portable software package, called TANCS, has been developed to facilitate the acquisition, standardization, and analysis of tank calibration data. The TANCS package was used for the calculations in an example presented to illustrate the unified modeling approach described in this paper. With TANCS, a trial calibration function can be estimated and evaluated in a matter of minutes.
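
The "sum of extended segment polynomials" idea can be sketched (this is not the TANCS implementation) with a truncated linear basis: each segment contributes a term that is zero below its boundary and extends over the rest of the height domain, and all coefficients are estimated in a single least-squares fit. Knots and data below are hypothetical.

```python
import numpy as np

knots = [40.0, 80.0]                      # hypothetical segment boundaries (cm)
h = np.linspace(0.0, 120.0, 61)           # standardized height measurements
# Synthetic volumes from a piecewise-linear calibration function.
v = 2.0 * h + 0.5 * np.maximum(h - 40.0, 0.0) + 1.2 * np.maximum(h - 80.0, 0.0)

# One design matrix for the whole calibration function:
# intercept, height, and one extended (truncated) term per segment boundary.
X = np.column_stack([np.ones_like(h), h] +
                    [np.maximum(h - k, 0.0) for k in knots])
coef, *_ = np.linalg.lstsq(X, v, rcond=None)   # all segments fit simultaneously
```

Because every segment's coefficients are estimated in one regression, between-segment correlations fall out of the single covariance matrix rather than needing to be stitched together afterwards.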

  7. Fermentation process tracking through enhanced spectral calibration modeling.

    Science.gov (United States)

    Triadaphillou, Sophia; Martin, Elaine; Montague, Gary; Norden, Alison; Jeffkins, Paul; Stimpson, Sarah

    2007-06-15

    The FDA process analytical technology (PAT) initiative will materialize in a significant increase in the number of installations of spectroscopic instrumentation. However, to attain the greatest benefit from the data generated, there is a need for calibration procedures that extract the maximum information content. For example, in fermentation processes, the interpretation of the resulting spectra is challenging as a consequence of the large number of wavelengths recorded, the underlying correlation structure that is evident between the wavelengths and the impact of the measurement environment. Approaches to the development of calibration models have been based on the application of partial least squares (PLS) either to the full spectral signature or to a subset of wavelengths. This paper presents a new approach to calibration modeling that combines a wavelength selection procedure, spectral window selection (SWS), where windows of wavelengths are automatically selected which are subsequently used as the basis of the calibration model. However, due to the non-uniqueness of the windows selected when the algorithm is executed repeatedly, multiple models are constructed and these are then combined using stacking thereby increasing the robustness of the final calibration model. The methodology is applied to data generated during the monitoring of broth concentrations in an industrial fermentation process from on-line near-infrared (NIR) and mid-infrared (MIR) spectrometers. It is shown that the proposed calibration modeling procedure outperforms traditional calibration procedures, as well as enabling the identification of the critical regions of the spectra with regard to the fermentation process.
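
A minimal sketch of window selection plus stacking on synthetic "spectra" follows. Only wavelengths 20-30 carry information about the analyte; for brevity, ordinary least squares per window stands in for PLS, and the windows are listed explicitly rather than selected automatically as in the paper's SWS procedure.

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 200, 100
spectra = rng.normal(size=(n, p))                       # synthetic spectra
conc = spectra[:, 20:30].sum(axis=1) + rng.normal(0.0, 0.1, n)  # analyte

def window_model(train_X, train_y, lo, hi):
    """Fit a calibration model on one wavelength window (OLS as a PLS stand-in)."""
    Xw = np.column_stack([np.ones(train_X.shape[0]), train_X[:, lo:hi]])
    beta, *_ = np.linalg.lstsq(Xw, train_y, rcond=None)
    return lambda X: np.column_stack([np.ones(X.shape[0]), X[:, lo:hi]]) @ beta

# Hypothetical windows, as if returned by repeated runs of the selection step.
windows = [(15, 35), (18, 32), (20, 30)]
models = [window_model(spectra[:150], conc[:150], lo, hi) for lo, hi in windows]

# Stacking: combine the window models' predictions (simple average here).
stacked = np.mean([m(spectra[150:]) for m in models], axis=0)
```

Averaging the non-unique window models is what gives the final calibration its robustness to any single window choice.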

  8. Financial model calibration using consistency hints.

    Science.gov (United States)

    Abu-Mostafa, Y S

    2001-01-01

    We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to Japanese Yen swaps market and US dollar yield market.

  9. Iowa calibration of MEPDG performance prediction models.

    Science.gov (United States)

    2013-06-01

    This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement : performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 : representative p...

  10. External calibration of GOCE data using regional terrestrial gravity data

    Directory of Open Access Journals (Sweden)

    Wu Yunlong

    2012-08-01

Full Text Available This paper reports on a study of the methodology for external calibration of GOCE data using regional terrestrial-gravity data. Three regions around the world are selected for the numerical experiments. The results indicate that this calibration method is feasible. The calibration performs best in Australia, where the terrain is smooth and the gravity data points are dense, with the scale factor accurate at the 10⁻² level. The accuracy is one order of magnitude lower in both Canada, where the terrain is smooth but the data points are sparse, and Norway, where the terrain is rather rough and the data points are sparse.
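
The scale-factor estimate at the heart of such a calibration reduces to a one-parameter least-squares problem: find the factor k that best maps the terrestrially derived reference signal onto the satellite observations. A sketch with synthetic data (the real procedure compares gravity gradients, not these toy numbers):

```python
import numpy as np

rng = np.random.default_rng(7)
ref = rng.normal(size=500)                      # terrestrial-derived reference signal
obs = 1.03 * ref + rng.normal(0.0, 0.05, 500)   # satellite observations, k_true = 1.03

# Least-squares scale factor: k = <ref, obs> / <ref, ref>.
k = ref @ obs / (ref @ ref)
```

Sparser or noisier reference data inflate the variance of k, which is exactly the order-of-magnitude accuracy difference the abstract reports between the three regions.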

  11. Logarithmic transformed statistical models in calibration

    International Nuclear Information System (INIS)

    Zeis, C.D.

    1975-01-01

A general type of statistical model used for calibration of instruments having the property that the standard deviations of the observed values increase as a function of the mean value is described. The application to the Helix Counter at the Rocky Flats Plant is treated primarily from a theoretical point of view. The Helix Counter measures the amount of plutonium in certain types of chemicals. The method described can also be used for other calibrations. (U.S.)
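
The motivation for the logarithmic transform can be sketched directly: when the standard deviation of the response grows in proportion to its mean (constant relative error), the log-transformed response has approximately constant variance, so ordinary least squares on the log scale is appropriate. Synthetic data below; the Helix Counter model itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(1.0, 10.0, 200)
mean = 5.0 * x                                      # true calibration line
y = mean * (1.0 + rng.normal(0.0, 0.05, x.size))    # sd proportional to mean

# Fit on the log scale: log y ≈ log 5 + 1 * log x, with stabilized variance.
slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
```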

Calibration of amino acid racemization (AAR) kinetics in United States mid-Atlantic Coastal Plain Quaternary mollusks using 87Sr/86Sr analyses: Evaluation of kinetic models and estimation of regional Late Pleistocene temperature history

    Science.gov (United States)

    Wehmiller, J.F.; Harris, W.B.; Boutin, B.S.; Farrell, K.M.

    2012-01-01

The use of amino acid racemization (AAR) for estimating ages of Quaternary fossils usually requires a combination of kinetic and effective temperature modeling or independent age calibration of analyzed samples. Because of limited availability of calibration samples, age estimates are often based on model extrapolations from single calibration points over wide ranges of D/L values. Here we present paired AAR and 87Sr/86Sr results for Pleistocene mollusks from the North Carolina Coastal Plain, USA. 87Sr/86Sr age estimates, derived from the lookup table of McArthur et al. [McArthur, J.M., Howarth, R.J., Bailey, T.R., 2001. Strontium isotopic stratigraphy: LOWESS version 3: best fit to the marine Sr-isotopic curve for 0-509 Ma and accompanying Look-up table for deriving numerical age. Journal of Geology 109, 155-169], provide independent age calibration over the full range of amino acid D/L values, thereby allowing comparisons of alternative kinetic models for seven amino acids. The often-used parabolic kinetic model is found to be insufficient to explain the pattern of racemization, although the kinetic pathways for valine racemization and isoleucine epimerization can be closely approximated with this function. Logarithmic and power law regressions more accurately represent the racemization pathways for all amino acids. The reliability of a non-linear model for leucine racemization, developed and refined over the past 20 years, is confirmed by the 87Sr/86Sr age results. This age model indicates that the subsurface record (up to 80 m thick) of the North Carolina Coastal Plain spans the entire Quaternary, back to ~2.5 Ma. The calibrated kinetics derived from this age model yield an estimate of the effective temperature for the study region of 11 ± 2 °C, from which we estimate full glacial (Last Glacial Maximum - LGM) temperatures for the region on the order of 7-10 °C cooler than present. These temperatures compare favorably with independent paleoclimate information.
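
Comparing kinetic models against independently dated samples can be sketched as competing curve fits. The ages and D/L values below are synthetic stand-ins (not the paper's Sr-isotope calibration data), generated from a power-law pathway so that the parabolic model's lack of fit is visible.

```python
import numpy as np
from scipy.optimize import curve_fit

ages = np.array([0.1, 0.4, 0.8, 1.2, 1.8, 2.5])   # independent ages (Ma), synthetic
dl = 0.55 * ages ** 0.4                            # synthetic power-law D/L pathway

parabolic = lambda t, k: k * np.sqrt(t)            # D/L = k * t^(1/2)
power = lambda t, a, b: a * t ** b                 # D/L = a * t^b

(k,), _ = curve_fit(parabolic, ages, dl)
(a, b), _ = curve_fit(power, ages, dl, p0=[0.5, 0.5])
```

Comparing residual sums of squares between the two fits is the model-selection step; with calibration points spanning the full D/L range, the comparison is no longer an extrapolation from a single point.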

  13. Moment Magnitude Calibration for the Eastern Mediterranean Region from Broadband Regional Coda Envelopes

    Energy Technology Data Exchange (ETDEWEB)

Mayeda, K; Eken, T; Hofstetter, A; Turkelli, N; O'Boyle, J; Orgulu, G; Gok, R

    2003-07-17

The following is an overview of results from ROA01-32 that focuses on an empirical method of calibrating stable seismic source moment-rate spectra derived from regional coda envelopes using broadband stations. The main goal was to develop a regional magnitude methodology that had the following properties: (1) it is tied to an absolute scale and is thus unbiased and transportable; (2) it can be tied seamlessly to the well-established teleseismic and regional catalogs; (3) it is applicable to small events using a sparse network of regional stations; (4) it is flexible enough to utilize Sn-coda, Lg-coda, or P-coda, whichever phase has the best signal-to-noise ratio. The results of this calibration yield source spectra and derived magnitudes that were more stable than any other direct-phase measure to date. Our empirical procedure accounted for all propagation, site, and S-to-coda transfer function effects. The resultant coda-derived moment-rate spectra were used to provide traditional band-limited magnitudes (e.g., ML, mb, etc.) as well as an unbiased, unsaturated magnitude (moment magnitude, Mw) that is tied to a physical measure of earthquake size (i.e., seismic moment). We validated our results by comparing our coda-derived moment estimates with those obtained from long-period waveform modeling. We first tested and validated the method using events distributed along the Dead Sea Rift (e.g., Mayeda et al., 2003). Next, we tested the transportability of the method to earthquakes distributed across the entire country of Turkey and validated our results using seismic moments of over 50 events that had been previously waveform modeled using the method of Dreger and Helmberger (1993). In both regions we demonstrated that the interstation magnitude scatter was significantly reduced when using the coda-based magnitudes (i.e., Mw(coda) and mb(coda)). 
Once calibrated, the coda-derived source spectra provided stable, unbiased magnitude
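
The final step from a coda-derived seismic moment to an unsaturated moment magnitude uses the standard (IASPEI-convention) relation, with M0 in newton-meters:

```python
import math

def moment_magnitude(m0_newton_meters):
    """Mw = (2/3) * (log10(M0) - 9.1), the standard moment-magnitude relation."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)
```

Because Mw depends logarithmically on a physical measure of source size, it does not saturate the way band-limited magnitudes such as mb and ML do for large events.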

  14. Calibration of CORSIM models under saturated traffic flow conditions.

    Science.gov (United States)

    2013-09-01

    This study proposes a methodology to calibrate microscopic traffic flow simulation models. : The proposed methodology has the capability to calibrate simultaneously all the calibration : parameters as well as demand patterns for any network topology....

  15. Stochastic calibration and learning in nonstationary hydroeconomic models

    Science.gov (United States)

    Maneta, M. P.; Howitt, R.

    2014-05-01

Concern about water scarcity and adverse climate events over agricultural regions has motivated a number of efforts to develop operational integrated hydroeconomic models to guide adaptation and optimal use of water. Once calibrated, these models are used for water management and analysis assuming they remain valid under future conditions. In this paper, we present and demonstrate a methodology that permits the recursive calibration of economic models of agricultural production from noisy but frequently available data. We use a standard economic calibration approach, namely positive mathematical programming (PMP), integrated in a data assimilation algorithm based on the ensemble Kalman filter (EnKF) equations to identify the economic model parameters. A moving average kernel ensures that new and past information on agricultural activity are blended during the calibration process, avoiding loss of information and overcalibration for the conditions of a single year. A regularization constraint akin to the standard Tikhonov regularization is included in the filter to ensure its stability even in the presence of parameters with low sensitivity to observations. The results show that the implementation of the PMP methodology within a data assimilation framework based on the EnKF equations is an effective method to calibrate models of agricultural production even with noisy information. The recursive nature of the method incorporates new information as an added value to the known previous observations of agricultural activity without the need to store historical information. The robustness of the method opens the door to the use of new remote sensing algorithms for operational water management.
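
The filtering step at the core of such a scheme can be sketched as a minimal stochastic EnKF parameter update (without the PMP economic model, the moving-average kernel or the Tikhonov term). The one-parameter "model" and numbers below are toys.

```python
import numpy as np

rng = np.random.default_rng(11)
n_ens, obs_sd = 100, 0.2
params = rng.normal(1.0, 1.0, n_ens)     # prior ensemble of one model parameter
obs = 3.0                                 # noisy observation of model output

def model(p):                             # toy observation operator
    return 1.5 * p

pred = model(params)
cov_py = np.cov(params, pred)[0, 1]       # parameter-prediction covariance
gain = cov_py / (np.var(pred, ddof=1) + obs_sd ** 2)   # Kalman gain

# Update each member against a perturbed copy of the observation.
perturbed = obs + rng.normal(0.0, obs_sd, n_ens)
params_post = params + gain * (perturbed - pred)
```

Running this update recursively as each year's noisy observations arrive is what lets the calibration absorb new information without storing the historical record.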

  16. Calibration models for high enthalpy calorimetric probes.

    Science.gov (United States)

    Kannel, A

    1978-07-01

    The accuracy of gas-aspirated liquid-cooled calorimetric probes used for measuring the enthalpy of high-temperature gas streams is studied. The error in the differential temperature measurements caused by internal and external heat transfer interactions is considered and quantified by mathematical models. The analysis suggests calibration methods for the evaluation of dimensionless heat transfer parameters in the models, which then can give a more accurate value for the enthalpy of the sample. Calibration models for four types of calorimeters are applied to results from the literature and from our own experiments: a circular slit calorimeter developed by the author, single-cooling jacket probe, double-cooling jacket probe, and split-flow cooling jacket probe. The results show that the models are useful for describing and correcting the temperature measurements.

  17. SURFplus Model Calibration for PBX 9502

    Energy Technology Data Exchange (ETDEWEB)

    Menikoff, Ralph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-12-06

The SURFplus reactive burn model is calibrated for the TATB-based explosive PBX 9502 at three initial temperatures: hot (75 °C), ambient (23 °C) and cold (-55 °C). The CJ state depends on the initial temperature due to the variation in the initial density and initial specific energy of the PBX reactants. For the reactants, a porosity model for full-density TATB is used. This allows the initial PBX density to be set to its measured value even though the coefficients of thermal expansion for the TATB and the PBX differ. The PBX products EOS is taken as independent of the initial PBX state. The initial temperature also affects the sensitivity to shock initiation. The model rate parameters are calibrated to Pop plot data, the failure diameter, the limiting detonation speed just above the failure diameter, and curvature effect data for small curvature.

  18. Grid based calibration of SWAT hydrological models

    Directory of Open Access Journals (Sweden)

    D. Gorgan

    2012-07-01

Full Text Available The calibration and execution of large hydrological models such as SWAT (Soil and Water Assessment Tool), developed for large areas, high resolution and huge input data, require not only long execution times but also substantial computational resources. The SWAT hydrological model supports studies and predictions of the impact of land management practices on water, sediment, and agricultural chemical yields in complex watersheds. The paper presents the gSWAT application as a practical web solution for environmental specialists to calibrate extensive hydrological models and to run scenarios, by hiding the complex control of processes and heterogeneous resources across the grid-based high-computation infrastructure. The paper highlights the basic functionalities of the gSWAT platform and the features of the graphical user interface. The presentation is concerned with the development of working sessions, interactive control of calibration, direct and basic editing of parameters, process monitoring, and graphical and interactive visualization of the results. The experiments performed on different SWAT models and the results obtained demonstrate the benefits brought by the grid parallel and distributed environment as a solution for the processing platform. All the instances of SWAT models used in the reported experiments were developed through the enviroGRIDS project, targeting the Black Sea catchment area.

  19. High Accuracy Transistor Compact Model Calibrations

    Energy Technology Data Exchange (ETDEWEB)

    Hembree, Charles E. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Mar, Alan [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Robertson, Perry J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic such as current capacity. Correspondingly, when using this approach, high degrees of accuracy of the transistor models are not expected, since the set of models is a surrogate for a statistical description of the devices. The use of these types of models describes expected performances considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in transistor descriptions, they can be used to describe part-to-part variations as well as to give an accurate description of a single circuit instance. Thus, models that meet these stipulations also enable the quantification of margins with respect to a functional threshold, and of uncertainties in these margins. Given this need, new high-accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.

  20. Gradient-based model calibration with proxy-model assistance

    Science.gov (United States)

    Burrows, Wesley; Doherty, John

    2016-02-01

Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivatives calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model, and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
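
The division of labour between the two models can be sketched with a Gauss-Newton loop: the Jacobian is filled from cheap proxy derivatives, but each parameter upgrade is accepted only after re-running the "expensive" original model. Both functions below are toys standing in for the groundwater model and its analytical proxy.

```python
import numpy as np

def original(p):            # "expensive" simulator (toy)
    return np.array([p[0] ** 2 + p[1], p[0] * p[1]])

def proxy_jacobian(p):      # cheap surrogate derivatives (toy)
    return np.array([[2.0 * p[0], 1.0], [p[1], p[0]]])

obs = original(np.array([2.0, 3.0]))      # synthetic calibration targets
p = np.array([1.0, 1.0])
for _ in range(20):                        # Gauss-Newton iterations
    r = obs - original(p)
    J = proxy_jacobian(p)                  # Jacobian from the proxy, not the original
    step, *_ = np.linalg.lstsq(J, r, rcond=None)
    if np.sum((obs - original(p + step)) ** 2) < np.sum(r ** 2):
        p = p + step                       # upgrade accepted by the original model
```

Only the upgrade tests hit the expensive model, and those runs parallelize trivially across candidate steps, which is the source of the computational saving.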

  1. Electroweak Calibration of the Higgs Characterization Model

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will present the preliminary results of histogram fits using the Higgs Combine histogram fitting package. These fits can be used to estimate the effects of electroweak contributions to the p p -> H mu+ mu- Higgs production channel and calibrate Beyond Standard Model (BSM) simulations which ignore these effects. I will emphasize my findings' significance in the context of other research here at CERN and in the broader world of high energy physics.

  2. Ideas for fast accelerator model calibration

    International Nuclear Information System (INIS)

    Corbett, J.

    1997-05-01

With the advent of a simple matrix inversion technique, measurement-based storage ring modeling has made rapid progress in recent years. Using fast computers with large memory, the matrix inversion procedure typically adjusts up to 10³ model variables to fit on the order of 10⁵ measurements. The results have been surprisingly accurate. Physics aside, one of the next frontiers is to simplify the process and to reduce computation time. In this paper, the authors discuss two approaches to speed up the model calibration process: recursive least-squares fitting and a piecewise fitting approach.
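
The appeal of recursive least squares is that each new measurement updates the parameter estimate and its covariance directly, instead of re-inverting the full (up to 10⁵-row) response matrix. A standard RLS recursion on synthetic linear data:

```python
import numpy as np

rng = np.random.default_rng(5)
true_theta = np.array([1.0, -2.0, 0.5])   # "model variables" to recover (toy)
theta = np.zeros(3)
P = 1e6 * np.eye(3)                        # large initial covariance (diffuse prior)

for _ in range(500):
    x = rng.normal(size=3)                 # one row of the response matrix
    y = x @ true_theta + rng.normal(0.0, 0.01)   # one noisy measurement
    k = P @ x / (1.0 + x @ P @ x)          # gain for this measurement
    theta = theta + k * (y - x @ theta)    # parameter update
    P = P - np.outer(k, x @ P)             # covariance update
```

Each update costs O(n²) in the number of parameters, independent of how many measurements have already been folded in.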

  3. Model calibration for building energy efficiency simulation

    International Nuclear Information System (INIS)

    Mustafaraj, Giorgio; Marini, Dashamir; Costa, Andrea; Keane, Marcus

    2014-01-01

Highlights: • Developing a 3D model relating to building architecture, occupancy and HVAC operation. • Two calibration stages developed, final model providing accurate results. • Using an onsite weather station for generating the weather data file in EnergyPlus. • Predicting thermal behaviour of underfloor heating, heat pump and natural ventilation. • Monthly energy saving opportunities related to the heat pump of 20–27% were identified. - Abstract: This research work deals with an Environmental Research Institute (ERI) building where an underfloor heating system and natural ventilation are the main systems used to maintain comfort conditions throughout 80% of the building areas. Firstly, this work involved developing a 3D model relating to building architecture, occupancy and HVAC operation. Secondly, the calibration methodology, which consists of two levels, was then applied in order to ensure accuracy and reduce the likelihood of errors. To further improve the accuracy of calibration, a historical weather data file for the year 2011 was created from the on-site local weather station of the ERI building. After applying the second level of the calibration process, the values of Mean Bias Error (MBE) and Cumulative Variation of Root Mean Squared Error (CV(RMSE)) on an hourly basis for heat pump electricity consumption varied within the following ranges: MBE (hourly) from −5.6% to 7.5% and CV(RMSE) (hourly) from 7.3% to 25.1%. Finally, the building was simulated with EnergyPlus to identify further possibilities of energy savings supplied by a water-to-water heat pump to the underfloor heating system. It was found that electricity consumption savings from the heat pump can vary between 20% and 27% on a monthly basis.
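
The two calibration statistics quoted in the abstract are computed from measured and simulated consumption series as follows (percentage forms, as commonly used for hourly building-energy calibration):

```python
import numpy as np

def mbe_percent(measured, simulated):
    """MBE (%) = 100 * sum(measured - simulated) / sum(measured)."""
    measured, simulated = np.asarray(measured, float), np.asarray(simulated, float)
    return 100.0 * np.sum(measured - simulated) / np.sum(measured)

def cv_rmse_percent(measured, simulated):
    """CV(RMSE) (%) = 100 * RMSE / mean(measured)."""
    measured, simulated = np.asarray(measured, float), np.asarray(simulated, float)
    rmse = np.sqrt(np.mean((measured - simulated) ** 2))
    return 100.0 * rmse / np.mean(measured)
```

MBE captures systematic over- or under-prediction (it can be zero while the fit is poor), which is why the two statistics are reported together.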

  4. Another look at volume self-calibration: calibration and self-calibration within a pinhole model of Scheimpflug cameras

    International Nuclear Information System (INIS)

    Cornic, Philippe; Le Besnerais, Guy; Champagnat, Frédéric; Illoul, Cédric; Cheminet, Adam; Le Sant, Yves; Leclaire, Benjamin

    2016-01-01

We address calibration and self-calibration of tomographic PIV experiments within a pinhole model of cameras. A complete and explicit pinhole model of a camera equipped with a two-tilt-angle Scheimpflug adapter is presented. It is then used in a calibration procedure based on a freely moving calibration plate. While the resulting calibrations are accurate enough for Tomo-PIV, we confirm, through a simple experiment, that they are not stable in time, and illustrate how the pinhole framework can be used to provide a quantitative evaluation of geometrical drifts in the setup. We propose an original self-calibration method based on global optimization of the extrinsic parameters of the pinhole model. These methods are successfully applied to the tomographic PIV of an air jet experiment. An unexpected by-product of our work is to show that volume self-calibration induces a change in the world frame coordinates. Provided the calibration drift is small, as generally observed in PIV, the bias on the estimated velocity field is negligible but the absolute location cannot be accurately recovered using standard calibration data. (paper)

  5. Calibration of hydrological models using flow-duration curves

    Directory of Open Access Journals (Sweden)

    I. K. Westerberg

    2011-07-01

    acceptability applied directly to the time series of discharge, it still requires a reasonable representation of the distribution of inputs. Additional constraints might therefore be required in catchments subject to snow and where peak-flow timing at sub-daily time scales is of high importance. The results suggest that the calibration method can be useful when observation time periods for discharge and model input data do not overlap. The method could also be suitable for calibration to regional FDCs while taking uncertainties in the hydrological model and data into account.

  6. Seepage Calibration Model and Seepage Testing Data

    International Nuclear Information System (INIS)

    Dixon, P.

    2004-01-01

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM is developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA (see upcoming REV 02 of CRWMS M and O 2000 [153314]), which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model (see BSC 2003 [161530]). 
The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross Drift to obtain the permeability structure for the seepage model; (3) to use inverse modeling to calibrate the SCM and to estimate seepage-relevant, model-related parameters on the drift scale; (4) to estimate the epistemic uncertainty of the derived parameters, based on the goodness-of-fit to the observed data and the sensitivity of calculated seepage with respect to the parameters of interest; (5) to characterize the aleatory uncertainty

  7. Thermodynamically consistent model calibration in chemical kinetics

    Directory of Open Access Journals (Sweden)

    Goutsias John

    2011-05-01

    Full Text Available Abstract Background The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function. Results We introduce a thermodynamically consistent model calibration (TCMC method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from http://www.cis.jhu.edu/~goutsias/CSS lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database. Conclusions TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function as well as to estimate thermodynamically feasible values for the parameters of new

  8. Calibration of hydrological model with programme PEST

    Science.gov (United States)

    Brilly, Mitja; Vidmar, Andrej; Kryžanowski, Andrej; Bezak, Nejc; Šraj, Mojca

    2016-04-01

    PEST is a tool based on the minimization of an objective function related to the root mean square error between model output and measurements. We use the "singular value decomposition" (SVD) section of the PEST control file together with the Tikhonov regularization method to estimate model parameters successfully. PEST can fail when the inverse problem is ill-posed, but SVD ensures that it maintains numerical stability. The choice of initial parameter values is an important issue in PEST and requires expert knowledge. The flexible nature of the PEST software and its ability to be applied to a whole catchment at once allowed the calibration to perform extremely well across a large number of sub-catchments. The parallel computing version of PEST, called BeoPEST, was successfully used to speed up the calibration process. BeoPEST employs smart slaves and point-to-point communications to transfer data between the master and slave computers. The HBV-light model is a simple multi-tank-type model for simulating precipitation-runoff. It is a conceptual balance model of catchment hydrology which simulates discharge using rainfall, temperature and estimates of potential evaporation. The HBV-light-CLI version allows the user to run HBV-light from the command line. Input and result files are in XML form, which makes it easy to connect the model with other applications such as pre- and post-processing utilities and PEST itself. The procedure was applied to a hydrological model of the Savinja catchment (1852 km2), which consists of twenty-one sub-catchments. Data are processed at an hourly time step.
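The Tikhonov-regularised objective PEST minimises can be sketched as a penalised sum of squares. The toy linear "model", forcing data and prior below are all hypothetical (PEST itself drives an external model executable and uses a gradient-based solver rather than the crude grid search shown):

```python
import numpy as np

def objective(params, simulate, observed, prior, weight=0.1):
    """Sum of squared residuals plus a Tikhonov penalty pulling the
    parameters toward a prior (expert-knowledge) estimate."""
    residuals = simulate(params) - observed
    penalty = weight * np.sum((params - prior) ** 2)   # Tikhonov term
    return np.sum(residuals ** 2) + penalty

rain = np.array([1.0, 2.0, 3.0, 4.0])          # toy forcing
observed = np.array([2.1, 4.0, 6.2, 7.9])      # toy discharge
simulate = lambda p: p[0] * rain + p[1]        # toy "hydrological model"

prior = np.array([1.5, 0.5])                   # initial expert guess
# A crude grid search stands in for PEST's Gauss-Marquardt-Levenberg solver
grid = [(a, b) for a in np.linspace(0.0, 3.0, 61)
               for b in np.linspace(-1.0, 1.0, 41)]
best = min(grid, key=lambda p: objective(np.array(p), simulate, observed, prior))
print("calibrated a=%.2f, b=%.2f" % best)
```

The penalty is what keeps the inverse problem well-posed when the data alone cannot identify all parameters, which is the role Tikhonov regularization plays in the PEST run described above.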

  9. Calibration

    International Nuclear Information System (INIS)

    Greacen, E.L.; Correll, R.L.; Cunningham, R.B.; Johns, G.G.; Nicolls, K.D.

    1981-01-01

    Procedures common to different methods of calibration of neutron moisture meters are outlined, and laboratory and field calibration methods are compared. Gross errors which arise from faulty calibration techniques are described. The count rate can be affected by the dry bulk density of the soil, the volumetric content of constitutional hydrogen, and other chemical components of the soil and soil solution. Calibration is further complicated by the fact that the neutron meter responds more strongly to the soil properties close to the detector and source. The differences in slope of calibration curves for different soils can be as much as 40%.

  10. Calibration of discrete element model parameters: soybeans

    Science.gov (United States)

    Ghodki, Bhupendra M.; Patel, Manish; Namdeo, Rohit; Carpenter, Gopal

    2018-05-01

    Discrete element method (DEM) simulations are broadly used to gain insight into the flow characteristics of granular materials in complex particulate systems. Appropriate DEM input parameters are a critical prerequisite for an efficient simulation. Thus, the present investigation aims to determine DEM input parameters for the Hertz-Mindlin model using soybeans as the granular material. To achieve this aim, a widely accepted calibration approach using a standard box-type apparatus was employed. Qualitative and quantitative findings, such as the particle profile, the height of kernels retained against the acrylic wall, and the angle of repose, were compared between experiments and numerical simulations to obtain the parameters. The calibrated set of DEM input parameters includes (a) material properties: particle geometric mean diameter (6.24 mm); spherical shape; particle density (1220 kg m^{-3} ); and (b) interaction parameters, particle-particle: coefficient of restitution (0.17); coefficient of static friction (0.26); coefficient of rolling friction (0.08); and particle-wall: coefficient of restitution (0.35); coefficient of static friction (0.30); coefficient of rolling friction (0.08). The results may adequately be used to simulate particle-scale mechanics (grain commingling, flow/motion, forces, etc.) of soybeans in post-harvest machinery and devices.
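The angle of repose compared between experiment and simulation above follows directly from the pile geometry. For a roughly conical pile (dimensions below are hypothetical, not from the paper):

```python
import math

# Angle of repose of a conical pile: arctan(height / base radius).
pile_height_mm = 38.0
pile_base_radius_mm = 105.0
angle = math.degrees(math.atan(pile_height_mm / pile_base_radius_mm))
print("angle of repose: %.1f deg" % angle)
```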

  11. Seepage Calibration Model and Seepage Testing Data

    Energy Technology Data Exchange (ETDEWEB)

    S. Finsterle

    2004-09-02

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM was developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). This Model Report has been revised in response to a comprehensive, regulatory-focused evaluation performed by the Regulatory Integration Team [''Technical Work Plan for: Regulatory Integration Evaluation of Analysis and Model Reports Supporting the TSPA-LA'' (BSC 2004 [DIRS 169653])]. The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross-Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA [''Seepage Model for PA Including Drift Collapse'' (BSC 2004 [DIRS 167652])], which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model [see ''Drift-Scale Coupled Processes (DST and TH Seepage) Models'' (BSC 2004 [DIRS 170338])]. The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross-Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross

  12. Seepage Calibration Model and Seepage Testing Data

    International Nuclear Information System (INIS)

    Finsterle, S.

    2004-01-01

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM was developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). This Model Report has been revised in response to a comprehensive, regulatory-focused evaluation performed by the Regulatory Integration Team [''Technical Work Plan for: Regulatory Integration Evaluation of Analysis and Model Reports Supporting the TSPA-LA'' (BSC 2004 [DIRS 169653])]. The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross-Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA [''Seepage Model for PA Including Drift Collapse'' (BSC 2004 [DIRS 167652])], which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model [see ''Drift-Scale Coupled Processes (DST and TH Seepage) Models'' (BSC 2004 [DIRS 170338])]. 
The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross-Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross-Drift to obtain the permeability structure for the seepage model

  13. CALIBRATION OF DISTRIBUTED SHALLOW LANDSLIDE MODELS IN FORESTED LANDSCAPES

    Directory of Open Access Journals (Sweden)

    Gian Battista Bischetti

    2010-09-01

    Full Text Available In mountainous, forested, soil-mantled landscapes all around the world, rainfall-induced shallow landslides are one of the most common hydro-geomorphic hazards, which frequently impact the environment and human lives and properties. In order to produce shallow landslide susceptibility maps, several models have been proposed in the last decade, combining simplified steady-state topography-based hydrological models with the infinite slope scheme, in a GIS framework. In the present paper, two of the still open issues are investigated: the assessment of the validity of slope stability models and the inclusion of root cohesion values. In such a perspective, the “Stability INdex MAPping” (SINMAP) model has been applied to a small forested pre-Alpine catchment, adopting different calibrating approaches and target indexes. The Single and the Multiple Calibration Regions modalities and three quantitative target indexes – the common Success Rate (SR), the Modified Success Rate (MSR), and a Weighted Modified Success Rate (WMSR) herein introduced – are considered. The results obtained show that the target index can significantly affect the values of a model’s parameters and lead to different proportions of stable/unstable areas, both for the Single and the Multiple Calibration Regions approach. The use of SR as the target index leads to an over-prediction of the unstable areas, whereas the use of MSR and WMSR seems to allow a better discrimination between stable and unstable areas. The Multiple Calibration Regions approach should be preferred, using information on the spatial distribution of vegetation to define the Regions. The use of field-based estimation of root cohesion and sliding depth allows the implementation of slope stability models (SINMAP in our case) also without the data needed for calibration. To maximize the inclusion of such parameters into SINMAP, however, the assumption of a uniform distribution of
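The common Success Rate target index tuned above can be computed from a landslide inventory and a modelled stability raster: it is the fraction of inventoried landslide cells that the model classifies as unstable. The grids below are hypothetical, and the MSR/WMSR variants (which additionally reward correctly classified stable terrain) are omitted since their exact definitions vary:

```python
import numpy as np

# Hypothetical rasters: model's unstable-cell map vs. landslide inventory.
predicted_unstable = np.array([[1, 1, 0, 0],
                               [1, 0, 0, 0],
                               [0, 0, 0, 1]], dtype=bool)
observed_landslide = np.array([[1, 0, 0, 0],
                               [1, 0, 0, 0],
                               [0, 0, 1, 0]], dtype=bool)

# Success Rate: landslide cells correctly flagged / all landslide cells.
sr = (predicted_unstable & observed_landslide).sum() / observed_landslide.sum()
print("Success Rate: %.2f" % sr)   # 2 of 3 landslide cells hit
```

The abstract's point is visible even in this toy: maximizing SR alone rewards flagging ever more terrain as unstable, which is why the modified indexes that also credit correctly predicted stable cells discriminate better.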

  14. Calibration of regional palaeohydrogeology and sensitivity analysis using hydrochemistry data in site investigations

    International Nuclear Information System (INIS)

    Hunter, F.M.I.; Hartley, L.J.; Hoch, A.; Jackson, C.P.; McCarthy, R.; Marsic, N.; Gylling, B.

    2008-01-01

    A transient coupled regional model of groundwater flow and solute transport has been developed, which allows the use of hydrochemical data to calibrate the model input parameters. The methodology has been illustrated using examples from the Simpevarp area in south-eastern Sweden which is being considered for geological disposal of spent nuclear fuel. The 3-dimensional model includes descriptions of spatial heterogeneity, density driven flow, rock matrix diffusion and transport and mixing of different water types, and has been simulated between 8000 BC and 2000 AD. Present-day analyses of major elemental ions and stable isotopes have been used to calibrate the model, which has then been cross checked against measured hydraulic conductivities, and against the hydrochemical interpretation of reference water mixing fractions. The key hydrogeological model sensitivities have been identified using the calibrated model and are found to include high sensitivity to the top surface flow boundary condition, the influence of variations in fracture transmissivity in different orientations (anisotropy), spatial heterogeneity in the deterministic regional deformation zones and the spacing between water bearing fractures (in terms of its effect on matrix diffusion)

  15. Application of Iterative Robust Model-based Optimal Experimental Design for the Calibration of Biocatalytic Models

    DEFF Research Database (Denmark)

    Van Daele, Timothy; Gernaey, Krist V.; Ringborg, Rolf Hoffmeyer

    2017-01-01

    The aim of model calibration is to estimate unique parameter values from available experimental data, here applied to a biocatalytic process. The traditional approach of first gathering data followed by performing a model calibration is inefficient, since the information gathered during experimentation is not actively used to optimise the experimental design. By applying an iterative robust model-based optimal experimental design, the limited amount of data collected is used to design additional informative experiments. The algorithm is used here to calibrate the initial reaction rate of an ω-transaminase catalysed reaction in a more accurate way. The parameter confidence region estimated from the Fisher Information Matrix is compared with the likelihood confidence region, which is a more accurate, but also a computationally more expensive method. As a result, an important deviation between both approaches...

  16. HRM: HII Region Models

    Science.gov (United States)

    Wenger, Trey V.; Kepley, Amanda K.; Balser, Dana S.

    2017-07-01

    HII Region Models fits HII region models to observed radio recombination line and radio continuum data. The algorithm includes the calculations of departure coefficients to correct for non-LTE effects. HII Region Models has been used to model star formation in the nucleus of IC 342.

  17. Updating a synchronous fluorescence spectroscopic virgin olive oil adulteration calibration to a new geographical region.

    Science.gov (United States)

    Kunz, Matthew Ross; Ottaway, Joshua; Kalivas, John H; Georgiou, Constantinos A; Mousdis, George A

    2011-02-23

    Detecting and quantifying extra virgin olive oil adulteration is of great importance to the olive oil industry. Many spectroscopic methods in conjunction with multivariate analysis have been used to solve these issues. However, successes to date are limited as calibration models are built to a specific set of geographical regions, growing seasons, cultivars, and oil extraction methods (the composite primary condition). Samples from new geographical regions, growing seasons, etc. (secondary conditions) are not always correctly predicted by the primary model due to different olive oil and/or adulterant compositions stemming from secondary conditions not matching the primary conditions. Three Tikhonov regularization (TR) variants are used in this paper to allow adulterant (sunflower oil) concentration predictions in samples from geographical regions not part of the original primary calibration domain. Of the three TR variants, ridge regression with an additional 2-norm penalty provides the smallest validation sample prediction errors. Although the paper reports on using TR for model updating to predict adulterant oil concentration, the methods should also be applicable to updating models distinguishing adulterated samples from pure extra virgin olive oil. Additionally, the approaches are general and can be used with other spectroscopic methods and adulterants as well as with other agriculture products.
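A generic form of the Tikhonov model-updating idea above can be sketched as follows: keep the primary calibration vector, but bias it with a handful of secondary-condition (new-region) samples. The data are synthetic stand-ins for spectra, and this single-penalty form is only a simplified relative of the paper's TR variants:

```python
import numpy as np

# Solve  min ||X2 b - y2||^2 + lam * ||b - b_primary||^2  in closed form.
rng = np.random.default_rng(0)
p = 50
b_true2 = rng.normal(size=p)                  # secondary-condition "truth"
b_primary = b_true2 + rng.normal(scale=0.3, size=p)   # primary model, slightly off

# A handful of secondary (new-region) calibration samples
X2 = rng.normal(size=(8, p))
y2 = X2 @ b_true2 + rng.normal(scale=0.05, size=8)

lam = 1.0
# Normal equations: b = (X2^T X2 + lam I)^-1 (X2^T y2 + lam b_primary)
b_updated = np.linalg.solve(X2.T @ X2 + lam * np.eye(p),
                            X2.T @ y2 + lam * b_primary)

err_primary = np.linalg.norm(b_primary - b_true2)
err_updated = np.linalg.norm(b_updated - b_true2)
print("coefficient error: primary %.2f -> updated %.2f" % (err_primary, err_updated))
```

The penalty keeps the updated model close to the primary calibration where the few new samples carry no information, which is exactly why so few secondary samples suffice.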

  18. Influence of rainfall observation network on model calibration and application

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-01-01

    Full Text Available The objective in this study is to investigate the influence of the spatial resolution of the rainfall input on model calibration and application. The analysis is carried out by varying the distribution of the raingauge network. A meso-scale catchment located in southwest Germany has been selected for this study. First, the semi-distributed HBV model is calibrated with the precipitation interpolated from the available observed rainfall of the different raingauge networks. An automatic calibration method based on the combinatorial optimization algorithm simulated annealing is applied. The performance of the hydrological model is analyzed as a function of the raingauge density. Secondly, the calibrated model is validated using interpolated precipitation from the same raingauge density used for the calibration as well as interpolated precipitation based on networks of reduced and increased raingauge density. Lastly, the effect of missing rainfall data is investigated by using a multiple linear regression approach for filling in the missing measurements. The model, calibrated with the complete set of observed data, is then run in the validation period using the above described precipitation field. The simulated hydrographs obtained in the above described three sets of experiments are analyzed through comparisons of the computed Nash-Sutcliffe coefficient and several goodness-of-fit indexes. The results show that a model using different raingauge networks might need re-calibration of the model parameters: specifically, a model calibrated on relatively sparse precipitation information might perform well with dense precipitation information, while a model calibrated on dense precipitation information may fail with sparse precipitation information.
Also, the model calibrated with the complete set of observed precipitation and run with incomplete observed data associated with the data estimated using multiple linear regressions, at the locations treated as
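The Nash-Sutcliffe coefficient used above to score the simulated hydrographs can be computed as follows (the discharge series here is hypothetical):

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1.0 is a perfect fit; values <= 0 mean the model is no better than
    predicting the mean observed discharge."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - (np.sum((observed - simulated) ** 2)
                  / np.sum((observed - observed.mean()) ** 2))

obs = [10.0, 12.0, 30.0, 55.0, 22.0, 14.0]   # hypothetical discharge, m3/s
sim = [11.0, 13.0, 27.0, 50.0, 24.0, 15.0]
print("NSE = %.3f" % nash_sutcliffe(obs, sim))   # prints: NSE = 0.972
```

Because the denominator is the variance of the observations, NSE weights errors on high flows heavily, which is one reason the study complements it with several other goodness-of-fit indexes.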

  19. SWAT Model Configuration, Calibration and Validation for Lake Champlain Basin

    Science.gov (United States)

    The Soil and Water Assessment Tool (SWAT) model was used to develop phosphorus loading estimates for sources in the Lake Champlain Basin. This document describes the model setup and parameterization, and presents calibration results.

  20. Calibration of the Site-Scale Saturated Zone Flow Model

    International Nuclear Information System (INIS)

    Zyvoloski, G. A.

    2001-01-01

    The purpose of the flow calibration analysis work is to provide Performance Assessment (PA) with the calibrated site-scale saturated zone (SZ) flow model that will be used to make radionuclide transport calculations. As such, it is one of the most important models developed in the Yucca Mountain project. This model will be a culmination of much of our knowledge of the SZ flow system. The objective of this study is to provide a defensible site-scale SZ flow and transport model that can be used for assessing total system performance. A defensible model would include geologic and hydrologic data that are used to form the hydrogeologic framework model; also, it would include hydrochemical information to infer transport pathways, in-situ permeability measurements, and water level and head measurements. In addition, the model should include information on major model sensitivities. Especially important are those that affect calibration, the direction of transport pathways, and travel times. Finally, if warranted, alternative calibrations representing different conceptual models should be included. To obtain a defensible model, all available data should be used (or at least considered) to obtain a calibrated model. The site-scale SZ model was calibrated using measured and model-generated water levels and hydraulic head data, specific discharge calculations, and flux comparisons along several of the boundaries. Model validity was established by comparing model-generated permeabilities with the permeability data from field and laboratory tests; by comparing fluid pathlines obtained from the SZ flow model with those inferred from hydrochemical data; and by comparing the upward gradient generated with the model with that observed in the field. This analysis is governed by the Office of Civilian Radioactive Waste Management (OCRWM) Analysis and Modeling Report (AMR) Development Plan ''Calibration of the Site-Scale Saturated Zone Flow Model'' (CRWMS M and O 1999a)

  1. Model Calibration of Exciter and PSS Using Extended Kalman Filter

    Energy Technology Data Exchange (ETDEWEB)

    Kalsi, Karanjit; Du, Pengwei; Huang, Zhenyu

    2012-07-26

    Power system modeling and controls continue to become more complex with the advent of smart grid technologies and large-scale deployment of renewable energy resources. As demonstrated in recent studies, inaccurate system models could lead to large-scale blackouts, thereby motivating the need for model calibration. Current methods of model calibration rely on manual tuning based on engineering experience, are time consuming and could yield inaccurate parameter estimates. In this paper, the Extended Kalman Filter (EKF) is used as a tool to calibrate exciter and Power System Stabilizer (PSS) models of a particular type of machine in the Western Electricity Coordinating Council (WECC). The EKF-based parameter estimation is a recursive prediction-correction process which uses the mismatch between simulation and measurement to adjust the model parameters at every time step. Numerical simulations using actual field test data demonstrate the effectiveness of the proposed approach in calibrating the parameters.
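The recursive prediction-correction loop described can be sketched by augmenting the state with the unknown parameter. A toy first-order model stands in for the exciter/PSS dynamics, and all values are hypothetical (the WECC models in the paper are far more detailed):

```python
import numpy as np

# EKF parameter calibration by state augmentation for x' = -a*x + u,
# with unknown gain `a` appended to the state z = [x, a].
dt, a_true = 0.01, 2.0
rng = np.random.default_rng(1)

z = np.array([0.0, 0.5])        # deliberately wrong initial guess for a
P = np.diag([0.1, 1.0])         # state covariance
Q = np.diag([1e-6, 1e-6])       # process noise
R = 1e-4                        # measurement noise variance

x_meas = 0.0
for k in range(2000):
    u = 1.0
    # "Field test" measurement generated from the true system
    x_meas += dt * (-a_true * x_meas + u)
    y = x_meas + rng.normal(scale=1e-2)

    # Predict: propagate augmented state and its Jacobian F
    x, a = z
    z = np.array([x + dt * (-a * x + u), a])
    F = np.array([[1.0 - dt * a, -dt * x],
                  [0.0, 1.0]])
    P = F @ P @ F.T + Q

    # Correct: measurement is the first state component, H = [1, 0]
    H = np.array([1.0, 0.0])
    S = H @ P @ H + R
    K = P @ H / S
    z = z + K * (y - z[0])      # mismatch drives the parameter update
    P = P - np.outer(K, H @ P)

print("estimated a = %.2f (true 2.0)" % z[1])
```

The cross-covariance built up by the Jacobian's off-diagonal term is what lets the measurement mismatch correct the parameter at every time step, exactly the recursive mechanism the abstract describes.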

  2. Calibration technique for radiation measurements in vacuum ultraviolet - soft x-ray region

    International Nuclear Information System (INIS)

    Mizui, Jun-ichi

    1986-05-01

    This is a collection of the papers presented at the workshop on ''Calibration Technique for Radiation Measurements in Vacuum Ultraviolet - Soft X-ray Region'' held at the Institute of Plasma Physics, Nagoya University, on December 19 - 20, 1985, under the Collaborating Research Program at the Institute. The following topics were discussed at the workshop: the needs for the calibration of plasma diagnostic devices, present status of the calibration technique, use of the Synchrotron Orbit Radiations for radiometry, and others. (author)

  3. Hand-eye calibration using a target registration error model.

    Science.gov (United States)

    Chen, Elvis C S; Morgan, Isabella; Jayarathne, Uditha; Ma, Burton; Peters, Terry M

    2017-10-01

    Surgical cameras are prevalent in modern operating theatres and are often used as a surrogate for direct vision. Visualisation techniques (e.g. image fusion) made possible by tracking the camera require accurate hand-eye calibration between the camera and the tracking system. The authors introduce the concept of 'guided hand-eye calibration', where calibration measurements are facilitated by a target registration error (TRE) model. They formulate hand-eye calibration as a registration problem between homologous point-line pairs. For each measurement, the position of a monochromatic ball-tip stylus (a point) and its projection onto the image (a line) is recorded, and the TRE of the resulting calibration is predicted using a TRE model. The TRE model is then used to guide the placement of the calibration tool, so that the subsequent measurement minimises the predicted TRE. Assessing TRE after each measurement produces accurate calibration using a minimal number of measurements. As a proof of principle, they evaluated guided calibration using a webcam and an endoscopic camera. Their endoscopic camera results suggest that millimetre TRE is achievable when at least 15 measurements are acquired with the tracker sensor ∼80 cm away on the laparoscope handle for a target ∼20 cm away from the camera.
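The paper's TRE model concerns point-line pairs, but the flavour of model-guided placement can be illustrated with the classic point-based TRE prediction of Fitzpatrick et al. (1998), TRE²(r) ≈ (FLE²/N)(1 + (1/3)Σₖ dₖ²/fₖ²), where dₖ is the target's distance from principal axis k of the fiducial set and fₖ the RMS fiducial distance from that axis. The fiducial layout below is hypothetical:

```python
import numpy as np

def predicted_tre(fiducials, target, fle):
    """Point-based TRE prediction (Fitzpatrick et al., 1998)."""
    fids = np.asarray(fiducials, dtype=float)
    n = len(fids)
    centred = fids - fids.mean(axis=0)
    _, _, axes = np.linalg.svd(centred)     # rows: principal axes
    t = np.asarray(target, dtype=float) - fids.mean(axis=0)
    total = 0.0
    for k in range(3):
        axis = axes[k]
        # squared distance of a point from the line along `axis`
        d_k2 = np.sum(t ** 2) - np.dot(t, axis) ** 2
        f_k2 = np.mean(np.sum(centred ** 2, axis=1) - (centred @ axis) ** 2)
        total += d_k2 / f_k2
    return np.sqrt(fle ** 2 / n * (1 + total / 3.0))

fids = [[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]]   # mm, hypothetical
print("TRE at centroid: %.2f mm" % predicted_tre(fids, [2.5, 2.5, 2.5], fle=0.5))
```

Guided calibration uses such a model in reverse: each next measurement is placed where the predicted TRE of the resulting calibration is smallest.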

  4. Cosmic CARNage I: on the calibration of galaxy formation models

    Science.gov (United States)

    Knebe, Alexander; Pearce, Frazer R.; Gonzalez-Perez, Violeta; Thomas, Peter A.; Benson, Andrew; Asquith, Rachel; Blaizot, Jeremy; Bower, Richard; Carretero, Jorge; Castander, Francisco J.; Cattaneo, Andrea; Cora, Sofía A.; Croton, Darren J.; Cui, Weiguang; Cunnama, Daniel; Devriendt, Julien E.; Elahi, Pascal J.; Font, Andreea; Fontanot, Fabio; Gargiulo, Ignacio D.; Helly, John; Henriques, Bruno; Lee, Jaehyun; Mamon, Gary A.; Onions, Julian; Padilla, Nelson D.; Power, Chris; Pujol, Arnau; Ruiz, Andrés N.; Srisawat, Chaichalit; Stevens, Adam R. H.; Tollet, Edouard; Vega-Martínez, Cristian A.; Yi, Sukyoung K.

    2018-04-01

    We present a comparison of nine galaxy formation models, eight semi-analytical, and one halo occupation distribution model, run on the same underlying cold dark matter simulation (cosmological box of comoving width 125h-1 Mpc, with a dark-matter particle mass of 1.24 × 109h-1M⊙) and the same merger trees. While their free parameters have been calibrated to the same observational data sets using two approaches, they nevertheless retain some `memory' of any previous calibration that served as the starting point (especially for the manually tuned models). For the first calibration, models reproduce the observed z = 0 galaxy stellar mass function (SMF) within 3σ. The second calibration extended the observational data to include the z = 2 SMF alongside the z ˜ 0 star formation rate function, cold gas mass, and the black hole-bulge mass relation. Encapsulating the observed evolution of the SMF from z = 2 to 0 is found to be very hard within the context of the physics currently included in the models. We finally use our calibrated models to study the evolution of the stellar-to-halo mass (SHM) ratio. For all models, we find that the peak value of the SHM relation decreases with redshift. However, the trends seen for the evolution of the peak position as well as the mean scatter in the SHM relation are rather weak and strongly model dependent. Both the calibration data sets and model results are publicly available.

  5. Cumulative error models for the tank calibration problem

    International Nuclear Information System (INIS)

    Goldman, A.; Anderson, L.G.; Weber, J.

    1983-01-01

    The purpose of a tank calibration equation is to obtain an estimate of the liquid volume that corresponds to a liquid level measurement. Calibration experimental errors occur in both liquid level and liquid volume measurements. If one of the errors is relatively small, the calibration equation can be determined from well-known regression and calibration methods. If both variables are assumed to be in error, then for linear cases a prototype model should be considered. Many investigators are not familiar with this model or do not have computing facilities capable of obtaining numerical solutions. This paper discusses and compares three linear models that approximate the prototype model and have the advantage of much simpler computations. Comparisons among the four models and recommendations of suitability are made from simulations and from analyses of six sets of experimental data.
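For reference, the familiar case where the level error is negligible reduces to ordinary least squares. A sketch with hypothetical level/volume data (the paper's point is that when both variables carry error, an errors-in-variables "prototype" model or one of its linear approximations should be used instead):

```python
import numpy as np

level = np.array([10.0, 20.0, 30.0, 40.0, 50.0])       # liquid level, cm
volume = np.array([52.0, 101.0, 153.0, 198.0, 251.0])  # measured volume, L

# Least-squares line: volume = m * level + c
A = np.column_stack([level, np.ones_like(level)])
(m, c), *_ = np.linalg.lstsq(A, volume, rcond=None)
print("volume = %.2f * level + %.2f" % (m, c))

# Calibration equation in use: estimate volume for a level reading of 35 cm
print("estimated volume at 35 cm: %.1f L" % (m * 35 + c))
```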

  6. Calibration laboratories as a regional repair center: consolidate or collocate

    OpenAIRE

    Mitchell, Marquita A; Pasch, John E.

    1996-01-01

    The purpose of this thesis is to examine the integration of AIMDs Miramar and North Island, and NADEP North Island calibration laboratories. The expected benefits and weaknesses or problems resulting from integration are examined. The benefits analyzed include those in the areas of manpower, training, standards reduction, inventory reduction, streamlining facilities, and increased productivity. The problems analyzed include increased transportation costs, facilities modification costs, reduce...

  7. Development of a Regional Glycerol Dialkyl Glycerol Tetraether (GDGT) - Temperature Calibration for Antarctic and sub-Antarctic Lakes

    Science.gov (United States)

    Roberts, S. J.; Foster, L. C.; Pearson, E. J.; Steve, J.; Hodgson, D.; Saunders, K. M.; Verleyen, E.

    2016-12-01

    Temperature calibration models based on the relative abundances of sedimentary glycerol dialkyl glycerol tetraethers (GDGTs) have been used to reconstruct past temperatures in both marine and terrestrial environments, but have not been widely applied in high latitude environments. This is mainly because the performance of GDGT-temperature calibrations at lower temperatures and GDGT provenance in many lacustrine settings remains uncertain. To address these issues, we examined surface sediments from 32 Antarctic, sub-Antarctic and Southern Chilean lakes. First, we quantified GDGT compositions present and then investigated modern-day environmental controls on GDGT composition. GDGTs were found in all 32 lakes studied. Branched GDGTs (brGDGTs) were dominant in 31 lakes and statistical analyses showed that their composition was strongly correlated with mean summer air temperature (MSAT) rather than pH, conductivity or water depth. Second, we developed the first regional brGDGT-temperature calibration for Antarctic and sub-Antarctic lakes based on four brGDGT compounds (GDGT-Ib, GDGT-II, GDGT-III and GDGT-IIIb). Of these, GDGT-IIIb proved particularly important in cold lacustrine environments. Our brGDGT-Antarctic temperature calibration dataset has an improved statistical performance at low temperatures compared to previous global calibrations (r2=0.83, RMSE=1.45°C, RMSEP-LOO=1.68°C, n=36 samples), highlighting the importance of basing palaeotemperature reconstructions on regional GDGT-temperature calibrations, especially if specific compounds lead to improved model performance. Finally, we applied the new Antarctic brGDGT-temperature calibration to two key lake records from the Antarctic Peninsula and South Georgia. In both, downcore temperature reconstructions show similarities to known Holocene warm periods, providing proof of concept for the new Antarctic calibration model.
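The calibration statistics quoted above (r², RMSE and leave-one-out RMSEP) can be reproduced for any linear calibration. The sketch below uses synthetic stand-ins for the fractional abundances of the four brGDGT compounds, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 36
X = rng.uniform(0, 1, size=(n, 4))            # four compound abundances
coef_true = np.array([8.0, -5.0, 3.0, 10.0])  # hypothetical sensitivities
temps = X @ coef_true + 2.0 + rng.normal(scale=1.0, size=n)

def fit(X, y):
    A = np.column_stack([X, np.ones(len(y))])   # slopes + intercept
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def predict(beta, X):
    return X @ beta[:-1] + beta[-1]

beta = fit(X, temps)
resid = temps - predict(beta, X)
rmse = np.sqrt(np.mean(resid ** 2))
r2 = 1 - np.sum(resid ** 2) / np.sum((temps - temps.mean()) ** 2)

# Leave-one-out RMSEP: refit n times, each time predicting the held-out lake
loo = [predict(fit(np.delete(X, i, 0), np.delete(temps, i)), X[i:i + 1])[0]
       for i in range(n)]
rmsep = np.sqrt(np.mean((temps - np.array(loo)) ** 2))
print("r2=%.2f  RMSE=%.2f  RMSEP-LOO=%.2f" % (r2, rmse, rmsep))
```

RMSEP-LOO is always at least as large as the in-sample RMSE, which is why the study reports both: the leave-one-out figure is the honest estimate of predictive skill on a lake not used in the calibration.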

  8. Testing of a one dimensional model for Field II calibration

    DEFF Research Database (Denmark)

    Bæk, David; Jensen, Jørgen Arendt; Willatzen, Morten

    2008-01-01

    Field II is a program for simulating ultrasound transducer fields. It is capable of calculating the emitted and pulse-echoed fields for both pulsed and continuous wave transducers. To make it fully calibrated a model of the transducer’s electro-mechanical impulse response must be included. We examine an adapted one dimensional transducer model originally proposed by Willatzen [9] to calibrate Field II. This model is modified to calculate the required impulse responses needed by Field II for a calibrated field pressure and external circuit current calculation. The testing has been performed ... to the calibrated Field II program for 1, 4, and 10 cycle excitations. Two parameter sets were applied for modeling: one real-valued Pz27 parameter set, manufacturer supplied, and one complex-valued parameter set found in the literature, Algueró et al. [11]. The latter implicitly accounts for attenuation. Results show

  9. Balance between calibration objectives in a conceptual hydrological model

    NARCIS (Netherlands)

    Booij, Martijn J.; Krol, Martinus S.

    2010-01-01

    Three different measures to determine the optimum balance between calibration objectives are compared: the combined rank method, parameter identifiability and model validation. Four objectives (water balance, hydrograph shape, high flows, low flows) are included in each measure. The contributions of

  10. A Method to Test Model Calibration Techniques: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-09-01

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.

  11. Modeling Prairie Pothole Lakes: Linking Satellite Observation and Calibration (Invited)

    Science.gov (United States)

    Schwartz, F. W.; Liu, G.; Zhang, B.; Yu, Z.

    2009-12-01

    This paper examines the response of a complex lake wetland system to variations in climate. The focus is on the lakes and wetlands of the Missouri Coteau, which is part of the larger Prairie Pothole Region of the Central Plains of North America. Information on lake size was enumerated from satellite images and yielded power law relationships for different hydrological conditions. More traditional lake-stage data were made available to us from the USGS Cottonwood Lake Study Site in North Dakota. A Probabilistic Hydrologic Model (PHM) was developed to simulate lake complexes composed of tens of thousands or more individual closed-basin lakes and wetlands. What is new about this model is a calibration scheme that utilizes remotely-sensed data on lake area as well as stage data for individual lakes. Some ¼ million individual data points are used within a Genetic Algorithm to calibrate the model by comparing the simulated results with observed lake area-frequency power law relationships derived from Landsat images and water depths from seven individual lakes and wetlands. The simulated lake behaviors show good agreement with the observations under average, dry, and wet climatic conditions. The calibrated model is used to examine the impact of climate variability on a large lake complex in North Dakota, in particular during the “Dust Bowl Drought” of the 1930s. This most famous drought of the 20th Century devastated the agricultural economy of the Great Plains, with health and social impacts lingering for years afterwards. Interestingly, the drought of the 1930s is unremarkable in relation to others of greater intensity and frequency before AD 1200 in the Great Plains. Major droughts and deluges have the ability to create marked variability of the power law function (e.g. up to one and a half orders of magnitude variability from the extreme Dust Bowl Drought to the extreme 1993-2001 deluge). This new probabilistic modeling approach provides a novel tool to examine the response of the
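
The lake area-frequency power laws used as calibration targets in this record can be estimated from remotely sensed lake areas by linear regression in log-log space. The sketch below is a generic illustration, not code from the paper: it draws synthetic Pareto-distributed lake areas and recovers the power-law exponent from the empirical exceedance curve.

```python
import math
import random

random.seed(7)

# Synthetic lake areas drawn from a power-law (Pareto) distribution:
# P(A > a) = (a_min / a) ** alpha, here with alpha = 1.5
alpha, a_min = 1.5, 0.01
areas = [a_min * random.paretovariate(alpha) for _ in range(5000)]

# Empirical exceedance frequency versus area, fitted as a line in log-log space
areas.sort()
n = len(areas)
xs = [math.log(a) for a in areas[: n - 1]]
ys = [math.log((n - i) / n) for i in range(n - 1)]

mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
# The fitted slope estimates -alpha, so it should land close to -1.5
```

In an application like the one described, the exponent would be fitted separately for wet, average, and dry conditions, giving the observed power-law relationships against which the Genetic Algorithm calibrates the model.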

  12. Efficient Calibration of Distributed Catchment Models Using Perceptual Understanding and Hydrologic Signatures

    Science.gov (United States)

    Hutton, C.; Wagener, T.; Freer, J. E.; Duffy, C.; Han, D.

    2015-12-01

    Distributed models offer the potential to resolve catchment systems in more detail, and therefore simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models may contain a large number of model parameters which are computationally expensive to calibrate. Even when calibration is possible, insufficient data can result in model parameter and structural equifinality. In order to help reduce the space of feasible models and supplement traditional outlet discharge calibration data, semi-quantitative information (e.g. knowledge of relative groundwater levels), may also be used to identify behavioural models when applied to constrain spatially distributed predictions of states and fluxes. The challenge is to combine these different sources of information together to identify a behavioural region of state-space, and efficiently search a large, complex parameter space to identify behavioural parameter sets that produce predictions that fall within this behavioural region. Here we present a methodology to incorporate different sources of data to efficiently calibrate distributed catchment models. Metrics of model performance may be derived from multiple sources of data (e.g. perceptual understanding and measured or regionalised hydrologic signatures). For each metric, an interval or inequality is used to define the behaviour of the catchment system, accounting for data uncertainties. These intervals are then combined to produce a hyper-volume in state space. The state space is then recast as a multi-objective optimisation problem, and the Borg MOEA is applied to first find, and then populate the hyper-volume, thereby identifying acceptable model parameter sets. We apply the methodology to calibrate the PIHM model at Plynlimon, UK by incorporating perceptual and hydrologic data into the calibration problem. Furthermore, we explore how to improve calibration efficiency through search initialisation from shorter model runs.

  13. Comparison between two calibration models of a measurement system for thyroid monitoring

    International Nuclear Information System (INIS)

    Venturini, Luzia

    2005-01-01

    This paper compares two theoretical calibrations that use two mathematical models to represent the neck region. In the first model, the thyroid is considered to be just the region bounded by two concentric cylinders whose dimensions are those of the trachea and neck. The second model uses analytical functions to obtain a better representation of the thyroid geometry. Efficiency values are obtained using Monte Carlo simulation. (author)

  14. Using Active Learning for Speeding up Calibration in Simulation Models.

    Science.gov (United States)

    Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2016-07-01

    Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. © The Author(s) 2015.
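
The idea of letting a learner choose which parameter combinations to simulate can be sketched generically. The toy example below is my own construction, not the UWBCS code: a 1-nearest-neighbour surrogate predicts each combination's discrepancy from the data, and combinations predicted to lie near previously good evaluations are simulated first, so only a fraction of the pool is ever run.

```python
import random

random.seed(0)

# Stand-in for an expensive simulation run: returns the discrepancy between
# the model output for a parameter combination and the observed data
def simulate(p):
    x, y = p
    return (x - 0.3) ** 2 + (y - 0.7) ** 2

TARGET = 0.01  # combinations with discrepancy below this "match" the data

pool = [(random.random(), random.random()) for _ in range(2000)]
evaluated = {}

def surrogate(p):
    # Predict a combination's discrepancy from its nearest evaluated neighbour
    nearest = min(evaluated,
                  key=lambda q: (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)
    return evaluated[nearest]

# Seed with a small random batch, then repeatedly simulate the combinations
# the surrogate ranks as most promising
for p in random.sample(pool, 50):
    evaluated[p] = simulate(p)

for _ in range(10):
    candidates = [p for p in pool if p not in evaluated]
    candidates.sort(key=surrogate)
    for p in candidates[:20]:
        evaluated[p] = simulate(p)

hits = [p for p, d in evaluated.items() if d < TARGET]
# Only 250 of the 2000 combinations were actually simulated
```

The active-learning loop concentrates the expensive evaluations in the promising region of parameter space, mirroring the reduction (378 000 combinations down to 5620) reported in the record.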

  15. Using genetic algorithms to calibrate a water quality model.

    Science.gov (United States)

    Liu, Shuming; Butler, David; Brazier, Richard; Heathwaite, Louise; Khu, Soon-Thiam

    2007-03-15

    With the increasing concern over the impact of diffuse pollution on water bodies, many diffuse pollution models have been developed in the last two decades. A common obstacle in using such models is how to determine the values of the model parameters. This is especially true when a model has a large number of parameters, which makes full-range calibration expensive in terms of computing time. Compared with conventional optimisation approaches, soft computing techniques often have a faster convergence speed and are more efficient for global optimum searches. This paper presents an attempt to calibrate a diffuse pollution model using a genetic algorithm (GA). Designed to simulate the export of phosphorus from diffuse sources (agricultural land) and point sources (human), the Phosphorus Indicators Tool (PIT) version 1.1, on which this paper is based, consists of 78 parameters. Previous studies have indicated the difficulty of full-range model calibration due to the number of parameters involved. In this paper, a GA was employed to carry out the model calibration in which all parameters were involved. A sensitivity analysis was also performed to investigate the impact of operators in the GA on its effectiveness in optimum searching. The calibration yielded satisfactory results and required reasonable computing time. The application of the PIT model to the Windrush catchment with optimum parameter values was demonstrated. The annual P loss was predicted as 4.4 kg P/ha/yr, which agreed well with the observed value.
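
A GA calibration loop of the kind described can be sketched in a few lines. The example below is a generic illustration with a made-up two-parameter export model, not the PIT model itself: candidate parameter sets are selected by fitness, recombined, and mutated until the simulated output matches synthetic observations.

```python
import random

random.seed(1)

# Hypothetical toy export model: load = a * rainfall + b (a stand-in for a
# model with many more parameters, such as the 78-parameter PIT)
def model(params, rainfall):
    a, b = params
    return [a * r + b for r in rainfall]

rainfall = [10, 20, 30, 40, 50]
observed = model((0.4, 1.5), rainfall)  # synthetic "measured" loads

def fitness(params):
    sim = model(params, rainfall)
    return -sum((o - s) ** 2 for o, s in zip(observed, sim))

def calibrate(pop_size=40, generations=60, mut_rate=0.2):
    pop = [(random.uniform(0, 1), random.uniform(0, 5))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                          # selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            child = tuple((x + y) / 2 for x, y in zip(p1, p2))  # crossover
            if random.random() < mut_rate:                      # mutation
                child = tuple(x + random.gauss(0, 0.05) for x in child)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = calibrate()  # should converge near the true (0.4, 1.5)
```

The GA operators varied in the record's sensitivity analysis correspond to the selection, crossover, and mutation steps above.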

  16. Use of regional climate model simulations as an input for hydrological models for the Hindukush-Karakorum-Himalaya region

    NARCIS (Netherlands)

    Akhtar, M.; Ahmad, N.; Booij, Martijn J.

    2009-01-01

    The most important climatological inputs required for the calibration and validation of hydrological models are temperature and precipitation that can be derived from observational records or alternatively from regional climate models (RCMs). In this paper, meteorological station observations and

  17. A Generic Software Framework for Data Assimilation and Model Calibration

    NARCIS (Netherlands)

    Van Velzen, N.

    2010-01-01

    The accuracy of dynamic simulation models can be increased by using observations in conjunction with a data assimilation or model calibration algorithm. However, implementing such algorithms usually increases the complexity of the model software significantly. By using concepts from object oriented

  18. A mathematical model for camera calibration based on straight lines

    Directory of Open Access Journals (Sweden)

    Antonio M. G. Tommaselli

    2005-12-01

    To facilitate the automation of the camera calibration process, a mathematical model using straight lines was developed, based on the equivalent-planes mathematical model. Parameter estimation for the developed model is achieved by the Least Squares Method with Conditions and Observations. The same adjustment method was used to implement camera calibration with bundles, which is based on points. Experiments using simulated and real data have shown that the developed model based on straight lines gives results comparable to the conventional method with points. Details concerning the mathematical development of the model and experiments with simulated and real data are presented, and the results of both methods of camera calibration, with straight lines and with points, are compared.

  19. Model calibration and beam control systems for storage rings

    International Nuclear Information System (INIS)

    Corbett, W.J.; Lee, M.J.; Ziemann, V.

    1993-04-01

    Electron beam storage rings and linear accelerators are rapidly gaining worldwide popularity as scientific devices for the production of high-brightness synchrotron radiation. Today, everybody agrees that there is a premium on calibrating the storage ring model and determining errors in the machine as soon as possible after the beam is injected. In addition, the accurate optics model enables machine operators to predictably adjust key performance parameters, and allows reliable identification of new errors that occur during operation of the machine. Since the need for model calibration and beam control systems is common to all storage rings, software packages should be made that are portable between different machines. In this paper, we report on work directed toward achieving in-situ calibration of the optics model, detection of alignment errors, and orbit control techniques, with an emphasis on developing a portable system incorporating these tools

  20. The cost of uniqueness in groundwater model calibration

    Science.gov (United States)

    Moore, Catherine; Doherty, John

    2006-04-01

    Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The "cost of uniqueness" is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly "well calibrated". Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for an hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, this possibly making the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insights into the loss of system detail incurred through the calibration process to be gained. 
A comparison of pre- and post-calibration

  1. Quality control of radionuclide calibrators used in nuclear medicine services in the Brazilian northeast region

    International Nuclear Information System (INIS)

    Fragoso, Maria C.F.; Albuquerque, Antonio M.S.; Oliveira, Mercia L.; Lima, Ricardo A.; Lima, Fabiana F.

    2011-01-01

    Radionuclide calibrators are essential instruments in nuclear medicine services for determining the activity of the radiopharmaceuticals that will be administered to patients. Inadequate performance of this equipment can lead to underestimation or overestimation of the activity, compromising the success of diagnostic or therapeutic procedures. To ensure satisfactory performance of radionuclide calibrators, quality control tests are recommended by national and international guides. The aim of this work was to evaluate the establishment of quality control programs for radionuclide calibrators at nuclear medicine services in the Brazilian northeast region, highlighting the tests and their frequencies. (author)

  2. Bayesian calibration of power plant models for accurate performance prediction

    International Nuclear Information System (INIS)

    Boksteen, Sowande Z.; Buijtenen, Jos P. van; Pecnik, Rene; Vecht, Dick van der

    2014-01-01

    Highlights: • Bayesian calibration is applied to power plant performance prediction. • Measurements from a plant in operation are used for model calibration. • A gas turbine performance model and steam cycle model are calibrated. • An integrated plant model is derived. • Part load efficiency is accurately predicted as a function of ambient conditions. - Abstract: Gas turbine combined cycles are expected to play an increasingly important role in the balancing of supply and demand in future energy markets. Thermodynamic modeling of these energy systems is frequently applied to assist in decision making processes related to the management of plant operation and maintenance. In most cases, model inputs, parameters and outputs are treated as deterministic quantities and plant operators make decisions with limited or no regard of uncertainties. As the steady integration of wind and solar energy into the energy market induces extra uncertainties, part load operation and reliability are becoming increasingly important. In the current study, methods are proposed to not only quantify various types of uncertainties in measurements and plant model parameters using measured data, but to also assess their effect on various aspects of performance prediction. The authors aim to account for model parameter and measurement uncertainty, and for systematic discrepancy of models with respect to reality. For this purpose, the Bayesian calibration framework of Kennedy and O’Hagan is used, which is especially suitable for high-dimensional industrial problems. The article derives a calibrated model of the plant efficiency as a function of ambient conditions and operational parameters, which is also accurate in part load. The article shows that complete statistical modeling of power plants not only enhances process models, but can also increase confidence in operational decisions.

  3. Calibration and Confirmation in Geophysical Models

    Science.gov (United States)

    Werndl, Charlotte

    2016-04-01

    For policy decisions the best geophysical models are needed. To evaluate geophysical models, it is essential that the best available methods for confirmation are used. A hotly debated issue concerning confirmation in climate science (as well as in philosophy) is the requirement of use-novelty (i.e. that data can only confirm models if they have not already been used before). This talk investigates the issues of use-novelty and double-counting for geophysical models. We will see that the conclusions depend on the framework of confirmation, and that it is not clear that use-novelty is a valid requirement or that double-counting is illegitimate.

  4. Applying Hierarchical Model Calibration to Automatically Generated Items.

    Science.gov (United States)

    Williamson, David M.; Johnson, Matthew S.; Sinharay, Sandip; Bejar, Isaac I.

    This study explored the application of hierarchical model calibration as a means of reducing, if not eliminating, the need for pretesting of automatically generated items from a common item model prior to operational use. Ultimately the successful development of automatic item generation (AIG) systems capable of producing items with highly similar…

  5. Cloud-Based Model Calibration Using OpenStudio: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Hale, E.; Lisell, L.; Goldwasser, D.; Macumber, D.; Dean, J.; Metzger, I.; Parker, A.; Long, N.; Ball, B.; Schott, M.; Weaver, E.; Brackney, L.

    2014-03-01

    OpenStudio is a free, open source Software Development Kit (SDK) and application suite for performing building energy modeling and analysis. The OpenStudio Parametric Analysis Tool has been extended to allow cloud-based simulation of multiple OpenStudio models parametrically related to a baseline model. This paper describes the new cloud-based simulation functionality and presents a model calibration case study. Calibration is initiated by entering actual monthly utility bill data into the baseline model. Multiple parameters are then varied over multiple iterations to reduce the difference between actual energy consumption and model simulation results, as calculated and visualized by billing period and by fuel type. Simulations are performed in parallel using the Amazon Elastic Compute Cloud service. This paper highlights model parameterizations (measures) used for calibration, but the same multi-nodal computing architecture is available for other purposes, for example, recommending combinations of retrofit energy saving measures using the calibrated model as the new baseline.

  6. Calibrating cellular automaton models for pedestrians walking through corners

    Science.gov (United States)

    Dias, Charitha; Lovreglio, Ruggiero

    2018-05-01

    Cellular Automata (CA) based pedestrian simulation models have gained remarkable popularity as they are simpler and easier to implement compared to other microscopic modeling approaches. However, incorporating traditional floor field representations in CA models to simulate pedestrian corner navigation behavior could result in unrealistic behaviors. Even though several previous studies have attempted to enhance CA models to realistically simulate pedestrian maneuvers around bends, such modifications have not been calibrated or validated against empirical data. In this study, two static floor field (SFF) representations, namely 'discrete representation' and 'continuous representation', are calibrated for CA-models to represent pedestrians' walking behavior around 90° bends. Trajectory data collected through a controlled experiment are used to calibrate these model representations. Calibration results indicate that although both floor field representations can represent pedestrians' corner navigation behavior, the 'continuous' representation fits the data better. Output of this study could be beneficial for enhancing the reliability of existing CA-based models by representing pedestrians' corner navigation behaviors more realistically.

  7. Analysis and classification of data sets for calibration and validation of agro-ecosystem models

    DEFF Research Database (Denmark)

    Kersebaum, K C; Boote, K J; Jorgenson, J S

    2015-01-01

    Experimental field data are used at different levels of complexity to calibrate, validate and improve agro-ecosystem models to enhance their reliability for regional impact assessment. A methodological framework and software are presented to evaluate and classify data sets into four classes regar...

  8. A single model procedure for tank calibration function estimation

    International Nuclear Information System (INIS)

    York, J.C.; Liebetrau, A.M.

    1995-01-01

    Reliable tank calibrations are a vital component of any measurement control and accountability program for bulk materials in a nuclear reprocessing facility. Tank volume calibration functions used in nuclear materials safeguards and accountability programs are typically constructed from several segments, each of which is estimated independently. Ideally, the segments correspond to structural features in the tank. In this paper the authors use an extension of the Thomas-Liebetrau model to estimate the entire calibration function in a single step. This procedure automatically takes significant run-to-run differences into account and yields an estimate of the entire calibration function in one operation. As with other procedures, the first step is to define suitable calibration segments. Next, a polynomial of low degree is specified for each segment. In contrast with the conventional practice of constructing a separate model for each segment, this information is used to set up the design matrix for a single model that encompasses all of the calibration data. Estimation of the model parameters is then done using conventional statistical methods. The method described here has several advantages over traditional methods. First, modeled run-to-run differences can be taken into account automatically at the estimation step. Second, no interpolation is required between successive segments. Third, variance estimates are based on all the data, rather than that from a single segment, with the result that discontinuities in confidence intervals at segment boundaries are eliminated. Fourth, the restrictive assumption of the Thomas-Liebetrau method, that the measured volumes be the same for all runs, is not required. Finally, the proposed methods are readily implemented using standard statistical procedures and widely-used software packages

  9. MT3DMS: Model use, calibration, and validation

    Science.gov (United States)

    Zheng, C.; Hill, Mary C.; Cao, G.; Ma, R.

    2012-01-01

    MT3DMS is a three-dimensional multi-species solute transport model for solving advection, dispersion, and chemical reactions of contaminants in saturated groundwater flow systems. MT3DMS interfaces directly with the U.S. Geological Survey finite-difference groundwater flow model MODFLOW for the flow solution and supports the hydrologic and discretization features of MODFLOW. MT3DMS contains multiple transport solution techniques in one code, which can often be important, including in model calibration. Since its first release in 1990 as MT3D for single-species mass transport modeling, MT3DMS has been widely used in research projects and practical field applications. This article provides a brief introduction to MT3DMS and presents recommendations about calibration and validation procedures for field applications of MT3DMS. The examples presented suggest the need to consider alternative processes as models are calibrated and suggest opportunities and difficulties associated with using groundwater age in transport model calibration.

  10. Effect of Using Extreme Years in Hydrologic Model Calibration Performance

    Science.gov (United States)

    Goktas, R. K.; Tezel, U.; Kargi, P. G.; Ayvaz, T.; Tezyapar, I.; Mesta, B.; Kentel, E.

    2017-12-01

    Hydrological models are useful in predicting and developing management strategies for controlling the system behaviour. Specifically they can be used for evaluating streamflow at ungaged catchments, effect of climate change, best management practices on water resources, or identification of pollution sources in a watershed. This study is a part of a TUBITAK project named "Development of a geographical information system based decision-making tool for water quality management of Ergene Watershed using pollutant fingerprints". Within the scope of this project, first water resources in Ergene Watershed is studied. Streamgages found in the basin are identified and daily streamflow measurements are obtained from State Hydraulic Works of Turkey. Streamflow data is analysed using box-whisker plots, hydrographs and flow-duration curves focusing on identification of extreme periods, dry or wet. Then a hydrological model is developed for Ergene Watershed using HEC-HMS in the Watershed Modeling System (WMS) environment. The model is calibrated for various time periods including dry and wet ones and the performance of calibration is evaluated using Nash-Sutcliffe Efficiency (NSE), correlation coefficient, percent bias (PBIAS) and root mean square error. It is observed that calibration period affects the model performance, and the main purpose of the development of the hydrological model should guide calibration period selection. Acknowledgement: This study is funded by The Scientific and Technological Research Council of Turkey (TUBITAK) under Project Number 115Y064.
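
The performance measures named in this record have standard definitions. A minimal, self-contained sketch (function and variable names are my own):

```python
import math

def nse(obs, sim):
    # Nash-Sutcliffe Efficiency: 1 at a perfect fit; values <= 0 mean the
    # model is no better than the mean of the observations
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var

def pbias(obs, sim):
    # Percent bias: positive values indicate average underestimation
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

def rmse(obs, sim):
    # Root mean square error in the units of the flow data
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

obs = [1.0, 2.0, 3.0, 4.0]
sim = [1.1, 1.9, 3.2, 3.8]
print(round(nse(obs, sim), 3))  # -> 0.98
```

Evaluating these statistics separately over dry and wet calibration periods, as the study does, shows directly how the choice of calibration period shifts model performance.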

  11. Calibration of a stochastic health evolution model using NHIS data

    Science.gov (United States)

    Gupta, Aparna; Li, Zhisheng

    2011-10-01

    This paper presents and calibrates an individual's stochastic health evolution model. In this health evolution model, the uncertainty of health incidents is described by a stochastic process with a finite number of possible outcomes. We construct a comprehensive health status index (HSI) to describe an individual's health status, as well as a health risk factor system (RFS) to classify individuals into different risk groups. Based on the maximum likelihood estimation (MLE) method and the method of nonlinear least squares fitting, model calibration is formulated in terms of two mixed-integer nonlinear optimization problems. Using the National Health Interview Survey (NHIS) data, the model is calibrated for specific risk groups. Longitudinal data from the Health and Retirement Study (HRS) is used to validate the calibrated model, which displays good validation properties. The end goal of this paper is to provide a model and methodology, whose output can serve as a crucial component of decision support for strategic planning of health related financing and risk management.

  12. Optical model and calibration of a sun tracker

    International Nuclear Information System (INIS)

    Volkov, Sergei N.; Samokhvalov, Ignatii V.; Cheong, Hai Du; Kim, Dukhyeon

    2016-01-01

    Sun trackers are widely used to investigate scattering and absorption of solar radiation in the Earth's atmosphere. We present a method for optimization of the optical altazimuth sun tracker model with output radiation direction aligned with the axis of a stationary spectrometer. The method solves the problem of stability loss in tracker pointing at the Sun near the zenith. An optimal method for tracker calibration at the measurement site is proposed in the present work. A method of moving calibration is suggested for mobile applications in the presence of large temperature differences and errors in the alignment of the optical system of the tracker. - Highlights: • We present an optimal optical sun tracker model for atmospheric spectroscopy. • The problem of loss of stability of tracker pointing at the Sun has been solved. • We propose an optimal method for tracker calibration at a measurement site. • Test results demonstrate the efficiency of the proposed optimization methods.

  13. Bayesian calibration of the Community Land Model using surrogates

    Energy Technology Data Exchange (ETDEWEB)

    Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi; Swiler, Laura Painton

    2014-02-01

    We present results from the Bayesian calibration of hydrological parameters of the Community Land Model (CLM), which is often used in climate simulations and Earth system models. A statistical inverse problem is formulated for three hydrological parameters, conditional on observations of latent heat surface fluxes over 48 months. Our calibration method uses polynomial and Gaussian process surrogates of the CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. Posterior probability densities for the parameters are developed for two sites with different soil and vegetation covers. Our method also allows us to examine the structural error in CLM under two error models. We find that surrogate models can be created for CLM in most cases. The posterior distributions are more predictive than the default parameter values in CLM. Climatologically averaging the observations does not modify the parameters' distributions significantly. The structural error model reveals a correlation time-scale which can be used to identify the physical process that could be contributing to it. While the calibrated CLM has a higher predictive skill, the calibration is under-dispersive.
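
The workflow in this record (surrogate plus Markov chain Monte Carlo) reduces, in its simplest form, to sampling a posterior over parameters through a cheap emulator. The sketch below is a generic one-parameter illustration using random-walk Metropolis; the quadratic surrogate, prior bounds, and noise level are invented for the example, not CLM values.

```python
import math
import random

random.seed(42)

# Invented quadratic surrogate standing in for the expensive land model:
# maps one hydrological parameter to a latent-heat-flux-like output
def surrogate(theta):
    return 5.0 + 2.0 * theta - 0.5 * theta ** 2

TRUE_THETA, NOISE = 1.2, 0.3
obs = [surrogate(TRUE_THETA) + random.gauss(0, NOISE) for _ in range(48)]

def log_post(theta):
    if not 0.0 <= theta <= 3.0:  # uniform prior bounds
        return -math.inf
    return -sum((y - surrogate(theta)) ** 2 for y in obs) / (2 * NOISE ** 2)

# Random-walk Metropolis sampler over the parameter
theta, samples = 1.5, []
for i in range(20000):
    prop = theta + random.gauss(0, 0.1)
    if math.log(random.random()) < log_post(prop) - log_post(theta):
        theta = prop
    if i >= 5000:  # discard burn-in
        samples.append(theta)

post_mean = sum(samples) / len(samples)  # should sit near TRUE_THETA
```

Because every posterior evaluation calls the cheap surrogate rather than the full model, the chain can afford the tens of thousands of iterations that MCMC requires.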

  14. Regionalizing global climate models

    NARCIS (Netherlands)

    Pitman, A.J.; Arneth, A.; Ganzeveld, L.N.

    2012-01-01

    Global climate models simulate the Earth's climate impressively at scales of continents and greater. At these scales, large-scale dynamics and physics largely define the climate. At spatial scales relevant to policy makers, and to impacts and adaptation, many other processes may affect regional and

  15. Calibration of Automatically Generated Items Using Bayesian Hierarchical Modeling.

    Science.gov (United States)

    Johnson, Matthew S.; Sinharay, Sandip

    For complex educational assessments, there is an increasing use of "item families," which are groups of related items. However, calibration or scoring for such an assessment requires fitting models that take into account the dependence structure inherent among the items that belong to the same item family. C. Glas and W. van der Linden…

  16. LED-based Photometric Stereo: Modeling, Calibration and Numerical Solutions

    DEFF Research Database (Denmark)

    Quéau, Yvain; Durix, Bastien; Wu, Tao

    2018-01-01

    We conduct a thorough study of photometric stereo under nearby point light source illumination, from modeling to numerical solution, through calibration. In the classical formulation of photometric stereo, the luminous fluxes are assumed to be directional, which is very difficult to achieve in practice...

  17. Calibration of a Plastic Classification System with the Ccw Model

    International Nuclear Information System (INIS)

    Barcala Riveira, J. M.; Fernandez Marron, J. L.; Alberdi Primicia, J.; Navarrete Marin, J. J.; Oller Gonzalez, J. C.

    2003-01-01

    This document describes the calibration of a plastic classification system with the Ccw model (Classification by Quantum's built with Wavelet Coefficients). The method is applied to spectra of plastics usually present in domestic waste. The results obtained are shown. (Author) 16 refs

  18. Technical Note: Calibration and validation of geophysical observation models

    NARCIS (Netherlands)

    Salama, M.S.; van der Velde, R.; van der Woerd, H.J.; Kromkamp, J.C.; Philippart, C.J.M.; Joseph, A.T.; O'Neill, P.E.; Lang, R.H.; Gish, T.; Werdell, P.J.; Su, Z.

    2012-01-01

    We present a method to calibrate and validate observational models that interrelate remotely sensed energy fluxes to geophysical variables of land and water surfaces. Coincident sets of remote sensing observations of visible and microwave radiation and geophysical data are assembled and subdivided

  19. Hydrological model calibration for flood prediction in current and future climates using probability distributions of observed peak flows and model based rainfall

    Science.gov (United States)

    Haberlandt, Uwe; Wallner, Markus; Radtke, Imke

    2013-04-01

    Derived flood frequency analysis based on continuous hydrological modelling is very demanding regarding the required length and temporal resolution of precipitation input data. Often such flood predictions are obtained using long precipitation time series from stochastic approaches or from regional climate models as input. However, the calibration of the hydrological model is usually done using short time series of observed data. This inconsistent employment of different data types for calibration and application of a hydrological model increases its uncertainty. Here, it is proposed to calibrate a hydrological model directly on probability distributions of observed peak flows using model-based rainfall in line with its later application. Two examples are given to illustrate the idea. The first one deals with classical derived flood frequency analysis using input data from an hourly stochastic rainfall model. The second one concerns a climate impact analysis using hourly precipitation from a regional climate model. The results show that: (I) the same type of precipitation input data should be used for calibration and application of the hydrological model, (II) a model calibrated on extreme conditions works quite well for average conditions but not vice versa, (III) the calibration of the hydrological model using regional climate model data works as an implicit bias correction method and (IV) the best performance for flood estimation is usually obtained when model-based precipitation and the observed probability distribution of peak flows are used for model calibration.
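Calibrating on a probability distribution of peak flows, rather than on the flow time series itself, amounts to choosing an objective that compares distributional summaries. A hedged sketch of such an objective (illustrative only, not the paper's exact scheme) is an RMSE between observed and simulated peak-flow quantiles:

```python
import numpy as np

def quantile_objective(sim_peaks, obs_peaks, probs=(0.5, 0.8, 0.9, 0.95)):
    """RMSE between simulated and observed peak-flow quantiles.

    Minimizing this over model parameters calibrates the model to the
    distribution of peaks rather than to individual events.
    """
    sim_q = np.quantile(sim_peaks, probs)
    obs_q = np.quantile(obs_peaks, probs)
    return float(np.sqrt(np.mean((sim_q - obs_q) ** 2)))

# Invented annual peak series (m^3/s) for demonstration.
obs = np.array([120.0, 150.0, 90.0, 200.0, 170.0, 130.0, 160.0])
sim = np.array([110.0, 155.0, 100.0, 190.0, 165.0, 140.0, 150.0])
print(quantile_objective(sim, obs))
```

An optimizer would repeatedly run the hydrological model with candidate parameters, extract annual peaks from the simulated series, and minimize this objective.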

  20. Calibration of a distributed hydrologic model for six European catchments using remote sensing data

    Science.gov (United States)

    Stisen, S.; Demirel, M. C.; Mendiguren González, G.; Kumar, R.; Rakovec, O.; Samaniego, L. E.

    2017-12-01

    While observed streamflow has been the single reference for most conventional hydrologic model calibration exercises, the availability of spatially distributed remote sensing observations provides new possibilities for multi-variable calibration assessing both spatial and temporal variability of different hydrologic processes. In this study, we first identify the key transfer parameters of the mesoscale Hydrologic Model (mHM) controlling both the discharge and the spatial distribution of actual evapotranspiration (AET) across six central European catchments (Elbe, Main, Meuse, Moselle, Neckar and Vienne). These catchments are selected for their limited topographical and climatic variability, which makes it possible to evaluate the effect of spatial parameterization on the simulated evapotranspiration patterns. We develop a European-scale remote-sensing-based actual evapotranspiration dataset at a 1 km grid scale, driven primarily by land surface temperature observations from MODIS using the TSEB approach. Using the observed AET maps, we analyze the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mHM model. This model allows calibrating one basin at a time or all basins together using its unique structure and multi-parameter regionalization approach. Results will indicate any tradeoffs between spatial pattern and discharge simulation during model calibration and through validation against independent internal discharge locations. Moreover, the added value for internal water balances will be analyzed.

  1. Evaluation of multivariate calibration models transferred between spectroscopic instruments

    DEFF Research Database (Denmark)

    Eskildsen, Carl Emil Aae; Hansen, Per W.; Skov, Thomas

    2016-01-01

    In a setting where multiple spectroscopic instruments are used for the same measurements it may be convenient to develop the calibration model on a single instrument and then transfer this model to the other instruments. In the ideal scenario, all instruments provide the same predictions for the same samples using the transferred model. However, sometimes the success of a model transfer is evaluated by comparing the transferred model predictions with the reference values. This is not optimal, as uncertainties in the reference method will impact the evaluation. This paper proposes a new method for calibration model transfer evaluation. The new method is based on comparing predictions from different instruments, rather than comparing predictions and reference values. A total of 75 flour samples were available for the study. All samples were measured on ten near infrared (NIR) instruments from two

  2. Calibration and verification of numerical runoff and erosion model

    Directory of Open Access Journals (Sweden)

    Gabrić Ognjen

    2015-01-01

    Full Text Available Based on field and laboratory measurements, and in step with the development of computational techniques, runoff and erosion models based on equations that describe the physics of the process have been developed. Starting from the KINEROS2 model, this paper presents the basic principles of modelling runoff and erosion processes with the St. Venant equations. Alternative equations for friction calculation, for source and deposition terms, and for transport capacity are also shown. Numerical models based on the original and the alternative equations are calibrated and verified on a laboratory-scale model. According to the results, friction calculation based on the analytic solution of laminar flow must be included in all runoff and erosion models.

  3. An Expectation-Maximization Method for Calibrating Synchronous Machine Models

    Energy Technology Data Exchange (ETDEWEB)

    Meng, Da; Zhou, Ning; Lu, Shuai; Lin, Guang

    2013-07-21

    The accuracy of a power system dynamic model is essential to its secure and efficient operation. Lower confidence in model accuracy usually leads to conservative operation and lowers asset usage. To improve model accuracy, this paper proposes an expectation-maximization (EM) method to calibrate the synchronous machine model using phasor measurement unit (PMU) data. First, an extended Kalman filter (EKF) is applied to estimate the dynamic states using measurement data. Then, the parameters are calculated based on the estimated states using the maximum likelihood estimation (MLE) method. The EM method iterates over the preceding two steps to improve estimation accuracy. The proposed EM method's performance is evaluated using a single-machine infinite bus system and compared with a method where both states and parameters are estimated using an EKF method. Sensitivity studies of the parameter calibration using the EM method are also presented to show the robustness of the proposed method for different levels of measurement noise and initial parameter uncertainty.
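The two-step iteration the abstract describes (filter-based state estimation, then a maximum-likelihood parameter refit) can be sketched on a toy scalar system. This is not the paper's synchronous machine model or its EKF: the system below is linear, so a plain Kalman filter suffices, and the M-step drops the state covariance terms for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a scalar linear system: x[k] = a*x[k-1] + w,  y[k] = x[k] + v.
a_true, q, r, n = 0.9, 0.05, 0.1, 500
x = np.zeros(n)
for k in range(1, n):
    x[k] = a_true * x[k - 1] + rng.normal(0.0, np.sqrt(q))
y = x + rng.normal(0.0, np.sqrt(r), n)

def filtered_means(a):
    """E-step (simplified): Kalman-filtered state means for parameter a."""
    m, P, means = 0.0, 1.0, []
    for yk in y:
        m_pred, P_pred = a * m, a * a * P + q      # predict
        K = P_pred / (P_pred + r)                  # Kalman gain
        m, P = m_pred + K * (yk - m_pred), (1 - K) * P_pred
        means.append(m)
    return np.array(means)

# EM-style iteration: estimate states with the current parameter, then
# re-fit the transition parameter by least squares (the MLE for the
# state equation, ignoring state covariance corrections).
a_hat = 0.5
for _ in range(20):
    m = filtered_means(a_hat)
    a_hat = (m[:-1] @ m[1:]) / (m[:-1] @ m[:-1])

print(round(a_hat, 3))
```

The estimate converges toward the true transition parameter; a full EM implementation would use smoothed (not filtered) moments in the M-step.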

  4. Calibration of a Chemistry Test Using the Rasch Model

    Directory of Open Access Journals (Sweden)

    Nancy Coromoto Martín Guaregua

    2011-11-01

    Full Text Available The Rasch model was used to calibrate a general chemistry test for the purpose of analyzing the advantages and information the model provides. The sample was composed of 219 college freshmen. Of the 12 questions used, good fit was achieved in 10. The evaluation shows that although there are items of variable difficulty, there are gaps on the scale; in order to make the test complete, it will be necessary to design new items to fill in these gaps.
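In the Rasch model, the probability that a person of ability theta answers an item of difficulty b correctly is 1 / (1 + exp(-(theta - b))). A hedged single-item sketch of difficulty calibration follows; the sample size mirrors the study's 219 freshmen, but the data are simulated and abilities are treated as known, a simplification of the joint or conditional estimation used in practice.

```python
import numpy as np

rng = np.random.default_rng(2)

def rasch_p(theta, b):
    """Rasch model: P(correct) for ability theta on an item of difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Simulate 219 examinees answering one item of true difficulty 0.7.
theta = rng.normal(0.0, 1.0, 219)
b_true = 0.7
responses = (rng.uniform(size=219) < rasch_p(theta, b_true)).astype(float)

# Calibrate b by Newton-Raphson on the log-likelihood.
b = 0.0
for _ in range(25):
    p = rasch_p(theta, b)
    score = np.sum(p - responses)      # d(loglik)/db
    info = np.sum(p * (1.0 - p))       # Fisher information
    b += score / info

print(round(b, 2))
```

Item fit (as assessed in the study) would then be judged by comparing observed and model-expected response patterns across ability groups.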

  5. On the efficiency calibration of Si(Li) detector in the low-energy region using thick-target bremsstrahlung

    Energy Technology Data Exchange (ETDEWEB)

    An, Z. E-mail: anzhu@scu.edu.cn; Liu, M.T

    2002-10-01

    In this paper, the efficiency calibration of a Si(Li) detector in the low-energy region down to 0.58 keV has been performed using thick-carbon-target bremsstrahlung by 19 keV electron impact. The shape of the efficiency calibration curve was determined from the thick-carbon-target bremsstrahlung spectrum, and the absolute value for the efficiency calibration was obtained from the use of {sup 241}Am radioactive standard source. The modified Wentzel's formula for thick-target bremsstrahlung was employed and it was also compared with the most recently developed theoretical model based upon the doubly differential cross-sections for bremsstrahlung of Kissel, Quarles and Pratt. In the present calculation of theoretical bremsstrahlung, the self-absorption correction and the convolution of detector's response function with the bremsstrahlung spectrum have simultaneously been taken into account. The accuracy for the efficiency calibration in the low-energy region with the method described here was estimated to be about 6%. Moreover, the self-absorption correction calculation based upon the prescription of Wolters et al. has also been presented as an analytical factor with the accuracy of {approx}1%.

  6. Stochastic isotropic hyperelastic materials: constitutive calibration and model selection

    Science.gov (United States)

    Mihai, L. Angela; Woolley, Thomas E.; Goriely, Alain

    2018-03-01

    Biological and synthetic materials often exhibit intrinsic variability in their elastic responses under large strains, owing to microstructural inhomogeneity or when elastic data are extracted from viscoelastic mechanical tests. For these materials, although hyperelastic models calibrated to mean data are useful, stochastic representations accounting also for data dispersion carry extra information about the variability of material properties found in practical applications. We combine finite elasticity and information theories to construct homogeneous isotropic hyperelastic models with random field parameters calibrated to discrete mean values and standard deviations of either the stress-strain function or the nonlinear shear modulus, which is a function of the deformation, estimated from experimental tests. These quantities can take on different values, corresponding to possible outcomes of the experiments. As multiple models can be derived that adequately represent the observed phenomena, we apply Occam's razor by providing an explicit criterion for model selection based on Bayesian statistics. We then employ this criterion to select a model among competing models calibrated to experimental data for rubber and brain tissue under single or multiaxial loads.

  7. Regional forest cover estimation via remote sensing: the calibration center concept

    Science.gov (United States)

    Louis R. Iverson; Elizabeth A. Cook; Robin L. Graham; Robin L. Graham

    1994-01-01

    A method for combining Landsat Thematic Mapper (TM), Advanced Very High Resolution Radiometer (AVHRR) imagery, and other biogeographic data to estimate forest cover over large regions is applied and evaluated at two locations. In this method, TM data are used to classify a small area (calibration center) into forest/nonforest; the resulting forest cover map is then...

  8. Calibration of two complex ecosystem models with different likelihood functions

    Science.gov (United States)

    Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán

    2014-05-01

    The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate-change-induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of process-based models are decisive regarding the model output. At the same time, there are several input parameters for which accurate values are hard to obtain directly from experiments, or no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can be experienced if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance, the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of the terrestrial ecosystems (in this research the developed version of Biome-BGC is used, referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via a likelihood function (degree of goodness-of-fit between simulated and measured data). In our research, different likelihood function formulations were used in order to examine the effect of the different model

  9. Calibrating corneal material model parameters using only inflation data: an ill-posed problem

    CSIR Research Space (South Africa)

    Kok, S

    2014-08-01

    Full Text Available is to perform numerical modelling using the finite element method, for which a calibrated material model is required. These material models are typically calibrated using experimental inflation data by solving an inverse problem. In the inverse problem...

  10. Calibration process of highly parameterized semi-distributed hydrological model

    Science.gov (United States)

    Vidmar, Andrej; Brilly, Mitja

    2017-04-01

    Hydrological phenomena take place in the hydrological system, which is governed by nature, and are essentially stochastic. These phenomena are unique, non-recurring, and changeable across space and time. Since any river basin, with its own natural characteristics, and any hydrological event therein are unique, calibration is a complex process that has not been researched enough. Calibration is a procedure for determining the parameters of a model that are not known well enough. Input and output variables and mathematical model expressions are known, while some parameters are unknown and are determined by calibrating the model. The software used for hydrological modelling nowadays is equipped with sophisticated calibration algorithms that give the modeller no possibility to manage the process, and the results are often not the best. We therefore developed a procedure for an expert-driven calibration process. We use the HBV-light-CLI hydrological model, which has a command line interface, and couple it with PEST, a parameter estimation tool widely used in groundwater modelling that can also be applied to surface waters. A calibration process managed directly by an expert affects the outcome of the inversion procedure in proportion to the expert's knowledge, and achieves better results than leaving the procedure to the selected optimization algorithm. The first step is to properly define the spatial characteristics and structural design of the semi-distributed model, including all morphological and hydrological phenomena, such as karstic, alluvial and forest areas. This step requires the geological, meteorological, hydraulic and hydrological knowledge of the modeller. The second step is to set initial parameter values at their preferred values based on expert knowledge. In this step we also define all parameter and observation groups. Peak data are essential in the calibration process if we are mainly interested in flood events. Each sub-catchment in the model has its own observations group

  11. Insights from Synthetic Star-forming Regions. III. Calibration of Measurement and Techniques of Star Formation Rates

    Energy Technology Data Exchange (ETDEWEB)

    Koepferl, Christine M.; Robitaille, Thomas P. [Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany); Dale, James E., E-mail: koepferl@usm.lmu.de [University Observatory Munich, Scheinerstr. 1, D-81679 Munich (Germany)

    2017-11-01

    Through an extensive set of realistic synthetic observations (produced in Paper I), we assess in this part of the paper series (Paper III) how the choice of observational techniques affects the measurement of star formation rates (SFRs) in star-forming regions. We test the accuracy of commonly used techniques and construct new methods to extract the SFR, so that these findings can be applied to measure the SFR in real regions throughout the Milky Way. We investigate diffuse infrared SFR tracers such as those using 24 μm, 70 μm and total infrared emission, which have been previously calibrated for global galaxy scales. We set up a toy model of a galaxy and show that the infrared emission is consistent with the intrinsic SFR using extra-galactic calibrated laws (although the consistency does not prove their reliability). For local scales, we show that these techniques produce completely unreliable results for single star-forming regions, which are governed by different characteristic timescales. We show how calibration of these techniques can be improved for single star-forming regions by adjusting the characteristic timescale and the scaling factor and give suggestions of new calibrations of the diffuse star formation tracers. We show that star-forming regions that are dominated by high-mass stellar feedback experience a rapid drop in infrared emission once high-mass stellar feedback is turned on, which implies different characteristic timescales. Moreover, we explore the measured SFRs calculated directly from the observed young stellar population. We find that the measured point sources follow the evolutionary pace of star formation more directly than diffuse star formation tracers.

  12. Calibration and validation of earthquake catastrophe models. Case study: Impact Forecasting Earthquake Model for Algeria

    Science.gov (United States)

    Trendafiloski, G.; Gaspa Rebull, O.; Ewing, C.; Podlaha, A.; Magee, B.

    2012-04-01

    Calibration and validation are crucial steps in the production of catastrophe models for the insurance industry, needed to assure a model's reliability and to quantify its uncertainty. Calibration is needed in all components of model development, including hazard and vulnerability. Validation is required to ensure that the losses calculated by the model match those observed in past events and those which could happen in the future. Impact Forecasting, the catastrophe modelling development centre of excellence within Aon Benfield, has recently launched its earthquake model for Algeria as part of the earthquake model for the Maghreb region. The earthquake model went through a detailed calibration process including: (1) the seismic intensity attenuation model, by use of macroseismic observations and maps from past earthquakes in Algeria; (2) calculation of country-specific vulnerability modifiers, by use of past damage observations in the country. The Benouar (1994) ground motion prediction relationship proved the most appropriate for our model. Calculation of the regional vulnerability modifiers for the country led to 10% to 40% larger vulnerability indexes for different building types compared to average European indexes. The country-specific damage models also included aggregate damage models for residential, commercial and industrial properties, considering the description of the building stock given by the World Housing Encyclopaedia and local rebuilding cost factors equal to 10% for damage grade 1, 20% for damage grade 2, 35% for damage grade 3, 75% for damage grade 4 and 100% for damage grade 5. The damage grades comply with the European Macroseismic Scale (EMS-1998). The model was validated by use of "as-if" historical scenario simulations of three past earthquake events in Algeria: the M6.8 2003 Boumerdes, M7.3 1980 El-Asnam and M7.3 1856 Djidjelli earthquakes. The calculated return periods of the losses for client market portfolio align with the

  13. Calibration of Airframe and Occupant Models for Two Full-Scale Rotorcraft Crash Tests

    Science.gov (United States)

    Annett, Martin S.; Horta, Lucas G.; Polanco, Michael A.

    2012-01-01

    Two full-scale crash tests of an MD-500 helicopter were conducted in 2009 and 2010 at NASA Langley's Landing and Impact Research Facility in support of NASA's Subsonic Rotary Wing Crashworthiness Project. The first crash test was conducted to evaluate the performance of an externally mounted composite deployable energy absorber under combined impact conditions. In the second crash test, the energy absorber was removed to establish baseline loads that are regarded as severe but survivable. Accelerations and kinematic data collected from the crash tests were compared to a system integrated finite element model of the test article. Results from 19 accelerometers placed throughout the airframe were compared to finite element model responses. The model developed for the purposes of predicting acceleration responses from the first crash test was inadequate when evaluating the more severe conditions seen in the second crash test. A newly developed model calibration approach that includes uncertainty estimation, parameter sensitivity, impact shape orthogonality, and numerical optimization was used to calibrate model results for the second full-scale crash test. This combination of heuristic and quantitative methods was used to identify modeling deficiencies, evaluate parameter importance, and propose required model changes. It is shown that the multi-dimensional calibration techniques presented here are particularly effective in identifying model adequacy. Acceleration results for the calibrated model were compared to test results and the original model results. There was a noticeable improvement in the pilot and co-pilot region, a slight improvement in the occupant model response, and an over-stiffening effect in the passenger region. This approach should be adopted early on, in combination with the building-block approaches that are customarily used, for model development and test planning guidance. Complete crash simulations with validated finite element models can be used

  14. Bayesian model calibration of ramp compression experiments on Z

    Science.gov (United States)

    Brown, Justin; Hund, Lauren

    2017-06-01

    Bayesian model calibration (BMC) is a statistical framework to estimate inputs for a computational model in the presence of multiple uncertainties, making it well suited to dynamic experiments which must be coupled with numerical simulations to interpret the results. Often, dynamic experiments are diagnosed using velocimetry, and this output can be modeled using a hydrocode. Several calibration issues unique to this type of scenario, including the functional nature of the output, uncertainty of nuisance parameters within the simulation, and model discrepancy identifiability, are addressed, and a novel BMC process is proposed. As a proof of concept, we examine experiments conducted on Sandia National Laboratories' Z-machine which ramp compressed tantalum to peak stresses of 250 GPa. The proposed BMC framework is used to calibrate the cold curve of Ta (with uncertainty), and we conclude that the procedure results in simple, fast, and valid inferences. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  15. Calibrating Vadose Zone Models with Time-Lapse Gravity Data

    DEFF Research Database (Denmark)

    Christiansen, Lars; Hansen, A. B.; Looms, M. C.

    2009-01-01

    A change in soil water content is a change in mass stored in the subsurface. Given that the mass change is big enough, the change can be measured with a gravity meter. Attempts have been made with varying success over the last decades to use ground-based time-lapse gravity measurements to infer hydrogeological parameters. These studies focused on the saturated zone, with specific yield as the most prominent target parameter. Any change in storage in the vadose zone has been considered as noise. Our modeling results show a measurable change in gravity from the vadose zone during a forced infiltration experiment on 10 m by 10 m grass land. Simulation studies show a potential for vadose zone model calibration using gravity data in conjunction with other geophysical data, e.g. cross-borehole georadar. We present early field data and calibration results from a forced infiltration experiment conducted over 30...
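The magnitude of the gravity signal can be estimated with the standard Bouguer-slab approximation (an assumption for illustration, not the record's model): a uniform change in volumetric water content over a layer of thickness d changes gravity by 2*pi*G*rho_w*d_theta*d, independent of the layer's depth.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
rho_w = 1000.0       # density of water, kg/m^3
d_theta = 0.10       # change in volumetric water content (10 %)
thickness = 2.0      # thickness of the wetted layer, m

# Bouguer slab: delta_g = 2*pi*G * (surface density of added water)
delta_g = 2.0 * math.pi * G * rho_w * d_theta * thickness   # m/s^2
print(round(delta_g * 1e8, 2))   # in microGal (1 uGal = 1e-8 m/s^2)
```

A wetting front of this size produces a signal of roughly 8 microGal, which is within reach of modern relative gravimeters and explains why a forced-infiltration plot can yield a measurable effect.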

  16. A new sewage exfiltration model--parameters and calibration.

    Science.gov (United States)

    Karpf, Christian; Krebs, Peter

    2011-01-01

    Exfiltration of waste water from sewer systems represents a potential danger for the soil and the aquifer. Common models used to describe the exfiltration process are based on the law of Darcy, extended by a more or less detailed consideration of the expansion of leaks, the characteristics of the soil and the colmation layer. But, due to the complexity of the exfiltration process, the calibration of these models involves significant uncertainty. In this paper, a new exfiltration approach is introduced which implements the dynamics of the clogging process and the structural conditions near sewer leaks. The calibration is realised using experimental studies and analysis of groundwater infiltration into sewers. Furthermore, exfiltration rates and the sensitivity of the approach are estimated and evaluated, respectively, by Monte-Carlo simulations.
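The Darcy-law basis of such models can be sketched in a few lines. The function, parameter names, and numbers below are illustrative assumptions, not the paper's calibrated model: flow leaves through a leak of area A across a colmation (clogging) layer of thickness L, under a water level h standing above the leak.

```python
def exfiltration_rate(k_colmation, layer_thickness, head, leak_area):
    """Darcy flux through the colmation layer: q = K * (dh/dz) * A.

    The hydraulic gradient across the layer is (head + layer_thickness)
    / layer_thickness, i.e. the head loss occurs over the clogging layer.
    """
    gradient = (head + layer_thickness) / layer_thickness
    return k_colmation * gradient * leak_area

# Illustrative numbers: K = 1e-6 m/s, 5 cm layer, 0.5 m of water, 100 cm^2 leak.
q = exfiltration_rate(k_colmation=1e-6, layer_thickness=0.05,
                      head=0.5, leak_area=0.01)
print(q)   # m^3/s
```

The paper's contribution is to make the effective conductivity of the colmation layer dynamic (clogging builds up and erodes over time) rather than a fixed constant as in this sketch.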

  17. A hierarchical analysis of terrestrial ecosystem model Biome-BGC: Equilibrium analysis and model calibration

    Energy Technology Data Exchange (ETDEWEB)

    Thornton, Peter E [ORNL; Wang, Weile [ORNL; Law, Beverly E. [Oregon State University; Nemani, Ramakrishna R [NASA Ames Research Center

    2009-01-01

    The increasing complexity of ecosystem models represents a major difficulty in tuning model parameters and analyzing simulated results. To address this problem, this study develops a hierarchical scheme that simplifies the Biome-BGC model into three functionally cascaded tiers and analyzes them sequentially. The first-tier model focuses on leaf-level ecophysiological processes; it simulates evapotranspiration and photosynthesis with prescribed leaf area index (LAI). The restriction on LAI is then lifted in the following two model tiers, which analyze how carbon and nitrogen are cycled at the whole-plant level (the second tier) and in all litter/soil pools (the third tier) to dynamically support the prescribed canopy. In particular, this study analyzes the steady state of these two model tiers with a set of equilibrium equations that are derived from Biome-BGC algorithms and are based on the principle of mass balance. Instead of spinning up the model for thousands of climate years, these equations are able to estimate carbon/nitrogen stocks and fluxes of the target (steady-state) ecosystem directly from the results obtained by the first-tier model. The model hierarchy is examined with model experiments at four AmeriFlux sites. The results indicate that the proposed scheme can effectively calibrate Biome-BGC to simulate observed fluxes of evapotranspiration and photosynthesis, and the carbon/nitrogen stocks estimated by the equilibrium analysis approach are highly consistent with the results of model simulations. Therefore, the scheme developed in this study may serve as a practical guide to calibrate and analyze Biome-BGC; it also provides an efficient way to solve the problem of model spin-up, especially for applications over large regions. The same methodology may help analyze other similar ecosystem models as well.

  18. Fundamental studies to develop certified reference material to calibrate spectrophotometer in the ultraviolet region

    International Nuclear Information System (INIS)

    Da Conceição, F C; Borges, P P; Gomes, J F S

    2016-01-01

    Spectrophotometry is a technique used in a great number of laboratories around the world. Quantitative determination of a large number of inorganic, organic and biological species can be made by spectrophotometry using calibrated spectrophotometers. International standards require the use of optical filters to perform the calibration of spectrophotometers. One of the recommended materials is crystalline potassium dichromate (K_2Cr_2O_7), which is used to prepare solutions in specific concentrations for calibration or verification of spectrophotometers in the ultraviolet (UV) spectral region. This paper presents the results of fundamental studies for developing a certified reference material (CRM) of crystalline potassium dichromate to be used as a standard for spectrophotometers, in order to contribute to reliable quantitative analyses. (paper)

  19. Spatial and Temporal Self-Calibration of a Hydroeconomic Model

    Science.gov (United States)

    Howitt, R. E.; Hansen, K. M.

    2008-12-01

    Hydroeconomic modeling of water systems where risk and reliability of water supply are of critical importance must address explicitly how to model water supply uncertainty. When large fluctuations in annual precipitation and significant variation in flows by location are present, a model which solves with perfect foresight of future water conditions may be inappropriate for some policy and research questions. We construct a simulation-optimization model with limited foresight of future water conditions using positive mathematical programming and self-calibration techniques. This limited foresight netflow (LFN) model signals the value of storing water for future use and reflects a more accurate economic value of water at key locations, given that future water conditions are unknown. Failure to explicitly model this uncertainty could lead to undervaluation of storage infrastructure and contractual mechanisms for managing water supply risk. A model based on sequentially updated information is more realistic, since water managers make annual storage decisions without knowledge of yet to be realized future water conditions. The LFN model runs historical hydrological conditions through the current configuration of the California water system to determine the economically efficient allocation of water under current economic conditions and infrastructure. The model utilizes current urban and agricultural demands, storage and conveyance infrastructure, and the state's hydrological history to indicate the scarcity value of water at key locations within the state. Further, the temporal calibration penalty functions vary by year type, reflecting agricultural water users' ability to alter cropping patterns in response to water conditions. The model employs techniques from positive mathematical programming (Howitt, 1995; Howitt, 1998; Cai and Wang, 2006) to generate penalty functions that are applied to deviations from observed data. The functions are applied to monthly flows

  20. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    Science.gov (United States)

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
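The Pareto-dominance rule described in this record is simple to operationalise: an input set stays on the frontier unless some other set fits every calibration target at least as well and at least one target strictly better. A minimal sketch in Python, with hypothetical goodness-of-fit vectors (lower is better):

```python
def pareto_frontier(gof_vectors):
    """Return indices of input sets not dominated on any calibration target.

    gof_vectors: list of tuples of goodness-of-fit errors (lower is better).
    Set i is dominated if some set j is <= on every target and differs on one.
    """
    frontier = []
    for i, gi in enumerate(gof_vectors):
        dominated = any(
            all(gj[k] <= gi[k] for k in range(len(gi))) and gj != gi
            for j, gj in enumerate(gof_vectors) if j != i
        )
        if not dominated:
            frontier.append(i)
    return frontier

# Three hypothetical input sets scored against two calibration targets:
scores = [(0.10, 0.50), (0.20, 0.20), (0.30, 0.60)]
print(pareto_frontier(scores))  # → [0, 1]; the third set is dominated
```

No weights are needed: the frontier is invariant to how the individual target fits would have been combined into a single score.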

  1. Investigation of the transferability of hydrological models and a method to improve model calibration

    Directory of Open Access Journals (Sweden)

    G. Hartmann

    2005-01-01

    In order to find a model parameterization such that the hydrological model performs well even under different conditions, appropriate model performance measures have to be determined. A common performance measure is the Nash-Sutcliffe efficiency, usually calculated by comparing observed and modelled daily values. In this paper a modified version is suggested in order to calibrate a model on different time scales simultaneously (days up to years). A spatially distributed hydrological model based on the HBV concept was used. The modelling was applied to the Upper Neckar catchment, a mesoscale river in southwestern Germany with a basin size of about 4000 km2. The observation period 1961-1990 was divided into four different climatic periods, referred to as "warm", "cold", "wet" and "dry". These sub-periods were used to assess the transferability of the model calibration and of the measure of performance. In a first step, the hydrological model was calibrated on a certain period and afterwards applied to the same period. Then, a validation was performed on the climatologically opposite period to the calibration, e.g. the model calibrated on the cold period was applied to the warm period. Optimal parameter sets were identified by an automatic calibration procedure based on Simulated Annealing. The results show that calibrating a hydrological model that is supposed to handle short term as well as long term signals becomes an important task. In particular, the objective function has to be chosen very carefully.
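A multi-timescale objective of the kind suggested in this record can be sketched by evaluating the Nash-Sutcliffe efficiency on the raw daily series and on block-aggregated versions of it; the equal weighting across scales below is an assumption, since the paper's exact formulation is not reproduced here:

```python
def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 minus SSE over observed variance."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var

def block_means(series, window):
    """Aggregate a daily series into non-overlapping block means."""
    return [sum(series[i:i + window]) / window
            for i in range(0, len(series) - window + 1, window)]

def multiscale_nse(obs, sim, windows=(1, 30, 365)):
    """Average NSE over several aggregation scales (equal weights assumed)."""
    return sum(nse(block_means(obs, w), block_means(sim, w))
               for w in windows) / len(windows)
```

A perfect simulation scores 1 at every scale, while a simulation that is biased at long time scales is penalised even if its daily dynamics look good.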

  2. Model- and calibration-independent test of cosmic acceleration

    International Nuclear Information System (INIS)

    Seikel, Marina; Schwarz, Dominik J.

    2009-01-01

    We present a calibration-independent test of the accelerated expansion of the universe using type Ia supernova data. The test is also model-independent in the sense that no assumptions about the content of the universe or about the parameterization of the deceleration parameter are made, and it does not assume any dynamical equations of motion. The test does, however, assume the universe and the distribution of supernovae to be statistically homogeneous and isotropic. A significant reduction of systematic effects, as compared to our previous, calibration-dependent test, is achieved. Accelerated expansion is detected at a significant level (4.3σ in the 2007 Gold sample, 7.2σ in the 2008 Union sample) if the universe is spatially flat. This result depends, however, crucially on supernovae with a redshift smaller than 0.1, for which the assumption of statistical isotropy and homogeneity is less well established.

  3. Technical note: Bayesian calibration of dynamic ruminant nutrition models.

    Science.gov (United States)

    Reed, K F; Arhonditsis, G B; France, J; Kebreab, E

    2016-08-01

    Mechanistic models of ruminant digestion and metabolism have advanced our understanding of the processes underlying ruminant animal physiology. Deterministic modeling practices ignore the inherent variation within and among individual animals and thus have no way to assess how sources of error influence model outputs. We introduce Bayesian calibration of mathematical models to address the need for robust mechanistic modeling tools that can accommodate error analysis by remaining within the bounds of data-based parameter estimation. For the purpose of prediction, the Bayesian approach generates a posterior predictive distribution that represents the current estimate of the value of the response variable, taking into account both the uncertainty about the parameters and model residual variability. Predictions are expressed as probability distributions, thereby conveying significantly more information than point estimates in regard to uncertainty. Our study illustrates some of the technical advantages of Bayesian calibration and discusses the future perspectives in the context of animal nutrition modeling. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
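The posterior predictive idea can be illustrated with a deliberately tiny example: a random-walk Metropolis sampler (not the authors' implementation) fitting the mean of a toy response with known unit residual variance; each posterior draw of the parameter, plus residual noise, yields a predictive draw, so prediction uncertainty reflects both parameter uncertainty and residual variability:

```python
import math
import random

def metropolis(logpost, x0, n, step=0.5, seed=0):
    """Random-walk Metropolis sampler for a one-dimensional parameter."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    samples = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)          # propose
        lpp = logpost(xp)
        if math.log(rng.random()) < lpp - lp:  # accept or reject
            x, lp = xp, lpp
        samples.append(x)
    return samples

# Toy data: responses assumed Normal(theta, 1), flat prior on theta.
data = [4.8, 5.1, 5.3, 4.9, 5.2]
logpost = lambda t: -0.5 * sum((y - t) ** 2 for y in data)
draws = metropolis(logpost, x0=0.0, n=5000)

# Posterior predictive draws: a parameter draw plus residual noise.
rng = random.Random(1)
predictive = [t + rng.gauss(0.0, 1.0) for t in draws[1000:]]
```

The spread of `predictive` exceeds the spread of `draws` alone, which is exactly the extra information a point estimate cannot convey.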

  4. A surface hydrology model for regional vector borne disease models

    Science.gov (United States)

    Tompkins, Adrian; Asare, Ernest; Bomblies, Arne; Amekudzi, Leonard

    2016-04-01

    Small, sun-lit temporary pools that form during the rainy season are important breeding sites for many key mosquito vectors responsible for the transmission of malaria and other diseases. The representation of this surface hydrology in mathematical disease models is challenging, due to the pools' small scale, their dependence on the terrain, and the difficulty of setting soil parameters. Here we introduce a model that represents the temporal evolution of the aggregate statistics of breeding sites with a single pond fractional coverage parameter. The model is based on a simple, geometrical assumption concerning the terrain, and accounts for the processes of surface runoff, pond overflow, infiltration and evaporation. Soil moisture, soil properties and large-scale terrain slope are accounted for using a calibration parameter that sets the equivalent catchment fraction. The model is calibrated and then evaluated using in situ pond measurements in Ghana and ultra-high (10 m) resolution explicit simulations for a village in Niger. Despite the model's simplicity, it is shown to reproduce the variability and mean of the pond aggregate water coverage well for both locations and validation techniques. Example malaria simulations for Uganda will be shown using this new scheme with a generic calibration setting, evaluated using district malaria case data. Possible methods for implementing regional calibration will be briefly discussed.
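The pond scheme itself is not specified in this record beyond its ingredients (runoff, overflow, infiltration, evaporation, an equivalent-catchment calibration parameter), so the following bucket model is only a hedged toy illustration of how a single fractional-coverage state could be updated daily; all names and numbers are hypothetical:

```python
def step_pond_fraction(frac, rain, losses, catchment_ratio, max_depth):
    """Advance aggregate pond fractional coverage by one day (toy model).

    Ponds gain water from direct rainfall plus runoff from an equivalent
    catchment area (catchment_ratio, the calibration parameter) and lose
    water to evaporation and infiltration (lumped into 'losses').
    Coverage saturates at 1 when ponds overflow. All depths in mm.
    """
    depth = frac * max_depth
    depth += rain * (1.0 + catchment_ratio)  # rainfall + surface runoff
    depth -= losses                          # evaporation + infiltration
    depth = max(0.0, min(depth, max_depth))  # empty ponds / overflow
    return depth / max_depth

# Dry ponds fill during a short rainy spell, then drain afterwards:
frac = 0.0
for rain in [20.0, 30.0, 0.0, 0.0, 0.0]:
    frac = step_pond_fraction(frac, rain, losses=8.0,
                              catchment_ratio=2.0, max_depth=150.0)
```

In a disease model this daily fraction would then scale the vector's available breeding habitat.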

  5. A Linear Viscoelastic Model Calibration of Sylgard 184.

    Energy Technology Data Exchange (ETDEWEB)

    Long, Kevin Nicholas; Brown, Judith Alice

    2017-04-01

    We calibrate a linear thermoviscoelastic model for solid Sylgard 184 (90-10 formulation), a lightly cross-linked, highly flexible isotropic elastomer, for use both in Sierra/Solid Mechanics via the Universal Polymer Model and in Sierra/Structural Dynamics (Salinas) as an isotropic viscoelastic material. Material inputs for the calibration in both codes are provided. The frequency domain master curve of oscillatory shear was obtained from a report from Los Alamos National Laboratory (LANL). However, because the form of that data is different from the constitutive models in Sierra, we also present the mapping of the LANL data onto Sandia's constitutive models. Finally, blind predictions of cyclic tension and compression out to moderate strains of 40% and 20%, respectively, are compared with Sandia's legacy cure schedule material. Although the strain rate of the data is unknown, the linear thermoviscoelastic model accurately predicts the experiments out to moderate strains for the slower strain rates, which is consistent with the expectation that quasistatic test procedures were likely followed. This good agreement comes despite the different cure schedules between the Sandia and LANL data.

  6. Evaluation of an ASM1 Model Calibration Procedure on a Municipal-Industrial Wastewater Treatment Plant

    DEFF Research Database (Denmark)

    Petersen, Britta; Gernaey, Krist; Henze, Mogens

    2002-01-01

    The purpose of the calibrated model determines how to approach a model calibration, e.g. which information is needed and to which level of detail the model should be calibrated. A systematic model calibration procedure was therefore defined and evaluated for a municipal-industrial wastewater treatment plant. In the case that was studied it was important to have a detailed description of the process dynamics, since the model was to be used as the basis for optimisation scenarios in a later phase. Therefore, a complete model calibration procedure was applied including: (1) a description...

  7. Dynamic calibration of agent-based models using data assimilation.

    Science.gov (United States)

    Ward, Jonathan A; Evans, Andrew J; Malleson, Nicolas S

    2016-04-01

    A widespread approach to investigating the dynamical behaviour of complex social systems is via agent-based models (ABMs). In this paper, we describe how such models can be dynamically calibrated using the ensemble Kalman filter (EnKF), a standard method of data assimilation. Our goal is twofold. First, we want to present the EnKF in a simple setting for the benefit of ABM practitioners who are unfamiliar with it. Second, we want to illustrate to data assimilation experts the value of using such methods in the context of ABMs of complex social systems and the new challenges these types of model present. We work towards these goals within the context of a simple question of practical value: how many people are there in Leeds (or any other major city) right now? We build a hierarchy of exemplar models that we use to demonstrate how to apply the EnKF and calibrate these using open data of footfall counts in Leeds.
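The EnKF analysis step for a scalar state, as used here to nudge an ABM's aggregate (e.g. a city's current population count) toward footfall-type observations, can be sketched as follows; the perturbed-observation form shown is one standard variant, not necessarily the authors' exact implementation:

```python
import random

def enkf_update(ensemble, observation, obs_var, rng):
    """One EnKF analysis step for a scalar state (perturbed observations).

    Each member is nudged toward a noisy copy of the observation,
    weighted by the Kalman gain K = P / (P + R).
    """
    n = len(ensemble)
    mean = sum(ensemble) / n
    p = sum((x - mean) ** 2 for x in ensemble) / (n - 1)  # forecast spread P
    gain = p / (p + obs_var)                              # Kalman gain
    return [x + gain * (observation + rng.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]

rng = random.Random(1)
prior = [rng.gauss(1000.0, 50.0) for _ in range(100)]  # forecast ensemble
posterior = enkf_update(prior, observation=1100.0, obs_var=10.0 ** 2, rng=rng)
post_mean = sum(posterior) / len(posterior)
# The analysis mean moves most of the way toward the (more precise) observation.
```

In an ABM setting, `prior` would come from running the agent simulation forward between observation times rather than from a Gaussian draw.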

  8. Calibration and validation of a general infiltration model

    Science.gov (United States)

    Mishra, Surendra Kumar; Ranjan Kumar, Shashi; Singh, Vijay P.

    1999-08-01

    A general infiltration model proposed by Singh and Yu (1990) was calibrated and validated using a split sampling approach for 191 sets of infiltration data observed in the states of Minnesota and Georgia in the USA. Of the five model parameters, fc (the final infiltration rate), So (the available storage space) and exponent n were found to be more predictable than the other two parameters: m (exponent) and a (proportionality factor). A critical examination of the general model revealed that it is related to the Soil Conservation Service (1956) curve number (SCS-CN) method and its parameter So is equivalent to the potential maximum retention of the SCS-CN method and is, in turn, found to be a function of soil sorptivity and hydraulic conductivity. The general model was found to describe infiltration rate with time varying curve number.
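Since the general model's storage parameter So is shown to be equivalent to the SCS-CN potential maximum retention, the classical SCS-CN runoff relation is the natural reference point; a sketch using the standard initial abstraction Ia = 0.2*S:

```python
def scs_runoff(p, cn):
    """SCS-CN direct runoff depth (inches) for storm rainfall p (inches).

    S = 1000/CN - 10 is the potential maximum retention; runoff begins
    once rainfall exceeds the initial abstraction Ia = 0.2*S.
    """
    s = 1000.0 / cn - 10.0
    ia = 0.2 * s
    if p <= ia:
        return 0.0
    return (p - ia) ** 2 / (p + 0.8 * s)

print(round(scs_runoff(5.0, 80), 2))  # → 2.89
```

A "time varying curve number", as the record puts it, amounts to letting `cn` (equivalently S, or So) evolve during the storm instead of staying fixed.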

  9. Calibration of the simulation model of the VINCY cyclotron magnet

    Directory of Open Access Journals (Sweden)

    Ćirković Saša

    2002-01-01

    The MERMAID program will be used to isochronise the nominal magnetic field of the VINCY Cyclotron. This program simulates the response, i.e. calculates the magnetic field, of a previously defined model of a magnet. The accuracy of 3D field calculation depends on the density of the grid points in the simulation model grid. The size of the VINCY Cyclotron and the maximum number of grid points in the XY plane allowed by MERMAID define the maximum obtainable accuracy of field calculations. Comparison of the field simulated with maximum obtainable accuracy against the magnetic field measured in the first phase of the VINCY Cyclotron magnetic field measurement campaign has shown that the difference between these two fields is not as small as required. A further decrease of the difference between these fields is obtained by simulation model calibration, i.e. by adjusting the current through the main coils in the simulation model.

  10. Recent Improvements to the Calibration Models for RXTE/PCA

    Science.gov (United States)

    Jahoda, K.

    2008-01-01

    We are updating the calibration of the PCA to correct for slow variations, primarily in energy to channel relationship. We have also improved the physical model in the vicinity of the Xe K-edge, which should increase the reliability of continuum fits above 20 keV. The improvements to the matrix are especially important to simultaneous observations, where the PCA is often used to constrain the continuum while other higher resolution spectrometers are used to study the shape of lines and edges associated with Iron.

  11. Calibration of a distributed hydrologic model using observed spatial patterns from MODIS data

    Science.gov (United States)

    Demirel, Mehmet C.; González, Gorka M.; Mai, Juliane; Stisen, Simon

    2016-04-01

    Distributed hydrologic models are typically calibrated against streamflow observations at the outlet of the basin. Along with these observations from gauging stations, satellite based estimates offer independent evaluation data such as remotely sensed actual evapotranspiration (aET) and land surface temperature. The primary objective of the study is to compare model calibrations against traditional downstream discharge measurements with calibrations against simulated spatial patterns and combinations of both types of observations. While discharge based model calibration typically improves the temporal dynamics of the model, it seems to yield minimal improvement of the simulated spatial patterns. In contrast, objective functions specifically targeting the spatial pattern performance could potentially increase the spatial model performance. However, most modeling studies, including the model formulations and parameterization, are not designed to actually change the simulated spatial pattern during calibration. This study investigates the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mesoscale hydrologic model (mHM). This model is selected as it allows for a change in the spatial distribution of key soil parameters through the optimization of pedo-transfer function parameters and includes options for using fully distributed daily Leaf Area Index (LAI) values directly as input. In addition, the simulated aET can be estimated at a spatial resolution suitable for comparison to the spatial patterns observed with MODIS data. To increase our control over spatial calibration we introduced three additional parameters to the model. These new parameters are part of an empirical equation to calculate the crop coefficient (Kc) from daily LAI maps, used to update potential evapotranspiration (PET) as model input. This is done instead of correcting/updating PET with just a uniform (or aspect driven) factor used in the mHM model
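The record mentions an empirical three-parameter equation mapping daily LAI to a crop coefficient Kc used to rescale PET, but does not give its form; the saturating curve below is purely a hypothetical stand-in to show how such a parameterisation plugs in:

```python
import math

def crop_coefficient(lai, kc_min=0.4, kc_max=1.2, decay=0.7):
    """Hypothetical Kc(LAI) curve with three tunable parameters.

    Kc rises from kc_min for bare soil toward kc_max as the canopy
    (leaf area index, LAI) closes; 'decay' controls how fast.
    """
    return kc_min + (kc_max - kc_min) * (1.0 - math.exp(-decay * lai))

def adjust_pet(pet, lai, **params):
    """Scale reference PET by the crop coefficient before use as model input."""
    return pet * crop_coefficient(lai, **params)
```

Because the three parameters enter the calibration, the optimiser can reshape the spatial pattern of aET through the spatial pattern of LAI, rather than through a single uniform PET correction.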

  12. Model validation and calibration based on component functions of model output

    International Nuclear Information System (INIS)

    Wu, Danqing; Lu, Zhenzhou; Wang, Yanping; Cheng, Lei

    2015-01-01

    The target in this work is to validate the component functions of model output between physical observation and computational model with the area metric. Based on the theory of high dimensional model representations (HDMR) of independent input variables, conditional expectations are component functions of model output, and these conditional expectations reflect partial information of the model output. Therefore, the model validation of conditional expectations reveals the discrepancy between the partial information of the computational model output and that of the observations. A calibration of the conditional expectations is then carried out to reduce the value of the model validation metric. After that, the model validation metric of the model output is recalculated with the calibrated model parameters, and the result shows that a reduction of the discrepancy in the conditional expectations can help decrease the difference in model output. Finally, several examples are employed to demonstrate the rationality and necessity of the methodology in case of both single validation site and multiple validation sites. - Highlights: • A validation metric of conditional expectations of model output is proposed. • HDMR explains the relationship of conditional expectations and model output. • An improved approach of parameter calibration updates the computational models. • Validation and calibration process are applied at single site and multiple sites. • Validation and calibration process show a superiority over existing methods

  13. Geomechanical Simulation of Bayou Choctaw Strategic Petroleum Reserve - Model Calibration.

    Energy Technology Data Exchange (ETDEWEB)

    Park, Byoung [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    A finite element numerical analysis model has been constructed that consists of a realistic mesh capturing the geometries of the Bayou Choctaw (BC) Strategic Petroleum Reserve (SPR) site and a multi-mechanism deformation (M-D) salt constitutive model using the daily data of actual wellhead pressure and oil-brine interface. The salt creep rate is not uniform in the salt dome, and the creep test data for BC salt are limited. Therefore, model calibration is necessary to simulate the geomechanical behavior of the salt dome. The cavern volumetric closures of SPR caverns calculated from CAVEMAN are used for the field baseline measurement. The structure factor, A2, and transient strain limit factor, K0, in the M-D constitutive model are used for the calibration. The A2 value obtained experimentally from the BC salt and the K0 value of Waste Isolation Pilot Plant (WIPP) salt are used for the baseline values. To adjust the magnitudes of A2 and K0, multiplication factors A2F and K0F are defined, respectively. The A2F and K0F values of the salt dome and salt drawdown skins surrounding each SPR cavern have been determined through a number of back fitting analyses. The cavern volumetric closures calculated from this model correspond to the predictions from CAVEMAN for six SPR caverns. Therefore, this model is able to predict past and future geomechanical behaviors of the salt dome, caverns, caprock, and interbed layers. The geological concerns raised at the BC site will be explained from this model in a follow-up report.

  14. Selection, calibration, and validation of models of tumor growth.

    Science.gov (United States)

    Lima, E A B F; Oden, J T; Hormuth, D A; Yankeelov, T E; Almeida, R C

    2016-11-01

    This paper presents general approaches for addressing some of the most important issues in predictive computational oncology concerned with developing classes of predictive models of tumor growth: first, the process of developing mathematical models of vascular tumors evolving in the complex, heterogeneous macroenvironment of living tissue; second, the selection of the most plausible models among these classes, given relevant observational data; third, the statistical calibration and validation of models in these classes; and finally, the prediction of key Quantities of Interest (QOIs) relevant to patient survival and the effect of various therapies. The most challenging aspect of this endeavor is that all of these issues often involve confounding uncertainties: in observational data, in model parameters, in model selection, and in the features targeted in the prediction. Our approach can be referred to as "model agnostic" in that no single model is advocated; rather, a general approach that explores powerful mixture-theory representations of tissue behavior while accounting for a range of relevant biological factors is presented, which leads to many potentially predictive models. Representative classes are then identified which provide a starting point for the implementation of the Occam Plausibility Algorithm (OPAL), which enables the modeler to select the most plausible models (for given data) and to determine whether the model is a valid tool for predicting tumor growth and morphology (in vivo). All of these approaches account for uncertainties in the model, the observational data, the model parameters, and the target QOI. We demonstrate these processes by comparing a list of models for tumor growth, including reaction-diffusion models, phase-field models, and models with and without mechanical deformation effects, for glioma growth measured in murine experiments. Examples are provided that exhibit quite acceptable predictions of tumor growth in laboratory

  15. Differential Evolution algorithm applied to FSW model calibration

    Science.gov (United States)

    Idagawa, H. S.; Santos, T. F. A.; Ramirez, A. J.

    2014-03-01

    Friction Stir Welding (FSW) is a solid state welding process that can be modelled using a Computational Fluid Dynamics (CFD) approach. These models use adjustable parameters to control the heat transfer and the heat input to the weld. These parameters are used to calibrate the model and they are generally determined using the conventional trial and error approach. Since this method is not very efficient, we used the Differential Evolution (DE) algorithm to successfully determine these parameters. In order to improve the success rate and to reduce the computational cost of the method, this work studied different characteristics of the DE algorithm, such as the evolution strategy, the objective function, the mutation scaling factor and the crossover rate. The DE algorithm was tested using a friction stir weld performed on a UNS S32205 Duplex Stainless Steel.
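A minimal DE/rand/1/bin loop of the kind used for such calibrations (here against a cheap analytic cost function rather than a CFD weld model, and with generic control parameters F and CR) can be sketched as:

```python
import random

def differential_evolution(cost, bounds, pop_size=20, gens=100,
                           f=0.8, cr=0.9, seed=0):
    """Minimal DE/rand/1/bin minimiser for model parameter fitting."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    costs = [cost(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)              # force one mutated gene
            trial = [pop[a][k] + f * (pop[b][k] - pop[c][k])
                     if (rng.random() < cr or k == jrand) else pop[i][k]
                     for k in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            tc = cost(trial)
            if tc <= costs[i]:                      # greedy selection
                pop[i], costs[i] = trial, tc
    best = min(range(pop_size), key=costs.__getitem__)
    return pop[best], costs[best]

# Stand-in for the expensive CFD misfit: a quadratic with optimum (2, -1).
params, misfit = differential_evolution(
    lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2, [(-5, 5), (-5, 5)])
```

In the calibration described by the record, `cost` would run the CFD weld simulation and return the mismatch against measured thermal data, which is why reducing the number of evaluations matters.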

  16. A critical comparison of systematic calibration protocols for activated sludge models: a SWOT analysis.

    Science.gov (United States)

    Sin, Gürkan; Van Hulle, Stijn W H; De Pauw, Dirk J W; van Griensven, Ann; Vanrolleghem, Peter A

    2005-07-01

    Modelling activated sludge systems has gained an increasing momentum after the introduction of activated sludge models (ASMs) in 1987. Application of dynamic models for full-scale systems requires essentially a calibration of the chosen ASM to the case under study. Numerous full-scale model applications have been performed so far which were mostly based on ad hoc approaches and expert knowledge. Further, each modelling study has followed a different calibration approach: e.g. different influent wastewater characterization methods, different kinetic parameter estimation methods, different selection of parameters to be calibrated, different priorities within the calibration steps, etc. In short, there was no standard approach in performing the calibration study, which makes it difficult, if not impossible, to (1) compare different calibrations of ASMs with each other and (2) perform internal quality checks for each calibration study. To address these concerns, systematic calibration protocols have recently been proposed to bring guidance to the modeling of activated sludge systems and in particular to the calibration of full-scale models. In this contribution four existing calibration approaches (BIOMATH, HSG, STOWA and WERF) will be critically discussed using a SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis. It will also be assessed in what way these approaches can be further developed in view of further improving the quality of ASM calibration. In this respect, the potential of automating some steps of the calibration procedure by use of mathematical algorithms is highlighted.

  17. A calibration hierarchy for risk models was defined: from utopia to empirical data.

    Science.gov (United States)

    Van Calster, Ben; Nieboer, Daan; Vergouwe, Yvonne; De Cock, Bavo; Pencina, Michael J; Steyerberg, Ewout W

    2016-06-01

    Calibrated risk models are vital for valid decision support. We define four levels of calibration and describe implications for model development and external validation of predictions. We present results based on simulated data sets. A common definition of calibration is "having an event rate of R% among patients with a predicted risk of R%," which we refer to as "moderate calibration." Weaker forms of calibration only require the average predicted risk (mean calibration) or the average prediction effects (weak calibration) to be correct. "Strong calibration" requires that the event rate equals the predicted risk for every covariate pattern. This implies that the model is fully correct for the validation setting. We argue that this is unrealistic: the model type may be incorrect, the linear predictor is only asymptotically unbiased, and all nonlinear and interaction effects would have to be correctly modeled. In addition, we prove that moderate calibration guarantees nonharmful decision making. Finally, results indicate that a flexible assessment of calibration in small validation data sets is problematic. Strong calibration is desirable for individualized decision support but unrealistic and counterproductive, as it stimulates the development of overly complex models. Model development and external validation should focus on moderate calibration. Copyright © 2016 Elsevier Inc. All rights reserved.
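Moderate calibration, as defined here, can be checked on a validation set by grouping cases on predicted risk and comparing each group's mean prediction with its observed event rate; a sketch with hypothetical data (the decile binning below is a common convention, not prescribed by the paper):

```python
def calibration_table(predicted, outcomes, n_bins=10):
    """Compare mean predicted risk with observed event rate per risk group.

    'Moderate calibration' holds when, among patients with predicted
    risk around R%, about R% actually experience the event.
    """
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(predicted, outcomes):
        bins[min(int(p * n_bins), n_bins - 1)].append((p, y))
    table = []
    for cases in bins:
        if cases:
            mean_pred = sum(p for p, _ in cases) / len(cases)
            event_rate = sum(y for _, y in cases) / len(cases)
            table.append((mean_pred, event_rate, len(cases)))
    return table

# Hypothetical validation set: one well-calibrated risk group of 10 patients.
rows = calibration_table([0.3] * 10, [1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
```

As the record cautions, such grouped assessments become unstable in small validation sets, since each bin's event rate is estimated from few cases.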

  18. Calibrating emergent phenomena in stock markets with agent based models.

    Science.gov (United States)

    Fievet, Lucas; Sornette, Didier

    2018-01-01

    Since the 2008 financial crisis, agent-based models (ABMs), which account for out-of-equilibrium dynamics, heterogeneous preferences, time horizons and strategies, have often been envisioned as the new frontier that could revolutionise and displace the more standard models and tools in economics. However, their adoption and generalisation is drastically hindered by the absence of general reliable operational calibration methods. Here, we start with a different calibration angle that qualifies an ABM for its ability to achieve abnormal trading performance with respect to the buy-and-hold strategy when fed with real financial data. Starting from the common definition of standard minority and majority agents with binary strategies, we prove their equivalence to optimal decision trees. This efficient representation allows us to exhaustively test all meaningful single agent models for their potential anomalous investment performance, which we apply to the NASDAQ Composite index over the last 20 years. We uncover large significant predictive power, with anomalous Sharpe ratio and directional accuracy, in particular during the dotcom bubble and crash and the 2008 financial crisis. A principal component analysis reveals transient convergence between the anomalous minority and majority models. A novel combination of the optimal single-agent models of both classes into a two-agent model leads to remarkably superior investment performance, especially during periods of bubbles and crashes. Our design opens the field of ABMs to the construction of novel types of advanced warning systems for market crises, based on the emergent collective intelligence of ABMs built on carefully designed optimal decision trees that can be reverse engineered from real financial data.

  20. Geomechanical Model Calibration Using Field Measurements for a Petroleum Reserve

    Science.gov (United States)

    Park, Byoung Yoon; Sobolik, Steven R.; Herrick, Courtney G.

    2018-03-01

    A finite element numerical analysis model has been constructed that consists of a mesh that effectively captures the geometries of the Bayou Choctaw (BC) Strategic Petroleum Reserve (SPR) site and a multimechanism deformation (M-D) salt constitutive model using the daily data of actual wellhead pressure and oil-brine interface location. The salt creep rate is not uniform in the salt dome, and the creep test data for BC salt are limited. Therefore, model calibration is necessary to simulate the geomechanical behavior of the salt dome. The cavern volumetric closures of SPR caverns calculated from CAVEMAN are used as the field baseline measurement. The structure factor, A2, and transient strain limit factor, K0, in the M-D constitutive model are used for the calibration. The value of A2, obtained experimentally from BC salt, and the value of K0, obtained from Waste Isolation Pilot Plant salt, are used for the baseline values. To adjust the magnitudes of A2 and K0, multiplication factors A2F and K0F are defined, respectively. The A2F and K0F values of the salt dome and salt drawdown skins surrounding each SPR cavern have been determined through a number of back analyses. The cavern volumetric closures calculated from this model correspond to the predictions from CAVEMAN for six SPR caverns. Therefore, this model is able to predict behaviors of the salt dome, caverns, caprock, and interbed layers. The geotechnical concerns associated with the BC site from this analysis will be explained in a follow-up paper.

  1. Secondary clarifier hybrid model calibration in full scale pulp and paper activated sludge wastewater treatment

    Energy Technology Data Exchange (ETDEWEB)

    Sreckovic, G.; Hall, E.R. [British Columbia Univ., Dept. of Civil Engineering, Vancouver, BC (Canada); Thibault, J. [Laval Univ., Dept. of Chemical Engineering, Ste-Foy, PQ (Canada); Savic, D. [Exeter Univ., School of Engineering, Exeter (United Kingdom)

    1999-05-01

    The issue of proper model calibration techniques applied to mechanistic mathematical models of activated sludge systems was discussed. Such calibrations are complex because of the non-linearity and multi-modal objective functions of the process. This paper presents a hybrid model which was developed using two techniques to model and calibrate the secondary clarifier part of an activated sludge system. Genetic algorithms were used to successfully calibrate the settler mechanistic model, and neural networks were used to reduce the error between the mechanistic model output and real-world data. Results of the modelling study show that the long-term response of a one-dimensional settler mechanistic model calibrated by genetic algorithms and compared to full-scale plant data can be improved by coupling the calibrated mechanistic model to a black-box model, such as a neural network. 11 refs., 2 figs.
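
    As a toy illustration of the genetic-algorithm half of the hybrid scheme, the sketch below calibrates a Vesilind-type settling function against synthetic "plant data"; the model form, parameter values and GA settings are all assumptions for illustration, not the paper's clarifier model:

```python
import math
import random

random.seed(1)

def settling_velocity(conc, v0, k):
    # Vesilind-type settling model: velocity decays exponentially
    # with sludge concentration (illustrative stand-in).
    return v0 * math.exp(-k * conc)

# Synthetic "plant data" generated with known parameters (v0=8, k=0.45).
data = [(c, settling_velocity(c, 8.0, 0.45)) for c in (0.5, 1, 2, 3, 4, 6)]

def fitness(ind):
    # Negative sum of squared errors between model and plant data.
    return -sum((settling_velocity(c, *ind) - v) ** 2 for c, v in data)

# Minimal genetic algorithm: keep an elite, refill with mutated copies.
pop = [[random.uniform(1, 15), random.uniform(0.05, 1.0)] for _ in range(40)]
initial_fit = max(fitness(ind) for ind in pop)
for _ in range(80):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:8]
    pop = elite + [
        [g * (1 + random.gauss(0, 0.08)) for g in random.choice(elite)]
        for _ in range(32)
    ]
best = max(pop, key=fitness)
```

    With elitism, the best individual never degrades between generations, which is the property that makes such GA calibration robust on noisy, multi-modal objective surfaces.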

  2. A joint calibration model for combining predictive distributions

    Directory of Open Access Journals (Sweden)

    Patrizia Agati

    2013-05-01

    Full Text Available In many research fields, as for example in probabilistic weather forecasting, valuable predictive information about a future random phenomenon may come from several, possibly heterogeneous, sources. Forecast combining methods have been developed over the years in order to deal with ensembles of sources: the aim is to combine several predictions in such a way as to improve forecast accuracy and reduce the risk of bad forecasts. In this context, we propose the use of a Bayesian approach to information combining, which consists in treating the predictive probability density functions (pdfs) from the individual ensemble members as data in a Bayesian updating problem. The likelihood function is shown to be proportional to the product of the pdfs, adjusted by a joint “calibration function” describing the predictive skill of the sources (Morris, 1977). In this paper, after rephrasing Morris’ algorithm in a predictive context, we propose to model the calibration function in terms of bias, scale and correlation and to estimate its parameters according to the least squares criterion. The performance of our method is investigated and compared with that of Bayesian Model Averaging (Raftery, 2005) on simulated data.
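
    In the special case of unbiased, uncorrelated Gaussian sources, the likelihood-proportional-to-product rule reduces to the familiar precision-weighted combination; a minimal sketch under that simplifying assumption (the paper's calibration function additionally models bias, scale and correlation):

```python
import numpy as np

def combine_gaussians(means, variances):
    """Combine independent, unbiased Gaussian predictive densities by
    multiplying them (the Bayesian updating step): the product is again
    Gaussian, with precision equal to the sum of the precisions."""
    prec = 1.0 / np.asarray(variances, dtype=float)
    var = 1.0 / prec.sum()
    mean = var * float(np.sum(prec * np.asarray(means, dtype=float)))
    return mean, var

# Two forecasters predict 10 +/- 2 and 14 +/- 2 (variance 4 each); the
# combined forecast is their precision-weighted compromise.
mean, var = combine_gaussians([10.0, 14.0], [4.0, 4.0])
```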

  3. A Solvatochromic Model Calibrates Nitriles’ Vibrational Frequencies to Electrostatic Fields

    Science.gov (United States)

    Bagchi, Sayan; Fried, Stephen D.; Boxer, Steven G.

    2012-01-01

    Electrostatic interactions provide a primary connection between a protein’s three-dimensional structure and its function. Infrared (IR) probes are useful because vibrational frequencies of certain chemical groups, such as nitriles, are linearly sensitive to local electrostatic field, and can serve as a molecular electric field meter. IR spectroscopy has been used to study electrostatic changes or fluctuations in proteins, but measured peak frequencies have not been previously mapped to total electric fields, because of the absence of a field-frequency calibration and the complication of local chemical effects such as H-bonds. We report a solvatochromic model that provides a means to assess the H-bonding status of aromatic nitrile vibrational probes, and calibrates their vibrational frequencies to electrostatic field. The analysis involves correlations between the nitrile’s IR frequency and its 13C chemical shift, whose observation is facilitated by a robust method for introducing isotopes into aromatic nitriles. The method is tested on the model protein Ribonuclease S (RNase S) containing a labeled p-CN-Phe near the active site. Comparison of the measurements in RNase S against solvatochromic data gives an estimate of the average total electrostatic field at this location. The value determined agrees quantitatively with MD simulations, suggesting broader potential for the use of IR probes in the study of protein electrostatics. PMID:22694663
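
    The field-frequency calibration amounts to a linear Stark relation fitted over a solvent series and then inverted for the protein measurement; a sketch with invented numbers (the slope and data points are illustrative, not the paper's calibration):

```python
import numpy as np

# Illustrative (made-up) solvatochromic series: computed electric field
# at the nitrile probe (MV/cm) vs. observed C≡N stretch peak (cm^-1).
fields = np.array([-10.0, -20.0, -35.0, -50.0, -60.0])
freqs = np.array([2228.5, 2228.0, 2227.25, 2226.5, 2226.0])

# Linear Stark calibration: nu = nu0 + slope * F.
slope, nu0 = np.polyfit(fields, freqs, 1)

def field_from_frequency(nu):
    """Invert the calibration line: read the electrostatic field off a
    measured nitrile peak frequency ('molecular electric field meter')."""
    return (nu - nu0) / slope
```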

  4. Calibration Modeling Methodology to Optimize Performance for Low Range Applications

    Science.gov (United States)

    McCollum, Raymond A.; Commo, Sean A.; Parker, Peter A.

    2010-01-01

    Calibration is a vital process in characterizing the performance of an instrument in an application environment, and it seeks to obtain acceptable accuracy over the entire design range. Often, project requirements specify a maximum total measurement uncertainty expressed as a percent of full scale. However, in some applications enhanced performance is sought at the low end of the range, and expressing accuracy as a percent of reading should then be considered as a modeling strategy. For example, it is common to use a force balance in multiple facilities or regimes, often well below its designed full-scale capacity. This paper presents a general statistical methodology for optimizing calibration mathematical models based on a percent-of-reading accuracy requirement, which has broad application in all types of transducer applications where low range performance is required. A case study illustrates the proposed methodology for the Mars Entry Atmospheric Data System, which employs seven strain-gage based pressure transducers mounted on the heatshield of the Mars Science Laboratory mission.
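
    The percent-of-reading idea can be illustrated with weighted least squares: weighting each residual by the reciprocal of the reading turns an ordinary polynomial calibration fit into one that controls relative rather than absolute error. A sketch with invented data (not the Mars Entry Atmospheric Data System transducer calibration):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic calibration points spanning two decades of range, with
# proportional ("percent of reading") noise; all values illustrative.
reading = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
applied = (0.98 * reading - 3e-4 * reading**2) * (1 + 0.01 * rng.standard_normal(7))

# Weighting each residual by 1/applied makes the quadratic calibration
# fit minimize *relative* error, sharpening the low end of the range.
coef_w = np.polyfit(reading, applied, 2, w=1.0 / applied)
coef_u = np.polyfit(reading, applied, 2)  # unweighted: percent-of-full-scale

def rel_sse(coef):
    # Sum of squared relative errors of a candidate calibration curve.
    return float(np.sum(((np.polyval(coef, reading) - applied) / applied) ** 2))
```

    By construction the weighted fit minimizes the relative-error objective, so its `rel_sse` can never exceed that of the unweighted fit.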

  5. Preliminary report on NTS spectral gamma logging and calibration models

    International Nuclear Information System (INIS)

    Mathews, M.A.; Warren, R.G.; Garcia, S.R.; Lavelle, M.J.

    1985-01-01

    Facilities are now available at the Nevada Test Site (NTS) in Building 2201 to calibrate spectral gamma logging equipment in environments of low radioactivity. Such environments are routinely encountered during logging of holes at the NTS. Four calibration models were delivered to Building 2201 in January 1985. Each model, or test pit, consists of a stone block with a 12-inch diameter cored borehole. Preliminary radioelement values from the core for the test pits range from 0.58 to 3.83% potassium (K), 0.48 to 29.11 ppm thorium (Th), and 0.62 to 40.42 ppm uranium (U). Two satellite holes, U19ab #2 and U19ab #3, were logged during the winter of 1984-1985. The response of these logs correlates with the contents of the naturally radioactive elements K, Th, and U determined in samples from petrologic zones that occur within these holes. Based on these comparisons, the spectral gamma log aids in the recognition and mapping of subsurface stratigraphic units and alteration features associated with unusual concentrations of these radioactive elements, such as clay-rich zones.

  6. The Scandinavian regional model

    DEFF Research Database (Denmark)

    Torfing, Jacob; Lidström, Anders; Røiseland, Asbjørn

    2015-01-01

    This article maps how the sub-national regional levels of governance in Denmark, Norway and Sweden have changed from a high degree of institutional convergence to a pattern of institutional divergence. It analyses the similarities and differences in the changes in regional governance and discusses...

  7. Non-linear calibration models for near infrared spectroscopy

    DEFF Research Database (Denmark)

    Ni, Wangdong; Nørgaard, Lars; Mørup, Morten

    2014-01-01

    …least squares support vector machines (LS-SVM), relevance vector machines (RVM), Gaussian process regression (GPR), artificial neural networks (ANN), and Bayesian ANN (BANN). In this comparison, partial least squares (PLS) regression is used as a linear benchmark, while the relationship of the methods is considered in terms of traditional calibration by ridge regression (RR). The performance of the different methods is demonstrated by their practical applications using three real-life near infrared (NIR) data sets. Different aspects of the various approaches, including computational time, model interpretability, potential over-fitting using the non-linear models on linear problems, robustness to small or medium sample sets, and robustness to pre-processing, are discussed. The results suggest that GPR and BANN are powerful and promising methods for handling linear as well as nonlinear systems, even when the data sets are moderately small. The LS-SVM…

  8. A New Perspective for the Calibration of Computational Predictor Models.

    Energy Technology Data Exchange (ETDEWEB)

    Crespo, Luis Guillermo

    2014-11-01

    This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty along with the computational model constitute a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value; instead, it is a description of the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain (i.e., roll-up and extrapolation).
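
    The IPM formulation described, minimal spread subject to containing all observations, is a linear program when the center is linear and the half-width constant; a sketch on made-up data:

```python
import numpy as np
from scipy.optimize import linprog

# Toy observations (made up): scalar state x vs. measured response y.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.1, 1.3, 1.9, 3.2, 3.8, 5.1])

# Interval Predictor Model with linear center a*x + b and constant
# half-width w: minimize the spread w subject to every observation
# lying inside [a*x + b - w, a*x + b + w]. This is a linear program
# in the decision variables (a, b, w).
ones = np.ones_like(x)
A_ub = np.vstack([
    np.column_stack([-x, -ones, -ones]),  # y_i <= a*x_i + b + w
    np.column_stack([x, ones, -ones]),    # a*x_i + b - w <= y_i
])
b_ub = np.concatenate([-y, y])
res = linprog(c=[0.0, 0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (None, None), (0.0, None)])
a, b, w = res.x
```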

  9. Modified calibration protocol evaluated in a model-based testing of SBR flexibility

    DEFF Research Database (Denmark)

    Corominas, Lluís; Sin, Gürkan; Puig, Sebastià

    2011-01-01

    The purpose of this paper is to refine the BIOMATH calibration protocol for SBR systems, in particular to develop a pragmatic calibration protocol that takes advantage of SBR information-rich data, defines a simulation strategy to obtain proper initial conditions for model calibration and provide…

  10. Analysis of variation in calibration curves for Kodak XV radiographic film using model-based parameters.

    Science.gov (United States)

    Hsu, Shu-Hui; Kulasekere, Ravi; Roberson, Peter L

    2010-08-05

    Film calibration is time-consuming work when dose accuracy is essential while working in a range of photon scatter environments. This study uses the single-target single-hit model of film response to fit the calibration curves as a function of calibration method, processor condition, field size and depth. Kodak XV film was irradiated perpendicular to the beam axis in a solid water phantom. Standard calibration films (one dose point per film) were irradiated at 90 cm source-to-surface distance (SSD) for various doses (16-128 cGy), depths (0.2, 0.5, 1.5, 5, 10 cm) and field sizes (5 × 5, 10 × 10 and 20 × 20 cm²). The 8-field calibration method (eight dose points per film) was used as a reference for each experiment, taken at 95 cm SSD and 5 cm depth. The delivered doses were measured using an Attix parallel plate chamber for improved accuracy of dose estimation in the buildup region. Three fitting methods with one to three dose points per calibration curve were investigated for the field sizes of 5 × 5, 10 × 10 and 20 × 20 cm². The inter-day variations of the model parameters (background, saturation and slope) were 1.8%, 5.7%, and 7.7% (1σ) using the 8-field method. The saturation parameter ratio of standard to 8-field curves was 1.083 ± 0.005. The slope parameter ratio of standard to 8-field curves ranged from 0.99 to 1.05, depending on field size and depth. The slope parameter ratio decreases with increasing depth below 0.5 cm for the three field sizes. It increases with increasing depths above 0.5 cm. A calibration curve with one to three dose points fitted with the model is possible with 2% accuracy in film dosimetry for various irradiation conditions. The proposed fitting methods may reduce workload while providing energy dependence correction in radiographic film dosimetry. This study is limited to radiographic XV film with a Lumisys scanner.
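
    The single-target single-hit response can be fitted directly with nonlinear least squares; a sketch on synthetic, noiseless "film" data with invented parameter values (not Kodak XV measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def single_hit(dose, background, saturation, slope):
    """Single-target single-hit response: net optical density rises
    exponentially toward saturation with delivered dose."""
    return background + saturation * (1.0 - np.exp(-slope * dose))

# Synthetic 8-point calibration film (doses in cGy); the parameter
# values are illustrative, not measured Kodak XV numbers.
dose = np.array([0.0, 16.0, 32.0, 48.0, 64.0, 80.0, 96.0, 128.0])
od = single_hit(dose, 0.2, 2.5, 0.012)

params, _ = curve_fit(single_hit, dose, od, p0=[0.1, 2.0, 0.01])
```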

  11. Calibrating a forest landscape model to simulate frequent fire in Mediterranean-type shrublands

    Science.gov (United States)

    Syphard, A.D.; Yang, J.; Franklin, J.; He, H.S.; Keeley, J.E.

    2007-01-01

    In Mediterranean-type ecosystems (MTEs), fire disturbance influences the distribution of most plant communities, and altered fire regimes may be more important than climate factors in shaping future MTE vegetation dynamics. Models that simulate the high-frequency fire and post-fire response strategies characteristic of these regions will be important tools for evaluating potential landscape change scenarios. However, few existing models have been designed to simulate these properties over long time frames and broad spatial scales. We refined a landscape disturbance and succession (LANDIS) model to operate on an annual time step and to simulate altered fire regimes in a southern California Mediterranean landscape. After developing a comprehensive set of spatial and non-spatial variables and parameters, we calibrated the model to simulate very high fire frequencies and evaluated the simulations under several parameter scenarios representing hypotheses about system dynamics. The goal was to ensure that observed model behavior would simulate the specified fire regime parameters, and that the predictions were reasonable based on current understanding of community dynamics in the region. After calibration, the two dominant plant functional types responded realistically to different fire regime scenarios. Therefore, this model offers a new alternative for simulating altered fire regimes in MTE landscapes.

  12. Root zone water quality model (RZWQM2): Model use, calibration and validation

    Science.gov (United States)

    Ma, Liwang; Ahuja, Lajpat; Nolan, B.T.; Malone, Robert; Trout, Thomas; Qi, Z.

    2012-01-01

    The Root Zone Water Quality Model (RZWQM2) has been used widely for simulating agricultural management effects on crop production and soil and water quality. Although it is a one-dimensional model, it has many desirable features for the modeling community. This article outlines the principles of calibrating the model component by component with one or more datasets and validating the model with independent datasets. Users should consult the RZWQM2 user manual distributed along with the model and a more detailed protocol on how to calibrate RZWQM2 provided in a book chapter. Two case studies (or examples) are included in this article. One is from an irrigated maize study in Colorado to illustrate the use of field and laboratory measured soil hydraulic properties on simulated soil water and crop production. It also demonstrates the interaction between soil and plant parameters in simulated plant responses to water stresses. The other is from a maize-soybean rotation study in Iowa to show a manual calibration of the model for crop yield, soil water, and N leaching in tile-drained soils. Although the commonly used trial-and-error calibration method works well for experienced users, as shown in the second example, an automated calibration procedure is more objective, as shown in the first example. Furthermore, the incorporation of the Parameter Estimation Software (PEST) into RZWQM2 made the calibration of the model more efficient than a grid (ordered) search of model parameters. In addition, PEST provides sensitivity and uncertainty analyses that should help users in selecting the right parameters to calibrate.

  13. Simultaneous calibration of surface flow and baseflow simulations: A revisit of the SWAT model calibration framework

    Science.gov (United States)

    Accurate analysis of water flow pathways from rainfall to streams is critical for simulating water use, climate change impact, and contaminant transport. In this study, we developed a new scheme to simultaneously calibrate surface flow (SF) and baseflow (BF) simulations of Soil and Water Assessment ...

  14. Calibration of a DG–model for fluorescence microscopy

    DEFF Research Database (Denmark)

    Hansen, Christian Valdemar

    It is well known that diseases like Alzheimer's, Parkinson's, Chorea Huntington and arteriosclerosis are caused by a jam in intracellular membrane traffic [2]. Hence, to improve treatment, a quantitative analysis of intracellular transport is essential. Fluorescence loss in photobleaching (FLIP) is an important and widely used microscopy method for visualization of molecular transport processes in living cells. Thus, the motivation for making an automated reliable analysis of the image data is high. In this contribution, we present and comment on the calibration of a Discontinuous–Galerkin simulator [3, 4] on segmented cell images. The cell geometry is extracted from FLIP images using the Chan–Vese active contours algorithm [1] while the DG simulator is implemented in FEniCS [5]. Simulated FLIP sequences based on optimal parameters from the PDE model are presented, with an overall goal…

  15. The design and realization of calibration apparatus for measuring the concentration of radon in three models

    Energy Technology Data Exchange (ETDEWEB)

    Huiping, Guo [The Second Artillery Engineering College, Xi' an (China)

    2007-06-15

    To satisfy calibration requirements for radon measurement in the laboratory, a calibration apparatus for radon activity measurement was designed and realized. The calibration apparatus can auto-control and auto-measure in three modes: sequential mode, pulse mode, and constant mode. The stability and reliability of the calibration apparatus were tested under the three modes. The experimental results show that the apparatus can provide an adjustable and steady radon activity concentration environment for research on radon and its progeny and for the calibration of its measurement. (authors)

  16. Hydrological processes and model representation: impact of soft data on calibration

    Science.gov (United States)

    J.G. Arnold; M.A. Youssef; H. Yen; M.J. White; A.Y. Sheshukov; A.M. Sadeghi; D.N. Moriasi; J.L. Steiner; Devendra Amatya; R.W. Skaggs; E.B. Haney; J. Jeong; M. Arabi; P.H. Gowda

    2015-01-01

    Hydrologic and water quality models are increasingly used to determine the environmental impacts of climate variability and land management. Due to differing model objectives and differences in monitored data, there are currently no universally accepted procedures for model calibration and validation in the literature. In an effort to develop accepted model calibration...

  17. NSLS-II: Nonlinear Model Calibration for Synchrotrons

    Energy Technology Data Exchange (ETDEWEB)

    Bengtsson, J.

    2010-10-08

    This tech note is essentially a summary of a lecture delivered to the Acc. Phys. Journal Club in April 2010. However, since the estimated accuracy of these methods has been naive and misleading in the field of particle accelerators, i.e., it ignores the impact of noise, we elaborate on this in some detail. A prerequisite for a calibration of the nonlinear Hamiltonian is that the quadratic part has been understood, i.e., that the linear optics for the real accelerator has been calibrated. For synchrotron light source operations, this problem has been solved by the interactive LOCO technique/tool (Linear Optics from Closed Orbits). Before that, in the context of hadron accelerators, it was done by signal processing of turn-by-turn BPM data. We have outlined how to make a basic calibration of the nonlinear model for synchrotrons. In particular, we have shown how this was done for LEAR, CERN (antiprotons) in the mid-80s. Specifically, our accuracy for frequency estimation was ~1 x 10^-5 for 1024 turns (to calibrate the linear optics) and ~1 x 10^-4 for 256 turns for the tune footprint and betatron spectrum. For comparison, the estimated tune footprint for stable beam for NSLS-II is ~0.1, and the transverse damping time is ~20 msec, i.e., ~4,000 turns. There is no fundamental difference between antiprotons, protons, and electrons in this case. Because the estimated accuracy of these methods in the field of particle accelerators has been naive, i.e., ignoring the impact of noise, we have also derived an explicit formula, from first principles, for a quantitative statement. For, e.g., N = 256 and 5% noise we obtain δν ~ 1 x 10^-5. A comparison with the state of the art in, e.g., telecom and electrical engineering since the 60s is quite revealing: for example, the Kalman filter (1960), crucial for the Ranger, Mariner, and Apollo (including the Lunar Module) missions during the 60s, or Claude Shannon et al. since the 40s for that matter.


  19. Regionalization of SWAT Model Parameters for Use in Ungauged Watersheds

    Directory of Open Access Journals (Sweden)

    Indrajeet Chaubey

    2010-11-01

    Full Text Available There has been a steady shift towards modeling and model-based approaches as primary methods of assessing watershed response to hydrologic inputs and land management, and of quantifying watershed-wide best management practice (BMP) effectiveness. Watershed models often require some degree of calibration and validation to achieve adequate watershed and therefore BMP representation. This is, however, only possible for gauged watersheds. There are many watersheds for which little or no monitoring data are available, raising the question of whether model parameters obtained through calibration of gauged watersheds can be extended and/or generalized to ungauged watersheds within the same region. This study explored the possibility of developing regionalized model parameter sets for use in ungauged watersheds. The study evaluated two regionalization methods, global averaging and regression-based parameters, on the SWAT model using data from priority watersheds in Arkansas. The resulting parameters were tested and model performance determined on three gauged watersheds. Nash-Sutcliffe efficiencies (NS) for stream flow obtained using regression-based parameters (0.53–0.83) compared well with corresponding values obtained through model calibration (0.45–0.90). Model performance obtained using global averaged parameter values was also generally acceptable (0.4 ≤ NS ≤ 0.75). Results from this study indicate that regionalized parameter sets for the SWAT model can be obtained and used for making satisfactory hydrologic response predictions in ungauged watersheds.
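
    The regression-based regionalization can be sketched as an ordinary least-squares fit of a calibrated parameter to catchment attributes, transferred to an ungauged site; all numbers below are invented, not the Arkansas data:

```python
import numpy as np

# Illustrative only: a calibrated model parameter from eight gauged
# watersheds alongside two catchment attributes (forest fraction,
# mean slope in %). These are made-up numbers, not the study's data.
attrs = np.array([[0.10, 2.0], [0.25, 3.5], [0.40, 1.5], [0.55, 4.0],
                  [0.60, 2.5], [0.75, 5.0], [0.85, 3.0], [0.90, 4.5]])
param = np.array([78.0, 74.0, 71.0, 69.0, 66.0, 62.0, 60.0, 58.0])

# Regression-based regionalization: fit parameter ~ attributes on the
# gauged set, then transfer the relationship to an ungauged watershed.
X = np.column_stack([np.ones(len(param)), attrs])
beta, *_ = np.linalg.lstsq(X, param, rcond=None)

def regionalized(forest_frac, slope_pct):
    # Predict the parameter for an ungauged watershed's attributes.
    return float(beta @ np.array([1.0, forest_frac, slope_pct]))
```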

  20. Calibration of a distributed hydrology and land surface model using energy flux measurements

    DEFF Research Database (Denmark)

    Larsen, Morten Andreas Dahl; Refsgaard, Jens Christian; Jensen, Karsten H.

    2016-01-01

    In this study we develop and test a calibration approach on a spatially distributed groundwater-surface water catchment model (MIKE SHE) coupled to a land surface model component, with particular focus on the water and energy fluxes. The model is calibrated against time series of eddy flux measurements…

  1. Modelling and calibration of a ring-shaped electrostatic meter

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Jianyong [University of Teesside, Middlesbrough TS1 3BA (United Kingdom); Zhou Bin; Xu Chuanlong; Wang Shimin, E-mail: zhoubinde1980@gmail.co [Southeast University, Sipailou 2, Nanjing 210096 (China)

    2009-02-01

    Ring-shaped electrostatic flow meters can provide very useful information on pneumatically transported air-solids mixtures. Meters of this type are popular for measuring and controlling the pulverized coal flow distribution among conveyors leading to burners in coal-fired power stations, and they have also been used for research purposes, e.g., for investigating the electrification mechanism of air-solids two-phase flow. In this paper, the finite element method (FEM) is employed to analyze the characteristics of ring-shaped electrostatic meters, and a mathematical model has been developed to express the relationship between the meter's voltage output and the motion of charged particles in the sensing volume. The theoretical analysis and the test results using a belt rig demonstrate that the output of the meter depends upon many parameters, including the characteristics of the conditioning circuitry, the particle velocity vector, the amount and rate of change of the charge carried by the particles, and the locations of the particles. This paper also introduces a method to optimize the theoretical model via calibration.

  2. Hanford statewide groundwater flow and transport model calibration report

    International Nuclear Information System (INIS)

    Law, A.; Panday, S.; Denslow, C.; Fecht, K.; Knepp, A.

    1996-04-01

    This report presents the results of the development and calibration of a three-dimensional, finite element model (VAM3DCG) for the unconfined groundwater flow system at the Hanford Site. This flow system is the largest radioactively contaminated groundwater system in the United States. Eleven groundwater plumes have been identified, containing organics, inorganics, and radionuclides. Because groundwater from the unconfined groundwater system flows into the Columbia River, the development of a groundwater flow model is essential to the long-term management of these plumes. Cost-effective decision making requires the capability to predict the effectiveness of various remediation approaches. Some of the alternatives available to remediate groundwater include: pumping contaminated water from the ground for treatment, with reinjection or transfer to other disposal facilities; containment of plumes by means of impermeable walls, physical barriers, and hydraulic control measures; and, in some cases, management of groundwater via planned recharge and withdrawals. Implementation of these methods requires a knowledge of the groundwater flow system and how it responds to remedial actions.

  3. Energy calibration for LaBr3(Ce) scintillator detector in the region of 1-10 MeV

    International Nuclear Information System (INIS)

    Zhang Jianhua; Zhu Chengsheng; Zeng Jun; Ding Ge; Xiang Qingpei; Liu Zhao; Yang Chaowen

    2013-01-01

    Background: The LaBr3(Ce) detector has played an important role in detecting explosives, contraband and landmines because of its high γ detection efficiency, good energy resolution, etc. Purpose: To calibrate the detector over a wide energy region. Methods: The gamma spectra of NH4Cl and C3H6N6 induced by a 252Cf neutron source were measured. Results: By comparing their gamma spectra, characteristic gamma lines could be located and the energy calibration curve was obtained. Conclusions: Radionuclides can be identified using the calibration curve fitted with a quadratic or cubic polynomial. (authors)
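
    Once characteristic lines are located, the calibration curve is a polynomial fit of energy versus channel; a sketch using known H and Cl neutron-capture line energies with invented channel centroids:

```python
import numpy as np

# Known capture gamma lines (MeV) seen with an NH4Cl sample under 252Cf
# irradiation: H capture at 2.223 MeV and prominent Cl capture lines.
# The channel centroids below are illustrative, not measured values.
channels = np.array([410.0, 1190.0, 1420.0, 1600.0])
energies = np.array([2.223, 6.111, 7.414, 8.579])

# Quadratic energy calibration E(channel), as used in the paper.
cal = np.polyfit(channels, energies, 2)

def energy(channel):
    # Evaluate the calibration curve at an arbitrary channel.
    return float(np.polyval(cal, channel))
```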

  4. Vegetation root zone storage and rooting depth, derived from local calibration of a global hydrological model

    Science.gov (United States)

    van der Ent, R.; Van Beek, R.; Sutanudjaja, E.; Wang-Erlandsson, L.; Hessels, T.; Bastiaanssen, W.; Bierkens, M. F.

    2017-12-01

    The storage and dynamics of water in the root zone control many important hydrological processes such as saturation excess overland flow, interflow, recharge, capillary rise, soil evaporation and transpiration. These processes are parameterized in hydrological models or land-surface schemes and the effect on runoff prediction can be large. Root zone parameters in global hydrological models are very uncertain as they cannot be measured directly at the scale on which these models operate. In this paper we calibrate the global hydrological model PCR-GLOBWB using a state-of-the-art ensemble of evaporation fields derived by solving the energy balance for satellite observations. We focus our calibration on the root zone parameters of PCR-GLOBWB and derive spatial patterns of maximum root zone storage. We find these patterns to correspond well with previous research. The parameterization of our model allows for the conversion of maximum root zone storage to root zone depth and we find that these correspond quite well to the point observations where available. We conclude that climate and soil type should be taken into account when regionalizing measured root depth for a certain vegetation type. We equally find that using evaporation rather than discharge better allows for local adjustment of root zone parameters within a basin and thus provides orthogonal data to diagnose and optimize hydrological models and land surface schemes.
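
    The conversion from calibrated maximum root zone storage to rooting depth mentioned above can be sketched under the usual plant-available-water assumption (the soil constants below are illustrative):

```python
def root_zone_depth(storage_mm, theta_fc, theta_wp):
    """Convert a calibrated maximum root zone storage (mm of water) to
    an effective rooting depth (m), assuming the storage equals the
    plant-available water capacity (field capacity minus wilting point)
    integrated over the rooting depth."""
    available = theta_fc - theta_wp          # m3 of water per m3 of soil
    return storage_mm / 1000.0 / available   # mm -> m of water -> m of soil

# e.g. 150 mm of storage in a loam-like soil (theta_fc=0.30, theta_wp=0.15)
depth = root_zone_depth(150.0, 0.30, 0.15)
```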

  5. Streamflow characteristics from modelled runoff time series: Importance of calibration criteria selection

    Science.gov (United States)

    Poole, Sandra; Vis, Marc; Knight, Rodney; Seibert, Jan

    2017-01-01

    Ecologically relevant streamflow characteristics (SFCs) of ungauged catchments are often estimated from simulated runoff of hydrologic models that were originally calibrated on gauged catchments. However, SFC estimates of the gauged donor catchments and subsequently the ungauged catchments can be substantially uncertain when models are calibrated using traditional approaches based on optimization of statistical performance metrics (e.g., Nash–Sutcliffe model efficiency). An improved calibration strategy for gauged catchments is therefore crucial to help reduce the uncertainties of estimated SFCs for ungauged catchments. The aim of this study was to improve SFC estimates from modeled runoff time series in gauged catchments by explicitly including one or several SFCs in the calibration process. Different types of objective functions were defined consisting of the Nash–Sutcliffe model efficiency, single SFCs, or combinations thereof. We calibrated a bucket-type runoff model (HBV – Hydrologiska Byråns Vattenavdelning – model) for 25 catchments in the Tennessee River basin and evaluated the proposed calibration approach on 13 ecologically relevant SFCs representing major flow regime components and different flow conditions. While the model generally tended to underestimate the tested SFCs related to mean and high-flow conditions, SFCs related to low flow were generally overestimated. The highest estimation accuracies were achieved by a SFC-specific model calibration. Estimates of SFCs not included in the calibration process were of similar quality when comparing a multi-SFC calibration approach to a traditional model efficiency calibration. For practical applications, this implies that SFCs should preferably be estimated from targeted runoff model calibration, and modeled estimates need to be carefully interpreted.
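    The idea of folding an SFC into the calibration objective can be illustrated with a minimal sketch. The NSE term and the low-flow statistic (a Q95-style quantile) below are generic stand-ins, not the 13 SFCs used in the study:

```python
def nse(obs, sim):
    # Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean flow.
    mean_obs = sum(obs) / len(obs)
    err = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - err / var

def q95(flows):
    # Low-flow characteristic: flow exceeded 95% of the time (5th percentile).
    return sorted(flows)[int(0.05 * (len(flows) - 1))]

def composite_objective(obs, sim, weight=0.5):
    # Blend overall fit (NSE) with a low-flow SFC error, both on a
    # "1 is perfect" scale, so a single maximisation target remains.
    sfc_score = 1.0 - abs(q95(sim) - q95(obs)) / q95(obs)
    return weight * nse(obs, sim) + (1.0 - weight) * sfc_score
```

Setting `weight=1.0` recovers a traditional efficiency-only calibration; lower weights pull the optimiser toward reproducing the chosen SFC.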

  6. Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers

    Science.gov (United States)

    Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.

    2010-01-01

    This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
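    The augmentation idea, a parametric fit plus a fraction of a nonparametric fit to its residuals, can be sketched as follows. A simple moving average stands in for the locally parametric smoother of Mays, Birch, and Starnes, and the mixing fraction `lam` is illustrative:

```python
import numpy as np

def moving_average(y, window=5):
    # Crude stand-in for the nonparametric (locally parametric) smoother.
    kernel = np.ones(window) / window
    return np.convolve(y, kernel, mode="same")

def model_robust_fit(x, y, degree=1, lam=0.5, window=5):
    # Step 1: predetermined parametric fit (here a low-order polynomial).
    parametric = np.poly1d(np.polyfit(x, y, degree))(x)
    # Step 2: smooth the residuals of that fit nonparametrically.
    residual_fit = moving_average(y - parametric, window)
    # Step 3: augment by a fraction lam (0 = pure parametric, 1 = full residual fit).
    return parametric + lam * residual_fit
```

With `lam` between 0 and 1, the combined fit captures systematic structure the parametric model misses while retaining its stability.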

  7. Calibration of a complex activated sludge model for the full-scale wastewater treatment plant

    OpenAIRE

    Liwarska-Bizukojc, Ewa; Olejnik, Dorota; Biernacki, Rafal; Ledakowicz, Stanislaw

    2011-01-01

    In this study, the results of the calibration of the complex activated sludge model implemented in BioWin software for the full-scale wastewater treatment plant are presented. Within the calibration of the model, sensitivity analysis of its parameters and the fractions of carbonaceous substrate were performed. In the steady-state and dynamic calibrations, a successful agreement between the measured and simulated values of the output variables was achieved. Sensitivity analysis revealed that u...

  8. Generator Dynamic Model Validation and Parameter Calibration Using Phasor Measurements at the Point of Connection

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Zhenyu; Du, Pengwei; Kosterev, Dmitry; Yang, Steve

    2013-05-01

    Disturbance data recorded by phasor measurement units (PMUs) offer opportunities to improve the integrity of dynamic models. However, manually tuning parameters through play-back events demands significant effort and engineering experience. In this paper, a calibration method using the extended Kalman filter (EKF) technique is proposed. The formulation of the EKF with parameter calibration is discussed. Case studies are presented to demonstrate its validity. The proposed calibration method is cost-effective and complementary to traditional equipment testing for improving dynamic model quality.
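    The joint (augmented-state) EKF idea behind such parameter calibration can be sketched on a toy scalar system. The dynamics, noise levels, and all numbers below are illustrative stand-ins, not the generator formulation in the paper:

```python
import numpy as np

# Toy system standing in for a dynamic model: x[k+1] = a*x[k] + noise,
# observed through noisy measurements. The goal is to recover 'a' online.
rng = np.random.default_rng(1)
a_true = 0.8
x = 0.0
measurements = []
for _ in range(400):
    x = a_true * x + rng.normal(0.0, 0.3)   # process noise keeps 'a' identifiable
    measurements.append(x + rng.normal(0.0, 0.05))

# Joint EKF: augmented state z = [x, a]; the parameter follows a tiny
# random walk so the filter can adjust it.
z = np.array([0.0, 0.4])                    # deliberately wrong initial a
P = np.diag([1.0, 1.0])
Q = np.diag([0.3 ** 2, 1e-6])
R = 0.05 ** 2
H = np.array([[1.0, 0.0]])
for y in measurements:
    # Predict: f(z) = [a*x, a], Jacobian evaluated at the current estimate.
    F = np.array([[z[1], z[0]], [0.0, 1.0]])
    z = np.array([z[1] * z[0], z[1]])
    P = F @ P @ F.T + Q
    # Update with the scalar measurement.
    S = (H @ P @ H.T)[0, 0] + R
    K = (P @ H.T) / S                       # Kalman gain, shape (2, 1)
    z = z + K[:, 0] * (y - z[0])
    P = (np.eye(2) - K @ H) @ P

a_estimate = float(z[1])
```

The same augmentation pattern extends to multiple uncertain parameters by widening the state vector and Jacobian.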

  9. HYDROGRAV - Hydrological model calibration and terrestrial water storage monitoring from GRACE gravimetry and satellite altimetry, First results

    DEFF Research Database (Denmark)

    Andersen, O.B.; Krogh, P.E.; Michailovsky, C.

    2008-01-01

    Space-borne and ground-based time-lapse gravity observations provide new data for water balance monitoring and hydrological model calibration. The HYDROGRAV project (www.hydrograv.dk) will explore the utility of time-lapse gravity surveys for hydrological model calibration and terrestrial water storage monitoring. Merging remote sensing data from GRACE with other remote sensing data, such as satellite altimetry, and with ground-based observations is important for hydrological model calibration and water balance monitoring of large regions, and can serve as either a supplement or a vital... Terrestrial water storage change from 2002 to 2008, along with in-situ gravity time-lapse observations and radar altimetry monitoring of surface water for the southern Africa river basins, will be presented.

  10. A new method to calibrate Lagrangian model with ASAR images for oil slick trajectory.

    Science.gov (United States)

    Tian, Siyu; Huang, Xiaoxia; Li, Hongga

    2017-03-15

    Since Lagrangian model coefficients vary with conditions, it is necessary to calibrate the model to obtain the optimal coefficient combination for a specific oil spill accident. This paper proposes a new method to calibrate a Lagrangian model with a time series of Envisat ASAR images. Oil slicks extracted from the time series of images form the detected trajectory of the oil slick. The Lagrangian model is calibrated by minimizing the difference between the simulated trajectory and the detected trajectory. The mean center position distance difference (MCPD) and the rotation difference (RD) of the oil slicks' or particles' standard deviational ellipses (SDEs) are calculated as two evaluation measures. These two parameters are used to evaluate the performance of the Lagrangian transport model with different coefficient combinations. The method is applied to the Penglai 19-3 oil spill accident. The simulation result with the calibrated model agrees well with related satellite observations, suggesting that the new method is effective for calibrating Lagrangian models. Copyright © 2016 Elsevier Ltd. All rights reserved.
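    The two evaluation measures can be sketched directly: the distance between mean centers of two point sets, and the rotation between their standard deviational ellipses (principal axes of the point-cloud covariance). The elliptical "slick" below is synthetic, standing in for pixels extracted from ASAR images:

```python
import numpy as np

def mean_center(points):
    return points.mean(axis=0)

def sde_orientation_deg(points):
    # Orientation of the standard deviational ellipse: the angle of the
    # principal eigenvector of the point-cloud covariance matrix.
    centered = points - points.mean(axis=0)
    cov = np.cov(centered.T)
    vals, vecs = np.linalg.eigh(cov)
    major = vecs[:, np.argmax(vals)]
    return np.degrees(np.arctan2(major[1], major[0])) % 180.0

# Synthetic "slick": points on an elongated ellipse, plus the same shape
# rotated 30 degrees and shifted, standing in for a later observation.
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
slick_a = np.column_stack([3.0 * np.cos(t), np.sin(t)])
theta = np.radians(30.0)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta), np.cos(theta)]])
slick_b = slick_a @ rot.T + np.array([5.0, 2.0])

mcpd = float(np.linalg.norm(mean_center(slick_b) - mean_center(slick_a)))
rd = sde_orientation_deg(slick_b) - sde_orientation_deg(slick_a)
rd = ((rd + 90.0) % 180.0) - 90.0   # fold into (-90, 90]: axes are unoriented
```

Calibration then searches coefficient combinations that minimise MCPD and RD between simulated particle clouds and detected slicks.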

  11. Calibration techniques for the hot wire anemometer in a low velocity region

    International Nuclear Information System (INIS)

    Fujimura, Kaoru; Kawamura, Hiroshi

    1980-03-01

    In connection with experiments on coolant flow in the core of the multi-purpose VHTR, a low-velocity calibration wind tunnel was built, and techniques for the hot wire anemometer in air were investigated. The results are as follows. 1) A technique using the frequency of the von Karman vortex street is not recommended because of the irregular mode in the low-velocity region. 2) A Pitot tube is valid only for flow velocities above 1 m/s. 3) The thermal trace technique is suitable over a relatively wide range of velocities, provided the velocity defect in the wake is compensated for. For flow velocities above 1 m/s, the thermal trace technique is consistent with the Pitot tube method. (author)

  12. Calibration models for density borehole logging - construction report

    International Nuclear Information System (INIS)

    Engelmann, R.E.; Lewis, R.E.; Stromswold, D.C.

    1995-10-01

    Two machined blocks of magnesium and aluminum alloys form the basis for Hanford's density models. The blocks provide known densities of 1.780 ± 0.002 g/cm³ and 2.804 ± 0.002 g/cm³ for calibrating borehole logging tools that measure density based on gamma-ray scattering from a source in the tool. Each block is approximately 33 x 58 x 91 cm (13 x 23 x 36 in.) with cylindrical grooves cut into the sides of the blocks to hold steel casings of inner diameter 15 cm (6 in.) and 20 cm (8 in.). Spacers that can be inserted between the blocks and casings can create air gaps of thickness 0.64, 1.3, 1.9, and 2.5 cm (0.25, 0.5, 0.75 and 1.0 in.), simulating air gaps that can occur in actual wells from hole enlargements behind the casing.

  13. Calibration of a γ-Reθ transition model and its application in low-speed flows

    Science.gov (United States)

    Wang, YunTao; Zhang, YuLun; Meng, DeHong; Wang, GunXue; Li, Song

    2014-12-01

    The prediction of laminar-turbulent transition in the boundary layer is very important for obtaining accurate aerodynamic characteristics with computational fluid dynamics (CFD) tools, because laminar-turbulent transition is directly related to complex flow phenomena in the boundary layer and to separated flow in space. Unfortunately, transition effects are not included in today's major CFD tools because of the non-local calculations involved in transition modeling. In this paper, Menter's γ-Reθ transition model is calibrated and incorporated into a Reynolds-Averaged Navier-Stokes (RANS) code, the Trisonic Platform (TRIP), developed at the China Aerodynamics Research and Development Center (CARDC). Based on flat-plate experimental data from the literature, the empirical correlations involved in the transition model are modified and calibrated numerically. Numerical simulation of the low-speed flow over the Trapezoidal Wing (Trap Wing) is performed and compared with the corresponding experimental data. The results indicate that the γ-Reθ transition model can accurately predict the location of separation-induced and natural transition in flow regions with moderate pressure gradients. The transition model effectively improves the simulation accuracy of the boundary layer and of the aerodynamic characteristics.

  14. Bayesian model calibration of computational models in velocimetry diagnosed dynamic compression experiments.

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Justin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hund, Lauren [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    Dynamic compression experiments are being performed on complicated materials using increasingly complex drivers. The data produced in these experiments are beginning to reach a regime where traditional analysis techniques break down, requiring the solution of an inverse problem. A common measurement in dynamic experiments is an interface velocity as a function of time, and often this functional output can be simulated using a hydrodynamics code. Bayesian model calibration is a statistical framework to estimate inputs into a computational model in the presence of multiple uncertainties, making it well suited to measurements of this type. In this article, we apply Bayesian model calibration to high pressure (250 GPa) ramp compression measurements in tantalum. We address several issues specific to this calibration, including the functional nature of the output as well as parameter and model discrepancy identifiability. Specifically, we propose scaling the likelihood function by an effective sample size rather than modeling the autocorrelation function to accommodate the functional output, and propose sensitivity analyses using the notion of 'modularization' to assess the impact of experiment-specific nuisance input parameters on estimates of material properties. We conclude that the proposed Bayesian model calibration procedure results in simple, fast, and valid inferences on the equation of state parameters for tantalum.
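    The likelihood-scaling idea, replacing an explicit autocorrelation model with an effective sample size, can be sketched as follows. The Gaussian likelihood and the truncation rule are illustrative choices, not the authors' exact estimator:

```python
import numpy as np

def effective_sample_size(residuals, max_lag=None):
    # n_eff = n / (1 + 2 * sum of positive-lag autocorrelations); the sum is
    # truncated at the first non-positive lag (a common, simple rule).
    r = np.asarray(residuals, dtype=float)
    r = r - r.mean()
    n = len(r)
    if max_lag is None:
        max_lag = n // 4
    var = float(np.dot(r, r)) / n
    acf_sum = 0.0
    for lag in range(1, max_lag + 1):
        rho = float(np.dot(r[:-lag], r[lag:])) / (n * var)
        if rho <= 0.0:
            break
        acf_sum += rho
    return n / (1.0 + 2.0 * acf_sum)

def scaled_log_likelihood(residuals, sigma):
    # Independent-Gaussian log-likelihood scaled by n_eff/n, so strongly
    # autocorrelated functional output does not overstate its information.
    r = np.asarray(residuals, dtype=float)
    n = len(r)
    loglik = -0.5 * float(np.sum((r / sigma) ** 2)) - n * np.log(sigma)
    return (effective_sample_size(r) / n) * loglik
```

For a smooth velocity trace, consecutive residuals are highly correlated, so n_eff is far below n and the likelihood is tempered accordingly.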

  15. Effect of calibration data series length on performance and optimal parameters of hydrological model

    Directory of Open Access Journals (Sweden)

    Chuan-zhe Li

    2010-12-01

    Full Text Available In order to assess the effects of calibration data series length on the performance and optimal parameter values of a hydrological model in ungauged or data-limited catchments (data are non-continuous and fragmentary in some catchments), we used non-continuous calibration periods to obtain more independent streamflow data for SIMHYD (a simple hydrology model) calibration. Nash-Sutcliffe efficiency and percentage water balance error were used as performance measures. The particle swarm optimization (PSO) method was used to calibrate the rainfall-runoff models. Different lengths of data series ranging from one year to ten years, randomly sampled, were used to study the impact of calibration data series length. Fifty-five relatively unimpaired catchments located all over Australia with daily precipitation, potential evapotranspiration, and streamflow data were tested to obtain more general conclusions. The results show that longer calibration data series do not necessarily result in better model performance. In general, eight years of data are sufficient to obtain steady estimates of model performance and parameters for the SIMHYD model. It is also shown that most humid catchments require fewer calibration data to obtain a good performance and stable parameter values. The model performs better in humid and semi-humid catchments than in arid catchments. Our results may have useful and interesting implications for the efficient use of limited observation data for hydrological model calibration in different climates.

  16. Improving SWAT model prediction using an upgraded denitrification scheme and constrained auto calibration

    Science.gov (United States)

    The reliability of common calibration practices for process based water quality models has recently been questioned. A so-called “adequately calibrated model” may contain input errors not readily identifiable by model users, or may not realistically represent intra-watershed responses. These short...

  17. Beyond discrimination: A comparison of calibration methods and clinical usefulness of predictive models of readmission risk.

    Science.gov (United States)

    Walsh, Colin G; Sharman, Kavya; Hripcsak, George

    2017-12-01

    Prior to implementing predictive models in novel settings, analyses of calibration and clinical usefulness remain as important as discrimination, but they are not frequently discussed. Calibration is a model's reflection of actual outcome prevalence in its predictions. Clinical usefulness refers to the utilities, costs, and harms of using a predictive model in practice. A decision analytic approach to calibrating and selecting an optimal intervention threshold may help maximize the impact of readmission risk and other preventive interventions. To select a pragmatic means of calibrating predictive models that requires a minimum amount of validation data and that performs well in practice. To evaluate the impact of miscalibration on utility and cost via clinical usefulness analyses. Observational, retrospective cohort study with electronic health record data from 120,000 inpatient admissions at an urban, academic center in Manhattan. The primary outcome was thirty-day readmission for three causes: all-cause, congestive heart failure, and chronic coronary atherosclerotic disease. Predictive modeling was performed via L1-regularized logistic regression. Calibration methods were compared including Platt Scaling, Logistic Calibration, and Prevalence Adjustment. Performance of predictive modeling and calibration was assessed via discrimination (c-statistic), calibration (Spiegelhalter Z-statistic, Root Mean Square Error [RMSE] of binned predictions, Sanders and Murphy Resolutions of the Brier Score, Calibration Slope and Intercept), and clinical usefulness (utility terms represented as costs). The amount of validation data necessary to apply each calibration algorithm was also assessed. C-statistics by diagnosis ranged from 0.7 for all-cause readmission to 0.86 (0.78-0.93) for congestive heart failure. Logistic Calibration and Platt Scaling performed best and this difference required analyzing multiple metrics of calibration simultaneously, in particular Calibration
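    Platt scaling itself is compact enough to sketch: fit a two-parameter logistic map from raw model scores to calibrated probabilities. The Newton solver below is generic; the demonstration uses soft (probabilistic) targets so that recovery is exact, whereas in practice the fit uses held-out binary outcomes:

```python
import math

def platt_scale(scores, labels, iterations=25):
    # Fit p = sigmoid(a*s + b) by Newton's method on the log-loss.
    a, b = 1.0, 0.0
    for _ in range(iterations):
        g_a = g_b = h_aa = h_ab = h_bb = 0.0
        for s, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(-(a * s + b)))
            w = p * (1.0 - p)
            g_a += (p - y) * s          # gradient of the log-loss w.r.t. a
            g_b += (p - y)              # ... and w.r.t. b
            h_aa += w * s * s           # Hessian entries
            h_ab += w * s
            h_bb += w
        det = h_aa * h_bb - h_ab * h_ab
        a -= (h_bb * g_a - h_ab * g_b) / det
        b -= (h_aa * g_b - h_ab * g_a) / det
    return a, b

# Scores whose true calibration map is sigmoid(2*s - 1); the fit should
# recover a ≈ 2 and b ≈ -1.
scores = [-3.0 + 0.1 * i for i in range(61)]
targets = [1.0 / (1.0 + math.exp(-(2.0 * s - 1.0))) for s in scores]
a_hat, b_hat = platt_scale(scores, targets)
```

Because only two parameters are estimated, the method needs far less validation data than rebinning-style recalibration, which is why it suits the minimum-data criterion discussed above.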

  18. The scintillating optical fiber isotope experiment: Bevalac calibrations of test models

    International Nuclear Information System (INIS)

    Connell, J.J.; Binns, W.R.; Dowkontt, P.F.; Epstein, J.W.; Israel, M.H.; Klarmann, J.; Washington Univ., St. Louis, MO; Webber, W.R.; Kish, J.C.

    1990-01-01

    The Scintillating Optical Fiber Isotope Experiment (SOFIE) is a Cherenkov dE/dx-range experiment being developed to study the isotopic composition of cosmic rays in the iron region with sufficient resolution to resolve isotopes separated by one mass unit at iron. This instrument images stopping particles with a block of scintillating optical fibers coupled to an image-intensified video camera. From the digitized video data the trajectory and range of particles stopping in the fiber bundle can be determined; this information, together with a Cherenkov measurement, is used to determine mass. To facilitate this determination, a new Cherenkov response equation was derived for heavy ions at energies near threshold in thick Cherenkov radiators. Test models of SOFIE were calibrated at the Lawrence Berkeley Laboratory's Bevalac heavy ion accelerator in 1985 and 1986 using beams of iron nuclei with energies of 465 to 515 MeV/nucleon. This paper presents the results of these calibrations and discusses the design of the SOFIE Bevalac test models in the context of the scientific objectives of the eventual balloon experiment. The test models show a mass resolution of σ_A ≅ 0.30 amu and a range resolution of σ_R ≅ 250 μm. These results are sufficient for a successful cosmic ray isotope experiment, thus demonstrating the feasibility of the detector system. The SOFIE test models represent the first successful application in the field of cosmic ray astrophysics of the emerging technology of scintillating optical fibers. (orig.)

  19. Calibration of the APEX Model to Simulate Management Practice Effects on Runoff, Sediment, and Phosphorus Loss.

    Science.gov (United States)

    Bhandari, Ammar B; Nelson, Nathan O; Sweeney, Daniel W; Baffaut, Claire; Lory, John A; Senaviratne, Anomaa; Pierzynski, Gary M; Janssen, Keith A; Barnes, Philip L

    2017-11-01

    Process-based computer models have been proposed as a tool to generate data for Phosphorus (P) Index assessment and development. Although models are commonly used to simulate P loss from agriculture using managements that are different from the calibration data, this use of models has not been fully tested. The objective of this study is to determine if the Agricultural Policy Environmental eXtender (APEX) model can accurately simulate runoff, sediment, total P, and dissolved P loss from 0.4 to 1.5 ha of agricultural fields with managements that are different from the calibration data. The APEX model was calibrated with field-scale data from eight different managements at two locations (management-specific models). The calibrated models were then validated, either with the same management used for calibration or with different managements. Location models were also developed by calibrating APEX with data from all managements. The management-specific models resulted in satisfactory performance when used to simulate runoff, total P, and dissolved P within their respective systems, with > 0.50, Nash-Sutcliffe efficiency > 0.30, and percent bias within ±35% for runoff and ±70% for total and dissolved P. When applied outside the calibration management, the management-specific models only met the minimum performance criteria in one-third of the tests. The location models had better model performance when applied across all managements compared with management-specific models. Our results suggest that models only be applied within the managements used for calibration and that data be included from multiple management systems for calibration when using models to assess management effects on P loss or evaluate P Indices. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  20. Automatic component calibration and error diagnostics for model-based accelerator control. Phase I final report

    International Nuclear Information System (INIS)

    Carl Stern; Martin Lee

    1999-01-01

    Phase I work studied the feasibility of developing software for automatic component calibration and error correction in beamline optics models. A prototype application was developed that corrects quadrupole field strength errors in beamline models

  1. Automatic component calibration and error diagnostics for model-based accelerator control. Phase I final report

    CERN Document Server

    Carl-Stern

    1999-01-01

    Phase I work studied the feasibility of developing software for automatic component calibration and error correction in beamline optics models. A prototype application was developed that corrects quadrupole field strength errors in beamline models.

  2. Comparison of global optimization approaches for robust calibration of hydrologic model parameters

    Science.gov (United States)

    Jung, I. W.

    2015-12-01

    Robustness of the calibrated parameters of hydrologic models is necessary to provide reliable predictions of future watershed behavior under varying climate conditions. This study investigated calibration performance as a function of the length of the calibration period, the objective function, the hydrologic model structure and the optimization method. To do this, the combination of three global optimization methods (i.e. SCE-UA, Micro-GA, and DREAM) and four hydrologic models (i.e. SAC-SMA, GR4J, HBV, and PRMS) was tested with different calibration periods and objective functions. Our results showed that the three global optimization methods provided similar calibration performance under different calibration periods, objective functions, and hydrologic models. However, using the index of agreement, normalized root mean square error, or Nash-Sutcliffe efficiency as the objective function showed better performance than using the correlation coefficient or percent bias. Calibration performance for calibration periods ranging from one year to seven years was hard to generalize, because the four hydrologic models have different levels of complexity and different years' hydrological observations have different information content. Acknowledgements: This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
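    The objective functions compared in studies like this are each only a few lines. A minimal sketch of three of them (index of agreement, normalised RMSE, percent bias) on synthetic flows:

```python
def index_of_agreement(obs, sim):
    # Willmott's index of agreement: 1 is a perfect fit.
    m = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for o, s in zip(obs, sim))
    den = sum((abs(s - m) + abs(o - m)) ** 2 for o, s in zip(obs, sim))
    return 1.0 - num / den

def normalized_rmse(obs, sim):
    # RMSE divided by the observed mean; 0 is a perfect fit.
    m = sum(obs) / len(obs)
    mse = sum((s - o) ** 2 for o, s in zip(obs, sim)) / len(obs)
    return (mse ** 0.5) / m

def percent_bias(obs, sim):
    # Positive values mean the simulation over-predicts total volume.
    return 100.0 * (sum(sim) - sum(obs)) / sum(obs)

obs = [2.0, 4.0, 6.0, 8.0]
sim = [o * 1.1 for o in obs]   # a uniform +10% over-prediction
```

Because each metric weights errors differently (squared error vs. volume balance), an optimiser driven by one of them can converge to visibly different parameter sets, which is the comparison at issue above.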

  3. Calibrating the sqHIMMELI v1.0 wetland methane emission model with hierarchical modeling and adaptive MCMC

    Science.gov (United States)

    Susiluoto, Jouni; Raivonen, Maarit; Backman, Leif; Laine, Marko; Makela, Jarmo; Peltola, Olli; Vesala, Timo; Aalto, Tuula

    2018-03-01

    the early spring net primary production could be used to predict parameters affecting the annual methane production. Even though the calibration is specific to the Siikaneva site, the hierarchical modeling approach is well suited for larger-scale studies, and the results of the estimation pave the way for a regional- or global-scale Bayesian calibration of wetland emission models.

  4. Regional parametrisation of a monthly hydrological model for estimating discharges in ungauged catchments

    Science.gov (United States)

    Hlavcova, K.; Szolgay, J.; Kohnova, S.; Kalas, M.

    2003-04-01

    In the absence of measured runoff, optimisation techniques cannot be used to estimate the parameters of monthly rainfall-runoff models. In such cases, empirical regression methods have usually been used to relate the model parameters to catchment characteristics in a given region. In this paper a different method for the regional calibration of a monthly water balance model, which can be used for planning purposes, is proposed. Instead of the regional regression approach, a method is proposed that calibrates a monthly water balance model to all gauged sites in the given region simultaneously. A regional objective function was constructed, and a genetic programming algorithm was employed for the calibration. It is expected that the regionally calibrated model parameters can be used in ungauged basins with similar physiographic conditions. The performance of such a regional calibration scheme was compared with that of two single-site calibration methods in a region of West Slovakia. The results are based on a study that aimed at computing surface water inflow into a lowland area with valuable groundwater resources. Monthly discharge time series had to be estimated for small ungauged rivers entering the study area.

  5. Genetic Algorithm Calibration of Probabilistic Cellular Automata for Modeling Mining Permit Activity

    Science.gov (United States)

    Louis, S.J.; Raines, G.L.

    2003-01-01

    We use a genetic algorithm to calibrate a spatially and temporally resolved cellular automaton to model mining activity on public land in Idaho and western Montana. The genetic algorithm searches through a space of transition-rule parameters of a two-dimensional cellular automaton model to find rule parameters that fit observed mining activity data. Previous work by one of the authors in calibrating the cellular automaton took weeks; the genetic algorithm takes a day and produces rules leading to about the same (or better) fit to the observed data. These preliminary results indicate that genetic algorithms are a viable tool for calibrating cellular automata for this application. Experience gained during the calibration of this cellular automaton suggests that mineral resource information is a critical factor in the quality of the results. With automated calibration, further refinements of how the mineral-resource information is provided to the cellular automaton will probably improve our model.
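    A minimal real-coded genetic algorithm of the kind described is easy to sketch. The fitness function below is a hypothetical stand-in (recovering known rule parameters) rather than the paper's fit to observed mining-activity data:

```python
import random

def genetic_calibrate(fitness, bounds, pop_size=40, generations=60,
                      mutation_rate=0.2, seed=42):
    # Real-coded GA: elitism, tournament selection, blend crossover,
    # Gaussian mutation. 'bounds' holds one (low, high) pair per parameter.
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = sorted(pop, key=fitness, reverse=True)[:2]   # keep the elite
        while len(new_pop) < pop_size:
            p1 = max(rng.sample(pop, 3), key=fitness)          # tournament
            p2 = max(rng.sample(pop, 3), key=fitness)
            alpha = rng.random()                               # blend crossover
            child = [alpha * a + (1 - alpha) * b for a, b in zip(p1, p2)]
            if rng.random() < mutation_rate:                   # Gaussian mutation
                i = rng.randrange(dim)
                lo, hi = bounds[i]
                child[i] = min(max(child[i] + rng.gauss(0.0, 0.1 * (hi - lo)), lo), hi)
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# Hypothetical calibration target: recover known transition-rule parameters.
target = [0.3, 0.7]
best = genetic_calibrate(lambda p: -sum((a - b) ** 2 for a, b in zip(p, target)),
                         [(0.0, 1.0), (0.0, 1.0)])
```

In the application above, the fitness would instead run the cellular automaton with the candidate parameters and score the simulated mining activity against observations, which is what makes each evaluation expensive.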

  6. The Value of Hydrograph Partitioning Curves for Calibrating Hydrological Models in Glacierized Basins

    Science.gov (United States)

    He, Zhihua; Vorogushyn, Sergiy; Unger-Shayesteh, Katy; Gafurov, Abror; Kalashnikova, Olga; Omorova, Elvira; Merz, Bruno

    2018-03-01

    This study refines the method for calibrating a glacio-hydrological model based on Hydrograph Partitioning Curves (HPCs), and evaluates its value in comparison to multidata set optimization approaches which use glacier mass balance, satellite snow cover images, and discharge. The HPCs are extracted from the observed flow hydrograph using catchment precipitation and temperature gradients. They indicate the periods when the various runoff processes, such as glacier melt or snow melt, dominate the basin hydrograph. The annual cumulative curve of the difference between average daily temperature and melt threshold temperature over the basin, as well as the annual cumulative curve of average daily snowfall on the glacierized areas are used to identify the starting and end dates of snow and glacier ablation periods. Model parameters characterizing different runoff processes are calibrated on different HPCs in a stepwise and iterative way. Results show that the HPC-based method (1) delivers model-internal consistency comparably to the tri-data set calibration method; (2) improves the stability of calibrated parameter values across various calibration periods; and (3) estimates the contributions of runoff components similarly to the tri-data set calibration method. Our findings indicate the potential of the HPC-based approach as an alternative for hydrological model calibration in glacierized basins where other calibration data sets than discharge are often not available or very costly to obtain.

  7. Application of heuristic and machine-learning approach to engine model calibration

    Science.gov (United States)

    Cheng, Jie; Ryu, Kwang R.; Newman, C. E.; Davis, George C.

    1993-03-01

    Automation of engine model calibration procedures is a very challenging task because (1) the calibration process searches for a goal state in a huge, continuous state space, (2) calibration is often a lengthy and frustrating task because of complicated mutual interference among the target parameters, and (3) the calibration problem is heuristic by nature, and often heuristic knowledge for constraining a search cannot be easily acquired from domain experts. A combined heuristic and machine learning approach has, therefore, been adopted to improve the efficiency of model calibration. We developed an intelligent calibration program called ICALIB. It has been used on a daily basis for engine model applications, and has reduced the time required for model calibrations from many hours to a few minutes on average. In this paper, we describe the heuristic control strategies employed in ICALIB such as a hill-climbing search based on a state distance estimation function, incremental problem solution refinement by using a dynamic tolerance window, and calibration target parameter ordering for guiding the search. In addition, we present the application of a machine learning program called GID3* for automatic acquisition of heuristic rules for ordering target parameters.

  8. Multi-Site Calibration of Linear Reservoir Based Geomorphologic Rainfall-Runoff Models

    Directory of Open Access Journals (Sweden)

    Bahram Saeidifarzad

    2014-09-01

    Full Text Available Multi-site optimization of two adapted event-based geomorphologic rainfall-runoff models was presented using the Non-dominated Sorting Genetic Algorithm (NSGA-II) method for the South Fork Eel River watershed, California. The first model was developed based on an Unequal Cascade of Reservoirs (UECR), and the second model was presented as a modified version of the Geomorphological Unit Hydrograph based on Nash's model (GUHN). Two calibration strategies, semi-lumped and semi-distributed, were considered for imposing (or not imposing) the geomorphology relations in the models. The results of the models were compared with Nash's model. The results obtained using the observed data of two stations in the multi-site optimization framework showed reasonable efficiency values in both the calibration and the verification steps. The outcomes also showed that semi-distributed calibration of the modified GUHN model slightly outperformed the other models in both upstream and downstream stations during calibration. Both calibration strategies for the developed UECR model showed slightly better performance in the downstream station during the verification phase, but in the upstream station the modified GUHN model with the semi-lumped strategy slightly outperformed the other models. The semi-lumped calibration strategy could lead to lag time parameters logically related to the basin geomorphology and may be more suitable for data-based statistical analyses of the rainfall-runoff process.

  9. Python tools for rapid development, calibration, and analysis of generalized groundwater-flow models

    Science.gov (United States)

    Starn, J. J.; Belitz, K.

    2014-12-01

    National-scale water-quality data sets for the United States have been available for several decades; however, groundwater models to interpret these data are available for only a small percentage of the country. Generalized models may be adequate to explain and project groundwater-quality trends at the national scale by using regional-scale models (defined as watersheds at or between the HUC-6 and HUC-8 levels). Coast-to-coast data such as the National Hydrologic Dataset Plus (NHD+) make it possible to extract the basic building blocks for a model anywhere in the country. IPython notebooks have been developed to automate the creation of generalized groundwater-flow models from the NHD+. The notebook format allows rapid testing of methods for model creation, calibration, and analysis. Capabilities within the Python ecosystem greatly speed up the development and testing of algorithms. GeoPandas is used for very efficient geospatial processing. Raster processing includes the Geospatial Data Abstraction Library and image-processing tools. Model creation is made possible through Flopy, a versatile input and output writer for several MODFLOW-based flow and transport model codes. Interpolation, integration, and map plotting included in the standard Python tool stack are also used, making the notebook a comprehensive platform within which to build and evaluate general models. Models with alternative boundary conditions, numbers of layers, and cell spacings can be tested against one another and evaluated by using water-quality data. Novel calibration criteria were developed by comparing modeled heads to land-surface and surface-water elevations. Information, such as predicted age distributions, can be extracted from general models and tested for its ability to explain water-quality trends. Groundwater ages can then be correlated with horizontal and vertical hydrologic position, a relation that can be used for statistical assessment of likely groundwater-quality conditions.

  10. Calibration of the γ-Equation Transition Model for High Reynolds Flows at Low Mach

    Science.gov (United States)

    Colonia, S.; Leble, V.; Steijl, R.; Barakos, G.

    2016-09-01

    The numerical simulation of flows over large-scale wind turbine blades without considering the transition from laminar to fully turbulent flow may result in incorrect estimates of the blade loads and performance. Thanks to its relative simplicity and promising results, the Local-Correlation-based Transition Modelling concept represents a valid way to include transitional effects in practical CFD simulations. However, the model involves coefficients that need tuning. In this paper, the γ-equation transition model is assessed and calibrated for a wide range of Reynolds numbers at low Mach, as needed for wind turbine applications. An aerofoil is used to evaluate the original model and calibrate it, while a large-scale wind turbine blade is employed to show that the calibrated model can lead to reliable solutions for complex three-dimensional flows. The calibrated model shows promising results for both two-dimensional and three-dimensional flows, even if cross-flow instabilities are neglected.

  11. hydroPSO: A Versatile Particle Swarm Optimisation R Package for Calibration of Environmental Models

    Science.gov (United States)

    Zambrano-Bigiarini, M.; Rojas, R.

    2012-04-01

    Optimisation (IPSO), Fully Informed Particle Swarm (FIPS), and weighted FIPS (wFIPS). Finally, an advanced sensitivity analysis using the Latin Hypercube One-At-a-Time (LH-OAT) method and user-friendly plotting summaries facilitate the interpretation and assessment of the calibration/optimisation results. We validate hydroPSO against the standard PSO algorithm (SPSO-2007) employing five test functions commonly used to assess the performance of optimisation algorithms. Additionally, we illustrate how the performance of the optimisation/calibration engine is boosted by using several of the fine-tuning options included in hydroPSO. Finally, we show how to interface SWAT-2005 with hydroPSO to calibrate a semi-distributed hydrological model for the Ega River basin in Spain, and how to interface MODFLOW-2000 with hydroPSO to calibrate a groundwater flow model for the regional aquifer of the Pampa del Tamarugal in Chile. We limit the applications of hydroPSO to study cases dealing with surface-water and groundwater models, as these two are the authors' areas of expertise. However, given the flexibility of hydroPSO, we believe the package can be applied to any model code requiring some form of parameter estimation.
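    A minimal particle swarm optimiser, written here in Python rather than in hydroPSO's R, illustrates the velocity/position update at the core of all the PSO variants listed above. The inertia and acceleration constants are commonly used SPSO-2007-style defaults, and the toy quadratic objective stands in for a model-calibration misfit; none of this is hydroPSO's actual implementation:

```python
import numpy as np

def pso(objective, bounds, n_particles=20, n_iter=100,
        w=0.72, c1=1.49, c2=1.49, seed=0):
    """Bare-bones particle swarm optimiser (minimisation, illustrative only)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest = x.copy()                                   # personal bests
    pbest_f = np.apply_along_axis(objective, 1, x)
    g = pbest[np.argmin(pbest_f)]                      # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        # SPSO-style update: inertia + cognitive + social terms
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

# calibrate a toy 2-parameter "model" against a synthetic optimum
best, best_f = pso(lambda p: np.sum((p - np.array([2.0, -1.0])) ** 2),
                   (np.array([-5.0, -5.0]), np.array([5.0, 5.0])))
```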

  12. Calibration of a parsimonious distributed ecohydrological daily model in a data-scarce basin by exclusively using the spatio-temporal variation of NDVI

    Science.gov (United States)

    Ruiz-Pérez, Guiomar; Koch, Julian; Manfreda, Salvatore; Caylor, Kelly; Francés, Félix

    2017-12-01

    Ecohydrological modeling studies in developing countries, such as sub-Saharan Africa, often face the problem of extensive parametrical requirements and limited available data. Satellite remote sensing data may be able to fill this gap, but require novel methodologies to exploit their spatio-temporal information that could potentially be incorporated into model calibration and validation frameworks. The present study tackles this problem by suggesting an automatic calibration procedure, based on the empirical orthogonal function, for distributed ecohydrological daily models. The procedure is tested with the support of remote sensing data in a data-scarce environment - the upper Ewaso Ngiro river basin in Kenya. In the present application, the TETIS-VEG model is calibrated using only NDVI (Normalized Difference Vegetation Index) data derived from MODIS. The results demonstrate that (1) satellite data of vegetation dynamics can be used to calibrate and validate ecohydrological models in water-controlled and data-scarce regions, (2) the model calibrated using only satellite data is able to reproduce both the spatio-temporal vegetation dynamics and the observed discharge at the outlet and (3) the proposed automatic calibration methodology works satisfactorily and it allows for a straightforward incorporation of spatio-temporal data into the calibration and validation framework of a model.
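    Empirical orthogonal functions are the leading singular vectors of the space-time anomaly matrix, so a calibration metric in the spirit of the one above can be sketched with a plain SVD. This is an illustrative reconstruction, not the authors' exact procedure; the synthetic fields stand in for simulated and MODIS-derived NDVI:

```python
import numpy as np

def leading_eofs(field, k=3):
    """Leading spatial EOFs of a (time, space) matrix via SVD of the anomalies."""
    anom = field - field.mean(axis=0)        # remove the temporal mean per pixel
    _, s, vt = np.linalg.svd(anom, full_matrices=False)
    var_frac = s[:k] ** 2 / np.sum(s ** 2)   # explained-variance fractions
    return vt[:k], var_frac

def eof_skill(sim, obs, k=3):
    """Pattern similarity of simulated vs observed fields: |cosine| per EOF."""
    e_sim, _ = leading_eofs(sim, k)
    e_obs, _ = leading_eofs(obs, k)
    return np.abs(np.sum(e_sim * e_obs, axis=1))   # sign of an EOF is arbitrary

# synthetic space-time fields with two dominant, well-separated modes
t = np.linspace(0, 4 * np.pi, 40)
space = np.linspace(0, 1, 60)
obs = (np.outer(np.sin(t), np.sin(2 * np.pi * space))
       + 0.4 * np.outer(np.cos(t), np.cos(2 * np.pi * space)))
sim = obs + 0.01 * np.random.default_rng(1).standard_normal(obs.shape)
skill = eof_skill(sim, obs, k=2)
```

    A calibration loop can then maximise a score like this instead of a discharge-based objective, which is the essential point of NDVI-only calibration.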

  13. Matching Images to Models: Camera Calibration for 3-D Surface Reconstruction

    Science.gov (United States)

    Morris, Robin D.; Smelyanskiy, Vadim N.; Cheeseman, Peter C.; Norvig, Peter (Technical Monitor)

    2001-01-01

    In a previous paper we described a system which recursively recovers a super-resolved three dimensional surface model from a set of images of the surface. In that paper we assumed that the camera calibration for each image was known. In this paper we solve two problems. Firstly, if an estimate of the surface is already known, the problem is to calibrate a new image relative to the existing surface model. Secondly, if no surface estimate is available, the relative camera calibration between the images in the set must be estimated. This will allow an initial surface model to be estimated. Results of both types of estimation are given.

  14. Effects of temporal and spatial resolution of calibration data on integrated hydrologic water quality model identification

    Science.gov (United States)

    Jiang, Sanyuan; Jomaa, Seifeddine; Büttner, Olaf; Rode, Michael

    2014-05-01

    Hydrological water quality modeling is increasingly used for investigating runoff and nutrient transport processes as well as watershed management, but it is mostly unclear how data availability determines model identification. In this study, the HYPE (HYdrological Predictions for the Environment) model, which is a process-based, semi-distributed hydrological water quality model, was applied in two different mesoscale catchments (Selke (463 km2) and Weida (99 km2)) located in central Germany to simulate discharge and inorganic nitrogen (IN) transport. PEST and DREAM(ZS) were combined with the HYPE model to conduct parameter calibration and uncertainty analysis. A split-sample test was used for model calibration (1994-1999) and validation (1999-2004). IN concentration and daily IN load were found to be highly correlated with discharge, indicating that IN leaching is mainly controlled by runoff. Both the dynamics and the balances of water and IN load were well captured, with NSE greater than 0.83 during the validation period. Multi-objective calibration (calibrating hydrological and water quality parameters simultaneously) was found to outperform step-wise calibration in terms of model robustness. Multi-site calibration was able to improve model performance at internal sites and decrease parameter posterior uncertainty and prediction uncertainty. Nitrogen-process parameters calibrated using continuous daily averages of nitrate-N concentration observations produced better and more robust simulations of IN concentration and load, and lower posterior parameter uncertainty and IN concentration prediction uncertainty, compared to calibration against discontinuous biweekly nitrate-N concentration measurements. Both PEST and DREAM(ZS) are efficient in parameter calibration. However, DREAM(ZS) is more sound in terms of parameter identification and uncertainty analysis than PEST because of its capability to evolve parameter posterior distributions and estimate prediction uncertainty based on global
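    The two skill scores quoted above, NSE and bias, are standard and easy to state exactly. A minimal implementation follows, using one common percent-bias convention (sign conventions differ between tools, so the convention is noted in the code):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect; 0 is no better than mean(obs)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias as (sim - obs) totals; negative means under-prediction."""
    return 100.0 * (np.sum(sim) - np.sum(obs)) / np.sum(obs)

obs = np.array([1.0, 3.0, 2.0, 5.0, 4.0])   # hypothetical observed discharges
sim = 0.9 * obs                             # a model that under-predicts by 10 %
e, b = nse(obs, sim), pbias(obs, sim)
```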

  15. Calibration and validation of the advanced E-Region Wind Interferometer

    Directory of Open Access Journals (Sweden)

    S. K. Kristoffersen

    2013-07-01

    The advanced E-Region Wind Interferometer (ERWIN II) combines the imaging capabilities of a CCD detector with the wide field associated with field-widened Michelson interferometry. This instrument is capable of simultaneous multi-directional wind observations for three different airglow emissions (the oxygen green line (O(1S)) at a height of ~97 km, the PQ(7) and P(7) emission lines in the O2(0–1) atmospheric band at ~93 km, and the P1(3) emission line in the (6, 2) hydroxyl Meinel band at ~87 km) on a three-minute cadence. In each direction, for 45 s measurements at typical airglow volume emission rates, the instrument is capable of line-of-sight wind precisions of ~1 m s−1 for hydroxyl and O(1S) and ~4 m s−1 for O2. This precision is achieved using a new data analysis algorithm which takes advantage of the imaging capabilities of the CCD detector along with knowledge of the instrument phase variation as a function of pixel location across the detector. This instrument is currently located in Eureka, Nunavut, as part of the Polar Environment Atmospheric Research Laboratory (PEARL; 80° N, 86° W). The details of the physical configuration, the data analysis algorithm, the measurement calibration and the validation of the observations from December 2008 and January 2009 are described. Field measurements which demonstrate the capabilities of this instrument are presented. To our knowledge, the wind determinations with this instrument are the most accurate and have the highest observational cadence for airglow wind observations of this region of the atmosphere, and match the capabilities of other wind-measuring techniques.

  16. A fundamental parameter-based calibration model for an intrinsic germanium X-ray fluorescence spectrometer

    International Nuclear Information System (INIS)

    Christensen, L.H.; Pind, N.

    1982-01-01

    A matrix-independent fundamental parameter-based calibration model for an energy-dispersive X-ray fluorescence spectrometer has been developed. This model, which is part of a fundamental parameter approach quantification method, accounts for both the excitation and detection probability. For each secondary target a number of relative calibration constants are calculated on the basis of knowledge of the irradiation geometry, the detector specifications, and tabulated fundamental physical parameters. The absolute calibration of the spectrometer is performed by measuring one pure element standard per secondary target. For sample systems where all elements can be analyzed by means of the same secondary target the absolute calibration constant can be determined during the iterative solution of the basic equation. Calculated and experimentally determined relative calibration constants agree to within 5-10% of each other and so do the results obtained from the analysis of an NBS certified alloy using the two sets of constants. (orig.)
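    The absolute-calibration step described above reduces to simple arithmetic once the relative constants are known: one pure-element standard per secondary target fixes the absolute scale. All numbers below are invented for illustration, and the iterative matrix corrections of the full method are omitted:

```python
# Hedged sketch of the absolute-calibration step; all values are hypothetical.
# Relative calibration constants k_i (per element, per secondary target) are
# computed from geometry, detector response, and tabulated fundamental
# parameters; one pure-element standard then fixes the absolute scale A.

k_rel = {"Fe": 1.00, "Ni": 0.83, "Cu": 0.71}    # hypothetical relative constants

I_standard = 5200.0                  # counts/s from a pure Fe standard (conc. 1.0)
A = I_standard / k_rel["Fe"]         # absolute constant: counts/s per unit conc.

def first_pass_concentration(element, intensity):
    """First, matrix-free estimate; the real method iterates with corrections."""
    return intensity / (A * k_rel[element])

c_ni = first_pass_concentration("Ni", 1800.0)
```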

  17. PEST modules with regularization for the acceleration of the automatic calibration in hydrodynamic models

    OpenAIRE

    Polomčić, Dušan M.; Bajić, Dragoljub I.; Močević, Jelena M.

    2015-01-01

    The calibration of a hydrodynamic model is usually done manually by 'testing' different values of the hydrogeological parameters and the hydraulic characteristics of the boundary conditions. The PEST program introduced automatic model calibration, which has proved to significantly reduce the subjective influence of the model creator on the results. With the relatively new approach of PEST, i.e. with the introduction of so-called 'pilot points', the concept of homogeneou...

  18. New Methods for Kinematic Modelling and Calibration of Robots

    DEFF Research Database (Denmark)

    Søe-Knudsen, Rune

    2014-01-01

    the accuracy in an easy and accessible way. The required equipment is accessible, since the cost is held to a minimum and can be made with conventional processing equipment. Our first method calibrates the kinematics of a robot using known relative positions measured with the robot itself and a plate...... with holes matching the robot tool flange. The second method calibrates the kinematics using two robots. This method allows the robots to carry out the collection of measurements and the adjustment, by themselves, after the robots have been connected. Furthermore, we also propose a method for restoring......Improving a robot's accuracy increases its ability to solve certain tasks, and is therefore valuable. Practical ways of achieving this improved accuracy, even after robot repair, is also valuable. In this work, we introduce methods that improve the robot's accuracy and make it possible to maintain...

  19. Validation and calibration of structural models that combine information from multiple sources.

    Science.gov (United States)

    Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A

    2017-02-01

    Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case-studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.

  20. Uncertainty modelling and code calibration for composite materials

    DEFF Research Database (Denmark)

    Toft, Henrik Stensgaard; Branner, Kim; Mishnaevsky, Leon, Jr

    2013-01-01

    and measurement uncertainties which are introduced on the different scales. Typically, these uncertainties are taken into account in the design process using characteristic values and partial safety factors specified in a design standard. The value of the partial safety factors should reflect a reasonable balance...... to wind turbine blades are calibrated for two typical lay-ups using a large number of load cases and ratios between the aerodynamic forces and the inertia forces....

  1. A Low Cost Calibration Method for Urban Drainage Models

    DEFF Research Database (Denmark)

    Rasmussen, Michael R.; Thorndahl, Søren; Schaarup-Jensen, Kjeld

    2008-01-01

    The calibration of the hydrological reduction coefficient is examined for a small catchment. The objective is to determine the hydrological reduction coefficient, which describes how much of the precipitation falling on impervious areas actually ends up in the sewer...... to what can be found with intensive in-sewer measurements of rain and runoff. The results also clearly indicate that there is a large variation in the hydrological reduction coefficient between different rain events.
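    The hydrological reduction coefficient itself is a simple per-event ratio of the measured in-sewer runoff volume to the rain volume falling on impervious area. A sketch with hypothetical event numbers:

```python
def hydrological_reduction_coefficient(runoff_volume_m3, rain_depth_mm,
                                       impervious_area_m2):
    """Fraction of rain on impervious surfaces that reaches the sewer (per event)."""
    rain_volume_m3 = (rain_depth_mm / 1000.0) * impervious_area_m2
    return runoff_volume_m3 / rain_volume_m3

# hypothetical event: 12 mm of rain on 5 ha of impervious area,
# 420 m3 of runoff measured in the sewer
phi = hydrological_reduction_coefficient(420.0, 12.0, 50_000.0)
```

    Repeating this calculation event by event is exactly what exposes the large event-to-event variation the abstract reports.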

  2. Calibrating mechanistic-empirical pavement performance models with an expert matrix

    Energy Technology Data Exchange (ETDEWEB)

    Tighe, S.; AlAssar, R.; Haas, R. [Waterloo Univ., ON (Canada). Dept. of Civil Engineering; Zhiwei, H. [Stantec Consulting Ltd., Cambridge, ON (Canada)

    2001-07-01

    Proper management of pavement infrastructure requires pavement performance modelling. For the past 20 years, the Ontario Ministry of Transportation has used the Ontario Pavement Analysis of Costs (OPAC) system for pavement design. Pavement needs, however, have changed substantially during that time. To address this need, a new research contract is underway to enhance the model and verify the predictions, particularly at extreme points such as low and high traffic volume pavement design. This initiative included a complete evaluation of the existing OPAC pavement design method, the construction of a new set of pavement performance prediction models, and the development of a flexible pavement design procedure that incorporates reliability analysis. The design was also expanded to include rigid pavement designs and modification of the existing life cycle cost analysis procedure, which includes both the agency cost and the road user cost. Performance prediction and life-cycle costs were developed based on several factors, including material properties, traffic loads and climate. Construction and maintenance schedules were also considered. The methodology for the calibration and validation of a mechanistic-empirical flexible pavement performance model is described. Mechanistic-empirical design methods combine theory-based design, such as calculated stresses, strains or deflections, with empirical methods, where a measured response is associated with thickness and pavement performance. Elastic layer analysis was used to determine pavement response and to identify the most effective design using cumulative Equivalent Single Axle Loads (ESALs), subgrade type and layer thickness. The new mechanistic-empirical model separates the environment and traffic effects on performance. This makes it possible to quantify regional differences between Southern and Northern Ontario. In addition, roughness can be calculated in terms of the International Roughness Index or the Riding Comfort Index.

  3. Step wise, multiple objective calibration of a hydrologic model for a snowmelt dominated basin

    Science.gov (United States)

    Hay, L.E.; Leavesley, G.H.; Clark, M.P.; Markstrom, S.L.; Viger, R.J.; Umemoto, M.

    2006-01-01

    The ability to apply a hydrologic model to large numbers of basins for forecasting purposes requires a quick and effective calibration strategy. This paper presents a step-wise, multiple-objective, automated procedure for hydrologic model calibration. This procedure includes the sequential calibration of a model's simulation of solar radiation (SR), potential evapotranspiration (PET), water balance, and daily runoff. The procedure uses the Shuffled Complex Evolution global search algorithm to calibrate the U.S. Geological Survey's Precipitation Runoff Modeling System in the Yampa River basin of Colorado. This process assures that intermediate states of the model (SR and PET on a monthly mean basis), as well as the water balance and components of the daily hydrograph, are simulated consistently with measured values.
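    The step-wise idea (calibrate one objective, freeze its parameters, then move on) can be shown with a toy skeleton. Here simple random search stands in for the Shuffled Complex Evolution algorithm, and the synthetic per-step objectives stand in for the SR, PET and runoff misfits; everything is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
true_p = {"sr": 0.6, "pet": 1.3, "runoff": 0.25}   # synthetic "correct" values

def model_error(step, params):
    """Toy per-step objective: distance to the synthetic optimum."""
    return abs(params[step] - true_p[step])

params = {"sr": 1.0, "pet": 1.0, "runoff": 1.0}    # initial guesses
for step in ["sr", "pet", "runoff"]:               # one objective at a time
    best = params[step]
    for _ in range(500):                           # random search as a stand-in
        trial = rng.uniform(0.0, 2.0)
        if (model_error(step, {**params, step: trial})
                < model_error(step, {**params, step: best})):
            best = trial
    params[step] = best                            # freeze before the next step
```

    The key design choice mirrored here is that later steps see the already-calibrated values of earlier steps, which is what keeps the intermediate states (SR, PET) physically consistent.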

  4. Discharge and Nitrogen Transfer Modelling in the Berze River: A HYPE Setup and Calibration

    Directory of Open Access Journals (Sweden)

    Veinbergs Arturs

    2017-05-01

    This study is focused on water quality and quantity modelling in the Berze River basin located in the Zemgale region of Latvia. The contributing basin area of 872 km2 is divided into 15 sub-basins designated according to the characteristics of the hydrological network and the water sampling programme. The river basin of interest is a spatially complex system with agricultural land and forests as the two predominant land use types. The complexity of the system is reflected in the discharge intensity and the diffuse pollution of nitrogen compounds into the water bodies of the river basin. The presence of urban areas has an impact, as the load from the existing wastewater treatment plants accounts for up to 76 % of the total nitrogen load in the Berze River basin. Representative data sets of land cover, an agricultural field database for crop distribution analysis, estimates of crop management, a soil type map, a digital elevation model, drainage conditions, the network of water bodies and point sources were used for the modelling procedures. The semi-distributed hydrochemical model HYPE was set up to simulate discharge and nitrogen transfer. In order to make the model more robust and appropriate for the current study, the data sets stated previously were classified by unifying similar spatially located polygons. The data layers were overlaid and 53 hydrological response units (SLCs) were created. Agricultural land consists of 48 SLCs with details of soils, drainage conditions, crop types, and land management practices. A manual calibration procedure was applied to improve the performance of the discharge simulation. Simulated discharge values showed good agreement with the observed values, with a Nash-Sutcliffe efficiency of 0.82 and a bias of −6.6 %. Manual calibration of parameters related to the nitrogen leakage simulation was applied to test the most sensitive parameters.

  5. Discharge and Nitrogen Transfer Modelling in the Berze River: A HYPE Setup and Calibration

    Science.gov (United States)

    Veinbergs, Arturs; Lagzdins, Ainis; Jansons, Viesturs; Abramenko, Kaspars; Sudars, Ritvars

    2017-05-01

    This study is focused on water quality and quantity modelling in the Berze River basin located in the Zemgale region of Latvia. The contributing basin area of 872 km2 is divided into 15 sub-basins designated according to the characteristics of the hydrological network and the water sampling programme. The river basin of interest is a spatially complex system with agricultural land and forests as the two predominant land use types. The complexity of the system is reflected in the discharge intensity and the diffuse pollution of nitrogen compounds into the water bodies of the river basin. The presence of urban areas has an impact, as the load from the existing wastewater treatment plants accounts for up to 76 % of the total nitrogen load in the Berze River basin. Representative data sets of land cover, an agricultural field database for crop distribution analysis, estimates of crop management, a soil type map, a digital elevation model, drainage conditions, the network of water bodies and point sources were used for the modelling procedures. The semi-distributed hydrochemical model HYPE was set up to simulate discharge and nitrogen transfer. In order to make the model more robust and appropriate for the current study, the data sets stated previously were classified by unifying similar spatially located polygons. The data layers were overlaid and 53 hydrological response units (SLCs) were created. Agricultural land consists of 48 SLCs with details of soils, drainage conditions, crop types, and land management practices. A manual calibration procedure was applied to improve the performance of the discharge simulation. Simulated discharge values showed good agreement with the observed values, with a Nash-Sutcliffe efficiency of 0.82 and a bias of -6.6 %. Manual calibration of parameters related to the nitrogen leakage simulation was applied to test the most sensitive parameters.

  6. Calibration factor or calibration coefficient?

    International Nuclear Information System (INIS)

    Meghzifene, A.; Shortt, K.R.

    2002-01-01

    Full text: The IAEA/WHO network of SSDLs was set up in order to establish links between SSDL members and the international measurement system. At the end of 2001, there were 73 network members in 63 Member States. The SSDL network members provide calibration services to end-users at the national or regional level. The results of the calibrations are summarized in a document called a calibration report or calibration certificate. The IAEA has been using the term calibration certificate and will continue using the same terminology. The most important information in a calibration certificate is a list of calibration factors and their related uncertainties that apply to the calibrated instrument for the well-defined irradiation and ambient conditions. The IAEA has recently decided to change the term calibration factor to calibration coefficient, to be fully in line with ISO [ISO 31-0], which recommends the use of the term coefficient when it links two quantities A and B that have different dimensions. The term factor should only be used for k when it links terms A and B that have the same dimensions, A = k·B. However, in a typical calibration, an ion chamber is calibrated in terms of a physical quantity such as air kerma, dose to water, ambient dose equivalent, etc. If the chamber is calibrated together with its electrometer, then the calibration refers to the physical quantity to be measured per electrometer unit reading. In this case, the quantities linked have different dimensions. The adoption by the Agency of the term coefficient to express the results of calibrations is consistent with the 'International vocabulary of basic and general terms in metrology' prepared jointly by the BIPM, IEC, ISO, OIML and other organizations. The BIPM has changed from factor to coefficient. The authors believe that this is more than just a matter of semantics and recommend that the SSDL network members adopt this change in terminology. (author)

  7. CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION

    International Nuclear Information System (INIS)

    Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila; Lambas, Diego García; Cora, Sofía A.; Martínez, Cristian A. Vega-; Gargiulo, Ignacio D.; Padilla, Nelson D.; Tecce, Tomás E.; Orsi, Álvaro; Arancibia, Alejandra M. Muñoz

    2015-01-01

    We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using a MCMC exploration. Both methods find the same maximum likelihood region, however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.

  8. CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION

    Energy Technology Data Exchange (ETDEWEB)

    Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila; Lambas, Diego García [Instituto de Astronomía Teórica y Experimental, CONICET-UNC, Laprida 854, X5000BGR, Córdoba (Argentina); Cora, Sofía A.; Martínez, Cristian A. Vega-; Gargiulo, Ignacio D. [Consejo Nacional de Investigaciones Científicas y Técnicas, Rivadavia 1917, C1033AAJ Buenos Aires (Argentina); Padilla, Nelson D.; Tecce, Tomás E.; Orsi, Álvaro; Arancibia, Alejandra M. Muñoz, E-mail: andresnicolas@oac.uncor.edu [Instituto de Astrofísica, Pontificia Universidad Católica de Chile, Av. Vicuña Mackenna 4860, Santiago (Chile)

    2015-03-10

    We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using a MCMC exploration. Both methods find the same maximum likelihood region, however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.

  9. A Fundamental Parameter-Based Calibration Model for an Intrinsic Germanium X-Ray Fluorescence Spectrometer

    DEFF Research Database (Denmark)

    Christensen, Leif Højslet; Pind, Niels

    1982-01-01

    A matrix-independent fundamental parameter-based calibration model for an energy-dispersive X-ray fluorescence spectrometer has been developed. This model, which is part of a fundamental parameter approach quantification method, accounts for both the excitation and detection probability. For each...... secondary target a number of relative calibration constants are calculated on the basis of knowledge of the irradiation geometry, the detector specifications, and tabulated fundamental physical parameters. The absolute calibration of the spectrometer is performed by measuring one pure element standard per...

  10. Influence of smoothing of X-ray spectra on parameters of calibration model

    International Nuclear Information System (INIS)

    Antoniak, W.; Urbanski, P.; Kowalska, E.

    1998-01-01

    The parameters of the calibration model before and after smoothing of X-ray spectra have been investigated. The calibration model was calculated using a multivariate procedure, namely partial least squares (PLS) regression. The investigations were performed on six sets of various standards used for the calibration of instruments based on the X-ray fluorescence principle. Three smoothing methods were compared: regression splines, Savitzky-Golay filtering and the discrete Fourier transform. The calculations were performed using the MATLAB software package and some home-made programs. (author)
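    Of the three smoothing methods compared, Savitzky-Golay is the easiest to state compactly: each smoothed value is a local least-squares polynomial fit evaluated at the window centre. A numpy-only sketch follows (the window length, polynomial order and test signal are illustrative choices, not the paper's settings):

```python
import numpy as np

def savitzky_golay(y, window=7, order=2):
    """Savitzky-Golay smoothing: fit a degree-`order` polynomial per window."""
    half = window // 2
    # Design matrix for offsets -half..half; the first row of its pseudoinverse
    # gives the convolution weights that evaluate the fit at the window centre.
    A = np.vander(np.arange(-half, half + 1), order + 1, increasing=True)
    coeffs = np.linalg.pinv(A)[0]
    ypad = np.pad(y, half, mode="edge")        # simple edge handling
    # correlation with the weights == convolution with the reversed weights
    return np.convolve(ypad, coeffs[::-1], mode="valid")

x = np.linspace(0, 2 * np.pi, 200)
noisy = np.sin(x) + 0.1 * np.random.default_rng(3).standard_normal(200)
smooth = savitzky_golay(noisy)
```

    The appeal for spectra is that, unlike a plain moving average, the polynomial fit preserves peak heights and widths up to the chosen order.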

  11. An Open-Source Auto-Calibration Routine Supporting the Stormwater Management Model

    Science.gov (United States)

    Tiernan, E. D.; Hodges, B. R.

    2017-12-01

    The stormwater management model (SWMM) is a clustered model that relies on subcatchment-averaged parameter assignments to correctly capture catchment stormwater runoff behavior. Model calibration is considered a critical step for SWMM performance, an arduous task that most stormwater management designers undertake manually. This research presents an open-source, automated calibration routine that increases the efficiency and accuracy of the model calibration process. The routine makes use of a preliminary sensitivity analysis to reduce the dimensions of the parameter space, at which point a multi-objective function, genetic algorithm (modified Non-dominated Sorting Genetic Algorithm II) determines the Pareto front for the objective functions within the parameter space. The solutions on this Pareto front represent the optimized parameter value sets for the catchment behavior that could not have been reasonably obtained through manual calibration.
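    The Pareto front that the genetic algorithm approximates is defined by non-dominance. A minimal extraction routine for a finished set of objective vectors (minimisation of every objective; this is not the NSGA-II ranking itself, and the objective pairs are hypothetical) can be sketched as:

```python
import numpy as np

def pareto_front(points):
    """Indices of non-dominated points, minimising every objective."""
    pts = np.asarray(points, float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some point is <= in all objectives and < in one;
        # p never dominates itself because `pts < p` is all-False for that row
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# toy two-objective calibration results (e.g. peak-flow error vs volume error)
objs = [(0.2, 0.9), (0.4, 0.4), (0.9, 0.1), (0.5, 0.5), (0.8, 0.8)]
front = pareto_front(objs)
```

    Each index on the returned front corresponds to one optimized SWMM parameter set; the modeller then picks among them according to which objective matters most.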

  12. Calibration and validation of a model describing complete autotrophic nitrogen removal in a granular SBR system

    DEFF Research Database (Denmark)

    Vangsgaard, Anna Katrine; Mutlu, Ayten Gizem; Gernaey, Krist

    2013-01-01

    BACKGROUND: A validated model describing the nitritation-anammox process in a granular sequencing batch reactor (SBR) system is an important tool for: a) design of future experiments and b) prediction of process performance during optimization, while applying process control, or during system scale-up. RESULTS: A model was calibrated using a step-wise procedure customized for the specific needs of the system. The important steps in the procedure were initialization, steady-state and dynamic calibration, and validation. A fast and effective initialization approach was developed to approximate pseudo… screening of the parameter space proposed by Sin et al. (2008) - to find the best fit of the model to dynamic data. Finally, the calibrated model was validated with an independent data set. CONCLUSION: The presented calibration procedure is the first customized procedure for this type of system…

  13. Modeling, Calibration and Control for Extreme-Precision MEMS Deformable Mirrors, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Iris AO will develop electromechanical models and actuator calibration methods to enable open-loop control of MEMS deformable mirrors (DMs) with unprecedented...

  14. Nested sampling algorithm for subsurface flow model selection, uncertainty quantification, and nonlinear calibration

    KAUST Repository

    Elsheikh, A. H.; Wheeler, M. F.; Hoteit, Ibrahim

    2013-01-01

    Calibration of subsurface flow models is an essential step for managing ground water aquifers, designing of contaminant remediation plans, and maximizing recovery from hydrocarbon reservoirs. We investigate an efficient sampling algorithm known

  15. Radiolytic modelling of spent fuel oxidative dissolution mechanism. Calibration against UO2 dynamic leaching experiments

    International Nuclear Information System (INIS)

    Merino, J.; Cera, E.; Bruno, J.; Quinones, J.; Casas, I.; Clarens, F.; Gimenez, J.; Pablo, J. de; Rovira, M.; Martinez-Esparza, A.

    2005-01-01

    Calibration and testing are inherent aspects of any modelling exercise and consequently they are key issues in developing a model for the oxidative dissolution of spent fuel. In the present work we present the outcome of the calibration process for the kinetic constants of a UO2 oxidative dissolution mechanism developed for use in a radiolytic model. Experimental data obtained in dynamic leaching experiments of unirradiated UO2 have been used for this purpose. The iterative calibration process has provided some insight into the detailed mechanism taking place in the alteration of UO2, particularly the role of ·OH radicals and their interaction with the carbonate system. The results show that, although more simulations are needed for testing in different experimental systems, the calibrated oxidative dissolution mechanism could be included in radiolytic models to gain confidence in the prediction of the long-term alteration rate of the spent fuel under repository conditions

  16. Evaluation of Uncertainties in hydrogeological modeling and groundwater flow analyses. Model calibration

    International Nuclear Information System (INIS)

    Ijiri, Yuji; Ono, Makoto; Sugihara, Yutaka; Shimo, Michito; Yamamoto, Hajime; Fumimura, Kenichi

    2003-03-01

    This study evaluates uncertainty in hydrogeological modeling and groundwater flow analysis. Three-dimensional groundwater flow at the Shobasama site in Tono was analyzed using two continuum models and one discontinuum model. The study domain covered an area of four kilometers east-west and six kilometers north-south. Moreover, to evaluate how the uncertainties in hydrogeological structure modeling and groundwater flow simulation decrease as investigation progresses, the models were updated and calibrated for several hydrogeological structure modeling and groundwater flow analysis techniques, based on newly acquired information and knowledge. The acquired knowledge is as follows. When parameters and structures were reset during model updating under the same circumstances as the previous year, there was no large difference in handling between the modeling methods. Model calibration was performed by matching numerical simulations to observations of the pressure response caused by opening and closing a packer in the MIU-2 borehole. Each analysis technique reduced the residual sum of squares between observations and simulation results by adjusting hydrogeological parameters; however, the models adjusted different parameters, such as hydraulic conductivity, effective porosity, specific storage, and anisotropy. When calibrating models, it is sometimes impossible to explain the phenomena by adjusting parameters alone; in such cases, further investigation may be required to clarify the hydrogeological structure in more detail. Comparing the research from its beginning to this year, the following conclusions are obtained about investigation. (1) Transient hydraulic data are an effective means of reducing the uncertainty of the hydrogeological structure. (2) Effective porosity for calculating pore water velocity of

  17. Improvement, calibration and validation of a distributed hydrological model over France

    Directory of Open Access Journals (Sweden)

    P. Quintana Seguí

    2009-02-01

    Full Text Available The hydrometeorological model SAFRAN-ISBA-MODCOU (SIM) computes water and energy budgets on the land surface, riverflows, and the level of several aquifers at the scale of France. SIM is composed of a meteorological analysis system (SAFRAN), a land surface model (ISBA), and a hydrogeological model (MODCOU). In this study, an exponential profile of hydraulic conductivity at saturation is introduced into the model and its impact analysed. The effect of calibration on the model's performance is also studied. A very simple calibration method is implemented and applied to the parameters of hydraulic conductivity and subgrid runoff. The study shows that a better description of the hydraulic conductivity of the soil is important for simulating more realistic discharges. It also shows that the calibrated model is more robust than the original SIM. In fact, the calibration mainly affects the processes related to the dynamics of the flow (drainage and runoff), while the other relevant processes (like evaporation) remain stable. It is also shown that the new empirical parameterization of hydraulic conductivity is only worth introducing if it is accompanied by a calibration of its parameters; otherwise the simulations can be degraded. In conclusion, the new parameterization is necessary to obtain good simulations, and calibration is a tool that must be used to improve the performance of distributed models like SIM that have some empirical parameters.
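The exponential profile of saturated hydraulic conductivity mentioned above can be sketched as below. The function and parameter names (surface conductivity k_surf, e-folding depth d) are illustrative assumptions, not the parameterization actually coded in ISBA.

```python
import math

# Exponential decay of saturated hydraulic conductivity with depth:
# k_sat(z) = k_surf * exp(-z / d). Toy sketch of the idea only.

def k_sat(z, k_surf=1.0e-5, d=2.0):
    """Saturated hydraulic conductivity [m/s] at depth z [m],
    decaying exponentially from its surface value k_surf."""
    return k_surf * math.exp(-z / d)

# Conductivity at the surface and at a few depths:
profile = [k_sat(z) for z in (0.0, 1.0, 2.0, 4.0)]
```

The practical effect is that shallow soil drains quickly while deep layers drain slowly, which is what makes calibrating d (together with the subgrid-runoff parameter) matter for simulated discharge.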

  18. Predictive sensor based x-ray calibration using a physical model

    International Nuclear Information System (INIS)

    Fuente, Matias de la; Lutz, Peter; Wirtz, Dieter C.; Radermacher, Klaus

    2007-01-01

    Many computer assisted surgery systems are based on intraoperative x-ray images. To achieve reliable and accurate results, these images have to be calibrated with respect to geometric distortions, which can be divided into constant distortions and distortions caused by magnetic fields. Instead of using an intraoperative calibration phantom that has to be visible within each image, resulting in overlaid markers, the presented approach directly exploits the physical background of the distortions. Based on a computed physical model of an image intensifier and a magnetic field sensor, an online compensation of distortions can be achieved without the need for an intraoperative calibration phantom. The model has to be adapted once to each specific image intensifier through calibration, which is based on an optimization algorithm that systematically alters the physical model parameters until a minimal error is reached. Once calibrated, the model is able to predict the distortions caused by the measured magnetic field vector and build an appropriate dewarping function. The time needed for model calibration is not yet optimized and takes up to 4 h on a 3 GHz CPU. In contrast, the time needed for distortion correction is less than 1 s and therefore absolutely acceptable for intraoperative use. First evaluations showed that the model based dewarping algorithm significantly reduced the distortions of an XRII with a 21 cm FOV. The model was able to predict and compensate the distortions by approximately 80%, to a remaining error of 0.45 mm (max) and 0.19 mm (rms)

  19. The effects of model complexity and calibration period on groundwater recharge simulations

    Science.gov (United States)

    Moeck, Christian; Van Freyberg, Jana; Schirmer, Mario

    2017-04-01

    A significant number of groundwater recharge models exist that vary in terms of complexity (i.e., structure and parametrization). Typically, model selection and conceptualization are quite subjective and can be a key source of uncertainty in recharge simulations. Another source of uncertainty is the implicit assumption that model parameters calibrated over historical periods are also valid for the simulation period. To the best of our knowledge, there is no systematic evaluation of the effect of model complexity and calibration strategy on the performance of recharge models. To address this gap, we utilized a long-term recharge data set (20 years) from a large weighing lysimeter. We performed a differential split-sample test with four groundwater recharge models that vary in complexity. They were calibrated over six calibration periods with climatically contrasting conditions in a constrained Monte Carlo approach. Despite the climatically contrasting conditions, all models performed similarly well during calibration. During validation, however, a clear effect of model structure on model performance was evident. The more complex, physically based models predicted recharge best, even when the calibration and prediction periods had very different climatic conditions. In contrast, the simpler soil-water balance and lumped models performed poorly under such conditions, and for these models we found a strong dependency on the chosen calibration period. In particular, our analysis showed that this can have relevant implications when recharge models are used as decision-making tools in a broad range of applications (e.g., water availability, climate change impact studies, water resource management).
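The differential split-sample idea above can be shown in miniature: calibrate a (deliberately simplistic) one-parameter recharge model on one climatic period, then score it on a contrasting period with the Nash-Sutcliffe efficiency. All numbers below are synthetic; this is a sketch of the testing procedure, not of the study's models.

```python
# Differential split-sample test in miniature with a one-parameter
# recharge model: recharge ~= c * precipitation.

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 = perfect, < 0 = worse than the mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def calibrate_ratio(precip, recharge):
    """Least-squares recharge ratio c in recharge ~= c * precip."""
    return sum(p * r for p, r in zip(precip, recharge)) / sum(p * p for p in precip)

wet_p, wet_r = [10, 20, 30], [3, 6, 9]        # "wet" calibration period
dry_p, dry_r = [2, 4, 6], [0.5, 1.1, 1.6]     # "dry" validation period
c = calibrate_ratio(wet_p, wet_r)             # 0.3 exactly for these data
ns_val = nse(dry_r, [c * p for p in dry_p])   # validation skill
```

A structural deficit shows up exactly here: a model calibrated on the wet period can score well in calibration yet lose NSE on the contrasting dry period.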

  20. Reflectance of Mercury's Polar Regions: Calibration and Implications for Mercury's Volatiles

    Science.gov (United States)

    Neumann, G. A.; Sun, X.; Cao, A.; Deutsch, A. N.; Head, J. W.

    2018-05-01

    Calibration of laser altimeter reflectances under widely varying conditions is supported by laboratory data from an engineering simulator to address the distribution of volatile deposits in Mercury's polar cold traps.

  1. Multivariate Calibration Models for Sorghum Composition using Near-Infrared Spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Wolfrum, E.; Payne, C.; Stefaniak, T.; Rooney, W.; Dighe, N.; Bean, B.; Dahlberg, J.

    2013-03-01

    NREL developed calibration models based on near-infrared (NIR) spectroscopy coupled with multivariate statistics to predict compositional properties relevant to cellulosic biofuels production for a variety of sorghum cultivars. A robust calibration population was developed in an iterative fashion. The quality of models developed using the same sample geometry on two different types of NIR spectrometers and two different sample geometries on the same spectrometer did not vary greatly.

  2. [Outlier sample discriminating methods for building calibration model in melons quality detecting using NIR spectra].

    Science.gov (United States)

    Tian, Hai-Qing; Wang, Chun-Guang; Zhang, Hai-Jun; Yu, Zhi-Hong; Li, Jian-Kang

    2012-11-01

    Outlier samples strongly influence the precision of the calibration model in soluble solids content measurement of melons using NIR spectra. According to the possible sources of outlier samples, three methods (predicted concentration residual test; Chauvenet test; leverage and studentized residual test) were used to discriminate outliers. Nine suspicious outliers were detected in a calibration set of 85 fruit samples. Considering that the 9 suspicious outlier samples might include some non-outliers, they were returned to the model one by one to see whether they influenced the model and its prediction precision. In this way, 5 samples that were helpful to the model rejoined the calibration set, and a new model was developed with a correlation coefficient (r) of 0.889 and a root mean square error of calibration (RMSEC) of 0.601 degrees Brix. For 35 unknown samples, the root mean square error of prediction (RMSEP) was 0.854 degrees Brix. This model performed better than the model developed without eliminating any outliers from the calibration set (r = 0.797, RMSEC = 0.849 degrees Brix, RMSEP = 1.19 degrees Brix), and was more representative and stable than the model with all 9 samples eliminated from the calibration set (r = 0.892, RMSEC = 0.605 degrees Brix, RMSEP = 0.862 degrees Brix).
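Two of the outlier tests named above, leverage and studentized residuals, can be sketched for a simple univariate calibration line. The paper's NIR models are multivariate, so this is only an illustration of the statistics, with made-up data containing one planted outlier.

```python
import math

# Leverage and internally studentized residuals for simple linear regression.

def leverages(x):
    """Hat-matrix diagonal h_i = 1/n + (x_i - xbar)^2 / Sxx."""
    n, xbar = len(x), sum(x) / len(x)
    sxx = sum((xi - xbar) ** 2 for xi in x)
    return [1.0 / n + (xi - xbar) ** 2 / sxx for xi in x]

def studentized_residuals(x, y):
    n, xbar, ybar = len(x), sum(x) / len(x), sum(y) / len(y)
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    a = ybar - b * xbar
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    s2 = sum(r * r for r in resid) / (n - 2)       # residual variance
    h = leverages(x)
    return [r / math.sqrt(s2 * (1.0 - hi)) for r, hi in zip(resid, h)]

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [1.1, 2.0, 2.9, 4.2, 5.0, 9.0]   # last point is a planted outlier
t = studentized_residuals(x, y)      # largest |t| flags the suspect
```

Samples whose |t| exceeds a cutoff (commonly around 2 to 3) and/or whose leverage is large are the "suspicious" candidates that the paper then re-tests one by one.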

  3. Our calibrated model has poor predictive value: An example from the petroleum industry

    International Nuclear Information System (INIS)

    Carter, J.N.; Ballester, P.J.; Tavassoli, Z.; King, P.R.

    2006-01-01

    It is often assumed that once a model has been calibrated to measurements then it will have some level of predictive capability, although this may be limited. If the model does not have predictive capability then the assumption is that the model needs to be improved in some way. Using an example from the petroleum industry, we show that cases can exist where calibrated models have limited predictive capability. This occurs even when there is no modelling error present. It is also shown that the introduction of a small modelling error can make it impossible to obtain any models with useful predictive capability. We have been unable to find ways of identifying which calibrated models will have some predictive capacity and those which will not

  4. Our calibrated model has poor predictive value: An example from the petroleum industry

    Energy Technology Data Exchange (ETDEWEB)

    Carter, J.N. [Department of Earth Science and Engineering, Imperial College, London (United Kingdom)]. E-mail: j.n.carter@ic.ac.uk; Ballester, P.J. [Department of Earth Science and Engineering, Imperial College, London (United Kingdom); Tavassoli, Z. [Department of Earth Science and Engineering, Imperial College, London (United Kingdom); King, P.R. [Department of Earth Science and Engineering, Imperial College, London (United Kingdom)

    2006-10-15

    It is often assumed that once a model has been calibrated to measurements then it will have some level of predictive capability, although this may be limited. If the model does not have predictive capability then the assumption is that the model needs to be improved in some way. Using an example from the petroleum industry, we show that cases can exist where calibrated models have limited predictive capability. This occurs even when there is no modelling error present. It is also shown that the introduction of a small modelling error can make it impossible to obtain any models with useful predictive capability. We have been unable to find ways of identifying which calibrated models will have some predictive capacity and those which will not.

  5. Calibration of Crustal Historical Earthquakes from Intra-Carpathian Region of Romania

    Science.gov (United States)

    Oros, Eugen; Popa, Mihaela; Rogozea, Maria

    2017-12-01

    The main task of the presented study is to elaborate a set of mutual conversion relations between macroseismic intensity and magnitude, needed for the calibration of the historical crustal earthquakes of the Intra-Carpathian region of Romania, as a prerequisite for homogenizing the parametric catalogue of Romanian earthquakes. To achieve this goal, we selected a set of earthquakes for which we have quality macroseismic data and instrumentally determined moment magnitudes Mw. These seismic events were used to determine relations between Mw and the maximum/epicentral intensity and the isoseismal surface areas for I=3, I=4 and I=5: Mw = f(Imax/Io), Mw = f(Imax/Io, h), Mw = f(A3, A4, A5). We investigated several variants of such relationships and combinations, taking into account that the macroseismic data needed for re-evaluating historical earthquakes in the investigated region are available in several forms. Revision of the initial historical data yielded various kinds of information: 1) intensity data points (IDPs) assimilated or not with the epicentral intensity after analysis of their correlation with recent seismicity data and/or active tectonics/seismotectonics; 2) sets of intensities obtained in several localities (IDPs) with variable values whose maxima can be considered equal to the epicentral intensity (Io); 3) sets of intensities obtained in several localities (IDPs) without obvious maximum values assimilable to the epicentral intensity; 4) maps with isoseismals; 5) information on the areas in which the investigated earthquake was felt, i.e. the area of perceptibility (e.g. I=3 EMS during the day and I=4 EMS at night), or the surfaces corresponding to a well-defined intensity. The obtained relationships were validated using a set of earthquakes with instrumental source parameters (location, depth, Mw). 
These relationships lead to redundant results meaningful in
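A conversion relation of the kind above, for instance Mw = a*Io + c, reduces to an ordinary least-squares fit once (Io, Mw) calibration pairs are assembled. The sketch below uses synthetic data with made-up coefficients; the study's actual relations and coefficient values are not reproduced here.

```python
# Ordinary least-squares fit of a linear intensity-magnitude relation
# Mw = a * Io + c on synthetic calibration pairs.

def fit_line(x, y):
    """Return (slope, intercept) of the least-squares line y = a*x + c."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    a = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    return a, ybar - a * xbar

io = [4.0, 5.0, 6.0, 7.0, 8.0]          # epicentral intensities (EMS)
mw = [0.6 * i + 0.5 for i in io]        # synthetic "true" relation
a, c = fit_line(io, mw)                 # recovers a = 0.6, c = 0.5
```

Relations with an extra depth term, e.g. Mw = f(Io, h), follow the same scheme with a second regressor (typically log10(h)) and multiple linear regression.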

  6. Calibration of Mine Ventilation Network Models Using the Non-Linear Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Guang Xu

    2017-12-01

    Full Text Available Effective ventilation planning is vital to underground mining. To ensure stable operation of the ventilation system and to avoid airflow disorder, mine ventilation network (MVN) models have been widely used in simulating and optimizing the mine ventilation system. However, one of the challenges for MVN model simulation is that the simulated airflow distribution results do not match the measured data. To solve this problem, a simple and effective calibration method is proposed based on a non-linear optimization algorithm. The calibrated model not only brings the simulated airflow distribution into accordance with the on-site measured data, but also keeps the errors of other parameters within a minimum range. The proposed method was then applied to calibrate an MVN model in a real case, built from ventilation survey results using Ventsim software. Finally, airflow simulation experiments were carried out using data from before and after calibration, and their results were compared and analyzed. The simulated airflows in the calibrated model agreed much better with the ventilation survey data, verifying the effectiveness of the calibration method.
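The calibration idea can be shown on a single branch: with the square-law frictional pressure loss p = R * Q**2 (Atkinson's equation), the branch resistance R is adjusted so the simulated airflow reproduces the measured value. A real MVN calibration optimizes many interdependent branches at once; the single-branch closed form below is only a toy, and all numbers are invented.

```python
import math

# One-branch calibration sketch: choose the Atkinson resistance R so that
# the simulated airflow through the branch matches the surveyed airflow.

def airflow(p, R):
    """Airflow Q [m^3/s] through a branch with pressure drop p [Pa]."""
    return math.sqrt(p / R)

p_meas, q_meas = 500.0, 40.0       # surveyed pressure drop and airflow
R0 = 0.5                           # initial (uncalibrated) resistance
R_cal = p_meas / q_meas ** 2       # calibrated resistance: 0.3125
q_sim = airflow(p_meas, R_cal)     # simulated airflow now matches survey
```

In a full network this closed form disappears (branches share junction pressures), which is why the paper resorts to non-linear optimization over all branch resistances subject to Kirchhoff-type balance constraints.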

  7. More efficient evolutionary strategies for model calibration with watershed model for demonstration

    Science.gov (United States)

    Baggett, J. S.; Skahill, B. E.

    2008-12-01

    Evolutionary strategies allow automatic calibration of more complex models than traditional gradient based approaches, but they are more computationally intensive. We present several efficiency enhancements for evolution strategies, many of which are not new, but when combined have been shown to dramatically decrease the number of model runs required for calibration of synthetic problems. To reduce the number of expensive model runs we employ a surrogate objective function for an adaptively determined fraction of the population at each generation (Kern et al., 2006). We demonstrate improvements to the adaptive ranking strategy that increase its efficiency while sacrificing little reliability and further reduce the number of model runs required in densely sampled parts of parameter space. Furthermore, we include a gradient individual in each generation that is usually not selected when the search is in a global phase or when the derivatives are poorly approximated, but when selected near a smooth local minimum can dramatically increase convergence speed (Tahk et al., 2007). Finally, the selection of the gradient individual is used to adapt the size of the population near local minima. We show, by incorporating these enhancements into the Covariance Matrix Adaptation Evolution Strategy (CMAES; Hansen, 2006), that their synergistic effect is greater than their individual parts. This hybrid evolutionary strategy exploits smooth structure when it is present but degrades to an ordinary evolutionary strategy, at worst, if smoothness is not present. Calibration of 2D-3D synthetic models with the modified CMAES requires approximately 10%-25% of the model runs of ordinary CMAES. Preliminary demonstration of this hybrid strategy will be shown for watershed model calibration problems. Hansen, N. (2006). The CMA Evolution Strategy: A Comparing Review. In J.A. Lozano, P. Larrañaga, I. Inza and E. Bengoetxea (Eds.). Towards a new evolutionary computation. Advances in estimation of
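To make the mutate/select/adapt loop that CMA-ES elaborates concrete, the sketch below shows a far simpler relative: a (1+1) evolution strategy with a 1/5th-success step-size rule, minimizing a 2-D sphere function. The step-size factors and iteration count are arbitrary choices for illustration, not values from the abstract.

```python
import random

# Bare-bones (1+1)-ES with 1/5th-success step-size adaptation.

def sphere(x):
    """Toy objective: sum of squares, minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def one_plus_one_es(f, x0, sigma=1.0, iters=600, seed=1):
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        cand = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fc = f(cand)
        if fc < fx:                 # greedy (1+1) selection
            x, fx = cand, fc
            sigma *= 1.22           # success: widen the search
        else:
            sigma *= 0.95           # failure: narrow it (~1/5th rule)
    return x, fx

best, fbest = one_plus_one_es(sphere, [3.0, 3.0])
```

CMA-ES replaces the single scalar sigma with a full covariance matrix adapted from the successful steps, which is what lets it follow curved, ill-conditioned valleys that stall this scalar-step version.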

  8. Using genetic algorithm and TOPSIS for Xinanjiang model calibration with a single procedure

    Science.gov (United States)

    Cheng, Chun-Tian; Zhao, Ming-Yan; Chau, K. W.; Wu, Xin-Yu

    2006-01-01

    Genetic Algorithm (GA) is globally oriented in searching and thus useful in optimizing multiobjective problems, especially where the objective functions are ill-defined. Conceptual rainfall-runoff models, which aim at predicting streamflow from knowledge of precipitation over a catchment, have become a basic tool for flood forecasting. The parameter calibration of a conceptual model usually involves multiple criteria for judging performance against observed data. However, it is often difficult to derive all objective functions for the parameter calibration problem of a conceptual model. Thus, a new method for the multiple-criteria parameter calibration problem, which combines GA with TOPSIS (technique for order performance by similarity to ideal solution) for the Xinanjiang model, is presented. This study is an immediate further development of the authors' previous research (Cheng, C.T., Ou, C.P., Chau, K.W., 2002. Combining a fuzzy optimal model with a genetic algorithm to solve multi-objective rainfall-runoff model calibration. Journal of Hydrology, 268, 72-86), whose obvious disadvantages are that it splits the whole procedure into two parts and makes it difficult to grasp the overall best behavior of the model during the calibration procedure. The current method integrates the two parts of Xinanjiang rainfall-runoff model calibration, simplifying the procedures of model calibration and validation and more clearly revealing the intrinsic behavior of the observed data as a whole. Comparison with the two-step procedure shows that the current methodology gives similar results, is likewise feasible and robust, but is simpler and easier to apply in practice.
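The TOPSIS step that the paper couples with the GA can be sketched on its own: given candidate parameter sets scored on several criteria, rank each candidate by its relative closeness to the ideal solution. The sketch below assumes all criteria are "smaller is better" (e.g. error measures) and weights them equally, which are simplifying assumptions, not the paper's setup.

```python
import math

# TOPSIS ranking: higher score = closer to the ideal, farther from the worst.

def topsis(matrix):
    """Return closeness-to-ideal scores for each row of a decision matrix
    whose criteria are all minimized (equal weights assumed)."""
    ncrit = len(matrix[0])
    # vector-normalize each criterion column
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncrit)]
    m = [[row[j] / norms[j] for j in range(ncrit)] for row in matrix]
    ideal = [min(col) for col in zip(*m)]    # best (min) per criterion
    worst = [max(col) for col in zip(*m)]    # worst (max) per criterion
    scores = []
    for row in m:
        d_best = math.sqrt(sum((v - i) ** 2 for v, i in zip(row, ideal)))
        d_worst = math.sqrt(sum((v - w) ** 2 for v, w in zip(row, worst)))
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Three candidate parameter sets scored on two error criteria:
errors = [[0.10, 0.20], [0.30, 0.40], [0.12, 0.21]]
scores = topsis(errors)   # candidate 0 is best on both criteria
```

Inside the combined GA-TOPSIS procedure, such a score can serve directly as the fitness that the GA maximizes, which is what removes the need for a separate second ranking stage.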

  9. Comparison between two calibration models of a measurement system for thyroid monitoring; Comparacao entre dois modelos para calibracao de um sistema de medidas dedicado ao monitoramento de tireoide

    Energy Technology Data Exchange (ETDEWEB)

    Venturini, Luzia [Instituto de Pesquisas Energeicas e Nucleares (IPEN), Sao Paulo, Sp (Brazil). Dept. de Metrologia das Radiacoes]. E-mail: lventur@net.ipen.br

    2005-07-01

    This paper compares two theoretical calibrations that use two mathematical models to represent the neck region. In the first model, the thyroid is considered simply as a region limited by two concentric cylinders whose dimensions are those of the trachea and the neck. The second model uses functional forms to obtain a better representation of the thyroid geometry. Efficiency values are obtained using Monte Carlo simulation. (author)
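The use of Monte Carlo simulation to obtain efficiencies can be illustrated with a drastically simplified geometry: the geometric efficiency of a disk "detector" for an isotropic point source on its axis, which has a closed-form answer to check against. The realistic neck-phantom geometries of the paper are far more involved; everything below is an invented toy.

```python
import math
import random

# Monte Carlo estimate of geometric efficiency: the fraction of isotropic
# emissions from an on-axis point source that intersect a disk of radius
# `radius` at distance `dist`.

def geometric_efficiency(dist, radius, n=200_000, seed=7):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        # isotropic direction: cos(theta) uniform on [-1, 1]
        cos_t = rng.uniform(-1.0, 1.0)
        if cos_t <= 0.0:
            continue                      # emitted away from the disk
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        # the ray crosses the detector plane at radial offset dist*tan(theta)
        if dist * sin_t / cos_t <= radius:
            hits += 1
    return hits / n

# Analytic solid-angle fraction for comparison: (1 - d/sqrt(d^2 + r^2)) / 2
eff = geometric_efficiency(dist=10.0, radius=5.0)
```

For detector calibration the same machinery is extended with photon transport (attenuation, scatter, detector response), but the estimator stays the same: counts scored over histories simulated.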

  10. Nonlinear propagation model for ultrasound hydrophones calibration in the frequency range up to 100 MHz.

    Science.gov (United States)

    Radulescu, E G; Wójcik, J; Lewin, P A; Nowicki, A

    2003-06-01

    To facilitate the implementation and verification of the new ultrasound hydrophone calibration techniques described in the companion paper (somewhere in this issue) a nonlinear propagation model was developed. A brief outline of the theoretical considerations is presented and the model's advantages and disadvantages are discussed. The results of simulations yielding spatial and temporal acoustic pressure amplitude are also presented and compared with those obtained using KZK and Field II models. Excellent agreement between all models is evidenced. The applicability of the model in discrete wideband calibration of hydrophones is documented in the companion paper somewhere in this volume.

  11. Modelling Machine Tools using Structure Integrated Sensors for Fast Calibration

    Directory of Open Access Journals (Sweden)

    Benjamin Montavon

    2018-02-01

    Full Text Available Monitoring of the relative deviation between commanded and actual tool tip position, which limits the volumetric performance of the machine tool, enables the use of contemporary methods of compensation to reduce tolerance mismatch and the uncertainties of on-machine measurements. The development of a primarily optical sensor setup capable of being integrated into the machine structure without limiting its operating range is presented. The use of a frequency-modulating interferometer and photosensitive arrays in combination with a Gaussian laser beam allows for fast and automated online measurements of the axes’ motion errors and thermal conditions with comparable accuracy, lower cost, and smaller dimensions as compared to state-of-the-art optical measuring instruments for offline machine tool calibration. The development is tested through simulation of the sensor setup based on raytracing and Monte-Carlo techniques.

  12. Modeling and Calibration of a Novel One-Mirror Galvanometric Laser Scanner

    Directory of Open Access Journals (Sweden)

    Chengyi Yu

    2017-01-01

    Full Text Available A laser stripe sensor has limited application when a point cloud of geometric samples on the surface of the object needs to be collected, so a galvanometric laser scanner is designed by using a one-mirror galvanometer element as its mechanical device to drive the laser stripe to sweep along the object. A novel mathematical model is derived for the proposed galvanometer laser scanner without any position assumptions and then a model-driven calibration procedure is proposed. Compared with available model-driven approaches, the influence of machining and assembly errors is considered in the proposed model. Meanwhile, a plane-constraint-based approach is proposed to extract a large number of calibration points effectively and accurately to calibrate the galvanometric laser scanner. Repeatability and accuracy of the galvanometric laser scanner are evaluated on the automobile production line to verify the efficiency and accuracy of the proposed calibration method. Experimental results show that the proposed calibration approach yields similar measurement performance compared with a look-up table calibration method.

  13. Calibration of an estuarine sediment transport model to sediment fluxes as an intermediate step for simulation of geomorphic evolution

    Science.gov (United States)

    Ganju, N.K.; Schoellhamer, D.H.

    2009-01-01

    Modeling geomorphic evolution in estuaries is necessary to model the fate of legacy contaminants in the bed sediment and the effect of climate change, watershed alterations, sea level rise, construction projects, and restoration efforts. Coupled hydrodynamic and sediment transport models used for this purpose typically are calibrated to water level, currents, and/or suspended-sediment concentrations. However, small errors in these tidal-timescale models can accumulate to cause major errors in geomorphic evolution, which may not be obvious. Here we present an intermediate step towards simulating decadal-timescale geomorphic change: calibration to estimated sediment fluxes (mass/time) at two cross-sections within an estuary. Accurate representation of sediment fluxes gives confidence in representation of sediment supply to and from the estuary during those periods. Several years of sediment flux data are available for the landward and seaward boundaries of Suisun Bay, California, the landward-most embayment of San Francisco Bay. Sediment flux observations suggest that episodic freshwater flows export sediment from Suisun Bay, while gravitational circulation during the dry season imports sediment from seaward sources. The Regional Oceanic Modeling System (ROMS), a three-dimensional coupled hydrodynamic/sediment transport model, was adapted for Suisun Bay, for the purposes of hindcasting 19th and 20th century bathymetric change, and simulating geomorphic response to sea level rise and climatic variability in the 21st century. The sediment transport parameters were calibrated using the sediment flux data from 1997 (a relatively wet year) and 2004 (a relatively dry year). The remaining years of data (1998, 2002, 2003) were used for validation. The model represents the inter-annual and annual sediment flux variability, while net sediment import/export is accurately modeled for three of the five years. 
The use of sediment flux data for calibrating an estuarine geomorphic

  14. CALIBRATING THE JOHNSON-HOLMQUIST CERAMIC MODEL FOR SIC USING CTH

    International Nuclear Information System (INIS)

    Cazamias, J. U.; Bilyk, S. R.

    2009-01-01

    The Johnson-Holmquist ceramic material model has been calibrated and successfully applied to numerically simulate ballistic events using the Lagrangian code EPIC. While the majority of the constants are "physics" based, two of the constants for the failed material response are calibrated using ballistic experiments conducted on a confined cylindrical ceramic target. The maximum strength of the failed ceramic is calibrated by matching the penetration velocity. The second refers to the equivalent plastic strain at failure under constant pressure and is calibrated using the dwell time. Use of these two constants in the CTH Eulerian hydrocode does not predict the ballistic response. This difference may be due to the phenomenological nature of the model and the different numerical schemes used by the codes. This paper determines the aforementioned material constants for SiC suitable for simulating ballistic events using CTH.

  15. A case study on robust optimal experimental design for model calibration of ω-Transaminase

    DEFF Research Database (Denmark)

    Daele, Timothy, Van; Van Hauwermeiren, Daan; Ringborg, Rolf Hoffmeyer

    the experimental space. However, it is expected that more informative experiments can be designed to increase the confidence of the parameter estimates. Therefore, we apply Optimal Experimental Design (OED) to the calibrated model of Shin and Kim (1998). The total number of samples was retained to allow fair......” parameter values are not known before finishing the model calibration. However, it is important that the chosen parameter values are close to the real parameter values, otherwise the OED can possibly yield non-informative experiments. To counter this problem, one can use robust OED. The idea of robust OED......Proper calibration of models describing enzyme kinetics can be quite challenging. This is especially the case for more complex models like transaminase models (Shin and Kim, 1998). The latter fitted model parameters, but the confidence on the parameter estimation was not derived. Hence...

  16. Radiometric modeling and calibration of the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) ground based measurement experiment

    Science.gov (United States)

    Tian, Jialin; Smith, William L.; Gazarik, Michael J.

    2008-12-01

    The ultimate remote sensing benefits of high-resolution infrared radiance spectrometers will be realized with their geostationary satellite implementation in the form of imaging spectrometers. This will enable dynamic features of the atmosphere's thermodynamic fields and of pollutant and greenhouse gas constituents to be observed, for revolutionary improvements in weather forecasts and more accurate air quality and climate predictions. As an important step toward realizing this application objective, the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) Engineering Demonstration Unit (EDU) was successfully developed under the NASA New Millennium Program, 2000-2006. The GIFTS-EDU instrument employs three focal plane arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The GIFTS calibration is achieved using internal blackbody calibration references at ambient (260 K) and hot (286 K) temperatures. In this paper, we introduce a refined calibration technique that utilizes Principal Component (PC) analysis to compensate for instrument distortions and artifacts, thereby enhancing the absolute calibration accuracy. This method is applied to data collected during the GIFTS Ground Based Measurement (GBM) experiment, together with simultaneous observations by the accurately calibrated Atmospheric Emitted Radiance Interferometer (AERI), both zenith viewing the sky through the same external scene mirror at ten-minute intervals throughout a cloudless day at Logan, Utah, on September 13, 2006. The accurately calibrated GIFTS radiances are produced using the first four PC scores in the GIFTS-AERI regression model. Temperature and moisture profiles retrieved from the PC-calibrated GIFTS radiances are verified against radiosonde measurements collected throughout the GIFTS sky measurement period. Using the GIFTS GBM calibration model, we compute the calibrated radiances from data

  17. Impact of the calibration period on the conceptual rainfall-runoff model parameter estimates

    Science.gov (United States)

    Todorovic, Andrijana; Plavsic, Jasna

    2015-04-01

    A conceptual rainfall-runoff model is defined by its structure and parameters, which are commonly inferred through model calibration. Parameter estimates depend on the objective function(s), the optimisation method, and the calibration period. Calibration over different periods may yield dissimilar parameter estimates, while model efficiency decreases outside the calibration period. The problem of model (parameter) transferability, which conditions the reliability of hydrologic simulations, has been investigated for decades. In this paper, the dependence of parameter estimates and model performance on the calibration period is analysed. The main question addressed is: are there any changes in optimised parameters and model efficiency that can be linked to changes in hydrologic or meteorological variables (flow, precipitation and temperature)? The conceptual, semi-distributed HBV-light model is calibrated over five-year periods shifted by one year (sliding time windows). The length of the calibration periods is selected to enable identification of all parameters. One water year of model warm-up precedes every simulation, which starts at the beginning of a water year. The model is calibrated using the built-in GAP optimisation algorithm. The objective function used for calibration is composed of the Nash-Sutcliffe coefficient for flows, the Nash-Sutcliffe coefficient for logarithms of flows, and the volumetric error, all of which participate in the composite objective function with approximately equal weights. The same prior parameter ranges are used in all simulations. The model is calibrated against flows observed at the Slovac stream gauge on the Kolubara River in Serbia (records from 1954 to 2013). There are no trends in precipitation or flows; however, there is a statistically significant increasing trend in temperatures in this catchment. Parameter variability across the calibration periods is quantified in terms of standard deviations of normalised parameters, enabling detection of the most variable parameters
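
A composite objective of the kind described (approximately equal weights on the Nash-Sutcliffe coefficient for flows, for log flows, and on volumetric error) can be sketched as below. This is an illustrative formulation, not the exact HBV-light/GAP implementation:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model
    is no better than predicting the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def composite_objective(obs, sim, eps=0.01):
    """Equal-weight combination of NSE on flows, NSE on log flows, and
    (one minus) the absolute volumetric error; higher is better."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    nse_q = nse(obs, sim)
    nse_logq = nse(np.log(obs + eps), np.log(sim + eps))  # eps avoids log(0)
    vol_err = abs(sim.sum() - obs.sum()) / obs.sum()
    return (nse_q + nse_logq + (1.0 - vol_err)) / 3.0
```

A perfect simulation scores 1; the three terms penalise high-flow errors, low-flow errors, and overall bias, respectively.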

  18. The Wally plot approach to assess the calibration of clinical prediction models.

    Science.gov (United States)

    Blanche, Paul; Gerds, Thomas A; Ekstrøm, Claus T

    2017-12-06

    A prediction model is calibrated if, roughly, for any percentage x we can expect that x subjects out of 100 experience the event among all subjects that have a predicted risk of x%. Typically, the calibration assumption is assessed graphically, but in practice it is often challenging to judge whether a "disappointing" calibration plot is the consequence of a departure from the calibration assumption or simply "bad luck" due to sampling variability. To address this issue, we propose a graphical approach which enables visualization of how much a calibration plot agrees with the calibration assumption. The approach is mainly based on the idea of generating new plots which mimic the available data under the calibration assumption. The method handles the common non-trivial situations in which the data contain censored observations and occurrences of competing events. This is done by building on ideas from constrained non-parametric maximum likelihood estimation methods. Two examples from large cohort data illustrate our proposal. The 'wally' R package is provided to make the methodology easily usable.
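
A minimal sketch of the underlying idea, ignoring the censoring and competing-risk machinery the paper actually handles: bin subjects by predicted risk, and generate comparison curves by resampling outcomes under the calibration assumption. Function names are illustrative, not from the 'wally' package:

```python
import numpy as np

rng = np.random.default_rng(0)

def binned_calibration(pred_risk, outcome, n_bins=10):
    """Observed event proportion within each predicted-risk bin."""
    pred_risk = np.asarray(pred_risk, float)
    outcome = np.asarray(outcome)
    bins = np.clip((pred_risk * n_bins).astype(int), 0, n_bins - 1)
    return np.array([outcome[bins == b].mean() if np.any(bins == b) else np.nan
                     for b in range(n_bins)])

def null_plots(pred_risk, n_plots=8):
    """Generate comparison curves that mimic the data under the calibration
    assumption, by resampling outcomes as Bernoulli(predicted risk)."""
    return [binned_calibration(pred_risk, rng.binomial(1, pred_risk))
            for _ in range(n_plots)]
```

If the observed calibration curve is indistinguishable from the resampled ones, a "disappointing" plot is plausibly just sampling variability.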

  19. Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Robertson, Joseph [National Renewable Energy Lab. (NREL), Golden, CO (United States)]; Polly, Ben [National Renewable Energy Lab. (NREL), Golden, CO (United States)]; Collis, Jon [Colorado School of Mines, Golden, CO (United States)]

    2013-09-01

    This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define "explicit" input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.

  1. On the Bayesian calibration of computer model mixtures through experimental data, and the design of predictive models

    Science.gov (United States)

    Karagiannis, Georgios; Lin, Guang

    2017-08-01

    For many real systems, several computer models may exist with different physics and predictive abilities. To achieve more accurate simulations/predictions, it is desirable for these models to be properly combined and calibrated. We propose the Bayesian calibration of computer model mixture method, which relies on the idea of representing the real system output as a mixture of the available computer model outputs with unknown input-dependent weight functions. The method builds a fully Bayesian predictive model as an emulator for the real system output by combining, weighting, and calibrating the available models in the Bayesian framework. Moreover, it fits a mixture of calibrated computer models that can be used by the domain scientist as a means to combine the available computer models, in a flexible and principled manner, and perform reliable simulations. It can address realistic cases where one model may be more accurate than the others at different input values, because the mixture weights, indicating the contribution of each model, are functions of the input. Inference on the calibration parameters can consider multiple computer models associated with different physics. The method does not require knowledge of the fidelity order of the models. We provide a technique able to mitigate the computational overhead due to the consideration of multiple computer models that is suitable for the mixture model framework. We implement the proposed method in a real-world application involving the Weather Research and Forecasting large-scale climate model.

  2. Intercomparison of hydrological model structures and calibration approaches in climate scenario impact projections

    Science.gov (United States)

    Vansteenkiste, Thomas; Tavakoli, Mohsen; Ntegeka, Victor; De Smedt, Florimond; Batelaan, Okke; Pereira, Fernando; Willems, Patrick

    2014-11-01

    The objective of this paper is to investigate the effects of hydrological model structure and calibration on climate change impact results in hydrology. The uncertainty in the hydrological impact results is assessed by the relative change in runoff volumes and in peak and low flow extremes between historical and future climate conditions. The effect of the hydrological model structure is examined through the use of five hydrological models with different spatial resolutions and process descriptions, applied to a medium-sized catchment in Belgium. The models range from the lumped conceptual NAM, PDM and VHM models, through the intermediately detailed and distributed WetSpa model, to the fully distributed MIKE SHE model. The latter accounts for 3D groundwater processes and interacts bi-directionally with a full hydrodynamic MIKE 11 river model. After careful manual calibration of these models, accounting for the accuracy of the peak and low flow extremes and runoff subflows, and for the changes in these extremes under changing rainfall conditions, the five models respond in a similar way to the climate scenarios over Belgium. Future projections of peak flows are highly uncertain, with expected increases as well as decreases depending on the climate scenario. The projections of future low flows are more uniform: low flows decrease (by up to 60%) for all models and all climate scenarios. However, the uncertainties in the impact projections are high, mainly in the dry season. With respect to model structural uncertainty, the PDM model simulates significantly higher runoff peak flows under future wet scenarios, which is explained by its specific model structure. For the low flow extremes, the MIKE SHE model projects significantly lower low flows under dry scenario conditions in comparison with the other models, probably due to its very different process descriptions for the groundwater component and the groundwater-river interactions. The effect of the model

  3. Regional Reproducibility of BOLD Calibration Parameter M, OEF and Resting-State CMRO2 Measurements with QUO2 MRI.

    Directory of Open Access Journals (Sweden)

    Isabelle Lajoie

    Full Text Available The current generation of calibrated MRI methods goes beyond simple localization of task-related responses to allow the mapping of resting-state cerebral metabolic rate of oxygen (CMRO2) in micromolar units and estimation of the oxygen extraction fraction (OEF). Prior to the adoption of such techniques in neuroscience research applications, knowledge about the precision and accuracy of absolute estimates of CMRO2 and OEF is crucial and remains unexplored to this day. In this study, we addressed the question of methodological precision by assessing the regional inter-subject variance and intra-subject reproducibility of the BOLD calibration parameter M, OEF, O2 delivery and absolute CMRO2 estimates derived from a state-of-the-art calibrated BOLD technique, the QUantitative O2 (QUO2) approach. We acquired simultaneous measurements of CBF and R2* at rest and during periods of hypercapnia (HC) and hyperoxia (HO) in two separate scan sessions within 24 hours using a clinical 3 T MRI scanner. Maps of M, OEF, oxygen delivery and CMRO2 were estimated from the measured end-tidal O2, CBF0, CBFHC/HO and R2*HC/HO. Variability was assessed by computing the between-subject coefficients of variation (bwCV) and within-subject CVs (wsCV) in seven ROIs. Averaged over all tests, the GM values of CBF0, M, OEF, O2 delivery and CMRO2 were: 49.5 ± 6.4 mL/100 g/min, 4.69 ± 0.91%, 0.37 ± 0.06, 377 ± 51 μmol/100 g/min and 143 ± 34 μmol/100 g/min, respectively. The variability of parameter estimates was found to be lowest when averaged throughout all GM, with a general trend toward higher CVs when averaged over smaller regions. Among the MRI measurements, the most reproducible across scans was R2*0 (wsCVGM = 0.33%), along with CBF0 (wsCVGM = 3.88%) and R2*HC (wsCVGM = 6.7%). CBFHC and R2*HO were found to have a higher intra-subject variability (wsCVGM = 22.4% and wsCVGM = 16%, respectively), which is likely due to propagation of random measurement errors, especially for CBFHC due to the
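
The reproducibility metrics reported above can be computed as sketched below; the bwCV/wsCV definitions used here are common ones and may differ in detail from the paper's exact formulas:

```python
import numpy as np

def between_subject_cv(values):
    """bwCV: standard deviation across subjects divided by the group mean."""
    v = np.asarray(values, float)
    return v.std(ddof=1) / v.mean()

def within_subject_cv(scan1, scan2):
    """wsCV for a test-retest pair: root-mean-square of per-subject SDs
    divided by the grand mean (one common definition)."""
    pairs = np.stack([scan1, scan2], axis=1).astype(float)
    per_subject_sd = pairs.std(axis=1, ddof=1)
    return np.sqrt(np.mean(per_subject_sd ** 2)) / pairs.mean()
```

Both are unitless, so they can be compared across parameters measured in different units (CBF, M, OEF, CMRO2).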

  4. Nitrous oxide emissions from cropland: a procedure for calibrating the DayCent biogeochemical model using inverse modelling

    Science.gov (United States)

    Rafique, Rashad; Fienen, Michael N.; Parkin, Timothy B.; Anex, Robert P.

    2013-01-01

    DayCent is a biogeochemical model of intermediate complexity widely used to simulate greenhouse gases (GHG), soil organic carbon and nutrients in crop, grassland, forest and savannah ecosystems. Although this model has been applied to a wide range of ecosystems, it is still typically parameterized through a traditional "trial and error" approach and has not been calibrated using statistical inverse modelling (i.e. algorithmic parameter estimation). The aim of this study is to establish and demonstrate a procedure for calibration of DayCent to improve estimation of GHG emissions. We coupled DayCent with the parameter estimation (PEST) software for inverse modelling. The PEST software can be used for calibration through regularized inversion as well as for model sensitivity and uncertainty analysis. The DayCent model was analysed and calibrated using N2O flux data collected over 2 years at the Iowa State University Agronomy and Agricultural Engineering Research Farms, Boone, IA. Crop year 2003 data were used for model calibration and 2004 data for validation. The optimization of DayCent model parameters using PEST significantly reduced model residuals relative to the default DayCent parameter values. Parameter estimation improved the model performance by reducing the sum of weighted squared residuals between measured and modelled outputs by up to 67%. For the calibration period, simulation with the default model parameter values underestimated the mean daily N2O flux by 98%; after parameter estimation, the model underestimated the mean daily fluxes by 35%. During the validation period, the calibrated model reduced the sum of weighted squared residuals by 20% relative to the default simulation. The sensitivity analysis provides important insights into the model structure, offering guidance for model improvement.
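
The PEST-style objective quoted above (sum of weighted squared residuals, and the percent reduction achieved by calibration) is simple to state in code; a minimal illustration:

```python
import numpy as np

def weighted_ssr(observed, simulated, weights):
    """PEST-style objective function: sum of weighted squared residuals."""
    r = np.asarray(observed, float) - np.asarray(simulated, float)
    return float(np.sum((np.asarray(weights, float) * r) ** 2))

def percent_reduction(phi_default, phi_calibrated):
    """Improvement of the calibrated model over the default, in percent."""
    return 100.0 * (phi_default - phi_calibrated) / phi_default
```

The weights let heterogeneous observation types (fluxes, soil carbon, temperature) contribute to a single objective on comparable scales.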

  5. Identifying influential data points in hydrological model calibration and their impact on streamflow predictions

    Science.gov (United States)

    Wright, David; Thyer, Mark; Westra, Seth

    2015-04-01

    Highly influential data points are those that have a disproportionately large impact on model performance, parameters and predictions. However, in current hydrological modelling practice the relative influence of individual data points on hydrological model calibration is not commonly evaluated. This presentation illustrates and evaluates several influence diagnostic tools that hydrological modellers can use to assess the relative influence of data, and discusses the feasibility and importance of including influence detection diagnostics as a standard tool in hydrological model calibration. Two classes of influence diagnostics are evaluated: (1) computationally demanding numerical "case deletion" diagnostics; and (2) computationally efficient analytical diagnostics based on Cook's distance. These diagnostics are compared against hydrologically oriented diagnostics that describe changes in the model parameters (measured through the Mahalanobis distance), performance (objective function displacement) and predictions (mean and maximum streamflow). The influence diagnostics are applied to two case studies: a stage/discharge rating curve model, and a conceptual rainfall-runoff model (GR4J). Removing a single data point from the calibration resulted in differences in mean flow predictions of up to 6% for the rating curve model, and differences in mean and maximum flow predictions of up to 10% and 17%, respectively, for the hydrological model. When using the Nash-Sutcliffe efficiency in calibration, the computationally cheaper Cook's distance metrics produce results similar to the case-deletion metrics at a fraction of the computational cost. However, Cook's distance is adapted from linear regression, with inherent assumptions about the data, and is therefore less flexible than case deletion. Influential point detection diagnostics show great potential to improve current hydrological modelling practices by identifying highly influential data points. The findings of this
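
For a plain linear model (such as a rating curve linearized in log space), Cook's distance can be computed analytically from a single fit, which is why it is so much cheaper than refitting with each point deleted. A sketch on synthetic data with one gross outlier:

```python
import numpy as np

def cooks_distance(x, y):
    """Cook's distance for each observation of an ordinary least squares
    fit of y on x (with intercept)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, p = X.shape
    h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)  # leverage values
    s2 = resid @ resid / (n - p)                   # residual variance
    return resid ** 2 / (p * s2) * h / (1.0 - h) ** 2

# a clean line with one gross outlier at the last point
x = np.arange(10.0)
y = 2.0 * x
y[9] += 10.0
d = cooks_distance(x, y)   # d is largest for the outlier
```

Points with both large residuals and high leverage dominate the measure, mirroring what case deletion would reveal.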

  6. Calibration of uncertain inputs to computer models using experimentally measured quantities and the BMARS emulator

    International Nuclear Information System (INIS)

    Stripling, H.F.; McClarren, R.G.; Kuranz, C.C.; Grosskopf, M.J.; Rutter, E.; Torralva, B.R.

    2011-01-01

    We present a method for calibrating the uncertain inputs to a computer model using available experimental data. The goal of the procedure is to produce posterior distributions of the uncertain inputs such that, when samples from the posteriors are used as inputs to future model runs, the model is more likely to replicate (or predict) the experimental response. The calibration is performed by sampling the space of the uncertain inputs, using the computer model (or, more likely, an emulator for the computer model) to assign weights to the samples, and applying the weights to produce the posterior distributions and generate predictions of new experiments within confidence bounds. The method is similar to Markov chain Monte Carlo (MCMC) calibration with independent sampling, except that we generate samples beforehand and replace the candidate acceptance routine with a weighting scheme. We apply our method to the calibration of a Hyades 2D model of laser energy deposition in beryllium, employing a Bayesian Multivariate Adaptive Regression Splines (BMARS) emulator as a surrogate for Hyades 2D. We treat a range of uncertainties in our system, including uncertainties in the experimental inputs, experimental measurement error, and systematic experimental timing errors. The results of the calibration are posterior distributions that both agree with intuition and improve the accuracy and decrease the uncertainty in experimental predictions. (author)
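
The weighting scheme can be illustrated on a toy problem: draw samples from the prior beforehand, weight each by a Gaussian likelihood of the measured response, and use the normalized weights as the posterior. Everything here (the quadratic "model", the observation and its error) is invented for illustration and is not the paper's Hyades/BMARS setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def weight_samples(prior_samples, model, obs, obs_sigma):
    """Assign each pre-drawn prior sample a normalized weight from a
    Gaussian likelihood, replacing an MCMC accept/reject step."""
    sims = np.array([model(s) for s in prior_samples])
    logw = -0.5 * ((sims - obs) / obs_sigma) ** 2
    w = np.exp(logw - logw.max())   # stabilize before normalizing
    return w / w.sum()

model = lambda theta: theta ** 2          # stand-in for the emulator
samples = rng.uniform(0.0, 3.0, 5000)     # prior: theta ~ U(0, 3)
weights = weight_samples(samples, model, obs=4.0, obs_sigma=0.2)
posterior_mean = float(np.sum(weights * samples))  # concentrates near 2
```

Because the samples are drawn up front, the expensive model (or emulator) evaluations parallelize trivially, unlike a sequential MCMC chain.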

  7. A system-theory-based model for monthly river runoff forecasting: model calibration and optimization

    Directory of Open Access Journals (Sweden)

    Wu Jianhua

    2014-03-01

    Full Text Available River runoff is not only a crucial part of the global water cycle, but it is also an important source for hydropower and an essential element of water balance. This study presents a system-theory-based model for river runoff forecasting taking the Hailiutu River as a case study. The forecasting model, designed for the Hailiutu watershed, was calibrated and verified by long-term precipitation observation data and groundwater exploitation data from the study area. Additionally, frequency analysis, taken as an optimization technique, was applied to improve prediction accuracy. Following model optimization, the overall relative prediction errors are below 10%. The system-theory-based prediction model is applicable to river runoff forecasting, and following optimization by frequency analysis, the prediction error is acceptable.

  8. Airport choice model in multiple airport regions

    Directory of Open Access Journals (Sweden)

    Claudia Muñoz

    2017-02-01

    Full Text Available Purpose: This study aims to analyze the travel choices made by air transportation users in multi-airport regions, a crucial component when planning passenger redistribution policies. The purpose is to find a utility function that identifies the variables influencing users' choice of airport on routes to the main cities in Colombia. Design/methodology/approach: This research builds a Multinomial Logit Model (MNL), based on the theory of utility maximization and on data from revealed and stated preference surveys of users residing in the metropolitan area of the Aburrá Valley (Colombia). This zone is the only one in Colombia with two neighboring airports serving domestic flights. The airports included in the modeling process were Enrique Olaya Herrera (EOH) Airport and José María Córdova (JMC) Airport. Several model structures were tested, and the MNL proved the most significant, revealing that the common variables affecting passenger airport choice include the airfare, the cost of traveling to the airport, and the access time to the airport. Findings and Originality/value: The calibrated airport choice model is a valid and powerful tool for calculating the probability of each analyzed airport being chosen for domestic flights in Colombia, taking into account the specific characteristics of each attribute in the utility function. In addition, these probabilities will be used to calculate future market shares of the two airports considered in this study, generating a support tool for airport and airline marketing policies.
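
Once a utility function is estimated, MNL choice probabilities follow from the standard logit formula. The coefficients and attribute values below are purely illustrative, not the paper's estimates:

```python
import numpy as np

def mnl_probabilities(utilities):
    """Multinomial logit choice probabilities:
    P(i) = exp(V_i) / sum_j exp(V_j)."""
    v = np.asarray(utilities, float)
    e = np.exp(v - v.max())   # subtract max for numerical stability
    return e / e.sum()

# hypothetical coefficients for airfare, access cost and access time
beta = np.array([-0.004, -0.01, -0.03])
x_eoh = np.array([120.0, 5.0, 20.0])    # EOH: pricier fare, quick access
x_jmc = np.array([100.0, 12.0, 55.0])   # JMC: cheaper fare, long access
p = mnl_probabilities([beta @ x_eoh, beta @ x_jmc])
```

With these made-up numbers the shorter access time dominates, so the model assigns EOH the larger share; the calibrated coefficients determine the actual trade-off between fare and access.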

  9. Watershed Modeling with ArcSWAT and SUFI2 In Cisadane Catchment Area: Calibration and Validation of River Flow Prediction

    Directory of Open Access Journals (Sweden)

    Iwan Ridwansyah

    2014-04-01

    Full Text Available Increasing natural resource utilization as a result of population growth and economic development has caused severe damage to watersheds, and the impacts of natural disasters such as floods, landslides and droughts have become more frequent. The Cisadane Catchment Area is one of 108 priority watersheds in Indonesia. SWAT is currently applied worldwide and is considered a versatile model that can integrate multiple environmental processes, supporting more effective watershed management and better-informed policy decisions. The objective of this study is to examine the applicability of the SWAT model for modeling mountainous catchments, focusing on the Cisadane Catchment Area in West Java Province, Indonesia. The SWAT simulation covered the period 2005-2010 and used land-use information from 2009. The Sequential Uncertainty Fitting ver. 2 (SUFI2) method, combined with manual calibration, was used to calibrate the rainfall-runoff model. Calibration was performed on 2007 data and validation on 2009 data; the R2 and Nash-Sutcliffe efficiency (NSE) were 0.71 and 0.72 for calibration, and 0.708 and 0.70 for validation, respectively. The monthly average surface runoff and total water yield from the simulation were 27.7 mm and 2718.4 mm, respectively. This study showed that the SWAT model can be a useful monitoring tool for watersheds in the Cisadane Catchment Area and other tropical regions, and it can also serve other purposes, especially watershed management.

  10. Procedure for the Selection and Validation of a Calibration Model I-Description and Application.

    Science.gov (United States)

    Desharnais, Brigitte; Camirand-Lemyre, Félix; Mireault, Pascal; Skinner, Cameron D

    2017-05-01

    Calibration model selection is required for all quantitative methods in toxicology and, more broadly, in bioanalysis. This typically involves selecting the equation order (quadratic or linear) and the weighting factor that correctly model the data. Mis-selection of the calibration model will degrade quality control (QC) accuracy, with errors of up to 154%. Unfortunately, simple tools to perform this selection and tests to validate the resulting model are lacking. We present a stepwise, analyst-independent scheme for selection and validation of calibration models. The success rate of this scheme is on average 40% higher than that of a traditional "fit and check the QC accuracy" method of selecting the calibration model. Moreover, the process was completely automated through a script (available in Supplemental Data 3) running in RStudio (free, open-source software). The need for weighting was assessed through an F-test using the variances of the upper limit of quantification and lower limit of quantification replicate measurements. When weighting was required, the choice between 1/x and 1/x^2 was determined by calculating which option generated the smallest spread of weighted normalized variances. Finally, the model order was selected through a partial F-test. The chosen calibration model was validated through Cramér-von Mises or Kolmogorov-Smirnov normality testing of the standardized residuals. Performance of the different tests was assessed using 50 simulated data sets per possible calibration model (e.g., linear-no weight, quadratic-no weight, linear-1/x, etc.). This first of two papers describes the tests, procedures and outcomes of the developed procedure using real LC-MS-MS results for the quantification of cocaine and naltrexone.
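
Two of the decision steps described here (the F-test for whether weighting is needed, and the spread criterion for choosing between 1/x and 1/x^2) can be sketched as follows; compare the F ratio against the appropriate critical value for your replicate counts:

```python
import numpy as np

def variance_f_ratio(lloq_replicates, uloq_replicates):
    """F statistic comparing replicate variances at the lower and upper
    limits of quantification; a large ratio indicates heteroscedasticity,
    i.e. that a weighted fit is needed."""
    v_lo = np.var(lloq_replicates, ddof=1)
    v_hi = np.var(uloq_replicates, ddof=1)
    return max(v_lo, v_hi) / min(v_lo, v_hi)

def pick_weight(concentrations, variances):
    """Choose between 1/x and 1/x^2 as the option giving the smallest
    spread of weighted normalized variances."""
    c = np.asarray(concentrations, float)
    v = np.asarray(variances, float)
    spread = {}
    for label, w in (("1/x", 1.0 / c), ("1/x^2", 1.0 / c ** 2)):
        wv = w * v
        spread[label] = wv.max() / wv.min()
    return min(spread, key=spread.get)
```

For example, if replicate variance grows with the square of concentration, 1/x^2 flattens the weighted variances and is selected; if it grows linearly, 1/x wins.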

  11. Nested sampling algorithm for subsurface flow model selection, uncertainty quantification, and nonlinear calibration

    KAUST Repository

    Elsheikh, A. H.

    2013-12-01

    Calibration of subsurface flow models is an essential step for managing groundwater aquifers, designing contaminant remediation plans, and maximizing recovery from hydrocarbon reservoirs. We investigate an efficient sampling algorithm known as nested sampling (NS), which can simultaneously sample the posterior distribution for uncertainty quantification and estimate the Bayesian evidence for model selection. Model selection statistics, such as the Bayesian evidence, are needed to choose or assign weights to models of different levels of complexity. In this work, we report the first successful application of nested sampling to the calibration of several nonlinear subsurface flow problems. The Bayesian evidence estimated by the NS algorithm is used to weight different parameterizations of the subsurface flow models (prior model selection). The results of the numerical evaluation implicitly enforced Occam's razor, where simpler models with fewer parameters are favored over complex models. The proper level of model complexity was automatically determined based on the information content of the calibration data and the data mismatch of the calibrated model.
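
A minimal nested sampling sketch on a toy one-dimensional problem shows how the algorithm accumulates the evidence Z: the worst live point is repeatedly replaced by a prior draw with higher likelihood while the prior volume shrinks geometrically. Real implementations sample within the likelihood constraint rather than using the naive rejection shown here, and the toy likelihood and prior are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def nested_sampling_evidence(loglike, prior_draw, n_live=200, n_iter=1000):
    """Estimate the Bayesian evidence Z = integral of L over the prior."""
    live = np.array([prior_draw() for _ in range(n_live)])
    logL = np.array([loglike(p) for p in live])
    Z, X_prev = 0.0, 1.0
    for i in range(1, n_iter + 1):
        worst = int(np.argmin(logL))
        X = np.exp(-i / n_live)          # expected prior-volume shrinkage
        Z += np.exp(logL[worst]) * (X_prev - X)
        X_prev = X
        while True:                      # naive rejection from the prior
            p = prior_draw()
            if loglike(p) > logL[worst]:
                live[worst], logL[worst] = p, loglike(p)
                break
    return Z + np.exp(logL).mean() * X_prev  # remaining live-point mass

# toy problem: uniform prior on [-1, 1], narrow Gaussian likelihood;
# the true evidence is 0.5 (prior density 1/2 times a unit-mass Gaussian)
loglike = lambda x: -0.5 * (x / 0.1) ** 2 - np.log(0.1 * np.sqrt(2 * np.pi))
prior_draw = lambda: rng.uniform(-1.0, 1.0)
Z = nested_sampling_evidence(loglike, prior_draw)
```

The same run also yields posterior samples (the discarded points, weighted by their shell masses), which is why NS delivers calibration and model selection together.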

  12. Calibration and analysis of genome-based models for microbial ecology.

    Science.gov (United States)

    Louca, Stilianos; Doebeli, Michael

    2015-10-16

    Microbial ecosystem modeling is complicated by the large number of unknown parameters and the lack of appropriate calibration tools. Here we present a novel computational framework for modeling microbial ecosystems, which combines genome-based model construction with statistical analysis and calibration to experimental data. Using this framework, we examined the dynamics of a community of Escherichia coli strains that emerged in laboratory evolution experiments, during which an ancestral strain diversified into two coexisting ecotypes. We constructed a microbial community model comprising the ancestral and the evolved strains, which we calibrated using separate monoculture experiments. Simulations reproduced the successional dynamics in the evolution experiments, and pathway activation patterns observed in microarray transcript profiles. Our approach yielded detailed insights into the metabolic processes that drove bacterial diversification, involving acetate cross-feeding and competition for organic carbon and oxygen. Our framework provides a missing link towards a data-driven mechanistic microbial ecology.

  13. Multi-site calibration, validation, and sensitivity analysis of the MIKE SHE Model for a large watershed in northern China

    Science.gov (United States)

    S. Wang; Z. Zhang; G. Sun; P. Strauss; J. Guo; Y. Tang; A. Yao

    2012-01-01

    Model calibration is essential for hydrologic modeling of large watersheds in a heterogeneous mountain environment. Little guidance is available for model calibration protocols for distributed models that aim at capturing the spatial variability of hydrologic processes. This study used the physically-based distributed hydrologic model, MIKE SHE, to contrast a lumped...

  14. Understanding the DayCent model: Calibration, sensitivity, and identifiability through inverse modeling

    Science.gov (United States)

    Necpálová, Magdalena; Anex, Robert P.; Fienen, Michael N.; Del Grosso, Stephen J.; Castellano, Michael J.; Sawyer, John E.; Iqbal, Javed; Pantoja, Jose L.; Barker, Daniel W.

    2015-01-01

    The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling with the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3- compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3- and NH4+. Post-processing analyses provided insights into parameter-observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving our understanding of model function and of the underlying ecosystem biogeochemical processes that they represent.

  15. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification means quantifying and reducing uncertainties in parameters, models and measurements, and propagating those uncertainties through the model, so that one can make predictive estimates with quantified uncertainties. Two aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, meaning that they cannot be uniquely determined from the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of a nuclear reactor model. We employ this simple heat model to illustrate verification

  16. Calibrated Blade-Element/Momentum Theory Aerodynamic Model of the MARIN Stock Wind Turbine: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Goupee, A.; Kimball, R.; de Ridder, E. J.; Helder, J.; Robertson, A.; Jonkman, J.

    2015-04-02

    In this paper, a calibrated blade-element/momentum theory aerodynamic model of the MARIN stock wind turbine is developed and documented. The model is created using open-source software and calibrated to closely emulate experimental data obtained by the DeepCwind Consortium using a genetic algorithm optimization routine. The provided model will be useful for those interested in validating floating wind turbine numerical simulators that rely on experiments utilizing the MARIN stock wind turbine—for example, the International Energy Agency Wind Task 30’s Offshore Code Comparison Collaboration Continued, with Correlation project.

  17. Stochastic Modeling of Overtime Occupancy and Its Application in Building Energy Simulation and Calibration

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Kaiyu; Yan, Da; Hong, Tianzhen; Guo, Siyue

    2014-02-28

    Overtime work is a common phenomenon around the world. Overtime drives internal heat gains from occupants, lighting and plug-loads, as well as HVAC operation during overtime periods; the longer occupancy hours and extended operation of building services systems beyond normal working hours increase total building energy use. Current literature lacks methods to model overtime occupancy because overtime is stochastic in nature and varies by individual occupant and by time. To address this gap, this study develops a new stochastic model based on statistical analysis of measured overtime occupancy data from an office building. A binomial distribution is used to represent the total number of occupants working overtime, while an exponential distribution is used to represent the duration of overtime periods. The overtime model is used to generate overtime occupancy schedules as an input to the energy model of a second office building, and the measured and simulated cooling energy use during the overtime period are compared to validate the model. A hybrid approach to energy model calibration is proposed and tested, which combines ASHRAE Guideline 14 for calibration of the energy model during normal working hours with a proposed Kolmogorov-Smirnov (KS) test for calibration during overtime. The developed stochastic overtime model and the hybrid calibration approach can be used in building energy simulations to improve the accuracy of results and to better understand the characteristics of overtime in office buildings.
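The two-distribution scheme described in this abstract (binomial for how many occupants stay late, exponential for how long each stays) can be sketched as follows; the occupant count, overtime probability and mean duration are assumed values for illustration, not the study's fitted parameters:

```python
import random

def sample_overtime(n_occupants, p_overtime, mean_duration_h, rng):
    """Sample one evening of overtime occupancy.

    Binomial: each occupant independently works overtime with probability
    p_overtime. Exponential: duration (hours) of each staying occupant.
    """
    n_staying = sum(1 for _ in range(n_occupants) if rng.random() < p_overtime)
    durations = [rng.expovariate(1.0 / mean_duration_h) for _ in range(n_staying)]
    return n_staying, durations

rng = random.Random(42)
n, durs = sample_overtime(n_occupants=50, p_overtime=0.2, mean_duration_h=1.5, rng=rng)
```

Repeating this draw for each working day would yield a stochastic overtime occupancy schedule of the kind the study feeds into an energy model.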

  18. Visible spectroscopy calibration transfer model in determining pH of Sala mangoes

    International Nuclear Information System (INIS)

    Yahaya, O.K.M.; MatJafri, M.Z.; Aziz, A.A.; Omar, A.F.

    2015-01-01

    The purpose of this study is to compare the efficiency of calibration transfer procedures between three spectrometers: two Ocean Optics Inc. spectrometers, namely the QE65000 and the Jaz, and an ASD FieldSpec 3, in measuring the pH of Sala mango by visible reflectance spectroscopy. The study evaluates the ability of these spectrometers to measure the pH of Sala mango by applying similar calibration algorithms through direct calibration transfer, in which one spectrometer is defined as the master instrument and another as the slave. For Set 1, the multiple linear regression (MLR) calibration model generated with the QE65000 spectrometer is transferred to the Jaz spectrometer and vice versa; the same technique is applied for Set 2 between the QE65000 and the FieldSpec 3. For Set 1, the QE65000 spectrometer established a calibration model with higher accuracy than that of the Jaz spectrometer. In addition, the calibration model developed on the Jaz spectrometer successfully predicted the pH of Sala mango measured with the QE65000 spectrometer, with a root mean square error of prediction RMSEP = 0.092 pH and coefficient of determination R2 = 0.892. The best prediction result for Set 2 is obtained when the calibration model developed on the QE65000 spectrometer is transferred to the FieldSpec 3, with R2 = 0.839 and RMSEP = 0.16 pH.
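A minimal sketch of direct calibration transfer with an MLR model, using synthetic spectra: the model is fitted on a "master" instrument and applied unchanged to a "slave" instrument's readings of the same samples. The wavelengths, sample size and offset noise are assumptions for illustration, not the study's data or preprocessing:

```python
import numpy as np

def fit_mlr(X, y):
    """Ordinary least squares with intercept: y ≈ b0 + X @ b."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, X):
    return np.column_stack([np.ones(len(X)), X]) @ coef

def rmsep(y_true, y_pred):
    """Root mean square error of prediction."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Master instrument: synthetic reflectance at 3 wavelengths predicting pH.
rng = np.random.default_rng(0)
X_master = rng.normal(size=(30, 3))
y = 4.0 + X_master @ np.array([0.5, -0.3, 0.2])
coef = fit_mlr(X_master, y)

# "Slave" instrument: same samples with a small measurement offset;
# direct transfer = apply the master's model to the slave's spectra.
X_slave = X_master + rng.normal(scale=0.01, size=X_master.shape)
err = rmsep(y, predict(coef, X_slave))
```

A small `err` relative to the pH range indicates the master's model transfers to the slave without re-calibration, which is the comparison the abstract reports via RMSEP and R2.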

  19. Innovative Calibration Method for System Level Simulation Models of Internal Combustion Engines

    Directory of Open Access Journals (Sweden)

    Ivo Prah

    2016-09-01

    The paper outlines a procedure for the computer-controlled calibration of a combined zero-dimensional (0D) and one-dimensional (1D) thermodynamic simulation model of a turbocharged internal combustion engine (ICE). The main purpose of the calibration is to determine input parameters of the simulation model such that the difference between measurement results and numerical simulation results is minimized with minimum consumption of computing time. The innovative calibration methodology is based on a novel interaction between optimization methods and physically based methods for selected ICE sub-systems: physically based methods are used to steer the division of the integral ICE model into several sub-models and to determine parameters of selected components from their governing equations. This multistage interaction between optimization methods and physically based methods allows, unlike well-established approaches that rely on optimization techniques alone, successful calibration of a large number of input parameters with low time consumption. The proposed method is therefore suitable for efficient calibration of simulation models of advanced ICEs.

  20. PEST modules with regularization for the acceleration of the automatic calibration in hydrodynamic models

    Directory of Open Access Journals (Sweden)

    Polomčić Dušan M.

    2015-01-01

    The calibration of a hydrodynamic model is usually done manually by 'testing' different values of the hydrogeological parameters and the hydraulic characteristics of the boundary conditions. The PEST program introduced automatic model calibration, which has proved to significantly reduce the subjective influence of the modeller on the results. With the relatively new PEST approach of so-called 'pilot points', the concept of homogeneous zones of porous-media parameter values or zones with given boundary conditions has become outdated. A consequence of this kind of automatic calibration, however, is that a significant amount of time is required for the calculation: the duration of calibration is measured in hours, sometimes even days. PEST contains two modules for shortening that process: Parallel PEST and BeoPEST. The paper presents experiments and analyses of different cases of PEST module usage, showing how the time required to calibrate the model can be reduced.

  1. MODELING OF KINEMATICS OF A PLASTIC SHAPING AT CALIBRATION OF A THIN-WALLED PRECISION PIPE SINKING

    Directory of Open Access Journals (Sweden)

    E. D. Chertov

    2014-01-01

    Summary. A mathematical model is developed for the kinematics of plastic shaping during the sinking of a thin-walled precision pipe, as applied to calibrating the ends of unified aircraft pipeline elements made of titanium alloys and corrosion-resistant steel before they are assembled into the line by automatic argon-arc welding of ring joints. The model applies the power criterion of stability with kinematically admissible velocity fields to obtain an upper-bound estimate of the deformation effort. The developed model of plastic-flow kinematics yields the power parameters of the main (steady) state of the calibration-by-sinking process, and can be used to assess the stability of the deformation process by comparing the power parameters of the main (steady) and perturbed states. Modeling is carried out in a cylindrical coordinate system by comparing kinematically admissible velocity fields that satisfy the incompressibility condition and the kinematic boundary conditions. The selected result was a discontinuous velocity field in which the decrease of the outer radius R occurs only through an increase in the pipe wall thickness t; for this option the sinking pressure takes its smallest value, so the chosen velocity field is closest to the actual one. It is established that the required sinking pressure q decreases as the feed step l increases when calibrating with the multisector tool. At an identical feed step l, a pipe with a smaller relative wall thickness t/r requires a lower sinking pressure. The sinking pressure q increases with the shear yield limit of the pipe material.

  2. Calibration of a biome-biogeochemical cycles model for modeling the net primary production of teak forests through inverse modeling of remotely sensed data

    Science.gov (United States)

    Imvitthaya, Chomchid; Honda, Kiyoshi; Lertlum, Surat; Tangtham, Nipon

    2011-01-01

    In this paper, we present the results of net primary production (NPP) modeling of teak (Tectona grandis Lin F.), an important species in tropical deciduous forests. The biome-biogeochemical cycles (Biome-BGC) model was calibrated to estimate NPP through an inverse modeling approach. A genetic algorithm (GA) was linked with Biome-BGC to determine the optimal ecophysiological model parameters. Biome-BGC was calibrated by adjusting the ecophysiological parameters to fit the simulated LAI to satellite LAI (SPOT-Vegetation), and the best fitness confirmed the high accuracy of the ecophysiological parameters generated by the GA. The modeled NPP, using the GA-optimized parameters as input data, was evaluated against daily NPP derived from the MODIS satellite and against annual field data in northern Thailand. The results showed that NPP obtained using the optimized ecophysiological parameters was more accurate than that obtained using default literature parameterization, mainly because the optimized parameters reduced systematic underestimation in the model. These Biome-BGC results can be effectively applied to teak forests in tropical areas. The study proposes a more effective method of using a GA to determine ecophysiological parameters at the site level and represents a first step toward analysis of the carbon budget of teak plantations at the regional scale.
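The GA-based inverse calibration described here can be sketched with a toy model standing in for Biome-BGC: a logistic seasonal LAI curve whose two parameters are recovered by fitting simulated LAI to "satellite" LAI. The curve, parameter bounds and GA settings are illustrative assumptions, not the study's configuration:

```python
import math
import random

def toy_lai_model(params, t):
    """Hypothetical stand-in for Biome-BGC: a logistic seasonal LAI curve."""
    k, lai_max = params
    return lai_max / (1.0 + math.exp(-k * (t - 50.0)))

def misfit(params, obs):
    """Sum of squared differences between simulated and 'satellite' LAI."""
    return sum((toy_lai_model(params, t) - lai) ** 2 for t, lai in obs)

def genetic_search(obs, pop_size=30, generations=100, rng=None):
    rng = rng or random.Random(1)
    # Initial population: random (k, LAImax) pairs within assumed bounds.
    pop = [(rng.uniform(0.01, 0.5), rng.uniform(1.0, 8.0)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: misfit(p, obs))   # rank by fit to satellite LAI
        parents = pop[: pop_size // 2]           # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)        # mean crossover + mutation
            k = (a[0] + b[0]) / 2 + rng.gauss(0.0, 0.02)
            m = (a[1] + b[1]) / 2 + rng.gauss(0.0, 0.2)
            children.append((k, m))
        pop = parents + children
    return min(pop, key=lambda p: misfit(p, obs))

# Synthetic "satellite" LAI generated from known parameters.
true_params = (0.1, 5.0)
obs = [(t, toy_lai_model(true_params, t)) for t in range(0, 101, 10)]
best = genetic_search(obs)
```

In the study the role of `toy_lai_model` is played by a full Biome-BGC run, so each fitness evaluation is far more expensive; the GA structure is otherwise the same.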

  3. Comparison of Omega-2 and Omega-3 calibration explosions based on regional seismic data

    International Nuclear Information System (INIS)

    Mikhajlova, N.N.; Sokolova, I.N.

    2001-01-01

    Different parameters of the seismic records of the Omega-2 and Omega-3 calibration explosions were compared. It was shown that, despite equal charges, the level of seismic oscillations was lower for the Omega-3 explosion than for Omega-2. The spectral composition, polarization of oscillations and wave picture are identical at a given station for both explosions. Assumptions are made about the reason for this difference in seismic effect. (author)

  4. A multi-objective approach to improve SWAT model calibration in alpine catchments

    Science.gov (United States)

    Tuo, Ye; Marcolini, Giorgia; Disse, Markus; Chiogna, Gabriele

    2018-04-01

    Multi-objective hydrological model calibration can represent a valuable solution to reduce model equifinality and parameter uncertainty. The Soil and Water Assessment Tool (SWAT) model is widely applied to investigate water quality and water management issues in alpine catchments. However, the model calibration is generally based on discharge records only, and most of the previous studies have defined a unique set of snow parameters for an entire basin. Only a few studies have considered snow observations to validate model results or have taken into account the possible variability of snow parameters for different subbasins. This work presents and compares three possible calibration approaches. The first two procedures are single-objective calibration procedures, for which all parameters of the SWAT model were calibrated according to river discharge alone. Procedures I and II differ from each other by the assumption used to define snow parameters: The first approach assigned a unique set of snow parameters to the entire basin, whereas the second approach assigned different subbasin-specific sets of snow parameters to each subbasin. The third procedure is a multi-objective calibration, in which we considered snow water equivalent (SWE) information at two different spatial scales (i.e. subbasin and elevation band), in addition to discharge measurements. We tested these approaches in the Upper Adige river basin where a dense network of snow depth measurement stations is available. Only the set of parameters obtained with this multi-objective procedure provided an acceptable prediction of both river discharge and SWE. These findings offer the large community of SWAT users a strategy to improve SWAT modeling in alpine catchments.

  5. How does observation uncertainty influence which stream water samples are most informative for model calibration?

    Science.gov (United States)

    Wang, Ling; van Meerveld, Ilja; Seibert, Jan

    2016-04-01

    Streamflow isotope samples taken during rainfall-runoff events are very useful for multi-criteria model calibration because they can help decrease parameter uncertainty and improve internal model consistency. However, the number of samples that can be collected and analysed is often restricted by practical and financial constraints. It is, therefore, important to choose an appropriate sampling strategy and to obtain samples that have the highest information content for model calibration. We used the Birkenes hydrochemical model and synthetic rainfall, streamflow and isotope data to explore which samples are most informative for model calibration. Starting with error-free observations, we investigated how many samples are needed to obtain a certain model fit. Based on different parameter sets, representing different catchments, and different rainfall events, we also determined which sampling times provide the most informative data for model calibration. Our results show that simulation performance for models calibrated with the isotopic data from two intelligently selected samples was comparable to simulations based on isotopic data for all 100 time steps. The models calibrated with the intelligently selected samples also performed better than the model calibrations with two benchmark sampling strategies (random selection and selection based on hydrologic information). Surprisingly, samples on the rising limb and at the peak were less informative than expected and, generally, samples taken at the end of the event were most informative. The timing of the most informative samples depends on the proportion of different flow components (baseflow, slow response flow, fast response flow and overflow). For events dominated by baseflow and slow response flow, samples taken at the end of the event after the fast response flow has ended were most informative; when the fast response flow was dominant, samples taken near the peak were most informative. However when overflow

  6. Diagnosing the impact of alternative calibration strategies on coupled hydrologic models

    Science.gov (United States)

    Smith, T. J.; Perera, C.; Corrigan, C.

    2017-12-01

    Hydrologic models represent a significant tool for understanding, predicting, and responding to the impacts of water on society and society on water resources and, as such, are used extensively in water resources planning and management. Given this important role, the validity and fidelity of hydrologic models is imperative. While extensive focus has been paid to improving hydrologic models through better process representation, better parameter estimation, and better uncertainty quantification, significant challenges remain. In this study, we explore a number of competing model calibration scenarios for simple, coupled snowmelt-runoff models to better understand the sensitivity / variability of parameterizations and its impact on model performance, robustness, fidelity, and transferability. Our analysis highlights the sensitivity of coupled snowmelt-runoff model parameterizations to alterations in calibration approach, underscores the concept of information content in hydrologic modeling, and provides insight into potential strategies for improving model robustness / fidelity.

  7. Bayesian calibration of terrestrial ecosystem models: a study of advanced Markov chain Monte Carlo methods

    Science.gov (United States)

    Lu, Dan; Ricciuto, Daniel; Walker, Anthony; Safta, Cosmin; Munger, William

    2017-09-01

    Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC) sampling provides a comprehensive framework to estimate model parameters and associated uncertainties using their posterior distributions. The effectiveness and efficiency of the method strongly depend on the MCMC algorithm used. In this work, a differential evolution adaptive Metropolis (DREAM) algorithm is used to estimate posterior distributions of 21 parameters for the data assimilation linked ecosystem carbon (DALEC) model using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. The calibration of DREAM results in a better model fit and predictive performance compared to the popular adaptive Metropolis (AM) scheme. Moreover, DREAM indicates that two parameters controlling autumn phenology have multiple modes in their posterior distributions while AM only identifies one mode. The application suggests that DREAM is very suitable to calibrate complex terrestrial ecosystem models, where the uncertain parameter size is usually large and existence of local optima is always a concern. In addition, this effort justifies the assumptions of the error model used in Bayesian calibration according to the residual analysis. The result indicates that a heteroscedastic, correlated, Gaussian error model is appropriate for the problem, and the consequent constructed likelihood function can alleviate the underestimation of parameter uncertainty that is usually caused by using uncorrelated error models.
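A minimal sketch of the Bayesian calibration idea, using a plain random-walk Metropolis sampler rather than DREAM, with a one-parameter toy model, a Gaussian likelihood and a flat prior (all values are illustrative assumptions):

```python
import math
import random

def log_posterior(theta, data):
    """Toy model: prediction = theta; Gaussian likelihood, flat prior on (-10, 10)."""
    if not -10.0 < theta < 10.0:
        return -math.inf
    return -0.5 * sum((y - theta) ** 2 for y in data)

def metropolis(data, n_steps=5000, step=0.5, rng=None):
    """Plain random-walk Metropolis (a simplified stand-in for DREAM)."""
    rng = rng or random.Random(0)
    theta, chain = 0.0, []
    lp = log_posterior(theta, data)
    for _ in range(n_steps):
        proposal = theta + rng.gauss(0.0, step)
        lp_prop = log_posterior(proposal, data)
        # Accept with probability min(1, posterior ratio).
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            theta, lp = proposal, lp_prop
        chain.append(theta)
    return chain

data = [2.1, 1.9, 2.0, 2.2, 1.8]   # synthetic observations
chain = metropolis(data)
posterior_mean = sum(chain[1000:]) / len(chain[1000:])
```

DREAM differs by running multiple chains and generating proposals from differences between chain states, which adapts the step direction and scale automatically; the accept/reject logic above is the common core.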

  8. Calibration and validation of the SWAT model for a forested watershed in coastal South Carolina

    Science.gov (United States)

    Devendra M. Amatya; Elizabeth B. Haley; Norman S. Levine; Timothy J. Callahan; Artur Radecki-Pawlik; Manoj K. Jha

    2008-01-01

    Modeling the hydrology of low-gradient coastal watersheds on shallow, poorly drained soils is a challenging task due to the complexities in watershed delineation, runoff generation processes and pathways, flooding, and submergence caused by tropical storms. The objective of the study is to calibrate and validate a GIS-based spatially-distributed hydrologic model, SWAT...

  9. Calibration of a user-defined mine blast model in LS-DYNA and comparison with ALE simulations

    NARCIS (Netherlands)

    Verreault, J.; Leerdam, P.J.C.; Weerheijm, J.

    2016-01-01

    The calibration of a user-defined blast model implemented in LS-DYNA is presented using full-scale test rig experiments, partly according to the NATO STANAG 4569 AEP-55 Volume 2 specifications where the charge weight varies between 6 kg and 10 kg and the burial depth is 100 mm and deeper. The model

  10. AUTOMATIC CALIBRATION OF A STOCHASTIC-LAGRANGIAN TRANSPORT MODEL (SLAM)

    Science.gov (United States)

    Numerical models are a useful tool in evaluating and designing NAPL remediation systems. Traditional constitutive finite difference and finite element models are complex and expensive to apply. For this reason, this paper presents the application of a simplified stochastic-Lagran...

  11. Modelling and calibration with mechatronic blockset for Simulink

    DEFF Research Database (Denmark)

    Ravn, Ole; Szymkat, Maciej

    1997-01-01

    The paper describes the design considerations for a software tool for modelling and simulation of mechatronic systems. The tool is based on a concept enabling the designer to pick component models that match the physical components of the system to be modelled from a block library. Another...... on the component level and for the whole model. The library that can be extended by the user contains all the standard components, DC-motors, potentiometers, encoders etc. The library is presently being tested in different projects and the response of these users is being incorporated in the code. The Mechatronic...... Simulink Library blockset is implemented basing on MATLAB and Simulink and has been used to model several mechatronic systems....

  12. A simple topography-driven, calibration-free runoff generation model

    Science.gov (United States)

    Gao, H.; Birkel, C.; Hrachowitz, M.; Tetzlaff, D.; Soulsby, C.; Savenije, H. H. G.

    2017-12-01

    Determining the amount of runoff generated from rainfall occupies a central place in rainfall-runoff modelling. Moreover, reading landscapes and developing calibration-free runoff generation models that adequately reflect land surface heterogeneities remain the focus of much hydrological research. In this study, we created a new method to estimate runoff generation, the HAND-based Storage Capacity curve (HSC), which uses a topographic index (HAND, Height Above the Nearest Drainage) to identify hydrological similarity and, in part, the saturated areas of catchments. We then coupled the HSC model with the Mass Curve Technique (MCT) method to estimate root zone storage capacity (SuMax), obtaining the calibration-free runoff generation model HSC-MCT. Both models (HSC and HSC-MCT) allow us to estimate runoff generation and simultaneously visualize the spatial dynamics of the saturated area. We tested the two models in the data-rich Bruntland Burn (BB) experimental catchment in Scotland, which has an unusual time series of field-mapped saturation area extent. The models were subsequently tested in 323 MOPEX (Model Parameter Estimation Experiment) catchments in the United States, with HBV and TOPMODEL used as benchmarks. We found that the HSC performed better than TOPMODEL, which is based on the topographic wetness index (TWI), in reproducing the spatio-temporal pattern of the observed saturated areas in the BB catchment. The HSC also outperformed HBV and TOPMODEL in the MOPEX catchments for both calibration and validation. Despite having no calibrated parameters, the HSC-MCT model performed comparably well to the calibrated HBV and TOPMODEL, highlighting both the robustness of the HSC model in describing the spatial distribution of the root zone storage capacity and the efficiency of the MCT method in estimating SuMax. Moreover, the HSC-MCT model facilitated effective visualization of the saturated area, which has the potential to be used for broader

  13. Sensitivity analysis and calibration of a dynamic physically based slope stability model

    Science.gov (United States)

    Zieher, Thomas; Rutzinger, Martin; Schneider-Muntau, Barbara; Perzl, Frank; Leidinger, David; Formayer, Herbert; Geitner, Clemens

    2017-06-01

    Physically based modelling of slope stability on a catchment scale is still a challenging task. When applying a physically based model on such a scale (1 : 10 000 to 1 : 50 000), parameters with a high impact on the model result should be calibrated to account for (i) the spatial variability of parameter values, (ii) shortcomings of the selected model, (iii) uncertainties of laboratory tests and field measurements or (iv) parameters that cannot be derived experimentally or measured in the field (e.g. calibration constants). While systematic parameter calibration is a common task in hydrological modelling, this is rarely done using physically based slope stability models. In the present study a dynamic, physically based, coupled hydrological-geomechanical slope stability model is calibrated based on a limited number of laboratory tests and a detailed multitemporal shallow landslide inventory covering two landslide-triggering rainfall events in the Laternser valley, Vorarlberg (Austria). Sensitive parameters are identified based on a local one-at-a-time sensitivity analysis. These parameters (hydraulic conductivity, specific storage, angle of internal friction for effective stress, cohesion for effective stress) are systematically sampled and calibrated for a landslide-triggering rainfall event in August 2005. The identified model ensemble, including 25 behavioural model runs with the highest portion of correctly predicted landslides and non-landslides, is then validated with another landslide-triggering rainfall event in May 1999. The identified model ensemble correctly predicts the location and the supposed triggering timing of 73.0 % of the observed landslides triggered in August 2005 and 91.5 % of the observed landslides triggered in May 1999. Results of the model ensemble driven with raised precipitation input reveal a slight increase in areas potentially affected by slope failure. At the same time, the peak run-off increases more markedly, suggesting that
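The local one-at-a-time sensitivity analysis used to pick calibration parameters can be sketched as follows; the stand-in factor-of-safety function and its base parameter values are hypothetical, not the coupled hydrological-geomechanical model of the study:

```python
import math

def factor_of_safety(cohesion, friction_deg, conductivity):
    """Hypothetical scalar stand-in for the slope stability model's output."""
    return (cohesion / 20.0 + math.tan(math.radians(friction_deg))) / (1.0 + 5.0 * conductivity)

def oat_sensitivity(model, base, perturb=0.10):
    """Local one-at-a-time sensitivity: relative output change per
    relative change of each parameter, the others held at base values."""
    y0 = model(**base)
    sens = {}
    for name, value in base.items():
        bumped = dict(base, **{name: value * (1.0 + perturb)})
        sens[name] = (model(**bumped) - y0) / (y0 * perturb)
    return sens

base = {"cohesion": 10.0, "friction_deg": 30.0, "conductivity": 0.05}
s = oat_sensitivity(factor_of_safety, base)
```

Parameters with large `|s[name]|` are the ones worth systematic sampling and calibration, which is how the study narrows its parameter set to conductivity, specific storage, friction angle and cohesion.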

  14. An alternative method for calibration of narrow band radiometer using a radiative transfer model

    Energy Technology Data Exchange (ETDEWEB)

    Salvador, J; Wolfram, E; D' Elia, R [Centro de Investigaciones en Laseres y Aplicaciones, CEILAP (CITEFA-CONICET), Juan B. de La Salle 4397 (B1603ALO), Villa Martelli, Buenos Aires (Argentina); Zamorano, F; Casiccia, C [Laboratorio de Ozono y Radiacion UV, Universidad de Magallanes, Punta Arenas (Chile) (Chile); Rosales, A [Universidad Nacional de la Patagonia San Juan Bosco, UNPSJB, Facultad de Ingenieria, Trelew (Argentina) (Argentina); Quel, E, E-mail: jsalvador@citefa.gov.ar [Universidad Nacional de la Patagonia Austral, Unidad Academica Rio Gallegos Avda. Lisandro de la Torre 1070 ciudad de Rio Gallegos-Sta Cruz (Argentina) (Argentina)

    2011-01-01

    The continual monitoring of solar UV radiation is one of the major objectives of many atmospheric research groups; its purpose is to determine the status, and the degree of progress over time, of the anthropogenic perturbation of the composition of the atmosphere. Such changes affect the intensity of the UV solar radiation transmitted through the atmosphere, which then interacts with living organisms and all materials, with serious consequences for human health and for the durability of materials exposed to this radiation. One of the many challenges in performing these measurements correctly is maintaining periodic calibration of the instruments; otherwise, damage caused by the UV radiation received will render any one calibration useless after some time. This requirement makes the use of these instruments unattractive, and the lack of frequent calibration may lead to the loss of large amounts of acquired data. Motivated by this need to maintain calibration, or at least to know the degree of stability of instrument behavior, we have developed a calibration methodology that uses the potential of radiative transfer models to model solar radiation with 5% accuracy or better relative to actual conditions. Voltage values in each radiometer channel involved in the calibration process are carefully selected from clear-sky data, and tables are constructed with voltage values corresponding to various atmospheric conditions for a given solar zenith angle. We then run a radiative transfer model under the same conditions as the measurements to assemble sets of values for each zenith angle. The ratio of each pair (measured and modeled) allows us to calculate the calibration coefficient as a function of zenith angle, as well as the cosine response of the radiometer. The calibration results obtained by this method were compared with those obtained with a Brewer MKIII SN 80 located in the

  15. Bayesian Calibration, Validation and Uncertainty Quantification for Predictive Modelling of Tumour Growth: A Tutorial.

    Science.gov (United States)

    Collis, Joe; Connor, Anthony J; Paczkowski, Marcin; Kannan, Pavitra; Pitt-Francis, Joe; Byrne, Helen M; Hubbard, Matthew E

    2017-04-01

    In this work, we present a pedagogical tumour growth example, in which we apply calibration and validation techniques to an uncertain, Gompertzian model of tumour spheroid growth. The key contribution of this article is the discussion and application of these methods (that are not commonly employed in the field of cancer modelling) in the context of a simple model, whose deterministic analogue is widely known within the community. In the course of the example, we calibrate the model against experimental data that are subject to measurement errors, and then validate the resulting uncertain model predictions. We then analyse the sensitivity of the model predictions to the underlying measurement model. Finally, we propose an elementary learning approach for tuning a threshold parameter in the validation procedure in order to maximize predictive accuracy of our validated model.
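A minimal sketch of the calibration step for a Gompertz growth model: synthetic noisy measurements are generated, and the growth rate and carrying capacity are recovered by least squares over a coarse grid. The parameter values and noise level are assumptions, and this simple point-estimate fit stands in for the article's full Bayesian treatment:

```python
import math
import random

def gompertz(t, v0, a, k):
    """Gompertz tumour volume: V(t) = K * exp(ln(V0/K) * exp(-a*t))."""
    return k * math.exp(math.log(v0 / k) * math.exp(-a * t))

def sse(params, data, v0):
    """Sum of squared errors between model and measurements."""
    a, k = params
    return sum((gompertz(t, v0, a, k) - v) ** 2 for t, v in data)

# Synthetic noisy "measurements" from assumed true parameters.
rng = random.Random(3)
v0, true_a, true_k = 1.0, 0.3, 50.0
data = [(t, gompertz(t, v0, true_a, true_k) * (1.0 + rng.gauss(0.0, 0.02)))
        for t in range(0, 21, 2)]

# Coarse grid-search calibration of growth rate a and carrying capacity K.
best = min(((a / 100.0, float(k)) for a in range(10, 60) for k in range(30, 80)),
           key=lambda p: sse(p, data, v0))
```

A Bayesian calibration replaces the single `best` point with a posterior distribution over (a, K), which is what enables the validation and uncertainty analysis the tutorial walks through.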

  16. Multi-objective calibration of a reservoir model: aggregation and non-dominated sorting approaches

    Science.gov (United States)

    Huang, Y.

    2012-12-01

    Numerical reservoir models can be helpful tools for water resource management. These models are generally calibrated against historical measurement data made in reservoirs. In this study, two methods are proposed for the multi-objective calibration of such models: an aggregation method and a non-dominated sorting method. Both methods use a hybrid genetic algorithm as the optimization engine and differ in fitness assignment. In the aggregation method, a weighted sum of scaled simulation errors serves as an overall objective function measuring the fitness of solutions (i.e. parameter values). The contribution of this study to the aggregation method is the correlation analysis and its implications for the choice of weight factors. In the non-dominated sorting method, a novel scheme based on non-dominated sorting and the method of minimal distance is used to calculate the dummy fitness of solutions. The proposed methods are illustrated using a water quality model set up to simulate the water quality of Pepacton Reservoir, which is located north of New York City and is used for the city's water supply. The study also compares the aggregation and non-dominated sorting methods. The purpose of this comparison is not to weigh the pros and cons of the two methods but to determine whether the parameter values, objective function values (simulation errors) and simulated results obtained differ significantly from each other. The final results (objective function values) from the two methods are a good compromise among all objective functions, and none of them is the worst for any objective function. The calibrated model provides overall good performance, and the simulated results with the calibrated parameter values match the observed data better than those with uncalibrated parameters, which supports and justifies the use of multi-objective calibration. The results achieved in this study can be very useful for the calibration of water
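    The two fitness-assignment schemes reduce to a couple of small building blocks. The scales and weights below are placeholders, not values from the study:

```python
def aggregate_objective(errors, scales, weights):
    """Aggregation scheme: weighted sum of scaled simulation errors."""
    return sum(w * e / s for e, s, w in zip(errors, scales, weights))

def dominates(f1, f2):
    """Non-dominated sorting building block: True if objective vector f1
    Pareto-dominates f2 (at least as good in every objective, strictly
    better in at least one; minimization assumed)."""
    return all(a <= b for a, b in zip(f1, f2)) and any(a < b for a, b in zip(f1, f2))
```

    In the aggregation scheme a single scalar drives selection, so the weights decide the trade-off up front (hence the study's correlation analysis); in the non-dominated scheme, `dominates` partitions the population into Pareto fronts first, and a distance measure breaks ties within a front.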

  17. Presentation, calibration and validation of the low-order, DCESS Earth System Model

    DEFF Research Database (Denmark)

    Shaffer, G.; Olsen, S. Malskaer; Pedersen, Jens Olaf Pepke

    2008-01-01

    A new, low-order Earth system model is described, calibrated and tested against Earth system data. The model features modules for the atmosphere, ocean, ocean sediment, land biosphere and lithosphere and has been designed to simulate global change on time scales of years to millions of years...... remineralization. The lithosphere module considers outgassing, weathering of carbonate and silicate rocks and weathering of rocks containing old organic carbon and phosphorus. Weathering rates are related to mean atmospheric temperatures. A pre-industrial, steady state calibration to Earth system data is carried...

  18. Using genetic algorithms for calibrating simplified models of nuclear reactor dynamics

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Zio, Enrico; Canetta, Raffaele

    2004-01-01

    In this paper the use of genetic algorithms for the estimation of the effective parameters of a model of nuclear reactor dynamics is investigated. The calibration of the effective parameters is achieved by best fitting the model responses of the quantities of interest (e.g., reactor power, average fuel and coolant temperatures) to the actual evolution profiles, here simulated by the Quandry based reactor kinetics (Quark) code available from the Nuclear Energy Agency. Alternative schemes of single- and multi-objective optimization are investigated. The efficiency of convergence of the algorithm with respect to the different effective parameters to be calibrated is studied with reference to the physical relationships involved
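    The calibration loop itself is generic: propose effective parameters, simulate, and score the fit against the reference profiles. The sketch below replaces the reactor model and the Quark reference profiles with a toy first-order response, and uses a bare elitist evolutionary loop rather than a full genetic algorithm with crossover:

```python
import numpy as np

def fitness(params, t, reference):
    """Negative SSE between a toy first-order response and the reference profile."""
    tau, gain = params
    response = gain * (1.0 - np.exp(-t / tau))
    return -float(np.sum((response - reference) ** 2))

def evolve(t, reference, pop_size=40, gens=80, seed=1):
    rng = np.random.default_rng(seed)
    pop = rng.uniform([0.1, 0.1], [5.0, 5.0], size=(pop_size, 2))
    for _ in range(gens):
        scores = np.array([fitness(p, t, reference) for p in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # elitist selection
        children = np.clip(parents + rng.normal(0.0, 0.1, parents.shape),
                           0.05, 5.0)                             # Gaussian mutation
        pop = np.vstack([parents, children])
    scores = np.array([fitness(p, t, reference) for p in pop])
    return pop[np.argmax(scores)]

t = np.linspace(0.0, 10.0, 50)
reference = 2.0 * (1.0 - np.exp(-t / 1.5))  # synthetic "actual" evolution profile
tau_hat, gain_hat = evolve(t, reference)
```

    The paper's single- versus multi-objective distinction enters in `fitness`: a single response (e.g. reactor power) gives one SSE, whereas several responses (power, fuel and coolant temperatures) must either be aggregated or handled with Pareto-based selection.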

  19. Using genetic algorithms for calibrating simplified models of nuclear reactor dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Marseguerra, Marzio E-mail: marzio.marseguerra@polimi.it; Zio, Enrico E-mail: enrico.zio@polimi.it; Canetta, Raffaele

    2004-07-01

    In this paper the use of genetic algorithms for the estimation of the effective parameters of a model of nuclear reactor dynamics is investigated. The calibration of the effective parameters is achieved by best fitting the model responses of the quantities of interest (e.g., reactor power, average fuel and coolant temperatures) to the actual evolution profiles, here simulated by the Quandry based reactor kinetics (Quark) code available from the Nuclear Energy Agency. Alternative schemes of single- and multi-objective optimization are investigated. The efficiency of convergence of the algorithm with respect to the different effective parameters to be calibrated is studied with reference to the physical relationships involved.

  20. Calibration plots for risk prediction models in the presence of competing risks

    DEFF Research Database (Denmark)

    Gerds, Thomas A; Andersen, Per K; Kattan, Michael W

    2014-01-01

    A predicted risk of 17% can be called reliable if it can be expected that the event will occur to about 17 of 100 patients who all received a predicted risk of 17%. Statistical models can predict the absolute risk of an event such as cardiovascular death in the presence of competing risks...... prediction model is well calibrated. The first is lack of independent validation data, the second is right censoring, and the third is that when the risk scale is continuous, the estimation problem is as difficult as density estimation. To deal with these problems, we propose to estimate calibration curves...
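    Absent censoring and competing risks, the basic device is a grouped comparison of predicted risk against observed event frequency; the paper's contribution is making this work when those complications are present. A naive sketch on a synthetic, fully observed cohort:

```python
import numpy as np

def calibration_curve(pred_risks, outcomes, bins=10):
    """Group predicted risks into bins; return (mean predicted risk,
    observed event frequency) for each non-empty bin."""
    pred = np.asarray(pred_risks, float)
    obs = np.asarray(outcomes, float)
    edges = np.linspace(0.0, 1.0, bins + 1)
    curve = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (pred >= lo) & (pred < hi)
        if mask.any():
            curve.append((pred[mask].mean(), obs[mask].mean()))
    return curve

# 100 patients all given a predicted risk of 15%, of whom 15 had the event:
curve = calibration_curve([0.15] * 100, [1] * 15 + [0] * 85)
```

    The single point (0.15, 0.15) lies on the diagonal, i.e. the prediction is reliable in exactly the sense of the record's opening sentence. With right censoring or competing risks, the naive frequency `obs.mean()` must be replaced by an estimator such as Kaplan-Meier or a cumulative incidence function, which is the situation the paper addresses.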

  1. Use of regional climate models data for groundwater recharge modelling in Baltic artesian basin

    Science.gov (United States)

    Timuhins, A.; Klints, I.; Sennikovs, J.; Virbulis, J.

    2012-04-01

    The Baltic artesian basin (BAB) covers about 480,000 square kilometres, including the territories of Latvia, Lithuania and Estonia, parts of Poland, Russia and Belarus, and the Baltic Sea. A closed hydrogeological mathematical model of the BAB was developed at the University of Latvia, and reference calculations were made in steady-state mode. A no-flow boundary condition is applied on the bottom and side boundaries of the BAB. Hydraulic head is fixed on the seabed, on the largest lakes, and along the main river lines. The main water supply wells are also represented in the model as pointwise water extractions. Precipitation is the main source of groundwater recharge in the BAB region, and an infiltration parameterization accounts for this water source in the BAB model. During the early stage of calibration of the BAB hydrogeological model, an automatic calibration of the hydraulic conductivities of the permeable layers and a single infiltration rate was attempted. In the course of this calibration it was noted that the differences between calculated and observed hydraulic heads (used for calculating the calibration penalty function) could be reduced by introducing a spatially distributed infiltration model. The aim of the present study is to improve the infiltration model while preserving a short computation time (several minutes) for the piezometric head. A direct route to improving the infiltration estimate would be to use advanced hydrological models; however, an accurate hydrological model requires substantial computational power, must couple meteorological and hydrological parameters, and requires additional calibration. Instead, regional climate model results (KNMI-RACMO2, 25 km resolution) from the ENSEMBLES project are used to calculate the spatially distributed infiltration field, which is constructed as a weighted difference of the 30-year averaged precipitation and evaporation fields. The weight value is calibrated, and a considerable decrease in the value of the penalty function of the groundwater
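    The weighted-difference recharge and its one-parameter calibration can be sketched as follows. The two-cell fields and the linear head response are invented stand-ins for the BAB model:

```python
import numpy as np

def infiltration_field(precip, evap, w):
    """Recharge as a weighted difference of long-term mean precipitation
    and evaporation fields."""
    return w * (np.asarray(precip, float) - np.asarray(evap, float))

def calibrate_weight(precip, evap, observed_heads, head_model, candidate_weights):
    """Pick the weight minimizing the sum-of-squares penalty on hydraulic heads."""
    def penalty(w):
        simulated = head_model(infiltration_field(precip, evap, w))
        return float(np.sum((simulated - observed_heads) ** 2))
    return min(candidate_weights, key=penalty)

# Toy example: a linear head response, and synthetic "observed" heads at w = 0.3
precip = np.array([700.0, 650.0])   # hypothetical 30-year mean fields, mm/yr
evap = np.array([450.0, 500.0])
head_model = lambda infiltration: 2.0 * infiltration
observed = head_model(infiltration_field(precip, evap, 0.3))
w_best = calibrate_weight(precip, evap, observed, head_model,
                          [0.1, 0.2, 0.3, 0.4, 0.5])
```

    Because only the single weight is tuned, the groundwater model need only be re-run once per candidate weight, which is what keeps the piezometric-head computation within the several-minute budget mentioned in the record.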

  2. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    Science.gov (United States)

    Xi, Maolong; Lu, Dan; Gui, Dongwei; Qi, Zhiming; Zhang, Guannan

    2017-01-01

    Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and for informing sound agricultural management decisions. However, calibration of agricultural-hydrological system models is challenging because of model complexity, the existence of strong parameter correlation, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate system of the actual RZWQM2, and then calibrated the surrogate model using a global optimization algorithm, Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial that is fast to evaluate, it can be evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrated seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method can achieve a smaller objective function value and better calibration performance using fewer expensive RZWQM2 executions, which greatly improves computational efficiency.
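    The surrogate idea condenses to three steps: sample the expensive model at a few nodes, fit a cheap approximation, and search the approximation intensively. The sketch below substitutes an ordinary interpolating polynomial for the sparse-grid interpolant and a dense grid search for QPSO; `expensive_model` is a one-parameter stand-in, not RZWQM2:

```python
import numpy as np

def expensive_model(x):
    # Stand-in for a costly simulator run (the real case would execute RZWQM2)
    return (x - 0.3) ** 2 + 0.05 * np.sin(8.0 * x)

# 1. A small number of expensive runs at interpolation nodes
nodes = np.linspace(0.0, 1.0, 9)
values = expensive_model(nodes)

# 2. A cheap polynomial surrogate interpolating those runs
surrogate = np.poly1d(np.polyfit(nodes, values, deg=8))

# 3. Global search using many cheap surrogate evaluations
candidates = np.linspace(0.0, 1.0, 100001)
x_best = candidates[np.argmin(surrogate(candidates))]
```

    Only step 1 pays the simulator's cost; steps 2 and 3 are essentially free, which is why the surrogate route tolerates global optimizers that need many function evaluations.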

  3. Calibration of controlling input models for pavement management system.

    Science.gov (United States)

    2013-07-01

    The Oklahoma Department of Transportation (ODOT) is currently using the Deighton Total Infrastructure Management System (dTIMS) software for pavement management. This system is based on several input models which are computational backbones to dev...

  4. Value of using remotely sensed evapotranspiration for SWAT model calibration

    Science.gov (United States)

    Hydrologic models are useful management tools for assessing water resources solutions and estimating the potential impact of climate variation scenarios. A comprehensive understanding of the water budget components and especially the evapotranspiration (ET) is critical and often overlooked for adeq...

  5. Calibration of Chaboche Model with a Memory Surface

    Directory of Open Access Journals (Sweden)

    Radim HALAMA

    2013-06-01

    This paper points out that a sufficient description of the stress-strain behaviour by the Chaboche nonlinear kinematic hardening model is possible only for materials with Masing behaviour, regardless of the number of backstress parts. Subsequently, two concepts of the most widely used memory surfaces are presented: the Jiang-Sehitoglu concept (deviatoric plane) and the Chaboche concept (strain space). On the basis of experimental data for ST52 steel, the possibility of capturing hysteresis loops and the cyclic strain curve simultaneously, in the range usual for low-cycle fatigue calculations, is then shown. A new model for describing cyclic hardening/softening behaviour has also been developed, based on the Jiang-Sehitoglu memory surface concept. Finally, recommendations for the use of the individual models and directions for further research are formulated in the conclusions.

  6. Constitutive Model Calibration via Autonomous Multiaxial Experimentation (Postprint)

    Science.gov (United States)

    2016-09-17

    Modern plasticity models contain numerous parameters that can be difficult and time consuming to fit using current methods. Additional…complexity, is a difficult and time consuming process that has historically been a separate process from the experimental testing. As such, additional

  7. Embodying, calibrating and caring for a local model of obesity

    DEFF Research Database (Denmark)

    Winther, Jonas; Hillersdal, Line

    Interdisciplinary research collaborations are increasingly made a mandatory 'standard' within strategic research grants. Collaborations between the natural, social and humanistic sciences are conceptualized as uniquely suited to study pressing societal problems. The obesity epidemic has been...... highlighted as such a problem. Within research communities disparate explanatory models of obesity exist (Ulijaszek 2008) and some of these models of obesity are brought together in the Copenhagen-based interdisciplinary research initiative; Governing Obesity (GO) with the aim of addressing the causes...

  8. Calibration under uncertainty for finite element models of masonry monuments

    Energy Technology Data Exchange (ETDEWEB)

    Atamturktur, Sezer,; Hemez, Francois,; Unal, Cetin

    2010-02-01

    Historical unreinforced masonry buildings often include features such as load bearing unreinforced masonry vaults and their supporting framework of piers, fill, buttresses, and walls. The masonry vaults of such buildings are among the most vulnerable structural components and certainly among the most challenging to analyze. The versatility of finite element (FE) analyses in incorporating various constitutive laws, as well as practically all geometric configurations, has resulted in the widespread use of the FE method for the analysis of complex unreinforced masonry structures over the last three decades. However, an FE model is only as accurate as its input parameters, and there are two fundamental challenges while defining FE model input parameters: (1) material properties and (2) support conditions. The difficulties in defining these two aspects of the FE model arise from the lack of knowledge in the common engineering understanding of masonry behavior. As a result, engineers are unable to define these FE model input parameters with certainty, and, inevitably, uncertainties are introduced to the FE model.

  9. Calibration of HEC-Ras hydrodynamic model using gauged discharge data and flood inundation maps

    Science.gov (United States)

    Tong, Rui; Komma, Jürgen

    2017-04-01

    The estimation of floods is essential for disaster mitigation. Hydrodynamic models are implemented to predict the occurrence and variability of floods at different scales. In practice, the calibration of hydrodynamic models aims to find the best possible parameters for representing the natural flow resistance. In recent years, calibration of hydrodynamic models has become more practical and faster, following advances in earth observation products and computer-based optimization techniques. In this study, the Hydrologic Engineering Center's River Analysis System (HEC-RAS) model was set up with a high-resolution digital elevation model from laser scanning for the river Inn in Tyrol, Austria. The 10 largest flood events from 19 hourly discharge gauges, together with flood inundation maps, were selected to calibrate the HEC-RAS model. Manning roughness values and lateral inflow factors were automatically optimized as parameters with the Shuffled Complex with Principal Component Analysis (SP-UCI) algorithm, developed from the Shuffled Complex Evolution (SCE-UA) algorithm. Different objective functions (Nash-Sutcliffe model efficiency coefficient, timing of the peak, peak value, and root-mean-square deviation) were used singly or in combination. The lateral inflow factor was found to be the most sensitive parameter. The SP-UCI algorithm could avoid local optima and find efficient and effective parameters in the calibration of the HEC-RAS model using flood extent images. As the results showed, calibration against gauged discharge data and flood inundation maps, with the Nash-Sutcliffe model efficiency coefficient as the objective function, was very robust in obtaining reliable flood simulations and in capturing the peak value and the timing of the peak.
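    Of the objective functions listed, the Nash-Sutcliffe efficiency proved the most robust here; it is worth writing out, since its two fixed points anchor its interpretation (discharge values below are illustrative):

```python
import numpy as np

def nash_sutcliffe(simulated, observed):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1.0 is a perfect fit; 0.0 means the model is no better than
    always predicting the observed mean."""
    sim = np.asarray(simulated, float)
    obs = np.asarray(observed, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

observed = [120.0, 340.0, 510.0, 280.0]           # hypothetical hourly discharges
nse_perfect = nash_sutcliffe(observed, observed)  # perfect simulation
nse_mean = nash_sutcliffe([312.5] * 4, observed)  # constant-mean "simulation"
```

    Because the denominator is the variance of the observations, NSE rewards capturing flood peaks, which is consistent with the record's finding that an NSE objective also recovered the peak value and timing well.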

  10. Calibration of Yucca Mountain unsaturated zone flow and transport model using porewater chloride data

    International Nuclear Information System (INIS)

    Liu, Jianchun; Sonnenthal, Eric L.; Bodvarsson, Gudmundur S.

    2002-01-01

    In this study, porewater chloride data from Yucca Mountain, Nevada, are analyzed and modeled by 3-D chemical transport simulations and analytical methods. The simulation modeling approach is based on a continuum formulation of coupled multiphase fluid flow and tracer transport processes through fractured porous rock, using a dual-continuum concept. Infiltration-rate calibrations were performed using the porewater chloride data. With the calibrated infiltration rates, the modeled chloride distributions matched the observed data more closely. Statistical analyses of the frequency distributions of overall percolation fluxes and chloride concentrations in the unsaturated zone system demonstrate that the use of the calibrated infiltration rates had an insignificant effect on the distribution of simulated percolation fluxes but significantly changed the predicted distribution of simulated chloride concentrations. An analytical method was also applied to model transient chloride transport; it was verified against the 3-D simulation results and shown to capture the major transient chemical behavior and trends. Effects of lateral flow in the Paintbrush nonwelded unit on percolation fluxes and chloride distribution were studied by 3-D simulations with increased horizontal permeability. The combined results from these model calibrations furnish important information for the UZ model studies, contributing to performance assessment of the potential repository

  11. Effect of heteroscedasticity treatment in residual error models on model calibration and prediction uncertainty estimation

    Science.gov (United States)

    Sun, Ruochen; Yuan, Huiling; Liu, Xiaoli

    2017-11-01

    The heteroscedasticity treatment in residual error models directly impacts the model calibration and prediction uncertainty estimation. This study compares three methods to deal with the heteroscedasticity, including the explicit linear modeling (LM) method and nonlinear modeling (NL) method using hyperbolic tangent function, as well as the implicit Box-Cox transformation (BC). Then a combined approach (CA) combining the advantages of both LM and BC methods has been proposed. In conjunction with the first order autoregressive model and the skew exponential power (SEP) distribution, four residual error models are generated, namely LM-SEP, NL-SEP, BC-SEP and CA-SEP, and their corresponding likelihood functions are applied to the Variable Infiltration Capacity (VIC) hydrologic model over the Huaihe River basin, China. Results show that the LM-SEP yields the poorest streamflow predictions with the widest uncertainty band and unrealistic negative flows. The NL and BC methods can better deal with the heteroscedasticity and hence their corresponding predictive performances are improved, yet the negative flows cannot be avoided. The CA-SEP produces the most accurate predictions with the highest reliability and effectively avoids the negative flows, because the CA approach is capable of addressing the complicated heteroscedasticity over the study basin.
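    The treatments compared above differ in where the flow dependence of the residual variance enters. A minimal sketch of the explicit linear (LM) and implicit Box-Cox (BC) devices, with made-up coefficients:

```python
import numpy as np

def linear_sigma(sim, a, b):
    """Explicit LM scheme: residual standard deviation grows linearly
    with simulated flow."""
    return a + b * np.asarray(sim, float)

def box_cox(y, lam):
    """Implicit BC scheme: variance-stabilizing power transform of the flows."""
    y = np.asarray(y, float)
    return np.log(y) if lam == 0 else (y ** lam - 1.0) / lam

def standardized_residuals(obs, sim, a, b):
    """Raw residuals divided by the flow-dependent std under the LM scheme;
    ideally these are homoscedastic."""
    obs = np.asarray(obs, float)
    sim = np.asarray(sim, float)
    return (obs - sim) / linear_sigma(sim, a, b)

z = standardized_residuals([12.0], [10.0], 1.0, 0.1)  # (12-10)/(1+0.1*10)
```

    The LM scheme models the heteroscedasticity directly on the original flow scale, while BC transforms the flows so that a constant residual variance becomes plausible; the study's CA approach combines the two to keep the benefits of each while avoiding negative simulated flows.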

  12. Modeling and Experimental Analysis of Piezoelectric Shakers for High-Frequency Calibration of Accelerometers

    International Nuclear Information System (INIS)

    Vogl, Gregory W.; Harper, Kari K.; Payne, Bev

    2010-01-01

    Piezoelectric shakers have been developed and used at the National Institute of Standards and Technology (NIST) for decades for high-frequency calibration of accelerometers. Recently, NIST researchers built new piezoelectric shakers in the hopes of reducing the uncertainties in the calibrations of accelerometers while extending the calibration frequency range beyond 20 kHz. The ability to build and measure piezoelectric shakers invites modeling of these systems in order to improve their design for increased performance, which includes a sinusoidal motion with lower distortion, lower cross-axial motion, and an increased frequency range. In this paper, we present a model of piezoelectric shakers and match it to experimental data. The equations of motion for all masses are solved along with the coupled state equations for the piezoelectric actuator. Finally, additional electrical elements like inductors, capacitors, and resistors are added to the piezoelectric actuator for matching of experimental and theoretical frequency responses.

  13. Calibrated and Interactive Modelling of Form-Active Hybrid Structures

    DEFF Research Database (Denmark)

    Quinn, Gregory; Holden Deleuran, Anders; Piker, Daniel

    2016-01-01

    Form-active hybrid structures (FAHS) couple two or more different structural elements of low self weight and low or negligible bending flexural stiffness (such as slender beams, cables and membranes) into one structural assembly of high global stiffness. They offer high load-bearing capacity...... software packages which introduce interruptions and data exchange issues in the modelling pipeline. The mechanical precision, stability and open software architecture of Kangaroo has facilitated the development of proof-of-concept modelling pipelines which tackle this challenge and enable powerful...... materially-informed sketching. Making use of a projection-based dynamic relaxation solver for structural analysis, explorative design has proven to be highly effective....

  14. Experimental validation and calibration of pedestrian loading models for footbridges

    DEFF Research Database (Denmark)

    Ricciardelli, Fransesco; Briatico, C; Ingólfsson, Einar Thór

    2006-01-01

    Different patterns of pedestrian loading of footbridges exist, whose occurrence depends on a number of parameters, such as the bridge span, frequency, damping and mass, and the pedestrian density and activity. In this paper analytical models for the transient action of one walker and for the stat...

  15. An auto-calibration procedure for empirical solar radiation models

    NARCIS (Netherlands)

    Bojanowski, J.S.; Donatelli, Marcello; Skidmore, A.K.; Vrieling, A.

    2013-01-01

    Solar radiation data are an important input for estimating evapotranspiration and modelling crop growth. Direct measurement of solar radiation is now carried out in most European countries, but the network of measuring stations is too sparse for reliable interpolation of measured values. Instead of

  16. The Active Model: a calibration of material intent

    DEFF Research Database (Denmark)

    Ramsgaard Thomsen, Mette; Tamke, Martin

    2012-01-01

    created it. This definition suggests structural characteristics that are perhaps not immediately obvious when implemented within architectural models. It opens the idea that materiality might persist into the digital environment, as well as the digital lingering within the material. It implies questions...

  17. Remote sensing estimation of evapotranspiration for SWAT Model Calibration

    Science.gov (United States)

    Hydrological models are used to assess many water resource problems from water quantity to water quality issues. The accurate assessment of the water budget, primarily the influence of precipitation and evapotranspiration (ET), is a critical first-step evaluation, which is often overlooked in hydro...

  18. Model Insensitive and Calibration Independent Method for Determination of the Downstream Neutral Hydrogen Density Through Ly-alpha Glow Observations

    Science.gov (United States)

    Gangopadhyay, P.; Judge, D. L.

    1996-01-01

    Our knowledge of the various heliospheric phenomena (location of the solar wind termination shock, heliopause configuration and very local interstellar medium parameters) is limited by uncertainties in the available heliospheric plasma models and by calibration uncertainties in the observing instruments. There is, thus, a strong motivation to develop model-insensitive and calibration-independent methods to reduce the uncertainties in the relevant heliospheric parameters. We have developed such a method to constrain the downstream neutral hydrogen density inside the heliospheric tail. In our approach we have taken advantage of the relative insensitivity of the downstream neutral hydrogen density profile to the specific plasma model adopted. We have also used the fact that the presence of an asymmetric neutral hydrogen cavity surrounding the Sun, characteristic of all neutral density models, results in a higher multiple-scattering contribution to the observed glow in the downstream region than in the upstream region. This allows us to approximate the actual density profile with a spatially uniform one for the purpose of calculating the downstream backscattered glow. Using different spatially constant density profiles, radiative transfer calculations are performed, and the radial dependence of the predicted glow is compared with the observed 1/R dependence of the Pioneer 10 UV data. Such a comparison bounds the large-distance heliospheric neutral hydrogen density in the downstream direction to a value between 0.05 and 0.1/cc.

  19. SWAT application in intensive irrigation systems: Model modification, calibration and validation

    Science.gov (United States)

    Dechmi, Farida; Burguete, Javier; Skhiri, Ahmed

    2012-11-01

    The Soil and Water Assessment Tool (SWAT) is a well-established, distributed, eco-hydrologic model. However, using the case study of an intensively irrigated agricultural watershed, it was shown that none of the model versions is able to appropriately reproduce the total streamflow in such a system when the irrigation source is outside the watershed. The objective of this study was to modify the SWAT2005 version to correctly simulate the main hydrological processes. Calibration and validation of crop yield, total streamflow, total suspended sediment (TSS) losses and phosphorus load were performed using field survey information and water quantity and quality data recorded in 2008 and 2009 in the Del Reguero irrigated watershed in Spain. The goodness of the calibration and validation results was assessed using five statistical measures, including the Nash-Sutcliffe efficiency (NSE). Results indicated that the average annual crop yield and actual evapotranspiration estimates were quite satisfactory. On a monthly basis, the NSE values were 0.90 (calibration) and 0.80 (validation), indicating that the modified model could accurately reproduce the observed streamflow. The TSS losses were also satisfactorily estimated (NSE = 0.72 and 0.52 for the calibration and validation steps). The monthly temporal patterns and all the statistical parameters indicated that the modified SWAT-IRRIG model adequately predicted the total phosphorus (TP) loading. The model could therefore be used to assess the impacts of different best management practices on nonpoint phosphorus losses in irrigated systems.

  20. Statistical validation of engineering and scientific models : bounds, calibration, and extrapolation.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Hills, Richard Guy (New Mexico State University, Las Cruces, NM)

    2005-04-01

    Numerical models of complex phenomena often contain approximations due to our inability to fully model the underlying physics, the excessive computational resources required to fully resolve the physics, the need to calibrate constitutive models, or in some cases, our ability to only bound behavior. Here we illustrate the relationship between approximation, calibration, extrapolation, and model validation through a series of examples that use the linear transient convective/dispersion equation to represent the nonlinear behavior of Burgers equation. While the use of these models represents a simplification relative to the types of systems we normally address in engineering and science, the present examples do support the tutorial nature of this document without obscuring the basic issues presented with unnecessarily complex models.

  1. Calibration of a finite element composite delamination model by experiments

    DEFF Research Database (Denmark)

    Gaiotti, M.; Rizzo, C.M.; Branner, Kim

    2013-01-01

    This paper deals with the mechanical behavior under in plane compressive loading of thick and mostly unidirectional glass fiber composite plates made with an initial embedded delamination. The delamination is rectangular in shape, causing the separation of the central part of the plate into two...... distinct sub-laminates. The work focuses on experimental validation of a finite element model built using the 9-noded MITC9 shell elements, which prevent locking effects and aiming to capture the highly non linear buckling features involved in the problem. The geometry has been numerically defined...

  2. Optimal Operational Monetary Policy Rules in an Endogenous Growth Model: a calibrated analysis

    OpenAIRE

    Arato, Hiroki

    2009-01-01

    This paper constructs an endogenous growth New Keynesian model and considers growth and welfare effect of Taylor-type (operational) monetary policy rules. The Ramsey equilibrium and optimal operational monetary policy rule is also computed. In the calibrated model, the Ramseyoptimal volatility of inflation rate is smaller than that in standard exogenous growth New Keynesian model with physical capital accumulation. Optimal operational monetary policy rule makes nominal interest rate respond s...

  3. Uncertainty analyses of the calibrated parameter values of a water quality model

    Science.gov (United States)

    Rode, M.; Suhr, U.; Lindenschmidt, K.-E.

    2003-04-01

    For river basin management, water quality models are increasingly used for the analysis and evaluation of different management measures. However, substantial uncertainties exist in parameter values, depending on the available calibration data. In this paper an uncertainty analysis for a water quality model is presented which considers the impact of the available model calibration data and the variance of the input variables. The investigation was based on four extensive flow-time-related longitudinal surveys of the River Elbe in the years 1996 to 1999, with varying discharges and seasonal conditions. For the model calculations the deterministic model QSIM of the BfG (Germany) was used. QSIM is a one-dimensional water quality model that uses standard algorithms for hydrodynamics and phytoplankton dynamics in running waters, e.g. Michaelis-Menten/Monod kinetics, which appear in a wide range of models. The multi-objective calibration of the model was carried out with the nonlinear parameter estimator PEST. The results show that, for individual flow-time-related measuring surveys, very good agreement between model calculations and measured values can be obtained. If these parameters are applied under deviating boundary conditions, however, substantial errors in the model calculations can occur. These uncertainties can be reduced with a larger calibration database: more reliable model parameters can be identified, which yield reasonable results over broader boundary conditions. Extending the application of the parameter set to a wider range of water quality conditions leads to a slight reduction in model precision for any specific water quality situation. Moreover, the investigations show that highly variable water quality variables, such as algal biomass, always permit lower forecast accuracy than variables with lower coefficients of variation, such as nitrate.

  4. A continental-scale hydrology and water quality model for Europe: Calibration and uncertainty of a high-resolution large-scale SWAT model

    Science.gov (United States)

    Abbaspour, K. C.; Rouholahnejad, E.; Vaghefi, S.; Srinivasan, R.; Yang, H.; Kløve, B.

    2015-05-01

    A combination of driving forces is increasing pressure on local, national, and regional water supplies needed for irrigation, energy production, industrial uses, domestic purposes, and the environment. In many parts of Europe groundwater quantity, and in particular quality, have been severely degraded, and water levels have decreased, resulting in negative environmental impacts. Rapid improvements in the economies of the eastern European bloc of countries and uncertainties with regard to freshwater availability create challenges for water managers. At the same time, climate change adds a new level of uncertainty with regard to freshwater supplies. In this research we build and calibrate an integrated hydrological model of Europe using the Soil and Water Assessment Tool (SWAT) program. Different components of water resources are simulated, and crop yield and water quality are considered at the Hydrological Response Unit (HRU) level. The water resources are quantified at subbasin level with monthly time intervals. Leaching of nitrate into groundwater is also simulated at a finer spatial level (HRU). The use of large-scale, high-resolution water resources models enables consistent and comprehensive examination of integrated system behavior through physically-based, data-driven simulation. In this article we discuss issues with data availability and calibration of large-scale distributed models, and outline procedures for model calibration and uncertainty analysis. The calibrated model and results provide information in support of the European Water Framework Directive and lay the basis for further assessment of the impact of climate change on water availability and quality. The approach and methods developed are general and can be applied to any large region around the world.

  5. Optimization of electronic enclosure design for thermal and moisture management using calibrated models of progressive complexity

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Staliulionis, Zygimantas; Shojaee Nasirabadi, Parizad

    2016-01-01

    the development of rigorous calibrated CFD models as well as simple predictive numerical tools, the current paper tackles the optimization of critical features of a typical two-chamber electronic enclosure. The progressive optimization strategy begins the design parameter selection by initially using simpler...

  6. Calibration of the L-MEB model over a coniferous and a deciduous forest

    DEFF Research Database (Denmark)

    Grant, Jennifer P.; Saleh-Contell, Kauzar; Wigneron, Jean-Pierre

    2008-01-01

    In this paper, the L-band Microwave Emission of the Biosphere (L-MEB) model used in the Soil Moisture and Ocean Salinity (SMOS) Level 2 Soil Moisture algorithm is calibrated using L-band (1.4 GHz) microwave measurements over a coniferous (Pine) and a deciduous (mixed/Beech) forest. This resulted...

  7. Displaced calibration of PM10 measurements using spatio-temporal models

    Directory of Open Access Journals (Sweden)

    Daniela Cocchi

    2007-12-01

    Full Text Available PM10 monitoring networks are equipped with heterogeneous samplers. Some of these samplers are known to underestimate true levels of concentrations (non-reference samplers. In this paper we propose a hierarchical spatio-temporal Bayesian model for the calibration of measurements recorded using non-reference samplers, by borrowing strength from non co-located reference sampler measurements.

  8. A parameter for the selection of an optimum balance calibration model by Monte Carlo simulation

    CSIR Research Space (South Africa)

    Bidgood, Peter M

    2013-09-01

    Full Text Available The current trend in balance calibration-matrix generation is to use non-linear regression and statistical methods. Methods typically include Modified-Design-of-Experiment (MDOE), Response-Surface-Models (RSMs) and Analysis of Variance (ANOVA...

  9. Predictive error dependencies when using pilot points and singular value decomposition in groundwater model calibration

    DEFF Research Database (Denmark)

    Christensen, Steen; Doherty, John

    2008-01-01

    super parameters), and that the structural errors caused by using pilot points and super parameters to parameterize the highly heterogeneous log-transmissivity field can be significant. For the test case much effort is put into studying how the calibrated model's ability to make accurate predictions...

  10. Calibration of a semi-distributed hydrological model using discharge and remote sensing data

    NARCIS (Netherlands)

    Muthuwatta, L.P.; Muthuwatta, Lal P.; Booij, Martijn J.; Rientjes, T.H.M.; Rientjes, Tom H.M.; Bos, M.G.; Gieske, A.S.M.; Ahmad, Mobin-Ud-Din; Yilmaz, Koray; Yucel, Ismail; Gupta, Hoshin V.; Wagener, Thorsten; Yang, Dawen; Savenije, Hubert; Neale, Christopher; Kunstmann, Harald; Pomeroy, John

    2009-01-01

    The objective of this study is to present an approach to calibrate a semi-distributed hydrological model using observed streamflow data and actual evapotranspiration time series estimates based on remote sensing data. First, daily actual evapotranspiration is estimated using available MODIS

  11. Performance and Model Calibration of R-D-N Processes in Pilot Plant

    DEFF Research Database (Denmark)

    de la Sota, A.; Larrea, L.; Novak, L.

    1994-01-01

    This paper deals with the first part of an experimental programme in a pilot plant configured for advanced biological nutrient removal processes treating domestic wastewater of Bilbao. The IAWPRC Model No.1 was calibrated in order to optimize the design of the full-scale plant. In this first phas...

  12. Using expert knowledge of the hydrological system to constrain multi-objective calibration of SWAT models

    Science.gov (United States)

    The SWAT model is a helpful tool to predict hydrological processes in a study catchment and their impact on the river discharge at the catchment outlet. For reliable discharge predictions, a precise simulation of hydrological processes is required. Therefore, SWAT has to be calibrated accurately to ...

  13. Calibration of the model SMART2 in the Netherlands, using data available at the European scale

    NARCIS (Netherlands)

    Mol-Dijkstra, J.P.; Kros, J.

    1999-01-01

    The soil acidification model SMART2 has been developed for application on a national to a continental scale. In this study SMART2 is applied at the European scale, which means that SMART2 was applied to the Netherlands with data that are available at the European scale. In order to calibrate SMART2,

  14. Calibration and Monte Carlo modelling of neutron long counters

    CERN Document Server

    Tagziria, H

    2000-01-01

    The Monte Carlo technique has become a very powerful tool in radiation transport as full advantage is taken of enhanced cross-section data, more powerful computers and statistical techniques, together with better characterisation of neutron and photon source spectra. At the National Physical Laboratory, calculations using the Monte Carlo radiation transport code MCNP-4B have been combined with accurate measurements to characterise two long counters routinely used to standardise monoenergetic neutron fields. New and more accurate response function curves have been produced for both long counters. A novel approach using Monte Carlo methods has been developed, validated and used to model the response function of the counters and determine more accurately their effective centres, which have always been difficult to establish experimentally. Calculations and measurements agree well, especially for the De Pangher long counter for which details of the design and constructional material are well known. The sensitivit...

  15. Model independent approach to the single photoelectron calibration of photomultiplier tubes

    Energy Technology Data Exchange (ETDEWEB)

    Saldanha, R.; Grandi, L.; Guardincerri, Y.; Wester, T.

    2017-08-01

    The accurate calibration of photomultiplier tubes is critical in a wide variety of applications in which it is necessary to know the absolute number of detected photons or precisely determine the resolution of the signal. Conventional calibration methods rely on fitting the photomultiplier response to a low intensity light source with analytical approximations to the single photoelectron distribution, often leading to biased estimates due to the inability to accurately model the full distribution, especially at low charge values. In this paper we present a simple statistical method to extract the relevant single photoelectron calibration parameters without making any assumptions about the underlying single photoelectron distribution. We illustrate the use of this method through the calibration of a Hamamatsu R11410 photomultiplier tube and study the accuracy and precision of the method using Monte Carlo simulations. The method is found to have significantly reduced bias compared to conventional methods and works under a wide range of light intensities, making it suitable for simultaneously calibrating large arrays of photomultiplier tubes.
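The moment-based idea behind such model-independent calibration can be sketched as follows (a simplified illustration assuming Poisson photoelectron statistics and a pedestal characterized from dark runs; this is not the paper's exact estimator):

```python
# Simplified moment-based single-photoelectron (SPE) calibration sketch.
# Assumption: photoelectron counts are Poisson distributed and the pedestal
# mean/variance are known from dark data. This illustrates the general idea
# of extracting SPE parameters without assuming an SPE shape; it is NOT the
# exact statistical method of the paper.
import numpy as np

def spe_calibration(charges, pedestal_mean, pedestal_var, zero_frac):
    """Estimate occupancy, SPE mean and SPE variance from the first two
    moments of a low-intensity charge spectrum.

    zero_frac: fraction of events consistent with zero photoelectrons
    (e.g. estimated from a charge threshold or an independent trigger).
    """
    lam = -np.log(zero_frac)                  # Poisson occupancy
    q_mean = np.mean(charges)
    q_var = np.var(charges)
    mu_spe = (q_mean - pedestal_mean) / lam
    var_spe = (q_var - pedestal_var) / lam - mu_spe**2
    return lam, mu_spe, var_spe
```

Because only the mean and variance of the full spectrum enter, no analytical form for the SPE charge distribution is required, which is the key point of such model-independent approaches.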

  16. How does higher frequency monitoring data affect the calibration of a process-based water quality model?

    Science.gov (United States)

    Jackson-Blake, Leah; Helliwell, Rachel

    2015-04-01

    Process-based catchment water quality models are increasingly used as tools to inform land management. However, for such models to be reliable they need to be well calibrated and shown to reproduce key catchment processes. Calibration can be challenging for process-based models, which tend to be complex and highly parameterised. Calibrating a large number of parameters generally requires a large amount of monitoring data, spanning all hydrochemical conditions. However, regulatory agencies and research organisations generally only sample at a fortnightly or monthly frequency, even in well-studied catchments, often missing peak flow events. The primary aim of this study was therefore to investigate how the quality and uncertainty of model simulations produced by a process-based, semi-distributed catchment model, INCA-P (the INtegrated CAtchment model of Phosphorus dynamics), were improved by calibration to higher frequency water chemistry data. Two model calibrations were carried out for a small rural Scottish catchment: one using 18 months of daily total dissolved phosphorus (TDP) concentration data, another using a fortnightly dataset derived from the daily data. To aid comparability, calibrations were carried out automatically using the Markov Chain Monte Carlo - DiffeRential Evolution Adaptive Metropolis (MCMC-DREAM) algorithm. Calibration to daily data resulted in improved simulation of peak TDP concentrations and improved model performance statistics. Parameter-related uncertainty in simulated TDP was large when fortnightly data was used for calibration, with a 95% credible interval of 26 μg/l. This uncertainty is comparable in size to the difference between Water Framework Directive (WFD) chemical status classes, and would therefore make it difficult to use this calibration to predict shifts in WFD status. The 95% credible interval reduced markedly with the higher frequency monitoring data, to 6 μg/l. The number of parameters that could be reliably auto-calibrated
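The effect of monitoring frequency on parameter-related uncertainty can be illustrated with a minimal Metropolis sampler on a toy model (not MCMC-DREAM or INCA-P; the decay model, noise level and step size are all assumptions):

```python
# Minimal Metropolis sketch: calibrate one rate parameter against daily vs.
# fortnightly observations and compare the widths of the 95% credible
# intervals. The toy first-order decay "model", noise level and proposal
# step are illustrative assumptions, not anything from INCA-P or DREAM.
import numpy as np

def log_posterior(k, t, obs, sigma=0.5):
    pred = 10.0 * np.exp(-k * t)              # toy process model
    return -0.5 * np.sum((obs - pred) ** 2) / sigma**2

def metropolis(t, obs, n_iter=20000, step=0.005, seed=1):
    rng = np.random.default_rng(seed)
    k, lp = 0.1, log_posterior(0.1, t, obs)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        k_new = k + step * rng.standard_normal()
        lp_new = log_posterior(k_new, t, obs)
        if np.log(rng.random()) < lp_new - lp:  # Metropolis acceptance
            k, lp = k_new, lp_new
        chain[i] = k
    return chain[n_iter // 2:]                # discard burn-in

def ci_width(chain):
    lo, hi = np.percentile(chain, [2.5, 97.5])
    return hi - lo
```

Subsampling the same synthetic record to a fortnightly frequency widens the credible interval, mirroring the 26 μg/l vs. 6 μg/l contrast reported above.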

  17. Prediction of dissolved reactive phosphorus losses from small agricultural catchments: calibration and validation of a parsimonious model

    Directory of Open Access Journals (Sweden)

    C. Hahn

    2013-10-01

    Full Text Available Eutrophication of surface waters due to diffuse phosphorus (P) losses continues to be a severe water quality problem worldwide, causing the loss of ecosystem functions of the respective water bodies. Phosphorus in runoff often originates from only a small fraction of a catchment. Targeting mitigation measures to these critical source areas (CSAs) is expected to be most efficient and cost-effective, but requires suitable tools. Here we investigated the capability of the parsimonious Rainfall-Runoff-Phosphorus (RRP) model to identify CSAs in grassland-dominated catchments based on readily available soil and topographic data. After simultaneous calibration on runoff data from four small hilly catchments on the Swiss Plateau, the model was validated on a different catchment in the same region without further calibration. The RRP model adequately simulated the discharge and dissolved reactive P (DRP) export from the validation catchment. Sensitivity analysis showed that the model predictions were robust with respect to the classification of soils into "poorly drained" and "well drained", based on the available soil map. Comparing spatial hydrological model predictions with field data from the validation catchment provided further evidence that the assumptions underlying the model are valid and that the model adequately accounts for the dominant P export processes in the target region. Thus, the parsimonious RRP model is a valuable tool that can be used to determine CSAs. Despite the considerable predictive uncertainty regarding the spatial extent of CSAs, the RRP model can provide guidance for the implementation of mitigation measures. The model helps to identify those parts of a catchment where high DRP losses are expected or can be excluded with high confidence. Legacy P was predicted to be the dominant source of DRP losses and thus, in combination with hydrologically active areas, to pose a high risk to water quality.

  18. Sensitivity analysis and calibration of a soil carbon model (SoilGen2) in two contrasting loess forest soils

    Directory of Open Access Journals (Sweden)

    Y. Y. Yu

    2013-01-01

    Full Text Available To accurately estimate past terrestrial carbon pools is the key to understanding the global carbon cycle and its relationship with the climate system. SoilGen2 is a useful tool to obtain aspects of soil properties (including carbon content) by simulating soil formation processes; thus it offers an opportunity for both past soil carbon pool reconstruction and future carbon pool prediction. In order to apply it to various environmental conditions, parameters related to carbon cycle processes in SoilGen2 are calibrated based on six soil pedons from two typical loess deposition regions (Belgium and China). Sensitivity analysis using the Morris method shows that the decomposition rate of humus (kHUM), the fraction of incoming plant material as leaf litter (frecto) and the decomposition rate of resistant plant material (kRPM) are the three most sensitive parameters, causing the greatest uncertainty in the simulated change of soil organic carbon in both regions. According to the principle of minimizing the difference between simulated and measured organic carbon by comparing quality indices, suitable values of kHUM, frecto and kRPM in the model are deduced step by step and validated for independent soil pedons. The difference in calibrated parameters between Belgium and China may be attributed to their different vegetation types and climate conditions. The calibrated model allows more accurate simulation of carbon change in the whole pedon and has potential for future modeling of the carbon cycle over long timescales.
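The Morris screening method used in this record can be sketched as follows (a generic elementary-effects implementation run on an assumed toy model function, not SoilGen2):

```python
# Sketch of the Morris elementary-effects screening method. The model
# function passed in is a stand-in for an expensive simulator such as
# SoilGen2; trajectory count, level count and bounds are illustrative.
import numpy as np

def morris_elementary_effects(model, bounds, n_traj=20, levels=4, seed=0):
    """Return the mean absolute elementary effect (Morris mu*) per parameter."""
    rng = np.random.default_rng(seed)
    k = len(bounds)
    delta = levels / (2.0 * (levels - 1))     # standard Morris step

    def scale(u):
        # map a point in the unit hypercube to the parameter bounds
        return [lo + ui * (hi - lo) for ui, (lo, hi) in zip(u, bounds)]

    effects = np.zeros((n_traj, k))
    for t in range(n_traj):
        # random base point on the Morris grid
        x = rng.integers(0, levels - 1, k) / (levels - 1)
        for j in rng.permutation(k):          # perturb one factor at a time
            x_new = x.copy()
            x_new[j] = x[j] + delta if x[j] + delta <= 1.0 else x[j] - delta
            effects[t, j] = ((model(scale(x_new)) - model(scale(x)))
                             / (x_new[j] - x[j]))
            x = x_new
    return np.abs(effects).mean(axis=0)       # mu*
```

Parameters with large mu* (here, the analogues of kHUM, frecto and kRPM) are the ones worth calibrating carefully; the rest can be fixed.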

  19. Users guide to REGIONAL-1: a regional assessment model

    International Nuclear Information System (INIS)

    Davis, W.E.; Eadie, W.J.; Powell, D.C.

    1979-09-01

    A guide was prepared to allow a user to run the PNL long-range transport model, REGIONAL 1. REGIONAL 1 is a computer model set up to run atmospheric assessments on a regional basis. The model has the capability of being run in three modes for a single time period. The three modes are: (1) no deposition, (2) dry deposition, (3) wet and dry deposition. The guide provides the physical and mathematical basis used in the model for calculating transport, diffusion, and deposition for all three modes. The guide also includes a program listing with an explanation of the listings and an example in the form of a short-term assessment for 48 hours. The purpose of the example is to allow a person who has past experience with programming and meteorology to operate the assessment model and compare his results with the guide results. This comparison will assure the user that the program is operating in a proper fashion.

  20. On Inertial Body Tracking in the Presence of Model Calibration Errors.

    Science.gov (United States)

    Miezal, Markus; Taetz, Bertram; Bleser, Gabriele

    2016-07-22

    In inertial body tracking, the human body is commonly represented as a biomechanical model consisting of rigid segments with known lengths and connecting joints. The model state is then estimated via sensor fusion methods based on data from attached inertial measurement units (IMUs). This requires the relative poses of the IMUs w.r.t. the segments-the IMU-to-segment calibrations, subsequently called I2S calibrations-to be known. Since calibration methods based on static poses, movements and manual measurements are still the most widely used, potentially large human-induced calibration errors have to be expected. This work compares three newly developed/adapted extended Kalman filter (EKF) and optimization-based sensor fusion methods with an existing EKF-based method w.r.t. their segment orientation estimation accuracy in the presence of model calibration errors with and without using magnetometer information. While the existing EKF-based method uses a segment-centered kinematic chain biomechanical model and a constant angular acceleration motion model, the newly developed/adapted methods are all based on a free segments model, where each segment is represented with six degrees of freedom in the global frame. Moreover, these methods differ in the assumed motion model (constant angular acceleration, constant angular velocity, inertial data as control input), the state representation (segment-centered, IMU-centered) and the estimation method (EKF, sliding window optimization). In addition to the free segments representation, the optimization-based method also represents each IMU with six degrees of freedom in the global frame. In the evaluation on simulated and real data from a three segment model (an arm), the optimization-based method showed the smallest mean errors, standard deviations and maximum errors throughout all tests. It also showed the lowest dependency on magnetometer information and motion agility. Moreover, it was insensitive w.r.t. I2S position and
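The "constant angular velocity" motion model referred to in this record can be illustrated with a 1-D linear Kalman filter (a toy stand-in for the paper's EKF and optimization-based methods; the noise parameters are assumptions):

```python
# 1-D linear Kalman filter sketch of a "constant angular velocity" motion
# model: track [angle, angular_velocity] from noisy angle measurements.
# This is a toy stand-in for the EKF-based segment-orientation methods
# described above; process/measurement noise values are assumed.
import numpy as np

def kalman_orientation(z, dt=0.01, q=1e-4, r=1e-2):
    """Filter a sequence of noisy angle measurements z; return state history."""
    F = np.array([[1.0, dt], [0.0, 1.0]])     # constant angular velocity
    H = np.array([[1.0, 0.0]])                # only the angle is measured
    Q = q * np.eye(2)                         # process noise covariance
    R = np.array([[r]])                       # measurement noise covariance
    x = np.zeros(2)
    P = np.eye(2)
    history = []
    for zk in z:
        x = F @ x                             # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                   # innovation covariance
        K = (P @ H.T) / S                     # Kalman gain
        x = x + (K * (zk - H @ x)).ravel()    # update
        P = (np.eye(2) - K @ H) @ P
        history.append(x.copy())
    return np.array(history)
```

The angular velocity is never measured directly; it is inferred through the motion model, which is the same role such models play in the full 3-D filters compared in the paper.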

  1. Experimental calibration of the mathematical model of Air Torque Position dampers with non-cascading blades

    Directory of Open Access Journals (Sweden)

    Bikić Siniša M.

    2016-01-01

    Full Text Available This paper is focused on the mathematical model of Air Torque Position dampers. The mathematical model establishes a link between the velocity of air in front of the damper, the position of the damper blade and the moment acting on the blade caused by the air flow. This research aims to experimentally verify the mathematical model for the damper type with non-cascading blades. Four different types of dampers with non-cascading blades were considered: single-blade dampers, dampers with two cross-blades, dampers with two parallel blades and dampers with two blades of which one is fixed in the horizontal position. The case of a damper with a straight pipeline positioned in front of and behind the damper was taken into consideration. Calibration and verification of the mathematical model were conducted experimentally. The experiment was conducted on the laboratory facility for testing dampers used for regulation of the air flow rate in heating, ventilation and air conditioning systems. The design and setup of the laboratory facility, as well as the construction, adjustment and calibration of the laboratory damper, are presented in this paper. The mathematical model was calibrated by using one set of data, while the verification of the mathematical model was conducted by using a second set of data. The mathematical model was successfully validated and can be used for accurate measurement of the air velocity on dampers with non-cascading blades under different operating conditions. [Project of the Ministry of Science of the Republic of Serbia, no. TR31058]

  2. Calibration of a simple and a complex model of global marine biogeochemistry

    Science.gov (United States)

    Kriest, Iris

    2017-11-01

    The assessment of the ocean biota's role in climate change is often carried out with global biogeochemical ocean models that contain many components and involve a high level of parametric uncertainty. Because many data that relate to tracers included in a model are only sparsely observed, assessment of model skill is often restricted to tracers that can be easily measured and assembled. Examination of the models' fit to climatologies of inorganic tracers, after the models have been spun up to steady state, is a common but computationally expensive procedure to assess model performance and reliability. Using new tools that have become available for global model assessment and calibration in steady state, this paper examines two different model types - a complex seven-component model (MOPS) and a very simple four-component model (RetroMOPS) - for their fit to dissolved quantities. Before comparing the models, a subset of their biogeochemical parameters has been optimised against annual-mean nutrients and oxygen. Both model types fit the observations almost equally well. The simple model contains only two nutrients: oxygen and dissolved organic phosphorus (DOP). Its misfit and large-scale tracer distributions are sensitive to the parameterisation of DOP production and decay. The spatio-temporal decoupling of nitrogen and oxygen, and processes involved in their uptake and release, renders oxygen and nitrate valuable tracers for model calibration. In addition, the non-conservative nature of these tracers (with respect to their upper boundary condition) introduces the global bias (fixed nitrogen and oxygen inventory) as a useful additional constraint on model parameters. Dissolved organic phosphorus at the surface behaves antagonistically to phosphate, and suggests that observations of this tracer - although difficult to measure - may be an important asset for model calibration.

  3. Generic precise augmented reality guiding system and its calibration method based on 3D virtual model.

    Science.gov (United States)

    Liu, Miao; Yang, Shourui; Wang, Zhangying; Huang, Shujun; Liu, Yue; Niu, Zhenqi; Zhang, Xiaoxuan; Zhu, Jigui; Zhang, Zonghua

    2016-05-30

    Augmented reality systems can be applied to provide precise guidance for various kinds of manual work. The adaptability and guiding accuracy of such systems are determined by the computational model and the corresponding calibration method. In this paper, a novel type of augmented reality guiding system and the corresponding design scheme are proposed. Guided by external positioning equipment, the proposed system can achieve high relative indication accuracy in a large working space. The proposed system is realized with a digital projector, and a general back projection model is derived from the geometric relationship between the digitized 3D model and the projector in free space. A corresponding calibration method is also designed for the proposed system to obtain the parameters of the projector. To validate the proposed back projection model, coordinate data collected by 3D positioning equipment are used to calculate and optimize the extrinsic parameters. The final projection indication accuracy of the system is verified with a subpixel pattern projection technique.
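A pinhole-style back projection model of the kind described can be sketched as follows (illustrative only; the paper's general back projection model and calibration procedure are more involved):

```python
# Pinhole "back projection" sketch: map a 3D model point to the projector
# pixel that illuminates it. This treats the projector as an inverse camera
# with intrinsics K and extrinsics (R, t); it is an illustration, not the
# paper's exact model. The reprojection RMSE is the kind of quantity a
# calibration procedure would minimise over (R, t).
import numpy as np

def project_point(X, K, R, t):
    """Project world point X (shape (3,)) to projector pixel (u, v)."""
    Xc = R @ X + t                 # world -> projector coordinates
    x = K @ Xc                     # apply intrinsics
    return x[:2] / x[2]            # perspective division

def reprojection_rmse(points3d, pixels, K, R, t):
    """RMS distance between observed pixels and back-projected pixels."""
    pred = np.array([project_point(X, K, R, t) for X in points3d])
    return float(np.sqrt(np.mean(np.sum((pred - pixels) ** 2, axis=1))))
```

Collecting 3D/pixel correspondences with external positioning equipment and minimising `reprojection_rmse` over the extrinsic parameters is the general shape of such a calibration step.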

  4. Parallel Genetic Algorithms for calibrating Cellular Automata models: Application to lava flows

    International Nuclear Information System (INIS)

    D'Ambrosio, D.; Spataro, W.; Di Gregorio, S.; Calabria Univ., Cosenza; Crisci, G.M.; Rongo, R.; Calabria Univ., Cosenza

    2005-01-01

    Cellular Automata are highly nonlinear dynamical systems which are suitable for simulating natural phenomena whose behaviour may be specified in terms of local interactions. The Cellular Automata model SCIARA, developed for the simulation of lava flows, has been demonstrated to reproduce the behaviour of Etnean events. However, in order to apply the model for the prediction of future scenarios, a thorough calibration phase is required. This work presents the application of Genetic Algorithms, general-purpose search algorithms inspired by natural selection and genetics, for the parameter optimisation of the model SCIARA. Difficulties due to the elevated computational time suggested the adoption of a Master-Slave Parallel Genetic Algorithm for the calibration of the model with respect to the 2001 Mt. Etna eruption. Results demonstrated the usefulness of the approach, both in terms of computing time and quality of performed simulations.
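The master-slave pattern can be sketched as follows: the master performs selection, crossover and mutation, while the costly fitness evaluations (standing in for lava-flow simulation runs) can be farmed out to worker processes. This is a generic sketch, not the SCIARA calibration code; the toy fitness function and GA settings are assumptions:

```python
# Master-slave parallel Genetic Algorithm sketch. The master runs selection,
# one-point crossover and mutation; fitness evaluations can be distributed
# with multiprocessing.Pool.map via the pool_map argument (the default `map`
# keeps the sketch runnable serially). The quadratic toy fitness is a
# placeholder for an expensive simulation such as one SCIARA run.
import random
from multiprocessing import Pool  # pass pool.map as pool_map to parallelise

def fitness(params):
    """Toy fitness: peak at all parameters equal to 0.5 (assumed)."""
    return -sum((p - 0.5) ** 2 for p in params)

def evolve(pop_size=20, n_params=4, n_gen=10, seed=42, pool_map=map):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(n_gen):
        scores = list(pool_map(fitness, pop))        # "slave" evaluations
        ranked = [p for _, p in sorted(zip(scores, pop), reverse=True)]
        parents = ranked[: pop_size // 2]            # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_params)
            child = a[:cut] + b[cut:]                # one-point crossover
            if rng.random() < 0.2:                   # mutation
                child[rng.randrange(n_params)] = rng.random()
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

With `with Pool() as pool: evolve(pool_map=pool.map)`, each generation's evaluations run concurrently, which is exactly where the speed-up comes from when one evaluation is a full simulation.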

  5. Calibration of the heat balance model for prediction of car climate

    Science.gov (United States)

    Pokorný, Jan; Fišer, Jan; Jícha, Miroslav

    2012-04-01

    In the paper, the authors describe the development of a heat balance model to predict car cabin climate and heating power load. The model is developed in the Modelica language, using Dymola as the interpreter. It is a dynamical system describing the heat exchange between the car cabin and the ambient environment. Inside the car cabin, heat exchange between the air zone, the interior and the air-conditioning system is considered. The model accounts for 1D heat transfer with heat accumulation and for the relative movement of the Sun with respect to the car cabin while the car is moving. Measurements under real operating conditions provided the data for model calibration. The model was calibrated for Škoda Felicia parking-summer scenarios.

  6. Calibration of the heat balance model for prediction of car climate

    Directory of Open Access Journals (Sweden)

    Jícha Miroslav

    2012-04-01

    Full Text Available In the paper, the authors describe the development of a heat balance model to predict car cabin climate and heating power load. The model is developed in the Modelica language, using Dymola as the interpreter. It is a dynamical system describing the heat exchange between the car cabin and the ambient environment. Inside the car cabin, heat exchange between the air zone, the interior and the air-conditioning system is considered. The model accounts for 1D heat transfer with heat accumulation and for the relative movement of the Sun with respect to the car cabin while the car is moving. Measurements under real operating conditions provided the data for model calibration. The model was calibrated for Škoda Felicia parking-summer scenarios.
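The cabin/ambient heat exchange described in these records can be reduced to a lumped-capacitance sketch (illustrative only, not the authors' Modelica/Dymola model; all coefficients are assumptions):

```python
# Minimal lumped-capacitance cabin heat balance, illustrating the kind of
# energy balance such models integrate. NOT the authors' Modelica model;
# h_a (combined heat-transfer coefficient x area, W/K), c_cabin (effective
# heat capacity, J/K) and q_solar (absorbed solar load, W) are assumptions.

def simulate_cabin(T0, T_amb, q_solar, h_a=25.0, c_cabin=8.0e4,
                   dt=1.0, t_end=3600.0):
    """Explicit-Euler integration of c*dT/dt = h_a*(T_amb - T) + q_solar."""
    T, t = T0, 0.0
    while t < t_end:
        dTdt = (h_a * (T_amb - T) + q_solar) / c_cabin
        T += dTdt * dt
        t += dt
    return T
```

At steady state the cabin settles at `T_amb + q_solar / h_a`, which is why a parked car in summer sun ends up well above ambient; calibration against measurements would adjust `h_a`, `c_cabin` and the solar load term.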

  7. Approach of regional gravity field modeling from GRACE data for improvement of geoid modeling for Japan

    Science.gov (United States)

    Kuroishi, Y.; Lemoine, F. G.; Rowlands, D. D.

    2006-12-01

    The latest gravimetric geoid model for Japan, JGEOID2004, suffers from errors at long wavelengths (around 1000 km) in a range of +/- 30 cm. The model was developed by combining surface gravity data with a global marine altimetric gravity model, using EGM96 as a foundation, and the errors at long wavelengths are presumably attributable to EGM96 errors. The Japanese islands and their vicinity are located in a region of plate convergence boundaries, producing substantial gravity and geoid undulations over a wide range of wavelengths. Because of the geometry of the islands and trenches, precise information on gravity in the surrounding oceans should be incorporated in detail, even if the geoid model is required to be accurate only over land. The Kuroshio Current, which runs south of Japan, causes high sea surface variability, making altimetric gravity field determination complicated. To reduce the long-wavelength errors in the geoid model, we are investigating GRACE data for regional gravity field modeling at long wavelengths in the vicinity of Japan. Our approach is based on exclusive use of inter-satellite range-rate data with calibrated accelerometer data and attitude data, for regional or global gravity field recovery. In the first step, we calibrate accelerometer data in terms of scales and biases by fitting dynamically calculated orbits to GPS-determined precise orbits. The calibration parameters of accelerometer data thus obtained are used in the second step to recover a global/regional gravity anomaly field. This approach is applied to GRACE data obtained for the year 2005, and the resulting global/regional gravity models are presented and discussed.
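The first calibration step, estimating accelerometer scales and biases, can be illustrated with a per-axis least-squares fit (a schematic stand-in for the orbit-fitting procedure described in the record; the synthetic values are assumptions):

```python
# Per-axis scale/bias accelerometer calibration sketch. In the actual GRACE
# processing the reference accelerations come from fitting dynamic orbits to
# GPS-determined orbits; here a_ref is simply given, and all numbers in the
# demo are assumed for illustration.
import numpy as np

def fit_scale_bias(a_meas, a_ref):
    """Least-squares fit of a_ref ≈ scale * a_meas + bias for one axis."""
    A = np.column_stack([a_meas, np.ones_like(a_meas)])
    (scale, bias), *_ = np.linalg.lstsq(A, a_ref, rcond=None)
    return scale, bias
```

The fitted scale and bias are then applied to the raw accelerometer record before the range-rate data are inverted for the gravity anomaly field.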

  8. Regional Ocean Modeling System (ROMS): Samoa

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Regional Ocean Modeling System (ROMS) 7-day, 3-hourly forecast for the region surrounding the islands of Samoa at approximately 3-km resolution. While considerable...

  9. Regional Ocean Modeling System (ROMS): Guam

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Regional Ocean Modeling System (ROMS) 6-day, 3-hourly forecast for the region surrounding Guam at approximately 2-km resolution. While considerable effort has been...

  10. Regional Ocean Modeling System (ROMS): Oahu

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Regional Ocean Modeling System (ROMS) 7-day, 3-hourly forecast for the region surrounding the island of Oahu at approximately 1-km resolution. While considerable...

  11. Regional Ocean Modeling System (ROMS): CNMI

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Regional Ocean Modeling System (ROMS) 7-day, 3-hourly forecast for the region surrounding the Commonwealth of the Northern Mariana Islands (CNMI) at approximately...

  12. Spatial scale separation in regional climate modelling

    Energy Technology Data Exchange (ETDEWEB)

    Feser, F.

    2005-07-01

    In this thesis the concept of scale separation is introduced as a tool, first, for improving regional climate model simulations and, second, for explicitly detecting and describing the added value obtained by regional modelling. The basic idea is that global and regional climate models perform best at different spatial scales, so the regional model should not alter the global model's results at large scales. The concept of nudging of large scales, designed for this purpose, controls the large scales within the regional model domain and keeps them close to the global forcing model, while the regional scales are left unchanged. For ensemble simulations, nudging of large scales strongly reduces the divergence of the different simulations compared to the standard-approach ensemble, which occasionally shows large differences between individual realisations. For climate hindcasts this method leads to results which are on average closer to observed states than the standard approach. The analysis of regional climate model simulations can also be improved by separating the results into different spatial domains. This was done by developing and applying digital filters that perform the scale separation effectively without great computational effort. The separation of the results into different spatial scales simplifies model validation and process studies. The search for 'added value' can be conducted on the spatial scales the regional climate model was designed for, giving clearer results than the analysis of unfiltered meteorological fields. To examine the skill of the different simulations, pattern correlation coefficients were calculated between the global reanalyses, the regional climate model simulation and, as a reference, an operational regional weather analysis. The regional climate model simulation driven with large-scale constraints achieved a high increase in similarity to the operational analyses for medium-scale 2 meter
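The digital filtering and pattern correlation described in this record can be sketched with an FFT-based scale separation (illustrative; the thesis's actual filters and cutoff wavenumber are not reproduced here):

```python
# FFT-based spatial scale separation of a 2-D field into large- and
# small-scale parts, plus the anomaly pattern correlation used to compare
# fields. The sharp spectral cutoff k_cut is an illustrative assumption;
# the thesis uses purpose-built digital filters.
import numpy as np

def scale_separate(field, k_cut):
    """Return (large_scale, small_scale) parts of a 2-D field."""
    F = np.fft.fft2(field)
    ky = np.fft.fftfreq(field.shape[0])[:, None]   # cycles per grid point
    kx = np.fft.fftfreq(field.shape[1])[None, :]
    mask = np.sqrt(kx**2 + ky**2) <= k_cut         # keep low wavenumbers
    large = np.real(np.fft.ifft2(F * mask))
    return large, field - large

def pattern_correlation(a, b):
    """Anomaly pattern correlation between two fields."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum()))
```

Computing the pattern correlation separately on the large- and small-scale parts is what lets the added value of the regional model be assessed on the scales it was designed for.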

  13. Inverse modeling as a step in the calibration of the LBL-USGS site-scale model of Yucca Mountain

    International Nuclear Information System (INIS)

    Finsterle, S.; Bodvarsson, G.S.; Chen, G.

    1995-05-01

    Calibration of the LBL-USGS site-scale model of Yucca Mountain is initiated. Inverse modeling techniques are used to match the results of simplified submodels to the observed pressure, saturation, and temperature data. Hydrologic and thermal parameters are determined and compared to the values obtained from laboratory measurements and conventional field test analysis

  14. Calibration of Linked Hydrodynamic and Water Quality Model for Santa Margarita Lagoon

    Science.gov (United States)

    2016-07-01

    was used to drive the transport and water quality kinetics for the simulation of 2007–2009. The sand berm, which controlled the opening/closure of... (Technical Report 3015, July 2016: Calibration of Linked Hydrodynamic and Water Quality Model for Santa Margarita Lagoon, Final Report; Pei-Fang Wang, Chuck Katz, Ripan Barua, James...; SSC Pacific)

  15. The regression-calibration method for fitting generalized linear models with additive measurement error

    OpenAIRE

    James W. Hardin; Henrik Schmeidiche; Raymond J. Carroll

    2003-01-01

    This paper discusses and illustrates the method of regression calibration. This is a straightforward technique for fitting models with additive measurement error. We present this discussion in terms of generalized linear models (GLMs) following the notation defined in Hardin and Carroll (2003). Discussion will include specified measurement error, measurement error estimated by replicate error-prone proxies, and measurement error estimated by instrumental variables. The discussion focuses on s...
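The core of regression calibration is to replace the unobserved covariate by its best linear predictor given the error-prone proxies, and then fit the outcome model as usual. A minimal sketch for a linear model with two replicate proxies (the simulated data and all variable names are ours; the paper treats general GLMs):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(2.0, 1.0, n)                   # true covariate (unobserved)
y = 1.0 + 0.5 * x + rng.normal(0, 0.1, n)     # outcome, true slope 0.5

# Two replicate error-prone proxies W = X + U
w1 = x + rng.normal(0, 1.0, n)
w2 = x + rng.normal(0, 1.0, n)
wbar = (w1 + w2) / 2

def ols_slope(z, y):
    return np.cov(z, y)[0, 1] / np.var(z)

naive = ols_slope(wbar, y)                    # attenuated toward zero

# Regression calibration: estimate E[X | Wbar] from moments
s2u = np.mean((w1 - w2) ** 2) / 2             # measurement-error variance
mu = wbar.mean()
s2x = np.var(wbar) - s2u / 2                  # Var(X); Wbar carries error variance s2u/2
lam = s2x / (s2x + s2u / 2)                   # reliability ratio
x_hat = mu + lam * (wbar - mu)                # best linear predictor of X
rc = ols_slope(x_hat, y)                      # approximately unbiased
```

The naive slope is attenuated by the reliability ratio (here roughly 2/3), while the regression-calibration slope recovers the true value of 0.5.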

  16. Calibration and Validation of the Dynamic Wake Meandering Model for Implementation in an Aeroelastic Code

    DEFF Research Database (Denmark)

    Aagaard Madsen, Helge; Larsen, Gunner Chr.; Larsen, Torben J.

    2010-01-01

    in an aeroelastic model. Calibration and validation of the different parts of the model is carried out by comparisons with actuator disk and actuator line (ACL) computations as well as with inflow measurements on a full-scale 2 MW turbine. It is shown that the load generating part of the increased turbulence… Finally, added turbulence characteristics are compared with correlation results from literature. ©2010 American Society of Mechanical Engineers

  17. Improved method for calibration of exchange flows for a physical transport box model of Tampa Bay, FL USA

    Science.gov (United States)

    Results for both sequential and simultaneous calibration of exchange flows between segments of a 10-box, one-dimensional, well-mixed, bifurcated tidal mixing model for Tampa Bay are reported. Calibrations were conducted for three model options with different mathematical expressi...

  18. Feasibility of the use of optimisation techniques to calibrate the models used in a post-closure radiological assessment

    International Nuclear Information System (INIS)

    Laundy, R.S.

    1991-01-01

    This report addresses the feasibility of the use of optimisation techniques to calibrate the models developed for the impact assessment of a radioactive waste repository. The maximum likelihood method for improving parameter estimates is considered in detail, and non-linear optimisation techniques for finding solutions are reviewed. Applications are described for the calibration of groundwater flow, radionuclide transport and biosphere models. (author)
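The maximum likelihood approach can be sketched for a one-parameter toy model with Gaussian observation errors: the parameter is chosen to minimise the negative log-likelihood of the misfit between simulated and observed values. The model, data, and grid search below are illustrative assumptions, not the report's groundwater or transport models:

```python
import numpy as np

def model(t, k):
    """Toy 'groundwater' response: exponential head decline with rate k."""
    return 10.0 * np.exp(-k * t)

rng = np.random.default_rng(1)
t_obs = np.linspace(0, 5, 20)
h_obs = model(t_obs, 0.7) + rng.normal(0, 0.2, t_obs.size)  # synthetic observations

def neg_log_likelihood(k, sigma=0.2):
    """Gaussian error model: minimising this maximises the likelihood."""
    r = h_obs - model(t_obs, k)
    return 0.5 * np.sum((r / sigma) ** 2) + r.size * np.log(sigma)

# Calibrate by minimising the negative log-likelihood over a parameter grid
ks = np.linspace(0.1, 2.0, 1901)
k_hat = ks[np.argmin([neg_log_likelihood(k) for k in ks])]
```

In practice the report reviews non-linear optimisation techniques rather than a brute-force grid, but the objective being minimised is the same.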

  19. The dielectric calibration of capacitance probes for soil hydrology using an oscillation frequency response model

    Directory of Open Access Journals (Sweden)

    D. A. Robinson

    1998-01-01

    Full Text Available Capacitance probes are a fast, safe and relatively inexpensive means of measuring the relative permittivity of soils, which can then be used to estimate soil water content. Initial experiments with capacitance probes used empirical calibrations between the frequency response of the instrument and soil water content. This has the disadvantage that the calibrations are instrument-dependent. A twofold calibration strategy is described in this paper: the instrument frequency is turned into relative permittivity (dielectric constant), which can then be calibrated against soil water content. This approach has the advantage of making the second calibration, from soil permittivity to soil water content, instrument-independent, and allows comparison with other dielectric methods, such as time domain reflectometry. A physically based model, used to calibrate capacitance probes in terms of relative permittivity (εr), is presented. The model, which was developed from circuit analysis, successfully predicts the frequency response of the instrument in liquids with different relative permittivities, using only measurements in air and water. It was used successfully to calibrate 10 prototype surface capacitance insertion probes (SCIPs) and a depth capacitance probe. The findings demonstrate that the geometric properties of the instrument electrodes were an important parameter in the model, the value of which could be fixed through measurement. The relationship between apparent soil permittivity and volumetric water content has been the subject of much research in the last 30 years. Two lines of investigation have developed: time domain reflectometry (TDR) and capacitance. Both methods claim to measure relative permittivity and should therefore be comparable. This paper demonstrates that the IH capacitance probe overestimates relative permittivity as the ionic conductivity of the medium increases. Electrically conducting ionic solutions were used to test the
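The twofold strategy can be sketched in two functions: an instrument-specific step mapping oscillation frequency to permittivity, and an instrument-independent step mapping permittivity to water content, here via the widely used Topp et al. (1980) polynomial. The frequency model below is a generic LC-oscillator idealisation (f ∝ 1/√ε), not the paper's circuit-analysis model:

```python
def frequency_to_permittivity(f, f_air, f_water, eps_water=80.0):
    """Hypothetical normalised-frequency calibration: the probe is read in
    air (eps = 1) and in water, and intermediate readings are interpolated
    on a sqrt(eps) scale, since an LC oscillator has f ~ 1/sqrt(C) ~ 1/sqrt(eps).
    This is a common idealisation, not the paper's model."""
    s = (1 / f - 1 / f_air) / (1 / f_water - 1 / f_air)  # 0 in air, 1 in water
    sqrt_eps = 1.0 + s * (eps_water ** 0.5 - 1.0)
    return sqrt_eps ** 2

def topp_water_content(eps):
    """Topp et al. (1980): volumetric water content from apparent permittivity."""
    return -5.3e-2 + 2.92e-2 * eps - 5.5e-4 * eps ** 2 + 4.3e-6 * eps ** 3
```

Only the first function depends on the instrument; swapping probes means re-deriving `frequency_to_permittivity` while `topp_water_content` stays fixed, which is exactly the advantage the abstract describes.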

  20. ANN-based calibration model of FTIR used in transformer online monitoring

    Science.gov (United States)

    Li, Honglei; Liu, Xian-yong; Zhou, Fangjie; Tan, Kexiong

    2005-02-01

    Recently, chromatography columns and gas sensors have been used in online monitoring devices for dissolved gases in transformer oil. But some disadvantages still exist in these devices: consumption of carrier gas, requirement of calibration, etc. Since FTIR has high accuracy, consumes no carrier gas and requires no calibration, the researchers studied the application of FTIR in such monitoring devices. Experiments using the "flow gas method" were designed, and spectra of mixtures composed of different gases were collected with a BOMEM MB104 FTIR spectrometer. A key question in the application of FTIR is that the absorbance spectra of three key fault gases, C2H4, CH4 and C2H6, overlap seriously at 2700~3400 cm-1. Because the absorbance law is no longer appropriate there, a nonlinear calibration model based on a BP ANN was set up for the quantitative analysis. The peak-height absorbances of C2H4, CH4 and C2H6 were adopted as quantitative features, and all the data were normalized before training the ANN. Computational results show that the calibration model can effectively eliminate cross-interference in the measurement.
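A back-propagation network for this kind of nonlinear multi-gas calibration can be sketched with a single hidden layer trained on synthetic overlapped peak-height absorbances. The mixing matrix, saturation nonlinearity, network size, and learning rate below are invented for illustration; only the overall BP-ANN calibration idea follows the abstract:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic training data: 3 gas concentrations -> 3 overlapping peak-height
# absorbances (mixing matrix and tanh saturation are invented).
M = np.array([[1.0, 0.6, 0.3],
              [0.5, 1.0, 0.4],
              [0.2, 0.5, 1.0]])
C = rng.uniform(0, 1, (500, 3))          # normalised concentrations
A = np.tanh(C @ M.T)                     # nonlinear, overlapped absorbances

# One-hidden-layer BP network: absorbances -> concentrations
W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 3)); b2 = np.zeros(3)
lr = 0.05
losses = []
for epoch in range(3000):
    H = np.tanh(A @ W1 + b1)             # hidden layer
    P = H @ W2 + b2                      # predicted concentrations
    err = P - C
    losses.append(np.mean(err ** 2))
    # Backpropagation of the mean-squared-error gradient
    gP = 2 * err / len(C)
    gW2 = H.T @ gP; gb2 = gP.sum(0)
    gH = gP @ W2.T * (1 - H ** 2)        # tanh derivative
    gW1 = A.T @ gH; gb1 = gH.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
```

The training loss should fall by well over an order of magnitude, i.e. the network learns the inverse of the overlapped-absorbance mapping that a linear Beer–Lambert calibration cannot represent.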

  1. Calibration of a complex activated sludge model for the full-scale wastewater treatment plant.

    Science.gov (United States)

    Liwarska-Bizukojc, Ewa; Olejnik, Dorota; Biernacki, Rafal; Ledakowicz, Stanislaw

    2011-08-01

    In this study, the results of the calibration of the complex activated sludge model implemented in BioWin software for a full-scale wastewater treatment plant are presented. Within the calibration of the model, sensitivity analysis of its parameters and of the fractions of carbonaceous substrate was performed. In both the steady-state and dynamic calibrations, a satisfactory agreement between the measured and simulated values of the output variables was achieved. Sensitivity analysis based on the normalized sensitivity coefficient (S(i,j)) revealed that 17 (steady state) or 19 (dynamic conditions) kinetic and stoichiometric parameters are sensitive. Most of them are associated with growth and decay of ordinary heterotrophic organisms and phosphorus-accumulating organisms. The rankings of the ten most sensitive parameters, established from the mean square sensitivity measure (δ(msqr)j), indicate that irrespective of whether the steady-state or dynamic calibration was performed, the parameter sensitivities agree.

  2. Including sugar cane in the agro-ecosystem model ORCHIDEE-STICS: calibration and validation

    Science.gov (United States)

    Valade, A.; Vuichard, N.; Ciais, P.; Viovy, N.

    2011-12-01

    Sugarcane is currently the most efficient bioenergy crop with regard to the energy produced per hectare. With approximately half the global bioethanol production in 2005, and a devoted land area expected to expand globally in the years to come, sugarcane is at the heart of the biofuel debate. Dynamic global vegetation models coupled with agronomical models are powerful and novel tools to tackle many of the environmental issues related to biofuels, provided they are carefully calibrated and validated against field observations. Here we adapt the agro-terrestrial model ORCHIDEE-STICS for sugarcane simulations. Observations of LAI are used to evaluate the sensitivity of the model to parameters of nitrogen absorption and phenology, which are calibrated in a systematic way for six sites in Australia and La Reunion. We find that the optimal set of parameters is highly dependent on the sites' characteristics and that the model can satisfactorily reproduce the evolution of LAI. This careful calibration of ORCHIDEE-STICS for sugarcane biomass production at different locations and under different technical itineraries provides a strong basis for further analysis of the impacts of bioenergy-related land use change on carbon cycle budgets. As a next step, a sensitivity analysis is carried out to estimate the uncertainty of the model in biomass and carbon flux simulation due to its parameterization.

  3. Models for Sustainable Regional Development

    DEFF Research Database (Denmark)

    Rasmussen, Lauge Baungaard

    2008-01-01

    The chapter presents a model for integrated cross-cultural knowledge building and entrepreneurship. In addition, narrative and numeric simulation methods are suggested to promote further development and implementation of the model in China.

  4. Daily precipitation statistics in regional climate models

    DEFF Research Database (Denmark)

    Frei, Christoph; Christensen, Jens Hesselbjerg; Déqué, Michel

    2003-01-01

    An evaluation is undertaken of the statistics of daily precipitation as simulated by five regional climate models using comprehensive observations in the region of the European Alps. Four limited area models and one variable-resolution global model are considered, all with a grid spacing of 50 km...

  5. Modeling microelectrode biosensors: free-flow calibration can substantially underestimate tissue concentrations.

    Science.gov (United States)

    Newton, Adam J H; Wall, Mark J; Richardson, Magnus J E

    2017-03-01

    Microelectrode amperometric biosensors are widely used to measure concentrations of analytes in solution and tissue including acetylcholine, adenosine, glucose, and glutamate. A great deal of experimental and modeling effort has been directed at quantifying the response of the biosensors themselves; however, the influence that the macroscopic tissue environment has on biosensor response has not been subjected to the same level of scrutiny. Here we identify an important issue in the way microelectrode biosensors are calibrated that is likely to have led to underestimations of analyte tissue concentrations. Concentration in tissue is typically determined by comparing the biosensor signal to that measured in free-flow calibration conditions. In a free-flow environment the concentration of the analyte at the outer surface of the biosensor can be considered constant. However, in tissue the analyte reaches the biosensor surface by diffusion through the extracellular space. Because the enzymes in the biosensor break down the analyte, a density gradient is set up resulting in a significantly lower concentration of analyte near the biosensor surface. This effect is compounded by the diminished volume fraction (porosity) and reduction in the diffusion coefficient due to obstructions (tortuosity) in tissue. We demonstrate this effect through modeling and experimentally verify our predictions in diffusive environments. NEW & NOTEWORTHY Microelectrode biosensors are typically calibrated in a free-flow environment where the concentrations at the biosensor surface are constant. However, when in tissue, the analyte reaches the biosensor via diffusion and so analyte breakdown by the biosensor results in a concentration gradient and consequently a lower concentration around the biosensor. This effect means that naive free-flow calibration will underestimate tissue concentration. We develop mathematical models to better quantify the discrepancy between the calibration and tissue
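The discrepancy can already be seen in a steady-state film model: the enzymatic uptake flux must be supplied by diffusion, so the surface concentration falls below the bulk value, and the reduced porosity and increased tortuosity of tissue make the drop much larger. The parameter values and the first-order uptake coefficient below are rough illustrative assumptions, not the authors' fitted values:

```python
def surface_concentration(c_bulk, k_enz, L, D, porosity=1.0, tortuosity=1.0):
    """Steady-state analyte concentration at the biosensor surface when the
    uptake flux k_enz * C_s is fed by diffusion across a layer of thickness L.
    Balancing flux: k_enz * C_s = D_eff * (c_bulk - C_s) / L.
    A deliberately simplified 1-D film model, not the authors' full simulation."""
    D_eff = D * porosity / tortuosity ** 2
    return c_bulk / (1.0 + k_enz * L / D_eff)

D = 7.6e-10   # free-solution diffusion coefficient, m^2/s (order of magnitude)

# Same bulk concentration and sensor, in free solution vs brain-like tissue
c_solution = surface_concentration(1.0, k_enz=1e-5, L=50e-6, D=D)
c_tissue = surface_concentration(1.0, k_enz=1e-5, L=50e-6, D=D,
                                 porosity=0.2, tortuosity=1.6)
underestimate = c_tissue / c_solution   # signal ratio for identical bulk levels
```

With these numbers the tissue surface concentration is several-fold lower than in solution, so dividing the tissue signal by a free-flow calibration factor underestimates the true tissue concentration by the same factor.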

  6. Field Measurement and Calibration of HDM-4 Fuel Consumption Model on Interstate Highway in Florida

    Directory of Open Access Journals (Sweden)

    Xin Jiao

    2015-03-01

    Full Text Available Fuel consumption was measured by operating a passenger car and a tractor-trailer on two interstate roadway sites in Florida. Each site contains flexible pavement and rigid pavement with similar pavement, traffic, and environmental conditions. The field tests reveal that the average fuel consumption differences between vehicles operating on flexible pavement and rigid pavement under the given test conditions are 4.04% for the tractor-trailer and 2.50% for the passenger car, with a fuel saving on rigid pavement. The fuel consumption differences are statistically significant at the 95% confidence level for both vehicle types. The test data are then used to calibrate the Highway Development and Management IV (HDM-4) fuel consumption model, and model coefficients are obtained for three sets of observations. Field measurements and predictions by the calibrated model show generally good agreement. Nevertheless, verification and adjustment with more experiments or data sources would be expected in future studies.

  7. Calibrated prediction of Pine Island Glacier retreat during the 21st and 22nd centuries with a coupled flowline model

    Science.gov (United States)

    Gladstone, Rupert M.; Lee, Victoria; Rougier, Jonathan; Payne, Antony J.; Hellmer, Hartmut; Le Brocq, Anne; Shepherd, Andrew; Edwards, Tamsin L.; Gregory, Jonathan; Cornford, Stephen L.

    2012-06-01

    A flowline ice sheet model is coupled to a box model for cavity circulation and configured for Pine Island Glacier. An ensemble of 5000 simulations is carried out from 1900 to 2200 with varying inputs and parameters, forced by ocean temperatures predicted by a regional ocean model under the A1B ‘business as usual’ emissions scenario. Comparison is made against recent observations to provide a calibrated prediction in the form of a 95% confidence set. Predictions are for monotonic retreat of the grounding line (apart from some small-scale fluctuations in a minority of cases) over the next 200 yr, with huge uncertainty in the rate of retreat. Full collapse of the main trunk of the PIG during the 22nd century remains a possibility.

  8. Improving plasma shaping accuracy through consolidation of control model maintenance, diagnostic calibration, and hardware change control

    International Nuclear Information System (INIS)

    Baggest, D.S.; Rothweil, D.A.; Pang, S.

    1995-12-01

    With the advent of more sophisticated techniques for control of tokamak plasmas comes the requirement for increasingly more accurate models of plasma processes and tokamak systems. Development of accurate models for DIII-D power systems, vessel, and poloidal coils is already complete, while work continues in development of general plasma response modeling techniques. Increased accuracy in estimates of parameters to be controlled is also required. It is important to ensure that errors in supporting systems such as diagnostic and command circuits do not limit the accuracy of plasma parameter estimates or inhibit the ability to derive accurate plasma/tokamak system models. To address this issue, we have developed more formal power systems change control and power system/magnetic diagnostics calibration procedures. This paper discusses our approach to consolidating the tasks in these closely related areas. This includes, for example, defining criteria for when diagnostics should be re-calibrated along with required calibration tolerances, and implementing methods for tracking power systems hardware modifications and the resultant changes to control models

  9. A conceptual precipitation-runoff modeling suite: Model selection, calibration and predictive uncertainty assessment

    Science.gov (United States)

    Tyler Jon Smith

    2008-01-01

    In Montana and much of the Rocky Mountain West, the single most important parameter in forecasting the controls on regional water resources is snowpack. Despite the heightened importance of snowpack, few studies have considered the representation of uncertainty in coupled snowmelt/hydrologic conceptual models. Uncertainty estimation provides a direct interpretation of...

  10. Three-dimensional DFN Model Development and Calibration: A Case Study for Pahute Mesa, Nevada National Security Site

    Science.gov (United States)

    Pham, H. V.; Parashar, R.; Sund, N. L.; Pohlmann, K.

    2017-12-01

    Pahute Mesa, located in the north-western region of the Nevada National Security Site, is an area where numerous underground nuclear tests were conducted. The mesa contains several fractured aquifers that can potentially provide high-permeability pathways for migration of radionuclides away from testing locations. The BULLION Forced-Gradient Experiment (FGE) conducted on Pahute Mesa injected and pumped solute and colloid tracers through a system of three wells to obtain site-specific information about the transport of radionuclides in fractured rock aquifers. This study aims to develop reliable three-dimensional discrete fracture network (DFN) models to simulate the BULLION FGE as a means of computing realistic ranges of important parameters describing fractured rock. Multiple conceptual DFN models were developed using dfnWorks, a parallelized computational suite developed by Los Alamos National Laboratory, to simulate flow and conservative particle movement in subsurface fractured rocks downgradient from the BULLION test. The model domain is 100 × 200 × 100 m and includes the three tracer-test wells of the BULLION FGE and the Pahute Mesa Lava-flow aquifer. The model scenarios considered differ from each other in terms of boundary conditions and fracture density. For each conceptual model, a number of statistically equivalent fracture network realizations were generated using data from fracture characterization studies. We adopt the covariance matrix adaptation evolution strategy (CMA-ES), a global stochastic derivative-free optimization method, to calibrate the DFN models using groundwater levels and tracer breakthrough data obtained from the three wells. Models of fracture apertures based on fracture type and size are proposed, and the aperture values in each model are estimated during model calibration. 
The ranges of fracture aperture values resulting from this study are expected to enhance understanding of radionuclide transport in fractured rocks and
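CMA-ES adapts a full covariance matrix of its search distribution; the calibration loop can still be conveyed with a stripped-down (mu/mu, lambda) evolution strategy using a decaying isotropic step size. The objective and "target" parameters below are invented; this is not the dfnWorks/CMA-ES implementation:

```python
import numpy as np

rng = np.random.default_rng(3)

def misfit(p):
    """Toy objective standing in for the mismatch between simulated and
    observed heads/breakthrough curves (target values are invented,
    e.g. log-apertures of three fracture sets)."""
    target = np.array([2.0, -1.0, 0.5])
    return np.sum((p - target) ** 2)

# (mu/mu, lambda) evolution strategy with a cooling step size -- a
# deliberately simplified stand-in for CMA-ES (no covariance adaptation).
mean, sigma, lam, mu = np.zeros(3), 1.0, 12, 3
for gen in range(80):
    pop = mean + sigma * rng.normal(size=(lam, mean.size))  # sample candidates
    order = np.argsort([misfit(p) for p in pop])
    mean = pop[order[:mu]].mean(axis=0)   # recombine the mu best samples
    sigma *= 0.95                         # shrink the search radius

calibrated = mean
```

Real CMA-ES replaces the fixed cooling schedule with cumulative step-size and covariance adaptation, which is what makes it robust on ill-conditioned, derivative-free calibration problems like aperture estimation.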

  11. Generation of Natural Runoff Monthly Series at Ungauged Sites Using a Regional Regressive Model

    Directory of Open Access Journals (Sweden)

    Dario Pumo

    2016-05-01

    Full Text Available Many hydrologic applications require reliable estimates of runoff in river basins to face the widespread lack of data, both in time and in space. A regional method for the reconstruction of monthly runoff series is here developed and applied to Sicily (Italy). A simple modeling structure is adopted, consisting of a regression-based rainfall–runoff model with four model parameters, calibrated through a two-step procedure. Monthly runoff estimates are based on precipitation and temperature, and exploit the autocorrelation with runoff in the previous month. Model parameters are assessed by specific regional equations as a function of easily measurable physical and climate basin descriptors. The first calibration step identifies, for each single basin, the set of parameters optimizing model performance. Such “optimal” sets are used in the second step, a regional regression analysis, to establish the regional equations for assessing the model parameters as a function of basin attributes. All the gauged watersheds across the region were analyzed, selecting 53 basins for model calibration and using the other six basins exclusively for validation. Performance, quantitatively evaluated by different statistical indices, demonstrates the model's ability to reproduce the observed hydrological time series at monthly and coarser time resolutions. The methodology, which is easily transferable to other arid and semi-arid areas, provides a reliable tool for filling/reconstructing runoff time series at any gauged or ungauged basin of a region.
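The two-step procedure can be sketched end-to-end on synthetic data: step 1 fits a four-parameter monthly model per gauged basin by least squares, and step 2 regresses the fitted parameters on basin descriptors so that a parameter set can be predicted for an ungauged basin. The model form Q_t = a·P_t + b·T_t + c·Q_{t-1} + d and the single area descriptor are our simplifying assumptions, not the paper's exact equations:

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate(P, T, q0, a, b, c, d):
    """Monthly runoff recursion: Q_t = a*P_t + b*T_t + c*Q_{t-1} + d."""
    Q = [q0]
    for t in range(1, len(P)):
        Q.append(a * P[t] + b * T[t] + c * Q[-1] + d)
    return np.array(Q)

def fit_basin(P, T, Q):
    """Step 1: per-basin least-squares estimate of (a, b, c, d)."""
    X = np.column_stack([P[1:], T[1:], Q[:-1], np.ones(len(P) - 1)])
    return np.linalg.lstsq(X, Q[1:], rcond=None)[0]

# Synthetic gauged basins whose parameter 'a' depends on a basin descriptor
n_basins, n_months = 12, 240
area = rng.uniform(50, 500, n_basins)                  # descriptor (km^2)
months = np.arange(n_months)
params = np.empty((n_basins, 4))
for i in range(n_basins):
    a_true = 0.2 + 0.001 * area[i]
    P = rng.gamma(2.0, 40.0, n_months)                 # precipitation (mm)
    T = 15 + 10 * np.sin(2 * np.pi * months / 12)      # temperature (deg C)
    Q = simulate(P, T, 20.0, a_true, -0.5, 0.3, 5.0)
    params[i] = fit_basin(P, T, Q + rng.normal(0, 1.0, n_months))

# Step 2: regional regression of each parameter on the descriptor
X = np.column_stack([area, np.ones(n_basins)])
coef = np.linalg.lstsq(X, params, rcond=None)[0]       # shape (2, 4)

# Predicted parameter set for an ungauged 300 km^2 basin
a_hat, b_hat, c_hat, d_hat = np.array([300.0, 1.0]) @ coef
```

The regional regression recovers the descriptor-dependent parameter (here a ≈ 0.5 for 300 km²) without any runoff record at the target basin, which is the point of the method.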

  12. Optical modeling and polarization calibration for CMB measurements with ACTPol and Advanced ACTPol

    Science.gov (United States)

    Koopman, Brian; Austermann, Jason; Cho, Hsiao-Mei; Coughlin, Kevin P.; Duff, Shannon M.; Gallardo, Patricio A.; Hasselfield, Matthew; Henderson, Shawn W.; Ho, Shuay-Pwu Patty; Hubmayr, Johannes; Irwin, Kent D.; Li, Dale; McMahon, Jeff; Nati, Federico; Niemack, Michael D.; Newburgh, Laura; Page, Lyman A.; Salatino, Maria; Schillaci, Alessandro; Schmitt, Benjamin L.; Simon, Sara M.; Vavagiakis, Eve M.; Ward, Jonathan T.; Wollack, Edward J.

    2016-07-01

    The Atacama Cosmology Telescope Polarimeter (ACTPol) is a polarization-sensitive upgrade to the Atacama Cosmology Telescope, located at an elevation of 5190 m on Cerro Toco in Chile. ACTPol uses transition edge sensor bolometers coupled to orthomode transducers to measure both the temperature and polarization of the Cosmic Microwave Background (CMB). Calibration of the detector angles is a critical step in producing polarization maps of the CMB. Polarization angle offsets in the detector calibration can cause leakage in polarization from E to B modes and induce a spurious signal in the EB and TB cross-correlations, which eliminates our ability to measure potential cosmological sources of EB and TB signals, such as cosmic birefringence. We calibrate the ACTPol detector angles by ray tracing the designed detector angle through the entire optical chain to determine the projection of each detector angle on the sky. The distribution of calibrated detector polarization angles is consistent with a global offset angle of zero when compared to the EB-nulling offset angle, the angle required to null the EB cross-correlation power spectrum. We present the optical modeling process. The detector angles can be cross-checked through observations of known polarized sources, whether a galactic source or a laboratory reference standard. To cross-check the ACTPol detector angles, we use a thin-film polarization grid placed in front of the receiver of the telescope, between the receiver and the secondary reflector. Making use of a rapidly rotating half-wave plate (HWP) mount, we spin the polarizing grid at a constant speed, polarizing and rotating the incoming atmospheric signal. The resulting sinusoidal signal is used to determine the detector angles. The optical modeling calibration was shown to be consistent with a global offset angle of zero when compared to EB nulling in the first ACTPol results and will continue to be a part of our calibration implementation. 
The first

  13. Ultrasound data for laboratory calibration of an analytical model to calculate crack depth on asphalt pavements

    Directory of Open Access Journals (Sweden)

    Miguel A. Franesqui

    2017-08-01

    Full Text Available This article outlines the ultrasound data employed to calibrate in the laboratory an analytical model that permits the calculation of the depth of partial-depth surface-initiated cracks on bituminous pavements using this non-destructive technique. This initial calibration is required so that the model provides sufficient precision during practical application. The ultrasonic pulse transit times were measured on beam samples of different asphalt mixtures (semi-dense asphalt concrete AC-S; asphalt concrete for very thin layers BBTM; and porous asphalt PA). The cracks on the laboratory samples were simulated by means of notches of variable depths. With the data of ultrasound transmission time ratios, curve-fittings were carried out on the analytical model, thus determining the regression parameters and their statistical dispersion. The calibrated models obtained from laboratory datasets were subsequently applied to auscultate the evolution of the crack depth after microwave exposure in the research article entitled “Top-down cracking self-healing of asphalt pavements with steel filler from industrial waste applying microwaves” (Franesqui et al., 2017) [1].

  14. Ultrasound data for laboratory calibration of an analytical model to calculate crack depth on asphalt pavements.

    Science.gov (United States)

    Franesqui, Miguel A; Yepes, Jorge; García-González, Cándida

    2017-08-01

    This article outlines the ultrasound data employed to calibrate in the laboratory an analytical model that permits the calculation of the depth of partial-depth surface-initiated cracks on bituminous pavements using this non-destructive technique. This initial calibration is required so that the model provides sufficient precision during practical application. The ultrasonic pulse transit times were measured on beam samples of different asphalt mixtures (semi-dense asphalt concrete AC-S; asphalt concrete for very thin layers BBTM; and porous asphalt PA). The cracks on the laboratory samples were simulated by means of notches of variable depths. With the data of ultrasound transmission time ratios, curve-fittings were carried out on the analytical model, thus determining the regression parameters and their statistical dispersion. The calibrated models obtained from laboratory datasets were subsequently applied to auscultate the evolution of the crack depth after microwaves exposure in the research article entitled "Top-down cracking self-healing of asphalt pavements with steel filler from industrial waste applying microwaves" (Franesqui et al., 2017) [1].
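A curve-fitting step of this kind can be sketched with invented laboratory-style data: notch depths versus transit-time ratios are fitted to a hypothetical two-parameter model, which is then inverted to estimate crack depth from a measured ratio. Neither the data nor the model form is taken from the article:

```python
import numpy as np

# Laboratory-style data (invented for illustration): notch depth (mm) and
# ratio of transit time across the notch to that of intact material.
depth = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
t_ratio = np.array([1.00, 1.12, 1.27, 1.46, 1.70])

# Hypothetical two-parameter model t_ratio = 1 + alpha * depth**beta,
# fitted by linear least squares in log-log space (not the article's model).
mask = depth > 0
ly = np.log(t_ratio[mask] - 1.0)
lx = np.log(depth[mask])
beta, lalpha = np.polyfit(lx, ly, 1)   # slope, intercept
alpha = np.exp(lalpha)

def crack_depth(ratio):
    """Invert the calibrated model: estimate depth from a measured ratio."""
    return ((ratio - 1.0) / alpha) ** (1.0 / beta)
```

Once `alpha` and `beta` (with their statistical dispersion, in the article's case) are fixed in the laboratory, `crack_depth` can be applied to field transit-time measurements.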

  15. Usability of Calibrating Monitor for Soft Proof According to CIE CAM02 Colour Appearance Model

    Directory of Open Access Journals (Sweden)

    Dragoljub Novakovic

    2010-06-01

    Full Text Available Colour appearance models describe viewing conditions and enable simulating the appearance of colours under different illuminants and illumination levels according to human perception. Since it is possible to predict how a colour will look when different illuminants are used, colour appearance models are incorporated in some monitor profiling software. Owing to such software, the tone reproduction curve can be defined by taking into consideration the viewing conditions in which the display is observed. In this work, the use of the CIECAM02 colour appearance model in calibrating an LCD monitor for soft proofing was assessed, in order to determine which tone reproduction curve enables better colour reproduction. The luminance level was kept constant, whereas tone reproduction curves determined by gamma values and by parameters of the CIECAM02 model were varied. Testing was conducted for the case where the physical print reference is observed under the illuminant whose colour temperature follows the ISO standard for soft proofing (D50) and also under illuminant D65. Based on the results of the calibration assessment, subjective and objective assessment of the created profiles, and a perceptual test carried out with human observers, differences in image display were characterised and conclusions were reached on the adequacy of CIECAM02 for monitor calibration under each of the viewing conditions.

  16. Calibration plots for risk prediction models in the presence of competing risks.

    Science.gov (United States)

    Gerds, Thomas A; Andersen, Per K; Kattan, Michael W

    2014-08-15

    A predicted risk of 17% can be called reliable if the event can be expected to occur in about 17 of 100 patients who all received a predicted risk of 17%. Statistical models can predict the absolute risk of an event such as cardiovascular death in the presence of competing risks such as death due to other causes. For personalized medicine and patient counseling, it is necessary to check that the model is calibrated in the sense that it provides reliable predictions for all subjects. There are three often encountered practical problems when the aim is to display or test whether a risk prediction model is well calibrated. The first is lack of independent validation data, the second is right censoring, and the third is that when the risk scale is continuous, the estimation problem is as difficult as density estimation. We propose to estimate calibration curves for competing risks models based on jackknife pseudo-values, combined with a nearest-neighborhood smoother and a cross-validation approach, to deal with all three problems. Copyright © 2014 John Wiley & Sons, Ltd.
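The pseudo-value construction can be sketched in the uncensored case, where the jackknife pseudo-values reduce exactly to the event indicators and the nearest-neighbour smoother recovers the calibration curve. The simulated data and neighbourhood size are our own choices; handling censoring requires replacing the plain mean by e.g. an Aalen-Johansen estimate, as in the paper:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000
pred = rng.uniform(0.05, 0.6, n)        # model-predicted risk of the event by time t
event = rng.uniform(size=n) < pred      # simulate a perfectly calibrated model

# Jackknife pseudo-values for the event probability at time t:
# pseudo_i = n*theta - (n-1)*theta_without_i
theta = event.mean()
theta_loo = (event.sum() - event) / (n - 1)   # leave-one-out estimates
pseudo = n * theta - (n - 1) * theta_loo      # equals the indicator when uncensored

def calibration_curve(grid, pred, pseudo, k=200):
    """Nearest-neighbour smoother: local mean of pseudo-values around each
    predicted-risk value on the grid."""
    return np.array([pseudo[np.argsort(np.abs(pred - p))[:k]].mean()
                     for p in grid])

grid = np.linspace(0.1, 0.5, 9)
curve = calibration_curve(grid, pred, pseudo)   # should hug the diagonal
```

For a well-calibrated model the smoothed pseudo-values track the 45-degree line; systematic departures above or below it expose miscalibrated risk ranges.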

  17. Multi-model analysis of terrestrial carbon cycles in Japan: limitations and implications of model calibration using eddy flux observations

    Directory of Open Access Journals (Sweden)

    K. Ichii

    2010-07-01

    Full Text Available Terrestrial biosphere models show large differences when simulating carbon and water cycles, and reducing these differences is a priority for developing more accurate estimates of the condition of terrestrial ecosystems and future climate change. To reduce uncertainties and improve the understanding of their carbon budgets, we investigated the utility of the eddy flux datasets to improve model simulations and reduce variabilities among multi-model outputs of terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine – based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four eddy flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and a modified model (based on model parameter tuning using eddy flux data). Generally, models using default model settings showed large deviations in model outputs from observation with large model-by-model variability. However, after we calibrated the model parameters using eddy flux data (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs. This study demonstrated that careful validation and calibration of models with available eddy flux data reduced model-by-model differences. Yet, site history, analysis of model structure changes, and more objective procedure of model calibration should be included in the further analysis.

  18. Multi-model analysis of terrestrial carbon cycles in Japan: limitations and implications of model calibration using eddy flux observations

    Science.gov (United States)

    Ichii, K.; Suzuki, T.; Kato, T.; Ito, A.; Hajima, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.; Ohtani, Y.; Takagi, K.

    2010-07-01

    Terrestrial biosphere models show large differences when simulating carbon and water cycles, and reducing these differences is a priority for developing more accurate estimates of the condition of terrestrial ecosystems and future climate change. To reduce uncertainties and improve the understanding of their carbon budgets, we investigated the utility of eddy flux datasets to improve model simulations and reduce variability among multi-model outputs of terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine-based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four eddy flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and a modified model (based on model parameter tuning using eddy flux data). Generally, models using default settings showed large deviations in model outputs from observation, with large model-by-model variability. However, after we calibrated the model parameters using eddy flux data (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs. This study demonstrated that careful validation and calibration of models with available eddy flux data reduced model-by-model differences. Yet site history, analysis of model structure changes, and a more objective model calibration procedure should be included in further analysis.

  19. MOESHA: A genetic algorithm for automatic calibration and estimation of parameter uncertainty and sensitivity of hydrologic models

    Science.gov (United States)

    Characterization of uncertainty and sensitivity of model parameters is an essential and often overlooked facet of hydrological modeling. This paper introduces an algorithm called MOESHA that combines input parameter sensitivity analyses with a genetic algorithm calibration routin...

  20. Multi-site calibration, validation, and sensitivity analysis of the MIKE SHE Model for a large watershed in northern China

    Directory of Open Access Journals (Sweden)

    S. Wang

    2012-12-01

    Full Text Available Model calibration is essential for hydrologic modeling of large watersheds in a heterogeneous mountain environment. Little guidance is available for model calibration protocols for distributed models that aim at capturing the spatial variability of hydrologic processes. This study used the physically-based distributed hydrologic model, MIKE SHE, to contrast a lumped calibration protocol that used streamflow measured at one single watershed outlet to a multi-site calibration method which employed streamflow measurements at three stations within the large Chaohe River basin in northern China. Simulation results showed that the single-site calibrated model was able to sufficiently simulate the hydrographs for two of the three stations (Nash-Sutcliffe coefficient of 0.65–0.75, and correlation coefficient 0.81–0.87 during the testing period), but the model performed poorly for the third station (Nash-Sutcliffe coefficient only 0.44). Sensitivity analysis suggested that streamflow of the upstream area of the watershed was dominated by slow groundwater, whilst streamflow of the middle and downstream areas was dominated by relatively quick interflow. Therefore, a multi-site calibration protocol was deemed necessary. Due to the potential errors and uncertainties with respect to the representation of spatial variability, performance measures from the multi-site calibration protocol slightly decreased for two of the three stations, whereas they improved greatly for the third station. We concluded that the multi-site calibration protocol reached a compromise in terms of model performance for the three stations, reasonably representing the hydrographs of all three stations with Nash-Sutcliffe coefficients ranging from 0.59 to 0.72. The multi-site calibration protocol generally has advantages over the single-site calibration protocol.
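The Nash-Sutcliffe coefficients quoted above (0.65–0.75 for the well-simulated stations, 0.44 for the poorly simulated one) follow the standard definition: one minus the ratio of squared simulation error to the variance of the observations. A generic sketch in Python, not code from the study:

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / total sum of squares of obs.
    1.0 is a perfect fit; 0.0 means no better than the observed mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / ss_tot

obs = [1.0, 2.0, 3.0, 4.0, 5.0]
print(nash_sutcliffe(obs, obs))                         # perfect fit -> 1.0
print(nash_sutcliffe(obs, [1.1, 2.1, 2.8, 4.2, 4.9]))   # close fit, just under 1
```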

  1. Calibration model maintenance in melamine resin production: Integrating drift detection, smart sample selection and model adaptation.

    Science.gov (United States)

    Nikzad-Langerodi, Ramin; Lughofer, Edwin; Cernuda, Carlos; Reischer, Thomas; Kantner, Wolfgang; Pawliczek, Marcin; Brandstetter, Markus

    2018-07-12

    The physico-chemical properties of Melamine Formaldehyde (MF) based thermosets are largely influenced by the degree of polymerization (DP) in the underlying resin. On-line supervision of the turbidity point by means of vibrational spectroscopy has recently emerged as a promising technique to monitor the DP of MF resins. However, spectroscopic determination of the DP relies on chemometric models, which are usually sensitive to drifts caused by instrumental and/or sample-associated changes occurring over time. In order to detect the time point when drifts start causing prediction bias, we here explore a universal drift detector based on a faded version of the Page-Hinkley (PH) statistic, which we test on three data streams from an industrial MF resin production process. We employ committee disagreement (CD), computed as the variance of model predictions from an ensemble of partial least squares (PLS) models, as a measure of sample-wise prediction uncertainty and use the PH statistic to detect changes in this quantity. We further explore supervised and unsupervised strategies for (semi-)automatic model adaptation upon detection of a drift. For the former, manual reference measurements are requested whenever statistical thresholds on Hotelling's T² and/or Q-residuals are violated. Models are subsequently re-calibrated using weighted partial least squares in order to increase the influence of newer samples, which increases the flexibility when adapting to new (drifted) states. Unsupervised model adaptation is carried out exploiting the dual antecedent-consequent structure of a recently developed fuzzy systems variant of PLS termed FLEXFIS-PLS. In particular, antecedent parts are updated while maintaining the internal structure of the local linear predictors (i.e. the consequents). We found improved drift detection capability of the CD compared to Hotelling's T² and Q-residuals when used in combination with the proposed PH test. Furthermore, we found that active
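For readers unfamiliar with the Page-Hinkley statistic named above: it accumulates deviations of a stream (here, the committee disagreement) from its running mean and raises an alarm when the accumulated deviation rises too far above its historical minimum. The sketch below implements the classic (non-faded) form in plain Python; `delta` and `lam` are illustrative tolerances, not values from the paper.

```python
def page_hinkley(stream, delta=0.005, lam=1.0):
    """Classic Page-Hinkley test for an upward shift in the mean of a
    stream. Returns the index at which a drift alarm fires, or None.
    The paper uses a *faded* (forgetting-factor) variant of this statistic."""
    total, count, cum, cum_min = 0.0, 0, 0.0, 0.0
    for t, x in enumerate(stream):
        total += x
        count += 1
        mean = total / count
        cum += x - mean - delta        # cumulative deviation from running mean
        cum_min = min(cum_min, cum)
        if cum - cum_min > lam:        # alarm when deviation exceeds threshold
            return t
    return None

calm = [0.1, 0.11, 0.09, 0.1, 0.1]
drift = calm + [0.5, 0.6, 0.7, 0.8, 0.9]  # prediction uncertainty jumps at index 5
print(page_hinkley(calm))    # -> None
print(page_hinkley(drift))   # -> 7 (alarm shortly after the jump)
```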

  2. Calibration of the k-ɛ model constants for use in CFD applications

    Science.gov (United States)

    Glover, Nina; Guillias, Serge; Malki-Epshtein, Liora

    2011-11-01

    The k-ɛ turbulence model is a popular choice in CFD modelling due to its robust nature and the fact that it has been well validated. However, it has been noted in previous research that the k-ɛ model has problems predicting flow separation as well as unconfined and transient flows. The model contains five empirical model constants whose values were found through data fitting for a wide range of flows (Launder 1972), but ad-hoc adjustments are often made to these values depending on the situation being modelled. Here we use the example of flow within a regular street canyon to perform a Bayesian calibration of the model constants against wind tunnel data. This allows us to assess the sensitivity of the CFD model to changes in these constants, find the most suitable values for the constants, and quantify the uncertainty related to the constants and the CFD model as a whole.

  3. (Pre-) calibration of a Reduced Complexity Model of the Antarctic Contribution to Sea-level Changes

    Science.gov (United States)

    Ruckert, K. L.; Guan, Y.; Shaffer, G.; Forest, C. E.; Keller, K.

    2015-12-01

    Understanding and projecting future sea-level changes poses nontrivial challenges. Sea-level changes are driven primarily by changes in the density of seawater as well as changes in the size of glaciers and ice sheets. Previous studies have demonstrated that a key source of uncertainties surrounding sea-level projections is the response of the Antarctic ice sheet to warming temperatures. Here we calibrate a previously published and relatively simple model of the Antarctic ice sheet over a hindcast period from the last interglacial period to the present. We apply and compare a range of (pre-) calibration methods, including a Bayesian approach that accounts for heteroskedasticity. We compare the model hindcasts and projections for different levels of model complexity and calibration methods. We compare the projections with the upper bounds from previous studies and find our projections have a narrower range in 2100. Furthermore, we discuss the implications for the design of climate risk management strategies.

  4. Evaluation of Hydrologic Simulations Developed Using Multi-Model Synthesis and Remotely-Sensed Data within a Portfolio of Calibration Strategies

    Science.gov (United States)

    Lafontaine, J.; Hay, L.; Markstrom, S. L.

    2016-12-01

    The United States Geological Survey (USGS) has developed a National Hydrologic Model (NHM) to support coordinated, comprehensive and consistent hydrologic model development, and facilitate the application of hydrologic simulations within the conterminous United States (CONUS). As many stream reaches in the CONUS are either not gaged, or are substantially impacted by water use or flow regulation, ancillary information must be used to determine reasonable parameter estimations for streamflow simulations. Hydrologic models for 1,576 gaged watersheds across the CONUS were developed to test the feasibility of improving streamflow simulations by linking physically-based hydrologic models with remotely-sensed data products (i.e. snow water equivalent). Initially, the physically-based models were calibrated to measured streamflow data to provide a baseline for comparison across multiple calibration strategy tests. In addition, not all ancillary datasets are appropriate for application to all parts of the CONUS (e.g. snow water equivalent in the southeastern U.S., where snow is a rarity). As it is not expected that any one data product or model simulation will be sufficient for representing hydrologic behavior across the entire CONUS, a systematic evaluation of which data products improve hydrologic simulations for various regions across the CONUS was performed. The resulting portfolio of calibration strategies can be used to guide selection of an appropriate combination of modeled and measured information for hydrologic model development and calibration. In addition, these calibration strategies have been developed to be flexible so that new data products can be assimilated. This analysis provides a foundation to understand how well models work when sufficient streamflow data are not available and could be used to further inform hydrologic model parameter development for ungaged areas.

  5. Using Bayesian Model Averaging (BMA) to calibrate probabilistic surface temperature forecasts over Iran

    Energy Technology Data Exchange (ETDEWEB)

    Soltanzadeh, I. [Tehran Univ. (Iran, Islamic Republic of). Inst. of Geophysics; Azadi, M.; Vakili, G.A. [Atmospheric Science and Meteorological Research Center (ASMERC), Teheran (Iran, Islamic Republic of)

    2011-07-01

    Using Bayesian Model Averaging (BMA), an attempt was made to obtain calibrated probabilistic numerical forecasts of 2-m temperature over Iran. The ensemble employs three limited area models (WRF, MM5 and HRM), with WRF used with five different configurations. Initial and boundary conditions for MM5 and WRF are obtained from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS), and for HRM the initial and boundary conditions come from the analysis of the Global Model Europe (GME) of the German Weather Service. The resulting ensemble of seven members was run for a period of 6 months (from December 2008 to May 2009) over Iran. The 48-h raw ensemble outputs were calibrated using the BMA technique for 120 days, using a 40-day training sample of forecasts and the corresponding verification data. The calibrated probabilistic forecasts were assessed using rank histograms and attribute diagrams. Results showed that application of BMA improved the reliability of the raw ensemble. Using the weighted ensemble mean forecast as a deterministic forecast, it was found that the deterministic-style BMA forecasts usually performed better than the best member's deterministic forecast. (orig.)

  6. Using Bayesian Model Averaging (BMA to calibrate probabilistic surface temperature forecasts over Iran

    Directory of Open Access Journals (Sweden)

    I. Soltanzadeh

    2011-07-01

    Full Text Available Using Bayesian Model Averaging (BMA), an attempt was made to obtain calibrated probabilistic numerical forecasts of 2-m temperature over Iran. The ensemble employs three limited area models (WRF, MM5 and HRM), with WRF used with five different configurations. Initial and boundary conditions for MM5 and WRF are obtained from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS), and for HRM the initial and boundary conditions come from the analysis of the Global Model Europe (GME) of the German Weather Service. The resulting ensemble of seven members was run for a period of 6 months (from December 2008 to May 2009) over Iran. The 48-h raw ensemble outputs were calibrated using the BMA technique for 120 days, using a 40-day training sample of forecasts and the corresponding verification data. The calibrated probabilistic forecasts were assessed using rank histograms and attribute diagrams. Results showed that application of BMA improved the reliability of the raw ensemble. Using the weighted ensemble mean forecast as a deterministic forecast, it was found that the deterministic-style BMA forecasts usually performed better than the best member's deterministic forecast.
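The BMA predictive distribution underlying the two records above is a weighted mixture of densities, each centred on one (bias-corrected) ensemble member forecast. A minimal Gaussian sketch in plain Python; the member forecasts, weights and spread below are hypothetical, and in practice the weights and sigma are fitted by EM on the training sample:

```python
import math

def bma_pdf(y, forecasts, weights, sigma):
    """BMA predictive density at y: a weighted mixture of Gaussians,
    each centred on one ensemble member forecast."""
    def norm(x, mu, s):
        return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    return sum(w * norm(y, f, sigma) for w, f in zip(weights, forecasts))

members = [19.8, 20.5, 21.2]   # hypothetical 2-m temperature forecasts (deg C)
weights = [0.5, 0.3, 0.2]      # BMA weights: members' relative skill on training data
print(bma_pdf(20.0, members, weights, sigma=1.0))
```

The weighted ensemble mean, sum(w*f), is the deterministic-style BMA forecast mentioned in the abstract.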

  7. Reactive Burn Model Calibration for PETN Using Ultra-High-Speed Phase Contrast Imaging

    Science.gov (United States)

    Johnson, Carl; Ramos, Kyle; Bolme, Cindy; Sanchez, Nathaniel; Barber, John; Montgomery, David

    2017-06-01

    A 1D reactive burn model (RBM) calibration for a plastic bonded high explosive (HE) requires run-to-detonation data. In PETN (pentaerythritol tetranitrate, 1.65 g/cc) the shock to detonation transition (SDT) is on the order of a few millimeters. This rapid SDT imposes experimental length scales that preclude application of traditional calibration methods such as embedded electromagnetic gauge methods (EEGM), which are very effective when used to study 10 - 20 mm thick HE specimens. In recent work at Argonne National Laboratory's Advanced Photon Source we have obtained run-to-detonation data in PETN using ultra-high-speed dynamic phase contrast imaging (PCI). A reactive burn model calibration valid for 1D shock waves is obtained using density profiles spanning the transition to detonation, as opposed to particle velocity profiles from EEGM. Particle swarm optimization (PSO) methods were used to operate the LANL hydrocode FLAG iteratively to refine the SURF RBM parameters until a suitable parameter set was attained. These methods will be presented along with model validation simulations. The novel method described is generally applicable to 'sensitive' energetic materials, particularly those with areal densities amenable to radiography.
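Particle swarm optimization, used above to drive FLAG toward a suitable SURF parameter set, is easy to sketch. The toy below minimizes a 1-D quadratic misfit in plain Python; the real calibration evaluates a hydrocode run per candidate parameter set, and the coefficients (`w`, `c1`, `c2`) are illustrative defaults, not the paper's settings.

```python
import random

def pso(objective, bounds, n_particles=20, n_iter=100,
        w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal 1-D particle swarm optimizer: each particle is pulled toward
    its personal best and the swarm's global best position."""
    random.seed(seed)
    lo, hi = bounds
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                            # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))   # stay in bounds
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i], val
    return gbest, gbest_val

# Toy "calibration": recover the parameter minimizing a squared misfit.
best, err = pso(lambda x: (x - 3.2) ** 2, bounds=(0.0, 10.0))
print(round(best, 2))  # -> 3.2
```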

  8. Assessing River Low-Flow Uncertainties Related to Hydrological Model Calibration and Structure under Climate Change Conditions

    Directory of Open Access Journals (Sweden)

    Mélanie Trudel

    2017-03-01

    Full Text Available Low-flow is the flow of water in a river during prolonged dry weather. This paper investigated the uncertainty originating from hydrological model calibration and structure in low-flow simulations under climate change conditions. Two hydrological models of contrasting complexity, GR4J and SWAT, were applied to four sub-watersheds of the Yamaska River, Canada. The two models were calibrated using seven different objective functions, including the Nash-Sutcliffe coefficient (NSEQ) and six other objective functions more related to low flows. The uncertainty in the model parameters was evaluated using a PARAmeter SOLutions procedure (PARASOL). Twelve climate projections from different combinations of General Circulation Models (GCMs) and Regional Circulation Models (RCMs) were used to simulate low-flow indices in a reference (1970–2000) and a future (2040–2070) horizon. Results indicate that the NSEQ objective function does not properly represent low-flow indices for either model. The NSE objective function applied to the log of the flows shows the lowest total variance for all sub-watersheds. In addition, these hydrological models should be used with care for low-flow studies, since they both show some inconsistent results. The uncertainty is higher for SWAT than for GR4J. With GR4J, the uncertainties in the simulations for the 7Q2 index (the 7-day low-flow value with a 2-year return period) are lower for the future period than for the reference period. This can be explained by the analysis of hydrological processes. In the future horizon, a significant worsening of low-flow conditions was projected.
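The 7Q2 index defined above (7-day low flow with a 2-year return period) can be computed by taking each year's minimum 7-day moving-average flow and then the value exceeded in half the years. The sketch below uses the median of annual minima as the 2-year quantile, a common simplification (the study's frequency analysis may differ), with made-up 10-day "years" for brevity.

```python
def seven_q_two(annual_daily_flows):
    """7Q2 low-flow index: per year, the minimum 7-day moving average of
    daily flow; the 2-year return-period value is taken as the median
    of those annual minima (exceeded in half the years)."""
    minima = []
    for flows in annual_daily_flows:
        min7 = min(sum(flows[i:i + 7]) / 7.0
                   for i in range(len(flows) - 6))
        minima.append(min7)
    minima.sort()
    n = len(minima)
    mid = n // 2
    return minima[mid] if n % 2 else 0.5 * (minima[mid - 1] + minima[mid])

# Three toy "years" of 10 daily flows each (m^3/s)
years = [
    [5, 4, 3, 2, 2, 2, 2, 2, 3, 4],   # min 7-day mean = 16/7
    [6, 5, 4, 4, 3, 3, 3, 3, 4, 5],   # min 7-day mean = 24/7
    [4, 3, 2, 1, 1, 1, 1, 1, 2, 3],   # min 7-day mean = 9/7
]
print(round(seven_q_two(years), 2))   # -> 2.29 (the median annual minimum)
```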

  9. A new calibration model for pointing a radio telescope that considers nonlinear errors in the azimuth axis

    International Nuclear Information System (INIS)

    Kong De-Qing; Wang Song-Gen; Zhang Hong-Bo; Wang Jin-Qing; Wang Min

    2014-01-01

    A new calibration model of a radio telescope that includes pointing error is presented, which considers nonlinear errors in the azimuth axis. For a large radio telescope, in particular for a telescope with a turntable, it is difficult to correct pointing errors using a traditional linear calibration model, because errors produced by the wheel-on-rail or center bearing structures are generally nonlinear. A Fourier expansion is made for the oblique error and the parameters describing the inclination direction along the azimuth axis based on the linear calibration model, and a new calibration model for pointing is derived. The new pointing model is applied to the 40 m radio telescope administered by Yunnan Observatories, a telescope that uses a turntable. The results show that this model can significantly reduce the residual systematic errors due to nonlinearity in the azimuth axis compared with the linear model.

  10. Fiction and reality in the modelling world - Balance between simplicity and complexity, calibration and identifiability, verification and falsification

    DEFF Research Database (Denmark)

    Harremoës, P.; Madsen, H.

    1999-01-01

    Where is the balance between simplicity and complexity in model prediction of urban drainage structures? The calibration/verification approach to testing of model performance gives an exaggerated sense of certainty. Frequently, the model structure and the parameters are not identifiable by calibration/verification on the basis of the data series available, which generates elements of sheer guessing - unless the universality of the model is based on induction, i.e. experience from the sum of all previous investigations. There is a need to deal more explicitly with uncertainty...

  11. Calibration of the Nonlinear Accelerator Model at the Diamond Storage Ring

    CERN Document Server

    Bartolini, Riccardo; Rowland, James; Martin, Ian; Schmidt, Frank

    2010-01-01

    The correct implementation of the nonlinear ring model is crucial to achieve the top performance of a synchrotron light source. Several dynamics quantities can be used to compare the real machine with the model and eventually to correct the accelerator. Most of these methods are based on the analysis of turn-by-turn data of excited betatron oscillations. We present the experimental results of the campaign of measurements carried out at Diamond. A combination of Frequency Map Analysis (FMA) and detuning with momentum measurements has allowed a precise calibration of the nonlinear model capable of reproducing the nonlinear beam dynamics in the storage ring.

  12. Sensitivity analysis and development of calibration methodology for near-surface hydrogeology model of Laxemar

    International Nuclear Information System (INIS)

    Aneljung, Maria; Sassner, Mona; Gustafsson, Lars-Goeran

    2007-11-01

    This report describes modelling where the hydrological modelling system MIKE SHE has been used to describe surface hydrology, near-surface hydrogeology, advective transport mechanisms, and the contact between groundwater and surface water within the SKB site investigation area at Laxemar. In the MIKE SHE system, surface water flow is described with the one-dimensional modelling tool MIKE 11, which is fully and dynamically integrated with the groundwater flow module in MIKE SHE. In early 2008, a supplementary data set will be available and a process of updating, rebuilding and calibrating the MIKE SHE model based on this data set will start. Before the calibration on the new data begins, it is important to gather as much knowledge as possible on calibration methods, and to identify critical calibration parameters and areas within the model that require special attention. In this project, the MIKE SHE model has been further developed. The model area has been extended, and the present model also includes an updated bedrock model and a more detailed description of the surface stream network. The numerical model has been updated and optimized, especially regarding the modelling of evapotranspiration and the unsaturated zone, and the coupling between the surface stream network in MIKE 11 and the overland flow in MIKE SHE. An initial calibration has been made and a base case has been defined and evaluated. In connection with the calibration, the most important changes made in the model were the following: The evapotranspiration was reduced. The infiltration capacity was reduced. The hydraulic conductivities of the Quaternary deposits in the water-saturated part of the subsurface were reduced. Data from one surface water level monitoring station, four surface water discharge monitoring stations and 43 groundwater level monitoring stations (SSM series boreholes) have been used to evaluate and calibrate the model. The base case simulations showed a reasonable agreement

  13. Calibration of the rutting model in HDM 4 on the highway network in Macedonia

    Directory of Open Access Journals (Sweden)

    Ognjenovic Slobodan

    2018-01-01

    Full Text Available The World Bank HDM 4 model is adopted in many countries worldwide. It consists of models for almost all types of deformation of pavement structures, but it cannot be used everywhere in the world as originally developed without proper adjustment to local conditions such as traffic load, climate, construction specificities, maintenance level etc. This paper presents the results of the research carried out in Macedonia to determine the calibration coefficient of the rutting model in HDM 4.

  14. Caliver: An R package for CALIbration and VERification of forest fire gridded model outputs.

    Science.gov (United States)

    Vitolo, Claudia; Di Giuseppe, Francesca; D'Andrea, Mirko

    2018-01-01

    The name caliver stands for CALIbration and VERification of forest fire gridded model outputs. This is a package developed for the R programming language and available under an APACHE-2 license from a public repository. In this paper we describe the functionalities of the package and give examples using publicly available datasets. Fire danger model outputs are taken from the modeling components of the European Forest Fire Information System (EFFIS) and observed burned areas from the Global Fire Emission Database (GFED). Complete documentation, including a vignette, is also available within the package.

  15. Analysis of the Gaia RVS Region in ESPaDOnS Spectra of Asteroseismic Calibration Stars

    Science.gov (United States)

    Vesa, Oana; Huber, Daniel; Gaidos, Eric

    2018-01-01

    While surface gravity can be measured from asteroseismology, asteroseismology cannot be applied to every star. Surface gravity is a critical stellar parameter because it can be used to calculate the radii of stars, which is important in the characterization of host stars of exoplanets. Here we present spectroscopic observations from ESPaDOnS on the Canada-France-Hawaii Telescope of 172 benchmark “gold standard” stars observed by the NASA Kepler Mission for which densities and surface gravities have been precisely measured using asteroseismology. The goal is to discover an empirical correlation between the equivalent width of the spectral lines in the infrared Ca II triplet region (from 8470 to 8710 angstroms) and surface gravity and other stellar parameters, such as effective temperature and metallicity. Of the spectral lines in this region, the Mg I line at 8736 angstroms has so far shown the best potential as an indicator of surface gravity, with equivalent width increasing slightly as a function of surface gravity; however, degeneracies with effective temperature and metallicity need to be explored further. If a true indicator of surface gravity can be found, it can be applied to the R~11000 Gaia radial velocity spectra, which will be released for millions of stars over the coming years.

  16. Calibration and testing of IKU's oil spill contingency and response (OSCAR) model system

    International Nuclear Information System (INIS)

    Reed, M.; Aamo, O.M.; Downing, K.

    1996-01-01

    A computer modeling system entitled Oil Spill Contingency and Response (OSCAR) was calibrated and tested using a variety of field observations. The objective of the exercise was to establish model credibility and increase confidence in efforts to compare alternate oil spill response strategies, while maintaining a balance between response costs and environmental protection. The key components of the system are IKU's data-based oil weathering model, a three-dimensional oil trajectory and chemical fates model, an oil spill combat model, and exposure models for fish, ichthyoplankton, birds, and marine mammals. Most modelled calculations were in good agreement with field observations. One discrepancy was found which could be attributed to an underestimation of wind drift in the current model. 21 refs., 4 tabs., 32 figs

  17. A fully Bayesian method for jointly fitting instrumental calibration and X-ray spectral models

    International Nuclear Information System (INIS)

    Xu, Jin; Yu, Yaming; Van Dyk, David A.; Kashyap, Vinay L.; Siemiginowska, Aneta; Drake, Jeremy; Ratzlaff, Pete; Connors, Alanna; Meng, Xiao-Li

    2014-01-01

    Owing to a lack of robust principled methods, systematic instrumental uncertainties have generally been ignored in astrophysical data analysis despite wide recognition of the importance of including them. Ignoring calibration uncertainty can cause bias in the estimation of source model parameters and can lead to underestimation of the variance of these estimates. We previously introduced a pragmatic Bayesian method to address this problem. The method is 'pragmatic' in that it introduced an ad hoc technique that simplified computation by neglecting the potential information in the data for narrowing the uncertainty for the calibration product. Following that work, we use a principal component analysis to efficiently represent the uncertainty of the effective area of an X-ray (or γ-ray) telescope. Here, however, we leverage this representation to enable a principled, fully Bayesian method that coherently accounts for the calibration uncertainty in high-energy spectral analysis. In this setting, the method is compared with standard analysis techniques and the pragmatic Bayesian method. The advantage of the fully Bayesian method is that it allows the data to provide information not only for estimation of the source parameters but also for the calibration product—here the effective area, conditional on the adopted spectral model. In this way, it can yield more accurate and efficient estimates of the source parameters along with valid estimates of their uncertainty. Provided that the source spectrum can be accurately described by a parameterized model, this method allows rigorous inference about the effective area by quantifying which possible curves are most consistent with the data.

  18. Multiobjective Sampling Design for Calibration of Water Distribution Network Model Using Genetic Algorithm and Neural Network

    Directory of Open Access Journals (Sweden)

    Kourosh Behzadian

    2008-03-01

    Full Text Available In this paper, a novel multiobjective optimization model is presented for selecting optimal locations in the water distribution network (WDN) with the aim of installing pressure loggers. The pressure data collected at the optimal locations will later be used in the calibration of the proposed WDN model. The objective functions consist of maximization of calibrated model prediction accuracy and minimization of the total cost of sampling design. In order to decrease the model run time, an optimization model has been developed using a multiobjective genetic algorithm and an adaptive neural network (MOGA-ANN). Neural networks (NNs) are initially trained after a number of initial GA generations and periodically retrained and updated after generation of a specified number of full model-analyzed solutions. Trained NNs replace the fitness evaluation of some chromosomes within the GA progress. Using a cache prevents objective function evaluation of repeated chromosomes within the GA. Optimal solutions are obtained through the Pareto-optimal front with respect to the two objective functions. Results show that incorporating NNs in MOGA to approximate portions of the chromosomes’ fitness in each generation leads to considerable savings in model run time and can be promising for reducing run time in optimization models with significant computational effort.
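The chromosome cache mentioned above (avoiding objective function evaluation of repeated chromosomes) is simple memoization keyed on the chromosome's genes. A hedged sketch in Python with a hypothetical fitness function, not the paper's WDN objective:

```python
def make_cached_fitness(fitness):
    """Wrap an expensive fitness function with a cache so chromosomes that
    recur across GA generations are evaluated only once."""
    cache = {}
    calls = {"n": 0}                 # count of real (uncached) evaluations
    def wrapped(chromosome):
        key = tuple(chromosome)      # genes as a hashable cache key
        if key not in cache:
            calls["n"] += 1
            cache[key] = fitness(chromosome)
        return cache[key]
    wrapped.evaluations = calls
    return wrapped

# Hypothetical stand-in fitness: sum of squared gene values.
f = make_cached_fitness(lambda c: sum(x * x for x in c))
population = [[1, 2], [3, 4], [1, 2], [1, 2]]   # duplicates across generations
scores = [f(c) for c in population]
print(scores, f.evaluations["n"])   # -> [5, 25, 5, 5] 2
```

Only two real evaluations are performed for four population members; in the paper the same idea spares full hydraulic model runs.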

  19. Laboratory calibration of density-dependent lines in the extreme ultraviolet spectral region

    Science.gov (United States)

    Lepson, J. K.; Beiersdorfer, P.; Gu, M. F.; Desai, P.; Bitter, M.; Roquemore, L.; Reinke, M. L.

    2012-05-01

    We have been making spectral measurements in the extreme ultraviolet (EUV) from different laboratory sources in order to investigate the electron density dependence of various astrophysically important emission lines and to test the atomic models underlying the diagnostic line ratios. The measurements are being performed at the Livermore EBIT-I electron beam ion trap, the National Spherical Torus Experiment (NSTX) at Princeton, and the Alcator C-Mod tokamak at the Massachusetts Institute of Technology, which together span four orders of magnitude in electron density and allow us to test the models in both the high- and low-density limits. Here we present measurements of Fe XXII and Ar XIV, which include new data from an ultra-high-resolution (λ/Δλ > 4000) spectrometer at the EBIT-I facility. We found good agreement between the measurements and modeling calculations for Fe XXII, but poorer agreement for Ar XIV.

  20. The Effect of Sample Size and Data Numbering on Precision of Calibration Model to predict Soil Properties

    Directory of Open Access Journals (Sweden)

    H Mohamadi Monavar

    2017-10-01

    Full Text Available Introduction Precision agriculture (PA) is a technology that measures and manages within-field variability, such as the physical and chemical properties of soil. The nondestructive and rapid VIS-NIR technique has shown a significant correlation between reflectance spectra and the physical and chemical properties of soil. Quantitative prediction of soil factors such as nitrogen, carbon, cation exchange capacity, and clay content is very important in precision farming. The emphasis of this paper is on comparing different techniques for choosing calibration samples, such as random selection, selection based on chemical data, and selection based on PCA. Since increasing the number of samples is usually time-consuming and costly, this study sought the best of the available sampling methods for building calibration models. In addition, the effect of sample size on the accuracy of the calibration and validation models was analyzed. Materials and Methods Two hundred and ten soil samples were collected from a cultivated farm located in Avarzaman in Hamedan province, Iran. The crop rotation was mostly potato and wheat. Samples were collected from a depth of 20 cm, passed through a 2 mm sieve, and air dried at room temperature. Chemical analysis was performed in the soil science laboratory, faculty of agriculture engineering, Bu-Ali Sina University, Hamadan, Iran. Two spectrometers (AvaSpec-ULS 2048 UV-VIS and FT-NIR100N) were used to measure the spectral bands covering the UV-Vis and NIR regions (220-2200 nm). Each soil sample was uniformly tiled in a petri dish and scanned 20 times. The pre-processing methods of multiplicative scatter correction (MSC) and baseline correction (BC) were then applied to the raw signals using Unscrambler software. The samples were divided into two groups: one group of 105 samples for calibration and the second group for validation. Each time, 15 samples were selected randomly and tested the accuracy of

  1. U.S. Department of Energy Office of Legacy Management Calibration Facilities - 12103

    Energy Technology Data Exchange (ETDEWEB)

    Barr, Deborah [U.S. Department of Energy Office of Legacy Management, Grand Junction, Colorado (United States); Traub, David; Widdop, Michael [S.M. Stoller Corporation, Grand Junction, Colorado (United States)

    2012-07-01

    This paper describes radiometric calibration facilities located in Grand Junction, Colorado, and at three secondary calibration sites. These facilities are available to the public for the calibration of radiometric field instrumentation for in-situ measurements of radium (uranium), thorium, and potassium. Both borehole and hand-held instruments may be calibrated at the facilities. Aircraft or vehicle mounted systems for large area surveys may be calibrated at the Grand Junction Regional Airport facility. These calibration models are recognized internationally as stable, well-characterized radiation sources for calibration. Calibration models built in other countries are referenced to the DOE models, which are also widely used as a standard for calibration within the U.S. Calibration models are used to calibrate radiation detectors used in uranium exploration, remediation, and homeland security. (authors)

  2. Robustness of near-infrared calibration models for the prediction of milk constituents during the milking process.

    Science.gov (United States)

    Melfsen, Andreas; Hartung, Eberhard; Haeussermann, Angelika

    2013-02-01

    The robustness of in-line raw milk analysis with near-infrared spectroscopy (NIRS) was tested with respect to the prediction of the raw milk contents fat, protein and lactose. Near-infrared (NIR) spectra of raw milk (n = 3119) were acquired on three different farms during the milking process, covering 354 milkings over a period of six months. Calibration models were calculated for: a random data set from each farm (fully random internal calibration); the first two thirds of the visits per farm (internal calibration); the whole datasets of two of the three farms (external calibration); and combinations of external and internal datasets. Validation was done either on the remaining data set per farm (internal validation) or on data from the remaining farms (external validation). Excellent calibration results were obtained when fully randomised internal calibration sets were used for milk analysis. In this case, RPD values of around ten, five and three were achieved for the prediction of fat, protein and lactose content, respectively. Farm-internal calibrations achieved much poorer prediction results, especially for protein and lactose, with RPD values of around two and one, respectively. Prediction accuracy improved when validation was done on spectra of an external farm, mainly owing to the higher sample variation in external calibration sets in terms of feeding diets and individual cow effects. The results showed that further improvements were achieved when additional farm information was added to the calibration set. One of the main requirements of a robust calibration model is the ability to predict milk constituents in unknown future milk samples. The robustness and quality of prediction increase with increasing variation in, e.g., feeding and cow-individual milk composition in the calibration model.
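    The RPD values quoted above are the ratio of the standard deviation of the reference values to the standard error of prediction. A minimal sketch with synthetic numbers (not the paper's milk data; the fat contents and error levels below are invented):

```python
import numpy as np

def rpd(reference, predicted):
    """Ratio of performance to deviation: SD of the reference values
    divided by the root-mean-square error of prediction."""
    reference = np.asarray(reference, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmsep = np.sqrt(np.mean((reference - predicted) ** 2))
    return np.std(reference, ddof=1) / rmsep

# Synthetic illustration: a tighter calibration yields a higher RPD.
rng = np.random.default_rng(42)
fat = rng.normal(4.0, 0.8, 200)             # hypothetical fat contents, %
good = fat + rng.normal(0.0, 0.08, 200)     # strong calibration
weak = fat + rng.normal(0.0, 0.60, 200)     # weak calibration
```

    An RPD near ten (as reported for fat with random internal calibration) means prediction errors are an order of magnitude smaller than the natural spread of the constituent.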

  3. Alternative ways of using field-based estimates to calibrate ecosystem models and their implications for carbon cycle studies

    Science.gov (United States)

    He, Yujie; Zhuang, Qianlai; McGuire, David; Liu, Yaling; Chen, Min

    2013-01-01

    Model-data fusion is a process in which field observations are used to constrain model parameters. How observations are used to constrain parameters has a direct impact on the carbon cycle dynamics simulated by ecosystem models. In this study, we present an evaluation of several options for the use of observations in modeling regional carbon dynamics and explore their implications. We calibrated the Terrestrial Ecosystem Model at each of three vegetation classification levels for the Alaskan boreal forest: the species level, the plant-functional-type (PFT) level, and the biome level, and we examined the differences in simulated carbon dynamics. Species-specific field-based estimates were used directly to parameterize the model for species-level simulations, while weighted averages based on species percent cover were used to generate estimates for PFT- and biome-level model parameterization. We found that calibrated key ecosystem process parameters differed substantially among species and overlapped for species that are categorized into different PFTs. Our analysis of the parameter sets suggests that the PFT-level parameterizations primarily reflected the dominant species and that functional information for some species was lost from the PFT-level parameterizations. The biome-level parameterization was primarily representative of the needleleaf PFT and lost information on broadleaf species and PFT function. Our results indicate that PFT-level simulations may be representative of the performance of species-level simulations, while biome-level simulations may result in biased estimates. Improved theoretical and empirical justifications for grouping species into PFTs or biomes are needed to adequately represent the dynamics of ecosystem functioning and structure.

  4. New error calibration tests for gravity models using subset solutions and independent data - Applied to GEM-T3

    Science.gov (United States)

    Lerch, F. J.; Nerem, R. S.; Chinn, D. S.; Chan, J. C.; Patel, G. B.; Klosko, S. M.

    1993-01-01

    A new method has been developed to provide a direct test of the error calibrations of gravity models based on actual satellite observations. The basic approach projects the error estimates of the gravity model parameters onto satellite observations, and the results of these projections are then compared with data residuals computed from the orbital fits. To allow specific testing of the gravity error calibrations, subset solutions are computed based on the data set and data weighting of the gravity model. The approach is demonstrated using GEM-T3, showing that the gravity error estimates are well calibrated and that reliable predictions of orbit accuracies can be achieved for independent orbits.

  5. Estimation of the limit of detection in semiconductor gas sensors through linearized calibration models.

    Science.gov (United States)

    Burgués, Javier; Jiménez-Soto, Juan Manuel; Marco, Santiago

    2018-07-12

    The limit of detection (LOD) is a key figure of merit in chemical sensing. However, its estimation is hindered by the non-linear calibration curves characteristic of semiconductor gas sensor technologies such as metal oxide (MOX) sensors, gasFETs and thermoelectric sensors. Additionally, chemical sensors suffer from cross-sensitivities and temporal stability problems. Applying the International Union of Pure and Applied Chemistry (IUPAC) recommendations for univariate LOD estimation to non-linear semiconductor gas sensors is not straightforward because of the strong statistical requirements of the IUPAC methodology (linearity, homoscedasticity, normality). Here, we propose a methodological approach to LOD estimation through linearized calibration models. As an example, the methodology is applied to the detection of low concentrations of carbon monoxide using MOX gas sensors in a scenario where the main source of error is the presence of uncontrolled levels of humidity.
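    Once the calibration has been linearized, the familiar univariate estimator (LOD ≈ 3.3·σ/slope, with σ the residual standard deviation of the fitted line) applies. The data below are invented, and this sketch covers only the univariate core of the approach, not the humidity cross-sensitivity treatment:

```python
import numpy as np

# Hypothetical linearized calibration data: sensor response vs. CO level.
conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])       # ppm
resp = np.array([0.05, 0.42, 0.81, 1.18, 1.62, 1.95])  # linearized signal

# Ordinary least-squares line and the residual standard deviation
# (n - 2 degrees of freedom for a two-parameter fit).
slope, intercept = np.polyfit(conc, resp, 1)
residuals = resp - (slope * conc + intercept)
sigma = np.sqrt(np.sum(residuals**2) / (len(conc) - 2))

# IUPAC-style limit-of-detection estimate on the linearized scale.
lod = 3.3 * sigma / slope
```

    The linearization step is what makes the homoscedasticity and linearity assumptions behind this estimator defensible for MOX-type sensors.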

  6. A Monte Carlo modeling alternative for the API Gamma Ray Calibration Facility

    International Nuclear Information System (INIS)

    Galford, J.E.

    2017-01-01

    The gamma ray pit at the API Calibration Facility, located on the University of Houston campus, defines the API unit for natural gamma ray logs used throughout the petroleum logging industry. Future use of the facility is uncertain. An alternative method is proposed to preserve the gamma ray API unit definition as an industry standard by using Monte Carlo modeling to obtain accurate counting rate-to-API unit conversion factors for gross-counting and spectral gamma ray tool designs. - Highlights: • A Monte Carlo alternative is proposed to replace empirical calibration procedures. • The proposed Monte Carlo alternative preserves the original API unit definition. • MCNP source and materials descriptions are provided for the API gamma ray pit. • Simulated results are presented for several wireline logging tool designs. • The proposed method can be adapted for use with logging-while-drilling tools.

  7. α Centauri A as a potential stellar model calibrator: establishing the nature of its core

    Science.gov (United States)

    Nsamba, B.; Monteiro, M. J. P. F. G.; Campante, T. L.; Cunha, M. S.; Sousa, S. G.

    2018-05-01

    Understanding the physical process responsible for the transport of energy in the core of α Centauri A is of the utmost importance if this star is to be used in the calibration of stellar model physics. Adoption of different parallax measurements available in the literature results in differences in the interferometric radius constraints used in stellar modelling. Further, this is at the origin of the different dynamical mass measurements reported for this star. With the goal of reproducing the revised dynamical mass derived by Pourbaix & Boffin, we modelled the star using two stellar grids varying in the adopted nuclear reaction rates. Asteroseismic and spectroscopic observables were complemented with different interferometric radius constraints during the optimisation procedure. Our findings show that best-fit models reproducing the revised dynamical mass favour the existence of a convective core (≳ 70% of best-fit models), a result that is robust against changes to the model physics. If this mass is accurate, then α Centauri A may be used to calibrate stellar model parameters in the presence of a convective core.

  8. On the possibility of calibrating urban storm-water drainage models using gauge-based adjusted radar rainfall estimates

    OpenAIRE

    Ochoa-Rodriguez, S; Wang, L; Simoes, N; Onof, C; Maksimović, Č

    2013-01-01

    Traditionally, urban storm water drainage models have been calibrated using only raingauge data, which may result in overly conservative models due to the lack of spatial description of rainfall. With the advent of weather radars, radar rainfall estimates with higher temporal and spatial resolution have become increasingly available and have started to be used operationally for urban storm water model calibration and real time operation. Nonetheless,...

  9. Model calibration and validation for OFMSW and sewage sludge co-digestion reactors

    International Nuclear Information System (INIS)

    Esposito, G.; Frunzo, L.; Panico, A.; Pirozzi, F.

    2011-01-01

    Highlights: → Disintegration is the limiting step of the anaerobic co-digestion process. → Disintegration kinetic constant does not depend on the waste particle size. → Disintegration kinetic constant depends only on the waste nature and composition. → The model calibration can be performed on organic waste of any particle size. - Abstract: A mathematical model has recently been proposed by the authors to simulate the biochemical processes that prevail in a co-digestion reactor fed with sewage sludge and the organic fraction of municipal solid waste. This model is based on the Anaerobic Digestion Model no. 1 of the International Water Association, which has been extended to include the co-digestion processes, using surface-based kinetics to model the organic waste disintegration and conversion to carbohydrates, proteins and lipids. When organic waste solids are present in the reactor influent, the disintegration process is the rate-limiting step of the overall co-digestion process. The main advantage of the proposed modeling approach is that the kinetic constant of such a process does not depend on the waste particle size distribution (PSD) but rather depends only on the nature and composition of the waste particles. Model calibration aimed at assessing the kinetic constant of the disintegration process can therefore be conducted using organic waste samples of any PSD, and the resulting value will be suitable for all organic wastes of the same nature as the investigated samples, independently of their PSD. This assumption was proven in this study by biomethane potential experiments conducted on organic waste samples with different particle sizes. The results of these experiments were used to calibrate and validate the mathematical model, resulting in good agreement between the simulated and observed data for all investigated particle sizes of the solid waste. This study confirms the strength of the proposed model and calibration procedure.

  10. Principal components based support vector regression model for on-line instrument calibration monitoring in NPPs

    International Nuclear Information System (INIS)

    Seo, In Yong; Ha, Bok Nam; Lee, Sung Woo; Shin, Chang Hoon; Kim, Seong Jun

    2010-01-01

    In nuclear power plants (NPPs), periodic sensor calibrations are required to assure that sensors are operating correctly. Because sensor operating status is checked only at each fuel outage, faulty sensors may remain undetected for periods of up to 24 months. Moreover, of the sensors calibrated, typically only a few are actually found to be faulty. For the safe operation of NPPs and the reduction of unnecessary calibration work, on-line instrument calibration monitoring is needed. In this study, principal-component-based auto-associative support vector regression (PCSVR) using response surface methodology (RSM) is proposed for sensor signal validation in NPPs. This paper describes the design of a PCSVR-based sensor validation system for a power generation system. RSM is employed to determine the optimal values of the SVR hyperparameters and is compared to a genetic algorithm (GA). The proposed PCSVR model is verified with actual plant data from Kori Nuclear Power Plant Unit 3 and is compared with auto-associative support vector regression (AASVR) and auto-associative neural network (AANN) models. The auto-sensitivity of AASVR is improved by around six times by using PCA, resulting in good detection of sensor drift. Compared to AANN, accuracy and cross-sensitivity are better while the auto-sensitivity is almost the same. Meanwhile, the proposed RSM optimization of the PCSVR algorithm performs even better in terms of accuracy, auto-sensitivity, and averaged maximum error (though not averaged RMS error), and is much more time efficient than the conventional GA method.

  11. Sensitivity analysis and development of calibration methodology for near-surface hydrogeology model of Laxemar

    Energy Technology Data Exchange (ETDEWEB)

    Aneljung, Maria; Sassner, Mona; Gustafsson, Lars-Goeran (DHI Sverige AB, Lilla Bommen 1, SE-411 04 Goeteborg (Sweden))

    2007-11-15

    This report describes modelling where the hydrological modelling system MIKE SHE has been used to describe surface hydrology, near-surface hydrogeology, advective transport mechanisms, and the contact between groundwater and surface water within the SKB site investigation area at Laxemar. In the MIKE SHE system, surface water flow is described with the one-dimensional modelling tool MIKE 11, which is fully and dynamically integrated with the groundwater flow module in MIKE SHE. In early 2008, a supplementary data set will be available and a process of updating, rebuilding and calibrating the MIKE SHE model based on this data set will start. Before the calibration on the new data begins, it is important to gather as much knowledge as possible on calibration methods, and to identify critical calibration parameters and areas within the model that require special attention. In this project, the MIKE SHE model has been further developed. The model area has been extended, and the present model also includes an updated bedrock model and a more detailed description of the surface stream network. The numerical model has been updated and optimized, especially regarding the modelling of evapotranspiration and the unsaturated zone, and the coupling between the surface stream network in MIKE 11 and the overland flow in MIKE SHE. An initial calibration has been made and a base case has been defined and evaluated. In connection with the calibration, the most important changes made in the model were the following: The evapotranspiration was reduced. The infiltration capacity was reduced. The hydraulic conductivities of the Quaternary deposits in the water-saturated part of the subsurface were reduced. Data from one surface water level monitoring station, four surface water discharge monitoring stations and 43 groundwater level monitoring stations (SSM series boreholes) have been used to evaluate and calibrate the model. The base case simulations showed a reasonable agreement

  12. Hydrologic Model Development and Calibration: Contrasting a Single- and Multi-Objective Approach for Comparing Model Performance

    Science.gov (United States)

    Asadzadeh, M.; Maclean, A.; Tolson, B. A.; Burn, D. H.

    2009-05-01

    Hydrologic model calibration aims to find a set of parameters that adequately simulates observations of watershed behavior, such as streamflow, or a state variable, such as snow water equivalent (SWE). There are different metrics for evaluating calibration effectiveness that involve quantifying prediction errors, such as the Nash-Sutcliffe (NS) coefficient and bias evaluated for the entire calibration period, on a seasonal basis, for low flows, or for high flows. Many of these metrics are conflicting such that the set of parameters that maximizes the high flow NS differs from the set of parameters that maximizes the low flow NS. Conflicting objectives are very likely when different calibration objectives are based on different fluxes and/or state variables (e.g., NS based on streamflow versus SWE). One of the most popular ways to balance different metrics is to aggregate them based on their importance and find the set of parameters that optimizes a weighted sum of the efficiency metrics. Comparing alternative hydrologic models (e.g., assessing model improvement when a process or more detail is added to the model) based on the aggregated objective might be misleading since it represents one point on the tradeoff of desired error metrics. To derive a more comprehensive model comparison, we solved a bi-objective calibration problem to estimate the tradeoff between two error metrics for each model. Although this approach is computationally more expensive than the aggregation approach, it results in a better understanding of the effectiveness of selected models at each level of every error metric and therefore provides a better rationale for judging relative model quality. The two alternative models used in this study are two MESH hydrologic models (version 1.2) of the Wolf Creek Research basin that differ in their watershed spatial discretization (a single Grouped Response Unit, GRU, versus multiple GRUs). The MESH model, currently under development by Environment
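    The conflict between error metrics described above can be made concrete with a toy example: the same absolute simulation error produces a high NS on high flows and a poor NS on low flows, so a single weighted sum hides where on the tradeoff a parameter set sits. All numbers below are synthetic, not Wolf Creek data:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NS efficiency: 1 minus error variance over observed variance."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Synthetic record with distinct low-flow and high-flow regimes.
rng = np.random.default_rng(7)
obs = np.concatenate([rng.uniform(1, 5, 50), rng.uniform(50, 100, 50)])
sim = obs + rng.normal(0.0, 2.0, 100)       # same absolute error everywhere

high = obs >= np.median(obs)
ns_high = nash_sutcliffe(obs[high], sim[high])
ns_low = nash_sutcliffe(obs[~high], sim[~high])

# A weighted sum collapses the two metrics to one number and hides the
# high-flow/low-flow tradeoff that a bi-objective calibration exposes.
ns_weighted = 0.5 * ns_high + 0.5 * ns_low
```

    Two models with identical `ns_weighted` can occupy very different points on the (ns_high, ns_low) tradeoff, which is exactly what the bi-objective comparison is designed to reveal.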

  13. Modeling transducer impulse responses for predicting calibrated pressure pulses with the ultrasound simulation program Field II

    DEFF Research Database (Denmark)

    Bæk, David; Jensen, Jørgen Arendt; Willatzen, Morten

    2010-01-01

    FIELD II is a simulation software capable of predicting the field pressure in front of transducers having any complicated geometry. A calibrated prediction with this program is, however, dependent on an exact voltage-to-surface acceleration impulse response of the transducer. Such impulse response … is not calculated by FIELD II. This work investigates the usability of combining a one-dimensional multilayer transducer modeling principle with the FIELD II software. Multilayer here refers to a transducer composed of several material layers. Measurements of pressure and current from Pz27 piezoceramic disks … transducer model and the FIELD II software in combination give good agreement with measurements.

  14. The analytical calibration model of temperature effects on a silicon piezoresistive pressure sensor

    Directory of Open Access Journals (Sweden)

    Meng Nie

    2017-03-01

    Full Text Available Piezoresistive pressure sensors are presently in high demand for use in various microelectronic devices. The electrical behavior of these pressure sensors depends strongly on the temperature gradient. In this paper, the various factors responsible for the temperature drift of the pressure sensor are analyzed, including the effects of temperature and doping concentration on the pressure-sensitive resistance, package stress, and the effect of temperature on the Young's modulus. Based on this analysis, an analytical calibration model for the output voltage of the sensor is proposed and validated against experimental data.
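    One common form of such a calibration is a polynomial correction of the output voltage against temperature. The numbers below are invented, and this sketch covers only the temperature-drift term; the paper's analytical model also accounts for doping concentration and package stress:

```python
import numpy as np

# Hypothetical bridge output (mV) at constant pressure, drifting with
# temperature; a real sensor would be characterized over its rated range.
temp_c = np.array([-20.0, 0.0, 25.0, 50.0, 75.0])
v_out = np.array([101.4, 100.9, 100.0, 98.9, 97.5])

# Fit a quadratic drift model and subtract it, re-referencing the
# compensated output to the value at 25 degC.
coeffs = np.polyfit(temp_c, v_out, 2)
v_ref = np.polyval(coeffs, 25.0)
compensated = v_out - (np.polyval(coeffs, temp_c) - v_ref)
```

    After compensation the output is flat across the characterized temperature range, up to the residual error of the fitted drift model.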

  15. Spatial pattern evaluation of a calibrated national hydrological model - a remote-sensing-based diagnostic approach

    Science.gov (United States)

    Mendiguren, Gorka; Koch, Julian; Stisen, Simon

    2017-11-01

    Distributed hydrological models are traditionally evaluated against discharge stations, emphasizing the temporal and neglecting the spatial component of a model. The present study widens the traditional paradigm by highlighting spatial patterns of evapotranspiration (ET), a key variable at the land-atmosphere interface, obtained from two different approaches at the national scale of Denmark. The first approach is based on a national water resources model (DK-model), using the MIKE-SHE model code, and the second approach utilizes a two-source energy balance model (TSEB) driven mainly by satellite remote sensing data. Ideally, the hydrological model simulation and remote-sensing-based approach should present similar spatial patterns and driving mechanisms of ET. However, the spatial comparison showed that the differences are significant and indicate insufficient spatial pattern performance of the hydrological model. The differences in spatial patterns can partly be explained by the fact that the hydrological model is configured to run in six domains that are calibrated independently from each other, as is often the case for large-scale multi-basin calibrations. Furthermore, the model incorporates predefined temporal dynamics of leaf area index (LAI), root depth (RD) and crop coefficient (Kc) for each land cover type. This zonal approach of model parameterization ignores the spatiotemporal complexity of the natural system. To overcome this limitation, this study features a modified version of the DK-model in which LAI, RD and Kc are empirically derived using remote sensing data and detailed soil property maps in order to generate a higher degree of spatiotemporal variability and spatial consistency between the six domains. The effects of these changes are analyzed by using empirical orthogonal function (EOF) analysis to evaluate spatial patterns. The EOF analysis shows that including remote-sensing-derived LAI, RD and Kc in the distributed hydrological model adds
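    The EOF analysis used above reduces, computationally, to an SVD of the time-anomaly matrix of the spatial field. A minimal sketch on synthetic data (field sizes and the embedded spatial mode are invented, not DK-model output):

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical ET data: 120 monthly maps over 300 grid cells (flattened),
# dominated by one spatial mode plus noise.
n_time, n_cells = 120, 300
pattern = np.sin(np.linspace(0.0, 2.0 * np.pi, n_cells))
amplitude = rng.normal(0.0, 1.0, n_time)
field = np.outer(amplitude, pattern) + rng.normal(0.0, 0.1, (n_time, n_cells))

# EOF analysis: SVD of the time-mean-removed (time x space) matrix.
anomalies = field - field.mean(axis=0)
u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
eofs = vt                                   # rows: spatial patterns
variance_fraction = s**2 / np.sum(s**2)     # variance explained per EOF
```

    Comparing the leading EOFs (and their explained variance) between simulated and remote-sensing ET fields is what makes the spatial-pattern evaluation quantitative rather than visual.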

  16. Calibrating and validating a FE model for long-term behavior of RC beams

    Directory of Open Access Journals (Sweden)

    Tošić Nikola D.

    2014-01-01

    Full Text Available This study presents the research carried out in finding an optimal finite element (FE model for calculating the long-term behavior of reinforced concrete (RC beams. A multi-purpose finite element software DIANA was used. A benchmark test in the form of a simply supported beam loaded in four point bending was selected for model calibration. The result was the choice of 3-node beam elements, a multi-directional fixed crack model with constant stress cut-off, nonlinear tension softening and constant shear retention and a creep and shrinkage model according to CEB-FIP Model Code 1990. The model was then validated on 14 simply supported beams and 6 continuous beams. Good agreement was found with experimental results (within ±15%.

  17. Linear model correction: A method for transferring a near-infrared multivariate calibration model without standard samples

    Science.gov (United States)

    Liu, Yan; Cai, Wensheng; Shao, Xueguang

    2016-12-01

    Calibration transfer is essential for practical applications of near-infrared (NIR) spectroscopy because spectra may be measured on different instruments and the differences between the instruments must be corrected. Most calibration transfer methods require standard samples, measured on both instruments (referred to as the master and the slave instrument, respectively), to construct the transfer model. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. As a result, the coefficients of the linear models constructed from spectra measured on different instruments are similar in profile. Therefore, using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with only a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments were used to test the performance of the method. The results show that, for both datasets, the properties of the samples can be correctly predicted from the spectra using the transferred partial least squares (PLS) models. Because standard samples are not necessary, the method may be especially useful in practical applications.
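    The idea of nudging the master model's coefficient profile using a handful of slave-instrument spectra can be sketched as a ridge-style update toward the master coefficients. This is an illustrative analogue of LMC on invented data, not the authors' exact constrained optimization, and the dimensions and penalty weight are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical master calibration: 200 samples, 100 spectral channels.
n_chan = 100
true_b = 0.1 * np.sin(np.linspace(0.0, 3.0 * np.pi, n_chan))
X_master = rng.normal(0.0, 1.0, (200, n_chan))
y_master = X_master @ true_b + rng.normal(0.0, 0.01, 200)
b_master = np.linalg.lstsq(X_master, y_master, rcond=None)[0]

# Slave instrument: only 5 transfer spectra with known reference values,
# linearly related to the master response (scaled and offset).
X_slave = 1.05 * rng.normal(0.0, 1.0, (5, n_chan)) + 0.02
y_slave = X_slave @ true_b

# Minimal-change update: fit the slave data while penalizing departure
# from the master coefficient profile (lam controls how strongly the
# master profile is retained).
lam = 1.0
A = X_slave.T @ X_slave + lam * np.eye(n_chan)
b_slave = np.linalg.solve(A, X_slave.T @ y_slave + lam * b_master)
```

    By construction the updated coefficients fit the slave spectra at least as well as the untransferred master coefficients, while staying close to the master profile.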

  18. Approaches to highly parameterized inversion-A guide to using PEST for groundwater-model calibration

    Science.gov (United States)

    Doherty, John E.; Hunt, Randall J.

    2010-01-01

    Highly parameterized groundwater models can create calibration difficulties. Regularized inversion, the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation, is becoming a common approach to address these difficulties and enhance the transfer of information contained in field measurements to the parameters used to model the system. Though commonly used in other industries, regularized inversion is somewhat imperfectly understood in the groundwater field, and there is concern that this unfamiliarity can lead to underuse and misuse of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at an intermediate- to advanced-level modeler, and it focuses on the PEST software suite, a frequently used tool for highly parameterized model calibration that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented following a logical progression of steps for building suitable PEST input. The discussion starts with the use of pilot points as a parameterization device and the processing/grouping of observations to form multicomponent objective functions. A description of potential parameter solution methodologies and resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by PEST utility support programs, are presented in the appendixes.
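    Of the regularization schemes mentioned, the Tikhonov idea is the simplest to sketch: augment the least-squares problem with a penalty that pulls parameters toward preferred values, stabilizing an otherwise non-unique inversion. A toy linear example (not PEST itself; the matrix sizes and penalty weight are assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy ill-posed inversion: more parameters (e.g. pilot points) than
# observations, so unregularized least squares is non-unique.
n_obs, n_par = 20, 50
J = rng.normal(0.0, 1.0, (n_obs, n_par))     # sensitivity (Jacobian) matrix
p_true = np.zeros(n_par)
p_true[10] = 1.0
obs = J @ p_true + rng.normal(0.0, 0.01, n_obs)

# Tikhonov regularization: penalize departure from a preferred value
# (here zero), trading some data misfit for a stable, unique estimate.
lam = 0.1
p_est = np.linalg.solve(J.T @ J + lam * np.eye(n_par), J.T @ obs)
```

    PEST's actual implementation works on a nonlinear model through iterative linearization and adaptively weights the regularization against the measurement objective function, but the structure of each linearized step is the one above.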

  19. The worth of data to reduce predictive uncertainty of an integrated catchment model by multi-constraint calibration

    Science.gov (United States)

    Koch, J.; Jensen, K. H.; Stisen, S.

    2017-12-01

    Hydrological models that integrate numerical process descriptions across compartments of the water cycle typically require thorough calibration in order to estimate suitable effective model parameters. In this study, we apply a spatially distributed hydrological model code which couples the saturated zone with the unsaturated zone and the energy partitioning at the land surface. We conduct a comprehensive multi-constraint model calibration against nine independent observational datasets which reflect both the temporal and the spatial behavior of the hydrological response of a 1,000 km² catchment in Denmark. The datasets are obtained from satellite remote sensing and in-situ measurements and cover five keystone hydrological variables: discharge, evapotranspiration, groundwater head, soil moisture, and land surface temperature. Results indicate that a balanced optimization can be achieved in which the errors on the objective functions for all nine observational datasets are reduced simultaneously. The applied calibration framework was tailored to improving spatial pattern performance; however, the results suggest that the optimization is still more prone to improve the temporal dimension of model performance. The study also features a post-calibration linear uncertainty analysis, which quantifies parameter identifiability, that is, the worth of a specific observational dataset for inferring model parameter values through calibration. The ability of an observation to reduce predictive uncertainty is assessed as well. Such findings have concrete implications for the design of model calibration frameworks and, more generally, for the acquisition of data in hydrological observatories.

  20. Mathematical Model and Calibration Procedure of a PSD Sensor Used in Local Positioning Systems.

    Science.gov (United States)

    Rodríguez-Navarro, David; Lázaro-Galilea, José Luis; Bravo-Muñoz, Ignacio; Gardel-Vicente, Alfredo; Domingo-Perez, Francisco; Tsirigotis, Georgios

    2016-09-15

    Here, we propose a mathematical model and a calibration procedure for a PSD (position sensitive device) sensor equipped with an optical system, to enable accurate measurement of the angle of arrival of one or more beams of light emitted by infrared (IR) transmitters located at distances of between 4 and 6 m. To achieve this objective, it was necessary to characterize the intrinsic parameters that model the system and obtain their values. This first approach was based on a pin-hole model, to which system nonlinearities were added, and this was used to model the points obtained with the nA currents provided by the PSD. In addition, we analyzed the main sources of error, including PSD sensor signal noise, gain factor imbalances and PSD sensor distortion. The results indicated that the proposed model and method provided satisfactory calibration and yielded precise parameter values, enabling accurate measurement of the angle of arrival with a low degree of error, as evidenced by the experimental results.
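The pin-hole part of the model above has a simple closed form: an ideal spot position on the detector satisfies x = f·tan(θ), so the focal length can be fitted from known angles and the angle of arrival recovered by inversion. The sketch below is a deliberately simplified illustration (no nonlinearity or noise terms; function names are assumptions made here, not the paper's code).

```python
import numpy as np

def estimate_focal_length(x, theta):
    """Least-squares fit of f in the ideal pin-hole relation x = f * tan(theta)."""
    t = np.tan(theta)
    return float(np.sum(t * x) / np.sum(t * t))

def angle_of_arrival(x, f):
    """Invert the pin-hole relation to recover the beam angle from spot position x."""
    return np.arctan2(x, f)

# Synthetic calibration data: known angles, detector positions for f = 5 (arbitrary units).
theta = np.linspace(-0.3, 0.3, 7)
x = 5.0 * np.tan(theta)
f_hat = estimate_focal_length(x, theta)
```

A real calibration, as the abstract notes, must additionally model lens distortion, gain imbalances and PSD noise on top of this linear core.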

  1. Development and Calibration of Two-Dimensional Hydrodynamic Model of the Tanana River near Tok, Alaska

    Science.gov (United States)

    Conaway, Jeffrey S.; Moran, Edward H.

    2004-01-01

    Bathymetric and hydraulic data were collected by the U.S. Geological Survey on the Tanana River in proximity to Alaska Department of Transportation and Public Facilities' bridge number 505 at mile 80.5 of the Alaska Highway. Data were collected from August 7-9, 2002, over an approximately 5,000-foot reach of the river. These data were combined with topographic data provided by Alaska Department of Transportation and Public Facilities to generate a two-dimensional hydrodynamic model. The hydrodynamic model was calibrated with water-surface elevations, flow velocities, and flow directions collected at a discharge of 25,600 cubic feet per second. The calibrated model was then used for a simulation of the 100-year recurrence interval discharge of 51,900 cubic feet per second. The existing bridge piers were removed from the model geometry in a second simulation to model the hydraulic conditions in the channel without the piers' influence. The water-surface elevations, flow velocities, and flow directions from these simulations can be used to evaluate the influence of the piers on flow hydraulics and will assist the Alaska Department of Transportation and Public Facilities in the design of a replacement bridge.

  2. Combining engineering and data-driven approaches: Development of a generic fire risk model facilitating calibration

    DEFF Research Database (Denmark)

    De Sanctis, G.; Fischer, K.; Kohler, J.

    2014-01-01

    Fire risk models support decision making for engineering problems under the consistent consideration of the associated uncertainties. Empirical approaches can be used for cost-benefit studies when enough data about the decision problem are available, but often the empirical approaches are not detailed enough. Engineering risk models, on the other hand, may be detailed but typically involve assumptions that may result in a biased risk assessment and make a cost-benefit study problematic. In two related papers it is shown how engineering and data-driven modeling can be combined by developing a generic risk model that is calibrated to observed fire loss data. Generic risk models assess the risk of buildings based on specific risk indicators and support risk assessment at a portfolio level. After an introduction to the principles of generic risk assessment, the focus of the present paper…

  3. Calibrating a Salt Water Intrusion Model with Time-Domain Electromagnetic Data

    DEFF Research Database (Denmark)

    Herckenrath, Daan; Odlum, Nick; Nenna, Vanessa

    2013-01-01

    Salt water intrusion models are commonly used to support groundwater resource management in coastal aquifers. Concentration data used for model calibration are often sparse and limited in spatial extent. With airborne and ground-based electromagnetic surveys, electrical resistivity models can…, we perform a coupled hydrogeophysical inversion (CHI) in which we use a salt water intrusion model to interpret the geophysical data and guide the geophysical inversion. We refer to this methodology as a Coupled Hydrogeophysical Inversion-State (CHI-S), in which simulated salt concentrations are transformed to an electrical resistivity model, after which a geophysical forward response is calculated and compared with the measured geophysical data. This approach was applied for a field site in Santa Cruz County, California, where a time-domain electromagnetic (TDEM) dataset was collected…

  4. Cosmological model-independent Gamma-ray bursts calibration and its cosmological constraint to dark energy

    International Nuclear Information System (INIS)

    Xu, Lixin

    2012-01-01

    To date, the redshift of gamma-ray bursts (GRBs) can extend to z ∼ 8, which makes them a complementary probe of dark energy to Type Ia supernovae (SNe Ia). However, the calibration of GRBs remains a major challenge when they are used to constrain cosmological models. Although the absolute magnitude of GRBs is still unknown, the slopes of GRB correlations can be used as a useful constraint on dark energy in a completely cosmological-model-independent way. In this paper, we follow Wang's model-independent distance measurement method and calculate their values by using 109 GRB events via the so-called Amati relation. Then, we use the obtained model-independent distances to constrain the ΛCDM model as an example

  5. Regionalization of the Modified Bartlett-Lewis Rectangular Pulse Stochastic Rainfall Model

    OpenAIRE

    Dongkyun Kim; Francisco Olivera; Huidae Cho; Scott A. Socolofsky

    2013-01-01

    Parameters of the Modified Bartlett-Lewis Rectangular Pulse (MBLRP) stochastic rainfall simulation model were regionalized across the contiguous United States. Three thousand four hundred forty-four National Climate Data Center (NCDC) rain gauges were used to obtain spatial and seasonal patterns of the model parameters. The MBLRP model was calibrated to minimize the discrepancy between the precipitation depth statistics of the observed and MBLRP-generated precipitation time series. These...

  6. Building and calibrating a large-extent and high resolution coupled groundwater-land surface model using globally available data-sets

    Science.gov (United States)

    Sutanudjaja, E. H.; Van Beek, L. P.; de Jong, S. M.; van Geer, F.; Bierkens, M. F.

    2012-12-01

    The current generation of large-scale hydrological models generally lacks a groundwater model component simulating lateral groundwater flow. Large-scale groundwater models are rare due to a lack of hydro-geological data required for their parameterization and a lack of groundwater head data required for their calibration. In this study, we propose an approach to develop a large-extent fully-coupled land surface-groundwater model by using globally available datasets and calibrate it using a combination of discharge observations and remotely-sensed soil moisture data. The underlying objective is to devise a collection of methods that enables one to build and parameterize large-scale groundwater models in data-poor regions. The model used, PCR-GLOBWB-MOD, has a spatial resolution of 1 km x 1 km and operates on a daily basis. It consists of a single-layer MODFLOW groundwater model that is dynamically coupled to the PCR-GLOBWB land surface model. This fully-coupled model accommodates two-way interactions between surface water levels and groundwater head dynamics, as well as between upper soil moisture states and groundwater levels, including a capillary rise mechanism to sustain upper soil storage and thus to fulfill high evaporation demands (during dry conditions). As a test bed, we used the Rhine-Meuse basin, where more than 4000 groundwater head time series have been collected for validation purposes. The model was parameterized using globally available data-sets on surface elevation, drainage direction, land-cover, soil and lithology. Next, the model was calibrated using a brute force approach and massive parallel computing, i.e. by running the coupled groundwater-land surface model for more than 3000 different parameter sets. Here, we varied minimal soil moisture storage and saturated conductivities of the soil layers as well as aquifer transmissivities. Using different regularization strategies and calibration criteria we compared three calibration scenarios
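The brute-force calibration described above, i.e., running the coupled model for thousands of parameter sets and ranking them by a criterion, reduces to an exhaustive sweep over a parameter grid. The sketch below is a generic illustration (names such as `brute_force_calibrate` and the RMSE criterion are assumptions made here, not the study's actual code or criteria).

```python
import itertools
import numpy as np

def brute_force_calibrate(simulate, observed, param_grid):
    """Run the model for every combination in param_grid; return (best_rmse, best_params)."""
    observed = np.asarray(observed, dtype=float)
    best_rmse, best_params = np.inf, None
    for combo in itertools.product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), combo))
        sim = np.asarray(simulate(**params), dtype=float)
        rmse = float(np.sqrt(np.mean((sim - observed) ** 2)))
        if rmse < best_rmse:
            best_rmse, best_params = rmse, params
    return best_rmse, best_params

# Toy "model": discharge proportional to a single conductivity-like parameter k.
simulate = lambda k: [k * t for t in range(5)]
observed = [2 * t for t in range(5)]
rmse, params = brute_force_calibrate(simulate, observed, {"k": [1.0, 2.0, 3.0]})
```

In the study this loop is the expensive part; the 3000+ runs were distributed over a parallel cluster, but the selection logic is the same.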

  7. Regional transport model of atmospheric sulfates

    International Nuclear Information System (INIS)

    Rao, K.S.; Thomson, I.; Egan, B.A.

    1977-01-01

    As part of the Sulfate Regional Experiment (SURE) Design Project, a regional transport model of atmospheric sulfates has been developed. This quasi-Lagrangian three-dimensional grid numerical model uses a detailed SO2 emission inventory of major anthropogenic sources in the Eastern U.S. region, and observed meteorological data during an episode as inputs. The model accounts for advective transport and turbulent diffusion of the pollutants. The chemical transformation of SO2 and SO4(2-) and the deposition of the species at the earth's surface are assumed to be linear processes at specified constant rates. The numerical model can predict the daily average concentrations of SO2 and SO4(2-) at all receptor locations in the grid region during the episode. Because of the spatial resolution of the grid, this model is particularly suited to investigate the effect of tall stacks in reducing the ambient concentration levels of sulfur pollutants. This paper presents the formulations and assumptions of the regional sulfate transport model. The model inputs and results are discussed. Isopleths of predicted SO2 and SO4(2-) concentrations are compared with the observed ground level values. The bulk of the information in this paper is directed to air pollution meteorologists and environmental engineers interested in the atmospheric transport modeling studies of sulfur oxide pollutants

  8. When to Make Mountains out of Molehills: The Pros and Cons of Simple and Complex Model Calibration Procedures

    Science.gov (United States)

    Smith, K. A.; Barker, L. J.; Harrigan, S.; Prudhomme, C.; Hannaford, J.; Tanguy, M.; Parry, S.

    2017-12-01

    Earth and environmental models are relied upon to investigate system responses that cannot otherwise be examined. In simulating physical processes, models have adjustable parameters which may, or may not, have a physical meaning. Determining the values to assign to these model parameters is an enduring challenge for earth and environmental modellers. Selecting different error metrics by which the model results are compared to observations will lead to different sets of calibrated model parameters, and thus different model results. Furthermore, models may exhibit `equifinal' behaviour, where multiple combinations of model parameters lead to equally acceptable model performance against observations. These decisions in model calibration introduce uncertainty that must be considered when model results are used to inform environmental decision-making. This presentation focusses on the uncertainties that derive from the calibration of a four-parameter lumped catchment hydrological model (GR4J). The GR models contain an inbuilt automatic calibration algorithm that can satisfactorily calibrate against four error metrics in only a few seconds. However, a single, deterministic model result does not provide information on parameter uncertainty. Furthermore, a modeller interested in extreme events, such as droughts, may wish to calibrate against more low-flow-specific error metrics. In a comprehensive assessment, the GR4J model has been run with 500,000 Latin Hypercube Sampled parameter sets across 303 catchments in the United Kingdom. These parameter sets have been assessed against six error metrics, including two drought-specific metrics. This presentation compares the two approaches, and demonstrates that the inbuilt automatic calibration can outperform the Latin Hypercube experiment approach in single-metric assessed performance. However, it is also shown that there are many merits of the more comprehensive assessment, which allows for probabilistic model results, multi…
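Latin Hypercube Sampling, as used above to draw the 500,000 parameter sets, stratifies each parameter's range into n equal-probability bins and places exactly one sample in each bin per dimension. A minimal sketch (the function name and unit-interval scaling are illustrative; real use rescales each column to a parameter's physical range):

```python
import numpy as np

def latin_hypercube(n, k, rng):
    """Draw n samples in k dimensions on [0, 1), one sample per equal-probability stratum."""
    u = rng.random((n, k))                               # position within each stratum
    strata = np.column_stack([rng.permutation(n) for _ in range(k)])
    return (strata + u) / n

rng = np.random.default_rng(1)
samples = latin_hypercube(10, 4, rng)
```

Compared with plain random sampling, every marginal is guaranteed to be evenly covered, which is why LHS is favoured for expensive model-run budgets.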

  9. Multiobjective Optimal Algorithm for Automatic Calibration of Daily Streamflow Forecasting Model

    Directory of Open Access Journals (Sweden)

    Yi Liu

    2016-01-01

    A single objective function cannot describe the characteristics of a complicated hydrologic system. Consequently, multiobjective functions are needed for the calibration of hydrologic models. Multiobjective algorithms based on the theory of nondomination are employed to solve this multiobjective optimization problem. In this paper, a novel multiobjective optimization method based on differential evolution with adaptive Cauchy mutation and chaos searching (MODE-CMCS) is proposed to optimize a daily streamflow forecasting model. Besides, to enhance the diversity of the Pareto solutions, a more precise crowding distance assigner is presented. Furthermore, the traditional generalized spread metric (SP) is sensitive to the size of the Pareto set; a novel diversity performance metric, independent of Pareto set size, is put forward in this research. The efficacy of the new algorithm MODE-CMCS is compared with the nondominated sorting genetic algorithm II (NSGA-II) on a daily streamflow forecasting model based on support vector machine (SVM). The results verify that the performance of MODE-CMCS is superior to NSGA-II for automatic calibration of hydrologic models.
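The nondomination idea underlying both MODE-CMCS and NSGA-II is small enough to state directly: solution a dominates b (for minimization) if a is no worse in every objective and strictly better in at least one, and the Pareto front is the set of undominated solutions. A minimal sketch (function names chosen here for illustration):

```python
def dominates(a, b):
    """True if a dominates b under minimization of all objectives."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the nondominated subset of a list of objective tuples."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Five candidate parameter sets scored on two error metrics (lower is better).
candidates = [(1, 4), (2, 2), (4, 1), (3, 3), (2, 5)]
front = pareto_front(candidates)
```

Evolutionary algorithms like NSGA-II layer this test with ranking and crowding-distance selection, but the dominance relation is the core primitive.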

  10. Fertilizer Induced Nitrate Pollution in RCW: Calibration of the DNDC Model

    Science.gov (United States)

    El Hailouch, E.; Hornberger, G.; Crane, J. W.

    2012-12-01

    Fertilizer is widely used among urban and suburban households due to the socially driven attention of homeowners to lawn appearance. With high nitrogen content, fertilizer considerably impacts the environment through the emission of the highly potent greenhouse gas nitrous oxide and the leaching of nitrate. Nitrate leaching is particularly important because fertilizer-sourced nitrate that is partially leached into soil causes groundwater pollution. In an effort to model the effect of fertilizer application on the environment, the geochemical DeNitrification-DeComposition (DNDC) model was previously developed to quantitatively measure the effects of fertilizer use. The purpose of this study is to use this model more effectively on a large scale through a measurement-based calibration. For this reason, leaching was measured and studied on 12 sites in the Richland Creek Watershed (RCW). Information about the fertilization and irrigation regimes of these sites was collected, along with lysimeter readings that gave nitrate fluxes in the soil. A study of the amount and variation in nitrate leaching with respect to the varying geographical locations, time of the year, and fertilization and irrigation regimes has led to a better understanding of the driving forces behind nitrate leaching. Quantifying the influence of each of these parameters allows for a more accurate calibration of the model, thus permitting use that extends beyond the RCW. Measurement of nitrate leaching on a statewide or nationwide level in turn will help guide efforts in the reduction of groundwater pollution caused by fertilizer.

  11. Electronic transport in VO2—Experimentally calibrated Boltzmann transport modeling

    International Nuclear Information System (INIS)

    Kinaci, Alper; Rosenmann, Daniel; Chan, Maria K. Y.; Kado, Motohisa; Ling, Chen; Zhu, Gaohua; Banerjee, Debasish

    2015-01-01

    Materials that undergo metal-insulator transitions (MITs) are under intense study, because the transition is scientifically fascinating and technologically promising for various applications. Among these materials, VO2 has served as a prototype due to its favorable transition temperature. While the physical underpinnings of the transition have been heavily investigated experimentally and computationally, quantitative modeling of electronic transport in the two phases has yet to be undertaken. In this work, we establish a density-functional-theory (DFT)-based approach with Hubbard U correction (DFT + U) to model electronic transport properties in VO2 in the semiconducting and metallic regimes, focusing on band transport using the Boltzmann transport equations. We synthesized high quality VO2 films and measured the transport quantities across the transition, in order to calibrate the free parameters in the model. We find that the experimental calibration of the Hubbard correction term can efficiently and adequately model the metallic and semiconducting phases, allowing for further computational design of MIT materials for desirable transport properties

  13. Three Different Ways of Calibrating Burger's Contact Model for Viscoelastic Model of Asphalt Mixtures by Discrete Element Method

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Stang, Henrik

    2016-01-01

    In this paper the viscoelastic behavior of asphalt mixture was investigated by employing a three-dimensional discrete element method. Combined with Burger's model, three contact models were used for the construction of a constitutive asphalt mixture model with viscoelastic properties… modulus. Three different approaches have been used and compared for calibrating the Burger's contact model. Values of the dynamic modulus and phase angle of asphalt mixtures were predicted by conducting DE simulation under dynamic strain control loading. The excellent agreement between the predicted…

  14. Econometrically calibrated computable general equilibrium models: Applications to the analysis of energy and climate politics

    Science.gov (United States)

    Schu, Kathryn L.

    Economy-energy-environment models are the mainstay of economic assessments of policies to reduce carbon dioxide (CO2) emissions, yet their empirical basis is often criticized as being weak. This thesis addresses these limitations by constructing econometrically calibrated models in two policy areas. The first is a 35-sector computable general equilibrium (CGE) model of the U.S. economy which analyzes the uncertain impacts of CO2 emission abatement. Econometric modeling of sectors' nested constant elasticity of substitution (CES) cost functions based on a 45-year price-quantity dataset yields estimates of capital-labor-energy-material input substitution elasticities and biases of technical change that are incorporated into the CGE model. I use the estimated standard errors and variance-covariance matrices to construct the joint distribution of the parameters of the economy's supply side, which I sample to perform Monte Carlo baseline and counterfactual runs of the model. The resulting probabilistic abatement cost estimates highlight the importance of the uncertainty in baseline emissions growth. The second model is an equilibrium simulation of the market for new vehicles which I use to assess the response of vehicle prices, sales and mileage to CO2 taxes and increased corporate average fuel economy (CAFE) standards. I specify an econometric model of a representative consumer's vehicle preferences using a nested CES expenditure function which incorporates mileage and other characteristics in addition to prices, and develop a novel calibration algorithm to link this structure to vehicle model supplies by manufacturers engaged in Bertrand competition. CO2 taxes' effects on gasoline prices reduce vehicle sales and manufacturers' profits if vehicles' mileage is fixed, but these losses shrink once mileage can be adjusted. Accelerated CAFE standards induce manufacturers to pay fines for noncompliance rather than incur the higher costs of radical mileage improvements

  15. Development of Camera Model and Geometric Calibration/validation of Xsat IRIS Imagery

    Science.gov (United States)

    Kwoh, L. K.; Huang, X.; Tan, W. J.

    2012-07-01

    XSAT, launched on 20 April 2011, is the first micro-satellite designed and built in Singapore. It orbits the Earth at an altitude of 822 km in a sun-synchronous orbit. The satellite carries a multispectral camera, IRIS, with three spectral bands at 12 m resolution: 0.52~0.60 µm (green), 0.63~0.69 µm (red) and 0.76~0.89 µm (NIR). In the design of the IRIS camera, the three bands were acquired by three lines of CCDs (NIR, red and green). These CCDs were physically separated in the focal plane and their first pixels were not absolutely aligned. The micro-satellite platform was also not stable enough to allow co-registration of the three bands with a simple linear transformation. In the camera model developed, this platform instability was compensated with 3rd- to 4th-order polynomials for the satellite's roll, pitch and yaw attitude angles. With the camera model, camera parameters such as the band-to-band separations, the alignment of the CCDs relative to each other, and the focal length of the camera can be validated or calibrated. The results of calibration with more than 20 images showed that the band-to-band along-track separations agreed well with the pre-flight values provided by the vendor (0.093° and 0.046° for the NIR vs red and green vs red CCDs, respectively). The cross-track alignments were 0.05 pixels and 5.9 pixels for the NIR vs red and green vs red CCDs, respectively. The focal length was found to be shorter by about 0.8%; this was attributed to the lower temperature at which XSAT is currently operating. With the calibrated parameters and the camera model, a geometric level 1 multispectral image with RPCs can be generated and, if required, orthorectified imagery can also be produced.

  16. Comparison of different multi-objective calibration criteria using a conceptual rainfall-runoff model of flood events

    Directory of Open Access Journals (Sweden)

    R. Moussa

    2009-04-01

    A conceptual lumped rainfall-runoff flood event model was developed and applied to the Gardon catchment located in Southern France, and various single-objective and multi-objective functions were used for its calibration. The model was calibrated on 15 events and validated on 14 others. The results of both the calibration and validation phases are compared on the basis of their performance with regard to six criteria: three global criteria and three relative criteria representing volume, peakflow, and the root mean square error. The first type of criteria gives more weight to large events whereas the second considers all events to be of equal weight. The results show that the calibrated parameter values depend on the type of criteria used. Significant trade-offs are observed between the different objectives: no unique set of parameters is able to satisfy all objectives simultaneously. Instead, the solution to the calibration problem is given by a set of Pareto optimal solutions. From this set of optimal solutions, a balanced aggregated objective function is proposed, as a compromise between up to three objective functions. The single-objective and multi-objective calibration strategies are compared both in terms of parameter variation bounds and simulation quality. The results of this study indicate that two well chosen and non-redundant objective functions are sufficient to calibrate the model and that the use of three objective functions does not necessarily yield different results. The problems of non-uniqueness in model calibration, and the choice of adequate objective functions for flood event models, emphasise the importance of the modeller's intervention. The recent advances in automatic optimisation techniques do not minimise the user's responsibility, who has to choose multiple criteria based on the aims of the study, his appreciation of the errors induced by data and model structure and his knowledge of the…
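The volume, peakflow and RMSE-type criteria discussed above have standard closed forms. The sketch below uses textbook definitions (Nash-Sutcliffe efficiency plus relative volume and peak errors); these are illustrative and not necessarily the exact formulations used in the paper.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, < 0 is worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def volume_error(obs, sim):
    """Relative error on total flow volume (signed)."""
    return (np.sum(sim) - np.sum(obs)) / np.sum(obs)

def peak_error(obs, sim):
    """Relative error on the peak discharge (signed)."""
    return (np.max(sim) - np.max(obs)) / np.max(obs)

observed = [1.0, 3.0, 7.0, 4.0, 2.0]           # toy event hydrograph
simulated = [1.1 * q for q in observed]         # uniformly 10% too high
```

A calibration that minimizes only `nse` errors can leave a systematic bias visible in `volume_error`, which is exactly the kind of trade-off the multi-objective comparison exposes.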

  17. Incorporation of sedimentological data into a calibrated groundwater flow and transport model

    International Nuclear Information System (INIS)

    Williams, N.J.; Young, S.C.; Barton, D.H.; Hurst, B.T.

    1997-01-01

    Analysis suggests that a high hydraulic conductivity (K) zone is associated with a former river channel at the Portsmouth Gaseous Diffusion Plant (PORTS). Two-dimensional (2-D) and three-dimensional (3-D) groundwater flow models were developed based on a sedimentological model to demonstrate the performance of a horizontal well for plume capture. The model produced a flow field with magnitudes and directions consistent with flow paths inferred from historical trichloroethylene (TCE) plume data. The most dominant feature affecting the well's performance was the presence of preferential high- and low-K zones. Based on results from the calibrated flow and transport model, a passive groundwater collection system was designed and built. Initial flow rates and concentrations measured from a gravity-drained horizontal well agree closely with predicted values

  18. Bayesian calibration of thermodynamic parameters for geochemical speciation modeling of cementitious materials

    International Nuclear Information System (INIS)

    Sarkar, S.; Kosson, D.S.; Mahadevan, S.; Meeussen, J.C.L.; Sloot, H. van der; Arnold, J.R.; Brown, K.G.

    2012-01-01

    Chemical equilibrium modeling of cementitious materials requires aqueous–solid equilibrium constants of the controlling mineral phases (Ksp) and the available concentrations of primary components. Inherent randomness of the input and model parameters, experimental measurement error, the assumptions and approximations required for numerical simulation, and inadequate knowledge of the chemical process contribute to uncertainty in model prediction. A numerical simulation framework is developed in this paper to assess uncertainty in Ksp values used in geochemical speciation models. A Bayesian statistical method is used in combination with an efficient, adaptive Metropolis sampling technique to develop probability density functions for Ksp values. One set of leaching experimental observations is used for calibration and another set is used for comparison to evaluate the applicability of the approach. The estimated probability distributions of Ksp values can be used in Monte Carlo simulation to assess uncertainty in the behavior of aqueous–solid partitioning of constituents in cement-based materials.
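The adaptive Metropolis sampling used above builds on the plain Metropolis random walk, which is compact enough to sketch. This is a generic illustration, not the paper's adaptive implementation; the function name and the Gaussian toy posterior are assumptions made here.

```python
import numpy as np

def metropolis(log_post, x0, n, step, rng):
    """Plain random-walk Metropolis sampler for a 1-D log-posterior."""
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n):
        x_prop = x + step * rng.standard_normal()     # symmetric proposal
        lp_prop = log_post(x_prop)
        if np.log(rng.random()) < lp_prop - lp:       # accept with prob min(1, ratio)
            x, lp = x_prop, lp_prop
        samples.append(x)
    return np.array(samples)

# Toy target: a Gaussian "posterior" for log10(Ksp)-like parameter, mean 2, sd 0.5.
rng = np.random.default_rng(0)
log_post = lambda x: -0.5 * ((x - 2.0) / 0.5) ** 2
samples = metropolis(log_post, 0.0, 4000, 1.0, rng)
```

The resulting chain (after burn-in) approximates the posterior density; adaptive variants tune `step` on the fly to keep acceptance rates efficient.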

  19. Calibration of a gamma spectrometer for natural radioactivity measurement. Experimental measurements and Monte Carlo modelling

    International Nuclear Information System (INIS)

    Courtine, Fabien

    2007-03-01

    The thesis proceeded in the context of dating by thermoluminescence. This method requires laboratory measurements of the natural radioactivity, for which we have been using a germanium spectrometer. To refine its calibration, we modelled the spectrometer using a Monte Carlo computer code, Geant4. We developed a geometrical model which takes into account the presence of inactive zones and zones of poor charge collection within the germanium crystal. The parameters of the model were adjusted by comparison with experimental results obtained with a 137Cs source. It appeared that the form of the inactive zones is less simple than presented in the specialized literature. This model was extended to the case of a more complex source, with cascade effects and angular correlations between photons: 60Co. Lastly, applied to extended sources, it gave correct results and allowed us to validate the simulation of matrix effects. (author)

  20. A multimethod Global Sensitivity Analysis to aid the calibration of geomechanical models via time-lapse seismic data

    Science.gov (United States)

    Price, D. C.; Angus, D. A.; Garcia, A.; Fisher, Q. J.; Parsons, S.; Kato, J.

    2018-03-01

    Time-lapse seismic attributes are used extensively in the history matching of production simulator models. However, although they are proven to contain information regarding production-induced stress change, they are typically used only loosely (i.e. qualitatively) to calibrate geomechanical models. In this study we conduct a multimethod Global Sensitivity Analysis (GSA) to assess the feasibility of, and to aid, the quantitative calibration of geomechanical models via near-offset time-lapse seismic data, specifically the calibration of the mechanical properties of the overburden. Via the GSA, we analyse the near-offset overburden seismic traveltimes from over 4000 perturbations of a Finite Element (FE) geomechanical model of a typical High Pressure High Temperature (HPHT) reservoir in the North Sea. We find that, out of an initially large set of material properties, the near-offset overburden traveltimes are primarily affected by Young's modulus and the effective stress (i.e. Biot) coefficient. The unexpected significance of the Biot coefficient highlights the importance of modelling fluid flow and pore pressure outside of the reservoir. The FE model is complex and highly nonlinear; multiple combinations of model parameters can yield equally plausible model realizations. Consequently, numerical calibration via a large number of random model perturbations is infeasible. However, the significant differences in traveltime results suggest that more sophisticated calibration methods could be feasible for finding numerous suitable solutions. The results of the time-varying GSA demonstrate how acquiring multiple vintages of time-lapse seismic data can be advantageous. However, they also suggest that significant overburden near-offset seismic time-shifts, useful for model calibration, may take up to 3 years after the start of production to manifest. Due to the nonlinearity of the model behaviour, similar uncertainty in the reservoir mechanical properties appears to influence overburden …
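
The abstract does not spell out which GSA estimator was used; a common variance-based choice is the first-order Sobol' index, sketched below with the pick-freeze (Saltelli) estimator on a toy linear response standing in for the traveltime model (all coefficients and variable names are illustrative assumptions).

```python
import numpy as np

def sobol_first_order(model, d, n=20000, seed=0):
    """First-order Sobol' indices via the pick-freeze (Saltelli 2010) estimator:
    S_i ~ mean(f(B) * (f(A_B^i) - f(A))) / Var(f(A)), where A_B^i is the
    sample matrix A with column i swapped in from the independent matrix B."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = model(A), model(B)
    var = fA.var()
    s = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        s[i] = np.mean(fB * (model(ABi) - fA)) / var
    return s

def toy_response(x):
    # Stand-in for the overburden traveltime response: a weighted sum of
    # normalised Young's modulus, Biot coefficient and a third nuisance input.
    return 4.0 * x[:, 0] + 2.0 * x[:, 1] + 1.0 * x[:, 2]

indices = sobol_first_order(toy_response, d=3)
# For this linear model the exact first-order indices are [16, 4, 1] / 21.
```

For a real FE model each call to `model` is one forward simulation, so a full Saltelli design costs n·(d + 2) runs — which is why a budget of ~4000 perturbations constrains how many parameters can be screened at once.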

  1. Modeling and control of temperature of heat-calibration wind tunnel

    Directory of Open Access Journals (Sweden)

    Li Yunhua

    2012-01-01

    This paper investigates temperature control in a heated-airflow wind tunnel used for sensor temperature calibration and thermal-strength experiments. First, a mathematical model was established to describe the dynamic characteristics of the fuel supply system, which is based on a pump driven by a variable-frequency drive. Then, building on classical cascade control, an improved control law combining a Smith predictor with a fuzzy proportional-integral-derivative (PID) controller was proposed. Simulation results show that the proposed strategy outperforms an ordinary PID cascade control strategy.
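
The abstract names a Smith predictor wrapped around a PID-type loop; the sketch below shows just that skeleton, with a plain PI law on a first-order-plus-dead-time plant (plant constants and gains are invented for illustration, and the fuzzy tuning layer is omitted).

```python
import math

def smith_pi(setpoint=100.0, steps=600, dt=1.0):
    """PI control of a first-order-plus-dead-time plant with a Smith predictor."""
    tau, K, d = 20.0, 2.0, 5            # plant time constant, gain, dead time (steps)
    a = math.exp(-dt / tau)
    b = K * (1.0 - a)
    kp, ki = 0.4, 0.05                  # illustrative PI gains
    y = ym = integ = 0.0                # plant output; delay-free model output
    u_hist = [0.0] * d                  # controls still in transit through the dead time
    ym_hist = [0.0] * d                 # model outputs, for the delayed-model branch
    for _ in range(steps):
        # Smith predictor: feed back the delay-free model plus the model mismatch
        y_fb = ym + (y - ym_hist[0])
        e = setpoint - y_fb
        integ += e * dt
        u = kp * e + ki * integ
        y = a * y + b * u_hist[0]       # plant responds to the control from d steps ago
        u_hist = u_hist[1:] + [u]
        ym_hist = ym_hist[1:] + [ym]
        ym = a * ym + b * u             # internal model updated with the current control
    return y

final = smith_pi()
```

With a perfect internal model the feedback signal reduces to the delay-free model output, so the controller can be tuned as if the transport delay were absent — which is the point of the Smith structure.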

  2. A Monte Carlo modeling alternative for the API Gamma Ray Calibration Facility.

    Science.gov (United States)

    Galford, J E

    2017-04-01

    The gamma ray pit at the API Calibration Facility, located on the University of Houston campus, defines the API unit for natural gamma ray logs used throughout the petroleum logging industry. Future use of the facility is uncertain. An alternative method is proposed to preserve the gamma ray API unit definition as an industry standard by using Monte Carlo modeling to obtain accurate counting rate-to-API unit conversion factors for gross-counting and spectral gamma ray tool designs. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Evaluating the Efficiency of a Multi-core Aware Multi-objective Optimization Tool for Calibrating the SWAT Model

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, X. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Izaurralde, R. C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Zong, Z. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Zhao, K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Thomson, A. M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2012-08-20

    The efficiency of calibrating physically based, complex hydrologic models is a major concern when applying those models to understand and manage natural and human activities that affect watershed systems. In this study, we developed a multi-core aware multi-objective evolutionary optimization algorithm (MAMEOA) to improve the efficiency of calibrating the widely used Soil and Water Assessment Tool (SWAT) watershed model. Test results show that MAMEOA can reduce the time required to calibrate SWAT by about 1-9%, 26-51%, and 39-56% relative to the sequential method on dual-core, quad-core, and eight-core machines, respectively. The potential and limitations of MAMEOA for calibrating SWAT are discussed. MAMEOA is open source software.
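
MAMEOA's internals are not given in the abstract; the sketch below shows only the pattern it exploits — concurrent evaluation of a population of candidate parameter sets — using a toy objective and a thread pool (a process pool would be the multi-core choice for a CPU-bound model such as SWAT; all names and values here are illustrative).

```python
import random
from concurrent.futures import ThreadPoolExecutor

def objective(params):
    """Toy calibration objective: squared distance to a hypothetical optimum.
    A real use would run the watershed model and score it against gauge data."""
    target = (0.3, 0.7)
    return sum((p - t) ** 2 for p, t in zip(params, target))

def calibrate(generations=30, pop_size=32, seed=1):
    rng = random.Random(seed)
    pop = [(rng.random(), rng.random()) for _ in range(pop_size)]
    best = None
    with ThreadPoolExecutor(max_workers=4) as pool:
        for _ in range(generations):
            scores = list(pool.map(objective, pop))   # parallel evaluation step
            ranked = sorted(zip(scores, pop))
            best = ranked[0]
            parents = [p for _, p in ranked[: pop_size // 4]]
            # mutate the best quarter to form the next generation
            pop = [tuple(min(1.0, max(0.0, x + rng.gauss(0.0, 0.05))) for x in p)
                   for p in parents for _ in range(4)]
    return best

score, params = calibrate()
```

Because evaluations within a generation are independent, the speed-up is bounded by the per-generation synchronisation, which is consistent with the sub-linear gains reported above.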

  4. Models of Regional Modernization of the Donbass Region

    Directory of Open Access Journals (Sweden)

    Konstantin Viktorovich Pavlov

    2016-03-01

    The article presents a methodical approach to assessing, on the basis of various indicators, the level of development of post-industrial and neo-industrial models of economic modernization at the regional level. The proposed approach is tested using statistical materials characterizing the state of the economy in the regions of Donbass. The development strategy for the Donbass areas and their industrial cities (setting aside the military operations on this territory) would have to be based on the model of neo-industrialization, which assumes development of the high-tech industrial sphere, automation and computerization of productive forces, and the replacement of physical labor with intellectual labor, capable of radically changing the nature of work and the structure of the labor balance of this macro-region. Prior to the beginning of the political conflict in Ukraine, the branches and sectors of the socio-economic complex of the Donbass region that drew on the achievements of fundamental and applied science and of engineering and design thought to increase the share of automated, computerized and mechanized workplaces had the potential for development. The authors also draw attention to the need to study the agglomeration effect arising from the interaction of the cities and areas of the Donbass euroregion. This is connected, first of all, with the development of vertical and horizontal mechanisms for actively using the potential of so-called "core cities," which are capable of having a catalyzing impact on production, social infrastructure, and the creation and development of promising new branches in the satellite cities.

  5. Calibration of CR-39 plastic detectors in various modes and radon measurement in the north-western region of Bangladesh

    International Nuclear Information System (INIS)

    Islam, G.S.; Islam, M.A.; Haque, A.K.F.

    1998-04-01

    Solid-state track detectors have been used extensively for the measurement of time-integrated radon levels in dwellings under different conditions. CR-39 plastic detectors were calibrated in bare mode as well as in cup-with-membrane mode, using a monodisperse aerosol 0.2 μm in size in an exposure chamber, to find the relationship between track densities and both the radon concentration and the potential alpha energy concentration (working level, WL) of radon. Measurements of indoor radon and radon daughter concentrations were performed in houses in the north-western region of Bangladesh. In total, 163 detectors were placed for measurement of indoor radon activities and 230 detectors for measurement of radon daughter concentrations. To study the underground radon activity, 114 CR-39 detectors in cylinders were used. The indoor radon activity in Naogaon was, in general, found to be higher than that in Rajshahi, and the working levels in mud-built houses were greater than those in brick-built houses. The underground radon activity of Naogaon was found to be six times higher than that of Rajshahi. No direct correlation was observed between the underground and indoor radon activity. The average radon activity and working level for the north-western zone of Bangladesh are found to be 91 Bq m⁻³ and 16 mWL, respectively. (author)
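
The end product of such a calibration is a factor converting track density to radon exposure; a minimal conversion sketch, with a made-up calibration factor (not the paper's value):

```python
def radon_concentration(track_density, exposure_days, k=0.05):
    """Convert net track density (tracks/cm^2) into mean radon activity (Bq/m^3).

    k is the calibration factor in tracks cm^-2 per (Bq m^-3 day), determined
    by exposing detectors in a chamber of known radon concentration.
    The value used here is purely illustrative."""
    return track_density / (k * exposure_days)

# e.g. 410 net tracks/cm^2 accumulated over a 90-day indoor exposure
c = radon_concentration(410.0, 90.0)
```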

  6. Calibration of the century, apsim and ndicea models of decomposition and n mineralization of plant residues in the humid tropics

    Directory of Open Access Journals (Sweden)

    Alexandre Ferreira do Nascimento

    2011-06-01

    The aim of this study was to calibrate the CENTURY, APSIM and NDICEA simulation models for estimating decomposition and N mineralization rates of plant organic materials (Arachis pintoi, Calopogonium mucunoides, Stizolobium aterrimum, Stylosanthes guyanensis) over 360 days in the Atlantic rainforest biome of Brazil. The models' default settings overestimated the decomposition and N mineralization of plant residues, underlining the fact that the models must be calibrated for use under tropical conditions. For example, the APSIM model simulated the decomposition of the Stizolobium aterrimum and Calopogonium mucunoides residues with error rates of 37.62 and 48.23%, respectively, relative to the observed data, and was the least accurate model in the absence of calibration. At the default settings, the NDICEA model produced error rates of 10.46 and 14.46% and the CENTURY model 21.42 and 31.84%, respectively, for Stizolobium aterrimum and Calopogonium mucunoides residue decomposition. After calibration, the models estimated decomposition and N mineralization with a high level of accuracy, with error rates of less than 20%. The calibrated NDICEA model showed the highest accuracy, followed by APSIM and CENTURY. All models performed poorly in the first few months of decomposition and N mineralization, indicating the need for an additional parameter for initial microorganism growth on the residues that would take into account the effect of leaching due to rainfall.

  7. Calibration and validation of coarse-grained models of atomic systems: application to semiconductor manufacturing

    Science.gov (United States)

    Farrell, Kathryn; Oden, J. Tinsley

    2014-07-01

    Coarse-grained models of atomic systems, created by aggregating groups of atoms into molecules to reduce the number of degrees of freedom, have been used for decades in important scientific and technological applications. In recent years, interest has arisen in developing a more rigorous theory for coarse graining and in assessing the predictivity of coarse-grained models. In this work, Bayesian methods for the calibration and validation of coarse-grained models of atomistic systems in thermodynamic equilibrium are developed. For specificity, only configurational models of systems in canonical ensembles are considered. Among the major challenges in validating coarse-grained models are (1) the development of validation processes that lead to information essential in establishing confidence in the model's ability to predict key quantities of interest and (2), above all, the determination of the coarse-grained model itself; that is, the characterization of the molecular architecture and the choice of interaction potentials, and thus parameters, which best fit available data. The all-atom model is treated as the "ground truth," and it provides the basis with respect to which properties of the coarse-grained model are compared. This base all-atom model is characterized by an appropriate statistical mechanics framework, in this work by canonical ensembles involving only configurational energies. The all-atom model thus supplies data for the Bayesian calibration and validation of the molecular model. To address the first challenge, we develop priors based on the maximum entropy principle and likelihood functions based on Gaussian approximations of the uncertainties in the parameter-to-observation error. To address challenge (2), we introduce the notion of model plausibilities as a means for model selection. This methodology provides a powerful approach toward constructing coarse-grained models which are most plausible for given all-atom data. We demonstrate the theory and …
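
The calibration machinery described — a Gaussian likelihood on the parameter-to-observation error with a maximum-entropy (here: flat, bounded) prior — can be exercised with a minimal Metropolis sampler on synthetic data; everything below is an illustrative stand-in, not the authors' models.

```python
import math, random

def log_post(theta, data, sigma=1.0):
    """Gaussian likelihood for observation error; flat (max-entropy) prior on [0, 10]."""
    if not 0.0 <= theta <= 10.0:
        return -math.inf
    return -sum((d - theta) ** 2 for d in data) / (2.0 * sigma ** 2)

def metropolis(data, n=5000, step=0.3, seed=2):
    rng = random.Random(seed)
    theta = 5.0
    lp = log_post(theta, data)
    samples = []
    for _ in range(n):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = log_post(prop, data)
        if math.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = prop, lp_prop
        samples.append(theta)
    return samples

# Synthetic "all-atom" observations generated from a known ground truth
random.seed(0)
truth = 3.0
data = [truth + random.gauss(0.0, 1.0) for _ in range(50)]
post = metropolis(data)
mean = sum(post[1000:]) / len(post[1000:])   # posterior mean after burn-in
```

In the paper's setting the "data" would be observables computed from the all-atom ensemble, and the sampled parameter a coefficient of the coarse-grained interaction potential.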

  8. UNSAT-H infiltration model calibration at the Subsurface Disposal Area, Idaho National Engineering Laboratory

    International Nuclear Information System (INIS)

    Martian, P.

    1995-10-01

    Soil moisture monitoring data from the expanded neutron probe monitoring network located at the Subsurface Disposal Area (SDA) of the Idaho National Engineering Laboratory (INEL) were used to calibrate numerical infiltration models for 15 locations within and near the SDA. These calibrated models were then used to simulate infiltration into the SDA surficial sediments and underlying basalts for the entire operational period of the SDA (1952-1995). The purpose of performing the simulations was to obtain a time-variant infiltration source term for future subsurface pathway modeling efforts as part of baseline risk assessments or performance assessments. The simulation results also provided estimates of the average recharge rate for the simulation period and insight into infiltration patterns at the SDA. These results suggest that the average aquifer recharge rate below the SDA may be at least 8 cm/yr and as high as 12 cm/yr; these values represent 38 and 57% of the average annual precipitation at the INEL, respectively. The simulation results also indicate that the maximum evaporative depth may vary between 28 and 148 cm and is highly dependent on localized lithology within the SDA.

  9. On the Free Vibration Modeling of Spindle Systems: A Calibrated Dynamic Stiffness Matrix

    Directory of Open Access Journals (Sweden)

    Omar Gaber

    2014-01-01

    The effect of bearings on the vibrational behavior of machine tool spindles is investigated. This is done through the development of a calibrated dynamic stiffness matrix (CDSM) method, where the bearings' flexibility is represented by massless linear spring elements with tuneable stiffness. A dedicated MATLAB code is written to develop and assemble the element stiffness matrices for the system's multiple components and to apply the boundary conditions. The developed method is applied to an illustrative spindle system. When the spindle bearings are modeled as simply supported boundary conditions, the DSM model results in a fundamental frequency much higher than the system's nominal value. The simply supported boundary conditions are then replaced by linear spring elements, and the spring constants are adjusted such that the resulting calibrated CDSM model reproduces the nominal fundamental frequency of the spindle system. The spindle frequency results are also validated against experimental data. The proposed method can be effectively applied to predict the vibration characteristics of spindle systems supported by bearings.
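
The calibration step — adjusting a support-spring constant until the model's fundamental frequency matches the nominal value — can be sketched on a toy two-degree-of-freedom system with a bisection search (all masses, stiffnesses and the target frequency are invented, not a real spindle):

```python
import numpy as np

def fundamental_hz(k_bearing, k_shaft=2.0e7, m=(4.0, 6.0)):
    """Lowest natural frequency (Hz) of a toy 2-DOF model: one lumped mass on
    the bearing spring, coupled to a second mass through the shaft stiffness."""
    K = np.array([[k_bearing + k_shaft, -k_shaft],
                  [-k_shaft, k_shaft]])
    # Generalized problem K v = w^2 M v, symmetrised as M^{-1/2} K M^{-1/2}
    Mi = np.diag(1.0 / np.sqrt(np.array(m)))
    w2 = np.linalg.eigvalsh(Mi @ K @ Mi)     # eigenvalues in ascending order
    return np.sqrt(w2[0]) / (2.0 * np.pi)

def calibrate_bearing(f_target, lo=1.0e5, hi=1.0e9):
    """Bisect on bearing stiffness until the model hits the nominal frequency."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if fundamental_hz(mid) < f_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

k_cal = calibrate_bearing(f_target=250.0)
f_check = fundamental_hz(k_cal)
```

Because adding bearing stiffness can only raise the natural frequencies, the lowest frequency is monotone in `k_bearing` on this toy model, so bisection is guaranteed to converge.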

  10. A Novel Error Model of Optical Systems and an On-Orbit Calibration Method for Star Sensors

    Directory of Open Access Journals (Sweden)

    Shuang Wang

    2015-12-01

    In order to improve the on-orbit measurement accuracy of star sensors, the effects of image-plane rotary error, image-plane tilt error and distortions of optical systems resulting from the on-orbit thermal environment were studied. Since these issues affect the precision of star image point positions, a novel measurement error model building on the traditional error model is explored. Due to the orthonormal characteristics of the image-plane rotary-tilt errors and the strong nonlinearity among the error parameters, it is difficult to calibrate all the parameters simultaneously. To overcome this difficulty, a modified two-step calibration method for the new error model, based on the Extended Kalman Filter (EKF) and the Least Squares Method (LSM), is presented. The former calibrates the principal point drift, focal length error and distortions of the optical system, while the latter estimates the image-plane rotary-tilt errors. With this calibration method, the star image point position error caused by the above effects is greatly reduced, from 15.42% to 1.389%. Finally, simulation results demonstrate that the presented measurement error model for star sensors has higher precision. Moreover, the proposed two-step method effectively calibrates the model error parameters, and the calibration precision of on-orbit star sensors is also markedly improved.

  11. Energy and externality environmental regional model

    International Nuclear Information System (INIS)

    Baldi, L.; Bianchi, A.; Peri, M.

    2000-01-01

    The use of environmental externalities in both territorial management and the steering of energy and environmental policy faces difficulties arising from their calculation. The MACBET regional model, which has been constructed for Lombardy, is a first attempt to overcome them. MACBET is a calculation model for assessing the environmental and employment externalities connected with energy use.

  12. Roadway management plan based on rockfall modelling calibration and validation. Application along the Ma-10 road in Mallorca (Spain)

    Science.gov (United States)

    Mateos, Rosa Maria; Garcia, Inmaculada; Reichenbach, Paola; Herrera, Gerardo; Sarro, Roberto; Rius, Joan; Aguilo, Raul

    2016-04-01

    The Tramuntana range, in the northwestern sector of the island of Mallorca (Spain), is frequently affected by rockfalls, which have caused significant damage, mainly along the road network. The Ma-10 road constitutes the main transportation corridor in the range, with heavy traffic estimated at 7,200 vehicles per day on average. With a length of 111 km and a tortuous path, the road links 12 municipalities and constitutes a strategic route on the island for many tourist resorts. For the period from 1995 to the present, 63 rockfalls have affected the Ma-10 road, with volumes ranging from 0.3 m3 to 30,000 m3. Fortunately, no fatalities occurred, but numerous blockages of the road took place, causing significant economic losses valued at around 11 million euros (Mateos et al., 2013). In this work we present the procedure applied to calibrate and validate rockfall modelling in the Tramuntana region, using 103 cases from the available detailed rockfall inventory (Mateos, 2006). We have exploited STONE (Guzzetti et al., 2002), GIS-based rockfall simulation software which computes 2D and 3D rockfall trajectories starting from a DTM and maps of the dynamic rolling friction coefficient and of the normal and tangential energy restitution coefficients. The appropriate identification of these parameters determines the accuracy of the simulation. To calibrate them, we selected 40 rockfalls along the range covering a wide variety of outcropping lithologies. Coefficient values were varied over numerous runs in order to select those for which the extent and shape of the simulation matched the field mapping. The best results were summarized as average statistical values for each parameter and each geotechnical unit, determining that modal values represent the data most precisely. Initially, for the validation stage, 10 well-known rockfalls used in the calibration phase were selected. Confidence tests have been applied …
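
STONE's coefficient maps cannot be reproduced here; the sketch below shows only the calibration pattern — sweep a friction coefficient in a toy sliding-block runout model and keep the value whose simulated runout best matches a mapped runout length (all numbers invented).

```python
def runout_length(mu, drop_height=60.0):
    """Toy sliding-block model: the energy-line (reach-angle) rule L = H / mu."""
    return drop_height / mu

def calibrate_friction(observed_runout, candidates):
    """Keep the candidate whose simulated runout best matches the field mapping."""
    return min(candidates, key=lambda mu: abs(runout_length(mu) - observed_runout))

grid = [0.1 + 0.05 * i for i in range(18)]      # friction coefficients 0.10 .. 0.95
best_mu = calibrate_friction(observed_runout=120.0, candidates=grid)
```

STONE's actual calibration varies three coefficient maps simultaneously and scores the match of simulated extent and shape against mapped deposits; the one-parameter sweep above is only the shape of that loop.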

  13. Electricity Price Forecast Using Combined Models with Adaptive Weights Selected and Errors Calibrated by Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Da Liu

    2013-01-01

    A combined forecast, with weights adaptively selected and errors calibrated by a Hidden Markov Model (HMM), is proposed to model the day-ahead electricity price. First, several individual models were built to forecast the electricity price separately. Then the validation errors from every individual model were transformed into two discrete sequences, an emission sequence and a state sequence, to build the HMM, obtaining a transition matrix and an emission matrix representing the forecasting-ability state of the individual models. The combining weights of the individual models were determined from the HMM state transition matrices and from each model's ratio of best-predicted samples in the validation set. The individual forecasts were then averaged, using the weights obtained above, to produce the combined forecast. The residuals of the combined forecast were calibrated with the possible error calculated from the emission matrix of the HMM. A case study of the day-ahead electricity market of Pennsylvania-New Jersey-Maryland (PJM), USA, suggests that the proposed method outperforms individual price-forecasting techniques such as support vector machines (SVM), generalized regression neural networks (GRNN), day-ahead modeling, and self-organized map (SOM) similar-days modeling.
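
The HMM weighting itself is involved; as a simplified stand-in, the sketch below combines individual forecasts with weights proportional to inverse validation error, which captures the core idea that better-performing models earn larger weights (all numbers hypothetical).

```python
def combine_forecasts(forecasts, validation_errors):
    """Weight individual model forecasts by inverse validation error.

    A simplified stand-in for the paper's HMM-derived weights: models that
    were more accurate on the validation set get proportionally more weight."""
    inv = [1.0 / e for e in validation_errors]
    total = sum(inv)
    weights = [w / total for w in inv]
    return sum(w * f for w, f in zip(weights, forecasts)), weights

# Three hypothetical day-ahead price forecasts ($/MWh) and their validation MAE
price, w = combine_forecasts([52.0, 55.0, 49.0], [2.0, 4.0, 8.0])
```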

  14. Transition Radiation Tracker calibration, searches beyond the Standard Model and multiparticle correlations in ATLAS

    CERN Document Server

    Alonso, Alejandro; Torsten, Akesson

    This thesis covers two aspects of my research on proton-proton collisions in the ATLAS experiment at the LHC. The first part focuses on understanding and developing a calibration system to obtain the best possible charged-particle reconstruction in the Transition Radiation Tracker (TRT). The method explained in this thesis is the current calibration technique used in the TRT and is applied to all the data collected by ATLAS. Thanks to the method developed, the detector design resolution is achieved, and even improved upon in the central region of the TRT. In the second part, three different analyses are presented. Due to my interest in tracking, and thanks to the new energy range available at the LHC, the first analysis is a study of multiparticle correlations at 900 GeV and 7 TeV, performed with the first ATLAS data collected during 2010. Two different aspects are studied: the high-order moments and an attempt to measure the normalized factorial moments …

  15. Calibration and validation of models for short-term decomposition and N mineralization of plant residues in the tropics

    Directory of Open Access Journals (Sweden)

    Alexandre Ferreira do Nascimento

    2012-12-01

    Insight into the nutrient release patterns associated with the decomposition of plant residues is important for their effective use as green manure in food production systems. This study therefore evaluated the ability of the CENTURY, APSIM and NDICEA simulation models to predict the decomposition and N mineralization of crop residues in the tropical Atlantic forest biome, Brazil. The simulation models were calibrated against measured decomposition and N mineralization rates of three types of crop residues with different chemical and biochemical composition, and were then validated for different pedo-climatic conditions and crop residues. In general, the accuracy of the decomposition and N mineralization predictions improved after calibration: overall RMSE values for the crop materials varied from 7.4 to 64.6% before model calibration, compared with 3.7 to 16.3% after calibration. Adequate calibration of the models is therefore indispensable for their use under humid tropical conditions. The NDICEA model generally outperformed the other models. However, decomposition and N mineralization were not simulated very accurately during the first 30 days of incubation, especially for easily decomposable crop residues. An additional model variable may be required to capture initial microbiological growth as affected by the moisture dynamics of the residues, as is the case in surface-residue decomposition models.
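
The RMSE-as-a-percentage metric used to compare the models before and after calibration is easy to state; a sketch with hypothetical residue-mass series:

```python
import math

def rmse_percent(simulated, observed):
    """RMSE expressed as a percentage of the observed mean — the kind of
    relative error used to compare model fits before and after calibration."""
    mse = sum((s - o) ** 2 for s, o in zip(simulated, observed)) / len(observed)
    return 100.0 * math.sqrt(mse) / (sum(observed) / len(observed))

# Hypothetical remaining residue mass (% of initial) at five sampling dates
obs = [80.0, 60.0, 45.0, 33.0, 25.0]
sim_default = [70.0, 45.0, 30.0, 20.0, 14.0]     # uncalibrated run
sim_cal = [78.0, 58.0, 46.0, 34.0, 24.0]         # calibrated run
before, after = rmse_percent(sim_default, obs), rmse_percent(sim_cal, obs)
```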

  16. Calibration of complex models through Bayesian evidence synthesis: a demonstration and tutorial

    Science.gov (United States)

    Jackson, Christopher; Jit, Mark; Sharples, Linda; DeAngelis, Daniela

    2016-01-01

    Decision-analytic models must often be informed using data which are only indirectly related to the main model parameters. The authors outline how to implement a Bayesian synthesis of diverse sources of evidence to calibrate the parameters of a complex model. A graphical model is built to represent how observed data are generated from statistical models with unknown parameters, and how those parameters are related to quantities of interest for decision-making. This forms the basis of an algorithm to estimate a posterior probability distribution, which represents the updated state of evidence for all unknowns given all data and prior beliefs. This process calibrates the quantities of interest against data and, at the same time, propagates all parameter uncertainties to the results used for decision-making. To illustrate these methods, the authors demonstrate how a previously developed Markov model for the progression of human papillomavirus (HPV16) infection was rebuilt in a Bayesian framework. Transition probabilities between states of disease severity are inferred indirectly from cross-sectional observations of the prevalence of HPV16 and HPV16-related disease by age, cervical cancer incidence, and other published information. Previously, a discrete collection of plausible scenarios had been identified, with no further indication of which of them were more plausible. Instead, the authors derive a Bayesian posterior distribution in which scenarios are implicitly weighted according to how well they are supported by the data. In particular, they emphasise the appropriate choice of prior distributions and the checking and comparison of fitted models. PMID:23886677
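
The shift from "a discrete collection of equally plausible scenarios" to data-weighted scenarios can be illustrated with a binomial likelihood: each candidate prevalence scenario is weighted by how well it explains an observed count under a uniform prior (counts and prevalences below are hypothetical).

```python
from math import comb

def scenario_weights(scenario_prevalences, k, n):
    """Posterior weights for discrete scenarios under a uniform prior:
    w_s is proportional to Binomial(k | n, p_s)."""
    like = [comb(n, k) * p ** k * (1 - p) ** (n - k) for p in scenario_prevalences]
    total = sum(like)
    return [l / total for l in like]

# Candidate prevalence scenarios scored against 12 positives in 100 samples
scenarios = [0.05, 0.10, 0.15, 0.25]
w = scenario_weights(scenarios, k=12, n=100)
```

Scenarios close to the empirical rate of 0.12 receive most of the posterior weight, which is the discrete analogue of the paper's implicit reweighting.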

  17. Calibration and Stokes Imaging with Full Embedded Element Primary Beam Model for the Murchison Widefield Array

    Science.gov (United States)

    Sokolowski, M.; Colegate, T.; Sutinjo, A. T.; Ung, D.; Wayth, R.; Hurley-Walker, N.; Lenc, E.; Pindor, B.; Morgan, J.; Kaplan, D. L.; Bell, M. E.; Callingham, J. R.; Dwarakanath, K. S.; For, Bi-Qing; Gaensler, B. M.; Hancock, P. J.; Hindson, L.; Johnston-Hollitt, M.; Kapińska, A. D.; McKinley, B.; Offringa, A. R.; Procopio, P.; Staveley-Smith, L.; Wu, C.; Zheng, Q.

    2017-11-01

    The Murchison Widefield Array (MWA), located in Western Australia, is one of the low-frequency precursors of the international Square Kilometre Array (SKA) project. In addition to pursuing its own ambitious science programme, it is also a testbed for a wide range of future SKA activities, from hardware and software to data analysis. The key science programmes for the MWA and SKA require very high dynamic ranges, which challenges calibration and imaging systems. Correct calibration of the instrument and accurate measurements of source flux densities and polarisations require precise characterisation of the telescope's primary beam. Recent results from the GaLactic and Extragalactic All-sky MWA (GLEAM) survey show that the previously implemented Average Embedded Element (AEE) model still leaves residual polarisation errors of up to 10-20% in Stokes Q. We present a new simulation-based Full Embedded Element (FEE) model, which is the most rigorous realisation yet of the MWA's primary beam model. It enables efficient calculation of the MWA beam response in arbitrary directions without the need for spatial interpolation. In the new model, every dipole in the MWA tile (4 × 4 bow-tie dipoles) is simulated separately, taking into account all mutual coupling, ground screen, and soil effects, and therefore accounts for the different properties of the individual dipoles within a tile. We have applied the FEE beam model to GLEAM observations at 200-231 MHz and used false Stokes parameter leakage as a metric to compare the models. We have determined that the FEE model reduces the magnitude and declination-dependent behaviour of false polarisation in Stokes Q and V while retaining low levels of false polarisation in Stokes U.
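
Stokes leakage as a beam-model metric can be made concrete: propagate an unpolarised source through a 2 × 2 direction-dependent Jones matrix and read off the spurious Stokes Q. The toy gain error below is an assumption for illustration, not the FEE model, and the U/V sign convention varies in the literature.

```python
import numpy as np

def measured_stokes(jones):
    """Apparent Stokes (I, Q, U, V) of a unit unpolarised source seen through
    a 2x2 Jones matrix, via the coherency matrix C = J (I/2) J^H."""
    C = jones @ (0.5 * np.eye(2)) @ jones.conj().T
    I = np.real(C[0, 0] + C[1, 1])
    Q = np.real(C[0, 0] - C[1, 1])
    U = np.real(C[0, 1] + C[1, 0])
    V = 2.0 * np.imag(C[0, 1])      # sign convention varies
    return I, Q, U, V

# A beam-model error that under-weights one dipole polarisation by 10%
J = np.diag([1.0, 0.9]).astype(complex)
I, Q, U, V = measured_stokes(J)
leak_q = Q / I      # fractional false Stokes Q
```

A 10% amplitude error in one polarisation already yields ~10% false Q, the same order as the AEE residuals quoted above, which is why the per-dipole FEE treatment matters.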

  18. Calibrating the simple biosphere model for Amazonian tropical forest using field and remote sensing data. I - Average calibration with field data

    Science.gov (United States)

    Sellers, Piers J.; Shuttleworth, W. James; Dorman, Jeff L.; Dalcher, Amnon; Roberts, John M.

    1989-01-01

    Calibration of the Sellers et al. (1986) simple biosphere (SiB) model using meteorological and hydrological measurements taken in and above the central-Amazon-basin tropical forest is described. The SiB model is a one-dimensional soil-vegetation-atmosphere model designed for use within general circulation models (GCMs), representing the vegetation cover by analogy with processes operating within a single representative plant. The experimental systems and the procedures used to obtain the field data are described, together with the specification of the physiological parameterization required to provide an average description of the data. It was found that some of the existing literature on stomatal behavior for tropical species is inconsistent with the observed behavior of the complete canopy in Amazonia, and that the rainfall interception store of the canopy is considerably smaller than originally specified in the SiB model.

  19. Measurement of oxygen extraction fraction (OEF): An optimized BOLD signal model for use with hypercapnic and hyperoxic calibration.

    Science.gov (United States)

    Merola, Alberto; Murphy, Kevin; Stone, Alan J; Germuska, Michael A; Griffeth, Valerie E M; Blockley, Nicholas P; Buxton, Richard B; Wise, Richard G

    2016-04-01

    Several techniques have been proposed to estimate relative changes in the cerebral metabolic rate of oxygen consumption (CMRO2) by exploiting combined BOLD fMRI and cerebral blood flow data in conjunction with hypercapnic or hyperoxic respiratory challenges. More recently, methods based on respiratory challenges that include both hypercapnia and hyperoxia have been developed to assess absolute CMRO2, an important parameter for understanding brain energetics. In this paper, we empirically optimize a previously presented "original calibration model" relating BOLD and blood flow signals specifically for the estimation of the oxygen extraction fraction (OEF) and absolute CMRO2. To do so, we created a set of synthetic BOLD signals using a detailed BOLD signal model to reproduce experiments incorporating hypercapnic and hyperoxic respiratory challenges at 3 T. A wide range of physiological conditions was simulated by varying the input parameter values: baseline cerebral blood volume (CBV0), baseline cerebral blood flow (CBF0), baseline oxygen extraction fraction (OEF0) and hematocrit (Hct). From the optimization of the calibration model for the estimation of OEF, and from practical considerations of hypercapnic and hyperoxic respiratory challenges, a new "simplified calibration model" is established which reduces the complexity of the original calibration model by substituting the standard parameters α and β with a single parameter θ. The optimal value of θ is determined (θ = 0.06) across a range of experimental respiratory challenges. The simplified calibration model gives estimates of OEF0 and absolute CMRO2 closer to the true values used to simulate the experimental data than those estimated using the original model with literature values of α and β. Finally, an error propagation analysis demonstrates the susceptibility of the original and simplified calibration models to measurement errors and to potential violations of the underlying assumption of isometabolism.
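
For context, the "original calibration model" in this literature is commonly written in the Davis form, with the flow and oxygen-metabolism ratios raised to the exponents α and β (reproduced here from the standard literature; the paper's simplification then replaces the two exponents with the single parameter θ, in a form not restated here):

```latex
\frac{\Delta S}{S_0} \;=\; M\left[1 -
  \left(\frac{\mathrm{CBF}}{\mathrm{CBF_0}}\right)^{\alpha-\beta}
  \left(\frac{\mathrm{CMRO_2}}{\mathrm{CMRO_{2,0}}}\right)^{\beta}\right]
```

Here M is the calibration scaling factor corresponding to the maximum possible BOLD signal increase, and the subscript 0 denotes baseline values.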

  20. Sediment plume model-a comparison between use of measured turbidity data and satellite images for model calibration.

    Science.gov (United States)

    Sadeghian, Amir; Hudson, Jeff; Wheater, Howard; Lindenschmidt, Karl-Erich

    2017-08-01

    In this study, we built a two-dimensional sediment transport model of Lake Diefenbaker, Saskatchewan, Canada. It was calibrated using measured turbidity data from stations along the reservoir and satellite images from a flood event in 2013. In June 2013, there was heavy rainfall for two consecutive days on the frozen and snow-covered ground in the higher elevations of western Alberta, Canada. The runoff from the rainfall and the melted snow caused one of the largest recorded inflows to the headwaters of the South Saskatchewan River and Lake Diefenbaker downstream. An estimated discharge peak of over 5200 m³/s arrived at the reservoir inlet with a thick sediment front within a few days. The sediment plume moved quickly through the entire reservoir and remained visible in satellite images for over 2 weeks along most of the reservoir, leading to concerns regarding water quality. The aims of this study are to compare, quantitatively and qualitatively, the efficacy of using turbidity data and satellite images for sediment transport model calibration, and to determine how accurately a sediment transport model can simulate sediment transport based on each of them. Both turbidity data and satellite images proved very useful for calibrating the sediment transport model quantitatively and qualitatively. Model predictions and turbidity measurements show that the flood water and suspended sediments entered the reservoir fairly well mixed and moved downstream as overflow with a sharp gradient at the plume front. The model results suggest that the settling and resuspension rates of sediment are directly proportional to flow characteristics and that the use of constant coefficients leads to model under- or overestimation unless more data on sediment formation become available. Hence, this study reiterates the significance of data on sediment distribution and characteristics for building a robust and reliable sediment transport model.

  1. A Calibration-Capture-Recapture Model for Inferring Natural Gas Leak Population Characteristics Using Data from Google Street View Cars

    Science.gov (United States)

    Weller, Z.; Hoeting, J.; von Fischer, J.

    2017-12-01

    Pipeline systems that distribute natural gas (NG) within cities can leak, leading to safety hazards and wasted product. Moreover, these leaks are climate-altering because NG is primarily composed of methane, a potent greenhouse gas. Scientists have recently developed an innovative method for mapping NG leak locations by installing atmospheric methane analyzers on Google Street View (GSV) cars. We develop new statistical methodology to answer key inferential questions using data collected by these mobile air monitors. The new calibration-capture-recapture (CCR) model utilizes data from controlled methane releases and data collected by GSV cars to provide inference for several desired quantities, including the number of undetected methane sources and the total methane output rate in a surveyed region. The CCR model addresses challenges associated with using a capture-recapture model to analyze data collected by a mobile detection system, including variable sampling effort and the lack of physically marked individuals. We develop a Markov chain Monte Carlo algorithm for parameter estimation and apply the CCR model to methane data collected in two U.S. cities. The CCR model provides a new framework for inferring the total number of leaks in NG distribution systems and offers critical insights for informing intelligent repair policy that is both cost-effective and environmentally friendly.
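    The capture-recapture idea at the heart of the CCR model can be illustrated with the classical two-pass Lincoln-Petersen estimator in Chapman's bias-corrected form. This is not the authors' CCR model, and all counts below are invented; it only shows how repeat detections let one infer the number of sources that were never observed.

```python
# Chapman's (bias-corrected) Lincoln-Petersen estimator: a classical
# two-pass capture-recapture calculation. This is NOT the CCR model of
# the abstract, only an illustration of its core inference: leaks seen
# on two survey passes let us estimate how many were never detected.

def chapman_estimate(n1, n2, m):
    """n1: leaks found on pass 1, n2: on pass 2, m: found on both."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Invented example: 40 leaks on pass 1, 35 on pass 2, 25 seen both times.
N_hat = chapman_estimate(40, 35, 25)          # estimated total leak count
undetected = N_hat - (40 + 35 - 25)           # estimated leaks never observed
```

    The CCR model generalizes this idea to many passes with variable sampling effort and no physical marking of individuals, which is why it needs MCMC rather than a closed-form estimate.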

  2. Calibration procedure for a potato crop growth model using information from across Europe

    DEFF Research Database (Denmark)

    Heidmann, Tove; Tofteng, Charlotte; Abrahamsen, Per

    2008-01-01

    In the FertOrgaNic EU project, 3 years of field experiments with drip irrigation and fertigation were carried out at six different sites across Europe, involving seven different varieties of potato. The Daisy model, which simulates plant growth together with water and nitrogen dynamics, was used ... for adaptation of the Daisy model to new potato varieties or for the improvement of the existing parameter set. The procedure is then, as a starting point, to focus the calibration process on the recommended list of parameters to change. We demonstrate this approach by showing the procedure for recalibrating ... three varieties using all relevant data from the sites. We believe these new parameterisations to be more robust, because they were indirectly based on information from the six different sites. We claim that this procedure combines both local and specific modeller expertise in a way that results in more ...

  3. Use of tracer to calibrate water quality models in the river Almendares

    International Nuclear Information System (INIS)

    Dominguez Catasus, Judith; Borroto Portela, Jorge; Perez Machado, Esperanza; Hernandez Garces, Anel

    2003-01-01

    The Almendares river, one of the most important water bodies of Havana City, is very polluted. The analysis of parameters such as dissolved oxygen and biochemical oxygen demand is very helpful for studies aimed at the recovery of the river. There is a growing recognition around the world that water quality models are very useful tools for planning sanitary strategies for the management of wastewater contamination and for predicting the effectiveness of control options to improve water quality to desired levels. In the present work, the advective, steady-state Streeter and Phelps model was calibrated and validated to simulate the effect of multiple-point and distributed sources on the carbonaceous oxygen demand and dissolved oxygen. The use of 99mTc and Rhodamine WT as tracers allowed determination of the hydrodynamic parameters necessary for modeling purposes
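    The Streeter and Phelps oxygen-sag formulation named above fits in a few lines. In the sketch below the deoxygenation rate kd, reaeration rate kr, and boundary values are illustrative placeholders, not the calibrated Almendares values.

```python
import math

# Minimal Streeter-Phelps oxygen-sag sketch (advective steady state,
# single point source). All coefficients are illustrative, not the
# tracer-calibrated values from the study.

def streeter_phelps(t, L0, D0, kd, kr, do_sat):
    """Dissolved oxygen at travel time t (days) downstream of the source.

    L0: initial BOD (mg/L), D0: initial oxygen deficit (mg/L),
    kd: deoxygenation rate (1/day), kr: reaeration rate (1/day).
    """
    deficit = (kd * L0 / (kr - kd)) * (math.exp(-kd * t) - math.exp(-kr * t)) \
              + D0 * math.exp(-kr * t)
    return do_sat - deficit

# BOD 10 mg/L, initial deficit 1 mg/L, kd=0.3/day, kr=0.7/day, DOsat=8 mg/L
do_profile = [streeter_phelps(t / 10, 10.0, 1.0, 0.3, 0.7, 8.0) for t in range(31)]
```

    Calibration with tracer-derived travel times amounts to tuning kd and kr so this profile matches the measured oxygen sag.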

  4. Calibration of a Numerical Model for Heat Transfer and Fluid Flow in an Extruder

    DEFF Research Database (Denmark)

    Hofstätter, Thomas; Pedersen, David Bue; Nielsen, Jakob Skov

    2016-01-01

    This paper discusses experiments performed in order to validate simulations of a fused deposition modelling (FDM) extruder. The nozzle has been simulated in terms of heat transfer and fluid flow. In order to calibrate and validate these simulations, experiments were performed giving a significant ... look into the physical behaviour of the nozzle, heating and cooling systems. Experiments on the model were performed at different sub-mm diameters of the extruder. Physical parameters of the model, especially temperature-dependent parameters, were set into analytical relationships in order to obtain ... dynamical parameters. This research sets the foundation for further research within melt-extrusion-based additive manufacturing. The heating process of the extruder will be described and a note on the material feeding will be given. ...

  5. Dam failure analysis/calibration using NWS models on dam failure in Alton, New Hampshire

    International Nuclear Information System (INIS)

    Capone, E.J.

    1998-01-01

    The State of New Hampshire Water Resources Board, the United States Geological Survey, and private concerns have compiled data on the cause of a catastrophic failure of the Bergeron Dam in Alton, New Hampshire, in March 1996. Data collected relate to the cause of the breach, the breach parameters, the soil characteristics of the failed section, and the limits of downstream flooding. Dam break modeling software was used to calibrate and verify the simulated flood wave caused by the Bergeron Dam breach. Several scenarios were modeled, using different degrees of detail concerning the topography/channel-geometry of the affected areas. A sensitivity analysis of the important output parameters was completed. The relative importance of model parameters on the results was assessed against the background of observed historical events

  6. Lattice modeling and calibration with turn-by-turn orbit data

    Directory of Open Access Journals (Sweden)

    Xiaobiao Huang

    2010-11-01

    A new method that explores turn-by-turn beam position monitor (BPM) data to calibrate lattice models of accelerators is proposed. The turn-by-turn phase space coordinates at one location of the ring are first established using data from two BPMs separated by a simple section with a known transfer matrix, such as a drift space. The phase space coordinates are then tracked with the model to predict positions at other BPMs, which can be compared to measurements. The model is adjusted to minimize the difference between the measured and predicted orbit data. BPM gains and rolls are included as fitting variables. This technique can be applied to either the entire or a section of the ring. We have tested the method experimentally on a part of the SPEAR3 ring.
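    A toy version of the fitting loop may clarify the method: reconstruct phase space coordinates, track them through a parameterized transfer matrix, and adjust the model parameter until downstream readings match. The matrices, beam coordinates, and brute-force scan below are invented stand-ins for the paper's lattice model and optimizer, which also fits BPM gains and rolls.

```python
# Toy turn-by-turn lattice fit: given (x, x') at one location, track
# through a model transfer matrix and tune one parameter (a thin-lens
# quadrupole strength) to match a downstream BPM. All values are
# invented; a real fit includes BPM gains and rolls as variables.

def track(M, x, xp):
    """Apply a 2x2 transfer matrix to phase space coordinates (x, x')."""
    return M[0][0] * x + M[0][1] * xp, M[1][0] * x + M[1][1] * xp

def quad_then_drift(k, L):
    """Thin-lens quad of strength k followed by a drift of length L."""
    return [[1.0 - k * L, L], [-k, 1.0]]

L_drift = 2.0
k_true = 0.5
# fake turn-by-turn (x, x') reconstructed from two upstream BPMs
turns = [(1.0e-3 * (1 - 0.1 * n), 0.2e-3 * n) for n in range(5)]
measured = [track(quad_then_drift(k_true, L_drift), x, xp)[0] for x, xp in turns]

def residual(k):
    """Sum of squared differences between model and 'measured' BPM data."""
    return sum((track(quad_then_drift(k, L_drift), x, xp)[0] - m) ** 2
               for (x, xp), m in zip(turns, measured))

# brute-force 1-D scan stands in for a least-squares optimizer
k_fit = min((k / 1000 for k in range(1001)), key=residual)
```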

  7. Lattice modeling and calibration with turn-by-turn orbit data

    Science.gov (United States)

    Huang, Xiaobiao; Sebek, Jim; Martin, Don

    2010-11-01

    A new method that explores turn-by-turn beam position monitor (BPM) data to calibrate lattice models of accelerators is proposed. The turn-by-turn phase space coordinates at one location of the ring are first established using data from two BPMs separated by a simple section with a known transfer matrix, such as a drift space. The phase space coordinates are then tracked with the model to predict positions at other BPMs, which can be compared to measurements. The model is adjusted to minimize the difference between the measured and predicted orbit data. BPM gains and rolls are included as fitting variables. This technique can be applied to either the entire or a section of the ring. We have tested the method experimentally on a part of the SPEAR3 ring.

  8. Calibrating a surface mass-balance model for Austfonna ice cap, Svalbard

    Science.gov (United States)

    Schuler, Thomas Vikhamar; Loe, Even; Taurisano, Andrea; Eiken, Trond; Hagen, Jon Ove; Kohler, Jack

    2007-10-01

    Austfonna (8120 km²) is by far the largest ice mass in the Svalbard archipelago. There is considerable uncertainty about its current state of balance and its possible response to climate change. Over the 2004/05 period, we collected continuous meteorological data series from the ice cap, performed mass-balance measurements using a network of stakes distributed across the ice cap and mapped the distribution of snow accumulation using ground-penetrating radar along several profile lines. These data are used to drive and test a model of the surface mass balance. The spatial accumulation pattern was derived from the snow depth profiles using regression techniques, and ablation was calculated using a temperature-index approach. Model parameters were calibrated using the available field data. Parameter calibration was complicated by the fact that different parameter combinations yield equally acceptable matches to the stake data while the resulting calculated net mass balance differs considerably. Testing model results against multiple criteria is an efficient method to cope with non-uniqueness. In doing so, a range of different data and observations was compared to several different aspects of the model results. We find a systematic underestimation of net balance for parameter combinations that predict observed ice ablation, which suggests that refreezing processes play an important role. To represent these effects in the model, a simple PMAX approach was included in its formulation. Used as a diagnostic tool, the model suggests that the surface mass balance for the period 29 April 2004 to 23 April 2005 was negative (-318 mm w.e.).
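    The temperature-index ablation calculation named in the abstract reduces to a degree-day sum. A minimal sketch, with an assumed degree-day factor rather than the calibrated Austfonna value:

```python
# Minimal temperature-index (degree-day) ablation sketch. The degree-day
# factor and temperature series are assumed values for illustration;
# calibration would tune DDF against the stake measurements.

DDF = 6.0  # mm w.e. per positive degree-day (assumed, not calibrated)

def ablation(daily_mean_temps_c, ddf=DDF):
    """Sum melt (mm w.e.) over days with mean temperature above 0 degC."""
    return ddf * sum(t for t in daily_mean_temps_c if t > 0)

melt = ablation([-4.0, -1.0, 0.5, 2.0, 3.5])  # melt over a 5-day window
```

    The non-uniqueness discussed in the abstract arises because a larger DDF can compensate for a biased accumulation field, which is why the authors test against multiple criteria at once.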

  9. Sensitivity analysis and development of calibration methodology for near-surface hydrogeology model of Forsmark

    International Nuclear Information System (INIS)

    Aneljung, Maria; Gustafsson, Lars-Goeran

    2007-04-01

    The hydrological modelling system MIKE SHE has been used to describe near-surface groundwater flow, transport mechanisms and the contact between ground- and surface water at the Forsmark site. The surface water system at Forsmark is described with the 1D modelling tool MIKE 11, which is fully and dynamically integrated with MIKE SHE. In spring 2007, a new data freeze will be available and a process of updating, rebuilding and calibrating the MIKE SHE model will start, based on the latest data set. Prior to this, it is important to gather as much knowledge as possible on calibration methods and to define critical calibration parameters and areas within the model. In this project, an optimization of the numerical description and an initial calibration of the MIKE SHE model has been made, and an updated base case has been defined. Data from 5 surface water level monitoring stations, 4 surface water discharge monitoring stations and 32 groundwater level monitoring stations (SFM soil boreholes) has been used for model calibration and evaluation. The base case simulations generally show a good agreement between calculated and measured water levels and discharges, indicating that the total runoff from the area is well described by the model. Moreover, with two exceptions (SFM0012 and SFM0022) the base case results show very good agreement between calculated and measured groundwater head elevations for boreholes installed below lakes. The model also shows a reasonably good agreement between calculated and measured groundwater head elevations or depths to phreatic surfaces in many other points. The following major types of calculation-measurement differences can be noted: Differences in groundwater level amplitudes due to transpiration processes. Differences in absolute mean groundwater head, due to differences between borehole casing levels and the interpolated DEM. Differences in absolute mean head elevations, due to local errors in hydraulic conductivity values

  10. Sensitivity analysis and development of calibration methodology for near-surface hydrogeology model of Forsmark

    Energy Technology Data Exchange (ETDEWEB)

    Aneljung, Maria; Gustafsson, Lars-Goeran [DHI Water and Environment AB, Goeteborg (Sweden)

    2007-04-15

    The hydrological modelling system MIKE SHE has been used to describe near-surface groundwater flow, transport mechanisms and the contact between ground- and surface water at the Forsmark site. The surface water system at Forsmark is described with the 1D modelling tool MIKE 11, which is fully and dynamically integrated with MIKE SHE. In spring 2007, a new data freeze will be available and a process of updating, rebuilding and calibrating the MIKE SHE model will start, based on the latest data set. Prior to this, it is important to gather as much knowledge as possible on calibration methods and to define critical calibration parameters and areas within the model. In this project, an optimization of the numerical description and an initial calibration of the MIKE SHE model has been made, and an updated base case has been defined. Data from 5 surface water level monitoring stations, 4 surface water discharge monitoring stations and 32 groundwater level monitoring stations (SFM soil boreholes) has been used for model calibration and evaluation. The base case simulations generally show a good agreement between calculated and measured water levels and discharges, indicating that the total runoff from the area is well described by the model. Moreover, with two exceptions (SFM0012 and SFM0022) the base case results show very good agreement between calculated and measured groundwater head elevations for boreholes installed below lakes. The model also shows a reasonably good agreement between calculated and measured groundwater head elevations or depths to phreatic surfaces in many other points. The following major types of calculation-measurement differences can be noted: Differences in groundwater level amplitudes due to transpiration processes. Differences in absolute mean groundwater head, due to differences between borehole casing levels and the interpolated DEM. Differences in absolute mean head elevations, due to local errors in hydraulic conductivity values

  11. Mathematical model and computer programme for theoretical calculation of calibration curves of neutron soil moisture probes with highly effective counters

    International Nuclear Information System (INIS)

    Kolev, N.A.

    1981-07-01

    A mathematical model, based on three-group theory, for the theoretical calculation by computer of the calibration curves of neutron soil moisture probes with highly effective counters is described. Methods for experimental correction of the mathematical model are discussed and proposed. The computer programme described allows the calibration of neutron probes with high- or low-efficiency counters and central or end geometry, with or without linearization of the calibration curve. The use of two calculation variants and the printing of output data makes the programme suitable not only for calibration but also for other research. The separate data inputs for soil and probe temperature allow analysis of the temperature influence. The computer programme and calculation examples are given. (author)

  12. Model calibration of a variable refrigerant flow system with a dedicated outdoor air system: A case study

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Dongsu [Mississippi State Univ., Starkville, MS (United States); Cox, Sam J. [Mississippi State Univ., Starkville, MS (United States); Cho, Heejin [Mississippi State Univ., Starkville, MS (United States); Im, Piljae [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-10-16

    With increased use of variable refrigerant flow (VRF) systems in the U.S. building sector, interest in the capability and rationality of various building energy modeling tools to simulate VRF systems is rising. This paper presents the detailed procedure for model calibration of a VRF system with a dedicated outdoor air system (DOAS) by comparison with detailed measured data from an occupancy-emulated small office building. The building energy model is first developed based on as-built drawings and available building and system characteristics. The whole-building energy modeling tool used for the study is U.S. DOE's EnergyPlus version 8.1. The initial model is then calibrated with the hourly measured data from the target building and VRF-DOAS system. As part of the detailed calibration procedure for the VRF-DOAS, the original EnergyPlus source code is modified to enable modeling of the specific VRF-DOAS installed in the building. After proper calibration during the cooling and heating seasons, the VRF-DOAS model can reasonably predict the performance of the actual VRF-DOAS system based on the criteria from ASHRAE Guideline 14-2014. The calibration results show hourly CV-RMSE and NMBE of 15.7% and 3.8%, respectively, meeting the Guideline's definition of a calibrated model. As a result, the whole-building energy usage of the calibrated VRF-DOAS model is 1.9% (78.8 kWh) lower than the measurements during the comparison period.
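    The CV-RMSE and NMBE statistics behind the Guideline 14 check are simple to compute. The formulas below are the standard definitions normalized by the measured mean; the hourly series is invented, and note that some implementations use n-1 rather than n in the RMSE denominator.

```python
import math

# CV-RMSE and NMBE as used for the ASHRAE Guideline 14 calibration
# check. Standard definitions with an n denominator; the toy hourly
# data below are invented.

def cv_rmse(meas, sim):
    """Coefficient of variation of the RMSE, in percent of the measured mean."""
    n, mean = len(meas), sum(meas) / len(meas)
    rmse = math.sqrt(sum((m - s) ** 2 for m, s in zip(meas, sim)) / n)
    return 100.0 * rmse / mean

def nmbe(meas, sim):
    """Normalized mean bias error, in percent of the measured mean."""
    mean = sum(meas) / len(meas)
    return 100.0 * sum(m - s for m, s in zip(meas, sim)) / (len(meas) * mean)

measured  = [10.0, 12.0, 11.0, 13.0]   # invented hourly energy use (kWh)
simulated = [ 9.5, 12.5, 11.0, 12.0]
# Guideline 14 hourly criteria: CV-RMSE <= 30%, |NMBE| <= 10%
ok = cv_rmse(measured, simulated) <= 30.0 and abs(nmbe(measured, simulated)) <= 10.0
```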

  13. Gridded Calibration of Ensemble Wind Vector Forecasts Using Ensemble Model Output Statistics

    Science.gov (United States)

    Lazarus, S. M.; Holman, B. P.; Splitt, M. E.

    2017-12-01

    A computationally efficient method is developed that performs gridded post-processing of ensemble wind vector forecasts. An expansive set of idealized WRF model simulations is generated to provide physically consistent high-resolution winds over a coastal domain characterized by an intricate land/water mask. Ensemble model output statistics (EMOS) is used to calibrate the ensemble wind vector forecasts at observation locations. The local EMOS predictive parameters (mean and variance) are then spread throughout the grid utilizing flow-dependent statistical relationships extracted from the downscaled WRF winds. Using data withholding and 28 east-central Florida stations, the method is applied to one year of 24 h wind forecasts from the Global Ensemble Forecast System (GEFS). Compared to the raw GEFS, the approach improves both deterministic and probabilistic forecast skill. Analysis of multivariate rank histograms indicates the post-processed forecasts are calibrated. Two downscaling case studies are presented, a quiescent easterly flow event and a frontal passage. Strengths and weaknesses of the approach are presented and discussed.
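    For a single site, the structure of an EMOS calibration can be sketched as an affine correction of the ensemble mean plus a predictive variance. Real EMOS fits both the mean (a + b·m) and variance (c + d·s²) coefficients by minimizing the CRPS; the least-squares version below, with invented training data, only illustrates the structure.

```python
# Bare-bones EMOS-style calibration for one station: predictive mean
# a + b*(ensemble mean), predictive variance from training residuals.
# Real EMOS also regresses the variance on the ensemble spread and
# minimizes the CRPS; all data here are invented.

def fit_affine(x, y):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

ens_means = [3.0, 5.0, 7.0, 9.0]    # training ensemble means (m/s), invented
observed  = [3.8, 5.9, 8.1, 10.0]   # verifying observations, invented

a, b = fit_affine(ens_means, observed)
# predictive variance estimated from the training residuals
var_hat = sum((o - (a + b * m)) ** 2
              for m, o in zip(ens_means, observed)) / len(observed)
```

    Spreading (a, b, var_hat) from stations to the full grid, as the abstract describes, is what makes the approach "gridded" rather than site-by-site.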

  14. Using genetic algorithms to calibrate the user-defined parameters of IIST model for SBLOCA analysis

    International Nuclear Information System (INIS)

    Tsai, Chiung-Wen; Shih, Chunkuan; Wang, Jong-Rong

    2014-01-01

    Highlights: • A genetic algorithm is proposed to search the user-defined parameters of important correlations. • The TRACE IIST model was employed as a case study to demonstrate the capability of GAs. • A multi-objective optimization strategy was incorporated to evaluate multiple objective functions simultaneously. - Abstract: Thermal–hydraulic system codes such as TRACE have been designed to predict, investigate, and simulate nuclear reactor transients and accidents. Implementing relevant correlations, these codes are able to represent important phenomena such as two-phase flow, critical flow, and countercurrent flow. Furthermore, the thermal–hydraulic system codes permit users to modify the coefficients of these correlations, providing a certain degree of freedom to calibrate the numerical results, e.g., peak cladding temperature. These coefficients are known as user-defined parameters (UDPs). In practice, defining a series of UDPs is complex and relies heavily on expert opinion and engineering experience. This study proposes another approach, genetic algorithms (GAs), which provide rigorous procedures and mitigate human judgment errors, to calibrate the UDPs of important correlations for a 2% small break loss of coolant accident (SBLOCA). The TRACE IIST model was employed as a case study to demonstrate the capability of GAs. The UDPs were evolved by GAs to reduce the deviations between TRACE results and IIST experimental data
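    A minimal real-coded genetic algorithm shows the calibration loop the abstract describes: evolve candidate UDP values and keep those that reduce the deviation from experimental data. The one-parameter stand-in model, the synthetic data, and the GA settings below are all invented for illustration; the study's real objective evaluates TRACE runs against IIST measurements.

```python
import random

# Toy real-coded GA calibrating one "user-defined parameter" of a
# stand-in linear model against synthetic data. It mirrors the workflow
# in the abstract (evolve UDPs to minimize deviation from experiment),
# but the model, data, and GA settings are invented.

random.seed(42)
truth = 0.7
data = [truth * t for t in range(1, 11)]       # pretend experimental data

def deviation(udp):
    """Sum of squared deviations between model output and 'experiment'."""
    return sum((udp * t - d) ** 2 for t, d in zip(range(1, 11), data))

pop = [random.uniform(0.0, 2.0) for _ in range(20)]   # initial population
for _ in range(40):                                   # generations
    pop.sort(key=deviation)
    parents = pop[:10]                                # elitist selection
    children = []
    for _ in range(10):
        p1, p2 = random.sample(parents, 2)
        # arithmetic crossover plus Gaussian mutation
        children.append(0.5 * (p1 + p2) + random.gauss(0.0, 0.05))
    pop = parents + children
best = min(pop, key=deviation)
```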

  15. Panchromatic Calibration of Astronomical Observations with State-of-the-Art White Dwarf Model Atmospheres

    Science.gov (United States)

    Rauch, T.

    2016-05-01

    Theoretical spectral energy distributions (SEDs) of white dwarfs provide a powerful tool for cross-calibration and sensitivity control of instruments from the far infrared to the X-ray energy range. Such SEDs can be calculated from fully metal-line-blanketed NLTE model atmospheres, computed, for example, by the Tübingen NLTE Model-Atmosphere Package (TMAP), which has reached a high level of sophistication. TMAP has been successfully employed for the reliable spectral analysis of many hot, compact post-AGB stars. High-quality stellar spectra obtained over a wide energy range establish a database with a large number of spectral lines of many successive ions of different species. Their analysis allows determination of effective temperatures, surface gravities, and element abundances of individual (pre-)white dwarfs with very small error ranges. We present applications of TMAP SEDs for spectral analyses of hot, compact stars in the parameter range from (pre-)white dwarfs to neutron stars and demonstrate the improvement of flux calibration using white-dwarf SEDs that are available, for example, via registered services in the Virtual Observatory.

  16. Considering Decision Variable Diversity in Multi-Objective Optimization: Application in Hydrologic Model Calibration

    Science.gov (United States)

    Sahraei, S.; Asadzadeh, M.

    2017-12-01

    Any modern multi-objective global optimization algorithm should be able to archive a well-distributed set of solutions. While solution diversity in the objective space has been explored extensively in the literature, little attention has been given to solution diversity in the decision space. Selection metrics such as the hypervolume contribution and crowding distance calculated in the objective space guide the search toward solutions that are well distributed across the objective space. In this study, the diversity of solutions in the decision space is used as the main selection criterion, besides the dominance check, in multi-objective optimization. To this end, currently archived solutions are clustered in the decision space and the ones in less crowded clusters are given more chance to be selected for generating new solutions. The proposed approach is first tested on benchmark mathematical test problems. Second, it is applied to a hydrologic model calibration problem with more than three objective functions. Results show that the chance of finding a more widely spread set of high-quality solutions increases, and therefore the analyst receives a well-diversified set of options with the maximum amount of information. Pareto Archived-Dynamically Dimensioned Search, which is an efficient and parsimonious multi-objective optimization algorithm for model calibration, is utilized in this study.

  17. Calibrating a multi-model approach to defect production in high energy collision cascades

    International Nuclear Information System (INIS)

    Heinisch, H.L.; Singh, B.N.; Diaz de la Rubia, T.

    1994-01-01

    A multi-model approach to simulating defect production processes at the atomic scale is described that incorporates molecular dynamics (MD), binary collision approximation (BCA) calculations and stochastic annealing simulations. The central hypothesis is that the simple, fast computer codes capable of simulating large numbers of high energy cascades (e.g., BCA codes) can be made to yield the correct defect configurations when their parameters are calibrated using the results of the more physically realistic MD simulations. The calibration procedure is investigated using results of MD simulations of 25 keV cascades in copper. The configurations of point defects are extracted from the MD cascade simulations at the end of the collisional phase, thus providing information similar to that obtained with a binary collision model. The MD collisional phase defect configurations are used as input to the ALSOME annealing simulation code, and values of the ALSOME quenching parameters are determined that yield the best fit to the post-quenching defect configurations of the MD simulations.

  18. Improvements in irrigation system modelling when using remotely sensed ET for calibration

    Science.gov (United States)

    van Opstal, J. D.; Neale, C. M. U.; Lecina, S.

    2014-10-01

    Irrigation system modelling is often used to aid decision-makers in the agricultural sector. It gives insight into the consequences of potential management and infrastructure changes. However, simulating an irrigation district requires a considerable amount of input data to properly represent the system, which is not easily acquired or available. During the simulation process, several assumptions have to be made, and calibration is usually performed only with flow measurements. The advancement of evapotranspiration (ET) estimation using remote sensing is a welcome asset for irrigation system modelling. Remotely sensed ET can be used to improve the model accuracy in simulating the water balance and the crop production. This study makes use of the Ador-Simulation irrigation system model, which simulates water flows in irrigation districts both in the canal infrastructure and on-field. ET is estimated using an energy balance model, namely SEBAL, which has been proven to function well for agricultural areas. The seasonal ET from the Ador model and the ET from SEBAL are compared. These results identify sub-command areas that perform well under current assumptions or, conversely, areas that need re-evaluation of assumptions and a re-run of the model. Using a combined approach of the Ador irrigation system model and remote sensing outputs from SEBAL gives great insight during the modelling process and can accelerate it. Additionally, cost and time savings are apparent due to the decrease in input data required for simulating large-scale irrigation areas.

  19. Performance of the air2stream model that relates air and stream water temperatures depends on the calibration method

    Science.gov (United States)

    Piotrowski, Adam P.; Napiorkowski, Jaroslaw J.

    2018-06-01

    A number of physical or data-driven models have been proposed to evaluate stream water temperatures based on hydrological and meteorological observations. However, physical models require a large amount of information that is frequently unavailable, while data-based models ignore the physical processes. Recently the air2stream model has been proposed as an intermediate alternative that is based on physical heat budget processes, but it is so simplified that the model may be applied like data-driven ones. However, the price for simplicity is the need to calibrate eight parameters that, although they have some physical meaning, cannot be measured or evaluated a priori. As a result, the applicability and performance of the air2stream model for a particular stream rely on the efficiency of the calibration method. The original air2stream model uses an inefficient 20-year-old approach called Particle Swarm Optimization with inertia weight. This study aims at finding an effective and robust calibration method for the air2stream model. Twelve different optimization algorithms are examined on six different streams from the northern USA (states of Washington, Oregon and New York), Poland and Switzerland, located in high-mountain, hilly and lowland areas. It is found that the performance of the air2stream model depends significantly on the calibration method. Two algorithms lead to the best results for each considered stream. The air2stream model, calibrated with the chosen optimization methods, performs favorably against classical stream water temperature models. The MATLAB code of the air2stream model and the chosen calibration procedure (CoBiDE) are available as Supplementary Material on the Journal of Hydrology web page.

  20. Hydrological model calibration for derived flood frequency analysis using stochastic rainfall and probability distributions of peak flows

    Science.gov (United States)

    Haberlandt, U.; Radtke, I.

    2014-01-01

    Derived flood frequency analysis allows the estimation of design floods with hydrological modeling for poorly observed basins, considering change and taking into account flood protection measures. There are several possible choices regarding precipitation input, discharge output and consequently the calibration of the model. The objective of this study is to compare different calibration strategies for a hydrological model considering various types of rainfall input and runoff output data sets and to propose the most suitable approach. Event-based and continuous observed hourly rainfall data, as well as disaggregated daily rainfall and stochastically generated hourly rainfall data, are used as input for the model. As output, short hourly and longer daily continuous flow time series as well as probability distributions of annual maximum peak flow series are employed. The performance of the strategies is evaluated using the different model parameter sets obtained for continuous simulation of discharge in an independent validation period and by comparing the model-derived flood frequency distributions with the observed one. The investigations are carried out for three mesoscale catchments in northern Germany with the hydrological model HEC-HMS (Hydrologic Engineering Center's Hydrologic Modeling System). The results show that (I) the same type of precipitation input data should be used for calibration and application of the hydrological model, (II) a model calibrated using a small sample of extreme values works quite well for the simulation of continuous time series of moderate length but not vice versa, and (III) the best performance with small uncertainty is obtained when stochastic precipitation data and the observed probability distribution of peak flows are used for model calibration. This outcome suggests calibrating a hydrological model directly on probability distributions of observed peak flows, using stochastic rainfall as input, if its purpose is the
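    Fitting a probability distribution to annual maximum peak flows, the calibration target recommended above, can be sketched with a method-of-moments Gumbel fit. The peak-flow series below is invented; a real study would use observed annual maxima and possibly a different extreme-value distribution.

```python
import math

# Method-of-moments Gumbel fit to annual maximum peak flows, the kind
# of probability distribution used as a calibration target in derived
# flood frequency analysis. The sample series is invented.

peaks = [120.0, 95.0, 150.0, 110.0, 130.0, 105.0, 160.0, 125.0]  # m3/s

n = len(peaks)
mean = sum(peaks) / n
std = math.sqrt(sum((q - mean) ** 2 for q in peaks) / (n - 1))
beta = math.sqrt(6.0) * std / math.pi      # Gumbel scale parameter
mu = mean - 0.5772 * beta                  # Gumbel location parameter

def design_flood(T):
    """Peak flow with return period T years under the fitted Gumbel."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

q100 = design_flood(100.0)                 # 100-year design flood
```

    Calibrating the hydrological model against such a fitted distribution, rather than against a single flow series, is the strategy the study finds most robust.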

  1. Calibration of a Land Subsidence Model Using InSAR Data via the Ensemble Kalman Filter.

    Science.gov (United States)

    Li, Liangping; Zhang, Meijing; Katzenstein, Kurt

    2017-11-01

    The application of interferometric synthetic aperture radar (InSAR) has been increasingly used to improve capabilities to model land subsidence in hydrogeologic studies. A number of investigations over the last decade show how spatially detailed time-lapse images of ground displacements could be utilized to advance our understanding for better predictions. In this work, we use simulated land subsidences as observed measurements, mimicking InSAR data to inversely infer inelastic specific storage in a stochastic framework. The inelastic specific storage is assumed as a random variable and modeled using a geostatistical method such that the detailed variations in space could be represented and also that the uncertainties of both characterization of specific storage and prediction of land subsidence can be assessed. The ensemble Kalman filter (EnKF), a real-time data assimilation algorithm, is used to inversely calibrate a land subsidence model by matching simulated subsidences with InSAR data. The performance of the EnKF is demonstrated in a synthetic example in which simulated surface deformations using a reference field are assumed as InSAR data for inverse modeling. The results indicate: (1) the EnKF can be used successfully to calibrate a land subsidence model with InSAR data; the estimation of inelastic specific storage is improved, and uncertainty of prediction is reduced, when all the data are accounted for; and (2) if the same ensemble is used to estimate Kalman gain, the analysis errors could cause filter divergence; thus, it is essential to include localization in the EnKF for InSAR data assimilation. © 2017, National Ground Water Association.
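    The EnKF analysis step can be illustrated in scalar form: each ensemble member of the uncertain parameter is nudged by a Kalman gain built from the ensemble covariance. The linear observation operator and all numbers below are invented; the paper's application involves spatial storage fields, InSAR displacement maps, and the localization the authors show is essential.

```python
import random

# Scalar ensemble Kalman filter analysis step (perturbed-observations
# form): update an ensemble of a storage-like parameter with one
# subsidence-like observation. The linear "model" y = H*s and all
# numbers are invented; real use involves spatial fields and
# localization of the Kalman gain.

random.seed(1)
truth = 2.0
H = 3.0                        # linear observation operator
obs = H * truth                # the "InSAR" measurement
obs_err = 0.1                  # observation error std

ens = [random.gauss(1.0, 0.5) for _ in range(200)]   # biased prior ensemble
mean = sum(ens) / len(ens)
var = sum((e - mean) ** 2 for e in ens) / (len(ens) - 1)

# Kalman gain K = P H / (H P H + R) in scalar form
K = var * H / (H * var * H + obs_err ** 2)
# each member assimilates a perturbed copy of the observation
analysis = [e + K * (obs + random.gauss(0.0, obs_err) - H * e) for e in ens]
post_mean = sum(analysis) / len(analysis)
```

    The update pulls the biased prior (mean near 1.0) toward the truth (2.0) and shrinks the ensemble spread, the same behavior reported for the synthetic InSAR experiment.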

  2. Dynamic calibration and validation of an accelerometer force balance for hypersonic lifting models.

    Science.gov (United States)

    Singh, Prakash; Trivedi, Sharad; Menezes, Viren; Hosseini, Hamid

    2014-01-01

    An accelerometer-based force balance was designed and developed for the measurement of drag, lift, and rolling moment on a blunt-nosed, flapped delta wing in a short-duration hypersonic shock tunnel. Calibration and validation of the balance were carried out by a convolution technique using hammer pulse tests and surface pressure measurements. In the hammer pulse test, a known impulse was applied to the model in the appropriate direction using an impulse hammer, and the corresponding output of the balance (acceleration) was recorded. A fast Fourier transform (FFT) was applied to the output of the balance to generate a system response function relating the signal output to the corresponding load input. Impulse response functions for the three components of the balance, namely axial, normal, and angular, were obtained for a range of input loads; the angular system response function corresponded to rolling of the model. The impulse response functions thus obtained through dynamic calibration were applied to the output signals of the balance under hypersonic aerodynamic loading conditions in the tunnel to get the time history of the unknown aerodynamic forces and moments acting on the model. Surface pressure measurements were carried out on the model using high frequency pressure transducers, and forces and moments were deduced from them. Tests were carried out at model angles of incidence of 0, 5, 10, and 15 degrees. A good agreement was observed among the results of the different experimental methods. The balance developed is a comprehensive force/moment measurement device that can be used on complex, lifting, aerodynamic geometries in ground-based hypersonic test facilities.
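The convolution technique described above, building a frequency-domain system response from a hammer-pulse calibration and then deconvolving tunnel-run signals with it, can be sketched as follows. Function names and the toy kernel are illustrative, not from the paper, and this sketch uses exact circular convolution; a real calibration would need windowing and zero-padding.

```python
import numpy as np

def system_response(impulse_input, balance_output):
    """Frequency-domain system response H(f) from a hammer-pulse calibration:
    H = FFT(output) / FFT(input), assuming the input spectrum has no zeros."""
    return np.fft.rfft(balance_output) / np.fft.rfft(impulse_input)

def recover_load(balance_signal, H, eps=1e-12):
    """Deconvolve a tunnel-run balance signal back to a load time history.
    eps guards against division by (near-)zero response values."""
    n = len(balance_signal)
    return np.fft.irfft(np.fft.rfft(balance_signal) / (H + eps), n=n)
```

Once `H` is known from the hammer test, any later balance signal produced by the same dynamics can be inverted to the applied force history with a single FFT round-trip.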

  3. Calibration of numerical models for small debris flows in Yosemite Valley, California, USA

    Directory of Open Access Journals (Sweden)

    P. Bertolo

    2005-01-01

    This study compares documented debris flow runout distances with numerical simulations in the Yosemite Valley of California, USA, where about 15% of historical events of slope instability can be classified as debris flows and debris slides (Wieczorek and Snyder, 2004). To model debris flows in the Yosemite Valley, we selected six streams with evidence of historical debris flows; three of the debris flow deposits have single channels, and the other three split their pattern in the fan area into two or more channels. From field observations, all of the debris flows involved coarse material with only very small clay content. We applied the one-dimensional DAN (Dynamic ANalysis) model (Hungr, 1995) and the two-dimensional FLO-2D model (O'Brien et al., 1993) to predict and compare the runout distance and the velocity of the debris flows observed in the study area. As a first step, we calibrated the parameters for the two codes through back analysis of three debris-flow channels using a trial-and-error procedure starting with values suggested in the literature. In the second step we applied the selected values to the other channels in order to evaluate their predictive capabilities. After parameter calibration using three debris flows we obtained results similar to field observations. We also obtained good agreement between the two models for velocities. Both models are strongly influenced by topography: we used the 30 m cell size DTM available for the study area, which is probably not accurate enough for a highly detailed analysis but can be sufficient for a first screening.

  4. Online Calibration Methods for the DINA Model with Independent Attributes in CD-CAT

    Science.gov (United States)

    Chen, Ping; Xin, Tao; Wang, Chun; Chang, Hua-Hua

    2012-01-01

    Item replenishing is essential for item bank maintenance in cognitive diagnostic computerized adaptive testing (CD-CAT). In regular CAT, online calibration is commonly used to calibrate the new items continuously. However, until now no reference has publicly become available about online calibration for CD-CAT. Thus, this study investigates the…

  5. Derived flood frequency analysis using different model calibration strategies based on various types of rainfall-runoff data - a comparison

    Science.gov (United States)

    Haberlandt, U.; Radtke, I.

    2013-08-01

    Derived flood frequency analysis allows design floods to be estimated with hydrological modelling for poorly observed basins, considering change and taking into account flood protection measures. There are several possible choices regarding precipitation input, discharge output and consequently the calibration of the model. The objective of this study is to compare different calibration strategies for a hydrological model considering various types of rainfall input and runoff output data sets. Event-based and continuous observed hourly rainfall data as well as disaggregated daily rainfall and stochastically generated hourly rainfall data are used as input for the model. As output, short hourly and longer daily continuous flow time series as well as probability distributions of annual maximum peak flow series are employed. The performance of the strategies is evaluated using the obtained different model parameter sets for continuous simulation of discharge in an independent validation period and by comparing the model-derived flood frequency distributions with the observed one. The investigations are carried out for three mesoscale catchments in northern Germany with the hydrological model HEC-HMS. The results show that: (i) the same type of precipitation input data should be used for calibration and application of the hydrological model, (ii) a model calibrated using a small sample of extreme values works quite well for the simulation of continuous time series with moderate length but not vice versa, and (iii) the best performance with small uncertainty is obtained when stochastic precipitation data and the observed probability distribution of peak flows are used for model calibration. This outcome suggests calibrating a hydrological model directly on probability distributions of observed peak flows, using stochastic rainfall as input, if its purpose is the application for derived flood frequency analysis.
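Strategy (iii) above amounts to calibrating against the distribution of annual maximum peak flows rather than against a flow time series. A minimal sketch of such an objective, assuming a simple quantile-matching distance (the function name and the choice of probabilities are illustrative, not the authors' actual criterion):

```python
import numpy as np

def ffc_objective(sim_annual_max, obs_annual_max, n_q=20):
    """RMSE between simulated and observed annual-maximum flood quantiles,
    evaluated at common non-exceedance probabilities. A calibration would
    minimize this over the hydrological model's parameters."""
    p = np.arange(1, n_q + 1) / (n_q + 1)
    q_sim = np.quantile(np.asarray(sim_annual_max, dtype=float), p)
    q_obs = np.quantile(np.asarray(obs_annual_max, dtype=float), p)
    return float(np.sqrt(np.mean((q_sim - q_obs) ** 2)))
```

Because only distributions are compared, the simulated series can be driven by long stochastic rainfall realizations that never line up in time with the observations, which is exactly what makes this calibration strategy possible.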

  6. Calibration of the nonlinear ring model at the Diamond Light Source

    CERN Document Server

    Bartolini, R; Rehm, G; Martin, I P S

    2011-01-01

    Nonlinear beam dynamics plays a crucial role in defining the performance of a storage ring. The beam lifetime, the injection efficiency, and the dynamic and momentum apertures available to the beam are optimized during the design phase by a proper optimization of the linear lattice and of the distribution of sextupole families. The correct implementation of the design model, especially the nonlinear part, is a nontrivial accelerator physics task. Several parameters of the nonlinear dynamics can be used to compare the real machine with the model and eventually to correct the accelerator. Most of these parameters are extracted from the analysis of turn-by-turn data after the excitation of betatron oscillations of the particles in the ring. We present the experimental results of the campaign of measurements carried out at the Diamond storage ring to characterize the nonlinear beam dynamics. A combination of frequency map analysis with the detuning with momentum measurements has allowed for a precise calibration ...

  7. Calibrating vadose zone models with time-lapse gravity data: a forced infiltration experiment

    DEFF Research Database (Denmark)

    Christiansen, Lars; Hansen, Allan Bo; Zibar, Majken Caroline Looms

    A change in soil water content is a change in mass stored in the subsurface and, when large enough, can be measured with a gravity meter. Over the last few decades there has been increased use of ground-based time-lapse gravity measurements to infer hydrogeological parameters. These studies have focused on the saturated zone, with specific yield as the most prominent target parameter; with few exceptions, changes in storage in the vadose zone have been considered as noise. Here modeling results are presented suggesting that gravity changes will be measurable when soil moisture changes occur in the unsaturated zone. These results are confirmed by field measurements of gravity and georadar data at a forced infiltration experiment conducted over 14 days on a grassland area of 10 m by 10 m. An unsaturated zone infiltration model can be calibrated using the gravity data with good agreement to the field data.
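The expected size of the gravity signal from a vadose-zone storage change can be estimated with the infinite Bouguer slab approximation, a standard first-order check in time-lapse gravimetry (the function name is an illustrative choice, not from the study):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
RHO_WATER = 1000.0   # density of water, kg m^-3

def slab_gravity_ugal(d_theta, thickness_m):
    """Gravity change (in microGal) from a volumetric water-content change
    d_theta (m^3/m^3) over a horizontal layer of the given thickness,
    treated as an infinite Bouguer slab: dg = 2*pi*G*rho_w*d_theta*dz."""
    dg = 2.0 * math.pi * G * RHO_WATER * d_theta * thickness_m  # in m/s^2
    return dg / 1e-8  # 1 microGal = 1e-8 m/s^2
```

A full meter of water gives about 42 µGal, while a water-content change of 0.1 over a 0.5 m layer gives roughly 2 µGal, close to the repeatability of good relative gravimeters, which is why a sustained forced infiltration is needed to produce a clearly measurable signal.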

  8. Accurate calibration of the velocity-dependent one-scale model for domain walls

    Energy Technology Data Exchange (ETDEWEB)

    Leite, A.M.M., E-mail: up080322016@alunos.fc.up.pt [Centro de Astrofisica, Universidade do Porto, Rua das Estrelas, 4150-762 Porto (Portugal); Ecole Polytechnique, 91128 Palaiseau Cedex (France); Martins, C.J.A.P., E-mail: Carlos.Martins@astro.up.pt [Centro de Astrofisica, Universidade do Porto, Rua das Estrelas, 4150-762 Porto (Portugal); Shellard, E.P.S., E-mail: E.P.S.Shellard@damtp.cam.ac.uk [Department of Applied Mathematics and Theoretical Physics, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA (United Kingdom)

    2013-01-08

    We study the asymptotic scaling properties of standard domain wall networks in several cosmological epochs. We carry out the largest field theory simulations achieved to date, with simulation boxes of size 2048^3, and confirm that a scale-invariant evolution of the network is indeed the attractor solution. The simulations are also used to obtain an accurate calibration for the velocity-dependent one-scale model for domain walls: we numerically determine the two free model parameters to have the values c_w = 0.34 ± 0.16 and k_w = 0.98 ± 0.07, which are of higher precision than (but in agreement with) earlier estimates.

  9. Accurate calibration of the velocity-dependent one-scale model for domain walls

    International Nuclear Information System (INIS)

    Leite, A.M.M.; Martins, C.J.A.P.; Shellard, E.P.S.

    2013-01-01

    We study the asymptotic scaling properties of standard domain wall networks in several cosmological epochs. We carry out the largest field theory simulations achieved to date, with simulation boxes of size 2048^3, and confirm that a scale-invariant evolution of the network is indeed the attractor solution. The simulations are also used to obtain an accurate calibration for the velocity-dependent one-scale model for domain walls: we numerically determine the two free model parameters to have the values c_w = 0.34 ± 0.16 and k_w = 0.98 ± 0.07, which are of higher precision than (but in agreement with) earlier estimates.
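A minimal sketch of the model these parameters calibrate, assuming the standard form of the wall VOS evolution equations, dL/dt = (1 + 3v^2)HL + c_w v and dv/dt = (1 - v^2)(k_w/L - 3Hv), integrated in a power-law background a ~ t^lam (so H = lam/t); the integration scheme and initial conditions are illustrative choices:

```python
import numpy as np

def evolve_vos(c_w=0.34, k_w=0.98, lam=2.0 / 3.0, t0=1.0, t1=1.0e4, n=100_000):
    """Euler-integrate the wall VOS equations on a geometric time grid.
    Returns the scaling ratio L/t and the velocity v at the final time."""
    ts = np.geomspace(t0, t1, n)
    L, v = t0, 0.1  # arbitrary start: the scaling solution is an attractor
    for i in range(1, n):
        t, dt = ts[i - 1], ts[i] - ts[i - 1]
        H = lam / t
        dL = (1.0 + 3.0 * v * v) * H * L + c_w * v
        dv = (1.0 - v * v) * (k_w / L - 3.0 * H * v)
        L, v = L + dt * dL, v + dt * dv
    return L / t1, v
```

With the calibrated values and a matter-era expansion rate (lam = 2/3), the integration settles onto the scale-invariant attractor with L/t ≈ 1.39 and v ≈ 0.35, illustrating the linear growth of the characteristic length that the simulations confirm.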

  10. Comparison and calibration of numerical models from monitoring data of a reinforced concrete highway bridge

    Directory of Open Access Journals (Sweden)

    R. G. M. de Andrade

    The last four decades were important for the Brazilian highway system. Financial investments were made so it could expand, and many structural solutions for bridges and viaducts were developed. In parallel, there was a significant rise of pathologies in these structures due to a lack of maintenance procedures. Thus, this paper's main purpose is to create a short-term monitoring plan in order to check the structural behavior of a curved highway concrete bridge in current use, which was chosen as a case study. A hierarchy of six numerical models is shown to validate the bridge's structural behavior. The data acquired from the monitoring were compared with the most refined models so that a calibration could be made.

  11. Clustered iterative stochastic ensemble method for multi-modal calibration of subsurface flow models

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-05-01

    A novel multi-modal parameter estimation algorithm is introduced. Parameter estimation is an ill-posed inverse problem that may admit many different solutions; this is attributed to the limited amount of measured data available to constrain the inverse problem. The proposed multi-modal model calibration algorithm uses an iterative stochastic ensemble method (ISEM) for parameter estimation. ISEM employs an ensemble of directional derivatives within a Gauss-Newton iteration for nonlinear parameter estimation. ISEM is augmented with a clustering step based on the k-means algorithm to form sub-ensembles, which are used to explore different parts of the search space. Clusters are updated at regular intervals of the algorithm to allow merging of close clusters approaching the same local minimum. Numerical testing demonstrates the potential of the proposed algorithm in dealing with multi-modal nonlinear parameter estimation for subsurface flow models. © 2013 Elsevier B.V.
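The clustering-and-merging step can be illustrated with a plain Lloyd's-algorithm k-means plus a center-distance merging test. This is a generic sketch, not the paper's implementation; in the actual algorithm each resulting sub-ensemble would then run its own Gauss-Newton update.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's-algorithm k-means: split an (n_samples, n_params)
    ensemble into k sub-ensembles. Returns labels and cluster centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each sample to its nearest center.
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):  # keep the old center if a cluster empties
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

def mergeable_pairs(centers, tol):
    """Cluster pairs whose centers lie within tol of each other, i.e.
    candidates approaching the same local minimum that should be merged."""
    k = len(centers)
    return [(i, j) for i in range(k) for j in range(i + 1, k)
            if np.linalg.norm(centers[i] - centers[j]) < tol]
```

Calling `mergeable_pairs` at regular intervals, and re-running `kmeans` with a smaller k whenever it returns a non-empty list, reproduces the merging behavior described in the abstract.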

  12. Calibration by Hydrological Response Unit of a National Hydrologic Model to Improve Spatial Representation and Distribution of Parameters

    Science.gov (United States)

    Norton, P. A., II

    2015-12-01

    The U. S. Geological Survey is developing a National Hydrologic Model (NHM) to support consistent hydrologic modeling across the conterminous United States (CONUS). The Precipitation-Runoff Modeling System (PRMS) simulates daily hydrologic and energy processes in watersheds and is used for the NHM application. For PRMS, each watershed is divided into hydrologic response units (HRUs); by default each HRU is assumed to have a uniform hydrologic response. The Geospatial Fabric (GF) is a database containing initial parameter values for input to PRMS and was created for the NHM. The parameter values in the GF were derived from datasets that characterize the physical features of the entire CONUS. The NHM application is composed of more than 100,000 HRUs from the GF. Selected parameter values commonly are adjusted by basin in PRMS using an automated calibration process based on calibration targets, such as streamflow. Providing each HRU with distinct values that capture variability within the CONUS may improve simulation performance of the NHM. During calibration of the NHM by HRU, selected parameter values are adjusted for PRMS based on calibration targets such as streamflow, snow water equivalent (SWE), and actual evapotranspiration (AET). Simulated SWE, AET, and runoff were compared to value ranges derived from multiple sources (e.g. the Snow Data Assimilation System, the Moderate Resolution Imaging Spectroradiometer (MODIS) Global Evapotranspiration Project, the Simplified Surface Energy Balance model, and the Monthly Water Balance Model). This provides each HRU with a distinct set of parameter values that captures the variability within the CONUS, leading to improved model performance. We present simulation results from the NHM after preliminary calibration, including the results of basin-level calibration for the NHM using: 1) default initial GF parameter values, and 2) parameter values calibrated by HRU.
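Comparing simulated SWE, AET, and runoff against value ranges from independent datasets suggests an objective built from range penalties: zero when a simulated value falls inside the acceptable range, growing with the distance outside it. A hedged sketch; the function names, the normalization, and the weighting scheme are assumptions, not the NHM calibration code:

```python
import numpy as np

def range_penalty(sim, lo, hi):
    """Mean distance of a simulated series outside an acceptable range
    [lo, hi], normalized by the range width (zero if always inside)."""
    sim = np.asarray(sim, dtype=float)
    below = np.clip(lo - sim, 0.0, None)
    above = np.clip(sim - hi, 0.0, None)
    return float(np.mean(below + above) / (hi - lo))

def multi_target_objective(targets):
    """Combine penalties for several calibration targets (e.g. SWE, AET,
    runoff): targets is a list of (sim, lo, hi, weight) tuples."""
    return sum(w * range_penalty(s, lo, hi) for s, lo, hi, w in targets)
```

Minimizing such an objective per HRU pulls each variable into its independently derived range without requiring a gauged time series at every HRU.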

  13. A hybrid framework for quantifying the influence of data in hydrological model calibration

    Science.gov (United States)

    Wright, David P.; Thyer, Mark; Westra, Seth; McInerney, David

    2018-06-01

    Influence diagnostics aim to identify a small number of influential data points that have a disproportionate impact on the model parameters and/or predictions. The key issues with current influence diagnostic techniques are that the regression-theory approaches do not provide hydrologically relevant influence metrics, while the case-deletion approaches are computationally expensive to calculate. The main objective of this study is to introduce a new two-stage hybrid framework that overcomes these challenges by delivering hydrologically relevant influence metrics in a computationally efficient manner. Stage one uses computationally efficient regression-theory influence diagnostics to identify the most influential points based on Cook's distance. Stage two then uses case-deletion influence diagnostics to quantify the influence of points using hydrologically relevant metrics. To illustrate the application of the hybrid framework, we conducted three experiments on 11 hydro-climatologically diverse Australian catchments using the GR4J hydrological model. The first experiment investigated how many data points from stage one need to be retained in order to reliably identify those points that have the highest influence on hydrologically relevant metrics. We found that a choice of 30-50 is suitable for hydrological applications similar to those explored in this study (30 points identified the most influential data 98% of the time and reduced the required recalibrations by 99% for a 10 year calibration period). The second experiment found little evidence of a change in the magnitude of influence with increasing calibration period length from 1, 2, 5 to 10 years. Even for 10 years the impact of influential points can still be high (>30% influence on maximum predicted flows). The third experiment compared the standard least squares (SLS) objective funct
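Stage one's screening metric, Cook's distance, is computed from the leverages and residuals of a regression fit. A generic sketch for a linear least-squares model; in a hydrological application a regression-theory approximation of the calibrated model would play the role of the design matrix X here:

```python
import numpy as np

def cooks_distance(X, y):
    """Cook's distance D_i for each observation of a linear least-squares fit:
    D_i = (r_i^2 / (p * s^2)) * h_ii / (1 - h_ii)^2,
    where h_ii are the leverages (hat-matrix diagonal), r_i the residuals,
    and s^2 the residual variance."""
    n, p = X.shape
    h = np.diag(X @ np.linalg.solve(X.T @ X, X.T))  # leverages
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    s2 = float(r @ r) / (n - p)
    return (r ** 2 / (p * s2)) * h / (1.0 - h) ** 2
```

Ranking points by D_i and keeping the top 30-50, as the study recommends, restricts the expensive case-deletion recalibrations of stage two to a small candidate set.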