WorldWideScience

Sample records for regression calibration approach

  1. Regression analysis with categorized regression calibrated exposure: some interesting findings

    Directory of Open Access Journals (Sweden)

    Hjartåker Anette

    2006-07-01

    Background: Regression calibration as a method for handling measurement error is becoming increasingly well known and used in epidemiologic research. However, the standard version of the method is not appropriate for exposures analyzed on a categorical (e.g. quintile) scale, an approach commonly used in epidemiologic studies. A tempting solution could then be to use the predicted continuous exposure obtained through the regression calibration method and treat it as an approximation to the true exposure, that is, include the categorized calibrated exposure in the main regression analysis. Methods: We use semi-analytical calculations and simulations to evaluate the performance of the proposed approach compared to the naive approach of not correcting for measurement error, in situations where analyses are performed on the quintile scale and when incorporating the original scale into the categorical variables, respectively. We also present analyses of real data, containing measures of folate intake and depression, from the Norwegian Women and Cancer study (NOWAC). Results: In cases where extra information is available through replicated measurements and not validation data, regression calibration does not maintain important qualities of the true exposure distribution; thus, estimates of variance and percentiles can be severely biased. We show that the outlined approach maintains much, in some cases all, of the misclassification found in the observed exposure. For that reason, regression analysis with the corrected variable included on a categorical scale is still biased. In some cases the corrected estimates are analytically equal to those obtained by the naive approach. Regression calibration is, however, vastly superior to the naive method when applying the medians of each category in the analysis. Conclusion: Regression calibration in its most well-known form is not appropriate for measurement error correction when the exposure is analyzed on a
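
The analytical point in this abstract can be seen in a few lines of numpy: with replicate measurements, the regression-calibration prediction is an increasing linear function of the mean replicate, so categorizing it on a quintile scale reproduces exactly the quintile assignment of the uncorrected exposure. A minimal simulation sketch (all parameter values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5000, 2                                   # subjects, replicates each
x = rng.normal(0.0, 1.0, n)                      # true exposure
w = x[:, None] + rng.normal(0.0, 1.0, (n, k))    # replicate measurements
wbar = w.mean(axis=1)

# Regression calibration from replicate data: E[X | Wbar] is estimated by
# shrinking Wbar toward its mean by the reliability ratio lambda.
var_u = np.mean(np.var(w, axis=1, ddof=1))       # within-person error variance
lam = (np.var(wbar, ddof=1) - var_u / k) / np.var(wbar, ddof=1)
x_rc = wbar.mean() + lam * (wbar - wbar.mean())

def quintile(v):
    """Quintile category (0-4) of each element of v."""
    return np.searchsorted(np.quantile(v, [0.2, 0.4, 0.6, 0.8]), v)
```

Because `x_rc` is an increasing linear transform of `wbar`, `quintile(x_rc)` agrees element-wise with `quintile(wbar)`: the categorized calibrated exposure keeps the misclassification of the observed one.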

  2. Regression calibration with more surrogates than mismeasured variables

    KAUST Repository

    Kipnis, Victor

    2012-06-29

    In a recent paper (Weller EA, Milton DK, Eisen EA, Spiegelman D. Regression calibration for logistic regression with multiple surrogates for one exposure. Journal of Statistical Planning and Inference 2007; 137: 449-461), the authors discussed fitting logistic regression models when a scalar main explanatory variable is measured with error by several surrogates, that is, a situation with more surrogates than variables measured with error. They compared two methods of adjusting for measurement error using a regression calibration approximate model as if it were exact. One is the standard regression calibration approach consisting of substituting an estimated conditional expectation of the true covariate given observed data in the logistic regression. The other is a novel two-stage approach when the logistic regression is fitted to multiple surrogates, and then a linear combination of estimated slopes is formed as the estimate of interest. Applying estimated asymptotic variances for both methods in a single data set with some sensitivity analysis, the authors asserted superiority of their two-stage approach. We investigate this claim in some detail. A troubling aspect of the proposed two-stage method is that, unlike standard regression calibration and a natural form of maximum likelihood, the resulting estimates are not invariant to reparameterization of nuisance parameters in the model. We show, however, that, under the regression calibration approximation, the two-stage method is asymptotically equivalent to a maximum likelihood formulation, and is therefore in theory superior to standard regression calibration. However, our extensive finite-sample simulations in the practically important parameter space where the regression calibration model provides a good approximation failed to uncover such superiority of the two-stage method. We also discuss extensions to different data structures.

  3. Regression calibration with more surrogates than mismeasured variables

    KAUST Repository

    Kipnis, Victor; Midthune, Douglas; Freedman, Laurence S.; Carroll, Raymond J.

    2012-01-01

    In a recent paper (Weller EA, Milton DK, Eisen EA, Spiegelman D. Regression calibration for logistic regression with multiple surrogates for one exposure. Journal of Statistical Planning and Inference 2007; 137: 449-461), the authors discussed fitting logistic regression models when a scalar main explanatory variable is measured with error by several surrogates, that is, a situation with more surrogates than variables measured with error. They compared two methods of adjusting for measurement error using a regression calibration approximate model as if it were exact. One is the standard regression calibration approach consisting of substituting an estimated conditional expectation of the true covariate given observed data in the logistic regression. The other is a novel two-stage approach when the logistic regression is fitted to multiple surrogates, and then a linear combination of estimated slopes is formed as the estimate of interest. Applying estimated asymptotic variances for both methods in a single data set with some sensitivity analysis, the authors asserted superiority of their two-stage approach. We investigate this claim in some detail. A troubling aspect of the proposed two-stage method is that, unlike standard regression calibration and a natural form of maximum likelihood, the resulting estimates are not invariant to reparameterization of nuisance parameters in the model. We show, however, that, under the regression calibration approximation, the two-stage method is asymptotically equivalent to a maximum likelihood formulation, and is therefore in theory superior to standard regression calibration. However, our extensive finite-sample simulations in the practically important parameter space where the regression calibration model provides a good approximation failed to uncover such superiority of the two-stage method. We also discuss extensions to different data structures.
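
The "standard regression calibration" route described here — substituting an estimated conditional expectation of the true covariate given the surrogates into the logistic model — can be sketched as follows. This is a hypothetical setup with a validation subsample where the true exposure is observed; the numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
x = rng.normal(size=n)                        # true scalar exposure
w = np.c_[x + rng.normal(0, 1.0, n),          # two error-prone surrogates
          x + rng.normal(0, 1.5, n)]
y = rng.binomial(1, 1 / (1 + np.exp(-(0.3 + 0.8 * x))))

# Stage 1 (calibration): estimate E[X | W] by linear regression on a
# validation subsample where the true exposure is known.
val = slice(0, 1000)
A = np.c_[np.ones(1000), w[val]]
g, *_ = np.linalg.lstsq(A, x[val], rcond=None)
x_hat = np.c_[np.ones(n), w] @ g              # substituted exposure for everyone

# Stage 2: ordinary logistic regression of y on the calibrated exposure,
# fitted by Newton-Raphson (IRLS).
Z = np.c_[np.ones(n), x_hat]
b = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-Z @ b))
    b += np.linalg.solve(Z.T * (p * (1 - p)) @ Z, Z.T @ (y - p))
beta_rc = b[1]                                # should sit near the true 0.8
```
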

  4. The Prediction Properties of Inverse and Reverse Regression for the Simple Linear Calibration Problem

    Science.gov (United States)

    Parker, Peter A.; Geoffrey, Vining G.; Wilson, Sara R.; Szarka, John L., III; Johnson, Nels G.

    2010-01-01

    The calibration of measurement systems is a fundamental but under-studied problem within industrial statistics. The origins of this problem go back to basic chemical analysis based on NIST standards. In today's world these issues extend to mechanical, electrical, and materials engineering. Often, these new scenarios do not provide "gold standards" such as the standard weights provided by NIST. This paper considers the classic "forward regression followed by inverse regression" approach. In this approach the initial experiment treats the "standards" as the regressor and the observed values as the response to calibrate the instrument. The analyst then must invert the resulting regression model in order to use the instrument to make actual measurements in practice. This paper compares this classical approach to "reverse regression," which treats the standards as the response and the observed measurements as the regressor in the calibration experiment. Such an approach is intuitively appealing because it avoids the need for the inverse regression. However, it also violates some of the basic regression assumptions.
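
The two competing calibration schemes can be written down directly. This sketch, with made-up standards and noise, shows the classical fit-then-invert approach next to reverse regression:

```python
import numpy as np

rng = np.random.default_rng(2)
std = np.linspace(1.0, 10.0, 12)                  # known calibration standards
obs = 0.5 + 2.0 * std + rng.normal(0, 0.1, 12)    # instrument readings

# Classical approach: forward regression obs = b0 + b1*std, then invert the
# fitted model to recover x from a new reading.
b1, b0 = np.polyfit(std, obs, 1)
invert = lambda y: (y - b0) / b1

# Reverse regression: treat the standards as the response, so no inversion
# is needed (at the cost of violating the usual regression assumptions).
c1, c0 = np.polyfit(obs, std, 1)
reverse = lambda y: c0 + c1 * y
```
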

  5. Improved Regression Analysis of Temperature-Dependent Strain-Gage Balance Calibration Data

    Science.gov (United States)

    Ulbrich, N.

    2015-01-01

    An improved approach is discussed that may be used to directly include first and second order temperature effects in the load prediction algorithm of a wind tunnel strain-gage balance. The improved approach was designed for the Iterative Method that fits strain-gage outputs as a function of calibration loads and uses a load iteration scheme during the wind tunnel test to predict loads from measured gage outputs. The improved approach assumes that the strain-gage balance is at a constant uniform temperature when it is calibrated and used. First, the method introduces a new independent variable for the regression analysis of the balance calibration data. The new variable is defined as the difference between the uniform temperature of the balance and a global reference temperature. This reference temperature should be the primary calibration temperature of the balance so that, if needed, a tare load iteration can be performed. Then, two temperature-dependent terms are included in the regression models of the gage outputs. They are the temperature difference itself and the square of the temperature difference. Simulated temperature-dependent data obtained from Triumph Aerospace's 2013 calibration of NASA's ARC-30K five-component semi-span balance is used to illustrate the application of the improved approach.
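
The structure of the approach — one new independent variable, the difference between balance temperature and a reference temperature, entering the gage-output model both linearly and quadratically — can be sketched on simulated data. The coefficients and temperatures below are invented, not taken from the ARC-30K calibration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
load = rng.uniform(-100, 100, n)     # applied calibration load
T = rng.uniform(15, 45, n)           # balance temperature, deg C
T_ref = 25.0                         # primary calibration temperature
dT = T - T_ref                       # the new independent variable

# Simulated gage output with first- and second-order temperature effects:
out = 1.5 * load + 0.8 * dT + 0.02 * dT**2 + rng.normal(0, 0.1, n)

# Regression model of the gage output augmented with dT and dT**2 terms:
X = np.c_[np.ones(n), load, dT, dT**2]
coef, *_ = np.linalg.lstsq(X, out, rcond=None)
```
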

  6. Information fusion via constrained principal component regression for robust quantification with incomplete calibrations

    International Nuclear Information System (INIS)

    Vogt, Frank

    2013-01-01

    Graphical abstract: Analysis Task: Determine the albumin (= protein) concentration in microalgae cells as a function of the cells’ nutrient availability. Left Panel: The predicted albumin concentrations as obtained by conventional principal component regression feature low reproducibility and are partially higher than the concentrations of the algae in which the albumin is contained. Right Panel: Augmenting an incomplete PCR calibration with additional expert information derives reasonable albumin concentrations which now reveal a significant dependency on the algae's nutrient situation. -- Highlights: •Make quantitative analyses of compounds embedded in largely unknown chemical matrices robust. •Improved concentration prediction with originally insufficient calibration models. •Chemometric approach for incorporating expertise from other fields and/or researchers. •Ensure chemical, biological, or medicinal meaningfulness of quantitative analyses. -- Abstract: Incomplete calibrations are encountered in many applications and hamper chemometric data analyses. Such situations arise when target analytes are embedded in a chemically complex matrix from which calibration concentrations cannot be determined with reasonable effort. In other cases, the samples’ chemical composition may fluctuate in an unpredictable way and thus cannot be comprehensively covered by calibration samples. The reason for a calibration model to fail is the regression principle itself, which seeks to explain measured data optimally in terms of the (potentially incomplete) calibration model but does not consider chemical meaningfulness. This study presents a novel chemometric approach which is based on experimentally feasible calibrations, i.e. concentration series of the target analytes outside the chemical matrix (‘ex situ calibration’). The inherent lack of information is then compensated by incorporating additional knowledge in the form of regression constraints. Any outside knowledge can be
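
As background for the constrained variant, plain principal component regression on a calibration set looks like this in numpy. This is a toy spectral data set; the paper's machinery for injecting expert knowledge as regression constraints is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(12)
n, p, k = 80, 30, 3
S = rng.normal(size=(n, k))                  # latent spectral factors
P = rng.normal(size=(k, p))                  # factor loadings
X = S @ P + 0.05 * rng.normal(size=(n, p))   # measured spectra
y = S[:, 0] + 0.05 * rng.normal(size=n)      # analyte concentration

# Principal component regression: project the centred spectra onto the
# leading k principal components, then regress y on the scores.
Xc = X - X.mean(0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
T = Xc @ Vt[:k].T                            # scores on the leading PCs
g, *_ = np.linalg.lstsq(np.c_[np.ones(n), T], y, rcond=None)
y_hat = np.c_[np.ones(n), T] @ g
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```
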

  7. Correcting for multivariate measurement error by regression calibration in meta-analyses of epidemiological studies.

    NARCIS (Netherlands)

    Kromhout, D.

    2009-01-01

    Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements of the

  8. Regression Analysis and Calibration Recommendations for the Characterization of Balance Temperature Effects

    Science.gov (United States)

    Ulbrich, N.; Volden, T.

    2018-01-01

    Analysis and use of temperature-dependent wind tunnel strain-gage balance calibration data are discussed in the paper. First, three different methods are presented and compared that may be used to process temperature-dependent strain-gage balance data. The first method uses an extended set of independent variables in order to process the data and predict balance loads. The second method applies an extended load iteration equation during the analysis of balance calibration data. The third method uses temperature-dependent sensitivities for the data analysis. Physical interpretations of the most important temperature-dependent regression model terms are provided that relate temperature compensation imperfections and the temperature-dependent nature of the gage factor to sets of regression model terms. Finally, balance calibration recommendations are listed so that temperature-dependent calibration data can be obtained and successfully processed using the reviewed analysis methods.

  9. On the calibration process of film dosimetry: OLS inverse regression versus WLS inverse prediction

    International Nuclear Information System (INIS)

    Crop, F; Thierens, H; Rompaye, B Van; Paelinck, L; Vakaet, L; Wagter, C De

    2008-01-01

    The purpose of this study was twofold: to put forward a statistically correct model for film calibration and to optimize this process. A reliable calibration is needed in order to perform accurate reference dosimetry with radiographic (Gafchromic) film. Sometimes an ordinary least squares simple linear (in the parameters) regression is applied to the dose-optical-density (OD) curve, with the dose as a function of OD (inverse regression), or sometimes OD as a function of dose (inverse prediction). The application of a simple linear regression fit is an invalid method because the heteroscedasticity of the data is not taken into account. This could lead to erroneous results originating from the calibration process itself and thus to a lower accuracy. In this work, we compare the ordinary least squares (OLS) inverse regression method with the correct weighted least squares (WLS) inverse prediction method to create calibration curves. We found that the OLS inverse regression method could lead to a prediction bias of up to 7.3 cGy at 300 cGy and total prediction errors of 3% or more for Gafchromic EBT film. Application of the WLS inverse prediction method resulted in a maximum prediction bias of 1.4 cGy and total prediction errors below 2% in a 0-400 cGy range. We developed a Monte-Carlo-based process to optimize calibrations, depending on the needs of the experiment. This type of thorough analysis can lead to a higher accuracy for film dosimetry.
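
The difference between the two fitting schemes is easiest to see in code. A toy version with a linear OD-dose relation and a variance model that is assumed known; the film response and noise numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(4)
dose = np.repeat(np.arange(0, 401, 50), 5).astype(float)        # cGy, replicated
od = 0.1 + 0.004 * dose + rng.normal(0, 0.002 + 1e-5 * dose)    # heteroscedastic OD

# OLS inverse regression: fit dose directly as a function of OD, ignoring
# the heteroscedasticity of the OD readings.
a1, a0 = np.polyfit(od, dose, 1)
dose_ols = lambda y: a0 + a1 * y

# WLS inverse prediction: fit OD as a function of dose with weights 1/var(OD),
# then invert the fitted calibration curve.
w = 1.0 / (0.002 + 1e-5 * dose) ** 2          # assumed-known variance model
X = np.c_[np.ones_like(dose), dose]
b = np.linalg.solve(X.T * w @ X, X.T @ (w * od))
dose_wls = lambda y: (y - b[0]) / b[1]
```
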

  10. Use of two-part regression calibration model to correct for measurement error in episodically consumed foods in a single-replicate study design: EPIC case study.

    Science.gov (United States)

    Agogo, George O; van der Voet, Hilko; van't Veer, Pieter; Ferrari, Pietro; Leenders, Max; Muller, David C; Sánchez-Cantalejo, Emilio; Bamia, Christina; Braaten, Tonje; Knüppel, Sven; Johansson, Ingegerd; van Eeuwijk, Fred A; Boshuizen, Hendriek

    2014-01-01

    In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. To apply regression calibration, unbiased reference measurements are required. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges in the calibration model. We adapted the two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We showed how to handle excess zero reference measurements by a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with the generalized additive modeling (GAM) and the empirical logit approaches, and how to select covariates in the calibration model. The performance of the two-part calibration model was compared with its one-part counterpart. We used vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study. In EPIC, reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in an approximately threefold increase in the strength of association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting the two-part calibration model. Moreover, the extent of adjusting for error is influenced by the number and forms of covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to the response distribution, nonlinearity, and covariate inclusion when specifying the calibration model.
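
The two-part idea — a logistic model for whether the reference measurement is nonzero, multiplied by a linear model for the amount when it is — can be sketched as follows. This is a synthetic stand-in for the EPIC setting; the distributions and coefficients are invented:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3000
ffq = rng.gamma(2.0, 50.0, n)                       # error-prone FFQ intake
z = (ffq - ffq.mean()) / ffq.std()                  # standardized covariate
p_true = 1 / (1 + np.exp(-(0.5 + 1.5 * z)))         # P(nonzero 24h recall)
consumed = rng.binomial(1, p_true)
amount = np.where(consumed,
                  np.maximum(0.8 * ffq + rng.normal(0, 20, n), 0.0), 0.0)

Z = np.c_[np.ones(n), z]

# Part 1: logistic model for the probability of a nonzero reference
# measurement, fitted by Newton-Raphson.
b = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-Z @ b))
    b += np.linalg.solve(Z.T * (p * (1 - p)) @ Z, Z.T @ (consumed - p))

# Part 2: linear model for the consumed amount among nonzero measurements.
pos = amount > 0
g, *_ = np.linalg.lstsq(Z[pos], amount[pos], rcond=None)

# Calibrated intake = P(consume | covariates) * E(amount | covariates, consumed).
calibrated = (1 / (1 + np.exp(-Z @ b))) * (Z @ g)
```
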

  11. A graphical method to evaluate spectral preprocessing in multivariate regression calibrations: example with Savitzky-Golay filters and partial least squares regression.

    Science.gov (United States)

    Delwiche, Stephen R; Reeves, James B

    2010-01-01

    In multivariate regression analysis of spectroscopy data, spectral preprocessing is often performed to reduce unwanted background information (offsets, sloped baselines) or accentuate absorption features in intrinsically overlapping bands. These procedures, also known as pretreatments, are commonly smoothing operations or derivatives. While such operations are often useful in reducing the number of latent variables of the actual decomposition and lowering residual error, they also run the risk of misleading the practitioner into accepting calibration equations that are poorly adapted to samples outside of the calibration. The current study developed a graphical method to examine this effect on partial least squares (PLS) regression calibrations of near-infrared (NIR) reflection spectra of ground wheat meal with two analytes, protein content and sodium dodecyl sulfate sedimentation (SDS) volume (an indicator of the quantity of the gluten proteins that contribute to strong doughs). These two properties were chosen because of their differing abilities to be modeled by NIR spectroscopy: excellent for protein content, fair for SDS sedimentation volume. To further demonstrate the potential pitfalls of preprocessing, an artificial component, a randomly generated value, was included in PLS regression trials. Savitzky-Golay (digital filter) smoothing, first-derivative, and second-derivative preprocess functions (5 to 25 centrally symmetric convolution points, derived from quadratic polynomials) were applied to PLS calibrations of 1 to 15 factors. The results demonstrated the danger of an overreliance on preprocessing when (1) the number of samples used in a multivariate calibration is low (<50), (2) the spectral response of the analyte is weak, and (3) the goodness of the calibration is based on the coefficient of determination (R(2)) rather than a term based on residual error.
The graphical method has application to the evaluation of other preprocess functions and various
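
For readers who want to reproduce this kind of preprocessing, a Savitzky-Golay smoother is just a sliding least-squares polynomial fit. A compact numpy version, with edges left unfiltered for brevity (for real work one would typically reach for `scipy.signal.savgol_filter`):

```python
import numpy as np

def savgol(y, window, order):
    """Savitzky-Golay smoothing: fit a polynomial of the given order in each
    centrally symmetric window and keep its value at the window centre."""
    half = window // 2
    t = np.arange(-half, half + 1)
    A = np.vander(t, order + 1, increasing=True)
    kernel = np.linalg.pinv(A)[0]      # row that evaluates the fit at t = 0
    out = np.array(y, dtype=float)
    for i in range(half, len(y) - half):
        out[i] = kernel @ out[i - half:i + half + 1] * 0 + kernel @ np.asarray(y, dtype=float)[i - half:i + half + 1]
    return out
```

A quadratic Savitzky-Golay filter reproduces any quadratic signal exactly at interior points, which is a quick sanity check on the kernel.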

  12. Linear Calibration – Is It so Simple?

    International Nuclear Information System (INIS)

    Arsova, Diana; Babanova, Sofia; Mandjukov, Petko

    2009-01-01

    The calibration procedure is an important part of instrumental analysis. Usually it is not the major uncertainty source in the whole analytical procedure; however, improper calibration may cause a significant bias of the analytical results from the real (certified) value. Standard Gaussian linear regression is the most frequently used mathematical approach for estimating calibration function parameters. The present article discusses some less popular methods that are highly recommended in certain cases, such as weighted regression, orthogonal regression, robust regression, and bracketing calibration, and also presents some useful approximations. Special attention is paid to the statistical criteria to be used for selecting a proper calibration model. A standard UV-VIS spectrometric procedure for the determination of phosphates in water is used as a practical example. Several different approaches for estimating the contribution of calibration to the overall uncertainty of the analytical result are presented and compared.
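
Of the alternatives listed, orthogonal regression has a convenient closed form when the error variances on the two axes are taken as equal (the standard Deming-regression slope with variance ratio 1). A sketch on invented phosphate-like calibration data:

```python
import numpy as np

rng = np.random.default_rng(6)
conc = np.linspace(0.5, 5.0, 20)               # phosphate standards, mg/L
x = conc + rng.normal(0, 0.05, 20)             # both axes carry error
y = 0.2 * conc + rng.normal(0, 0.01, 20)       # measured absorbance

sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
sxy = np.cov(x, y, ddof=1)[0, 1]

# Orthogonal (Deming, variance ratio 1) regression slope and intercept:
slope = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
intercept = y.mean() - slope * x.mean()
```
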

  13. Radioligand assays - methods and applications. IV. Uniform regression of hyperbolic and linear radioimmunoassay calibration curves

    Energy Technology Data Exchange (ETDEWEB)

    Keilacker, H; Becker, G; Ziegler, M; Gottschling, H D [Zentralinstitut fuer Diabetes, Karlsburg (German Democratic Republic)

    1980-10-01

    In order to handle all types of radioimmunoassay (RIA) calibration curves obtained in the authors' laboratory in the same way, they tried to find a non-linear expression for their regression which allows calibration curves with different degrees of curvature to be fitted. Considering the two boundary cases of the incubation protocol they derived a hyperbolic inverse regression function: x = a₁y + a₀ + a₋₁y⁻¹, where x is the total concentration of antigen, the aᵢ are constants, and y is the specifically bound radioactivity. An RIA evaluation procedure based on this function is described, providing a fitted inverse RIA calibration curve and some statistical quality parameters. The latter are of an order which is normal for RIA systems. There is an excellent agreement between fitted and experimentally obtained calibration curves having a different degree of curvature.
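
The quoted inverse regression function is linear in its three constants, so fitting it is ordinary least squares on the basis (y, 1, 1/y). A sketch with invented constants and noise:

```python
import numpy as np

rng = np.random.default_rng(7)
y = np.linspace(500.0, 8000.0, 15)       # specifically bound radioactivity, cpm
a1, a0, am1 = 0.002, -1.0, 3000.0        # hypothetical constants
x = a1 * y + a0 + am1 / y                # total antigen concentration

# Fit x = a1*y + a0 + a(-1)/y by linear least squares on (y, 1, 1/y):
B = np.c_[y, np.ones_like(y), 1.0 / y]
coef, *_ = np.linalg.lstsq(B, x + rng.normal(0, 0.01, 15), rcond=None)
```
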

  14. Wind Tunnel Strain-Gage Balance Calibration Data Analysis Using a Weighted Least Squares Approach

    Science.gov (United States)

    Ulbrich, N.; Volden, T.

    2017-01-01

    A new approach is presented that uses a weighted least squares fit to analyze wind tunnel strain-gage balance calibration data. The weighted least squares fit is specifically designed to increase the influence of single-component loadings during the regression analysis. The weighted least squares fit also reduces the impact of calibration load schedule asymmetries on the predicted primary sensitivities of the balance gages. A weighting factor between zero and one is assigned to each calibration data point that depends on a simple count of its intentionally loaded load components or gages. The greater the number of a data point's intentionally loaded load components or gages is, the smaller its weighting factor becomes. The proposed approach is applicable to both the Iterative and Non-Iterative Methods that are used for the analysis of strain-gage balance calibration data in the aerospace testing community. The Iterative Method uses a reasonable estimate of the tare corrected load set as input for the determination of the weighting factors. The Non-Iterative Method, on the other hand, uses gage output differences relative to the natural zeros as input for the determination of the weighting factors. Machine calibration data of a six-component force balance is used to illustrate benefits of the proposed weighted least squares fit. In addition, a detailed derivation of the PRESS residuals associated with a weighted least squares fit is given in the appendices of the paper as this information could not be found in the literature. These PRESS residuals may be needed to evaluate the predictive capabilities of the final regression models that result from a weighted least squares fit of the balance calibration data.
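
The PRESS residuals for a weighted fit follow the same leave-one-out shortcut as in ordinary least squares, with the hat matrix built from the weighted normal equations. A sketch (the weights here are arbitrary, not the paper's count-based factors):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 60
X = np.c_[np.ones(n), rng.uniform(-1, 1, n)]
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.1, n)
w = rng.uniform(0.2, 1.0, n)               # weighting factor per data point

# Weighted least squares fit and its hat matrix H = X (X'WX)^-1 X'W:
XtW = X.T * w
beta = np.linalg.solve(XtW @ X, XtW @ y)
H = X @ np.linalg.solve(XtW @ X, XtW)
e = y - X @ beta

# PRESS residuals: exact leave-one-out prediction errors without refitting.
press = e / (1.0 - np.diag(H))
```

The Sherman-Morrison identity makes `press[i]` exactly equal to the prediction error at point i of a fit that omits point i, which is easy to verify by an explicit refit.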

  15. A computational approach to compare regression modelling strategies in prediction research.

    Science.gov (United States)

    Pajouheshnia, Romin; Pestman, Wiebe R; Teerenstra, Steven; Groenwold, Rolf H H

    2016-08-25

    It is often unclear which approach to fit, assess and adjust a model will yield the most accurate prediction model. We present an extension of an approach for comparing modelling strategies in linear regression to the setting of logistic regression and demonstrate its application in clinical prediction research. A framework for comparing logistic regression modelling strategies by their likelihoods was formulated using a wrapper approach. Five different strategies for modelling, including simple shrinkage methods, were compared in four empirical data sets to illustrate the concept of a priori strategy comparison. Simulations were performed in both randomly generated data and empirical data to investigate the influence of data characteristics on strategy performance. We applied the comparison framework in a case study setting. Optimal strategies were selected based on the results of a priori comparisons in a clinical data set and the performance of models built according to each strategy was assessed using the Brier score and calibration plots. The performance of modelling strategies was highly dependent on the characteristics of the development data in both linear and logistic regression settings. A priori comparisons in four empirical data sets found that no strategy consistently outperformed the others. The percentage of times that a model adjustment strategy outperformed a logistic model ranged from 3.9 to 94.9 %, depending on the strategy and data set. However, in our case study setting the a priori selection of optimal methods did not result in detectable improvement in model performance when assessed in an external data set. The performance of prediction modelling strategies is a data-dependent process and can be highly variable between data sets within the same clinical domain. A priori strategy comparison can be used to determine an optimal logistic regression modelling strategy for a given data set before selecting a final modelling approach.
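
Of the assessment tools mentioned, the Brier score is one line: the mean squared difference between predicted probabilities and binary outcomes. A minimal sketch with synthetic, perfectly calibrated predictions:

```python
import numpy as np

rng = np.random.default_rng(11)
p = rng.uniform(0.0, 1.0, 2000)     # predicted probabilities
y = rng.binomial(1, p)              # outcomes drawn from those probabilities

# Brier score: 0 is perfect; always predicting 0.5 scores 0.25.
brier = np.mean((p - y) ** 2)
```

For perfectly calibrated uniform predictions the expected score is E[p(1 - p)] = 1/6 ≈ 0.167, which the simulation reproduces.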

  16. Correcting for multivariate measurement error by regression calibration in meta-analyses of epidemiological studies

    DEFF Research Database (Denmark)

    Tybjærg-Hansen, Anne

    2009-01-01

    Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements of the risk factors are observed on a subsample. We extend the multivariate RC techniques to a meta-analysis framework where multiple studies provide independent repeat measurements and information on disease outcome. We consider the cases where some or all studies have repeat measurements, and compare study-specific, averaged and empirical Bayes estimates of RC parameters. Additionally, we allow for binary covariates (e.g. smoking status) and for uncertainty and time trends in the measurement error corrections. Our methods are illustrated using a subset of individual participant data from prospective long-term studies...

  17. A statistical approach to instrument calibration

    Science.gov (United States)

    Robert R. Ziemer; David Strauss

    1978-01-01

    Summary - It has been found that two instruments will yield different numerical values when used to measure identical points. A statistical approach is presented that can be used to approximate the error associated with the calibration of instruments. Included are standard statistical tests that can be used to determine if a number of successive calibrations of the...

  18. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying; Carroll, Raymond J.

    2009-01-01

    The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a

  19. The regression-calibration method for fitting generalized linear models with additive measurement error

    OpenAIRE

    James W. Hardin; Henrik Schmiediche; Raymond J. Carroll

    2003-01-01

    This paper discusses and illustrates the method of regression calibration. This is a straightforward technique for fitting models with additive measurement error. We present this discussion in terms of generalized linear models (GLMs) following the notation defined in Hardin and Carroll (2003). Discussion will include specified measurement error, measurement error estimated by replicate error-prone proxies, and measurement error estimated by instrumental variables. The discussion focuses on s...
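
In the simplest case — a linear model with additive error and replicate error-prone proxies — regression calibration reduces to dividing the naive slope by the estimated reliability ratio. A sketch with invented values:

```python
import numpy as np

rng = np.random.default_rng(9)
n, k = 2000, 2
x = rng.normal(10.0, 2.0, n)                     # true covariate
w = x[:, None] + rng.normal(0.0, 1.5, (n, k))    # replicate error-prone proxies
y = 1.0 + 0.5 * x + rng.normal(0.0, 1.0, n)

wbar = w.mean(axis=1)
naive = np.polyfit(wbar, y, 1)[0]                # attenuated slope

# Regression calibration: rescale the naive slope by the reliability ratio
# lambda = var(X) / var(Wbar), estimated from the replicates.
var_u = np.mean(np.var(w, axis=1, ddof=1))       # within-person error variance
lam = (np.var(wbar, ddof=1) - var_u / k) / np.var(wbar, ddof=1)
beta_rc = naive / lam
```
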

  20. Principal components based support vector regression model for on-line instrument calibration monitoring in NPPs

    International Nuclear Information System (INIS)

    Seo, In Yong; Ha, Bok Nam; Lee, Sung Woo; Shin, Chang Hoon; Kim, Seong Jun

    2010-01-01

    In nuclear power plants (NPPs), periodic sensor calibrations are required to assure that sensors are operating correctly. Because the sensors' operating status is checked only at every fuel outage, faulty sensors may remain undetected for periods of up to 24 months. Moreover, typically only a few of the calibrated sensors are actually found to be faulty. For the safe operation of NPPs and the reduction of unnecessary calibration, on-line instrument calibration monitoring is needed. In this study, principal-component-based auto-associative support vector regression (PCSVR) using response surface methodology (RSM) is proposed for the sensor signal validation of NPPs. This paper describes the design of a PCSVR-based sensor validation system for a power generation system. RSM is employed to determine the optimal values of the SVR hyperparameters and is compared to the genetic algorithm (GA). The proposed PCSVR model is confirmed with actual plant data from Kori Nuclear Power Plant Unit 3 and is compared with the auto-associative support vector regression (AASVR) and the auto-associative neural network (AANN) models. The auto-sensitivity of AASVR is improved by around six times by using PCA, resulting in good detection of sensor drift. Compared to AANN, accuracy and cross-sensitivity are better, while the auto-sensitivity is almost the same. Meanwhile, the proposed RSM for the optimization of the PCSVR algorithm performs even better in terms of accuracy, auto-sensitivity, and averaged maximum error, except in averaged RMS error, and this method is much more time efficient compared to the conventional GA method.

  1. Fuzzy multiple linear regression: A computational approach

    Science.gov (United States)

    Juang, C. H.; Huang, X. H.; Fleming, J. W.

    1992-01-01

    This paper presents a new computational approach for performing fuzzy regression. In contrast to Bardossy's approach, the new approach, while dealing with fuzzy variables, closely follows the conventional regression technique. In this approach, treatment of fuzzy input is more 'computational' than 'symbolic.' The following sections first outline the formulation of the new approach, then deal with the implementation and computational scheme, and this is followed by examples to illustrate the new procedure.

  2. A Comparative Investigation of the Combined Effects of Pre-Processing, Wavelength Selection, and Regression Methods on Near-Infrared Calibration Model Performance.

    Science.gov (United States)

    Wan, Jian; Chen, Yi-Chieh; Morris, A Julian; Thennadil, Suresh N

    2017-07-01

    Near-infrared (NIR) spectroscopy is being widely used in various fields ranging from pharmaceutics to the food industry for analyzing chemical and physical properties of the substances concerned. Its advantages over other analytical techniques include available physical interpretation of spectral data, nondestructive nature and high speed of measurements, and little or no need for sample preparation. The successful application of NIR spectroscopy relies on three main aspects: pre-processing of spectral data to eliminate nonlinear variations due to temperature, light scattering effects and many others, selection of those wavelengths that contribute useful information, and identification of suitable calibration models using linear/nonlinear regression. Several methods have been developed for each of these three aspects and many comparative studies of different methods exist for an individual aspect or some combinations. However, there is still a lack of comparative studies for the interactions among these three aspects, which can shed light on what role each aspect plays in the calibration and how to combine various methods of each aspect together to obtain the best calibration model. This paper aims to provide such a comparative study based on four benchmark data sets using three typical pre-processing methods, namely, orthogonal signal correction (OSC), extended multiplicative signal correction (EMSC) and optical path-length estimation and correction (OPLEC); two existing wavelength selection methods, namely, stepwise forward selection (SFS) and genetic algorithm optimization combined with partial least squares regression for spectral data (GAPLSSP); four popular regression methods, namely, partial least squares (PLS), least absolute shrinkage and selection operator (LASSO), least squares support vector machine (LS-SVM), and Gaussian process regression (GPR).
The comparative study indicates that, in general, pre-processing of spectral data can play a significant
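The pre-processing family this record compares (OSC, EMSC, OPLEC) can be illustrated with plain multiplicative scatter correction, the simple ancestor of EMSC. The sketch below is a generic Python illustration, not the paper's implementation:

```python
import numpy as np

def msc(spectra, reference=None):
    """Multiplicative scatter correction: regress each spectrum on a
    reference spectrum and remove the fitted additive offset and
    multiplicative gain."""
    ref = spectra.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        gain, offset = np.polyfit(ref, s, 1)   # s ~ gain * ref + offset
        corrected[i] = (s - offset) / gain
    return corrected
```

EMSC extends this by adding polynomial baseline and known-constituent terms to the per-spectrum regression.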

  3. bayesQR: A Bayesian Approach to Quantile Regression

    Directory of Open Access Journals (Sweden)

    Dries F. Benoit

    2017-01-01

Full Text Available After its introduction by Koenker and Bassett (1978), quantile regression has become an important and popular tool to investigate the conditional response distribution in regression. The R package bayesQR contains a number of routines to estimate quantile regression parameters using a Bayesian approach based on the asymmetric Laplace distribution. The package contains functions for the typical quantile regression with continuous dependent variable, but also supports quantile regression for binary dependent variables. For both types of dependent variables, an approach to variable selection using the adaptive lasso approach is provided. For the binary quantile regression model, the package also contains a routine that calculates the fitted probabilities for each vector of predictors. In addition, functions for summarizing the results, creating traceplots, posterior histograms and drawing quantile plots are included. This paper starts with a brief overview of the theoretical background of the models used in the bayesQR package. The main part of this paper discusses the computational problems that arise in the implementation of the procedure and illustrates the usefulness of the package through selected examples.
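The asymmetric Laplace connection can be seen in miniature without the R package: maximizing an asymmetric Laplace likelihood is equivalent to minimizing the "check" (pinball) loss. A minimal Python sketch on simulated data (this is the frequentist analogue, not bayesQR's MCMC sampler):

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(beta, X, y, tau):
    """Pinball ('check') loss; exp(-check_loss) is the kernel of the
    asymmetric Laplace likelihood that bayesQR builds on."""
    u = y - X @ beta
    return np.sum(u * (tau - (u < 0)))

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0, 10, n)
y = 1.0 + 2.0 * x + rng.normal(0, 1 + 0.3 * x)   # noise grows with x
X = np.column_stack([np.ones(n), x])

ols = np.linalg.lstsq(X, y, rcond=None)[0]        # warm start
fits = {tau: minimize(check_loss, ols, args=(X, y, tau),
                      method="Nelder-Mead").x for tau in (0.25, 0.5, 0.75)}
```

Because the noise scale increases with x, the fitted slope rises with the quantile level tau, which a single mean regression would miss.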

  4. Local Strategy Combined with a Wavelength Selection Method for Multivariate Calibration

    Directory of Open Access Journals (Sweden)

    Haitao Chang

    2016-06-01

Full Text Available One of the essential factors influencing the prediction accuracy of multivariate calibration models is the quality of the calibration data. A local regression strategy, together with a wavelength selection approach, is proposed to build the multivariate calibration models based on partial least squares regression. The local algorithm is applied to create a calibration set of spectra similar to the spectrum of an unknown sample; the synthetic degree of grey relation coefficient is used to evaluate the similarity. A wavelength selection method based on simple-to-use interactive self-modeling mixture analysis minimizes the influence of noisy variables, and the most informative variables of the most similar samples are selected to build the multivariate calibration model based on partial least squares regression. To validate the performance of the proposed method, ultraviolet-visible absorbance spectra of mixed solutions of food coloring analytes in a concentration range of 20–200 µg/mL are measured. Experimental results show that the proposed method can not only enhance the prediction accuracy of the calibration model, but also greatly reduce its complexity.
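A minimal sketch of the local strategy in Python, with plain correlation standing in for the synthetic grey relation coefficient and a minimum-norm least-squares fit standing in for the PLS step (simulated spectra, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.linspace(0.0, 1.0, 120)
peak = np.exp(-((wavelengths - 0.5) ** 2) / 0.01)        # pure-component band
conc = rng.uniform(1.0, 10.0, 100)                        # calibration set
X_cal = np.outer(conc, peak) + 0.01 * rng.normal(size=(100, 120))

def local_predict(X_cal, y_cal, x_new, k=30):
    """Select the k calibration spectra most similar to the unknown one,
    then fit a least-squares model on that local subset only."""
    sim = np.array([np.corrcoef(row, x_new)[0, 1] for row in X_cal])
    local = np.argsort(sim)[-k:]
    beta = np.linalg.pinv(X_cal[local]) @ y_cal[local]
    return float(x_new @ beta)

x_new = 5.0 * peak + 0.01 * rng.normal(size=120)
pred = local_predict(X_cal, conc, x_new)
```

The paper's wavelength selection step would additionally restrict the columns of X_cal to the most informative variables before the local fit.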

  5. The Efficiency of OLS Estimators of Structural Parameters in a Simple Linear Regression Model in the Calibration of the Averages Scheme

    Directory of Open Access Journals (Sweden)

    Kowal Robert

    2016-12-01

Full Text Available A simple linear regression model is one of the pillars of classic econometrics, and multiple areas of research function within its scope. One of the model's fundamental questions concerns proving the efficiency of the most commonly used OLS estimators and examining their properties. The literature offers partial treatments of this question, methodically borrowed from the multiple regression model or from a boundary partial model; not everything there, however, is complete and consistent. In this paper a completely new scheme is proposed, based on applying the Cauchy-Schwarz inequality to a constraint aggregated from appropriately calibrated secondary unbiasedness constraints; choosing a suitable calibrator for each variable then leads directly to a proof of this property. The choice of such a calibrator is a separate matter. Owing to the volume and the kinds of calibration involved, these deliberations are divided into a few parts. In this one, the efficiency of OLS estimators is proven in a mixed scheme of calibration by averages, that is, in the most basic, preliminary frame of the proposed methodology. Within this frame, the outlines and general premises underlying further generalizations are established.

  6. Nucleonic gauges in Poland and new approach to their calibration

    International Nuclear Information System (INIS)

    Urbanski, P.

    2000-01-01

The current status of manufacturing and application of radioisotope gauges in Poland is presented. The metrological performance of the gauges is briefly described and their expected future prospects on the market of industrial measuring instruments are discussed. Progress in electronic engineering and the widespread use of microprocessor systems in radioisotope gauges have made it possible to apply sophisticated methods of signal processing and data treatment, such as multivariate statistical analysis. Some examples of the multivariate calibration of nucleonic gauges are presented, showing the application of partial least squares regression (PLS) and artificial neural networks (ANN) to gauge calibration. (author)

  7. Evaluation of in-line Raman data for end-point determination of a coating process: Comparison of Science-Based Calibration, PLS-regression and univariate data analysis.

    Science.gov (United States)

    Barimani, Shirin; Kleinebudde, Peter

    2017-10-01

A multivariate analysis method, Science-Based Calibration (SBC), was used for the first time for endpoint determination of a tablet coating process using Raman data. Two types of tablet cores, placebo and caffeine cores, received a coating suspension comprising a polyvinyl alcohol-polyethylene glycol graft-copolymer and titanium dioxide to a maximum coating thickness of 80 µm. Raman spectroscopy was used as an in-line PAT tool. The spectra were acquired every minute and correlated to the amount of applied aqueous coating suspension. SBC was compared to another well-known multivariate analysis method, Partial Least Squares-regression (PLS), and a simpler approach, Univariate Data Analysis (UVDA). All developed calibration models had coefficient of determination values (R²) higher than 0.99. The coating endpoints could be predicted with root mean square errors of prediction (RMSEP) less than 3.1% of the applied coating suspensions. Compared to PLS and UVDA, SBC proved to be an alternative multivariate calibration method with high predictive power. Copyright © 2017 Elsevier B.V. All rights reserved.
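The two figures of merit quoted above can be computed as follows; normalizing RMSEP by the mean applied amount is an assumption here, since the abstract only says RMSEP is reported as a percentage of the applied coating suspension:

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def rmsep_percent(y_true, y_pred):
    """Root mean square error of prediction, as a percentage of the mean
    applied amount (normalization convention assumed)."""
    return 100.0 * np.sqrt(np.mean((y_true - y_pred) ** 2)) / np.mean(y_true)
```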

  8. Iterative Calibration: A Novel Approach for Calibrating the Molecular Clock Using Complex Geological Events.

    Science.gov (United States)

    Loeza-Quintana, Tzitziki; Adamowicz, Sarah J

    2018-02-01

    During the past 50 years, the molecular clock has become one of the main tools for providing a time scale for the history of life. In the era of robust molecular evolutionary analysis, clock calibration is still one of the most basic steps needing attention. When fossil records are limited, well-dated geological events are the main resource for calibration. However, biogeographic calibrations have often been used in a simplistic manner, for example assuming simultaneous vicariant divergence of multiple sister lineages. Here, we propose a novel iterative calibration approach to define the most appropriate calibration date by seeking congruence between the dates assigned to multiple allopatric divergences and the geological history. Exploring patterns of molecular divergence in 16 trans-Bering sister clades of echinoderms, we demonstrate that the iterative calibration is predominantly advantageous when using complex geological or climatological events-such as the opening/reclosure of the Bering Strait-providing a powerful tool for clock dating that can be applied to other biogeographic calibration systems and further taxa. Using Bayesian analysis, we observed that evolutionary rate variability in the COI-5P gene is generally distributed in a clock-like fashion for Northern echinoderms. The results reveal a large range of genetic divergences, consistent with multiple pulses of trans-Bering migrations. A resulting rate of 2.8% pairwise Kimura-2-parameter sequence divergence per million years is suggested for the COI-5P gene in Northern echinoderms. Given that molecular rates may vary across latitudes and taxa, this study provides a new context for dating the evolutionary history of Arctic marine life.
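Given the calibrated rate, converting an observed divergence into an approximate date is a one-line calculation; the 2.8% per million years figure is the paper's, while the helper name and interface below are illustrative:

```python
def divergence_time_my(divergence_pct, rate_pct_per_my=2.8):
    """Approximate divergence time (million years) from a pairwise K2P
    divergence (%), using the 2.8%/My COI-5P rate reported for Northern
    echinoderms."""
    return divergence_pct / rate_pct_per_my
```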

  9. A Novel Approach to Calibrating Multifunctional Binocular Stereovision Sensor

    International Nuclear Information System (INIS)

    Xue, T; Zhu, J G; Wu, B; Ye, S H

    2006-01-01

We present a novel multifunctional binocular stereovision sensor for various three-dimensional (3D) inspection tasks. It not only avoids the so-called correspondence problem of passive stereo vision, but also possesses a uniform mathematical model. We also propose a novel approach to estimating all the sensor parameters using a freely positioned planar reference object. In this technique, the planar pattern can be moved freely by hand. All camera intrinsic and extrinsic parameters, together with the coefficients of radial and tangential lens distortion, are estimated, and the sensor parameters are calibrated based on the 3D measurement model and optimized with the feature point constraint algorithm using the same views as in the camera calibration stage. The proposed approach greatly reduces the cost of the calibration equipment, and it is flexible and practical for vision measurement. Experiments show that the method achieves high precision, with the sensor's relative error for spatial length measurement better than 0.3%.

  10. The Wally plot approach to assess the calibration of clinical prediction models.

    Science.gov (United States)

    Blanche, Paul; Gerds, Thomas A; Ekstrøm, Claus T

    2017-12-06

A prediction model is calibrated if, roughly, for any percentage x we can expect that x subjects out of 100 experience the event among all subjects that have a predicted risk of x%. Typically, the calibration assumption is assessed graphically, but in practice it is often challenging to judge whether a "disappointing" calibration plot is the consequence of a departure from the calibration assumption or just "bad luck" due to sampling variability. To address this issue, we propose a graphical approach which visualizes how well a calibration plot agrees with the calibration assumption. The approach is mainly based on the idea of generating new plots which mimic the available data under the calibration assumption. The method handles the common non-trivial situations in which the data contain censored observations and occurrences of competing events. This is done by building on ideas from constrained non-parametric maximum likelihood estimation methods. Two examples from large cohort data illustrate our proposal. The 'wally' R package is provided to make the methodology easily usable.
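The core idea, generating "mimic" plots under the calibration assumption, can be sketched for the simple uncensored binary case (the actual wally package handles censoring and competing risks, which this toy version does not):

```python
import numpy as np

def calibration_curve(p, y, bins=10):
    """Mean predicted risk vs observed event rate in risk-quantile bins."""
    edges = np.quantile(p, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(p, edges[1:-1]), 0, bins - 1)
    return np.array([[p[idx == b].mean(), y[idx == b].mean()]
                     for b in range(bins) if np.any(idx == b)])

rng = np.random.default_rng(1)
p = rng.uniform(0.05, 0.95, 2000)   # predicted risks
y = rng.binomial(1, p)              # outcomes generated *under* calibration
observed = calibration_curve(p, y)
# "Mimic" curves: new outcomes simulated under the calibration assumption,
# to be compared visually against the observed curve
mimics = [calibration_curve(p, rng.binomial(1, p)) for _ in range(8)]
```

If the observed curve looks no more "disappointing" than the mimics, the deviations are plausibly just sampling variability.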

  11. Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation

    Energy Technology Data Exchange (ETDEWEB)

Robertson, Joseph [National Renewable Energy Lab. (NREL), Golden, CO (United States)]; Polly, Ben [National Renewable Energy Lab. (NREL), Golden, CO (United States)]; Collis, Jon [Colorado School of Mines, Golden, CO (United States)]

    2013-09-01

    This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define "explicit" input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.
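The simplest of the four methods, output ratio calibration, amounts to a single scaling factor; a sketch under the assumption that the ratio is taken over total consumption:

```python
def output_ratio_calibrate(simulated, measured):
    """Simple output-ratio calibration: scale the simulated utility series
    by the ratio of total measured to total simulated consumption."""
    ratio = sum(measured) / sum(simulated)
    return [s * ratio for s in simulated], ratio
```

The scaled series matches the measured total exactly but cannot correct which inputs were wrong, which is why the study compares it against the optimization-based methods.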

  13. Calibration and Data Analysis of the MC-130 Air Balance

    Science.gov (United States)

    Booth, Dennis; Ulbrich, N.

    2012-01-01

    Design, calibration, calibration analysis, and intended use of the MC-130 air balance are discussed. The MC-130 balance is an 8.0 inch diameter force balance that has two separate internal air flow systems and one external bellows system. The manual calibration of the balance consisted of a total of 1854 data points with both unpressurized and pressurized air flowing through the balance. A subset of 1160 data points was chosen for the calibration data analysis. The regression analysis of the subset was performed using two fundamentally different analysis approaches. First, the data analysis was performed using a recently developed extension of the Iterative Method. This approach fits gage outputs as a function of both applied balance loads and bellows pressures while still allowing the application of the iteration scheme that is used with the Iterative Method. Then, for comparison, the axial force was also analyzed using the Non-Iterative Method. This alternate approach directly fits loads as a function of measured gage outputs and bellows pressures and does not require a load iteration. The regression models used by both the extended Iterative and Non-Iterative Method were constructed such that they met a set of widely accepted statistical quality requirements. These requirements lead to reliable regression models and prevent overfitting of data because they ensure that no hidden near-linear dependencies between regression model terms exist and that only statistically significant terms are included. Finally, a comparison of the axial force residuals was performed. Overall, axial force estimates obtained from both methods show excellent agreement as the differences of the standard deviation of the axial force residuals are on the order of 0.001 % of the axial force capacity.
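A toy version of the Non-Iterative Method: load is regressed directly on gage output and bellows pressure, so prediction requires no iteration. The term selection here is illustrative only; the paper screens candidate terms against statistical significance requirements:

```python
import numpy as np

def fit_non_iterative(r, p, load):
    """Fit load as a direct function of gage output r and bellows
    pressure p, with a few low-order terms."""
    X = np.column_stack([np.ones_like(r), r, p, r * r, r * p])
    coef, *_ = np.linalg.lstsq(X, load, rcond=None)
    return coef

def predict_load(coef, r, p):
    # No load iteration needed: evaluate the fitted polynomial directly
    return coef @ np.array([1.0, r, p, r * r, r * p])
```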

  14. Bayesian Nonparametric Regression Analysis of Data with Random Effects Covariates from Longitudinal Measurements

    KAUST Repository

    Ryu, Duchwan; Li, Erning; Mallick, Bani K.

    2010-01-01

    " approach and the regression calibration, via simulations and by an application that investigates the relationship between obesity in adulthood and childhood growth curves. © 2010, The International Biometric Society.

  15. Forecasting exchange rates: a robust regression approach

    OpenAIRE

    Preminger, Arie; Franck, Raphael

    2005-01-01

The least squares estimation method, as well as other ordinary estimation methods for regression models, can be severely affected by a small number of outliers, thus providing poor out-of-sample forecasts. This paper suggests a robust regression approach, based on the S-estimation method, to construct forecasting models that are less sensitive to data contamination by outliers. A robust linear autoregressive (RAR) and a robust neural network (RNN) model are estimated to study the predictabil...
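A sketch of the robust-fitting idea in Python, using iteratively reweighted least squares with Huber weights as a simpler stand-in for the paper's S-estimation (same goal: down-weight outliers so they cannot dominate the fit):

```python
import numpy as np

def huber_line_fit(x, y, k=1.345, iters=50):
    """Robust straight-line fit by IRLS with Huber weights."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        resid = y - X @ beta
        scale = np.median(np.abs(resid)) / 0.6745 + 1e-12  # robust scale
        u = np.abs(resid) / scale
        w = np.where(u <= k, 1.0, k / u)                   # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta  # (intercept, slope)
```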

  16. Uncertainty quantification for radiation measurements: Bottom-up error variance estimation using calibration information

    International Nuclear Information System (INIS)

    Burr, T.; Croft, S.; Krieger, T.; Martin, K.; Norman, C.; Walsh, S.

    2016-01-01

    One example of top-down uncertainty quantification (UQ) involves comparing two or more measurements on each of multiple items. One example of bottom-up UQ expresses a measurement result as a function of one or more input variables that have associated errors, such as a measured count rate, which individually (or collectively) can be evaluated for impact on the uncertainty in the resulting measured value. In practice, it is often found that top-down UQ exhibits larger error variances than bottom-up UQ, because some error sources are present in the fielded assay methods used in top-down UQ that are not present (or not recognized) in the assay studies used in bottom-up UQ. One would like better consistency between the two approaches in order to claim understanding of the measurement process. The purpose of this paper is to refine bottom-up uncertainty estimation by using calibration information so that if there are no unknown error sources, the refined bottom-up uncertainty estimate will agree with the top-down uncertainty estimate to within a specified tolerance. Then, in practice, if the top-down uncertainty estimate is larger than the refined bottom-up uncertainty estimate by more than the specified tolerance, there must be omitted sources of error beyond those predicted from calibration uncertainty. The paper develops a refined bottom-up uncertainty approach for four cases of simple linear calibration: (1) inverse regression with negligible error in predictors, (2) inverse regression with non-negligible error in predictors, (3) classical regression followed by inversion with negligible error in predictors, and (4) classical regression followed by inversion with non-negligible errors in predictors. Our illustrations are of general interest, but are drawn from our experience with nuclear material assay by non-destructive assay. The main example we use is gamma spectroscopy that applies the enrichment meter principle. 
Previous papers that ignore error in predictors
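Cases (1) and (3) above, inverse regression versus classical regression followed by inversion, can be contrasted on simulated data with negligible error in the predictors:

```python
import numpy as np

rng = np.random.default_rng(2)
x_std = np.linspace(1.0, 10.0, 20)                          # standard values
y_obs = 0.5 + 2.0 * x_std + rng.normal(0, 0.2, x_std.size)  # responses

# Classical regression followed by inversion: fit y = a + b*x, invert for x
b, a = np.polyfit(x_std, y_obs, 1)
# Inverse regression: fit x = c + d*y directly
d, c = np.polyfit(y_obs, x_std, 1)

y0 = 10.5                      # response measured on an unknown item
x_classical = (y0 - a) / b
x_inverse = c + d * y0
```

With a strong calibration line the two estimates nearly coincide; their uncertainty behavior differs, which is what the refined bottom-up analysis quantifies.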

  17. Fuzzy multinomial logistic regression analysis: A multi-objective programming approach

    Science.gov (United States)

    Abdalla, Hesham A.; El-Sayed, Amany A.; Hamed, Ramadan

    2017-05-01

Parameter estimation for multinomial logistic regression is usually based on maximizing the likelihood function. For large well-balanced datasets, Maximum Likelihood (ML) estimation is a satisfactory approach. Unfortunately, ML can fail completely or at least produce poor results in terms of estimated probabilities and confidence intervals of parameters, especially for small datasets. In this study, a new approach based on fuzzy concepts is proposed to estimate the parameters of multinomial logistic regression. The study assumes that the parameters of multinomial logistic regression are fuzzy. Based on the extension principle stated by Zadeh and Bárdossy's proposition, a multi-objective programming approach is suggested to estimate these fuzzy parameters. A simulation study is used to evaluate the performance of the new approach versus the Maximum Likelihood (ML) approach. Results show that the proposed model outperforms ML in cases of small datasets.

  18. A multi-objective approach to improve SWAT model calibration in alpine catchments

    Science.gov (United States)

    Tuo, Ye; Marcolini, Giorgia; Disse, Markus; Chiogna, Gabriele

    2018-04-01

    Multi-objective hydrological model calibration can represent a valuable solution to reduce model equifinality and parameter uncertainty. The Soil and Water Assessment Tool (SWAT) model is widely applied to investigate water quality and water management issues in alpine catchments. However, the model calibration is generally based on discharge records only, and most of the previous studies have defined a unique set of snow parameters for an entire basin. Only a few studies have considered snow observations to validate model results or have taken into account the possible variability of snow parameters for different subbasins. This work presents and compares three possible calibration approaches. The first two procedures are single-objective calibration procedures, for which all parameters of the SWAT model were calibrated according to river discharge alone. Procedures I and II differ from each other by the assumption used to define snow parameters: The first approach assigned a unique set of snow parameters to the entire basin, whereas the second approach assigned different subbasin-specific sets of snow parameters to each subbasin. The third procedure is a multi-objective calibration, in which we considered snow water equivalent (SWE) information at two different spatial scales (i.e. subbasin and elevation band), in addition to discharge measurements. We tested these approaches in the Upper Adige river basin where a dense network of snow depth measurement stations is available. Only the set of parameters obtained with this multi-objective procedure provided an acceptable prediction of both river discharge and SWE. These findings offer the large community of SWAT users a strategy to improve SWAT modeling in alpine catchments.
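A sketch of the multi-objective idea: combine goodness-of-fit for discharge and SWE into one objective. The equal weighting and the use of Nash-Sutcliffe efficiency are assumptions; the paper's exact aggregation may differ:

```python
def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit."""
    obs_mean = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - obs_mean) ** 2 for o in obs)
    return 1.0 - num / den

def multi_objective(sim_q, obs_q, sim_swe, obs_swe, w=0.5):
    """Weighted combination of discharge and SWE performance, so a
    parameter set is rewarded only if both variables are reproduced."""
    return w * nse(sim_q, obs_q) + (1 - w) * nse(sim_swe, obs_swe)
```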

  19. Stepwise Regression Analysis of MDOE Balance Calibration Data Acquired at DNW

    Science.gov (United States)

    DeLoach, RIchard; Philipsen, Iwan

    2007-01-01

    This paper reports a comparison of two experiment design methods applied in the calibration of a strain-gage balance. One features a 734-point test matrix in which loads are varied systematically according to a method commonly applied in aerospace research and known in the literature of experiment design as One Factor At a Time (OFAT) testing. Two variations of an alternative experiment design were also executed on the same balance, each with different features of an MDOE experiment design. The Modern Design of Experiments (MDOE) is an integrated process of experiment design, execution, and analysis applied at NASA's Langley Research Center to achieve significant reductions in cycle time, direct operating cost, and experimental uncertainty in aerospace research generally and in balance calibration experiments specifically. Personnel in the Instrumentation and Controls Department of the German Dutch Wind Tunnels (DNW) have applied MDOE methods to evaluate them in the calibration of a balance using an automated calibration machine. The data have been sent to Langley Research Center for analysis and comparison. This paper reports key findings from this analysis. The chief result is that a 100-point calibration exploiting MDOE principles delivered quality comparable to a 700+ point OFAT calibration with significantly reduced cycle time and attendant savings in direct and indirect costs. While the DNW test matrices implemented key MDOE principles and produced excellent results, additional MDOE concepts implemented in balance calibrations at Langley Research Center are also identified and described.

  20. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying

    2009-08-27

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.
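The standard regression calibration approach that the proposed method is compared against can be sketched with replicate measurements (simulated data; the true slope is 0.5):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x = rng.normal(5, 1, n)                 # true exposure (unobserved)
w1 = x + rng.normal(0, 1, n)            # two error-prone replicates
w2 = x + rng.normal(0, 1, n)
y = 1.0 + 0.5 * x + rng.normal(0, 0.5, n)

wbar = (w1 + w2) / 2
s2_err = np.mean((w1 - w2) ** 2) / 2            # per-replicate error variance
lam = (wbar.var() - s2_err / 2) / wbar.var()    # reliability of the mean
x_hat = wbar.mean() + lam * (wbar - wbar.mean())  # linear E[X | W]

naive_slope = np.polyfit(wbar, y, 1)[0]   # attenuated by measurement error
rc_slope = np.polyfit(x_hat, y, 1)[0]     # regression calibration estimate
```

Regression calibration recovers the mean slope well, but, as the paper argues, it is not designed for quantile-level estimands; hence the joint estimating equations proposed there.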

  1. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    Science.gov (United States)

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
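A sketch of the recipe for logistic regression: form two equally sized groups whose logits differ by the slope times twice the covariate SD (placed symmetrically around the overall logit, an approximation to keeping the expected number of events unchanged), then apply a standard two-proportion power formula. Function and argument names are ours:

```python
from math import exp, log, sqrt
from statistics import NormalDist

def power_logistic(beta, sigma, p0, n, alpha=0.05):
    """Approximate power for slope beta in logistic regression with a
    covariate of SD sigma, via the equivalent two-sample problem."""
    eta = log(p0 / (1 - p0))
    p1 = 1 / (1 + exp(-(eta - beta * sigma)))   # logits differ by 2*beta*sigma
    p2 = 1 / (1 + exp(-(eta + beta * sigma)))
    pbar = (p1 + p2) / 2
    z = NormalDist().inv_cdf(1 - alpha / 2)
    se0 = sqrt(2 * pbar * (1 - pbar) / (n / 2))               # under H0
    se1 = sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / (n / 2))     # under H1
    return NormalDist().cdf((abs(p2 - p1) - z * se0) / se1)
```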

  2. Application of heuristic and machine-learning approach to engine model calibration

    Science.gov (United States)

    Cheng, Jie; Ryu, Kwang R.; Newman, C. E.; Davis, George C.

    1993-03-01

    Automation of engine model calibration procedures is a very challenging task because (1) the calibration process searches for a goal state in a huge, continuous state space, (2) calibration is often a lengthy and frustrating task because of complicated mutual interference among the target parameters, and (3) the calibration problem is heuristic by nature, and often heuristic knowledge for constraining a search cannot be easily acquired from domain experts. A combined heuristic and machine learning approach has, therefore, been adopted to improve the efficiency of model calibration. We developed an intelligent calibration program called ICALIB. It has been used on a daily basis for engine model applications, and has reduced the time required for model calibrations from many hours to a few minutes on average. In this paper, we describe the heuristic control strategies employed in ICALIB such as a hill-climbing search based on a state distance estimation function, incremental problem solution refinement by using a dynamic tolerance window, and calibration target parameter ordering for guiding the search. In addition, we present the application of a machine learning program called GID3* for automatic acquisition of heuristic rules for ordering target parameters.
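A toy version of the search strategy described: greedy hill-climbing on a state-distance function, with the step size halved when no neighbour improves, a crude stand-in for ICALIB's dynamic tolerance window (parameter ordering and the learned heuristics are omitted):

```python
def hill_climb(distance, params, step=0.5, tol=1e-3, max_iter=1000):
    """Greedy hill-climbing: try +/- step on each parameter, keep any
    improvement, and shrink the step when stuck."""
    best = distance(params)
    for _ in range(max_iter):
        if step <= tol:
            break
        improved = False
        for i in range(len(params)):
            for delta in (step, -step):
                cand = list(params)
                cand[i] += delta
                d = distance(cand)
                if d < best:
                    params, best, improved = cand, d, True
        if not improved:
            step /= 2
    return params, best
```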

  3. Linear regression in astronomy. II

    Science.gov (United States)

    Feigelson, Eric D.; Babu, Gutti J.

    1992-01-01

    A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.
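The intercept-offset idea for two parallel data sets can be sketched as follows; the paper's actual contribution, propagating the slope error from one regression into the other, is omitted here:

```python
import numpy as np

def intercept_offset(x1, y1, x2, y2):
    """Fit a slope on the calibrating sample, hold it fixed for the second
    sample, and return the intercept offset between the parallel lines."""
    slope = np.polyfit(x1, y1, 1)[0]
    a1 = np.mean(y1) - slope * np.mean(x1)
    a2 = np.mean(y2) - slope * np.mean(x2)
    return a2 - a1, slope
```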

  4. Error-in-variables models in calibration

    Science.gov (United States)

    Lira, I.; Grientschnig, D.

    2017-12-01

    In many calibration operations, the stimuli applied to the measuring system or instrument under test are derived from measurement standards whose values may be considered to be perfectly known. In that case, it is assumed that calibration uncertainty arises solely from inexact measurement of the responses, from imperfect control of the calibration process and from the possible inaccuracy of the calibration model. However, the premise that the stimuli are completely known is never strictly fulfilled and in some instances it may be grossly inadequate. Then, error-in-variables (EIV) regression models have to be employed. In metrology, these models have been approached mostly from the frequentist perspective. In contrast, not much guidance is available on their Bayesian analysis. In this paper, we first present a brief summary of the conventional statistical techniques that have been developed to deal with EIV models in calibration. We then proceed to discuss the alternative Bayesian framework under some simplifying assumptions. Through a detailed example about the calibration of an instrument for measuring flow rates, we provide advice on how the user of the calibration function should employ the latter framework for inferring the stimulus acting on the calibrated device when, in use, a certain response is measured.
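The simplest frequentist EIV estimator for a straight line, Deming regression with a known error-variance ratio, illustrates the problem the record describes; this is a conventional-technique sketch, not the paper's Bayesian analysis:

```python
import numpy as np

def deming_fit(x, y, delta=1.0):
    """Deming regression for y = a + b*x when both variables carry error,
    with known ratio delta = var(err_y) / var(err_x)."""
    mx, my = x.mean(), y.mean()
    sxx = ((x - mx) ** 2).mean()
    syy = ((y - my) ** 2).mean()
    sxy = ((x - mx) * (y - my)).mean()
    slope = (syy - delta * sxx + np.sqrt((syy - delta * sxx) ** 2
             + 4 * delta * sxy ** 2)) / (2 * sxy)
    return my - slope * mx, slope
```

Ordinary least squares attenuates the slope when the stimulus is measured with error; Deming regression removes that bias.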

  5. Approaches to Low Fuel Regression Rate in Hybrid Rocket Engines

    Directory of Open Access Journals (Sweden)

    Dario Pastrone

    2012-01-01

Full Text Available Hybrid rocket engines are promising propulsion systems which present appealing features such as safety, low cost, and environmental friendliness. On the other hand, certain issues hamper their hoped-for development. The present paper discusses approaches addressing improvements to one of the most important among these issues: low fuel regression rate. To highlight the consequences of this issue and to better understand the concepts proposed, fundamentals are summarized. Two approaches are presented (multiport grain and high mixture ratio) which aim at reducing negative effects without enhancing the regression rate. Furthermore, fuel material changes and nonconventional geometries of grain and/or injector are presented as methods to increase the fuel regression rate. Although most of these approaches are still at the laboratory or concept scale, many of them are promising.
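For context, the fuel regression rate under discussion follows the classical empirical law r = a·Gox^n; a and n are propellant-specific constants that must be measured, and the values used in the test below are purely illustrative:

```python
def regression_rate(a, n, mdot_ox, port_area):
    """Classical empirical hybrid-fuel law r = a * Gox**n, where Gox is the
    oxidizer mass flux through the port (mass flow / port area)."""
    g_ox = mdot_ox / port_area
    return a * g_ox ** n
```

Because n < 1 in practice, doubling the oxidizer flux raises r by only a factor of 2**n; this sublinearity is one root of the low-regression-rate issue.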

  6. A Calibration to Predict the Concentrations of Impurities in Plutonium Oxide by Prompt Gamma Analysis Revision 2

    Energy Technology Data Exchange (ETDEWEB)

    Narlesky, Joshua Edward [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kelly, Elizabeth J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-09-10

    This report documents the new prompt gamma (PG) calibration regression equations. These calibration equations incorporate new data that have become available since revision 1 of “A Calibration to Predict the Concentrations of Impurities in Plutonium Oxide by Prompt Gamma Analysis” was issued [3]. The calibration equations are based on a weighted least squares (WLS) approach for the regression. The WLS method gives each data point its proper amount of influence over the parameter estimates. This gives two big advantages: more precise parameter estimates and better, more defensible estimates of uncertainties. The WLS approach makes sense both statistically and experimentally because the variances increase with concentration, and there are physical reasons that the higher measurements are less reliable and should be less influential. The new magnesium calibration includes a correction for sodium and separate calibration equations for items with and without chlorine. These additional calibration equations allow for better predictions and smaller uncertainties for sodium in materials with and without chlorine. Chlorine and sodium have separate equations for RICH materials. Again, these equations give better predictions and smaller uncertainties for chlorine and sodium in RICH materials.
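    The WLS estimator underlying such calibration equations has a simple closed form, beta = (X'WX)^-1 X'Wy, with weights usually taken as inverse variances so that the noisier high-concentration measurements are downweighted. A minimal sketch; the function name and example data are illustrative, not from the report.

```python
import numpy as np

def wls_fit(X, y, w):
    """Weighted least squares: minimizes sum(w_i * (y_i - X_i @ beta)**2).

    Taking w_i = 1 / var_i gives each point influence inversely
    proportional to its measurement variance."""
    W = np.diag(np.asarray(w, float))
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```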

  7. A video-based approach to calibrating car-following parameters in VISSIM for urban traffic

    Directory of Open Access Journals (Sweden)

    Zhengyang Lu

    2016-08-01

    Full Text Available Microscopic simulation models need to be calibrated to represent realistic local traffic conditions. Traditional calibration methods are conducted by searching for the model parameter set that minimizes the discrepancies of certain macroscopic metrics between simulation results and field observations. However, this process could easily lead to inappropriate selection of calibration parameters and thus erroneous simulation results. This paper proposes a video-based approach to incorporate direct measurements of car-following parameters into the process of VISSIM model calibration. The proposed method applies automated video processing techniques to extract vehicle trajectory data and utilizes the trajectory data to determine values of certain car-following parameters in VISSIM. This paper first describes the calibration procedure step by step, and then applies the method to a case study of simulating traffic at a signalized intersection in VISSIM. From the field-collected video footage, trajectories of 1229 through-movement vehicles were extracted and analyzed to calibrate three car-following parameters regarding desired speed, desired acceleration, and safe following distance, respectively. The case study demonstrates the advantages and feasibility of the proposed approach.

  8. A game-theoretic approach for calibration of low-cost magnetometers under noise uncertainty

    Science.gov (United States)

    Siddharth, S.; Ali, A. S.; El-Sheimy, N.; Goodall, C. L.; Syed, Z. F.

    2012-02-01

    Pedestrian heading estimation is a fundamental challenge in Global Navigation Satellite System (GNSS)-denied environments. Additionally, the heading observability considerably degrades in low-speed mode of operation (e.g. walking), making this problem even more challenging. The goal of this work is to improve the heading solution when hand-held personal/portable devices, such as cell phones, are used for positioning and to improve the heading estimation in GNSS-denied signal environments. Most smart phones are now equipped with self-contained, low cost, small size and power-efficient sensors, such as magnetometers, gyroscopes and accelerometers. A magnetometer needs calibration before it can be properly employed for navigation purposes. Magnetometers play an important role in absolute heading estimation and are embedded in many smart phones. Before the users navigate with the phone, a calibration is invoked to ensure an improved signal quality. This signal is used later in the heading estimation. In most of the magnetometer-calibration approaches, the motion modes are seldom described to achieve a robust calibration. Also, suitable calibration approaches fail to discuss the stopping criteria for calibration. In this paper, the following three topics are discussed in detail that are important to achieve proper magnetometer-calibration results and in turn the most robust heading solution for the user while taking care of the device misalignment with respect to the user: (a) game-theoretic concepts to attain better filter parameter tuning and robustness in noise uncertainty, (b) best maneuvers with focus on 3D and 2D motion modes and related challenges and (c) investigation of the calibration termination criteria leveraging the calibration robustness and efficiency.
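    The calibration invoked before navigation typically includes, as a first step, estimating the magnetometer's hard-iron offset; one standard way is a least-squares sphere fit to raw readings collected while the device is rotated. The sketch below shows only that step, not the paper's game-theoretic filter tuning; the function name is an assumption.

```python
import numpy as np

def hard_iron_offset(m):
    """Least-squares sphere fit ||m - c||^2 = r^2 to raw magnetometer
    readings m (N x 3); returns the hard-iron offset c.

    Linearized as 2*c.m + (r^2 - ||c||^2) = ||m||^2."""
    m = np.asarray(m, float)
    A = np.column_stack([2 * m, np.ones(len(m))])
    b = (m ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3]
```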

  9. A game-theoretic approach for calibration of low-cost magnetometers under noise uncertainty

    International Nuclear Information System (INIS)

    Siddharth, S; Ali, A S; El-Sheimy, N; Goodall, C L; Syed, Z F

    2012-01-01

    Pedestrian heading estimation is a fundamental challenge in Global Navigation Satellite System (GNSS)-denied environments. Additionally, the heading observability considerably degrades in low-speed mode of operation (e.g. walking), making this problem even more challenging. The goal of this work is to improve the heading solution when hand-held personal/portable devices, such as cell phones, are used for positioning and to improve the heading estimation in GNSS-denied signal environments. Most smart phones are now equipped with self-contained, low cost, small size and power-efficient sensors, such as magnetometers, gyroscopes and accelerometers. A magnetometer needs calibration before it can be properly employed for navigation purposes. Magnetometers play an important role in absolute heading estimation and are embedded in many smart phones. Before the users navigate with the phone, a calibration is invoked to ensure an improved signal quality. This signal is used later in the heading estimation. In most of the magnetometer-calibration approaches, the motion modes are seldom described to achieve a robust calibration. Also, suitable calibration approaches fail to discuss the stopping criteria for calibration. In this paper, the following three topics are discussed in detail that are important to achieve proper magnetometer-calibration results and in turn the most robust heading solution for the user while taking care of the device misalignment with respect to the user: (a) game-theoretic concepts to attain better filter parameter tuning and robustness in noise uncertainty, (b) best maneuvers with focus on 3D and 2D motion modes and related challenges and (c) investigation of the calibration termination criteria leveraging the calibration robustness and efficiency. (paper)

  10. DEM Calibration Approach: design of experiment

    Science.gov (United States)

    Boikov, A. V.; Savelev, R. V.; Payor, V. A.

    2018-05-01

    The problem of DEM model calibration is considered in the article. It is proposed to divide the model's input parameters into those that require iterative calibration and those that are recommended to be measured directly. A new method for model calibration, based on a design of experiment for the iteratively calibrated parameters, is proposed. The experiment is conducted using a specially designed stand. The results are processed with technical vision algorithms. Approximating functions are obtained and the error of the implemented software and hardware complex is estimated. The prospects of the obtained results are discussed.
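    A design of experiment for the iteratively calibrated parameters can be as simple as a full-factorial grid over chosen levels. A minimal sketch; the parameter names and levels in the test are invented for illustration.

```python
import itertools

def full_factorial(levels):
    """Full-factorial design: one run for every combination of the
    levels chosen for each calibrated parameter.

    levels: dict mapping parameter name -> list of level values."""
    names = list(levels)
    return [dict(zip(names, combo))
            for combo in itertools.product(*(levels[n] for n in names))]
```

    For example, two friction levels and three restitution levels give 2 x 3 = 6 simulation runs.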

  11. Comparison of different calibration methods suited for calibration problems with many variables

    DEFF Research Database (Denmark)

    Holst, Helle

    1992-01-01

    This paper describes and compares different kinds of statistical methods proposed in the literature as suited for solving calibration problems with many variables. These are: principal component regression, partial least-squares, and ridge regression. The statistical techniques themselves do...
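    Of the three methods compared, ridge regression is the easiest to write down: it stabilizes calibration problems with many collinear variables by penalizing the squared norm of the coefficients, beta = (X'X + lam*I)^-1 X'y. A minimal sketch; the function name is an assumption, not from the paper.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Ridge regression: least squares with an L2 penalty lam * ||beta||^2.

    The penalty shrinks the coefficients and keeps X'X + lam*I invertible
    even when the columns of X are nearly collinear."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
```

    With lam = 0 this reduces to ordinary least squares; increasing lam trades a little bias for a smaller, more stable coefficient vector.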

  12. A novel approach for absolute radar calibration: formulation and theoretical validation

    Directory of Open Access Journals (Sweden)

    C. Merker

    2015-06-01

    Full Text Available The theoretical framework of a novel approach for absolute radar calibration is presented and its potential analysed by means of synthetic data to lay out a solid basis for future practical application. The method presents the advantage of an absolute calibration with respect to the directly measured reflectivity, without needing a previously calibrated reference device. It requires a setup comprising three radars: two devices oriented towards each other, measuring reflectivity along the same horizontal beam and operating within a strongly attenuated frequency range (e.g. K or X band), and one vertical reflectivity and drop size distribution (DSD) profiler below this connecting line, which is to be calibrated. The absolute determination of the calibration factor is based on attenuation estimates. Using synthetic, smooth and geometrically idealised data, calibration is found to perform best using homogeneous precipitation events with rain rates high enough to ensure a distinct attenuation signal (reflectivity above ca. 30 dBZ). Furthermore, the choice of the interval width (in measuring range gates) around the vertically pointing radar, needed for attenuation estimation, is found to have an impact on the calibration results. Further analysis is done by means of synthetic data with realistic, inhomogeneous precipitation fields taken from measurements. A calibration factor is calculated for each considered case using the presented method. Based on the distribution of the calculated calibration factors, the most probable value is determined by estimating the mode of a fitted shifted logarithmic normal distribution function. After filtering the data set with respect to rain rate and inhomogeneity and choosing an appropriate length of the considered attenuation path, the estimated uncertainty of the calibration factor is of the order of 1 to 11 %, depending on the chosen interval width. Considering stability and accuracy of the method, an interval of

  13. Influence of rainfall observation network on model calibration and application

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-01-01

    Full Text Available The objective of this study is to investigate the influence of the spatial resolution of the rainfall input on model calibration and application. The analysis is carried out by varying the distribution of the raingauge network. A meso-scale catchment located in southwest Germany has been selected for this study. First, the semi-distributed HBV model is calibrated with the precipitation interpolated from the available observed rainfall of the different raingauge networks. An automatic calibration method based on the combinatorial optimization algorithm simulated annealing is applied. The performance of the hydrological model is analyzed as a function of the raingauge density. Secondly, the calibrated model is validated using interpolated precipitation from the same raingauge density used for the calibration as well as interpolated precipitation based on networks of reduced and increased raingauge density. Lastly, the effect of missing rainfall data is investigated by using a multiple linear regression approach for filling in the missing measurements. The model, calibrated with the complete set of observed data, is then run in the validation period using the above described precipitation field. The simulated hydrographs obtained in the above described three sets of experiments are analyzed through comparisons of the computed Nash-Sutcliffe coefficient and several goodness-of-fit indexes. The results show that a model driven by a different raingauge network might need re-calibration of its parameters: a model calibrated on relatively sparse precipitation information may perform well with dense precipitation information, while a model calibrated on dense precipitation information fails with sparse precipitation information.
Also, the model calibrated with the complete set of observed precipitation and run with incomplete observed data associated with the data estimated using multiple linear regressions, at the locations treated as
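    The gap-filling step described above (regressing the gauge with missing values on its neighbours, then predicting the gaps) can be sketched as follows; the function name is illustrative and the study's actual regression setup may differ.

```python
import numpy as np

def fill_missing(target, neighbors):
    """Fill gaps in one raingauge series by multiple linear regression
    on neighbouring gauges.

    target: 1-D array with NaN marking missing time steps.
    neighbors: 2-D array (time steps x neighbouring gauges)."""
    target = np.asarray(target, float)
    X = np.column_stack([np.ones(len(target)), np.asarray(neighbors, float)])
    ok = ~np.isnan(target)
    beta, *_ = np.linalg.lstsq(X[ok], target[ok], rcond=None)
    filled = target.copy()
    filled[~ok] = X[~ok] @ beta
    return filled
```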

  14. Support vector methods for survival analysis: a comparison between ranking and regression approaches.

    Science.gov (United States)

    Van Belle, Vanya; Pelckmans, Kristiaan; Van Huffel, Sabine; Suykens, Johan A K

    2011-10-01

    To compare and evaluate ranking, regression and combined machine learning approaches for the analysis of survival data. The literature describes two approaches based on support vector machines to deal with censored observations. In the first approach the key idea is to rephrase the task as a ranking problem via the concordance index, a problem which can be solved efficiently in a context of structural risk minimization and convex optimization techniques. In a second approach, one uses a regression approach, dealing with censoring by means of inequality constraints. The goal of this paper is then twofold: (i) introducing a new model combining the ranking and regression strategy, which retains the link with existing survival models such as the proportional hazards model via transformation models; and (ii) comparison of the three techniques on 6 clinical and 3 high-dimensional datasets and discussing the relevance of these techniques over classical approaches for survival data. We compare SVM-based survival models based on ranking constraints, based on regression constraints and models based on both ranking and regression constraints. The performance of the models is compared by means of three different measures: (i) the concordance index, measuring the model's discriminating ability; (ii) the logrank test statistic, indicating whether patients with a prognostic index lower than the median prognostic index have a significantly different survival than patients with a prognostic index higher than the median; and (iii) the hazard ratio after normalization to restrict the prognostic index between 0 and 1. Our results indicate a significantly better performance for models including regression constraints above models only based on ranking constraints. This work gives empirical evidence that SVM-based models using regression constraints perform significantly better than SVM-based models based on ranking constraints. Our experiments show a comparable performance for methods

  15. Calibration Under Uncertainty.

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton; Trucano, Timothy Guy

    2005-03-01

    This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.

  16. Phase calibration approaches for radar interferometry and imaging configurations: equatorial spread F results

    Directory of Open Access Journals (Sweden)

    J. L. Chau

    2008-08-01

    Full Text Available In recent years, more and more radar systems with multiple-receiver antennas are being used to study atmospheric and ionospheric irregularities with interferometric and/or imaging configurations. In such systems, one of the major challenges is to know the phase offsets between the different receiver channels. Such phases are intrinsic to the system and are due to different cable lengths, filters, attenuators, amplifiers, antenna impedance, etc. Moreover, such phases change as a function of time, on different time scales, depending on the specific installation. In this work, we present three approaches using natural targets (radio stars, meteor-head and meteor trail echoes) that allow either an absolute or relative phase calibration. In addition, we present the results of using an artificial source (radio beacon) for a continuous calibration that complements the previous approaches. These approaches are robust and good alternatives to other approaches, e.g. self-calibration techniques using known data features, or for multiple-receiver configurations constantly changing their receiving elements. In order to show the good performance of the proposed phase calibration techniques, we present new radar imaging results of equatorial spread F (ESF) irregularities. Finally, we introduce a new way to represent range-time intensity (RTI) maps color coded with the Doppler information. Such a modified map allows the identification and interpretation of geophysical phenomena previously hidden in conventional RTI maps, e.g. the time and altitude of occurrence of ESF irregularities pinching off from the bottomside and their respective Doppler velocity.

  17. Design and analysis of experiments classical and regression approaches with SAS

    CERN Document Server

    Onyiah, Leonard C

    2008-01-01

    Introductory Statistical Inference and Regression Analysis Elementary Statistical Inference Regression Analysis Experiments, the Completely Randomized Design (CRD)-Classical and Regression Approaches Experiments Experiments to Compare Treatments Some Basic Ideas Requirements of a Good Experiment One-Way Experimental Layout or the CRD: Design and Analysis Analysis of Experimental Data (Fixed Effects Model) Expected Values for the Sums of Squares The Analysis of Variance (ANOVA) Table Follow-Up Analysis to Check fo

  18. Calibration uncertainty

    DEFF Research Database (Denmark)

    Heydorn, Kaj; Anglov, Thomas

    2002-01-01

    Methods recommended by the International Standardization Organisation and Eurachem are not satisfactory for the correct estimation of calibration uncertainty. A novel approach is introduced and tested on actual calibration data for the determination of Pb by ICP-AES. The improved calibration...

  19. Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers

    Science.gov (United States)

    Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.

    2010-01-01

    This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
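    The idea of MRR, a parametric fit augmented by a portion of a nonparametric fit to its residuals, can be sketched with a straight-line parametric model and a Gaussian-kernel smoother. This is only an illustration: the mixing parameter `lam` and bandwidth `h` are fixed here, whereas MRR chooses the mixing proportion in a data-driven way.

```python
import numpy as np

def kernel_smooth(x, y, x0, h):
    """Nadaraya-Watson smoother with a Gaussian kernel of bandwidth h."""
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def mrr_predict(x, y, lam=0.5, h=0.3):
    """Model Robust Regression sketch: a straight-line parametric fit
    augmented by a fraction lam of a kernel fit to its residuals."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    parametric = X @ beta
    residual_fit = kernel_smooth(x, y - parametric, x, h)
    return parametric + lam * residual_fit
```

    When the parametric model is adequate, the residuals are near zero and the correction vanishes; when it is misspecified, the residual fit recovers the missed structure.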

  20. A fundamental parameters approach to calibration of the Mars Exploration Rover Alpha Particle X-ray Spectrometer

    Science.gov (United States)

    Campbell, J. L.; Lee, M.; Jones, B. N.; Andrushenko, S. M.; Holmes, N. G.; Maxwell, J. A.; Taylor, S. M.

    2009-04-01

    The detection sensitivities of the Alpha Particle X-ray Spectrometer (APXS) instruments on the Mars Exploration Rovers for a wide range of elements were experimentally determined in 2002 using spectra of geochemical reference materials. A flight spare instrument was similarly calibrated, and the calibration exercise was then continued for this unit with an extended set of geochemical reference materials together with pure elements and simple chemical compounds. The flight spare instrument data are examined in detail here using a newly developed fundamental parameters approach which takes precise account of all the physics inherent in the two X-ray generation techniques involved, namely, X-ray fluorescence and particle-induced X-ray emission. The objectives are to characterize the instrument as fully as possible, to test this new approach, and to determine the accuracy of calibration for major, minor, and trace elements. For some of the lightest elements the resulting calibration exhibits a dependence upon the mineral assemblage of the geological reference material; explanations are suggested for these observations. The results will assist in designing the overall calibration approach for the APXS on the Mars Science Laboratory mission.

  1. eSIP: A Novel Solution-Based Sectioned Image Property Approach for Microscope Calibration.

    Directory of Open Access Journals (Sweden)

    Malte Butzlaff

    Full Text Available Fluorescence confocal microscopy represents one of the central tools in modern sciences. Correspondingly, a growing amount of research relies on the development of novel microscopic methods. During the last decade numerous microscopic approaches were developed for the investigation of various scientific questions. Thereby, the former qualitative imaging methods were replaced by advanced quantitative methods to gain more and more information from a given sample. However, modern microscope systems, being as complex as they are, require very precise and appropriate calibration routines, in particular when quantitative measurements should be compared over longer time scales or between different setups. Multispectral beads of sub-resolution size are often used to describe the point spread function and thus the optical properties of the microscope. More recently, a fluorescent layer was utilized to describe the axial profile for each pixel, which allows a spatially resolved characterization. However, fabrication of a thin fluorescent layer with matching refractive index is technically not solved yet. Therefore, we propose a novel type of calibration concept for sectioned image property (SIP) measurements which is based on a fluorescent solution and makes the calibration concept available to a broader number of users. Compared to the previous approach, additional information can be obtained by application of this extended SIP chart approach, including penetration depth, detected number of photons, and illumination profile shape. Furthermore, due to the fit of the complete profile, our method is less susceptible to noise. Generally, the extended SIP approach represents a simple and highly reproducible method, allowing setup-independent calibration and alignment procedures, which is mandatory for advanced quantitative microscopy.

  2. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    Science.gov (United States)

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
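    The notion of Pareto-optimality used here is easy to state in code: an input set is on the frontier if no other set matches or beats it on every calibration target and strictly beats it on at least one. A minimal sketch, assuming each input set is scored by a tuple of target-fit errors (lower is better):

```python
def pareto_frontier(points):
    """Return the points not dominated by any other point.

    Each point is a tuple of goodness-of-fit errors (lower is better).
    q dominates p if q is <= p on every target and < p on at least one."""
    def dominates(q, p):
        return (all(a <= b for a, b in zip(q, p))
                and any(a < b for a, b in zip(q, p)))
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

    No weights are needed: the frontier contains every input set that some weighting of the targets could rank first.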

  3. Calibration with Absolute Shrinkage

    DEFF Research Database (Denmark)

    Øjelund, Henrik; Madsen, Henrik; Thyregod, Poul

    2001-01-01

    In this paper, penalized regression using the L-1 norm on the estimated parameters is proposed for chemometric calibration. The algorithm is of the lasso type, introduced by Tibshirani in 1996 as a linear regression method with a bound on the absolute length of the parameters, but a modification...
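    The lasso solves least squares subject to a bound on the L1 norm of the coefficients, equivalently an L1 penalty; cyclic coordinate descent with soft-thresholding is a standard solver. A minimal sketch of that standard algorithm, not of the modified algorithm this paper proposes; centred data are assumed.

```python
import numpy as np

def soft_threshold(z, g):
    """Shrink z towards zero by g, clipping at zero."""
    return np.sign(z) * np.maximum(np.abs(z) - g, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso via cyclic coordinate descent for the objective
    (1/2n) * ||y - X beta||^2 + lam * ||beta||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with coordinate j's contribution removed
            r = y - X @ beta + X[:, j] * beta[j]
            beta[j] = (soft_threshold(X[:, j] @ r / n, lam)
                       / (X[:, j] @ X[:, j] / n))
    return beta
```

    The soft-thresholding step is what sets some coefficients exactly to zero, giving the variable selection behaviour the abstract alludes to.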

  4. Time series modeling by a regression approach based on a latent process.

    Science.gov (United States)

    Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice

    2009-01-01

    Time series are used in many domains, including finance, engineering, economics and bioinformatics, generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process that allows different polynomial regression models to be activated smoothly or abruptly. The model parameters are estimated by the maximum likelihood method performed by a dedicated Expectation Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.

  5. A Statistical Approach To Prediction Of The CMM Drift Behaviour Using A Calibrated Mechanical Artefact

    Directory of Open Access Journals (Sweden)

    Cuesta Eduardo

    2015-09-01

    Full Text Available This paper presents a multivariate regression predictive model of drift in Coordinate Measuring Machine (CMM) behaviour. Evaluation tests on a CMM with a multi-step gauge were carried out following an extended version of an ISO evaluation procedure, with a periodicity of at least once a week over more than five months. This test procedure consists of measuring the gauge for several range volumes, spatial locations, distances and repetitions. The procedure, environment conditions and even the gauge have been kept invariable, so a massive measurement dataset was collected over time under high repeatability conditions. A multivariate regression analysis has revealed the main parameters that could affect the CMM behaviour, and then detected a trend in the CMM performance drift. A performance model that considers both the size of the measured dimension and the elapsed time since the last CMM calibration has been developed. This model can predict the CMM performance and measurement reliability over time and can also estimate an optimized period between calibrations for a specific measurement length or accuracy level.

  6. A High Precision Approach to Calibrate a Structured Light Vision Sensor in a Robot-Based Three-Dimensional Measurement System

    Directory of Open Access Journals (Sweden)

    Defeng Wu

    2016-08-01

    Full Text Available A robot-based three-dimensional (3D) measurement system is presented. In the presented system, a structured light vision sensor is mounted on the arm of an industrial robot. Measurement accuracy is one of the most important aspects of any 3D measurement system. To improve the measuring accuracy of the structured light vision sensor, a novel sensor calibration approach is proposed. The approach is based on a number of fixed concentric circles manufactured in a calibration target. The concentric circles are employed to determine the real projected centres of the circles. Then, a calibration point generation procedure is used with the help of the calibrated robot. When enough calibration points are ready, the radial alignment constraint (RAC) method is adopted to calibrate the camera model. A multilayer perceptron neural network (MLPNN) is then employed to identify the calibration residuals after the application of the RAC method. Therefore, the hybrid pinhole model and the MLPNN are used to represent the real camera model. Using a standard ball to validate the effectiveness of the presented technique, the experimental results demonstrate that the proposed calibration approach can achieve a highly accurate model of the structured light vision sensor.

  7. Modeling Personalized Email Prioritization: Classification-based and Regression-based Approaches

    Energy Technology Data Exchange (ETDEWEB)

    Yoo S.; Yang, Y.; Carbonell, J.

    2011-10-24

    Email overload, even after spam filtering, presents a serious productivity challenge for busy professionals and executives. One solution is automated prioritization of incoming emails to ensure the most important are read and processed quickly, while others are processed later as/if time permits in declining priority levels. This paper presents a study of machine learning approaches to email prioritization into discrete levels, comparing ordinal regression versus classifier cascades. Given the ordinal nature of discrete email priority levels, SVM ordinal regression would be expected to perform well, but surprisingly a cascade of SVM classifiers significantly outperforms ordinal regression for email prioritization. In contrast, SVM regression performs well -- better than classifiers -- on selected UCI data sets. This unexpected performance inversion is analyzed and results are presented, providing core functionality for email prioritization systems.
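    The classifier-cascade idea can be sketched independently of the underlying SVMs: train one binary classifier per boundary ("is the priority greater than level k?") and read the predicted level off the number of consecutive positive answers. A minimal sketch with placeholder classifiers standing in for the trained SVMs:

```python
def cascade_priority(clfs, x):
    """Ordinal prediction by a classifier cascade.

    clfs[k](x) answers 'is the priority greater than level k?';
    the predicted level is the count of consecutive 'yes' answers,
    checked from the lowest boundary upward."""
    level = 0
    for clf in clfs:
        if not clf(x):
            break
        level += 1
    return level
```

    With three boundary classifiers there are four possible priority levels, 0 to 3.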

  8. Non-linear calibration models for near infrared spectroscopy

    DEFF Research Database (Denmark)

    Ni, Wangdong; Nørgaard, Lars; Mørup, Morten

    2014-01-01

    A comparative study of various nonlinear calibration techniques is presented: least squares support vector machines (LS-SVM), relevance vector machines (RVM), Gaussian process regression (GPR), artificial neural networks (ANN), and Bayesian ANN (BANN). In this comparison, partial least squares (PLS) regression is used as a linear benchmark, while the relationship of the methods is considered in terms of traditional calibration by ridge regression (RR). The performance of the different methods is demonstrated by their practical applications using three real-life near infrared (NIR) data sets. Different aspects of the various approaches, including computational time, model interpretability, potential over-fitting of the non-linear models on linear problems, robustness to small or medium sample sets, and robustness to pre-processing, are discussed. The results suggest that GPR and BANN are powerful and promising methods for handling linear as well as nonlinear systems, even when the data sets are moderately small. The LS-SVM…

  9. A simplified approach to the pooled analysis of calibration of clinical prediction rules for systematic reviews of validation studies

    Directory of Open Access Journals (Sweden)

    Dimitrov BD

    2015-04-01

    Full Text Available Borislav D Dimitrov,1,2 Nicola Motterlini,2,† Tom Fahey2 1Academic Unit of Primary Care and Population Sciences, University of Southampton, Southampton, United Kingdom; 2HRB Centre for Primary Care Research, Department of General Medicine, Division of Population Health Sciences, Royal College of Surgeons in Ireland, Dublin, Ireland †Nicola Motterlini passed away on November 11, 2012 Objective: Estimating the calibration performance of clinical prediction rules (CPRs) in systematic reviews of validation studies is not possible when predicted values are not published, accessible, or sufficient, and no individual participant or patient data are available. Our aims were to describe a simplified approach for outcome prediction and calibration assessment and to evaluate its functionality and validity. Study design and methods: Methodological study of systematic reviews of validation studies of CPRs: (a) the ABCD2 rule for prediction of 7-day stroke risk; and (b) the CRB-65 rule for prediction of 30-day mortality. Predicted outcomes in a sample validation study were computed from CPR distribution patterns (“derivation model”). As confirmation, a logistic regression model (with derivation study coefficients) was applied to CPR-based dummy variables in the validation study. Meta-analysis of validation studies provided pooled estimates of “predicted:observed” risk ratios (RRs), 95% confidence intervals (CIs), and indexes of heterogeneity (I2) on forest plots (fixed- and random-effects models), with and without adjustment of intercepts. The same approach was also applied to the CRB-65 rule. Results: Our simplified method, applied to the ABCD2 rule in three risk strata (low, 0–3; intermediate, 4–5; high, 6–7 points), indicated that the predictions are identical to those computed by a univariate, CPR-based logistic regression model. Discrimination was good (c-statistics = 0.61–0.82); however, calibration in some studies was low. In such cases of miscalibration, the under…
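The "predicted:observed" risk ratio at the heart of the simplified calibration approach can be computed directly from stratum sizes, derivation-study risks, and observed event counts (the numbers in the test are made up for illustration):

```python
def predicted_observed_rr(strata):
    """Per-stratum and overall predicted:observed risk ratios.

    strata: list of (n_patients, derivation_risk, observed_events) tuples,
    e.g. one tuple per ABCD2 risk stratum of a validation study.
    """
    predicted = sum(n * risk for n, risk, _ in strata)   # expected events
    observed = sum(events for _, _, events in strata)
    per_stratum = [(n * risk) / events for n, risk, events in strata]
    return per_stratum, predicted / observed
```

A pooled ratio below 1 indicates the rule under-predicts events in that validation cohort; above 1, it over-predicts.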

  10. A Simultaneously Calibration Approach for Installation and Attitude Errors of an INS/GPS/LDS Target Tracker

    Directory of Open Access Journals (Sweden)

    Jianhua Cheng

    2015-02-01

    Full Text Available Obtaining the absolute position of a target is one of the basic problems in non-cooperative target tracking. In this paper, we present a simultaneous calibration method for an Inertial navigation system (INS)/Global position system (GPS)/Laser distance scanner (LDS) integrated-system-based target positioning approach. The INS/GPS integrated system provides the attitude and position of the observer, and the LDS offers the distance between the observer and the target. The two most significant errors are jointly considered and analyzed: (1) the attitude measurement error of the INS/GPS; (2) the installation error between the INS/GPS and LDS subsystems. Consequently, an INS/GPS/LDS based target positioning approach considering these two errors is proposed. In order to improve the performance of this approach, a novel calibration method is designed to simultaneously estimate and compensate these two main errors. Finally, simulations are conducted to assess the performance of the proposed target positioning approach and the designed simultaneous calibration method.

  11. A simultaneously calibration approach for installation and attitude errors of an INS/GPS/LDS target tracker.

    Science.gov (United States)

    Cheng, Jianhua; Chen, Daidai; Sun, Xiangyu; Wang, Tongda

    2015-02-04

    Obtaining the absolute position of a target is one of the basic problems in non-cooperative target tracking. In this paper, we present a simultaneous calibration method for an Inertial navigation system (INS)/Global position system (GPS)/Laser distance scanner (LDS) integrated-system-based target positioning approach. The INS/GPS integrated system provides the attitude and position of the observer, and the LDS offers the distance between the observer and the target. The two most significant errors are jointly considered and analyzed: (1) the attitude measurement error of the INS/GPS; (2) the installation error between the INS/GPS and LDS subsystems. Consequently, an INS/GPS/LDS based target positioning approach considering these two errors is proposed. In order to improve the performance of this approach, a novel calibration method is designed to simultaneously estimate and compensate these two main errors. Finally, simulations are conducted to assess the performance of the proposed target positioning approach and the designed simultaneous calibration method.
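A minimal 2-D sketch of the positioning geometry described above, assuming a simple heading rotation, a body-frame lever arm between the INS/GPS and the LDS, and an LDS boresight direction (all names and values are illustrative, not the paper's model):

```python
import math

def target_position(obs_pos, heading_rad, lever_arm_body, lds_range, boresight_body):
    """Rotate body-frame offsets into the navigation frame and add the LDS range."""
    c, s = math.cos(heading_rad), math.sin(heading_rad)

    def to_nav(v):  # 2-D body -> navigation frame rotation
        return (c * v[0] - s * v[1], s * v[0] + c * v[1])

    lx, ly = to_nav(lever_arm_body)       # installation offset in nav frame
    bx, by = to_nav(boresight_body)       # unit boresight direction in nav frame
    return (obs_pos[0] + lx + lds_range * bx,
            obs_pos[1] + ly + lds_range * by)
```

This also shows why the two errors matter jointly: a heading error rotates the whole range vector (error grows with distance), while an installation error shifts the result by a fixed lever-arm offset.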

  12. Testing for marginal linear effects in quantile regression

    KAUST Repository

    Wang, Huixia Judy

    2017-10-23

    The paper develops a new marginal testing procedure to detect significant predictors that are associated with the conditional quantiles of a scalar response. The idea is to fit the marginal quantile regression on each predictor one at a time, and then to base the test on the t-statistics that are associated with the most predictive predictors. A resampling method is devised to calibrate this test statistic, which has non-regular limiting behaviour due to the selection of the most predictive variables. Asymptotic validity of the procedure is established in a general quantile regression setting in which the marginal quantile regression models can be misspecified. Even though a fixed dimension is assumed to derive the asymptotic results, the test proposed is applicable and computationally feasible for large dimensional predictors. The method is more flexible than existing marginal screening test methods based on mean regression and has the added advantage of being robust against outliers in the response. The approach is illustrated by using an application to a human immunodeficiency virus drug resistance data set.

  13. Testing for marginal linear effects in quantile regression

    KAUST Repository

    Wang, Huixia Judy; McKeague, Ian W.; Qian, Min

    2017-01-01

    The paper develops a new marginal testing procedure to detect significant predictors that are associated with the conditional quantiles of a scalar response. The idea is to fit the marginal quantile regression on each predictor one at a time, and then to base the test on the t-statistics that are associated with the most predictive predictors. A resampling method is devised to calibrate this test statistic, which has non-regular limiting behaviour due to the selection of the most predictive variables. Asymptotic validity of the procedure is established in a general quantile regression setting in which the marginal quantile regression models can be misspecified. Even though a fixed dimension is assumed to derive the asymptotic results, the test proposed is applicable and computationally feasible for large dimensional predictors. The method is more flexible than existing marginal screening test methods based on mean regression and has the added advantage of being robust against outliers in the response. The approach is illustrated by using an application to a human immunodeficiency virus drug resistance data set.
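The pinball (check) loss underlying quantile regression can be illustrated in a few lines: the τ-quantile of a sample minimizes the average check loss, which a crude grid search confirms (a toy sketch of the loss, not the paper's resampling-calibrated test):

```python
def pinball_loss(tau, residual):
    """Check loss: weight tau on under-prediction, (1 - tau) on over-prediction."""
    return tau * residual if residual >= 0 else (tau - 1) * residual

def quantile_by_loss(tau, ys, grid):
    """The tau-quantile minimises the average pinball loss over candidate values."""
    return min(grid, key=lambda q: sum(pinball_loss(tau, y - q) for y in ys))
```

Marginal quantile regression fits this same loss with q replaced by a linear function of one predictor at a time.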

  14. Straight line fitting and predictions: On a marginal likelihood approach to linear regression and errors-in-variables models

    Science.gov (United States)

    Christiansen, Bo

    2015-04-01

    Linear regression methods are without doubt the most used approaches to describe and predict data in the physical sciences. They are often good first-order approximations and they are in general easier to apply and interpret than more advanced methods. However, even the properties of univariate regression can lead to debate over the appropriateness of various models, as witnessed by the recent discussion about climate reconstruction methods. Before linear regression is applied, important choices have to be made regarding the origins of the noise terms and regarding which of the two variables under consideration should be treated as the independent variable. These decisions are often not easy to make but they may have a considerable impact on the results. We seek to give a unified probabilistic - Bayesian with flat priors - treatment of univariate linear regression and prediction by taking, as starting point, the general errors-in-variables model (Christiansen, J. Clim., 27, 2014-2031, 2014). Other versions of linear regression can be obtained as limits of this model. We derive the likelihood of the model parameters and predictands of the general errors-in-variables model by marginalizing over the nuisance parameters. The resulting likelihood is relatively simple and easy to analyze and calculate. The well known unidentifiability of the errors-in-variables model is manifested as the absence of a well-defined maximum in the likelihood. However, this does not mean that probabilistic inference cannot be made; the marginal likelihoods of model parameters and the predictands have, in general, well-defined maxima. We also include a probabilistic version of classical calibration and show how it is related to the errors-in-variables model. The results are illustrated by an example from the coupling between the lower stratosphere and the troposphere in the Northern Hemisphere winter.
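The attenuation that motivates errors-in-variables treatment is easy to simulate: when the regressor is observed with noise, the OLS slope shrinks by the reliability ratio var(x)/(var(x)+var(u)). The simulation below uses synthetic data with a fixed seed; it is a sketch of the phenomenon, not the paper's Bayesian machinery.

```python
import random
import statistics

random.seed(0)
beta, n = 2.0, 20000
x_true = [random.gauss(0, 1) for _ in range(n)]          # latent regressor, var 1
y = [beta * x + random.gauss(0, 0.5) for x in x_true]     # outcome
w = [x + random.gauss(0, 1) for x in x_true]              # observed with error, var 1

def ols_slope(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

naive = ols_slope(w, y)          # attenuated towards zero, expect ~ beta / 2
reliability = 1 / (1 + 1)        # var(x) / (var(x) + var(u)), known here by design
corrected = naive / reliability  # classical regression-calibration style fix
```

Here the reliability ratio is known because we generated the data; in practice it must be estimated, which is exactly where replicated measurements or validation data enter.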

  15. A land use regression model for ambient ultrafine particles in Montreal, Canada: A comparison of linear regression and a machine learning approach.

    Science.gov (United States)

    Weichenthal, Scott; Ryswyk, Keith Van; Goldstein, Alon; Bagg, Scott; Shekkarizfard, Maryam; Hatzopoulou, Marianne

    2016-04-01

    Existing evidence suggests that ambient ultrafine particles (UFPs) (<0.1 μm) may contribute to adverse health effects. In this study, we developed a land use regression model for UFPs in Montreal, Canada using mobile monitoring data collected from 414 road segments during the summer and winter months between 2011 and 2012. Two different approaches were examined for model development including standard multivariable linear regression and a machine learning approach (kernel-based regularized least squares (KRLS)) that learns the functional form of covariate impacts on ambient UFP concentrations from the data. The final models included parameters for population density, ambient temperature and wind speed, land use parameters (park space and open space), length of local roads and rail, and estimated annual average NOx emissions from traffic. The final multivariable linear regression model explained 62% of the spatial variation in ambient UFP concentrations whereas the KRLS model explained 79% of the variance. The KRLS model performed slightly better than the linear regression model when evaluated using an external dataset (R(2)=0.58 vs. 0.55) or a cross-validation procedure (R(2)=0.67 vs. 0.60). In general, our findings suggest that the KRLS approach may offer modest improvements in predictive performance compared to standard multivariable linear regression models used to estimate spatial variations in ambient UFPs. However, differences in predictive performance were not statistically significant when evaluated using the cross-validation procedure. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.
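A miniature kernel-based regularized least squares fit conveys the KRLS idea: solve (K + λI)α = y for a kernel matrix K, then predict with a kernel expansion. The RBF kernel, penalty value, and tiny dense solver below are illustrative choices, not the study's implementation.

```python
import math

def rbf(a, b, gamma=1.0):
    """Gaussian (RBF) kernel on scalars."""
    return math.exp(-gamma * (a - b) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def krls_fit(xs, ys, lam=1e-3):
    """Fit dual weights alpha from (K + lam*I) alpha = y."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (lam if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    return solve(K, ys)

def krls_predict(xs, alpha, x):
    return sum(a * rbf(xi, x) for a, xi in zip(alpha, xs))
```

Unlike a fixed linear specification, the fitted function's shape is determined by the data, which is why KRLS can capture nonlinear covariate effects on UFP concentrations.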

  16. Feature-based automatic color calibration for networked camera system

    Science.gov (United States)

    Yamamoto, Shoji; Taki, Keisuke; Tsumura, Norimichi; Nakaguchi, Toshiya; Miyake, Yoichi

    2011-01-01

    In this paper, we develop feature-based automatic color calibration using area-based detection and an adaptive nonlinear regression method. Chartless color matching is achieved by exploiting the image areas where the cameras' views overlap. Accurate detection of common objects is achieved by area-based detection that combines MSER with SIFT. Adaptive color calibration using the colors of the detected objects is computed by a nonlinear regression method. The method can indicate the contribution of each object's color to the calibration, and automatically notifies the user of the selection. Experimental results show that the accuracy of the calibration improves gradually. The method is suitable for practical multi-camera color calibration provided that enough samples are obtained.

  17. Bayesian Nonparametric Regression Analysis of Data with Random Effects Covariates from Longitudinal Measurements

    KAUST Repository

    Ryu, Duchwan

    2010-09-28

    We consider nonparametric regression analysis in a generalized linear model (GLM) framework for data with covariates that are the subject-specific random effects of longitudinal measurements. The usual assumption that the effects of the longitudinal covariate processes are linear in the GLM may be unrealistic and if this happens it can cast doubt on the inference of observed covariate effects. Allowing the regression functions to be unknown, we propose to apply Bayesian nonparametric methods including cubic smoothing splines or P-splines for the possible nonlinearity and use an additive model in this complex setting. To improve computational efficiency, we propose the use of data-augmentation schemes. The approach allows flexible covariance structures for the random effects and within-subject measurement errors of the longitudinal processes. The posterior model space is explored through a Markov chain Monte Carlo (MCMC) sampler. The proposed methods are illustrated and compared to other approaches, the "naive" approach and the regression calibration, via simulations and by an application that investigates the relationship between obesity in adulthood and childhood growth curves. © 2010, The International Biometric Society.

  18. Multiple regression approach to predict turbine-generator output for Chinshan nuclear power plant

    International Nuclear Information System (INIS)

    Chan, Yea-Kuang; Tsai, Yu-Ching

    2017-01-01

    The objective of this study is to develop a turbine cycle model using the multiple regression approach to estimate the turbine-generator output for the Chinshan Nuclear Power Plant (NPP). The plant operating data was verified using a linear regression model with a corresponding 95% confidence interval for the operating data. In this study, the key parameters were selected as inputs for the multiple regression based turbine cycle model. The proposed model was used to estimate the turbine-generator output. The effectiveness of the proposed turbine cycle model was demonstrated by using plant operating data obtained from the Chinshan NPP Unit 2. The results show that this multiple regression based turbine cycle model can be used to accurately estimate the turbine-generator output. In addition, this study also provides an alternative approach with simple and easy features to evaluate the thermal performance for nuclear power plants.

  19. Multiple regression approach to predict turbine-generator output for Chinshan nuclear power plant

    Energy Technology Data Exchange (ETDEWEB)

    Chan, Yea-Kuang; Tsai, Yu-Ching [Institute of Nuclear Energy Research, Taoyuan City, Taiwan (China). Nuclear Engineering Division

    2017-03-15

    The objective of this study is to develop a turbine cycle model using the multiple regression approach to estimate the turbine-generator output for the Chinshan Nuclear Power Plant (NPP). The plant operating data was verified using a linear regression model with a corresponding 95% confidence interval for the operating data. In this study, the key parameters were selected as inputs for the multiple regression based turbine cycle model. The proposed model was used to estimate the turbine-generator output. The effectiveness of the proposed turbine cycle model was demonstrated by using plant operating data obtained from the Chinshan NPP Unit 2. The results show that this multiple regression based turbine cycle model can be used to accurately estimate the turbine-generator output. In addition, this study also provides an alternative approach with simple and easy features to evaluate the thermal performance for nuclear power plants.
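The screening step described above, simple linear regression with a 95% confidence interval on the slope, can be sketched as follows. A normal approximation replaces the t quantile to stay within the standard library; this is illustrative, not the plant's actual verification procedure.

```python
from statistics import NormalDist, fmean

def linreg_with_ci(xs, ys, level=0.95):
    """Simple least squares fit y = a + b*x with an approximate CI for b."""
    n = len(xs)
    mx, my = fmean(xs), fmean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    resid = [y - (a + b * x) for x, y in zip(xs, ys)]
    s2 = sum(r * r for r in resid) / (n - 2)       # residual variance
    se_b = (s2 / sxx) ** 0.5                        # standard error of the slope
    z = NormalDist().inv_cdf(0.5 + level / 2)       # normal approx to t quantile
    return a, b, (b - z * se_b, b + z * se_b)
```

Operating points whose residuals fall outside the corresponding prediction band would be flagged before the multiple regression model is built.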

  20. Bayesian approach to errors-in-variables in regression models

    Science.gov (United States)

    Rozliman, Nur Aainaa; Ibrahim, Adriana Irawati Nur; Yunus, Rossita Mohammad

    2017-05-01

    In many applications and experiments, data sets are often contaminated with error or mismeasured covariates. When at least one of the covariates in a model is measured with error, Errors-in-Variables (EIV) model can be used. Measurement error, when not corrected, would cause misleading statistical inferences and analysis. Therefore, our goal is to examine the relationship of the outcome variable and the unobserved exposure variable given the observed mismeasured surrogate by applying the Bayesian formulation to the EIV model. We shall extend the flexible parametric method proposed by Hossain and Gustafson (2009) to another nonlinear regression model which is the Poisson regression model. We shall then illustrate the application of this approach via a simulation study using Markov chain Monte Carlo sampling methods.

  1. Calibration methods for the Hargreaves-Samani equation

    Directory of Open Access Journals (Sweden)

    Lucas Borges Ferreira

    Full Text Available ABSTRACT The estimation of reference evapotranspiration is an important factor for hydrological studies and for the design and management of irrigation systems, among others. The Penman-Monteith equation presents high precision and accuracy in the estimation of this variable; however, its use is limited by the large number of required meteorological data. In this context, the Hargreaves-Samani equation can be used as an alternative, although a local calibration is required for better performance. Thus, the aim was to compare calibration of the Hargreaves-Samani equation by linear regression, by adjustment of the coefficients (A and B) and exponent (C) of the equation, and by combinations of the two previous alternatives. Daily data from 6 weather stations located in the state of Minas Gerais, from the period 1997 to 2016, were used. The calibration of the Hargreaves-Samani equation was performed in five ways: calibration by linear regression; adjustment of parameter “A”; adjustment of parameters “A” and “C”; adjustment of parameters “A”, “B” and “C”; and adjustment of parameters “A”, “B” and “C” followed by calibration by linear regression. The performance of the models was evaluated with the statistical indicators mean absolute error, mean bias error, Willmott’s index of agreement, correlation coefficient and performance index. All the studied methodologies improved the estimates of reference evapotranspiration. The simultaneous adjustment of the empirical parameters “A”, “B” and “C” was the best alternative for calibration of the Hargreaves-Samani equation.
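The "adjustment of parameter A" alternative can be sketched directly from the standard Hargreaves-Samani form ET0 = A · Ra · (Tmean + 17.8) · (Tmax − Tmin)^0.5: because ET0 is proportional to A, a least-squares fit through the origin against reference ETo recalibrates A. The data values in the test are made up; this is a sketch of one of the five strategies, not the study's code.

```python
def hargreaves_samani(Ra, tmax, tmin, A=0.0023, B=17.8, C=0.5):
    """Daily reference ET (mm/day); Ra in equivalent evaporation (mm/day)."""
    tmean = (tmax + tmin) / 2
    return A * Ra * (tmean + B) * (tmax - tmin) ** C

def calibrate_A(records, eto_ref):
    """Least-squares (through the origin) update of coefficient A.

    records: list of (Ra, tmax, tmin); eto_ref: reference ETo (e.g. Penman-Monteith).
    """
    X = [hargreaves_samani(Ra, tx, tn) / 0.0023 for Ra, tx, tn in records]
    return sum(x * y for x, y in zip(X, eto_ref)) / sum(x * x for x in X)
```

Calibrating B and C as well requires a nonlinear fit, which is why the study treats those adjustments as separate strategies.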

  2. The development of an efficient mass balance approach for the purity assignment of organic calibration standards.

    Science.gov (United States)

    Davies, Stephen R; Alamgir, Mahiuddin; Chan, Benjamin K H; Dang, Thao; Jones, Kai; Krishnaswami, Maya; Luo, Yawen; Mitchell, Peter S R; Moawad, Michael; Swan, Hilton; Tarrant, Greg J

    2015-10-01

    The purity determination of organic calibration standards using the traditional mass balance approach is described. Demonstrated examples highlight the potential for bias in each measurement and the need to implement an approach that provides a cross-check for each result, affording fit-for-purpose purity values in a timely and cost-effective manner. Chromatographic techniques such as gas chromatography with flame ionisation detection (GC-FID) and high-performance liquid chromatography with UV detection (HPLC-UV), combined with mass and NMR spectroscopy, provide a detailed impurity profile, allowing an efficient conversion of chromatographic peak areas into relative mass fractions and generally avoiding the need to calibrate each impurity present. For samples analysed by GC-FID, a conservative measurement uncertainty budget is described, including a component to cover potential variations in the response of each unidentified impurity. An alternative approach is also detailed in which extensive purification eliminates the detector response factor issue, facilitating the certification of a super-pure calibration standard which can be used to quantify the main component in less-pure candidate materials. This latter approach is particularly useful when applying HPLC analysis with UV detection. Key to the success of this approach is the application of both qualitative and quantitative (1)H NMR spectroscopy.
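The core mass balance arithmetic, assigning purity as the chromatographic main-peak fraction of the material remaining after volatiles and inorganics are subtracted, can be sketched as follows (a simplified model of the approach, with illustrative numbers):

```python
def mass_balance_purity(chrom_purity, water, residual_solvents, inorganics):
    """Mass fraction of the main component.

    chrom_purity: main-peak area fraction of the organic portion (e.g. GC-FID);
    water, residual_solvents, inorganics: mass fractions of non-chromatographable
    impurities, determined by orthogonal techniques (e.g. Karl Fischer, TGA).
    """
    return chrom_purity * (1.0 - water - residual_solvents - inorganics)
```

The cross-check the abstract calls for comes from determining each term by an independent technique, so a bias in one measurement is not silently absorbed into the purity value.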

  3. STELLAR LOCUS REGRESSION: ACCURATE COLOR CALIBRATION AND THE REAL-TIME DETERMINATION OF GALAXY CLUSTER PHOTOMETRIC REDSHIFTS

    International Nuclear Information System (INIS)

    High, F. William; Stubbs, Christopher W.; Rest, Armin; Stalder, Brian; Challis, Peter

    2009-01-01

    We present stellar locus regression (SLR), a method of directly adjusting the instrumental broadband optical colors of stars to bring them into accord with a universal stellar color-color locus, producing accurately calibrated colors for both stars and galaxies. This is achieved without first establishing individual zero points for each passband, and can be performed in real-time at the telescope. We demonstrate how SLR naturally makes one wholesale correction for differences in instrumental response, for atmospheric transparency, for atmospheric extinction, and for Galactic extinction. We perform an example SLR treatment of Sloan Digital Sky Survey data over a wide range of Galactic dust values and independently recover the direction and magnitude of the canonical Galactic reddening vector with 14-18 mmag rms uncertainties. We then isolate the effect of atmospheric extinction, showing that SLR accounts for this and returns precise colors over a wide range of air mass, with 5-14 mmag rms residuals. We demonstrate that SLR-corrected colors are sufficiently accurate to allow photometric redshift estimates for galaxy clusters (using red sequence galaxies) with an uncertainty σ(z)/(1 + z) = 0.6% per cluster for redshifts 0.09 < z < 0.25. Finally, we identify our objects in the 2MASS all-sky catalog, and produce i-band zero points typically accurate to 18 mmag using only SLR. We offer open-source access to our IDL routines, validated and verified for the implementation of this technique, at http://stellar-locus-regression.googlecode.com.

  4. STELLAR COLOR REGRESSION: A SPECTROSCOPY-BASED METHOD FOR COLOR CALIBRATION TO A FEW MILLIMAGNITUDE ACCURACY AND THE RECALIBRATION OF STRIPE 82

    International Nuclear Information System (INIS)

    Yuan, Haibo; Liu, Xiaowei; Xiang, Maosheng; Huang, Yang; Zhang, Huihua; Chen, Bingqiu

    2015-01-01

    In this paper we propose a spectroscopy-based stellar color regression (SCR) method to perform accurate color calibration for modern imaging surveys, taking advantage of millions of stellar spectra now available. The method is straightforward, insensitive to systematic errors in the spectroscopically determined stellar atmospheric parameters, applicable to regions that are effectively covered by spectroscopic surveys, and capable of delivering an accuracy of a few millimagnitudes for color calibration. As an illustration, we have applied the method to the Sloan Digital Sky Survey (SDSS) Stripe 82 data. With a total number of 23,759 spectroscopically targeted stars, we have mapped out the small but strongly correlated color zero-point errors present in the photometric catalog of Stripe 82, and we improve the color calibration by a factor of two to three. Our study also reveals some small but significant magnitude dependence errors in the z band for some charge-coupled devices (CCDs). Such errors are likely to be present in all the SDSS photometric data. Our results are compared with those from a completely independent test based on the intrinsic colors of red galaxies presented by Ivezić et al. The comparison, as well as other tests, shows that the SCR method has achieved a color calibration internally consistent at a level of about 5 mmag in u – g, 3 mmag in g – r, and 2 mmag in r – i and i – z. Given the power of the SCR method, we discuss briefly the potential benefits by applying the method to existing, ongoing, and upcoming imaging surveys

  5. Intercomparison of hydrological model structures and calibration approaches in climate scenario impact projections

    Science.gov (United States)

    Vansteenkiste, Thomas; Tavakoli, Mohsen; Ntegeka, Victor; De Smedt, Florimond; Batelaan, Okke; Pereira, Fernando; Willems, Patrick

    2014-11-01

    The influence of the calibration strategy was tested by comparing the manual calibration approach with automatic calibrations of the VHM model based on different objective functions. The calibration approach did not significantly alter the model results for peak flow, but the low flow projections were again highly influenced. Model choice as well as calibration strategy hence both have a critical impact on low flows, more than on peak flows. These results highlight the high uncertainty in low flow modelling, especially in a climate change context.

  6. Detection of Differential Item Functioning with Nonlinear Regression: A Non-IRT Approach Accounting for Guessing

    Science.gov (United States)

    Drabinová, Adéla; Martinková, Patrícia

    2017-01-01

    In this article we present a general approach not relying on item response theory models (non-IRT) to detect differential item functioning (DIF) in dichotomous items with presence of guessing. The proposed nonlinear regression (NLR) procedure for DIF detection is an extension of method based on logistic regression. As a non-IRT approach, NLR can…

  7. RGB color calibration for quantitative image analysis: the "3D thin-plate spline" warping approach.

    Science.gov (United States)

    Menesatti, Paolo; Angelini, Claudio; Pallottino, Federico; Antonucci, Francesca; Aguzzi, Jacopo; Costa, Corrado

    2012-01-01

    In recent years the need to numerically define color by its coordinates in n-dimensional space has increased strongly. Colorimetric calibration is fundamental in food processing and other biological disciplines for quantitatively comparing sample colors across the workflow and across devices. Several software programmes are available to perform standardized colorimetric procedures, but they are often too imprecise for scientific purposes. In this study, we applied the Thin-Plate Spline interpolation algorithm to calibrate colors in sRGB space (the corresponding Matlab code is reported in the Appendix). This was compared with two other approaches: the first is based on a commercial calibration system (ProfileMaker) and the second on a Partial Least Squares analysis. Moreover, to explore device variability and resolution, two different cameras were adopted; for each sensor, three consecutive pictures were acquired under four different lighting conditions. According to our results, the Thin-Plate Spline approach achieved very high calibration accuracy, opening the way to in-field quantitative color measurement not only in food science but also in other biological disciplines. These results are of great importance for scientific color evaluation when lighting conditions are not controlled. Moreover, the approach allows the use of low-cost instruments while still returning scientifically sound quantitative data.

  8. 40 CFR 92.119 - Hydrocarbon analyzer calibration.

    Science.gov (United States)

    2010-07-01

    ... periodic calibration: (a) Initial and periodic optimization of detector response. Prior to introduction... concentrations. (5) Perform a linear least squares regression on the data generated. Use an equation of the form y... to find the linear chart deflection (z) for each calibration gas concentration (y). (7) Determine the...
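The least-squares step of such an analyzer calibration can be sketched as follows: fit chart deflection against calibration gas concentration, then invert the fitted line to read a concentration from a measured deflection (illustrative data; a sketch, not the regulatory procedure itself):

```python
def least_squares(conc, deflection):
    """Fit deflection = a + b * concentration by ordinary least squares."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(deflection) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(conc, deflection))
         / sum((x - mx) ** 2 for x in conc))
    return my - b * mx, b   # intercept a, slope b

def concentration_from_deflection(a, b, z):
    """Invert the calibration line: concentration for a chart deflection z."""
    return (z - a) / b
```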

  9. A novel multivariate approach using science-based calibration for direct coating thickness determination in real-time NIR process monitoring.

    Science.gov (United States)

    Möltgen, C-V; Herdling, T; Reich, G

    2013-11-01

    This study demonstrates an approach, using science-based calibration (SBC), for direct coating thickness determination on heart-shaped tablets in real-time. Near-Infrared (NIR) spectra were collected during four full industrial pan coating operations. The tablets were coated with a thin hydroxypropyl methylcellulose (HPMC) film up to a film thickness of 28 μm. The application of SBC permits the calibration of the NIR spectral data without using costly determined reference values. This is due to the fact that SBC combines classical methods to estimate the coating signal and statistical methods for the noise estimation. The approach enabled the use of NIR for the measurement of the film thickness increase from around 8 to 28 μm of four independent batches in real-time. The developed model provided a spectroscopic limit of detection for the coating thickness of 0.64 ± 0.03 μm root-mean square (RMS). In the commonly used statistical methods for calibration, such as Partial Least Squares (PLS), sufficiently varying reference values are needed for calibration. For thin non-functional coatings this is a challenge because the quality of the model depends on the accuracy of the selected calibration standards. The obvious and simple approach of SBC eliminates many of the problems associated with the conventional statistical methods and offers an alternative for multivariate calibration. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Crime Modeling using Spatial Regression Approach

    Science.gov (United States)

    Saleh Ahmar, Ansari; Adiatma; Kasim Aidid, M.

    2018-01-01

    Acts of criminality in Indonesia increase in both variety and quantity every year: murder, rape, assault, vandalism, theft, fraud, fencing, and other cases that make people feel unsafe. The risk of society being exposed to crime is measured by the number of cases reported to the police; the higher the number of reports to the police, the higher the level of crime in the region. In this research, criminality in South Sulawesi, Indonesia is modelled with society's exposure to the risk of crime as the dependent variable. Modelling is done with an areal approach using the Spatial Autoregressive (SAR) and Spatial Error Model (SEM) methods. The independent variables used are population density, the number of poor inhabitants, GDP per capita, unemployment and the human development index (HDI). The analysis using spatial regression shows that there is no spatial dependence, in either lag or error, in South Sulawesi.
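Spatial dependence of the kind SAR/SEM models account for is commonly screened with Moran's I, which needs only the regional values and a neighbour-weight matrix. The sketch below uses a hypothetical four-region chain; it is not the study's code.

```python
def morans_i(values, weights):
    """Moran's I: (n/S0) * sum_ij w_ij (x_i - xbar)(x_j - xbar) / sum_i (x_i - xbar)^2.

    weights: dict mapping (i, j) -> w_ij for neighbouring region pairs.
    Positive I suggests clustering; negative I suggests a checkerboard pattern.
    """
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    s0 = sum(weights.values())
    num = sum(w * dev[i] * dev[j] for (i, j), w in weights.items())
    den = sum(d * d for d in dev)
    return (n / s0) * (num / den)
```

A value of I near its expectation under no dependence is consistent with the study's finding of no spatial lag or error dependence.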

  11. Ordinary least square regression, orthogonal regression, geometric mean regression and their applications in aerosol science

    International Nuclear Information System (INIS)

    Leng Ling; Zhang Tianyi; Kleinman, Lawrence; Zhu Wei

    2007-01-01

    Regression analysis, especially the ordinary least squares method which assumes that errors are confined to the dependent variable, has seen a fair share of its applications in aerosol science. The ordinary least squares approach, however, could be problematic due to the fact that atmospheric data often does not lend itself to calling one variable independent and the other dependent. Errors often exist for both measurements. In this work, we examine two regression approaches available to accommodate this situation. They are orthogonal regression and geometric mean regression. Comparisons are made theoretically as well as numerically through an aerosol study examining whether the ratio of organic aerosol to CO would change with age
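The three estimators compared in this work differ only in the slope formula. A minimal numpy sketch using the standard textbook formulas (equal error variances are assumed for the orthogonal fit; the data below are illustrative):

```python
import numpy as np

def slopes(x, y):
    """Return (OLS, orthogonal, geometric-mean) slope estimates.

    OLS assumes error only in y; orthogonal regression (equal error
    variances in x and y) minimises perpendicular distances; geometric
    mean regression is the geometric mean of the y-on-x slope and the
    reciprocal of the x-on-y slope.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx = np.sum((x - x.mean()) ** 2)
    syy = np.sum((y - y.mean()) ** 2)
    sxy = np.sum((x - x.mean()) * (y - y.mean()))
    b_ols = sxy / sxx
    b_orth = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    b_gmr = np.sign(sxy) * np.sqrt(syy / sxx)
    return b_ols, b_orth, b_gmr
```

On noise-free linear data all three coincide; with noise in both variables the OLS slope is attenuated relative to the geometric-mean slope.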

  12. Dual Regression

    OpenAIRE

    Spady, Richard; Stouli, Sami

    2012-01-01

    We propose dual regression as an alternative to the quantile regression process for the global estimation of conditional distribution functions under minimal assumptions. Dual regression provides all the interpretational power of the quantile regression process while avoiding the need for repairing the intersecting conditional quantile surfaces that quantile regression often produces in practice. Our approach introduces a mathematical programming characterization of conditional distribution f...

  13. Applied Regression Modeling A Business Approach

    CERN Document Server

    Pardoe, Iain

    2012-01-01

    An applied and concise treatment of statistical regression techniques for business students and professionals who have little or no background in calculus. Regression analysis is an invaluable statistical methodology in business settings and is vital to model the relationship between a response variable and one or more predictor variables, as well as the prediction of a response value given values of the predictors. In view of the inherent uncertainty of business processes, such as the volatility of consumer spending and the presence of market uncertainty, business professionals use regression a

  14. The quantile regression approach to efficiency measurement: insights from Monte Carlo simulations.

    Science.gov (United States)

    Liu, Chunping; Laporte, Audrey; Ferguson, Brian S

    2008-09-01

    In the health economics literature there is an ongoing debate over approaches used to estimate the efficiency of health systems at various levels, from the level of the individual hospital - or nursing home - up to that of the health system as a whole. The two most widely used approaches to evaluating the efficiency with which various units deliver care are non-parametric data envelopment analysis (DEA) and parametric stochastic frontier analysis (SFA). Productivity researchers tend to have very strong preferences over which methodology to use for efficiency estimation. In this paper, we use Monte Carlo simulation to compare the performance of DEA and SFA in terms of their ability to accurately estimate efficiency. We also evaluate quantile regression as a potential alternative approach. A Cobb-Douglas production function, random error terms and a technical inefficiency term with different distributions are used to calculate the observed output. The results, based on these experiments, suggest that neither DEA nor SFA can be regarded as clearly dominant, and that, depending on the quantile estimated, the quantile regression approach may be a useful addition to the armamentarium of methods for estimating technical efficiency.
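The quantile-regression estimator evaluated here reduces to a linear program under the pinball (check) loss. A self-contained scipy sketch of that estimator (illustrative data only, not the paper's Monte Carlo design):

```python
import numpy as np
from scipy.optimize import linprog

def quantile_regression(X, y, tau):
    """Estimate beta minimising the pinball loss at quantile tau via the
    linear program:  min tau*1'u + (1-tau)*1'v  s.t.  X b + u - v = y,
    with u, v >= 0 the positive and negative residual parts."""
    n, k = X.shape
    c = np.concatenate([np.zeros(k), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * k + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:k]
```

For data lying exactly on a line, every quantile fit recovers that line; with noisy data, different values of tau trace out different conditional quantiles.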

  15. Approach for Self-Calibrating CO2 Measurements with Linear Membrane-Based Gas Sensors

    Directory of Open Access Journals (Sweden)

    Detlef Lazik

    2016-11-01

    Full Text Available Linear membrane-based gas sensors that can be advantageously applied for the measurement of a single gas component in large heterogeneous systems, e.g., for representative determination of CO2 in the subsurface, can be designed depending on the properties of the observation object. A resulting disadvantage is that the permeation-based sensor response depends on operating conditions, the individual site-adapted sensor geometry, the membrane material, and the target gas component. Therefore, calibration is needed, especially of the slope, which could change over several orders of magnitude. A calibration-free approach based on an internal gas standard is developed to overcome the multi-criterial slope dependency. This results in a normalization of sensor response and enables the sensor to assess the significance of measurement. The approach was validated using the example of CO2 analysis in dry air with tubular PDMS membranes for various CO2 concentrations of an internal standard. Negligible temperature dependency was found within an 18 K range. The transformation behavior of the measurement signal and the influence of concentration variations of the internal standard on the measurement signal were shown. Offsets that were adjusted based on the stated theory for the given measurement conditions and material data from the literature were in agreement with the experimentally determined offsets. A measurement comparison with an NDIR reference sensor shows an unexpectedly low bias (<1%) of the non-calibrated sensor response, and comparable statistical uncertainty.

  16. Absolute radiometric calibration of Landsat using a pseudo invariant calibration site

    Science.gov (United States)

    Helder, D.; Thome, K.J.; Mishra, N.; Chander, G.; Xiong, Xiaoxiong; Angal, A.; Choi, Tae-young

    2013-01-01

    Pseudo invariant calibration sites (PICS) have been used for on-orbit radiometric trending of optical satellite systems for more than 15 years. This approach to vicarious calibration has demonstrated a high degree of reliability and repeatability at the level of 1-3% depending on the site, spectral channel, and imaging geometries. A variety of sensors have used this approach for trending because it is broadly applicable and easy to implement. Models to describe the surface reflectance properties, as well as the intervening atmosphere have also been developed to improve the precision of the method. However, one limiting factor of using PICS is that an absolute calibration capability has not yet been fully developed. Because of this, PICS are primarily limited to providing only long term trending information for individual sensors or cross-calibration opportunities between two sensors. This paper builds an argument that PICS can be used more extensively for absolute calibration. To illustrate this, a simple empirical model is developed for the well-known Libya 4 PICS based on observations by Terra MODIS and EO-1 Hyperion. The model is validated by comparing model predicted top-of-atmosphere reflectance values to actual measurements made by the Landsat ETM+ sensor reflective bands. Following this, an outline is presented to develop a more comprehensive and accurate PICS absolute calibration model that can be Système international d'unités (SI) traceable. These initial concepts suggest that absolute calibration using PICS is possible on a broad scale and can lead to improved on-orbit calibration capabilities for optical satellite sensors.

  17. Influence of smoothing of X-ray spectra on parameters of calibration model

    International Nuclear Information System (INIS)

    Antoniak, W.; Urbanski, P.; Kowalska, E.

    1998-01-01

    Parameters of the calibration model before and after smoothing of X-ray spectra have been investigated. The calibration model was calculated using a multivariate procedure, namely partial least squares regression (PLS). Investigations have been performed using six sets of various standards used for calibration of some instruments based on the X-ray fluorescence principle. Three smoothing methods were compared: regression splines, Savitzky-Golay filtering, and the Discrete Fourier Transform. The calculations were performed using the software package MATLAB and some home-made programs. (author)
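Of the three smoothing methods compared, Savitzky-Golay is the easiest to reproduce: it fits a low-order polynomial in a sliding window, so it leaves any polynomial of degree at most `polyorder` untouched while suppressing high-frequency noise. A scipy sketch (the signal is illustrative, not the paper's spectra):

```python
import numpy as np
from scipy.signal import savgol_filter

# A noise-free quadratic "spectrum": since polyorder=2 matches the
# signal's degree, the filter reproduces it exactly
channels = np.arange(50, dtype=float)
spectrum = 3.0 + 2.0 * channels + 0.5 * channels ** 2
smoothed = savgol_filter(spectrum, window_length=7, polyorder=2)
```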

  18. A Quantile Regression Approach to Estimating the Distribution of Anesthetic Procedure Time during Induction.

    Directory of Open Access Journals (Sweden)

    Hsin-Lun Wu

    Full Text Available Although procedure time analyses are important for operating room management, it is not easy to extract useful information from clinical procedure time data. A novel approach was proposed to analyze procedure time during anesthetic induction. A two-step regression analysis was performed to explore influential factors of anesthetic induction time (AIT). Linear regression with stepwise model selection was used to select significant correlates of AIT, and then quantile regression was employed to illustrate the dynamic relationships between AIT and selected variables at distinct quantiles. A total of 1,060 patients were analyzed. First and second-year residents (R1-R2) required longer AIT than third and fourth-year residents and attending anesthesiologists (p = 0.006). Factors prolonging AIT included American Society of Anesthesiologists physical status ≥ III, arterial, central venous and epidural catheterization, and use of bronchoscopy. Presence of the surgeon before induction decreased AIT (p < 0.001). Type of surgery also had a significant influence on AIT. Quantile regression satisfactorily estimated the extra time needed to complete induction for each influential factor at distinct quantiles. Our analysis of AIT demonstrated the benefit of quantile regression analysis in providing a more comprehensive view of the relationships between procedure time and related factors. This novel two-step regression approach has potential applications to procedure time analysis in operating room management.

  19. Function approximation with polynomial regression splines

    International Nuclear Information System (INIS)

    Urbanski, P.

    1996-01-01

    Principles of the polynomial regression splines as well as algorithms and programs for their computation are presented. The programs prepared using software package MATLAB are generally intended for approximation of the X-ray spectra and can be applied in the multivariate calibration of radiometric gauges. (author)
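A polynomial regression spline can be expressed as ordinary least squares on a truncated-power basis. The sketch below illustrates that idea in numpy (the knot positions and coefficients are made up for illustration, not taken from the paper):

```python
import numpy as np

def cubic_spline_basis(x, knots):
    """Truncated-power basis for a cubic regression spline:
    1, x, x^2, x^3, and (x - k)_+^3 for each interior knot k."""
    x = np.asarray(x, float)
    cols = [np.ones_like(x), x, x ** 2, x ** 3]
    cols += [np.clip(x - k, 0.0, None) ** 3 for k in knots]
    return np.column_stack(cols)

x = np.linspace(0.0, 10.0, 60)
B = cubic_spline_basis(x, knots=[3.0, 6.0])
coef = np.array([1.0, 0.5, -0.2, 0.05, -0.1, 0.08])
y = B @ coef                                    # synthetic "spectrum"
fit, *_ = np.linalg.lstsq(B, y, rcond=None)     # least-squares spline fit
```

Because fitting the spline is just linear least squares on this basis, it slots directly into a multivariate calibration pipeline as a preprocessing/approximation step.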

  20. Modified parity space averaging approaches for online cross-calibration of redundant sensors in nuclear reactors

    Directory of Open Access Journals (Sweden)

    Moath Kassim

    2018-05-01

    Full Text Available To maintain safety and reliability of reactors, redundant sensors are usually used to measure critical variables and estimate their averaged time-dependency. Unhealthy sensors can badly influence the estimation result of the process variable. Since online condition monitoring was introduced, the online cross-calibration method has been widely used to detect any anomaly of sensor readings among the redundant group. The cross-calibration method has four main averaging techniques: simple averaging, band averaging, weighted averaging, and parity space averaging (PSA). PSA is used to weigh redundant signals based on their error bounds and their band consistency. Using the consistency weighting factor (C), PSA assigns more weight to consistent signals that have shared bands, based on how many bands they share, and gives inconsistent signals very low weight. In this article, three approaches are introduced for improving the PSA technique: the first is to add another consistency factor, so-called trend consistency (TC), to preserve any characteristic edge that reflects the behavior of the equipment/component measured by the process parameter; the second approach proposes replacing the error bound/accuracy based weighting factor (Wa) with a weighting factor based on the Euclidean distance (Wd); and the third approach proposes applying Wd, TC, and C all together. Cold neutron source data sets of four redundant hydrogen pressure transmitters from a research reactor were used to perform the validation and verification. Results showed that the second and third modified approaches lead to reasonable improvement of the PSA technique. All approaches implemented in this study were similar in that they have the capability to (1) identify and isolate a drifted sensor that should undergo calibration, (2) identify faulty sensors due to long, continuous ranges of missing data, and (3) identify a healthy sensor.
Keywords: Nuclear Reactors
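The consistency-weighting idea behind PSA can be illustrated in a few lines: each sensor's band is its reading ± error bound, and sensors sharing more bands receive more weight. The sketch below is a simplified illustration of that weighting only; it omits the article's trend-consistency and Euclidean-distance refinements, and the readings are made up:

```python
import numpy as np

def psa_style_average(readings, error_bound):
    """Simplified parity-space-style consistency weighting: each sensor's
    band is [r - e, r + e]; a sensor's weight is the number of other
    sensors whose bands overlap its own, so inconsistent (e.g. drifted)
    sensors get zero weight."""
    r = np.asarray(readings, float)
    lo, hi = r - error_bound, r + error_bound
    # pairwise band-overlap matrix (includes self-overlap on the diagonal)
    overlap = (lo[:, None] <= hi[None, :]) & (hi[:, None] >= lo[None, :])
    weights = overlap.sum(axis=1) - 1.0       # exclude self
    if weights.sum() == 0.0:
        weights = np.ones_like(r)             # no consistency info: fall back
    return float(np.average(r, weights=weights)), weights

# Four redundant transmitters; the last one has drifted high
est, w = psa_style_average([10.0, 10.1, 9.9, 12.0], error_bound=0.3)
```

Here the drifted transmitter shares no band with the others, so it is excluded from the average entirely.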

  1. Poisson regression approach for modeling fatal injury rates amongst Malaysian workers

    International Nuclear Information System (INIS)

    Kamarulzaman Ibrahim; Heng Khai Theng

    2005-01-01

    Many safety studies are based on the analysis carried out on injury surveillance data. The injury surveillance data gathered for the analysis include information on the number of employees at risk of injury in each of several strata, where the strata are defined in terms of a series of important predictor variables. Further insight into the relationship between fatal injury rates and predictor variables may be obtained by the Poisson regression approach. Poisson regression is widely used in analyzing count data. In this study, Poisson regression is used to model the relationship between fatal injury rates and the predictor variables year (1995-2002), gender, recording system and industry type. Data for the analysis were obtained from PERKESO and Jabatan Perangkaan Malaysia. It is found that the assumption that the data follow a Poisson distribution has been violated. After correction for the problem of overdispersion, the predictor variables that are found to be significant in the model are gender, system of recording, industry type, and two interaction effects (between recording system and industry type, and between year and industry type).
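The overdispersion correction mentioned above is typically driven by the Pearson dispersion statistic, which compares residual variation to what a Poisson model allows. A minimal sketch (the counts and fitted means below are made up for illustration):

```python
import numpy as np

def pearson_dispersion(y, mu, n_params):
    """Pearson chi-square divided by residual degrees of freedom.
    Values well above 1 indicate overdispersion relative to Poisson;
    a quasi-Poisson correction inflates standard errors by
    sqrt(dispersion)."""
    y, mu = np.asarray(y, float), np.asarray(mu, float)
    return float(np.sum((y - mu) ** 2 / mu) / (y.size - n_params))

# Toy grouped injury counts vs. Poisson-fitted means (hypothetical numbers)
y = np.array([2.0, 8.0, 4.0, 11.0])
mu = np.array([5.0, 5.0, 7.0, 8.0])
phi = pearson_dispersion(y, mu, n_params=1)
```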

  2. RGB Color Calibration for Quantitative Image Analysis: The “3D Thin-Plate Spline” Warping Approach

    Directory of Open Access Journals (Sweden)

    Corrado Costa

    2012-05-01

    Full Text Available In the last years the need to numerically define color by its coordinates in n-dimensional space has increased strongly. Colorimetric calibration is fundamental in food processing and other biological disciplines to quantitatively compare samples’ color during workflow with many devices. Several software programmes are available to perform standardized colorimetric procedures, but they are often too imprecise for scientific purposes. In this study, we applied the Thin-Plate Spline interpolation algorithm to calibrate colours in sRGB space (the corresponding Matlab code is reported in the Appendix). This was compared with two other approaches. The first is based on a commercial calibration system (ProfileMaker) and the second on a Partial Least Squares analysis. Moreover, to explore device variability and resolution two different cameras were adopted and for each sensor, three consecutive pictures were acquired under four different light conditions. According to our results, the Thin-Plate Spline approach reported a very high efficiency of calibration allowing the possibility to create a revolution in the in-field applicative context of colour quantification not only in food sciences, but also in other biological disciplines. These results are of great importance for scientific color evaluation when lighting conditions are not controlled. Moreover, it allows the use of low cost instruments while still returning scientifically sound quantitative data.
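For readers without Matlab: scipy's `RBFInterpolator` supports a thin-plate-spline kernel and can reproduce the same kind of 3D warp from device colour space to reference colour space. This is a sketch with synthetic chart colours, not the authors' Appendix code:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
measured_rgb = rng.uniform(0.0, 1.0, size=(24, 3))   # device colours of a chart
reference_rgb = measured_rgb ** 1.1 + 0.02           # hypothetical target values

# 3D thin-plate-spline warp from device colour space to reference space;
# with smoothing=0 the map passes exactly through the calibration colours
warp = RBFInterpolator(measured_rgb, reference_rgb,
                       kernel="thin_plate_spline", smoothing=0.0)
calibrated = warp(measured_rgb)
```

New measurements are calibrated by evaluating `warp` at their device RGB coordinates.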

  3. RGB Color Calibration for Quantitative Image Analysis: The “3D Thin-Plate Spline” Warping Approach

    Science.gov (United States)

    Menesatti, Paolo; Angelini, Claudio; Pallottino, Federico; Antonucci, Francesca; Aguzzi, Jacopo; Costa, Corrado

    2012-01-01

    In the last years the need to numerically define color by its coordinates in n-dimensional space has increased strongly. Colorimetric calibration is fundamental in food processing and other biological disciplines to quantitatively compare samples' color during workflow with many devices. Several software programmes are available to perform standardized colorimetric procedures, but they are often too imprecise for scientific purposes. In this study, we applied the Thin-Plate Spline interpolation algorithm to calibrate colours in sRGB space (the corresponding Matlab code is reported in the Appendix). This was compared with two other approaches. The first is based on a commercial calibration system (ProfileMaker) and the second on a Partial Least Squares analysis. Moreover, to explore device variability and resolution two different cameras were adopted and for each sensor, three consecutive pictures were acquired under four different light conditions. According to our results, the Thin-Plate Spline approach reported a very high efficiency of calibration allowing the possibility to create a revolution in the in-field applicative context of colour quantification not only in food sciences, but also in other biological disciplines. These results are of great importance for scientific color evaluation when lighting conditions are not controlled. Moreover, it allows the use of low cost instruments while still returning scientifically sound quantitative data. PMID:22969337

  4. Does intense monitoring matter? A quantile regression approach

    Directory of Open Access Journals (Sweden)

    Fekri Ali Shawtari

    2017-06-01

    Full Text Available Corporate governance has become a centre of attention in corporate management at both micro and macro levels due to the adverse consequences and repercussions of insufficient accountability. In this study, we use the Malaysian stock market as a sample to explore the impact of intense monitoring on the relationship between intellectual capital performance and market valuation. The objectives of the paper are threefold: (i) to investigate whether intense monitoring affects the intellectual capital performance of listed companies; (ii) to explore the impact of intense monitoring on firm value; (iii) to examine the extent to which directors serving on more than two board committees affect the linkage between intellectual capital performance and firms' value. We employ two approaches, namely, Ordinary Least Squares (OLS) and the quantile regression approach. The purpose of the latter is to estimate and generate inference about conditional quantile functions. This method is useful when the conditional distribution does not have a standard shape, such as an asymmetric, fat-tailed, or truncated distribution. In terms of variables, intellectual capital is measured using the value added intellectual coefficient (VAIC), while market valuation is proxied by the firm's market capitalization. The findings of the quantile regression show that some of the results do not coincide with those of OLS. We found that intensity of monitoring does not influence the intellectual capital of all firms. It is also evident that intensity of monitoring does not influence market valuation. However, to some extent, it moderates the relationship between intellectual capital performance and market valuation. This paper contributes to the existing literature as it presents new empirical evidence on the moderating effects of the intensity of monitoring of the board committees on the relationship between performance and intellectual capital.

  5. Approaches to highly parameterized inversion-A guide to using PEST for groundwater-model calibration

    Science.gov (United States)

    Doherty, John E.; Hunt, Randall J.

    2010-01-01

    Highly parameterized groundwater models can create calibration difficulties. Regularized inversion-the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation-is becoming a common approach to address these difficulties and enhance the transfer of information contained in field measurements to parameters used to model that system. Though commonly used in other industries, regularized inversion is somewhat imperfectly understood in the groundwater field. There is concern that this unfamiliarity can lead to underuse, and misuse, of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at an intermediate- to advanced-level modeler, and it focuses on the PEST software suite-a frequently used tool for highly parameterized model calibration and one that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented after a logical progression of steps for building suitable PEST input. The discussion starts with use of pilot points as a parameterization device and processing/grouping observations to form multicomponent objective functions. A description of potential parameter solution methodologies and resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by PEST utility support programs, are presented in the appendixes.

  6. A Novel Imbalanced Data Classification Approach Based on Logistic Regression and Fisher Discriminant

    Directory of Open Access Journals (Sweden)

    Baofeng Shi

    2015-01-01

    Full Text Available We introduce an imbalanced data classification approach based on logistic regression significant discriminant and Fisher discriminant. First of all, a key indicators extraction model based on logistic regression significant discriminant and correlation analysis is derived to extract features for customer classification. Secondly, on the basis of the linear weighting utilizing Fisher discriminant, a customer scoring model is established. And then, a customer rating model where the number of customers in each rating follows a normal distribution is constructed. The performance of the proposed model and the classical SVM classification method are evaluated in terms of their ability to correctly classify consumers as default or nondefault customers. Empirical results using the data of 2157 customers in financial engineering suggest that the proposed approach performs better than the SVM model in dealing with imbalanced data classification. Moreover, our approach contributes to locating the qualified customers for the banks and the bond investors.

  7. Calibration of higher eigenmodes of cantilevers

    International Nuclear Information System (INIS)

    Labuda, Aleksander; Kocun, Marta; Walsh, Tim; Meinhold, Jieh; Proksch, Tania; Meinhold, Waiman; Anderson, Caleb; Proksch, Roger; Lysy, Martin

    2016-01-01

    A method is presented for calibrating the higher eigenmodes (resonant modes) of atomic force microscopy cantilevers that can be performed prior to any tip-sample interaction. The method leverages recent efforts in accurately calibrating the first eigenmode by providing the higher-mode stiffness as a ratio to the first mode stiffness. A one-time calibration routine must be performed for every cantilever type to determine a power-law relationship between stiffness and frequency, which is then stored for future use on similar cantilevers. Then, future calibrations only require a measurement of the ratio of resonant frequencies and the stiffness of the first mode. This method is verified through stiffness measurements using three independent approaches: interferometric measurement, AC approach-curve calibration, and finite element analysis simulation. Power-law values for calibrating higher-mode stiffnesses are reported for several cantilever models. Once the higher-mode stiffnesses are known, the amplitude of each mode can also be calibrated from the thermal spectrum by application of the equipartition theorem.
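The power-law calibration described here is a two-line computation once the exponent is known. A sketch with hypothetical numbers (k1, the frequencies, and the exponent p are illustrative, not tabulated values from the paper):

```python
import math

BOLTZMANN = 1.380649e-23  # J/K

def higher_mode_stiffness(k1, f1, fn, p):
    """Stiffness of a higher eigenmode from the first-mode stiffness k1
    and the resonant-frequency ratio, via the power law
    k_n = k1 * (f_n / f1)**p. The exponent p is cantilever-model
    specific, determined by a one-time calibration."""
    return k1 * (fn / f1) ** p

def thermal_rms_amplitude(k, temperature=295.0):
    """Equipartition: <q^2> = kB*T/k gives the thermal RMS amplitude (m),
    used to calibrate each mode's amplitude from the thermal spectrum."""
    return math.sqrt(BOLTZMANN * temperature / k)

# Hypothetical cantilever: k1 = 2 N/m, f1 = 70 kHz, second mode at 430 kHz,
# p = 2 (illustrative exponent only)
k2 = higher_mode_stiffness(2.0, 70e3, 430e3, 2.0)
```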

  8. Calibration of higher eigenmodes of cantilevers

    Energy Technology Data Exchange (ETDEWEB)

    Labuda, Aleksander; Kocun, Marta; Walsh, Tim; Meinhold, Jieh; Proksch, Tania; Meinhold, Waiman; Anderson, Caleb; Proksch, Roger [Asylum Research, an Oxford Instruments Company, Santa Barbara, California 93117 (United States); Lysy, Martin [Department of Statistics and Actuarial Science, University of Waterloo, Waterloo, Ontario N2L 3G1 (Canada)

    2016-07-15

    A method is presented for calibrating the higher eigenmodes (resonant modes) of atomic force microscopy cantilevers that can be performed prior to any tip-sample interaction. The method leverages recent efforts in accurately calibrating the first eigenmode by providing the higher-mode stiffness as a ratio to the first mode stiffness. A one-time calibration routine must be performed for every cantilever type to determine a power-law relationship between stiffness and frequency, which is then stored for future use on similar cantilevers. Then, future calibrations only require a measurement of the ratio of resonant frequencies and the stiffness of the first mode. This method is verified through stiffness measurements using three independent approaches: interferometric measurement, AC approach-curve calibration, and finite element analysis simulation. Power-law values for calibrating higher-mode stiffnesses are reported for several cantilever models. Once the higher-mode stiffnesses are known, the amplitude of each mode can also be calibrated from the thermal spectrum by application of the equipartition theorem.

  9. Reactor operations, inspection and maintenance. PNGS Calibration Program

    International Nuclear Information System (INIS)

    Lopez, E.

    1997-01-01

    The PNGS Calibration Program is being implemented as a response to various concerns identified in recent PEER evaluations and AECB audits. Identified areas of concern were the approach to instrument calibration of Special Safety Systems (SSS). The implementation of a calibration program is a significant improvement in operating practices. A systematic and comprehensive approach to calibration of instrumentation will improve the quality of operation of the plant with a positive contribution to PNGS safety of operation and economic objectives. This paper describes the strategy to implement the proposed calibration program and describes its calibration data requirements. (DM)

  10. Calibration through on-line monitoring of instruments channels

    International Nuclear Information System (INIS)

    James, R.W.

    1996-01-01

    Plant technical specifications require periodic calibration of instrument channels, and this has traditionally meant calibration at fixed time intervals for nearly all instruments. Experience has shown that unnecessarily frequent calibrations reduce channel availability and reliability, impact outage durations, and increase maintenance costs. An alternative approach to satisfying existing requirements for periodic calibration consists of on-line monitoring and quantitative comparison of instrument channels during operation to identify instrument degradation and failure. A Utility Working Group has been formed by EPRI to support the technical activities necessary to achieve generic NRC acceptance of on-line monitoring of redundant instrument channels as a basis for determining when to perform calibrations. A topical report proposing NRC acceptance of this approach was submitted in August 1995, and the Working Group is currently resolving NRC technical questions. This paper describes the proposed approach and the current status of the topical report with regard to NRC review. While these activities will not preclude utilities from continuing to use existing calibration approaches, successful acceptance of this performance-based approach will allow utilities to substantially reduce the number of calibrations which are performed. Concurrent benefits will include reduced I&C impact on outage durations and improved sensitivity to instrument channel performance

  11. Statistical methods in regression and calibration analysis of chromosome aberration data

    International Nuclear Information System (INIS)

    Merkle, W.

    1983-01-01

    The method of iteratively reweighted least squares for the regression analysis of Poisson distributed chromosome aberration data is reviewed in the context of other fit procedures used in the cytogenetic literature. As an application of the resulting regression curves methods for calculating confidence intervals on dose from aberration yield are described and compared, and, for the linear quadratic model a confidence interval is given. Emphasis is placed on the rational interpretation and the limitations of various methods from a statistical point of view. (orig./MG)
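Iteratively reweighted least squares for the linear-quadratic model amounts to weighted least squares with Poisson weights 1/mu, iterated to convergence. A numpy sketch of that scheme (the dose points and coefficients are illustrative, not data from the review):

```python
import numpy as np

def irls_linear_quadratic(dose, yields, n_iter=30):
    """Iteratively reweighted least squares for Poisson-distributed
    aberration yields under the linear-quadratic model
    mu = c + alpha*D + beta*D**2 (identity link, weights 1/mu)."""
    D = np.asarray(dose, float)
    y = np.asarray(yields, float)
    X = np.column_stack([np.ones_like(D), D, D ** 2])
    b = np.linalg.lstsq(X, y, rcond=None)[0]      # unweighted start
    for _ in range(n_iter):
        mu = np.clip(X @ b, 1e-9, None)           # Poisson variance = mean
        w = 1.0 / mu
        WX = w[:, None] * X
        b = np.linalg.solve(X.T @ WX, WX.T @ y)   # weighted normal equations
    return b

dose = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])
true = np.array([0.02, 0.05, 0.08])               # c, alpha, beta (illustrative)
y = true[0] + true[1] * dose + true[2] * dose ** 2
coef = irls_linear_quadratic(dose, y)
```

The inverse of the final weighted normal-equations matrix gives the coefficient covariance used for dose confidence intervals.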

  12. Observation models in radiocarbon calibration

    International Nuclear Information System (INIS)

    Jones, M.D.; Nicholls, G.K.

    2001-01-01

    The observation model underlying any calibration process dictates the precise mathematical details of the calibration calculations. Accordingly it is important that an appropriate observation model is used. Here this is illustrated with reference to the use of reservoir offsets where the standard calibration approach is based on a different model to that which the practitioners clearly believe is being applied. This sort of error can give rise to significantly erroneous calibration results. (author). 12 refs., 1 fig

  13. Immune Algorithm Complex Method for Transducer Calibration

    Directory of Open Access Journals (Sweden)

    YU Jiangming

    2014-08-01

    Full Text Available As a key link in engineering test tasks, transducer calibration has a significant influence on the accuracy and reliability of test results. Because of unknown and complex nonlinear characteristics, conventional methods cannot achieve satisfactory accuracy. An immune-algorithm complex modeling approach is proposed, and simulation studies on the calibration of three multiple-output transducers are carried out using the developed complex model. The simulated and experimental results show that the immune-algorithm complex modeling approach significantly improves calibration precision compared with traditional calibration methods.

  14. Analysing inequalities in Germany a structured additive distributional regression approach

    CERN Document Server

    Silbersdorff, Alexander

    2017-01-01

    This book seeks new perspectives on the growing inequalities that our societies face, putting forward Structured Additive Distributional Regression as a means of statistical analysis that circumvents the common problem of analytical reduction to simple point estimators. This new approach allows the observed discrepancy between the individuals’ realities and the abstract representation of those realities to be explicitly taken into consideration, rather than relying on the arithmetic mean alone. In turn, the method is applied to the question of economic inequality in Germany.

  15. Auto-associative Kernel Regression Model with Weighted Distance Metric for Instrument Drift Monitoring

    International Nuclear Information System (INIS)

    Shin, Ho Cheol; Park, Moon Ghu; You, Skin

    2006-01-01

    Recently, many on-line approaches to instrument channel surveillance (drift monitoring and fault detection) have been reported worldwide. On-line monitoring (OLM) method evaluates instrument channel performance by assessing its consistency with other plant indications through parametric or non-parametric models. The heart of an OLM system is the model giving an estimate of the true process parameter value against individual measurements. This model gives process parameter estimate calculated as a function of other plant measurements which can be used to identify small sensor drifts that would require the sensor to be manually calibrated or replaced. This paper describes an improvement of auto associative kernel regression (AAKR) by introducing a correlation coefficient weighting on kernel distances. The prediction performance of the developed method is compared with conventional auto-associative kernel regression
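The AAKR estimator described above can be sketched in a few lines of numpy. In this sketch the per-signal distance weights are supplied by the caller; the cited improvement derives them from inter-signal correlation coefficients, and the data below are illustrative:

```python
import numpy as np

def aakr_estimate(memory, query, var_weights, bandwidth=0.5):
    """Auto-associative kernel regression with a weighted distance metric.

    memory      : (n_obs, n_signals) matrix of fault-free historical data
    query       : (n_signals,) current measurement vector
    var_weights : per-signal weights in the distance metric (e.g. derived
                  from correlation coefficients between signals)
    """
    memory = np.asarray(memory, float)
    q = np.asarray(query, float)
    w = np.asarray(var_weights, float)
    d2 = np.sum(w * (memory - q) ** 2, axis=1)    # weighted sq. distances
    kern = np.exp(-d2 / (2.0 * bandwidth ** 2))   # Gaussian kernel
    kern /= kern.sum()
    return kern @ memory                          # kernel-weighted average

memory = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0], [0.5, 2.0]])
est = aakr_estimate(memory, query=[1.0, 1.0],
                    var_weights=[1.0, 1.0], bandwidth=0.05)
```

A drift shows up as a persistent gap between a sensor's measurement and this memory-based estimate of its true value.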

  16. Initial tank calibration at NUCEF critical facility. 2

    International Nuclear Information System (INIS)

    Yanagisawa, Hiroshi

    1994-07-01

    Analyses of initial tank calibration data were carried out for the purposes of nuclear material accountancy and control for critical facilities in NUCEF, the Nuclear Fuel Cycle Safety Engineering Research Facility. Calibration functions to evaluate the volume of nuclear material solution in accountancy tanks were determined by regression analysis of the data, taking the dimensions and shape of the tank into account. Analyses of dip-tube separation (probe separation), which is necessary to evaluate solution density in the tanks, were also carried out. As a result, regression errors of volume calculated with the calibration functions were within 0.05 L (0.01%) at the nominal level of the Pu accountancy tanks. Errors of the evaluated dip-tube separations were also small, e.g. within 0.2 mm (0.11%). It was therefore estimated that systematic errors of bulk measurements would satisfy the target value for the NUCEF critical facilities (0.3% for Pu accountancy tanks). This paper summarizes the data analysis methods, results of analysis and evaluated errors. (author)

  17. Performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data.

    Science.gov (United States)

    Yelland, Lisa N; Salter, Amy B; Ryan, Philip

    2011-10-15

    Modified Poisson regression, which combines a log Poisson regression model with robust variance estimation, is a useful alternative to log binomial regression for estimating relative risks. Previous studies have shown both analytically and by simulation that modified Poisson regression is appropriate for independent prospective data. This method is often applied to clustered prospective data, despite a lack of evidence to support its use in this setting. The purpose of this article is to evaluate the performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data, by using generalized estimating equations to account for clustering. A simulation study is conducted to compare log binomial regression and modified Poisson regression for analyzing clustered data from intervention and observational studies. Both methods generally perform well in terms of bias, type I error, and coverage. Unlike log binomial regression, modified Poisson regression is not prone to convergence problems. The methods are contrasted by using example data sets from 2 large studies. The results presented in this article support the use of modified Poisson regression as an alternative to log binomial regression for analyzing clustered prospective data when clustering is taken into account by using generalized estimating equations.
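
    The estimator described above can be sketched directly: fit a Poisson GLM with log link to the binary outcome by iteratively reweighted least squares, then replace the model-based variance with a sandwich estimator whose "meat" sums score contributions within clusters (the GEE-with-independence-working-correlation special case). The following numpy sketch on synthetic clustered data is illustrative, not a validated implementation; a production analysis would use an established GEE routine.

```python
import numpy as np

def modified_poisson(X, y, cluster, n_iter=50):
    """Modified Poisson regression: Poisson GLM (log link) fit to binary y,
    with a cluster-robust sandwich variance. exp(beta) estimates relative
    risks."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):                      # IRLS / Fisher scoring
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu             # working response
        beta = np.linalg.solve((X.T * mu) @ X, (X.T * mu) @ z)
    mu = np.exp(X @ beta)
    A = (X.T * mu) @ X                           # "bread": model-based information
    B = np.zeros((p, p))                         # "meat": cluster-summed scores
    for c in np.unique(cluster):
        s = X[cluster == c].T @ (y[cluster == c] - mu[cluster == c])
        B += np.outer(s, s)
    V = np.linalg.inv(A) @ B @ np.linalg.inv(A)  # sandwich covariance
    return beta, np.sqrt(np.diag(V))

rng = np.random.default_rng(1)
clusters = np.repeat(np.arange(100), 4)          # 100 clusters of size 4
x = rng.binomial(1, 0.5, size=400)               # binary exposure
y = rng.binomial(1, np.where(x == 1, 0.4, 0.2)).astype(float)  # true RR = 2
X = np.column_stack([np.ones(400), x])
beta, se = modified_poisson(X, y, clusters)
rr = np.exp(beta[1])                             # estimated relative risk
```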

  18. Approach for Self-Calibrating CO₂ Measurements with Linear Membrane-Based Gas Sensors.

    Science.gov (United States)

    Lazik, Detlef; Sood, Pramit

    2016-11-17

    Linear membrane-based gas sensors, which can be advantageously applied to the measurement of a single gas component in large heterogeneous systems, e.g., for representative determination of CO₂ in the subsurface, can be designed to match the properties of the observation object. A resulting disadvantage is that the permeation-based sensor response depends on the operating conditions, the individual site-adapted sensor geometry, the membrane material, and the target gas component. Therefore, calibration is needed, especially of the slope, which can change over several orders of magnitude. A calibration-free approach based on an internal gas standard is developed to overcome this multi-criterial slope dependency. This results in a normalization of the sensor response and enables the sensor to assess the significance of a measurement. The approach was demonstrated for CO₂ analysis in dry air with tubular PDMS membranes, for various CO₂ concentrations of the internal standard. Negligible temperature dependency was found within an 18 K range. The transformation behavior of the measurement signal and the influence of concentration variations of the internal standard on the measurement signal were shown. Offsets adjusted on the basis of the stated theory for the given measurement conditions and material data from the literature were in agreement with the experimentally determined offsets. A measurement comparison with an NDIR reference sensor showed an unexpectedly low bias relative to the sensor response, and comparable statistical uncertainty.

  19. Boosted beta regression.

    Directory of Open Access Journals (Sweden)

    Matthias Schmid

    Regression analysis with a bounded outcome is a common problem in applied statistics. Typical examples include regression models for percentage outcomes and the analysis of ratings that are measured on a bounded scale. In this paper, we consider beta regression, which is a generalization of logit models to situations where the response is continuous on the interval (0,1). Consequently, beta regression is a convenient tool for analyzing percentage responses. The classical approach to fitting a beta regression model is to use maximum likelihood estimation with subsequent AIC-based variable selection. As an alternative to this established - yet unstable - approach, we propose a new estimation technique called boosted beta regression. With boosted beta regression, estimation and variable selection can be carried out simultaneously in a highly efficient way. Additionally, both the mean and the variance of a percentage response can be modeled using flexible nonlinear covariate effects. As a consequence, the new method accounts for common problems such as overdispersion and non-binomial variance structures.

  20. Photometric calibration of the COMBO-17 survey with the Softassign Procrustes Matching method

    Science.gov (United States)

    Sheikhbahaee, Z.; Nakajima, R.; Erben, T.; Schneider, P.; Hildebrandt, H.; Becker, A. C.

    2017-11-01

    Accurate photometric calibration of optical data is crucial for photometric redshift estimation. We present the Softassign Procrustes Matching (SPM) method to improve the colour calibration upon the commonly used Stellar Locus Regression (SLR) method for the COMBO-17 survey. Our colour calibration approach can be categorised as a point-set matching method, which is frequently used in medical imaging and pattern recognition. We attain a photometric redshift precision Δz/(1 + zs) of better than 2 per cent. Our method is based on aligning the stellar locus of the uncalibrated stars to that of a spectroscopic sample of the Sloan Digital Sky Survey standard stars. We achieve our goal by finding a correspondence matrix between the two point-sets and applying the matrix to estimate the appropriate translations in multidimensional colour space. The SPM method is able to find the translation between two point-sets, despite the existence of noise and incompleteness of the common structures in the sets, as long as there is a distinct structure in at least one of the colour-colour pairs. We demonstrate the precision of our colour calibration method with a mock catalogue. The SPM colour calibration code is publicly available at https://neuronphysics@bitbucket.org/neuronphysics/spm.git.

  1. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    Science.gov (United States)

    Ulbrich, Norbert Manfred

    2013-01-01

    A new regression model search algorithm was developed in 2011 that may be used to analyze both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The new algorithm is a simplified version of a more complex search algorithm that was originally developed at the NASA Ames Balance Calibration Laboratory. The new algorithm has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression models. Therefore, the simplified search algorithm is not intended to replace the original search algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm either fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new regression model search algorithm.

  2. Retro-regression--another important multivariate regression improvement.

    Science.gov (United States)

    Randić, M

    2001-01-01

    We review the serious problem associated with instabilities of the coefficients of regression equations, referred to as the MRA (multivariate regression analysis) "nightmare of the first kind". This is manifested when in a stepwise regression a descriptor is included or excluded from a regression. The consequence is an unpredictable change of the coefficients of the descriptors that remain in the regression equation. We follow with consideration of an even more serious problem, referred to as the MRA "nightmare of the second kind", arising when optimal descriptors are selected from a large pool of descriptors. This process typically causes at different steps of the stepwise regression a replacement of several previously used descriptors by new ones. We describe a procedure that resolves these difficulties. The approach is illustrated on boiling points of nonanes which are considered (1) by using an ordered connectivity basis; (2) by using an ordering resulting from application of greedy algorithm; and (3) by using an ordering derived from an exhaustive search for optimal descriptors. A novel variant of multiple regression analysis, called retro-regression (RR), is outlined showing how it resolves the ambiguities associated with both "nightmares" of the first and the second kind of MRA.

  3. Regression algorithm for emotion detection

    OpenAIRE

    Berthelon , Franck; Sander , Peter

    2013-01-01

    We present here two components of a computational system for emotion detection. PEMs (Personalized Emotion Maps) store links between bodily expressions and emotion values, and are individually calibrated to capture each person's emotion profile. They are an implementation based on aspects of Scherer's theoretical complex-system model of emotion (Scherer 2000; Scherer 2009). We also present a regression algorithm that determines a person's emotional feeling from sensor m...

  4. [Hyperspectral Estimation of Apple Tree Canopy LAI Based on SVM and RF Regression].

    Science.gov (United States)

    Han, Zhao-ying; Zhu, Xi-cun; Fang, Xian-yi; Wang, Zhuo-yuan; Wang, Ling; Zhao, Geng-Xing; Jiang, Yuan-mao

    2016-03-01

    Leaf area index (LAI) is a dynamic index of crop population size. Hyperspectral technology can be used to estimate apple canopy LAI rapidly and nondestructively, providing a reference for monitoring tree growth and estimating yield. Red Fuji apple trees at the full fruit-bearing stage were the research objects. Canopy spectral reflectance and LAI values of ninety apple trees were measured with an ASD FieldSpec 3 spectrometer and an LAI-2200 in thirty orchards over two consecutive years in the Qixia research area of Shandong Province. The optimal vegetation indices were selected by correlation analysis of the original spectral reflectance and vegetation indices. Models predicting LAI were built with two multivariate regression methods, support vector machine (SVM) and random forest (RF). The new vegetation indices GNDVI527, NDVI676, RVI682, FD-NDVI656 and GRVI517, together with the two previous main vegetation indices NDVI670 and NDVI705, are in accordance with LAI. In the RF regression model, the calibration-set determination coefficient C-R2 of 0.920 and the validation-set determination coefficient V-R2 of 0.889 are higher than those of the SVM regression model by 0.045 and 0.033, respectively. The calibration-set root mean square error C-RMSE of 0.249 and the validation-set root mean square error V-RMSE of 0.236 are lower than those of the SVM regression model by 0.054 and 0.058, respectively. The RPD values of the calibration set (C-RPD) and validation set (V-RPD) reached 3.363 and 2.520, higher than those of the SVM regression model by 0.598 and 0.262, respectively. The slopes C-S and V-S of the measured-versus-predicted trend lines for the calibration and validation sets are close to 1. The estimation result of the RF regression model is better than that of the SVM model, and the RF regression model can be used to estimate the LAI of Red Fuji apple trees in the full fruit period.
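
    The modeling step can be mirrored with a hedged scikit-learn sketch on synthetic vegetation-index data (the index columns, coefficients and noise level below are invented; only the sample size of 90 canopies echoes the study). RF and SVM regressors are fit on a calibration subset and scored on a validation subset, analogous to the paper's V-R2 and V-RMSE statistics.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.svm import SVR

rng = np.random.default_rng(42)
n = 90                                        # 90 canopies, as in the study
vi = rng.uniform(0.2, 0.9, size=(n, 5))       # stand-ins for vegetation indices
lai = 1.0 + 4.0 * vi[:, 0] + 2.0 * vi[:, 1] + 0.2 * rng.normal(size=n)

train, test = np.arange(60), np.arange(60, 90)   # calibration / validation split
models = {"RF": RandomForestRegressor(n_estimators=300, random_state=0),
          "SVM": SVR(C=10.0)}
scores = {}
for name, model in models.items():
    model.fit(vi[train], lai[train])
    pred = model.predict(vi[test])
    # validation R^2 and RMSE for each model
    scores[name] = (r2_score(lai[test], pred),
                    mean_squared_error(lai[test], pred) ** 0.5)
```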

  5. Convert a low-cost sensor to a colorimeter using an improved regression method

    Science.gov (United States)

    Wu, Yifeng

    2008-01-01

    Closed-loop color calibration is a process to maintain consistent color reproduction for color printers. To perform closed-loop color calibration, a pre-designed color target is printed and automatically measured by a color measuring instrument. A low-cost sensor has been embedded in the printer to perform the color measurement, and a series of sensor calibration and color conversion methods have been developed. The purpose is to obtain accurate colorimetric measurements from the data measured by the low-cost sensor. This requires careful calibration of the sensor and minimization of all possible errors during the color conversion. After comparing several classical color conversion methods, a regression-based color conversion method was selected. Regression is a powerful method for estimating the color conversion functions, but the main difficulty in using it is finding an appropriate function to describe the relationship between the input and output data. In this paper, we propose to use 1D pre-linearization tables to improve the linearity between the input sensor measuring data and the output colorimetric data. Using this method, we can increase the accuracy of the regression, and thus the accuracy of the color conversion.
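
    The benefit of pre-linearization can be shown with a toy numpy sketch (the gamma value, patch data and target weights are invented; in practice a measured lookup table would replace the analytic gamma inverse). A linear regression fit on the raw, gamma-distorted sensor data leaves a curvature residual, while the same regression on the pre-linearized data fits essentially exactly.

```python
import numpy as np

rng = np.random.default_rng(7)
raw = rng.uniform(0.05, 1.0, size=(64, 3))      # raw sensor RGB for 64 patches

def prelinearize(x, gamma=2.2):
    # 1-D per-channel table; an analytic gamma inverse stands in for a
    # measured lookup table here
    return x ** (1.0 / gamma)

lin = prelinearize(raw)
# "true" colorimetric value: linear in the *linearized* channel responses
target = 0.2 * lin[:, 0] + 0.7 * lin[:, 1] + 0.1 * lin[:, 2]

def fit_rmse(features, y):
    # least-squares linear fit (with intercept); returns the residual RMSE
    A = np.column_stack([np.ones(len(features)), features])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sqrt(np.mean((A @ coef - y) ** 2))

rmse_raw = fit_rmse(raw, target)   # regression directly on raw sensor data
rmse_lin = fit_rmse(lin, target)   # regression after 1-D pre-linearization
```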

  6. SPRT Calibration Uncertainties and Internal Quality Control at a Commercial SPRT Calibration Facility

    Science.gov (United States)

    Wiandt, T. J.

    2008-06-01

    The Hart Scientific Division of the Fluke Corporation operates two accredited standard platinum resistance thermometer (SPRT) calibration facilities, one at the Hart Scientific factory in Utah, USA, and the other at a service facility in Norwich, UK. The US facility is accredited through National Voluntary Laboratory Accreditation Program (NVLAP), and the UK facility is accredited through UKAS. Both provide SPRT calibrations using similar equipment and procedures, and at similar levels of uncertainty. These uncertainties are among the lowest available commercially. To achieve and maintain low uncertainties, it is required that the calibration procedures be thorough and optimized. However, to minimize customer downtime, it is also important that the instruments be calibrated in a timely manner and returned to the customer. Consequently, subjecting the instrument to repeated calibrations or extensive repeated measurements is not a viable approach. Additionally, these laboratories provide SPRT calibration services involving a wide variety of SPRT designs. These designs behave differently, yet predictably, when subjected to calibration measurements. To this end, an evaluation strategy involving both statistical process control and internal consistency measures is utilized to provide confidence in both the instrument calibration and the calibration process. This article describes the calibration facilities, procedure, uncertainty analysis, and internal quality assurance measures employed in the calibration of SPRTs. Data will be reviewed and generalities will be presented. Finally, challenges and considerations for future improvements will be discussed.

  7. Actuator-Assisted Calibration of Freehand 3D Ultrasound System.

    Science.gov (United States)

    Koo, Terry K; Silvia, Nathaniel

    2018-01-01

    Freehand three-dimensional (3D) ultrasound has been used independently of other technologies to analyze complex geometries or registered with other imaging modalities to aid surgical and radiotherapy planning. A fundamental requirement for all freehand 3D ultrasound systems is probe calibration. The purpose of this study was to develop an actuator-assisted approach to facilitate freehand 3D ultrasound calibration using point-based phantoms. We modified the mathematical formulation of the calibration problem to eliminate the need of imaging the point targets at different viewing angles and developed an actuator-assisted approach/setup to facilitate quick and consistent collection of point targets spanning the entire image field of view. The actuator-assisted approach was applied to a commonly used cross wire phantom as well as two custom-made point-based phantoms (original and modified), each containing 7 collinear point targets, and compared the results with the traditional freehand cross wire phantom calibration in terms of calibration reproducibility, point reconstruction precision, point reconstruction accuracy, distance reconstruction accuracy, and data acquisition time. Results demonstrated that the actuator-assisted single cross wire phantom calibration significantly improved the calibration reproducibility and offered similar point reconstruction precision, point reconstruction accuracy, distance reconstruction accuracy, and data acquisition time with respect to the freehand cross wire phantom calibration. On the other hand, the actuator-assisted modified "collinear point target" phantom calibration offered similar precision and accuracy when compared to the freehand cross wire phantom calibration, but it reduced the data acquisition time by 57%. It appears that both actuator-assisted cross wire phantom and modified collinear point target phantom calibration approaches are viable options for freehand 3D ultrasound calibration.

  8. Evaluating fossil calibrations for dating phylogenies in light of rates of molecular evolution: a comparison of three approaches.

    Science.gov (United States)

    Lukoschek, Vimoksalehi; Scott Keogh, J; Avise, John C

    2012-01-01

    Evolutionary and biogeographic studies increasingly rely on calibrated molecular clocks to date key events. Although there has been significant recent progress in development of the techniques used for molecular dating, many issues remain. In particular, controversies abound over the appropriate use and placement of fossils for calibrating molecular clocks. Several methods have been proposed for evaluating candidate fossils; however, few studies have compared the results obtained by different approaches. Moreover, no previous study has incorporated the effects of nucleotide saturation from different data types in the evaluation of candidate fossils. In order to address these issues, we compared three approaches for evaluating fossil calibrations: the single-fossil cross-validation method of Near, Meylan, and Shaffer (2005. Assessing concordance of fossil calibration points in molecular clock studies: an example using turtles. Am. Nat. 165:137-146), the empirical fossil coverage method of Marshall (2008. A simple method for bracketing absolute divergence times on molecular phylogenies using multiple fossil calibration points. Am. Nat. 171:726-742), and the Bayesian multicalibration method of Sanders and Lee (2007. Evaluating molecular clock calibrations using Bayesian analyses with soft and hard bounds. Biol. Lett. 3:275-279) and explicitly incorporate the effects of data type (nuclear vs. mitochondrial DNA) for identifying the most reliable or congruent fossil calibrations. We used advanced (Caenophidian) snakes as a case study; however, our results are applicable to any taxonomic group with multiple candidate fossils, provided appropriate taxon sampling and sufficient molecular sequence data are available. We found that data type strongly influenced which fossil calibrations were identified as outliers, regardless of which method was used. 
Despite the use of complex partitioned models of sequence evolution and multiple calibrations throughout the tree, saturation

  9. Calibrating a multi-model approach to defect production in high energy collision cascades

    International Nuclear Information System (INIS)

    Heinisch, H.L.; Singh, B.N.; Diaz de la Rubia, T.

    1994-01-01

    A multi-model approach to simulating defect production processes at the atomic scale is described that incorporates molecular dynamics (MD), binary collision approximation (BCA) calculations and stochastic annealing simulations. The central hypothesis is that the simple, fast computer codes capable of simulating large numbers of high energy cascades (e.g., BCA codes) can be made to yield the correct defect configurations when their parameters are calibrated using the results of the more physically realistic MD simulations. The calibration procedure is investigated using results of MD simulations of 25 keV cascades in copper. The configurations of point defects are extracted from the MD cascade simulations at the end of the collisional phase, thus providing information similar to that obtained with a binary collision model. The MD collisional phase defect configurations are used as input to the ALSOME annealing simulation code, and values of the ALSOME quenching parameters are determined that yield the best fit to the post-quenching defect configurations of the MD simulations. ((orig.))

  10. A novel calibration approach of MODIS AOD data to predict PM2.5 concentrations

    Directory of Open Access Journals (Sweden)

    P. Koutrakis

    2011-08-01

    Epidemiological studies investigating the human health effects of PM2.5 are susceptible to exposure measurement errors, a form of bias in exposure estimates, since they rely on data from a limited number of PM2.5 monitors within their study area. Satellite data can be used to expand spatial coverage, potentially enhancing our ability to estimate location- or subject-specific exposures to PM2.5, but some have reported poor predictive power. A new methodology was developed to calibrate aerosol optical depth (AOD) data obtained from the Moderate Resolution Imaging Spectroradiometer (MODIS). Subsequently, this method was used to predict ground daily PM2.5 concentrations in the New England region. 2003 MODIS AOD data corresponding to the New England region were retrieved, and PM2.5 concentrations measured at 26 US Environmental Protection Agency (EPA) PM2.5 monitoring sites were used to calibrate the AOD data. A mixed effects model, which allows day-to-day variability in the daily PM2.5-AOD relationship, was used to predict location-specific PM2.5 levels. PM2.5 concentrations measured at the monitoring sites were compared to those predicted for the corresponding grid cells. Both cross-sectional and longitudinal comparisons between the observed and predicted concentrations suggested that the proposed new calibration approach renders MODIS AOD data a potentially useful predictor of PM2.5 concentrations. Furthermore, the estimated PM2.5 levels within the study domain were examined in relation to air pollution sources. Our approach made it possible to investigate the spatial patterns of PM2.5 concentrations within the study domain.
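
    The key idea - a PM2.5-AOD slope that varies from day to day - can be sketched in numpy on synthetic data. The paper fits a mixed effects model with day-specific random slopes; the simplified sketch below approximates that with a separate OLS fit per day (no shrinkage), so all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n_days, n_sites = 30, 26                       # 26 sites, as in the study
aod = rng.uniform(0.05, 0.6, size=(n_days, n_sites))
slope = 25 + 8 * rng.normal(size=n_days)       # day-to-day slope variability
pm25 = 5 + slope[:, None] * aod + 2 * rng.normal(size=(n_days, n_sites))

coefs = np.empty((n_days, 2))
for d in range(n_days):                        # day-specific calibration line
    A = np.column_stack([np.ones(n_sites), aod[d]])
    coefs[d], *_ = np.linalg.lstsq(A, pm25[d], rcond=None)

pred = coefs[:, 0:1] + coefs[:, 1:2] * aod     # predicted PM2.5 per day/site
r = np.corrcoef(pred.ravel(), pm25.ravel())[0, 1]
```

    A single pooled regression would miss the day-specific slopes; fitting them per day (or, as in the paper, as random effects) is what makes AOD a useful PM2.5 predictor.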

  11. Corporate Social Responsibility and Financial Performance: A Two Least Regression Approach

    Directory of Open Access Journals (Sweden)

    Alexander Olawumi Dabor

    2017-12-01

    The objective of this study is to investigate the causality between corporate social responsibility and firm financial performance. The study employed a two-stage least squares regression approach. Fifty-two firms were selected using a scientific sampling method. The findings revealed that corporate social responsibility and firm performance in the manufacturing sector are mutually related at the 5% significance level. The study recommended that management of manufacturing companies in Nigeria should spend on CSR to boost profitability and corporate image.
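
    Two-stage least squares can be sketched in numpy on synthetic data (the instrument, effect sizes and noise levels are invented; only the sample size of 52 firms echoes the study): stage one projects the endogenous regressor onto the instrument, stage two regresses the outcome on the fitted values, removing the confounding that biases plain OLS.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 52                                   # the study sampled 52 firms
z = rng.normal(size=n)                   # instrument
u = rng.normal(size=n)                   # unobserved confounder -> endogeneity
x = 1.0 + 0.8 * z + 0.5 * u + 0.3 * rng.normal(size=n)   # "CSR spending"
y = 2.0 + 1.5 * x + 0.9 * u + 0.3 * rng.normal(size=n)   # "performance"

Z = np.column_stack([np.ones(n), z])
X = np.column_stack([np.ones(n), x])
# stage 1: project the endogenous regressor onto the instrument
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
# stage 2: regress the outcome on the fitted values
X_hat = np.column_stack([np.ones(n), x_hat])
beta_2sls = np.linalg.lstsq(X_hat, y, rcond=None)[0]
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]   # for comparison: biased up
```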

  12. Calibrated photostimulated luminescence is an effective approach to identify irradiated orange during storage

    International Nuclear Information System (INIS)

    Jo, Yunhee; Sanyal, Bhaskar; Chung, Namhyeok; Lee, Hyun-Gyu; Park, Yunji; Park, Hae-Jun; Kwon, Joong-Ho

    2015-01-01

    Photostimulated luminescence (PSL) has been employed as a fast screening method for various irradiated foods. In this study, the potential of PSL was evaluated for identifying oranges irradiated with gamma rays, electron beams and X-rays (0–2 kGy) and stored under different conditions for 6 weeks. The effects of light conditions (natural light, artificial light, and dark) and storage temperatures (4 and 20 °C) on PSL photon counts (PCs) during post-irradiation periods were studied. Non-irradiated samples always gave negative screening results, while irradiated oranges exhibited intermediate results on first PSL measurements; the irradiated samples, however, had much higher PCs. The PCs of all samples declined as storage time increased. Calibrated second PSL measurements showed a PSL ratio <10 for the irradiated samples after 3 weeks of irradiation, confirming their irradiation status under all storage conditions. Calibrated PSL and sample storage in the dark at 4 °C were found to be the most suitable approaches to identify irradiated oranges during storage. - Highlights: • Photostimulated luminescence (PSL) was studied to identify irradiated oranges for quarantine application. • PSL detection efficiency was compared among gamma, electron and X-ray irradiation during the shelf-life of oranges. • PSL properties of samples were characterized using standard samples. • Calibrated PSL gave a clear verdict on irradiation, extending the potential of the PSL technique.

  13. Improved intact soil-core carbon determination applying regression shrinkage and variable selection techniques to complete spectrum laser-induced breakdown spectroscopy (LIBS).

    Science.gov (United States)

    Bricklemyer, Ross S; Brown, David J; Turk, Philip J; Clegg, Sam M

    2013-10-01

    Laser-induced breakdown spectroscopy (LIBS) provides a potential method for rapid, in situ soil C measurement. In previous research on the application of LIBS to intact soil cores, we hypothesized that ultraviolet (UV) spectrum LIBS (200-300 nm) might not provide sufficient elemental information to reliably discriminate between soil organic C (SOC) and inorganic C (IC). In this study, using a custom complete spectrum (245-925 nm) core-scanning LIBS instrument, we analyzed 60 intact soil cores from six wheat fields. Predictive multi-response partial least squares (PLS2) models using full and reduced spectrum LIBS were compared for directly determining soil total C (TC), IC, and SOC. Two regression shrinkage and variable selection approaches, the least absolute shrinkage and selection operator (LASSO) and sparse multivariate regression with covariance estimation (MRCE), were tested for soil C predictions and the identification of wavelengths important for soil C prediction. Using complete spectrum LIBS for PLS2 modeling reduced the calibration standard error of prediction (SEP) 15 and 19% for TC and IC, respectively, compared to UV spectrum LIBS. The LASSO and MRCE approaches provided significantly improved calibration accuracy and reduced SEP 32-55% over UV spectrum PLS2 models. We conclude that (1) complete spectrum LIBS is superior to UV spectrum LIBS for predicting soil C for intact soil cores without pretreatment; (2) LASSO and MRCE approaches provide improved calibration prediction accuracy over PLS2 but require additional testing with increased soil and target analyte diversity; and (3) measurement errors associated with analyzing intact cores (e.g., sample density and surface roughness) require further study and quantification.
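
    The shrinkage-and-selection step can be illustrated with scikit-learn on synthetic "spectra" (the channel count, informative wavelengths and coefficients are invented; the paper's LASSO and MRCE fits operate on real LIBS spectra). With many more wavelength channels than soil cores, the L1 penalty both predicts soil C and identifies a sparse set of C-relevant channels.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(11)
n_cores, n_channels = 60, 500                  # 60 cores, many wavelength channels
spectra = rng.normal(size=(n_cores, n_channels))
informative = [10, 120, 305]                   # hypothetical C-sensitive lines
carbon = spectra[:, informative] @ np.array([1.5, -0.8, 1.1]) \
    + 0.3 * rng.normal(size=n_cores)

X = StandardScaler().fit_transform(spectra)
model = Lasso(alpha=0.1, max_iter=50000).fit(X, carbon)
selected = np.flatnonzero(model.coef_)         # channels retained by the L1 penalty
r2 = model.score(X, carbon)
```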

  14. Calibration in atomic spectrometry: A tutorial review dealing with quality criteria, weighting procedures and possible curvatures

    International Nuclear Information System (INIS)

    Mermet, Jean-Michel

    2010-01-01

    Calibration is required to obtain analyte concentrations in atomic spectrometry. To take full benefit of it, the adequacy of the coefficient of determination r² is discussed, and its use is compared with the uncertainty derived from the prediction bands of the regression. Also discussed, from a tutorial point of view, are the influence of the weighting procedure and of different weighting factors, and the comparison between linear and quadratic regression to cope with curvatures. These points are illustrated with examples based on the use of ICP-AES with nebulization and laser ablation, and of LIBS. Use of a calibration graph over several orders of magnitude may be problematic, as may the use of a quadratic regression to cope with possible curvatures. Instrument software that allows reprocessing of the calibration by selecting standards around the expected analyte concentration is convenient for optimizing the calibration procedure.

  15. Uncertainty of pesticide residue concentration determined from ordinary and weighted linear regression curve.

    Science.gov (United States)

    Yolci Omeroglu, Perihan; Ambrus, Árpad; Boyacioglu, Dilek

    2018-03-28

    Determination of pesticide residues is based on calibration curves constructed for each batch of analysis. Calibration standard solutions are prepared from a known amount of reference material at different concentration levels covering the concentration range of the analyte in the analysed samples. In the scope of this study, the applicability of both ordinary linear and weighted linear regression (OLR and WLR) to pesticide residue analysis was investigated. We used 782 multipoint calibration curves obtained for 72 different analytical batches with high-pressure liquid chromatography equipped with an ultraviolet detector, and gas chromatography with electron capture, nitrogen-phosphorus or mass spectrometric detectors. Quality criteria of the linear curves, including the regression coefficient, the standard deviation of relative residuals and the deviation of back-calculated concentrations, were calculated for both the WLR and OLR methods. Moreover, the relative uncertainty of the predicted analyte concentration was estimated for both methods. It was concluded that calibration curves based on WLR comply with all the quality criteria set by international guidelines, in contrast to those calculated with OLR; that is, the data fit well with WLR for pesticide residue analysis. It was estimated that, regardless of the actual concentration range of the calibration, the relative uncertainty at the lowest calibrated level ranged between 0.3% and 113.7% for OLR and between 0.2% and 22.1% for WLR. At or above 1/3 of the calibrated range, the uncertainty of the calibration curve ranged between 0.1% and 16.3% for OLR and between 0% and 12.2% for WLR, and therefore the two methods gave comparable results.
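
    The OLR/WLR contrast can be reproduced in miniature with numpy (synthetic calibration data with roughly 3% proportional noise; the levels and response function are invented). Averaging the back-calculated error at the lowest calibrated level over many simulated batches shows why 1/x² weighting helps when noise is proportional to concentration.

```python
import numpy as np

rng = np.random.default_rng(5)
levels = np.repeat([0.01, 0.02, 0.05, 0.1, 0.2, 0.5], 3)   # mg/kg, triplicates
A = np.column_stack([np.ones_like(levels), levels])
errs = {"OLR": [], "WLR": []}
for _ in range(200):                                        # 200 simulated batches
    # detector response with ~3 % *relative* (heteroscedastic) noise
    resp = (1000.0 * levels + 2.0) * (1 + 0.03 * rng.normal(size=levels.size))
    for name, w in (("OLR", np.ones_like(levels)), ("WLR", 1.0 / levels**2)):
        beta = np.linalg.solve((A.T * w) @ A, (A.T * w) @ resp)   # (weighted) LS
        # back-calculate the lowest level from its noise-free response
        xhat = ((1000.0 * 0.01 + 2.0) - beta[0]) / beta[1]
        errs[name].append(abs(xhat - 0.01) / 0.01)
mean_rel_err = {k: float(np.mean(v)) for k, v in errs.items()}
```

    Unweighted fitting lets the high-concentration points, which carry the largest absolute noise, dominate the fit, so the relative error at the bottom of the range is much larger than with 1/x² weighting.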

  16. A multi-scale relevance vector regression approach for daily urban water demand forecasting

    Science.gov (United States)

    Bai, Yun; Wang, Pu; Li, Chuan; Xie, Jingjing; Wang, Yin

    2014-09-01

    Water is one of the most important resources for economic and social developments. Daily water demand forecasting is an effective measure for scheduling urban water facilities. This work proposes a multi-scale relevance vector regression (MSRVR) approach to forecast daily urban water demand. The approach uses the stationary wavelet transform to decompose historical time series of daily water supplies into different scales. At each scale, the wavelet coefficients are used to train a machine-learning model using the relevance vector regression (RVR) method. The estimated coefficients of the RVR outputs for all of the scales are employed to reconstruct the forecasting result through the inverse wavelet transform. To better facilitate the MSRVR forecasting, the chaos features of the daily water supply series are analyzed to determine the input variables of the RVR model. In addition, an adaptive chaos particle swarm optimization algorithm is used to find the optimal combination of the RVR model parameters. The MSRVR approach is evaluated using real data collected from two waterworks and is compared with recently reported methods. The results show that the proposed MSRVR method can forecast daily urban water demand much more precisely in terms of the normalized root-mean-square error, correlation coefficient, and mean absolute percentage error criteria.
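
    A minimal numpy sketch of the multi-scale pipeline (with a Haar "à trous" transform in place of the paper's stationary wavelet transform, and ordinary least-squares autoregression in place of relevance vector regression; the demand series is synthetic): decompose, forecast each scale separately, then sum the per-scale forecasts.

```python
import numpy as np

def haar_swt(x, level):
    """Redundant (same-length) Haar decomposition: at each level the series
    splits into a smoother approximation and a detail, with the lag doubling
    per level. Exact reconstruction: x == approx + sum(details)."""
    details, approx = [], x.astype(float)
    for j in range(level):
        shifted = np.roll(approx, 2 ** j)
        details.append((approx - shifted) / 2.0)
        approx = (approx + shifted) / 2.0
    return approx, details

def ar_forecast(series, order=7):
    # one-step-ahead least-squares AR forecast for a single scale
    X = np.array([series[t - order:t] for t in range(order, len(series))])
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, series[order:], rcond=None)
    return float(np.concatenate([[1.0], series[-order:]]) @ coef)

rng = np.random.default_rng(9)
t = np.arange(400)                                    # 400 days of demand
demand = 100 + 0.02 * t + 8 * np.sin(2 * np.pi * t / 7) + rng.normal(size=400)
approx, details = haar_swt(demand, level=3)
# forecast each scale separately, then sum to obtain the demand forecast
forecast = ar_forecast(approx) + sum(ar_forecast(d) for d in details)
```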

  17. SCIAMACHY Level 1 data: calibration concept and in-flight calibration

    Science.gov (United States)

    Lichtenberg, G.; Kleipool, Q.; Krijger, J. M.; van Soest, G.; van Hees, R.; Tilstra, L. G.; Acarreta, J. R.; Aben, I.; Ahlers, B.; Bovensmann, H.; Chance, K.; Gloudemans, A. M. S.; Hoogeveen, R. W. M.; Jongma, R. T. N.; Noël, S.; Piters, A.; Schrijver, H.; Schrijvers, C.; Sioris, C. E.; Skupin, J.; Slijkhuis, S.; Stammes, P.; Wuttke, M.

    2006-11-01

    The calibration of SCIAMACHY has been thoroughly checked since the instrument was launched on board ENVISAT in February 2002. While SCIAMACHY's functional performance has been excellent since launch, a number of technical difficulties have appeared that required adjustments to the calibration. The problems can be separated into three types: (1) those caused by the instrument and/or platform environment, among them the high water content in the satellite structure and/or MLI layer. This results in the deposition of ice on the detectors in channels 7 and 8, which seriously affects the retrievals in the IR, mostly because of the continuous change of the slit function caused by scattering of the light through the ice layer. Additionally, a light leak in channel 7 severely hampers any retrieval from this channel. (2) Problems due to errors in the on-ground calibration and/or data processing, affecting for example the radiometric calibration; a new approach based on a mixture of on-ground and in-flight data is briefly described here. (3) Problems caused by principal limitations of the calibration concept, e.g. the possible appearance of spectral structures after the polarisation correction due to unavoidable errors in the determination of atmospheric polarisation. In this paper we give a complete overview of the calibration and the problems that still have to be solved. We also give an indication of the effect of calibration problems on retrievals where possible. Since the operational processing chain is currently being updated and no newly processed data are available at this point in time, for some calibration issues only a rough estimate of the effect on Level 2 products can be given. However, it is the intention of this paper to serve as a future reference for detailed studies into specific calibration issues.

  18. Understanding poisson regression.

    Science.gov (United States)

    Hayat, Matthew J; Higgins, Melinda

    2014-04-01

    Nurse investigators often collect study data in the form of counts. Traditional methods of data analysis have historically approached analysis of count data either as if the count data were continuous and normally distributed or with dichotomization of the counts into the categories of occurred or did not occur. These outdated methods for analyzing count data have been replaced with more appropriate statistical methods that make use of the Poisson probability distribution, which is useful for analyzing count data. The purpose of this article is to provide an overview of the Poisson distribution and its use in Poisson regression. Assumption violations for the standard Poisson regression model are addressed with alternative approaches, including addition of an overdispersion parameter or negative binomial regression. An illustrative example is presented with an application from the ENSPIRE study, and regression modeling of comorbidity data is included for illustrative purposes. Copyright 2014, SLACK Incorporated.
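
The standard Poisson regression model described above can be fitted with a few lines of iteratively reweighted least squares; the synthetic count data and coefficients below are invented for illustration.

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Fit a Poisson regression (log link) by iteratively reweighted
    least squares; X should contain an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)          # current fitted means
        z = X @ beta + (y - mu) / mu   # working response
        WX = X * mu[:, None]           # Poisson working weights are mu
        beta = np.linalg.solve(X.T @ WX, WX.T @ z)
    return beta

# Synthetic count data with a known log-linear rate structure.
rng = np.random.default_rng(1)
n = 2000
x = rng.uniform(0, 1, n)
X = np.column_stack([np.ones(n), x])
y = rng.poisson(np.exp(0.5 + 1.2 * x))

beta = poisson_irls(X, y)
print(beta)  # roughly [0.5, 1.2]
```

An overdispersion check (comparing the Pearson chi-square to its degrees of freedom) would decide whether a quasi-Poisson or negative binomial model is needed instead.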

  19. AucPR: an AUC-based approach using penalized regression for disease prediction with high-dimensional omics data.

    Science.gov (United States)

    Yu, Wenbao; Park, Taesung

    2014-01-01

    It is common to get an optimal combination of markers for disease classification and prediction when multiple markers are available. Many approaches based on the area under the receiver operating characteristic curve (AUC) have been proposed. Existing works based on AUC in a high-dimensional context depend mainly on a non-parametric, smooth approximation of AUC, with no work using a parametric AUC-based approach, for high-dimensional data. We propose an AUC-based approach using penalized regression (AucPR), which is a parametric method used for obtaining a linear combination for maximizing the AUC. To obtain the AUC maximizer in a high-dimensional context, we transform a classical parametric AUC maximizer, which is used in a low-dimensional context, into a regression framework and thus, apply the penalization regression approach directly. Two kinds of penalization, lasso and elastic net, are considered. The parametric approach can avoid some of the difficulties of a conventional non-parametric AUC-based approach, such as the lack of an appropriate concave objective function and a prudent choice of the smoothing parameter. We apply the proposed AucPR for gene selection and classification using four real microarray and synthetic data. Through numerical studies, AucPR is shown to perform better than the penalized logistic regression and the nonparametric AUC-based method, in the sense of AUC and sensitivity for a given specificity, particularly when there are many correlated genes. We propose a powerful parametric and easily-implementable linear classifier AucPR, for gene selection and disease prediction for high-dimensional data. AucPR is recommended for its good prediction performance. Beside gene expression microarray data, AucPR can be applied to other types of high-dimensional omics data, such as miRNA and protein data.
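
The pairwise-difference reformulation at the heart of this kind of parametric AUC maximizer can be sketched as follows; for simplicity the sketch uses a ridge penalty (which has a closed form) rather than the paper's lasso or elastic net, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
p = 10
cases = rng.normal(0.7, 1.0, (60, p))     # "diseased" samples, shifted mean
controls = rng.normal(0.0, 1.0, (80, p))  # "healthy" samples

# The parametric AUC maximizer can be written as a regression on all
# case-minus-control pairwise differences with target 1.
D = (cases[:, None, :] - controls[None, :, :]).reshape(-1, p)
lam = 1.0  # penalty strength, chosen arbitrarily for the sketch
beta = np.linalg.solve(D.T @ D + lam * np.eye(p), D.T @ np.ones(len(D)))

# Empirical AUC of the fitted linear marker: the fraction of case/control
# pairs that the score x @ beta ranks correctly.
auc = np.mean(D @ beta > 0)
print(auc)
```

With a lasso or elastic-net penalty in place of the ridge term, the same regression framework additionally performs gene selection.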

  20. Comparison of global optimization approaches for robust calibration of hydrologic model parameters

    Science.gov (United States)

    Jung, I. W.

    2015-12-01

    Robustness of the calibrated parameters of hydrologic models is necessary to provide a reliable prediction of future watershed behavior under varying climate conditions. This study investigated calibration performance according to the length of the calibration period, the objective function, the hydrologic model structure and the optimization method. To do this, the combination of three global optimization methods (i.e. SCE-UA, Micro-GA, and DREAM) and four hydrologic models (i.e. SAC-SMA, GR4J, HBV, and PRMS) was tested with different calibration periods and objective functions. Our results showed that the three global optimization methods provided similar calibration performance across different calibration periods, objective functions, and hydrologic models. However, using the index of agreement, normalized root-mean-square error, or Nash-Sutcliffe efficiency as the objective function showed better performance than using the correlation coefficient or percent bias. Calibration performance for calibration periods ranging from one to seven years was hard to generalize because the four hydrologic models have different levels of complexity and different years carry different information content in the hydrological observations. Acknowledgements: This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
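
The objective functions compared above are simple to compute; a minimal sketch with invented observed and simulated flows:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means no better
    than predicting the observed mean."""
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

def pbias(obs, sim):
    """Percent bias: positive values indicate underestimation under the
    common hydrologic sign convention."""
    return 100 * np.sum(obs - sim) / np.sum(obs)

def nrmse(obs, sim):
    """Root-mean-square error normalized by the observed mean."""
    return np.sqrt(np.mean((obs - sim) ** 2)) / np.mean(obs)

# Invented daily streamflow values (m^3/s) for illustration.
obs = np.array([3.0, 5.0, 9.0, 12.0, 7.0, 4.0])
sim = np.array([2.8, 5.5, 8.0, 12.5, 7.2, 3.9])
print(nse(obs, sim), pbias(obs, sim), nrmse(obs, sim))
```

Because these metrics weight errors differently (NSE emphasizes high flows, percent bias the overall water balance), an optimizer driven by one of them can happily degrade the others, which is what makes the choice of objective function matter.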

  1. Retinal vascular calibres are significantly associated with cardiovascular risk factors

    DEFF Research Database (Denmark)

    von Hanno, T.; Bertelsen, G.; Sjølie, Anne K.

    2014-01-01

    Purpose: To describe the association between retinal vascular calibres and cardiovascular risk factors. Methods: Population-based cross-sectional study including 6353 participants of the Tromsø Eye Study in Norway aged 38-87 years. Retinal arteriolar calibre (central retinal artery equivalent)… Association between retinal vessel calibre and the cardiovascular risk factors was assessed by multivariable linear and logistic regression analyses. Results: Retinal arteriolar calibre was independently associated with age, blood pressure, HbA1c and smoking in women and men, and with HDL cholesterol in men… Cardiovascular risk factors were independently associated with retinal vascular calibre, with a stronger effect of HDL cholesterol and BMI in men than in women. Blood pressure and smoking contributed most to the explained variance.

  2. Use of empirical likelihood to calibrate auxiliary information in partly linear monotone regression models.

    Science.gov (United States)

    Chen, Baojiang; Qin, Jing

    2014-05-10

    In statistical analysis, a regression model is needed if one is interested in finding the relationship between a response variable and covariates. The response may depend on a covariate through some unknown function of that covariate. If one has no knowledge of this functional form but expects it to be monotonically increasing or decreasing, the isotonic regression model is preferable. Estimation of parameters for isotonic regression models is based on the pool-adjacent-violators algorithm (PAVA), in which the monotonicity constraints are built in. With missing data, one often employs the augmented estimating method to improve estimation efficiency by incorporating auxiliary information through a working regression model. However, under the framework of the isotonic regression model, the PAVA does not work, as the monotonicity constraints are violated. In this paper, we develop an empirical likelihood-based method for the isotonic regression model to incorporate the auxiliary information. Because the monotonicity constraints still hold, the PAVA can be used for parameter estimation. Simulation studies demonstrate that the proposed method can yield more efficient estimates, and in some situations the efficiency improvement is substantial. We apply this method to a dementia study. Copyright © 2013 John Wiley & Sons, Ltd.
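
The pool-adjacent-violators algorithm itself is short: scan the sequence left to right and pool any adjacent blocks that violate monotonicity. A minimal sketch:

```python
import numpy as np

def pava(y, w=None):
    """Pool-adjacent-violators algorithm: least-squares fit of a
    non-decreasing sequence to y (optionally weighted)."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    vals, wts, sizes = [], [], []   # blocks of pooled values
    for yi, wi in zip(y, w):
        vals.append(yi)
        wts.append(wi)
        sizes.append(1)
        # Merge blocks while the monotonicity constraint is violated.
        while len(vals) > 1 and vals[-2] > vals[-1]:
            pooled = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / (wts[-2] + wts[-1])
            wts[-2:] = [wts[-2] + wts[-1]]
            sizes[-2:] = [sizes[-2] + sizes[-1]]
            vals[-2:] = [pooled]
    return np.repeat(vals, sizes)

fit = pava([1.0, 3.0, 2.0, 4.0, 3.5, 5.0])
print(fit)  # -> 1.0, 2.5, 2.5, 3.75, 3.75, 5.0
```

The violating pairs (3, 2) and (4, 3.5) are each replaced by their average, which is exactly the pooling the paper's empirical-likelihood weights must preserve.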

  3. Comparative analysis of neural network and regression based condition monitoring approaches for wind turbine fault detection

    DEFF Research Database (Denmark)

    Schlechtingen, Meik; Santos, Ilmar

    2011-01-01

    This paper presents the research results of a comparison of three different model based approaches for wind turbine fault detection in online SCADA data, by applying developed models to five real measured faults and anomalies. The regression based model as the simplest approach to build a normal...

  4. Iterative Strain-Gage Balance Calibration Data Analysis for Extended Independent Variable Sets

    Science.gov (United States)

    Ulbrich, Norbert Manfred

    2011-01-01

    A new method was developed that makes it possible to use an extended set of independent calibration variables for an iterative analysis of wind tunnel strain gage balance calibration data. The new method permits the application of the iterative analysis method whenever the total number of balance loads and other independent calibration variables is greater than the total number of measured strain gage outputs. Iteration equations used by the iterative analysis method have the limitation that the number of independent and dependent variables must match. The new method circumvents this limitation. It simply adds a missing dependent variable to the original data set by using an additional independent variable also as an additional dependent variable. Then, the desired solution of the regression analysis problem can be obtained that fits each gage output as a function of both the original and additional independent calibration variables. The final regression coefficients can be converted to data reduction matrix coefficients because the missing dependent variables were added to the data set without changing the regression analysis result for each gage output. Therefore, the new method still supports the application of the two load iteration equation choices that the iterative method traditionally uses for the prediction of balance loads during a wind tunnel test. An example is discussed in the paper that illustrates the application of the new method to a realistic simulation of a temperature-dependent calibration data set of a six-component balance.

  5. Comparison of beta-binomial regression model approaches to analyze health-related quality of life data.

    Science.gov (United States)

    Najera-Zuloaga, Josu; Lee, Dae-Jin; Arostegui, Inmaculada

    2017-01-01

    Health-related quality of life has become an increasingly important indicator of health status in clinical trials and epidemiological research. Moreover, the study of the relationship of health-related quality of life with patient and disease characteristics has become one of the primary aims of many health-related quality of life studies. Health-related quality of life scores are usually assumed to be distributed as binomial random variables and are often highly skewed. The use of the beta-binomial distribution in the regression context has been proposed to model such data; however, beta-binomial regression has been performed by means of two different approaches in the literature: (i) the beta-binomial distribution with a logistic link; and (ii) hierarchical generalized linear models. None of the existing literature in the analysis of health-related quality of life survey data has performed a comparison of both approaches in terms of adequacy and regression parameter interpretation. This paper is motivated by the analysis of a real data application of health-related quality of life outcomes in patients with Chronic Obstructive Pulmonary Disease, where the use of the two approaches yields contradictory results in terms of covariate effect significance and consequently the interpretation of the most relevant factors in health-related quality of life. We present an explanation of the results in both methodologies through a simulation study and address the need to apply the proper approach in the analysis of health-related quality of life survey data for practitioners, providing an R package.

  6. New approach for calibration the efficiency of HPGe detectors

    International Nuclear Information System (INIS)

    Alnour, I.A.; Wagiran, H.; Suhaimi Hamzah; Siong, W.B.; Mohd Suhaimi Elias

    2013-01-01

    Full-text: This work evaluates the efficiency calibration of HPGe detectors, Canberra GC3018 with Genie 2000 software and Ortec GEM25-76-XLB-C with Gamma Vision software, available at the neutron activation analysis laboratory of the Malaysian Nuclear Agency (NM). The efficiency calibration curve was constructed from measurements of an IAEA standard gamma point-source set composed of 241Am, 57Co, 133Ba, 152Eu, 137Cs and 60Co. The efficiency calibrations were performed for three different geometries: 5, 10 and 15 cm distances from the detector end cap. The polynomial parameter functions were fitted with a computer program, MATLAB, in order to find an accurate fit to the experimental data points. The efficiency equation was established from the fitted parameters, which allows the efficiency to be evaluated at a particular energy of interest. The study shows significant deviations in the efficiency depending on the source-detector distance and photon energy. (author)
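
One common way to build such a curve is to fit ln(efficiency) as a polynomial in ln(energy); the energy/efficiency pairs below are invented for the sketch, not the paper's measurements.

```python
import numpy as np

# Illustrative energy (keV) / efficiency pairs of the kind obtained from a
# multi-nuclide point-source set; the numbers here are invented.
energy = np.array([59.5, 122.1, 356.0, 661.7, 1173.2, 1332.5])
eff = np.array([0.020, 0.035, 0.018, 0.011, 0.0070, 0.0063])

# A common HPGe parameterization: ln(efficiency) as a polynomial in ln(E).
coeffs = np.polyfit(np.log(energy), np.log(eff), 3)

def efficiency(E_keV):
    """Evaluate the fitted efficiency curve at an energy of interest."""
    return np.exp(np.polyval(coeffs, np.log(E_keV)))

print(efficiency(661.7))
```

A separate set of coefficients would be fitted for each source-detector geometry, since the study found the efficiency to depend strongly on distance.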

  7. Sample size calculation to externally validate scoring systems based on logistic regression models.

    Directory of Open Access Journals (Sweden)

    Antonio Palazón-Bru

    Full Text Available A sample size containing at least 100 events and 100 non-events has been suggested for validating a predictive model, regardless of the model being validated, even though certain factors (discrimination, parameterization and incidence) can influence the calibration of the predictive model. Scoring systems based on binary logistic regression models are a specific type of predictive model. The aim of this study was to develop an algorithm to determine the sample size for validating a scoring system based on a binary logistic regression model, and to apply it to a case study. The algorithm was based on bootstrap samples in which the area under the ROC curve, the observed event probabilities through smooth curves, and a measure of the lack of calibration (the estimated calibration index) were calculated. To illustrate its use for interested researchers, the algorithm was applied to a scoring system, based on a binary logistic regression model, for determining mortality in intensive care units. In the case study provided, the algorithm yielded a sample size with 69 events, which is lower than the value suggested in the literature. An algorithm is thus provided for finding the appropriate sample size to validate scoring systems based on binary logistic regression models. This could be applied to determine the sample size in other similar cases.
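
The bootstrap machinery behind such an algorithm can be sketched: for a candidate validation sample size, resample the cohort and record how stable the AUC estimate would be at that size. The cohort, scores, and replicate count below are hypothetical, and the paper's estimated calibration index is replaced here by the simpler AUC spread.

```python
import numpy as np

def auc(scores, y):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n1 = y.sum()
    n0 = len(y) - n1
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

rng = np.random.default_rng(3)
# Hypothetical validation cohort: a risk score and a binary outcome with
# roughly 20% events.
n = 1000
y = rng.random(n) < 0.2
scores = rng.normal(y.astype(float), 1.0)

# For a candidate validation sample size, bootstrap the cohort and record
# the AUC of each resample to gauge its stability at that size.
size = 300
aucs = []
for _ in range(200):
    idx = rng.integers(0, n, size)
    aucs.append(auc(scores[idx], y[idx]))
print(np.mean(aucs), np.std(aucs))
```

Repeating this over a grid of candidate sizes, and adding a calibration measure alongside the AUC, gives the shape of the full algorithm.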

  8. Quantile Regression Methods

    DEFF Research Database (Denmark)

    Fitzenberger, Bernd; Wilke, Ralf Andreas

    2015-01-01

    Quantile regression is emerging as a popular statistical approach, which complements the estimation of conditional mean models. While the latter only focuses on one aspect of the conditional distribution of the dependent variable, the mean, quantile regression provides more detailed insights by modeling conditional quantiles. Quantile regression can therefore detect whether the partial effect of a regressor on the conditional quantiles is the same for all quantiles or differs across quantiles. Quantile regression can provide evidence for a statistical relationship between two variables even if the mean regression model does not. We provide a short informal introduction into the principle of quantile regression which includes an illustrative application from empirical labor market research. This is followed by briefly sketching the underlying statistical model for linear quantile regression based…
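
A minimal numerical sketch of linear quantile regression, fitting the pinball (check) loss by subgradient descent on synthetic heteroscedastic data, shows how quantile slopes can differ across quantiles; the data, step size and iteration count are invented:

```python
import numpy as np

def fit_quantile(X, y, tau, lr=0.05, n_iter=5000):
    """Linear quantile regression by subgradient descent on the pinball
    (check) loss -- a didactic sketch, not a production solver."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        r = y - X @ beta
        # Subgradient of the pinball loss: tau on positive residuals,
        # tau - 1 on negative ones.
        grad = -X.T @ np.where(r > 0, tau, tau - 1.0) / len(y)
        beta -= lr * grad
    return beta

rng = np.random.default_rng(4)
n = 3000
x = rng.uniform(0, 2, n)
X = np.column_stack([np.ones(n), x])
# Heteroscedastic data: the spread grows with x, so different quantiles
# have different slopes -- exactly what a mean regression cannot reveal.
y = 1.0 + 2.0 * x + (0.5 + x) * rng.standard_normal(n)

b10 = fit_quantile(X, y, 0.1)
b90 = fit_quantile(X, y, 0.9)
print(b10, b90)  # the 0.9-quantile slope exceeds the 0.1-quantile slope
```

Here the mean slope is the same (2.0) at every x, yet the 0.1- and 0.9-quantile slopes diverge, which is the kind of effect quantile regression is designed to detect.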

  9. Independent variable complexity for regional regression of the flow duration curve in ungauged basins

    Science.gov (United States)

    Fouad, Geoffrey; Skupin, André; Hope, Allen

    2016-04-01

    The flow duration curve (FDC) is one of the most widely used tools to quantify streamflow. Its percentile flows are often required for water resource applications, but these values must be predicted for ungauged basins with insufficient or no streamflow data. Regional regression is a commonly used approach for predicting percentile flows that involves identifying hydrologic regions and calibrating regression models to each region. The independent variables used to describe the physiographic and climatic setting of the basins are a critical component of regional regression, yet few studies have investigated their effect on resulting predictions. In this study, the complexity of the independent variables needed for regional regression is investigated. Different levels of variable complexity are applied for a regional regression consisting of 918 basins in the US. Both the hydrologic regions and regression models are determined according to the different sets of variables, and the accuracy of resulting predictions is assessed. The different sets of variables include (1) a simple set of three variables strongly tied to the FDC (mean annual precipitation, potential evapotranspiration, and baseflow index), (2) a traditional set of variables describing the average physiographic and climatic conditions of the basins, and (3) a more complex set of variables extending the traditional variables to include statistics describing the distribution of physiographic data and temporal components of climatic data. The latter set of variables is not typically used in regional regression, and is evaluated for its potential to predict percentile flows. The simplest set of only three variables performed similarly to the other more complex sets of variables. Traditional variables used to describe climate, topography, and soil offered little more to the predictions, and the experimental set of variables describing the distribution of basin data in more detail did not improve predictions.

  10. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras

    Directory of Open Access Journals (Sweden)

    Yajie Liao

    2017-06-01

    Full Text Available Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods have been proposed to solve the above problem with good performance in the last few decades. However, few methods are targeted at joint calibration of multi-sensors (more than four devices), which normally is a practical issue in the real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested in a practical platform, and experiment results show that the proposed joint calibration method can achieve a satisfactory performance in a project real-time system and its accuracy is higher than the manufacturer’s calibration.

  11. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras.

    Science.gov (United States)

    Liao, Yajie; Sun, Ying; Li, Gongfa; Kong, Jianyi; Jiang, Guozhang; Jiang, Du; Cai, Haibin; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-06-24

    Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods have been proposed to solve the above problem with good performance in the last few decades. However, few methods are targeted at joint calibration of multi-sensors (more than four devices), which normally is a practical issue in the real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested in a practical platform, and experiment results show that the proposed joint calibration method can achieve a satisfactory performance in a project real-time system and its accuracy is higher than the manufacturer's calibration.

  12. Model independent approach to the single photoelectron calibration of photomultiplier tubes

    Energy Technology Data Exchange (ETDEWEB)

    Saldanha, R.; Grandi, L.; Guardincerri, Y.; Wester, T.

    2017-08-01

    The accurate calibration of photomultiplier tubes is critical in a wide variety of applications in which it is necessary to know the absolute number of detected photons or precisely determine the resolution of the signal. Conventional calibration methods rely on fitting the photomultiplier response to a low intensity light source with analytical approximations to the single photoelectron distribution, often leading to biased estimates due to the inability to accurately model the full distribution, especially at low charge values. In this paper we present a simple statistical method to extract the relevant single photoelectron calibration parameters without making any assumptions about the underlying single photoelectron distribution. We illustrate the use of this method through the calibration of a Hamamatsu R11410 photomultiplier tube and study the accuracy and precision of the method using Monte Carlo simulations. The method is found to have significantly reduced bias compared to conventional methods and works under a wide range of light intensities, making it suitable for simultaneously calibrating large arrays of photomultiplier tubes.
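
The statistical flavor of such a calibration can be illustrated with a simpler, likewise model-independent, estimator (not the exact estimator of this paper): the occupancy follows from the fraction of pedestal-only events via Poisson statistics, and the mean single-photoelectron charge then follows from the overall mean. All numbers below are invented, and the hard pedestal threshold introduces a small bias that the paper's moment-based method avoids.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
lam = 0.4                        # true mean illumination (PE per trigger)
spe_mean, spe_sd = 1.6e6, 0.4e6  # true single-PE charge (electrons), assumed
ped_sd = 2.0e5                   # pedestal (electronics noise) width

# Simulate charges: a Poisson number of photoelectrons, each smeared by the
# single-PE spread, plus Gaussian pedestal noise.
npe = rng.poisson(lam, n)
charge = rng.normal(0.0, ped_sd, n) + np.where(
    npe > 0,
    rng.normal(npe * spe_mean, np.sqrt(np.maximum(npe, 1)) * spe_sd),
    0.0,
)

# Occupancy from the zero-PE fraction via Poisson statistics, then the mean
# SPE charge from the overall mean. No parametric form is assumed for the
# single-PE distribution itself.
threshold = 4 * ped_sd
p0 = np.mean(charge < threshold)
lam_hat = -np.log(p0)
spe_mean_hat = charge.mean() / lam_hat
print(lam_hat, spe_mean_hat)
```

Note that no Gaussian or other shape is fitted to the single-PE peak; only counting statistics and the sample mean are used, which is the sense in which such estimators are model independent.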

  13. Hydrologic Model Development and Calibration: Contrasting a Single- and Multi-Objective Approach for Comparing Model Performance

    Science.gov (United States)

    Asadzadeh, M.; Maclean, A.; Tolson, B. A.; Burn, D. H.

    2009-05-01

    Hydrologic model calibration aims to find a set of parameters that adequately simulates observations of watershed behavior, such as streamflow, or a state variable, such as snow water equivalent (SWE). There are different metrics for evaluating calibration effectiveness that involve quantifying prediction errors, such as the Nash-Sutcliffe (NS) coefficient and bias evaluated for the entire calibration period, on a seasonal basis, for low flows, or for high flows. Many of these metrics are conflicting such that the set of parameters that maximizes the high flow NS differs from the set of parameters that maximizes the low flow NS. Conflicting objectives are very likely when different calibration objectives are based on different fluxes and/or state variables (e.g., NS based on streamflow versus SWE). One of the most popular ways to balance different metrics is to aggregate them based on their importance and find the set of parameters that optimizes a weighted sum of the efficiency metrics. Comparing alternative hydrologic models (e.g., assessing model improvement when a process or more detail is added to the model) based on the aggregated objective might be misleading since it represents one point on the tradeoff of desired error metrics. To derive a more comprehensive model comparison, we solved a bi-objective calibration problem to estimate the tradeoff between two error metrics for each model. Although this approach is computationally more expensive than the aggregation approach, it results in a better understanding of the effectiveness of selected models at each level of every error metric and therefore provides a better rationale for judging relative model quality. The two alternative models used in this study are two MESH hydrologic models (version 1.2) of the Wolf Creek Research basin that differ in their watershed spatial discretization (a single Grouped Response Unit, GRU, versus multiple GRUs). The MESH model, currently under development by Environment

  14. Cumulative error models for the tank calibration problem

    International Nuclear Information System (INIS)

    Goldman, A.; Anderson, L.G.; Weber, J.

    1983-01-01

    The purpose of a tank calibration equation is to obtain an estimate of the liquid volume that corresponds to a liquid level measurement. Calibration experimental errors occur in both the liquid level and the liquid volume measurements. If one of the errors is relatively small, the calibration equation can be determined from well-known regression and calibration methods. If both variables are assumed to be in error, then for linear cases a prototype model should be considered. Many investigators are not familiar with this model or do not have computing facilities capable of obtaining numerical solutions. This paper discusses and compares three linear models that approximate the prototype model and have the advantage of much simpler computations. Comparisons among the four models and recommendations of suitability are made from simulations and from analyses of six sets of experimental data.
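
One simple closed-form errors-in-variables fit for the linear case is Deming regression; it is offered here only as an illustration of the idea of allowing error in both level and volume, not as one of the paper's four models, and the tank data are invented.

```python
import numpy as np

def deming(x, y, delta=1.0):
    """Deming regression: straight-line fit when both variables carry
    measurement error; delta is the ratio of y- to x-error variances."""
    mx, my = x.mean(), y.mean()
    sxx = np.mean((x - mx) ** 2)
    syy = np.mean((y - my) ** 2)
    sxy = np.mean((x - mx) * (y - my))
    slope = (syy - delta * sxx
             + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)
             ) / (2 * sxy)
    return slope, my - slope * mx

rng = np.random.default_rng(6)
n = 500
level = np.linspace(0, 10, n)        # true liquid levels
volume = 3.0 * level + 2.0           # true volumes (linear tank section)
x = level + rng.normal(0, 1.0, n)    # measured level, with error
y = volume + rng.normal(0, 1.0, n)   # measured volume, with error

slope, intercept = deming(x, y)
print(slope, intercept)  # close to 3.0 and 2.0; ordinary regression of
                         # y on x would underestimate the slope (attenuation)
```

The attenuation of the ordinary least-squares slope is exactly the bias that motivates treating both measurements as error-prone in tank calibration.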

  15. Calibration of the maximum carboxylation velocity (Vcmax) using data mining techniques and ecophysiological data from the Brazilian semiarid region, for use in Dynamic Global Vegetation Models

    Directory of Open Access Journals (Sweden)

    L. F. C. Rezende

    Full Text Available The semiarid region of northeastern Brazil, the Caatinga, is extremely important due to its biodiversity and endemism. Measurements of plant physiology are crucial to the calibration of Dynamic Global Vegetation Models (DGVMs) that are currently used to simulate the responses of vegetation in the face of global change. In field work carried out in an area of preserved Caatinga forest located in Petrolina, Pernambuco, measurements of carbon assimilation (in response to light and CO2) were performed on 11 individuals of Poincianella microphylla, a native species that is abundant in this region. These data were used to calibrate the maximum carboxylation velocity (Vcmax) used in the INLAND model. The calibration techniques used were Multiple Linear Regression (MLR) and data mining techniques such as the Classification And Regression Tree (CART) and K-MEANS. The results were compared to the UNCALIBRATED model. It was found that simulated Gross Primary Productivity (GPP) reached 72% of observed GPP when using the calibrated Vcmax values, whereas the UNCALIBRATED approach accounted for 42% of observed GPP. Thus, this work shows the benefits of calibrating DGVMs using field ecophysiological measurements, especially in areas where field data are scarce or non-existent, such as in the Caatinga.

  16. Calibrated photostimulated luminescence is an effective approach to identify irradiated orange during storage

    Science.gov (United States)

    Jo, Yunhee; Sanyal, Bhaskar; Chung, Namhyeok; Lee, Hyun-Gyu; Park, Yunji; Park, Hae-Jun; Kwon, Joong-Ho

    2015-06-01

    Photostimulated luminescence (PSL) has been employed as a fast screening method for various irradiated foods. In this study the potential of PSL was evaluated for identifying oranges irradiated with gamma rays, electron beams and X-rays (0-2 kGy) and stored under different conditions for 6 weeks. The effects of light conditions (natural light, artificial light, and dark) and storage temperatures (4 and 20 °C) on PSL photon counts (PCs) during post-irradiation periods were studied. Non-irradiated samples always showed negative values of PCs, while irradiated oranges exhibited intermediate results after the first PSL measurements; the irradiated samples nevertheless had much higher PCs. The PCs of all the samples declined as the storage time increased. Calibrated second PSL measurements showed a PSL ratio <10 for the irradiated samples after 3 weeks of irradiation, confirming their irradiation status under all the storage conditions. Calibrated PSL and sample storage in the dark at 4 °C were found to be the most suitable approaches for identifying irradiated oranges during storage.

  17. Estimation of age in forensic medicine using multivariate approach to image analysis

    DEFF Research Database (Denmark)

    Kucheryavskiy, Sergey V.; Belyaev, Ivan; Fominykh, Sergey

    2009-01-01

    …approach based on statistical analysis of the grey-level co-occurrence matrix, fractal analysis, wavelet transformation and the Angle Measure Technique. Projection on latent structures regression was chosen for calibration and prediction. The method has been applied to 70 male and 63 female individuals aged from 21 to 93, and the results were compared with the traditional approach. Some important questions and problems have been raised…

  18. Regression Benchmarking: An Approach to Quality Assurance in Performance

    OpenAIRE

    Bulej, Lubomír

    2005-01-01

    The paper presents a short summary of our work in the area of regression benchmarking and its application to software development. Specifically, we explain the concept of regression benchmarking, the requirements for employing regression testing in a software project, and the methods used for analyzing the vast amounts of data resulting from repeated benchmarking. We present the application of regression benchmarking on a real software project and conclude with a glimpse at the challenges for the future.

  19. Theory of net analyte signal vectors in inverse regression

    DEFF Research Database (Denmark)

    Bro, R.; Andersen, Charlotte Møller

    2003-01-01

    The net analyte signal and the net analyte signal vector are useful measures in building and optimizing multivariate calibration models. In this paper a theory for their use in inverse regression is developed. The theory of net analyte signal was originally derived from classical least squares…

  20. Modelling the return distribution of salmon farming companies : a quantile regression approach

    OpenAIRE

    Jacobsen, Fredrik

    2017-01-01

    The salmon farming industry has gained increased attention from investors, portfolio managers, financial analysts and other stakeholders in recent years. Despite this development, very little is known about the risk and return of salmon farming company stocks, and especially how the relationship between risk and return varies under different market conditions, given the volatile nature of the salmon farming industry. We approach this problem by using quantile regression to examine the relationship between risk and return under different market conditions.

  1. Extrinsic Calibration of Camera and 2D Laser Sensors without Overlap

    Directory of Open Access Journals (Sweden)

    Khalil M. Ahmad Yousef

    2017-10-01

    Full Text Available Extrinsic calibration of a camera and a 2D laser range finder (lidar) is crucial in sensor data fusion applications, for example in the SLAM algorithms used on mobile robot platforms. The fundamental challenge of extrinsic calibration arises when the camera and lidar do not overlap or share the same field of view. In this paper we propose a novel and flexible approach for the extrinsic calibration of a camera-lidar system without overlap, which can be used for robotic platform self-calibration. The approach is based on the robot–world hand–eye calibration (RWHE) problem, which is proven to have efficient and accurate solutions. First, the system was mapped to the RWHE calibration problem modeled as the linear relationship AX = ZB, where X and Z are unknown calibration matrices. Then, we computed the transformation matrix B, which was the main challenge in the above mapping. The computation is based on reasonable assumptions about the geometric structure of the calibration environment. The reliability and accuracy of the proposed approach are compared to a state-of-the-art method in extrinsic 2D lidar-to-camera calibration. Experimental results from real datasets indicate that the proposed approach provides better results, with L2-norm translational and rotational deviations of 314 mm and 0.12°, respectively.

  2. Extrinsic Calibration of Camera and 2D Laser Sensors without Overlap.

    Science.gov (United States)

    Ahmad Yousef, Khalil M; Mohd, Bassam J; Al-Widyan, Khalid; Hayajneh, Thaier

    2017-10-14

    Extrinsic calibration of a camera and a 2D laser range finder (lidar) is crucial in sensor data fusion applications, for example in the SLAM algorithms used on mobile robot platforms. The fundamental challenge of extrinsic calibration arises when the camera and lidar do not overlap or share the same field of view. In this paper we propose a novel and flexible approach for the extrinsic calibration of a camera-lidar system without overlap, which can be used for robotic platform self-calibration. The approach is based on the robot-world hand-eye calibration (RWHE) problem, which is proven to have efficient and accurate solutions. First, the system was mapped to the RWHE calibration problem modeled as the linear relationship AX = ZB, where X and Z are unknown calibration matrices. Then, we computed the transformation matrix B, which was the main challenge in the above mapping. The computation is based on reasonable assumptions about the geometric structure of the calibration environment. The reliability and accuracy of the proposed approach are compared to a state-of-the-art method in extrinsic 2D lidar-to-camera calibration. Experimental results from real datasets indicate that the proposed approach provides better results, with L2-norm translational and rotational deviations of 314 mm and 0.12°, respectively.
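The AX = ZB relationship in the two records above can be solved linearly once several (A, B) pairs are available: with column-major vectorization, vec(AX) = (I ⊗ A) vec(X) and vec(ZB) = (Bᵀ ⊗ I) vec(Z), so the stacked system has the unknown pair in its null space. A hedged numpy sketch on synthetic, noise-free transforms (not the authors' implementation, which must also estimate B itself from the scene geometry):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_transform(rng):
    # random 4x4 homogeneous transform: rotation from QR, random translation
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1.0
    T = np.eye(4)
    T[:3, :3] = Q
    T[:3, 3] = rng.normal(size=3)
    return T

# ground-truth unknowns (hypothetical camera/lidar and robot/world transforms)
X_true = random_transform(rng)
Z_true = random_transform(rng)

# simulated noise-free measurement pairs satisfying A @ X = Z @ B
A_list = [random_transform(rng) for _ in range(6)]
B_list = [np.linalg.inv(Z_true) @ A @ X_true for A in A_list]

# stack (I4 kron A) vec(X) - (B^T kron I4) vec(Z) = 0 for all pairs
I4 = np.eye(4)
M = np.vstack([np.hstack([np.kron(I4, A), -np.kron(B.T, I4)])
               for A, B in zip(A_list, B_list)])

# the solution pair spans the (one-dimensional) null space of M
_, _, Vt = np.linalg.svd(M)
v = Vt[-1]
X_est = v[:16].reshape(4, 4, order="F")   # undo column-major vec
Z_est = v[16:].reshape(4, 4, order="F")

# remove the global scale ambiguity using the homogeneous entry X[3, 3] = 1
X_est /= X_est[3, 3]
Z_est /= Z_est[3, 3]
```

With noisy real data one would instead take a least-squares null-space estimate and re-project the rotation blocks onto SO(3).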

  3. In situ neutron moisture meter calibration in lateritic soils

    International Nuclear Information System (INIS)

    Ruprecht, J.K.; Schofield, N.J.

    1990-01-01

    An in situ calibration procedure for complex lateritic soils of the jarrah forest of Western Australia is described. The calibration is based on non-destructive sampling of each access tube and on a regression of change in water content on change in neutron count ratio at 'wet' and 'dry' times of the year. Calibration equations with adequate precision were produced. However, there were high residual errors in the calibration equations which were due to a number of factors including soil water variability, the presence of a duricrust layer, soil sampling of gravelly soils and the variability of the cement slurry annulus surrounding each access tube. The calibration equations derived did not compare well with those from other studies in south-west Western Australia, but there was reasonable agreement with the general equations obtained by the Institute of Hydrology, U.K. 15 refs., 6 figs., 2 tabs
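The calibration regression described, change in water content against change in neutron count ratio between 'wet' and 'dry' sampling times, reduces to a simple linear fit. A sketch with hypothetical paired readings:

```python
import numpy as np

# hypothetical paired readings at 'dry' and 'wet' times for five access tubes
count_ratio_dry = np.array([0.42, 0.55, 0.48, 0.61, 0.52])
count_ratio_wet = np.array([0.71, 0.80, 0.74, 0.88, 0.79])
theta_dry = np.array([0.11, 0.15, 0.13, 0.17, 0.14])   # volumetric water content, m3/m3
theta_wet = np.array([0.24, 0.27, 0.25, 0.30, 0.26])

# regress change in water content on change in neutron count ratio
d_ratio = count_ratio_wet - count_ratio_dry
d_theta = theta_wet - theta_dry
slope, intercept = np.polyfit(d_ratio, d_theta, 1)

def delta_theta(delta_count_ratio):
    """Predict change in volumetric water content from a change in count ratio."""
    return slope * delta_count_ratio + intercept
```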

  4. Matrix Factorisation-based Calibration For Air Quality Crowd-sensing

    Science.gov (United States)

    Dorffer, Clement; Puigt, Matthieu; Delmaire, Gilles; Roussel, Gilles; Rouvoy, Romain; Sagnier, Isabelle

    2017-04-01

    sensors share some information using the APISENSE® crowdsensing platform and we aim to calibrate the sensor responses from the data directly. For that purpose, we express the sensor readings as a low-rank matrix with missing entries and we revisit self-calibration as a Matrix Factorization (MF) problem. In our proposed framework, one factor matrix contains the calibration parameters while the other is structured by the calibration model and contains some values of the sensed phenomenon. The MF calibration approach also uses the precise measurements from ATMO—the French public institution—to drive the calibration of the mobile sensors. MF calibration can be improved using, e.g., the mean calibration parameters provided by the sensor manufacturers, or using sparse priors or a model of the physical phenomenon. All our approaches are shown to provide a better calibration accuracy than matrix-completion-based and robust-regression-based methods, even in difficult scenarios involving a lot of missing data and/or very few accurate references. When combined with a dictionary of air quality patterns, our experiments suggest that MF is not only able to perform sensor network calibration but also to provide detailed maps of air quality.
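The "low-rank matrix with missing entries" formulation can be sketched as alternating least squares on the observed entries only. This toy version on synthetic low-rank data omits the calibration-model structure on the factors and the reference-station constraints described above:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic rank-2 "sensor readings" matrix with ~30% missing entries
m, n, r = 20, 15, 2
X = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))
mask = rng.random(size=(m, n)) > 0.3       # True where an entry was observed

# alternating least squares, fitting only the observed entries
W = rng.normal(size=(m, r))
H = rng.normal(size=(r, n))
for _ in range(200):
    for i in range(m):                      # update each row factor
        obs = mask[i]
        W[i] = np.linalg.lstsq(H[:, obs].T, X[i, obs], rcond=None)[0]
    for j in range(n):                      # update each column factor
        obs = mask[:, j]
        H[:, j] = np.linalg.lstsq(W[obs], X[obs, j], rcond=None)[0]

# reconstruction error on the entries that were never observed
X_hat = W @ H
rmse_missing = np.sqrt(np.mean((X_hat[~mask] - X[~mask]) ** 2))
```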

  5. Calibration of groundwater vulnerability mapping using the generalized reduced gradient method.

    Science.gov (United States)

    Elçi, Alper

    2017-12-01

    Groundwater vulnerability assessment studies are essential in water resources management. Overlay-and-index methods such as DRASTIC are widely used for mapping groundwater vulnerability; however, these methods suffer mainly from a subjective selection of model parameters. The objective of this study is to introduce a calibration procedure that results in a more accurate assessment of groundwater vulnerability. The improvement of the assessment is formulated as a parameter optimization problem using an objective function based on the correlation between actual groundwater contamination and vulnerability index values. The non-linear optimization problem is solved with the generalized-reduced-gradient (GRG) method, a numerical optimization algorithm. To demonstrate the applicability of the procedure, a vulnerability map for the Tahtali stream basin is calibrated using nitrate concentration data. The calibration procedure is easy to implement and aims to maximize the correlation between observed pollutant concentrations and groundwater vulnerability index values. The influence of each vulnerability parameter on the calculated vulnerability index is assessed by performing a single-parameter sensitivity analysis. Results of the sensitivity analysis show that all factors are effective on the final vulnerability index. Calibration of the vulnerability map improves the correlation between index values and measured nitrate concentrations by 19%; the regression coefficient increases from 0.280 to 0.485. It is evident that the spatial distribution and the proportions of vulnerability class areas are significantly altered by the calibration process. Although the calibration method is demonstrated on the DRASTIC model, the approach is not specific to a certain model and can easily be applied to other overlay-and-index methods. Copyright © 2017 Elsevier B.V. All rights reserved.
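The calibration described, tuning index weights to maximize correlation with observed nitrate, can be sketched as a bounded non-linear optimization. In this hedged sketch, SLSQP from scipy stands in for the generalized reduced gradient solver, and all ratings, weights and bounds are synthetic:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# hypothetical DRASTIC-style factor ratings at 40 monitoring wells (7 factors)
R = rng.uniform(1, 10, size=(40, 7))
w_hidden = np.array([5, 4, 3, 2, 1, 5, 3], dtype=float)
nitrate = R @ w_hidden + rng.normal(scale=2.0, size=40)  # synthetic observations

def neg_corr(w):
    # objective: maximize Pearson correlation of index values with nitrate
    index = R @ w
    return -np.corrcoef(index, nitrate)[0, 1]

w0 = np.full(7, 3.0)  # uncalibrated default weights
# SLSQP here stands in for the generalized reduced gradient solver
res = minimize(neg_corr, w0, method="SLSQP", bounds=[(1.0, 5.0)] * 7)
r_before, r_after = -neg_corr(w0), -res.fun
```

As in the study, the calibrated weights can only improve (or match) the correlation achieved by the default weights.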

  6. Spatial pattern evaluation of a calibrated national hydrological model - a remote-sensing-based diagnostic approach

    Science.gov (United States)

    Mendiguren, Gorka; Koch, Julian; Stisen, Simon

    2017-11-01

    Distributed hydrological models are traditionally evaluated against discharge stations, emphasizing the temporal and neglecting the spatial component of a model. The present study widens the traditional paradigm by highlighting spatial patterns of evapotranspiration (ET), a key variable at the land-atmosphere interface, obtained from two different approaches at the national scale of Denmark. The first approach is based on a national water resources model (DK-model), using the MIKE-SHE model code, and the second approach utilizes a two-source energy balance model (TSEB) driven mainly by satellite remote sensing data. Ideally, the hydrological model simulation and the remote-sensing-based approach should present similar spatial patterns and driving mechanisms of ET. However, the spatial comparison showed that the differences are significant and indicate insufficient spatial pattern performance of the hydrological model. The differences in spatial patterns can partly be explained by the fact that the hydrological model is configured to run in six domains that are calibrated independently from each other, as is often the case for large-scale multi-basin calibrations. Furthermore, the model incorporates predefined temporal dynamics of leaf area index (LAI), root depth (RD) and crop coefficient (Kc) for each land cover type. This zonal approach to model parameterization ignores the spatiotemporal complexity of the natural system. To overcome this limitation, this study features a modified version of the DK-model in which LAI, RD and Kc are empirically derived using remote sensing data and detailed soil property maps in order to generate a higher degree of spatiotemporal variability and spatial consistency between the six domains. The effects of these changes are analyzed by using empirical orthogonal function (EOF) analysis to evaluate spatial patterns. The EOF analysis shows that including remote-sensing-derived LAI, RD and Kc in the distributed hydrological model adds ...
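The EOF analysis used for the spatial-pattern evaluation is, at its core, an SVD of the space-time anomaly matrix: the right singular vectors are the spatial patterns (EOFs), the scaled left singular vectors are their time series, and the squared singular values give the explained variance. A small synthetic sketch (field sizes and the single dominant mode are invented):

```python
import numpy as np

rng = np.random.default_rng(8)

# synthetic ET anomaly maps: 24 monthly fields over 100 grid cells (hypothetical)
t, s = 24, 100
pattern = rng.normal(size=s)                       # one dominant spatial mode
amplitude = np.sin(np.linspace(0, 4 * np.pi, t))   # its seasonal time series
fields = np.outer(amplitude, pattern) + 0.1 * rng.normal(size=(t, s))

# EOF analysis = SVD of the temporal-anomaly matrix
anom = fields - fields.mean(axis=0)
U, S, Vt = np.linalg.svd(anom, full_matrices=False)
eofs = Vt                               # rows: spatial patterns (EOFs)
pcs = U * S                             # columns: principal-component time series
explained = S ** 2 / np.sum(S ** 2)     # variance fraction per mode
```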

  7. Model Calibration in Option Pricing

    Directory of Open Access Journals (Sweden)

    Andre Loerx

    2012-04-01

    Full Text Available We consider calibration problems for models of pricing derivatives which occur in mathematical finance. We discuss various approaches such as using stochastic differential equations or partial differential equations for the modeling process. We discuss the development in the past literature and give an outlook into modern approaches of modelling. Furthermore, we address important numerical issues in the valuation of options and likewise the calibration of these models. This leads to interesting problems in optimization, where, e.g., the use of adjoint equations or the choice of the parametrization for the model parameters play an important role.

  8. Mechanisms of neuroblastoma regression

    Science.gov (United States)

    Brodeur, Garrett M.; Bagatell, Rochelle

    2014-01-01

    Recent genomic and biological studies of neuroblastoma have shed light on the dramatic heterogeneity in the clinical behaviour of this disease, which spans from spontaneous regression or differentiation in some patients, to relentless disease progression in others, despite intensive multimodality therapy. This evidence also suggests several possible mechanisms to explain the phenomena of spontaneous regression in neuroblastomas, including neurotrophin deprivation, humoral or cellular immunity, loss of telomerase activity and alterations in epigenetic regulation. A better understanding of the mechanisms of spontaneous regression might help to identify optimal therapeutic approaches for patients with these tumours. Currently, the most druggable mechanism is the delayed activation of developmentally programmed cell death regulated by the tropomyosin receptor kinase A pathway. Indeed, targeted therapy aimed at inhibiting neurotrophin receptors might be used in lieu of conventional chemotherapy or radiation in infants with biologically favourable tumours that require treatment. Alternative approaches consist of breaking immune tolerance to tumour antigens or activating neurotrophin receptor pathways to induce neuronal differentiation. These approaches are likely to be most effective against biologically favourable tumours, but they might also provide insights into treatment of biologically unfavourable tumours. We describe the different mechanisms of spontaneous neuroblastoma regression and the consequent therapeutic approaches. PMID:25331179

  9. Efficient mass calibration of magnetic sector mass spectrometers

    International Nuclear Information System (INIS)

    Roddick, J.C.

    1996-01-01

    Magnetic sector mass spectrometers used for automatic acquisition of precise isotopic data are usually controlled with Hall probes and software that uses polynomial equations to define and calibrate the mass-field relations required for mass focusing. This procedure requires a number of reference masses and careful tuning to define and maintain an accurate mass calibration. A simplified equation is presented and applied to several different magnetically controlled mass spectrometers. The equation accounts for nonlinearity in typical Hall probe controlled mass-field relations, reduces calibration to a linear fitting procedure, and is sufficiently accurate to permit calibration over a mass range of 2 to 200 amu with only two defining masses. Procedures developed can quickly correct for normal drift in calibrations and compensate for drift during isotopic analysis over a limited mass range such as a single element. The equation is Field = A·Mass^(1/2) + B·Mass^p, where A, B, and p are constants. The power value p has a characteristic value for a Hall probe/controller and is insensitive to changing conditions, thus reducing calibration to a linear regression to determine optimum A and B. (author). 1 ref., 1 tab., 6 figs
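With p fixed at its characteristic value, Field = A·Mass^(1/2) + B·Mass^p is linear in A and B, so two defining masses determine the calibration exactly, which is the simplification the abstract highlights. A sketch with hypothetical numbers (the value of p and all readings are invented):

```python
import numpy as np

# characteristic exponent for the Hall probe/controller (hypothetical value)
p = 0.35

# two defining reference masses (amu) and their field settings (hypothetical units)
m_ref = np.array([2.0, 200.0])
field_ref = np.array([120.5, 1480.2])

# Field = A*sqrt(M) + B*M**p is linear in A and B once p is fixed,
# so two reference masses determine the calibration exactly
design = np.column_stack([np.sqrt(m_ref), m_ref ** p])
A, B = np.linalg.solve(design, field_ref)

def field_for_mass(mass):
    """Field setting that focuses the given mass (amu)."""
    return A * np.sqrt(mass) + B * mass ** p
```

With more than two reference masses the same design matrix would simply be passed to a linear least-squares fit.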

  10. Bayesian logistic regression approaches to predict incorrect DRG assignment.

    Science.gov (United States)

    Suleiman, Mani; Demirhan, Haydar; Boyd, Leanne; Girosi, Federico; Aksakalli, Vural

    2018-05-07

    Episodes of care involving similar diagnoses and treatments and requiring similar levels of resource utilisation are grouped to the same Diagnosis-Related Group (DRG). In jurisdictions which implement DRG-based payment systems, DRGs are a major determinant of funding for inpatient care. Hence, service providers often dedicate auditing staff to the task of checking that episodes have been coded to the correct DRG. The use of statistical models to estimate an episode's probability of DRG error can significantly improve the efficiency of clinical coding audits. This study implements Bayesian logistic regression models with weakly informative prior distributions to estimate the likelihood that episodes require a DRG revision, comparing these models with each other and with classical maximum likelihood estimates. All Bayesian approaches had more stable model parameters than maximum likelihood. The best performing Bayesian model improved overall classification performance by 6% compared to maximum likelihood, and by 34% compared to random classification. We found that the original DRG, the coder and the day of coding all have a significant effect on the likelihood of DRG error. Use of Bayesian approaches has improved model parameter stability and classification accuracy. This method has already led to improved audit efficiency in an operational capacity.
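A weakly informative Gaussian prior on the coefficients turns maximum likelihood into a penalised (MAP) fit, which is one simple route to the parameter stability the study reports. A numpy sketch on synthetic episode data (feature names, effect sizes and the prior scale are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical episode features: original-DRG score, coder effect, day of coding
X = rng.normal(size=(500, 3))
beta_hidden = np.array([1.2, -0.8, 0.5])
p_err = 1.0 / (1.0 + np.exp(-(X @ beta_hidden - 1.0)))
y = (rng.random(500) < p_err).astype(float)      # 1 = DRG revision required

Xd = np.column_stack([np.ones(500), X])          # add intercept column
sigma2 = 2.5 ** 2                                # weakly informative N(0, 2.5^2) prior

# MAP estimate: gradient ascent on log-likelihood + log-prior
beta = np.zeros(4)
for _ in range(2000):
    p_hat = 1.0 / (1.0 + np.exp(-(Xd @ beta)))
    grad = Xd.T @ (y - p_hat) - beta / sigma2    # likelihood grad minus prior shrinkage
    beta += 0.005 * grad
```

A full Bayesian treatment would sample the posterior (e.g. with MCMC) rather than stop at the MAP point estimate.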

  11. Improving ASTER GDEM Accuracy Using Land Use-Based Linear Regression Methods: A Case Study of Lianyungang, East China

    Directory of Open Access Journals (Sweden)

    Xiaoyan Yang

    2018-04-01

    Full Text Available The Advanced Spaceborne Thermal-Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM) is important to a wide range of geographical and environmental studies. Its accuracy, to some extent associated with land-use types reflecting topography, vegetation coverage, and human activities, impacts the results and conclusions of these studies. In order to improve the accuracy of ASTER GDEM prior to its application, we investigated ASTER GDEM errors based on individual land-use types and proposed two linear regression calibration methods, one considering only land use-specific errors and the other considering the impact of both land use and topography. Our calibration methods were tested on the coastal prefectural city of Lianyungang in eastern China. Results indicate that (1) ASTER GDEM is highly accurate for rice, wheat, grass and mining lands but less accurate for scenic, garden, wood and bare lands; (2) despite improvements in ASTER GDEM2 accuracy, multiple linear regression calibration requires more data (topography) and a relatively complex calibration process; (3) simple linear regression calibration proves a practicable and simplified means to systematically investigate and improve the impact of land use on ASTER GDEM accuracy. Our method is applicable to areas with detailed land-use data based on highly accurate field-based point-elevation measurements.
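The simpler of the two calibrations, one linear correction per land-use class, can be sketched on synthetic elevations (land-use classes, biases and noise levels are invented):

```python
import numpy as np

rng = np.random.default_rng(4)

# hypothetical reference elevations (m) and ASTER GDEM readings, two land-use types
land_use = np.array(["wood"] * 30 + ["bare"] * 30)
true_elev = rng.uniform(0, 300, size=60)
bias = np.where(land_use == "wood", 12.0, -4.0)   # e.g. canopy vs bare-ground bias
gdem = true_elev + bias + rng.normal(scale=3.0, size=60)

# fit one simple linear regression per land-use type: true = a * gdem + b
models = {}
for lu in np.unique(land_use):
    sel = land_use == lu
    models[lu] = np.polyfit(gdem[sel], true_elev[sel], 1)

def calibrate(gdem_value, lu):
    a, b = models[lu]
    return a * gdem_value + b

rmse_before = np.sqrt(np.mean((gdem - true_elev) ** 2))
corrected = np.array([calibrate(g, lu) for g, lu in zip(gdem, land_use)])
rmse_after = np.sqrt(np.mean((corrected - true_elev) ** 2))
```

The multiple-regression variant in the paper would add topographic predictors (e.g. slope) to each per-class design matrix.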

  12. Calibration factor or calibration coefficient?

    International Nuclear Information System (INIS)

    Meghzifene, A.; Shortt, K.R.

    2002-01-01

    Full text: The IAEA/WHO network of SSDLs was set up in order to establish links between SSDL members and the international measurement system. At the end of 2001, there were 73 network members in 63 Member States. The SSDL network members provide calibration services to end-users at the national or regional level. The results of the calibrations are summarized in a document called a calibration report or calibration certificate. The IAEA has been using the term calibration certificate and will continue using the same terminology. The most important information in a calibration certificate is a list of calibration factors and their related uncertainties that apply to the calibrated instrument for the well-defined irradiation and ambient conditions. The IAEA has recently decided to change the term calibration factor to calibration coefficient, to be fully in line with ISO [ISO 31-0], which recommends the use of the term coefficient for k when it links two quantities A and B of different dimensions, A = k·B (equation 1). The term factor should only be used for k when A and B have the same dimensions. However, in a typical calibration, an ion chamber is calibrated in terms of a physical quantity such as air kerma, dose to water, ambient dose equivalent, etc. If the chamber is calibrated together with its electrometer, then the calibration refers to the physical quantity to be measured per electrometer unit reading. In this case, the quantities linked have different dimensions. The adoption by the Agency of the term coefficient to express the results of calibrations is consistent with the 'International vocabulary of basic and general terms in metrology' prepared jointly by the BIPM, IEC, ISO, OIML and other organizations. The BIPM has changed from factor to coefficient. The authors believe that this is more than just a matter of semantics and recommend that the SSDL network members adopt this change in terminology. (author)

  13. Least Squares Adjustment: Linear and Nonlinear Weighted Regression Analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2007-01-01

    This note primarily describes the mathematics of least squares regression analysis as it is often used in geodesy, including land surveying and satellite positioning applications. In these fields regression is often termed adjustment. The note also contains a couple of typical land surveying and satellite positioning application examples. In these application areas we are typically interested in the parameters of the model (typically 2- or 3-D positions) and not in predictive modelling, which is often the main concern in other regression analysis applications. Adjustment is often used to obtain ... the clock error) and to obtain estimates of the uncertainty with which the position is determined. Regression analysis is used in many other fields of application in the natural, the technical and the social sciences. Examples may be curve fitting, calibration, establishing relationships between ...
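The weighted least squares adjustment the note describes condenses to the normal equations x̂ = (AᵀWA)⁻¹AᵀWy, with the parameter covariance given by (AᵀWA)⁻¹. A minimal sketch for repeated direct observations of a single height (all numbers hypothetical):

```python
import numpy as np

# levelling-style adjustment: estimate one height from four direct observations
# with different a-priori standard deviations
A = np.ones((4, 1))                        # design matrix (direct observations)
y = np.array([100.02, 99.98, 100.05, 100.00])   # observed heights, m
sigma = np.array([0.01, 0.01, 0.05, 0.02])      # a-priori std. dev., m
W = np.diag(1.0 / sigma ** 2)              # weight matrix = inverse variances

# weighted least squares estimate: x = (A' W A)^-1 A' W y
N = A.T @ W @ A                            # normal matrix
x_hat = np.linalg.solve(N, A.T @ W @ y)

# uncertainty of the estimate from the inverse normal matrix
cov_x = np.linalg.inv(N)
```

For a single directly observed parameter this reduces to the inverse-variance weighted mean, with the precise observations dominating the estimate.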

  14. Calibration and use of plate meter regressions for pasture mass estimation in an Appalachian silvopasture

    Science.gov (United States)

    A standardized plate meter for measuring pasture mass was calibrated at the Agroforestry Research and Demonstration Site in Blacksburg, VA, using six ungrazed plots of established tall fescue (Festuca arundinacea) overseeded with orchardgrass (Dactylis glomerata). Each plot was interplanted with b...

  15. Mutual-Coupling Based Phased-Array Calibration: A Robust and Versatile Approach

    NARCIS (Netherlands)

    Bekers, D.J.; Dijk, R. van; Vliet, F.E. van

    2013-01-01

    The transmit and receive modules of a large phased array are often calibrated for amplitude and phase variations by an internal calibration network and an offline characterization of the complete array in an anechoic chamber. Such a solution is less obvious in view of current trends towards

  16. Tax Evasion, Information Reporting, and the Regressive Bias Prediction

    DEFF Research Database (Denmark)

    Boserup, Simon Halphen; Pinje, Jori Veng

    2013-01-01

    Models of rational tax evasion and optimal enforcement invariably predict a regressive bias in the effective tax system, which reduces redistribution in the economy. Using Danish administrative data, we show that a calibrated structural model of this type replicates moments and correlations of tax evasion and audit probabilities once we account for information reporting in the tax compliance game. When conditioning on information reporting, we find that both reduced-form evidence and simulations exhibit the predicted regressive bias. However, in the overall economy, this bias is negated by the tax...

  17. A machine learning calibration model using random forests to improve sensor performance for lower-cost air quality monitoring

    Science.gov (United States)

    Zimmerman, Naomi; Presto, Albert A.; Kumar, Sriniwasa P. N.; Gu, Jason; Hauryliuk, Aliaksei; Robinson, Ellis S.; Robinson, Allen L.; Subramanian, R.

    2018-01-01

    Low-cost sensing strategies hold the promise of denser air quality monitoring networks, which could significantly improve our understanding of personal air pollution exposure. Additionally, low-cost air quality sensors could be deployed to areas where limited monitoring exists. However, low-cost sensors are frequently sensitive to environmental conditions and pollutant cross-sensitivities, which have historically been poorly addressed by laboratory calibrations, limiting their utility for monitoring. In this study, we investigated different calibration models for the Real-time Affordable Multi-Pollutant (RAMP) sensor package, which measures CO, NO2, O3, and CO2. We explored three methods: (1) laboratory univariate linear regression, (2) empirical multiple linear regression, and (3) machine-learning-based calibration models using random forests (RF). Calibration models were developed for 16-19 RAMP monitors (varied by pollutant) using training and testing windows spanning August 2016 through February 2017 in Pittsburgh, PA, US. The random forest models matched (CO) or significantly outperformed (NO2, CO2, O3) the other calibration models, and their accuracy and precision were robust over time for testing windows of up to 16 weeks. Following calibration, average mean absolute error on the testing data set from the random forest models was 38 ppb for CO (14 % relative error), 10 ppm for CO2 (2 % relative error), 3.5 ppb for NO2 (29 % relative error), and 3.4 ppb for O3 (15 % relative error), and Pearson r versus the reference monitors exceeded 0.8 for most units. Model performance is explored in detail, including a quantification of model variable importance, accuracy across different concentration ranges, and performance in a range of monitoring contexts including the National Ambient Air Quality Standards (NAAQS) and the US EPA Air Sensors Guidebook recommendations of minimum data quality for personal exposure measurement. 
A key strength of the RF approach is that...
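The first two calibration models compared in the study, laboratory univariate linear regression and empirical multiple linear regression, can be sketched on synthetic sensor data (all coefficients and noise levels are invented; the random-forest variant additionally needs an ML library):

```python
import numpy as np

rng = np.random.default_rng(5)

# synthetic low-cost CO sensor with temperature and humidity cross-sensitivity
n = 300
co_true = rng.uniform(100, 2000, size=n)           # ppb, reference monitor
temp = rng.uniform(0, 35, size=n)                  # deg C
rh = rng.uniform(20, 90, size=n)                   # % relative humidity
raw = 0.8 * co_true + 4.0 * temp - 1.5 * rh + rng.normal(scale=20, size=n)

train, test = np.arange(0, 200), np.arange(200, n)

# (1) laboratory-style univariate linear calibration: co ~ raw
a, b = np.polyfit(raw[train], co_true[train], 1)
pred_uni = a * raw[test] + b

# (2) empirical multiple linear regression: co ~ raw + temp + rh
Xd = np.column_stack([np.ones(n), raw, temp, rh])
coef, *_ = np.linalg.lstsq(Xd[train], co_true[train], rcond=None)
pred_mlr = Xd[test] @ coef

rmse = lambda p: np.sqrt(np.mean((p - co_true[test]) ** 2))
rmse_uni, rmse_mlr = rmse(pred_uni), rmse(pred_mlr)
```

Because the univariate model cannot remove the environmental cross-sensitivities, the multivariate calibration achieves the lower test error, mirroring the ordering reported in the study.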

  18. Application of calibrations to hyperspectral images of food grains: example for wheat falling number

    Directory of Open Access Journals (Sweden)

    Nicola Caporaso

    2017-04-01

    Full Text Available The presence of a few kernels with sprouting problems in a batch of wheat can result in enzymatic activity sufficient to compromise flour functionality and bread quality. This is commonly assessed using the Hagberg Falling Number (HFN) method, which is a batch analysis. Hyperspectral imaging (HSI) can provide analysis at the single-grain level with potential for improved performance. The present paper deals with the development and application of calibrations obtained using an HSI system working in the near infrared (NIR) region (~900–2500 nm) and reference measurements of HFN. A partial least squares regression calibration has been built using 425 wheat samples with an HFN range of 62–318 s, including field and laboratory pre-germinated samples placed under wet conditions. Two different approaches were tested to apply the calibrations: (i) application of the calibration to each pixel, followed by calculation of the average of the resulting values for each object (kernel); (ii) calculation of the average spectrum for each object, followed by application of the calibration to the mean spectrum. The calibration performance achieved for HFN (R2 = 0.6; RMSEC ~ 50 s; RMSEP ~ 63 s) compares favourably with other studies using NIR spectroscopy. Linear spectral pre-treatments lead to similar results when applying the two methods, while non-linear treatments such as standard normal variate showed obvious differences between these approaches. A classification model based on linear discriminant analysis (LDA) was also applied to segregate wheat kernels into low (<250 s) and high (>250 s) HFN groups. LDA correctly classified 86.4% of the samples, with a classification accuracy of 97.9% when using an HFN threshold of 150 s. These results are promising in terms of wheat quality assessment using a rapid and non-destructive technique which is able to analyse wheat properties on a single-kernel basis, and to classify samples as acceptable or unacceptable for flour production.
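For a purely linear calibration, the two application approaches, (i) predict per pixel then average, and (ii) average the spectra then predict, give identical results, which is why only non-linear pre-treatments such as SNV separate them in the study. A small sketch (the linear coefficients stand in for a fitted PLS model):

```python
import numpy as np

rng = np.random.default_rng(6)

# hypothetical linear calibration (e.g. from PLS): y = spectrum @ b + b0
n_bands = 50
b = rng.normal(size=n_bands)
b0 = 2.0

# simulated pixel spectra belonging to one kernel
pixels = rng.normal(size=(120, n_bands))

# (i) predict every pixel, then average the predictions
pred_pixelwise = np.mean(pixels @ b + b0)

# (ii) average the spectra, then predict once from the mean spectrum
pred_meanspec = pixels.mean(axis=0) @ b + b0
```

The two numbers agree to floating-point precision because averaging commutes with a linear map; inserting a non-linear per-pixel transform (such as SNV) before the model breaks this equivalence.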

  19. New approach for calibration and interpretation of IRAD GAGE vibrating-wire stressmeters

    International Nuclear Information System (INIS)

    Mao, N.

    1986-05-01

    IRAD GAGE vibrating-wire stressmeters were installed in the Spent Fuel Facility at the Nevada Test Site to measure the change in in-situ stress during the Spent Fuel Test-Climax (SFT-C). This paper discusses the results of removing a cylindrical section of rock and gages as a unit through overcoring, and the subsequent post-test calibration of the stressmeters in the laboratory. The estimated in-situ stresses based on post-test calibration data are quite consistent with those directly measured in nearby holes. The magnitude of stress change calculated from pre-test calibration data is generally much smaller than that estimated from post-test calibration data. 11 refs., 5 figs., 2 tabs

  20. A Gaussian mixture copula model based localized Gaussian process regression approach for long-term wind speed prediction

    International Nuclear Information System (INIS)

    Yu, Jie; Chen, Kuilin; Mori, Junichi; Rashid, Mudassir M.

    2013-01-01

    Optimizing wind power generation and controlling the operation of wind turbines to efficiently harness the renewable wind energy is a challenging task due to the intermittency and unpredictable nature of wind speed, which has significant influence on wind power production. A new approach for long-term wind speed forecasting is developed in this study by integrating GMCM (Gaussian mixture copula model) and localized GPR (Gaussian process regression). The time series of wind speed is first classified into multiple non-Gaussian components through the Gaussian mixture copula model and then Bayesian inference strategy is employed to incorporate the various non-Gaussian components using the posterior probabilities. Further, the localized Gaussian process regression models corresponding to different non-Gaussian components are built to characterize the stochastic uncertainty and non-stationary seasonality of the wind speed data. The various localized GPR models are integrated through the posterior probabilities as the weightings so that a global predictive model is developed for the prediction of wind speed. The proposed GMCM–GPR approach is demonstrated using wind speed data from various wind farm locations and compared against the GMCM-based ARIMA (auto-regressive integrated moving average) and SVR (support vector regression) methods. In contrast to GMCM–ARIMA and GMCM–SVR methods, the proposed GMCM–GPR model is able to well characterize the multi-seasonality and uncertainty of wind speed series for accurate long-term prediction. - Highlights: • A novel predictive modeling method is proposed for long-term wind speed forecasting. • Gaussian mixture copula model is estimated to characterize the multi-seasonality. • Localized Gaussian process regression models can deal with the random uncertainty. • Multiple GPR models are integrated through Bayesian inference strategy. • The proposed approach shows higher prediction accuracy and reliability
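The integration step described above, localized regressors combined through posterior component probabilities, can be sketched with plain Gaussian components and linear local fits standing in for the GMCM and the localized GPRs (all regimes, parameters and noise levels are synthetic):

```python
import numpy as np

rng = np.random.default_rng(7)

# synthetic wind-speed-like data from two regimes with different input ranges
n = 400
z = rng.random(n) < 0.5                                # latent component label
x = np.where(z, rng.normal(3.0, 1.2, n), rng.normal(7.0, 1.2, n))
y = np.where(z, 2.0 + 0.8 * x, 9.0 - 0.5 * x) + rng.normal(scale=0.2, size=n)

# localized regressors: one linear fit per component stands in for each local GPR
fit = {c: np.polyfit(x[z == c], y[z == c], 1) for c in (True, False)}

def phi(xq, mu, sd):
    # Gaussian density, used as a stand-in component model
    return np.exp(-0.5 * ((xq - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def predict(xq):
    # posterior component probabilities (responsibilities) weight the local models
    w1 = 0.5 * phi(xq, 3.0, 1.2)
    w2 = 0.5 * phi(xq, 7.0, 1.2)
    total = w1 + w2
    w1, w2 = w1 / total, w2 / total
    return w1 * np.polyval(fit[True], xq) + w2 * np.polyval(fit[False], xq)
```

Deep inside one regime the prediction tracks that regime's local model; between regimes the posterior weights blend the two smoothly, which is the role the Bayesian inference strategy plays in the proposed GMCM-GPR approach.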

  1. A Visual Analytics Approach for Correlation, Classification, and Regression Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Steed, Chad A [ORNL; SwanII, J. Edward [Mississippi State University (MSU); Fitzpatrick, Patrick J. [Mississippi State University (MSU); Jankun-Kelly, T.J. [Mississippi State University (MSU)

    2012-02-01

    New approaches that combine the strengths of humans and machines are necessary to equip analysts with the proper tools for exploring today's increasingly complex, multivariate data sets. In this paper, a novel visual data mining framework, called the Multidimensional Data eXplorer (MDX), is described that addresses the challenges of today's data by combining automated statistical analytics with a highly interactive, parallel-coordinates-based canvas. In addition to several intuitive interaction capabilities, this framework offers a rich set of graphical statistical indicators, interactive regression analysis, visual correlation mining, automated axis arrangements and filtering, and data classification techniques. The current work provides a detailed description of the system as well as a discussion of key design aspects and critical feedback from domain experts.

  2. UNIVERSAL AUTO-CALIBRATION FOR A RAPID BATTERY IMPEDANCE SPECTRUM MEASUREMENT DEVICE

    Energy Technology Data Exchange (ETDEWEB)

    Jon P. Christophersen; John L. Morrison; William H. Morrison

    2014-03-01

    Electrochemical impedance spectroscopy has been shown to be a valuable tool for diagnostics and prognostics of energy storage devices such as batteries and ultra-capacitors. Although measurements have been typically confined to laboratory environments, rapid impedance spectrum measurement techniques have been developed for on-line, embedded applications as well. The prototype hardware for the rapid technique has been validated using lithium-ion batteries, but issues with calibration had also been identified. A new, universal automatic calibration technique was developed to address the identified issues while also enabling a more simplified approach. A single, broad-frequency range is used to calibrate the system and then scaled to the actual range and conditions used when measuring a device under test. The range used for calibration must be broad relative to the expected measurement conditions for the scaling to be successful. Validation studies were performed by comparing the universal calibration approach with data acquired from targeted calibration ranges based on the expected range of performance for the device under test. First, a mid-level shunt range was used for calibration and used to measure devices with lower and higher impedance. Next, a high excitation current level was used for calibration, followed by measurements using lower currents. Finally, calibration was performed over a wide frequency range and used to measure test articles with a lower set of frequencies. In all cases, the universal calibration approach compared very well with results acquired following a targeted calibration. Additionally, the shunts used for the automated calibration technique were successfully characterized such that the rapid impedance measurements compare very well with laboratory-scale measurements. These data indicate that the universal approach can be successfully used for onboard rapid impedance spectra measurements for a broad set of test devices and range of

  3. A fuzzy regression with support vector machine approach to the estimation of horizontal global solar radiation

    International Nuclear Information System (INIS)

    Baser, Furkan; Demirhan, Haydar

    2017-01-01

    Accurate estimation of the amount of horizontal global solar radiation for a particular field is an important input for decision processes in solar radiation investments. In this article, we focus on the estimation of yearly mean daily horizontal global solar radiation by using an approach that utilizes fuzzy regression functions with support vector machine (FRF-SVM). This approach is not seriously affected by outlier observations and does not suffer from the over-fitting problem. To demonstrate the utility of the FRF-SVM approach in the estimation of horizontal global solar radiation, we conduct an empirical study over a dataset collected in Turkey and apply the FRF-SVM approach with several kernel functions. Then, we compare the estimation accuracy of the FRF-SVM approach to an adaptive neuro-fuzzy system and a coplot supported-genetic programming approach. We observe that the FRF-SVM approach with a Gaussian kernel function is affected by neither outliers nor the over-fitting problem and gives the most accurate estimates of horizontal global solar radiation among the applied approaches. Consequently, the use of hybrid fuzzy functions and support vector machine approaches is found beneficial in long-term forecasting of horizontal global solar radiation over a region with complex climatic and terrestrial characteristics. - Highlights: • A fuzzy regression functions with support vector machines approach is proposed. • The approach is robust against outlier observations and the over-fitting problem. • Estimation accuracy of the model is superior to several existing alternatives. • A new solar radiation estimation model is proposed for the region of Turkey. • The model is useful under complex terrestrial and climatic conditions.
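The FRF-SVM machinery itself is involved, but the core ingredients it shares with simpler methods, a Gaussian kernel plus a regularisation term that guards against over-fitting, can be illustrated with plain Gaussian-kernel ridge regression. This is a simplified stand-in, not the paper's method, and all data below are synthetic.

```python
import numpy as np

# Simplified stand-in for the Gaussian-kernel idea above: kernel ridge
# regression on synthetic "site latitude -> mean daily radiation" data.
# The ridge term lam plays the regularising role that protects against
# over-fitting; all numbers are invented for illustration.

rng = np.random.default_rng(0)
X = rng.uniform(36, 42, size=(40, 1))                    # e.g. site latitude (deg)
y = 25 - 0.4 * (X[:, 0] - 36) + rng.normal(0, 0.3, 40)   # radiation proxy, MJ/m^2

def gaussian_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

lam = 1e-2                                               # regularisation strength
K = gaussian_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)     # dual coefficients

def predict(Xnew):
    return gaussian_kernel(Xnew, X) @ alpha

pred = predict(np.array([[38.0]]))
```

A true FRF-SVM additionally carries fuzzy membership values through the regression functions; the sketch only shows why a Gaussian kernel with regularisation resists over-fitting.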

  4. Calibration biases in logical reasoning tasks

    Directory of Open Access Journals (Sweden)

    Guillermo Macbeth

    2013-08-01

    Full Text Available The aim of this contribution is to present an experimental study about calibration in deductive reasoning tasks. Calibration is defined as the empirical convergence or divergence between objective and subjective success. The underconfidence bias is understood as the dominance of the former over the latter. The hypothesis of this study states that the form of the propositions presented in the experiment is critical for calibration phenomena. Affirmative and negative propositions are distinguished in their cognitive processing. Results suggest that monotonous compound propositions are prone to underconfidence. A heuristic approach to this phenomenon is proposed: the activation of a monotony heuristic would produce an illusion of simplicity that generates the calibration bias. This evidence is analysed in the context of the metacognitive modeling of calibration phenomena.
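The calibration measure described above is simple to compute: it is the gap between mean subjective success (confidence) and mean objective success (accuracy). A minimal numeric example, with hypothetical per-item values:

```python
# Calibration bias as defined above: mean confidence minus accuracy.
# A negative value means objective success dominates subjective success,
# i.e. underconfidence. Per-item values below are hypothetical.

confidence = [0.60, 0.55, 0.70, 0.50, 0.65]   # subjective success per item
correct    = [1,    1,    1,    0,    1]      # objective outcome per item

mean_conf = sum(confidence) / len(confidence)
accuracy = sum(correct) / len(correct)
bias = mean_conf - accuracy                   # here 0.60 - 0.80 = -0.20: underconfidence
```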

  5. A different approach to estimate nonlinear regression model using numerical methods

    Science.gov (United States)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper concerns computational methods, namely the Gauss-Newton method and gradient algorithm methods (the Newton-Raphson method, the Steepest Descent or Steepest Ascent algorithm, the Method of Scoring, and the Method of Quadratic Hill-Climbing), based on numerical analysis, to estimate the parameters of a nonlinear regression model in a very different way. Principles of matrix calculus have been used to discuss the gradient algorithm methods. Yonathan Bard [1] discussed a comparison of gradient methods for the solution of nonlinear parameter estimation problems; however, this article discusses an analytical approach to the gradient algorithm methods in a different way. This paper describes a new iterative technique, namely a Gauss-Newton method, which differs from the iterative technique proposed by Gordon K. Smyth [2]. Hans Georg Bock et al. [10] proposed numerical methods for parameter estimation in DAEs (differential algebraic equations). Isabel Reis Dos Santos et al. [11] introduced a weighted least squares procedure for estimating the unknown parameters of a nonlinear regression metamodel. For large-scale nonsmooth convex minimization, the Hager and Zhang (HZ) conjugate gradient method and the modified HZ (MHZ) method were presented by Gonglin Yuan et al. [12].
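The standard Gauss-Newton iteration mentioned above linearises the model at the current parameter estimate and solves a least-squares problem for the update. A minimal sketch on an illustrative exponential model (the model and data are not from the paper):

```python
import numpy as np

# Gauss-Newton for the nonlinear regression model y = a*exp(b*x) + error.
# Each iteration solves the linearised least-squares problem J*step = r,
# where J is the Jacobian of the model and r the current residual vector.

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 30)
a_true, b_true = 2.0, 1.5
y = a_true * np.exp(b_true * x) + rng.normal(0, 0.01, x.size)

theta = np.array([1.0, 1.0])                      # starting values (a, b)
for _ in range(20):
    a, b = theta
    r = y - a * np.exp(b * x)                     # residuals at current estimate
    J = np.column_stack([np.exp(b * x),           # df/da
                         a * x * np.exp(b * x)])  # df/db
    step, *_ = np.linalg.lstsq(J, r, rcond=None)  # normal-equations solution
    theta = theta + step                          # Gauss-Newton update

a_hat, b_hat = theta
```

Gradient-algorithm variants differ only in how the step is formed (steepest descent uses J^T r directly; Newton-Raphson adds second-derivative terms).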

  6. Multivariate calibration applied to the quantitative analysis of infrared spectra

    Energy Technology Data Exchange (ETDEWEB)

    Haaland, D.M.

    1991-01-01

    Multivariate calibration methods are very useful for improving the precision, accuracy, and reliability of quantitative spectral analyses. Spectroscopists can more effectively use these sophisticated statistical tools if they have a qualitative understanding of the techniques involved. A qualitative picture of the factor analysis multivariate calibration methods of partial least squares (PLS) and principal component regression (PCR) is presented using infrared calibrations based upon spectra of phosphosilicate glass thin films on silicon wafers. Comparisons of the relative prediction abilities of four different multivariate calibration methods are given based on Monte Carlo simulations of spectral calibration and prediction data. The success of multivariate spectral calibrations is demonstrated for several quantitative infrared studies. The infrared absorption and emission spectra of thin-film dielectrics used in the manufacture of microelectronic devices demonstrate rapid, nondestructive at-line and in-situ analyses using PLS calibrations. Finally, the application of multivariate spectral calibrations to reagentless analysis of blood is presented. We have found that the determination of glucose in whole blood taken from diabetics can be precisely monitored from the PLS calibration of either mid- or near-infrared spectra of the blood. Progress toward the non-invasive determination of glucose levels in diabetics is an ultimate goal of this research. 13 refs., 4 figs.
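Of the two factor-analysis methods named above, PCR is the simpler to sketch: project mean-centred spectra onto their leading principal components, then regress concentration on the scores. The synthetic "spectra" below (two Gaussian component profiles plus noise) are illustrative, not the phosphosilicate glass data.

```python
import numpy as np

# Principal component regression (PCR) on synthetic spectra: each spectrum is
# a mixture of an analyte profile and an interferent profile plus noise; the
# analyte concentration is recovered from the leading PC scores.

rng = np.random.default_rng(2)
wavelengths = np.linspace(0, 1, 100)
profile_analyte = np.exp(-((wavelengths - 0.3) ** 2) / 0.01)
profile_interf = np.exp(-((wavelengths - 0.7) ** 2) / 0.02)

conc = rng.uniform(0, 1, 50)                      # analyte concentrations
interf = rng.uniform(0, 1, 50)                    # interfering species
spectra = (np.outer(conc, profile_analyte) + np.outer(interf, profile_interf)
           + rng.normal(0, 0.005, (50, 100)))

# PCR: SVD of the mean-centred spectra, keep k components, OLS on the scores.
Xc = spectra - spectra.mean(0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
scores = Xc @ Vt[:k].T
design = np.column_stack([np.ones(50), scores])
coef, *_ = np.linalg.lstsq(design, conc, rcond=None)
pred = design @ coef
rmse = np.sqrt(np.mean((pred - conc) ** 2))
```

PLS differs in that the factors are chosen to maximise covariance with the concentrations rather than spectral variance alone, which usually lets it get by with fewer factors.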

  7. A critical comparison of systematic calibration protocols for activated sludge models: a SWOT analysis.

    Science.gov (United States)

    Sin, Gürkan; Van Hulle, Stijn W H; De Pauw, Dirk J W; van Griensven, Ann; Vanrolleghem, Peter A

    2005-07-01

    Modelling activated sludge systems has gained increasing momentum since the introduction of activated sludge models (ASMs) in 1987. Application of dynamic models to full-scale systems essentially requires a calibration of the chosen ASM to the case under study. Numerous full-scale model applications have been performed so far, mostly based on ad hoc approaches and expert knowledge. Further, each modelling study has followed a different calibration approach: e.g. different influent wastewater characterization methods, different kinetic parameter estimation methods, different selection of parameters to be calibrated, different priorities within the calibration steps, etc. In short, there has been no standard approach to performing the calibration study, which makes it difficult, if not impossible, to (1) compare different calibrations of ASMs with each other and (2) perform internal quality checks for each calibration study. To address these concerns, systematic calibration protocols have recently been proposed to bring guidance to the modelling of activated sludge systems and in particular to the calibration of full-scale models. In this contribution four existing calibration approaches (BIOMATH, HSG, STOWA and WERF) are critically discussed using a SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis. It is also assessed in what way these approaches can be further developed in view of further improving the quality of ASM calibration. In this respect, the potential of automating some steps of the calibration procedure by use of mathematical algorithms is highlighted.

  8. A prediction model for spontaneous regression of cervical intraepithelial neoplasia grade 2, based on simple clinical parameters.

    Science.gov (United States)

    Koeneman, Margot M; van Lint, Freyja H M; van Kuijk, Sander M J; Smits, Luc J M; Kooreman, Loes F S; Kruitwagen, Roy F P M; Kruse, Arnold J

    2017-01-01

    This study aims to develop a prediction model for spontaneous regression of cervical intraepithelial neoplasia grade 2 (CIN 2) lesions based on simple clinicopathological parameters. The study was conducted at Maastricht University Medical Center, the Netherlands. The prediction model was developed in a retrospective cohort of 129 women with a histologic diagnosis of CIN 2 who were managed by watchful waiting for 6 to 24 months. Five potential predictors for spontaneous regression were selected based on the literature and expert opinion and were analyzed in a multivariable logistic regression model, followed by backward stepwise deletion based on the Wald test. The prediction model was internally validated by the bootstrapping method. Discriminative capacity and accuracy were tested by assessing the area under the receiver operating characteristic curve (AUC) and a calibration plot. Disease regression within 24 months was seen in 91 (71%) of 129 patients. A prediction model was developed including the following variables: smoking, Papanicolaou test outcome before the CIN 2 diagnosis, concomitant CIN 1 diagnosis in the same biopsy, and more than 1 biopsy containing CIN 2. Not smoking and the Papanicolaou class were among the factors predictive of disease regression. The AUC was 69.2% (95% confidence interval, 58.5%-79.9%), indicating a moderate discriminative ability of the model. The calibration plot indicated good calibration of the predicted probabilities. This prediction model for spontaneous regression of CIN 2 may aid physicians in the personalized management of these lesions. Copyright © 2016 Elsevier Inc. All rights reserved.
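The modelling pipeline above (multivariable logistic regression scored by the AUC) can be sketched on simulated data. The two binary predictors and all coefficients below are invented stand-ins, not the cohort's variables or effect sizes.

```python
import numpy as np

# Sketch: fit a logistic regression by Newton-Raphson on simulated binary
# predictors, then score discrimination with the AUC computed as the
# probability that a random case outranks a random control.

rng = np.random.default_rng(3)
n = 300
X = np.column_stack([np.ones(n),              # intercept
                     rng.integers(0, 2, n),   # hypothetical predictor 1
                     rng.integers(0, 2, n)])  # hypothetical predictor 2
beta_true = np.array([0.3, 0.8, 0.6])         # invented effect sizes
p = 1 / (1 + np.exp(-X @ beta_true))
y = rng.binomial(1, p)

beta = np.zeros(3)
for _ in range(25):                           # Newton-Raphson for the MLE
    mu = 1 / (1 + np.exp(-X @ beta))
    W = mu * (1 - mu)
    grad = X.T @ (y - mu)
    hess = X.T @ (X * W[:, None])             # Fisher information
    beta += np.linalg.solve(hess, grad)

score = X @ beta
pos, neg = score[y == 1], score[y == 0]
# AUC as a rank statistic; ties (identical scores) count one half
auc = ((pos[:, None] > neg[None, :]) + 0.5 * (pos[:, None] == neg[None, :])).mean()
```

Internal validation as in the abstract would repeat the fit on bootstrap resamples and compare the apparent AUC with the out-of-sample AUC to estimate optimism.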

  9. Efficient mass calibration of magnetic sector mass spectrometers

    Energy Technology Data Exchange (ETDEWEB)

    Roddick, J C

    1997-12-31

    Magnetic sector mass spectrometers used for automatic acquisition of precise isotopic data are usually controlled with Hall probes and software that uses polynomial equations to define and calibrate the mass-field relations required for mass focusing. This procedure requires a number of reference masses and careful tuning to define and maintain an accurate mass calibration. A simplified equation is presented and applied to several different magnetically controlled mass spectrometers. The equation accounts for nonlinearity in typical Hall probe controlled mass-field relations, reduces calibration to a linear fitting procedure, and is sufficiently accurate to permit calibration over a mass range of 2 to 200 amu with only two defining masses. Procedures developed can quickly correct for normal drift in calibrations and compensate for drift during isotopic analysis over a limited mass range such as a single element. The equation is: Field = A·Mass^(1/2) + B·Mass^p, where A, B, and p are constants. The power value p has a characteristic value for a Hall probe/controller and is insensitive to changing conditions, thus reducing calibration to a linear regression to determine optimum A and B. (author). 1 ref., 1 tab., 6 figs.
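With p fixed at its characteristic value, the calibration equation Field = A·Mass^(1/2) + B·Mass^p is linear in A and B, so the two defining masses determine them exactly. A numeric sketch (the value of p and the field settings are invented for illustration):

```python
import numpy as np

# Linear calibration of Field = A*sqrt(M) + B*M**p with p held fixed.
# Two (mass, field) reference points give an exactly determined 2x2 system;
# with more reference masses one would use least squares instead.

p = 0.25                                   # characteristic of the probe/controller (assumed)
masses = np.array([2.0, 200.0])            # the two defining masses (amu)
fields = np.array([120.0, 1250.0])         # corresponding field settings (invented units)

design = np.column_stack([np.sqrt(masses), masses ** p])
A, B = np.linalg.solve(design, fields)     # exact fit through both reference points

def field_for_mass(m):
    return A * np.sqrt(m) + B * m ** p
```

Drift correction then amounts to re-solving this small linear system from fresh reference measurements, which is why the method is fast enough to run during an analysis.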

  10. In-orbit calibration approach of the MICROSCOPE experiment for the test of the equivalence principle at 10^-15

    International Nuclear Information System (INIS)

    Pradels, Gregory; Touboul, Pierre

    2003-01-01

    The MICROSCOPE mission is a space experiment of fundamental physics which aims to test the equality between the gravitational and inertial mass with an accuracy of 10^-15. Considering these scientific objectives, very weak accelerations have to be controlled and measured in orbit. By modelling the expected acceleration signals applied to the MICROSCOPE instrument in orbit, the developed analytic model of the mission measurement shows the requirements for instrument calibration. Because of on-ground perturbations, the instrument cannot be calibrated in the laboratory, and an in-orbit procedure has to be defined. The proposed approach exploits the drag-free system of the satellite and is an important element of the future data analysis of the MICROSCOPE space experiment.

  11. A New Approach for Checking and Complementing CALIPSO Lidar Calibration

    Science.gov (United States)

    Josset, Damien B.; Vaughan, Mark A.; Hu, Yongxiang; Avery, Melody A.; Powell, Kathleen A.; Hunt, William H.; Winker, David M.; Pelon, Jacques; Trepte, Charles R.; Lucker, Patricia L.; hide

    2010-01-01

    We have been studying the backscatter ratio of the two CALIPSO wavelengths for three different targets: cirrus clouds, the ocean surface, and liquid water clouds. We show the ratio of integrated attenuated backscatter coefficients for one month of nighttime data (left: July, right: December). Only opaque cirrus classified as randomly oriented ice [1] are used. For the ocean and water clouds, only the clearest shots, determined by a threshold on integrated attenuated backscatter, are used. Two things can be immediately observed: 1. A similar trend (black dotted line) is visible for all targets; the color ratio shows a tendency to be higher in the north and lower in the south for those two months. 2. The water cloud average value is around 15% lower than that of the ocean surface and cirrus clouds. This is due to the different multiple scattering at 532 nm and 1064 nm [2], which strongly impacts the water cloud retrieval. Conclusion: Different targets can be used to improve the accuracy of the CALIPSO 1064 nm calibration. All of them show the signature of an instrumental calibration shift. Multiple scattering introduces a bias in the liquid water cloud signal, but it still compares very well with all other methods and should not be overlooked. The effect of multiple scattering in liquid and ice clouds will be the subject of future research. If there really is a sampling issue, combining all methods to increase the sampling, mapping the calibration coefficient, or trying to reach an orbit-per-orbit calibration seems an appropriate way forward.

  12. Calibration of radiation monitors at nuclear power plants

    International Nuclear Information System (INIS)

    Boudreau, L.; Miller, A.D.; Naughton, M.D.

    1994-03-01

    This work was performed to provide guidance to utilities in the primary and secondary calibration of the radiation monitoring systems (RMS) installed in nuclear power plants. These systems are installed to monitor ongoing processes, identify changing radiation fields, predict and limit personnel radiation exposure, and measure and control the discharge of radioactive materials to the environment. RMS are checked and calibrated on a continuing basis to ensure their precision and accuracy. This report discusses various approaches to primary and secondary calibration of RMS equipment in light of accepted practices at typical power plants and recent interpretations of regulatory guidance. Detailed calibration techniques and overall system responses, trends, and practices are discussed. Industry, utility, and regulatory sources were contacted to create an overall consensus on the most reasonable approaches to optimizing the performance of this equipment.

  13. Strategic development of a multivariate calibration model for the uniformity testing of tablets by transmission NIR analysis.

    Science.gov (United States)

    Sasakura, D; Nakayama, K; Sakamoto, T; Chikuma, T

    2015-05-01

    The use of transmission near infrared spectroscopy (TNIRS) is of particular interest in the pharmaceutical industry because TNIRS requires no sample preparation and can analyze several tens of tablet samples in an hour. It has the capability to measure all relevant information from a tablet while still on the production line. However, TNIRS has a narrow spectral range, and overtone vibrations often overlap. To perform content uniformity testing of tablets by TNIRS, various properties of the tableting process need to be analyzed with a multivariate prediction model, such as a Partial Least Squares Regression model. One issue is that typical approaches rest on several hundred reference samples rather than on a strategically designed method; preparing the reference samples requires many batches, which takes time and is not cost effective. Our group investigated the concentration dependence of the calibration model with a strategic design. Consequently, we developed a more effective approach to the TNIRS calibration model than the existing methodology.

  14. A Two-Stage Penalized Logistic Regression Approach to Case-Control Genome-Wide Association Studies

    Directory of Open Access Journals (Sweden)

    Jingyuan Zhao

    2012-01-01

    Full Text Available We propose a two-stage penalized logistic regression approach to case-control genome-wide association studies. This approach consists of a screening stage and a selection stage. In the screening stage, main-effect and interaction-effect features are screened by using L1-penalized logistic likelihoods. In the selection stage, the retained features are ranked by the logistic likelihood with the smoothly clipped absolute deviation (SCAD) penalty (Fan and Li, 2001) and Jeffrey's prior penalty (Firth, 1993), a sequence of nested candidate models is formed, and the models are assessed by a family of extended Bayesian information criteria (J. Chen and Z. Chen, 2008). The proposed approach is applied to the analysis of the prostate cancer data of the Cancer Genetic Markers of Susceptibility (CGEMS) project of the National Cancer Institute, USA. Simulation studies are carried out to compare the approach with the pair-wise multiple testing approach (Marchini et al., 2005) and the LASSO-patternsearch algorithm (Shi et al., 2007).
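The screening stage above rests on L1-penalised logistic regression. A minimal sketch of that ingredient, using proximal gradient descent (ISTA) on simulated genotype-like data; the SCAD/EBIC selection stage is not reproduced here, and the penalty and step size are assumptions of the sketch.

```python
import numpy as np

# Stage-one screening sketch: maximise an L1-penalised logistic likelihood
# by proximal gradient descent; features with nonzero coefficients survive
# to the selection stage. Data are simulated; lam and step are assumed.

rng = np.random.default_rng(4)
n, m = 200, 30
X = rng.normal(size=(n, m))
beta_true = np.zeros(m)
beta_true[:3] = [2.0, -1.5, 1.5]                  # three real main effects
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

lam, step = 0.05, 0.5
beta = np.zeros(m)
for _ in range(2000):
    mu = 1 / (1 + np.exp(-X @ beta))
    z = beta - step * (X.T @ (mu - y)) / n        # gradient step, average neg. log-lik.
    beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold

screened = np.flatnonzero(beta)                   # features passed to stage two
```

In the full method the retained main effects and interaction features would then be refitted under the SCAD and Jeffrey's prior penalties and ranked by extended BIC.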

  15. Quality control of online calibration in computerized assessment

    NARCIS (Netherlands)

    Glas, Cornelis A.W.

    In computerized adaptive testing, updating item parameter estimates using adaptive testing data is often called online calibration. This study investigated how to evaluate whether the adaptive testing data used for online calibration sufficiently fit the item response model used. Three approaches

  16. Solid laboratory calibration of a nonimaging spectroradiometer.

    Science.gov (United States)

    Schaepman, M E; Dangel, S

    2000-07-20

    Field-based nonimaging spectroradiometers are often used in vicarious calibration experiments for airborne or spaceborne imaging spectrometers. The calibration uncertainties associated with these ground measurements contribute substantially to the overall modeling error in radiance- or reflectance-based vicarious calibration experiments. Because of limitations in the radiometric stability of compact field spectroradiometers, vicarious calibration experiments are based primarily on reflectance measurements rather than on radiance measurements. To characterize the overall uncertainty of radiance-based approaches and assess the sources of uncertainty, we carried out a full laboratory calibration. This laboratory calibration of a nonimaging spectroradiometer is based on a measurement plan targeted at achieving a calibration. The individual calibration steps include characterization of the signal-to-noise ratio, the noise equivalent signal, the dark current, the wavelength calibration, the spectral sampling interval, the nonlinearity, directional and positional effects, the spectral scattering, the field of view, the polarization, the size-of-source effects, and the temperature dependence of a particular instrument. The traceability of the radiance calibration is established to a secondary National Institute of Standards and Technology calibration standard by use of a 95% confidence interval and results in an uncertainty of less than ±7.1% for all spectroradiometer bands.

  17. Modeling and Calibration of a Novel One-Mirror Galvanometric Laser Scanner

    Directory of Open Access Journals (Sweden)

    Chengyi Yu

    2017-01-01

    Full Text Available A laser stripe sensor has limited application when a point cloud of geometric samples on the surface of the object needs to be collected, so a galvanometric laser scanner is designed by using a one-mirror galvanometer element as its mechanical device to drive the laser stripe to sweep along the object. A novel mathematical model is derived for the proposed galvanometer laser scanner without any position assumptions and then a model-driven calibration procedure is proposed. Compared with available model-driven approaches, the influence of machining and assembly errors is considered in the proposed model. Meanwhile, a plane-constraint-based approach is proposed to extract a large number of calibration points effectively and accurately to calibrate the galvanometric laser scanner. Repeatability and accuracy of the galvanometric laser scanner are evaluated on the automobile production line to verify the efficiency and accuracy of the proposed calibration method. Experimental results show that the proposed calibration approach yields similar measurement performance compared with a look-up table calibration method.

  18. Background stratified Poisson regression analysis of cohort data.

    Science.gov (United States)

    Richardson, David B; Langholz, Bryan

    2012-03-01

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models.
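The key trick described above, treating the stratum intercepts as nuisance parameters that need not be estimated explicitly, can be sketched for a log-linear model rate = exp(alpha_s + beta·dose): at fixed beta, each alpha_s has the closed form exp(alpha_s) = sum(cases)/sum(person-time·exp(beta·dose)), so only beta needs to be searched. The data below are synthetic, not the Life Span Study or miner cohorts.

```python
import numpy as np

# Profile-likelihood sketch of background stratified Poisson regression for a
# log-linear model. Stratum intercepts are profiled out in closed form, and
# the remaining one-dimensional likelihood in beta is maximised by grid search.

rng = np.random.default_rng(5)
strata = np.repeat(np.arange(5), 20)              # five background strata, 20 cells each
dose = rng.uniform(0.0, 2.0, strata.size)
pt = rng.uniform(100.0, 1000.0, strata.size)      # person-time per cell
alpha_true = np.linspace(-7.0, -5.0, 5)           # stratum-specific baselines
beta_true = 0.4
cases = rng.poisson(pt * np.exp(alpha_true[strata] + beta_true * dose))

def profile_neg_loglik(beta):
    nll = 0.0
    for s in range(5):
        i = strata == s
        e = pt[i] * np.exp(beta * dose[i])
        mu = e * cases[i].sum() / e.sum()          # profiled stratum intercept
        nll -= (cases[i] * np.log(np.maximum(mu, 1e-300)) - mu).sum()
    return nll

betas = np.linspace(-1.0, 2.0, 601)
beta_hat = betas[np.argmin([profile_neg_loglik(b) for b in betas])]
```

Because the stratum intercepts never appear as free parameters, the same idea scales to designs with very many strata, which is the limitation of standard software that the paper addresses.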

  19. Use of beam probes for rigidity calibration of the A1900 fragment separator

    Energy Technology Data Exchange (ETDEWEB)

    Ginter, T.N. [National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824 (United States); Farinon, F. [Facility for Rare Isotope Beams, Michigan State University, East Lansing, MI 48824 (United States); Baumann, T. [National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824 (United States); Hausmann, M. [Facility for Rare Isotope Beams, Michigan State University, East Lansing, MI 48824 (United States); Kwan, E.; Naviliat Cuncic, O. [National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824 (United States); Portillo, M. [Facility for Rare Isotope Beams, Michigan State University, East Lansing, MI 48824 (United States); Rogers, A.M.; Stetson, J.; Sumithrarachchi, C. [National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824 (United States); Villari, A.C.C. [Facility for Rare Isotope Beams, Michigan State University, East Lansing, MI 48824 (United States); Williams, S.J. [National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824 (United States)

    2016-06-01

    Use of a beam-based approach is presented for establishing a rigidity calibration for the A1900 fragment separator located at the National Superconducting Cyclotron Laboratory. Also presented is why an alternative approach to the rigidity calibration – using detailed field maps of individual magnetic components – is not a feasible basis for deriving an accurate calibration. The level of accuracy achieved for the rigidity calibration is ±0.1%.

  20. Information management for maintenance of instrument calibration data

    International Nuclear Information System (INIS)

    Tam, Y.

    1995-01-01

    This paper discusses the rationale for developing a calibration information system at Ontario Hydro Pickering Nuclear Division (PND), including the approach to calibration information problems, the identification of existing processes, discovery of alternatives, selection of the best alternative and project development. (author)

  1. Detection of epistatic effects with logic regression and a classical linear regression model.

    Science.gov (United States)

    Malina, Magdalena; Ickstadt, Katja; Schwender, Holger; Posch, Martin; Bogdan, Małgorzata

    2014-02-01

    To locate multiple interacting quantitative trait loci (QTL) influencing a trait of interest within experimental populations, methods such as Cockerham's model are usually applied. Within this framework, interactions are understood as the part of the joint effect of several genes which cannot be explained as the sum of their additive effects. However, if a change in the phenotype (such as disease) is caused by Boolean combinations of genotypes of several QTLs, the Cockerham approach is often not capable of identifying them properly. To detect such interactions more efficiently, we propose a logic regression framework. Even though a larger number of models has to be considered with the logic regression approach (requiring more stringent multiple testing correction), the efficient representation of higher-order logic interactions in logic regression models leads to a significant increase of power to detect such interactions as compared to the Cockerham approach. The increase in power is demonstrated analytically for a simple two-way interaction model and illustrated in more complex settings with a simulation study and real data analysis.
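The core point above, that a phenotype driven by a Boolean combination of loci is captured only partially by additive terms but directly by a logic feature, can be shown with a toy simulation (the data and effect sizes are invented):

```python
import numpy as np

# Toy comparison: phenotype y depends on the Boolean AND of two binary loci.
# An additive model recovers only part of the signal variance; a single
# logic (AND) feature recovers it all, illustrating the power gain.

rng = np.random.default_rng(6)
n = 500
g1 = rng.integers(0, 2, n)                 # binary genotype codings
g2 = rng.integers(0, 2, n)
y = 1.0 * (g1 & g2) + rng.normal(0, 0.5, n)

def r_squared(features):
    design = np.column_stack([np.ones(n), features])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ coef
    return 1 - resid.var() / y.var()

r2_additive = r_squared(np.column_stack([g1, g2]))   # main effects only
r2_logic = r_squared((g1 & g2).astype(float))        # single logic term
```

Analytically, with independent loci at frequency 1/2 the additive model explains only 2/3 of the variance of the AND signal, which matches the gap the simulation shows.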

  2. Performance and separation occurrence of binary probit regression estimators using the maximum likelihood method and Firth's approach under different sample sizes

    Science.gov (United States)

    Lusiana, Evellin Dewi

    2017-12-01

    The parameters of a binary probit regression model are commonly estimated by the maximum likelihood estimation (MLE) method. However, the MLE method has a limitation if the binary data contain separation. Separation is the condition where one or several independent variables exactly group the categories of the binary response. It causes the MLE estimators to fail to converge, so that they cannot be used in modeling. One effort to resolve the separation is to use Firth's approach instead. This research has two aims: first, to identify the chance of separation occurrence in the binary probit regression model under the MLE method and Firth's approach; second, to compare the performance of the binary probit regression estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both are examined using a simulation method under different sample sizes. The results showed that the chance of separation occurrence with the MLE method for small sample sizes is higher than with Firth's approach. On the other hand, for larger sample sizes, the probability decreased and was relatively identical between the MLE method and Firth's approach. Meanwhile, Firth's estimators have smaller RMSE than the MLEs, especially for smaller sample sizes; for larger sample sizes the RMSEs are not much different. This means that Firth's estimators outperformed the MLE estimators.
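Firth's penalisation is easiest to see in the logistic variant, where the modified score adds h_i·(1/2 − p_i) to each observation (h_i being the leverages); the probit case follows the same pattern with a different link. A minimal sketch on a completely separated dataset, where the ordinary MLE would diverge:

```python
import numpy as np

# Firth-penalised logistic regression by Newton iteration on perfectly
# separated data. The ordinary MLE slope would grow without bound here;
# the Firth-modified score keeps the estimate finite.

x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y = np.array([0, 0, 0, 1, 1, 1])           # perfectly separated at x = 0
X = np.column_stack([np.ones_like(x), x])

beta = np.zeros(2)
for _ in range(50):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    info = X.T @ (X * W[:, None])          # Fisher information
    XW = X * np.sqrt(W)[:, None]
    h = np.diag(XW @ np.linalg.solve(info, XW.T))   # leverages
    score = X.T @ (y - p + h * (0.5 - p))  # Firth-modified score
    beta = beta + np.linalg.solve(info, score)

firth_slope = beta[1]                      # finite despite the separation
```

With plain maximum likelihood the same loop would push the slope toward infinity; the penalty shrinks it to a finite value, which is why Firth's approach both avoids non-convergence and reduces small-sample RMSE.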

  3. A Quality Evaluation of Single and Multiple Camera Calibration Approaches for an Indoor Multi Camera Tracking System

    Directory of Open Access Journals (Sweden)

    M. Adduci

    2014-06-01

    Full Text Available Human detection and tracking has been a prominent research area for several scientists around the globe. State of the art algorithms have been implemented, refined and accelerated to significantly improve the detection rate and eliminate false positives. While 2D approaches are well investigated, 3D human detection and tracking is still a largely unexplored research field. In both the 2D and 3D cases, introducing a multi camera system can vastly improve the accuracy and confidence of the tracking process. Within this work, a quality evaluation is performed on a multi RGB-D camera indoor tracking system, examining how camera calibration and pose affect the quality of human tracks in the scene, independently of the detection and tracking approach used. After performing a calibration step on every Kinect sensor, state of the art single camera pose estimators were evaluated to check how well the poses are estimated using planar objects such as an ordinary chessboard. With this information, a bundle block adjustment and ICP were performed to verify the accuracy of the single pose estimators in a multi camera configuration system. Results have shown that single camera estimators provide high accuracy results of less than half a pixel, forcing the bundle to converge after very few iterations. In relation to ICP, relative information between cloud pairs is more or less preserved, giving a low score of fitting between concatenated pairs. Finally, sensor calibration proved to be an essential step for achieving maximum accuracy in the generated point clouds, and therefore in the accuracy of the produced 3D trajectories from each sensor.

  4. Option price calibration from Renyi entropy

    International Nuclear Information System (INIS)

    Brody, Dorje C.; Buckley, Ian R.C.; Constantinou, Irene C.

    2007-01-01

    The calibration of the risk-neutral density function for the future asset price, based on the maximisation of the entropy measure of Renyi, is proposed. Whilst the conventional approach based on the use of the logarithmic entropy measure fails to produce the observed power-law distribution when calibrated against option prices, the approach outlined here is shown to produce the desired form of the distribution. Procedures for the maximisation of the Renyi entropy under constraints are outlined in detail, and a number of interesting properties of the resulting power-law distributions are also derived. The result is applied to efficiently evaluate prices of path-independent derivatives.

  5. The nuisance of nuisance regression: spectral misspecification in a common approach to resting-state fMRI preprocessing reintroduces noise and obscures functional connectivity.

    Science.gov (United States)

    Hallquist, Michael N; Hwang, Kai; Luna, Beatriz

    2013-11-15

    Recent resting-state functional connectivity fMRI (RS-fcMRI) research has demonstrated that head motion during fMRI acquisition systematically influences connectivity estimates despite bandpass filtering and nuisance regression, which are intended to reduce such nuisance variability. We provide evidence that the effects of head motion and other nuisance signals are poorly controlled when the fMRI time series are bandpass-filtered but the regressors are unfiltered, resulting in the inadvertent reintroduction of nuisance-related variation into frequencies previously suppressed by the bandpass filter, as well as suboptimal correction for noise signals in the frequencies of interest. This is important because many RS-fcMRI studies, including some focusing on motion-related artifacts, have applied this approach. In two cohorts of individuals (n=117 and 22) who completed resting-state fMRI scans, we found that the bandpass-regress approach consistently overestimated functional connectivity across the brain, typically on the order of r=.10-.35, relative to a simultaneous bandpass filtering and nuisance regression approach. Inflated correlations under the bandpass-regress approach were associated with head motion and cardiac artifacts. Furthermore, distance-related differences in the association of head motion and connectivity estimates were much weaker for the simultaneous filtering approach. We recommend that future RS-fcMRI studies ensure that the frequencies of nuisance regressors and fMRI data match prior to nuisance regression, and we advocate a simultaneous bandpass filtering and nuisance regression strategy that better controls nuisance-related variability. Copyright © 2013 Elsevier Inc. All rights reserved.
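The spectral-misspecification effect described above can be reproduced numerically: regressing band-pass filtered data on an unfiltered nuisance regressor reintroduces out-of-band nuisance variance, whereas filtering the regressor the same way does not. The TR, band edges, and signals below are typical illustrative choices, not the paper's data.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Demonstration: "filter then regress raw nuisance" vs. "filter data AND
# regressor before regression". The latter recovers the in-band signal
# more faithfully because the subtracted fit contains no out-of-band power.

rng = np.random.default_rng(7)
tr, n = 2.0, 300
t = np.arange(n) * tr
drift = np.cumsum(rng.normal(size=n))            # slow, motion-like nuisance
drift = drift / np.abs(drift).max()
signal = np.sin(2 * np.pi * 0.05 * t)            # in-band signal of interest
data = signal + 2.0 * drift

b, a = butter(2, [0.009, 0.08], btype="band", fs=1 / tr)

def regress_out(y, reg):
    R = np.column_stack([np.ones(len(reg)), reg])
    return y - R @ np.linalg.lstsq(R, y, rcond=None)[0]

target = filtfilt(b, a, signal)                  # nuisance-free reference
bad = regress_out(filtfilt(b, a, data), drift)               # unfiltered regressor
good = regress_out(filtfilt(b, a, data), filtfilt(b, a, drift))  # matched spectra

err_bad = np.sqrt(np.mean((bad - target) ** 2))
err_good = np.sqrt(np.mean((good - target) ** 2))
```

The fitted multiple of the raw drift subtracted in the "bad" branch carries low-frequency power back into the residual, which is exactly the inadvertent reintroduction of nuisance variation the abstract warns about.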

  6. On-line monitoring for calibration reduction

    International Nuclear Information System (INIS)

    Hoffmann, M.

    2005-09-01

On-Line Monitoring evaluates instrument channel performance by assessing its consistency with other plant indications. Elimination or reduction of unnecessary field calibrations can reduce associated labour costs, reduce personnel radiation exposure, and reduce the potential for calibration errors. On-line calibration monitoring is an important technique for implementing a state-based maintenance approach and reducing unnecessary field calibrations. In this report we look at how the concept is currently applied in the industry and what needs are arising as it becomes more commonplace. We also look at the PEANO System, a tool developed by the Halden Project to perform signal validation and on-line calibration monitoring. Some issues are identified that are being addressed in the further development of these tools to better serve the future needs of the industry in this area, and an outline of how to address them, and which aspects should be taken into account, is described in detail. (Author)

  7. Background stratified Poisson regression analysis of cohort data

    International Nuclear Information System (INIS)

    Richardson, David B.; Langholz, Bryan

    2012-01-01

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models. (orig.)
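The profiling trick in this abstract can be sketched directly: for a stratified Poisson model with rate alpha_s * exp(beta * dose), maximizing over each stratum intercept alpha_s in closed form leaves a "conditional" kernel that depends only on beta. The toy cohort below (two strata, invented case counts and person-years) is purely illustrative:

```python
import math

def profile_loglik(beta, strata):
    """Poisson log-likelihood with stratum intercepts profiled out.

    strata: list of strata; each stratum is a list of
    (cases, person_years, dose) cells. Relative rate model:
    rate = alpha_s * exp(beta * dose); the MLE alpha_s = D_s / denom
    cancels, leaving a kernel that depends only on beta.
    """
    ll = 0.0
    for cells in strata:
        denom = sum(py * math.exp(beta * z) for _, py, z in cells)
        for d, py, z in cells:
            ll += d * (beta * z + math.log(py) - math.log(denom))
    return ll

# toy cohort: in both strata, exposure multiplies the rate by 1.8
strata = [
    [(10, 1000.0, 0.0), (18, 1000.0, 1.0)],
    [(5, 500.0, 0.0), (9, 500.0, 1.0)],
]
# crude grid search for the maximizer of the profile likelihood
grid = [i / 1000.0 for i in range(-500, 1501)]
beta_hat = max(grid, key=lambda b: profile_loglik(b, strata))
```

Because both strata show the same rate ratio of 1.8, the grid maximizer lands near ln 1.8 ≈ 0.588, which, consistent with the abstract, is the same point estimate an unconditional fit with explicit stratum intercepts would give.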

  8. Calibration of the Herschel SPIRE Fourier Transform Spectrometer

    OpenAIRE

    Swinyard, Bruce; Polehampton, E. T.; Hopwood, R.; Valtchanov, I.; Lu, N.; Fulton, T.; Benielli, D.; Imhof, P.; Marchili, N.; Baluteau, J.- P.; Bendo, G. J.; Ferlet, M.; Griffin, Matthew Jason; Lim, T. L.; Makiwa, G.

    2014-01-01

    The Herschel Spectral and Photometric REceiver (SPIRE) instrument consists of an imaging photometric camera and an imaging Fourier Transform Spectrometer (FTS), both operating over a frequency range of ∼450–1550 GHz. In this paper, we briefly review the FTS design, operation, and data reduction, and describe in detail the approach taken to relative calibration (removal of instrument signatures) and absolute calibration against standard astronomical sources. The calibration scheme assumes a sp...

  9. Advanced statistics: linear regression, part II: multiple linear regression.

    Science.gov (United States)

    Marill, Keith A

    2004-01-01

    The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.

  10. Self Calibrating Flow Estimation in Waste Water Pumping Stations

    DEFF Research Database (Denmark)

    Kallesøe, Carsten Skovmose; Knudsen, Torben

    2016-01-01

Knowledge about where waste water is flowing in waste water networks is essential to optimize the operation of the network pumping stations. However, installation of flow sensors is expensive and requires regular maintenance. This paper proposes an alternative approach where the pumps and the waste water pit are used for estimating both the inflow and the pump flow of the pumping station. Due to the nature of waste water, the waste water pumps are heavily affected by wear and tear. To compensate for the wear of the pumps, the pump parameters used for the flow estimation are automatically calibrated. This calibration is done based on data batches stored at each pump cycle, hence making the approach a self-calibrating system. The approach is tested on a pumping station operating in a real waste water network.
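A minimal version of the pit mass-balance idea behind such flow estimation can be written down directly. Everything below (pit area, sample period, the constant-inflow assumption) is invented for illustration and is far simpler than the paper's self-calibrating scheme:

```python
def estimate_flows(levels, pump_on, dt, area):
    """Estimate station inflow and pump flow from pit level readings.

    Mass balance: area * dh/dt = q_in - q_pump. With the pump off,
    q_pump = 0, so q_in can be read from the level rise; with the pump
    on, q_pump = q_in - area * dh/dt.
    """
    dhdt = [(levels[i + 1] - levels[i]) / dt for i in range(len(levels) - 1)]
    off = [area * d for d, on in zip(dhdt, pump_on) if not on]
    q_in = sum(off) / len(off)                 # mean inflow while pump is off
    q_pump = [q_in - area * d for d, on in zip(dhdt, pump_on) if on]
    return q_in, q_pump

# synthetic pit in consistent units: area 2, inflow 0.5, pump moving 1.5
levels = [1.0, 1.25, 1.5, 1.75, 1.25, 0.75, 0.25]
pump_on = [False, False, False, True, True, True]
q_in, q_pump = estimate_flows(levels, pump_on, dt=1.0, area=2.0)
```

The paper's contribution is re-estimating the pump parameters from data batches at each pump cycle so that this balance stays valid as the pumps wear; the sketch above assumes they are known and constant.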

  11. PLEIADES ABSOLUTE CALIBRATION : INFLIGHT CALIBRATION SITES AND METHODOLOGY

    Directory of Open Access Journals (Sweden)

    S. Lachérade

    2012-07-01

In-flight calibration of space sensors once in orbit is a decisive step in fulfilling the mission objectives. This article presents the methods of in-flight absolute calibration processed during the commissioning phase. Four in-flight calibration methods are used: absolute calibration, cross-calibration with reference sensors such as PARASOL or MERIS, multi-temporal monitoring, and inter-band calibration. These algorithms are based on acquisitions over natural targets such as African deserts, Antarctic sites, La Crau (automatic calibration station) and oceans (calibration over molecular scattering), as well as new extra-terrestrial sites such as the Moon and selected stars. After an overview of the instrument and a description of the calibration sites, we point out how each method addresses one or several aspects of the calibration. We focus on how these methods complement each other in their operational use, and how they help build a coherent set of information that addresses all aspects of in-orbit calibration. Finally, we present the perspectives that the high level of agility of PLEIADES offers for the improvement of its calibration and a better characterization of the calibration sites.

  12. Short-term wind speed prediction using an unscented Kalman filter based state-space support vector regression approach

    International Nuclear Information System (INIS)

    Chen, Kuilin; Yu, Jie

    2014-01-01

Highlights: • A novel hybrid modeling method is proposed for short-term wind speed forecasting. • Support vector regression model is constructed to formulate nonlinear state-space framework. • Unscented Kalman filter is adopted to recursively update states under random uncertainty. • The new SVR–UKF approach is compared to several conventional methods for short-term wind speed prediction. • The proposed method demonstrates higher prediction accuracy and reliability. - Abstract: Accurate wind speed forecasting is becoming increasingly important to improve and optimize renewable wind power generation. Particularly, reliable short-term wind speed prediction can enable model predictive control of wind turbines and real-time optimization of wind farm operation. However, this task remains challenging due to the strong stochastic nature and dynamic uncertainty of wind speed. In this study, an unscented Kalman filter (UKF) is integrated with a support vector regression (SVR) based state-space model in order to precisely update the short-term estimation of the wind speed sequence. In the proposed SVR–UKF approach, support vector regression is first employed to formulate a nonlinear state-space model, and then the unscented Kalman filter is adopted to perform dynamic state estimation recursively on the wind sequence with stochastic uncertainty. The novel SVR–UKF method is compared with artificial neural networks (ANNs), SVR, autoregressive (AR), and autoregressive integrated with Kalman filter (AR-Kalman) approaches for predicting short-term wind speed sequences collected from three sites in Massachusetts, USA. The forecasting results indicate that the proposed method has much better performance in both one-step-ahead and multi-step-ahead wind speed predictions than the other approaches across all the locations.
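As a point of reference, the AR-Kalman baseline that the abstract compares against is easy to sketch: fit an AR(1) transition by least squares, then run a scalar Kalman filter for one-step-ahead prediction. All numbers below (AR coefficients, noise variances `q` and `r`) are invented; the paper's SVR–UKF replaces the linear AR model with a support vector regression state map and the linear filter with an unscented one:

```python
import random

def fit_ar1(series):
    """Least-squares fit of x[t] = a * x[t-1] + c."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

def kalman_one_step(series, a, c, q=0.1, r=0.5):
    """Scalar Kalman filter for x[t] = a*x[t-1] + c + w, z[t] = x[t] + v.

    Returns the one-step-ahead predictions of the series.
    """
    xhat, p = series[0], 1.0
    preds = []
    for z in series[1:]:
        xpred = a * xhat + c          # time update
        ppred = a * a * p + q
        preds.append(xpred)
        k = ppred / (ppred + r)       # measurement update
        xhat = xpred + k * (z - xpred)
        p = (1.0 - k) * ppred
    return preds

# synthetic "wind speed" series following a noisy AR(1) process
random.seed(7)
series = [5.0]
for _ in range(500):
    series.append(0.8 * series[-1] + 1.0 + random.gauss(0.0, 0.3))
a_hat, c_hat = fit_ar1(series)
preds = kalman_one_step(series, a_hat, c_hat)
```

On this linear toy process the AR-Kalman baseline is already near-optimal; the abstract's point is that real wind series are nonlinear enough that a learned SVR state map plus an unscented filter does measurably better.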

  13. Generic System for Remote Testing and Calibration of Measuring Instruments: Security Architecture

    Science.gov (United States)

    Jurčević, M.; Hegeduš, H.; Golub, M.

    2010-01-01

Testing and calibration of laboratory instruments and reference standards is a routine but resource- and time-consuming activity. Since many modern instruments include communication interfaces, it is possible to create a remote calibration system. This approach addresses a wide range of possible applications and makes it possible to drive a number of different devices. On the other hand, the remote calibration process raises a number of security issues with respect to the requirements of standard ISO/IEC 17025, since it is not under the total control of the calibration laboratory personnel who will sign the calibration certificate. This implies that the traceability and integrity of the calibration process depend directly on the collected measurement data. Reliable and secure remote control and monitoring of instruments is therefore a crucial aspect of an internet-enabled calibration procedure.

  14. Effects of gypsum and bulk density on neutron probe calibration curves

    International Nuclear Information System (INIS)

    Arslan, Awadis; Razzouk, A.K.

    1993-10-01

The effects of gypsum and bulk density on the neutron probe calibration curve were studied in the laboratory and in the field. The effect of bulk density was negligible for the soil studied in the laboratory, while it was significant for the field calibration. In the laboratory calibration, the slope of volumetric moisture content versus count ratio increased with increasing gypsum content of the soil. A simple method for correcting the calibration curve for gypsum content was adopted to obtain a specific curve for each layer. The adopted method requires the gypsum fraction to be estimated for each layer and then incorporated in the calibration curve to improve the coefficient of determination. A field calibration showed an improvement in the coefficient of determination when bulk density and gypsum fraction were introduced, in addition to count ratio, with volumetric moisture content as the dependent variable in a multiple linear regression analysis. The same procedure was successful with variable gravel fractions. (author). 18 refs., 3 figs., 2 tabs
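The multiple-linear-regression step described here amounts to adding gypsum fraction and bulk density as extra columns in the design matrix alongside the count ratio. The data below are synthetic, with illustrative coefficients rather than the paper's soils:

```python
import numpy as np

# synthetic calibration data: volumetric moisture theta as a function of
# neutron count ratio (cr), gypsum fraction (g) and bulk density (bd);
# the generating coefficients are invented for the demo
rng = np.random.default_rng(3)
n = 60
cr = rng.uniform(0.2, 1.2, n)        # neutron count ratio
g = rng.uniform(0.0, 0.4, n)         # gypsum mass fraction
bd = rng.uniform(1.1, 1.6, n)        # bulk density, g/cm^3
theta = 0.05 + 0.35 * cr - 0.20 * g + 0.10 * bd + rng.normal(0, 0.01, n)

# multiple linear regression: theta ~ 1 + cr + g + bd
X = np.column_stack([np.ones(n), cr, g, bd])
beta, *_ = np.linalg.lstsq(X, theta, rcond=None)
pred = X @ beta
r2 = 1.0 - np.sum((theta - pred) ** 2) / np.sum((theta - theta.mean()) ** 2)
```

As in the field calibration described above, including the two extra covariates raises the coefficient of determination relative to a count-ratio-only fit whenever they actually influence the response.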

  15. Standards for Standardized Logistic Regression Coefficients

    Science.gov (United States)

    Menard, Scott

    2011-01-01

    Standardized coefficients in logistic regression analysis have the same utility as standardized coefficients in linear regression analysis. Although there has been no consensus on the best way to construct standardized logistic regression coefficients, there is now sufficient evidence to suggest a single best approach to the construction of a…
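One simple construction discussed in this literature is the partially standardized coefficient b* = b·s_x, which puts predictors measured on different scales on a comparable footing (fully standardized variants additionally divide by the standard deviation of the predicted logit). The sketch below fits a one-predictor logistic model by plain gradient descent on synthetic data; all numbers are invented:

```python
import math
import random
import statistics

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Gradient-descent fit of P(y=1) = sigmoid(b0 + b1*x)."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += (p - y) / n
            g1 += (p - y) * x / n
        b0 -= lr * g0
        b1 -= lr * g1
    return b0, b1

# synthetic data: true model P(y=1) = sigmoid(0.5 + 0.8*x), sd(x) = 2
random.seed(2)
xs = [random.gauss(0.0, 2.0) for _ in range(400)]
ys = [1 if random.random() < 1.0 / (1.0 + math.exp(-(0.5 + 0.8 * x))) else 0
      for x in xs]
b0, b1 = fit_logistic(xs, ys)
b1_std = b1 * statistics.pstdev(xs)   # partially standardized coefficient
```

Here the raw coefficient is per unit of x, while `b1_std` is per standard deviation of x, which is the quantity that can be compared across predictors on different scales.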

  16. The New Approach to Camera Calibration – GCPs or TLS Data?

    Directory of Open Access Journals (Sweden)

    J. Markiewicz

    2016-06-01

Camera calibration is one of the basic photogrammetric tasks and is responsible for the quality of processed products. Most calibration is performed with a specially designed test field or during the self-calibration process. The research presented in this paper aims to answer the question of whether it is necessary to use control points designed in the standard way to determine the camera interior orientation parameters. Data from close-range laser scanning can be used as an alternative. The experiments presented in this work demonstrate the potential of laser measurements, since the number of points that may be involved in the calculation is much larger than that of commonly used ground control points. The problem that still remains is the correct and automatic identification of object details in the image taken with the tested camera, as well as in the data set registered with the laser scanner.

  17. Advanced Method of the Elastomagnetic Sensors Calibration

    Directory of Open Access Journals (Sweden)

    Mikulas Prascak

    2004-01-01

The elastomagnetic (EM) method is a highly sensitive non-contact evaluation method for measuring tensile and compressive stress in steel. The latest development of measuring devices and EM sensors has shown that the thermomagnetic phenomenon has a strong influence on the accuracy of EM sensor calibration. To eliminate the influence of this effect, a two-dimensional regression method is presented.

  18. Detection of sensor degradation using K-means clustering and support vector regression in nuclear power plant

    International Nuclear Information System (INIS)

    Seo, Inyong; Ha, Bokam; Lee, Sungwoo; Shin, Changhoon; Lee, Jaeyong; Kim, Seongjun

    2011-01-01

In a nuclear power plant (NPP), periodic sensor calibrations are required to ensure that sensors are operating correctly. However, only a few of the calibrated sensors are typically found to be faulty and in need of rectification. For the safe operation of an NPP and the reduction of unnecessary calibrations, on-line calibration monitoring is needed. In this study, an on-line calibration monitoring model called KPCSVR, using k-means clustering and principal-component-based auto-associative support vector regression (PCSVR), is proposed for nuclear power plants. To reduce the training time of the model, the k-means clustering method was used. Response surface methodology is employed to efficiently determine the optimal values of the support vector regression hyperparameters. The proposed KPCSVR model was validated with actual plant data from Kori Nuclear Power Plant Unit 3, measured from the primary and secondary systems of the plant, and compared with the PCSVR model. By using data clustering, the average accuracy of PCSVR improved from 1.228×10⁻⁴ to 0.472×10⁻⁴ and the average sensitivity of PCSVR from 0.0930 to 0.0909, which results in good detection of sensor drift. Moreover, the training time is greatly reduced, from 123.5 to 31.5 s. (author)

  19. Multi-objective calibration of a reservoir model: aggregation and non-dominated sorting approaches

    Science.gov (United States)

    Huang, Y.

    2012-12-01

Numerical reservoir models can be helpful tools for water resource management. These models are generally calibrated against historical measurement data made in reservoirs. In this study, two methods are proposed for the multi-objective calibration of such models: aggregation and non-dominated sorting methods. Both methods use a hybrid genetic algorithm as an optimization engine and differ in fitness assignment. In the aggregation method, a weighted sum of scaled simulation errors is designed as an overall objective function to measure the fitness of solutions (i.e. parameter values). The contribution of this study to the aggregation method is the correlation analysis and its implication for the choice of weight factors. In the non-dominated sorting method, a novel method based on non-dominated sorting and the method of minimal distance is used to calculate the dummy fitness of solutions. The proposed methods are illustrated using a water quality model that was set up to simulate the water quality of Pepacton Reservoir, which is located to the north of New York City and is used for the city's water supply. The study also compares the aggregation and the non-dominated sorting methods. The purpose of this comparison is not to evaluate the pros and cons of the two methods but to determine whether the parameter values, objective function values (simulation errors) and simulated results obtained differ significantly from each other. The final results (objective function values) from the two methods are good compromises between all objective functions, and none of these results is the worst for any objective function. The calibrated model provides an overall good performance, and the simulated results with the calibrated parameter values match the observed data better than those with the un-calibrated parameters, which supports and justifies the use of multi-objective calibration. The results achieved in this study can be very useful for the calibration of water
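The aggregation side of such a multi-objective calibration reduces to scaling each simulation error and summing with weights; the non-dominated sorting alternative instead ranks solutions by Pareto dominance. A minimal sketch of the aggregation method, with invented objective values, scales, and weights (the labels mimic a water quality model but are hypothetical):

```python
def aggregate_fitness(errors, scales, weights):
    """Weighted sum of scaled simulation errors; smaller is better."""
    return sum(w * e / s for w, e, s in zip(weights, errors, scales))

# candidate parameter sets with (temperature, DO, chlorophyll-a) errors
candidates = {
    "theta_a": (0.9, 1.8, 0.30),
    "theta_b": (0.5, 2.6, 0.25),
    "theta_c": (0.7, 1.5, 0.40),
}
scales = (1.0, 2.0, 0.5)        # typical magnitude of each error
weights = (0.4, 0.4, 0.2)       # could be informed by correlation analysis
best = min(candidates,
           key=lambda k: aggregate_fitness(candidates[k], scales, weights))
```

Scaling each error by a typical magnitude before weighting keeps one objective with large raw units from dominating the sum, which is why the weight choice (and the correlation analysis informing it) matters.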

  20. Streamflow characteristics from modelled runoff time series: Importance of calibration criteria selection

    Science.gov (United States)

    Poole, Sandra; Vis, Marc; Knight, Rodney; Seibert, Jan

    2017-01-01

    Ecologically relevant streamflow characteristics (SFCs) of ungauged catchments are often estimated from simulated runoff of hydrologic models that were originally calibrated on gauged catchments. However, SFC estimates of the gauged donor catchments and subsequently the ungauged catchments can be substantially uncertain when models are calibrated using traditional approaches based on optimization of statistical performance metrics (e.g., Nash–Sutcliffe model efficiency). An improved calibration strategy for gauged catchments is therefore crucial to help reduce the uncertainties of estimated SFCs for ungauged catchments. The aim of this study was to improve SFC estimates from modeled runoff time series in gauged catchments by explicitly including one or several SFCs in the calibration process. Different types of objective functions were defined consisting of the Nash–Sutcliffe model efficiency, single SFCs, or combinations thereof. We calibrated a bucket-type runoff model (HBV – Hydrologiska Byråns Vattenavdelning – model) for 25 catchments in the Tennessee River basin and evaluated the proposed calibration approach on 13 ecologically relevant SFCs representing major flow regime components and different flow conditions. While the model generally tended to underestimate the tested SFCs related to mean and high-flow conditions, SFCs related to low flow were generally overestimated. The highest estimation accuracies were achieved by a SFC-specific model calibration. Estimates of SFCs not included in the calibration process were of similar quality when comparing a multi-SFC calibration approach to a traditional model efficiency calibration. For practical applications, this implies that SFCs should preferably be estimated from targeted runoff model calibration, and modeled estimates need to be carefully interpreted.

  1. Experiments on mobile robot stereo vision system calibration under hardware imperfection

    Directory of Open Access Journals (Sweden)

    Safin Ramil

    2018-01-01

Calibration is essential for any robot vision system to achieve high accuracy in deriving metric information about objects. One typical requirement for a stereo vision system, in order to obtain better calibration results, is to guarantee that both cameras are kept at the same vertical level. However, cameras may be displaced due to the severe operating conditions of a robot or other circumstances. This paper presents our experimental approach to the problem of mobile robot stereo vision system calibration under hardware imperfection. In our experiments, we used the crawler-type mobile robot «Servosila Engineer». The stereo system cameras of the robot were displaced relative to each other, causing loss of surrounding environment information. We implemented and verified checkerboard- and circle-grid-based calibration methods. The comparison of the two methods demonstrated that circle-grid-based calibration should be preferred over the classical checkerboard calibration approach.

  2. Tests of Local Hadron Calibration Approaches in ATLAS Combined Beam Tests

    International Nuclear Information System (INIS)

    Grahn, Karl-Johan; Kiryunin, Andrey; Pospelov, Guennadi

    2011-01-01

Three ATLAS calorimeters in the region of the forward crack at |η| ≈ 3.2 in the nominal ATLAS setup, and a typical section of the two barrel calorimeters at |η| = 0.45 of ATLAS, have been exposed to combined beam tests with single electrons and pions. Detailed shower shape studies of electrons and pions, with comparisons to various Geant4-based simulations utilizing different physics lists, are presented for the endcap beam test. The local hadron calibration approach as used in the full ATLAS setup has been applied to the endcap beam test data. An extension of it using layer correlations has been tested with the barrel test beam data. Both methods utilize modular correction steps based on shower shape variables to correct for invisible energy inside the reconstructed clusters in the calorimeters (compensation) and for lost energy deposits outside of the reconstructed clusters (dead material and out-of-cluster deposits). Results for both methods and comparisons to Monte Carlo simulations are presented.

  3. Pathological assessment of liver fibrosis regression

    Directory of Open Access Journals (Sweden)

    WANG Bingqiong

    2017-03-01

Hepatic fibrosis is the common pathological outcome of chronic hepatic diseases. An accurate assessment of the degree of fibrosis provides an important reference for a definite diagnosis of diseases, treatment decision-making, treatment outcome monitoring, and prognostic evaluation. At present, many clinical studies have proven that regression of hepatic fibrosis and early-stage liver cirrhosis can be achieved by effective treatment, and a correct evaluation of fibrosis regression has become a hot topic in clinical research. Liver biopsy has long been regarded as the gold standard for the assessment of hepatic fibrosis, and thus it plays an important role in the evaluation of fibrosis regression. This article reviews the clinical application of current pathological staging systems in the evaluation of fibrosis regression from the perspectives of semi-quantitative scoring systems, quantitative approaches, and qualitative approaches, in order to propose a better pathological evaluation system for the assessment of fibrosis regression.

  4. Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers

    Directory of Open Access Journals (Sweden)

    Zheng You

    2013-04-01

The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker, and can intuitively and systematically build an error model. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. In summary, the error analysis approach and the calibration method are shown to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.

  5. Optical system error analysis and calibration method of high-accuracy star trackers.

    Science.gov (United States)

    Sun, Ting; Xing, Fei; You, Zheng

    2013-04-08

The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker, and can intuitively and systematically build an error model. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. In summary, the error analysis approach and the calibration method are shown to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.

  6. Evaluation of accuracy of linear regression models in predicting urban stormwater discharge characteristics.

    Science.gov (United States)

    Madarang, Krish J; Kang, Joo-Hyon

    2014-06-01

Stormwater runoff has been identified as a source of pollution for the environment, especially for receiving waters. In order to quantify and manage the impacts of stormwater runoff on the environment, predictive models and mathematical models have been developed. Predictive tools such as regression models have been widely used to predict stormwater discharge characteristics. Storm event characteristics, such as antecedent dry days (ADD), have been related to response variables, such as pollutant loads and concentrations. However, whether ADD should be considered an important variable in predicting stormwater discharge characteristics has been a controversial issue across many studies. In this study, we examined the accuracy of general linear regression models in predicting the discharge characteristics of roadway runoff. A total of 17 storm events were monitored in two highway segments located in Gwangju, Korea. Data from the monitoring were used to calibrate the United States Environmental Protection Agency's Storm Water Management Model (SWMM). The calibrated SWMM was simulated for 55 storm events, and the results for total suspended solids (TSS) discharge loads and event mean concentrations (EMCs) were extracted. From these data, linear regression models were developed. The R² and p-values of the regression on ADD for both TSS loads and EMCs were investigated. Results showed that pollutant loads were better predicted than pollutant EMCs in the multiple regression models. Regression may not capture the true effect of site-specific characteristics, due to uncertainty in the data. Copyright © 2014 The Research Centre for Eco-Environmental Sciences, Chinese Academy of Sciences. Published by Elsevier B.V. All rights reserved.
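The regression step described in this abstract is ordinary least squares of a discharge characteristic on a storm predictor. The sketch below regresses synthetic TSS loads on antecedent dry days (ADD); the data are invented, not the Gwangju monitoring results:

```python
def simple_ols(x, y):
    """Ordinary least squares y = a + b*x; returns (a, b, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

add = [1, 2, 3, 5, 7, 10, 14, 21]                        # antecedent dry days
tss = [12.0, 15.1, 17.8, 24.2, 30.9, 39.7, 52.4, 74.6]   # kg per event
a, b, r2 = simple_ols(add, tss)
```

The slope `b` plays the role of the ADD effect and `r2` the goodness of fit the study inspects; with real monitoring data, site-specific scatter would push `r2` well below what this clean synthetic series gives.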

  7. A flexible approach to light pen calibration for a monocular-vision-based coordinate measuring system

    International Nuclear Information System (INIS)

    Fu, Shuai; Zhang, Liyan; Ye, Nan; Liu, Shenglan; Zhang, WeiZhong

    2014-01-01

    A monocular-vision-based coordinate measuring system (MVB-CMS) obtains the 3D coordinates of the probe tip center of a light pen by analyzing the monocular image of the target points on the light pen. The light pen calibration, including the target point calibration and the probe tip center calibration, is critical to guarantee the accuracy of the MVB-CMS. The currently used method resorts to special equipment to calibrate the feature points on the light pen in a separate offsite procedure and uses the system camera to calibrate the probe tip center onsite. Instead, a complete onsite light pen calibration method is proposed in this paper. It needs only several auxiliary target points with the same visual features of the light pen targets and two or more cone holes with known distance(s). The target point calibration and the probe tip center calibration are jointly implemented by simply taking two groups of images of the light pen with the camera of the system. The proposed method requires no extra equipment other than the system camera for the calibration, so it is easier to implement and flexible for use. It has been incorporated in a large field-of-view MVB-CMS, which uses active luminous infrared LEDs as the target points. Experimental results demonstrate the accuracy and effectiveness of the proposed method. (paper)

  8. Beyond discrimination: A comparison of calibration methods and clinical usefulness of predictive models of readmission risk.

    Science.gov (United States)

    Walsh, Colin G; Sharman, Kavya; Hripcsak, George

    2017-12-01

    Prior to implementing predictive models in novel settings, analyses of calibration and clinical usefulness remain as important as discrimination, but they are not frequently discussed. Calibration is a model's reflection of actual outcome prevalence in its predictions. Clinical usefulness refers to the utilities, costs, and harms of using a predictive model in practice. A decision analytic approach to calibrating and selecting an optimal intervention threshold may help maximize the impact of readmission risk and other preventive interventions. To select a pragmatic means of calibrating predictive models that requires a minimum amount of validation data and that performs well in practice. To evaluate the impact of miscalibration on utility and cost via clinical usefulness analyses. Observational, retrospective cohort study with electronic health record data from 120,000 inpatient admissions at an urban, academic center in Manhattan. The primary outcome was thirty-day readmission for three causes: all-cause, congestive heart failure, and chronic coronary atherosclerotic disease. Predictive modeling was performed via L1-regularized logistic regression. Calibration methods were compared including Platt Scaling, Logistic Calibration, and Prevalence Adjustment. Performance of predictive modeling and calibration was assessed via discrimination (c-statistic), calibration (Spiegelhalter Z-statistic, Root Mean Square Error [RMSE] of binned predictions, Sanders and Murphy Resolutions of the Brier Score, Calibration Slope and Intercept), and clinical usefulness (utility terms represented as costs). The amount of validation data necessary to apply each calibration algorithm was also assessed. C-statistics by diagnosis ranged from 0.7 for all-cause readmission to 0.86 (0.78-0.93) for congestive heart failure. Logistic Calibration and Platt Scaling performed best and this difference required analyzing multiple metrics of calibration simultaneously, in particular Calibration
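Platt Scaling, one of the calibration methods compared in this abstract, fits a two-parameter sigmoid p = σ(a·s + b) to a model's raw scores by maximizing the log-likelihood on validation data. The gradient-descent sketch below uses invented scores whose true calibration map is σ(0.5·s), i.e. the raw model is overconfident by a factor of two:

```python
import math
import random

def platt_scale(scores, labels, lr=0.5, epochs=3000):
    """Fit p = sigmoid(a*s + b) to (score, label) pairs by log-loss descent."""
    a, b = 1.0, 0.0
    n = len(scores)
    for _ in range(epochs):
        ga = gb = 0.0
        for s, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(-(a * s + b)))
            ga += (p - y) * s / n
            gb += (p - y) / n
        a -= lr * ga
        b -= lr * gb
    return a, b

# overconfident raw logits: true P(y=1|s) = sigmoid(0.5*s)
random.seed(4)
scores = [random.uniform(-4.0, 4.0) for _ in range(500)]
labels = [1 if random.random() < 1.0 / (1.0 + math.exp(-0.5 * s)) else 0
          for s in scores]
a, b = platt_scale(scores, labels)
calibrated = [1.0 / (1.0 + math.exp(-(a * s + b))) for s in scores]
```

The fitted slope near 0.5 shrinks the overconfident scores back toward the outcome prevalence they actually predict, which is exactly the miscalibration that discrimination metrics such as the c-statistic cannot see.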

  9. Comparison of process estimation techniques for on-line calibration monitoring

    International Nuclear Information System (INIS)

    Shumaker, B. D.; Hashemian, H. M.; Morton, G. W.

    2006-01-01

    The goal of on-line calibration monitoring is to reduce the number of unnecessary calibrations performed each refueling cycle on pressure, level, and flow transmitters in nuclear power plants. The effort requires a baseline for determining calibration drift and thereby the need for a calibration. There are two ways to establish the baseline: averaging and modeling. Averaging techniques have proven highly successful in applications where there is a large number of redundant transmitters; but for systems with little or no redundancy, averaging methods are not always reliable. That is, for non-redundant transmitters, more sophisticated process estimation techniques are needed to augment or replace the averaging techniques. This paper explores three well-known process estimation techniques, namely Independent Component Analysis (ICA), Auto-Associative Neural Networks (AANN), and Auto-Associative Kernel Regression (AAKR). Using experience and data from an operating nuclear plant, the paper presents an evaluation of the effectiveness of these methods in detecting transmitter drift under actual plant conditions. (authors)

  10. State of the art: two-phase flow calibration techniques

    International Nuclear Information System (INIS)

    Stanley, M.L.

    1977-01-01

    The nuclear community faces a particularly difficult problem relating to the calibration of instrumentation in a two-phase steam/water flow environment. The rationale of the approach to water reactor safety questions in the United States demands that accurate measurements of mass flows in a decompressing two-phase flow be made. An accurate measurement dictates an accurate calibration. This paper addresses three questions relating to the state of the art in two-phase calibration: (1) What do we mean by calibration? (2) What is done now? (3) What should be done?

  11. Quality control of on-line calibration in computerized assessment

    NARCIS (Netherlands)

    Glas, Cornelis A.W.

    1998-01-01

    In computerized adaptive testing, updating parameter estimates using adaptive testing data is often called online calibration. This paper studies how to evaluate whether the item response model used for online calibration fits the adaptive testing data sufficiently well. Three approaches are

  12. A Practical pedestrian approach to parsimonious regression with inaccurate inputs

    Directory of Open Access Journals (Sweden)

    Seppo Karrila

    2014-04-01

    Full Text Available A measurement result often dictates an interval containing the correct value. Interval data are also created by roundoff, truncation, and binning. We focus on such common interval uncertainty in data. Inaccuracy in model inputs is typically ignored in model fitting. We provide a practical approach for regression with inaccurate data: the mathematics is easy, and the linear programming formulations are simple to use, even in a spreadsheet. This self-contained elementary presentation introduces interval linear systems and requires only basic knowledge of algebra. Feature selection is automatic, but can be controlled to find only the few most relevant inputs, and joint feature selection is enabled for multiple modeled outputs. With more features than cases, a novel connection to compressed sensing emerges: robustness against interval errors-in-variables implies model parsimony, and the input inaccuracies determine the regularization term. A small numerical example highlights counterintuitive results and a dramatic difference from total least squares.
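The flavor of interval regression described above can be sketched as a linear program: minimize the L1 norm of the coefficients subject to every prediction landing inside its target interval. This is a minimal illustration assuming `scipy`; the two-feature toy data (one relevant, one irrelevant feature) are invented, and the exact LP in the paper may differ.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: the target is known only as an interval around 2*x0;
# the second feature is irrelevant and should be dropped automatically.
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 2))
lo = 2.0 * X[:, 0] - 0.5   # lower interval bounds
hi = 2.0 * X[:, 0] + 0.5   # upper interval bounds

# Minimize ||w||_1 subject to lo <= X w <= hi, with w = wp - wn, wp, wn >= 0.
n = X.shape[1]
c = np.ones(2 * n)                       # objective: sum(wp) + sum(wn) = ||w||_1
A_ub = np.vstack([np.hstack([X, -X]),    #  X w <= hi
                  np.hstack([-X, X])])   # -X w <= -lo
b_ub = np.concatenate([hi, -lo])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (2 * n))
w = res.x[:n] - res.x[n:]
```

The parsimony claim shows up directly: the irrelevant coefficient comes out near zero because any nonzero value costs L1 objective without being needed to satisfy the interval constraints.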

  13. Cross calibration of the Landsat-7 ETM+ and EO-1 ALI sensor

    Science.gov (United States)

    Chander, G.; Meyer, D.J.; Helder, D.L.

    2004-01-01

    As part of the Earth Observer 1 (EO-1) Mission, the Advanced Land Imager (ALI) demonstrates a potential technological direction for Landsat Data Continuity Missions. To evaluate ALI's capabilities in this role, a cross-calibration methodology has been developed using image pairs from the Landsat-7 (L7) Enhanced Thematic Mapper Plus (ETM+) and EO-1 (ALI) to verify the radiometric calibration of ALI with respect to the well-calibrated L7 ETM+ sensor. Results have been obtained using two different approaches. The first approach involves calibration of nearly simultaneous surface observations based on image statistics from areas observed simultaneously by the two sensors. The second approach uses vicarious calibration techniques to compare the predicted top-of-atmosphere radiance derived from ground reference data collected during the overpass to the measured radiance obtained from the sensor. The results indicate that the relative sensor chip assembly gains agree with the ETM+ visible and near-infrared bands to within 2% and with the shortwave infrared bands to within 4%.

  14. Regression modeling methods, theory, and computation with SAS

    CERN Document Server

    Panik, Michael

    2009-01-01

    Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs. The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,

  15. Calibration of the DSCOVR EPIC Visible and NIR Channels using MODIS Terra and Aqua Data and EPIC Lunar Observations

    Science.gov (United States)

    Geogdzhayev, Igor V.; Marshak, Alexander

    2018-01-01

    The unique position of the Deep Space Climate Observatory (DSCOVR) Earth Polychromatic Imaging Camera (EPIC) at the Lagrange 1 point makes it an important addition to the data from currently operating low Earth orbit observing instruments. The EPIC instrument does not have an onboard calibration facility. One approach to its calibration is to compare EPIC observations to the measurements from polar-orbiting radiometers. The Moderate Resolution Imaging Spectroradiometer (MODIS) is a natural choice for such comparison due to its well-established calibration record and wide use in remote sensing. We use MODIS Aqua and Terra L1B 1 km reflectances to infer calibration coefficients for four EPIC visible and NIR channels: 443, 551, 680 and 780 nm. MODIS and EPIC measurements made between June 2015 and 2016 are employed for comparison. We first identify favorable MODIS pixels with scattering angle matching temporally collocated EPIC observations. Each EPIC pixel is then spatially collocated to a subset of the favorable MODIS pixels within a 25 km radius. The standard deviation of the selected MODIS pixels, as well as of the adjacent EPIC pixels, is used to find the most homogeneous scenes. These scenes are then used to determine calibration coefficients using a linear regression between EPIC counts/second and reflectances in the closest MODIS spectral channels. We present the inferred EPIC calibration coefficients and discuss sources of uncertainty. The lunar EPIC observations are used to calibrate the EPIC O2 absorbing channels (688 and 764 nm), assuming that there is only a small difference between lunar reflectances at wavelengths separated by approximately 10 nm, provided the calibration factors of the red (680 nm) and near-IR (780 nm) channels are known from the comparison between EPIC and MODIS.
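The final regression step described above, fitting reflectance against counts/second for homogeneous scenes, reduces to an ordinary linear fit. A minimal numpy sketch follows; the count and reflectance values are invented stand-ins, not EPIC or MODIS data.

```python
import numpy as np

# Hypothetical homogeneous-scene pairs: EPIC raw counts/second vs. collocated
# MODIS reflectance in a nearby spectral band (all numbers illustrative).
epic_counts = np.array([120., 250., 380., 505., 640., 770.])
modis_refl  = np.array([0.10, 0.21, 0.32, 0.42, 0.53, 0.64])

# Linear regression reflectance = gain * counts + offset yields the
# calibration coefficients for that channel.
gain, offset = np.polyfit(epic_counts, modis_refl, 1)
predicted = gain * epic_counts + offset
rmse = np.sqrt(np.mean((predicted - modis_refl) ** 2))
```

The residual RMSE of such a fit is one diagnostic for the scene-homogeneity screening: poorly matched or inhomogeneous scenes inflate the scatter around the regression line.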

  16. CryoSat SIRAL Calibration and Performance

    Science.gov (United States)

    Fornari, Marco; Scagliola, Michele; Tagliani, Nicolas; Parrinello, Tommaso

    2013-04-01

    The main payload of CryoSat is a Ku-band pulse-width-limited radar altimeter, called SIRAL (Synthetic Interferometric Radar Altimeter), that transmits pulses at a high pulse repetition frequency, making the received echoes phase coherent and suitable for azimuth processing. This makes it possible to reach an along-track resolution of about 250 meters, a significant improvement over traditional pulse-width-limited altimeters. Because SIRAL is a phase-coherent pulse-width-limited radar altimeter, a proper calibration approach has been developed, including both internal and external calibration. The internal calibration monitors the instrument impulse response and the transfer function, as in traditional altimeters. In addition, the interferometer requires a special calibration developed ad hoc for SIRAL. The external calibration is performed with a ground transponder, located in Svalbard, which receives the SIRAL signal and sends the echo back to the satellite. Internal calibration data are processed on the ground by the CryoSat Instrument Processing Facility (IPF1) and then applied to the science data. By April 2013, almost 3 years of calibration data will be available, which will be shown in this poster. The external calibration (transponder) data are processed and analyzed independently from the operational chain. The use of an external transponder has been very useful for determining instrument performance and for tuning the on-ground processor. This poster presents the transponder results in terms of range noise and datation error.

  17. Borehole-calibration methods used in cased and uncased test holes to determine moisture profiles in the unsaturated zone, Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Hammermeister, D.P.; Kneiblher, C.R.; Klenke, J.

    1985-01-01

    The use of drilling and coring methods that minimize the disturbance of formation rock and core has permitted field calibration of neutron-moisture tools in relatively large diameter cased and uncased boreholes at Yucca Mountain, Nevada. For 5.5-inch diameter cased holes, there was reasonable agreement between a field calibration in alluvium-colluvium and a laboratory calibration in a chamber containing silica sand. There was little difference between moisture-content profiles obtained in a neutron-access hole with a hand-held neutron-moisture meter and an automated borehole-logging tool using laboratory-generated calibration curves. Field calibrations utilizing linear regression analyses and as many as 119 data pairs show a good correlation between neutron-moisture counts and volumetric water content for sections of uncased 6-inch diameter boreholes in nonwelded and bedded tuff. Regression coefficients ranged from 0.80 to 0.94. There were only small differences between calibration curves in 4.25- and 6-inch uncased sections of boreholes. Results of analyzing field calibration data to determine the effects of formation density on calibration curves were inconclusive. Further experimental and theoretical work is outlined.
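The calibration-curve fitting described above, regressing volumetric water content on neutron counts and reporting a correlation coefficient, can be sketched as follows. The count/water-content pairs are invented illustrations, not the Yucca Mountain data.

```python
import numpy as np

# Illustrative field-calibration pairs: normalized neutron count ratio vs.
# volumetric water content (hypothetical values).
counts = np.array([0.35, 0.48, 0.55, 0.62, 0.74, 0.81, 0.90])
theta  = np.array([0.05, 0.09, 0.12, 0.14, 0.19, 0.21, 0.25])

# Linear calibration curve theta = slope * counts + intercept.
slope, intercept = np.polyfit(counts, theta, 1)
# Correlation coefficient of the fit (the abstract's "regression coefficient").
r = np.corrcoef(counts, theta)[0, 1]
```

The slope of such a curve is the soil-dependent quantity the "Calibration" record later in this listing warns can differ between soils by as much as 40%.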

  18. Fermentation process tracking through enhanced spectral calibration modeling.

    Science.gov (United States)

    Triadaphillou, Sophia; Martin, Elaine; Montague, Gary; Norden, Alison; Jeffkins, Paul; Stimpson, Sarah

    2007-06-15

    The FDA process analytical technology (PAT) initiative will materialize in a significant increase in the number of installations of spectroscopic instrumentation. However, to attain the greatest benefit from the data generated, there is a need for calibration procedures that extract the maximum information content. For example, in fermentation processes, the interpretation of the resulting spectra is challenging as a consequence of the large number of wavelengths recorded, the underlying correlation structure that is evident between the wavelengths and the impact of the measurement environment. Approaches to the development of calibration models have been based on the application of partial least squares (PLS) either to the full spectral signature or to a subset of wavelengths. This paper presents a new approach to calibration modeling that combines a wavelength selection procedure, spectral window selection (SWS), where windows of wavelengths are automatically selected which are subsequently used as the basis of the calibration model. However, due to the non-uniqueness of the windows selected when the algorithm is executed repeatedly, multiple models are constructed and these are then combined using stacking thereby increasing the robustness of the final calibration model. The methodology is applied to data generated during the monitoring of broth concentrations in an industrial fermentation process from on-line near-infrared (NIR) and mid-infrared (MIR) spectrometers. It is shown that the proposed calibration modeling procedure outperforms traditional calibration procedures, as well as enabling the identification of the critical regions of the spectra with regard to the fermentation process.
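The window-selection-plus-stacking idea above can be sketched with numpy: fit one least-squares calibration model per window of wavelengths, then combine the window models' predictions with stacked regression weights. The synthetic "spectra", window size, and data split are invented for illustration, and the paper's actual SWS algorithm and PLS models are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 80, 200                      # samples, wavelengths
X = rng.normal(size=(n, p))
# Concentration depends on two spectral regions plus measurement noise.
y = (X[:, 20:30].sum(axis=1) + 0.5 * X[:, 150:160].sum(axis=1)
     + 0.1 * rng.normal(size=n))

train, hold = slice(0, 60), slice(60, 80)
windows = [slice(w, w + 20) for w in range(0, p, 20)]

def fit_window(Xw, yw):
    # Ordinary least squares with an intercept column.
    A = np.column_stack([Xw, np.ones(len(Xw))])
    coef, *_ = np.linalg.lstsq(A, yw, rcond=None)
    return coef

models = [fit_window(X[train, w], y[train]) for w in windows]
preds = np.column_stack(
    [X[hold, w] @ m[:-1] + m[-1] for w, m in zip(windows, models)]
)
# Stacking: regress the held-out response on the window models' predictions.
# (A real application would reserve separate data for this combination step.)
weights, *_ = np.linalg.lstsq(preds, y[hold], rcond=None)
stacked = preds @ weights
```

Windows that carry no signal receive small stacking weights, which is the robustness mechanism the abstract attributes to combining multiple non-unique window selections.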

  19. Calibration

    International Nuclear Information System (INIS)

    Greacen, E.L.; Correll, R.L.; Cunningham, R.B.; Johns, G.G.; Nicolls, K.D.

    1981-01-01

    Procedures common to different methods of calibration of neutron moisture meters are outlined, and laboratory and field calibration methods are compared. Gross errors which arise from faulty calibration techniques are described. The count rate can be affected by the dry bulk density of the soil, the volumetric content of constitutional hydrogen, and other chemical components of the soil and soil solution. Calibration is further complicated by the fact that the neutron meter responds more strongly to the soil properties close to the detector and source. The differences in slope of calibration curves for different soils can be as much as 40%.

  20. Combining engineering and data-driven approaches: Development of a generic fire risk model facilitating calibration

    DEFF Research Database (Denmark)

    De Sanctis, G.; Fischer, K.; Kohler, J.

    2014-01-01

    Fire risk models support decision making for engineering problems under the consistent consideration of the associated uncertainties. Empirical approaches can be used for cost-benefit studies when enough data about the decision problem are available, but often the empirical approaches are not detailed enough. Engineering risk models, on the other hand, may be detailed but typically involve assumptions that may result in a biased risk assessment and make a cost-benefit study problematic. In two related papers it is shown how engineering and data-driven modeling can be combined by developing a generic risk model that is calibrated to observed fire loss data. Generic risk models assess the risk of buildings based on specific risk indicators and support risk assessment at a portfolio level. After an introduction to the principles of generic risk assessment, the focus of the present paper...

  1. An Introduction to the Hybrid Approach of Neural Networks and the Linear Regression Model : An Illustration in the Hedonic Pricing Model of Building Costs

    OpenAIRE

    浅野, 美代子; マーコ, ユー K.W.

    2007-01-01

    This paper introduces the hybrid approach of neural networks and the linear regression model proposed by Asano and Tsubaki (2003). Neural networks are often credited with superiority in data consistency, whereas the linear regression model provides a simple interpretation of the data, enabling researchers to verify their hypotheses. The hybrid approach aims at combining the strengths of these two well-established statistical methods. A step-by-step procedure for performing the hybrid approach is pr...

  2. QUANTITATIVE ELECTRONIC STRUCTURE - ACTIVITY RELATIONSHIP OF ANTIMALARIAL COMPOUND OF ARTEMISININ DERIVATIVES USING PRINCIPAL COMPONENT REGRESSION APPROACH

    Directory of Open Access Journals (Sweden)

    Paul Robert Martin Werfette

    2010-06-01

    Full Text Available Analysis of the quantitative structure-activity relationship (QSAR) for a series of antimalarial artemisinin derivatives has been done using principal component regression. The descriptors for the QSAR study were representations of electronic structure, i.e. atomic net charges of the artemisinin skeleton calculated by the AM1 semi-empirical method. The antimalarial activity of the compounds was expressed as log 1/IC50, which is an experimental datum. The main purpose of the principal component analysis approach is to transform a large data set of atomic net charges into a simpler data set known as latent variables. The best QSAR equation for log 1/IC50 is obtained from the regression method as a linear function of several latent variables, i.e. x1, x2, x3, x4 and x5. Keywords: QSAR, antimalarial, artemisinin, principal component regression
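Principal component regression as used above can be sketched with numpy alone: project the (correlated) descriptors onto their leading principal components, then regress the activity on those scores. The synthetic descriptor matrix stands in for the atomic net charges; none of the numbers come from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in: 40 compounds, 12 correlated descriptors driven by
# 5 underlying factors, and an activity built from those factors.
n, p, k = 40, 12, 5
latent = rng.normal(size=(n, k))
X = latent @ rng.normal(size=(k, p)) + 0.01 * rng.normal(size=(n, p))
y = latent @ rng.normal(size=k) + 0.05 * rng.normal(size=n)

# PCR: center the descriptors, take the first k principal components as the
# latent variables x1..x5, then regress y on those scores plus an intercept.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:k].T
A = np.column_stack([scores, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef

ss_res = np.sum((y - pred) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```

Because the descriptors are nearly rank-5, the five component scores capture essentially all the predictive information, which is the dimensionality-reduction rationale in the abstract.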

  3. Partitioning of late gestation energy expenditure in ewes using indirect calorimetry and a linear regression approach

    DEFF Research Database (Denmark)

    Kiani, Alishir; Chwalibog, André; Nielsen, Mette O

    2007-01-01

    Late gestation energy expenditure (EE(gest)) originates from energy expenditure (EE) of development of the conceptus (EE(conceptus)) and EE of homeorhetic adaptation of metabolism (EE(homeorhetic)). Even though EE(gest) is relatively easy to quantify, its partitioning is problematic. In the present study metabolizable energy (ME) intake ranges for twin-bearing ewes were 220-440, 350-700, and 350-900 kJ per metabolic body weight (W0.75) at weeks seven, five, and two pre-partum, respectively. Indirect calorimetry and a linear regression approach were used to quantify EE(gest) and then partition it into EE(conceptus) and EE(homeorhetic). Energy expenditure of basal metabolism of the non-gravid tissues (EE(bmng)), derived from the intercept of the linear regression equation of retained energy [kJ/W0.75] on ME intake [kJ/W0.75], was 298 [kJ/W0.75]. Values of the intercepts of the regression equations at week seven...
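The intercept-based step above, regressing retained energy on ME intake and reading basal metabolism off the (negated) intercept, can be sketched as follows. The intake/retained-energy pairs and the noise are invented; only the 298 kJ/W0.75 figure is taken from the abstract as the target of the illustration.

```python
import numpy as np

# Illustrative regression of retained energy (RE) on ME intake, both in
# kJ per metabolic body weight W^0.75 (numbers invented for the sketch).
me = np.array([220., 300., 380., 440., 520., 600., 700.])
re = 0.65 * me - 298.0 + np.array([5., -8., 3., -2., 6., -4., 1.])

# RE = slope * ME + intercept; at zero ME intake, RE = -EE of basal
# metabolism of the non-gravid tissues, so EE(bmng) = -intercept.
slope, intercept = np.polyfit(me, re, 1)
ee_bmng = -intercept
```

The slope of the same line is the partial efficiency of ME utilization, which is why a single regression yields both quantities.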

  4. Temperature corrected-calibration of GRACE's accelerometer

    Science.gov (United States)

    Encarnacao, J.; Save, H.; Siemes, C.; Doornbos, E.; Tapley, B. D.

    2017-12-01

    Since April 2011, the thermal control of the accelerometers on board the GRACE satellites has been turned off. The time series of the along-track bias clearly shows a drastic change in the behaviour of this parameter, while the calibration model has remained unchanged throughout the entire mission lifetime. In an effort to improve the quality of the gravity field models produced at CSR in a future mission-long re-processing of GRACE data, we quantify the added value of different calibration strategies. In one approach, the temperature effects that distort the raw accelerometer measurements collected without thermal control are corrected using the housekeeping temperature readings. In this way, one single calibration strategy can be applied consistently during the whole mission lifetime, since it is valid for the thermal conditions both before and after April 2011. Finally, we illustrate that the resulting calibrated accelerations are suitable for neutral thermospheric density studies.
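A temperature correction of the kind described above can be sketched as a regression of the bias against the housekeeping temperature, with the fitted temperature-dependent part then removed. The temperature series, bias magnitudes, and sensitivity coefficient below are synthetic illustrations, not GRACE values.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic along-track bias with a temperature-driven drift.
temp = 20.0 + 5.0 * np.sin(np.linspace(0, 6, 200)) + rng.normal(0, 0.2, 200)
bias = 1.2e-6 + 3.0e-8 * (temp - 20.0) + rng.normal(0, 1e-9, 200)

# Fit bias = b0 + k * (T - T_ref), then subtract the temperature term.
k, b0 = np.polyfit(temp - 20.0, bias, 1)
corrected = bias - k * (temp - 20.0)
```

After the correction the bias series retains only its temperature-independent part, which is what allows one calibration model to serve both the thermally controlled and uncontrolled mission phases.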

  5. Bootstrapped neural nets versus regression kriging in the digital mapping of pedological attributes: the automatic and time-consuming perspectives

    Science.gov (United States)

    Langella, Giuliano; Basile, Angelo; Bonfante, Antonello; Manna, Piero; Terribile, Fabio

    2013-04-01

    Digital soil mapping procedures are widely used to build two-dimensional continuous maps of several pedological attributes. Our work addressed a regression kriging (RK) technique and a bootstrapped artificial neural network approach in order to evaluate and compare (i) the accuracy of prediction, (ii) the suitability for inclusion in automatic engines (e.g. to constitute web processing services), and (iii) the time cost of calibrating models and of making predictions. Regression kriging is perhaps the most widely used geostatistical technique in the digital soil mapping literature. Here we applied EBLUP regression kriging, as it is deemed the most statistically sound RK flavor by pedometricians. An unusual multi-parametric and nonlinear machine learning approach was also implemented, called BAGAP (Bootstrap aggregating Artificial neural networks with Genetic Algorithms and Principal component regression). BAGAP combines a selected set of weighted neural nets having specified characteristics to yield an ensemble response. The purpose of applying these two particular models is to ascertain whether, and by how much, a more cumbersome machine learning method can make more accurate and precise predictions. Aware of the difficulty of handling EBLUP-RK and BAGAP objects when they are embedded in environmental applications, we explore how readily each can be wrapped within Web Processing Services. Two further aspects are considered for an exhaustive evaluation and comparison: automaticity, and time of calculation with and without high-performance computing leverage.

  6. Calibrating Legal Judgments

    Directory of Open Access Journals (Sweden)

    Frederick Schauer

    2017-09-01

    Full Text Available Objective: to study the notion and essence of calibration of legal judgments, the possibilities of using it in law-enforcement activity, and the expenses and advantages of using it. Methods: a dialectic approach to the cognition of social phenomena, which enables their analysis in historical development and functioning in the context of the integrity of objective and subjective factors; this determined the choice of the following research methods: formal-legal and comparative-legal methods, sociological methods, and methods of cognitive psychology and philosophy. Results: In ordinary life, people who assess other people's judgments typically take into account the other judgments of those they are assessing in order to calibrate the judgment presently being assessed. The restaurant and hotel rating website TripAdvisor is exemplary because it facilitates calibration by providing access to a rater's previous ratings. Such information allows a user to see whether a particular rating comes from a rater who is enthusiastic about every place she patronizes or instead from someone who is incessantly hard to please. And even when less systematized, as in assessing a letter of recommendation or college transcript, calibration by recourse to the decisional history of those whose judgments are being assessed is ubiquitous. Yet despite the ubiquity and utility of such calibration, the legal system seems perversely to reject it. Appellate courts do not openly adjust their standard of review based on the previous judgments of the judge whose decision they are reviewing; nor do judges in reviewing legislative or administrative decisions, magistrates in evaluating search warrant representations, or jurors in assessing witness perception. In most legal domains, calibration by reference to the prior decisions of the reviewee is invisible, either because it does not exist or because reviewing bodies are unwilling to admit using what they in fact know and employ.
Scientific novelty: for the first

  7. Calibration of Galileo signals for time metrology.

    Science.gov (United States)

    Defraigne, Pascale; Aerts, Wim; Cerretto, Giancarlo; Cantoni, Elena; Sleewaegen, Jean-Marie

    2014-12-01

    Using global navigation satellite system (GNSS) signals for accurate timing and time transfer requires knowledge of all electrical delays of the signals inside the receiving system. GNSS stations dedicated to timing or time transfer are classically calibrated only for Global Positioning System (GPS) signals. This paper proposes a procedure to determine the hardware delays of a GNSS receiving station for Galileo signals once the delays of the GPS signals are known. This approach makes use of the broadcast satellite inter-signal biases and is based on the ionospheric delay measured from dual-frequency combinations of GPS and Galileo signals. The uncertainty on the hardware delays so determined is estimated at 3.7 ns for each isolated code in the L5 frequency band, and 4.2 ns for the ionosphere-free combination of E1 with a code of the L5 frequency band. For the calibration of a time transfer link between two stations, another approach can be used, based on the difference between the common-view time transfer results obtained with calibrated GPS data and with uncalibrated Galileo data. It is shown that the results obtained with this approach and with the ionospheric method are equivalent.

  8. Laser Calibration of the ATLAS Tile Calorimeter

    CERN Document Server

    Di Gregorio, Giulia; The ATLAS collaboration

    2017-01-01

    High performance stability of the ATLAS Tile Calorimeter is achieved with a set of calibration procedures. One step of the calibration procedure is based on measurements of the stability of the response to laser excitation of the PMTs that are used to read out the calorimeter cells. A facility to study PMT response stability in the laboratory has been operating at the INFN Pisa laboratories since 2015. The goals of the laboratory tests are to study the time evolution of the PMT response, and to reproduce and understand the origin of the response drifts seen with the PMTs mounted on the Tile Calorimeter during normal operation in LHC Run 1 and Run 2. A new statistical approach was developed to measure drift of the absolute gain. This approach was applied both to the ATLAS laser calibration data and to data collected in the Pisa laboratory. Preliminary results from these two studies are shown.

  9. Application of effective variance method for contamination monitor calibration

    International Nuclear Information System (INIS)

    Goncalez, O.L.; Freitas, I.S.M. de.

    1990-01-01

    In this report, the calibration of a thin-window Geiger-Müller type monitor for superficial alpha contamination is presented. The calibration curve is obtained by least-squares fitting with effective variance. The method and the approach to the calculation are briefly discussed. (author)
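The effective variance method mentioned above handles uncertainties in both variables by folding the x-uncertainty into the y-weight through the current slope estimate, iterating until the fit stabilizes. A minimal straight-line sketch with invented data and uncertainties:

```python
import numpy as np

# Straight-line fit with errors in both variables ("effective variance"):
# iterate weighted least squares with w_i = 1 / (sy_i^2 + (b * sx_i)^2).
x  = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y  = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
sx = np.full(5, 0.1)                       # x uncertainties
sy = np.array([0.1, 0.1, 0.3, 0.3, 0.5])   # y uncertainties

b, a = 1.0, 0.0
for _ in range(20):
    w = 1.0 / (sy**2 + (b * sx)**2)        # effective variance weights
    W = np.sum(w)
    xm, ym = np.sum(w * x) / W, np.sum(w * y) / W
    b = np.sum(w * (x - xm) * (y - ym)) / np.sum(w * (x - xm) ** 2)
    a = ym - b * xm
```

The weights depend on the slope, so each pass re-solves a weighted least-squares problem with updated weights; convergence is typically fast because the slope changes the weights only weakly.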

  10. A regression approach for Zircaloy-2 in-reactor creep constitutive equations

    International Nuclear Information System (INIS)

    Yung Liu, Y.; Bement, A.L.

    1977-01-01

    In this paper the methodology of multiple regression as applied to Zircaloy-2 in-reactor creep data analysis and the construction of a constitutive equation is illustrated. While the resulting constitutive equation can be used in creep analysis of in-reactor Zircaloy structural components, the methodology itself is entirely general and can be applied to any creep data analysis. The promising aspects of multiple regression creep data analysis are briefly outlined as follows: (1) When more than one variable is involved, there is no need to assume that each variable affects the response independently. No separate normalizations are required either, and the parameters are estimated by solving many simultaneous equations, the number of which equals the number of data sets. (2) Regression statistics such as the R²- and F-statistics provide measures of the significance of the regression creep equation in correlating the overall data. The relative weight of each variable on the response can also be obtained. (3) Special regression techniques such as step-wise, ridge, and robust regression, together with residual plots, etc., provide diagnostic tools for model selection. Multiple regression analysis performed on a set of carefully selected Zircaloy-2 in-reactor creep data leads to a model which provides excellent correlations for the data. (Auth.)

  11. A new approach for interexaminer reliability data analysis on dental caries calibration

    Directory of Open Access Journals (Sweden)

    Andréa Videira Assaf

    2007-12-01

    Full Text Available Objectives: (a) to evaluate the interexaminer reliability in caries detection considering different diagnostic thresholds, and (b) to indicate, by using Kappa statistics, the best way of measuring interexaminer agreement during the calibration process in dental caries surveys. Methods: Eleven dentists participated in the initial training, which was divided into theoretical discussions and practical activities, and in calibration exercises performed at baseline and at 3 and 6 months after the initial training. For the examinations of 6-7-year-old schoolchildren, the World Health Organization (WHO) recommendations were followed and different diagnostic thresholds were used: WHO (decayed/missing/filled teeth, the DMFT index) and WHO + IL (initial lesion) diagnostic thresholds. The interexaminer reliability was calculated by Kappa statistics, according to the WHO and WHO+IL thresholds, considering: (a) the entire dentition; (b) the upper/lower jaws; (c) sextants; (d) each tooth individually. Results: Interexaminer reliability was high for both diagnostic thresholds; nevertheless, it decreased in all calibration sessions when considering teeth individually. Conclusion: Interexaminer reliability was maintained over the period of 6 months under both caries diagnosis thresholds. However, great disagreement was observed for posterior teeth, especially using the WHO+IL criteria. Analysis considering dental elements individually was the best way of detecting interexaminer disagreement during the calibration sessions.
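The Kappa statistic used above corrects the raw agreement between two examiners for the agreement expected by chance. A self-contained sketch for two raters and binary caries calls (the tooth scores are invented):

```python
def cohen_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical calls on the same items."""
    n = len(r1)
    cats = set(r1) | set(r2)
    po = sum(a == b for a, b in zip(r1, r2)) / n            # observed agreement
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)  # chance
    return (po - pe) / (1 - pe)

# Two examiners scoring the same 10 teeth as sound (0) or carious (1).
ex1 = [0, 0, 1, 1, 0, 1, 0, 0, 1, 0]
ex2 = [0, 0, 1, 0, 0, 1, 0, 0, 1, 1]
kappa = cohen_kappa(ex1, ex2)
```

Computing kappa per tooth, per sextant, per jaw, and for the whole dentition, as the study does, simply means applying the same formula to differently aggregated score lists.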

  12. Statistical Validation of Calibrated Wind Data Collected From NOAA's Hurricane Hunter Aircraft

    Science.gov (United States)

    Graham, K.; Sears, I. T.; Holmes, M.; Henning, R. G.; Damiano, A. B.; Parrish, J. R.; Flaherty, P. T.

    2015-12-01

    Obtaining accurate in situ meteorological measurements from the NOAA G-IV Hurricane Hunter Aircraft currently requires annual wind calibration flights. This project attempts to demonstrate that an alternative to wind calibration flights can be implemented using data collected from many previous hurricane, winter storm, and surveying flights. Wind derivations require airplane attack and slip angles, airplane pitch, pressure differentials, dynamic pressures, ground speeds, true air speeds, and several other variables measured by instruments on the aircraft. Through the use of linear regression models, future wind measurements may be fit to past statistical models. This method of wind calibration could replace the need for annual wind calibration flights, decreasing NOAA expenses and providing more accurate data. This would help ensure that all data users have reliable data and ultimately contribute to NOAA's goal of building a Weather-Ready Nation.

  13. NIST display colorimeter calibration facility

    Science.gov (United States)

    Brown, Steven W.; Ohno, Yoshihiro

    2003-07-01

    A facility has been developed at the National Institute of Standards and Technology (NIST) to provide calibration services for color-measuring instruments to address the need for improving and certifying the measurement uncertainties of this type of instrument. While NIST has active programs in photometry, flat panel display metrology, and color and appearance measurements, these are the first services offered by NIST tailored to color-measuring instruments for displays. An overview of the facility, the calibration approach, and associated uncertainties are presented. Details of a new tunable colorimetric source and the development of new transfer standard instruments are discussed.

  14. Selecting the correct weighting factors for linear and quadratic calibration curves with least-squares regression algorithm in bioanalytical LC-MS/MS assays and impacts of using incorrect weighting factors on curve stability, data quality, and assay performance.

    Science.gov (United States)

    Gu, Huidong; Liu, Guowen; Wang, Jian; Aubry, Anne-Françoise; Arnold, Mark E

    2014-09-16

    A simple procedure for selecting the correct weighting factors for linear and quadratic calibration curves with the least-squares regression algorithm in bioanalytical LC-MS/MS assays is reported. The correct weighting factor is determined by the relationship between the standard deviation of the instrument responses (σ) and the concentrations (x). The weighting factor of 1, 1/x, or 1/x² should be selected if, over the entire concentration range, σ is a constant, σ² is proportional to x, or σ is proportional to x, respectively. For the first time, we demonstrated with detailed scientific reasoning, solid historical data, and convincing justification that 1/x² should always be used as the weighting factor for all bioanalytical LC-MS/MS assays. The impacts of using incorrect weighting factors on curve stability, data quality, and assay performance were thoroughly investigated. It was found that the most stable curve was obtained when the correct weighting factor was used, whereas curves using incorrect weighting factors were unstable. It was also found that incorrect weighting factors had a very insignificant impact on the reported concentrations, because concentrations were always reported with passing curves, which overlapped with or were very close to the curves using the correct weighting factor. However, the use of incorrect weighting factors did impact assay performance significantly. Finally, the difference between the weighting factors 1/x² and 1/y² was discussed. All of the findings can be generalized and applied to other quantitative analysis techniques that use calibration curves with a weighted least-squares regression algorithm.
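The weighting rule can be demonstrated with a small weighted least-squares fit. The concentrations and responses below are made-up numbers chosen so that σ grows roughly in proportion to x, the case in which 1/x² weighting applies.

```python
import numpy as np

# Sketch: linear calibration curve fitted with 1/x^2 weights, as recommended
# when the response SD grows proportionally with concentration.
x = np.array([1.0, 2.0, 5.0, 10.0, 50.0, 100.0])      # concentrations (mock)
y = np.array([2.1, 3.9, 10.3, 19.5, 101.0, 198.0])    # instrument responses

w = 1.0 / x**2                                        # weighting factor 1/x^2
# Weighted least squares for y = a + b*x: scale both sides by sqrt(w)
A = np.column_stack([np.ones_like(x), x])
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
a, b = coef
print(f"intercept={a:.3f} slope={b:.3f}")
```

With 1/x² weights, the low-concentration points carry as much relative influence as the high ones, which is what keeps back-calculated concentrations accurate near the lower limit of quantitation.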

  15. Boosted regression trees, multivariate adaptive regression splines and their two-step combinations with multiple linear regression or partial least squares to predict blood-brain barrier passage: a case study.

    Science.gov (United States)

    Deconinck, E; Zhang, M H; Petitet, F; Dubus, E; Ijjaali, I; Coomans, D; Vander Heyden, Y

    2008-02-18

    The use of some unconventional non-linear modeling techniques, i.e. classification and regression trees and multivariate adaptive regression splines-based methods, was explored to model the blood-brain barrier (BBB) passage of drugs and drug-like molecules. The data set contains BBB passage values for 299 structurally and pharmacologically diverse drugs, originating from a structured knowledge-based database. Models were built using boosted regression trees (BRT) and multivariate adaptive regression splines (MARS), as well as their respective combinations with stepwise multiple linear regression (MLR) and partial least squares (PLS) regression in two-step approaches. The best models were obtained using combinations of MARS with either stepwise MLR or PLS. It could be concluded that the use of combinations of a linear with a non-linear modeling technique results in some improved properties compared to the individual linear and non-linear models and that, when the use of such a combination is appropriate, combinations using MARS as the non-linear technique should be preferred over those with BRT, due to some serious drawbacks of the BRT approaches.
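A two-step linear-plus-non-linear combination of the kind described can be sketched as follows: a linear model captures the main trend, and boosted regression trees are fit to its residuals. This is an illustrative stand-in using scikit-learn (gradient-boosted trees for the BRT part; MARS is not reproduced here), on mock descriptor data rather than the original BBB data set.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor

# Illustrative two-step scheme: linear model first, boosted trees on the
# residuals to pick up the non-linear remainder.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))                          # mock molecular descriptors
y = 2 * X[:, 0] - X[:, 1] + np.sin(3 * X[:, 2]) + rng.normal(scale=0.1, size=300)

lin = LinearRegression().fit(X, y)                     # step 1: linear part
resid = y - lin.predict(X)
brt = GradientBoostingRegressor(random_state=0).fit(X, resid)  # step 2: non-linear part

pred = lin.predict(X) + brt.predict(X)                 # combined prediction
rmse = np.sqrt(np.mean((y - pred) ** 2))
print(f"RMSE: {rmse:.3f}")
```

In practice the two-step fit would be judged on held-out data, since the boosted second stage can easily overfit the training residuals.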

  16. Improving quantitative precision and throughput by reducing calibrator use in liquid chromatography-tandem mass spectrometry

    International Nuclear Information System (INIS)

    Rule, Geoffrey S.; Rockwood, Alan L.

    2016-01-01

    To improve efficiency in our mass spectrometry laboratories we have made efforts to reduce the number of calibration standards utilized for quantitation over time. We often analyze three or more batches of 96 samples per day, on a single instrument, for a number of assays. With a conventional calibration scheme at six concentration levels this amounts to more than 5000 calibration points per year. Modern LC-tandem mass spectrometric instrumentation, however, is extremely rugged, and isotopically labelled internal standards are widely available. This made us consider whether alternative calibration strategies could be utilized to reduce the number of calibration standards analyzed while still retaining high precision and accurate quantitation. Here we demonstrate how, by utilizing a single calibration point in each sample batch, and using the resulting response factor (RF) to update an existing, historical response factor (HRF), we are able to obtain improved precision over a conventional multipoint calibration approach, as judged by quality control samples. The laboratory component of this study was conducted with an existing LC-tandem mass spectrometric method for three androgen analytes in our production laboratory. Using examples from both simulated and laboratory data we illustrate several aspects of our single point alternative calibration strategy and compare it with a conventional, multipoint calibration approach. We conclude that both the cost and burden of preparing multiple calibration standards with every batch of samples can be reduced while at the same time maintaining, or even improving, analytical quality. - Highlights: • Use of a weighted single point calibration approach improves quantitative precision. • A weighted response factor approach incorporates historical calibration information. • Several scenarios are discussed with regard to their influence on quantitation.
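The single-point, weighted response-factor idea can be sketched in a few functions. The function names, the blending weight, and all numbers below are illustrative assumptions, not the laboratory's actual procedure.

```python
# Sketch of a single-point calibration with a historical response factor (HRF).
# The blending weight (0.1) is an invented example value.

def response_factor(area_analyte, area_istd, conc):
    """Response factor from one calibrator: (analyte/ISTD area ratio) per unit conc."""
    return (area_analyte / area_istd) / conc

def update_hrf(hrf, rf_new, weight=0.1):
    """Blend the new batch RF into the historical RF (exponential weighting)."""
    return (1 - weight) * hrf + weight * rf_new

def quantify(area_analyte, area_istd, hrf):
    """Back-calculate concentration from the area ratio and the running HRF."""
    return (area_analyte / area_istd) / hrf

hrf = 0.050                                   # historical response factor
rf = response_factor(5200.0, 10000.0, 10.0)   # single calibrator at 10 (conc units)
hrf = update_hrf(hrf, rf)                     # 0.9*0.050 + 0.1*0.052 = 0.0502
print(round(quantify(2600.0, 10000.0, hrf), 2))   # → 5.18
```

The weighting smooths batch-to-batch variation in the single calibrator, which is how historical information improves precision over a curve rebuilt from scratch each batch.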

  17. Improving quantitative precision and throughput by reducing calibrator use in liquid chromatography-tandem mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Rule, Geoffrey S., E-mail: geoffrey.s.rule@aruplab.com [ARUP Institute for Clinical and Experimental Pathology, 500 Chipeta Way, Salt Lake City, UT 84108 (United States); Rockwood, Alan L. [ARUP Institute for Clinical and Experimental Pathology, 500 Chipeta Way, Salt Lake City, UT 84108 (United States); Department of Pathology, University of Utah School of Medicine, 2100 Jones Medical Research Bldg., Salt Lake City, UT 84132 (United States)

    2016-05-05

    To improve efficiency in our mass spectrometry laboratories we have made efforts to reduce the number of calibration standards utilized for quantitation over time. We often analyze three or more batches of 96 samples per day, on a single instrument, for a number of assays. With a conventional calibration scheme at six concentration levels this amounts to more than 5000 calibration points per year. Modern LC-tandem mass spectrometric instrumentation, however, is extremely rugged, and isotopically labelled internal standards are widely available. This made us consider whether alternative calibration strategies could be utilized to reduce the number of calibration standards analyzed while still retaining high precision and accurate quantitation. Here we demonstrate how, by utilizing a single calibration point in each sample batch, and using the resulting response factor (RF) to update an existing, historical response factor (HRF), we are able to obtain improved precision over a conventional multipoint calibration approach, as judged by quality control samples. The laboratory component of this study was conducted with an existing LC-tandem mass spectrometric method for three androgen analytes in our production laboratory. Using examples from both simulated and laboratory data we illustrate several aspects of our single point alternative calibration strategy and compare it with a conventional, multipoint calibration approach. We conclude that both the cost and burden of preparing multiple calibration standards with every batch of samples can be reduced while at the same time maintaining, or even improving, analytical quality. - Highlights: • Use of a weighted single point calibration approach improves quantitative precision. • A weighted response factor approach incorporates historical calibration information. • Several scenarios are discussed with regard to their influence on quantitation.

  18. Modeling of the Monthly Rainfall-Runoff Process Through Regressions

    Directory of Open Access Journals (Sweden)

    Campos-Aranda Daniel Francisco

    2014-10-01

    Full Text Available To solve problems associated with assessing the water resources of a river, modeling of the rainfall-runoff process (RRP) allows missing runoff data to be deduced and the runoff record to be extended, since the available precipitation record is generally longer. It also enables the estimation of inputs to reservoirs when their construction led to the suppression of the gauging station. The simplest mathematical model that can be set up for the RRP is the linear regression or curve on a monthly basis. Such a model is described in detail and is calibrated with the simultaneous record of monthly rainfall and runoff at the Ballesmi hydrometric station, which covers 35 years. Since the runoff at this station has an important contribution from spring discharge, the record is first corrected by removing that contribution. To do this, a procedure was developed based either on the monthly average regional runoff coefficients or on a nearby and similar watershed; in this case the Tancuilín gauging station was used. Both stations belong to Partial Hydrologic Region No. 26 (Lower Rio Panuco) and are located within the state of San Luis Potosí, México. The study indicates that the monthly regression model, due to its conceptual approach, faithfully reproduces monthly average runoff volumes and achieves an excellent approximation of the dispersion, as verified by calculation of the means and standard deviations.
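A monthly rainfall-runoff regression of the kind described reduces to fitting runoff volume against rainfall depth by least squares. The record below is synthetic (35 years of mock monthly values), not the Ballesmi data.

```python
import numpy as np

# Minimal sketch of a monthly rainfall-runoff regression on synthetic data.
rng = np.random.default_rng(2)
rain = rng.gamma(shape=2.0, scale=50.0, size=35 * 12)        # monthly rainfall, mm
runoff = 0.3 * rain - 5.0 + rng.normal(scale=5.0, size=rain.size)
runoff = np.clip(runoff, 0.0, None)                          # volumes cannot be negative

A = np.column_stack([np.ones_like(rain), rain])
(intercept, slope), *_ = np.linalg.lstsq(A, runoff, rcond=None)
est = intercept + slope * rain
print(f"slope={slope:.2f}, mean obs={runoff.mean():.1f}, mean est={est.mean():.1f}")
```

Because an ordinary least-squares fit with an intercept reproduces the sample mean exactly, the model preserves mean monthly runoff by construction; matching the dispersion, as the abstract notes, is the stronger test.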

  19. A review and comparison of Bayesian and likelihood-based inferences in beta regression and zero-or-one-inflated beta regression.

    Science.gov (United States)

    Liu, Fang; Eugenio, Evercita C

    2018-04-01

    Beta regression is an increasingly popular statistical technique in medical research for modeling outcomes that assume values in (0, 1), such as proportions and patient-reported outcomes. When outcomes take values in the intervals [0,1), (0,1], or [0,1], zero-or-one-inflated beta (zoib) regression can be used. We provide a thorough review of beta regression and zoib regression in the modeling, inferential, and computational aspects via the likelihood-based and Bayesian approaches. Via simulation studies, we demonstrate the statistical and practical importance of correctly modeling the inflation at zero/one rather than replacing such values ad hoc with values close to zero or one; the latter approach can lead to biased estimates and invalid inferences. We show via simulation studies that the likelihood-based approach is computationally faster in general than the MCMC algorithms used in Bayesian inference, but runs the risk of non-convergence, large biases, and sensitivity to starting values in the optimization algorithm, especially with clustered/correlated data, data with sparse inflation at zero and one, and data that warrant regularization of the likelihood. The disadvantages of the regular likelihood-based approach make the Bayesian approach an attractive alternative in these cases. Software packages and tools for fitting beta and zoib regressions in both the likelihood-based and Bayesian frameworks are also reviewed.
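The likelihood-based route the review describes can be sketched directly: with a logit link for the mean and a constant precision φ, the beta regression log-likelihood is maximized numerically. This is a minimal illustration on simulated data, not a substitute for the dedicated packages the review surveys.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
from scipy.stats import beta as beta_dist

# Minimal beta regression (logit link, constant precision phi) by maximum
# likelihood on simulated data: y ~ Beta(mu*phi, (1-mu)*phi).
rng = np.random.default_rng(3)
n = 400
x = rng.normal(size=n)
mu = expit(-0.5 + 1.0 * x)                 # true mean on (0, 1)
phi_true = 20.0
y = rng.beta(mu * phi_true, (1 - mu) * phi_true)

def negloglik(params):
    b0, b1, log_phi = params
    m = expit(b0 + b1 * x)
    phi = np.exp(log_phi)                  # log-parameterization keeps phi > 0
    return -beta_dist.logpdf(y, m * phi, (1 - m) * phi).sum()

fit = minimize(negloglik, x0=[0.0, 0.0, 1.0], method="Nelder-Mead",
               options={"maxiter": 2000, "maxfev": 2000})
b0, b1, log_phi = fit.x
print(f"b0={b0:.2f} b1={b1:.2f} phi={np.exp(log_phi):.1f}")
```

The sensitivity to starting values noted in the abstract is visible here: a poor `x0` can stall the optimizer, which is one motivation for the Bayesian alternative.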

  20. Reduction of interferences in graphite furnace atomic absorption spectrometry by multiple linear regression modelling

    Science.gov (United States)

    Grotti, Marco; Abelmoschi, Maria Luisa; Soggia, Francesco; Tiberiade, Christian; Frache, Roberto

    2000-12-01

    The multivariate effects of Na, K, Mg and Ca as nitrates on the electrothermal atomisation of manganese, cadmium and iron were studied by multiple linear regression modelling. Since the models proved to efficiently predict the effects of the considered matrix elements over a wide range of concentrations, they were applied to correct the interferences occurring in the determination of trace elements in seawater after pre-concentration of the analytes. In order to obtain a statistically significant number of samples, a large volume of the certified seawater reference materials CASS-3 and NASS-3 was treated with Chelex-100 resin; the chelating resin was then separated from the solution and divided into several sub-samples, each of which was eluted with nitric acid and analysed by electrothermal atomic absorption spectrometry (for trace element determinations) and inductively coupled plasma optical emission spectrometry (for matrix element determinations). To minimise any systematic error besides that due to matrix effects, the accuracy of the pre-concentration step and the contamination levels of the procedure were checked by inductively coupled plasma mass spectrometric measurements. Analytical results obtained by applying the multiple linear regression models were compared with those obtained with other calibration methods, such as external calibration using acid-based standards, external calibration using matrix-matched standards, and the analyte addition technique. The empirical models proved to efficiently reduce the interferences occurring in the analysis of real samples, yielding better accuracy than the other calibration methods.
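The correction scheme can be sketched as follows: the relative sensitivity of the analyte is modelled as a linear function of the matrix-element concentrations, and the fitted model is then used to correct a measured signal. All coefficients and concentrations below are invented for illustration, not the paper's values.

```python
import numpy as np

# Sketch of interference correction by multiple linear regression.
rng = np.random.default_rng(4)
n = 60
matrix = rng.uniform(0, 100, size=(n, 4))   # Na, K, Mg, Ca concentrations (mock)
# Relative sensitivity: 1 at zero matrix, depressed linearly by each element
sens = 1.0 + matrix @ np.array([-1e-3, -5e-4, -2e-3, -8e-4])
sens += rng.normal(scale=0.005, size=n)     # measurement noise

A = np.column_stack([np.ones(n), matrix])
coef, *_ = np.linalg.lstsq(A, sens, rcond=None)

# Correct one measured signal given its independently measured matrix levels
sample_matrix = np.array([50.0, 20.0, 10.0, 30.0])
predicted_sens = coef[0] + sample_matrix @ coef[1:]
measured = 0.85                             # raw analyte signal (arbitrary units)
corrected = measured / predicted_sens
print(f"corrected signal: {corrected:.3f}")
```

Dividing by the predicted relative sensitivity removes the modelled matrix suppression, which is the role the empirical models play in the seawater analysis above.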

  1. Calibration is action specific but perturbation of perceptual units is not.

    Science.gov (United States)

    Pan, Jing S; Coats, Rachel O; Bingham, Geoffrey P

    2014-02-01

    G. P. Bingham and C. C. Pagano (1998, The necessity of a perception/action approach to definite distance perception: Monocular distance perception to guide reaching. Journal of Experimental Psychology: Human Perception and Performance, 24, 145-168) argued that metric space perception should be investigated using relevant action measures because calibration is an intrinsic component of perception/action that yields accurate targeted actions. They described calibration as a mapping from embodied units of perception to embodied units of action. This mapping theory yields a number of predictions. We tested two of them. The first prediction is that calibration should be action specific because what is calibrated is a mapping from perceptual units to a unit of action. Thus, calibration does not generalize to other actions. This prediction is consistent with the "action-specific approach" to calibration (D. R. Proffitt, 2008, An action specific approach to spatial perception. In R. L. Klatzky, B. MacWhinney, & M. Behrmann (Eds.), Embodiment, ego-space and action (pp. 179-202). New York, NY: Psychology Press.). The second prediction is that a change in perceptual units should generalize to all relevant actions that are guided using that perceptual information. The same perceptual units can be mapped to different actions. Change in the unit affects all relevant actions. This prediction is consistent with the "general purpose perception approach" (J. M. Loomis & J. W. Philbeck, 2008, Measuring spatial perception with spatial updating and action. In R. L. Klatzky, B. MacWhinney, & M. Behrmann (Eds.), Embodiment, ego-space and action (pp. 1-43). New York, NY: Psychology Press). In Experiment 1, two targeted actions, throwing and extended reaching, were tested to determine whether they were comparable in precision and in response to distorted calibration. They were. Comparing these actions, the first prediction was tested in Experiment 2 and confirmed. The second prediction was

  2. Logic regression and its extensions.

    Science.gov (United States)

    Schwender, Holger; Ruczinski, Ingo

    2010-01-01

    Logic regression is an adaptive classification and regression procedure, initially developed to reveal interacting single nucleotide polymorphisms (SNPs) in genetic association studies. In general, this approach can be used in any setting with binary predictors, when the interaction of these covariates is of primary interest. Logic regression searches for Boolean (logic) combinations of binary variables that best explain the variability in the outcome variable, and thus, reveals variables and interactions that are associated with the response and/or have predictive capabilities. The logic expressions are embedded in a generalized linear regression framework, and thus, logic regression can handle a variety of outcome types, such as binary responses in case-control studies, numeric responses, and time-to-event data. In this chapter, we provide an introduction to the logic regression methodology, list some applications in public health and medicine, and summarize some of the direct extensions and modifications of logic regression that have been proposed in the literature. Copyright © 2010 Elsevier Inc. All rights reserved.
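A toy version of the search that logic regression performs can make the idea concrete: score simple Boolean (AND/OR) combinations of binary predictors against a binary outcome and keep the best. Real logic regression explores far larger expression trees with simulated annealing; the data here are random stand-ins for SNP indicators.

```python
import itertools
import numpy as np

# Toy logic-regression search: exhaustively score pairwise AND/OR expressions.
rng = np.random.default_rng(5)
X = rng.integers(0, 2, size=(200, 6)).astype(bool)    # mock binary SNP indicators
y = (X[:, 0] & X[:, 3]) ^ (rng.random(200) < 0.05)    # noisy x0 AND x3 interaction

best = None
for i, j in itertools.combinations(range(X.shape[1]), 2):
    for name, expr in [("AND", X[:, i] & X[:, j]), ("OR", X[:, i] | X[:, j])]:
        acc = np.mean(expr == y)                      # classification accuracy
        if best is None or acc > best[0]:
            best = (acc, f"x{i} {name} x{j}")
print(best)   # should recover x0 AND x3 with accuracy near 0.95
```

The exhaustive loop is feasible only for tiny expression spaces; the stochastic search in logic regression is what makes the same idea scale to genome-wide predictor sets.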

  3. Calibrating a surface mass-balance model for Austfonna ice cap, Svalbard

    Science.gov (United States)

    Schuler, Thomas Vikhamar; Loe, Even; Taurisano, Andrea; Eiken, Trond; Hagen, Jon Ove; Kohler, Jack

    2007-10-01

    Austfonna (8120 km²) is by far the largest ice mass in the Svalbard archipelago. There is considerable uncertainty about its current state of balance and its possible response to climate change. Over the 2004/05 period, we collected continuous meteorological data series from the ice cap, performed mass-balance measurements using a network of stakes distributed across the ice cap and mapped the distribution of snow accumulation using ground-penetrating radar along several profile lines. These data are used to drive and test a model of the surface mass balance. The spatial accumulation pattern was derived from the snow depth profiles using regression techniques, and ablation was calculated using a temperature-index approach. Model parameters were calibrated using the available field data. Parameter calibration was complicated by the fact that different parameter combinations yield equally acceptable matches to the stake data while the resulting calculated net mass balance differs considerably. Testing model results against multiple criteria is an efficient method to cope with non-uniqueness. In doing so, a range of different data and observations was compared to several different aspects of the model results. We find a systematic underestimation of net balance for parameter combinations that predict observed ice ablation, which suggests that refreezing processes play an important role. To represent these effects in the model, a simple PMAX approach was included in its formulation. Used as a diagnostic tool, the model suggests that the surface mass balance for the period 29 April 2004 to 23 April 2005 was negative (-318 mm w.e.).
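The temperature-index ablation scheme with a PMAX-style refreezing cap can be sketched in a few lines. The degree-day factor and PMAX value below are illustrative, not the calibrated Austfonna parameters.

```python
# Minimal degree-day (temperature-index) ablation sketch with a simple Pmax
# refreezing cap, in the spirit of the model described above.

def net_balance(daily_temp_c, accumulation_mm, ddf=4.0, pmax=0.6):
    """Net surface balance (mm w.e.): accumulation minus runoff, where part of
    the melt (up to pmax * accumulation) refreezes instead of running off."""
    melt = ddf * sum(max(t, 0.0) for t in daily_temp_c)   # degree-day melt
    refreeze = min(melt, pmax * accumulation_mm)          # Pmax-style cap
    runoff = melt - refreeze
    return accumulation_mm - runoff

temps = [-5.0, -1.0, 2.0, 4.0, 6.0, 3.0, -2.0]            # one mock week, deg C
print(net_balance(temps, accumulation_mm=30.0))           # → -12.0
```

Because refreezing retains part of the melt as mass, omitting it biases the modelled net balance low, which matches the systematic underestimation the authors report.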

  4. Linear regression crash prediction models : issues and proposed solutions.

    Science.gov (United States)

    2010-05-01

    The paper develops a linear regression model approach that can be applied to crash data to predict vehicle crashes. The proposed approach involves novel data aggregation to satisfy linear regression assumptions; namely error structure normality ...

  5. Calibration pipeline for VIR data

    Science.gov (United States)

    Carraro, F.; Fonte, S.; Coradini, A.; Filacchione, G.; de Sanctis, M. C.; Ammannito, E.; Capria, M. T.; Cartacci, M.; Noschese, R.; Tosi, F.; Capaccioni, F.

    2011-10-01

    During the second quarter of 2011, VIR-MS (VIS and IR Mapping Spectrometer) [1] aboard the Dawn mission [2] approached Vesta in order to start a long period of acquisitions that will end at the beginning of 2012. Data acquired by any instrument always require a calibration process to remove all the instrument effects that could affect scientific evaluation and analysis. The VIR-MS instrument team has built a calibration pipeline with the goal of producing calibrated (1b level) data starting from the raw (1a level) ones. The tool's other goal is to check the quality of the acquired data by evaluating a series of minimum requisites for each data file, such as the percentage of saturated pixels, the presence of spikes, or the mean S/N ratio of each qube.

  6. Intersatellite Calibration of Microwave Radiometers for GPM

    Science.gov (United States)

    Wilheit, T. T.

    2010-12-01

    The aim of the GPM mission is to measure precipitation globally with high temporal resolution by using a constellation of satellites logically united by the GPM Core Satellite, which will be in a non-sunsynchronous, medium-inclination orbit. The usefulness of the combined product depends on the consistency of precipitation retrievals from the various microwave radiometers. The calibration requirements for this consistency are quite daunting, requiring a multi-layered approach. The radiometers can vary considerably in their frequencies, view angles, polarizations and spatial resolutions depending on their primary application and other constraints. The planned parametric algorithms will correct for the varying viewing parameters, but they are still vulnerable to calibration errors, both relative and absolute. The GPM Intersatellite Calibration Working Group (aka X-CAL) will adjust the calibration of all the radiometers to a common consensus standard for the GPM Level 1C product to be used in precipitation retrievals. Finally, each Precipitation Algorithm Working Group must have its own strategy for removing the residual errors. If the final adjustments are small, the credibility of the precipitation retrievals will be enhanced. Before intercomparison, the radiometers must be self-consistent on a scan-wise and orbit-wise basis. Pre-screening for this consistency constitutes the first step in the intercomparison. The radiometers are then compared pair-wise with the microwave radiometer (GMI) on the GPM Core Satellite. Two distinct approaches are used for the sake of cross-checking the results. On the one hand, nearly simultaneous observations are collected at the cross-over points of the orbits and the observations of one are converted to virtual observations of the other using a radiative transfer model to permit comparisons. The complementary approach collects histograms of brightness temperature from each instrument. In each case a model is needed to translate the

  7. Calibration of the DSCOVR EPIC visible and NIR channels using MODIS Terra and Aqua data and EPIC lunar observations

    Directory of Open Access Journals (Sweden)

    I. V. Geogdzhayev

    2018-01-01

    Full Text Available The unique position of the Deep Space Climate Observatory (DSCOVR) Earth Polychromatic Imaging Camera (EPIC) at the Lagrange 1 point makes it an important addition to the data from currently operating low Earth orbit observing instruments. The EPIC instrument does not have an onboard calibration facility. One approach to its calibration is to compare EPIC observations to measurements from polar-orbiting radiometers. The Moderate Resolution Imaging Spectroradiometer (MODIS) is a natural choice for such comparison due to its well-established calibration record and wide use in remote sensing. We use MODIS Aqua and Terra L1B 1 km reflectances to infer calibration coefficients for four EPIC visible and NIR channels: 443, 551, 680 and 780 nm. MODIS and EPIC measurements made between June 2015 and 2016 are employed for comparison. We first identify favorable MODIS pixels with scattering angle matching temporally collocated EPIC observations. Each EPIC pixel is then spatially collocated to a subset of the favorable MODIS pixels within a 25 km radius. The standard deviation of the selected MODIS pixels, as well as of the adjacent EPIC pixels, is used to find the most homogeneous scenes. These scenes are then used to determine calibration coefficients using a linear regression between EPIC counts s⁻¹ and reflectances in the close MODIS spectral channels. We present the thus-inferred EPIC calibration coefficients and discuss sources of uncertainty. The lunar EPIC observations are used to calibrate the EPIC O2-absorbing channels (688 and 764 nm), assuming that there is only a small difference between moon reflectances separated by ∼ 10 nm in wavelength and provided that the calibration factors of the red (680 nm) and NIR (780 nm) channels are known from the comparison between EPIC and MODIS.
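The final regression step reduces to a simple gain fit: once homogeneous, angle-matched scenes are collocated, the calibration coefficient is the slope of a linear regression of MODIS reflectance on EPIC raw signal. The values below are mock numbers, not actual EPIC/MODIS data.

```python
import numpy as np

# Sketch of the cross-calibration regression on mock collocated scenes.
rng = np.random.default_rng(6)
epic_counts = rng.uniform(1e3, 1e4, size=200)             # counts per second (mock)
true_gain = 3.5e-5
modis_refl = true_gain * epic_counts + rng.normal(scale=1e-3, size=200)

# Regression through the origin: gain = sum(x*y) / sum(x*x)
gain = np.sum(epic_counts * modis_refl) / np.sum(epic_counts**2)
print(f"calibration coefficient: {gain:.3e} reflectance per (counts/s)")
```

A zero intercept is assumed here for simplicity; in practice the choice between a forced-zero and a free-intercept fit is itself a source of calibration uncertainty worth reporting.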

  8. Online Sensor Calibration Assessment in Nuclear Power Systems

    International Nuclear Information System (INIS)

    Coble, Jamie B.; Ramuhalli, Pradeep; Meyer, Ryan M.; Hashemian, Hash

    2013-01-01

    Safe, efficient, and economic operation of nuclear systems (nuclear power plants, fuel fabrication and storage, used fuel processing, etc.) relies on the transmission of accurate and reliable measurements. During operation, sensors degrade due to age, environmental exposure, and maintenance interventions. Sensor degradation can affect the measured and transmitted signals, including sensor failure, signal drift, and changes in sensor response time. Currently, periodic sensor recalibration is performed to avoid these problems. Sensor recalibration activities include both calibration assessment and adjustment (if necessary). In nuclear power plants, periodic recalibration of safety-related sensors is required by the plant technical specifications. Recalibration typically occurs during refueling outages (about every 18 to 24 months). Non-safety-related sensors also undergo recalibration, though not as frequently. However, this approach to maintaining sensor calibration and performance is time-consuming and expensive, leading to unnecessary maintenance, increased radiation exposure to maintenance personnel, and potential damage to sensors. Online monitoring (OLM) of sensor performance is a non-invasive approach to assess instrument calibration. OLM can mitigate many of the limitations of the current periodic recalibration practice by providing more frequent assessment of calibration and identifying those sensors that are operating outside of calibration tolerance limits without removing sensors or interrupting operation. This can support extended operating intervals for unfaulted sensors and target recalibration efforts to only degraded sensors.
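A toy version of the OLM idea: with redundant sensors, a robust estimate of the process value (here the median) gives each channel's deviation, so a drifting sensor can be flagged during operation without removing anything. The tag names and tolerance are invented for illustration.

```python
import statistics

# Toy sketch of online calibration monitoring (OLM) with redundant sensors.
TOLERANCE = 1.5   # calibration tolerance, engineering units (illustrative)

readings = {"TE-101A": 300.2, "TE-101B": 299.8, "TE-101C": 302.6, "TE-101D": 300.1}
best_estimate = statistics.median(readings.values())   # robust process estimate

flagged = []
for name, value in readings.items():
    deviation = value - best_estimate
    if abs(deviation) > TOLERANCE:
        flagged.append(name)
    print(f"{name}: deviation {deviation:+.2f}")

print("drifting sensors:", flagged)
```

Production OLM systems use far richer empirical models than a median, but the principle is the same: only channels whose deviation exceeds tolerance are targeted for recalibration.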

  9. Hybrid PSO-ASVR-based method for data fitting in the calibration of infrared radiometer

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Sen; Li, Chengwei, E-mail: heikuanghit@163.com [School of Electrical Engineering and Automation, Harbin Institute of Technology, Harbin 150001 (China)

    2016-06-15

    The present paper describes a hybrid particle swarm optimization-adaptive support vector regression (PSO-ASVR)-based method for data fitting in the calibration of an infrared radiometer. The proposed hybrid PSO-ASVR-based method combines PSO with adaptive processing and support vector regression (SVR). The optimization technique involves setting parameters in the ASVR fitting procedure, which significantly improves the fitting accuracy. However, its use in the calibration of infrared radiometers has not yet been widely explored. Bearing this in mind, the PSO-ASVR-based method, which is based on statistical learning theory, is successfully used here to obtain the relationship between the radiation of a standard source and the response of an infrared radiometer. The main advantages of this method are the flexible adjustment mechanism in data processing and the optimization mechanism in the kernel parameter setting of the SVR. Numerical examples and applications to the calibration of an infrared radiometer are performed to verify the performance of the PSO-ASVR-based method compared to conventional data fitting methods.

  10. A hybrid framework for quantifying the influence of data in hydrological model calibration

    Science.gov (United States)

    Wright, David P.; Thyer, Mark; Westra, Seth; McInerney, David

    2018-06-01

    Influence diagnostics aim to identify a small number of influential data points that have a disproportionate impact on the model parameters and/or predictions. The key issues with current influence diagnostic techniques are that the regression-theory approaches do not provide hydrologically relevant influence metrics, while the case-deletion approaches are computationally expensive to calculate. The main objective of this study is to introduce a new two-stage hybrid framework that overcomes these challenges by delivering hydrologically relevant influence metrics in a computationally efficient manner. Stage one uses computationally efficient regression-theory influence diagnostics to identify the most influential points based on Cook's distance. Stage two then uses case-deletion influence diagnostics to quantify the influence of points using hydrologically relevant metrics. To illustrate the application of the hybrid framework, we conducted three experiments on 11 hydro-climatologically diverse Australian catchments using the GR4J hydrological model. The first experiment investigated how many data points from stage one need to be retained in order to reliably identify those points that have the highest influence on hydrologically relevant metrics. We found that a choice of 30-50 is suitable for hydrological applications similar to those explored in this study (30 points identified the most influential data 98% of the time and reduced the required recalibrations by 99% for a 10 year calibration period). The second experiment found little evidence of a change in the magnitude of influence with increasing calibration period length from 1, 2, 5 to 10 years. Even for 10 years the impact of influential points can still be high (>30% influence on maximum predicted flows). The third experiment compared the standard least squares (SLS) objective function with the weighted least squares (WLS) objective function on a 10 year calibration period. In two out of three flow
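The stage-one diagnostic can be illustrated on a plain linear regression, where Cook's distance has a closed form via the hat matrix. (The study applies the idea to the GR4J hydrological model; a linear model stands in here so the diagnostic itself is easy to see, and the data are synthetic.)

```python
import numpy as np

# Rank data points by Cook's distance for a simple linear regression.
rng = np.random.default_rng(7)
n = 50
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(scale=0.5, size=n)
y[10] += 8.0                                   # plant one influential outlier

X = np.column_stack([np.ones(n), x])
H = X @ np.linalg.inv(X.T @ X) @ X.T           # hat (projection) matrix
h = np.diag(H)                                 # leverages
resid = y - H @ y                              # residuals
p = X.shape[1]
s2 = resid @ resid / (n - p)                   # residual variance estimate
cooks = resid**2 / (p * s2) * h / (1 - h) ** 2 # Cook's distance per point

print("most influential point:", int(np.argmax(cooks)))   # → 10
```

Stage two of the framework would then recalibrate the model with only these top-ranked points deleted, turning an O(n) recalibration problem into an O(30-50) one.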

  11. An automated calibration laboratory for flight research instrumentation: Requirements and a proposed design approach

    Science.gov (United States)

    Oneill-Rood, Nora; Glover, Richard D.

    1990-01-01

    NASA's Dryden Flight Research Facility (Ames-Dryden) operates a diverse fleet of research aircraft which are heavily instrumented to provide both real-time data for in-flight monitoring and recorded data for postflight analysis. Ames-Dryden's existing automated calibration (AUTOCAL) laboratory is a computerized facility which tests aircraft sensors to certify accuracy for anticipated harsh flight environments. Recently, a major AUTOCAL lab upgrade was initiated; the goal of this modernization is to enhance productivity and improve configuration management for both software and test data. The new system will have multiple testing stations employing distributed processing linked by a local area network to a centralized database. The baseline requirements for the new AUTOCAL lab and the design approach being taken for its mechanization are described.

  12. Interferometric detection of single gold nanoparticles calibrated against TEM size distributions

    DEFF Research Database (Denmark)

    Zhang, Lixue; Christensen, Sune; Bendix, Pól Martin

    2015-01-01

    Single nanoparticle analysis: An interferometric optical approach determines the sizes of gold nanoparticles (AuNPs) by calibrating their interferometric signals against the corresponding transmission electron microscopy measurements. This method is used to investigate...

  13. A holistic calibration method with iterative distortion compensation for stereo deflectometry

    Science.gov (United States)

    Xu, Yongjia; Gao, Feng; Zhang, Zonghua; Jiang, Xiangqian

    2018-07-01

    This paper presents a novel holistic calibration method for stereo deflectometry systems to improve system measurement accuracy. The reconstruction result of stereo deflectometry is integrated with the calculated normal data of the measured surface. The calculation accuracy of the normal data is strongly influenced by the calibration accuracy of the geometrical relationship of the stereo deflectometry system. Conventional calibration approaches introduce form error into the system due to inaccuracies in the imaging model and in distortion elimination. The proposed calibration method compensates for system distortion with an iterative algorithm instead of the conventional mathematical distortion model. The initial values of the system parameters are calculated from the fringe patterns displayed on the system LCD screen through reflection off a markless flat mirror. An iterative algorithm is proposed to compensate for system distortion and to optimize the camera imaging parameters and the system geometrical relation parameters based on a cost function. Both simulation work and experimental results show that the proposed calibration method can significantly improve the calibration and measurement accuracy of a stereo deflectometry system. The PV (peak-to-valley) measurement error for a flat mirror can be reduced from 282 nm with the conventional calibration approach to 69.7 nm with the proposed method.

  14. Wind Tunnel Balance Calibration: Are 1,000,000 Data Points Enough?

    Science.gov (United States)

    Rhew, Ray D.; Parker, Peter A.

    2016-01-01

    Measurement systems are typically calibrated based on standard practices established by a metrology standards laboratory, for example the National Institute of Standards and Technology (NIST), or dictated by an organization's metrology manual. Therefore, the calibration is designed and executed according to an established procedure. However, for many aerodynamic research measurement systems a universally accepted, traceable standard approach does not exist. Therefore, the strategy for developing a calibration protocol is left to the developer or user to define based on experience and recommended practice in their respective industry. Wind tunnel balances are one such measurement system. Many different calibration systems, load schedules and procedures have been developed for balances, with little consensus on a recommended approach. Especially lacking is guidance on the number of calibration data points needed. Regrettably, the number of data points tends to be correlated with the perceived quality of the calibration. Often, the number of data points is associated with one's ability to generate the data rather than with a defined need in support of measurement objectives. Hence the title of the paper was conceived to challenge recent observations in the wind tunnel balance community that show an ever-increasing desire for more data points per calibration, absent any guidance for determining when there are enough. This paper presents fundamental concepts and theory to aid in the development of calibration procedures for wind tunnel balances and provides a framework that is generally applicable to the characterization and calibration of other measurement systems. Questions that need to be answered include, for example: What constitutes an adequate calibration? How much data are needed in the calibration? How good is the calibration? This paper will assist a practitioner in answering these questions by presenting an underlying theory on how to evaluate a calibration based on

  15. A simple approach for EPID dosimetric calibration to overcome the effect of image-lag and ghosting

    International Nuclear Information System (INIS)

    Alshanqity, Mukhtar; Duane, Simon; Nisbet, Andrew

    2012-01-01

    EPID dosimetry has known drawbacks. The main issue is that a measurable residual signal is observed after the end of irradiation for prolonged periods of time, thus making measurement difficult. We present a detailed analysis of EPID response and suggest a simple, yet accurate approach for calibration that avoids the complexity of incorporating ghosting and image-lag by using the maximum integrated signal instead of the total integrated signal. This approach is linear with dose and independent of dose rate. - Highlights: ► Image-lag and ghosting affect dosimetric accuracy. ► Image-lag and ghosting result in the reduction of the total integrated signal for low doses. ► The residual signal is the most significant result of the image-lag and ghosting effects. ► Image-lag and ghosting can result in under-dosing of up to 2.5%.

  16. Accuracy of Bayes and Logistic Regression Subscale Probabilities for Educational and Certification Tests

    Science.gov (United States)

    Rudner, Lawrence

    2016-01-01

    In the machine learning literature, it is commonly accepted as fact that as calibration sample sizes increase, Naïve Bayes classifiers initially outperform Logistic Regression classifiers in terms of classification accuracy. Applied to subtests from an on-line final examination and from a highly regarded certification examination, this study shows…

  17. Measurement reduction for mutual coupling calibration in DOA estimation

    Science.gov (United States)

    Aksoy, Taylan; Tuncer, T. Engin

    2012-01-01

    Mutual coupling is an important source of error in antenna arrays that should be compensated for in super-resolution direction-of-arrival (DOA) algorithms, such as the Multiple Signal Classification (MUSIC) algorithm. A crucial step in array calibration is the determination of the mutual coupling coefficients for the antenna array. In this paper, a system theoretic approach is presented for the mutual coupling characterization of antenna arrays. This approach is simple to understand and implement, leading to further advantages in reducing the number of calibration measurements. In this context, a measurement reduction method for antenna arrays with omni-directional and identical elements is proposed which is based on the symmetry planes in the array geometry. The proposed method significantly decreases the number of measurements during the calibration process. This method is evaluated using different array types whose responses and mutual coupling characteristics are obtained through numerical electromagnetic simulations. It is shown that a single calibration measurement is sufficient for uniform circular arrays. Certain important and interesting characteristics observed during the experiments are outlined.

  18. Functional data analysis of generalized regression quantiles

    KAUST Repository

    Guo, Mengmeng; Zhou, Lan; Huang, Jianhua Z.; Härdle, Wolfgang Karl

    2013-01-01

    Generalized regression quantiles, including the conditional quantiles and expectiles as special cases, are useful alternatives to the conditional means for characterizing a conditional distribution, especially when the interest lies in the tails. We develop a functional data analysis approach to jointly estimate a family of generalized regression quantiles. Our approach assumes that the generalized regression quantiles share some common features that can be summarized by a small number of principal component functions. The principal component functions are modeled as splines and are estimated by minimizing a penalized asymmetric loss measure. An iterative least asymmetrically weighted squares algorithm is developed for computation. While separate estimation of individual generalized regression quantiles usually suffers from large variability due to lack of sufficient data, by borrowing strength across data sets, our joint estimation approach significantly improves the estimation efficiency, which is demonstrated in a simulation study. The proposed method is applied to data from 159 weather stations in China to obtain the generalized quantile curves of the volatility of the temperature at these stations. © 2013 Springer Science+Business Media New York.

  20. Identifying Interacting Genetic Variations by Fish-Swarm Logic Regression

    Science.gov (United States)

    Yang, Aiyuan; Yan, Chunxia; Zhu, Feng; Zhao, Zhongmeng; Cao, Zhi

    2013-01-01

    Understanding associations between genotypes and complex traits is a fundamental problem in human genetics. A major open problem in mapping phenotypes is that of identifying a set of interacting genetic variants, which might contribute to complex traits. Logic regression (LR) is a powerful multivariant association tool. Several LR-based approaches have been successfully applied to different datasets. However, these approaches are not adequate with regard to accuracy and efficiency. In this paper, we propose a new LR-based approach, called fish-swarm logic regression (FSLR), which improves the logic regression process by incorporating swarm optimization. In our approach, a school of fish agents are conducted in parallel. Each fish agent holds a regression model, while the school searches for better models through various preset behaviors. A swarm algorithm improves the accuracy and the efficiency by speeding up the convergence and preventing it from dropping into local optimums. We apply our approach on a real screening dataset and a series of simulation scenarios. Compared to three existing LR-based approaches, our approach outperforms them by having lower type I and type II error rates, being able to identify more preset causal sites, and performing at faster speeds. PMID:23984382

  1. Identifying Interacting Genetic Variations by Fish-Swarm Logic Regression

    Directory of Open Access Journals (Sweden)

    Xuanping Zhang

    2013-01-01

    Full Text Available Understanding associations between genotypes and complex traits is a fundamental problem in human genetics. A major open problem in mapping phenotypes is that of identifying a set of interacting genetic variants, which might contribute to complex traits. Logic regression (LR) is a powerful multivariant association tool. Several LR-based approaches have been successfully applied to different datasets. However, these approaches are not adequate with regard to accuracy and efficiency. In this paper, we propose a new LR-based approach, called fish-swarm logic regression (FSLR), which improves the logic regression process by incorporating swarm optimization. In our approach, a school of fish agents are conducted in parallel. Each fish agent holds a regression model, while the school searches for better models through various preset behaviors. A swarm algorithm improves the accuracy and the efficiency by speeding up the convergence and preventing it from dropping into local optimums. We apply our approach on a real screening dataset and a series of simulation scenarios. Compared to three existing LR-based approaches, our approach outperforms them by having lower type I and type II error rates, being able to identify more preset causal sites, and performing at faster speeds.

  2. Calibration of the dietary data obtained from the Brazilian center of the Natural History of HPV Infection in Men study: the HIM Study.

    Science.gov (United States)

    Teixeira, Juliana Araujo; Baggio, Maria Luiza; Fisberg, Regina Mara; Marchioni, Dirce Maria Lobo

    2010-12-01

    The objective of this study was to estimate the calibration regressions for the dietary data that were measured using the quantitative food frequency questionnaire (QFFQ) in the Natural History of HPV Infection in Men (HIM) Study in Brazil. A sample of 98 individuals from the HIM study answered one QFFQ and three 24-hour recalls (24HR) at interviews. The calibration was performed using linear regression analysis in which the 24HR was the dependent variable and the QFFQ was the independent variable. Age, body mass index, physical activity, income and schooling were used as adjustment variables in the models. The geometric means between the 24HR and the calibration-corrected QFFQ were statistically equal. The dispersion graphs between the instruments demonstrate increased correlation after the correction, although there is greater dispersion of the points for models with worse explanatory power. Identification of the calibration regressions for the dietary data of the HIM study will make it possible to estimate the effect of diet on HPV infection, corrected for the measurement error of the QFFQ.
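
    The calibration step described here — regressing the reference 24HR on the QFFQ and using the fitted values as the corrected exposure — can be sketched with simulated data. The numbers below are invented for illustration, not HIM-study values, and the covariate adjustments are omitted:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 98
true_intake = rng.lognormal(mean=5.0, sigma=0.4, size=n)   # "true" intake
qffq = 0.7 * true_intake + 40 + rng.normal(0, 30, n)       # FFQ: biased + noisy
recall = true_intake + rng.normal(0, 20, n)                # 24HR: unbiased reference

# Calibration regression: reference instrument regressed on the questionnaire.
A = np.vstack([qffq, np.ones(n)]).T
slope, intercept = np.linalg.lstsq(A, recall, rcond=None)[0]
calibrated = slope * qffq + intercept                      # corrected exposure

print(f"mean 24HR = {recall.mean():.1f}, mean calibrated QFFQ = {calibrated.mean():.1f}")
```

Because the model includes an intercept, the calibrated values reproduce the reference mean exactly — consistent with the statistically equal means the abstract reports — while the spread of the calibrated distribution is shrunken relative to the reference, which is why categorizing the corrected exposure can still misclassify (the issue raised in the first record of this collection).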

  3. Cost-of-illness studies based on massive data: a prevalence-based, top-down regression approach.

    Science.gov (United States)

    Stollenwerk, Björn; Welchowski, Thomas; Vogl, Matthias; Stock, Stephanie

    2016-04-01

    Despite the increasing availability of routine data, no analysis method has yet been presented for cost-of-illness (COI) studies based on massive data. We aim, first, to present such a method and, second, to assess the relevance of the associated gain in numerical efficiency. We propose a prevalence-based, top-down regression approach consisting of five steps: aggregating the data; fitting a generalized additive model (GAM); predicting costs via the fitted GAM; comparing predicted costs between prevalent and non-prevalent subjects; and quantifying the stochastic uncertainty via error propagation. To demonstrate the method, it was applied to aggregated German sickness fund data (from 1999) in the context of chronic lung disease, covering over 7.3 million insured persons. To assess the gain in numerical efficiency, the computational time of the innovative approach was compared with that of corresponding GAMs applied to simulated individual-level data. Furthermore, the probability of model failure was modeled via logistic regression. Applying the innovative method was reasonably fast (19 min). In contrast, for patient-level data, computational time increased disproportionately with sample size. Furthermore, using patient-level data was accompanied by a substantial risk of model failure (about 80% for 6 million subjects). The gain in computational efficiency of the innovative COI method therefore seems to be of practical relevance. Furthermore, it may yield more precise cost estimates.
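
    The efficiency gain from the aggregation step is easiest to see in a linear special case: when all covariates are categorical, weighted least squares on the cell means reproduces the individual-level fit exactly. The sketch below uses a plain linear model standing in for the paper's GAM, with invented variables:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
prevalent = rng.integers(0, 2, n)                 # disease indicator
age_group = rng.integers(0, 5, n)                 # categorical covariate
cost = 100 + 50 * prevalent + 10 * age_group + rng.normal(0, 20, n)

# Step 1: aggregate to one row per covariate cell (2 x 5 = 10 cells).
cell = prevalent * 5 + age_group
counts = np.bincount(cell, minlength=10).astype(float)
ybar = np.bincount(cell, weights=cost, minlength=10) / counts
X = np.array([[1.0, c // 5, c % 5] for c in range(10)])

# Step 2: weighted least squares on the 10 aggregated rows (weights = cell sizes).
sw = np.sqrt(counts)[:, None]
beta_agg = np.linalg.lstsq(X * sw, ybar * np.sqrt(counts), rcond=None)[0]

# Reference: the same model fitted to all 100,000 individual rows.
Xi = np.column_stack([np.ones(n), prevalent, age_group])
beta_ind = np.linalg.lstsq(Xi, cost, rcond=None)[0]
print(beta_agg.round(3), beta_ind.round(3))
```

The two coefficient vectors agree because the normal equations are identical: a design matrix that is constant within cells makes the individual-level fit a function of cell counts and cell means only. That is why fitting on ten rows instead of a hundred thousand loses nothing here, and why the paper's aggregated GAM is so much faster.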

  4. Alternative regression models to assess increase in childhood BMI

    OpenAIRE

    Beyerlein, Andreas; Fahrmeir, Ludwig; Mansmann, Ulrich; Toschke, André M

    2008-01-01

    Abstract Background Body mass index (BMI) data usually have skewed distributions, for which common statistical modeling approaches such as simple linear or logistic regression have limitations. Methods Different regression approaches to predicting childhood BMI were compared by goodness-of-fit measures and means of interpretation, including generalized linear models (GLMs), quantile regression and Generalized Additive Models for Location, Scale and Shape (GAMLSS). We analyzed data of 4967 childre...

  5. Uranium cross-calibration measurements using an active well coincidence counter

    International Nuclear Information System (INIS)

    Nikolaev, V.; Prochine, I.; Smirnov, V.; Ensslin, N.; Carillo, L.

    1998-01-01

    This paper reports on the cross-calibration of an Active Well Coincidence Counter for use in the Materials Protection, Control, and Accountability Graduate Program at the Moscow State Engineering Physics Institute (MEPhI). The cross-calibration procedure and its application to nuclear material types available at MEPhI for instructional purposes is described. Cross-calibration results at Los Alamos and initial applications at MEPhI are summarized. Based on the results so far, the authors conclude that the cross-calibration approach seems useful, with good prospects for potential applications at other Russian and US Dept. of Energy facilities

  6. Validity of the reduced-sample insulin modified frequently-sampled intravenous glucose tolerance test using the nonlinear regression approach.

    Science.gov (United States)

    Sumner, Anne E; Luercio, Marcella F; Frempong, Barbara A; Ricks, Madia; Sen, Sabyasachi; Kushner, Harvey; Tulloch-Reid, Marshall K

    2009-02-01

    The disposition index, the product of the insulin sensitivity index (S(I)) and the acute insulin response to glucose, is linked in African Americans to chromosome 11q. This link was determined with S(I) calculated with the nonlinear regression approach to the minimal model and data from the reduced-sample insulin-modified frequently-sampled intravenous glucose tolerance test (Reduced-Sample-IM-FSIGT). However, the application of the nonlinear regression approach to calculate S(I) using data from the Reduced-Sample-IM-FSIGT has been challenged as being not only inaccurate but also having a high failure rate in insulin-resistant subjects. Our goal was to determine the accuracy and failure rate of the Reduced-Sample-IM-FSIGT using the nonlinear regression approach to the minimal model. With S(I) from the Full-Sample-IM-FSIGT considered the standard and using the nonlinear regression approach to the minimal model, we compared the agreement between S(I) from the Full- and Reduced-Sample-IM-FSIGT protocols. One hundred African Americans (body mass index, 31.3 +/- 7.6 kg/m(2) [mean +/- SD]; range, 19.0-56.9 kg/m(2)) had FSIGTs. Glucose (0.3 g/kg) was given at baseline. Insulin was infused from 20 to 25 minutes (total insulin dose, 0.02 U/kg). For the Full-Sample-IM-FSIGT, S(I) was calculated based on the glucose and insulin samples taken at -1, 1, 2, 3, 4, 5, 6, 7, 8, 10, 12, 14, 16, 19, 22, 23, 24, 25, 27, 30, 40, 50, 60, 70, 80, 90, 100, 120, 150, and 180 minutes. For the Reduced-Sample-IM-FSIGT, S(I) was calculated based on a subset of these time points (marked in bold in the original abstract). Agreement was determined by Spearman correlation, concordance, and the Bland-Altman method. In addition, for both protocols, the population was divided into tertiles of S(I). Insulin resistance was defined by the lowest tertile of S(I) from the Full-Sample-IM-FSIGT. The distribution of subjects across tertiles was compared by rank order and kappa statistic.
We found that the rate of failure of resolution of S(I) by

  7. Comparison of two-concentration with multi-concentration linear regressions: Retrospective data analysis of multiple regulated LC-MS bioanalytical projects.

    Science.gov (United States)

    Musuku, Adrien; Tan, Aimin; Awaiye, Kayode; Trabelsi, Fethi

    2013-09-01

    Linear calibration is usually performed using eight to ten calibration concentration levels in regulated LC-MS bioanalysis because a minimum of six are specified in regulatory guidelines. However, we have previously reported that two-concentration linear calibration is as reliable as or even better than using multiple concentrations. The purpose of this research is to compare two-concentration with multiple-concentration linear calibration through retrospective data analysis of multiple bioanalytical projects that were conducted in an independent regulated bioanalytical laboratory. A total of 12 bioanalytical projects were randomly selected: two validations and two studies for each of the three most commonly used types of sample extraction methods (protein precipitation, liquid-liquid extraction, solid-phase extraction). When the existing data were retrospectively linearly regressed using only the lowest and the highest concentration levels, no extra batch failure/QC rejection was observed and the differences in accuracy and precision between the original multi-concentration regression and the new two-concentration linear regression are negligible. Specifically, the differences in overall mean apparent bias (square root of mean individual bias squares) are within the ranges of -0.3% to 0.7% and 0.1-0.7% for the validations and studies, respectively. The differences in mean QC concentrations are within the ranges of -0.6% to 1.8% and -0.8% to 2.5% for the validations and studies, respectively. The differences in %CV are within the ranges of -0.7% to 0.9% and -0.3% to 0.6% for the validations and studies, respectively. The average differences in study sample concentrations are within the range of -0.8% to 2.3%. With two-concentration linear regression, an average of 13% of time and cost could have been saved for each batch together with 53% of saving in the lead-in for each project (the preparation of working standard solutions, spiking, and aliquoting). 
Furthermore
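
    The two-concentration idea is easy to demonstrate: with a truly linear response, a line through only the lowest and highest calibrators back-calculates samples essentially as well as a weighted eight-point fit. The simulation below is illustrative; the noise level and weighting scheme are assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)
conc = np.array([1, 2, 5, 10, 50, 100, 500, 1000], float)   # ng/mL
resp = 0.8 * conc + 0.5
resp += rng.normal(0, 0.02 * resp)                          # ~2% relative noise

# Multi-concentration fit: all eight levels, 1/x^2 weighting (common in LC-MS;
# np.polyfit applies w to residuals before squaring, hence w = 1/x).
slope_m, int_m = np.polyfit(conc, resp, 1, w=1.0 / conc)

# Two-concentration fit: line through the lowest and highest calibrators only.
slope_2 = (resp[-1] - resp[0]) / (conc[-1] - conc[0])
int_2 = resp[0] - slope_2 * conc[0]

# Back-calculate a mid-range QC sample with both calibrations.
qc_resp = 0.8 * 40 + 0.5                                    # noise-free 40 ng/mL QC
qc_multi = (qc_resp - int_m) / slope_m
qc_two = (qc_resp - int_2) / slope_2
print(f"QC 40 ng/mL back-calculated: multi = {qc_multi:.1f}, two-point = {qc_two:.1f}")
```

Both calibrations recover the QC concentration to within a few percent, mirroring the sub-3% differences the retrospective analysis reports.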

  8. Model-free prediction and regression a transformation-based approach to inference

    CERN Document Server

    Politis, Dimitris N

    2015-01-01

    The Model-Free Prediction Principle expounded upon in this monograph is based on the simple notion of transforming a complex dataset to one that is easier to work with, e.g., i.i.d. or Gaussian. As such, it restores the emphasis on observable quantities, i.e., current and future data, as opposed to unobservable model parameters and estimates thereof, and yields optimal predictors in diverse settings such as regression and time series. Furthermore, the Model-Free Bootstrap takes us beyond point prediction in order to construct frequentist prediction intervals without resort to unrealistic assumptions such as normality. Prediction has been traditionally approached via a model-based paradigm, i.e., (a) fit a model to the data at hand, and (b) use the fitted model to extrapolate/predict future data. Due to both mathematical and computational constraints, 20th century statistical practice focused mostly on parametric models. Fortunately, with the advent of widely accessible powerful computing in the late 1970s, co...

  9. Dither Gyro Scale Factor Calibration: GOES-16 Flight Experience

    Science.gov (United States)

    Reth, Alan D.; Freesland, Douglas C.; Krimchansky, Alexander

    2018-01-01

    This poster is a sequel to a paper presented at the 34th Annual AAS Guidance and Control Conference in 2011, which first introduced dither-based calibration of gyro scale factors. The dither approach uses very small excitations, avoiding the need to take instruments offline during gyro scale factor calibration. In 2017, the dither calibration technique was successfully used to estimate gyro scale factors on the GOES-16 satellite. On-orbit dither calibration results were compared to more traditional methods using large angle spacecraft slews about each gyro axis, requiring interruption of science. The results demonstrate that the dither technique can estimate gyro scale factors to better than 2000 ppm during normal science observations.
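
    At its core the dither estimate is a least-squares projection of the measured gyro rate onto the known, tiny excitation. A toy sketch follows: the commanded dither is taken as ground truth here, whereas on GOES-16 the reference comes from the attitude determination system, and all numbers below are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(0.0, 60.0, 0.01)                         # 60 s at 100 Hz
dither = 2e-4 * np.sin(2 * np.pi * 0.5 * t)            # tiny 0.5 Hz dither, rad/s
true_scale = 1.0015                                     # 1500 ppm scale-factor error
gyro = true_scale * dither + rng.normal(0, 1e-6, t.size)  # measured rate + noise

# Least-squares estimate: project the gyro output onto the reference dither.
scale_hat = np.dot(gyro, dither) / np.dot(dither, dither)
print(f"estimated scale-factor error: {(scale_hat - 1) * 1e6:.0f} ppm")
```

Because the estimate averages over many dither cycles, a sub-milliradian-scale excitation can resolve the scale factor to a few hundred ppm without interrupting science observations, which is the advantage the poster highlights over large-angle slew calibrations.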

  10. A new measurement-while-drilling gamma ray log calibrator

    International Nuclear Information System (INIS)

    Meisner, J.; Brooks, A.; Wisniewski, W.

    1985-01-01

    Many of the present methods of calibration for both wireline and MWD gamma ray detectors use a point source at a fixed distance from the detector. MWD calibration errors are introduced from scattering effects, from spectral differences, from position sensitivity and from lack of cylindrical geometry. A new method has been developed at Exploration Logging Inc. (EXLOG) that eliminates these errors. The method uses a wrap-around or annular calibrator, referenced to the University of Houston gamma ray API pit. The new calibrator is designed to simulate the API pit's gamma ray emission spectrum with a finite amount of natural source material in the annular shape. Because of the thickness of steel between the MWD gamma ray detector and the formation, there is a theoretical necessity for spectral matching. A simple theoretical approach was used to calibrate the new calibrator. Spectral matching allows a closer approximation to wireline logs and makes it possible to estimate the relative spectral content of a formation.

  11. Satellite rainfall retrieval by logistic regression

    Science.gov (United States)

    Chiu, Long S.

    1986-01-01

    The potential use of logistic regression in rainfall estimation from satellite measurements is investigated. Satellite measurements provide covariate information in terms of radiances from different remote sensors. The logistic regression technique can effectively accommodate many covariates and test their significance in the estimation. The outcome from the logistic model is the probability that the rainrate of a satellite pixel is above a certain threshold. By varying the threshold, a rainrate histogram can be obtained, from which the mean and the variance can be estimated. A logistic model is developed and applied to rainfall data collected during GATE, using as covariates the fractional rain area and a radiance measurement deduced from a microwave temperature-rainrate relation. It is demonstrated that the fractional rain area is an important covariate in the model, consistent with the use of the so-called Area Time Integral in estimating total rain volume in other studies. To calibrate the logistic model, simulated rain fields generated by rain-field models with prescribed parameters are needed. A stringent test of the logistic model is its ability to recover the prescribed parameters of simulated rain fields. A rain-field simulation model which preserves the fractional rain area and lognormality of rainrates as found in GATE is developed. A stochastic regression model of branching and immigration, whose solutions are lognormally distributed in some asymptotic limits, has also been developed.
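
    The exceedance model can be sketched with an iteratively reweighted least squares (IRLS) logistic fit in pure NumPy; the covariates and coefficients below are simulated, not GATE values. Repeating the fit over a grid of rainrate thresholds would yield the histogram from which the mean and variance are estimated:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000
frac_area = rng.uniform(0, 1, n)                 # fractional rain area covariate
radiance = rng.normal(0, 1, n)                   # microwave-derived covariate
logit = -2.0 + 4.0 * frac_area + 1.0 * radiance
rain_exceeds = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(float)

def fit_logistic(X, y, iters=25):
    """Logistic regression by iteratively reweighted least squares (Newton)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        W = p * (1 - p)
        # Newton step: (X'WX)^-1 X'(y - p)
        beta = beta + np.linalg.solve((X * W[:, None]).T @ X, X.T @ (y - p))
    return beta

X = np.column_stack([np.ones(n), frac_area, radiance])
beta = fit_logistic(X, rain_exceeds)
print("fitted coefficients:", beta.round(2))
```

The fitted coefficients recover the simulated ones, and the size of the fractional-rain-area coefficient relative to its uncertainty is exactly the kind of significance test the abstract refers to.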

  12. Travelling gradient thermocouple calibration

    International Nuclear Information System (INIS)

    Broomfield, G.H.

    1975-01-01

    A short discussion of the origins of the thermocouple EMF is used to re-introduce the idea that the Peltier and Thomson effects are indistinguishable from one another. Thermocouples may be viewed as devices which generate an EMF at junctions, or as integrators of EMFs developed in thermal gradients. The thermal-gradient view is considered the more appropriate, because it accords better with theory and observed behaviour; the correct approach to calibration, and to the investigation of service effects, then becomes immediately obvious. Inhomogeneities arise in thermocouples during manufacture and in service. The results of travelling-gradient measurements are used to show that such effects are revealed with a resolution which depends on the length of the gradient, although they may be masked during simple immersion calibration. Proposed tests on thermocouples irradiated in a nuclear reactor are discussed
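
    The gradient view has a direct numerical reading: the EMF is the integral of the Seebeck coefficient weighted by the local temperature gradient, so an inhomogeneity contributes only while the gradient overlaps it. A toy sketch, with the Seebeck values and defect geometry invented for illustration:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 2001)                  # position along the wire, m
dx = x[1] - x[0]
S = np.full_like(x, 40e-6)                        # Seebeck coefficient, V/K
S[(x > 0.55) & (x < 0.60)] = 36e-6                # local inhomogeneity

def emf(grad_pos, width=0.02, dT=100.0):
    """EMF with a steep travelling gradient (total step dT) centred at grad_pos."""
    dTdx = dT * np.exp(-0.5 * ((x - grad_pos) / width) ** 2) / (width * np.sqrt(2 * np.pi))
    return np.sum(S * dTdx) * dx                  # integral of S(x) * dT/dx

far = emf(0.20)            # gradient away from the defect: nominal EMF
near = emf(0.575)          # gradient over the defect: EMF dips, revealing it
print(f"EMF far = {far * 1e3:.3f} mV, over defect = {near * 1e3:.3f} mV")
```

With the gradient anywhere away from the defect the EMF is the nominal value, which is how a simple immersion calibration (junctions in uniform zones) can mask an inhomogeneity that a travelling gradient reveals.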

  13. Calibration Lessons Learned from Hyperion Experience

    Science.gov (United States)

    Casement, S.; Ho, K.; Sandor-Leahy, S.; Biggar, S.; Czapla-Myers, J.; McCorkel, J.; Thome, K.

    2009-12-01

    improved methodologies during sensor development and additional capabilities for on-orbit calibration systems that will enable reduction of the error terms. A key lesson learned was the approach taken to combine the various calibration results of Hyperion, which differed by more than the uncertainties in the methods.

  14. Calibrating the absolute amplitude scale for air showers measured at LOFAR

    International Nuclear Information System (INIS)

    Nelles, A.; Hörandel, J. R.; Karskens, T.; Krause, M.; Corstanje, A.; Enriquez, J. E.; Falcke, H.; Rachen, J. P.; Rossetto, L.; Schellart, P.; Buitink, S.; Erdmann, M.; Krause, R.; Haungs, A.; Hiller, R.; Huege, T.; Link, K.; Schröder, F. G.; Norden, M. J.; Scholten, O.

    2015-01-01

    Air showers induced by cosmic rays create nanosecond pulses detectable at radio frequencies. These pulses have been measured successfully in the past few years at the LOw-Frequency ARray (LOFAR) and are used to study the properties of cosmic rays. For a complete understanding of this phenomenon and the underlying physical processes, an absolute calibration of the detecting antenna system is needed. We present three approaches that were used to check and improve the antenna model of LOFAR and to provide an absolute calibration of the whole system for air shower measurements. Two methods are based on calibrated reference sources and one on a calibration approach using the diffuse radio emission of the Galaxy, optimized for short data-sets. An accuracy of 19% in amplitude is reached. The absolute calibration is also compared to predictions from air shower simulations. These results are used to set an absolute energy scale for air shower measurements and can be used as a basis for an absolute scale for the measurement of astronomical transients with LOFAR

  15. Self-calibration of a cone-beam micro-CT system

    International Nuclear Information System (INIS)

    Patel, V.; Chityala, R. N.; Hoffmann, K. R.; Ionita, C. N.; Bednarek, D. R.; Rudin, S.

    2009-01-01

    Use of cone-beam computed tomography (CBCT) is becoming more frequent. For proper reconstruction, the geometry of the CBCT systems must be known. While the system can be designed to reduce errors in the geometry, calibration measurements must still be performed and corrections applied. Investigators have proposed techniques using calibration objects for system calibration. In this study, the authors present methods to calibrate a rotary-stage CB micro-CT (CBμCT) system using only the images acquired of the object to be reconstructed, i.e., without the use of calibration objects. Projection images are acquired using a CBμCT system constructed in the authors' laboratories. Dark- and flat-field corrections are performed. Exposure variations are detected and quantified using analysis of image regions with an unobstructed view of the x-ray source. Translations that occur during the acquisition in the horizontal direction are detected, quantified, and corrected based on sinogram analysis. The axis of rotation is determined using registration of opposed projection images. These techniques were evaluated using data obtained with calibration objects and phantoms. The physical geometric axis of rotation is determined and aligned with the rotational axis (assumed to be the center of the detector plane) used in the reconstruction process. The parameters describing this axis agree to within 0.1 mm and 0.3 deg with those determined using other techniques. Blurring due to residual calibration errors has a point-spread function in the reconstructed planes with a full-width-at-half-maximum of less than 125 μm in a tangential direction and essentially zero in the radial direction for the rotating object. The authors have used this approach on over 100 acquisitions over the past 2 years and have regularly obtained high-quality reconstructions, i.e., without artifacts and no detectable blurring of the reconstructed objects. This self-calibrating approach not only obviates
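
    The rotation-axis step has a compact parallel-beam analogue: the 180° projection is the 0° projection mirrored about the axis, so cross-correlating one view with the flipped opposite view locates the axis. The sketch below uses synthetic profiles; the paper's cone-beam procedure is more involved:

```python
import numpy as np

N = 512
u = np.arange(N, dtype=float)                  # detector pixel coordinate
center = 262.0                                  # true axis position in pixels

def profile(s):
    """Projection of an off-center two-blob object vs. distance from the axis."""
    return np.exp(-0.5 * ((s - 40) / 15) ** 2) + 0.6 * np.exp(-0.5 * ((s + 70) / 25) ** 2)

p0 = profile(u - center)                        # 0-degree view
p180 = profile(-(u - center))                   # 180-degree view: mirrored about the axis

# Flip one projection and find the lag that aligns the pair;
# the axis offset from the detector center is half the best-aligning lag.
flipped = p180[::-1]
corr = np.correlate(p0 - p0.mean(), flipped - flipped.mean(), mode="full")
lag = int(np.argmax(corr)) - (N - 1)            # p0[u] ~ flipped[u - lag]
axis_est = (N - 1) / 2 + lag / 2
print(f"estimated axis = {axis_est:.1f} px (true {center})")
```

No calibration object is required: any asymmetric projection of the scanned object itself carries enough information to locate the axis, which is the essence of the self-calibration approach described in the abstract.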

  16. A laser calibration system for the STAR TPC

    CERN Document Server

    Lebedev, A

    2002-01-01

    A Time Projection Chamber (TPC) is the primary tracking detector for the STAR experiment at RHIC. A laser calibration system was built to calibrate and monitor the TPC tracking performance. The laser system uses a novel design which produces approx 500 thin, ionizing beams distributed throughout the tracking volume. This new approach is significantly simpler than the traditional ones, and provides complete TPC coverage at a reduced cost. The laser system was used during the RHIC 2000 summer run to measure drift velocities with about 0.02% accuracy and to monitor the TPC performance. Calibration runs were made with and without a magnetic field to check B field map corrections.

  17. Semi-empirical neutron tool calibration (one and two-group approximation)

    International Nuclear Information System (INIS)

    Czubek, J.A.

    1988-01-01

    The physical principles of a new method of calibrating neutron tools for rock porosity determination are given. A short description of the physics of neutron transport in matter is presented, together with some remarks on the elementary interactions of neutrons with nuclei (cross sections, group cross sections, etc.). Definitions of the main integral parameters characterizing neutron transport in rock media are given. The three main approaches to the calibration problem (empirical, theoretical and semi-empirical) are presented, with a more detailed description of the latter. The new semi-empirical approach is described. The method is based on the definition of the apparent slowing-down or migration length for neutrons sensed by the neutron tool situated in the real borehole-rock conditions. To calculate these apparent slowing-down or migration lengths, the ratio of the proper spatial moments of the neutron distribution along the borehole axis is used. Theoretical results are given for one- and two-group diffusion approximations in the rock-borehole geometry when the tool is in the sidewall position. The physical and chemical parameters are given for the calibration blocks of the Logging Company in Zielona Gora. Using these data, the neutron parameters of the calibration blocks have been calculated. An example is given of how to determine the calibration curve for the dual-detector tool by applying this new method, using the neutron parameters mentioned above together with measurements performed in the calibration blocks. The most important advantage of the new semi-empirical calibration method is the possibility of placing, on a single calibration curve, all experimental calibration data obtained for a given neutron tool for different porosities, lithologies and borehole diameters. 52 refs., 21 figs., 21 tabs. (author)

  18. CryoSat-2 SIRAL Calibration and Performance

    Science.gov (United States)

    Fornari, M.; Scagliola, M.; Tagliani, N.; Parrinello, T.

    2012-12-01

    The main payload of CryoSat-2 is a Ku-band pulse-width-limited radar altimeter, called SIRAL (Synthetic interferometric radar altimeter), that transmits pulses at a high pulse repetition frequency, making the received echoes phase coherent and suitable for azimuth processing. This yields an along-track resolution of about 250 meters, a significant improvement over traditional pulse-width-limited altimeters. Because SIRAL is a phase-coherent pulse-width-limited radar altimeter, a dedicated calibration approach has been developed, including both internal and external calibration. The internal calibration monitors the instrument impulse response and transfer function, as in traditional altimeters; in addition, the interferometer requires a special calibration developed ad hoc for SIRAL. The external calibration is performed with a ground transponder, located in Svalbard, which receives the SIRAL signal and sends the echo back to the satellite. Internal calibration data are processed on ground by the CryoSat-2 Instrument Processing Facility (IPF1) and then applied to the science data. In December 2012, two and a half years of calibration data will be available, which will be shown in this poster. The external calibration (transponder) data are processed and analyzed independently from the operational chain. The use of an external transponder has been very useful for determining instrument performance and for tuning the on-ground processor. This poster presents the transponder results in terms of range noise and datation error.

  19. Another look at volume self-calibration: calibration and self-calibration within a pinhole model of Scheimpflug cameras

    International Nuclear Information System (INIS)

    Cornic, Philippe; Le Besnerais, Guy; Champagnat, Frédéric; Illoul, Cédric; Cheminet, Adam; Le Sant, Yves; Leclaire, Benjamin

    2016-01-01

    We address calibration and self-calibration of tomographic PIV experiments within a pinhole model of cameras. A complete and explicit pinhole model of a camera equipped with a 2-tilt angles Scheimpflug adapter is presented. It is then used in a calibration procedure based on a freely moving calibration plate. While the resulting calibrations are accurate enough for Tomo-PIV, we confirm, through a simple experiment, that they are not stable in time, and illustrate how the pinhole framework can be used to provide a quantitative evaluation of geometrical drifts in the setup. We propose an original self-calibration method based on global optimization of the extrinsic parameters of the pinhole model. These methods are successfully applied to the tomographic PIV of an air jet experiment. An unexpected by-product of our work is to show that volume self-calibration induces a change in the world frame coordinates. Provided the calibration drift is small, as generally observed in PIV, the bias on the estimated velocity field is negligible but the absolute location cannot be accurately recovered using standard calibration data. (paper)

  20. Radiometric Cross-Calibration of the Chilean Satellite FASat-C Using RapidEye and EO-1 Hyperion Data and a Simultaneous Nadir Overpass Approach

    Directory of Open Access Journals (Sweden)

    Carolina Barrientos

    2016-07-01

    Full Text Available The absolute radiometric calibration of a satellite sensor is the critical factor that ensures the usefulness of the acquired data for quantitative remote sensing applications. This work presents the results of the first cross-calibration of the sensor on board the Sistema Satelital de Observación de la Tierra (SSOT) Chilean satellite, or Air Force Satellite FASat-C. RapidEye-MSI was chosen as the reference sensor, and a simultaneous nadir overpass (SNO) approach was applied. The biases caused by differences in the spectral responses of the two instruments were compensated through an adjustment factor derived from EO-1 Hyperion data. Through this method, the variations affecting the radiometric response of the New AstroSat Optical Modular Instrument (NAOMI-1) have been corrected based on collections over the Frenchman Flat calibration site. A preliminary evaluation of the pre-flight and updated coefficients has shown a significant improvement in the accuracy of at-sensor radiances and TOA reflectances: an average agreement of 2.63% (RMSE) was achieved for the multispectral bands of the two instruments. This research will provide a basis for the continuity of calibration and validation tasks of future Chilean space missions.

  1. Cumulative sum quality control for calibrated breast density measurements

    International Nuclear Information System (INIS)

    Heine, John J.; Cao Ke; Beam, Craig

    2009-01-01

    Purpose: Breast density is a significant breast cancer risk factor. Although various methods are used to estimate breast density, there is no standard measurement for this important factor. The authors are developing a breast density standardization method for use in full field digital mammography (FFDM). The approach calibrates for interpatient acquisition technique differences. The calibration produces a normalized breast density pixel value scale. The method relies on first generating a baseline (BL) calibration dataset, which required extensive phantom imaging. Standardizing prospective mammograms with calibration data generated in the past could introduce unanticipated error in the standardized output if the calibration dataset is no longer valid. Methods: Sample points from the BL calibration dataset were imaged approximately biweekly over an extended timeframe. These serial samples were used to evaluate the BL dataset reproducibility and quantify the serial calibration accuracy. The cumulative sum (Cusum) quality control method was used to evaluate the serial sampling. Results: There is considerable drift in the serial sample points from the BL calibration dataset that is x-ray beam dependent. Systematic deviation from the BL dataset caused significant calibration errors. This system drift was not captured with routine system quality control measures. Cusum analysis indicated that the drift is a sign of system wear and eventual x-ray tube failure. Conclusions: The BL calibration dataset must be monitored and periodically updated, when necessary, to account for sustained system variations to maintain the calibration accuracy.
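    The cumulative sum idea the authors apply to serial calibration samples can be sketched with a standard tabular CUSUM (an illustration with invented numbers, not the authors' QC code): small sustained departures from the in-control mean accumulate until a decision interval is crossed, flagging drift that single-sample checks miss.

    ```python
    import numpy as np

    # Minimal tabular CUSUM sketch. x: serial calibration samples in
    # standardized units; mu0: in-control mean; k: allowance (half the shift
    # to detect, in sigma units); h: decision interval.
    def cusum_alarm(x, mu0=0.0, k=0.5, h=5.0):
        s_hi = s_lo = 0.0
        for t, xt in enumerate(x):
            s_hi = max(0.0, s_hi + (xt - mu0) - k)   # accumulates upward drift
            s_lo = max(0.0, s_lo - (xt - mu0) - k)   # accumulates downward drift
            if s_hi > h or s_lo > h:
                return t          # first index at which the chart signals
        return None

    rng = np.random.default_rng(7)
    x = np.concatenate([rng.normal(0.0, 1.0, 30),    # in-control samples
                        rng.normal(1.5, 1.0, 30)])   # sustained 1.5-sigma drift
    alarm = cusum_alarm(x)
    ```

    With these conventional choices (k = 0.5, h = 5) the chart is tuned to catch roughly one-sigma sustained shifts while keeping false alarms rare.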

  2. Cumulative sum quality control for calibrated breast density measurements

    Energy Technology Data Exchange (ETDEWEB)

    Heine, John J.; Cao Ke; Beam, Craig [Cancer Prevention and Control Division, Moffitt Cancer Center, 12902 Magnolia Drive, Tampa, Florida 33612 (United States); Division of Epidemiology and Biostatistics, School of Public Health, University of Illinois at Chicago, 1603 W. Taylor St., Chicago, Illinois 60612 (United States)

    2009-12-15

    Purpose: Breast density is a significant breast cancer risk factor. Although various methods are used to estimate breast density, there is no standard measurement for this important factor. The authors are developing a breast density standardization method for use in full field digital mammography (FFDM). The approach calibrates for interpatient acquisition technique differences. The calibration produces a normalized breast density pixel value scale. The method relies on first generating a baseline (BL) calibration dataset, which required extensive phantom imaging. Standardizing prospective mammograms with calibration data generated in the past could introduce unanticipated error in the standardized output if the calibration dataset is no longer valid. Methods: Sample points from the BL calibration dataset were imaged approximately biweekly over an extended timeframe. These serial samples were used to evaluate the BL dataset reproducibility and quantify the serial calibration accuracy. The cumulative sum (Cusum) quality control method was used to evaluate the serial sampling. Results: There is considerable drift in the serial sample points from the BL calibration dataset that is x-ray beam dependent. Systematic deviation from the BL dataset caused significant calibration errors. This system drift was not captured with routine system quality control measures. Cusum analysis indicated that the drift is a sign of system wear and eventual x-ray tube failure. Conclusions: The BL calibration dataset must be monitored and periodically updated, when necessary, to account for sustained system variations to maintain the calibration accuracy.

  3. Methods for identifying SNP interactions: a review on variations of Logic Regression, Random Forest and Bayesian logistic regression.

    Science.gov (United States)

    Chen, Carla Chia-Ming; Schwender, Holger; Keith, Jonathan; Nunkesser, Robin; Mengersen, Kerrie; Macrossan, Paula

    2011-01-01

    Due to advancements in computational ability, enhanced technology and a reduction in the price of genotyping, more data are being generated for understanding genetic associations with diseases and disorders. However, with the availability of large data sets comes the inherent challenge of new methods of statistical analysis and modeling. Considering that a complex phenotype may be the effect of a combination of multiple loci, various statistical methods have been developed for identifying genetic epistasis effects. Among these methods, logic regression (LR) is an intriguing approach incorporating tree-like structures. Various methods have built on the original LR to improve different aspects of the model. In this study, we review four variations of LR, namely Logic Feature Selection, Monte Carlo Logic Regression, Genetic Programming for Association Studies, and Modified Logic Regression-Gene Expression Programming, and investigate the performance of each method using simulated and real genotype data. We contrast these with another tree-like approach, namely Random Forests, and a Bayesian logistic regression with stochastic search variable selection.
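    The core idea of logic regression — predicting a phenotype from Boolean combinations of binary SNP codings — can be illustrated with a toy brute-force search over two-SNP AND/OR terms (simulated data and a simple correlation score; none of the reviewed packages are used):

    ```python
    import numpy as np
    from itertools import combinations

    # Toy illustration of the logic-regression idea: the phenotype is driven
    # by a Boolean combination of SNPs, and we search all two-SNP AND / OR
    # terms for the best-scoring Boolean predictor.
    rng = np.random.default_rng(0)
    n, p = 600, 6
    snps = rng.integers(0, 2, size=(n, p))           # binary (dominant-coded) SNPs
    y = (snps[:, 1] & snps[:, 3]).astype(float)      # true interaction: SNP1 AND SNP3
    flip = rng.random(n) < 0.05                      # 5% phenotype noise
    y = np.where(flip, 1.0 - y, y)

    def score(feature, y):
        # absolute Pearson correlation as a simple association score
        return abs(np.corrcoef(feature, y)[0, 1])

    def term(i, j, op):
        return snps[:, i] & snps[:, j] if op == "AND" else snps[:, i] | snps[:, j]

    best = max(((i, j, op) for i, j in combinations(range(p), 2)
                for op in ("AND", "OR")),
               key=lambda t: score(term(*t), y))
    ```

    Real logic regression searches much larger logic trees with simulated annealing or MCMC rather than exhaustively, but the exhaustive two-SNP version shows why Boolean terms can capture epistasis that single-locus scans miss.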

  4. Investigating the complex relationship between in situ Southern Ocean pCO2 and its ocean physics and biogeochemical drivers using a nonparametric regression approach

    CSIR Research Space (South Africa)

    Pretorius, W

    2014-01-01

    Full Text Available the relationship more accurately in terms of MSE, RMSE and MAE, than a standard parametric approach (multiple linear regression). These results provide a platform for using the developed nonparametric regression model based on in situ measurements to predict p...
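    The kind of comparison the abstract reports — a nonparametric regression beating multiple linear regression on MSE, RMSE and MAE — can be sketched on synthetic data (a Nadaraya-Watson kernel smoother stands in for the study's nonparametric model; the predictor and response are invented):

    ```python
    import numpy as np

    # Compare a nonparametric kernel regression against ordinary linear
    # regression on a synthetic nonlinear driver-response relationship,
    # scoring both with MSE, RMSE and MAE.
    rng = np.random.default_rng(1)
    x = np.sort(rng.uniform(-3, 3, 300))
    y = np.sin(x) + 0.1 * rng.normal(size=x.size)    # nonlinear response

    # parametric benchmark: linear fit
    a, b = np.polyfit(x, y, 1)
    y_lin = a * x + b

    # Nadaraya-Watson estimator with a Gaussian kernel (bandwidth assumed)
    def kernel_fit(x_train, y_train, x_eval, bw=0.3):
        w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / bw) ** 2)
        return (w @ y_train) / w.sum(axis=1)

    y_ker = kernel_fit(x, y, x)

    def scores(y_true, y_pred):
        err = y_true - y_pred
        mse = np.mean(err ** 2)
        return mse, np.sqrt(mse), np.mean(np.abs(err))

    mse_lin, rmse_lin, mae_lin = scores(y, y_lin)
    mse_ker, rmse_ker, mae_ker = scores(y, y_ker)
    ```

    On a genuinely nonlinear relationship the kernel smoother wins on all three criteria, which is the pattern the abstract describes for the pCO2 drivers.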

  5. Modal and Wave Load Identification by ARMA Calibration

    DEFF Research Database (Denmark)

    Jensen, Jens Kristian Jehrbo; Kirkegaard, Poul Henning; Brincker, Rune

    1992-01-01

    In this note, modal parameter and wave load identification by calibration of ARMA models are considered for a simple offshore structure. The theory of identification by ARMA calibration is introduced as an identification technique in the time domain which can be applied to white noise-excited structures. The technique is illustrated by an experimental example of a monopile model excited by random waves. The identification results show that the approach is able to give very reliable estimates of the modal parameters. Furthermore, a comparison of the identified wave load process and the calculated load process based on the Morison equation shows...

  6. Redundant interferometric calibration as a complex optimization problem

    Science.gov (United States)

    Grobler, T. L.; Bernardi, G.; Kenyon, J. S.; Parsons, A. R.; Smirnov, O. M.

    2018-05-01

    Observations of the redshifted 21 cm line from the epoch of reionization have recently motivated the construction of low-frequency radio arrays with highly redundant configurations. These configurations provide an alternative calibration strategy - `redundant calibration' - and boost sensitivity on specific spatial scales. In this paper, we formulate calibration of redundant interferometric arrays as a complex optimization problem. We solve this optimization problem via the Levenberg-Marquardt algorithm. This calibration approach is more robust to initial conditions than current algorithms and, by leveraging an approximate matrix inversion, allows for further optimization and an efficient implementation (`redundant STEFCAL'). We also investigated using the preconditioned conjugate gradient method as an alternative to the approximate matrix inverse, but found that its computational performance is not competitive with respect to `redundant STEFCAL'. The efficient implementation of this new algorithm is made publicly available.
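    The formulation of redundant calibration as a complex optimization problem can be sketched for a small array (an illustration using SciPy's Levenberg-Marquardt solver on simulated data; `redundant STEFCAL` itself uses an approximate matrix inversion instead). For a 5-element east-west array with unit spacing, every baseline of the same length sees the same sky term, so the measurements satisfy v_pq = g_p · conj(g_q) · y_|p-q|:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Simulate a redundant array: per-antenna complex gains plus one unknown
    # "sky" visibility per baseline length, noiseless for clarity.
    rng = np.random.default_rng(3)
    n_ant = 5
    g_true = (1 + 0.1 * rng.normal(size=n_ant)) * np.exp(1j * 0.1 * rng.normal(size=n_ant))
    y_true = rng.normal(size=n_ant - 1) + 1j * rng.normal(size=n_ant - 1)

    pairs = [(p, q) for p in range(n_ant) for q in range(p + 1, n_ant)]
    vis = np.array([g_true[p] * np.conj(g_true[q]) * y_true[q - p - 1] for p, q in pairs])

    def unpack(x):
        # pack complex gains and sky terms as a real parameter vector
        g = x[:n_ant] + 1j * x[n_ant:2 * n_ant]
        y = x[2 * n_ant:3 * n_ant - 1] + 1j * x[3 * n_ant - 1:]
        return g, y

    def residuals(x):
        g, y = unpack(x)
        model = np.array([g[p] * np.conj(g[q]) * y[q - p - 1] for p, q in pairs])
        r = vis - model
        return np.concatenate([r.real, r.imag])

    # initialize gains at 1+0j and sky terms at the mean visibility per spacing
    y0 = np.array([np.mean([v for (p, q), v in zip(pairs, vis) if q - p == d + 1])
                   for d in range(n_ant - 1)])
    x0 = np.concatenate([np.ones(n_ant), np.zeros(n_ant), y0.real, y0.imag])
    fit = least_squares(residuals, x0, method="lm")
    ```

    As in the paper's formulation, the solution is only defined up to overall amplitude and phase degeneracies; the fit nevertheless drives the model residuals to zero on noiseless data.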

  7. Semi-empirical approach for calibration of CR-39 detectors in diffusion chambers for radon measurements

    International Nuclear Information System (INIS)

    Pereyra A, P.; Lopez H, M. E.; Palacios F, D.; Sajo B, L.; Valdivia, P.

    2016-10-01

    Simulated and measured calibrations of PADC detectors are given for cylindrical diffusion chambers employed in environmental radon measurements. The method is based on determining the minimum alpha energy (E min), the average critical angle (<Θ c>) and the fraction (f 1) of 218 Po atoms in the chamber volume; the results are compared to commercially available devices. The radon concentration for exposed detectors is obtained from the induced track densities and the well-established calibration coefficient of the NRPB monitor. The calibration coefficient of a PADC detector in a cylindrical diffusion chamber of any size is determined under the same chemical etching conditions and track analysis methodology. In this study, numerical examples and a comparison between experimental calibration coefficients and those from a purpose-made simulation code are presented. Results show that the developed method is applicable when uncertainties of 10% are acceptable. (Author)

  8. Calibration Curve of Neutron Moisture Meter for Sandy Soil under Drip Irrigation System

    International Nuclear Information System (INIS)

    Mohammad, Abd El- Moniem M.; Gendy, R. W.; Bedaiwy, M. N.

    2004-01-01

    The aim of this work is to construct a neutron calibration curve in order to be able to use the neutron probe in sandy soils under drip irrigation systems. The experimental work was conducted at the Soil and Water Department of the Nuclear Research Center, Atomic Energy Authority. Three replicates were used along the lateral lines of the drip irrigation system. For each dripper, ten neutron access tubes were installed to 100-cm depth at distances of 5, 15 and 25 cm from the dripper location around the drippers on the lateral line, as well as between lateral lines. The neutron calibrations were determined at 30, 45, and 60-cm depths. Determination coefficients as well as paired t-tests were employed to assess the accuracy of the calibrations. Results indicated that, in order for the neutron calibration curve to represent the whole wetted area around the emitter, three access tubes must be installed at distances of 5, 15, and 25 cm from the emitter. This calibration curve correlates the average count ratio (CR) at the studied soil depth of the three locations (5, 15, and 25-cm distances from the emitter) with the average moisture content (θ) for this soil depth over the entire wetted area. This procedure should be repeated at different times in order to obtain different θ and CR values, so that the regression equation of the calibration curve at this soil depth can be obtained. To determine the soil moisture content, the average CR of the three locations must be taken and substituted into the regression equation representing the neutron calibration curve. Results taken from access tubes placed at a distance of 15 cm from the emitter showed good agreement with the average calibration curve both for the 45- and the 60-cm depths, suggesting that the 15-cm distance may provide a suitable substitute for the simultaneous use of the three distances of 5, 15 and 25 cm. However, the obtained results also show that the neutron calibration curves of the 30-cm depth for
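    The described procedure — average the count ratios from the three access-tube distances, pair each average with the moisture content of the wetted area, and regress θ on CR — can be sketched as follows (all numbers are invented for illustration, not the study's measurements):

    ```python
    import numpy as np

    # Hypothetical count ratios from the three access-tube distances at one
    # depth, measured at five different times/moisture states.
    cr_5  = np.array([0.42, 0.55, 0.63, 0.78, 0.91])   # 5 cm from emitter
    cr_15 = np.array([0.40, 0.52, 0.61, 0.75, 0.88])   # 15 cm from emitter
    cr_25 = np.array([0.37, 0.50, 0.58, 0.72, 0.85])   # 25 cm from emitter
    theta = np.array([6.0, 9.5, 12.0, 16.0, 19.5])     # moisture content, vol.%

    cr_avg = (cr_5 + cr_15 + cr_25) / 3.0     # average CR of the wetted area
    a, b = np.polyfit(cr_avg, theta, 1)       # regression calibration curve

    # moisture estimate for a new averaged count ratio
    theta_new = a * 0.65 + b
    ```

    In routine use only the regression coefficients are needed: substitute the averaged CR into θ = a·CR + b, exactly as the abstract prescribes.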

  9. Comprehensive Calibration and Validation Site for Information Remote Sensing

    Science.gov (United States)

    Li, C. R.; Tang, L. L.; Ma, L. L.; Zhou, Y. S.; Gao, C. X.; Wang, N.; Li, X. H.; Wang, X. H.; Zhu, X. H.

    2015-04-01

    As a natural part of information technology, Remote Sensing (RS) is strongly required to provide precise and accurate information products to serve industry, academia and the public in this information economy era. To meet the need for high-quality RS products, building a fully functional and advanced calibration system, including measuring instruments, measuring approaches and target sites, becomes extremely important. Supported by MOST of China via a national plan, great progress has been made in constructing a comprehensive calibration and validation (Cal&Val) site, which integrates most functions of RS sensor aviation testing, EO satellite on-orbit calibration and performance assessment, and RS product validation at this site located in Baotou, 600 km west of Beijing. The site is equipped with various artificial standard targets, including portable and permanent targets, which support long-term calibration and validation. A number of finely designed ground measuring instruments and airborne standard sensors have been developed to realize high-accuracy stepwise validation, an approach that avoids or reduces uncertainties caused by nonsynchronized measurements. As part of its contribution to the worldwide Cal&Val study coordinated by CEOS-WGCV, the Baotou site is offering its support to the Radiometric Calibration Network of Automated Instruments (RadCalNet), with the aim of providing a demonstrated global standard automated radiometric calibration service in cooperation with ESA, NASA, CNES and NPL. Furthermore, several Cal&Val campaigns have been performed during the past years to calibrate and validate spaceborne/airborne optical and SAR sensors, and the results of some typical demonstrations are discussed in this study.

  10. A regression approach for zircaloy-2 in-reactor creep constitutive equations

    International Nuclear Information System (INIS)

    Yung Liu, Y.; Bement, A.L.

    1977-01-01

    In this paper the methodology of multiple regression as applied to zircaloy-2 in-reactor creep data analysis and the construction of constitutive equations is illustrated. While the resulting constitutive equation can be used in creep analysis of in-reactor zircaloy structural components, the methodology itself is entirely general and can be applied to any creep data analysis. From the data analysis and model development points of view, both the assumption of independence and prior commitment to specific model forms are unacceptable. One would desire means which can not only estimate the required parameters directly from data but also provide a basis for model selection, viz., one model against others. Basic understanding of the physics of deformation is important in choosing the forms of the starting physical model equations, but the justification must rely on their ability to correlate the overall data. The promising aspects of multiple-regression creep data analysis are briefly outlined as follows: (1) When more than one variable is involved, there is no need to assume that each variable affects the response independently. No separate normalizations are required either, and the estimation of parameters is obtained by solving many simultaneous equations, the number of which equals the number of data sets. (2) Regression statistics such as the R²- and F-statistics provide measures of the significance of the regression creep equation in correlating the overall data. The relative weights of each variable on the response can also be obtained. (3) Special regression techniques such as stepwise, ridge, and robust regression, together with residual plots, etc., provide diagnostic tools for model selection.
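    The machinery outlined above — simultaneous estimation of all parameters plus R² and F statistics for model assessment — can be sketched on synthetic data (the regressors and coefficients below are invented, not the zircaloy-2 measurements):

    ```python
    import numpy as np

    # Multiple regression with R^2 and overall F statistic, computed directly.
    rng = np.random.default_rng(5)
    n = 80
    stress = rng.uniform(50, 150, n)            # hypothetical regressor 1
    temp   = rng.uniform(500, 700, n)           # hypothetical regressor 2
    log_rate = 0.02 * stress + 0.01 * temp - 8.0 + 0.1 * rng.normal(size=n)

    X = np.column_stack([np.ones(n), stress, temp])   # design matrix, intercept first
    beta, *_ = np.linalg.lstsq(X, log_rate, rcond=None)  # simultaneous estimation

    resid = log_rate - X @ beta
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((log_rate - log_rate.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                        # R^2 statistic
    p = X.shape[1] - 1                                # number of regressors
    f_stat = ((ss_tot - ss_res) / p) / (ss_res / (n - p - 1))  # overall F statistic
    ```

    A large F and high R² indicate that the fitted equation correlates the overall data, which is exactly the model-selection evidence the paper advocates over per-variable normalizations.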

  11. Active point out-of-plane ultrasound calibration

    Science.gov (United States)

    Cheng, Alexis; Guo, Xiaoyu; Zhang, Haichong K.; Kang, Hyunjae; Etienne-Cummings, Ralph; Boctor, Emad M.

    2015-03-01

    Image-guided surgery systems are often used to provide surgeons with informational support. Due to several unique advantages such as ease of use, real-time image acquisition, and no ionizing radiation, ultrasound is a common intraoperative medical imaging modality used in image-guided surgery systems. To perform advanced forms of guidance with ultrasound, such as virtual image overlays or automated robotic actuation, an ultrasound calibration process must be performed. This process recovers the rigid body transformation between a tracked marker attached to the transducer and the ultrasound image. Point-based phantoms are considered to be accurate, but their calibration framework assumes that the point is in the image plane. In this work, we present the use of an active point phantom and a calibration framework that accounts for the elevational uncertainty of the point. Given the lateral and axial position of the point in the ultrasound image, we approximate a circle in the axial-elevational plane with a radius equal to the axial position. The standard approach transforms all of the imaged points to be a single physical point. In our approach, we minimize the distances between the circular subsets of each image, with them ideally intersecting at a single point. We simulated noiseless and noisy cases, presenting results on out-of-plane estimation errors, calibration estimation errors, and point reconstruction precision. We also performed an experiment using a robot arm as the tracker, resulting in a point reconstruction precision of 0.64mm.
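    The geometric core of the approach — each image constrains the point to a circle of radius equal to the axial position, and the calibration point is found by minimizing the distances to all circles — can be sketched in a simplified 2-D setting (geometry and numbers invented for illustration; the paper works with full tracked poses):

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Each "image" yields a circle; the unknown physical point lies on all of
    # them, so we minimize the signed distances from the point to the circles.
    rng = np.random.default_rng(2)
    p_true = np.array([1.0, 2.0])                      # unknown physical point
    centers = rng.uniform(-5, 5, size=(8, 2))          # per-image circle centers
    radii = np.linalg.norm(p_true - centers, axis=1)   # axial positions -> radii

    def residuals(p):
        # signed distance from candidate point p to each circle
        return np.linalg.norm(p - centers, axis=1) - radii

    fit = least_squares(residuals, x0=centers.mean(axis=0))
    p_est = fit.x
    ```

    With noiseless circles the residuals vanish at the true point; with noise, the least-squares point is the natural generalization of "all circles intersect at a single point."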

  12. Bias and Uncertainty in Regression-Calibrated Models of Groundwater Flow in Heterogeneous Media

    DEFF Research Database (Denmark)

    Cooley, R.L.; Christensen, Steen

    2006-01-01

    small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate θ* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear...... are reduced in magnitude. Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large to test robustness of the methodology. Numerical results conform with the theoretical analysis....

  13. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    Science.gov (United States)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules used as optimization scores should be able to locate a similar and unknown optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real world case study for surface temperature forecasts at different sites in Europe confirms these results, but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
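    The two estimators can be compared directly in a minimal synthetic setup (this sketch assumes a Gaussian predictive distribution with μ = a + b·x and constant σ = exp(c); the data are invented, not the study's forecasts). The closed-form CRPS of a normal forecast makes minimum-CRPS estimation a smooth optimization:

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(4)
    n = 800
    x = rng.normal(0.0, 2.0, n)                  # "ensemble mean" predictor
    y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, n)  # observed quantity

    def crps_normal(mu, sigma, y):
        # closed-form CRPS of a normal predictive distribution
        z = (y - mu) / sigma
        return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z)
                        - 1.0 / np.sqrt(np.pi))

    def mean_crps(params):
        a, b, c = params
        return np.mean(crps_normal(a + b * x, np.exp(c), y))

    def neg_loglik(params):
        a, b, c = params
        return -np.mean(norm.logpdf(y, loc=a + b * x, scale=np.exp(c)))

    fit_crps = minimize(mean_crps, x0=[0.0, 1.0, 0.0])
    fit_ml = minimize(neg_loglik, x0=[0.0, 1.0, 0.0])
    ```

    When the distributional assumption matches the data-generating process, both fits land on essentially the same coefficients, which is the synthetic-case finding of the study.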

  14. Calibration of neutron detectors on the Joint European Torus.

    Science.gov (United States)

    Batistoni, Paola; Popovichev, S; Conroy, S; Lengar, I; Čufar, A; Abhangi, M; Snoj, L; Horton, L

    2017-10-01

    The present paper describes the findings of the calibration of the neutron yield monitors on the Joint European Torus (JET) performed in 2013 using a 252 Cf source deployed inside the torus by the remote handling system, with particular regard to the calibration of fission chambers which provide the time resolved neutron yield from JET plasmas. The experimental data obtained in toroidal, radial, and vertical scans are presented. These data are first analysed following an analytical approach adopted in the previous neutron calibrations at JET. In this way, a calibration function for the volumetric plasma source is derived which allows us to understand the importance of the different plasma regions and of different spatial profiles of neutron emissivity on fission chamber response. Neutronics analyses have also been performed to calculate the correction factors needed to derive the plasma calibration factors taking into account the different energy spectrum and angular emission distribution of the calibrating (point) 252 Cf source, the discrete positions compared to the plasma volumetric source, and the calibration circumstances. All correction factors are presented and discussed. We discuss also the lessons learnt which are the basis for the on-going 14 MeV neutron calibration at JET and for ITER.

  15. Proportional Counter Calibration and Analysis for 12C + p Resonance Scattering

    Science.gov (United States)

    Nelson, Austin; Rogachev, Grigory; Uberseder, Ethan; Hooker, Josh; Koshchiy, Yevgen

    2014-09-01

    Light exotic nuclei provide a unique opportunity to test the predictions of modern ab initio theoretical calculations near the drip line. In ab initio approaches, nuclear structure is described starting from bare nucleon-nucleon and three-nucleon interactions. Calculations are very heavy and can only be performed for the lightest nuclei (A objective of this project was to test the performance and perform position calibration of this proportional counter array. The test was done using 12C beam. The excitation function for 12C + p elastic scattering was measured and calibration of the proportional counter was performed using known resonances in 13N. The method of calibration, including solid angle calculations, normalization corrections, and position calibration will be presented. Funded by DOE and NSF-REU Program; Grant No. PHY-1263281.

  16. Semi-empirical approach for calibration of CR-39 detectors in diffusion chambers for radon measurements

    Energy Technology Data Exchange (ETDEWEB)

    Pereyra A, P.; Lopez H, M. E. [Pontificia Universidad Catolica del Peru, Av. Universitaria 1801, San Miguel Lima 32 (Peru); Palacios F, D.; Sajo B, L. [Universidad Simon Bolivar, Laboratorio de Fisica Nuclear, Apartado 89000 Caracas (Venezuela, Bolivarian Republic of); Valdivia, P., E-mail: ppereyr@pucp.edu.pe [Universidad Nacional de Ingenieria, Av. Tupac Amaru s/n, Rimac, Lima 25 (Peru)

    2016-10-15

    Simulated and measured calibrations of PADC detectors are given for cylindrical diffusion chambers employed in environmental radon measurements. The method is based on determining the minimum alpha energy (E{sub min}), the average critical angle (<Θ{sub c}>) and the fraction (f{sub 1}) of {sup 218}Po atoms in the chamber volume; the results are compared to commercially available devices. The radon concentration for exposed detectors is obtained from the induced track densities and the well-established calibration coefficient of the NRPB monitor. The calibration coefficient of a PADC detector in a cylindrical diffusion chamber of any size is determined under the same chemical etching conditions and track analysis methodology. In this study, numerical examples and a comparison between experimental calibration coefficients and those from a purpose-made simulation code are presented. Results show that the developed method is applicable when uncertainties of 10% are acceptable. (Author)

  17. Model Calibration in Watershed Hydrology

    Science.gov (United States)

    Yilmaz, Koray K.; Vrugt, Jasper A.; Gupta, Hoshin V.; Sorooshian, Soroosh

    2009-01-01

    Hydrologic models use relatively simple mathematical equations to conceptualize and aggregate the complex, spatially distributed, and highly interrelated water, energy, and vegetation processes in a watershed. A consequence of process aggregation is that the model parameters often do not represent directly measurable entities and must, therefore, be estimated using measurements of the system inputs and outputs. During this process, known as model calibration, the parameters are adjusted so that the behavior of the model approximates, as closely and consistently as possible, the observed response of the hydrologic system over some historical period of time. This Chapter reviews the current state-of-the-art of model calibration in watershed hydrology with special emphasis on our own contributions in the last few decades. We discuss the historical background that has led to current perspectives, and review different approaches for manual and automatic single- and multi-objective parameter estimation. In particular, we highlight the recent developments in the calibration of distributed hydrologic models using parameter dimensionality reduction sampling, parameter regularization and parallel computing.
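    The calibration loop described above — adjust parameters until simulated behavior matches the observed response over a historical period — can be sketched with a deliberately tiny model (a one-parameter linear reservoir with invented forcing, standing in for a real watershed model):

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    # Toy rainfall-runoff model: a linear reservoir whose recession constant k
    # is the single parameter to calibrate against "observed" discharge.
    def simulate(k, rain, s0=10.0):
        s, q = s0, []
        for p in rain:
            s = s + p
            out = k * s              # linear-reservoir outflow
            s -= out
            q.append(out)
        return np.array(q)

    rng = np.random.default_rng(6)
    rain = rng.exponential(2.0, 200) * (rng.random(200) < 0.3)  # intermittent storms
    q_obs = simulate(0.3, rain)      # synthetic "observations" (true k = 0.3)

    def rmse(k):
        return np.sqrt(np.mean((simulate(k, rain) - q_obs) ** 2))

    res = minimize_scalar(rmse, bounds=(0.01, 0.99), method="bounded")
    k_calibrated = res.x
    ```

    Real watershed calibration replaces the scalar search with the multi-objective, high-dimensional methods the chapter reviews, but the structure — forcing, simulation, objective function, optimizer — is the same.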

  18. From Rasch scores to regression

    DEFF Research Database (Denmark)

    Christensen, Karl Bang

    2006-01-01

    Rasch models provide a framework for measurement and modelling of latent variables. Having measured a latent variable in a population, a comparison of groups will often be of interest. For this purpose the use of observed raw scores will often be inadequate because these lack interval scale properties. This paper compares two approaches to group comparison: linear regression models using estimated person locations as outcome variables, and latent regression models based on the distribution of the score.

  19. A calibration rig for multi-component internal strain gauge balance using the new design-of-experiment (DOE) approach

    Science.gov (United States)

    Nouri, N. M.; Mostafapour, K.; Kamran, M.

    2018-02-01

    In a closed water-tunnel circuit, multi-component strain gauge force and moment sensors (also known as balances) are generally used to measure the hydrodynamic forces and moments acting on scaled models. These balances are periodically calibrated by static loading, and their performance and accuracy depend significantly on the rig and the method of calibration. In this research, a new calibration rig was designed and constructed to calibrate multi-component internal strain gauge balances. The calibration rig has six degrees of freedom and six different component-loading structures that can be applied separately or synchronously. The system was designed around the applicability of formal experimental design techniques, using gravity for balance loading and for balance positioning and alignment relative to gravity. To evaluate the calibration rig, a six-component internal balance developed by Iran University of Science and Technology was calibrated using response surface methodology. According to the results, the calibration rig met all design criteria. This rig provides the means by which various formal experimental design techniques can be implemented. The simplicity of the rig saves time and money in the design of experiments and in balance calibration while simultaneously increasing the accuracy of these activities.
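    The response-surface step can be illustrated with a minimal second-order fit (invented loads and sensor response, not the IUST balance data): a quadratic polynomial in two applied load components, including their interaction, is fitted to the bridge output by ordinary least squares over a factorial design.

    ```python
    import numpy as np

    # 5x5 factorial loading design in two load components (assumed units: N)
    fx, fz = np.meshgrid(np.linspace(-100, 100, 5), np.linspace(-50, 50, 5))
    fx, fz = fx.ravel(), fz.ravel()
    # assumed "true" sensor response: linear terms, an interaction and curvature
    out = 0.8 * fx + 0.1 * fz + 0.002 * fx * fz + 1e-4 * fx**2

    # second-order response-surface model
    X = np.column_stack([np.ones_like(fx), fx, fz, fx * fz, fx**2, fz**2])
    coef, *_ = np.linalg.lstsq(X, out, rcond=None)
    pred = X @ coef
    r2 = 1 - np.sum((out - pred) ** 2) / np.sum((out - out.mean()) ** 2)
    ```

    In a real balance calibration the same surface is fitted per bridge over all six load components, and the interaction and quadratic coefficients quantify the cross-talk the rig is designed to characterize.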

  20. Statistical approach for selection of regression model during validation of bioanalytical method

    Directory of Open Access Journals (Sweden)

    Natalija Nakov

    2014-06-01

    Full Text Available The selection of an adequate regression model is the basis for obtaining accurate and reproducible results during bioanalytical method validation. Given the wide concentration ranges frequently present in bioanalytical assays, heteroscedasticity of the data may be expected. Several weighted linear and quadratic regression models were evaluated during the selection of the adequate curve fit using nonparametric statistical tests: the one-sample rank test and the Wilcoxon signed-rank test for two independent groups of samples. The results obtained with the one-sample rank test could not give statistical justification for the selection of linear vs. quadratic regression models, because only slight differences between the errors (presented through the relative residuals, RR) were obtained. Estimation of the significance of the differences in the RR was achieved using the Wilcoxon signed-rank test, where the linear and quadratic regression models were treated as two independent groups. The application of this simple nonparametric statistical test provides statistical confirmation of the choice of an adequate regression model.
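    The comparison described above can be sketched with standard tools. The snippet below is an illustrative sketch with synthetic heteroscedastic calibration data (concentration levels, weighting scheme, and error model are all assumptions, not the paper's data): weighted linear and quadratic fits are compared through their relative residuals using `scipy.stats.wilcoxon`.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)

# Hypothetical calibration data with heteroscedastic error (SD proportional to conc.)
conc = np.array([0.5, 1, 2, 5, 10, 20, 50, 100, 200, 500], dtype=float)
conc_rep = np.repeat(conc, 5)                       # 5 replicates per level
response = 1.2 * conc_rep * (1 + rng.normal(0, 0.05, conc_rep.size))

w = 1.0 / conc_rep**2                               # assumed 1/x^2 weighting

def relative_residuals(degree):
    # Weighted polynomial fit of response vs concentration;
    # np.polyfit expects weights ~ 1/sigma, hence sqrt(w).
    coeffs = np.polyfit(conc_rep, response, degree, w=np.sqrt(w))
    fitted = np.polyval(coeffs, conc_rep)
    return np.abs(response - fitted) / response

rr_lin = relative_residuals(1)
rr_quad = relative_residuals(2)

# Wilcoxon signed-rank test on the relative residuals of the two fits
stat, p = wilcoxon(rr_lin, rr_quad)
print(f"median RR linear:    {np.median(rr_lin):.4f}")
print(f"median RR quadratic: {np.median(rr_quad):.4f}")
print(f"Wilcoxon p-value:    {p:.3f}")
```

    With data generated from a truly linear model, a large p-value argues for keeping the simpler linear fit.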

  1. Prediction accuracy and stability of regression with optimal scaling transformations

    NARCIS (Netherlands)

    Kooij, van der Anita J.

    2007-01-01

    The central topic of this thesis is the CATREG approach to nonlinear regression. This approach finds optimal quantifications for categorical variables and/or nonlinear transformations for numerical variables in regression analysis. (CATREG is implemented in SPSS Categories by the author of the

  2. Scanning Electron Microscope Calibration Using a Multi-Image Non-Linear Minimization Process

    Science.gov (United States)

    Cui, Le; Marchand, Éric

    2015-04-01

    A scanning electron microscope (SEM) calibration approach based on a non-linear minimization procedure is presented in this article; a part of this article has been published in the IEEE International Conference on Robotics and Automation (ICRA), 2014. Both the intrinsic and the extrinsic parameter estimates are obtained simultaneously by minimizing the registration error. The proposed approach considers multiple images of a multi-scale calibration pattern viewed from different positions and orientations. Since the projection geometry of the scanning electron microscope differs from that of a classical optical sensor, the perspective projection model and the parallel projection model are considered and compared, each with distortion models. Experiments are realized by varying the position and the orientation of a multi-scale chessboard calibration pattern at magnifications from 300× to 10,000×. The experimental results show the efficiency and the accuracy of this approach.
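    The core idea of estimating intrinsic and extrinsic parameters jointly by minimizing registration error can be sketched in a few lines. This is a deliberately reduced toy model (planar pattern, in-plane pose, one magnification parameter), not the authors' full multi-image formulation: under the parallel projection model, image points obey u = s·(R·x + t), and the parameters are recovered with `scipy.optimize.least_squares`.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy sketch: estimate magnification s and an in-plane pose (rotation theta,
# translation t) of a planar grid under parallel projection, by nonlinear
# least squares on the registration error.
grid = np.stack(np.meshgrid(np.arange(8), np.arange(8)), -1).reshape(-1, 2).astype(float)

def project(params, pts):
    s, theta, tx, ty = params
    c, si = np.cos(theta), np.sin(theta)
    R = np.array([[c, -si], [si, c]])
    return s * (pts @ R.T + [tx, ty])

true_params = np.array([120.0, 0.1, 0.5, -0.3])     # assumed ground truth
observed = project(true_params, grid) + np.random.default_rng(2).normal(0, 0.2, (64, 2))

def residuals(params):
    # Registration error: projected grid vs observed image points
    return (project(params, grid) - observed).ravel()

fit = least_squares(residuals, x0=[100.0, 0.0, 0.0, 0.0])
print("estimated parameters:", np.round(fit.x, 3))
```

    The full calibration stacks residuals from many views of the multi-scale pattern and adds intrinsic distortion terms to the projection function.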

  3. Instrument calibration reduction through on-line monitoring in the USA. Annex IV

    International Nuclear Information System (INIS)

    Hashemian, H.M.

    2008-01-01

    Nuclear power plants are required to calibrate important instruments once every fuel cycle. This requirement dates back more than 30 years, to when commercial nuclear power plants began to operate. Based on calibration data accumulated over this period, it has been determined that some instruments, such as pressure transmitters, do not drift enough to warrant calibration as often as once every fuel cycle. This fact, combined with human resource limitations and reduced maintenance budgets, has motivated the nuclear industry to develop new technologies for identifying drifting instruments during plant operation. Implementing these technologies allows calibration efforts to be focused on the instruments that have drifted out of tolerance, as opposed to current practice, which calls for calibration verification of almost all instruments every fuel cycle. To date, an array of technologies, referred to collectively as 'on-line calibration monitoring', has been developed to meet this objective. These technologies identify outlier sensors using techniques that compare a particular sensor's output to a calculated estimate of the actual process the sensor is measuring. If on-line monitoring data are collected during plant startup and/or shutdown periods as well as during normal operation, the on-line monitoring approach can help verify the calibration of instruments over their entire operating range. Although on-line calibration monitoring is applicable to most sensors and can cover an entire instrument channel, its main application in nuclear power plants is currently for pressure transmitters (including level and flow transmitters). (author)
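    The "compare each sensor to a process estimate" idea can be illustrated with a minimal redundancy-based sketch (synthetic data and thresholds are assumptions; real on-line monitoring systems use more sophisticated process estimates and acceptance criteria):

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch: compare each redundant sensor against a robust estimate of the
# process (here, the median of the group) and flag slow drift.
t = np.arange(1000)
process = 100 + 2 * np.sin(t / 100)                 # true process value (assumed)

readings = process + rng.normal(0, 0.1, (4, t.size))
readings[2] += 0.002 * t                            # sensor 2 slowly drifts upward

estimate = np.median(readings, axis=0)              # process estimate from redundancy
deviation = readings - estimate
drift = deviation[:, -200:].mean(axis=1)            # average recent deviation

tolerance = 0.5                                     # assumed acceptance band
flagged = np.where(np.abs(drift) > tolerance)[0]
print("sensors flagged for recalibration:", flagged)
```

    Only the flagged channels would then receive a hands-on calibration at the next outage, rather than the whole instrument population.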

  4. Calibration of sensors for acoustic detection of neutrinos

    Energy Technology Data Exchange (ETDEWEB)

    Ardid, M; Bou-Cabo, M; Espinosa, V; Martinez-Mora, J; Camarena, F; Alba, J [Departament de Fisica Aplicada, E.P.S. Gandia, Universitat Politecnica de Valencia, Cra. Nazaret/Oliva S/N, E-46730 Gandia (Spain)

    2007-09-15

    Calibration of sensors is an important task for the acoustic detection of neutrinos. Different approaches have been tried and used (calibrated hydrophones, resistors, powerful lasers, light bulb explosions, etc.). We propose some methods for calibration that can be used both in the lab and in the telescope ('in situ'). In this paper, different studies following these methods and their results are reported. First, we describe the reciprocity calibration method for acoustic sensors. Since it is a simple method and calibrated hydrophones are not needed, this technique is accessible to any lab. Moreover, the technique could be used to calibrate the sensors of a neutrino telescope just by using the sensors themselves (reciprocally). A comparison of this technique using different kinds of signals (MLS, TSP, tone bursts, white noise) and different propagation conditions is presented. The limitations of the technique are shown, as well as some possibilities for overcoming them. The second aspect treated is obtaining neutrino-like signals for calibration. Probably the most convenient way to do this would be to generate the signals directly from the transducers. Since transducers do not usually have a flat frequency response, distortion is produced, and neutrino-like signals can be difficult to achieve. We present some equalization techniques to offset this effect; in this regard, the use of an inverse filter based on Mourjopoulos' theory appears quite suitable.
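    The equalization step can be sketched in the frequency domain. The following is a generic regularized inverse filter on toy data (the transducer response, pulse shape, and regularization are assumptions, not the authors' filter design): the drive signal is pre-distorted so that, after passing through the non-flat transducer response, the emitted waveform approximates the target neutrino-like pulse.

```python
import numpy as np

# Sketch of frequency-domain equalization: pre-distort the drive signal with a
# regularized inverse of the transducer response so the emitted signal
# approximates the target waveform.
n = 512
t = np.arange(n)

# Assumed target: short bipolar-like pulse with a Gaussian envelope
target = np.exp(-((t - 256) / 20.0) ** 2) * np.sin(2 * np.pi * t / 32)
h = np.zeros(n); h[:8] = np.hanning(8)               # toy non-flat transducer response

H = np.fft.rfft(h)
eps = 1e-3 * np.max(np.abs(H)) ** 2                  # regularization constant
G = np.conj(H) / (np.abs(H) ** 2 + eps)              # regularized inverse filter

drive = np.fft.irfft(np.fft.rfft(target) * G, n)     # pre-distorted drive signal
emitted = np.fft.irfft(np.fft.rfft(drive) * H, n)    # what the transducer outputs

err = np.linalg.norm(emitted - target) / np.linalg.norm(target)
print(f"relative reconstruction error: {err:.3f}")
```

    The regularization term keeps the inverse from amplifying noise at frequencies where the transducer response is weak.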

  5. A parameter for the selection of an optimum balance calibration model by Monte Carlo simulation

    CSIR Research Space (South Africa)

    Bidgood, Peter M

    2013-09-01

    Full Text Available The current trend in balance calibration-matrix generation is to use non-linear regression and statistical methods. Methods typically include Modified-Design-of-Experiment (MDOE), Response-Surface-Models (RSMs) and Analysis of Variance (ANOVA...

  6. Testing for Stock Market Contagion: A Quantile Regression Approach

    NARCIS (Netherlands)

    S.Y. Park (Sung); W. Wang (Wendun); N. Huang (Naijing)

    2015-01-01

    Abstract: Given the asymmetric and leptokurtic behavior of financial data, we propose a new contagion test in the quantile regression framework that is robust to model misspecification. Unlike conventional correlation-based tests, the proposed quantile contagion test

  7. On Solving Lq-Penalized Regressions

    Directory of Open Access Journals (Sweden)

    Tracy Zhou Wu

    2007-01-01

    Full Text Available Lq-penalized regression arises in multidimensional statistical modelling where all or part of the regression coefficients are penalized to achieve both accuracy and parsimony of statistical models. There is often substantial computational difficulty except in the quadratic-penalty case, partly due to the nonsmoothness of the objective function inherited from the use of the absolute value. We propose a new solution method for the general Lq-penalized regression problem based on a space transformation, which enables efficient optimization algorithms. The new method has immediate applications in statistics, notably in penalized spline smoothing problems. In particular, the LASSO problem is shown to be solvable in polynomial time. Numerical studies show the promise of our approach.
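    For orientation, the q = 1 case (LASSO) can be solved with a standard iterative soft-thresholding scheme (ISTA) — a textbook method shown here for illustration, not the paper's space-transformation algorithm:

```python
import numpy as np

rng = np.random.default_rng(4)

# Minimal sketch of the q = 1 case: minimize 0.5*||y - X b||^2 + lam*||b||_1
# with iterative soft-thresholding (ISTA) on synthetic sparse data.
n, p = 100, 20
X = rng.normal(size=(n, p))
beta_true = np.zeros(p); beta_true[:3] = [3.0, -2.0, 1.5]    # sparse truth
y = X @ beta_true + rng.normal(0, 0.5, n)

lam = 0.1 * n                                # penalty weight (assumed)
L = np.linalg.eigvalsh(X.T @ X).max()        # Lipschitz constant of the gradient

beta = np.zeros(p)
for _ in range(500):
    grad = X.T @ (X @ beta - y)
    z = beta - grad / L
    beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

print("nonzero coefficients:", np.flatnonzero(np.abs(beta) > 1e-6))
```

    The soft-threshold step is what produces exact zeros, i.e. the parsimony the abstract refers to; for general q the penalty is no longer separable in this simple way.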

  8. SPECTRAL RECONSTRUCTION BASED ON SVM FOR CROSS CALIBRATION

    Directory of Open Access Journals (Sweden)

    H. Gao

    2017-05-01

    Full Text Available The Chinese HY-1C/1D satellites will use a 5 nm/10 nm-resolution visible-near-infrared (VNIR) hyperspectral sensor with a solar calibrator to cross-calibrate with other sensors. The hyperspectral radiance data are composed of the average radiance in the sensor's passbands and bear a spectral smoothing effect, so a transform from the hyperspectral radiance data to 1-nm-resolution apparent spectral radiance needs to be implemented by spectral reconstruction. To avoid the noise accumulation and deterioration that occur after several iterations of an iterative algorithm, a novel regression method based on SVM is proposed, which can closely approximate arbitrarily complex non-linear relationships and provides better generalization capability through learning. From a systems viewpoint, the relationship between the apparent radiance and the equivalent radiance is a nonlinear mapping introduced by the spectral response function (SRF); the SVM transforms the low-dimensional non-linear problem into a high-dimensional linear one through a kernel function, obtaining the global optimal solution by virtue of its quadratic form. The experiment is performed using 6S-simulated spectra that account for the SRF and SNR of the hyperspectral sensor, measured reflectance spectra of water bodies, and different atmospheric conditions. The comparative results show that, first, the proposed method achieves higher reconstruction accuracy, especially for the high-frequency signal; second, as the spectral resolution of the hyperspectral sensor decreases, the proposed method performs better than the iterative method; and finally, the root mean square relative error (RMSRE), which is used to evaluate the difference between the reconstructed spectrum and the real spectrum over the whole spectral range, is reduced at least twofold by the proposed method.
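    The learning setup can be sketched on toy data. Below, synthetic smooth spectra stand in for the 6S simulations and a boxcar stands in for the SRF (both assumptions); an SVR with an RBF kernel learns to predict the fine-resolution value at a band centre from a local window of band-averaged values.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(5)

# Toy sketch of SVM-based spectral reconstruction: predict the fine-resolution
# value at a band centre from the band-averaged (smoothed) spectrum.
def make_spectrum(n=64):
    ph = rng.normal(0, 1, 2)                        # random phases
    x = np.linspace(0, 1, n)
    return (1.0 + 0.3 * np.sin(2 * np.pi * (3 * x + ph[0]))
                + 0.2 * np.cos(2 * np.pi * (5 * x + ph[1])))

box = np.ones(5) / 5.0                              # assumed 5-sample passband

X, ytrue = [], []
for _ in range(300):
    s = make_spectrum()
    smoothed = np.convolve(s, box, mode="same")     # band-averaged measurement
    i = int(rng.integers(5, 59))
    X.append(smoothed[i - 2:i + 3])                 # local window of averaged bands
    ytrue.append(s[i])                              # fine-resolution target

X, ytrue = np.array(X), np.array(ytrue)
model = SVR(kernel="rbf", C=10.0, epsilon=0.001).fit(X[:250], ytrue[:250])
pred = model.predict(X[250:])
rmse = np.sqrt(np.mean((pred - ytrue[250:]) ** 2))
print(f"test RMSE: {rmse:.4f}")
```

    Unlike an iterative deconvolution, the learned regressor applies in one pass, which is the noise-accumulation advantage the abstract points to.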

  9. Meta-Modeling by Symbolic Regression and Pareto Simulated Annealing

    NARCIS (Netherlands)

    Stinstra, E.; Rennen, G.; Teeuwen, G.J.A.

    2006-01-01

    The subject of this paper is a new approach to Symbolic Regression. Other publications on Symbolic Regression use Genetic Programming. This paper describes an alternative method based on Pareto Simulated Annealing. Our method is based on linear regression for the estimation of constants. Interval

  10. Exposure-rate calibration using large-area calibration pads

    International Nuclear Information System (INIS)

    Novak, E.F.

    1988-09-01

    The US Department of Energy (DOE) Office of Remedial Action and Waste Technology established the Technical Measurements Center (TMC) at the DOE Grand Junction Projects Office (GJPO) in Grand Junction, Colorado, to standardize, calibrate, and compare measurements made in support of DOE remedial action programs. A set of large-area, radioelement-enriched concrete pads was constructed by the DOE in 1978 at the Walker Field Airport in Grand Junction for use as calibration standards for airborne gamma-ray spectrometer systems. The use of these pads was investigated by the TMC as potential calibration standards for portable scintillometers employed in measuring gamma-ray exposure rates at Uranium Mill Tailings Remedial Action (UMTRA) project sites. Data acquired on the pads using a pressurized ionization chamber (PIC) and three scintillometers are presented as an illustration of an instrumental calibration. Conclusions and recommended calibration procedures are discussed, based on the results of these data

  11. Estimating energy expenditure from heart rate in older adults: a case for calibration.

    Science.gov (United States)

    Schrack, Jennifer A; Zipunnikov, Vadim; Goldsmith, Jeff; Bandeen-Roche, Karen; Crainiceanu, Ciprian M; Ferrucci, Luigi

    2014-01-01

    Accurate measurement of free-living energy expenditure is vital to understanding changes in energy metabolism with aging. The efficacy of heart rate as a surrogate for energy expenditure is rooted in the assumption of a linear function between heart rate and energy expenditure, but its validity and reliability in older adults remain unclear. The aim was to assess the validity and reliability of the linear function between heart rate and energy expenditure in older adults using different levels of calibration. Heart rate and energy expenditure were assessed across five levels of exertion in 290 adults participating in the Baltimore Longitudinal Study of Aging. Correlation and random-effects regression analyses assessed the linearity of the relationship between heart rate and energy expenditure, and cross-validation models assessed predictive performance. Heart rate and energy expenditure were highly correlated (r=0.98) and linear regardless of age or sex. Intra-person variability was low but inter-person variability was high, with substantial heterogeneity of the random intercept (s.d.=0.372) despite similar slopes. Cross-validation models indicated that individual calibration data substantially improve the accuracy of energy expenditure predictions from heart rate, reducing the potential for considerable measurement bias. Although using five calibration measures provided the greatest reduction in the standard deviation of prediction errors (1.08 kcal/min), substantial improvement was also noted with two (0.75 kcal/min). These findings indicate that standard regression equations may be used to make population-level inferences when estimating energy expenditure from heart rate in older adults, but caution should be exercised when making inferences at the individual level without proper calibration.
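    The "similar slopes, heterogeneous intercepts" structure and the benefit of a short individual calibration can be sketched as follows (all numbers are illustrative assumptions, not the study's estimates): a pooled population equation is compared against one where each person's intercept is re-estimated from two calibration points.

```python
import numpy as np

rng = np.random.default_rng(6)

# Sketch: population-level slope with person-specific intercepts (high
# inter-person, low intra-person variability), plus two-point individual
# calibration of the intercept.
slope_true = 0.12                                   # kcal/min per bpm (assumed)
n_people = 50
intercepts = rng.normal(-6.0, 0.4, n_people)        # person-level heterogeneity

hr = rng.uniform(60, 140, (n_people, 5))            # 5 exertion levels per person
ee = intercepts[:, None] + slope_true * hr + rng.normal(0, 0.2, hr.shape)

# Population equation: pooled OLS ignoring person effects
A = np.column_stack([np.ones(hr.size), hr.ravel()])
intercept_pop, slope_pop = np.linalg.lstsq(A, ee.ravel(), rcond=None)[0]

# Individual calibration: keep the population slope, re-estimate the
# intercept from each person's first two measurements.
ind_intercepts = (ee[:, :2] - slope_pop * hr[:, :2]).mean(axis=1)

err_pop = ee.ravel() - (intercept_pop + slope_pop * hr.ravel())
err_ind = (ee - (ind_intercepts[:, None] + slope_pop * hr)).ravel()
print(f"SD of errors, population equation:  {err_pop.std():.3f} kcal/min")
print(f"SD of errors, 2-point calibration:  {err_ind.std():.3f} kcal/min")
```

    The drop in error SD after calibrating only the intercept mirrors the study's finding that even two calibration measures remove much of the inter-person bias.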

  12. The Value of Hydrograph Partitioning Curves for Calibrating Hydrological Models in Glacierized Basins

    Science.gov (United States)

    He, Zhihua; Vorogushyn, Sergiy; Unger-Shayesteh, Katy; Gafurov, Abror; Kalashnikova, Olga; Omorova, Elvira; Merz, Bruno

    2018-03-01

    This study refines the method for calibrating a glacio-hydrological model based on Hydrograph Partitioning Curves (HPCs), and evaluates its value in comparison to multidata set optimization approaches which use glacier mass balance, satellite snow cover images, and discharge. The HPCs are extracted from the observed flow hydrograph using catchment precipitation and temperature gradients. They indicate the periods when the various runoff processes, such as glacier melt or snow melt, dominate the basin hydrograph. The annual cumulative curve of the difference between average daily temperature and melt threshold temperature over the basin, as well as the annual cumulative curve of average daily snowfall on the glacierized areas are used to identify the starting and end dates of snow and glacier ablation periods. Model parameters characterizing different runoff processes are calibrated on different HPCs in a stepwise and iterative way. Results show that the HPC-based method (1) delivers model-internal consistency comparably to the tri-data set calibration method; (2) improves the stability of calibrated parameter values across various calibration periods; and (3) estimates the contributions of runoff components similarly to the tri-data set calibration method. Our findings indicate the potential of the HPC-based approach as an alternative for hydrological model calibration in glacierized basins where other calibration data sets than discharge are often not available or very costly to obtain.

  13. Towards improved local hybrid functionals by calibration of exchange-energy densities

    International Nuclear Information System (INIS)

    Arbuznikov, Alexei V.; Kaupp, Martin

    2014-01-01

    A new approach for the calibration of (semi-)local and exact exchange-energy densities in the context of local hybrid functionals is reported. The calibration functions are derived from only the electron density and its spatial derivatives, avoiding spatial derivatives of the exact-exchange energy density or other computationally unfavorable contributions. The calibration functions fulfill the seven more important of the nine known exact constraints. It is shown that calibration substantially improves the definition of a non-dynamical correlation energy term for generalized gradient approximation (GGA)-based local hybrids. Moreover, gauge artifacts in the potential-energy curves of noble-gas dimers may be corrected by calibration. The developed calibration functions are then evaluated for a large range of energy-related properties (atomization energies, reaction barriers, ionization potentials, electron affinities, and total atomic energies) of three sets of local hybrids, using a simple one-parameter local-mixing function. The functionals are based on (a) local spin-density approximation (LSDA) or (b) Perdew-Burke-Ernzerhof (PBE) exchange and correlation, and on (c) Becke-88 (B88) exchange and Lee-Yang-Parr (LYP) correlation. While the uncalibrated GGA-based functionals usually provide very poor thermochemical data, calibration allows a dramatic improvement, accompanied by only a small deterioration of reaction barriers. In particular, an optimized BLYP-based local-hybrid functional has been found that is a substantial improvement over the underlying global hybrids, as well as over previously reported LSDA-based local hybrids. It is expected that the present calibration approach will pave the way towards new generations of more accurate hyper-GGA functionals based on a local mixing of exchange-energy densities.

  14. Calibrating the Truax Rough Rider seed drill for restoration plantings

    Science.gov (United States)

    Loren St. John; Brent Cornforth; Boyd Simonson; Dan Ogle; Derek Tilley

    2008-01-01

    The purpose of this technical note is to provide a step-by-step approach to calibrating the Truax Rough Rider range drill, a relatively new, state-of-the-art rangeland drill. To achieve the desired outcome of a seeding project, an important step following proper weed control and seedbed preparation is the calibration of the seeding equipment to ensure the recommended...

  15. Geographically weighted negative binomial regression applied to zonal level safety performance models.

    Science.gov (United States)

    Gomes, Marcos José Timbó Lima; Cunto, Flávio; da Silva, Alan Ricardo

    2017-09-01

    Generalized Linear Models (GLM) with a negative binomial distribution for errors have been widely used to estimate safety at the transportation planning level. The limited ability of this technique to take spatial effects into account can be overcome through the use of local models from spatial regression techniques, such as Geographically Weighted Poisson Regression (GWPR). Although GWPR deals with spatial dependency and heterogeneity and has already been used in some road safety studies at the planning level, it fails to account for the overdispersion that can be found in observations of road-traffic crashes. Two approaches were adopted for the Geographically Weighted Negative Binomial Regression (GWNBR) model to allow discrete data to be modeled in a non-stationary form and to account for the overdispersion of the data: the first assumes constant overdispersion across all the traffic zones, while the second allows the overdispersion parameter to vary for each spatial unit. This research conducts a comparative analysis between non-spatial global crash prediction models and local spatial GWPR and GWNBR models at the level of traffic zones in Fortaleza, Brazil. A geographic database of 126 traffic zones was compiled from the available data on exposure, network characteristics, socioeconomic factors and land use. The models were calibrated using the frequency of injury crashes as the dependent variable, and the results showed that GWPR and GWNBR achieved better performance than GLM in terms of average residuals and likelihood, as well as reducing the spatial autocorrelation of the residuals; the GWNBR model was better able to capture the spatial heterogeneity of the crash frequency.
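    The "geographically weighted" core that all of these local models share can be sketched generically (this is plain geographically weighted least squares on synthetic data, not the paper's GWNBR likelihood): at each location, observations are weighted by a Gaussian kernel of distance before fitting, so the estimated coefficient is allowed to vary over space.

```python
import numpy as np

rng = np.random.default_rng(7)

# Generic sketch of geographically weighted estimation: local weighted
# least squares with Gaussian-kernel weights that decay with distance.
n = 200
coords = rng.uniform(0, 10, (n, 2))                 # zone centroids (assumed)
x = rng.normal(size=n)
beta_local = 1.0 + 0.3 * coords[:, 0]               # coefficient varies over space
y = 2.0 + beta_local * x + rng.normal(0, 0.3, n)

def gwr_slope(site, bandwidth=2.0):
    d = np.linalg.norm(coords - site, axis=1)
    w = np.exp(-(d / bandwidth) ** 2)               # Gaussian kernel weights
    A = np.column_stack([np.ones(n), x])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[1]                                  # local slope estimate

west = gwr_slope(np.array([1.0, 5.0]))
east = gwr_slope(np.array([9.0, 5.0]))
print(f"local slope, west: {west:.2f}  east: {east:.2f}")
```

    GWNBR replaces the weighted least-squares step with a kernel-weighted negative binomial likelihood, adding the overdispersion parameter discussed in the abstract.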

  16. Essay on Option Pricing, Hedging and Calibration

    DEFF Research Database (Denmark)

    da Silva Ribeiro, André Manuel

    Quantitative finance is concerned with applying mathematics to financial markets. This thesis is a collection of essays that study different problems in this field: How efficient are option price approximations for calibrating a stochastic volatility model? (Chapter 2) How different is the discretely...... of dynamics? (Chapter 5) How can we formulate a simple arbitrage-free model to price correlation swaps? (Chapter 6) A summary of the work presented in this thesis: Approximation Behooves Calibration. In this paper we show that calibration based on an expansion approximation for option prices in the Heston...... stochastic volatility model gives stable, accurate, and fast results for S&P 500 index option data over the period 2005 to 2009. Discretely Sampled Variance Options: A Stochastic Approximation Approach. In this paper, we expand the framework of Drimus and Farkas (2012) to price variance options on discretely sampled...

  17. Impact of regression methods on improved effects of soil structure on soil water retention estimates

    Science.gov (United States)

    Nguyen, Phuong Minh; De Pue, Jan; Le, Khoa Van; Cornelis, Wim

    2015-06-01

    Increasing the accuracy of pedotransfer functions (PTFs), an indirect method for predicting non-readily available soil features such as soil water retention characteristics (SWRC), is of crucial importance for large scale agro-hydrological modeling. Adding significant predictors (i.e., soil structure), and implementing more flexible regression algorithms are among the main strategies of PTFs improvement. The aim of this study was to investigate whether the improved effect of categorical soil structure information on estimating soil-water content at various matric potentials, which has been reported in literature, could be enduringly captured by regression techniques other than the usually applied linear regression. Two data mining techniques, i.e., Support Vector Machines (SVM), and k-Nearest Neighbors (kNN), which have been recently introduced as promising tools for PTF development, were utilized to test if the incorporation of soil structure will improve PTF's accuracy under a context of rather limited training data. The results show that incorporating descriptive soil structure information, i.e., massive, structured and structureless, as grouping criterion can improve the accuracy of PTFs derived by SVM approach in the range of matric potential of -6 to -33 kPa (average RMSE decreased up to 0.005 m3 m-3 after grouping, depending on matric potentials). The improvement was primarily attributed to the outperformance of SVM-PTFs calibrated on structureless soils. No improvement was obtained with kNN technique, at least not in our study in which the data set became limited in size after grouping. Since there is an impact of regression techniques on the improved effect of incorporating qualitative soil structure information, selecting a proper technique will help to maximize the combined influence of flexible regression algorithms and soil structure information on PTF accuracy.

  18. CryoSat/SIRAL Cal1 Calibration Orbits

    Science.gov (United States)

    Scagliola, Michele; Fornari, Marco; Bouffard, Jerome; Parrinello, Tommaso

    2017-04-01

    The main payload of CryoSat is a Ku-band pulse-width limited radar altimeter, called SIRAL (SAR Interferometric Radar Altimeter), which transmits pulses at a high pulse repetition frequency, thus making the received echoes phase coherent and suitable for SAR processing. This makes it possible to reach an along-track resolution that is significantly improved with respect to traditional pulse-width limited altimeters. Because SIRAL is a phase-coherent pulse-width limited radar altimeter, a proper calibration approach has been developed: not only corrections for transfer function, gain and instrument path delay have to be computed (as in previous altimeters), but also corrections for phase (SAR/SARIn) and for the phase difference between the two receiving chains (SARIn only). Recalling that CryoSat's orbit has a high inclination of 92° and is non-sun-synchronous, the temperature of the SIRAL changes continuously along the orbit with a period of about 480 days and is also a function of the ascending/descending passes. By analysis of the CAL1 calibration corrections, it has been verified that the internal path delay and the instrument gain variation measured on the SIRAL are affected by the thermal state of the instrument, and as a consequence they are expected to vary along the orbit. In order to gain knowledge of the calibration corrections (i.e. the instrument behavior) as a function of latitude and temperature, a small number of orbits has been planned in which only CAL1 calibration acquisitions are continuously performed. The analysis of the CAL1 calibration corrections produced along the calibration orbits can also be useful to verify whether the current calibration plan is able to provide sufficiently accurate corrections for the instrument acquisitions at any latitude. In 2016, the CryoSat/SIRAL Cal1 calibration orbits were commanded two times, first on 20 July 2016 and a second time on 24 November 2016, and they

  19. The Use of Nonparametric Kernel Regression Methods in Econometric Production Analysis

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard

    and nonparametric estimations of production functions in order to evaluate the optimal firm size. The second paper discusses the use of parametric and nonparametric regression methods to estimate panel data regression models. The third paper analyses production risk, price uncertainty, and farmers' risk preferences...... within a nonparametric panel data regression framework. The fourth paper analyses the technical efficiency of dairy farms with environmental output using nonparametric kernel regression in a semiparametric stochastic frontier analysis. The results provided in this PhD thesis show that nonparametric......This PhD thesis addresses one of the fundamental problems in applied econometric analysis, namely the econometric estimation of regression functions. The conventional approach to regression analysis is the parametric approach, which requires the researcher to specify the form of the regression...

  20. A Method to Test Model Calibration Techniques: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-09-01

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.

  1. An alternate approach to calibrating FAC-predictive models using inspection data (Log Book No. 398)

    International Nuclear Information System (INIS)

    Pulido, J.E.; Ksiazek, P.E.; Alecksick, R.M.

    2004-01-01

    Flow-Accelerated Corrosion (FAC) of piping and fittings in Nuclear Energy Plants can pose a threat to personnel safety, reduce plant availability, and result in undesirable challenges to plant safety systems. For these reasons, accurate predictions of FAC-induced wear rates are extremely valuable in that they allow action to be taken prior to component failure. The EPRI recommended method of predicting FAC wear rates for inspected as well as uninspected components allows for calibration of predictions through the use of wall thickness measurements obtained from UT inspections. This method uses a simple linear correction based on the median value of the ratio of measured to predicted thickness. An alternate approach is presented that takes local thermodynamic variations into account, thus resulting in an improved correlation with measured data. (author)
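    The EPRI-style linear correction described in the abstract — scaling predictions by the median of the measured-to-predicted thickness ratios — can be sketched directly (the wear data below are synthetic and the 10% model bias is an assumption):

```python
import numpy as np

rng = np.random.default_rng(8)

# Sketch of the median-ratio linear correction: scale model predictions by
# the median of measured/predicted thickness ratios from UT inspections.
predicted = rng.uniform(5.0, 12.0, 30)              # predicted wall thickness (mm)
measured = 0.9 * predicted * (1 + rng.normal(0, 0.03, 30))  # assumed 10% model bias

ratio = measured / predicted
correction = np.median(ratio)                       # robust calibration factor
calibrated = correction * predicted

before = np.mean(np.abs(predicted - measured))
after = np.mean(np.abs(calibrated - measured))
print(f"median ratio: {correction:.3f}")
print(f"mean abs error before: {before:.3f} mm, after: {after:.3f} mm")
```

    The alternate approach in the paper goes further by letting the correction depend on local thermodynamic conditions rather than using a single global factor.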

  2. Heterogeneous effects of oil shocks on exchange rates: evidence from a quantile regression approach.

    Science.gov (United States)

    Su, Xianfang; Zhu, Huiming; You, Wanhai; Ren, Yinghua

    2016-01-01

    The determinants of exchange rates have attracted considerable attention among researchers over the past several decades. Most studies, however, ignore the possibility that the impact of oil shocks on exchange rates could vary across the exchange rate returns distribution. We employ a quantile regression approach to address this issue. Our results indicate that the effect of oil shocks on exchange rates is heterogeneous across quantiles. A large US depreciation or appreciation tends to heighten the effects of oil shocks on exchange rate returns. Positive oil demand shocks lead to appreciation pressures in oil-exporting countries and this result is robust across lower and upper return distributions. These results offer rich and useful information for investors and decision-makers.
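    The mechanics of estimating quantile-specific effects can be sketched by minimizing the pinball (check) loss directly (synthetic data; the heteroscedastic design below, where the effect of x strengthens in the upper tail, is an assumption made to mimic the heterogeneity the abstract describes):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)

# Minimal quantile-regression sketch: fit a line at each quantile tau by
# minimizing the pinball loss; the slope is allowed to differ across quantiles.
n = 2000
x = rng.normal(size=n)
y = 0.2 * x + (1 + 0.5 * np.maximum(x, 0)) * rng.normal(size=n)

def pinball_slope(tau):
    def loss(params):
        a, b = params
        u = y - (a + b * x)
        return np.mean(np.maximum(tau * u, (tau - 1) * u))
    return minimize(loss, x0=[0.0, 0.0], method="Nelder-Mead").x[1]

slopes = {tau: pinball_slope(tau) for tau in (0.1, 0.5, 0.9)}
for tau, b in slopes.items():
    print(f"tau={tau}: slope={b:.2f}")
```

    Diverging slopes across tau are exactly the kind of quantile-dependent effect the paper reports for oil shocks on exchange rate returns.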

  3. Analyses of Crawford's uvby β calibrations using the pulsational variations of FG Vir

    Science.gov (United States)

    Haas, P.

    Crawford's uvby β calibration method is examined for A-type stars by comparing it with the pulsational variations of the observable m1, c1 and MV for the δ Scuti star FG Vir. The fit between the calibration values of m1 and MV and the respective measurements for FG Vir are tested as a function of temperature based on 3068 4-colour values taken at the Observatorio de Sierra Nevada in Spain during the years 2002 and 2003. Testing is performed by means of linear regression. The fit between the measured index m1 of FG Vir and the m1 index of the Hyades is nearly perfect. A fit between the calibration value MV and the measured values of FG Vir cannot be obtained with Crawford's calibration procedure in a straightforward manner. In order to achieve an optimal fit for MV two modifications of the calibration procedure are investigated and discussed. (i) the position of the ZAMS given by Crawford is replaced by the position of the ZAMS given by Mermilliod; (ii) the influence of the mass difference on c1 is taken into account.

  4. Direction of Effects in Multiple Linear Regression Models.

    Science.gov (United States)

    Wiedermann, Wolfgang; von Eye, Alexander

    2015-01-01

    Previous studies analyzed asymmetric properties of the Pearson correlation coefficient using higher than second order moments. These asymmetric properties can be used to determine the direction of dependence in a linear regression setting (i.e., establish which of two variables is more likely to be on the outcome side) within the framework of cross-sectional observational data. Extant approaches are restricted to the bivariate regression case. The present contribution extends the direction of dependence methodology to a multiple linear regression setting by analyzing distributional properties of residuals of competing multiple regression models. It is shown that, under certain conditions, the third central moments of estimated regression residuals can be used to decide upon direction of effects. In addition, three different approaches for statistical inference are discussed: a combined D'Agostino normality test, a skewness difference test, and a bootstrap difference test. Type I error and power of the procedures are assessed using Monte Carlo simulations, and an empirical example is provided for illustrative purposes. In the discussion, issues concerning the quality of psychological data, possible extensions of the proposed methods to the fourth central moment of regression residuals, and potential applications are addressed.
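The residual-skewness criterion can be sketched with ordinary least squares and the third central moment. The data below are a constructed toy example (a skewed cause, a symmetric error), not the study's procedure or simulations: the correctly specified model leaves nearly symmetric residuals, while the reversed model inherits the predictor's skewness.

```python
def ols_residuals(xs, ys):
    """Residuals of a simple OLS regression of ys on xs (with intercept)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return [y - (a + b * x) for x, y in zip(xs, ys)]

def third_central_moment(rs):
    n = len(rs)
    m = sum(rs) / n
    return sum((r - m) ** 3 for r in rs) / n

# Skewed cause x (squares of a grid) and symmetric noise whose +,-,-,+ pattern
# is uncorrelated with the trend in x; true direction is x -> y.
xs = [(i / 10.0) ** 2 for i in range(1, 41)]
noise = [5.0 * (1, -1, -1, 1)[i % 4] for i in range(40)]
ys = [x + e for x, e in zip(xs, noise)]

m3_forward = third_central_moment(ols_residuals(xs, ys))  # y regressed on x
m3_reverse = third_central_moment(ols_residuals(ys, xs))  # x regressed on y
```

The model whose residuals have the smaller absolute third moment is the one favored as the true direction of effect, which here correctly picks y-on-x.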

  5. Calibration of the ATLAS Transition Radiation Tracker

    CERN Document Server

    The ATLAS collaboration

    2011-01-01

    The Transition Radiation Tracker (TRT) is the outermost charged particle tracking device of the ATLAS Inner Detector. The TRT has about 300,000 straws, each of which is a proportional drift tube with a diameter of 4 mm. For a precise measurement of the trajectory of a charged particle (track), the relation between the measured time of the start of the signal and the distance of closest approach between the track and the anode wire needs to be calibrated. In this note, we present the calibration of the TRT detector during the first year of 7 TeV collision data-taking.

  6. A calibration-free formulation of the complementary relationship of evaporation for continental-scale hydrology

    Science.gov (United States)

    Szilagyi, Jozsef; Crago, Richard; Qualls, Russell

    2017-01-01

    An important scaling consideration is introduced into the formulation of the complementary relationship (CR) of land surface evapotranspiration (ET) by specifying the maximum possible evaporation rate (Epmax) of a small water body (or wet patch) as a result of adiabatic drying from the prevailing near-neutral atmospheric conditions. In dimensionless form the CR therefore becomes yB = f([(Epmax - Ep)/(Epmax - Ew)] xB) = f(X) = 2X² - X³, where yB = ET/Ep and xB = Ew/Ep. Ew is the wet-environment evaporation rate as given by the Priestley-Taylor equation, and Ep is the evaporation rate of the same small wet surface for which Epmax is specified, estimated by the Penman equation. With the help of North American Regional Reanalysis data, the CR formulated this way yields better continental-scale performance than earlier, calibrated versions of it and is on par with current land surface model results, the latter requiring vegetation and soil information as well as soil moisture bookkeeping. Validation has been performed with Parameter-Elevation Regressions on Independent Slopes Model precipitation and United States Geological Survey runoff data. A novel approach is also introduced to calculate the value of the Priestley-Taylor parameter to be used with continental-scale data, making the new formulation of the CR completely calibration free.
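The dimensionless form quoted in the abstract can be transcribed directly; the evaporation-rate values below are illustrative, not from the paper.

```python
def cr_evaporation_ratio(ep, ew, ep_max):
    """Return yB = ET/Ep from the calibration-free CR: y = 2X^2 - X^3,
    with X = [(Epmax - Ep)/(Epmax - Ew)] * (Ew/Ep)."""
    x_b = ew / ep                             # wet-environment ratio xB
    X = (ep_max - ep) / (ep_max - ew) * x_b   # rescaled dimensionless variable
    return 2 * X ** 2 - X ** 3

# Sanity checks implied by the formulation (units arbitrary, e.g. mm/day):
# a fully wet environment (Ep -> Ew) gives X = 1, hence ET = Ep;
# the fully arid limit (Ep -> Epmax) gives X = 0, hence ET = 0.
wet = cr_evaporation_ratio(ep=4.0, ew=4.0, ep_max=9.0)   # X = 1
arid = cr_evaporation_ratio(ep=9.0, ew=3.0, ep_max=9.0)  # X = 0
```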

  7. Radiometric and spectral calibrations of the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) using principal component analysis

    Science.gov (United States)

    Tian, Jialin; Smith, William L.; Gazarik, Michael J.

    2008-10-01

    The approach is applied to data collected during an atmospheric measurement experiment with the GIFTS, together with simultaneous observations by the accurately calibrated AERI (Atmospheric Emitted Radiance Interferometer), both zenith viewing the sky through the same external scene mirror at ten-minute intervals throughout a cloudless day at Logan, Utah, on September 13, 2006. The PC vectors of the calibrated radiance spectra are defined from the AERI observations, and regression matrices relating the initial GIFTS radiance PC scores to the AERI radiance PC scores are calculated using the least squares inverse method. A new set of accurately calibrated GIFTS radiances is produced using the first four PC scores in the regression model. Temperature and moisture profiles retrieved from the PC-calibrated GIFTS radiances are verified against radiosonde measurements collected throughout the GIFTS sky measurement period.

  8. Auto calibration of a cone-beam-CT

    International Nuclear Information System (INIS)

    Gross, Daniel; Heil, Ulrich; Schulze, Ralf; Schoemer, Elmar; Schwanecke, Ulrich

    2012-01-01

    Purpose: This paper introduces a novel autocalibration method for cone-beam CTs (CBCT) or flat-panel CTs, assuming a perfect rotation. The method is based on ellipse-fitting. Autocalibration refers to accurate recovery of the geometric alignment of a CBCT device from projection images alone, without any manual measurements. Methods: The authors use test objects containing small arbitrarily positioned radio-opaque markers. No information regarding the relative positions of the markers is used. In practice, the authors use three to eight metal ball bearings (diameter of 1 mm), e.g., positioned roughly in a vertical line such that their projection image curves on the detector preferably form large ellipses over the circular orbit. From this ellipse-to-curve mapping and also from its inversion the authors derive an explicit formula. Nonlinear optimization based on this mapping enables them to determine the six relevant parameters of the system up to the device rotation angle, which is sufficient to define the geometry of a CBCT machine assuming a perfect rotational movement. These parameters also include out-of-plane rotations. The authors evaluate their method by simulation based on data used in two similar approaches [L. von Smekal, M. Kachelriess, E. Stepina, and W. A. Kalender, “Geometric misalignment and calibration in cone-beam tomography,” Med. Phys. 31(12), 3242–3266 (2004); K. Yang, A. L. C. Kwan, D. F. Miller, and J. M. Boone, “A geometric calibration method for cone beam CT systems,” Med. Phys. 33(6), 1695–1706 (2006)]. This allows a direct comparison of accuracy. Furthermore, the authors present real-world 3D reconstructions of a dry human spine segment and an electronic device. The reconstructions were computed from projections taken with a commercial dental CBCT device having two different focus-to-detector distances that were both calibrated with their method. The authors compare their reconstruction with a reconstruction computed by the manufacturer of the
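The ellipse-fitting ingredient can be sketched as a least-squares conic fit to a marker's projected positions over the orbit. This is a simplified illustration of that one step under assumed synthetic data, not the authors' full autocalibration pipeline.

```python
import math

def solve(A, rhs):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_ellipse(pts):
    """Least-squares fit of a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1 (normal equations)."""
    rows = [[x * x, x * y, y * y, x, y] for x, y in pts]
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(5)] for i in range(5)]
    Atb = [sum(r[i] for r in rows) for i in range(5)]
    return solve(AtA, Atb)

def ellipse_center(coeffs):
    """Center where the conic gradient vanishes: 2a*x + b*y = -d, b*x + 2c*y = -e."""
    a, b, c, d, e = coeffs
    return solve([[2 * a, b], [b, 2 * c]], [-d, -e])

# Synthetic projection curve of one marker over the orbit: ellipse at (3, -2).
pts = [(3 + 5 * math.cos(t), -2 + 2 * math.sin(t))
       for t in [2 * math.pi * k / 12 for k in range(12)]]
cx, cy = ellipse_center(fit_ellipse(pts))
```

With exact points the fitted conic reproduces the generating ellipse and the recovered center matches (3, -2); the geometric parameters of such fitted ellipses are the inputs the autocalibration optimizes over.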

  9. Differentiating regressed melanoma from regressed lichenoid keratosis.

    Science.gov (United States)

    Chan, Aegean H; Shulman, Kenneth J; Lee, Bonnie A

    2017-04-01

    Distinguishing regressed lichen planus-like keratosis (LPLK) from regressed melanoma can be difficult on histopathologic examination, potentially resulting in mismanagement of patients. We aimed to identify histopathologic features by which regressed melanoma can be differentiated from regressed LPLK. Twenty actively inflamed LPLK, 12 LPLK with regression and 15 melanomas with regression were compared and evaluated by hematoxylin and eosin staining as well as Melan-A, microphthalmia transcription factor (MiTF) and cytokeratin (AE1/AE3) immunostaining. (1) A total of 40% of regressed melanomas showed complete or near complete loss of melanocytes within the epidermis with Melan-A and MiTF immunostaining, while 8% of regressed LPLK exhibited this finding. (2) Necrotic keratinocytes were seen in the epidermis in 33% of regressed melanomas as opposed to all of the regressed LPLK. (3) A dense infiltrate of melanophages in the papillary dermis was seen in 40% of regressed melanomas, a feature not seen in regressed LPLK. In summary, our findings suggest that a complete or near complete loss of melanocytes within the epidermis strongly favors a regressed melanoma over a regressed LPLK. In addition, necrotic epidermal keratinocytes and the presence of a dense band-like distribution of dermal melanophages can be helpful in differentiating these lesions. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  10. Camera calibration method of binocular stereo vision based on OpenCV

    Science.gov (United States)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the process, obtaining higher precision and efficiency. First, the camera model in OpenCV and an algorithm for camera calibration are presented, especially considering the influence of camera lens radial distortion and decentering distortion. Then, a camera calibration procedure is designed to compute the camera parameters and calculate calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners have also been used in this part. Finally, results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results can meet the requirements of robot binocular stereo vision.
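The projection model the abstract refers to (pinhole intrinsics plus radial and decentering distortion) can be sketched directly. The coefficient names follow the common OpenCV convention (k1, k2 radial; p1, p2 tangential); the numeric values are invented for illustration.

```python
def project_point(X, Y, Z, fx, fy, cx, cy, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Project a 3D camera-frame point to pixel coordinates with distortion."""
    x, y = X / Z, Y / Z                     # normalized image coordinates
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2     # radial distortion factor
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)  # + decentering
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return fx * xd + cx, fy * yd + cy       # apply the intrinsic matrix

# With all distortion coefficients zero this reduces to the ideal pinhole model:
u, v = project_point(0.5, -0.25, 2.0, fx=800, fy=800, cx=320, cy=240)
```

Calibration estimates fx, fy, cx, cy and the distortion coefficients by minimizing the reprojection error of the detected checkerboard corners against this model.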

  11. Borehole Volumetric Strainmeter Calibration From a Nearby Seismic Broadband Array at Etna Volcano

    Science.gov (United States)

    Currenti, G.; Zuccarello, L.; Bonaccorso, A.; Sicali, A.

    2017-10-01

    Strainmeter and broadband seismic signals have been analyzed jointly with the aim of calibrating a borehole strainmeter at Etna volcano by using a seismo-geodetic technique. Our results reveal a good coherence between the dynamic strains estimated from seismometer data and strains recorded by a dilatometer in a low-frequency range [0.03-0.06 Hz] at the arrival of teleseismic waves. This significant coherence enabled estimating the calibration coefficient and making a comparison with calibration results derived from other methods. In particular, we verified that the proposed approach provides a calibration coefficient that matches the results obtained from the comparison of the recorded strain both with theoretical strain tides and with normal-mode synthetic straingrams. The approach presented here has the advantage of exploiting recorded seismic data, avoiding the use of computed strain from theoretical models.

  12. Regression model of support vector machines for least squares prediction of crystallinity of cracking catalysts by infrared spectroscopy

    International Nuclear Information System (INIS)

    Comesanna Garcia, Yumirka; Dago Morales, Angel; Talavera Bustamante, Isneri

    2010-01-01

    The recent introduction of the least squares support vector machine method for regression purposes in the field of chemometrics has provided several advantages over linear and nonlinear multivariate calibration methods. The objective of the paper was to propose the use of the least squares support vector machine as an alternative multivariate calibration method for the prediction of the percentage of crystallinity of fluidized catalytic cracking catalysts, by means of Fourier transform mid-infrared spectroscopy. A linear kernel was used in the calculations of the regression model. The optimization of its gamma parameter was carried out using the leave-one-out cross-validation procedure. The root mean square error of prediction was used to measure the performance of the model. The accuracy of the results obtained with the application of the method is in accordance with the uncertainty of the X-ray powder diffraction reference method. To compare the generalization capability of the developed method, a comparison study was carried out, taking into account the results achieved with the new model and those reached through the application of linear calibration methods. The developed method can be easily implemented in refinery laboratories.
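A minimal LS-SVM regression sketch with a linear kernel, as in the abstract; the data are an invented one-dimensional toy set, not catalyst spectra. LS-SVM solves the linear system [[0, 1ᵀ], [1, K + I/γ]] [b; α] = [0; y] and predicts with f(x) = Σᵢ αᵢ K(x, xᵢ) + b.

```python
def solve(A, rhs):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def lssvm_fit(xs, ys, gamma):
    """Solve the LS-SVM dual system; returns bias b and dual weights alpha."""
    n = len(xs)
    K = [[xs[i] * xs[j] for j in range(n)] for i in range(n)]  # linear kernel
    A = [[0.0] + [1.0] * n]
    for i in range(n):
        A.append([1.0] + [K[i][j] + (1.0 / gamma if i == j else 0.0)
                          for j in range(n)])
    sol = solve(A, [0.0] + list(ys))
    return sol[0], sol[1:]

def lssvm_predict(x, xs, b, alpha):
    return b + sum(a * (x * xi) for a, xi in zip(alpha, xs))

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0 + 0.5 * x for x in xs]            # exactly linear target
b, alpha = lssvm_fit(xs, ys, gamma=1e6)     # gamma as tuned by, e.g., LOO-CV
pred = lssvm_predict(2.5, xs, b, alpha)     # should be close to 2.25
```

In the paper's setting xᵢ would be a mid-infrared spectrum (so the linear kernel is a dot product of spectra) and γ is selected by leave-one-out cross-validation.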

  13. Lateral force calibration in atomic force microscopy: A new lateral force calibration method and general guidelines for optimization

    International Nuclear Information System (INIS)

    Cannara, Rachel J.; Eglin, Michael; Carpick, Robert W.

    2006-01-01

    Proper force calibration is a critical step in atomic and lateral force microscopies (AFM/LFM). The recently published torsional Sader method [C. P. Green et al., Rev. Sci. Instrum. 75, 1988 (2004)] facilitates the calculation of torsional spring constants of rectangular AFM cantilevers by eliminating the need to obtain information or make assumptions regarding the cantilever's material properties and thickness, both of which are difficult to measure. Complete force calibration of the lateral signal in LFM requires measurement of the lateral signal deflection sensitivity as well. In this article, we introduce a complete lateral force calibration procedure that employs the torsional Sader method and does not require making contact between the tip and any sample. In this method, a colloidal sphere is attached to a 'test' cantilever of the same width, but different length and material as the 'target' cantilever of interest. The lateral signal sensitivity is calibrated by loading the colloidal sphere laterally against a vertical sidewall. The signal sensitivity for the target cantilever is then corrected for the tip length, total signal strength, and in-plane bending of the cantilevers. We discuss the advantages and disadvantages of this approach in comparison with the other established lateral force calibration techniques, and make a direct comparison with the 'wedge' calibration method. The methods agree to within 5%. The propagation of errors is explicitly considered for both methods and the sources of disagreement discussed. Finally, we show that the lateral signal sensitivity is substantially reduced when the laser spot is not centered on the detector.

  14. Relative accuracy of spatial predictive models for lynx Lynx canadensis derived using logistic regression-AIC, multiple criteria evaluation and Bayesian approaches

    Directory of Open Access Journals (Sweden)

    Shelley M. ALEXANDER

    2009-02-01

    We compared probability surfaces derived using one set of environmental variables in three Geographic Information Systems (GIS)-based approaches: logistic regression with Akaike's Information Criterion (AIC), Multiple Criteria Evaluation (MCE), and Bayesian analysis (specifically Dempster-Shafer theory). We used lynx Lynx canadensis as our focal species, and developed our environment relationship model using track data collected in Banff National Park, Alberta, Canada, during winters from 1997 to 2000. The accuracy of the three spatial models was compared using a contingency table method. We determined the percentage of cases in which both presence and absence points were correctly classified (overall accuracy), the failure to predict a species where it occurred (omission error) and the prediction of presence where there was absence (commission error). Our overall accuracy showed the logistic regression approach was the most accurate (74.51%). The multiple criteria evaluation was intermediate (39.22%), while the Dempster-Shafer (D-S) theory model was the poorest (29.90%). However, omission and commission error tell us a different story: logistic regression had the lowest commission error, while D-S theory produced the lowest omission error. Our results provide evidence that habitat modellers should evaluate all three error measures when ascribing confidence in their model. We suggest that for our study area at least, the logistic regression model is optimal. However, where sample size is small or the species is very rare, it may also be useful to explore and/or use a more ecologically cautious modelling approach (e.g. Dempster-Shafer) that would over-predict, protect more sites, and thereby minimize the risk of missing critical habitat in conservation plans [Current Zoology 55(1): 28–40, 2009].
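The three error measures compared in the study can be computed from a presence/absence confusion matrix. The counts below are invented; the commission-error denominator here (true absences) is one common convention and is noted as an assumption.

```python
def accuracy_measures(tp, fn, fp, tn):
    """Overall accuracy, omission error and commission error (as fractions)."""
    total = tp + fn + fp + tn
    overall = (tp + tn) / total   # presence and absence correctly classified
    omission = fn / (tp + fn)     # species missed where it actually occurred
    commission = fp / (fp + tn)   # presence predicted where it was absent
    return overall, omission, commission

# Invented example: 40 true presences, 60 true absences.
overall, omission, commission = accuracy_measures(tp=30, fn=10, fp=15, tn=45)
```

As the abstract argues, a model can score well on one measure and poorly on another, which is why all three should be reported together.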

  15. Intercomparison and calibration of dose calibrators used in nuclear medicine facilities

    CERN Document Server

    Costa, A M D

    2003-01-01

    The aim of this work was to establish a working standard for intercomparison and calibration of dose calibrators used in most nuclear medicine facilities for the determination of the activity of radionuclides administered to patients in specific examinations or therapeutic procedures. A commercial dose calibrator, a set of standard radioactive sources, and syringes, vials and ampoules with radionuclide solutions used in nuclear medicine were utilized in this work. The commercial dose calibrator was calibrated for radionuclide solutions used in nuclear medicine. Simple instrument tests, such as linearity of response and variation of response with source volume at constant source activity concentration, were performed. This instrument may be used as a reference system for intercomparison and calibration of other activity meters, as a method of quality control of dose calibrators utilized in nuclear medicine facilities.

  16. IMU-based online kinematic calibration of robot manipulator.

    Science.gov (United States)

    Du, Guanglong; Zhang, Ping

    2013-01-01

    Robot calibration is a useful diagnostic method for improving the positioning accuracy in robot production and maintenance. An online robot self-calibration method based on an inertial measurement unit (IMU) is presented in this paper. The method requires that the IMU is rigidly attached to the robot manipulator, which makes it possible to obtain the orientation of the manipulator from the orientation of the IMU in real time. This paper proposes an efficient approach which incorporates the Factored Quaternion Algorithm (FQA) and a Kalman filter (KF) to estimate the orientation of the IMU. Then, an extended Kalman filter (EKF) is used to estimate kinematic parameter errors. Using this proposed orientation estimation method will result in improved reliability and accuracy in determining the orientation of the manipulator. Compared with existing vision-based self-calibration methods, the great advantage of this method is that it does not need complex steps such as camera calibration, image capture, and corner detection, which makes the robot calibration procedure more autonomous in a dynamic manufacturing environment. Experimental studies on a GOOGOL GRB3016 robot show that this method has better accuracy, convenience, and effectiveness than vision-based methods.

  18. Generation of Natural Runoff Monthly Series at Ungauged Sites Using a Regional Regressive Model

    Directory of Open Access Journals (Sweden)

    Dario Pumo

    2016-05-01

    Many hydrologic applications require reliable estimates of runoff in river basins in the face of a widespread lack of data, both in time and in space. A regional method for the reconstruction of monthly runoff series is here developed and applied to Sicily (Italy). A simple modeling structure is adopted, consisting of a regression-based rainfall–runoff model with four model parameters, calibrated through a two-step procedure. Monthly runoff estimates are based on precipitation, temperature, and the autocorrelation with the previous month's runoff. Model parameters are assessed by specific regional equations as a function of easily measurable physical and climate basin descriptors. The first calibration step is aimed at the identification of a set of parameters optimizing model performance at the level of the single basin. Such “optimal” sets are used at the second step, part of a regional regression analysis, to establish the regional equations for model parameter assessment as a function of basin attributes. All the gauged watersheds across the region have been analyzed, selecting 53 basins for model calibration and using the other six basins exclusively for validation. Performance, quantitatively evaluated by different statistical indices, demonstrates the model's ability to reproduce the observed hydrological time series at both the monthly and coarser time resolutions. The methodology, which is easily transferable to other arid and semi-arid areas, provides a reliable tool for filling/reconstructing runoff time series at any gauged or ungauged basin of a region.
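A hedged sketch of a regression-based monthly rainfall-runoff model of the general form described (runoff from precipitation, temperature and the previous month's runoff). The functional form, coefficients and series below are synthetic assumptions, not the paper's calibrated model.

```python
def solve(A, rhs):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_runoff_model(P, T, Q):
    """OLS fit of Q_t = c0 + c1*P_t + c2*T_t + c3*Q_{t-1} via normal equations."""
    rows = [[1.0, P[t], T[t], Q[t - 1]] for t in range(1, len(Q))]
    y = Q[1:]
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(4)] for i in range(4)]
    Aty = [sum(r[i] * yt for r, yt in zip(rows, y)) for i in range(4)]
    return solve(AtA, Aty)

# Synthetic 24-month series generated from known coefficients, to check recovery.
true = [5.0, 0.4, -0.8, 0.3]
P = [50 + 10 * ((t * 7) % 5) for t in range(24)]   # precipitation (mm)
T = [10 + (t % 12) for t in range(24)]             # temperature (deg C)
Q = [20.0]                                         # runoff, with initial value
for t in range(1, 24):
    Q.append(true[0] + true[1] * P[t] + true[2] * T[t] + true[3] * Q[t - 1])

coeffs = fit_runoff_model(P, T, Q)
```

In the regional method the four fitted coefficients would themselves be predicted from basin descriptors, which is what makes the model applicable at ungauged sites.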

  19. Utilização de regressão multivariada para avaliação espectrofotométrica da demanda química de oxigênio em amostras de relevância ambiental Use of multivariate regression in spectrophotometric evaluation of chemical oxygen demand in samples of environmental relevance

    Directory of Open Access Journals (Sweden)

    Patricio Peralta-Zamora

    2005-10-01

    In this work, a partial least squares regression routine was used to develop a multivariate calibration model to predict the chemical oxygen demand (COD) in substrates of environmental relevance (paper effluents and landfill leachates) from UV-Vis spectral data. The calibration models permit the fast determination of the COD, with typical relative errors lower than 10% with respect to the conventional methodology.
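A one-component PLS regression sketch (NIPALS-style) illustrating the spectra-to-COD calibration idea. The "spectra" below are synthetic rank-one data, not UV-Vis measurements, and a real calibration would typically use several components.

```python
def pls1_one_component(X, y):
    """Build a single-component PLS1 predictor from mean-centered data."""
    n, p = len(X), len(X[0])
    xbar = [sum(row[j] for row in X) / n for j in range(p)]
    ybar = sum(y) / n
    Xc = [[row[j] - xbar[j] for j in range(p)] for row in X]
    yc = [v - ybar for v in y]
    # Weight vector w proportional to X^T y, normalized to unit length.
    w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(p)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    t = [sum(Xc[i][j] * w[j] for j in range(p)) for i in range(n)]  # scores
    q = sum(ti * yi for ti, yi in zip(t, yc)) / sum(ti * ti for ti in t)

    def predict(row):
        score = sum((row[j] - xbar[j]) * w[j] for j in range(p))
        return ybar + q * score
    return predict

# Rank-one "spectra": each sample is concentration * a fixed spectral shape.
shape = [0.2, 0.5, 1.0, 0.7, 0.3]
conc = [1.0, 2.0, 3.0, 4.0]
X = [[c * s for s in shape] for c in conc]
y = [10.0 * c for c in conc]          # "COD" proportional to concentration

predict = pls1_one_component(X, y)
```

Because the synthetic data vary along a single latent direction, one PLS component recovers the calibration exactly; e.g. a spectrum at concentration 2.5 predicts a COD of 25.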

  20. A new sewage exfiltration model--parameters and calibration.

    Science.gov (United States)

    Karpf, Christian; Krebs, Peter

    2011-01-01

    Exfiltration of waste water from sewer systems represents a potential danger for the soil and the aquifer. Common models used to describe the exfiltration process are based on Darcy's law, extended by a more or less detailed consideration of the expansion of leaks, the characteristics of the soil and the colmation layer. However, due to the complexity of the exfiltration process, the calibration of these models involves significant uncertainty. In this paper, a new exfiltration approach is introduced, which implements the dynamics of the clogging process and the structural conditions near sewer leaks. The calibration is realised according to experimental studies and analysis of groundwater infiltration to sewers. Furthermore, exfiltration rates and the sensitivity of the approach are estimated and evaluated, respectively, by Monte-Carlo simulations.
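A generic Darcy-type leak-flux sketch of the kind the common models build on (not the authors' new dynamic-colmation model); all parameter values are invented for illustration.

```python
def exfiltration_rate(k, leak_area, head, layer_thickness):
    """Darcy's law for a leak: Q = k * A * dh / L (Q in m^3/s),
    where k is the hydraulic conductivity of the limiting colmation layer."""
    return k * leak_area * head / layer_thickness

# Invented values: k = 1e-6 m/s, a 10 cm^2 leak, 0.5 m pressure head,
# and a 2 cm thick colmation layer:
q = exfiltration_rate(k=1e-6, leak_area=10e-4, head=0.5, layer_thickness=0.02)
# about 2.5e-8 m^3/s, i.e. roughly 2 litres per day
```

The paper's contribution is precisely that k and the layer properties are not constants: clogging makes them dynamic, which is why calibrating a static Darcy model is so uncertain.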

  1. IMU Calibration and Validation in a Factory, Remote on Land and at Sea

    DEFF Research Database (Denmark)

    Jørgensen, Martin Juhl; Paccagnan, Dario; Poulsen, Niels Kjølstad

    2014-01-01

    relevance for gyro-compassing grade optical gyroscopes and force-rebalanced pendulous accelerometers: Scale factor, bias and sensor axes misalignments. Focus is on low-dynamic marine applications, e.g., subsea construction and survey. Two different methods of calibration are investigated: Kalman smoothing...... using an Aided Inertial Navigation System (AINS) framework, augmenting the error state Kalman filter (ESKF) to include the full set of IMU calibration parameters and a least squares approach, where the calibration parameters are determined by minimizing the magnitude of the INS error differential...... equation output. A method of evaluating calibrations is introduced and discussed. The two calibration methods are evaluated for factory use and results compared to a legacy proprietary method as well as in-field calibration/verification on land and at sea. The calibration methods show similar navigation

  2. Mercury CEM Calibration

    Energy Technology Data Exchange (ETDEWEB)

    John Schabron; Joseph Rovani; Mark Sanderson

    2008-02-29

    outputs of mercury generators are compared to one another using a nesting procedure which allows direct comparison of one generator with another and eliminates analyzer variability effects. The qualification portion of the EPA interim traceability protocol requires the vendors to define generator performance as affected by variables such as pressure, temperature, line voltage, and shipping. WRI is focusing efforts to determine actual generator performance related to the variables defined in the qualification portion of the interim protocol. The protocol will then be further revised by EPA based on what can actually be achieved with the generators. Another focus of the study is to evaluate approaches for field verification of generator performance. Upcoming work includes evaluation of oxidized mercury calibration generators, for which a separate protocol will be prepared by EPA. In addition, the variability of the spectrometers/analyzers under various environmental conditions needs to be defined and understood better. A main objective of the current work is to provide data on the performance and capabilities of elemental mercury generator/calibration systems for the development of realistic NIST traceability protocols for mercury vapor standards for continuous emission CEM calibration. This work is providing a direct contribution to the enablement of continuous emissions monitoring at coal-fired power plants in conformance with the CAMR. EPA Specification 12 states that mercury CEMs must be calibrated with NIST-traceable standards (Federal Register 2005). The initial draft of an elemental mercury generator traceability protocol was circulated by EPA in May 2007 for comment, and an interim protocol was issued in August 2007 (EPA 2007). 
Initially it was assumed that the calibration and implementation of mercury CEMs would be relatively simple, and implementation would follow the implementation of the Clean Air Interstate Rule (CAIR) SO{sub 2} and NO{sub x} monitoring, and

  3. The simple procedure for the fluxgate magnetometers calibration

    Science.gov (United States)

    Marusenkov, Andriy

    2014-05-01

    The fluxgate magnetometers are widely used in geophysics investigations, including geomagnetic field monitoring at the global network of geomagnetic observatories as well as electromagnetic sounding of the Earth's crust conductivity. For solving these tasks the magnetometers have to be calibrated with an appropriate level of accuracy. As a particular case, the ways to satisfy the recent requirements on the scaling and orientation errors of 1-second INTERMAGNET magnetometers are considered in this work. The goal of the present study was to choose a simple and reliable calibration method for estimation of scale factors and angular errors of three-axis magnetometers in the field. There are a large number of scalar calibration methods, which use a free rotation of the sensor in the calibration field followed by complicated data processing procedures for numerical solution of the high-order equation set. The chosen approach also exploits the Earth's magnetic field as a calibrating signal, but, in contrast to other methods, the sensor has to be oriented in particular positions with respect to the total field vector instead of being freely rotated. This allows the use of very simple and straightforward linear computation formulas and, as a result, more reliable estimations of the calibrated parameters. The estimation of the scale factors is performed by sequentially aligning each component of the sensor in two positions: parallel and anti-parallel to the Earth's magnetic field vector. The estimation of non-orthogonality angles between each pair of components is performed after sequentially aligning the components at angles of +/- 45 and +/- 135 degrees of arc with respect to the total field vector. Due to this four-position approach, the estimations of the non-orthogonality angles are invariant to the zero offsets and non-linearity of the transfer functions of the components. The experimental justification of the proposed method by means of the
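The parallel/anti-parallel step of the described procedure reduces to two linear equations per component; the simulated sensor values below are invented, and F stands for the known total field magnitude.

```python
def scale_and_offset(reading_parallel, reading_antiparallel, total_field):
    """Two-position solution of r+ = k*F + o and r- = -k*F + o
    for the scale factor k and the zero offset o of one component."""
    k = (reading_parallel - reading_antiparallel) / (2.0 * total_field)
    o = (reading_parallel + reading_antiparallel) / 2.0
    return k, o

# Simulated component with true scale 1.02 and offset 35 nT in a 48000 nT field:
F = 48000.0
k, o = scale_and_offset(1.02 * F + 35.0, -1.02 * F + 35.0, F)
```

Note how the offset cancels out of the scale-factor estimate and vice versa, which is the invariance property the abstract claims for the multi-position scheme.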

  4. Novel gravimetric measurement technique for quantitative volume calibration in the sub-microliter range

    International Nuclear Information System (INIS)

    Liang, Dong; Zengerle, Roland; Steinert, Chris; Ernst, Andreas; Koltay, Peter; Bammesberger, Stefan; Tanguy, Laurent

    2013-01-01

    We present a novel measurement method based on the gravimetric principles adapted from the ASTM E542 and ISO 4787 standards for quantitative volume determination in the sub-microliter range. Such a method is particularly important for the calibration of non-contact micro dispensers as well as other microfluidic devices. The novel method is based on the linear regression analysis of continuously monitored gravimetric results and therefore is referred to as the ‘gravimetric regression method (GRM)’. In this context, the regression analysis is necessary to compensate the mass loss due to evaporation that is significant for very small dispensing volumes. A full assessment of the measurement uncertainty of GRM is presented and results in a standard measurement uncertainty around 6 nl for dosage volumes in the range from 40 nl to 1 µl. The GRM has been experimentally benchmarked with a dual-dye ratiometric photometric method (Artel Inc., Westbrook, ME, USA), which can provide traceability of measurement to the International System of Units (SI) through reference standards maintained by NIST. Good precision (max. CV = 2.8%) and consistency (bias around 7 nl in the volume range from 40 to 400 nl) have been observed when comparing the two methods. Based on the ASTM and ISO standards on the one hand and the benchmark with the photometric method on the other hand, two different approaches for establishing traceability for the GRM are discussed.
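A toy version of the evaporation-compensating idea behind the GRM, on simulated balance readings rather than the paper's data: fit regression lines to the readings before and after the dispense, extrapolate both to the dispense time, and take the jump, so the steady evaporation loss does not bias the dispensed-mass estimate.

```python
def fit_line(ts, ms):
    """Least-squares slope and intercept of mass-versus-time readings."""
    n = len(ts)
    mt, mm = sum(ts) / n, sum(ms) / n
    slope = sum((t - mt) * (m - mm) for t, m in zip(ts, ms)) / \
            sum((t - mt) ** 2 for t in ts)
    return slope, mm - slope * mt

# Simulated balance readings (mg) sampled once per second, with a constant
# evaporation loss and a 40 nl water dispense (0.040 mg) at t = 10 s.
evap_rate = -0.002    # mg/s lost to evaporation (invented)
dispensed = 0.040     # mg actually dispensed (invented)
pre = [(t, 100.0 + evap_rate * t) for t in range(0, 10)]
post = [(t, 100.0 + dispensed + evap_rate * t) for t in range(10, 20)]

s_pre, b_pre = fit_line(*zip(*pre))
s_post, b_post = fit_line(*zip(*post))
t0 = 10.0
mass_jump = (s_post * t0 + b_post) - (s_pre * t0 + b_pre)  # ~0.040 mg
```

A naive difference of the last pre-dispense and first post-dispense readings would fold part of the evaporation loss into the result; the regression-extrapolation removes it.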

  5. Gaussian Process Regression Model in Spatial Logistic Regression

    Science.gov (United States)

    Sofro, A.; Oktaviarina, A.

    2018-01-01

    Spatial analysis has developed very quickly in the last decade. One of the most popular approaches is based on the neighbourhood of the regions. Unfortunately, there are some limitations, such as difficulty in prediction. Therefore, we offer Gaussian process regression (GPR) to address the issue. In this paper, we focus on spatial modeling with GPR for binomial data with a logit link function. The performance of the model is investigated. We discuss inference, namely how to estimate the parameters and hyper-parameters, and how to predict. Furthermore, simulation studies are explained in the last section.

  6. Regression-based approach for testing the association between multi-region haplotype configuration and complex trait

    Directory of Open Access Journals (Sweden)

    Zhao Hongbo

    2009-09-01

    Full Text Available Abstract Background It is quite common that the genetic architecture of complex traits involves many genes and their interactions. Therefore, dealing with multiple unlinked genomic regions simultaneously is desirable. Results In this paper we develop a regression-based approach to assess the interactions of haplotypes that belong to different unlinked regions, and we use score statistics to test the null hypothesis of no genetic association. Additionally, multiple marker combinations at each unlinked region are considered. The multiple tests are settled via the minP approach. The P value of the "best" multi-region multi-marker configuration is corrected via Monte-Carlo simulations. Through simulation studies, we assess the performance of the proposed approach and demonstrate its validity and power in testing for haplotype interaction association. Conclusion Our simulations showed that, for a binary trait without covariates, our proposed methods proved to be as powerful as, and in some settings more powerful than, htr and hapcc, which are part of the FAMHAP program. Additionally, our model can be applied to a wider variety of traits and allows adjustment for other covariates. To demonstrate validity, our methods are applied to analyze the association between four unlinked candidate genes and pig meat quality.
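    The Monte-Carlo minP correction is simple to sketch. In the paper the null distribution is obtained by permutation, which preserves the correlation between the multiple marker configurations; the toy version below draws independent uniform P values instead, so it illustrates only the mechanics:

```python
import random

def minp_correction(observed_pvalues, n_sim=2000, seed=1):
    """Monte-Carlo corrected P value for the 'best' configuration:
    the fraction of simulated experiments whose smallest P value is
    at least as extreme as the observed minimum. Null P values are
    drawn as independent Uniform(0, 1) here; a permutation scheme
    would preserve between-test correlation instead."""
    rng = random.Random(seed)
    n_tests = len(observed_pvalues)
    obs_min = min(observed_pvalues)
    hits = sum(
        min(rng.random() for _ in range(n_tests)) <= obs_min
        for _ in range(n_sim)
    )
    return (hits + 1) / (n_sim + 1)

# ten hypothetical per-configuration P values; the naive minimum 0.01
# is corrected upward for having searched over ten configurations
p_corrected = minp_correction(
    [0.01, 0.4, 0.2, 0.9, 0.5, 0.33, 0.75, 0.6, 0.8, 0.12]
)
```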

  7. Visible spectroscopy calibration transfer model in determining pH of Sala mangoes

    International Nuclear Information System (INIS)

    Yahaya, O.K.M.; MatJafri, M.Z.; Aziz, A.A.; Omar, A.F.

    2015-01-01

    The purpose of this study is to compare the efficiency of calibration transfer procedures between three spectrometers involving two Ocean Optics Inc. spectrometers, namely, QE65000 and Jaz, and also, ASD FieldSpec 3 in measuring the pH of Sala mango by visible reflectance spectroscopy. This study evaluates the ability of these spectrometers to measure the pH of Sala mango by applying similar calibration algorithms through direct calibration transfer, in which one spectrometer is defined as the master instrument and another as the slave. The multiple linear regression (MLR) calibration model generated using the QE65000 spectrometer is transferred to the Jaz spectrometer and vice versa for Set 1. The same technique is applied for Set 2, where the QE65000 calibration is transferred to the FieldSpec 3 spectrometer and vice versa. For Set 1, the result showed that the QE65000 spectrometer established a calibration model with higher accuracy than that of the Jaz spectrometer. In addition, the calibration model developed on the Jaz spectrometer successfully predicted the pH of Sala mango measured using the QE65000 spectrometer, with a root mean square error of prediction RMSEP = 0.092 pH and coefficient of determination R² = 0.892. Moreover, the best prediction result is obtained for Set 2 when the calibration model developed on the QE65000 spectrometer is successfully transferred to the FieldSpec 3 with R² = 0.839 and RMSEP = 0.16 pH.

  8. A single model procedure for estimating tank calibration equations

    International Nuclear Information System (INIS)

    Liebetrau, A.M.

    1997-10-01

    A fundamental component of any accountability system for nuclear materials is a tank calibration equation that relates the height of liquid in a tank to its volume. Tank volume calibration equations are typically determined from pairs of height and volume measurements taken in a series of calibration runs. After raw calibration data are standardized to a fixed set of reference conditions, the calibration equation is typically fit by dividing the data into several segments--corresponding to regions in the tank--and independently fitting the data for each segment. The estimates obtained for individual segments must then be combined to obtain an estimate of the entire calibration function. This process is tedious and time-consuming. Moreover, uncertainty estimates may be misleading because it is difficult to properly model run-to-run variability and between-segment correlation. In this paper, the authors describe a model whose parameters can be estimated simultaneously for all segments of the calibration data, thereby eliminating the need for segment-by-segment estimation. The essence of the proposed model is to define a suitable polynomial to fit to each segment and then extend its definition to the domain of the entire calibration function, so that it (the entire calibration function) can be expressed as the sum of these extended polynomials. The model provides defensible estimates of between-run variability and yields a proper treatment of between-segment correlations. A portable software package, called TANCS, has been developed to facilitate the acquisition, standardization, and analysis of tank calibration data. The TANCS package was used for the calculations in an example presented to illustrate the unified modeling approach described in this paper. With TANCS, a trial calibration function can be estimated and evaluated in a matter of minutes.
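    For linear segments, the "extend each segment polynomial to the whole domain and sum" construction amounts to a truncated-power spline basis, so all segments can be fitted in one least-squares pass. A minimal numpy sketch (two segments with a known breakpoint; the linear-segment simplification and all names are ours, and TANCS itself is not involved):

```python
import numpy as np

def design_matrix(h, knots):
    """Basis: global intercept and slope plus one 'extended' ramp term
    per segment boundary. Summing these extended polynomials yields a
    single continuous piecewise-linear calibration function."""
    cols = [np.ones_like(h), h]
    cols += [np.maximum(h - k, 0.0) for k in knots]
    return np.column_stack(cols)

def fit_calibration(h, v, knots):
    """Estimate all segment parameters in one least-squares pass."""
    beta, *_ = np.linalg.lstsq(design_matrix(h, knots), v, rcond=None)
    return beta

def predict(h, knots, beta):
    return design_matrix(h, knots) @ beta

# synthetic tank: slope changes at h = 5 (e.g. a change in cross-section)
h = np.linspace(0.0, 10.0, 50)
v = 2.0 + 1.5 * h + 1.5 * np.maximum(h - 5.0, 0.0)
beta = fit_calibration(h, v, [5.0])
```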

  9. Regionalization of meso-scale physically based nitrogen modeling outputs to the macro-scale by the use of regression trees

    Science.gov (United States)

    Künne, A.; Fink, M.; Kipka, H.; Krause, P.; Flügel, W.-A.

    2012-06-01

    In this paper, a method is presented to estimate excess nitrogen on large scales considering single field processes. The approach was implemented by using the physically based model J2000-S to simulate the nitrogen balance as well as the hydrological dynamics within meso-scale test catchments. The model input data, the parameterization, the results and a detailed system understanding were used to generate the regression tree models with GUIDE (Loh, 2002). For each landscape type in the federal state of Thuringia a regression tree was calibrated and validated using the model data and results of excess nitrogen from the test catchments. Hydrological parameters such as precipitation and evapotranspiration were also used to predict excess nitrogen by the regression tree model. Hence they had to be calculated and regionalized as well for the state of Thuringia. Here the model J2000g was used to simulate the water balance on the macro scale. With the regression trees the excess nitrogen was regionalized for each landscape type of Thuringia. The approach allows calculating the potential nitrogen input into the streams of the drainage area. The results show that the applied methodology was able to transfer the detailed model results of the meso-scale catchments to the entire state of Thuringia with low computing time and without losing the detailed knowledge from the nitrogen transport modeling. This was validated against modeling results from Fink (2004) in a catchment lying in the regionalization area; the regionalized and the directly modeled excess nitrogen agree at the 94% level. The study was conducted within the framework of a project in collaboration with the Thuringian Environmental Ministry, whose overall aim was to assess the effect of agro-environmental measures regarding load reduction in the water bodies of Thuringia to fulfill the requirements of the European Water Framework Directive (Bäse et al., 2007; Fink, 2006; Fink et al., 2007).
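    GUIDE uses an unbiased split-selection algorithm, but the essence of a regression tree — recursive partitioning on predictors such as precipitation, with a mean response in each leaf — can be sketched generically (hypothetical predictor/response values; this is plain CART-style splitting, not GUIDE):

```python
def build_tree(X, y, min_leaf=3, depth=0, max_depth=3):
    """Greedy regression tree: pick the split minimizing the summed
    squared error of the two child means; leaves store the mean."""
    if depth >= max_depth or len(y) < 2 * min_leaf:
        return sum(y) / len(y)                      # leaf: mean response
    best = None
    for j in range(len(X[0])):                      # each predictor
        for s in sorted(set(row[j] for row in X)):  # each candidate split
            left = [i for i, row in enumerate(X) if row[j] <= s]
            right = [i for i in range(len(y)) if i not in left]
            if len(left) < min_leaf or len(right) < min_leaf:
                continue
            sse = 0.0
            for idx in (left, right):
                mu = sum(y[i] for i in idx) / len(idx)
                sse += sum((y[i] - mu) ** 2 for i in idx)
            if best is None or sse < best[0]:
                best = (sse, j, s, left, right)
    if best is None:
        return sum(y) / len(y)
    _, j, s, left, right = best
    return (j, s,
            build_tree([X[i] for i in left], [y[i] for i in left],
                       min_leaf, depth + 1, max_depth),
            build_tree([X[i] for i in right], [y[i] for i in right],
                       min_leaf, depth + 1, max_depth))

def predict_tree(node, row):
    while isinstance(node, tuple):
        j, s, lo, hi = node
        node = lo if row[j] <= s else hi
    return node

# invented rows: [precipitation (mm/yr), evapotranspiration class]
# -> excess nitrogen (kg N/ha)
X = [[500, 1], [520, 2], [540, 1], [560, 2],
     [650, 1], [680, 2], [700, 1], [720, 2]]
y = [2.0, 2.0, 2.0, 2.0, 10.0, 10.0, 10.0, 10.0]
tree = build_tree(X, y)
```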

  10. Single camera multi-view anthropometric measurement of human height and mid-upper arm circumference using linear regression.

    Science.gov (United States)

    Liu, Yingying; Sowmya, Arcot; Khamis, Heba

    2018-01-01

    Manually measured anthropometric quantities are used in many applications including human malnutrition assessment. Training is required to collect anthropometric measurements manually, which is not ideal in resource-constrained environments. Photogrammetric methods have been gaining attention in recent years, due to the availability and affordability of digital cameras. The primary goal is to demonstrate that height and mid-upper arm circumference (MUAC)-indicators of malnutrition-can be accurately estimated by applying linear regression to distance measurements from photographs of participants taken from five views, and determine the optimal view combinations. A secondary goal is to observe the effect on estimate error of two approaches which reduce complexity of the setup, computational requirements and the expertise required of the observer. Thirty-one participants (11 female, 20 male; 18-37 years) were photographed from five views. Distances were computed using both camera calibration and reference object techniques from manually annotated photos. To estimate height, linear regression was applied to the distance between the top of the participant's head and the floor, as well as to the height of a bounding box enclosing the participant's silhouette, which eliminates the need to identify the floor. To estimate MUAC, linear regression was applied to the mid-upper arm width. Estimates were computed for all view combinations and performance was compared to other photogrammetric methods from the literature: the linear distance method for height and shape models for MUAC. The mean absolute difference (MAD) between the linear regression estimates and manual measurements was smaller compared to other methods. For the optimal view combinations (smallest MAD), the technical error of measurement and coefficient of reliability also indicate that the linear regression methods are more reliable. The optimal view combination was the front and side views.
When estimating height by linear …
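    The estimation step itself — ordinary least squares from a photo-derived distance to the manual measurement — can be sketched with invented stand-in data (the real study used annotated photographs, camera calibration and reference objects):

```python
import random

def fit_line(x, y):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    a = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
    return a, ybar - a * xbar

rng = random.Random(0)
# hypothetical stand-in data: 31 participants, photo-derived
# head-to-floor distance in metres vs manually measured height in cm
height_cm = [155.0 + 40.0 * rng.random() for _ in range(31)]
photo_dist_m = [h / 100.0 + rng.gauss(0.0, 0.01) for h in height_cm]

a, b = fit_line(photo_dist_m, height_cm)
estimates = [a * d + b for d in photo_dist_m]
mad = sum(abs(e - h) for e, h in zip(estimates, height_cm)) / len(height_cm)
```

With noise-free annotation the slope would simply be the metre-to-centimetre conversion; the regression absorbs systematic scale and offset errors of the photographic distance.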

  11. Modal and Wave Load Identification by ARMA Calibration

    DEFF Research Database (Denmark)

    Jensen, Jens Kristian Jehrbo; Kirkegaard, Poul Henning; Brincker, Rune

    In this paper modal parameters as well as wave load identification by calibration of ARMA models is considered for a simple offshore structure. The theory of identification by ARMA calibration is presented as an identification technique in the time domain which can be applied for white noise excited...... systems. The technique is also generalized to include the case of ambient excitation processes, such as wave excitation, which are non-white. Based on these results a simple but effective approach for identification of the load process is proposed. Finally, the theoretical presentation is illustrated...

  12. A New Empirical Metallicity Calibration for Vilnius Photometry

    Directory of Open Access Journals (Sweden)

    Bartašiūtė S.

    2013-12-01

    Full Text Available We present a new calibration of the seven-color Vilnius system in terms of [Fe/H], applicable to F–M stars in the metallicity range −2.8 ≤[Fe/H]≤ +0.5. We employ a purely empirical approach, based on ~1000 calibrating stars with high-resolution spectroscopic abundance determinations. It is shown that the color index P–Y is the best choice for a most accurate and sensitive abundance indicator for both dwarf and giant stars. Using it, [Fe/H] values can be determined with an accuracy of ±0.12 dex for stars of solar and mildly subsolar metallicity and ±0.17 dex for stars with [Fe/H] < −1. The new calibration is a significant improvement over the previous one used to date.

  13. The absolute radiometric calibration of the advanced very high resolution radiometer

    Science.gov (United States)

    Slater, P. N.; Teillet, P. M.; Ding, Y.

    1988-01-01

    An increasing number of remote sensing investigations require radiometrically calibrated imagery from NOAA Advanced Very High Resolution Radiometer (AVHRR) sensors. Although a prelaunch calibration is done for these sensors, there is no capability for monitoring any changes in the in-flight absolute calibration for the visible and near infrared spectral channels. Hence, the possibility of using the reflectance-based method developed at White Sands for in-orbit calibration of LANDSAT Thematic Mapper (TM) and SPOT Haute Resolution Visible (HRV) data to calibrate the AVHRR sensor was investigated. Three different approaches were considered: Method 1 - ground and atmospheric measurements and reference to another calibrated satellite sensor; Method 2 - ground and atmospheric measurements with no reference to another sensor; and Method 3 - no ground and atmospheric measurements but reference to another satellite sensor. The purpose is to describe an investigation on the use of Method 2 to calibrate NOAA-9 AVHRR channels 1 and 2 with the help of ground and atmospheric measurements at Rogers (dry) Lake, Edwards Air Force Base (EAFB) in the Mojave desert of California.

  15. Method validation using weighted linear regression models for quantification of UV filters in water samples.

    Science.gov (United States)

    da Silva, Claudia Pereira; Emídio, Elissandro Soares; de Marchi, Mary Rosa Rodrigues

    2015-01-01

    This paper describes the validation of a method consisting of solid-phase extraction followed by gas chromatography-tandem mass spectrometry for the analysis of the ultraviolet (UV) filters benzophenone-3, ethylhexyl salicylate, ethylhexyl methoxycinnamate and octocrylene. The method validation criteria included evaluation of selectivity, analytical curve, trueness, precision, limits of detection and limits of quantification. The non-weighted linear regression model has traditionally been used for calibration, but it is not necessarily the optimal model in all cases. Because the assumption of homoscedasticity was not met for the analytical data in this work, a weighted least squares linear regression was used for the calibration method. The evaluated analytical parameters were satisfactory for the analytes and showed recoveries at four fortification levels between 62% and 107%, with relative standard deviations less than 14%. The detection limits ranged from 7.6 to 24.1 ng L(-1). The proposed method was used to determine the amount of UV filters in water samples from water treatment plants in Araraquara and Jau in São Paulo, Brazil. Copyright © 2014 Elsevier B.V. All rights reserved.
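    When homoscedasticity fails, each calibration level is typically weighted by the inverse of its response variance. A sketch of the weighted straight-line fit (illustrative names and data):

```python
def wls_line(x, y, w):
    """Weighted least-squares straight-line fit; w_i is typically
    1 / s_i**2, the inverse response variance at calibration level i."""
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    slope = sum(wi * (xi - xbar) * (yi - ybar)
                for wi, xi, yi in zip(w, x, y)) / \
            sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    return slope, ybar - slope * xbar

# invented calibration levels with uneven weights; an exact line must
# be recovered regardless of the weighting
conc = [1.0, 2.0, 3.0, 4.0, 5.0]
signal = [2.0 * c + 1.0 for c in conc]
weights = [1.0, 1.0, 4.0, 4.0, 9.0]
slope, intercept = wls_line(conc, signal, weights)
```

With heteroscedastic data the weights keep the high-variance (usually high-concentration) levels from dominating the fit, which is what improves accuracy near the quantification limit.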

  16. Validation of a densimeter calibration procedure for a secondary calibration laboratory

    International Nuclear Information System (INIS)

    Alpizar Herrera, Juan Carlos

    2014-01-01

    A survey was conducted to quantify the need for calibration of density measurement instruments in the research units of the Sede Rodrigo Facio of the Universidad de Costa Rica. A calibration procedure was then documented for the instrument for which the survey showed the highest demand for calibration service. INTE-ISO/IEC 17025:2005, and specifically section 5.4 of this standard, was studied to guide the documentation of the densimeter calibration procedure. Densimeter calibration procedures and standards were sought from different national and international sources, and the method of hydrostatic weighing, or Cuckow's method, was taken as the basis of the defined procedure. In addition to the calibration procedure itself, templates were prepared for a data acquisition log, an intermediate calculation log and the calibration certificate. As part of the validation of the documented procedure, a trueness test was performed against a national secondary calibration laboratory used as reference. E_n statistic values of 0.41, 0.34 and 0.46 were obtained for the calibration points at 90%, 50% and 10% of the densimeter scale, respectively. A reproducibility analysis of the method was performed with satisfactory results. Different suppliers were contacted to estimate the cost of the equipment and materials needed to implement the documented method. The acquisition of an analytical balance, rather than a precision scale, was recommended in order to improve the results obtained with the documented method
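    The E_n values quoted above follow the standard normalized-error statistic used to compare a laboratory's result against a reference laboratory; |E_n| ≤ 1 indicates agreement within the expanded uncertainties. A one-line sketch with invented numbers:

```python
def en_statistic(x_lab, u_lab, x_ref, u_ref):
    """Normalized error E_n = (x_lab - x_ref) / sqrt(U_lab^2 + U_ref^2),
    with U the expanded (k = 2) uncertainties of the laboratory and the
    reference; |E_n| <= 1 indicates agreement with the reference."""
    return (x_lab - x_ref) / (u_lab ** 2 + u_ref ** 2) ** 0.5

# invented density readings (g/cm^3) and expanded uncertainties,
# chosen only to reproduce the order of magnitude of the reported E_n
e_n = en_statistic(0.9012, 0.0004, 0.9010, 0.0003)
```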

  17. A Gaussian process regression based hybrid approach for short-term wind speed prediction

    International Nuclear Information System (INIS)

    Zhang, Chi; Wei, Haikun; Zhao, Xin; Liu, Tianhong; Zhang, Kanjian

    2016-01-01

    Highlights: • A novel hybrid approach is proposed for short-term wind speed prediction. • This method combines the parametric AR model with the non-parametric GPR model. • The relative importance of different inputs is considered. • Different types of covariance functions are considered and combined. • It can provide both accurate point forecasts and satisfactory prediction intervals. - Abstract: This paper proposes a hybrid model based on autoregressive (AR) model and Gaussian process regression (GPR) for probabilistic wind speed forecasting. In the proposed approach, the AR model is employed to capture the overall structure from wind speed series, and the GPR is adopted to extract the local structure. Additionally, automatic relevance determination (ARD) is used to take into account the relative importance of different inputs, and different types of covariance functions are combined to capture the characteristics of the data. The proposed hybrid model is compared with the persistence model, artificial neural network (ANN), and support vector machine (SVM) for one-step ahead forecasting, using wind speed data collected from three wind farms in China. The forecasting results indicate that the proposed method can not only improve point forecasts compared with other methods, but also generate satisfactory prediction intervals.
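    The AR stage that captures the overall series structure is a linear regression on lagged values; in the hybrid, its residual local structure is what the GPR stage models. A bare sketch of the AR fit with simulated data (names and parameters ours):

```python
import numpy as np

def fit_ar(series, p):
    """Least-squares AR(p): regress x_t on an intercept and its p
    previous values; returns (intercept, lag coefficients)."""
    n = len(series)
    X = np.column_stack(
        [np.ones(n - p)] + [series[p - k - 1:n - k - 1] for k in range(p)]
    )
    beta, *_ = np.linalg.lstsq(X, series[p:], rcond=None)
    return beta[0], beta[1:]

# simulated 'wind speed' series: x_t = 0.1 + 0.8 x_{t-1} + noise
rng = np.random.default_rng(0)
x = np.zeros(300)
for t in range(1, 300):
    x[t] = 0.1 + 0.8 * x[t - 1] + rng.normal(0.0, 0.1)

c, phi = fit_ar(x, 1)
```

In the hybrid, the one-step residuals x[p:] − X @ beta would be handed to the GPR stage, whose covariance function (with ARD weighting the inputs) models the remaining local structure and supplies the prediction intervals.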

  18. Analyzing hospitalization data: potential limitations of Poisson regression.

    Science.gov (United States)

    Weaver, Colin G; Ravani, Pietro; Oliver, Matthew J; Austin, Peter C; Quinn, Robert R

    2015-08-01

    Poisson regression is commonly used to analyze hospitalization data when outcomes are expressed as counts (e.g. number of days in hospital). However, data often violate the assumptions on which Poisson regression is based. More appropriate extensions of this model, while available, are rarely used. We compared hospitalization data between 206 patients treated with hemodialysis (HD) and 107 treated with peritoneal dialysis (PD) using Poisson regression and compared results from standard Poisson regression with those obtained using three other approaches for modeling count data: negative binomial (NB) regression, zero-inflated Poisson (ZIP) regression and zero-inflated negative binomial (ZINB) regression. We examined the appropriateness of each model and compared the results obtained with each approach. During a mean 1.9 years of follow-up, 183 of 313 patients (58%) were never hospitalized (indicating an excess of 'zeros'). The data also displayed overdispersion (variance greater than mean), violating another assumption of the Poisson model. Using four criteria, we determined that the NB and ZINB models performed best. According to these two models, patients treated with HD experienced similar hospitalization rates as those receiving PD {NB rate ratio (RR): 1.04 [bootstrapped 95% confidence interval (CI): 0.49-2.20]; ZINB summary RR: 1.21 (bootstrapped 95% CI 0.60-2.46)}. Poisson and ZIP models fit the data poorly and had much larger point estimates than the NB and ZINB models [Poisson RR: 1.93 (bootstrapped 95% CI 0.88-4.23); ZIP summary RR: 1.84 (bootstrapped 95% CI 0.88-3.84)]. We found substantially different results when modeling hospitalization data, depending on the approach used. Our results argue strongly for a sound model selection process and improved reporting around statistical methods used for modeling count data. © The Author 2015. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
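    A quick diagnostic for the violations described above is the sample variance-to-mean ratio, which is approximately 1 under the Poisson assumption. A sketch with invented counts mimicking the paper's excess of zeros:

```python
import random

def dispersion_index(counts):
    """Sample variance-to-mean ratio: approximately 1 under Poisson;
    values well above 1 indicate overdispersion, which (together with
    an excess of zeros) motivates NB, ZIP or ZINB models instead."""
    n = len(counts)
    mu = sum(counts) / n
    var = sum((c - mu) ** 2 for c in counts) / (n - 1)
    return var / mu

rng = random.Random(7)
# invented analogue of the paper's data: 58% of 313 patients never
# hospitalized, the remainder with 1-30 hospital days
days = [0 if rng.random() < 0.58 else rng.randint(1, 30) for _ in range(313)]
index = dispersion_index(days)
```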

  19. Calibration belt for quality-of-care assessment based on dichotomous outcomes.

    Directory of Open Access Journals (Sweden)

    Stefano Finazzi

    Full Text Available Prognostic models applied in medicine must be validated on independent samples, before their use can be recommended. The assessment of calibration, i.e., the model's ability to provide reliable predictions, is crucial in external validation studies. Besides having several shortcomings, statistical techniques such as the computation of the standardized mortality ratio (SMR and its confidence intervals, the Hosmer-Lemeshow statistics, and the Cox calibration test, are all non-informative with respect to calibration across risk classes. Accordingly, calibration plots reporting expected versus observed outcomes across risk subsets have been used for many years. Erroneously, the points in the plot (frequently representing deciles of risk have been connected with lines, generating false calibration curves. Here we propose a methodology to create a confidence band for the calibration curve based on a function that relates expected to observed probabilities across classes of risk. The calibration belt allows the ranges of risk to be spotted where there is a significant deviation from the ideal calibration, and the direction of the deviation to be indicated. This method thus offers a more analytical view in the assessment of quality of care, compared to other approaches.
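    The raw material of any calibration assessment is the set of (expected, observed) pairs across risk classes; the calibration belt then wraps a proper confidence band around a fitted calibration function rather than joining these points with misleading lines. Computing the pairs (a sketch, our naming):

```python
import random

def calibration_points(pred, obs, n_bins=10):
    """Group cases by predicted risk and compare the mean predicted
    probability with the observed outcome rate in each group: the
    classical expected-vs-observed points of a calibration plot."""
    order = sorted(range(len(pred)), key=lambda i: pred[i])
    size = len(pred) // n_bins
    points = []
    for b in range(n_bins):
        idx = (order[b * size:(b + 1) * size]
               if b < n_bins - 1 else order[(n_bins - 1) * size:])
        expected = sum(pred[i] for i in idx) / len(idx)
        observed = sum(obs[i] for i in idx) / len(idx)
        points.append((expected, observed))
    return points

# perfectly calibrated synthetic cohort: each dichotomous outcome is
# drawn with exactly the predicted probability
rng = random.Random(1)
pred = [i / 1000 for i in range(1000)]
obs = [1 if rng.random() < p else 0 for p in pred]
points = calibration_points(pred, obs)
```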

  20. Fractal approach to computer-analytical modelling of tree crown

    International Nuclear Information System (INIS)

    Berezovskaya, F.S.; Karev, G.P.; Kisliuk, O.F.; Khlebopros, R.G.; Tcelniker, Yu.L.

    1993-09-01

    In this paper we discuss three approaches to the modeling of tree crown development. These approaches are experimental (i.e. regression), theoretical (i.e. analytical) and simulation (i.e. computer) modeling. The assumption common to all of them is that a tree can be regarded as a fractal object, i.e. a collection of self-similar parts that combines the properties of two- and three-dimensional bodies. We show that a fractal measure of the crown can be used as the link between the mathematical models of crown growth and light propagation through the canopy. The computer approach makes it possible to visualize crown development and to calibrate the model on experimental data. In the paper the different stages of the above-mentioned approaches are described. The experimental data for spruce, the description of the computer system for modeling and a variant of the computer model are presented. (author). 9 refs, 4 figs

  1. Regression of environmental noise in LIGO data

    International Nuclear Information System (INIS)

    Tiwari, V; Klimenko, S; Mitselmakher, G; Necula, V; Drago, M; Prodi, G; Frolov, V; Yakushin, I; Re, V; Salemi, F; Vedovato, G

    2015-01-01

    We address the problem of noise regression in the output of gravitational-wave (GW) interferometers, using data from the physical environmental monitors (PEM). The objective of the regression analysis is to predict environmental noise in the GW channel from the PEM measurements. One of the most promising regression methods is based on the construction of Wiener–Kolmogorov (WK) filters. Using this method, the seismic noise cancellation from the LIGO GW channel has already been performed. In the presented approach the WK method has been extended, incorporating banks of Wiener filters in the time–frequency domain, multi-channel analysis and regulation schemes, which greatly enhance the versatility of the regression analysis. Also we present the first results on regression of the bi-coherent noise in the LIGO data. (paper)
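    An FIR Wiener filter estimated by least squares (the normal equations are the empirical Wiener-Hopf equations) shows the essence of the noise regression: predict the GW channel from a witness channel and subtract the prediction. A toy single-witness sketch with simulated data:

```python
import numpy as np

def wiener_subtract(witness, target, taps=4):
    """FIR Wiener filter: least-squares prediction of the target (GW)
    channel from lagged samples of a witness (PEM) channel, followed
    by subtraction of the predicted environmental noise."""
    n = len(target)
    X = np.column_stack([witness[taps - 1 - k:n - k] for k in range(taps)])
    y = target[taps - 1:]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ w          # residual after noise regression

# simulated coupling: the 'GW' channel contains a delayed copy of the
# environmental witness plus uncorrelated instrument noise
rng = np.random.default_rng(1)
witness = rng.normal(0.0, 1.0, 2000)
target = np.zeros(2000)
target[1:] = 0.7 * witness[:-1] + rng.normal(0.0, 0.1, 1999)
resid = wiener_subtract(witness, target, taps=4)
```

The multi-channel, time-frequency banks of filters in the paper generalize this single-tap-bank construction to many witnesses and frequency bands.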

  2. Calibration, monitoring, and control of complex detector systems

    International Nuclear Information System (INIS)

    Breidenbach, M.

    1981-01-01

    LEP detectors will probably be complex devices having tens of subsystems; some subsystems having perhaps tens of thousands of channels. Reasonable design goals for such a detector will include economic use of money and people, rapid and reliable calibration and monitoring of the detector, and simple control and operation of the device. The synchronous operation of an e+e- storage ring, coupled with its relatively low interaction rate, allows the design of simple circuits for time and charge measurements. These circuits, and more importantly, the basic detector channels, can usually be tested and calibrated by signal injection into the detector. Present detectors utilize semi-autonomous controllers which collect such calibration data and calculate statistics as well as control sparse data scans. Straightforward improvements in programming technology should move the entire calibration into these local controllers, so that calibration and testing time will be a constant independent of the number of channels in a system. Considerable programming effort may be saved by emphasizing the similarities of the subsystems, so that the subsystems can be described by a reasonable database and general purpose calibration and test routines can be used. Monitoring of the apparatus will probably continue to be of two classes: 'passive' histogramming of channel occupancies and other more complex combinations of the data; and 'active' injection of test patterns and calibration signals during a run. The relative importance of active monitoring will increase for the low data rates expected off resonances at high s. Experience at SPEAR and PEP is used to illustrate these approaches. (Auth.)

  4. Laser calibration of the ATLAS Tile Calorimeter

    CERN Document Server

    Di Gregorio, Giulia; The ATLAS collaboration

    2017-01-01

    High performance stability of the ATLAS Tile Calorimeter is achieved with a set of calibration procedures. One step of the calibration procedure is based on measurements of the response stability to laser excitation of the photomultipliers (PMTs) that are used to read out the calorimeter cells. A facility to study PMT response stability in the laboratory has been operating at the INFN Pisa laboratories since 2015. The goals of the laboratory tests are to study the time evolution of the PMT response, and to reproduce and understand the origin of the response drifts seen with the PMTs mounted on the Tile Calorimeter in its normal operation during LHC Run 1 and Run 2. A new statistical approach was developed to measure the drift of the absolute gain. This approach was applied both to the ATLAS laser calibration data and to the data collected in the Pisa laboratory. Preliminary results from these two studies are shown.

  5. Ordinary Least Squares and Quantile Regression: An Inquiry-Based Learning Approach to a Comparison of Regression Methods

    Science.gov (United States)

    Helmreich, James E.; Krog, K. Peter

    2018-01-01

    We present a short, inquiry-based learning course on concepts and methods underlying ordinary least squares (OLS), least absolute deviation (LAD), and quantile regression (QR). Students investigate squared, absolute, and weighted absolute distance functions (metrics) as location measures. Using differential calculus and properties of convex…
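    The comparison at the heart of the course can be demonstrated numerically: minimizing total squared distance yields the mean, total absolute distance the median, and the tilted ("check") absolute loss the requested quantile. A grid-search sketch:

```python
def location_estimate(data, loss):
    """Grid-search the point c minimizing sum(loss(x - c)) over the data
    range: squared loss gives the mean, absolute loss the median, and
    the tilted absolute (check) loss the tau-th quantile."""
    lo, hi = min(data), max(data)
    grid = [lo + (hi - lo) * i / 2000 for i in range(2001)]
    return min(grid, key=lambda c: sum(loss(x - c) for x in data))

def squared(r):
    return r * r

def absolute(r):
    return abs(r)

def tilted(tau):
    # quantile 'check' loss: tau * r for r >= 0, (tau - 1) * r otherwise
    return lambda r: (tau if r >= 0 else tau - 1.0) * r

# a sample with one outlier makes the contrast between OLS and LAD vivid
data = [1.0, 2.0, 3.0, 4.0, 100.0]
```

The outlier drags the squared-loss minimizer (the mean, 22) far from the bulk of the data, while the absolute-loss minimizer stays at the median, 3.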

  6. Explaining the heterogeneous scrapie surveillance figures across Europe: a meta-regression approach

    Directory of Open Access Journals (Sweden)

    Ru Giuseppe

    2007-06-01

    Full Text Available Abstract Background Two annual surveys, the abattoir and the fallen stock, monitor the presence of scrapie across Europe. A simple comparison between the prevalence estimates in different countries reveals that, in 2003, the abattoir survey appears to detect more scrapie in some countries. This is contrary to evidence suggesting the greater ability of the fallen stock survey to detect the disease. We applied meta-analysis techniques to study this apparent heterogeneity in the behaviour of the surveys across Europe. Furthermore, we conducted a meta-regression analysis to assess the effect of country-specific characteristics on the variability. We have chosen the odds ratios between the two surveys to inform the underlying relationship between them and to allow comparisons between the countries under the meta-regression framework. Baseline risks, those of the slaughtered populations across Europe, and country-specific covariates, available from the European Commission Report, were inputted in the model to explain the heterogeneity. Results Our results show the presence of significant heterogeneity in the odds ratios between countries and no reduction in the variability after adjustment for the different risks in the baseline populations. Three countries contributed the most to the overall heterogeneity: Germany, Ireland and The Netherlands. The inclusion of country-specific covariates did not, in general, reduce the variability except for one variable: the proportion of the total adult sheep population sampled as fallen stock by each country. A large residual heterogeneity remained in the model indicating the presence of substantial effect variability between countries. Conclusion The meta-analysis approach was useful to assess the level of heterogeneity in the implementation of the surveys and to explore the reasons for the variation between countries.

  7. Updating a synchronous fluorescence spectroscopic virgin olive oil adulteration calibration to a new geographical region.

    Science.gov (United States)

    Kunz, Matthew Ross; Ottaway, Joshua; Kalivas, John H; Georgiou, Constantinos A; Mousdis, George A

    2011-02-23

    Detecting and quantifying extra virgin olive oil adulteration is of great importance to the olive oil industry. Many spectroscopic methods in conjunction with multivariate analysis have been used to solve these issues. However, successes to date are limited as calibration models are built for a specific set of geographical regions, growing seasons, cultivars, and oil extraction methods (the composite primary condition). Samples from new geographical regions, growing seasons, etc. (secondary conditions) are not always correctly predicted by the primary model due to different olive oil and/or adulterant compositions stemming from secondary conditions not matching the primary conditions. Three Tikhonov regularization (TR) variants are used in this paper to allow adulterant (sunflower oil) concentration predictions in samples from geographical regions not part of the original primary calibration domain. Of the three TR variants, ridge regression with an additional 2-norm penalty provides the smallest validation sample prediction errors. Although the paper reports on using TR for model updating to predict adulterant oil concentration, the methods should also be applicable to updating models distinguishing adulterated samples from pure extra virgin olive oil. Additionally, the approaches are general and can be used with other spectroscopic methods and adulterants as well as with other agricultural products.
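
    The model-updating idea can be sketched as a weighted, ridge-penalized least-squares problem: a handful of secondary-condition samples are added to the primary calibration data, and a 2-norm penalty stabilizes the solution. This is an illustrative stand-in (synthetic data, plain weighted ridge rather than the paper's specific TR variants):

```python
import numpy as np

rng = np.random.default_rng(5)

p = 20
beta_primary = rng.normal(size=p)
# Many primary-condition samples
X1 = rng.normal(size=(100, p))
y1 = X1 @ beta_primary + rng.normal(0, 0.1, 100)
# Few secondary-condition samples with a slightly shifted relationship
beta_secondary = beta_primary + rng.normal(0, 0.2, p)
X2 = rng.normal(size=(8, p))
y2 = X2 @ beta_secondary + rng.normal(0, 0.1, 8)

lam, wgt = 1.0, 5.0   # ridge penalty and secondary-sample weight (tuned in practice)
A = X1.T @ X1 + wgt * X2.T @ X2 + lam * np.eye(p)
b = X1.T @ y1 + wgt * X2.T @ y2
beta_updated = np.linalg.solve(A, b)

# The updated model should fit secondary-condition samples better than the
# un-updated primary model does
err_primary = np.mean((y2 - X2 @ beta_primary) ** 2)
err_updated = np.mean((y2 - X2 @ beta_updated) ** 2)
```

The weight on the secondary samples and the penalty strength play the role of the TR meta-parameters selected by validation in the paper.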

  8. Simple laser vision sensor calibration for surface profiling applications

    Science.gov (United States)

    Abu-Nabah, Bassam A.; ElSoussi, Adnane O.; Al Alami, Abed ElRahman K.

    2016-09-01

    Due to the relatively large structures in the Oil and Gas industry, original equipment manufacturers (OEMs) have been implementing custom-designed laser vision sensor (LVS) surface profiling systems as part of quality control in their manufacturing processes. The rough manufacturing environment and the continuous movement and misalignment of these custom-designed tools adversely affect the accuracy of laser-based vision surface profiling applications. Accordingly, Oil and Gas businesses have been raising the demand from the OEMs to implement practical and robust LVS calibration techniques prior to running any visual inspections. This effort introduces an LVS calibration technique representing a simplified version of two known calibration techniques, which are commonly implemented to obtain a calibrated LVS system for surface profiling applications. Both calibration techniques are implemented virtually and experimentally to scan simulated and three-dimensional (3D) printed features of known profiles, respectively. Scanned data is transformed from the camera frame to points in the world coordinate system and compared with the input profiles to validate the introduced calibration technique capability against the more complex approach and preliminarily assess the measurement technique for weld profiling applications. Moreover, the sensitivity to stand-off distances is analyzed to illustrate the practicality of the presented technique.
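
    The camera-to-world step mentioned above is a rigid transform applied to each scanned profile point. A minimal sketch, with an assumed rotation R and translation t standing in for the parameters recovered by the calibration:

```python
import numpy as np

# Rigid transform of laser-profile points from the camera frame to the world
# frame; R and t are assumed outputs of the LVS calibration (example values).
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])     # example rotation: 90 degrees about z
t = np.array([10.0, 5.0, 0.0])       # example translation, mm

pts_cam = np.array([[1.0, 2.0, 0.5],
                    [0.0, 0.0, 1.0]])  # points in the camera frame
pts_world = pts_cam @ R.T + t          # p_world = R @ p_cam + t, row-wise
```

Comparing `pts_world` against a target of known profile, as the paper does with 3D-printed features, validates the calibration end to end.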

  9. Field calibration of electrochemical NO2 sensors in a citizen science context

    Science.gov (United States)

    Mijling, Bas; Jiang, Qijun; de Jonge, Dave; Bocconi, Stefano

    2018-03-01

    In many urban areas the population is exposed to elevated levels of air pollution. However, real-time air quality is usually measured at only a few locations. These measurements provide a general picture of the state of the air, but they are unable to monitor local differences. New low-cost sensor technology has been available for several years now and has the potential to extend official monitoring networks significantly, even though the current generation of sensors suffers from various technical issues. Citizen science experiments based on these sensors must be designed carefully to avoid generating data of poor or even useless quality. This study explores the added value of the 2016 Urban AirQ campaign, which focused on measuring nitrogen dioxide (NO2) in Amsterdam, the Netherlands. Sixteen low-cost air quality sensor devices were built and distributed among volunteers living close to roads with high traffic volume for a 2-month measurement period. Each electrochemical sensor was calibrated in-field next to an air monitoring station during an 8-day period, resulting in R2 values ranging from 0.3 to 0.7. When temperature and relative humidity are included in a multilinear regression approach, the NO2 accuracy improves significantly, with R2 ranging from 0.6 to 0.9. Recalibration after the campaign is crucial, as all sensors show a significant signal drift over the 2-month measurement period. The measurement series between the calibration periods can be corrected afterwards by taking a weighted average of the calibration coefficients. Validation against an independent air monitoring station shows good agreement. Using our approach, the standard deviation of a typical sensor device for NO2 measurements was found to be 7 µg m-3, provided that temperatures are below 30 °C. Stronger ozone titration on street sides causes an underestimation of NO2 concentrations, which 75 % of the time is less than 2.3 µg m-3. Our findings show that citizen science
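
    The temperature/humidity correction described above amounts to an ordinary multilinear regression of the reference NO2 on the raw signal plus meteorological covariates. A sketch on synthetic co-location data (coefficients and noise levels are invented, not the campaign's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic co-location data: raw sensor signal, temperature, relative
# humidity, and reference-station NO2 (all hypothetical values).
n = 200
temp = rng.uniform(5, 25, n)
rh = rng.uniform(30, 90, n)
no2_ref = rng.uniform(10, 60, n)
raw = 0.8 * no2_ref + 0.5 * temp - 0.1 * rh + rng.normal(0, 2, n)

# Multilinear calibration: NO2 ~ b0 + b1*raw + b2*T + b3*RH
X = np.column_stack([np.ones(n), raw, temp, rh])
beta, *_ = np.linalg.lstsq(X, no2_ref, rcond=None)

pred = X @ beta
ss_res = np.sum((no2_ref - pred) ** 2)
ss_tot = np.sum((no2_ref - no2_ref.mean()) ** 2)
r2 = 1 - ss_res / ss_tot   # in-sample R^2 of the calibrated sensor
```

In a real deployment the fitted `beta` from the co-location period is then applied to the field measurements, and, as the study stresses, refitted after the campaign to catch drift.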

  10. Ensemble preprocessing of near-infrared (NIR) spectra for multivariate calibration

    International Nuclear Information System (INIS)

    Xu Lu; Zhou Yanping; Tang Lijuan; Wu Hailong; Jiang Jianhui; Shen Guoli; Yu Ruqin

    2008-01-01

    Preprocessing of raw near-infrared (NIR) spectral data is indispensable in multivariate calibration when the measured spectra are subject to significant noise, baselines and other undesirable factors. However, due to the lack of sufficient prior information and an incomplete knowledge of the raw data, NIR spectra preprocessing in multivariate calibration is still a matter of trial and error. How to select a proper method depends largely on both the nature of the data and the expertise and experience of the practitioners. This might limit the applications of multivariate calibration in many fields where researchers are not very familiar with the characteristics of the many preprocessing methods unique to chemometrics and have difficulty selecting the most suitable ones. Another problem is that many preprocessing methods, when used alone, might degrade the data in certain aspects or lose some useful information while improving certain qualities of the data. In order to tackle these problems, this paper proposes a new concept of data preprocessing, the ensemble preprocessing method, where partial least squares (PLS) models built on differently preprocessed data are combined by Monte Carlo cross validation (MCCV) stacked regression. Little or no prior information about the data and expertise are required. Moreover, fusion of the complementary information obtained by different preprocessing methods often leads to a more stable and accurate calibration model. The investigation of two real data sets has demonstrated the advantages of the proposed method.
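
    The stacking idea can be sketched with two common spectral preprocessings and combination weights estimated by Monte Carlo cross-validation. This toy version uses synthetic spectra and a ridge calibration as a stand-in for the paper's PLS models:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "spectra": 60 samples x 50 wavelengths, analyte concentration y
# encoded in a peak height, plus an additive baseline (illustrative only).
n, p = 60, 50
y = rng.uniform(0, 1, n)
wl = np.arange(p)
peak = np.exp(-0.5 * ((wl - 25) / 3) ** 2)
X = y[:, None] * peak + rng.uniform(0, 1, n)[:, None]
X += rng.normal(0, 0.01, (n, p))

def snv(S):
    # Standard normal variate: row-wise centering and scaling
    return (S - S.mean(1, keepdims=True)) / S.std(1, keepdims=True)

def deriv(S):
    # First difference along the wavelength axis removes additive baselines
    return np.diff(S, axis=1)

def fit_predict(Xtr, ytr, Xte):
    # Ridge-regularized linear calibration (stand-in for PLS)
    A = Xtr.T @ Xtr + 1e-3 * np.eye(Xtr.shape[1])
    b = np.linalg.solve(A, Xtr.T @ ytr)
    return Xte @ b

# Monte Carlo cross-validation to weight the two preprocessing pipelines
pipes = [snv, deriv]
mse = np.zeros(len(pipes))
for _ in range(20):
    idx = rng.permutation(n)
    tr, te = idx[:45], idx[45:]
    for j, f in enumerate(pipes):
        Z = f(X)
        pred = fit_predict(Z[tr], y[tr], Z[te])
        mse[j] += np.mean((y[te] - pred) ** 2)

w = (1 / mse) / np.sum(1 / mse)  # inverse-error stacking weights
final = sum(wj * fit_predict(f(X), y, f(X)) for wj, f in zip(w, pipes))
```

The pipeline that generalizes worse under MCCV automatically receives a smaller weight, which is the mechanism by which the ensemble avoids committing to a single preprocessing choice.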

  11. On-line quantile regression in the RKHS (Reproducing Kernel Hilbert Space) for operational probabilistic forecasting of wind power

    International Nuclear Information System (INIS)

    Gallego-Castillo, Cristobal; Bessa, Ricardo; Cavalcante, Laura; Lopez-Garcia, Oscar

    2016-01-01

    Wind power probabilistic forecasts are used as input in several decision-making problems, such as stochastic unit commitment, operating reserve setting and electricity market bidding. This work introduces a new on-line quantile regression model based on the Reproducing Kernel Hilbert Space (RKHS) framework. Its application to the field of wind power forecasting involves a discussion on the choice of the bias term of the quantile models, and the consideration of the operational framework in order to mimic real conditions. Benchmarking against linear and splines quantile regression models was performed for a real case study over an 18-month period. Model parameter selection was based on k-fold cross-validation. Results showed a noticeable improvement in terms of calibration, a key criterion for the wind power industry. Modest improvements in terms of Continuous Ranked Probability Score (CRPS) were also observed for prediction horizons between 6 and 20 h ahead. - Highlights: • New online quantile regression model based on the Reproducing Kernel Hilbert Space. • First application to operational probabilistic wind power forecasting. • Modest improvements of CRPS for prediction horizons between 6 and 20 h ahead. • Noticeable improvements in terms of calibration due to online learning.
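
    Quantile models of this kind are trained by minimizing the pinball loss, and an on-line variant updates the parameters one observation at a time. A linear (non-kernel) sketch on synthetic data, not the RKHS model of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def pinball(y, q, tau):
    # Pinball (quantile) loss for quantile level tau
    e = y - q
    return np.mean(np.maximum(tau * e, (tau - 1) * e))

# Synthetic wind-power-like data: output depends on a forecast feature with
# heteroscedastic noise (purely illustrative)
n = 5000
x = rng.uniform(0, 1, n)
y = x + (0.1 + 0.2 * x) * rng.normal(size=n)

# On-line (stochastic subgradient) update of a linear tau-quantile model
tau, lr = 0.9, 0.05
w, b = 0.0, 0.0
for xi, yi in zip(x, y):
    pred = w * xi + b
    g = tau if yi > pred else tau - 1    # subgradient of the pinball loss
    w += lr * g * xi
    b += lr * g

q_hat = w * x + b
coverage = np.mean(y <= q_hat)           # should approach tau
```

Checking that the empirical coverage tracks the nominal quantile level is exactly the calibration criterion the paper highlights for the wind power industry.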

  12. Validation of regression models for nitrate concentrations in the upper groundwater in sandy soils

    International Nuclear Information System (INIS)

    Sonneveld, M.P.W.; Brus, D.J.; Roelsma, J.

    2010-01-01

    For Dutch sandy regions, linear regression models have been developed that predict nitrate concentrations in the upper groundwater on the basis of residual nitrate contents in the soil in autumn. The objective of our study was to validate these regression models for one particular sandy region dominated by dairy farming. No data from this area were used for calibrating the regression models. The models were validated by additional probability sampling. This sample was used to estimate errors in 1) the predicted areal fractions where the EU standard of 50 mg l⁻¹ is exceeded for farms with low N surpluses (ALT) and farms with higher N surpluses (REF); 2) the predicted cumulative frequency distributions of nitrate concentration for both groups of farms. Both the errors in the predicted areal fractions and the errors in the predicted cumulative frequency distributions indicate that the regression models are invalid for the sandy soils of this study area. - This study indicates that linear regression models that predict nitrate concentrations in the upper groundwater using residual soil N contents should be applied with care.

  13. Development of theoretical oxygen saturation calibration curve based on optical density ratio and optical simulation approach

    Science.gov (United States)

    Jumadi, Nur Anida; Beng, Gan Kok; Ali, Mohd Alauddin Mohd; Zahedi, Edmond; Morsin, Marlia

    2017-09-01

    The implementation of a surface-based Monte Carlo simulation technique for oxygen saturation (SaO2) calibration curve estimation is demonstrated in this paper. Generally, the calibration curve is estimated either empirically, using animals as the subject of experiment, or derived from mathematical equations. However, determining the calibration curve using animals is time consuming and requires expertise to conduct the experiment. Alternatively, optical simulation techniques have been used widely in the biomedical optics field due to their capability to exhibit real tissue behavior. The mathematical relationship between optical density (OD) and the optical density ratio (ODR) associated with SaO2 during systole and diastole is used as the basis for obtaining the theoretical calibration curve. The optical properties corresponding to systolic and diastolic behavior were applied to the tissue model to mimic the optical properties of the tissues. Based on the absorbed ray flux at the detectors, the OD and ODR were successfully calculated. The simulated optical density ratios at every 20% interval of SaO2 are presented, with a maximum error of 2.17% when compared with a previous numerical simulation technique (MC model). The findings reveal the potential of the proposed method to be used for extended calibration curve studies using other wavelength pairs.
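
    The OD/ODR construction itself is simple: the pulsatile optical density at each wavelength is a log ratio of diastolic to systolic intensity, and the ODR is mapped to SaO2 through an empirical linear curve. A sketch with invented intensities and calibration constants (not the paper's Monte Carlo outputs):

```python
import numpy as np

# Hypothetical detected intensities at systole/diastole for a red and an
# infrared wavelength (illustrative numbers only)
I_sys = {"red": 0.42, "ir": 0.55}
I_dia = {"red": 0.50, "ir": 0.60}

def od(i_dia, i_sys):
    # Pulsatile optical density between diastole and systole
    return np.log10(i_dia / i_sys)

odr = od(I_dia["red"], I_sys["red"]) / od(I_dia["ir"], I_sys["ir"])

# Empirical linear calibration SaO2 = A - B * ODR; A and B are assumed
# constants here, where a study like this one would fit them to simulation
sao2 = 110.0 - 25.0 * odr
```

The simulation-based approach in the paper effectively generates many (SaO2, ODR) pairs from the tissue model and fits the calibration curve through them.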

  14. Calibration method for projector-camera-based telecentric fringe projection profilometry system.

    Science.gov (United States)

    Liu, Haibo; Lin, Huijing; Yao, Linshen

    2017-12-11

    By combining a fringe projection setup with a telecentric lens, a fringe pattern can be projected and imaged within a small area, making it possible to measure the three-dimensional (3D) surfaces of micro-components. This paper focuses on the flexible calibration of a fringe projection profilometry (FPP) system using a telecentric lens. An analytical telecentric projector-camera calibration model is introduced, in which the rig structure parameters remain invariant for all views, and the 3D calibration target can be located on the projector image plane with sub-pixel precision. Based on the presented calibration model, a two-step calibration procedure is proposed. First, the initial parameters, e.g., the projector-camera rig, projector intrinsic matrix, and coordinates of the control points of a 3D calibration target, are estimated using the affine camera factorization calibration method. Second, a bundle adjustment algorithm with various simultaneous views is applied to refine the calibrated parameters, especially the rig structure parameters and the coordinates of the control points of the 3D target. Because the control points are determined during the calibration, there is no need for an accurate 3D reference target, which is costly and extremely difficult to fabricate, particularly for the tiny objects used to calibrate a telecentric FPP system. Real experiments were performed to validate the performance of the proposed calibration method. The test results showed that the proposed approach is very accurate and reliable.

  15. ORNL calibrations facility

    International Nuclear Information System (INIS)

    Berger, C.D.; Gupton, E.D.; Lane, B.H.; Miller, J.H.; Nichols, S.W.

    1982-08-01

    The ORNL Calibrations Facility is operated by the Instrumentation Group of the Industrial Safety and Applied Health Physics Division. Its primary purpose is to maintain radiation calibration standards for calibration of ORNL health physics instruments and personnel dosimeters. This report includes a discussion of the radioactive sources and ancillary equipment in use and a step-by-step procedure for calibration of those survey instruments and personnel dosimeters in routine use at ORNL

  16. CryoSat-2 SIRAL Calibration: Strategy, Application and Results

    Science.gov (United States)

    Parrinello, T.; Fornari, M.; Bouzinac, C.; Scagliola, M.; Tagliani, N.

    2012-04-01

    The main payload of CryoSat-2 is a Ku-band pulse-width limited radar altimeter, called SIRAL (Synthetic interferometric radar altimeter), that transmits pulses at a high pulse repetition frequency, thus making the received echoes phase coherent and suitable for azimuth processing. This makes it possible to reach an along-track resolution of about 250 meters, which is an important improvement over traditional pulse-width limited altimeters. Because SIRAL is a phase-coherent pulse-width limited radar altimeter, a proper calibration approach has been developed. In fact, not only must the corrections for transfer function amplitude with respect to frequency, gain and instrument path delay be computed, but corrections are also needed for transfer function phase with respect to frequency and AGC setting, as well as for the phase variation across bursts of pulses. As a consequence, SIRAL regularly performs four types of calibration: (1) CAL1, to calibrate the internal path delay and peak power variation; (2) CAL2, to compensate the instrument transfer function; (3) CAL4, to calibrate the interferometer; and (4) AutoCal, a specific sequence to calibrate the gain and phase difference for each AGC setting. Commissioning phase results (April-December 2010) revealed high stability of the instrument, which made it possible to reduce the calibration frequency during Operations. Internal calibration data are processed on ground by the CryoSat-2 Instrument Processing Facility (IPF1) and then applied to the science data. In this poster we first describe the calibration strategy and then how the four different types of calibration are applied to the science data. Moreover, the calibration results over almost 2 years of mission will be presented, analyzing their temporal evolution in order to highlight the stability of the instrument over its life.

  17. Calibrated birth-death phylogenetic time-tree priors for bayesian inference.

    Science.gov (United States)

    Heled, Joseph; Drummond, Alexei J

    2015-05-01

    Here we introduce a general class of multiple calibration birth-death tree priors for use in Bayesian phylogenetic inference. All tree priors in this class separate ancestral node heights into a set of "calibrated nodes" and "uncalibrated nodes" such that the marginal distribution of the calibrated nodes is user-specified whereas the density ratio of the birth-death prior is retained for trees with equal values for the calibrated nodes. We describe two formulations, one in which the calibration information informs the prior on ranked tree topologies, through the (conditional) prior, and the other which factorizes the prior on divergence times and ranked topologies, thus allowing uniform, or any arbitrary prior distribution on ranked topologies. Although the first of these formulations has some attractive properties, the algorithm we present for computing its prior density is computationally intensive. However, the second formulation is always faster and computationally efficient for up to six calibrations. We demonstrate the utility of the new class of multiple-calibration tree priors using both small simulations and a real-world analysis and compare the results to existing schemes. The two new calibrated tree priors described in this article offer greater flexibility and control of prior specification in calibrated time-tree inference and divergence time dating, and will remove the need for indirect approaches to the assessment of the combined effect of calibration densities and tree priors in Bayesian phylogenetic inference. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.

  18. Ultra-portable field transfer radiometer for vicarious calibration of earth imaging sensors

    Science.gov (United States)

    Thome, Kurtis; Wenny, Brian; Anderson, Nikolaus; McCorkel, Joel; Czapla-Myers, Jeffrey; Biggar, Stuart

    2018-06-01

    A small portable transfer radiometer has been developed as part of an effort to ensure the quality of upwelling radiance from test sites used for vicarious calibration in the solar reflective regime. The test sites are used to predict top-of-atmosphere reflectance, relying on ground-based measurements of the atmosphere and surface. The portable transfer radiometer is designed for one-person operation for on-site field calibration of instrumentation used to determine ground-leaving radiance. The current work describes the detector- and source-based radiometric calibration of the transfer radiometer, highlighting the expected accuracy and SI-traceability. The results indicate differences between the detector-based and source-based results greater than the combined uncertainties of the approaches. Results from recent field deployments of the transfer radiometer using a solar radiation based calibration agree with the source-based laboratory calibration within the combined uncertainties of the methods. The detector-based results show a significant difference from the solar-based calibration. The source-based calibration is used as the basis for a radiance-based calibration of the Landsat-8 Operational Land Imager that agrees with the OLI calibration to within the uncertainties of the methods.

  19. Evaluation of an ASM1 Model Calibration Procedure on a Municipal-Industrial Wastewater Treatment Plant

    DEFF Research Database (Denmark)

    Petersen, Britta; Gernaey, Krist; Henze, Mogens

    2002-01-01

    The purpose of the calibrated model determines how to approach a model calibration, e.g. which information is needed and to which level of detail the model should be calibrated. A systematic model calibration procedure was therefore defined and evaluated for a municipal-industrial wastewater treatment plant. In the case that was studied it was important to have a detailed description of the process dynamics, since the model was to be used as the basis for optimisation scenarios in a later phase. Therefore, a complete model calibration procedure was applied including: (1) a description...

  20. Patient positioning with X-ray detector self-calibration for image guided therapy

    International Nuclear Information System (INIS)

    Selby, B.P.; Sakas, G.; Stilla, U.; Groch, W.-D.

    2011-01-01

    Full text: Automatic alignment estimation from projection images has a range of applications, but misaligned cameras induce inaccuracies. Calibration methods for optical cameras requiring calibration bodies or detectable features have been a matter of research for years. Not so for image guided therapy, although exact patient pose recovery is crucial. To image patient anatomy, X-ray equipment is used instead of optical equipment, and feature detection is often infeasible. Furthermore, a method not requiring a calibration body, usable during treatment, would be desirable to improve the accuracy of the patient alignment. We present a novel approach that does not rely on image features but combines intensity-based calibration with 3D pose recovery. A stereoscopic X-ray camera model is proposed, and the effects of erroneous parameters on the patient alignment are evaluated. The relevant camera parameters are automatically computed by comparison of X-ray to CT images and are incorporated in the patient alignment computation. The methods were tested with ground truth data of an anatomic phantom with artificially produced misalignments and available real-patient images from a particle therapy machine. We show that our approach can reduce patient alignment errors caused by camera mis-calibration from more than 5 mm to below 0.2 mm. Usage of images with artificial noise shows that the method is robust against image degradation of 2-5%. X-ray camera self-calibration improves accuracy when cameras are misaligned. We could show that rigid body alignment was computed more accurately and that self-calibration is possible even if detection of corresponding image features is not. (author)

  1. Comparison of exact, efron and breslow parameter approach method on hazard ratio and stratified cox regression model

    Science.gov (United States)

    Fatekurohman, Mohamat; Nurmala, Nita; Anggraeni, Dian

    2018-04-01

    The lungs are the most important organ of the respiratory system. Problems related to disorders of the lungs are various, e.g. pneumonia, emphysema, tuberculosis and lung cancer. Of all these, lung cancer is the most harmful. With this in mind, this research applies survival analysis to the factors affecting the endurance of lung cancer patients, comparing the exact, Efron and Breslow parameter approach methods on the hazard ratio and stratified Cox regression model. The data applied are based on the medical records of lung cancer patients at the Jember Paru-paru hospital in 2016, East Java, Indonesia. The factors affecting the endurance of the lung cancer patients can be classified into several criteria, i.e. sex, age, hemoglobin, leukocytes, erythrocytes, blood sedimentation rate, therapy status, general condition and body weight. The results show that the exact method of the stratified Cox regression model is better than the others. Furthermore, the endurance of the patients is affected by their age and general condition.
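
    The Breslow and Efron methods compared here differ only in how tied event times enter the Cox partial likelihood. A compact sketch of both approximations for a single covariate, on a tiny invented dataset with ties (the exact method enumerates tie orderings and is omitted for brevity):

```python
import numpy as np

def partial_loglik(beta, x, time, event, ties="breslow"):
    # Cox partial log-likelihood for one covariate, handling tied event
    # times with the Breslow or Efron approximation
    lp = beta * x
    ll = 0.0
    for t in np.unique(time[event == 1]):
        tied = (time == t) & (event == 1)
        risk = time >= t                        # risk set at time t
        d = tied.sum()
        sum_r = np.exp(lp[risk]).sum()
        sum_d = np.exp(lp[tied]).sum()
        ll += lp[tied].sum()
        if ties == "breslow":
            ll -= d * np.log(sum_r)
        else:                                   # Efron
            for j in range(d):
                ll -= np.log(sum_r - (j / d) * sum_d)
    return ll

# Tiny illustrative dataset with tied event times (hypothetical values)
time  = np.array([5, 5, 8, 8, 8, 12, 15])
event = np.array([1, 1, 1, 1, 0, 1, 0])
x     = np.array([1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0])

lb = partial_loglik(0.5, x, time, event, "breslow")
le = partial_loglik(0.5, x, time, event, "efron")
```

Because Efron progressively removes the tied subjects' contribution from the risk-set sum, its log-likelihood is never below Breslow's at the same beta, which is why the two methods can lead to different hazard-ratio estimates on data with many ties.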

  2. Calibrating cellular automaton models for pedestrians walking through corners

    Science.gov (United States)

    Dias, Charitha; Lovreglio, Ruggiero

    2018-05-01

    Cellular Automata (CA) based pedestrian simulation models have gained remarkable popularity as they are simpler and easier to implement compared to other microscopic modeling approaches. However, incorporating traditional floor field representations in CA models to simulate pedestrian corner navigation behavior could result in unrealistic behaviors. Even though several previous studies have attempted to enhance CA models to realistically simulate pedestrian maneuvers around bends, such modifications have not been calibrated or validated against empirical data. In this study, two static floor field (SFF) representations, namely 'discrete representation' and 'continuous representation', are calibrated for CA-models to represent pedestrians' walking behavior around 90° bends. Trajectory data collected through a controlled experiment are used to calibrate these model representations. Calibration results indicate that although both floor field representations can represent pedestrians' corner navigation behavior, the 'continuous' representation fits the data better. Output of this study could be beneficial for enhancing the reliability of existing CA-based models by representing pedestrians' corner navigation behaviors more realistically.
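
    A discrete static floor field of the kind calibrated above is typically just the shortest walkable distance to the exit, computed once by breadth-first search; pedestrians then step to the neighbouring cell with the lowest value. A minimal sketch on an L-shaped corridor (layout invented for illustration):

```python
from collections import deque
import numpy as np

# Grid with an L-shaped corridor: 1 = walkable, 0 = wall (illustrative layout)
grid = np.ones((10, 10), dtype=int)
grid[0:7, 4:10] = 0           # block forcing a 90-degree bend
exit_cell = (9, 9)

# "Discrete" static floor field: shortest walkable distance to the exit (BFS)
field = np.full(grid.shape, np.inf)
field[exit_cell] = 0
q = deque([exit_cell])
while q:
    r, c = q.popleft()
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 10 and 0 <= nc < 10 and grid[nr, nc] and np.isinf(field[nr, nc]):
            field[nr, nc] = field[r, c] + 1
            q.append((nr, nc))

# A pedestrian at (r, c) moves toward the neighbour with the lowest field value
```

The 'continuous representation' in the paper smooths this distance map; calibration then tunes how strongly simulated pedestrians follow the field around the bend.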

  3. Capital Structure Arbitrage under a Risk-Neutral Calibration

    Directory of Open Access Journals (Sweden)

    Peter J. Zeitsch

    2017-01-01

    Full Text Available By reinterpreting the calibration of structural models, a reassessment of the importance of the input variables is undertaken. The analysis shows that volatility is the key parameter to any calibration exercise, by several orders of magnitude. To maximize the sensitivity to volatility, a simple formulation of Merton’s model is proposed that employs deep out-of-the-money option implied volatilities. The methodology also eliminates the use of historic data to specify the default barrier, thereby leading to a full risk-neutral calibration. Subsequently, a new technique for identifying and hedging capital structure arbitrage opportunities is illustrated. The approach seeks to hedge the volatility risk, or vega, as opposed to the exposure from the underlying equity itself, or delta. The results question the efficacy of the common arbitrage strategy of only executing the delta hedge.
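
    The calibration target in such exercises is Merton's identity that equity is a European call on firm assets, struck at the face value of debt. A minimal sketch of the pricing formula and the implied risk-neutral default probability, with illustrative inputs (not calibrated to any real firm):

```python
from math import log, sqrt, exp, erf

def ncdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def merton_equity(V, D, sigma_v, r, T):
    # Merton's model: equity is a European call on firm assets V, struck at
    # the face value of debt D maturing at T; also returns P(default) = N(-d2)
    d1 = (log(V / D) + (r + 0.5 * sigma_v ** 2) * T) / (sigma_v * sqrt(T))
    d2 = d1 - sigma_v * sqrt(T)
    return V * ncdf(d1) - D * exp(-r * T) * ncdf(d2), ncdf(-d2)

# Illustrative inputs: asset value, debt, asset volatility, rate, maturity
V, D, sigma_v, r, T = 100.0, 80.0, 0.25, 0.02, 1.0
equity, p_default = merton_equity(V, D, sigma_v, r, T)
```

In a full calibration, V and sigma_v are unobserved and are backed out from the observed equity value and (as the paper advocates) deep out-of-the-money option implied volatilities.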

  4. Consequences of Secondary Calibrations on Divergence Time Estimates.

    Directory of Open Access Journals (Sweden)

    John J Schenk

    Full Text Available Secondary calibrations (calibrations based on the results of previous molecular dating studies are commonly applied in divergence time analyses in groups that lack fossil data; however, the consequences of applying secondary calibrations in a relaxed-clock approach are not fully understood. I tested whether applying the posterior estimate from a primary study as a prior distribution in a secondary study results in consistent age and uncertainty estimates. I compared age estimates from simulations with 100 randomly replicated secondary trees. On average, the 95% credible intervals of node ages for secondary estimates were significantly younger and narrower than primary estimates. The primary and secondary age estimates were significantly different in 97% of the replicates after Bonferroni corrections. Greater error in magnitude was associated with deeper than shallower nodes, but the opposite was found when standardized by median node age, and a significant positive relationship was determined between the number of tips/age of secondary trees and the total amount of error. When two secondary calibrated nodes were analyzed, estimates remained significantly different, and although the minimum and median estimates were associated with less error, maximum age estimates and credible interval widths had greater error. The shape of the prior also influenced error, in which applying a normal, rather than uniform, prior distribution resulted in greater error. Secondary calibrations, in summary, lead to a false impression of precision and the distribution of age estimates shift away from those that would be inferred by the primary analysis. These results suggest that secondary calibrations should not be applied as the only source of calibration in divergence time analyses that test time-dependent hypotheses until the additional error associated with secondary calibrations is more properly modeled to take into account increased uncertainty in age estimates.

  5. An Ionospheric Index Model based on Linear Regression and Neural Network Approaches

    Science.gov (United States)

    Tshisaphungo, Mpho; McKinnell, Lee-Anne; Bosco Habarulema, John

    2017-04-01

    The ionosphere is well known to reflect radio wave signals in the high frequency (HF) band due to the presence of electrons and ions within the region. To optimise the use of long-distance HF communications, it is important to understand the drivers of ionospheric storms and accurately predict the propagation conditions, especially during disturbed days. This paper presents the development of an ionospheric storm-time index over the South African region for HF communication users. The model will provide a valuable tool to measure the complex ionospheric behaviour in an operational space weather monitoring and forecasting environment. The development of the ionospheric storm-time index is based on data from a single ionosonde station over Grahamstown (33.3°S, 26.5°E), South Africa. Critical frequency of the F2 layer (foF2) measurements for the period 1996-2014 were considered for this study. The model was developed based on linear regression and neural network approaches. In this talk, validation results for low, medium and high solar activity periods will be discussed to demonstrate the model's performance.

  6. Kalman Filter for Calibrating a Telescope Focal Plane

    Science.gov (United States)

    Kang, Bryan; Bayard, David

    2006-01-01

    The instrument-pointing frame (IPF) Kalman filter, and an algorithm that implements this filter, have been devised for calibrating the focal plane of a telescope. As used here, calibration signifies, more specifically, a combination of measurements and calculations directed toward ensuring accuracy in aiming the telescope and determining the locations of objects imaged in various arrays of photodetectors in instruments located on the focal plane. The IPF Kalman filter was originally intended for application to a spaceborne infrared astronomical telescope, but can also be applied to other spaceborne and ground-based telescopes. In the traditional approach to calibration of a telescope, (1) one team of experts concentrates on estimating parameters (e.g., pointing alignments and gyroscope drifts) that are classified as being of primarily an engineering nature, (2) another team of experts concentrates on estimating calibration parameters (e.g., plate scales and optical distortions) that are classified as being primarily of a scientific nature, and (3) the two teams repeatedly exchange data in an iterative process in which each team refines its estimates with the help of the data provided by the other team. This iterative process is inefficient and uneconomical because it is time-consuming and entails the maintenance of two survey teams and the development of computer programs specific to the requirements of each team. Moreover, theoretical analysis reveals that the engineering/science iterative approach is not optimal in that it does not yield the best estimates of focal-plane parameters and, depending on the application, may not even enable convergence toward a set of estimates.
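
    The point of the IPF filter is that engineering and science parameters can sit in one state vector and be estimated jointly from the same measurements, replacing the two-team iteration. A toy linear Kalman filter in that spirit, with an invented two-parameter state and measurement model:

```python
import numpy as np

rng = np.random.default_rng(4)

# Joint state: [pointing bias ("engineering"), plate-scale error ("science")];
# a single filter estimates both at once instead of iterating between teams.
x_true = np.array([0.3, -0.1])
x = np.zeros(2)                  # state estimate
P = np.eye(2)                    # state covariance
R = 0.05 ** 2                    # measurement noise variance

for _ in range(300):
    s = rng.uniform(-1, 1)       # star position on the detector (arbitrary units)
    H = np.array([1.0, s])       # measurement = bias + scale_error * position
    z = H @ x_true + rng.normal(0, 0.05)
    # Standard Kalman measurement update (constant state, no process noise)
    S = H @ P @ H + R
    K = P @ H / S
    x = x + K * (z - H @ x)
    P = P - np.outer(K, H) @ P
```

Varying the measurement geometry (here, the star position s) is what makes the two parameter types jointly identifiable within one filter.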

  7. Effective radiation attenuation calibration for breast density: compression thickness influences and correction

    Directory of Open Access Journals (Sweden)

    Thomas Jerry A

    2010-11-01

    Full Text Available Abstract Background Calibrating mammograms to produce a standardized breast density measurement for breast cancer risk analysis requires an accurate spatial measure of the compressed breast thickness. Thickness inaccuracies due to the nominal system readout value and compression paddle orientation induce unacceptable errors in the calibration. Method A thickness correction was developed and evaluated using a fully specified two-component surrogate breast model. A previously developed calibration approach based on effective radiation attenuation coefficient measurements was used in the analysis. Water and oil were used to construct phantoms to replicate the deformable properties of the breast. Phantoms consisting of measured proportions of water and oil were used to estimate calibration errors without correction, evaluate the thickness correction, and investigate the reproducibility of the various calibration representations under compression thickness variations. Results The average thickness uncertainty due to compression paddle warp was characterized to within 0.5 mm. The relative calibration error was reduced to 7% from 48-68% with the correction. The normalized effective radiation attenuation coefficient (planar representation) was reproducible under intra-sample compression thickness variations compared with calibrated volume measures. Conclusion Incorporating this thickness correction into the rigid breast tissue equivalent calibration method should improve the calibration accuracy of mammograms for risk assessments using the reproducible planar calibration measure.

  8. Ideas for fast accelerator model calibration

    International Nuclear Information System (INIS)

    Corbett, J.

    1997-05-01

    With the advent of a simple matrix inversion technique, measurement-based storage ring modeling has made rapid progress in recent years. Using fast computers with large memory, the matrix inversion procedure typically adjusts up to 10^3 model variables to fit on the order of 10^5 measurements. The results have been surprisingly accurate. Physics aside, one of the next frontiers is to simplify the process and to reduce computation time. In this paper, the authors discuss two approaches to speed up the model calibration process: recursive least-squares fitting and a piecewise fitting approach.
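
    The recursive least-squares idea mentioned above can be illustrated with a minimal sketch (illustrative only, not the author's code): each new measurement folds into the running parameter estimate without refitting the whole system.

```python
import numpy as np

def rls_update(theta, P, x, y, lam=1.0):
    """One recursive least-squares step: fold the measurement (x, y)
    into the current estimate theta with covariance-like matrix P."""
    x = x.reshape(-1, 1)
    # Gain vector: how strongly this measurement corrects the estimate.
    k = P @ x / (lam + x.T @ P @ x)
    theta = theta + (k * (y - x.T @ theta)).ravel()
    P = (P - k @ x.T @ P) / lam
    return theta, P

# Toy model: y = 2*a - 3*b, recovered one measurement at a time.
rng = np.random.default_rng(0)
theta = np.zeros(2)
P = np.eye(2) * 1e6          # large initial uncertainty
for _ in range(200):
    x = rng.normal(size=2)
    y = 2.0 * x[0] - 3.0 * x[1]
    theta, P = rls_update(theta, P, x, y)
print(theta)                  # approaches [2, -3]
```

    Because each update costs only a few matrix-vector products in the parameter dimension, the estimate is refreshed measurement by measurement instead of re-solving the full inverse problem.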

  9. There is No Quantum Regression Theorem

    International Nuclear Information System (INIS)

    Ford, G.W.; O'Connell, R.F.

    1996-01-01

    The Onsager regression hypothesis states that the regression of fluctuations is governed by macroscopic equations describing the approach to equilibrium. It is here asserted that this hypothesis fails in the quantum case. This is shown first by explicit calculation for the example of quantum Brownian motion of an oscillator and then in general from the fluctuation-dissipation theorem. It is asserted that the correct generalization of the Onsager hypothesis is the fluctuation-dissipation theorem. copyright 1996 The American Physical Society

  10. Comparison of multinomial logistic regression and logistic regression: which is more efficient in allocating land use?

    Science.gov (United States)

    Lin, Yingzhi; Deng, Xiangzheng; Li, Xing; Ma, Enjun

    2014-12-01

    Spatially explicit simulation of land use change is the basis for estimating the effects of land use and cover change on energy fluxes, ecology and the environment. At the pixel level, logistic regression is one of the most common approaches used in spatially explicit land use allocation models to determine the relationship between land use and its causal factors in driving land use change, and thereby to evaluate land use suitability. However, these models have a drawback in that they do not determine/allocate land use based on the direct relationship between land use change and its driving factors. Consequently, a multinomial logistic regression method was introduced to address this flaw, and thereby, judge the suitability of a type of land use in any given pixel in a case study area of the Jiangxi Province, China. A comparison of the two regression methods indicated that the proportion of correctly allocated pixels using multinomial logistic regression was 92.98%, which was 8.47% higher than that obtained using logistic regression. Paired t-test results also showed that pixels were more clearly distinguished by multinomial logistic regression than by logistic regression. In conclusion, multinomial logistic regression is a more efficient and accurate method for the spatial allocation of land use changes. The application of this method in future land use change studies may improve the accuracy of predicting the effects of land use and cover change on energy fluxes, ecology, and environment.
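
    A minimal numpy sketch of the multinomial (softmax) regression compared in this record, trained on hypothetical toy "pixels" rather than the Jiangxi data:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_multinomial(X, y, n_classes, lr=0.5, steps=500):
    """Gradient descent on the multinomial cross-entropy loss."""
    n, d = X.shape
    W = np.zeros((d, n_classes))
    Y = np.eye(n_classes)[y]               # one-hot targets
    for _ in range(steps):
        P = softmax(X @ W)
        W -= lr * X.T @ (P - Y) / n        # average gradient
    return W

# Toy "pixels": two driving factors, three land-use classes.
rng = np.random.default_rng(1)
centres = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
feats = np.vstack([c + 0.5 * rng.normal(size=(50, 2)) for c in centres])
X = np.hstack([feats, np.ones((len(feats), 1))])   # intercept column
y = np.repeat([0, 1, 2], 50)

W = fit_multinomial(X, y, 3)
acc = (softmax(X @ W).argmax(axis=1) == y).mean()
print(f"correctly allocated: {acc:.2%}")
```

    Unlike a set of separate binary logistic models, the softmax couples all class probabilities so each pixel is allocated to the single most suitable class directly.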

  11. The Public-Private Sector Wage Gap in Zambia in the 1990s: A Quantile Regression Approach

    DEFF Research Database (Denmark)

    Nielsen, Helena Skyt; Rosholm, Michael

    2001-01-01

    We investigate the determinants of wages in Zambia and, based on the quantile regression approach, we analyze how their effects differ at different points in the wage distribution and over time. We use three cross-sections of Zambian household data from the early nineties, which was a period of economic transition, because items such as privatization and deregulation were on the political agenda. The focus is placed on the public-private sector wage gap, and the results show that this gap was relatively favorable for the low-skilled and less favorable for the high-skilled. This picture was further...

  12. Testing hypotheses for differences between linear regression lines

    Science.gov (United States)

    Stanley J. Zarnoch

    2009-01-01

    Five hypotheses are identified for testing differences between simple linear regression lines. The distinctions between these hypotheses are based on a priori assumptions and illustrated with full and reduced models. The contrast approach is presented as an easy and complete method for testing for overall differences between the regressions and for making pairwise...

  13. Calibration of a distributed hydrology and land surface model using energy flux measurements

    DEFF Research Database (Denmark)

    Larsen, Morten Andreas Dahl; Refsgaard, Jens Christian; Jensen, Karsten H.

    2016-01-01

    In this study we develop and test a calibration approach on a spatially distributed groundwater-surface water catchment model (MIKE SHE) coupled to a land surface model component with particular focus on the water and energy fluxes. The model is calibrated against time series of eddy flux measure...

  14. Calibration of the Diameter Distribution Derived from the Area-based Approach with Individual Tree-based Diameter Estimates Using the Airborne Laser Scanning

    Science.gov (United States)

    Xu, Q.; Hou, Z.; Maltamo, M.; Tokola, T.

    2015-12-01

    Diameter distributions of trees are important indicators of current forest stand structure and future dynamics. A new method was proposed in the study to combine the diameter distributions derived from the area-based approach (ABA) and the diameter distribution derived from the individual tree detection (ITD) in order to obtain more accurate forest stand attributes. Since dominant trees can be reliably detected and measured by the Lidar data via the ITD, the focus of the study is to retrieve the suppressed trees (trees that were missed by the ITD) from the ABA. Replacement and histogram matching were respectively employed at the plot level to retrieve the suppressed trees. A cut point was detected from the ITD-derived diameter distribution for each sample plot to distinguish dominant trees from the suppressed trees. The results showed that calibrated diameter distributions were more accurate in terms of error index and the entire growing stock estimates. Compared with the best performer between the ABA and the ITD, calibrated diameter distributions decreased the relative RMSE of the estimated entire growing stock, saw log and pulpwood fractions by 2.81, 3.05 and 7.73 percentage points respectively. Calibration improved the estimation of pulpwood fraction significantly, resulting in a negligible bias of the estimated entire growing stock.

  15. Spitzer/JWST Cross Calibration: IRAC Observations of Potential Calibrators for JWST

    Science.gov (United States)

    Carey, Sean J.; Gordon, Karl D.; Lowrance, Patrick; Ingalls, James G.; Glaccum, William J.; Grillmair, Carl J.; Krick, Jessica E.; Laine, Seppo J.; Fazio, Giovanni G.; Hora, Joseph L.; Bohlin, Ralph

    2017-06-01

    We present observations at 3.6 and 4.5 microns using IRAC on the Spitzer Space Telescope of a set of main sequence A stars and white dwarfs that are potential calibrators across the JWST instrument suite. The stars range from brightnesses of 4.4 to 15 mag in K band. The calibration observations use a similar redundancy to the observing strategy for the IRAC primary calibrators (Reach et al. 2005) and the photometry is obtained using identical methods and instrumental photometric corrections as those applied to the IRAC primary calibrators (Carey et al. 2009). The resulting photometry is then compared to the predictions based on spectra from the CALSPEC Calibration Database (http://www.stsci.edu/hst/observatory/crds/calspec.html) and the IRAC bandpasses. These observations are part of an ongoing collaboration between IPAC and STScI investigating absolute calibration in the infrared.

  16. Node-to-node field calibration of wireless distributed air pollution sensor network.

    Science.gov (United States)

    Kizel, Fadi; Etzion, Yael; Shafran-Nathan, Rakefet; Levy, Ilan; Fishbain, Barak; Bartonova, Alena; Broday, David M

    2018-02-01

    Low-cost air quality sensors offer high-resolution spatiotemporal measurements that can be used for air resources management and exposure estimation. Yet, such sensors require frequent calibration to provide reliable data, since even after a laboratory calibration they might not report correct values when they are deployed in the field, due to interference with other pollutants, as a result of sensitivity to environmental conditions and due to sensor aging and drift. Field calibration has been suggested as a means for overcoming these limitations, with the common strategy involving periodical collocations of the sensors at an air quality monitoring station. However, the cost and complexity involved in relocating numerous sensor nodes back and forth, and the loss of data during the repeated calibration periods make this strategy inefficient. This work examines an alternative approach, a node-to-node (N2N) calibration, where only one sensor in each chain is directly calibrated against the reference measurements and the rest of the sensors are calibrated sequentially one against the other while they are deployed and collocated in pairs. The calibration can be performed multiple times as a routine procedure. This procedure minimizes the total number of sensor relocations, and enables calibration while simultaneously collecting data at the deployment sites. We studied N2N chain calibration and the propagation of the calibration error analytically, computationally and experimentally. The in-situ N2N calibration is shown to be generic and applicable for different pollutants, sensing technologies, sensor platforms, chain lengths, and sensor order within the chain. In particular, we show that chain calibration of three nodes, each calibrated for a week, propagate calibration errors that are similar to those found in direct field calibration. Hence, N2N calibration is shown to be suitable for calibration of distributed sensor networks.

  17. A convolutional neural network approach to calibrating the rotation axis for X-ray computed tomography.

    Science.gov (United States)

    Yang, Xiaogang; De Carlo, Francesco; Phatak, Charudatta; Gürsoy, Doğa

    2017-03-01

    This paper presents an algorithm to calibrate the center-of-rotation for X-ray tomography by using a machine learning approach, the Convolutional Neural Network (CNN). The algorithm shows excellent accuracy from the evaluation of synthetic data with various noise ratios. It is further validated with experimental data of four different shale samples measured at the Advanced Photon Source and at the Swiss Light Source. The results are as good as those determined by visual inspection and show better robustness than conventional methods. CNN also has great potential for reducing or removing other artifacts caused by instrument instability, detector non-linearity, etc. An open-source toolbox, which integrates the CNN methods described in this paper, is freely available through GitHub at tomography/xlearn and can be easily integrated into existing computational pipelines available at various synchrotron facilities. Source code, documentation and information on how to contribute are also provided.

  18. Online examiner calibration across specialties.

    Science.gov (United States)

    Sturman, Nancy; Wong, Wai Yee; Turner, Jane; Allan, Chris

    2017-09-26

    Integrating undergraduate medical curricula horizontally across clinical medical specialties may be a more patient-centred and learner-centred approach than rotating students through specialty-specific teaching and assessment, but requires some interspecialty calibration of examiner judgements. Our aim was to evaluate the acceptability and feasibility of an online pilot of interdisciplinary examiner calibration. Fair clinical assessment is important to both medical students and clinical teachers. Methods: Clinical teachers were invited to rate video-recorded student objective structured clinical examination (OSCE) performances and join subsequent online discussions using the university's learning management system. Post-project survey free-text and Likert-scale participant responses were analysed to evaluate the acceptability of the pilot and to identify recommendations for improvement. Although 68 clinicians were recruited to participate, and there were 1599 hits on recordings and discussion threads, only 25 clinical teachers rated at least one student performance, and 18 posted at least one comment. Participants, including rural doctors, appeared to value the opportunity for interdisciplinary rating calibration and discussion. Although the asynchronous online format had advantages, especially for rural doctors, participants reported considerable IT challenges. Our findings suggest that fair clinical assessment is important to both medical students and clinical teachers. Interspecialty discussions about assessment may have the potential to enrich intraspecialty perspectives, enhance interspecialty engagement and collaboration, and improve the quality of clinical teacher assessment. Better alignment of university and hospital systems, a face to face component and other modifications may have enhanced clinician engagement with this project. Findings suggest that specialty assessment cultures and content expertise may not be barriers to pursuing more integrated

  19. On-line calibration of process instrumentation channels in nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Hashemian, H.M.; Farmer, J.P. [Analysis and Measurement Services Corp., Knoxville, TN (United States)

    1995-04-01

    An on-line instrumentation monitoring system was developed and validated for use in nuclear power plants. This system continuously monitors the calibration status of instrument channels and determines whether or not they require manual calibrations. This is accomplished by comparing the output of each instrument channel to an estimate of the process it is monitoring. If the deviation of the instrument channel from the process estimate is greater than an allowable limit, then the instrument is said to be "out of calibration" and manual adjustments are made to correct the calibration. The success of the on-line monitoring system depends on the accuracy of the process estimation. The system described in this paper incorporates both simple intercomparison techniques as well as analytical approaches in the form of data-driven empirical modeling to estimate the process. On-line testing of the calibration of process instrumentation channels will reduce the number of manual calibrations currently performed, thereby reducing both costs to utilities and radiation exposure to plant personnel.
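
    The deviation check described above can be sketched in a few lines, assuming the simplest intercomparison technique (the median of redundant channels) as the process estimate; the channel names and limit are hypothetical.

```python
import statistics

def check_calibration(readings, limit):
    """Flag channels whose deviation from the process estimate
    exceeds the allowable limit. The median of the redundant
    channels serves as a robust process estimate, since it
    resists a single drifted channel."""
    est = statistics.median(readings.values())
    return {ch: abs(val - est) > limit for ch, val in readings.items()}

# Four redundant pressure channels; channel 'C' has drifted.
readings = {"A": 100.2, "B": 99.8, "C": 104.9, "D": 100.1}
flags = check_calibration(readings, limit=1.5)
print(flags)   # only 'C' is flagged "out of calibration"
```

    In practice the process estimate would come from the data-driven empirical models the abstract mentions; the flag logic stays the same.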

  20. An information theoretic approach to use high-fidelity codes to calibrate low-fidelity codes

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, Allison, E-mail: lewis.allison10@gmail.com [Department of Mathematics, North Carolina State University, Raleigh, NC 27695 (United States); Smith, Ralph [Department of Mathematics, North Carolina State University, Raleigh, NC 27695 (United States); Williams, Brian [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Figueroa, Victor [Sandia National Laboratories, Albuquerque, NM 87185 (United States)

    2016-11-01

    For many simulation models, it can be prohibitively expensive or physically infeasible to obtain a complete set of experimental data to calibrate model parameters. In such cases, one can alternatively employ validated higher-fidelity codes to generate simulated data, which can be used to calibrate the lower-fidelity code. In this paper, we employ an information-theoretic framework to determine the reduction in parameter uncertainty that is obtained by evaluating the high-fidelity code at a specific set of design conditions. These conditions are chosen sequentially, based on the amount of information that they contribute to the low-fidelity model parameters. The goal is to employ Bayesian experimental design techniques to minimize the number of high-fidelity code evaluations required to accurately calibrate the low-fidelity model. We illustrate the performance of this framework using heat and diffusion examples, a 1-D kinetic neutron diffusion equation, and a particle transport model, and include initial results from the integration of the high-fidelity thermal-hydraulics code Hydra-TH with a low-fidelity exponential model for the friction correlation factor.
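
    For a Bayesian linear surrogate, the information contributed by a candidate design condition has a closed form through the log-determinant of the posterior covariance. The sketch below is an assumption-laden stand-in for the paper's framework (hypothetical prior and noise variances, a toy polynomial feature map) that greedily selects the most informative next high-fidelity run:

```python
import numpy as np

def posterior_cov(X, prior_var=100.0, noise_var=1.0):
    """Posterior covariance of the weights in a Bayesian linear model."""
    d = X.shape[1]
    return np.linalg.inv(np.eye(d) / prior_var + X.T @ X / noise_var)

def info_gain(X_done, x_new):
    """Reduction in log-det posterior covariance (information about
    the parameters) obtained by adding design point x_new."""
    before = np.linalg.slogdet(posterior_cov(X_done))[1]
    after = np.linalg.slogdet(posterior_cov(np.vstack([X_done, x_new])))[1]
    return before - after

# Candidate high-fidelity design conditions (features of the surrogate).
candidates = [np.array([[1.0, t, t**2]]) for t in np.linspace(0.0, 1.0, 11)]
X_done = np.array([[1.0, 0.5, 0.25]])    # one run already performed
gains = [info_gain(X_done, c) for c in candidates]
best = candidates[int(np.argmax(gains))]
print(best)
```

    Repeating a design already run contributes almost nothing, while conditions far from previous runs in feature space score highest, which is the sequential-selection behaviour the abstract describes.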

  1. Simultaneous calibration phantom commission and geometry calibration in cone beam CT

    Science.gov (United States)

    Xu, Yuan; Yang, Shuai; Ma, Jianhui; Li, Bin; Wu, Shuyu; Qi, Hongliang; Zhou, Linghong

    2017-09-01

    Geometry calibration is a vital step for describing the geometry of a cone beam computed tomography (CBCT) system and is a prerequisite for CBCT reconstruction. In current methods, calibration phantom commission and geometry calibration are divided into two independent tasks. Small errors in ball-bearing (BB) positioning in the phantom-making step will severely degrade the quality of phantom calibration. To solve this problem, we propose an integrated method to simultaneously realize geometry phantom commission and geometry calibration. Instead of assuming the accuracy of the geometry phantom, the integrated method considers BB centers in the phantom as an optimized parameter in the workflow. Specifically, an evaluation phantom and the corresponding evaluation contrast index are used to evaluate geometry artifacts for optimizing the BB coordinates in the geometry phantom. After utilizing particle swarm optimization, the CBCT geometry and BB coordinates in the geometry phantom are calibrated accurately and are then directly used for the next geometry calibration task in other CBCT systems. To evaluate the proposed method, both qualitative and quantitative studies were performed on simulated and realistic CBCT data. The spatial resolution of reconstructed images using dental CBCT can reach up to 15 line pairs cm^-1. The proposed method is also superior to the Wiesent method in experiments. This paper shows that the proposed method is attractive for simultaneous and accurate geometry phantom commission and geometry calibration.
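
    Particle swarm optimization, used above to optimize the BB coordinates, can be sketched generically (a toy quadratic "geometry residual" stands in for the real CBCT objective):

```python
import random

random.seed(0)

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimisation of f over [-5, 5]^dim."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal bests
    pbest_val = [f(p) for p in pos]
    gbest = pbest[min(range(n_particles), key=lambda i: pbest_val[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + pull toward personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < f(gbest):
                    gbest = pos[i][:]
    return gbest

# Toy "geometry residual": minimised when the parameters hit (1, -2, 3).
target = (1.0, -2.0, 3.0)
residual = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
best = pso(residual, 3)
print(best)   # close to (1, -2, 3)
```

    In the paper's setting, `f` would be the evaluation contrast index computed from a reconstruction, and the swarm coordinates would be the BB positions and system geometry parameters.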

  2. A new approach to nuclear reactor design optimization using genetic algorithms and regression analysis

    International Nuclear Information System (INIS)

    Kumar, Akansha; Tsvetkov, Pavel V.

    2015-01-01

    desired power peaking limits, desired effective and infinite neutron multiplication factors, high fast fission factor, high thermal efficiency in the conversion from thermal energy to electrical energy using the Brayton cycle, and high fuel burn-up. It is to be noted that we have kept the total mass of the fuel constant. In this work, we present a module based (modular) approach to perform the optimization wherein we have defined the following modules: single fuel pin cell, whole core, thermal–hydraulics, and energy conversion. In each of the modules we have defined a specific set of parameters and optimization objectives. The GA system (GAS) and RS together play the role of optimizing each of the individual modules, and integrating the modules to determine the final nuclear reactor core. However, implementation of GA could lead to a local minimum or a non-unique set of parameters that meet the specific optimization objectives. The GA code is built using Java, neutronic analysis using MCNP6, thermal–hydraulics calculations using Java, and regression analysis using R.
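
    The GA loop at the heart of each module can be sketched in miniature (illustrative only; the authors' system is built in Java around MCNP6 and R, and a toy quadratic stands in for the reactor figure of merit):

```python
import random

random.seed(7)

def ga_minimize(fitness, bounds, pop_size=40, generations=120,
                crossover_p=0.9, mutation_sigma=0.1):
    """Tiny real-coded genetic algorithm: tournament selection,
    blend crossover, Gaussian mutation, elitism of the best design."""
    def rand_ind():
        return [random.uniform(lo, hi) for lo, hi in bounds]
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness)
        nxt = [scored[0][:]]                       # keep the best design
        while len(nxt) < pop_size:
            # Tournament selection of two parents.
            a, b = (min(random.sample(scored, 3), key=fitness)
                    for _ in range(2))
            child = [(x + y) / 2 if random.random() < crossover_p else x
                     for x, y in zip(a, b)]
            # Gaussian mutation, clipped to the design bounds.
            child = [min(max(g + random.gauss(0, mutation_sigma), lo), hi)
                     for g, (lo, hi) in zip(child, bounds)]
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

# Toy objective standing in for a module's figure of merit.
goal = [0.3, 0.7]
score = lambda p: sum((g - x) ** 2 for g, x in zip(goal, p))
best = ga_minimize(score, [(0.0, 1.0), (0.0, 1.0)])
print(best)   # near [0.3, 0.7]
```

    As the abstract notes, such a loop can stall in a local minimum or return non-unique parameter sets, which is why the regression step is used alongside it to characterize the design space.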

  3. Determination of calibration equations by means of the generalized least squares method

    International Nuclear Information System (INIS)

    Zijp, W.L.

    1984-12-01

    For the determination of two-dimensional calibration curves (e.g. in tank calibration procedures) or of three-dimensional calibration equations (e.g. for the calibration of NDA equipment for enrichment measurements) one performs measurements under well chosen conditions, where all observables of interest (including the values of the standard material) are subject to measurement uncertainties. Moreover, correlations in several measurements may occur. This document describes the mathematical-statistical approach to determine the values of the model parameters and their covariance matrix, which best fit the mathematical model for the calibration equation. The formulae are based on the method of generalized least squares, where the term generalized implies that non-linear equations in the unknown parameters and also covariance matrices of the measurement data of the calibration can be taken into account. In the general case an iteration procedure is required. No iteration is required when the model is linear in the parameters and the covariance matrices for the measurements of co-ordinates of the calibration points are proportional to each other.
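
    In the linear, non-iterative case the generalized least-squares estimate has the closed form b = (X'W X)^-1 X'W y with W the inverse measurement covariance. A numpy sketch on hypothetical straight-line calibration data with correlated errors:

```python
import numpy as np

def gls_fit(X, y, cov):
    """Generalized least squares: minimise (y - Xb)' cov^-1 (y - Xb).
    Returns the parameter estimate and its covariance matrix."""
    W = np.linalg.inv(cov)
    cov_b = np.linalg.inv(X.T @ W @ X)     # parameter covariance
    beta = cov_b @ X.T @ W @ y
    return beta, cov_b

# Straight-line calibration curve y = b0 + b1*x with correlated errors.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
X = np.column_stack([np.ones_like(x), x])
y = np.array([0.1, 2.1, 3.9, 6.2, 7.9])   # roughly y = 2x
cov = 0.04 * np.eye(5) + 0.01             # correlated measurement errors
beta, cov_b = gls_fit(X, y, cov)
print(beta)   # intercept near 0, slope near 2
```

    For the nonlinear models mentioned in the abstract, this solve would sit inside the iteration loop, with X replaced by the Jacobian at the current parameter estimate.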

  4. Calibration of environmental radionuclide transfer models using a Bayesian approach with Markov chain Monte Carlo simulations and model comparisons - Calibration of radionuclides transfer models in the environment using a Bayesian approach with Markov chain Monte Carlo simulation and comparison of models

    Energy Technology Data Exchange (ETDEWEB)

    Nicoulaud-Gouin, V.; Giacalone, M.; Gonze, M.A. [Institut de Radioprotection et de Surete Nucleaire-PRP-ENV/SERIS/LM2E (France); Martin-Garin, A.; Garcia-Sanchez, L. [IRSN-PRP-ENV/SERIS/L2BT (France)

    2014-07-01

    Calibration of transfer models against observation data is a challenge, especially if parameter uncertainty is required and if a choice must be made between competing models. Generally two main calibration methods are used: the frequentist approach, in which the unknown parameter of interest is supposed fixed and its estimation is based on the data only. In this category, the least squares method has many restrictions in nonlinear models, and competing models need to be nested in order to be compared. Bayesian inference, in which the unknown parameter of interest is supposed random and its estimation is based on the data and on prior information. Compared to the frequentist method, it provides probability density functions and therefore pointwise estimation with credible intervals. However, in practical cases, Bayesian inference is a complex problem of numerical integration, which explains its low use in operational modeling, including radioecology. This study aims to illustrate the interest and feasibility of the Bayesian approach in radioecology, particularly in the case of ordinary differential equation models with non-constant coefficients, which cover most radiological risk assessment models, notably those implemented in the Symbiose platform (Gonze et al, 2010). The Markov chain Monte Carlo (MCMC) method (Metropolis et al., 1953) was used because the posterior expectations are intractable integrals. The invariant distribution of the parameters was sampled with the Metropolis-Hastings algorithm (Hastings, 1970). The GNU-MCSim software (Bois and Maszle, 2011), a Bayesian hierarchical framework, was used to deal with nonlinear differential models. Two case studies including this type of model were investigated: an equilibrium-kinetic sorption model (EK) (e.g. van Genuchten et al, 1974), with experimental data concerning ¹³⁷Cs and ⁸⁵Sr sorption and desorption in different soils studied in stirred flow-through reactors. This model, generalizing the K_d approach
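
    Random-walk Metropolis-Hastings can be sketched for a one-parameter transfer model; the example below uses a hypothetical exponential decay with known noise, not the EK sorption model of the study:

```python
import math
import random

random.seed(3)

# Observations of a decay y = exp(-k*t) with Gaussian noise (sigma known).
t_obs = [0.5, 1.0, 2.0, 4.0, 8.0]
k_true, sigma = 0.4, 0.05
y_obs = [math.exp(-k_true * t) + random.gauss(0, sigma) for t in t_obs]

def log_post(k):
    """Log posterior: Gaussian likelihood, flat prior on k > 0."""
    if k <= 0:
        return -math.inf
    sse = sum((y - math.exp(-k * t)) ** 2 for t, y in zip(t_obs, y_obs))
    return -sse / (2 * sigma ** 2)

# Random-walk Metropolis-Hastings with a symmetric Gaussian proposal.
k, lp = 1.0, log_post(1.0)
samples = []
for i in range(20000):
    k_prop = k + random.gauss(0, 0.05)
    lp_prop = log_post(k_prop)
    if math.log(random.random()) < lp_prop - lp:   # accept/reject
        k, lp = k_prop, lp_prop
    if i >= 5000:                                  # discard burn-in
        samples.append(k)
print(sum(samples) / len(samples))   # posterior mean near k_true
```

    The retained samples approximate the full posterior of k, so credible intervals come directly from their quantiles, which is the practical advantage over a frequentist point fit that the abstract emphasizes.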

  5. Vandalism Detection in Wikipedia: a Bag-of-Words Classifier Approach

    OpenAIRE

    Belani, Amit

    2010-01-01

    A bag-of-words based probabilistic classifier is trained using regularized logistic regression to detect vandalism in the English Wikipedia. Isotonic regression is used to calibrate the class membership probabilities. Learning curve, reliability, ROC, and cost analysis are performed.
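
    The isotonic calibration step can be sketched with a pure-Python pool-adjacent-violators (PAVA) pass over labels sorted by classifier score (toy labels, not the Wikipedia data):

```python
def pava(labels):
    """Pool-adjacent-violators: given 0/1 labels sorted by increasing
    classifier score, fit non-decreasing calibrated probabilities."""
    blocks = [[y, 1] for y in labels]          # [label_sum, count] blocks
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] / blocks[i][1] > blocks[i + 1][0] / blocks[i + 1][1]:
            blocks[i][0] += blocks[i + 1][0]   # merge adjacent violators
            blocks[i][1] += blocks[i + 1][1]
            del blocks[i + 1]
            i = max(i - 1, 0)    # a merge can expose a violation behind
        else:
            i += 1
    fitted = []
    for s, n in blocks:
        fitted.extend([s / n] * n)
    return fitted

# "Is vandalism" labels for edits sorted by increasing classifier score.
probs = pava([0, 0, 1, 0, 1, 0, 1, 1])
print(probs)   # [0.0, 0.0, 0.5, 0.5, 0.5, 0.5, 1.0, 1.0]
```

    The fitted step function maps raw classifier scores to monotone class-membership probabilities, which is exactly the calibration role isotonic regression plays in the record above.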

  6. Calibration and intercomparison methods of dose calibrators used in nuclear medicine facilities

    International Nuclear Information System (INIS)

    Costa, Alessandro Martins da

    1999-01-01

    Dose calibrators are used in most nuclear medicine facilities to determine the amount of radioactivity administered to a patient in a particular investigation or therapeutic procedure. It is therefore of vital importance that the equipment used presents good performance and is regularly calibrated at an authorized laboratory. This occurs if adequate quality assurance procedures are carried out. Such quality control tests should be performed, some daily, others biannually or yearly, testing, for example, accuracy and precision, reproducibility and response linearity. In this work a commercial dose calibrator was calibrated with solutions of radionuclides used in nuclear medicine. Simple instrument tests, such as response linearity and the response variation with increasing source volume at a constant source activity concentration, were performed. This instrument can now be used as a working standard for calibration of other dose calibrators. An intercomparison procedure was proposed as a method of quality control of dose calibrators used in nuclear medicine facilities. (author)

  7. Modified Regression Correlation Coefficient for Poisson Regression Model

    Science.gov (United States)

    Kaengthong, Nattacha; Domthong, Uthumporn

    2017-09-01

    This study gives attention to indicators of predictive power for the Generalized Linear Model (GLM), which are widely used but often subject to restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables [E(Y|X)] for the Poisson regression model. The dependent variable is distributed as Poisson. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables, and in the presence of multicollinearity among the independent variables. The results show that the proposed regression correlation coefficient is better than the traditional regression correlation coefficient based on bias and the root mean square error (RMSE).
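
    The traditional regression correlation coefficient described here is the correlation between Y and the fitted E(Y|X). The sketch below fits a Poisson regression (log link) by Newton's method on simulated data and computes that standard coefficient; it is not the authors' modified version:

```python
import numpy as np

def fit_poisson(X, y, steps=25):
    """Poisson regression (log link) fitted by Newton's method."""
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        mu = np.exp(X @ beta)              # E[Y|X] under current fit
        grad = X.T @ (y - mu)              # score of the log-likelihood
        hess = X.T @ (X * mu[:, None])     # observed information
        beta += np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(5)
x = rng.uniform(0.0, 2.0, size=300)
X = np.column_stack([np.ones_like(x), x])
y = rng.poisson(np.exp(0.2 + 0.8 * x))    # true link: log mu = 0.2 + 0.8x

beta = fit_poisson(X, y)
mu = np.exp(X @ beta)
# Regression correlation coefficient: corr between Y and fitted E[Y|X].
r = np.corrcoef(y, mu)[0, 1]
print(r)
```

    Because Poisson noise has variance equal to the mean, r stays well below 1 even for a correctly specified model, which is one reason modified measures of predictive power are of interest.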

  8. An Improved Approach for RSSI-Based only Calibration-Free Real-Time Indoor Localization on IEEE 802.11 and 802.15.4 Wireless Networks

    Directory of Open Access Journals (Sweden)

    Marco Passafiume

    2017-03-01

    Full Text Available Assuming a reliable and responsive spatial contextualization service is a must-have in IEEE 802.11 and 802.15.4 wireless networks, a suitable approach consists of the implementation of localization capabilities, as an additional application layer to the communication protocol stack. Considering the applicative scenario where satellite-based positioning applications are denied, such as indoor environments, and excluding data packet arrival time measurements due to lack of time resolution, received signal strength indicator (RSSI) measurements, obtained according to IEEE 802.11 and 802.15.4 data access technologies, are the unique data sources suitable for indoor geo-referencing using COTS devices. In the existing literature, many RSSI based localization systems are introduced and experimentally validated, nevertheless they require periodic calibrations and significant information fusion from different sensors that dramatically decrease overall systems reliability and their effective availability. This motivates the work presented in this paper, which introduces an approach for an RSSI-based calibration-free and real-time indoor localization. While switched-beam array-based hardware (compliant with IEEE 802.15.4 router functionality) has already been presented by the author, the focus of this paper is the creation of an algorithmic layer for use with the pre-existing hardware capable of enabling full localization and data contextualization over a standard 802.15.4 wireless sensor network using only RSSI information without the need for a lengthy offline calibration phase. System validation reports the localization results in a typical indoor site, where the system has shown high accuracy, leading to a sub-metrical overall mean error and an almost 100% site coverage within 1 m localization error.

  9. An Improved Approach for RSSI-Based only Calibration-Free Real-Time Indoor Localization on IEEE 802.11 and 802.15.4 Wireless Networks.

    Science.gov (United States)

    Passafiume, Marco; Maddio, Stefano; Cidronali, Alessandro

    2017-03-29

    Assuming a reliable and responsive spatial contextualization service is a must-have in IEEE 802.11 and 802.15.4 wireless networks, a suitable approach consists of the implementation of localization capabilities, as an additional application layer to the communication protocol stack. Considering the applicative scenario where satellite-based positioning applications are denied, such as indoor environments, and excluding data packet arrival time measurements due to lack of time resolution, received signal strength indicator (RSSI) measurements, obtained according to IEEE 802.11 and 802.15.4 data access technologies, are the unique data sources suitable for indoor geo-referencing using COTS devices. In the existing literature, many RSSI based localization systems are introduced and experimentally validated, nevertheless they require periodic calibrations and significant information fusion from different sensors that dramatically decrease overall systems reliability and their effective availability. This motivates the work presented in this paper, which introduces an approach for an RSSI-based calibration-free and real-time indoor localization. While switched-beam array-based hardware (compliant with IEEE 802.15.4 router functionality) has already been presented by the author, the focus of this paper is the creation of an algorithmic layer for use with the pre-existing hardware capable of enabling full localization and data contextualization over a standard 802.15.4 wireless sensor network using only RSSI information without the need for a lengthy offline calibration phase. System validation reports the localization results in a typical indoor site, where the system has shown high accuracy, leading to a sub-metrical overall mean error and an almost 100% site coverage within 1 m localization error.
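
    A basic RSSI ranging-and-positioning step can be sketched with a log-distance path-loss model. This is a generic weighted-centroid scheme under assumed model constants (P0 and the path-loss exponent n are hypothetical), not the paper's switched-beam algorithm:

```python
# Known anchor-node positions (metres) and their measured RSSI (dBm).
# Path-loss model assumed: RSSI(d) = P0 - 10*n*log10(d), P0 at 1 m.
anchors = {(0.0, 0.0): -58.0, (10.0, 0.0): -66.0,
           (0.0, 10.0): -66.0, (10.0, 10.0): -69.0}
P0, n = -40.0, 2.2

def rssi_to_distance(rssi):
    """Invert the log-distance path-loss model to a range estimate."""
    return 10 ** ((P0 - rssi) / (10 * n))

def locate(anchor_rssi):
    """Weighted centroid: anchors with shorter estimated range
    pull the position estimate toward them more strongly."""
    w_sum = x_sum = y_sum = 0.0
    for (ax, ay), rssi in anchor_rssi.items():
        w = 1.0 / rssi_to_distance(rssi) ** 2
        w_sum += w
        x_sum += w * ax
        y_sum += w * ay
    return x_sum / w_sum, y_sum / w_sum

x, y = locate(anchors)
print(x, y)   # pulled toward the strongest anchor at the origin
```

    Real deployments refine this with per-environment path-loss estimation; the calibration-free approach in the record replaces the offline model-fitting phase entirely.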

  10. Robust multi-objective calibration strategies – possibilities for improving flood forecasting

    Directory of Open Access Journals (Sweden)

    G. H. Schmitz

    2012-10-01

    Full Text Available Process-oriented rainfall-runoff models are designed to approximate the complex hydrologic processes within a specific catchment and, in particular, to simulate the discharge at the catchment outlet. Most of these models exhibit a high degree of complexity and require the determination of various parameters by calibration. Recently, automatic calibration methods have become popular for identifying parameter vectors with high corresponding model performance. The model performance is often assessed by a purpose-oriented objective function. Practical experience suggests that in many situations a single objective function cannot adequately describe the model's ability to represent every aspect of the catchment's behaviour. This holds regardless of whether the objective is aggregated from several criteria that measure different (possibly opposing) aspects of the system behaviour. One strategy to circumvent this problem is to define multiple objective functions and to apply a multi-objective optimisation algorithm to identify the set of Pareto optimal or non-dominated solutions. Nonetheless, there is a major disadvantage of automatic calibration procedures that treat model calibration merely as the solution of an optimisation problem: due to the complex-shaped response surface, the estimated solution of the optimisation problem can correspond to different near-optimum parameter vectors that lead to very different performance on the validation data. Bárdossy and Singh (2008) studied this problem for single-objective calibration using the example of hydrological models and proposed a geometrical sampling approach called Robust Parameter Estimation (ROPE). This approach applies the concept of data depth in order to overcome the shortcomings of automatic calibration procedures and find a set of robust parameter vectors. Recent studies have confirmed the effectiveness of this method. 
However, all ROPE approaches published so far just identify
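Leaving ROPE's data-depth machinery aside, the Pareto non-dominance test that underlies the multi-objective formulation mentioned above can be sketched as follows (the cost values are illustrative, with all objectives minimized):

```python
import numpy as np

def pareto_front(costs):
    """Return a boolean mask of non-dominated rows (all objectives minimized)."""
    costs = np.asarray(costs, dtype=float)
    n = len(costs)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        # j dominates i if j is <= in every objective and < in at least one
        dominated = np.all(costs <= costs[i], axis=1) & np.any(costs < costs[i], axis=1)
        if dominated.any():
            mask[i] = False
    return mask

# four candidate parameter vectors scored on two error metrics
costs = [[1.0, 4.0], [2.0, 2.0], [4.0, 1.0], [3.0, 3.0]]
mask = pareto_front(costs)
print(mask)
```

The last candidate is dominated by (2.0, 2.0), which is better in both objectives; the other three form the Pareto front.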

  11. Model Calibration of Exciter and PSS Using Extended Kalman Filter

    Energy Technology Data Exchange (ETDEWEB)

    Kalsi, Karanjit; Du, Pengwei; Huang, Zhenyu

    2012-07-26

    Power system modeling and controls continue to become more complex with the advent of smart grid technologies and large-scale deployment of renewable energy resources. As demonstrated in recent studies, inaccurate system models can lead to large-scale blackouts, motivating the need for model calibration. Current methods of model calibration rely on manual tuning based on engineering experience, are time-consuming, and can yield inaccurate parameter estimates. In this paper, the Extended Kalman Filter (EKF) is used as a tool to calibrate exciter and Power System Stabilizer (PSS) models of a particular type of machine in the Western Electricity Coordinating Council (WECC). The EKF-based parameter estimation is a recursive prediction-correction process which uses the mismatch between simulation and measurement to adjust the model parameters at every time step. Numerical simulations using actual field test data demonstrate the effectiveness of the proposed approach in calibrating the parameters.
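The recursive prediction-correction idea can be sketched for a single scalar parameter treated as a random-walk state. The toy measurement model below is an assumption for illustration only, not the WECC exciter/PSS model:

```python
import numpy as np

def ekf_parameter_step(theta, P, u, y, h, H_jac, Q=1e-4, R=0.05):
    """One EKF predict-correct step for a constant parameter (random-walk model)."""
    # predict: parameter assumed constant, only its covariance inflates
    P = P + Q
    # correct: use mismatch between measurement y and model prediction h(theta, u)
    H = H_jac(theta, u)
    S = H * P * H + R
    K = P * H / S
    theta = theta + K * (y - h(theta, u))
    P = (1.0 - K * H) * P
    return theta, P

# toy nonlinear model y = theta * u^2 with true theta = 2.5 (a stand-in gain)
rng = np.random.default_rng(0)
h = lambda th, u: th * u**2
H_jac = lambda th, u: u**2          # Jacobian of h with respect to theta
theta, P = 1.0, 1.0                  # deliberately wrong initial guess
for _ in range(200):
    u = rng.uniform(0.5, 1.5)
    y = 2.5 * u**2 + rng.normal(scale=0.05)
    theta, P = ekf_parameter_step(theta, P, u, y, h, H_jac)
print(round(theta, 2))
```

At each step the simulation-measurement mismatch `y - h(theta, u)` drives the parameter update, which is the same mechanism the abstract describes.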

  12. Producing The New Regressive Left

    DEFF Research Database (Denmark)

    Crone, Christine

    members, this thesis investigates a growing political trend and ideological discourse in the Arab world that I have called The New Regressive Left. On the premise that a media outlet can function as a forum for ideology production, the thesis argues that an analysis of this material can help to trace...... the contexture of The New Regressive Left. If the first part of the thesis lays out the theoretical approach and draws the contextual framework, through an exploration of the surrounding Arab media- and ideoscapes, the second part is an analytical investigation of the discourse that permeates the programmes aired...... becomes clear from the analytical chapters is the emergence of the new cross-ideological alliance of The New Regressive Left. This emerging coalition between Shia Muslims, religious minorities, parts of the Arab Left, secular cultural producers, and the remnants of the political, strategic resistance...

  13. Nonlinear Forecasting With Many Predictors Using Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter; Groenen, Patrick J.F.; Heij, Christiaan

    This paper puts forward kernel ridge regression as an approach for forecasting with many predictors that are related nonlinearly to the target variable. In kernel ridge regression, the observed predictor variables are mapped nonlinearly into a high-dimensional space, where estimation of the predi...
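A minimal, generic sketch of kernel ridge regression with an RBF kernel (not the paper's forecasting setup): predictors are mapped implicitly into a high-dimensional space via the kernel, and the dual coefficients solve a regularized linear system.

```python
import numpy as np

def krr_fit_predict(X, y, X_new, gamma=1.0, alpha=0.1):
    """Kernel ridge regression with an RBF kernel: coef = (K + alpha*I)^-1 y."""
    def rbf(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)
    K = rbf(X, X)
    coef = np.linalg.solve(K + alpha * np.eye(len(X)), y)
    return rbf(X_new, X) @ coef

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=80)
X_test = np.array([[0.0], [1.5]])
preds = krr_fit_predict(X, y, X_test)
print(preds)
```

The ridge term `alpha` controls the bias-variance trade-off; `gamma` sets the kernel bandwidth and hence how nonlinear the fitted relationship can be.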

  14. Mechanics of log calibration

    International Nuclear Information System (INIS)

    Waller, W.C.; Cram, M.E.; Hall, J.E.

    1975-01-01

    For any measurement to have meaning, it must be related to generally accepted standard units by a valid and specified system of comparison. To calibrate well-logging tools, sensing systems are designed which produce consistent and repeatable indications over the range for which the tool was intended. The basics of calibration theory, procedures, and calibration record presentations are reviewed. Calibrations for induction, electrical, radioactivity, and sonic logging tools will be discussed. The authors' intent is to provide an understanding of the sources of errors, of the way errors are minimized in the calibration process, and of the significance of changes in recorded calibration data

  15. Output-Only Modal Parameter Recursive Estimation of Time-Varying Structures via a Kernel Ridge Regression FS-TARMA Approach

    Directory of Open Access Journals (Sweden)

    Zhi-Sai Ma

    2017-01-01

    Full Text Available Modal parameter estimation plays an important role in vibration-based damage detection and merits further attention and investigation, as changes in modal parameters are commonly used as damage indicators. This paper focuses on the problem of output-only modal parameter recursive estimation of time-varying structures, based upon parameterized representations of the time-dependent autoregressive moving average (TARMA) model. A kernel ridge regression functional series TARMA (FS-TARMA) recursive identification scheme is proposed and subsequently employed for the modal parameter estimation of a numerical three-degree-of-freedom time-varying structural system and a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudolinear regression FS-TARMA approach via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics in a recursive manner.

  16. Planktonic Foraminifera Proxies Calibration Off the NW Iberian Margin: Nutrients Approach

    Science.gov (United States)

    Salgueiro, E.; Castro, C. G.; Zuniga, D.; Martin, P. A.; Groeneveld, J.; de la Granda, F.; Villaceiros-Robineau, N.; Alonso-Perez, F.; Alberto, A.; Rodrigues, T.; Rufino, M. M.; Abrantes, F. F. G.; Voelker, A. H. L.

    2014-12-01

    Planktonic foraminifera (PF) shells preserved in marine sediments are a useful tool to reconstruct productivity conditions at different geological timescales. However, the accuracy of these paleoreconstructions depends on the data set and calibration quality. Several calibration studies have defined and improved the use of proxies for productivity and nutrient cycling parameters. Our contribution is centred on a multi-proxy calibration in a regional coastal upwelling system. To minimize the existing uncertainties affecting the use of trace elements and C stable isotopes as productivity proxies in high-productivity upwelling areas, we investigate the content and distribution of Ba/Ca and δ13C in the water column, their transfer into planktonic foraminifera shells, and how the living planktonic foraminifera Ba/Ca and δ13C signal is related to that of the same planktonic foraminiferal species preserved in the sediment record. This study is based on a large data set from two stations (RAIA - 75 m water depth, and CALIBERIA - 350 m water depth) located off the NW Iberian margin (41.5-42.5ºN; 9-10ºW), and includes: i) two years of monthly water column data (temperature, salinity, nutrients, chlorophyll a, Ba/Ca, and δ13C-DIC); ii) seasonal Ba/Ca and δ13C in several living PF species at both stations; and iii) Ba/Ca and δ13C in several PF species from a large set of core-top sediment samples in the study region. Additionally, total organic carbon and total alkenones were also measured in the sediment. Our results show the link between productivity proxies in the surface sediment foraminifera assemblage and the processes regulating present-day phytoplankton dynamics in an upwelling area. Understanding this relationship has special relevance since it gives fundamental information on past oceanic biogeochemistry and/or climate, and improves the prediction of future changes under possible climate variability due to anthropogenic forcing.

  17. A calibration approach to glandular tissue composition estimation in digital mammography

    International Nuclear Information System (INIS)

    Kaufhold, J.; Thomas, J.A.; Eberhard, J.W.; Galbo, C.E.; Trotter, D.E. Gonzalez

    2002-01-01

    The healthy breast is almost entirely composed of a mixture of fatty, epithelial, and stromal tissues which can be grouped into two distinctly attenuating tissue types: fatty and glandular. Further, the amount of glandular tissue is linked to breast cancer risk, so an objective quantitative analysis of glandular tissue can aid in risk estimation. Highnam and Brady have measured glandular tissue composition objectively. However, they argue that their work should only be used for 'relative' tissue measurements unless a careful calibration has been performed. In this work, we perform such a 'careful calibration' on a digital mammography system and use it to estimate breast tissue composition of patient breasts. We imaged 0%, 50%, and 100% glandular-equivalent phantoms of varying thicknesses for a number of clinically relevant x-ray techniques on a digital mammography system. From these images, we extracted mean signal and noise levels and computed calibration curves that can be used for quantitative tissue composition estimation. In this way, we calculate the percent glandular composition of a patient breast on a pixelwise basis. This tissue composition estimation method was applied to 23 digital mammograms. We estimated the quantitative impact of different error sources on the estimates of tissue composition. These error sources include compressed breast height estimation error, residual scattered radiation, quantum noise, and beam hardening. Errors in the compressed breast height estimate contribute the most error in tissue composition--on the order of ±7% for a 4 cm compressed breast height. The spatially varying scattered radiation will contribute quantitatively less error overall, but may be significant in regions near the skinline. It is calculated that for a 4 cm compressed breast height, a residual scatter signal error is mitigated by approximately sixfold in the composition estimate. The error in composition due to the quantum noise, which is the limiting
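For a fixed compressed thickness and x-ray technique, the calibration-curve idea reduces to interpolating a pixel's signal between the 0% and 100% glandular-equivalent phantom signals. The signal values below are hypothetical, and real systems interpolate per pixel over thickness- and technique-dependent curves:

```python
import numpy as np

def percent_glandular(signal, signal_0pct, signal_100pct):
    """Linearly interpolate tissue composition between the 0% and 100%
    glandular-equivalent phantom signals measured at the same breast
    thickness and x-ray technique (illustrative sketch only)."""
    frac = (signal - signal_0pct) / (signal_100pct - signal_0pct)
    return float(np.clip(frac, 0.0, 1.0) * 100.0)

# hypothetical mean signals from a 4 cm phantom series; glandular tissue
# attenuates more than fat, so the detected signal drops with glandularity
pg = percent_glandular(signal=1200.0, signal_0pct=1500.0, signal_100pct=900.0)
print(pg)
```

A signal halfway between the two phantom signals maps to 50% glandular composition; values outside the phantom range are clipped.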

  18. European Prospective Investigation into Cancer and Nutrition (EPIC) calibration study: rationale, design and population characteristics

    DEFF Research Database (Denmark)

    Slimani, N.; Kaaks, R.; Ferrari, P.

    2002-01-01

    The European Prospective Investigation into Cancer and Nutrition (EPIC), which covers a large cohort of half a million men and women from 23 European centres in 10 Western European countries, was designed to study the relationship between diet and the risk of chronic diseases, particularly cancer......, a calibration approach was developed. This approach involved an additional dietary assessment common across study populations to re-express individual dietary intakes according to the same reference scale. A single 24-hour diet recall was therefore collected, as the EPIC reference calibration method, from...... in a large multi-centre European study. These studies showed that, despite certain inherent methodological and logistic constraints, a study design such as this one works relatively well in practice. The average response in the calibration study was 78.3% and ranged from 46.5% to 92.5%. The calibration...

  19. Convergent Time-Varying Regression Models for Data Streams: Tracking Concept Drift by the Recursive Parzen-Based Generalized Regression Neural Networks.

    Science.gov (United States)

    Duda, Piotr; Jaworski, Maciej; Rutkowski, Leszek

    2018-03-01

    One of the greatest challenges in data mining is related to the processing and analysis of massive data streams. Contrary to traditional static data mining problems, data streams require that each element be processed only once, that the amount of allocated memory remain constant, and that the models incorporate changes in the investigated streams. The vast majority of available methods have been developed for data stream classification, and only a few attempt to solve regression problems, typically using heuristic approaches. In this paper, we develop mathematically justified regression models working in a time-varying environment. More specifically, we study incremental versions of generalized regression neural networks, called IGRNNs, and we prove their tracking properties: weak (in probability) and strong (with probability one) convergence under various concept drift scenarios. First, we present the IGRNNs, based on the Parzen kernels, for modeling stationary systems under nonstationary noise. Next, we extend our approach to modeling time-varying systems under nonstationary noise. We present several types of concept drift to be handled by our approach in such a way that weak and strong convergence hold under certain conditions. In a series of simulations, we compare our method with commonly used heuristic approaches, based on forgetting mechanisms or sliding windows, for dealing with concept drift. Finally, we apply our approach in a real-life scenario, solving the problem of currency exchange rate prediction.
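Setting the incremental machinery aside, the core of a generalized regression neural network is a Parzen-kernel-weighted average of the training targets (the Nadaraya-Watson estimator); a batch sketch:

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=0.5):
    """Generalized regression NN prediction: a Gaussian (Parzen) kernel
    weighted average of training targets around the query point x."""
    d2 = ((X_train - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * sigma**2))
    return float(w @ y_train / w.sum())

rng = np.random.default_rng(2)
X = rng.uniform(0, 4, size=(200, 1))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=200)
pred = grnn_predict(X, y, np.array([2.0]))
print(round(pred, 2))
```

The incremental (IGRNN) versions studied in the paper update the numerator and denominator sums recursively as new stream elements arrive, instead of storing the whole training set.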

  20. Scanner calibration revisited

    Directory of Open Access Journals (Sweden)

    Pozhitkov Alexander E

    2010-07-01

    Full Text Available Abstract Background Calibration of a microarray scanner is critical for accurate interpretation of microarray results. Shi et al. (BMC Bioinformatics, 2005, 6, Suppl. 2, Art. No. S11) reported usage of a Full Moon BioSystems slide for calibration. Inspired by the Shi et al. work, we calibrated microarray scanners in our previous research. We were puzzled, however, that most of the signal intensities from a biological sample fell below the sensitivity threshold level determined by the calibration slide. This conundrum led us to re-investigate the quality of calibration provided by the Full Moon BioSystems slide as well as the accuracy of the analysis performed by Shi et al. Methods Signal intensities were recorded on three different microarray scanners at various photomultiplier gain levels using the same calibration slide from Full Moon BioSystems. Data analysis was conducted on raw signal intensities without normalization or transformation of any kind. A weighted least-squares method was used to fit the data. Results We found that the initial analysis performed by Shi et al. did not take into account autofluorescence of the Full Moon BioSystems slide, which led to a grossly distorted microarray scanner response. Our analysis revealed that a power-law function, explicitly accounting for the slide autofluorescence, perfectly described the relationship between signal intensities and fluorophore quantities. Conclusions Microarray scanners respond in a much less distorted fashion than was reported by Shi et al. Full Moon BioSystems calibration slides are inadequate for performing calibration. We recommend against using these slides.
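A power law with an additive autofluorescence term, signal = I_af + b·q^c, becomes linear in log space once the autofluorescence offset is removed. The sketch below uses ordinary (rather than the authors' weighted) least squares on synthetic data, and assumes the autofluorescence level has been estimated separately:

```python
import numpy as np

# synthetic scanner response: signal = autofluorescence + b * quantity^c
autofluorescence, b_true, c_true = 200.0, 50.0, 0.9
q = np.logspace(0, 3, 20)                     # fluorophore quantities
signal = autofluorescence + b_true * q**c_true

# after subtracting the (separately estimated) slide autofluorescence,
# the power law is linear in log space: log(I - I_af) = log b + c log q
A = np.vstack([np.ones_like(q), np.log(q)]).T
log_b, c = np.linalg.lstsq(A, np.log(signal - autofluorescence), rcond=None)[0]
print(round(np.exp(log_b), 1), round(c, 2))
```

Fitting without the offset term would instead force a single power law through data that flattens at low quantities, which is the kind of distortion the authors describe.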

  1. Calibration methodology for energy management system of a plug-in hybrid electric vehicle

    International Nuclear Information System (INIS)

    Duan, Benming; Wang, Qingnian; Zeng, Xiaohua; Gong, Yinsheng; Song, Dafeng; Wang, Junnian

    2017-01-01

    Highlights: • Calibration theory of EMS is proposed. • A comprehensive evaluating indicator is constructed by the radar chart method. • The optimal Latin hypercube design algorithm is introduced to obtain training data. • An approximation model is established by using a RBF neural network. • The offline calibration methodology improves actual calibration efficiency. - Abstract: This paper presents a new analytical calibration method for an energy management strategy designed for a plug-in hybrid electric vehicle. The method improves practical calibration efficiency while reaching a compromise among conflicting calibration requirements (e.g., emissions and economy). A comprehensive evaluating indicator covering emissions and economic performance is constructed by using a radar chart method. A radial basis function (RBF) neural network model is proposed to establish a precise mapping between the control parameters and the comprehensive evaluation indicator. The optimal Latin hypercube design is introduced to obtain the experimental data used to train the RBF neural network model, and a multi-island genetic algorithm is used to solve the optimization model. Finally, an offline calibration example is conducted. Results validate the effectiveness of the proposed calibration approach in improving vehicle performance and calibration efficiency.
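One common way to build a radar-chart composite indicator is the area of the polygon spanned by normalized sub-indicators on equally spaced axes; whether this matches the authors' exact construction is an assumption, and note that the area depends on the ordering of the axes:

```python
import numpy as np

def radar_chart_score(metrics):
    """Composite indicator as the area of the radar-chart polygon spanned by
    normalized (0..1, larger-is-better) sub-indicators on equally spaced axes."""
    m = np.asarray(metrics, dtype=float)
    n = len(m)
    ang = 2.0 * np.pi / n
    # sum of triangle areas between consecutive axes (wrapping around)
    return 0.5 * np.sin(ang) * sum(m[i] * m[(i + 1) % n] for i in range(n))

# e.g. hypothetical normalized [economy, NOx, CO, HC] scores for two calibrations
s1 = radar_chart_score([0.9, 0.8, 0.7, 0.9])
s2 = radar_chart_score([0.6, 0.9, 0.5, 0.7])
print(round(s1, 3), round(s2, 3))
```

Collapsing the metrics to a single scalar like this is what lets a surrogate model (the RBF network) and a genetic algorithm optimize all requirements at once.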

  2. Comparison of Two Methodologies for Calibrating Satellite Instruments in the Visible and Near-Infrared

    Science.gov (United States)

    Barnes, Robert A.; Brown, Steven W.; Lykke, Keith R.; Guenther, Bruce; Butler, James J.; Schwarting, Thomas; Turpie, Kevin; Moyer, David; DeLuccia, Frank; Moeller, Christopher

    2015-01-01

    Traditionally, satellite instruments that measure Earth-reflected solar radiation in the visible and near-infrared wavelength regions have been calibrated for radiance responsivity in a two-step method. In the first step, the relative spectral response (RSR) of the instrument is determined using a nearly monochromatic light source such as a lamp-illuminated monochromator. These sources do not typically fill the field-of-view of the instrument nor act as calibrated sources of light. Consequently, they only provide a relative (not absolute) spectral response for the instrument. In the second step, the instrument views a calibrated source of broadband light, such as a lamp-illuminated integrating sphere. The RSR and the sphere absolute spectral radiance are combined to determine the absolute spectral radiance responsivity (ASR) of the instrument. More recently, a full-aperture absolute calibration approach using widely tunable monochromatic lasers has been developed. Using these sources, the ASR of an instrument can be determined in a single step on a wavelength-by-wavelength basis. From these monochromatic ASRs, the responses of the instrument bands to broadband radiance sources can be calculated directly, eliminating the need for calibrated broadband light sources such as lamp-illuminated integrating spheres. In this work, the traditional broadband source-based calibration of the Suomi National Polar-orbiting Partnership (SNPP) Visible Infrared Imaging Radiometer Suite (VIIRS) sensor is compared with the laser-based calibration of the sensor. Finally, the impact of the new full-aperture laser-based calibration approach on the on-orbit performance of the sensor is considered.
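Given the monochromatic ASRs, the band response to a broadband source is a response-weighted integral of the source spectrum. The Gaussian band shape and linear source spectrum below are hypothetical numbers for illustration:

```python
import numpy as np

wl = np.linspace(400, 500, 101)                   # wavelength grid, nm
asr = np.exp(-0.5 * ((wl - 450) / 15.0) ** 2)     # measured spectral responsivity
radiance = 1.0 + 0.002 * (wl - 400)               # broadband source spectrum

# band-weighted radiance the instrument reports for this source; on a
# uniform grid the ratio of sums stands in for the ratio of integrals
band_radiance = (asr * radiance).sum() / asr.sum()
print(round(band_radiance, 3))
```

Because the response curve here is symmetric about 450 nm and the source is linear in wavelength, the band-weighted radiance equals the source radiance at the band center.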

  3. Calibration and correction procedures for cosmic-ray neutron soil moisture probes located across Australia

    Science.gov (United States)

    Hawdon, Aaron; McJannet, David; Wallace, Jim

    2014-06-01

    The cosmic-ray probe (CRP) provides continuous estimates of soil moisture over an area of ~30 ha by counting fast neutrons produced from cosmic rays which are predominantly moderated by water molecules in the soil. This paper describes the setup, measurement correction procedures, and field calibration of CRPs at nine locations across Australia with contrasting soil type, climate, and land cover. These probes form the inaugural Australian CRP network, which is known as CosmOz. CRP measurements require neutron count rates to be corrected for effects of atmospheric pressure, water vapor pressure changes, and variations in incoming neutron intensity. We assess the magnitude and importance of these corrections and present standardized approaches for network-wide analysis. In particular, we present a new approach to correct for incoming neutron intensity variations and test its performance against existing procedures used in other studies. Our field calibration results indicate that a generalized calibration function for relating neutron counts to soil moisture is suitable for all soil types, with the possible exception of very sandy soils with low water content. Using multiple calibration data sets, we demonstrate that the generalized calibration function only applies after accounting for persistent sources of hydrogen in the soil profile. Finally, we demonstrate that by following standardized correction procedures and scaling neutron counting rates of all CRPs to a single reference location, differences in calibrations between sites are related to site biomass. This observation provides a means for estimating biomass at a given location or for deriving coefficients for the calibration function in the absence of field calibration data.
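The generalized calibration function referred to here is commonly written in the form proposed by Desilets et al. (2010); the shape coefficients below are the standard published values, quoted here for illustration rather than taken from this paper:

```python
def neutron_counts_to_soil_moisture(N, N0, a0=0.0808, a1=0.372, a2=0.115):
    """Generalized calibration function (Desilets et al., 2010):
    gravimetric water content theta = a0 / (N/N0 - a1) - a2, where N is the
    corrected neutron count rate and N0 the count rate over dry soil."""
    return a0 / (N / N0 - a1) - a2

# wetter soil moderates more fast neutrons, so counts drop as moisture rises
theta_dry = neutron_counts_to_soil_moisture(N=2800.0, N0=3000.0)
theta_wet = neutron_counts_to_soil_moisture(N=1800.0, N0=3000.0)
print(round(theta_dry, 3), round(theta_wet, 3))
```

The pressure, humidity, and incoming-intensity corrections described in the paper are applied to N before this function is evaluated, and persistent hydrogen pools (e.g., lattice water, biomass) shift the effective N0.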

  4. A Consistent EPIC Visible Channel Calibration Using VIIRS and MODIS as a Reference.

    Science.gov (United States)

    Haney, C.; Doelling, D. R.; Minnis, P.; Bhatt, R.; Scarino, B. R.; Gopalan, A.

    2017-12-01

    The Earth Polychromatic Imaging Camera (EPIC) aboard the Deep Space Climate Observatory (DSCOVR) satellite constantly images the sunlit disk of Earth from the Lagrange-1 (L1) point in 10 spectral channels spanning the UV, VIS, and NIR spectral regions. Recently, the DSCOVR EPIC team publicly released the version 2 dataset, which implements improved navigation, stray-light correction, and flat-fielding of the CCD array. The EPIC 2-year data record must be well-calibrated for consistent cloud, aerosol, trace gas, land use, and other retrievals. Because EPIC lacks onboard calibrators, the observations made by the EPIC channels must be calibrated vicariously using coincident measurements from radiometrically stable instruments that have onboard calibration systems. MODIS and VIIRS are the best-suited instruments for this task, as they contain similar spectral bands that are well-calibrated onboard using solar diffusers and lunar tracking. We previously calibrated the EPIC version 1 dataset by using EPIC and VIIRS angularly matched radiance pairs over both all-sky ocean and deep convective clouds (DCC). We noted that the EPIC images required navigation adjustments, and that the EPIC stray-light correction provided an offset term closer to zero, based on the linear regression of the EPIC and VIIRS ray-matched radiance pairs. We will evaluate the EPIC version 2 navigation and stray-light improvements using the same techniques. In addition, we will monitor the EPIC channel calibration over the two years for any temporal degradation or anomalous behavior. These two calibration methods will be further validated using desert and DCC invariant Earth targets. The radiometric characterization of the selected invariant targets is performed using multiple years of MODIS and VIIRS measurements. Results of these studies will be shown at the conference.
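The ray-matching step reduces to a linear regression of matched radiance pairs, where the slope is the calibration gain and a near-zero offset is one check that the stray-light correction is behaving. The numbers below are synthetic:

```python
import numpy as np

# hypothetical ray-matched radiance pairs (calibrated reference vs sensor
# under calibration), with a known gain, small offset, and match noise
rng = np.random.default_rng(3)
viirs = rng.uniform(20, 400, size=500)
epic = 0.85 * viirs + 2.0 + rng.normal(scale=3.0, size=500)

# fit epic = gain * viirs + offset by ordinary least squares
A = np.vstack([viirs, np.ones_like(viirs)]).T
gain, offset = np.linalg.lstsq(A, epic, rcond=None)[0]
print(round(gain, 2), round(offset, 1))
```

Applying the inverse gain then transfers the reference instrument's radiometric scale to the uncalibrated channel.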

  5. Marginal longitudinal semiparametric regression via penalized splines

    KAUST Repository

    Al Kadiri, M.; Carroll, R.J.; Wand, M.P.

    2010-08-01

    We study the marginal longitudinal nonparametric regression problem and some of its semiparametric extensions. We point out that, while several elaborate approaches to efficient estimation have been proposed, a relatively simple and straightforward one, based on penalized splines, has not. After describing our approach, we then explain how Gibbs sampling and the BUGS software can be used to achieve quick and effective implementation. Illustrations are provided for nonparametric regression and additive models.
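The "relatively simple" estimator the abstract advocates can be sketched outside the Gibbs/BUGS machinery as a truncated-line basis with a ridge penalty on the knot coefficients, which is the classic penalized-spline construction (knot count and penalty weight below are arbitrary choices):

```python
import numpy as np

def pspline_fit(x, y, knots, lam=1.0):
    """Penalized spline: truncated-line basis with a ridge penalty applied
    only to the knot coefficients (intercept and slope left unpenalized)."""
    B = np.column_stack([np.ones_like(x), x] +
                        [np.maximum(x - k, 0.0) for k in knots])
    D = np.diag([0.0, 0.0] + [1.0] * len(knots))
    coef = np.linalg.solve(B.T @ B + lam * D, B.T @ y)
    return coef, B

rng = np.random.default_rng(6)
x = np.sort(rng.uniform(0, 1, 150))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=150)
coef, B = pspline_fit(x, y, knots=np.linspace(0.05, 0.95, 15), lam=1e-4)
fit = B @ coef
rmse = float(np.sqrt(np.mean((fit - np.sin(2 * np.pi * x)) ** 2)))
print(round(rmse, 3))
```

In the mixed-model formulation used in the Bayesian implementation, the penalized knot coefficients become random effects, which is what makes Gibbs sampling in BUGS straightforward.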

  7. Calibration of Correlation Radiometers Using Pseudo-Random Noise Signals

    Directory of Open Access Journals (Sweden)

    Sebastián Pantoja

    2009-08-01

    Full Text Available The calibration of correlation radiometers, and particularly of aperture synthesis interferometric radiometers, is a critical issue in ensuring their performance. Current calibration techniques are based on measuring the cross-correlation of the receivers’ outputs when noise from a common noise source is injected, which requires a very stable distribution network. For large interferometric radiometers this centralized noise injection approach is very complex in terms of mass, volume, and phase/amplitude equalization. Distributed noise injection techniques have been proposed as a feasible alternative, but they are unable to correct for the so-called “baseline errors” associated with the particular pair of receivers forming each baseline. In this work, the use of centralized Pseudo-Random Noise (PRN) signals to calibrate correlation radiometers is proposed. PRNs are sequences of symbols with a long repetition period that have a flat spectrum over a bandwidth determined by the symbol rate. Since their spectrum resembles that of thermal noise, they can be used to calibrate correlation radiometers. At the same time, since these sequences are deterministic, new calibration schemes can be envisaged, such as correlating each receiver’s output with a baseband local replica of the PRN sequence, as well as new distribution schemes for calibration signals. This work analyzes the general requirements and performance of using PRN sequences for the calibration of microwave correlation radiometers, and particularizes the study to a potential implementation in a large aperture synthesis radiometer using an optical distribution network.
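Because the PRN replica is deterministic and known, correlating a receiver's output against it directly recovers the receiver gain, which is the new calibration scheme the abstract envisages. A baseband, real-valued sketch (a real system would also estimate phase with a complex replica):

```python
import numpy as np

rng = np.random.default_rng(4)
prn = rng.choice([-1.0, 1.0], size=4096)   # flat-spectrum PRN local replica
g_true = 0.7                                # unknown receiver gain to calibrate
received = g_true * prn + rng.normal(scale=1.0, size=prn.size)

# since the replica is known, a single correlation recovers the gain:
# E[received * prn] = g * E[prn^2] = g
g_hat = np.dot(received, prn) / prn.size
print(round(g_hat, 2))
```

The estimation noise shrinks with the sequence length (roughly as 1/sqrt(N)), which is why long-repetition-period PRN sequences are attractive for calibration.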

  8. Validation and calibration of structural models that combine information from multiple sources.

    Science.gov (United States)

    Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A

    2017-02-01

    Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case-studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.

  9. HYBRID DATA APPROACH FOR SELECTING EFFECTIVE TEST CASES DURING THE REGRESSION TESTING

    OpenAIRE

    Mohan, M.; Shrimali, Tarun

    2017-01-01

    In the software industry, software testing has become more important across the entire software development life cycle. Software testing is one of the fundamental components of software quality assurance. The Software Testing Life Cycle (STLC) is the process involved in testing the complete software, which includes Regression Testing, Unit Testing, Smoke Testing, Integration Testing, Interface Testing, System Testing, etc. Within the STLC, in regression testing, test case selection is one of the most importan...

  10. CALIBRATION OF DISTRIBUTED SHALLOW LANDSLIDE MODELS IN FORESTED LANDSCAPES

    Directory of Open Access Journals (Sweden)

    Gian Battista Bischetti

    2010-09-01

    Full Text Available In mountainous, forested, soil-mantled landscapes all around the world, rainfall-induced shallow landslides are one of the most common hydro-geomorphic hazards, which frequently impact the environment and human lives and properties. In order to produce shallow landslide susceptibility maps, several models have been proposed in the last decade, combining simplified steady-state topography-based hydrological models with the infinite slope scheme, in a GIS framework. In the present paper, two of the still open issues are investigated: the assessment of the validity of slope stability models and the inclusion of root cohesion values. In such a perspective the “Stability INdex MAPping” has been applied to a small forested pre-Alpine catchment, adopting different calibrating approaches and target indexes. The Single and the Multiple Calibration Regions modalities and three quantitative target indexes – the common Success Rate (SR, the Modified Success Rate (MSR, and a Weighted Modified Success Rate (WMSR herein introduced – are considered. The results obtained show that the target index can significantly affect the values of a model’s parameters and lead to different proportions of stable/unstable areas, both for the Single and the Multiple Calibration Regions approach. The use of SR as the target index leads to an over-prediction of the unstable areas, whereas the use of MSR and WMSR seems to allow a better discrimination between stable and unstable areas. The Multiple Calibration Regions approach should be preferred, using information on the spatial distribution of vegetation to define the Regions. The use of field-based estimation of root cohesion and sliding depth allows the implementation of slope stability models (SINMAP in our case also without the data needed for calibration. To maximize the inclusion of such parameters into SINMAP, however, the assumption of a uniform distribution of

  11. New liquid chromatographic-chemometric approach for the determination of sunset yellow and tartrazine in commercial preparation.

    Science.gov (United States)

    Dinç, Erdal; Aktaş, A Hakan; Ustündağ, Ozgür

    2005-01-01

    A new liquid chromatographic (LC)-chemometric approach was developed for the determination of sunset yellow (SUN) and tartrazine (TAR) in commercial preparations. This approach uses LC and chemometric calibration methods, i.e., classical least-squares (CLS), principal component regression (PCR), and partial least-squares (PLS), simultaneously. The combined LC-chemometric approaches, denoted as LC-CLS, LC-PCR, and LC-PLS, are based on photodiode array (PDA) detection at multiple wavelengths. Optimum chromatographic separation of SUN and TAR with allura red as the internal standard (IS) was obtained by using a Waters Symmetry C18 column, 5 microm, 4.6 x 250 mm, and 0.2 M acetate buffer (pH 5)-acetonitrile-methanol-bidistilled water (55 + 20 + 15 + 10, v/v) as the mobile phase at a flow rate of 1.9 mL/min. The LC data sets consisting of the ratios of analyte peak areas to the IS peak area were obtained by using PDA detection at 5 wavelengths (465, 470, 475, 480, and 485 nm). LC-chemometric calibrations for SUN and TAR were separately constructed by using the relationship between the peak-area ratio and the training sets for each colorant. LC-chemometric approaches were tested for different synthetic mixtures containing SUN and TAR in the presence of the IS. These LC-chemometric calibrations were applied to a commercial preparation of the 2 colorants. The experimental results of the LC-chemometric approaches were compared with those obtained by a developed classical LC method using single-wavelength detection.
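
    The simplest of the three chemometric calibrations, classical least squares, can be sketched in a few lines; the response matrix, concentrations, and noise level below are synthetic stand-ins for the paper's peak-area-ratio data:

    ```python
    import numpy as np

    # Illustrative classical least-squares (CLS) calibration at 5 wavelengths,
    # in the spirit of the LC-CLS approach (synthetic data, not the paper's values).
    rng = np.random.default_rng(0)

    # "Pure" response profiles of the two colorants across 5 detection wavelengths.
    K = np.array([[1.0, 0.9, 0.8, 0.7, 0.6],    # SUN (hypothetical profile)
                  [0.3, 0.5, 0.7, 0.9, 1.1]])   # TAR (hypothetical profile)

    # Training set: known concentration pairs and their mixed signals
    # (linear additivity of the analyte responses).
    C_train = rng.uniform(1.0, 10.0, size=(20, 2))
    A_train = C_train @ K + rng.normal(0, 0.01, size=(20, 5))

    # Estimate the calibration matrix from the training data.
    K_hat, *_ = np.linalg.lstsq(C_train, A_train, rcond=None)

    # Predict concentrations of an "unknown" mixture from its 5-wavelength signal.
    c_true = np.array([4.0, 6.0])
    a_obs = c_true @ K
    c_pred, *_ = np.linalg.lstsq(K_hat.T, a_obs, rcond=None)
    ```

    PCR and PLS follow the same train-then-invert pattern but project the multiwavelength signals onto latent components before regression.
    
    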

  12. When homogeneity meets heterogeneity: the geographically weighted regression with spatial lag approach to prenatal care utilization

    Science.gov (United States)

    Shoff, Carla; Chen, Vivian Yi-Ju; Yang, Tse-Chuan

    2014-01-01

    Using geographically weighted regression (GWR), a recent study by Shoff and colleagues (2012) investigated the place-specific risk factors for prenatal care utilization in the US and found that most of the relationships between late or no prenatal care and its determinants are spatially heterogeneous. However, the GWR approach may be subject to the confounding effect of spatial homogeneity. The goal of this study is to address this concern by including both spatial homogeneity and heterogeneity into the analysis. Specifically, we employ an analytic framework where a spatially lagged (SL) effect of the dependent variable is incorporated into the GWR model, which is called GWR-SL. Using this innovative framework, we found evidence to argue that spatial homogeneity is neglected in the study by Shoff et al. (2012) and the results change after considering the spatially lagged effect of prenatal care utilization. The GWR-SL approach allows us to gain a place-specific understanding of prenatal care utilization in US counties. In addition, we compared the GWR-SL results with the results of conventional approaches (i.e., OLS and spatial lag models) and found that GWR-SL is the preferred modeling approach. The new findings help us to better estimate how the predictors are associated with prenatal care utilization across space, and determine whether and how the level of prenatal care utilization in neighboring counties matters. PMID:24893033
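
    The core GWR step is a locally weighted least-squares fit at a focal location, sketched below on synthetic county-like data; the spatial-lag extension (GWR-SL) would additionally include the spatially lagged outcome W·y as a regressor:

    ```python
    import numpy as np

    # Geographically weighted regression in miniature: re-estimate an OLS fit
    # at a focal location using Gaussian distance weights (synthetic data).
    rng = np.random.default_rng(9)

    n = 200
    coords = rng.uniform(0, 1, size=(n, 2))       # unit-square "county" locations
    x = rng.normal(size=n)
    # The slope varies over space: the heterogeneity GWR is meant to recover.
    slope = 1.0 + coords[:, 0]
    y = slope * x + rng.normal(0, 0.1, n)

    focal = np.array([0.9, 0.5])
    d = np.linalg.norm(coords - focal, axis=1)
    w = np.exp(-(d / 0.2) ** 2)                   # Gaussian kernel, bandwidth 0.2

    X = np.column_stack([np.ones(n), x])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # local coefficients at `focal`
    ```

    Repeating this solve at every location yields the surface of place-specific coefficients that GWR reports.
    
    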

  13. Recent developments in the specification and achievement of realistic neutron calibration fields

    International Nuclear Information System (INIS)

    Chartier, J.L.; Kluges, H.; Wiegel, B.; Schraube, H.

    1997-01-01

    In order to calibrate more accurately the neutron dosemeters involved in radiation protection, the concept of 'Realistic Neutron Calibration Fields' is considered an appropriate alternative solution, necessitating new irradiation facilities which generate well-characterised neutron fields with energy and angular distributions replicating practical workplace conditions more closely. Several experienced laboratories have collaborated on a European project and proposed various approaches, which are reviewed in this paper. A short description of the facilities currently in operation is given, as well as a few characteristics of the available radiation fields. This description of the state of the art is followed by a discussion of the problems to be solved in using such facilities for calibration purposes according to well-specified calibration procedures. (author)

  14. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2011-01-01

    In this paper, two non-parametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a more viable alternative to existing kernel-based approaches. The second estimator

  15. A calibration and data assimilation method using the Bayesian MARS emulator

    International Nuclear Information System (INIS)

    Stripling, H.F.; McClarren, R.G.; Kuranz, C.C.; Grosskopf, M.J.; Rutter, E.; Torralva, B.R.

    2013-01-01

    Highlights: ► We outline a transparent, flexible method for the calibration of uncertain inputs to computer models. ► We account for model, data, emulator, and measurement uncertainties. ► The method produces improved predictive results, which are validated using leave-one-out experiments. ► Our implementation leverages the Bayesian MARS emulator, but any emulator may be substituted. -- Abstract: We present a method for calibrating the uncertain inputs to a computer model using available experimental data. The goal of the procedure is to estimate the posterior distribution of the uncertain inputs such that when samples from the posterior are used as inputs to future model runs, the model is more likely to replicate (or predict) the experimental response. The calibration is performed by sampling the space of the uncertain inputs, using the computer model (or, more likely, an emulator for the computer model) to assign weights to the samples, and applying the weights to produce the posterior distributions and generate predictions of new experiments with confidence bounds. The method is similar to Metropolis–Hastings calibration methods with independently sampled updates, except that we generate samples beforehand and replace the candidate acceptance routine with a weighting scheme. We apply our method to the calibration of a Hyades 2D model of laser energy deposition in beryllium. We employ a Bayesian Multivariate Adaptive Regression Splines (BMARS) emulator as a surrogate for Hyades 2D. We treat a range of uncertainties in our application, including uncertainties in the experimental inputs, experimental measurement error, and systematic experimental timing errors. The resulting posterior distributions agree with our existing intuition, and we validate the results by performing a series of leave-one-out predictions. We find that the calibrated predictions are considerably more accurate and less uncertain than blind sampling of the forward model alone.
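
    The sample-and-weight idea that replaces the Metropolis–Hastings acceptance step can be sketched with a toy one-parameter model; the model, measurement, and prior below are invented for illustration (the paper uses a BMARS emulator in place of the model call):

    ```python
    import numpy as np

    # Draw prior samples of an uncertain input, weight each sample by the
    # likelihood of the experimental measurement, and summarize the posterior.
    rng = np.random.default_rng(1)

    def model(theta):
        # Stand-in "computer model" (hypothetical); an emulator would go here.
        return 2.0 * theta + 1.0

    y_obs, sigma = 7.0, 0.5                            # measurement and its error
    theta_prior = rng.uniform(0.0, 5.0, size=10_000)   # prior samples, drawn beforehand

    # Gaussian likelihood weights, normalized to sum to one.
    w = np.exp(-0.5 * ((model(theta_prior) - y_obs) / sigma) ** 2)
    w /= w.sum()

    theta_post = np.sum(w * theta_prior)               # posterior mean of the input
    ```

    Predictions of new experiments are then made by rerunning the model (or emulator) on the weighted samples, which is where the confidence bounds come from.
    
    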

  16. Radiometric calibration of digital cameras using neural networks

    Science.gov (United States)

    Grunwald, Michael; Laube, Pascal; Schall, Martin; Umlauf, Georg; Franz, Matthias O.

    2017-08-01

    Digital cameras are used in a large variety of scientific and industrial applications. For most applications, the acquired data should represent the real light intensity per pixel as accurately as possible. However, digital cameras are subject to physical, electronic and optical effects that lead to errors and noise in the raw image. Temperature-dependent dark current, read noise, optical vignetting or different sensitivities of individual pixels are examples of such effects. The purpose of radiometric calibration is to improve the quality of the resulting images by reducing the influence of the various types of errors on the measured data and thus improving the quality of the overall application. In this context, we present a specialized neural network architecture for radiometric calibration of digital cameras. Neural networks are used to learn a temperature- and exposure-dependent mapping from observed gray-scale values to true light intensities for each pixel. In contrast to classical flat-fielding, neural networks have the potential to model nonlinear mappings which allows for accurately capturing the temperature dependence of the dark current and for modeling cameras with nonlinear sensitivities. Both scenarios are highly relevant in industrial applications. The experimental comparison of our network approach to classical flat-fielding shows a consistently higher reconstruction quality, also for linear cameras. In addition, the calibration is faster than previous machine learning approaches based on Gaussian processes.
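
    The classical flat-fielding baseline the network is compared against can be sketched per pixel; the frames below are synthetic, and in practice the dark and flat frames would be estimated from dedicated calibration exposures:

    ```python
    import numpy as np

    # Classical flat-field correction: corrected = (raw - dark) / flat, per pixel.
    rng = np.random.default_rng(2)

    true_scene = rng.uniform(100.0, 200.0, size=(4, 4))
    dark = rng.uniform(5.0, 10.0, size=(4, 4))    # per-pixel dark current
    gain = rng.uniform(0.8, 1.2, size=(4, 4))     # per-pixel sensitivity

    raw = gain * true_scene + dark                # linear camera model

    flat = gain                                   # gain map from a flat-field frame
    corrected = (raw - dark) / flat
    ```

    This works exactly only for a linear camera with temperature-independent dark current; the paper's neural network targets precisely the cases where those assumptions break down.
    
    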

  17. Enhanced Single Seed Trait Predictions in Soybean (Glycine max) and Robust Calibration Model Transfer with Near-Infrared Reflectance Spectroscopy.

    Science.gov (United States)

    Hacisalihoglu, Gokhan; Gustin, Jeffery L; Louisma, Jean; Armstrong, Paul; Peter, Gary F; Walker, Alejandro R; Settles, A Mark

    2016-02-10

    Single seed near-infrared reflectance (NIR) spectroscopy predicts soybean (Glycine max) seed quality traits of moisture, oil, and protein. We tested the accuracy of transferring calibrations between different single seed NIR analyzers of the same design by collecting NIR spectra and analytical trait data for globally diverse soybean germplasm. X-ray microcomputed tomography (μCT) was used to collect seed density and shape traits to enhance the number of soybean traits that can be predicted from single seed NIR. Partial least-squares (PLS) regression gave accurate predictive models for oil, weight, volume, protein, and maximal cross-sectional area of the seed. PLS models for width, length, and density were not predictive. Although principal component analysis (PCA) of the NIR spectra showed that black seed coat color had significant signal, excluding black seeds from the calibrations did not impact model accuracies. Calibrations for oil and protein developed in this study as well as earlier calibrations for a separate NIR analyzer of the same design were used to test the ability to transfer PLS regressions between platforms. PLS models built from data collected on one NIR analyzer had minimal differences in accuracy when applied to spectra collected from a sister device. Model transfer was more robust when spectra were trimmed from 910 to 1679 nm to 955-1635 nm due to divergence of edge wavelengths between the two devices. The ability to transfer calibrations between similar single seed NIR spectrometers facilitates broader adoption of this high-throughput, nondestructive, seed phenotyping technology.
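
    The wavelength-trimming step used for model transfer amounts to masking each spectrum to the band both devices cover reliably; a sketch assuming a hypothetical 5 nm sampling grid:

    ```python
    import numpy as np

    # Trim a spectrum from the full 910-1679 nm range to the shared 955-1635 nm
    # band before applying a PLS model built on a sister instrument.
    wavelengths = np.arange(910, 1680, 5)             # device grid, nm (assumed)
    spectrum = np.random.default_rng(3).normal(size=wavelengths.size)

    mask = (wavelengths >= 955) & (wavelengths <= 1635)
    trimmed_wl = wavelengths[mask]
    trimmed_spec = spectrum[mask]
    ```

    The PLS calibration itself must be (re)built on the trimmed grid so that training and transfer spectra share the same predictor columns.
    
    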

  18. Estimating Loess Plateau Average Annual Precipitation with Multiple Linear Regression Kriging and Geographically Weighted Regression Kriging

    Directory of Open Access Journals (Sweden)

    Qiutong Jin

    2016-06-01

    Full Text Available Estimating the spatial distribution of precipitation is an important and challenging task in hydrology, climatology, ecology, and environmental science. In order to generate a highly accurate distribution map of average annual precipitation for the Loess Plateau in China, multiple linear regression Kriging (MLRK) and geographically weighted regression Kriging (GWRK) methods were employed using precipitation data from the period 1980–2010 from 435 meteorological stations. The predictors in regression Kriging were selected by stepwise regression analysis from many auxiliary environmental factors, such as elevation (DEM), normalized difference vegetation index (NDVI), solar radiation, slope, and aspect. All predictor distribution maps had a 500 m spatial resolution. Validation precipitation data from 130 hydrometeorological stations were used to assess the prediction accuracies of the MLRK and GWRK approaches. Results showed that both prediction maps with a 500 m spatial resolution interpolated by MLRK and GWRK had a high accuracy and captured detailed spatial distribution data; however, MLRK produced a lower prediction error and a higher variance explanation than GWRK, although the differences were small, in contrast to conclusions from similar studies.
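
    Regression Kriging in miniature: fit a linear trend on the covariates, then interpolate the trend residuals spatially. In this sketch, inverse-distance weighting stands in for the Kriging step (which would require a fitted variogram), and all station data are synthetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    n = 50
    xy = rng.uniform(0, 100, size=(n, 2))             # station coordinates
    elev = rng.uniform(200, 2000, size=n)             # covariate (e.g., DEM elevation)
    precip = 600 - 0.1 * elev + rng.normal(0, 5, n)   # synthetic annual precipitation

    # 1) Trend: ordinary least squares on the covariate.
    X = np.column_stack([np.ones(n), elev])
    beta, *_ = np.linalg.lstsq(X, precip, rcond=None)
    resid = precip - X @ beta

    # 2) Residual interpolation at a new site (IDW stand-in for Kriging).
    target, target_elev = np.array([50.0, 50.0]), 1000.0
    d = np.linalg.norm(xy - target, axis=1)
    w = 1.0 / (d**2 + 1e-9)
    resid_hat = np.sum(w * resid) / w.sum()

    pred = beta[0] + beta[1] * target_elev + resid_hat
    ```

    GWRK differs from MLRK only in step 1, replacing the global OLS trend with locally weighted regression coefficients.
    
    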

  19. Scientific Impact of MODIS C5 Calibration Degradation and C6+ Improvements

    Science.gov (United States)

    Lyapustin, A.; Wang, Y.; Xiong, X.; Meister, G.; Platnick, S.; Levy, R.; Franz, B.; Korkin, S.; Hilker, T.; Tucker, J.

    2014-01-01

    The Collection 6 (C6) MODIS (Moderate Resolution Imaging Spectroradiometer) land and atmosphere data sets are scheduled for release in 2014. C6 contains significant revisions of the calibration approach to account for sensor aging. This analysis documents the presence of systematic temporal trends in the visible and near-infrared (500 m) bands of the Collection 5 (C5) MODIS Terra and, to a lesser extent, in MODIS Aqua geophysical data sets. Sensor degradation is largest in the blue band (B3) of the MODIS sensor on Terra and decreases with wavelength. Calibration degradation causes negative global trends in multiple MODIS C5 products including the dark target algorithm's aerosol optical depth over land and Ångstrom exponent over the ocean, global liquid water and ice cloud optical thickness, as well as surface reflectance and vegetation indices, including the normalized difference vegetation index (NDVI) and enhanced vegetation index (EVI). As the C5 production will be maintained for another year in parallel with C6, one objective of this paper is to raise awareness of the calibration-related trends for the broad MODIS user community. The new C6 calibration approach removes major calibration trends in the Level 1B (L1B) data. This paper also introduces an enhanced C6+ calibration of the MODIS data set which includes an additional polarization correction (PC) to compensate for the increased polarization sensitivity of MODIS Terra since about 2007, as well as detrending and Terra-Aqua cross-calibration over quasi-stable desert calibration sites. The PC algorithm, developed by the MODIS ocean biology processing group (OBPG), removes residual scan angle, mirror side and seasonal biases from aerosol and surface reflectance (SR) records along with spectral distortions of SR.
Using the multiangle implementation of atmospheric correction (MAIAC) algorithm over deserts, we have also developed a detrending and cross-calibration method which removes residual decadal trends on

  20. Flexible competing risks regression modeling and goodness-of-fit

    DEFF Research Database (Denmark)

    Scheike, Thomas; Zhang, Mei-Jie

    2008-01-01

    In this paper we consider different approaches for estimation and assessment of covariate effects for the cumulative incidence curve in the competing risks model. The classic approach is to model all cause-specific hazards and then estimate the cumulative incidence curve based on these cause...... models that is easy to fit and contains the Fine-Gray model as a special case. One advantage of this approach is that our regression modeling allows for non-proportional hazards. This leads to a new simple goodness-of-fit procedure for the proportional subdistribution hazards assumption that is very easy...... of the flexible regression models to analyze competing risks data when non-proportionality is present in the data....

  1. Examination of influential observations in penalized spline regression

    Science.gov (United States)

    Türkan, Semra

    2013-10-01

    In parametric or nonparametric regression models, the results of regression analysis are affected by anomalous observations in the data set. Thus, detection of these observations is one of the major steps in regression analysis. Such observations are detected by well-known influence measures, of which Pena's statistic is one. In this study, Pena's approach is formulated for penalized spline regression in terms of ordinary residuals and leverages. Real and artificial data are used to illustrate the effectiveness of Pena's statistic relative to Cook's distance in detecting influential observations. The results of the study clearly reveal that the proposed measure is superior to Cook's distance in detecting these observations in large data sets.
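
    Cook's distance, the benchmark the proposed measure is compared against, is built from exactly the ingredients named above, ordinary residuals and leverages; a sketch on synthetic data with one planted influential observation:

    ```python
    import numpy as np

    # Cook's distance for an ordinary linear regression (illustrative data).
    rng = np.random.default_rng(5)

    n = 30
    x = rng.uniform(0, 10, n)
    y = 2.0 + 0.5 * x + rng.normal(0, 0.3, n)
    y[0] += 8.0                                   # plant one influential outlier

    X = np.column_stack([np.ones(n), x])
    H = X @ np.linalg.inv(X.T @ X) @ X.T          # hat matrix; diag(H) = leverages
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ beta                              # ordinary residuals
    p = X.shape[1]
    s2 = e @ e / (n - p)                          # residual variance estimate
    h = np.diag(H)
    cooks_d = (e**2 / (p * s2)) * (h / (1 - h) ** 2)
    ```

    Pena's statistic instead measures how each fitted value shifts when the other observations are perturbed, but it is computed from the same residual and leverage quantities.
    
    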

  2. Wavelength calibration with PMAS at 3.5 m Calar Alto Telescope using a tunable astro-comb

    Science.gov (United States)

    Chavez Boggio, J. M.; Fremberg, T.; Bodenmüller, D.; Sandin, C.; Zajnulina, M.; Kelz, A.; Giannone, D.; Rutowska, M.; Moralejo, B.; Roth, M. M.; Wysmolek, M.; Sayinc, H.

    2018-05-01

    On-sky tests conducted with an astro-comb using the Potsdam Multi-Aperture Spectrograph (PMAS) at the 3.5 m Calar Alto Telescope are reported. The proposed astro-comb approach is based on cascaded four-wave mixing between two lasers propagating through dispersion-optimized nonlinear fibers. This approach allows for a line spacing that can be continuously tuned over a broad range (from tens of GHz to beyond 1 THz), making it suitable for calibration of low-, medium-, and high-resolution spectrographs. The astro-comb provides 300 calibration lines, and its line spacing is tracked with a wavemeter having 0.3 pm absolute accuracy. First, we assess the accuracy of Neon calibration by measuring the astro-comb lines with (Neon-calibrated) PMAS. The results are compared with the expected line positions from the wavemeter measurement, showing an offset of ∼5-20 pm (4%-16% of one resolution element). This may reflect the accuracy limits of the Neon calibration itself. Then, the astro-comb performance as a calibrator is assessed through measurements of the Ca triplet from stellar objects HD3765 and HD219538 as well as with the sky line spectrum, showing the advantage of the proposed astro-comb for wavelength calibration at any resolution.

  3. Predictability of extreme weather events for NE U.S.: improvement of the numerical prediction using a Bayesian regression approach

    Science.gov (United States)

    Yang, J.; Astitha, M.; Anagnostou, E. N.; Hartman, B.; Kallos, G. B.

    2015-12-01

    Weather prediction accuracy has become very important for the Northeast U.S. given the devastating effects of extreme weather events in the recent years. Weather forecasting systems are used towards building strategies to prevent catastrophic losses for human lives and the environment. Concurrently, weather forecast tools and techniques have evolved with improved forecast skill as numerical prediction techniques are strengthened by increased super-computing resources. In this study, we examine the combination of two state-of-the-science atmospheric models (WRF and RAMS/ICLAMS) by utilizing a Bayesian regression approach to improve the prediction of extreme weather events for NE U.S. The basic concept behind the Bayesian regression approach is to take advantage of the strengths of two atmospheric modeling systems and, similar to the multi-model ensemble approach, limit their weaknesses which are related to systematic and random errors in the numerical prediction of physical processes. The first part of this study is focused on retrospective simulations of seventeen storms that affected the region in the period 2004-2013. Optimal variances are estimated by minimizing the root mean square error and are applied to out-of-sample weather events. The applicability and usefulness of this approach are demonstrated by conducting an error analysis based on in-situ observations from meteorological stations of the National Weather Service (NWS) for wind speed and wind direction, and NCEP Stage IV radar data, mosaicked from the regional multi-sensor for precipitation. The preliminary results indicate a significant improvement in the statistical metrics of the modeled-observed pairs for meteorological variables using various combinations of the sixteen events as predictors of the seventeenth. This presentation will illustrate the implemented methodology and the obtained results for wind speed, wind direction and precipitation, as well as set the research steps that will be
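
    The blending idea can be reduced to a single scalar weight chosen by minimizing squared error on the training storms; the data below are synthetic, not WRF or RAMS/ICLAMS output:

    ```python
    import numpy as np

    # Two-model blend: pick the weight w minimizing the in-sample squared error
    # of w*model_a + (1-w)*model_b against observations (closed form).
    rng = np.random.default_rng(6)

    truth = rng.uniform(5, 25, size=16)           # e.g., wind speeds for 16 storms
    model_a = truth + rng.normal(1.0, 2.0, 16)    # biased, noisier model A
    model_b = truth + rng.normal(-0.5, 1.0, 16)   # less noisy model B

    d = model_a - model_b
    w = np.dot(truth - model_b, d) / np.dot(d, d)
    blend = w * model_a + (1 - w) * model_b

    rmse = lambda f: np.sqrt(np.mean((f - truth) ** 2))
    ```

    By construction the in-sample blend can do no worse than either member; the study's harder question is whether weights fit on sixteen storms transfer to the held-out seventeenth.
    
    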

  4. Approximating prediction uncertainty for random forest regression models

    Science.gov (United States)

    John W. Coulston; Christine E. Blinn; Valerie A. Thomas; Randolph H. Wynne

    2016-01-01

    Machine learning approaches such as random forest have seen increased use for the spatial modeling and mapping of continuous variables. Random forest is a non-parametric ensemble approach, and unlike traditional regression approaches there is no direct quantification of prediction error. Understanding prediction uncertainty is important when using model-based continuous maps as...
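
    The spread of per-member predictions is the usual starting point for approximating ensemble prediction uncertainty; this sketch substitutes an ensemble of bootstrap-refit linear models for the forest so it stays self-contained, and is not the paper's estimator:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    n = 100
    x = rng.uniform(0, 10, n)
    y = 1.0 + 2.0 * x + rng.normal(0, 1.0, n)
    X = np.column_stack([np.ones(n), x])

    preds = []
    for _ in range(200):                          # ensemble members
        idx = rng.integers(0, n, n)               # bootstrap resample, with replacement
        beta = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
        preds.append(beta[0] + beta[1] * 5.0)     # each member predicts at x = 5

    mean_pred = float(np.mean(preds))
    spread = float(np.std(preds))                 # uncertainty proxy from member spread
    ```

    In a random forest the members are trees rather than linear fits, and the per-tree spread underestimates total prediction error because it omits the irreducible noise term.
    
    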

  5. 4SM: A Novel Self-Calibrated Algebraic Ratio Method for Satellite-Derived Bathymetry and Water Column Correction

    Directory of Open Access Journals (Sweden)

    Yann G. Morel

    2017-07-01

    Full Text Available All empirical water column correction methods have consistently been reported to require existing depth sounding data for the purpose of calibrating a simple depth retrieval model; they yield poor results over very bright or very dark bottoms. In contrast, we set out to (i) use only the relative radiance data in the image along with published data, and several new assumptions; (ii) in order to specify and operate the simplified radiative transfer equation (RTE); (iii) for the purpose of retrieving both the satellite derived bathymetry (SDB) and the water column corrected spectral reflectance over shallow seabeds. Sea truth regressions show that SDB depths retrieved by the method only need tide correction. Therefore it shall be demonstrated that, under such new assumptions, there is no need for (i) formal atmospheric correction; (ii) conversion of relative radiance into calibrated reflectance; or (iii) existing depth sounding data, to specify the simplified RTE and produce both SDB and spectral water column corrected radiance ready for bottom typing. Moreover, the use of the panchromatic band for that purpose is introduced. Altogether, we named this process the Self-Calibrated Supervised Spectral Shallow-sea Modeler (4SM). This approach requires a trained practitioner, though, to produce its results within hours of downloading the raw image. The ideal raw image should be a “near-nadir” view, exhibit homogeneous atmosphere and water column, include some coverage of optically deep waters and bare land, and lend itself to quality removal of haze, atmospheric adjacency effect, and sun/sky glint.

  6. 4SM: A Novel Self-Calibrated Algebraic Ratio Method for Satellite-Derived Bathymetry and Water Column Correction.

    Science.gov (United States)

    Morel, Yann G; Favoretto, Fabio

    2017-07-21

    All empirical water column correction methods have consistently been reported to require existing depth sounding data for the purpose of calibrating a simple depth retrieval model; they yield poor results over very bright or very dark bottoms. In contrast, we set out to (i) use only the relative radiance data in the image along with published data, and several new assumptions; (ii) in order to specify and operate the simplified radiative transfer equation (RTE); (iii) for the purpose of retrieving both the satellite derived bathymetry (SDB) and the water column corrected spectral reflectance over shallow seabeds. Sea truth regressions show that SDB depths retrieved by the method only need tide correction. Therefore it shall be demonstrated that, under such new assumptions, there is no need for (i) formal atmospheric correction; (ii) conversion of relative radiance into calibrated reflectance; or (iii) existing depth sounding data, to specify the simplified RTE and produce both SDB and spectral water column corrected radiance ready for bottom typing. Moreover, the use of the panchromatic band for that purpose is introduced. Altogether, we named this process the Self-Calibrated Supervised Spectral Shallow-sea Modeler (4SM). This approach requires a trained practitioner, though, to produce its results within hours of downloading the raw image. The ideal raw image should be a "near-nadir" view, exhibit homogeneous atmosphere and water column, include some coverage of optically deep waters and bare land, and lend itself to quality removal of haze, atmospheric adjacency effect, and sun/sky glint.

  7. Bias and Uncertainty in Regression-Calibrated Models of Groundwater Flow in Heterogeneous Media

    DEFF Research Database (Denmark)

    Cooley, R.L.; Christensen, Steen

    2006-01-01

    Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs...... by a lumped or smoothed m-dimensional approximation γθ*, where γ is an interpolation matrix and θ* is a stochastic vector of parameters. Vector θ* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that model function f(γθ*) written in terms...... small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate θ* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear...

  8. A hybrid approach of stepwise regression, logistic regression, support vector machine, and decision tree for forecasting fraudulent financial statements.

    Science.gov (United States)

    Chen, Suduan; Goo, Yeong-Jia James; Shen, Zone-De

    2014-01-01

    As fraudulent financial statements have become an increasingly serious problem, establishing a valid model for forecasting them has become an important question for academic research and financial practice. After screening the important variables using stepwise regression, the study uses logistic regression, support vector machine, and decision tree methods to construct classification models for comparison. The study adopts financial and nonfinancial variables to assist in establishing the forecasting model for fraudulent financial statements. The research sample consists of companies with fraudulent and nonfraudulent financial statements between 1998 and 2012. The findings are that financial and nonfinancial information can be used effectively to distinguish fraudulent financial statements, and that the C5.0 decision tree achieves the best classification accuracy, 85.71%.
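
    One of the compared classifiers, logistic regression, can be sketched with plain gradient ascent on the log-likelihood; the features below are synthetic, since the paper's financial variables are not reproduced here:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Synthetic two-feature classification data with a known decision boundary.
    n = 400
    X = rng.normal(size=(n, 2))
    w_true = np.array([2.0, -1.5])
    p = 1.0 / (1.0 + np.exp(-(X @ w_true)))
    y = (rng.uniform(size=n) < p).astype(float)

    # Fit logistic regression by gradient ascent on the average log-likelihood.
    w = np.zeros(2)
    for _ in range(2000):
        pred = 1.0 / (1.0 + np.exp(-(X @ w)))
        w += 0.1 * X.T @ (y - pred) / n

    accuracy = np.mean(((X @ w) > 0) == (y == 1))
    ```

    The hybrid approach of the paper uses stepwise regression only to choose which columns of X enter such a classifier.
    
    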

  9. A multi-source satellite data approach for modelling Lake Turkana water level: calibration and validation using satellite altimetry data

    Directory of Open Access Journals (Sweden)

    N. M. Velpuri

    2012-01-01

    Full Text Available Lake Turkana is one of the largest desert lakes in the world and is characterized by high degrees of inter- and intra-annual fluctuations. The hydrology and water balance of this lake have not been well understood due to its remote location and unavailability of reliable ground truth datasets. Managing surface water resources is a great challenge in areas where in-situ data are either limited or unavailable. In this study, multi-source satellite-driven data such as satellite-based rainfall estimates, modelled runoff, evapotranspiration, and a digital elevation dataset were used to model Lake Turkana water levels from 1998 to 2009. Due to the unavailability of reliable lake level data, an approach is presented to calibrate and validate the water balance model of Lake Turkana using a composite lake level product of TOPEX/Poseidon, Jason-1, and ENVISAT satellite altimetry data. Model validation results showed that the satellite-driven water balance model can satisfactorily capture the patterns and seasonal variations of the Lake Turkana water level fluctuations with a Pearson's correlation coefficient of 0.90 and a Nash-Sutcliffe Coefficient of Efficiency (NSCE) of 0.80 during the validation period (2004–2009). Model error estimates were within 10% of the natural variability of the lake. Our analysis indicated that fluctuations in Lake Turkana water levels are mainly driven by lake inflows and over-the-lake evaporation. Over-the-lake rainfall contributes only up to 30% of lake evaporative demand. During the modelling time period, Lake Turkana showed seasonal variations of 1–2 m. The lake level fluctuated in the range up to 4 m between the years 1998–2009. This study demonstrated the usefulness of satellite altimetry data to calibrate and validate the satellite-driven hydrological model for Lake Turkana without using any in-situ data. Furthermore, for Lake Turkana, we identified and outlined opportunities and challenges of using a calibrated
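
    The Nash-Sutcliffe Coefficient of Efficiency used for validation compares the squared model errors to the variance of the observations; a sketch with made-up lake-level values:

    ```python
    import numpy as np

    # NSCE = 1 - SSE / SS_tot; 1.0 means a perfect fit, 0.0 means no better
    # than predicting the observed mean.
    def nsce(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    # Hypothetical observed vs. simulated lake-level anomalies (m).
    obs = np.array([1.0, 1.5, 2.0, 2.5, 2.0, 1.5])
    sim = np.array([1.1, 1.4, 2.1, 2.4, 1.9, 1.6])
    ```

    The paper's reported NSCE of 0.80 over the validation period indicates the model explains most, but not all, of the observed level variance.
    
    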

  10. A multi-source satellite data approach for modelling Lake Turkana water level: Calibration and validation using satellite altimetry data

    Science.gov (United States)

    Velpuri, N.M.; Senay, G.B.; Asante, K.O.

    2012-01-01

    Lake Turkana is one of the largest desert lakes in the world and is characterized by high degrees of inter- and intra-annual fluctuations. The hydrology and water balance of this lake have not been well understood due to its remote location and unavailability of reliable ground truth datasets. Managing surface water resources is a great challenge in areas where in-situ data are either limited or unavailable. In this study, multi-source satellite-driven data such as satellite-based rainfall estimates, modelled runoff, evapotranspiration, and a digital elevation dataset were used to model Lake Turkana water levels from 1998 to 2009. Due to the unavailability of reliable lake level data, an approach is presented to calibrate and validate the water balance model of Lake Turkana using a composite lake level product of TOPEX/Poseidon, Jason-1, and ENVISAT satellite altimetry data. Model validation results showed that the satellite-driven water balance model can satisfactorily capture the patterns and seasonal variations of the Lake Turkana water level fluctuations with a Pearson's correlation coefficient of 0.90 and a Nash-Sutcliffe Coefficient of Efficiency (NSCE) of 0.80 during the validation period (2004-2009). Model error estimates were within 10% of the natural variability of the lake. Our analysis indicated that fluctuations in Lake Turkana water levels are mainly driven by lake inflows and over-the-lake evaporation. Over-the-lake rainfall contributes only up to 30% of lake evaporative demand. During the modelling time period, Lake Turkana showed seasonal variations of 1-2 m. The lake level fluctuated in the range up to 4 m between the years 1998-2009. This study demonstrated the usefulness of satellite altimetry data to calibrate and validate the satellite-driven hydrological model for Lake Turkana without using any in-situ data. 
Furthermore, for Lake Turkana, we identified and outlined opportunities and challenges of using a calibrated satellite-driven water balance

  11. CALIBRATION PROCEDURES ON OBLIQUE CAMERA SETUPS

    Directory of Open Access Journals (Sweden)

    G. Kemper

    2016-06-01

    step with the help of the nadir camera and the GPS/IMU data, an initial orientation correction and radial correction were calculated. With this approach, the whole project was calculated and calibrated in one step. During the iteration process the radial and tangential parameters were switched on individually for the camera heads, and after that the camera constants and principal point positions were checked and finally calibrated. Besides that, the boresight calibration can be performed either on the basis of the nadir camera and their offsets, or independently for each camera without correlation to the others. This must be performed in a complete mission anyway to get stability between the single camera heads. Determining the lever arms of the nodal points to the IMU centre needs more caution than for a single camera, especially due to the strong tilt angle. With all these previous steps prepared, you get a highly accurate sensor that enables fully automated data extraction with a rapid update of your existing data. Frequent monitoring of urban dynamics is then possible in a fully 3D environment.

  12. Modeling oil production based on symbolic regression

    International Nuclear Information System (INIS)

    Yang, Guangfei; Li, Xianneng; Wang, Jianliang; Lian, Lian; Ma, Tieju

    2015-01-01

    Numerous models have been proposed to forecast the future trends of oil production and almost all of them are based on some predefined assumptions with various uncertainties. In this study, we propose a novel data-driven approach that uses symbolic regression to model oil production. We validate our approach on both synthetic and real data, and the results prove that symbolic regression could effectively identify the true models beneath the oil production data and also make reliable predictions. Symbolic regression indicates that world oil production will peak in 2021, which broadly agrees with other techniques used by researchers. Our results also show that the rate of decline after the peak is almost half the rate of increase before the peak, and it takes nearly 12 years to drop 4% from the peak. These predictions are more optimistic than those in several other reports, and the smoother decline will provide the world, especially the developing countries, with more time to orchestrate mitigation plans. -- Highlights: •A data-driven approach has been shown to be effective at modeling the oil production. •The Hubbert model could be discovered automatically from data. •The peak of world oil production is predicted to appear in 2021. •The decline rate after peak is half of the increase rate before peak. •Oil production projected to decline 4% post-peak
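    The model the symbolic regression recovered, the Hubbert curve, is the derivative of a logistic and can be written P(t) = 2*Pmax / (1 + cosh(k*(t - tm))), peaking at t = tm. The sketch below fits its parameters to synthetic production data by a coarse grid search — a deliberately simple stand-in for the paper's evolutionary search over whole model structures; all parameter values are illustrative.

```python
import math

def hubbert(t, pmax, k, tm):
    """Hubbert curve: logistic-derivative production profile, peak pmax at t=tm."""
    return 2.0 * pmax / (1.0 + math.cosh(k * (t - tm)))

years = list(range(1950, 2010))
data = [hubbert(t, pmax=30.0, k=0.05, tm=2021) for t in years]  # synthetic "history"

def sse(pmax, k, tm):
    """Sum of squared errors of a candidate curve against the data."""
    return sum((hubbert(t, pmax, k, tm) - d) ** 2 for t, d in zip(years, data))

# coarse grid search over the three parameters
best = min(
    ((pmax, k, tm)
     for pmax in (20.0, 25.0, 30.0, 35.0)
     for k in (0.03, 0.05, 0.08)
     for tm in range(2010, 2031)),
    key=lambda p: sse(*p),
)
peak_year = best[2]
```

    Since the synthetic data were generated from a point on the grid, the search recovers it exactly; on real, noisy data the fit is only approximate and a continuous optimizer (or the paper's symbolic search) would replace the grid.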

  13. Face Alignment via Regressing Local Binary Features.

    Science.gov (United States)

    Ren, Shaoqing; Cao, Xudong; Wei, Yichen; Sun, Jian

    2016-03-01

    This paper presents a highly efficient and accurate regression approach for face alignment. Our approach has two novel components: 1) a set of local binary features and 2) a locality principle for learning those features. The locality principle guides us to learn a set of highly discriminative local binary features for each facial landmark independently. The obtained local binary features are used to jointly learn a linear regression for the final output. This approach achieves the state-of-the-art results when tested on the most challenging benchmarks to date. Furthermore, because extracting and regressing local binary features are computationally very cheap, our system is much faster than previous methods. It achieves over 3000 frames per second (FPS) on a desktop or 300 FPS on a mobile phone for locating a few dozens of landmarks. We also study a key issue that is important but has received little attention in the previous research, which is the face detector used to initialize alignment. We investigate several face detectors and perform quantitative evaluation on how they affect alignment accuracy. We find that an alignment friendly detector can further greatly boost the accuracy of our alignment method, reducing the error up to 16% relatively. To facilitate practical usage of face detection/alignment methods, we also propose a convenient metric to measure how good a detector is for alignment initialization.

  14. "Calibration-on-the-spot": How to calibrate an EMCCD camera from its images

    DEFF Research Database (Denmark)

    Mortensen, Kim; Flyvbjerg, Henrik

    2016-01-01

    In order to count photons with a camera, the camera must be calibrated. Photon counting is necessary, e.g., to determine the precision of localization-based super-resolution microscopy. Here we present a protocol that calibrates an EMCCD camera from information contained in isolated, diffraction-limited spots in any image taken by the camera, thus making dedicated calibration procedures redundant by enabling calibration post festum, from images filed without calibration information.
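    A common way to calibrate a camera for photon counting is the photon-transfer relation: for counts c = gain·n + offset with n Poisson-distributed, var(c) = gain·(mean(c) − offset), so the gain is the slope of variance plotted against mean. The sketch below illustrates that relation on simulated data; it ignores the EMCCD excess-noise factor and is not the per-spot procedure of the paper — just the underlying statistics.

```python
import numpy as np

rng = np.random.default_rng(0)
true_gain, offset = 5.0, 100.0          # illustrative camera parameters

means, variances = [], []
for mu in (50, 100, 200, 400):          # illustrative illumination levels
    # simulate pixel counts: Poisson photons, amplified and offset
    counts = true_gain * rng.poisson(mu, size=5000) + offset
    means.append(counts.mean())
    variances.append(counts.var(ddof=1))

# var = gain * (mean - offset): slope of variance vs. mean recovers the gain
slope, intercept = np.polyfit(means, variances, 1)
gain_est = slope
```

    With the gain and offset known, photon numbers follow as (counts − offset) / gain.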

  15. Calibration of reference KAP-meters at SSDL and cross calibration of clinical KAP-meters

    International Nuclear Information System (INIS)

    Hetland, Per O.; Friberg, Eva G.; Oevreboe, Kirsti M.; Bjerke, Hans H.

    2009-01-01

    In the summer of 2007 the secondary standard dosimetry laboratory (SSDL) in Norway established a calibration service for reference air-kerma area product (KAP) meters. The air-kerma area product, PKA, is a dosimetric quantity that can be directly related to the patient dose and used for risk assessment associated with different x-ray examinations. The calibration of reference KAP-meters at the SSDL gives important information on parameters influencing the calibration factor for different types of KAP-meters. The use of reference KAP-meters calibrated at the SSDL is an easy and reliable way to calibrate or verify the PKA indicated by the x-ray equipment out in the clinics. Material and methods. Twelve KAP-meters were calibrated at the SSDL by use of the substitution method at five diagnostic radiation qualities (RQRs). Results. The calibration factors varied from 0.94 to 1.18. The energy response of the individual KAP-meters varied by a total of 20% between the different RQRs, and the typical chamber transmission factors ranged from 0.78 to 0.91. Discussion. It is important to use a calibrated reference KAP-meter and a harmonised calibration method in the PKA calibration in hospitals. The obtained uncertainty in the PKA readings is comparable with other calibration methods if the information in the calibration certificate is correctly used, corrections are made, and proper positioning of the KAP-chamber is performed. This will ensure a reliable estimate of the patient dose and a proper optimisation of conventional x-ray examinations and interventional procedures.
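    The arithmetic behind such a service is simple: the calibration factor at each radiation quality is the reference PKA divided by the meter's indicated PKA, and clinical readings are multiplied by it. The reference values and readings below are invented for illustration, not the paper's data.

```python
# illustrative reference values and meter readings per radiation quality
reference_pka = {"RQR3": 1.02, "RQR5": 1.05, "RQR7": 1.08}   # Gy*cm^2
meter_reading = {"RQR3": 0.95, "RQR5": 1.00, "RQR7": 1.04}   # Gy*cm^2

# calibration factor: reference value divided by indicated value
cal_factor = {q: reference_pka[q] / meter_reading[q] for q in reference_pka}

def corrected_pka(reading, quality):
    """Apply the quality-specific calibration factor to a clinical reading."""
    return reading * cal_factor[quality]
```

    The quality dependence is the point: a single factor would miss the 20% energy-response spread reported above, so the factor must be chosen for the beam quality actually in use.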

  16. Attitude-independent magnetometer calibration for marine magnetic surveys: regularization issue

    International Nuclear Information System (INIS)

    Wu, Zhitian; Hu, Xiaoping; Wu, Meiping; Cao, Juliang

    2013-01-01

    We have developed an attitude-independent calibration method for a shipboard magnetometer to estimate the absolute strength of the geomagnetic field from a marine vessel. The three-axis magnetometer to be calibrated is fixed on a rigid aluminium boom ahead of the vessel to reduce the magnetic effect of the vessel. Due to the constrained manoeuvres of the vessel, a linear observational equation system for calibration parameter estimation is severely ill-posed. Consequently, if the issue is not mitigated, traditional calibration methods may result in unreliable or unsuccessful solutions. In this paper, the ill-posed problem is solved by using the truncated total least squares (TTLS) technique. This method has the advantage of simultaneously considering errors on both sides of the observation equation. Furthermore, the TTLS method suits strongly ill-posed problems. Simulations and experiments have been performed to assess the performance of the TTLS method and to compare it with the performance of conventional regularization approaches such as the Tikhonov method and truncated singular value decomposition. The results show that the proposed algorithm can effectively mitigate the ill-posed problem and is more stable than the compared regularization methods for magnetometer calibration applications. (paper)
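    Truncated SVD, one of the regularization baselines compared in the paper, can be sketched directly: invert only the well-conditioned part of the singular spectrum. The paper's TTLS method additionally accounts for errors in the design matrix itself; the sketch below shows plain TSVD on an artificial ill-posed system, not the magnetometer equations.

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Solve A x ~= b keeping only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # invert only the well-conditioned part of the spectrum; zero the rest
    s_inv = np.where(np.arange(len(s)) < k, 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ b))

rng = np.random.default_rng(1)
# nearly rank-deficient design matrix: third column almost a copy of the first
A = rng.standard_normal((50, 3))
A[:, 2] = A[:, 0] + 1e-8 * rng.standard_normal(50)
x_true = np.array([1.0, 2.0, 0.0])
b = A @ x_true + 1e-3 * rng.standard_normal(50)

x_naive = np.linalg.lstsq(A, b, rcond=None)[0]   # may be dominated by noise
x_tsvd = tsvd_solve(A, b, k=2)                   # stable, small-norm solution
```

    Because the discarded direction is (almost) the null direction of A, dropping it barely changes the fitted values while removing the wildly amplified noise component; TSVD splits the weight of the two near-identical columns, so individual coefficients are only identified up to that ambiguity.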

  17. Soil texture and depth influence on the neutron probe calibration

    International Nuclear Information System (INIS)

    Santos, Reginaldo Ferreira; Carlesso, Reimar

    1998-01-01

    The neutron probe is an instrument used to determine soil water content, based on fast neutron attenuation. Field calibration is therefore needed, and the influence of soil texture and depth on the calibration curves relating counts to water content must be verified. The study was carried out at Santa Maria's Federal University in a group of lysimeters protected from rain by transparent plastic. Three different soil textures, three depths (10, 30 and 50 cm from the soil surface) and four replicates were used. Linear regression equations between neutron counts and soil water contents were fitted. The results showed that soil texture and depth, analyzed jointly, influenced the calibration curves: the differences between observed and estimated values varied from 0.02 to 0.06 cm3/cm3 of soil water content, and the correlation coefficients were 0.86, 0.95 and 0.89 for the clayey, silty-clay-loam and sandy-loam soils, respectively. For soil texture and depth analyzed separately, the differences between the values observed in the field and the estimated ones varied from 0.0 to 0.02 cm3/cm3 of soil water content, with correlation coefficients between 0.97 and 1.0. (author)
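    The calibration curves described here are ordinary least-squares lines relating neutron count (ratio) to volumetric water content. A minimal sketch with invented, exactly linear numbers, not the paper's measurements:

```python
def linear_fit(x, y):
    """Ordinary least-squares line y = slope*x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# illustrative calibration pairs: count ratio vs. water content (cm3/cm3)
counts = [0.4, 0.55, 0.7, 0.85, 1.0]
theta = [0.10, 0.17, 0.24, 0.31, 0.38]

slope, intercept = linear_fit(counts, theta)

def predict(count_ratio):
    """Estimate soil water content from a neutron count ratio."""
    return slope * count_ratio + intercept
```

    The paper's finding is that one such line does not serve all soils: slope and intercept shift with texture and depth, so separate fits per texture/depth combination are needed.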

  18. Approximate median regression for complex survey data with skewed response.

    Science.gov (United States)

    Fraser, Raphael André; Lipsitz, Stuart R; Sinha, Debajyoti; Fitzmaurice, Garrett M; Pan, Yi

    2016-12-01

    The ready availability of public-use data from various large national complex surveys has immense potential for the assessment of population characteristics using regression models. Complex surveys can be used to identify risk factors for important diseases such as cancer. Existing statistical methods based on estimating equations and/or utilizing resampling methods are often not valid with survey data due to complex survey design features, that is, stratification, multistage sampling, and weighting. In this article, we accommodate these design features in the analysis of highly skewed response variables arising from large complex surveys. Specifically, we propose a double-transform-both-sides (DTBS)-based estimating equations approach to estimate the median regression parameters of the highly skewed response; the DTBS approach applies the same Box-Cox type transformation twice to both the outcome and regression function. The usual sandwich variance estimate can be used in our approach, whereas a resampling approach would be needed for a pseudo-likelihood based on minimizing absolute deviations (MAD). Furthermore, the approach is relatively robust to the true underlying distribution, and has a much smaller mean square error than a MAD approach. The method is motivated by an analysis of laboratory data on urinary iodine (UI) concentration from the National Health and Nutrition Examination Survey. © 2016, The International Biometric Society.

  19. Ray-based calibration for the micro optical metrology system

    Science.gov (United States)

    Yin, Yongkai; Wang, Meng; Li, Ameng; Liu, Xiaoli; Peng, Xiang

    2014-05-01

    Fringe projection 3D microscopy (FP-3DM) plays an important role in micro-machining and micro-fabrication. FP-3DM can be realized with quite different arrangements and principles, which makes it difficult to select an appropriate one for a specific application. This paper introduces a ray-based general imaging model to describe FP-3DM, which has the potential to provide a unified expression for different system arrangements. A dedicated calibration procedure is also presented to realize quantitative 3D imaging. The validity and accuracy of the proposed calibration approach are demonstrated with experiments.

  20. A Fundamental Parameter-Based Calibration Model for an Intrinsic Germanium X-Ray Fluorescence Spectrometer

    DEFF Research Database (Denmark)

    Christensen, Leif Højslet; Pind, Niels

    1982-01-01

    A matrix-independent fundamental parameter-based calibration model for an energy-dispersive X-ray fluorescence spectrometer has been developed. This model, which is part of a fundamental parameter approach quantification method, accounts for both the excitation and detection probability. For each secondary target a number of relative calibration constants are calculated on the basis of knowledge of the irradiation geometry, the detector specifications, and tabulated fundamental physical parameters. The absolute calibration of the spectrometer is performed by measuring one pure element standard per…

  1. Calibration transfer between electronic nose systems for rapid in situ measurement of pulp and paper industry emissions

    Energy Technology Data Exchange (ETDEWEB)

    Deshmukh, Sharvari [CSIR-National Environmental Engineering and Research Institute, Nagpur (India); Department of Instrumentation and Electronics Engineering, Jadavpur University, Kolkata (India); Kamde, Kalyani [CSIR-National Environmental Engineering and Research Institute, Nagpur (India); Jana, Arun [Center for Development of Advance Computing, Kolkata (India); Korde, Sanjivani [CSIR-National Environmental Engineering and Research Institute, Nagpur (India); Bandyopadhyay, Rajib [Department of Instrumentation and Electronics Engineering, Jadavpur University, Kolkata (India); Sankar, Ravi [Center for Development of Advance Computing, Kolkata (India); Bhattacharyya, Nabarun, E-mail: nabarun.bhattacharya@cdac.in [Center for Development of Advance Computing, Kolkata (India); Pandey, R.A., E-mail: ra_pandey@neeri.res.in [CSIR-National Environmental Engineering and Research Institute, Nagpur (India)

    2014-09-02

    Highlights: • E-nose developed for obnoxious emissions measurement at pulp and paper industrial site. • ANN model developed for prediction of (CH3)2S, (CH3)2S2, CH3SH and H2S concentration. • Calibration transfer methodology developed for transfer between two e-nose instruments. • Box–Behnken design and robust regression used for calibration transfer. • Results show effective transfer of training model from one e-nose system to other. - Abstract: Electronic nose systems, when deployed in a network mesh, can effectively provide a low-budget, onsite solution for industrial obnoxious gas measurement. For accurate and identical prediction capability by all the electronic nose systems, a reliable calibration transfer model needs to be implemented in order to overcome the inherent sensor array variability. In this work, robust regression (RR) is used for calibration transfer between two electronic nose systems using a Box–Behnken (BB) design. Of the two electronic nose systems, one was trained on industrial gas samples by four artificial neural network models, for the measurement of obnoxious odours emitted from pulp and paper industries. The emissions consist mainly of hydrogen sulphide (H2S), methyl mercaptan (MM), dimethyl sulphide (DMS) and dimethyl disulphide (DMDS) in different proportions. A Box–Behnken design consisting of 27 experiment sets based on synthetic gas combinations of H2S, MM, DMS and DMDS was conducted for calibration transfer between two identical electronic nose systems. Identical sensors on both systems were mapped, and the prediction models developed using ANN were then transferred to the second system using the BB–RR methodology. The results showed successful transfer of the prediction models developed for one system to the other, with the mean absolute error between the actual and predicted concentration of analytes in mg/L after calibration

  2. Calibration transfer between electronic nose systems for rapid in situ measurement of pulp and paper industry emissions

    International Nuclear Information System (INIS)

    Deshmukh, Sharvari; Kamde, Kalyani; Jana, Arun; Korde, Sanjivani; Bandyopadhyay, Rajib; Sankar, Ravi; Bhattacharyya, Nabarun; Pandey, R.A.

    2014-01-01

    Highlights: • E-nose developed for obnoxious emissions measurement at pulp and paper industrial site. • ANN model developed for prediction of (CH3)2S, (CH3)2S2, CH3SH and H2S concentration. • Calibration transfer methodology developed for transfer between two e-nose instruments. • Box–Behnken design and robust regression used for calibration transfer. • Results show effective transfer of training model from one e-nose system to other. - Abstract: Electronic nose systems, when deployed in a network mesh, can effectively provide a low-budget, onsite solution for industrial obnoxious gas measurement. For accurate and identical prediction capability by all the electronic nose systems, a reliable calibration transfer model needs to be implemented in order to overcome the inherent sensor array variability. In this work, robust regression (RR) is used for calibration transfer between two electronic nose systems using a Box–Behnken (BB) design. Of the two electronic nose systems, one was trained on industrial gas samples by four artificial neural network models, for the measurement of obnoxious odours emitted from pulp and paper industries. The emissions consist mainly of hydrogen sulphide (H2S), methyl mercaptan (MM), dimethyl sulphide (DMS) and dimethyl disulphide (DMDS) in different proportions. A Box–Behnken design consisting of 27 experiment sets based on synthetic gas combinations of H2S, MM, DMS and DMDS was conducted for calibration transfer between two identical electronic nose systems. Identical sensors on both systems were mapped, and the prediction models developed using ANN were then transferred to the second system using the BB–RR methodology. The results showed successful transfer of the prediction models developed for one system to the other, with the mean absolute error between the actual and predicted concentration of analytes in mg/L after calibration transfer (on the second system) being 0.076, 0
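    The essence of calibration transfer is a regression that maps the second instrument's sensor response onto the scale of the trained instrument, so that models built on the first can score readings from the second. The sketch below uses a Theil-Sen median-of-slopes fit as a simple robust stand-in for the paper's robust-regression step; the paired readings (one with a gross outlier) are invented.

```python
import statistics

def theil_sen(x, y):
    """Robust line fit: median of pairwise slopes, then median intercept."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(len(x)) for j in range(i + 1, len(x))
              if x[j] != x[i]]
    slope = statistics.median(slopes)
    intercept = statistics.median(b - slope * a for a, b in zip(x, y))
    return slope, intercept

# paired readings of the same synthetic gas mixtures on both instruments
b_readings = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
a_readings = [0.9, 1.3, 1.7, 2.1, 9.0, 2.9, 3.3]   # outlier at index 4

slope, intercept = theil_sen(b_readings, a_readings)

def to_a_scale(reading_b):
    """Map an instrument-B reading onto instrument A's scale."""
    return slope * reading_b + intercept
```

    The median-based fit ignores the single corrupted pair and recovers the clean mapping a = 0.8·b + 0.5; an ordinary least-squares line would be pulled well off it.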

  3. Calibrated expressions for welding and their application to isotherm width in a thick plate

    Directory of Open Access Journals (Sweden)

    Gentry Wood

    2014-09-01

    Full Text Available The present paper introduces a possible solution to the limitations of modern trial-and-error approaches to welding procedure development. The difficulties of finding generalized solutions to Rosenthal's equation are discussed, and the Minimal Representation and Calibration approach is introduced as a promising procedure for developing these solutions. Dominant factors are identified, with effects from secondary phenomena being taken into account by correction factors. These correction factors are then calibrated and presented in a form that can be easily computed, and is therefore amenable to industrial use. The approach is then demonstrated by determining the isotherm width from Rosenthal's thick-plate solution. Comparison of the calibrated scaling equations to Rosenthal's exact solution showed a maximum error of less than 0.8% for any isotherm.

  4. Balanced calibration of resonant piezoelectric RL shunts with quasi-static background flexibility correction

    DEFF Research Database (Denmark)

    Høgsberg, Jan Becker; Krenk, Steen

    2015-01-01

    Resonant RL shunt circuits constitute a robust approach to piezoelectric damping, where the performance with respect to damping of flexible structures requires a precise calibration of the corresponding circuit components. The balanced calibration procedure of the present paper is based on equal … that the procedure leads to equal modal damping and effective response reduction, even for rather indirect placement of the transducer, provided that the correction for background flexibility is included in the calibration procedure.

  5. Multiple regression and beyond an introduction to multiple regression and structural equation modeling

    CERN Document Server

    Keith, Timothy Z

    2014-01-01

    Multiple Regression and Beyond offers a conceptually oriented introduction to multiple regression (MR) analysis and structural equation modeling (SEM), along with analyses that flow naturally from those methods. By focusing on the concepts and purposes of MR and related methods, rather than the derivation and calculation of formulae, this book introduces material to students more clearly, and in a less threatening way. In addition to illuminating content necessary for coursework, the accessibility of this approach means students are more likely to be able to conduct research using MR or SEM--and more likely to use the methods wisely. The book covers both MR and SEM, while explaining their relevance to one another. It also includes path analysis, confirmatory factor analysis, and latent growth modeling. Figures and tables throughout provide examples and illustrate key concepts and techniques. For additional resources, please visit: http://tzkeith.com/.

  6. A Hybrid Approach of Stepwise Regression, Logistic Regression, Support Vector Machine, and Decision Tree for Forecasting Fraudulent Financial Statements

    Directory of Open Access Journals (Sweden)

    Suduan Chen

    2014-01-01

    Full Text Available As fraudulent financial statements become an increasingly serious problem, establishing a valid model for forecasting fraudulent financial statements has become an important question for academic research and financial practice. After screening the important variables using stepwise regression, the study applies logistic regression, support vector machine, and decision tree methods to construct classification models for comparison. The study adopts financial and nonfinancial variables to assist in establishing the forecasting model. The research objects are companies that issued fraudulent and nonfraudulent financial statements between 1998 and 2012. The findings are that financial and nonfinancial information can be used effectively to distinguish fraudulent financial statements, and that the C5.0 decision tree achieves the best classification accuracy, 85.71%.
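    One stage of such a pipeline, a logistic-regression classifier, can be sketched in pure Python with gradient descent on the log-loss. The stepwise screening and the SVM/decision-tree comparisons are out of scope here, and the two "financial ratio" features and labels below are invented for illustration.

```python
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Fit logistic regression by per-sample gradient descent on log-loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))        # predicted fraud probability
            g = p - yi                            # gradient of log-loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

# toy "financial ratio" features: fraudulent firms (label 1) score higher
X = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.25], [0.8, 0.9], [0.9, 0.7], [0.85, 0.95]]
y = [0, 0, 0, 1, 1, 1]

w, b = train_logistic(X, y)

def predict(xi):
    """Classify a firm: 1 = predicted fraudulent, 0 = not."""
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if z > 0 else 0
```

    In the study's setting, the features fed to such a classifier would be the variables surviving the stepwise screen rather than these two invented ratios.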

  7. Automation of RELAP5 input calibration and code validation using genetic algorithm

    International Nuclear Information System (INIS)

    Phung, Viet-Anh; Kööp, Kaspar; Grishchenko, Dmitry; Vorobyev, Yury; Kudinov, Pavel

    2016-01-01

    Highlights: • Automated input calibration and code validation using genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict simultaneously experimental maximum flow rate and oscillation period. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs) taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so called “user effects”. The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used genetic algorithm (GA), a heuristic global optimization method, in order to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters in the calibration process. We demonstrate the importance of the proper selection of SRQs and respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the

  8. Automation of RELAP5 input calibration and code validation using genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Phung, Viet-Anh, E-mail: vaphung@kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Kööp, Kaspar, E-mail: kaspar@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Grishchenko, Dmitry, E-mail: dmitry@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Vorobyev, Yury, E-mail: yura3510@gmail.com [National Research Center “Kurchatov Institute”, Kurchatov square 1, Moscow 123182 (Russian Federation); Kudinov, Pavel, E-mail: pavel@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden)

    2016-04-15

    Highlights: • Automated input calibration and code validation using genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict simultaneously experimental maximum flow rate and oscillation period. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs) taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so called “user effects”. The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used genetic algorithm (GA), a heuristic global optimization method, in order to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters in the calibration process. We demonstrate the importance of the proper selection of SRQs and respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the
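    A toy version of genetic-algorithm input calibration: search for uncertain inputs that minimize the discrepancy between simulated and "experimental" system response quantities (SRQs). The "simulator" here is a two-parameter analytic stand-in, not RELAP5, and the fitness is a plain relative-error sum of the kind the paper weights per SRQ.

```python
import random

def simulate(params):
    a, b = params
    # two SRQs, e.g. a flow rate and an oscillation period (illustrative)
    return (2.0 * a + b, a * b)

target = simulate((1.5, 4.0))          # pretend these are the measurements

def fitness(params):
    """Weighted (here: equal-weight) relative discrepancy over all SRQs."""
    s = simulate(params)
    return sum(((si - ti) / ti) ** 2 for si, ti in zip(s, target))

def ga(pop_size=40, gens=60, bounds=((0, 5), (0, 10)), seed=2):
    rng = random.Random(seed)
    pop = [tuple(rng.uniform(lo, hi) for lo, hi in bounds)
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]           # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            p, q = rng.sample(parents, 2)
            # midpoint crossover plus Gaussian mutation
            child = tuple((x + y) / 2 + rng.gauss(0, 0.1)
                          for x, y in zip(p, q))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = ga()
```

    Note that this toy problem already shows the paper's identifiability point: two distinct input combinations, (1.5, 4.0) and (2.0, 3.0), reproduce both SRQs exactly, so matching the measurements does not pin down a unique calibration.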

  9. Finite Algorithms for Robust Linear Regression

    DEFF Research Database (Denmark)

    Madsen, Kaj; Nielsen, Hans Bruun

    1990-01-01

    The Huber M-estimator for robust linear regression is analyzed. Newton-type methods for the solution of the problem are defined and analyzed, and finite convergence is proved. Numerical experiments with a large number of test problems demonstrate efficiency and indicate that this kind of approach may…
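    A minimal sketch of the Huber M-estimator: quadratic loss for small residuals, linear for large ones, solved here by iteratively reweighted least squares rather than the finite Newton-type methods the paper analyzes (both target the same estimate). Data are synthetic with injected outliers.

```python
import numpy as np

def huber_regression(x, y, delta=1.0, iters=100):
    """Huber M-estimate of a line y ~ slope*x + intercept via IRLS."""
    X1 = np.column_stack([x, np.ones(len(x))])    # add intercept column
    beta = np.zeros(X1.shape[1])
    for _ in range(iters):
        r = y - X1 @ beta
        # Huber weights: 1 inside delta, delta/|r| outside (linear tail)
        w = np.where(np.abs(r) <= delta,
                     1.0, delta / np.maximum(np.abs(r), 1e-12))
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X1, sw * y, rcond=None)[0]
    return beta

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 40)
y = 2.0 * x + 1.0 + 0.1 * rng.standard_normal(40)
y[::10] += 15.0                                   # gross outliers
beta = huber_regression(x, y)                     # close to (2.0, 1.0)
```

    The bounded influence of the linear tail is what keeps the four corrupted points from dragging the fit, whereas ordinary least squares would shift the intercept by roughly 1.5.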

  10. Investigation on calibration parameter of mammography calibration facilities at MINT

    International Nuclear Information System (INIS)

    Asmaliza Hashim; Wan Hazlinda Ismail; Md Saion Salikin; Muhammad Jamal Md Isa; Azuhar Ripin; Norriza Mohd Isa

    2004-01-01

    A mammography calibration facility has been established in the Medical Physics Laboratory, Malaysian Institute for Nuclear Technology Research (MINT). The calibration facility is established at the national level mainly to provide calibration services for radiation-measuring test instruments or test tools used in the quality assurance programme in mammography being implemented in Malaysia. One of the accepted parameters that determine the quality of a radiation beam is the homogeneity coefficient, which is determined from the values of the 1st and 2nd half-value layers (HVL). In this paper, the consistency of the beam qualities of the mammography machine available at MINT is investigated and presented. For calibration purposes, five radiation qualities, namely 23, 25, 28, 30 and 35 kV, selectable from the control panel of the X-ray machine, are used. Important parameters set for this calibration facility are exposure time, tube current, focal spot to detector distance (FDD) and beam size at a specific distance. The homogeneity coefficient values of this laboratory from the past few years up to now are presented in this paper. Backscatter radiation is also considered in this investigation. (Author)
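    The homogeneity coefficient is the ratio of the first half-value layer to the second, HVL1/HVL2; for a polychromatic beam that hardens as it is filtered, HVL2 > HVL1 and the coefficient falls below 1. The sketch below computes both HVLs by bisection on an invented two-component attenuation curve, not MINT's measured beam.

```python
import math

def transmission(t, fractions=(0.7, 0.3), mus=(0.8, 0.3)):
    """Invented two-component beam: each fraction attenuates exponentially
    with its own coefficient (mm^-1) over absorber thickness t (mm)."""
    return sum(f * math.exp(-mu * t) for f, mu in zip(fractions, mus))

def thickness_for(target, lo=0.0, hi=50.0, tol=1e-10):
    """Bisection for the absorber thickness t with transmission(t) == target."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if transmission(mid) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

hvl1 = thickness_for(0.5)                  # thickness halving the beam once
hvl2 = thickness_for(0.25) - hvl1          # extra thickness to halve it again
homogeneity = hvl1 / hvl2                  # < 1 for a hardening beam
```

    Tracking this ratio over time, as the laboratory does, is a cheap consistency check: a drift in the homogeneity coefficient signals a change in the beam spectrum.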

  11. Determination of boiling point of petrochemicals by gas chromatography-mass spectrometry and multivariate regression analysis of structural activity relationship.

    Science.gov (United States)

    Fakayode, Sayo O; Mitchell, Breanna S; Pollard, David A

    2014-08-01

    Accurate understanding of analyte boiling points (BP) is of critical importance in gas chromatographic (GC) separation and crude oil refinery operation in petrochemical industries. This study reported the first combined use of GC separation and partial-least-squares (PLS1) multivariate regression analysis (MRA) of petrochemical structure-activity relationships (SAR) for accurate BP determination of two commercially available (D3710 and MA VHP) calibration gas mix samples. The results of the BP determination using PLS1 multivariate regression were further compared with the results of the traditional simulated distillation method of BP determination. The developed PLS1 regression was able to correctly predict analyte BPs in D3710 and MA VHP calibration gas mix samples, with root-mean-square percent relative errors (RMS%RE) of 6.4% and 10.8%, respectively. In contrast, the overall RMS%RE of 32.9% and 40.4%, respectively, obtained for BP determination in D3710 and MA VHP using the traditional simulated distillation method, were approximately four times larger than the corresponding RMS%RE of BP prediction using MRA, demonstrating the better predictive ability of MRA. The reported method is rapid, robust, and promising, and can potentially be used routinely for fast analysis, pattern recognition, and analyte BP determination in petrochemical industries. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. A machine vision system for the calibration of digital thermometers

    International Nuclear Information System (INIS)

    Vázquez-Fernández, Esteban; Dacal-Nieto, Angel; González-Jorge, Higinio; Alvarez-Valado, Victor; Martín, Fernando; Formella, Arno

    2009-01-01

    Automation is a key point in many industrial tasks such as calibration and metrology. In this context, machine vision has shown to be a useful tool for automation support, especially when there is no other option available. A system for the calibration of portable measurement devices has been developed. The system uses machine vision to obtain the numerical values shown by displays. A new approach based on human perception of digits, which works in parallel with other more classical classifiers, has been created. The results show the benefits of the system in terms of its usability and robustness, obtaining a success rate higher than 99% in display recognition. The system saves time and effort, and offers the possibility of scheduling calibration tasks without excessive attention by the laboratory technicians

  13. A Statistical Approach to Continuous Self-Calibrating Eye Gaze Tracking for Head-Mounted Virtual Reality Systems

    OpenAIRE

    Tripathi, Subarna; Guenter, Brian

    2016-01-01

    We present a novel, automatic eye gaze tracking scheme inspired by smooth pursuit eye motion while playing mobile games or watching virtual reality contents. Our algorithm continuously calibrates an eye tracking system for a head mounted display. This eliminates the need for an explicit calibration step and automatically compensates for small movements of the headset with respect to the head. The algorithm finds correspondences between corneal motion and screen space motion, and uses these to...

  14. Using Active Learning for Speeding up Calibration in Simulation Models.

    Science.gov (United States)

    Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2016-07-01

    Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. © The Author(s) 2015.
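
    A minimal sketch of such an active-learning loop, with a cheap stand-in discrepancy function in place of the expensive simulation (the UWBCS itself is not reproducible here, and all names and values are invented): a surrogate model proposes the next parameter combinations to evaluate.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

def simulate(theta):
    """Stand-in for an expensive simulation run: returns the discrepancy
    between simulated and observed outcomes (lower is better)."""
    return float(np.linalg.norm(theta - 0.7))

pool = rng.uniform(size=(2000, 3))                        # full calibration grid
seed = rng.choice(len(pool), 40, replace=False)           # random seed batch
scores = {int(i): simulate(pool[i]) for i in seed}

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
for _ in range(8):                                        # active-learning rounds
    X = pool[list(scores)]
    y = np.array(list(scores.values()))
    model.fit(X, y)
    pred = model.predict(pool)
    # Run the simulator only on the combinations the surrogate deems promising.
    for i in [j for j in np.argsort(pred) if int(j) not in scores][:20]:
        scores[int(i)] = simulate(pool[i])

best = min(scores, key=scores.get)
print(f"simulation runs: {len(scores)}, best discrepancy: {scores[best]:.3f}")
```

    Here only 200 of the 2000 candidate combinations are ever simulated, mirroring the reduction reported above in spirit.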

  15. Variable Selection for Regression Models of Percentile Flows

    Science.gov (United States)

    Fouad, G.

    2017-12-01

    Percentile flows describe the flow magnitude equaled or exceeded for a given percent of time, and are widely used in water resource management. However, these statistics are normally unavailable since most basins are ungauged. Percentile flows of ungauged basins are often predicted using regression models based on readily observable basin characteristics, such as mean elevation. The number of these independent variables is too large to evaluate all possible models. A subset of models is typically evaluated using automatic procedures, like stepwise regression. This ignores a large variety of methods from the field of feature (variable) selection and physical understanding of percentile flows. A study of 918 basins in the United States was conducted to compare an automatic regression procedure to the following variable selection methods: (1) principal component analysis, (2) correlation analysis, (3) random forests, (4) genetic programming, (5) Bayesian networks, and (6) physical understanding. The automatic regression procedure only performed better than principal component analysis. Poor performance of the regression procedure was due to a commonly used filter for multicollinearity, which rejected the strongest models because they had cross-correlated independent variables. Multicollinearity did not decrease model performance in validation because of a representative set of calibration basins. Variable selection methods based strictly on predictive power (numbers 2-5 from above) performed similarly, likely indicating a limit to the predictive power of the variables. Similar performance was also reached using variables selected based on physical understanding, a finding that substantiates recent calls to emphasize physical understanding in modeling for predictions in ungauged basins. The strongest variables highlighted the importance of geology and land cover, whereas widely used topographic variables were the weakest predictors. Variables suffered from a high
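
    The importance-based alternative to stepwise regression (method 3 above, random forests) can be sketched as follows, on synthetic basin data where only the first two characteristics carry signal; the variable names and effect sizes are invented:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)

# Hypothetical basin characteristics; only the first two actually drive the
# (synthetic) percentile flow, mimicking strong geology/land-cover signals
# alongside weak topographic predictors.
n = 500
X = rng.normal(size=(n, 6))
flow = 2.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(scale=0.5, size=n)

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X, flow)

# Rank candidate variables by importance instead of stepwise inclusion.
ranking = np.argsort(rf.feature_importances_)[::-1]
print("variables ranked by importance:", ranking)
```

    Unlike a multicollinearity filter, importance ranking does not discard correlated predictors outright; it simply orders them by predictive contribution.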

  16. Detection of Outliers in Regression Model for Medical Data

    Directory of Open Access Journals (Sweden)

    Stephen Raj S

    2017-07-01

    Full Text Available In regression analysis, an outlier is an observation whose residual is large in magnitude compared to the other observations in the data set. The detection of outliers and influential points is an important step in regression analysis. Outlier detection methods have been used to detect and remove anomalous values from data. In this paper, we detect the presence of outliers in simple linear regression models for a medical data set. Chatterjee and Hadi noted that ordinary residuals are not appropriate for diagnostic purposes and that a transformed version of them is preferable. First, we investigate the presence of outliers based on existing procedures using residuals and standardized residuals. Next, we use a new approach based on standardized scores that detects outliers without the use of predicted values. The performance of the new approach was verified with real-life data.
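
    A minimal sketch of the classical residual-based step on invented dose-response data with one planted outlier; this illustrates standardized (internally studentized) residuals, not the paper's new standardized-scores approach:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical medical data: dose vs. response, with one gross outlier.
x = np.linspace(1, 20, 20)
y = 3.0 + 0.8 * x + rng.normal(scale=0.5, size=20)
y[12] += 8.0                               # contaminate one observation

# Simple linear regression via least squares.
b1, b0 = np.polyfit(x, y, 1)
resid = y - (b0 + b1 * x)

# Standardized (internally studentized) residuals.
n, p = len(x), 2
s = np.sqrt(np.sum(resid**2) / (n - p))
h = 1 / n + (x - x.mean())**2 / np.sum((x - x.mean())**2)   # leverages
std_resid = resid / (s * np.sqrt(1 - h))

outliers = np.where(np.abs(std_resid) > 2.5)[0]
print("flagged observations:", outliers)
```

    The leverage correction matters near the ends of the x range, where raw residuals are systematically smaller than the underlying errors.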

  17. Soil Moisture ActivePassive (SMAP) L-Band Microwave Radiometer Post-Launch Calibration

    Science.gov (United States)

    Peng, Jinzheng; Piepmeier, Jeffrey R.; Misra, Sidharth; Dinnat, Emmanuel P.; Hudson, Derek; Le Vine, David M.; De Amici, Giovanni; Mohammed, Priscilla N.; Yueh, Simon H.; Meissner, Thomas

    2016-01-01

    The SMAP microwave radiometer is a fully polarimetric L-band radiometer flown on the SMAP satellite in a 6 AM/6 PM sun-synchronous orbit at 685 km altitude. Since April 2015, the radiometer has been under calibration and validation to assess the quality of the radiometer L1B data product. Calibration methods, including the SMAP L1B TA2TB algorithm (from antenna temperature (TA) to the Earth's surface brightness temperature (TB)) and TA forward models, are outlined, and validation approaches to calibration stability and quality are described in this paper, along with future work. Results show that the current radiometer L1B data product satisfies its requirements.

  18. Effects of Serum Creatinine Calibration on Estimated Renal Function in African Americans: the Jackson Heart Study

    Science.gov (United States)

    Wang, Wei; Young, Bessie A.; Fülöp, Tibor; de Boer, Ian H.; Boulware, L. Ebony; Katz, Ronit; Correa, Adolfo; Griswold, Michael E.

    2015-01-01

    Background The calibration to Isotope Dilution Mass Spectrometry (IDMS) traceable creatinine is essential for valid use of the new Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation to estimate the glomerular filtration rate (GFR). Methods For 5,210 participants in the Jackson Heart Study (JHS), serum creatinine was measured with a multipoint enzymatic spectrophotometric assay at the baseline visit (2000–2004) and re-measured using the Roche enzymatic method, traceable to IDMS, in a subset of 206 subjects. The 200 eligible samples (6 were excluded: 1 for failure of the re-measurement and 5 as outliers) were divided into three disjoint sets - training, validation, and test - to select a calibration model, estimate true errors, and assess performance of the final calibration equation. The calibration equation was applied to the serum creatinine measurements of all 5,210 participants to estimate GFR and the prevalence of CKD. Results The selected Deming regression model provided a slope of 0.968 (95% Confidence Interval (CI), 0.904 to 1.053) and an intercept of −0.0248 (95% CI, −0.0862 to 0.0366), with an R-squared of 0.9527. Calibrated serum creatinine showed high agreement with actual measurements when applied to the unused test set (concordance correlation coefficient 0.934; 95% CI, 0.894 to 0.960). The baseline prevalence of CKD in the JHS (2000–2004) was 6.30% using calibrated values, compared with 8.29% using non-calibrated serum creatinine with the CKD-EPI equation. Calibration of the baseline serum creatinine measurements in the JHS thus yields a lower CKD prevalence estimate. PMID:25806862
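
    Deming regression, used above to calibrate the assay, accounts for measurement error in both variables, unlike ordinary least squares. A sketch using the textbook closed-form slope, on hypothetical paired creatinine values (the error-variance ratio `delta` and all data are assumptions, not the study's):

```python
import numpy as np

def deming(x, y, delta=1.0):
    """Deming regression: both x and y carry measurement error.
    `delta` is the assumed ratio of the error variances (var_y / var_x);
    delta = 1 gives orthogonal regression."""
    xm, ym = x.mean(), y.mean()
    sxx = np.sum((x - xm) ** 2)
    syy = np.sum((y - ym) ** 2)
    sxy = np.sum((x - xm) * (y - ym))
    slope = (syy - delta * sxx
             + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    return slope, ym - slope * xm

rng = np.random.default_rng(5)

# Hypothetical paired creatinine values (mg/dL): local assay vs. the
# IDMS-traceable reference method, both measured with error.
true_cr = rng.uniform(0.5, 3.0, size=200)
local = true_cr + rng.normal(scale=0.05, size=200)
idms = 0.97 * true_cr - 0.02 + rng.normal(scale=0.05, size=200)

slope, intercept = deming(local, idms)
print(f"calibration: idms = {slope:.3f} * local + {intercept:.3f}")
```

    Ordinary least squares on the same data would attenuate the slope toward zero, because error in the x-variable violates its assumptions.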

  19. Assessing the response of area burned to changing climate in western boreal North America using a Multivariate Adaptive Regression Splines (MARS) approach

    Science.gov (United States)

    Michael S. Balshi; A. David McGuire; Paul Duffy; Mike Flannigan; John Walsh; Jerry Melillo

    2009-01-01

    We developed temporally and spatially explicit relationships between air temperature and fuel moisture codes derived from the Canadian Fire Weather Index System to estimate annual area burned at 2.5° (latitude × longitude) resolution using a Multivariate Adaptive Regression Splines (MARS) approach across Alaska and Canada. Burned area was...
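
    MARS models are built from pairs of hinge functions max(0, x − t) and max(0, t − x). A minimal sketch with fixed knots and ordinary least squares (full MARS selects knots by forward selection and prunes them by generalized cross-validation); the data and variable roles are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical predictor (e.g. a fuel moisture code) and response
# (log annual area burned); the true relationship has a breakpoint.
x = rng.uniform(0, 10, 300)
y = np.where(x < 6, 1.0, 1.0 + 2.5 * (x - 6)) + rng.normal(scale=0.3, size=300)

def hinge_basis(x, knots):
    """MARS-style basis: a pair of hinge functions per knot, plus intercept."""
    cols = [np.ones_like(x)]
    for t in knots:
        cols.append(np.maximum(0, x - t))
        cols.append(np.maximum(0, t - x))
    return np.column_stack(cols)

B = hinge_basis(x, knots=[2.0, 6.0, 8.0])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
pred = B @ coef
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"RMSE: {rmse:.3f}")
```

    Because a knot sits at the true breakpoint (6.0), the piecewise-linear fit recovers the threshold behavior that a single global line would miss.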

  20. Testing the transferability of regression equations derived from small sub-catchments to a large area in central Sweden

    Directory of Open Access Journals (Sweden)

    C. Xu

    2003-01-01

    Full Text Available There is an ever-increasing need to apply hydrological models to catchments where streamflow data are unavailable, or to large geographical regions where calibration is not feasible. Estimation of model parameters from spatial physical data is the key issue in the development and application of hydrological models at various scales. To investigate whether regression equations relating model parameters to physical characteristics, developed from small sub-catchments, can be transferred to a large region for estimating model parameters, a conceptual snow and water balance model was optimised on all the sub-catchments in the region. A multiple regression analysis related the model parameters to physical data for the catchments, and the regression equations derived from the small sub-catchments were used to calculate regional parameter values for the large basin using spatially aggregated physical data. For the model tested, the results support the suitability of transferring the regression equations to the larger region. Keywords: water balance modelling, large scale, multiple regression, regionalisation
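
    The regionalisation procedure can be reduced to a sketch on invented sub-catchment data: regress a calibrated parameter on physical characteristics, then apply the fitted equation to aggregated characteristics of the large basin. All names, counts, and coefficients below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical sub-catchment data: physical characteristics (forest
# fraction, mean elevation in km, lake fraction) and a calibrated model
# parameter for each of 25 gauged sub-catchments.
chars = rng.uniform(size=(25, 3))
param = 0.4 + 0.3 * chars[:, 0] - 0.2 * chars[:, 1] \
        + rng.normal(scale=0.02, size=25)

# Multiple regression of the calibrated parameter on physical data.
A = np.column_stack([np.ones(25), chars])
coef, *_ = np.linalg.lstsq(A, param, rcond=None)

# Transfer: estimate the regional parameter for the large basin from
# spatially aggregated characteristics (here, a simple mean as a stand-in
# for area-weighted aggregation).
regional_chars = chars.mean(axis=0)
regional_param = coef[0] + coef[1:] @ regional_chars
print(f"regional parameter estimate: {regional_param:.3f}")
```

    The transferability question in the abstract is whether such an equation, fitted on small sub-catchments, remains valid when fed aggregated data for a much larger basin.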