Multivariate calibration applied to the quantitative analysis of infrared spectra
Energy Technology Data Exchange (ETDEWEB)
Haaland, D.M.
1991-01-01
Multivariate calibration methods are very useful for improving the precision, accuracy, and reliability of quantitative spectral analyses. Spectroscopists can more effectively use these sophisticated statistical tools if they have a qualitative understanding of the techniques involved. A qualitative picture of the factor analysis multivariate calibration methods of partial least squares (PLS) and principal component regression (PCR) is presented using infrared calibrations based upon spectra of phosphosilicate glass thin films on silicon wafers. Comparisons of the relative prediction abilities of four different multivariate calibration methods are given based on Monte Carlo simulations of spectral calibration and prediction data. The success of multivariate spectral calibrations is demonstrated for several quantitative infrared studies. The infrared absorption and emission spectra of thin-film dielectrics used in the manufacture of microelectronic devices demonstrate rapid, nondestructive at-line and in-situ analyses using PLS calibrations. Finally, the application of multivariate spectral calibrations to reagentless analysis of blood is presented. We have found that the determination of glucose in whole blood taken from diabetics can be precisely monitored from the PLS calibration of either mid- or near-infrared spectra of the blood. Progress toward the non-invasive determination of glucose levels in diabetics is an ultimate goal of this research. 13 refs., 4 figs.
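The qualitative contrast between PCR and PLS described in this abstract can be made concrete with a short numerical sketch. The following NumPy illustration (not the author's code; all data are synthetic) shows both: PCR regresses the analyte on the leading principal-component scores, while PLS1 (NIPALS) builds components that maximize covariance with the analyte.

```python
import numpy as np

def pcr_fit_predict(X, y, Xnew, k):
    """Principal component regression: regress y on the first k PC scores."""
    xm, ym = X.mean(0), y.mean()
    Xc = X - xm
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    T = Xc @ Vt[:k].T                                   # PC scores
    b = np.linalg.lstsq(T, y - ym, rcond=None)[0]
    return ym + (Xnew - xm) @ Vt[:k].T @ b

def pls1_fit_predict(X, y, Xnew, k):
    """PLS1 via NIPALS: components maximize covariance with y."""
    xm, ym = X.mean(0), y.mean()
    Xc, yc = X - xm, y - ym
    W, P, q = [], [], []
    for _ in range(k):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)
        t = Xc @ w
        tt = t @ t
        p, qk = Xc.T @ t / tt, yc @ t / tt
        Xc = Xc - np.outer(t, p)                        # deflate X
        yc = yc - qk * t                                # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)                 # regression vector
    return ym + (Xnew - xm) @ B

# Synthetic "spectra": 3 latent factors, 60 wavelengths, 40 samples.
rng = np.random.default_rng(0)
T3 = rng.normal(size=(40, 3))
X = T3 @ rng.normal(size=(3, 60)) + 0.01 * rng.normal(size=(40, 60))
y = T3[:, 0]                                            # analyte = factor 1
rmse_pcr = np.sqrt(np.mean((pcr_fit_predict(X, y, X, 3) - y) ** 2))
rmse_pls = np.sqrt(np.mean((pls1_fit_predict(X, y, X, 3) - y) ** 2))
```

With a clean low-rank signal both methods recover the analyte; their differences emerge when the variance-rich directions of X are not the analyte-relevant ones.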
Local Strategy Combined with a Wavelength Selection Method for Multivariate Calibration
Directory of Open Access Journals (Sweden)
Haitao Chang
2016-06-01
One of the essential factors influencing the prediction accuracy of multivariate calibration models is the quality of the calibration data. A local regression strategy, together with a wavelength selection approach, is proposed to build multivariate calibration models based on partial least squares regression. The local algorithm is applied to create a calibration set of spectra similar to the spectrum of an unknown sample; the synthetic degree of grey relation coefficient is used to evaluate the similarity. A wavelength selection method based on simple-to-use interactive self-modeling mixture analysis minimizes the influence of noisy variables, and the most informative variables of the most similar samples are selected to build the multivariate calibration model based on partial least squares regression. To validate the performance of the proposed method, ultraviolet-visible absorbance spectra of mixed solutions of food coloring analytes in a concentration range of 20–200 µg/mL were measured. Experimental results show that the proposed method can not only enhance the prediction accuracy of the calibration model, but also greatly reduce its complexity.
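The local strategy above can be caricatured in a few lines. This sketch is not the authors' algorithm: plain Pearson correlation stands in for the grey relational coefficient, and a two-component local principal-component regression stands in for PLS; the data are synthetic.

```python
import numpy as np

def local_predict(Xcal, ycal, xnew, k=15, ncomp=2):
    """Predict one unknown from a local model built on the k calibration
    spectra most similar to it (similarity = Pearson correlation here)."""
    sim = np.array([np.corrcoef(x, xnew)[0, 1] for x in Xcal])
    idx = np.argsort(sim)[-k:]                  # k most similar samples
    Xl, yl = Xcal[idx], ycal[idx]
    xm, ym = Xl.mean(0), yl.mean()
    _, _, Vt = np.linalg.svd(Xl - xm, full_matrices=False)
    T = (Xl - xm) @ Vt[:ncomp].T                # local PC scores
    b = np.linalg.lstsq(T, yl - ym, rcond=None)[0]
    return ym + (xnew - xm) @ Vt[:ncomp].T @ b

# Toy nonlinear system: spectra depend on concentration c and c^2.
wav = np.linspace(0, 1, 50)
peak1 = np.exp(-((wav - 0.3) ** 2) / 0.002)
peak2 = np.exp(-((wav - 0.7) ** 2) / 0.002)
rng = np.random.default_rng(0)
c = rng.uniform(0, 1, 100)
Xcal = np.outer(c, peak1) + np.outer(c ** 2, peak2) \
       + 0.005 * rng.normal(size=(100, 50))
xq = 0.5 * peak1 + 0.25 * peak2                 # query sample, c = 0.5
pred = local_predict(Xcal, c, xq)
```

Because the local model is fitted only on similar spectra, the global nonlinearity (the c² term) is approximately linear within the neighborhood.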
An Exact Confidence Region in Multivariate Calibration
Mathew, Thomas; Kasala, Subramanyam
1994-01-01
In the multivariate calibration problem using a multivariate linear model, an exact confidence region is constructed. It is shown that the region is always nonempty and is invariant under nonsingular transformations.
Kromhout, D.
2009-01-01
Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements of the
Ensemble preprocessing of near-infrared (NIR) spectra for multivariate calibration
International Nuclear Information System (INIS)
Xu Lu; Zhou Yanping; Tang Lijuan; Wu Hailong; Jiang Jianhui; Shen Guoli; Yu Ruqin
2008-01-01
Preprocessing of raw near-infrared (NIR) spectral data is indispensable in multivariate calibration when the measured spectra are subject to significant noises, baselines and other undesirable factors. However, due to the lack of sufficient prior information and an incomplete knowledge of the raw data, NIR spectra preprocessing in multivariate calibration is still trial and error. How to select a proper method depends largely on both the nature of the data and the expertise and experience of the practitioners. This might limit the applications of multivariate calibration in many fields, where researchers are not very familiar with the characteristics of many preprocessing methods unique in chemometrics and have difficulties to select the most suitable methods. Another problem is many preprocessing methods, when used alone, might degrade the data in certain aspects or lose some useful information while improving certain qualities of the data. In order to tackle these problems, this paper proposes a new concept of data preprocessing, ensemble preprocessing method, where partial least squares (PLSs) models built on differently preprocessed data are combined by Monte Carlo cross validation (MCCV) stacked regression. Little or no prior information of the data and expertise are required. Moreover, fusion of complementary information obtained by different preprocessing methods often leads to a more stable and accurate calibration model. The investigation of two real data sets has demonstrated the advantages of the proposed method
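The ensemble idea above, fitting one model per preprocessing variant and combining them according to Monte Carlo cross-validated performance, can be sketched as follows. This is a simplified stand-in, not the paper's method: inverse-MSE weighting replaces MCCV stacked regression, and ridge regression replaces PLS; all spectra are simulated.

```python
import numpy as np

def snv(X):
    """Standard normal variate: center and scale each spectrum (row)."""
    return (X - X.mean(1, keepdims=True)) / X.std(1, keepdims=True)

def first_diff(X):
    """Crude first-derivative preprocessing."""
    return np.diff(X, axis=1)

def ridge_fit(X, y, lam=1e-3):
    xm, ym = X.mean(0), y.mean()
    Xc = X - xm
    b = np.linalg.solve(Xc.T @ Xc + lam * np.eye(X.shape[1]),
                        Xc.T @ (y - ym))
    return xm, ym, b

def ridge_predict(model, X):
    xm, ym, b = model
    return ym + (X - xm) @ b

def mccv_mse(prep, X, y, nsplits=20, frac=0.7, seed=1):
    """Monte Carlo cross-validated MSE of ridge on preprocessed data."""
    rng = np.random.default_rng(seed)
    n = len(y); ntr = int(frac * n); errs = []
    Z = prep(X)
    for _ in range(nsplits):
        perm = rng.permutation(n)
        tr, va = perm[:ntr], perm[ntr:]
        m = ridge_fit(Z[tr], y[tr])
        errs.append(np.mean((ridge_predict(m, Z[va]) - y[va]) ** 2))
    return np.mean(errs)

# Simulated spectra with two bands plus random sloped baselines.
rng = np.random.default_rng(0)
c = rng.uniform(size=(60, 2))
grid = np.linspace(0, 1, 80)
bands = np.exp(-((grid[None, :] - np.array([[0.3], [0.6]])) ** 2) / 0.005)
X = c @ bands + 0.3 * grid * rng.uniform(size=(60, 1))
y = c[:, 0]

# Weight each preprocessing pipeline by its inverse MCCV error.
preps = [lambda Z: Z, first_diff, snv]
mses = np.array([mccv_mse(p, X, y) for p in preps])
w = (1 / mses) / (1 / mses).sum()
models = [(p, ridge_fit(p(X), y)) for p in preps]
yhat = sum(wi * ridge_predict(m, p(X)) for wi, (p, m) in zip(w, models))
```

The fusion step needs no prior choice of "the right" preprocessing: pipelines that generalize poorly simply receive small weights.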
Multivariate calibration with least-squares support vector machines.
Thissen, U.M.J.; Ustun, B.; Melssen, W.J.; Buydens, L.M.C.
2004-01-01
This paper proposes the use of least-squares support vector machines (LS-SVMs) as a relatively new nonlinear multivariate calibration method, capable of dealing with ill-posed problems. LS-SVMs are an extension of "traditional" SVMs that have been introduced recently in the field of chemistry and
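The practical appeal of LS-SVMs is that, unlike a standard SVM's quadratic program, training reduces to solving one linear system (Suykens' formulation). A minimal NumPy sketch with an RBF kernel, on a synthetic nonlinear problem a linear calibration could not fit:

```python
import numpy as np

def rbf(A, B, sigma):
    """RBF (Gaussian) kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=100.0, sigma=0.5):
    """Solve the LS-SVM linear system for bias b and dual weights alpha."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma   # regularized kernel
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]

def lssvm_predict(Xtr, b, alpha, Xnew, sigma=0.5):
    return rbf(Xnew, Xtr, sigma) @ alpha + b

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sinc(X[:, 0]) + 0.01 * rng.normal(size=80)
b, alpha = lssvm_fit(X, y)
rmse = np.sqrt(np.mean((lssvm_predict(X, b, alpha, X) - y) ** 2))
```

The hyperparameters gamma (regularization) and sigma (kernel width) here are illustrative choices; in a real calibration they would be tuned by cross-validation.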
Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.
2013-09-01
Four simple, accurate and specific methods were developed and validated for the simultaneous estimation of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in commercial tablets. The derivative spectrophotometric methods include Derivative Ratio Zero Crossing (DRZC) and Double Divisor Ratio Spectra-Derivative Spectrophotometry (DDRS-DS) methods, while the multivariate calibrations used are Principal Component Regression (PCR) and Partial Least Squares (PLS). The proposed methods were applied successfully in the determination of the drugs in laboratory-prepared mixtures and in commercial pharmaceutical preparations. The validity of the proposed methods was assessed using the standard addition technique. The linearity of the proposed methods is investigated in the range of 2-32, 4-44 and 2-20 μg/mL for AML, VAL and HCT, respectively.
Multivariate Calibration Models for Sorghum Composition using Near-Infrared Spectroscopy
Energy Technology Data Exchange (ETDEWEB)
Wolfrum, E.; Payne, C.; Stefaniak, T.; Rooney, W.; Dighe, N.; Bean, B.; Dahlberg, J.
2013-03-01
NREL developed calibration models based on near-infrared (NIR) spectroscopy coupled with multivariate statistics to predict compositional properties relevant to cellulosic biofuels production for a variety of sorghum cultivars. A robust calibration population was developed in an iterative fashion. The quality of models developed using the same sample geometry on two different types of NIR spectrometers and two different sample geometries on the same spectrometer did not vary greatly.
A multivariate calibration procedure for the tensammetric determination of detergents
Bos, M.
1989-01-01
A multivariate calibration procedure based on singular value decomposition (SVD) and the Ho-Kashyap algorithm is used for the tensammetric determination of the cationic detergents Hyamine 1622, benzalkonium chloride (BACl), N-cetyl-N,N,N-trimethylammonium bromide (CTABr) and mixtures of CTABr and
DEFF Research Database (Denmark)
Tybjærg-Hansen, Anne
2009-01-01
Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements of the risk factors are observed on a subsample. We extend the multivariate RC techniques to a meta-analysis framework where multiple studies provide independent repeat measurements and information on disease outcome. We consider the cases where some or all studies have repeat measurements, and compare study-specific, averaged and empirical Bayes estimates of RC parameters. Additionally, we allow for binary covariates (e.g. smoking status) and for uncertainty and time trends in the measurement error corrections. Our methods are illustrated using a subset of individual participant data from prospective long-term studies...
Applied multivariate statistical analysis
Härdle, Wolfgang Karl
2015-01-01
Focusing on high-dimensional applications, this 4th edition presents the tools and concepts used in multivariate data analysis in a style that is also accessible for non-mathematicians and practitioners. It surveys the basic principles and emphasizes both exploratory and inferential statistics; a new chapter on Variable Selection (Lasso, SCAD and Elastic Net) has also been added. All chapters include practical exercises that highlight applications in different multivariate data analysis fields: in quantitative financial studies, where the joint dynamics of assets are observed; in medicine, where recorded observations of subjects in different locations form the basis for reliable diagnoses and medication; and in quantitative marketing, where consumers’ preferences are collected in order to construct models of consumer behavior. All of these examples involve high to ultra-high dimensions and represent a number of major fields in big data analysis. The fourth edition of this book on Applied Multivariate ...
Özdemir, Durmuş
2006-01-01
Determination of wheat flour quality parameters, such as protein, moisture and dry mass, by wet chemistry analyses takes a long time. Near infrared spectroscopy (NIR) coupled with multivariate calibration offers a fast and nondestructive alternative for obtaining reliable results. However, due to the complexity of the spectra obtained from NIR, some wavelength selection is generally required to improve the predictive ability of multivariate calibration methods. In this study, two different wheat data s...
Yang, Haiqing; Wu, Di; He, Yong
2007-11-01
Near-infrared spectroscopy (NIRS) is a pollution-free, rapid method for quantitative and qualitative analysis, offering high speed, non-destructiveness, high precision and reliable detection data. A new approach for variety discrimination of brown sugars using short-wave NIR spectroscopy (800-1050 nm) was developed in this work. The relationship between the absorbance spectra and brown sugar varieties was established. The spectral data were compressed by principal component analysis (PCA). The resulting features can be visualized in principal component (PC) space, which can lead to the discovery of structures correlated with the different classes of spectral samples, and appears to provide a reasonable variety clustering of brown sugars. The 2-D PC plot obtained using the first two PCs can be used for pattern recognition. Least-squares support vector machines (LS-SVM) were applied to solve the multivariate calibration problems in a relatively fast way. The work has shown that the short-wave NIR spectroscopy technique is suitable for brand identification of brown sugar, and that LS-SVM has better identification ability than PLS when the calibration set is small.
Delwiche, Stephen R; Reeves, James B
2010-01-01
In multivariate regression analysis of spectroscopy data, spectral preprocessing is often performed to reduce unwanted background information (offsets, sloped baselines) or accentuate absorption features in intrinsically overlapping bands. These procedures, also known as pretreatments, are commonly smoothing operations or derivatives. While such operations are often useful in reducing the number of latent variables of the actual decomposition and lowering residual error, they also run the risk of misleading the practitioner into accepting calibration equations that are poorly adapted to samples outside of the calibration. The current study developed a graphical method to examine this effect on partial least squares (PLS) regression calibrations of near-infrared (NIR) reflection spectra of ground wheat meal with two analytes, protein content and sodium dodecyl sulfate sedimentation (SDS) volume (an indicator of the quantity of the gluten proteins that contribute to strong doughs). These two properties were chosen because of their differing abilities to be modeled by NIR spectroscopy: excellent for protein content, fair for SDS sedimentation volume. To further demonstrate the potential pitfalls of preprocessing, an artificial component, a randomly generated value, was included in PLS regression trials. Savitzky-Golay (digital filter) smoothing, first-derivative, and second-derivative preprocessing functions (5 to 25 centrally symmetric convolution points, derived from quadratic polynomials) were applied to PLS calibrations of 1 to 15 factors. The results demonstrated the danger of over-reliance on preprocessing when (1) the number of samples used in a multivariate calibration is low (<50), (2) the spectral response of the analyte is weak, and (3) the goodness of the calibration is based on the coefficient of determination (R²) rather than a term based on residual error. The graphical method has application to the evaluation of other preprocessing functions and various
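The Savitzky-Golay filters discussed above come from a local polynomial least-squares fit, which is easy to reproduce: row `deriv` of the pseudo-inverse of the Vandermonde matrix of point offsets gives the convolution weights. A self-contained sketch on synthetic data (not the study's wheat spectra; unit grid spacing assumed):

```python
import math
import numpy as np

def savgol_coeffs(window, polyorder, deriv=0):
    """Filter weights from a local polynomial LS fit on offsets -h..h."""
    half = window // 2
    t = np.arange(-half, half + 1)
    V = np.vander(t, polyorder + 1, increasing=True)   # columns 1, t, t^2, ...
    # row `deriv` of pinv(V) is the fitted polynomial coefficient a_deriv;
    # the deriv-th derivative at the window center is deriv! * a_deriv
    return np.linalg.pinv(V)[deriv] * math.factorial(deriv)

def savgol_apply(y, window, polyorder, deriv=0):
    """Apply the filter to interior points (edges left as NaN)."""
    c = savgol_coeffs(window, polyorder, deriv)
    half = window // 2
    out = np.full(len(y), np.nan)
    for i in range(half, len(y) - half):
        out[i] = c @ y[i - half:i + half + 1]
    return out

x = np.arange(20.0)
smooth = savgol_apply(x ** 2, 7, 2)            # quadratic is reproduced exactly
deriv1 = savgol_apply(2.0 * x, 7, 2, deriv=1)  # slope of a line is recovered
```

A polyorder-2 filter reproduces any quadratic exactly at interior points, which is exactly why such filters smooth noise while preserving band shape up to that order.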
Determination of fragrance content in perfume by Raman spectroscopy and multivariate calibration
Godinho, Robson B.; Santos, Mauricio C.; Poppi, Ronei J.
2016-03-01
An alternative methodology is herein proposed for determination of fragrance content in perfumes and their classification according to the guidelines established by fine perfume manufacturers. The methodology is based on Raman spectroscopy associated with multivariate calibration, allowing the determination of fragrance content in a fast, nondestructive, and sustainable manner. The results were considered consistent with the conventional method, with standard error of prediction values lower than 1.0%. This result indicates that the proposed technology is a feasible analytical tool for determination of fragrance content in a hydro-alcoholic solution, for use in manufacturing, quality control and by regulatory agencies.
Classification of Specialized Farms Applying Multivariate Statistical Methods
Directory of Open Access Journals (Sweden)
Zuzana Hloušková
2017-01-01
The paper is aimed at application of advanced multivariate statistical methods for classifying cattle breeding farming enterprises by their economic size. An advantage of the model is its ability to use a few selected indicators, compared to the complex methodology of the current classification model, which requires knowledge of the detailed structure of the herd turnover and the structure of cultivated crops. The output of the paper is intended to be applied within farm structure research focused on future development of Czech agriculture. As the data source, the farming enterprises database for 2014 from the FADN CZ system has been used. The predictive model proposed exploits knowledge of the actual size classes of the farms tested. Outcomes of the linear discriminant analysis multifactor classification method have supported the chance of classifying farming enterprises in the group of Small farms (98% classified correctly) and the Large and Very Large enterprises (100% classified correctly). The Medium Size farms have been correctly classified at only 58.11%. Partial shortcomings of the process presented have been found when discriminating Medium and Small farms.
Bello, Alessandra; Bianchi, Federica; Careri, Maria; Giannetto, Marco; Mori, Giovanni; Musci, Marilena
2007-11-05
A new NIR method based on multivariate calibration for determination of ethanol in industrially packed wholemeal bread was developed and validated. GC-FID was used as the reference method for the determination of actual ethanol concentration of different samples of wholemeal bread with proper content of added ethanol, ranging from 0 to 3.5% (w/w). Stepwise discriminant analysis was carried out on the NIR dataset in order to reduce the number of original variables by selecting those that were able to discriminate between the samples of different ethanol concentrations. Using the selected variables, a multivariate calibration model was then obtained by multiple linear regression. The prediction power of the linear model was optimized by a new "leave one out" method, so that the number of original variables was further reduced.
Directory of Open Access Journals (Sweden)
Abdelaleem Eglal A
2012-04-01
Background: Metronidazole (MET) and Diloxanide Furoate (DF) act as antiprotozoal drugs in their ternary mixtures with Mebeverine HCl (MEH), an effective antispasmodic drug. This work concerns the development and validation of two simple, specific and cost-effective methods mainly for simultaneous determination of the proposed ternary mixture. In addition, the developed multivariate calibration model has been updated to determine Metronidazole benzoate (METB) in its binary mixture with DF in Dimetrol® suspension. Results: Method (I) is the mean centering of ratio spectra spectrophotometric method (MCR), which uses the mean centered ratio spectra in two successive steps, eliminating the derivative steps and therefore enhancing the signal-to-noise ratio. The developed MCR method has been successfully applied for determination of MET, DF and MEH in different laboratory-prepared mixtures and in tablets. Method (II) is the partial least squares (PLS) multivariate calibration method, optimized for determination of MET, DF and MEH in Dimetrol® tablets; by updating the developed model, it has been successfully used for prediction of binary mixtures of DF and METB in Dimetrol® suspension with good accuracy and precision, without reconstruction of the calibration set. Conclusion: The developed methods have been validated; accuracy, precision and specificity were found to be within the acceptable limits. Moreover, results obtained by the suggested methods showed no significant difference when compared with those obtained by reported methods.
Goicoechea, H C; Olivieri, A C
1999-08-01
The use of multivariate spectrophotometric calibration is presented for the simultaneous determination of the active components of tablets used in the treatment of pulmonary tuberculosis. The resolution of ternary mixtures of rifampicin, isoniazid and pyrazinamide has been accomplished by using partial least squares (PLS-1) regression analysis. Although the components show an important degree of spectral overlap, they have been simultaneously determined with high accuracy and precision, rapidly and with no need of nonaqueous solvents for dissolving the samples. No interference has been observed from the tablet excipients. A comparison is presented with the related multivariate method of classical least squares (CLS) analysis, which is shown to yield less reliable results due to the severe spectral overlap among the studied compounds. This is highlighted in the case of isoniazid, due to the small absorbances measured for this component.
Processing data collected from radiometric experiments by multivariate technique
International Nuclear Information System (INIS)
Urbanski, P.; Kowalska, E.; Machaj, B.; Jakowiuk, A.
2005-01-01
Multivariate techniques applied for processing data collected from radiometric experiments can provide more efficient extraction of the information contained in the spectra. Several techniques are considered: (i) multivariate calibration using Partial Least Squares Regression and Artificial Neural Networks, (ii) standardization of the spectra, (iii) smoothing of collected spectra, where the autocorrelation function and bootstrap were used for the assessment of the processed data, and (iv) image processing using Principal Component Analysis. Application of these techniques is illustrated with examples of some industrial applications. (author)
Energy Technology Data Exchange (ETDEWEB)
Barbieri Gonzaga, Fabiano [Chemical Metrology Division, National Institute of Metrology, Quality and Technology, Av. N. S. das Gracas, 50, Xerem, 25250-020, Duque de Caxias, RJ (Brazil); Pasquini, Celio, E-mail: pasquini@iqm.unicamp.br [Institute of Chemistry, State University of Campinas, POB 6154, 13083-970, Campinas, SP (Brazil)
2012-03-15
This work describes a compact and low cost analyzer for laser induced breakdown spectroscopy (LIBS) based on a diode pumped passively Q-switched Nd:LSB microchip laser and a conventional Czerny-Turner spectrograph (spectral range from about 250 to 390 nm) containing a non-intensified, non-gated and non-cooled 1024 pixel linear sensor array. The new LIBS instrument was applied for analyzing steel samples containing chromium and nickel in the concentration range from about 5 to 26% w/w (certified reference materials), integrating the emitted radiation for 40 s under continuous application of laser pulses at 2 kHz for each acquired spectrum (integration of about 80,000 plasmas). The emission data from about 356 to 362 nm and 340 to 354 nm were employed for the construction of two Partial Least Squares (PLS) calibration models for determination of chromium and nickel, respectively. The average relative errors of prediction of chromium and nickel concentrations were 3.7 and 6.7%, respectively, which are similar to or lower than those obtained using higher cost LIBS analyzers. The results have shown that multivariate calibration can help to overcome the decreasing instrumental performance associated with the low cost equipment. - Highlights: • Low cost laser induced breakdown spectroscopy instrumentation. • Microchip laser based LIBS system. • Standard non-intensified, non-gated, non-cooled detector for LIBS system. • Improvement of results from a low cost LIBS system using multivariate calibration. • Cr and Ni determination in steel by a low cost LIBS system.
Sheykhizadeh, Saheleh; Naseri, Abdolhossein
2018-04-01
Variable selection plays a key role in classification and multivariate calibration. Variable selection methods are aimed at choosing a set of variables, from a large pool of available predictors, relevant to estimating the analyte concentrations, or to achieving better classification results. Many variable selection techniques have now been introduced, among which those based on swarm intelligence optimization have received particular attention in recent decades, since they are mainly inspired by nature. In this work, a simple and new variable selection algorithm is proposed according to the invasive weed optimization (IWO) concept. IWO is a bio-inspired metaheuristic mimicking the ecological behavior of weeds in colonizing and finding an appropriate place for growth and reproduction; it has been shown to be very adaptive and robust to environmental changes. In this paper, the first application of IWO to variable selection is reported, using different experimental datasets including FTIR and NIR data, so as to undertake classification and multivariate calibration tasks. Accordingly, invasive weed optimization - linear discriminant analysis (IWO-LDA) and invasive weed optimization - partial least squares (IWO-PLS) are introduced for multivariate classification and calibration, respectively.
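The wrapper-style search that IWO performs can be illustrated with a deliberately simplified stochastic sketch. This is not the authors' algorithm: real IWO disperses seeds with an adaptively shrinking standard deviation, whereas the toy below uses fitness-proportional offspring counts with random bit-flips, and a holdout ridge model stands in for PLS/LDA. All data are synthetic.

```python
import numpy as np

def cv_rmse(X, y, mask, lam=1e-6):
    """Holdout RMSE of a ridge model restricted to the masked variables."""
    if mask.sum() == 0:
        return np.inf
    ntr = int(0.7 * len(y))
    Xtr, Xva = X[:ntr][:, mask], X[ntr:][:, mask]
    ytr, yva = y[:ntr], y[ntr:]
    xm, ym = Xtr.mean(0), ytr.mean()
    b = np.linalg.solve((Xtr - xm).T @ (Xtr - xm) + lam * np.eye(mask.sum()),
                        (Xtr - xm).T @ (ytr - ym))
    return np.sqrt(np.mean((ym + (Xva - xm) @ b - yva) ** 2))

def select_variables(X, y, iters=60, pop=12, seed=0):
    """Toy colony search over binary wavelength masks."""
    rng = np.random.default_rng(seed)
    nvar = X.shape[1]
    colony = [rng.random(nvar) < 0.5 for _ in range(pop)]
    for _ in range(iters):
        fitness = np.array([cv_rmse(X, y, m) for m in colony])
        order = np.argsort(fitness)
        offspring = []
        for rank, i in enumerate(order):
            nseeds = max(1, (len(order) - rank) // 3)  # fitter parents seed more
            for _ in range(nseeds):
                child = colony[i].copy()
                flips = rng.integers(0, nvar, size=2)  # mutate two positions
                child[flips] = ~child[flips]
                offspring.append(child)
        colony = [colony[i] for i in order[:pop // 2]] + offspring
        colony = sorted(colony, key=lambda m: cv_rmse(X, y, m))[:pop]
    return colony[0]

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 25))
y = X[:, 0] + X[:, 1] - X[:, 2] + 0.05 * rng.normal(size=60)
best = select_variables(X, y)
best_rmse = cv_rmse(X, y, best)
```

Even this crude search tends to concentrate the mask on the three informative variables, which is the essential behavior any swarm-based wrapper method, IWO included, must exhibit.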
Directory of Open Access Journals (Sweden)
Xiaomi Wang
2017-02-01
The visible and near-infrared (VNIR) spectroscopy prediction model is an effective tool for the prediction of soil organic matter (SOM) content. The predictive accuracy of the VNIR model is highly dependent on the selection of the calibration set. However, conventional methods for selecting the calibration set for constructing the VNIR prediction model merely consider either the gradients of SOM or the soil VNIR spectra, and neglect the influence of environmental variables. Yet soil samples generally present strong spatial variability, and thus the relationship between SOM content and VNIR spectra may vary with location and surrounding environment. Hence, VNIR prediction models based on conventional calibration set selection methods would be biased, especially for estimating highly spatially variable soil content (e.g., SOM). To equip the calibration set selection method with the ability to consider SOM spatial variation and environmental influence, this paper proposes an improved method for selecting the calibration set. The proposed method combines the improved multi-variable association relationship clustering mining (MVARC) method and the Rank–Kennard–Stone (Rank-KS) method in order to synthetically consider the SOM gradient, spectral information, and environmental variables. In the proposed MVARC-R-KS method, MVARC integrates the Apriori algorithm, a density-based clustering algorithm, and the Delaunay triangulation. The MVARC method is first utilized to adaptively mine clustering distribution zones in which environmental variables exert a similar influence on soil samples. The feasibility of the MVARC method is proven by conducting an experiment on a simulated dataset. The calibration set is evenly selected from the clustering zones and the remaining zone by using the Rank-KS algorithm in order to avoid a single property in the selected calibration set. The proposed MVARC-R-KS approach is applied to select a
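The Rank-KS step above builds on the classical Kennard-Stone selection, which is simple to state: start from the two most distant samples, then repeatedly add the candidate whose minimum distance to the already-selected set is largest. A compact version on synthetic 2-D data (not the soil dataset):

```python
import numpy as np

def kennard_stone(X, n_select):
    """Return indices of a space-filling calibration subset of X."""
    # pairwise Euclidean distances
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    i, j = np.unravel_index(np.argmax(d), d.shape)
    selected = [int(i), int(j)]                 # seed: most distant pair
    remaining = [k for k in range(len(X)) if k not in selected]
    while len(selected) < n_select:
        # each candidate's distance to its nearest selected sample
        dmin = d[np.ix_(remaining, selected)].min(axis=1)
        pick = remaining[int(np.argmax(dmin))]  # maximin choice
        selected.append(pick)
        remaining.remove(pick)
    return selected

rng = np.random.default_rng(0)
X = rng.uniform(size=(50, 2))
idx = kennard_stone(X, 10)
```

The maximin rule gives the space-filling coverage that motivates KS; Rank-KS layers a ranking of the target property on top so that the selection also spans the SOM gradient.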
Applied multivariate statistics with R
Zelterman, Daniel
2015-01-01
This book brings the power of multivariate statistics to graduate-level practitioners, making these analytical methods accessible without lengthy mathematical derivations. Using the open source, shareware program R, Professor Zelterman demonstrates the process and outcomes for a wide array of multivariate statistical applications. Chapters cover graphical displays, linear algebra, univariate, bivariate and multivariate normal distributions, factor methods, linear regression, discrimination and classification, clustering, time series models, and additional methods. Zelterman uses practical examples from diverse disciplines to welcome readers from a variety of academic specialties. Those with backgrounds in statistics will learn new methods while they review more familiar topics. Chapters include exercises, real data sets, and R implementations. The data are interesting, real-world topics, particularly from health and biology-related contexts. As an example of the approach, the text examines a sample from the B...
Applied Statistics: From Bivariate through Multivariate Techniques [with CD-ROM
Warner, Rebecca M.
2007-01-01
This book provides a clear introduction to widely used topics in bivariate and multivariate statistics, including multiple regression, discriminant analysis, MANOVA, factor analysis, and binary logistic regression. The approach is applied and does not require formal mathematics; equations are accompanied by verbal explanations. Students are asked…
Directory of Open Access Journals (Sweden)
Amir H. M. Sarrafi
2011-01-01
Resolution of a binary mixture of atorvastatin (ATV) and amlodipine (AML) with minimum sample pretreatment and without analyte separation has been successfully achieved using a rapid method based on partial least squares analysis of UV spectral data. Multivariate calibration modeling procedures, traditional partial least squares (PLS-2), interval partial least squares (iPLS) and synergy interval partial least squares (siPLS), were applied to select a spectral range that provided the lowest prediction error in comparison to the full-spectrum model. The simultaneous determination of both analytes was possible by PLS processing of sample absorbance between 220-425 nm. The correlation coefficients (R) and root mean squared errors of cross validation (RMSECV) for ATV and AML in synthetic mixtures were 0.9991, 0.9958 and 0.4538, 0.2411, respectively, in the best siPLS models. The optimized method has been used for determination of ATV and AML in Amostatin commercial tablets. The proposed method is simple, fast and inexpensive, and does not need any separation or preparation steps.
Safi, A.; Campanella, B.; Grifoni, E.; Legnaioli, S.; Lorenzetti, G.; Pagnotta, S.; Poggialini, F.; Ripoll-Seguer, L.; Hidalgo, M.; Palleschi, V.
2018-06-01
The introduction of the multivariate calibration curve approach in Laser-Induced Breakdown Spectroscopy (LIBS) quantitative analysis has led to a general improvement of LIBS analytical performance, since a multivariate approach makes it possible to exploit the redundancy of elemental information that is typically present in a LIBS spectrum. Software packages implementing multivariate methods are available in the most widely used commercial and open source analytical programs; in most cases, the multivariate algorithms are robust against noise and operate in unsupervised mode. The downside of the availability and ease of use of such packages is the (perceived) difficulty in assessing the reliability of the results obtained, which often leads to the multivariate algorithms being treated as 'black boxes' whose inner mechanism is supposed to remain hidden to the user. In this paper, we discuss the dangers of a 'black box' approach in LIBS multivariate analysis, and how to overcome them using the chemical-physical knowledge that is at the base of any LIBS quantitative analysis.
Möltgen, C-V; Herdling, T; Reich, G
2013-11-01
This study demonstrates an approach, using science-based calibration (SBC), for direct coating thickness determination on heart-shaped tablets in real-time. Near-Infrared (NIR) spectra were collected during four full industrial pan coating operations. The tablets were coated with a thin hydroxypropyl methylcellulose (HPMC) film up to a film thickness of 28 μm. The application of SBC permits the calibration of the NIR spectral data without using costly determined reference values. This is due to the fact that SBC combines classical methods to estimate the coating signal and statistical methods for the noise estimation. The approach enabled the use of NIR for the measurement of the film thickness increase from around 8 to 28 μm of four independent batches in real-time. The developed model provided a spectroscopic limit of detection for the coating thickness of 0.64 ± 0.03 μm root-mean square (RMS). In the commonly used statistical methods for calibration, such as Partial Least Squares (PLS), sufficiently varying reference values are needed for calibration. For thin non-functional coatings this is a challenge because the quality of the model depends on the accuracy of the selected calibration standards. The obvious and simple approach of SBC eliminates many of the problems associated with the conventional statistical methods and offers an alternative for multivariate calibration. Copyright © 2013 Elsevier B.V. All rights reserved.
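In broad strokes, SBC builds the regression vector from a physically estimated analyte signal g and a statistically estimated noise covariance Σ, rather than from reference values; a common closed form is b = Σ⁻¹g / (gᵀΣ⁻¹g), which gives unit response to g. A minimal numpy sketch with entirely synthetic data (the Gaussian band, noise level, and thickness value are illustrative assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels = 80
# g: assumed coating signal per micrometre of film (physical estimate)
g = np.exp(-0.5 * ((np.arange(n_channels) - 40) / 8.0) ** 2)
# Sigma: noise covariance estimated from spectra carrying no coating signal
noise = 0.05 * rng.normal(size=(300, n_channels))
Sigma = np.cov(noise, rowvar=False) + 1e-8 * np.eye(n_channels)

# SBC regression vector: b = Sigma^-1 g / (g^T Sigma^-1 g)
Si_g = np.linalg.solve(Sigma, g)
b = Si_g / (g @ Si_g)

# predict thickness from a measured spectrum x = thickness * g + noise
thickness_true = 17.0
x = thickness_true * g + 0.05 * rng.normal(size=n_channels)
thickness_hat = x @ b
print(round(thickness_hat, 2))
```

Because g is supplied from first principles and Σ from blank measurements, no reference thickness values enter the calibration, which is the point the abstract makes.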
International Nuclear Information System (INIS)
Scapin, Marcos A.; Silva, Clayton P.; Cotrim, Marycel E.B.; Pires, Maria Aparecida F.
2013-01-01
The aim of this work is to establish and validate a nondestructive quantitative chemical analysis method for the simultaneous determination of the major constituents (total U and Si) and impurities (B, Mg, Al, Cr, Mn, Fe, Co, Ni, Cu, Zn, Mo, Cd, etc.) present in U3Si2. The method must also meet the needs of nuclear fuel qualification for MTR-type nuclear reactors, with low cost and analysis time, while also minimizing waste generation. For this purpose, an X-ray fluorescence technique is applied. The technique is nondestructive, aside from sample preparation procedures that do not require previous chemical treatments (dissolving, digesting), and allows fast chemical analysis. The fundamental parameters (FP) method was applied to correct for spectral and matrix effects. The calibration model was obtained via principal component analysis using orthogonal decomposition by the singular value decomposition (SVD) method on U3O8 and U3Si2 samples. The results were compared by means of statistical tests, in accordance with ISO 17025, on U3O8 CRMs from the New Brunswick Laboratory (NBL) and 16 U3Si2 samples provided by the CCN of IPEN/CNEN-SP. Multivariate calibration is a promising method for the determination of major and minor constituents in U3Si2 and U3O8 nuclear fuel, because its precision and accuracy are statistically equivalent to those of volumetric analysis (total U determination), gravimetric analysis (Si determination) and ICP-OES methods (impurity determination). (author)
Directory of Open Access Journals (Sweden)
Ieda Spacino Scarminio
1998-10-01
Full Text Available A multivariate calibration method to determine the chemical composition of systems with severely overlapped bands is proposed. Q-mode factors are determined from the spectral data and subsequently rotated using the varimax and the oblique transformation of Imbrie. The method is applied to two sets of simulated data to test the sensitivity of the analytical results to random experimental error. The chemical concentrations of an alanine and threonine mixture are determined from spectral data in the 302.5–548.5 nm region.
Optimization of SPECT calibration for quantification of images applied to dosimetry with iodine-131
International Nuclear Information System (INIS)
Carvalho, Samira Marques de
2018-01-01
SPECT system calibration plays an essential role in the accuracy of image quantification. In the first stage of this work, an optimized SPECT calibration method was proposed for 131I studies, considering the partial volume effect (PVE) and the position of the calibration source. In the second stage, the study investigated the impact of count density and reconstruction parameters on the determination of the calibration factor and on image quantification in dosimetry studies, considering the reality of clinical practice in Brazil. In the final stage, the study evaluated the influence of several factors on the calibration for absorbed dose calculation, using Monte Carlo (MC) simulations with the GATE code. Calibration was performed by determining a calibration curve (sensitivity versus volume) obtained by applying different thresholds. The calibration factors were then determined with an exponential function adjustment. Images were acquired with high and low count densities for several source positions within the simulator. To validate the calibration method, the calibration factors were used for absolute quantification of the total reference activities. The images were reconstructed adopting two approaches with different parameters, as usually used for patient images. The methodology developed for the calibration of the tomographic system was easier and faster to implement than other procedures suggested to improve the accuracy of the results. The study also revealed the influence of the location of the calibration source, demonstrating better precision in the absolute quantification when the location of the target region is considered during system calibration. The study, applied to the Brazilian thyroid protocol, suggests revising the calibration of the SPECT system to include different positions for the reference source, as well as acquisitions that consider the signal-to-noise ratio (SNR) of the images. Finally, the doses obtained with the
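A sensitivity-versus-volume calibration curve with an exponential adjustment, as described above, can be sketched like this. The volumes, sensitivities, and the recovery-type model S(V) = S_inf·(1 − exp(−aV)) are illustrative assumptions, not the thesis's data:

```python
import numpy as np
from scipy.optimize import curve_fit

# hypothetical sensitivities (cps/MBq) measured for spheres of known volume
volumes = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])        # mL
sensitivity = np.array([2.28, 3.67, 5.02, 5.69, 5.80, 5.80])  # cps/MBq

# recovery-type exponential: small volumes lose counts to the PVE
def model(v, s_inf, a):
    return s_inf * (1.0 - np.exp(-a * v))

(s_inf, a), _ = curve_fit(model, volumes, sensitivity, p0=(6.0, 0.3))

# volume-specific calibration factor for a 10 mL target region:
# activity (MBq) = measured count rate / S(V)
S10 = model(10.0, s_inf, a)
counts_per_s = 220.0
activity_MBq = counts_per_s / S10
print(round(s_inf, 2), round(S10, 2), round(activity_MBq, 1))
```

The fitted plateau s_inf is the PVE-free system sensitivity; evaluating the curve at the target-region volume is what makes the quantification volume-aware.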
Sasakura, D; Nakayama, K; Sakamoto, T; Chikuma, T
2015-05-01
The use of transmission near infrared spectroscopy (TNIRS) is of particular interest in the pharmaceutical industry. This is because TNIRS does not require sample preparation and can analyze several tens of tablet samples in an hour. It has the capability to measure all relevant information from a tablet while it is still on the production line. However, TNIRS has a narrow spectral range, and overtone vibrations often overlap. To perform content uniformity testing of tablets by TNIRS, various properties of the tableting process need to be analyzed by a multivariate prediction model, such as Partial Least Squares Regression. One issue is that typical approaches rely on several hundred reference samples as the basis of the method rather than on a strategically designed calibration set. This means that many batches are needed to prepare the reference samples, which takes time and is not cost effective. Our group investigated the concentration dependence of the calibration model with a strategic design. Consequently, we developed a more effective approach to the TNIRS calibration model than the existing methodology.
An Introduction to Applied Multivariate Analysis
Raykov, Tenko
2008-01-01
Focuses on the core multivariate statistics topics that are of fundamental relevance for understanding the field. The book emphasizes topics that are critical to researchers in the behavioral, social, and educational sciences.
Evaluation of multivariate calibration models transferred between spectroscopic instruments
DEFF Research Database (Denmark)
Eskildsen, Carl Emil Aae; Hansen, Per W.; Skov, Thomas
2016-01-01
In a setting where multiple spectroscopic instruments are used for the same measurements it may be convenient to develop the calibration model on a single instrument and then transfer this model to the other instruments. In the ideal scenario, all instruments provide the same predictions for the same samples using the transferred model. However, sometimes the success of a model transfer is evaluated by comparing the transferred model predictions with the reference values. This is not optimal, as uncertainties in the reference method will impact the evaluation. This paper proposes a new method for calibration model transfer evaluation. The new method is based on comparing predictions from different instruments, rather than comparing predictions and reference values. A total of 75 flour samples were available for the study. All samples were measured on ten near infrared (NIR) instruments from two
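The proposed evaluation idea, judging a transferred model by how much its predictions differ between instruments rather than against noisy reference values, can be sketched with synthetic data. The gain/offset mismatch between instruments and the matched-filter calibration vector below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_channels = 75, 100
conc = rng.uniform(8, 14, n_samples)        # e.g. flour protein content, %
pure = rng.uniform(0.2, 1.0, n_channels)    # pure-component spectrum

# master instrument, and a second instrument with a small gain/offset bias
X_master = np.outer(conc, pure) + 0.02 * rng.normal(size=(n_samples, n_channels))
X_second = (1.01 * np.outer(conc, pure) + 0.05
            + 0.02 * rng.normal(size=(n_samples, n_channels)))

# one fixed calibration vector (a matched filter built from the pure signal)
b = pure / (pure @ pure)

# proposed evaluation: compare the two instruments' predictions directly,
# so reference-method uncertainty never enters the comparison
rms_between = np.sqrt(np.mean((X_master @ b - X_second @ b) ** 2))
print(round(rms_between, 3))
```

A transfer correction would then aim to drive rms_between toward the level expected from instrument noise alone.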
Directory of Open Access Journals (Sweden)
A. HAKAN AKTAŞ
2008-01-01
Full Text Available In this study, three anti-inflammatory agents, namely ibuprofen, indomethacin and naproxen, were titrated potentiometrically using tetrabutylammonium hydroxide in acetonitrile solvent under a nitrogen atmosphere at 25 °C. MATLAB 7.0 software was applied for data treatment as a multivariate calibration tool in the potentiometric titration procedure. An artificial neural network (ANN) was used as a multivariate calibration tool to model the complex non-linear relationship between the ibuprofen, indomethacin and naproxen concentrations and the potential (mV) of the solutions measured after the addition of different volumes of the titrant. The optimized network predicted the concentrations of the agents in synthetic mixtures. The results showed that the employed ANN can process the titration data with an average relative error of prediction of less than 2.30%.
Barimani, Shirin; Kleinebudde, Peter
2017-10-01
A multivariate analysis method, Science-Based Calibration (SBC), was used for the first time for endpoint determination of a tablet coating process using Raman data. Two types of tablet cores, placebo and caffeine cores, received a coating suspension comprising a polyvinyl alcohol-polyethylene glycol graft copolymer and titanium dioxide up to a maximum coating thickness of 80 µm. Raman spectroscopy was used as an in-line PAT tool. The spectra were acquired every minute and correlated to the amount of applied aqueous coating suspension. SBC was compared to another well-known multivariate analysis method, Partial Least Squares regression (PLS), and a simpler approach, Univariate Data Analysis (UVDA). All developed calibration models had coefficients of determination (R²) higher than 0.99. The coating endpoints could be predicted with root mean square errors of prediction (RMSEP) of less than 3.1% of the applied coating suspension. Compared to PLS and UVDA, SBC proved to be an alternative multivariate calibration method with high predictive power. Copyright © 2017 Elsevier B.V. All rights reserved.
International Nuclear Information System (INIS)
Geraldo, L.P.; Smith, D.L.
1989-01-01
The methodology of covariance matrices and least-squares methods has been applied to the relative efficiency calibration of a Ge(Li) detector. Procedures employed to generate, manipulate and test covariance matrices, which serve to properly represent the uncertainties of experimental data, are discussed. Calibration data fitting using least-squares methods has been performed for a particular experimental data set. (author) [pt
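Least-squares fitting of calibration data with a full covariance matrix, of the kind discussed above, takes the generalised least-squares form β = (XᵀV⁻¹X)⁻¹XᵀV⁻¹y, with parameter covariance (XᵀV⁻¹X)⁻¹. A sketch with a hypothetical log-log efficiency model and a covariance that combines independent counting errors with a fully correlated normalisation term (all numbers are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
# hypothetical efficiency calibration: ln(eff) ~ a + b*ln(E) for a Ge(Li) detector
E = np.array([122., 245., 344., 662., 779., 964., 1112., 1408.])   # keV
a_true, b_true = 2.0, -0.7
y_true = a_true + b_true * np.log(E)

# covariance matrix: independent counting errors plus a fully correlated
# 1% normalisation uncertainty shared by all points
sigma = np.full(E.size, 0.02)
V = np.diag(sigma ** 2) + (0.01 ** 2) * np.ones((E.size, E.size))
L = np.linalg.cholesky(V)
y = y_true + L @ rng.normal(size=E.size)       # correlated synthetic data

# generalised least squares: beta = (X^T V^-1 X)^-1 X^T V^-1 y
X = np.c_[np.ones(E.size), np.log(E)]
Vi_X = np.linalg.solve(V, X)
cov_beta = np.linalg.inv(X.T @ Vi_X)
beta = cov_beta @ (Vi_X.T @ y)
print(np.round(beta, 3), np.round(np.sqrt(np.diag(cov_beta)), 4))
```

Dropping the off-diagonal normalisation term would understate the intercept uncertainty, which is exactly the kind of misrepresentation a proper covariance treatment avoids.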
Energy Technology Data Exchange (ETDEWEB)
Tencate, Alister J. [Department of Chemistry, Idaho State University, Pocatello, ID 83209 (United States); Kalivas, John H., E-mail: kalijohn@isu.edu [Department of Chemistry, Idaho State University, Pocatello, ID 83209 (United States); White, Alexander J. [Department of Physics and Optical Engineering, Rose-Hulman Institute of Technology, Terre Huate, IN 47803 (United States)
2016-05-19
New multivariate calibration methods and other processes are being developed that require selection of multiple tuning parameter (penalty) values to form the final model. With one or more tuning parameters, using only one measure of model quality to select final tuning parameter values is not sufficient. Optimization of several model quality measures is challenging. Thus, three fusion ranking methods are investigated for simultaneous assessment of multiple measures of model quality when selecting tuning parameter values. One is a supervised learning fusion rule named sum of ranking differences (SRD). The other two are non-supervised learning processes based on the sum and median operations. The effect of the number of models evaluated on the three fusion rules is also assessed using three procedures. One procedure uses all models from all possible combinations of the tuning parameters. To reduce the number of models evaluated, an iterative process (only applicable to SRD) is applied, and thresholding a model quality measure before applying the fusion rules is also used. A near infrared pharmaceutical data set requiring model updating is used to evaluate the three fusion rules. In this case, calibration of the primary conditions is for the active pharmaceutical ingredient (API) of tablets produced in a laboratory. The secondary conditions for calibration updating are for tablets produced in the full batch setting. Two model updating processes requiring selection of two unique tuning parameter values are studied. One is based on Tikhonov regularization (TR) and the other is a variation of partial least squares (PLS). The three fusion methods are shown to provide equivalent and acceptable results, allowing automatic selection of the tuning parameter values. The best tuning parameter values are selected when the model quality measures used with the fusion rules are for the small secondary sample set used to form the updated models. In this model updating situation, evaluation of
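The two non-supervised fusion rules (sum and median) can be sketched directly: rank the candidate tuning-parameter models under each quality measure separately, then fuse the per-measure ranks and pick the model with the lowest fused score. The quality measures and all numbers below are hypothetical, and the supervised SRD rule is deliberately omitted:

```python
import numpy as np

# hypothetical quality measures (all scaled so lower is better) for six
# candidate tuning-parameter pairs of an updated calibration model
measures = np.array([
    # RMSE  ||b||   |bias|
    [0.20, 1.4, 0.05],
    [0.25, 1.1, 0.04],
    [0.18, 2.0, 0.09],
    [0.40, 0.9, 0.02],
    [0.22, 1.3, 0.05],
    [0.35, 1.0, 0.03],
])

# rank the candidate models under each quality measure separately
ranks = measures.argsort(axis=0).argsort(axis=0)

# fusion rules: aggregate the per-measure ranks, lowest fused score wins
sum_fused = ranks.sum(axis=1)
median_fused = np.median(ranks, axis=1)
best_sum = int(np.argmin(sum_fused))
best_median = int(np.argmin(median_fused))
print(best_sum, best_median)
```

Fusing ranks rather than raw values is what lets measures on different scales (an RMSE, a model norm, a bias) be combined without explicit weighting.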
MULTIVARIATE TECHNIQUES APPLIED TO EVALUATION OF LIGNOCELLULOSIC RESIDUES FOR BIOENERGY PRODUCTION
Directory of Open Access Journals (Sweden)
Thiago de Paula Protásio
2013-12-01
Full Text Available http://dx.doi.org/10.5902/1980509812361 The evaluation of lignocellulosic wastes for bioenergy production demands consideration of several characteristics and properties that may be correlated. This demands the use of multivariate analysis techniques that allow the evaluation of relevant energetic factors. This work aimed to apply cluster analysis and principal component analysis for the selection and evaluation of lignocellulosic wastes for bioenergy production. Eight types of residual biomass were used, for which the elemental composition (C, H, O, N, S contents), lignin, total extractives and ash contents, basic density, and higher and lower heating values were determined. Both multivariate techniques applied for the evaluation and selection of lignocellulosic wastes were efficient, and similarities were observed between the biomass groups they formed. Through the interpretation of the first principal component, it was possible to create a global development index for evaluating the viability of energetic uses of biomass. The interpretation of the second principal component allowed a contrast between nitrogen and sulfur contents and oxygen content.
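Using the first principal component as a global index, as described above, can be sketched with numpy alone: autoscale the property matrix, extract PC1 by SVD, and read the PC1 score as the index. The property matrix below is entirely hypothetical and much cruder than the study's data:

```python
import numpy as np

# hypothetical property matrix for eight residual biomasses;
# columns: C (%), H (%), N (%), lignin (%), ash (%), HHV (MJ/kg)
P = np.array([
    [49.1, 6.0, 0.4, 28.1, 1.2, 19.5],
    [47.3, 5.8, 0.9, 22.4, 3.5, 18.2],
    [50.2, 6.1, 0.3, 30.5, 0.8, 20.1],
    [45.9, 5.6, 1.4, 18.9, 6.1, 17.0],
    [48.0, 5.9, 0.7, 24.8, 2.2, 18.8],
    [51.0, 6.2, 0.2, 31.9, 0.6, 20.6],
    [46.5, 5.7, 1.1, 20.3, 4.9, 17.6],
    [48.8, 6.0, 0.5, 26.6, 1.8, 19.2],
])

# autoscale, then PCA via SVD; the PC1 score serves as a global index
Z = (P - P.mean(axis=0)) / P.std(axis=0, ddof=1)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s[0] ** 2 / (s ** 2).sum()

pc1 = Vt[0] if Vt[0, -1] > 0 else -Vt[0]   # orient so higher HHV -> higher index
index = Z @ pc1
best = int(index.argmax())
print(best, round(explained, 2))
```

The index is only meaningful when PC1 explains most of the variance, which is why the explained-variance ratio is reported alongside it.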
International Nuclear Information System (INIS)
Batista Braga, Jez Willian; Trevizan, Lilian Cristina; Nunes, Lidiane Cristina; Aparecida Rufini, Iolanda; Santos, Dario; Krug, Francisco Jose
2010-01-01
The application of laser induced breakdown spectrometry (LIBS) aiming at the direct analysis of plant materials is a great challenge that still needs efforts for its development and validation. To this end, a series of experimental approaches has been carried out in order to show that LIBS can be used as an alternative to methods based on wet acid digestion for the analysis of agricultural and environmental samples. The large amount of information provided by LIBS spectra for these complex samples increases the difficulty of selecting the most appropriate wavelengths for each analyte. Some applications have suggested that improvements in both accuracy and precision can be achieved by applying multivariate calibration to LIBS data, in comparison with univariate regression developed with line emission intensities. In the present work, the performance of univariate and multivariate calibration, based on partial least squares regression (PLSR), was compared for the analysis of pellets of plant materials made from an appropriate mixture of cryogenically ground samples with cellulose as the binding agent. The development of a specific PLSR model for each analyte and the selection of spectral regions containing only lines of the analyte of interest provided the best conditions for the analysis. In this particular application, the models showed similar performance, but PLSR seemed to be more robust due to a lower occurrence of outliers in comparison to the univariate method. The data suggest that efforts dealing with sample presentation and the fitness of standards for LIBS analysis must be made in order to fulfill the boundary conditions for matrix-independent development and validation.
International Nuclear Information System (INIS)
Haight, S.M.; Schwartz, D.T.
1999-01-01
Metal hexacyanoferrate compounds show promise as electrochemically switchable ion exchange materials for use in the cleanup of radioactive wastes such as those found in storage basins and underground tanks at the Department of Energy's Hanford Nuclear Reservation. Reported is the use of line-imaging Raman spectroscopy for the in situ determination of oxidation state profiles in nickel hexacyanoferrate derivatized electrodes under potential control in an electrochemical cell. Line-imaging Raman spectroscopy is used to collect 256 contiguous Raman spectra every ∼5 µm from thin films (ca. 80 nm) formed by electrochemical derivatization of nickel electrodes. The cyanide stretching region of the Raman spectrum of the film is shown to be sensitive to iron oxidation state and is modeled by both univariate and multivariate correlations. Although both correlations fit the calibration set well, the multivariate (principal component regression, PCR) model's predictions of oxidation state are less sensitive to noise in the spectrum, yielding a much smoother oxidation state profile than the univariate model. Oxidation state profiles with a spatial resolution of approximately 5 µm are shown for a nickel hexacyanoferrate derivatized electrode in reduced, intermediate, and oxidized states. In situ oxidation state profiles indicate that the 647.1 nm laser illumination photo-oxidizes the derivatized electrodes. This observation is confirmed using photoelectrochemical methods.
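A principal component regression of the kind used above (project the spectra onto a few principal components, then regress the property on the scores) can be sketched in numpy. The two-band toy spectra, noise level, and number of components are illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 60, 120
# toy cyanide-stretch spectra: two band shapes mixing with oxidation state f
band_red = np.exp(-0.5 * ((np.arange(p) - 45) / 6.0) ** 2)
band_ox = np.exp(-0.5 * ((np.arange(p) - 70) / 6.0) ** 2)
f = rng.uniform(0, 1, n)                       # fraction oxidized
X = (np.outer(1 - f, band_red) + np.outer(f, band_ox)
     + 0.02 * rng.normal(size=(n, p)))

# PCR: project spectra onto the first k principal components, then regress
k = 3
Xc, yc = X - X.mean(axis=0), f - f.mean()
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
T = Xc @ Vt[:k].T                              # PC scores
q, *_ = np.linalg.lstsq(T, yc, rcond=None)
b = Vt[:k].T @ q                               # regression vector in channel space

f_hat = Xc @ b + f.mean()
rmse = np.sqrt(np.mean((f - f_hat) ** 2))
print(round(rmse, 3))
```

Truncating to k components is what gives PCR its noise insensitivity: channel noise orthogonal to the retained components never reaches the prediction, consistent with the smoother profiles reported above.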
A Development of the Calibration Tool Applied on Analog I/O Modules for Safety-related Controller
International Nuclear Information System (INIS)
Kim, Jong-Kyun; Yun, Dong-Hwa; Lee, Myeong-Kyun; Yoo, Kwan-Woo
2016-01-01
The purpose of this paper is to develop a calibration tool for analog input/output (I/O) modules. These modules are components of POSAFE-Q, a programmable logic controller (PLC) that has been developed for safety-related applications. In this paper, performance improvement of the analog I/O modules is achieved by developing and applying a calibration tool for each channel of the analog I/O modules. With this tool, the input signal to an analog input module and the output signal from an analog output module are made to satisfy the reference value of the sensor type and the required accuracy of all modules. Using RS-232 communication, the manual calibration tool was developed for the analog I/O modules of the existing and up-to-date versions of the POSAFE-Q PLC. As a result of applying this tool, the converted values meet the requirements of each input sensor type and the accuracy specification of the analog I/O modules
Bauza, María C; Ibañez, Gabriela A; Tauler, Romà; Olivieri, Alejandro C
2012-10-16
A new equation is derived for estimating the sensitivity when the multivariate curve resolution-alternating least-squares (MCR-ALS) method is applied to second-order multivariate calibration data. The validity of the expression is substantiated by extensive Monte Carlo noise addition simulations. The multivariate selectivity can be derived from the new sensitivity expression. Other important figures of merit, such as limit of detection, limit of quantitation, and concentration uncertainty of MCR-ALS quantitative estimations can be easily estimated from the proposed sensitivity expression and the instrumental noise. An experimental example involving the determination of an analyte in the presence of uncalibrated interfering agents is described in detail, involving second-order time-decaying sensitized lanthanide luminescence excitation spectra. The estimated figures of merit are reasonably correlated with the analytical features of the analyzed experimental system.
Rock models at Zielona Gora, Poland applied to the semi-empirical neutron tool calibration
International Nuclear Information System (INIS)
Czubek, J.A.; Ossowski, A.; Zorski, T.; Massalski, T.
1995-01-01
The semi-empirical calibration method applied to the neutron porosity tool is presented in this paper. It was used with the ODSN-102 tool, of 70 mm diameter and equipped with an Am-Be neutron source, at the calibration facility of Zielona Gora, Poland, inside natural and artificial rocks: four sandstone, four limestone and one dolomite block with borehole diameters of 143 and 216 mm, and three artificial ceramic blocks with borehole diameters of 90 and 180 mm. All blocks were saturated with fresh water, and fresh water was also present inside all boreholes. In five blocks, mineralized water (200,000 ppm NaCl) was introduced inside the boreholes. All neutron characteristics of the calibration blocks are given in this paper. The semi-empirical method of calibration correlates the tool readings observed experimentally with the general neutron parameter (GNP). This results in a general calibration curve, where the tool readings (TR) vs. GNP fall on a single curve irrespective of their origin, i.e. of the formation lithology, borehole diameter, tool stand-off, brine salinity, etc. The n and m power coefficients are obtained experimentally during the calibration procedure. The apparent neutron parameters are defined as those sensed by a neutron tool situated inside the borehole in real environmental conditions. When they are known, the GNP can be computed analytically for the whole range of porosity for any borehole diameter, formation lithology (including variable rock matrix absorption cross-section and density), borehole and formation salinity, tool stand-off and drilling fluid physical parameters. With this approach, all porosity corrections with respect to the standard (e.g. limestone) calibration curve can be generated. (author)
Directory of Open Access Journals (Sweden)
Mohammad-Reza Rashidi
2011-06-01
Full Text Available Introduction: 6-Mercaptopurine (6MP) is an important chemotherapeutic drug in the conventional treatment of childhood acute lymphoblastic leukemia (ALL). It is catabolized to 6-thiouric acid (6TUA) through 8-hydroxy-6-mercaptopurine (8OH6MP) or 6-thioxanthine (6TX) intermediates. Methods: High-performance liquid chromatography (HPLC) is usually used to determine the contents of therapeutic drugs, metabolites and other important biomedical analytes in biological samples. In the present study, the multivariate calibration methods partial least squares (PLS-1) and principal component regression (PCR) were developed and validated for the simultaneous determination of 6MP and its oxidative metabolites (6TUA, 8OH6MP and 6TX) without analyte separation in spiked human plasma. Mixtures of 6MP, 8OH6MP, 6TX and 6TUA were resolved by applying PLS-1 and PCR to their UV spectra. Results: Recoveries (%) obtained for 6MP, 8OH6MP, 6TX and 6TUA were 94.5-97.5, 96.6-103.3, 95.1-96.9 and 93.4-95.8, respectively, using PLS-1, and 96.7-101.3, 96.2-98.8, 95.8-103.3 and 94.3-106.1, respectively, using PCR. The net analyte signal (NAS) concept was used to calculate multivariate analytical figures of merit such as limit of detection (LOD), selectivity and sensitivity. The limits of detection for 6MP, 8OH6MP, 6TX and 6TUA were calculated to be 0.734, 0.439, 0.797 and 0.482 µmol L-1, respectively, using PLS, and 0.724, 0.418, 0.783 and 0.535 µmol L-1, respectively, using PCR. HPLC was also applied as a validation method for the simultaneous determination of these thiopurines in synthetic solutions and human plasma. Conclusion: The combination of spectroscopic techniques and chemometric methods (PLS and PCR) provides a simple but powerful method for the simultaneous analysis of multicomponent mixtures.
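The net analyte signal figures of merit mentioned above can be sketched as follows: the NAS of an analyte is the part of its pure spectrum orthogonal to the space spanned by the other components, and the LOD is then roughly 3 times the noise level divided by the NAS-based sensitivity. All spectra, band positions, and the noise level below are illustrative assumptions:

```python
import numpy as np

# hypothetical pure UV spectra (absorbance per µmol/L) on a 60-channel grid
p = 60
grid = np.arange(p)
def band(c, w):
    return np.exp(-0.5 * ((grid - c) / w) ** 2)

s_analyte = 0.010 * band(25, 5)                             # e.g. 6MP
S_others = np.c_[0.012 * band(32, 6), 0.008 * band(18, 7)]  # overlapping metabolites

# net analyte signal: remove everything the other components can explain
P_others = S_others @ np.linalg.pinv(S_others)
nas = s_analyte - P_others @ s_analyte

# NAS-based figures of merit
sensitivity = np.linalg.norm(nas)          # AU per µmol/L
selectivity = sensitivity / np.linalg.norm(s_analyte)
sigma_noise = 0.002                        # assumed instrumental noise (AU)
lod = 3 * sigma_noise / sensitivity        # µmol/L
print(round(selectivity, 2), round(lod, 2))
```

Heavier spectral overlap shrinks the NAS, so selectivity drops toward zero and the LOD grows, which is the mechanism behind the per-analyte LODs reported above.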
Sessa, Clarimma; Bagán, Héctor; García, Jose Francisco
2014-10-01
Mid-infrared fiber-optic reflectance spectroscopy (mid-IR FORS) is a very interesting technique for artwork characterization. However, the fact that the spectra obtained are a mixture of surface (specular) and volume (diffuse) reflection is a significant drawback. The physical and chemical features of the artwork surface may produce distortions in the spectra that hinder comparison with reference databases acquired in transmission mode. Several studies have attempted to understand the influence of the different variables and to propose procedures that improve the interpretation of the spectra. This article focuses on the application of mid-IR FORS and multivariate calibration to the analysis of easel paintings. The objectives are the evaluation of the influence of surface roughness on the spectra, the influence of the matrix composition on the classification of unknown spectra, and the capability of obtaining pigment composition mappings. A first evaluation of a fast procedure for spectra management and pigment discrimination is discussed. The results demonstrate the capability of multivariate methods, principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA), to model the distortions of the reflectance spectra and to delimit and discriminate areas of uniform composition. The roughness of the painting surface is found to be an important factor affecting the shape and relative intensity of the spectra. A mapping of the major pigments of a painting is possible using mid-IR FORS and PLS-DA when the calibration set is a palette that includes the potential pigments present in the artwork, mixed with the appropriate binder, and that shows the different paint textures.
Energy Technology Data Exchange (ETDEWEB)
Tripathi, Markandey M.; Krishnan, Sundar R.; Srinivasan, Kalyan K.; Yueh, Fang-Yu; Singh, Jagdish P.
2011-09-07
Chemiluminescence emissions from OH*, CH*, C2, and CO2 formed within the reaction zone of premixed flames depend upon the fuel-air equivalence ratio in the burning mixture. In the present paper, a new partial least squares regression (PLS-R) based multivariate sensing methodology is investigated and compared with an OH*/CH* intensity ratio-based calibration model for sensing the equivalence ratio in atmospheric methane-air premixed flames. Five replications of spectral data at nine different equivalence ratios ranging from 0.73 to 1.48 were used in the calibration of both models. During model development, the PLS-R model was initially validated against the calibration data set using the leave-one-out cross validation technique. Since the PLS-R model used the entire raw spectral intensities, it did not need the nonlinear background subtraction of CO2 emission that is required for typical OH*/CH* intensity ratio calibrations. An unbiased spectral data set (not used in the PLS-R model development), covering 28 different equivalence ratio conditions ranging from 0.71 to 1.67, was used to predict equivalence ratios with the PLS-R and intensity ratio calibration models. It was found that the equivalence ratios predicted with the PLS-R based multivariate calibration model matched the experimentally measured equivalence ratios within 7%, whereas the OH*/CH* intensity ratio calibration grossly underpredicted equivalence ratios in comparison to measured values, especially under rich conditions (equivalence ratios > 1.2). The practical implications of the chemiluminescence-based multivariate equivalence ratio sensing methodology are also discussed.
Braga, Jez Willian B; Poppi, Ronei Jesus
2004-08-01
Polymorphism is an important property in the quality control of pharmaceutical products. In this regard, partial least squares regression and the net analyte signal were used to build and validate a multivariate calibration model, using diffuse reflectance infrared spectroscopy in the region of 900-1100 cm-1, for the determination of the polymorphic purity of carbamazepine. Physical mixtures of the polymorphs were made by weight, from 80 to 100% (w/w) form III mixed with form I. Figures of merit, such as sensitivity, analytical sensitivity, selectivity, confidence limits, precision (mean, repeatability, intermediate), accuracy, and signal-to-noise ratio, were calculated. Feasible results were obtained, with a maximum absolute error of 2% and an average error of 0.53%, indicating that the proposed methodology can be used by the pharmaceutical industry as an alternative to X-ray diffraction (the United States Pharmacopoeia method). Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association J Pharm Sci 93:2124-2134, 2004
Krieger, A.; Catherall, R.; Hochschulz, F.; Kramer, J.; Neugart, R.; Rosendahl, S.; Schipper, J.; Siesling, E.; Weinheimer, Ch.; Yordanov, D.T.; Nortershauser, W.
2011-01-01
A high-voltage divider with accuracy at the ppm level and collinear laser spectroscopy were used to calibrate the high-voltage installation at the radioactive ion beam facility ISOLDE at CERN. The accurate knowledge of this voltage is particularly important for collinear laser spectroscopy measurements. Beam velocity measurements using frequency-comb based collinear laser spectroscopy agree with the new calibration. Applying this, one obtains consistent results for isotope shifts of stable magnesium isotopes measured using collinear spectroscopy and laser spectroscopy on laser-cooled ions in a trap. The long-term stability and the transient behavior during recovery from a voltage dropout were investigated for the different power supplies currently applied at ISOLDE.
Calibration methodology for proportional counters applied to yield measurements of a neutron burst
Energy Technology Data Exchange (ETDEWEB)
Tarifeño-Saldivia, Ariel, E-mail: atarifeno@cchen.cl, E-mail: atarisal@gmail.com; Pavez, Cristian; Soto, Leopoldo [Comisión Chilena de Energía Nuclear, Casilla 188-D, Santiago (Chile); Center for Research and Applications in Plasma Physics and Pulsed Power, P4, Santiago (Chile); Departamento de Ciencias Fisicas, Facultad de Ciencias Exactas, Universidad Andres Bello, Republica 220, Santiago (Chile); Mayer, Roberto E. [Instituto Balseiro and Centro Atómico Bariloche, Comisión Nacional de Energía Atómica and Universidad Nacional de Cuyo, San Carlos de Bariloche R8402AGP (Argentina)
2014-01-15
This paper introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. The methodology is to be applied when single neutron events cannot be resolved in time by standard nuclear electronics, or when a continuous current cannot be measured at the output of the counter. It is based on the calibration of the counter in pulse mode and on a statistical model that estimates the number of detected events from the charge accumulated during the detection of the burst. The model is developed and presented in full detail. The implementation of the methodology for the measurement of fast neutron yields from plasma focus experiments using a moderated proportional counter is discussed, and an experimental verification of its accuracy is presented. Compared with previous calibration methods, this methodology improves the accuracy of the detection system by more than one order of magnitude.
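The core statistical step, recovering the number of detected events from the charge accumulated during the burst, can be sketched as follows. The charge distribution, its parameters, and the burst size are illustrative assumptions, not the authors' calibration data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pulse-mode calibration: simulated charge per single detected neutron.
# The ~2 pC scale and spread are assumed, illustrative values.
single_event_charge = rng.normal(loc=2.0e-12, scale=0.4e-12, size=5000)  # C
q_mean = single_event_charge.mean()
q_std = single_event_charge.std(ddof=1)

def events_from_charge(total_charge):
    """Estimate the number of detected events from the accumulated charge.

    Model: the burst charge is the sum of N i.i.d. single-event charges,
    so N is estimated as Q / q_mean, with standard error sqrt(N) q_std / q_mean.
    """
    n_hat = total_charge / q_mean
    n_err = np.sqrt(n_hat) * q_std / q_mean
    return n_hat, n_err

# A simulated burst in which exactly 1000 neutrons are detected.
burst_charge = rng.normal(2.0e-12, 0.4e-12, size=1000).sum()
n_hat, n_err = events_from_charge(burst_charge)
```

Because the relative error falls as 1/sqrt(N), charge integration becomes increasingly accurate for large bursts.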
Energy Technology Data Exchange (ETDEWEB)
Cocco, Lilian Cristina; Yamamoto, Carlos Itsuo [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Lab. de Analise de Combustiveis Automotivos (LACAUTets)
2008-07-01
This work describes the development of mathematical models applying multivariate calibration to ATR infrared spectra of 128 gasoline samples with diverse chemical compositions, collected over a period of two and a half years. The infrared spectra were used to assemble the input matrix for the modeling, whereas the standardized assays and gas chromatography supplied the output matrices. Ninety samples were used for training and 38 for testing. In order to calibrate the chemical composition obtained from chromatography, mass spectrometry and chemical ionization were used to identify unknown substances and improve the fit of the mathematical models. Two hundred and ninety substances were detected and identified, of which 100 were previously unknown. Six PLS/PCR models were obtained to predict properties such as specific mass, Reid vapor pressure, and the T10, T50, T90 and PFE (final boiling point) points of the distillation curve. Another six PLS/PCR models were obtained to predict the amounts of aromatics, paraffins, isoparaffins, naphthenes, olefins and oxygenates. In general, the mathematical models showed good training fits, with correlation coefficients ranging from 0.975 (T10) up to a maximum of 0.998 (naphthenes), and they are able to forecast the average chemical composition and properties of interest of gasoline with acceptable prediction errors. (author)
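A PCR model of the kind described above can be sketched with synthetic data; the 90/38 split mirrors the abstract, but the "spectra" and property values below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup mirroring the 90/38 split: noisy "spectra" over 200
# channels, one absorbing band whose strength tracks composition, and a
# correlated property (e.g. specific mass); all values are invented.
n_train, n_test, n_wl = 90, 38, 200
conc = rng.uniform(0.0, 1.0, n_train + n_test)
profile = np.exp(-0.5 * ((np.arange(n_wl) - 80) / 15.0) ** 2)
X = np.outer(conc, profile) + 0.01 * rng.standard_normal((n_train + n_test, n_wl))
y = 700.0 + 50.0 * conc

def pcr_fit(X, y, k):
    """Principal component regression: OLS on the first k PCA scores."""
    xm, ym = X.mean(axis=0), y.mean()
    _, _, Vt = np.linalg.svd(X - xm, full_matrices=False)
    T = (X - xm) @ Vt[:k].T                       # scores
    coef = np.linalg.lstsq(T, y - ym, rcond=None)[0]
    return xm, ym, Vt[:k], coef

def pcr_predict(model, X_new):
    xm, ym, V, coef = model
    return ym + (X_new - xm) @ V.T @ coef

model = pcr_fit(X[:n_train], y[:n_train], k=3)
pred = pcr_predict(model, X[n_train:])
r = np.corrcoef(pred, y[n_train:])[0, 1]          # test-set correlation
```

The same train/test structure carries over to PLS; only the way the latent directions are chosen differs.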
Calibration factor or calibration coefficient?
International Nuclear Information System (INIS)
Meghzifene, A.; Shortt, K.R.
2002-01-01
Full text: The IAEA/WHO network of SSDLs was set up in order to establish links between SSDL members and the international measurement system. At the end of 2001, there were 73 network members in 63 Member States. The SSDL network members provide calibration services to end-users at the national or regional level. The results of the calibrations are summarized in a document called a calibration report or calibration certificate. The IAEA has been using the term calibration certificate and will continue using the same terminology. The most important information in a calibration certificate is a list of calibration factors and their related uncertainties that apply to the calibrated instrument for well-defined irradiation and ambient conditions. The IAEA has recently decided to change the term calibration factor to calibration coefficient, to be fully in line with ISO [ISO 31-0], which recommends the use of the term coefficient when it links two quantities A and B that have different dimensions. The term factor should be reserved for a dimensionless k linking two quantities A and B of the same dimensions, A = k·B. However, in a typical calibration, an ion chamber is calibrated in terms of a physical quantity such as air kerma, dose to water, or ambient dose equivalent. If the chamber is calibrated together with its electrometer, then the calibration refers to the physical quantity to be measured per electrometer unit reading. In this case, the quantities related have different dimensions. The adoption by the Agency of the term coefficient to express the results of calibrations is consistent with the 'International vocabulary of basic and general terms in metrology' prepared jointly by the BIPM, IEC, ISO, OIML and other organizations. The BIPM has also changed from factor to coefficient. The authors believe that this is more than just a matter of semantics and recommend that the SSDL network members adopt this change in terminology. (author)
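A minimal numeric illustration of the distinction, with assumed values: the coefficient N_K below carries units (mGy/nC) because it links quantities of different dimensions, unlike a dimensionless factor.

```python
# Hypothetical reading and standard value; the point is purely dimensional.
K_standard = 10.0   # mGy, air kerma delivered by the standard (assumed)
M_reading = 198.7   # nC, electrometer reading (assumed)

# Calibration *coefficient*: links quantities of different dimensions (mGy/nC).
N_K = K_standard / M_reading

def air_kerma(reading_nC):
    """Convert an electrometer reading (nC) to air kerma (mGy) via N_K."""
    return N_K * reading_nC
```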
Dinç, Erdal; Ustündağ, Ozgür; Baleanu, Dumitru
2010-08-01
The sole use of pyridoxine hydrochloride during treatment of tuberculosis gives rise to pyridoxine deficiency. Therefore, a combination of pyridoxine hydrochloride and isoniazid is used in pharmaceutical dosage forms for tuberculosis treatment to reduce this side effect. In this study, two chemometric methods, partial least squares (PLS) and principal component regression (PCR), were applied to the simultaneous determination of pyridoxine (PYR) and isoniazid (ISO) in their tablets. A concentration training set comprising 20 different combinations of binary mixtures of PYR and ISO was randomly prepared in 0.1 M HCl. Both multivariate calibration models were constructed using the relationships between the concentration data matrix and the absorbance data matrix in the spectral region 200-330 nm. The accuracy and precision of the proposed chemometric methods were validated by analyzing synthetic mixtures containing the investigated drugs. The recoveries obtained by applying the PCR and PLS calibrations to the artificial mixtures were between 100.0 and 100.7%. Satisfactory results were also obtained when the PLS and PCR methods were applied to commercial samples. These results strongly encourage the use of both methods for the quality control and routine analysis of marketed tablets containing the PYR and ISO drugs. Copyright © 2010 John Wiley & Sons, Ltd.
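A minimal NIPALS PLS1 sketch on synthetic binary-mixture UV spectra (hypothetical band shapes, concentrations and noise level, not the published data) shows how such recoveries are computed:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical pure UV spectra of PYR and ISO over 200-330 nm and 20 random
# binary training mixtures; all band parameters are invented for illustration.
wl = np.linspace(200, 330, 131)
band = lambda centre, width: np.exp(-0.5 * ((wl - centre) / width) ** 2)
s_pyr, s_iso = band(290, 12), band(262, 10)

c_pyr = rng.uniform(2.0, 10.0, 20)
c_iso = rng.uniform(2.0, 10.0, 20)
A = (np.outer(c_pyr, s_pyr) + np.outer(c_iso, s_iso)
     + 0.002 * rng.standard_normal((20, wl.size)))

def pls1_fit(X, y, n_comp):
    """Minimal NIPALS PLS1: returns the centring terms and regression vector."""
    xm, ym = X.mean(axis=0), y.mean()
    Xr, yr = X - xm, y - ym
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)
        t = Xr @ w
        tt = t @ t
        p = Xr.T @ t / tt
        q.append((yr @ t) / tt)
        Xr -= np.outer(t, p)        # deflate X
        yr -= q[-1] * t             # deflate y
        W.append(w)
        P.append(p)
    W, P = np.array(W).T, np.array(P).T
    b = W @ np.linalg.solve(P.T @ W, np.array(q))
    return xm, ym, b

xm, ym, b = pls1_fit(A, c_pyr, n_comp=2)
recovery = 100.0 * (ym + (A - xm) @ b) / c_pyr   # percent recovery, training set
```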
Burgués, Javier; Marco, Santiago
2018-08-17
Metal oxide semiconductor (MOX) sensors are usually temperature-modulated and calibrated with multivariate models such as partial least squares (PLS) to increase the inherent low selectivity of this technology. The multivariate sensor response patterns exhibit heteroscedastic and correlated noise, which suggests that maximum likelihood methods should outperform PLS. One contribution of this paper is the comparison between PLS and maximum likelihood principal components regression (MLPCR) in MOX sensors. PLS is often criticized for its lack of interpretability when the model complexity increases beyond the chemical rank of the problem. This happens in MOX sensors due to cross-sensitivities to interferences, such as temperature or humidity, and to non-linearity. Additionally, the estimation of fundamental figures of merit, such as the limit of detection (LOD), is still not standardized for multivariate models. Orthogonalization methods, such as orthogonal projection to latent structures (O-PLS), have been successfully applied in other fields to reduce the complexity of PLS models. In this work, we propose a LOD estimation method based on applying the well-accepted univariate LOD formulas to the scores of the first component of an orthogonal PLS model. The resulting LOD is compared to the multivariate LOD range derived from error propagation. The methodology is applied to data extracted from temperature-modulated MOX sensors (FIS SB-500-12 and Figaro TGS 3870-A04), aiming at the detection of low concentrations of carbon monoxide in the presence of uncontrolled humidity (chemical noise). We found that PLS models were simpler and more accurate than MLPCR models. Average LOD values of 0.79 ppm (FIS) and 1.06 ppm (Figaro) were found using the approach described in this paper. These values were contained within the LOD ranges obtained with the error-propagation approach. The mean LOD increased to 1.13 ppm (FIS) and 1.59 ppm (Figaro) when considering validation samples.
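The proposed LOD estimation reduces to the univariate formula applied to latent-variable scores. A sketch with invented score data (not the FIS/Figaro measurements):

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented surrogate data: scores of the first (predictive) latent variable of
# an orthogonalized PLS model, regressed against CO concentration in ppm.
conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
scores = 0.9 * conc + 0.05 * rng.standard_normal(conc.size)
slope = np.polyfit(conc, scores, 1)[0]        # score-vs-concentration slope

# Replicate blank measurements projected onto the same component.
blank_scores = 0.05 * rng.standard_normal(30)

# Well-accepted univariate formula applied to the scores:
# LOD = 3.3 * s_blank / slope
lod = 3.3 * blank_scores.std(ddof=1) / slope
```

The appeal of the approach is exactly this reduction: once the orthogonal components have absorbed the interferences, a single predictive score behaves like a univariate signal.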
Multivariate techniques of analysis for ToF-E recoil spectrometry data
Energy Technology Data Exchange (ETDEWEB)
Whitlow, H.J.; Bouanani, M.E.; Persson, L.; Hult, M.; Jonsson, P.; Johnston, P.N. [Lund Institute of Technology, Solvegatan, (Sweden), Department of Nuclear Physics; Andersson, M. [Uppsala Univ. (Sweden). Dept. of Organic Chemistry; Ostling, M.; Zaring, C. [Royal institute of Technology, Electrum, Kista, (Sweden), Department of Electronics; Johnston, P.N.; Bubb, I.F.; Walker, B.R.; Stannard, W.B. [Royal Melbourne Inst. of Tech., VIC (Australia); Cohen, D.D.; Dytlewski, N. [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia)
1996-12-31
Multivariate statistical methods are being developed by the Australian -Swedish Recoil Spectrometry Collaboration for quantitative analysis of the wealth of information in Time of Flight (ToF) and energy dispersive Recoil Spectrometry. An overview is presented of progress made in the use of multivariate techniques for energy calibration, separation of mass-overlapped signals and simulation of ToF-E data. 6 refs., 5 figs.
International Nuclear Information System (INIS)
Stripling, H.F.; McClarren, R.G.; Kuranz, C.C.; Grosskopf, M.J.; Rutter, E.; Torralva, B.R.
2011-01-01
We present a method for calibrating the uncertain inputs to a computer model using available experimental data. The goal of the procedure is to produce posterior distributions of the uncertain inputs such that when samples from the posteriors are used as inputs to future model runs, the model is more likely to replicate (or predict) the experimental response. The calibration is performed by sampling the space of the uncertain inputs, using the computer model (or, more likely, an emulator for the computer model) to assign weights to the samples, and applying the weights to produce the posterior distributions and generate predictions of new experiments within confidence bounds. The method is similar to Markov chain Monte Carlo (MCMC) calibration methods with independent sampling, except that we generate samples beforehand and replace the candidate acceptance routine with a weighting scheme. We apply our method to the calibration of a Hyades 2D model of laser energy deposition in beryllium. We employ a Bayesian Multivariate Adaptive Regression Splines (BMARS) emulator as a surrogate for Hyades 2D. We treat a range of uncertainties in our system, including uncertainties in the experimental inputs, experimental measurement error, and systematic experimental timing errors. The results of the calibration are posterior distributions that agree with intuition, improve the accuracy, and decrease the uncertainty of experimental predictions. (author)
A calibration and data assimilation method using the Bayesian MARS emulator
International Nuclear Information System (INIS)
Stripling, H.F.; McClarren, R.G.; Kuranz, C.C.; Grosskopf, M.J.; Rutter, E.; Torralva, B.R.
2013-01-01
Highlights: We outline a transparent, flexible method for the calibration of uncertain inputs to computer models. We account for model, data, emulator, and measurement uncertainties. The method produces improved predictive results, which are validated using leave-one-out experiments. Our implementation leverages the Bayesian MARS emulator, but any emulator may be substituted. Abstract: We present a method for calibrating the uncertain inputs to a computer model using available experimental data. The goal of the procedure is to estimate the posterior distribution of the uncertain inputs such that when samples from the posterior are used as inputs to future model runs, the model is more likely to replicate (or predict) the experimental response. The calibration is performed by sampling the space of the uncertain inputs, using the computer model (or, more likely, an emulator for the computer model) to assign weights to the samples, and applying the weights to produce the posterior distributions and generate predictions of new experiments with confidence bounds. The method is similar to Metropolis–Hastings calibration methods with independently sampled updates, except that we generate samples beforehand and replace the candidate acceptance routine with a weighting scheme. We apply our method to the calibration of a Hyades 2D model of laser energy deposition in beryllium. We employ a Bayesian Multivariate Adaptive Regression Splines (BMARS) emulator as a surrogate for Hyades 2D. We treat a range of uncertainties in our application, including uncertainties in the experimental inputs, experimental measurement error, and systematic experimental timing errors. The resulting posterior distributions agree with our existing intuition, and we validate the results by performing a series of leave-one-out predictions. We find that the calibrated predictions are considerably more accurate and less uncertain than blind sampling of the forward model alone.
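The sample-then-weight idea (in place of an MCMC acceptance step) can be sketched on a toy one-parameter problem; the linear "emulator", observation, and error below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in for the emulator: the model response is linear in a single
# uncertain input theta (the real case would call a BMARS surrogate).
def emulator(theta):
    return 2.0 * theta + 1.0

y_obs, sigma = 7.0, 0.5   # assumed experimental response and measurement error

# Generate prior samples beforehand, then weight instead of accept/reject.
theta = rng.uniform(0.0, 6.0, 20000)
resid = y_obs - emulator(theta)
w = np.exp(-0.5 * (resid / sigma) ** 2)   # Gaussian likelihood weights
w /= w.sum()

post_mean = np.sum(w * theta)
post_std = np.sqrt(np.sum(w * (theta - post_mean) ** 2))
# Analytically, theta | y_obs is N(3, 0.25) in this toy problem, so the
# weighted summaries should recover a mean near 3 and an sd near 0.25.
```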
de Oliveira Neves, Ana Carolina; Soares, Gustavo Mesquita; de Morais, Stéphanie Cavalcante; da Costa, Fernanda Saadna Lopes; Porto, Dayanne Lopes; de Lima, Kássio Michell Gomes
2012-01-05
This work used near-infrared spectroscopy (NIRS) and multivariate calibration to measure the percentage of drug dissolution of four active pharmaceutical ingredients (APIs) (isoniazid, rifampicin, pyrazinamide and ethambutol) in finished pharmaceutical products produced at the Federal University of Rio Grande do Norte (Brazil). The conventional analytical method employed in quality control tests of dissolution by the pharmaceutical industry is high-performance liquid chromatography (HPLC). NIRS is a reliable method that offers important advantages for the large-scale production of tablets and for non-destructive analysis. NIR spectra of 38 samples (in triplicate) were measured using a Bomem FT-NIR 160 MB in the range 1100-2500 nm. Each spectrum was the average of 50 scans obtained in diffuse reflectance mode. The dissolution test, which was initially carried out in 900 mL of 0.1 N hydrochloric acid at 37±0.5°C, was used to determine the percentage of drug dissolved from each tablet, measured at the same time interval (45 min) at pH 6.8. The measurement of the four APIs was performed by HPLC (Shimadzu, Japan) in gradient mode. The influence of various spectral pretreatments (Savitzky-Golay smoothing, multiplicative scatter correction (MSC), and Savitzky-Golay derivatives) and the multivariate analysis using the partial least squares (PLS) regression algorithm were evaluated with The Unscrambler 9.8 (Camo) software. The correlation coefficients (R²) for the HPLC determinations versus the predicted values (NIRS) ranged from 0.88 to 0.98. The root-mean-square errors of prediction (RMSEP) obtained from the PLS models were 9.99%, 8.63%, 8.57% and 9.97% for isoniazid, rifampicin, ethambutol and pyrazinamide, respectively, indicating that the NIR method is an effective and non-destructive tool for the measurement of drug dissolution from tablets. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
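The spectral pretreatments named above can be sketched as follows; the MSC implementation and the synthetic NIR-like spectra are illustrative, with `scipy.signal.savgol_filter` standing in for the Savitzky-Golay step:

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(5)

def msc(X, reference=None):
    """Multiplicative scatter correction: regress each spectrum on the mean
    (or a given reference) spectrum and remove the fitted slope and offset."""
    ref = X.mean(axis=0) if reference is None else reference
    out = np.empty_like(X)
    for i, x in enumerate(X):
        slope, offset = np.polyfit(ref, x, 1)
        out[i] = (x - offset) / slope
    return out

# Synthetic NIR-like spectra over 1100-2500 nm: one common band per sample,
# plus per-sample multiplicative/additive scatter (all values invented).
wl = np.linspace(1100.0, 2500.0, 350)
base = np.exp(-0.5 * ((wl - 1900.0) / 120.0) ** 2)
X = np.array([a * base + b for a, b in zip(rng.uniform(0.8, 1.2, 10),
                                           rng.uniform(-0.1, 0.1, 10))])

Xs = savgol_filter(X, window_length=11, polyorder=2, axis=1)  # SG smoothing
Xc = msc(Xs)                                                  # MSC
spread_before = X.std(axis=0).max()
spread_after = Xc.std(axis=0).max()
```

On this idealized data the scatter is exactly linear in the reference spectrum, so MSC collapses the sample-to-sample spread essentially to zero; real spectra are corrected only approximately.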
Miaw, Carolina Sheng Whei; Assis, Camila; Silva, Alessandro Rangel Carolino Sales; Cunha, Maria Luísa; Sena, Marcelo Martins; de Souza, Scheilla Vitorino Carvalho
2018-07-15
Grape, orange, peach and passion fruit nectars were formulated and adulterated by dilution with syrup, apple and cashew juices at 10 levels for each adulterant. Attenuated total reflectance Fourier transform mid-infrared (ATR-FTIR) spectra were obtained. Partial least squares (PLS) multivariate calibration models allied to different variable selection methods, such as interval partial least squares (iPLS), ordered predictors selection (OPS) and genetic algorithm (GA), were used to quantify the main fruit contents. PLS improved by iPLS-OPS variable selection showed the highest predictive capacity. The number of selected variables in the final models varied from 72 to 100; the root mean square errors of prediction were estimated from 0.5 to 2.6%; the correlation coefficients of prediction ranged from 0.948 to 0.990; and the mean relative errors of prediction varied from 3.0 to 6.7%. All of the developed models were validated. Copyright © 2018 Elsevier Ltd. All rights reserved.
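The reported figures (RMSEP, correlation coefficient of prediction, mean relative error) can be computed with a small helper; the reference and predicted values in the example are invented:

```python
import numpy as np

def prediction_metrics(y_true, y_pred):
    """RMSEP, prediction correlation coefficient, and mean relative error (%)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    rmsep = np.sqrt(np.mean((y_true - y_pred) ** 2))
    r = np.corrcoef(y_true, y_pred)[0, 1]
    mre = 100.0 * np.mean(np.abs(y_true - y_pred) / y_true)
    return rmsep, r, mre

# Invented reference vs. predicted fruit contents (%).
rmsep, r, mre = prediction_metrics([10.0, 20.0, 30.0, 40.0],
                                   [11.0, 19.0, 31.0, 39.0])
```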
Simultaneous calibration of ensemble river flow predictions over an entire range of lead times
Hemri, S.; Fundel, F.; Zappa, M.
2013-10-01
Probabilistic estimates of future water levels and river discharge are usually simulated with hydrologic models using ensemble weather forecasts as main inputs. As hydrologic models are imperfect and the meteorological ensembles tend to be biased and underdispersed, the ensemble forecasts for river runoff are typically biased and underdispersed, too. Thus, in order to achieve both reliable and sharp predictions, statistical postprocessing is required. In this work, Bayesian model averaging (BMA) is applied to statistically postprocess raw ensemble runoff forecasts for a catchment in Switzerland, at lead times ranging from 1 to 240 h. The raw forecasts have been obtained using deterministic and ensemble forcing meteorological models with different forecast lead time ranges. First, BMA is applied based on mixtures of univariate normal distributions, subject to the assumption of independence between distinct lead times. Then, the independence assumption is relaxed in order to estimate multivariate runoff forecasts over the entire range of lead times simultaneously, based on a BMA version that uses multivariate normal distributions. Since river runoff is a highly skewed variable, Box-Cox transformations are applied in order to achieve approximate normality. Both univariate and multivariate BMA approaches are able to generate well calibrated probabilistic forecasts that are considerably sharper than climatological forecasts. Additionally, multivariate BMA provides a promising approach for incorporating temporal dependencies into the postprocessed forecasts. Its major advantage over univariate BMA is an increase in reliability when the forecast system is changing due to model availability.
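The transform-then-combine idea can be sketched as follows; the Box-Cox lambda, the toy ensemble members, and the inverse-MSE weights are stand-ins for the EM-estimated BMA mixture in the paper:

```python
import numpy as np

rng = np.random.default_rng(6)

def boxcox(x, lam):
    """Box-Cox transform toward approximate normality (lambda assumed known)."""
    return np.log(x) if lam == 0 else (x ** lam - 1.0) / lam

# Toy data: observed runoff and two ensemble "members" with different bias
# and spread; all distributions are invented for illustration.
obs = rng.gamma(shape=3.0, scale=10.0, size=200)
f1 = obs * rng.lognormal(0.05, 0.15, 200)
f2 = obs * rng.lognormal(-0.10, 0.25, 200)

lam = 0.2
z, z1, z2 = boxcox(obs, lam), boxcox(f1, lam), boxcox(f2, lam)

# Crude BMA-style combination: inverse-MSE weights in transformed space
# stand in for the usual EM weight estimation.
mse = np.array([np.mean((z - z1) ** 2), np.mean((z - z2) ** 2)])
w = (1.0 / mse) / np.sum(1.0 / mse)
combined = w[0] * z1 + w[1] * z2          # BMA-like predictive mean
combined_mse = np.mean((z - combined) ** 2)
```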
International Nuclear Information System (INIS)
Berger, C.D.; Gupton, E.D.; Lane, B.H.; Miller, J.H.; Nichols, S.W.
1982-08-01
The ORNL Calibrations Facility is operated by the Instrumentation Group of the Industrial Safety and Applied Health Physics Division. Its primary purpose is to maintain radiation calibration standards for the calibration of ORNL health physics instruments and personnel dosimeters. This report includes a discussion of the radioactive sources and ancillary equipment in use, and a step-by-step procedure for the calibration of those survey instruments and personnel dosimeters in routine use at ORNL.
Pereira, Claudete Fernandes; Pasquini, Celio
2010-05-01
A flow system is proposed to produce a concentration perturbation in liquid samples, aiming at the generation of two-dimensional correlation near-infrared spectra. The system presents advantages in relation to batch systems employed for the same purpose: the experiments are accomplished in a closed system; application of perturbation is rapid and easy; and the experiments can be carried out with micro-scale volumes. The perturbation system has been evaluated in the investigation and selection of relevant variables for multivariate calibration models for the determination of quality parameters of gasoline, including ethanol content, MON (motor octane number), and RON (research octane number). The main advantage of this variable selection approach is the direct association between spectral features and chemical composition, allowing easy interpretation of the regression models.
Vial, Flavie; Wei, Wei; Held, Leonhard
2016-12-20
In an era of ubiquitous electronic collection of animal health data, multivariate surveillance systems (which concurrently monitor several data streams) should have a greater probability of detecting disease events than univariate systems. However, despite their limitations, univariate aberration detection algorithms are used in most active syndromic surveillance (SyS) systems because of their ease of application and interpretation. On the other hand, a stochastic modelling-based approach to multivariate surveillance offers more flexibility, allowing for the retention of historical outbreaks, for overdispersion and for non-stationarity. While such methods are not new, they have yet to be applied to animal health surveillance data. We applied an example of such a stochastic model, Held and colleagues' two-component model, to two multivariate animal health datasets from Switzerland. In our first application, multivariate time series of the number of laboratory test requests were derived from Swiss animal diagnostic laboratories. We compared the performance of the two-component model to parallel monitoring using an improved Farrington algorithm and found that both methods yield a satisfactorily low false alarm rate. Moreover, the calibration test of the two-component model on the one-step-ahead predictions proved satisfactory, making such an approach suitable for outbreak prediction. In our second application, the two-component model was applied to the multivariate time series of the number of cattle abortions and the number of test requests for bovine viral diarrhea (a disease that often results in abortions). We found a two-day lagged effect from the number of abortions to the number of test requests. We further compared the joint modelling and univariate modelling of the laboratory test request time series. The joint modelling approach showed evidence of superiority in terms of forecasting abilities. Stochastic modelling approaches offer the
Multivariate analysis: models and method
International Nuclear Information System (INIS)
Sanz Perucha, J.
1990-01-01
Data treatment techniques are increasingly used as computer methods become more widely accessible. Multivariate analysis comprises a group of statistical methods applied to the study of objects or samples characterized by multiple values. The final goal is decision making. The paper describes the models and methods of multivariate analysis.
Ye, Lanhan; Song, Kunlin; Shen, Tingting
2018-01-01
Fast detection of heavy metals is very important for ensuring the quality and safety of crops. Laser-induced breakdown spectroscopy (LIBS), coupled with uni- and multivariate analysis, was applied to the quantitative analysis of copper in three kinds of rice (Jiangsu rice, regular rice, and Simiao rice). For univariate analysis, three pre-processing methods were applied to reduce fluctuations: background normalization, the internal standard method, and the standard normal variate (SNV). Linear regression models showed a strong correlation between spectral intensity and Cu content, with R² greater than 0.97. The limit of detection (LOD) was around 5 ppm, lower than the tolerance limit for copper in foods. For multivariate analysis, partial least squares regression (PLSR) showed its advantage in extracting effective information for prediction, and its sensitivity reached 1.95 ppm, while support vector machine regression (SVMR) performed better in both calibration and prediction sets, where Rc² and Rp² reached 0.9979 and 0.9879, respectively. This study showed that LIBS can be considered a constructive tool for the quantification of copper contamination in rice. PMID:29495445
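Of the three pre-processing methods, SNV is the simplest to show. A sketch on synthetic LIBS-like spectra (invented line shape and gains) demonstrates how SNV removes shot-to-shot multiplicative fluctuations exactly:

```python
import numpy as np

rng = np.random.default_rng(7)

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum individually."""
    s = np.asarray(spectra, dtype=float)
    return (s - s.mean(axis=1, keepdims=True)) / s.std(axis=1, keepdims=True)

# Synthetic LIBS-like shots: a fixed baseline plus one emission line, scaled
# by a random shot-to-shot gain (line position and shapes are invented).
x = np.linspace(320.0, 330.0, 120)
spec = 0.2 + 0.05 * np.sin(x) + np.exp(-0.5 * ((x - 324.7) / 0.15) ** 2)
shots = np.array([g * spec for g in rng.uniform(0.5, 1.5, 6)])

corrected = snv(shots)
# A purely multiplicative fluctuation cancels exactly: every corrected shot
# is identical, while the raw shots differ by up to a factor of three.
```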
de Godoy, Luiz Antonio Fonseca; Hantao, Leandro Wang; Pedroso, Marcio Pozzobon; Poppi, Ronei Jesus; Augusto, Fabio
2011-08-05
The use of multivariate curve resolution (MCR) to build multivariate quantitative models using data obtained from comprehensive two-dimensional gas chromatography with flame ionization detection (GC×GC-FID) is presented and evaluated. The MCR algorithm presents some important features, such as the second-order advantage and the recovery of the instrumental response for each pure component after optimization by an alternating least squares (ALS) procedure. A model to quantify the essential oil of rosemary was built using a calibration set containing only known concentrations of the essential oil and cereal alcohol as solvent. A calibration curve correlating the concentration of the essential oil of rosemary and the instrumental response obtained from the MCR-ALS algorithm was obtained, and this calibration model was applied to predict the concentration of the oil in complex samples (mixtures of the essential oil, pineapple essence and commercial perfume). The values of the root mean square error of prediction (RMSEP) and of the root mean square error of the percentage deviation (RMSPD) obtained were 0.4% (v/v) and 7.2%, respectively. Additionally, a second model was built and used to evaluate the accuracy of the method. A model to quantify the essential oil of lemon grass was built and its concentration was predicted in the validation set and in real perfume samples. The RMSEP and RMSPD obtained were 0.5% (v/v) and 6.9%, respectively, and the concentration of the essential oil of lemon grass in the perfume agreed with the value reported by the manufacturer. The results indicate that the MCR algorithm is adequate to resolve the target chromatogram from the complex sample and to build multivariate models from GC×GC-FID data. Copyright © 2011 Elsevier B.V. All rights reserved.
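A bare-bones MCR-ALS loop illustrates the alternating least squares idea on a toy bilinear data set; the pure profiles are invented, and the loop is initialized near the true concentrations for brevity, where in practice one would use, e.g., purest-variable or evolving factor analysis estimates:

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy bilinear data D = C S^T: 8 mixtures of 2 components over 60 channels,
# with invented pure profiles and a little noise.
t = np.linspace(0.0, 1.0, 60)
S_true = np.c_[np.exp(-0.5 * ((t - 0.3) / 0.08) ** 2),
               np.exp(-0.5 * ((t - 0.6) / 0.10) ** 2)]
C_true = rng.uniform(0.1, 1.0, (8, 2))
D = C_true @ S_true.T + 0.001 * rng.standard_normal((8, 60))

def mcr_als(D, C0, n_iter=200):
    """Bare-bones MCR-ALS: alternate least-squares updates of spectra S and
    concentrations C (with non-negativity by clipping) from an initial C."""
    C = C0.copy()
    for _ in range(n_iter):
        S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0.0, None)
        C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0.0, None)
    return C, S

C_hat, S_hat = mcr_als(D, C_true + 0.1 * rng.uniform(size=C_true.shape))
resid = np.linalg.norm(D - C_hat @ S_hat.T) / np.linalg.norm(D)
```

The recovered C column for the target component plays the role of the "instrumental response" that the abstract regresses against known concentrations.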
Nucleonic gauges in Poland and new approach to their calibration
International Nuclear Information System (INIS)
Urbanski, P.
2000-01-01
The current status of manufacturing and application of radioisotope gauges in Poland is presented. The metrological performance of the gauges is briefly described and their expected future prospects on the market of industrial measuring instruments are discussed. Progress in electronic engineering and the common use of microprocessor systems in radioisotope gauges have made it possible to apply sophisticated methods of signal processing and data treatment, such as multivariate statistical analysis. Some examples of the multivariate calibration of nucleonic gauges are presented, showing the application of partial least squares regression (PLS) and artificial neural networks (ANN) for calibration of the gauges. (author)
International Nuclear Information System (INIS)
Cornic, Philippe; Le Besnerais, Guy; Champagnat, Frédéric; Illoul, Cédric; Cheminet, Adam; Le Sant, Yves; Leclaire, Benjamin
2016-01-01
We address calibration and self-calibration of tomographic PIV experiments within a pinhole model of cameras. A complete and explicit pinhole model of a camera equipped with a two-tilt-angle Scheimpflug adapter is presented. It is then used in a calibration procedure based on a freely moving calibration plate. While the resulting calibrations are accurate enough for Tomo-PIV, we confirm, through a simple experiment, that they are not stable in time, and we illustrate how the pinhole framework can be used to provide a quantitative evaluation of geometrical drifts in the setup. We propose an original self-calibration method based on global optimization of the extrinsic parameters of the pinhole model. These methods are successfully applied to the tomographic PIV of an air jet experiment. An unexpected by-product of our work is to show that volume self-calibration induces a change in the world frame coordinates. Provided the calibration drift is small, as generally observed in PIV, the bias on the estimated velocity field is negligible, but the absolute location cannot be accurately recovered using standard calibration data. (paper)
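The extrinsic self-calibration step amounts to minimizing reprojection error. A sketch with a translation-only pinhole model (assumed focal length, synthetic points; the paper refines the full extrinsic parameter set) illustrates the optimization:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(9)

f = 1000.0  # assumed focal length, in pixels
pts = rng.uniform(-1.0, 1.0, (50, 3)) + np.array([0.0, 0.0, 5.0])  # world points

def project(t):
    """Pinhole projection of the known points after an extrinsic translation t."""
    p = pts + t
    return f * p[:, :2] / p[:, 2:3]

# Synthetic "drifted" setup: the true translation and noisy image detections.
t_true = np.array([0.10, -0.05, 0.20])
uv_obs = project(t_true) + 0.2 * rng.standard_normal((50, 2))

# Global optimization of the extrinsic parameters by minimizing the
# reprojection residuals over all points at once.
fit = least_squares(lambda t: (project(t) - uv_obs).ravel(), x0=np.zeros(3))
```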
Radiometric Calibration of Osmi Imagery Using Solar Calibration
Directory of Open Access Journals (Sweden)
Dong-Han Lee
2000-12-01
Full Text Available OSMI (Ocean Scanning Multi-Spectral Imager) raw image data (Level 0) were acquired and radiometrically corrected. Two methods were applied to the radiometric correction of the OSMI raw image data: using the solar and dark calibration data from the OSMI sensor, and comparing with SeaWiFS data. First, gain and offset values for each pixel and each band were obtained by comparing the solar and dark calibration data with the solar input radiance, calculated from the transmittance, the BRDF (Bidirectional Reflectance Distribution Function) and the solar incidence angles of the OSMI sensor. Applying these calibration data to the OSMI raw image data produced two anomalous results: radiometrically corrected image values lower than expected, and a Venetian-blind effect in the corrected images. Second, reasonable results were obtained by comparing the OSMI raw image data with the SeaWiFS data, which also revealed a new problem with the OSMI sensor.
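The gain/offset correction described above amounts to a per-pixel linear map: the offset comes from the dark measurement and the gain from the known solar input radiance. A minimal sketch with assumed (non-OSMI) numbers:

```python
def derive_gain_offset(dn_solar, dn_dark, solar_radiance):
    # offset from the dark measurement; gain from the known solar input radiance
    offset = dn_dark
    gain = solar_radiance / (dn_solar - dn_dark)
    return gain, offset

def radiometric_correct(dn, gain, offset):
    # convert a raw digital number (DN) to radiance
    return gain * (dn - offset)

# illustrative values only, not OSMI calibration data
gain, offset = derive_gain_offset(dn_solar=960, dn_dark=64, solar_radiance=44.8)
print(radiometric_correct(512, gain, offset))
```

In practice a separate (gain, offset) pair is derived and applied for every pixel of every band, which is why per-pixel errors show up as the stripe-like Venetian-blind artefact mentioned in the abstract.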
International Nuclear Information System (INIS)
Ribeiro, Laila Lorena X.; Barbosa, Rugles Cesar; Correa, Rosangela S.
2011-01-01
The West Central region of Brazil lacks a basic infrastructure for research, development, training programs, and personnel dosimetry education applied to environmental, industrial and medical uses. The deployment at CRCN-CO of a TLD irradiation service, using a 137 Cs irradiator J. L. SHEPHERD model 28-8A (activity 444 GBq), requires the introduction of procedures for calibration of the irradiator and other procedures related to dosimetry and calibration. Such procedures should be repeated periodically and should introduce techniques that make the CRCN-CO service a template meeting all standards requirements for radioprotection and for the operation of dosimetry and calibration. The objective of this work was to evaluate the 137 Cs radiation field and the automatic system that systematizes the calibration procedures, attached to a target control system for the irradiator, for the calibration of monitors and portable dosimeters. (author)
Multivariate statistics exercises and solutions
Härdle, Wolfgang Karl
2015-01-01
The authors present tools and concepts of multivariate data analysis by means of exercises and their solutions. The first part is devoted to graphical techniques. The second part deals with multivariate random variables and presents the derivation of estimators and tests for various practical situations. The last part introduces a wide variety of exercises in applied multivariate data analysis. The book demonstrates the application of simple calculus and basic multivariate methods in real life situations. It contains altogether more than 250 solved exercises which can assist a university teacher in setting up a modern multivariate analysis course. All computer-based exercises are available in the R language. All R codes and data sets may be downloaded via the quantlet download center www.quantlet.org or via the Springer webpage. For interactive display of low-dimensional projections of a multivariate data set, we recommend GGobi.
Rasouli, Zolaikha; Ghavami, Raouf
2016-08-01
Vanillin (VA), vanillic acid (VAI) and syringaldehyde (SIA) are important food additives used as flavor enhancers. The current study is, for the first time, devoted to the application of partial least squares (PLS-1), partial robust M-regression (PRM) and feed-forward neural networks (FFNNs) as linear and nonlinear chemometric methods for the simultaneous determination of binary and ternary mixtures of VA, VAI and SIA, using data extracted directly from UV spectra with overlapping peaks of the individual analytes. Under the optimum experimental conditions, a linear calibration was obtained for each compound in the concentration range 0.61-20.99 [LOD = 0.12], 0.67-23.19 [LOD = 0.13] and 0.73-25.12 [LOD = 0.15] μg mL−1 for VA, VAI and SIA, respectively. Four calibration sets of standard samples were designed by combining full and fractional factorial designs, using seven and three levels for each factor for the binary and ternary mixtures, respectively. The results of this study reveal that PLS-1 and PRM are similar in their ability to predict each binary mixture. The resolution of the ternary mixture was accomplished by FFNNs. Multivariate curve resolution-alternating least squares (MCR-ALS) was applied to describe the spectra from the acid-base titration systems of each individual compound, i.e. to resolve the complex overlapping spectra and to interpret the extracted spectral and concentration profiles of each pure chemical species identified. Evolving factor analysis (EFA) and singular value decomposition (SVD) were used to determine the number of chemical species, and their corresponding dissociation constants were subsequently derived. Finally, FFNNs were used to detect the active compounds in real and spiked water samples.
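A minimal single-response PLS (PLS-1) fit via the NIPALS algorithm can be sketched as follows. This is an illustrative pure-Python implementation on mean-centred toy data, not the chemometric software used in the study:

```python
def pls1_fit(X, y, n_components=2):
    """Minimal PLS-1 (NIPALS) sketch; X (list of rows) and y are mean-centred."""
    n, m = len(X), len(X[0])
    Xr = [row[:] for row in X]          # X residual
    yr = y[:]                           # y residual
    W, P, Q = [], [], []
    for _ in range(n_components):
        # weight vector: covariance direction between X residual and y residual
        w = [sum(Xr[i][j] * yr[i] for i in range(n)) for j in range(m)]
        norm = sum(v * v for v in w) ** 0.5
        if norm < 1e-12:
            break                       # y residual fully explained
        w = [v / norm for v in w]
        t = [sum(Xr[i][j] * w[j] for j in range(m)) for i in range(n)]  # scores
        tt = sum(v * v for v in t)
        pl = [sum(Xr[i][j] * t[i] for i in range(n)) / tt for j in range(m)]
        q = sum(yr[i] * t[i] for i in range(n)) / tt
        for i in range(n):              # deflate residuals
            for j in range(m):
                Xr[i][j] -= t[i] * pl[j]
            yr[i] -= q * t[i]
        W.append(w); P.append(pl); Q.append(q)
    return W, P, Q

def pls1_predict(W, P, Q, x):
    x, yhat = x[:], 0.0
    for w, pl, q in zip(W, P, Q):
        t = sum(xj * wj for xj, wj in zip(x, w))
        x = [xj - t * plj for xj, plj in zip(x, pl)]
        yhat += q * t
    return yhat

# toy mean-centred data where y = 2*x1 + x2
W, P, Q = pls1_fit([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]],
                   [2.0, -2.0, 1.0, -1.0])
print(pls1_predict(W, P, Q, [1.0, 1.0]))  # ≈ 3.0
```

In a spectroscopic calibration, each row of X would be a (pre-processed) spectrum and y the analyte concentration; PRM replaces the least-squares steps with robust M-estimation.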
Pérez, Rocío L; Escandar, Graciela M
2014-07-04
Following the green analytical chemistry principles, an efficient strategy involving second-order data provided by liquid chromatography (LC) with diode array detection (DAD) was applied for the simultaneous determination of estriol, 17β-estradiol, 17α-ethinylestradiol and estrone in natural water samples. After a simple pre-concentration step, LC-DAD matrix data were rapidly obtained (in less than 5 min) with a chromatographic system operating isocratically. Applying a second-order calibration algorithm based on multivariate curve resolution with alternating least-squares (MCR-ALS), successful resolution was achieved in the presence of sample constituents that strongly coelute with the analytes. The flexibility of this multivariate model allowed the quantification of the four estrogens in tap, mineral, underground and river water samples. Limits of detection in the range between 3 and 13 ng L(-1), and relative prediction errors from 2 to 11% were achieved. Copyright © 2014 Elsevier B.V. All rights reserved.
Stenlund, Hans; Johansson, Erik; Gottfries, Johan; Trygg, Johan
2009-01-01
Near infrared spectroscopy (NIR) was developed primarily for applications such as the quantitative determination of nutrients in the agricultural and food industries. Examples include the determination of water, protein, and fat within complex samples such as grain and milk. Because of its useful properties, NIR analysis has spread to other areas such as chemistry and pharmaceutical production. NIR spectra consist of infrared overtones and combinations thereof, making interpretation of the results complicated. It can be very difficult to assign peaks to known constituents in the sample. Thus, multivariate analysis (MVA) has been crucial in translating spectral data into information, mainly for predictive purposes. Orthogonal partial least squares (OPLS), a new MVA method, has prediction and modeling properties similar to those of other MVA techniques, e.g., partial least squares (PLS), a method with a long history of use for the analysis of NIR data. OPLS provides an intrinsic algorithmic improvement for the interpretation of NIR data. In this report, four sets of NIR data were analyzed to demonstrate the improved interpretation provided by OPLS. The first two sets included simulated data to demonstrate the overall principles; the third set comprised a statistically replicated design of experiments (DoE), to demonstrate how instrumental difference could be accurately visualized and correctly attributed to Wood's anomaly phenomena; the fourth set was chosen to challenge the MVA by using data relating to powder mixing, a crucial step in the pharmaceutical industry prior to tabletting. Improved interpretation by OPLS was demonstrated for all four examples, as compared to alternative MVA approaches. It is expected that OPLS will be used mostly in applications where improved interpretation is crucial; one such area is process analytical technology (PAT). PAT involves fewer independent samples, i.e., batches, than would be associated with agricultural applications; in
Multivariate pattern dependence.
Directory of Open Access Journals (Sweden)
Stefano Anzellotti
2017-11-01
Full Text Available When we perform a cognitive task, multiple brain regions are engaged. Understanding how these regions interact is a fundamental step to uncover the neural bases of behavior. Most research on the interactions between brain regions has focused on the univariate responses in the regions. However, fine-grained patterns of response encode important information, as shown by multivariate pattern analysis. In the present article, we introduce and apply multivariate pattern dependence (MVPD): a technique to study the statistical dependence between brain regions in humans in terms of the multivariate relations between their patterns of responses. MVPD characterizes the responses in each brain region as trajectories in region-specific multidimensional spaces, and models the multivariate relationship between these trajectories. We applied MVPD to the posterior superior temporal sulcus (pSTS) and to the fusiform face area (FFA), using a searchlight approach to reveal interactions between these seed regions and the rest of the brain. Across two different experiments, MVPD identified significant statistical dependence not detected by standard functional connectivity. Additionally, MVPD outperformed univariate connectivity in its ability to explain independent variance in the responses of individual voxels. Finally, MVPD uncovered different connectivity profiles associated with different representational subspaces of FFA: the first principal component of FFA shows differential connectivity with occipital and parietal regions implicated in the processing of low-level properties of faces, while the second and third components show differential connectivity with anterior temporal regions implicated in the processing of invariant representations of face identity.
Applying Hierarchical Model Calibration to Automatically Generated Items.
Williamson, David M.; Johnson, Matthew S.; Sinharay, Sandip; Bejar, Isaac I.
This study explored the application of hierarchical model calibration as a means of reducing, if not eliminating, the need for pretesting of automatically generated items from a common item model prior to operational use. Ultimately the successful development of automatic item generation (AIG) systems capable of producing items with highly similar…
DEFF Research Database (Denmark)
Bladt, Mogens; Nielsen, Bo Friis
2012-01-01
Numerous definitions of multivariate exponential and gamma distributions can be retrieved from the literature [4]. These distributions belong to the class of Multivariate Matrix-Exponential Distributions (MVME) whenever their joint Laplace transform is a rational function. The majority of these distributions further belongs to an important subclass of MVME distributions [5, 1], where the multivariate random vector can be interpreted as a number of simultaneously collected rewards during sojourns in the states of a Markov chain with one absorbing state, the rest of the states being transient. We ... Laplace transform. In a longer perspective, stochastic and statistical analysis for MVME will in particular apply to any of the previously defined distributions. Multivariate gamma distributions have been used in a variety of fields like hydrology [11], [10], [6], space (wind modeling) [9] and reliability [3] ...
Gridded Calibration of Ensemble Wind Vector Forecasts Using Ensemble Model Output Statistics
Lazarus, S. M.; Holman, B. P.; Splitt, M. E.
2017-12-01
A computationally efficient method is developed that performs gridded post-processing of ensemble wind vector forecasts. An expansive set of idealized WRF model simulations is generated to provide physically consistent high-resolution winds over a coastal domain characterized by an intricate land/water mask. Ensemble model output statistics (EMOS) is used to calibrate the ensemble wind vector forecasts at observation locations. The local EMOS predictive parameters (mean and variance) are then spread throughout the grid utilizing flow-dependent statistical relationships extracted from the downscaled WRF winds. Using data withdrawal and 28 east central Florida stations, the method is applied to one year of 24 h wind forecasts from the Global Ensemble Forecast System (GEFS). Compared to the raw GEFS, the approach improves both the deterministic and probabilistic forecast skill. Analysis of multivariate rank histograms indicates that the post-processed forecasts are calibrated. Two downscaling case studies are presented: a quiescent easterly flow event and a frontal passage. Strengths and weaknesses of the approach are presented and discussed.
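The EMOS idea of regressing observations on ensemble statistics to obtain a calibrated predictive distribution N(a + b·x̄, c + d·s²) can be sketched as below. For simplicity the mean coefficients are fitted by least squares and the variance inflation by moment matching; operational EMOS instead minimises the CRPS, so this is only an illustrative shortcut:

```python
def emos_fit(ens_mean, ens_var, obs):
    """Simplified EMOS sketch: returns (a, b, c, d) for N(a + b*xbar, c + d*s2)."""
    n = len(obs)
    mx, my = sum(ens_mean) / n, sum(obs) / n
    sxx = sum((x - mx) ** 2 for x in ens_mean)
    # least-squares fit of the predictive mean on the ensemble mean
    b = sum((x - mx) * (y - my) for x, y in zip(ens_mean, obs)) / sxx
    a = my - b * mx
    # crude variance scaling: match mean squared residual to mean ensemble variance
    mse = sum((y - (a + b * x)) ** 2 for x, y in zip(ens_mean, obs)) / n
    mv = sum(ens_var) / n
    d = mse / mv if mv > 0 else 0.0
    return a, b, 0.0, d

# noise-free toy data where obs = 1 + 2 * ensemble mean
a, b, c, d = emos_fit([1.0, 2.0, 3.0, 4.0], [0.5] * 4, [3.0, 5.0, 7.0, 9.0])
print(a, b, c, d)
```

In the gridded variant described above, the fitted parameters at station locations are then spread across the model grid using the flow-dependent relationships from the downscaled winds.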
Directory of Open Access Journals (Sweden)
Fei Liu
2018-02-01
Full Text Available Fast detection of heavy metals is very important for ensuring the quality and safety of crops. Laser-induced breakdown spectroscopy (LIBS), coupled with uni- and multivariate analysis, was applied to the quantitative analysis of copper in three kinds of rice (Jiangsu rice, regular rice, and Simiao rice). For univariate analysis, three pre-processing methods were applied to reduce fluctuations: background normalization, the internal standard method, and the standard normal variate (SNV). Linear regression models showed a strong correlation between spectral intensity and Cu content, with an R² above 0.97. The limit of detection (LOD) was around 5 ppm, lower than the tolerance limit of copper in foods. For multivariate analysis, partial least squares regression (PLSR) showed its advantage in extracting effective information for prediction, with a sensitivity of 1.95 ppm, while support vector machine regression (SVMR) performed better in both calibration and prediction sets, with Rc² and Rp² reaching 0.9979 and 0.9879, respectively. This study showed that LIBS can be considered a constructive tool for the quantification of copper contamination in rice.
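The SNV pre-processing step mentioned above centres and scales each spectrum individually, removing multiplicative scatter and offset differences between shots. A minimal sketch:

```python
def snv(spectrum):
    """Standard normal variate: subtract the spectrum's own mean and divide
    by its own standard deviation (sample sd, n-1 denominator)."""
    n = len(spectrum)
    mean = sum(spectrum) / n
    sd = (sum((v - mean) ** 2 for v in spectrum) / (n - 1)) ** 0.5
    return [(v - mean) / sd for v in spectrum]

print(snv([1.0, 2.0, 3.0]))  # -> [-1.0, 0.0, 1.0]
```

After SNV each spectrum has zero mean and unit variance, so shot-to-shot intensity fluctuations no longer dominate the univariate calibration curve.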
Retinal vascular calibres are significantly associated with cardiovascular risk factors
DEFF Research Database (Denmark)
von Hanno, T.; Bertelsen, G.; Sjølie, Anne K.
2014-01-01
Purpose: To describe the association between retinal vascular calibres and cardiovascular risk factors. Methods: Population-based cross-sectional study including 6353 participants of the Tromsø Eye Study in Norway aged 38-87 years. Retinal arteriolar calibre (central retinal artery equivalent) ... Association between retinal vessel calibre and the cardiovascular risk factors was assessed by multivariable linear and logistic regression analyses. Results: Retinal arteriolar calibre was independently associated with age, blood pressure, HbA1c and smoking in women and men, and with HDL cholesterol in men ... Conclusion: cardiovascular risk factors were independently associated with retinal vascular calibre, with stronger effect of HDL cholesterol and BMI in men than in women. Blood pressure and smoking contributed most to the explained variance.
Calculus of multivariate functions: it's application in business | Awen ...
African Journals Online (AJOL)
Multivariate functions can be applied to situations in business organizations like ... of capital invested in the plant, the size of the labour force and the cost of raw ... of multivariate functions and has considered types of multivariate differentiation ...
Spitzer/JWST Cross Calibration: IRAC Observations of Potential Calibrators for JWST
Carey, Sean J.; Gordon, Karl D.; Lowrance, Patrick; Ingalls, James G.; Glaccum, William J.; Grillmair, Carl J.; E Krick, Jessica; Laine, Seppo J.; Fazio, Giovanni G.; Hora, Joseph L.; Bohlin, Ralph
2017-06-01
We present observations at 3.6 and 4.5 microns using IRAC on the Spitzer Space Telescope of a set of main sequence A stars and white dwarfs that are potential calibrators across the JWST instrument suite. The stars range from brightnesses of 4.4 to 15 mag in K band. The calibration observations use a similar redundancy to the observing strategy for the IRAC primary calibrators (Reach et al. 2005) and the photometry is obtained using identical methods and instrumental photometric corrections as those applied to the IRAC primary calibrators (Carey et al. 2009). The resulting photometry is then compared to the predictions based on spectra from the CALSPEC Calibration Database (http://www.stsci.edu/hst/observatory/crds/calspec.html) and the IRAC bandpasses. These observations are part of an ongoing collaboration between IPAC and STScI investigating absolute calibration in the infrared.
Observation models in radiocarbon calibration
International Nuclear Information System (INIS)
Jones, M.D.; Nicholls, G.K.
2001-01-01
The observation model underlying any calibration process dictates the precise mathematical details of the calibration calculations. Accordingly it is important that an appropriate observation model is used. Here this is illustrated with reference to the use of reservoir offsets where the standard calibration approach is based on a different model to that which the practitioners clearly believe is being applied. This sort of error can give rise to significantly erroneous calibration results. (author). 12 refs., 1 fig
Multivariate Receptor Models for Spatially Correlated Multipollutant Data
Jun, Mikyoung; Park, Eun Sug
2013-01-01
The goal of multivariate receptor modeling is to estimate the profiles of major pollution sources and quantify their impacts based on ambient measurements of pollutants. Traditionally, multivariate receptor modeling has been applied to multiple air
International Nuclear Information System (INIS)
Acosta, Andy L. Romero; Lores, Stefan Gutierrez
2013-01-01
This paper presents the design and implementation of an automated system for measurements in the calibration of reference radiation dosimeters. A software application was developed that performs the acquisition of the measured electric charge values, calculates the calibration coefficient, and automates the issuance of the calibration certificate. These values are stored in a log file on a PC. The use of the application improves control over the calibration process, helps to humanize the work and reduces personnel exposure. The tool developed has been applied to the calibration of reference radiation dosimeters in the LSCD of the Centro de Proteccion e Higiene de las Radiaciones, Cuba
Influence of smoothing of X-ray spectra on parameters of calibration model
International Nuclear Information System (INIS)
Antoniak, W.; Urbanski, P.; Kowalska, E.
1998-01-01
The parameters of a calibration model before and after smoothing of X-ray spectra have been investigated. The calibration model was calculated using a multivariate procedure, namely partial least squares regression (PLS). Investigations were performed on six sets of various standards used for the calibration of instruments based on the X-ray fluorescence principle. Three smoothing methods were compared: regression splines, Savitzky-Golay filtering and the Discrete Fourier Transform. The calculations were performed using the MATLAB software package and some home-made programs. (author)
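A classic 5-point quadratic Savitzky-Golay filter, one of the smoothing methods compared, can be written directly from its tabulated convolution coefficients (-3, 12, 17, 12, -3)/35; endpoints are left unsmoothed in this sketch:

```python
def savgol5(y):
    """5-point quadratic/cubic Savitzky-Golay smoothing sketch."""
    c = (-3, 12, 17, 12, -3)
    out = list(y)
    for i in range(2, len(y) - 2):
        out[i] = sum(cj * y[i + j - 2] for j, cj in enumerate(c)) / 35.0
    return out

# a quadratic signal is reproduced exactly in the interior, by construction
print(savgol5([0.0, 1.0, 4.0, 9.0, 16.0, 25.0, 36.0]))
```

Unlike a plain moving average, the polynomial fit preserves peak heights and widths, which is why Savitzky-Golay smoothing is popular for spectra prior to multivariate calibration.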
RF impedance measurement calibration
International Nuclear Information System (INIS)
Matthews, P.J.; Song, J.J.
1993-01-01
The intent of this note is not to explain all of the available calibration methods in detail. Instead, we focus on the calibration methods of interest for RF impedance coupling measurements and attempt to explain: (1) the standards and measurements necessary for the various calibration techniques; (2) the advantages and disadvantages of each technique; (3) the mathematical manipulations that need to be applied to the measured standards and devices; (4) an outline of the steps needed for writing a calibration routine that operates from a remote computer. For further details of the various techniques presented in this note, the reader should consult the references
Muon Energy Calibration of the MINOS Detectors
Energy Technology Data Exchange (ETDEWEB)
Miyagawa, Paul S. [Somerville College, Oxford (United Kingdom)
2004-01-01
MINOS is a long-baseline neutrino oscillation experiment designed to search for conclusive evidence of neutrino oscillations and to measure the oscillation parameters precisely. MINOS comprises two iron tracking calorimeters located at Fermilab and Soudan. The Calibration Detector at CERN is a third MINOS detector used as part of the detector response calibration programme. A correct energy calibration between these detectors is crucial for the accurate measurement of oscillation parameters. This thesis presents a calibration developed to produce a uniform response within a detector using cosmic muons. Reconstruction of tracks in cosmic ray data is discussed. This data is utilized to calculate calibration constants for each readout channel of the Calibration Detector. These constants have an average statistical error of 1.8%. The consistency of the constants is demonstrated both within a single run and between runs separated by a few days. Results are presented from applying the calibration to test beam particles measured by the Calibration Detector. The responses are calibrated to within 1.8% systematic error. The potential impact of the calibration on the measurement of oscillation parameters by MINOS is also investigated. Applying the calibration reduces the errors in the measured parameters by ~ 10%, which is equivalent to increasing the amount of data by 20%.
International Nuclear Information System (INIS)
2007-01-01
Measurements of the volume and height of liquid in a process accountancy tank are often made in order to estimate or verify the tank's calibration or volume measurement equation. The calibration equation relates the response of the tank's measurement system to some independent measure of tank volume. The ultimate purpose of the calibration exercise is to estimate the tank's volume measurement equation (the inverse of the calibration equation), which relates tank volume to measurement system response. In this part of ISO 18213, it is assumed that the primary measurement-system response variable is liquid height and that the primary measure of liquid content is volume. This part of ISO 18213 presents procedures for standardizing a set of calibration data to a fixed set of reference conditions so as to minimize the effect of variations in ambient conditions that occur during the measurement process. The procedures presented herein apply generally to measurements of liquid height and volume obtained for the purpose of calibrating a tank (i.e. calibrating a tank's measurement system). When used in connection with other parts of ISO 18213, these procedures apply specifically to tanks equipped with bubbler probe systems for measuring liquid content. The standardization algorithms presented herein can be profitably applied when only estimates of ambient conditions, such as temperature, are available. However, the most reliable results are obtained when relevant ambient conditions are measured for each measurement of volume and liquid height in a set of calibration data. Information is provided on scope, physical principles, data required, calibration data, dimensional changes in the tank, multiple calibration runs and results on standardized calibration data. Four annexes inform about density of water, buoyancy corrections for mass determination, determination of tank heel volume and statistical method for aligning data from several calibration runs. A bibliography is
DEFF Research Database (Denmark)
Santos, Ilmar; Cerda Varela, Alejandro Javier
2013-01-01
The servo valve input signal and the radial injection pressure are the two main parameters responsible for dynamically modifying the journal oil film pressure and generating active fluid film forces in controllable fluid film bearings. Such fluid film forces, resulting from a strong coupling between hydrodynamic, hydrostatic and controllable lubrication regimes, can be used either to control or to excite rotor lateral vibrations. An accurate characterization of the active oil film forces is of fundamental importance to elucidate the feasibility of applying the active lubrication as non ... domain and the application of such a controllable bearing as a calibrated shaker aiming at determining the frequency response function (FRF) of rotordynamic systems; b) experimental quantification of the influence of the supply pressure and servo valve input signal on the FRF of rotor-journal bearing ...
Directory of Open Access Journals (Sweden)
Bailing Liu
2016-02-01
Full Text Available Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving the equations of point clouds. Despite the high accuracy, this might result in no solution due to the use of ill conditioned matrices. In this paper, a novel coordinate transformation method is proposed, not based on the equation solution but based on the geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the space geometry relation, the characteristic lines can be made to coincide through a series of rotations and translations. The transformation matrix can be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot with the calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of robot manipulator is improved by 45.8% with the proposed method and robot calibration.
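The geometric idea of making characteristic lines coincide through rotations and translations can be illustrated in 2-D; the paper works in 3-D with a multi-sensor system, so this sketch only shows the principle on a pair of corresponding points:

```python
import math

def rigid_transform_2d(src, dst):
    """Rotation + translation aligning two corresponding 2-D point pairs,
    found geometrically (align the line direction, then shift)."""
    # direction of the characteristic line in each frame
    a = math.atan2(src[1][1] - src[0][1], src[1][0] - src[0][0])
    b = math.atan2(dst[1][1] - dst[0][1], dst[1][0] - dst[0][0])
    theta = b - a                      # rotation making the lines parallel
    c, s = math.cos(theta), math.sin(theta)
    # translation making the first points coincide
    tx = dst[0][0] - (c * src[0][0] - s * src[0][1])
    ty = dst[0][1] - (s * src[0][0] + c * src[0][1])
    return theta, (tx, ty)

def apply_rt(theta, t, p):
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1] + t[0], s * p[0] + c * p[1] + t[1])

theta, t = rigid_transform_2d([(0.0, 0.0), (1.0, 0.0)], [(2.0, 3.0), (2.0, 4.0)])
print(apply_rt(theta, t, (1.0, 0.0)))  # ≈ (2.0, 4.0)
```

No linear system is solved, so the ill-conditioning problem of equation-based point-cloud fitting never arises; the 3-D case composes the same kind of elementary rotations and translations.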
Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia
2016-01-01
Multivariate multiscale entropy of financial markets
Lu, Yunfan; Wang, Jun
2017-11-01
In the current process of quantifying the dynamical properties of complex phenomena in the financial market system, multivariate financial time series are widely studied. In this work, considering the shortcomings and limitations of univariate multiscale entropy in analyzing multivariate time series, the multivariate multiscale sample entropy (MMSE), which can evaluate the complexity in multiple data channels over different timescales, is applied to quantify the complexity of financial markets. Its effectiveness and advantages are demonstrated in numerical simulations with two well-known synthetic noise signals. For the first time, the complexity of four generated trivariate return series for each stock trading hour in the China stock markets is quantified through the interdisciplinary application of this method. We find that the complexity of the trivariate return series in each hour shows a significant decreasing trend as the trading day progresses. Further, the shuffled multivariate return series and the absolute multivariate return series are also analyzed. As another new attempt, the complexity of the global stock markets (Asia, Europe and America) is quantified by analyzing their multivariate returns. Finally, we utilize the multivariate multiscale entropy to assess the relative complexity of normalized multivariate return volatility series with different degrees.
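Multiscale entropy methods build on sample entropy: the (negative log) conditional probability that sequences matching for m points also match for m+1 points. A minimal univariate sketch is below; the multivariate multiscale variant extends this with coarse-graining over timescales and composite delay vectors across channels:

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """Univariate sample entropy sketch: -ln(A/B), where B counts template
    pairs matching over m points (within tolerance r) and A over m+1."""
    n = len(x)
    nt = n - m                          # same template count for both lengths
    def matches(length):
        c = 0
        for i in range(nt):
            for j in range(i + 1, nt):
                if all(abs(x[i + k] - x[j + k]) <= r for k in range(length)):
                    c += 1
        return c
    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a and b else float("inf")

# a perfectly regular signal: every m-match is also an (m+1)-match
print(sample_entropy([0.0, 1.0] * 10))  # ≈ 0
```

Lower values indicate more regular (less complex) dynamics, which is how the decreasing intraday complexity trend in the abstract should be read.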
A TRMM-Calibrated Infrared Rainfall Algorithm Applied Over Brazil
Negri, A. J.; Xu, L.; Adler, R. F.; Einaudi, Franco (Technical Monitor)
2000-01-01
The development of a satellite infrared technique for estimating convective and stratiform rainfall and its application in studying the diurnal variability of rainfall in Amazonia are presented. The Convective-Stratiform Technique, calibrated by coincident, physically retrieved rain rates from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI), is applied during January to April 1999 over northern South America. The diurnal cycle of rainfall, as well as the division between convective and stratiform rainfall, is presented. Results compare well (a one-hour lag) with the diurnal cycle derived from Tropical Ocean-Global Atmosphere (TOGA) radar-estimated rainfall in Rondonia. The satellite estimates reveal that convective rain constitutes, in the mean, 24% of the rain area while accounting for 67% of the rain volume. The effects of geography (rivers, lakes, coasts) and topography on the diurnal cycle of convection are examined. In particular, the Amazon River, downstream of Manaus, is shown to both enhance early morning rainfall and inhibit afternoon convection. Monthly estimates from this technique, dubbed CST/TMI, are verified over a dense rain gage network in the state of Ceara, in northeast Brazil. The CST/TMI showed a high bias equal to +33% of the gage mean, indicating that possibly the TMI estimates alone are also high. The root mean square difference (after removal of the bias) equaled 36.6% of the gage mean. The correlation coefficient was 0.77 based on 72 station-months.
Zhang, Jie; Stonnington, Cynthia; Li, Qingyang; Shi, Jie; Bauer, Robert J; Gutman, Boris A; Chen, Kewei; Reiman, Eric M; Thompson, Paul M; Ye, Jieping; Wang, Yalin
2016-04-01
Alzheimer's disease (AD) is a progressive brain disease. Accurate diagnosis of AD and its prodromal stage, mild cognitive impairment, is crucial for clinical trial design. There is also growing interest in identifying brain imaging biomarkers that help evaluate AD risk presymptomatically. Here, we applied a recently developed multivariate tensor-based morphometry (mTBM) method to extract features from hippocampal surfaces, derived from anatomical brain MRI. For such surface-based features, the feature dimension is usually much larger than the number of subjects. We used dictionary learning and sparse coding to effectively reduce the feature dimensions. With the new features, an Adaboost classifier was employed for binary group classification. In tests on publicly available data from the Alzheimer's Disease Neuroimaging Initiative, the new framework outperformed several standard imaging measures in classifying different stages of AD. The new approach combines the efficiency of sparse coding with the sensitivity of surface mTBM, and boosts classification performance.
Improvement of the calibration technique of clinical dosemeters
International Nuclear Information System (INIS)
Ehlin Caldas, L.V.
1988-08-01
Clinical dosemeters, consisting of ionization chambers connected to electrometers, are usually calibrated as whole systems in appropriate radiation fields against secondary standard dosemeters in calibration laboratories. This work reports on a component calibration technique, in which chambers and electrometers are calibrated separately, as applied in the calibration laboratory of IPEN-CNEN, Brazil. For electrometer calibration, redundancy was established by using a standard capacitor of 1000 pF (General Radio, USA) and a standard current source based on air ionization with Sr-90 (PTW, Germany). The results from both methods, applied to several electrometers of clinical dosemeters, agreed within 0.4%. The calibration factors for the respective chambers were determined by intercomparing their response with that of a certified calibrated chamber in a Co-60 calibration beam, using a Keithley electrometer type 617. Overall calibration factors, compared with the product of the respective component calibration factors for the tested dosemeters, showed agreement better than 0.7%. This deviation should be viewed against an uncertainty of 2.5% in routine calibration of clinical dosemeters. Calibration by components permits ionization chambers to be calibrated one at a time for hospitals that have several ionization chambers but only one electrometer (small hospitals, hospitals in developing countries). 6 refs, 2 figs, 2 tabs
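The consistency check underlying component calibration is simply that the whole-system factor should equal the product of the chamber and electrometer factors. A minimal sketch with hypothetical numbers (not the paper's measured values):

```python
# Illustrative factor values (hypothetical), not the paper's measurements.
chamber_factor = 1.012       # from intercomparison in the Co-60 beam
electrometer_factor = 0.997  # from capacitor / current-source calibration

overall_from_components = chamber_factor * electrometer_factor
overall_direct = 1.006       # hypothetical whole-system calibration factor

deviation_pct = abs(overall_direct - overall_from_components) / overall_direct * 100
print(f"{deviation_pct:.2f}% deviation")
```

For these illustrative numbers the deviation is about 0.3%, inside the 0.7% agreement the paper reports and well below the 2.5% routine-calibration uncertainty.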
Role of calibration, validation, and relevance in multi-level uncertainty integration
International Nuclear Information System (INIS)
Li, Chenzhao; Mahadevan, Sankaran
2016-01-01
Calibration of model parameters is an essential step in predicting the response of a complicated system, but the lack of data at the system level makes it impossible to conduct this quantification directly. In such a situation, system model parameters are estimated using tests at lower levels of complexity which share the same model parameters with the system. For such a multi-level problem, this paper proposes a methodology to quantify the uncertainty in the system level prediction by integrating calibration, validation and sensitivity analysis at different levels. The proposed approach considers the validity of the models used for parameter estimation at lower levels, as well as the relevance of the lower-level tests to the prediction at the system level. The model validity is evaluated using a model reliability metric, and models with multivariate output are considered. The relevance is quantified by comparing Sobol indices at the lower level and system level, thus measuring the extent to which a lower level test represents the characteristics of the system, so that the calibration results can be reliably used at the system level. Finally, the results of calibration, validation and relevance analysis are integrated in a roll-up method to predict the system output. - Highlights: • Relevance analysis to quantify the closeness of two models. • Stochastic model reliability metric to integrate multiple validation experiments. • Extend the model reliability metric to deal with multivariate output. • Roll-up formula to integrate calibration, validation, and relevance.
Consequences of Secondary Calibrations on Divergence Time Estimates.
Directory of Open Access Journals (Sweden)
John J Schenk
Full Text Available Secondary calibrations (calibrations based on the results of previous molecular dating studies) are commonly applied in divergence time analyses in groups that lack fossil data; however, the consequences of applying secondary calibrations in a relaxed-clock approach are not fully understood. I tested whether applying the posterior estimate from a primary study as a prior distribution in a secondary study results in consistent age and uncertainty estimates. I compared age estimates from simulations with 100 randomly replicated secondary trees. On average, the 95% credible intervals of node ages for secondary estimates were significantly younger and narrower than primary estimates. The primary and secondary age estimates were significantly different in 97% of the replicates after Bonferroni corrections. Greater error in magnitude was associated with deeper than with shallower nodes, but the opposite was found when standardized by median node age, and a significant positive relationship was determined between the number of tips/age of secondary trees and the total amount of error. When two secondary calibrated nodes were analyzed, estimates remained significantly different, and although the minimum and median estimates were associated with less error, maximum age estimates and credible interval widths had greater error. The shape of the prior also influenced error: applying a normal, rather than uniform, prior distribution resulted in greater error. In summary, secondary calibrations lead to a false impression of precision, and the distribution of age estimates shifts away from those that would be inferred by the primary analysis. These results suggest that secondary calibrations should not be applied as the only source of calibration in divergence time analyses that test time-dependent hypotheses until the additional error associated with secondary calibrations is more properly modeled to take into account increased uncertainty in age estimates.
Guidelines on calibration of neutron measuring devices
International Nuclear Information System (INIS)
Burger, G.
1988-01-01
The International Atomic Energy Agency and the World Health Organization have agreed to establish an IAEA/WHO Network of Secondary Standard Dosimetry Laboratories (SSDLs) in order to improve accuracy in applied radiation dosimetry throughout the world. These SSDLs must be equipped with, and maintain, secondary standard instruments, which have been calibrated against primary standards, and must be nominated by their governments for membership of the network. The majority of the existing SSDLs were established primarily to work with photon radiation (X-rays and gamma rays). Neutron sources are, however, increasingly being applied in industrial processes, research, nuclear power development and radiation biology and medicine. Thus, it is desirable that the SSDLs in countries using neutron sources on a regular basis should also fulfil the minimum requirements to calibrate neutron measuring devices. It is the primary purpose of this handbook to provide guidance on calibration of instruments for radiation protection. A calibration laboratory should also be in a position to calibrate instrumentation being used for the measurement of kerma and absorbed dose and their corresponding rates. This calibration is generally done with photons. In addition, since each neutron field is usually contaminated by photons produced in the source or by scatter in the surrounding media, neutron protection instrumentation has to be tested with respect to its intrinsic photon response. The laboratory will therefore need to possess equipment for photon calibration. This publication deals primarily with methods of applying radioactive neutron sources for calibration of instrumentation, and gives an indication of the space, manpower and facilities needed to fulfil the minimum requirements of a calibration laboratory for neutron work. It is intended to serve as a guide for centres about to start on neutron dosimetry standardization and calibration. 94 refs, 8 figs, 12 tabs
Applying transport-distance specific SOC distribution to calibrate soil erosion model WaTEM
Hu, Yaxian; Heckrath, Goswin J.; Kuhn, Nikolaus J.
2016-04-01
Slope-scale soil erosion, transport and deposition fundamentally determine the spatial redistribution of eroded sediments in terrestrial and aquatic systems, which in turn affects the burial and decomposition of eroded SOC. However, comparisons of SOC contents between an upper eroding slope and a lower depositional site cannot fully reflect the movement of eroded SOC in transit along hillslopes. The actual transport distance of eroded SOC is determined by its settling velocity. So far, the settling velocity distribution of eroded SOC has mostly been calculated from mineral-particle-specific SOC distribution. Yet soil is mostly eroded in the form of aggregates, and the movement of aggregates differs significantly from that of individual mineral particles. This calls for an SOC erodibility parameter based on the actual transport-distance distribution of eroded fractions to better calibrate soil erosion models. A previous field investigation on a freshly seeded cropland in Denmark showed immediate deposition of fast-settling soil fractions and the associated SOC at footslopes, followed by a fining trend at the slope tail. To further quantify the long-term effects of topography on the erosional redistribution of SOC, the transport-distance specific SOC distribution observed in the field was applied to the soil erosion model WaTEM (based on the USLE). After integration with a local DEM, the calibrated model succeeded in locating hotspots of enrichment/depletion of eroded SOC at different topographic positions, corresponding much more closely to the real-world field observations. When extrapolated to repeated erosion events, the projected spatial distribution of eroded SOC is also adequately consistent with the SOC properties in consecutive sample profiles along the slope.
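WaTEM builds on the USLE, whose core is a simple multiplicative formula. A minimal sketch with hypothetical factor values (the units and numbers below are illustrative, not the study's calibration):

```python
def usle_soil_loss(R, K, LS, C, P):
    """Average annual soil loss A (t/ha/yr) as the product of the USLE factors:
    rainfall erosivity R, soil erodibility K, slope length-steepness LS,
    cover-management C, and support-practice P."""
    return R * K * LS * C * P

# Hypothetical factor values for a gently sloping cropland cell
A = usle_soil_loss(R=900.0, K=0.03, LS=1.2, C=0.25, P=1.0)
```

In a spatial model like WaTEM this product is evaluated per DEM cell, with LS derived from the terrain, which is where the topographic control on SOC redistribution enters.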
Inertial Sensor Error Reduction through Calibration and Sensor Fusion
Directory of Open Access Journals (Sweden)
Stefan Lambrecht
2016-02-01
Full Text Available This paper presents a comparison between cooperative and local Kalman filters (KF) for estimating the absolute segment angle under two calibration conditions: a simplified calibration that can be replicated in most laboratories, and a complex calibration similar to that applied by commercial vendors. The cooperative filters use information either from all inertial sensors attached to the body (Matricial KF) or from the inertial sensors and the potentiometers of an exoskeleton (Markovian KF). A one-minute walking trial of a subject walking with a 6-DoF exoskeleton was used to assess the absolute segment angle of the trunk, thigh, shank, and foot. The results indicate that regardless of the segment and filter applied, the more complex calibration always results in significantly better performance than the simplified calibration. The interaction between filter and calibration suggests that when the quality of the calibration is unknown, the Markovian KF is recommended. With the complex calibration, the Matricial and Markovian KF perform similarly, with average RMSE below 1.22 degrees. Cooperative KFs perform better than or at least as well as local KFs; we therefore recommend using cooperative KFs instead of local KFs for control or analysis of walking.
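The local-filter idea can be sketched as a scalar Kalman filter that predicts the angle by integrating a gyro rate and corrects it with a noisy accelerometer-derived angle. This is a minimal illustration with assumed noise parameters, not the paper's Matricial or Markovian filters:

```python
import numpy as np

def kf_segment_angle(gyro, acc_angle, dt, q=0.01, r=0.1):
    """Scalar Kalman filter: predict the angle by integrating the gyro rate,
    correct with the (noisy) accelerometer-derived angle. q and r are assumed
    process and measurement noise variances."""
    theta, P = acc_angle[0], 1.0
    out = []
    for w, z in zip(gyro, acc_angle):
        theta += w * dt            # predict from gyro rate
        P += q
        K = P / (P + r)            # Kalman gain
        theta += K * (z - theta)   # correct with measured angle
        P *= (1 - K)
        out.append(theta)
    return np.array(out)

# Constant 10 deg/s rotation; accelerometer angle corrupted by noise (std 1 deg)
dt, t = 0.01, np.arange(0, 1, 0.01)
true_angle = 10.0 * t
rng = np.random.default_rng(1)
est = kf_segment_angle(np.full_like(t, 10.0), true_angle + rng.normal(0, 1.0, t.size), dt)
rmse = np.sqrt(np.mean((est - true_angle) ** 2))
```

The filtered estimate has a much smaller RMSE than the raw accelerometer angle, which is the basic benefit the cooperative filters extend across segments.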
Multivariate meta-analysis: Potential and promise
Jackson, Dan; Riley, Richard; White, Ian R
2011-01-01
The multivariate random effects model is a generalization of the standard univariate model. Multivariate meta-analysis is becoming more commonly used and the techniques and related computer software, although continually under development, are now in place. In order to raise awareness of the multivariate methods, and discuss their advantages and disadvantages, we organized a one-day ‘Multivariate meta-analysis’ event at the Royal Statistical Society. In addition to disseminating the most recent developments, we also received an abundance of comments, concerns, insights, critiques and encouragement. This article provides a balanced account of the day's discourse. By giving others the opportunity to respond to our assessment, we hope to ensure that the various viewpoints and opinions are aired before multivariate meta-analysis simply becomes another widely used de facto method without any proper consideration of it by the medical statistics community. We describe the areas of application that multivariate meta-analysis has found, the methods available, the difficulties typically encountered and the arguments for and against the multivariate methods, using four representative but contrasting examples. We conclude that the multivariate methods can be useful, and in particular can provide estimates with better statistical properties, but also that these benefits come at the price of making more assumptions which do not result in better inference in every case. Although there is evidence that multivariate meta-analysis has considerable potential, it must be even more carefully applied than its univariate counterpart in practice. Copyright © 2011 John Wiley & Sons, Ltd. PMID:21268052
Radiation Calibration Measurements
International Nuclear Information System (INIS)
Omondi, C.
2017-01-01
The KEBS Radiation Dosimetry mandate is to act as custodian of the Kenya standards on ionizing radiation, to ensure traceability to the International System of Units (SI), and to calibrate radiation equipment. Under RAF 8/040, on radioisotope applications for troubleshooting and optimizing industrial processes, a Radiotracer Laboratory was established with the objective of introducing and implementing radiotracer techniques for solving industrial problems. The gamma-ray scanning technique is applied to locate blockages, locate liquid in vapor lines, locate areas of lost refractory or lining in a pipe, and measure flowing densities. Equipment used for diagnostics and radiation protection must be calibrated to ensure accuracy and traceability.
Applying Multivariate Discrete Distributions to Genetically Informative Count Data.
Kirkpatrick, Robert M; Neale, Michael C
2016-03-01
We present a novel method of conducting biometric analysis of twin data when the phenotypes are integer-valued counts, which often show an L-shaped distribution. Monte Carlo simulation is used to compare five likelihood-based approaches to modeling: our multivariate discrete method, when its distributional assumptions are correct, when they are incorrect, and three other methods in common use. With data simulated from a skewed discrete distribution, recovery of twin correlations and proportions of additive genetic and common environment variance was generally poor for the Normal, Lognormal and Ordinal models, but good for the two discrete models. Sex-separate applications to substance-use data from twins in the Minnesota Twin Family Study showed superior performance of the two discrete models. The new methods are implemented using R and OpenMx and are freely available.
Weak convergence of marked point processes generated by crossings of multivariate jump processes
DEFF Research Database (Denmark)
Tamborrino, Massimiliano; Sacerdote, Laura; Jacobsen, Martin
2014-01-01
We consider the multivariate point process determined by the crossing times of the components of a multivariate jump process through a multivariate boundary, assuming that each component is reset to an initial value after its boundary crossing. We prove that this point process converges weakly [...] process converging to a multivariate Ornstein–Uhlenbeck process is discussed as a guideline for applying diffusion limits for jump processes. We apply our theoretical findings to neural network modeling. The proposed model gives a mathematical foundation to the generalization of the class of Leaky [...]
Farouk, M; Elaziz, Omar Abd; Tawakkol, Shereen M; Hemdan, A; Shehata, Mostafa A
2014-04-05
Four simple, accurate, reproducible, and selective methods have been developed and subsequently validated for the determination of Benazepril (BENZ) alone and in combination with Amlodipine (AML) in pharmaceutical dosage form. The first method is pH-induced difference spectrophotometry, where BENZ can be measured in the presence of AML as it shows maximum absorption at 237 nm and 241 nm in 0.1 N HCl and 0.1 N NaOH, respectively, while AML shows no wavelength shift in either solvent. The second method is the new Extended Ratio Subtraction Method (EXRSM) coupled to the Ratio Subtraction Method (RSM) for determination of both drugs in commercial dosage form. The third and fourth methods are multivariate calibration methods: Principal Component Regression (PCR) and Partial Least Squares (PLS). A detailed validation of the methods was performed following the ICH guidelines, and the standard curves were found to be linear in the range of 2-30 μg/mL for BENZ in the difference and extended ratio subtraction spectrophotometric methods, and 5-30 μg/mL for AML in the EXRSM method, with well-accepted mean correlation coefficients for each analyte. The intra-day and inter-day precision and accuracy results were well within the acceptable limits. Copyright © 2013 Elsevier B.V. All rights reserved.
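Principal Component Regression, one of the two multivariate methods named above, is just PCA followed by linear regression on the scores. A minimal sketch on synthetic single-analyte spectra (the Gaussian band, concentration range, and noise level are assumptions for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Synthetic UV spectra: absorbance = concentration x pure-component band + noise
wavelengths = 100
pure = np.exp(-0.5 * ((np.arange(wavelengths) - 50) / 10) ** 2)  # Gaussian band
conc = rng.uniform(2, 30, 40)                                    # 2-30 ug/mL range
X = np.outer(conc, pure) + rng.normal(0, 0.01, (40, wavelengths))

# PCR: project spectra onto a few principal components, then regress
pcr = make_pipeline(PCA(n_components=3), LinearRegression())
pcr.fit(X, conc)
pred = pcr.predict(X)
r2 = pcr.score(X, conc)
```

PLS differs in that its latent variables are chosen to maximize covariance with the concentrations rather than variance of the spectra alone.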
International Nuclear Information System (INIS)
Hirotsu, Yuko; Suzuki, Kunihiko; Takano, Kenichi; Kojima, Mitsuhiro
2000-01-01
Analyzing and evaluating human error incidents with an emphasis on human factors is essential for preventing their recurrence. Detailed and structured analyses of all incidents at domestic nuclear power plants (NPPs) reported during the last 31 years have been conducted based on J-HPES, in which a total of 193 human error cases were identified. The results of these analyses have been stored in the J-HPES database. In a previous study, applying multivariate analysis to the above case studies suggested that there were several identifiable patterns of how errors occur at NPPs. It also became clear that the causes related to each human error differ depending on the period in which it occurred. This paper describes the results with respect to the periodic transition of human error occurrence patterns. Applying multivariate analysis to the above data suggested two types of occurrence patterns for each human error type: the first comprises common occurrence patterns that do not depend on the period, and the second comprises patterns influenced by period-specific characteristics. (author)
Thompson, Bryony A.; Greenblatt, Marc S.; Vallee, Maxime P.; Herkert, Johanna C.; Tessereau, Chloe; Young, Erin L.; Adzhubey, Ivan A.; Li, Biao; Bell, Russell; Feng, Bingjian; Mooney, Sean D.; Radivojac, Predrag; Sunyaev, Shamil R.; Frebourg, Thierry; Hofstra, Robert M.W.; Sijmons, Rolf H.; Boucher, Ken; Thomas, Alun; Goldgar, David E.; Spurdle, Amanda B.; Tavtigian, Sean V.
2015-01-01
Classification of rare missense substitutions observed during genetic testing for patient management is a considerable problem in clinical genetics. The Bayesian integrated evaluation of unclassified variants is a solution originally developed for BRCA1/2. Here, we take a step toward an analogous system for the mismatch repair (MMR) genes (MLH1, MSH2, MSH6, and PMS2) that confer colon cancer susceptibility in Lynch syndrome by calibrating in silico tools to estimate prior probabilities of pathogenicity for MMR gene missense substitutions. A qualitative five-class classification system was developed and applied to 143 MMR missense variants. This identified 74 missense substitutions suitable for calibration. These substitutions were scored using six different in silico tools (Align-Grantham Variation Grantham Deviation, multivariate analysis of protein polymorphisms [MAPP], Mut-Pred, PolyPhen-2.1, Sorting Intolerant From Tolerant, and Xvar), using curated MMR multiple sequence alignments where possible. The output from each tool was calibrated by regression against the classifications of the 74 missense substitutions; these calibrated outputs are interpretable as prior probabilities of pathogenicity. MAPP was the most accurate tool and MAPP + PolyPhen-2.1 provided the best-combined model (R2 = 0.62 and area under receiver operating characteristic = 0.93). The MAPP + PolyPhen-2.1 output is sufficiently predictive to feed as a continuous variable into the quantitative Bayesian integrated evaluation for clinical classification of MMR gene missense substitutions. PMID:22949387
Liu, Yan; Cai, Wensheng; Shao, Xueguang
2016-12-01
Calibration transfer is essential for practical applications of near infrared (NIR) spectroscopy because spectra may be measured on different instruments and the difference between the instruments must be corrected. Most calibration transfer methods require standard samples to construct the transfer model, using the spectra of the samples measured on two instruments, referred to as the master and slave instruments, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. Consequently, the coefficients of linear models constructed from spectra measured on different instruments are similar in profile. Therefore, by using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with only a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both datasets, the samples can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not needed, the method may be more useful in practical applications.
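The idea of transferring coefficients under a similarity constraint can be sketched as a regularized least-squares problem that shrinks the slave coefficients toward the master's. This is a simplified analogue for illustration, not the authors' LMC algorithm:

```python
import numpy as np

def transfer_coefficients(X_slave, y_slave, b_master, lam=1.0):
    """Estimate slave-model coefficients from a few slave spectra while
    shrinking toward the master coefficients (a simplified analogue of LMC).
    Closed form of: min ||X b - y||^2 + lam * ||b - b_master||^2."""
    p = X_slave.shape[1]
    A = X_slave.T @ X_slave + lam * np.eye(p)
    return np.linalg.solve(A, X_slave.T @ y_slave + lam * b_master)

rng = np.random.default_rng(0)
p = 50
b_true = rng.normal(size=p)                 # "true" slave-instrument coefficients
b_master = b_true + rng.normal(0, 0.05, p)  # master model: similar in profile
X = rng.normal(size=(5, p))                 # only 5 spectra measured on the slave
y = X @ b_true
b_slave = transfer_coefficients(X, y, b_master, lam=10.0)
err = np.linalg.norm(b_slave - b_true) / np.linalg.norm(b_true)
```

The penalty term encodes the "similar in profile" assumption, letting a handful of slave measurements correct the master model instead of rebuilding it from scratch.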
Calibration Methods for Reliability-Based Design Codes
DEFF Research Database (Denmark)
Gayton, N.; Mohamed, A.; Sørensen, John Dalsgaard
2004-01-01
The calibration methods are applied to define the optimal code format according to some target safety levels. The calibration procedure can be seen as a specific optimization process where the control variables are the partial factors of the code. Different methods are available in the literature...
Sensor Calibration Design Based on D-Optimality Criterion
Directory of Open Access Journals (Sweden)
Hajiyev Chingiz
2016-09-01
Full Text Available In this study, a procedure for the optimal selection of measurement points, using the D-optimality criterion, to find the best calibration curves of measurement sensors is proposed. The coefficients of the calibration curve are evaluated by applying the classical Least Squares Method (LSM). As an example, the problem of optimal selection of standard pressure setters when calibrating a differential pressure sensor is solved. The values obtained from the D-optimum measurement points for calibration of the differential pressure sensor are compared with those from actual experiments. A comparison of the calibration errors corresponding to the D-optimal, A-optimal and equidistant calibration curves is also given.
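D-optimality selects the measurement points that maximize det(XᵀX) for the chosen calibration model. A brute-force sketch for a quadratic calibration curve over a normalized range (the candidate grid and 3-point design are assumptions for illustration):

```python
import numpy as np
from itertools import combinations

def d_criterion(points, degree=2):
    """det(X'X) for a polynomial calibration model of the given degree."""
    X = np.vander(np.asarray(points, float), degree + 1)
    return np.linalg.det(X.T @ X)

# Candidate pressure-setter levels, normalized to [-1, 1]
candidates = np.linspace(-1, 1, 21)
best = max(combinations(candidates, 3), key=d_criterion)
# For a quadratic model, the D-optimal 3-point design is the two
# endpoints plus the midpoint of the range.
```

This reproduces the classical result that, for quadratic regression on an interval, the D-optimal design concentrates measurements at the extremes and the center rather than spacing them equidistantly.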
Schenone, Agustina V; Culzoni, María J; Marsili, Nilda R; Goicoechea, Héctor C
2013-06-01
The performance of MCR-ALS was studied in the modeling of non-linear kinetic-spectrophotometric data acquired by a stopped-flow system for the quantitation of tartrazine in the presence of brilliant blue and sunset yellow FCF as possible interferents. In the present work, MCR-ALS and U-PCA/RBL were firstly applied to remove the contribution of unexpected components not included in the calibration set. Secondly, a polynomial function was used to model the non-linear data obtained by the implementation of the algorithms. MCR-ALS was the only strategy that allowed the determination of tartrazine in test samples accurately. Therefore, it was applied for the analysis of tartrazine in beverage samples with minimum sample preparation and short analysis time. The proposed method was validated by comparison with a chromatographic procedure published in the literature. Mean recovery values between 98% and 100% and relative errors of prediction values between 4% and 9% were indicative of the good performance of the method. Copyright © 2012 Elsevier Ltd. All rights reserved.
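MCR-ALS alternates least-squares estimates of concentrations and spectra under non-negativity. A bare-bones sketch on synthetic two-component mixtures (clipping as the non-negativity constraint; a minimal illustration, not a production MCR-ALS implementation):

```python
import numpy as np

def mcr_als(D, n_components, n_iter=200):
    """Minimal MCR-ALS: factor D (samples x wavelengths) into concentrations C
    and spectra S, enforcing non-negativity by clipping at each iteration."""
    S = D[:n_components].copy()   # crude initial spectral estimates
    for _ in range(n_iter):
        C = np.clip(D @ np.linalg.pinv(S), 0, None)
        S = np.clip(np.linalg.pinv(C) @ D, 0, None)
    return C, S

# Two-component synthetic mixtures with Gaussian bands
x = np.arange(100)
s1 = np.exp(-0.5 * ((x - 30) / 8.0) ** 2)
s2 = np.exp(-0.5 * ((x - 70) / 8.0) ** 2)
C_true = np.random.default_rng(1).uniform(0, 1, (20, 2))
D = C_true @ np.vstack([s1, s2])

C, S = mcr_als(D, 2)
resid = np.linalg.norm(D - C @ S) / np.linalg.norm(D)
```

Real applications add further constraints (unimodality, closure, known spectra) and, as in the paper, post-hoc modeling of non-linear concentration-signal relations.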
Gallagher, Neal B; Blake, Thomas A; Gassman, Paul L; Shaver, Jeremy M; Windig, Willem
2006-07-01
Multivariate curve resolution (MCR) is a powerful technique for extracting chemical information from measured spectra of complex mixtures. A modified MCR technique that utilized both measured and second-derivative spectra to account for observed sample-to-sample variability attributable to changes in soil reflectivity was used to estimate the spectrum of dibutyl phosphate (DBP) adsorbed on two different soil types. This algorithm was applied directly to measurements of reflection spectra of soils coated with analyte without resorting to soil preparations such as grinding or dilution in potassium bromide. The results provided interpretable spectra that can be used to guide strategies for detection and classification of organic analytes adsorbed on soil. Comparisons to the neat DBP liquid spectrum showed that the recovered analyte spectra from both soils showed spectral features from methyl, methylene, hydroxyl, and P=O functional groups, but most conspicuous was the absence of the strong PO-(CH2)3CH3 stretch absorption at 1033 cm(-1). These results are consistent with those obtained previously using extended multiplicative scatter correction.
Reconstructing the calibrated strain signal in the Advanced LIGO detectors
Viets, A. D.; Wade, M.; Urban, A. L.; Kandhasamy, S.; Betzwieser, J.; Brown, Duncan A.; Burguet-Castell, J.; Cahillane, C.; Goetz, E.; Izumi, K.; Karki, S.; Kissel, J. S.; Mendell, G.; Savage, R. L.; Siemens, X.; Tuyenbayev, D.; Weinstein, A. J.
2018-05-01
Advanced LIGO’s raw detector output needs to be calibrated to compute dimensionless strain h(t) . Calibrated strain data is produced in the time domain using both a low-latency, online procedure and a high-latency, offline procedure. The low-latency h(t) data stream is produced in two stages, the first of which is performed on the same computers that operate the detector’s feedback control system. This stage, referred to as the front-end calibration, uses infinite impulse response (IIR) filtering and performs all operations at a 16 384 Hz digital sampling rate. Due to several limitations, this procedure currently introduces certain systematic errors in the calibrated strain data, motivating the second stage of the low-latency procedure, known as the low-latency gstlal calibration pipeline. The gstlal calibration pipeline uses finite impulse response (FIR) filtering to apply corrections to the output of the front-end calibration. It applies time-dependent correction factors to the sensing and actuation components of the calibrated strain to reduce systematic errors. The gstlal calibration pipeline is also used in high latency to recalibrate the data, which is necessary due mainly to online dropouts in the calibrated data and identified improvements to the calibration models or filters.
Multivariate calibration analysis of colorimetric mercury sensing using a molecular probe
International Nuclear Information System (INIS)
Perez-Hernandez, Javier; Albero, Josep; Correig, Xavier; Llobet, Eduard; Palomares, Emilio
2009-01-01
Selectivity is one of the main challenges for sensors, particularly those based on chemical interactions. Multivariate analytical models can determine the concentration of analytes even in the presence of other potential interferences. In this work, we have determined the presence of mercury ions in aqueous solutions in the ppm range (0-2 mg L-1) using a ruthenium bis-thiocyanate complex as a chemical probe. Moreover, we have analyzed the mercury-containing solutions in the co-existence of higher concentrations (19.5 mg L-1) of other potential competitors such as Cd2+, Pb2+, Cu2+ and Zn2+ ions. Our experimental model is based on the partial least squares (PLS) method, together with other techniques, such as a genetic algorithm and statistical feature selection (SFS), that were used beforehand to refine the analytical data. In summary, we have demonstrated that the root mean square error of prediction can be reduced from 10.22% without pre-treatment to 6.27% with statistical feature selection.
Modal and Wave Load Identification by ARMA Calibration
DEFF Research Database (Denmark)
Jensen, Jens Kristian Jehrbo; Kirkegaard, Poul Henning; Brincker, Rune
1992-01-01
In this note, modal parameter and wave load identification by calibration of ARMA models are considered for a simple offshore structure. The theory of identification by ARMA calibration is introduced as an identification technique in the time domain that can be applied to white noise-excited structures. The approach is illustrated by an experimental example of a monopile model excited by random waves. The identification results show that the approach is able to give very reliable estimates of the modal parameters. Furthermore, a comparison of the identified wave load process and the calculated load process based on the Morison equation shows...
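The modal-identification side of ARMA calibration can be sketched with the AR part alone: fit an AR(2) model to a sampled free decay and read the natural frequency and damping ratio off the discrete-time pole. This is a noiseless toy case (a full ARMA treatment also models the excitation noise):

```python
import numpy as np

# Free decay of a damped SDOF oscillator: fn = 1.5 Hz, zeta = 2%
fs, fn, zeta = 50.0, 1.5, 0.02
t = np.arange(0, 40, 1 / fs)
wd = 2 * np.pi * fn * np.sqrt(1 - zeta**2)
y = np.exp(-zeta * 2 * np.pi * fn * t) * np.cos(wd * t)

# Fit an AR(2) model by least squares: y[k] = a1*y[k-1] + a2*y[k-2]
Y = np.column_stack([y[1:-1], y[:-2]])
a1, a2 = np.linalg.lstsq(Y, y[2:], rcond=None)[0]

# Modal parameters from the discrete-time pole z = exp(s/fs)
pole = np.roots([1, -a1, -a2])[0]
s = np.log(pole) * fs              # continuous-time eigenvalue
fn_est = abs(s) / (2 * np.pi)      # natural frequency (Hz)
zeta_est = -s.real / abs(s)        # damping ratio
```

On this noiseless decay the fit recovers fn and zeta essentially exactly; with measured response data the MA part absorbs the noise coloration.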
Radiometric calibration of the in-flight blackbody calibration system of the GLORIA interferometer
Directory of Open Access Journals (Sweden)
C. Monte
2014-01-01
GLORIA is an airborne, imaging, infrared Fourier transform spectrometer that applies the limb-imaging technique to perform trace gas and temperature measurements in the Earth's atmosphere with three-dimensional resolution. To ensure the traceability of these measurements to the International Temperature Scale, and thereby to an absolute radiance scale, GLORIA carries an on-board calibration system. Basically, it consists of two identical large-area and high-emissivity infrared radiators, which can be continuously and independently operated at two adjustable temperatures in a range from −50 °C to 0 °C during flight. Here we describe the radiometric and thermometric characterization and calibration of the in-flight calibration system at the Reduced Background Calibration Facility of the Physikalisch-Technische Bundesanstalt. This was performed with a standard uncertainty of less than 110 mK. Extensive investigations of the system concerning its absolute radiation temperature and spectral radiance, its temperature homogeneity and its short- and long-term stability are discussed. The traceability chain of these measurements is presented.
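The link between a blackbody's temperature and the radiance scale is Planck's law; a short sketch evaluating it at the two ends of the stated calibration range (the 10 µm evaluation wavelength is an assumption for illustration):

```python
import numpy as np

# CODATA constants: Planck, speed of light, Boltzmann
h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck_radiance(wavelength_m, T):
    """Blackbody spectral radiance (W sr^-1 m^-3) from Planck's law."""
    x = h * c / (wavelength_m * k * T)
    return 2 * h * c**2 / wavelength_m**5 / np.expm1(x)

# Spectral radiance at 10 um for the two ends of the -50 C to 0 C range
L_cold = planck_radiance(10e-6, 223.15)
L_warm = planck_radiance(10e-6, 273.15)
```

A thermometric uncertainty such as the quoted 110 mK propagates into a radiance uncertainty through the derivative of this function with respect to T, which is why the blackbody temperatures must be traceable.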
DTW-APPROACH FOR UNCORRELATED MULTIVARIATE TIME SERIES IMPUTATION
Phan , Thi-Thu-Hong; Poisson Caillault , Emilie; Bigand , André; Lefebvre , Alain
2017-01-01
Missing data are inevitable in almost all domains of applied science. Data analysis with missing values can lead to a loss of efficiency and unreliable results, especially for large missing sub-sequence(s). Some well-known methods for multivariate time series imputation require high correlations between series or their features. In this paper, we propose an approach based on the shape-behaviour relation in low/un-correlated multivariate time series under an assumption of...
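Dynamic time warping, the alignment measure named in the title, compares series by warping their time axes. A minimal classic implementation (quadratic-time, no window constraint):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

x = np.sin(np.linspace(0, 2 * np.pi, 50))
shifted = np.sin(np.linspace(0, 2 * np.pi, 50) + 0.3)
# Warping absorbs much of the phase shift, so the DTW distance is never
# larger than the point-wise L1 distance between the two series.
```

In an imputation setting, DTW lets one find the historical sub-sequence whose shape best matches the data around a gap, even when the series are poorly correlated point-wise.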
Energy Technology Data Exchange (ETDEWEB)
Clegg, Samuel M [Los Alamos National Laboratory; Barefield, James E [Los Alamos National Laboratory; Wiens, Roger C [Los Alamos National Laboratory; Sklute, Elizabeth [MT HOLYOKE COLLEGE; Dyare, Melinda D [MT HOLYOKE COLLEGE
2008-01-01
Quantitative analysis with LIBS traditionally employs calibration curves that are complicated by chemical matrix effects. These matrix effects influence the LIBS plasma and the ratio of elemental composition to elemental emission line intensity. Consequently, LIBS calibration typically requires a priori knowledge of the unknown, so that a series of calibration standards similar to the unknown can be employed. In this paper, three new multivariate analysis (MVA) techniques are employed to analyze the LIBS spectra of 18 disparate igneous and highly metamorphosed rock samples. Partial Least Squares (PLS) analysis is used to generate a calibration model from which unknown samples can be analyzed. Principal Components Analysis (PCA) and Soft Independent Modeling of Class Analogy (SIMCA) are employed to generate a model and predict the rock type of the samples. These MVA techniques appear to exploit the matrix effects associated with the chemistries of these 18 samples.
Analysis of multi-species point patterns using multivariate log Gaussian Cox processes
DEFF Research Database (Denmark)
Waagepetersen, Rasmus; Guan, Yongtao; Jalilian, Abdollah
Multivariate log Gaussian Cox processes are flexible models for multivariate point patterns. However, they have so far only been applied in bivariate cases. In this paper we move beyond the bivariate case in order to model multi-species point patterns of tree locations. In particular we address the problems of identifying parsimonious models and of extracting biologically relevant information from the fitted models. The latent multivariate Gaussian field is decomposed into components given in terms of random fields common to all species and components which are species specific. This allows ... of the data. The selected number of common latent fields provides an index of complexity of the multivariate covariance structure. Hierarchical clustering is used to identify groups of species with similar patterns of dependence on the common latent fields.
van de Mheen, Lidewij; Schuit, Ewoud; Lim, Arianne C; Porath, Martina M; Papatsonis, Dimitri; Erwich, Jan J; van Eyck, Jim; van Oirschot, Charlotte M; Hummel, Piet; Duvekot, Johannes J; Hasaart, Tom H M; Groenwold, Rolf H H; Moons, Karl G M; de Groot, Christianne J M; Bruinse, Hein W; van Pampus, Maria G; Mol, Ben W J
2014-04-01
To develop a multivariable prognostic model for the risk of preterm delivery in women with multiple pregnancy that includes cervical length measurement at 16 to 21 weeks' gestation and other variables. We used data from a previous randomized trial. We assessed the association between maternal and pregnancy characteristics, including cervical length measurement at 16 to 21 weeks' gestation, and time to delivery using multivariable Cox regression modelling. Performance of the final model was assessed for the outcomes of preterm and very preterm delivery using calibration and discrimination measures. We studied 507 women, of whom 270 (53%) delivered preterm. The models for preterm and very preterm delivery had a c-index of 0.68 (95% CI 0.63 to 0.72) and 0.68 (95% CI 0.62 to 0.75), respectively, and showed good calibration. In women with a multiple pregnancy, the risk of preterm delivery can be assessed with a multivariable model incorporating cervical length and other predictors.
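The c-index reported above measures discrimination: among comparable patient pairs, how often the model assigns the higher risk to the earlier failure. A hedged sketch of Harrell's concordance index with right censoring (toy numbers, not the trial's data):

```python
def c_index(times, events, risk_scores):
    """Harrell's concordance index: a pair is comparable when the
    earlier time is an observed event; ties in risk count 1/2."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# toy cohort: risk mostly, but not perfectly, orders the event times
t = [2, 4, 6, 8, 10]
e = [1, 1, 0, 1, 0]             # 1 = event observed, 0 = censored
r = [0.9, 0.5, 0.8, 0.6, 0.1]
print(round(c_index(t, e, r), 3))   # → 0.75
```

A value of 0.5 is chance-level ranking and 1.0 is perfect, which puts the study's 0.68 in context.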
Effect of contact stiffness on wedge calibration of lateral force in atomic force microscopy
International Nuclear Information System (INIS)
Wang Fei; Zhao Xuezeng
2007-01-01
Quantitative friction measurement of nanomaterials in an atomic force microscope requires an accurate calibration method for the lateral force. The effect of contact stiffness on lateral force calibration of the atomic force microscope is discussed in detail and an improved calibration method is presented. The calibration factor derived from the original method increased with the applied normal load, which indicates that a separate calibration would be required for every applied normal load to keep the friction measurement accurate. We improve the original method by introducing a contact factor, derived from the contact stiffness between the tip and the sample, into the calculation of the calibration factors. The improved method makes it possible to calculate calibration factors under different applied normal loads without repeating the calibration procedure. Comparative experiments on a silicon wafer were performed with both methods to validate the method presented in this article.
A Monte Carlo error simulation applied to calibration-free X-ray diffraction phase analysis
International Nuclear Information System (INIS)
Braun, G.E.
1986-01-01
Quantitative phase analysis of a system of n phases can be effected without the need for calibration standards provided at least n different mixtures of these phases are available. A series of linear equations relating diffracted X-ray intensities, weight fractions and quantitation factors coupled with mass balance relationships can be solved for the unknown weight fractions and factors. Uncertainties associated with the measured X-ray intensities, owing to counting of random X-ray quanta, are used to estimate the errors in the calculated parameters utilizing a Monte Carlo simulation. The Monte Carlo approach can be generalized and applied to any quantitative X-ray diffraction phase analysis method. Two examples utilizing mixtures of CaCO3, Fe2O3 and CaF2 with an α-SiO2 (quartz) internal standard illustrate the quantitative method and corresponding error analysis. One example is well conditioned; the other is poorly conditioned and, therefore, very sensitive to errors in the measured intensities. (orig.)
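The Monte Carlo idea is to re-solve the phase-analysis equations many times with intensities perturbed by counting (Poisson) noise and read the parameter errors off the spread of the solutions. A heavily simplified sketch, assuming known quantitation factors and a mass-balance constraint (not Braun's full formulation, and all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed linear model: measured intensity I_j = k_j * w_j, plus the
# mass balance sum(w) = 1.
k_true = np.array([1200.0, 800.0, 1500.0])   # counts per unit fraction
w_true = np.array([0.5, 0.3, 0.2])
I_true = k_true * w_true

def solve_fractions(I, k):
    w = I / k
    return w / w.sum()                        # enforce mass balance

# Monte Carlo: perturb intensities with Poisson counting noise and
# collect the resulting weight-fraction estimates
draws = np.array([solve_fractions(rng.poisson(I_true), k_true)
                  for _ in range(2000)])
w_mean, w_std = draws.mean(0), draws.std(0)
```

The standard deviations in `w_std` play the role of the propagated counting-statistics errors; an ill-conditioned system would show up as a much larger spread for the same intensity noise.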
Calibration of clinical dosemeters in the IAEA water phantom
International Nuclear Information System (INIS)
Caldas, L.V.E.; Albuquerque, M.P.P.
1994-01-01
The procedures recommended by the IAEA Code of Practice were applied at the Calibration Laboratory of Sao Paulo in order to provide, in the future, users of clinical dosemeters with absorbed-dose-to-water calibration factors for Cobalt-60 radiation beams. In this work the clinical dosemeters were calibrated free in air and in water, and the results were compared using conversion factors. The several tested clinical dosemeters, of different manufacturers and models, belong to the laboratory and to hospitals. For the measurements in water the IAEA cubic water phantom was used. The dosemeters were all calibrated free in air in terms of air kerma, and the calibration factors in terms of absorbed dose to water were obtained through conversion factors. The same dosemeters were also calibrated in the water phantom. Good agreement was found between the two methods; the differences were always less than 0.5%. The data obtained during this work show that when the dosemeters are used only in Cobalt-60 radiation and the users apply the IAEA Code of Practice in their hospital routine work, the calibration can be performed directly in the water phantom. This procedure provides the useful calibration factors in terms of absorbed dose to water.
Calibration methodology for proportional counters applied to yield measurements of a neutron burst
International Nuclear Information System (INIS)
Tarifeño-Saldivia, Ariel; Pavez, Cristian; Soto, Leopoldo; Mayer, Roberto E
2015-01-01
This work introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. The methodology is based on the calibration of the counter in pulse mode, and the use of a statistical model to estimate the number of detected events from the accumulated charge resulting from detection of the burst of neutrons. An improvement of more than one order of magnitude in the accuracy of a paraffin wax moderated 3He-filled tube is obtained by using this methodology with respect to previous calibration methods. (paper)
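At its core, the statistical model divides the integrated burst charge by the mean single-event charge obtained from the pulse-mode calibration, with an uncertainty combining counting statistics and the spread of single-event charges. A simplified sketch with hypothetical charge units (not the paper's model):

```python
import random
import statistics

random.seed(42)

# Pulse-mode calibration: record single-neutron pulse charges to get
# the mean charge per detected event (hypothetical units).
cal_pulses = [random.gauss(1.0, 0.25) for _ in range(500)]
q_mean = statistics.fmean(cal_pulses)
q_sd = statistics.stdev(cal_pulses)

# Burst measurement: only the integrated charge Q is available.
n_true = 800
Q = sum(random.gauss(1.0, 0.25) for _ in range(n_true))

# Estimate the number of detected events; the (simplified) standard
# uncertainty combines Poisson counting and the charge-spread term.
n_hat = Q / q_mean
u_n = (n_hat + (q_sd / q_mean) ** 2 * n_hat) ** 0.5
```

With the seeded numbers above, `n_hat` lands close to the true 800 detected events, illustrating why pulse-mode charge calibration can beat simple pulse counting for intense bursts where pulses pile up.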
Variable Acceleration Force Calibration System (VACS)
Rhew, Ray D.; Parker, Peter A.; Johnson, Thomas H.; Landman, Drew
2014-01-01
Conventionally, force balances have been calibrated manually, using a complex system of free hanging precision weights, bell cranks, and/or other mechanical components. Conventional methods may provide sufficient accuracy in some instances, but are often quite complex and labor-intensive, requiring three to four man-weeks to complete each full calibration. To ensure accuracy, gravity-based loading is typically utilized. However, this often causes difficulty when applying loads in three simultaneous, orthogonal axes. A complex system of levers, cranks, and cables must be used, introducing increased sources of systematic error, and significantly increasing the time and labor intensity required to complete the calibration. One aspect of the VACS is a method wherein the mass utilized for calibration is held constant, and the acceleration is changed to thereby generate relatively large forces with relatively small test masses. Multiple forces can be applied to a force balance without changing the test mass, and dynamic forces can be applied by rotation or oscillating acceleration. If rotational motion is utilized, a mass is rigidly attached to a force balance, and the mass is exposed to a rotational field. A large force can be applied by utilizing a large rotational velocity. A centrifuge or rotating table can be used to create the rotational field, and fixtures can be utilized to position the force balance. The acceleration may also be linear. For example, a table that moves linearly and accelerates in a sinusoidal manner may also be utilized. The test mass does not have to move in a path that is parallel to the ground, and no re-leveling is therefore required. Balance deflection corrections may be applied passively by monitoring the orientation of the force balance with a three-axis accelerometer package. Deflections are measured during each test run, and adjustments with respect to the true applied load can be made during the post-processing stage. This paper will
Multivariate Bias Correction Procedures for Improving Water Quality Predictions from the SWAT Model
Arumugam, S.; Libera, D.
2017-12-01
Water quality observations are usually not available on a continuous basis for longer than 1-2 years at a time over a decadal period, given the labor requirements, which makes calibrating and validating mechanistic models difficult. Further, any physical model's predictions inherently have bias (i.e., under/over estimation) and require post-simulation techniques to preserve the long-term mean monthly attributes. This study suggests a multivariate bias-correction technique and compares it to a common technique for improving the performance of the SWAT model in predicting daily streamflow and TN loads across the Southeast based on split-sample validation. The approach is a dimension reduction technique, canonical correlation analysis (CCA), that regresses the observed multivariate attributes against the SWAT model simulated values. The common approach is a regression-based technique that uses an ordinary least squares regression to adjust model values. The observed cross-correlation between loadings and streamflow is better preserved when using canonical correlation while simultaneously reducing individual biases. Additionally, canonical correlation analysis does a better job of preserving the observed joint likelihood of observed streamflow and loadings. These procedures were applied to 3 watersheds chosen from the Water Quality Network in the Southeast Region; specifically, watersheds with sufficiently large drainage areas and numbers of observed data points. The performance of the two approaches is compared for the observed period and over a multi-decadal period using loading estimates from the USGS LOADEST model. Lastly, the CCA technique is applied in a forecasting sense by using 1-month-ahead forecasts of P & T from ECHAM4.5 as forcings in the SWAT model. Skill in using the SWAT model for forecasting loadings and streamflow at the monthly and seasonal timescale is also discussed.
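The "common approach" mentioned above, an ordinary least squares adjustment of simulated values toward observations, can be sketched in a few lines (hypothetical streamflow numbers; the CCA variant generalizes this regression to several attributes jointly):

```python
def ols_fit(x, y):
    """Ordinary least-squares intercept/slope for y ≈ a + b·x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# hypothetical SWAT-simulated vs observed monthly streamflow (mm)
sim = [30.0, 45.0, 60.0, 80.0, 100.0, 120.0]
obs = [25.0, 35.0, 55.0, 70.0, 95.0, 105.0]

a, b = ols_fit(sim, obs)
corrected = [a + b * s for s in sim]   # bias-corrected simulations
bias_before = sum(s - o for s, o in zip(sim, obs)) / len(obs)
bias_after = sum(c - o for c, o in zip(corrected, obs)) / len(obs)
```

By construction the OLS residuals average to zero over the calibration period, so the mean bias vanishes; what this univariate fix cannot do is preserve the cross-correlation between streamflow and loadings, which is the CCA approach's selling point.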
Stepwise Regression Analysis of MDOE Balance Calibration Data Acquired at DNW
DeLoach, Richard; Philipsen, Iwan
2007-01-01
This paper reports a comparison of two experiment design methods applied in the calibration of a strain-gage balance. One features a 734-point test matrix in which loads are varied systematically according to a method commonly applied in aerospace research and known in the literature of experiment design as One Factor At a Time (OFAT) testing. Two variations of an alternative experiment design were also executed on the same balance, each with different features of an MDOE experiment design. The Modern Design of Experiments (MDOE) is an integrated process of experiment design, execution, and analysis applied at NASA's Langley Research Center to achieve significant reductions in cycle time, direct operating cost, and experimental uncertainty in aerospace research generally and in balance calibration experiments specifically. Personnel in the Instrumentation and Controls Department of the German Dutch Wind Tunnels (DNW) have applied MDOE methods to evaluate them in the calibration of a balance using an automated calibration machine. The data have been sent to Langley Research Center for analysis and comparison. This paper reports key findings from this analysis. The chief result is that a 100-point calibration exploiting MDOE principles delivered quality comparable to a 700+ point OFAT calibration with significantly reduced cycle time and attendant savings in direct and indirect costs. While the DNW test matrices implemented key MDOE principles and produced excellent results, additional MDOE concepts implemented in balance calibrations at Langley Research Center are also identified and described.
Method of Calibrating a Force Balance
Parker, Peter A. (Inventor); Rhew, Ray D. (Inventor); Johnson, Thomas H. (Inventor); Landman, Drew (Inventor)
2015-01-01
A calibration system and method utilizes acceleration of a mass to generate a force on the mass. An expected value of the force is calculated based on the magnitude and acceleration of the mass. A fixture is utilized to mount the mass to a force balance, and the force balance is calibrated to provide a reading consistent with the expected force determined for a given acceleration. The acceleration can be varied to provide different expected forces, and the force balance can be calibrated for different applied forces. The acceleration may result from linear acceleration of the mass or rotational movement of the mass.
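For the rotational variant, the expected force follows from F = m·ω²·r, which is why a small, fixed test mass can generate large calibration loads at modest rotation rates. A sketch (hypothetical check point, not from the patent):

```python
import math

def expected_force(mass_kg, rpm, radius_m):
    """Centripetal force F = m·ω²·r for a mass on a rotating table."""
    omega = rpm * 2.0 * math.pi / 60.0   # convert rpm to rad/s
    return mass_kg * omega ** 2 * radius_m

# hypothetical check point: 2 kg mass on a 0.5 m arm at 300 rpm
f = expected_force(2.0, 300.0, 0.5)   # ≈ 987 N
```

Sweeping `rpm` then traces out a family of known applied forces against which the balance readings can be regressed, without ever changing the test mass.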
Evaluation of methods to calibrate radiation survey meters
International Nuclear Information System (INIS)
Robinson, R.C.; Arbeau, N.D.
1987-04-01
Calibration requirements for radiation survey meters used in industrial radiography have been reviewed. Information obtained from a literature search, discussions with CSLD inspectors and firms performing calibrations has been considered. Based on this review a set of minimum calibration requirements was generated which, when met, will determine that the survey meter is suited for measurements described in the current AEC Regulations that apply to industrial radiography equipment. These requirements are presented in this report and may be used as guidelines for evaluating calibration methods proposed or in use in industry. 39 refs
Calibration of an electronic nose for poultry farm
Abdullah, A. H.; Shukor, S. A.; Kamis, M. S.; Shakaff, A. Y. M.; Zakaria, A.; Rahim, N. A.; Mamduh, S. M.; Kamarudin, K.; Saad, F. S. A.; Masnan, M. J.; Mustafa, H.
2017-03-01
Malodour from poultry farms can cause air pollution and is therefore potentially dangerous to human and animal health. This issue also poses a sustainability risk to the poultry industry due to objections from the local community. The aim of this paper is to develop and calibrate a cost-effective and efficient electronic nose for poultry farm air monitoring. The instrument's main components include a sensor chamber, an array of specific sensors, a microcontroller, signal conditioning circuits and wireless sensor networks. The instrument was calibrated to allow classification of different concentrations of the main volatile compounds in poultry farm malodour. The outcome of the process will also confirm the device's reliability prior to being used for poultry farm malodour assessment. Multivariate analysis (HCA and KNN) and Artificial Neural Network (ANN) pattern recognition techniques were used to process the acquired data. The results show that the instrument is able to classify the samples using an ANN classification model with high accuracy. The finding verifies the instrument's performance for use as an effective poultry farm malodour monitor.
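KNN, one of the pattern recognition techniques named above, classifies a new sensor reading by majority vote among its nearest calibrated samples. A minimal sketch with made-up three-sensor responses (the actual e-nose uses more sensors and ANN models):

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """k-nearest-neighbour vote with squared Euclidean distance.
    train: list of (feature_vector, label) pairs."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), label)
        for x, label in train
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# hypothetical 3-sensor responses for two odour concentration classes
train = [((0.20, 0.10, 0.30), "low"), ((0.25, 0.15, 0.28), "low"),
         ((0.30, 0.12, 0.35), "low"), ((0.80, 0.70, 0.90), "high"),
         ((0.75, 0.65, 0.85), "high"), ((0.90, 0.80, 0.95), "high")]

label = knn_predict(train, (0.78, 0.70, 0.88), k=3)   # → "high"
```

KNN needs no training step beyond storing the calibration samples, which makes it a convenient baseline against which the ANN model's accuracy can be judged.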
Minaya, Veronica; Corzo, Gerald; van der Kwast, Johannes; Galarraga, Remigio; Mynett, Arthur
2014-05-01
Simulations of carbon cycling are prone to uncertainties from different sources, which in general are related to input data, parameters and the representation capacity of the model itself. The gross carbon uptake in the cycle is represented by the gross primary production (GPP), which deals with the spatio-temporal variability of the precipitation and the soil moisture dynamics. This variability, associated with uncertainty in the parameters, can be modelled by multivariate probabilistic distributions. Our study presents a novel methodology that uses multivariate Copula analysis to assess the GPP. Multi-species and elevation variables are included in the first scenario of the analysis. Hydro-meteorological conditions that might generate a change in the next 50 or more years are included in a second scenario of this analysis. The biogeochemical model BIOME-BGC was applied in the Ecuadorian Andean region at elevations greater than 4000 masl with the presence of typical páramo vegetation. The change of GPP over time is crucial for climate scenarios of carbon cycling in this type of ecosystem. The results help to improve our understanding of the ecosystem function and clarify the dynamics and the relationship with the change of climate variables. Keywords: multivariate analysis, Copula, BIOME-BGC, NPP, páramos
Differential Evolution algorithm applied to FSW model calibration
Idagawa, H. S.; Santos, T. F. A.; Ramirez, A. J.
2014-03-01
Friction Stir Welding (FSW) is a solid state welding process that can be modelled using a Computational Fluid Dynamics (CFD) approach. These models use adjustable parameters to control the heat transfer and the heat input to the weld. These parameters are used to calibrate the model and they are generally determined using the conventional trial and error approach. Since this method is not very efficient, we used the Differential Evolution (DE) algorithm to successfully determine these parameters. In order to improve the success rate and to reduce the computational cost of the method, this work studied different characteristics of the DE algorithm, such as the evolution strategy, the objective function, the mutation scaling factor and the crossover rate. The DE algorithm was tested using a friction stir weld performed on a UNS S32205 Duplex Stainless Steel.
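DE evolves a population of candidate parameter vectors by differential mutation and crossover, keeping a trial only if it does not worsen the objective. A compact DE/rand/1/bin sketch, with a cheap quadratic misfit standing in for the expensive CFD model (all settings illustrative):

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           gens=200, seed=0):
    """Minimal DE/rand/1/bin minimizer over box bounds [(lo, hi), ...];
    returns the best parameter vector found."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)   # clip to bounds
                else:
                    v = pop[i][j]
                trial.append(v)
            fc = f(trial)
            if fc <= cost[i]:                 # greedy selection
                pop[i], cost[i] = trial, fc
    return min(zip(cost, pop))[1]

# stand-in for the CFD misfit: recover two "heat input" parameters
target = (1.5, 0.3)
misfit = lambda x: (x[0] - target[0]) ** 2 + (x[1] - target[1]) ** 2
best = differential_evolution(misfit, [(0, 5), (0, 1)])
```

In the real calibration each objective evaluation is a full CFD run, which is exactly why the authors study strategy, F, and CR choices: fewer evaluations to convergence directly cuts computational cost.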
Yu, H.; Gu, H.
2017-12-01
A novel multivariate seismic formation pressure prediction methodology is presented, which incorporates high-resolution seismic velocity data from prestack AVO inversion and petrophysical data (porosity and shale volume) derived from poststack seismic motion inversion. In contrast to traditional seismic formation pressure prediction methods, the proposed methodology is based on a multivariate pressure prediction model and utilizes a trace-by-trace multivariate regression analysis on seismic-derived petrophysical properties to calibrate model parameters, in order to make accurate predictions with higher resolution in both the vertical and lateral directions. With the prestack time migration velocity as the initial velocity model, an AVO inversion was first applied to the prestack dataset to obtain high-resolution, higher-frequency seismic velocity to be used as the velocity input for seismic pressure prediction, and the density dataset to calculate an accurate overburden pressure (OBP). Seismic Motion Inversion (SMI) is an inversion technique based on Markov Chain Monte Carlo simulation. Both structural variability and similarity of seismic waveform are used to incorporate well log data to characterize the variability of the property to be obtained. In this research, porosity and shale volume are first interpreted on well logs and then combined with poststack seismic data using SMI to build porosity and shale volume datasets for seismic pressure prediction. A multivariate effective stress model is used to convert the velocity, porosity and shale volume datasets to effective stress. After a thorough study of the regional stratigraphic and sedimentary characteristics, a regional normally compacted interval model is built, and the coefficients in the multivariate prediction model are determined in a trace-by-trace multivariate regression analysis on the petrophysical data. The coefficients are used to convert velocity, porosity and shale volume datasets to effective stress and then
Error-in-variables models in calibration
Lira, I.; Grientschnig, D.
2017-12-01
In many calibration operations, the stimuli applied to the measuring system or instrument under test are derived from measurement standards whose values may be considered to be perfectly known. In that case, it is assumed that calibration uncertainty arises solely from inexact measurement of the responses, from imperfect control of the calibration process and from the possible inaccuracy of the calibration model. However, the premise that the stimuli are completely known is never strictly fulfilled and in some instances it may be grossly inadequate. Then, error-in-variables (EIV) regression models have to be employed. In metrology, these models have been approached mostly from the frequentist perspective. In contrast, not much guidance is available on their Bayesian analysis. In this paper, we first present a brief summary of the conventional statistical techniques that have been developed to deal with EIV models in calibration. We then proceed to discuss the alternative Bayesian framework under some simplifying assumptions. Through a detailed example about the calibration of an instrument for measuring flow rates, we provide advice on how the user of the calibration function should employ the latter framework for inferring the stimulus acting on the calibrated device when, in use, a certain response is measured.
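When both stimulus and response are noisy and the ratio of their error variances is known, the classical EIV fit has a closed form: Deming regression. A sketch on hypothetical calibration points (this is one frequentist special case, not the paper's Bayesian treatment):

```python
import math

def deming_fit(x, y, delta=1.0):
    """Deming (errors-in-variables) regression: both x and y carry
    measurement error; delta = var(y error) / var(x error)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my)
              for xi, yi in zip(x, y)) / (n - 1)
    slope = ((syy - delta * sxx
              + math.sqrt((syy - delta * sxx) ** 2
                          + 4 * delta * sxy ** 2))
             / (2 * sxy))
    return my - slope * mx, slope   # intercept, slope

# noisy stimuli AND noisy responses (hypothetical calibration points,
# true relation roughly y = 2x)
x = [1.02, 1.98, 3.05, 3.96, 5.01]
y = [2.10, 3.90, 6.10, 8.00, 9.90]
a, b = deming_fit(x, y, delta=1.0)
```

Unlike ordinary least squares, which attributes all scatter to y and so biases the slope toward zero when x is also noisy, the Deming fit treats both error sources symmetrically.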
Multivariate missing data in hydrology - Review and applications
Ben Aissia, Mohamed-Aymen; Chebana, Fateh; Ouarda, Taha B. M. J.
2017-12-01
Water resources planning and management require complete data sets of a number of hydrological variables, such as flood peaks and volumes. However, hydrologists are often faced with the problem of missing data (MD) in hydrological databases. Several methods are used to deal with the imputation of MD. During the last decade, multivariate approaches have gained popularity in the field of hydrology, especially in hydrological frequency analysis (HFA). However, treating MD remains neglected in the multivariate HFA literature, where the focus has been mainly on the modeling component. For a complete analysis, and in order to optimize the use of data, MD should also be treated in the multivariate setting prior to modeling and inference. Imputation of MD in the multivariate hydrological framework can have direct implications on the quality of the estimation. Indeed, the dependence between the series represents important additional information that can be included in the imputation process. The objective of the present paper is to highlight the importance of treating MD in multivariate hydrological frequency analysis by reviewing and applying multivariate imputation methods and by comparing univariate and multivariate imputation methods. An application is carried out for multiple flood attributes at three sites in order to evaluate the performance of the different methods based on the leave-one-out procedure. The results indicate that the performance of imputation methods can be improved by adopting the multivariate setting, compared to mean substitution and interpolation methods, especially when using the copula-based approach.
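The gain from the multivariate setting shows up even in the simplest case: a missing value imputed by regression on a correlated series beats mean substitution. A toy sketch with hypothetical flood peaks and volumes (not the paper's data or methods):

```python
def regression_impute(x, y, missing_idx):
    """Fill y[missing_idx] from a linear fit of y on the correlated
    series x, using only the complete pairs."""
    pairs = [(xi, yi) for i, (xi, yi) in enumerate(zip(x, y))
             if i != missing_idx]
    n = len(pairs)
    mx = sum(p[0] for p in pairs) / n
    my = sum(p[1] for p in pairs) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in pairs)
         / sum((xi - mx) ** 2 for xi, _ in pairs))
    a = my - b * mx
    return a + b * x[missing_idx]

# hypothetical flood peaks (x) and volumes (y), strongly dependent
peaks = [120.0, 150.0, 90.0, 200.0, 170.0]
volumes = [310.0, 390.0, 235.0, 520.0, 440.0]

true_v = volumes[3]                 # pretend volumes[3] was lost
mean_sub = sum(v for i, v in enumerate(volumes) if i != 3) / 4
reg_v = regression_impute(peaks, volumes, 3)
```

Because the lost volume belongs to the largest flood, the series mean badly underestimates it, while the peak-volume dependence recovers it closely; copula-based imputation extends the same idea to non-linear, non-Gaussian dependence.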
Directional outlyingness for multivariate functional data
Dai, Wenlin
2018-04-07
The direction of outlyingness is crucial to describing the centrality of multivariate functional data. Motivated by this idea, classical depth is generalized to directional outlyingness for functional data. Theoretical properties of functional directional outlyingness are investigated, and the total outlyingness can be naturally decomposed into two parts: magnitude outlyingness and shape outlyingness, which represent the centrality of a curve in magnitude and shape, respectively. This decomposition serves as a visualization tool for the centrality of curves. Furthermore, an outlier detection procedure is proposed based on functional directional outlyingness. This criterion applies to both univariate and multivariate curves, and simulation studies show that it outperforms competing methods. Weather and electrocardiogram data demonstrate the practical application of our proposed framework.
On-line monitoring for calibration reduction
International Nuclear Information System (INIS)
Hoffmann, M.
2005-09-01
On-Line Monitoring evaluates instrument channel performance by assessing its consistency with other plant indications. Elimination or reduction of unnecessary field calibrations can reduce associated labour costs, reduce personnel radiation exposure, and reduce the potential for calibration errors. On-line calibration monitoring is an important technique to implement a state-based maintenance approach and reduce unnecessary field calibrations. In this report we will look at how the concept is currently applied in the industry and what the arising needs are as it becomes more commonplace. We will also look at the PEANO System, a tool developed by the Halden Project to perform signal validation and on-line calibration monitoring. Some issues will be identified that are being addressed in the further development of these tools to better serve the future needs of the industry in this area. An outline for how to improve these points and which aspects should be taken into account is described in detail. (Author)
Marine X-band Weather Radar Data Calibration
DEFF Research Database (Denmark)
Thorndahl, Søren Liedtke; Rasmussen, Michael R.
2012-01-01
Application of weather radar data in urban hydrology is evolving and radar data is now applied for modelling, analysis, and real time control purposes. In these contexts, it is all-important that the radar data is well calibrated and adjusted in order to obtain valid quantitative precipitation estimates. This paper presents some of the challenges in small marine X-band radar calibration by comparing three calibration procedures for assessing the relationship between radar and rain gauge data. Validation shows similar results for precipitation volumes but more diverse results on peak rain...
Towards a global network of gamma-ray detector calibration facilities
Tijs, Marco; Koomans, Ronald; Limburg, Han
2016-09-01
Gamma-ray logging tools are applied worldwide. At various locations, calibration facilities are used to calibrate these gamma-ray logging systems. Several attempts have been made to cross-correlate well known calibration pits, but this cross-correlation does not include calibration facilities in Europe or private company calibration facilities. Our aim is to set up a framework that gives the possibility to interlink all calibration facilities worldwide by using `tools of opportunity' - tools that have been calibrated in different calibration facilities, whether this usage was on a coordinated basis or by coincidence. To compare the measurements of different tools, it is important to understand the behaviour of the tools in the different calibration pits. Borehole properties, such as diameter, fluid, casing and probe diameter, strongly influence the outcome of gamma-ray borehole logging. Logs need to be properly calibrated and compensated for these borehole properties in order to obtain in-situ grades or to do cross-hole correlation. Some tool providers provide tool-specific correction curves for this purpose. Others rely on reference measurements against sources of known radionuclide concentration and geometry. In this article, we present an attempt to set up a framework for transferring `local' calibrations to be applied `globally'. This framework includes corrections for any geometry and detector size to give absolute concentrations of radionuclides from borehole measurements. This model is used to compare measurements in the calibration pits of Grand Junction, located in the USA; Adelaide (previously known as AMDEL), located in Adelaide, Australia; and Stonehenge, located at Medusa Explorations BV in the Netherlands.
Calibrating page sized Gafchromic EBT3 films
International Nuclear Information System (INIS)
Crijns, W.; Maes, F.; Heide, U. A. van der; Van den Heuvel, F.
2013-01-01
Purpose: The purpose is the development of a novel calibration method for dosimetry with Gafchromic EBT3 films. The method should be applicable for pretreatment verification of volumetric modulated arc therapy and intensity-modulated radiotherapy. Because the exposed area on film can be large for such treatments, lateral scan errors must be taken into account. The correction for the lateral scan effect is obtained from the calibration data itself. Methods: In this work, the film measurements were modeled using their relative scan values (transmittance, T). Inside the transmittance domain, a linear combination and a parabolic lateral scan correction described the observed transmittance values. The linear combination model combined a monomer transmittance state (T0) and a polymer transmittance state (T∞) of the film. The dose domain was associated with the observed effects in the transmittance domain through a rational calibration function. On the calibration film only simple static fields were applied, and page-sized films were used for calibration and measurements (treatment verification). Four different calibration setups were considered and compared with respect to dose estimation accuracy. The first (I) used a calibration table from 32 regions of interest (ROIs) spread over 4 calibration films, the second (II) used 16 ROIs spread over 2 calibration films, and the third (III) and fourth (IV) used 8 ROIs spread over a single calibration film. The calibration tables of setups I, II, and IV contained eight dose levels delivered to different positions on the films, while for setup III only four dose levels were applied. Validation was performed by irradiating film strips with known doses at two different time points over the course of a week. Accuracy of the dose response and the lateral effect correction was estimated using the dose difference and the root mean squared error (RMSE), respectively. Results: A calibration based on two films was the optimal balance between
Financial model calibration using consistency hints.
Abu-Mostafa, Y S
2001-01-01
We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to Japanese Yen swaps market and US dollar yield market.
Lidar to lidar calibration phase 1
DEFF Research Database (Denmark)
Yordanova, Ginka; Courtney, Michael
This report presents a feasibility study of a lidar to lidar (L2L) calibration procedure. Phase one of the project was conducted at Høvsøre, Denmark. Two windcubes were placed next to the 116 m met mast and different methods were applied to obtain the sensing height error of the lidars. The purpose is to find the most consistent method and use it in a potential lidar to lidar calibration procedure.
International Nuclear Information System (INIS)
El Bouanani, Mohamed; Hult, Mikael; Persson, Leif; Swietlicki, Erik; Andersson, Margaretha; Oestling, Mikael; Lundberg, Nils; Zaring, Carina; Cohen, D.D.; Dytlewski, Nick; Johnston, P.N.; Walker, S.R.; Bubb, I.F.; Whitlow, H.J.
1994-01-01
Heavy ion recoil spectrometry is rapidly becoming a well established analysis method, but the associated data analysis processing is still not well developed. The pronounced nonlinear response of silicon detectors for heavy ions leads to serious limitations and complications in mass gating, which is the principal factor in obtaining energy spectra with minimal cross talk between elements. To overcome these limitations, a simple empirical formula with an associated multiple regression method is proposed for the absolute energy calibration of the time-of-flight/energy dispersive detector telescope used in recoil spectrometry. A radical improvement in mass assignment was realized, which allows more accurate depth profiling and makes the data processing much easier. ((orig.))
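The regression step can be illustrated as below. The particular mass-dependent functional form and all numerical values are hypothetical placeholders, since the abstract does not give the empirical formula; only the multiple-regression idea is taken from the source.

```python
import numpy as np

def fit_energy_calibration(x, m, E):
    """Fit a hypothetical mass-dependent energy calibration
    E = a*x + b*x*ln(m) + c*m + d by multiple linear regression,
    where x is the detector pulse height and m the ion mass."""
    A = np.column_stack([x, x * np.log(m), m, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, E, rcond=None)
    return coef

def predict_energy(coef, x, m):
    """Evaluate the fitted calibration for a pulse height and ion mass."""
    a, b, c, d = coef
    return a * x + b * x * np.log(m) + c * m + d
```

One calibration surface over pulse height and mass replaces per-element calibration curves, which is what eases the mass gating described above.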
Colorimetric calibration of wound photography with off-the-shelf devices
Bala, Subhankar; Sirazitdinova, Ekaterina; Deserno, Thomas M.
2017-03-01
Digital cameras are now widely used for photographic documentation in the medical sciences. However, color reproducibility of the same objects suffers under different illumination and lighting conditions. This variation in color representation is problematic when the images are used for segmentation and measurements based on color thresholds. In this paper, motivated by photographic follow-up of chronic wounds, we assess the impact of (i) gamma correction, (ii) white balancing, (iii) background unification, and (iv) reference card-based color correction. Automatic gamma correction and white balancing are applied to support the calibration procedure, where gamma correction is a nonlinear color transform. For unevenly illuminated images, non-uniform illumination correction is applied. In the last step, we apply colorimetric calibration using a reference color card of 24 patches with known colors. A lattice detection algorithm is used for locating the card. The least squares algorithm is applied for affine color calibration in the RGB model. We have tested the algorithm on images with seven different types of illumination, with and without flash, using three different off-the-shelf cameras including smartphones. We analyzed the spread of the resulting color values of a selected color patch before and after applying the calibration. Additionally, we checked the individual contribution of each step of the calibration process. Using all steps, we achieved up to an 81% reduction in the standard deviation of color patch values in the resulting images compared to the original images. This supports manual as well as automatic quantitative wound assessment with off-the-shelf devices.
Geert Heidema, A.; Thissen, U.; Boer, J.M.A.; Bouwman, F.G.; Feskens, E.J.M.; Mariman, E.C.M.
2009-01-01
In this study, we applied the multivariate statistical tool Partial Least Squares (PLS) to analyze the relative importance of 83 plasma proteins in relation to coronary heart disease (CHD) mortality and the intermediate end points body mass index, HDL-cholesterol and total cholesterol. From a Dutch
Multivariate generalized linear mixed models using R
Berridge, Damon Mark
2011-01-01
Multivariate Generalized Linear Mixed Models Using R presents robust and methodologically sound models for analyzing large and complex data sets, enabling readers to answer increasingly complex research questions. The book applies the principles of modeling to longitudinal data from panel and related studies via the Sabre software package in R. A Unified Framework for a Broad Class of Models The authors first discuss members of the family of generalized linear models, gradually adding complexity to the modeling framework by incorporating random effects. After reviewing the generalized linear model notation, they illustrate a range of random effects models, including three-level, multivariate, endpoint, event history, and state dependence models. They estimate the multivariate generalized linear mixed models (MGLMMs) using either standard or adaptive Gaussian quadrature. The authors also compare two-level fixed and random effects linear models. The appendices contain additional information on quadrature, model...
Multivariate spectral-analysis of movement-related EEG data
International Nuclear Information System (INIS)
Andrew, C. M.
1997-01-01
The univariate method of event-related desynchronization (ERD) analysis, which quantifies the temporal evolution of power within specific frequency bands from electroencephalographic (EEG) data recorded during a task or event, is extended to an event-related multivariate spectral analysis method. With this method, time courses of cross-spectra, phase spectra, coherence spectra, band-averaged coherence values (event-related coherence, ERCoh), partial power spectra and partial coherence spectra are estimated from an ensemble of multivariate event-related EEG trials. This provides a means of investigating relationships between EEG signals recorded over different scalp areas during the performance of a task or the occurrence of an event. The multivariate spectral analysis method is applied to EEG data recorded during three different movement-related studies involving discrete right index finger movements. The first study investigates the impact of the EEG derivation type on the temporal evolution of interhemispheric coherence between activity recorded at electrodes overlying the left and right sensorimotor hand areas during cued finger movement. This raises the question of whether changes in coherence necessarily reflect changes in functional coupling of the cortical structures underlying the recording electrodes. The second study applies the method to data recorded during voluntary finger movement, and a hypothesis, based on an existing global/local model of neocortical dynamics, is formulated to explain the coherence results. The third study applies partial spectral analysis to, and investigates phase relationships of, movement-related data recorded from a full-head montage, thereby providing further results strengthening the global/local hypothesis. (author)
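The coherence estimate at the heart of this method can be sketched with a simplified Welch-style average over non-overlapping segments (standing in for the ensemble of trials); windowing, overlap, and the partial/phase spectra of the full method are omitted for brevity.

```python
import numpy as np

def msc(x, y, nperseg=256):
    """Magnitude-squared coherence between two signals, estimated by
    averaging spectra over non-overlapping segments. Note: with a single
    segment the estimate is identically 1, so averaging over several
    segments (or trials) is essential for a meaningful value."""
    nseg = len(x) // nperseg
    Sxx = np.zeros(nperseg // 2 + 1)
    Syy = np.zeros(nperseg // 2 + 1)
    Sxy = np.zeros(nperseg // 2 + 1, dtype=complex)
    for k in range(nseg):
        xs = np.fft.rfft(x[k * nperseg:(k + 1) * nperseg])
        ys = np.fft.rfft(y[k * nperseg:(k + 1) * nperseg])
        Sxx += np.abs(xs) ** 2          # auto-spectrum of x
        Syy += np.abs(ys) ** 2          # auto-spectrum of y
        Sxy += xs * np.conj(ys)         # cross-spectrum
    return np.abs(Sxy) ** 2 / (Sxx * Syy)
```

By the Cauchy-Schwarz inequality the result lies between 0 and 1 at every frequency, with 1 indicating a fixed amplitude and phase relationship between the two channels across segments.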
Optimization of procedure for calibration with radiometer/photometer
International Nuclear Information System (INIS)
Detilly, Isabelle
2009-01-01
A test procedure for calibrating International Light radiometers/photometers is established at the Laboratorio de Fotometria y Tecnologia Laser (LAFTA) of the Escuela de Ingenieria Electrica of the Universidad de Costa Rica. Two photometric benches are used as the experimental setup, and two calibrations of the International Light instrument were performed. A basic procedure established in the laboratory is used for calibration from measurements of illuminance and luminous intensity. The results revealed some variations dependent on the photometric benches used in the calibration process, the programming of the radiometer/photometer, and the applied methodology. The calibration procedure can be improved by optimizing the programming of the measurement instrument, and possible errors can be minimized by following the recommended procedure. (author) [es
Multivariate statistical characterization of groundwater quality in Ain ...
African Journals Online (AJOL)
Administrator
depends much on the sustainability of the available water resources. Water of .... 18 wells currently in use were selected based on the preliminary field survey carried out to ... In recent times, multivariate statistical methods have been applied ...
Comparison of on-wafer calibrations using the concept of reference impedance
Purroy Martín, Francesc; Pradell i Cara, Lluís
1993-01-01
A novel method that allows different calibration techniques to be compared has been developed. It is based on determining the reference impedance of a given network analyzer calibration from the reflection coefficient measurement of a physical open circuit. The method has been applied to several on-wafer calibrations. Peer Reviewed
Fourier expansions and multivariable Bessel functions concerning radiation programmes
International Nuclear Information System (INIS)
Dattoli, G.; Richetta, M.; Torre, A.; Chiccoli, C.; Lorenzutta, S.; Maino, G.
1996-01-01
The link between generalized Bessel functions and other special functions is investigated using the Fourier series and the generalized Jacobi-Anger expansion. A new class of multivariable Hermite polynomials is then introduced and their relevance to physical problems discussed. As an example of the power of the method, applied to radiation physics, we analyse the role played by multi-variable Bessel functions in the description of radiation emitted by a charge constrained to a nonlinear oscillation. (author)
Loeza-Quintana, Tzitziki; Adamowicz, Sarah J
2018-02-01
During the past 50 years, the molecular clock has become one of the main tools for providing a time scale for the history of life. In the era of robust molecular evolutionary analysis, clock calibration is still one of the most basic steps needing attention. When fossil records are limited, well-dated geological events are the main resource for calibration. However, biogeographic calibrations have often been used in a simplistic manner, for example assuming simultaneous vicariant divergence of multiple sister lineages. Here, we propose a novel iterative calibration approach to define the most appropriate calibration date by seeking congruence between the dates assigned to multiple allopatric divergences and the geological history. Exploring patterns of molecular divergence in 16 trans-Bering sister clades of echinoderms, we demonstrate that the iterative calibration is predominantly advantageous when using complex geological or climatological events-such as the opening/reclosure of the Bering Strait-providing a powerful tool for clock dating that can be applied to other biogeographic calibration systems and further taxa. Using Bayesian analysis, we observed that evolutionary rate variability in the COI-5P gene is generally distributed in a clock-like fashion for Northern echinoderms. The results reveal a large range of genetic divergences, consistent with multiple pulses of trans-Bering migrations. A resulting rate of 2.8% pairwise Kimura-2-parameter sequence divergence per million years is suggested for the COI-5P gene in Northern echinoderms. Given that molecular rates may vary across latitudes and taxa, this study provides a new context for dating the evolutionary history of Arctic marine life.
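Once a clock rate is calibrated, dating a divergence is a direct division. The 2.8%/Myr COI-5P rate below is the study's estimate for Northern echinoderms; the example divergence value is hypothetical.

```python
def divergence_time_myr(k2p_percent, rate_percent_per_myr=2.8):
    """Strict-clock divergence time (Myr) from pairwise K2P sequence
    divergence. Assumes clock-like rate variation, as the study reports
    for the COI-5P gene in Northern echinoderms."""
    return k2p_percent / rate_percent_per_myr
```

For example, a hypothetical sister pair showing 9.8% pairwise K2P divergence would date to about 3.5 Ma under this rate; the iterative calibration described above is what justifies applying one rate across the trans-Bering clades.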
Down force calibration stand test report
International Nuclear Information System (INIS)
BOGER, R.M.
1999-01-01
The Down Force Calibration Stand was developed to provide an improved means of calibrating equipment used to apply, display and record Core Sample Truck (CST) down force. Originally, four springs were used in parallel to provide a system of resistance that allowed increasing force over increasing displacement. This spring system, though originally deemed adequate, was eventually found to be laterally unstable. For this reason, it was determined that a new method for resisting down force was needed.
A Review of Sensor Calibration Monitoring for Calibration Interval Extension in Nuclear Power Plants
Energy Technology Data Exchange (ETDEWEB)
Coble, Jamie B.; Meyer, Ryan M.; Ramuhalli, Pradeep; Bond, Leonard J.; Hashemian, Hash; Shumaker, Brent; Cummins, Dara
2012-08-31
Currently in the United States, periodic sensor recalibration is required for all safety-related sensors, typically occurring at every refueling outage, and it has emerged as a critical path item for shortening outage duration in some plants. Online monitoring can be employed to identify those sensors that require calibration, allowing for calibration of only those sensors that need it. International application of calibration monitoring, such as at the Sizewell B plant in the United Kingdom, has shown that sensors may operate for eight years, or longer, within calibration tolerances. This issue is expected to also be important as the United States looks to the next generation of reactor designs (such as small modular reactors and advanced concepts), given the anticipated longer refueling cycles, proposed advanced sensors, and digital instrumentation and control systems. The U.S. Nuclear Regulatory Commission (NRC) accepted the general concept of online monitoring for sensor calibration monitoring in 2000, but no U.S. plants have been granted the necessary license amendment to apply it. This report presents a state-of-the-art assessment of online calibration monitoring in the nuclear power industry, including sensors, calibration practice, and online monitoring algorithms. This assessment identifies key research needs and gaps that prohibit integration of the NRC-approved online calibration monitoring system in the U.S. nuclear industry. Several needs are identified, including the quantification of uncertainty in online calibration assessment; accurate determination of calibration acceptance criteria and quantification of the effect of acceptance criteria variability on system performance; and assessment of the feasibility of using virtual sensor estimates to replace identified faulty sensors in order to extend operation to the next convenient maintenance opportunity. Understanding the degradation of sensors and the impact of this degradation on signals is key to
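The core idea of online calibration monitoring, flagging only those sensors whose readings drift from a model-based estimate beyond an acceptance criterion, can be sketched as below. The mean-deviation statistic and the scalar limit are illustrative; as the report notes, real schemes must also quantify the uncertainty of the online estimate.

```python
def needs_recalibration(readings, estimates, acceptance_limit):
    """Flag a sensor whose mean deviation from the online-monitoring
    estimate exceeds the calibration acceptance criterion.
    A hypothetical, simplified decision rule for illustration only."""
    drift = sum(r - e for r, e in zip(readings, estimates)) / len(readings)
    return abs(drift) > acceptance_limit
```

Sensors that pass the check would be left in service until the next convenient maintenance opportunity, rather than being recalibrated at every outage.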
Smith, Joseph P; Smith, Frank C; Booksh, Karl S
2018-03-01
Lunar meteorites provide a more random sampling of the surface of the Moon than do the returned lunar samples, and they provide valuable information to help estimate the chemical composition of the lunar crust, the lunar mantle, and the bulk Moon. As of July 2014, ∼96 lunar meteorites had been documented, ten of which are unbrecciated mare basalts. Using Raman imaging with multivariate curve resolution-alternating least squares (MCR-ALS), we investigated portions of polished thin sections of paired, unbrecciated, mare-basalt lunar meteorites collected from the LaPaz Icefield (LAP) of Antarctica: LAP 02205 and LAP 04841. Polarized light microscopy shows that both meteorites are heterogeneous, consisting of particles of varying size, shape, and chemical composition. For two distinct probed areas within each meteorite, the individual chemical species and associated chemical maps were elucidated using MCR-ALS applied to Raman hyperspectral images. For LAP 02205, spatially and spectrally resolved clinopyroxene, ilmenite, substrate-adhesive epoxy, and diamond polish were observed within the probed areas. Similarly, for LAP 04841, spatially resolved chemical images with corresponding resolved Raman spectra of clinopyroxene, troilite, a high-temperature polymorph of anorthite, substrate-adhesive epoxy, and diamond polish were generated. In both LAP 02205 and LAP 04841, substrate-adhesive epoxy and diamond polish were more readily observed within fracture/veinlet features. Spectrally diverse clinopyroxenes were resolved in LAP 04841. Factors that allow these resolved clinopyroxenes to be differentiated include crystal orientation, spatially distinct chemical zoning of pyroxene crystals, and/or chemical and molecular composition. The minerals identified using this analytical methodology (clinopyroxene, anorthite, ilmenite, and troilite) are consistent with the results of previous studies of the two meteorites using electron microprobe
Calibration Laboratory of the Paul Scherrer Institute
International Nuclear Information System (INIS)
Gmuer, K.; Wernli, C.
1994-01-01
Calibration and working checks of radiation protection instruments are carried out at the Calibration Laboratory of the Paul Scherrer Institute. In view of the new radiation protection regulation, the calibration laboratory received an official federal status. The accreditation procedure in cooperation with the Federal Office of Metrology enabled a critical review of the techniques and methods applied. Specifically, personal responsibilities, time intervals for recalibration of standard instruments, maximum permissible errors of verification, traceability and accuracy of the standard instruments, form and content of the certificates were defined, and the traceability of the standards and quality assurance were reconsidered. (orig.) [de
Calibrating page sized Gafchromic EBT3 films
Energy Technology Data Exchange (ETDEWEB)
Crijns, W.; Maes, F.; Heide, U. A. van der; Van den Heuvel, F. [Department of Radiation Oncology, University Hospitals Leuven, Herestraat 49, 3000 Leuven (Belgium); Department ESAT/PSI-Medical Image Computing, Medical Imaging Research Center, KU Leuven, Herestraat 49, 3000 Leuven (Belgium); Department of Radiation Oncology, The Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX Amsterdam (Netherlands); Department of Radiation Oncology, University Hospitals Leuven, Herestraat 49, 3000 Leuven (Belgium)
2013-01-15
Purpose: The purpose is the development of a novel calibration method for dosimetry with Gafchromic EBT3 films. The method should be applicable for pretreatment verification of volumetric modulated arc, and intensity modulated radiotherapy. Because the exposed area on film can be large for such treatments, lateral scan errors must be taken into account. The correction for the lateral scan effect is obtained from the calibration data itself. Methods: In this work, the film measurements were modeled using their relative scan values (Transmittance, T). Inside the transmittance domain a linear combination and a parabolic lateral scan correction described the observed transmittance values. The linear combination model combined a monomer transmittance state (T0) and a polymer transmittance state (T∞) of the film. The dose domain was associated with the observed effects in the transmittance domain through a rational calibration function. On the calibration film only simple static fields were applied, and page sized films were used for calibration and measurements (treatment verification). Four different calibration setups were considered and compared with respect to dose estimation accuracy. The first (I) used a calibration table from 32 regions of interest (ROIs) spread on 4 calibration films, the second (II) used 16 ROIs spread on 2 calibration films, and the third (III) and fourth (IV) used 8 ROIs spread on a single calibration film. The calibration tables of setups I, II, and IV contained eight dose levels delivered to different positions on the films, while for setup III only four dose levels were applied. Validation was performed by irradiating film strips with known doses at two different time points over the course of a week. Accuracy of the dose response and the lateral effect correction was estimated using the dose difference and the root mean squared error (RMSE), respectively. Results: A calibration based on two films was the optimal balance between
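A minimal sketch of the two model ingredients named in the abstract follows. The parabolic form of the lateral correction is taken from the abstract, but its coefficient, the centring parameter, and the particular rational function used for dose are assumptions for illustration, not the paper's exact equations.

```python
def corrected_transmittance(T_obs, x_mm, c2, x0_mm=0.0):
    """Remove a parabolic lateral-scan effect centred at scanner axis
    position x0_mm. The coefficient c2 would be fitted from the
    calibration data itself, as described in the abstract."""
    return T_obs - c2 * (x_mm - x0_mm) ** 2

def dose_from_transmittance(T, a, b, c):
    """A rational calibration function mapping corrected transmittance
    to dose. This particular form and its parameters are hypothetical."""
    return (a + b * T) / (T - c)
```

In use, the lateral correction would be applied first so that one dose-response function serves the whole page-sized film, regardless of where on the scanner bed a ROI sits.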
International Nuclear Information System (INIS)
Junghans, A.
1996-01-01
The intensity of a 136Xe (600 A MeV) beam has been determined by simultaneously measuring the particle rate and the corresponding ionisation current with an ionisation chamber. The ionisation current of this self-calibrating device was compared at higher intensities with the current of a secondary-electron monitor, and a calibration of the secondary-electron current was achieved with a precision of 2%. This method can be applied to all high-energy heavy-ion beams. (orig.)
Multivariate Analysis for Quantification of Plutonium(IV) in Nitric Acid Based on Absorption Spectra
Energy Technology Data Exchange (ETDEWEB)
Lines, Amanda M. [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352, United States; Adami, Susan R. [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352, United States; Sinkov, Sergey I. [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352, United States; Lumetta, Gregg J. [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352, United States; Bryan, Samuel A. [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352, United States
2017-08-09
Development of more effective, reliable, and fast methods for monitoring process streams is a growing opportunity for analytical applications. Many fields can benefit from on-line monitoring, including the nuclear fuel cycle, where improved methods for monitoring radioactive materials will facilitate maintenance of proper safeguards and ensure safe and efficient processing of materials. On-line process monitoring with a focus on optical spectroscopy can provide a fast, non-destructive method for monitoring chemical species. However, identification and quantification of species can be hindered by the complexity of the solutions if bands overlap or show condition-dependent spectral features. Plutonium(IV) is one example of a species which displays significant spectral variation with changing nitric acid concentration. Univariate analysis (i.e., Beer's law) is difficult to apply to the quantification of Pu(IV) unless the nitric acid concentration is known and separate calibration curves have been made for all possible acid strengths. Multivariate, or chemometric, analysis is an approach that allows for the accurate quantification of Pu(IV) without a priori knowledge of the nitric acid concentration.
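The advantage of multivariate over univariate calibration can be illustrated with a toy inverse least-squares model on synthetic two-component spectra. Everything below (band shapes, wavelengths, concentrations) is synthetic, and the singular-value truncation stands in for the regularization that chemometric methods such as PLS provide in practice; it is not the analysis used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
wl = np.linspace(400.0, 900.0, 50)            # synthetic wavelength axis (nm)
s_pu = np.exp(-((wl - 550.0) / 20.0) ** 2)    # toy "Pu(IV)" band
s_acid = np.exp(-((wl - 700.0) / 40.0) ** 2)  # toy overlapping "HNO3" band
S = np.vstack([s_pu, s_acid])

# Training set: known concentration pairs, mixed spectra with small noise.
C_train = rng.uniform(0.1, 1.0, size=(20, 2))
A_train = C_train @ S + 1e-3 * rng.standard_normal((20, 50))

# Inverse calibration: regress concentrations on spectra. With more
# wavelengths than samples this is ill-posed, so small singular values
# are truncated (a PCR-style regularization) via rcond.
B, *_ = np.linalg.lstsq(A_train, C_train, rcond=0.01)

# Predict an unseen mixture without knowing the acid level a priori.
c_true = np.array([0.40, 0.70])
c_pred = (c_true @ S) @ B
```

A single-wavelength Beer's-law curve would mix the two overlapping bands together; the multivariate model recovers both concentrations at once, which is the abstract's point about quantifying Pu(IV) without prior knowledge of the acid strength.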
A new multivariate zero-adjusted Poisson model with applications to biomedicine.
Liu, Yin; Tian, Guo-Liang; Tang, Man-Lai; Yuen, Kam Chuen
2018-05-25
Recently, although advances have been made in modeling multivariate count data, existing models still have several limitations: (i) the multivariate Poisson log-normal model (Aitchison and Ho, ) cannot be used to fit multivariate count data with excess zero-vectors; (ii) the multivariate zero-inflated Poisson (ZIP) distribution (Li et al., 1999) cannot be used to model zero-truncated/deflated count data and is difficult to apply to high-dimensional cases; (iii) the Type I multivariate zero-adjusted Poisson (ZAP) distribution (Tian et al., 2017) can only model multivariate count data with a special correlation structure, in which the correlations between components are all positive or all negative. In this paper, we first introduce a new multivariate ZAP distribution, based on a multivariate Poisson distribution, which allows a more flexible dependency structure between components; that is, some of the correlation coefficients can be positive while others are negative. We then develop its important distributional properties and provide efficient statistical inference methods for the multivariate ZAP model with or without covariates. Two real data examples in biomedicine are used to illustrate the proposed methods. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Challenges in X-band Weather Radar Data Calibration
DEFF Research Database (Denmark)
Thorndahl, Søren; Rasmussen, Michael R.
2009-01-01
Application of weather radar data in urban hydrology is evolving, and radar data are now applied for modelling, analysis and real-time control purposes. In these contexts, it is all-important that the radar data are well calibrated and adjusted in order to obtain valid quantitative precipitation estimates. This paper compares two calibration procedures for a small marine X-band radar by comparing radar data with rain gauge data. Validation shows a very good consensus with regard to precipitation volumes, but more diverse results on peak rain intensities.
The 2007 ESO Instrument Calibration Workshop
Kaufer, Andreas; ESO Workshop
2008-01-01
The 2007 ESO Instrument Calibration workshop brought together more than 120 participants with the objectives to a) foster the sharing of information, experience and techniques between observers, instrument developers and instrument operation teams, b) review the actual precision and limitations of the applied instrument calibration plans, and c) collect the current and future requirements from the ESO users. These proceedings include the majority of the workshop's contributions and document the status quo of instrument calibration at ESO in great detail. Topics covered are: Optical Spectro-Imagers, Optical Multi-Object Spectrographs, NIR and MIR Spectro-Imagers, High-Resolution Spectrographs, Integral Field Spectrographs, Adaptive Optics Instruments, Polarimetric Instruments, Wide Field Imagers, Interferometric Instruments as well as other crucial aspects such as data flow, quality control, data reduction software and atmospheric effects. It was stated in the workshop that "calibration is a life-long l...
Temperature corrected-calibration of GRACE's accelerometer
Encarnacao, J.; Save, H.; Siemes, C.; Doornbos, E.; Tapley, B. D.
2017-12-01
Since April 2011, the thermal control of the accelerometers on board the GRACE satellites has been turned off. The time series of the along-track bias clearly shows a drastic change in the behaviour of this parameter, while the calibration model has remained unchanged throughout the entire mission lifetime. In an effort to improve the quality of the gravity field models produced at CSR in a future mission-long re-processing of GRACE data, we quantify the added value of different calibration strategies. In one approach, the temperature effects that distort the raw accelerometer measurements collected without thermal control are corrected using the housekeeping temperature readings. In this way, one single calibration strategy can be applied consistently over the whole mission lifetime, since it is valid for the thermal conditions both before and after April 2011. Finally, we illustrate that the resulting calibrated accelerations are suitable for neutral thermospheric density studies.
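A temperature-dependent bias correction of the kind described can be sketched as follows. The linear bias model and every coefficient are illustrative assumptions; the abstract does not specify CSR's actual correction model.

```python
def correct_acceleration(raw, housekeeping_temp, bias0, bias_per_K, T_ref):
    """Remove a hypothetical linear temperature-dependent bias from raw
    accelerometer readings, using housekeeping temperature telemetry.
    bias0 is the bias at reference temperature T_ref; bias_per_K its
    sensitivity to temperature (illustrative parameters only)."""
    return [a - (bias0 + bias_per_K * (t - T_ref))
            for a, t in zip(raw, housekeeping_temp)]
```

The attraction of such a model is the one stated in the abstract: a single parameterization remains valid whether or not the thermal control was active, because the temperature dependence is carried by the telemetry rather than by epoch-specific calibration constants.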
CryoSat-2 SIRAL Calibration: Strategy, Application and Results
Parrinello, T.; Fornari, M.; Bouzinac, C.; Scagliola, M.; Tagliani, N.
2012-04-01
The main payload of CryoSat-2 is a Ku-band pulse-width limited radar altimeter, called SIRAL (Synthetic interferometric radar altimeter), that transmits pulses at a high pulse repetition frequency, thus making the received echoes phase coherent and suitable for azimuth processing. This allows an along-track resolution of about 250 meters to be reached, which is an important improvement over traditional pulse-width limited altimeters. Because SIRAL is a phase-coherent pulse-width limited radar altimeter, a proper calibration approach has been developed. In fact, not only must corrections for transfer function amplitude with respect to frequency, gain, and instrument path delay be computed, but corrections are also needed for transfer function phase with respect to frequency and AGC setting, as well as for the phase variation across bursts of pulses. As a consequence, SIRAL regularly performs four types of calibrations: (1) CAL1, to calibrate the internal path delay and peak power variation; (2) CAL2, to compensate the instrument transfer function; (3) CAL4, to calibrate the interferometer; and (4) AutoCal, a specific sequence to calibrate the gain and phase difference for each AGC setting. Commissioning phase results (April-December 2010) revealed high stability of the instrument, which made it possible to reduce the calibration frequency during operations. Internal calibration data are processed on ground by the CryoSat-2 Instrument Processing Facility (IPF1) and then applied to the science data. In this poster we first describe the calibration strategy and then how the four different types of calibration are applied to science data. Moreover, the calibration results over almost 2 years of mission will be presented, analyzing their temporal evolution in order to highlight the stability of the instrument over its life.
Multivariate statistical analysis a high-dimensional approach
Serdobolskii, V
2000-01-01
In the last few decades the accumulation of large amounts of information in numerous applications has stimulated an increased interest in multivariate analysis. Computer technologies allow one to use multi-dimensional and multi-parametric models successfully. At the same time, an interest arose in statistical analysis with a deficiency of sample data. Nevertheless, it is difficult to describe the recent state of affairs in applied multivariate methods as satisfactory. Unimprovable (dominating) statistical procedures are still unknown except for a few specific cases. The simplest problem of estimating the mean vector with minimum quadratic risk is unsolved, even for normal distributions. Commonly used standard linear multivariate procedures based on the inversion of sample covariance matrices can lead to unstable results or provide no solution in dependence of data. Programs included in standard statistical packages cannot process 'multi-collinear data' and there are no theoretical recommendations ...
CryoSat SIRAL Calibration and Performance
Fornari, Marco; Scagliola, Michele; Tagliani, Nicolas; Parrinello, Tommaso
2013-04-01
The main payload of CryoSat is a Ku-band pulse-width limited radar altimeter, called SIRAL (Synthetic interferometric radar altimeter), that transmits pulses at a high pulse repetition frequency, thus making the received echoes phase coherent and suitable for azimuth processing. This allows an along-track resolution of about 250 meters to be reached, which is a significant improvement over traditional pulse-width limited altimeters. Because SIRAL is a phase-coherent pulse-width limited radar altimeter, a proper calibration approach has been developed, including both internal and external calibration. The internal calibration monitors the instrument impulse response and the transfer function, as in traditional altimeters. In addition, the interferometer requires a special calibration developed ad hoc for SIRAL. The external calibration is performed with the use of a ground transponder, located in Svalbard, which receives the SIRAL signal and sends the echo back to the satellite. Internal calibration data are processed on ground by the CryoSat Instrument Processing Facility (IPF1) and then applied to the science data. By April 2013, almost 3 years of calibration data will be available, which will be shown in this poster. The external calibration (transponder) data are processed and analyzed independently from the operational chain. The use of an external transponder has been very useful to determine instrument performance and for the tuning of the on-ground processor. This poster presents the transponder results in terms of range noise and datation error.
Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation
Energy Technology Data Exchange (ETDEWEB)
Robertson, Joseph [National Renewable Energy Lab. (NREL), Golden, CO (United States)]; Polly, Ben [National Renewable Energy Lab. (NREL), Golden, CO (United States)]; Collis, Jon [Colorado School of Mines, Golden, CO (United States)]
2013-09-01
This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define "explicit" input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.
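Of the four methods evaluated, the simplest, output ratio calibration, amounts to scaling the simulated output by the ratio of total measured to total simulated consumption. A minimal sketch (variable names and the monthly granularity are illustrative):

```python
def output_ratio_calibration(simulated_monthly, measured_monthly):
    """Scale every simulated value by the ratio of total measured to
    total simulated consumption, so the calibrated model reproduces the
    utility-bill total exactly. The monthly shape is left unchanged."""
    ratio = sum(measured_monthly) / sum(simulated_monthly)
    return [ratio * s for s in simulated_monthly]
```

This illustrates why such a method is cheap and repeatable but limited: it matches totals without adjusting any physical model input, so retrofit-savings predictions inherit whatever input errors the original model had.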
Systematic Calibration for a Backpacked Spherical Photogrammetry Imaging System
Rau, J. Y.; Su, B. W.; Hsiao, K. W.; Jhan, J. P.
2016-06-01
A spherical camera can observe the environment with almost a 720-degree field of view in one shot, which is useful for augmented reality, environment documentation, and mobile mapping applications. This paper aims to develop a spherical photogrammetry imaging system for 3D measurement from a backpacked mobile mapping system (MMS). The equipment used includes a Ladybug-5 spherical camera, a tactical-grade positioning and orientation system (POS), i.e. SPAN-CPT, and an odometer. This research aims to apply the photogrammetric space intersection technique directly for 3D mapping from a spherical image stereo-pair. For this purpose, several systematic calibration procedures are required: lens distortion calibration, relative orientation calibration, boresight calibration for direct georeferencing, and spherical image calibration. Lens distortion is severe in the Ladybug-5 camera's six original images. For spherical image mosaicking from these six original images, we propose using their relative orientation and correcting their lens distortion at the same time. However, the constructed spherical image still contains systematic error, which reduces the 3D measurement accuracy. For direct georeferencing, we establish a ground control field for boresight/lever-arm calibration and then apply the calibrated parameters to obtain the exterior orientation parameters (EOPs) of all spherical images. Finally, the 3D positioning accuracy after space intersection is evaluated, including with EOPs obtained by the structure-from-motion method.
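The space intersection step mentioned above reduces, in its simplest form, to finding the 3D point closest to two observation rays. A hedged sketch with made-up camera centers and ray directions (not the paper's data):

```python
import numpy as np

# Hedged sketch of photogrammetric space intersection for a stereo pair:
# the least-squares 3D point minimizing squared distance to the two rays.
def intersect_rays(centers, dirs):
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(centers, dirs):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ c
    return np.linalg.solve(A, b)

centers = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])]
target = np.array([0.5, 2.0, 1.0])
dirs = [target - c for c in centers]     # noiseless rays toward the target
print(np.round(intersect_rays(centers, dirs), 6))
```

With noiseless rays the recovered point equals the target; calibration errors in the EOPs perturb the ray directions and hence the intersected point, which is what the accuracy evaluation quantifies.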
PLEIADES ABSOLUTE CALIBRATION : INFLIGHT CALIBRATION SITES AND METHODOLOGY
Directory of Open Access Journals (Sweden)
S. Lachérade
2012-07-01
Full Text Available In-flight calibration of space sensors once in orbit is a decisive step in fulfilling the mission objectives. This article presents the in-flight absolute calibration methods applied during the commissioning phase. Four in-flight calibration methods are used: absolute calibration, cross-calibration with reference sensors such as PARASOL or MERIS, multi-temporal monitoring, and inter-band calibration. These algorithms are based on acquisitions over natural targets such as African deserts, Antarctic sites, La Crau (automatic calibration station) and oceans (calibration over molecular scattering), as well as new extra-terrestrial sites such as the Moon and selected stars. After an overview of the instrument and a description of the calibration sites, we show how each method addresses one or several aspects of the calibration. We focus on how these methods complement each other in operational use, and how they help build a coherent set of information covering all aspects of in-orbit calibration. Finally, we present the perspectives that the high agility of PLEIADES offers for improving its calibration and better characterizing the calibration sites.
An Optimal Calibration Method for a MEMS Inertial Measurement Unit
Directory of Open Access Journals (Sweden)
Bin Fang
2014-02-01
Full Text Available An optimal calibration method for a micro-electro-mechanical inertial measurement unit (MIMU) is presented in this paper. The accuracy of the MIMU depends strongly on calibration to remove deterministic systematic errors; the measurements also contain random errors. The overlapping Allan variance is applied to characterize the types of random error terms in the measurements. A calibration model that includes package misalignment error, sensor-to-sensor misalignment error, bias, and scale factor is built. A new concept of a calibration method, comprising a calibration scheme and a calibration algorithm, is proposed: the calibration scheme is designed by D-optimal experimental design, and the calibration algorithm is derived from a Kalman filter. In addition, thermal calibration is investigated, as the bias and scale factor vary with temperature. Simulations and real tests verify the effectiveness of the proposed calibration method and show that it outperforms the traditional method.
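The deterministic part of such a calibration model can be illustrated on a single accelerometer axis: recover the scale factor and bias by least squares from measurements at known reference inputs. This is a deliberate simplification of the paper's Kalman-filter approach, with synthetic numbers:

```python
import numpy as np

# Hedged sketch of deterministic IMU calibration for one accelerometer axis:
# measured = scale * true_input + bias, estimated by least squares from a
# six-position-style test (here only three known orientations; synthetic).
g = 9.81
true_inputs = np.array([-g, 0.0, g])             # axis down / flat / up
scale, bias = 1.02, 0.15                         # ground truth to recover
measured = scale * true_inputs + bias            # noiseless measurements

A = np.column_stack([true_inputs, np.ones(3)])
est_scale, est_bias = np.linalg.lstsq(A, measured, rcond=None)[0]
print(round(est_scale, 6), round(est_bias, 6))   # -> 1.02 0.15
```

The full model in the paper additionally estimates misalignment terms and lets bias and scale vary with temperature, which is why a Kalman filter over a D-optimal maneuver schedule is used instead of a one-shot fit.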
Alternative technique to neutron probe calibration in situ
International Nuclear Information System (INIS)
Encarnacao, F.; Carneiro, C.; Dall'Olio, A.
1990-01-01
An alternative technique of in situ neutron probe calibration was applied to a Podzolic soil. Under field conditions, the neutron probe calibration was performed using a special arrangement that prevented the lateral movement of water around the access tube of the neutron probe. During the experiments, successive amounts of water were uniformly infiltrated through the soil profile. Two plots were set up to study the effect of plot dimensions on the slope of the calibration curve. The results obtained showed that the amounts of water transferred to the soil profile were significantly correlated with the integrals of the count ratio along the soil profile on both plots. Consequently, the slope of the calibration curve under field conditions was determined. (author)
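Deriving the calibration curve's slope amounts to a simple linear regression of volumetric water content on neutron count ratio. A minimal sketch with synthetic values, not the study's Podzolic-soil measurements:

```python
import numpy as np

# Hedged sketch of a neutron probe field calibration: fit a straight line
# relating volumetric water content (theta) to count ratio. Synthetic data.
count_ratio = np.array([0.35, 0.55, 0.78, 0.95, 1.20])
theta = np.array([0.08, 0.14, 0.21, 0.26, 0.33])   # cm3/cm3

slope, intercept = np.polyfit(count_ratio, theta, 1)
print(round(slope, 3), round(intercept, 3))
```

The slope is the quantity the study determines under field conditions; as the record below on neutron moisture meters notes, it can differ between soils by as much as 40%, which is why in situ calibration matters.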
On Multivariate Methods in Robust Econometrics
Czech Academy of Sciences Publication Activity Database
Kalina, Jan
2012-01-01
Roč. 21, č. 1 (2012), s. 69-82 ISSN 1210-0455 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords : least weighted squares * heteroscedasticity * multivariate statistics * model selection * diagnostics * computational aspects Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.561, year: 2012 http://www.vse.cz/pep/abstrakt.php?IDcl=411
Planck 2013 results. VIII. HFI photometric calibration and mapmaking
DEFF Research Database (Denmark)
Planck Collaboration,; Ade, P. A. R.; Aghanim, N.
2013-01-01
This paper describes the processing applied to the HFI cleaned time-ordered data to produce photometrically calibrated maps. HFI observes the sky over a broad range of frequencies, from 100 to 857 GHz. To get the best accuracy on the calibration on such a large range, two different photometric ca...
Multivariate Sensitivity Analysis of Time-of-Flight Sensor Fusion
Schwarz, Sebastian; Sjöström, Mårten; Olsson, Roger
2014-09-01
Obtaining three-dimensional scenery data is an essential task in computer vision, with diverse applications in various areas such as manufacturing and quality control, security and surveillance, or user interaction and entertainment. Dedicated Time-of-Flight sensors can provide detailed scenery depth in real-time and overcome shortcomings of traditional stereo analysis. Nonetheless, they do not provide texture information and have limited spatial resolution. Therefore such sensors are typically combined with high resolution video sensors. Time-of-Flight Sensor Fusion is a highly active field of research. Over the recent years, there have been multiple proposals addressing important topics such as texture-guided depth upsampling and depth data denoising. In this article we take a step back and look at the underlying principles of ToF sensor fusion. We derive the ToF sensor fusion error model and evaluate its sensitivity to inaccuracies in camera calibration and depth measurements. In accordance with our findings, we propose certain courses of action to ensure high quality fusion results. With this multivariate sensitivity analysis of the ToF sensor fusion model, we provide an important guideline for designing, calibrating and running sophisticated Time-of-Flight sensor fusion capture systems.
Multivariate correction in laser-enhanced ionization with laser sampling
International Nuclear Information System (INIS)
Popov, A.M.; Labutin, T.A.; Sychev, D.N.; Gorbatenko, A.A.; Zorov, N.B.
2007-01-01
The possibility of normalizing laser-enhanced ionization (LEI) signals by several simultaneously measured reference signals (RS) has been examined with a view to correcting for variations in laser parameters and for matrix interferences. Opto-acoustic, atomic emission and non-selective ionization signals, and their pairwise combinations, were used as RS for Li determination in aluminum alloys (0-6% Mg, 0-5% Cu, 0-1% Sc, 0-1% Ag). A specific normalization procedure for the case of essential multicollinearity among the RS has been proposed. LEI and RS values for each fixed ablation pulse energy were plotted in Cartesian coordinates (x and y axes: the RS values; z axis: the LEI signal). It was found that in this three-dimensional space the slope of the correlation line with respect to the plane of the RS depends on the analyte content in the solid sample. The use of this slope has therefore been proposed as a multivariate-corrected analytical signal. Multivariate correlative normalization provides an analytical signal free of matrix interferences for Al-Mg-Cu-Li alloys. The application of this novel approach to the determination of Li allows unified calibration curves to be plotted for Al alloys of different matrix composition.
Multivariate correction in laser-enhanced ionization with laser sampling
Energy Technology Data Exchange (ETDEWEB)
Popov, A.M. [Department of Chemistry, M. V. Lomonosov Moscow State University, 119992 Russia Moscow GSP-2, Leninskie Gory 1 build.3 (Russian Federation); Labutin, T.A. [Department of Chemistry, M. V. Lomonosov Moscow State University, 119992 Russia Moscow GSP-2, Leninskie Gory 1 build.3 (Russian Federation)], E-mail: timurla@laser.chem.msu.ru; Sychev, D.N.; Gorbatenko, A.A.; Zorov, N.B. [Department of Chemistry, M. V. Lomonosov Moscow State University, 119992 Russia Moscow GSP-2, Leninskie Gory 1 build.3 (Russian Federation)
2007-03-15
The possibility of normalizing laser-enhanced ionization (LEI) signals by several simultaneously measured reference signals (RS) has been examined with a view to correcting for variations in laser parameters and for matrix interferences. Opto-acoustic, atomic emission and non-selective ionization signals, and their pairwise combinations, were used as RS for Li determination in aluminum alloys (0-6% Mg, 0-5% Cu, 0-1% Sc, 0-1% Ag). A specific normalization procedure for the case of essential multicollinearity among the RS has been proposed. LEI and RS values for each fixed ablation pulse energy were plotted in Cartesian coordinates (x and y axes: the RS values; z axis: the LEI signal). It was found that in this three-dimensional space the slope of the correlation line with respect to the plane of the RS depends on the analyte content in the solid sample. The use of this slope has therefore been proposed as a multivariate-corrected analytical signal. Multivariate correlative normalization provides an analytical signal free of matrix interferences for Al-Mg-Cu-Li alloys. The application of this novel approach to the determination of Li allows unified calibration curves to be plotted for Al alloys of different matrix composition.
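The geometric idea above can be sketched as fitting a plane to the LEI signal as a function of two reference signals and taking the plane's slope as the corrected analytical signal. The sketch below uses synthetic, non-collinear RS and plain least squares; the paper's special procedure for strongly multicollinear RS is not reproduced:

```python
import numpy as np

# Hedged sketch: regress the LEI signal (z) on two reference signals (x, y)
# across shots of varying pulse energy; the fitted plane's slope serves as
# the matrix-independent analytical signal. All numbers are synthetic.
rng = np.random.default_rng(0)
x = rng.uniform(1.0, 2.0, 30)                       # e.g. opto-acoustic RS
y = rng.uniform(0.5, 1.5, 30)                       # e.g. atomic emission RS
z = 3.0 * x + 1.5 * y + rng.normal(0.0, 0.05, 30)   # LEI signal

A = np.column_stack([x, y, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)
slope = np.hypot(coef[0], coef[1])   # slope of the plane relative to the RS plane
print(round(slope, 2))
```

Because the slope, not the raw LEI amplitude, carries the analyte information, shot-to-shot laser fluctuations that move points along the plane cancel out.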
Calibration method for a carbon nanotube field-effect transistor biosensor
International Nuclear Information System (INIS)
Abe, Masuhiro; Murata, Katsuyuki; Ataka, Tatsuaki; Matsumoto, Kazuhiko
2008-01-01
An easy calibration method based on the Langmuir adsorption theory is proposed for a carbon nanotube field-effect transistor (NTFET) biosensor. This method was applied to three NTFET biosensors that had approximately the same structure but exhibited different characteristics. After calibration, their experimentally determined characteristics showed good agreement with the calibration curve. The observed characteristics of these NTFET biosensors differed among devices because the carbon nanotube (CNT) forming the channel was not uniform. Although the controlled growth of a CNT is difficult, it is shown that an NTFET biosensor can be easily calibrated using the proposed calibration method, regardless of the CNT channel structure.
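A Langmuir-adsorption calibration models the sensor response as S = S_max·C/(K + C), which linearizes to 1/S = (K/S_max)(1/C) + 1/S_max so both constants fall out of an ordinary linear fit. A hedged sketch with synthetic, noiseless data (the paper's devices and constants are not reproduced):

```python
import numpy as np

# Hedged sketch of a Langmuir calibration: recover S_max and K from the
# double-reciprocal linearization by linear regression. Synthetic data.
S_max, K = 10.0, 2.0                       # ground truth to recover
C = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # analyte concentration
S = S_max * C / (K + C)                    # noiseless sensor response

slope, intercept = np.polyfit(1.0 / C, 1.0 / S, 1)
est_S_max = 1.0 / intercept
est_K = slope * est_S_max
print(round(est_S_max, 3), round(est_K, 3))   # -> 10.0 2.0
```

With noisy data a direct nonlinear fit of the Langmuir form is usually preferred, since the reciprocal transform amplifies errors at low concentration.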
Monitoring coordinate measuring machines by calibrated parts
International Nuclear Information System (INIS)
Weckenmann, A; Lorz, J
2005-01-01
Coordinate measuring machines (CMM) are essential for quality assurance and production control in modern manufacturing. To assure traceability during the use of a CMM, interim checks with calibrated objects are carried out periodically. For this purpose, special artefacts such as standardized ball plates, hole plates, ball bars or step gauges are usually measured. Measuring calibrated series parts would be more advantageous: applying the substitution method of ISO 15530-3:2000, such parts can be used. This is less costly and less time consuming than measuring expensive standardized objects in specially programmed measurement routines. Moreover, the measurement results can be compared directly with the calibration values, so direct information on systematic measurement deviations and the uncertainty of the measured features is available. The paper describes a procedure for monitoring horizontal-arm CMMs with calibrated sheet-metal series parts.
Planck 2013 results. VIII. HFI photometric calibration and mapmaking
Ade, P A R; Armitage-Caplan, C; Arnaud, M; Ashdown, M; Atrio-Barandela, F; Aumont, J; Baccigalupi, C; Banday, A J; Barreiro, R B; Battaner, E; Benabed, K; Benoît, A; Benoit-Lévy, A; Bernard, J -P; Bersanelli, M; Bertincourt, B; Bielewicz, P; Bobin, J; Bock, J J; Bond, J R; Borrill, J; Bouchet, F R; Boulanger, F; Bridges, M; Bucher, M; Burigana, C; Cardoso, J -F; Catalano, A; Challinor, A; Chamballu, A; Chary, R -R; Chen, X; Chiang, L -Y; Chiang, H C; Christensen, P R; Church, S; Clements, D L; Colombi, S; Colombo, L P L; Combet, C; Couchot, F; Coulais, A; Crill, B P; Curto, A; Cuttaia, F; Danese, L; Davies, R D; de Bernardis, P; de Rosa, A; de Zotti, G; Delabrouille, J; Delouis, J -M; Désert, F -X; Dickinson, C; Diego, J M; Dole, H; Donzelli, S; Doré, O; Douspis, M; Dupac, X; Efstathiou, G; Enßlin, T A; Eriksen, H K; Filliard, C; Finelli, F; Forni, O; Frailis, M; Franceschi, E; Galeotta, S; Ganga, K; Giard, M; Giardino, G; Giraud-Héraud, Y; González-Nuevo, J; Górski, K M; Gratton, S; Gregorio, A; Gruppuso, A; Hansen, F K; Hanson, D; Harrison, D; Helou, G; Henrot-Versillé, S; Hernández-Monteagudo, C; Herranz, D; Hildebrandt, S R; Hivon, E; Hobson, M; Holmes, W A; Hornstrup, A; Hovest, W; Huffenberger, K M; Jaffe, T R; Jaffe, A H; Jones, W C; Juvela, M; Keihänen, E; Keskitalo, R; Kisner, T S; Kneissl, R; Knoche, J; Knox, L; Kunz, M; Kurki-Suonio, H; Lagache, G; Lamarre, J -M; Lasenby, A; Laureijs, R J; Lawrence, C R; Jeune, M Le; Lellouch, E; Leonardi, R; Leroy, C; Lesgourgues, J; Liguori, M; Lilje, P B; Linden-Vørnle, M; López-Caniego, M; Lubin, P M; Macías-Pérez, J F; Maffei, B; Mandolesi, N; Maris, M; Marshall, D J; Martin, P G; Martínez-González, E; Masi, S; Matarrese, S; Matthai, F; Maurin, L; Mazzotta, P; McGehee, P; Meinhold, P R; Melchiorri, A; Mendes, L; Mennella, A; Migliaccio, M; Mitra, S; Miville-Deschênes, M -A; Moneti, A; Montier, L; Moreno, R; Morgante, G; Mortlock, D; Munshi, D; Murphy, J A; Naselsky, P; Nati, F; Natoli, P; Netterfield, C B; 
Nørgaard-Nielsen, H U; Noviello, F; Novikov, D; Novikov, I; Osborne, S; Oxborrow, C A; Paci, F; Pagano, L; Pajot, F; Paladini, R; Paoletti, D; Partridge, B; Pasian, F; Patanchon, G; Perdereau, O; Perotto, L; Perrotta, F; Piacentini, F; Piat, M; Pierpaoli, E; Pietrobon, D; Plaszczynski, S; Pointecouteau, E; Polenta, G; Ponthieu, N; Popa, L; Poutanen, T; Pratt, G W; Prézeau, G; Prunet, S; Puget, J -L; Rachen, J P; Reinecke, M; Remazeilles, M; Renault, C; Ricciardi, S; Riller, T; Ristorcelli, I; Rocha, G; Rosset, C; Roudier, G; Rusholme, B; Santos, D; Savini, G; Shellard, E P S; Spencer, L D; Starck, J -L; Stolyarov, V; Stompor, R; Sudiwala, R; Sunyaev, R; Sureau, F; Sutton, D; Suur-Uski, A -S; Sygnet, J -F; Tauber, J A; Tavagnacco, D; Techene, S; Terenzi, L; Tomasi, M; Tristram, M; Tucci, M; Umana, G; Valenziano, L; Valiviita, J; Van Tent, B; Vielva, P; Villa, F; Vittorio, N; Wade, L A; Wandelt, B D; Yvon, D; Zacchei, A; Zonca, A
2014-01-01
This paper describes the processing applied to the HFI cleaned time-ordered data to produce photometrically calibrated maps. HFI observes the sky over a broad range of frequencies, from 100 to 857 GHz. To achieve the best calibration accuracy over such a large range, two different photometric calibration schemes have to be used. The 545 and 857 GHz data are calibrated against Uranus and Neptune flux density measurements, compared with models of their atmospheric emission. The lower frequencies (below 353 GHz) are calibrated using the cosmic microwave background dipole. One of the components of this anisotropy results from the orbital motion of the satellite in the Solar System and is therefore time-variable. Photometric calibration is thus tightly linked to mapmaking, which also addresses low-frequency noise removal. The 2013 released HFI data show some evidence for apparent gain variations of the HFI bolometers' detection chain. These variations were identified by comparing obse...
Li, Yanming; Nan, Bin; Zhu, Ji
2015-06-01
We propose a multivariate sparse group lasso variable selection and estimation method for data with high-dimensional predictors as well as high-dimensional response variables. The method is carried out through a penalized multivariate multiple linear regression model with an arbitrary group structure for the regression coefficient matrix. It suits many biology studies well in detecting associations between multiple traits and multiple predictors, with each trait and each predictor embedded in some biological functional groups such as genes, pathways or brain regions. The method is able to effectively remove unimportant groups as well as unimportant individual coefficients within important groups, particularly for large p small n problems, and is flexible in handling various complex group structures such as overlapping or nested or multilevel hierarchical structures. The method is evaluated through extensive simulations with comparisons to the conventional lasso and group lasso methods, and is applied to an eQTL association study. © 2015, The International Biometric Society.
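The operation that lets group-penalized methods drop whole groups is group soft-thresholding, the proximal step of the group-lasso penalty. The sketch below shows only that building block, not the authors' full sparse group lasso algorithm (which adds an elementwise l1 step and iterates over a multivariate regression):

```python
import numpy as np

# Hedged sketch of group soft-thresholding: a coefficient group is zeroed
# out entirely when its Euclidean norm is below the penalty level, and is
# otherwise shrunk toward zero as a block.
def group_soft_threshold(beta, lam):
    norm = np.linalg.norm(beta)
    if norm <= lam:
        return np.zeros_like(beta)        # whole group removed
    return (1.0 - lam / norm) * beta      # group shrunk, kept

strong = np.array([3.0, 4.0])             # norm 5: survives
weak = np.array([0.3, 0.4])               # norm 0.5: dropped
print(group_soft_threshold(strong, 1.0))  # -> [2.4 3.2]
print(group_soft_threshold(weak, 1.0))    # -> [0. 0.]
```

Iterating this step group by group within a coordinate-descent loop is the standard way such penalized regressions are solved.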
J Olive, David
2017-01-01
This text presents methods that are robust to the assumption of a multivariate normal distribution or methods that are robust to certain types of outliers. Instead of using exact theory based on the multivariate normal distribution, the simpler and more applicable large sample theory is given. The text develops among the first practical robust regression and robust multivariate location and dispersion estimators backed by theory. The robust techniques are illustrated for methods such as principal component analysis, canonical correlation analysis, and factor analysis. A simple way to bootstrap confidence regions is also provided. Much of the research on robust multivariate analysis in this book is being published for the first time. The text is suitable for a first course in Multivariate Statistical Analysis or a first course in Robust Statistics. This graduate text is also useful for people who are familiar with the traditional multivariate topics, but want to know more about handling data sets with...
Beam-based calibration system of BPM offset on BEPC
International Nuclear Information System (INIS)
Hu Chunliang
2004-01-01
The ever-increasing demand for better performance from circular accelerators requires improved methods to calibrate beam position monitors (BPM). A beam-based calibration system has been established to locate the centers of the BPMs with respect to the magnetic centers of the quadrupole magnets. Additional windings are applied to the quadrupole magnets to make the quadrupole magnetic strength individually adjustable, with a single power supply driving all 32 additional windings. A software system has been completed to measure the BPM offsets automatically. Results show that with the beam-based calibration system the BPMs can be calibrated more quickly and accurately.
Energy Technology Data Exchange (ETDEWEB)
Harris, Candace [Florida Agriculture & Mechanic Univ.; Profeta, Luisa [Alakai Defense Systems, Inc.; Akpovo, Codjo [Florida Agriculture & Mechanic Univ.; Stowe, Ashley [Y-12 National Security Complex, Oak Ridge, TN (United States); Johnson, Lewis [Florida Agriculture & Mechanic Univ.
2017-10-09
The pseudo-univariate limit of detection (LOD) was calculated for comparison with the multivariate interval. Compared with the pseudo-univariate LOD, the multivariate LOD accounts for additional factors (e.g., signal uncertainties) and reveals the importance of building models that use not only the analyte's emission line but also its entire molecular spectrum.
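The pseudo-univariate LOD referenced above is conventionally the 3-sigma criterion: three times the blank standard deviation divided by the calibration sensitivity. A hedged sketch with illustrative numbers, not the study's values:

```python
# Hedged sketch of the pseudo-univariate limit of detection,
# LOD = 3 * s_blank / slope (IUPAC-style 3-sigma criterion).
s_blank = 0.012   # standard deviation of the blank signal (illustrative)
slope = 0.45      # calibration sensitivity, signal per unit concentration

lod = 3 * s_blank / slope
print(round(lod, 3))   # -> 0.08
```

Multivariate LOD intervals generalize this by propagating signal uncertainties through the full multivariate calibration model rather than a single emission line.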
Multivariate methods and forecasting with IBM SPSS statistics
Aljandali, Abdulkader
2017-01-01
This is the second of a two-part guide to quantitative analysis using the IBM SPSS Statistics software package; this volume focuses on multivariate statistical methods and advanced forecasting techniques. More often than not, regression models involve more than one independent variable. For example, forecasting methods are commonly applied to aggregates such as inflation rates, unemployment, exchange rates, etc., that have complex relationships with determining variables. This book introduces multivariate regression models and provides examples to help understand theory underpinning the model. The book presents the fundamentals of multivariate regression and then moves on to examine several related techniques that have application in business-orientated fields such as logistic and multinomial regression. Forecasting tools such as the Box-Jenkins approach to time series modeling are introduced, as well as exponential smoothing and naïve techniques. This part also covers hot topics such as Factor Analysis, Dis...
Power Estimation in Multivariate Analysis of Variance
Directory of Open Access Journals (Sweden)
Jean François Allaire
2007-09-01
Full Text Available Power is often overlooked in designing multivariate studies for the simple reason that it is believed to be too complicated. In this paper, it is shown that power estimation in multivariate analysis of variance (MANOVA) can be approximated using an F distribution for the three popular statistics (Hotelling-Lawley trace, Pillai-Bartlett trace, Wilks's likelihood ratio). Consequently, the same procedure as in any statistical test can be used: computation of the critical F value, computation of the noncentrality parameter (as a function of the effect size), and finally estimation of power using a noncentral F distribution. Various numerical examples are provided to help understand and apply the method. Problems related to post hoc power estimation are discussed.
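The three-step procedure described above can be sketched for a single F test (the multivariate statistics are handled the same way once their F approximation and degrees of freedom are in hand). The degrees of freedom and effect size below are illustrative assumptions, not values from the paper:

```python
from scipy.stats import f, ncf

# Hedged sketch of the power-estimation recipe: critical F under the null,
# noncentrality from the effect size, then power from the noncentral F.
alpha = 0.05
df1, df2 = 3, 36          # hypothesis and error degrees of freedom (assumed)
n, f2 = 40, 0.25          # total sample size and Cohen's f-squared (assumed)

f_crit = f.ppf(1 - alpha, df1, df2)      # step 1: critical value under H0
ncp = f2 * n                             # step 2: noncentrality parameter
power = ncf.sf(f_crit, df1, df2, ncp)    # step 3: P(F > f_crit | H1)
print(round(power, 2))
```

For MANOVA proper, df1, df2 and the noncentrality are computed from the chosen statistic's F approximation (e.g., Rao's approximation for Wilks's lambda) before applying the same three steps.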
International Nuclear Information System (INIS)
Greacen, E.L.; Correll, R.L.; Cunningham, R.B.; Johns, G.G.; Nicolls, K.D.
1981-01-01
Procedures common to different methods of calibration of neutron moisture meters are outlined and laboratory and field calibration methods compared. Gross errors which arise from faulty calibration techniques are described. The count rate can be affected by the dry bulk density of the soil, the volumetric content of constitutional hydrogen and other chemical components of the soil and soil solution. Calibration is further complicated by the fact that the neutron meter responds more strongly to the soil properties close to the detector and source. The differences in slope of calibration curves for different soils can be as much as 40%
Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm
Ulbrich, Norbert Manfred
2013-01-01
A new regression model search algorithm was developed in 2011 that may be used to analyze both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The new algorithm is a simplified version of a more complex search algorithm that was originally developed at the NASA Ames Balance Calibration Laboratory. The new algorithm has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression models. Therefore, the simplified search algorithm is not intended to replace the original search algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm either fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new regression model search algorithm.
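A regression model search of this kind can be caricatured as a greedy forward selection: starting from the intercept, repeatedly add whichever candidate term most reduces the residual sum of squares. This is a much-simplified stand-in for the NASA algorithm (which also enforces statistical quality constraints), with synthetic data:

```python
import numpy as np

# Hedged sketch of a greedy regression-model search by forward selection.
# Not the NASA Ames algorithm; just the generic idea with synthetic data.
def forward_select(X, y, n_terms):
    n = len(y)
    chosen = [np.ones(n)]                 # intercept always included
    remaining = list(range(X.shape[1]))
    for _ in range(n_terms):
        def rss(cols):
            A = np.column_stack(chosen + [X[:, c] for c in cols])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            return np.sum((y - A @ beta) ** 2)
        best = min(remaining, key=lambda c: rss([c]))
        chosen.append(X[:, best])
        remaining.remove(best)
        yield best

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5))
y = 2.0 * X[:, 3] - 1.0 * X[:, 0] + rng.normal(0, 0.1, 50)
print(list(forward_select(X, y, 2)))      # indices of the selected terms
```

The CPU-time discussion in the abstract comes from exactly this combinatorial structure: each additional candidate term multiplies the number of least-squares fits the search must perform.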
Essentials of multivariate data analysis
Spencer, Neil H
2013-01-01
"… this text provides an overview at an introductory level of several methods in multivariate data analysis. It contains in-depth examples from one data set woven throughout the text, and a free [Excel] Add-In to perform the analyses in Excel, with step-by-step instructions provided for each technique. … could be used as a text (possibly supplemental) for courses in other fields where researchers wish to apply these methods without delving too deeply into the underlying statistics." (The American Statistician, February 2015)
Input saturation in nonlinear multivariable processes resolved by nonlinear decoupling
Directory of Open Access Journals (Sweden)
Jens G. Balchen
1995-04-01
Full Text Available A new method is presented for resolving the problem of input saturation in nonlinear multivariable process control by means of elementary nonlinear decoupling (END). Input saturation can have serious consequences, particularly in multivariable control, because it may lead to very undesirable system behaviour and quite often to system instability. Many authors have searched for systematic techniques for designing multivariable control systems in which saturation may occur in any of the control variables (inputs, manipulated variables). No generally accepted method giving a closed-form solution seems to have been presented so far. The method of elementary nonlinear decoupling (END) can be applied directly to the case of saturating control variables by deriving as many control strategies as there are combinations of saturating control variables. The method is demonstrated on the multivariable control of a simulated Fluidized Catalytic Cracker (FCC) with very convincing results.
Davis, C.; Rozo, E.; Roodman, A.; Alarcon, A.; Cawthon, R.; Gatti, M.; Lin, H.; Miquel, R.; Rykoff, E. S.; Troxel, M. A.; Vielzeuf, P.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Annis, J.; Bechtol, K.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Buckley-Geer, E.; Burke, D. L.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Castander, F. J.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Doel, P.; Drlica-Wagner, A.; Fausti Neto, A.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gaztanaga, E.; Gerdes, D. W.; Giannantonio, T.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; Jain, B.; James, D. J.; Jeltema, T.; Krause, E.; Kuehn, K.; Kuhlmann, S.; Kuropatkin, N.; Lahav, O.; Li, T. S.; Lima, M.; March, M.; Marshall, J. L.; Martini, P.; Melchior, P.; Ogando, R. L. C.; Plazas, A. A.; Romer, A. K.; Sanchez, E.; Scarpine, V.; Schindler, R.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, M.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Vikram, V.; Walker, A. R.; Wechsler, R. H.
2018-06-01
Galaxy cross-correlations with high-fidelity redshift samples hold the potential to precisely calibrate systematic photometric redshift uncertainties arising from the unavailability of complete and representative training and validation samples of galaxies. However, application of this technique in the Dark Energy Survey (DES) is hampered by the relatively low number density, small area, and modest redshift overlap between photometric and spectroscopic samples. We propose instead using photometric catalogues with reliable photometric redshifts for photo-z calibration via cross-correlations. We verify the viability of our proposal using redMaPPer clusters from the Sloan Digital Sky Survey (SDSS) to successfully recover the redshift distribution of SDSS spectroscopic galaxies. We demonstrate how to combine photo-z with cross-correlation data to calibrate photometric redshift biases while marginalizing over possible clustering bias evolution in either the calibration or unknown photometric samples. We apply our method to DES Science Verification (DES SV) data in order to constrain the photometric redshift distribution of a galaxy sample selected for weak lensing studies, constraining the mean of the tomographic redshift distributions to a statistical uncertainty of Δz ˜ ±0.01. We forecast that our proposal can, in principle, control photometric redshift uncertainties in DES weak lensing experiments at a level near the intrinsic statistical noise of the experiment over the range of redshifts where redMaPPer clusters are available. Our results provide strong motivation to launch a programme to fully characterize the systematic errors from bias evolution and photo-z shapes in our calibration procedure.
Calibration of dosimeters at 80-120 keV electron irradiation
DEFF Research Database (Denmark)
Miller, A.; Helt-Hansen, J.
to calibrate thin-film dosimeters (Risø B3 and alanine films) by irradiation at 80–120 keV electron accelerators. This calibration was compared to a 10 MeV calibration, and we show that the radiation response of the dosimeter materials (the radiation chemical yield) is constant at these irradiation energies....... However, dose gradients within the dosimeters, when they are irradiated at low electron energies, mean that the calibration function will depend on both the irradiation energy and the required effective point of measurement of the dosimeter. These are general effects that apply to any dosimeter that has a non...
DEFF Research Database (Denmark)
Ørregård Nielsen, Morten
2015-01-01
the multivariate non-cointegrated fractional autoregressive integrated moving average (ARIMA) model. The novelty of the consistency result, in particular, is that it applies to a multivariate model and to an arbitrarily large set of admissible parameter values, for which the objective function does not converge...
Design of a calibration method for neutron individual dosimeters
International Nuclear Information System (INIS)
Belkhodia, M.
1984-12-01
Usually, albedo dosimeters are calibrated with beams of monoenergetic neutrons. Since the neutron energy around neutron sources varies greatly, we applied the calibration method to a mixed field whose energy spectrum lies between 0.025 eV and 10 MeV. The method is based on a mathematical model that treats the dosimeter response as a function of the neutron energy. Measurements carried out with solid state nuclear track detectors demonstrate the practical applicability of the dosimeter. The albedo dosimeter calibration gave results in good agreement with the recommendations of the international institutions.
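The folding model underlying such a calibration can be sketched numerically: the predicted dosimeter reading is the energy-dependent response folded with the field's fluence spectrum over the stated energy range. The 1/E spectrum, the power-law response, and the grid below are illustrative placeholders, not the actual model from the thesis.

```python
import numpy as np

def integrate(y, x):
    """Trapezoidal rule on an arbitrary (here logarithmic) grid."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Energy grid from thermal (0.025 eV) up to 10 MeV, expressed in MeV.
E = np.logspace(np.log10(0.025e-6), np.log10(10.0), 400)

# Toy 1/E fluence spectrum, normalised to unit fluence.
phi = 1.0 / E
phi /= integrate(phi, E)

# Toy albedo-type response that decreases with neutron energy.
r = (E / E[0]) ** -0.5

# Predicted dosimeter reading: response folded with the spectrum.
reading = integrate(r * phi, E)
```

Calibrating against a mixed field then amounts to comparing such folded readings with measured ones.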
Implementation Challenges for Multivariable Control: What You Did Not Learn in School
Garg, Sanjay
2008-01-01
Multivariable control allows controller designs that can provide decoupled command tracking and robust performance in the presence of modeling uncertainties. Although the last two decades have seen extensive development of multivariable control theory and example applications to complex systems in software/hardware simulations, there are no production flying systems, aircraft or spacecraft, that use multivariable control. This is because of the tremendous challenges associated with implementing such multivariable control designs. Unfortunately, school curricula do not provide sufficient time to expose students to these implementation challenges. The objective of this paper is to share the lessons learned by a practitioner of multivariable control in the process of applying modern control theory to the Integrated Flight Propulsion Control (IFPC) design for an advanced Short Take-Off Vertical Landing (STOVL) aircraft simulation.
Global Sensitivity Analysis for multivariate output using Polynomial Chaos Expansion
International Nuclear Information System (INIS)
Garcia-Cabrejo, Oscar; Valocchi, Albert
2014-01-01
Many mathematical and computational models used in engineering produce multivariate output that shows some degree of correlation. However, conventional approaches to Global Sensitivity Analysis (GSA) assume that the output variable is scalar. These approaches are then applied to each output variable separately, leading to a large number of sensitivity indices with a high degree of redundancy, which makes the interpretation of the results difficult. Two approaches have been proposed for GSA in the case of multivariate output: the output decomposition approach [9] and the covariance decomposition approach [14], but both are computationally intensive for most practical problems. In this paper, Polynomial Chaos Expansion (PCE) is used for efficient GSA with multivariate output. The results indicate that PCE allows efficient estimation of the covariance matrix and GSA on the coefficients in the approach defined by Campbell et al. [9], and the development of analytical expressions for the multivariate sensitivity indices defined by Gamboa et al. [14]. - Highlights: • PCE increases computational efficiency in 2 approaches of GSA of multivariate output. • Efficient estimation of covariance matrix of output from coefficients of PCE. • Efficient GSA on coefficients of orthogonal decomposition of the output using PCE. • Analytical expressions of multivariate sensitivity indices from coefficients of PCE
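The variance bookkeeping that makes PCE-based GSA cheap can be illustrated with a toy scalar expansion: for an orthonormal basis, each squared coefficient is a variance contribution, so Sobol-type indices are ratios of sums of squared coefficients. The multi-indices and coefficient values below are invented for illustration, not taken from the paper.

```python
# Toy PCE of a scalar output y(x1, x2) in an orthonormal basis:
# each entry is (multi-index over the two inputs, coefficient).
terms = [
    ((0, 0), 1.50),   # constant term (carries no variance)
    ((1, 0), 0.80),   # depends on x1 only
    ((2, 0), 0.30),
    ((0, 1), 0.50),   # depends on x2 only
    ((1, 1), 0.20),   # x1-x2 interaction term
]

# For an orthonormal basis, the output variance is the sum of the
# squared coefficients of all non-constant terms.
var_total = sum(c ** 2 for idx, c in terms if any(idx))

def first_order_sobol(i):
    """Share of variance carried by terms involving input i alone."""
    s = sum(c ** 2 for idx, c in terms
            if idx[i] > 0 and all(d == 0 for j, d in enumerate(idx) if j != i))
    return s / var_total

S1, S2 = first_order_sobol(0), first_order_sobol(1)
```

The interaction coefficient contributes to the total variance but to neither first-order index, which is exactly the redundancy-free decomposition the multivariate extensions build on.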
Directory of Open Access Journals (Sweden)
Cuesta Eduardo
2015-09-01
Full Text Available This paper presents a multivariate regression predictive model of drift in Coordinate Measuring Machine (CMM) behaviour. Evaluation tests on a CMM with a multi-step gauge were carried out following an extended version of an ISO evaluation procedure, with a periodicity of at least once a week over more than five months. The test procedure consists in measuring the gauge for several range volumes, spatial locations, distances and repetitions. The procedure, the environmental conditions and even the gauge have been kept invariable, so a massive measurement dataset was collected over time under high-repeatability conditions. A multivariate regression analysis revealed the main parameters that could affect the CMM behaviour, and then detected a trend in the CMM performance drift. A performance model that considers both the size of the measured dimension and the time elapsed since the last CMM calibration has been developed. This model can predict the CMM performance and measurement reliability over time, and can also estimate an optimized period between calibrations for a specific measurement length or accuracy level.
Multivariate Volatility Impulse Response Analysis of GFC News Events
D.E. Allen (David); M.J. McAleer (Michael); R.J. Powell (Robert)
2015-01-01
This paper applies the Hafner and Herwartz (2006) (hereafter HH) approach to the analysis of multivariate GARCH models using volatility impulse response analysis. The data set features ten years of daily returns series for the New York Stock Exchange Index and the
Multivariable modeling of pressure vessel and piping J-R data
International Nuclear Information System (INIS)
Eason, E.D.; Wright, J.E.; Nelson, E.E.
1991-05-01
Multivariable models were developed for predicting J-R curves from available data, such as material chemistry, radiation exposure, temperature, and Charpy V-notch energy. The present work involved collection of public test data, application of advanced pattern recognition tools, and calibration of improved multivariable models. Separate models were fitted for different material groups, including RPV welds, Linde 80 welds, RPV base metals, piping welds, piping base metals, and the combined database. Three different types of models were developed, involving different combinations of variables that might be available for applications: a Charpy model, a preirradiation Charpy model, and a copper-fluence model. In general, the best results were obtained with the preirradiation Charpy model. The copper-fluence model is recommended only if Charpy data are unavailable, and then only for Linde 80 welds. Relatively good fits were obtained, capable of predicting the values of J for pressure vessel steels to within a standard deviation of 13--18% over the range of test data. The models were qualified for predictive purposes by demonstrating their ability to predict validation data not used for fitting. 20 refs., 45 figs., 16 tabs
Calibration apparatus for a machine-tool
International Nuclear Information System (INIS)
Crespin, G.
1985-01-01
The invention proposes a calibration apparatus for a machine tool comprising a torque measuring device, in which the tool is driven by a motor whose supply current is proportional to the torque applied to the tool and can be controlled and measured, and a housing having an aperture through which the rotatable tool can pass. This device allows a torque to be applied to the tool and measured from the supply current of the motor. The invention applies more particularly to the screwing machines used for mounting the core containment plates [fr
Kinematic parameter calibration method for industrial robot manipulator using the relative position
International Nuclear Information System (INIS)
Ha, In Chul
2008-01-01
A new calibration method for industrial robot systems on the manufacturing floor is presented in this paper. To calibrate the robot system, a laser sensor that measures the distance between the robot tool and a measurement surface is attached to the robot end-effector, and a grid is established on the floor. Given two position command pulses for the robot manipulator and using the position difference between the two command pulses, the relative position measurement calibration method finds the real robot kinematic parameters. The procedures developed have been applied to an industrial robot. Finally, the effects of the models used to calibrate the robot are discussed. This calibration method represents an effective, low-cost and feasible technique for industrial robot calibration in lab projects and industrial environments
Calibration Analysis Software for the ATLAS Pixel Detector
AUTHOR|(INSPIRE)INSPIRE-00372086; The ATLAS collaboration
2016-01-01
The calibration of the ATLAS Pixel detector at the LHC fulfils two main purposes: to tune the front-end configuration parameters for establishing the best operational settings, and to measure the tuning performance through a subset of scans. An analysis framework has been set up to take actions on the detector given the outcome of a calibration scan (e.g. to create a mask for disabling noisy pixels). The software framework that controls all aspects of the Pixel detector scans and analyses is called the Calibration Console. The introduction of a new layer, equipped with new Front End-I4 chips, required an update of the Console architecture: it now handles scans and scan analyses applied together to chips with different characteristics. An overview of the newly developed Calibration Analysis Software is presented, together with some preliminary results.
Displaying an Outlier in Multivariate Data | Gordor | Journal of ...
African Journals Online (AJOL)
... a multivariate data set is proposed. The technique involves the projection of the multidimensional data onto a single dimension called the outlier displaying component. When the observations are plotted on this component the outlier is appreciably revealed. Journal of Applied Science and Technology (JAST), Vol. 4, Nos.
Energy Technology Data Exchange (ETDEWEB)
Carvalho, Samira Marques de
2018-04-01
SPECT system calibration plays an essential role in the accuracy of image quantification. In the first stage of this work, an optimized SPECT calibration method was proposed for {sup 131}I studies, considering the partial volume effect (PVE) and the position of the calibration source. In the second stage, the study investigated the impact of count density and reconstruction parameters on the determination of the calibration factor and on image quantification in dosimetry studies, considering the reality of clinical practice in Brazil. In the final step, the study evaluated the influence of several factors in the calibration for absorbed dose calculation using Monte Carlo (MC) simulations with the GATE code. Calibration was performed by determining a calibration curve (sensitivity versus volume) obtained by applying different thresholds; the calibration factors were then determined by fitting an exponential function. Images were acquired with high and low count densities for several source positions within the simulator. To validate the calibration method, the calibration factors were used for absolute quantification of the total reference activities. The images were reconstructed using two sets of parameters usually used in patient images. The methodology developed for the calibration of the tomographic system was easier and faster to implement than other procedures suggested to improve the accuracy of the results. The study also revealed the influence of the location of the calibration source, demonstrating better precision in absolute quantification when the location of the target region is considered during system calibration. The study, applied to the Brazilian thyroid protocol, suggests a revision of the SPECT system calibration, including different positions for the reference source, as well as acquisitions that consider the signal-to-noise ratio (SNR) of the images. Finally, the doses obtained
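The calibration-curve step can be sketched as a fit of sensitivity versus source volume. The abstract specifies only an "exponential function adjustment", so the saturating form below is an assumption, and the data values are invented; the idea is that small volumes lose counts to the partial volume effect.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical sensitivity-versus-volume points (cps/MBq vs mL);
# the numbers are illustrative, not taken from the study.
volume = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
sensitivity = np.array([2.8, 4.9, 8.1, 9.6, 10.0, 10.0])

# One plausible "exponential function adjustment": a saturating curve
# S(v) = S_inf * (1 - exp(-v / v0)), where the PVE suppresses counts
# from small sources.
def model(v, s_inf, v0):
    return s_inf * (1.0 - np.exp(-v / v0))

(s_inf, v0), _ = curve_fit(model, volume, sensitivity, p0=(10.0, 3.0))

def calibration_factor(v_ml):
    """Volume-dependent calibration factor: activity = counts / CF."""
    return model(v_ml, s_inf, v0)
```

Absolute quantification then divides measured counts by the calibration factor evaluated at the volume of the target region.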
2014-01-01
Background Before considering whether to use a multivariable (diagnostic or prognostic) prediction model, it is essential that its performance be evaluated in data that were not used to develop the model (referred to as external validation). We critically appraised the methodological conduct and reporting of external validation studies of multivariable prediction models. Methods We conducted a systematic review of articles describing some form of external validation of one or more multivariable prediction models indexed in PubMed core clinical journals published in 2010. Study data were extracted in duplicate on design, sample size, handling of missing data, reference to the original study developing the prediction models and predictive performance measures. Results 11,826 articles were identified and 78 were included for full review, which described the evaluation of 120 prediction models in participant data that were not used to develop the model. Thirty-three articles described both the development of a prediction model and an evaluation of its performance on a separate dataset, and 45 articles described only the evaluation of an existing published prediction model on another dataset. Fifty-seven percent of the prediction models were presented and evaluated as simplified scoring systems. Sixteen percent of articles failed to report the number of outcome events in the validation datasets. Fifty-four percent of studies made no explicit mention of missing data. Sixty-seven percent did not report evaluating model calibration, whilst most studies evaluated model discrimination. It was often unclear whether the reported performance measures were for the full regression model or for the simplified models. Conclusions The vast majority of studies describing some form of external validation of a multivariable prediction model were poorly reported, with key details frequently not presented. The validation studies were characterised by poor design, inappropriate handling
Multivariate Receptor Models for Spatially Correlated Multipollutant Data
Jun, Mikyoung
2013-08-01
The goal of multivariate receptor modeling is to estimate the profiles of major pollution sources and quantify their impacts based on ambient measurements of pollutants. Traditionally, multivariate receptor modeling has been applied to multiple air pollutant data measured at a single monitoring site or measurements of a single pollutant collected at multiple monitoring sites. Despite the growing availability of multipollutant data collected from multiple monitoring sites, there has not yet been any attempt to incorporate spatial dependence that may exist in such data into multivariate receptor modeling. We propose a spatial statistics extension of multivariate receptor models that enables us to incorporate spatial dependence into estimation of source composition profiles and contributions given the prespecified number of sources and the model identification conditions. The proposed method yields more precise estimates of source profiles by accounting for spatial dependence in the estimation. More importantly, it enables predictions of source contributions at unmonitored sites as well as when there are missing values at monitoring sites. The method is illustrated with simulated data and real multipollutant data collected from eight monitoring sites in Harris County, Texas. Supplementary materials for this article, including data and R code for implementing the methods, are available online on the journal web site. © 2013 Copyright Taylor and Francis Group, LLC.
CryoSat-2 SIRAL Calibration and Performance
Fornari, M.; Scagliola, M.; Tagliani, N.; Parrinello, T.
2012-12-01
The main payload of CryoSat-2 is a Ku-band pulse-width limited radar altimeter, called SIRAL (Synthetic interferometric radar altimeter), that transmits pulses at a high pulse repetition frequency, thus making the received echoes phase coherent and suitable for azimuth processing. This makes it possible to reach an along-track resolution of about 250 metres, a significant improvement over traditional pulse-width limited altimeters. Because SIRAL is a phase-coherent pulse-width limited radar altimeter, a dedicated calibration approach has been developed, including both internal and external calibration. The internal calibration monitors the instrument impulse response and the transfer function, as in traditional altimeters. In addition, the interferometer requires a special calibration developed ad hoc for SIRAL. The external calibration is performed with a ground transponder, located in Svalbard, which receives the SIRAL signal and sends the echo back to the satellite. Internal calibration data are processed on the ground by the CryoSat-2 Instrument Processing Facility (IPF1) and then applied to the science data. In December 2012, two and a half years of calibration data will be available, which will be shown in this poster. The external calibration (transponder) data are processed and analyzed independently from the operational chain. The use of an external transponder has been very useful for determining instrument performance and for tuning the on-ground processor. This poster presents the transponder results in terms of range noise and datation error.
FACT. Multivariate extraction of muon ring images
Energy Technology Data Exchange (ETDEWEB)
Noethe, Maximilian; Temme, Fabian; Buss, Jens [Experimentelle Physik 5b, TU Dortmund, Dortmund (Germany); Collaboration: FACT-Collaboration
2016-07-01
In ground-based gamma-ray astronomy, muon ring images are an important event class for instrument calibration and for monitoring its properties. In this talk, a multivariate approach is presented that is well suited for real-time extraction of muons from the data streams of Imaging Atmospheric Cherenkov Telescopes (IACTs). FACT, the First G-APD Cherenkov Telescope, is located on the Canary Island of La Palma and is the first IACT to use silicon photomultipliers for detecting the Cherenkov photons of extensive air showers. In the case of FACT, the extracted muon events are used to calculate the time resolution of the camera. In addition, the effect of the mirror alignment in May 2014 on the properties of detected muons is investigated. Muon candidates are identified with a random forest classification algorithm. The performance of the classifier is evaluated for different sets of image parameters in order to compare the gain in performance with the computational cost of their calculation.
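The random-forest classification step can be sketched as follows. The three "image parameters" and the class separations are synthetic stand-ins, not FACT's actual feature set; the point is the pattern of training a forest on per-event features and checking it with cross-validation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for image parameters (e.g. ring width, radius,
# log size); the separation between classes is invented for illustration.
n = 500
X_shower = rng.normal(loc=[0.10, 0.30, 2.5], scale=0.1, size=(n, 3))
X_muon = rng.normal(loc=[0.30, 0.30, 2.0], scale=0.1, size=(n, 3))
X = np.vstack([X_shower, X_muon])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 1 = muon ring candidate

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)      # 5-fold accuracy
mean_accuracy = scores.mean()
```

Comparing `mean_accuracy` across feature subsets mirrors the paper's trade-off between classifier performance and the cost of computing each image parameter.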
Efficient mass calibration of magnetic sector mass spectrometers
International Nuclear Information System (INIS)
Roddick, J.C.
1996-01-01
Magnetic sector mass spectrometers used for automatic acquisition of precise isotopic data are usually controlled with Hall probes and software that uses polynomial equations to define and calibrate the mass-field relation required for mass focusing. This procedure requires a number of reference masses and careful tuning to define and maintain an accurate mass calibration. A simplified equation is presented and applied to several different magnetically controlled mass spectrometers. The equation accounts for nonlinearity in typical Hall-probe-controlled mass-field relations, reduces calibration to a linear fitting procedure, and is sufficiently accurate to permit calibration over a mass range of 2 to 200 amu with only two defining masses. The procedures developed can quickly correct for normal drift in calibrations and compensate for drift during isotopic analysis over a limited mass range such as a single element. The equation is: Field = A·Mass^(1/2) + B·Mass^p, where A, B, and p are constants. The power value p has a characteristic value for a given Hall probe/controller and is insensitive to changing conditions, thus reducing calibration to a linear regression to determine the optimum A and B. (author). 1 ref., 1 tab., 6 figs
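With p held at its characteristic value, the equation is linear in A and B, so the calibration reduces to a least-squares solve and two defining masses suffice, as the abstract states. A sketch with invented constants (the values of p, A, B and the reference masses below are illustrative):

```python
import numpy as np

# Field = A*sqrt(M) + B*M**p is linear in A and B once p is fixed.
p = 1.2                              # characteristic exponent (assumed)
A_true, B_true = 100.0, 0.5          # "unknown" magnet constants
masses = np.array([18.0, 208.0])     # two defining reference masses (amu)
fields = A_true * np.sqrt(masses) + B_true * masses ** p

# Linear least-squares fit for A and B from the two reference points.
X = np.column_stack([np.sqrt(masses), masses ** p])
A_fit, B_fit = np.linalg.lstsq(X, fields, rcond=None)[0]

def mass_to_field(m):
    """Set-point field for focusing mass m with the fitted constants."""
    return A_fit * np.sqrt(m) + B_fit * m ** p
```

Drift correction amounts to repeating the same linear solve with fresh measurements of the defining masses.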
Multivariate Fréchet copulas and conditional value-at-risk
Directory of Open Access Journals (Sweden)
Werner Hürlimann
2004-01-01
is similar but not identical to the convex family of Fréchet. It is shown that the distribution and stop-loss transform of dependent sums from this multivariate family can be evaluated using explicit integral formulas, and that these dependent sums are bounded in convex order between the corresponding independent and comonotone sums. The model is applied to the evaluation of the economic risk capital for a portfolio of risks using conditional value-at-risk measures. A multivariate conditional value-at-risk vector measure is considered. Its components coincide for the constructed multivariate copula with the conditional value-at-risk measures of the risk components of the portfolio. This yields a fair risk allocation in the sense that each risk component becomes allocated to its coherent conditional value-at-risk.
Enhancing e-waste estimates: Improving data quality by multivariate Input–Output Analysis
Energy Technology Data Exchange (ETDEWEB)
Wang, Feng, E-mail: fwang@unu.edu [Institute for Sustainability and Peace, United Nations University, Hermann-Ehler-Str. 10, 53113 Bonn (Germany); Design for Sustainability Lab, Faculty of Industrial Design Engineering, Delft University of Technology, Landbergstraat 15, 2628CE Delft (Netherlands); Huisman, Jaco [Institute for Sustainability and Peace, United Nations University, Hermann-Ehler-Str. 10, 53113 Bonn (Germany); Design for Sustainability Lab, Faculty of Industrial Design Engineering, Delft University of Technology, Landbergstraat 15, 2628CE Delft (Netherlands); Stevels, Ab [Design for Sustainability Lab, Faculty of Industrial Design Engineering, Delft University of Technology, Landbergstraat 15, 2628CE Delft (Netherlands); Baldé, Cornelis Peter [Institute for Sustainability and Peace, United Nations University, Hermann-Ehler-Str. 10, 53113 Bonn (Germany); Statistics Netherlands, Henri Faasdreef 312, 2492 JP Den Haag (Netherlands)
2013-11-15
Highlights: • A multivariate Input–Output Analysis method for e-waste estimates is proposed. • Applying multivariate analysis to consolidate data can enhance e-waste estimates. • We examine the influence of model selection and data quality on e-waste estimates. • Datasets of all e-waste related variables in a Dutch case study have been provided. • Accurate modeling of time-variant lifespan distributions is critical for estimates. - Abstract: Waste electrical and electronic equipment (e-waste) is one of the fastest growing waste streams, encompassing a wide and increasing spectrum of products. Accurate estimation of e-waste generation is difficult, mainly due to a lack of high-quality data on market and socio-economic dynamics. This paper addresses how to enhance e-waste estimates by providing techniques to increase data quality. An advanced, flexible and multivariate Input–Output Analysis (IOA) method is proposed. It links all three pillars of IOA (product sales, stock and lifespan profiles) to construct mathematical relationships between the various data points. By applying this method, the data consolidation steps can generate more accurate time-series datasets from the available data pool, which in turn increases the reliability of the e-waste estimates compared to an approach without data processing. A case study in the Netherlands is used to apply the advanced IOA model. As a result, for the first time, complete datasets of all three variables for estimating all types of e-waste have been obtained. The results of this study also demonstrate a significant disparity between various estimation models, arising from the use of data under different conditions. This shows the importance of applying a multivariate approach and multiple sources to improve data quality for modelling, specifically using appropriate time-varying lifespan parameters. Following the case study, a roadmap with a procedural guideline is provided to enhance e
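The sales-stock-lifespan link at the heart of the IOA method can be sketched as a discrete convolution: units sold in year t are discarded after a random lifespan, so discards in a given year sum past sales weighted by a lifespan distribution. The Weibull parameters and sales figures below are illustrative placeholders, not the Dutch case-study data.

```python
import numpy as np

years = np.arange(2000, 2013)
sales = np.linspace(100.0, 220.0, len(years))    # units sold per year (toy)

# Assumed Weibull lifespan, discretised over 1..19 years and normalised.
tau = np.arange(1, 20)
shape, scale = 2.0, 8.0
f = (shape / scale) * (tau / scale) ** (shape - 1) * np.exp(-(tau / scale) ** shape)
f /= f.sum()

def e_waste(t_index):
    """Discards in year years[t_index] generated by all earlier sales."""
    w = 0.0
    for k, prob in zip(tau, f):
        j = t_index - k
        if j >= 0:
            w += sales[j] * prob
    return w

# Stock in the last year: everything sold minus everything discarded.
stock_2012 = sales.sum() - sum(e_waste(i) for i in range(len(years)))
```

The mathematical relationship between the three pillars is what lets a missing variable (e.g. stock) be reconstructed from the other two during data consolidation.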
Acoustic calibration apparatus for calibrating plethysmographic acoustic pressure sensors
Zuckerwar, Allan J. (Inventor); Davis, David C. (Inventor)
1995-01-01
An apparatus for calibrating an acoustic sensor is described. The apparatus includes a transmission material having an acoustic impedance approximately matching the acoustic impedance of the actual acoustic medium existing when the acoustic sensor is applied in actual in-service conditions. An elastic container holds the transmission material. A first sensor is coupled to the container at a first location on the container and a second sensor coupled to the container at a second location on the container, the second location being different from the first location. A sound producing device is coupled to the container and transmits acoustic signals inside the container.
Option price calibration from Renyi entropy
International Nuclear Information System (INIS)
Brody, Dorje C.; Buckley, Ian R.C.; Constantinou, Irene C.
2007-01-01
The calibration of the risk-neutral density function for the future asset price, based on the maximisation of the entropy measure of Renyi, is proposed. Whilst the conventional approach based on the use of logarithmic entropy measure fails to produce the observed power-law distribution when calibrated against option prices, the approach outlined here is shown to produce the desired form of the distribution. Procedures for the maximisation of the Renyi entropy under constraints are outlined in detail, and a number of interesting properties of the resulting power-law distributions are also derived. The result is applied to efficiently evaluate prices of path-independent derivatives
An Outlyingness Matrix for Multivariate Functional Data Classification
Dai, Wenlin
2017-08-25
The classification of multivariate functional data is an important task in scientific research. Unlike point-wise data, functional data are usually classified by their shapes rather than by their scales. We define an outlyingness matrix by extending directional outlyingness, an effective measure of the shape variation of curves that combines the direction of outlyingness with conventional statistical depth. We propose two classifiers based on directional outlyingness and the outlyingness matrix, respectively. Our classifiers provide better performance compared with existing depth-based classifiers when applied to both univariate and multivariate functional data in simulation studies. We also test our methods on two data problems: speech recognition and gesture classification, and obtain results that are consistent with the findings from the simulated data.
Application of multivariate statistical techniques in microbial ecology.
Paliy, O; Shankar, V
2016-03-01
Recent advances in high-throughput methods of molecular analysis have led to an explosion of studies generating large-scale ecological data sets. A particularly noticeable effect has been attained in the field of microbial ecology, where new experimental approaches have provided in-depth assessments of the composition, functions and dynamic changes of complex microbial communities. Because even a single high-throughput experiment produces a large amount of data, powerful statistical techniques of multivariate analysis are well suited to analyse and interpret these data sets. Many different multivariate techniques are available, and it is often not clear which method should be applied to a particular data set. In this review, we describe and compare the most widely used multivariate statistical techniques, including exploratory, interpretive and discriminatory procedures. We consider several important limitations and assumptions of these methods, and we present examples of how these approaches have been utilized in recent studies to provide insight into the ecology of the microbial world. Finally, we offer suggestions for the selection of appropriate methods based on the research question and data set structure. © 2016 John Wiley & Sons Ltd.
Multivariate analysis with LISREL
Jöreskog, Karl G; Y Wallentin, Fan
2016-01-01
This book traces the theory and methodology of multivariate statistical analysis and shows how it can be conducted in practice using the LISREL computer program. It presents not only the typical uses of LISREL, such as confirmatory factor analysis and structural equation models, but also several other multivariate analysis topics, including regression (univariate, multivariate, censored, logistic, and probit), generalized linear models, multilevel analysis, and principal component analysis. It provides numerous examples from several disciplines and discusses and interprets the results, illustrated with sections of output from the LISREL program, in the context of the example. The book is intended for masters and PhD students and researchers in the social, behavioral, economic and many other sciences who require a basic understanding of multivariate statistical theory and methods for their analysis of multivariate data. It can also be used as a textbook on various topics of multivariate statistical analysis.
Directory of Open Access Journals (Sweden)
G. Hartmann
2005-01-01
Full Text Available In order to find a model parameterization such that the hydrological model performs well even under different conditions, appropriate model performance measures have to be determined. A common performance measure is the Nash-Sutcliffe efficiency, usually calculated by comparing observed and modelled daily values. In this paper a modified version is suggested in order to calibrate a model on different time scales simultaneously (days up to years). A spatially distributed hydrological model based on the HBV concept was used. The modelling was applied to the Upper Neckar catchment, a mesoscale river in south-western Germany with a basin size of about 4000 km2. The observation period 1961-1990 was divided into four different climatic periods, referred to as "warm", "cold", "wet" and "dry". These sub-periods were used to assess the transferability of the model calibration and of the measure of performance. In a first step, the hydrological model was calibrated on a certain period and afterwards applied to the same period. Then, a validation was performed on the climatologically opposite period to the calibration, e.g. the model calibrated on the cold period was applied to the warm period. Optimal parameter sets were identified by an automatic calibration procedure based on simulated annealing. The results show that calibrating a hydrological model that is supposed to handle short- as well as long-term signals becomes an important task. In particular, the objective function has to be chosen very carefully.
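A minimal sketch of a multi-time-scale Nash-Sutcliffe objective follows; the equal weighting across aggregation windows is an assumption for illustration, not necessarily the paper's exact modification.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def multiscale_nse(obs_daily, sim_daily, windows=(1, 30, 365)):
    """Average NSE over several aggregation scales (days up to years)."""
    scores = []
    for w in windows:
        n = len(obs_daily) // w * w             # trim to a whole number of windows
        o = np.asarray(obs_daily[:n], float).reshape(-1, w).mean(axis=1)
        s = np.asarray(sim_daily[:n], float).reshape(-1, w).mean(axis=1)
        scores.append(nse(o, s))
    return float(np.mean(scores))
```

A model tuned only on daily NSE can match short-term peaks yet drift in its annual water balance; averaging the efficiency over windows penalises both failure modes at once.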
Calibration-free laser-induced breakdown spectroscopy for ...
Indian Academy of Sciences (India)
Journal of Physics, August 2012, pp. 299-310. Calibration-free laser-induced ... for quantitative analysis of materials, illustrated by CF-LIBS applied to a ... The authors are thankful to BRNS, DAE, Govt. of India for the financial support provided.
An uncertain journey around the tails of multivariate hydrological distributions
Serinaldi, Francesco
2013-10-01
Moving from univariate to multivariate frequency analysis, this study extends Klemeš' critique of the widespread belief that increasingly refined mathematical structures of probability functions increase the accuracy and credibility of the extrapolated upper tails of the fitted distribution models. In particular, we discuss key aspects of multivariate frequency analysis applied to hydrological data, such as the selection of multivariate design events (i.e., appropriate subsets or scenarios of multiplets that exhibit the same joint probability, to be used in design applications) and the assessment of the corresponding uncertainty. Since these problems are often overlooked or treated separately, and sometimes confused, we attempt to clarify the properties, advantages, shortcomings, and reliability of the results of frequency analysis. We suggest a method for selecting multivariate design events with prescribed joint probability, based on simple Monte Carlo simulations, that accounts for the uncertainty affecting the inference results and the multivariate extreme quantiles. It is also shown that the exploration of the p-level probability regions of a joint distribution returns a set of events that is a subset of the p-level scenarios resulting from an appropriate assessment of the sampling uncertainty, thus tending to overlook more extreme and potentially dangerous events with the same (uncertain) joint probability. Moreover, a quantitative assessment of the uncertainty of multivariate quantiles is provided by introducing the concept of joint confidence intervals. From an operational point of view, the simulated event sets describing the distribution of the multivariate p-level quantiles can be used to perform multivariate risk analysis under sampling uncertainty. As an example of the practical implications of this study, we analyze two case studies already presented in the literature.
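The Monte Carlo selection of p-level design events can be illustrated with a toy sketch: simulate multiplets from a fitted joint model (here an assumed bivariate lognormal with Gaussian dependence, not the paper's actual case studies) and retain those whose empirical joint exceedance probability is close to the target level p.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical flood peak/volume multiplets: correlated lognormal margins
# with a Gaussian dependence structure (an assumed model, for illustration).
n = 3000
cov = np.array([[1.0, 0.7], [0.7, 1.0]])
events = np.exp(rng.multivariate_normal([0.0, 0.0], cov, size=n))

# Empirical joint exceedance probability of each simulated event:
# P(X > x_i and Y > y_i), estimated from the sample itself.
x, y = events[:, 0], events[:, 1]
joint_exc = ((x[None, :] > x[:, None]) & (y[None, :] > y[:, None])).mean(axis=1)

# p-level design events: simulated multiplets whose joint exceedance
# probability is close to the target level, e.g. p = 0.01.
p, tol = 0.01, 0.002
design_set = events[np.abs(joint_exc - p) < tol]
```

Repeating this selection over distribution parameters resampled from their sampling distribution would yield the spread of p-level event sets that the paper summarizes with joint confidence intervals.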
Heidema, A Geert; Thissen, Uwe; Boer, Jolanda M A; Bouwman, Freek G; Feskens, Edith J M; Mariman, Edwin C M
2009-06-01
In this study, we applied the multivariate statistical tool Partial Least Squares (PLS) to analyze the relative importance of 83 plasma proteins in relation to coronary heart disease (CHD) mortality and the intermediate end points body mass index, HDL-cholesterol and total cholesterol. From a Dutch monitoring project for cardiovascular disease risk factors, men who died of CHD between initial participation (1987-1991) and end of follow-up (January 1, 2000) (N = 44) and matched controls (N = 44) were selected. Baseline plasma concentrations of proteins were measured by a multiplex immunoassay. With the use of PLS, we identified 15 proteins with prognostic value for CHD mortality and sets of proteins associated with the intermediate end points. Subsequently, sets of proteins and intermediate end points were analyzed together by Principal Components Analysis, indicating that proteins involved in inflammation explained most of the variance, followed by proteins involved in metabolism and proteins associated with total-C. This study is one of the first in which the association of a large number of plasma proteins with CHD mortality and intermediate end points is investigated by applying multivariate statistics, providing insight in the relationships among proteins, intermediate end points and CHD mortality, and a set of proteins with prognostic value.
Lerch, F. J.; Nerem, R. S.; Chinn, D. S.; Chan, J. C.; Patel, G. B.; Klosko, S. M.
1993-01-01
A new method has been developed to provide a direct test of the error calibrations of gravity models based on actual satellite observations. The basic approach projects the error estimates of the gravity model parameters onto satellite observations, and the results of these projections are then compared with data residuals computed from the orbital fits. To allow specific testing of the gravity error calibrations, subset solutions are computed based on the data set and data weighting of the gravity model. The approach is demonstrated using GEM-T3 to show that the gravity error estimates are well calibrated and that reliable predictions of orbit accuracies can be achieved for independent orbits.
Calibration of a distributed hydrologic model for six European catchments using remote sensing data
Stisen, S.; Demirel, M. C.; Mendiguren González, G.; Kumar, R.; Rakovec, O.; Samaniego, L. E.
2017-12-01
While observed streamflow has been the single reference for most conventional hydrologic model calibration exercises, the availability of spatially distributed remote sensing observations provides new possibilities for multi-variable calibration assessing both spatial and temporal variability of different hydrologic processes. In this study, we first identify the key transfer parameters of the mesoscale Hydrologic Model (mHM) controlling both the discharge and the spatial distribution of actual evapotranspiration (AET) across six central European catchments (Elbe, Main, Meuse, Moselle, Neckar and Vienne). These catchments are selected based on their limited topographical and climatic variability, which makes it possible to evaluate the effect of spatial parameterization on the simulated evapotranspiration patterns. We develop a European-scale remote sensing based actual evapotranspiration dataset at a 1 km grid scale, driven primarily by land surface temperature observations from MODIS using the TSEB approach. Using the observed AET maps, we analyze the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mHM model. This model allows calibrating one basin at a time or all basins together using its unique structure and multi-parameter regionalization approach. Results will indicate any tradeoffs between spatial pattern and discharge simulation during model calibration and through validation against independent internal discharge locations. Moreover, the added value for internal water balances will be analyzed.
Radioactive standards and calibration methods for contamination monitoring instruments
Energy Technology Data Exchange (ETDEWEB)
Yoshida, Makoto [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1997-06-01
Contamination monitoring in facilities handling unsealed radioactive materials is one of the most important procedures for radiation protection, along with radiation dose monitoring. For proper contamination monitoring, radiation measuring instruments should not only be suitable to the purpose of monitoring, but also be well calibrated for the quantities to be measured. In the calibration of contamination monitoring instruments, suitable reference activities need to be used. They are supplied in different forms, such as extended sources, radioactive solutions or radioactive gases. These reference activities must be traceable to national standards or equivalent standards. On the other hand, appropriate calibration methods must be applied for each type of contamination monitoring instrument. This paper describes the concepts of calibration for contamination monitoring instruments, reference sources, determination methods of reference quantities and practical calibration methods for contamination monitoring instruments, including the procedures carried out at the Japan Atomic Energy Research Institute and some relevant experimental data. (G.K.)
The Chandra Source Catalog 2.0: Calibrations
Graessle, Dale E.; Evans, Ian N.; Rots, Arnold H.; Allen, Christopher E.; Anderson, Craig S.; Budynkiewicz, Jamie A.; Burke, Douglas; Chen, Judy C.; Civano, Francesca Maria; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Laurino, Omar; Lee, Nicholas P.; Martínez-Galarza, Juan Rafael; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nguyen, Dan T.; Nichols, Joy S.; Nowak, Michael A.; Paxson, Charles; Plummer, David A.; Primini, Francis Anthony; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula
2018-01-01
Among the many enhancements implemented for the release of Chandra Source Catalog (CSC) 2.0 are improvements in the processing calibration database (CalDB). We have included a thorough overhaul of the CalDB software used in the processing. The software system upgrade, called "CalDB version 4," allows for a more rational and consistent specification of flight configurations and calibration boundary conditions. Numerous improvements in the specific calibrations applied have also been added. Chandra's radiometric and detector response calibrations vary considerably with time, detector operating temperature, and position on the detector. The CalDB has been enhanced to provide the best calibrations possible for each observation over the fifteen-year period included in CSC 2.0. Calibration updates include an improved ACIS contamination model, as well as updated time-varying gain (i.e., photon energy) and quantum efficiency maps for ACIS and HRC-I. Additionally, improved corrections for the ACIS quantum efficiency losses due to CCD charge transfer inefficiency (CTI) have been added for each of the ten ACIS detectors. These CTI corrections are now time- and temperature-dependent, allowing ACIS to maintain a 0.3% energy calibration accuracy over the 0.5-7.0 keV range for any ACIS source in the catalog. Radiometric calibration (effective area) accuracy is estimated at ~4% over that range. We include a few examples where improvements in the Chandra CalDB allow for improved data reduction and modeling for the new CSC. This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.
The studies of post-medieval glass by multivariate and X-ray fluorescence analysis
International Nuclear Information System (INIS)
Kierzek, J.; Kunicki-Goldfinger, J.
2002-01-01
Multivariate statistical analysis of results obtained by energy dispersive X-ray fluorescence analysis has been used in the study of baroque vessel glasses originating from central Europe. X-ray spectrometry can be applied as a completely non-destructive, non-sampling and multi-element method. It is very useful in the study of valuable historical artefacts. In recent years, multivariate statistical analysis has developed into an important tool for archaeometric purposes. Cluster, principal component and discriminant analysis were applied for the classification of the examined objects. The obtained results show that these statistical tools are very useful and complementary in the study of historical objects. (author)
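A minimal sketch of the kind of workflow described (standardize elemental compositions, project with principal component analysis, then group by cluster analysis), using invented oxide concentrations for two hypothetical workshops:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical oxide concentrations (wt%) for glass fragments from two
# workshops; compositions and group means are invented for illustration.
rng = np.random.default_rng(1)
group_a = rng.normal([70.0, 15.0, 8.0], 0.5, size=(20, 3))   # SiO2, Na2O, CaO
group_b = rng.normal([60.0, 18.0, 14.0], 0.5, size=(20, 3))
X = np.vstack([group_a, group_b])

# Standardize, project onto the first two principal components, cluster
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
```

With real XRF data a discriminant analysis would follow, using known provenance labels to validate the unsupervised grouping.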
Hutton, C.; Wagener, T.; Freer, J. E.; Duffy, C.; Han, D.
2015-12-01
Distributed models offer the potential to resolve catchment systems in more detail, and therefore to simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models may contain a large number of model parameters which are computationally expensive to calibrate. Even when calibration is possible, insufficient data can result in model parameter and structural equifinality. In order to help reduce the space of feasible models and supplement traditional outlet discharge calibration data, semi-quantitative information (e.g. knowledge of relative groundwater levels) may also be used to identify behavioural models when applied to constrain spatially distributed predictions of states and fluxes. The challenge is to combine these different sources of information to identify a behavioural region of state space, and to efficiently search a large, complex parameter space for behavioural parameter sets that produce predictions falling within this behavioural region. Here we present a methodology to incorporate different sources of data to efficiently calibrate distributed catchment models. Metrics of model performance may be derived from multiple sources of data (e.g. perceptual understanding and measured or regionalised hydrologic signatures). For each metric, an interval or inequality is used to define the behaviour of the catchment system, accounting for data uncertainties. These intervals are then combined to produce a hyper-volume in state space. The search is then recast as a multi-objective optimisation problem, and the Borg MOEA is applied to first find, and then populate, the hyper-volume, thereby identifying acceptable model parameter sets. We apply the methodology to calibrate the PIHM model at Plynlimon, UK, by incorporating perceptual and hydrologic data into the calibration problem. Furthermore, we explore how to improve calibration efficiency through search initialisation from shorter model runs.
Study of the indirect calibration of clinical air kerma-area meters
International Nuclear Information System (INIS)
Almeida Junior, Jose N.; Terini, Ricardo A.; Herdade, Silvio B.
2011-01-01
The kerma-area product (PKA) is a quantity which is independent of the distance to the X-ray tube focal spot and can be used to assess the effective dose in patients. Clinical PKA meters are usually calibrated on-site by measuring the air kerma with an ion chamber and evaluating the irradiated area by means of a radiographic image. This work presents a preliminary metrological evaluation of the calibration of a recently marketed device (PDC, Patient Dose Calibrator, Radcal) designed for calibrating clinical PKA meters. Results are also shown of applying the PDC to the cross calibration of a clinical PKA meter on a radiology unit. The results confirm a lower energy dependence of the PDC relative to the tested clinical meter. (author)
Depth-weighted robust multivariate regression with application to sparse data
Dutta, Subhajit; Genton, Marc G.
2017-01-01
A robust method for multivariate regression is developed based on robust estimators of the joint location and scatter matrix of the explanatory and response variables using the notion of data depth. The multivariate regression estimator possesses desirable affine equivariance properties, achieves the best breakdown point of any affine equivariant estimator, and has an influence function which is bounded in both the response as well as the predictor variable. To increase the efficiency of this estimator, a re-weighted estimator based on robust Mahalanobis distances of the residual vectors is proposed. In practice, the method is more stable than existing methods that are constructed using subsamples of the data. The resulting multivariate regression technique is computationally feasible, and turns out to perform better than several popular robust multivariate regression methods when applied to various simulated data as well as a real benchmark data set. When the data dimension is quite high compared to the sample size it is still possible to use meaningful notions of data depth along with the corresponding depth values to construct a robust estimator in a sparse setting.
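The re-weighting step can be sketched generically. In the sketch below the paper's depth-based initial estimator is replaced by ordinary least squares for brevity, and the weights are a simple hard rejection based on robust Mahalanobis distances of the residual vectors; all data are synthetic.

```python
import numpy as np
from sklearn.covariance import MinCovDet
from sklearn.linear_model import LinearRegression

# Initial fit (plain least squares, standing in for the depth-based
# estimator), robust distances of the residual vectors, then a refit
# that discards observations with outlying residuals.
rng = np.random.default_rng(7)
n = 200
X = rng.normal(size=(n, 3))
B = np.array([[1.0, -2.0], [0.5, 1.0], [2.0, 0.0]])  # true coefficients
Y = X @ B + rng.normal(scale=0.3, size=(n, 2))
Y[:10] += 8.0  # contaminate 5% of the responses

resid = Y - LinearRegression().fit(X, Y).predict(X)
d2 = MinCovDet(random_state=0).fit(resid).mahalanobis(resid)
keep = d2 < np.quantile(d2, 0.90)  # hard-rejection weights

refit = LinearRegression().fit(X[keep], Y[keep])
```

The refit recovers the true coefficient matrix despite the contaminated rows, which the initial least-squares fit alone would not.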
Liu, Hai-Zheng; Shi, Ze-Lin; Feng, Bin; Hui, Bin; Zhao, Yao-Hong
2016-03-01
Integrating microgrid polarimeters on the focal plane array (FPA) of an infrared detector causes non-uniformity of polarization response. In order to reduce the effect of polarization non-uniformity, this paper constructs an experimental setup for capturing raw flat-field images and proposes a procedure for acquiring a non-uniformity calibration (NUC) matrix and calibrating raw polarization images. The proposed procedure treats the incident radiation as a polarization vector and provides a calibration matrix for each pixel. Both our matrix calibration and two-point calibration are applied to our mid-wavelength infrared (MWIR) polarization imaging system with integrated microgrid polarimeters. Compared with two-point calibration, our matrix calibration reduces non-uniformity by 30-40% under flat-field test conditions with polarized input. The outdoor scene observation experiment indicates that our calibration can effectively reduce polarization non-uniformity and improve the image quality of our MWIR polarization imaging system.
DEFF Research Database (Denmark)
Ørregård Nielsen, Morten
This paper proves consistency and asymptotic normality for the conditional-sum-of-squares estimator, which is equivalent to the conditional maximum likelihood estimator, in multivariate fractional time series models. The model is parametric and quite general, and, in particular, encompasses the multivariate non-cointegrated fractional ARIMA model. The novelty of the consistency result, in particular, is that it applies to a multivariate model and to an arbitrarily large set of admissible parameter values, for which the objective function does not converge uniformly in probability, thus making ...
Multivariate diagnostics and anomaly detection for nuclear safeguards
International Nuclear Information System (INIS)
Burr, T.
1994-01-01
For process control and other reasons, new and future nuclear reprocessing plants are expected to be increasingly more automated than older plants. As a consequence of this automation, the quantity of data potentially available for safeguards may be much greater in future reprocessing plants than in current plants. The authors first review recent literature that applies multivariate Shewhart and multivariate cumulative sum (Cusum) tests to detect anomalous data. These tests are used to evaluate residuals obtained from a simulated three-tank problem in which five variables (volume, density, and concentrations of uranium, plutonium, and nitric acid) in each tank are modeled and measured. They then present results from several simulations involving transfers between the tanks and between the tanks and the environment. Residuals from a no-fault problem, in which the measurements and model predictions are both correct, are used to develop Cusum test parameters, which are then used to test for faults in several simulated anomalous situations, such as an unknown leak or diversion of material from one of the tanks. The leak can be detected by comparing measurements, which estimate the true state of the tank system, with the model predictions, which estimate the state of the tank system as it "should" be. The no-fault simulation compares false alarm behavior for the various tests, whereas the anomalous problems allow one to compare the power of the various tests to detect faults under possible diversion scenarios. For comparison with the multivariate tests, univariate tests are also applied to the residuals.
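The tests reviewed can be sketched generically: a Hotelling-type multivariate Shewhart statistic for single residual vectors, and a Crosier-style multivariate Cusum that accumulates small persistent shifts. The tank residuals below are simulated stand-ins, and the control limits k and h are illustrative values, not tuned parameters from the paper.

```python
import numpy as np

def hotelling_t2(x, mean, cov_inv):
    """Multivariate Shewhart statistic for one residual vector."""
    d = np.asarray(x, float) - mean
    return float(d @ cov_inv @ d)

def mcusum(X, mean, cov_inv, k=0.5, h=5.0):
    """Crosier-style multivariate Cusum; returns the index of the first
    alarm, or None. k is the reference value, h the decision limit."""
    s = np.zeros(X.shape[1])
    for t, x in enumerate(X):
        v = s + (x - mean)
        c = np.sqrt(v @ cov_inv @ v)
        s = np.zeros_like(s) if c <= k else v * (1.0 - k / c)
        if np.sqrt(s @ cov_inv @ s) > h:
            return t
    return None

# Simulated residuals for five tank variables: in control at first, then
# a small persistent shift in one variable (e.g. an unreported leak).
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))
X[100:, 0] += 1.0
alarm = mcusum(X, mean=np.zeros(5), cov_inv=np.eye(5))
```

The Cusum flags the shift shortly after it begins, whereas a Shewhart test on individual vectors would rarely trigger on a shift this small.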
Determination of wheat quality by mass spectrometry and multivariate data analysis
DEFF Research Database (Denmark)
Gottlieb, D.M.; Schultz, J.; Petersen, M.
2002-01-01
Multivariate analysis has been applied in support of proteome analysis in order to implement an easier and faster way of handling data based on separation by matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry. The characterisation phase in proteome analysis by means of simple visual inspection is a demanding process, and also insecure because subjectivity is the controlling element. Multivariate analysis offers, to a considerable extent, objectivity and must therefore be regarded as a neutral way to evaluate results obtained by proteome analysis. Proteome analysis ...
The Utilization of Background Spectrum to Calibrate Gamma Spectrometry
International Nuclear Information System (INIS)
Mahrouka, M. M.; Mutawa, A. M.
2004-01-01
Many developing countries have very poor reference standards for calibrating their nuclear instrumentation, or may find it difficult to obtain a reference standard. In this work a simple method for gamma spectrometry calibration was developed. The method depends on one reference point and additional points from the background. The two derived equations were applied to the analysis of radioactive nuclides in soil and liquid samples prepared by IAEA laboratories through the ALMERA project. The results showed the precision of the methodology used, as well as the possibility of using some points in the background spectrum as a replacement for reference standards in gamma spectrometry calibration. (authors)
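The idea of combining one reference point with known background lines reduces, in the simplest linear case, to a least-squares energy calibration. The line energies below are standard values (Cs-137, K-40, Tl-208); the channel centroids are invented for illustration, and the paper's actual derived equations are not reproduced here.

```python
import numpy as np

# One reference point (e.g. a Cs-137 check source) plus two peaks that are
# usually present in the background spectrum (K-40 and Tl-208).
energies = np.array([661.7, 1460.8, 2614.5])   # keV, well-known line energies
channels = np.array([1323.0, 2921.0, 5229.0])  # hypothetical peak centroids

# Linear energy calibration E = a*channel + b by least squares
a, b = np.polyfit(channels, energies, 1)

def channel_to_energy(ch):
    return a * ch + b
```

Any subsequent peak in an unknown sample's spectrum can then be assigned an energy via `channel_to_energy` without further reference sources.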
A video-based approach to calibrating car-following parameters in VISSIM for urban traffic
Directory of Open Access Journals (Sweden)
Zhengyang Lu
2016-08-01
Full Text Available Microscopic simulation models need to be calibrated to represent realistic local traffic conditions. Traditional calibration methods search for the model parameter set that minimizes the discrepancies of certain macroscopic metrics between simulation results and field observations. However, this process can easily lead to inappropriate selection of calibration parameters and thus erroneous simulation results. This paper proposes a video-based approach that incorporates direct measurements of car-following parameters into the process of VISSIM model calibration. The proposed method applies automated video processing techniques to extract vehicle trajectory data and utilizes the trajectory data to determine values of certain car-following parameters in VISSIM. This paper first describes the calibration procedure step by step, and then applies the method to a case study of simulating traffic at a signalized intersection in VISSIM. From the field-collected video footage, trajectories of 1229 through-movement vehicles were extracted and analyzed to calibrate three car-following parameters: desired speed, desired acceleration, and safe following distance. The case study demonstrates the advantages and feasibility of the proposed approach.
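The parameter-extraction step can be sketched on a toy trajectory table. The column meanings, distributions, and the mapping to VISSIM parameters (desired speed from the upper tail of free-flow speeds; standstill distance and speed-dependent headway from a regression of gap on speed) are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

# Hypothetical per-frame measurements extracted from video trajectories:
# follower speed (m/s) and gap to the leader (m). All values are synthetic.
rng = np.random.default_rng(5)
speed = np.clip(rng.normal(13.0, 2.0, size=1000), 0.0, None)
gap = 2.0 + 1.1 * speed + rng.normal(0.0, 1.0, size=1000)

# Desired speed: often taken from the upper tail of free-flow speeds
desired_speed = float(np.percentile(speed, 85))

# Safe following distance: regress gap on speed to separate the standstill
# distance (intercept) from the speed-dependent headway part (slope)
slope, intercept = np.polyfit(speed, gap, 1)
```

The fitted intercept and slope would then seed the corresponding Wiedemann-type car-following parameters in the simulation model, replacing a blind parameter search.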
Actuator-Assisted Calibration of Freehand 3D Ultrasound System.
Koo, Terry K; Silvia, Nathaniel
2018-01-01
Freehand three-dimensional (3D) ultrasound has been used independently of other technologies to analyze complex geometries or registered with other imaging modalities to aid surgical and radiotherapy planning. A fundamental requirement for all freehand 3D ultrasound systems is probe calibration. The purpose of this study was to develop an actuator-assisted approach to facilitate freehand 3D ultrasound calibration using point-based phantoms. We modified the mathematical formulation of the calibration problem to eliminate the need of imaging the point targets at different viewing angles and developed an actuator-assisted approach/setup to facilitate quick and consistent collection of point targets spanning the entire image field of view. The actuator-assisted approach was applied to a commonly used cross wire phantom as well as two custom-made point-based phantoms (original and modified), each containing 7 collinear point targets, and compared the results with the traditional freehand cross wire phantom calibration in terms of calibration reproducibility, point reconstruction precision, point reconstruction accuracy, distance reconstruction accuracy, and data acquisition time. Results demonstrated that the actuator-assisted single cross wire phantom calibration significantly improved the calibration reproducibility and offered similar point reconstruction precision, point reconstruction accuracy, distance reconstruction accuracy, and data acquisition time with respect to the freehand cross wire phantom calibration. On the other hand, the actuator-assisted modified "collinear point target" phantom calibration offered similar precision and accuracy when compared to the freehand cross wire phantom calibration, but it reduced the data acquisition time by 57%. It appears that both actuator-assisted cross wire phantom and modified collinear point target phantom calibration approaches are viable options for freehand 3D ultrasound calibration.
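At the core of point-based probe calibration is estimating a rigid transform that best maps matched point targets between coordinate frames. A generic least-squares solution (Kabsch/Procrustes via SVD) can be sketched as follows; this is a standard building block, not the paper's modified formulation.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t with Q ~ R @ P + t,
    via SVD (Kabsch/Procrustes). P and Q are 3xN matched point sets."""
    cp = P.mean(axis=1, keepdims=True)
    cq = Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

In a freehand 3D ultrasound calibration, such a solve would relate the point targets' image-frame coordinates to their tracker-frame coordinates.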
Mulch materials in processing tomato: a multivariate approach
Directory of Open Access Journals (Sweden)
Marta María Moreno
2013-08-01
Full Text Available Mulch materials of different origins have been introduced into the agricultural sector in recent years as alternatives to standard polyethylene due to its environmental impact. This study aimed to evaluate the multivariate response of mulch materials over three consecutive years in a processing tomato (Solanum lycopersicum L.) crop in Central Spain. Two biodegradable plastic mulches (BD1, BD2), one oxo-biodegradable material (OB), two types of paper (PP1, PP2), and one barley straw cover (BS) were compared using two control treatments (standard black polyethylene [PE] and manual weed control [MW]). A total of 17 variables relating to yield, fruit quality, and weed control were investigated. Several multivariate statistical techniques were applied, including principal component analysis, cluster analysis, and discriminant analysis. A group of mulch materials comprised of OB and BD2 was found to be comparable to black polyethylene regarding all the variables considered. The weed control variables were found to be an important source of discrimination. The two paper mulches tested did not share the same treatment group membership in any case: PP2 presented a multivariate response more similar to the biodegradable plastics, while PP1 was more similar to BS and MW. Based on our multivariate approach, the materials OB and BD2 can be used as an effective, more environmentally friendly alternative to polyethylene mulches.
Batistela, Vagner Roberto; Pellosi, Diogo Silva; de Souza, Franciane Dutra; da Costa, Willian Ferreira; de Oliveira Santin, Silvana Maria; de Souza, Vagner Roberto; Caetano, Wilker; de Oliveira, Hueder Paulo Moisés; Scarminio, Ieda Spacino; Hioka, Noboru
2011-09-01
Xanthenes form an important class of widely used dyes. Most of them present three acid-base groups: two phenolic sites and one carboxylic site. Therefore, determining the pKa values and assigning each group to the corresponding pKa is very important. Attempts to obtain reliable pKa values through potentiometric titration and electronic absorption spectrophotometry using first- and second-order derivatives failed. Due to the close pKa values allied to strong UV-Vis spectral overlap, multivariate analysis, a powerful chemometric method, is applied in this work. The determination was performed for eosin Y, erythrosin B, and bengal rose B, and also for other synthesized derivatives such as 2-(3,6-dihydroxy-9-acridinyl) benzoic acid, 2,4,5,7-tetranitrofluorescein, eosin methyl ester, and erythrosin methyl ester in water. These last two compounds (esters) made it possible to assign the pKa of the phenolic group, which is not easily recognizable for some of the investigated dyes. Besides pKa determination, chemometrics allowed estimation of the electronic spectra of the prevalent protolytic species and evaluation of substituent effects.
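The difficulty of resolving two close pKa values can be illustrated with a simplified sketch: the abstract's full-spectrum multivariate treatment is reduced here to a nonlinear fit of a diprotic model at a single wavelength, and all pKa values, absorbances and noise levels are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def diprotic(ph, pk1, pk2, a2, a1, a0):
    """Absorbance at one wavelength for a diprotic dye; a2, a1, a0 are the
    absorbances of the fully protonated, intermediate and fully
    deprotonated species."""
    f1 = 10.0 ** (ph - pk1)
    f2 = 10.0 ** (2.0 * ph - pk1 - pk2)
    return (a2 + a1 * f1 + a0 * f2) / (1.0 + f1 + f2)

# Synthetic titration of an eosin-like dye with two close pKa values
rng = np.random.default_rng(2)
ph = np.linspace(1.0, 7.0, 40)
absorb = diprotic(ph, 2.9, 3.8, 0.05, 0.40, 0.95)
absorb = absorb + rng.normal(0.0, 0.005, size=ph.size)

popt, _ = curve_fit(diprotic, ph, absorb, p0=[2.5, 4.5, 0.0, 0.5, 1.0])
```

A true multivariate treatment would fit this model across all wavelengths simultaneously, which is what makes the overlapping spectra separable in practice.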
Essay on Option Pricing, Hedging and Calibration
DEFF Research Database (Denmark)
da Silva Ribeiro, André Manuel
Quantitative finance is concerned with applying mathematics to financial markets. This thesis is a collection of essays that study different problems in this field: How efficient are option price approximations for calibrating a stochastic volatility model? (Chapter 2) How different is the discretely ... of dynamics? (Chapter 5) How can we formulate a simple arbitrage-free model to price correlation swaps? (Chapter 6) A summary of the work presented in this thesis: Approximation Behooves Calibration. In this paper we show that calibration based on an expansion approximation for option prices in the Heston stochastic volatility model gives stable, accurate, and fast results for S&P500-index option data over the period 2005 to 2009. Discretely Sampled Variance Options: A Stochastic Approximation Approach. In this paper, we expand the Drimus and Farkas (2012) framework to price variance options on discretely sampled ...
Independent System Calibration of Sentinel-1B
Directory of Open Access Journals (Sweden)
Marco Schwerdt
2017-05-01
Full Text Available Sentinel-1B is the second of two C-Band Synthetic Aperture Radar (SAR) satellites of the Sentinel-1 mission, launched in April 2016, two years after the launch of the first satellite, Sentinel-1A. In addition to the commissioning of Sentinel-1B executed by the European Space Agency (ESA), an independent system calibration was performed by the German Aerospace Center (DLR) on behalf of ESA. Based on an efficient calibration strategy and the different calibration procedures already developed and applied for Sentinel-1A, extensive measurement campaigns were executed by initializing and aligning DLR’s reference targets deployed on the ground. This paper describes the different activities performed by DLR during the Sentinel-1B commissioning phase and presents the results derived from the analysis and evaluation of a multitude of data takes and measurements.
ALONSO ABAD, Ariel; Rodriguez, O.; TIBALDI, Fabian; CORTINAS ABRAHANTES, Jose
2002-01-01
In medical studies, categorical endpoints occur quite often. Even though some models for handling such multicategorical variables have been developed, their use is not common. This work shows an application of Multivariate Generalized Linear Models to the analysis of clinical trials data. After a theoretical introduction, models for ordinal and nominal responses are applied and the main results are discussed. multivariate analysis; multivariate logistic regression; multicategor...
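For a nominal (unordered) endpoint, the generalized linear model in question is typically a multinomial (baseline-category) logistic regression. A minimal sketch on invented trial data; the covariates, effect sizes and category labels are all hypothetical, and sklearn's multinomial solver stands in for whatever software the authors used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical trial data: two covariates (dose, age) and a nominal
# three-category endpoint (0 = no response, 1 = partial, 2 = complete).
rng = np.random.default_rng(4)
n = 300
dose = rng.uniform(0.0, 10.0, n)
age = rng.normal(50.0, 10.0, n)
logits = np.column_stack([np.zeros(n), 0.4 * dose - 2.0, 0.8 * dose - 6.0])
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=p) for p in probs])

# Multinomial logistic regression for the nominal endpoint
X = np.column_stack([dose, age])
model = LogisticRegression(max_iter=1000).fit(X, y)
```

An ordinal endpoint would instead call for a proportional-odds model, which constrains the dose effect to be common across the cumulative category splits.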
Efficient mass calibration of magnetic sector mass spectrometers
Energy Technology Data Exchange (ETDEWEB)
Roddick, J C
1997-12-31
Magnetic sector mass spectrometers used for automatic acquisition of precise isotopic data are usually controlled with Hall probes and software that uses polynomial equations to define and calibrate the mass-field relations required for mass focusing. This procedure requires a number of reference masses and careful tuning to define and maintain an accurate mass calibration. A simplified equation is presented and applied to several different magnetically controlled mass spectrometers. The equation accounts for nonlinearity in typical Hall probe controlled mass-field relations, reduces calibration to a linear fitting procedure, and is sufficiently accurate to permit calibration over a mass range of 2 to 200 amu with only two defining masses. Procedures developed can quickly correct for normal drift in calibrations and compensate for drift during isotopic analysis over a limited mass range such as a single element. The equation is: Field = A·Mass^(1/2) + B·Mass^p, where A, B, and p are constants. The power value p has a characteristic value for a Hall probe/controller and is insensitive to changing conditions, thus reducing calibration to a linear regression to determine optimum A and B. (author). 1 ref., 1 tab., 6 figs.
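With the power value p fixed for a given Hall probe/controller, the calibration equation is linear in A and B, so the two defining masses yield a 2x2 linear solve. The numeric values below (p, A, B and the field readings) are invented for illustration.

```python
import numpy as np

def field_for_mass(m, A, B, p):
    """Field = A*sqrt(Mass) + B*Mass**p."""
    return A * np.sqrt(m) + B * m ** p

def calibrate(masses, fields, p):
    """Recover A and B from two (mass, field) reference pairs by solving
    the 2x2 linear system implied by the calibration equation."""
    m = np.asarray(masses, float)
    M = np.column_stack([np.sqrt(m), m ** p])
    return np.linalg.solve(M, np.asarray(fields, float))

# Hypothetical calibration with defining masses at 2 and 200 amu
p = 1.7
A_true, B_true = 120.0, 0.05
masses = [2.0, 200.0]
fields = [field_for_mass(m, A_true, B_true, p) for m in masses]
A, B = calibrate(masses, fields, p)
```

Drift correction then amounts to repeating this solve with fresh field readings at the same two defining masses.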
Improving integrity of on-line grammage measurement with traceable basic calibration.
Kangasrääsiö, Juha
2010-07-01
The automatic control of grammage (basis weight) in paper and board production is based upon on-line grammage measurement. Furthermore, the automatic control of other quality variables such as moisture, ash content and coat weight, may rely on the grammage measurement. The integrity of Kr-85 based on-line grammage measurement systems was studied, by performing basic calibrations with traceably calibrated plastic reference standards. The calibrations were performed according to the EN ISO/IEC 17025 standard, which is a requirement for calibration laboratories. The observed relative measurement errors were 3.3% in the first time calibrations at the 95% confidence level. With the traceable basic calibration method, however, these errors can be reduced to under 0.5%, thus improving the integrity of on-line grammage measurements. Also a standardised algorithm, based on the experience from the performed calibrations, is proposed to ease the adjustment of the different grammage measurement systems. The calibration technique can basically be applied to all beta-radiation based grammage measurements. 2010 ISA. Published by Elsevier Ltd. All rights reserved.
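A basic calibration of this type can be sketched with the simple exponential beta-attenuation law fitted to traceable reference standards; this is an illustrative model only, not the standardized adjustment algorithm the paper proposes, and the numeric values in the test are hypothetical:

```python
import numpy as np

def calibrate_attenuation(grammage_refs, intensities):
    """Fit ln(I) = ln(I0) - mu*w to reference standards of known grammage w.

    Assumes simple exponential attenuation of the beta radiation, which is
    only an approximation for real Kr-85 gauges.
    """
    w = np.asarray(grammage_refs, dtype=float)
    y = np.log(np.asarray(intensities, dtype=float))
    slope, intercept = np.polyfit(w, y, 1)
    return -slope, np.exp(intercept)  # mu (m^2/g), I0

def grammage(I, mu, I0):
    """Invert the attenuation law to read grammage (g/m^2) from intensity."""
    return np.log(I0 / I) / mu
```

Once mu and I0 are anchored to traceably calibrated standards, any measured intensity maps directly to a grammage reading.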
IOT Overview: Calibrations of the VLTI Instruments (MIDI and AMBER)
Morel, S.; Rantakyrö, F.; Rivinius, T.; Stefl, S.; Hummel, C.; Brillant, S.; Schöller, M.; Percheron, I.; Wittkowski, M.; Richichi, A.; Ballester, P.
We present here a short review of the calibration processes that are currently applied to the instruments AMBER and MIDI of the VLTI (Very Large Telescope Interferometer) at Paranal. We first introduce the general principles used to calibrate the raw data (the "visibilities") measured by long-baseline optical interferometry. Then, we focus on the specific case of the scientific operation of the VLTI instruments. We explain the criteria used to select calibrator stars for observations with the VLTI instruments, as well as the routine internal calibration techniques. Among these techniques, the "P2VM" (Pixel-to-Visibility Matrix) used by AMBER is explained. Finally, the daily monitoring of AMBER and MIDI, which has recently been implemented, is briefly introduced.
Multivariate Analysis and Prediction of Dioxin-Furan ...
Peer Review Draft of Regional Methods Initiative Final Report Dioxins, which are bioaccumulative and environmentally persistent, pose an ongoing risk to human and ecosystem health. Fish constitute a significant source of dioxin exposure for humans and fish-eating wildlife. Current dioxin analytical methods are costly, time-consuming, and produce hazardous by-products. A Danish team developed a novel, multivariate statistical methodology based on the covariance of dioxin-furan congener Toxic Equivalences (TEQs) and fatty acid methyl esters (FAMEs) and applied it to North Atlantic Ocean fishmeal samples. The goal of the current study was to extend this Danish methodology to 77 whole and composite fish samples from three trophic groups: predator (whole largemouth bass), benthic (whole flathead and channel catfish) and forage fish (composite bluegill, pumpkinseed and green sunfish) from two dioxin-contaminated rivers (Pocatalico R. and Kanawha R.) in West Virginia, USA. Multivariate statistical analyses, including Principal Components Analysis (PCA), Hierarchical Clustering, and Partial Least Squares Regression (PLS), were used to assess the relationship between the FAMEs and TEQs in these dioxin-contaminated freshwater fish from the Kanawha and Pocatalico Rivers. These three multivariate statistical methods all confirm that the pattern of Fatty Acid Methyl Esters (FAMEs) in these freshwater fish covaries with and is predictive of the WHO TE
Ranucci, Marco; Castelvecchio, Serenella; Menicanti, Lorenzo; Frigiola, Alessandro; Pelissero, Gabriele
2010-03-01
The European system for cardiac operative risk evaluation (EuroSCORE) is currently used in many institutions and is considered a reference tool in many countries. We hypothesised that too many variables were included in the EuroSCORE using limited patient series. We tested different models using a limited number of variables. A total of 11150 adult patients undergoing cardiac operations at our institution (2001-2007) were retrospectively analysed. The 17 risk factors composing the EuroSCORE were separately analysed and ranked for accuracy of prediction of hospital mortality. Seventeen models were created by progressively including one factor at a time. The models were compared for accuracy with a receiver operating characteristics (ROC) analysis and area under the curve (AUC) evaluation. Calibration was tested with Hosmer-Lemeshow statistics. Clinical performance was assessed by comparing the predicted with the observed mortality rates. The best accuracy (AUC 0.76) was obtained using a model including only age, left ventricular ejection fraction, serum creatinine, emergency operation and non-isolated coronary operation. The EuroSCORE AUC (0.75) was not significantly different. Calibration and clinical performance were better in the five-factor model than in the EuroSCORE. Only in high-risk patients were 12 factors needed to achieve a good performance. Including many factors in multivariable logistic models increases the risk for overfitting, multicollinearity and human error. A five-factor model offers the same level of accuracy but demonstrated better calibration and clinical performance. Models with a limited number of factors may work better than complex models when applied to a limited number of patients. Copyright (c) 2009 European Association for Cardio-Thoracic Surgery. Published by Elsevier B.V. All rights reserved.
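The paper's model-comparison loop (include factors, fit a logistic model, compare discrimination by ROC AUC) can be sketched as follows; the simulated data, coefficients, and the 5-informative/12-noise split are hypothetical stand-ins for the EuroSCORE variables, not values from the study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Five informative risk factors plus twelve pure-noise factors,
# standing in for the 17 EuroSCORE variables.
X_inf = rng.normal(size=(n, 5))
X_noise = rng.normal(size=(n, 12))
logit = X_inf @ np.array([1.0, 0.8, 0.6, 0.5, 0.4]) - 3.0
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X_full = np.hstack([X_inf, X_noise])
Xtr, Xte, ytr, yte = train_test_split(X_full, y, random_state=0)

auc = {}
for name, cols in [("5-factor", slice(0, 5)), ("17-factor", slice(0, 17))]:
    model = LogisticRegression(max_iter=1000).fit(Xtr[:, cols], ytr)
    auc[name] = roc_auc_score(yte, model.predict_proba(Xte[:, cols])[:, 1])

print(auc)  # the compact model typically matches the full one
```

On held-out data the noise variables add no discrimination, which is the paper's point: extra factors mainly add overfitting risk, not accuracy.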
Multivariate statistical methods a primer
Manly, Bryan FJ
2004-01-01
THE MATERIAL OF MULTIVARIATE ANALYSIS: Examples of Multivariate Data; Preview of Multivariate Methods; The Multivariate Normal Distribution; Computer Programs; Graphical Methods; Chapter Summary; References. MATRIX ALGEBRA: The Need for Matrix Algebra; Matrices and Vectors; Operations on Matrices; Matrix Inversion; Quadratic Forms; Eigenvalues and Eigenvectors; Vectors of Means and Covariance Matrices; Further Reading; Chapter Summary; References. DISPLAYING MULTIVARIATE DATA: The Problem of Displaying Many Variables in Two Dimensions; Plotting Index Variables; The Draftsman's Plot; The Representation of Individual Data Points; Profiles o
Multivariate reference technique for quantitative analysis of fiber-optic tissue Raman spectroscopy.
Bergholt, Mads Sylvest; Duraipandian, Shiyamala; Zheng, Wei; Huang, Zhiwei
2013-12-03
We report a novel method making use of multivariate reference signals of fused silica and sapphire Raman signals generated from a ball-lens fiber-optic Raman probe for quantitative analysis of in vivo tissue Raman measurements in real time. Partial least-squares (PLS) regression modeling is applied to extract the characteristic internal reference Raman signals (e.g., shoulder of the prominent fused silica boson peak (~130 cm(-1)); distinct sapphire ball-lens peaks (380, 417, 646, and 751 cm(-1))) from the ball-lens fiber-optic Raman probe for quantitative analysis of fiber-optic Raman spectroscopy. To evaluate the analytical value of this novel multivariate reference technique, a rapid Raman spectroscopy system coupled with a ball-lens fiber-optic Raman probe is used for in vivo oral tissue Raman measurements (n = 25 subjects) under 785 nm laser excitation powers ranging from 5 to 65 mW. An accurate linear relationship (R(2) = 0.981) with a root-mean-square error of cross validation (RMSECV) of 2.5 mW can be obtained for predicting the laser excitation power changes based on a leave-one-subject-out cross-validation, which is superior to the normal univariate reference method (RMSE = 6.2 mW). A root-mean-square error of prediction (RMSEP) of 2.4 mW (R(2) = 0.985) can also be achieved for laser power prediction in real time when we applied the multivariate method independently on the five new subjects (n = 166 spectra). We further apply the multivariate reference technique for quantitative analysis of gelatin tissue phantoms that gives rise to an RMSEP of ~2.0% (R(2) = 0.998) independent of laser excitation power variations. This work demonstrates that multivariate reference technique can be advantageously used to monitor and correct the variations of laser excitation power and fiber coupling efficiency in situ for standardizing the tissue Raman intensity to realize quantitative analysis of tissue Raman measurements in vivo, which is particularly appealing in
Calibration-measurement unit for the automation of vector network analyzer measurements
Directory of Open Access Journals (Sweden)
I. Rolfes
2008-05-01
Full Text Available With the availability of multi-port vector network analyzers, the need for automated, calibrated measurement facilities increases. In this contribution, a calibration-measurement unit is presented which realizes a repeatable automated calibration of the measurement setup as well as a user-friendly measurement of the device under test (DUT). In contrast to commercially available calibration units, which are connected to the ports of the vector network analyzer preceding a measurement and are then removed so that the DUT can be connected, the presented calibration-measurement unit is permanently connected to the ports of the VNA for the calibration as well as for the measurement of the DUT. This helps to simplify the calibrated measurement of complex scattering parameters. Moreover, a full integration of the calibration unit into the analyzer setup becomes possible. The calibration-measurement unit is based on a multiport switch setup of, e.g., electromechanical relays. Under the assumption of symmetry of a switch, the unit on the one hand realizes the connection of calibration standards like one-port reflection standards and two-port through connections between different ports, and on the other hand enables the connection of the DUT. The calibration-measurement unit is applicable for two-port VNAs as well as for multiport VNAs. For the calibration of the unit, methods with completely known calibration standards like SOLT (short, open, load, through), as well as self-calibration procedures like TRM or TRL, can be applied.
Topics in multivariate approximation and interpolation
Jetter, Kurt
2005-01-01
This book is a collection of eleven articles, written by leading experts and dealing with special topics in Multivariate Approximation and Interpolation. The material discussed here has far-reaching applications in many areas of Applied Mathematics, such as in Computer Aided Geometric Design, in Mathematical Modelling, in Signal and Image Processing and in Machine Learning, to mention a few. The book aims at giving comprehensive information, leading the reader from the fundamental notions and results of each field to the forefront of research. It is an ideal and up-to-date introduction for gr
Graffelman, J.; Eeuwijk, van F.A.
2005-01-01
The scatter plot is a well known and easily applicable graphical tool to explore relationships between two quantitative variables. For the exploration of relations between multiple variables, generalisations of the scatter plot are useful. We present an overview of multivariate scatter plots
Optical - Near Infrared Photometric Calibration of M-dwarf Metallicity and Its Application
Hejazi, Neda; De Robertis, Michael M.; Dawson, Peter C.
2015-01-01
Based on a carefully constructed sample of dwarf stars, a new optical-near infrared photometric calibration to estimate the metallicity of late-type K and early-to-mid-type M dwarfs is presented. The calibration sample has two parts; the first part includes 18 M dwarfs with metallicities determined by high-resolution spectroscopy and the second part contains 49 dwarfs with metallicities obtained through moderate-resolution spectra. By applying this calibration to a large sample of around 1.3 ...
Photometric Calibration of the SPRED at the FTU Tokamak
International Nuclear Information System (INIS)
May, M J
1999-01-01
The SPRED spectrometer was photometrically calibrated by using the FTU tokamak plasma and the Grazing Incidence Time Resolving Spectrometer (GRITS) from the Johns Hopkins University [Stratton, Nucl. Fusion, Vol. 24, No. 6, pp. 767-777, 1984]. The photometric calibration of the GRITS spectrometer was transferred to the SPRED [Fonck, R.J., Applied Optics, Vol. 21, No. 12, p. 2115 (1982)] by directly comparing the intensity of bright lines emitted from the FTU tokamak plasma that were simultaneously measured by both spectrometers. The GRITS spectrometer (λ = 10 - 360 Å; Δλ ∼ 0.7 Å) was photometrically calibrated in the 50 - 360 Å spectral range at the SURF II synchrotron light source at NIST in Gaithersburg, MD in August 1997. The calibration of each SPRED grating was performed separately. These gratings covered the short wavelengths, 100 - 300 Å (Δλ ∼ 1.4 Å), and the long wavelengths, 200 - 1800 Å (Δλ ∼ 7 Å). This calibration should be accurate until the microchannel plate of the SPRED is exposed to atmospheric pressure. This calibration is similar to the one obtained by Stratton [Stratton, Rev. Sci. Instrum., Vol. 57, No. 8, p. 2043, August 1986].
Estimating the decomposition of predictive information in multivariate systems
Faes, Luca; Kugiumtzis, Dimitris; Nollo, Giandomenico; Jurysta, Fabrice; Marinazzo, Daniele
2015-03-01
In the study of complex systems from observed multivariate time series, insight into the evolution of one system under investigation can be gained by quantifying the information storage of the system and the information transfer from other interacting systems. We present a framework for the model-free estimation of information storage and information transfer computed as the terms composing the predictive information about the target of a multivariate dynamical process. The approach tackles the curse of dimensionality employing a nonuniform embedding scheme that selects progressively, among the past components of the multivariate process, only those that contribute most, in terms of conditional mutual information, to the present target process. Moreover, it computes all information-theoretic quantities using a nearest-neighbor technique designed to compensate the bias due to the different dimensionality of individual entropy terms. The resulting estimators of prediction entropy, storage entropy, transfer entropy, and partial transfer entropy are tested on simulations of coupled linear stochastic and nonlinear deterministic dynamic processes, demonstrating the superiority of the proposed approach over the traditional estimators based on uniform embedding. The framework is then applied to multivariate physiologic time series, resulting in physiologically well-interpretable information decompositions of cardiovascular and cardiorespiratory interactions during head-up tilt and of joint brain-heart dynamics during sleep.
Modal and Wave Load Identification by ARMA Calibration
DEFF Research Database (Denmark)
Jensen, Jens Kristian Jehrbo; Kirkegaard, Poul Henning; Brincker, Rune
In this paper modal parameter as well as wave load identification by calibration of ARMA models is considered for a simple offshore structure. The theory of identification by ARMA calibration is presented as an identification technique in the time domain which can be applied for white noise excited systems. The technique is generalized also to include the case of ambient excitation processes such as wave excitation which are non-white processes. Due to those results a simple but effective approach for identification of the load process is proposed. Finally the theoretical presentation is illustrated...
DEFF Research Database (Denmark)
Heydorn, Kaj; Anglov, Thomas
2002-01-01
Methods recommended by the International Standardization Organisation and Eurachem are not satisfactory for the correct estimation of calibration uncertainty. A novel approach is introduced and tested on actual calibration data for the determination of Pb by ICP-AES. The improved calibration...
Energy Technology Data Exchange (ETDEWEB)
Damatto, Willian B.; Potiens, Maria da Penha A.; Vivolo, Vitor, E-mail: wbdamatto@ipen.br, E-mail: mppotiens@ipen.br, E-mail: vivolo@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)
2013-07-01
A set of clinical dosimeters (thimble ionization chamber coupled to an electrometer) commonly used in radiotherapy in Brazil and sent to the Calibration Laboratory of IPEN underwent several tests, and analysis parameters for the dosimeters' behaviour were established, specifying their sensitivities and operating characteristics. The tests applied were: repeatability, reproducibility and current leakage. It was thus possible to determine the most common defects found in this equipment and the actions that could be taken to prevent them (clinical dosimeter quality control programs). The behaviour of 167 dosimeters was analyzed, and in this study 62 of them have been tested. The main problem detected during the calibration tests was current leakage, i.e. electronic noise. The tests were applied to the routine measurements at the Calibration Laboratory, implementing an ideal calibration procedure. New calibration criteria were established following international recommendations. The quality control programme of the clinical dosimeter calibration laboratory was thereby improved, benefiting the users of such equipment with more consistent calibration measurements. (author)
Hot-Wire Calibration at Low Velocities: Revisiting the Vortex Shedding Method
Directory of Open Access Journals (Sweden)
Sohrab S. Sattarzadeh
2013-01-01
Full Text Available The necessity to calibrate hot-wire probes against a known velocity causes problems at low velocities, due to the inherent inaccuracy of pressure transducers at low differential pressures. The vortex shedding calibration method is in this respect a recommended technique to obtain calibration data at low velocities, due to its simplicity and accuracy. However, it has mainly been applied in a low and narrow Reynolds number range known as the laminar vortex shedding regime. Here, on the other hand, we propose to utilize the irregular vortex shedding regime and show where the probe needs to be placed with respect to the cylinder in order to obtain unambiguous calibration data.
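The method's core relations can be sketched as follows: the Strouhal relation converts the measured shedding frequency to a velocity, and a King's-law fit then calibrates the hot wire against those velocities. The constant St = 0.21 and exponent n = 0.45 below are common illustrative choices, not values from the paper:

```python
import numpy as np

def shedding_velocity(freq_hz, cylinder_d_m, strouhal=0.21):
    """Flow velocity from the vortex shedding frequency behind a cylinder.

    St = f*d/U  =>  U = f*d/St. A constant St is an illustrative
    simplification; in practice St depends on Reynolds number, so U is
    found from an empirical St(Re) relation, typically iteratively.
    """
    return freq_hz * cylinder_d_m / strouhal

def fit_kings_law(U, E, n=0.45):
    """Fit King's law E^2 = A + B*U**n (exponent n held at a common value)
    to hot-wire voltages E measured at the shedding-derived velocities U."""
    B, A = np.polyfit(np.asarray(U, float)**n, np.asarray(E, float)**2, 1)
    return A, B
```

For example, f = 210 Hz behind a 1 mm cylinder with St = 0.21 gives U = 1 m/s; a set of such (U, E) pairs then fixes the King's-law constants A and B.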
Energy Technology Data Exchange (ETDEWEB)
Acosta, Andy L. Romero; Lores, Stefan Gutierrez, E-mail: c19btm@frcuba.co.cu [Centro de Proteccion e Higiene de las Radiaciones (CPHR), La Habana (Cuba)
2013-11-01
This paper presents the design and implementation of an automated system for measurements in the calibration of reference radiation dosimeters. A software application was developed that acquires the measured values of electric charge, calculates the calibration coefficient and automates the issuance of the calibration certificate. These values are stored in a log file on a PC. The use of the application improves control over the calibration process, helps to humanize the work and reduces personnel exposure. The tool developed has been applied to the calibration of reference radiation dosimeters in the LSCD of the Centro de Proteccion e Higiene de las Radiaciones, Cuba.
Model calibration for building energy efficiency simulation
International Nuclear Information System (INIS)
Mustafaraj, Giorgio; Marini, Dashamir; Costa, Andrea; Keane, Marcus
2014-01-01
Highlights: • Developing a 3D model relating to building architecture, occupancy and HVAC operation. • Two calibration stages developed, final model providing accurate results. • Using an onsite weather station for generating the weather data file in EnergyPlus. • Predicting thermal behaviour of underfloor heating, heat pump and natural ventilation. • Monthly energy saving opportunities related to heat pump of 20–27% were identified. - Abstract: This research work deals with an Environmental Research Institute (ERI) building where an underfloor heating system and natural ventilation are the main systems used to maintain comfort conditions throughout 80% of the building areas. Firstly, this work involved developing a 3D model relating to building architecture, occupancy and HVAC operation. Secondly, the calibration methodology, which consists of two levels, was applied in order to ensure accuracy and reduce the likelihood of errors. To further improve the accuracy of calibration, a historical weather data file for the year 2011 was created from the on-site local weather station of the ERI building. After applying the second level of the calibration process, the values of Mean Bias Error (MBE) and Coefficient of Variation of the Root Mean Squared Error (CV(RMSE)) on an hourly basis for heat pump electricity consumption varied within the following ranges: MBE (hourly) from −5.6% to 7.5% and CV(RMSE) (hourly) from 7.3% to 25.1%. Finally, the building was simulated with EnergyPlus to identify further possibilities of energy savings supplied by a water-to-water heat pump to the underfloor heating system. It was found that electricity consumption savings from the heat pump can vary between 20% and 27% on a monthly basis
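The two calibration metrics quoted above can be sketched numerically; this is a minimal sketch assuming the standard ASHRAE-style definitions (sum-of-residuals MBE, and RMSE normalized by the mean measurement), which may differ in detail from the paper's exact formulas:

```python
import numpy as np

def mbe_percent(measured, simulated):
    """Mean Bias Error (%): sum of residuals over sum of measurements.
    Positive values mean the simulation under-predicts on average."""
    m = np.asarray(measured, dtype=float)
    s = np.asarray(simulated, dtype=float)
    return 100.0 * np.sum(m - s) / np.sum(m)

def cv_rmse_percent(measured, simulated):
    """Coefficient of Variation of the RMSE (%): RMSE normalized by the
    mean of the measured data, so scatter is penalized even when it
    cancels out of the bias term."""
    m = np.asarray(measured, dtype=float)
    s = np.asarray(simulated, dtype=float)
    rmse = np.sqrt(np.mean((m - s) ** 2))
    return 100.0 * rmse / np.mean(m)
```

A model that alternately over- and under-predicts can have near-zero MBE yet a large CV(RMSE), which is why both metrics are checked at each calibration level.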
International Nuclear Information System (INIS)
Cabral, T.S.; David, M.
2016-01-01
This work was motivated by the need to decide on the best methodology to be applied in the next contamination monitor calibration comparisons within the Brazilian network of radiation calibration monitors. The calibration factor was chosen as the response quantity for the calibrations performed on the four monitors used in this comparison because it does not require the area of the detector or probe, thereby eliminating an important variable. It was observed that variation in the positioning system may influence the calibration by up to 10%. The results obtained for the calibration factor showed differences of up to 31.2%. (author)
Updating the HST/ACS G800L Grism Calibration
Hathi, Nimish P.; Pirzkal, Norbert; Grogin, Norman A.; Chiaberge, Marco; ACS Team
2018-06-01
We present results from our ongoing work on obtaining newly derived trace and wavelength calibrations of the HST/ACS G800L grism and comparing them to the previous set of calibrations. Past calibration efforts were based on 2003 observations. New observations of an emission-line Wolf-Rayet star (WR96) were recently taken in HST Cycle 25 (PID: 15401). These observations are used to analyze and measure various grism properties, including the wavelength calibration, the spectral trace/tilt, the length/size of the grism orders, and the spacing between the various grism orders. To account for the field dependence, we observe WR96 at 3 different positions over the HST/ACS field of view: the center of chip 1, the center of chip 2, and the center of the WFC1A-2K subarray (the center of WFC Amp A on chip 1). These new data will help us evaluate any differences in the G800L grism properties compared to previous calibration data, and apply improved data analysis techniques to update the old measurements.
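At each field position, a trace and wavelength solution of the kind described reduces to fitting low-order polynomials to measured emission-line centroids. A minimal sketch follows; the centroid values and the roughly linear dispersion in the test are hypothetical stand-ins, not G800L calibration results:

```python
import numpy as np
from numpy.polynomial import Polynomial

def fit_grism_solution(x_pix, y_pix, wavelengths, deg=1):
    """Fit a spectral trace y(x) and a wavelength solution lambda(x)
    from emission-line centroids measured along the dispersion axis.

    Real ACS grism solutions are field-dependent, so a separate fit of
    this kind would be made at each detector position observed.
    """
    trace = Polynomial.fit(x_pix, y_pix, deg)
    wave = Polynomial.fit(x_pix, wavelengths, deg)
    return trace, wave
```

Comparing the fitted coefficients across epochs (2003 versus Cycle 25) is then a direct way to quantify any drift in trace tilt or dispersion.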
Application of Multivariable Statistical Techniques in Plant-wide WWTP Control Strategies Analysis
DEFF Research Database (Denmark)
Flores Alsina, Xavier; Comas, J.; Rodríguez-Roda, I.
2007-01-01
The main objective of this paper is to present the application of selected multivariable statistical techniques in plant-wide wastewater treatment plant (WWTP) control strategies analysis. In this study, cluster analysis (CA), principal component analysis/factor analysis (PCA/FA) and discriminant analysis (DA) are applied to the evaluation matrix data set obtained by simulation of several control strategies applied to the plant-wide IWA Benchmark Simulation Model No 2 (BSM2). These techniques allow i) to determine natural groups or clusters of control strategies with a similar behaviour, ii) to find and interpret hidden, complex and causal relation features in the data set and iii) to identify important discriminant variables within the groups found by the cluster analysis. This study illustrates the usefulness of multivariable statistical techniques for both analysis and interpretation...
Prostate Health Index improves multivariable risk prediction of aggressive prostate cancer.
Loeb, Stacy; Shin, Sanghyuk S; Broyles, Dennis L; Wei, John T; Sanda, Martin; Klee, George; Partin, Alan W; Sokoll, Lori; Chan, Daniel W; Bangma, Chris H; van Schaik, Ron H N; Slawin, Kevin M; Marks, Leonard S; Catalona, William J
2017-07-01
To examine the use of the Prostate Health Index (PHI) as a continuous variable in multivariable risk assessment for aggressive prostate cancer in a large multicentre US study. The study population included 728 men, with prostate-specific antigen (PSA) levels of 2-10 ng/mL and a negative digital rectal examination, enrolled in a prospective, multi-site early detection trial. The primary endpoint was aggressive prostate cancer, defined as biopsy Gleason score ≥7. First, we evaluated whether the addition of PHI improves the performance of currently available risk calculators (the Prostate Cancer Prevention Trial [PCPT] and European Randomised Study of Screening for Prostate Cancer [ERSPC] risk calculators). We also designed and internally validated a new PHI-based multivariable predictive model, and created a nomogram. Of 728 men undergoing biopsy, 118 (16.2%) had aggressive prostate cancer. The PHI predicted the risk of aggressive prostate cancer across the spectrum of values. Adding PHI significantly improved the predictive accuracy of the PCPT and ERSPC risk calculators for aggressive disease. A new model was created using age, previous biopsy, prostate volume, PSA and PHI, with an area under the curve of 0.746. The bootstrap-corrected model showed good calibration with observed risk for aggressive prostate cancer and had net benefit on decision-curve analysis. Using PHI as part of multivariable risk assessment leads to a significant improvement in the detection of aggressive prostate cancer, potentially reducing harms from unnecessary prostate biopsy and overdiagnosis. © 2016 The Authors BJU International © 2016 BJU International Published by John Wiley & Sons Ltd.
An Outlyingness Matrix for Multivariate Functional Data Classification
Dai, Wenlin; Genton, Marc G.
2017-01-01
outlyingness with conventional statistical depth. We propose two classifiers based on directional outlyingness and the outlyingness matrix, respectively. Our classifiers provide better performance compared with existing depth-based classifiers when applied on both univariate and multivariate functional data from simulation studies. We also test our methods on two data problems: speech recognition and gesture classification, and obtain results that are consistent with the findings from the simulated data.
Research on Miniature Calibre Rail-Guns for the Mechanical Arm
Directory of Open Access Journals (Sweden)
Ronggang Cao
2017-01-01
Full Text Available Rail-guns should not be limited to military applications; they can also be developed for civilian markets. With the development of electromagnetic launch technology, and based on similarity theory, existing rail-gun models can guide the construction of more economical miniature-calibre rail-guns for use in machinery and equipment, an idea that opens up a much wider application space for rail-guns. This article focuses on the feasibility of applying miniature-calibre rail-guns in a mechanical arm. A schematic design is presented, the forces on the armature in the mechanical arm are analyzed theoretically, and the possible range of the current amplitude is calculated. The existing rail-gun model is used to guide the design of the circuit diagram of the miniature-calibre rail-gun. Based on similarity theory and numerous simulation experiments, the experimental parameters of a miniature rail-gun were designed, and the current, Lorentz force, velocity, and position of the existing and miniature rail-guns were analyzed. The results show that applying rail-gun launch technology to robot arms is feasible. The application of miniature-calibre rail-guns in mechanical arms will benefit the further development of rail-guns.
Calibration belt for quality-of-care assessment based on dichotomous outcomes.
Directory of Open Access Journals (Sweden)
Stefano Finazzi
Full Text Available Prognostic models applied in medicine must be validated on independent samples, before their use can be recommended. The assessment of calibration, i.e., the model's ability to provide reliable predictions, is crucial in external validation studies. Besides having several shortcomings, statistical techniques such as the computation of the standardized mortality ratio (SMR and its confidence intervals, the Hosmer-Lemeshow statistics, and the Cox calibration test, are all non-informative with respect to calibration across risk classes. Accordingly, calibration plots reporting expected versus observed outcomes across risk subsets have been used for many years. Erroneously, the points in the plot (frequently representing deciles of risk have been connected with lines, generating false calibration curves. Here we propose a methodology to create a confidence band for the calibration curve based on a function that relates expected to observed probabilities across classes of risk. The calibration belt allows the ranges of risk to be spotted where there is a significant deviation from the ideal calibration, and the direction of the deviation to be indicated. This method thus offers a more analytical view in the assessment of quality of care, compared to other approaches.
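The expected-versus-observed points across risk deciles that the authors caution against connecting with lines can be computed as below; this sketch covers only the classical plot points, not the belt's fitted curve and confidence band:

```python
import numpy as np

def calibration_points(pred_risk, outcome, n_bins=10):
    """Expected vs observed event rates across risk classes (deciles).

    Returns, for each quantile bin of predicted risk, the mean predicted
    probability (expected) and the observed event rate. These are the
    discrete points of a classical calibration plot; joining them with
    lines does NOT yield a valid calibration curve.
    """
    p = np.asarray(pred_risk, dtype=float)
    y = np.asarray(outcome, dtype=float)
    edges = np.quantile(p, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, p, side="right") - 1, 0, n_bins - 1)
    expected = np.array([p[idx == b].mean() for b in range(n_bins)])
    observed = np.array([y[idx == b].mean() for b in range(n_bins)])
    return expected, observed
```

For a well-calibrated model the two arrays track each other closely; the calibration belt replaces this discrete comparison with a smooth expected-to-observed function plus a confidence band, so deviating risk ranges can be spotted.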
Rotation in the dynamic factor modeling of multivariate stationary time series.
Molenaar, P.C.M.; Nesselroade, J.R.
2001-01-01
A special rotation procedure is proposed for the exploratory dynamic factor model for stationary multivariate time series. The rotation procedure applies separately to each univariate component series of a q-variate latent factor series and transforms such a component, initially represented as white
Fakayode, Sayo O; Mitchell, Breanna S; Pollard, David A
2014-08-01
Accurate understanding of analyte boiling points (BP) is of critical importance in gas chromatographic (GC) separation and crude oil refinery operation in petrochemical industries. This study reported the first combined use of GC separation and partial-least-squares (PLS1) multivariate regression analysis of petrochemical structure-activity relationships (SAR) for accurate BP determination of two commercially available (D3710 and MA VHP) calibration gas mix samples. The results of the BP determination using PLS1 multivariate regression were further compared with those of the traditional simulated distillation method of BP determination. The developed PLS1 regression was able to correctly predict analyte BPs in the D3710 and MA VHP calibration gas mix samples, with root-mean-square-%-relative-errors (RMS%RE) of 6.4% and 10.8%, respectively. In contrast, the overall RMS%RE of 32.9% and 40.4%, respectively, obtained for BP determination in D3710 and MA VHP using the traditional simulated distillation method were approximately four times larger than the corresponding RMS%RE of BP prediction using multivariate regression analysis (MRA), demonstrating the better predictive ability of MRA. The reported method is rapid, robust, and promising, and can potentially be used routinely for fast analysis, pattern recognition, and analyte BP determination in petrochemical industries. Copyright © 2014 Elsevier B.V. All rights reserved.
Ytsma, Cai R.; Dyar, M. Darby
2018-01-01
Hydrogen (H) is a critical element to measure on the surface of Mars because its presence in mineral structures is indicative of past hydrous conditions. The Curiosity rover uses the laser-induced breakdown spectrometer (LIBS) on the ChemCam instrument to analyze rocks for their H emission signal at 656.6 nm, from which H can be quantified. Previous LIBS calibrations for H used small data sets measured on standards and/or manufactured mixtures of hydrous minerals and rocks and applied univariate regression to spectra normalized in a variety of ways. However, matrix effects common to LIBS make these calibrations of limited usefulness when applied to the broad range of compositions on the Martian surface. In this study, 198 naturally-occurring hydrous geological samples covering a broad range of bulk compositions with directly-measured H content are used to create more robust prediction models for measuring H in LIBS data acquired under Mars conditions. Both univariate and multivariate prediction models, including partial least squares (PLS) and the least absolute shrinkage and selection operator (Lasso), are compared using several different methods for normalization of H peak intensities. Data from the ChemLIBS Mars-analog spectrometer at Mount Holyoke College are compared against spectra from the same samples acquired using a ChemCam-like instrument at Los Alamos National Laboratory and the ChemCam instrument on Mars. Results show that all current normalization and data preprocessing variations for quantifying H result in models with statistically indistinguishable prediction errors (accuracies) ca. ± 1.5 weight percent (wt%) H2O, limiting the applications of LIBS in these implementations for geological studies. This error is too large to allow distinctions among the most common hydrous phases (basalts, amphiboles, micas) to be made, though some clays (e.g., chlorites with ≈ 12 wt% H2O, smectites with 15-20 wt% H2O) and hydrated phases (e.g., gypsum with ≈ 20
Calibration of proportional counters in microdosimetry
International Nuclear Information System (INIS)
Varma, M.N.
1982-01-01
Many microdosimetric spectra for low-LET as well as high-LET radiations are measured using commercially available (e.g., EG&G) Rossi proportional counters. This paper discusses the corrections to be applied to data when calibration of the counter is made using one type of radiation and the counter is then used in a different radiation field. The principal correction factor is due to differences in the W-value of the radiation used for calibration and the radiation for which microdosimetric measurements are made. Both propane- and methane-based tissue-equivalent (TE) gases are used in these counters. When calibrating the detectors, it is important to use the correct stopping-power value for that gas. Deviations in ȳ_F and ȳ_D are calculated for ⁶⁰Co using different extrapolation procedures from 0.15 keV/μm to zero event size. These deviations can be as large as 30%. Advantages of reporting microdosimetric parameters such as ȳ_F and ȳ_D above a certain minimum cut-off are discussed.
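As a rough illustration of the W-value correction described, a measured event size can be rescaled by the ratio of W-values between the field radiation and the calibration radiation. The function and the numerical values below are hypothetical, not taken from the paper:

```python
def w_value_correction(y_measured_keV_per_um, w_field_eV, w_cal_eV):
    """Rescale a measured lineal energy by the ratio of W-values (field vs calibration
    radiation). Illustrative only: the deposited-energy estimate scales with W because
    the counter actually measures ionization, not energy directly."""
    return y_measured_keV_per_um * (w_field_eV / w_cal_eV)

# hypothetical numbers: event size 10 keV/um calibrated with W = 27 eV,
# measured in a field where W = 30 eV
y_corrected = w_value_correction(10.0, 30.0, 27.0)
```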
DEFF Research Database (Denmark)
Petersen, Britta; Gernaey, Krist; Henze, Mogens
2002-01-01
The purpose of the calibrated model determines how to approach a model calibration, e.g. which information is needed and to which level of detail the model should be calibrated. A systematic model calibration procedure was therefore defined and evaluated for a municipal–industrial wastewater treatment plant. In the case that was studied it was important to have a detailed description of the process dynamics, since the model was to be used as the basis for optimisation scenarios in a later phase. Therefore, a complete model calibration procedure was applied, including: (1) a description ...
Exposure-rate calibration using large-area calibration pads
International Nuclear Information System (INIS)
Novak, E.F.
1988-09-01
The US Department of Energy (DOE) Office of Remedial Action and Waste Technology established the Technical Measurements Center (TMC) at the DOE Grand Junction Projects Office (GJPO) in Grand Junction, Colorado, to standardize, calibrate, and compare measurements made in support of DOE remedial action programs. A set of large-area, radioelement-enriched concrete pads was constructed by the DOE in 1978 at the Walker Field Airport in Grand Junction for use as calibration standards for airborne gamma-ray spectrometer systems. The TMC investigated the use of these pads as potential calibration standards for portable scintillometers employed in measuring gamma-ray exposure rates at Uranium Mill Tailings Remedial Action (UMTRA) project sites. Data acquired on the pads using a pressurized ionization chamber (PIC) and three scintillometers are presented as an illustration of an instrument calibration. Conclusions and recommended calibration procedures, based on these data, are discussed.
Skew redundant MEMS IMU calibration using a Kalman filter
International Nuclear Information System (INIS)
Jafari, M; Sahebjameyan, M; Moshiri, B; Najafabadi, T A
2015-01-01
In this paper, a novel calibration procedure for skew redundant inertial measurement units (SRIMUs) based on micro-electro-mechanical systems (MEMS) is proposed. A general model of the SRIMU measurements is derived which contains the effects of bias, scale factor error and misalignments. For more accuracy, the lever-arm effects of the accelerometers relative to the center of the table are modeled and compensated in the calibration procedure. Two separate Kalman filters (KFs) are proposed to perform the estimation of error parameters for gyroscopes and accelerometers. The prediction error minimization (PEM) stochastic modeling method is used to simultaneously model the effect of bias instability and random walk noise on the calibration Kalman filters to diminish biased estimations. The proposed procedure is simulated numerically, and the experimental results match expectations. The calibration maneuvers are applied using a two-axis angle turntable in a way that the persistency of excitation (PE) condition for parameter estimation is met. For this purpose, a trapezoidal calibration profile is utilized to excite different deterministic error parameters of the accelerometers and a pulse profile is used for the gyroscopes. Furthermore, to evaluate the performance of the proposed KF calibration method, a conventional least squares (LS) calibration procedure is derived for the SRIMUs, and the simulation and experimental results compare the functionality of the two methods. (paper)
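A heavily simplified sketch of the core idea of estimating a sensor error parameter with a Kalman filter: a scalar, constant-state filter recovering an accelerometer bias from static measurements. The noise levels and bias value are invented, and the real SRIMU filters estimate many coupled parameters under designed excitation profiles:

```python
import numpy as np

rng = np.random.default_rng(1)
true_bias = 0.05                                   # m/s^2, hypothetical accelerometer bias
z = true_bias + 0.02 * rng.standard_normal(500)    # static measurements, gravity removed

x, P, R = 0.0, 1.0, 0.02 ** 2  # state estimate, its variance, measurement noise variance
for zk in z:                   # constant-state Kalman filter (no process noise)
    K = P / (P + R)            # Kalman gain
    x = x + K * (zk - x)       # measurement update
    P = (1 - K) * P            # variance update
```

With a constant state and no process noise, the filter converges to (essentially) the sample mean, and the state variance P shrinks roughly as R divided by the number of measurements.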
Visible spectroscopy calibration transfer model in determining pH of Sala mangoes
International Nuclear Information System (INIS)
Yahaya, O.K.M.; MatJafri, M.Z.; Aziz, A.A.; Omar, A.F.
2015-01-01
The purpose of this study is to compare the efficiency of calibration transfer procedures between three spectrometers: two Ocean Optics Inc. spectrometers, namely the QE65000 and the Jaz, and an ASD FieldSpec 3, in measuring the pH of Sala mango by visible reflectance spectroscopy. This study evaluates the ability of these spectrometers to measure the pH of Sala mango by applying similar calibration algorithms through direct calibration transfer. This visible reflectance spectroscopy technique defines one spectrometer as the master instrument and another as the slave. For Set 1, the multiple linear regression (MLR) calibration model generated using the QE65000 spectrometer is transferred to the Jaz spectrometer and vice versa. The same technique is applied for Set 2, with the QE65000 model transferred to the FieldSpec 3 spectrometer and vice versa. For Set 1, the results showed that the QE65000 spectrometer established a calibration model with higher accuracy than that of the Jaz spectrometer. In addition, the calibration model developed on the Jaz spectrometer successfully predicted the pH of Sala mango measured using the QE65000 spectrometer, with a root mean square error of prediction RMSEP = 0.092 pH and coefficient of determination R² = 0.892. Moreover, the best prediction result for Set 2 is obtained when the calibration model developed on the QE65000 spectrometer is transferred to the FieldSpec 3, with R² = 0.839 and RMSEP = 0.16 pH.
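A toy version of building an MLR calibration on a master instrument and applying it directly to slave-instrument spectra, with RMSEP as the error measure. The "spectra" here are random three-channel vectors and the pH model is invented, so this only illustrates the direct-transfer mechanics, not the paper's data:

```python
import numpy as np

def fit_mlr(X, y):
    """Ordinary least squares with intercept (a stand-in for the paper's MLR calibration)."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def rmsep(y_true, y_pred):
    """Root mean square error of prediction."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

rng = np.random.default_rng(2)
X_master = rng.normal(size=(40, 3))                  # master-instrument "spectra" (3 channels)
y = X_master @ np.array([0.5, -0.2, 0.1]) + 3.4      # pH values from a known linear model
coef = fit_mlr(X_master, y)                          # calibrate on the master

X_slave = X_master + 0.001 * rng.normal(size=X_master.shape)  # slave: small instrumental offset
pred = np.column_stack([np.ones(len(X_slave)), X_slave]) @ coef  # direct transfer
```

When the two instruments differ only slightly, the directly transferred model predicts well; larger instrument mismatches are what motivate dedicated transfer procedures.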
Simulation research on multivariable fuzzy model predictive control of nuclear power plant
International Nuclear Information System (INIS)
Su Jie
2012-01-01
To improve the dynamic control capabilities of a nuclear power plant, a multivariable nonlinear predictive control algorithm based on a fuzzy model was applied to the control of the plant's main parameters, covering the control structure and controller design on the basis of the mathematical models of the turbine and the once-through steam generator. The simulation results show that the response of the turbine speed and the steam pressure to changes under the multivariable fuzzy model predictive control algorithm is faster than under the PID control algorithm, and that the output values of the turbine speed and the steam pressure under the PID control algorithm are 3%-5% higher than under the multivariable fuzzy model predictive control algorithm. This shows that the multivariable fuzzy model predictive control algorithm can control the main parameter outputs of the nuclear power plant well and achieve a better control effect. (author)
Comprehensive drought characteristics analysis based on a nonlinear multivariate drought index
Yang, Jie; Chang, Jianxia; Wang, Yimin; Li, Yunyun; Hu, Hui; Chen, Yutong; Huang, Qiang; Yao, Jun
2018-02-01
It is vital to identify drought events and to evaluate multivariate drought characteristics based on a composite drought index for better drought risk assessment and sustainable development of water resources. However, most composite drought indices are constructed by linear combination, principal component analysis or the entropy weight method, assuming a linear relationship among different drought indices. In this study, a multidimensional copula function was applied to construct a nonlinear multivariate drought index (NMDI) to capture the complicated and nonlinear relationships, owing to its dependence structure and flexibility. The NMDI was constructed by combining meteorological, hydrological, and agricultural variables (precipitation, runoff, and soil moisture) to better reflect the multivariate variables simultaneously. Based on the constructed NMDI and run theory, drought events for a particular area were identified in terms of three drought characteristics: duration, peak, and severity. Finally, multivariate drought risk was analyzed as a tool for providing reliable support in drought decision-making. The results indicate that: (1) multidimensional copulas can effectively capture the complicated and nonlinear relationships among multivariate variables; (2) compared with single and other composite drought indices, the NMDI is slightly more sensitive in capturing recorded drought events; and (3) drought risk shows spatial variation: of the five partitions studied, the Jing River Basin as well as the upstream and midstream of the Wei River Basin are characterized by a higher multivariate drought risk. In general, multidimensional copulas provide a reliable way to handle nonlinear relationships when constructing a comprehensive drought index and evaluating multivariate drought characteristics.
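Run theory, as used here to identify drought events, can be sketched compactly: a drought is a maximal run of the index below a threshold, characterized by its duration, severity (accumulated deficit below the threshold), and peak (worst index value). The threshold and the short series below are illustrative, not from the study:

```python
import numpy as np

def identify_droughts(index, threshold=-0.5):
    """Run-theory event extraction: a drought is a maximal run of index < threshold.
    Returns one (duration, severity, peak) tuple per event."""
    events, start = [], None
    for i, v in enumerate(index):
        if v < threshold and start is None:
            start = i                       # a drought run begins
        elif v >= threshold and start is not None:
            run = index[start:i]            # the run just ended
            events.append((len(run), float(np.sum(threshold - run)), float(run.min())))
            start = None
    if start is not None:                   # series ends while still in drought
        run = index[start:]
        events.append((len(run), float(np.sum(threshold - run)), float(run.min())))
    return events

idx = np.array([0.2, -0.6, -1.2, -0.8, 0.1, -0.7, 0.3])
events = identify_droughts(idx)
```

On this series the routine finds two events: a three-step drought with accumulated deficit 1.1 and peak -1.2, and a single-step event.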
A global model for residential energy use: Uncertainty in calibration to regional data
International Nuclear Information System (INIS)
van Ruijven, Bas; van Vuuren, Detlef P.; de Vries, Bert; van der Sluijs, Jeroen P.
2010-01-01
Uncertainties in energy demand modelling allow for the development of different models, but also leave room for different calibrations of a single model. We apply an automated model calibration procedure to analyse the calibration uncertainty of residential-sector energy use modelling in the TIMER 2.0 global energy model. This model simulates energy use on the basis of changes in useful energy intensity, technology development (AEEI) and price responses (PIEEI). We find that different implementations of these factors yield different model results. Model calibration uncertainty is identified as an influential source of variation in future projections, amounting to 30% to 100% around the best estimate. Energy modellers should systematically account for this and communicate calibration uncertainty ranges. (author)
Tanks for liquids: calibration and errors assessment
International Nuclear Information System (INIS)
Espejo, J.M.; Gutierrez Fernandez, J.; Ortiz, J.
1980-01-01
After a brief reference to some of the problems raised by tank calibration, two methods, one theoretical and one experimental, are presented for carrying out the calibration while taking measurement errors into account. The method is applied to the transfer of liquid from one tank to another. Further, a practical example is developed. (author)
A Markov Chain Estimator of Multivariate Volatility from High Frequency Data
DEFF Research Database (Denmark)
Hansen, Peter Reinhard; Horel, Guillaume; Lunde, Asger
We introduce a multivariate estimator of financial volatility that is based on the theory of Markov chains. The Markov chain framework takes advantage of the discreteness of high-frequency returns. We study the finite sample properties of the estimation in a simulation study and apply...
Laser Calibration of the ATLAS Tile Calorimeter
Di Gregorio, Giulia; The ATLAS collaboration
2017-01-01
High performance stability of the ATLAS Tile Calorimeter is achieved with a set of calibration procedures. One step of the calibration procedure is based on measurements of the stability of the response to laser excitation of the PMTs that are used to read out the calorimeter cells. A facility to study the PMT response stability in the laboratory has been operating at the INFN Pisa laboratories since 2015. The goals of the laboratory tests are to study the time evolution of the PMT response, and to reproduce and understand the origin of the response drifts seen with the PMTs mounted on the Tile Calorimeter during normal operation in LHC Run 1 and Run 2. A new statistical approach was developed to measure the drift of the absolute gain. This approach was applied both to the ATLAS laser calibration data and to data collected in the Pisa laboratory. Preliminary results from these two studies are shown.
Probabilistic calibration of safety coefficients for flawed components in nuclear engineering
International Nuclear Information System (INIS)
Ardillon, E.; Pitner, P.; Barthelet, B.; Remond, A.
1995-01-01
The current rules applied to verify flaw acceptance in nuclear components rely on deterministic criteria supposed to ensure safe plant operation. The interest in having a precise and reliable method to evaluate the safety margins and the integrity of components led Electricite de France to launch an approach linking safety coefficients directly with safety levels. This paper presents a probabilistic methodology to calibrate safety coefficients in relation to reliability target values. The proposed calibration procedure is applied to the case of a flawed ferritic pipe, using the R6 procedure for assessing the structural integrity. (author). 5 refs., 5 figs., 1 tab
Integrating Supplementary Application-Based Tutorials in the Multivariable Calculus Course
Verner, I. M.; Aroshas, S.; Berman, A.
2008-01-01
This article presents a study in which applications were integrated in the Multivariable Calculus course at the Technion in the framework of supplementary tutorials. The purpose of the study was to test the opportunity of extending the conventional curriculum by optional applied problem-solving activities and get initial evidence on the possible…
Fermentation process tracking through enhanced spectral calibration modeling.
Triadaphillou, Sophia; Martin, Elaine; Montague, Gary; Norden, Alison; Jeffkins, Paul; Stimpson, Sarah
2007-06-15
The FDA process analytical technology (PAT) initiative will materialize in a significant increase in the number of installations of spectroscopic instrumentation. However, to attain the greatest benefit from the data generated, there is a need for calibration procedures that extract the maximum information content. For example, in fermentation processes, the interpretation of the resulting spectra is challenging as a consequence of the large number of wavelengths recorded, the underlying correlation structure that is evident between the wavelengths and the impact of the measurement environment. Approaches to the development of calibration models have been based on the application of partial least squares (PLS) either to the full spectral signature or to a subset of wavelengths. This paper presents a new approach to calibration modeling that combines a wavelength selection procedure, spectral window selection (SWS), where windows of wavelengths are automatically selected which are subsequently used as the basis of the calibration model. However, due to the non-uniqueness of the windows selected when the algorithm is executed repeatedly, multiple models are constructed and these are then combined using stacking thereby increasing the robustness of the final calibration model. The methodology is applied to data generated during the monitoring of broth concentrations in an industrial fermentation process from on-line near-infrared (NIR) and mid-infrared (MIR) spectrometers. It is shown that the proposed calibration modeling procedure outperforms traditional calibration procedures, as well as enabling the identification of the critical regions of the spectra with regard to the fermentation process.
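The window-selection idea can be illustrated with a much-simplified stand-in: scan contiguous wavelength windows, fit a linear model on each, and keep the window with the lowest validation error. The paper's SWS/PLS/stacking machinery is far richer; the data below are synthetic, with the informative signal deliberately placed in columns 10-14:

```python
import numpy as np

def best_window(X_train, y_train, X_val, y_val, width=5):
    """Scan contiguous wavelength windows; return the start index whose OLS model
    gives the lowest validation RMSE (a simplified stand-in for SWS + PLS)."""
    best = (None, np.inf)
    for s in range(X_train.shape[1] - width + 1):
        A = np.column_stack([np.ones(len(X_train)), X_train[:, s:s + width]])
        coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
        Av = np.column_stack([np.ones(len(X_val)), X_val[:, s:s + width]])
        err = float(np.sqrt(np.mean((Av @ coef - y_val) ** 2)))
        if err < best[1]:
            best = (s, err)
    return best

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 30))                            # 30 synthetic "wavelengths"
y = X[:, 10:15] @ np.array([1.0, 0.5, -0.5, 0.8, 0.2])   # signal lives in columns 10..14
start, err = best_window(X[:40], y[:40], X[40:], y[40:])
```

Because the response depends only on columns 10-14, the scan recovers exactly that window; the repeated-model stacking step in the paper addresses the case where several windows perform comparably on noisy data.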
Teixeira, Filipe; Melo, André; Cordeiro, M. Natália D. S.
2010-09-01
A linear least-squares methodology was used to determine the vibrational scaling factors for the X3LYP density functional. Uncertainties for these scaling factors were calculated according to the method devised by Irikura et al. [J. Phys. Chem. A 109, 8430 (2005)]. The calibration set was systematically partitioned according to several of its descriptors and the scaling factors for X3LYP were recalculated for each subset. The results show that the scaling factors are only significant up to the second digit, irrespective of the calibration set used. Furthermore, multivariate statistical analysis allowed us to conclude that the scaling factors and the associated uncertainties are independent of the size of the calibration set and strongly suggest the practical impossibility of obtaining vibrational scaling factors with more than two significant digits.
Roy, Kevin; Undey, Cenk; Mistretta, Thomas; Naugle, Gregory; Sodhi, Manbir
2014-01-01
Multivariate statistical process monitoring (MSPM) is becoming increasingly utilized to further enhance process monitoring in the biopharmaceutical industry. MSPM can play a critical role when there are many measurements and these measurements are highly correlated, as is typical for many biopharmaceutical operations. Specifically, for processes such as cleaning-in-place (CIP) and steaming-in-place (SIP, also known as sterilization-in-place), control systems typically oversee the execution of the cycles, and verification of the outcome is based on offline assays. These offline assays add to delays and corrective actions may require additional setup times. Moreover, this conventional approach does not take interactive effects of process variables into account and cycle optimization opportunities as well as salient trends in the process may be missed. Therefore, more proactive and holistic online continued verification approaches are desirable. This article demonstrates the application of real-time MSPM to processes such as CIP and SIP with industrial examples. The proposed approach has significant potential for facilitating enhanced continuous verification, improved process understanding, abnormal situation detection, and predictive monitoring, as applied to CIP and SIP operations. © 2014 American Institute of Chemical Engineers.
An Improved Photometric Calibration of the Sloan Digital SkySurvey Imaging Data
Energy Technology Data Exchange (ETDEWEB)
Padmanabhan, Nikhil; Schlegel, David J.; Finkbeiner, Douglas P.; Barentine, J.C.; Blanton, Michael R.; Brewington, Howard J.; Gunn, JamesE.; Harvanek, Michael; Hogg, David W.; Ivezic, Zeljko; Johnston, David; Kent, Stephen M.; Kleinman, S.J.; Knapp, Gillian R.; Krzesinski, Jurek; Long, Dan; Neilsen Jr., Eric H.; Nitta, Atsuko; Loomis, Craig; Lupton,Robert H.; Roweis, Sam; Snedden, Stephanie A.; Strauss, Michael A.; Tucker, Douglas L.
2007-09-30
We present an algorithm to photometrically calibrate wide-field optical imaging surveys, that simultaneously solves for the calibration parameters and relative stellar fluxes using overlapping observations. The algorithm decouples the problem of "relative" calibrations from that of "absolute" calibrations; the absolute calibration is reduced to determining a few numbers for the entire survey. We pay special attention to the spatial structure of the calibration errors, allowing one to isolate particular error modes in downstream analyses. Applying this to the Sloan Digital Sky Survey imaging data, we achieve ~1 percent relative calibration errors across 8500 sq. deg. in griz; the errors are ~2 percent for the u band. These errors are dominated by unmodelled atmospheric variations at Apache Point Observatory. These calibrations, dubbed ubercalibration, are now public with SDSS Data Release 6, and will be a part of subsequent SDSS data releases.
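The core of the relative-calibration scheme, solving jointly for exposure zero points and stellar fluxes from overlapping observations, can be sketched as a small linear least-squares problem. This toy setup (three exposures, five stars, a gauge row fixing one zero point) is illustrative only and ignores everything that makes the real problem hard (flat fields, atmospheric terms, outliers, scale):

```python
import numpy as np

# Toy stand-in for the "ubercalibration" idea: each observed instrumental magnitude is
# obs[i, j] = z[i] (exposure zero point) + f[j] (star magnitude). Overlapping
# observations let us solve for both jointly; one zero point is fixed as the gauge.
rng = np.random.default_rng(4)
z_true = np.array([0.0, 0.3, -0.2])            # exposure zero points (z[0] fixed to 0)
f_true = rng.uniform(14.0, 18.0, size=5)       # hypothetical star magnitudes
obs = z_true[:, None] + f_true[None, :] + 0.001 * rng.standard_normal((3, 5))

n_exp, n_star = obs.shape
A = np.zeros((n_exp * n_star + 1, n_exp + n_star))
y = np.zeros(n_exp * n_star + 1)
k = 0
for i in range(n_exp):
    for j in range(n_star):
        A[k, i] = 1.0              # coefficient for zero point z[i]
        A[k, n_exp + j] = 1.0      # coefficient for star magnitude f[j]
        y[k] = obs[i, j]
        k += 1
A[k, 0] = 1.0                      # gauge row: pin z[0] = 0 to break the degeneracy
sol, *_ = np.linalg.lstsq(A, y, rcond=None)
z_est, f_est = sol[:n_exp], sol[n_exp:]
```

Without the gauge row the design matrix has a one-dimensional null space (add a constant to all zero points, subtract it from all magnitudes), which is exactly the "relative vs absolute" split the abstract describes.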
CALIBRATING THE JOHNSON-HOLMQUIST CERAMIC MODEL FOR SIC USING CTH
International Nuclear Information System (INIS)
Cazamias, J. U.; Bilyk, S. R.
2009-01-01
The Johnson-Holmquist ceramic material model has been calibrated and successfully applied to numerically simulate ballistic events using the Lagrangian code EPIC. While the majority of the constants are ''physics'' based, two of the constants for the failed material response are calibrated using ballistic experiments conducted on a confined cylindrical ceramic target. The maximum strength of the failed ceramic is calibrated by matching the penetration velocity. The second refers to the equivalent plastic strain at failure under constant pressure and is calibrated using the dwell time. Use of these two constants in the CTH Eulerian hydrocode does not predict the ballistic response. This difference may be due to the phenomenological nature of the model and the different numerical schemes used by the codes. This paper determines the aforementioned material constants for SiC suitable for simulating ballistic events using CTH.
Calibrating a tensor magnetic gradiometer using spin data
Bracken, Robert E.; Smith, David V.; Brown, Philip J.
2005-01-01
Scalar magnetic data are often acquired to discern characteristics of geologic source materials and buried objects. It is evident that a great deal can be done with scalar data, but there are significant advantages to direct measurement of the magnetic gradient tensor in applications with nearby sources, such as unexploded ordnance (UXO). To explore these advantages, we adapted a prototype tensor magnetic gradiometer system (TMGS) and successfully implemented a data-reduction procedure. One of several critical reduction issues is the precise determination of a large group of calibration coefficients for the sensors and sensor array. To resolve these coefficients, we devised a spin calibration method, after similar methods of calibrating space-based magnetometers (Snare, 2001). The spin calibration procedure consists of three parts: (1) collecting data by slowly revolving the sensor array in the Earth's magnetic field, (2) deriving a comprehensive set of coefficients from the spin data, and (3) applying the coefficients to the survey data. To show that the TMGS functions as a tensor gradiometer, we conducted an experimental survey that verified that the reduction procedure was effective (Bracken and Brown, in press). Therefore, because it was an integral part of the reduction, it can be concluded that the spin calibration was correctly formulated with acceptably small errors.
Calibration method for projector-camera-based telecentric fringe projection profilometry system.
Liu, Haibo; Lin, Huijing; Yao, Linshen
2017-12-11
By combining a fringe projection setup with a telecentric lens, a fringe pattern can be projected and imaged within a small area, making it possible to measure the three-dimensional (3D) surfaces of micro-components. This paper focuses on the flexible calibration of the fringe projection profilometry (FPP) system using a telecentric lens. An analytical telecentric projector-camera calibration model is introduced, in which the rig structure parameters remain invariant for all views, and the 3D calibration target can be located on the projector image plane with sub-pixel precision. Based on the presented calibration model, a two-step calibration procedure is proposed. First, the initial parameters, e.g., the projector-camera rig, projector intrinsic matrix, and coordinates of the control points of a 3D calibration target, are estimated using the affine camera factorization calibration method. Second, a bundle adjustment algorithm with various simultaneous views is applied to refine the calibrated parameters, especially the rig structure parameters and the coordinates of the control points for the 3D target. Because the control points are determined during the calibration, there is no need for an accurate 3D reference target, which is costly and extremely difficult to fabricate, particularly for the tiny objects used to calibrate the telecentric FPP system. Real experiments were performed to validate the performance of the proposed calibration method. The test results showed that the proposed approach is very accurate and reliable.
Qiang, Zhimin; Li, Wentao; Li, Mengkai; Bolton, James R; Qu, Jiuhui
2015-01-01
UV radiometers are widely employed for irradiance measurements, but their periodical calibrations not only induce an extra cost but also are time-consuming. In this study, the KI/KIO3 actinometer was applied to calibrate UV radiometer detectors at 254 nm with a quasi-collimated beam apparatus equipped with a low-pressure UV lamp, and feasible calibration conditions were identified. Results indicate that a washer constraining the UV light was indispensable, while the size (10 or 50 mL) of a beaker containing the actinometer solution had little influence when a proper washer was used. The absorption or reflection of UV light by the internal beaker wall led to an underestimation or overestimation of the irradiance determined by the KI/KIO3 actinometer, respectively. The proper range of the washer internal diameter could be obtained via mathematical analysis. A radiometer with a longer service time showed a greater calibration factor. To minimize the interference from the inner wall reflection of the collimating tube, calibrations should be conducted at positions far enough away from the tube bottom. This study demonstrates that after the feasible calibration conditions are identified, the KI/KIO3 actinometer can be applied readily to calibrate UV radiometer detectors at 254 nm. © 2014 The American Society of Photobiology.
Directory of Open Access Journals (Sweden)
Yong-Dong Xu
2017-01-01
Full Text Available Sesame oil produced by the traditional aqueous extraction process (TAEP) has been recognized for its pleasant flavor and high nutritional value. This paper developed a rapid and nondestructive method to predict the sesame oil yield by TAEP using near-infrared (NIR) spectroscopy. A collection of 145 sesame seed samples was measured by NIR spectroscopy and the relationship between the TAEP oil yield and the spectra was modeled by least-squares support vector machine (LS-SVM). Smoothing, taking second derivatives (D2), and standard normal variate (SNV) transformation were performed to remove the unwanted variations in the raw spectra. The results indicated that D2-LS-SVM (4000–9000 cm⁻¹) obtained the most accurate calibration model, with a root mean square error of prediction (RMSEP) of 1.15% (w/w). Moreover, the RMSEP was not significantly influenced by different initial values of the LS-SVM parameters. The calibration model could be helpful to search for sesame seeds with higher TAEP oil yields.
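The SNV and second-derivative (D2) pretreatments mentioned here are standard and easy to state. A minimal sketch, with simple second differences standing in for the Savitzky-Golay style derivative typically used in practice:

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum (row) individually."""
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

def second_derivative(spectra):
    """Simple second-difference approximation of the D2 pretreatment."""
    return np.diff(spectra, n=2, axis=1)

X = np.array([[1.0, 2.0, 4.0, 7.0],
              [2.0, 4.0, 8.0, 14.0]])
X_snv = snv(X)                 # each row now has zero mean, unit standard deviation
X_d2 = second_derivative(X)    # two fewer columns per row
```

SNV removes per-sample multiplicative and offset effects (e.g., scatter), while the second derivative suppresses baseline drift; both are applied before fitting the calibration model.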
Bhandari, Ammar B; Nelson, Nathan O; Sweeney, Daniel W; Baffaut, Claire; Lory, John A; Senaviratne, Anomaa; Pierzynski, Gary M; Janssen, Keith A; Barnes, Philip L
2017-11-01
Process-based computer models have been proposed as a tool to generate data for phosphorus (P) Index assessment and development. Although models are commonly used to simulate P loss from agriculture under managements that differ from the calibration data, this use of models has not been fully tested. The objective of this study is to determine if the Agricultural Policy Environmental eXtender (APEX) model can accurately simulate runoff, sediment, total P, and dissolved P loss from 0.4- to 1.5-ha agricultural fields under managements that differ from the calibration data. The APEX model was calibrated with field-scale data from eight different managements at two locations (management-specific models). The calibrated models were then validated, either with the same management used for calibration or with different managements. Location models were also developed by calibrating APEX with data from all managements. The management-specific models resulted in satisfactory performance when used to simulate runoff, total P, and dissolved P within their respective systems, with R² > 0.50, Nash-Sutcliffe efficiency > 0.30, and percent bias within ±35% for runoff and ±70% for total and dissolved P. When applied outside the calibration management, the management-specific models only met the minimum performance criteria in one-third of the tests. The location models had better model performance when applied across all managements compared with management-specific models. Our results suggest that models only be applied within the managements used for calibration and that data be included from multiple management systems for calibration when using models to assess management effects on P loss or evaluate P Indices. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
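The performance criteria cited (Nash-Sutcliffe efficiency, percent bias) have standard definitions. A minimal sketch, using the common hydrological convention in which positive percent bias indicates underprediction:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations about their mean.
    1 is a perfect fit; 0 means no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias; positive values mean the model under-predicts on average."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

obs = np.array([1.0, 2.0, 3.0, 4.0])
sim = obs + 0.1                 # a uniformly over-predicting model
```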
Calibration of PKA meters against ion chambers of two geometries
International Nuclear Information System (INIS)
Almeida Junior, Jose N.; Terini, Ricardo A.; Pereira, Marco A.G.; Herdade, Silvio B.
2011-01-01
Kerma-area product (KAP or PKA) is a quantity that is independent of the distance to the X-ray tube focal spot and can be used in radiological exams to assess the effective dose to patients. Clinical KAP meters are generally fixed at the tube output and are usually calibrated on-site by measuring the air kerma with an ion chamber and evaluating the irradiated area by means of a radiographic image. Recently, a device designed for calibrating clinical KAP meters with traceability to a standards laboratory was marketed (PDC, Patient Dose Calibrator, Radcal Co.). This paper presents a metrological evaluation of two methods that can be used in standards laboratories for the calibration of this device, namely, against a reference 30 cc ionization chamber or a reference parallel-plate monitor chamber. Lower energy dependence was obtained when the PDC calibration was made with the monitor chamber. Results are also shown of applying the PDC in a hospital environment to the cross-calibration of a clinical KAP meter of a radiology unit. The results confirm the lower energy dependence of the PDC relative to the tested clinical meter. (author)
Object Detection and Tracking-Based Camera Calibration for Normalized Human Height Estimation
Directory of Open Access Journals (Sweden)
Jaehoon Jung
2016-01-01
Full Text Available This paper presents a normalized human height estimation algorithm using an uncalibrated camera. To estimate the normalized human height, the proposed algorithm detects a moving object and performs tracking-based automatic camera calibration. The proposed method consists of three steps: (i) moving human detection and tracking, (ii) automatic camera calibration, and (iii) human height estimation and error correction. The proposed method automatically calibrates the camera by detecting moving humans and estimates the human height using error correction. The method can be applied to object-based video surveillance systems and digital forensics.
Graffelman, Jan; van Eeuwijk, Fred
2005-12-01
The scatter plot is a well known and easily applicable graphical tool to explore relationships between two quantitative variables. For the exploration of relations between multiple variables, generalisations of the scatter plot are useful. We present an overview of multivariate scatter plots focussing on the following situations. Firstly, we look at a scatter plot for portraying relations between quantitative variables within one data matrix. Secondly, we discuss a similar plot for the case of qualitative variables. Thirdly, we describe scatter plots for the relationships between two sets of variables where we focus on correlations. Finally, we treat plots of the relationships between multiple response and predictor variables, focussing on the matrix of regression coefficients. We will present both known and new results, where an important original contribution concerns a procedure for the inclusion of scales for the variables in multivariate scatter plots. We provide software for drawing such scales. We illustrate the construction and interpretation of the plots by means of examples on data collected in a genomic research program on taste in tomato.
Intercomparison and calibration of dose calibrators used in nuclear medicine facilities
Costa, A M D
2003-01-01
The aim of this work was to establish a working standard for the intercomparison and calibration of dose calibrators used in most nuclear medicine facilities for the determination of the activity of radionuclides administered to patients in specific examinations or therapeutic procedures. A commercial dose calibrator, a set of standard radioactive sources, and syringes, vials and ampoules with radionuclide solutions used in nuclear medicine were utilized in this work. The commercial dose calibrator was calibrated for radionuclide solutions used in nuclear medicine. Simple instrument tests, such as linearity of response and variation of response with source volume at constant activity concentration, were performed. This instrument may be used as a reference system for the intercomparison and calibration of other activity meters, as a method of quality control of dose calibrators utilized in nuclear medicine facilities.
Dead-blow hammer design applied to a calibration target mechanism to dampen excessive rebound
Lim, Brian Y.
1991-01-01
An existing rotary electromagnetic driver was specified to be used to deploy and restow a blackbody calibration target inside a spacecraft infrared science instrument. However, this target was much more massive than in any previous application of the inherited design, and it experienced unacceptable bounce when reaching its stops. Without any design modification, the momentum generated by the driver caused the target to bounce back to its starting position. Initially, elastomeric dampers were used between the driver and the target; however, this design could not prevent the bounce, and it compromised the positional accuracy of the calibration target. A design that successfully met all the requirements incorporated a sealed pocket, 85 percent full of 0.75 mm diameter stainless steel balls, in the back of the target to provide the effect of a dead-blow hammer. The energy dissipation resulting from the collision of balls in the pocket successfully damped the excess momentum generated during target deployment. The disastrous effects of new requirements on a design with a successful flight history, the modifications that were necessary to make the device work, and the tests performed to verify its functionality are described.
Calibration curves for biological dosimetry
International Nuclear Information System (INIS)
Guerrero C, C.; Brena V, M. . E-mail cgc@nuclear.inin.mx
2004-01-01
Research carried out in different laboratories around the world, including ININ, has established that certain classes of chromosomal aberrations increase as a function of dose and radiation type; this has yielded the calibration curves that are applied in the technique known as biological dosimetry. This work presents a summary of the work carried out in the laboratory, including the calibration curves for 60Co gamma radiation and 250 kVp X-rays, examples of presumed exposure to ionizing radiation resolved by means of aberration analysis and the corresponding dose estimates obtained through the equations of the respective curves, and finally a comparison between the dose calculations for the people affected by the Ciudad Juarez accident carried out by the Oak Ridge group, USA, and those obtained in this laboratory. (Author)
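Calibration curves in biological dosimetry are conventionally modelled as a linear-quadratic aberration yield, Y = c + αD + βD². A sketch of fitting such a curve and inverting it to estimate dose, using hypothetical dicentric yields rather than the laboratory's data (and simple unweighted least squares instead of the Poisson-weighted fits typically used):

```python
import numpy as np

# Hypothetical calibration data: absorbed dose (Gy) vs. dicentrics per cell
doses = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 3.0, 4.0])
yields_ = np.array([0.001, 0.012, 0.03, 0.09, 0.28, 0.55, 0.92])

# Fit Y = c + alpha*D + beta*D^2 by least squares
beta, alpha, c = np.polyfit(doses, yields_, 2)

def estimate_dose(y):
    """Invert the calibration curve: solve beta*D^2 + alpha*D + (c - y) = 0
    for the positive root, giving the dose estimate for an observed yield."""
    disc = alpha ** 2 - 4.0 * beta * (c - y)
    return (-alpha + np.sqrt(disc)) / (2.0 * beta)

print(estimate_dose(0.28))  # yield observed in a presumed exposure
```

The same inversion, applied to the yield measured in a suspected-exposure case, is what turns an aberration count into a dose estimate.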
Calibration of hydrological models using flow-duration curves
Directory of Open Access Journals (Sweden)
I. K. Westerberg
2011-07-01
Full Text Available The degree of belief we have in predictions from hydrologic models will normally depend on how well they can reproduce observations. Calibrations with traditional performance measures, such as the Nash-Sutcliffe model efficiency, are challenged by problems including: (1) uncertain discharge data, (2) variable sensitivity of different performance measures to different flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. This paper explores a calibration method using flow-duration curves (FDCs) to address these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) on the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested – based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application, e.g. using more/less EPs at high/low flows. While the method appears less sensitive to epistemic input/output errors than previous use of limits of
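A minimal sketch of constructing a flow-duration curve and selecting evaluation points by equal increments of water volume — one plausible reading of the volume-based EP selection described above, on synthetic flows rather than the study's catchment data:

```python
import numpy as np

def flow_duration_curve(q):
    """Return flows sorted in descending order and their exceedance
    probabilities (Weibull plotting position)."""
    q = np.sort(np.asarray(q, float))[::-1]
    exceedance = np.arange(1, q.size + 1) / (q.size + 1)
    return q, exceedance

def volume_based_eps(q_sorted, n_eps):
    """Pick EP indices so each interval between them carries an equal
    share of the total water volume."""
    cumvol = np.cumsum(q_sorted) / np.sum(q_sorted)
    targets = np.linspace(0.0, 1.0, n_eps + 2)[1:-1]  # interior targets only
    return np.searchsorted(cumvol, targets)

# Synthetic daily discharge record (gamma-distributed, purely illustrative)
q = np.random.default_rng(1).gamma(shape=1.5, scale=10.0, size=1000)
qs, ep = flow_duration_curve(q)
idx = volume_based_eps(qs, n_eps=10)
```

Because high flows carry most of the volume, the volume-based EPs cluster toward the high-flow end of the curve, which matches the paper's observation that this choice constrains the wet part of the water balance well.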
Accurate calibration of test mass displacement in the LIGO interferometers
Energy Technology Data Exchange (ETDEWEB)
Goetz, E [University of Michigan, Ann Arbor, MI 48109 (United States); Savage, R L Jr; Garofoli, J; Kawabe, K; Landry, M [LIGO Hanford Observatory, Richland, WA 99352 (United States); Gonzalez, G; Kissel, J; Sung, M [Louisiana State University, Baton Rouge, LA 70803 (United States); Hirose, E [Syracuse University, Syracuse, NY 13244 (United States); Kalmus, P [Columbia University, New York, NY 10027 (United States); O' Reilly, B; Stuver, A [LIGO Livingston Observatory, Livingston, LA 70754 (United States); Siemens, X, E-mail: egoetz@umich.ed, E-mail: savage_r@ligo-wa.caltech.ed [University of Wisconsin-Milwaukee, Milwaukee, WI 53201 (United States)
2010-04-21
We describe three fundamentally different methods we have applied to calibrate the test mass displacement actuators to search for systematic errors in the calibration of the LIGO gravitational-wave detectors. The actuation frequencies tested range from 90 Hz to 1 kHz and the actuation amplitudes range from 10^-6 m to 10^-18 m. For each of the four test mass actuators measured, the weighted mean coefficient over all frequencies for each technique deviates from the average actuation coefficient for all three techniques by less than 4%. This result indicates that systematic errors in the calibration of the responses of the LIGO detectors to differential length variations are within the stated uncertainties.
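The consistency check described — comparing each technique's weighted mean actuation coefficient against the cross-technique average — can be sketched as follows, with entirely hypothetical coefficients and uncertainties (the technique names here are placeholders, not the paper's):

```python
import numpy as np

def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean of per-frequency measurements."""
    w = 1.0 / np.asarray(sigmas, float) ** 2
    return np.sum(w * np.asarray(values, float)) / np.sum(w)

# Hypothetical actuation coefficients (arbitrary units) from three techniques
coeffs = {"A": [0.842, 0.848, 0.845], "B": [0.851, 0.853], "C": [0.839]}
sigmas = {"A": [0.010, 0.012, 0.011], "B": [0.020, 0.020], "C": [0.015]}

means = {k: weighted_mean(coeffs[k], sigmas[k]) for k in coeffs}
grand = np.mean(list(means.values()))
deviations = {k: 100.0 * (m - grand) / grand for k, m in means.items()}
print(deviations)  # each within a few percent of the cross-technique average
```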
Accurate calibration of test mass displacement in the LIGO interferometers
International Nuclear Information System (INIS)
Goetz, E; Savage, R L Jr; Garofoli, J; Kawabe, K; Landry, M; Gonzalez, G; Kissel, J; Sung, M; Hirose, E; Kalmus, P; O'Reilly, B; Stuver, A; Siemens, X
2010-01-01
We describe three fundamentally different methods we have applied to calibrate the test mass displacement actuators to search for systematic errors in the calibration of the LIGO gravitational-wave detectors. The actuation frequencies tested range from 90 Hz to 1 kHz and the actuation amplitudes range from 10^-6 m to 10^-18 m. For each of the four test mass actuators measured, the weighted mean coefficient over all frequencies for each technique deviates from the average actuation coefficient for all three techniques by less than 4%. This result indicates that systematic errors in the calibration of the responses of the LIGO detectors to differential length variations are within the stated uncertainties.
Researches on hazard avoidance cameras calibration of Lunar Rover
Li, Chunyan; Wang, Li; Lu, Xin; Chen, Jihua; Fan, Shenghong
2017-11-01
China's Lunar Lander and Rover will be launched in 2013 to accomplish the mission objectives of lunar soft landing and patrol exploration. The Lunar Rover has a forward-facing stereo camera pair (Hazcams) for hazard avoidance, and Hazcam calibration is essential for stereo vision. The Hazcam optics are f-theta fish-eye lenses with a 120°×120° horizontal/vertical field of view (FOV) and a 170° diagonal FOV. They introduce significant distortion and the acquired images are quite warped, so conventional camera calibration algorithms no longer work well. A photogrammetric calibration method for the geometric model of this type of optical fish-eye construction is investigated in this paper. In the method, the Hazcam model is represented by collinearity equations with interior and exterior orientation parameters [1] [2]. For high-precision applications, the calibration model is formulated with the radial symmetric distortion and the decentering distortion, as well as parameters to model affinity and shear, based on the fisheye deformation model [3] [4]. The proposed method has been applied to the stereo camera calibration system for the Lunar Rover.
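An f-theta (equidistant) fisheye lens maps a ray at angle θ from the optical axis to an image radius r = f·θ; adding odd polynomial terms in θ is a common way to model the radial symmetric distortion mentioned above. A sketch of this forward projection (decentering, affinity and shear terms omitted; parameter values are illustrative, not the Hazcam's):

```python
import numpy as np

def ftheta_project(X, f, cx, cy, k1=0.0, k2=0.0):
    """Project camera-frame 3-D points with an equidistant fisheye model:
    r = f * (theta + k1*theta^3 + k2*theta^5),
    where theta is the angle between the ray and the optical axis."""
    X = np.asarray(X, float)
    x, y, z = X[..., 0], X[..., 1], X[..., 2]
    theta = np.arctan2(np.hypot(x, y), z)       # angle from the optical axis
    r = f * (theta + k1 * theta**3 + k2 * theta**5)
    phi = np.arctan2(y, x)                      # azimuth in the image plane
    return np.stack([cx + r * np.cos(phi), cy + r * np.sin(phi)], axis=-1)

# A point on the optical axis projects to the principal point (cx, cy)
print(ftheta_project([0.0, 0.0, 1.0], f=300.0, cx=512.0, cy=512.0))
```

Unlike the perspective model (r = f·tan θ), r stays finite as θ approaches 90°, which is what lets such lenses cover a 170° diagonal FOV.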
Multivariate Statistical Methods as a Tool of Financial Analysis of Farm Business
Czech Academy of Sciences Publication Activity Database
Novák, J.; Sůvová, H.; Vondráček, Jiří
2002-01-01
Roč. 48, č. 1 (2002), s. 9-12 ISSN 0139-570X Institutional research plan: AV0Z1030915 Keywords : financial analysis * financial ratios * multivariate statistical methods * correlation analysis * discriminant analysis * cluster analysis Subject RIV: BB - Applied Statistics, Operational Research
Multivariate covariance generalized linear models
DEFF Research Database (Denmark)
Bonat, W. H.; Jørgensen, Bent
2016-01-01
We propose a general framework for non-normal multivariate data analysis called multivariate covariance generalized linear models, designed to handle multivariate response variables, along with a wide range of temporal and spatial correlation structures defined in terms of a covariance link function combined with a matrix linear predictor involving known matrices. The models are fitted by using an efficient Newton scoring algorithm based on quasi-likelihood and Pearson estimating functions, using only second-moment assumptions. This provides a unified approach to a wide variety of types of response variables and covariance structures, including multivariate extensions ... The method is motivated by three data examples that are not easily handled by existing methods. The first example concerns multivariate count data, the second involves response variables of mixed types, combined with repeated ...
The contribution to the calibration of LAr calorimeters at the ATLAS experiment
International Nuclear Information System (INIS)
Pecsy, M.
2011-01-01
The presented thesis brings various contributions to the testing and validation of the ATLAS detector calorimeter calibration. Since the ATLAS calorimeter is non-compensating, a sophisticated software calibration of the calorimeter response is needed. One of the official ATLAS calibration methods is the local hadron calibration. This method is based on detailed simulations providing information about the true deposited energy in the calorimeter. Such calibration consists of several independent steps, starting with the basic electromagnetic scale signal calibration and proceeding to the particle energy calibration. Calibration starts from topological cluster reconstruction and calibration at the EM scale. These clusters are classified as electromagnetic or hadronic, and the hadronic ones receive weights to correct for the invisible energy deposits of hadrons. To get the final reconstructed energy, out-of-cluster and dead material corrections are applied in subsequent steps. Tests of the calorimeter response with the first real data, from cosmic-ray muons and from LHC collisions, are presented in the thesis. Detailed studies of the full hadronic calibration performance in the dedicated combined end-cap calorimeter beam test of 2004 are presented as well. To optimise the performance of the calibration, Monte-Carlo-based studies are necessary. Two alternative methods of cluster classification are discussed, and a software tool for particle track extrapolation has been developed. (author)
ONE-STEP AND TWO-STEP CALIBRATION OF A PORTABLE PANORAMIC IMAGE MAPPING SYSTEM
Directory of Open Access Journals (Sweden)
P.-C. Wang
2012-07-01
Full Text Available A Portable Panoramic Image Mapping System (PPIMS) is proposed for rapid acquisition of three-dimensional spatial information. Considering convenience of use, cost, weight of equipment, precision, and power supply, the designed PPIMS is equipped with 6 circularly arranged cameras to capture panoramic images and a GPS receiver for positioning. The motivation for this design is to develop a hand-held Mobile Mapping System (MMS) for areas that are difficult to access with a vehicle MMS, such as rugged terrain, forest areas, heavily damaged disaster areas, and crowded places. The PPIMS is in fact a GPS-assisted close-range photogrammetric system. Compared with traditional close-range photogrammetry, the PPIMS can significantly reduce the need for ground control points. If the relative geometric relationships of the equipped sensors are known, the exterior orientation elements of each captured image can be solved. However, a system calibration must first be performed accurately to determine the relative geometric relationships of the multi-cameras and the GPS antenna center before the PPIMS can be applied to geo-referenced mapping. In this paper, both one-step and two-step calibration procedures for the PPIMS are performed to determine the lever-arm offsets and boresight angles among the cameras and GPS. The performance of the one-step and two-step calibrations is evaluated through analysis of the experimental results, and the two calibration procedures are compared. The two-step calibration method outperforms the one-step method in terms of calibration accuracy and operational convenience. We expect that the proposed two-step calibration procedure can also be applied to other platform-based MMSs.
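The lever-arm offset mentioned above relates the GPS antenna center to each camera's perspective center: the body-frame offset is rotated into the mapping frame by the platform attitude and added to the antenna position. A minimal sketch using a heading-only rotation (a full implementation would use the complete roll-pitch-yaw rotation and the boresight angles; all values here are hypothetical):

```python
import numpy as np

def rot_z(yaw):
    """Rotation about the vertical axis by the platform heading (radians)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def camera_center(gps_pos, yaw, lever_arm):
    """Camera perspective center = GPS antenna position plus the body-frame
    lever-arm offset rotated into the mapping frame."""
    return np.asarray(gps_pos, float) + rot_z(yaw) @ np.asarray(lever_arm, float)

# Antenna at (100, 200, 50) m, camera 0.1 m forward and 0.3 m below it
print(camera_center([100.0, 200.0, 50.0], 0.0, [0.1, 0.0, -0.3]))
```

Estimating the lever-arm vector (and the boresight rotation between camera and body frames) is precisely what the one-step and two-step calibration procedures solve for.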
Planck 2015 results. VIII. High Frequency Instrument data processing: Calibration and maps
Adam, R.; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A.J.; Barreiro, R.B.; Bartolo, N.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bertincourt, B.; Bielewicz, P.; Bock, J.J.; Bonavera, L.; Bond, J.R.; Borrill, J.; Bouchet, F.R.; Boulanger, F.; Bucher, M.; Burigana, C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H.C.; Christensen, P.R.; Clements, D.L.; Colombi, S.; Colombo, L.P.L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B.P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R.D.; Davis, R.J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Diego, J.M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T.A.; Eriksen, H.K.; Falgarone, E.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A.A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Ghosh, T.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K.M.; Gratton, S.; Gruppuso, A.; Gudmundsson, J.E.; Hansen, F.K.; Hanson, D.; Harrison, D.L.; Henrot-Versillé, S.; Herranz, D.; Hildebrandt, S.R.; Hivon, E.; Hobson, M.; Holmes, W.A.; Hornstrup, A.; Hovest, W.; Huffenberger, K.M.; Hurier, G.; Jaffe, A.H.; Jaffe, T.R.; Jones, W.C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T.S.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C.R.; Le Jeune, M.; Leahy, J.P.; Lellouch, E.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P.B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P.M.; Macías-Pérez, J.F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P.G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Moreno, R.; 
Morgante, G.; Mortlock, D.; Moss, A.; Mottet, S.; Munshi, D.; Murphy, J.A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C.B.; Nørgaard-Nielsen, H.U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C.A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Pearson, T.J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Pratt, G.W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J.P.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rusholme, B.; Sandri, M.; Santos, D.; Sauvé, A.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M.D.; Shellard, E.P.S.; Spencer, L.D.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J.A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tucci, M.; Tuovinen, J.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vibert, L.; Vielva, P.; Villa, F.; Wade, L.A.; Wandelt, B.D.; Watson, R.; Wehus, I.K.; Yvon, D.; Zacchei, A.
2016-01-01
This paper describes the processing applied to the Planck High Frequency Instrument (HFI) cleaned, time-ordered information to produce photometrically calibrated maps in temperature and (for the first time) in polarization. The data from the 2.5 year full mission include almost five independent full-sky surveys. HFI observes the sky over a broad range of frequencies, from 100 to 857 GHz. To get the best accuracy on the calibration over such a large range, two different photometric calibration schemes have been used. The 545 and 857 GHz data are calibrated using models of planetary atmospheric emission. The lower frequencies (from 100 to 353 GHz) are calibrated using the time-variable cosmological microwave background dipole, which we call the orbital dipole. This source of calibration only depends on the satellite velocity with respect to the solar system and permits an independent measurement of the amplitude of the CMB solar dipole (3364.5 ± 0.8 μK) which is 1σ higher than the WMAP measurement wit...
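To first order, the dipole amplitude seen by a detector moving at velocity v relative to the CMB rest frame is ΔT = T_CMB · v/c, which is why the orbital dipole depends only on the satellite velocity. A back-of-the-envelope check of the scales involved:

```python
T_CMB = 2.7255          # K, CMB monopole temperature (Fixsen 2009)
C = 299_792.458         # km/s, speed of light

def dipole_uK(v_km_s):
    """First-order Doppler dipole amplitude, in microkelvin."""
    return 1e6 * T_CMB * v_km_s / C

print(dipole_uK(30.0))   # Earth's orbital speed -> orbital dipole, ~270 uK
print(dipole_uK(370.0))  # Sun's speed w.r.t. the CMB -> solar dipole, ~3.4 mK
```

The second value reproduces the scale of the 3364.5 ± 0.8 μK solar dipole amplitude quoted in the abstract, while the much smaller, precisely known orbital dipole is what makes it such a clean calibrator.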
Calibration procedures for improved accuracy of wind turbine blade load measurement
Energy Technology Data Exchange (ETDEWEB)
Dahlberg, J.Aa. [Aeronautical Research Inst. of Sweden, Bromma (Sweden); Johansson, Hjalmar [Teknikgruppen AB, Sollentuna (Sweden)
1996-12-01
External loads acting on wind turbine blades are mainly transferred via the hub to the rest of the structure. It is therefore a normal approach to measure the loads acting on the turbine by load measurements in the blade roots. The load measurement is often accomplished by measuring strain on the surface of the blade or the hub; the strain signals are converted to loads by applying calibration factors to the measurements. This paper deals with difficulties associated with load measurements on two different wind turbines: one with strain gauges applied to a steel hub, where a linear stress-load relationship is expected, and the other with strain gauges applied to the GFRP blade close to the bearings, where strong non-linearities and temperature effects are expected. This paper suggests calibration methods to overcome these problems. 2 refs, 11 figs
Crystal timing offset calibration method for time of flight PET scanners
Ye, Jinghan; Song, Xiyun
2016-03-01
In time-of-flight (TOF) positron emission tomography (PET), precise calibration of the timing offset of each crystal of a PET scanner is essential. Conventionally this calibration requires a specially designed tool just for this purpose. In this study a method that uses a planar source to measure the crystal timing offsets (CTO) is developed. The method uses list mode acquisitions of a planar source placed at multiple orientations inside the PET scanner field-of-view (FOV). The placement of the planar source in each acquisition is determined automatically from the measured data, so that a fixture for exact placement of the source is not required. The expected coincidence time difference for each detected list mode event can be found from the planar source placement and the detector geometry; a deviation of the measured time difference from the expected one is due to the CTO of the two crystals. The least-squares solution for the CTO is found iteratively using the list mode events. The effectiveness of the crystal timing calibration is demonstrated using phantom images generated by placing each list mode event back into the image space with the timing offset applied to each event: the zigzagged outlines of the phantoms become smooth after the crystal timing calibration is applied. In conclusion, a crystal timing calibration method is developed that uses multiple list mode acquisitions of a planar source to find the least-squares solution for the crystal timing offsets.
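The estimation problem described reduces to: each list-mode event between crystals i and j contributes one equation, (measured − expected) Δt ≈ oᵢ − oⱼ, and the per-crystal offsets o are the least-squares solution (defined only up to a common constant, so a gauge constraint is needed). The paper solves this iteratively; the sketch below uses a direct dense solve on a toy 8-crystal system with simulated events, purely to illustrate the structure:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                    # number of crystals (toy scale)
true_off = rng.normal(0.0, 0.2, n)
true_off -= true_off.mean()              # offsets defined only up to a constant

# Simulated list-mode events: (crystal i, crystal j, measured-minus-expected dt)
events = []
for _ in range(2000):
    i, j = rng.choice(n, 2, replace=False)
    events.append((i, j, true_off[i] - true_off[j] + rng.normal(0.0, 0.05)))

# One least-squares row per event: o_i - o_j = residual time difference
A = np.zeros((len(events) + 1, n))
b = np.zeros(len(events) + 1)
for k, (i, j, d) in enumerate(events):
    A[k, i], A[k, j], b[k] = 1.0, -1.0, d
A[-1, :] = 1.0                           # gauge constraint: offsets sum to zero
est, *_ = np.linalg.lstsq(A, b, rcond=None)
```

For a real scanner with tens of thousands of crystals, the dense solve is replaced by the iterative scheme the abstract mentions, but the normal equations being solved are the same.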
Rotation in the Dynamic Factor Modeling of Multivariate Stationary Time Series.
Molenaar, Peter C. M.; Nesselroade, John R.
2001-01-01
Proposes a special rotation procedure for the exploratory dynamic factor model for stationary multivariate time series. The rotation procedure applies separately to each univariate component series of a q-variate latent factor series and transforms such a component, initially represented as white noise, into a univariate moving-average.…
Calibration service of radiation detectors and dosemeters at IPEN/ Sao Paulo
Energy Technology Data Exchange (ETDEWEB)
Potiens, M.P.A.; Caldas, L.V.E. [IPEN, CNEN/SP, Sao Paulo (Brazil)]. e-mail: mppalbu@ipen.br
2006-07-01
The Calibration Laboratory of the Instituto de Pesquisas Energeticas e Nucleares, IPEN, has for over 25 years been calibrating instruments used in radiation protection and therapy measurements and belonging to hospitals, industries, clinics and other users located in Sao Paulo and in other parts of Brazil. At present, the Calibration Laboratory is part of the Radiation Metrology Center and acts in the Radiation Protection, Radiation Therapy, Nuclear Medicine and Diagnostic Radiology areas, using special set-ups with gamma and beta radiation sealed sources, alpha and beta radiation plane sources, and low- and intermediate-energy X radiation. Moreover, it has reference instruments for each calibration area with traceability to the Brazilian National Laboratory for Metrology of Ionizing Radiation (secondary standards) and to international laboratories (primary standards). The number of tested instruments is increasing annually (from 170 in 1980 to 1871 in 2005), and with the development of new techniques and radiation detectors, continuous improvement of the existing calibration methods is necessary, as well as the establishment of new calibration services to be offered by the Calibration Laboratory to Brazilian and South American users. The objective of this study is to show the evolution of the calibration service developed at IPEN, describing the applied methods and the types of instruments calibrated. The quality system implementation process, following the NBR IEC/ISO 17025 standard, is also presented together with some tools used in the calibration procedures. (Author)
Calibration service of radiation detectors and dosemeters at IPEN/ Sao Paulo
International Nuclear Information System (INIS)
Potiens, M.P.A.; Caldas, L.V.E.
2006-01-01
The Calibration Laboratory of the Instituto de Pesquisas Energeticas e Nucleares, IPEN, has for over 25 years been calibrating instruments used in radiation protection and therapy measurements and belonging to hospitals, industries, clinics and other users located in Sao Paulo and in other parts of Brazil. At present, the Calibration Laboratory is part of the Radiation Metrology Center and acts in the Radiation Protection, Radiation Therapy, Nuclear Medicine and Diagnostic Radiology areas, using special set-ups with gamma and beta radiation sealed sources, alpha and beta radiation plane sources, and low- and intermediate-energy X radiation. Moreover, it has reference instruments for each calibration area with traceability to the Brazilian National Laboratory for Metrology of Ionizing Radiation (secondary standards) and to international laboratories (primary standards). The number of tested instruments is increasing annually (from 170 in 1980 to 1871 in 2005), and with the development of new techniques and radiation detectors, continuous improvement of the existing calibration methods is necessary, as well as the establishment of new calibration services to be offered by the Calibration Laboratory to Brazilian and South American users. The objective of this study is to show the evolution of the calibration service developed at IPEN, describing the applied methods and the types of instruments calibrated. The quality system implementation process, following the NBR IEC/ISO 17025 standard, is also presented together with some tools used in the calibration procedures. (Author)
Standard practice for torque calibration of testing machines and devices
American Society for Testing and Materials. Philadelphia
2009-01-01
1.1 This practice covers procedures and requirements for the calibration of torque for static and quasi-static torque capable testing machines or devices. These may, or may not, have torque indicating systems and include those devices used for the calibration of hand torque tools. Testing machines may be calibrated by one of the three following methods or combination thereof: 1.1.1 Use of standard weights and lever arms. 1.1.2 Use of elastic torque measuring devices. 1.1.3 Use of elastic force measuring devices and lever arms. 1.1.4 Any of the methods require a specific uncertainty of measurement and a traceability derived from national standards of mass and length. 1.2 The procedures of 1.1.1, 1.1.2, and 1.1.3 apply to the calibration of the torque-indicating systems associated with the testing machine, such as a scale, dial, marked or unmarked recorder chart, digital display, etc. In all cases the buyer/owner/user must designate the torque-indicating system(s) to be calibrated and included in the repor...
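Method 1.1.1 (standard weights and lever arms) rests on the elementary relation τ = m·g·L, with the effective arm reduced when the arm is not horizontal. A minimal sketch using conventional standard gravity (a real calibration would use the locally measured g and propagate the mass and length uncertainties the practice requires):

```python
import math

G_STD = 9.80665  # m/s^2, conventional standard gravity; use the local value in practice

def torque_Nm(mass_kg, arm_m, angle_deg=0.0):
    """Torque applied by a standard weight on a lever arm.
    angle_deg is the arm's deviation from horizontal, which shortens
    the effective moment arm by cos(angle)."""
    return mass_kg * G_STD * arm_m * math.cos(math.radians(angle_deg))

print(torque_Nm(10.0, 0.5))  # 10 kg on a 0.5 m horizontal arm
```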
Multivariate Statistical Process Control Charts: An Overview
Bersimis, Sotiris; Psarakis, Stelios; Panaretos, John
2006-01-01
In this paper we discuss the basic procedures for the implementation of multivariate statistical process control via control charting. Furthermore, we review multivariate extensions for all kinds of univariate control charts, such as multivariate Shewhart-type control charts, multivariate CUSUM control charts and multivariate EWMA control charts. In addition, we review unique procedures for the construction of multivariate control charts, based on multivariate statistical techniques such as p...
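The workhorse behind the multivariate Shewhart-type charts reviewed above is Hotelling's T² statistic, the Mahalanobis distance of each observation from the in-control mean. A minimal sketch with simulated in-control data (chart limits, which in practice come from an F or beta distribution, are omitted):

```python
import numpy as np

def hotelling_t2(X, mean, cov_inv):
    """Hotelling T^2 statistic for each row of X against the in-control
    mean vector and inverse covariance matrix."""
    d = np.asarray(X, float) - mean
    return np.einsum('ij,jk,ik->i', d, cov_inv, d)  # quadratic form per row

rng = np.random.default_rng(42)
incontrol = rng.normal(0.0, 1.0, (200, 3))          # 200 in-control samples, p = 3
mu = incontrol.mean(axis=0)
S = np.cov(incontrol, rowvar=False)
t2 = hotelling_t2(incontrol, mu, np.linalg.inv(S))
```

A point is signalled as out of control when its T² exceeds the chart's upper control limit; the univariate Shewhart chart is the p = 1 special case.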
Directory of Open Access Journals (Sweden)
Sezar Gülbaz
2015-01-01
Full Text Available Land development and increasing urbanization in a watershed affect water quantity and water quality. On one hand, urbanization provokes adjustment of the geomorphic structure of the streams and ultimately raises the peak flow rate, which causes flooding; on the other hand, it diminishes water quality, which results in an increase in Total Suspended Solids (TSS). Consequently, sediment accumulation is observed downstream of urban areas, which is not desirable for a longer dam life. In order to overcome the sediment accumulation problem in dams, the amount of TSS in streams and in watersheds should be kept under control. Low Impact Development (LID) is a Best Management Practice (BMP) which may be used for this purpose. It is a land planning and engineering design method applied in managing storm water runoff in order to reduce flooding and simultaneously improve water quality. LID includes techniques to predict suspended solid loads in surface runoff generated over impervious urban surfaces. In this study, the impact of LID-BMPs on surface runoff and TSS is investigated by employing a calibrated hydrodynamic model for the Sazlidere Watershed, which is located in Istanbul, Turkey. For this purpose, a calibrated hydrodynamic model was developed using the Environmental Protection Agency Storm Water Management Model (EPA SWMM). For model calibration and validation, we set up a rain gauge and a flow meter in the field and obtained rainfall and flow rate data. We then selected several LID types, such as retention basins, vegetative swales and permeable pavement, and obtained their influence on peak flow rate and on pollutant buildup and washoff for TSS. Consequently, we observed the possible effects of LID on surface runoff and TSS in the Sazlidere Watershed.
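SWMM-type water quality models typically represent the pollutant buildup and washoff mentioned above with exponential functions: buildup accumulates toward a maximum over dry days, and washoff removes a fraction of the accumulated mass in proportion to a power of the runoff rate. A sketch of those two functions with hypothetical coefficients (not the calibrated Sazlidere parameters):

```python
import math

def buildup(t_days, b_max, k_b):
    """Exponential pollutant buildup on an impervious surface (e.g. kg/ha):
    approaches b_max as dry days accumulate."""
    return b_max * (1.0 - math.exp(-k_b * t_days))

def washoff_step(b, runoff_mm_hr, c3, c4, dt_hr):
    """One time step of exponential washoff.
    Removal rate = c3 * q^c4 * b; returns (mass removed, remaining buildup)."""
    rate = c3 * runoff_mm_hr ** c4 * b
    removed = min(b, rate * dt_hr)       # cannot remove more than is there
    return removed, b - removed

b = buildup(7.0, b_max=50.0, k_b=0.4)    # TSS after a week of dry weather
removed, b = washoff_step(b, runoff_mm_hr=5.0, c3=0.05, c4=1.2, dt_hr=1.0)
```

LID measures such as permeable pavement act on this chain by lowering the runoff rate q, which reduces both the washoff term and the peak flow.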
Testing of a one dimensional model for Field II calibration
DEFF Research Database (Denmark)
Bæk, David; Jensen, Jørgen Arendt; Willatzen, Morten
2008-01-01
Field II is a program for simulating ultrasound transducer fields. It is capable of calculating the emitted and pulse-echoed fields for both pulsed and continuous wave transducers. To make it fully calibrated, a model of the transducer's electro-mechanical impulse response must be included. We examine an adapted one-dimensional transducer model, originally proposed by Willatzen [9], to calibrate Field II. This model is modified to calculate the required impulse responses needed by Field II for a calibrated field pressure and external circuit current calculation. The testing has been performed ... to the calibrated Field II program for 1, 4, and 10 cycle excitations. Two parameter sets were applied for modeling: one real-valued Pz27 parameter set, manufacturer supplied, and one complex-valued parameter set found in the literature, Algueró et al. [11]. The latter implicitly accounts for attenuation. Results show...
Vision system for dial gage torque wrench calibration
Aggarwal, Neelam; Doiron, Theodore D.; Sanghera, Paramjeet S.
1993-11-01
In this paper, we present the development of a fast and robust vision system which, in conjunction with the Dial Gage Calibration system developed by AKO Inc., will be used by the U.S. Army in calibrating dial gage torque wrenches. The vision system detects the change in the angular position of the dial pointer in a dial gage. The angular change is proportional to the applied torque. The input to the system is a sequence of images of the torque wrench dial gage taken at different dial pointer positions. The system then reports the angular difference between the different positions. The primary components of this vision system include modules for image acquisition, linear feature extraction and angle measurements. For each of these modules, several techniques were evaluated and the most applicable one was selected. This system has numerous other applications like vision systems to read and calibrate analog instruments.
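The angular change the system reports reduces to comparing the orientation of the detected pointer line at two positions. A minimal sketch of that step, illustrative only and not the system's actual feature-extraction code:

```python
import math

def pointer_angle(cx, cy, tip_x, tip_y):
    """Orientation (degrees) of the pointer line from the dial centre to the tip."""
    return math.degrees(math.atan2(tip_y - cy, tip_x - cx))

def angular_difference(a_deg, b_deg):
    """Smallest signed difference between two angles, in degrees."""
    return (b_deg - a_deg + 180.0) % 360.0 - 180.0

# Pointer tip detected at two positions on a dial centred at the origin
before = pointer_angle(0, 0, 1, 0)    # 0 degrees
after = pointer_angle(0, 0, 0, 1)     # 90 degrees
delta = angular_difference(before, after)
# delta is proportional to the applied torque via the dial's scale factor
```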
Methods of Multivariate Analysis
Rencher, Alvin C
2012-01-01
Praise for the Second Edition "This book is a systematic, well-written, well-organized text on multivariate analysis packed with intuition and insight . . . There is much practical wisdom in this book that is hard to find elsewhere."-IIE Transactions Filled with new and timely content, Methods of Multivariate Analysis, Third Edition provides examples and exercises based on more than sixty real data sets from a wide variety of scientific fields. It takes a "methods" approach to the subject, placing an emphasis on how students and practitioners can employ multivariate analysis in real-life sit
Continuous multivariate exponential extension
International Nuclear Information System (INIS)
Block, H.W.
1975-01-01
The Freund-Weinman multivariate exponential extension is generalized to the case of nonidentically distributed marginal distributions. A fatal shock model is given for the resulting distribution. Results in the bivariate case and the concept of constant multivariate hazard rate lead to a continuous distribution related to the multivariate exponential distribution (MVE) of Marshall and Olkin. This distribution is shown to be a special case of the extended Freund-Weinman distribution. A generalization of the bivariate model of Proschan and Sullo leads to a distribution which contains both the extended Freund-Weinman distribution and the MVE
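The fatal shock construction behind the Marshall-Olkin MVE can be sampled directly: component i fails at the earlier of its individual shock (rate λi) and a common shock (rate λ12). A small simulation sketch with illustrative rates:

```python
import random

def marshall_olkin_bivariate(lam1, lam2, lam12, rng):
    """One draw from the Marshall-Olkin bivariate exponential via the
    fatal shock construction."""
    z1 = rng.expovariate(lam1)     # shock killing component 1 only
    z2 = rng.expovariate(lam2)     # shock killing component 2 only
    z12 = rng.expovariate(lam12)   # common shock killing both
    return min(z1, z12), min(z2, z12)

rng = random.Random(0)
draws = [marshall_olkin_bivariate(1.0, 1.0, 0.5, rng) for _ in range(10000)]
# The common shock puts positive mass on the diagonal:
# P(X = Y) = lam12 / (lam1 + lam2 + lam12) = 0.5 / 2.5 = 0.2
tie_fraction = sum(1 for x, y in draws if x == y) / len(draws)
```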
Validation of a densimeter calibration procedure for a secondary calibration laboratory
International Nuclear Information System (INIS)
Alpizar Herrera, Juan Carlos
2014-01-01
A survey was conducted to quantify the need for calibration of density measurement instruments at the research units of the Sede Rodrigo Facio of the Universidad de Costa Rica. A calibration procedure was documented for the instrument for which the survey showed the highest demand for calibration service. A study of INTE-ISO/IEC 17025:2005, and specifically of section 5.4 of this standard, was carried out to document the procedure for calibrating densimeters. Densimeter calibration procedures and standards were sought from different national and international sources. The method of hydrostatic weighing, or Cuckow method, was the basis of the defined procedure. In addition to the calibration procedure, supporting documents were created: a data acquisition log, an intermediate calculation log and a calibration certificate template. As part of the validation of the documented procedure, a trueness test was performed using a national secondary calibration laboratory as the reference laboratory. E_n statistics of 0.41, 0.34 and 0.46 were obtained for the 90%, 50% and 10% calibration points of the densimeter scale, respectively. A reproducibility analysis of the method was performed with satisfactory results. Different suppliers were contacted to estimate the economic cost of the equipment and materials needed to implement the documented densimeter calibration method. The acquisition of an analytical balance, rather than a precision scale, was recommended in order to improve the results obtained with the documented method [es]
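The E_n scores quoted (0.41, 0.34, 0.46) follow the standard interlaboratory comparison formula: the difference between the laboratory and reference results divided by the root sum of squares of their expanded uncertainties, with |E_n| ≤ 1 conventionally taken as satisfactory. A sketch with hypothetical densimeter readings:

```python
import math

def en_score(x_lab, u_lab, x_ref, u_ref):
    """E_n = (x_lab - x_ref) / sqrt(U_lab^2 + U_ref^2),
    with U the expanded (k = 2) uncertainties."""
    return (x_lab - x_ref) / math.sqrt(u_lab ** 2 + u_ref ** 2)

# Hypothetical readings at one calibration point (g/mL)
en = en_score(x_lab=0.9002, u_lab=0.0004, x_ref=0.9000, u_ref=0.0003)
satisfactory = abs(en) <= 1.0
```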
Zhou, Fei; Zhao, Yajing; Peng, Jiyu; Jiang, Yirong; Li, Maiquan; Jiang, Yuan; Lu, Baiyi
2017-07-01
Osmanthus fragrans flowers are used as folk medicine and as additives for teas, beverages and foods. The metabolites of O. fragrans flowers from different geographical origins are inconsistent to some extent. Chromatography and mass spectrometry combined with multivariable analysis methods provide an approach for discriminating the origin of O. fragrans flowers. The aim was to discriminate Osmanthus fragrans var. thunbergii flowers from different origins using the identified metabolites. GC-MS and UPLC-PDA were conducted to analyse the metabolites in O. fragrans var. thunbergii flowers (150 samples in total). Principal component analysis (PCA), soft independent modelling of class analogy analysis (SIMCA) and random forest (RF) analysis were applied to group the GC-MS and UPLC-PDA data. GC-MS identified 32 compounds common to all samples, while UPLC-PDA/QTOF-MS identified 16 common compounds. PCA of the UPLC-PDA data generated a better clustering than PCA of the GC-MS data. Ten metabolites (six from GC-MS and four from UPLC-PDA) were selected by their PCA loadings as effective compounds for discrimination. SIMCA and RF analysis were used to build classification models, and the RF model, based on the four effective compounds (a caffeic acid derivative, acteoside, ligustroside and compound 15), yielded better results, with a classification rate of 100% in the calibration set and 97.8% in the prediction set. GC-MS and UPLC-PDA combined with multivariable analysis methods can discriminate the origin of Osmanthus fragrans var. thunbergii flowers. Copyright © 2017 John Wiley & Sons, Ltd.
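The PCA step projects each sample's metabolite profile onto the directions of maximum variance before clustering. As a minimal pure-Python stand-in (two variables only, not the published pipeline), the first principal axis of 2-D data can be computed analytically from the 2×2 covariance matrix:

```python
import math

def first_principal_axis(points):
    """Unit vector along the first principal component of 2-D data,
    from the analytic eigenvector angle of the 2x2 covariance matrix."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / (n - 1)
    syy = sum((y - my) ** 2 for _, y in points) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in points) / (n - 1)
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)  # angle of first PC
    return math.cos(theta), math.sin(theta)

# Two perfectly correlated metabolite intensities -> axis along (1, 1)
axis = first_principal_axis([(0, 0), (1, 1), (2, 2), (3, 3)])
```

Scores (projections onto this axis) are what the loadings-based compound selection and the downstream SIMCA/RF models act on.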
SU-E-J-221: Advantages of a New Surface Imaging Calibration Method for SRS Treatments
International Nuclear Information System (INIS)
Paxton, A; Manger, R; Pawlicki, T; Kim, G
2014-01-01
Purpose: The present calibration method used for the AlignRT surface imaging system relies on the placement of a calibration plate at the linac isocenter using isocenter surrogates (crosshairs, room lasers, etc.). This work investigated the potential advantages of a new calibration method that shifts the AlignRT isocenter to be coincident with the linac MV beam isocenter. Methods: To quantify the potential uncertainties associated with the present calibration method for SRS treatments, the calibration plate was intentionally shifted away from isocenter +/−3mm in the longitudinal and lateral directions and +/−1mm in the longitudinal, lateral, and vertical directions. A head phantom was placed in a mock SRS treatment position and monitored with the AlignRT system. The AlignRT-indicated offsets were recorded at 270, 315, 0, 45, and 90° couch angles for each intentional calibration misalignment. The new isocenter calibration was applied after each misalignment, and the measurements were repeated and compared to the previous results. Results: With intentional longitudinal and lateral shifts of +/−3mm and +/−1mm in the calibration plate, the average indicated offsets at couch rotations of +/−90° were 4.3mm and 1.6mm, respectively. This was in agreement with the theoretical offset of sqrt(2)*(intentional shift of the calibration plate). Since vertical shifts were along the rotation axis of the couch, these shifts had little effect on the offsets with changing couch angle. When the new calibration was applied, the indicated offsets were all within 0.5mm for all couch angles. These offsets were in agreement with the known magnitude of couch walkout. Conclusion: The potential pitfalls of the present calibration method have been established, and the advantages of the new calibration method have been demonstrated. This new calibration method effectively removes the potential miscalibration artifacts of the present calibration method, giving the AlignRT user more
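The √2 scaling observed at the ±90° couch angles follows from simple geometry: a calibration displacement s that rotates with the couch produces an apparent offset |R(θ)s − s| = 2|s|·sin(θ/2). A quick check of that relation:

```python
import math

def apparent_offset(shift_mm, couch_deg):
    """Apparent isocentre offset after a couch rotation of couch_deg when
    the calibration was displaced by shift_mm: |R(theta)s - s| = 2|s|sin(theta/2)."""
    return 2.0 * abs(shift_mm) * math.sin(math.radians(couch_deg) / 2.0)

# A 3 mm plate misplacement at a 90 degree couch kick -> sqrt(2) * 3 ≈ 4.24 mm,
# consistent with the ~4.3 mm average indicated offsets reported above
err_3mm = apparent_offset(3.0, 90.0)
err_1mm = apparent_offset(1.0, 90.0)
```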
A multivariate family-based association test using generalized estimating equations : FBAT-GEE
Lange, C; Silverman, SK; Xu, [No Value; Weiss, ST; Laird, NM
In this paper we propose a multivariate extension of family-based association tests based on generalized estimating equations. The test can be applied to multiple phenotypes and to phenotypic data obtained in longitudinal studies without making any distributional assumptions for the phenotypic
Smith, Joseph P; Smith, Frank C; Ottaway, Joshua; Krull-Davatzes, Alexandra E; Simonson, Bruce M; Glass, Billy P; Booksh, Karl S
2017-08-01
The high-pressure, α-PbO2-structured polymorph of titanium dioxide (TiO2-II) was recently identified in micrometer-sized grains recovered from four Neoarchean spherule layers deposited between ∼2.65 and ∼2.54 billion years ago. Several lines of evidence support the interpretation that these layers represent distal impact ejecta layers. The presence of shock-induced TiO2-II provides physical evidence to further support an impact origin for these spherule layers. Detailed characterization of the distribution of TiO2-II in these grains may be useful for correlating the layers, estimating the paleodistances of the layers from their source craters, and providing insight into the formation of the TiO2-II. Here we report the investigation of TiO2-II-bearing grains from these four spherule layers using multivariate curve resolution-alternating least squares (MCR-ALS) applied to Raman microspectroscopic mapping. Raman spectra provide evidence of grains consisting primarily of rutile (TiO2) and TiO2-II, as shown by Raman bands at 174 cm−1 (TiO2-II), 426 cm−1 (TiO2-II), 443 cm−1 (rutile), and 610 cm−1 (rutile). Principal component analysis (PCA) yielded a predominantly three-phase system composed of rutile, TiO2-II, and substrate-adhesive epoxy. Scanning electron microscopy (SEM) suggests heterogeneous grains containing polydispersed micrometer- and submicrometer-sized particles. Multivariate curve resolution-alternating least squares applied to the Raman microspectroscopic mapping yielded up to five distinct chemical components: three phases of TiO2 (rutile, TiO2-II, and anatase), quartz (SiO2), and substrate-adhesive epoxy. Spectral profiles and spatially resolved chemical maps of the pure chemical components were generated using MCR-ALS applied to the Raman microspectroscopic maps. The spatial resolution of the Raman microspectroscopic maps was enhanced in comparable, cost-effective analysis times by limiting spectral resolution
Onboard Blackbody Calibrator Component Development for IR Remote Sensing Instrumentation
National Aeronautics and Space Administration — The objective of this study is to apply and to provide a reliable, stable durable onboard blackbody calibrator to future Earth Science missions by infusing the new...
Method for lateral force calibration in atomic force microscope using MEMS microforce sensor.
Dziekoński, Cezary; Dera, Wojciech; Jarząbek, Dariusz M
2017-11-01
In this paper we present a simple and direct method for the lateral force calibration constant determination. Our procedure does not require any knowledge about material or geometrical parameters of an investigated cantilever. We apply a commercially available microforce sensor with advanced electronics for direct measurement of the friction force applied by the cantilever's tip to a flat surface of the microforce sensor measuring beam. Due to the third law of dynamics, the friction force of the equal value tilts the AFM cantilever. Therefore, torsional (lateral force) signal is compared with the signal from the microforce sensor and the lateral force calibration constant is determined. The method is easy to perform and could be widely used for the lateral force calibration constant determination in many types of atomic force microscopes. Copyright © 2017 Elsevier B.V. All rights reserved.
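The calibration constant is the slope relating the friction force read by the MEMS reference sensor to the AFM's torsional signal. A least-squares sketch through the origin, with hypothetical paired readings:

```python
def lateral_calibration_constant(forces_nN, signals_V):
    """Least-squares slope through the origin of reference force vs.
    lateral (torsional) signal; the slope is the calibration constant in nN/V."""
    num = sum(f * v for f, v in zip(forces_nN, signals_V))
    den = sum(v * v for v in signals_V)
    return num / den

# Hypothetical paired readings (reference sensor force, AFM torsional signal)
forces = [10.0, 20.0, 30.0, 40.0]    # nN, from the MEMS microforce sensor
signals = [0.11, 0.20, 0.31, 0.40]   # V, AFM lateral output
alpha = lateral_calibration_constant(forces, signals)   # ≈ 99 nN/V
```

Once alpha is known, multiplying any torsional voltage by it converts the AFM's lateral signal into a friction force.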
Multivariable PID controller design tuning using bat algorithm for activated sludge process
Atikah Nor’Azlan, Nur; Asmiza Selamat, Nur; Mat Yahya, Nafrizuan
2018-04-01
The design of a multivariable PID (MPID) controller for a multi-input multi-output system is the concern of this project, applying four multivariable PID tuning methods: Davison, Penttinen-Koivo, Maciejowski and a proposed combined method. The aim of this study is to investigate the performance of a selected optimization technique in tuning the parameters of the MPID controller. The selected optimization technique is the Bat Algorithm (BA). All the MPID-BA tuning results will be compared and analyzed. The best MPID-BA will then be chosen in order to determine which technique is better based on the system performance in terms of transient response.
Copula Based Factorization in Bayesian Multivariate Infinite Mixture Models
Martin Burda; Artem Prokhorov
2012-01-01
Bayesian nonparametric models based on infinite mixtures of density kernels have been recently gaining in popularity due to their flexibility and feasibility of implementation even in complicated modeling scenarios. In economics, they have been particularly useful in estimating nonparametric distributions of latent variables. However, these models have been rarely applied in more than one dimension. Indeed, the multivariate case suffers from the curse of dimensionality, with a rapidly increas...
Tan, Chao; Chen, Hui; Wang, Chao; Zhu, Wanping; Wu, Tong; Diao, Yuanbo
2013-03-01
Near- and mid-infrared (NIR/MIR) spectroscopy techniques have gained great acceptance in industry due to their multiple applications and versatility. However, the success of an application often depends heavily on the construction of accurate and stable calibration models. For this purpose, a simple multi-model fusion strategy is proposed. It is the combination of a Kohonen self-organizing map (KSOM), mutual information (MI) and partial least squares (PLS), and is therefore named KMICPLS. It works as follows: first, the original training set is fed into a KSOM for unsupervised clustering of samples, from which a series of training subsets is constructed. Thereafter, on each of the training subsets, an MI spectrum is calculated and only the variables with MI values higher than the mean value are retained, from which a candidate PLS model is constructed. Finally, a fixed number of PLS models are selected to produce a consensus model. Two NIR/MIR spectral datasets from the brewing industry are used for experiments. The results confirm its superior performance over two reference algorithms, i.e., conventional PLS and genetic algorithm-PLS (GAPLS). It can build more accurate and stable calibration models without increasing the complexity, and can be generalized to other NIR/MIR applications.
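Two of the steps, keeping only variables whose MI exceeds the mean of the MI spectrum and averaging the member PLS predictions into a consensus, can be sketched as follows (toy numbers; in KMICPLS the MI spectra and member models come from the KSOM training subsets):

```python
def select_by_mutual_information(mi_spectrum):
    """Indices of variables whose MI with the response exceeds the mean MI."""
    mean_mi = sum(mi_spectrum) / len(mi_spectrum)
    return [i for i, mi in enumerate(mi_spectrum) if mi > mean_mi]

def consensus_predict(member_predictions):
    """Consensus model output: the average of the member model predictions."""
    n_models = len(member_predictions)
    n_samples = len(member_predictions[0])
    return [sum(p[i] for p in member_predictions) / n_models
            for i in range(n_samples)]

# Hypothetical MI spectrum over 6 wavelengths, and 3 member-model predictions
kept = select_by_mutual_information([0.1, 0.9, 0.4, 0.8, 0.2, 0.3])
fused = consensus_predict([[1.0, 2.0], [1.2, 1.8], [0.8, 2.2]])   # ≈ [1.0, 2.0]
```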
Standard Test Method for Calibration of Non-Concentrator Photovoltaic Secondary Reference Cells
American Society for Testing and Materials. Philadelphia
2010-01-01
1.1 This test method covers calibration and characterization of secondary terrestrial photovoltaic reference cells to a desired reference spectral irradiance distribution. The recommended physical requirements for these reference cells are described in Specification E1040. Reference cells are principally used in the determination of the electrical performance of a photovoltaic device. 1.2 Secondary reference cells are calibrated indoors using simulated sunlight or outdoors in natural sunlight by reference to a primary reference cell previously calibrated to the same desired reference spectral irradiance distribution. 1.3 Secondary reference cells calibrated according to this test method will have the same radiometric traceability as that of the primary reference cell used for the calibration. Therefore, if the primary reference cell is traceable to the World Radiometric Reference (WRR, see Test Method E816), the resulting secondary reference cell will also be traceable to the WRR. 1.4 This test method appli...
Effects of pressurization procedures on calibration results for precise pressure transducers
International Nuclear Information System (INIS)
Kajikawa, Hiroaki; Kobata, Tokihiko
2010-01-01
The output of electromechanical pressure gauges depends on not only the currently applied pressure, but also the pressurization history. Thus, the calibration results of gauges are affected by the pressurization procedure. In this paper, among several important factors influencing the results, we report the effects of the interval between the calibration cycles and the effects of the preliminary pressurizations. In order to quantitatively evaluate these effects, we developed a fully automated system that uses a pressure balance to calibrate pressure gauges. Subsequently, gauges containing quartz Bourdon-type pressure transducers were calibrated in a stepwise manner for pressures between 10 MPa and 100 MPa. The typical standard deviation of the data over three cycles was reduced to a few parts per million (ppm). The interval between the calibration cycles, which ranges from zero to more than 12 h, exerts a strong influence on the results in the process of increasing the pressure, where at 10 MPa the maximum difference between the results was approximately 40 ppm. The preliminary pressurization immediately before the calibration cycle reduces the effects of the interval on the results in certain cases. However, in turn, the influence of the waiting time between the preliminary pressurization and the main calibration cycle becomes strong. In the present paper, we outline several possible measures for obtaining calibration results with high reproducibility
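The cycle-to-cycle repeatability quoted in parts per million is the relative spread of repeated readings over the calibration cycles. A sketch with hypothetical readings near the 10 MPa point:

```python
def ppm_spread(readings):
    """Sample standard deviation of repeated readings relative to their
    mean, expressed in parts per million."""
    n = len(readings)
    mean = sum(readings) / n
    var = sum((r - mean) ** 2 for r in readings) / (n - 1)
    return (var ** 0.5) / mean * 1e6

# Hypothetical readings (MPa) from three calibration cycles at 10 MPa
spread = ppm_spread([10.00000, 10.00004, 10.00002])   # ≈ 2 ppm
```

The 40 ppm interval effect reported above would correspond to a difference of about 0.4 kPa between readings at the 10 MPa point.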
McKinley, C. C.; Scudder, R.; Thomas, D. J.
2016-12-01
The Neodymium Isotopic composition (Nd IC) of oxide coatings has been applied as a tracer of water mass composition and used to address fundamental questions about past ocean conditions. The leached authigenic oxide coating from marine sediment is widely assumed to reflect the dissolved trace metal composition of the bottom water interacting with sediment at the seafloor. However, recent studies have shown that readily reducible sediment components, in addition to trace metal fluxes from the pore water, are incorporated into the bottom water, influencing the trace metal composition of leached oxide coatings. This challenges the prevailing application of the authigenic oxide Nd IC as a proxy of seawater composition. Therefore, it is important to identify the component end-members that create sediments of different lithology and determine if, or how, they might contribute to the Nd IC of oxide coatings. To investigate lithologic influence on the results of sequential leaching, we selected two sites with complete bulk sediment statistical characterization. Site U1370 in the South Pacific Gyre is predominantly composed of rhyolite (~60%) and has a distinguishable (~10%) Fe-Mn oxyhydroxide component (Dunlea et al., 2015). Site 1149 near the Izu-Bonin Arc is predominantly composed of dispersed ash (~20-50%) and eolian dust from Asia (~50-80%) (Scudder et al., 2014). We perform a two-step leaching procedure: a 14 mL leach of 0.02 M hydroxylamine hydrochloride (HH) in 20% acetic acid buffered to pH 4 for one hour, targeting metals bound to the Fe- and Mn-oxide fractions, and a second HH leach for 12 hours, designed to remove any remaining oxides from the residual component. We analyze all three resulting fractions for a large suite of major, trace and rare earth elements; a sub-set of the samples are also analyzed for Nd IC. We use multivariate statistical analyses of the resulting geochemical data to identify how each component of the sediment partitions across the sequential
User-friendly freehand ultrasound calibration using Lego bricks and automatic registration.
Xiao, Yiming; Yan, Charles Xiao Bo; Drouin, Simon; De Nigris, Dante; Kochanowska, Anna; Collins, D Louis
2016-09-01
As an inexpensive, noninvasive, and portable clinical imaging modality, ultrasound (US) has been widely employed in many interventional procedures for monitoring potential tissue deformation, surgical tool placement, and locating surgical targets. The application requires the spatial mapping between 2D US images and 3D coordinates of the patient. Although positions of the devices (i.e., the ultrasound transducer) and the patient can be easily recorded by a motion tracking system, the spatial relationship between the US image and the tracker attached to the US transducer needs to be estimated through an US calibration procedure. Previously, various calibration techniques have been proposed, where a spatial transformation is computed to match the coordinates of corresponding features in a physical phantom and those seen in the US scans. However, most of these methods are difficult for novice users. We propose an ultrasound calibration method that constructs a phantom from simple Lego bricks and applies an automated multi-slice 2D-3D registration scheme without volumetric reconstruction. The method was validated for its calibration accuracy and reproducibility. Our method yields a calibration accuracy of [Formula: see text] mm and a calibration reproducibility of 1.29 mm. We have proposed a robust, inexpensive, and easy-to-use ultrasound calibration method.
A holistic calibration method with iterative distortion compensation for stereo deflectometry
Xu, Yongjia; Gao, Feng; Zhang, Zonghua; Jiang, Xiangqian
2018-07-01
This paper presents a novel holistic calibration method for a stereo deflectometry system to improve the system measurement accuracy. The reconstruction result of stereo deflectometry is integrated with the calculated normal data of the measured surface. The calculation accuracy of the normal data is strongly influenced by the calibration accuracy of the geometrical relationship of the stereo deflectometry system. Conventional calibration approaches introduce form error to the system due to an inaccurate imaging model and distortion elimination. The proposed calibration method compensates system distortion based on an iterative algorithm instead of the conventional distortion mathematical model. The initial values of the system parameters are calculated from the fringe patterns displayed on the system's LCD screen through a reflection off a markless flat mirror. An iterative algorithm is proposed to compensate system distortion and to optimize camera imaging parameters and system geometrical relation parameters based on a cost function. Both simulation work and experimental results show the proposed calibration method can significantly improve the calibration and measurement accuracy of a stereo deflectometry. The PV (peak-to-valley) value of the measurement error of a flat mirror can be reduced to 69.7 nm by applying the proposed method, from 282 nm obtained with the conventional calibration approach.
Software System for the Calibration of X-Ray Measuring Instruments
International Nuclear Information System (INIS)
Gaytan-Gallardo, E.; Tovar-Munoz, V. M.; Cruz-Estrada, P.; Vergara-Martinez, F. J.; Rivero-Gutierrez, T.
2006-01-01
A software system that facilitates the calibration of X-ray measuring instruments used in medical applications is presented. The Secondary Standard Dosimetry Laboratory (SSDL) of the National Institute of Nuclear Research in Mexico (ININ, for its initials in Spanish) supports activities concerning ionizing radiation in the medical area. One of these activities is the calibration of X-ray measuring instruments, in terms of air kerma or exposure, by the substitution method in an X-ray beam at a point where the rate has been determined by means of a standard ionization chamber. To automate this process, a software system has been developed; the calibration setup is composed of an X-ray unit, a Dynalizer IIIU X-ray meter by RADCAL, a commercial data acquisition card, the software system and the units to be tested and calibrated. A quality control plan has been applied in the development of the software system, ensuring that quality assurance procedures and standards are followed
Radioactive sources for ATLAS hadron tile calorimeter calibration
International Nuclear Information System (INIS)
Budagov, Yu.; Cavalli-Sforza, M.; Ivanyushenkov, Yu.
1997-01-01
The main requirements for radioactive sources applied in the TileCal calibration systems are formulated; technology of the sources production developed in the Laboratory of Nuclear Problems, JINR is described. Design and characteristics of the prototype sources manufactured in Dubna and tested on ATLAS TileCal module 0 are presented
SCIAMACHY Level 1 data: calibration concept and in-flight calibration
Lichtenberg, G.; Kleipool, Q.; Krijger, J. M.; van Soest, G.; van Hees, R.; Tilstra, L. G.; Acarreta, J. R.; Aben, I.; Ahlers, B.; Bovensmann, H.; Chance, K.; Gloudemans, A. M. S.; Hoogeveen, R. W. M.; Jongma, R. T. N.; Noël, S.; Piters, A.; Schrijver, H.; Schrijvers, C.; Sioris, C. E.; Skupin, J.; Slijkhuis, S.; Stammes, P.; Wuttke, M.
2006-11-01
The calibration of SCIAMACHY has been thoroughly checked since the instrument was launched on-board ENVISAT in February 2002. While SCIAMACHY's functional performance has been excellent since launch, a number of technical difficulties have appeared that required adjustments to the calibration. The problems can be separated into three types: (1) Those caused by the instrument and/or platform environment. Among these is the high water content in the satellite structure and/or MLI layer. This results in the deposition of ice on the detectors in channels 7 and 8, which seriously affects the retrievals in the IR, mostly because of the continuous change of the slit function caused by scattering of the light through the ice layer. Additionally, a light leak in channel 7 severely hampers any retrieval from this channel. (2) Problems due to errors in the on-ground calibration and/or data processing affecting, for example, the radiometric calibration. A new approach based on a mixture of on-ground and in-flight data is shortly described here. (3) Problems caused by principal limitations of the calibration concept, e.g. the possible appearance of spectral structures after the polarisation correction due to unavoidable errors in the determination of atmospheric polarisation. In this paper we give a complete overview of the calibration and the problems that still have to be solved. We will also give an indication of the effect of calibration problems on retrievals where possible. Since the operational processing chain is currently being updated and no newly processed data are available at this point in time, for some calibration issues only a rough estimate of the effect on Level 2 products can be given. However, it is the intention of this paper to serve as a future reference for detailed studies into specific calibration issues.
Kruger, Uwe
2012-01-01
The development and application of multivariate statistical techniques in process monitoring has gained substantial interest over the past two decades in academia and industry alike. Initially developed for monitoring and fault diagnosis in complex systems, such techniques have been refined and applied in various engineering areas, for example mechanical and manufacturing, chemical, electrical and electronic, and power engineering. The recipe for the tremendous interest in multivariate statistical techniques lies in its simplicity and adaptability for developing monitoring applica
International Nuclear Information System (INIS)
Pérez, Rocío L.; Escandar, Graciela M.
2016-01-01
A green method based on non-sophisticated instrumentation is reported for the quantification of seven natural and synthetic estrogens, three progestagens and one androgen in the presence of real interferences. The method takes advantage of: (1) chromatography, allowing total or partial resolution of a large number of compounds, (2) dual detection, permitting selection of the most appropriate signal for each analyte and, (3) second-order calibration, enabling mathematical resolution of incompletely resolved chromatographic bands and analyte determination in the presence of interferents. Consumption of organic solvents for cleaning, extraction and separation is markedly decreased because of the coupling with MCR-ALS (multivariate curve resolution-alternating least squares), which allows successful resolution in the presence of other co-eluting matrix constituents. Rigorous IUPAC detection limits were obtained: 6–24 ng L−1 in water, and 0.1–0.9 ng g−1 in sediments. Relative prediction errors were 2–10% (water) and 1–8% (sediments). - Highlights: • A green and simple chromatographic method for endocrine disruptors is proposed. • Diode array and fluorescence detectors are simultaneously used. • Eleven sex hormones are determined in water and sediment samples. • Outstanding selectivity is attained with the MCR-ALS second-order algorithm. - Liquid chromatography coupled to chemometrics allows one to selectively and sensitively quantitate eleven endocrine disruptors in challenging scenarios using a green analytical approach.
Regression calibration with more surrogates than mismeasured variables
Kipnis, Victor
2012-06-29
In a recent paper (Weller EA, Milton DK, Eisen EA, Spiegelman D. Regression calibration for logistic regression with multiple surrogates for one exposure. Journal of Statistical Planning and Inference 2007; 137: 449-461), the authors discussed fitting logistic regression models when a scalar main explanatory variable is measured with error by several surrogates, that is, a situation with more surrogates than variables measured with error. They compared two methods of adjusting for measurement error using a regression calibration approximate model as if it were exact. One is the standard regression calibration approach consisting of substituting an estimated conditional expectation of the true covariate given observed data in the logistic regression. The other is a novel two-stage approach when the logistic regression is fitted to multiple surrogates, and then a linear combination of estimated slopes is formed as the estimate of interest. Applying estimated asymptotic variances for both methods in a single data set with some sensitivity analysis, the authors asserted superiority of their two-stage approach. We investigate this claim in some detail. A troubling aspect of the proposed two-stage method is that, unlike standard regression calibration and a natural form of maximum likelihood, the resulting estimates are not invariant to reparameterization of nuisance parameters in the model. We show, however, that, under the regression calibration approximation, the two-stage method is asymptotically equivalent to a maximum likelihood formulation, and is therefore in theory superior to standard regression calibration. However, our extensive finite-sample simulations in the practically important parameter space where the regression calibration model provides a good approximation failed to uncover such superiority of the two-stage method. We also discuss extensions to different data structures.
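Standard regression calibration, the first of the two compared approaches, fits E[X | W] on a subsample where the true covariate is known and substitutes the predicted value for the mismeasured one in the main analysis. A single-surrogate sketch (the paper's setting has several surrogates, but the substitution idea is the same):

```python
def ols_fit(w, x):
    """Ordinary least squares fit x = a + b*w, returning (a, b)."""
    n = len(w)
    mw, mx = sum(w) / n, sum(x) / n
    b = (sum((wi - mw) * (xi - mx) for wi, xi in zip(w, x))
         / sum((wi - mw) ** 2 for wi in w))
    return mx - b * mw, b

def regression_calibration(w_cal, x_cal, w_main):
    """Fit E[X | W] on the calibration subsample, then substitute the
    predicted X for each mismeasured W in the main-study data."""
    a, b = ols_fit(w_cal, x_cal)
    return [a + b * wi for wi in w_main]

# Hypothetical calibration subsample (surrogate W, true X) and main-study W
x_hat = regression_calibration([1.0, 2.0, 3.0], [1.1, 2.0, 2.9], [2.0, 4.0])
```

In the logistic regression of interest, these imputed values replace the error-prone measurements, which is the approximation whose properties the paper examines.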
Influence of rainfall observation network on model calibration and application
Directory of Open Access Journals (Sweden)
A. Bárdossy
2008-01-01
Full Text Available The objective of this study is to investigate the influence of the spatial resolution of the rainfall input on model calibration and application. The analysis is carried out by varying the distribution of the raingauge network. A meso-scale catchment located in southwest Germany has been selected for this study. First, the semi-distributed HBV model is calibrated with precipitation interpolated from the available observed rainfall of the different raingauge networks. An automatic calibration method based on the combinatorial optimization algorithm simulated annealing is applied. The performance of the hydrological model is analyzed as a function of raingauge density. Secondly, the calibrated model is validated using interpolated precipitation from the same raingauge density used for the calibration, as well as interpolated precipitation based on networks of reduced and increased raingauge density. Lastly, the effect of missing rainfall data is investigated by using a multiple linear regression approach for filling in the missing measurements. The model, calibrated with the complete set of observed data, is then run in the validation period using the above-described precipitation fields. The simulated hydrographs obtained in these three sets of experiments are analyzed through comparison of the computed Nash-Sutcliffe coefficient and several goodness-of-fit indexes. The results show that a model driven by different raingauge networks might need re-calibration of the model parameters: specifically, a model calibrated on relatively sparse precipitation information might perform well on dense precipitation information, while a model calibrated on dense precipitation information fails on sparse precipitation information. Also, the model calibrated with the complete set of observed precipitation and run with incomplete observed data associated with the data estimated using multiple linear regressions, at the locations treated as
A landmark-based method for the geometrical 3D calibration of scanning microscopes
Energy Technology Data Exchange (ETDEWEB)
Ritter, M.
2007-04-27
This thesis presents a new strategy and a spatial method for the geometric calibration of 3D measurement devices in the micro-range, based on spatial reference structures with nanometer-sized landmarks (nanomarkers). The new method was successfully applied for the 3D calibration of scanning probe microscopes (SPM) and confocal laser scanning microscopes (CLSM). Moreover, the spatial method was also used for the photogrammetric self-calibration of scanning electron microscopes (SEM). In order to apply the calibration strategy to all the scanning microscopes used, the landmark-based principle of reference points, often employed in land surveying and close-range applications, has been transferred to the nano- and micro-range in the form of nanomarkers. To serve as a support for the nanomarkers, slope-shaped step pyramids have been developed and fabricated by focused ion beam (FIB) induced metal deposition. These FIB-produced 3D microstructures are sized to embrace most of the measurement volume of the scanning microscopes, and their special design allows a homogeneous distribution of the nanomarkers. The nanomarkers were applied onto the support and the plateaus of the slope-step pyramids by FIB etching (milling) as landmarks as small as a few hundred nanometers in diameter. The nanomarkers are either point- or ring-shaped. They are optimized so that they can be spatially measured by SPM and CLSM, and imaged and photogrammetrically analyzed on the basis of SEM data. The centre of each nanomarker serves as a reference point in the measurement data or images. By applying image processing routines, the image (2D) or object (3D) coordinates of each nanomarker have been determined with subpixel accuracy. The correlative analysis of the SPM, CLSM and photogrammetric SEM measurement data after 3D calibration resulted in mean residuals in the measured coordinates as small as 13 nm. Without the coupling factors the mean
Cannon, Alex J.
2018-01-01
Most bias correction algorithms used in climatology, for example quantile mapping, are applied to univariate time series. They neglect the dependence between different variables. Those that are multivariate often correct only limited measures of joint dependence, such as Pearson or Spearman rank correlation. Here, an image processing technique designed to transfer colour information from one image to another—the N-dimensional probability density function transform—is adapted for use as a multivariate bias correction algorithm (MBCn) for climate model projections/predictions of multiple climate variables. MBCn is a multivariate generalization of quantile mapping that transfers all aspects of an observed continuous multivariate distribution to the corresponding multivariate distribution of variables from a climate model. When applied to climate model projections, changes in quantiles of each variable between the historical and projection period are also preserved. The MBCn algorithm is demonstrated on three case studies. First, the method is applied to an image processing example with characteristics that mimic a climate projection problem. Second, MBCn is used to correct a suite of 3-hourly surface meteorological variables from the Canadian Centre for Climate Modelling and Analysis Regional Climate Model (CanRCM4) across a North American domain. Components of the Canadian Forest Fire Weather Index (FWI) System, a complicated set of multivariate indices that characterizes the risk of wildfire, are then calculated and verified against observed values. Third, MBCn is used to correct biases in the spatial dependence structure of CanRCM4 precipitation fields. Results are compared against a univariate quantile mapping algorithm, which neglects the dependence between variables, and two multivariate bias correction algorithms, each of which corrects a different form of inter-variable correlation structure. MBCn outperforms these alternatives, often by a large margin
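The univariate quantile mapping that MBCn generalizes can be sketched compactly. The routine below is a generic empirical quantile-mapping sketch, not Cannon's MBCn implementation; the data and names are invented for illustration.

```python
import numpy as np

def quantile_map(obs_hist, mod_hist, mod_fut):
    """Empirical quantile mapping: place each model value on the
    model-historical CDF, then invert the observed CDF there."""
    ranks = np.searchsorted(np.sort(mod_hist), mod_fut, side="right") / len(mod_hist)
    ranks = np.clip(ranks, 1e-6, 1 - 1e-6)   # keep quantile levels in (0, 1)
    return np.quantile(obs_hist, ranks)

rng = np.random.default_rng(0)
obs = rng.normal(10.0, 2.0, 5000)            # "observed" climate variable
mod = rng.normal(12.0, 3.0, 5000)            # biased model output
corrected = quantile_map(obs, mod, mod)
print(corrected.mean(), corrected.std())     # both pulled toward the observed moments
```

Applied to a single variable this removes the bias in every quantile; MBCn's contribution is iterating such corrections over random rotations so the full multivariate dependence structure is transferred as well.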
Energy Technology Data Exchange (ETDEWEB)
Bardon, B
1995-01-31
Rapid Thermal Processing (RTP) technology is a delicate field for the control engineer. Its compatibility with single-wafer processing makes it well suited for performing thermal steps in state-of-the-art integrated circuit (IC) manufacturing. Control of the wafer temperature during processing is essential. The main problem with the scalar (SISO) approach is steady-state temperature non-uniformity. A solution to this problem is to vary the spatial distribution of the energy flux radiating to the wafer. One approach to achieving this is the use of a multivariable (MIMO) control law to manipulate the different lamp banks independently. Thermal processes are highly nonlinear and distributed in nature, and these nonlinearities imply variations in the process dynamics. In this work, after physically describing our process about a reference value of power and temperature, we present an off-line identification procedure (with the aim of devising a linear multivariable model) using input/output data for different reference values from real experiments and a multivariable least-squares algorithm. Particular attention is then devoted to determining the structure parameters of the linear model. Based on the linear model, a multivariable PID controller is designed. The controller, coupled with the least-mean-squares identification algorithm, is tested under real conditions. The performance of the MIMO adaptive controller is also evaluated in tracking as well as in regulation. (author) refs.
Calibration of Mine Ventilation Network Models Using the Non-Linear Optimization Algorithm
Directory of Open Access Journals (Sweden)
Guang Xu
2017-12-01
Full Text Available Effective ventilation planning is vital to underground mining. To ensure stable operation of the ventilation system and to avoid airflow disorder, mine ventilation network (MVN) models have been widely used in simulating and optimizing mine ventilation systems. However, one of the challenges for MVN model simulation is that the simulated airflow distribution does not match the measured data. To solve this problem, a simple and effective calibration method is proposed based on a non-linear optimization algorithm. The calibrated model not only brings the simulated airflow distribution into accordance with the on-site measured data, but also keeps the errors of other parameters within a minimum range. The proposed method was then applied to calibrate an MVN model in a real case, built on the basis of ventilation survey results and the Ventsim software. Finally, airflow simulation experiments were carried out with data from before and after calibration, and the results were compared and analyzed. The simulated airflows in the calibrated model agreed much better with the ventilation survey data, which verifies the effectiveness of the calibration method.
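The calibration idea — adjusting model parameters until simulated airflows match survey data — can be illustrated for a single airway governed by Atkinson's square law, p = R·Q². This toy closed-form least-squares fit is far simpler than the paper's non-linear optimization over a full network, and the survey numbers below are invented.

```python
import numpy as np

# Atkinson's square law for a mine airway: p = R * Q**2.
# Calibrate the resistance R so simulated pressure drops match survey data.
def calibrate_resistance(Q, p):
    """Closed-form least-squares estimate of R in p = R*Q**2."""
    Q, p = np.asarray(Q, float), np.asarray(p, float)
    return float(np.sum(p * Q**2) / np.sum(Q**4))

# Hypothetical ventilation-survey measurements (airflow m^3/s, pressure drop Pa)
Q_meas = np.array([10.0, 20.0, 30.0, 40.0])
p_meas = np.array([25.0, 98.0, 228.0, 395.0])   # roughly R = 0.25 N*s^2/m^8

R = calibrate_resistance(Q_meas, p_meas)
print(round(R, 3))  # → 0.248
```

A network-level calibrator does the same thing jointly for every branch, with the optimizer constrained so the corrected resistances stay physically plausible.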
Probabilistic, multi-variate flood damage modelling using random forests and Bayesian networks
Kreibich, Heidi; Schröter, Kai
2015-04-01
Decisions on flood risk management and adaptation are increasingly based on risk analyses. Such analyses are associated with considerable uncertainty, even more if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention recently, they are hardly applied in flood damage assessments. Most of the damage models usually applied in standard practice have in common that complex damaging processes are described by simple, deterministic approaches like stage-damage functions. This presentation will show approaches for probabilistic, multi-variate flood damage modelling on the micro- and meso-scale and discuss their potential and limitations. Reference: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64. Schröter, K., Kreibich, H., Vogel, K., Riggelsen, C., Scherbaum, F., Merz, B. (2014): How useful are complex flood damage models? - Water Resources Research, 50, 4, p. 3378-3395.
Out of lab calibration of a rotating 2D scanner for 3D mapping
Koch, Rainer; Böttcher, Lena; Jahrsdörfer, Maximilian; Maier, Johannes; Trommer, Malte; May, Stefan; Nüchter, Andreas
2017-06-01
Mapping is an essential task in mobile robotics. To fulfil advanced navigation and manipulation tasks, a 3D representation of the environment is required. Applying stereo cameras or time-of-flight (TOF) cameras is one way to achieve this, but unfortunately they suffer from drawbacks which make proper mapping difficult. Therefore, costly 3D laser scanners are applied. An inexpensive way to build a 3D representation is to use a 2D laser scanner and rotate the scan plane around an additional axis. A 3D point cloud acquired with such a custom device consists of multiple 2D line scans, so the scanner pose of each line scan needs to be determined, as well as the parameters resulting from a calibration, in order to generate the 3D point cloud. Using an external sensor system is a common method to determine these calibration parameters, but this is costly and difficult when the robot needs to be calibrated outside the lab. Thus, this work presents a calibration method for a rotating 2D laser scanner. It uses a hardware setup to identify the required calibration parameters. This hardware setup is light, small, and easy to transport, so an out-of-lab calibration is possible. Additionally, a theoretical model was created to test the algorithm and analyse the impact of scanner accuracy. The hardware components of the 3D scanner system are a HOKUYO UTM-30LX-EW 2D laser scanner, a Dynamixel servo-motor, and a control unit. The calibration system consists of a hemisphere with a circular plate mounted in its interior. The algorithm needs to be provided with a dataset from a single rotation of the laser scanner. To achieve a proper calibration result the scanner needs to be located in the middle of the hemisphere. By means of geometric formulas the algorithm determines the individual deviations of the placed laser scanner. In order to minimize errors, the algorithm solves the formulas in an iterative process. First, the calibration algorithm was
Multivariate Birkhoff interpolation
Lorentz, Rudolph A
1992-01-01
The subject of this book is Lagrange, Hermite and Birkhoff (lacunary Hermite) interpolation by multivariate algebraic polynomials. It unifies and extends a new algorithmic approach to this subject which was introduced and developed by G. G. Lorentz and the author. One particularly interesting feature of this algorithmic approach is that it obviates the necessity of finding a formula for the Vandermonde determinant of a multivariate interpolation in order to determine its regularity (such formulas are practically unknown anyway), instead determining regularity through simple geometric manipulations in Euclidean space. Although interpolation is a classical problem, it is surprising how little is known about its basic properties in the multivariate case. The book therefore starts by exploring its fundamental properties and its limitations. The main part of the book is devoted to a complete and detailed elaboration of the new technique. A chapter with an extensive selection of finite elements follows as well a...
Calibration Procedures in Mid Format Camera Setups
Pivnicka, F.; Kemper, G.; Geissler, S.
2012-07-01
A growing number of mid-format cameras are used for aerial surveying projects. To achieve a reliable and geometrically precise result in the photogrammetric workflow, awareness of the sensitive parts is important. The use of direct referencing systems (GPS/IMU), the mounting on a stabilizing camera platform and the specific characteristics of mid-format cameras make a professional setup with various calibration and misalignment operations necessary. An important part is a proper camera calibration. Using aerial images over a well-designed test field with 3D structures and/or different flight altitudes enables the determination of calibration values in the Bingo software, and it will be demonstrated how such a calibration can be performed. The direct referencing device must be mounted to the camera in a solid and reliable way. Besides the mechanical work, especially in mounting the camera beside the IMU, two lever arms have to be measured with mm accuracy: the lever arm from the GPS antenna to the IMU's calibrated centre, and the lever arm from the IMU centre to the camera projection centre. The measurement with a total station is not in itself a difficult task, but the definition of the right centres and the need to use rotation matrices can cause serious accuracy problems. A benefit of small and medium format cameras is that smaller aircraft can also be used, for which a gyro-based stabilized platform is recommended. This means that the IMU must be mounted beside the camera on the stabilizer. The advantage is that the IMU can be used to control the platform; the problematic aspect is that the IMU-to-GPS-antenna lever arm is then floating. In effect, an additional data stream has to be handled: the values of the stabilizer's movement, used to correct the floating lever arm distances. If the post-processing of the GPS-IMU data, taking the floating levers into account, delivers the expected result, the lever arms between IMU and camera can be applied
Simultaneous calibration phantom commission and geometry calibration in cone beam CT
Xu, Yuan; Yang, Shuai; Ma, Jianhui; Li, Bin; Wu, Shuyu; Qi, Hongliang; Zhou, Linghong
2017-09-01
Geometry calibration is a vital step for describing the geometry of a cone beam computed tomography (CBCT) system and is a prerequisite for CBCT reconstruction. In current methods, calibration phantom commission and geometry calibration are divided into two independent tasks. Small errors in ball-bearing (BB) positioning in the phantom-making step will severely degrade the quality of phantom calibration. To solve this problem, we propose an integrated method to simultaneously realize geometry phantom commission and geometry calibration. Instead of assuming the accuracy of the geometry phantom, the integrated method considers BB centers in the phantom as an optimized parameter in the workflow. Specifically, an evaluation phantom and the corresponding evaluation contrast index are used to evaluate geometry artifacts for optimizing the BB coordinates in the geometry phantom. After utilizing particle swarm optimization, the CBCT geometry and BB coordinates in the geometry phantom are calibrated accurately and are then directly used for the next geometry calibration task in other CBCT systems. To evaluate the proposed method, both qualitative and quantitative studies were performed on simulated and realistic CBCT data. The spatial resolution of reconstructed images using dental CBCT can reach up to 15 line pairs cm⁻¹. The proposed method is also superior to the Wiesent method in experiments. This paper shows that the proposed method is attractive for simultaneous and accurate geometry phantom commission and geometry calibration.
A novel calibration method for non-orthogonal shaft laser theodolite measurement system
Energy Technology Data Exchange (ETDEWEB)
Wu, Bin, E-mail: wubin@tju.edu.cn, E-mail: xueting@tju.edu.cn; Yang, Fengting; Ding, Wen [State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Tianjin 300072 (China); Xue, Ting, E-mail: wubin@tju.edu.cn, E-mail: xueting@tju.edu.cn [College of Electrical Engineering and Automation, Tianjin Key Laboratory of Process Measurement and Control, Tianjin University, Tianjin 300072 (China)
2016-03-15
The non-orthogonal shaft laser theodolite (N-theodolite) is a new kind of large-scale metrological instrument made up of two rotary tables and one collimated laser. An N-theodolite has three axes. Following the naming conventions of the traditional theodolite, the rotary axes of the two rotary tables are called the horizontal axis and the vertical axis, respectively, and the collimated laser beam is named the sight axis. The difference from the traditional theodolite is obvious, since the N-theodolite carries no orthogonality or intersection accuracy requirements. The calibration method for the traditional theodolite is therefore no longer suitable for the N-theodolite, while the calibration method currently applied is quite complicated. This paper thus introduces a novel calibration method for the non-orthogonal shaft laser theodolite measurement system that simplifies the procedure and improves the calibration accuracy. The novel method proposes a simple two-step process: calibration of the intrinsic parameters and of the extrinsic parameters. Experiments have shown its efficiency and accuracy.
A High Precision $3.50 Open Source 3D Printed Rain Gauge Calibrator
Lopez Alcala, J. M.; Udell, C.; Selker, J. S.
2017-12-01
Currently available rain gauge calibrators tend to be designed for specific rain gauges, are expensive, employ low-precision water reservoirs, and do not offer the flexibility needed to test the ever more popular small-aperture rain gauges. The objective of this project was to develop and validate a freely downloadable, open-source, 3D printed rain gauge calibrator that can be adjusted for a wide range of gauges. The proposed calibrator provides for applying low, medium, and high intensity flow, and allows the user to modify the design to conform to unique system specifications based on parametric design, which may be modified and printed using CAD software. To overcome the fact that different 3D printers yield different print qualities, we devised a simple post-printing step that controlled critical dimensions to assure robust performance. Specifically, the three orifices of the calibrator are drilled to reach the three target flow rates. Laboratory tests showed that flow rates were consistent between prints, and between trials of each part, while the total applied water was precisely controlled by the use of a volumetric flask as the reservoir.
Wiandt, T. J.
2008-06-01
The Hart Scientific Division of the Fluke Corporation operates two accredited standard platinum resistance thermometer (SPRT) calibration facilities, one at the Hart Scientific factory in Utah, USA, and the other at a service facility in Norwich, UK. The US facility is accredited through National Voluntary Laboratory Accreditation Program (NVLAP), and the UK facility is accredited through UKAS. Both provide SPRT calibrations using similar equipment and procedures, and at similar levels of uncertainty. These uncertainties are among the lowest available commercially. To achieve and maintain low uncertainties, it is required that the calibration procedures be thorough and optimized. However, to minimize customer downtime, it is also important that the instruments be calibrated in a timely manner and returned to the customer. Consequently, subjecting the instrument to repeated calibrations or extensive repeated measurements is not a viable approach. Additionally, these laboratories provide SPRT calibration services involving a wide variety of SPRT designs. These designs behave differently, yet predictably, when subjected to calibration measurements. To this end, an evaluation strategy involving both statistical process control and internal consistency measures is utilized to provide confidence in both the instrument calibration and the calibration process. This article describes the calibration facilities, procedure, uncertainty analysis, and internal quality assurance measures employed in the calibration of SPRTs. Data will be reviewed and generalities will be presented. Finally, challenges and considerations for future improvements will be discussed.
Calibration coefficient of the SSNTD and equilibrium factor for radon
International Nuclear Information System (INIS)
Planinic, J.; Vukovic, B.
1993-01-01
Disintegration, ventilation and deposition were considered as removal processes of radon and its short-lived daughters in air, and the respective concentration equations were applied. The calibration coefficient (K_F) of the solid state nuclear track detector (SSNTD) LR-115 for radon and the equilibrium factor (F) were related to the track densities of the bare detector (D) and the filtered one (D_0). A useful relationship between K_F, F and the detector sensitivity coefficient (k) was derived. Using the calibrated value k = 3.29 × 10⁻³ m, the exposed detectors gave average values of the equilibrium factor, calibration coefficient and indoor radon concentration in a single house living room in Osijek of 0.46, 142.3 m⁻¹ and 37.8 Bq m⁻³, respectively. (author) 4 refs.; 1 fig
Dingari, Narahara Chari; Barman, Ishan; Kang, Jeon Woong; Kong, Chae-Ryon; Dasari, Ramachandra R.; Feld, Michael S.
2011-01-01
While Raman spectroscopy provides a powerful tool for noninvasive and real-time diagnostics of biological samples, its translation to the clinical setting has been impeded by the lack of robustness of spectroscopic calibration models and the size and cumbersome nature of conventional laboratory Raman systems. Linear multivariate calibration models employing full-spectrum analysis are often misled by spurious correlations, such as system drift and covariations among constituents. In addition, such calibration schemes are prone to overfitting, especially in the presence of external interferences that may create nonlinearities in the spectra-concentration relationship. To address both of these issues we incorporate residue-error-plot-based wavelength selection and nonlinear support vector regression (SVR). Wavelength selection is used to eliminate uninformative regions of the spectrum, while SVR is used to model curved effects such as those created by tissue turbidity and temperature fluctuations. Using glucose detection in tissue phantoms as a representative example, we show that even a substantial reduction in the number of wavelengths analyzed using SVR leads to calibration models with prediction accuracy equivalent to linear full-spectrum analysis. Further, with clinical datasets obtained from human subject studies, we also demonstrate the prospective applicability of the selected wavelength subsets without sacrificing prediction accuracy, which has extensive implications for calibration maintenance and transfer. Additionally, such wavelength selection could substantially reduce the collection time of serial Raman acquisition systems. Given the reduced footprint of serial Raman systems in relation to conventional dispersive Raman spectrometers, we anticipate that the incorporation of wavelength selection in such hardware designs will enhance the possibility of miniaturized clinical systems for disease diagnosis in the near future. PMID:21895336
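The two-step idea of wavelength selection followed by regression can be sketched on synthetic spectra. The sketch below ranks channels by correlation with concentration (a crude stand-in for the paper's residue-error-plot criterion) and fits ordinary least squares instead of SVR, purely to keep the example dependency-free; all sizes and band positions are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_wl = 60, 200
# Synthetic "spectra": two informative bands plus noisy, uninformative channels
conc = rng.uniform(0, 1, n_samples)                 # analyte concentration
spectra = rng.normal(0, 0.3, (n_samples, n_wl))
spectra[:, 50] += 5 * conc                          # informative band 1
spectra[:, 120] += 3 * conc                         # informative band 2

# Wavelength selection: keep the channels best correlated with concentration
corr = np.abs([np.corrcoef(spectra[:, j], conc)[0, 1] for j in range(n_wl)])
selected = np.argsort(corr)[-5:]                    # top 5 channels

# Linear calibration on the selected channels (SVR in the paper; OLS here)
X = np.column_stack([spectra[:, selected], np.ones(n_samples)])
coef, *_ = np.linalg.lstsq(X, conc, rcond=None)
rmse = np.sqrt(np.mean((X @ coef - conc) ** 2))
print(sorted(selected), rmse)
```

Even this crude selector recovers the two informative bands, which is the qualitative point the paper makes: discarding uninformative channels need not cost prediction accuracy.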
Bayesian calibration of power plant models for accurate performance prediction
International Nuclear Information System (INIS)
Boksteen, Sowande Z.; Buijtenen, Jos P. van; Pecnik, Rene; Vecht, Dick van der
2014-01-01
Highlights: • Bayesian calibration is applied to power plant performance prediction. • Measurements from a plant in operation are used for model calibration. • A gas turbine performance model and steam cycle model are calibrated. • An integrated plant model is derived. • Part load efficiency is accurately predicted as a function of ambient conditions. - Abstract: Gas turbine combined cycles are expected to play an increasingly important role in the balancing of supply and demand in future energy markets. Thermodynamic modeling of these energy systems is frequently applied to assist in decision making processes related to the management of plant operation and maintenance. In most cases, model inputs, parameters and outputs are treated as deterministic quantities and plant operators make decisions with limited or no regard for uncertainties. As the steady integration of wind and solar energy into the energy market induces extra uncertainties, part load operation and reliability are becoming increasingly important. In the current study, methods are proposed to not only quantify various types of uncertainties in measurements and plant model parameters using measured data, but also to assess their effect on various aspects of performance prediction. The authors aim to account for model parameter and measurement uncertainty, and for systematic discrepancy of models with respect to reality. For this purpose, the Bayesian calibration framework of Kennedy and O’Hagan is used, which is especially suitable for high-dimensional industrial problems. The article derives a calibrated model of the plant efficiency as a function of ambient conditions and operational parameters, which is also accurate in part load. The article shows that complete statistical modeling of power plants not only enhances process models, but can also increase confidence in operational decisions
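A toy version of Bayesian model calibration — far simpler than the Kennedy–O'Hagan framework the paper uses — can illustrate the principle of updating a plant-model parameter from noisy measurements. All numbers and names below are invented for the sketch.

```python
import numpy as np

# Toy Bayesian calibration of one plant-model parameter (efficiency eta):
# model: P_pred = eta * Q_fuel; measurements carry Gaussian noise.
rng = np.random.default_rng(4)
Q = np.linspace(100, 300, 20)                         # fuel heat input, MW (invented)
P_meas = 0.55 * Q + rng.normal(scale=3.0, size=20)    # "plant" data, true eta = 0.55

eta_grid = np.linspace(0.4, 0.7, 601)
# Flat prior; Gaussian likelihood with known noise sigma = 3 MW
loglik = np.array([-0.5 * np.sum((P_meas - e * Q) ** 2) / 3.0**2
                   for e in eta_grid])
post = np.exp(loglik - loglik.max())
post /= post.sum()                                    # normalized grid posterior
eta_map = eta_grid[post.argmax()]
print(eta_map)  # near the true efficiency 0.55
```

The full framework adds a model-discrepancy term and works in many dimensions, but the mechanics — a prior over parameters updated by a measurement likelihood — are the same.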
Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale
Kreibich, Heidi; Schröter, Kai; Merz, Bruno
2016-05-01
Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models usually applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also introduce additional uncertainty, even more so in up-scaling procedures for meso-scale applications, where the parameters need to be estimated on a regional, area-wide basis. To gain more knowledge about the challenges associated with the up-scaling of multi-variable flood loss models, the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities, which were affected during the 2002 flood by the River Mulde in Saxony, Germany, by comparison to official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale, case-study-based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling also on the meso-scale. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models such as BT-FLEMO, used in this study, which inherently provide uncertainty information, is the way forward.
Calibration and intercomparison methods of dose calibrators used in nuclear medicine facilities
International Nuclear Information System (INIS)
Costa, Alessandro Martins da
1999-01-01
Dose calibrators are used in most nuclear medicine facilities to determine the amount of radioactivity administered to a patient in a particular investigation or therapeutic procedure. It is therefore of vital importance that the equipment used presents good performance and is regularly calibrated at an authorized laboratory. This occurs if adequate quality assurance procedures are carried out. Such quality control tests should be performed some daily and others biannually or yearly, testing, for example, accuracy and precision, reproducibility and response linearity. In this work a commercial dose calibrator was calibrated with solutions of radionuclides used in nuclear medicine. Simple instrument tests, such as response linearity and the variation of response with source volume at a constant source activity concentration, were performed. This instrument can now be used as a working standard for the calibration of other dose calibrators. An intercomparison procedure was proposed as a method of quality control of dose calibrators used in nuclear medicine facilities. (author)
Constructing networks from a dynamical system perspective for multivariate nonlinear time series.
Nakamura, Tomomichi; Tanizawa, Toshihiro; Small, Michael
2016-03-01
We describe a method for constructing networks for multivariate nonlinear time series. We approach the interaction between the various scalar time series from a deterministic dynamical system perspective and provide a generic and algorithmic test for whether the interaction between two measured time series is statistically significant. The method can be applied even when the data exhibit no obvious qualitative similarity: a situation in which the naive method utilizing the cross correlation function directly cannot correctly identify connectivity. To establish the connectivity between nodes we apply the previously proposed small-shuffle surrogate (SSS) method, which can investigate whether there are correlation structures in short-term variabilities (irregular fluctuations) between two data sets from the viewpoint of deterministic dynamical systems. The procedure to construct networks based on this idea is composed of three steps: (i) each time series is considered as a basic node of a network, (ii) the SSS method is applied to verify the connectivity between each pair of time series taken from the whole multivariate time series, and (iii) the pair of nodes is connected with an undirected edge when the null hypothesis cannot be rejected. The network constructed by the proposed method indicates the intrinsic (essential) connectivity of the elements included in the system or the underlying (assumed) system. The method is demonstrated for numerical data sets generated by known systems and applied to several experimental time series.
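The three-step procedure above can be sketched with a simplified small-shuffle surrogate and a correlation statistic on first differences. The series, thresholds, and test statistic below are illustrative choices, not the paper's exact formulation.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

def small_shuffle(x, A=1.0):
    """Small-shuffle surrogate: perturb each index by a Gaussian amount,
    destroying short-term structure while keeping the global shape."""
    order = np.argsort(np.arange(len(x)) + A * rng.normal(size=len(x)))
    return x[order]

def connected(x, y, n_surr=200, alpha=0.05):
    """Edge test: is |corr| of the short-term variabilities (first
    differences) outside the surrogate distribution?"""
    dx, dy = np.diff(x), np.diff(y)
    stat = abs(np.corrcoef(dx, dy)[0, 1])
    surr = [abs(np.corrcoef(np.diff(small_shuffle(x)), dy)[0, 1])
            for _ in range(n_surr)]
    return stat > np.quantile(surr, 1 - alpha)

# Three series: s0 drives s1; s2 is an independent random walk
t = 500
s0 = rng.normal(size=t).cumsum()
s1 = s0 + rng.normal(scale=0.2, size=t)
s2 = rng.normal(size=t).cumsum()
series = [s0, s1, s2]

# Steps (i)-(iii): nodes are series; connect pairs the test accepts
edges = [(i, j) for i, j in combinations(range(3), 2)
         if connected(series[i], series[j])]
print(edges)  # the driven pair (0, 1) should appear
```

Each accepted pair becomes an undirected edge, so `edges` is the adjacency list of the constructed network.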
Directory of Open Access Journals (Sweden)
J. Guillén
2000-12-01
Full Text Available The water turbidity measured with optical methods (transmittance and backscattering) is usually expressed as beam attenuation coefficient (BAC) or formazin turbidity units (FTU). The transformation of these units to volumetric suspended sediment concentration (SSC) units is not straightforward, and accurate calibrations are required in order to obtain valuable information on suspended sediment distributions and fluxes. In this paper, data from field calibrations between BAC, FTU and SSC are presented and best-fit calibration curves are shown. These calibrations represent an average from different marine environments of the western Mediterranean (from estuary to continental slope). However, the general curves can only be applied for descriptive or semi-quantitative purposes. Comparison of turbidity measurements using the same sensor with different calibration ranges shows the advantage of simultaneously combining two instruments calibrated in different ranges when significant changes in suspended sediment concentrations are expected.
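A best-fit calibration of this kind can be sketched as a power-law regression in log-log space (the paired FTU/SSC values below are hypothetical, not the paper's field data):

```python
import numpy as np

# Hypothetical paired field samples: optical turbidity (FTU) and
# filtered suspended sediment concentration (mg/l).
ftu = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
ssc = np.array([0.6, 1.3, 2.8, 7.5, 16.0, 35.0])

# Best-fit power-law calibration SSC = a * FTU**b, obtained as a
# straight line in log-log space by ordinary least squares.
b, log_a = np.polyfit(np.log(ftu), np.log(ssc), 1)
a = np.exp(log_a)

def ftu_to_ssc(x):
    """Convert a turbidity reading to an SSC estimate (only
    semi-quantitative, as the abstract cautions for general curves)."""
    return a * x ** b
```

In practice each environment would get its own fitted (a, b), which is exactly why the averaged general curves are only semi-quantitative.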
Calibration method for direct conversion receiver front-ends
Directory of Open Access Journals (Sweden)
R. Müller
2008-05-01
Full Text Available Technology-induced process tolerances in analog circuits cause device characteristics that deviate from the specification. For direct conversion receiver front-ends, a system-level calibration method is presented. The malfunctions of the devices are compensated by tuning dominant circuit parameters. For this purpose, optimization techniques are applied that use measurement values and special evaluation functions.
Calibration models for high enthalpy calorimetric probes.
Kannel, A
1978-07-01
The accuracy of gas-aspirated liquid-cooled calorimetric probes used for measuring the enthalpy of high-temperature gas streams is studied. The error in the differential temperature measurements caused by internal and external heat transfer interactions is considered and quantified by mathematical models. The analysis suggests calibration methods for the evaluation of dimensionless heat transfer parameters in the models, which then can give a more accurate value for the enthalpy of the sample. Calibration models for four types of calorimeters are applied to results from the literature and from our own experiments: a circular slit calorimeter developed by the author, single-cooling jacket probe, double-cooling jacket probe, and split-flow cooling jacket probe. The results show that the models are useful for describing and correcting the temperature measurements.
International Nuclear Information System (INIS)
Waller, W.C.; Cram, M.E.; Hall, J.E.
1975-01-01
For any measurement to have meaning, it must be related to generally accepted standard units by a valid and specified system of comparison. To calibrate well-logging tools, sensing systems are designed which produce consistent and repeatable indications over the range for which the tool was intended. The basics of calibration theory, procedures, and calibration record presentations are reviewed. Calibrations for induction, electrical, radioactivity, and sonic logging tools will be discussed. The authors' intent is to provide an understanding of the sources of errors, of the way errors are minimized in the calibration process, and of the significance of changes in recorded calibration data.
Collision prediction models using multivariate Poisson-lognormal regression.
El-Basyouny, Karim; Sayed, Tarek
2009-07-01
This paper advocates the use of multivariate Poisson-lognormal (MVPLN) regression to develop models for collision count data. The MVPLN approach presents an opportunity to incorporate the correlations across collision severity levels and their influence on safety analyses. The paper introduces a new multivariate hazardous location identification technique, which generalizes the univariate posterior probability of excess that has been commonly proposed and applied in the literature. In addition, the paper presents an alternative approach for quantifying the effect of the multivariate structure on the precision of expected collision frequency. The MVPLN approach is compared with the independent (separate) univariate Poisson-lognormal (PLN) models with respect to model inference, goodness-of-fit, identification of hot spots and precision of expected collision frequency. The MVPLN is modeled using the WinBUGS platform, which facilitates computation of posterior distributions as well as providing a goodness-of-fit measure for model comparisons. The results indicate that the estimates of the extra Poisson variation parameters were considerably smaller under MVPLN, leading to higher precision. The improvement in precision is due mainly to the fact that MVPLN accounts for the correlation between the latent variables representing property damage only (PDO) and injuries plus fatalities (I+F). This correlation was estimated at 0.758, which is highly significant, suggesting that higher PDO rates are associated with higher I+F rates, as the collision likelihood for both types is likely to rise due to similar deficiencies in roadway design and/or other unobserved factors. In terms of goodness-of-fit, the MVPLN model provided a superior fit compared with the independent univariate models. The multivariate hazardous location identification results demonstrated that some hazardous locations could be overlooked if the analysis was restricted to the univariate models.
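The data-generating process behind an MVPLN model can be sketched as follows: correlated lognormal site effects drive conditionally Poisson counts for the two severity levels. The 0.758 latent correlation is the value reported in the abstract; the means and standard deviations are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_mvpln(n_sites, mu, cov):
    """Multivariate Poisson-lognormal data generation: latent site
    effects are multivariate normal on the log scale (correlated across
    severity levels); counts are conditionally Poisson."""
    latent = rng.multivariate_normal(mu, cov, size=n_sites)
    return rng.poisson(np.exp(latent))

# Two severity levels, PDO and I+F, with the latent correlation of
# 0.758 reported in the abstract (means and variances hypothetical).
sd = np.array([0.6, 0.5])
corr = np.array([[1.0, 0.758],
                 [0.758, 1.0]])
cov = corr * np.outer(sd, sd)
counts = simulate_mvpln(5000, mu=[2.0, 0.5], cov=cov)
obs_corr = np.corrcoef(counts[:, 0], counts[:, 1])[0, 1]
```

The observed count correlation is attenuated by Poisson noise relative to the latent 0.758, which is why fitting the latent structure (rather than correlating raw counts) matters.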
A direct-gradient multivariate index of biotic condition
Miranda, Leandro E.; Aycock, J.N.; Killgore, K. J.
2012-01-01
Multimetric indexes constructed by summing metric scores have been criticized despite many of their merits. A leading criticism is the potential for investigator bias involved in metric selection and scoring. Often there is a large number of competing metrics equally well correlated with environmental stressors, requiring a judgment call by the investigator to select the most suitable metrics to include in the index and how to score them. Data-driven procedures for multimetric index formulation published during the last decade have reduced this limitation, yet apprehension remains. Multivariate approaches that select metrics with statistical algorithms may reduce the level of investigator bias and alleviate a weakness of multimetric indexes. We investigated the suitability of a direct-gradient multivariate procedure to derive an index of biotic condition for fish assemblages in oxbow lakes in the Lower Mississippi Alluvial Valley. Although this multivariate procedure also requires that the investigator identify a set of suitable metrics potentially associated with a set of environmental stressors, it is different from multimetric procedures because it limits investigator judgment in selecting a subset of biotic metrics to include in the index and because it produces metric weights suitable for computation of index scores. The procedure, applied to a sample of 35 competing biotic metrics measured at 50 oxbow lakes distributed over a wide geographical region in the Lower Mississippi Alluvial Valley, selected 11 metrics that adequately indexed the biotic condition of five test lakes. Because the multivariate index includes only metrics that explain the maximum variability in the stressor variables rather than a balanced set of metrics chosen to reflect various fish assemblage attributes, it is fundamentally different from multimetric indexes of biotic integrity with advantages and disadvantages. As such, it provides an alternative to multimetric procedures.
Estimating uncertainty in multivariate responses to selection.
Stinchcombe, John R; Simonsen, Anna K; Blows, Mark W
2014-04-01
Predicting the responses to natural selection is one of the key goals of evolutionary biology. Two of the challenges in fulfilling this goal have been the realization that many estimates of natural selection might be highly biased by environmentally induced covariances between traits and fitness, and that many estimated responses to selection do not incorporate or report uncertainty in the estimates. Here we describe the application of a framework that blends the merits of the Robertson-Price Identity approach and the multivariate breeder's equation to address these challenges. The approach allows genetic covariance matrices, selection differentials, selection gradients, and responses to selection to be estimated without environmentally induced bias, direct and indirect selection and responses to selection to be distinguished, and if implemented in a Bayesian-MCMC framework, statistically robust estimates of uncertainty on all of these parameters to be made. We illustrate our approach with a worked example of previously published data. More generally, we suggest that applying both the Robertson-Price Identity and the multivariate breeder's equation will facilitate hypothesis testing about natural selection, genetic constraints, and evolutionary responses. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.
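The multivariate breeder's equation at the heart of this framework can be illustrated with a small worked example (hypothetical G matrix and selection gradients):

```python
import numpy as np

# Hypothetical additive genetic (co)variance matrix G for two traits
# and directional selection gradients beta.
G = np.array([[0.50, -0.20],
              [-0.20, 0.30]])
beta = np.array([0.40, 0.10])

# Predicted per-generation response to selection: delta_z = G @ beta.
delta_z = G @ beta
# Trait 2 is under positive direct selection (beta[1] > 0) yet its
# predicted response is negative, because the negative genetic
# covariance with trait 1 acts as an evolutionary constraint.
```

This is exactly the kind of direct-versus-indirect selection distinction the framework is designed to expose, with the Bayesian-MCMC layer adding credible intervals around G, beta, and delta_z.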
Semi-empirical neutron tool calibration (one and two-group approximation)
International Nuclear Information System (INIS)
Czubek, J.A.
1988-01-01
The physical principles of the new method of calibration of neutron tools for the rock porosity determination are given. A short description of the physics of neutron transport in the matter is presented together with some remarks on the elementary interactions of neutrons with nuclei (cross sections, group cross sections etc.). The definitions of the main integral parameters characterizing the neutron transport in the rock media are given. The three main approaches to the calibration problem: empirical, theoretical and semi-empirical are presented with some more detailed description of the latter one. The new semi-empirical approach is described. The method is based on the definition of the apparent slowing down or migration length for neutrons sensed by the neutron tool situated in the real borehole-rock conditions. To calculate this apparent slowing down or migration lengths the ratio of the proper space moments of the neutron distribution along the borehole axis is used. Theoretical results are given for one- and two-group diffusion approximations in the rock-borehole geometrical conditions when the tool is in the sidewall position. The physical and chemical parameters are given for the calibration blocks of the Logging Company in Zielona Gora. Using these data the neutron parameters of the calibration blocks have been calculated. An example, how to determine the calibration curve for the dual detector tool applying this new method and using the neutron parameters mentioned above together with the measurements performed in the calibration blocks, is given. The most important advantage of the new semi-empirical method of calibration is the possibility of setting on the unique calibration curve all experimental calibration data obtained for a given neutron tool for different porosities, lithologies and borehole diameters. 52 refs., 21 figs., 21 tabs. (author)
Self-calibration of a cone-beam micro-CT system
International Nuclear Information System (INIS)
Patel, V.; Chityala, R. N.; Hoffmann, K. R.; Ionita, C. N.; Bednarek, D. R.; Rudin, S.
2009-01-01
Use of cone-beam computed tomography (CBCT) is becoming more frequent. For proper reconstruction, the geometry of the CBCT systems must be known. While the system can be designed to reduce errors in the geometry, calibration measurements must still be performed and corrections applied. Investigators have proposed techniques using calibration objects for system calibration. In this study, the authors present methods to calibrate a rotary-stage CB micro-CT (CBμCT) system using only the images acquired of the object to be reconstructed, i.e., without the use of calibration objects. Projection images are acquired using a CBμCT system constructed in the authors' laboratories. Dark- and flat-field corrections are performed. Exposure variations are detected and quantified using analysis of image regions with an unobstructed view of the x-ray source. Translations that occur during the acquisition in the horizontal direction are detected, quantified, and corrected based on sinogram analysis. The axis of rotation is determined using registration of antiposed projection images. These techniques were evaluated using data obtained with calibration objects and phantoms. The physical geometric axis of rotation is determined and aligned with the rotational axis (assumed to be the center of the detector plane) used in the reconstruction process. The parameters describing this axis agree to within 0.1 mm and 0.3 deg with those determined using other techniques. Blurring due to residual calibration errors has a point-spread function in the reconstructed planes with a full-width-at-half-maximum of less than 125 μm in a tangential direction and essentially zero in the radial direction for the rotating object. The authors have used this approach on over 100 acquisitions over the past 2 years and have regularly obtained high-quality reconstructions, i.e., without artifacts and no detectable blurring of the reconstructed objects. This self-calibrating approach not only obviates
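The antiposed-projection idea for locating the rotation axis can be sketched in parallel-beam geometry (synthetic one-row projections; the authors' registration on real cone-beam data is more involved):

```python
import numpy as np

def axis_offset(p0, p180):
    """Estimate the horizontal offset of the rotation axis from the
    detector centre: the 180-degree projection is the mirror image of
    the 0-degree one about the axis, so the shift that best aligns p0
    with the flipped p180 equals twice the offset."""
    q = p180[::-1]
    n = len(p0)
    best = min(range(-n // 4, n // 4 + 1),
               key=lambda s: float(np.sum((p0 - np.roll(q, s)) ** 2)))
    return best / 2.0

# Synthetic check (hypothetical numbers): rotation axis at column 67 of
# a 128-pixel detector whose centre is at (128 - 1) / 2 = 63.5.
u = np.arange(128)
p0 = np.exp(-((u - 70.0) ** 2) / 40.0)    # object seen at column 70
p180 = np.exp(-((u - 64.0) ** 2) / 40.0)  # mirrored about the axis: 2*67 - 70
offset = axis_offset(p0, p180)            # expected offset: 67 - 63.5 = 3.5
```

Shifting the sinogram by the recovered offset before reconstruction removes the characteristic tuning-fork artifacts of a miscentred axis.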
Calibration of a Plastic Classification System with the Ccw Model
International Nuclear Information System (INIS)
Barcala Riveira, J. M.; Fernandez Marron, J. L.; Alberdi Primicia, J.; Navarrete Marin, J. J.; Oller Gonzalez, J. C.
2003-01-01
This document describes the calibration of a plastic classification system with the Ccw model (Classification by Quantum's built with Wavelet Coefficients). The method is applied to spectra of plastics usually present in domestic wastes. The results obtained are shown. (Author) 16 refs
DEFF Research Database (Denmark)
Van Daele, Timothy; Gernaey, Krist V.; Ringborg, Rolf Hoffmeyer
2017-01-01
The aim of model calibration is to estimate unique parameter values from available experimental data, here applied to a biocatalytic process. The traditional approach of first gathering data followed by performing a model calibration is inefficient, since the information gathered during experimentation is not actively used to optimise the experimental design. By applying an iterative robust model-based optimal experimental design, the limited amount of data collected is used to design additional informative experiments. The algorithm is used here to calibrate the initial reaction rate of an ω-transaminase catalysed reaction in a more accurate way. The parameter confidence region estimated from the Fisher Information Matrix is compared with the likelihood confidence region, which is a more accurate, but also a computationally more expensive method. As a result, an important deviation between both approaches...
Multivariate refined composite multiscale entropy analysis
International Nuclear Information System (INIS)
Humeau-Heurtier, Anne
2016-01-01
Multiscale entropy (MSE) has become a prevailing method to quantify the complexity of signals. MSE relies on sample entropy. However, MSE may yield imprecise complexity estimation at large scales, because sample entropy does not give a precise estimation of entropy when short signals are processed. A refined composite multiscale entropy (RCMSE) has therefore recently been proposed. Nevertheless, RCMSE is for univariate signals only. The simultaneous analysis of multi-channel (multivariate) data often outperforms studies based on univariate signals. We therefore introduce an extension of RCMSE to multivariate data. Applications of multivariate RCMSE to simulated processes reveal its better performance over the standard multivariate MSE. - Highlights: • Multiscale entropy quantifies data complexity but may be inaccurate at large scale. • A refined composite multiscale entropy (RCMSE) has therefore recently been proposed. • Nevertheless, RCMSE is adapted to univariate time series only. • We herein introduce an extension of RCMSE to multivariate data. • It shows better performances than the standard multivariate multiscale entropy.
Directory of Open Access Journals (Sweden)
Pozhitkov Alexander E
2010-07-01
Full Text Available Abstract Background Calibration of a microarray scanner is critical for accurate interpretation of microarray results. Shi et al. (BMC Bioinformatics, 2005, 6, Art. No. S11, Suppl. 2) reported usage of a Full Moon BioSystems slide for calibration. Inspired by the Shi et al. work, we have calibrated microarray scanners in our previous research. We were puzzled, however, that most of the signal intensities from a biological sample fell below the sensitivity threshold level determined by the calibration slide. This conundrum led us to re-investigate the quality of calibration provided by the Full Moon BioSystems slide as well as the accuracy of the analysis performed by Shi et al. Methods Signal intensities were recorded on three different microarray scanners at various photomultiplier gain levels using the same calibration slide from Full Moon BioSystems. Data analysis was conducted on raw signal intensities without normalization or transformation of any kind. A weighted least-squares method was used to fit the data. Results We found that the initial analysis performed by Shi et al. did not take into account autofluorescence of the Full Moon BioSystems slide, which led to a grossly distorted microarray scanner response. Our analysis revealed that a power-law function, explicitly accounting for the slide autofluorescence, perfectly described the relationship between signal intensities and fluorophore quantities. Conclusions Microarray scanners respond in a much less distorted fashion than was reported by Shi et al. Full Moon BioSystems calibration slides are inadequate for performing calibration. We recommend against using these slides.
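The reported power-law-plus-autofluorescence relationship can be sketched with a simple fit (synthetic noiseless data; the scan-and-fit procedure below is an illustrative estimator, not the authors' weighted least-squares implementation):

```python
import numpy as np

def fit_power_law_with_background(Q, I):
    """Fit I = a * Q**b + c by scanning candidate backgrounds c and, for
    each, fitting log(I - c) against log(Q) by least squares; the c with
    the smallest residual wins.  Returns (a, b, c)."""
    best = None
    for c in np.linspace(0.0, 0.99 * I.min(), 200):
        y = np.log(I - c)
        b, log_a = np.polyfit(np.log(Q), y, 1)
        resid = float(np.sum((y - (b * np.log(Q) + log_a)) ** 2))
        if best is None or resid < best[0]:
            best = (resid, np.exp(log_a), b, c)
    return best[1:]

# Synthetic scanner response with an autofluorescence floor: ignoring
# the background c would badly distort the apparent response at low Q.
Q = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])  # fluorophore quantity
I = 50.0 * Q ** 0.9 + 200.0                            # c = 200 is the background
a_hat, b_hat, c_hat = fit_power_law_with_background(Q, I)
```

Fitting a straight log-log line without subtracting c is exactly the mistake the abstract attributes to the earlier analysis: the background flattens the low-intensity end and distorts the apparent exponent.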
Multivariate Generalized Multiscale Entropy Analysis
Directory of Open Access Journals (Sweden)
Anne Humeau-Heurtier
2016-11-01
Full Text Available Multiscale entropy (MSE) was introduced in the 2000s to quantify systems’ complexity. MSE relies on (i) a coarse-graining procedure to derive a set of time series representing the system dynamics on different time scales; (ii) the computation of the sample entropy for each coarse-grained time series. A refined composite MSE (rcMSE)—based on the same steps as MSE—also exists. Compared to MSE, rcMSE increases the accuracy of entropy estimation and reduces the probability of inducing undefined entropy for short time series. The multivariate versions of MSE (MMSE) and rcMSE (MrcMSE) have also been introduced. In the coarse-graining step used in MSE, rcMSE, MMSE, and MrcMSE, the mean value is used to derive representations of the original data at different resolutions. A generalization of MSE was recently published, using the computation of different moments in the coarse-graining procedure. However, so far, this generalization only exists for univariate signals. We therefore herein propose an extension of this generalized MSE to multivariate data. The multivariate generalized algorithms of MMSE and MrcMSE presented herein (MGMSE and MGrcMSE, respectively) are first analyzed through the processing of synthetic signals. We reveal that MGrcMSE shows better performance than MGMSE for short multivariate data. We then study the performance of MGrcMSE on two sets of short multivariate electroencephalograms (EEG) available in the public domain. We report that MGrcMSE may show better performance than MrcMSE in distinguishing different types of multivariate EEG data. MGrcMSE could therefore supplement MMSE or MrcMSE in the processing of multivariate datasets.
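The coarse-graining step that all of these MSE variants share, and that the generalized versions modify by changing the summarising moment, can be sketched as follows (illustrative implementation):

```python
import numpy as np

def coarse_grain(x, scale, moment="mean"):
    """Split the series into non-overlapping windows of length `scale`
    and summarise each window by a moment: the mean reproduces the
    classical MSE coarse-graining, while the variance gives the kind of
    generalized coarse-graining discussed above."""
    n = (len(x) // scale) * scale
    w = np.asarray(x[:n]).reshape(-1, scale)
    return w.mean(axis=1) if moment == "mean" else w.var(axis=1)

x = np.arange(10, dtype=float)
means = coarse_grain(x, 2)             # [0.5, 2.5, 4.5, 6.5, 8.5]
variances = coarse_grain(x, 2, "var")  # [0.25, 0.25, 0.25, 0.25, 0.25]
```

In the multivariate algorithms this coarse-graining is applied channel by channel before a multivariate sample entropy is computed across channels.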
Using Active Learning for Speeding up Calibration in Simulation Models.
Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan
2016-07-01
Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. © The Author(s) 2015.
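The core idea, evaluating only the parameter combinations a surrogate learner deems promising, can be sketched as follows. A toy nearest-accepted-combination heuristic stands in for the paper's neural-network learner, and the model, pool size, and batch sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def run_model(params):
    """Stand-in for one expensive simulation run: the combination is
    'acceptable' when it lies near a hidden target (hypothetical)."""
    return np.linalg.norm(params - np.array([0.3, 0.7])) < 0.1

pool = rng.random((2000, 2))  # candidate parameter combinations

# Seed with a small random batch, then let the surrogate (distance to
# the nearest accepted combination) choose each subsequent batch.
evaluated = {int(i): run_model(pool[i])
             for i in rng.choice(len(pool), 50, replace=False)}
for _ in range(10):
    cand = [i for i in range(len(pool)) if i not in evaluated]
    hits = [i for i, accepted in evaluated.items() if accepted]
    if hits:
        d = np.min(np.linalg.norm(pool[cand][:, None] - pool[hits][None],
                                  axis=2), axis=1)
        batch = [cand[j] for j in np.argsort(d)[:20]]
    else:
        batch = [cand[j] for j in rng.choice(len(cand), 20, replace=False)]
    evaluated.update({int(i): run_model(pool[i]) for i in batch})

n_evaluated, n_hits = len(evaluated), sum(evaluated.values())
```

Only 250 of the 2000 candidates are ever simulated, mirroring the paper's reduction from 378 000 to 5620 evaluated combinations.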
Directory of Open Access Journals (Sweden)
Jonas eKaplan
2015-03-01
Full Text Available Here we highlight an emerging trend in the use of machine learning classifiers to test for abstraction across patterns of neural activity. When a classifier algorithm is trained on data from one cognitive context, and tested on data from another, conclusions can be drawn about the role of a given brain region in representing information that abstracts across those cognitive contexts. We call this kind of analysis Multivariate Cross-Classification (MVCC), and review several domains where it has recently made an impact. MVCC has been important in establishing correspondences among neural patterns across cognitive domains, including motor-perception matching and cross-sensory matching. It has been used to test for similarity between neural patterns evoked by perception and those generated from memory. Other work has used MVCC to investigate the similarity of representations for semantic categories across different kinds of stimulus presentation, and in the presence of different cognitive demands. We use these examples to demonstrate the power of MVCC as a tool for investigating neural abstraction and discuss some important methodological issues related to its application.
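A minimal numerical sketch of MVCC (toy data; a nearest-centroid classifier stands in for whatever classifier a given study would use): train on patterns from one cognitive context and test on patterns from another.

```python
import numpy as np

rng = np.random.default_rng(7)

def fit_centroids(X, y):
    """Nearest-centroid training: one mean pattern per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    labels = list(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in labels])
    return np.array(labels)[np.argmin(d, axis=0)]

# Toy "neural" patterns: two stimulus classes share a pattern axis
# across two cognitive contexts, plus independent noise per context.
n_trials, n_voxels = 100, 20
axis = rng.standard_normal(n_voxels)

def make_context():
    y = rng.integers(0, 2, n_trials)
    X = np.outer(2 * y - 1, axis) + rng.standard_normal((n_trials, n_voxels))
    return X, y

X_a, y_a = make_context()   # e.g. perception trials
X_b, y_b = make_context()   # e.g. imagery trials

# MVCC: train in context A, test in context B; above-chance accuracy
# suggests a representation that abstracts across the two contexts.
acc = float(np.mean(predict(fit_centroids(X_a, y_a), X_b) == y_b))
```

If the class-discriminative pattern were not shared across contexts, cross-classification accuracy would fall to chance even when within-context decoding succeeds.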
Multivariate tensor-based brain anatomical surface morphometry via holomorphic one-forms.
Wang, Yalin; Chan, Tony F; Toga, Arthur W; Thompson, Paul M
2009-01-01
Here we introduce multivariate tensor-based surface morphometry using holomorphic one-forms to study brain anatomy. We computed new statistics from the Riemannian metric tensors that retain the full information in the deformation tensor fields. We introduce two different holomorphic one-forms that induce different surface conformal parameterizations. We applied this framework to 3D MRI data to analyze hippocampal surface morphometry in Alzheimer's Disease (AD; 26 subjects), lateral ventricular surface morphometry in HIV/AIDS (19 subjects) and cortical surface morphometry in Williams Syndrome (WS; 80 subjects). Experimental results demonstrated that our method powerfully detected brain surface abnormalities. Multivariate statistics on the local tensors outperformed other TBM methods including analysis of the Jacobian determinant, the largest eigenvalue, or the pair of eigenvalues, of the surface Jacobian matrix.
Effect Sizes for Research Univariate and Multivariate Applications
Grissom, Robert J
2011-01-01
Noted for its comprehensive coverage, this greatly expanded new edition now covers the use of univariate and multivariate effect sizes. Many measures and estimators are reviewed along with their application, interpretation, and limitations. Noted for its practical approach, the book features numerous examples using real data for a variety of variables and designs, to help readers apply the material to their own data. Tips on the use of SPSS, SAS, R, and S-Plus are provided. The book's broad disciplinary appeal results from its inclusion of a variety of examples from psychology, medicine, and education.
Ulbrich, N.; Volden, T.
2018-01-01
Analysis and use of temperature-dependent wind tunnel strain-gage balance calibration data are discussed in the paper. First, three different methods are presented and compared that may be used to process temperature-dependent strain-gage balance data. The first method uses an extended set of independent variables in order to process the data and predict balance loads. The second method applies an extended load iteration equation during the analysis of balance calibration data. The third method uses temperature-dependent sensitivities for the data analysis. Physical interpretations of the most important temperature-dependent regression model terms are provided that relate temperature compensation imperfections and the temperature-dependent nature of the gage factor to sets of regression model terms. Finally, balance calibration recommendations are listed so that temperature-dependent calibration data can be obtained and successfully processed using the reviewed analysis methods.
Use of GPS TEC Maps for Calibrating Single Band VLBI Sessions
Gordon, David
2010-01-01
GPS TEC ionosphere maps were first applied to a series of K and Q band VLBA astrometry sessions to try to eliminate a declination bias in estimated source positions. Their usage has been expanded to calibrate X-band only VLBI observations as well. At K-band, approx. 60% of the declination bias appears to be removed with the application of GPS ionosphere calibrations. At X-band however, it appears that up to 90% or more of the declination bias is removed, with a corresponding increase in RA and declination uncertainties of approx. 0.5 mas. GPS ionosphere calibrations may be very useful for improving the estimated positions of the X-only and S-only sources in the VCS and RDV sessions.
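The correction rests on the first-order dispersive delay of the ionosphere, which a TEC map supplies directly for any single-band frequency. A minimal sketch (the TEC value is hypothetical):

```python
# First-order dispersive ionospheric delay: the extra path length in
# metres is approximately 40.3 * TEC / f**2, so a GPS-derived TEC map
# directly yields a per-frequency calibration for single-band data.
K = 40.3  # m^3 s^-2 per electron/m^2 (standard first-order constant)

def iono_delay_m(tec_el_m2, freq_hz):
    return K * tec_el_m2 / freq_hz ** 2

TEC = 30e16  # 30 TEC units, a plausible daytime value (hypothetical)
delays = {band: iono_delay_m(TEC, f)
          for band, f in [("S", 2.3e9), ("X", 8.4e9), ("K", 24e9)]}
# The 1/f^2 scaling is why the higher-frequency K band is intrinsically
# less affected than X band, consistent with the biases noted above.
```

For a slant path the vertical TEC from the map would additionally be scaled by an obliquity factor; that mapping step is omitted here.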
Robust multi-objective calibration strategies – possibilities for improving flood forecasting
Directory of Open Access Journals (Sweden)
G. H. Schmitz
2012-10-01
Full Text Available Process-oriented rainfall-runoff models are designed to approximate the complex hydrologic processes within a specific catchment and in particular to simulate the discharge at the catchment outlet. Most of these models exhibit a high degree of complexity and require the determination of various parameters by calibration. Recently, automatic calibration methods became popular in order to identify parameter vectors with high corresponding model performance. The model performance is often assessed by a purpose-oriented objective function. Practical experience suggests that in many situations one single objective function cannot adequately describe the model's ability to represent any aspect of the catchment's behaviour. This is regardless of whether the objective is aggregated of several criteria that measure different (possibly opposite) aspects of the system behaviour. One strategy to circumvent this problem is to define multiple objective functions and to apply a multi-objective optimisation algorithm to identify the set of Pareto optimal or non-dominated solutions. Nonetheless, there is a major disadvantage of automatic calibration procedures that understand the problem of model calibration just as the solution of an optimisation problem: due to the complex-shaped response surface, the estimated solution of the optimisation problem can result in different near-optimum parameter vectors that can lead to a very different performance on the validation data. Bárdossy and Singh (2008) studied this problem for single-objective calibration problems using the example of hydrological models and proposed a geometrical sampling approach called Robust Parameter Estimation (ROPE). This approach applies the concept of data depth in order to overcome the shortcomings of automatic calibration procedures and find a set of robust parameter vectors. Recent studies confirmed the effectivity of this method. However, all ROPE approaches published so far just identify
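The set of Pareto optimal (non-dominated) parameter vectors mentioned above can be computed with a direct pairwise check (hypothetical two-objective calibration costs):

```python
import numpy as np

def pareto_front(costs):
    """Return indices of non-dominated points (all objectives are
    minimized): a parameter vector is Pareto optimal if no other vector
    is at least as good in every objective and strictly better in one."""
    idx = []
    for i, c in enumerate(costs):
        dominated = any(np.all(d <= c) and np.any(d < c)
                        for j, d in enumerate(costs) if j != i)
        if not dominated:
            idx.append(i)
    return idx

# Hypothetical calibration results: two objective functions (e.g. peak
# flow error and volume error) for five parameter vectors.
costs = np.array([[0.20, 0.90],
                  [0.35, 0.40],
                  [0.50, 0.35],
                  [0.60, 0.60],   # dominated by [0.35, 0.40]
                  [0.90, 0.10]])
front = pareto_front(costs)
```

Multi-objective algorithms such as NSGA-II maintain and refine an approximation of exactly this set; the ROPE idea then looks for parameter vectors that are deep within (not just on the edge of) the good-performance region.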
Paik, Daehwa; Miyahara, Masaya; Matsuzawa, Akira
This paper analyzes a pseudo-differential dynamic comparator with a dynamic pre-amplifier. The transient gain of a dynamic pre-amplifier is derived and applied to equations for the thermal noise and the regeneration time of a comparator. This analysis enhances understanding of the roles of the transistor parameters in the pre-amplifier's gain. Based on the calculated gain, two calibration methods are also analyzed. One is calibration of a load capacitance and the other is calibration of a bypass current. The analysis helps designers estimate the accuracy of calibration, the dead-zone of a comparator with a calibration circuit, and the influence of PVT variation. The analyzed comparator uses 90-nm CMOS technology as an example and each estimation is compared with simulation results.
Energy Technology Data Exchange (ETDEWEB)
Sreckovic, G.; Hall, E.R. [British Columbia Univ., Dept. of Civil Engineering, Vancouver, BC (Canada); Thibault, J. [Laval Univ., Dept. of Chemical Engineering, Ste-Foy, PQ (Canada); Savic, D. [Exeter Univ., School of Engineering, Exeter (United Kingdom)
1999-05-01
The issue of proper model calibration techniques applied to mechanistic mathematical models relating to activated sludge systems was discussed. Such calibrations are complex because of the non-linearity and multi-modal objective functions of the process. This paper presents a hybrid model which was developed using two techniques to model and calibrate secondary clarifier parts of an activated sludge system. Genetic algorithms were used to successfully calibrate the settler mechanistic model, and neural networks were used to reduce the error between the mechanistic model output and real world data. Results of the modelling study show that the long-term response of a one-dimensional settler mechanistic model calibrated by genetic algorithms and compared to full-scale plant data can be improved by coupling the calibrated mechanistic model to a black-box model, such as a neural network. 11 refs., 2 figs.
Directory of Open Access Journals (Sweden)
Carlos E. Galván-Tejada
2017-02-01
Full Text Available Breast cancer is an important global health problem, and the most common type of cancer among women. Late diagnosis significantly decreases the survival rate of the patient; however, using mammography for early detection has been demonstrated to be a very important tool increasing the survival rate. The purpose of this paper is to obtain a multivariate model to classify benign and malignant tumor lesions using a computer-assisted diagnosis with a genetic algorithm in training and test datasets from mammography image features. A multivariate search was conducted to obtain predictive models with different approaches, in order to compare and validate results. The multivariate models were constructed using: Random Forest, Nearest centroid, and K-Nearest Neighbor (K-NN) strategies as cost function in a genetic algorithm applied to the features in the BCDR public databases. Results suggest that the two texture descriptor features obtained in the multivariate model have a similar or better prediction capability to classify the data outcome compared with the multivariate model composed of all the features, according to their fitness value. This model can help to reduce the workload of radiologists and present a second opinion in the classification of tumor lesions.
Galván-Tejada, Carlos E; Zanella-Calzada, Laura A; Galván-Tejada, Jorge I; Celaya-Padilla, José M; Gamboa-Rosales, Hamurabi; Garza-Veloz, Idalia; Martinez-Fierro, Margarita L
2017-02-14
Breast cancer is an important global health problem, and the most common type of cancer among women. Late diagnosis significantly decreases the survival rate of the patient; however, using mammography for early detection has been demonstrated to be a very important tool increasing the survival rate. The purpose of this paper is to obtain a multivariate model to classify benign and malignant tumor lesions using a computer-assisted diagnosis with a genetic algorithm in training and test datasets from mammography image features. A multivariate search was conducted to obtain predictive models with different approaches, in order to compare and validate results. The multivariate models were constructed using: Random Forest, Nearest centroid, and K-Nearest Neighbor (K-NN) strategies as cost function in a genetic algorithm applied to the features in the BCDR public databases. Results suggest that the two texture descriptor features obtained in the multivariate model have a similar or better prediction capability to classify the data outcome compared with the multivariate model composed of all the features, according to their fitness value. This model can help to reduce the workload of radiologists and present a second opinion in the classification of tumor lesions.
Schillinger, Dominik
2013-07-01
The method of separation can be used as a non-parametric estimation technique, especially suitable for evolutionary spectral density functions of uniformly modulated and strongly narrow-band stochastic processes. The paper at hand provides a consistent derivation of method-of-separation-based spectrum estimation for the general multi-variate and multi-dimensional case. The validity of the method is demonstrated by benchmark tests with uniformly modulated spectra, for which convergence to the analytical solution is demonstrated. The key advantage of the method of separation is the minimization of spectral dispersion due to optimum time- or space-frequency localization. This is illustrated by the calibration of multi-dimensional and multi-variate geometric imperfection models from strongly narrow-band measurements in I-beams and cylindrical shells. Finally, the application of the method-of-separation-based estimates for the stochastic buckling analysis of the example structures is briefly discussed. © 2013 Elsevier Ltd.
CubiCal - Fast radio interferometric calibration suite exploiting complex optimisation
Kenyon, J. S.; Smirnov, O. M.; Grobler, T. L.; Perkins, S. J.
2018-05-01
It has recently been shown that radio interferometric gain calibration can be expressed succinctly in the language of complex optimisation. In addition to providing an elegant framework for further development, it exposes properties of the calibration problem which can be exploited to accelerate traditional non-linear least squares solvers such as Gauss-Newton and Levenberg-Marquardt. We extend existing derivations to chains of Jones terms: products of several gains which model different aberrant effects. In doing so, we find that the useful properties found in the single term case still hold. We also develop several specialised solvers which deal with complex gains parameterised by real values. The newly developed solvers have been implemented in a Python package called CubiCal, which uses a combination of Cython, multiprocessing and shared memory to leverage the power of modern hardware. We apply CubiCal to both simulated and real data, and perform both direction-independent and direction-dependent self-calibration. Finally, we present the results of some rudimentary profiling to show that CubiCal is competitive with respect to existing calibration tools such as MeqTrees.
Effectiveness of Multivariate Time Series Classification Using Shapelets
Directory of Open Access Journals (Sweden)
A. P. Karpenko
2015-01-01
Full Text Available Typically, time series classifiers require signal pre-processing (filtering of noise, artifact removal, etc.), enhancement of signal features (amplitude, frequency, spectrum, etc.), and classification of the signal features using classical multivariate classification algorithms. We consider a method of classifying time series that does not require enhancement of the signal features. The method uses shapelets of the time series (time series shapelets), i.e. small fragments of the series that most strongly reflect the properties of one of its classes. Despite the significant number of publications on the theory and application of shapelets for classification of time series, the task of evaluating the effectiveness of this technique remains relevant. The objective of this publication is to study the effectiveness of a number of modifications of the original shapelet method as applied to multivariate series classification, which is a little-studied problem. The paper presents the problem statement of multivariate time series classification using shapelets and describes the basic shapelet-based method of binary classification, as well as various generalizations and a proposed modification of the method. It also offers software that implements the modified method, and results of computational experiments confirming the effectiveness of the algorithmic and software solutions. The paper shows that the modified method and its software allow us to reach a classification accuracy of about 85% at best. The shapelet search time increases in proportion to the input data dimension.
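The core shapelet primitive described above can be sketched in a few lines (toy series, shapelet, and threshold are illustrative, not from the paper): the distance from a shapelet to a series is the minimum Euclidean distance over all alignments, and binary classification thresholds that distance.

```python
# Distance between a shapelet and a time series: minimum Euclidean distance
# over all alignments of the shapelet inside the series.
def shapelet_distance(series, shapelet):
    m = len(shapelet)
    return min(
        sum((series[i + j] - shapelet[j]) ** 2 for j in range(m)) ** 0.5
        for i in range(len(series) - m + 1)
    )

# Binary classification: a series belongs to the "spike" class if it
# contains a sub-sequence close to the spike shapelet.
spike_shapelet = [0.0, 1.0, 0.0]
threshold = 0.5

def classify(series):
    return shapelet_distance(series, spike_shapelet) < threshold

assert classify([0.0, 0.0, 1.0, 0.0, 0.0]) is True
assert classify([0.1, 0.1, 0.1, 0.1, 0.1]) is False
```

In the full method the shapelet and threshold are learned by searching candidate sub-sequences, which is why the search time grows with input dimension.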
Laboratory for the Dosimetric Equipment Calibration at the Institute of Nuclear Physics in Cracow
International Nuclear Information System (INIS)
Bilski, P.; Budzanowski, M.; Horwacik, T.; Marczewska, B.; Nowak, T.; Olko, P.; Ryba, E.; Zbroja, K.
2000-12-01
A new calibration laboratory has been developed at the INP, Cracow, Poland. The laboratory is located in a hall of dimensions 9 m (length) x 4 m (width) x 4.5 m (height). For calibration purposes a Cs-137 source of activity 185 GBq (5 Ci) is applied, placed in a 16 cm thick lead capsule. The beam is collimated using a collimator with a constant opening of 20°. The source is placed 2 m above the ground to avoid albedo scattering. This source covers a dose rate range from 17 mGy/h down to 290 μGy/h. For low-dose calibration a 0.05 Ci source is applied. The positioning of the source and the opening of the collimator are pneumatically controlled. The dosimeters to be calibrated are placed on a DC-motor-driven vehicle positioned by a PC. The vehicle is remotely positioned with a precision of one millimetre at distances from the source between 1 and 7 metres. The vehicle positioning is controlled electronically and additionally checked via a TV camera. Exact dosimeter positioning is performed with a medical cross-laser and a telescope device. The construction of the vehicle allows angular irradiations to be performed. On the axis of the vehicle a 320 keV Philips X-ray tube is installed, which may be used as an irradiation source. A UNIDOS dosimeter with PTW ionisation chambers is used for determination of the dose rate. This calibration stand is designed for the calibration of personal dosimeters, the calibration of active devices for radiation protection, and research on newly developed thermoluminescent materials. (author)
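The 1-7 m travel range corresponds to a roughly inverse-square fall-off of the dose rate. A sketch, assuming an ideal point source and ignoring air attenuation and scatter (which is presumably why the quoted far-end value of 290 μGy/h differs slightly from the ideal prediction):

```python
# Ideal inverse-square fall-off of dose rate with distance from a point source.
def dose_rate_mGy_h(rate_at_1m, distance_m):
    return rate_at_1m / distance_m ** 2

near = dose_rate_mGy_h(17.0, 1.0)  # 17 mGy/h at 1 m, as quoted above
far = dose_rate_mGy_h(17.0, 7.0)   # ~0.35 mGy/h at 7 m in the ideal model
assert near > far
```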
Directory of Open Access Journals (Sweden)
J. C. Ochoa-Rivera
2002-01-01
Full Text Available A model for multivariate streamflow generation is presented, based on a multilayer feedforward neural network. The structure of the model results from two components: the neural network (NN) deterministic component and a random component which is assumed to be normally distributed. It is from this second component that the model achieves the ability to effectively incorporate the uncertainty associated with hydrological processes, making it valuable as a practical tool for synthetic generation of streamflow series. The NN topology and the corresponding analytical explicit formulation of the model are described in detail. The model is calibrated with a series of monthly inflows to two reservoir sites located in the Tagus River basin (Spain), while validation is performed through estimation of a set of statistics relevant for water resources systems planning and management. Among others, drought and storage statistics are computed and compared for both the synthetic and historical series. The performance of the NN-based model was compared to that of a standard autoregressive AR(2) model. Results show that the NN represents a promising modelling alternative for simulation purposes, with interesting potential in the context of water resources systems management and optimisation. Keywords: neural networks, multilayer perceptron, error backpropagation, hydrological scenario generation, multivariate time series.
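The two-component structure (deterministic map plus normally distributed innovation) can be sketched as follows; the linear map and all numbers are hypothetical stand-ins for the trained network, not the paper's model:

```python
import random

random.seed(0)

# Stand-in for the trained NN: damped persistence of last month's flow.
def f(prev):
    return 5.0 + 0.6 * prev

# Synthetic streamflow generation: deterministic component plus a
# normally distributed random component, as in the paper's structure.
def generate(n, sigma=1.0, q0=10.0):
    flows, q = [], q0
    for _ in range(n):
        q = f(q) + random.gauss(0.0, sigma)
        flows.append(q)
    return flows

series = generate(1200)
mean = sum(series) / len(series)
# Stationary mean of q = 5 + 0.6*q is q* = 12.5; the sample mean of a long
# synthetic series should land near it.
assert abs(mean - 12.5) < 0.5
```

Validation in the paper proceeds exactly in this spirit: statistics of the synthetic series (here just the mean; there drought and storage statistics) are compared against the historical record.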
Directory of Open Access Journals (Sweden)
Shikha Awasthi
2017-06-01
Full Text Available Analysis of emission from laser-induced plasma has a unique capability for quantifying the major and minor elements present in any type of sample under optimal analysis conditions. Chemometric techniques are very effective and reliable tools for quantification of multiple components in complex matrices. The feasibility of laser-induced breakdown spectroscopy (LIBS) in combination with multivariate analysis was investigated for the analysis of environmental reference materials (RMs). In the present work, different (Certified/Standard) Reference Materials of soil and plant origin were analyzed using LIBS, and the presence of Al, Ca, Mg, Fe, K, Mn and Si was identified in the LIBS spectra of these materials. Multivariate statistical methods (Partial Least Squares Regression and Partial Least Squares Discriminant Analysis) were employed for quantitative analysis of the constituent elements using the LIBS spectral data. Calibration models were used to predict the concentrations of the different elements in test samples and, subsequently, the predicted concentrations were compared with certified concentrations to check the validity of the models. The non-destructive analytical method of Instrumental Neutron Activation Analysis (INAA), using high-flux reactor neutrons and high-resolution gamma-ray spectrometry, was also used for intercomparison of the LIBS results for two RMs.
A flexible calibration method for laser displacement sensors based on a stereo-target
International Nuclear Information System (INIS)
Zhang, Jie; Sun, Junhua; Liu, Zhen; Zhang, Guangjun
2014-01-01
Laser displacement sensors (LDSs) are widely used in online measurement owing to their non-contact operation, high measurement speed, and other advantages. However, existing calibration methods for LDSs based on the traditional triangulation measurement model are time-consuming and tedious to operate. In this paper, a calibration method for LDSs based on a vision measurement model of the LDS is presented. According to the constraint relationships of the model parameters, the calibration is implemented by freely moving a stereo-target at least twice in the field of view of the LDS. Both simulation analyses and real experiments were conducted. Experimental results demonstrate that the calibration method achieves an accuracy of 0.044 mm within a measurement range of about 150 mm. Compared to traditional calibration methods, the proposed method places no special limitation on the relative position of the LDS and the target. The linearity approximation of the measurement model is not needed in the calibration, and thus the measurement range is not limited to the linear range. The calibration of the LDS is easy and quick to implement, and the method can be applied in wider fields. (paper)
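For context, the traditional triangulation model that the vision-based calibration replaces reduces, under a simple pinhole geometry, to z = f·b/x; all symbols and numbers below are illustrative assumptions, not parameters from the paper:

```python
# Simplified laser triangulation: with baseline b between laser and lens,
# lens focal length f, and laser-spot offset x on the detector, the target
# distance is approximately z = f * b / x (small-angle pinhole geometry).
def triangulation_distance(f_mm, baseline_mm, spot_offset_mm):
    return f_mm * baseline_mm / spot_offset_mm

z = triangulation_distance(16.0, 50.0, 5.0)  # -> 160.0 mm
assert abs(z - 160.0) < 1e-9
```

The linearized form of this relation is what restricts traditional calibration to the linear range, the limitation the paper's vision measurement model removes.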
Calibration techniques and strategies for the present and future LHC electromagnetic calorimeters
Aleksa, M.
2018-02-01
This document describes the different calibration strategies and techniques applied by the two general-purpose experiments at the LHC, ATLAS and CMS, and discusses them, underlining their respective strengths and weaknesses from the author's point of view. The resulting performances of both calorimeters are described and compared on the basis of selected physics results. Future upgrade plans for the High Luminosity LHC (HL-LHC) are briefly introduced and planned calibration strategies for the upgraded detectors are shown.
Absolute radiometric calibration of Landsat using a pseudo invariant calibration site
Helder, D.; Thome, K.J.; Mishra, N.; Chander, G.; Xiong, Xiaoxiong; Angal, A.; Choi, Tae-young
2013-01-01
Pseudo invariant calibration sites (PICS) have been used for on-orbit radiometric trending of optical satellite systems for more than 15 years. This approach to vicarious calibration has demonstrated a high degree of reliability and repeatability at the level of 1-3% depending on the site, spectral channel, and imaging geometries. A variety of sensors have used this approach for trending because it is broadly applicable and easy to implement. Models to describe the surface reflectance properties, as well as the intervening atmosphere have also been developed to improve the precision of the method. However, one limiting factor of using PICS is that an absolute calibration capability has not yet been fully developed. Because of this, PICS are primarily limited to providing only long term trending information for individual sensors or cross-calibration opportunities between two sensors. This paper builds an argument that PICS can be used more extensively for absolute calibration. To illustrate this, a simple empirical model is developed for the well-known Libya 4 PICS based on observations by Terra MODIS and EO-1 Hyperion. The model is validated by comparing model predicted top-of-atmosphere reflectance values to actual measurements made by the Landsat ETM+ sensor reflective bands. Following this, an outline is presented to develop a more comprehensive and accurate PICS absolute calibration model that can be Système international d'unités (SI) traceable. These initial concepts suggest that absolute calibration using PICS is possible on a broad scale and can lead to improved on-orbit calibration capabilities for optical satellite sensors.
Wang, Wei; Young, Bessie A.; Fülöp, Tibor; de Boer, Ian H.; Boulware, L. Ebony; Katz, Ronit; Correa, Adolfo; Griswold, Michael E.
2015-01-01
Background The calibration to Isotope Dilution Mass Spectrometry (IDMS) traceable creatinine is essential for valid use of the new Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation to estimate the glomerular filtration rate (GFR). Methods For 5,210 participants in the Jackson Heart Study (JHS), serum creatinine was measured with a multipoint enzymatic spectrophotometric assay at the baseline visit (2000-2004) and re-measured using the Roche enzymatic method, traceable to IDMS, in a subset of 206 subjects. The 200 eligible samples (6 were excluded: 1 for failure of the re-measurement and 5 as outliers) were divided into three disjoint sets - training, validation, and test - to select a calibration model, estimate true errors, and assess performance of the final calibration equation. The calibration equation was applied to the serum creatinine measurements of all 5,210 participants to estimate GFR and the prevalence of CKD. Results The selected Deming regression model provided a slope of 0.968 (95% Confidence Interval (CI), 0.904 to 1.053) and intercept of -0.0248 (95% CI, -0.0862 to 0.0366) with R squared 0.9527. Calibrated serum creatinine showed high agreement with actual measurements when applied to the unused test set (concordance correlation coefficient 0.934; 95% CI, 0.894 to 0.960). The baseline prevalence of CKD in the JHS (2000-2004) was 6.30% using calibrated values, compared with 8.29% using non-calibrated serum creatinine with the CKD-EPI equation. The calibration thus yields IDMS-traceable creatinine measurements in the JHS, and the calibrated values provide a lower CKD prevalence estimate. PMID:25806862
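Deming regression differs from ordinary least squares in treating both assays as error-prone. A sketch with equal error variances and hypothetical paired creatinine values (illustrative numbers, not the JHS data):

```python
# Deming regression (errors in both variables, equal error variances),
# the model class used to map the original assay onto IDMS-traceable values.
def deming(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    # Closed-form slope for error-variance ratio delta = 1:
    slope = (syy - sxx + ((syy - sxx) ** 2 + 4 * sxy ** 2) ** 0.5) / (2 * sxy)
    return slope, my - slope * mx

# Hypothetical paired measurements: original assay (x) vs IDMS-traceable (y).
x = [0.6, 0.8, 1.0, 1.2, 1.5, 2.0]
y = [0.55, 0.76, 0.95, 1.16, 1.44, 1.93]
slope, intercept = deming(x, y)
calibrated = [slope * xi + intercept for xi in x]
assert 0.9 < slope < 1.05 and -0.1 < intercept < 0.05
```

Applying the fitted slope and intercept to every baseline measurement, as the last line does for the toy data, is exactly how the study propagated the 200-sample calibration to all 5,210 participants.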
International Nuclear Information System (INIS)
Shankar, Ramesh; Hussey, Aaron; Davis, Eddie
2003-01-01
On-line monitoring of instrument channels provides increased information about the condition of monitored channels through accurate, more frequent evaluation of each channel's performance over time. This type of performance monitoring is a methodology that offers an alternate approach to traditional time-directed calibration. EPRI's strategic role in on-line monitoring is to facilitate its implementation and cost-effective use in numerous applications at power plants. To this end, EPRI has sponsored an on-line monitoring implementation project at multiple nuclear plants specifically intended to install and use on-line monitoring technology. The selected on-line monitoring method is based on the Multivariate State Estimation Technique. The project has a planned three-year life; seven plants are participating in the project. The goal is to apply on-line monitoring to all types of power plant applications and document all aspects of the implementation process in a series of EPRI reports. These deliverables cover installation, modeling, optimization, and proven cost-benefit. This paper discusses the actual implementation of on-line monitoring to various nuclear plant instrument systems. Examples of detected instrument drift are provided. (author)
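The full Multivariate State Estimation Technique builds channel estimates from a memory matrix of historical plant states; the much-simplified peer-redundancy sketch below (hypothetical readings and tolerance band) conveys only the residual-based drift-flagging idea, not MSET itself:

```python
# Much-simplified stand-in for MSET drift detection: estimate each channel
# from the mean of its redundant peers and flag drift when the residual
# exceeds a tolerance band.
def peer_estimate(readings, i):
    peers = [r for j, r in enumerate(readings) if j != i]
    return sum(peers) / len(peers)

def drifted(readings, band=0.6):
    return [abs(r - peer_estimate(readings, i)) > band
            for i, r in enumerate(readings)]

# Four redundant sensors; the fourth has drifted upward.
flags = drifted([10.0, 10.1, 9.9, 11.4])
assert flags == [False, False, False, True]
```

The real technique replaces the peer average with a nonlinear state estimate learned from historical data, which is what lets it monitor non-redundant channels as well.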
Application of single-image camera calibration for ultrasound augmented laparoscopic visualization.
Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D; Shekhar, Raj
2015-03-01
Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious: they require acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of standard 5-mm and 10-mm Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising for our augmented reality visualization system for laparoscopic surgery.
Application of single-image camera calibration for ultrasound augmented laparoscopic visualization
Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj
2015-03-01
Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious: they require acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of standard 5-mm and 10-mm Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising for our augmented reality visualization system for laparoscopic surgery.
Directory of Open Access Journals (Sweden)
E. Mitishita
2012-07-01
Full Text Available Digital photogrammetric products from the integration of imagery and lidar datasets are a reality nowadays. When the imagery and lidar surveys are performed together and the camera is connected to the lidar system, direct georeferencing can be applied to compute the exterior orientation parameters of the images. Direct georeferencing of the images requires accurate interior orientation parameters for photogrammetric application. Camera calibration is a procedure applied to compute the interior orientation parameters (IOPs). Calibration research has established that, to obtain accurate IOPs, the calibration must be performed under the same conditions in which the photogrammetric survey is done. This paper presents the methodology and experimental results of in situ self-calibration using a simultaneous image block and lidar dataset. The calibration results are analyzed and discussed. To perform this research, a test field was established in an urban area. A set of signalized points was placed on the test field to serve as check points or control points. The photogrammetric images and the lidar dataset of the test field were taken simultaneously. Four flight strips were used to obtain a cross layout, flown in opposite directions (W-E, E-W, N-S and S-N). The Kodak DSC Pro SLR/c digital camera was connected to the lidar system. The coordinates of the exposure station were computed from the lidar trajectory. Different layouts of vertical control points were used in the calibration experiments. The experiments use vertical coordinates from a precise differential GPS survey or computed by an interpolation procedure using the lidar dataset. The positions of the exposure stations are used as control points in the calibration procedure to eliminate the linear dependency of the group of interior and exterior orientation parameters. This linear dependency occurs, in the calibration procedure, when the vertical images and
Multivariate Max-Stable Spatial Processes
Genton, Marc G.
2014-01-06
Analysis of spatial extremes is currently based on univariate processes. Max-stable processes allow the spatial dependence of extremes to be modelled and explicitly quantified; they are therefore widely adopted in applications. For a better understanding of extreme events of real processes, such as environmental phenomena, it may be useful to study several spatial variables simultaneously. To this end, we extend some theoretical results and applications of max-stable processes to the multivariate setting to analyze extreme events of several variables observed across space. In particular, we study the maxima of independent replicates of multivariate processes, both in the Gaussian and Student-t cases. Then, we define a Poisson process construction in the multivariate setting and introduce multivariate versions of the Smith Gaussian extreme-value, the Schlather extremal-Gaussian and extremal-t, and the Brown-Resnick models. Inferential aspects of those models based on composite likelihoods are developed. We present results of various Monte Carlo simulations and of an application to a dataset of summer daily temperature maxima and minima in Oklahoma, U.S.A., highlighting the utility of working with multivariate models in contrast to the univariate case. Based on joint work with Simone Padoan and Huiyan Sang.
Multivariate Max-Stable Spatial Processes
Genton, Marc G.
2014-01-01
Analysis of spatial extremes is currently based on univariate processes. Max-stable processes allow the spatial dependence of extremes to be modelled and explicitly quantified; they are therefore widely adopted in applications. For a better understanding of extreme events of real processes, such as environmental phenomena, it may be useful to study several spatial variables simultaneously. To this end, we extend some theoretical results and applications of max-stable processes to the multivariate setting to analyze extreme events of several variables observed across space. In particular, we study the maxima of independent replicates of multivariate processes, both in the Gaussian and Student-t cases. Then, we define a Poisson process construction in the multivariate setting and introduce multivariate versions of the Smith Gaussian extreme-value, the Schlather extremal-Gaussian and extremal-t, and the Brown-Resnick models. Inferential aspects of those models based on composite likelihoods are developed. We present results of various Monte Carlo simulations and of an application to a dataset of summer daily temperature maxima and minima in Oklahoma, U.S.A., highlighting the utility of working with multivariate models in contrast to the univariate case. Based on joint work with Simone Padoan and Huiyan Sang.
Cryogenic Pressure Calibration Facility Using a Cold Force Reference
Bager, T; Métral, L
1999-01-01
Presently various commercial cryogenic pressure sensors are being investigated for installation in the LHC collider; they will eventually be used to verify that the magnets are fully immersed in liquid and to monitor fast pressure transients. In the framework of this selection procedure a cryogenic pressure calibration facility has been designed and built; it is based on a cryogenic primary pressure reference made of a bellows that converts the pressure into a force measurement. A shaft transfers this force to a precision force transducer at room temperature. Knowing the liquid bath pressure and the surface area of the bellows, the pressure applied to the transducers under calibration is calculated; corrections due to thermal contraction are introduced. To avoid loss of force in the bellows wall, its length is maintained constant, which a cold capacitive displacement sensor measures. The calibration temperature covers 1.5 K to 4.2 K and the pressure 0 to 20 bar. In contrast with more classical techniques ...
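The conversion at the heart of the reference, pressure from measured force and bellows area with a thermal-contraction correction, can be sketched as follows (all numbers, including the contraction factor, are hypothetical):

```python
# Pressure from a force-measuring bellows: p = F / A_eff, where the
# effective area is the room-temperature area scaled by the square of a
# linear thermal-contraction factor (hypothetical value below).
def pressure_bar(force_N, area_cm2_at_293K, contraction=0.997):
    area_m2 = area_cm2_at_293K * 1e-4 * contraction ** 2
    return force_N / area_m2 / 1e5  # Pa -> bar

p = pressure_bar(2000.0, 10.0)  # ~20 bar for a 10 cm^2 bellows at 2 kN
assert 19.5 < p < 20.5
```

The area correction enters squared because both lateral dimensions of the bellows shrink on cool-down; neglecting it would bias the full-scale 20 bar point by roughly the same fraction.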
Multivariate stochastic analysis for Monthly hydrological time series at Cuyahoga River Basin
zhang, L.
2011-12-01
The copula has become a very powerful statistical and stochastic methodology for multivariate analysis in environmental and water resources engineering. In recent years, the popular one-parameter Archimedean copulas, e.g. the Gumbel-Hougaard copula, Cook-Johnson copula and Frank copula, and the meta-elliptical copulas, e.g. the Gaussian copula and Student-t copula, have been applied in multivariate hydrological analyses, e.g. multivariate rainfall (rainfall intensity, duration and depth), flood (peak discharge, duration and volume), and drought analyses (drought length, mean and minimum SPI values, and drought mean areal extent). Copulas have also been applied in flood frequency analysis at the confluences of river systems by taking into account the dependence among upstream gauge stations rather than by using the hydrological routing technique. In most of the studies above, the annual time series have been treated as stationary signals in which the time series are assumed to be independent identically distributed (i.i.d.) random variables. But in reality, hydrological time series, especially daily and monthly hydrological time series, cannot be considered as i.i.d. random variables due to the periodicity in the data structure. The stationarity assumption is also in question due to climate change and land use and land cover (LULC) change in the past years. To this end, it is necessary to re-evaluate the classic approach to the study of hydrological time series by relaxing the stationarity assumption through a nonstationary approach. Likewise, for the study of the dependence structure of hydrological time series, the assumption of the same type of univariate distribution also needs to be relaxed by adopting copula theory. In this paper, the univariate monthly hydrological time series will be studied through a nonstationary time series analysis approach. The dependence structure of the multivariate monthly hydrological time series will be
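As a concrete instance of the Archimedean family mentioned above, the Gumbel-Hougaard copula has a closed form that can be evaluated directly:

```python
import math

# Gumbel-Hougaard copula:
#   C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta)),  theta >= 1.
def gumbel_copula(u, v, theta):
    return math.exp(-((-math.log(u)) ** theta
                      + (-math.log(v)) ** theta) ** (1.0 / theta))

# theta = 1 recovers independence: C(u, v) = u * v.
assert abs(gumbel_copula(0.3, 0.7, 1.0) - 0.21) < 1e-9
# Larger theta gives stronger (upper-tail) dependence; C grows toward min(u, v).
assert gumbel_copula(0.3, 0.7, 8.0) > gumbel_copula(0.3, 0.7, 1.0)
```

In the hydrological applications cited, u and v would be the marginal (possibly non-identical) distribution values of, say, drought length and severity, which is precisely the flexibility copula theory adds.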
A primer of multivariate statistics
Harris, Richard J
2014-01-01
Drawing upon more than 30 years of experience in working with statistics, Dr. Richard J. Harris has updated A Primer of Multivariate Statistics to provide a model of balance between how-to and why. This classic text covers multivariate techniques with a taste of latent variable approaches. Throughout the book there is a focus on the importance of describing and testing one's interpretations of the emergent variables that are produced by multivariate analysis. This edition retains its conversational writing style while focusing on classical techniques. The book gives the reader a feel for why
Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale
Directory of Open Access Journals (Sweden)
H. Kreibich
2016-05-01
Full Text Available Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models usually applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also introduce additional uncertainty, even more so in up-scaling procedures for meso-scale applications, where the parameters need to be estimated on a regional, area-wide basis. To gain more knowledge about the challenges associated with the up-scaling of multi-variable flood loss models, the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities, which were affected by the 2002 flood of the River Mulde in Saxony, Germany, by comparison with official loss data provided by the Saxon Relief Bank (SAB). In this meso-scale, case-study-based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling on the meso-scale as well. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models, like the BT-FLEMO model used in this study, which inherently provide uncertainty information, is the way forward.
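The contrast between a uni-variable stage-damage function and a multi-variable loss model can be sketched as follows; all coefficients and damage drivers are hypothetical illustrations, not FLEMO parameters:

```python
# Uni-variable stage-damage function: loss ratio depends on water depth only.
def stage_damage(depth_m):
    return min(1.0, 0.25 * depth_m)  # hypothetical 25% loss per metre, capped

# Multi-variable model: depth plus further damage drivers (duration and
# contamination, both hypothetical here) modify the loss ratio.
def multivariable_loss(depth_m, duration_h, contaminated):
    loss = min(1.0, 0.2 * depth_m)
    loss *= 1.0 + min(0.3, duration_h / 240.0)  # long inundation raises losses
    if contaminated:
        loss *= 1.2
    return min(1.0, loss)

# Same depth, different circumstances -> different loss estimates,
# which a pure stage-damage function cannot distinguish.
low = multivariable_loss(1.0, 12.0, False)
high = multivariable_loss(1.0, 120.0, True)
assert low < high <= 1.0
assert stage_damage(1.0) == 0.25  # depth-only model gives one answer for both
```

The up-scaling challenge discussed above is that drivers like duration or contamination must then be estimated area-wide per land-use unit rather than per building.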
Reliability Based Calibration of Fatigue Design Guidelines for Ship Structures
DEFF Research Database (Denmark)
Folsø, Rasmus; Otto, S.; Parmentier, G.
2002-01-01
A simple reliability based framework is applied to calibrate a new set of fatigue design guidelines. This new guideline considers two different approaches for the assessment of both loads, stresses and local stress raising effects, and partial safety factors must be given for any combination...
CALIBRATION PROCEDURES IN MID FORMAT CAMERA SETUPS
Directory of Open Access Journals (Sweden)
F. Pivnicka
2012-07-01
camera can be applied. However, there is a misalignment (boresight angle) that must be evaluated by a photogrammetric process using advanced tools, e.g. in Bingo. Once all these parameters have been determined, the system is capable of handling projects without, or with only a few, ground control points. But what effect does the photogrammetric process have when the achieved direct orientation values are applied directly, compared with an AT based on proper tie-point matching? The paper aims to show the steps to be taken by potential users and gives a quality estimate of the importance and influence of the various calibration and adjustment steps.
“Calibration-on-the-spot”: How to calibrate an EMCCD camera from its images
DEFF Research Database (Denmark)
Mortensen, Kim; Flyvbjerg, Henrik
2016-01-01
In order to count photons with a camera, the camera must be calibrated. Photon counting is necessary, e.g., to determine the precision of localization-based super-resolution microscopy. Here we present a protocol that calibrates an EMCCD camera from information contained in isolated, diffraction-limited spots in any image taken by the camera, thus making dedicated calibration procedures redundant by enabling calibration post festum, from images filed without calibration information.
A New Time Calibration Method for Switched-capacitor-array-based Waveform Samplers.
Kim, H; Chen, C-T; Eclov, N; Ronzhin, A; Murat, P; Ramberg, E; Los, S; Moses, W; Choong, W-S; Kao, C-M
2014-12-11
We have developed a new time calibration method for the DRS4 waveform sampler that enables us to precisely measure the non-uniform sampling interval inherent in the switched-capacitor cells of the DRS4. The method uses the proportionality between the differential amplitude and sampling interval of adjacent switched-capacitor cells responding to a sawtooth-shape pulse. In the experiment, a sawtooth-shape pulse with a 40 ns period generated by a Tektronix AWG7102 is fed to a DRS4 evaluation board for calibrating the sampling intervals of all 1024 cells individually. The electronic time resolution of the DRS4 evaluation board with the new time calibration is measured to be ~2.4 ps RMS by using two simultaneous Gaussian pulses with 2.35 ns full-width at half-maximum and applying a Gaussian fit. The time resolution dependencies on the time difference with the new time calibration are measured and compared to results obtained by another method. The new method could be applicable for other switched-capacitor-array technology-based waveform samplers for precise time calibration.
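The proportionality at the heart of this method lends itself to a short numerical sketch. The following is a hypothetical illustration, not the authors' code: on the linear segment of a sawtooth, the voltage difference between adjacent cells is proportional to the cell-to-cell time interval, and normalising to the known ramp duration removes the need to know the slope.

```python
import numpy as np

def cell_intervals(ramp_samples, ramp_duration):
    """Estimate non-uniform sampling intervals from samples of a linear ramp.

    If all samples lie on one linear segment of a sawtooth, the voltage
    step between adjacent cells is proportional to the time between them
    (dt_i = dV_i / slope).  Normalising the estimates to the known total
    ramp duration eliminates the unknown slope.
    """
    dv = np.diff(ramp_samples)            # differential amplitudes
    return dv / dv.sum() * ramp_duration  # per-cell intervals

# illustrative check with made-up non-uniform intervals
dt_true = np.array([0.8, 1.2, 1.0, 1.05, 0.95])          # ns
t = np.concatenate(([0.0], np.cumsum(dt_true)))
samples = 2.0 * t                                        # ideal ramp, slope 2 V/ns
dt_est = cell_intervals(samples, dt_true.sum())
```

In practice many sawtooth periods would be averaged and the ramp edges excluded; the sketch only shows the core proportionality argument.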
Zhou, Chengfeng; Jiang, Wei; Cheng, Qingzheng; Via, Brian K.
2015-01-01
This research addressed a rapid method to monitor hardwood chemical composition by applying Fourier transform infrared (FT-IR) spectroscopy, with particular interest in model performance for interpretation and prediction. Partial least squares (PLS) and principal components regression (PCR) were chosen as the primary models for comparison. Standard laboratory chemistry methods were employed on a mixed genus/species hardwood sample set to collect the original data. PLS was found to provide bet...
On-body calibration and processing for a combination of two radio-frequency personal exposimeters
International Nuclear Information System (INIS)
Thielens, Arno; Verloock, Leen; Tanghe, Emmeric; Martens, Luc; Joseph, Wout; Agneessens, Sam; Rogier, Hendrik
2015-01-01
Two radio-frequency personal exposimeters (PEMs) worn on both hips are calibrated on a subject in an anechoic chamber. The PEMs' response and crosstalk are determined for realistically polarised incident electric fields using this calibration. The 50 % confidence interval of the PEMs' response is reduced (2.6 dB on average) when averaged over both PEMs. A significant crosstalk (up to a ratio of 1.2) is measured, indicating that PEM measurements can be obfuscated by crosstalk. Simultaneous measurements with two PEMs are carried out in Ghent, Belgium. The highest exposure is measured for Global System for Mobile Communication downlink (0.052 mW m⁻² on average), while the lowest exposure is found for Universal Mobile Telecommunications System uplink (0.061 µW m⁻² on average). The authors recommend the use of a combination of multiple PEMs and, considering the multivariate data, to provide the mean vector and the covariance matrix next to the commonly listed univariate summary statistics in future PEM studies. (authors)
Calibration of reference KAP-meters at SSDL and cross calibration of clinical KAP-meters
International Nuclear Information System (INIS)
Hetland, Per O.; Friberg, Eva G.; Oevreboe, Kirsti M.; Bjerke, Hans H.
2009-01-01
In the summer of 2007 the secondary standard dosimetry laboratory (SSDL) in Norway established a calibration service for reference air-kerma area product meters (KAP-meters). The air-kerma area product, PKA, is a dosimetric quantity that can be directly related to the patient dose and used for risk assessment associated with different x-ray examinations. The calibration of reference KAP-meters at the SSDL gives important information on the parameters influencing the calibration factor for different types of KAP-meters. The use of reference KAP-meters calibrated at the SSDL is an easy and reliable way to calibrate or verify the PKA indicated by the x-ray equipment out in the clinics. Material and methods: Twelve KAP-meters were calibrated at the SSDL by use of the substitution method at five diagnostic radiation qualities (RQRs). Results: The calibration factors varied from 0.94 to 1.18. The energy response of the individual KAP-meters varied by a total of 20 % between the different RQRs, and the typical chamber transmission factors ranged from 0.78 to 0.91. Discussion: It is important to use a calibrated reference KAP-meter and a harmonised calibration method for PKA calibration in hospitals. The obtained uncertainty in the PKA readings is comparable with that of other calibration methods provided the information in the calibration certificate is correctly used, corrections are made, and the KAP chamber is properly positioned. This will ensure a reliable estimate of the patient dose and a proper optimisation of conventional x-ray examinations and interventional procedures
Jantzi, Sarah C; Almirall, José R
2014-01-01
Elemental analysis of soil is a useful application of both laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) and laser-induced breakdown spectroscopy (LIBS) in geological, agricultural, environmental, archeological, planetary, and forensic sciences. In forensic science, the question to be answered is often whether soil specimens found on objects (e.g., shoes, tires, or tools) originated from the crime scene or other location of interest. Elemental analysis of the soil from the object and the locations of interest results in a characteristic elemental profile of each specimen, consisting of the amount of each element present. Because multiple elements are measured, multivariate statistics can be used to compare the elemental profiles in order to determine whether the specimen from the object is similar to one of the locations of interest. Previous work involved milling and pressing 0.5 g of soil into pellets before analysis using LA-ICP-MS and LIBS. However, forensic examiners prefer techniques that require smaller samples, are less time consuming, and are less destructive, allowing for future analysis by other techniques. An alternative sample introduction method was developed to meet these needs while still providing quantitative results suitable for multivariate comparisons. The tape-mounting method involved deposition of a thin layer of soil onto double-sided adhesive tape. A comparison of tape-mounting and pellet method performance is reported for both LA-ICP-MS and LIBS. Calibration standards and reference materials, prepared using the tape method, were analyzed by LA-ICP-MS and LIBS. As with the pellet method, linear calibration curves were achieved with the tape method, as well as good precision and low bias. Soil specimens from Miami-Dade County were prepared by both the pellet and tape methods and analyzed by LA-ICP-MS and LIBS. Principal components analysis and linear discriminant analysis were applied to the multivariate data
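The final multivariate comparison step described above can be sketched with synthetic data. Everything below (element means, sample counts, model settings) is an illustrative assumption, not the study's data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# hypothetical elemental profiles (rows = specimens, columns = elements)
# from two sites with shifted mean concentrations
site_a = rng.normal(loc=[10, 5, 2, 40, 1.0], scale=0.5, size=(20, 5))
site_b = rng.normal(loc=[12, 4, 3, 35, 1.5], scale=0.5, size=(20, 5))
X = np.vstack([site_a, site_b])
y = np.array([0] * 20 + [1] * 20)          # 0 = site A, 1 = site B

# scale, reduce dimensionality with PCA, then discriminate with LDA
clf = make_pipeline(StandardScaler(), PCA(n_components=3),
                    LinearDiscriminantAnalysis())
clf.fit(X, y)

# classify a questioned specimen against the candidate sites
questioned = rng.normal(loc=[12, 4, 3, 35, 1.5], scale=0.5, size=(1, 5))
predicted_site = clf.predict(questioned)
```

A forensic comparison would of course use cross-validated error rates and far more elements and replicates; the sketch only shows the PCA-then-LDA pipeline structure.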
International Nuclear Information System (INIS)
Davidson, M.; Gilbertson, A.; Dougherty, M.
1991-03-01
These transducers are designed to measure stresses on SSC collared coils. They are individually calibrated with a bonded ten-stack of SSC inner coil cable by applying a known load and reading the corresponding output from the gages. The transducer is supported by a notched "backing plate" that allows for bending of the gage beam during calibration or in use with an actual coil. Several factors affect the calibration and use of the transducers: the number of times a "backing plate" is used, the similarities or differences between bonded ten-stacks, and the differences between the ten-stacks and the coil they represent. The latter is probably the most important, because a calibration curve is a model of how a transducer should react within a coil; if the model is wrong, the calibration curve is wrong. Information will be presented regarding differences in calibrations between Brookhaven National Laboratory (which also calibrates these transducers) and Fermilab and what caused these differences, the investigation into the differences between coils and ten-stacks and how they relate to transducer calibration, and some suggestions for future calibrations
Multivariate stochastic simulation with subjective multivariate normal distributions
P. J. Ince; J. Buongiorno
1991-01-01
In many applications of Monte Carlo simulation in forestry or forest products, it may be known that some variables are correlated. However, for simplicity, most simulations have assumed that the random variables are independently distributed. This report describes an alternative Monte Carlo simulation technique for subjectively assessed multivariate normal...
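The standard way to draw correlated normal variates for such a simulation is via a Cholesky factor of the covariance matrix. A minimal sketch with illustrative numbers (not taken from the report):

```python
import numpy as np

def correlated_normals(mean, cov, n, rng=None):
    """Draw n samples from a multivariate normal via Cholesky factorisation:
    x = mean + L z, where cov = L L^T and z ~ N(0, I)."""
    rng = rng if rng is not None else np.random.default_rng()
    L = np.linalg.cholesky(np.asarray(cov))
    z = rng.standard_normal((n, len(mean)))
    return np.asarray(mean) + z @ L.T

# e.g. two positively correlated inputs for a Monte Carlo run
mean = [100.0, 50.0]
cov = [[25.0, 15.0],        # sd 5 and 4, correlation 15/(5*4) = 0.75
       [15.0, 16.0]]
draws = correlated_normals(mean, cov, 100_000, np.random.default_rng(1))
sample_corr = np.corrcoef(draws.T)[0, 1]   # should be close to 0.75
```

Feeding `draws` into the simulation in place of independent draws preserves the subjectively assessed correlation structure.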
Model Checking Multivariate State Rewards
DEFF Research Database (Denmark)
Nielsen, Bo Friis; Nielson, Flemming; Nielson, Hanne Riis
2010-01-01
We consider continuous stochastic logics with state rewards that are interpreted over continuous time Markov chains. We show how results from multivariate phase type distributions can be used to obtain higher-order moments for multivariate state rewards (including covariance). We also generalise...
Step wise, multiple objective calibration of a hydrologic model for a snowmelt dominated basin
Hay, L.E.; Leavesley, G.H.; Clark, M.P.; Markstrom, S.L.; Viger, R.J.; Umemoto, M.
2006-01-01
The ability to apply a hydrologic model to large numbers of basins for forecasting purposes requires a quick and effective calibration strategy. This paper presents a stepwise, multiple-objective, automated procedure for hydrologic model calibration. This procedure includes the sequential calibration of a model's simulation of solar radiation (SR), potential evapotranspiration (PET), water balance, and daily runoff. The procedure uses the Shuffled Complex Evolution global search algorithm to calibrate the U.S. Geological Survey's Precipitation Runoff Modeling System in the Yampa River basin of Colorado. This process assures that intermediate states of the model (SR and PET on a monthly mean basis), as well as the water balance and components of the daily hydrograph, are simulated consistently with measured values.
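The stepwise idea — calibrate the parameters controlling an intermediate state first, then freeze them and calibrate the remaining parameters against the final objective — can be caricatured with a toy two-parameter model. This sketch uses a generic scalar optimiser rather than Shuffled Complex Evolution, and none of the numbers come from the Yampa River study:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# toy model y = a * x + b, calibrated stepwise:
# 'a' against an intermediate state, then 'b' against the final output
x = np.linspace(0.0, 1.0, 50)
obs_intermediate = 2.0 * x          # stand-in for a monthly-mean SR/PET series
obs_final = 2.0 * x + 1.0           # stand-in for the daily runoff series

# step 1: calibrate a on the intermediate objective only
res_a = minimize_scalar(lambda a: np.sum((a * x - obs_intermediate) ** 2))
a = res_a.x

# step 2: hold a fixed, calibrate b on the final objective
res_b = minimize_scalar(lambda b: np.sum((a * x + b - obs_final) ** 2))
b = res_b.x
```

The benefit mirrored here is the one the abstract emphasises: the intermediate state is fitted explicitly rather than being free to compensate for errors elsewhere in the model.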
Application of calibrations to hyperspectral images of food grains: example for wheat falling number
Directory of Open Access Journals (Sweden)
Nicola Caporaso
2017-04-01
The presence of a few kernels with sprouting problems in a batch of wheat can result in enzymatic activity sufficient to compromise flour functionality and bread quality. This is commonly assessed using the Hagberg Falling Number (HFN) method, which is a batch analysis. Hyperspectral imaging (HSI) can provide analysis at the single-grain level with potential for improved performance. The present paper deals with the development and application of calibrations obtained using an HSI system working in the near infrared (NIR) region (~900–2500 nm) and reference measurements of HFN. A partial least squares regression calibration was built using 425 wheat samples with an HFN range of 62–318 s, including field- and laboratory-pre-germinated samples placed under wet conditions. Two different approaches were tested to apply the calibrations: (i) application of the calibration to each pixel, followed by calculation of the average of the resulting values for each object (kernel); (ii) calculation of the average spectrum for each object, followed by application of the calibration to the mean spectrum. The calibration performance achieved for HFN (R² = 0.6; RMSEC ≈ 50 s; RMSEP ≈ 63 s) compares favourably with other studies using NIR spectroscopy. Linear spectral pre-treatments lead to similar results when applying the two methods, while non-linear treatments such as standard normal variate showed obvious differences between these approaches. A classification model based on linear discriminant analysis (LDA) was also applied to segregate wheat kernels into low (<250 s) and high (>250 s) HFN groups. LDA correctly classified 86.4 % of the samples, with a classification accuracy of 97.9 % when using an HFN threshold of 150 s. These results are promising in terms of wheat quality assessment using a rapid and non-destructive technique which is able to analyse wheat properties on a single-kernel basis, and to classify samples as acceptable or unacceptable for flour production.
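The difference between the two application approaches can be demonstrated numerically. This synthetic sketch (random stand-in "spectra" and a hypothetical linear calibration, not the paper's data) shows why the order of averaging is irrelevant for a purely linear pipeline but not once a per-spectrum transform such as SNV intervenes:

```python
import numpy as np

rng = np.random.default_rng(0)
pixels = rng.normal(size=(100, 20))          # pixel spectra of one kernel
coef, intercept = rng.normal(size=20), 0.3   # assumed fitted linear calibration

# (i) predict every pixel, then average the predictions
pred_i = (pixels @ coef + intercept).mean()
# (ii) average the spectra, then predict once
pred_ii = pixels.mean(axis=0) @ coef + intercept
# for a purely linear pipeline the two routes agree exactly

def snv(s):
    """Standard normal variate: a non-linear, per-spectrum transform."""
    return (s - s.mean()) / s.std()

# with SNV the order matters, because SNV of the mean spectrum is not
# the mean of the SNV-treated pixel spectra
pred_i_snv = (np.apply_along_axis(snv, 1, pixels) @ coef + intercept).mean()
pred_ii_snv = snv(pixels.mean(axis=0)) @ coef + intercept
```

This is the algebraic reason the paper observes "obvious differences between these approaches" only for the non-linear pre-treatments.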
Estimation of age in forensic medicine using multivariate approach to image analysis
DEFF Research Database (Denmark)
Kucheryavskiy, Sergey V.; Belyaev, Ivan; Fominykh, Sergey
2009-01-01
approach based on statistical analysis of the grey-level co-occurrence matrix, fractal analysis, wavelet transformation and the Angle Measure Technique. Projection on latent structures regression was chosen for calibration and prediction. The method has been applied to 70 male and 63 female individuals aged from 21 to 93, and the results were compared with the traditional approach. Some important questions and problems have been raised.
Maggio, Rubén M; Damiani, Patricia C; Olivieri, Alejandro C
2011-01-30
Liquid chromatographic-diode array detection data recorded for aqueous mixtures of 11 pesticides show the combined presence of strongly coeluting peaks, distortions in the time dimension between experimental runs, and the presence of potential interferents not modeled in the calibration phase in certain test samples. Due to the complexity of these phenomena, the data were processed by a second-order multivariate algorithm based on multivariate curve resolution and alternating least-squares, which allows one to successfully model both the spectral and retention-time behavior of all sample constituents. This led to the accurate quantitation of all analytes in a set of validation samples: aldicarb sulfoxide, oxamyl, aldicarb sulfone, methomyl, 3-hydroxy-carbofuran, aldicarb, propoxur, carbofuran, carbaryl, 1-naphthol and methiocarb. Limits of detection in the range 0.1–2 µg mL⁻¹ were obtained. Additionally, the second-order advantage was achieved for several analytes in samples containing several uncalibrated interferences. The limits of detection for all analytes were decreased by solid-phase pre-concentration to values comparable to those officially recommended, i.e., on the order of 5 ng mL⁻¹. Copyright © 2010 Elsevier B.V. All rights reserved.
Providing radiometric traceability for the calibration home base of DLR by PTB
Energy Technology Data Exchange (ETDEWEB)
Taubert, D. R.; Hollandt, J.; Sperfeld, P.; Pape, S.; Hoepe, A.; Hauer, K.-O. [Physikalisch-Technische Bundesanstalt, Braunschweig und Berlin, 10587 Berlin (Germany); Gege, P.; Schwarzmaier, T.; Lenhard, K.; Baumgartner, A. [Deutsches Zentrum fuer Luft- und Raumfahrt, Institut fuer Methodik der Fernerkundung, 82234 Oberpfaffenhofen (Germany)
2013-05-10
A dedicated calibration technique was applied for the calibration of the spectral radiance transfer standard (RASTA) of the German Aerospace Center (DLR) at the Physikalisch-Technische Bundesanstalt (PTB), consisting of two independent but complementary calibration procedures to provide redundancy and the smallest possible calibration uncertainties. Procedure I included two calibration steps: in the first step, the optical radiation source of RASTA, an FEL lamp, was calibrated in terms of its spectral irradiance E_λ(λ) in the wavelength range from 350 nm to 2400 nm using the PTB Spectral Irradiance Calibration Equipment (SPICE); in the second step, the spectral radiance factor β_0°:45°(λ) of the RASTA reflection standard was calibrated in a 0°:45° viewing geometry in the wavelength range from 350 nm to 1700 nm at the robot-based gonioreflectometer facility of PTB. The achieved relative standard uncertainties (k = 1) range from 0.6 % to 3.2 % and from 0.1 % to 0.6 %, respectively. Procedure II was completely independent of Procedure I and covered the entire spectral range of RASTA from 350 nm to 2500 nm: the 0°:45° viewing-geometry spectral radiance L_λ,0°:45°(λ) of RASTA was calibrated directly at the Spectral Radiance Comparator Facility (SRCF) of PTB. The relative uncertainties for this calibration procedure range from 0.8 % in the visible up to 7.5 % at 2500 nm (k = 1). In the overlapping spectral range of both calibration procedures, the spectral radiance L_λ,0°:45°,calc(λ) calculated from Procedure I is in good agreement with the direct measurement of Procedure II, i.e. well within the combined expanded uncertainties (k = 2) of both procedures.
International Nuclear Information System (INIS)
Aminu Ibrahim; Hafizan Juahir; Mohd Ekhwan Toriman; Mustapha, A.; Azman Azid; Isiyaka, H.A.
2015-01-01
Multivariate statistical techniques, including cluster analysis, discriminant analysis and principal component analysis/factor analysis, were applied to investigate the spatial variation and pollution sources in the Terengganu River basin during 5 years of monitoring of 13 water quality parameters at thirteen stations. Cluster analysis (CA) classified the 13 stations into 2 clusters, low polluted (LP) and moderately polluted (MP), based on similar water quality characteristics. Discriminant analysis (DA) rendered significant data reduction with 4 parameters (pH, NH3-N, PO4 and EC) and a correct assignation of 95.80 %. PCA/FA applied to the data sets yielded five latent factors accounting for 72.42 % of the total variance in the water quality data. The obtained varifactors indicate that the parameters responsible for water quality variations are mainly related to domestic waste, industry, runoff and agriculture (anthropogenic activities). Therefore, multivariate techniques are important in environmental management. (author)
Directory of Open Access Journals (Sweden)
Eun-Seok Lee
2002-12-01
This paper describes the re-design and the calibration results of the MAG digital circuit onboard the KSR-3. We enhanced the sampling rate of the magnetometer data, and we reduced noise and increased the reliability of the data. We confirmed that, with digital calibration of the magnetometer, the AIM resolution was degraded by less than the 1 nT of the analog calibration, so we used a numerical program to correct this problem. As a result, we could calculate the correction and the error of the data. These corrections will be applied to the magnetometer data after the launch of KSR-3.
A robust and simple two-mode digital calibration technique for pipelined ADC
Energy Technology Data Exchange (ETDEWEB)
Yin Xiumei; Zhao Nan; Sekedi Bomeh Kobenge; Yang Huazhong, E-mail: yxm@mails.tsinghua.edu.cn [Department of Electronic Engineering, Tsinghua University, Beijing 100084 (China)
2011-03-15
This paper presents a two-mode digital calibration technique for pipelined analog-to-digital converters (ADCs). The proposed calibration eliminates the errors in the residual difference voltage induced by capacitor mismatch of the pseudorandom (PN) sequence injection capacitors at ADC initialization, while applying digital background calibration to continuously compensate the interstage gain errors during normal ADC operation. The presented technique not only reduces the complexity of the analog circuit by eliminating the implementation of a PN sequence with accurate amplitude in the analog domain, but also improves the performance of the digital background calibration by minimizing the sensitivity of the calibration accuracy to sub-ADC errors. The use of opamps with low DC gains in normal operation makes the proposed design more compatible with future nanometer CMOS technology. A prototype 12-bit 40-MS/s pipelined ADC with the two-mode digital calibration is implemented in a 0.18-µm CMOS process. Adopting a simple telescopic opamp with a DC gain of 58 dB in the first stage, the measured SFDR and SNDR within the first Nyquist zone reach 80 dB and 66 dB, respectively. With the calibration, the maximum integral nonlinearity (INL) of the ADC reduces from 4.75 LSB to 0.65 LSB, while the ADC core consumes 82 mW from a 3.3-V power supply.
Calibration of Binocular Vision Sensors Based on Unknown-Sized Elliptical Stripe Images
Directory of Open Access Journals (Sweden)
Zhen Liu
2017-12-01
Most of the existing calibration methods for binocular stereo vision sensors (BSVS) depend on a high-accuracy target with feature points that are difficult and costly to manufacture. In complex light conditions, optical filters are used for BSVS, but they affect imaging quality. Hence, the use of a high-accuracy target with certain-sized feature points for calibration is not feasible under such complex conditions. To solve these problems, a calibration method based on unknown-sized elliptical stripe images is proposed. With known intrinsic parameters, the proposed method adopts elliptical stripes located on parallel planes as a medium to calibrate BSVS online. In comparison with common calibration methods, the proposed method avoids using a high-accuracy target with certain-sized feature points. Therefore, the proposed method is not only easy to implement but is also a realistic method for the calibration of BSVS with optical filters. Changing the size of the elliptical curves projected on the target overcomes the difficulty of applying the proposed method at different fields of view and distances. Simulative and physical experiments are conducted to validate the efficiency of the proposed method. When the field of view is approximately 400 mm × 300 mm, the proposed method can reach a calibration accuracy of 0.03 mm, which is comparable with that of Zhang’s method.
Pilot study to verify the calibration of electrometers
International Nuclear Information System (INIS)
Becker, P.; Meghzifene, A.
2002-01-01
As the National Laboratory for Electrical Measurements has not yet developed its capability for the standardization of small electrical charge produced by DC, the IRD is trying to verify its standardization procedures for electrical charge through a comparison programme. This subject was discussed with a major electrometer manufacturer, which has offered to provide, free of charge, three of their electrometer calibration standards for a pilot run. The model to be provided consists of four calibrated resistors and two calibrated capacitors, covering the charge/current range of interest. For producing charge or current, a standard DC voltage must be applied to these components. Since practically all modern electrometers measure using virtual ground, this methodology is viable. The IRD, in collaboration with the IAEA, wishes to invite interested laboratories to participate in this pilot comparison programme. This exercise is expected to be useful for all participants and will hopefully open the way for the establishment of routine comparisons in this area. The results will be discussed and published in an appropriate journal. Interested institutions should contact Mr. Paulo H. B. Becker directly through e-mail (pbecker at ird.gov.br) or fax +55 21 24421950, informing him of the model and manufacturer of the electrometer to be used for the pilot study, and discuss all practical details. (author)
Cichy, Radoslaw Martin; Pantazis, Dimitrios
2017-09-01
Multivariate pattern analysis of magnetoencephalography (MEG) and electroencephalography (EEG) data can reveal the rapid neural dynamics underlying cognition. However, MEG and EEG have systematic differences in how they sample neural activity. This poses the question of the degree to which such measurement differences consistently bias the results of multivariate analyses applied to MEG and EEG activation patterns. To investigate, we conducted a concurrent MEG/EEG study while participants viewed images of everyday objects. We applied multivariate classification analyses to the MEG and EEG data, and compared the resulting time courses to each other, and to fMRI data for an independent evaluation in space. We found that both MEG and EEG revealed the millisecond spatio-temporal dynamics of visual processing with largely equivalent results. Beyond yielding convergent results, we found that MEG and EEG also captured partly unique aspects of visual representations. Those unique components emerged earlier in time for MEG than for EEG. Identifying the sources of those unique components with fMRI, we found the locus for both MEG and EEG in high-level visual cortex, and in addition for MEG in low-level visual cortex. Together, our results show that multivariate analyses of MEG and EEG data offer a convergent and complementary view on neural processing, and motivate the wider adoption of these methods in both MEG and EEG research. Copyright © 2017 Elsevier Inc. All rights reserved.
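Time-resolved multivariate classification of this kind is commonly implemented by training a classifier independently at each time point of the epoch. A minimal sketch with simulated trials (not MEG/EEG data; the effect onset, sensor count and classifier are arbitrary choices):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 80, 30, 15
X = rng.normal(size=(n_trials, n_sensors, n_times))  # trials x sensors x time
y = rng.integers(0, 2, size=n_trials)                # two stimulus conditions
X[y == 1, :5, 8:] += 1.0  # inject a condition effect from time point 8 onward

# decode the condition separately at every time point (5-fold cross-validation)
accuracy = np.array([
    cross_val_score(LinearSVC(max_iter=5000), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
```

The resulting `accuracy` time course hovers at chance before the injected effect and rises above chance afterwards, which is the basic signature such analyses look for in real MEG/EEG epochs.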
Investigation on calibration parameter of mammography calibration facilities at MINT
International Nuclear Information System (INIS)
Asmaliza Hashim; Wan Hazlinda Ismail; Md Saion Salikin; Muhammad Jamal Md Isa; Azuhar Ripin; Norriza Mohd Isa
2004-01-01
A mammography calibration facility has been established in the Medical Physics Laboratory of the Malaysian Institute for Nuclear Technology Research (MINT). The calibration facility is established at the national level mainly to provide calibration services for radiation-measuring test instruments or test tools used in the quality assurance programme in mammography being implemented in Malaysia. One of the accepted parameters that determine the quality of a radiation beam is the homogeneity coefficient, which is determined from the values of the 1st and 2nd half-value layers (HVL). In this paper, the consistency of the beam qualities of the mammography machine available at MINT is investigated and presented. For calibration purposes, five radiation qualities, namely 23, 25, 28, 30 and 35 kV, selectable from the control panel of the X-ray machine, are used. Important parameters set for this calibration facility are the exposure time, tube current, focal-spot-to-detector distance (FDD) and beam size at a specific distance. The values of the homogeneity coefficient of this laboratory over the past few years up to now are presented in this paper. Backscatter radiation is also considered in this investigation. (Author)
Reliably detectable flaw size for NDE methods that use calibration
Koshti, Ajay M.
2017-04-01
Probability of detection (POD) analysis is used to assess the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software describe the most common methods of POD analysis. In this paper, POD analysis is applied to an NDE method, such as eddy current testing, where calibration is used. NDE calibration standards have artificial flaws of known size, such as electro-discharge machined (EDM) notches and flat bottom hole (FBH) reflectors, which are used to set instrument sensitivity for the detection of real flaws. Real flaws such as cracks and crack-like flaws are desired to be detected using these NDE methods. A reliably detectable crack size is required for safe-life analysis of fracture-critical parts. Therefore, it is important to correlate signal responses from real flaws with signal responses from the artificial flaws used in the calibration process to determine the reliably detectable flaw size.
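A common POD model form fits hit/miss inspection outcomes as a logistic function of (log) flaw size. The sketch below uses simulated hit/miss data and generic logistic regression rather than the mh1823 software, and the "true" POD curve and a90 criterion are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# hypothetical hit/miss data: detectability rises with flaw size
size = rng.uniform(0.1, 2.0, 500)                   # flaw size a, mm
p_true = 1.0 / (1.0 + np.exp(-6.0 * (size - 0.8)))  # assumed true POD(a)
hit = (rng.random(500) < p_true).astype(int)        # 1 = detected, 0 = missed

# fit POD(a) as a logistic function of log flaw size
model = LogisticRegression().fit(np.log(size)[:, None], hit)

# a90: smallest size at which the fitted POD reaches 0.90
grid = np.linspace(0.1, 2.0, 1000)
pod = model.predict_proba(np.log(grid)[:, None])[:, 1]
a90 = grid[np.argmax(pod >= 0.90)]
```

A full MIL-HDBK-1823 analysis would additionally place a confidence bound on the curve (a90/95) and treat â-versus-a data differently from hit/miss data; this sketch shows only the basic curve fit.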
Talpur, M Younis; Kara, Huseyin; Sherazi, S T H; Ayyildiz, H Filiz; Topkafa, Mustafa; Arslan, Fatma Nur; Naz, Saba; Durmaz, Fatih; Sirajuddin
2014-11-01
Single bounce attenuated total reflectance (SB-ATR) Fourier transform infrared (FTIR) spectroscopy in conjunction with chemometrics was used for accurate determination of the free fatty acid (FFA), peroxide value (PV), iodine value (IV), conjugated diene (CD) and conjugated triene (CT) of cottonseed oil (CSO) during potato chip frying. Partial least squares (PLS), stepwise multiple linear regression (SMLR), principal component regression (PCR) and simple Beer's law (SBL) were applied to develop the calibrations for simultaneous evaluation of the five stated parameters of CSO during frying of French frozen potato chips at 170 °C. Good regression coefficients (R²) were achieved for FFA, PV, IV, CD and CT, with values >0.992 by PLS, SMLR, PCR and SBL. The root mean square error of prediction (RMSEP) was found to be less than 1.95 % for all determinations. The results of the study indicated that SB-ATR FTIR in combination with multivariate chemometrics could be used for accurate and simultaneous determination of different parameters during the frying process without using any toxic organic solvent. Copyright © 2014 Elsevier B.V. All rights reserved.
Autonomous Kinematic Calibration of the Robot Manipulator with a Linear Laser-Vision Sensor
Kang, Hee-Jun; Jeong, Jeong-Woo; Shin, Sung-Weon; Suh, Young-Soo; Ro, Young-Schick
This paper presents a new autonomous kinematic calibration technique using a laser-vision sensor called the "Perceptron TriCam Contour". Because the sensor measures by capturing the image of a projected laser line on the surface of the object, we set up a long, straight line of very fine string inside the robot workspace and then allow the sensor, mounted on the robot, to measure the point of intersection of the line of string and the projected laser line. The data collected by changing the robot configuration and measuring the intersection points are constrained to lie on a single straight line, so that the closed-loop calibration method can be applied. The resulting calibration method is simple and accurate, and is also suitable for on-site calibration in an industrial environment. The method is implemented using a Hyundai VORG-35 to demonstrate its effectiveness.
Anderson, R. B.; Morris, R. V.; Clegg, S. M.; Bell, J. F., III; Humphries, S. D.; Wiens, R. C.
2011-01-01
The ChemCam instrument selected for the Curiosity rover is capable of remote laser-induced breakdown spectroscopy (LIBS).[1] We used a remote LIBS instrument similar to ChemCam to analyze 197 geologic slab samples and 32 pressed-powder geostandards. The slab samples are well-characterized and have been used to validate the calibration of previous instruments on Mars missions, including CRISM [2], OMEGA [3], the MER Pancam [4], Mini-TES [5], and Moessbauer [6] instruments and the Phoenix SSI [7]. The resulting dataset was used to compare multivariate methods for quantitative LIBS and to determine the effect of grain size on calculations. Three multivariate methods - partial least squares (PLS), multilayer perceptron artificial neural networks (MLP ANNs) and cascade correlation (CC) ANNs - were used to generate models and extract the quantitative composition of unknown samples. PLS can be used to predict one element (PLS1) or multiple elements (PLS2) at a time, as can the neural network methods. Although MLP and CC ANNs were successful in some cases, PLS generally produced the most accurate and precise results.
Design Through Integration of On-Board Calibration Device with Imaging Spectroscopy Instruments
Stange, Michael
2012-01-01
The main purpose of the Airborne Visible and Infrared Imaging Spectroscopy (AVIRIS) project is to "identify, measure, and monitor constituents of the Earth's surface and atmosphere based on molecular absorption and particle scattering signatures." The project designs, builds, and tests various imaging spectroscopy instruments that use On-Board Calibration devices (OBCs) to check the accuracy of the data collected by the spectrometers. The imaging instrument records the spectral signatures of light collected during flight. To verify that the data are correct, the OBC shines light that is collected by the imaging spectrometer and compared against previous calibration data to track spectral response changes in the instrument. The calibration derived from the OBC readings is then applied to the spectral data to ensure accuracy.
Real-time alignment and calibration of the LHCb Detector in Run II
Dujany, Giulio
2016-01-01
Stable, precise spatial alignment and PID calibration are necessary to achieve optimal detector performance. During Run 2, LHCb has a new real-time detector alignment and calibration procedure that allows equivalent performance to be reached in the online and offline reconstruction. This offers the opportunity to optimise the event selection by applying stronger constraints, and to use hadronic particle identification at the trigger level. The computing time constraints are met through the use of a new dedicated framework using the multi-core farm infrastructure of the trigger. The motivation for a real-time alignment and calibration of the LHCb detector is discussed from the operational and physics-performance points of view. Specific challenges of this configuration are discussed, as well as the designed framework and its performance.
Real-time alignment and calibration of the LHCb Detector in Run II
Dujany, Giulio
2015-01-01
Stable, precise spatial alignment and PID calibration are necessary to achieve optimal detector performance. During Run 2, LHCb will have a new real-time detector alignment and calibration procedure that allows equivalent performance to be reached in the online and offline reconstruction. This offers the opportunity to optimise the event selection by applying stronger constraints, and to use hadronic particle identification at the trigger level. The computing time constraints are met through the use of a new dedicated framework using the multi-core farm infrastructure of the trigger. The motivation for a real-time alignment and calibration of the LHCb detector is discussed from the operational and physics-performance points of view. Specific challenges of this configuration are discussed, as well as the designed framework and its performance.
Corrected direct force balance method for atomic force microscopy lateral force calibration
International Nuclear Information System (INIS)
Asay, David B.; Hsiao, Erik; Kim, Seong H.
2009-01-01
This paper reports corrections and improvements to the previously reported direct force balance method (DFBM) developed for the lateral force calibration of atomic force microscopy. The DFBM employs the lateral force signal obtained during a force-distance measurement on a sloped surface and relates this signal to the applied load and the slope of the surface to determine the lateral calibration factor. In the original publication [Rev. Sci. Instrum. 77, 043903 (2006)], the tip-substrate contact was assumed to be pinned at the point of contact, i.e., no slip along the slope. In control experiments, the tip was found to slide along the slope during force-distance curve measurements. This paper presents the correct force balance for lateral force calibration.
International Nuclear Information System (INIS)
Tastan, S.; Soylu, A.; Kucuk, O.; Ibis, E.
2004-01-01
Full text: Radionuclides for diagnostic purposes, such as Tc-99m, Tl-201, Ga-67 and In-111, are measured using ionization-type dose calibrators. Therapeutic radionuclides, which emit both beta and gamma rays, are detected by the same type of dose calibrator. Other therapeutic products, such as Y-90, P-32 and Sr-89, are pure beta emitters, and they are gaining wider utility as various new therapy radiopharmaceuticals are being developed. The type of container material, such as glass or plastic, may seriously affect the radioactivity measurement due to attenuation. Since it is crucial to administer the exact amount of radioactivity to the patient for therapy purposes, dedicated dose calibrators are specially manufactured for the measurement of these radionuclides. However, these measuring systems are not widely available in the nuclear medicine centers where therapy is applied to the patient. It is a known fact that dose calibrators routinely used in nuclear medicine departments can be calibrated for vials and syringes using standard sources of the same radioisotope. The method of calibration of Y-90 measurement for the two ionization-chamber dose calibrators available in the institute is summarized in this presentation.
International Nuclear Information System (INIS)
Tastan, S.; Soylu, A.; Kucuk, O.; Ibis, E.
2004-01-01
Radionuclides for diagnostic purposes, such as Tc-99m, Tl-201, Ga-67 and In-111, are measured using ionization-type dose calibrators. Therapeutic radionuclides, which emit both beta and gamma rays, are detected by the same type of dose calibrator. Other therapeutic products, such as Y-90, P-32 and Sr-89, are pure beta emitters, and they are gaining wider utility as various new therapy radiopharmaceuticals are being developed. The type of container material, such as glass or plastic, may seriously affect the radioactivity measurement due to attenuation. Since it is crucial to administer the exact amount of radioactivity to the patient for therapy purposes, dedicated dose calibrators are specially manufactured for the measurement of these radionuclides. However, these measuring systems are not widely available in the nuclear medicine centers where therapy is applied to the patient. It is a known fact that dose calibrators routinely used in nuclear medicine departments can be calibrated for vials and syringes using standard sources of the same radioisotope. The method of calibration of Y-90 measurement for the two ionization-chamber dose calibrators available in the institute is summarized in this presentation. (author)
Improvement of a Robotic Manipulator Model Based on Multivariate Residual Modeling
Directory of Open Access Journals (Sweden)
Serge Gale
2017-07-01
A new method is presented for extending a dynamic model of a six-degrees-of-freedom robotic manipulator. A non-linear multivariate calibration of input–output training data from several typical motion trajectories is carried out with the aim of predicting the model's systematic output error at time (t + 1) from the known input reference up to and including time (t). A new partial least squares regression (PLSR) based method, nominal PLSR with interactions, was developed and used to handle unmodelled non-linearities. The performance of the new method is compared with least squares (LS). Different cross-validation schemes were compared in order to assess the sampling of the state space based on conventional trajectories. The method developed in the paper can be used as a fault-monitoring mechanism and early-warning system for sensor failure. The results show that the suggested method improves the trajectory-tracking performance of the robotic manipulator by extending its initial dynamic model.
Calibration Under Uncertainty.
Energy Technology Data Exchange (ETDEWEB)
Swiler, Laura Painton; Trucano, Timothy Guy
2005-03-01
This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
Wang, Wei; Young, Bessie A; Fülöp, Tibor; de Boer, Ian H; Boulware, L Ebony; Katz, Ronit; Correa, Adolfo; Griswold, Michael E
2015-05-01
Calibration to isotope dilution mass spectrometry-traceable creatinine is essential for valid use of the new Chronic Kidney Disease Epidemiology Collaboration equation to estimate the glomerular filtration rate. For 5,210 participants in the Jackson Heart Study (JHS), serum creatinine was measured with a multipoint enzymatic spectrophotometric assay at the baseline visit (2000-2004) and remeasured using the Roche enzymatic method, traceable to isotope dilution mass spectrometry, in a subset of 206 subjects. The 200 eligible samples (6 were excluded: 1 for failure of the remeasurement and 5 as outliers) were divided into three disjoint sets (training, validation, and test) to select a calibration model, estimate true errors, and assess performance of the final calibration equation. The calibration equation was applied to the serum creatinine measurements of all 5,210 participants to estimate the glomerular filtration rate and the prevalence of chronic kidney disease (CKD). The selected Deming regression model provided a slope of 0.968 (95% confidence interval [CI], 0.904-1.053) and an intercept of -0.0248 (95% CI, -0.0862 to 0.0366), with an R value of 0.9527. Calibrated serum creatinine showed high agreement with the actual measurements when applied to the held-out test set (concordance correlation coefficient 0.934; 95% CI, 0.894-0.960). The baseline prevalence of CKD in the JHS (2000-2004) was 6.30% using calibrated values, compared with 8.29% using noncalibrated serum creatinine with the Chronic Kidney Disease Epidemiology Collaboration equation. The calibration thus materially affects serum creatinine measurements in the JHS, and the calibrated values provide a lower CKD prevalence estimate.
Calibrating the CsI(Tl) detectors of the GARFIELD apparatus
Abbondanno, U; Casini, G; Cavaletti, R; Cavallaro, S; Chiari, M; D'Agostino, M; Gramegna, F; Lanchais, A; Margagliotti, G V; Mastinu, P F; Milazzo, P M; Moroni, A; Nannini, A; Ordine, A; Vannini, G; Vannucci, L
2002-01-01
The energy and charge dependence of the light output of the CsI(Tl) detectors of the GARFIELD apparatus has been investigated for heavy ions with 5<=Z<=16 in the energy range from 2.2 to 8.3 A MeV. The results have been compared to an analytical expression successfully used in previous calibration procedures at higher energies, and a rather good agreement was obtained between measured and calculated quantities. The resulting parameter set was successfully applied to another set of experimental data. The overall result demonstrates the validity of the above mentioned calibration procedure in a wide range of incident ion energies and masses.
Radioactivity measurement of 18F in 16 ml vials for calibration of radionuclide calibrators
International Nuclear Information System (INIS)
Wurdiyanto, Gatot; Marsoem, Pujadi; Candra, Hermawan; Wijono, Paidi
2012-01-01
Fluorine-18 is obtained through the reaction 18O(p, n)18F using a cyclotron situated in a hospital in Jakarta. Standardization of the 18F solution is performed by gamma spectrometry using calibration sources of 152Eu, 60Co and 137Cs that are traceable to the International System of Units (SI). The activities in the 16 ml vials that were used for calibrating the radionuclide calibrators were between 1 and 2 GBq, with expanded uncertainties of 3.8%. The expanded uncertainty, at a coverage factor of k=2, on the derived calibration factor for the radionuclide calibrator was 6.6%. - Highlights: ► PTKMR–BATAN, as the NMI of Indonesia, is required to have procedures to calibrate radionuclide calibrators. ► Standardizations were carried out on a solution of [18F]FDG using gamma spectrometry. ► The volume of 18F solution used was 16 ml because this is the volume often used in hospitals. ► The secondary standard ionization chamber is a CRC-7BT Capintec radionuclide calibrator. ► The dial setting for 16 ml of [18F]FDG solution in a vial is 443 for the Capintec dose calibrator.
Determination of activity of I-125 applying sum-peak methods
International Nuclear Information System (INIS)
Arbelo Penna, Y.; Hernandez Rivero, A.T.; Oropesa Verdecia, P.; Serra Aguila, R.; Moreno Leon, Y.
2011-01-01
The determination of the activity of I-125 in radioactive solutions by applying sum-peak methods, using an n-type HPGe detector of extended range, is described. Two procedures were used for obtaining the I-125 specific activity of the solutions: a) an absolute method, which is independent of nuclear parameters and detector efficiency, and b) an option which considers the efficiency constant in the region of interest and involves calculations using nuclear parameters. The measurement geometries studied are specifically solid point sources. The relative deviations between the specific activities obtained by these different procedures are not higher than 1%. Moreover, the activity of the radioactive solution was obtained by measuring it in a NIST ampoule using a CAPINTEC CRC 35R dose calibrator. The consistency of the obtained results confirms the feasibility of applying direct methods of measurement for I-125 activity determinations, which allows lower uncertainties to be achieved in comparison with relative methods of measurement. These methods are intended to be applied to the calibration of the equipment and radionuclide dose calibrators currently used in clinical RIA/IRMA assays and in nuclear medicine practice, respectively. (Author)
Development of neutron calibration field using accelerators
Energy Technology Data Exchange (ETDEWEB)
Baba, Mamoru [Tohoku Univ., Cyclotron and Radioisotope Center, Sendai, Miyagi (Japan)
2003-03-01
A brief summary is given on the fast neutron calibration fields for 1) 8 keV to 15 MeV range, and 2) 30-80 MeV range. The field for 8 keV to 15 MeV range was developed at the Fast Neutron Laboratory (FNL) at Tohoku University using a 4.5 MV pulsed Dynamitron accelerator and neutron production reactions, {sup 45}Sc(p, n), {sup 7}Li(p, n), {sup 3}H(p, n), D(d, n) and T(d, n). The latter 30-80 MeV fields are setup at TIARA of Takasaki Establishment of Japan Atomic Energy Research Institute, and at Cyclotron Radio Isotope Center (CYRIC) of Tohoku University using a 90 MeV AVF cyclotron and the {sup 7}Li(p, n) reaction. These fields have been applied for various calibration of neutron spectrometers and dosimeters, and for irradiation purposes. (author)
Ultrasound probe and needle-guide calibration for robotic ultrasound scanning and needle targeting.
Kim, Chunwoo; Chang, Doyoung; Petrisor, Doru; Chirikjian, Gregory; Han, Misop; Stoianovici, Dan
2013-06-01
Image-to-robot registration is a typical step for robotic image-guided interventions. If the imaging device uses a portable imaging probe that is held by a robot, this registration is constant and has been commonly named probe calibration. The same applies to probes tracked by a position measurement device. We report a calibration method for 2-D ultrasound probes using robotic manipulation and a planar calibration rig. Moreover, a needle guide that is attached to the probe is also calibrated for ultrasound-guided needle targeting. The method is applied to a transrectal ultrasound (TRUS) probe for robot-assisted prostate biopsy. Validation experiments include TRUS-guided needle targeting accuracy tests. This paper outlines the entire process from calibration to image-guided targeting. Freehand TRUS-guided prostate biopsy is the primary method of diagnosing prostate cancer, with over 1.2 million procedures performed annually in the U.S. alone. However, freehand biopsy is a highly challenging procedure with subjective quality control. As such, biopsy devices are emerging to assist the physician. Here, we present a method that uses robotic TRUS manipulation. A 2-D TRUS probe is supported by a 4-degree-of-freedom robot. The robot performs ultrasound scanning, enabling 3-D reconstructions. Based on the images, the robot orients a needle guide on target for biopsy. The biopsy is acquired manually through the guide. In vitro tests showed that the 3-D images were geometrically accurate, and the image-based needle targeting accuracy was 1.55 mm. These results validate the probe calibration presented and the overall robotic system for needle targeting. Targeting accuracy is sufficient for targeting small, clinically significant prostatic cancer lesions, but actual in vivo targeting will include additional error components that will have to be determined.
Variable selection in multivariate calibration based on clustering of variable concept.
Farrokhnia, Maryam; Karimi, Sadegh
2016-01-01
Recently we have proposed a new variable selection algorithm based on the clustering of variables concept (CLoVA) for classification problems. With the same idea, this concept has been applied to a regression problem, and the obtained results have been compared with conventional variable selection strategies for PLS. The basic idea behind the clustering of variables is that the instrument channels are clustered into different clusters via clustering algorithms. Then, the spectral data of each cluster are subjected to PLS regression. Different real data sets (Cargill corn, Biscuit dough, ACE QSAR, Soy, and Tablet) have been used to evaluate the influence of the clustering of variables on the prediction performance of PLS. In almost all cases, the statistical parameters, especially the prediction error, show the superiority of CLoVA-PLS with respect to other variable selection strategies. Finally, synergy clustering of variables (sCLoVA-PLS), which uses a combination of clusters, has been proposed as an efficient modification of the CLoVA algorithm. The obtained statistical parameters indicate that variable clustering can split the useful part from the redundant one, and that a stable model can then be reached based on the informative clusters. Copyright © 2015 Elsevier B.V. All rights reserved.
Energy calibration for the forward detector at WASA-at-COSY
Energy Technology Data Exchange (ETDEWEB)
Demmich, Kay; Bergmann, Florian; Huesemann, Patrice; Huesken, Nils; Taeschner, Alexander; Khoukaz, Alfons [Institut fuer Kernphysik, Westfaelische Wilhelms-Universitaet Muenster (Germany); Collaboration: WASA-at-COSY-Collaboration
2014-07-01
Studies of rare and forbidden decays of light mesons are one main aspect of the WASA-at-COSY physics program. In this context a large data set of η mesons has been produced in proton-proton scattering in order to investigate the decay properties of this meson. This high-statistics measurement allows, e.g., for the search for the C-parity-violating reaction η → π{sup 0} + e{sup +} + e{sup -}, for which only an upper limit for the relative branching ratio of 4 x 10{sup -5} is quoted by the Particle Data Group. The analysis of this forbidden decay channel relies on an effective separation of the physical background, which is mainly caused by direct pion production. To handle this background, a missing-mass analysis and kinematic fitting will be applied. Since both methods rely on a high energy resolution, the forward detector, which measures the proton energies, has to be calibrated very carefully. In this contribution, a new calibration software is presented which has been developed especially for proton-proton measurements, and which allows for a precise determination of the calibration parameters by means of a graphical user interface and a dedicated fitting algorithm. Moreover, with this tool a run-by-run calibration can be realised. First results of the improved calibration are presented.
International Nuclear Information System (INIS)
Wang, W; Qi, H; Ayhan, B; Kwan, C; Vance, S
2014-01-01
Compositional analysis is important to interrogate spectral samples for direct analysis of materials in agriculture, the environment, archaeology, etc. In this paper, multivariate analysis (MVA) techniques are coupled with laser-induced breakdown spectroscopy (LIBS) to estimate quantitative elemental compositions and determine the type of the sample. In particular, we present a new multivariate analysis method for composition analysis, referred to as spectral unmixing. The LIBS spectrum of a testing sample is considered to be a linear mixture of one or more constituent signatures that correspond to various chemical elements. The signature library is derived from regression analysis using training samples, or is manually set up with information from an elemental LIBS spectral database. A calibration step is used to make all the signatures in the library homogeneous with the testing sample, so as to avoid inhomogeneous signatures that might be caused by different sampling conditions. To demonstrate the feasibility of the proposed method, we compare it with the traditional partial least squares (PLS) method and the univariate method using a standard soil data set with elemental concentrations measured a priori. The experimental results show that the proposed method holds great potential for reliable and effective elemental concentration estimation.
Field calibration of cup anemometers
DEFF Research Database (Denmark)
Schmidt Paulsen, Uwe; Mortensen, Niels Gylling; Hansen, Jens Carsten
2007-01-01
A field calibration method and results are described along with the experience gained with the method. The cup anemometers to be calibrated are mounted in a row on a 10-m high rig and calibrated in the free wind against a reference cup anemometer. The method has been reported [1] to improve...... the statistical bias on the data relative to calibrations carried out in a wind tunnel. The methodology is sufficiently accurate for calibration of cup anemometers used for wind resource assessments and provides a simple, reliable and cost-effective solution to cup anemometer calibration, especially suited...
Calibration of Flick standards
International Nuclear Information System (INIS)
Thalmann, Ruedi; Spiller, Jürg; Küng, Alain; Jusko, Otto
2012-01-01
Flick standards or magnification standards are widely used for an efficient and functional calibration of the sensitivity of form measuring instruments. The results of a recent measurement comparison have proven to be partially unsatisfactory and revealed problems related to the calibration of these standards. In this paper the influence factors for the calibration of Flick standards using roundness measurement instruments are discussed in detail, in particular the bandwidth of the measurement chain, residual form errors of the device under test, profile distortions due to the diameter of the probing element, and questions related to the definition of the measurand. The different contributions are estimated using simulations and are experimentally verified. Alternative methods to calibrate Flick standards are also investigated. Finally, the practical limitations of Flick standard calibration are shown, and the usability of Flick standards both to calibrate the sensitivity of roundness instruments and to check the filter function of such instruments is analysed. (paper)
Thermo analytic investigation of hydrogen effusion behavior - sensor evaluation and calibration
Energy Technology Data Exchange (ETDEWEB)
Ried, P.; Gaber, M.; Beyer, K.; Mueller, R.; Kipphardt, H.; Kannengiesser, T. [BAM, Federal Institute for Material Research and Testing, Berlin (Germany)
2011-01-15
The well-established carrier gas analysis (CGA) method was used to test different hydrogen detectors comprising a thermal conductivity detector (TCD) and a metal oxide semiconducting (MOx) sensor. The MOx sensor provides high hydrogen sensitivity and selectivity, whereas the TCD exhibits a much shorter response time and a linear hydrogen concentration dependency. Therefore, the TCD was used for quantitative hydrogen concentration measurements above 50 {mu}mol/mol. The respective calibration was made using N{sub 2}/H{sub 2} gas mixtures. Furthermore, the hydrogen content and degassing behaviour of titanium hydride (TiH{sub 2-x}) was studied. This material turned out to be a potential candidate for a solid sample calibration. Vacuum hot extraction (VHE) coupled with a mass spectrometer (MS) was then calibrated with TiH{sub 2-x} as transfer standard. The calibration was applied for the evaluation of the hydrogen content of austenitic steel samples (1.4301) and the comparison of CGA-TCD and VHE-MS. (Copyright 2011 Wiley-VCH Verlag GmbH and Co. KGaA, Weinheim)
Calibration methodology for instruments utilized in X radiation beams, diagnostic level
International Nuclear Information System (INIS)
Penha, M. da; Potiens, A.; Caldas, L.V.E.
2004-01-01
Methodologies for the calibration of diagnostic radiology instruments were established at the Calibration Laboratory of IPEN. The methods may be used in the calibration procedures of survey meters used in radiation protection measurements (scattered radiation), instruments used in direct beams (attenuated and non attenuated beams) and quality control instruments. The established qualities are recommended by the international standards IEC 1267 and ISO 4037-3. Two ionization chambers were used as reference systems, one with a volume of 30 cm³ for radiation protection measurements, and the other with a volume of 1 cm³ for direct beam measurements. Both are traceable to the German Primary Laboratory of Physikalisch-Technische Bundesanstalt (PTB). In the case of calibration of quality control instruments, a non-invasive method using the measurement of the spectrum endpoint was established with a portable gamma and X-ray Intertechnique spectrometer system. The methods were applied to survey meters (radiation protection measurements), ionization chambers (direct beam measurements) and kVp meters (invasive and non-invasive instruments). (Author)
Calibration of the Dodewaard downcomer thermocouple cross-correlation flow-rate measurements
Energy Technology Data Exchange (ETDEWEB)
Stekelenburg, A J.C. [Technische Univ. Delft (Netherlands). Interfacultair Reactor Inst.; Hagen, T.H.J.J. van der [Technische Univ. Delft (Netherlands). Interfacultair Reactor Inst.; Akker, H.E.A. van den [Technische Univ. Delft (Netherlands). Lab. voor Fysische Technologie
1992-12-01
The cross-correlation flow measurement technique, applied for measuring the coolant flow rate in a nuclear reactor, was calibrated with the use of numerical simulations of turbulent flow. The three-dimensional domain was collapsed into two dimensions. With a two-dimensional calculation of steady-state flow with transient thermal characteristics the response of thermocouples to a temperature variation was calculated. By cross-correlating the calculated thermocouple responses, the link between total flow rate and measured transit times was made. Three calibration points were taken in the range of 579 kg/s to 1477 kg/s. In this range, the product of the calculated transit time and the mass flow-rate is constant up to +3.5% and -2.4%. The reliability of the calibration was estimated at {+-}4.6%. The influence of the inlet boundary conditions, and the modelling of the flow in the upper part of the downcomer channel on the calibration result is shown to be small. A measured velocity profile effect was successfully predicted. (orig.).
Calibration methodology for instruments utilized in X radiation beams, diagnostic level
Energy Technology Data Exchange (ETDEWEB)
Penha, M. da; Potiens, A.; Caldas, L.V.E. [Instituto de Pesquisas Energeticas e Nucleares, Comissao Nacional de Energia Nuclear, Sao Paulo (Brazil)]. E-mail: mppalbu@ipen.br
2004-07-01
Methodologies for the calibration of diagnostic radiology instruments were established at the Calibration Laboratory of IPEN. The methods may be used in the calibration procedures of survey meters used in radiation protection measurements (scattered radiation), instruments used in direct beams (attenuated and non attenuated beams) and quality control instruments. The established qualities are recommended by the international standards IEC 1267 and ISO 4037-3. Two ionization chambers were used as reference systems, one with a volume of 30 cm{sup 3} for radiation protection measurements, and the other with a volume of 1 cm{sup 3} for direct beam measurements. Both are traceable to the German Primary Laboratory of Physikalisch-Technische Bundesanstalt (PTB). In the case of calibration of quality control instruments, a non-invasive method using the measurement of the spectrum endpoint was established with a portable gamma and X-ray Intertechnique spectrometer system. The methods were applied to survey meters (radiation protection measurements), ionization chambers (direct beam measurements) and kVp meters (invasive and non-invasive instruments). (Author)
Intersatellite Calibration of Microwave Radiometers for GPM
Wilheit, T. T.
2010-12-01
observations from one set of viewing parameters to those of the GMI. For the conically scanning window channel radiometers, the models are reasonably complete. Currently we have compared TMI with Windsat and arrived at a preliminary consensus calibration based on the pair. This consensus calibration standard has been applied to TMI and is currently being compared with AMSR-E on the Aqua satellite. In this way we are implementing a rolling wave spin-up of X-CAL. In this sense, the launch of GPM core will simply provide one more radiometer to the constellation; one hopes it will be the best calibrated. Water vapor and temperature sounders will use a different scenario. Some of the precipitation retrieval algorithms will use sounding channels. The GMI will include typical water vapor sounding channels. The radiances are ingested directly via 3DVAR and 4DVAR techniques into forecast models by many operational weather forecast agencies. The residuals and calibration adjustments of this process will provide a measure of the relative calibration errors throughout the constellation. The use of the ARM Southern Great Plains site as a benchmark for calibrating the more opaque channels is also being investigated.
Mathematical Formulation used by MATLAB Code to Convert FTIR Interferograms to Calibrated Spectra
Energy Technology Data Exchange (ETDEWEB)
Armstrong, Derek Elswick [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-07-19
This report discusses the mathematical procedures used to convert raw interferograms from Fourier transform infrared (FTIR) sensors to calibrated spectra. The work discussed in this report was completed as part of the Helios project at Los Alamos National Laboratory. MATLAB code was developed to convert the raw interferograms to calibrated spectra. The report summarizes the developed MATLAB scripts and functions, along with a description of the mathematical methods used by the code. The first step in working with raw interferograms is to convert them to uncalibrated spectra by applying an apodization function to the raw data and then by performing a Fourier transform. The developed MATLAB code also addresses phase error correction by applying the Mertz method. This report provides documentation for the MATLAB scripts.
Advanced event reweighting using multivariate analysis
International Nuclear Information System (INIS)
Martschei, D; Feindt, M; Honc, S; Wagner-Kuhr, J
2012-01-01
Multivariate analysis (MVA) methods, especially discrimination techniques such as neural networks, are key ingredients in modern data analysis and play an important role in high energy physics. They are usually trained on simulated Monte Carlo (MC) samples to discriminate so-called 'signal' from 'background' events and are then applied to data to select real events of signal type. Here we address procedures that improve this workflow: first, enhancing data/MC agreement by reweighting MC samples on a per-event basis; second, training MVAs directly on real data using the sPlot technique; and finally, constructing MVAs whose discriminator is independent of a chosen control variable, so that cuts on this variable do not change the discriminator shape.
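The per-event reweighting step can be illustrated with a one-variable toy: estimate the data/MC density ratio and use it as an event weight. A histogram ratio stands in here for the MVA-based ratio (which generalizes the same idea to many correlated variables); the distributions and sample sizes are invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy stand-ins for real data and simulated MC in one control variable
data = rng.normal(0.5, 1.0, 20000)
mc = rng.normal(0.0, 1.0, 20000)

# Per-event weight = data/MC density ratio, estimated binwise; an MVA
# classifier's p/(1-p) output generalizes this to many dimensions.
edges = np.linspace(-4, 4, 41)
h_data, _ = np.histogram(data, edges, density=True)
h_mc, _ = np.histogram(mc, edges, density=True)
ratio = np.divide(h_data, h_mc, out=np.ones_like(h_data), where=h_mc > 0)
idx = np.clip(np.digitize(mc, edges) - 1, 0, len(ratio) - 1)
weights = ratio[idx]

# After reweighting, the MC sample mean should move toward the data mean
print("reweighted MC mean:", np.average(mc, weights=weights))
```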
Directory of Open Access Journals (Sweden)
Márcio Mendonça
2015-10-01
Full Text Available This work analyzes multivariate control of a non-minimum-phase alcoholic fermentation process. Control is implemented with classic PID controllers supervised by a system based on fuzzy logic. The fuzzy supervisor primarily sends set-points to the PID controllers, but it also adds protective functions: for example, if the biomass value is zero or very close to zero, the fuzzy controller changes the campaign to prevent or mitigate a stall of the process. Three control architectures based on fuzzy control systems are presented, and their performance is compared with classic control in different campaigns; the third architecture, in particular, adds an adaptive function. A brief summary of fuzzy theory and related work is presented, and simulation results, conclusions, and future work close the article.
Synthesis Polarimetry Calibration
Moellenbrock, George
2017-10-01
Synthesis instrumental polarization calibration fundamentals for both linear (ALMA) and circular (EVLA) feed bases are reviewed, with special attention to the calibration heuristics supported in CASA. Practical problems affecting modern instruments are also discussed.
Calibration of neural networks using genetic algorithms, with application to optimal path planning
Smith, Terence R.; Pitney, Gilbert A.; Greenwood, Daniel
1987-01-01
Genetic algorithms (GA) are used to search the synaptic weight space of artificial neural systems (ANS) for weight vectors that optimize some network performance function. GAs do not suffer from some of the architectural constraints involved with other techniques and it is straightforward to incorporate terms into the performance function concerning the metastructure of the ANS. Hence GAs offer a remarkably general approach to calibrating ANS. GAs are applied to the problem of calibrating an ANS that finds optimal paths over a given surface. This problem involves training an ANS on a relatively small set of paths and then examining whether the calibrated ANS is able to find good paths between arbitrary start and end points on the surface.
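A minimal version of the idea: a GA searches the weight space of a tiny network directly, with fitness defined as negative output error. XOR is used as an illustrative task in place of the paper's path-planning problem, and the population size, operators, and rates are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)

# Training set: XOR, an illustrative stand-in for the path-training data
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
t = np.array([0, 1, 1, 0], float)

def forward(w, x):
    # 2-2-1 network; the 9 weights are the GA's "chromosome"
    W1, b1, W2, b2 = w[:4].reshape(2, 2), w[4:6], w[6:8], w[8]
    return np.tanh(np.tanh(x @ W1 + b1) @ W2 + b2)

def fitness(w):
    # Network performance function: negative squared output error
    return -np.sum((forward(w, X) - t) ** 2)

pop = rng.normal(0, 2, (60, 9))
best_w, best_f = pop[0].copy(), fitness(pop[0])
for gen in range(300):
    scores = np.array([fitness(w) for w in pop])
    if scores.max() > best_f:
        best_f, best_w = scores.max(), pop[np.argmax(scores)].copy()
    elite = pop[np.argsort(scores)[-20:]]            # truncation selection
    parents = elite[rng.integers(0, 20, (60, 2))]    # random parent pairs
    mask = rng.random((60, 9)) < 0.5                 # uniform crossover
    pop = np.where(mask, parents[:, 0], parents[:, 1])
    pop += rng.normal(0, 0.1, pop.shape)             # Gaussian mutation
```

Terms penalizing network metastructure (as the abstract notes) could simply be added to `fitness`, which is what makes the GA approach so general.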
ITER Articulated Inspection Arm (AIA): Geometric calibration issues of a long-reach flexible robot
International Nuclear Information System (INIS)
Arhur, D.; Perrot, Y.; Bidard, C.; Friconneau, J.P.; Palmer, J.D.; Semeraro, L.
2005-01-01
This paper is part of the Remote Handling (RH) activities for the future fusion reactor ITER. Specifically, it relates to the possibility of carrying out close inspection tasks of the Vacuum Vessel first wall using a long-reach robot called the 'Articulated Inspection Arm' (AIA). Early studies for this device identified the need to improve the accuracy of the end-effector position in such robot structures. Therefore, the aim of this R and D program, performed under the European Fusion Development Agreement (EFDA) work program, is to develop a flexible parametric model with localised compliances of an AIA-like system, in order to compensate for its flexibilities. The geometric calibration is performed using a non-linear multivariable optimisation technique, which minimises the average error between the simulated and real robot positions. The optimised set of parameters, tested on the first segment of the robot, reduces the end-effector position error by a factor of three compared with a rigid model. We expect better prediction after mechanical improvements to reduce the serious backlash in the joints. The prediction model applied to the whole arm will enable errors to be reduced from more than 1 m, in some configurations, to a final accuracy of a few centimetres.
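The calibration principle — fit model parameters by minimizing the gap between simulated and measured end-effector positions — can be sketched on a hypothetical two-link planar arm. The real AIA model adds localized joint compliances, omitted here; all geometry below is invented:

```python
import numpy as np
from scipy.optimize import least_squares

def fk(params, joints):
    """Forward kinematics of a toy 2-link planar arm (link lengths only)."""
    l1, l2 = params
    th1, th2 = joints[:, 0], joints[:, 1]
    return np.column_stack([l1 * np.cos(th1) + l2 * np.cos(th1 + th2),
                            l1 * np.sin(th1) + l2 * np.sin(th1 + th2)])

rng = np.random.default_rng(4)
true_params = np.array([1.2, 0.8])          # "real robot" geometry
joints = rng.uniform(-1.5, 1.5, (30, 2))    # calibration poses
measured = fk(true_params, joints)          # stand-in for measured positions

# Non-linear least squares over the geometric parameters, minimizing the
# simulated-vs-measured position error, as in the paper's calibration
fit = least_squares(lambda p: (fk(p, joints) - measured).ravel(),
                    x0=[1.0, 1.0])
```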
Sandia WIPP calibration traceability
Energy Technology Data Exchange (ETDEWEB)
Schuhen, M.D. [Sandia National Labs., Albuquerque, NM (United States); Dean, T.A. [RE/SPEC, Inc., Albuquerque, NM (United States)
1996-05-01
This report summarizes the work performed to establish calibration traceability for the instrumentation used by Sandia National Laboratories at the Waste Isolation Pilot Plant (WIPP) during testing from 1980-1985. Identifying the calibration traceability is an important part of establishing a pedigree for the data and is part of the qualification of existing data. In general, the requirement states that the calibration of Measuring and Test equipment must have a valid relationship to nationally recognized standards or the basis for the calibration must be documented. Sandia recognized that just establishing calibration traceability would not necessarily mean that all QA requirements were met during the certification of test instrumentation. To address this concern, the assessment was expanded to include various activities.
Sandia WIPP calibration traceability
International Nuclear Information System (INIS)
Schuhen, M.D.; Dean, T.A.
1996-05-01
This report summarizes the work performed to establish calibration traceability for the instrumentation used by Sandia National Laboratories at the Waste Isolation Pilot Plant (WIPP) during testing from 1980-1985. Identifying the calibration traceability is an important part of establishing a pedigree for the data and is part of the qualification of existing data. In general, the requirement states that the calibration of Measuring and Test equipment must have a valid relationship to nationally recognized standards or the basis for the calibration must be documented. Sandia recognized that just establishing calibration traceability would not necessarily mean that all QA requirements were met during the certification of test instrumentation. To address this concern, the assessment was expanded to include various activities
Dynamic Classification using Multivariate Locally Stationary Wavelet Processes
Park, Timothy
2018-03-11
Methods for the supervised classification of signals generally aim to assign a signal to one class for its entire time span. In this paper we present an alternative formulation for multivariate signals where the class membership is permitted to change over time. Our aim therefore changes from classifying the signal as a whole to classifying the signal at each time point to one of a fixed number of known classes. We assume that each class is characterised by a different stationary generating process; the signal as a whole will, however, be nonstationary due to class switching. To capture this nonstationarity we use the recently proposed Multivariate Locally Stationary Wavelet model. To account for uncertainty in class membership at each time point our goal is not to assign a definite class membership but rather to calculate the probability of a signal belonging to a particular class. Under this framework we prove some asymptotic consistency results. This method is also shown to perform well when applied to both simulated and accelerometer data. In both cases our method is able to place a high probability on the correct class for the majority of time points.
Dynamic Classification using Multivariate Locally Stationary Wavelet Processes
Park, Timothy; Eckley, Idris A.; Ombao, Hernando
2018-01-01
Methods for the supervised classification of signals generally aim to assign a signal to one class for its entire time span. In this paper we present an alternative formulation for multivariate signals where the class membership is permitted to change over time. Our aim therefore changes from classifying the signal as a whole to classifying the signal at each time point to one of a fixed number of known classes. We assume that each class is characterised by a different stationary generating process; the signal as a whole will, however, be nonstationary due to class switching. To capture this nonstationarity we use the recently proposed Multivariate Locally Stationary Wavelet model. To account for uncertainty in class membership at each time point our goal is not to assign a definite class membership but rather to calculate the probability of a signal belonging to a particular class. Under this framework we prove some asymptotic consistency results. This method is also shown to perform well when applied to both simulated and accelerometer data. In both cases our method is able to place a high probability on the correct class for the majority of time points.
Henn, Raphael; Kirchler, Christian G; Grossgut, Maria-Elisabeth; Huck, Christian W
2017-05-01
This study compared three commercially available spectrometers, two of which were miniaturized, in terms of their ability to predict melamine in milk powder (infant formula). All spectra were split into calibration and validation sets using the Kennard-Stone and Duplex algorithms for comparison. For each instrument the three best-performing PLSR models were constructed using SNV and Savitzky-Golay derivatives. The best RMSEP values were 0.28 g/100 g, 0.33 g/100 g and 0.27 g/100 g for the NIRFlex N-500, the microPHAZIR and the microNIR2200, respectively. Furthermore, the multivariate LOD interval [LOD_min, LOD_max] was calculated for all the PLSR models, revealing significant differences among the spectrometers, with values of 0.20-0.27 g/100 g, 0.28-0.54 g/100 g and 0.44-1.01 g/100 g for the NIRFlex N-500, the microPHAZIR and the microNIR2200, respectively. To assess the robustness of all models, white noise, baseline shift, multiplicative effects, spectral shrink and stretch, stray light and spectral shift were artificially introduced. Monitoring the RMSEP as a function of the perturbation indicated the robustness of the models and helped to compare the performance of the spectrometers. Without the additional information from the LOD calculations, one could falsely assume that all the spectrometers perform equally well, which is not the case when the multivariate evaluation and robustness data are considered. Copyright © 2017 Elsevier B.V. All rights reserved.
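The robustness protocol — perturb the spectra and watch how RMSEP degrades — is easy to reproduce on synthetic data. Ordinary least squares stands in for PLSR below, and the "spectra" are invented; only white noise is applied, but baseline shift, stretch, etc. would be added the same way:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic "spectra": 40 samples x 100 wavelengths, one analyte band
wl = np.linspace(0, 1, 100)
conc = rng.uniform(0, 1, 40)
spectra = (conc[:, None] * np.exp(-((wl - 0.5) / 0.05) ** 2)
           + rng.normal(0, 0.01, (40, 100)))

# Stand-in calibration model (OLS instead of the paper's PLSR)
b, *_ = np.linalg.lstsq(spectra, conc, rcond=None)

def rmsep(X, y):
    return np.sqrt(np.mean((X @ b - y) ** 2))

# Robustness check: RMSEP as a function of the added white-noise level
errors = [rmsep(spectra + rng.normal(0, s, spectra.shape), conc)
          for s in (0.0, 0.01, 0.02, 0.05)]
```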
Multivariate Tensor-based Brain Anatomical Surface Morphometry via Holomorphic One-Forms
Wang, Yalin; Chan, Tony F.; Toga, Arthur W.; Thompson, Paul M.
2009-01-01
Here we introduce multivariate tensor-based surface morphometry using holomorphic one-forms to study brain anatomy. We computed new statistics from the Riemannian metric tensors that retain the full information in the deformation tensor fields. We introduce two different holomorphic one-forms that induce different surface conformal parameterizations. We applied this framework to 3D MRI data to analyze hippocampal surface morphometry in Alzheimer’s Disease (AD; 26 subjects), lateral ventricula...
DEFF Research Database (Denmark)
Hansen, Peter Reinhard; Lunde, Asger; Voev, Valeri
We introduce a multivariate GARCH model that utilizes and models realized measures of volatility and covolatility. The realized measures extract information contained in high-frequency data that is particularly beneficial during periods with variation in volatility and covolatility. Applying the ...
Yehia, Ali M.; Mohamed, Heba M.
2016-01-01
Three advanced chemometric-assisted spectrophotometric methods, namely Concentration Residuals Augmented Classical Least Squares (CRACLS), Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) and Principal Component Analysis-Artificial Neural Networks (PCA-ANN), were developed, validated and benchmarked against PLS calibration to resolve severely overlapping spectra and simultaneously determine Paracetamol (PAR), Guaifenesin (GUA) and Phenylephrine (PHE) in their ternary mixture and in the presence of p-aminophenol (AP), the main degradation product and synthesis impurity of Paracetamol. The analytical performance of the proposed methods was described by percentage recoveries, root mean square error of calibration and standard error of prediction. The four multivariate calibration methods could be used directly, without any preliminary separation step, and were successfully applied to pharmaceutical formulation analysis, showing no interference from excipients.
A new method to calibrate Lagrangian model with ASAR images for oil slick trajectory.
Tian, Siyu; Huang, Xiaoxia; Li, Hongga
2017-03-15
Since Lagrangian model coefficients vary with conditions, it is necessary to calibrate the model to obtain the optimal coefficient combination for a specific oil spill accident. This paper proposes a new method to calibrate a Lagrangian model with a time series of Envisat ASAR images. Oil slicks extracted from the time-series images form the detected trajectory of the slick, and the Lagrangian model is calibrated by minimizing the difference between the simulated and detected trajectories. The mean center position distance difference (MCPD) and the rotation difference (RD) of the oil slicks' or particles' standard deviational ellipses (SDEs) are calculated as two evaluation metrics, and are used to assess the performance of the Lagrangian transport model under different coefficient combinations. The method is applied to the Penglai 19-3 oil spill accident. The simulation result with the calibrated model agrees well with the related satellite observations, suggesting that the new method is effective for calibrating Lagrangian models. Copyright © 2016 Elsevier Ltd. All rights reserved.
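The two evaluation metrics can be written down directly: the distance between mean centers, and the rotation difference of the standard deviational ellipses. Below the ellipse orientation is taken from the covariance eigenvectors; the paper's exact SDE convention may differ, so treat this as an illustrative definition:

```python
import numpy as np

def sde(points):
    """Mean center and major-axis orientation of a point cloud's
    standard deviational ellipse (orientation from covariance eigenvectors)."""
    c = points.mean(axis=0)
    cov = np.cov((points - c).T)
    vals, vecs = np.linalg.eigh(cov)
    major = vecs[:, np.argmax(vals)]
    theta = np.arctan2(major[1], major[0]) % np.pi  # axis angle in [0, pi)
    return c, theta

def mcpd_rd(simulated, detected):
    """Mean center position distance (MCPD) and rotation difference (RD)
    between a simulated and a detected particle/slick distribution."""
    c1, t1 = sde(simulated)
    c2, t2 = sde(detected)
    mcpd = np.linalg.norm(c1 - c2)
    rd = min(abs(t1 - t2), np.pi - abs(t1 - t2))  # axes are undirected
    return mcpd, rd
```

Calibration would then amount to searching the coefficient space for the combination minimizing these two quantities.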
POLCAL - POLARIMETRIC RADAR CALIBRATION
Vanzyl, J.
1994-01-01
Calibration of polarimetric radar systems is a field of research in which great progress has been made over the last few years. POLCAL (Polarimetric Radar Calibration) is a software tool intended to assist in the calibration of Synthetic Aperture Radar (SAR) systems. In particular, POLCAL calibrates Stokes matrix format data produced as the standard product by the NASA/Jet Propulsion Laboratory (JPL) airborne imaging synthetic aperture radar (AIRSAR). POLCAL was designed to be used in conjunction with data collected by the NASA/JPL AIRSAR system. AIRSAR is a multifrequency (6 cm, 24 cm, and 68 cm wavelength), fully polarimetric SAR system which produces 12 x 12 km imagery at 10 m resolution. AIRSAR was designed as a testbed for NASA's Spaceborne Imaging Radar program. While the images produced after 1991 are thought to be calibrated (phase calibrated, cross-talk removed, channel imbalance removed, and absolutely calibrated), POLCAL can and should still be used to check the accuracy of the calibration and to correct it if necessary. Version 4.0 of POLCAL is an upgrade of POLCAL version 2.0, released to AIRSAR investigators in June 1990. New options in version 4.0 include automatic absolute calibration of 89/90 data, distributed target analysis, calibration of nearby scenes with calibration parameters from a scene with corner reflectors, altitude or roll angle corrections, and calibration of errors introduced by known topography. Many sources of error can lead to false conclusions about the nature of scatterers on the surface. Errors in the phase relationship between polarization channels result in incorrect synthesis of polarization states. Cross-talk, caused by imperfections in the radar antenna itself, can also lead to error. POLCAL reduces cross-talk and corrects phase calibration without the use of ground calibration equipment. Removing the antenna patterns during SAR processing also forms a very important part of the calibration of SAR data. Errors in the
CPAC moisture study: Phase 1 report on the study of optical spectra calibration for moisture
International Nuclear Information System (INIS)
Veltkamp, D.
1993-01-01
This report discusses work done to investigate the feasibility of using optical spectroscopic methods, combined with multivariate Partial Least Squares (PLS) calibration modeling, to quantitatively predict the moisture content of the crust material in Hanford's waste tank materials. Experiments were conducted with BY-104 simulant material for the 400–1100 nm (VIS), 1100–2500 nm (NIR), and 400–4000 cm⁻¹ (IR) optical regions. The test data indicated that the NIR optical region, with a single PLS calibration factor, provided the highest accuracy response (better than 0.5 wt %) over a 0–25 wt % moisture range. Issues relating to the preparation of moisture samples with the BY-104 materials and the potential implementation within hot cells and waste tanks are also discussed. The investigation of potential material interferences, including physical and chemical properties, and the scaled demonstration of fiber-optic and camera types of applications with simulated waste tanks are outlined as future work tasks
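A one-factor PLS1 calibration, like the single-factor NIR model the report found adequate, can be sketched with one NIPALS step. The data below are synthetic and the function is illustrative, not the CPAC model itself:

```python
import numpy as np

def pls1_single_factor(X, y):
    """One-factor PLS1 (a single NIPALS iteration): returns the regression
    vector plus the centering terms needed for prediction."""
    X = np.asarray(X, float); y = np.asarray(y, float)
    xm, ym = X.mean(axis=0), y.mean()
    Xc, yc = X - xm, y - ym
    w = Xc.T @ yc                       # weight vector (covariance direction)
    w /= np.linalg.norm(w)
    t = Xc @ w                          # scores of the single latent factor
    q = (yc @ t) / (t @ t)              # y-loading
    return w * q, xm, ym                # regression vector, centering

def predict(model, X):
    b, xm, ym = model
    return (np.asarray(X, float) - xm) @ b + ym
```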
Multivariate strategies in functional magnetic resonance imaging
DEFF Research Database (Denmark)
Hansen, Lars Kai
2007-01-01
We discuss aspects of multivariate fMRI modeling, including the statistical evaluation of multivariate models and means for dimensional reduction. In a case study we analyze linear and non-linear dimensional reduction tools in the context of a 'mind reading' predictive multivariate fMRI model.
Advanced multivariate data evaluation for Fourier transform infrared spectroscopy
International Nuclear Information System (INIS)
Diewok, J.
2002-12-01
The objective of the presented dissertation was the evaluation, application and further development of advanced multivariate data evaluation methods for qualitative and quantitative Fourier transform infrared (FT-IR) measurements, especially of aqueous samples. The focus was set on 'evolving systems', i.e. chemical systems that change gradually with a master variable, such as pH, reaction time, elution time, etc., and that are increasingly encountered in analytical chemistry. FT-IR measurements on such systems yield 2-way and 3-way data sets, i.e. data matrices and cubes. The chemometric methods used were soft-modeling techniques, like multivariate curve resolution - alternating least squares (MCR-ALS) or principal component analysis (PCA), hard modeling of equilibrium systems and two-dimensional correlation spectroscopy (2D-CoS). The research results are presented in six publications and comprise: A new combination of FT-IR flow titrations and second-order calibration by MCR-ALS for the quantitative analysis of mixture samples of organic acids and sugars. A novel combination of MCR-ALS with a hard-modeled equilibrium constraint for second-order quantitation in pH-modulated samples where analytes and interferences show very similar acid-base behavior. A detailed study in which MCR-ALS and 2D-CoS are directly compared for the first time. From the analysis of simulated and experimental acid-base equilibrium systems, the performance and interpretability of the two methods is evaluated. Investigation of the binding process of vancomycin, an important antibiotic, to a cell wall analogue tripeptide by time-resolved FT-IR spectroscopy and detailed chemometric evaluation. Determination of red wine constituents by liquid chromatography with FT-IR detection and MCR-ALS for resolution of overlapped peaks. Classification of red wine cultivars from FT-IR spectroscopy of phenolic wine extracts with hierarchical clustering and soft independent modeling of class analogy (SIMCA).
Bai, Jianwen; Shen, Zhenyao; Yan, Tiezhu
2017-09-01
An essential task in evaluating global water resource and pollution problems is to obtain the optimum set of parameters in hydrological models through calibration and validation. For a large-scale watershed, single-site calibration and validation may ignore spatial heterogeneity and may not meet the needs of the entire watershed. The goal of this study is to apply a multi-site calibration and validation of the Soil and Water Assessment Tool (SWAT), using the observed flow data at three monitoring sites within the Baihe watershed of the Miyun Reservoir watershed, China. Our results indicate that the multi-site calibration parameter values are more reasonable than those obtained from single-site calibrations. These results are mainly due to significant differences in the topographic factors over the large-scale area, human activities and climate variability. The multi-site method involves dividing the large watershed into smaller sub-watersheds and applying the multi-site calibrated parameters to the entire watershed. It was anticipated that this case study could provide experience of multi-site calibration in a large-scale basin, and provide a good foundation for the simulation of other pollutants in follow-up work in the Miyun Reservoir watershed and other similar large areas.
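In objective-function form, the multi-site idea is simply: score each gauge separately, then aggregate. Nash-Sutcliffe efficiency (NSE) is the usual SWAT criterion; averaging across sites is one simple aggregation, chosen here for illustration since the paper's exact weighting is not stated in the abstract:

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the obs mean."""
    sim = np.asarray(sim, float); obs = np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def multi_site_objective(sim_by_site, obs_by_site):
    """Aggregate fit over several gauges; equal weighting is an assumption."""
    return float(np.mean([nse(s, o) for s, o in zip(sim_by_site, obs_by_site)]))
```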
Multivariate Bonferroni-type inequalities theory and applications
Chen, John
2014-01-01
Multivariate Bonferroni-Type Inequalities: Theory and Applications presents a systematic account of research discoveries on multivariate Bonferroni-type inequalities published in the past decade. The emergence of new bounding approaches pushes the conventional definitions of optimal inequalities and demands new insights into linear and Fréchet optimality. The book explores these advances in bounding techniques with corresponding innovative applications. It presents the method of linear programming for multivariate bounds, multivariate hybrid bounds, sub-Markovian bounds, and bounds using Hamil
A kernel version of multivariate alteration detection
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack
2013-01-01
Based on the established methods kernel canonical correlation analysis and multivariate alteration detection we introduce a kernel version of multivariate alteration detection. A case study with SPOT HRV data shows that the kMAD variates focus on extreme change observations.
In-flight calibration of the Swift XRT Point Spread Function
International Nuclear Information System (INIS)
Moretti, A.; Campana, S.; Chincarini, G.; Covino, S.; Romano, P.; Tagliaferri, G.; Capalbi, M.; Giommi, P.; Perri, M.; Cusumano, G.; La Parola, V.; Mangano, V.; Mineo, T.
2006-01-01
The Swift X-ray Telescope (XRT) is designed to make astrometric, spectroscopic and photometric observations of the X-ray emission from Gamma-ray bursts and their afterglows, in the energy band 0.2-10 keV. Here we report the results of the analysis of Swift XRT Point Spread Function (PSF) as measured in the first four months of the mission during the instrument calibration phase. The analysis includes the study of the PSF of different point-like sources both on-axis and off-axis with different spectral properties. We compare the in-flight data with the expectations from the on-ground calibration. On the basis of the calibration data we built an analytical model to reproduce the PSF as a function of the energy and the source position within the detector which can be applied in the PSF correction calculation for any extraction region geometry. All the results of this study are implemented in the standard public software
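For context, X-ray telescope PSFs are commonly modelled analytically with a King profile; whether that is the exact functional form adopted for the XRT is an assumption here, and the parameter values in the sketch are invented. The encircled-energy fraction used for extraction-region corrections then follows by radial integration:

```python
import numpy as np

def king_psf(r, rc=5.8, beta=1.55):
    """King profile, a common analytical PSF shape for X-ray optics.
    rc (core radius, arcsec) and beta are illustrative values only."""
    return (1.0 + (np.asarray(r, float) / rc) ** 2) ** (-beta)

def encircled_energy(radius, rc=5.8, beta=1.55, rmax=300.0, n=30001):
    """Fraction of PSF counts within `radius`, from integrating 2*pi*r*PSF(r)
    on a uniform radial grid out to rmax (simple Riemann sum)."""
    r = np.linspace(0.0, rmax, n)
    w = 2.0 * np.pi * r * king_psf(r, rc, beta)
    return float(np.sum(w[r <= radius]) / np.sum(w))
```

In a calibration like the one described, rc and beta would themselves be fitted as functions of energy and detector position.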
RSR Calculator, a tool for the Calibration / Validation activities
Directory of Open Access Journals (Sweden)
C. Durán-Alarcón
2014-12-01
Full Text Available The calibration/validation of remote sensing products is a key step that must be completed before their use in environmental applications and to ensure the success of remote sensing missions. In order to compare measurements from remote sensors on spacecraft and airborne platforms with in-situ data, it is necessary to perform a spectral comparison that takes into account the relative spectral response of the sensors. This technical note presents the RSR Calculator, a new tool to estimate, through numerical convolution, the values corresponding to each spectral range of a given sensor. RSR Calculator is useful for several applications, ranging from the convolution of spectral signatures from laboratory or field measurements to the estimation of parameters for sensor calibration, such as extraterrestrial solar irradiance (ESUN) or atmospheric transmissivity (τ) per spectral band. RSR Calculator allows the processing of spectral data and can be successfully applied in the calibration/validation of remote sensing products in the optical domain.
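The core numerical operation — convolving a spectrum with a sensor's relative spectral response — reduces to an RSR-weighted average. The sketch below assumes a uniform wavelength grid (so the integrals become plain sums) and is not the RSR Calculator code itself:

```python
import numpy as np

def band_value(wl, signal, rsr_wl, rsr):
    """Band-equivalent value of `signal` for a sensor band described by its
    relative spectral response: integral(signal*RSR) / integral(RSR).
    `wl` must be uniformly spaced; the RSR is interpolated onto it and
    taken as zero outside its tabulated range."""
    r = np.interp(wl, rsr_wl, rsr, left=0.0, right=0.0)
    return float(np.sum(signal * r) / np.sum(r))
```

The same weighting applies whether `signal` is a field-measured reflectance or a modelled quantity such as solar irradiance for ESUN estimation.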
Stochastic isotropic hyperelastic materials: constitutive calibration and model selection
Mihai, L. Angela; Woolley, Thomas E.; Goriely, Alain
2018-03-01
Biological and synthetic materials often exhibit intrinsic variability in their elastic responses under large strains, owing to microstructural inhomogeneity or when elastic data are extracted from viscoelastic mechanical tests. For these materials, although hyperelastic models calibrated to mean data are useful, stochastic representations accounting also for data dispersion carry extra information about the variability of material properties found in practical applications. We combine finite elasticity and information theories to construct homogeneous isotropic hyperelastic models with random field parameters calibrated to discrete mean values and standard deviations of either the stress-strain function or the nonlinear shear modulus, which is a function of the deformation, estimated from experimental tests. These quantities can take on different values, corresponding to possible outcomes of the experiments. As multiple models can be derived that adequately represent the observed phenomena, we apply Occam's razor by providing an explicit criterion for model selection based on Bayesian statistics. We then employ this criterion to select a model among competing models calibrated to experimental data for rubber and brain tissue under single or multiaxial loads.
A Numerical Procedure for Model Identifiability Analysis Applied to Enzyme Kinetics
DEFF Research Database (Denmark)
Daele, Timothy, Van; Van Hoey, Stijn; Gernaey, Krist
2015-01-01
The proper calibration of models describing enzyme kinetics can be quite challenging. In the literature, different procedures are available to calibrate these enzymatic models in an efficient way. However, in most cases the model structure is already decided on prior to the actual calibration...... and Pronzato (1997) and which can be easily set up for any type of model. In this paper the proposed approach is applied to the forward reaction rate of the enzyme kinetics proposed by Shin and Kim(1998). Structural identifiability analysis showed that no local structural model problems were occurring......) identifiability problems. By using the presented approach it is possible to detect potential identifiability problems and avoid pointless calibration (and experimental!) effort....
Technical Note: Procedure for the calibration and validation of kilo-voltage cone-beam CT models
Energy Technology Data Exchange (ETDEWEB)
Vilches-Freixas, Gloria; Létang, Jean Michel; Rit, Simon, E-mail: simon.rit@creatis.insa-lyon.fr [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1206, INSA-Lyon, Université Lyon 1, Centre Léon Bérard, Lyon 69373 Cedex 08 (France); Brousmiche, Sébastien [Ion Beam Application, Louvain-la-Neuve 1348 (Belgium); Romero, Edward; Vila Oliva, Marc [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1206, INSA-Lyon, Université Lyon 1, Centre Léon Bérard, Lyon 69373 Cedex 08, France and Ion Beam Application, Louvain-la-Neuve 1348 (Belgium); Kellner, Daniel; Deutschmann, Heinz; Keuschnigg, Peter; Steininger, Philipp [Institute for Research and Development on Advanced Radiation Technologies, Paracelsus Medical University, Salzburg 5020 (Austria)
2016-09-15
Purpose: The aim of this work is to propose a general and simple procedure for the calibration and validation of kilo-voltage cone-beam CT (kV CBCT) models against experimental data. Methods: The calibration and validation of the CT model is a two-step procedure: the source model then the detector model. The source is described by the direction dependent photon energy spectrum at each voltage while the detector is described by the pixel intensity value as a function of the direction and the energy of incident photons. The measurements for the source consist of a series of dose measurements in air performed at each voltage with varying filter thicknesses and materials in front of the x-ray tube. The measurements for the detector are acquisitions of projection images using the same filters and several tube voltages. The proposed procedure has been applied to calibrate and assess the accuracy of simple models of the source and the detector of three commercial kV CBCT units. If the CBCT system models had been calibrated differently, the current procedure would have been exclusively used to validate the models. Several high-purity attenuation filters of aluminum, copper, and silver combined with a dosimeter which is sensitive to the range of voltages of interest were used. A sensitivity analysis of the model has also been conducted for each parameter of the source and the detector models. Results: Average deviations between experimental and theoretical dose values are below 1.5% after calibration for the three x-ray sources. The predicted energy deposited in the detector agrees with experimental data within 4% for all imaging systems. Conclusions: The authors developed and applied an experimental procedure to calibrate and validate any model of the source and the detector of a CBCT unit. The present protocol has been successfully applied to three x-ray imaging systems. The minimum requirements in terms of material and equipment would make its implementation suitable in
System for calibration of instruments of x-ray measurement (CIR-X) applying the PGCS
International Nuclear Information System (INIS)
Gaytan G, E.; Rivero G, T.; Cruz E, P.; Tovar M, V.M.; Vergara M, F.J.
2007-01-01
The Department of Metrology of Ionizing Radiations of the ININ carries out calibration of X-ray measurement instruments that determine the operating parameters of X-ray diagnostic machines in the health and private sectors. To facilitate this task, the Department of Automation and Instrumentation developed a system for the acquisition and processing of signals coming from a reference voltage divider, with traceability to NIST, that is connected directly to the X-ray tube. The system comprises the X-ray unit, the RADCAL Dynalizer IIIU X-ray measurement equipment, a data acquisition card, a personal computer, and the acquisition and signal-processing software. (Author)
SEE cross section calibration and application to quasi-monoenergetic and spallation facilities
Directory of Open Access Journals (Sweden)
Alía Rubén García
2017-01-01
Full Text Available We describe an approach to calibrate SEE-based detectors in monoenergetic fields and apply the resulting semi-empirical responses to more general mixed-field cases in which a broad variety of particle species and energy spectra are involved. The calibration of the response functions is based both on experimental proton and neutron data and on considerations derived from Monte Carlo simulations using the FLUKA code. The application environments include the quasi-monoenergetic neutrons at RCNP, the atmospheric-like VESUVIO spallation spectrum and the CHARM high-energy accelerator test facility.
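SEE cross sections versus energy are customarily parameterized with a 4-parameter Weibull curve. The abstract does not state the functional form of the calibrated responses, so the sketch below shows only the conventional shape, with invented parameter values:

```python
import numpy as np

def weibull_cross_section(E, sigma_sat, E0, W, s):
    """Conventional 4-parameter Weibull SEE response:
    sigma(E) = sigma_sat * (1 - exp(-((E - E0)/W)**s)) for E > E0, else 0.
    E0 = onset threshold, W = width, s = shape, sigma_sat = saturation."""
    x = np.clip((np.asarray(E, float) - E0) / W, 0.0, None)
    return sigma_sat * (1.0 - np.exp(-x ** s))
```

Folding such a response with a facility's particle energy spectrum gives the expected event rate in that mixed field.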
Compact Optical Technique for Streak Camera Calibration
International Nuclear Information System (INIS)
Bell, P; Griffith, R; Hagans, K; Lerche, R; Allen, C; Davies, T; Janson, F; Justin, R; Marshall, B; Sweningsen, O
2004-01-01
The National Ignition Facility (NIF) is under construction at the Lawrence Livermore National Laboratory (LLNL) for the U.S. Department of Energy Stockpile Stewardship Program. Optical streak cameras are an integral part of the experimental diagnostics instrumentation. To accurately reduce data from the streak cameras, a temporal calibration is required. This article describes a technique for generating trains of precisely timed short-duration optical pulses (optical comb generators) that are suitable for temporal calibrations. These optical comb generators (Figure 1) are used with the LLNL optical streak cameras. They are small, portable light sources that produce a series of temporally short, uniformly spaced, optical pulses. Comb generators have been produced with 0.1, 0.5, 1, 3, 6, and 10-GHz pulse trains of 780-nm wavelength light with individual pulse durations of ∼25-ps FWHM. Signal output is via a fiber-optic connector. Signal is transported from comb generator to streak camera through multi-mode, graded-index optical fibers. At the NIF, ultrafast streak cameras are used by the Laser Fusion Program experimentalists to record fast transient optical signals. Their temporal resolution is unmatched by any other transient recorder. Their ability to spatially discriminate an image along the input slit allows them to function as a one-dimensional image recorder, time-resolved spectrometer, or multichannel transient recorder. Depending on the choice of photocathode, they can be made sensitive to photon energies from 1.1 eV to 30 keV and beyond. Comb generators perform two important functions for LLNL streak-camera users. First, comb generators are used as a precision time-mark generator for calibrating streak camera sweep rates. Accuracy is achieved by averaging many streak camera images of comb generator signals. Time-base calibrations with portable comb generators are easily done both in the calibration laboratory and in situ. Second, comb signals are applied
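The sweep-rate calibration reduces to simple arithmetic: the comb generator's repetition rate fixes the true time between pulses, and the measured pixel spacing of the pulse centroids on the streak record then yields the time base. A minimal sketch, with made-up pixel positions rather than NIF data:

```python
# Hedged sketch of a streak-camera sweep-rate estimate from one comb image:
# a comb generator of known repetition rate produces pulses a fixed time
# apart, so their average pixel spacing gives picoseconds per pixel.

def sweep_rate_ps_per_pixel(pulse_positions_px, comb_rate_ghz):
    """Time per pixel from centroids of successive comb pulses."""
    period_ps = 1000.0 / comb_rate_ghz  # true pulse spacing in picoseconds
    gaps = [b - a for a, b in zip(pulse_positions_px, pulse_positions_px[1:])]
    mean_gap_px = sum(gaps) / len(gaps)
    return period_ps / mean_gap_px
```

Averaging this estimate over many recorded comb images, as the article describes, reduces the statistical uncertainty of the sweep-rate calibration.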
Energy Technology Data Exchange (ETDEWEB)
Courtney, M.
2013-01-15
Nacelle mounted, forward looking wind lidars are beginning to be used to provide reference wind speed measurements for the power performance testing of wind turbines. In such applications, a formal calibration procedure with a corresponding uncertainty assessment will be necessary. This report presents four concepts for performing such a nacelle lidar calibration. Of the four methods, two are found to be immediately relevant and are pursued in some detail. The first of these is a line of sight calibration method in which both lines of sight (for a two beam lidar) are individually calibrated by accurately aligning the beam to pass close to a reference wind speed sensor. A testing procedure is presented, reporting requirements outlined and the uncertainty of the method analysed. It is seen that the main limitation of the line of sight calibration method is the time required to obtain a representative distribution of radial wind speeds. An alternative method is to place the nacelle lidar on the ground and incline the beams upwards to intersect a mast equipped with reference instrumentation at a known height and range. This method will be easier and faster to implement and execute, but the beam inclination introduces extra uncertainties. A procedure for conducting such a calibration is presented and initial indications of the uncertainties are given. A discussion of the merits and weaknesses of the two methods is given together with some proposals for the next important steps to be taken in this work. (Author)
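The geometry behind the line of sight method can be sketched in a few lines: the lidar senses only the along-beam (radial) component of the wind, so the reference anemometer speed must be projected onto the beam before the two can be compared. The forced-through-origin regression gain below is a simple assumed estimator, not necessarily the one prescribed in the report.

```python
import math

def radial_speed(horizontal_speed, beam_wind_angle_deg):
    """Project the reference horizontal wind speed onto the lidar beam."""
    return horizontal_speed * math.cos(math.radians(beam_wind_angle_deg))

def calibration_gain(lidar_radial, reference_radial):
    """Forced-through-origin regression gain; an ideal lidar gives 1.0."""
    num = sum(l * r for l, r in zip(lidar_radial, reference_radial))
    den = sum(r * r for r in reference_radial)
    return num / den
```

The report's point about calibration time follows directly from this picture: a defensible gain requires the paired samples to cover a representative range of radial speeds, which takes long measurement campaigns.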
Eftekhari, Ali; Parastar, Hadi
2016-09-30
The present contribution is devoted to developing multivariate analytical figures of merit (AFOMs) as a new metric for evaluation of quantitative measurements using comprehensive two-dimensional gas chromatography-mass spectrometry (GC×GC-MS). In this regard, a new definition of sensitivity (SEN) is extended to GC×GC-MS data and then other multivariate AFOMs, including analytical SEN (γ), selectivity (SEL) and limit of detection (LOD), are calculated. Also, two frequently used second- and third-order calibration algorithms, multivariate curve resolution-alternating least squares (MCR-ALS) as representative of multi-set methods and parallel factor analysis (PARAFAC) as representative of multi-way methods, are discussed to exploit pure component profiles and to calculate multivariate AFOMs. Different GC×GC-MS data sets with different numbers of components along with various levels of artifacts are simulated and analyzed. Noise, elution time shifts in both chromatographic dimensions, peak overlap and interferences are considered as the main artifacts in this work. Additionally, a new strategy is developed to estimate the noise level using the variance-covariance matrix of residuals, which is very important for calculating multivariate AFOMs. Finally, determination of polycyclic aromatic hydrocarbons (PAHs) in the aromatic fraction of heavy fuel oil (HFO) analyzed by GC×GC-MS is considered as a real case to confirm the applicability of the proposed metric to real samples. It should be pointed out that the proposed strategy in this work can be used for other types of comprehensive two-dimensional chromatographic (CTDC) techniques, such as comprehensive two-dimensional liquid chromatography (LC×LC). Copyright © 2016 Elsevier B.V. All rights reserved.
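Common textbook forms of the figures of merit named above can be sketched as follows. The norm-based SEN, the γ = SEN/noise relation, and the 3.3σ LOD convention are standard choices assumed here, not necessarily the exact estimators of this paper, and the profile numbers are purely illustrative.

```python
import math

def sensitivity(pure_profile, conc=1.0):
    """SEN: Euclidean norm of the resolved pure signal per unit concentration."""
    return math.sqrt(sum(x * x for x in pure_profile)) / conc

def analytical_sensitivity(sen, noise_sd):
    """Gamma: SEN scaled by the noise level; its inverse is the smallest
    distinguishable concentration difference."""
    return sen / noise_sd

def limit_of_detection(noise_sd, sen):
    """LOD under the common 3.3-sigma convention."""
    return 3.3 * noise_sd / sen
```

This ordering mirrors the paper's workflow: a curve-resolution step (MCR-ALS or PARAFAC) supplies the pure profile, the residual-based strategy supplies the noise level, and the AFOMs follow from those two quantities.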
A Baseline Load Schedule for the Manual Calibration of a Force Balance
Ulbrich, N.; Gisler, R.
2013-01-01
A baseline load schedule for the manual calibration of a force balance is defined that takes current capabilities at the NASA Ames Balance Calibration Laboratory into account. The chosen load schedule consists of 18 load series with a total of 194 data points. It was designed to satisfy six requirements: (i) positive and negative loadings should be applied for each load component; (i