WorldWideScience

Sample records for experimentally calibrated computational

  1. Calibration of uncertain inputs to computer models using experimentally measured quantities and the BMARS emulator

    International Nuclear Information System (INIS)

    Stripling, H.F.; McClarren, R.G.; Kuranz, C.C.; Grosskopf, M.J.; Rutter, E.; Torralva, B.R.

    2011-01-01

    We present a method for calibrating the uncertain inputs to a computer model using available experimental data. The goal of the procedure is to produce posterior distributions of the uncertain inputs such that when samples from the posteriors are used as inputs to future model runs, the model is more likely to replicate (or predict) the experimental response. The calibration is performed by sampling the space of the uncertain inputs, using the computer model (or, more likely, an emulator for the computer model) to assign weights to the samples, and applying the weights to produce the posterior distributions and generate predictions of new experiments within confidence bounds. The method is similar to Markov chain Monte Carlo (MCMC) calibration methods with independent sampling, except that we generate the samples beforehand and replace the candidate acceptance routine with a weighting scheme. We apply our method to the calibration of a Hyades 2D model of laser energy deposition in beryllium. We employ a Bayesian Multivariate Adaptive Regression Splines (BMARS) emulator as a surrogate for Hyades 2D. We treat a range of uncertainties in our system, including uncertainties in the experimental inputs, experimental measurement error, and systematic experimental timing errors. The results of the calibration are posterior distributions that agree with intuition, improve the accuracy, and decrease the uncertainty of experimental predictions. (author)
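The sample-then-weight scheme described in this abstract can be sketched in a few lines. The model, prior, and observation below are toy assumptions for illustration, not the Hyades 2D setup or the BMARS emulator:

```python
import math
import random

random.seed(0)

# Toy "computer model": the response depends on one uncertain input theta
# (a hypothetical stand-in for an expensive simulation or its emulator).
def model(theta):
    return 2.0 * theta + 1.0

y_obs, sigma = 4.0, 0.5  # experimental measurement and its error

# 1) Sample the space of the uncertain input from its prior, Uniform(0, 3).
samples = [random.uniform(0.0, 3.0) for _ in range(20000)]

# 2) Weight each sample by the likelihood that the model run reproduces
#    the experimental response (Gaussian measurement-error model).
weights = [math.exp(-0.5 * ((model(t) - y_obs) / sigma) ** 2) for t in samples]
total = sum(weights)
weights = [w / total for w in weights]

# 3) The weighted samples approximate the posterior; its mean should sit
#    near the input that solves 2*theta + 1 = y_obs, i.e. theta = 1.5.
posterior_mean = sum(w * t for w, t in zip(weights, samples))
print(round(posterior_mean, 2))
```

Unlike MCMC, there is no candidate-acceptance step: all samples are generated beforehand and simply reweighted.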

  2. On the Bayesian calibration of computer model mixtures through experimental data, and the design of predictive models

    Science.gov (United States)

    Karagiannis, Georgios; Lin, Guang

    2017-08-01

    For many real systems, several computer models may exist with different physics and predictive abilities. To achieve more accurate simulations/predictions, it is desirable for these models to be properly combined and calibrated. We propose the Bayesian calibration of computer model mixture method which relies on the idea of representing the real system output as a mixture of the available computer model outputs with unknown input dependent weight functions. The method builds a fully Bayesian predictive model as an emulator for the real system output by combining, weighting, and calibrating the available models in the Bayesian framework. Moreover, it fits a mixture of calibrated computer models that can be used by the domain scientist as a means to combine the available computer models, in a flexible and principled manner, and perform reliable simulations. It can address realistic cases where one model may be more accurate than the others at different input values because the mixture weights, indicating the contribution of each model, are functions of the input. Inference on the calibration parameters can consider multiple computer models associated with different physics. The method does not require knowledge of the fidelity order of the models. We provide a technique able to mitigate the computational overhead due to the consideration of multiple computer models that is suitable to the mixture model framework. We implement the proposed method in a real-world application involving the Weather Research and Forecasting large-scale climate model.
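The key idea, an input-dependent mixture of model outputs, can be illustrated with two toy models and an assumed logistic weight function. In the paper the weight functions are inferred in the Bayesian framework; here they are fixed by hand purely for illustration:

```python
import math

# Two hypothetical "computer models" of the same system, each accurate in a
# different region of the input space.
def model_a(x):  # assumed accurate for small x
    return x

def model_b(x):  # assumed accurate for large x
    return x ** 2 / 4.0 + 1.0

# Input-dependent mixture weight for model A: a logistic function of x
# (an assumed form; the real method learns these weights from data).
def weight_a(x):
    return 1.0 / (1.0 + math.exp(4.0 * (x - 2.0)))

def mixture(x):
    wa = weight_a(x)
    return wa * model_a(x) + (1.0 - wa) * model_b(x)

# Near x = 0 the mixture follows model A; for large x it follows model B.
print(round(mixture(0.0), 3), round(mixture(4.0), 3))
```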

  3. Electronic transport in VO2—Experimentally calibrated Boltzmann transport modeling

    International Nuclear Information System (INIS)

    Kinaci, Alper; Rosenmann, Daniel; Chan, Maria K. Y.; Kado, Motohisa; Ling, Chen; Zhu, Gaohua; Banerjee, Debasish

    2015-01-01

    Materials that undergo metal-insulator transitions (MITs) are under intense study, because the transition is scientifically fascinating and technologically promising for various applications. Among these materials, VO2 has served as a prototype due to its favorable transition temperature. While the physical underpinnings of the transition have been heavily investigated experimentally and computationally, quantitative modeling of electronic transport in the two phases has yet to be undertaken. In this work, we establish a density-functional-theory (DFT)-based approach with Hubbard U correction (DFT + U) to model electronic transport properties in VO2 in the semiconducting and metallic regimes, focusing on band transport using the Boltzmann transport equations. We synthesized high quality VO2 films and measured the transport quantities across the transition, in order to calibrate the free parameters in the model. We find that the experimental calibration of the Hubbard correction term can efficiently and adequately model the metallic and semiconducting phases, allowing for further computational design of MIT materials for desirable transport properties.

  4. Application of Iterative Robust Model-based Optimal Experimental Design for the Calibration of Biocatalytic Models

    DEFF Research Database (Denmark)

    Van Daele, Timothy; Gernaey, Krist V.; Ringborg, Rolf Hoffmeyer

    2017-01-01

    The aim of model calibration is to estimate unique parameter values from available experimental data, here applied to a biocatalytic process. The traditional approach of first gathering data followed by performing a model calibration is inefficient, since the information gathered during experimentation is not actively used to optimise the experimental design. By applying an iterative robust model-based optimal experimental design, the limited amount of data collected is used to design additional informative experiments. The algorithm is used here to calibrate the initial reaction rate of an ω-transaminase catalysed reaction in a more accurate way. The parameter confidence region estimated from the Fisher Information Matrix is compared with the likelihood confidence region, which is a more accurate, but also a computationally more expensive method. As a result, an important deviation between both approaches...
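The Fisher-Information-based confidence region mentioned above reduces, for a one-parameter linear model with Gaussian noise, to a simple closed form. The design points and noise level below are assumed for illustration; this is not the ω-transaminase rate model from the paper:

```python
import math

# Assumed experimental design points and measurement noise.
xs = [1.0, 2.0, 3.0, 4.0]
sigma = 0.2

# For the linear model y = a * x, the sensitivity dy/da is x, so the
# (scalar) Fisher information is I(a) = sum(x_i^2) / sigma^2.
fisher = sum(x * x for x in xs) / sigma ** 2

# Approximate 95% confidence half-width for the estimate of a: adding a
# design point with larger x increases I(a) and shrinks this interval,
# which is the idea behind optimal experimental design.
half_width = 1.96 / math.sqrt(fisher)
print(round(half_width, 4))
```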

  5. Calibration Under Uncertainty.

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton; Trucano, Timothy Guy

    2005-03-01

    This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
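The classical least-squares formulation described above has a closed-form solution for a one-parameter linear model; a minimal sketch with hypothetical data:

```python
# Classical calibration: choose the parameter minimizing the sum of squared
# differences between model predictions and experimental data. For the toy
# model y = a * x, the least-squares solution is a = sum(x*y) / sum(x*x).
xs = [1.0, 2.0, 3.0]
ys = [2.1, 3.9, 6.2]  # hypothetical noisy observations of y = 2x

a_hat = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
residual = sum((y - a_hat * x) ** 2 for x, y in zip(xs, ys))
print(round(a_hat, 3))
```

Note that this treatment assigns all error to the data: it has no term for uncertainty in the model itself, which is exactly the limitation CUU addresses.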

  6. Electronic transport in VO2—Experimentally calibrated Boltzmann transport modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kinaci, Alper; Rosenmann, Daniel; Chan, Maria K. Y., E-mail: debasish.banerjee@toyota.com, E-mail: mchan@anl.gov [Center for Nanoscale Materials, Argonne National Laboratory, Argonne, Illinois 60439 (United States); Kado, Motohisa [Higashifuji Technical Center, Toyota Motor Corporation, Susono, Shizuoka 410-1193 (Japan); Ling, Chen; Zhu, Gaohua; Banerjee, Debasish, E-mail: debasish.banerjee@toyota.com, E-mail: mchan@anl.gov [Materials Research Department, Toyota Motor Engineering and Manufacturing North America, Inc., Ann Arbor, Michigan 48105 (United States)

    2015-12-28

    Materials that undergo metal-insulator transitions (MITs) are under intense study, because the transition is scientifically fascinating and technologically promising for various applications. Among these materials, VO2 has served as a prototype due to its favorable transition temperature. While the physical underpinnings of the transition have been heavily investigated experimentally and computationally, quantitative modeling of electronic transport in the two phases has yet to be undertaken. In this work, we establish a density-functional-theory (DFT)-based approach with Hubbard U correction (DFT + U) to model electronic transport properties in VO2 in the semiconducting and metallic regimes, focusing on band transport using the Boltzmann transport equations. We synthesized high quality VO2 films and measured the transport quantities across the transition, in order to calibrate the free parameters in the model. We find that the experimental calibration of the Hubbard correction term can efficiently and adequately model the metallic and semiconducting phases, allowing for further computational design of MIT materials for desirable transport properties.

  7. The peak efficiency calibration of volume source using 152Eu point source in computer

    International Nuclear Information System (INIS)

    Shen Tingyun; Qian Jianfu; Nan Qinliang; Zhou Yanguo

    1997-01-01

    The author describes a method for the peak efficiency calibration of a volume source by means of a 152Eu point source for an HPGe γ spectrometer. The peak efficiency can be computed by Monte Carlo simulation after inputting the detector parameters. The computed results agree with the experimental results within ±3.8%, with one exception of about ±7.4%

  8. Experimental comparison between total calibration factors and components calibration factors of reference dosemeters used in secondary standard laboratory dosemeters

    International Nuclear Information System (INIS)

    Silva, T.A. da.

    1981-06-01

    A quantitative comparison of component calibration factors with the corresponding overall calibration factor was used to evaluate the adopted component calibration procedure in regard to parasitic elements. Judgement of significance is based upon the experimental uncertainty of a well established procedure for determination of the overall calibration factor. The experimental results obtained for different ionization chambers and different electrometers demonstrate that for one type of electrometer the parasitic elements have no influence on its sensitivity considering the experimental uncertainty of the calibration procedures. In this case the adopted procedure for determination of component calibration factors is considered to be equivalent to the procedure of determination of the overall calibration factor and thus might be used as a strong quality control measure in routine calibration. (Author)

  9. Mathematical model and computer programme for theoretical calculation of calibration curves of neutron soil moisture probes with highly effective counters

    International Nuclear Information System (INIS)

    Kolev, N.A.

    1981-07-01

    A mathematical model based on three-group theory for the computer calculation of the calibration curves of neutron soil moisture probes with highly effective counters is described. Methods for experimental correction of the mathematical model are discussed and proposed. The computer programme described allows the calibration of neutron probes with highly or less effective counters, and central or end geometry, with or without linearization of the calibration curve. The use of two calculation variants and the printing of output data make the programme useful not only for calibration but also for other research. The separate data inputs for soil and probe temperature allow analysis of the temperature influence. The computer programme and calculation examples are given. (author)

  10. Computational integration of the phases and procedures of calibration processes for radioprotection

    International Nuclear Information System (INIS)

    Santos, Gleice R. dos; Thiago, Bibiana dos S.; Rocha, Felicia D.G.; Santos, Gelson P. dos; Potiens, Maria da Penha A.; Vivolo, Vitor

    2011-01-01

    This work integrates the phases of the calibration process computationally, using a single software package, from the arrival of the instrument at the Instrument Calibration Laboratory (LCI-IPEN) to the conclusion of the calibration procedures. The initial information, such as trade mark, model, manufacturer, and owner, together with the calibration records, is thus digitized only once, up to the emission of the calibration certificate

  11. Microscope self-calibration based on micro laser line imaging and soft computing algorithms

    Science.gov (United States)

    Apolinar Muñoz Rodríguez, J.

    2018-06-01

    A technique to perform microscope self-calibration via micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by means of soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed by means of a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are accomplished through the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters by means of laser line imaging. Also, the approximation networks compute the three-dimensional vision by means of the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves accuracy of the traditional microscope calibration, which is accomplished via external references to the microscope system. The capability of the self-calibration based on soft computing algorithms is determined by means of the calibration accuracy and the micro-scale measurement error. This contribution is corroborated by an evaluation based on the accuracy of the traditional microscope calibration.

  12. CVM-X: Calibrating contingent values with experimental auction markets

    International Nuclear Information System (INIS)

    Fox, J.A.; Shogren, J.F.; Hayes, D.J.; Kliebenstein, J.B.

    1998-01-01

    We design and implement a method, CVM-X, to calibrate hypothetical survey values using experimental auction markets. We test the procedure using consumer willingness-to-pay for irradiated/nonirradiated meat. Our results show that calibration factors for those who favor the irradiation process (0.67-0.69) are less severe than for those with an initial dislike of the process (0.55-0.59), suggesting that calibration may be commodity specific

  13. Statistical analysis on experimental calibration data for flowmeters in pressure pipes

    Science.gov (United States)

    Lazzarin, Alessandro; Orsi, Enrico; Sanfilippo, Umberto

    2017-08-01

    This paper shows a statistical analysis on experimental calibration data for flowmeters (i.e. electromagnetic, ultrasonic, and turbine flowmeters) in pressure pipes. The experimental calibration data set consists of the whole archive of the calibration tests carried out on 246 flowmeters from January 2001 to October 2015 at Settore Portate of Laboratorio di Idraulica “G. Fantoli” of Politecnico di Milano, which is accredited as LAT 104 for a flow range between 3 l/s and 80 l/s, with a certified Calibration and Measurement Capability (CMC) - formerly known as Best Measurement Capability (BMC) - equal to 0.2%. The data set is split into three subsets, consisting of 94 electromagnetic, 83 ultrasonic, and 69 turbine flowmeters; each subset is analysed separately from the others, and a final comparison is then carried out. In particular, the main focus of the statistical analysis is the correction C, that is, the difference between the flow rate Q measured by the calibration facility (through the accredited procedures and the certified reference specimen) and the flow rate QM simultaneously recorded by the flowmeter under calibration, expressed as a percentage of the same QM.
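The correction C defined at the end of the abstract is straightforward to compute; the readings below are hypothetical, not values from the LAT 104 archive:

```python
# Correction C: the facility reference flow rate Q minus the reading QM of
# the flowmeter under calibration, expressed as a percentage of QM.
def correction_percent(q_ref, q_meter):
    return 100.0 * (q_ref - q_meter) / q_meter

# Hypothetical example: the facility measures 50.5 l/s while the meter
# under test reads 50.0 l/s, i.e. the meter under-reads by 1%.
print(round(correction_percent(50.5, 50.0), 2))
```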

  14. Program for large scanning-measuring table calibration on-line with the TPA-1001 computer

    International Nuclear Information System (INIS)

    Kalmykova, L.P.; Ososkov, G.A.; Pogodina, G.A.; Skryl', I.I.

    1975-01-01

    A variant of the programme for the complete calibration of the BPS-2 measuring boards is described, in which the stage of measurement and accumulation of calibration data is separated from the stage of data processing on the CDC-1604A computer. Stage-by-stage calibration allows simultaneous, parallel measurements on all six boards. This mode of operating the boards on a computer-controlled line also permits checking of the operators' actions, with running control of the data obtained. Mathematical formulae are followed by descriptions of the programmes. Reception and accumulation of the coordinates of the nodes of the calibrating plate, which carries a highly precise grid of straight lines, are accomplished by the DATREC programme running on the controlling TPA-1001 computer, with simultaneous calibration of all six BPS-2 boards. The DATREC programme is written in the SLANG-1 language. The CALBPS programme runs on the CDC-1604A computer, is written in FORTRAN, and calculates the transformation coefficients and the corresponding precision characteristics. This work has reduced the calibration time from 10-15 min per board to 10-15 min for all six boards. The time for accumulation in the TPA-1001 memory and for recording the calibration data on the CDC-608 tape recorder is 3 to 5 min; the remaining time is spent on processing on the larger CDC-1604A computer. Examples of typical output and some results of calibration measurements are given

  15. Cumulative error models for the tank calibration problem

    International Nuclear Information System (INIS)

    Goldman, A.; Anderson, L.G.; Weber, J.

    1983-01-01

    The purpose of a tank calibration equation is to obtain an estimate of the liquid volume that corresponds to a liquid level measurement. Calibration experimental errors occur in both liquid level and liquid volume measurements. If one of the errors is relatively small, the calibration equation can be determined from well-known regression and calibration methods. If both variables are assumed to be in error, then for linear cases a prototype model should be considered. Many investigators are not familiar with this model or do not have computing facilities capable of obtaining numerical solutions. This paper discusses and compares three linear models that approximate the prototype model and have the advantage of much simpler computations. Comparisons among the four models and recommendations of suitability are made from simulations and from analyses of six sets of experimental data

  16. A real-time camera calibration system based on OpenCV

    Science.gov (United States)

    Zhang, Hui; Wang, Hua; Guo, Huinan; Ren, Long; Zhou, Zuofeng

    2015-07-01

    Camera calibration is one of the essential steps in computer vision research. This paper describes a real-time camera calibration system based on OpenCV, developed and implemented in the VS2008 environment. Experimental results show that the system achieves simple and fast camera calibration, with higher precision than MATLAB and no need for manual intervention, and can be widely used in various computer vision systems.

  17. Experimental facility and void fraction calibration methods for impedance probes

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Fernando L. de; Rocha, Marcelo S., E-mail: floliveira@ipen.br, E-mail: msrocha@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2013-07-01

    An experimental facility was designed and constructed to calibrate a capacitance probe for gas-liquid flow void fraction measurements. The facility is composed of a metallic rack holding a vertical, 2,300 mm high glass tube of 38 mm ID filled with stagnant water, with a compressed-air bubbling system simulating the gas phase (vapor). At the lower part, a mixing section with a porous-media element releases the air bubbles into the water, and the compressed-air flow is measured by two calibrated rotameters. At the upper part, a stagnant water tank separates the liquid and gas. Two pressure taps located near the lower and upper ends of the glass tube provide the pressure difference measurement. The pressure difference is used in the calibration method for low void fraction values (0-15%), as described in the work. Two electrically controlled quick-closing valves, installed between the porous-media element and the upper separation tank, are used to calibrate the capacitance probe at high void fraction values (15-50%). The experimental facility design and construction, the capacitance probe calibration methods and results, and flow pattern visualization are presented. Finally, the capacitance probe will be installed on a natural circulation circuit mounted at the Nuclear Engineering Center (CEN/IPEN/CNEN-SP) to measure the instantaneous bulk void fraction. The instantaneous signals generated by the capacitance probe will allow determination of the global energy balance of the natural circulation loop. (author)
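For the low-void-fraction (0-15%) method, the void fraction follows from the manometric pressure difference across the stagnant column. The sketch below assumes the usual hydrostatic relation with the gas density neglected; the numbers are illustrative, not measured values from this facility:

```python
# In a stagnant bubbly column, the pressure drop between the two taps is
# approximately dP = rho_l * (1 - alpha) * g * H (gas density neglected),
# so the average void fraction is alpha = 1 - dP / (rho_l * g * H).
RHO_L = 998.0   # water density, kg/m^3 (assumed)
G = 9.81        # gravitational acceleration, m/s^2
H = 2.3         # tap separation, m (the 2,300 mm glass tube)

def void_fraction(delta_p):
    return 1.0 - delta_p / (RHO_L * G * H)

# A measured dP of ~21.4 kPa corresponds to a void fraction near 5%,
# i.e. within the 0-15% range where this method applies.
print(round(void_fraction(21394.0), 3))
```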

  18. Experimental facility and void fraction calibration methods for impedance probes

    International Nuclear Information System (INIS)

    Oliveira, Fernando L. de; Rocha, Marcelo S.

    2013-01-01

    An experimental facility was designed and constructed to calibrate a capacitance probe for gas-liquid flow void fraction measurements. The facility is composed of a metallic rack holding a vertical, 2,300 mm high glass tube of 38 mm ID filled with stagnant water, with a compressed-air bubbling system simulating the gas phase (vapor). At the lower part, a mixing section with a porous-media element releases the air bubbles into the water, and the compressed-air flow is measured by two calibrated rotameters. At the upper part, a stagnant water tank separates the liquid and gas. Two pressure taps located near the lower and upper ends of the glass tube provide the pressure difference measurement. The pressure difference is used in the calibration method for low void fraction values (0-15%), as described in the work. Two electrically controlled quick-closing valves, installed between the porous-media element and the upper separation tank, are used to calibrate the capacitance probe at high void fraction values (15-50%). The experimental facility design and construction, the capacitance probe calibration methods and results, and flow pattern visualization are presented. Finally, the capacitance probe will be installed on a natural circulation circuit mounted at the Nuclear Engineering Center (CEN/IPEN/CNEN-SP) to measure the instantaneous bulk void fraction. The instantaneous signals generated by the capacitance probe will allow determination of the global energy balance of the natural circulation loop. (author)

  19. Accurate technique for complete geometric calibration of cone-beam computed tomography systems

    International Nuclear Information System (INIS)

    Cho Youngbin; Moseley, Douglas J.; Siewerdsen, Jeffrey H.; Jaffray, David A.

    2005-01-01

    Cone-beam computed tomography systems have been developed to provide in situ imaging for the purpose of guiding radiation therapy. Clinical systems based on this approach have been constructed on a clinical linear accelerator (Elekta Synergy RP) and on an iso-centric C-arm. Geometric calibration involves the estimation of a set of parameters that describes the geometry of such systems, and is essential for accurate image reconstruction. We have developed a general analytic algorithm and corresponding calibration phantom for estimating these geometric parameters in cone-beam computed tomography (CT) systems. The performance of the calibration algorithm is evaluated and its application is discussed. The algorithm makes use of a calibration phantom to estimate the geometric parameters of the system. The phantom consists of 24 steel ball bearings (BBs) in a known geometry. Twelve BBs are spaced evenly at 30 deg in two plane-parallel circles separated by a given distance along the tube axis. The detector (e.g., a flat panel detector) is assumed to have no spatial distortion. The method estimates geometric parameters including the position of the x-ray source, the position and rotation of the detector, and the gantry angle, and can describe complex source-detector trajectories. The accuracy and sensitivity of the calibration algorithm were analyzed. The calibration algorithm estimates the geometric parameters with a level of accuracy such that the quality of the CT reconstruction is not degraded by the estimation error. Sensitivity analysis shows an uncertainty of 0.01 deg (around the beam direction) to 0.3 deg (normal to the beam direction) in rotation, and 0.2 mm (orthogonal to the beam direction) to 4.9 mm (beam direction) in position for the medical linear accelerator geometry. Experimental measurements using a laboratory bench cone-beam CT system of known geometry demonstrate the sensitivity of the method in detecting small changes in the imaging geometry with an uncertainty of 0.1 mm in

  20. Calibration method for a radwaste assay system

    International Nuclear Information System (INIS)

    Dulama, C.; Dobrin, R.; Toma, Al.; Paunoiu, C.

    2004-01-01

    A waste assay system entirely designed and manufactured in the Institute for Nuclear Research is used in radwaste treatment and conditioning stream to ensure compliance with national repository radiological requirements. Usually, waste assay systems are calibrated by using various experimental arrangements including calibration phantoms. The paper presents a comparative study concerning the efficiency calibration performed by shell source method and a semiempirical, computational method based on a Monte Carlo algorithm. (authors)

  1. A calibration and data assimilation method using the Bayesian MARS emulator

    International Nuclear Information System (INIS)

    Stripling, H.F.; McClarren, R.G.; Kuranz, C.C.; Grosskopf, M.J.; Rutter, E.; Torralva, B.R.

    2013-01-01

    Highlights: ► We outline a transparent, flexible method for the calibration of uncertain inputs to computer models. ► We account for model, data, emulator, and measurement uncertainties. ► The method produces improved predictive results, which are validated using leave-one-out experiments. ► Our implementation leverages the Bayesian MARS emulator, but any emulator may be substituted. -- Abstract: We present a method for calibrating the uncertain inputs to a computer model using available experimental data. The goal of the procedure is to estimate the posterior distribution of the uncertain inputs such that when samples from the posterior are used as inputs to future model runs, the model is more likely to replicate (or predict) the experimental response. The calibration is performed by sampling the space of the uncertain inputs, using the computer model (or, more likely, an emulator for the computer model) to assign weights to the samples, and applying the weights to produce the posterior distributions and generate predictions of new experiments with confidence bounds. The method is similar to Metropolis–Hastings calibration methods with independently sampled updates, except that we generate samples beforehand and replace the candidate acceptance routine with a weighting scheme. We apply our method to the calibration of a Hyades 2D model of laser energy deposition in beryllium. We employ a Bayesian Multivariate Adaptive Regression Splines (BMARS) emulator as a surrogate for Hyades 2D. We treat a range of uncertainties in our application, including uncertainties in the experimental inputs, experimental measurement error, and systematic experimental timing errors. The resulting posterior distributions agree with our existing intuition, and we validate the results by performing a series of leave-one-out predictions. We find that the calibrated predictions are considerably more accurate and less uncertain than blind sampling of the forward model alone.

  2. Computer Generated Hologram System for Wavefront Measurement System Calibration

    Science.gov (United States)

    Olczak, Gene

    2011-01-01

    Computer Generated Holograms (CGHs) have been used for some time to calibrate interferometers that require nulling optics. A typical scenario is the testing of aspheric surfaces with an interferometer placed near the paraxial center of curvature. Existing CGH technology suffers from a reduced capacity to calibrate middle and high spatial frequencies. The root cause of this shortcoming is as follows: the CGH is not placed at an image conjugate of the asphere due to limitations imposed by the geometry of the test and the allowable size of the CGH. This innovation provides a calibration system where the imaging properties in calibration can be made comparable to the test configuration. Thus, if the test is designed to have good imaging properties, then middle and high spatial frequency errors in the test system can be well calibrated. The improved imaging properties are provided by a rudimentary auxiliary optic as part of the calibration system. The auxiliary optic is simple to characterize and align to the CGH. Use of the auxiliary optic also reduces the size of the CGH required for calibration and the density of the lines required for the CGH. The resulting CGH is less expensive than the existing technology and has reduced write error and alignment error sensitivities. This CGH system is suitable for any kind of calibration using an interferometer when high spatial resolution is required. It is especially well suited for tests that include segmented optical components or large apertures.

  3. Calibration, validation, and sensitivity analysis: What's what

    International Nuclear Information System (INIS)

    Trucano, T.G.; Swiler, L.P.; Igusa, T.; Oberkampf, W.L.; Pilch, M.

    2006-01-01

    One very simple interpretation of calibration is to adjust a set of parameters associated with a computational science and engineering code so that the model agreement is maximized with respect to a set of experimental data. One very simple interpretation of validation is to quantify our belief in the predictive capability of a computational code through comparison with a set of experimental data. Uncertainty in both the data and the code are important and must be mathematically understood to correctly perform both calibration and validation. Sensitivity analysis, being an important methodology in uncertainty analysis, is thus important to both calibration and validation. In this paper, we intend to clarify the language just used and express some opinions on the associated issues. We will endeavor to identify some technical challenges that must be resolved for successful validation of a predictive modeling capability. One of these challenges is a formal description of a 'model discrepancy' term. Another challenge revolves around the general adaptation of abstract learning theory as a formalism that potentially encompasses both calibration and validation in the face of model uncertainty

  4. Automatic calibration system of the temperature instrument display based on computer vision measuring

    Science.gov (United States)

    Li, Zhihong; Li, Jinze; Bao, Changchun; Hou, Guifeng; Liu, Chunxia; Cheng, Fang; Xiao, Nianxin

    2010-07-01

With the development of computers and the maturing of optical image-processing and computer-vision measurement techniques, such methods are increasingly used in practice. Building on many years' experience in temperature measurement and the practical need for automation, we propose a fully automatic calibration method for temperature instrument displays that integrates computer vision measurement. The system synchronizes acquisition of the displayed reading with the reference temperature value, improving calibration efficiency. Based on the least-squares fitting principle, and integrating data processing with optimization theory, it rapidly and accurately automates the acquisition and calibration of temperature readings.
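The least-squares fitting step described above can be sketched as fitting a linear correction between the instrument's display reading and the reference temperature. The calibration pairs below are hypothetical numbers, not data from the paper:

```python
import numpy as np

# Hypothetical calibration pairs: reference temperature (deg C) vs. instrument display reading.
t_ref  = np.array([0.0, 25.0, 50.0, 75.0, 100.0])
t_disp = np.array([0.4, 25.2, 50.9, 76.1, 101.3])  # display reads slightly high

# Least-squares fit of a linear correction model: t_ref ~ a * t_disp + b.
a, b = np.polyfit(t_disp, t_ref, 1)

def correct(reading):
    """Apply the fitted calibration to a raw display reading."""
    return a * reading + b

# Residuals of the fit at the calibration points.
residuals = t_ref - correct(t_disp)
print(np.max(np.abs(residuals)) < 0.5)  # fit error well under half a degree
```

A real system would likely use a higher-order polynomial over a wider range; the linear model keeps the sketch minimal.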

  5. A methodology to develop computational phantoms with adjustable posture for WBC calibration

    International Nuclear Information System (INIS)

    Fonseca, T C Ferreira; Vanhavere, F; Bogaerts, R; Hunt, John

    2014-01-01

A Whole Body Counter (WBC) is a facility used to routinely assess the internal contamination of exposed workers, especially in the case of radiation release accidents. The calibration of the counting device is usually done using anthropomorphic physical phantoms representing the human body. Because constructing representative physical phantoms is challenging, virtual calibration has been introduced: the use of computational phantoms together with Monte Carlo simulation of radiation transport has been demonstrated to be a worthy alternative. In this study we introduce a methodology for the creation of realistic computational voxel phantoms with adjustable posture for WBC calibration. The methodology makes use of different software packages to enable the creation and modification of computational voxel phantoms, allowing phantoms to be developed on demand for the calibration of different WBC configurations. This in turn helps to study the major source of uncertainty associated with the in vivo measurement routine, which is the difference between the calibration phantoms and the real persons being counted. The use of realistic computational phantoms also helps to optimize the counting measurement. The open source MakeHuman and Blender software packages were used for the creation and modelling of 3D humanoid characters based on polygonal mesh surfaces, and in-house software was developed to convert the binary 3D voxel grid into an MCNPX input file. This paper summarizes the development of a library of phantoms of the human body that uses two basic phantoms, MaMP and FeMP (Male and Female Mesh Phantoms), to create sets of male and female phantoms that vary both in height and in weight. Two sets of MaMP and FeMP phantoms were developed and used for efficiency calibration of two different WBC set-ups: the Doel NPP WBC laboratory and the AGM laboratory of SCK-CEN in Mol, Belgium. (paper)
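The voxel-grid-to-input-file conversion mentioned above can be sketched as flattening a binary 3D array and run-length encoding it; MCNP-family lattice cards accept an "nR" repeat shorthand, so long runs of identical voxels compress well. The tiny grid and the 0/1 material labels are illustrative, not the actual MaMP/FeMP data:

```python
import numpy as np

# A tiny binary voxel grid (1 = tissue, 0 = air); a real phantom is far larger.
voxels = np.zeros((4, 4, 4), dtype=np.uint8)
voxels[1:3, 1:3, 1:3] = 1  # a 2x2x2 "organ" block

# Flatten the grid and run-length encode it for a compact lattice-style card.
flat = voxels.ravel()

def run_length_encode(seq):
    out, prev, count = [], int(seq[0]), 1
    for v in seq[1:]:
        if v == prev:
            count += 1
        else:
            out.append((prev, count))
            prev, count = int(v), 1
    out.append((prev, count))
    return out

encoded = run_length_encode(flat)
# "value" or "value (n-1)R": in MCNP repeat notation, "mR" repeats the
# previous entry m more times, so a run of n voxels becomes "v (n-1)R".
card = " ".join(str(v) if n == 1 else f"{v} {n - 1}R" for v, n in encoded)
print(sum(n for _, n in encoded) == flat.size)
```

The real converter also has to emit material and density assignments and the surrounding cell/surface cards, which are omitted here.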

  6. A methodology to develop computational phantoms with adjustable posture for WBC calibration

    Science.gov (United States)

    Ferreira Fonseca, T. C.; Bogaerts, R.; Hunt, John; Vanhavere, F.

    2014-11-01

A Whole Body Counter (WBC) is a facility used to routinely assess the internal contamination of exposed workers, especially in the case of radiation release accidents. The calibration of the counting device is usually done using anthropomorphic physical phantoms representing the human body. Because constructing representative physical phantoms is challenging, virtual calibration has been introduced: the use of computational phantoms together with Monte Carlo simulation of radiation transport has been demonstrated to be a worthy alternative. In this study we introduce a methodology for the creation of realistic computational voxel phantoms with adjustable posture for WBC calibration. The methodology makes use of different software packages to enable the creation and modification of computational voxel phantoms, allowing phantoms to be developed on demand for the calibration of different WBC configurations. This in turn helps to study the major source of uncertainty associated with the in vivo measurement routine, which is the difference between the calibration phantoms and the real persons being counted. The use of realistic computational phantoms also helps to optimize the counting measurement. The open source MakeHuman and Blender software packages were used for the creation and modelling of 3D humanoid characters based on polygonal mesh surfaces, and in-house software was developed to convert the binary 3D voxel grid into an MCNPX input file. This paper summarizes the development of a library of phantoms of the human body that uses two basic phantoms, MaMP and FeMP (Male and Female Mesh Phantoms), to create sets of male and female phantoms that vary both in height and in weight. Two sets of MaMP and FeMP phantoms were developed and used for efficiency calibration of two different WBC set-ups: the Doel NPP WBC laboratory and the AGM laboratory of SCK-CEN in Mol, Belgium.

  7. A proposed methodology for computational fluid dynamics code verification, calibration, and validation

    Science.gov (United States)

    Aeschliman, D. P.; Oberkampf, W. L.; Blottner, F. G.

    Verification, calibration, and validation (VCV) of Computational Fluid Dynamics (CFD) codes is an essential element of the code development process. The exact manner in which code VCV activities are planned and conducted, however, is critically important. It is suggested that the way in which code validation, in particular, is often conducted--by comparison to published experimental data obtained for other purposes--is in general difficult and unsatisfactory, and that a different approach is required. This paper describes a proposed methodology for CFD code VCV that meets the technical requirements and is philosophically consistent with code development needs. The proposed methodology stresses teamwork and cooperation between code developers and experimentalists throughout the VCV process, and takes advantage of certain synergisms between CFD and experiment. A novel approach to uncertainty analysis is described which can both distinguish between and quantify various types of experimental error, and whose attributes are used to help define an appropriate experimental design for code VCV experiments. The methodology is demonstrated with an example of laminar, hypersonic, near perfect gas, 3-dimensional flow over a sliced sphere/cone of varying geometrical complexity.

  8. A parallel calibration utility for WRF-Hydro on high performance computers

    Science.gov (United States)

    Wang, J.; Wang, C.; Kotamarthi, V. R.

    2017-12-01

A successful modeling of complex hydrological processes comprises establishing an integrated hydrological model which simulates the hydrological processes in each water regime, calibrating and validating the model performance based on observation data, and estimating the uncertainties from different sources, especially those associated with parameters. Such a model system requires large computing resources and often has to be run on High Performance Computers (HPC). The recently developed WRF-Hydro modeling system provides a significant advancement in the capability to simulate regional water cycles more completely. The WRF-Hydro model has a large range of parameters, such as those in the input table files (GENPARM.TBL, SOILPARM.TBL and CHANPARM.TBL) and several distributed scaling factors such as OVROUGHRTFAC. These parameters affect the behavior and outputs of the model and thus may need to be calibrated against observations in order to obtain good modeling performance. A parameter calibration tool for automated calibration and uncertainty estimation of the WRF-Hydro model can therefore provide significant convenience for the modeling community. In this study, we developed a customized tool based on the parallel version of the model-independent parameter estimation and uncertainty analysis tool PEST, enabling it to run on HPC systems with the PBS and SLURM workload managers and job schedulers. We also developed a series of PEST input file templates specifically for WRF-Hydro model calibration and uncertainty analysis. Here we present a flood case study from April 2013 over the Midwest. The sensitivity and uncertainties are analyzed using the customized PEST tool we developed.

  9. Computer aided method for colour calibration and analysis of digital rock photographs

    Directory of Open Access Journals (Sweden)

    Matic Potočnik

    2015-12-01

Full Text Available The methods used in geology to determine colour and colour coverage are expensive, time consuming, and/or subjective. Estimates of colour coverage can only be approximate, since they are based on rough comparison-based measuring etalons and subjective estimation, which depends upon the skill and experience of the person performing the estimation. We present a method which accelerates, simplifies, and objectifies these tasks using a computer application. It automatically calibrates the colours of a digital photo, and enables the user to read colour values and coverage, even after returning from field work. Colour identification is based on the Munsell colour system. For the purposes of colour calibration we use the X-Rite ColorChecker Passport colour chart placed onto the photographed scene. Our computer application detects the ColorChecker colour chart, and finds a colour space transformation to calibrate the colour in the photo. The user can then use the application to read colours within selected points or regions of the photo. The results of the computerised colour calibration were compared to the reference values of the ColorChecker chart. The values deviate slightly from the exact values, but the deviation is around the limit of human capability for visual comparison. We devised an experiment comparing the precision of the computerised colour analysis with manual colour analysis performed on a variety of rock samples by geology students using the Munsell Rock-color Chart. The analysis showed that the precision of manual comparative identification on multicoloured samples is somewhat problematic, since the choice of representative colours and observation points for a certain part of a sample is subjective. The computer-based method has the edge in verifiability and repeatability of the analysis, since the application allows the original photo to be saved with colour calibration, and tagging of colour-analysed points and regions.
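The colour-space transformation step can be sketched as a least-squares fit of a 3x3 correction matrix between measured and reference patch values. The RGB numbers below are made-up stand-ins (a real ColorChecker has 24 patches, and production pipelines typically add an offset term and nonlinear handling):

```python
import numpy as np

# Hypothetical measured RGB values for four chart patches (rows) and their
# known reference values.
measured = np.array([[0.20, 0.12, 0.08],
                     [0.60, 0.55, 0.50],
                     [0.10, 0.30, 0.55],
                     [0.80, 0.75, 0.20]])
reference = np.array([[0.24, 0.14, 0.10],
                      [0.65, 0.60, 0.55],
                      [0.12, 0.34, 0.60],
                      [0.85, 0.80, 0.25]])

# Least-squares 3x3 correction matrix M such that measured @ M ~ reference.
M, *_ = np.linalg.lstsq(measured, reference, rcond=None)

rms_before = np.sqrt(np.mean((measured - reference) ** 2))
rms_after = np.sqrt(np.mean((measured @ M - reference) ** 2))
print(rms_after <= rms_before)  # the fit can never be worse than no correction
```

Because the identity matrix is one of the candidate corrections, the least-squares solution is guaranteed not to increase the residual.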

  10. Modeling and Experimental Analysis of Piezoelectric Shakers for High-Frequency Calibration of Accelerometers

    International Nuclear Information System (INIS)

    Vogl, Gregory W.; Harper, Kari K.; Payne, Bev

    2010-01-01

    Piezoelectric shakers have been developed and used at the National Institute of Standards and Technology (NIST) for decades for high-frequency calibration of accelerometers. Recently, NIST researchers built new piezoelectric shakers in the hopes of reducing the uncertainties in the calibrations of accelerometers while extending the calibration frequency range beyond 20 kHz. The ability to build and measure piezoelectric shakers invites modeling of these systems in order to improve their design for increased performance, which includes a sinusoidal motion with lower distortion, lower cross-axial motion, and an increased frequency range. In this paper, we present a model of piezoelectric shakers and match it to experimental data. The equations of motion for all masses are solved along with the coupled state equations for the piezoelectric actuator. Finally, additional electrical elements like inductors, capacitors, and resistors are added to the piezoelectric actuator for matching of experimental and theoretical frequency responses.

  11. Precision alignment and calibration of optical systems using computer generated holograms

    Science.gov (United States)

    Coyle, Laura Elizabeth

As techniques for manufacturing and metrology advance, optical systems are being designed with more complexity than ever before. Given these prescriptions, alignment and calibration can be a limiting factor in their final performance. Computer generated holograms (CGHs) have several unique properties that make them powerful tools for meeting these demanding tolerances. This work will present three novel methods for alignment and calibration of optical systems using computer generated holograms. Alignment methods using CGHs require that the optical wavefront created by the CGH be related to a mechanical datum to locate it in space. An overview of existing methods is provided as background, then two new alignment methods are discussed in detail. In the first method, the CGH contact Ball Alignment Tool (CBAT) is used to align a ball or sphere mounted retroreflector (SMR) to a Fresnel zone plate pattern with micron level accuracy. The ball is bonded directly onto the CGH substrate and provides permanent, accurate registration between the optical wavefront and a mechanical reference to locate the CGH in space. A prototype CBAT was built and used to align and bond an SMR to a CGH. In the second method, CGH references are used to align axi-symmetric optics in four degrees of freedom with low uncertainty and real time feedback. The CGHs create simultaneous 3D optical references where the zero order reflection sets tilt and the first diffracted order sets centration. The flexibility of the CGH design can be used to accommodate a wide variety of optical systems and maximize sensitivity to misalignments. A 2-CGH prototype system was aligned multiple times and the alignment uncertainty was quantified and compared to an error model. Finally, an enhanced calibration method is presented. It uses multiple perturbed measurements of a master sphere to improve the calibration of CGH-based Fizeau interferometers ultimately measuring aspheric test surfaces. The improvement in the

  12. Differential computation method used to calibrate the angle-centroid relationship in coaxial reverse Hartmann test

    Science.gov (United States)

    Li, Xinji; Hui, Mei; Zhao, Zhu; Liu, Ming; Dong, Liquan; Kong, Lingqin; Zhao, Yuejin

    2018-05-01

A differential computation method is presented to improve the precision of calibration for the coaxial reverse Hartmann test (RHT). In the calibration, the accuracy of the distance measurement greatly influences the surface shape test, as demonstrated in the mathematical analyses. However, high-precision absolute distance measurement is difficult in the calibration. Thus, a differential computation method that only requires the relative distance was developed. In the proposed method, a liquid crystal display screen successively displayed two regular dot matrix patterns with different dot spacing. In a special case, images on the detector exhibited similar centroid distributions during the reflector translation. Thus, the critical value of the relative displacement distance and the centroid distributions of the dots on the detector were utilized to establish the relationship between the rays at certain angles and the detector coordinates. Experiments revealed the approximately linear behavior of the centroid variation with the relative displacement distance. With the differential computation method, the precision of the traditional calibration was improved to 10^-5 rad root mean square, and the precision of the RHT was increased by approximately 100 nm.
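The centroid distributions referred to above are intensity-weighted centroids of the imaged dots. A minimal sketch on a synthetic single-dot frame (the image and dot profile are invented for illustration):

```python
import numpy as np

# Synthetic detector frame: one bright dot on a dark background.
img = np.zeros((64, 64))
img[30:33, 40:43] = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]  # dot centred at (31, 41)

# Intensity-weighted centroid: the quantity the calibration relates to ray
# angle through the relative displacement distance.
ys, xs = np.indices(img.shape)
total = img.sum()
cy, cx = (ys * img).sum() / total, (xs * img).sum() / total
print(cy, cx)  # -> 31.0 41.0
```

In the actual method, centroids of a whole dot matrix are tracked against the relative displacement distance, and the approximately linear variation is what gets calibrated.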

  13. CT crown for on-machine scale calibration in Computed Tomography

    DEFF Research Database (Denmark)

    Stolfi, Alessandro; De Chiffre, Leonardo

    2016-01-01

    A novel artefact for on-machine calibration of the scale in 3D X-ray Computed Tomography (CT) is presented. The artefact comprises an invar disc on which several reference ruby spheres are positioned at different heights using carbon fibre rods. The artefact is positioned and scanned together...

  14. 3D artefact for concurrent scale calibration in Computed Tomography

    DEFF Research Database (Denmark)

    Stolfi, Alessandro; De Chiffre, Leonardo

    2016-01-01

    A novel artefact for calibration of the scale in 3D X-ray Computed Tomography (CT) is presented. The artefact comprises a carbon fibre tubular structure on which a number of reference ruby spheres are glued. The artefact is positioned and scanned together with the workpiece inside the CT scanner...

  15. An innovative method for coordinate measuring machine one-dimensional self-calibration with simplified experimental process.

    Science.gov (United States)

    Fang, Cheng; Butler, David Lee

    2013-05-01

In this paper, an innovative method for CMM (Coordinate Measuring Machine) self-calibration is proposed. In contrast to conventional CMM calibration, which relies heavily on a high precision reference standard such as a laser interferometer, the proposed calibration method is based on a low-cost artefact fabricated from commercially available precision ball bearings. By optimizing the mathematical model and rearranging the data sampling positions, the experimental process and data analysis can be simplified. In mathematical terms, the number of samples can be minimized by eliminating the redundant equations among those configured by the experimental data array. The section lengths of the artefact are measured at arranged positions, with which an equation set can be configured to determine the measurement errors at the corresponding positions. With the proposed method, the equation set is short of one equation, which can be supplemented either by measuring the total length of the artefact with a higher-precision CMM or by calibrating the single point error at the extreme position with a laser interferometer. In this paper, the latter is selected. With spline interpolation, the error compensation curve can be determined. To verify the proposed method, a simple calibration system was set up on a commercial CMM. Experimental results showed that, with the error compensation curve, the uncertainty of the measurement can be reduced to 50%.
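The error compensation curve step can be sketched as interpolating the discrete errors found by the equation set and subtracting them from raw readings. The positions and error values below are invented, and piecewise-linear `np.interp` stands in for the paper's spline to keep the sketch dependency-free:

```python
import numpy as np

# Hypothetical positional errors (micrometres) determined at the arranged
# sampling positions (mm) by solving the equation set described above.
positions = np.array([0.0, 100.0, 200.0, 300.0, 400.0])
errors_um = np.array([0.0, 1.2, 1.8, 1.5, 0.6])

def compensate(reading_mm):
    """Subtract the interpolated error (converted to mm) from a raw CMM reading."""
    return reading_mm - np.interp(reading_mm, positions, errors_um) * 1e-3

# At a sampled position the full calibrated error is removed.
print(compensate(200.0))
```

A cubic spline (e.g. `scipy.interpolate.CubicSpline`) would give a smoother curve between the sampled positions, which is closer to what the paper describes.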

  16. On the calibration strategies of the Johnson–Cook strength model: Discussion and applications to experimental data

    International Nuclear Information System (INIS)

    Gambirasio, Luca; Rizzi, Egidio

    2014-01-01

The present paper aims at assessing the various procedures adoptable for calibrating the parameters of the so-called Johnson–Cook strength model, expressing the deviatoric behavior of elastoplastic materials, with particular reference to the description of High Strain Rate (HSR) phenomena. The procedures rely on input experimental data consisting of a set of hardening functions recorded at different equivalent plastic strain rates and temperatures. After a brief review of the main characteristics of the Johnson–Cook strength model, five different calibration strategies are framed and described in detail. The assessment is implemented through a systematic application of each calibration strategy to three different real material cases, i.e. a DH-36 structural steel, a commercially pure niobium and an AL-6XN stainless steel. Experimental data available in the literature are considered. Results are presented in terms of plots showing the predicted Johnson–Cook hardening functions against the experimental trends, together with tables describing the fitting problems which arise in each case, assessing the errors introduced in both the lower yield stress and the overall plastic flow. The consequences of each calibration approach are then carefully compared and evaluated. A discussion of the positive and negative aspects of each strategy is presented, and some suggestions on how to choose the best calibration approach are outlined, considering the available experimental data and the objectives of the subsequent modeling process. The proposed considerations should provide a useful guideline in the process of determining the best Johnson–Cook parameters in each specific situation in which the model is going to be adopted. A last section introduces some considerations about the calibration of the Johnson–Cook strength model through experimental data different from those consisting in a set of hardening functions relative to different equivalent plastic strain
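For reference, the Johnson-Cook flow stress being calibrated is the product of a strain-hardening term, a strain-rate term, and a thermal-softening term. The parameter values below are illustrative numbers loosely in the range reported for structural steels, not the calibrated constants from the paper:

```python
import numpy as np

def johnson_cook(eps_p, eps_rate, T, A, B, n, C, m, eps_rate0, T_room, T_melt):
    """Johnson-Cook flow stress:
    sigma = (A + B*eps_p^n) * (1 + C*ln(eps_rate/eps_rate0)) * (1 - T*^m),
    where T* = (T - T_room) / (T_melt - T_room) is the homologous temperature."""
    t_star = (T - T_room) / (T_melt - T_room)
    return ((A + B * eps_p ** n)
            * (1.0 + C * np.log(eps_rate / eps_rate0))
            * (1.0 - t_star ** m))

# Quasi-static flow stress (MPa) at room temperature and 10% plastic strain.
sigma = johnson_cook(eps_p=0.1, eps_rate=1.0, T=293.0,
                     A=350.0, B=275.0, n=0.36, C=0.022, m=1.0,
                     eps_rate0=1.0, T_room=293.0, T_melt=1793.0)
print(round(float(sigma), 1))
```

Each calibration strategy discussed in the paper amounts to a different way of fitting A, B, n, C and m to the measured hardening functions.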

  17. A case study on robust optimal experimental design for model calibration of ω-Transaminase

    DEFF Research Database (Denmark)

    Daele, Timothy, Van; Van Hauwermeiren, Daan; Ringborg, Rolf Hoffmeyer

    the experimental space. However, it is expected that more informative experiments can be designed to increase the confidence of the parameter estimates. Therefore, we apply Optimal Experimental Design (OED) to the calibrated model of Shin and Kim (1998). The total number of samples was retained to allow fair......” parameter values are not known before finishing the model calibration. However, it is important that the chosen parameter values are close to the real parameter values, otherwise the OED can possibly yield non-informative experiments. To counter this problem, one can use robust OED. The idea of robust OED......Proper calibration of models describing enzyme kinetics can be quite challenging. This is especially the case for more complex models like transaminase models (Shin and Kim, 1998). The latter fitted model parameters, but the confidence on the parameter estimation was not derived. Hence...

  18. Calibrating EEG-based motor imagery brain-computer interface from passive movement.

    Science.gov (United States)

    Ang, Kai Keng; Guan, Cuntai; Wang, Chuanchu; Phua, Kok Soon; Tan, Adrian Hock Guan; Chin, Zheng Yang

    2011-01-01

    EEG data from performing motor imagery are usually collected to calibrate a subject-specific model for classifying the EEG data during the evaluation phase of motor imagery Brain-Computer Interface (BCI). However, there is no direct objective measure to determine if a subject is performing motor imagery correctly for proper calibration. Studies have shown that passive movement, which is directly observable, induces Event-Related Synchronization patterns that are similar to those induced from motor imagery. Hence, this paper investigates the feasibility of calibrating EEG-based motor imagery BCI from passive movement. EEG data of 12 healthy subjects were collected during motor imagery and passive movement of the hand by a haptic knob robot. The calibration models using the Filter Bank Common Spatial Pattern algorithm on the EEG data from motor imagery were compared against using the EEG data from passive movement. The performances were compared based on the 10×10-fold cross-validation accuracies of the calibration data, and off-line session-to-session transfer kappa values to other sessions of motor imagery performed on another day. The results showed that the calibration performed using passive movement yielded higher model accuracy and off-line session-to-session transfer (73.6% and 0.354) than the calibration performed using motor imagery (71.3% and 0.311), and no significant differences were observed between the two groups (p=0.20, 0.23). Hence, this study shows that it is feasible to calibrate EEG-based motor imagery BCI from passive movement.
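The session-to-session transfer metric quoted above is Cohen's kappa, which measures classification agreement beyond chance. A minimal self-contained implementation on a toy two-class (left vs. right hand) label sequence, unrelated to the study's actual EEG data:

```python
import numpy as np

def cohen_kappa(y_true, y_pred):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    labels = np.unique(np.concatenate([y_true, y_pred]))
    p_obs = np.mean(y_true == y_pred)
    p_exp = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in labels)
    return (p_obs - p_exp) / (1.0 - p_exp)

# Toy example: 8 trials, one disagreement.
truth = [0, 0, 0, 0, 1, 1, 1, 1]
pred  = [0, 0, 0, 1, 1, 1, 1, 1]
print(round(cohen_kappa(truth, pred), 3))  # -> 0.75
```

Kappa of 0 means chance-level classification and 1 means perfect agreement, which is why values like 0.354 vs. 0.311 are meaningful for comparing calibration conditions.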

  19. Evaluating Statistical Process Control (SPC) techniques and computing the uncertainty of force calibrations

    Science.gov (United States)

    Navard, Sharon E.

    1989-01-01

    In recent years there has been a push within NASA to use statistical techniques to improve the quality of production. Two areas where statistics are used are in establishing product and process quality control of flight hardware and in evaluating the uncertainty of calibration of instruments. The Flight Systems Quality Engineering branch is responsible for developing and assuring the quality of all flight hardware; the statistical process control methods employed are reviewed and evaluated. The Measurement Standards and Calibration Laboratory performs the calibration of all instruments used on-site at JSC as well as those used by all off-site contractors. These calibrations must be performed in such a way as to be traceable to national standards maintained by the National Institute of Standards and Technology, and they must meet a four-to-one ratio of the instrument specifications to calibrating standard uncertainty. In some instances this ratio is not met, and in these cases it is desirable to compute the exact uncertainty of the calibration and determine ways of reducing it. A particular example where this problem is encountered is with a machine which does automatic calibrations of force. The process of force calibration using the United Force Machine is described in detail. The sources of error are identified and quantified when possible. Suggestions for improvement are made.
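The uncertainty bookkeeping described above can be sketched in two steps: combine independent error sources by root-sum-of-squares, then apply the four-to-one check. The error magnitudes and the 0.25% specification are illustrative assumptions, not JSC figures:

```python
import math

# Hypothetical independent error sources in a force calibration (% of reading).
sources = {"standard": 0.02, "repeatability": 0.03, "alignment": 0.01}

# Combined standard uncertainty by root-sum-of-squares, assuming independence.
combined = math.sqrt(sum(u ** 2 for u in sources.values()))

# Four-to-one check: instrument specification vs. calibration uncertainty.
spec = 0.25
print(round(combined, 4), spec / combined >= 4.0)  # -> 0.0374 True
```

When the ratio falls below four to one, the abstract's point is that the exact combined uncertainty must be computed and the dominant source reduced.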

  20. Experimental and calculated calibration of ionization chambers with air circulation

    CERN Document Server

    Peetermans, A

    1972-01-01

    The reports describes the method followed in order to calibrate the different ionization chambers with air circulation, used by the 'Health Physics Group'. The calculations agree more precisely with isotopes cited previously (/sup 11/C, /sup 13/N, /sup 15/O, /sup 41 /Ar, /sup 14/O, /sup 38/Cl) as well as for /sup 85/Kr, /sup 133/Xe, /sup 14/C and tritium which are used for the experimental standardisation of different chambers.

  1. Computations for a condenser. Experimental results

    International Nuclear Information System (INIS)

    Walden, Jean.

    1975-01-01

    Computations for condensers are presented with experimental results. The computations are concerned with the steam flux at the condenser input, and inside the tube bundle. Experimental results are given for the flux inside the condenser sleeve and the flow passing through the tube bundle [fr

  2. Accuracy evaluation of optical distortion calibration by digital image correlation

    Science.gov (United States)

    Gao, Zeren; Zhang, Qingchuan; Su, Yong; Wu, Shangquan

    2017-11-01

Due to its convenience of operation, the camera calibration algorithm based on a plane template is widely used in image measurement, computer vision and other fields. How to select a suitable distortion model is a persistent problem, so there is a clear need for an experimental evaluation of the accuracy of camera distortion calibrations. This paper presents an experimental method for evaluating camera distortion calibration accuracy, which is easy to implement, has high precision, and is suitable for a variety of commonly used lenses. First, we use the digital image correlation method to calculate the in-plane rigid body displacement field of an image displayed on a liquid crystal display before and after translation, as captured with a camera. Next, we use a calibration board to calibrate the camera and obtain calibration parameters, which are used to correct the calculation points of the image before and after deformation. The displacement fields before and after correction are compared to analyze the distortion calibration results. Experiments were carried out to evaluate the performance of two commonly used industrial camera lenses with four commonly used distortion models.
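One of the commonly used distortion models referred to above is the even-order radial model. A minimal sketch applying it to an ideal grid, with illustrative coefficients (the residual field between distorted and ideal points is the kind of signature a DIC-based evaluation would measure):

```python
import numpy as np

def radial_distort(x, y, k1, k2):
    """Even-order radial distortion in normalized image coordinates:
    x_d = x * (1 + k1*r^2 + k2*r^4), and likewise for y."""
    r2 = x ** 2 + y ** 2
    factor = 1.0 + k1 * r2 + k2 * r2 ** 2
    return x * factor, y * factor

# A grid of ideal points, distorted with small illustrative coefficients.
x, y = np.meshgrid(np.linspace(-0.5, 0.5, 5), np.linspace(-0.5, 0.5, 5))
xd, yd = radial_distort(x, y, k1=-0.1, k2=0.01)

# Residual displacement field: zero at the optical axis, growing with radius.
residual = np.hypot(xd - x, yd - y)
print(residual.min() == 0.0, residual.max() > 0.0)
```

Comparing such residual fields before and after applying each calibrated model is, in essence, what the paper's evaluation procedure does with real DIC data.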

  3. Absolute intensity calibration of the 32-channel heterodyne radiometer on experimental advanced superconducting tokamak

    Energy Technology Data Exchange (ETDEWEB)

    Liu, X.; Zhao, H. L.; Liu, Y., E-mail: liuyong@ipp.ac.cn; Li, E. Z.; Han, X.; Ti, A.; Hu, L. Q.; Zhang, X. D. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Domier, C. W.; Luhmann, N. C. [Department of Electrical and Computer Engineering, University of California at Davis, Davis, California 95616 (United States)

    2014-09-15

    This paper presents the results of the in situ absolute intensity calibration for the 32-channel heterodyne radiometer on the experimental advanced superconducting tokamak. The hot/cold load method is adopted, and the coherent averaging technique is employed to improve the signal to noise ratio. Measured spectra and electron temperature profiles are compared with those from an independent calibrated Michelson interferometer, and there is a relatively good agreement between the results from the two different systems.

  4. Hierarchical calibration and validation framework of bench-scale computational fluid dynamics simulations for solvent-based carbon capture. Part 2: Chemical absorption across a wetted wall column: Original Research Article: Hierarchical calibration and validation framework of bench-scale computational fluid dynamics simulations

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Chao [Pacific Northwest National Laboratory, Physical and Computational Sciences Directorate, Richland WA; Xu, Zhijie [Pacific Northwest National Laboratory, Physical and Computational Sciences Directorate, Richland WA; Lai, Kevin [Pacific Northwest National Laboratory, Physical and Computational Sciences Directorate, Richland WA; Whyatt, Greg [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland WA; Marcy, Peter W. [Los Alamos National Laboratory, Statistical Sciences Group, Los Alamos NM; Sun, Xin [Oak Ridge National Laboratory, Energy and Transportation Science Division, Oak Ridge TN

    2017-10-24

The first part of this paper (Part 1) presents a numerical model for non-reactive physical mass transfer across a wetted wall column (WWC). In Part 2, we improve the existing computational fluid dynamics (CFD) model to simulate chemical absorption occurring in a WWC as a bench-scale study of solvent-based carbon dioxide (CO2) capture. To generate data for WWC model validation, CO2 mass transfer across a monoethanolamine (MEA) solvent was first measured on a WWC experimental apparatus. The numerical model developed in this work can account for both chemical absorption and desorption of CO2 in MEA. In addition, the overall mass transfer coefficient predicted using traditional/empirical correlations is compared with CFD predictions for both steady and wavy falling films. A Bayesian statistical calibration algorithm is adopted to calibrate the reaction rate constants for chemical absorption/desorption of CO2 across a falling film of MEA. The posterior distributions of the two transport properties, i.e., Henry's constant and gas diffusivity in the non-reacting nitrous oxide (N2O)/MEA system obtained in Part 1 of this study, serve as priors for the calibration of the CO2 reaction rate constants via the N2O/CO2 analogy method. The calibrated model can be used to predict CO2 mass transfer in a WWC over a wider range of operating conditions.
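The Bayesian calibration of a rate constant can be sketched with a random-walk Metropolis sampler on a toy one-parameter forward model. Everything here (the linear model, the synthetic measurement, the flat prior) is a deliberate simplification of the paper's actual CFD-based likelihood:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward model: predicted flux proportional to a rate constant k.
def model(k):
    return 2.0 * k

# Synthetic "measurement" and its noise standard deviation.
y_obs, sigma = 6.1, 0.2

def log_post(k):
    # Flat prior on k > 0; Gaussian measurement likelihood.
    return -np.inf if k <= 0 else -0.5 * ((model(k) - y_obs) / sigma) ** 2

# Random-walk Metropolis sampling of the posterior over k.
k, lp = 1.0, log_post(1.0)
samples = []
for _ in range(5000):
    k_new = k + rng.normal(0.0, 0.1)
    lp_new = log_post(k_new)
    if np.log(rng.random()) < lp_new - lp:  # accept/reject step
        k, lp = k_new, lp_new
    samples.append(k)

posterior = np.array(samples[1000:])  # discard burn-in
print(round(float(posterior.mean()), 2))  # centred near y_obs / 2 = 3.05
```

In the paper, the forward model is the WWC CFD simulation and the priors come from the Part 1 posteriors rather than being flat.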

  5. Optimisation of the energy efficiency of bread-baking ovens using a combined experimental and computational approach

    International Nuclear Information System (INIS)

    Khatir, Zinedine; Paton, Joe; Thompson, Harvey; Kapur, Nik; Toropov, Vassili

    2013-01-01

Highlights: ► A scientific framework for optimising oven operating conditions is presented. ► Experiments measuring the local convective heat transfer coefficient are undertaken. ► An energy efficiency model is developed with experimentally calibrated CFD analysis. ► Designing ovens with optimum heat transfer coefficients reduces energy use. ► Results demonstrate a strong case for designing and manufacturing energy-optimised ovens. - Abstract: Changing legislation and rising energy costs are bringing the need for efficient baking processes into much sharper focus. High-speed air impingement bread-baking ovens are complex systems using air flow to transfer heat to the product. In this paper, computational fluid dynamics (CFD) is combined with experimental analysis to develop a rigorous scientific framework for the rapid generation of forced convection oven designs. A design parameterisation of a three-dimensional generic oven model is carried out for a wide range of oven sizes and flow conditions to optimise desirable features such as temperature uniformity throughout the oven, energy efficiency and manufacturability. Coupled with the computational model, a series of experiments measuring the local convective heat transfer coefficient (h_c) is undertaken. The facility used for the heat transfer experiments is representative of a scaled-down production oven in which the air temperature and velocity, as well as important physical constraints such as nozzle dimensions and nozzle-to-surface distance, can be varied. An efficient energy model is developed using a CFD analysis calibrated with experimentally determined inputs. Results from a range of oven designs are presented together with the ensuing energy usage and savings.
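The measured quantity h_c follows from Newton's law of cooling, h_c = q / (T_air - T_surface). A one-line sketch with illustrative impingement-oven numbers, not measurements from the paper:

```python
# Local convective heat transfer coefficient from Newton's law of cooling.
# q_flux in W/m^2, temperatures in deg C; result in W/(m^2*K).
def h_c(q_flux, t_air, t_surface):
    return q_flux / (t_air - t_surface)

# Illustrative values: 3 kW/m^2 heat flux, 220 C jet air onto a 180 C surface.
print(h_c(3000.0, 220.0, 180.0))  # -> 75.0
```

In the experiments, q and the surface temperature are measured locally under each nozzle, giving the spatial h_c distribution that calibrates the CFD model.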

  6. Computerized 50 liter volume calibration system

    International Nuclear Information System (INIS)

    Proffitt, T.H.

    1990-01-01

A system has been designed for the Savannah River Site that will be used to calibrate product shipping containers. For accountability purposes, it is necessary that these containers be calibrated to a very high precision. The Computerized 50 Liter Volume Calibration System (CVCS), which is based on the Ideal Gas Law (IGL), will use reference volumes with a precision of no less than ±0.03%, and helium, to calibrate the containers to a total error of no greater than ±0.10%. A statistical interpretation of the system has given a theoretical total calculated error of ±0.08%. Tests with the system will be performed once fabrication is complete to experimentally verify the calculated error. Since the total error was calculated using the worst-case scenario, the actual error should be significantly less than the calculated value. The computer-controlled, totally automated system is traceable to the National Institute of Standards and Technology. The design, calibration procedure, and statistical interpretation of the system will be discussed. 1 ref
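The Ideal Gas Law calibration the abstract describes can be reduced, in its simplest form, to an isothermal gas-expansion calculation: gas held at a known pressure in a reference volume expands into the evacuated container, and the unknown volume follows from the pressure ratio. The sketch below is illustrative only (the function name and the single-expansion model are assumptions, not the actual CVCS procedure):

```python
def unknown_volume(v_ref, p_initial, p_final):
    """Isothermal ideal-gas expansion from a reference volume into an
    evacuated container: p1 * V_ref = p2 * (V_ref + V_unknown)."""
    if not 0.0 < p_final < p_initial:
        raise ValueError("expansion must lower the pressure")
    return v_ref * (p_initial / p_final - 1.0)

# e.g. a 10 L reference at 150 kPa settling to 25 kPa implies a 50 L container
volume = unknown_volume(10.0, 150.0, 25.0)
```

In practice the quoted ±0.10% error budget also has to absorb temperature drift and pressure-gauge uncertainty, which the one-line model above ignores.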

  7. Extrinsic Calibration of Camera and 2D Laser Sensors without Overlap

    Directory of Open Access Journals (Sweden)

    Khalil M. Ahmad Yousef

    2017-10-01

Full Text Available Extrinsic calibration of a camera and a 2D laser range finder (lidar) is crucial in sensor data fusion applications; for example, SLAM algorithms used in mobile robot platforms. The fundamental challenge of extrinsic calibration arises when the camera and lidar sensors do not overlap or share the same field of view. In this paper we propose a novel and flexible approach for the extrinsic calibration of a camera-lidar system without overlap, which can be used for robotic platform self-calibration. The approach is based on the robot–world hand–eye calibration (RWHE) problem, proven to have efficient and accurate solutions. First, the system was mapped to the RWHE calibration problem modeled as the linear relationship AX = ZB, where X and Z are unknown calibration matrices. Then, we computed the transformation matrix B, which was the main challenge in the above mapping. The computation is based on reasonable assumptions about geometric structure in the calibration environment. The reliability and accuracy of the proposed approach are compared to a state-of-the-art method in extrinsic 2D lidar to camera calibration. Experimental results from real datasets indicate that the proposed approach provides better results, with L2-norm translational and rotational deviations of 314 mm and 0.12° respectively.

  8. Extrinsic Calibration of Camera and 2D Laser Sensors without Overlap.

    Science.gov (United States)

    Ahmad Yousef, Khalil M; Mohd, Bassam J; Al-Widyan, Khalid; Hayajneh, Thaier

    2017-10-14

Extrinsic calibration of a camera and a 2D laser range finder (lidar) is crucial in sensor data fusion applications; for example, SLAM algorithms used in mobile robot platforms. The fundamental challenge of extrinsic calibration arises when the camera and lidar sensors do not overlap or share the same field of view. In this paper we propose a novel and flexible approach for the extrinsic calibration of a camera-lidar system without overlap, which can be used for robotic platform self-calibration. The approach is based on the robot-world hand-eye calibration (RWHE) problem, proven to have efficient and accurate solutions. First, the system was mapped to the RWHE calibration problem modeled as the linear relationship AX = ZB, where X and Z are unknown calibration matrices. Then, we computed the transformation matrix B, which was the main challenge in the above mapping. The computation is based on reasonable assumptions about geometric structure in the calibration environment. The reliability and accuracy of the proposed approach are compared to a state-of-the-art method in extrinsic 2D lidar to camera calibration. Experimental results from real datasets indicate that the proposed approach provides better results, with L2-norm translational and rotational deviations of 314 mm and 0.12° respectively.

  9. Wavelength calibration of x-ray imaging crystal spectrometer on Joint Texas Experimental Tokamak

    International Nuclear Information System (INIS)

    Yan, W.; Chen, Z. Y.; Jin, W.; Huang, D. W.; Ding, Y. H.; Li, J. C.; Zhang, X. Q.; Zhuang, G.; Lee, S. G.; Shi, Y. J.

    2014-01-01

The wavelength calibration of an x-ray imaging crystal spectrometer is a key issue for measurements of plasma rotation. Because no standard radiation source is available near 3.95 Å and no other diagnostic can measure the core rotation for inter-calibration, an indirect method using the tokamak plasma itself has been applied on the Joint Texas Experimental Tokamak. It is found that the core toroidal rotation velocity is not zero during the locked-mode phase. This is consistent with the observation of small oscillations on soft x-ray signals and electron cyclotron emission during the locked-mode phase

  10. Calculation of calibration factors and layout criteria for gamma scanning of waste drums from nuclear plants

    International Nuclear Information System (INIS)

    Inder Schmitten, W.; Sohnius, B.; Wehner, E.

    1990-01-01

This paper presents a procedure to calculate calibration factors for converting the measured gamma rate of waste drums into activity content, together with a layout and free-release measurement criterion for waste drums. A computer program is developed that simulates the drum scanning technique, calculates calibration factors, and eliminates laborious experimental measurements. The calculated calibration factors exhibit good agreement with experimentally determined values. By checking the calculated calibration factors for trial equipment layouts (including the waste drum and the scanning facility) against the layout and free-release measurement criterion, a layout can be achieved that clearly determines whether a waste drum can be freely released
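As a toy illustration of the conversion the abstract describes, the sketch below turns a net count rate into activity through a calibration factor built from detector efficiency and gamma attenuation along a single average path through the drum. All numbers and the one-path model are hypothetical; the actual program simulates the full scanning geometry:

```python
import math

def calibration_factor(det_eff, mu_per_cm, path_cm):
    """Hypothetical one-path model: detector efficiency times gamma
    attenuation exp(-mu * x) through the drum matrix."""
    return det_eff * math.exp(-mu_per_cm * path_cm)

def activity_bq(net_count_rate_cps, cf):
    """Convert a net count rate into activity using the calibration factor."""
    return net_count_rate_cps / cf

cf = calibration_factor(0.25, 0.06, 10.0)   # illustrative values
activity = activity_bq(50.0, cf)
```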

  11. CAMAC-system for calibration and control of experimental apparatus with scintillation counters

    International Nuclear Information System (INIS)

    Petrov, A.G.; Sinaev, A.N.

    1977-01-01

The CAMAC system, connected to the minicomputer PH-2116C, is described for calibration and control of an experiment on pion scattering on He nuclei performed with a streamer chamber triggered by a scintillation hodoscope. The following operations are performed: delay calibration in the telescope and hodoscope tracts involving 22 scintillation counters; control of the relative efficiency of the hodoscope counters and other parameters of the experiment; control of the HV supply of the photomultipliers; control of the currents of the magnets and lenses of the muon track of the synchrocyclotron; measurement of pulse-height spectra from a Cherenkov counter to determine the beam composition. The working programs are initiated and the dialogue with the computer is carried out using an alphanumerical display connected to the PH-2116C via a CAMAC interface

  12. Computational tools for the construction of calibration curves for use in dose calculations in radiotherapy treatment planning

    International Nuclear Information System (INIS)

Oliveira, Alex C.H.; Vieira, Jose W.; Escola Politecnica de Pernambuco, Recife, PE

    2011-01-01

The realization of tissue inhomogeneity corrections in image-based treatment planning improves the accuracy of radiation dose calculations for patients undergoing external-beam radiotherapy. Before the tissue inhomogeneity correction can be applied, the relationship between computed tomography (CT) numbers and density must be established. This relationship is typically established by a calibration curve empirically obtained from CT images of a phantom that has several inserts of tissue-equivalent materials, covering a wide range of densities. This calibration curve is scanner-dependent and allows the conversion of CT numbers into densities for use in dose calculations. This paper describes the implementation of the computational tools necessary to construct calibration curves. These tools are used for reading and displaying CT images in DICOM format, determining the mean CT numbers (and their standard deviations) of each tissue-equivalent material, and constructing calibration curves by fits with bilinear equations. All these tools have been implemented in Microsoft Visual Studio 2010 using the C# programming language. (author)
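A minimal version of such a bilinear calibration fit can be sketched as two least-squares line segments joined at a breakpoint. This is an illustrative Python translation (the paper's tools are in C#), and placing the breakpoint at HU = 0 (water) is an assumption:

```python
import numpy as np

def fit_bilinear(hu, density, breakpoint=0.0):
    """Two least-squares line segments joined at the breakpoint, mapping
    CT number (HU) to density for dose calculation."""
    hu = np.asarray(hu, float)
    density = np.asarray(density, float)
    c_lo = np.polyfit(hu[hu <= breakpoint], density[hu <= breakpoint], 1)
    c_hi = np.polyfit(hu[hu >= breakpoint], density[hu >= breakpoint], 1)
    def curve(x):
        x = np.asarray(x, float)
        return np.where(x <= breakpoint,
                        np.polyval(c_lo, x),
                        np.polyval(c_hi, x))
    return curve
```

Each segment needs at least two phantom inserts; real calibrations add more inserts per segment so that the fit averages out measurement noise.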

  13. Experimental procedures for the calibration of scintillation cells used in the determination of radon gas concentrations

    International Nuclear Information System (INIS)

Grenier, M.; Bigu, J.

    1982-02-01

Experimental and analytical procedures are described for the calibration of scintillation cells used for the determination of radon gas concentration. In-house designed and built scintillation cells, used routinely in the monitoring of radon gas in uranium mine underground environments and in the laboratory, were calibrated. The cells had a volume of approximately 158 cm³ and an α-counting efficiency ranging from 50% to 64%. Calibration factors for the cells were determined. Values ranged approximately from 0.177 cpm per pCi/L (4.77 cpm per Bq/L) to 0.224 cpm per pCi/L (6.05 cpm per Bq/L). The calibration facilities at the Elliot Lake Laboratory are briefly described
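The paired values in the abstract are related by the picocurie-to-becquerel conversion (1 pCi = 0.037 Bq); a quick sketch of that conversion, which reproduces the quoted SI figures up to rounding:

```python
BQ_PER_PCI = 0.037  # 1 pCi = 0.037 Bq (exact by definition of the curie)

def cf_per_bq_l(cf_per_pci_l):
    """Convert a calibration factor from cpm per (pCi/L) to cpm per (Bq/L)."""
    return cf_per_pci_l / BQ_PER_PCI
```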

  14. A New Perspective for the Calibration of Computational Predictor Models.

    Energy Technology Data Exchange (ETDEWEB)

    Crespo, Luis Guillermo

    2014-11-01

This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty along with the computational model constitute a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value, but instead it is a description of the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain (i.e., roll-up and extrapolation).
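The interval-predictor idea can be illustrated with a deliberately simplified sketch: fit a center polynomial, then take the smallest uniform half-width whose band contains every observation. Using a least-squares center is a simplification of my own here; the paper's formulations optimize the spread directly:

```python
import numpy as np

def interval_predictor(x, y, degree=1):
    """Simplified IPM: least-squares center polynomial plus the smallest
    uniform half-width whose band contains every observation."""
    coeffs = np.polyfit(x, y, degree)
    half_width = np.max(np.abs(y - np.polyval(coeffs, x)))
    def band(x_new):
        center = np.polyval(coeffs, np.asarray(x_new, float))
        return center - half_width, center + half_width
    return band
```

By construction every training observation lies inside the returned band, which is the defining property of an IPM; what a proper formulation adds is minimizing that spread over the choice of center as well.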

  15. Hierarchical calibration and validation framework of bench-scale computational fluid dynamics simulations for solvent-based carbon capture. Part 2: Chemical absorption across a wetted wall column: Original Research Article: Hierarchical calibration and validation framework of bench-scale computational fluid dynamics simulations

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Chao [Pacific Northwest National Lab. (PNNL), Richland, WA (United States). Physical and Computational Sciences Directorate; Xu, Zhijie [Pacific Northwest National Lab. (PNNL), Richland, WA (United States). Physical and Computational Sciences Directorate; Lai, Kevin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States). Physical and Computational Sciences Directorate; Whyatt, Greg [Pacific Northwest National Lab. (PNNL), Richland, WA (United States). Energy and Environment Directorate; Marcy, Peter W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sun, Xin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Energy and Transportation Science Division

    2017-10-24

Part 1 of this paper presents a numerical model for non-reactive physical mass transfer across a wetted wall column (WWC). In Part 2, we improved the existing computational fluid dynamics (CFD) model to simulate chemical absorption occurring in a WWC as a bench-scale study of solvent-based carbon dioxide (CO2) capture. In this study, to generate data for WWC model validation, CO2 mass transfer across a monoethanolamine (MEA) solvent was first measured on a WWC experimental apparatus. The numerical model developed in this work can account for both chemical absorption and desorption of CO2 in MEA. In addition, the overall mass transfer coefficient is predicted using traditional/empirical correlations and compared with CFD prediction results for both steady and wavy falling films. A Bayesian statistical calibration algorithm is adopted to calibrate the reaction rate constants in chemical absorption/desorption of CO2 across a falling film of MEA. The posterior distributions of the two transport properties, i.e., Henry's constant and gas diffusivity in the non-reacting nitrous oxide (N2O)/MEA system obtained from Part 1 of this study, serve as priors for the calibration of CO2 reaction rate constants after using the N2O/CO2 analogy method. Finally, the calibrated model can be used to predict the CO2 mass transfer in a WWC for a wider range of operating conditions.

  16. [Animal experimentation, computer simulation and surgical research].

    Science.gov (United States)

    Carpentier, Alain

    2009-11-01

We live in a digital world. In medicine, computers are providing new tools for data collection, imaging, and treatment. During research and development of complex technologies and devices such as artificial hearts, computer simulation can provide more reliable information than experimentation on large animals. In these specific settings, animal experimentation should serve more to validate computer models of complex devices than to demonstrate their reliability.

  17. Bayesian model calibration of computational models in velocimetry diagnosed dynamic compression experiments.

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Justin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hund, Lauren [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

Dynamic compression experiments are being performed on complicated materials using increasingly complex drivers. The data produced in these experiments are beginning to reach a regime where traditional analysis techniques break down, requiring the solution of an inverse problem. A common measurement in dynamic experiments is an interface velocity as a function of time, and often this functional output can be simulated using a hydrodynamics code. Bayesian model calibration is a statistical framework to estimate inputs into a computational model in the presence of multiple uncertainties, making it well suited to measurements of this type. In this article, we apply Bayesian model calibration to high pressure (250 GPa) ramp compression measurements in tantalum. We address several issues specific to this calibration, including the functional nature of the output as well as parameter and model discrepancy identifiability. Specifically, we propose scaling the likelihood function by an effective sample size rather than modeling the autocorrelation function to accommodate the functional output, and propose sensitivity analyses using the notion of 'modularization' to assess the impact of experiment-specific nuisance input parameters on estimates of material properties. We conclude that the proposed Bayesian model calibration procedure results in simple, fast, and valid inferences on the equation of state parameters for tantalum.
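The effective-sample-size scaling can be sketched with a generic AR(1) approximation: autocorrelated residuals carry less independent information than their count suggests, so the Gaussian log-likelihood is down-weighted by n_eff/n. The formulas below are textbook forms, not the article's exact implementation:

```python
import math

def effective_sample_size(n, rho1):
    """AR(1) approximation: n_eff = n * (1 - rho) / (1 + rho) for
    lag-1 autocorrelation rho1 of the residuals."""
    return n * (1.0 - rho1) / (1.0 + rho1)

def scaled_log_likelihood(residuals, sigma, rho1):
    """Independent Gaussian log-likelihood scaled by n_eff / n instead of
    modeling the full autocorrelation structure."""
    n = len(residuals)
    loglik = sum(-0.5 * (r / sigma) ** 2
                 - math.log(sigma * math.sqrt(2.0 * math.pi))
                 for r in residuals)
    return (effective_sample_size(n, rho1) / n) * loglik
```

With rho1 = 0 the scaling is a no-op; as rho1 approaches 1 the likelihood contribution of a densely sampled velocity trace shrinks toward that of a single observation.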

  18. Effects of calibration methods on quantitative material decomposition in photon-counting spectral computed tomography using a maximum a posteriori estimator.

    Science.gov (United States)

    Curtis, Tyler E; Roeder, Ryan K

    2017-10-01

    Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in
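At its core, image-domain material decomposition solves a small linear system per voxel: the measured attenuation in each energy bin is a linear combination of the basis-material attenuations scaled by their amounts. The sketch below substitutes ordinary least squares for the maximum a posteriori estimator used in the study, and the basis numbers are made up for illustration:

```python
import numpy as np

# Hypothetical attenuation basis: rows are 5 energy bins, columns are the
# basis materials (gadolinium, calcium, water); values are illustrative only.
BASIS = np.array([
    [4.2, 0.9, 0.30],
    [3.1, 0.7, 0.26],
    [7.9, 0.6, 0.23],   # bin just above the Gd k-edge
    [6.0, 0.5, 0.21],
    [4.8, 0.4, 0.19],
])

def decompose(basis, measured):
    """Per-voxel material decomposition by ordinary least squares
    (a stand-in for a MAP estimator)."""
    coeffs, *_ = np.linalg.lstsq(basis, measured, rcond=None)
    return coeffs
```

The calibration question in the abstract is precisely how well the columns of this basis matrix are estimated: widening the gadolinium concentration range conditions the fit of the Gd column better than simply adding more closely spaced concentrations.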

  19. Calibration of a gamma spectrometer for natural radioactivity measurement. Experimental measurements and Monte Carlo modelling

    International Nuclear Information System (INIS)

    Courtine, Fabien

    2007-03-01

This thesis was carried out in the context of dating by thermoluminescence. This method requires laboratory measurements of the natural radioactivity. For that purpose, we have been using a germanium spectrometer. To refine its calibration, we modelled it using a Monte Carlo computer code, Geant4. We developed a geometrical model which takes into account the presence of inactive zones and zones of poor charge collection within the germanium crystal. The parameters of the model were adjusted by comparison with experimental results obtained with a 137Cs source. It appeared that the form of the inactive zones is less simple than is presented in the specialized literature. This model was extended to the case of a more complex source, with cascade effects and angular correlations between photons: 60Co. Lastly, applied to extended sources, it gave correct results and allowed us to validate the simulation of matrix effects. (author)

  20. Experimental Mathematics and Computational Statistics

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.; Borwein, Jonathan M.

    2009-04-30

The field of statistics has long been noted for techniques to detect patterns and regularities in numerical data. In this article we explore connections between statistics and the emerging field of 'experimental mathematics'. These include both applications of experimental mathematics in statistics and statistical methods applied to computational mathematics.

  1. A convolutional neural network approach to calibrating the rotation axis for X-ray computed tomography.

    Science.gov (United States)

Yang, Xiaogang; De Carlo, Francesco; Phatak, Charudatta; Gürsoy, Doğa

    2017-03-01

This paper presents an algorithm to calibrate the center of rotation for X-ray tomography by using a machine learning approach, the convolutional neural network (CNN). The algorithm shows excellent accuracy in evaluations on synthetic data with various noise ratios. It is further validated with experimental data from four different shale samples measured at the Advanced Photon Source and at the Swiss Light Source. The results are as good as those determined by visual inspection and show better robustness than conventional methods. CNNs also have great potential for reducing or removing other artifacts caused by instrument instability, detector non-linearity, etc. An open-source toolbox, which integrates the CNN methods described in this paper, is freely available through GitHub at tomography/xlearn and can be easily integrated into existing computational pipelines available at various synchrotron facilities. Source code, documentation and information on how to contribute are also provided.
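A conventional baseline of the kind the CNN is compared against can be sketched via registration of opposing projections: a 180° projection is the mirror image of the 0° projection about the rotation axis, so the axis position follows from the shift that best aligns them. This 1D, integer-pixel version is a generic sketch, not the paper's method:

```python
import numpy as np

def center_of_rotation(proj0, proj180):
    """Estimate the rotation-axis column from a 0/180 degree projection
    pair by cross-correlating one projection with the mirror of the other."""
    a = np.asarray(proj0, float) - np.mean(proj0)
    b = np.asarray(proj180, float)[::-1] - np.mean(proj180)
    n = len(a)
    corr = np.correlate(a, b, mode="full")   # slide the mirrored view over a
    shift = corr.argmax() - (n - 1)          # lag of best alignment
    return ((n - 1) + shift) / 2.0           # axis position in pixel units
```

The integer-lag correlation limits this to half-pixel resolution; subpixel refinement (e.g. interpolating around the correlation peak) is what practical implementations add.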

  2. RADIOMETRIC CALIBRATION OF MARS HiRISE HIGH RESOLUTION IMAGERY BASED ON FPGA

    Directory of Open Access Journals (Sweden)

    Y. Hou

    2016-06-01

Full Text Available Due to the large data volume of HiRISE imagery, traditional radiometric calibration methods cannot meet fast processing requirements. To solve this problem, a radiometric calibration system for HiRISE imagery based on a field-programmable gate array (FPGA) is designed. The montage gap between two channels caused by gray-level inconsistency is removed through histogram matching. The calibration system is composed of an FPGA and a DSP, which makes full use of the parallel processing ability of the FPGA and the fast computation as well as flexible control characteristics of the DSP. Experimental results show that the designed system consumes less hardware resources and improves the real-time processing ability of radiometric calibration of HiRISE imagery.
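Histogram matching of the kind used to remove the montage gap can be sketched in a few lines of NumPy: remap the gray levels of one channel so its cumulative distribution matches the other's. This is a generic software version; the paper implements the pipeline on FPGA/DSP hardware:

```python
import numpy as np

def match_histogram(source, reference):
    """Remap source gray levels so the source CDF matches the reference CDF."""
    source = np.asarray(source)
    reference = np.asarray(reference)
    s_values, s_idx, s_counts = np.unique(source.ravel(),
                                          return_inverse=True,
                                          return_counts=True)
    r_values, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    matched = np.interp(s_cdf, r_cdf, r_values)   # invert the reference CDF
    return matched[s_idx].reshape(source.shape)
```

Applied to the overlap region of two adjacent channels, this maps one channel onto the other's gray scale, removing the visible seam.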

  3. Spelling is Just a Click Away - A User-Centered Brain-Computer Interface Including Auto-Calibration and Predictive Text Entry.

    Science.gov (United States)

    Kaufmann, Tobias; Völker, Stefan; Gunesch, Laura; Kübler, Andrea

    2012-01-01

Brain-computer interfaces (BCI) based on event-related potentials (ERP) allow for selection of characters from a visually presented character matrix and thus provide a communication channel for users with neurodegenerative disease. Although they have been a topic of research for more than 20 years and have repeatedly been proven a reliable communication method, BCIs are almost exclusively used in experimental settings, handled by qualified experts. This study investigates whether ERP-BCIs can be handled independently by laymen without expert support, which is indispensable for establishing BCIs in end-users' daily life situations. Furthermore, we compared classic character-by-character text entry against a predictive text entry (PTE) that directly incorporates predictive text into the character matrix. N = 19 BCI novices handled a user-centered ERP-BCI application on their own without expert support. The software individually adjusted classifier weights and control parameters in the background, invisible to the user (auto-calibration). All participants were able to operate the software on their own and to twice correctly spell a sentence with the auto-calibrated classifier (once with PTE, once without). Our PTE increased spelling speed and, importantly, did not reduce accuracy. In sum, this study demonstrates the feasibility of auto-calibrated ERP-BCI use, independently by laymen, and the strong benefit of integrating predictive text directly into the character matrix.

  4. Calibration of a Stereo Radiation Detection Camera Using Planar Homography

    Directory of Open Access Journals (Sweden)

    Seung-Hae Baek

    2016-01-01

Full Text Available This paper proposes a calibration technique for a stereo gamma detection camera. Calibration of the internal and external parameters of a stereo vision camera is a well-known research problem in the computer vision community. However, stereo calibration has scarcely been investigated in radiation measurement research. Since no visual information can be obtained from a stereo radiation camera, it is impossible to use a general stereo calibration algorithm directly. In this paper, we develop a hybrid-type stereo system which is equipped with both radiation and vision cameras. To calibrate the stereo radiation cameras, stereo images of a calibration pattern captured from the vision cameras are transformed into the view of the radiation cameras. The homography transformation is calibrated based on the geometric relationship between visual and radiation camera coordinates. The accuracy of the stereo parameters of the radiation camera is analyzed by distance measurements to both visual light and gamma sources. The experimental results show that the measurement error is about 3%.

  5. Reduced Calibration Curve for Proton Computed Tomography

    International Nuclear Information System (INIS)

    Yevseyeva, Olga; Assis, Joaquim de; Evseev, Ivan; Schelin, Hugo; Paschuk, Sergei; Milhoretto, Edney; Setti, Joao; Diaz, Katherin; Hormaza, Joel; Lopes, Ricardo

    2010-01-01

The pCT deals with relatively thick targets like the human head or trunk. Thus, the fidelity of pCT as a tool for proton therapy planning depends on the accuracy of the physical formulas used for proton interaction with thick absorbers. Although the actual overall accuracy of the proton stopping power in the Bethe-Bloch domain is about 1%, the analytical calculations and the Monte Carlo simulations with codes like TRIM/SRIM, MCNPX and GEANT4 do not agree with each other. Attempts to validate the codes against experimental data for thick absorbers face some difficulties: only a few data sets are available, and they have been acquired at different initial proton energies and for different absorber materials. In this work we compare the results of our Monte Carlo simulations with existing experimental data in terms of a reduced calibration curve, i.e., the range-energy dependence normalized on the range scale by the full projected CSDA range for a given initial proton energy in a given material, taken from the NIST PSTAR database, and on the final proton energy scale by the given initial energy of protons. This approach is almost energy and material independent. The results of our analysis are important for pCT development because the contradictions observed at arbitrarily low initial proton energies can now be easily scaled to typical pCT energies.
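The reduced-curve normalization described above amounts to two divisions per data point; a minimal sketch (function and argument names are assumed for illustration):

```python
def reduced_curve(depths, exit_energies, csda_range, e0):
    """Normalize absorber depth by the full projected CSDA range (e.g. from
    the NIST PSTAR database) and residual energy by the initial beam energy,
    yielding nearly energy- and material-independent points."""
    return [(d / csda_range, e / e0) for d, e in zip(depths, exit_energies)]
```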

  6. Automatic calibration of gamma spectrometers

    International Nuclear Information System (INIS)

    Tluchor, D.; Jiranek, V.

    1989-01-01

The principle of energy calibration of the spectrometric path, based on the measurement of a standard of one radionuclide or a set of them, is described. The entire computer-aided process is divided into three main steps, viz.: insertion of the calibration standard by the operator; start of the calibration program; and energy calibration by the computer. The program was designed so that spectrum identification does not depend on the adjustment of the digital or analog elements of the gamma spectrometric measuring path. The ECL program for automatic energy calibration is described, as are its control, the organization of the data file ECL.DAT, and the necessary hardware support. Communication between the computer and the multichannel analyzer was provided by an interface pair of Canberra 8673V and Canberra 8573 operating in the RS-422 standard. All subroutines for communication with the multichannel analyzer were written in MACRO-11, while the main program and the other subroutines were written in FORTRAN-77. (E.J.). 1 tab., 4 refs
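The core of any such energy calibration is a low-order fit of channel number against the known energies of identified reference peaks. A minimal linear sketch (the peak channel positions here are hypothetical, and the original software is FORTRAN/MACRO, not Python):

```python
import numpy as np

def fit_energy_calibration(peak_channels, peak_energies_kev):
    """Least-squares line E[keV] = a * channel + b through reference peaks."""
    a, b = np.polyfit(peak_channels, peak_energies_kev, 1)
    return a, b

# e.g. the 137Cs line (661.7 keV) and the 208Tl line (2614.5 keV)
# located at hypothetical channels 310 and 1223
a, b = fit_energy_calibration([310.0, 1223.0], [661.7, 2614.5])
```

With more than two peaks the same call gives a least-squares line, and a quadratic term can be added when the ADC shows integral non-linearity.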

  7. Data multiplexing in radio interferometric calibration

    Science.gov (United States)

    Yatawatta, Sarod; Diblen, Faruk; Spreeuw, Hanno; Koopmans, L. V. E.

    2018-03-01

New and upcoming radio interferometers will produce unprecedented amounts of data that demand extremely powerful computers for processing. This is a limiting factor due to the large computational power and energy costs involved. Such limitations restrict several key data processing steps in radio interferometry. One such step is calibration, where systematic errors in the data are determined and corrected. Accurate calibration is an essential component in reaching many scientific goals in radio astronomy, and the use of consensus optimization that exploits the continuity of systematic errors across frequency significantly improves calibration accuracy. In order to reach full consensus, data at all frequencies need to be calibrated simultaneously. In the SKA regime, this can become intractable if the available compute agents do not have the resources to process data from all frequency channels simultaneously. In this paper, we propose a multiplexing scheme that is based on the alternating direction method of multipliers with cyclic updates. With this scheme, it is possible to simultaneously calibrate the full data set using far fewer compute agents than the number of frequencies at which data are available. We give simulation results to show the feasibility of the proposed multiplexing scheme in simultaneously calibrating a full data set when a limited number of compute agents are available.
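The consensus idea can be illustrated on a toy problem: each frequency channel produces its own estimate, and ADMM drives all channels to agree on a common value. This scalar sketch with simple quadratic costs is my own simplification and omits the paper's cyclic, multiplexed updates:

```python
import numpy as np

def consensus_admm(d, rho=1.0, iters=200):
    """Toy global-consensus ADMM: minimize sum_i (x_i - d_i)^2 subject to
    x_i = z for all i; each d_i mimics a per-frequency estimate."""
    d = np.asarray(d, float)
    z = 0.0
    u = np.zeros_like(d)
    for _ in range(iters):
        x = (d + rho * (z - u)) / (1.0 + rho)   # local per-channel update
        z = np.mean(x + u)                      # consensus (gather) update
        u = u + x - z                           # dual update
    return z
```

The multiplexing question in the paper is what happens when only a subset of the x-updates can run at once; cycling through subsets while sharing the single consensus variable z is the essence of the proposed scheme.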

  8. Computational and Experimental Approaches to Visual Aesthetics

    Science.gov (United States)

    Brachmann, Anselm; Redies, Christoph

    2017-01-01

Aesthetics has been the subject of long-standing debates by philosophers and psychologists alike. In psychology, it is generally agreed that aesthetic experience results from an interaction between perception, cognition, and emotion. By experimental means, this triad has been studied in the field of experimental aesthetics, which aims to gain a better understanding of how aesthetic experience relates to fundamental principles of human visual perception and brain processes. Recently, researchers in computer vision have also gained interest in the topic, giving rise to the field of computational aesthetics. With computing hardware and methodology developing at a high pace, the modeling of perceptually relevant aspects of aesthetic stimuli has huge potential. In this review, we present an overview of recent developments in computational aesthetics and how they relate to experimental studies. In the first part, we cover topics such as the prediction of ratings, style and artist identification, as well as computational methods in art history, such as the detection of influences among artists or forgeries. We also describe currently used computational algorithms, such as classifiers and deep neural networks. In the second part, we summarize results from the field of experimental aesthetics and cover several isolated image properties that are believed to have an effect on the aesthetic appeal of visual stimuli. Their relation to each other and to findings from computational aesthetics is discussed. Moreover, we compare the strategies in the two fields of research and suggest that both fields would greatly profit from a joint research effort. We hope to encourage researchers from both disciplines to work more closely together in order to understand visual aesthetics from an integrated point of view. PMID:29184491

  9. Calibration technology in application of robot-laser scanning system

    Science.gov (United States)

    Ren, YongJie; Yin, ShiBin; Zhu, JiGui

    2012-11-01

    A system composed of a laser sensor and a 6-DOF industrial robot is proposed to obtain complete three-dimensional (3-D) information of the object surface. Suitable for the different ways of combining laser sensor and robot, a new method to calibrate the position and pose between sensor and robot is presented. By using a standard sphere with known radius as a reference tool, the rotation and translation matrices between the laser sensor and robot are computed in two separate steps, so that many unstable factors introduced in conventional optimization methods can be avoided. The experimental results show that the proposed calibration method achieves an accuracy of 0.062 mm. The calibration method is also implemented in the automated robot scanning system to reconstruct a car door panel.
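
    The core numerical step of such sphere-based calibration is locating the reference sphere's center from scanned surface points. Below is a minimal sketch of the standard algebraic least-squares sphere fit; the function name and test data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fit_sphere_center(points):
    """Algebraic least-squares sphere fit to 3-D surface points.

    Rewrites |p - c|^2 = r^2 as the linear system
        2 p . c + d = |p|^2,   with d = r^2 - |c|^2,
    and solves for (c, d) in a single lstsq call.
    """
    p = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * p, np.ones((len(p), 1))])
    b = (p ** 2).sum(axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = x[:3]
    radius = np.sqrt(x[3] + center @ center)
    return center, radius
```

    Fitting the center of the same physical sphere from several robot poses is what makes the relative rotation and translation between sensor and robot recoverable.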

  10. A Visual Servoing-Based Method for ProCam Systems Calibration

    Directory of Open Access Journals (Sweden)

    Jeremie Mosnier

    2013-10-01

    Full Text Available Projector-camera systems are currently used in a wide field of applications, such as 3D reconstruction and augmented reality, and can provide accurate measurements, depending on the configuration and calibration. Frequently, the calibration task is divided into two steps: camera calibration followed by projector calibration. The latter still poses certain problems that are not easy to solve, such as the difficulty in obtaining a set of 2D–3D points to compute the projection matrix between the projector and the world. Existing methods are either not sufficiently accurate or not flexible. We propose an easy and automatic method to calibrate such systems that consists in projecting a calibration pattern and superimposing it automatically on a known printed pattern. The projected pattern is provided by a virtual camera observing a virtual pattern in an OpenGL model. The projector displays what the virtual camera visualizes. Thus, the projected pattern can be controlled and superimposed on the printed one with the aid of visual servoing. Our experimental results compare favorably with those of other methods considering both usability and accuracy.

  11. Spelling is just a click away – a user-centered brain-computer interface including auto-calibration and predictive text entry

    Directory of Open Access Journals (Sweden)

    Tobias eKaufmann

    2012-05-01

    Full Text Available Brain-Computer Interfaces (BCIs) based on event-related potentials (ERPs) allow for selection of characters from a visually presented character matrix and thus provide a communication channel for users with neurodegenerative disease. Although they have been a topic of research for more than 20 years and have repeatedly been proven to be a reliable communication method, BCIs are almost exclusively used in experimental settings, handled by qualified experts. This study investigates whether ERP-BCIs can be handled independently by laymen without expert interference, which is essential for establishing BCIs in end-users' daily life situations. Furthermore, we compared classic character-by-character text entry against a predictive text entry (PTE) that directly incorporates predictive text into the character matrix. N = 19 BCI novices handled a user-centered ERP-BCI application on their own without expert interference. The software individually adjusted classifier weights and control parameters in the background, invisible to the user (auto-calibration). All participants were able to operate the software on their own and to twice correctly spell a sentence with the auto-calibrated classifier (once with PTE, once without). Our PTE increased spelling speed and, importantly, did not reduce accuracy. In sum, this study demonstrates the feasibility of auto-calibrated ERP-BCI use, independently by laymen, and the strong benefit of integrating predictive text directly into the character matrix.

  12. Calibration-free quantification of interior properties of porous media with x-ray computed tomography

    International Nuclear Information System (INIS)

    Hussein, Esam M.A.; Agbogun, H.M.D.; Al, Tom A.

    2015-01-01

    A method is presented for interpreting the values of x-ray attenuation coefficients reconstructed in computed tomography of porous media, while overcoming the ambiguity caused by the multichromatic nature of x-rays, dilution by void, and material heterogeneity. The method enables determination of porosity without relying on calibration or image segmentation or thresholding to discriminate pores from solid material. It distinguishes between solution-accessible and inaccessible pores, and provides the spatial and frequency distributions of solid-matrix material in a heterogeneous medium. This is accomplished by matching an image of a sample saturated with a contrast solution with that saturated with a transparent solution. Voxels occupied with solid-material and inaccessible pores are identified by the fact that they maintain the same location and image attributes in both images, with voxels containing inaccessible pores appearing empty in both images. Fully porous and accessible voxels exhibit the maximum contrast, while the rest are porous voxels containing mixtures of pore solutions and solid. This matching process is performed with an image registration computer code, and image processing software that requires only simple subtraction and multiplication (scaling) processes. The process is demonstrated in dolomite (non-uniform void distribution, homogeneous solid matrix) and sandstone (nearly uniform void distribution, heterogeneous solid matrix) samples, and its overall performance is shown to compare favorably with a method based on calibration and thresholding. - Highlights: • A method is presented for quantifying x-ray CT data for porous media. • The method neither requires calibration nor segmentation nor thresholding. • We use voxel matching between images of a sample saturated with two distinct solutions. • Method is demonstrated for micro-CT images of dolomite and sandstone samples. • Overall performance compares favorably with a calibration
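
    The voxel-matching idea reduces to a subtraction and a scaling: the attenuation difference between the contrast-saturated and transparent-saturated images, normalized by the difference between the two pure solutions, gives each voxel's accessible porosity. A minimal numpy sketch under that linear-mixing assumption (function name and values are illustrative, not from the paper):

```python
import numpy as np

def voxel_porosity(img_contrast, img_transparent,
                   mu_contrast_sol, mu_transparent_sol):
    """Accessible porosity per voxel from two registered CT images.

    Linear-mixing assumption: a voxel whose accessible pore fraction is
    phi shows phi times the full attenuation difference between the
    contrast and transparent saturating solutions.
    """
    diff = np.asarray(img_contrast, float) - np.asarray(img_transparent, float)
    full = mu_contrast_sol - mu_transparent_sol
    return np.clip(diff / full, 0.0, 1.0)
```

    Solid voxels and inaccessible pores show zero difference between the two images, and therefore zero accessible porosity, which mirrors the matching criterion described in the abstract.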

  13. Calibration of an experimental model of tritium storage bed designed for 'in situ' accountability

    International Nuclear Information System (INIS)

    Bidica, Nicolae; Stefanescu, Ioan; Bucur, Ciprian; Bulubasa, Gheorghe; Deaconu, Mariea

    2009-01-01

    Full text: Objectives: Tritium accountancy of the storage beds in tritium facilities is an important issue for tritium inventory control. The purpose of our work was to perform calibration of an experimental model of a tritium storage bed with a special design, using electric heaters to simulate tritium decay, and to evaluate the detection limit of the accountancy method. The objective of this paper is to present the experimental method used for calibration of the storage bed and the experimental results, consisting of calibration curves and the detection limit. Our method is based on a 'self-assaying' tritium storage bed. The basic characteristics of the design of our storage bed are, in principle, a uniform distribution of the storage material on several thin copper fins (in order to obtain a uniform temperature field inside the bed), an electrical heat source to simulate the tritium decay heat, a system of thermocouples for measuring the temperature field inside the bed, and good thermal isolation of the bed from the external environment. Within this design of the tritium storage bed, the tritium accounting method is based on determining the decay heat of tritium by measuring the temperature increase of the isolated storage bed. The experimental procedure consisted of measuring the temperature field inside the bed for a few values of the power injected with the aid of the electrical heat source. Data were collected for a few hours and the temperature-increase rate was determined for each value of the injected power. A graphical representation of temperature rise versus injected power was obtained. This accounting method for tritium stored as metal tritide is a reliable solution for in-situ tritium accountability in a tritium handling facility. Several improvements can be made to the design of the storage bed in order to improve the measurement accuracy and to obtain a lower detection limit, as for instance the use of more accurate thermocouples or special
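
    The calibration described here amounts to fitting a line of temperature-rise rate against injected heater power, then inverting it to turn a measured rise rate into decay heat. A sketch with made-up calibration numbers; the data values and the ~0.324 W/g specific decay heat of tritium used to convert power to inventory are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical calibration data: injected heater power (W) vs. measured
# temperature-rise rate of the isolated bed (K/h).
powers = np.array([0.2, 0.5, 1.0, 2.0])
rise_rates = np.array([0.41, 1.02, 2.05, 4.08])

# Calibration line: rate = slope * power + intercept.
slope, intercept = np.polyfit(powers, rise_rates, 1)

def decay_power(rate_k_per_h):
    """Invert the calibration line: measured rise rate -> decay heat (W)."""
    return (rate_k_per_h - intercept) / slope

def tritium_inventory_g(rate_k_per_h, w_per_gram=0.324):
    """Convert decay heat to grams of tritium (approximate specific decay heat)."""
    return decay_power(rate_k_per_h) / w_per_gram
```

    The detection limit then follows from how small a power change still produces a rise rate distinguishable from the thermocouple noise.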

  14. Calibration of a gamma spectrometer for measuring natural radioactivity. Experimental measurements and modeling by Monte-Carlo methods

    International Nuclear Information System (INIS)

    Courtine, Fabien

    2007-01-01

    The thesis proceeded in the context of dating by thermoluminescence. This method requires laboratory measurements of the natural radioactivity. For that purpose, we have been using a germanium spectrometer. To refine its calibration, we modelled the spectrometer using a Monte-Carlo computer code, Geant4. We developed a geometrical model which takes into account the presence of inactive zones and zones of poor charge collection within the germanium crystal. The parameters of the model were adjusted by comparison with experimental results obtained with a 137Cs source. It appeared that the form of the inactive zones is less simple than presented in the specialized literature. This model was extended to the case of a more complex source, with cascade effects and angular correlations between photons: 60Co. Lastly, applied to extended sources, it gave correct results and allowed us to validate the simulation of matrix effects. (author)

  15. GRID computing for experimental high energy physics

    International Nuclear Information System (INIS)

    Moloney, G.R.; Martin, L.; Seviour, E.; Taylor, G.N.; Moorhead, G.F.

    2002-01-01

    Full text: The Large Hadron Collider (LHC), to be completed at the CERN laboratory in 2006, will generate 11 petabytes of data per year. The processing of this large data stream requires a large, distributed computing infrastructure. A recent innovation in high performance distributed computing, the GRID, has been identified as an important tool in data analysis for the LHC. GRID computing has actual and potential application in many fields which require computationally intensive analysis of large, shared data sets. The Australian experimental High Energy Physics community has formed partnerships with the High Performance Computing community to establish a GRID node at the University of Melbourne. Through Australian membership of the ATLAS experiment at the LHC, Australian researchers have an opportunity to be involved in the European DataGRID project. This presentation will include an introduction to the GRID, and its application to experimental High Energy Physics. We will present the results of our studies, including participation in the first LHC data challenge.

  16. Geometric Parameters Estimation and Calibration in Cone-Beam Micro-CT

    Directory of Open Access Journals (Sweden)

    Jintao Zhao

    2015-09-01

    Full Text Available The quality of Computed Tomography (CT images crucially depends on the precise knowledge of the scanner geometry. Therefore, it is necessary to estimate and calibrate the misalignments before image acquisition. In this paper, a Two-Piece-Ball (TPB phantom is used to estimate a set of parameters that describe the geometry of a cone-beam CT system. Only multiple projections of the TPB phantom at one position are required, which can avoid the rotation errors when acquiring multi-angle projections. Also, a corresponding algorithm is derived. The performance of the method is evaluated through simulation and experimental data. The results demonstrated that the proposed method is valid and easy to implement. Furthermore, the experimental results from the Micro-CT system demonstrate the ability to reduce artifacts and improve image quality through geometric parameter calibration.

  17. Dry calibration of electromagnetic flowmeters based on numerical models combining multiple physical phenomena (multiphysics)

    Science.gov (United States)

    Fu, X.; Hu, L.; Lee, K. M.; Zou, J.; Ruan, X. D.; Yang, H. Y.

    2010-10-01

    This paper presents a method for dry calibration of an electromagnetic flowmeter (EMF). This method, which determines the voltage induced in the EMF as conductive liquid flows through a magnetic field, numerically solves a coupled set of multiphysical equations with measured boundary conditions for the magnetic, electric, and flow fields in the measuring pipe of the flowmeter. Specifically, this paper details the formulation of dry calibration and an efficient algorithm (that adaptively minimizes the number of measurements and requires only the normal component of the magnetic flux density as boundary conditions on the pipe surface to reconstruct the magnetic field involved) for computing the sensitivity of EMF. Along with an in-depth discussion on factors that could significantly affect the final precision of a dry calibrated EMF, the effects of flow disturbance on measuring errors have been experimentally studied by installing a baffle at the inflow port of the EMF. Results of the dry calibration on an actual EMF were compared against flow-rig calibration; excellent agreements (within 0.3%) between dry calibration and flow-rig tests verify the multiphysical computation of the fields and the robustness of the method. As requiring no actual flow, the dry calibration is particularly useful for calibrating large-diameter EMFs where conventional flow-rig methods are often costly and difficult to implement.

  18. A proposed framework for computational fluid dynamics code calibration/validation

    International Nuclear Information System (INIS)

    Oberkampf, W.L.

    1993-01-01

    The paper reviews the terminology and methodology that have been introduced during the last several years for building confidence in the predictions from Computational Fluid Dynamics (CFD) codes. Code validation terminology developed for nuclear reactor analyses and aerospace applications is reviewed and evaluated. Currently used terminology such as ''calibrated code,'' ''validated code,'' and a ''validation experiment'' is discussed along with the shortcomings and criticisms of these terms. A new framework is proposed for building confidence in CFD code predictions that overcomes some of the difficulties of past procedures and delineates the causes of uncertainty in CFD predictions. Building on previous work, new definitions of code verification and calibration are proposed. These definitions provide more specific requirements for the knowledge level of the flow physics involved and the solution accuracy of the given partial differential equations. As part of the proposed framework, categories are also proposed for flow physics research, flow modeling research, and the application of numerical predictions. The contributions of physical experiments, analytical solutions, and other numerical solutions are discussed, showing that each should be designed to achieve a distinctively separate purpose in building confidence in the accuracy of CFD predictions. A number of examples are given for each approach to suggest methods for obtaining the highest value for CFD code quality assurance.

  19. Rotating-coil calibration in a reference quadrupole, considering roll-angle misalignment and higher-order harmonics

    CERN Document Server

    AUTHOR|(CDS)2075492; Buzio, Marco; Köster, Oliver; Russenschuck, Stephan; Severino, Giordana

    2016-01-01

    A method is proposed for calibrating the radius of a rotating coil sensor by relaxing the metrological constraints on alignment and field errors of the reference quadrupole. A coil radius calibration considering a roll-angle misalignment of the measurement bench, the magnet, and the motor-drive unit is analyzed. Then, the error arising from higher-order harmonic field imperfections in the reference quadrupole is assessed. The method is validated by numerical field computation for both the higher-order harmonic errors and the roll-angle misalignment. Finally, an experimental proof-of-principle demonstration is carried out in a calibration magnet with sextupole harmonic.

  20. Computational and experimental methods for enclosed natural convection

    International Nuclear Information System (INIS)

    Larson, D.W.; Gartling, D.K.; Schimmel, W.P. Jr.

    1977-10-01

    Two computational procedures and one optical experimental procedure for studying enclosed natural convection are described. The finite-difference and finite-element numerical methods are developed and several sample problems are solved. Results obtained from the two computational approaches are compared. A temperature-visualization scheme using laser holographic interferometry is described, and results from this experimental procedure are compared with results from both numerical methods

  1. Experimental and computational investigation of flow of pebbles in a pebble bed nuclear reactor

    Science.gov (United States)

    Khane, Vaibhav B.

    The Pebble Bed Reactor (PBR) is a 4th generation nuclear reactor which is conceptually similar to moving bed reactors used in the chemical and petrochemical industries. In a PBR core, nuclear fuel in the form of pebbles moves slowly under the influence of gravity. Due to the dynamic nature of the core, a thorough understanding about slow and dense granular flow of pebbles is required from both a reactor safety and performance evaluation point of view. In this dissertation, a new integrated experimental and computational study of granular flow in a PBR has been performed. Continuous pebble re-circulation experimental set-up, mimicking flow of pebbles in a PBR, is designed and developed. Experimental investigation of the flow of pebbles in a mimicked test reactor was carried out for the first time using non-invasive radioactive particle tracking (RPT) and residence time distribution (RTD) techniques to measure the pebble trajectory, velocity, overall/zonal residence times, flow patterns etc. The tracer trajectory length and overall/zonal residence time is found to increase with change in pebble's initial seeding position from the center towards the wall of the test reactor. Overall and zonal average velocities of pebbles are found to decrease from the center towards the wall. Discrete element method (DEM) based simulations of test reactor geometry were also carried out using commercial code EDEM(TM) and simulation results were validated using the obtained benchmark experimental data. In addition, EDEM(TM) based parametric sensitivity study of interaction properties was carried out which suggests that static friction characteristics play an important role from a packed/pebble beds structural characterization point of view. To make the RPT technique viable for practical applications and to enhance its accuracy, a novel and dynamic technique for RPT calibration was designed and developed. Preliminary feasibility results suggest that it can be implemented as a non

  2. Novel crystal timing calibration method based on total variation

    Science.gov (United States)

    Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng

    2016-11-01

    A novel crystal timing calibration method based on total variation (TV), abbreviated as ‘TV merge’, has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals and can provide timing calibration at the crystal level. In the proposed method, the timing calibration process was formulated as a linear problem. To robustly optimize the timing resolution, a TV constraint was added to the linear equation. Moreover, to solve the computer memory problem associated with the calculation of the timing calibration factors for systems with a large number of crystals, the merge component was used for obtaining the crystal-level timing calibration values. Compared with other conventional methods, the data measured from a standard cylindrical phantom filled with a radioisotope solution were sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies were performed to demonstrate the effectiveness and robustness of the TV merge method. We compare the timing resolutions of a 22Na point source, which was located in the field of view (FOV) of the brain PET system, with various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns at full width at half maximum (FWHM) to 2.31 ns FWHM.
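
    The linear formulation can be sketched as follows: each coincidence measurement constrains the difference of two crystal offsets, and a regularizer stabilizes the solve. The paper uses a total-variation (L1) penalty; the sketch below substitutes an L2 first-difference surrogate so everything stays a single linear least-squares solve. Crystal count, pairings, and λ are illustrative:

```python
import numpy as np

def calibrate_offsets(pairs, deltas, n, lam=1e-6):
    """Per-crystal timing offsets t from pairwise measurements.

    Each measurement k gives deltas[k] ~ t[i] - t[j] for pairs[k] = (i, j).
    Regularized with an L2 first-difference penalty (a surrogate for the
    paper's TV term) and gauge-fixed with sum(t) = 0, since only offset
    differences are observable.
    """
    A = np.zeros((len(pairs), n))
    for k, (i, j) in enumerate(pairs):
        A[k, i], A[k, j] = 1.0, -1.0
    D = (np.eye(n) - np.eye(n, k=1))[:-1]     # first-difference operator
    gauge = np.ones((1, n))                    # removes the global time shift
    M = np.vstack([A, np.sqrt(lam) * D, gauge])
    rhs = np.concatenate([deltas, np.zeros(n - 1), [0.0]])
    t, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return t
```

    The paper's merge component addresses what this toy version ignores: for tens of thousands of crystals the dense system above would not fit in memory.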

  3. Establishment of a new calibration method of pencil ionization chamber for dosimetry in computed tomography

    International Nuclear Information System (INIS)

    Dias, Daniel Menezes

    2010-01-01

    Pencil ionization chambers are used for beam dosimetry in computed tomography (CT) equipment. In this study, a new calibration methodology was established in order to bring the Calibration Laboratory of Instituto de Pesquisas Energeticas e Nucleares (LCI) into line with international metrological standards, which prescribe specific procedures for the calibration of these chambers used in CT. Firstly, the setup for the new RQT radiation qualities was mounted, in agreement with standard IEC 61267 from the International Electrotechnical Commission (IEC). After the establishment of these radiation qualities, a specific calibration methodology for pencil ionization chambers was set up, according to Technical Report Series No. 457 from the International Atomic Energy Agency (IAEA), which describes particularities of the procedure to be followed by Secondary Standard Dosimetry Laboratories (SSDLs) concerning collimation and positioning relative to the radiation beam. Initially, PPV (kV) measurements and the determination of copper additional filtrations were carried out by measuring the half-value layers (HVL) recommended by the IEC 61267 standard, after which the RQT 8, RQT 9 and RQT 10 radiation quality references were established. For the additional filters, aluminum and copper of high purity (around 99.9%) were used. The copper filter thicknesses for the RQT qualities, equivalent to the set 'RQR (Al) + additional filtration (Cu)', were found directly by an alternative methodology for determining additional filtrations, which is a good option when the RQR qualities cannot be set up. With the establishment of this new methodology for pencil ionization chamber calibration, the LCI is ready to calibrate these instruments according to the most recent international standards. Therefore, an improvement in calibration traceability, as well as in the metrological services offered by IPEN to all of Brazil, is achieved. (author)

  4. Fully in Silico Calibration of Empirical Predictive Models for Environmental Fate Properties of Novel Munitions Compounds

    Science.gov (United States)

    2016-04-01

    constants using several IMC-related aromatic amines. To develop QSARs with these data, many possible descriptors were evaluated, ranging from simple to... converted to approximate half-lives to give. The blue symbols are experimental data, obtained with IMC-related amino compounds... calibrated "in silico" that predict the hydrolysis behavior of the diverse range of energetic NACs. However, new experimental and computational

  5. Experimental calibration of the mathematical model of Air Torque Position dampers with non-cascading blades

    Directory of Open Access Journals (Sweden)

    Bikić Siniša M.

    2016-01-01

    Full Text Available This paper is focused on the mathematical model of Air Torque Position dampers. The mathematical model establishes a link between the velocity of air in front of the damper, the position of the damper blade and the moment acting on the blade caused by the air flow. This research aims to experimentally verify the mathematical model for the damper type with non-cascading blades. Four different types of dampers with non-cascading blades were considered: single-blade dampers, dampers with two cross-blades, dampers with two parallel blades and dampers with two blades of which one is fixed in the horizontal position. The case of a damper with a straight pipeline positioned in front of and behind the damper was taken into consideration. Calibration and verification of the mathematical model were conducted experimentally. The experiment was conducted on the laboratory facility for testing dampers used for regulation of the air flow rate in heating, ventilation and air conditioning systems. The design and setup of the laboratory facility, as well as the construction, adjustment and calibration of the laboratory damper, are presented in this paper. The mathematical model was calibrated by using one set of data, while the verification of the mathematical model was conducted by using a second set of data. The mathematical model was successfully validated and can be used for accurate measurement of the air velocity on dampers with non-cascading blades under different operating conditions. [Project of the Ministry of Science of the Republic of Serbia, No. TR31058]
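
    The calibrate-on-one-dataset, verify-on-another procedure can be sketched generically: when the model is linear in its coefficients, calibration is a single least-squares solve. The moment model below, M = (c1 + c2·sin θ)·v², is a hypothetical stand-in for the paper's actual damper model, used only to illustrate the workflow:

```python
import numpy as np

def calibrate_damper(v, theta, moment):
    """Least-squares fit of c1, c2 in M = (c1 + c2*sin(theta)) * v**2."""
    A = np.column_stack([v ** 2, v ** 2 * np.sin(theta)])
    c, *_ = np.linalg.lstsq(A, moment, rcond=None)
    return c

def predict_moment(v, theta, c):
    """Verification step: predict blade moments on a second data set."""
    return (c[0] + c[1] * np.sin(theta)) * v ** 2
```

    Verification then compares `predict_moment` on the held-out set against the measured moments, which is the split the authors describe.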

  6. Automated Calibration of Dosimeters for Diagnostic Radiology

    International Nuclear Information System (INIS)

    Romero Acosta, A.; Gutierrez Lores, S.

    2015-01-01

    Calibration of dosimeters for diagnostic radiology involves current and charge measurements, which are often repetitive. However, these measurements are usually done using modern electrometers equipped with an RS-232 interface that enables instrument control from a computer. This paper presents an automated system for the measurements involved in the calibration of dosimeters used in diagnostic radiology. A software application was developed in order to achieve the acquisition of the electric charge readings, measured values of the monitor chamber, calculation of the calibration coefficient and issue of a calibration certificate. A primary data record file is filled and stored on the computer hard disk. The calibration method used was calibration by substitution. With this system, better control over the calibration process is achieved and the need for human intervention is reduced. The automated system will be used in the calibration of dosimeters for diagnostic radiology at the Cuban Secondary Standard Dosimetry Laboratory of the Center for Radiation Protection and Hygiene. (Author)
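
    Calibration by substitution, as used here, works in two steps: the reference dosimeter establishes the kerma per monitor-chamber unit, and the device under test is then calibrated against the monitor-normalized beam. A minimal sketch of that arithmetic; the function name and numbers are illustrative, not from the paper:

```python
def calibration_coefficient(n_ref, m_ref, mon_ref, m_dut, mon_dut):
    """Substitution-method calibration coefficient.

    n_ref   -- calibration coefficient of the reference dosimeter (Gy/reading)
    m_ref   -- reference dosimeter reading, taken with monitor reading mon_ref
    m_dut   -- device-under-test reading, taken with monitor reading mon_dut
    Returns the calibration coefficient of the device under test.
    """
    kerma_per_monitor_unit = n_ref * m_ref / mon_ref
    kerma_delivered_to_dut = kerma_per_monitor_unit * mon_dut
    return kerma_delivered_to_dut / m_dut
```

    Normalizing by the monitor chamber cancels beam-output drift between the two sequential irradiations, which is what makes substitution valid without simultaneous exposure.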

  7. Experimental quantum computing without entanglement.

    Science.gov (United States)

    Lanyon, B P; Barbieri, M; Almeida, M P; White, A G

    2008-11-14

    Deterministic quantum computation with one pure qubit (DQC1) is an efficient model of computation that uses highly mixed states. Unlike pure-state models, its power is not derived from the generation of a large amount of entanglement. Instead it has been proposed that other nonclassical correlations are responsible for the computational speedup, and that these can be captured by the quantum discord. In this Letter we implement DQC1 in an all-optical architecture, and experimentally observe the generated correlations. We find no entanglement, but large amounts of quantum discord-except in three cases where an efficient classical simulation is always possible. Our results show that even fully separable, highly mixed, states can contain intrinsically quantum mechanical correlations and that these could offer a valuable resource for quantum information technologies.

  8. New approach for calibration the efficiency of HPGe detectors

    International Nuclear Information System (INIS)

    Alnour, I.A.; Wagiran, H.; Suhaimi Hamzah; Siong, W.B.; Mohd Suhaimi Elias

    2013-01-01

    Full-text: This work evaluates the efficiency calibration of HPGe detectors, a Canberra GC3018 with Genie 2000 software and an Ortec GEM25-76-XLB-C with Gamma Vision software, available at the neutron activation analysis laboratory of the Malaysian Nuclear Agency (NM). The efficiency calibration curve was constructed from measurements of an IAEA standard gamma point-source set composed of 241Am, 57Co, 133Ba, 152Eu, 137Cs and 60Co. The efficiency calibrations were performed for three different geometries: 5, 10 and 15 cm distances from the detector end cap. The polynomial parameter functions were fitted with a MATLAB program in order to find an accurate fit to the experimental data points. The efficiency equation was established from the fitted parameters, which allows the efficiency to be evaluated at a particular energy of interest. The study shows significant deviations in efficiency depending on the source-detector distance and photon energy. (author)
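
    A common way to obtain the "efficiency equation" described here is to fit a polynomial to ln(efficiency) versus ln(energy) over the calibration points. A sketch in Python (rather than the MATLAB the authors used), with illustrative efficiency values that are not the laboratory's measured data:

```python
import numpy as np

# Hypothetical full-energy-peak efficiencies at standard source energies (keV).
energies_keV = np.array([59.5, 122.1, 356.0, 661.7, 1173.2, 1332.5])
efficiencies = np.array([0.080, 0.110, 0.046, 0.028, 0.017, 0.015])

# Fit ln(eff) as a cubic in ln(E) -- a standard HPGe parameterization.
coeffs = np.polyfit(np.log(energies_keV), np.log(efficiencies), 3)

def efficiency(E_keV):
    """Evaluate the fitted efficiency curve at an energy of interest."""
    return np.exp(np.polyval(coeffs, np.log(E_keV)))
```

    One such curve is fitted per counting geometry, since the abstract notes that efficiency depends strongly on source-detector distance.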

  9. Calibration and analysis of genome-based models for microbial ecology.

    Science.gov (United States)

    Louca, Stilianos; Doebeli, Michael

    2015-10-16

    Microbial ecosystem modeling is complicated by the large number of unknown parameters and the lack of appropriate calibration tools. Here we present a novel computational framework for modeling microbial ecosystems, which combines genome-based model construction with statistical analysis and calibration to experimental data. Using this framework, we examined the dynamics of a community of Escherichia coli strains that emerged in laboratory evolution experiments, during which an ancestral strain diversified into two coexisting ecotypes. We constructed a microbial community model comprising the ancestral and the evolved strains, which we calibrated using separate monoculture experiments. Simulations reproduced the successional dynamics in the evolution experiments, and pathway activation patterns observed in microarray transcript profiles. Our approach yielded detailed insights into the metabolic processes that drove bacterial diversification, involving acetate cross-feeding and competition for organic carbon and oxygen. Our framework provides a missing link towards a data-driven mechanistic microbial ecology.

  10. An investigation of automatic exposure control calibration for chest imaging with a computed radiography system

    International Nuclear Information System (INIS)

    Moore, C S; Wood, T J; Beavis, A W; Saunderson, J R; Avery, G; Balcam, S; Needler, L

    2014-01-01

    The purpose of this study was to examine the use of three physical image quality metrics in the calibration of an automatic exposure control (AEC) device for chest radiography with a computed radiography (CR) imaging system. The metrics assessed were signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm), all measured using a uniform chest phantom. Subsequent calibration curves were derived to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer simulated chest images with correct detector air kermas for each tube voltage, and grading these against reference images which were reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated chest images contained clinically realistic projected anatomy and anatomical noise and were scored by experienced image evaluators. Constant DDI and CNR curves do not appear to provide optimized performance across the diagnostic energy range. Conversely, constant eNEQm and SNR do appear to provide optimized performance, with the latter being the preferred calibration metric given as it is easier to measure in practice. Medical physicists may use the SNR image quality metric described here when setting up and optimizing AEC devices for chest radiography CR systems with a degree of confidence that resulting clinical image quality will be adequate for the required clinical task. However, this must be done with close cooperation of expert image evaluators, to ensure appropriate levels of detector air kerma. (paper)
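
    The SNR and CNR metrics assessed in this study are simple region-of-interest statistics on the phantom image. A minimal sketch of how they are typically computed; the ROI choice and the background-standard-deviation noise definition are common conventions, not necessarily the authors' exact procedure:

```python
import numpy as np

def snr_cnr(roi_signal, roi_background):
    """SNR and CNR from two pixel ROIs of a phantom image.

    Noise is taken as the sample standard deviation of the background ROI.
    """
    s = np.mean(roi_signal)
    b = np.mean(roi_background)
    noise = np.std(roi_background, ddof=1)
    return s / noise, (s - b) / noise
```

    Holding such a metric constant across tube voltage, as the calibration curves in this study do, means solving for the detector air kerma at each kV that returns the same SNR value.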

  11. An investigation of automatic exposure control calibration for chest imaging with a computed radiography system.

    Science.gov (United States)

    Moore, C S; Wood, T J; Avery, G; Balcam, S; Needler, L; Beavis, A W; Saunderson, J R

    2014-05-07

    The purpose of this study was to examine the use of three physical image quality metrics in the calibration of an automatic exposure control (AEC) device for chest radiography with a computed radiography (CR) imaging system. The metrics assessed were signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm), all measured using a uniform chest phantom. Subsequent calibration curves were derived to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer-simulated chest images with correct detector air kermas for each tube voltage, and grading these against reference images reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated chest images contained clinically realistic projected anatomy and anatomical noise and were scored by experienced image evaluators. Constant DDI and CNR curves do not appear to provide optimized performance across the diagnostic energy range. Conversely, constant eNEQm and SNR do appear to provide optimized performance, with the latter being the preferred calibration metric as it is easier to measure in practice. Medical physicists may use the SNR image quality metric described here when setting up and optimizing AEC devices for chest radiography CR systems with a degree of confidence that the resulting clinical image quality will be adequate for the required clinical task. However, this must be done in close cooperation with expert image evaluators, to ensure appropriate levels of detector air kerma.

  12. Methods and experimental techniques in computer engineering

    CERN Document Server

    Schiaffonati, Viola

    2014-01-01

    Computing and science reveal a synergistic relationship. On the one hand, it is widely evident that computing plays an important role in the scientific endeavor. On the other hand, the role of the scientific method in computing is becoming increasingly important, especially in providing ways to experimentally evaluate the properties of complex computing systems. This book critically presents these issues from a unitary conceptual and methodological perspective by addressing specific case studies at the intersection between computing and science. The book originates from, and collects the experience of, a course for PhD students in Information Engineering held at the Politecnico di Milano. Following the structure of the course, the book features contributions from researchers working at the intersection between computing and science.

  13. EAST-AIA deployment under vacuum: Calibration of laser diagnostic system using computer vision

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yang, E-mail: yangyang@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China); Song, Yuntao; Cheng, Yong; Feng, Hansheng; Wu, Zhenwei; Li, Yingying; Sun, Yongjun; Zheng, Lei [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China); Bruno, Vincent; Eric, Villedieu [CEA-IRFM, F-13108 Saint-Paul-Lez-Durance (France)

    2016-11-15

    Highlights: • The first deployment of the EAST articulated inspection arm robot under vacuum is presented. • A computer vision based approach to measure the laser spot displacement is proposed. • An experiment on the real EAST tokamak was performed to validate the proposed measurement approach, and the results show that the measurement accuracy satisfies the requirement. - Abstract: For the operation of the EAST tokamak, it is crucial to ensure that all the diagnostic systems are in good condition in order to reflect the plasma status properly. However, most of the diagnostic systems are mounted inside the tokamak vacuum vessel, which makes them extremely difficult to maintain under the high vacuum conditions of tokamak operation. Thanks to a system called the EAST articulated inspection arm robot (EAST-AIA), these in-vessel diagnostic systems can be examined by an embedded camera carried by the robot. In this paper, a computer vision algorithm is developed to calibrate a laser diagnostic system with the help of a monocular camera at the robot end. In order to estimate the displacement of the laser diagnostic system with respect to the vacuum vessel, several visual markers were attached to the inner wall. The experiment was conducted both on the EAST vacuum vessel mock-up and on the real EAST tokamak under vacuum. As a result, the accuracy of the displacement measurement was within 3 mm at the current camera resolution, which satisfied the requirements of the laser diagnostic system calibration.
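A displacement measurement of this kind can be approximated, under a simple pinhole-camera assumption, by scaling the marker's pixel shift by depth over focal length. A hedged sketch; the function name and all numbers are illustrative, and the actual EAST-AIA algorithm is not reproduced here:

```python
def spot_displacement_mm(du_px, dv_px, depth_mm, focal_px):
    """Lateral displacement of a marker in the object plane (pinhole model):
    the pixel shift scaled by depth / focal length (focal length in pixels)."""
    dx = du_px * depth_mm / focal_px
    dy = dv_px * depth_mm / focal_px
    return dx, dy

# Illustrative numbers: a (4, 3) px shift at 2 m range, 2000 px focal length
dx, dy = spot_displacement_mm(4.0, 3.0, 2000.0, 2000.0)
print(dx, dy)  # 4.0 3.0  (mm)
```

A real implementation would first locate the markers sub-pixel accurately and correct for lens distortion before applying any such geometric model.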

  14. RF impedance measurement calibration

    International Nuclear Information System (INIS)

    Matthews, P.J.; Song, J.J.

    1993-01-01

    The intent of this note is not to explain all of the available calibration methods in detail. Instead, we focus on the calibration methods of interest for RF impedance coupling measurements and attempt to explain: (1) the standards and measurements necessary for the various calibration techniques; (2) the advantages and disadvantages of each technique; (3) the mathematical manipulations that need to be applied to the measured standards and devices; (4) an outline of the steps needed for writing a calibration routine that operates from a remote computer. For further details of the various techniques presented in this note, the reader should consult the references.
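As an illustration of the kind of mathematical manipulation item (3) refers to, the classic one-port short-open-load (SOL) calibration solves a three-term error model and then inverts it for the device under test. This sketch assumes ideal standards (Γ = −1, +1, 0) and synthetic error terms; real calibrations use characterized, non-ideal standards:

```python
def sol_one_port(m_short, m_open, m_load):
    """Solve the three-term (directivity e00, source match e11, reflection
    tracking tr) error model from measurements of ideal SOL standards."""
    e00 = m_load                 # an ideal load reflects nothing
    a, b = m_open - e00, m_short - e00
    e11 = (a + b) / (a - b)      # source match
    tr = a * (1 - e11)           # reflection tracking
    return e00, e11, tr

def correct(gamma_m, e00, e11, tr):
    """Map a raw reflection measurement back to the actual device Gamma."""
    m = gamma_m - e00
    return m / (tr + e11 * m)

# Round-trip check with synthetic error terms (illustrative values)
e00, e11, tr = 0.05 + 0.02j, 0.1 - 0.03j, 0.9 + 0.1j
def measure(g):
    """Forward error model: what the instrument would report for true g."""
    return e00 + tr * g / (1 - e11 * g)

terms = sol_one_port(measure(-1), measure(1), measure(0))
g = correct(measure(0.3 + 0.4j), *terms)
print(g)  # recovers approximately (0.3+0.4j)
```

The algebra is exact for ideal standards, so the round trip recovers the device reflection coefficient to machine precision.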

  15. MCNPx computational estimation of the calibration factor of an In vivo counter for 18F-FDG activity incorporated in the brain

    International Nuclear Information System (INIS)

    Melo M, B.; Ferreira F, T. C.; Duarte V, K.; Da Silva, T. A.; Ribeiro de C, T. P.

    2016-10-01

    In previous work, the computational modeling of the Centro de Desenvolvimento da Tecnologia Nuclear in vivo counter for estimation of 18F-FDG activity incorporated into workers' brains was validated. Here, we studied the calibration factor (Cf) for seven distinct voxelized computational phantoms, including the reference models from ICRP 110. Computational simulations were also carried out to study the effect of increasing the distance between the detector and the brain up to 100 cm. The degree of correlation between geometric and anatomical parameters of the computational models and the calibration factors was also evaluated. The morphological diversity of the computational phantoms resulted in Cf variations greater than 38% (39.8 ± 0.2 to 64.6 ± 0.5 Bq·CPS⁻¹). Despite the variations, Cf decreased with increasing distance, although the marked loss of counting efficiency makes this geometry prohibitive. These findings suggest that head anatomic parameters can be used to improve Cf estimation. (Author)
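A calibration factor of this kind is, at its core, the known source activity divided by the registered net count rate; applying it inverts that relation. A minimal sketch with illustrative numbers (not the study's values):

```python
def calibration_factor(activity_bq, count_rate_cps):
    """Calibration factor in Bq per count-per-second: the known phantom
    source activity divided by the net count rate the detector registers."""
    return activity_bq / count_rate_cps

def activity_from_counts(counts, live_time_s, cf):
    """Estimate incorporated activity from a measured count, using Cf."""
    return cf * counts / live_time_s

cf = calibration_factor(50000.0, 1000.0)      # 50 Bq/CPS, illustrative
print(activity_from_counts(12000, 60.0, cf))  # 10000.0 (Bq)
```

The study's point is that Cf itself varies strongly (>38%) with head anatomy, so a single factor applied to all workers carries a corresponding systematic uncertainty.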

  16. MCNPx computational estimation of the calibration factor of an In vivo counter for {sup 18}F-FDG activity incorporated in the brain

    Energy Technology Data Exchange (ETDEWEB)

    Melo M, B.; Ferreira F, T. C.; Duarte V, K.; Da Silva, T. A. [Centro de Desenvolvimento da Tecnologia Nuclear / CNEN, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil); Ribeiro de C, T. P., E-mail: bmm@cdtn.br [Universidade Federal de Minas Gerais, Departamento de Engenharia Nuclear, Programa de Pos-graduacao em Ciencias e Tecnicas, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil)

    2016-10-15

    In previous work, the computational modeling of the Centro de Desenvolvimento da Tecnologia Nuclear in vivo counter for estimation of {sup 18}F-FDG activity incorporated into workers' brains was validated. Here, we studied the calibration factor (Cf) for seven distinct voxelized computational phantoms, including the reference models from ICRP 110. Computational simulations were also carried out to study the effect of increasing the distance between the detector and the brain up to 100 cm. The degree of correlation between geometric and anatomical parameters of the computational models and the calibration factors was also evaluated. The morphological diversity of the computational phantoms resulted in Cf variations greater than 38% (39.8 ± 0.2 to 64.6 ± 0.5 Bq.CPS{sup -1}). Despite the variations, Cf decreased with increasing distance, although the marked loss of counting efficiency makes this geometry prohibitive. These findings suggest that head anatomic parameters can be used to improve Cf estimation. (Author)

  17. Computer system for Monte Carlo experimentation

    International Nuclear Information System (INIS)

    Grier, D.A.

    1986-01-01

    A new computer system for Monte Carlo experimentation is presented. The new system speeds and simplifies the process of coding and preparing a Monte Carlo experiment; it also encourages the proper design of Monte Carlo experiments, and the careful analysis of the experimental results. A new functional language is the core of this system. Monte Carlo experiments, and their experimental designs, are programmed in this new language; those programs are compiled into Fortran output. The Fortran output is then compiled and executed. The experimental results are analyzed with a standard statistics package such as Si, Isp, or Minitab or with a user-supplied program. Both the experimental results and the experimental design may be directly loaded into the workspace of those packages. The new functional language frees programmers from many of the details of programming an experiment. Experimental designs such as factorial, fractional factorial, or Latin square are easily described by the control structures and expressions of the language. Specific mathematical models are generated by the routines of the language.
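Experimental designs like those the language describes (full factorial, Latin square) are straightforward to enumerate programmatically; a small illustrative sketch in Python, not the system's own functional language:

```python
import itertools

def full_factorial(levels):
    """All runs of a full factorial experiment: the Cartesian product of the
    level lists, one per factor (a fractional design would subsample this)."""
    return list(itertools.product(*levels))

def latin_square(n):
    """Cyclic n x n Latin square: each symbol appears once per row and column."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

runs = full_factorial([[-1, 1], [-1, 1], [-1, 1]])  # a 2^3 design
print(len(runs))        # 8
print(latin_square(3))  # [[0, 1, 2], [1, 2, 0], [2, 0, 1]]
```

Each generated run is then a parameter setting for one Monte Carlo replication, and the design matrix feeds directly into the downstream statistical analysis.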

  18. Experimental demonstration of deterministic one-way quantum computing on a NMR quantum computer

    OpenAIRE

    Ju, Chenyong; Zhu, Jing; Peng, Xinhua; Chong, Bo; Zhou, Xianyi; Du, Jiangfeng

    2008-01-01

    One-way quantum computing is an important and novel approach to quantum computation. Exploiting the existing particle-particle interactions, we report the first experimental realization of the complete process of the deterministic one-way quantum Deutsch-Jozsa algorithm in NMR, including graph state preparation, single-qubit measurements and feed-forward corrections. The findings of our experiment may shed light on future scalable one-way quantum computation.

  19. Calibration of a fluxgate magnetometer array and its application in magnetic object localization

    International Nuclear Information System (INIS)

    Pang, Hongfeng; Luo, Shitu; Zhang, Qi; Li, Ji; Chen, Dixiang; Pan, Mengchun; Luo, Feilu

    2013-01-01

    The magnetometer array is effective for magnetic object detection and localization. Calibration is important to improve the accuracy of the magnetometer array. A magnetic sensor array built with four three-axis DM-050 fluxgate magnetometers is designed, which is connected by a cross aluminum frame. In order to improve the accuracy of the magnetometer array, a calibration process is presented. The calibration process includes magnetometer calibration, coordinate transformation and misalignment calibration. The calibration system consists of a magnetic sensor array, a GSM-19T proton magnetometer, a two-dimensional nonmagnetic rotation platform, a 12 V-dc portable power device and two portable computers. After magnetometer calibration, the RMS error has been decreased from an original value of 125.559 nT to a final value of 1.711 nT (a factor of 74). After alignment, the RMS error of misalignment has been decreased from 1322.3 to 6.0 nT (a factor of 220). Then, the calibrated array deployed on the nonmagnetic rotation platform is used for ferromagnetic object localization. Experimental results show that the estimated errors of X, Y and Z axes are −0.049 m, 0.008 m and 0.025 m, respectively. Thus, the magnetometer array is effective for magnetic object detection and localization in three dimensions. (paper)
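The per-axis scale factors, offsets and misalignment found by such a calibration are typically applied as an offset subtraction followed by a 3 × 3 matrix correction, with the proton magnetometer supplying the scalar reference for the residual check. A hedged sketch with illustrative values (identity alignment, zero noise), not the paper's actual coefficients:

```python
def apply_calibration(raw, scale, offset):
    """Correct a three-axis fluxgate reading: subtract the per-axis offset,
    then apply the 3x3 scale/misalignment matrix (row-major)."""
    v = [r - o for r, o in zip(raw, offset)]
    return [sum(scale[i][j] * v[j] for j in range(3)) for i in range(3)]

def rms_magnitude_error(samples, reference, scale, offset):
    """RMS deviation of calibrated field magnitude from a scalar reference,
    e.g. the total field reported by a proton magnetometer (in nT)."""
    errs = []
    for raw in samples:
        b = apply_calibration(raw, scale, offset)
        errs.append(sum(c * c for c in b) ** 0.5 - reference)
    return (sum(e * e for e in errs) / len(errs)) ** 0.5

# Illustrative: identity alignment, small offsets, 50000 nT reference field
scale = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
offset = [120.0, -80.0, 40.0]
samples = [[50120.0, -80.0, 40.0], [120.0, 49920.0, 40.0]]
print(rms_magnitude_error(samples, 50000.0, scale, offset))  # 0.0
```

In the paper the analogous residual drops from ~125.6 nT to ~1.7 nT per sensor once the fitted parameters are applied.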

  20. Calibration of a fluxgate magnetometer array and its application in magnetic object localization

    Science.gov (United States)

    Pang, Hongfeng; Luo, Shitu; Zhang, Qi; Li, Ji; Chen, Dixiang; Pan, Mengchun; Luo, Feilu

    2013-07-01

    The magnetometer array is effective for magnetic object detection and localization. Calibration is important to improve the accuracy of the magnetometer array. A magnetic sensor array built with four three-axis DM-050 fluxgate magnetometers is designed, which is connected by a cross aluminum frame. In order to improve the accuracy of the magnetometer array, a calibration process is presented. The calibration process includes magnetometer calibration, coordinate transformation and misalignment calibration. The calibration system consists of a magnetic sensor array, a GSM-19T proton magnetometer, a two-dimensional nonmagnetic rotation platform, a 12 V-dc portable power device and two portable computers. After magnetometer calibration, the RMS error has been decreased from an original value of 125.559 nT to a final value of 1.711 nT (a factor of 74). After alignment, the RMS error of misalignment has been decreased from 1322.3 to 6.0 nT (a factor of 220). Then, the calibrated array deployed on the nonmagnetic rotation platform is used for ferromagnetic object localization. Experimental results show that the estimated errors of X, Y and Z axes are -0.049 m, 0.008 m and 0.025 m, respectively. Thus, the magnetometer array is effective for magnetic object detection and localization in three dimensions.

  1. Experimental approach for the uncertainty assessment of 3D complex geometry dimensional measurements using computed tomography at the mm and sub-mm scales

    DEFF Research Database (Denmark)

    Jiménez, Roberto; Torralba, Marta; Yagüe-Fabra, José A.

    2017-01-01

    The dimensional verification of miniaturized components with 3D complex geometries is particularly challenging. Computed Tomography (CT) can represent a suitable alternative solution to micro metrology tools based on optical and tactile techniques. However, the establishment of CT systems’ traceability when measuring 3D complex geometries is still an open issue. In this work, an alternative method for the measurement uncertainty assessment of 3D complex geometries by using CT is presented. The method is based on the micro-CT system Maximum Permissible Error (MPE) estimation, determined experimentally by using several calibrated reference artefacts. The main advantage of the presented method is that a previous calibration of the component by a more accurate Coordinate Measuring System (CMS) is not needed. In fact, such a CMS would still hold all the typical limitations of optical and tactile...

  2. Multi-proxy experimental calibration in cold water corals for high resolution paleoreconstructions

    Science.gov (United States)

    Pelejero, C.; Martínez-Dios, A.; Ko, S.; Sherrell, R. M.; Kozdon, R.; López-Sanz, À.; Calvo, E.

    2017-12-01

    Cold-water corals (CWCs) display an almost cosmopolitan distribution over a wide range of depths. Similar to their tropical counterparts, they can provide continuous, high-resolution records of up to a century or more. Several CWC elemental and isotopic ratios have been suggested as useful proxies, but robust calibrations under controlled conditions in aquaria are needed. Whereas a few such calibrations have been performed for tropical corals, they are still pending for CWCs. This reflects the technical challenges involved in maintaining these slow-growing animals alive during the long-term experiments required to achieve sufficient skeletal growth for geochemical analyses. We will show details of the set up and initial stages of a long-term experiment being run at the ICM (Barcelona), where live specimens (>150) of Desmophyllum dianthus sampled in Comau Fjord (Chile) are kept under controlled and manipulated physical chemistry (temperature, pH, phosphate, barium, cadmium) and feeding conditions. With this set up, we aim to calibrate experimentally several specific elemental ratios including P/Ca, Ba/Ca, Cd/Ca, B/Ca, U/Ca and Mg/Li as proxies of nutrient dynamics, pH, carbonate ion concentration and temperature. For the trace element analysis, we are analyzing coral skeletons using Laser Ablation Inductively Coupled Plasma Mass Spectrometry (LA-ICP-MS), running quantitative analyses on spot sizes of tens of microns, and comparing to micromilling and solution ICP-MS. Preliminary data obtained using these techniques will be presented, as well as measurements of calcification rate. Since cold-water corals are potentially vulnerable to ocean acidification, the same experiment is being exploited to assess potential effects of the pH stressor on D. dianthus; the main findings to date will be summarized.

  3. Establishing a standard calibration methodology for MOSFET detectors in computed tomography dosimetry

    International Nuclear Information System (INIS)

    Brady, S. L.; Kaufman, R. A.

    2012-01-01

    Purpose: The use of metal-oxide-semiconductor field-effect transistor (MOSFET) detectors for patient dosimetry has increased by ∼25% since 2005. Despite this increase, no standard calibration methodology has been identified nor calibration uncertainty quantified for the use of MOSFET dosimetry in CT. This work compares three MOSFET calibration methodologies proposed in the literature, and additionally investigates questions relating to optimal time for signal equilibration and exposure levels for maximum calibration precision. Methods: The calibration methodologies tested were (1) free in-air (FIA) with radiographic x-ray tube, (2) FIA with stationary CT x-ray tube, and (3) within scatter phantom with rotational CT x-ray tube. Each calibration was performed at absorbed dose levels of 10, 23, and 35 mGy. Times of 0 min or 5 min were investigated for signal equilibration before or after signal read out. Results: Calibration precision was measured to be better than 5%–7%, 3%–5%, and 2%–4% for the 10, 23, and 35 mGy respective dose levels, and independent of calibration methodology. No correlation was demonstrated for precision and signal equilibration time when allowing 5 min before or after signal read out. Differences in average calibration coefficients were demonstrated between the FIA with CT calibration methodology (26.7 ± 1.1 mV cGy⁻¹) versus the CT scatter phantom (29.2 ± 1.0 mV cGy⁻¹) and FIA with x-ray (29.9 ± 1.1 mV cGy⁻¹) methodologies. A decrease in MOSFET sensitivity was seen at an average change in read out voltage of ∼3000 mV. Conclusions: The best measured calibration precision was obtained by exposing the MOSFET detectors to 23 mGy. No signal equilibration time is necessary to improve calibration precision. A significant difference between calibration outcomes was demonstrated for FIA with CT compared to the other two methodologies. If the FIA with a CT calibration methodology was used to create calibration coefficients for the
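The calibration coefficient itself is just the slope of MOSFET read-out voltage against delivered dose. A minimal least-squares sketch; the read-out values below are illustrative, chosen only to land near the ~27 mV/cGy figure quoted above:

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Illustrative MOSFET read-outs (mV) at the three dose levels (cGy)
dose_cgy = [1.0, 2.3, 3.5]          # i.e. 10, 23, 35 mGy
readout_mv = [27.0, 62.1, 94.5]
a, b = linear_fit(dose_cgy, readout_mv)
print(round(b, 1))  # calibration coefficient, mV/cGy
```

The coefficient is then divided into future read-out changes to report absorbed dose, which is why a systematic offset between calibration geometries propagates directly into phantom dosimetry.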

  4. Establishing a standard calibration methodology for MOSFET detectors in computed tomography dosimetry.

    Science.gov (United States)

    Brady, S L; Kaufman, R A

    2012-06-01

    The use of metal-oxide-semiconductor field-effect transistor (MOSFET) detectors for patient dosimetry has increased by ~25% since 2005. Despite this increase, no standard calibration methodology has been identified nor calibration uncertainty quantified for the use of MOSFET dosimetry in CT. This work compares three MOSFET calibration methodologies proposed in the literature, and additionally investigates questions relating to optimal time for signal equilibration and exposure levels for maximum calibration precision. The calibration methodologies tested were (1) free in-air (FIA) with radiographic x-ray tube, (2) FIA with stationary CT x-ray tube, and (3) within scatter phantom with rotational CT x-ray tube. Each calibration was performed at absorbed dose levels of 10, 23, and 35 mGy. Times of 0 min or 5 min were investigated for signal equilibration before or after signal read out. Calibration precision was measured to be better than 5%-7%, 3%-5%, and 2%-4% for the 10, 23, and 35 mGy respective dose levels, and independent of calibration methodology. No correlation was demonstrated for precision and signal equilibration time when allowing 5 min before or after signal read out. Differences in average calibration coefficients were demonstrated between the FIA with CT calibration methodology 26.7 ± 1.1 mV cGy(-1) versus the CT scatter phantom 29.2 ± 1.0 mV cGy(-1) and FIA with x-ray 29.9 ± 1.1 mV cGy(-1) methodologies. A decrease in MOSFET sensitivity was seen at an average change in read out voltage of ~3000 mV. The best measured calibration precision was obtained by exposing the MOSFET detectors to 23 mGy. No signal equilibration time is necessary to improve calibration precision. A significant difference between calibration outcomes was demonstrated for FIA with CT compared to the other two methodologies. If the FIA with a CT calibration methodology was used to create calibration coefficients for the eventual use for phantom dosimetry, a measurement error ~12

  5. Node-to-node field calibration of wireless distributed air pollution sensor network.

    Science.gov (United States)

    Kizel, Fadi; Etzion, Yael; Shafran-Nathan, Rakefet; Levy, Ilan; Fishbain, Barak; Bartonova, Alena; Broday, David M

    2018-02-01

    Low-cost air quality sensors offer high-resolution spatiotemporal measurements that can be used for air resources management and exposure estimation. Yet, such sensors require frequent calibration to provide reliable data, since even after a laboratory calibration they might not report correct values when they are deployed in the field, due to interference with other pollutants, as a result of sensitivity to environmental conditions and due to sensor aging and drift. Field calibration has been suggested as a means for overcoming these limitations, with the common strategy involving periodical collocations of the sensors at an air quality monitoring station. However, the cost and complexity involved in relocating numerous sensor nodes back and forth, and the loss of data during the repeated calibration periods make this strategy inefficient. This work examines an alternative approach, a node-to-node (N2N) calibration, where only one sensor in each chain is directly calibrated against the reference measurements and the rest of the sensors are calibrated sequentially one against the other while they are deployed and collocated in pairs. The calibration can be performed multiple times as a routine procedure. This procedure minimizes the total number of sensor relocations, and enables calibration while simultaneously collecting data at the deployment sites. We studied N2N chain calibration and the propagation of the calibration error analytically, computationally and experimentally. The in-situ N2N calibration is shown to be generic and applicable for different pollutants, sensing technologies, sensor platforms, chain lengths, and sensor order within the chain. In particular, we show that chain calibration of three nodes, each calibrated for a week, propagate calibration errors that are similar to those found in direct field calibration. Hence, N2N calibration is shown to be suitable for calibration of distributed sensor networks.
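The N2N idea can be simulated directly: fit node 0 against the reference, then fit each subsequent node against its already-calibrated predecessor, and watch how far the error drifts down the chain. A hedged toy simulation; the linear sensor responses and all numbers are assumptions, not the study's data:

```python
import random

def fit_linear(x, y):
    """Least-squares fit y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def n2n_calibrate(raw_chain, reference):
    """Node-to-node chain calibration: node 0 is fitted against the reference
    station; each later node is fitted against its already-calibrated
    predecessor while the two are collocated."""
    cals, target = [], reference
    for raw in raw_chain:
        a, b = fit_linear(raw, target)
        cals.append((a, b))
        target = [a + b * r for r in raw]  # becomes the next node's "reference"
    return cals

random.seed(1)
truth = [random.uniform(10, 60) for _ in range(200)]   # true pollutant level
raw = [[(t - off) / gain + random.gauss(0, 0.1) for t in truth]
       for gain, off in [(1.2, 3.0), (0.8, -2.0), (1.1, 5.0)]]  # 3-node chain
cals = n2n_calibrate(raw, truth)
a, b = cals[-1]
err = max(abs(a + b * r - t) for r, t in zip(raw[-1], truth))
print(err < 1.0)  # the last node in the chain still tracks the truth closely
```

With low per-node noise the error grows only mildly along the chain, which mirrors the paper's finding that a three-node chain performs comparably to direct field calibration.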

  6. Cosmic reionization on computers. I. Design and calibration of simulations

    Energy Technology Data Exchange (ETDEWEB)

    Gnedin, Nickolay Y., E-mail: gnedin@fnal.gov [Particle Astrophysics Center, Fermi National Accelerator Laboratory, Batavia, IL 60510 (United States)

    2014-09-20

    Cosmic Reionization On Computers is a long-term program of numerical simulations of cosmic reionization. Its goal is to model fully self-consistently (albeit not necessarily from first principles) all relevant physics, from radiative transfer to gas dynamics and star formation, in simulation volumes of up to 100 comoving Mpc, and with spatial resolution approaching 100 pc in physical units. In this method paper, we describe our numerical method, the design of simulations, and the calibration of numerical parameters. Using several sets (ensembles) of simulations in 20 h{sup -1} Mpc and 40 h{sup -1} Mpc boxes with spatial resolution reaching 125 pc at z = 6, we are able to match the observed galaxy UV luminosity functions at all redshifts between 6 and 10, as well as obtain reasonable agreement with the observational measurements of the Gunn-Peterson optical depth at z < 6.

  7. Observatory Magnetometer In-Situ Calibration

    Directory of Open Access Journals (Sweden)

    A Marusenkov

    2011-07-01

    An experimental validation of the in-situ calibration procedure, which allows estimating parameters of observatory magnetometers (scale factors, sensor misalignment) without interrupting their operation, is presented. In order to verify the validity of the procedure, the records provided by two magnetometers calibrated independently in a coil system have been processed. The in-situ estimates of the parameters are in very good agreement with the values provided by the coil system calibration.

  8. The computer library of experimental neutron data

    International Nuclear Information System (INIS)

    Bychkov, V.M.; Manokhin, V.N.; Surgutanov, V.V.

    1976-05-01

    The paper describes the computer library of experimental neutron data at the Obninsk Nuclear Data Centre. The format of the library (EXFOR) and the system of programmes for supplying the library are briefly described. (author)

  9. Computer experimental analysis of the CHP performance of a 100 kWe SOFC Field Unit by a factorial design

    Science.gov (United States)

    Calì, M.; Santarelli, M. G. L.; Leone, P.

    Gas Turbine Technologies (GTT) and Politecnico di Torino, both located in Torino (Italy), have been involved in the design and installation of a SOFC laboratory in order to analyse the operation, in cogenerative configuration, of the CHP 100 kWe SOFC Field Unit, built by Siemens-Westinghouse Power Corporation (SWPC), which is at present (May 2005) starting its operation and which will supply electric and thermal power to the GTT factory. In order to take better advantage of the analysis of the on-site operation, and especially to correctly design the scheduled experimental tests on the system, we developed a mathematical model and ran a simulated experimental campaign, applying a rigorous statistical approach to the analysis of the results. The aim of this work is the computer experimental analysis, through a statistical methodology (2^k factorial experiments), of the CHP 100 performance. First, the mathematical model was calibrated with the results acquired during the first CHP100 demonstration at EDB/ELSAM in Westerwoort. Then, the simulated tests were performed in the form of a computer experimental session, and the measurement uncertainties were simulated with perturbations imposed on the model's independent variables. The statistical methodology used for the computer experimental analysis is factorial design (Yates' technique): using the ANOVA technique, the effect of the main independent variables (air utilization factor U_ox, fuel utilization factor U_F, internal fuel and air preheating, and anodic recycling flow rate) has been investigated in a rigorous manner. The analysis accounts for the effects of these parameters on stack electric power, thermal recovered power, single cell voltage, cell operative temperature, consumed fuel flow and steam-to-carbon ratio. Each main effect and interaction effect of the parameters is shown, with particular attention to generated electric power and stack heat recovered.
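In a two-level factorial design such as the 2^k experiments described above, a factor's main effect is simply the mean response at its high level minus the mean at its low level. A small sketch with invented responses (the factor names echo the abstract, but the numbers are illustrative):

```python
def main_effects(levels, response):
    """Main effects in a two-level factorial experiment: for each factor,
    mean response at the high (+1) level minus mean at the low (-1) level."""
    k = len(levels[0])
    effects = []
    for j in range(k):
        hi = [y for run, y in zip(levels, response) if run[j] == 1]
        lo = [y for run, y in zip(levels, response) if run[j] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# 2^2 design in, say, fuel utilization U_F and air utilization U_ox
runs = [(-1, -1), (1, -1), (-1, 1), (1, 1)]
power_kw = [90.0, 96.0, 100.0, 110.0]   # illustrative stack power responses
print(main_effects(runs, power_kw))     # [8.0, 12.0]
```

ANOVA then compares each effect's contribution to the total variance against the replication (or simulated measurement) noise to decide which factors are significant.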

  10. Experimental benchmark and code validation for airfoils equipped with passive vortex generators

    International Nuclear Information System (INIS)

    Baldacchino, D; Ferreira, C; Florentie, L; Timmer, N; Van Zuijlen, A; Manolesos, M; Chaviaropoulos, T; Diakakis, K; Papadakis, G; Voutsinas, S; González Salcedo, Á; Aparicio, M; García, N R.; Sørensen, N N.; Troldborg, N

    2016-01-01

    Experimental results and complementary computations for airfoils with vortex generators are compared in this paper, as part of an effort within the AVATAR project to develop tools for wind turbine blade control devices. Measurements from two airfoils equipped with passive vortex generators, a 30% thick DU97W300 and an 18% thick NTUA T18, have been used for benchmarking several simulation tools. These tools span low-to-high complexity, ranging from engineering-level integral boundary layer tools to fully-resolved computational fluid dynamics codes. Results indicate that with appropriate calibration, engineering-type tools can capture the effects of vortex generators and outperform more complex tools. Fully resolved CFD comes at a much higher computational cost and does not necessarily capture the increased lift due to the VGs. However, given the limited experimental data available for calibration, high fidelity tools are still required for assessing the effect of vortex generators on airfoil performance. (paper)

  11. Empirical dual energy calibration (EDEC) for cone-beam computed tomography

    International Nuclear Information System (INIS)

    Stenner, Philip; Berkus, Timo; Kachelriess, Marc

    2007-01-01

    Material-selective imaging using dual energy CT (DECT) relies heavily on well-calibrated material decomposition functions. These require precise knowledge of the detected x-ray spectra, and even if the spectra are exactly known, the reliability of DECT will suffer from scattered radiation. We propose an empirical method to determine the proper decomposition function. In contrast to other decomposition algorithms, our empirical dual energy calibration (EDEC) technique requires neither knowledge of the spectra nor of the attenuation coefficients. The desired material-selective raw data p1 and p2 are obtained as functions of the measured attenuation data q1 and q2 (one DECT scan = two raw data sets) by passing them through a polynomial function. The polynomial's coefficients are determined using a general least squares fit based on thresholded images of a calibration phantom. The calibration phantom's dimensions should be of the same order of magnitude as the test object, but other than that no assumptions on its exact size or positioning are made. Once the decomposition coefficients are determined, DECT raw data can be decomposed by simply passing them through the polynomial. To demonstrate EDEC, simulations of an oval CTDI phantom, a lung phantom, a thorax phantom and a mouse phantom were carried out. The method was further verified by measuring a physical mouse phantom, a half-and-half-cylinder phantom and a Yin-Yang phantom with a dedicated in vivo dual source micro-CT scanner. The raw data were decomposed into their components, reconstructed, and the pixel values obtained were compared to the theoretical values. The determination of the calibration coefficients with EDEC is very robust and depends only slightly on the type of calibration phantom used. The images of the test phantoms (simulations and measurements) show a nearly perfect agreement with the theoretical μ values and density values. Since EDEC is an empirical technique, it inherently compensates for scatter.
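The heart of EDEC, a least-squares fit of a low-order polynomial in (q1, q2) to known material-selective values, can be sketched compactly. The basis terms, calibration points and linear ground truth below are illustrative assumptions, not the paper's setup:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def basis(q1, q2):
    """Low-order polynomial terms of the decomposition function (a choice)."""
    return [q1, q2, q1 * q2]

def fit_decomposition(q_pairs, p_true):
    """Least-squares fit of p = f(q1, q2) on calibration data via the normal
    equations, in the spirit of EDEC: no spectra or attenuation coefficients."""
    X = [basis(q1, q2) for q1, q2 in q_pairs]
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    rhs = [sum(r[i] * y for r, y in zip(X, p_true)) for i in range(n)]
    return solve(A, rhs)

# Synthetic calibration phantom: here p happens to be linear in (q1, q2)
q_pairs = [(1.0, 0.5), (2.0, 1.2), (0.5, 2.0), (1.5, 1.5), (2.5, 0.8), (0.8, 0.9)]
p_true = [2 * q1 - q2 for q1, q2 in q_pairs]
coef = fit_decomposition(q_pairs, p_true)
p_hat = sum(c * t for c, t in zip(coef, basis(1.2, 0.7)))
print(abs(p_hat - 1.7) < 1e-6)  # the fitted polynomial reproduces p(q1, q2)
```

Once the coefficients are fitted, decomposition of new raw data is a cheap per-sample polynomial evaluation, which is what makes the empirical approach practical.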

  12. Simultaneous calibration phantom commission and geometry calibration in cone beam CT

    Science.gov (United States)

    Xu, Yuan; Yang, Shuai; Ma, Jianhui; Li, Bin; Wu, Shuyu; Qi, Hongliang; Zhou, Linghong

    2017-09-01

    Geometry calibration is a vital step for describing the geometry of a cone beam computed tomography (CBCT) system and is a prerequisite for CBCT reconstruction. In current methods, calibration phantom commission and geometry calibration are divided into two independent tasks. Small errors in ball-bearing (BB) positioning in the phantom-making step will severely degrade the quality of phantom calibration. To solve this problem, we propose an integrated method to simultaneously realize geometry phantom commission and geometry calibration. Instead of assuming the accuracy of the geometry phantom, the integrated method considers BB centers in the phantom as an optimized parameter in the workflow. Specifically, an evaluation phantom and the corresponding evaluation contrast index are used to evaluate geometry artifacts for optimizing the BB coordinates in the geometry phantom. After utilizing particle swarm optimization, the CBCT geometry and BB coordinates in the geometry phantom are calibrated accurately and are then directly used for the next geometry calibration task in other CBCT systems. To evaluate the proposed method, both qualitative and quantitative studies were performed on simulated and realistic CBCT data. The spatial resolution of reconstructed images using dental CBCT can reach up to 15 line pairs cm⁻¹. The proposed method is also superior to the Wiesent method in experiments. This paper shows that the proposed method is attractive for simultaneous and accurate geometry phantom commission and geometry calibration.
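The optimization step can be illustrated with a minimal particle swarm sketch; the quadratic "evaluation contrast index" below is an illustrative stand-in for the paper's artifact metric, and all parameter values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "evaluation contrast index": in the paper's spirit this would score
# geometry artifacts in a reconstruction; here it is a quadratic surrogate
# with a known minimum (the "true" BB coordinates).
true_params = np.array([0.5, -1.2, 2.0])
def objective(p):
    return np.sum((p - true_params) ** 2)

# Minimal particle swarm optimization over the parameter vector.
n_particles, n_iter, dim = 30, 200, 3
pos = rng.uniform(-5.0, 5.0, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration weights
for _ in range(n_iter):
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()
```

In the actual method the objective would require a reconstruction per evaluation, so the swarm size and iteration count trade accuracy against computation time.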

  13. Experimental and computational development of a natural breast phantom for dosimetry studies

    International Nuclear Information System (INIS)

    Nogueira, Luciana B.; Campos, Tarcisio P.R.

    2013-01-01

    This paper describes the experimental and computational development of a natural breast phantom, anthropomorphic and anthropometric, for dosimetry studies in breast brachytherapy and teletherapy. The natural breast phantom developed corresponds to the fibroadipose breasts of women aged 30 to 50 years, presenting medium radiographic density. The experimental breast phantom consists of three tissue equivalents (TEs): glandular TE, adipose TE and skin TE. These TEs were developed according to the chemical composition of the human breast and present a tissue-equivalent radiological response to exposure. Once construction of the experimental breast phantom was complete, it was mounted on a thorax phantom previously developed by the NRI/UFMG research group. The computational breast phantom was then constructed by performing computed tomography (CT) in axial slices of the thorax phantom. From the CT images, a voxel model of the thorax phantom was developed with the SISCODES computational program, the computational breast phantom being represented by the same TEs as the experimental breast phantom. The CT images also allowed the radiological equivalence of the tissues to be evaluated. The breast phantom is being used in experimental dosimetry studies in both breast brachytherapy and teletherapy. Dosimetry studies with the MCNP-5 code using the computational model of the breast phantom are in progress. (author)

  14. Characterization and Simulation of a New Design Parallel-Plate Ionization Chamber for CT Dosimetry at Calibration Laboratories

    Science.gov (United States)

    Perini, Ana P.; Neves, Lucio P.; Maia, Ana F.; Caldas, Linda V. E.

    2013-12-01

    In this work, a new extended-length parallel-plate ionization chamber was tested in the standard radiation qualities for computed tomography, established according to the half-value layers defined in the IEC 61267 standard, at the Calibration Laboratory of the Instituto de Pesquisas Energéticas e Nucleares (IPEN). The experimental characterization followed the IEC 61674 standard recommendations. The experimental results obtained with the ionization chamber studied in this work were compared to those obtained with a commercial pencil ionization chamber, showing good agreement. Using the PENELOPE Monte Carlo code, simulations were undertaken to evaluate the influence of the cables, insulator, PMMA body, collecting electrode, guard ring and screws, as well as of different materials and geometrical arrangements, on the energy deposited in the sensitive volume of the ionization chamber. The maximum influence observed was 13.3% for the collecting electrode, and regarding the use of different materials and designs, the substitutions showed that the original project presented the most suitable configuration. The experimental and simulated results obtained in this work show that this ionization chamber has appropriate characteristics for use at calibration laboratories, for dosimetry in standard computed tomography and diagnostic radiology quality beams.

  15. Calibration-free quantification of interior properties of porous media with x-ray computed tomography.

    Science.gov (United States)

    Hussein, Esam M A; Agbogun, H M D; Al, Tom A

    2015-03-01

    A method is presented for interpreting the values of x-ray attenuation coefficients reconstructed in computed tomography of porous media, while overcoming the ambiguity caused by the polychromatic nature of x-rays, dilution by void, and material heterogeneity. The method enables determination of porosity without relying on calibration or on image segmentation or thresholding to discriminate pores from solid material. It distinguishes between solution-accessible and inaccessible pores, and provides the spatial and frequency distributions of solid-matrix material in a heterogeneous medium. This is accomplished by matching an image of a sample saturated with a contrast solution with one saturated with a transparent solution. Voxels occupied by solid material or containing inaccessible pores are identified by the fact that they maintain the same location and image attributes in both images, with voxels containing inaccessible pores appearing empty in both. Fully porous, accessible voxels exhibit the maximum contrast, while the remaining porous voxels contain mixtures of pore solutions and solid. This matching process is performed with an image registration computer code and image processing software that requires only simple subtraction and multiplication (scaling) operations. The process is demonstrated on dolomite (non-uniform void distribution, homogeneous solid matrix) and sandstone (nearly uniform void distribution, heterogeneous solid matrix) samples, and its overall performance compares favorably with a method based on calibration and thresholding.
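A toy sketch of the matching idea, covering only the subtraction-and-scaling step (image registration is assumed already done, and all voxel values are hypothetical):

```python
import numpy as np

# Two registered CT images of the same sample: one saturated with a
# transparent solution, one with a contrast solution.
img_transparent = np.array([[1.00, 0.55, 0.10],
                            [1.00, 0.40, 0.10]])
img_contrast    = np.array([[1.00, 0.82, 0.70],
                            [1.00, 0.64, 0.70]])

# Contrast of a voxel that is fully porous and accessible (maximum change);
# here taken from a reference voxel known to be pure, accessible pore space.
full_contrast = 0.60

# Solid voxels and inaccessible pores do not change between the two images;
# partially porous voxels change in proportion to their accessible porosity.
porosity = (img_contrast - img_transparent) / full_contrast
```

In this example the first column (unchanged, value 1.00) reads as solid, the last column (maximum change) as fully porous and accessible, and the middle column as partially porous.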

  16. Calibration of Flick standards

    International Nuclear Information System (INIS)

    Thalmann, Ruedi; Spiller, Jürg; Küng, Alain; Jusko, Otto

    2012-01-01

    Flick standards or magnification standards are widely used for an efficient and functional calibration of the sensitivity of form measuring instruments. The results of a recent measurement comparison were partially unsatisfactory and revealed problems related to the calibration of these standards. In this paper the influence factors for the calibration of Flick standards using roundness measurement instruments are discussed in detail, in particular the bandwidth of the measurement chain, residual form errors of the device under test, profile distortions due to the diameter of the probing element, and questions related to the definition of the measurand. The different contributions are estimated using simulations and are experimentally verified. Alternative methods to calibrate Flick standards are also investigated. Finally, the practical limitations of Flick standard calibration are shown, and the usability of Flick standards both to calibrate the sensitivity of roundness instruments and to check the filter function of such instruments is analysed. (paper)

  17. Understanding organometallic reaction mechanisms and catalysis: experimental and computational tools

    CERN Document Server

    Ananikov, Valentin P

    2014-01-01

    The goal of this book is to explore and highlight the new horizons opened in the study of reaction mechanisms by the joint application of experimental studies and theoretical calculations. The latest insights and developments in the mechanistic studies of organometallic reactions and catalytic processes are presented and reviewed. The book adopts a unique approach, exemplifying how to use experiments, spectroscopy measurements, and computational methods to reveal reaction pathways and molecular structures of catalysts, rather than concentrating solely on one discipline. The result is a deeper

  18. CAMAC-controlled calibration system for nuclear reactor instruments

    International Nuclear Information System (INIS)

    McDowell, W.P.; Cornella, R.J.

    1977-01-01

    The hardware and the software which have been developed to implement a nuclear instrument calibration system for the Argonne National Laboratory ZPR-VI and ZPR-IX reactor complex are described. The system is implemented using an SEL-840 computer with its associated CAMAC crates and a hardware interface to generate input parameters and measure the required outputs on the instrument under test. Both linear and logarithmic instruments can be calibrated by the system, and output parameters can be measured at various automatically selected values of ac line voltage. A complete report on each instrument is printed as a result of the calibration, and out-of-tolerance readings are flagged. Operator interface is provided by a CAMAC-controlled Hazeltine terminal. The terminal display leads the operator through the complete calibration procedure. This computer-controlled system is a significant improvement over previously used methods of calibrating nuclear instruments, since it reduces reactor downtime and allows rapid detection of long-term changes in instrument calibration.

  19. Experimental calibration and validation of sewer/surface flow exchange equations in steady and unsteady flow conditions

    Science.gov (United States)

    Rubinato, Matteo; Martins, Ricardo; Kesserwani, Georges; Leandro, Jorge; Djordjević, Slobodan; Shucksmith, James

    2017-09-01

    The linkage between sewer pipe flow and floodplain flow is recognised to introduce an important source of uncertainty within two-dimensional (2D) urban flood models. This uncertainty is often attributed to the use of empirical hydraulic formulae (the one-dimensional (1D) weir and orifice steady flow equations) to achieve data-connectivity at the linking interface, which require the determination of discharge coefficients. Because of the paucity of high resolution localised data for this type of flow, the current understanding and quantification of a suitable range for those discharge coefficients is somewhat lacking. To fill this gap, this work presents the results acquired from an instrumented physical model designed to study the interaction between a pipe network flow and a floodplain flow. The full range of sewer-to-surface and surface-to-sewer flow conditions at the exchange zone are experimentally analysed in both steady and unsteady flow regimes. Steady state measured discharges are first analysed considering the relationship between the energy heads of the sewer flow and the floodplain flow; these results show that existing weir and orifice formulae are valid for describing the flow exchange for the present physical model, and yield new calibrated discharge coefficients for each of the flow conditions. The measured exchange discharges are also integrated (as a source term) within a 2D numerical flood model (a finite volume solver for the 2D Shallow Water Equations (SWE)), which is shown to reproduce the observed coefficients. This calibrated numerical model is then used to simulate a series of unsteady flow tests reproduced within the experimental facility. Results show that the numerical model overestimated the values of mean surcharge flow rate. This suggests the occurrence of additional head losses in unsteady conditions which are not currently accounted for within flood models calibrated in steady flow conditions.
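Back-calculating a discharge coefficient from a steady-state exchange measurement can be sketched with the classical 1D orifice equation Q = Cd · A · sqrt(2g ΔH); all numbers below are illustrative assumptions, not the study's calibrated values:

```python
import math

g = 9.81             # gravitational acceleration, m/s^2
A = 0.008            # orifice (manhole opening) area, m^2
dH = 0.12            # head difference between surface and sewer flow, m
Q_measured = 9.3e-3  # measured exchange discharge, m^3/s

# Invert the orifice equation Q = Cd * A * sqrt(2 * g * dH) for Cd.
Cd = Q_measured / (A * math.sqrt(2.0 * g * dH))
```

With several steady-state measurements, the same inversion per point (or a least-squares fit of Q against A·sqrt(2g ΔH)) yields a calibrated coefficient for each flow condition.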

  20. Rock models at Zielona Gora, Poland applied to the semi-empirical neutron tool calibration

    International Nuclear Information System (INIS)

    Czubek, J.A.; Ossowski, A.; Zorski, T.; Massalski, T.

    1995-01-01

    The semi-empirical calibration method applied to the neutron porosity tool is presented in this paper. It was used with the ODSN-102 tool of 70 mm diameter, equipped with an Am-Be neutron source, at the calibration facility of Zielona Gora, Poland, inside natural and artificial rocks: four sandstone, four limestone and one dolomite block with borehole diameters of 143 and 216 mm, and three artificial ceramic blocks with borehole diameters of 90 and 180 mm. All blocks were saturated with fresh water, and fresh water was also inside all boreholes. In five blocks, mineralized water (200,000 ppm NaCl) was introduced inside the boreholes. All neutron characteristics of the calibration blocks are given in this paper. The semi-empirical method of calibration correlates the tool readings observed experimentally with the general neutron parameter (GNP). This results in a general calibration curve: the tool readings (TR) plotted against GNP lie on a single curve irrespective of their origin, i.e. of the formation lithology, borehole diameter, tool stand-off, brine salinity, etc. The n and m power coefficients are obtained experimentally during the calibration procedure. The apparent neutron parameters are defined as those sensed by a neutron tool situated inside the borehole and in real environmental conditions. When they are known, the GNP can be computed analytically for the whole range of porosity for any borehole diameter, formation lithology (including variable rock matrix absorption cross-section and density), borehole and formation salinity, tool stand-off and drilling fluid physical parameters. By this approach all porosity corrections with respect to the standard (e.g. limestone) calibration curve can be generated. (author)
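As a loose illustration of fitting a single calibration curve, a power law TR = a · GNP^n can be fitted in log-log space; both the functional form and the data below are assumptions for illustration, not the paper's actual GNP definition or coefficients:

```python
import numpy as np

# Synthetic "calibration blocks": general neutron parameter values and the
# corresponding tool readings, generated from an assumed power law.
gnp = np.array([0.8, 1.2, 2.0, 3.5, 5.0, 8.0])
tr = 4.0 * gnp ** -0.65

# Fit log(TR) = log(a) + n * log(GNP) by linear least squares.
slope, intercept = np.polyfit(np.log(gnp), np.log(tr), 1)
n_fit, a_fit = slope, np.exp(intercept)
```

With real block data, the scatter of the points about the fitted curve indicates how well a single general calibration curve captures all lithologies and borehole conditions.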

  1. Computational fluid dynamic (CFD) investigation of thermal uniformity in a thermal cycling based calibration chamber for MEMS

    Science.gov (United States)

    Gui, Xulong; Luo, Xiaobing; Wang, Xiaoping; Liu, Sheng

    2015-12-01

    Micro-electrical-mechanical system (MEMS) devices have become important for many industries such as automotive, home appliances and portable electronics, especially with the emergence of the Internet of Things. Volume testing with temperature compensation has been essential for providing MEMS-based sensors with repeatability, consistency, reliability and durability at low cost. In particular, in the temperature calibration test, the temperature uniformity of the thermal-cycling-based calibration chamber is important for obtaining precision sensors, as each sensor differs before calibration. When sensor samples are loaded into the chamber, the chamber door is usually opened, fixtures are placed inside, and the samples are mounted on the fixtures. These operations may affect temperature uniformity in the chamber. In order to study how sample loading influences the temperature uniformity in the chamber during calibration testing, numerical simulation was conducted first. The temperature field and flow field were simulated for an empty chamber, a chamber with an open door, a chamber with samples, and a chamber with fixtures, respectively. The simulations showed that opening the chamber door, the sample size and the number of fixture layers all affect the flow and temperature fields. Experimental validation found the measured temperature values consistent with the simulated ones.

  2. POLCAL - POLARIMETRIC RADAR CALIBRATION

    Science.gov (United States)

    Vanzyl, J.

    1994-01-01

    processing altitude or in the aircraft roll angle are possible causes of error in computing the antenna patterns inside the processor. POLCAL uses an altitude error correction algorithm to correctly remove the antenna pattern from the SAR images. POLCAL also uses a topographic calibration algorithm to reduce calibration errors resulting from ground topography. By utilizing the backscatter measurements from either the corner reflectors or a well-known distributed target, POLCAL can correct the residual amplitude offsets in the various polarization channels and correct for the absolute gain of the radar system. POLCAL also gives the user the option of calibrating a scene using the calibration data from a nearby site. This allows precise calibration of all the scenes acquired on a flight line where corner reflectors were present. Construction and positioning of corner reflectors is covered extensively in the program documentation. In an effort to keep the POLCAL code as transportable as possible, the authors eliminated all interactions with a graphics display system. For this reason, it is assumed that users will have their own software for doing the following: (1) synthesize an image using HH or VV polarization, (2) display the synthesized image on any display device, and (3) read the pixel locations of the corner reflectors from the image. The only inputs used by the software (in addition to the input Stokes matrix data file) is a small data file with the corner reflector information. POLCAL is written in FORTRAN 77 for use on Sun series computers running SunOS and DEC VAX computers running VMS. It requires 4Mb of RAM under SunOS and 3.7Mb of RAM under VMS for execution. The standard distribution medium for POLCAL is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format or on a TK50 tape cartridge in DEC VAX FILES-11 format. 
Other distribution media may be available upon request.

  3. A Combined Experimental and Computational Approach to Subject-Specific Analysis of Knee Joint Laxity

    Science.gov (United States)

    Harris, Michael D.; Cyr, Adam J.; Ali, Azhar A.; Fitzpatrick, Clare K.; Rullkoetter, Paul J.; Maletsky, Lorin P.; Shelburne, Kevin B.

    2016-01-01

    Modeling complex knee biomechanics is a continual challenge, which has resulted in many models of varying levels of quality, complexity, and validation. Beyond modeling healthy knees, accurately mimicking pathologic knee mechanics, such as after cruciate rupture or meniscectomy, is difficult. Experimental tests of knee laxity can provide important information about ligament engagement and overall contributions to knee stability for development of subject-specific models to accurately simulate knee motion and loading. Our objective was to provide combined experimental tests and finite-element (FE) models of natural knee laxity that are subject-specific, have one-to-one experiment to model calibration, simulate ligament engagement in agreement with literature, and are adaptable for a variety of biomechanical investigations (e.g., cartilage contact, ligament strain, in vivo kinematics). Calibration involved perturbing ligament stiffness, initial ligament strain, and attachment location until model-predicted kinematics and ligament engagement matched experimental reports. Errors between model-predicted and experimental kinematics were low, and ligament engagement agreed with literature descriptions. These results demonstrate the ability of our constraint models to be customized for multiple individuals and simultaneously call attention to the need to verify that ligament engagement is in good general agreement with literature. To facilitate further investigations of subject-specific or population-based knee joint biomechanics, data collected during the experimental and modeling phases of this study are available for download by the research community. PMID:27306137

  4. SPOTS Calibration Example

    Directory of Open Access Journals (Sweden)

    Patterson E.

    2010-06-01

    The results are presented of using the procedure outlined by the Standardisation Project for Optical Techniques of Strain measurement (SPOTS) to calibrate a digital image correlation system. The process involves comparing the experimental data obtained with the optical measurement system to the theoretical values for a specially designed specimen. The standard states the criteria which must be met in order to achieve successful calibration, in addition to quantifying the measurement uncertainty of the system. The system was evaluated at three different displacement load levels, generating strain ranges from 289 µstrain to 2110 µstrain. At the 289 µstrain range, the calibration uncertainty was found to be 14.1 µstrain; at the 2110 µstrain range it was 28.9 µstrain. This calibration procedure was performed without painting a speckle pattern on the surface of the metal. Instead, the specimen surface was prepared with different grades of grit paper to produce the desired texture.

  5. Monte Carlo simulation of calibration of shadow shield scanning bed whole body monitor using different size BOMAB phantoms

    International Nuclear Information System (INIS)

    Bhati, S.; Patni, H.K.; Singh, I.S.; Garg, S.P.

    2005-01-01

    A shadow shield scanning bed whole body monitor incorporating a (102 mm dia x 76 mm thick) NaI(Tl) detector, is employed for assessment of high-energy photon emitters at BARC. The monitor is calibrated using a Reference BOMAB phantom representative of an average Indian radiation worker. However to account for the size variation in the physique of workers, it is required to calibrate the system with different size BOMAB phantoms which is both difficult and expensive. Therefore, a theoretical approach based on Monte Carlo techniques has been employed to calibrate the system with BOMAB phantoms of different sizes for several radionuclides of interest. A computer program developed for this purpose, simulates the scanning geometry of the whole body monitor and computes detection efficiencies for the BARC Reference phantom (63 kg/168 cm), ICRP Reference phantom (70 kg/170 cm) and several of its scaled versions covering a wide range of body builds. The detection efficiencies computed for different photon energies for BARC Reference phantom were found to be in very good agreement with experimental data, thus validating the Monte Carlo scheme used in the computer code. The results from this study could be used for assessment of internal contamination due to high-energy photon emitters for radiation workers of different physiques. (author)
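The geometric core of such a Monte Carlo calibration can be sketched as follows. This toy model counts only solid-angle capture for a line of emission points and a single detector position, ignoring attenuation, scatter, the scanning motion and detector response; all dimensions are illustrative, not the BARC monitor's:

```python
import numpy as np

rng = np.random.default_rng(2)

n = 200_000
# Emission points spread along a 160 cm "phantom" axis (z), in cm.
z_src = rng.uniform(-80.0, 80.0, n)

# Isotropic emission directions.
cos_t = rng.uniform(-1.0, 1.0, n)
phi = rng.uniform(0.0, 2.0 * np.pi, n)
sin_t = np.sqrt(1.0 - cos_t ** 2)
dx = sin_t * np.cos(phi)   # toward/away from the detector plane
dy = sin_t * np.sin(phi)
dz = cos_t                 # along the phantom axis

# Disk detector of radius 5.1 cm in the plane x = 40 cm, centred at
# y = z = 0 (one snapshot of the scanning geometry).
det_x, det_r = 40.0, 5.1
toward = dx > 0
t = det_x / np.where(toward, dx, np.inf)  # ray parameter at detector plane
y_hit = t * dy
z_hit = z_src + t * dz
on_disk = toward & (y_hit ** 2 + z_hit ** 2 <= det_r ** 2)

efficiency = on_disk.mean()  # geometric detection efficiency
```

Scaling the source distribution to different phantom sizes and repeating the estimate mirrors, in a crude way, how detection efficiencies can be recomputed for different body builds without re-measuring physical phantoms.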

  6. Numerical calibration and experimental validation of a PCM-Air heat exchanger model

    International Nuclear Information System (INIS)

    Stathopoulos, N.; El Mankibi, M.; Santamouris, Mattheos

    2017-01-01

    Highlights: • Development of a PCM-Air heat exchanger experimental unit and its numerical model. • Differential Scanning Calorimetry for PCM properties. • Ineptitude of DSC obtained heat capacity curves. • Creation of adequate heat capacity curves depending on heat transfer rates. • Confrontation of numerical and experimental results and validation of the model. - Abstract: Ambitious goals have been set at international, European and French level for energy consumption and greenhouse gas emissions decrease of the building sector. Achieving them requires renewable energy integration, a technology that presents however an important drawback: intermittent energy production. In response, thermal energy storage (TES) technology applications have been developed in order to correlate energy production and consumption of the building. Phase Change Materials (PCMs) have been widely used in TES applications as they offer a high storage density and adequate phase change temperature range. It is important to accurately know the thermophysical properties of the PCM, both for experimental (system design) and numerical (correct prediction) purposes. In this paper, the fabrication of a PCM – Air experimental prototype is presented at first, along with the development of a numerical model simulating the downstream temperature evolution of the heat exchanger. Particular focus is given to the calibration method and the validation of the model using experimental characterization results. Differential scanning calorimetry (DSC) is used to define the thermal properties of the PCM. Initial numerical results are underestimated compared to experimental ones. Various factors were investigated, pointing to the ineptitude of the heat capacity parameter, as DSC results depend on heating/cooling rates. Adequate heat capacity curves were empirically determined, depending on heat transfer rates and based on DSC results and experimental observations. The results of the proposed model

  7. Design of Test Tracks for Odometry Calibration of Wheeled Mobile Robots

    Directory of Open Access Journals (Sweden)

    Changbae Jung

    2011-09-01

    Pose estimation for mobile robots depends fundamentally on accurate odometry information. Odometry from wheel encoders is widely used because of its simple and inexpensive implementation. As the travel distance increases, however, odometry suffers from kinematic modeling errors regarding the wheels. Therefore, in order to improve odometry accuracy, systematic errors must be calibrated. The UMBmark test is a practical and useful scheme for calibrating the systematic errors of two-wheeled mobile robots. However, the size of the square path track used in the test has not been validated. A consideration of the calibration equations, experimental conditions, and modeling errors is essential to improve the calibration accuracy. In this paper, we analyze the effect on calibration performance of the approximation errors of the calibration equations and of nonsystematic errors under experimental conditions. We then propose a test track size that improves the accuracy of odometry calibration. Simulation and experimental results show that the proposed test track size significantly improves the calibration accuracy of odometry under a normal range of kinematic modeling errors.
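For reference, the UMBmark correction factors are commonly computed from the centroids of the return-position errors of clockwise and counter-clockwise square-path runs (Borenstein and Feng's formulation); the measured centroids below are made-up illustrative values:

```python
import math

L = 4.0   # side length of the square test track, m
b = 0.34  # nominal wheelbase, m

# Centroids of the x return-position errors (m) over repeated runs,
# clockwise and counter-clockwise (hypothetical measurements).
x_cg_cw, x_cg_ccw = -0.080, -0.060

# Type A (wheelbase) and type B (wheel-diameter) error angles, in radians.
alpha = (x_cg_cw + x_cg_ccw) / (-4.0 * L)
beta = (x_cg_cw - x_cg_ccw) / (-4.0 * L)

# Wheelbase correction factor (90 deg / (90 deg - alpha), in radians).
E_b = (math.pi / 2.0) / (math.pi / 2.0 - alpha)

# Wheel-diameter ratio from the curved path of radius R.
R = (L / 2.0) / math.sin(beta / 2.0)
E_d = (R + b / 2.0) / (R - b / 2.0)

# Per-wheel diameter correction factors applied to the encoder constants.
c_L = 2.0 / (E_d + 1.0)
c_R = 2.0 / (1.0 / E_d + 1.0)
```

The corrected wheelbase is E_b times the nominal one, and c_L, c_R rescale the left and right wheel encoder constants; the paper's contribution concerns how the choice of L affects the accuracy of these estimates.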

  8. Commissioning the CMS Alignment and Calibration Framework

    CERN Document Server

    Futyan, David

    2009-01-01

    The CMS experiment has developed a powerful framework to ensure the precise and prompt alignment and calibration of its components, which is a major prerequisite to achieve the optimal performance for physics analysis. The prompt alignment and calibration strategy harnesses computing resources both at the Tier-0 site and the CERN Analysis Facility (CAF) to ensure fast turnaround for updating the corresponding database payloads. An essential element is the creation of dedicated data streams concentrating the specific event information required by the various alignment and calibration workflows. The resulting low latency is required for feeding the resulting constants into the prompt reconstruction process, which is essential for achieving swift physics analysis of the LHC data. This report discusses the implementation and the computational aspects of the alignment and calibration framework. Recent commissioning campaigns with cosmic muons, beam halo and simulated data have been used to gain detailed experience...

  9. Experimental high energy physics and modern computer architectures

    International Nuclear Information System (INIS)

    Hoek, J.

    1988-06-01

    The paper examines how experimental High Energy Physics can use modern computer architectures efficiently. In this connection parallel and vector architectures are investigated, and the types available at the moment for general use are discussed. A separate section briefly describes some architectures that are either a combination of both, or exemplify other architectures. In an appendix some directions in which computing seems to be developing in the USA are mentioned. (author)

  10. Photorefractive and computational holography in the experimental generation of Airy beams

    Science.gov (United States)

    Suarez, Rafael A. B.; Vieira, Tarcio A.; Yepes, Indira S. V.; Gesualdi, Marcos R. R.

    2016-05-01

    In this paper, we present the experimental generation of Airy beams via computational and photorefractive holography. Generating Airy beams with conventional optical components presents several difficulties and is practically infeasible. Thus, the optical generation of Airy beams has been achieved through the optical reconstruction of a computer-generated hologram implemented on a spatial light modulator. In the photorefractive holography technique, used here for the first time to our knowledge, the hologram of an Airy beam is constructed (recorded) and reconstructed (read) optically in a nonlinear photorefractive medium. The experimental realization was carried out with a computational and photorefractive holography setup using a photorefractive Bi12TiO20 crystal as the holographic recording medium. Airy beams and Airy beam arrays were obtained experimentally in accordance with theory, with excellent prospects for applications in optical trapping and optical communications systems.

  11. Sloan Digital Sky Survey Photometric Calibration Revisited

    International Nuclear Information System (INIS)

    Marriner, John

    2012-01-01

    The Sloan Digital Sky Survey calibration is revisited to obtain the most accurate photometric calibration. A small but significant error is found in the flat-fielding of the Photometric telescope used for calibration. Two SDSS star catalogs are compared and the average difference in magnitude as a function of right ascension and declination exhibits small systematic errors in relative calibration. The photometric transformation from the SDSS Photometric Telescope to the 2.5 m telescope is recomputed and compared to synthetic magnitudes computed from measured filter bandpasses.

  12. Sloan Digital Sky Survey Photometric Calibration Revisited

    Energy Technology Data Exchange (ETDEWEB)

    Marriner, John; /Fermilab

    2012-06-29

    The Sloan Digital Sky Survey calibration is revisited to obtain the most accurate photometric calibration. A small but significant error is found in the flat-fielding of the Photometric telescope used for calibration. Two SDSS star catalogs are compared and the average difference in magnitude as a function of right ascension and declination exhibits small systematic errors in relative calibration. The photometric transformation from the SDSS Photometric Telescope to the 2.5 m telescope is recomputed and compared to synthetic magnitudes computed from measured filter bandpasses.

  13. Experimental all-optical one-way quantum computing

    International Nuclear Information System (INIS)

    Prevedel, R.

    2009-01-01

    In recent years, the relatively new field of quantum information processing (QIP) has attracted the attention of many scientists around the world due to its promise of increased computational speed, absolutely secure communication and the potential to simulate complex quantum mechanical systems. The very essence of this new quantum information technology lies in two concepts at the heart of quantum mechanics, namely superposition and entanglement. The present Thesis contains the results of four different experiments that were all aimed at the demonstration of an entirely new model for quantum computing with linear optics, the 'one-way' quantum computer. For this purpose a multi-photon entangled state of four photons was generated via the process of spontaneous parametric down-conversion using an interferometric setup. This entangled state acts as a resource that allowed for novel demonstrations of quantum algorithms and relevant experimental techniques. By exploiting the advances developed in both theory and experiment, in this Thesis we report the implementation of fast, active feed-forward that allowed, for the first time, the realization of deterministic linear optics quantum computing at an unprecedented speed. Further, we demonstrated the Deutsch algorithm on our one-way quantum computer, an important quantum algorithm that is capable of distinguishing whether a function is constant or balanced. Classically one needs to query the function at least 2^(N-1) + 1 times for an N-bit binary input string; in the quantum regime, this can be done with a single evaluation, independent of the size of the input. In another experiment we succeeded in playing an instance of a quantum game - the so-called Prisoner's dilemma - on our one-way quantum computer. Playing such a game is essentially the execution of a quantum algorithm made up of a distinct set of one- and two-qubit gates.
This allows the individual players to increase their
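    The single-query property of the Deutsch algorithm mentioned above can be illustrated with a tiny state-vector simulation. This is the generic textbook construction for the one-bit case, not the photonic cluster-state implementation of the thesis; qubit layout and names are illustrative:

```python
import math

def hadamard(state, pos):
    """Apply a Hadamard gate to the qubit stored at bit position `pos`."""
    h = 1.0 / math.sqrt(2.0)
    new = [0.0] * len(state)
    for i, amp in enumerate(state):
        i0, i1 = i & ~(1 << pos), i | (1 << pos)
        new[i0] += h * amp
        new[i1] += h * amp * (1.0 if i == i0 else -1.0)
    return new

def deutsch(f):
    """Deutsch's algorithm on a 2-qubit state |x y>, index = (x << 1) | y.
    A single oracle evaluation decides whether f: {0,1} -> {0,1} is constant."""
    state = [0.0, 1.0, 0.0, 0.0]      # start in |0>|1>
    state = hadamard(state, pos=1)    # H on the query qubit x
    state = hadamard(state, pos=0)    # H on the ancilla y
    after = [0.0] * 4                 # oracle |x, y> -> |x, y XOR f(x)>
    for i, amp in enumerate(state):
        x, y = i >> 1, i & 1
        after[(x << 1) | (y ^ f(x))] += amp
    state = hadamard(after, pos=1)    # H on x, then measure x
    p_one = state[2] ** 2 + state[3] ** 2   # P(x measures as 1)
    return "balanced" if p_one > 0.5 else "constant"
```

    Measuring the query register gives 0 for the two constant functions and 1 for the two balanced ones, after exactly one call to the oracle.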

  14. Experimental verification of self-calibration radiometer based on spontaneous parametric downconversion

    Science.gov (United States)

    Gao, Dongyang; Zheng, Xiaobing; Li, Jianjun; Hu, Youbo; Xia, Maopeng; Salam, Abdul; Zhang, Peng

    2018-03-01

    Based on the spontaneous parametric downconversion process, we propose a novel self-calibration radiometer scheme that can self-calibrate the degradation of its own response and ultimately monitor the fluctuation of a target radiation. The monitoring results are independent of the radiometer's degradation and are not tied to the primary standard detector scale. The principle and feasibility of the proposed scheme were verified by observing a bromine-tungsten lamp; a relative standard deviation of 0.39% was obtained for the stable lamp. The results show that the scheme is sound in principle and could represent a significant breakthrough for the self-calibration problem on space platforms.

  15. Using MCNP for in-core instrument calibration in CANDU

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, D.C. [Point Lepreau Generating Station, NB Power, Lepreau, New Brunswick (Canada); Anghel, V.N.P.; Sur, B. [Atomic Energy of Canada Limited, Chalk River, Ontario (Canada)

    2002-07-01

    The calibration of in-core instruments is important for safe and economical CANDU operation. However, in-core detectors are not normally suited to bench calibration procedures. This paper describes the use and validation of detailed neutron transport calculations for the purpose of calibrating the response of in-core neutron flux detectors. The Monte-Carlo transport code, MCNP, was used to model the thermal neutron flux distribution in the region around self-powered in-core flux detectors (ICFDs), and in the vicinity of the calandria edge. The ICFD model was used to evaluate the reduction in signal of a given detector (the 'detector shading factor') due to neutron absorption in surrounding materials, detectors, and lead-cables. The calandria edge model was used to infer the accuracy of the calandria edge position from flux scans performed by AECL's traveling flux detector (TFD) system. The MCNP results were checked against experimental results on ICFDs, and also against shading factors computed by other means. The use of improved in-core detector calibration factors obtained by this new methodology will improve the accuracy of spatial flux control performance in CANDU-6 reactors. The accurate determination of TFD based calandria edge position is useful in the quantitative measurement of changes in in-core component dimensions and position due to aging, such as pressure tube sag. (author)
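    The general idea behind the validation described above, checking a Monte Carlo transport result against an independent calculation, can be illustrated with a toy absorber model. This sketch assumes a purely absorbing slab with a made-up cross-section; it contains none of MCNP's physics or the detector geometry:

```python
import math
import random

def mc_transmission(sigma_t, thickness, n=200_000, seed=1):
    """Toy Monte Carlo: fraction of neutrons crossing a purely absorbing
    slab, with free path lengths sampled from an exponential distribution."""
    rng = random.Random(seed)
    survived = sum(
        1 for _ in range(n)
        if -math.log(1.0 - rng.random()) / sigma_t > thickness
    )
    return survived / n

sigma, t = 0.5, 2.0                  # hypothetical cross-section (1/cm), slab (cm)
analytic = math.exp(-sigma * t)      # exact answer for this simple geometry
estimate = mc_transmission(sigma, t)
```

    For this trivial geometry the analytic answer exp(-sigma*t) is available, so the Monte Carlo estimate can be cross-checked the same way MCNP shading factors were checked against factors computed by other means.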

  16. LiDAR-IMU Time Delay Calibration Based on Iterative Closest Point and Iterated Sigma Point Kalman Filter.

    Science.gov (United States)

    Liu, Wanli

    2017-03-08

    The time delay calibration between Light Detection and Ranging (LiDAR) and Inertial Measurement Units (IMUs) is an essential prerequisite for their integrated applications. However, the correspondences between LiDAR and IMU measurements are usually unknown, so the time delay cannot be computed directly. To solve the LiDAR-IMU time delay calibration problem, this paper presents a fusion method based on the iterative closest point (ICP) algorithm and the iterated sigma point Kalman filter (ISPKF), which combines the advantages of both. The ICP algorithm precisely determines the unknown transformation between the LiDAR and the IMU, and the ISPKF algorithm optimally estimates the time delay calibration parameters. First, the coordinate transformation from the LiDAR frame to the IMU frame is realized. Second, the measurement model and time delay error model of the LiDAR and IMU are established. Third, the methodology of the ICP and ISPKF procedure for LiDAR-IMU time delay calibration is presented. Experimental results validate the proposed method and demonstrate that the time delay error can be accurately calibrated.
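    A much simpler stand-in for the paper's ICP/ISPKF machinery shows the underlying problem: find the lag that best aligns two sensor streams. The brute-force cross-correlation search below uses a synthetic signal and an invented 7-sample delay, purely for illustration:

```python
import math

def estimate_delay(ref, delayed, max_lag):
    """Return the integer-sample lag that maximizes the cross-correlation
    between `ref` (shifted by the candidate lag) and `delayed`."""
    def score(lag):
        n = min(len(ref) - lag, len(delayed))
        return sum(ref[i + lag] * delayed[i] for i in range(n))
    return max(range(max_lag + 1), key=score)

# synthetic motion trace; the second sensor's stream starts 7 samples late
sig = [math.sin(0.3 * i) + 0.5 * math.sin(0.11 * i) for i in range(1000)]
true_delay = 7
delay_hat = estimate_delay(sig, sig[true_delay:], max_lag=20)
```

    Real LiDAR-IMU calibration is far harder because the two sensors do not observe the same quantity directly, which is exactly the gap the ICP/ISPKF fusion addresses.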

  17. Calibrated birth-death phylogenetic time-tree priors for bayesian inference.

    Science.gov (United States)

    Heled, Joseph; Drummond, Alexei J

    2015-05-01

    Here we introduce a general class of multiple calibration birth-death tree priors for use in Bayesian phylogenetic inference. All tree priors in this class separate ancestral node heights into a set of "calibrated nodes" and "uncalibrated nodes" such that the marginal distribution of the calibrated nodes is user-specified whereas the density ratio of the birth-death prior is retained for trees with equal values for the calibrated nodes. We describe two formulations, one in which the calibration information informs the prior on ranked tree topologies, through the (conditional) prior, and the other which factorizes the prior on divergence times and ranked topologies, thus allowing uniform, or any arbitrary prior distribution on ranked topologies. Although the first of these formulations has some attractive properties, the algorithm we present for computing its prior density is computationally intensive. However, the second formulation is always faster and computationally efficient for up to six calibrations. We demonstrate the utility of the new class of multiple-calibration tree priors using both small simulations and a real-world analysis and compare the results to existing schemes. The two new calibrated tree priors described in this article offer greater flexibility and control of prior specification in calibrated time-tree inference and divergence time dating, and will remove the need for indirect approaches to the assessment of the combined effect of calibration densities and tree priors in Bayesian phylogenetic inference. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.

  18. Fractal approach to computer-analytical modelling of tree crown

    International Nuclear Information System (INIS)

    Berezovskaya, F.S.; Karev, G.P.; Kisliuk, O.F.; Khlebopros, R.G.; Tcelniker, Yu.L.

    1993-09-01

    In this paper we discuss three approaches to modelling tree crown development: experimental (i.e. regression), theoretical (i.e. analytical) and simulation (i.e. computer) modelling. Common to all three is the assumption that a tree can be regarded as a fractal object, i.e. a collection of self-similar objects that combines the properties of two- and three-dimensional bodies. We show that a fractal measure of the crown can be used as the link between mathematical models of crown growth and of light propagation through the canopy. The computer approach makes it possible to visualize crown development and to calibrate the model against experimental data. Different stages of the above-mentioned approaches are described. Experimental data for spruce, a description of the computer modelling system and a variant of the computer model are presented. (author). 9 refs, 4 figs
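    The notion of a fractal measure can be illustrated with a minimal box-counting estimate of fractal dimension. The middle-third Cantor set stands in for the crown here (its dimension, ln 2 / ln 3 ≈ 0.631, is known exactly, so the estimate can be checked); nothing below comes from the paper's crown model:

```python
import math

def cantor_intervals(depth):
    """Intervals remaining after `depth` middle-third removals from [0, 1]."""
    intervals = [(0.0, 1.0)]
    for _ in range(depth):
        intervals = [piece
                     for a, b in intervals
                     for piece in ((a, a + (b - a) / 3), (b - (b - a) / 3, b))]
    return intervals

def box_count(intervals, eps):
    """Number of eps-sized boxes hit; each interval here is much narrower
    than eps and never straddles a box boundary, so midpoints suffice."""
    return len({int(0.5 * (a + b) / eps) for a, b in intervals})

ivs = cantor_intervals(8)                        # pre-fractal at 3**-8 resolution
dims = [math.log(box_count(ivs, 3.0 ** -k)) / math.log(3.0 ** k)
        for k in range(1, 7)]                    # slope of log N vs log(1/eps)
```

    The estimated dimensions converge to ln 2 / ln 3, the box-counting (fractal) dimension of the set.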

  19. Development of portable flow calibrator

    International Nuclear Information System (INIS)

    Akiyama, Kiyomitsu; Iijima, Nobuo

    1995-01-01

    In nuclear facilities, air sniffer systems are often used to evaluate the atmospheric concentration of radioactivity in the working environment. The system collects airborne dust on a filter during a sampling period. In this method, the total air flow during the sampling period is an important parameter for evaluating the radioactivity concentration correctly. Therefore, the flow meter of an air sniffer system must be calibrated periodically according to the Japanese Industrial Standards (JIS). Because no device was available to calibrate the flow meters in the working area, we previously had to remove them from their installed locations and carry them to another place where calibration could be performed, which required a great deal of labor. We have now developed a portable flow calibrator for air sniffer systems that enables easier on-site calibration of the flow meters in the working area. This report describes the outline of the portable flow calibrator and its experimental results. (author)

  20. Commissioning the CMS alignment and calibration framework

    International Nuclear Information System (INIS)

    Futyan, David

    2010-01-01

    The CMS experiment has developed a powerful framework to ensure the precise and prompt alignment and calibration of its components, which is a major prerequisite to achieve the optimal performance for physics analysis. The prompt alignment and calibration strategy harnesses computing resources both at the Tier-0 site and the CERN Analysis Facility (CAF) to ensure fast turnaround for updating the corresponding database payloads. An essential element is the creation of dedicated data streams concentrating the specific event information required by the various alignment and calibration workflows. The resulting low latency is required for feeding the resulting constants into the prompt reconstruction process, which is essential for achieving swift physics analysis of the LHC data. This report discusses the implementation and the computational aspects of the alignment and calibration framework. Recent commissioning campaigns with cosmic muons, beam halo and simulated data have been used to gain detailed experience with this framework, and results of this validation are reported.

  1. Effects of pressure and fuel dilution on coflow laminar methane-air diffusion flames: A computational and experimental study

    Science.gov (United States)

    Cao, Su; Ma, Bin; Giassi, Davide; Bennett, Beth Anne V.; Long, Marshall B.; Smooke, Mitchell D.

    2018-03-01

    In this study, the influence of pressure and fuel dilution on the structure and geometry of coflow laminar methane-air diffusion flames is examined. A series of methane-fuelled, nitrogen-diluted flames has been investigated both computationally and experimentally, with pressure ranging from 1.0 to 2.7 atm and CH4 mole fraction ranging from 0.50 to 0.65. Computationally, the MC-Smooth vorticity-velocity formulation was employed to describe the reactive gaseous mixture, and soot evolution was modelled by sectional aerosol equations. The governing equations and boundary conditions were discretised on a two-dimensional computational domain by finite differences, and the resulting set of fully coupled, strongly nonlinear equations was solved simultaneously at all points using a damped, modified Newton's method. Experimentally, chemiluminescence measurements of CH* were taken to determine its relative concentration profile and the structure of the flame front. A thin-filament ratio pyrometry method using a colour digital camera was employed to determine the temperature profiles of the non-sooty, atmospheric pressure flames, while soot volume fraction was quantified, after evaluation of soot temperature, through an absolute light calibration using a thermocouple. For a broad spectrum of flames in atmospheric and elevated pressures, the computed and measured flame quantities were examined to characterise the influence of pressure and fuel dilution, and the major conclusions were as follows: (1) maximum temperature increases with increasing pressure or CH4 concentration; (2) lift-off height decreases significantly with increasing pressure, modified flame length is roughly independent of pressure, and flame radius decreases with pressure approximately as P-1/2; and (3) pressure and fuel stream dilution significantly affect the spatial distribution and the peak value of the soot volume fraction.
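    Conclusion (2)'s pressure scaling can be expressed as a one-line relation. Only the r ∝ P^(-1/2) form comes from the abstract; the reference radius below is hypothetical:

```python
import math

def predicted_radius(r_ref, p_ref, p):
    """Flame radius at pressure p, assuming the r ~ P**(-1/2) scaling
    reported in the study for coflow laminar diffusion flames."""
    return r_ref * math.sqrt(p_ref / p)

r_1atm = 3.0                                  # mm, hypothetical radius at 1.0 atm
r_27atm = predicted_radius(r_1atm, 1.0, 2.7)  # predicted radius at 2.7 atm
```

    Under this scaling the product r * sqrt(P) is invariant across the 1.0-2.7 atm range studied.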

  2. On the calibration methods for neutron moisture gauges

    International Nuclear Information System (INIS)

    Apostol, I.

    1975-01-01

    Theoretical and experimental calibration methods for devices that use neutron sources to measure the water content of subsurface soil and other samples are investigated. The neutron flux density is evaluated by means of two- and three-group diffusion and Fermi age theories. Correction criteria for the calibration curves are presented. The agreement of the theoretical curves with the experimental data may be considered excellent. (author)

  3. Calibration of personal dosemeters in terms of the ICRU operational quantities

    International Nuclear Information System (INIS)

    McDonald, J.C.; Hertel, N.E.

    1992-01-01

    The International Commission on Radiological Units and Measurements (ICRU) has defined several new operational quantities for radiation protection purposes. The quantities to be used for personal monitoring are defined at depths in the human body. Because these quantities are impossible to measure directly, the ICRU has recommended that personal dosimeters should be calibrated under simplified conditions on an appropriate phantom, such as the ICRU sphere. The U.S. personal dosimetry accreditation programs make use of a 30 x 30 x 15 cm polymethylmethacrylate (PMMA) phantom; therefore it is necessary to relate the response of dosimeters calibrated on this phantom to the ICRU operational quantities. Calculations of the conversion factors to compute dosimeter response in terms of the operational quantities have been performed using the code MCNP. These calculations have also been compared to experimental measurements using thermoluminescent (TLD) detectors. (author)

  4. Calibration of personal dosemeters in terms of the ICRU operational quantities

    International Nuclear Information System (INIS)

    McDonald, J.C.; Hertel, N.E.

    1992-05-01

    The International Commission on Radiological Units and Measurements (ICRU) has defined several new operational quantities for radiation protection purposes. The quantities to be used for personal monitoring are defined at depths in the human body. Because these quantities are impossible to measure directly, the ICRU has recommended that personal dosemeters should be calibrated under simplified conditions on an appropriate phantom, such as the ICRU sphere. The US personal dosimetry accreditation programs make use of a 30 x 30 x 15 cm polymethylmethacrylate (PMMA) phantom; therefore it is necessary to relate the response of dosemeters calibrated on this phantom to the ICRU operational quantities. Calculations of the conversion factors to compute dosemeter response in terms of the operational quantities have been performed using the code MCNP. These calculations have also been compared to experimental measurements using thermoluminescent (TLD) detectors

  5. Overview of hypersonic CFD code calibration studies

    Science.gov (United States)

    Miller, Charles G.

    1987-01-01

    The topics are presented in viewgraph form and include the following: definitions of computational fluid dynamics (CFD) code validation; the climate in hypersonics and at LaRC when the first 'designed' CFD code calibration study was initiated; methodology from the experimentalist's perspective; hypersonic facilities; measurement techniques; and CFD code calibration studies.

  6. Reconstructing the calibrated strain signal in the Advanced LIGO detectors

    Science.gov (United States)

    Viets, A. D.; Wade, M.; Urban, A. L.; Kandhasamy, S.; Betzwieser, J.; Brown, Duncan A.; Burguet-Castell, J.; Cahillane, C.; Goetz, E.; Izumi, K.; Karki, S.; Kissel, J. S.; Mendell, G.; Savage, R. L.; Siemens, X.; Tuyenbayev, D.; Weinstein, A. J.

    2018-05-01

    Advanced LIGO’s raw detector output needs to be calibrated to compute dimensionless strain h(t) . Calibrated strain data is produced in the time domain using both a low-latency, online procedure and a high-latency, offline procedure. The low-latency h(t) data stream is produced in two stages, the first of which is performed on the same computers that operate the detector’s feedback control system. This stage, referred to as the front-end calibration, uses infinite impulse response (IIR) filtering and performs all operations at a 16 384 Hz digital sampling rate. Due to several limitations, this procedure currently introduces certain systematic errors in the calibrated strain data, motivating the second stage of the low-latency procedure, known as the low-latency gstlal calibration pipeline. The gstlal calibration pipeline uses finite impulse response (FIR) filtering to apply corrections to the output of the front-end calibration. It applies time-dependent correction factors to the sensing and actuation components of the calibrated strain to reduce systematic errors. The gstlal calibration pipeline is also used in high latency to recalibrate the data, which is necessary due mainly to online dropouts in the calibrated data and identified improvements to the calibration models or filters.
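    A direct-form FIR correction of the general kind the gstlal pipeline applies can be sketched generically. The taps and data stream below are invented for illustration and are in no way the actual LIGO calibration filters:

```python
def apply_fir(signal, taps):
    """Apply an FIR filter by direct-form convolution. Samples before the
    start of the stream are treated as zero."""
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for k, h in enumerate(taps):
            if n - k >= 0:
                acc += h * signal[n - k]
        out.append(acc)
    return out

taps = [0.5, 0.3, 0.2]          # hypothetical 3-tap correction filter
stream = [1.0, 0.0, 0.0, 0.0]   # unit impulse as a test input
corrected = apply_fir(stream, taps)
```

    Feeding a unit impulse through the filter returns the tap values themselves, which is the standard sanity check for an FIR implementation.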

  7. A novel angle computation and calibration algorithm of bio-inspired sky-light polarization navigation sensor.

    Science.gov (United States)

    Xian, Zhiwen; Hu, Xiaoping; Lian, Junxiang; Zhang, Lilian; Cao, Juliang; Wang, Yujie; Ma, Tao

    2014-09-15

    Navigation plays a vital role in our daily life. As traditional and commonly used navigation technologies, the Inertial Navigation System (INS) and Global Navigation Satellite System (GNSS) can provide accurate location information, but they suffer from the accumulative error of inertial sensors and cannot be used in satellite-denied environments. The remarkable navigation ability of animals shows that the pattern of the polarization sky can be used for navigation. A bio-inspired POLarization Navigation Sensor (POLNS) is constructed to detect the polarization of skylight. Contrary to the previous approach, we utilize all the outputs of the POLNS to compute the input polarization angle, based on Least Squares, which provides optimal angle estimation. In addition, a new sensor calibration algorithm is presented, in which the installation angle errors and sensor biases are taken into consideration. The derivation and implementation of our calibration algorithm are discussed in detail. To evaluate the performance of our algorithms, simulations and real-data tests are performed to compare our algorithms with several existing algorithms. The comparison results indicate that our algorithms are superior to the others and are more feasible and effective in practice.
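    A least-squares angle estimate from all analyzer outputs can be sketched for the standard Malus-type model I_i = a + b cos(2(θ_i − φ)). The uniform analyzer spacing (which makes the normal equations diagonal) and all numbers below are illustrative assumptions, not the POLNS design:

```python
import math

def estimate_polarization_angle(angles, intensities):
    """Least-squares polarization angle phi from analyzer readings
    I_i = a + b*cos(2*(theta_i - phi)). Assumes >= 3 analyzers spaced
    uniformly over 180 degrees, so the Fourier sums below are exactly
    the least-squares solution for the cos/sin coefficients."""
    m = len(angles)
    B = 2.0 / m * sum(I * math.cos(2 * t) for t, I in zip(angles, intensities))
    C = 2.0 / m * sum(I * math.sin(2 * t) for t, I in zip(angles, intensities))
    return 0.5 * math.atan2(C, B)

# synthetic readings for a true polarization angle of 25 degrees
true_phi = math.radians(25.0)
thetas = [i * math.pi / 6 for i in range(6)]     # 6 analyzers, 30 deg apart
readings = [1.0 + 0.8 * math.cos(2 * (t - true_phi)) for t in thetas]
phi_hat = estimate_polarization_angle(thetas, readings)
```

    Using every channel in a single fit, rather than pairs of channels, is what gives the least-squares approach its noise advantage.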

  8. Experimentally calibrated computational chemistry of tryptophan hydroxylase: Trans influence, hydrogen-bonding, and 18-electron rule govern O-2-activation

    DEFF Research Database (Denmark)

    Haahr, Lærke Tvedebrink; Kepp, Kasper Planeta; Boesen, Jane

    2010-01-01

    Insight into the nature of oxygen activation in tryptophan hydroxylase has been obtained from density functional computations. Conformations of O2-bound intermediates have been studied with oxygen trans to glutamate and histidine, respectively. An O2-adduct with O2 trans to histidine (Ohis) … towards the cofactor and a more activated O–O bond (1.33 Å) than in Oglu (1.30 Å). It is shown that the cofactor can hydrogen bond to O2 and activate the O–O bond further (from 1.33 to 1.38 Å). The Ohis intermediate leads to a ferryl intermediate (Fhis) with an isomer shift of 0.34 mm/s, also consistent with the experimental value (0.25 mm/s), which we propose as the structure of the hydroxylating intermediate, with the tryptophan substrate well located for further reaction 3.5 Å from the ferryl group. Based on the optimized transition states, the activation barriers for the two paths (glu and his) are similar, so …

  9. Color calibration and color-managed medical displays: does the calibration method matter?

    Science.gov (United States)

    Roehrig, Hans; Rehm, Kelly; Silverstein, Louis D.; Dallas, William J.; Fan, Jiahua; Krupinski, Elizabeth A.

    2010-02-01

    Our laboratory has investigated the efficacy of a suite of color calibration and monitor profiling packages which employ a variety of color measurement sensors. Each of the methods computes gamma correction tables for the red, green and blue color channels of a monitor that attempt to: a) match a desired luminance range and tone reproduction curve; and b) maintain a target neutral point across the range of grey values. All of the methods examined here produce International Color Consortium (ICC) profiles that describe the color rendering capabilities of the monitor after calibration. Color profiles incorporate a transfer matrix that establishes the relationship between RGB driving levels and the International Commission on Illumination (CIE) XYZ (tristimulus) values of the resulting on-screen color; the matrix is developed by displaying color patches of known RGB values on the monitor and measuring the tristimulus values with a sensor. The number and chromatic distribution of color patches varies across methods and is usually not under user control. In this work we examine the effect of employing differing calibration and profiling methods on rendition of color images. A series of color patches encoded in sRGB color space were presented on the monitor using color-management software that utilized the ICC profile produced by each method. The patches were displayed on the calibrated monitor and measured with a Minolta CS200 colorimeter. Differences in intended and achieved luminance and chromaticity were computed using the CIE DE2000 color-difference metric, in which a value of ▵E = 1 is generally considered to be approximately one just noticeable difference (JND) in color. We observed between one and 17 JND's for individual colors, depending on calibration method and target.
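    The color-difference computation can be sketched with the simpler CIE76 metric (the study uses CIEDE2000, which adds lightness, chroma and hue weightings on top of this Euclidean distance). The XYZ values below are illustrative, not measurements from the study:

```python
import math

def xyz_to_lab(X, Y, Z, white=(95.047, 100.0, 108.883)):
    """CIE XYZ -> CIELAB, relative to the D65 reference white."""
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = (f(c / w) for c, w in zip((X, Y, Z), white))
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e76(lab1, lab2):
    """Euclidean CIE76 colour difference (simpler than CIEDE2000)."""
    return math.dist(lab1, lab2)

intended = xyz_to_lab(41.24, 21.26, 1.93)   # roughly the sRGB red primary
measured = xyz_to_lab(42.00, 22.00, 2.10)   # hypothetical colorimeter reading
dE = delta_e76(intended, measured)
```

    A difference of ΔE ≈ 1 is, as the abstract notes, about one just-noticeable difference, so the 1-17 JND range reported corresponds directly to this scale.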

  10. Phase behavior of multicomponent membranes: Experimental and computational techniques

    DEFF Research Database (Denmark)

    Bagatolli, Luis; Kumar, P.B. Sunil

    2009-01-01

    Recent developments in biology seem to indicate that the Fluid Mosaic model of the membrane proposed by Singer and Nicolson, with the lipid bilayer functioning only as a medium to support the protein machinery, may be too simple to be realistic. Many protein functions are now known to depend on the composition of … membranes. The current increase in interest in domain formation in multicomponent membranes also stems from experiments demonstrating liquid ordered-liquid disordered coexistence in mixtures of lipids and cholesterol, and from the success of several computational models in predicting their behavior. … This review includes basic foundations on membrane model systems and experimental approaches applied in the membrane research area, stressing recent advances in experimental and computational techniques.

  11. Immune Algorithm Complex Method for Transducer Calibration

    Directory of Open Access Journals (Sweden)

    YU Jiangming

    2014-08-01

    As a key link in engineering test tasks, transducer calibration has a significant influence on the accuracy and reliability of test results. Because of unknown and complex nonlinear characteristics, conventional methods cannot achieve satisfactory accuracy. An immune algorithm complex modeling approach is proposed, and simulation studies on the calibration of three multiple-output transducers are carried out using the developed complex modeling. The simulated and experimental results show that the immune algorithm complex modeling approach can significantly improve calibration precision compared with traditional calibration methods.

  12. An experimental test of CSR theory using a globally calibrated ordination method.

    Science.gov (United States)

    Li, Yuanzhi; Shipley, Bill

    2017-01-01

    Can CSR theory, in conjunction with a recently proposed globally calibrated CSR ordination ("StrateFy"), using only three easily measured leaf traits (leaf area, specific leaf area and leaf dry matter content) predict the functional signature of herbaceous vegetation along experimentally manipulated gradients of soil fertility and disturbance? To determine this, we grew 37 herbaceous species in mixture for five years in 24 experimental mesocosms differing in factorial levels of soil resources (stress) and density-independent mortality (disturbance). We measured 16 different functional traits and then ordinated the resulting vegetation within the CSR triangle using StrateFy. We then calculated community-weighted mean (CWM) values of the competitor (CCWM), stress-tolerator (SCWM) and ruderal (RCWM) scores for each mesocosm. We found a significant increase in SCWM from low to high stress mesocosms, and an increase in RCWM from weakly to highly disturbed mesocosms. However, CCWM did not decline significantly as intensity of stress or disturbance increased, as predicted by CSR theory. This last result likely arose because our herbaceous species were relatively poor competitors in global comparisons and thus no strong competitors in our species pool were selectively favoured in low stress and low disturbed mesocosms. Variation in the 13 other traits, not used by StrateFy, largely agreed with the predictions of CSR theory. StrateFy worked surprisingly well in our experimental study except for the C-dimension. Despite loss of some precision, it has great potential applicability in future studies due to its simplicity and generality.

  13. Image calibration and registration in cone-beam computed tomogram for measuring the accuracy of computer-aided implant surgery

    Science.gov (United States)

    Lam, Walter Y. H.; Ngan, Henry Y. T.; Wat, Peter Y. P.; Luk, Henry W. K.; Goto, Tazuko K.; Pow, Edmond H. N.

    2015-02-01

    Medical radiography is the use of radiation to "see through" a human body without breaching its integrity (surface). With computed tomography (CT)/cone beam computed tomography (CBCT), three-dimensional (3D) imaging can be produced. These images not only facilitate disease diagnosis but also enable computer-aided surgical planning/navigation. In dentistry, the common method for transferring the virtual surgical planning to the patient (reality) is the use of a surgical stent, either with a preloaded planning (static), such as a channel, or with real-time surgical navigation (dynamic) after registration with fiducial markers (RF). This paper describes using the corner of a cube as a radiopaque fiducial marker on an acrylic (plastic) stent; this RF allows robust calibration and registration of Cartesian (x, y, z) coordinates linking the patient (reality) and the imaging (virtuality), so that the surgical planning can be transferred in either a static or a dynamic way. The accuracy of computer-aided implant surgery was measured with reference to these coordinates. In our preliminary model surgery, a dental implant was planned virtually and placed with a preloaded surgical guide. The deviation of the placed implant apex from the planning was x = +0.56 mm [more right], y = -0.05 mm [deeper], z = -0.26 mm [more lingual], which was within the clinically accepted 2 mm safety range. For comparison with the virtual planning, the physically placed implant was CT/CBCT scanned, and errors may be introduced at this step. The difference of the actual implant apex from the virtual apex was x = 0.00 mm, y = +0.21 mm [shallower], z = -1.35 mm [more lingual], and this should be borne in mind when interpreting the results.
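    The per-axis deviations reported above combine into a single Euclidean error, which is the number most naturally compared against the 2 mm safety range. A straightforward check using the study's apex deviations:

```python
import math

def deviation_mm(dx, dy, dz):
    """3D deviation magnitude between planned and placed implant positions."""
    return math.sqrt(dx * dx + dy * dy + dz * dz)

# apex deviation from the planning: x=+0.56, y=-0.05, z=-0.26 (mm)
apex_dev = deviation_mm(0.56, -0.05, -0.26)   # about 0.62 mm
within_safety = apex_dev < 2.0
```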

  14. Infrared stereo calibration for unmanned ground vehicle navigation

    Science.gov (United States)

    Harguess, Josh; Strange, Shawn

    2014-06-01

    The problem of calibrating two color cameras as a stereo pair has been heavily researched, and many off-the-shelf software packages, such as Robot Operating System and OpenCV, include calibration routines that work in most cases. However, the problem of calibrating two infrared (IR) cameras for the purposes of sensor fusion and point cloud generation is relatively new, and many challenges exist. We present a comparison of color camera and IR camera stereo calibration using data from an unmanned ground vehicle. There are two main challenges in IR stereo calibration: the calibration board (material, design, etc.) and the accuracy of calibration pattern detection. We present our analysis of these challenges along with our IR stereo calibration methodology. Finally, we present our results both visually and analytically with computed reprojection errors.
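    The reprojection error used to score the calibrations can be sketched with an ideal pinhole model: project the 3D points through the calibrated model and measure the RMS pixel distance to the detected pattern corners. The intrinsics and points below are arbitrary, not values from the paper:

```python
import math

def project(point, fx, fy, cx, cy):
    """Ideal pinhole projection of a camera-frame 3D point to pixels."""
    X, Y, Z = point
    return fx * X / Z + cx, fy * Y / Z + cy

def rms_reprojection_error(points3d, observed, fx, fy, cx, cy):
    """Root-mean-square pixel distance between observed corner locations
    and the locations predicted by the camera model."""
    total = 0.0
    for p, (uo, vo) in zip(points3d, observed):
        u, v = project(p, fx, fy, cx, cy)
        total += (u - uo) ** 2 + (v - vo) ** 2
    return math.sqrt(total / len(points3d))

pts = [(0.1, 0.0, 1.0), (0.0, 0.2, 2.0), (-0.1, -0.1, 1.5)]
obs = [project(p, 600.0, 600.0, 320.0, 240.0) for p in pts]   # perfect detections
err = rms_reprojection_error(pts, obs, 600.0, 600.0, 320.0, 240.0)
```

    With perfect detections the error is zero; real calibrations report fractions of a pixel, and IR pattern-detection inaccuracy inflates this number relative to color cameras.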

  15. Revisiting dibenzothiophene thermochemical data: Experimental and computational studies

    International Nuclear Information System (INIS)

    Freitas, Vera L.S.; Gomes, Jose R.B.; Ribeiro da Silva, Maria D.M.C.

    2009-01-01

    Thermochemical data of dibenzothiophene were studied in the present work by experimental techniques and computational calculations. The standard (p° = 0.1 MPa) molar enthalpy of formation, at T = 298.15 K, in the gaseous phase, was determined from the enthalpies of combustion and sublimation, obtained by rotating-bomb calorimetry in oxygen and by Calvet microcalorimetry, respectively. This value was compared with estimates from G3(MP2)//B3LYP computations and with other results available in the literature.

  16. In-Flight Pitot-Static Calibration

    Science.gov (United States)

    Foster, John V. (Inventor); Cunningham, Kevin (Inventor)

    2016-01-01

    A GPS-based pitot-static calibration system uses global output-error optimization. High data rate measurements of static and total pressure, ambient air conditions, and GPS-based ground speed measurements are used to compute pitot-static pressure errors over a range of airspeed. System identification methods rapidly compute optimal pressure error models with defined confidence intervals.
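    A global output-error optimization in its simplest form reduces to least squares: fit a pressure-error model to error samples gathered over a range of airspeed. The sketch below fits a hypothetical linear error model; all numbers are invented:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y ~ m*x + b (closed-form solution)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return m, my - m * mx

# hypothetical airspeeds (knots) and static-pressure errors (psf)
v = [80, 100, 120, 140, 160]
dp = [0.41, 0.52, 0.60, 0.71, 0.79]
slope, intercept = fit_line(v, dp)
```

    A production system would fit a richer model and also propagate confidence intervals on the fitted coefficients, as the abstract describes.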

  17. Statistical Methodologies to Integrate Experimental and Computational Research

    Science.gov (United States)

    Parker, P. A.; Johnson, R. T.; Montgomery, D. C.

    2008-01-01

    Development of advanced algorithms for simulating engine flow paths requires the integration of fundamental experiments with the validation of enhanced mathematical models. In this paper, we provide an overview of statistical methods for strategically and efficiently conducting experiments and computational model refinement. Moreover, the integration of experimental and computational research efforts is emphasized. With a statistical engineering perspective, scientific and engineering expertise is combined with the statistical sciences to gain deeper insights into experimental phenomena and code development performance, supporting the overall research objectives. The particular statistical methods discussed are design of experiments, response surface methodology, and uncertainty analysis and planning. Their application is illustrated with a coaxial free-jet experiment and a turbulence model refinement investigation. Our goal is to provide an overview, focusing on concepts rather than practice, to demonstrate the benefits of using statistical methods in research and development, thereby encouraging their broader and more systematic application.

  18. Calibration of Cherenkov detectors for monoenergetic photon imaging in active interrogation applications

    Energy Technology Data Exchange (ETDEWEB)

    Rose, P.B., E-mail: prose6@gatech.edu; Erickson, A.S., E-mail: anna.erickson@me.gatech.edu

    2015-11-01

    Active interrogation of cargo containers using monoenergetic photons offers a rapid and low-dose approach to searching for shielded special nuclear materials. Cherenkov detectors can be used for imaging of the cargo provided that the gamma-ray energies used in interrogation are well resolved, as is the case in the ¹¹B(d,n-γ)¹²C reaction, which produces 4.4 MeV and 15.1 MeV photons. An array of Cherenkov threshold detectors reduces the low-energy background from scatter while providing high-contrast transmission imaging that confirms the presence of high-Z materials; however, these detectors require a special approach to energy calibration because of their lack of energy resolution. In this paper, we discuss the utility of Cherenkov detectors for active interrogation with monoenergetic photons as well as the results of computational and experimental studies of their energy calibration. Studies with sources emitting monoenergetic photons, as well as sources with complex gamma-ray spectra, for example ²³²Th, show that calibration is possible as long as the energies of the photons of interest are distinct.
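    The threshold behaviour that makes Cherenkov detectors natural energy discriminators follows from the emission condition β > 1/n. The sketch below evaluates the kinetic-energy threshold for electrons in water (n ≈ 1.33); these are generic textbook values, not parameters from the paper:

```python
import math

def cherenkov_kinetic_threshold_mev(mass_mev, n):
    """Kinetic-energy threshold for Cherenkov emission by a particle of rest
    mass `mass_mev` (MeV/c^2) in a medium of refractive index n.
    Condition: beta > 1/n, i.e. gamma > 1/sqrt(1 - 1/n**2)."""
    gamma = 1.0 / math.sqrt(1.0 - 1.0 / n ** 2)
    return (gamma - 1.0) * mass_mev

ELECTRON_MASS_MEV = 0.511
t_water = cherenkov_kinetic_threshold_mev(ELECTRON_MASS_MEV, 1.33)  # ~0.26 MeV
```

    Secondary electrons produced by 4.4 MeV and 15.1 MeV photons sit far above this threshold, while much of the low-energy scattered background falls below it, which is the imaging advantage the abstract exploits.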

  19. Calibration biases in logical reasoning tasks

    Directory of Open Access Journals (Sweden)

    Guillermo Macbeth

    2013-08-01

    Full Text Available The aim of this contribution is to present an experimental study of calibration in deductive reasoning tasks. Calibration is defined as the empirical convergence or divergence between objective and subjective success. The underconfidence bias is understood as the dominance of the former over the latter. The hypothesis of this study states that the form of the propositions presented in the experiment is critical for calibration phenomena. Affirmative and negative propositions are distinguished in their cognitive processing. Results suggest that monotonous compound propositions are prone to underconfidence. A heuristic approach to this phenomenon is proposed: the activation of a monotony heuristic would produce an illusion of simplicity that generates the calibration bias. This evidence is analysed in the context of the metacognitive modeling of calibration phenomena.

  20. HIFiRE-1 Turbulent Shock Boundary Layer Interaction - Flight Data and Computations

    Science.gov (United States)

    Kimmel, Roger L.; Prabhu, Dinesh

    2015-01-01

    The Hypersonic International Flight Research Experimentation (HIFiRE) program is a hypersonic flight test program executed by the Air Force Research Laboratory (AFRL) and Australian Defence Science and Technology Organisation (DSTO). This flight contained a cylinder-flare induced shock boundary layer interaction (SBLI). Computations of the interaction were conducted for a number of times during the ascent. The DPLR code used for predictions was calibrated against ground test data prior to exercising the code at flight conditions. Generally, the computations predicted the upstream influence and interaction pressures very well. Plateau pressures on the cylinder were predicted well at all conditions. Although the experimental heat transfer showed a large amount of scatter, especially at low heating levels, the measured heat transfer agreed well with computations. The primary discrepancy between the experiment and computation occurred in the pressures measured on the flare during second stage burn. Measured pressures exhibited large overshoots late in the second stage burn, the mechanism of which is unknown. The good agreement between flight measurements and CFD helps validate the philosophy of calibrating CFD against ground test, prior to exercising it at flight conditions.

  1. Global Calibration of Multi-Cameras Based on Refractive Projection and Ray Tracing

    Directory of Open Access Journals (Sweden)

    Mingchi Feng

    2017-10-01

    Full Text Available Multi-camera systems are widely applied in three-dimensional (3D) computer vision, especially when multiple cameras are distributed on both sides of the measured object. The calibration methods of multi-camera systems are critical to the accuracy of vision measurement, and the key is to find an appropriate calibration target. In this paper, a high-precision camera calibration method for multi-camera systems based on a transparent glass checkerboard and ray tracing is described, and is used to calibrate multiple cameras distributed on both sides of the glass checkerboard. Firstly, the intrinsic parameters of each camera are obtained by Zhang’s calibration method. Then, the multiple cameras capture several images from the front and back of the glass checkerboard with different orientations, all of which contain distinct grid corners. As the cameras on one side are not affected by the refraction of the glass checkerboard, their extrinsic parameters can be calculated directly. However, the cameras on the other side are influenced by refraction in the glass checkerboard, and direct use of the projection model would produce a calibration error. A multi-camera calibration method using a refractive projection model and ray tracing is developed to eliminate this error. Furthermore, both synthetic and real data are employed to validate the proposed approach. The experimental results of the refractive calibration show that the error of the 3D reconstruction is smaller than 0.2 mm, the relative errors of both rotation and translation are less than 0.014%, and the mean and standard deviation of the reprojection error of the four-camera system are 0.00007 and 0.4543 pixels, respectively. The proposed method is flexible, highly accurate, and simple to carry out.
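The refractive projection model rests on tracing rays through the glass with Snell's law. A generic vector-form refraction sketch (not the authors' implementation; directions and indices below are illustrative) shows the key property that a ray exits a parallel plate with its original direction, only laterally shifted:

```python
import numpy as np

def refract(d, n_hat, n1, n2):
    """Refract unit direction d at a surface with unit normal n_hat
    (pointing against the incoming ray), Snell's law in vector form."""
    r = n1 / n2
    cos_i = -np.dot(n_hat, d)
    sin2_t = r * r * (1.0 - cos_i * cos_i)
    cos_t = np.sqrt(1.0 - sin2_t)  # assumes no total internal reflection
    return r * d + (r * cos_i - cos_t) * n_hat

# Ray hitting a horizontal glass plate (n = 1.5) at 30 deg incidence.
d_in = np.array([0.5, -np.sqrt(3) / 2])
n_hat = np.array([0.0, 1.0])
d_glass = refract(d_in, n_hat, 1.0, 1.5)   # bent toward the normal
d_out = refract(d_glass, n_hat, 1.5, 1.0)  # exits parallel to d_in
```

Inside the glass, sin of the refraction angle is (1/1.5)·sin 30° = 1/3; ignoring this bend (i.e., using the direct projection model) is exactly what introduces the calibration error the paper eliminates.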

  2. Experimental data processing techniques by a personal computer

    International Nuclear Information System (INIS)

    Matsuura, Kiyokata; Tsuda, Kenzo; Abe, Yoshihiko; Kojima, Tsuyoshi; Nishikawa, Akira; Shimura, Hitoshi; Hyodo, Hiromi; Yamagishi, Shigeru.

    1989-01-01

    A personal computer (16-bit, about 1 MB memory) can be used at low cost for experimental data processing. This report surveys the important techniques for A/D and D/A conversion, display, storage and transfer of experimental data. Items to be considered in the software are also discussed. Practical programs written in BASIC and Assembler are given as examples. Here, we present some techniques for faster processing in BASIC and show that a system combining BASIC and Assembler is useful in practical experiments. System performance, such as processing speed and flexibility in setting operating conditions, depends strongly on the programming language. We have tested the processing speed of some typical programming languages: BASIC (interpreter), C, FORTRAN and Assembler. For calculation, FORTRAN has the best performance, comparable to or better than Assembler even on a personal computer. (author)

  3. Automated system for the calibration of magnetometers

    DEFF Research Database (Denmark)

    Petrucha, Vojtech; Kaspar, Petr; Ripka, Pavel

    2009-01-01

    A completely nonmagnetic calibration platform has been developed and constructed at DTU Space (Technical University of Denmark). It is intended for on-site scalar calibration of high-precision fluxgate magnetometers. An enhanced version of the same platform is being built at the Czech Technical Uni...... through custom-made optical incremental sensors. The system is controlled by a microcontroller, which executes commands from a computer. The properties of the system as well as calibration and measurement results will be presented. ©2009 American Institute of Physics...

  4. Full-Field Calibration of Color Camera Chromatic Aberration using Absolute Phase Maps.

    Science.gov (United States)

    Liu, Xiaohong; Huang, Shujun; Zhang, Zonghua; Gao, Feng; Jiang, Xiangqian

    2017-05-06

    The refractive index of a lens varies for different wavelengths of light, and thus the same incident light with different wavelengths has different outgoing light. This characteristic of lenses causes images captured by a color camera to display chromatic aberration (CA), which seriously reduces image quality. Based on an analysis of the distribution of CA, a full-field calibration method based on absolute phase maps is proposed in this paper. Red, green, and blue closed sinusoidal fringe patterns are generated, consecutively displayed on an LCD (liquid crystal display), and captured by a color camera from the front viewpoint. The phase information of each color fringe is obtained using a four-step phase-shifting algorithm and optimum fringe number selection method. CA causes the unwrapped phase of the three channels to differ. These pixel deviations can be computed by comparing the unwrapped phase data of the red, blue, and green channels in polar coordinates. CA calibration is accomplished in Cartesian coordinates. The systematic errors introduced by the LCD are analyzed and corrected. Simulated results show the validity of the proposed method and experimental results demonstrate that the proposed full-field calibration method based on absolute phase maps will be useful for practical software-based CA calibration.
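The four-step phase-shifting recovery that the method builds on can be sketched with synthetic fringes (all values below are made up; the paper additionally applies optimum fringe number selection and per-channel comparison, which this toy omits):

```python
import numpy as np

# Synthetic four-step phase-shifted fringes: I_k = A + B*cos(phi + k*pi/2).
x = np.linspace(0.0, 1.0, 256)
phi_true = 2.0 * np.pi * 4.0 * x          # 4 fringes across the field
A, B = 128.0, 100.0
I = [A + B * np.cos(phi_true + k * np.pi / 2) for k in range(4)]

# Standard four-step formula: tan(phi) = (I4 - I2) / (I1 - I3),
# since I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi).
phi_wrapped = np.arctan2(I[3] - I[1], I[0] - I[2])
```

Comparing the unwrapped phase of the red, green, and blue channels computed this way is what exposes the per-pixel chromatic-aberration deviations.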

  5. Camera calibration method of binocular stereo vision based on OpenCV

    Science.gov (United States)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of spatial objects. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the process, obtaining higher precision and efficiency. First, the camera model in OpenCV and an algorithm for camera calibration are presented, with particular attention to the influence of camera lens radial distortion and decentering distortion. Then, a camera calibration procedure is designed to compute the camera parameters and calculate the calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners have also been used in this part. Finally, results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results meet the requirements of robot binocular stereo vision.
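The radial-plus-decentering distortion model the abstract refers to can be sketched in plain NumPy (a hedged forward-model illustration with hypothetical coefficients, not the OpenCV calibration API itself, which estimates these coefficients from checkerboard corners):

```python
import numpy as np

def distort(points, k1, k2, p1, p2):
    """Apply radial (k1, k2) and decentering/tangential (p1, p2)
    distortion to normalized image coordinates (x, y)."""
    x, y = points[:, 0], points[:, 1]
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return np.column_stack([x_d, y_d])

pts = np.array([[0.0, 0.0], [0.1, 0.2], [-0.3, 0.1]])
undistorted = distort(pts, 0.0, 0.0, 0.0, 0.0)   # identity
barrel = distort(np.array([[0.2, 0.0]]), 0.1, 0.0, 0.0, 0.0)
```

Calibration inverts this picture: given many observed (distorted) corner positions and their known board geometry, the coefficients and the intrinsic matrix are found by nonlinear least squares.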

  6. Calibration bench of flowmeters

    International Nuclear Information System (INIS)

    Bremond, J.; Da Costa, D.; Calvet, A.; Vieuxmaire, C.

    1966-01-01

    This equipment is devoted to the comparison of signals from two turbines installed in the Cabri experimental loop. The signal is compared to the standard turbine. The characteristics and the performance of the calibration bench are presented. (A.L.B.)

  7. Distributed computing testbed for a remote experimental environment

    International Nuclear Information System (INIS)

    Butner, D.N.; Casper, T.A.; Howard, B.C.; Henline, P.A.; Davis, S.L.; Barnes, D.

    1995-01-01

    Collaboration is increasing as physics research becomes concentrated on a few large, expensive facilities, particularly in magnetic fusion energy research, with national and international participation. These facilities are designed for steady state operation and interactive, real-time experimentation. We are developing tools to provide for the establishment of geographically distant centers for interactive operations; such centers would allow scientists to participate in experiments from their home institutions. A testbed is being developed for a Remote Experimental Environment (REE), a ''Collaboratory.'' The testbed will be used to evaluate the ability of a remotely located group of scientists to conduct research on the DIII-D Tokamak at General Atomics. The REE will serve as a testing environment for advanced control and collaboration concepts applicable to future experiments. Process-to-process communications over high speed wide area networks provide real-time synchronization and exchange of data among multiple computer networks, while the ability to conduct research is enhanced by adding audio/video communication capabilities. The Open Software Foundation's Distributed Computing Environment is being used to test concepts in distributed control, security, naming, remote procedure calls and distributed file access using the Distributed File Services. We are exploring the technology and sociology of remotely participating in the operation of a large scale experimental facility

  8. Experimental device for obtaining calibration factor for the total count technique

    International Nuclear Information System (INIS)

    Gonçalves, Eduardo R.; Braz, Delson; Brandão, Luís Eduardo B.

    2017-01-01

    Nuclear technologies have been widely used in industrial plants to help solve process and design problems or simply to obtain information about them. The Total Count technique for flow measurement has as its main advantages: it is an absolute technique, independent of the readings of any device other than the one directly used for recording the radioactive cloud, requiring only a single detector to provide the final result; it is independent of the internal volume of the transport duct and can be applied in the presence or absence of obstructions; there is no restriction as to the nature of the product or material being conveyed; and it is a noninvasive technique that allows real-time diagnostics. To use the Total Count technique, knowledge of a geometric calibration factor, called Factor F, is required. It is obtained in the laboratory using an experimental apparatus that faithfully reproduces the geometry of the detection system and of the pipeline being analyzed, using the same radiotracer; its value is therefore constant for each specific measuring system under survey. This experimental apparatus for obtaining Factor F consists of a 2″ PVC pipe, which simulates a transmission line, into which 500 ml of oil was deposited; using a pipette designed for viscous fluids, aliquots of (50.00 ± 0.01) μl of radiotracer (198Au, photopeak energy 411.8 keV) were added sequentially, and the data were analyzed by three distinct detection systems, each composed of 1″ × 1″ NaI scintillation detectors and a data acquisition system. (author)
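In the total count method as commonly formulated, Factor F is the slope of count rate versus tracer concentration measured in the laboratory geometry, and the flow rate then follows from Q = F·A/S (A: injected activity, S: total counts under the passage curve). A hedged sketch with entirely made-up readings:

```python
# Hypothetical laboratory data: count rate after each sequential aliquot.
volume_ml = 500.0
aliquot_bq = 2000.0                                  # assumed activity/aliquot
n_aliquots = [1, 2, 3, 4, 5]
count_rates = [19.8, 40.4, 59.9, 80.2, 100.1]        # cps, invented

conc = [n * aliquot_bq / volume_ml for n in n_aliquots]   # Bq/ml
# Least-squares slope through the origin: F = sum(C*R) / sum(C*C).
F = sum(c * r for c, r in zip(conc, count_rates)) / sum(c * c for c in conc)

# Field use: Q [ml/s] = F * A / S with injected activity A and total counts S.
Q = F * 1.0e6 / 5.0e5
```

Because F bundles detector efficiency and geometry, it only holds for the same detector, collimation, pipe, and radiotracer reproduced in the laboratory mock-up, which is the point of the apparatus described above.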

  9. Experimental device for obtaining calibration factor for the total count technique

    Energy Technology Data Exchange (ETDEWEB)

    Gonçalves, Eduardo R.; Braz, Delson [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Brandão, Luís Eduardo B. [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Divisao de Reatores

    2017-07-01

    Nuclear technologies have been widely used in industrial plants to help solve process and design problems or simply to obtain information about them. The Total Count technique for flow measurement has as its main advantages: it is an absolute technique, independent of the readings of any device other than the one directly used for recording the radioactive cloud, requiring only a single detector to provide the final result; it is independent of the internal volume of the transport duct and can be applied in the presence or absence of obstructions; there is no restriction as to the nature of the product or material being conveyed; and it is a noninvasive technique that allows real-time diagnostics. To use the Total Count technique, knowledge of a geometric calibration factor, called Factor F, is required. It is obtained in the laboratory using an experimental apparatus that faithfully reproduces the geometry of the detection system and of the pipeline being analyzed, using the same radiotracer; its value is therefore constant for each specific measuring system under survey. This experimental apparatus for obtaining Factor F consists of a 2″ PVC pipe, which simulates a transmission line, into which 500 ml of oil was deposited; using a pipette designed for viscous fluids, aliquots of (50.00 ± 0.01) μl of radiotracer (198Au, photopeak energy 411.8 keV) were added sequentially, and the data were analyzed by three distinct detection systems, each composed of 1″ × 1″ NaI scintillation detectors and a data acquisition system. (author)

  10. Automation of RELAP5 input calibration and code validation using genetic algorithm

    International Nuclear Information System (INIS)

    Phung, Viet-Anh; Kööp, Kaspar; Grishchenko, Dmitry; Vorobyev, Yury; Kudinov, Pavel

    2016-01-01

    Highlights: • Automated input calibration and code validation using a genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict the experimental maximum flow rate and oscillation period simultaneously. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in the application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs), taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and the dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so-called “user effects”. The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used a genetic algorithm (GA), a heuristic global optimization method, in order to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters. We demonstrate the importance of the proper selection of SRQs and of the respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the
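The GA calibration loop described above can be sketched generically (this is not RELAP5; the "simulator", the two uncertain inputs, and the synthetic "experimental" SRQs are all invented to show the normalized multi-SRQ fitness idea):

```python
import random

random.seed(1)

# Toy "simulator": two uncertain inputs map to two SRQs.
def simulate(k_loss, h_coeff):
    max_flow = 10.0 / (1.0 + k_loss)
    period = 2.0 * h_coeff
    return max_flow, period

# Synthetic "experiment" generated from hidden truth (0.25, 1.5).
exp_flow, exp_period = simulate(0.25, 1.5)

def fitness(ind):
    flow, period = simulate(*ind)
    # Normalized discrepancy summed over multiple SRQs (equal weights).
    return ((flow - exp_flow) / exp_flow) ** 2 \
         + ((period - exp_period) / exp_period) ** 2

pop = [(random.uniform(0.0, 1.0), random.uniform(0.5, 3.0)) for _ in range(40)]
for _ in range(60):
    pop.sort(key=fitness)
    parents = pop[:10]                                     # elitist selection
    children = []
    while len(children) < 30:
        a, b = random.sample(parents, 2)
        child = [(x + y) / 2.0 for x, y in zip(a, b)]       # crossover
        child = [g + random.gauss(0.0, 0.02) for g in child]  # mutation
        children.append(tuple(child))
    pop = parents + children

best = min(pop, key=fitness)
```

The normalization in `fitness` matters: without it, an SRQ measured in large units would dominate the search, which is the weighting-factor point the abstract makes.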

  11. Automation of RELAP5 input calibration and code validation using genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Phung, Viet-Anh, E-mail: vaphung@kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Kööp, Kaspar, E-mail: kaspar@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Grishchenko, Dmitry, E-mail: dmitry@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Vorobyev, Yury, E-mail: yura3510@gmail.com [National Research Center “Kurchatov Institute”, Kurchatov square 1, Moscow 123182 (Russian Federation); Kudinov, Pavel, E-mail: pavel@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden)

    2016-04-15

    Highlights: • Automated input calibration and code validation using a genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict the experimental maximum flow rate and oscillation period simultaneously. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in the application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs), taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and the dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so-called “user effects”. The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used a genetic algorithm (GA), a heuristic global optimization method, in order to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters. We demonstrate the importance of the proper selection of SRQs and of the respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the

  12. The calibration procedures in the Studsvik standardized personnel dosimetry system

    International Nuclear Information System (INIS)

    Widell, C.O.

    1978-01-01

    Every large nuclear installation in Sweden reads its own personnel TLDs. In order to supervise this decentralized reading of dose meters, the TLD readers are connected by telephone lines to a central computer for dose registration. This computer is used both for registering the personnel doses and for checking the TLD readers. This checking is performed using pre-irradiated calibration dose meters, which are always included when a batch of personnel dose meters is read. The pre-irradiated dose meters are either irradiated using 137 Cs to various doses up to 100 mSv (10000 mrem), or using a 90 Sr source in a reference dose irradiator to a dose equal to 3 mSv (300 mrem) from a 137 Cs source. The results from reading the pre-irradiated dose meters are processed by the computer and a calibration factor is calculated. The calibration factor is automatically used to calculate the doses from the personnel TLDs. However, if the calibration factor deviates by more than 10% from the previously used factor, this is shown to the operator, who then has to decide which calibration factor is to be used. This calibration and supervisory procedure, together with the safety interlocks in the TLD readers, has resulted in a very reliable and accurate dosimetry system. (author)
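The supervisory logic described above amounts to a per-batch factor computation plus a 10% deviation check. A minimal sketch (reader counts, delivered dose, and factor units are all hypothetical):

```python
def update_calibration(readings, delivered_dose_msv, previous_factor):
    """Derive a calibration factor from pre-irradiated dose meters read
    with a batch; flag it for operator review if it deviates by more
    than 10% from the factor previously in use."""
    mean_reading = sum(readings) / len(readings)
    factor = delivered_dose_msv / mean_reading          # mSv per reader count
    needs_review = abs(factor - previous_factor) / previous_factor > 0.10
    return factor, needs_review

# Normal batch: factor matches the one in use, no operator action.
factor, review = update_calibration([980.0, 1005.0, 1015.0], 3.0, 0.00300)
# Drifted reader: factor jumps 25%, operator must decide.
factor2, review2 = update_calibration([800.0, 800.0, 800.0], 3.0, 0.00300)
```

Only the flagged case interrupts automatic processing, mirroring the described behavior where the operator chooses which factor to apply.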

  13. Experimental investigation of liquid chromatography columns by means of computed tomography

    DEFF Research Database (Denmark)

    Astrath, D.U.; Lottes, F.; Vu, Duc Thuong

    2007-01-01

    The efficiency of packed chromatographic columns was investigated experimentally by means of computed tomography (CT) techniques. The measurements were carried out by monitoring tracer fronts in situ inside the chromatographic columns. The experimental results were fitted using the equilibrium di...

  14. Development and experimental assessment of a numerical modelling code to aid the design of profile extrusion cooling tools

    Science.gov (United States)

    Carneiro, O. S.; Rajkumar, A.; Fernandes, C.; Ferrás, L. L.; Habla, F.; Nóbrega, J. M.

    2017-10-01

    In the extrusion of thermoplastic profiles, after the forming stage that takes place in the extrusion die, the profile must be cooled in a metallic calibrator. This cooling must be done at a high rate, to assure increased productivity, but without developing high temperature gradients, in order to minimize the level of induced thermal residual stresses. In this work, we present a new coupled numerical solver, developed in the framework of the OpenFOAM® computational library, that computes the temperature distribution in both domains (metallic calibrator and plastic profile) simultaneously, and whose implementation aims to minimize the computational time. The new solver was experimentally assessed with an industrial case study.
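The coupled-domain idea can be illustrated with a toy 1D explicit conduction step across a polymer profile in contact with a cooled metallic calibrator, solved on a single grid (this is a generic conjugate-heat sketch, not the authors' OpenFOAM solver; all material properties are invented):

```python
import numpy as np

nx, dx, dt = 60, 1e-3, 0.005
# Left 30 cells: polymer; right 30 cells: metal (toy diffusivities, m^2/s).
alpha = np.where(np.arange(nx) < 30, 1.0e-7, 4.0e-6)
assert dt <= dx ** 2 / (2.0 * alpha.max())     # explicit (FTCS) stability

T = np.full(nx, 200.0)      # melt temperature, deg C
T[30:] = 20.0               # calibrator starts at coolant temperature
for _ in range(2000):
    # Conservative update with face-averaged diffusivity at the interface.
    T[1:-1] += dt / dx ** 2 * (
        0.5 * (alpha[2:] + alpha[1:-1]) * (T[2:] - T[1:-1])
        - 0.5 * (alpha[1:-1] + alpha[:-2]) * (T[1:-1] - T[:-2]))
    T[-1] = 20.0            # coolant-side boundary held cold
```

Solving both domains in one sweep, as the paper's solver does, avoids the sub-iterations a partitioned scheme would need to reconcile the interface temperature and flux.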

  15. The experimental modification of a computer software package for ...

    African Journals Online (AJOL)

    The experimental modification of a computer software package for graphing algebraic functions. ... No abstract available. South African Journal of Education Vol. 25(2) 2005: 61-68.

  16. Automatic Calibration and Reconstruction for Active Vision Systems

    CERN Document Server

    Zhang, Beiwei

    2012-01-01

    In this book, the design of two new planar patterns for the calibration of camera intrinsic parameters is addressed, and a line-based method for distortion correction is suggested. The dynamic calibration of structured light systems, which consist of a camera and a projector, is also treated, and 3D Euclidean reconstruction using the image-to-world transformation is investigated. Lastly, linear calibration algorithms for the catadioptric camera are considered, and the homographic matrix and fundamental matrix are extensively studied. In these methods, analytic solutions are provided for computational efficiency, and redundancy in the data can easily be incorporated to improve the reliability of the estimations. This volume will therefore prove a valuable and practical tool for researchers and practitioners working in image processing, computer vision, and related subjects.

  17. Implementation of Fast Emulator-based Code Calibration

    Energy Technology Data Exchange (ETDEWEB)

    Bowman, Nathaniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Risk & Reliability Analysis; Denman, Matthew R [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Risk & Reliability Analysis

    2016-08-01

    Calibration is the process of using experimental data to gain more precise knowledge of simulator inputs. This process commonly involves the use of Markov-chain Monte Carlo, which requires running a simulator thousands of times. If we can create a faster program, called an emulator, that mimics the outputs of the simulator for an input range of interest, then we can speed up the process enough to make it feasible for expensive simulators. To this end, we implement a Gaussian-process emulator capable of reproducing the behavior of various long-running simulators to within acceptable tolerance. This fast emulator can be used in place of a simulator to run Markov-chain Monte Carlo in order to calibrate simulation parameters to experimental data. As a demonstration, this emulator is used to calibrate the inputs of an actual simulator against two sodium-fire experiments.
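The core of the approach is replacing the expensive simulator with a cheap surrogate. A generic Gaussian-process emulator sketch (not the Sandia implementation; the "simulator", kernel, and hyperparameters are illustrative):

```python
import numpy as np

def rbf(a, b, length=0.4, var=1.0):
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

# Expensive "simulator", sampled only at a few design points.
def simulator(x):
    return np.sin(3.0 * x) + 0.5 * x

x_train = np.linspace(0.0, 2.0, 8)
y_train = simulator(x_train)

# GP posterior for noise-free observations (small jitter for stability).
K = rbf(x_train, x_train) + 1e-8 * np.eye(len(x_train))
K_inv_y = np.linalg.solve(K, y_train)

def emulate(x_new):
    """Posterior mean and variance at new inputs, in place of the simulator."""
    k_star = rbf(x_new, x_train)
    mean = k_star @ K_inv_y
    var = rbf(x_new, x_new).diagonal() - np.einsum(
        "ij,ji->i", k_star, np.linalg.solve(K, k_star.T))
    return mean, var
```

Once fitted, `emulate` costs microseconds per call, so the thousands of evaluations an MCMC calibration chain demands become affordable; the posterior variance also tells the chain where the surrogate is untrustworthy.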

  18. Novel Real-time Calibration and Alignment Procedure for LHCb Run II

    CERN Multimedia

    Prouve, Claire

    2016-01-01

    In order to achieve optimal detector performance, the LHCb experiment has introduced a novel real-time detector alignment and calibration strategy for Run II of the LHC. For the alignment tasks, data are collected and processed at the beginning of each fill, while the calibrations are performed for each run. This real-time alignment and calibration allows the same constants to be used in both the online and offline reconstruction, thus improving the correlation between triggered and offline-selected events. Additionally, the newly computed alignment and calibration constants can be used instantly in the trigger, making it more efficient. The online alignment and calibration of the RICH detectors also enables the use of hadronic particle identification in the trigger. The computing time constraints are met through the use of a new dedicated framework using the multi-core farm infrastructure of the LHCb trigger. An overview of all alignment and calibration tasks is presented and their performance is shown.

  19. Experimental and Computational Investigation of Triple-rotating Blades in a Mower Deck

    Science.gov (United States)

    Chon, Woochong; Amano, Ryoichi S.

    Experimental and computational studies were performed on a 1.27 m wide three-spindle lawn mower deck with a side discharge arrangement. Laser Doppler velocimetry was used to measure the air velocity at 12 different sections under the mower deck. High-speed video camera tests provided valuable visual evidence of airflow and grass discharge patterns. Strain gages were attached at several predetermined locations on the mower blades to measure strain. In the computational fluid dynamics work, computer-based analytical studies were performed in two stages. First, two-dimensional blade shapes at several arbitrary radial sections were selected for flow computations around the blade model. Finally, a three-dimensional full deck model was developed and compared with the experimental results.

  20. Effect of Computer-Based Video Games on Children: An Experimental Study

    Science.gov (United States)

    Chuang, Tsung-Yen; Chen, Wei-Fan

    2009-01-01

    This experimental study investigated whether computer-based video games facilitate children's cognitive learning. In comparison to traditional computer-assisted instruction (CAI), this study explored the impact of the varied types of instructional delivery strategies on children's learning achievement. One major research null hypothesis was…

  1. Thermodynamic properties of indan: Experimental and computational results

    International Nuclear Information System (INIS)

    Chirico, Robert D.; Steele, William V.; Kazakov, Andrei F.

    2016-01-01

    Highlights: • Heat capacities were measured for the temperature range (5 to 445) K. • Vapor pressures were measured for the temperature range (338 to 495) K. • Densities at saturation pressure were measured from T = (323 to 523) K. • Computed and experimentally derived properties for ideal gas entropies are in excellent accord. • Thermodynamic consistency analysis revealed anomalous literature data. - Abstract: Measurements leading to the calculation of thermodynamic properties in the ideal-gas state for indan (Chemical Abstracts registry number [496-11-7], 2,3-dihydro-1H-indene) are reported. Experimental methods were adiabatic heat-capacity calorimetry, differential scanning calorimetry, comparative ebulliometry, and vibrating-tube densitometry. Molar thermodynamic functions (enthalpies, entropies, and Gibbs energies) for the condensed and ideal-gas states were derived from the experimental studies at selected temperatures. Statistical calculations were performed based on molecular geometry optimization and vibrational frequencies calculated at the B3LYP/6-31+G(d, p) level of theory. Computed ideal-gas properties derived with the rigid-rotor harmonic-oscillator approximation are shown to be in excellent accord with ideal-gas entropies derived from thermophysical property measurements of this research, as well as with experimental heat capacities for the ideal-gas state reported in the literature. Literature spectroscopic studies and ab initio calculations report a range of values for the barrier to ring puckering. Results of the present work are consistent with a large barrier that allows use of the rigid-rotor harmonic-oscillator approximation for ideal-gas entropy and heat-capacity calculations, even with the stringent uncertainty requirements imposed by the calorimetric and physical property measurements reported here. All experimental results are compared with property values reported in the literature.
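The rigid-rotor harmonic-oscillator (RRHO) vibrational entropy the authors validate against calorimetry follows the standard statistical-mechanics formula S_vib = R Σ[x/(eˣ−1) − ln(1−e⁻ˣ)] with x = hcν̃/kT. A sketch with hypothetical wavenumbers (not the computed B3LYP frequencies of indan):

```python
import math

R = 8.31446           # J/(mol K)
H = 6.62607015e-34    # Planck constant, J s
KB = 1.380649e-23     # Boltzmann constant, J/K
C = 2.99792458e10     # speed of light, cm/s

def s_vib(wavenumbers_cm, temperature_k):
    """Harmonic-oscillator vibrational entropy, J/(mol K)."""
    s = 0.0
    for nu in wavenumbers_cm:
        x = H * C * nu / (KB * temperature_k)
        s += R * (x / math.expm1(x) - math.log1p(-math.exp(-x)))
    return s

s_mode = s_vib([1000.0], 298.15)        # one illustrative mode
s_soft = s_vib([200.0], 298.15)         # low-frequency (e.g. puckering-like)
```

The comparison shows why the puckering barrier matters: a soft, low-frequency mode contributes far more entropy than a stiff one, so treating the puckering motion as harmonic is only defensible when the barrier is large, as the paper concludes.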

  2. Computational Modelling of Patella Femoral Kinematics During Gait Cycle and Experimental Validation

    Science.gov (United States)

    Maiti, Raman

    2016-06-01

    The effect of loading and boundary conditions on patellar mechanics is significant due to the complications arising in patella femoral joints during total knee replacements. To understand patellar mechanics with respect to loading and motion, a computational model representing the patella femoral joint was developed and validated against experimental results. The computational model was created in IDEAS NX and simulated in MSC ADAMS/VIEW software. The results, in the form of internal-external rotations and anterior-posterior displacements for a new and an experimentally simulated patella femoral joint specimen under standard gait conditions, were compared with experimental measurements performed on the Leeds ProSim knee simulator. Good overall agreement between the computational prediction and the experimental data was obtained for patella femoral kinematics. Good agreement between the model and past studies was also observed when the ligament load was removed and the medial-lateral displacement was constrained. The model is sensitive to ±5 % changes in kinematic, frictional, force and stiffness coefficients, and insensitive to the time step.

  3. The simple procedure for the fluxgate magnetometers calibration

    Science.gov (United States)

    Marusenkov, Andriy

    2014-05-01

    Fluxgate magnetometers are widely used in geophysical investigations, including geomagnetic field monitoring at the global network of geomagnetic observatories and electromagnetic sounding of the Earth's crust conductivity. For these tasks the magnetometers have to be calibrated to an appropriate level of accuracy. As a particular case, ways to satisfy the recent requirements on the scaling and orientation errors of 1-second INTERMAGNET magnetometers are considered in this work. The goal of the present study was to choose a simple and reliable calibration method for estimating the scale factors and angular errors of three-axis magnetometers in the field. There are a large number of scalar calibration methods that use a free rotation of the sensor in the calibration field followed by complicated data processing procedures for the numerical solution of a set of high-order equations. The chosen approach also exploits the Earth's magnetic field as the calibrating signal but, in contrast to other methods, the sensor is oriented in particular positions with respect to the total field vector instead of being rotated freely. This allows the use of very simple and straightforward linear computation formulas and, as a result, more reliable estimates of the calibrated parameters. The scale factors are estimated by sequentially aligning each component of the sensor in two positions: parallel and anti-parallel to the Earth's magnetic field vector. The non-orthogonality angle between each pair of components is estimated after sequentially aligning the components at angles of +/- 45 and +/- 135 degrees with respect to the total field vector. Owing to this four-position approach, the estimates of the non-orthogonality angles are invariant to the zero offsets and to non-linearity of the transfer functions of the components.
The experimental justification of the proposed method by means of the
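
    The two estimation formulas described above are simple enough to state directly. The following is an illustrative sketch; the function names and the synthetic-reading model are assumptions of this note, not the author's implementation:

```python
import math

# F is the known magnitude of the local geomagnetic field
# (e.g. from a scalar magnetometer).

def scale_and_offset(r_parallel, r_antiparallel, F):
    """Scale factor and zero offset of one component, from readings taken
    with its axis parallel and anti-parallel to the total field vector."""
    scale = (r_parallel - r_antiparallel) / (2.0 * F)
    offset = (r_parallel + r_antiparallel) / 2.0
    return scale, offset

def misalignment(r_p45, r_p135, r_m45, r_m135, scale, F):
    """Angular error (rad) of a component from readings at +45, +135, -45
    and -135 degrees to the field; the zero offset cancels in the sum."""
    num = r_p45 + r_p135 - r_m45 - r_m135
    return math.asin(num / (2.0 * math.sqrt(2.0) * scale * F))
```

    Because the four angular readings enter as a balanced sum and difference, the component's zero offset cancels exactly, which is the invariance property claimed in the abstract.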

  4. A GPS-Based Pitot-Static Calibration Method Using Global Output-Error Optimization

    Science.gov (United States)

    Foster, John V.; Cunningham, Kevin

    2010-01-01

    Pressure-based airspeed and altitude measurements for aircraft typically require calibration of the installed system to account for pressure-sensing errors such as those due to local flow field effects. In some cases, calibration is used to meet requirements such as those specified in Federal Aviation Regulation Part 25. Several methods are used for in-flight pitot-static calibration, including tower fly-by, pacer aircraft, and trailing cone methods. In the 1990s, the introduction of satellite-based positioning systems to the civilian market enabled new in-flight calibration methods based on accurate ground speed measurements provided by the Global Positioning System (GPS). Use of GPS for airspeed calibration has many advantages such as accuracy, ease of portability (e.g. hand-held), and the flexibility of operating in airspace without the limitations of test range boundaries or ground telemetry support. The current research was motivated by the need for a rapid and statistically accurate method for in-flight calibration of pitot-static systems for remotely piloted, dynamically scaled research aircraft. Current calibration methods were deemed impractical for this application because of the confined test range size and the limited flight time available for each sortie. A method was developed that uses high-data-rate measurements of static and total pressure, together with GPS-based ground speed measurements, to compute the pressure errors over a range of airspeeds. The novel aspect of this approach is the use of system identification methods that rapidly compute optimal pressure error models with defined confidence intervals in near-real time. This method has been demonstrated in flight tests and has shown 2-σ bounds of approximately 0.2 kts with an order-of-magnitude reduction in test time over other methods. As part of this experiment, a unique database of wind measurements was acquired concurrently with the flight experiments, for the purpose of experimental validation of the
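
    Stripped of the system-identification machinery, the core of the approach is a regression of the pressure-derived airspeed error against a GPS-derived reference. A minimal sketch under that assumption (a plain polynomial least-squares fit; the paper's output-error method and confidence-interval computation are not reproduced here):

```python
import numpy as np

# v_indicated: pressure-derived airspeed samples;
# v_reference: GPS-derived reference airspeed samples.
def fit_airspeed_error(v_indicated, v_reference, order=2):
    """Return a polynomial e(V) such that v_reference ~ v_indicated + e(V)."""
    v_ind = np.asarray(v_indicated, dtype=float)
    err = np.asarray(v_reference, dtype=float) - v_ind
    return np.poly1d(np.polyfit(v_ind, err, order))
```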

  5. Computational Design and Experimental Validation of New Thermal Barrier Systems

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2011-12-31

    This project (10/01/2010-9/30/2013), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from the Louisiana State University (LSU) Mechanical Engineering Department and the Southern University (SU) Department of Computer Science. The proposal directly supports the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. We will develop a novel molecular dynamics method to improve the efficiency of simulations of novel TBC materials; we will perform high-performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; we will perform material characterizations and oxidation/corrosion tests; and we will demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments. The durability of the coating will be examined using the proposed High Temperature/High Pressure Durability Test Rig under real syngas product compositions.

  6. Development of a calibration system for surface contamination monitors

    International Nuclear Information System (INIS)

    Marechal, M.H.H.; Barbosa, M.P.

    1992-01-01

    A calibration system for surface contamination monitors was developed, aiming to meet the existing demand for calibration of these instruments. An experimental arrangement and a methodology are described. The advantages of using this system to optimize routine calibrations are also discussed. (C.G.C.)

  7. Experimental and computational evaluation of area selectively immobilized horseradish peroxidase in a microfluidic device

    DEFF Research Database (Denmark)

    Hoffmann, Christian; Pereira Rosinha Grundtvig, Ines; Thrane, Joachim

    2017-01-01

    experimentally and by computational fluid dynamics (CFD) simulations. Ultimately, such a correlation would lead to faster development through computational pre-screening and optimized experimental design. In this proof-of-concept study, microreactors were prepared in a 2-step curing process of an off-stoichiometric thiol-ene-epoxy (OSTE+) mixture employing both a thiol-ene (TEC) and a thiol-epoxy curing reaction. Subsequent surface functionalization of the remaining thiol groups on the reactor surface through stenciled photoinitiated TEC enabled the preparation of specific surface patterns in the reactor. Patterns… as obtained from experimental determination. This good agreement between the obtained experimental and computational results confirmed the high potential of CFD models for predicting and optimizing the biocatalytic performance of such a reactor.

  8. Efficiency Calibration of Phantom Family for Use in Direct Bioassay of Radionuclide in the Body

    International Nuclear Information System (INIS)

    Kim, Ji Seok; Ha, Wi Ho; Kim, Hyun Ki; Park, Gyung Deok; Lee, Jai Ki

    2008-01-01

    A major source of uncertainty in in vivo bioassay using a whole body counter calibrated against a body phantom containing known radioactivities is variation of the counting geometry caused by differences between the body size of the subject and that of the phantom. Phantoms such as the BOMAB phantom are based on the body size of the reference man, and usually a single phantom is used in calibration of the counter, because it is difficult to apply a set of phantoms having different sizes. In order to reduce the potential errors due to variation of counting geometry, use of a set of phantoms having different body shapes has been attempted. The efficiency files are stored in the computer analyzing the measurement data, and a suitable one is retrieved for the specific subject. An experimental or computational approach can be employed to generate the efficiency files. Carlan et al. demonstrated that Monte Carlo simulations can provide acceptable efficiencies by use of the IGOR phantom family. The body size of the individual subject undergoing in vivo bioassay should be determined by an appropriate method
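
    The retrieval step described above can be sketched as a nearest-neighbour lookup over the stored efficiency files; the field names and distance weighting below are hypothetical, not the authors' scheme:

```python
# One stored efficiency calibration per phantom size; the closest match
# to the subject's body size is chosen.
def select_efficiency_file(files, height_cm, weight_kg):
    """Pick the stored phantom calibration nearest the subject's body size."""
    return min(files, key=lambda f: ((f["height_cm"] - height_cm) / 10.0) ** 2
                                    + ((f["weight_kg"] - weight_kg) / 10.0) ** 2)
```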

  9. Compact blackbody calibration sources for in-flight calibration of spaceborne infrared instruments

    Science.gov (United States)

    Scheiding, S.; Driescher, H.; Walter, I.; Hanbuch, K.; Paul, M.; Hartmann, M.; Scheiding, M.

    2017-11-01

    High-emissivity blackbodies are mandatory as calibration sources in infrared radiometers. Besides the requirements of high spectral emissivity and low reflectance, constraints regarding energy consumption, installation space, and mass must be considered during instrument design. Cavity radiators provide outstanding spectral emissivity at the price of the installation space and mass of the calibration source. Surface radiation sources are mainly limited by the spectral emissivity of the functional coating and the homogeneity of the temperature distribution. The effective emissivity of a "black" surface can be optimized by structuring the substrate so as to enlarge the ratio of the surface area to its projection. Based on experience with the Mercury Radiometer and Thermal Infrared Spectrometer (MERTIS) calibration source MBB3, the effects of surface structuring on the effective emissivity are described analytically and compared to the experimental performance. Different geometries are analyzed and the production methods are discussed. The high-emissivity temperature calibration source achieves emissivity values of 0.99 for wavelengths from 5 μm to 10 μm and larger than 0.95 for the spectral range from 10 μm to 40 μm.
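
    As a rough illustration of why enlarging the surface-to-projection ratio raises the effective emissivity, one can use the standard diffuse isothermal-cavity approximation (an assumption of this note, not the authors' analytic model):

```python
# eps: intrinsic emissivity of the coating;
# area_ratio: aperture (projected) area divided by total structured
# surface area.
def effective_emissivity(eps, area_ratio):
    return eps / (eps + (1.0 - eps) * area_ratio)
```

    Structuring the substrate enlarges the surface relative to its projection, i.e. it lowers area_ratio and pushes the effective emissivity toward 1.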

  10. Application of Computer Vision for quality control in frozen mixed berries production: colour calibration issues

    Directory of Open Access Journals (Sweden)

    D. Ricauda Aimonino

    2013-09-01

    Computer vision is becoming increasingly important in the quality control of many food processes. The appearance properties of food products (colour, texture, shape, and size) are, in fact, correlated with organoleptic characteristics and/or the presence of defects. Quality control based on image processing eliminates the subjectivity of human visual inspection, allowing rapid and non-destructive analysis. However, most food matrices show a wide variability in appearance features; therefore robust and customized image elaboration algorithms have to be implemented for each specific product. For this reason, quality control by human visual inspection is still rather widespread in several food processes. The case study inspiring this paper concerns the production of frozen mixed berries. Once frozen, different kinds of berries are mixed together, in different amounts, according to a recipe. The correct quantity of each kind of fruit, within a certain tolerance, has to be ensured by producers. Quality control relies on taking a few samples from each production lot (samples of the same weight) and manually counting the amount of each species. This operation is tedious, error-prone, and time consuming, while a computer vision system (CVS) could determine the amount of each kind of berry in a few seconds. This paper discusses the problem of colour calibration of the CVS used for frozen berry mixture evaluation. Images are acquired by a digital camera coupled with a dome lighting system, which gives homogeneous illumination over the entire visible surface of the berries, and by a flat-bed scanner. RGB device-dependent data are then mapped onto the CIELab colorimetric colour space using different transformation operators. The obtained results show that the proposed calibration procedure leads to colour discrepancies comparable to, or even below, the sensitivity of the human eye.
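
    One simple instance of such a transformation operator is an affine least-squares map fitted on reference patches with known CIELab coordinates. This is a minimal sketch; the paper compares several (possibly higher-order) operators:

```python
import numpy as np

# Learn device RGB -> CIELab from reference patches with known Lab values.
def fit_colour_transform(rgb, lab):
    """Least-squares affine transform; rgb and lab are N x 3 arrays."""
    A = np.hstack([np.asarray(rgb, float), np.ones((len(rgb), 1))])
    M, *_ = np.linalg.lstsq(A, np.asarray(lab, float), rcond=None)
    return M  # 4 x 3 matrix

def apply_colour_transform(M, rgb):
    return np.hstack([np.asarray(rgb, float), np.ones((len(rgb), 1))]) @ M
```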

  11. Emerging Trends in Heart Valve Engineering: Part IV. Computational Modeling and Experimental Studies.

    Science.gov (United States)

    Kheradvar, Arash; Groves, Elliott M; Falahatpisheh, Ahmad; Mofrad, Mohammad K; Hamed Alavi, S; Tranquillo, Robert; Dasi, Lakshmi P; Simmons, Craig A; Jane Grande-Allen, K; Goergen, Craig J; Baaijens, Frank; Little, Stephen H; Canic, Suncica; Griffith, Boyce

    2015-10-01

    In this final portion of an extensive review of heart valve engineering, we focus on the computational methods and experimental studies related to heart valves. The discussion begins with a thorough review of computational modeling and the governing equations of fluid-structure interaction. We then move on to multiscale and disease-specific modeling. Finally, advanced methods related to in vitro testing of heart valves are reviewed. This section of the review series is intended to illustrate the application of computational methods and experimental studies, and their interrelation, for studying heart valves.

  12. Calibration factor or calibration coefficient?

    International Nuclear Information System (INIS)

    Meghzifene, A.; Shortt, K.R.

    2002-01-01

    Full text: The IAEA/WHO network of SSDLs was set up in order to establish links between SSDL members and the international measurement system. At the end of 2001, there were 73 network members in 63 Member States. The SSDL network members provide calibration services to end-users at the national or regional level. The results of the calibrations are summarized in a document called a calibration report or calibration certificate. The IAEA has been using the term calibration certificate and will continue using the same terminology. The most important information in a calibration certificate is a list of calibration factors and their related uncertainties that apply to the calibrated instrument for the well-defined irradiation and ambient conditions. The IAEA has recently decided to change the term calibration factor to calibration coefficient, to be fully in line with ISO [ISO 31-0], which recommends the use of the term coefficient for k when it links two quantities A and B that have different dimensions, as in A = k·B. The term factor should only be used for k when it links quantities A and B that have the same dimensions. However, in a typical calibration, an ion chamber is calibrated in terms of a physical quantity such as air kerma, dose to water, ambient dose equivalent, etc. If the chamber is calibrated together with its electrometer, then the calibration refers to the physical quantity to be measured per unit electrometer reading. In this case, the quantities concerned have different dimensions. The adoption by the Agency of the term coefficient to express the results of calibrations is consistent with the 'International vocabulary of basic and general terms in metrology' prepared jointly by the BIPM, IEC, ISO, OIML and other organizations. The BIPM has changed from factor to coefficient. The authors believe that this is more than just a matter of semantics and recommend that the SSDL network members adopt this change in terminology. (author)

  13. An Experimental Study into the use of computers for teaching of ...

    African Journals Online (AJOL)

    This experimental study sought to establish how English language teachers used computers for teaching composition writing at Prince Edward High School in Harare. The findings of the study show that computers were rarely used in the teaching of composition despite the observation that the school ...

  14. Using genetic algorithms to calibrate a water quality model.

    Science.gov (United States)

    Liu, Shuming; Butler, David; Brazier, Richard; Heathwaite, Louise; Khu, Soon-Thiam

    2007-03-15

    With the increasing concern over the impact of diffuse pollution on water bodies, many diffuse pollution models have been developed in the last two decades. A common obstacle in using such models is how to determine the values of the model parameters. This is especially true when a model has a large number of parameters, which makes a full-range calibration expensive in terms of computing time. Compared with conventional optimisation approaches, soft computing techniques often have a faster convergence speed and are more efficient for global optimum searches. This paper presents an attempt to calibrate a diffuse pollution model using a genetic algorithm (GA). Designed to simulate the export of phosphorus from diffuse sources (agricultural land) and point sources (human), the Phosphorus Indicators Tool (PIT) version 1.1, on which this paper is based, consists of 78 parameters. Previous studies have indicated the difficulty of full-range model calibration due to the number of parameters involved. In this paper, a GA was employed to carry out the model calibration in which all parameters were involved. A sensitivity analysis was also performed to investigate the impact of the GA's operators on its effectiveness in optimum searching. The calibration yielded satisfactory results and required reasonable computing time. The application of the PIT model to the Windrush catchment with optimum parameter values was demonstrated. The annual P loss was predicted as 4.4 kg P/ha/yr, which agreed well with the observed value.
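
    The calibration loop can be sketched as a generic real-coded GA. This is a minimal illustration, not the PIT-specific implementation; the population size, operators, and model interface are placeholders:

```python
import random

# `model` maps a parameter vector to simulated outputs; fitness is the
# negative squared error against the observations.
def calibrate_ga(model, observed, bounds, pop_size=40, generations=60, seed=1):
    rng = random.Random(seed)
    def fitness(p):
        return -sum((m - o) ** 2 for m, o in zip(model(p), observed))
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]                    # selection (elitism)
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2.0 for x, y in zip(a, b)]   # crossover
            i = rng.randrange(len(child))                   # mutation
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0.0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)
```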

  15. Computed tomography and plain radiography in experimental fracture healing

    International Nuclear Information System (INIS)

    Braunstein, E.M.; Goldstein, S.A.; Ku, J.; Smith, P.; Matthews, L.S.

    1986-01-01

    We evaluated the relative contributions of plain radiographs and computed tomography to the assessment of fracture healing under experimental circumstances. In 15 sheep, we performed midshaft femoral osteotomies and internal fixation of the resultant segmental fractures. Radiographs were obtained preoperatively and immediately postoperatively. Animals were sacrificed at 3, 6, 12, 24, and 36 weeks after surgery, and the femoral specimens were radiographed. After removal of the internal fixation devices, computed tomographic scans of the specimens were performed. Computed tomography may be of value in the evaluation of fractures of long bones in cases in which clinical examination and plain radiographs fail to give adequate information as to the status of healing. (orig./SHA)

  16. Quantification of breast density using dual-energy mammography with liquid phantom calibration

    International Nuclear Information System (INIS)

    Lam, Alfonso R; Ding, Huanjun; Molloi, Sabee

    2014-01-01

    Breast density is a widely recognized potential risk factor for breast cancer. However, accurate quantification of breast density is a challenging task in mammography. The current use of plastic breast-equivalent phantoms for calibration provides limited accuracy in dual-energy mammography due to the chemical composition of the phantom. We implemented a breast-equivalent liquid phantom for dual-energy calibration in order to improve the accuracy of breast density measurement. To design these phantoms, three liquid compounds were chosen: water, isopropyl alcohol, and glycerol. Chemical compositions of glandular and adipose tissues, obtained from NIST database, were used as reference materials. Dual-energy signal of the liquid phantom at different breast densities (0% to 100%) and thicknesses (1 to 8 cm) were simulated. Glandular and adipose tissue thicknesses were estimated from a higher order polynomial of the signals. Our results indicated that the linear attenuation coefficients of the breast-equivalent liquid phantoms match those of the target material. Comparison between measured and known breast density data shows a linear correlation with a slope close to 1 and a non-zero intercept of 7%, while plastic phantoms showed a slope of 0.6 and a non-zero intercept of 8%. Breast density results derived from the liquid calibration phantoms showed higher accuracy than those derived from the plastic phantoms for different breast thicknesses and various tube voltages. We performed experimental phantom studies using liquid phantoms and then compared the computed breast density with those obtained using a bovine tissue model. The experimental data and the known values were in good correlation with a slope close to 1 (∼1.1). In conclusion, our results indicate that liquid phantoms are a reliable alternative for calibration in dual-energy mammography and better reproduce the chemical properties of the target material. (paper)
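
    The decomposition step described above ("thicknesses estimated from a higher-order polynomial of the signals") can be sketched as follows, with the polynomial coefficients assumed to come from the liquid-phantom calibration. The functional form and names are assumptions, not the paper's exact model:

```python
# Tissue thickness as a bivariate polynomial of the low- and high-energy
# signals; breast density follows from the two decomposed thicknesses.
def thickness(coeffs, s_low, s_high):
    """coeffs maps exponent pairs (i, j) to c_ij in
    t = sum c_ij * s_low**i * s_high**j."""
    return sum(c * s_low ** i * s_high ** j for (i, j), c in coeffs.items())

def breast_density_percent(t_glandular, t_adipose):
    """Percent glandular fraction from the two decomposed thicknesses."""
    return 100.0 * t_glandular / (t_glandular + t_adipose)
```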

  17. Direct illumination LED calibration for telescope photometry

    International Nuclear Information System (INIS)

    Barrelet, E.; Juramy, C.

    2008-01-01

    A calibration method for telescope photometry, based on direct illumination of the telescope with a calibrated light source combining multiple LEDs, is proposed. Its purpose is to calibrate the instrument response. The main emphasis of the proposed method is the traceability of the calibration process and continuous monitoring of the instrument in order to maintain a 0.2% accuracy over a period of years. Its specificity is to map finely the response of the telescope and its camera as a function of all light ray parameters. This feature is essential to implement a computer model of the instrument representing the variation of the overall light collection efficiency of each pixel for various filter configurations. We report on hardware developments for SNDICE, the first application of this direct illumination calibration system, which will be installed at the Canada-France-Hawaii Telescope (CFHT) for its leading supernova experiment (SNLS)

  18. A complete dosimetry experimental program in support of the core characterization and of the power calibration of the CABRI reactor

    Energy Technology Data Exchange (ETDEWEB)

    Rodiac, F.; Hudelot, JP.; Lecerf, J.; Garnier, Y.; Ritter, G. [CEA, DEN, CAD/DER/SRES/LPRE, Cadarache, F-13108 Saint-Paul-lez-Durance, (France); Gueton, O.; Colombier, AC. [CEA, DEN, CAD/DER/SPRC/LPN, Cadarache, F-13108 Saint-Paul-lez-Durance, (France); Domergue, C. [CEA, DEN, CAD/DER/SPEx/LDCI, Cadarache, F-13108 Saint-Paul-lez-Durance, (France)

    2015-07-01

    CABRI is an experimental pulse reactor operated by CEA at the Cadarache research center. Since 1978 the experimental programs have aimed at studying fuel behavior under Reactivity Initiated Accident (RIA) conditions. Since 2003, the reactor has been refurbished in order to be able to provide RIA and LOCA (Loss Of Coolant Accident) experiments in prototypical PWR conditions (155 bar, 300 deg. C). This project is part of a broader scope including an overall facility refurbishment and a safety review. The global modification is conducted by the CEA project team. It is funded by IRSN, which is conducting the CIP experimental program in the framework of the OECD/NEA CIP project, financed through an international collaboration. During the reactor restart, commissioning tests are performed for all equipment, systems, and circuits of the reactor. In particular, neutronics and power commissioning tests will be performed in 2015 and 2016, respectively. This paper focuses on the design of a complete and original dosimetry program built in support of the CABRI core characterization and of the power calibration. Each of the above experimental goals is fully described, as well as the target uncertainties and the planned experimental techniques and data treatment. (authors)

  19. Skew redundant MEMS IMU calibration using a Kalman filter

    International Nuclear Information System (INIS)

    Jafari, M; Sahebjameyan, M; Moshiri, B; Najafabadi, T A

    2015-01-01

    In this paper, a novel calibration procedure for skew redundant inertial measurement units (SRIMUs) based on micro-electro-mechanical systems (MEMS) is proposed. A general model of the SRIMU measurements is derived which contains the effects of bias, scale factor error, and misalignments. For greater accuracy, the lever-arm effects of the accelerometers with respect to the center of the table are modeled and compensated in the calibration procedure. Two separate Kalman filters (KFs) are proposed to estimate the error parameters for the gyroscopes and accelerometers. The predictive error minimization (PEM) stochastic modeling method is used to simultaneously model the effects of bias instability and random walk noise on the calibration Kalman filters, in order to diminish biased estimations. The proposed procedure is simulated numerically and produces the expected results experimentally. The calibration maneuvers are applied using a two-axis angle turntable in a way that the persistency of excitation (PE) condition for parameter estimation is met. For this purpose, a trapezoidal calibration profile is used to excite the different deterministic error parameters of the accelerometers, and a pulse profile is used for the gyroscopes. Furthermore, to evaluate the performance of the proposed KF calibration method, a conventional least squares (LS) calibration procedure is derived for the SRIMUs, and the simulation and experimental results compare the functionality of the two proposed methods with each other. (paper)
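
    A scalar version of the estimation idea can be sketched with a linear Kalman filter for a single accelerometer, where the state collects the scale-factor error and bias. This is an illustrative model, not the SRIMU formulation of the paper:

```python
# Measurements z = (1 + s) * a_ref + b + noise, with a_ref known from the
# turntable profile. State x = [s, b]; the measurement equation is
# z - a_ref = H x + v with H = [a_ref, 1].
def kf_calibrate(a_refs, zs, r=0.01):
    x = [0.0, 0.0]                          # state estimate [scale error, bias]
    P = [[1.0, 0.0], [0.0, 1.0]]            # state covariance
    for a, z in zip(a_refs, zs):
        H = [a, 1.0]
        pht0 = P[0][0] * H[0] + P[0][1] * H[1]
        pht1 = P[1][0] * H[0] + P[1][1] * H[1]
        S = H[0] * pht0 + H[1] * pht1 + r   # innovation variance
        K = [pht0 / S, pht1 / S]            # Kalman gain
        y = (z - a) - (H[0] * x[0] + H[1] * x[1])   # innovation
        x = [x[0] + K[0] * y, x[1] + K[1] * y]
        ikh = [[1.0 - K[0] * H[0], -K[0] * H[1]],   # P <- (I - K H) P
               [-K[1] * H[0], 1.0 - K[1] * H[1]]]
        P = [[sum(ikh[i][k] * P[k][j] for k in range(2)) for j in range(2)]
             for i in range(2)]
    return x
```

    Varying a_ref across the profile (the role of the trapezoidal maneuver) is what makes the scale error and the bias separately observable.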

  20. A versatile calibration procedure for portable coded aperture gamma cameras and RGB-D sensors

    Science.gov (United States)

    Paradiso, V.; Crivellaro, A.; Amgarou, K.; de Lanaute, N. Blanc; Fua, P.; Liénard, E.

    2018-04-01

    The present paper proposes a versatile procedure for the geometrical calibration of coded aperture gamma cameras and RGB-D depth sensors, using only one radioactive point source and a simple experimental set-up. The calibration data are then used for accurately aligning radiation images retrieved by means of the γ-camera with the respective depth images computed with the RGB-D sensor. The system resulting from such a combination is thus able to retrieve, automatically, the distance of radioactive hotspots by means of a pixel-wise mapping between gamma and depth images. This procedure is of great interest for a wide range of applications, from precise automatic estimation of the shape and distance of radioactive objects to Augmented Reality systems. Incidentally, the corresponding results validated the choice of a perspective design model for a coded aperture γ-camera.
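
    Once intrinsics and extrinsics are calibrated, the pixel-wise mapping between depth and gamma images can be sketched with a standard pinhole model (the matrix names are assumptions of this note):

```python
import numpy as np

# Back-project a depth pixel with the RGB-D intrinsics K_depth, move the
# 3-D point into the gamma-camera frame with the calibrated extrinsics
# (R, t), and project it with the gamma-camera intrinsics K_gamma.
def depth_pixel_to_gamma(u, v, depth, K_depth, K_gamma, R, t):
    p = depth * (np.linalg.inv(K_depth) @ np.array([u, v, 1.0]))  # 3-D point
    q = K_gamma @ (R @ p + t)                                     # gamma frame
    return q[0] / q[2], q[1] / q[2]                               # gamma pixel
```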

  1. The Effects of Computer-Assisted Instruction of Simple Circuits on Experimental Process Skills

    Directory of Open Access Journals (Sweden)

    Şeyma ULUKÖK

    2013-01-01

    The experimental and control groups for this study, investigating the effects of computer-assisted instruction of simple circuits on the development of experimental process skills, were composed of 30 sophomores majoring in Classroom Teaching. The instruction includes experiments and studies about simple circuits and their elements (serial, parallel, and mixed connections of resistors) covered in the Science and Technology Laboratory II course curriculum. In this study, where quantitative and qualitative methods were used together, a checklist developed by the researchers was used to collect data. Results showed that the experimental process skills of sophomores in the experimental group were more developed than those of students in the control group. Thus, it can be said that computer-assisted instruction has a positive impact on the development of students' experimental process skills.

  2. Optimization of procedure for calibration with radiometer/photometer

    International Nuclear Information System (INIS)

    Detilly, Isabelle

    2009-01-01

    A test procedure for the calibration of International Light radiometers/photometers at the Laboratorio de Fotometria y Tecnologia Laser (LAFTA) de la Escuela de Ingenieria Electrica de la Universidad de Costa Rica is established. Two photometric benches were used as the experimental setup, and two calibrations of the International Light instrument were performed. A basic procedure established in the laboratory is used for calibration from measurements of illuminance and luminous intensity. The results showed some variations that depend on the photometric benches used in the calibration process, on the programming of the radiometer/photometer, and on the applied methodology. The calibration procedure can be improved by optimizing the programming of the measurement instrument, and possible errors can be minimized by using the recommended procedure. (author) [es

  3. Computer simulations and the changing face of scientific experimentation

    CERN Document Server

    Duran, Juan M

    2013-01-01

    Computer simulations have become a central tool for scientific practice. Their use has replaced, in many cases, standard experimental procedures, not to mention cases where the target system is empirical but there are no techniques for direct manipulation of the system, such as astronomical observation. In such cases, computer simulations have proved to be of central importance. The question about their use and implementation, therefore, is not only a technical one but represents a challenge for the humanities as well. In this volume, scientists, historians, and philosophers joi

  4. Experimental and Computational Study of Ductile Fracture in Small Punch Tests

    Directory of Open Access Journals (Sweden)

    Betül Gülçimen Çakan

    2017-10-01

    A unified experimental-computational study on ductile fracture initiation and propagation during small punch testing is presented. Tests are carried out at room temperature with unnotched disks of different thicknesses where large-scale yielding prevails. In thinner specimens, the fracture occurs with severe necking under membrane tension, whereas for thicker ones a through thickness shearing mode prevails changing the crack orientation relative to the loading direction. Computational studies involve finite element simulations using a shear modified Gurson-Tvergaard-Needleman porous plasticity model with an integral-type nonlocal formulation. The predicted punch load-displacement curves and deformed profiles are in good agreement with the experimental results.

  5. Experimental and Computational Study of Ductile Fracture in Small Punch Tests.

    Science.gov (United States)

    Gülçimen Çakan, Betül; Soyarslan, Celal; Bargmann, Swantje; Hähner, Peter

    2017-10-17

    A unified experimental-computational study on ductile fracture initiation and propagation during small punch testing is presented. Tests are carried out at room temperature with unnotched disks of different thicknesses where large-scale yielding prevails. In thinner specimens, the fracture occurs with severe necking under membrane tension, whereas for thicker ones a through thickness shearing mode prevails changing the crack orientation relative to the loading direction. Computational studies involve finite element simulations using a shear modified Gurson-Tvergaard-Needleman porous plasticity model with an integral-type nonlocal formulation. The predicted punch load-displacement curves and deformed profiles are in good agreement with the experimental results.

  6. A combined experimental and computational investigation of excess molar enthalpies of (nitrobenzene + alkanol) mixtures

    International Nuclear Information System (INIS)

    Neyband, Razieh Sadat; Zarei, Hosseinali

    2015-01-01

    Highlights: • Excess molar enthalpies for binary mixtures of nitrobenzene + alkanols were measured. • The infinite-dilution excess partial molar enthalpies were calculated using ab initio methods. • PCM calculations were performed. • The computed excess partial molar enthalpies at infinite dilution were compared to experimental results. - Abstract: Excess molar enthalpies (H_m^E) for the binary mixtures {nitrobenzene + ethanol, 1-propanol, 2-propanol, 1-butanol, and 2-butanol} have been measured over the entire composition range at ambient pressure (81.5 kPa) and temperature 298 K using a Parr 1455 solution calorimeter. From the experimental results, the excess partial molar enthalpies (H_i^E) and the excess partial molar enthalpies at infinite dilution (H_i^E,∞) were calculated. The excess molar enthalpies (H_m^E) are positive for all {nitrobenzene (1) + alkanol (2)} mixtures over the entire composition range. A state-of-the-art computational strategy for the evaluation of excess partial molar enthalpies at infinite dilution was followed at the M05-2X/6-311++G** level of theory with the PCM model. The experimental excess partial molar enthalpies at infinite dilution have been compared to the ab initio data for the liquid phase. The integrated experimental and computational results help to clarify the nature of the intermolecular interactions in {nitrobenzene (1) + alkanol (2)} mixtures. The experimental and computational work done in this study complements and extends the general research on the computation of the excess partial molar enthalpy at infinite dilution of binary mixtures.

  7. A novel iterative energy calibration method for composite germanium detectors

    International Nuclear Information System (INIS)

    Pattabiraman, N.S.; Chintalapudi, S.N.; Ghugre, S.S.

    2004-01-01

    An automatic method for energy calibration of the observed experimental spectrum has been developed. The method presented is based on an iterative algorithm and presents an efficient way to perform energy calibrations after establishing the weights of the calibration data. An application of this novel technique for data acquired using composite detectors in an in-beam γ-ray spectroscopy experiment is presented
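The abstract does not reproduce the algorithm, but a generic weighted, iteratively refined channel-to-energy fit of the kind such calibrations rest on might look as follows. The reweighting scheme and the calibration-line energies are illustrative assumptions, not the paper's method.

```python
# Illustrative weighted linear energy calibration (channel -> keV).
# The iterative reweighting below is an assumption for the sketch.

def weighted_linefit(x, y, w):
    """Weighted least-squares fit y = a + b*x; returns (a, b)."""
    sw = sum(w)
    sx = sum(wi * xi for wi, xi in zip(w, x))
    sy = sum(wi * yi for wi, yi in zip(w, y))
    sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    b = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
    a = (sy - b * sx) / sw
    return a, b

def calibrate(channels, energies, n_iter=5):
    """Iteratively reweighted fit: down-weight points with large residuals."""
    w = [1.0] * len(channels)
    a = b = 0.0
    for _ in range(n_iter):
        a, b = weighted_linefit(channels, energies, w)
        resid = [abs(e - (a + b * c)) for c, e in zip(channels, energies)]
        scale = max(sum(resid) / len(resid), 1e-9)
        w = [1.0 / (1.0 + (r / scale) ** 2) for r in resid]
    return a, b

# Example: 152Eu-like calibration lines (channel, keV); values are made up.
chans = [244, 689, 1764, 2213, 2805]
lines = [121.8, 344.3, 867.4, 1112.1, 1408.0]
a, b = calibrate(chans, lines)
```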

  8. A novel iterative energy calibration method for composite germanium detectors

    Energy Technology Data Exchange (ETDEWEB)

    Pattabiraman, N.S.; Chintalapudi, S.N.; Ghugre, S.S. E-mail: ssg@alpha.iuc.res.in

    2004-07-01

    An automatic method for energy calibration of the observed experimental spectrum has been developed. The method presented is based on an iterative algorithm and presents an efficient way to perform energy calibrations after establishing the weights of the calibration data. An application of this novel technique for data acquired using composite detectors in an in-beam {gamma}-ray spectroscopy experiment is presented.

  9. 1st International Conference on Computational and Experimental Biomedical Sciences

    CERN Document Server

    Jorge, RM

    2015-01-01

    This book contains the full papers presented at ICCEBS 2013 – the 1st International Conference on Computational and Experimental Biomedical Sciences, which was organized in Azores, in October 2013. The included papers present and discuss new trends in those fields, using several methods and techniques, including active shape models, constitutive models, isogeometric elements, genetic algorithms, level sets, material models, neural networks, optimization, and the finite element method, in order to address more efficiently different and timely applications involving biofluids, computer simulation, computational biomechanics, image based diagnosis, image processing and analysis, image segmentation, image registration, scaffolds, simulation, and surgical planning. The main audience for this book consists of researchers, Ph.D students, and graduate students with multidisciplinary interests related to the areas of artificial intelligence, bioengineering, biology, biomechanics, computational fluid dynamics, comput...

  10. Experimental determinations of correction factors as a function of vertical displacement of radioactive sources in the radionuclide calibrators of the CRCN-NE, Pernambuco, Brazil

    International Nuclear Information System (INIS)

    Fragoso, Maria da Conceição de Farias; Albuquerque, Antonio Morais de Sá; Lacerda, Isabelle Viviane Batista de; Oliveira, Mércia L.

    2011-01-01

    In nuclear medicine, accurate knowledge of the activity of the radiopharmaceuticals to be administered to patients is an important factor in ensuring the success of diagnosis or therapy. The activity measurements are performed in reentrant ionization chambers, also known as radionuclide calibrators. These instruments are sensitive to changes in radioactive sample geometry and in its position within the chamber well. The purpose of this work was to evaluate the behavior of radionuclide calibrators by means of vertical displacement of radioactive sources in the well, and to determine experimentally the correction factors for each radionuclide, identifying the specific positions at which the measurements must be made to ensure the highest sensitivity. The highest activity was obtained between 6 and 8 cm from the bottom of the well for both radionuclide calibrators used in this work. (author)
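As a toy illustration of how such position-dependent correction factors can be derived and applied, one can normalize all readings to the position of maximum response. All readings below are invented, not the CRCN-NE data.

```python
# Sketch: deriving height correction factors for a radionuclide calibrator.
# Readings (arbitrary activity units) for one source at heights in cm
# from the bottom of the well; values are illustrative only.

readings = {2: 98.1, 4: 99.4, 6: 100.0, 8: 99.9, 10: 98.7}

# Reference position: where the chamber response is highest.
ref_height = max(readings, key=readings.get)

# Correction factor for each height, normalizing to the reference reading.
factors = {h: readings[ref_height] / r for h, r in readings.items()}

def corrected_activity(measured, height):
    """Apply the height-dependent correction factor to a raw reading."""
    return measured * factors[height]
```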

  11. Experimental determinations of correction factors as a function of vertical displacement of radioactive sources in the radionuclide calibrators of the CRCN-NE, Pernambuco, Brazil

    Energy Technology Data Exchange (ETDEWEB)

    Fragoso, Maria da Conceição de Farias; Albuquerque, Antonio Morais de Sá; Lacerda, Isabelle Viviane Batista de; Oliveira, Mércia L. [Centro Regional de Ciências Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)

    2011-07-01

    In nuclear medicine, accurate knowledge of the activity of the radiopharmaceuticals to be administered to patients is an important factor in ensuring the success of diagnosis or therapy. The activity measurements are performed in reentrant ionization chambers, also known as radionuclide calibrators. These instruments are sensitive to changes in radioactive sample geometry and in its position within the chamber well. The purpose of this work was to evaluate the behavior of radionuclide calibrators by means of vertical displacement of radioactive sources in the well, and to determine experimentally the correction factors for each radionuclide, identifying the specific positions at which the measurements must be made to ensure the highest sensitivity. The highest activity was obtained between 6 and 8 cm from the bottom of the well for both radionuclide calibrators used in this work. (author)

  12. Device calibration impacts security of quantum key distribution.

    Science.gov (United States)

    Jain, Nitin; Wittmann, Christoffer; Lydersen, Lars; Wiechers, Carlos; Elser, Dominique; Marquardt, Christoph; Makarov, Vadim; Leuchs, Gerd

    2011-09-09

    Characterizing the physical channel and calibrating the cryptosystem hardware are prerequisites for establishing a quantum channel for quantum key distribution (QKD). Moreover, an inappropriately implemented calibration routine can open a fatal security loophole. We propose and experimentally demonstrate a method to induce a large temporal detector efficiency mismatch in a commercial QKD system by deceiving a channel length calibration routine. We then devise an optimal and realistic strategy using faked states to break the security of the cryptosystem. A fix for this loophole is also suggested.

  13. A portable, automated, inexpensive mass and balance calibration system

    International Nuclear Information System (INIS)

    Maxwell, S.L. III; Clark, J.P.

    1987-01-01

    Reliable mass measurements are essential for a nuclear production facility or process control laboratory. DOE Order 5630.2 requires that traceable standards be used to calibrate and monitor equipment used for nuclear material measurements. To ensure the reliability of mass measurements and to comply with DOE traceability requirements, a portable, automated mass and balance calibration system is used at the Savannah River Plant. Automation is achieved using an EPSON HX-20 notebook computer, which can be operated via RS232C interfacing to electronic balances or function with manual data entry if computer interfacing is not feasible. This economical, comprehensive, user-friendly system has three main functions in a mass measurement control program (MMCP): balance certification, calibration of mass standards, and daily measurement of traceable standards. The balance certification program tests for accuracy, precision, sensitivity, linearity, and cornerloading versus specific requirements. The mass calibration program allows rapid calibration of inexpensive mass standards traceable to certified Class S standards. This MMCP permits daily measurement of traceable standards to monitor the reliability of balances during routine use. The automated system verifies balance calibration, stores results for future use, and provides a printed control chart of the stored data. Another feature of the system permits three different weighing routines that accommodate their need for varying degrees of reliability in routine weighing operations
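A daily check of a traceable standard against control-chart limits, of the general kind this mass measurement control program performs, can be sketched as follows. The history values and the 3-sigma limit are illustrative assumptions, not SRP procedure.

```python
# Sketch: control-chart check of a daily traceable-standard weighing.
# History readings (grams) are invented for illustration.
import statistics

history = [10.0002, 9.9998, 10.0001, 9.9999, 10.0003, 9.9997, 10.0000]
mean = statistics.mean(history)
sigma = statistics.stdev(history)

def in_control(reading, k=3.0):
    """True if the reading falls within mean +/- k*sigma of the history."""
    return abs(reading - mean) <= k * sigma
```

In a program like the one described, an out-of-control reading would trigger recalibration of the balance before routine weighings continue.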

  14. A portable, automated, inexpensive mass and balance calibration system

    International Nuclear Information System (INIS)

    Maxwell, S.L. III; Clark, J.P.

    1987-01-01

    Reliable mass measurements are essential for a nuclear production facility or process control laboratory. DOE Order 5630.2 requires that traceable standards be used to calibrate and monitor equipment used for nuclear material measurements. To ensure the reliability of mass measurements and to comply with DOE traceability requirements, a portable, automated mass and balance calibration system is used at the Savannah River Plant. Automation is achieved using an EPSON HX-20 notebook computer, which can be operated via RS232C interfacing to electronic balances or function with manual data entry if computer interfacing is not feasible. This economical, comprehensive, user-friendly system has three main functions in a mass measurement control program (MMCP): balance certification, calibration of mass standards, and daily measurement of traceable standards. The balance certification program tests for accuracy, precision, sensitivity, linearity, and cornerloading versus specific requirements. The mass calibration program allows rapid calibration of inexpensive mass standards traceable to certified Class S standards. This MMCP permits daily measurement of traceable standards to monitor the reliability of balances during routine use. The automated system verifies balance calibration, stores results for future use, and provides a printed control chart of the stored data. Another feature of the system permits three different weighing routines that accommodate the need for varying degrees of reliability in routine weighing operations. 1 ref

  15. Study of the performance of diagnostic radiology instruments during calibration

    International Nuclear Information System (INIS)

    Freitas, Rodrigo N. de; Vivolo, Vitor; Potiens, Maria da Penha A.

    2008-01-01

    Full text: The instruments used in diagnostic radiology measurements represent 8% of the instruments tested annually by the calibration laboratory of IPEN (approximately 1600 in 2007). Considering that the calibration of this kind of instrument is performed biannually, it is possible to conclude that almost 300 instruments are being used to measure air kerma in diagnostic radiology clinics, to determine in-beam values (in front of the patient), attenuated measurements (behind the patient), and scattered radiation. This work presents the results of the calibration of instruments used in mammography, computed tomography, dental, and conventional diagnostic radiology dosimetry, performed during the period from 2005 to 2007. Their performance during the calibration measurements was evaluated. Although the calibration laboratory has three series of radiation qualities available for this type of calibration (RQR, N, and M, according to standards IEC 61267 and ISO 4037-1), the applications can be varied (general radiology, computed tomography, mammography, radiation protection, and fluoroscopy). Depending on its design and behaviour, one kind of instrument can be used for one or more types of application. The instruments normally used for diagnostic radiology measurements are ionization chambers with volumes varying from 3 to 1800 cm³, of cylindrical, spherical, or plane-parallel-plate type. They are usually sensitive to photons with energies greater than 15 keV and can be used up to 1200 keV. In this work they were tested in X-radiation fields from 25 to 150 kV, in specific qualities depending on the intended use of the instrument. The calibration results of 390 instruments received from 2005 to 2007 were analyzed. About 20 instruments could not be calibrated due to malfunction. The calibration coefficients obtained were between 0.88 and 1.24. The uncertainties were always less than ± 3.6% for instruments used in scattered

  16. Compact Optical Technique for Streak Camera Calibration

    International Nuclear Information System (INIS)

    Curt Allen; Terence Davies; Frans Janson; Ronald Justin; Bruce Marshall; Oliver Sweningsen; Perry Bell; Roger Griffith; Karla Hagans; Richard Lerche

    2004-01-01

    The National Ignition Facility is under construction at the Lawrence Livermore National Laboratory for the U.S. Department of Energy Stockpile Stewardship Program. Optical streak cameras are an integral part of the experimental diagnostics instrumentation. To accurately reduce data from the streak cameras, a temporal calibration is required. This article describes a technique for generating trains of precisely timed short-duration optical pulses that are suitable for temporal calibrations

  17. Identifying controlling variables for math computation fluency through experimental analysis: the interaction of stimulus control and reinforcing consequences.

    Science.gov (United States)

    Hofstadter-Duke, Kristi L; Daly, Edward J

    2015-03-01

    This study investigated a method for conducting experimental analyses of academic responding. In the experimental analyses, academic responding (math computation), rather than problem behavior, was reinforced across conditions. Two separate experimental analyses (one with fluent math computation problems and one with non-fluent math computation problems) were conducted with three elementary school children using identical contingencies while math computation rate was measured. Results indicate that the experimental analysis with non-fluent problems produced undifferentiated responding across participants; however, differentiated responding was achieved for all participants in the experimental analysis with fluent problems. A subsequent comparison of the single-most effective condition from the experimental analyses replicated the findings with novel computation problems. Results are discussed in terms of the critical role of stimulus control in identifying controlling consequences for academic deficits, and recommendations for future research refining and extending experimental analysis to academic responding are made. © The Author(s) 2014.

  18. "Calibration" system for spectral measurements and its experimental results

    Science.gov (United States)

    Bruchkouskaya, Sviatlana I.; Katkovsky, Leonid V.; Belyaev, Boris I.; Malyshev, Vladislav B.

    2017-04-01

    The "Calibration" system has been developed at the A. N. Sevchenko Research Institute of Applied Physical Problems of the Belarusian State University. It was designed for measuring the spectral reflectance characteristics of all types of natural surfaces (test sites) under ground conditions or on board airborne carriers, and has the following components: - A high-resolution photospectroradiometer (PhSR) with a range of 400-900 nm, equipped with a digital time-lapse video system; - A two-channel modular spectroradiometer (TMS) with a range of 400-900 nm, designed for simultaneous measurement of the reflected brightness of the underlying surface and the incident radiation from the upper hemisphere; - Two portable spectroradiometers (PSR-700 and PSR-1300) with spectral ranges of 800-1500 nm and 1200-2500 nm; - A scanning solar spectropolarimeter (SSP-600) with a range of 350-950 nm for measurements of direct sunlight and of light scattered by the atmosphere at different angles. The "Calibration" system provides a spectral resolution of 5.2 nm in the 400-900 nm range, 10 nm in the 800-1500 nm range, and 15 nm in the 1200-2500 nm range. Measurements of the optical characteristics of solar radiation (for determining atmospheric parameters) and of the underlying surface are synchronous. There is also a set of special nozzles for measurements of spectral brightness coefficients, polarization characteristics, and spectral albedo. Spectra and images are geotagged with navigation data (time, GPS). For measurements of spectral reflection dependencies within the "Monitoring-SG" framework, expeditions to the Kuril Islands, the Kursk aerospace test site, and the Kamchatka Peninsula were conducted in 2015 and 2016. Spectra of different underlying surfaces were obtained: soils, plants, water bodies, and sedimentary and volcanic rocks. These surveys are valuable material for further research and for the selection of test sites for the flight calibration of space imaging systems. Information obtained

  19. A New Automated Instrument Calibration Facility at the Savannah River Site

    International Nuclear Information System (INIS)

    Polz, E.; Rushton, R.O.; Wilkie, W.H.; Hancock, R.C.

    1998-01-01

    The Health Physics Instrument Calibration Facility at the Savannah River Site in Aiken, SC was expressly designed and built to calibrate portable radiation survey instruments. The facility incorporates recent advances in automation technology, building layout and construction, and computer software to improve the calibration process. Nine new calibration systems automate instrument calibration and data collection. The building is laid out so that instruments are moved from one area to another in a logical, efficient manner. New software and hardware integrate all functions such as shipping/receiving, work flow, calibration, testing, and report generation. Benefits include a streamlined and integrated program, improved efficiency, reduced errors, and better accuracy

  20. Theoretical and experimental study of radon measurement with designing and calibration domestic canister with active charcoal

    International Nuclear Information System (INIS)

    Urosevic, V.; Nikezic, D.; Zekic, R.

    2005-01-01

    Radon concentration in air may vary significantly due to atmospheric variation. Measurements with active charcoal can therefore be inaccurate because of the variation in radon concentration. We built a model to simulate radon measurements with active charcoal in order to optimize and improve the integration characteristics. A numerical method and a computer code based on the finite element method were developed for the case of variable radon concentration in air. This program simulates radon adsorption by the activated charcoal bed, enabling determination of the sensitivity. The dependence of the sensitivity on different parameters, such as temperature, thickness of the charcoal, etc., was studied using this program. Using the results of the theoretical investigation, we designed and calibrated our canister with active charcoal for radon measurements. (author)

  1. Medical-grade Sterilizable Target for Fluid-immersed Fetoscope Optical Distortion Calibration.

    Science.gov (United States)

    Nikitichev, Daniil I; Shakir, Dzhoshkun I; Chadebecq, François; Tella, Marcel; Deprest, Jan; Stoyanov, Danail; Ourselin, Sébastien; Vercauteren, Tom

    2017-02-23

    We have developed a calibration target for use with fluid-immersed endoscopes within the context of the GIFT-Surg (Guided Instrumentation for Fetal Therapy and Surgery) project. One of the aims of this project is to engineer novel, real-time image processing methods for intra-operative use in the treatment of congenital birth defects, such as spina bifida and the twin-to-twin transfusion syndrome. The developed target allows for the sterility-preserving optical distortion calibration of endoscopes within a few minutes. Good optical distortion calibration and compensation are important for mitigating undesirable effects like radial distortions, which not only hamper accurate imaging using existing endoscopic technology during fetal surgery, but also make acquired images less suitable for potentially very useful image computing applications, like real-time mosaicing. This paper proposes a novel fabrication method to create an affordable, sterilizable calibration target suitable for use in a clinical setup. This method involves etching a calibration pattern by laser cutting a sandblasted stainless steel sheet. The target was validated using the camera calibration module provided by OpenCV, a state-of-the-art software library popular in the computer vision community.
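The radial distortion such a target helps estimate is commonly described by the Brown-Conrady model. A minimal sketch of that model and its inversion follows; the coefficients k1 and k2 are made-up values, not results from any real endoscope calibration.

```python
# Sketch of the radial (Brown-Conrady) distortion model that camera
# calibration estimates; coefficients are hypothetical.

def distort(x, y, k1, k2):
    """Map an ideal normalized image point to its radially distorted position."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

def undistort(xd, yd, k1, k2, n_iter=20):
    """Invert the distortion by fixed-point iteration (small-distortion regime)."""
    x, y = xd, yd
    for _ in range(n_iter):
        r2 = x * x + y * y
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / scale, yd / scale
    return x, y
```

Compensation of this kind is what makes the acquired frames usable for downstream tasks like mosaicing, since those assume a near-pinhole projection.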

  2. Mercury CEM Calibration

    Energy Technology Data Exchange (ETDEWEB)

    John F. Schabron; Joseph F. Rovani; Susan S. Sorini

    2007-03-31

    The Clean Air Mercury Rule (CAMR) which was published in the Federal Register on May 18, 2005, requires that calibration of mercury continuous emissions monitors (CEMs) be performed with NIST-traceable standards. Western Research Institute (WRI) is working closely with the Electric Power Research Institute (EPRI), the National Institute of Standards and Technology (NIST), and the Environmental Protection Agency (EPA) to facilitate the development of the experimental criteria for a NIST traceability protocol for dynamic elemental mercury vapor generators. The traceability protocol will be written by EPA. Traceability will be based on the actual analysis of the output of each calibration unit at several concentration levels ranging from about 2-40 ug/m{sup 3}, and this analysis will be directly traceable to analyses by NIST using isotope dilution inductively coupled plasma/mass spectrometry (ID ICP/MS) through a chain of analyses linking the calibration unit in the power plant to the NIST ID ICP/MS. Prior to this project, NIST did not provide a recommended mercury vapor pressure equation or list mercury vapor pressure in its vapor pressure database. The NIST Physical and Chemical Properties Division in Boulder, Colorado was subcontracted under this project to study the issue in detail and to recommend a mercury vapor pressure equation that the vendors of mercury vapor pressure calibration units can use to calculate the elemental mercury vapor concentration in an equilibrium chamber at a particular temperature. As part of this study, a preliminary evaluation of calibration units from five vendors was made. The work was performed by NIST in Gaithersburg, MD and Joe Rovani from WRI who traveled to NIST as a Visiting Scientist.

  3. Experiences in Automated Calibration of a Nickel Equation of State

    Science.gov (United States)

    Carpenter, John H.

    2017-06-01

    Wide availability of large computers has led to increasing incorporation of computational data, such as from density functional theory molecular dynamics, in the development of equation of state (EOS) models. Once a grid of computational data is available, it is usually left to an expert modeler to model the EOS using traditional techniques. One can envision the possibility of using the increasing computing resources to perform black-box calibration of EOS models, with the goal of reducing the workload on the modeler or enabling non-experts to generate good EOSs with such a tool. Progress towards building such a black-box calibration tool will be explored in the context of developing a new, wide-range EOS for nickel. While some details of the model and data will be shared, the focus will be on what was learned by automatically calibrating the model in a black-box method. Model choices and ensuring physicality will also be discussed. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  4. Calibration method of the pulsed X-ray relative sensitivity for ST401 plastic scintillators

    International Nuclear Information System (INIS)

    Xie Hongwei; Song Guzhou; Wang Kuilu

    2011-01-01

    A method for calibrating the relative sensitivity of ST401 plastic scintillators to pulsed X-rays is presented. Experimental relative sensitivity calibrations of plastic scintillators of different thicknesses, from 1 mm to 50 mm, were accomplished on the 'Chenguang' pulsed X-ray source and a Co radioactive source. The uncertainty of the calibration data is evaluated, and the data can serve as experimental evidence for the relative sensitivity conversion of the ST401 plastic scintillator. (authors)

  5. Calculation Analysis of Calibration Factors of Airborne Gamma-ray Spectrometer

    International Nuclear Information System (INIS)

    Zhao Jun; Zhu Jinhui; Xie Honggang; He Qinglin

    2009-01-01

    To determine the calibration factors of an airborne gamma-ray spectrometer measuring a large-area gamma-ray-emitting source at different flying heights, a series of Monte Carlo simulations was performed. The response energy spectra of the NaI crystals in the airplane caused by natural-decay-series calibration pads, and the calibration factors at different heights above a Cs-137 plane source, were obtained. The calculated results agree well with the experimental data. (authors)
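A toy Monte Carlo showing why the response to a plane source falls with flying height: the simplified geometry below is a small disk detector over a square source, with no air attenuation, and is not the paper's NaI/airplane model.

```python
# Toy Monte Carlo of the height dependence of a detector's response to a
# plane source. Geometry and source size are illustrative assumptions.
import math
import random

def response(height, n=200_000, half_width=50.0, det_radius=0.5):
    """Mean probability that a photon emitted isotropically from a random
    point on a square plane source enters a small disk detector directly
    above the source center at the given height (no attenuation)."""
    random.seed(0)  # common random numbers: heights are compared fairly
    total = 0.0
    for _ in range(n):
        # emission point on the plane source
        x = random.uniform(-half_width, half_width)
        y = random.uniform(-half_width, half_width)
        # solid angle of the disk seen from (x, y): A*cos(theta)/d^2 = A*h/d^3
        d2 = x * x + y * y + height * height
        omega = math.pi * det_radius ** 2 * height / d2 ** 1.5
        total += omega / (4 * math.pi)  # fraction of emissions into the disk
    return total / n

# response(h) decreases with h, mirroring the height-dependent calibration factors
```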

  6. Model validation and calibration based on component functions of model output

    International Nuclear Information System (INIS)

    Wu, Danqing; Lu, Zhenzhou; Wang, Yanping; Cheng, Lei

    2015-01-01

    The target of this work is to validate the component functions of model output between physical observations and a computational model with the area metric. Based on the theory of high dimensional model representations (HDMR) of independent input variables, conditional expectations are component functions of the model output, and these conditional expectations reflect partial information of the model output. Therefore, the model validation of conditional expectations reveals the discrepancy between the partial information of the computational model output and that of the observations. A calibration of the conditional expectations is then carried out to reduce the value of the model validation metric. After that, the model validation metric of the model output is recalculated with the calibrated model parameters, and the result shows that reducing the discrepancy in the conditional expectations can help decrease the difference in model output. Finally, several examples are employed to demonstrate the rationality and necessity of the methodology in cases of both a single validation site and multiple validation sites. - Highlights: • A validation metric of conditional expectations of model output is proposed. • HDMR explains the relationship between conditional expectations and model output. • An improved approach to parameter calibration updates the computational models. • The validation and calibration process is applied at a single site and at multiple sites. • The validation and calibration process shows superiority over existing methods
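The area metric used in such validation studies can be sketched, for plain samples rather than conditional expectations, as the area between two empirical CDFs. This is a simplified illustration, not the paper's exact formulation.

```python
# Sketch: area validation metric as the area between two empirical CDFs.

def ecdf_area(model, obs):
    """Integrate |F_model(x) - F_obs(x)| over the pooled sample support."""
    pts = sorted(set(model) | set(obs))

    def F(sample, x):
        # empirical CDF: fraction of the sample at or below x
        return sum(1 for v in sample if v <= x) / len(sample)

    area = 0.0
    for lo, hi in zip(pts[:-1], pts[1:]):
        # step functions are constant on [lo, hi), so the integrand is too
        area += abs(F(model, lo) - F(obs, lo)) * (hi - lo)
    return area
```

A metric of zero means the model's distribution matches the observations exactly on the sampled support; calibration aims to shrink this area.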

  7. Calibration of LOFAR data on the cloud

    Science.gov (United States)

    Sabater, J.; Sánchez-Expósito, S.; Best, P.; Garrido, J.; Verdes-Montenegro, L.; Lezzi, D.

    2017-04-01

    New scientific instruments are starting to generate an unprecedented amount of data. The Low Frequency Array (LOFAR), one of the Square Kilometre Array (SKA) pathfinders, is already producing data on a petabyte scale. The calibration of these data presents a huge challenge for final users: (a) extensive storage and computing resources are required; (b) the installation and maintenance of the software required for the processing is not trivial; and (c) the requirements of calibration pipelines, which are experimental and under development, are quickly evolving. After encountering some limitations in classical infrastructures like dedicated clusters, we investigated the viability of cloud infrastructures as a solution. We found that the installation and operation of LOFAR data calibration pipelines is not only possible, but can also be efficient in cloud infrastructures. The main advantages were: (1) the ease of software installation and maintenance, and the availability of standard APIs and tools, widely used in the industry; this reduces the requirement for significant manual intervention, which can have a highly negative impact in some infrastructures; (2) the flexibility to adapt the infrastructure to the needs of the problem, especially as those demands change over time; (3) the on-demand consumption of (shared) resources. We found that a critical factor (also in other infrastructures) is the availability of scratch storage areas of an appropriate size. We found no significant impediments associated with the speed of data transfer, the use of virtualization, the use of external block storage, or the memory available (provided a minimum threshold is reached). Finally, we considered the cost-effectiveness of a commercial cloud like Amazon Web Services. While a cloud solution is more expensive than the operation of a large, fully-utilized cluster completely dedicated to LOFAR data reduction, we found that its costs are competitive if the number of datasets to be

  8. Patient positioning with X-ray detector self-calibration for image guided therapy

    International Nuclear Information System (INIS)

    Selby, B.P.; Sakas, G.; Stilla, U.; Groch, W.-D.

    2011-01-01

    Full text: Automatic alignment estimation from projection images has a range of applications, but misaligned cameras induce inaccuracies. Calibration methods for optical cameras requiring calibration bodies or detectable features have been a matter of research for years; not so for image guided therapy, although exact patient pose recovery is crucial. To image patient anatomy, X-ray rather than optical equipment is used, and feature detection is often infeasible. Furthermore, a method not requiring a calibration body, usable during treatment, would be desirable to improve the accuracy of patient alignment. We present a novel approach that does not rely on image features but combines intensity-based calibration with 3D pose recovery. A stereoscopic X-ray camera model is proposed, and the effects of erroneous parameters on the patient alignment are evaluated. The relevant camera parameters are automatically computed by comparison of X-ray to CT images and are incorporated in the patient alignment computation. The methods were tested with ground truth data of an anatomic phantom with artificially produced misalignments, and with available real-patient images from a particle therapy machine. We show that our approach can reduce patient alignment errors caused by camera mis-calibration from more than 5 mm to below 0.2 mm. Use of images with artificial noise shows that the method is robust against image degradation of 2-5%. X-ray camera self-calibration improves accuracy when cameras are misaligned. We showed that rigid-body alignment was computed more accurately and that self-calibration is possible even when detection of corresponding image features is not. (author)

  9. Calibration method for a carbon nanotube field-effect transistor biosensor

    International Nuclear Information System (INIS)

    Abe, Masuhiro; Murata, Katsuyuki; Ataka, Tatsuaki; Matsumoto, Kazuhiko

    2008-01-01

    An easy calibration method based on the Langmuir adsorption theory is proposed for carbon nanotube field-effect transistor (NTFET) biosensors. The method was applied to three NTFET biosensors that had approximately the same structure but exhibited different characteristics. After calibration, their experimentally determined characteristics were in good agreement with the calibration curve. The reason the observed characteristics of these NTFET biosensors differed among devices is that the carbon nanotube (CNT) forming the channel was not uniform. Although the controlled growth of a CNT is difficult, it is shown that an NTFET biosensor can be easily calibrated using the proposed calibration method, regardless of the CNT channel structure
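A minimal sketch of a Langmuir-based calibration: the functional form follows Langmuir adsorption theory, response = R_max·K·c/(1 + K·c), but the linearized fitting route and all numbers below are assumptions for illustration, not the paper's procedure.

```python
# Sketch: calibrating a sensor response with the Langmuir adsorption model.

def langmuir(c, r_max, K):
    """Response to analyte concentration c: r_max * K*c / (1 + K*c)."""
    return r_max * K * c / (1.0 + K * c)

def fit_langmuir(concs, responses):
    """Estimate (r_max, K) from the linearized form:
    c/R = c/r_max + 1/(K*r_max), i.e. a straight line in (c, c/R)."""
    xs = concs
    ys = [c / r for c, r in zip(concs, responses)]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    r_max = 1.0 / slope
    K = 1.0 / (intercept * r_max)
    return r_max, K
```

Normalizing each device's response by its own fitted r_max is one way such a calibration can make devices with non-uniform channels comparable.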

  10. Calibration of optical cantilever deflection readers

    International Nuclear Information System (INIS)

    Hu Zhiyu; Seeley, Tim; Kossek, Sebastian; Thundat, Thomas

    2004-01-01

    Because of its ultrahigh sensitivity, the optical lever detection method similar to that used in the atomic force microscope (AFM) has been widely employed as a standard technique for measuring microcantilever deflection. Along with the increasing interest in using the microcantilever as a sensing platform, there is also a requirement for a reliable calibration technique. Many researchers have used the concept of optical lever detection to construct microcantilever deflection readout instruments for chemical, physical, and biological detection. However, without an AFM piezo z scanner, it is very difficult to precisely calibrate these instruments. Here, we present a step-by-step method to conveniently calibrate an instrument using commercially available piezoresistive cantilevers. The experimental results closely match the theoretical calculation. Following this procedure, one can easily calibrate any optical cantilever deflection detection system with high reproducibility, precision, and reliability. A detailed discussion of the optical lever readout system design has been addressed in this article

  11. Geometrical error calibration in reflective surface testing based on reverse Hartmann test

    Science.gov (United States)

    Gong, Zhidong; Wang, Daodang; Xu, Ping; Wang, Chao; Liang, Rongguang; Kong, Ming; Zhao, Jun; Mo, Linhai; Mo, Shuhui

    2017-08-01

    In fringe-illumination deflectometry based on the reverse-Hartmann-test configuration, ray tracing of the modeled testing system is performed to reconstruct the test surface error. Careful calibration of the system geometry is required to achieve high testing accuracy. To realize high-precision surface testing with the reverse Hartmann test, a computer-aided geometrical error calibration method is proposed. The aberrations corresponding to various geometrical errors are studied. Using the aberration weights for the various geometrical errors, a computer-aided optimization of the system geometry with iterative ray tracing is carried out to calibrate the geometrical errors, and accuracy on the order of subnanometers is achieved.
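
The computer-aided calibration idea, estimating geometrical errors from the measured aberrations and their precomputed weights, can be reduced to a toy least-squares problem. The aberration modes, weights, and two-error parameterization below are illustrative assumptions, not the authors' actual model:

```python
def solve2(m, b):
    """Solve a 2x2 linear system m @ e = b by Cramer's rule."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [(b[0] * m[1][1] - m[0][1] * b[1]) / det,
            (m[0][0] * b[1] - m[1][0] * b[0]) / det]

def calibrate_geometry(a, s):
    """Least-squares estimate of two geometry errors e from measured
    aberration coefficients a ≈ s @ e, where s holds the aberration
    weight of each geometry error on each aberration mode."""
    rows = len(a)
    sts = [[sum(s[i][r] * s[i][c] for i in range(rows)) for c in range(2)]
           for r in range(2)]
    sta = [sum(s[i][r] * a[i] for i in range(rows)) for r in range(2)]
    return solve2(sts, sta)

# Hypothetical aberration weights: rows = (defocus, coma, astigmatism),
# columns = (screen tilt, axial displacement)
s = [[0.8, 0.3],
     [0.2, 0.9],
     [0.5, 0.1]]
true_e = [0.010, -0.004]  # injected geometry errors
a = [row[0] * true_e[0] + row[1] * true_e[1] for row in s]
e_hat = calibrate_geometry(a, s)
```

In the paper the forward map comes from iterative ray tracing rather than a fixed linear model, so the solve would be repeated as the geometry estimate is refined.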

  12. Contribution of computer science to the evaluation of experimental data

    International Nuclear Information System (INIS)

    Steuerwald, J.

    1978-11-01

    The GALE data acquisition system and EDDAR data processing system, used at Max-Planck-Institut fuer Plasmaphysik, serve to illustrate some of the various ways in which computer science plays a major role in developing the evaluation of experimental data. (orig.) [de

  13. Laser's calibration of an AOTF-based spectral colorimeter

    Science.gov (United States)

    Emelianov, Sergey P.; Khrustalev, Vladimir N.; Kochin, Leonid B.; Polosin, Lev L.

    2003-06-01

    The paper is devoted to methods of calibrating AOTF-based spectral colorimeters. The spectrometric method of measuring color values with reference to AOTF-based spectral colorimeters is surveyed. A theoretical description of the spectrometer data processing methods is offered. A justified choice of radiation source suitable for calibrating spectral colorimeters is made. Experimental results for different acousto-optical media and modes of interaction are presented.

  14. GRS vs. OMS Calibration in LISA Pathfinder Data Analysis

    Science.gov (United States)

    Meshksar, Neda; Ferraioli, Luigi; Mance, Davor; ten Pierick, Jan; Zweifel, Peter; Giardini, Domenico; LISA Pathfinder collaboration

    On board the LISA Pathfinder spacecraft the test mass displacement along the main measurement axis is sensed in two different ways: optically and electrostatically. We have monitored the relative calibration between the two measurements during the mission science phase. The trend sensitivity of the relative calibration has been computed for different physical parameters, such as temperature, magnetic field, test mass bias voltage and current.

  15. Computational design and experimental validation of new thermal barrier systems

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Shengmin [Louisiana State Univ., Baton Rouge, LA (United States)

    2015-03-31

    The focus of this project is the development of a reliable and efficient ab initio based computational high-temperature material design method which can be used to assist Thermal Barrier Coating (TBC) bond-coat and top-coat design. Experimental evaluations of the new TBCs are conducted to confirm their properties. Southern University is the subcontractor on this project, with a focus on developing the computational simulation method. We have applied the ab initio density functional theory (DFT) method and molecular dynamics simulations to screen top coats and bond coats for gas turbine thermal barrier coating design and validation applications. For experimental validation, our focus is on the hot corrosion performance of different TBC systems. For example, for one of the top coatings studied, we examined the thermal stability of TaZr2.75O8 and confirmed its hot corrosion performance.

  16. Calibration of a γ- Re θ transition model and its application in low-speed flows

    Science.gov (United States)

    Wang, YunTao; Zhang, YuLun; Meng, DeHong; Wang, GunXue; Li, Song

    2014-12-01

    The prediction of laminar-turbulent transition in the boundary layer is very important for obtaining accurate aerodynamic characteristics with computational fluid dynamics (CFD) tools, because laminar-turbulent transition is directly related to complex flow phenomena in the boundary layer and to separated flow. Unfortunately, the transition effect is not included in most of today's major CFD tools because transition modeling requires non-local calculations. In this paper, Menter's γ-Reθ transition model is calibrated and incorporated into a Reynolds-Averaged Navier-Stokes (RANS) code, the Trisonic Platform (TRIP) developed at the China Aerodynamics Research and Development Center (CARDC). Based on flat-plate experimental data from the literature, the empirical correlations involved in the transition model are modified and calibrated numerically. Numerical simulation of the low-speed flow over the Trapezoidal Wing (Trap Wing) is performed and compared with the corresponding experimental data. The results indicate that the γ-Reθ transition model can accurately predict the location of separation-induced and natural transition in flow regions with moderate pressure gradients. The transition model effectively improves the simulation accuracy of the boundary layer and of the aerodynamic characteristics.
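
Calibrating an empirical transition correlation against flat-plate data can be illustrated with a simple fit. The power-law form Re_θt = a·Tu^b and the data points below are illustrative stand-ins (Menter's actual correlation is piecewise and more elaborate); the sketch only shows the calibration mechanics:

```python
import math

def fit_power_law(tu, re_theta_t):
    """Fit Re_theta_t = a * Tu**b by linear least squares in log-log space."""
    xs = [math.log(t) for t in tu]
    ys = [math.log(r) for r in re_theta_t]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = math.exp((sy - b * sx) / n)
    return a, b

# Illustrative flat-plate transition data: Tu is the freestream turbulence
# intensity in percent, Re_theta_t the transition-onset momentum-thickness
# Reynolds number (numbers are assumptions, not measured values)
tu = [0.5, 1.0, 2.0, 3.0]
ret = [800.0, 420.0, 220.0, 155.0]
a, b = fit_power_law(tu, ret)
```

The fitted exponent is negative, reflecting the expected earlier transition at higher turbulence intensity.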

  17. Low-cost programmable pulse generator for particle telescope calibration

    CERN Document Server

    Sanchez, S; Seisdedos, M; Meziat, D; Carbajo, M; Medina, J; Bronchalo, E; Peral, L D; Rodríguez-Pacheco, J

    1999-01-01

    In this paper we present a new calibration system for particle telescopes, including a multipulse generator and a digital controller. The calibration system generates synchronized pulses of variable height for every detector channel of the telescope. The control system is based on a commercial microcontroller linked to a personal computer through a bidirectional RS-232 line. The aim of the device is to perform laboratory calibration of multi-detector telescopes prior to calibration at an accelerator. This task includes evaluation of the linearity and resolution of each detector channel, as well as of the coincidence logic. The heights of the pulses sent to the detectors are obtained by Monte Carlo simulation of the telescope response to a particle flux of any desired geometry and composition.
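
A minimal sketch of the Monte Carlo step the abstract mentions: sample particle energies and split each deposit between detector channels to derive pulse heights. The power-law spectrum and two-detector split below are our assumptions, not the authors' simulation:

```python
import random

def simulate_pulse_heights(n, seed=42):
    """Monte Carlo sketch: sample particle energies from a falling
    power-law spectrum (dN/dE ~ E**-2 on [10, 1000] MeV, an assumed
    flux) and split each deposit between two detector channels."""
    rng = random.Random(seed)
    e_min, e_max = 10.0, 1000.0
    pulses = []
    for _ in range(n):
        # Inverse-transform sampling of the E**-2 spectrum
        u = rng.random()
        e = 1.0 / (1.0 / e_min - u * (1.0 / e_min - 1.0 / e_max))
        frac = rng.uniform(0.2, 0.8)  # fraction deposited in detector 1
        pulses.append((frac * e, (1.0 - frac) * e))
    return pulses

pulses = simulate_pulse_heights(1000)
```

In the real system each sampled deposit would be converted to a DAC code for the corresponding pulse-generator channel.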

  18. Computational Fluid Dynamics Modeling of the Human Pulmonary Arteries with Experimental Validation.

    Science.gov (United States)

    Bordones, Alifer D; Leroux, Matthew; Kheyfets, Vitaly O; Wu, Yu-An; Chen, Chia-Yuan; Finol, Ender A

    2018-05-21

    Pulmonary hypertension (PH) is a chronic progressive disease characterized by elevated pulmonary arterial pressure, caused by an increase in pulmonary arterial impedance. Computational fluid dynamics (CFD) can be used to identify metrics representative of the stage of PH disease. However, experimental validation of CFD models is often not pursued due to the geometric complexity of the model or uncertainties in the reproduction of the required flow conditions. The goal of this work is to validate experimentally a CFD model of a pulmonary artery phantom using a particle image velocimetry (PIV) technique. Rapid prototyping was used for the construction of the patient-specific pulmonary geometry, derived from chest computed tomography angiography images. CFD simulations were performed with the pulmonary model with a Reynolds number matching those of the experiments. Flow rates, the velocity field, and shear stress distributions obtained with the CFD simulations were compared to their counterparts from the PIV flow visualization experiments. Computationally predicted flow rates were within 1% of the experimental measurements for three of the four branches of the CFD model. The mean velocities in four transversal planes of study were within 5.9 to 13.1% of the experimental mean velocities. Shear stresses were qualitatively similar between the two methods with some discrepancies in the regions of high velocity gradients. The fluid flow differences between the CFD model and the PIV phantom are attributed to experimental inaccuracies and the relative compliance of the phantom. This comparative analysis yielded valuable information on the accuracy of CFD predicted hemodynamics in pulmonary circulation models.

  19. Mercury Continuous Emmission Monitor Calibration

    Energy Technology Data Exchange (ETDEWEB)

    John Schabron; Eric Kalberer; Ryan Boysen; William Schuster; Joseph Rovani

    2009-03-12

    Mercury continuous emissions monitoring systems (CEMs) are being implemented in over 800 coal-fired power plant stacks throughout the U.S. Western Research Institute (WRI) is working closely with the Electric Power Research Institute (EPRI), the National Institute of Standards and Technology (NIST), and the Environmental Protection Agency (EPA) to facilitate the development of the experimental criteria for a NIST traceability protocol for dynamic elemental mercury vapor calibrators/generators. These devices are used to calibrate mercury CEMs at power plant sites. The Clean Air Mercury Rule (CAMR), which was published in the Federal Register on May 18, 2005 and vacated by a Federal appeals court in early 2008, required that calibration be performed with NIST-traceable standards. Despite the vacatur, future mercury emissions regulations will require NIST-traceable calibration standards, and EPA does not want to interrupt the effort toward developing NIST traceability protocols. The traceability procedures will be defined by EPA. An initial draft traceability protocol was issued by EPA in May 2007 for comment. In August 2007, EPA issued a conceptual interim traceability protocol for elemental mercury calibrators. The protocol is based on the actual analysis of the output of each calibration unit at several concentration levels, ranging initially from about 2-40 µg/m³ elemental mercury and in the future down to 0.2 µg/m³; this analysis will be directly traceable to analyses by NIST. The EPA traceability protocol document is divided into two separate sections. The first deals with the qualification of calibrator models by the vendors for use in mercury CEM calibration. The second describes the procedure that the vendors must use to certify the calibrators that meet the qualification specifications. The NIST-traceable certification is performance based, traceable to analysis using isotope dilution inductively coupled plasma

  20. Benchmarking Experimental and Computational Thermochemical Data: A Case Study of the Butane Conformers.

    Science.gov (United States)

    Barna, Dóra; Nagy, Balázs; Csontos, József; Császár, Attila G; Tasi, Gyula

    2012-02-14

    Due to its crucial importance, numerous studies have been conducted to determine the enthalpy difference between the conformers of butane. However, it is shown here that the most reliable experimental values are biased due to the statistical model utilized during the evaluation of the raw experimental data. In this study, using the appropriate statistical model, both the experimental expectation values and the associated uncertainties are revised. For the 133-196 and 223-297 K temperature ranges, 668 ± 20 and 653 ± 125 cal mol⁻¹, respectively, are recommended as reference values. Furthermore, to show that present-day quantum chemistry is a favorable alternative to experimental techniques in the determination of enthalpy differences of conformers, a focal-point analysis, based on coupled-cluster electronic structure computations, has been performed that included contributions of up to perturbative quadruple excitations as well as small correction terms beyond the Born-Oppenheimer and nonrelativistic approximations. For the 133-196 and 223-297 K temperature ranges, in exceptional agreement with the corresponding revised experimental data, our computations yielded 668 ± 3 and 650 ± 6 cal mol⁻¹, respectively. The most reliable enthalpy difference values for 0 and 298.15 K are also provided by the computational approach, 680.9 ± 2.5 and 647.4 ± 7.0 cal mol⁻¹, respectively.
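
The population implications of these enthalpy differences can be checked with a two-state Boltzmann estimate. This sketch uses the recommended 298.15 K value of 647.4 cal/mol and the twofold degeneracy of the gauche conformer, deliberately neglecting entropic contributions beyond degeneracy (a rough estimate; the paper's statistical treatment is more careful):

```python
import math

R_CAL = 1.98720425  # gas constant in cal mol^-1 K^-1

def trans_fraction(delta_h_cal, temp_k, g_gauche=2):
    """Two-state Boltzmann estimate of the anti (trans) butane fraction
    from the gauche-anti enthalpy difference; entropy contributions
    beyond the twofold gauche degeneracy are neglected."""
    k_eq = g_gauche * math.exp(-delta_h_cal / (R_CAL * temp_k))  # [gauche]/[anti]
    return 1.0 / (1.0 + k_eq)

# Using the recommended 298.15 K enthalpy difference, 647.4 cal/mol
f298 = trans_fraction(647.4, 298.15)
```

The resulting anti fraction of roughly 60% at room temperature is consistent with the magnitude of the measured enthalpy gap.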

  1. Experimental evaluation of permanent magnet probe flowmeter measuring high temperature liquid sodium flow in the ITSL

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Uiju; Kim, Yun Ho [Nuclear engineering Department, Hanyang University, 17 Haengdang-dong, Seongdong-gu, Seoul 133-791 (Korea, Republic of); Kim, Jong-Man; Kim, Tae-Joon [Korea Atomic Energy Research Institute, 1045 Daedeok-daero, Yuseong-gu, Daejeon 305-353 (Korea, Republic of); Kim, Sung Joong, E-mail: sungjkim@mit.edu [Nuclear engineering Department, Hanyang University, 17 Haengdang-dong, Seongdong-gu, Seoul 133-791 (Korea, Republic of)

    2013-12-15

    Highlights: • An Instrument Test Sodium Loop (ITSL) has been built and tested under various conditions at KAERI. • Free fall of liquid sodium was studied experimentally and numerically. • A Permanent Magnet Probe Flowmeter (PMPF) was tested in the ITSL. • Excellent linearity of the PMPF was achieved under high-temperature conditions. - Abstract: The Instrument Test Sodium Loop (ITSL) installed at the Korea Atomic Energy Research Institute (KAERI) is a medium-size experimental facility dedicated to obtaining relevant experimental data on liquid sodium flow characteristics under various thermal hydraulic conditions and to sodium purification. The ITSL has been utilized to perform thermal flow measurements of the liquid sodium and to calibrate a Permanent Magnet Probe Flowmeter (PMPF). The primary objective of this study is to obtain the liquid sodium flow rate over a wide temperature range using the PMPF. A non-stationary method was adopted for the calibration of the probe over the liquid sodium temperature range of 150-415 °C. A relationship between the measured voltage signal and the flow rate was obtained successfully. The calibration experiments yield excellent linear relationships between measured voltage and volumetric flow rate at various temperatures. A computational analysis using FlowMaster is also employed to facilitate the calibration process by predicting the liquid sodium flow rate. Finally, the effect of the fluid temperature on thermal flow measurements is discussed in light of the obtained experimental data.
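
The linear voltage-to-flow-rate calibration described above amounts to a least-squares line fit at each temperature. The calibration pairs below are hypothetical numbers, not the KAERI data:

```python
def fit_line(xs, ys):
    """Ordinary least-squares line fit; returns (slope, intercept)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n

# Hypothetical calibration pairs at one sodium temperature:
# probe voltage (mV) vs. volumetric flow rate (L/min)
v = [0.5, 1.1, 2.0, 3.2, 4.1]
q = [10.2, 21.8, 40.5, 63.9, 82.0]
sens, offset = fit_line(v, q)

def flow_rate(voltage_mv):
    """Convert a measured probe voltage to a flow rate via the fit."""
    return sens * voltage_mv + offset

q_est = flow_rate(2.5)
```

In practice one such fit would be stored per fluid temperature, since the magnet strength and sodium conductivity, and hence the sensitivity, vary with temperature.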

  2. Calibration of laboratory equipment and its intermediate verification

    International Nuclear Information System (INIS)

    Remedi, Jorge O.

    2011-01-01

    When a laboratory wants to prove that it has the technical competence to carry out tests or calibrations, it must demonstrate compliance with certain requirements, which establish, among others, the obligations to: calibrate or verify equipment before putting it into service, in order to ensure that it meets the laboratory's specifications; keep records evidencing the checks that equipment complies with the specification; perform intermediate checks to maintain confidence in the calibration status of the equipment; ensure that the operation and calibration status of equipment are checked when the equipment goes outside the direct control of the laboratory, before it is returned to service; establish a program and procedure for the calibration of equipment; and show how the calibration periods of the equipment were determined, as well as evidence that the intermediate checks are suitable for those calibration periods. However, some confusion is observed as to the meaning of the terms 'calibration' and 'verification' of an item of equipment. This paper analyzes the applicable documentation and suggests that the differences are generated in part by translations and in part by how the concepts are characterized in use, that is, whether in legal metrology or in conformity assessment. This study therefore aims to characterize both concepts, lay the foundations for distinguishing them, and outline appropriate strategies for calibration and verification activities to ensure compliance with regulatory requirements [es

  3. Combined computational and experimental study of Ar beam induced defect formation in graphite

    International Nuclear Information System (INIS)

    Pregler, Sharon K.; Hayakawa, Tetsuichiro; Yasumatsu, Hisato; Kondow, Tamotsu; Sinnott, Susan B.

    2007-01-01

    Irradiation of graphite, commonly used in nuclear power plants, is known to produce structural damage. Here, experimental and computational methods are used to study defect formation in graphite during Ar irradiation at an incident energy of 50 eV. The experimental samples are analyzed with scanning tunneling microscopy to quantify the size distribution of the defects that form. The computational approach is classical molecular dynamics simulations that illustrate the mechanisms by which the defects are produced. The results indicate that defects in graphite grow in concentrated areas and are nucleated by the presence of existing defects

  4. PLEIADES ABSOLUTE CALIBRATION : INFLIGHT CALIBRATION SITES AND METHODOLOGY

    Directory of Open Access Journals (Sweden)

    S. Lachérade

    2012-07-01

    In-flight calibration of space sensors once in orbit is a decisive step to be able to fulfil the mission objectives. This article presents the methods of in-flight absolute calibration processed during the commissioning phase. Four in-flight calibration methods are used: absolute calibration, cross-calibration with reference sensors such as PARASOL or MERIS, multi-temporal monitoring and inter-band calibration. These algorithms are based on acquisitions over natural targets such as African deserts, Antarctic sites, La Crau (an automatic calibration station) and oceans (calibration over molecular scattering), as well as extra-terrestrial sites such as the Moon and selected stars. After an overview of the instrument and a description of the calibration sites, it is pointed out how each method is able to address one or several aspects of the calibration. We focus on how these methods complement each other in their operational use, and how they help build a coherent set of information that addresses all aspects of in-orbit calibration. Finally, we present the perspectives that the high level of agility of PLEIADES offers for the improvement of its calibration and a better characterization of the calibration sites.

  5. GRS vs. OMS Calibration in LISA Pathfinder Data Analysis

    International Nuclear Information System (INIS)

    Meshksar, Neda; Ferraioli, Luigi; Mance, Davor; Zweifel, Peter; Giardini, Domenico; Ten Pierick, Jan

    2017-01-01

    On board LISA Pathfinder spacecraft the test mass displacement along the main measurement axis is sensed in two different ways: optically and electrostatically. We have monitored the relative calibration between the two measurements during the mission science phase. The trend sensitivity of the relative calibration has been computed for different physical parameters, such as temperature, magnetic field, test mass bias voltage and current. (paper)

  6. Feature-based automatic color calibration for networked camera system

    Science.gov (United States)

    Yamamoto, Shoji; Taki, Keisuke; Tsumura, Norimichi; Nakaguchi, Toshiya; Miyake, Yoichi

    2011-01-01

    In this paper, we have developed a feature-based automatic color calibration using area-based detection and an adaptive nonlinear regression method. Simple chartless color matching is achieved by exploiting the overlap of the image areas seen by each camera. Accurate detection of common objects is achieved by an area-based detection that combines MSER with SIFT. Adaptive color calibration using the colors of the detected objects is computed by a nonlinear regression method. The method can indicate the contribution of an object's color to the calibration, and automatic selection notification for the user is performed by this function. Experimental results show that the accuracy of the calibration improves gradually. This method is adequate for practical multi-camera color calibration if enough samples are obtained.

  7. CALIBRATION OF LOW COST DIGITAL CAMERA USING DATA FROM SIMULTANEOUS LIDAR AND PHOTOGRAMMETRIC SURVEYS

    Directory of Open Access Journals (Sweden)

    E. Mitishita

    2012-07-01

    Digital photogrammetric products from the integration of imagery and lidar datasets are a reality nowadays. When the imagery and lidar surveys are performed together and the camera is connected to the lidar system, direct georeferencing can be applied to compute the exterior orientation parameters of the images. Direct georeferencing of the images requires accurate interior orientation parameters for photogrammetric applications. Camera calibration is a procedure applied to compute the interior orientation parameters (IOPs). Calibration research has established that to obtain accurate IOPs, the calibration must be performed under the same or equivalent conditions as the photogrammetric survey. This paper presents the methodology and experimental results of in situ self-calibration using a simultaneous image block and lidar dataset. The calibration results are analyzed and discussed. To perform this research, a test field was established in an urban area. A set of signalized points was placed on the test field for use as check points or control points. The photogrammetric images and the lidar dataset of the test field were taken simultaneously. Four flight strips were used to obtain a cross layout. The strips were flown in opposite directions (W-E, E-W, N-S and S-N). The Kodak DSC Pro SLR/c digital camera was connected to the lidar system. The coordinates of the exposure stations were computed from the lidar trajectory. Different layouts of vertical control points were used in the calibration experiments. The experiments use vertical coordinates from a precise differential GPS survey or computed by an interpolation procedure using the lidar dataset. The positions of the exposure stations are used as control points in the calibration procedure to eliminate the linear dependency of the group of interior and exterior orientation parameters. 
    This linear dependency arises, in the calibration procedure, when the vertical images and

  8. Online Calibration of the TPC Drift Time in the ALICE High Level Trigger

    Science.gov (United States)

    Rohr, David; Krzewicki, Mikolaj; Zampolli, Chiara; Wiechula, Jens; Gorbunov, Sergey; Chauvin, Alex; Vorobyev, Ivan; Weber, Steffen; Schweda, Kai; Lindenstruth, Volker

    2017-06-01

    A Large Ion Collider Experiment (ALICE) is one of the four major experiments at the Large Hadron Collider (LHC) at CERN. The high level trigger (HLT) is a compute cluster, which reconstructs collisions as recorded by the ALICE detector in real-time. It employs a custom online data-transport framework to distribute data and workload among the compute nodes. ALICE employs subdetectors that are sensitive to environmental conditions such as pressure and temperature, e.g., the time projection chamber (TPC). A precise reconstruction of particle trajectories requires calibration of these detectors. Performing calibration in real time in the HLT improves the online reconstruction and renders certain offline calibration steps obsolete, speeding up offline physics analysis. For LHC Run 3, starting in 2020 when data reduction will rely on reconstructed data, online calibration becomes a necessity. Reconstructed particle trajectories form the basis for the calibration, making fast online tracking mandatory. The main detectors used for this purpose are the TPC and the Inner Tracking System. Reconstructing the trajectories in the TPC is the most compute-intensive step. We present several improvements to the ALICE HLT developed to facilitate online calibration. The main new development for online calibration is a wrapper that can run ALICE offline analysis and calibration tasks inside the HLT. In addition, we have added asynchronous processing capabilities to support long-running calibration tasks in the HLT framework, which otherwise runs event-synchronously. In order to improve the resiliency, an isolated process performs the asynchronous operations such that even a fatal error does not disturb data taking. We have complemented the original loop-free HLT chain with ZeroMQ data-transfer components. 
The ZeroMQ components facilitate a feedback loop that inserts the calibration result created at the end of the chain back into tracking components at the beginning of the chain, after a

  9. A comparative study to investigate burnup in research reactor fuel using two independent experimental methods

    International Nuclear Information System (INIS)

    Iqbal, M.; Mehmood, T.; Ayazuddin, S.K.; Salahuddin, A.; Pervez, S.

    2001-01-01

    Two independent experimental methods have been used for a comparative study of fuel burnup measurement in a low-enriched-uranium, plate-type research reactor. In the first method, a gamma-ray activity ratio method was employed. An experimental setup was established for gamma-ray scanning using a previously calibrated high-purity germanium detector. The computer software KORIGEN provided the theoretical support. In the second method, the reactivity difference technique was used: the reactivity worths of fresh and burned fuel elements were estimated at the same location in the same core configuration. For the theoretical estimated curve, group cross-sections were generated using the computer code WIMS-D/4, and three-dimensional modeling was performed with the computer code CITATION. The burnup values of different fuel elements measured using these methods were found to be in good agreement

  10. Partial safety factor calibration from stochastic finite element computation of welded joint with random geometries

    International Nuclear Information System (INIS)

    Schoefs, Franck; Chevreuil, Mathilde; Pasqualini, Olivier; Cazuguel, Mikaël

    2016-01-01

    Welded joints are used in various structures and infrastructures like bridges, ships and offshore structures, and are subjected to cyclic stresses. Their fatigue behaviour is a key industrial issue and still offers original research subjects. One of the available methods relies on computing the stress concentration factor. Even if some studies were previously carried out to evaluate this factor for some cases of welded structures, the shape of the weld joint is generally idealized through a deterministic parametric geometry. Previous experimental works have shown, however, that this shape plays a key role in the lifetime assessment. We propose in this paper a methodology for computing the stress concentration factor in the presence of random geometries of welded joints. To make the results usable by engineers, this method merges stochastic computation and semi-probabilistic analysis by computing partial safety factors with a dedicated method. - Highlights: • Numerical computation of the stress concentration factor with random weld geometry. • Real data are used for probabilistic modelling. • Identification of partial safety factors from SFEM computation in the case of random geometries.
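
The partial-safety-factor idea, propagating random weld-toe geometry through a stress-concentration model and comparing a high fractile to the nominal value, can be sketched with plain Monte Carlo sampling. The SCF formula, the geometry distributions, and the 95th-percentile definition below are all illustrative assumptions standing in for the stochastic finite element computation:

```python
import math
import random

def scf(toe_angle_deg, toe_radius_mm, thickness_mm=20.0):
    """Hypothetical stress concentration factor formula for a weld toe;
    a stand-in for the stochastic finite element solve in the paper."""
    theta = math.radians(toe_angle_deg)
    return 1.0 + 1.5 * math.tan(theta) ** 0.5 * (thickness_mm / toe_radius_mm) ** 0.2

def partial_safety_factor(n=20000, seed=1):
    """Monte Carlo estimate: 95th-percentile SCF over the random weld
    geometry divided by the SCF of the nominal (mean) geometry."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        angle = max(5.0, rng.gauss(30.0, 5.0))   # toe angle (deg), truncated
        radius = max(0.2, rng.gauss(1.0, 0.3))   # toe radius (mm), truncated
        samples.append(scf(angle, radius))
    samples.sort()
    return samples[int(0.95 * n)] / scf(30.0, 1.0)

gamma = partial_safety_factor()
```

The resulting factor exceeds one, quantifying how much the idealized deterministic geometry underestimates the stress concentration for unfavourable weld shapes.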

  11. LONG-TERM CALIBRATION STABILITY OF A RADIO ASTRONOMICAL PHASED ARRAY FEED

    Energy Technology Data Exchange (ETDEWEB)

    Elmer, Michael; Jeffs, Brian D.; Warnick, Karl F. [Department of Electrical and Computer Engineering, Brigham Young University, Provo, UT 84602 (United States)

    2013-01-01

    There are many challenges associated with the implementation of a phased array feed for radio astronomy applications. Among these is the need to have an adequate set of calibration measurements so that reliable beamformers can be computed. Changes in the operating environment and temporal gain drift in the electronics contribute to calibration drift, which affects the beamformer performance. We will show that calibration measurements are relatively stable over a 5 day period and may remain so for up to 70 days or longer. We have incorporated the use of a calibration update system that has the potential to refresh a set of old calibrators, specifically correcting for electronic gain drift. However, the long-term variations that are present with fresh, current calibrators are greater than the degradation due to using an old calibration set, suggesting that, at this time, there is not a need for sophisticated calibration update systems or algorithms.

  12. Experimental and computational laser tissue welding using a protein patch.

    Science.gov (United States)

    Small, W; Heredia, N J; Maitland, D J; Eder, D C; Celliers, P M; Da Silva, L B; London, R A; Matthews, D L

    1998-01-01

    An in vitro study of laser tissue welding mediated with a dye-enhanced protein patch was conducted. Fresh sections of porcine aorta were used for the experiments. Arteriotomies were treated using an indocyanine green dye-enhanced collagen patch activated by an 805-nm continuous-wave fiber-delivered diode laser. Temperature histories of the surface of the weld site were obtained using a hollow glass optical fiber-based two-color infrared thermometer. The experimental effort was complemented by simulations with the LATIS (LAser-TISsue) computer code, which uses coupled Monte Carlo, thermal transport, and mass transport models. Comparison of simulated and experimental thermal data indicated that evaporative cooling clamped the surface temperature of the weld site below 100 °C. For fluences of approximately 200 J/cm², peak surface temperatures averaged 74 °C and acute burst strengths consistently exceeded 0.14×10⁶ dyn/cm (hoop tension). The combination of experimental and simulation results showed that the inclusion of water transport and evaporative losses in the computer code has a significant impact on the thermal distributions and hydration levels throughout the tissue volume. © 1998 Society of Photo-Optical Instrumentation Engineers.
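
The two-color infrared thermometer mentioned above infers temperature from the ratio of radiances at two wavelengths. Under the Wien approximation that ratio can be inverted in closed form; the wavelengths below are illustrative choices, not the instrument's actual bands:

```python
import math

C2 = 1.4388e-2  # second radiation constant (m K)

def wien_radiance(lam, temp):
    """Wien-approximation spectral radiance (arbitrary overall scale)."""
    return lam ** -5 * math.exp(-C2 / (lam * temp))

def temperature_from_ratio(ratio, lam1, lam2):
    """Invert the two-color ratio R = L(lam1,T) / L(lam2,T) for T."""
    return -C2 * (1.0 / lam1 - 1.0 / lam2) / math.log(ratio * (lam1 / lam2) ** 5)

lam1, lam2 = 2.0e-6, 2.4e-6   # illustrative wavelengths (m)
t_true = 347.0                # ~74 °C weld-surface peak, in kelvin
r = wien_radiance(lam1, t_true) / wien_radiance(lam2, t_true)
t_est = temperature_from_ratio(r, lam1, lam2)
```

Because the ratio cancels the unknown overall emissivity (assuming it is the same at both wavelengths), the method suits surfaces whose emissivity is poorly known, such as a heated protein patch.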

  13. Application of Voxel Phantoms to Study the Influence of Heterogeneous Distribution of Actinides in Lungs on In Vivo Counting Calibration Factors Using Animal Experimentations

    Energy Technology Data Exchange (ETDEWEB)

    Lamart, S.; Pierrat, N.; De Carlan, L.; Franck, D. [IRSN/DRPH/SDI/LEDI, BP 17, F-92 262 Fontenay-aux-Roses (France); Dudoignon, N. [IRSN/DRPH/SRBE/LRPAT, BP 17, F-92 262 Fontenay-aux-Roses (France); Rateau, S.; Van der Meeren, A.; Rouit, E. [CEA/DSV/DRR/SRCA/LRT BP no 12, F-91680 Bruyeres-le-Chatel (France); Bottlaender, M. [CEA/SHFJ, 4, place du General Leclerc F-91400 Orsay (France)

    2006-07-01

Calibration of lung counting systems dedicated to the assessment of actinide retention in the lungs remains critical due to large uncertainties in calibration factors. Among them, the detector positioning, the assessment of chest wall thickness and composition (muscle/fat), and the distribution of the contamination are the main parameters influencing the detector response. In order to reduce these uncertainties, a numerical approach based on the application of voxel phantoms (numerical phantoms based on tomographic images, CT or MRI) associated with a Monte Carlo code (namely MCNP) was developed. It led to a dedicated tool, called OEDIPE, that makes it easy to handle realistic voxel phantoms for the simulation of in vivo measurements (or dose calculation, an application that will not be presented in this paper). The goal of this paper is to present our study of the influence of the lung distribution on calibration factors using both animal experiments and our numerical method. Indeed, physical anthropomorphic phantoms used for calibration always assume a uniform distribution of the source in the lungs, which is not true under many contamination conditions. The purpose of the study is to compare the response of the measurement detectors for a real distribution of actinide particles in the lungs, obtained from animal experiments, with the homogeneous one considered as the reference. This comparison was performed using OEDIPE, which can simulate almost any source distribution. A non-human primate was contaminated heterogeneously by intra-tracheal administration of actinide oxide. After euthanasia, gamma spectrometry measurements were performed on the pulmonary lobes to obtain the distribution of the contamination in the lungs. 
This realistic distribution was used to simulate a heterogeneous contamination in the numerical phantom of the non-human primate, which was compared with a simulation of a homogeneous contamination presenting the

  14. Application of Voxel Phantoms to Study the Influence of Heterogeneous Distribution of Actinides in Lungs on In Vivo Counting Calibration Factors Using Animal Experimentations

    International Nuclear Information System (INIS)

    Lamart, S.; Pierrat, N.; De Carlan, L.; Franck, D.; Dudoignon, N.; Rateau, S.; Van der Meeren, A.; Rouit, E.; Bottlaender, M.

    2006-01-01

Calibration of lung counting systems dedicated to the assessment of actinide retention in the lungs remains critical due to large uncertainties in calibration factors. Among them, the detector positioning, the assessment of chest wall thickness and composition (muscle/fat), and the distribution of the contamination are the main parameters influencing the detector response. In order to reduce these uncertainties, a numerical approach based on the application of voxel phantoms (numerical phantoms based on tomographic images, CT or MRI) associated with a Monte Carlo code (namely MCNP) was developed. It led to a dedicated tool, called OEDIPE, that makes it easy to handle realistic voxel phantoms for the simulation of in vivo measurements (or dose calculation, an application that will not be presented in this paper). The goal of this paper is to present our study of the influence of the lung distribution on calibration factors using both animal experiments and our numerical method. Indeed, physical anthropomorphic phantoms used for calibration always assume a uniform distribution of the source in the lungs, which is not true under many contamination conditions. The purpose of the study is to compare the response of the measurement detectors for a real distribution of actinide particles in the lungs, obtained from animal experiments, with the homogeneous one considered as the reference. This comparison was performed using OEDIPE, which can simulate almost any source distribution. A non-human primate was contaminated heterogeneously by intra-tracheal administration of actinide oxide. After euthanasia, gamma spectrometry measurements were performed on the pulmonary lobes to obtain the distribution of the contamination in the lungs. 
This realistic distribution was used to simulate a heterogeneous contamination in the numerical phantom of the non-human primate, which was compared with a simulation of a homogeneous contamination presenting the

  15. One step geometrical calibration method for optical coherence tomography

    International Nuclear Information System (INIS)

    Díaz, Jesús Díaz; Ortmaier, Tobias; Stritzel, Jenny; Rahlves, Maik; Reithmeier, Eduard; Roth, Bernhard; Majdani, Omid

    2016-01-01

We present a novel one-step calibration methodology for geometrical distortion correction in optical coherence tomography (OCT). A calibration standard especially designed for OCT is introduced, consisting of an array of inverse pyramidal structures. The use of multiple landmarks situated on four different height levels of the pyramids allows a 3D geometrical calibration to be performed. The calibration procedure itself is based on a parametric model of the OCT beam propagation. It is validated by experimental results and enables the reduction of systematic errors by more than one order of magnitude. In the future, our results can improve OCT image reconstruction and interpretation for medical applications such as real-time monitoring of surgery. (paper)

  16. Prototype of an auto-calibrating, context-aware, hybrid brain-computer interface.

    Science.gov (United States)

    Faller, J; Torrellas, S; Miralles, F; Holzner, C; Kapeller, C; Guger, C; Bund, J; Müller-Putz, G R; Scherer, R

    2012-01-01

We present the prototype of a context-aware framework that allows users to control smart home devices and to access internet services via a hybrid BCI system combining an auto-calibrating sensorimotor rhythm (SMR) based BCI and another assistive device (Integra Mouse mouth joystick). While there is extensive literature that describes the merit of hybrid BCIs, auto-calibrating and co-adaptive ERD BCI training paradigms, specialized BCI user interfaces, context-awareness, and smart home control, there is, up to now, no system that includes all these concepts in one integrated, easy-to-use framework that can truly benefit individuals with severe functional disabilities by increasing independence and social inclusion. Here we integrate all these technologies in a prototype framework that does not require expert knowledge or excess time for calibration. In a first pilot study, three healthy volunteers successfully operated the system using input signals from an ERD BCI and an Integra Mouse and reached average positive predictive values (PPV) of 72% and 98%, respectively. Based on what we learned here, we are planning to improve the system for a test with a larger number of healthy volunteers so we can soon bring the system to benefit individuals with severe functional disability.
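The positive predictive value quoted in this abstract is simply the fraction of issued commands that were correct selections; a minimal sketch (function name and counts are our own illustration):

```python
def ppv(true_positives, false_positives):
    """Positive predictive value: fraction of issued commands that were correct."""
    return true_positives / (true_positives + false_positives)

# e.g. 72 correct selections out of 100 issued commands -> PPV of 0.72
rate_bci = ppv(72, 28)
rate_mouse = ppv(98, 2)
```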

  17. Research on self-calibration biaxial autocollimator based on ZYNQ

    Science.gov (United States)

    Guo, Pan; Liu, Bingguo; Liu, Guodong; Zhong, Yao; Lu, Binghui

    2018-01-01

Existing autocollimators are mainly based on computers or electronic devices that can be connected to the internet; their precision, measurement range, and resolution are limited, and external displays are needed to show images in real time. Moreover, there is no autocollimator on the market with real-time calibration. In this paper, we propose a biaxial autocollimator based on the ZYNQ embedded platform to solve the above problems. Firstly, the traditional optical system is improved and a light path is added for real-time calibration. Then, in order to improve measurement speed, an embedded platform based on ZYNQ that combines a Linux operating system with the autocollimator is designed. In this part, image acquisition, image processing, image display, and a man-machine interaction interface based on Qt are achieved. Finally, the system realizes two-dimensional small-angle measurement. Experimental results showed that the proposed method can improve the angle measurement accuracy. The standard deviation at close distance (1.5 m) is 0.15" in the horizontal direction of the image and 0.24" in the vertical direction; the repeatability of measurement at long distance (10 m) is improved by 0.12" in the horizontal direction and 0.3" in the vertical direction.

  18. Redundant interferometric calibration as a complex optimization problem

    Science.gov (United States)

    Grobler, T. L.; Bernardi, G.; Kenyon, J. S.; Parsons, A. R.; Smirnov, O. M.

    2018-05-01

    Observations of the redshifted 21 cm line from the epoch of reionization have recently motivated the construction of low-frequency radio arrays with highly redundant configurations. These configurations provide an alternative calibration strategy - `redundant calibration' - and boost sensitivity on specific spatial scales. In this paper, we formulate calibration of redundant interferometric arrays as a complex optimization problem. We solve this optimization problem via the Levenberg-Marquardt algorithm. This calibration approach is more robust to initial conditions than current algorithms and, by leveraging an approximate matrix inversion, allows for further optimization and an efficient implementation (`redundant STEFCAL'). We also investigated using the preconditioned conjugate gradient method as an alternative to the approximate matrix inverse, but found that its computational performance is not competitive with respect to `redundant STEFCAL'. The efficient implementation of this new algorithm is made publicly available.
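The Levenberg-Marquardt step at the heart of this approach can be illustrated on a toy gain-calibration problem, where per-element gains are recovered from pairwise products. This is a generic textbook sketch (the damping schedule, numerical Jacobian, and three-element array are our own invention), not the `redundant STEFCAL` implementation:

```python
import numpy as np

def levenberg_marquardt(residual, p0, n_iter=50, lam=1e-3):
    """Minimal Levenberg-Marquardt: minimise ||residual(p)||^2."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = residual(p)
        # Forward-difference numerical Jacobian.
        eps = 1e-7
        J = np.column_stack([
            (residual(p + eps * np.eye(len(p))[i]) - r) / eps
            for i in range(len(p))
        ])
        # Damped normal equations: (J^T J + lam I) delta = -J^T r.
        A = J.T @ J + lam * np.eye(len(p))
        delta = np.linalg.solve(A, -J.T @ r)
        p_new = p + delta
        if np.sum(residual(p_new) ** 2) < np.sum(r ** 2):
            p, lam = p_new, lam * 0.5   # accept step, relax damping
        else:
            lam *= 2.0                  # reject step, increase damping
    return p

# Toy "gain calibration": recover per-element gains g from products g_i * g_j
# (a heavily simplified, real-valued analogue of antenna gain solving).
true_g = np.array([1.0, 0.8, 1.2])
pairs = [(0, 1), (0, 2), (1, 2)]
data = np.array([true_g[i] * true_g[j] for i, j in pairs])

def residual(g):
    return np.array([g[i] * g[j] for i, j in pairs]) - data

g_est = levenberg_marquardt(residual, np.array([1.0, 1.0, 1.0]))
```

Starting from all-ones gains, the damped iteration converges to the positive solution; the overall sign ambiguity of the products is resolved by the initial guess.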

  19. The quark gluon plasma: Lattice computations put to experimental test

    Indian Academy of Sciences (India)

    I describe how lattice computations are being used to extract experimentally relevant features of the quark gluon plasma. I deal specifically with relaxation times, photon emissivity, strangeness yields, event-by-event fluctuations of conserved quantities and hydrodynamic flow. Finally I give evidence that the plasma is rather ...

  20. Calibration of pressure gauge for Cherenkov detector

    CERN Document Server

    Saponjic, Nevena

    2013-01-01

Solartron/Hamilton pressure gauges are used to monitor the gas pressure in the particle beam detectors installed in the experimental areas. This note describes the test bench for the calibration of these gauges in LabVIEW.

  1. Qualification of an out-of-pile Thermohydraulic test Bench (BETHY) developed to calibrate calorimetric cells under specific JHR experimental conditions

    International Nuclear Information System (INIS)

    De Vita, C.; Brun, J.; Carette, M.; Reynard-Carette, C.; Lyoussi, A.; Fourmentel, D.; Villard, J.F.; Guimbal, P.; Malo, J.Y.

    2013-06-01

Online in-pile measurement methods are crucial during irradiations in material testing reactors to better understand the behavior of materials under accelerated ageing conditions and of nuclear fuels under high irradiation levels. Thus, the construction of a new material testing reactor such as the Jules Horowitz Reactor (JHR) leads to new research and development programs devoted to innovative instrumentation and measurement methods. The work presented here is performed in the framework of the IN-CORE program, 'Instrumentation for Nuclear radiations and Calorimetry Online in Reactor', between CEA and Aix-Marseille University. The program's aim is to develop experimental devices and test benches to quantify more precisely the nuclear heating parameter in the JHR experimental channels. This in-pile parameter is usually measured by means of a calorimeter or a gamma thermometer. This paper focuses on a new out-of-pile test bench called BETHY. This bench was developed to study the response of a differential calorimeter during its preliminary calibration step under the specific thermal and hydraulic conditions occurring inside one type of JHR core channel. The first section of this paper is dedicated to a detailed description of the bench. The second part presents the study of the thermal characteristics established in the bench for its two main thermal running modes. The last part concerns the calibration curve of the reference cell of the differential calorimeter in the case of homogeneous temperature. (authors)

  2. Precise calibration of few-cycle laser pulses with atomic hydrogen

    Science.gov (United States)

    Wallace, W. C.; Kielpinski, D.; Litvinyuk, I. V.; Sang, R. T.

    2017-12-01

Interaction of atoms and molecules with strong electric fields is a fundamental process in many fields of research, particularly in the emerging field of attosecond science. Therefore, understanding the physics underpinning those interactions is of significant interest to the scientific community. One crucial step in this understanding is accurate knowledge of the few-cycle laser field driving the process. Atomic hydrogen (H), the simplest of all atomic species, plays a key role in benchmarking strong-field processes. Its widespread use as a testbed for theoretical calculations allows the comparison of approximate theoretical models against nearly perfect numerical solutions of the three-dimensional time-dependent Schrödinger equation. Until recently, relatively little experimental data in atomic H was available for comparison with these models, mostly due to the difficulty in the construction and use of atomic H sources. Here, we review our most recent experimental results on the interaction of atomic H with few-cycle laser pulses and how they have been used to calibrate important laser pulse parameters such as peak intensity and the carrier-envelope phase (CEP). Quantitative agreement between experimental data and theoretical predictions for atomic H has been obtained at the 10% uncertainty level, allowing for accurate laser intensity calibration at the 1% level. Using this calibration in atomic H, both accurate CEP data and an intensity calibration standard have been obtained in Ar, Kr, and Xe; such gases are in common use for strong-field experiments. This calibration standard can be used by any laboratory using few-cycle pulses in the 10^14 W cm^-2 intensity regime centered at 800 nm wavelength to accurately calibrate their peak laser intensity to within few-percent precision.

  3. Two calibration procedures for a gyroscope-free inertial measurement system based on a double-pendulum apparatus

    Science.gov (United States)

    Cappa, P.; Patanè, F.; Rossi, S.

    2008-05-01

This paper presents a novel calibration algorithm to be used with a gyro-free inertial measurement unit (GF-IMU) based on the use of linear accelerometers (AC). The analytical approach can be implemented in two calibration procedures. The first procedure (P-I) consists of a static trial, to compute the sensitivity and the direction of the sensing axis of each AC, followed by a dynamic trial, to determine the AC locations. By contrast, the second procedure (P-II) consists of calculating the same calibration parameters by means of a dynamic trial only. The feasibility of the two calibration procedures has been investigated by testing two GF-IMUs, equipped with ten and six bi-axial linear ACs, with an ad hoc instrumented double-pendulum apparatus. P-I and P-II were compared to a calibration procedure used as a reference (P-REF), which incorporates the AC positions measured with an optoelectronic system. The experimental results we present in this paper demonstrate that (i) P-I is able to determine the calibration parameters of the AC array with a higher accuracy than P-II; (ii) consequently, the errors associated with the translational (a0 - g) and rotational (ω̇) acceleration components for the two GF-IMUs are significantly greater using P-II than P-I; and (iii) the errors in (a0 - g) and ω̇ obtained with P-I are comparable with the ones obtainable by using P-REF. Thus, the proposed novel algorithm used in P-I, in conjunction with the double-pendulum apparatus, can be globally considered a viable tool in GF-IMU calibration.
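The static-trial idea in P-I (estimating an accelerometer's scale factor from known orientations in the gravity field) can be illustrated for a single axis. The linear model v = k·a + b, the least-squares fit, and all numbers below are our own simplification, not the paper's full multi-axis algorithm:

```python
import math

def fit_accelerometer(readings, angles, g=9.81):
    """Least-squares fit of v = k * (g * cos(theta)) + b from static poses,
    where theta is the known tilt of the sensing axis from vertical."""
    a = [g * math.cos(t) for t in angles]
    n = len(a)
    mean_a = sum(a) / n
    mean_v = sum(readings) / n
    k = sum((ai - mean_a) * (vi - mean_v) for ai, vi in zip(a, readings)) \
        / sum((ai - mean_a) ** 2 for ai in a)
    b = mean_v - k * mean_a
    return k, b

# Synthetic static trial: true sensitivity 0.102 V/(m/s^2), offset 2.5 V.
angles = [0.0, math.pi / 6, math.pi / 3, math.pi / 2, math.pi]
true_k, true_b = 0.102, 2.5
readings = [true_k * 9.81 * math.cos(t) + true_b for t in angles]
k, b = fit_accelerometer(readings, angles)
```

With noise-free synthetic readings the regression recovers the sensitivity and offset exactly; in practice each static pose would be averaged over many samples.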

  4. A laser sheet self-calibration method for scanning PIV

    Science.gov (United States)

    Knutsen, Anna N.; Lawson, John M.; Dawson, James R.; Worth, Nicholas A.

    2017-10-01

Knowledge of laser sheet position, orientation, and thickness is a fundamental requirement of scanning PIV and other laser-scanning methods. This paper describes the development and evaluation of a new laser sheet self-calibration method for stereoscopic scanning PIV, which allows the measurement of these properties from the particle images themselves. The approach is to fit a laser sheet model by treating particles as randomly distributed probes of the laser sheet profile, whose positions are obtained via a triangulation procedure enhanced by matching particle images according to their variation in brightness over a scan. Numerical simulations and tests with experimental data were used to quantify the sensitivity of the method to typical experimental error sources and to validate its performance in practice. The numerical simulations demonstrate the accurate recovery of the laser sheet parameters over a range of different seeding densities and sheet thicknesses. Furthermore, they show that the method is robust to significant image noise and camera misalignment. Tests with experimental data confirm that the laser sheet model can be accurately reconstructed with no impairment to PIV measurement accuracy. The new method is more efficient and robust than the standard (self-)calibration approach, which requires an involved, separate calibration step that is sensitive to experimental misalignments. The method significantly improves the practicality of making accurate scanning PIV measurements and broadens its potential applicability to scanning systems with significant vibrations.

  5. Inherent calibration of microdosemeters for dose distributions in lineal energy

    Energy Technology Data Exchange (ETDEWEB)

    Crossman, J.S.P.; Watt, D.E. [Saint Andrews Univ. (United Kingdom). Dept. of Physics and Astronomy

    1994-12-31

A method, utilising the inherent electron event spectra, is described for the absolute calibration of microdosemeters in the presence of a photon field. The method, which avoids the problems and uncertainties present in conventional calibration techniques, involves simple extrapolation of the dose distribution in lineal energy associated with 'exact stopper' electrons. Validation of the method is made using the published experimental distributions of Rossi, of Kliauga, and of Dvorak and by direct theoretical calculation of the components of the microdose distributions for gamma rays. Further experimental data from a cylindrical TEPC in a photon field generated by an external source of 137Cs are obtained for comparison. A 'universal' calibration curve for the dose-weighted lineal energy, as a function of the simulated mean diameter of the microdosemeter, is presented for use in practical applications. (author).

  6. Inherent calibration of microdosemeters for dose distributions in lineal energy

    International Nuclear Information System (INIS)

    Crossman, J.S.P.; Watt, D.E.

    1994-01-01

A method, utilising the inherent electron event spectra, is described for the absolute calibration of microdosemeters in the presence of a photon field. The method, which avoids the problems and uncertainties present in conventional calibration techniques, involves simple extrapolation of the dose distribution in lineal energy associated with 'exact stopper' electrons. Validation of the method is made using the published experimental distributions of Rossi, of Kliauga, and of Dvorak and by direct theoretical calculation of the components of the microdose distributions for gamma rays. Further experimental data from a cylindrical TEPC in a photon field generated by an external source of 137Cs are obtained for comparison. A 'universal' calibration curve for the dose-weighted lineal energy, as a function of the simulated mean diameter of the microdosemeter, is presented for use in practical applications. (author)

  7. Automated intraoperative calibration for prostate cancer brachytherapy

    International Nuclear Information System (INIS)

    Kuiran Chen, Thomas; Heffter, Tamas; Lasso, Andras; Pinter, Csaba; Abolmaesumi, Purang; Burdette, E. Clif; Fichtinger, Gabor

    2011-01-01

Purpose: Prostate cancer brachytherapy relies on an accurate spatial registration between the implant needles and the TRUS image, called "calibration". The authors propose a new device and a fast, automatic method to calibrate the brachytherapy system in the operating room, with instant error feedback. Methods: A device was CAD-designed and precision-engineered, which mechanically couples a calibration phantom with an exact replica of the standard brachytherapy template. From real-time TRUS images acquired from the calibration device and processed by the calibration system, the coordinate transformation between the brachytherapy template and the TRUS images was computed automatically. The system instantly generated a report of the target reconstruction accuracy based on the current calibration outcome. Results: Four types of validation tests were conducted. First, 50 independent, real-time calibration trials yielded an average of 0.57 ± 0.13 mm line reconstruction error (LRE) relative to ground truth. Second, the averaged LRE was 0.37 ± 0.25 mm relative to ground truth in tests with six different commercial TRUS scanners operating at similar imaging settings. Furthermore, testing with five different commercial stepper systems yielded an average of 0.29 ± 0.16 mm LRE relative to ground truth. Finally, the system achieved an average of 0.56 ± 0.27 mm target registration error (TRE) relative to ground truth in needle insertion tests through the template in a water tank. Conclusions: The proposed automatic, intraoperative calibration system for prostate cancer brachytherapy has achieved high accuracy, precision, and robustness.

  8. Validation of thermohydraulic codes by comparison of experimental results with computer simulations

    International Nuclear Information System (INIS)

    Madeira, A.A.; Galetti, M.R.S.; Pontedeiro, A.C.

    1989-01-01

The results obtained by simulating three cases from the CANON depressurization experiment, using the TRAC-PF1 computer code, version 7.6, implemented on the VAX-11/750 computer of the Brazilian CNEN, are presented. The CANON experiment was chosen as the first standard thermo-hydraulics problem to be discussed at ENFIR for comparing results from different computer codes with results obtained experimentally. The ability of the TRAC-PF1 code to predict the depressurization phase of a loss-of-primary-coolant accident in pressurized water reactors is evaluated. (M.C.K.)

  9. Calibration of new I and C at VR-1 training reactor

    International Nuclear Information System (INIS)

    Kropik, Martin; Jurickova, Monika

    2011-01-01

The paper describes the calibration of the new instrumentation and control (I and C) system at the VR-1 training reactor in Prague. The I and C uses uncompensated fission chambers for the power measurement; these operate in a pulse, a DC current, or a Campbell regime, according to the reactor power. The pulse regime uses discrimination to avoid the influence of gamma rays and noise on the measurement. The DC current regime employs a logarithmic amplifier to cover the whole DC current power range of the reactor with only one electronic circuit; the system computer calculates the real power from the logarithmic data. The Campbell regime is based on evaluation of the root mean square (RMS) value of the neutron noise, and the calculated power in the Campbell range is based on the square of the RMS neutron noise data. All data for the power calculation are stored in computer flash memories. To store the proper data there, it was necessary to carry out a calibration of the I and C. First, the proper discrimination value was found by examining the spectrum of the neutron signal from the chamber. The constants for the DC current and Campbell calculations were determined from an independent reactor power measurement. The independent power measuring system used for the calibration was based on a compensated current chamber with an electrometer. The calculated calibration constants were stored in the computer flash memories, and the calibrated system was again successfully compared with the independent power measuring system. Finally, the proper gamma discrimination of the Campbell system was carefully checked.
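The Campbell-mode computation described here (power proportional to the squared RMS of the neutron noise, with the constant fixed by a one-point calibration against an independent channel) can be sketched as follows; all numbers are invented for illustration:

```python
import random

def variance(samples):
    """Mean-square of the AC component (i.e. RMS^2 about the mean)."""
    n = len(samples)
    mean = sum(samples) / n
    return sum((s - mean) ** 2 for s in samples) / n

def campbell_power(samples, k):
    """Campbell-mode power estimate: k times the squared RMS of the noise."""
    return k * variance(samples)

# One-point calibration against an independent reference measurement,
# analogous to the compensated chamber with electrometer in the paper.
random.seed(1)
ref_signal = [random.gauss(5.0, 1.0) for _ in range(20000)]  # signal at known power
P_ref = 100.0                                                # reference power reading
k = P_ref / variance(ref_signal)

# A signal with twice the noise variance should read about twice the power.
test_signal = [random.gauss(5.0, 2.0 ** 0.5) for _ in range(20000)]
P_est = campbell_power(test_signal, k)
```

Because the Campbell theorem makes the noise variance proportional to the event rate, the mean level of the signal drops out and only the fluctuations carry the power information.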

  10. Modal and Wave Load Identification by ARMA Calibration

    DEFF Research Database (Denmark)

    Jensen, Jens Kristian Jehrbo; Kirkegaard, Poul Henning; Brincker, Rune

    1992-01-01

    In this note, modal parameter and wave load identification by calibration of ARMA models are considered for a simple offshore structure. The theory of identification by ARMA calibration is introduced as an identification technique in the time domain, which can be applied for white noise–excited s......In this note, modal parameter and wave load identification by calibration of ARMA models are considered for a simple offshore structure. The theory of identification by ARMA calibration is introduced as an identification technique in the time domain, which can be applied for white noise...... by an experimental example of a monopile model excited by random waves. The identification results show that the approach is able to give very reliable estimates of the modal parameters. Furthermore, a comparison of the identified wave load process and the calculated load process based on the Morison equation shows...

  11. Low-Cost Photometric Calibration for Interactive Relighting

    OpenAIRE

    Loscos , Céline; Drettakis , George

    2000-01-01

Computer augmented reality is a rapidly emerging field allowing users to mix virtual and real worlds. Our interest is to allow relighting and remodelling of real scenes, using a reflectance estimation method. Most previous work focused on the quality of the results without considering the expense in computation and the price of acquisition equipment. In this paper, we present a low-cost photometric calibration method which improves the reflectance estimate of real scen...

  12. Calibration

    International Nuclear Information System (INIS)

    Greacen, E.L.; Correll, R.L.; Cunningham, R.B.; Johns, G.G.; Nicolls, K.D.

    1981-01-01

Procedures common to different methods of calibration of neutron moisture meters are outlined and laboratory and field calibration methods compared. Gross errors which arise from faulty calibration techniques are described. The count rate can be affected by the dry bulk density of the soil, the volumetric content of constitutional hydrogen and other chemical components of the soil and soil solution. Calibration is further complicated by the fact that the neutron meter responds more strongly to the soil properties close to the detector and source. The differences in slope of calibration curves for different soils can be as much as 40%.

  13. Grid based calibration of SWAT hydrological models

    Directory of Open Access Journals (Sweden)

    D. Gorgan

    2012-07-01

The calibration and execution of large hydrological models such as SWAT (Soil and Water Assessment Tool), developed for large areas, high resolution, and huge input data, require not only quite a long execution time but also high computation resources. The SWAT hydrological model supports studies and predictions of the impact of land management practices on water, sediment, and agricultural chemical yields in complex watersheds. The paper presents the gSWAT application as a practical web solution for environmental specialists to calibrate extensive hydrological models and to run scenarios, by hiding the complex control of processes and heterogeneous resources across the grid-based high-computation infrastructure. The paper highlights the basic functionalities of the gSWAT platform and the features of the graphical user interface. The presentation is concerned with the development of working sessions, interactive control of calibration, direct and basic editing of parameters, process monitoring, and graphical and interactive visualization of the results. The experiments performed on different SWAT models and the obtained results demonstrate the benefits brought by the grid parallel and distributed environment as a processing platform. All the instances of SWAT models used in the reported experiments have been developed through the enviroGRIDS project, targeting the Black Sea catchment area.

  14. Model Calibration in Watershed Hydrology

    Science.gov (United States)

    Yilmaz, Koray K.; Vrugt, Jasper A.; Gupta, Hoshin V.; Sorooshian, Soroosh

    2009-01-01

    Hydrologic models use relatively simple mathematical equations to conceptualize and aggregate the complex, spatially distributed, and highly interrelated water, energy, and vegetation processes in a watershed. A consequence of process aggregation is that the model parameters often do not represent directly measurable entities and must, therefore, be estimated using measurements of the system inputs and outputs. During this process, known as model calibration, the parameters are adjusted so that the behavior of the model approximates, as closely and consistently as possible, the observed response of the hydrologic system over some historical period of time. This Chapter reviews the current state-of-the-art of model calibration in watershed hydrology with special emphasis on our own contributions in the last few decades. We discuss the historical background that has led to current perspectives, and review different approaches for manual and automatic single- and multi-objective parameter estimation. In particular, we highlight the recent developments in the calibration of distributed hydrologic models using parameter dimensionality reduction sampling, parameter regularization and parallel computing.
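The calibration loop this chapter reviews (adjust parameters until the model's behavior matches the observed response over a historical period) can be sketched with a one-parameter toy model; the linear reservoir and the brute-force grid search below are our own stand-ins for the far richer models and optimizers discussed in the chapter:

```python
def linear_reservoir(rain, k):
    """One-parameter linear reservoir: storage S, discharge q = k * S."""
    s, q = 0.0, []
    for r in rain:
        s += r          # rainfall input for this time step
        out = k * s     # discharge proportional to storage
        s -= out
        q.append(out)
    return q

def calibrate(rain, observed, candidates):
    """Pick the recession constant k minimising the sum of squared errors
    between simulated and observed discharge."""
    def sse(k):
        sim = linear_reservoir(rain, k)
        return sum((qs - qo) ** 2 for qs, qo in zip(sim, observed))
    return min(candidates, key=sse)

rain = [5.0, 0.0, 10.0, 0.0, 0.0, 3.0]
observed = linear_reservoir(rain, 0.3)   # synthetic "observations" with k = 0.3
k_best = calibrate(rain, observed, [i / 100 for i in range(1, 100)])
```

Real calibration replaces the grid search with single- or multi-objective optimizers and penalizes the mismatch over years of data, but the structure of the loop is the same.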

  15. The Importance of Calibration on Digital Radioscopy System

    International Nuclear Information System (INIS)

    Noorhazleena Azaman; Abdul Razak Hamzah; Sapizah Rahim

    2011-01-01

Radiographic testing is a non-destructive testing method used to detect discontinuities in a material, with X-rays and gamma rays as radiation sources. The mobile digital radioscopy system (MDRS) acts as a detector, capturing and converting the image in real time, and the image produced can be viewed directly on the computer connected to the MDRS. Calibration is one of the factors that must be considered in order to obtain images of high resolution and good contrast. The advantage of calibration in the MDRS is to reduce and avoid noise. Calibration does not remain stable, and the imaging system must be calibrated periodically. The system is also an alternative to radiographic film, with considerable savings in running cost and processing time. (author)

  16. Calibration and comparison of the NASA Lewis free-piston Stirling engine model predictions with RE-1000 test data

    Science.gov (United States)

    Geng, Steven M.

    1987-01-01

A free-piston Stirling engine performance code is being upgraded and validated at the NASA Lewis Research Center under an interagency agreement between the Department of Energy's Oak Ridge National Laboratory and NASA Lewis. Many modifications were made to the free-piston code in an attempt to decrease the calibration effort. A procedure was developed that made the code calibration process more systematic. Engine-specific calibration parameters are often used to bring predictions and experimental data into better agreement. The code was calibrated to a matrix of six experimental data points. Predictions of the calibrated free-piston code are compared with RE-1000 free-piston Stirling engine sensitivity test data taken at NASA Lewis. Reasonable agreement was obtained between the code predictions and the experimental data over a wide range of engine operating conditions.

  17. Medical color displays and their color calibration: investigations of various calibration methods, tools, and potential improvement in color difference ΔE

    Science.gov (United States)

    Roehrig, Hans; Hashmi, Syed F.; Dallas, William J.; Krupinski, Elizabeth A.; Rehm, Kelly; Fan, Jiahua

    2010-08-01

Our laboratory has investigated the efficacy of a suite of color calibration and monitor profiling packages which employ a variety of color measurement sensors. Each of the methods computes gamma correction tables for the red, green and blue color channels of a monitor that attempt to: a) match a desired luminance range and tone reproduction curve; and b) maintain a target neutral point across the range of grey values. All of the methods examined here produce International Color Consortium (ICC) profiles that describe the color rendering capabilities of the monitor after calibration. Color profiles incorporate a transfer matrix that establishes the relationship between RGB driving levels and the International Commission on Illumination (CIE) XYZ (tristimulus) values of the resulting on-screen color; the matrix is developed by displaying color patches of known RGB values on the monitor and measuring the tristimulus values with a sensor. The number and chromatic distribution of color patches varies across methods and is usually not under user control. In this work we examine the effect of employing differing calibration and profiling methods on rendition of color images. A series of color patches encoded in sRGB color space were presented on the monitor using color-management software that utilized the ICC profile produced by each method. The patches were displayed on the calibrated monitor and measured with a Minolta CS200 colorimeter. Differences in intended and achieved luminance and chromaticity were computed using the CIE DE2000 color-difference metric, in which a value of ΔE = 1 is generally considered to be approximately one just noticeable difference (JND) in color. We observed between one and 17 JNDs for individual colors, depending on calibration method and target. 
As an extension of this fundamental work [1], we further improved our calibration method by defining concrete calibration parameters for the display, using the NEC wide gamut puck, and making sure
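
The transfer-matrix step this record describes — displaying patches of known RGB and measuring XYZ tristimulus values with a sensor — can be sketched as a least-squares fit. The patch data and target matrix below are hypothetical stand-ins, not values from the study:

```python
import numpy as np

def fit_rgb_to_xyz(rgb_patches, xyz_measured):
    """Least-squares fit of a 3x3 transfer matrix M with XYZ ≈ M @ RGB,
    from the RGB driving levels of displayed patches and the tristimulus
    values measured with a colorimeter."""
    rgb = np.asarray(rgb_patches, dtype=float)   # shape (N, 3)
    xyz = np.asarray(xyz_measured, dtype=float)  # shape (N, 3)
    # Solve rgb @ M.T ≈ xyz in the least-squares sense.
    M_T, _, _, _ = np.linalg.lstsq(rgb, xyz, rcond=None)
    return M_T.T

# Hypothetical check: recover the sRGB/D65 matrix from noise-free "readings".
M_true = np.array([[0.4124, 0.3576, 0.1805],
                   [0.2126, 0.7152, 0.0722],
                   [0.0193, 0.1192, 0.9505]])
rgb = np.random.default_rng(0).uniform(0.0, 1.0, (24, 3))  # 24 color patches
xyz = rgb @ M_true.T
M_fit = fit_rgb_to_xyz(rgb, xyz)
```

In practice the measured XYZ values carry sensor noise, so more patches than the minimum of three are used, exactly as the abstract notes.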

  18. Self-Calibrated In-Process Photogrammetry for Large Raw Part Measurement and Alignment before Machining.

    Science.gov (United States)

    Mendikute, Alberto; Yagüe-Fabra, José A; Zatarain, Mikel; Bertelsen, Álvaro; Leizea, Ibai

    2017-09-09

    Photogrammetry methods are being used more and more as a 3D technique for large scale metrology applications in industry. Optical targets are placed on an object and images are taken around it, where measuring traceability is provided by precise off-process pre-calibrated digital cameras and scale bars. According to the 2D target image coordinates, target 3D coordinates and camera views are jointly computed. One of the applications of photogrammetry is the measurement of raw part surfaces prior to its machining. For this application, post-process bundle adjustment has usually been adopted for computing the 3D scene. With that approach, a high computation time is observed, leading in practice to time consuming and user dependent iterative review and re-processing procedures until an adequate set of images is taken, limiting its potential for fast, easy-to-use, and precise measurements. In this paper, a new efficient procedure is presented for solving the bundle adjustment problem in portable photogrammetry. In-process bundle computing capability is demonstrated on a consumer grade desktop PC, enabling quasi real time 2D image and 3D scene computing. Additionally, a method for the self-calibration of camera and lens distortion has been integrated into the in-process approach due to its potential for highest precision when using low cost non-specialized digital cameras. Measurement traceability is set only by scale bars available in the measuring scene, avoiding the uncertainty contribution of off-process camera calibration procedures or the use of special purpose calibration artifacts. The developed self-calibrated in-process photogrammetry has been evaluated both in a pilot case scenario and in industrial scenarios for raw part measurement, showing a total in-process computing time typically below 1 s per image up to a maximum of 2 s during the last stages of the computed industrial scenes, along with a relative precision of 1/10,000 (e.g. 
0.1 mm error in 1 m) with
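
A minimal sketch of the scale-bar traceability idea in this record: rescale the reconstructed point cloud so that distances between scale-bar target pairs match their certified lengths. The point indices and lengths below are hypothetical:

```python
import numpy as np

def apply_scale_bars(points, bar_pairs, bar_lengths_mm):
    """Rescale a reconstructed 3D point cloud so the distances between
    scale-bar target pairs match their certified lengths (mean ratio
    over all bars), setting measurement traceability from the bars alone."""
    pts = np.asarray(points, dtype=float)
    ratios = [L / np.linalg.norm(pts[i] - pts[j])
              for (i, j), L in zip(bar_pairs, bar_lengths_mm)]
    return pts * np.mean(ratios)

# Hypothetical reconstruction whose scale is off by a factor of 2.
true_pts = np.array([[0.0, 0.0, 0.0], [1000.0, 0.0, 0.0], [0.0, 500.0, 0.0]])
recon = true_pts / 2.0
scaled = apply_scale_bars(recon, bar_pairs=[(0, 1)], bar_lengths_mm=[1000.0])
```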

  19. Experimental/analytical approaches to modeling, calibrating and optimizing shaking table dynamics for structural dynamic applications

    Science.gov (United States)

    Trombetti, Tomaso

    This thesis presents an Experimental/Analytical approach to modeling and calibrating shaking tables for structural dynamic applications. This approach was successfully applied to the shaking table recently built in the structural laboratory of the Civil Engineering Department at Rice University. This shaking table is capable of reproducing model earthquake ground motions with a peak acceleration of 6 g, a peak velocity of 40 inches per second, and a peak displacement of 3 inches, for a maximum payload of 1500 pounds. It has a frequency bandwidth of approximately 70 Hz and is designed to test structural specimens up to 1/5 scale. The rail/table system is mounted on a reaction mass of about 70,000 pounds consisting of three 12 ft x 12 ft x 1 ft reinforced concrete slabs, post-tensioned together and connected to the strong laboratory floor. The slip table is driven by a hydraulic actuator governed by an MTS 407 controller which employs a proportional-integral-derivative-feedforward-differential pressure algorithm to control the actuator displacement. Feedback signals are provided by two LVDTs (monitoring the slip table relative displacement and the servovalve main stage spool position) and by one differential pressure transducer (monitoring the actuator force). The dynamic actuator-foundation-specimen system is modeled and analyzed by combining linear control theory and linear structural dynamics. The analytical model developed accounts for the effects of actuator oil compressibility, oil leakage in the actuator, time delay in the response of the servovalve spool to a given electrical signal, foundation flexibility, and dynamic characteristics of multi-degree-of-freedom specimens. In order to study the actual dynamic behavior of the shaking table, the transfer function between target and actual table accelerations was identified using experimental results and spectral estimation techniques. 
The power spectral density of the system input and the cross power spectral
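
The spectral-estimation step in this record — identifying the transfer function from the input power spectral density and the input-output cross spectrum — can be sketched with a Welch-style H1 estimator. The signals and segment count below are illustrative, not the thesis data:

```python
import numpy as np

def h1_transfer_function(x, y, nseg=8):
    """Welch-style H1 estimator: H(f) = S_xy(f) / S_xx(f), where S_xx is
    the input power spectral density and S_xy the input-output cross
    spectrum, each averaged over windowed segments."""
    n = len(x) // nseg
    win = np.hanning(n)
    Sxx = np.zeros(n // 2 + 1)
    Sxy = np.zeros(n // 2 + 1, dtype=complex)
    for k in range(nseg):
        X = np.fft.rfft(x[k * n:(k + 1) * n] * win)
        Y = np.fft.rfft(y[k * n:(k + 1) * n] * win)
        Sxx += (X.conj() * X).real
        Sxy += X.conj() * Y
    return Sxy / Sxx

# Hypothetical check: a pure gain-of-2 system gives |H(f)| = 2 at every bin.
rng = np.random.default_rng(1)
x = rng.standard_normal(4096)
H = h1_transfer_function(x, 2.0 * x)
```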

  20. A new method for automated dynamic calibration of tipping-bucket rain gauges

    Science.gov (United States)

    Humphrey, M.D.; Istok, J.D.; Lee, J.Y.; Hevesi, J.A.; Flint, A.L.

    1997-01-01

    Existing methods for dynamic calibration of tipping-bucket rain gauges (TBRs) can be time consuming and labor intensive. A new automated dynamic calibration system has been developed to calibrate TBRs with minimal effort. The system consists of a programmable pump, datalogger, digital balance, and computer. Calibration is performed in two steps: 1) pump calibration and 2) rain gauge calibration. Pump calibration ensures precise control of water flow rates delivered to the rain gauge funnel; rain gauge calibration ensures precise conversion of bucket tip times to actual rainfall rates. Calibration of the pump and one rain gauge for 10 selected pump rates typically requires about 8 h. Data files generated during rain gauge calibration are used to compute rainfall intensities and amounts from a record of bucket tip times collected in the field. The system was tested using 5 types of commercial TBRs (15.2-, 20.3-, and 30.5-cm diameters; 0.1-, 0.2-, and 1.0-mm resolutions) and using 14 TBRs of a single type (20.3-cm diameter; 0.1-mm resolution). Ten pump rates ranging from 3 to 154 mL min⁻¹ were used to calibrate the TBRs and represented rainfall rates between 6 and 254 mm h⁻¹ depending on the rain gauge diameter. All pump calibration results were very linear with R² values greater than 0.99. All rain gauges exhibited large nonlinear underestimation errors (between 5% and 29%) that decreased with increasing rain gauge resolution and increased with increasing rainfall rate, especially for rates greater than 50 mm h⁻¹. Calibration curves of bucket tip time against the reciprocal of the true pump rate for all rain gauges also were linear with R² values of 0.99. Calibration data for the 14 rain gauges of the same type were very similar, as indicated by slope values that were within 14% of each other and ranged from about 367 to 417 s mm h⁻¹. The developed system can calibrate TBRs efficiently, accurately, and virtually unattended and could be modified for use with other
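
The calibration curve reported in this record (bucket tip time linear in the reciprocal of the true pump rate) suggests a simple fit-and-invert sketch; the slope and rates below are hypothetical, chosen near the reported range:

```python
import numpy as np

def fit_tbr_calibration(pump_rates_mm_h, tip_times_s):
    """Fit the linear calibration curve:
    bucket tip time = slope * (1 / true rate) + intercept."""
    inv_rate = 1.0 / np.asarray(pump_rates_mm_h, dtype=float)
    slope, intercept = np.polyfit(inv_rate,
                                  np.asarray(tip_times_s, dtype=float), 1)
    return slope, intercept

def rainfall_rate_mm_h(tip_time_s, slope, intercept):
    """Invert the calibration to turn a field bucket-tip interval into mm/h."""
    return slope / (tip_time_s - intercept)

# Hypothetical gauge with slope 400 s mm h^-1 and zero intercept.
rates = np.array([6.0, 25.0, 50.0, 100.0, 254.0])
times = 400.0 / rates
slope, intercept = fit_tbr_calibration(rates, times)
```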

  1. Comparison of calibration curve of radiochromic films EBT2 and EBT

    International Nuclear Information System (INIS)

    Parra Osorio, V.; Martin-Viera Cueto, J. A.; Galan Montenegro, P.; Benitez Villegas, E. M.; Casado Villalon, F. F.; Bodineau Gil, C.

    2013-01-01

    The aim is to compare the quality of the fits used to calibrate two batches of radiochromic film, one of model EBT2 and the other of EBT3, using both experimental and phenomenological expressions, since the precision and accuracy of the absorbed dose estimate depend on the calibration curve. (Author)

  2. Experimental Approach for the Uncertainty Assessment of 3D Complex Geometry Dimensional Measurements Using Computed Tomography at the mm and Sub-mm Scales.

    Science.gov (United States)

    Jiménez, Roberto; Torralba, Marta; Yagüe-Fabra, José A; Ontiveros, Sinué; Tosello, Guido

    2017-05-16

    The dimensional verification of miniaturized components with 3D complex geometries is particularly challenging. Computed Tomography (CT) can represent a suitable alternative to micro metrology tools based on optical and tactile techniques. However, the establishment of CT systems' traceability when measuring 3D complex geometries is still an open issue. In this work, an alternative method for the measurement uncertainty assessment of 3D complex geometries by using CT is presented. The method is based on the micro-CT system Maximum Permissible Error (MPE) estimation, determined experimentally by using several calibrated reference artefacts. The main advantage of the presented method is that a previous calibration of the component by a more accurate Coordinate Measuring System (CMS) is not needed. In fact, such a CMS would still hold all the typical limitations of optical and tactile techniques, particularly when measuring miniaturized components with complex 3D geometries, including their inability to measure inner parts. To validate the presented method, the most accepted standard currently available for CT sensors, the Verein Deutscher Ingenieure/Verband Deutscher Elektrotechniker (VDI/VDE) guideline 2630-2.1, is applied. Considering the high number of influence factors in CT and their impact on the measuring result, two different techniques for surface extraction are also considered to obtain a realistic determination of the influence of data processing on uncertainty. The uncertainty assessment of a workpiece used for micro mechanical material testing is firstly used to confirm the method, due to its feasible calibration by an optical CMS. Secondly, the measurement of a miniaturized dental file with 3D complex geometry is carried out. The estimated uncertainties are eventually compared with the component's calibration and the micro manufacturing tolerances to demonstrate the suitability of the presented CT calibration procedure. 
The 2U/T ratios resulting from the

  3. Evaluation of uncertainties in the calibration of radiation survey meter

    International Nuclear Information System (INIS)

    Potiens, M.P.A.; Santos, G.P.

    2006-01-01

    In order to meet the requirements of ISO 17025, the quantification of the expanded uncertainties of experimental data in the calibration of survey meters must be carried out using well defined concepts, like those expressed in the 'ISO Guide to the Expression of Uncertainty in Measurement'. The calibration procedure of gamma ray survey meters involves two values whose uncertainties must be clearly known: the measurements of the instrument under calibration and the conventional true value of the quantity. Considering the continuous improvement of the calibration methods and set-ups, it is necessary to evaluate periodically the uncertainties involved in the procedures. In this work it is shown how the measurement uncertainties of an individual calibration can be estimated and how the result can be generalized to be valid for other radiation survey meters. (authors)
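
A minimal sketch of the GUM-style combination this record refers to: uncorrelated standard-uncertainty contributions combined in quadrature and expanded with a coverage factor. The budget entries below are invented placeholders, not the laboratory's values:

```python
import math

def expanded_uncertainty(budget, k=2.0):
    """Combine uncorrelated standard-uncertainty contributions in quadrature
    (GUM law of propagation) and apply coverage factor k (k = 2 for ~95%
    coverage). Each entry is a (sensitivity coefficient, std. uncertainty)
    pair."""
    u_c = math.sqrt(sum((c * u) ** 2 for c, u in budget))
    return k * u_c

# Hypothetical relative-uncertainty budget for a survey meter calibration.
budget = [(1.0, 0.010),   # conventional true value of the quantity
          (1.0, 0.008),   # reading of the instrument under calibration
          (1.0, 0.005)]   # positioning and set-up
U = expanded_uncertainty(budget)
```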

  4. Least square methods and covariance matrix applied to the relative efficiency calibration of a Ge(Li) detector

    International Nuclear Information System (INIS)

    Geraldo, L.P.; Smith, D.L.

    1989-01-01

    The methodology of covariance matrices and least squares methods has been applied to the relative efficiency calibration of a Ge(Li) detector. Procedures employed to generate, manipulate and test covariance matrices which serve to properly represent uncertainties of experimental data are discussed. Calibration data fitting using least squares methods has been performed for a particular experimental data set. (author) [pt
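
The covariance-weighted fitting described in this record can be sketched as generalized least squares, which returns both the parameters and their covariance. The efficiency-curve model and data below are synthetic, not the paper's:

```python
import numpy as np

def gls_fit(A, y, V):
    """Generalized least squares: parameters p minimizing
    (y - A p)^T V^-1 (y - A p), where V is the measurement covariance
    matrix; the parameter covariance is (A^T V^-1 A)^-1."""
    Vinv = np.linalg.inv(V)
    cov_p = np.linalg.inv(A.T @ Vinv @ A)
    p = cov_p @ A.T @ Vinv @ y
    return p, cov_p

# Hypothetical efficiency curve: log(efficiency) linear in log(energy).
E = np.array([121.8, 344.3, 778.9, 1408.0])      # gamma energies in keV
A = np.column_stack([np.ones_like(E), np.log(E)])
y = 2.0 - 0.9 * np.log(E)                        # synthetic log efficiencies
V = np.diag([0.01, 0.01, 0.02, 0.02])            # diagonal here; need not be
p, cov_p = gls_fit(A, y, V)
```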

  5. Calibration of radon-222 detectors using closed circuit radium-226 sources

    International Nuclear Information System (INIS)

    Perna, Allan Felipe Nunes; Paschuk, Sergei Anatolyevich; Correa, Janine Nicolosi; Del Claro, Flavia

    2012-01-01

    This paper presents the results of the calibration of the radon-222 detectors used by laboratories specializing in measuring the natural radiation from this gas. The research was conducted in collaboration between UTFPR, CDTN/CNEN, UFRN and IRD/CNEN. During the calibration the detectors were exposed in isolated chambers with calibrated radioactive sources. The calibration procedure was supported by four instant radon monitors AlphaGUARD (SAPHYMO Co.) responsible for the radon activity measurements in the experimental chamber. The calibration procedure resulted in an equation that relates the number of tracks found in the solid-state detector CR-39 (track-etch detector) to the concentration of radon in the atmosphere. The results obtained are compatible with a calibration previously performed at the National Institute of Radiological Sciences (NIRS, Japan) using high activity levels of radon in air. The present results make it possible to extend the calibration curve of CR-39 to medium and low activity levels of radon. (author)
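
The calibration equation mentioned here relates CR-39 track counts to radon concentration; a hedged sketch of such a linear conversion follows, with a hypothetical calibration factor and exposure (the record does not give its numeric values):

```python
def radon_concentration_bq_m3(net_track_density, exposure_days, cf):
    """Convert a CR-39 net track density (tracks/cm²) into a mean radon
    concentration (Bq/m³) using a linear calibration factor cf, expressed
    here in tracks cm⁻² per (Bq m⁻³ · day). All numbers are illustrative."""
    return net_track_density / (cf * exposure_days)

# e.g. 180 tracks/cm² over a 90-day exposure with cf = 0.02
concentration = radon_concentration_bq_m3(180.0, 90.0, 0.02)  # → 100.0 Bq/m³
```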

  6. The enthalpies of formation of alkyl carbamates: Experimental and computational redetermination

    International Nuclear Information System (INIS)

    Santos, Ana Filipa L.O.M.; Ribeiro da Silva, Manuel A.V.

    2013-01-01

    Highlights: ► Combustion calorimetry was used to redetermine Δ_fH°_m(cr) of methyl and ethyl carbamates. ► Δ_cr^gH°_m of methyl and ethyl carbamates were redetermined by Calvet microcalorimetry. ► Gas-phase Δ_fH°_m values of the studied compounds have been derived from the experimental values of Δ_fH°_m(cr) and Δ_cr^gH°_m. ► Gas-phase Δ_fH°_m values of the studied compounds have been calculated by computational thermochemistry. ► The obtained Δ_fH°_m(g) values for the alkyl carbamates are discussed versus literature values for the same compounds. - Abstract: In the present work, a redetermination of thermochemical data of methyl carbamate and ethyl carbamate was performed by both experimental and computational techniques. Their gas-phase standard (p° = 0.1 MPa) molar enthalpies of formation, Δ_fH°_m(g), at T = 298.15 K, were derived from the standard molar enthalpies of formation in the crystalline phase, Δ_fH°_m(cr), and from the standard molar enthalpies of sublimation, Δ_cr^gH°_m, at T = 298.15 K, measured, respectively, by static bomb combustion calorimetry and high-temperature Calvet microcalorimetry. The experimental results were compared with computational data, calculated at the G3(MP2)//B3LYP level, as well as with values reported in the literature. At the B3LYP/6-31G(d) level of theory, the molecular structure of both carbamates was obtained.
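
The relation used in this record to derive the gas-phase value is a simple thermodynamic sum; as a sketch (the numbers below are illustrative placeholders, not the measured enthalpies):

```python
def gas_phase_formation_enthalpy(dfH_cr_kJ_mol, dsubH_kJ_mol):
    """Gas-phase standard molar enthalpy of formation from the
    crystalline-phase value plus the sublimation enthalpy:
    Δ_fH°_m(g) = Δ_fH°_m(cr) + Δ_cr^gH°_m, all at T = 298.15 K."""
    return dfH_cr_kJ_mol + dsubH_kJ_mol

# Hypothetical values in kJ/mol, for illustration only:
dfH_g = gas_phase_formation_enthalpy(-470.0, 75.0)  # → -395.0 kJ/mol
```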

  7. Innovative Calibration Method for System Level Simulation Models of Internal Combustion Engines

    Directory of Open Access Journals (Sweden)

    Ivo Prah

    2016-09-01

    The paper outlines a procedure for the computer-controlled calibration of the combined zero-dimensional (0D) and one-dimensional (1D) thermodynamic simulation model of a turbocharged internal combustion engine (ICE). The main purpose of the calibration is to determine the input parameters of the simulation model in such a way as to achieve the smallest difference between the results of the measurements and the results of the numerical simulations with minimum consumption of computing time. The innovative calibration methodology is based on a novel interaction between optimization methods and physically based methods for the selected ICE sub-systems. The physically based methods are used for steering the division of the integral ICE model into several sub-models and for determining the parameters of selected components from their governing equations. This innovative multistage interaction between optimization methods and physically based methods allows, unlike well-established approaches that rely only on optimization techniques, for the successful calibration of a large number of input parameters with low time consumption. The proposed method is therefore suitable for the efficient calibration of simulation models of advanced ICEs.

  8. Predictive sensor based x-ray calibration using a physical model

    International Nuclear Information System (INIS)

    Fuente, Matias de la; Lutz, Peter; Wirtz, Dieter C.; Radermacher, Klaus

    2007-01-01

    Many computer assisted surgery systems are based on intraoperative x-ray images. To achieve reliable and accurate results these images have to be calibrated with respect to geometric distortions, which can be divided into constant distortions and distortions caused by magnetic fields. Instead of using an intraoperative calibration phantom that has to be visible within each image, resulting in overlaying markers, the presented approach directly takes advantage of the physical origin of the distortions. Based on a computed physical model of an image intensifier and a magnetic field sensor, an online compensation of distortions can be achieved without the need for an intraoperative calibration phantom. The model has to be adapted once to each specific image intensifier through calibration, which is based on an optimization algorithm that systematically alters the physical model parameters until a minimal error is reached. Once calibrated, the model is able to predict the distortions caused by the measured magnetic field vector and build an appropriate dewarping function. The time needed for model calibration is not yet optimized and takes up to 4 h on a 3 GHz CPU. In contrast, the time needed for distortion correction is less than 1 s and therefore absolutely acceptable for intraoperative use. First evaluations showed that by using the model-based dewarping algorithm the distortions of an XRII with a 21 cm FOV could be significantly reduced. The model was able to predict and compensate distortions by approximately 80%, to a remaining maximum error of 0.45 mm (0.19 mm rms).

  9. Evaluation of the energy dependence of pencil-type ionization chambers calibrated in standard computed tomography beams

    International Nuclear Information System (INIS)

    Fontes, Ladyjane Pereira; Potiens, Maria da Penha A.

    2015-01-01

    The Instrument Calibration Laboratory of IPEN (LCI-IPEN) performs calibrations of pencil-type ionization chambers (IC) used in dosimetric survey measurements on clinical Computed Tomography (CT) systems. Many users make mistakes when using a calibrated ionization chamber in their CT dosimetry systems. In this work, a methodology was established for the determination of quality correction factors (Kq) through a calibration curve that is specific to each ionization chamber. Furthermore, it was possible to demonstrate the energy dependence of a pencil-type ionization chamber (IC) calibrated at the LCI-IPEN. (author)

  10. Cryogenic thermometer calibration system using a helium cooling loop and a temperature controller [for LHC magnets

    CERN Document Server

    Chanzy, E; Thermeau, J P; Bühler, S; Joly, C; Casas-Cubillos, J; Balle, C

    1998-01-01

    The IPN-Orsay and CERN are designing in close collaboration a fully automated cryogenic thermometer calibration facility which will calibrate in 3 years 10,000 cryogenic thermometers required for the Large Hadron Collider (LHC) operation. A reduced-scale model of the calibration facility has been developed, which enables the calibration of ten thermometers by comparison with two rhodium-iron standard thermometers in the 1.8 K to 300 K temperature range under vacuum conditions. The particular design, based on a helium cooling loop and an electrical temperature controller, gives good dynamic performances. This paper describes the experimental set-up and the data acquisition system. Results of experimental runs are also presented along with the estimated global accuracy for the calibration. (3 refs).

  11. Review of Calibration Methods for Scheimpflug Camera

    Directory of Open Access Journals (Sweden)

    Cong Sun

    2018-01-01

    The Scheimpflug camera offers a wide range of applications in typical close-range photogrammetry, particle image velocimetry, and digital image correlation, because its depth of view can be greatly extended according to the Scheimpflug condition. Yet conventional calibration methods are not applicable in this case because the assumptions used by classical calibration methodologies are no longer valid for cameras satisfying the Scheimpflug condition. Various methods have therefore been investigated to solve the problem over the last few years. However, no comprehensive review exists that provides an insight into recent calibration methods for Scheimpflug cameras. This paper presents a survey of recent calibration methods for Scheimpflug cameras with perspective lenses, including the general nonparametric imaging model, and analyzes in detail the advantages and drawbacks of the mainstream calibration models with respect to each other. Real data experiments including calibrations, reconstructions, and measurements are performed to assess the performance of the models. The results reveal that the accuracies of the RMM, PLVM, PCIM, and GNIM are basically equal, while the accuracy of GNIM is slightly lower compared with the other three parametric models. Moreover, the experimental results reveal that the parameters of the tangential distortion are likely coupled with the tilt angle of the sensor in Scheimpflug calibration models. The work of this paper lays the foundation for further research on Scheimpflug cameras.

  12. Control console of the gamma calibration room

    International Nuclear Information System (INIS)

    Vilchis P, A.E.; Romero G, M.

    1999-01-01

    The Nuclear Centre of Mexico has an Ionizing Radiation Metrology Center (CMRI), which is in charge of the calibration, for Mexico and Latin America, of equipment dedicated to radiation measurement in industrial, medical and other fields. The importance of ensuring that this equipment stays correctly calibrated imposes the need to automate the different rooms of the CMRI. In this case, the calibration room for gamma radiation is described. The application was implemented with the LabVIEW development platform and in C language. The associated hardware is: a personal computer with two cards using the 8255 device, 16 channels with optical isolation to manage TTL-type inputs/outputs, 16 channels with optical isolation to manage loads at 127 V a.c., and 2 channels for 90 V d.c. motors. (Author)

  13. Automated system for the calibration of magnetometers

    International Nuclear Information System (INIS)

    Petrucha, Vojtech; Kaspar, Petr; Ripka, Pavel; Merayo, Jose M. G.

    2009-01-01

    A completely nonmagnetic calibration platform has been developed and constructed at DTU Space (Technical University of Denmark). It is intended for on-site scalar calibration of high-precision fluxgate magnetometers. An enhanced version of the same platform is being built at the Czech Technical University. There are three axes of rotation in this design (compared to two axes in the previous version). The addition of the third axis allows us to calibrate more complex devices. An electronic compass based on a vector fluxgate magnetometer and a micro-electro-mechanical systems (MEMS) accelerometer is one example. The new platform can also be used to evaluate the parameters of the compass in all possible variations in azimuth, pitch, and roll. The system is based on piezoelectric motors, which are placed on a platform made of aluminum, brass, plastic, and glass. Position sensing is accomplished through custom-made optical incremental sensors. The system is controlled by a microcontroller, which executes commands from a computer. The properties of the system as well as calibration and measurement results will be presented.

  14. Limits on the Secular Drift of the TMI Calibration

    Science.gov (United States)

    Wilheit, T. T.; Farrar, S.; Jones, L.; Santos-Garcia, A.

    2012-12-01

    Data from the TRMM Microwave Imager (TMI) can be applied to the problem of determining the trend in oceanic precipitation over more than a decade. It is thus critical to know if the calibration of the instrument has any drift over this time scale. Recently a set of Windsat data with a self-consistent calibration covering July 2005 through June of 2006 and all of 2011 has become available. The mission of Windsat, determining the feasibility of measuring oceanic wind speed and direction, requires extraordinary attention to instrument calibration. With TRMM being in a low inclination orbit and Windsat in a near polar sun synchronous orbit, there are many observations coincident in space and nearly coincident in time. A data set has been assembled where the observations are averaged over 1 degree boxes of latitude and longitude and restricted to a maximum of 1 hour time difference. University of Central Florida (UCF) compares the two radiometers by computing radiances based on Global Data Assimilation System (GDAS) analyses for all channels of each radiometer for each box and computing double differences for corresponding channels. The algorithm is described in detail by Biswas et al., (2012). Texas A&M (TAMU) uses an independent implementation of GDAS-based algorithm and another where the radiances of Windsat are used to compute Sea Surface Temperature, Sea Surface Wind Speed, Precipitable Water and Cloud Liquid Water for each box. These are, in turn, used to compute the TMI radiances. These two algorithms have been described in detail by Wilheit (2012). Both teams apply stringent filters to the boxes to assure that the conditions are consistent with the model assumptions. Examination of both teams' results indicates that the drift is less than 0.04K over the 5 ½ year span for the 10 and 37 GHz channels of TMI. The 19 and 21 GHz channels have somewhat larger differences, but they are more influenced by atmospheric changes. Given the design of the instruments, it is
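
The double-difference comparison described in this record can be sketched in a few lines; the brightness temperatures below are invented for illustration only:

```python
def double_difference(obs_a, sim_a, obs_b, sim_b):
    """Double difference for one collocated 1-degree box and one channel:
    DD = (observed_A - simulated_A) - (observed_B - simulated_B).
    The model-based simulated radiances cancel the geophysical scene,
    leaving the relative calibration offset between radiometers A and B."""
    return (obs_a - sim_a) - (obs_b - sim_b)

# Hypothetical brightness temperatures (K) for one matched box:
dd = double_difference(obs_a=181.7, sim_a=181.5, obs_b=182.0, sim_b=181.9)
```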

  15. Video analysis of projectile motion using tablet computers as experimental tools

    Science.gov (United States)

    Klein, P.; Gröber, S.; Kuhn, J.; Müller, A.

    2014-01-01

    Tablet computers were used as experimental tools to record and analyse the motion of a ball thrown vertically from a moving skateboard. Special applications plotted the measurement data component by component, allowing a simple determination of initial conditions and g in order to explore the underlying laws of motion. This experiment can easily be performed by students themselves, providing more autonomy in their problem-solving processes than traditional learning approaches. We believe that this autonomy and the authenticity of the experimental tool both foster their motivation.

  16. Vortex-Concept for Radioactivity Release Prevention at NPP: Development of Computational Model of Lab-Scale Experimental Setup

    Energy Technology Data Exchange (ETDEWEB)

    Ullah, Sana; Sung, Yim Man; Park, Jin Soo; Sung Hyung Jin [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    The experimental validation of the vortex-like air curtain concept, and the use of an appropriate CFD modelling approach for analyzing the problem, is crucial. A lab-scale experimental setup was designed to validate the proposed concept and the CFD modeling approach as part of the validation process. In this study, a computational model of this lab-scale experimental setup is developed using the open source CFD code OpenFOAM. The computational results will be compared with experimental data for validation purposes in the future, when experimental data are available. 1) A computational model of a lab-scale experimental setup, designed to validate the concept of artificial vortex-like airflow generation for application to radioactivity dispersion prevention in the event of a severe accident, was developed. 2) A mesh sensitivity study was performed and a mesh of about 2 million cells was found to be sufficient for this setup.

  17. Computer network for experimental research using ISDN

    International Nuclear Information System (INIS)

    Ida, Katsumi; Nakanishi, Hideya

    1997-01-01

    This report describes the development of a computer network that uses the Integrated Services Digital Network (ISDN) for real-time analysis in experimental plasma physics and nuclear fusion research. The communication speed, 64/128 kbps (INS64) or 1.5 Mbps (INS1500) per connection, is independent of how busy the network is. When INS1500 is used, the communication speed, which is proportional to the public telephone connection fee, can be dynamically varied from 64 kbps to 1472 kbps, depending on how much data are being transferred, using the Bandwidth-on-Demand (BOD) function in the ISDN router. On-demand dial-up and time-out disconnection reduce the public telephone connection fee by 10%-97%. (author)

  18. Uniformity calibration for ICT image

    International Nuclear Information System (INIS)

    Zeng Gang; Liu Li; Que Jiemin; Zhang Yingping; Yin Yin; Wang Yanfang; Yu Zhongqiang; Yan Yonglian

    2004-01-01

    The uniformity of ICT images is impaired by beam hardening and by the inconsistency of detector unit responses. The beam hardening and the nonlinearity of the detector output have been analyzed. The correction factors are determined experimentally from the detector responses at different absorption lengths. The artifacts in the CT image of a symmetrical aluminium cylinder have been eliminated after calibration. (author)

  19. Instrumentation, computer software and experimental techniques used in low-frequency internal friction studies at WNRE

    International Nuclear Information System (INIS)

    Sprugmann, K.W.; Ritchie, I.G.

    1980-04-01

    A detailed and comprehensive account of the equipment, computer programs and experimental methods developed at the Whiteshell Nuclear Research Establishment for the study of low-frequency internal friction is presented. Part I describes the mechanical apparatus, electronic instrumentation and computer software, while Part II describes in detail the laboratory techniques and the various types of experiments performed, together with data reduction and analysis. Experimental procedures for the study of internal friction as a function of temperature, strain amplitude or time are described. Computer control of these experiments using the free-decay technique is outlined. In addition, a pendulum constant-amplitude drive system is described. (auth)

  20. Ideas for fast accelerator model calibration

    International Nuclear Information System (INIS)

    Corbett, J.

    1997-05-01

    With the advent of a simple matrix inversion technique, measurement-based storage ring modeling has made rapid progress in recent years. Using fast computers with large memory, the matrix inversion procedure typically adjusts up to 10^3 model variables to fit on the order of 10^5 measurements. The results have been surprisingly accurate. Physics aside, one of the next frontiers is to simplify the process and to reduce computation time. In this paper, the authors discuss two approaches to speed up the model calibration process: recursive least-squares fitting and a piecewise fitting approach.
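
The recursive least-squares approach suggested in this record folds measurements in one at a time instead of refitting the full measurement set; a standard RLS update can be sketched as follows (the dimensions and data are toy-scale, not a storage-ring model):

```python
import numpy as np

def rls_update(p, P, a, y, lam=1.0):
    """One recursive-least-squares step: fold a single new measurement
    y ≈ a·p into the current estimate p (covariance P), with optional
    forgetting factor lam, avoiding a refit over all past measurements."""
    a = np.asarray(a, dtype=float)
    Pa = P @ a
    k = Pa / (lam + a @ Pa)          # gain vector
    p = p + k * (y - a @ p)          # updated parameter estimate
    P = (P - np.outer(k, Pa)) / lam  # updated parameter covariance
    return p, P

# Hypothetical check: stream exact measurements of a 2-parameter model.
rng = np.random.default_rng(2)
p_true = np.array([1.5, -0.5])
p = np.zeros(2)
P = np.eye(2) * 1e6                  # large initial covariance (weak prior)
for _ in range(50):
    a = rng.standard_normal(2)
    p, P = rls_update(p, P, a, a @ p_true)
```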

  1. A Comparison of Computed and Experimental Flowfields of the RAH-66 Helicopter

    Science.gov (United States)

    vanDam, C. P.; Budge, A. M.; Duque, E. P. N.

    1996-01-01

    This paper compares and evaluates numerical and experimental flowfields of the RAH-66 Comanche helicopter. The numerical predictions were obtained by solving the Thin-Layer Navier-Stokes equations. The computations use actuator disks to investigate the main and tail rotor effects upon the fuselage flowfield. The wind tunnel experiment was performed in the 14 x 22 foot facility located at NASA Langley. A suite of flow conditions, rotor thrusts and fuselage-rotor-tail configurations was tested. In addition, the tunnel model and the computational geometry were based upon the same CAD definition. Computations were performed for an isolated fuselage configuration and for a rotor-on configuration. Comparisons between the measured and computed surface pressures show areas of correlation and some discrepancies; local areas of poor computational grid quality and local geometry differences account for the discrepancies. These calculations demonstrate the use of advanced computational fluid dynamics methodologies on a flight vehicle currently under development, and they serve as an important verification for future computed results.

  2. Calibration of neutron detectors on the Joint European Torus.

    Science.gov (United States)

    Batistoni, Paola; Popovichev, S; Conroy, S; Lengar, I; Čufar, A; Abhangi, M; Snoj, L; Horton, L

    2017-10-01

    The present paper describes the findings of the calibration of the neutron yield monitors on the Joint European Torus (JET) performed in 2013 using a 252Cf source deployed inside the torus by the remote handling system, with particular regard to the calibration of fission chambers which provide the time resolved neutron yield from JET plasmas. The experimental data obtained in toroidal, radial, and vertical scans are presented. These data are first analysed following an analytical approach adopted in the previous neutron calibrations at JET. In this way, a calibration function for the volumetric plasma source is derived which allows us to understand the importance of the different plasma regions and of different spatial profiles of neutron emissivity on fission chamber response. Neutronics analyses have also been performed to calculate the correction factors needed to derive the plasma calibration factors, taking into account the different energy spectrum and angular emission distribution of the calibrating (point) 252Cf source, its discrete positions compared to the plasma volumetric source, and the calibration circumstances. All correction factors are presented and discussed. We also discuss the lessons learnt, which are the basis for the on-going 14 MeV neutron calibration at JET and for ITER.

  3. Computational and experimental analyses of the wave propagation through a bar structure including liquid-solid interface

    Energy Technology Data Exchange (ETDEWEB)

    Park, Sang Jin [UST Graduate School, Daejeon (Korea, Republic of); Rhee, Hui Nam [Division of Mechanical and Aerospace Engineering, Sunchon National University, Sunchon (Korea, Republic of); Yoon, Doo Byung; Park, Jin Ho [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-08-15

    In this research, we study the propagation of longitudinal and transverse waves through a metal rod including a liquid layer using computational and experimental analyses. The propagation characteristics of longitudinal and transverse waves obtained by the computational and experimental analyses were consistent with the wave propagation theory for both cases, that is, the homogeneous metal rod and the metal rod including a liquid layer. The fluid-structure interaction modeling technique developed for the computational wave propagation analysis in this research can be applied to the more complex structures including solid-liquid interfaces.

  4. Cloud-Based Model Calibration Using OpenStudio: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Hale, E.; Lisell, L.; Goldwasser, D.; Macumber, D.; Dean, J.; Metzger, I.; Parker, A.; Long, N.; Ball, B.; Schott, M.; Weaver, E.; Brackney, L.

    2014-03-01

    OpenStudio is a free, open source Software Development Kit (SDK) and application suite for performing building energy modeling and analysis. The OpenStudio Parametric Analysis Tool has been extended to allow cloud-based simulation of multiple OpenStudio models parametrically related to a baseline model. This paper describes the new cloud-based simulation functionality and presents a model calibration case study. Calibration is initiated by entering actual monthly utility bill data into the baseline model. Multiple parameters are then varied over multiple iterations to reduce the difference between actual energy consumption and model simulation results, as calculated and visualized by billing period and by fuel type. Simulations are performed in parallel using the Amazon Elastic Compute Cloud service. This paper highlights model parameterizations (measures) used for calibration, but the same multi-nodal computing architecture is available for other purposes, for example, recommending combinations of retrofit energy saving measures using the calibrated model as the new baseline.

  5. Rapid and accurate control rod calibration measurement and analysis

    International Nuclear Information System (INIS)

    Nelson, George W.; Doane, Harry J.

    1990-01-01

    In order to reduce the time needed to perform control rod calibrations and improve the accuracy of the results, a technique for the measurement, analysis, and tabulation of integral rod worths has been developed. A single series of critical rod positions is determined at constant low power to reduce the waiting time between positive period measurements and still assure true stable reactor period data. Reactivity values from positive period measurements and control rod drop measurements are used as input data for a non-linear fit to the expected control rod integral worth shape. With this method, two control rods can be calibrated in about two hours, and integral and differential calibration tables for operator use are printed almost immediately. Listings of the BASIC computer programs for the non-linear fitting and calibration table preparation are provided. (author)
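    The non-linear fit of measured reactivities to an expected integral-worth shape can be sketched as below. The S-shaped worth function, the rod travel H, and the $3.00 total worth are hypothetical illustration values, and scipy stands in for the BASIC programs mentioned in the abstract.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    H = 38.0  # total rod travel (cm); hypothetical value

    def integral_worth(z, total):
        """Classic S-shaped integral worth curve for a rod in a bare core."""
        x = z / H
        return total * (x - np.sin(2.0 * np.pi * x) / (2.0 * np.pi))

    # Synthetic "measured" integral worths at 15 critical rod positions
    z = np.linspace(0.0, H, 15)
    measured = integral_worth(z, 3.0)    # true total worth: $3.00

    popt, _ = curve_fit(integral_worth, z, measured, p0=[1.0])
    total_worth = popt[0]                # recovered total rod worth
    ```

    A differential-worth table for operator use is then just the derivative of the fitted curve tabulated on a fine grid of rod positions.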

  6. Computer techniques for experimental work in GDR nuclear power plants with WWER

    International Nuclear Information System (INIS)

    Stemmler, G.

    1985-01-01

    Nuclear power plant units with WWER are being increasingly equipped with high-performance, programmable process control computers. There are, however, essential reasons for further advancing the development of computer-aided measuring systems, in particular for experimental work. A special structure of such systems, which is based on the division into relatively rigid data registration and primary handling and into further processing by advanced programming language, has proved useful in the GDR. (author)

  7. Gradient-based model calibration with proxy-model assistance

    Science.gov (United States)

    Burrows, Wesley; Doherty, John

    2016-02-01

    Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivatives calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model, and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
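    A minimal sketch of the proxy-assisted Gauss-Newton iteration described above, under the simplifying assumption that the proxy reproduces the original model exactly (in practice it is a cheaper analytical surrogate); none of these functions come from PEST itself.

    ```python
    import numpy as np

    t = np.linspace(0.0, 2.0, 20)
    obs = 2.0 * np.exp(-1.5 * t)            # synthetic observations

    def original(p):   # stand-in for the slow, numerically delicate model
        return p[0] * np.exp(-p[1] * t)

    def proxy(p):      # cheap analytical surrogate; here assumed exact
        return p[0] * np.exp(-p[1] * t)

    def jacobian(f, p, h=1e-6):
        """Finite-difference Jacobian, populated from the proxy only."""
        J = np.empty((t.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = h
            J[:, j] = (f(p + dp) - f(p - dp)) / (2.0 * h)
        return J

    p = np.array([1.0, 1.0])
    for _ in range(20):
        r = obs - original(p)     # one "expensive" run per upgrade test
        J = jacobian(proxy, p)    # derivatives come from the cheap proxy
        p = p + np.linalg.solve(J.T @ J, J.T @ r)
    # p converges to the true parameters [2.0, 1.5]
    ```

    The pattern mirrors the abstract's division of labour: the proxy absorbs the many runs needed for finite differences, while the original model is reserved for the few residual evaluations that test each parameter upgrade.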

  8. Calibration of neutrons monitors with moderators and application in the calibration factors of albedo dosemeters

    International Nuclear Information System (INIS)

    Schuch, L.A.

    1978-11-01

    The calibration factors and the reproducibility of an albedo dosimeter designed for personal neutron monitoring were determined. These factors were obtained by simulating the dosimeter reading and the equivalent dose in the locality by a convenient combination of responses of the Bonner sphere spectrometer. The results obtained in the simulation were verified experimentally for different spectra employing an Am-Be source, a bare 252Cf source, and a 252Cf source with graphite shields of varying thickness. Different standards were used in the procedures necessary for the determination of the calibration factors. An Am-Be neutron source, standardized by the activation of a manganese sulphate bath, was used as a primary standard. As a secondary standard for the measurement of the neutron fluence, a De Pangher long counter was used, and the scattering effects were determined using the shadow cone method. The other monitors, such as the rem counter and the Bonner sphere spectrometer, were also calibrated with reference to the secondary standard with a view to comparing the results obtained with those furnished by the albedo dosimeter. (Author)

  9. Calibrating corneal material model parameters using only inflation data: an ill-posed problem

    CSIR Research Space (South Africa)

    Kok, S

    2014-08-01

    Full Text Available is to perform numerical modelling using the finite element method, for which a calibrated material model is required. These material models are typically calibrated using experimental inflation data by solving an inverse problem. In the inverse problem...

  10. Experimental and computational investigations of heat and mass transfer of intensifier grids

    International Nuclear Information System (INIS)

    Kobzar, Leonid; Oleksyuk, Dmitry; Semchenkov, Yuriy

    2015-01-01

    The paper discusses experimental and numerical investigations on intensification of thermal and mass exchange which were performed by the National Research Centre "Kurchatov Institute" over the past years. Recently, many designs of heat and mass transfer intensifier grids have been proposed. NRC "Kurchatov Institute" has accomplished a large scope of experimental investigations to study the efficiency of intensifier grids of various types. The outcomes of these experimental investigations can be used in the verification of computational models and codes. On the basis of the experimental data, we derived correlations to calculate coolant mixing and critical heat flux in rod bundles equipped with intensifier grids. The acquired correlations were integrated in the subchannel code SC-INT.

  11. Radioactive standards and calibration methods for contamination monitoring instruments

    Energy Technology Data Exchange (ETDEWEB)

    Yoshida, Makoto [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1997-06-01

    Contamination monitoring in facilities handling unsealed radioactive materials is one of the most important procedures for radiation protection, as well as radiation dose monitoring. For proper contamination monitoring, radiation measuring instruments should not only be suitable to the purpose of monitoring, but also be well calibrated for the objective quantities of measurement. In the calibration of contamination monitoring instruments, quality reference activities need to be used. They are supplied in different forms, such as extended sources, radioactive solutions, or radioactive gases. These reference activities must be traceable to national standards or equivalent standards. On the other hand, appropriate calibration methods must be applied for each type of contamination monitoring instrument. This paper describes the concepts of calibration for contamination monitoring instruments, reference sources, determination methods of reference quantities, and practical calibration methods of contamination monitoring instruments, including the procedures carried out at the Japan Atomic Energy Research Institute and some relevant experimental data. (G.K.)

  12. Calibration Method to Eliminate Zeroth Order Effect in Lateral Shearing Interferometry

    Science.gov (United States)

    Fang, Chao; Xiang, Yang; Qi, Keqi; Chen, Dawei

    2018-04-01

    In this paper, a calibration method is proposed which eliminates the zeroth-order effect in lateral shearing interferometry. An analytical expression of the calibration error function is deduced, and the relationship between the phase-restoration error and the calibration error is established. The analytical results show that the phase-restoration error introduced by the calibration error is proportional to the phase-shifting error and the zeroth-order effect. The calibration method is verified using simulations and experiments. The simulation results show that the phase-restoration error is approximately proportional to the phase-shifting error and the zeroth-order effect when the phase-shifting error is less than 2° and the zeroth-order effect is less than 0.2. The experimental result shows that, compared with the conventional method with 9-frame interferograms, the calibration method with 5-frame interferograms achieves nearly the same restoration accuracy.

  13. New Theoretical Analysis of the LRRM Calibration Technique for Vector Network Analyzers

    OpenAIRE

    Purroy Martín, Francesc; Pradell i Cara, Lluís

    2001-01-01

    In this paper, a new theoretical analysis of the four-standards line-reflect-reflect-match (LRRM) vector network-analyzer (VNA) calibration technique is presented. As a result, it is shown that the reference-impedance (to which the LRRM calibration is referred) cannot generally be defined whenever nonideal standards are used. Based on this consideration, a new algorithm to determine the on-wafer match standard is proposed that improves the LRRM calibration accuracy. Experimental verification ...

  14. Non-uniformity calibration for MWIR polarization imagery obtained with integrated microgrid polarimeters

    Science.gov (United States)

    Liu, Hai-Zheng; Shi, Ze-Lin; Feng, Bin; Hui, Bin; Zhao, Yao-Hong

    2016-03-01

    Integrating microgrid polarimeters on the focal plane array (FPA) of an infrared detector causes non-uniformity of the polarization response. In order to reduce the effect of polarization non-uniformity, this paper constructs an experimental setup for capturing raw flat-field images and proposes a procedure for acquiring a non-uniformity calibration (NUC) matrix and calibrating raw polarization images. The proposed procedure treats the incident radiation as a polarization vector and provides a calibration matrix for each pixel. Both our matrix calibration and two-point calibration are applied to our mid-wavelength infrared (MWIR) polarization imaging system with integrated microgrid polarimeters. Compared with two-point calibration, our matrix calibration reduces non-uniformity by 30-40% in flat-field tests with polarized input. An outdoor scene observation experiment indicates that our calibration can effectively reduce polarization non-uniformity and improve the image quality of our MWIR polarization imaging system.
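    The per-pixel matrix calibration can be illustrated for a single 0/45/90/135-degree microgrid superpixel. The analyzer matrix, gain perturbation, and calibration states below are invented for the sketch; only the linear-algebra pattern (estimate the measurement matrix from known polarized inputs, then invert it for scene data) reflects the described procedure.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Ideal analyzer matrix of a 0/45/90/135-degree superpixel: rows map a
    # linear Stokes vector [S0, S1, S2] to the four microgrid pixel intensities.
    W_ideal = 0.5 * np.array([[1.0,  1.0,  0.0],
                              [1.0,  0.0,  1.0],
                              [1.0, -1.0,  0.0],
                              [1.0,  0.0, -1.0]])
    W_true = W_ideal * (1.0 + 0.05 * rng.standard_normal((4, 1)))  # pixel gain errors

    # Flat-field calibration: present K known polarization states (columns, 3 x K)
    S_cal = np.array([[1.0, 1.0, 1.0,  1.0,  1.0],
                      [0.0, 0.5, 0.0, -0.3,  0.2],
                      [0.0, 0.0, 0.5,  0.2, -0.4]])
    M_cal = W_true @ S_cal                      # 4 x K raw pixel responses

    # Per-superpixel calibration matrix by least squares: M = W S  =>  W = M S^+
    W_est = M_cal @ np.linalg.pinv(S_cal)

    # Calibrated reconstruction of an unknown scene pixel
    m_scene = W_true @ np.array([1.0, 0.1, -0.2])
    s_rec = np.linalg.pinv(W_est) @ m_scene
    ```

    Because the calibration solves for a full matrix per superpixel rather than a single gain/offset pair, it absorbs polarization-dependent non-uniformity that a two-point correction cannot.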

  15. Calibration Technique of the Irradiated Thermocouple using Artificial Neural Network

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Jin Tae; Joung, Chang Young; Ahn, Sung Ho; Yang, Tae Ho; Heo, Sung Ho; Jang, Seo Yoon [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    To correct the signals, the degradation rate of the sensors needs to be analyzed, and re-calibration of the sensors should follow periodically. In particular, because thermocouples instrumented in a nuclear fuel rod degrade owing to the high neutron fluence generated by the nuclear fuel, a periodic re-calibration process is necessary. However, despite the re-calibration of the thermocouple, the measurement error grows until the next re-calibration. In this study, based on periodically calibrated temperature-voltage data, an interpolation technique using an artificial neural network is introduced to minimize the calibration error of a C-type thermocouple under the irradiation test. The test result shows that the calculated voltages derived from the interpolation function are in good agreement with the experimental sampling data, and they also accurately interpolate the voltages at arbitrary temperature and neutron fluence. That is, once the reference data are obtained by experiment, it is possible to accurately calibrate the voltage signal at a given neutron fluence and temperature using an artificial neural network.
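    The interpolation idea can be sketched with a tiny one-hidden-layer network trained on a synthetic temperature/fluence/EMF table. The degradation trend, network size, and training schedule are all illustrative assumptions, not the study's actual data or model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical calibration table: (temperature, fluence) -> normalised EMF.
    T, F = np.meshgrid(np.linspace(400.0, 1200.0, 9), np.linspace(0.0, 10.0, 6))
    X = np.column_stack([T.ravel() / 1200.0, F.ravel() / 10.0])  # normalised inputs
    y = (0.9 * X[:, 0] * (1.0 - 0.08 * X[:, 1]))[:, None]        # synthetic trend:
                                                                 # EMF drifts down with fluence

    # One-hidden-layer MLP trained by plain batch gradient descent
    W1 = 0.5 * rng.standard_normal((2, 16)); b1 = np.zeros(16)
    W2 = 0.5 * rng.standard_normal((16, 1)); b2 = np.zeros(1)

    def forward(Z):
        h = np.tanh(Z @ W1 + b1)
        return h, h @ W2 + b2

    losses, lr = [], 0.1
    for _ in range(2000):
        h, pred = forward(X)
        err = pred - y
        losses.append(float(np.mean(err ** 2)))
        gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
        gh = (err @ W2.T) * (1.0 - h ** 2)           # backprop through tanh
        gW1 = X.T @ gh / len(X); gb1 = gh.mean(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

    # Interpolate the (normalised) EMF at an arbitrary temperature/fluence point
    _, emf = forward(np.array([[800.0 / 1200.0, 3.5 / 10.0]]))
    ```

    Once trained on the periodically measured reference points, the network supplies a smooth correction surface at any intermediate temperature and fluence, which is the role the abstract assigns to the interpolation function.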

  16. Calibration of Ge gamma-ray spectrometers for complex sample geometries and matrices

    Energy Technology Data Exchange (ETDEWEB)

    Semkow, T.M., E-mail: thomas.semkow@health.ny.gov [Wadsworth Center, New York State Department of Health, Empire State Plaza, Albany, NY 12201 (United States); Department of Environmental Health Sciences, School of Public Health, University at Albany, State University of New York, Rensselaer, NY 12144 (United States); Bradt, C.J.; Beach, S.E.; Haines, D.K.; Khan, A.J.; Bari, A.; Torres, M.A.; Marrantino, J.C.; Syed, U.-F. [Wadsworth Center, New York State Department of Health, Empire State Plaza, Albany, NY 12201 (United States); Kitto, M.E. [Wadsworth Center, New York State Department of Health, Empire State Plaza, Albany, NY 12201 (United States); Department of Environmental Health Sciences, School of Public Health, University at Albany, State University of New York, Rensselaer, NY 12144 (United States); Hoffman, T.J. [Wadsworth Center, New York State Department of Health, Empire State Plaza, Albany, NY 12201 (United States); Curtis, P. [Kiltel Systems, Inc., Clyde Hill, WA 98004 (United States)

    2015-11-01

    A comprehensive study of the efficiency calibration and calibration verification of Ge gamma-ray spectrometers was performed using semi-empirical, computational Monte-Carlo (MC), and transfer methods. The aim of this study was to evaluate the accuracy of the quantification of gamma-emitting radionuclides in complex matrices normally encountered in environmental and food samples. A wide range of gamma energies from 59.5 to 1836.0 keV and geometries from a 10-mL jar to 1.4-L Marinelli beaker were studied on four Ge spectrometers with the relative efficiencies between 102% and 140%. Density and coincidence summing corrections were applied. Innovative techniques were developed for the preparation of artificial complex matrices from materials such as acidified water, polystyrene, ethanol, sugar, and sand, resulting in the densities ranging from 0.3655 to 2.164 g cm^−3. They were spiked with gamma activity traceable to international standards and used for calibration verifications. A quantitative method of tuning MC calculations to experiment was developed based on a multidimensional chi-square paraboloid. - Highlights: • Preparation and spiking of traceable complex matrices in extended geometries. • Calibration of Ge gamma spectrometers for complex matrices. • Verification of gamma calibrations. • Comparison of semi-empirical, computational Monte Carlo, and transfer methods of Ge calibration. • Tuning of Monte Carlo calculations using a multidimensional paraboloid.
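    The chi-square paraboloid tuning can be sketched in two dimensions: evaluate chi-square at scattered Monte Carlo parameter settings, least-squares fit a full quadratic surface, and read the optimum off the fitted stationary point. The two tuning knobs and the quadratic coefficients below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Chi-square between MC and experiment, evaluated at 30 settings of two
    # hypothetical tuning parameters (an exactly quadratic toy surface).
    p = rng.uniform(-1.0, 1.0, size=(30, 2))
    chi2 = (4.0 + 3.0 * (p[:, 0] - 0.2) ** 2
                + 5.0 * (p[:, 1] + 0.4) ** 2
                + 2.0 * (p[:, 0] - 0.2) * (p[:, 1] + 0.4))

    # Least-squares fit of a full quadratic (the "paraboloid")
    A = np.column_stack([np.ones(len(p)), p[:, 0], p[:, 1],
                         p[:, 0] ** 2, p[:, 0] * p[:, 1], p[:, 1] ** 2])
    c, *_ = np.linalg.lstsq(A, chi2, rcond=None)

    # Stationary point of the fitted surface: solve grad(chi2) = 0
    Hm = np.array([[2.0 * c[3], c[4]],
                   [c[4], 2.0 * c[5]]])
    best = np.linalg.solve(Hm, -np.array([c[1], c[2]]))
    # best recovers the chi-square minimum, here (0.2, -0.4)
    ```

    With more tuning parameters the design matrix simply gains the extra linear, square, and cross terms; the stationary-point solve stays a single small linear system.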

  17. Calibration of Ge gamma-ray spectrometers for complex sample geometries and matrices

    International Nuclear Information System (INIS)

    Semkow, T.M.; Bradt, C.J.; Beach, S.E.; Haines, D.K.; Khan, A.J.; Bari, A.; Torres, M.A.; Marrantino, J.C.; Syed, U.-F.; Kitto, M.E.; Hoffman, T.J.; Curtis, P.

    2015-01-01

    A comprehensive study of the efficiency calibration and calibration verification of Ge gamma-ray spectrometers was performed using semi-empirical, computational Monte-Carlo (MC), and transfer methods. The aim of this study was to evaluate the accuracy of the quantification of gamma-emitting radionuclides in complex matrices normally encountered in environmental and food samples. A wide range of gamma energies from 59.5 to 1836.0 keV and geometries from a 10-mL jar to 1.4-L Marinelli beaker were studied on four Ge spectrometers with the relative efficiencies between 102% and 140%. Density and coincidence summing corrections were applied. Innovative techniques were developed for the preparation of artificial complex matrices from materials such as acidified water, polystyrene, ethanol, sugar, and sand, resulting in the densities ranging from 0.3655 to 2.164 g cm^−3. They were spiked with gamma activity traceable to international standards and used for calibration verifications. A quantitative method of tuning MC calculations to experiment was developed based on a multidimensional chi-square paraboloid. - Highlights: • Preparation and spiking of traceable complex matrices in extended geometries. • Calibration of Ge gamma spectrometers for complex matrices. • Verification of gamma calibrations. • Comparison of semi-empirical, computational Monte Carlo, and transfer methods of Ge calibration. • Tuning of Monte Carlo calculations using a multidimensional paraboloid.

  18. Preface to special issue of selected papers from Theoretical, Experimental, and Computational Mechanics (TECM)

    DEFF Research Database (Denmark)

    Jabbari, Masoud; Sarlak Chivaee, Hamid; Hattel, Jesper Henri

    2017-01-01

    We are pleased to introduce this special issue of the Applied Mathematical Modelling journal with highlights from the Theoretical, Experimental, and Computational Mechanics Symposium (TECM-2015). This special issue consists of four rigorously selected papers originally presented at TECM-2015 as a part of the 13th International Conference of Numerical Analysis and Applied Mathematics 2015 (ICNAAM 2015), which was held on 23-29 September 2015 in Rhodes, Greece. The symposium attracted a broad range of international and local leaders in theoretical, experimental, and computational mechanics across various fields and applications. The symposium did an excellent job of outlining the current landscape of computational mechanics and its capabilities in solving complex industrial problems in the process industries, and we agree with the editor-in-chief of the journal that it is certainly worthwhile

  19. Calibration of the MACHO Photometry Database

    International Nuclear Information System (INIS)

    Alcock, C.; Allsman, R. A.; Alves, D. R.; Axelrod, T. S.; Becker, A. C.; Bennett, D. P.; Cook, K. H.; Drake, A. J.; Freeman, K. C.; Geha, M.

    1999-01-01

    The MACHO Project is a microlensing survey that monitors the brightnesses of ∼60 million stars in the Large Magellanic Cloud (LMC), Small Magellanic Cloud, and Galactic bulge. Our database presently contains about 80 billion photometric measurements, a significant fraction of all astronomical photometry. We describe the calibration of MACHO two-color photometry and transformation to the standard Kron-Cousins V and R system. Calibrated MACHO photometry may be properly compared with all other observations on the Kron-Cousins standard system, enhancing the astrophysical value of these data. For ∼9 million stars in the LMC bar, independent photometric measurements of ∼20,000 stars with V ≲ 18 mag in field-overlap regions demonstrate an internal precision σ_V = 0.021, σ_R = 0.019, σ_(V−R) = 0.028 mag. The accuracy of the zero point in this calibration is estimated to be ±0.035 mag for stars with colors in the range −0.1 mag < V−R < 1.2 mag. A comparison of calibrated MACHO photometry with published photometric sequences and new Hubble Space Telescope observations shows agreement. The current calibration zero-point uncertainty for the remainder of the MACHO photometry database is estimated to be ±0.10 mag in V or R and ±0.04 mag in V−R. We describe the first application of calibrated MACHO data: the construction of a color-magnitude diagram used to calculate our experimental sensitivity for detecting microlensing in the LMC. (c) 1999 The Astronomical Society of the Pacific.

  20. Calibration of the MACHO Photometry Database

    Energy Technology Data Exchange (ETDEWEB)

    Alcock, C.; Allsman, R. A.; Alves, D. R.; Axelrod, T. S.; Becker, A. C.; Bennett, D. P.; Cook, K. H.; Drake, A. J.; Freeman, K. C.; Geha, M. (and others)

    1999-12-01

    The MACHO Project is a microlensing survey that monitors the brightnesses of ∼60 million stars in the Large Magellanic Cloud (LMC), Small Magellanic Cloud, and Galactic bulge. Our database presently contains about 80 billion photometric measurements, a significant fraction of all astronomical photometry. We describe the calibration of MACHO two-color photometry and transformation to the standard Kron-Cousins V and R system. Calibrated MACHO photometry may be properly compared with all other observations on the Kron-Cousins standard system, enhancing the astrophysical value of these data. For ∼9 million stars in the LMC bar, independent photometric measurements of ∼20,000 stars with V ≲ 18 mag in field-overlap regions demonstrate an internal precision σ_V = 0.021, σ_R = 0.019, σ_(V−R) = 0.028 mag. The accuracy of the zero point in this calibration is estimated to be ±0.035 mag for stars with colors in the range −0.1 mag < V−R < 1.2 mag. A comparison of calibrated MACHO photometry with published photometric sequences and new Hubble Space Telescope observations shows agreement. The current calibration zero-point uncertainty for the remainder of the MACHO photometry database is estimated to be ±0.10 mag in V or R and ±0.04 mag in V−R. We describe the first application of calibrated MACHO data: the construction of a color-magnitude diagram used to calculate our experimental sensitivity for detecting microlensing in the LMC. (c) 1999 The Astronomical Society of the Pacific.

  1. The Pointing Self-calibration Algorithm for Aperture Synthesis Radio Telescopes

    Energy Technology Data Exchange (ETDEWEB)

    Bhatnagar, S.; Cornwell, T. J., E-mail: sbhatnag@nrao.edu [National Radio Astronomy Observatory, 1003 Lopezville Road, Socorro, NM 87801 (United States)

    2017-11-01

    This paper is concerned with algorithms for calibration of direction-dependent effects (DDE) in aperture synthesis radio telescopes (ASRT). After correction of direction-independent effects (DIE) using self-calibration, imaging performance can be limited by the imprecise knowledge of the forward gain of the elements in the array. In general, the forward gain pattern is directionally dependent and varies with time for a number of reasons. Some factors, such as rotation of the primary beam with Parallactic Angle for Azimuth-Elevation mount antennas, are known a priori. Others, such as antenna pointing errors and structural deformation/projection effects for aperture-array elements, cannot be measured a priori. Thus, in addition to algorithms to correct for DD effects known a priori, algorithms to solve for DD gains are required for high dynamic range imaging. Here, we discuss a mathematical framework for antenna-based DDE calibration algorithms and show that this framework leads to computationally efficient optimal algorithms that scale well in a parallel computing environment. As an example of an antenna-based DD calibration algorithm, we demonstrate the Pointing SelfCal (PSC) algorithm to solve for the antenna pointing errors. Our analysis shows that the sensitivity of modern ASRT is sufficient to solve for antenna pointing errors and other DD effects. We also discuss the use of the PSC algorithm in real-time calibration systems, and extensions to an antenna Shape SelfCal algorithm for real-time tracking of, and corrections for, pointing offsets and changes in antenna shape.

  2. The Pointing Self-calibration Algorithm for Aperture Synthesis Radio Telescopes

    Science.gov (United States)

    Bhatnagar, S.; Cornwell, T. J.

    2017-11-01

    This paper is concerned with algorithms for calibration of direction-dependent effects (DDE) in aperture synthesis radio telescopes (ASRT). After correction of direction-independent effects (DIE) using self-calibration, imaging performance can be limited by the imprecise knowledge of the forward gain of the elements in the array. In general, the forward gain pattern is directionally dependent and varies with time for a number of reasons. Some factors, such as rotation of the primary beam with Parallactic Angle for Azimuth-Elevation mount antennas, are known a priori. Others, such as antenna pointing errors and structural deformation/projection effects for aperture-array elements, cannot be measured a priori. Thus, in addition to algorithms to correct for DD effects known a priori, algorithms to solve for DD gains are required for high dynamic range imaging. Here, we discuss a mathematical framework for antenna-based DDE calibration algorithms and show that this framework leads to computationally efficient optimal algorithms that scale well in a parallel computing environment. As an example of an antenna-based DD calibration algorithm, we demonstrate the Pointing SelfCal (PSC) algorithm to solve for the antenna pointing errors. Our analysis shows that the sensitivity of modern ASRT is sufficient to solve for antenna pointing errors and other DD effects. We also discuss the use of the PSC algorithm in real-time calibration systems, and extensions to an antenna Shape SelfCal algorithm for real-time tracking of, and corrections for, pointing offsets and changes in antenna shape.

  3. Calibration of a laboratory spectrophotometer for specular light by means of stacked glass plates.

    Science.gov (United States)

    Allen, W. A.; Richardson, A. J.

    1971-01-01

    Stacked glass plates have been used to calibrate a laboratory spectrophotometer, over the spectral range 0.5-2.5 microns, for specular light. The uncalibrated instrument was characterized by systematic errors when used to measure the reflectance and transmittance of stacked glass plates. Calibration included first, a determination of the reflectance of a standard composed of barium sulfate paint deposited on an aluminum plate; second, the approximation of the reflectance and transmittance residuals between observed and computed values by means of cubic equations; and, finally, the removal of the systematic errors by a computer. The instrument, after calibration, was accurate to 1% when used to measure the reflectance and transmittance of stacked glass plates.
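    The cubic-residual correction step can be sketched as follows; the transmittance curve and the systematic-error polynomial are invented stand-ins for the barium sulfate standard data.

    ```python
    import numpy as np

    # Wavelength grid (microns) and a hypothetical "true" standard transmittance
    x = np.linspace(0.5, 2.5, 50)
    true = 0.9 * np.exp(-0.3 * (x - 0.5))

    # Instrument reading with a smooth systematic error (invented for the sketch)
    observed = true + (0.02 - 0.01 * x + 0.004 * x ** 2 - 0.001 * x ** 3)

    # Approximate the observed-minus-computed residuals by a cubic, then remove it
    coeffs = np.polyfit(x, observed - true, deg=3)
    corrected = observed - np.polyval(coeffs, x)
    ```

    Storing only the cubic coefficients lets the correction be applied automatically to every subsequent measurement, which is essentially the "removal of the systematic errors by a computer" that the abstract describes.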

  4. Energy calibration of a silicon detector using pure beta-emitters

    International Nuclear Information System (INIS)

    Borras, C.; Los Arcos, J.M.

    1992-01-01

    Energy calibration of Si detectors used in electron spectroscopy is commonly performed with conversion electron sources or monoenergetic electron beams, which are preferred over beta emitters due to the problems arising from their continuous spectra. This paper presents a simple calibration procedure for a PIP-type silicon detector, using 14C, 147Pm, 99Tc and 45Ca sources, that is based on the correspondence between the average channel observed in the experimental spectrum and the mean energy evaluated from the theoretical Fermi distribution for each nuclide. First, a method for evaluating the average channel in the experimental spectrum distorted by the electronic noise is described and its uncertainty estimated. Then, the channel-energy relationship is established by least-squares fitting modified to account for uncertainties in both variables. The calibration has been successfully verified with 147Pm and 109Cd sources, showing discrepancies not greater than 2.5%, within the uncertainties due to the detector resolution and the source features. (author)
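    A least-squares fit that accounts for uncertainties in both the channel and the energy can be sketched with orthogonal distance regression; the channel/energy pairs and their uncertainties below are fabricated illustration values, not the paper's measurements.

    ```python
    import numpy as np
    from scipy import odr

    # Fabricated mean-channel / mean-beta-energy pairs for four nuclides,
    # each with an uncertainty on BOTH axes.
    channel = np.array([52.0, 130.0, 210.0, 318.0])
    energy = np.array([49.5, 124.0, 201.0, 304.0])    # keV
    ch_err = np.array([1.5, 2.0, 2.5, 3.0])
    en_err = np.array([0.5, 1.0, 1.5, 2.0])

    # Orthogonal distance regression weights residuals along both axes
    linear = odr.Model(lambda b, ch: b[0] * ch + b[1])
    data = odr.RealData(channel, energy, sx=ch_err, sy=en_err)
    fit = odr.ODR(data, linear, beta0=[1.0, 0.0]).run()
    slope, intercept = fit.beta   # keV per channel, and offset
    ```

    Unlike ordinary least squares, which assumes the abscissa is exact, this treatment matches the paper's requirement that the average-channel uncertainty enter the fit alongside the energy uncertainty.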

  5. Energy Calibration of a Silicon Detector Using Pure Beta-Emitters

    International Nuclear Information System (INIS)

    Borras, C.; Arcos, J. M. los

    1992-01-01

    Energy calibration of Si detectors used in electron spectroscopy is commonly performed with conversion electron sources or monoenergetic electron beams, which are preferred over beta emitters due to the problems arising from their continuous spectra. This paper presents a simple calibration procedure for a PIP-type silicon detector, using 14C, 147Pm, 99Tc and 45Ca sources, that is based on the correspondence between the average channel observed in the experimental spectrum and the mean energy evaluated from the theoretical Fermi distribution for each nuclide. First, a method for evaluating the average channel in the experimental spectrum distorted by the electronic noise is described and its uncertainty estimated. Then, the channel-energy relationship is established by least-squares fitting modified to account for uncertainties in both variables. The calibration has been successfully verified with 147Pm and 109Cd sources, showing discrepancies not greater than 2.5%, within the uncertainties due to the detector resolution and the source features. (Author)

  6. Stochastic isotropic hyperelastic materials: constitutive calibration and model selection

    Science.gov (United States)

    Mihai, L. Angela; Woolley, Thomas E.; Goriely, Alain

    2018-03-01

    Biological and synthetic materials often exhibit intrinsic variability in their elastic responses under large strains, owing to microstructural inhomogeneity or when elastic data are extracted from viscoelastic mechanical tests. For these materials, although hyperelastic models calibrated to mean data are useful, stochastic representations accounting also for data dispersion carry extra information about the variability of material properties found in practical applications. We combine finite elasticity and information theories to construct homogeneous isotropic hyperelastic models with random field parameters calibrated to discrete mean values and standard deviations of either the stress-strain function or the nonlinear shear modulus, which is a function of the deformation, estimated from experimental tests. These quantities can take on different values, corresponding to possible outcomes of the experiments. As multiple models can be derived that adequately represent the observed phenomena, we apply Occam's razor by providing an explicit criterion for model selection based on Bayesian statistics. We then employ this criterion to select a model among competing models calibrated to experimental data for rubber and brain tissue under single or multiaxial loads.
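The model-selection step can be illustrated in a simplified form with an information criterion. The sketch below fits a one-parameter neo-Hookean and a two-parameter Mooney-Rivlin expression to synthetic uniaxial stress-stretch data and compares them with the BIC, a rough stand-in for the full Bayesian criterion used in the paper; all parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = np.linspace(1.1, 2.0, 20)        # uniaxial stretch
base = lam - lam**-2                   # common kinematic factor

# Synthetic Mooney-Rivlin-type "experimental" data (illustrative parameters)
stress = 2.0 * (1.0 + 0.8 / lam) * base + rng.normal(0, 0.01, lam.size)

def bic(rss, n, k):
    # Gaussian-error Bayesian information criterion
    return n * np.log(rss / n) + k * np.log(n)

# Model 1: neo-Hookean, one material parameter
A1 = base[:, None]
_, rss1 = np.linalg.lstsq(A1, stress, rcond=None)[:2]

# Model 2: Mooney-Rivlin, two material parameters
A2 = np.column_stack([base, base / lam])
_, rss2 = np.linalg.lstsq(A2, stress, rcond=None)[:2]

bic1, bic2 = bic(rss1[0], 20, 1), bic(rss2[0], 20, 2)
# The lower BIC identifies the model that generated the data.
```

The BIC penalty `k*log(n)` plays the role of Occam's razor: extra parameters must buy a substantial drop in residual error.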

  7. Identifying influential data points in hydrological model calibration and their impact on streamflow predictions

    Science.gov (United States)

    Wright, David; Thyer, Mark; Westra, Seth

    2015-04-01

    Highly influential data points are those that have a disproportionately large impact on model performance, parameters and predictions. However, in current hydrological modelling practice the relative influence of individual data points on hydrological model calibration is not commonly evaluated. This presentation illustrates and evaluates several influence diagnostic tools that hydrological modellers can use to assess the relative influence of data. The feasibility and importance of including influence detection diagnostics as a standard tool in hydrological model calibration is discussed. Two classes of influence diagnostics are evaluated: (1) computationally demanding numerical "case deletion" diagnostics; and (2) computationally efficient analytical diagnostics, based on Cook's distance. These diagnostics are compared against hydrologically orientated diagnostics that describe changes in the model parameters (measured through the Mahalanobis distance), performance (objective function displacement) and predictions (mean and maximum streamflow). These influence diagnostics are applied to two case studies: a stage/discharge rating curve model, and a conceptual rainfall-runoff model (GR4J). Removing a single data point from the calibration resulted in differences to mean flow predictions of up to 6% for the rating curve model, and differences to mean and maximum flow predictions of up to 10% and 17%, respectively, for the hydrological model. When using the Nash-Sutcliffe efficiency in calibration, the computationally cheaper Cook's distance metrics produce similar results to the case-deletion metrics at a fraction of the computational cost. However, Cook's distance is adapted from linear regression with inherent assumptions about the data and is therefore less flexible than case deletion. Influential point detection diagnostics show great potential to improve current hydrological modelling practices by identifying highly influential data points. The findings of this
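The analytical diagnostic mentioned above, Cook's distance, can be computed directly from the residuals and leverages of a linear fit. The sketch below uses synthetic data (not the study's), injects one aberrant point, and shows that it receives the largest influence score:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 30)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, x.size)
y[10] += 8.0                                   # inject one highly influential point

X = np.column_stack([np.ones_like(x), x])      # design matrix with intercept
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

p = X.shape[1]                                 # number of fitted parameters
s2 = resid @ resid / (len(y) - p)              # residual variance estimate
h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)  # leverages (hat-matrix diagonal)

# Cook's distance: D_i = e_i^2 / (p * s^2) * h_ii / (1 - h_ii)^2
cooks = resid**2 / (p * s2) * h / (1.0 - h) ** 2
```

Unlike case deletion, this requires only a single fit, which is why it is so much cheaper, at the price of the linear-regression assumptions noted in the abstract.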

  8. Video Analysis of Projectile Motion Using Tablet Computers as Experimental Tools

    Science.gov (United States)

    Klein, P.; Gröber, S.; Kuhn, J.; Müller, A.

    2014-01-01

    Tablet computers were used as experimental tools to record and analyse the motion of a ball thrown vertically from a moving skateboard. Special applications plotted the measurement data component by component, allowing a simple determination of initial conditions and "g" in order to explore the underlying laws of motion. This experiment…

  9. Calibrations of a tritium extraction facility

    International Nuclear Information System (INIS)

    Bretscher, M.M.; Oliver, B.M.; Farrar, H. IV.

    1983-01-01

    A tritium extraction facility has been built for the purpose of measuring the absolute tritium concentration in neutron-irradiated lithium metal samples. Two independent calibration procedures have been used to determine what fraction, if any, of tritium is lost during the extraction process. The first procedure compares independently measured 4He and 3H concentrations from the 6Li(n,α)T reaction. The second procedure compares measured 6Li(n,α)T / 197Au(n,γ)198Au thermal neutron reaction rate ratios with those obtained from Monte Carlo calculations using well-known cross sections. Both calibration methods show that within experimental errors (approx. 1.5%) no tritium is lost during the extraction process.

  10. Modeling an Excitable Biosynthetic Tissue with Inherent Variability for Paired Computational-Experimental Studies.

    Directory of Open Access Journals (Sweden)

    Tanmay A Gokhale

    2017-01-01

    Full Text Available To understand how excitable tissues give rise to arrhythmias, it is crucially necessary to understand the electrical dynamics of cells in the context of their environment. Multicellular monolayer cultures have proven useful for investigating arrhythmias and other conduction anomalies, and because of their relatively simple structure, these constructs lend themselves to paired computational studies that often help elucidate mechanisms of the observed behavior. However, tissue cultures of cardiomyocyte monolayers currently require the use of neonatal cells with ionic properties that change rapidly during development and have thus been poorly characterized and modeled to date. Recently, Kirkton and Bursac demonstrated the ability to create biosynthetic excitable tissues from genetically engineered and immortalized HEK293 cells with well-characterized electrical properties and the ability to propagate action potentials. In this study, we developed and validated a computational model of these excitable HEK293 cells (called "Ex293" cells) using existing electrophysiological data and a genetic search algorithm. In order to reproduce not only the mean but also the variability of experimental observations, we examined what sources of variation were required in the computational model. Random cell-to-cell and inter-monolayer variation in both ionic conductances and tissue conductivity was necessary to explain the experimentally observed variability in action potential shape and macroscopic conduction, and the spatial organization of cell-to-cell conductance variation was found to not impact macroscopic behavior; the resulting model accurately reproduces both normal and drug-modified conduction behavior. The development of a computational Ex293 cell and tissue model provides a novel framework to perform paired computational-experimental studies to study normal and abnormal conduction in multidimensional excitable tissue, and the methodology of modeling

  11. Using LabVIEW to facilitate calibration and verification for respiratory impedance plethysmography.

    Science.gov (United States)

    Ellis, W S; Jones, R T

    1991-12-01

    A system for calibrating the Respitrace impedance plethysmograph was developed with the capacity to quantitatively verify the accuracy of calibration. LabVIEW software was used on a Macintosh II computer to create a user-friendly environment, with the added benefit of reducing development time. The system developed enabled a research assistant to calibrate the Respitrace within 15 min while achieving an accuracy within the normally accepted 10% deviation when the Respitrace output is compared to a water spirometer standard. The system and methods described were successfully used in a study of 10 subjects smoking cigarettes containing marijuana or cocaine under four conditions, calibrating all subjects to 10% accuracy within 15 min.
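The calibration described above amounts to finding band coefficients that best reproduce a spirometer volume, then checking the fit against the 10% acceptance criterion. A minimal sketch with synthetic rib-cage (RC) and abdomen (AB) band signals; all coefficients and noise levels are invented, not Respitrace values:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical calibration data: band signals versus simultaneous
# spirometer volume, in arbitrary units.
rc = rng.uniform(0.5, 2.0, 40)
ab = rng.uniform(0.5, 2.0, 40)
vol = 0.6 * rc + 0.4 * ab + rng.normal(0, 0.01, 40)

# Least-squares fit of volume = a*RC + b*AB
A = np.column_stack([rc, ab])
coef, *_ = np.linalg.lstsq(A, vol, rcond=None)

# Verification step: compare the calibrated output to the spirometer standard
pred = A @ coef
rel_err = np.abs(pred - vol) / vol
within_10_percent = bool(rel_err.max() < 0.10)
```

In the study, the analogous check was performed against a water spirometer after each 15-minute calibration session.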

  12. CMS Alignement and Calibration workflows: lesson learned and future plans

    CERN Document Server

    AUTHOR|(CDS)2069172

    2014-01-01

    We review the online and offline workflows designed to align and calibrate the CMS detector. Starting from the experience gained during the first LHC run, we discuss the expected developments for Run II. In particular, we describe the envisioned different stages, from the alignment using cosmic ray data to the detector alignment and calibration using the first proton-proton collision data (O(100 pb-1)) and a larger dataset (O(1 fb-1)) to reach the target precision. The automation of the workflow and its integration in the online and offline activity (dedicated triggers and datasets, data skims, workflows to compute the calibration and alignment constants) are discussed.

  13. Global Calibration of Multiple Cameras Based on Sphere Targets

    Directory of Open Access Journals (Sweden)

    Junhua Sun

    2016-01-01

    Full Text Available Global calibration methods for multi-camera systems are critical to the accuracy of vision measurement. Proposed in this paper is such a method based on several groups of sphere targets and a precision auxiliary camera. Each camera to be calibrated observes a group of spheres (at least three), while the auxiliary camera observes all the spheres. The global calibration can be achieved after each camera reconstructs the sphere centers in its field of view. In the process of reconstructing a sphere center, a parameter equation is used to describe the sphere projection model. Theoretical analysis and computer simulation are carried out to analyze the factors that affect the calibration accuracy. Simulation results show that the parameter equation can largely improve the reconstruction accuracy. In the experiments, a two-camera system calibrated by our method is used to measure a distance of about 578 mm, and the root mean squared error is within 0.14 mm. Furthermore, the experiments indicate that the method is simple to operate and offers good flexibility, especially for on-site multiple cameras without a common field of view.

  14. Experiments on mobile robot stereo vision system calibration under hardware imperfection

    Directory of Open Access Journals (Sweden)

    Safin Ramil

    2018-01-01

    Full Text Available Calibration is essential for any robot vision system to achieve high accuracy in deriving metric information about objects. One typical requirement for a stereo vision system to obtain better calibration results is to guarantee that both cameras keep the same vertical level. However, cameras may be displaced due to severe robot operating conditions or other circumstances. This paper presents our experimental approach to the problem of mobile robot stereo vision system calibration under a hardware imperfection. In our experiments, we used the crawler-type mobile robot «Servosila Engineer». The stereo system cameras of the robot were displaced relative to each other, causing loss of surrounding environment information. We implemented and verified checkerboard and circle grid based calibration methods. The comparison of the two methods demonstrated that a circle grid based calibration should be preferred over a classical checkerboard calibration approach.

  15. CryoSat/SIRAL Cal1 Calibration Orbits

    Science.gov (United States)

    Scagliola, Michele; Fornari, Marco; Bouffard, Jerome; Parrinello, Tommaso

    2017-04-01

    The main payload of CryoSat is a Ku-band pulse-width limited radar altimeter, called SIRAL (Synthetic interferometric radar altimeter), that transmits pulses at a high pulse repetition frequency, thus making the received echoes phase coherent and suitable for SAR processing. This makes it possible to reach an along-track resolution that is significantly improved with respect to traditional pulse-width limited altimeters. Because SIRAL is a phase-coherent pulse-width limited radar altimeter, a proper calibration approach has been developed. In fact, not only corrections for transfer function, gain and instrument path delay have to be computed (as in previous altimeters), but also corrections for phase (SAR/SARIn) and phase difference between the two receiving chains (SARIn only). Recalling that CryoSat's orbit has a high inclination of 92° and is non-sun-synchronous, the temperature of SIRAL changes continuously along the orbit with a period of about 480 days and is also a function of the ascending/descending passes. By analysis of the CAL1 calibration corrections, it has been verified that the internal path delay and the instrument gain variation measured on SIRAL are affected by the thermal status of the instrument, and as a consequence they are expected to vary along the orbit. In order to gain knowledge of the calibration corrections (i.e. the instrument behavior) as a function of latitude and temperature, it has been planned to command a small number of orbits where only CAL1 calibration acquisitions are continuously performed. The analysis of the CAL1 calibration corrections produced along the calibration orbits can also be useful to verify whether the current calibration plan is able to provide sufficiently accurate corrections for the instrument acquisitions at any latitude. In 2016, the CryoSat/SIRAL Cal1 Calibration Orbits were commanded twice, first on 20 July 2016 and again on 24 November 2016, and they

  16. Initial absolute calibration factors for some neutron sensitive self-powered detectors

    International Nuclear Information System (INIS)

    Kroon, J.

    1975-01-01

    Self-powered flux detectors have found extensive use as monitoring devices in PWR (Pressurized Water Reactor) cores and CANDU (Canada Deuterium Uranium) type power reactors. The detectors measure fuel power distributions and indicate trip parameters for reactor control and safety requirements. Both applications demand accurate absolute initial calibration factors. Experimental results obtained in calibrating some neutron sensitive self-powered detectors are presented. (author)

  17. Study of the GPS inter-frequency calibration of timing receivers

    Science.gov (United States)

    Defraigne, P.; Huang, W.; Bertrand, B.; Rovera, D.

    2018-02-01

    When calibrating Global Positioning System (GPS) stations dedicated to timing, the hardware delays of P1 and P2, the P(Y)-codes on frequencies L1 and L2, are determined separately. In the international atomic time (TAI) network the GPS stations of the time laboratories are calibrated relatively against reference stations. This paper aims to determine the consistency between the P1 and P2 hardware delays (called dP1 and dP2) of these reference stations, and to examine the stability of the inter-signal hardware delays dP1-dP2 of all the stations in the network. The method consists of determining dP1-dP2 directly from the GPS pseudorange measurements corrected for the frequency-dependent antenna phase center and the frequency-dependent ionosphere corrections, and then comparing these computed dP1-dP2 to the calibrated values. Our results show that the differences between the computed and calibrated dP1-dP2 are well inside the expected combined uncertainty of the two quantities. Furthermore, the consistency between the calibrated time transfer solutions obtained from either single-frequency P1 or dual-frequency P3 for reference laboratories is shown to be about 1.0 ns, well inside the 2.1 ns uB uncertainty of a time transfer link based on GPS P3 or Precise Point Positioning. This demonstrates the good consistency between the P1 and P2 hardware delays of the reference stations used for calibration in the TAI network. The long-term stability of the inter-signal hardware delays is also analysed from the computed dP1-dP2. It is shown that only variations larger than 2 ns can be detected for a particular station, while variations of 200 ps can be detected when differentiating the results between two stations. Finally, we also show that in the differential calibration process as used in the TAI network, using the same antenna phase center or using different positions for the L1 and L2 signals gives maximum differences of 200 ps on the hardware delays of the separate
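The computation of dP1-dP2 from pseudoranges rests on the geometry-free combination: with gamma = (f1/f2)^2, the difference P1 - P2 equals (1 - gamma) times the L1 ionospheric delay plus the inter-signal bias, so subtracting an external ionosphere estimate and averaging over epochs isolates the bias. A synthetic sketch, with all delay values invented:

```python
import numpy as np

F1, F2 = 1575.42e6, 1227.60e6        # GPS L1/L2 carrier frequencies, Hz
gamma = (F1 / F2) ** 2               # ionospheric scaling factor, ~1.647

rng = np.random.default_rng(2)
n = 100
rho = rng.uniform(2.0e7, 2.5e7, n)   # geometry + clocks (cancels in P1-P2), m
iono = rng.uniform(2.0, 10.0, n)     # slant ionospheric delay on L1, m
dP = 1.2                             # true dP1-dP2 inter-signal bias, m (invented)

# Simulated pseudoranges with 0.3 m code noise; 0.7 m stands in for dP1
p1 = rho + iono + 0.7 + rng.normal(0, 0.3, n)
p2 = rho + gamma * iono + (0.7 - dP) + rng.normal(0, 0.3, n)

# Geometry-free combination: P1 - P2 = (1-gamma)*iono + (dP1-dP2) + noise
est = np.mean(p1 - p2 - (1.0 - gamma) * iono)
```

In the paper the ionosphere estimate additionally carries frequency-dependent antenna phase-center corrections, which this toy model omits.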

  18. Calibration belt for quality-of-care assessment based on dichotomous outcomes.

    Directory of Open Access Journals (Sweden)

    Stefano Finazzi

    Full Text Available Prognostic models applied in medicine must be validated on independent samples, before their use can be recommended. The assessment of calibration, i.e., the model's ability to provide reliable predictions, is crucial in external validation studies. Besides having several shortcomings, statistical techniques such as the computation of the standardized mortality ratio (SMR) and its confidence intervals, the Hosmer-Lemeshow statistics, and the Cox calibration test are all non-informative with respect to calibration across risk classes. Accordingly, calibration plots reporting expected versus observed outcomes across risk subsets have been used for many years. Erroneously, the points in the plot (frequently representing deciles of risk) have been connected with lines, generating false calibration curves. Here we propose a methodology to create a confidence band for the calibration curve based on a function that relates expected to observed probabilities across classes of risk. The calibration belt allows the ranges of risk to be spotted where there is a significant deviation from the ideal calibration, and the direction of the deviation to be indicated. This method thus offers a more analytical view in the assessment of quality of care, compared to other approaches.
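A conventional calibration plot of the kind criticized above bins predictions into deciles of risk and compares the expected and observed event rates per bin. A minimal sketch with synthetic, perfectly calibrated predictions (the calibration belt itself requires the fitted function and confidence band described in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
p = rng.uniform(0.05, 0.9, 20000)       # predicted risks for 20000 patients
y = rng.uniform(size=p.size) < p        # outcomes drawn FROM the predictions
                                        # (i.e., a perfectly calibrated model)

# Decile bins of predicted risk
edges = np.quantile(p, np.linspace(0, 1, 11))
idx = np.clip(np.searchsorted(edges, p, side="right") - 1, 0, 9)

expected = np.array([p[idx == k].mean() for k in range(10)])  # mean predicted risk
observed = np.array([y[idx == k].mean() for k in range(10)])  # observed event rate
max_gap = float(np.max(np.abs(expected - observed)))
```

For a well-calibrated model the decile points scatter around the identity line; the paper's contribution is to replace the misleading connect-the-dots "curve" with a statistically grounded confidence band.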

  19. Validation of smart sensor technologies for instrument calibration reduction in nuclear power plants

    International Nuclear Information System (INIS)

    Hashemian, H.M.; Mitchell, D.W.; Petersen, K.M.; Shell, C.S.

    1993-01-01

    This report presents the preliminary results of a research and development project on the validation of new techniques for on-line testing of calibration drift of process instrumentation channels in nuclear power plants. These techniques generally involve a computer-based data acquisition and data analysis system to trend the output of a large number of instrument channels and identify the channels that have drifted out of tolerance. This helps limit the calibration effort to those channels which need the calibration, as opposed to the current nuclear industry practice of calibrating essentially all the safety-related instrument channels at every refueling outage
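A simple version of such on-line drift monitoring compares each channel against a robust estimate formed from redundant channels and flags those outside tolerance. A sketch with synthetic readings; the tolerance and drift magnitudes are invented, not plant values:

```python
import numpy as np

rng = np.random.default_rng(2)
true_value = 100.0
# 8 redundant instrument channels sampled over 500 time steps
readings = true_value + rng.normal(0, 0.2, size=(8, 500))
readings[3] += np.linspace(0, 3.0, 500)        # channel 3 slowly drifts out of tolerance

estimate = np.median(readings, axis=0)         # robust process-parameter estimate
deviation = (readings - estimate).mean(axis=1) # average deviation of each channel
tolerance = 0.5                                # illustrative drift tolerance
flagged = np.where(np.abs(deviation) > tolerance)[0]
```

Only the flagged channels would then be scheduled for a manual calibration, which is the effort reduction the report describes.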

  20. Planck 2013 results. V. LFI calibration

    CERN Document Server

    Aghanim, N; Arnaud, M; Ashdown, M; Atrio-Barandela, F; Aumont, J; Baccigalupi, C; Banday, A J; Barreiro, R B; Battaner, E; Benabed, K; Benoît, A; Benoit-Lévy, A; Bernard, J -P; Bersanelli, M; Bielewicz, P; Bobin, J; Bock, J J; Bonaldi, A; Bonavera, L; Bond, J R; Borrill, J; Bouchet, F R; Bridges, M; Bucher, M; Burigana, C; Butler, R C; Cappellini, B; Cardoso, J -F; Catalano, A; Chamballu, A; Chen, X; Chiang, L -Y; Christensen, P R; Church, S; Colombi, S; Colombo, L P L; Crill, B P; Curto, A; Cuttaia, F; Danese, L; Davies, R D; Davis, R J; de Bernardis, P; de Rosa, A; de Zotti, G; Delabrouille, J; Dickinson, C; Diego, J M; Dole, H; Donzelli, S; Doré, O; Douspis, M; Dupac, X; Efstathiou, G; Enßlin, T A; Eriksen, H K; Finelli, F; Forni, O; Frailis, M; Franceschi, E; Gaier, T C; Galeotta, S; Ganga, K; Giard, M; Giardino, G; Giraud-Héraud, Y; Gjerløw, E; González-Nuevo, J; Górski, K M; Gratton, S; Gregorio, A; Gruppuso, A; Hansen, F K; Hanson, D; Harrison, D; Henrot-Versillé, S; Hernández-Monteagudo, C; Herranz, D; Hildebrandt, S R; Hivon, E; Hobson, M; Holmes, W A; Hornstrup, A; Hovest, W; Huffenberger, K M; Jaffe, T R; Jaffe, A H; Jewell, J; Jones, W C; Juvela, M; Kangaslahti, P; Keihänen, E; Keskitalo, R; Kisner, T S; Knoche, J; Knox, L; Kunz, M; Kurki-Suonio, H; Lagache, G; Lähteenmäki, A; Lamarre, J -M; Lasenby, A; Laureijs, R J; Lawrence, C R; Leach, S; Leahy, J P; Leonardi, R; Lesgourgues, J; Liguori, M; Lilje, P B; Linden-Vørnle, M; López-Caniego, M; Lubin, P M; Macías-Pérez, J F; Maino, D; Mandolesi, N; Maris, M; Marshall, D J; Martin, P G; Martínez-González, E; Masi, S; Matarrese, S; Matthai, F; Mazzotta, P; Meinhold, P R; Melchiorri, A; Mendes, L; Mennella, A; Migliaccio, M; Mitra, S; Moneti, A; Montier, L; Morgante, G; Mortlock, D; Moss, A; Munshi, D; Naselsky, P; Natoli, P; Netterfield, C B; Nørgaard-Nielsen, H U; Novikov, D; Novikov, I; O'Dwyer, I J; Osborne, S; Paci, F; Pagano, L; Paladini, R; Paoletti, D; Partridge, B; Pasian, F; Patanchon, G; 
Pearson, D; Peel, M; Perdereau, O; Perotto, L; Perrotta, F; Pierpaoli, E; Pietrobon, D; Plaszczynski, S; Pointecouteau, E; Polenta, G; Ponthieu, N; Popa, L; Poutanen, T; Pratt, G W; Prézeau, G; Prunet, S; Puget, J -L; Rachen, J P; Rebolo, R; Reinecke, M; Remazeilles, M; Ricciardi, S; Riller, T; Rocha, G; Rosset, C; Rossetti, M; Roudier, G; Rubiño-Martín, J A; Rusholme, B; Sandri, M; Santos, D; Scott, D; Seiffert, M D; Shellard, E P S; Spencer, L D; Starck, J -L; Stolyarov, V; Stompor, R; Sureau, F; Sutton, D; Suur-Uski, A -S; Sygnet, J -F; Tauber, J A; Tavagnacco, D; Terenzi, L; Toffolatti, L; Tomasi, M; Tristram, M; Tucci, M; Tuovinen, J; Türler, M; Umana, G; Valenziano, L; Valiviita, J; Van Tent, B; Varis, J; Vielva, P; Villa, F; Vittorio, N; Wade, L A; Wandelt, B D; Watson, R; Wilkinson, A; Yvon, D; Zacchei, A; Zonca, A

    2014-01-01

    We discuss the methods employed to photometrically calibrate the data acquired by the Low Frequency Instrument on Planck. Our calibration is based on a combination of the Orbital Dipole and the Solar Dipole, caused respectively by the motion of the Planck spacecraft with respect to the Sun and by the motion of the Solar System with respect to the CMB rest frame. The latter provides a signal of a few mK with the same spectrum as the CMB anisotropies and is visible throughout the mission. In this data release we rely on the characterization of the Solar Dipole as measured by WMAP. We also present preliminary results (at 44 GHz only) on the study of the Orbital Dipole, which agree with the WMAP value of the Solar System speed within our uncertainties. We compute the calibration constant for each radiometer roughly once per hour, in order to keep track of changes in the detectors' gain. Since non-idealities in the optical response of the beams proved to be important, we implemented a fast convolution algorithm which ...
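The hourly gain estimate amounts to regressing the time-ordered data against the known dipole template. A toy sketch; the gain, offset and noise values are invented for illustration, not LFI numbers:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
# Dipole template in kelvin as the beam scans across the sky (illustrative
# amplitude close to the ~3.35 mK Solar Dipole).
dipole = 3.35e-3 * np.cos(np.linspace(0, 2 * np.pi, n))

true_gain, true_offset = 55.0, 12.0      # V/K and V, invented values
tod = true_gain * dipole + true_offset + rng.normal(0, 1e-3, n)  # time-ordered data

# Least-squares fit of tod = gain*dipole + offset recovers the calibration constant
A = np.column_stack([dipole, np.ones(n)])
gain, offset = np.linalg.lstsq(A, tod, rcond=None)[0]
```

Repeating this fit on roughly hour-long chunks of data tracks the slow gain changes mentioned in the abstract; the real pipeline additionally convolves the template with the measured beam response.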

  1. IMU-based online kinematic calibration of robot manipulator.

    Science.gov (United States)

    Du, Guanglong; Zhang, Ping

    2013-01-01

    Robot calibration is a useful diagnostic method for improving the positioning accuracy in robot production and maintenance. An online robot self-calibration method based on an inertial measurement unit (IMU) is presented in this paper. The method requires that the IMU is rigidly attached to the robot manipulator, which makes it possible to obtain the orientation of the manipulator from the orientation of the IMU in real time. This paper proposes an efficient approach which incorporates the Factored Quaternion Algorithm (FQA) and a Kalman Filter (KF) to estimate the orientation of the IMU. Then, an Extended Kalman Filter (EKF) is used to estimate kinematic parameter errors. Using this proposed orientation estimation method will result in improved reliability and accuracy in determining the orientation of the manipulator. Compared with the existing vision-based self-calibration methods, the great advantage of this method is that it does not need complex steps, such as camera calibration, image capture, and corner detection, which makes the robot calibration procedure more autonomous in a dynamic manufacturing environment. Experimental studies on a GOOGOL GRB3016 robot show that this method has better accuracy, convenience, and effectiveness than vision-based methods.

  2. IMU-Based Online Kinematic Calibration of Robot Manipulator

    Directory of Open Access Journals (Sweden)

    Guanglong Du

    2013-01-01

    Full Text Available Robot calibration is a useful diagnostic method for improving the positioning accuracy in robot production and maintenance. An online robot self-calibration method based on an inertial measurement unit (IMU) is presented in this paper. The method requires that the IMU is rigidly attached to the robot manipulator, which makes it possible to obtain the orientation of the manipulator from the orientation of the IMU in real time. This paper proposes an efficient approach which incorporates the Factored Quaternion Algorithm (FQA) and a Kalman Filter (KF) to estimate the orientation of the IMU. Then, an Extended Kalman Filter (EKF) is used to estimate kinematic parameter errors. Using this proposed orientation estimation method will result in improved reliability and accuracy in determining the orientation of the manipulator. Compared with the existing vision-based self-calibration methods, the great advantage of this method is that it does not need complex steps, such as camera calibration, image capture, and corner detection, which makes the robot calibration procedure more autonomous in a dynamic manufacturing environment. Experimental studies on a GOOGOL GRB3016 robot show that this method has better accuracy, convenience, and effectiveness than vision-based methods.

  3. Direct calibration of PICKY-designed microarrays

    Directory of Open Access Journals (Sweden)

    Ronald Pamela C

    2009-10-01

    Full Text Available Abstract Background Few microarrays have been quantitatively calibrated to identify optimal hybridization conditions because it is difficult to precisely determine the hybridization characteristics of a microarray using biologically variable cDNA samples. Results Using synthesized samples with known concentrations of specific oligonucleotides, a series of microarray experiments was conducted to evaluate microarrays designed by PICKY, an oligo microarray design software tool, and to test a direct microarray calibration method based on the PICKY-predicted, thermodynamically closest nontarget information. The complete set of microarray experiment results is archived in the GEO database with series accession number GSE14717. Additional data files and Perl programs described in this paper can be obtained from the website http://www.complex.iastate.edu under the PICKY Download area. Conclusion PICKY-designed microarray probes are highly reliable over a wide range of hybridization temperatures and sample concentrations. The microarray calibration method reported here allows researchers to experimentally optimize their hybridization conditions. Because this method is straightforward, uses existing microarrays and relatively inexpensive synthesized samples, it can be used by any lab that uses microarrays designed by PICKY. In addition, other microarrays can be reanalyzed by PICKY to obtain the thermodynamically closest nontarget information for calibration.

  4. A computer program to evaluate the experimental data in instrumental multielement neutron activation analysis

    International Nuclear Information System (INIS)

    Greim, L.; Motamedi, K.; Niedergesaess, R.

    1976-01-01

    A computer code evaluating experimental data of neutron activation analysis (NAA) for the determination of atomic abundances is described. The experimental data are, besides a probe designation, the probe weight, irradiation parameters and a Ge(Li) pulse-height spectrum from the activity measurement. The organisation of the necessary nuclear data, comprising all methods of activation in reactor irradiations, is given. Furthermore, the automatic evaluation of spectra, the assignment of the resulting peaks to nuclei and the calculation of atomic abundances are described. The complete evaluation of a spectrum with many lines, e.g. 100 lines of 20 nuclei, takes less than 1 minute of machine time on the TR 440 computer. (orig.) [de
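The abundance calculation in such NAA codes inverts the standard activation, decay and counting equation for the number of target atoms. A hedged sketch; the function name and all numerical values are illustrative, not taken from the described code:

```python
import math

def atoms_from_peak(counts, sigma, flux, lam, t_irr, t_decay, t_count, eff, p_gamma):
    """Invert the standard activation equation for the number of target atoms N.

    counts = N * sigma * flux * (1 - exp(-lam*t_irr))   # buildup during irradiation
             * exp(-lam*t_decay)                        # decay before counting
             * (1 - exp(-lam*t_count)) / lam            # integration over counting
             * eff * p_gamma                            # detector efficiency, emission prob.
    """
    sat = 1.0 - math.exp(-lam * t_irr)
    cool = math.exp(-lam * t_decay)
    cnt = (1.0 - math.exp(-lam * t_count)) / lam
    return counts / (sigma * flux * sat * cool * cnt * eff * p_gamma)

# Round-trip check with invented numbers: 5 h half-life, 1 b cross section,
# 1e13 n/cm2/s flux, 1 h irradiation, 30 min cooling, 10 min counting.
lam = math.log(2) / (5 * 3600)
N_true = 1.0e18
c = (N_true * 1e-24 * 1e13 * (1 - math.exp(-lam * 3600))
     * math.exp(-lam * 1800) * (1 - math.exp(-lam * 600)) / lam * 0.05 * 0.9)
N_rec = atoms_from_peak(c, 1e-24, 1e13, lam, 3600, 1800, 600, 0.05, 0.9)
```

Dividing N by Avogadro's number and multiplying by the molar mass and the probe weight then yields the abundance reported by such a code.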

  5. Calibration of three-axis magnetometers with differential evolution algorithm

    International Nuclear Information System (INIS)

    Pang, Hongfeng; Zhang, Qi; Wang, Wei; Wang, Junya; Li, Ji; Luo, Shitu; Wan, Chengbiao; Chen, Dixiang; Pan, Mengchun; Luo, Feilu

    2013-01-01

    The accuracy of three-axis magnetometers is influenced by the different scale and bias of each axis and the nonorthogonality between axes. One limitation of traditional iteration methods is that the initial parameters influence the calibration, thus leading to local optima or wrong results. In this paper, a new method is proposed to calibrate three-axis magnetometers. To employ this method, a nonmagnetic rotation platform, a proton magnetometer, a DM-050 three-axis magnetometer and the differential evolution (DE) algorithm are used. The performance of this calibration method is analyzed with simulation and experiment. In simulation, the calibration results of DE, unscented Kalman filter (UKF), recursive least squares (RLS) and genetic algorithm (GA) are compared. The RMS error using DE is the least, being reduced from 81.233 nT to 1.567 nT. Experimental results show that compared with UKF, RLS and GA, the DE algorithm has not only the least calibration error but also the best robustness. After calibration, the RMS error is reduced from 68.914 nT to 2.919 nT. In addition, the DE algorithm is not sensitive to initial parameters, which is an important advantage compared with traditional iteration algorithms. The proposed algorithm can avoid the troublesome procedure of selecting suitable initial parameters, and thus it can improve the calibration performance of three-axis magnetometers. - Highlights: • The calibration results and robustness of the UKF, GA, RLS and DE algorithms are analyzed. • The calibration error of DE is the least in simulation and experiment. • Compared with traditional calibration algorithms, DE is not sensitive to initial parameters. • It can improve the calibration performance of three-axis magnetometers
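The DE-based calibration can be sketched with SciPy's `differential_evolution`: choose per-axis scales and biases so that the magnitude of the corrected vector matches the scalar reference field, here standing in for the proton-magnetometer value. Nonorthogonality is omitted for brevity and all error values are invented:

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(4)
B0 = 50000.0                                   # scalar reference field, nT (illustrative)
u = rng.normal(size=(200, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)  # 200 random sensor attitudes
B_true = B0 * u

# Simulated sensor errors: per-axis scale and bias (invented values)
scale_t = np.array([1.05, 0.97, 1.02])
bias_t = np.array([120.0, -80.0, 60.0])
B_raw = B_true / scale_t + bias_t + rng.normal(0, 2.0, B_true.shape)

def cost(p):
    # p = [sx, sy, sz, bx, by, bz]; corrected vector should have magnitude B0
    Bc = p[:3] * (B_raw - p[3:])
    return np.mean((np.linalg.norm(Bc, axis=1) - B0) ** 2)

# DE needs only parameter bounds, not an initial guess, which is the
# insensitivity to initial parameters highlighted in the abstract.
bounds = [(0.8, 1.2)] * 3 + [(-500.0, 500.0)] * 3
res = differential_evolution(cost, bounds, seed=1, tol=1e-10, maxiter=300)

rms_before = np.sqrt(np.mean((np.linalg.norm(B_raw, axis=1) - B0) ** 2))
rms_after = np.sqrt(res.fun)
```

The paper's full model adds nonorthogonality angles to the parameter vector, but the cost function keeps the same form: match the corrected magnitudes to the proton-magnetometer reading on the rotation platform.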

  6. Autonomous calibration of single spin qubit operations

    Science.gov (United States)

    Frank, Florian; Unden, Thomas; Zoller, Jonathan; Said, Ressa S.; Calarco, Tommaso; Montangero, Simone; Naydenov, Boris; Jelezko, Fedor

    2017-12-01

    Fully autonomous precise control of qubits is crucial for quantum information processing, quantum communication, and quantum sensing applications. It requires minimal human intervention and the ability to model, to predict, and to anticipate the quantum dynamics, as well as to precisely control and calibrate single qubit operations. Here, we demonstrate single qubit autonomous calibrations via closed-loop optimisations of electron spin quantum operations in diamond. The operations are examined by quantum state and process tomographic measurements at room temperature, and their performances against systematic errors are iteratively rectified by an optimal pulse engineering algorithm. We achieve an autonomously calibrated fidelity up to 1.00 on a time scale of minutes for a spin population inversion and up to 0.98 on a time scale of hours for a single qubit π/2-rotation, within the experimental error of 2%. These results manifest the full potential of versatile quantum technologies.

  7. A novel computational framework for deducing muscle synergies from experimental joint moments

    Directory of Open Access Journals (Sweden)

    Anantharaman eGopalakrishnan

    2014-12-01

    Full Text Available Prior experimental studies have hypothesized the existence of a 'muscle synergy' based control scheme for producing limb movements and locomotion in vertebrates. Such synergies have been suggested to consist of fixed muscle grouping schemes with the co-activation of all muscles in a synergy resulting in limb movement. Quantitative representations of these groupings (termed muscle weightings) and their control signals (termed synergy controls) have traditionally been derived by the factorization of experimentally measured EMG. This study presents a novel approach for deducing these weightings and controls from inverse dynamic joint moments that are computed from an alternative set of experimental measurements: movement kinematics and kinetics. This technique was applied to joint moments for healthy human walking at 0.7 and 1.7 m/s, and two sets of 'simulated' synergies were computed based on two different criteria: (1) synergies were required to minimize errors between experimental and simulated joint moments in a musculoskeletal model (the pure-synergy solution); (2) along with minimizing joint moment errors, synergies also minimized muscle activation levels (the optimal-synergy solution). On comparing the two solutions, it was observed that the introduction of optimality requirements (optimal-synergy) to a control strategy solely aimed at reproducing the joint moments (pure-synergy) did not necessitate major changes in the muscle grouping within synergies or the temporal profiles of synergy control signals. Synergies from both simulated solutions exhibited many similarities to EMG-derived synergies from a previously published study, thus implying that the analysis of the two different types of experimental data reveals similar underlying synergy structures.
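The traditional factorization of measured EMG referred to above is typically non-negative matrix factorization. A numpy-only sketch using Lee-Seung multiplicative updates on a synthetic muscle-by-time matrix built from three known synergies (all dimensions and data invented):

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic "EMG" envelope matrix M (muscles x time) built from 3 ground-truth
# synergies: muscle weightings W_true and synergy control signals H_true.
W_true = rng.uniform(0, 1, (8, 3))     # 8 muscles, 3 synergies
H_true = rng.uniform(0, 1, (3, 100))   # 3 control signals over 100 time samples
M = W_true @ H_true

# Non-negative matrix factorization via Lee-Seung multiplicative updates
k = 3
W = rng.uniform(0.1, 1, (8, k))
H = rng.uniform(0.1, 1, (k, 100))
for _ in range(500):
    H *= (W.T @ M) / (W.T @ W @ H + 1e-12)
    W *= (M @ H.T) / (W @ H @ H.T + 1e-12)

err = np.linalg.norm(M - W @ H) / np.linalg.norm(M)  # relative reconstruction error
```

The study's contribution is to derive analogous W and H not from EMG but from inverse-dynamics joint moments through a musculoskeletal model, which this sketch does not attempt.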

  8. Comparison between calibration methods for pencil-type ionization chambers in terms of the quantity P_KL

    International Nuclear Information System (INIS)

    Macedo, E.M.; Pereira, L.C.S.; Ferreira, M.J.; Navarro, V.C.C.; Garcia, I.F.M.; Pires, E.J.; Navarro, M.V.T.

    2016-01-01

    Calibration of radiation meters is indispensable to Quality Assurance Programs in radiodiagnostic procedures, mainly computed tomography. Thus, this study aims to evaluate two calibration methods for pencil ionization chambers in terms of the kerma-length product (P_KL): a direct substitution method and an indirect one, through separate kerma and length measurements. The results showed good equivalence, with a minimum concordance of 98.5% between calibration factors. Regarding uncertainties, both showed similar results (2.2% for substitution and 2.3% for the indirect method), indicating that the indirect method is advantageous, given the reduced cost of implementing this calibration procedure. (author)
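
The substitution method described here reduces to a ratio of the reference-delivered quantity to the chamber reading, with uncertainty components combined in quadrature. A sketch with hypothetical numbers (none of these values come from the record):

```python
import math

# Substitution method: calibration factor N = P_KL(reference) / M(reading)
P_KL_ref = 125.0   # mGy*cm delivered by the reference chamber (hypothetical)
M_reading = 121.3  # mGy*cm indicated by the chamber under calibration

N = P_KL_ref / M_reading

# Combined relative standard uncertainty: independent components
# (reference, reading, setup reproducibility) add in quadrature.
u_ref, u_read, u_setup = 0.015, 0.010, 0.012   # hypothetical components
u_N = math.sqrt(u_ref**2 + u_read**2 + u_setup**2)

print(f"N = {N:.4f}, combined uncertainty = {100 * u_N:.1f}%")
```

The indirect method would carry an extra component for the length measurement, which is why its combined uncertainty comes out slightly larger.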

  9. Experimental pain leads to reorganisation of trapezius electromyography during computer work with active and passive pauses

    DEFF Research Database (Denmark)

    Samani, Afshin; Holtermann, Andreas; Søgaard, Karen

    2009-01-01

    The aim of this laboratory study was to investigate acute effects of experimental muscle pain on spatial electromyographic (EMG) activity of the trapezius muscle during computer work with active and passive pauses. Twelve healthy male subjects performed four sessions of computer work for 2 min … in one day, with passive (relax) and active (30% maximum voluntary contraction of shoulder elevation) pauses given every 40 s, without and with the presence of experimental pain. Surface EMG signals were recorded from four parts of the trapezius. The centroid of exposure variation analysis along the time axis … was lower during computer work with active pauses when compared with passive ones in all muscle parts (P
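
Exposure variation analysis (EVA) classifies the signal jointly by amplitude level and by the duration of uninterrupted periods spent at that level; the centroid along the duration (time) axis summarizes the resulting histogram. A minimal sketch (class edges, sampling rate and signal are invented for illustration, not taken from the study):

```python
import numpy as np

def eva_centroid_time(emg_rms, amp_edges, time_edges, fs):
    """Classify each sample by amplitude, measure the duration of each
    uninterrupted run within one amplitude class, accumulate the
    (amplitude class x duration class) occupancy histogram, and return
    its centroid along the duration (time) axis."""
    amp_cls = np.digitize(emg_rms, amp_edges)
    hist = np.zeros((len(amp_edges) + 1, len(time_edges) + 1))
    start = 0
    for i in range(1, len(amp_cls) + 1):
        if i == len(amp_cls) or amp_cls[i] != amp_cls[start]:
            dur = (i - start) / fs                      # run length in seconds
            hist[amp_cls[start], np.digitize(dur, time_edges)] += dur
            start = i
    hist /= hist.sum()                                  # fraction of total time
    t_axis = np.arange(hist.shape[1])
    return (hist.sum(axis=0) * t_axis).sum()            # centroid, in class units

rng = np.random.default_rng(1)
sig = np.abs(rng.standard_normal(2000))                 # stand-in RMS trace
c = eva_centroid_time(sig, amp_edges=[0.5, 1.5], time_edges=[0.01, 0.1, 1.0], fs=100)
print(round(c, 2))
```

A lower centroid along the time axis means activity is dominated by shorter uninterrupted periods, i.e. more variation in the exposure pattern.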

  10. Studies Concerning the ATLAS IBL Calibration Architecture

    CERN Document Server

    Kretz, Moritz; Kugel, Andreas

    With the commissioning of the Insertable B-Layer (IBL) in 2013 at the ATLAS experiment, 12 million additional pixels will be added to the current Pixel Detector. While the idea of employing pairs of VME-based Read-Out Driver (ROD) and Back of Crate (BOC) cards in the read-out chain remains unchanged, modifications regarding the IBL calibration procedure were introduced to overcome current hardware limitations. The analysis of calibration histograms will no longer be performed on the RODs, but on an external computing farm that is connected to the RODs via Ethernet. This thesis contributes to the new IBL calibration procedure and presents a concept for a scalable software and hardware architecture. An embedded system targeted to the ROD FPGAs is realized for sending data from the RODs to the fit farm servers, and benchmarks are carried out with a Linux-based networking stack as well as a standalone software stack. Furthermore, the histogram fitting algorithm currently being employed on the Pixel Detector RODs i...

  11. Trigger Algorithms for Alignment and Calibration at the CMS Experiment

    CERN Document Server

    Fernandez Perez Tomei, Thiago Rafael

    2017-01-01

    The data needs of the Alignment and Calibration group at the CMS experiment are reasonably different from those of the physics studies groups. Data are taken at CMS through the online event selection system, which is implemented in two steps. The Level-1 Trigger is implemented on custom-made electronics and dedicated to analyse the detector information at a coarse-grained scale, while the High Level Trigger (HLT) is implemented as a series of software algorithms, running in a computing farm, that have access to the full detector information. In this paper we describe the set of trigger algorithms that is deployed to address the needs of the Alignment and Calibration group, how it fits in the general infrastructure of the HLT, and how it feeds the Prompt Calibration Loop (PCL), allowing for a fast turnaround for the alignment and calibration constants.

  12. Calibration of TLD cards to beta ray spectra of 32P

    International Nuclear Information System (INIS)

    Ben-Shachar, B.; German, U.; Naim, E.

    1994-01-01

    The results of the evaluation of TLD chips are received in nC, and these values must be transformed to mGy in order to obtain the penetrating and non-penetrating dose. The calibration factors were determined experimentally by irradiating the TLD chips with a uranium source. Beta rays with other spectra can lead to incorrect values when estimating the non-penetrating dose. If the spectrum of the beta source is known, a specific evaluation of the calibration factor can be performed. In this report, the estimation of the calibration factor for a 32P source is presented. LiF:Ti,Mg TLD cards were irradiated with calibrated sources of 90Sr/90Y and 204Tl and a non-calibrated source of 32P, in order to find the beta correction factor for the spectrum of 32P. Calculations of the beta correction factor were also performed by applying the Loevinger equations to the geometry of the TLD chips used in our routine measurements. The calculated values of the beta correction factors are lower than the experimental ones. When comparing the ratios between the beta factors of 32P and 90Sr/90Y obtained from the experiments and from the calculation, we found them to be constant to within ±5%. (authors) 15 refs, 2 figs, 6 tabs

  13. Mutations that Cause Human Disease: A Computational/Experimental Approach

    Energy Technology Data Exchange (ETDEWEB)

    Beernink, P; Barsky, D; Pesavento, B

    2006-01-11

    can be used to understand how an amino acid change affects the protein. The experimental methods that provide the most detailed structural information on proteins are X-ray crystallography and NMR spectroscopy. However, these methods are labor intensive and currently cannot be carried out on a genomic scale. Nonetheless, Structural Genomics projects are being pursued by more than a dozen groups and consortia worldwide and as a result the number of experimentally determined structures is rising exponentially. Based on the expectation that protein structures will continue to be determined at an ever-increasing rate, reliable structure prediction schemes will become increasingly valuable, leading to information on protein function and disease for many different proteins. Given known genetic variability and experimentally determined protein structures, can we accurately predict the effects of single amino acid substitutions? An objective assessment of this question would involve comparing predicted and experimentally determined structures, which thus far has not been rigorously performed. The completed research leveraged existing expertise at LLNL in computational and structural biology, as well as significant computing resources, to address this question.

  14. A Study on the Guided Wave Mode Conversion using Self-calibrating Technique

    International Nuclear Information System (INIS)

    Park, Jung Chul; Cho, Youn Ho

    2000-01-01

    The guided wave mode conversion phenomena were investigated for the NDE of a plate-like structure with thickness variation. The ratios of reflection and transmission (R/T) were measured via the self-calibrating procedure which allows us to obtain experimental guided wave data in a more reliable way regardless of the coupling uncertainty between transducer and specimen. The results on R/T could be used to determine the thickness reduction of the structure. It was shown that not only the incident modes but also the converted ones need to be considered in the self-calibrating guided wave inspection to extract a reasonable correlation between experimental data and the thickness variation. Through this study, the potential of guided wave inspection as a quantitative NDE technique was explored based on the combined concept of self-calibration and multi-mode conversion in guided wave scattering problems.

  15. A novel single-step procedure for the calibration of the mounting parameters of a multi-camera terrestrial mobile mapping system

    Science.gov (United States)

    Habib, A.; Kersting, P.; Bang, K.; Rau, J.

    2011-12-01

    Mobile Mapping Systems (MMS) can be defined as moving platforms which integrate a set of imaging sensors and a position and orientation system (POS) for the collection of geo-spatial information. In order to fully explore the potential accuracy of such systems and guarantee accurate multi-sensor integration, a careful system calibration must be carried out. System calibration involves individual sensor calibration as well as the estimation of the inter-sensor geometric relationship. This paper tackles a specific component of the system calibration process of a multi-camera MMS - the estimation of the relative orientation parameters among the cameras, i.e., the inter-camera geometric relationship (lever-arm offsets and boresight angles among the cameras). For that purpose, a novel single-step procedure, which is easy to implement and not computationally intensive, is introduced. The proposed method is implemented in such a way that it can also be used for the estimation of the mounting parameters between the cameras and the IMU body frame, in the case of directly georeferenced systems. The performance of the proposed method is evaluated through experimental results using simulated data. A comparative analysis between the proposed single-step procedure and the two-step procedure, which makes use of the traditional bundle adjustment, is demonstrated.

  16. Tanks for liquids: calibration and errors assessment

    International Nuclear Information System (INIS)

    Espejo, J.M.; Gutierrez Fernandez, J.; Ortiz, J.

    1980-01-01

    After a brief reference to some of the problems raised by tanks calibration, two methods, theoretical and experimental are presented, so as to achieve it taking into account measurement errors. The method is applied to the transfer of liquid from one tank to another. Further, a practical example is developed. (author)

  17. A theoretical and experimental study of neuromorphic atomic switch networks for reservoir computing.

    Science.gov (United States)

    Sillin, Henry O; Aguilera, Renato; Shieh, Hsien-Hang; Avizienis, Audrius V; Aono, Masakazu; Stieg, Adam Z; Gimzewski, James K

    2013-09-27

    Atomic switch networks (ASNs) have been shown to generate network level dynamics that resemble those observed in biological neural networks. To facilitate understanding and control of these behaviors, we developed a numerical model based on the synapse-like properties of individual atomic switches and the random nature of the network wiring. We validated the model against various experimental results highlighting the possibility to functionalize the network plasticity and the differences between an atomic switch in isolation and its behaviors in a network. The effects of changing connectivity density on the nonlinear dynamics were examined as characterized by higher harmonic generation in response to AC inputs. To demonstrate their utility for computation, we subjected the simulated network to training within the framework of reservoir computing and showed initial evidence of the ASN acting as a reservoir which may be optimized for specific tasks by adjusting the input gain. The work presented represents steps in a unified approach to experimentation and theory of complex systems to make ASNs a uniquely scalable platform for neuromorphic computing.
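
The reservoir computing framework used to train the simulated network can be sketched generically: a fixed random recurrent system supplies nonlinear state trajectories, and only a linear readout is trained. A minimal echo-state-style stand-in for the atomic switch network (all sizes, scalings and the delay-recall task are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random recurrent "reservoir"; only the readout is trained.
N, T = 100, 1000
W_in = rng.uniform(-0.5, 0.5, (N, 1))
W = rng.standard_normal((N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # scale spectral radius below 1

u = rng.uniform(-1, 1, T)                     # input stream
target = np.roll(u, 3)                        # task: recall the input 3 steps back

x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in[:, 0] * u[t])    # driven nonlinear dynamics
    states[t] = x

# Ridge-regression readout, trained on the second half (first half = washout)
S, y = states[T // 2:], target[T // 2:]
w_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ y)
pred = S @ w_out
nrmse = np.sqrt(np.mean((pred - y) ** 2)) / np.std(y)
print(round(nrmse, 3))   # normalized error of the trained readout
```

Adjusting the input gain (here the `W_in` scale) changes how strongly the nonlinearity is driven, which is the tuning knob the record mentions for optimizing the reservoir for specific tasks.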

  18. Econometrically calibrated computable general equilibrium models: Applications to the analysis of energy and climate policies

    Science.gov (United States)

    Schu, Kathryn L.

    Economy-energy-environment models are the mainstay of economic assessments of policies to reduce carbon dioxide (CO2) emissions, yet their empirical basis is often criticized as being weak. This thesis addresses these limitations by constructing econometrically calibrated models in two policy areas. The first is a 35-sector computable general equilibrium (CGE) model of the U.S. economy which analyzes the uncertain impacts of CO2 emission abatement. Econometric modeling of sectors' nested constant elasticity of substitution (CES) cost functions based on a 45-year price-quantity dataset yields estimates of capital-labor-energy-material input substitution elasticities and biases of technical change that are incorporated into the CGE model. I use the estimated standard errors and variance-covariance matrices to construct the joint distribution of the parameters of the economy's supply side, which I sample to perform Monte Carlo baseline and counterfactual runs of the model. The resulting probabilistic abatement cost estimates highlight the importance of the uncertainty in baseline emissions growth. The second model is an equilibrium simulation of the market for new vehicles which I use to assess the response of vehicle prices, sales and mileage to CO2 taxes and increased corporate average fuel economy (CAFE) standards. I specify an econometric model of a representative consumer's vehicle preferences using a nested CES expenditure function which incorporates mileage and other characteristics in addition to prices, and develop a novel calibration algorithm to link this structure to vehicle model supplies by manufacturers engaged in Bertrand competition. CO2 taxes' effects on gasoline prices reduce vehicle sales and manufacturers' profits if vehicles' mileage is fixed, but these losses shrink once mileage can be adjusted. Accelerated CAFE standards induce manufacturers to pay fines for noncompliance rather than incur the higher costs of radical mileage improvements.
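
The Monte Carlo step described here samples the estimated joint parameter distribution and propagates each draw through a counterfactual model run. A toy sketch (the elasticities, covariance matrix and cost function are invented stand-ins, not the thesis model):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical point estimates and variance-covariance matrix of two
# substitution elasticities, as if recovered from econometric estimation.
beta_hat = np.array([0.4, 0.7])
vcov = np.array([[0.010, 0.002],
                 [0.002, 0.020]])

def abatement_cost(beta, tax=25.0):
    """Toy stand-in for a CGE counterfactual run: abatement cost falls
    as substitution possibilities (elasticities) rise."""
    return tax / (1.0 + beta[0] + beta[1])

# Sample the joint distribution and run the "model" for each draw
draws = rng.multivariate_normal(beta_hat, vcov, size=5000)
costs = np.array([abatement_cost(b) for b in draws])

lo, hi = np.percentile(costs, [2.5, 97.5])
print(f"mean={costs.mean():.2f}, 95% interval=({lo:.2f}, {hi:.2f})")
```

The spread of the resulting cost distribution is exactly the kind of probabilistic abatement cost estimate the abstract refers to.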

  19. Calibration setting numbers for dose calibrators for the PET isotopes 52Mn, 64Cu, 76Br, 86Y, 89Zr, 124I

    International Nuclear Information System (INIS)

    Wooten, A. Lake; Lewis, Benjamin C.; Szatkowski, Daniel J.; Sultan, Deborah H.; Abdin, Kinda I.; Voller, Thomas F.; Liu, Yongjian; Lapi, Suzanne E.

    2016-01-01

    For PET radionuclides, the radioactivity of a sample can be conveniently measured by a dose calibrator. These devices depend on a “calibration setting number”, but many recommended settings from manuals were interpolated based on standard sources of other radionuclide(s). We conducted HPGe gamma-ray spectroscopy, resulting in a reference for determining settings in two types of vessels containing one of several PET radionuclides. Our results reiterate the notion that in-house, experimental calibrations are recommended for different radionuclides and vessels. - Highlights: • Dose calibrators measure radioactivity by ionization of gas from emitted radiation. • Accuracy of dose calibrators depends on “calibration setting numbers” for isotopes. • Many manufacturer settings are interpolated from emissions of other radionuclides. • As a high-precision reference, HPGe gamma-ray spectroscopy was conducted. • New calibrations were found for the PET isotopes 52Mn, 64Cu, 76Br, 86Y, 89Zr, and 124I.

  20. A detailed experimental study of a DNA computer with two endonucleases.

    Science.gov (United States)

    Sakowski, Sebastian; Krasiński, Tadeusz; Sarnik, Joanna; Blasiak, Janusz; Waldmajer, Jacek; Poplawski, Tomasz

    2017-07-14

    Great advances in biotechnology have allowed the construction of a computer from DNA. One of the proposed solutions is a biomolecular finite automaton, a simple two-state DNA computer without memory, which was presented by Ehud Shapiro's group at the Weizmann Institute of Science. The main problem with this computer, in which biomolecules carry out logical operations, is its complexity - increasing the number of states of biomolecular automata. In this study, we constructed (in laboratory conditions) a six-state DNA computer that uses two endonucleases (e.g. AcuI and BbvI) and a ligase. We have presented a detailed experimental verification of its feasibility. We described the effect of the number of states, the length of input data, and the nondeterminism on the computing process. We also tested different automata (with three, four, and six states) running on various accepted input words of different lengths such as ab, aab, aaab, ababa, and of an unaccepted word ba. Moreover, this article presents the reaction optimization and the methods of eliminating certain biochemical problems occurring in the implementation of a biomolecular DNA automaton based on two endonucleases.
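
The biomolecular device in this record is, abstractly, a finite automaton whose state set the authors enlarged from two to six states. A generic simulator for such an automaton is easy to sketch; the transition table below is a hypothetical 3-state example, not Shapiro's actual molecular encoding:

```python
def run_automaton(delta, q0, accept, word):
    """Simulate a (possibly nondeterministic) finite automaton by
    tracking the set of reachable states after each input symbol."""
    current = {q0}
    for sym in word:
        current = {q for s in current for q in delta.get((s, sym), set())}
        if not current:                 # computation stuck: reject
            return False
    return bool(current & accept)

# Hypothetical transition table: accepts words over {a, b} that
# contain at least one 'a' and end in 'b'.
delta = {
    ("q0", "a"): {"q1"},
    ("q1", "a"): {"q1"},
    ("q1", "b"): {"q2"},
    ("q2", "a"): {"q1"},
}

for w in ["ab", "aab", "aaab", "ba"]:
    print(w, run_automaton(delta, "q0", {"q2"}, w))
```

In the DNA implementation, each transition is carried out by endonuclease cleavage and ligation rather than a table lookup, but the accepted language is defined the same way.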

  1. The calibrated laparoscopic Heller myotomy with fundoplication.

    Science.gov (United States)

    Di Martino, Natale; Marano, Luigi; Torelli, Francesco; Schettino, Michele; Porfidia, Raffaele; Reda, Gianmarco; Grassia, Michele; Petrillo, Marianna; Braccio, Bartolomeo

    2013-01-01

    Esophageal achalasia is the most common primary esophageal motor disorder. Laparoscopic Heller's myotomy combined with fundoplication represents the treatment of choice for this disease, achieving good results in about 90% of patients. However, about 10% of treated patients report persistent or recurrent dysphagia. Many authors have shown that this failure rate is related to inadequate myotomy. The aim was to verify, from experimental to clinical study, the modifications induced by Heller's myotomy of the esophagogastric junction on the LES pressure (LES-P) profile, using a computerized manometric system. From 2002 to 2010, 105 patients with achalasia underwent laparoscopic calibrated Heller myotomy followed by antireflux surgery. The calibrated Heller myotomy was extended for at least 2.5 cm on the esophagus and for 3 cm on the gastric side. Each step was evaluated by intraoperative manometry. Moreover, intraoperative manometry and endoscopy were used to calibrate the fundoplication. The preoperative mean LES-P was 37.73 ± 12.21. After esophageal and gastric myotomy the mean pressure drop was 21.3% and 91.9%, respectively. No mortality was reported. Laparoscopic calibrated Heller myotomy with fundoplication achieves a good outcome in the surgical treatment of achalasia. The use of intraoperative manometry enables an adequate calibration of the myotomy, being effective in the evaluation of the complete pressure drop, avoiding too long an esophageal myotomy and, especially, too short a gastric myotomy, which may be the cause of surgical failure.

  2. Another look at volume self-calibration: calibration and self-calibration within a pinhole model of Scheimpflug cameras

    International Nuclear Information System (INIS)

    Cornic, Philippe; Le Besnerais, Guy; Champagnat, Frédéric; Illoul, Cédric; Cheminet, Adam; Le Sant, Yves; Leclaire, Benjamin

    2016-01-01

    We address calibration and self-calibration of tomographic PIV experiments within a pinhole model of cameras. A complete and explicit pinhole model of a camera equipped with a 2-tilt angles Scheimpflug adapter is presented. It is then used in a calibration procedure based on a freely moving calibration plate. While the resulting calibrations are accurate enough for Tomo-PIV, we confirm, through a simple experiment, that they are not stable in time, and illustrate how the pinhole framework can be used to provide a quantitative evaluation of geometrical drifts in the setup. We propose an original self-calibration method based on global optimization of the extrinsic parameters of the pinhole model. These methods are successfully applied to the tomographic PIV of an air jet experiment. An unexpected by-product of our work is to show that volume self-calibration induces a change in the world frame coordinates. Provided the calibration drift is small, as generally observed in PIV, the bias on the estimated velocity field is negligible but the absolute location cannot be accurately recovered using standard calibration data. (paper)

  3. Modeling microelectrode biosensors: free-flow calibration can substantially underestimate tissue concentrations.

    Science.gov (United States)

    Newton, Adam J H; Wall, Mark J; Richardson, Magnus J E

    2017-03-01

    Microelectrode amperometric biosensors are widely used to measure concentrations of analytes in solution and tissue including acetylcholine, adenosine, glucose, and glutamate. A great deal of experimental and modeling effort has been directed at quantifying the response of the biosensors themselves; however, the influence that the macroscopic tissue environment has on biosensor response has not been subjected to the same level of scrutiny. Here we identify an important issue in the way microelectrode biosensors are calibrated that is likely to have led to underestimations of analyte tissue concentrations. Concentration in tissue is typically determined by comparing the biosensor signal to that measured in free-flow calibration conditions. In a free-flow environment the concentration of the analyte at the outer surface of the biosensor can be considered constant. However, in tissue the analyte reaches the biosensor surface by diffusion through the extracellular space. Because the enzymes in the biosensor break down the analyte, a density gradient is set up resulting in a significantly lower concentration of analyte near the biosensor surface. This effect is compounded by the diminished volume fraction (porosity) and reduction in the diffusion coefficient due to obstructions (tortuosity) in tissue. We demonstrate this effect through modeling and experimentally verify our predictions in diffusive environments. NEW & NOTEWORTHY Microelectrode biosensors are typically calibrated in a free-flow environment where the concentrations at the biosensor surface are constant. However, when in tissue, the analyte reaches the biosensor via diffusion and so analyte breakdown by the biosensor results in a concentration gradient and consequently a lower concentration around the biosensor. This effect means that naive free-flow calibration will underestimate tissue concentration. We develop mathematical models to better quantify the discrepancy between the calibration and tissue
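
The underestimation mechanism in this record has a simple closed form in the idealized spherically symmetric case: a sensor of radius a that consumes analyte at its surface with rate constant k, in a quiescent medium with diffusion coefficient D, sees a steady-state surface concentration C(a) = C_bulk · D / (D + k·a). A sketch with illustrative parameter values (the consumption rate and tortuosity are assumptions, not the paper's fitted values):

```python
def surface_concentration(C_bulk, D, k, a):
    """Steady-state analyte concentration at the surface of a spherical
    sensor (radius a, surface consumption rate constant k) embedded in a
    quiescent medium with diffusion coefficient D:
    C(a) = C_bulk * D / (D + k * a)."""
    return C_bulk * D / (D + k * a)

C_bulk = 1.0                 # true tissue concentration (arbitrary units)
a = 25e-6                    # sensor radius, m
D_free = 7.6e-10             # free-solution diffusion coefficient, m^2/s
D_tissue = D_free / 1.6**2   # reduced by tissue tortuosity lambda ~ 1.6
k = 1e-4                     # enzymatic consumption rate constant, m/s (hypothetical)

c_surf = surface_concentration(C_bulk, D_tissue, k, a)
# Free-flow calibration keeps the surface at C_bulk by stirring, so the
# tissue signal reads low by this factor:
print(f"apparent/true concentration = {c_surf / C_bulk:.2f}")
```

With these (assumed) numbers the sensor sees only about a tenth of the true tissue concentration, illustrating how severe the free-flow calibration bias can be once diffusion, tortuosity and consumption are accounted for.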

  4. Computations for the 1:5 model of the THTR pressure vessel compared with experimental results

    International Nuclear Information System (INIS)

    Stangenberg, F.

    1972-01-01

    In this report experimental results measured in 1971 at the 1:5 model of the prestressed concrete pressure vessel of the THTR nuclear power station Schmehausen are compared with the results of axisymmetric computations. Linear-elastic computations were performed, as well as approximate computations for overload pressures taking into consideration the influences of the load history (prestressing, temperature, creep) and the effects of the steel components. (orig.)

  5. Preparatory research to develop an operational method to calibrate airborne sensor data using a network of ground calibration sites

    International Nuclear Information System (INIS)

    Milton, E.J.; Smith, G.M.; Lawless, K.P.

    1996-01-01

    The objective of the research is to develop an operational method to convert airborne spectral radiance data to reflectance using a number of well-characterized ground calibration sites located around the UK. The study is in three phases. First, a pilot study has been conducted at a disused airfield in southern England to test the feasibility of the 'empirical line' method of sensor calibration. The second phase is developing methods to predict temporal changes in the bidirectional reflectance of ground calibration sites. The final phase of the project will look at methods to extend such calibrations spatially. This paper presents some results from the first phase of this study. The viability of the empirical line method of correction is shown to depend upon the use of ground targets whose in-band reflectance encompasses that of the targets of interest in the spectral band(s) concerned. The experimental design for the second phase of the study, in which methods to predict temporal trends in the bidirectional reflectance of these sites will be developed, is discussed. Finally, it is planned to develop an automated method of searching through Landsat TM data for the UK to identify a number of candidate ground calibration sites for which the model can be tested. 11 refs., 5 figs., 5 tabs
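
The empirical line method amounts to a per-band linear fit between sensor radiance and field-measured reflectance over the calibration targets, which is then inverted for the whole scene. A sketch with illustrative numbers (the target values are invented, not from the study):

```python
import numpy as np

# Per band: fit reflectance = gain * radiance + offset over ground
# targets of known reflectance, then apply the fit to scene radiances.
target_radiance = np.array([12.1, 55.3, 98.7])   # sensor radiance over targets
target_reflect = np.array([0.05, 0.30, 0.55])    # field-measured reflectance

gain, offset = np.polyfit(target_radiance, target_reflect, 1)

scene = np.array([20.0, 40.0, 80.0])             # arbitrary scene radiances
reflectance = gain * scene + offset
print(np.round(reflectance, 3))
```

This also makes the viability condition in the abstract concrete: the fit is an interpolation only while scene radiances fall inside the range spanned by the calibration targets; outside that range it becomes an extrapolation.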

  6. The calibration of the solid state nuclear track detector LR 115 for radon measurements

    CERN Document Server

    Gericke, C; Jönsson, G; Freyer, K; Treutler, H C; Enge, W

    1999-01-01

    An experimental calibration of indoor room and outdoor soil detector devices which are based on LR 115 as the sensitive element has taken place at the Swedish Radiation Protection Institute in Stockholm (Sweden) in 1994 and 1996, at the Physikalisch-Technische Bundesanstalt in Braunschweig (Germany) in 1997 and at the Umweltforschungszentrum Leipzig-Halle (Germany) in 1997. Special properties of the solid state nuclear track detector (SSNTD) material LR 115 used have been measured to define the application of the experimental calibration.

  7. EXPERIMENTAL VERIFICATION OF COMPUTER MODEL OF COOLING SYSTEM FOR POWERFUL SEMI- CONDUCTOR DEVICE

    Directory of Open Access Journals (Sweden)

    I. A. Khorunzhii

    2007-01-01

    Full Text Available A cooling system for a powerful semiconductor device (power: 1 kW) consisting of a pin-type radiator and a body is considered in the paper. Cooling is carried out by forced convection of a coolant. Calculated values of temperatures on the radiator surface and experimentally measured values of temperatures at the same surface points have been compared in the paper. It has been shown that the difference between calculated and experimentally measured temperatures does not exceed 0.1-0.2 °C and is comparable with the experimental error. The given results confirm the correctness of the computer model.

  8. Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Robertson, Joseph [National Renewable Energy Lab. (NREL), Golden, CO (United States); Polly, Ben [National Renewable Energy Lab. (NREL), Golden, CO (United States); Collis, Jon [Colorado School of Mines, Golden, CO (United States)

    2013-09-01

    This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define "explicit" input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.
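
The simplest of the four methods evaluated, the output ratio calibration, scales the audit model's predictions by the ratio of total measured to total simulated use and applies the same factor to retrofit runs. A sketch with illustrative numbers (the monthly values and the 15% retrofit savings are invented):

```python
import numpy as np

simulated = np.array([900, 820, 700, 520, 430, 510,
                      640, 660, 540, 480, 700, 880])   # kWh, baseline model
measured  = np.array([980, 900, 760, 570, 470, 540,
                      700, 730, 590, 520, 760, 950])   # kWh, utility bills

# Single scale factor matching total simulated use to total measured use
ratio = measured.sum() / simulated.sum()
calibrated = simulated * ratio

# Apply the same factor to the retrofit run to predict savings
retrofit_sim = simulated * 0.85        # model predicts 15% savings
savings = calibrated.sum() - (retrofit_sim * ratio).sum()
print(f"ratio={ratio:.3f}, predicted savings={savings:.0f} kWh")
```

Its appeal is exactly what the study weighs against accuracy: it is trivially cheap, repeatable and automatable, but it cannot reshape the monthly or hourly profile the way the optimization-based methods can.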

  10. The design and realization of a calibration apparatus for measuring the concentration of radon in three modes

    Energy Technology Data Exchange (ETDEWEB)

    Huiping, Guo [The Second Artillery Engineering College, Xi'an (China)

    2007-06-15

    For satisfying the calibration requirements of radon measurement in the laboratory, a calibration apparatus for radon activity measurement was designed and realized. The calibration apparatus can auto-control and auto-measure in three modes: sequential mode, pulse mode and constant mode. The stability and reliability of the calibration apparatus were tested under the three modes. The experimental results show that the apparatus can provide an adjustable and steady radon activity concentration environment for the research of radon and its progeny and for the calibration of its measurement. (authors)

  11. CryoSat-2 SIRAL Calibration: Strategy, Application and Results

    Science.gov (United States)

    Parrinello, T.; Fornari, M.; Bouzinac, C.; Scagliola, M.; Tagliani, N.

    2012-04-01

    The main payload of CryoSat-2 is a Ku-band pulse-width limited radar altimeter, called SIRAL (Synthetic Interferometric Radar Altimeter), that transmits pulses at a high pulse repetition frequency, thus making the received echoes phase coherent and suitable for azimuth processing. This allows an along-track resolution of about 250 meters to be reached, which is an important improvement over traditional pulse-width limited altimeters. Because SIRAL is a phase-coherent pulse-width limited radar altimeter, a proper calibration approach has been developed. In fact, not only must the corrections for transfer function amplitude with respect to frequency, gain and instrument path delay be computed, but corrections are also needed for transfer function phase with respect to frequency and AGC setting, as well as for the phase variation across bursts of pulses. As a consequence, SIRAL regularly performs four types of calibrations: (1) CAL1, to calibrate the internal path delay and peak power variation; (2) CAL2, to compensate the instrument transfer function; (3) CAL4, to calibrate the interferometer; and (4) AutoCal, a specific sequence to calibrate the gain and phase difference for each AGC setting. Commissioning phase results (April-December 2010) revealed high stability of the instrument, which made it possible to reduce the calibration frequency during operations. Internal calibration data are processed on ground by the CryoSat-2 Instrument Processing Facility (IPF1) and then applied to the science data. In this poster we first describe the calibration strategy and then how the four different types of calibration are applied to science data. Moreover, the calibration results over almost 2 years of the mission will be presented, analyzing their temporal evolution in order to highlight the stability of the instrument over its life.

  12. A calibration rig for multi-component internal strain gauge balance using the new design-of-experiment (DOE) approach

    Science.gov (United States)

    Nouri, N. M.; Mostafapour, K.; Kamran, M.

    2018-02-01

    In a closed water-tunnel circuit, a multi-component strain gauge force and moment sensor (also known as a balance) is generally used to measure the hydrodynamic forces and moments acting on scaled models. These balances are periodically calibrated by static loading, and their performance and accuracy depend significantly on the rig and the method of calibration. In this research, a new calibration rig was designed and constructed to calibrate multi-component internal strain gauge balances. The calibration rig has six degrees of freedom and six different component-loading structures that can be applied separately or simultaneously. The system was designed around the applicability of formal experimental design techniques, using gravity for balance loading and for balance positioning and alignment relative to gravity. To evaluate the calibration rig, a six-component internal balance developed by Iran University of Science and Technology was calibrated using response surface methodology. According to the results, the calibration rig met all design criteria. This rig provides the means by which various formal experimental design techniques can be implemented. The simplicity of the rig saves time and money in the design of experiments and in balance calibration while simultaneously increasing the accuracy of these activities.
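    In the single-component case, a response-surface calibration of this kind reduces to fitting a low-order polynomial between applied load and bridge output. A minimal sketch with synthetic, purely illustrative numbers (not the authors' actual model), assuming numpy:

```python
import numpy as np

# Hypothetical single-channel balance response: mildly nonlinear in load
loads = np.linspace(0.0, 100.0, 21)               # applied force, N
readings = 0.5 + 0.02 * loads + 1e-4 * loads**2   # bridge output, mV

# Fit a second-order response surface (one dimension: a quadratic)
coeffs = np.polyfit(loads, readings, deg=2)

# Evaluate the calibration curve at a new load
predicted = np.polyval(coeffs, 50.0)
```

    A real six-component calibration fits a coupled multivariate surface over all loading channels, but the underlying least-squares machinery is the same.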

  13. Experimental and computer thermodynamics evaluations of an Al-Si-Coating on a quenchable steel

    International Nuclear Information System (INIS)

    Trindade, Vicente Braz

    2017-01-01

    High-strength steels are commonly used in the automobile industry to reduce vehicle weight. However, a technical difficulty arises from the need for hot stamping of the components, which leads to oxidation; a coating is therefore applied to the substrate to avoid high-temperature oxidation. In this work, experimental analysis and computational thermodynamic calculations were used to describe the phase transformations within an Al-Si coating on a quenchable high-strength steel. The Al-Si coating was deposited by hot dipping and characterized using SEM and XRD techniques. Thermodynamic calculations were performed with the commercial software FactSage, following the Calphad methodology. Good agreement was found between the experimental results and the calculated phase stabilities, both in the as-deposited condition and after a diffusion experiment at 920 deg C for 7 minutes, which simulates the thermal cycle of hot stamping of the quenchable steel used. (author)

  14. Experimental and computer thermodynamics evaluations of an Al-Si-Coating on a quenchable steel

    Energy Technology Data Exchange (ETDEWEB)

    Trindade, Vicente Braz, E-mail: vicentebraz@yahoo.com.b [Universidade Federal de Ouro Preto (UFOP), Ouro Preto, MG (Brazil). Escola de Minas. Departamento de Engenharia Metalurgica e de Materiais; Christ, Hans-Juergen, E-mail: christ@ifwt.mb.uni-siegen.de [University of Siegen (Germany)

    2017-01-15

    High-strength steels are commonly used in the automobile industry to reduce vehicle weight. However, a technical difficulty arises from the need for hot stamping of the components, which leads to oxidation; a coating is therefore applied to the substrate to avoid high-temperature oxidation. In this work, experimental analysis and computational thermodynamic calculations were used to describe the phase transformations within an Al-Si coating on a quenchable high-strength steel. The Al-Si coating was deposited by hot dipping and characterized using SEM and XRD techniques. Thermodynamic calculations were performed with the commercial software FactSage, following the Calphad methodology. Good agreement was found between the experimental results and the calculated phase stabilities, both in the as-deposited condition and after a diffusion experiment at 920 deg C for 7 minutes, which simulates the thermal cycle of hot stamping of the quenchable steel used. (author)

  15. Practical experimental certification of computational quantum gates using a twirling procedure.

    Science.gov (United States)

    Moussa, Osama; da Silva, Marcus P; Ryan, Colm A; Laflamme, Raymond

    2012-08-17

    Because of the technical difficulty of building large quantum computers, it is important to be able to estimate how faithful a given implementation is to an ideal quantum computer. The common approach of completely characterizing the computation process via quantum process tomography requires an exponential amount of resources, and thus is not practical even for relatively small devices. We solve this problem by demonstrating that twirling experiments, previously used to characterize the average fidelity of quantum memories efficiently, can easily be adapted to estimate the average fidelity of experimental implementations of important quantum computation processes, such as unitaries in the Clifford group, in a practical and efficient manner applicable to current quantum devices. Using this procedure, we demonstrate state-of-the-art coherent control of an ensemble of magnetic moments of nuclear spins in a single-crystal solid by implementing the encoding operation for a 3-qubit code with only a 1% degradation in average fidelity, discounting preparation and measurement errors. We also highlight one of the advances that was instrumental in achieving such high-fidelity control.

  16. Parallel computing in experimental mechanics and optical measurement: A review (II)

    Science.gov (United States)

    Wang, Tianyi; Kemao, Qian

    2018-05-01

    With advantages such as non-destructiveness, high sensitivity and high accuracy, optical techniques have been successfully applied to measure various important physical quantities in experimental mechanics (EM) and optical measurement (OM). However, the pursuit of higher image resolutions for higher accuracy has made the computational burden of optical techniques much heavier. Therefore, in recent years, heterogeneous platforms composed of hardware such as CPUs and GPUs have been widely employed to accelerate these techniques, owing to their cost-effectiveness, short development cycle, easy portability, and high scalability. In this paper, we analyze various works by first illustrating their different architectures, followed by introducing their various parallel patterns for high-speed computation. Next, we review the effects of CPU and GPU parallel computing specifically in EM & OM applications in a broad scope, including digital image/volume correlation, fringe pattern analysis, tomography, hyperspectral imaging, computer-generated holograms, and integral imaging. In our survey, we have found that high parallelism can always be exploited in such applications for the development of high-performance systems.

  17. Simple laser vision sensor calibration for surface profiling applications

    Science.gov (United States)

    Abu-Nabah, Bassam A.; ElSoussi, Adnane O.; Al Alami, Abed ElRahman K.

    2016-09-01

    Because of the relatively large structures in the Oil and Gas industry, original equipment manufacturers (OEMs) have been implementing custom-designed laser vision sensor (LVS) surface-profiling systems as part of quality control in their manufacturing processes. The rough manufacturing environment and the continuous movement and misalignment of these custom-designed tools adversely affect the accuracy of laser-based vision surface-profiling applications. Accordingly, Oil and Gas businesses have been pressing OEMs to implement practical and robust LVS calibration techniques prior to running any visual inspections. This effort introduces an LVS calibration technique that is a simplified version of two known calibration techniques commonly implemented to obtain a calibrated LVS system for surface-profiling applications. Both calibration techniques are implemented virtually and experimentally to scan simulated and three-dimensional (3D) printed features of known profiles, respectively. Scanned data are transformed from the camera frame to points in the world coordinate system and compared with the input profiles to validate the capability of the introduced calibration technique against the more complex approach, and to preliminarily assess the measurement technique for weld-profiling applications. Moreover, the sensitivity to stand-off distances is analyzed to illustrate the practicality of the presented technique.
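    The camera-frame-to-world-coordinate step mentioned above is, at its core, a rigid-body transform. A minimal sketch with a hypothetical extrinsic rotation and translation (all values illustrative, assuming numpy):

```python
import numpy as np

# Hypothetical extrinsic calibration: rotation about z and a translation
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([10.0, -5.0, 2.0])

def camera_to_world(p_cam):
    """Map a point from the camera frame to world coordinates."""
    return R @ p_cam + t

p_world = camera_to_world(np.array([1.0, 0.0, 0.0]))
```

    A calibration procedure estimates R and t (and, in the full problem, the laser-plane parameters); once they are known, every scanned point is mapped this way before comparison with the reference profile.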

  18. Calibration of optical tweezers with positional detection in the back focal plane

    DEFF Research Database (Denmark)

    Tolic-Nørrelykke, S.F.; Schäffer, E.; Howard, J.

    2006-01-01

    We explain and demonstrate a new method of force and position calibrations for optical tweezers with back-focal-plane photodetection. The method combines power spectral measurements of thermal motion and the response to a sinusoidal motion of a translation stage. It consequently does not use...... and precise: true values are returned, with small error bars. We tested this experimentally, near and far from surfaces in the lateral directions. Both position and force calibrations were accurate to within 3%. To calibrate, we moved the sample with a piezoelectric translation stage, but the laser beam could...
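    The stage-response part of such a calibration comes down to reading the detector amplitude at the known drive frequency and dividing by the known stage amplitude. A minimal illustrative sketch (all numbers hypothetical, assuming numpy; the full published method also fits the thermal power spectrum):

```python
import numpy as np

# Known sinusoidal stage motion (amplitude in nm) and the detector's
# uncalibrated response in volts; all numbers are illustrative.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
stage_nm = 100.0 * np.sin(2 * np.pi * 32 * t)     # 32 Hz drive
detector_v = 0.004 * stage_nm                     # unknown sensitivity, V/nm

# Recover the volts-to-nanometres conversion from the drive-frequency bin
amp_v = 2 * np.abs(np.fft.rfft(detector_v)[32]) / len(t)
sensitivity = amp_v / 100.0                       # volts per nanometre
```

    Combining this sensitivity with the Lorentzian fit to the thermal power spectrum then yields the trap stiffness, i.e. the force calibration.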

  19. Combined computational and experimental approach to improve the assessment of mitral regurgitation by echocardiography.

    Science.gov (United States)

    Sonntag, Simon J; Li, Wei; Becker, Michael; Kaestner, Wiebke; Büsen, Martin R; Marx, Nikolaus; Merhof, Dorit; Steinseifer, Ulrich

    2014-05-01

    Mitral regurgitation (MR) is one of the most frequent valvular heart diseases. To assess MR severity, color Doppler imaging (CDI) is the clinical standard. However, inadequate reliability, poor reproducibility and heavy user-dependence are known limitations. A novel approach combining computational and experimental methods is currently under development aiming to improve the quantification. A flow chamber for a circulatory flow loop was developed. Three different orifices were used to mimic variations of MR. The flow field was recorded simultaneously by a 2D Doppler ultrasound transducer and Particle Image Velocimetry (PIV). Computational Fluid Dynamics (CFD) simulations were conducted using the same geometry and boundary conditions. The resulting computed velocity field was used to simulate synthetic Doppler signals. Comparison between PIV and CFD shows a high level of agreement. The simulated CDI exhibits the same characteristics as the recorded color Doppler images. The feasibility of the proposed combination of experimental and computational methods for the investigation of MR is shown and the numerical methods are successfully validated against the experiments. Furthermore, it is discussed how the approach can be used in the long run as a platform to improve the assessment of MR quantification.

  20. Thermodynamic properties of 1-naphthol: Mutual validation of experimental and computational results

    International Nuclear Information System (INIS)

    Chirico, Robert D.; Steele, William V.; Kazakov, Andrei F.

    2015-01-01

    Highlights: • Heat capacities were measured for the temperature range 5 K to 445 K. • Vapor pressures were measured for the temperature range 370 K to 570 K. • Computed and derived properties for ideal gas entropies are in excellent accord. • The enthalpy of combustion was measured and shown to be consistent with reliable literature values. • Thermodynamic consistency analysis revealed anomalous literature data. - Abstract: Thermodynamic properties for 1-naphthol (Chemical Abstracts registry number [90-15-3]) in the ideal-gas state are reported based on both experimental and computational methods. Measured properties included the triple-point temperature, enthalpy of fusion, and heat capacities for the crystal and liquid phases by adiabatic calorimetry; vapor pressures by inclined-piston manometry and comparative ebulliometry; and the enthalpy of combustion of the crystal phase by oxygen bomb calorimetry. Critical properties were estimated. Entropies for the ideal-gas state were derived from the experimental studies for the temperature range 298.15 ⩽ T/K ⩽ 600, and independent statistical calculations were performed based on molecular geometry optimization and vibrational frequencies calculated at the B3LYP/6-31+G(d,p) level of theory. The mutual validation of the independent experimental and computed results is achieved with a scaling factor of 0.975 applied to the calculated vibrational frequencies. This same scaling factor was successfully applied in the analysis of results for other polycyclic molecules, as described in a series of recent articles by this research group. This article reports the first extension of this approach to a hydroxy-aromatic compound. All experimental results are compared with property values reported in the literature. Thermodynamic consistency between properties is used to show that several studies in the literature are erroneous. The enthalpy of combustion for 1-naphthol was also measured in this research, and excellent

  1. Calibration uncertainty

    DEFF Research Database (Denmark)

    Heydorn, Kaj; Anglov, Thomas

    2002-01-01

    Methods recommended by the International Standardization Organisation and Eurachem are not satisfactory for the correct estimation of calibration uncertainty. A novel approach is introduced and tested on actual calibration data for the determination of Pb by ICP-AES. The improved calibration...

  2. Calibrating a tensor magnetic gradiometer using spin data

    Science.gov (United States)

    Bracken, Robert E.; Smith, David V.; Brown, Philip J.

    2005-01-01

    Scalar magnetic data are often acquired to discern characteristics of geologic source materials and buried objects. It is evident that a great deal can be done with scalar data, but there are significant advantages to direct measurement of the magnetic gradient tensor in applications with nearby sources, such as unexploded ordnance (UXO). To explore these advantages, we adapted a prototype tensor magnetic gradiometer system (TMGS) and successfully implemented a data-reduction procedure. One of several critical reduction issues is the precise determination of a large group of calibration coefficients for the sensors and sensor array. To resolve these coefficients, we devised a spin calibration method, after similar methods of calibrating space-based magnetometers (Snare, 2001). The spin calibration procedure consists of three parts: (1) collecting data by slowly revolving the sensor array in the Earth's magnetic field, (2) deriving a comprehensive set of coefficients from the spin data, and (3) applying the coefficients to the survey data. To show that the TMGS functions as a tensor gradiometer, we conducted an experimental survey that verified that the reduction procedure was effective (Bracken and Brown, in press). Therefore, because it was an integral part of the reduction, it can be concluded that the spin calibration was correctly formulated with acceptably small errors.
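    Conceptually, part (2) of the spin calibration is a least-squares problem: with the sensor slowly revolving in a known ambient field, the calibration coefficients fall out of the overdetermined system. A toy single-axis sketch recovering only a gain and a bias (hypothetical numbers, assuming numpy; the actual TMGS solves for a much larger coefficient set):

```python
import numpy as np

B_earth = 50000.0                      # nT, assumed ambient field magnitude
angles = np.linspace(0, 2 * np.pi, 100, endpoint=False)
true_gain, true_bias = 1.05, 120.0     # hypothetical sensor errors

# Spin data: sensor output while slowly revolving in the Earth's field
field_along_axis = B_earth * np.cos(angles)
readings = true_gain * field_along_axis + true_bias

# Least-squares solve for gain and bias from the spin data
A = np.column_stack([field_along_axis, np.ones_like(angles)])
gain, bias = np.linalg.lstsq(A, readings, rcond=None)[0]
```

    Step (3) then simply inverts the recovered model, here `(reading - bias) / gain`, for every survey sample.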

  3. Fast TPC Online Tracking on GPUs and Asynchronous Data Processing in the ALICE HLT to facilitate Online Calibration

    International Nuclear Information System (INIS)

    Rohr, David; Gorbunov, Sergey; Krzewicki, Mikolaj; Breitner, Timo; Kretz, Matthias; Lindenstruth, Volker

    2015-01-01

    ALICE (A Large Ion Collider Experiment) is one of the four major experiments at the Large Hadron Collider (LHC) at CERN, today the most powerful particle accelerator worldwide. The High Level Trigger (HLT) is an online compute farm of about 200 nodes, which reconstructs events measured by the ALICE detector in real time. The HLT uses a custom online data-transport framework to distribute data and workload among the compute nodes. ALICE employs several calibration-sensitive subdetectors, e.g. the TPC (Time Projection Chamber). For a precise reconstruction, the HLT has to perform the calibration online. Online calibration can make certain offline calibration steps obsolete and can thus speed up offline analysis; looking forward to ALICE Run III starting in 2020, online calibration becomes a necessity. The main detector used for track reconstruction is the TPC, and reconstructing the trajectories in the TPC is the most compute-intense step during event reconstruction. Therefore, a fast tracking implementation is of great importance: reconstructed TPC tracks build the basis for the calibration, making fast online tracking mandatory. We present several components developed for the ALICE High Level Trigger to perform fast event reconstruction and to provide features required for online calibration. First, we present our TPC tracker, which employs GPUs to speed up the processing, and which is based on a Cellular Automaton and the Kalman filter. Our TPC tracking algorithm was successfully used in 2011 and 2012 in the lead-lead and proton-lead runs. We have improved it to leverage features of newer GPUs and have ported it to support OpenCL, CUDA, and CPUs from a single common source code, which makes us vendor independent. Second, we present framework extensions required for online calibration. The extensions, however, are generic and can be used for other purposes as well. We have extended the framework to support asynchronous compute chains
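    The Kalman filter at the heart of such a tracker repeatedly predicts a track state and updates it with each new measurement. A deliberately minimal one-dimensional sketch (not the ALICE code, which filters a multidimensional helix state over many TPC space points):

```python
def kalman_step(x, P, z, q, r):
    """One predict/update cycle of a 1-D Kalman filter.

    x, P : state estimate and its variance
    z    : new measurement
    q, r : process and measurement noise variances
    """
    # Predict (static state model: the state is assumed constant)
    P = P + q
    # Update: blend prediction and measurement by their uncertainties
    K = P / (P + r)          # Kalman gain
    x = x + K * (z - x)
    P = (1.0 - K) * P
    return x, P

x, P = 0.0, 1.0              # vague initial guess
for z in [1.2, 0.9, 1.1, 1.0]:
    x, P = kalman_step(x, P, z, q=0.01, r=0.1)
```

    Each update shrinks the variance P, which is why seeding the filter with a Cellular Automaton track candidate and then filtering the remaining clusters converges quickly.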

  4. A comprehensive combined experimental and computational framework for pre-clinical wear simulation of total knee replacements.

    Science.gov (United States)

    Abdelgaied, A; Fisher, J; Jennings, L M

    2018-02-01

    A more robust pre-clinical wear simulation framework is required in order to simulate the wider and higher ranges of activities observed in different patient populations, such as younger, more active patients. Such a framework will help to understand and address the reported higher failure rates for younger and more active patients (National_Joint_Registry, 2016). The current study has developed and validated a comprehensive combined experimental and computational framework for pre-clinical wear simulation of total knee replacements (TKR). The input mechanical (elastic modulus and Poisson's ratio) and wear parameters of the moderately cross-linked ultra-high molecular weight polyethylene (UHMWPE) bearing material were independently measured in experimental studies under realistic test conditions, similar to the loading conditions found in total knee replacements. The wear predictions from the computational wear simulation were validated against direct experimental wear measurements for size 3 Sigma curved total knee replacements (DePuy, UK) in an independent experimental wear simulation study under three different daily activities: walking, deep squat, and stair-ascending kinematic conditions. The measured compressive mechanical properties of the moderately cross-linked UHMWPE material were more than 20% lower than those reported in the literature under tensile test conditions. The pin-on-plate wear coefficient of moderately cross-linked UHMWPE was significantly dependent on the contact stress and the degree of cross-shear at the articulating surfaces. The computational wear predictions for the TKR from the current framework were consistent and in good agreement with the independent full-TKR experimental wear simulation measurements, with a coefficient of determination of 0.94 for the framework. In addition, the comprehensive combined experimental and computational framework was able to explain the complex experimental wear trends from the three different daily

  5. Comparison of theoretical and experimental determinations of calibration factors for cylindrical and parallel plates ionization chambers

    International Nuclear Information System (INIS)

    Vallejos, Matias; Montano, Gustavo A.; Stefanic, Amalia; Saravi, Margarita

    2009-01-01

    The Ionizing Radiation Dosimetry Section of CNEA is the national reference laboratory for dosimetry, having been designated by the National Institute of Industrial Technology (INTI, custodian under Law 19,511/72 of the national metrology standards) for the safekeeping and operation of the national standards for dosimetry (INTI-CNEA Agreement, February 2004). Since its creation, the CRRD has provided, among other services, the calibration of dosemeters used in radiotherapy in terms of air kerma, and since 2002 it has also provided calibration in terms of absorbed dose to water. This work presents the resources available to the laboratory, which help to consolidate the quality assurance of the results obtained in dosemeter calibrations. (author)

  6. Tritium monitor calibration at Los Alamos National Laboratory

    International Nuclear Information System (INIS)

    Bjork, C.J.; Aikin, D.J.; Houlton, T.W.

    1997-08-01

    Tritium in air is monitored at Los Alamos National Laboratory (LANL) with air-breathing instruments based on ionization chambers. Stack emissions are continuously monitored from sample tubes, each connecting to a tritium bubbler which differentially collects HTO and HT. A set of glass vials of glycol captures the HTO; the HT is oxidized with a palladium catalyst and the resultant HTO is captured in a second set of vials of glycol. The glycol is counted with a liquid scintillation counter. All calibrations are performed with tritium-containing gas. The Radiation Instrumentation and Calibration (RIC) Team has constructed and maintains two closed-loop gas-handling systems based on femto TECH model U24 tritium ion chamber monitors: a fixed system housed in a fume hood and a portable system mounted on two two-wheeled hand trucks. The U24 monitors are calibrated against tritium-in-nitrogen gas standards and are used as transfer standards to calibrate other ion chamber monitors with tritium in nitrogen, diluted with air. The gas-handling systems include a circulation pump which permits a closed circulation loop to be established among the U24 monitor and typically two to four other monitors of a given model during calibration. Fixed and portable monitors can be calibrated. The stack bubblers are calibrated in the field by: blending a known concentration of tritium in air within the known volume of the two portable carts, coupled into a common loop; releasing that gas mixture into a ventilation intake to the stack; collecting oxidized tritium in the bubbler; counting the glycol; and, using the stack and bubbler flow rates, computing the bubbler's efficiency. Gas calibration has become a convenient, quality-assured tool for maintaining the tritium monitors at LANL.
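    The field-calibration arithmetic described at the end, correcting the counted activity for the fraction of stack flow the bubbler actually samples, can be sketched as follows (hypothetical function name and numbers, simple proportional-sampling model):

```python
def bubbler_efficiency(activity_released_bq, activity_counted_bq,
                       stack_flow, bubbler_flow):
    """Fraction of the sampled tritium that the bubbler actually captured.

    The bubbler draws only (bubbler_flow / stack_flow) of the stack flow,
    so that fraction of the released activity is what it could have caught.
    All names and the proportional model are illustrative.
    """
    expected = activity_released_bq * (bubbler_flow / stack_flow)
    return activity_counted_bq / expected

# 1 MBq released; bubbler samples 5 of 10000 flow units; 450 Bq counted
eff = bubbler_efficiency(1.0e6, 450.0, 10000.0, 5.0)
```

    With the flows above, the bubbler could at best have captured 500 Bq, so counting 450 Bq corresponds to a 90% efficiency.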

  7. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    Science.gov (United States)

    Xi, Maolong; Lu, Dan; Gui, Dongwei; Qi, Zhiming; Zhang, Guannan

    2017-01-01

    Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and making reasonable agricultural management decisions. However, calibration of agricultural-hydrological system models is challenging because of model complexity, the existence of strong parameter correlation, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate system of the actual RZWQM2, and then we calibrated the surrogate model using the global optimization algorithm Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a fast-to-evaluate polynomial, it can be evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrate seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method can achieve a smaller objective function value and better calibration performance using a smaller number of expensive RZWQM2 executions, which greatly improves computational efficiency.
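    The surrogate idea can be illustrated in one dimension: spend a few expensive model runs on design points, fit a cheap polynomial, and optimize that instead. A minimal sketch (toy objective and a grid search standing in for the SG/QPSO machinery, assuming numpy):

```python
import numpy as np

def expensive_model(x):
    # Stand-in for a costly simulation run (e.g., one RZWQM2 execution)
    return (x - 2.0) ** 2 + 1.0

# 1. Evaluate the expensive model at a small number of design points
xs = np.linspace(0.0, 4.0, 5)
ys = np.array([expensive_model(x) for x in xs])

# 2. Build a cheap polynomial surrogate from those runs
surrogate = np.poly1d(np.polyfit(xs, ys, deg=2))

# 3. Optimize the surrogate with many cheap evaluations
grid = np.linspace(0.0, 4.0, 4001)
x_opt = grid[int(np.argmin(surrogate(grid)))]
```

    The payoff is the evaluation budget: five expensive runs plus thousands of cheap surrogate evaluations, rather than thousands of expensive runs.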

  8. Quantum-Enhanced Cyber Security: Experimental Computation on Quantum-Encrypted Data

    Science.gov (United States)

    2017-03-02

    Final report AFRL-AFOSR-UK-TR-2017-0020, "Quantum-Enhanced Cyber Security: Experimental Computation on Quantum-Encrypted Data", Philip Walther, Universität Wien; grant FA9550-16-1-0004, program element 61102F; sponsoring/monitoring agency EOARD, Unit 4515, APO AE 09421-4515.

  9. Data Fusion for a Vision-Radiological System: a Statistical Calibration Algorithm

    International Nuclear Information System (INIS)

    Enqvist, Andreas; Koppal, Sanjeev; Riley, Phillip

    2015-01-01

    Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of calibration algorithms for characterizing the fused sensor system as a single entity. When evaluating system calibration algorithms, there is a clear need to correct for scene deviations from the basic inverse distance-squared law governing the detection rates. In particular, the computer vision system provides a map of the distance dependence of the sources being tracked, into which the time-dependent radiological data can be incorporated by fusing the two sensors' output data. (authors)
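    The inverse-square relation at the heart of the fusion is simple: given the vision system's distance estimate, a point-source strength consistent with the measured count rate is the rate times the distance squared. A hedged sketch (hypothetical function and numbers, ideal point-source model):

```python
def source_strength(count_rate, distance_m, background=0.0):
    """Invert the inverse-square law: S = (rate - background) * r**2.

    distance_m comes from the vision tracker; count_rate from the
    radiation sensor. Units and the ideal point-source model are
    illustrative assumptions.
    """
    return (count_rate - background) * distance_m ** 2

# The same source observed at two tracked distances gives one strength
s1 = source_strength(400.0, 1.0)
s2 = source_strength(100.0, 2.0)
```

    Calibration then amounts to characterizing how the real scene (shielding, scatter, sensor geometry) deviates from this ideal model.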

  10. Data Fusion for a Vision-Radiological System: a Statistical Calibration Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Enqvist, Andreas; Koppal, Sanjeev; Riley, Phillip [University of Florida, Gainesville, FL 32611 (United States)

    2015-07-01

    Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of calibration algorithms for characterizing the fused sensor system as a single entity. When evaluating system calibration algorithms, there is a clear need to correct for scene deviations from the basic inverse distance-squared law governing the detection rates. In particular, the computer vision system provides a map of the distance dependence of the sources being tracked, into which the time-dependent radiological data can be incorporated by fusing the two sensors' output data. (authors)

  11. Computational and experimental optimization of the exhaust air energy recovery wind turbine generator

    International Nuclear Information System (INIS)

    Tabatabaeikia, Seyedsaeed; Ghazali, Nik Nazri Bin Nik; Chong, Wen Tong; Shahizare, Behzad; Izadyar, Nima; Esmaeilzadeh, Alireza; Fazlizan, Ahmad

    2016-01-01

    Highlights: • Studying the viability of harvesting wasted energy with an exhaust air recovery generator. • Optimizing the design using response surface methodology. • Validation of the optimization and computational results by experimental tests. • Investigation of flow behaviour using computational fluid dynamic simulations. • Technical and economic study of the exhaust air recovery generator. - Abstract: This paper studies the optimization of an innovative exhaust air recovery wind turbine generator through computational fluid dynamic (CFD) simulations. The optimization strategy aims to maximize the overall energy generation of the system while guaranteeing that it does not degrade the cooling tower performance by decreasing airflow intake or increasing fan motor power consumption. The wind turbine rotor position, modified diffuser plates, and the introduction of separator plates to the design are considered as the variable factors for the optimization, and the generated power coefficient is selected as the optimization objective. Unlike most previous optimization studies in the field of wind turbines, this study uses response surface methodology (RSM), an analytical optimization procedure based on multivariate statistical techniques. A comprehensive study of CFD parameters, including mesh resolution, turbulence model, and transient time-step values, is presented. The system is simulated using the SST K-ω turbulence model, and both the computational and optimization results are validated against experimental data obtained in the laboratory. Results show that the optimization strategy can improve the generated wind turbine power by 48.6% compared to the baseline design, while enhancing the fan intake airflow rate and decreasing fan motor power consumption. The obtained optimization equations are also validated by both CFD and experimental results, and a negligible deviation in the range of 6–8.5% is observed.

  12. Cross-calibrating Spatial Positions of Light-viewing Diagnostics using Plasma Edge Sweeps in DIII-D

    International Nuclear Information System (INIS)

    Solomon, W.M.; Burrell, K.H.; Gohil, P.; Groebner, R.; Kaplan, D.

    2003-01-01

    An experimental technique is presented that permits diagnostics viewing light from the plasma edge to be spatially calibrated relative to one another. By sweeping the plasma edge, each chord of each diagnostic sweeps out a portion of the light emission profile. A nonlinear least-squares fit to such data provides superior cross-calibration of diagnostics located at different toroidal locations compared with simple surveying. Another advantage of the technique is that it can be used to monitor the position of viewing chords during an experimental campaign to ensure that alignment does not change over time. Moreover, should such a change occur, the data can still be cross-calibrated and its usefulness retained
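    The cross-calibration reduces to a one-parameter nonlinear least-squares problem: find the chord-position offset that best aligns one diagnostic's swept signal with the shared emission profile. A toy sketch (the Gaussian profile and brute-force search are illustrative stand-ins, assuming numpy):

```python
import numpy as np

def emission(r):
    # Model edge light-emission profile (Gaussian, illustrative)
    return np.exp(-((r - 2.25) / 0.05) ** 2)

# Diagnostic B views the same swept profile, but its nominal chord
# positions carry an unknown offset (here 0.01 m, to be recovered)
r_a = np.linspace(2.1, 2.4, 40)
signal_b = emission(r_a + 0.01)

# Nonlinear fit by brute-force search over the candidate offset
offsets = np.linspace(-0.05, 0.05, 2001)
residuals = [np.sum((emission(r_a + d) - signal_b) ** 2) for d in offsets]
best = offsets[int(np.argmin(residuals))]
```

    In practice a Levenberg-Marquardt-style fit would replace the grid search, and the profile shape itself would be fitted jointly with the per-diagnostic offsets.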

  13. Mercury CEM Calibration

    Energy Technology Data Exchange (ETDEWEB)

    John Schabron; Joseph Rovani; Mark Sanderson

    2008-02-29

    Mercury continuous emissions monitoring systems (CEMS) are being implemented in over 800 coal-fired power plant stacks. The power industry desires to conduct at least a full year of monitoring before the formal monitoring and reporting requirement begins on January 1, 2009. It is important for the industry to have available reliable, turnkey equipment from CEM vendors. Western Research Institute (WRI) is working closely with the Electric Power Research Institute (EPRI), the National Institute of Standards and Technology (NIST), and the Environmental Protection Agency (EPA) to facilitate the development of the experimental criteria for a NIST traceability protocol for dynamic elemental mercury vapor generators. The generators are used to calibrate mercury CEMs at power plant sites. The Clean Air Mercury Rule (CAMR) which was published in the Federal Register on May 18, 2005 requires that calibration be performed with NIST-traceable standards (Federal Register 2007). Traceability procedures will be defined by EPA. An initial draft traceability protocol was issued by EPA in May 2007 for comment. In August 2007, EPA issued an interim traceability protocol for elemental mercury generators (EPA 2007). The protocol is based on the actual analysis of the output of each calibration unit at several concentration levels ranging initially from about 2-40 {micro}g/m{sup 3} elemental mercury, and in the future down to 0.2 {micro}g/m{sup 3}, and this analysis will be directly traceable to analyses by NIST. The document is divided into two separate sections. The first deals with the qualification of generators by the vendors for use in mercury CEM calibration. The second describes the procedure that the vendors must use to certify the generator models that meet the qualification specifications. The NIST traceable certification is performance based, traceable to analysis using isotope dilution inductively coupled plasma/mass spectrometry performed by NIST in Gaithersburg, MD. 

  14. Calibration and automatic counting in a neutrons dosimeter

    International Nuclear Information System (INIS)

    Tavera-Davila, M.L.

    1991-01-01

    Over the past decade, the majority of improvements in spectrometry have come not from improved detectors but from better computing facilities, unfolding codes, and nuclear data. However, several types of passive neutron detectors have the potential to be developed into crude, low-resolution spectrometers to be worn by individuals, which makes them an attractive subject for further investigation. This thesis contributes to the understanding of the properties of solid state nuclear track detectors (SSNTD) in order to use them in neutron dosimetry and spectrometry. In the first chapter the basic principles of neutron interaction with matter and some relevant considerations about detection and dosimetry are presented. The third chapter deals with detection properties of solid state nuclear track detectors (SSNTD). Calibration methods are presented for three different applications: detection of heavy ions, radon and neutrons. In chapter 4, equations are derived which predict the overall track-formation efficiency of thin plastic detectors as a function of range, angle and type of the incident particle, as well as residual thickness. Using experimental data on proton registration properties in thin plastic detectors and the previously derived equations, mathematical expressions are developed to predict the spark counting efficiency for recoil protons and tritons produced by neutrons in SSNTD. Chapter five compares the experimental results with the theoretical results of chapter 4. In chapter 6, experimentally optimized parameters for the electrochemical etching of thin plastic detectors are presented, and the electrochemical etching efficiency is compared with the spark counting efficiency obtained in chapter 5. In chapter 7, general comments on applications to neutron dosimetry are presented. (Author)

  15. Radiation condition in computerized tomography (CT): determination and calibration of dosemeters

    International Nuclear Information System (INIS)

    Adrade, L.C.; Peixoto, J.G.P.

    2016-01-01

    RQT is the standard series of radiation qualities for computed tomography; it simulates the unattenuated beam of a CT scanner. Pencil-type ionization chambers are the dosimeters used in CT. The LNMRI has established the characterization of the RQT radiation qualities and the calibration of secondary standard pencil-type ionization chambers in its laboratory. The beam obtained has the characteristics required by IEC 61267. The calibration results presented a combined uncertainty, expanded to the 95.45% confidence level, of 2.22%. (author)

  16. Kali: The framework for fine calibration of the LHCb Electromagnetic Calorimeter

    International Nuclear Information System (INIS)

    Belyaev, Ivan; Savrina, Daria; Graciani, Ricardo; Puig, Albert

    2011-01-01

    The precise calibration (at a level below 1%) of the electromagnetic calorimeter (ECAL) of the LHCb experiment is an essential task for the fulfilment of the LHCb physics program. The final step of this task is performed with two calibration methods using real data from the experimental setup. It is a very CPU-consuming procedure, as both methods require processing O(10^8) events which must be selected, reconstructed and analyzed. In this document we present the Kali framework, developed within the LHCb software framework, which implements these two final calibration methods. It is integrated with Grid middleware and makes use of parallelism tools, such as Python parallel processing modules, to provide an efficient way, in terms of both time and disk usage, for the final ECAL calibration. The results of the fine calibration with the very first data collected by the LHCb experiment are also presented: with the Kali framework, the processing took only two days and achieved a calibration accuracy of 2-2.5% for the different ECAL areas.

  17. Modeling and Calibration of a Novel One-Mirror Galvanometric Laser Scanner

    Directory of Open Access Journals (Sweden)

    Chengyi Yu

    2017-01-01

    Full Text Available A laser stripe sensor has limited application when a point cloud of geometric samples on the surface of the object needs to be collected, so a galvanometric laser scanner is designed by using a one-mirror galvanometer element as its mechanical device to drive the laser stripe to sweep along the object. A novel mathematical model is derived for the proposed galvanometer laser scanner without any position assumptions and then a model-driven calibration procedure is proposed. Compared with available model-driven approaches, the influence of machining and assembly errors is considered in the proposed model. Meanwhile, a plane-constraint-based approach is proposed to extract a large number of calibration points effectively and accurately to calibrate the galvanometric laser scanner. Repeatability and accuracy of the galvanometric laser scanner are evaluated on the automobile production line to verify the efficiency and accuracy of the proposed calibration method. Experimental results show that the proposed calibration approach yields similar measurement performance compared with a look-up table calibration method.

  18. Local Strategy Combined with a Wavelength Selection Method for Multivariate Calibration

    Directory of Open Access Journals (Sweden)

    Haitao Chang

    2016-06-01

    Full Text Available One of the essential factors influencing the prediction accuracy of multivariate calibration models is the quality of the calibration data. A local regression strategy, together with a wavelength selection approach, is proposed to build the multivariate calibration models based on partial least squares regression. The local algorithm is applied to create a calibration set of spectra similar to the spectrum of an unknown sample; the synthetic degree of grey relation coefficient is used to evaluate the similarity. A wavelength selection method based on simple-to-use interactive self-modeling mixture analysis minimizes the influence of noisy variables, and the most informative variables of the most similar samples are selected to build the multivariate calibration model based on partial least squares regression. To validate the performance of the proposed method, ultraviolet-visible absorbance spectra of mixed solutions of food coloring analytes in a concentration range of 20–200 µg/mL are measured. Experimental results show that the proposed method can not only enhance the prediction accuracy of the calibration model, but also greatly reduce its complexity.
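
    As a rough illustration of the local-set selection step described above, the sketch below ranks library spectra by a grey relational grade against the unknown sample's spectrum and keeps the k most similar ones, which would then feed a local PLS model. The spectra, the resolving coefficient rho = 0.5, and the use of global extreme differences are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: select the k calibration spectra most similar to an
# unknown sample using a grey relational grade. All data are invented.

def select_local_set(unknown, library, k, rho=0.5):
    """Return indices of the k library spectra most similar to `unknown`."""
    # Absolute differences between the unknown and every library spectrum.
    deltas = [[abs(u - x) for u, x in zip(unknown, s)] for s in library]
    dmin = min(min(d) for d in deltas)   # global minimum difference
    dmax = max(max(d) for d in deltas)   # global maximum difference

    def grade(ds):
        if dmax == 0:                    # all spectra identical to the unknown
            return 1.0
        return sum((dmin + rho * dmax) / (d + rho * dmax) for d in ds) / len(ds)

    grades = [grade(d) for d in deltas]
    order = sorted(range(len(library)), key=lambda i: grades[i], reverse=True)
    return order[:k]

library = [[1.0, 2.0, 3.1], [5.0, 5.0, 5.0], [1.2, 2.2, 3.2]]
print(select_local_set([1.0, 2.0, 3.0], library, 2))  # → [0, 2]
```

    The two spectra closest to the unknown are selected; the dissimilar one is excluded from the local calibration set.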

  19. Calibration of optically trapped nanotools

    Energy Technology Data Exchange (ETDEWEB)

    Carberry, D M; Simpson, S H; Grieve, J A; Hanna, S; Miles, M J [H H Wills Physics Laboratory, University of Bristol, Tyndall Avenue, Bristol BS8 1TL (United Kingdom); Wang, Y; Schaefer, H; Steinhart, M [Institute for Chemistry, University of Osnabrueck, Osnabrueck (Germany); Bowman, R; Gibson, G M; Padgett, M J, E-mail: m.j.miles@bristol.ac.uk [SUPA, Department of Physics and Astronomy, University of Glasgow, Science Road, Glasgow G12 8QQ (United Kingdom)

    2010-04-30

    Holographically trapped nanotools can be used in a novel form of force microscopy. By measuring the displacement of the tool in the optical traps, the contact force experienced by the probe can be inferred. In the following paper we experimentally demonstrate the calibration of such a device and show that its behaviour is independent of small changes in the relative position of the optical traps. Furthermore, we explore more general aspects of the thermal motion of the tool.

  20. Calibration bench of flowmeters; Banc d'etalonnage de debitmetres

    Energy Technology Data Exchange (ETDEWEB)

    Bremond, J; Da Costa, D; Calvet, A; Vieuxmaire, C

    1966-07-01

    This equipment is devoted to the comparison of the signals from two turbines installed in the Cabri experimental loop. Each signal is compared with that of a standard turbine. The characteristics and performance of the calibration bench are presented. (A.L.B.)

  1. Multidetector calibration for mass spectrometers

    International Nuclear Information System (INIS)

    Bayne, C.K.; Donohue, D.L.; Fiedler, R.

    1994-06-01

    The International Atomic Energy Agency's Safeguards Analytical Laboratory has performed calibration experiments to measure the different efficiencies among multi-Faraday detectors for a Finnigan-MAT 261 mass spectrometer. Two types of calibration experiments were performed: (1) peak-shift experiments and (2) peak-jump experiments. For peak-shift experiments, the ion intensities were measured for all isotopes of an element in different Faraday detectors; repeated measurements were made by shifting the isotopes to various Faraday detectors, and two different peak-shifting schemes were used to measure plutonium (UK Pu5/92138) samples. For peak-jump experiments, ion intensities were measured in a reference Faraday detector for a single isotope and compared with those measured in the other Faraday detectors. Repeated measurements were made by switching back and forth between the reference Faraday detector and a selected Faraday detector, and this switching procedure was repeated for all Faraday detectors. Peak-jump experiments were performed with replicate measurements of 239Pu, 187Re, and 238U. Detector efficiency factors were estimated for both peak-jump and peak-shift experiments using a flexible calibration model to statistically analyze both types of multidetector calibration experiments. The calculated detector efficiency factors were shown to depend on both the material analyzed and the experimental conditions, so a single efficiency factor per detector, to be applied in routine sample analyses, is not recommended. An alternative three-run peak-shift sample analysis should be considered; a statistical analysis of the data from this peak-shift experiment can adjust the isotopic ratio estimates for detector differences in each sample analysis.
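
    The peak-jump procedure described above lends itself to a simple efficiency estimate: the same ion beam is measured alternately in a reference Faraday detector and in the detector under test, and an efficiency factor can be taken as the mean intensity ratio. The sketch below is a hypothetical illustration with invented intensities, not the flexible statistical calibration model actually used in the study.

```python
# Hypothetical sketch of a peak-jump efficiency estimate. Detector names
# ("F2", "F3") and intensity values are invented for illustration.

def efficiency_factors(runs):
    """runs: {detector: [(I_ref, I_det), ...]} of paired intensities.

    Returns the mean I_det / I_ref ratio per detector, i.e. the detector's
    efficiency relative to the reference Faraday detector."""
    factors = {}
    for det, pairs in runs.items():
        ratios = [i_det / i_ref for i_ref, i_det in pairs]
        factors[det] = sum(ratios) / len(ratios)
    return factors

runs = {
    "F2": [(1.000, 0.998), (1.002, 1.000)],  # slightly less efficient
    "F3": [(1.000, 1.004), (0.998, 1.002)],  # slightly more efficient
}
factors = efficiency_factors(runs)
print({d: round(f, 4) for d, f in factors.items()})
```

    A ratio below one means the detector reads low relative to the reference; the abstract's caveat applies here too, since such factors vary with material and experimental conditions.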

  2. Automatic calibration method of voxel size for cone-beam 3D-CT scanning system

    International Nuclear Information System (INIS)

    Yang Min; Wang Xiaolong; Wei Dongbo; Liu Yipeng; Meng Fanyong; Li Xingdong; Liu Wenli

    2014-01-01

    For a cone-beam three-dimensional computed tomography (3D-CT) scanning system, the voxel size is an important indicator to guarantee the accuracy of data analysis and feature measurement based on 3D-CT images. However, the voxel size changes as the rotary stage moves along the X-ray direction. In order to realize automatic calibration of the voxel size, a new and easily implemented method is proposed. In this method, several projections of a spherical phantom are captured at different imaging positions and the corresponding voxel size values are calculated by non-linear least-square fitting. From these fitted values, a linear equation is obtained that relates the voxel size to the rotary-stage translation distance from its nominal zero position. Finally, the linear equation is imported into the calibration module of the 3D-CT scanning system, so that when the rotary stage moves along the X-ray direction, an accurate voxel size value is dynamically exported. The experimental results prove that this method meets the requirements of the actual CT scanning system and has the virtues of easy implementation and high accuracy. (authors)
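
    The final calibration step, fitting a linear relation between voxel size and the rotary-stage translation distance, can be sketched as follows. The calibration pairs are invented, and a plain ordinary-least-squares fit stands in for the system's actual implementation.

```python
# Hypothetical sketch: fit voxel_size = a * d + b, where d is the rotary-stage
# translation distance (mm) from its nominal zero position. The (d, voxel_size)
# pairs would come from non-linear least-square fits of sphere-phantom
# projections; the values below are invented.

def fit_line(points):
    """Ordinary least squares for y = a * x + b over (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

calibration = [(0.0, 0.100), (50.0, 0.125), (100.0, 0.150), (150.0, 0.175)]
a, b = fit_line(calibration)

def voxel_size(d_mm):
    """Dynamically export the voxel size for any stage position."""
    return a * d_mm + b

print(round(voxel_size(75.0), 4))  # → 0.1375
```

    Once the line is stored in the calibration module, any stage position maps directly to a voxel size without re-imaging the phantom.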

  3. Calculating buoy response for a wave energy converter—A comparison of two computational methods and experimental results

    Directory of Open Access Journals (Sweden)

    Linnea Sjökvist

    2017-05-01

    Full Text Available When designing a wave power plant, reliable and fast simulation tools are required. Computational fluid dynamics (CFD software provides high accuracy but with a very high computational cost, and in operational, moderate sea states, linear potential flow theories may be sufficient to model the hydrodynamics. In this paper, a model is built in COMSOL Multiphysics to solve for the hydrodynamic parameters of a point-absorbing wave energy device. The results are compared with a linear model where the hydrodynamical parameters are computed using WAMIT, and to experimental results from the Lysekil research site. The agreement with experimental data is good for both numerical models.

  4. Calibration and validation of full-field techniques

    Directory of Open Access Journals (Sweden)

    Thalmann R.

    2010-06-01

    Full Text Available We review basic metrological terms related to the use of measurement equipment for verification of numerical model calculations. We address three challenges that are faced when performing measurements in experimental mechanics with optical techniques: the calibration of a measuring instrument that (i) measures strain values, (ii) provides full-field data, and (iii) is dynamic.

  5. Brightness checkerboard lattice method for the calibration of the coaxial reverse Hartmann test

    Science.gov (United States)

    Li, Xinji; Hui, Mei; Li, Ning; Hu, Shinan; Liu, Ming; Kong, Lingqin; Dong, Liquan; Zhao, Yuejin

    2018-01-01

    The coaxial reverse Hartmann test (RHT) is widely used in the measurement of large aspheric surfaces as an auxiliary method for interferometric measurement because of its large dynamic range, flexible testing of low-spatial-frequency surface errors, and low cost. The accuracy of the coaxial RHT depends on its calibration. However, the calibration process remains inefficient, and the signal-to-noise ratio limits the accuracy of the calibration. In this paper, brightness checkerboard lattices are used to replace the traditional dot matrix. The brightness checkerboard method reduces the number of dot-matrix projections in the calibration process, thus improving efficiency. An LCD screen displays a brightness checkerboard lattice in which brighter and darker checkerboards are alternately arranged. From the image on the detector, the relationship between rays at certain angles and the photosensitive positions in the detector coordinates can be obtained, and a differential de-noising method effectively reduces the impact of noise on the measurement results. Simulation and experiment proved the feasibility of the method. Theoretical analysis and experimental results show that the efficiency of the brightness checkerboard lattices is about four times that of the traditional dot matrix, and that the signal-to-noise ratio of the calibration is significantly improved.

  6. Conical Probe Calibration and Wind Tunnel Data Analysis of the Channeled Centerbody Inlet Experiment

    Science.gov (United States)

    Truong, Samson Siu

    2011-01-01

    For a multi-hole test probe undergoing wind tunnel tests, the resulting data need to be analyzed for significant trends, including the relationships among the pressure distributions, the geometric orientation, and the local velocity vector. However, experimental runs always involve some error; as a result, a calibration procedure is required to compensate for it. In this case, the error consists of the misalignment bias angles resulting from the distortion associated with the angularity of the test probe or the local velocity vector. Through the series of calibration steps presented here, the angular biases are determined and removed from the data sets. By removing the misalignment, smoother pressure distributions contribute to more accurate experimental results, which in turn can be compared with theoretical and actual in-flight results. Error analyses are also performed to verify the accuracy of the calibration error reduction. The resulting calibrated data will be implemented in an in-flight RTF script that will output critical flight parameters during future CCIE experimental test runs. All of these tasks are associated with, and contribute to, NASA Dryden Flight Research Center's F-15B Research Testbed's Small Business Innovation Research project, the Channeled Centerbody Inlet Experiment.

  7. EXPERIMENTAL CALIBRATION OF UNDERGROUND HEAT TRANSFER MODELS UNDER A WINERY BUILDING IN A RURAL AREA

    Directory of Open Access Journals (Sweden)

    Francesco Tinti

    2017-01-01

    Full Text Available Ground temperature and hydrogeological conditions are key parameters for many engineering applications, such as the design of building basements and underground spaces and the assessment of shallow geothermal energy potential. Especially in urban areas, in the very shallow depths, it is difficult to find natural undisturbed underground thermal conditions because of anthropic interventions. The assessment of underground behaviour in disturbed conditions will become more and more relevant because of increasing awareness to energy efficiency and renewable energy topics. The purpose of this paper is to show a three-dimensional representation - based on models calibrated on experimental data - of the underground thermal behaviour affected by a building in a rural area in Italy. Temperature varies in space and time and it depends on ground, climate and building characteristics, and all these parameters are taken into account by the seasonal periodic modelling implemented. The results obtained in a context of low urbanization indirectly suggest the importance of these effects in dense urban areas; taking greater account of these aspects could lead to improvements in the design of underground spaces and geo-exchanger fields for geothermal energy exploitation.

  8. Exposure-rate calibration using large-area calibration pads

    International Nuclear Information System (INIS)

    Novak, E.F.

    1988-09-01

    The US Department of Energy (DOE) Office of Remedial Action and Waste Technology established the Technical Measurements Center (TMC) at the DOE Grand Junction Projects Office (GJPO) in Grand Junction, Colorado, to standardize, calibrate, and compare measurements made in support of DOE remedial action programs. A set of large-area, radioelement-enriched concrete pads was constructed by the DOE in 1978 at the Walker Field Airport in Grand Junction for use as calibration standards for airborne gamma-ray spectrometer systems. The use of these pads was investigated by the TMC as potential calibration standards for portable scintillometers employed in measuring gamma-ray exposure rates at Uranium Mill Tailings Remedial Action (UMTRA) project sites. Data acquired on the pads using a pressurized ionization chamber (PIC) and three scintillometers are presented as an illustration of an instrumental calibration. Conclusions and recommended calibration procedures are discussed, based on the results of these data.

  9. Computer programs for evaluation of turbulence characteristics from hot-wire measurements

    International Nuclear Information System (INIS)

    Vosahlo, L.

    1984-06-01

    This report describes the set of computer programs for evaluation of turbulent flow characteristics from hot-wire experimental data. Three different methods, along with some variants, are implemented in these programs. This enables a comparison of the results obtained by these methods and an analysis of the influence of individual calculation parameters and calibration coefficients on the evaluated results. The results are printed in lucid numerical tables and written into files for further processing into graphs by plotting routines. (orig.) [de]

  10. Absolute calibration of in vivo measurement systems using magnetic resonance imaging and Monte Carlo computations

    International Nuclear Information System (INIS)

    Mallett, M.W.

    1991-01-01

    Lawrence Livermore National Laboratory (LLNL) is currently investigating a new method for obtaining absolute calibration factors for radiation measurement systems used to measure internally deposited radionuclides in vivo. This method uses magnetic resonance imaging (MRI) to determine the anatomical makeup of an individual. A new MRI technique is also employed that is capable of resolving the fat and water content of the human tissue. This anatomical and biochemical information is used to model a mathematical phantom. Monte Carlo methods are then used to simulate the transport of radiation throughout the phantom. By modeling the detection equipment of the in vivo measurement system into the code, calibration factors are generated that are specific to the individual. Furthermore, this method eliminates the need for surrogate human structures in the calibration process. A demonstration of the proposed method is being performed using a fat/water matrix.

  11. Computational and experimental fluid mechanics. Progress report no. 5 for period January 1. to December 31. 1996

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    The general purpose of the program is the development of efficient algorithms, their implementation in codes of Computational Fluid Mechanics (CFD), and the experimental verification of these codes. Flows of both fundamental and applied nature will be investigated, including flows in industrial process equipment, about aerodynamics structures and ships, and flows over bed forms of importance for sediment transport. The experimental work will include the development of improved techniques, emphasizing optical methods. The objectives will be met through a coordinated experimental and theoretical/computational research program, organized in 7 specific projects: 1. CFD-methods and algorithms. 2. Spectral element simulation of ultrafiltration. 3. Turbulent swirling flows. 4. Near-wall models of turbulence. 5. Flow over bed forms. 6. Flow past ship hull. 7. Development of experimental techniques. (EG) 10 refs.

  12. User-friendly freehand ultrasound calibration using Lego bricks and automatic registration.

    Science.gov (United States)

    Xiao, Yiming; Yan, Charles Xiao Bo; Drouin, Simon; De Nigris, Dante; Kochanowska, Anna; Collins, D Louis

    2016-09-01

    As an inexpensive, noninvasive, and portable clinical imaging modality, ultrasound (US) has been widely employed in many interventional procedures for monitoring potential tissue deformation, surgical tool placement, and locating surgical targets. The application requires the spatial mapping between 2D US images and 3D coordinates of the patient. Although positions of the devices (i.e., ultrasound transducer) and the patient can be easily recorded by a motion tracking system, the spatial relationship between the US image and the tracker attached to the US transducer needs to be estimated through an US calibration procedure. Previously, various calibration techniques have been proposed, where a spatial transformation is computed to match the coordinates of corresponding features in a physical phantom and those seen in the US scans. However, most of these methods are difficult to use for novel users. We proposed an ultrasound calibration method by constructing a phantom from simple Lego bricks and applying an automated multi-slice 2D-3D registration scheme without volumetric reconstruction. The method was validated for its calibration accuracy and reproducibility. Our method yields a calibration accuracy of [Formula: see text] mm and a calibration reproducibility of 1.29 mm. We have proposed a robust, inexpensive, and easy-to-use ultrasound calibration method.

  13. Optimization of a data acquisition and control system for calibration of X rays detectors with usage of computational tools

    International Nuclear Information System (INIS)

    Rodrigues, Yklys Santos

    2013-01-01

    The Brazilian standard ABNT ISO/IEC 17025/2005 specifies general requirements for the competence of testing and calibration facilities. One of these requirements states that such facilities must continually optimize their processes; uncertainties must therefore be estimated and reduced as much as possible. To achieve this goal, the Laboratorio de Calibracao de Instrumentos (LCI), the laboratory responsible for dosimeter calibration at the Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN), has acquired equipment for its X-ray calibration system, such as a rotating filter holder, and has been developing software to provide the best calibration results with the lowest external interference (from the operator, for instance) and thus reduce the possibility of errors. Semi-automated LabVIEW-based calibration software has been developed at the LCI and has been subject to constant testing and improvement. The software is divided into two basic modules: one is responsible for the calibration of the monitor chamber against a reference standard dosimeter, while the other performs instrument calibrations using the monitor chamber as the reference. Several influence quantities have been analyzed and the proper corrections have been added to the software as multiplicative correction factors. The software not only acquires data from all the equipment used in the laboratory, but also calculates calibration factors and their uncertainties. Analysis shows that this system has optimized the whole calibration process, lowering the occupational dose, reducing the mean calibration time and the uncertainties, and preventing errors caused by the system's users. (author)
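
    Since the abstract mentions multiplicative correction factors and combined uncertainties, a minimal sketch of how such factors might be applied and their relative standard uncertainties propagated in quadrature is given below. It assumes independent influence quantities and a purely multiplicative model; all factor names and numbers are invented and do not reflect the LCI software.

```python
# Hypothetical sketch: apply multiplicative correction factors to a raw
# reading and combine relative standard uncertainties in quadrature.

import math

def corrected_value(raw, factors):
    """Multiply the raw reading by every correction factor."""
    value = raw
    for f, _ in factors:
        value *= f
    return value

def combined_rel_uncertainty(raw_rel_u, factors):
    """Quadrature sum of relative uncertainties for a product model."""
    terms = [raw_rel_u] + [u for _, u in factors]
    return math.sqrt(sum(u * u for u in terms))

# (factor, relative standard uncertainty) pairs, e.g. temperature/pressure,
# humidity and distance corrections -- illustrative numbers only.
factors = [(1.012, 0.001), (0.998, 0.0005), (1.000, 0.002)]
value = corrected_value(100.0, factors)
u_rel = combined_rel_uncertainty(0.003, factors)
print(round(value, 4), round(2 * u_rel, 4))  # 2 * u_rel: k = 2 expanded
```

    Multiplying the coverage factor k = 2 gives the expanded uncertainty at roughly the 95% confidence level, the form usually quoted on calibration certificates.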

  14. Preliminary experimentally-validated forced and mixed convection computational simulations of the Rotatable Buoyancy Tunnel

    International Nuclear Information System (INIS)

    Clifford, Corey E.; Kimber, Mark L.

    2015-01-01

    Although computational fluid dynamics (CFD) has not been directly utilized to perform safety analyses of nuclear reactors in the United States, several vendors are considering adopting commercial numerical packages for current and future projects. To ensure the accuracy of these computational models, it is imperative to validate the assumptions and approximations built into commercial CFD codes against physical data from flows analogous to those in modern nuclear reactors. To this end, researchers at Utah State University (USU) have constructed the Rotatable Buoyancy Tunnel (RoBuT) test facility, which is designed to provide flow and thermal validation data for CFD simulations of forced and mixed convection scenarios. In order to evaluate the ability of current CFD codes to capture the complex physics associated with these types of flows, a computational model of the RoBuT test facility is created using the ANSYS Fluent commercial CFD code. The numerical RoBuT model is analyzed at identical conditions to several experimental trials undertaken at USU. Each experiment is reconstructed numerically and evaluated with the second-order Reynolds stress model (RSM). Two different thermal boundary conditions at the heated surface of the RoBuT test section are investigated: constant temperature (isothermal) and constant surface heat flux (isoflux). Additionally, the fluid velocity at the inlet of the test section is varied in an effort to modify the relative importance of natural convection heat transfer from the heated wall of the RoBuT. Mean velocity, both in the streamwise and transverse directions, as well as components of the Reynolds stress tensor at three points downstream of the RoBuT test section inlet are compared to results obtained from experimental trials. 
Early computational results obtained from this research initiative are in good agreement with experimental data obtained from the RoBuT facility, and both the experimental data and the numerical method can be used.

  15. Assessing the Predictive Capability of the LIFEIV Nuclear Fuel Performance Code using Sequential Calibration

    International Nuclear Information System (INIS)

    Stull, Christopher J.; Williams, Brian J.; Unal, Cetin

    2012-01-01

    This report considers the problem of calibrating a numerical model to data from an experimental campaign (or series of experimental tests). The issue is that when an experimental campaign is proposed, only the input parameters associated with each experiment are known (i.e. outputs are not known because the experiments have yet to be conducted). Faced with such a situation, it would be beneficial from the standpoint of resource management to carefully consider the sequence in which the experiments are conducted. In this way, the resources available for experimental tests may be allocated in a way that best 'informs' the calibration of the numerical model. To address this concern, the authors propose decomposing the input design space of the experimental campaign into its principal components. Subsequently, the utility (to be explained) of each experimental test to the principal components of the input design space is used to formulate the sequence in which the experimental tests will be used for model calibration purposes. The results reported herein build on those presented and discussed in (1,2) wherein Verification and Validation and Uncertainty Quantification (VU) capabilities were applied to the nuclear fuel performance code LIFEIV. In addition to the raw results from the sequential calibration studies derived from the above, a description of the data within the context of the Predictive Maturity Index (PMI) will also be provided. The PMI (3,4) is a metric initiated and developed at Los Alamos National Laboratory to quantitatively describe the ability of a numerical model to make predictions in the absence of experimental data, where it is noted that 'predictions in the absence of experimental data' is not synonymous with extrapolation. This simply reflects the fact that resources do not exist such that each and every execution of the numerical model can be compared against experimental data. If such resources existed, the justification for numerical models

  16. Experimental realization of a one-way quantum computer algorithm solving Simon's problem.

    Science.gov (United States)

    Tame, M S; Bell, B A; Di Franco, C; Wadsworth, W J; Rarity, J G

    2014-11-14

    We report an experimental demonstration of a one-way implementation of a quantum algorithm solving Simon's problem-a black-box period-finding problem that has an exponential gap between the classical and quantum runtime. Using an all-optical setup and modifying the bases of single-qubit measurements on a five-qubit cluster state, key representative functions of the logical two-qubit version's black box can be queried and solved. To the best of our knowledge, this work represents the first experimental realization of the quantum algorithm solving Simon's problem. The experimental results are in excellent agreement with the theoretical model, demonstrating the successful performance of the algorithm. With a view to scaling up to larger numbers of qubits, we analyze the resource requirements for an n-qubit version. This work helps highlight how one-way quantum computing provides a practical route to experimentally investigating the quantum-classical gap in the query complexity model.

  17. In-flight interband calibration on AVHRR data by a cloud-viewing technique

    OpenAIRE

    Wald , Lucien

    1997-01-01

    ISBN 90-5410-933-5; International audience; A significant degradation in the responsivity of the AVHRR radiometers aboard the NOAA satellite series affects the vegetation index (NDVI), which is an important source of information for monitoring vegetation conditions on regional and global scales. Many studies have been carried out which use the Earth-viewing calibration approach in order to provide accurate calibration correction coefficients for the computation of the vegetation index using ...

  18. Experimental and Computer Modelling Studies of Metastability of Amorphous Silicon Based Solar Cells

    NARCIS (Netherlands)

    Munyeme, Geoffrey

    2003-01-01

    We present a combination of experimental and computer modelling studies of the light induced degradation in the performance of amorphous silicon based single junction solar cells. Of particular interest in this study is the degradation kinetics of different types of amorphous silicon single junction

  19. Calibration methodology for energy management system of a plug-in hybrid electric vehicle

    International Nuclear Information System (INIS)

    Duan, Benming; Wang, Qingnian; Zeng, Xiaohua; Gong, Yinsheng; Song, Dafeng; Wang, Junnian

    2017-01-01

    Highlights: • Calibration theory of EMS is proposed. • A comprehensive evaluating indicator is constructed by radar chart method. • Optimal Latin hypercube design algorithm is introduced to obtain training data. • An approximation model is established by using a RBF neural network. • Offline calibration methodology improves the actual calibration efficiency. - Abstract: This paper presents a new analytical calibration method for energy management strategy designed for a plug-in hybrid electric vehicle. This method improves the actual calibration efficiency to reach a compromise among the conflicting calibration requirements (e.g. emissions and economy). A comprehensive evaluating indicator covering emissions and economic performance is constructed by using a radar chart method. A radial basis functions (RBFs) neural network model is proposed to establish a precise model among control parameters and the comprehensive evaluation indicator. The optimal Latin hypercube design is introduced to obtain the experimental data to train the RBFs neural network model. And multi-island genetic algorithm is used to solve the optimization model. Finally, an offline calibration example is conducted. Results validate the effectiveness of the proposed calibration approach in improving vehicle performance and calibration efficiency.
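The surrogate-modelling step can be sketched with a small RBF interpolant mapping control parameters to a scalar indicator (the grid data, kernel width and target function are assumptions for the sketch, not the paper's vehicle model):

```python
import numpy as np

# Toy training set: a 3x3 grid of "control parameter" pairs and a
# synthetic scalar "comprehensive evaluation indicator".
gx, gy = np.meshgrid(np.linspace(-1, 1, 3), np.linspace(-1, 1, 3))
X = np.column_stack([gx.ravel(), gy.ravel()])   # 9 training points
y = np.sin(X[:, 0]) + X[:, 1] ** 2

eps = 5.0                                       # Gaussian kernel width (assumed)

def kernel(A, B):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps * d2)

# Interpolation weights: solve K w = y on the training centres.
w = np.linalg.solve(kernel(X, X), y)

def predict(Xq):
    return kernel(Xq, X) @ w

print(float(np.abs(predict(X) - y).max()))      # near-zero training error
```

In the paper's setting the training pairs would come from an optimal Latin hypercube design rather than a regular grid, and the fitted surrogate would then be handed to the genetic optimizer.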

  20. Equipment calibration with a microprocessor connected to a time-sharing system

    International Nuclear Information System (INIS)

    Fontaine, G.; Guglielmi, L.; Jaeger, J.J.; Szafran, S.

    1981-01-01

    In H.E.P., it is common practice to test and calibrate equipment at different stages (design, construction checks, setting up and running periods) with a dedicated mini or micro-computer (such as the CERN CAVIAR). An alternative solution has been developed in which such tasks are split between a microprocessor (Motorola 6800) and a host computer; this allows an easy and cheap multiplication of independent testing set-ups. The local processor is limited to CAMAC data acquisition, histogramming and simple processing, but its computing power is enhanced by a connection to a host time-sharing system via a MUMM multiplexor described in a separate paper. It is thus possible to perform sophisticated computations (fits, etc.) and to use the host disk space to store calibration results for later use. In spite of the use of assembly language, a software structure has been devised to ease the construction of an application program. This is achieved by the interplay of three levels of facilities: macro-instructions, a library of subroutines, and Patchy-controlled pieces of programs. A comprehensive collection of these is kept in the form of PAM files on the host computer. This system has been used to test calorimeter modules for the UA1 experiment. (orig.)

  1. Radiolytic modelling of spent fuel oxidative dissolution mechanism. Calibration against UO2 dynamic leaching experiments

    International Nuclear Information System (INIS)

    Merino, J.; Cera, E.; Bruno, J.; Quinones, J.; Casas, I.; Clarens, F.; Gimenez, J.; Pablo, J. de; Rovira, M.; Martinez-Esparza, A.

    2005-01-01

    Calibration and testing are inherent aspects of any modelling exercise and consequently they are key issues in developing a model for the oxidative dissolution of spent fuel. In the present work we present the outcome of the calibration process for the kinetic constants of a UO2 oxidative dissolution mechanism developed for use in a radiolytic model. Experimental data obtained in dynamic leaching experiments of unirradiated UO2 have been used for this purpose. The iterative calibration process has provided some insight into the detailed mechanism taking place in the alteration of UO2, particularly the role of ·OH radicals and their interaction with the carbonate system. The results show that, although more simulations are needed for testing in different experimental systems, the calibrated oxidative dissolution mechanism could be included in radiolytic models to gain confidence in the prediction of the long-term alteration rate of the spent fuel under repository conditions.

  2. Shipboard Calibration Network Extension Utilizing COTS Products

    Science.gov (United States)

    2014-09-01

    ...available at the location of the sensor to be calibrated. With the wide adoption of the wireless local area network (WLAN) protocol, IEEE 802.11 standard devices have been proven to provide a stable wireless infrastructure for many applications. The fast setup, wire-free configuration and ...

  3. New error calibration tests for gravity models using subset solutions and independent data - Applied to GEM-T3

    Science.gov (United States)

    Lerch, F. J.; Nerem, R. S.; Chinn, D. S.; Chan, J. C.; Patel, G. B.; Klosko, S. M.

    1993-01-01

    A new method has been developed to provide a direct test of the error calibrations of gravity models based on actual satellite observations. The basic approach projects the error estimates of the gravity model parameters onto satellite observations, and the results of these projections are then compared with data residuals computed from the orbital fits. To allow specific testing of the gravity error calibrations, subset solutions are computed based on the data set and data weighting of the gravity model. The approach is demonstrated using GEM-T3 to show that the gravity error estimates are well calibrated and that reliable predictions of orbit accuracies can be achieved for independent orbits.

  4. Data-driven in computational plasticity

    Science.gov (United States)

    Ibáñez, R.; Abisset-Chavanne, E.; Cueto, E.; Chinesta, F.

    2018-05-01

    Computational mechanics is taking on an enormous importance in industry nowadays. On the one hand, numerical simulations can be seen as a tool that allows industry to perform fewer experiments, reducing costs. On the other hand, the physical processes to be simulated are becoming more complex, requiring new constitutive relationships to capture such behaviors. Therefore, when a new material is to be characterized, an open question remains: which constitutive equation should be calibrated? In the present work, model order reduction techniques are exploited to identify the plastic behavior of a material, opening an alternative route with respect to traditional calibration methods. Indeed, the main objective is to provide a plastic yield function such that the mismatch between experiments and simulations is minimized. Therefore, once the experimental results, as well as the parameterization of the plastic yield function, are provided, finding the optimal plastic yield function can be seen either as a traditional optimization problem or as an interpolation problem. It is important to highlight that the dimensionality of the problem equals the number of dimensions of the parameterization of the yield function. Thus, the use of sparse interpolation techniques seems almost compulsory.
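The "calibration as optimization" view can be sketched with a one-parameter toy yield law fitted to synthetic data (the linear-hardening form and all values are assumptions for illustration, not the paper's parameterization):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy illustration: fit one parameter of a parameterized yield stress
# sigma_y(p) = s0 + h * p (linear hardening) to synthetic "experimental"
# data by minimizing the experiment/simulation mismatch.
s0 = 200.0                                  # known initial yield (MPa, assumed)
p = np.linspace(0.0, 0.1, 20)               # plastic strain levels
h_true = 450.0
sigma_exp = s0 + h_true * p                 # stand-in experimental data

def mismatch(h):
    # Sum of squared differences between model and "experiment".
    return ((s0 + h * p - sigma_exp) ** 2).sum()

res = minimize_scalar(mismatch, bounds=(0.0, 1000.0), method="bounded")
print(round(res.x, 1))  # recovers h close to 450.0
```

With a realistic multi-parameter yield surface the same mismatch functional is minimized over the full parameterization, which is where the dimensionality issue and sparse interpolation enter.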

  5. Result interpretation of experimental calibration for milk citric acid determination via infra-red spectroscopy (MIR-FT)

    Directory of Open Access Journals (Sweden)

    Oto Hanuš

    2009-01-01

    Citric acid (KC) in milk is an indicator of cow energy metabolism, and milk laboratories are setting up KC determination; infra-red analysis (MIR-FT) is one possible method. The goal was to develop a relevant method of reference sample preparation for the calibration of MIR-FT instruments (indirect method; Lactoscope FTIR and MilkoScan FT 6000). A photometric method (c; 428 nm) was used as the reference. KC was added (n = 3) to some reference milk samples (n = 10, bulk milk). The mean value was 9.220 ± 3.094 mmol·l−1, with a variation range from 6.206 to 15.975 mmol·l−1. Recovery of c was from 100.8 to 120.2 %. Correlations between c and MIR-FT were from 0.979 to 0.992 (P < 0.001). These were lower in the set of native milk samples (n = 7), from 0.751 (Lactoscope FTIR; P < 0.05) to 0.947 (MilkoScan FT 6000; P < 0.001), in comparison to the original values of 0.981 to 0.992 (n = 10; P < 0.001). Correlations between calibrated MIR-FT instruments were from 0.958 to 1.0 (P < 0.001). The average recovery for the instruments (n = 12) was 101.6 ± 18.1 %. The mean differences between the c method and MIR-FT after calibration (n = 4) ranged from −0.001 across zero to 0.037 %. The standard deviation of the differences was from 0.0074 to 0.0187 % for the MilkoScan FT 6000 and from 0.0105 to 0.0117 % for the Lactoscope FTIR. The relative variability of the differences (MIR-F, filter technology, and FT) for the major components fat (T), proteins (B) and lactose (L) in total and for the minor components KC and free fatty acids (VMK) was estimated at 1.0, 7.2 and 34.4 %, respectively. The KC result is thus inferior to T, B and L and superior to VMK. The autocorrelation of results (0.042; P > 0.05) demonstrated the independence of consecutive measurements. The milk preservation effect amounted to 0.2323 mmol·l−1 (P < 0.001) with bronopol and 0.0339 mmol·l−1 (P > 0.05) with dichromate; at 3.0 and 0.44 % it was practically negligible and can be compensated via relevant calibration. The results of proficiency testing in post-calibration period and evaluation of double

  6. Influence of rainfall observation network on model calibration and application

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-01-01

    The objective of this study is to investigate the influence of the spatial resolution of the rainfall input on model calibration and application. The analysis is carried out by varying the distribution of the raingauge network. A meso-scale catchment located in southwest Germany has been selected for this study. First, the semi-distributed HBV model is calibrated with the precipitation interpolated from the available observed rainfall of the different raingauge networks. An automatic calibration method based on the combinatorial optimization algorithm simulated annealing is applied. The performance of the hydrological model is analyzed as a function of the raingauge density. Secondly, the calibrated model is validated using interpolated precipitation from the same raingauge density used for the calibration as well as interpolated precipitation based on networks of reduced and increased raingauge density. Lastly, the effect of missing rainfall data is investigated by using a multiple linear regression approach for filling in the missing measurements. The model, calibrated with the complete set of observed data, is then run in the validation period using the above described precipitation fields. The simulated hydrographs obtained in the three sets of experiments described above are analyzed through comparison of the computed Nash-Sutcliffe coefficients and several goodness-of-fit indexes. The results show that a model using different raingauge networks might need re-calibration of the model parameters: specifically, a model calibrated on relatively sparse precipitation information might perform well on dense precipitation information, while a model calibrated on dense precipitation information fails on sparse precipitation information.
Also, the model calibrated with the complete set of observed precipitation and run with incomplete observed data associated with the data estimated using multiple linear regressions, at the locations treated as
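The Nash-Sutcliffe coefficient used above as a goodness-of-fit index can be computed directly from observed and simulated discharge series (the data below are illustrative):

```python
import numpy as np

# Nash-Sutcliffe efficiency (NSE):
#   NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
# A value of 1 is a perfect fit; values <= 0 mean the model is no better
# than simply predicting the mean of the observations.
def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - ((obs - sim) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()

obs = [3.0, 5.0, 9.0, 6.0, 4.0]           # illustrative discharge values
print(nash_sutcliffe(obs, obs))           # 1.0 for a perfect simulation
print(nash_sutcliffe(obs, [5.4] * 5))     # 0.0 when simulating the mean
```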

  7. Research on the calibration methods of the luminance parameter of radiation luminance meters

    Science.gov (United States)

    Cheng, Weihai; Huang, Biyong; Lin, Fangsheng; Li, Tiecheng; Yin, Dejin; Lai, Lei

    2017-10-01

    This paper introduces the standard diffuse-reflection white plate method and the integrating sphere standard luminance source method for calibrating the luminance parameter, and compares the calibration results of the two methods through principle analysis and experimental verification. After using the two methods to calibrate the same radiation luminance meter, the data obtained verify that the testing results of the two methods are both reliable. The results show that the displayed value using the standard white plate method has smaller errors and better reproducibility; however, the standard luminance source method is more convenient and suitable for on-site calibration. Moreover, the standard luminance source method has a wider range and can test the linear performance of the instruments.

  8. Intercomparison and calibration of dose calibrators used in nuclear medicine facilities

    CERN Document Server

    Costa, A M D

    2003-01-01

    The aim of this work was to establish a working standard for the intercomparison and calibration of dose calibrators used in most nuclear medicine facilities for determining the activity of radionuclides administered to patients in specific examinations or therapeutic procedures. A commercial dose calibrator, a set of standard radioactive sources, and syringes, vials and ampoules with radionuclide solutions used in nuclear medicine were utilized in this work. The commercial dose calibrator was calibrated for the radionuclide solutions used in nuclear medicine. Simple instrument tests, such as linearity of response and variation of response with source volume at a constant source activity concentration, were performed. This instrument may be used as a reference system for the intercomparison and calibration of other activity meters, as a method of quality control of dose calibrators utilized in nuclear medicine facilities.

  9. Characteristic thermal-hydraulic problems in NHRs: Overview of experimental investigations and computer codes

    Energy Technology Data Exchange (ETDEWEB)

    Falikov, A A; Vakhrushev, V V; Kuul, V S; Samoilov, O B; Tarasov, G I [OKBM, Nizhny Novgorod (Russian Federation)

    1997-09-01

    The paper briefly reviews the specific thermal-hydraulic problems for AST-type NHRs, the experimental investigations that have been carried out in the RF, and the design procedures and computer codes used for AST-500 thermohydraulic characteristics and safety validation. (author). 13 refs, 10 figs, 1 tab.

  10. Calibration of spent fuel measurement assembly

    International Nuclear Information System (INIS)

    Koleska, Michal; Viererbl, Ladislav; Marek, Milan

    2014-01-01

    The LVR-15 research reactor (Czech Republic) has been converted from the highly enriched IRT-2M fuel to the low enriched IRT-4M fuel. To enable independent pre-transport evaluation of IRT-2M burnup, a spectrometric system was developed. This spectrometric system consists of a fuel holder, a collimator and a portable Canberra Big MAC HPGe (High Purity Germanium) detector. In order to have well reproducible and reliable experimental data for modeling of the measurement system, a calibration with a 110mAg source of known activity was performed. This isotope was chosen for having energies similar to those of the isotopes measured in fuel assemblies. The 110mAg source was prepared by irradiating a silver foil in the LVR-15 research reactor; its activity was evaluated in the LVR-15 spectrometric laboratory. From the measured data, an efficiency curve of the spectrometric system has been determined. The experimental data were compared to the calculation results of the MCNPX model of the spectrometric system. - Highlights: • Calibration of a research reactor spent fuel measurement assembly. • On-site prepared 110mAg isotope used for the measurement. • Calculated self-shielding factor for the IRT-2M fuel. • Applicable to other research reactor fuel geometries

  11. A holistic calibration method with iterative distortion compensation for stereo deflectometry

    Science.gov (United States)

    Xu, Yongjia; Gao, Feng; Zhang, Zonghua; Jiang, Xiangqian

    2018-07-01

    This paper presents a novel holistic calibration method for a stereo deflectometry system to improve the system measurement accuracy. The reconstruction result of stereo deflectometry is integrated with the calculated normal data of the measured surface. The calculation accuracy of the normal data is strongly influenced by the calibration accuracy of the geometrical relationship of the stereo deflectometry system. Conventional calibration approaches introduce form error into the system due to an inaccurate imaging model and distortion elimination. The proposed calibration method compensates system distortion with an iterative algorithm instead of the conventional distortion mathematical model. The initial values of the system parameters are calculated from the fringe patterns displayed on the system's LCD screen through a reflection off a markless flat mirror. An iterative algorithm is proposed to compensate system distortion and to optimize the camera imaging parameters and the system geometrical relation parameters based on a cost function. Both simulation work and experimental results show that the proposed calibration method can significantly improve the calibration and measurement accuracy of stereo deflectometry. The PV (peak value) of the measurement error of a flat mirror can be reduced to 69.7 nm by applying the proposed method, from 282 nm obtained with the conventional calibration approach.

  12. Photogrammetric measurement of 3D freeform millimetre-sized objects with micro features: an experimental validation of the close-range camera calibration model for narrow angles of view

    Science.gov (United States)

    Percoco, Gianluca; Sánchez Salmerón, Antonio J.

    2015-09-01

    The measurement of millimetre and micro-scale features is performed by high-cost systems based on technologies with narrow working ranges to accurately control the position of the sensors. Photogrammetry would lower the costs of 3D inspection of micro-features and would be applicable to the inspection of non-removable micro parts of large objects too. Unfortunately, the behaviour of photogrammetry is not known when photogrammetry is applied to micro-features. In this paper, the authors address these issues towards the application of digital close-range photogrammetry (DCRP) to the micro-scale, taking into account that in literature there are research papers stating that an angle of view (AOV) around 10° is the lower limit to the application of the traditional pinhole close-range calibration model (CRCM), which is the basis of DCRP. At first a general calibration procedure is introduced, with the aid of an open-source software library, to calibrate narrow AOV cameras with the CRCM. Subsequently the procedure is validated using a reflex camera with a 60 mm macro lens, equipped with extension tubes (20 and 32 mm) achieving magnification of up to 2 times approximately, to verify literature findings with experimental photogrammetric 3D measurements of millimetre-sized objects with micro-features. The limitation experienced by the laser printing technology, used to produce the bi-dimensional pattern on common paper, has been overcome using an accurate pattern manufactured with a photolithographic process. The results of the experimental activity prove that the CRCM is valid for AOVs down to 3.4° and that DCRP results are comparable with the results of existing and more expensive commercial techniques.

  13. Photogrammetric measurement of 3D freeform millimetre-sized objects with micro features: an experimental validation of the close-range camera calibration model for narrow angles of view

    International Nuclear Information System (INIS)

    Percoco, Gianluca; Sánchez Salmerón, Antonio J

    2015-01-01

    The measurement of millimetre and micro-scale features is performed by high-cost systems based on technologies with narrow working ranges to accurately control the position of the sensors. Photogrammetry would lower the costs of 3D inspection of micro-features and would be applicable to the inspection of non-removable micro parts of large objects too. Unfortunately, the behaviour of photogrammetry is not known when photogrammetry is applied to micro-features. In this paper, the authors address these issues towards the application of digital close-range photogrammetry (DCRP) to the micro-scale, taking into account that in literature there are research papers stating that an angle of view (AOV) around 10° is the lower limit to the application of the traditional pinhole close-range calibration model (CRCM), which is the basis of DCRP. At first a general calibration procedure is introduced, with the aid of an open-source software library, to calibrate narrow AOV cameras with the CRCM. Subsequently the procedure is validated using a reflex camera with a 60 mm macro lens, equipped with extension tubes (20 and 32 mm) achieving magnification of up to 2 times approximately, to verify literature findings with experimental photogrammetric 3D measurements of millimetre-sized objects with micro-features. The limitation experienced by the laser printing technology, used to produce the bi-dimensional pattern on common paper, has been overcome using an accurate pattern manufactured with a photolithographic process. The results of the experimental activity prove that the CRCM is valid for AOVs down to 3.4° and that DCRP results are comparable with the results of existing and more expensive commercial techniques. (paper)

  14. Experimental assessment of computer codes used for safety analysis of integral reactors

    Energy Technology Data Exchange (ETDEWEB)

    Falkov, A.A.; Kuul, V.S.; Samoilov, O.B. [OKB Mechanical Engineering, Nizhny Novgorod (Russian Federation)

    1995-09-01

    The peculiarities of integral reactor thermohydraulics in accidents are associated with the presence of noncondensable gas in the built-in pressurizer, the absence of a pumped ECCS, the use of a guard vessel for LOCA localisation and passive RHRS through in-reactor HXs. These features defined the main trends in the experimental investigations and verification efforts for the computer codes applied. The paper briefly reviews the experimental investigations performed on the thermohydraulics of AST-500 and VPBER600-type integral reactors. The characteristics of the UROVEN/MB-3 code for LOCA analysis in integral reactors and the results of its verification are given. An assessment of the applicability of RELAP5/mod3 for accident analysis in integral reactors is presented.

  15. Auto calibration of a cone-beam-CT

    International Nuclear Information System (INIS)

    Gross, Daniel; Heil, Ulrich; Schulze, Ralf; Schoemer, Elmar; Schwanecke, Ulrich

    2012-01-01

    Purpose: This paper introduces a novel autocalibration method for cone-beam-CTs (CBCT) or flat-panel CTs, assuming a perfect rotation. The method is based on ellipse-fitting. Autocalibration refers to accurate recovery of the geometric alignment of a CBCT device from projection images alone, without any manual measurements. Methods: The authors use test objects containing small arbitrarily positioned radio-opaque markers. No information regarding the relative positions of the markers is used. In practice, the authors use three to eight metal ball bearings (diameter of 1 mm), e.g., positioned roughly in a vertical line such that their projection image curves on the detector preferably form large ellipses over the circular orbit. From this ellipse-to-curve mapping and also from its inversion the authors derive an explicit formula. Nonlinear optimization based on this mapping enables them to determine the six relevant parameters of the system up to the device rotation angle, which is sufficient to define the geometry of a CBCT-machine assuming a perfect rotational movement. These parameters also include out-of-plane rotations. The authors evaluate their method by simulation based on data used in two similar approaches [L. Smekal, M. Kachelriess, S. E, and K. Wa, “Geometric misalignment and calibration in cone-beam tomography,” Med. Phys. 31(12), 3242–3266 (2004); K. Yang, A. L. C. Kwan, D. F. Miller, and J. M. Boone, “A geometric calibration method for cone beam CT systems,” Med. Phys. 33(6), 1695–1706 (2006)]. This allows a direct comparison of accuracy. Furthermore, the authors present real-world 3D reconstructions of a dry human spine segment and an electronic device. The reconstructions were computed from projections taken with a commercial dental CBCT device having two different focus-to-detector distances that were both calibrated with their method. The authors compare their reconstruction with a reconstruction computed by the manufacturer of the
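The ellipse-fitting step such a method relies on can be sketched as an algebraic least-squares conic fit to the projected marker trajectory (the sample curve below is synthetic, not a real marker projection):

```python
import numpy as np

# Algebraic conic fit: find coefficients (a, b, c, d, e, f) minimizing
# |a x^2 + b xy + c y^2 + d x + e y + f| over the trajectory points.
# Synthetic noise-free ellipse standing in for a marker's projection curve.
t = np.linspace(0.0, 2.0 * np.pi, 50)
x = 3.0 * np.cos(t) + 1.0
y = 1.5 * np.sin(t) - 2.0

# Design matrix of conic monomials; the conic is its (near-)null vector.
D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
_, _, Vt = np.linalg.svd(D)
conic = Vt[-1]                    # right singular vector of smallest s.v.

residual = float(np.abs(D @ conic).max())
print(residual)                   # ~0 on noise-free points
```

With noisy projections the same SVD gives the total-least-squares conic, whose parameters then feed the nonlinear optimization of the geometry.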

  16. Determination of calibration constants for perturbing objects of cavity resonators

    International Nuclear Information System (INIS)

    Franco, M.A.R.; Serrao, V.A.; Fuhrmann, C.

    1989-05-01

    Using the Slater theorem, the calibration constants for objects utilized in the technique of perturbation measurements of the electric and magnetic fields of cavities have been determined. Such perturbing objects are utilized in measurements of the shunt impedance and the relative electric field intensity occurring in linac accelerating structures. To determine the calibration constants of the perturbing objects, a cylindrical cavity with a well-known field pattern has been utilized. The cavity was excited in two different modes of oscillation, and the experimental results are in good agreement with the theoretical values. (author) [pt

  17. Thermodynamically consistent model calibration in chemical kinetics

    Directory of Open Access Journals (Sweden)

    Goutsias John

    2011-05-01

    Full Text Available Abstract Background The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function. Results We introduce a thermodynamically consistent model calibration (TCMC method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from http://www.cis.jhu.edu/~goutsias/CSS lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database. Conclusions TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function as well as to estimate thermodynamically feasible values for the parameters of new
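A minimal sketch of one classic thermodynamic constraint of the kind TCMC enforces, a Wegscheider cycle condition on the rate constants (the three-reaction loop and its values are illustrative, not the EGF/ERK model):

```python
import math

# Wegscheider cycle condition: around any closed reaction loop, the
# product of forward rate constants must equal the product of reverse
# rate constants, otherwise detailed balance is violated.
def cycle_consistent(kf, kr, rel_tol=1e-9):
    return math.isclose(math.prod(kf), math.prod(kr), rel_tol=rel_tol)

kf = [2.0, 5.0, 0.3]              # forward constants (illustrative)
kr = [1.0, 1.5, 2.0]              # reverse constants; both products = 3.0
print(cycle_consistent(kf, kr))   # True
```

A constrained calibration keeps every such loop product ratio fixed at its thermodynamically required value while fitting the remaining degrees of freedom to data.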

  18. Sensitivity Measurement of Transmission Computer Tomography: the Preliminary Experimental Study

    International Nuclear Information System (INIS)

    Widodo, Chomsin-S; Sudjatmoko; Kusminarto; Agung-BS Utomo; Suparta, Gede B

    2000-01-01

    This paper reports the results of a preliminary experimental study on a measurement method for the sensitivity of a computed tomography (CT) scanner. A CT scanner has been built at the Department of Physics, FMIPA UGM, and its performance based on its sensitivity was measured. The results showed that the measurement method for sensitivity may be developed further as a measurement standard. Although the CT scanner developed has a number of shortcomings, the analytical results from the sensitivity measurement suggest a number of repairs and improvements for the system so that improved reconstructed CT images can be obtained. (author)

  19. Technical Note: Procedure for the calibration and validation of kilo-voltage cone-beam CT models

    Energy Technology Data Exchange (ETDEWEB)

    Vilches-Freixas, Gloria; Létang, Jean Michel; Rit, Simon, E-mail: simon.rit@creatis.insa-lyon.fr [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1206, INSA-Lyon, Université Lyon 1, Centre Léon Bérard, Lyon 69373 Cedex 08 (France); Brousmiche, Sébastien [Ion Beam Application, Louvain-la-Neuve 1348 (Belgium); Romero, Edward; Vila Oliva, Marc [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1206, INSA-Lyon, Université Lyon 1, Centre Léon Bérard, Lyon 69373 Cedex 08, France and Ion Beam Application, Louvain-la-Neuve 1348 (Belgium); Kellner, Daniel; Deutschmann, Heinz; Keuschnigg, Peter; Steininger, Philipp [Institute for Research and Development on Advanced Radiation Technologies, Paracelsus Medical University, Salzburg 5020 (Austria)

    2016-09-15

    Purpose: The aim of this work is to propose a general and simple procedure for the calibration and validation of kilo-voltage cone-beam CT (kV CBCT) models against experimental data. Methods: The calibration and validation of the CT model is a two-step procedure: the source model then the detector model. The source is described by the direction dependent photon energy spectrum at each voltage while the detector is described by the pixel intensity value as a function of the direction and the energy of incident photons. The measurements for the source consist of a series of dose measurements in air performed at each voltage with varying filter thicknesses and materials in front of the x-ray tube. The measurements for the detector are acquisitions of projection images using the same filters and several tube voltages. The proposed procedure has been applied to calibrate and assess the accuracy of simple models of the source and the detector of three commercial kV CBCT units. If the CBCT system models had been calibrated differently, the current procedure would have been exclusively used to validate the models. Several high-purity attenuation filters of aluminum, copper, and silver combined with a dosimeter which is sensitive to the range of voltages of interest were used. A sensitivity analysis of the model has also been conducted for each parameter of the source and the detector models. Results: Average deviations between experimental and theoretical dose values are below 1.5% after calibration for the three x-ray sources. The predicted energy deposited in the detector agrees with experimental data within 4% for all imaging systems. Conclusions: The authors developed and applied an experimental procedure to calibrate and validate any model of the source and the detector of a CBCT unit. The present protocol has been successfully applied to three x-ray imaging systems. 
The minimum requirements in terms of material and equipment would make its implementation suitable in

  20. Computational Experimentation to Understand C2 for Teams of Autonomous Systems and People

    Science.gov (United States)

    2014-12-12

    Muller, J. 9/26/2012. With Driverless Cars, Once Again It Is California Leading The Way. Forbes; http://www.forbes.com/sites/joannmuller/2012/09...26/with-driverless-cars-once-again-it-is-california-leading-the-way/; retrieved 09/19/2013. Nissen, M.E. 2007. Computational Experimentation on New

  1. Experimental and computational analysis of steam condensation in the presence of air and helium

    International Nuclear Information System (INIS)

    Bucci, M.

    2010-01-01

    Among the different phenomena expected to occur within nuclear reactor containments during a postulated loss of coolant accident, condensation on containment walls plays a major role, since it represents an important heat sink for evacuating the energy released by the discharge of the primary water. Nevertheless, condensation strongly affects other relevant phenomena, like containment atmosphere mixing, that influences the distribution of non-condensable gases hypothetically delivered in severe accident conditions. In this scenario, the role of condensation is not obvious, since it can locally aid the hydrogen produced by the oxidation of the core claddings to concentrate and reach flammability limits, providing a dangerous effect instead of a positive one. The understanding of condensation in the presence of air and hydrogen is therefore a fundamental task for the safety analyses of reactor containments. This research has been carried out with the aim to contribute to the understanding of these phenomena. A double strategy has been adopted, including complementary experimental and computational activities. Novel data have been made available by the CONAN facility, investigating the effects induced by light non-condensable gases in experimental configurations that were scarcely investigated in past studies. Computational fluid dynamics (CFD) condensation models have been developed and validated. The suitability of helium as a substitute for hydrogen in experimental activities has been investigated by theoretical and computational analyses allowing to establish simple criteria for the scaling of condensation tests in the presence of a light non-condensable gas. (authors)
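
Wall condensation in the presence of a non-condensable gas is often first estimated with the heat/mass-transfer analogy: a Sherwood-number correlation gives the mass-transfer coefficient, and a film-theory factor accounts for suction at the condensing wall. A rough sketch with illustrative numbers (this is a textbook-style estimate, not the CONAN or CFD model of the thesis):

```python
import math

D_h = 0.1          # channel hydraulic diameter, m
Re = 2.0e4         # mixture Reynolds number
Sc = 0.55          # Schmidt number of steam in air
D_v = 3.0e-5       # steam-air diffusion coefficient, m^2/s
rho = 1.0          # mixture density, kg/m^3
W_bulk, W_wall = 0.30, 0.10   # steam mass fractions in the bulk and at the wall

# Analogy to a Dittus-Boelter-type correlation for turbulent channel flow
Sh = 0.023 * Re ** 0.8 * Sc ** (1.0 / 3.0)
k_m = Sh * D_v / D_h                       # mass-transfer coefficient, m/s

# Film-theory (Stefan flow) correction for suction at the condensing wall
m_flux = k_m * rho * math.log((1.0 - W_wall) / (1.0 - W_bulk))   # kg/(m^2 s)

print(f"Sh = {Sh:.0f}, condensation flux = {m_flux * 1000:.2f} g/(m^2 s)")
```

Replacing air with a lighter gas such as helium changes rho, D_v, and Sc, which is exactly the scaling question the thesis addresses.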

  2. Calibration of a NE213 detector for neutron spectroscopy

    International Nuclear Information System (INIS)

    Blazquez Martinez, J.; Butragueno Casado, J. L.

    1974-01-01

    This work describes the experimental procedure followed to calibrate a NE213 detector with a neutron beam from the J.E.N. 2 MeV Van de Graaff accelerator, using pulse-shape discrimination. The detector has been used to measure the spectrum of the fast reactor CORAL-1. An experimental method for precisely locating the position of the Compton edge on the electron spectrum is also included. (Author) 9 refs

  3. Electrochemistry of moexipril: experimental and computational approach and voltammetric determination.

    Science.gov (United States)

    Taşdemir, Hüdai I; Kiliç, E

    2014-09-01

    The electrochemistry of moexipril (MOE) was studied by electrochemical methods, with theoretical calculations performed at the B3LYP/6-31+G(d)//AM1 level. Cyclic voltammetric studies were based on a reversible, adsorption-controlled reduction peak at -1.35 V on a hanging mercury drop electrode (HMDE); an irreversible, diffusion-controlled oxidation peak at 1.15 V on a glassy carbon electrode (GCE) was also employed. Potential values are reported against Ag/AgCl (3.0 M KCl), and measurements were performed in Britton-Robinson buffer of pH 5.5. Tentative electrode mechanisms were proposed from the experimental results and ab initio calculations. Square-wave adsorptive stripping voltammetric methods were developed and validated for the quantification of MOE in pharmaceutical preparations. The linear working range was established as 0.03-1.35 μM for the HMDE and 0.2-20.0 μM for the GCE. The limit of quantification (LOQ) was calculated to be 0.032 and 0.47 μM for the HMDE and GCE, respectively. The methods were successfully applied to assay the drug in tablets by calibration and standard-addition methods, with good recoveries between 97.1% and 106.2% and relative standard deviations below 10%.

  4. A New Online Calibration Method Based on Lord's Bias-Correction.

    Science.gov (United States)

    He, Yinhong; Chen, Ping; Li, Yong; Zhang, Shumei

    2017-09-01

    Online calibration techniques are widely employed to calibrate new items because of their practical advantages. Method A is the simplest online calibration method and has recently attracted considerable attention from researchers. However, a key assumption of Method A is that it treats the person-parameter estimates θ̂s (obtained by maximum likelihood estimation [MLE]) as their true values θs; the deviation of the estimates θ̂s from their true values can therefore yield inaccurate item calibration when that deviation is non-negligible. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. This new method combines a modified Lord's bias-correction method (maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of θ̂s, which may otherwise degrade item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI significantly improved the ML ability estimates, and that MLE-LBCI-Method A outperformed Method A in almost all experimental conditions.
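
The paper's MLE-LBCI correction is analytic; as a generic illustration of the same idea (removing the finite-test bias of the ability estimate θ̂ before it is used for item calibration), the sketch below applies a parametric-bootstrap bias correction to the Rasch ability MLE. The model, names, and correction scheme are stand-ins, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def rasch_mle(responses, b, iters=30):
    """ML ability estimate for the Rasch model via Newton-Raphson."""
    theta = 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(theta - b)))
        grad = np.sum(responses - p)        # score function
        info = np.sum(p * (1 - p))          # test information
        theta += grad / info
    return theta

def bias_corrected_mle(responses, b, n_boot=500):
    """Correct the MLE's finite-test bias by parametric bootstrap:
    simulate responses at theta_hat, re-estimate, subtract the mean shift."""
    theta_hat = rasch_mle(responses, b)
    p = 1.0 / (1.0 + np.exp(-(theta_hat - b)))
    boot = np.array([rasch_mle(rng.random(len(b)) < p, b) for _ in range(n_boot)])
    return theta_hat - (boot.mean() - theta_hat)

b = rng.normal(0.0, 1.0, 40)    # known difficulties of operational items
true_theta = 1.0
resp = rng.random(b.size) < 1.0 / (1.0 + np.exp(-(true_theta - b)))

th = rasch_mle(resp, b)
th_c = bias_corrected_mle(resp, b)
print(th, th_c)
```

The corrected θ̂ would then be passed to a Method A-style item-parameter update in place of the raw MLE.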

  5. A new spectroscopic calibration to determine Teff and [Fe/H] of FGK dwarfs and giants

    Directory of Open Access Journals (Sweden)

    Teixeira G. D. C.

    2017-01-01

    We present a new spectroscopic calibration for a fast estimate of Teff and [Fe/H] for FGK dwarfs and GK giants. We used spectra from a joint sample of 708 stars, composed of 451 FGK dwarfs and 257 GK giants with homogeneously determined spectroscopic stellar parameters. We derived 322 EW line-ratios and 100 FeI lines that can be used to compute Teff and [Fe/H], respectively. We show that these calibrations are effective for FGK dwarfs and GK giants in the following ranges: 4500 K < Teff < 6500 K, 2.5 < log g < 4.9 dex, and -0.8 < [Fe/H] < 0.5 dex. The new calibration has a standard deviation of 74 K for Teff and 0.07 dex for [Fe/H]. We use four independent samples of stars to test and verify the new calibration: a sample of giant stars, a sample of Gaia FGK benchmark stars, a sample of GK giants from DR1 of the Gaia-ESO survey, and a sample of FGK dwarfs. We also present a new computer code, GeTCal, which automatically produces new calibration files from any new sample of stars.
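
Each line-ratio calibration of this kind amounts to a least-squares fit of Teff against the measured ratio. A toy sketch on synthetic data (the ratio-temperature relation and the noise level are invented for illustration, not taken from GeTCal):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "stars": true Teff and one hypothetical EW line ratio that
# varies roughly linearly with temperature, plus measurement noise.
teff_true = rng.uniform(4500.0, 6500.0, 200)
ratio = 2.0 - 2.5e-4 * teff_true + rng.normal(0.0, 0.02, teff_true.size)

# Calibrate: fit Teff as a polynomial in the line ratio (degree 1 here)
coeffs = np.polyfit(ratio, teff_true, deg=1)
teff_pred = np.polyval(coeffs, ratio)

scatter = np.std(teff_pred - teff_true)
print(f"calibration scatter: {scatter:.0f} K")
```

A production calibration would combine many such ratios (each valid over a limited parameter range) and average their individual temperature estimates.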

  6. Validation of a densimeter calibration procedure for a secondary calibration laboratory

    International Nuclear Information System (INIS)

    Alpizar Herrera, Juan Carlos

    2014-01-01

    A survey was conducted to quantify the need for calibration of density measurement instruments among the research units at the Sede Rodrigo Facio of the Universidad de Costa Rica. A calibration procedure was documented for the instrument for which the survey showed the highest demand for calibration services. INTE-ISO/IEC 17025:2005, and specifically section 5.4 of that standard, was studied in order to document the densimeter calibration procedure. Densimeter calibration procedures and standards were sought from different national and international sources. The method of hydrostatic weighing, or Cuckow's method, was the basis of the defined procedure. In addition to the calibration procedure itself, supporting documents were produced: a data-acquisition log, an intermediate-calculation log, and a template calibration certificate. As part of the validation of the documented procedure, a trueness test was performed using a national secondary calibration laboratory as the reference. E_n statistics of 0.41, 0.34, and 0.46 were obtained for the 90%, 50%, and 10% calibration points of the densimeter scale, respectively. A reproducibility analysis of the method was performed with satisfactory results. Different suppliers were contacted to estimate the cost of the equipment and materials needed to implement the documented densimeter calibration method. The acquisition of an analytical balance, rather than a precision scale, was recommended in order to improve the results obtained with the documented method.

  7. Radiochromic film calibration for dosimetry in computed tomography tests

    International Nuclear Information System (INIS)

    Costa, K. C.; Prata M, A.; Ladino G, A. M.; Costa, K. L.

    2017-10-01

    Radiochromic film applications in dosimetry have become increasingly significant for studies in radiotherapy and diagnostic testing. Owing to their sensitivity to ionizing radiation, radiochromic films are commonly used to obtain dose distribution maps. The objective of this study is to obtain calibration curves of radiochromic film exposed to the X-ray beam of a computed tomography (CT) scanner, in order to measure the typical doses found in radiodiagnostic tests. Gafchromic XR-AQ2 film was used, which shows little sensitivity to visible light and responds in the range of 0.1 to 20 Gy for X-ray beams at tube voltages from 20 kV to 200 kV. In the experiments, a cylindrical polymethylmethacrylate (PMMA) head phantom with five openings was used. The phantom was placed at the CT scanner isocenter and radiochromic film strips were placed into two of the openings. The irradiations were performed in a Toshiba Asteion scanner operated in helical mode. The central slice of the head phantom was irradiated to obtain the values of air kerma in PMMA measured with a pencil ionization chamber. Thereafter, radiochromic film strips were placed into the central and one peripheral opening, and 10 cm long scans of the central region of the phantom were carried out at a tube voltage of 120 kV. The strips, irradiated at different X-ray tube currents, were scanned and processed using the ImageJ software to obtain, by optical density analysis, the intensity values resulting from the absorbed radiation. Calibration curves were obtained for both the central and peripheral regions, corresponding to the values of air kerma in PMMA measured with the ionization chamber. With these curves, CT experiments can use radiochromic film as a dosimetry method and thus pursue images with lower dose deposition and higher diagnostic quality. (Author)

  8. Radiochromic film calibration for dosimetry in computed tomography tests

    Energy Technology Data Exchange (ETDEWEB)

    Costa, K. C.; Prata M, A. [Federal Center for Technological Education of Minas Gerais, Biomedical Engineering Center, Av. Amazonas 5253, Nova Suica, 30421-169 Belo Horizonte, Minas Gerais (Brazil); Ladino G, A. M. [Federal University of Minas Gerais, Department of Nuclear Engineering, Av. Antonio Carlos 6627, Pampulha, 31270-90 Belo Horizonte, Minas Gerais (Brazil); Costa, K. L., E-mail: apratabhz@gmail.com [University of Itauna, Medicine Department, Rodovia Mg 431 Km 45 s/n, El Dorado, 35680-142 Itauna, Minas Gerais (Brazil)

    2017-10-15

    Radiochromic film applications in dosimetry have become increasingly significant for studies on radiotherapy and diagnostic tests. Due to sensitivity to exposure to ionizing radiation, radiochromic films are commonly used to obtain dose distribution maps. The objective of this study is to obtain the calibration curves of the radiographic film for exposure with X-ray beam in a computerized tomography (CT) scanner to realize measures of typical doses found in radiodiagnosis tests. It was used Gafchromic Xr-AQ2 film, which shows little sensitivity to visible light and a response in the range of 0.1 to 20 Gy for X-ray beam in a tube voltage supply range ranging from 20 kV to 200 kV. In the experiments, a head polymethylmethacrylate (PMMA) phantom, with a cylindrical shape with five openings was used. This phantom was placed in the CT scanner isocenter and radiochromic film strips were placed into two openings. The irradiations were performed in a Toshiba Asteion scanner that allows making acquisitions in helical mode. The central slice of the head phantom was irradiated to obtain the values of air kerma in PMMA measured with a pencil ionization chamber. Thereafter, radiochromic film strips were placed into the central and one peripheral opening and 10 cm long scans of the central region of the phantom were carried out with feed voltage of 120 kV. The strips irradiated with different X-ray tube currents were scanned and processed using the ImageJ software to obtain the intensity values resulting from the absorbed radiation by optical density analysis. The calibration curves were obtained for both region, central and peripheral corresponding to the values of air kerma in PMMA measured with ionization chamber. With the curves in hand, CT experiments with applied beams can use radiochromic films as a dosimetry method and then seek the generation of images with lower dose deposition and higher diagnostic quality. (Author)

  9. Krypton calibration of time projection chambers of the NA61/SHINE experiment

    CERN Document Server

    Naskret, Michal

    The NA61/SHINE experiment at CERN is searching for the critical point in phase transition between quark-gluon plasma and hadronic matter. To do so we use the most precise apparatus - Time Projection Chamber. Its main task is to find trajectories of particles created in a relativistic collision. In order to improve efficiency of TPCs, we introduce calibration using radioactive krypton gas. Simulation of events in a TPC cham- ber through a decay of excited krypton atoms gives us a spectrum, which is later fitted to the model spectrum of krypton from a Monte-Carlo simulation. The data obtained in such a way serves us to determine malfunctioning electronics in TPCs. Thanks to the krypton calibration we can create a map of pad by pad gains. In this thesis I will de- scribe in detail the NA61 experimental setup, krypton calibration procedure, calibration algorithm and results for recent calibration runs

  10. A Printer Indexing System for Color Calibration with Applications in Dietary Assessment.

    Science.gov (United States)

    Fang, Shaobo; Liu, Chang; Zhu, Fengqing; Boushey, Carol; Delp, Edward

    2015-09-01

    In image based dietary assessment, color is a very important feature in food identification. One issue with using color in image analysis in the calibration of the color imaging capture system. In this paper we propose an indexing system for color camera calibration using printed color checkerboards also known as fiducial markers (FMs). To use the FM for color calibration one must know which printer was used to print the FM so that the correct color calibration matrix can be used for calibration. We have designed a printer indexing scheme that allows one to determine which printer was used to print the FM based on a unique arrangement of color squares and binarized marks (used for error control) printed on the FM. Using normalized cross correlation and pattern detection, the index corresponding to the printer for a particular FM can be determined. Our experimental results show this scheme is robust against most types of lighting conditions.

  11. The stability of mechanical calibration for a kV cone beam computed tomography system integrated with linear accelerator

    International Nuclear Information System (INIS)

    Sharpe, Michael B.; Moseley, Douglas J.; Purdie, Thomas G.

    2006-01-01

    The geometric accuracy and precision of an image-guided treatment system were assessed. Image guidance is performed using an x-ray volume imaging (XVI) system integrated with a linear accelerator and treatment planning system. Using an amorphous silicon detector and x-ray tube, volumetric computed tomography images are reconstructed from kilovoltage radiographs by filtered backprojection. Image fusion and assessment of geometric targeting are supported by the treatment planning system. To assess the limiting accuracy and precision of image-guided treatment delivery, a rigid spherical target embedded in an opaque phantom was subjected to 21 treatment sessions over a three-month period. For each session, a volumetric data set was acquired and loaded directly into an active treatment planning session. Image fusion was used to ascertain the couch correction required to position the target at the prescribed iso-center. Corrections were validated independently using megavoltage electronic portal imaging to record the target position with respect to symmetric treatment beam apertures. An initial calibration cycle followed by repeated image-guidance sessions demonstrated the XVI system could be used to relocate an unambiguous object to within less than 1 mm of the prescribed location. Treatment could then proceed within the mechanical accuracy and precision of the delivery system. The calibration procedure maintained excellent spatial resolution and delivery precision over the duration of this study, while the linear accelerator was in routine clinical use. Based on these results, the mechanical accuracy and precision of the system are ideal for supporting high-precision localization and treatment of soft-tissue targets

  12. Study on the process of calibration and deep centring of blanks

    International Nuclear Information System (INIS)

    Potapov, I.N.; Romantsev, B.A.; Popov, V.A.; Volodin, V.V.; Goncharuk, A.V.

    1985-01-01

    Process of calibration and deep centring of blanks before broaching is developed and studied. Investigations are performed at a semi-industrial screw rolling mill MISiS-100 T. Blanks made of 40Kh, 60, 50, 45KhN2MFA, 30KhGSNA steels 80, 85, and 90 mm in diameter and 300 mm long were calibrated and centered after heating in a furnace to a depth of 200 mm at shafts and faces with different calibration. 30KhN2MFA steel is chosen for face material; heat treatment is conducted under the following conditions: heating up to 950-1000 deg C and hold-up during 30-35 min with the following cooling in the air. The above experimental investigations reveal that the process of calibration and deep centring accomplished at a three-shaft screw rolling mill is rather stable and provides a high accuracy of centering hollow marking

  13. Experimental demonstration of reservoir computing on a silicon photonics chip

    Science.gov (United States)

    Vandoorne, Kristof; Mechet, Pauline; van Vaerenbergh, Thomas; Fiers, Martin; Morthier, Geert; Verstraeten, David; Schrauwen, Benjamin; Dambre, Joni; Bienstman, Peter

    2014-03-01

    In today’s age, companies employ machine learning to extract information from large quantities of data. One of those techniques, reservoir computing (RC), is a decade old and has achieved state-of-the-art performance for processing sequential data. Dedicated hardware realizations of RC could enable speed gains and power savings. Here we propose the first integrated passive silicon photonics reservoir. We demonstrate experimentally and through simulations that, thanks to the RC paradigm, this generic chip can be used to perform arbitrary Boolean logic operations with memory as well as 5-bit header recognition up to 12.5 Gbit s-1, without power consumption in the reservoir. It can also perform isolated spoken digit recognition. Our realization exploits optical phase for computing. It is scalable to larger networks and much higher bitrates, up to speeds >100 Gbit s-1. These results pave the way for the application of integrated photonic RC for a wide range of applications.

  14. Phantom-Calibrated versus Automatic Coronary Artery Mass Quantification with Multidetector-Row Computed Tomography: In Vitro and In Vivo Study

    International Nuclear Information System (INIS)

    Serafin, Z.; Lasek, W.; Laskowska, K.

    2008-01-01

    Background: Coronary artery calcium scoring is used as a method for cardiovascular risk stratification and monitoring of coronary heart disease. Automatic software-based calcium mass calculation has been proposed to improve the performance of the procedure. Purpose: To compare two algorithms of calcium mass measurement, automatic and phantom calibrated, with respect to correlation, measurement error, and accuracy in vitro and in vivo. Material and Methods: A cardiac phantom with calcium cylinder inserts was scanned with sequential non-overlapping collimation 4x2.5 mm, at 120 kV and 165 mAs. Fifty adults (37 men; mean age 46.2 years) were examined with the same settings using prospective electrocardiographic triggering to detect and quantify coronary artery calcifications. Calculations were performed with two methods: software-based automatic calcium mass measurement (ACM) and phantom-calibrated calcium mass measurement (CCM). Results: The total phantom calcium masses measured with ACM and CCM were 175.0±13.8 mg and 163.0±4.4 mg, respectively (P<0.0001), and ACM produced a higher mean error (4.5 vs. 3.2; P<0.05). Results of ACM and CCM were strongly correlated to each other (R=0.73-0.96; P<0.0001). Mean image noise in the patient study was 8.72±1.68 HU. Results of patient calcium scoring with ACM and CCM were significantly different (median 70.3 mg and 59.7 mg, respectively; P<0.0001), with a mean systematic error of 17.5% (limit of agreement between 14.6% and 20.4%). The use of ACM resulted in an altered quartile classification for 14% of patients, as compared to CCM; all of these patients were classified into a higher category. Conclusion: Our data indicate that multidetector-row computed tomography coronary calcium mass determination based on dedicated phantom calibration shows lower measurement error than an automatic software-based calculation method. The tested automatic software does not yet seem to be a reliable option for calcium mass measurement

  15. Phantom-Calibrated versus Automatic Coronary Artery Mass Quantification with Multidetector-Row Computed Tomography: In Vitro and In Vivo Study

    Energy Technology Data Exchange (ETDEWEB)

    Serafin, Z.; Lasek, W.; Laskowska, K. (Dept. of Radiology and Diagnostic Imaging, Nicolaus Copernicus Univ., Collegium Medicum, Bydgoszcz (Poland))

    2008-11-15

    Background: Coronary artery calcium scoring is used as a method for cardiovascular risk stratification and monitoring of coronary heart disease. Automatic software-based calcium mass calculation has been proposed to improve the performance of the procedure. Purpose: To compare two algorithms of calcium mass measurement, automatic and phantom calibrated, with respect to correlation, measurement error, and accuracy in vitro and in vivo. Material and Methods: A cardiac phantom with calcium cylinder inserts was scanned with sequential non-overlapping collimation 4x2.5 mm, at 120 kV and 165 mAs. Fifty adults (37 men; mean age 46.2 years) were examined with the same settings using prospective electrocardiographic triggering to detect and quantify coronary artery calcifications. Calculations were performed with two methods: software-based automatic calcium mass measurement (ACM) and phantom-calibrated calcium mass measurement (CCM). Results: The total phantom calcium masses measured with ACM and CCM were 175.0+-13.8 mg and 163.0+-4.4 mg, respectively (P<0.0001), and ACM produced a higher mean error (4.5 vs. 3.2; P<0.05). Results of ACM and CCM were strongly correlated to each other (R=0.73-0.96; P<0.0001). Mean image noise in the patient study was 8.72+-1.68 HU. Results of patient calcium scoring with ACM and CCM were significantly different (median 70.3 mg and 59.7 mg, respectively; P<0.0001), with a mean systematic error of 17.5% (limit of agreement between 14.6% and 20.4%). The use of ACM resulted in an altered quartile classification for 14% of patients, as compared to CCM; all of these patients were classified into a higher category. Conclusion: Our data indicate that multidetector-row computed tomography coronary calcium mass determination based on dedicated phantom calibration shows lower measurement error than an automatic software-based calculation method. The tested automatic software does not yet seem to be a reliable option for calcium mass measurement

  16. CERN radiation protection (RP) calibration facilities

    Energy Technology Data Exchange (ETDEWEB)

    Pozzi, Fabio

    2016-04-14

    , the facility was commissioned by measuring the calibration quantities of interest, e.g. H*(10), as a function of the source-to-detector distance. In the case of neutron measurements, a comparison with the Monte Carlo results was carried out; in fact, the neutron scattering can be an important issue and the Monte Carlo method can contribute to its estimation and optimization. Neutron calibrations often need to be performed at neutron energies or spectra very much different from those generated by radioactive sources employed in standard calibration laboratories. Unfortunately, fields with a broad neutron spectrum extending to a few GeVs are very rare and the scientific community is calling for worldwide sharing of the existing facilities. The CERN RP group has been managing the CERN-EU high-energy Reference Field (CERF) facility for 20 years, which is a unique calibration field in its kind. CERF is a workplace field that reproduces the neutron spectrum encountered in the vicinity of high-energy accelerators and at commercial flight altitudes. Within the context of providing a well-characterized workplace field to the scientific community, Monte Carlo simulations were performed with the present development version of the FLUKA code. The simulations were compared with experimental measurements showing promising results for the future ISO accreditation of the facility as workplace reference facility. Even though the accreditation process is fairly long, the work achieved so far is setting the bases to start this process in the right way.

  17. Pairing experimentation and computational modelling to understand the role of tissue inducer cells in the development of lymphoid organs

    Directory of Open Access Journals (Sweden)

    Kieran eAlden

    2012-07-01

    Full Text Available The use of genetic tools, imaging technologies and ex vivo culture systems has provided significant insights into the role of tissue inducer cells and associated signalling pathways in the formation and function of lymphoid organs. Despite advances in experimental technologies, the molecular and cellular process orchestrating the formation of a complex 3-dimensional tissue is difficult to dissect using current approaches. Therefore, a robust set of simulation tools have been developed to model the processes involved in lymphoid tissue development. Specifically the role of different tissue inducer cell populations in the dynamic formation of Peyer's Patches has been examined. Utilising approaches from critical systems engineering an unbiased model of lymphoid tissue inducer cell function has been developed, that permits the development of emerging behaviours that are statistically not different from that observed in vivo. These results provide the confidence to utilise statistical methods to explore how the simulator predicts cellular behaviour and outcomes under different physiological conditions. Such methods, known as sensitivity analysis techniques, can provide insight into when a component part of the system (such as a particular cell type, adhesion molecule, or chemokine begins to have an influence on observed behaviour, and quantifies the effect a component part has on the end result: the formation of lymphoid tissue. Through use of such a principled approach in the design, calibration, and analysis of a computer simulation, a robust in silico tool can be developed which can both further the understanding of a biological system being explored, and act as a tool for the generation of hypotheses which can be tested utilising experimental approaches.

  18. Development, calibration and experimental results obtained with an innovative calorimeter (CALMOS) for nuclear heating measurements

    International Nuclear Information System (INIS)

    Carcreff, H.; Cloute-Cazalaa, V.; Salmon, L.

    2011-01-01

    Nuclear heating inside an MTR reactor has to be known in order to be able to control samples temperature during irradiation experiments. An R and D program has been carried out at CEA to design a new type of in-core calorimetric system. This new development, started in 2002, has for main objective to manufacture a calorimeter suitable to monitoring nuclear heating inside the 70 MWth OSIRIS material testing reactor operated by CEA's Nuclear Energy Div. at the Saclay research center. An innovative calorimetric probe, associated to a specific handling system, has been designed to provide access to measurements both along the fissile height and on the upper part of the core, where nuclear heating still remains high. Two mock-ups of the probe were manufactured and tested in 2005 and 2009 in ex-core area of OSIRIS reactor for process validation, while a displacement system has been especially studied to move the probe along a given axial measurement range. This paper deals with the development, tests on preliminary mock-ups and the finalization of the probe. Main modeling and experimental results are presented. Moreover, alternative methods to calibration for nuclear heating rate measurements which are now possible with this new calorimeter are presented and discussed. (authors)

  19. Development, calibration, and experimental results obtained with an innovative calorimeter (CALMOS) for nuclear heating measurements

    International Nuclear Information System (INIS)

    Carcreff, Hubert; Cloute-Cazalaa, Veronique; Salmon, Laurent

    2012-01-01

    Nuclear heating inside an MTR reactor has to be known in order to be able to control samples temperature during irradiation experiments. An R and D program has been carried out at CEA to design a new type of in-core calorimetric system. This new development, started in 2002, has for main objective to manufacture a calorimeter suitable to monitoring nuclear heating inside the 70 MWth OSIRIS material testing reactor operated by CEA's Nuclear Energy Division at the Saclay research center. An innovative calorimetric probe, associated to a specific handling system, has been designed to provide access to measurements both along the fissile height and on the upper part of the core, where nuclear heating still remains high. Two mock-ups of the probe were manufactured and tested in 2005 and 2009 in ex-core area of OSIRIS reactor for process validation, while a displacement system has been especially studied to move the probe along a given axial measurement range. This paper deals with the development, tests on preliminary mock-ups and the finalization of the probe. Main modeling and experimental results are presented. Moreover, alternative methods to calibration for nuclear heating rate measurements which are now possible with this new calorimeter are presented and discussed. (authors)

  20. Toward efficient computation of the expected relative entropy for nonlinear experimental design

    International Nuclear Information System (INIS)

    Coles, Darrell; Prange, Michael

    2012-01-01

    The expected relative entropy between prior and posterior model-parameter distributions is a Bayesian objective function in experimental design theory that quantifies the expected gain in information of an experiment relative to a previous state of knowledge. The expected relative entropy is a preferred measure of experimental quality because it can handle nonlinear data-model relationships, an important fact due to the ubiquity of nonlinearity in science and engineering and its effects on post-inversion parameter uncertainty. This objective function does not necessarily yield experiments that mediate well-determined systems, but, being a Bayesian quality measure, it rigorously accounts for prior information which constrains model parameters that may be only weakly constrained by the optimized dataset. Historically, use of the expected relative entropy has been limited by the computing and storage requirements associated with high-dimensional numerical integration. Herein, a bifocal algorithm is developed that makes these computations more efficient. The algorithm is demonstrated on a medium-sized problem of sampling relaxation phenomena and on a large problem of source–receiver selection for a 2D vertical seismic profile. The method is memory intensive but workarounds are discussed. (paper)

  1. Experimental and computational analysis of pressure response in a multiphase flow loop

    Science.gov (United States)

    Morshed, Munzarin; Amin, Al; Rahman, Mohammad Azizur; Imtiaz, Syed

    2016-07-01

    The characteristics of multiphase fluid flow in pipes are useful for understanding the fluid mechanics encountered in the oil and gas industries. Present-day oil and gas exploration is increasingly moving toward subsea operation in deep-sea and arctic conditions. During the transport of petroleum products, understanding the fluid dynamics inside the pipe network is important for flow assurance. In this case, information regarding the static and dynamic pressure response, pressure loss, optimum flow rate, pipe diameter, etc. are the important parameters for flow assurance. The principal aim of this research is to present a computational and experimental analysis of multiphase (L/G) flow in a pipe network. The computational study considers two-phase fluid flow through a horizontal flow loop at different Reynolds numbers in order to determine the pressure distribution and frictional pressure loss profiles by the volume of fluid (VOF) method. The numerical simulations are validated against experimental data. The experiment is conducted in a 76.20 mm ID transparent circular pipe using water and air in the flow loop. Static pressure transducers are used to measure the local pressure response in the multiphase pipeline.
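
    As a rough sanity baseline for the frictional pressure loss such a loop produces, the single-phase gradient can be estimated from the Darcy-Weisbach equation with the Blasius smooth-pipe friction-factor correlation. This back-of-the-envelope sketch is not taken from the paper, and the fluid property values in the check are illustrative.

```python
import math

def frictional_pressure_gradient(velocity, diameter, density, viscosity):
    """Darcy-Weisbach frictional pressure gradient (Pa/m) for single-phase
    turbulent pipe flow, using the Blasius smooth-pipe correlation
    f = 0.316 * Re**-0.25 (valid roughly for 4e3 < Re < 1e5)."""
    re = density * velocity * diameter / viscosity
    if re < 4000.0:
        raise ValueError("Blasius correlation assumes turbulent flow")
    f = 0.316 * re ** -0.25
    return f / diameter * 0.5 * density * velocity ** 2
```

    For water (998 kg/m^3, 1.0e-3 Pa s) at 1 m/s in the 76.20 mm pipe, Re is about 7.6e4 and the gradient is on the order of 1e2 Pa/m; a two-phase mixture deviates from this baseline, which is what the VOF simulations and transducer data quantify.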

  2. Developments of programs for the guidance of the experimental logics and the data acquisition

    International Nuclear Information System (INIS)

    Kraemer-Flecken, A.

    1988-01-01

    The current state of the art makes it possible to construct the experimental electronics substantially faster through the use of ECL modules. With the old CAMAC standard it is possible to calibrate experimental configurations by means of a computer. New techniques in the fabrication of microprocessors and memory ICs allow microprocessors to be used for controlling the experiment electronics, and contribute to the creation of a modular, transportable computer that is independent of large mainframes. For the calibration of complex detector systems, new CAMAC plug-ins exist which allow data acquisition on the CAMAC bus. With the new eightfold ADCs, precision measurements can be performed. An upgrade of such small data acquisition systems incorporating the VME bus will soon be realizable. This will make nuclear spectroscopic experiments substantially simpler to perform. (HSI)

  3. Self-calibration of a cone-beam micro-CT system

    International Nuclear Information System (INIS)

    Patel, V.; Chityala, R. N.; Hoffmann, K. R.; Ionita, C. N.; Bednarek, D. R.; Rudin, S.

    2009-01-01

    Use of cone-beam computed tomography (CBCT) is becoming more frequent. For proper reconstruction, the geometry of the CBCT systems must be known. While the system can be designed to reduce errors in the geometry, calibration measurements must still be performed and corrections applied. Investigators have proposed techniques using calibration objects for system calibration. In this study, the authors present methods to calibrate a rotary-stage CB micro-CT (CBμCT) system using only the images acquired of the object to be reconstructed, i.e., without the use of calibration objects. Projection images are acquired using a CBμCT system constructed in the authors' laboratories. Dark- and flat-field corrections are performed. Exposure variations are detected and quantified using analysis of image regions with an unobstructed view of the x-ray source. Translations that occur during the acquisition in the horizontal direction are detected, quantified, and corrected based on sinogram analysis. The axis of rotation is determined using registration of antiposed projection images. These techniques were evaluated using data obtained with calibration objects and phantoms. The physical geometric axis of rotation is determined and aligned with the rotational axis (assumed to be the center of the detector plane) used in the reconstruction process. The parameters describing this axis agree to within 0.1 mm and 0.3 deg with those determined using other techniques. Blurring due to residual calibration errors has a point-spread function in the reconstructed planes with a full-width-at-half-maximum of less than 125 μm in a tangential direction and essentially zero in the radial direction for the rotating object. The authors have used this approach on over 100 acquisitions over the past 2 years and have regularly obtained high-quality reconstructions, i.e., without artifacts and no detectable blurring of the reconstructed objects. This self-calibrating approach not only obviates
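
    The axis-of-rotation step can be illustrated with a small parallel-beam sketch: a projection acquired 180 degrees away is the mirror image of the first about the axis column, so the axis location follows from the shift that best aligns the flipped pair. This toy version (1-D projections, integer shifts, names invented here) only illustrates the idea; the authors' method operates on real cone-beam projection data.

```python
import numpy as np

def rotation_axis_offset(p0, p180):
    """Estimate the rotation-axis column from a projection pair 180 deg apart.
    In parallel-beam geometry p180 is the mirror of p0 about the axis, so if
    flipping p180 and shifting it by s reproduces p0, the axis sits at
    column (n - 1 + s) / 2."""
    n = len(p0)
    flipped = p180[::-1]
    # score every candidate integer shift of the flipped projection
    shifts = np.arange(-n // 2, n // 2 + 1)
    scores = [np.dot(p0, np.roll(flipped, int(s))) for s in shifts]
    s_best = int(shifts[int(np.argmax(scores))])
    return float(n - 1 + s_best) / 2.0
```

    In practice sub-pixel accuracy is obtained by interpolating around the best shift; the integer version above keeps the geometry transparent.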

  4. Parallel-plate rheometer calibration using oil and lattice Boltzmann simulation

    DEFF Research Database (Denmark)

    Ferraris, Chiara F; Geiker, Mette Rica; Martys, Nicos S.

    2007-01-01

    compute the viscosity. This paper presents a modified parallel plate rheometer, and proposes means of calibration using standard oils and numerical simulation of the flow. A lattice Boltzmann method was used to simulate the flow in the modified rheometer, thus using an accurate numerical solution in place...

  5. Experimental/Computational Approach to Accommodation Coefficients and its Application to Noble Gases on Aluminum Surface (Preprint)

    Science.gov (United States)

    2009-02-03

    Experimental/computational approach to accommodation coefficients and its application to noble gases on aluminum surface. Nathaniel Selden, University of Southern California, Los ... FIG. 5: Experimental and computed radiometric force for argon (left), xenon

  6. ATLAS tile calorimeter cesium calibration control and analysis software

    International Nuclear Information System (INIS)

    Solovyanov, O; Solodkov, A; Starchenko, E; Karyukhin, A; Isaev, A; Shalanda, N

    2008-01-01

    An online control system to calibrate and monitor ATLAS Barrel hadronic calorimeter (TileCal) with a movable radioactive source, driven by liquid flow, is described. To read out and control the system an online software has been developed, using ATLAS TDAQ components like DVS (Diagnostic and Verification System) to verify the hardware before running, IS (Information Server) for data and status exchange between networked computers, and other components like DDC (DCS to DAQ Connection), to connect to PVSS-based slow control systems of Tile Calorimeter, high voltage and low voltage. A system of scripting facilities, based on Python language, is used to handle all the calibration and monitoring processes from hardware perspective to final data storage, including various abnormal situations. A QT based graphical user interface to display the status of the calibration system during the cesium source scan is described. The software for analysis of the detector response, using online data, is discussed. Performance of the system and first experience from the ATLAS pit are presented

  7. ATLAS tile calorimeter cesium calibration control and analysis software

    Energy Technology Data Exchange (ETDEWEB)

    Solovyanov, O; Solodkov, A; Starchenko, E; Karyukhin, A; Isaev, A; Shalanda, N [Institute for High Energy Physics, Protvino 142281 (Russian Federation)], E-mail: Oleg.Solovyanov@ihep.ru

    2008-07-01

    An online control system to calibrate and monitor ATLAS Barrel hadronic calorimeter (TileCal) with a movable radioactive source, driven by liquid flow, is described. To read out and control the system an online software has been developed, using ATLAS TDAQ components like DVS (Diagnostic and Verification System) to verify the hardware before running, IS (Information Server) for data and status exchange between networked computers, and other components like DDC (DCS to DAQ Connection), to connect to PVSS-based slow control systems of Tile Calorimeter, high voltage and low voltage. A system of scripting facilities, based on Python language, is used to handle all the calibration and monitoring processes from hardware perspective to final data storage, including various abnormal situations. A QT based graphical user interface to display the status of the calibration system during the cesium source scan is described. The software for analysis of the detector response, using online data, is discussed. Performance of the system and first experience from the ATLAS pit are presented.

  8. Comparison between a Computational Seated Human Model and Experimental Verification Data

    Directory of Open Access Journals (Sweden)

    Christian G. Olesen

    2014-01-01

    Full Text Available Sitting-acquired deep tissue injuries (SADTI) are the most serious type of pressure ulcers. In order to investigate the aetiology of SADTI, a new approach is under development: a musculo-skeletal model which can predict forces between the chair and the human body at different seated postures. This study focuses on comparing results from a model developed in the AnyBody Modeling System with data collected from an experimental setup. A chair with force-measuring equipment was developed, an experiment was conducted with three subjects, and the experimental results were compared with the predictions of the computational model. The results show that the model predicted the reaction forces for different chair postures well. The correlation coefficients between experiment and model for the seat angle, backrest angle and footrest height were 0.93, 0.96, and 0.95, respectively. The study shows good agreement between experimental data and model predictions of the forces between a human body and a chair. The model can in the future be used in designing wheelchairs or automotive seats.

  9. The accident of overexposure at the University hospital center of Toulouse. Expertise report n.1. Checking of experimental protocols of micro-beams calibration before and after dysfunction correction

    International Nuclear Information System (INIS)

    2007-01-01

    The regional center of stereotaxic radiosurgery of the University hospital center of Toulouse has been equipped since April 2006 with a Novalis accelerator (Brainlab) devoted to intra-cranial stereotaxic radiosurgery. In April 2007, during an intercomparison of dosimetry computer files coming from different sites, the Brainlab company found an anomaly in the files. The analysis made by the company concluded that an inappropriate detector had been used for the measurement of a dosimetry parameter during the initial calibration of the accelerator. As a consequence of this error, 145 patients (out of the 172 treated by the service in question) suffered an overdose whose magnitude varies from case to case. The I.R.S.N., in charge of an expertise on the protocols of calibration of micro-beams before and after the correction of the dysfunction, undertook a search for the technical causes of the dysfunction. This report presents successively: the documentary basis on which the expertise is founded; the dosimetry and quality control equipment necessary for the initial calibration of the device and for its follow-up; the procedures carried out at the accelerator commissioning; the calibration of micro-beams in the two configurations that the device allows (micro-multileaf collimator and conical collimator) and the definition of the parameters of the treatment planning software; and the maintenance and quality control implemented in the frame of its clinical use. (N.C.)

  10. Computational/experimental studies of isolated, single component droplet combustion

    Science.gov (United States)

    Dryer, Frederick L.

    1993-01-01

    Isolated droplet combustion processes have been the subject of extensive experimental and theoretical investigations for nearly 40 years. The gross features of droplet burning are qualitatively embodied by simple theories and are relatively well understood. However, there remain significant aspects of droplet burning, particularly its dynamics, for which additional basic knowledge is needed for thorough interpretations and quantitative explanations of transient phenomena. Spherically-symmetric droplet combustion, which can only be approximated under conditions of both low Reynolds and Grashof numbers, represents the simplest geometrical configuration in which to study the coupled chemical/transport processes inherent within non-premixed flames. The research summarized here concerns recent results on isolated, single component droplet combustion under microgravity conditions, a program pursued jointly with F.A. Williams of the University of California, San Diego. The overall program involves developing and applying experimental methods to study the burning of isolated, single component droplets, in various atmospheres, primarily at atmospheric pressure and below, in both drop towers and aboard space-based platforms such as the Space Shuttle or Space Station. Both computational methods and asymptotic methods, the latter pursued mainly at UCSD, are used in developing the experimental test matrix, in analyzing results, and for extending theoretical understanding. Methanol and the normal alkanes n-heptane and n-decane have been selected as test fuels to study time-dependent droplet burning phenomena. The following sections summarize the Princeton efforts on this program, describe work in progress, and briefly delineate future research directions.

  11. Lamb wave tuning curve calibration for surface-bonded piezoelectric transducers

    International Nuclear Information System (INIS)

    Sohn, Hoon; Lee, Sang Jun

    2010-01-01

    Surface-bonded lead zirconate titanate (PZT) transducers have been widely used for guided wave generation and measurement. For selective actuation and sensing of Lamb wave modes, the sizes of the transducers and the driving frequency of the input waveform should be tuned. For this purpose, a theoretical Lamb wave tuning curve (LWTC) for a specific transducer size is generally obtained. Here, the LWTC plots each Lamb wave mode's amplitude as a function of the driving frequency. However, a discrepancy between experimental and existing theoretical LWTCs has been observed, because the bonding layer and the energy distribution between Lamb wave modes receive little consideration. In this study, calibration techniques for theoretical LWTCs are proposed. First, a theoretical LWTC is developed for the case where circular PZT transducers are used for both Lamb wave excitation and sensing. Then, the LWTC is calibrated by estimating the effective PZT size from PZT admittance measurements. Finally, the energy distributions among symmetric and antisymmetric modes are taken into account for better prediction of the relative amplitudes between Lamb wave modes. The effectiveness of the proposed calibration techniques is examined through numerical simulations and experimental estimation of the LWTC using circular PZT transducers instrumented on an aluminum plate.

  12. ORNL calibrations facility

    International Nuclear Information System (INIS)

    Berger, C.D.; Gupton, E.D.; Lane, B.H.; Miller, J.H.; Nichols, S.W.

    1982-08-01

    The ORNL Calibrations Facility is operated by the Instrumentation Group of the Industrial Safety and Applied Health Physics Division. Its primary purpose is to maintain radiation calibration standards for calibration of ORNL health physics instruments and personnel dosimeters. This report includes a discussion of the radioactive sources and ancillary equipment in use and a step-by-step procedure for calibration of those survey instruments and personnel dosimeters in routine use at ORNL

  13. Materials by design: An experimental and computational investigation on the microanatomy arrangement of porous metallic glasses

    International Nuclear Information System (INIS)

    Sarac, Baran; Klusemann, Benjamin; Xiao, Tao; Bargmann, Swantje

    2014-01-01

    The correlation of a material’s structure with its properties is one of the important unresolved issues in materials science research. This paper discusses a novel experimental and computational approach by which the influence of the pores on the mechanical properties of bulk metallic glasses (BMGs) can be systematically and quantitatively analyzed. The experimental stage involves the fabrication of a template whose pore configurations are pre-determined by computer-aided design tools, and replication of the designed patterns with BMGs. Quasi-static mechanical characterization of these complex microstructures is conducted under uniaxial tension and in-plane compression. For the numerical simulations, a non-local gradient-enhanced continuum mechanical model is established, using thermodynamic principles and periodic boundary conditions. The combination of the experimental and numerical results has identified the importance of the pore configuration, the overall porosity and the diameter-to-spacing ratio of the pores in attaining optimized material properties

  14. Computational simulation of coupled material degradation processes for probabilistic lifetime strength of aerospace materials

    Science.gov (United States)

    Boyce, Lola; Bast, Callie C.

    1992-01-01

    The research included ongoing development of methodology that provides probabilistic lifetime strength of aerospace materials via computational simulation. A probabilistic material strength degradation model, in the form of a randomized multifactor interaction equation, is postulated for strength degradation of structural components of aerospace propulsion systems subjected to a number of effects or primitive variables. These primitive variables may include high temperature, fatigue or creep. In most cases, strength is reduced as a result of the action of a variable. This multifactor interaction strength degradation equation has been randomized and is included in the computer program, PROMISS. Also included in the research is the development of methodology to calibrate the above-described constitutive equation using actual experimental materials data together with linear regression of that data, thereby predicting values of the empirical material constants for each effect or primitive variable. This regression methodology is included in the computer program, PROMISC. Actual experimental materials data were obtained from the open literature for materials typically of interest to those studying aerospace propulsion system components. Material data for Inconel 718 were analyzed using the developed methodology.
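
    The calibration step described here, fitting empirical constants by linear regression of experimental strength data, can be sketched for a single effect. The power-law expression below is the commonly quoted single-factor form of a multifactor interaction equation; the symbol names and the synthetic data in the check are assumptions of this sketch, not values or code from PROMISS/PROMISC.

```python
import numpy as np

def fit_mfie_exponent(T, strength, S0, T0, Tu):
    """Least-squares estimate of the exponent a in the single-factor
    multifactor interaction form  S/S0 = ((Tu - T)/(Tu - T0))**a,
    where T0 is a reference temperature and Tu an ultimate temperature.
    Taking logs linearizes the model, so a is the regression slope."""
    x = np.log((Tu - T) / (Tu - T0))
    y = np.log(strength / S0)
    a, _intercept = np.polyfit(x, y, 1)
    return float(a)
```

    With several primitive variables, the product form log-linearizes in the same way and the exponents come from a multiple linear regression instead of a single slope.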

  15. Dynamic calibration of piezoelectric transducers for ballistic high-pressure measurement

    Directory of Open Access Journals (Sweden)

    Elkarous Lamine

    2016-01-01

    Full Text Available The development of a dynamic calibration standard for high-amplitude pressure piezoelectric transducers implies the implementation of a system which can provide reference pressure values with known characteristics and uncertainty. The reference pressure must be issued by a sensor, as part of a measuring chain, with guaranteed traceability to an international standard. However, this operation has not yet been completely addressed and still calls for further investigation. In this paper, we introduce an experimental study carried out in order to contribute to current efforts toward the establishment of a reference dynamic calibration method. A practical calibration method is presented, based on calculating the reference pressure from the measured displacement of the piston in contact with an oil-filled cylindrical chamber. This measurement was achieved with a high-speed camera and an accelerometer, and the two measurements are compared. In the first configuration, pressure was generated by impacting the piston with a free-falling weight; in the second, with strikers of known weights accelerated to the impact velocities with an air gun. The aim of the experimental setup is to work out a system which may generate known hydraulic pressure pulses with high accuracy and known uncertainty. Moreover, physical models are also introduced to consolidate the experimental study. Varying the strikers' velocities and masses allows the reference pressure pulses to be tuned to different shapes and, therefore, permits sweeping a wide range of magnitudes and frequencies.
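
    The reference-pressure chain described above, camera-measured piston displacement differentiated to acceleration and converted to pressure by a force balance on the piston, can be sketched as follows. Friction and piston weight are neglected and all names are illustrative; this is not the authors' processing code.

```python
import numpy as np

def pressure_from_displacement(t, x, piston_mass, piston_area):
    """Reference pressure pulse p(t) = m * a(t) / A from a piston
    displacement record x(t): differentiate twice numerically (central
    differences), then apply a quasi-static force balance on the piston."""
    v = np.gradient(x, t)   # velocity, m/s
    a = np.gradient(v, t)   # acceleration, m/s^2
    return piston_mass * a / piston_area
```

    An accelerometer trace can be fed into the same force balance directly, which is how the two independent measurements can be cross-checked.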

  16. Accurate KAP meter calibration as a prerequisite for optimisation in projection radiography

    International Nuclear Information System (INIS)

    Malusek, A.; Sandborg, M.; Alm Carlsson, G.

    2016-01-01

    Modern X-ray units register the air kerma-area product, PKA, with a built-in KAP meter. Some KAP meters show an energy-dependent bias comparable with the maximum uncertainty articulated by the IEC (25 %), adversely affecting dose-optimisation processes. To correct for the bias, a reference KAP meter calibrated at a standards laboratory and the two calibration methods described here can be used to achieve an uncertainty of <7 % as recommended by the IAEA. A computational model of the reference KAP meter is used to calculate beam quality correction factors for transfer of the calibration coefficient from the standards laboratory beam quality, Q0, to any beam quality, Q, in the clinic. Alternatively, beam quality corrections are measured with an energy-independent dosemeter via a reference beam quality in the clinic, Q1, to beam quality, Q. Biases up to 35 % of built-in KAP meter readings were noted. Energy-dependent calibration factors are needed for unbiased PKA. (authors)
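
    The clinical correction amounts to a multiplicative factor per beam quality. A minimal sketch, with invented readings and beam-quality labels: the factor for each quality Q is the ratio of the energy-independent reference reading to the built-in KAP meter reading, and it is then applied to subsequent built-in readings at that quality.

```python
def calibration_factors(ref_kap, builtin_kap):
    """Per-beam-quality calibration factors for a built-in KAP meter:
    k(Q) = PKA_ref(Q) / PKA_builtin(Q), from paired measurements at each
    clinical beam quality Q (dict keys, e.g. kV/filtration labels)."""
    return {q: ref_kap[q] / builtin_kap[q] for q in ref_kap}

def corrected_kap(builtin_reading, quality, factors):
    """Apply the energy-dependent factor to remove the built-in meter's bias."""
    return builtin_reading * factors[quality]
```

    A single scalar factor could not remove an energy-dependent bias, which is why the factors are tabulated per beam quality.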

  17. Integration of computational modeling and experimental techniques to design fuel surrogates

    DEFF Research Database (Denmark)

    Choudhury, H.A.; Intikhab, S.; Kalakul, Sawitree

    2017-01-01

    performance. A simplified alternative is to develop surrogate fuels that have fewer compounds and emulate certain important desired physical properties of the target fuels. Six gasoline blends were formulated through a computer aided model based technique “Mixed Integer Non-Linear Programming” (MINLP...... Virtual Process-Product Design Laboratory (VPPD-Lab) are applied onto the defined compositions of the surrogate gasoline. The aim is to primarily verify the defined composition of gasoline by means of VPPD-Lab. ρ, η and RVP are calculated with more accuracy and constraints such as distillation curve...... and flash point on the blend design are also considered. A post-design experiment-based verification step is proposed to further improve and fine-tune the “best” selected gasoline blends following the computation work. Here, advanced experimental techniques are used to measure the RVP, ρ, η, RON...

  18. SCIAMACHY Level 1 data: calibration concept and in-flight calibration

    Science.gov (United States)

    Lichtenberg, G.; Kleipool, Q.; Krijger, J. M.; van Soest, G.; van Hees, R.; Tilstra, L. G.; Acarreta, J. R.; Aben, I.; Ahlers, B.; Bovensmann, H.; Chance, K.; Gloudemans, A. M. S.; Hoogeveen, R. W. M.; Jongma, R. T. N.; Noël, S.; Piters, A.; Schrijver, H.; Schrijvers, C.; Sioris, C. E.; Skupin, J.; Slijkhuis, S.; Stammes, P.; Wuttke, M.

    2006-11-01

    The calibration of SCIAMACHY has been thoroughly checked since the instrument was launched on-board ENVISAT in February 2002. While SCIAMACHY's functional performance has been excellent since launch, a number of technical difficulties have appeared that required adjustments to the calibration. The problems can be separated into three types: (1) those caused by the instrument and/or platform environment. Among these is the high water content in the satellite structure and/or MLI layer. This results in the deposition of ice on the detectors in channels 7 and 8, which seriously affects the retrievals in the IR, mostly because of the continuous change of the slit function caused by scattering of the light through the ice layer. Additionally, a light leak in channel 7 severely hampers any retrieval from this channel. (2) Problems due to errors in the on-ground calibration and/or data processing, affecting for example the radiometric calibration. A new approach based on a mixture of on-ground and in-flight data is shortly described here. (3) Problems caused by principal limitations of the calibration concept, e.g. the possible appearance of spectral structures after the polarisation correction due to unavoidable errors in the determination of atmospheric polarisation. In this paper we give a complete overview of the calibration and the problems that still have to be solved. We also give an indication of the effect of calibration problems on retrievals where possible. Since the operational processing chain is currently being updated and no newly processed data are available at this point in time, for some calibration issues only a rough estimate of the effect on Level 2 products can be given. However, it is the intention of this paper to serve as a future reference for detailed studies into specific calibration issues.

  19. Semi-empirical neutron tool calibration (one and two-group approximation)

    International Nuclear Information System (INIS)

    Czubek, J.A.

    1988-01-01

    The physical principles of a new method of calibrating neutron tools for rock porosity determination are given. A short description of the physics of neutron transport in matter is presented, together with some remarks on the elementary interactions of neutrons with nuclei (cross sections, group cross sections, etc.). Definitions of the main integral parameters characterizing neutron transport in rock media are given. The three main approaches to the calibration problem (empirical, theoretical and semi-empirical) are presented, with a more detailed description of the latter. The new semi-empirical approach is described. The method is based on the definition of the apparent slowing-down or migration length for neutrons sensed by a neutron tool situated in real borehole-rock conditions. To calculate this apparent slowing-down or migration length, the ratio of appropriate spatial moments of the neutron distribution along the borehole axis is used. Theoretical results are given for one- and two-group diffusion approximations in borehole-rock geometries with the tool in the sidewall position. The physical and chemical parameters are given for the calibration blocks of the Logging Company in Zielona Gora. Using these data, the neutron parameters of the calibration blocks have been calculated. An example is given of how to determine the calibration curve for the dual-detector tool by applying this new method, using the neutron parameters mentioned above together with measurements performed in the calibration blocks. The most important advantage of the new semi-empirical calibration method is the possibility of placing on a single calibration curve all experimental calibration data obtained for a given neutron tool at different porosities, lithologies and borehole diameters. 52 refs., 21 figs., 21 tabs. (author)
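
    The moment-ratio idea can be sketched for the simplest 1-D diffusion kernel, where the axial flux falls off as exp(-|z|/L) and the second spatial moment satisfies <z^2> = 2 L^2. This illustrative reduction (uniform grid, symmetric kernel, invented names) is far simpler than the one- and two-group borehole geometry treated in the paper, but it shows how a length scale is extracted from spatial moments of a measured distribution.

```python
import numpy as np

def apparent_migration_length(z, flux):
    """Apparent migration length from axial flux samples on a uniform grid,
    via the moment ratio  L = sqrt(M2 / (2 * M0))  with
    Mn = integral of z**n * flux dz  (valid for a 1-D exp(-|z|/L) kernel)."""
    dz = z[1] - z[0]
    m0 = np.sum(flux) * dz           # zeroth spatial moment
    m2 = np.sum(z ** 2 * flux) * dz  # second spatial moment
    return float(np.sqrt(m2 / (2.0 * m0)))
```

    In the semi-empirical scheme this apparent length, rather than the raw count rate, is what ties measurements from different porosities, lithologies and borehole diameters to one calibration curve.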

  20. Computer model for ductile fracture

    International Nuclear Information System (INIS)

    Moran, B.; Reaugh, J. E.

    1979-01-01

    A computer model is described for predicting ductile fracture initiation and propagation. The computer fracture model is calibrated by simple and notched round-bar tension tests and a precracked compact tension test. The model is used to predict fracture initiation and propagation in a Charpy specimen, and the results are compared with experiments. The calibrated model provides a correlation between Charpy V-notch (CVN) fracture energy and any measure of fracture toughness, such as J/sub Ic/. A second, simpler empirical correlation was obtained using the energy to initiate fracture in the Charpy specimen rather than the total CVN energy, and the results were compared with the empirical correlation of Rolfe and Novak.

  1. Three-dimensional deformation response of a NiTi shape memory helical-coil actuator during thermomechanical cycling: experimentally validated numerical model

    Science.gov (United States)

    Dhakal, B.; Nicholson, D. E.; Saleeb, A. F.; Padula, S. A., II; Vaidyanathan, R.

    2016-09-01

    Shape memory alloy (SMA) actuators often operate under a complex state of stress for an extended number of thermomechanical cycles in many aerospace and engineering applications. Hence, it becomes important to account for multi-axial stress states and deformation characteristics (which evolve with thermomechanical cycling) when calibrating any SMA model for implementation in large-scale simulation of actuators. To this end, the present work is focused on the experimental validation of an SMA model calibrated for the transient and cyclic evolutionary behavior of shape memory Ni49.9Ti50.1, for the actuation of axially loaded helical-coil springs. The approach requires both experimental and computational aspects to appropriately assess the thermomechanical response of these multi-dimensional structures. As such, an instrumented and controlled experimental setup was assembled to obtain temperature, torque, degree of twist and extension, while controlling end constraints during heating and cooling of an SMA spring under a constant externally applied axial load. The computational component assesses the capabilities of a general, multi-axial, SMA material-modeling framework, calibrated for Ni49.9Ti50.1 with regard to its usefulness in the simulation of SMA helical-coil spring actuators. Axial extension, being the primary response, was examined on an axially-loaded spring with multiple active coils. Two different conditions of end boundary constraint were investigated in both the numerical simulations as well as the validation experiments: Case (1) where the loading end is restrained against twist (and the resulting torque measured as the secondary response) and Case (2) where the loading end is free to twist (and the degree of twist measured as the secondary response). The present study focuses on the transient and evolutionary response associated with the initial isothermal loading and the subsequent thermal cycles under applied constant axial load. The experimental

  2. Experimental and computational investigation of the NASA low-speed centrifugal compressor flow field

    Science.gov (United States)

    Hathaway, Michael D.; Chriss, Randall M.; Wood, Jerry R.; Strazisar, Anthony J.

    1993-01-01

    An experimental and computational investigation of the NASA Lewis Research Center's low-speed centrifugal compressor (LSCC) flow field was conducted using laser anemometry and Dawes' three-dimensional viscous code. The experimental configuration consisted of a backswept impeller followed by a vaneless diffuser. Measurements of the three-dimensional velocity field were acquired at several measurement planes through the compressor. The measurements describe both the throughflow and secondary velocity field along each measurement plane. In several cases the measurements provide details of the flow within the blade boundary layers. Insight into the complex flow physics within centrifugal compressors is provided by the computational fluid dynamics analysis (CFD), and assessment of the CFD predictions is provided by comparison with the measurements. Five-hole probe and hot-wire surveys at the inlet and exit to the impeller as well as surface flow visualization along the impeller blade surfaces provided independent confirmation of the laser measurement technique. The results clearly document the development of the throughflow velocity wake that is characteristic of unshrouded centrifugal compressors.

  3. Distinguishing new science from calibration effects in the electron-volt neutron spectrometer VESUVIO at ISIS

    Science.gov (United States)

    Chatzidimitriou-Dreismann, C. A.; Gray, E. MacA.; Blach, T. P.

    2012-06-01

    The "standard" procedure for calibrating the Vesuvio eV neutron spectrometer at the ISIS neutron source, forming the basis for data analysis over at least the last decade, was recently documented in considerable detail by the instrument's scientists. Additionally, we recently derived analytic expressions for the sensitivity of recoil peak positions with respect to flight-path parameters and presented neutron-proton scattering results that together called into question the validity of the "standard" calibration. These investigations should contribute significantly to the assessment of the experimental results obtained with Vesuvio. Here we present new results of neutron-deuteron scattering from D2 in the backscattering angular range (θ>90°), which are accompanied by a striking energy increase that violates the Impulse Approximation, thus leading unequivocally to the following dilemma: (A) either the "standard" calibration is correct, and then the experimental results represent a novel quantum dynamical effect of D which stands in blatant contradiction of conventional theoretical expectations; or (B) the present "standard" calibration procedure is seriously deficient and leads to artificial outcomes. For Case (A), we allude to the topic of attosecond quantum dynamical phenomena and our recent neutron scattering experiments from H2 molecules. For Case (B), some suggestions as to how the "standard" calibration could be considerably improved are made.

  4. Mass and stiffness calibration of nanowires using thermally driven vibration

    International Nuclear Information System (INIS)

    Kiracofe, D R; Raman, A; Yazdanpanah, M M

    2011-01-01

    Cantilevered or suspended nanowires show promise for force or mass sensing applications due to their small mass, high force sensitivity and high frequency bandwidth. To use these as quantitative sensors, their bending stiffness or mass must be calibrated experimentally, often using thermally driven vibration. However, this can be difficult because nanowires are slightly asymmetric, which results in two spatially orthogonal bending eigenmodes with closely spaced frequencies. This asymmetry presents problems for traditional stiffness calibration methods, which equate the measured thermal vibration spectrum near a resonance to that of a single eigenmode. Moreover, the principal axes may be arbitrarily rotated with respect to the measurement direction. In this work, the authors propose a method for calibrating the bending stiffness and mass of such nanowires' eigenmodes using a single measurement taken at an arbitrary orientation with respect to the principal axes.
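For a single, well-separated eigenmode, thermal calibration rests on the equipartition theorem: (1/2)·k·⟨x²⟩ = (1/2)·k_B·T, so k = k_B·T/⟨x²⟩. A minimal single-mode sketch follows; the paper's contribution is precisely the harder case of two closely spaced, arbitrarily rotated eigenmodes, which this toy version does not attempt:

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K

def stiffness_from_thermal_motion(displacements_m, temperature_K):
    """Equipartition estimate k = k_B * T / <x^2> for one bending eigenmode.

    displacements_m: zero-mean thermal deflection time series (metres),
    measured along the eigenmode's principal axis.
    """
    mean_square = sum(x * x for x in displacements_m) / len(displacements_m)
    return K_B * temperature_K / mean_square
```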

  5. Studies on the calibration of mammography automatic exposure mode with computed radiology

    International Nuclear Information System (INIS)

    Zhu Hongzhou; Shao Guoliang; Shi Lei; Liu Qing

    2010-01-01

    Objective: To realize the optimization of image quality and radiation dose by correcting mammography automatic exposure, according to automatic exposure controlled mode of mammography film-screen system. Methods: The film-screen system (28 kV) was applied to perform automatic exposure of plexiglass (40 mm) and get the standard dose of exposure, the exposure mode of CR base on LgM=2.0 was rectified, which was divided into 10 steps. Mammary glands pattern (Fluke NA18-220) were examined with CR (26, 28, and 30 kV) by the automatic exposure mode corrected. The exposure values (mAs) were recorded. CR image was diagnosed and evaluated in double blind way by 4 radiologists according to American Collage of Radiology (ACR) standard. Results: Based on the standard of CR automatic exposure with the dose higher than the traditional exposure of film-screen system, the calibration of mammography automatic exposure was accomplished. The test results of the calibrated mode was better than the scoring system of ACR. Conclusions: Comparative study showed improvement in acquiring high-quality image and reduction of radiation dose. The corrected mammography automatic exposure mode might be a better method for clinical use. (authors)

  6. A flexible calibration method for laser displacement sensors based on a stereo-target

    International Nuclear Information System (INIS)

    Zhang, Jie; Sun, Junhua; Liu, Zhen; Zhang, Guangjun

    2014-01-01

    Laser displacement sensors (LDSs) are widely used in online measurement owing to their non-contact operation, high measurement speed and other advantages. However, existing calibration methods for LDSs based on the traditional triangulation measurement model are time-consuming and tedious to operate. In this paper, a calibration method for LDSs based on a vision measurement model of the LDS is presented. According to the constraint relationships of the model parameters, the calibration is implemented by freely moving a stereo-target at least twice in the field of view of the LDS. Both simulation analyses and real experiments were conducted. Experimental results demonstrate that the calibration method achieves an accuracy of 0.044 mm within a measurement range of about 150 mm. Compared to traditional calibration methods, the proposed method imposes no special limitation on the relative position of the LDS and the target. The linearity approximation of the measurement model is not needed in the calibration, and thus the measurement range is not limited to the linear range. The calibration is easy and quick to implement, and the method can be applied in a wider range of fields. (paper)

  7. Exploratory Experimentation and Computation

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.; Borwein, Jonathan M.

    2010-02-25

    We believe the mathematical research community is facing a great challenge to re-evaluate the role of proof in light of recent developments. On one hand, the growing power of current computer systems, of modern mathematical computing packages, and of the growing capacity to data-mine on the Internet, has provided marvelous resources to the research mathematician. On the other hand, the enormous complexity of many modern capstone results such as the Poincaré conjecture, Fermat's Last Theorem, and the classification of finite simple groups has raised questions as to how we can better ensure the integrity of modern mathematics. Yet as the need and prospects for inductive mathematics blossom, the requirement to ensure the role of proof is properly founded remains undiminished.

  8. Misalignment calibration of geomagnetic vector measurement system using parallelepiped frame rotation method

    International Nuclear Information System (INIS)

    Pang, Hongfeng; Zhu, XueJun; Pan, Mengchun; Zhang, Qi; Wan, Chengbiao; Luo, Shitu; Chen, Dixiang; Chen, Jinfei; Li, Ji; Lv, Yunxiao

    2016-01-01

    Misalignment error is a key factor influencing the measurement accuracy of a geomagnetic vector measurement system; calibrating it is difficult because the sensors measure different physical quantities and their coordinate axes are not directly observable. A new misalignment calibration method based on rotating a parallelepiped frame is proposed. Simulation and experimental results show the effectiveness of the calibration method. The experimental system mainly comprises a DM-050 three-axis fluxgate magnetometer, an INS (inertial navigation system), an aluminium parallelepiped frame and an aluminium plane base. Misalignment angles are calculated from the data measured by the magnetometer and the INS after rotating the aluminium parallelepiped frame on the aluminium plane base. After calibration, the RMS errors of the geomagnetic north, vertical and east components are reduced from 349.441 nT, 392.530 nT and 562.316 nT to 40.130 nT, 91.586 nT and 141.989 nT respectively. - Highlights: • A new misalignment calibration method by rotating a parallelepiped frame is proposed. • It does not need sensor attitude information or the local dip angle. • The attitude change angle of the calibration system is not strictly required. • It can be widely used when sensors measure different physical information. • Geomagnetic vector measurement error is reduced evidently.
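Once the misalignment Euler angles are estimated, applying them is a frame rotation of each magnetometer reading into the INS frame. A minimal sketch follows; the Z-Y-X Euler convention and the direction of the correction are assumptions for illustration, and estimating the angles themselves (the paper's subject) is the hard part:

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def correct_reading(b_sensor, yaw, pitch, roll):
    """Rotate a fluxgate reading (nT) into the INS body frame using
    calibrated misalignment angles (radians), Z-Y-X convention assumed."""
    R = matmul(rot_z(yaw), matmul(rot_y(pitch), rot_x(roll)))
    return matvec(R, b_sensor)
```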

  10. Calibration technique and study on metrological characteristics of a high-voltage inverse square-law function generator

    International Nuclear Information System (INIS)

    Popov, V.P.; Semenov, A.L.

    1987-01-01

    The calibration technique is described, and the metrological characteristics are analysed, for a high-voltage generator of the inverse-quadratic function (HGF), a functional unit of the diagnostic system of an electrodynamic analyser of the ionic component of a laser plasma. The results of HGF testing over the range of function time constants τ = 5-25 μs are given. Analysis of the metrological and experimental characteristics shows that the HGF with automatic calibration has quite accurate parameters. The high accuracy of function generation is ensured by the possibility of performing calibration and adjustment under experimental working conditions. The generated pulse amplitude can be increased to several tens of kilovolts. Moreover, the possibility of promptly adjusting the function to the required parameter (τ) substantially extends the HGF's capabilities.

  11. An efficient multi-stage algorithm for full calibration of the hemodynamic model from BOLD signal responses

    KAUST Repository

    Zambri, Brian; Djellouli, Rabia; Laleg-Kirati, Taous-Meriem

    2017-01-01

    We propose a computational strategy that falls into the category of prediction/correction iterative-type approaches, for calibrating the hemodynamic model introduced by Friston et al. (2000). The proposed method is employed to estimate consecutively the values of the biophysiological system parameters and the external stimulus characteristics of the model. Numerical results corresponding to both synthetic and real functional Magnetic Resonance Imaging (fMRI) measurements for a single stimulus as well as for multiple stimuli are reported to highlight the capability of this computational methodology to fully calibrate the considered hemodynamic model.
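The prediction/correction idea — alternately refining one group of unknowns (here, "model parameters") while holding the other ("stimulus characteristics") fixed — can be illustrated with a coordinate-descent toy. The exponential forward model below is a deliberately simple stand-in, not Friston's balloon model, and the multiplicative grid search is only a sketch of the alternation:

```python
import math

def model(a, b, ts):
    """Toy forward model y(t) = a*exp(-t/b); a stand-in for the
    hemodynamic (balloon) model, NOT the real thing."""
    return [a * math.exp(-t / b) for t in ts]

def sse(pred, obs):
    """Sum-of-squared-errors misfit between prediction and observation."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs))

def alternate_calibrate(obs, ts, a0=1.0, b0=1.0, sweeps=20):
    """Prediction/correction-style alternation: refine 'a' with 'b' fixed,
    then 'b' with 'a' fixed, over a multiplicative grid.  Monotone in SSE
    because the no-change factor 1.0 is always among the candidates."""
    a, b = a0, b0
    factors = (0.5, 0.9, 1.0, 1.1, 2.0)
    for _ in range(sweeps):
        a = min((a * f for f in factors),
                key=lambda cand: sse(model(cand, b, ts), obs))
        b = min((b * f for f in factors),
                key=lambda cand: sse(model(a, cand, ts), obs))
    return a, b
```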

  13. Modification of silicon nitride surfaces with GOPES and APTES for antibody immobilization: computational and experimental studies

    International Nuclear Information System (INIS)

    To, Thien Dien; Nguyen, Anh Tuan; Phan, Khoa Nhat Thanh; Truong, An Thu Thi; Doan, Tin Chanh Duc; Dang, Chien Mau

    2015-01-01

    Chemical modification of silicon nitride (SiN) surfaces by silanization has been widely studied, especially with 3-(aminopropyl)triethoxysilane (APTES) and 3-(glycidyloxypropyl) dimethylethoxysilane (GOPES). However, few reports have performed the experimental and computational studies together. In this study, surface modification of SiN surfaces with GOPES, and with APTES covalently bound to glutaraldehyde (GTA), was investigated for antibody immobilization. The monoclonal anti-cytokeratin-FITC (MACF) antibody was immobilized on the modified SiN surfaces. The modified surfaces were characterized by water contact angle measurements, atomic force microscopy and fluorescence microscopy. The FITC fluorescent label indicated the presence of the MACF antibody on the SiN surfaces and the efficiency of the silanization reaction. Adsorption of APTES and GOPES on the oxidized SiN surfaces was computationally modeled and calculated with Materials Studio software. The computational and experimental results showed that modification of the SiN surfaces with APTES and GTA was more effective than modification with GOPES. (paper)

  14. Efficiency calibration of solid track spark auto counter

    International Nuclear Information System (INIS)

    Wang Mei; Wen Zhongwei; Lin Jufang; Liu Rong; Jiang Li; Lu Xinxin; Zhu Tonghua

    2008-01-01

    The factors influencing the detection efficiency of the solid track spark auto counter were analyzed, and the optimal etching conditions and charging parameters were reconfirmed. Using a small plate fission ionization chamber, the efficiency of the solid track spark auto counter was re-calibrated for various experimental assemblies, yielding its efficiency under the various experimental conditions. (authors)

  15. Spitzer/JWST Cross Calibration: IRAC Observations of Potential Calibrators for JWST

    Science.gov (United States)

    Carey, Sean J.; Gordon, Karl D.; Lowrance, Patrick; Ingalls, James G.; Glaccum, William J.; Grillmair, Carl J.; E Krick, Jessica; Laine, Seppo J.; Fazio, Giovanni G.; Hora, Joseph L.; Bohlin, Ralph

    2017-06-01

    We present observations at 3.6 and 4.5 microns, using IRAC on the Spitzer Space Telescope, of a set of main-sequence A stars and white dwarfs that are potential calibrators across the JWST instrument suite. The stars range in brightness from 4.4 to 15 mag in the K band. The calibration observations use a redundancy similar to that of the observing strategy for the IRAC primary calibrators (Reach et al. 2005), and the photometry is obtained using identical methods and instrumental photometric corrections to those applied to the IRAC primary calibrators (Carey et al. 2009). The resulting photometry is then compared to predictions based on spectra from the CALSPEC Calibration Database (http://www.stsci.edu/hst/observatory/crds/calspec.html) and the IRAC bandpasses. These observations are part of an ongoing collaboration between IPAC and STScI investigating absolute calibration in the infrared.
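Comparing measured photometry with CALSPEC-based predictions reduces to magnitude arithmetic, m = -2.5·log10(F/F₀), against a band zero point. A minimal sketch follows; the 280.9 Jy channel-1 zero point is the Reach et al. (2005) Vega-system value and is quoted here as an assumption, not taken from this record:

```python
import math

IRAC_CH1_ZERO_JY = 280.9  # 3.6 um Vega zero point (Reach et al. 2005), assumed

def flux_to_mag(flux_jy, zero_point_jy=IRAC_CH1_ZERO_JY):
    """Magnitude from flux density relative to the band zero point."""
    return -2.5 * math.log10(flux_jy / zero_point_jy)

def mag_offset(observed_jy, predicted_jy):
    """Observed-minus-predicted offset (mag) used to assess calibration."""
    return -2.5 * math.log10(observed_jy / predicted_jy)
```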

  16. Real-time alignment and calibration of the LHCb Detector in Run II

    CERN Multimedia

    Dujany, Giulio

    2016-01-01

    Stable, precise spatial alignment and PID calibration are necessary to achieve optimal detector performance. During Run 2, LHCb has a new real-time detector alignment and calibration that allows equivalent performance to be reached in the online and offline reconstruction. This offers the opportunity to optimise the event selection by applying stronger constraints and to use hadronic particle identification at the trigger level. The computing-time constraints are met through the use of a new dedicated framework using the multi-core farm infrastructure for the trigger. The motivation for a real-time alignment and calibration of the LHCb detector is discussed from the operational and physics-performance points of view. Specific challenges of this configuration are discussed, as well as the designed framework and its performance.

  18. Borehole Volumetric Strainmeter Calibration From a Nearby Seismic Broadband Array at Etna Volcano

    Science.gov (United States)

    Currenti, G.; Zuccarello, L.; Bonaccorso, A.; Sicali, A.

    2017-10-01

    Strainmeter and broadband seismic signals have been analyzed jointly with the aim of calibrating a borehole strainmeter at Etna volcano by using a seismo-geodetic technique. Our results reveal a good coherence between the dynamic strains estimated from seismometer data and strains recorded by a dilatometer in a low-frequency range [0.03-0.06 Hz] at the arrival of teleseismic waves. This significant coherence enabled estimating the calibration coefficient and making a comparison with calibration results derived from other methods. In particular, we verified that the proposed approach provides a calibration coefficient that matches the results obtained from the comparison of the recorded strain both with theoretical strain tides and with normal-mode synthetic straingrams. The approach presented here has the advantage of exploiting recorded seismic data, avoiding the use of computed strain from theoretical models.
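The calibration coefficient in this seismo-geodetic approach amounts to a least-squares scale factor between the seismometer-derived reference strain and the dilatometer output, fitted only in the coherent band. A minimal sketch, assuming both series have already been band-passed to 0.03-0.06 Hz and time-aligned:

```python
def calibration_coefficient(reference_strain, recorded_counts):
    """Least-squares scale c minimizing sum((reference - c*counts)^2).

    reference_strain: dynamic strain estimated from seismometer data.
    recorded_counts:  raw dilatometer output over the same window.
    Both series are assumed band-passed and aligned before fitting.
    """
    num = sum(r * x for r, x in zip(reference_strain, recorded_counts))
    den = sum(x * x for x in recorded_counts)
    return num / den
```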

  19. MP.EXE, a Calculation Program for Pressure Reciprocity Calibration of Microphones

    DEFF Research Database (Denmark)

    Rasmussen, Knud

    1998-01-01

    A computer program is described which calculates the pressure sensitivity of microphones based on measurements of the electrical transfer impedance in a reciprocity calibration set-up. The calculations are performed according to the International Standard IEC 61094-2. In addition a number of options
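In reciprocity calibration, each pairwise measurement of electrical transfer impedance yields the product of two microphone sensitivities, so three pairings determine all three microphones individually. The closing algebra is sketched below (this is the generic reciprocity identity only, not the full IEC procedure that the program implements):

```python
import math

def sensitivities_from_products(p12, p13, p23):
    """Solve individual pressure sensitivities M1, M2, M3 from the three
    pairwise products M1*M2, M1*M3, M2*M3 obtained via electrical
    transfer impedance measurements:
        M1 = sqrt(p12 * p13 / p23), then M2 = p12/M1, M3 = p13/M1.
    """
    m1 = math.sqrt(p12 * p13 / p23)
    m2 = p12 / m1
    m3 = p13 / m1
    return m1, m2, m3
```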

  20. Experimental realization of quantum cheque using a five-qubit quantum computer

    Science.gov (United States)

    Behera, Bikash K.; Banerjee, Anindita; Panigrahi, Prasanta K.

    2017-12-01

    Quantum cheques could be a forgery-free way to make transactions in a quantum networked banking system, with perfect security against any no-signalling adversary. Here, we demonstrate an implementation of the quantum cheque proposed by Moulick and Panigrahi (Quantum Inf Process 15:2475-2486, 2016) using the five-qubit IBM quantum computer. Appropriate single-qubit, CNOT and Fredkin gates are used in an optimized configuration. The accuracy of the implementation is checked and verified through quantum state tomography, by comparing the theoretical and experimental density matrices.
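The Fredkin (controlled-SWAP) gate used in the protocol is easy to verify with a tiny state-vector simulation. The sketch below acts on a 3-qubit amplitude vector and illustrates only the gate's truth-table action, not the IBM device or the cheque protocol itself:

```python
def apply_fredkin(state):
    """Controlled-SWAP on an 8-amplitude 3-qubit state vector.

    Basis ordering |q0 q1 q2>, index = 4*q0 + 2*q1 + q2, with qubit 0 as
    the control: when q0 = 1, qubits 1 and 2 are swapped, i.e. the
    amplitudes of |101> and |110> are exchanged; all other basis states
    pass through unchanged.
    """
    out = list(state)
    out[0b101], out[0b110] = state[0b110], state[0b101]
    return out
```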