WorldWideScience

Sample records for carlo efficiency calibration

  1. Monte Carlo efficiency calibration of a neutron generator-based total-body irradiator

    International Nuclear Information System (INIS)

Many body composition measurement systems are calibrated against a single-sized reference phantom. Prompt-gamma neutron activation (PGNA) provides the only direct measure of total body nitrogen (TBN), an index of the body's lean tissue mass. In PGNA systems, body size influences neutron flux attenuation, induced gamma signal distribution, and counting efficiency. Thus, calibration based on a single-sized phantom could result in inaccurate TBN values. We used Monte Carlo simulations (MCNP-5; Los Alamos National Laboratory) to map a system's response to the range of body weights (65-160 kg) and body fat distributions (25-60%) in obese humans. Calibration curves were constructed to derive body-size correction factors relative to a standard reference phantom, providing customized adjustments to account for differences in body habitus of obese adults. The use of MCNP-generated calibration curves should allow for a better estimate of the true changes in lean tissue mass that may occur during intervention programs focused only on weight loss. (author)
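
    The following sketch illustrates how such MCNP-derived calibration curves might be applied in practice: body-size correction factors tabulated on a weight/fat grid are interpolated for an individual subject and used to scale a phantom-calibrated TBN value. The grid values, function names and numbers are illustrative assumptions, not data from the study.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Illustrative grid of MC-derived correction factors relative to the
# reference phantom (rows: body weight in kg; columns: body fat fraction).
weights = np.array([65.0, 100.0, 130.0, 160.0])
fat_fractions = np.array([0.25, 0.40, 0.60])
correction = np.array([        # hypothetical values, for illustration only
    [1.00, 1.03, 1.07],
    [1.05, 1.09, 1.14],
    [1.09, 1.14, 1.20],
    [1.12, 1.18, 1.25],
])

interp = RegularGridInterpolator((weights, fat_fractions), correction)

def corrected_tbn(raw_tbn_g: float, weight_kg: float, fat_frac: float) -> float:
    """Scale a phantom-calibrated TBN value by the body-size correction."""
    factor = interp([[weight_kg, fat_frac]])[0]
    return raw_tbn_g * factor

print(corrected_tbn(1800.0, 120.0, 0.45))  # grams of nitrogen, illustrative
```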

  2. Application of the Monte Carlo code DETEFF to efficiency calibrations for in situ gamma-ray spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Carrazana Gonzalez, J.; Cornejo Diaz, N. [Centre for Radiological Protection and Hygiene, P.O. Box 6195, Habana (Cuba); Jurado Vargas, M., E-mail: mjv@unex.es [Departamento de Fisica, Universidad de Extremadura, 06071 Badajoz (Spain)

    2012-05-15

    We studied the applicability of the Monte Carlo code DETEFF for the efficiency calibration of detectors for in situ gamma-ray spectrometry determinations of ground deposition activity levels. For this purpose, the code DETEFF was applied to a study case, and the calculated 137Cs activity deposition levels at four sites were compared with published values obtained both by soil sampling and by in situ measurements. The 137Cs ground deposition levels obtained with DETEFF were found to be equivalent to the results of the study case within the uncertainties involved. The code DETEFF could thus be used for the efficiency calibration of in situ gamma-ray spectrometry for the determination of ground deposition activity using the uniform slab model. It has the advantage of requiring far less simulation time than general Monte Carlo codes adapted for efficiency computation, which is essential for in situ gamma-ray spectrometry where the measurement configuration yields low detection efficiency. - Highlights: ► Application of the code DETEFF to in situ gamma-ray spectrometry. ► 137Cs ground deposition levels evaluated assuming a uniform slab model. ► Code DETEFF allows a rapid efficiency calibration.
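
    A hedged sketch of the final step such a calibration enables: converting a net peak count rate into a 137Cs ground deposition using the slab-model efficiency computed by the code. All variable values below are illustrative placeholders, not results from the paper.

```python
# Converting a net 662 keV peak count rate into 137Cs ground deposition
# with a Monte-Carlo-computed uniform-slab efficiency. Values illustrative.
net_count_rate = 1.2        # net counts/s in the 662 keV full-energy peak
slab_efficiency = 4.0e-4    # counts/s per (photon s^-1 m^-2), from the code
gamma_yield = 0.851         # 662 keV emission probability of 137Cs

deposition = net_count_rate / (slab_efficiency * gamma_yield)
print(f"137Cs ground deposition: {deposition:.0f} Bq/m^2")
```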

  3. Experimental and Monte-Carlo absolute efficiency calibration of an HPGe γ-ray spectrometer for application in neutron activation analysis

    International Nuclear Information System (INIS)

    The High Purity Germanium (HPGe) detector is widely used to measure the γ-rays from neutron-activated foils used for neutron spectrum measurements, owing to its superior energy resolution and photopeak efficiency. To determine the neutron-induced activity in foils, it is very important to carry out an absolute calibration of the photo-peak efficiency over a wide range of γ-ray energies. Neutron-activated foils are considered extended γ-ray sources, whereas the sources available for efficiency calibration are usually point sources. It is therefore difficult to determine the photo-peak efficiency for extended sources using these point sources. A method has been developed to address this problem. It combines experimental measurements with point sources and the development of an optimized model for the Monte-Carlo N-Particle code (MCNP) with the help of these measurements. This MCNP model can then be used to find the photo-peak efficiency for any kind of source at any energy. (author)
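
    A minimal sketch of the optimization step described above, with a stand-in function in place of an actual MCNP run: one free detector parameter (here a hypothetical dead-layer thickness) is adjusted until simulated point-source efficiencies reproduce the measured ones. All names and numbers are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

energies = np.array([122.0, 344.0, 662.0, 1173.0, 1332.0])     # keV
measured_eff = np.array([0.082, 0.041, 0.025, 0.016, 0.014])   # point source

def simulated_eff(dead_layer_mm):
    # Stand-in for "run MCNP with this dead-layer thickness and tally the
    # full-energy peaks"; a real workflow would call the transport code here.
    return measured_eff * (1.0 - 0.02 * (dead_layer_mm - 0.7))

def misfit(dead_layer_mm):
    return float(np.sum((simulated_eff(dead_layer_mm) - measured_eff) ** 2))

best = minimize_scalar(misfit, bounds=(0.1, 2.0), method="bounded")
print(f"optimised dead-layer thickness: {best.x:.2f} mm")
```

    Once the model parameters are tuned this way, the same model can be rerun with an extended source geometry to obtain the efficiencies that point sources cannot provide directly.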

  4. Numerical efficiency calibration of in vivo measurement systems. Monte Carlo simulations of in vivo measurement scenarios for the detection of incorporated radionuclides, including validation, analysis of efficiency-sensitive parameters and customized anthropomorphic voxel models

    International Nuclear Information System (INIS)

    Detector efficiency calibration for in vivo bioassay measurements is based on physical anthropomorphic phantoms that can be loaded with radionuclides of the suspected incorporation. Systematic errors of traditional calibration methods can cause considerable over- or underestimation of the incorporated activity and hence of the absorbed dose in the human body. In this work, Monte Carlo methods for radiation transport problems are used. Virtual models of the in vivo measurement equipment used at the Institute of Radiation Research, including detectors and anthropomorphic phantoms, have been developed. Software tools have been coded to handle memory-intensive human models for the visualization, preparation and evaluation of simulations of in vivo measurement scenarios. The tools, methods and models used have been validated. Various parameters have been investigated for their influence on the detector efficiency, in order to identify and quantify possible systematic errors. Measures have been implemented to improve the determination of the detector efficiency, with a view to applying them in the routine work of the institute's in vivo measurement laboratory. A positioning system has been designed and installed in the Partial Body Counter measurement chamber to measure the position of the detector relative to the test person, which was identified as a sensitive parameter. A computer cluster has been set up to facilitate the Monte Carlo simulations and reduce computing time. Methods based on image registration techniques have been developed to transform existing human models to match an individual test person. The measures and methods developed have successfully improved the classic detector efficiency calibration methods. (orig.)

  5. Energy and resolution calibration of NaI(Tl) and LaBr3(Ce) scintillators and validation of an EGS5 Monte Carlo user code for efficiency calculations

    Energy Technology Data Exchange (ETDEWEB)

    Casanovas, R., E-mail: ramon.casanovas@urv.cat [Unitat de Fisica Medica, Facultat de Medicina i Ciencies de la Salut, Universitat Rovira i Virgili, ES-43201 Reus (Tarragona) (Spain); Morant, J.J. [Servei de Proteccio Radiologica, Facultat de Medicina i Ciencies de la Salut, Universitat Rovira i Virgili, ES-43201 Reus (Tarragona) (Spain); Salvado, M. [Unitat de Fisica Medica, Facultat de Medicina i Ciencies de la Salut, Universitat Rovira i Virgili, ES-43201 Reus (Tarragona) (Spain)

    2012-05-21

    Radiation detectors yield optimal performance only if they are accurately calibrated. This paper presents the energy, resolution and efficiency calibrations for two scintillation detectors, NaI(Tl) and LaBr3(Ce). For the first two calibrations, several fitting functions were tested. To perform the efficiency calculations, a Monte Carlo user code for the EGS5 code system was developed with several important implementations. The correct performance of the simulations was validated by comparing the simulated spectra with the experimental spectra and by reproducing a number of efficiency and activity calculations. - Highlights: ► NaI(Tl) and LaBr3(Ce) scintillation detectors are used for gamma-ray spectrometry. ► Energy, resolution and efficiency calibrations are discussed for both detectors. ► For the first two calibrations, several fitting functions are tested. ► A Monte Carlo user code for EGS5 was developed for the efficiency calculations. ► The code was validated by reproducing some efficiency and activity calculations.
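
    A brief sketch of what the energy and resolution calibrations involve. The paper compares several fitting functions that are not reproduced here, so the quadratic channel-to-energy polynomial and the FWHM(E) = a + b*sqrt(E) resolution form below are assumptions, with illustrative data.

```python
import numpy as np
from scipy.optimize import curve_fit

channels = np.array([180.0, 480.0, 930.0, 1640.0, 1860.0])   # peak centroids
energies = np.array([122.0, 344.0, 662.0, 1173.0, 1332.0])   # keV
fwhms    = np.array([6.5, 10.4, 14.1, 18.6, 19.8])           # keV, NaI-like

# Energy calibration: quadratic polynomial E(channel).
energy_cal = np.polyfit(channels, energies, 2)

# Resolution calibration: FWHM(E) = a + b*sqrt(E), one common choice.
def fwhm_model(E, a, b):
    return a + b * np.sqrt(E)

(res_a, res_b), _ = curve_fit(fwhm_model, energies, fwhms, p0=(1.0, 0.5))

print(f"E(ch=1000) = {np.polyval(energy_cal, 1000.0):.1f} keV")
print(f"FWHM(1000 keV) = {fwhm_model(1000.0, res_a, res_b):.1f} keV")
```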

  6. Energy and resolution calibration of NaI(Tl) and LaBr3(Ce) scintillators and validation of an EGS5 Monte Carlo user code for efficiency calculations

    International Nuclear Information System (INIS)

    Radiation detectors yield optimal performance only if they are accurately calibrated. This paper presents the energy, resolution and efficiency calibrations for two scintillation detectors, NaI(Tl) and LaBr3(Ce). For the first two calibrations, several fitting functions were tested. To perform the efficiency calculations, a Monte Carlo user code for the EGS5 code system was developed with several important implementations. The correct performance of the simulations was validated by comparing the simulated spectra with the experimental spectra and by reproducing a number of efficiency and activity calculations. - Highlights: ► NaI(Tl) and LaBr3(Ce) scintillation detectors are used for gamma-ray spectrometry. ► Energy, resolution and efficiency calibrations are discussed for both detectors. ► For the first two calibrations, several fitting functions are tested. ► A Monte Carlo user code for EGS5 was developed for the efficiency calculations. ► The code was validated by reproducing some efficiency and activity calculations.

  7. Monte Carlo calculation of the efficiency calibration curve and coincidence-summing corrections in low-level gamma-ray spectrometry using well-type HPGe detectors

    International Nuclear Information System (INIS)

    Well-type high-purity germanium (HPGe) detectors are well suited to the analysis of small amounts of environmental samples, as they combine both low background and high detection efficiency. A low-background well-type detector is installed in the Modane underground laboratory. In the well geometry, coincidence-summing effects are large and make the construction of the full-energy-peak efficiency curve a difficult task with a usual calibration standard, especially in the high-energy range. Using the GEANT code and taking into account a detailed description of the detector and the source, efficiency curves have been modelled for several filling heights of the vial. With a special routine taking into account the decay schemes of the radionuclides, corrections for the coincidence-summing effects that occur when measuring samples containing 238U, 232Th or 134Cs have been computed. The results are found to be in good agreement with the experimental data. It is shown that the effect of triple coincidences on counting losses amounts to 7-15% of that of pair coincidences for the 604 and 796 keV lines of 134Cs.
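
    A hedged sketch of the basic summing-out correction that such a decay-scheme routine builds on, for the simplest case of a two-gamma cascade; real schemes (e.g. the 134Cs decay discussed above) require sums over all cascades, and the numbers below are illustrative.

```python
# Summing-out correction for gamma1 in a prompt two-gamma cascade
# (gamma1 feeding gamma2): counts are lost from gamma1's full-energy peak
# whenever gamma2 deposits any energy in the detector, so
#     C1 = 1 / (1 - p12 * eps_t2)
# with p12 the coincidence probability and eps_t2 the *total* efficiency
# for gamma2, which is high in a well geometry. Values illustrative.
p12 = 0.97       # probability that gamma2 accompanies gamma1
eps_t2 = 0.45    # total efficiency at gamma2's energy (well-type detector)

correction = 1.0 / (1.0 - p12 * eps_t2)
print(f"summing-out correction for the gamma1 peak: {correction:.3f}")
```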

  8. Virtual point source efficiency calibration method for voluminous sample of radio-xenon based on efficiency function of point source

    International Nuclear Information System (INIS)

    A virtual point source calibration method was developed to accomplish the efficiency calibration of voluminous samples. We used a mixed point source to obtain the parameters of the point-source efficiency function, from which the virtual position of a voluminous sample is derived. The detection efficiencies for xenon samples and standard soil samples were then calibrated by placing the point source at their virtual positions. The Monte Carlo method was also used to simulate the detection efficiency for the xenon samples. Deviations between the virtual-source method and the Monte Carlo simulation are within 2.2% for the xenon samples. Thus, two robust efficiency calibration methods have been developed, based on Monte Carlo simulations and on the virtual point source, respectively. (author)
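
    A minimal sketch of the virtual point-source idea, assuming an inverse-square-with-offset efficiency function (the paper's actual functional form is not given here): the fitted function is inverted to find the distance at which a point source reproduces a measured volume-sample efficiency. Data are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

d = np.array([5.0, 10.0, 15.0, 20.0, 25.0])             # distance, cm
eps = np.array([0.062, 0.021, 0.0105, 0.0062, 0.0041])  # point-source eff.

def eps_model(d, k, d0):
    # Assumed form: inverse square law with an effective-centre offset d0.
    return k / (d + d0) ** 2

(k, d0), _ = curve_fit(eps_model, d, eps, p0=(2.0, 1.0))

eps_volume = 0.015                          # measured volume-sample efficiency
d_virtual = np.sqrt(k / eps_volume) - d0    # invert the fitted model
print(f"virtual source position: {d_virtual:.1f} cm")
```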

  9. Mathematical efficiency calibration in gamma spectroscopy

    CERN Document Server

    Kaminski, S; Wilhelm, C

    2003-01-01

    Mathematical efficiency calibration with the LabSOCS software was introduced for two detectors in the measurement laboratory of the Central Safety Department of Forschungszentrum Karlsruhe. In the present contribution, conventional efficiency calibration of gamma spectroscopy systems and mathematical efficiency calibration with LabSOCS are compared with respect to their performance, uncertainties, expenses, and results. The experience gained is reported, and the advantages and disadvantages of both methods of efficiency calibration are listed. The results allow the conclusion that mathematical efficiency calibration is a real alternative to the conventional efficiency calibration of gamma spectroscopy systems obtained by measurements of mixed gamma-ray standard sources.

  10. Monte Carlo simulation: tool for the calibration in analytical determination of radionuclides

    International Nuclear Information System (INIS)

    This work shows how the traceability of analytical determinations of radionuclides is established using this calibration method. It highlights the advantages offered by Monte Carlo simulation for applying corrections for differences in chemical composition, density and fill height of the analyzed samples. Likewise, the results obtained by the LVRA in two exercises organized by the International Atomic Energy Agency (IAEA) are presented. In these exercises (an intercomparison and a proficiency test), all reported analytical results were based on efficiency calibrations by Monte Carlo simulation using the DETEFF program.

  11. Detector characterization for efficiency calibration in different measurement geometries

    International Nuclear Information System (INIS)

    In order to perform an accurate efficiency calibration for different measurement geometries, a good knowledge of the detector characteristics is required. The Monte Carlo simulation program GESPECOR is applied. The detector characterization required for the Monte Carlo simulation is achieved using the efficiency values obtained from measuring a point source. The point source was measured in two significant geometries: placed in a vertical plane containing the vertical symmetry axis of the detector, and in a horizontal plane containing the centre of the active volume of the detector. The measurements were made using the gamma spectrometry technique. (authors)

  12. Efficiency calibration of low background gamma spectrometer

    International Nuclear Information System (INIS)

    A method of efficiency calibration is described. The authors used standard ores of U, Ra and Th (powder form), KCl and a Cs-137 source to prepare calibration volume sources, which were placed directly on the detector end cap. In such a measuring geometry, it is not necessary to make a coincidence-summing correction. The efficiency calibration curve obtained by this method was compared with results measured with Am-241, Cd-109 and Eu-152 calibration sources. They agree to within an error of about 5%.

  13. Monte Carlo simulation: tool for the calibration in analytical determination of radionuclides; Simulacion Monte Carlo: herramienta para la calibracion en determinaciones analiticas de radionucleidos

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez, Jorge A. Carrazana; Ferrera, Eduardo A. Capote; Gomez, Isis M. Fernandez; Castro, Gloria V. Rodriguez; Ricardo, Niury Martinez, E-mail: cphr@cphr.edu.cu [Centro de Proteccion e Higiene de las Radiaciones (CPHR), La Habana (Cuba)

    2013-07-01

    This work shows how the traceability of analytical determinations of radionuclides is established using this calibration method. It highlights the advantages offered by Monte Carlo simulation for applying corrections for differences in chemical composition, density and fill height of the analyzed samples. Likewise, the results obtained by the LVRA in two exercises organized by the International Atomic Energy Agency (IAEA) are presented. In these exercises (an intercomparison and a proficiency test), all reported analytical results were based on efficiency calibrations by Monte Carlo simulation using the DETEFF program.

  14. Results of Monte Carlo calibrations of a low energy germanium detector

    International Nuclear Information System (INIS)

    Normally, measurements of the peak efficiency of a gamma-ray detector are performed with calibrated samples which are prepared to match the measured ones in all important characteristics, such as volume, chemical composition and density. Another way to determine the peak efficiency is to calculate it with special Monte Carlo programs. In principle, the program 'Pencyl' from the PENELOPE 2003 code can be used for the peak efficiency calibration of a cylindrically symmetric detector; however, exact data for the geometries and the materials are needed. The interpretation of the simulation results is not straightforward, but we found a way to convert the data into values which can be compared with our measurement results. It is possible to find other simulation parameters which give the same or better results. Further improvements can be expected from longer simulation times and more simulations in the questionable ranges of densities and filling heights. (N.C.)

  15. Top Quark Mass Calibration for Monte Carlo Event Generators

    CERN Document Server

    Butenschoen, Mathias; Hoang, Andre H; Mateu, Vicent; Preisser, Moritz; Stewart, Iain W

    2016-01-01

    The most precise top quark mass measurements use kinematic reconstruction methods, determining the top mass parameter of a Monte Carlo event generator, $m_t^{\rm MC}$. Due to hadronization and parton shower dynamics, relating $m_t^{\rm MC}$ to a field theory mass is difficult. We present a calibration procedure to determine this relation using hadron level QCD predictions for observables with kinematic mass sensitivity. Fitting $e^+e^-$ 2-Jettiness calculations at NLL/NNLL order to Pythia 8.205, $m_t^{\rm MC}$ differs from the pole mass by $900$/$600$ MeV, and agrees with the MSR mass within uncertainties, $m_t^{\rm MC}\simeq m_{t,1\,{\rm GeV}}^{\rm MSR}$.

  16. Monte Carlo calculations of calibration coefficients of ATL monitors installed at NPP Temelin

    International Nuclear Information System (INIS)

    The contribution describes a technique for determining the calibration coefficients of a radioactivity monitor using Monte Carlo calculations. The monitor is installed at NPP Temelin, adjacent to lines carrying a radioactive medium. The output quantity is the activity concentration (in Bq/m3), which is converted from the number of counts per minute measured by the monitor. The value of this conversion constant, i.e. the calibration coefficient, was calculated for gamma photons emitted by Co-60 and compared with the data stated by the manufacturer and supplier of these monitors, General Atomic Electronic Systems, Inc., USA. Results of the comparison show very good agreement between the calculations and the manufacturer data; the differences are lower than the quadratic sum of the uncertainties. (authors)
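
    A one-step sketch of how such a calibration coefficient is used in operation: the monitor's net count rate divided by the Monte-Carlo-derived coefficient gives the activity concentration in the line. Both numbers below are illustrative, not the monitor's actual characteristics.

```python
# Applying a Monte-Carlo-derived calibration coefficient; values illustrative.
count_rate_cpm = 5400.0    # net counts per minute at the monitor
calib_coeff = 1.8e-3       # cpm per (Bq/m^3) for Co-60, from the MC model

activity_concentration = count_rate_cpm / calib_coeff
print(f"activity concentration: {activity_concentration:.3e} Bq/m^3")
```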

  17. Monte Carlo based calibration of scintillation detectors for laboratory and in situ gamma ray measurements

    NARCIS (Netherlands)

    van der Graaf, E. R.; Limburg, J.; Koomans, R. L.; Tijs, M.

    2011-01-01

    The calibration of scintillation detectors for gamma radiation in a well characterized setup can be transferred to other geometries using Monte Carlo simulations to account for the differences between the calibration and the other geometry. In this study a calibration facility was used that is const...

  18. High-precision efficiency calibration of a high-purity co-axial germanium detector

    International Nuclear Information System (INIS)

    A high-purity co-axial germanium detector has been calibrated in efficiency to a precision of about 0.15% over a wide energy range. High-precision scans of the detector crystal and γ-ray source measurements have been compared to Monte-Carlo simulations to adjust the dimensions of a detector model. For this purpose, standard calibration sources and short-lived online sources have been used. The resulting efficiency calibration reaches the precision needed, e.g., for branching-ratio measurements of super-allowed β decays for tests of the weak-interaction standard model.

  19. Calibration of the identiFINDER detector for the iodine measurement in thyroid using the Monte Carlo method

    International Nuclear Information System (INIS)

    This work is based on the determination of the detection efficiency for 125I and 131I in the thyroid with the identiFINDER detector, using the Monte Carlo method. The suitability of the calibration method is analyzed by comparing the results of the direct Monte Carlo method with the corrected one; the latter was chosen because its differences from the real efficiency stayed below 10%. To simulate the detector, its geometric parameters were optimized using a tomographic study, which allowed the uncertainties of the estimates to be minimized. Finally, simulations of the detector/point-source geometry were performed to find the correction factors at 5 cm, 15 cm and 25 cm, together with those corresponding to the detector-phantom arrangement, for the validation of the method and the final calculation of the efficiency. These show that, when implementing the Monte Carlo method, simulating at a greater distance than that used in the laboratory measurements leads to an overestimation of the efficiency, while simulating at a shorter distance underestimates it; the simulation should therefore be performed at the same distance at which the actual measurements will be made. The efficiency curves and the minimum detectable activity for the measurement of 131I and 125I were also obtained. Overall, the Monte Carlo methodology was implemented for the calibration of the identiFINDER with the purpose of estimating the measured activity of iodine in the thyroid. This method represents an ideal way to compensate for the lack of standard solutions and phantoms, ensuring that the capabilities of the Internal Contamination Laboratory of the Centro de Proteccion e Higiene de las Radiaciones remain calibrated for the measurement of iodine in the thyroid. (author)

  20. Calibration and Monte Carlo modelling of neutron long counters

    CERN Document Server

    Tagziria, H

    2000-01-01

    The Monte Carlo technique has become a very powerful tool in radiation transport as full advantage is taken of enhanced cross-section data, more powerful computers and statistical techniques, together with better characterisation of neutron and photon source spectra. At the National Physical Laboratory, calculations using the Monte Carlo radiation transport code MCNP-4B have been combined with accurate measurements to characterise two long counters routinely used to standardise monoenergetic neutron fields. New and more accurate response function curves have been produced for both long counters. A novel approach using Monte Carlo methods has been developed, validated and used to model the response function of the counters and determine more accurately their effective centres, which have always been difficult to establish experimentally. Calculations and measurements agree well, especially for the De Pangher long counter for which details of the design and constructional material are well known. The sensitivit...

  1. Force calibration using errors-in-variables regression and Monte Carlo uncertainty evaluation

    Science.gov (United States)

    Bartel, Thomas; Stoudt, Sara; Possolo, Antonio

    2016-06-01

    An errors-in-variables regression method is presented as an alternative to the ordinary least-squares regression computation currently employed for determining the calibration function for force measuring instruments from data acquired during calibration. A Monte Carlo uncertainty evaluation for the errors-in-variables regression is also presented. The corresponding function (which we call measurement function, often called analysis function in gas metrology) necessary for the subsequent use of the calibrated device to measure force, and the associated uncertainty evaluation, are also derived from the calibration results. Comparisons are made, using real force calibration data, between the results from the errors-in-variables and ordinary least-squares analyses, as well as between the Monte Carlo uncertainty assessment and the conventional uncertainty propagation employed at the National Institute of Standards and Technology (NIST). The results show that the errors-in-variables analysis properly accounts for the uncertainty in the applied calibrated forces, and that the Monte Carlo method, owing to its intrinsic ability to model uncertainty contributions accurately, yields a better representation of the calibration uncertainty throughout the transducer’s force range than the methods currently in use. These improvements notwithstanding, the differences between the results produced by the current and by the proposed new methods generally are small because the relative uncertainties of the inputs are small and most contemporary load cells respond approximately linearly to such inputs. For this reason, there will be no compelling need to revise any of the force calibration reports previously issued by NIST.
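
    A compact sketch of the two ingredients named in the abstract, using scipy's orthogonal-distance-regression module as a readily available errors-in-variables fitter (NIST's actual implementation is not shown in the abstract): a straight-line calibration fit with uncertainties on both axes, followed by a small Monte Carlo propagation. Data and uncertainties are illustrative.

```python
import numpy as np
from scipy import odr

force = np.array([10.0, 20.0, 30.0, 40.0, 50.0])              # kN, applied
reading = np.array([0.501, 1.004, 1.503, 2.009, 2.507])       # mV/V
u_force, u_reading = 0.02, 0.002                              # std. uncertainties

# Errors-in-variables fit: uncertainty in both the force and the reading.
model = odr.Model(lambda beta, x: beta[0] + beta[1] * x)
data = odr.RealData(force, reading, sx=u_force, sy=u_reading)
fit = odr.ODR(data, model, beta0=[0.0, 0.05]).run()

# Monte Carlo uncertainty evaluation: refit perturbed data repeatedly.
rng = np.random.default_rng(1)
slopes = []
for _ in range(2000):
    f = force + rng.normal(0.0, u_force, force.size)
    r = reading + rng.normal(0.0, u_reading, reading.size)
    slopes.append(odr.ODR(odr.RealData(f, r, sx=u_force, sy=u_reading),
                          model, beta0=fit.beta).run().beta[1])

print(f"slope = {fit.beta[1]:.5f} +/- {np.std(slopes):.5f} (MC)")
```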

  2. Calibration and validation of a Monte Carlo model for PGNAA of chlorine in soil

    International Nuclear Information System (INIS)

    A prompt gamma-ray neutron activation analysis (PGNAA) system was used to calibrate and validate a Monte Carlo model as a proof of principle for the quantification of chlorine in soil. First, the response of an n-type HPGe detector to point sources of 60Co and 152Eu was determined experimentally and used to calibrate an MCNP4a model of the detector. The refined MCNP4a detector model can predict the absolute peak detection efficiency within 12% in the energy range of 120-1400 keV. Second, a PGNAA system consisting of a light-water moderated 252Cf (1.06 μg) neutron source, and the shielded and collimated HPGe detector was used to collect prompt gamma-ray spectra from Savannah River Site (SRS) soil spiked with chlorine. The spectra were used to calculate the minimum detectable concentration (MDC) of chlorine and the prompt gamma-ray detection probability. Using the 252Cf based PGNAA system, the MDC for Cl in the SRS soil is 4400 μg/g for an 1800-second irradiation based on the analysis of the 6110 keV prompt gamma-ray. MCNP4a was used to predict the PGNAA detection probability, which was accomplished by modeling the neutron and gamma-ray transport components separately. In the energy range of 788 to 6110 keV, the MCNP4a predictions of the prompt gamma-ray detection probability were generally within 60% of the experimental value, thus validating the Monte Carlo model. (author)
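
    The abstract does not give its MDC formula; a commonly used choice, shown here as an assumption, is Currie's detection limit L_D = 2.71 + 4.65*sqrt(B), converted to a concentration through the detection probability, counting time and the prompt-gamma production rate. All values are illustrative placeholders, not the paper's data.

```python
import math

B = 1500.0        # background counts under the 6110 keV region of interest
eps = 2.0e-4      # absolute detection probability for the 6110 keV gamma
t = 1800.0        # counting time, s
gammas_per_s_per_ug_g = 1.0e-3   # prompt gammas emitted per s per (ug/g) of Cl

L_D = 2.71 + 4.65 * math.sqrt(B)                 # Currie detection limit, counts
mdc = L_D / (eps * t * gammas_per_s_per_ug_g)    # minimum detectable conc., ug/g
print(f"MDC ~ {mdc:.0f} ug/g")
```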

  3. Monte Carlo simulation for the calibration of neutron source strength measurement of JT-60 upgrade

    International Nuclear Information System (INIS)

    The calibration of the relation between the neutron source strength of the whole plasma and the output of the neutron monitors is important for evaluating the fusion gain in tokamaks with DD or DT operation. JT-60 will be modified into a deuterium-plasma tokamak with Ip ≤ 7 MA and V ≤ 110 m3. The source strength of JT-60 Upgrade will be measured with 235U and 238U fission chambers. Detection efficiencies for source neutrons are calculated with the Monte Carlo code MCNP, using a 3-dimensional model of JT-60 Upgrade and a poloidally distributed neutron source. More than 90% of the fission chambers' counts are contributed by source neutrons within about ±85° poloidally for both the 235U and 238U detectors. Detection efficiencies are sensitive to the major radius of the detector position, but not so sensitive to vertical and toroidal shifts of the detector positions. The total uncertainties, combining detector position errors, are ±13% and ±9% for the 235U and 238U detectors, respectively. The modelling errors of the detection efficiencies are so large for the 238U detector that more precise modelling, including the port boxes, is needed. (author)

  4. Calibration of the Top-Quark Monte Carlo Mass

    Science.gov (United States)

    Kieseler, Jan; Lipka, Katerina; Moch, Sven-Olaf

    2016-04-01

    We present a method to establish, experimentally, the relation between the top-quark mass m_t^MC as implemented in Monte Carlo generators and the Lagrangian mass parameter m_t in a theoretically well-defined renormalization scheme. We propose a simultaneous fit of m_t^MC and an observable sensitive to m_t, which does not rely on any prior assumptions about the relation between m_t and m_t^MC. The measured observable is independent of m_t^MC and can be used subsequently for a determination of m_t. The analysis strategy is illustrated with examples for the extraction of m_t from inclusive and differential cross sections for hadroproduction of top quarks.

  5. The peak efficiency calibration of volume source using 152Eu point source in computer

    International Nuclear Information System (INIS)

    The author describes a method for the peak efficiency calibration of volume sources by means of a 152Eu point source for an HPGe γ spectrometer. The peak efficiency can be computed by Monte Carlo simulation after inputting the detector parameters. The computed results are in agreement with the experimental results within an error of about ±3.8%, with one exception of about ±7.4%.

  6. Confidence and efficiency scaling in Variational Quantum Monte Carlo calculations

    CERN Document Server

    Delyon, François; Holzmann, Markus

    2016-01-01

    Based on the central limit theorem, we discuss the problem of evaluation of the statistical error of Monte Carlo calculations using a time discretized diffusion process. We present a robust and practical method to determine the effective variance of general observables and show how to verify the equilibrium hypothesis by the Kolmogorov-Smirnov test. We then derive scaling laws of the efficiency illustrated by Variational Monte Carlo calculations on the two dimensional electron gas.

  7. Calibration of the top-quark Monte-Carlo mass

    Energy Technology Data Exchange (ETDEWEB)

    Kieseler, Jan; Lipka, Katerina [DESY Hamburg (Germany); Moch, Sven-Olaf [Hamburg Univ. (Germany). II. Inst. fuer Theoretische Physik

    2015-11-15

    We present a method to establish experimentally the relation between the top-quark mass m_t^MC as implemented in Monte-Carlo generators and the Lagrangian mass parameter m_t in a theoretically well-defined renormalization scheme. We propose a simultaneous fit of m_t^MC and an observable sensitive to m_t, which does not rely on any prior assumptions about the relation between m_t and m_t^MC. The measured observable is independent of m_t^MC and can be used subsequently for a determination of m_t. The analysis strategy is illustrated with examples for the extraction of m_t from inclusive and differential cross sections for hadro-production of top-quarks.

  8. Calibration of the Top-Quark Monte-Carlo Mass

    CERN Document Server

    Kieseler, Jan; Moch, Sven-Olaf

    2015-01-01

    We present a method to establish experimentally the relation between the top-quark mass $m_t^{MC}$ as implemented in Monte-Carlo generators and the Lagrangian mass parameter $m_t$ in a theoretically well-defined renormalization scheme. We propose a simultaneous fit of $m_t^{MC}$ and an observable sensitive to $m_t$, which does not rely on any prior assumptions about the relation between $m_t$ and $m_t^{MC}$. The measured observable is independent of $m_t^{MC}$ and can be used subsequently for a determination of $m_t$. The analysis strategy is illustrated with examples for the extraction of $m_t$ from inclusive and differential cross sections for hadro-production of top-quarks.

  9. Calibration of the top-quark Monte-Carlo mass

    International Nuclear Information System (INIS)

    We present a method to establish experimentally the relation between the top-quark mass m_t^MC as implemented in Monte-Carlo generators and the Lagrangian mass parameter m_t in a theoretically well-defined renormalization scheme. We propose a simultaneous fit of m_t^MC and an observable sensitive to m_t, which does not rely on any prior assumptions about the relation between m_t and m_t^MC. The measured observable is independent of m_t^MC and can be used subsequently for a determination of m_t. The analysis strategy is illustrated with examples for the extraction of m_t from inclusive and differential cross sections for hadro-production of top-quarks.

  10. Strategies for improving the efficiency of quantum Monte Carlo calculations

    CERN Document Server

    Lee, R M; Nemec, N; Rios, P Lopez; Drummond, N D

    2010-01-01

    We describe a number of strategies for optimizing the efficiency of quantum Monte Carlo (QMC) calculations. We investigate the dependence of the efficiency of the variational Monte Carlo method on the sampling algorithm. Within a unified framework, we compare several commonly used variants of diffusion Monte Carlo (DMC). We then investigate the behavior of DMC calculations on parallel computers and the details of parallel implementations, before proposing a technique to optimize the efficiency of the extrapolation of DMC results to zero time step, finding that a relative time step ratio of 1:4 is optimal. Finally, we discuss the removal of serial correlation from data sets by reblocking, setting out criteria for the choice of block length and quantifying the effects of the uncertainty in the estimated correlation length and the presence of divergences in the local energy on estimated error bars on QMC energies.
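
    A short sketch of the reblocking technique mentioned in the last sentence: serially correlated samples are pair-averaged into ever longer blocks, and the naive error bar is recomputed until it plateaus at the decorrelated value. The AR(1) chain below merely stands in for correlated QMC data; it is not the paper's test case.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake serially correlated data: an AR(1) chain standing in for local energies.
n, rho = 2 ** 16, 0.9
noise = rng.normal(size=n)
data = np.empty(n)
data[0] = noise[0]
for i in range(1, n):
    data[i] = rho * data[i - 1] + noise[i]

# Reblocking: merge pairs into longer blocks; watch the error estimate grow
# until it plateaus, which signals that the blocks are decorrelated.
x = data.copy()
while x.size >= 32:
    err = x.std(ddof=1) / np.sqrt(x.size)   # naive error at this block size
    print(f"blocks: {x.size:6d}   error estimate: {err:.5f}")
    x = 0.5 * (x[0::2] + x[1::2])
```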

  11. Optimum and efficient sampling for variational quantum Monte Carlo

    CERN Document Server

    Trail, John Robert; 10.1063/1.3488651

    2010-01-01

    Quantum mechanics for many-body systems may be reduced to the evaluation of integrals in 3N dimensions using Monte-Carlo, providing the Quantum Monte Carlo ab initio methods. Here we limit ourselves to expectation values for trial wavefunctions, that is to Variational quantum Monte Carlo. Almost all previous implementations employ samples distributed as the physical probability density of the trial wavefunction, and assume the Central Limit Theorem to be valid. In this paper we provide an analysis of random error in estimation and optimisation that leads naturally to new sampling strategies with improved computational and statistical properties. A rigorous lower limit to the random error is derived, and an efficient sampling strategy presented that significantly increases computational efficiency. In addition the infinite variance heavy tailed random errors of optimum parameters in conventional methods are replaced with a Normal random error, strengthening the theoretical basis of optimisation. The method is ...

  12. HPGe Detector Efficiency Calibration Using HEU Standards

    Energy Technology Data Exchange (ETDEWEB)

    Salaymeh, S.R.

    2000-10-12

    The Analytical Development Section of SRTC was requested by the Facilities Disposition Division (FDD) to determine the holdup of enriched uranium in the 321-M facility as part of an overall deactivation project of the facility. The 321-M facility was used to fabricate enriched uranium fuel assemblies, lithium-aluminum target tubes, neptunium assemblies, and miscellaneous components for the production reactors. The facility also includes the 324-M storage building and the passageway connecting it to 321-M. The results of the holdup assays are essential for determining compliance with Solid Waste's Waste Acceptance Criteria and Material Control and Accountability, and to meet criticality safety controls. Two measurement systems will be used to determine highly enriched uranium (HEU) holdup. One is a portable HPGe detector and an EG&G Dart system that contains the high-voltage power supply and signal-processing electronics; a personal computer with Gamma-Vision software was used to provide an MCA card and space to store and manipulate multiple 4096-channel γ-ray spectra. The other is a 2 in. x 2 in. NaI crystal with an MCA that uses a portable computer with a Canberra NaI plus card installed. This card converts the PC to a full-function MCA and contains the ancillary electronics, high-voltage power supply and amplifier required for data acquisition. This report describes and documents the HPGe point, line, area, and constant geometry-constant transmission detector efficiency calibrations acquired and calculated for use in conducting holdup measurements as part of the overall deactivation project of building 321-M.

  13. Application of the Monte Carlo method to the analysis of measurement geometries for the calibration of a HP Ge detector in an environmental radioactivity laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Rodenas, Jose [Departamento de Ingenieria Quimica y Nuclear, Universidad Politecnica de Valencia, Apartado 22012, E-46071 Valencia (Spain)], E-mail: jrodenas@iqn.upv.es; Gallardo, Sergio; Ballester, Silvia; Primault, Virginie [Departamento de Ingenieria Quimica y Nuclear, Universidad Politecnica de Valencia, Apartado 22012, E-46071 Valencia (Spain); Ortiz, Josefina [Laboratorio de Radiactividad Ambiental, Universidad Politecnica de Valencia, Apartado 22012, E-46071 Valencia (Spain)

    2007-10-15

    A gamma spectrometer including an HP Ge detector is commonly used for environmental radioactivity measurements. The efficiency of the detector should be calibrated for each geometry considered. Simulation of the calibration procedure with a validated computer program is an important auxiliary tool for environmental radioactivity laboratories. The MCNP code based on the Monte Carlo method has been applied to simulate the detection process in order to obtain spectrum peaks and determine the efficiency curve for each modelled geometry. The source used for measurements was a calibration mixed radionuclide gamma reference solution, covering a wide energy range (50-2000 keV). Two measurement geometries - Marinelli beaker and Petri boxes - as well as different materials - water, charcoal, sand - containing the source have been considered. Results obtained from the Monte Carlo model have been compared with experimental measurements in the laboratory in order to validate the model.
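
    A small sketch of turning simulated (or measured) peak efficiencies into an efficiency curve. The polynomial-in-ln(E) form used below is a common choice for HPGe detectors and is an assumption here, since the record does not state the fitting function; energies and efficiencies are illustrative.

```python
import numpy as np

E = np.array([59.5, 122.0, 344.0, 662.0, 1173.0, 1332.0, 1836.0])     # keV
eps = np.array([0.045, 0.060, 0.032, 0.019, 0.012, 0.011, 0.008])     # FEP eff.

# Fit ln(eps) as a cubic polynomial in ln(E), one curve per geometry/matrix.
coeffs = np.polyfit(np.log(E), np.log(eps), 3)

def efficiency(energy_kev: float) -> float:
    """Interpolated full-energy-peak efficiency from the fitted curve."""
    return float(np.exp(np.polyval(coeffs, np.log(energy_kev))))

print(f"eps(800 keV) = {efficiency(800.0):.4f}")
```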

  14. Influence of the GE inactive layer thickness on detector calibration simulation for environmental radioactive samples using the Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Rodenas, J.; Pascual, A.; Zarza, I.; Serradell, V.; Ortiz, J.; Ballesteros, L.

    2001-07-01

    One of the most powerful tools used for environmental radioactivity measurements is a gamma spectrometer, which usually includes an HP Ge detector. The detector should be calibrated in efficiency for each considered geometry. Simulation of the calibration procedure with a validated computer program becomes an important auxiliary tool for an environmental radioactivity laboratory, since it permits one to optimise calibration procedures and reduce the amount of radioactive waste produced. The Monte Carlo method is applied to simulate the detection process and obtain spectrum peaks for each modelled geometry. An accurate detector model should be developed in order to obtain good accuracy in the output of the calibration simulation. An important parameter in the detector model is the thickness of any absorber layer surrounding the Ge crystal, particularly the inactive Ge layer. In this paper, a sensitivity analysis on the inactive Ge layer thickness is performed using the MCNP 4B code. Results are compared with experimentally measured efficiencies. A sensitivity analysis is also performed on the aluminium cap thickness. (Author) 5 refs.

  15. Monte Carlo simulation studies of the timing calibration accuracy required by the NEMO underwater neutrino telescope

    International Nuclear Information System (INIS)

    The results of Monte Carlo simulation studies of the timing calibration accuracy required by the NEMO underwater neutrino telescope are presented. The NEMO Collaboration is conducting a long-term R&D activity toward the installation of a km3 apparatus in the Mediterranean Sea. An optimal site has been found and characterized at 3500 m depth off the Sicilian coast. Monte Carlo simulation shows that the angular resolution of the telescope remains approximately unchanged if the offset errors of the timing calibration are less than 1 ns. This value is tolerable because the apparatus performance is not significantly changed when such inaccuracies are added to the other sources of error (e.g., the position accuracy of the optical modules). We also discuss the effect of the optical background rate on the angular resolution of the apparatus, and we compare the present version of the NEMO telescope with a different configuration.

  16. Whole body counter calibration using Monte Carlo modeling with an array of phantom sizes based on national anthropometric reference data

    International Nuclear Information System (INIS)

    During construction of the whole body counter (WBC) at the Children's Nutrition Research Center (CNRC), efficiency calibration was needed to translate acquired counts of 40K to actual grams of potassium for measurement of total body potassium (TBK) in a diverse subject population. The MCNP Monte Carlo n-particle simulation program was used to describe the WBC (54 detectors plus shielding), test individual detector counting response, and create a series of virtual anthropomorphic phantoms based on national reference anthropometric data. Each phantom included an outer layer of adipose tissue and an inner core of lean tissue. Phantoms were designed for both genders representing ages 3.5 to 18.5 years with body sizes from the 5th to the 95th percentile based on body weight. In addition, a spherical surface source surrounding the WBC was modeled in order to measure the effects of subject mass on room background interference. Individual detector measurements showed good agreement with the MCNP model. The background source model came close to agreement with empirical measurements, but showed a trend deviating from unity with increasing subject size. Results from the MCNP simulation of the CNRC WBC agreed well with empirical measurements using BOMAB phantoms. Individual detector efficiency corrections were used to improve the accuracy of the model. Nonlinear multiple regression efficiency calibration equations were derived for each gender. Room background correction is critical in improving the accuracy of the WBC calibration.
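
    A hedged sketch of a nonlinear multiple-regression efficiency calibration of the kind described above, fitted to phantom-series results for one gender; the functional form and all numbers are assumptions for illustration only, not the CNRC equations.

```python
import numpy as np
from scipy.optimize import curve_fit

# Phantom-series results for one gender; values are illustrative assumptions.
weight = np.array([15.0, 30.0, 45.0, 60.0, 80.0])        # kg
height = np.array([1.00, 1.30, 1.50, 1.70, 1.80])        # m
eff = np.array([0.032, 0.027, 0.024, 0.021, 0.019])      # 40K counting eff.

def eff_model(X, a, b, c):
    # Assumed form: efficiency falls with body mass, with a height term.
    w, h = X
    return a * np.exp(-b * w) + c * h

popt, _ = curve_fit(eff_model, (weight, height), eff, p0=(0.03, 0.01, 0.0))
print(f"predicted efficiency at 70 kg, 1.75 m: "
      f"{eff_model((70.0, 1.75), *popt):.4f}")
```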

  17. Whole body counter calibration using Monte Carlo modeling with an array of phantom sizes based on national anthropometric reference data

    Science.gov (United States)

    Shypailo, R. J.; Ellis, K. J.

    2011-05-01

    During construction of the whole body counter (WBC) at the Children's Nutrition Research Center (CNRC), efficiency calibration was needed to translate acquired counts of 40K to actual grams of potassium for measurement of total body potassium (TBK) in a diverse subject population. The MCNP Monte Carlo n-particle simulation program was used to describe the WBC (54 detectors plus shielding), test individual detector counting response, and create a series of virtual anthropomorphic phantoms based on national reference anthropometric data. Each phantom included an outer layer of adipose tissue and an inner core of lean tissue. Phantoms were designed for both genders representing ages 3.5 to 18.5 years with body sizes from the 5th to the 95th percentile based on body weight. In addition, a spherical surface source surrounding the WBC was modeled in order to measure the effects of subject mass on room background interference. Individual detector measurements showed good agreement with the MCNP model. The background source model came close to agreement with empirical measurements, but showed a trend deviating from unity with increasing subject size. Results from the MCNP simulation of the CNRC WBC agreed well with empirical measurements using BOMAB phantoms. Individual detector efficiency corrections were used to improve the accuracy of the model. Nonlinear multiple regression efficiency calibration equations were derived for each gender. Room background correction is critical in improving the accuracy of the WBC calibration.

  18. Self-absorption correction in γ-ray efficiency calibration of fission gas nuclide

    International Nuclear Information System (INIS)

    In order to solve the problem of self-absorption correction in the γ-ray efficiency calibration for fission gas nuclides, the parameters of the source container, detector and source matrix, etc. were described, and a Monte Carlo model and an efficiency-computation program for the HPGe detector were established according to the experimental layout. The efficiency for fission gas nuclides was calculated in the different source matrices, and the corresponding self-absorption coefficients were obtained. The reliability of the model was validated by the experimental data. (authors)
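
    A one-line sketch of how such a self-absorption coefficient might be defined and applied: the ratio of Monte Carlo efficiencies computed with the actual gas matrix and with the reference matrix, used to transfer an experimentally calibrated efficiency to the sample matrix. Whether the paper defines it exactly this way is an assumption; numbers are illustrative.

```python
# Self-absorption coefficient as an efficiency ratio (assumed definition).
eps_mc_reference = 0.0210   # MC efficiency, reference (calibration) matrix
eps_mc_sample = 0.0198      # MC efficiency, actual fission-gas matrix
self_absorption = eps_mc_sample / eps_mc_reference

# Transfer a measured efficiency to the sample matrix; value illustrative.
eps_measured = 0.0205
eps_corrected = eps_measured * self_absorption
print(f"coefficient {self_absorption:.3f}, "
      f"corrected efficiency {eps_corrected:.4f}")
```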

  19. Monte Carlo Studies for the Calibration System of the GERDA Experiment

    CERN Document Server

    Baudis, Laura; Froborg, Francis; Tarka, Michal

    2013-01-01

    The GERmanium Detector Array, GERDA, searches for neutrinoless double beta decay in Ge-76 using bare high-purity germanium detectors submerged in liquid argon. For the calibration of these detectors, gamma-emitting sources have to be lowered from their parking position on top of the cryostat over more than five meters down to the germanium crystals. With the help of Monte Carlo simulations, the relevant parameters of the calibration system were determined. It was found that three Th-228 sources with an activity of 20 kBq each, at two different vertical positions, will be necessary to reach sufficient statistics in all detectors in less than four hours of calibration time. These sources will contribute to the background of the experiment with a total of (1.07 ± 0.04(stat) +0.13/-0.19(sys)) x 10^-4 cts/(keV kg yr) when shielded from below with 6 cm of tantalum in the parking position.

  20. Model calibration for building energy efficiency simulation

    International Nuclear Information System (INIS)

    Highlights: • Developing a 3D model relating to building architecture, occupancy and HVAC operation. • Two calibration stages developed, with the final model providing accurate results. • Using an onsite weather station for generating the weather data file in EnergyPlus. • Predicting thermal behaviour of underfloor heating, heat pump and natural ventilation. • Monthly energy saving opportunities related to the heat pump of 20–27% were identified. - Abstract: This research work deals with an Environmental Research Institute (ERI) building where an underfloor heating system and natural ventilation are the main systems used to maintain comfort conditions throughout 80% of the building areas. Firstly, this work involved developing a 3D model relating to the building architecture, occupancy and HVAC operation. Secondly, a calibration methodology consisting of two levels was applied in order to ensure accuracy and reduce the likelihood of errors. To further improve the accuracy of the calibration, a historical weather data file for the year 2011 was created from the on-site local weather station of the ERI building. After applying the second level of the calibration process, the values of Mean Bias Error (MBE) and Cumulative Variation of Root Mean Squared Error (CV(RMSE)) on an hourly basis for heat pump electricity consumption varied within the following ranges: (MBE)hourly from -5.6% to 7.5% and CV(RMSE)hourly from 7.3% to 25.1%. Finally, the building was simulated with EnergyPlus to identify further possibilities of energy savings supplied by a water-to-water heat pump to the underfloor heating system. It was found that electricity consumption savings from the heat pump can vary between 20% and 27% on a monthly basis.

  1. Efficient Monte Carlo methods for light transport in scattering media

    OpenAIRE

    Jarosz, Wojciech

    2008-01-01

    In this dissertation we focus on developing accurate and efficient Monte Carlo methods for synthesizing images containing general participating media. Participating media such as clouds, smoke, and fog are ubiquitous in the world and are responsible for many important visual phenomena which are of interest to computer graphics as well as related fields. When present, the medium participates in lighting interactions by scattering or absorbing photons as they travel through the scene. Though th...
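
    A textbook sketch of the core Monte Carlo step in participating media that the abstract alludes to: sampling a free-flight distance from the exponential transmittance and choosing between scattering and absorption by the single-scattering albedo. This is the standard technique, not code from the dissertation; coefficients are illustrative.

```python
import math
import random

sigma_s, sigma_a = 0.8, 0.2        # scattering/absorption coefficients, 1/m
sigma_t = sigma_s + sigma_a        # extinction coefficient
albedo = sigma_s / sigma_t         # single-scattering albedo

random.seed(42)
alive, travelled = True, 0.0
while alive:
    # Sample a free-flight distance from transmittance exp(-sigma_t * d).
    d = -math.log(1.0 - random.random()) / sigma_t
    travelled += d
    if random.random() < albedo:
        # Scatter: a renderer would sample a new direction from the
        # phase function here; in a homogeneous medium the distance
        # sampling itself is unchanged.
        pass
    else:
        alive = False              # photon absorbed by the medium

print(f"photon absorbed after travelling {travelled:.2f} m")
```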

  2. Efficiency calibration of solid track spark auto counter

    International Nuclear Information System (INIS)

    The factors influencing the detection efficiency of the solid track spark auto counter were analyzed, and the best etching conditions and charging parameters were also reconfirmed. With a small plate fission ionization chamber, the efficiency of the solid track spark auto counter in various experimental assemblies was re-calibrated. The efficiency of the solid track spark auto counter under various experimental conditions was obtained. (authors)

  3. Direct calibration in megavoltage photon beams using Monte Carlo conversion factor: validation and clinical implications

    International Nuclear Information System (INIS)

    The Australian Radiation Protection and Nuclear Safety Agency (ARPANSA) has established a method for ionisation chamber calibrations using megavoltage photon reference beams. The new method will reduce the calibration uncertainty compared to a 60Co calibration combined with the TRS-398 energy correction factor. The calibration method employs a graphite calorimeter and a Monte Carlo (MC) conversion factor to convert the absolute dose to graphite to absorbed dose to water. EGSnrc is used to model the linac head and the doses in the calorimeter and water phantom. The linac model is validated by comparing measured and modelled PDDs and profiles. The relative standard uncertainties in the calibration factors at the ARPANSA beam qualities were found to be 0.47% at 6 MV, 0.51% at 10 MV and 0.46% for the 18 MV beam. A comparison with the Bureau International des Poids et Mesures (BIPM) as part of the key comparison BIPM.RI(I)-K6 gave results of 0.9965(55), 0.9924(60) and 0.9932(59) for the 6, 10 and 18 MV beams, respectively, with all beams within 1σ of the participant average. The measured kQ values for an NE2571 Farmer chamber were found to be lower than those in TRS-398 but are consistent with published measured and modelled values. Users can expect a shift of 0.4–1.1% in the calibration factor of an NE2571 chamber across the range of calibration energies, compared to the current calibration method. (paper)

  4. Calibration factors for dose rate probes in environmental monitoring networks obtained from Monte-Carlo-simulations

    International Nuclear Information System (INIS)

    The interpretation of data obtained from fixed, ground-based dose rate monitoring stations of environmental networks, in terms of deposited radionuclide activity per unit area, requires not only knowledge of the nuclide spectrum and the deposition mechanism, but also knowledge of the situation in the vicinity of the probe if it differs significantly from ideal conditions, which are defined as an infinitely extended plane surface. Distance-dependent calibration factors for different gamma-ray energies and depth profiles are calculated with the new Monte Carlo code PENELOPE. How these distance-dependent calibration factors can take inhomogeneous surface types in the vicinity of the probe into account is also discussed. In addition, calibration factors for different detector heights and for gamma sources in the air are calculated. The main irregularities that occur in practice in the vicinity of such probes are discussed, and their impact on the representativeness of the site is assessed. For some typical irregularities, parameterized calibration factors are calculated and discussed. Sewage plants and sandboxes are discussed as further special examples. The application of these results to real sites allows for improved interpretation of data and a quantitative assessment of the representativeness of the site. A semi-quantitative scoring scheme helps to decide to what extent irregularities can be tolerated. Its application is straightforward and provides a coarse but objective description of the site-specific conditions of a dose rate probe. (orig.)

  5. Field parameters and dosimetric characteristics of a fast neutron calibration facility: experimental and Monte Carlo evaluations

    International Nuclear Information System (INIS)

    At the ENEA Institute for Radiation Protection (IRP), the fast neutron calibration facility consists of a remote-control device which allows locating different sources (Am-Be, Pu-Li, bare and D2O-moderated 252Cf) at the reference position, at the desired height from the floor, inside a 10x10x3 m3 irradiation room. Both the ISO reference sources and the Pu-Li source have been characterised in terms of uncollided H*(10) and neutron fluence according to the ISO calibration procedures. A spectral fluence mapping, carried out with the Monte Carlo code MCNP, allowed characterising the calibration point, in scattered-field conditions, according to the most recent international recommendations. Moreover, the irradiation of personal dosemeters on the ISO water-filled slab phantom was simulated to determine the field homogeneity of the calibration area and the variability of the neutron field (including the backscattered component) along the phantom surface. At the ENEA Institute for Radiation Protection, the calibration of neutron area monitors as well as personal dosemeters can now be performed according to the international standards, at the same time guaranteeing suitable conditions for research and qualification purposes in the field of neutron dosimetry.

  6. Efficient Monte Carlo methods for continuum radiative transfer

    CERN Document Server

    Juvela, M

    2005-01-01

    We discuss the efficiency of Monte Carlo methods in solving continuum radiative transfer problems. The sampling of the radiation field and convergence of dust temperature calculations in the case of optically thick clouds are both studied. For spherically symmetric clouds we find that the computational cost of Monte Carlo simulations can be reduced, in some cases by orders of magnitude, with simple importance weighting schemes. This is particularly true for models consisting of cells of different sizes for which the run times would otherwise be determined by the size of the smallest cell. We present a new idea of extending importance weighting to scattered photons. This is found to be useful in calculations of scattered flux and could be important for three-dimensional models when observed intensity is needed only for one general direction of observations. Convergence of dust temperature calculations is studied for models with optical depths 10-10000. We examine acceleration methods where radiative interactio...

  7. IMRT treatment Monitor Unit verification using absolute calibrated BEAMnrc and Geant4 Monte Carlo simulations

    International Nuclear Information System (INIS)

    Intensity Modulated Radiation Therapy (IMRT) treatments are among the most complex delivered by modern megavoltage radiotherapy accelerators. Verification of the dose, or of the prescribed Monitor Units (MU), predicted by the planning system is therefore a key element in ensuring that patients receive an accurate radiation dose during IMRT. One inherently accurate method is comparison with absolute calibrated Monte Carlo simulations of the IMRT delivery by the linac head and the corresponding delivery of the plan to a patient-based phantom. In this work this approach has been taken using BEAMnrc for the simulation of the treatment head, and both DOSXYZnrc and Geant4 for the phantom dose calculation. The two Monte Carlo codes agreed to within 1% of each other, and both matched the planning system very well for IMRT plans to the brain, nasopharynx, and head and neck.

  8. Calibration of the identiFINDER detector for the iodine measurement in thyroid using the Monte Carlo method; Calibracion del detector identiFINDER para la medicion de yodo en tiroides utilizando el metodo Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Ramos M, D.; Yera S, Y.; Lopez B, G. M.; Acosta R, N.; Vergara G, A., E-mail: dayana@cphr.edu.cu [Centro de Proteccion e Higiene de las Radiaciones, Calle 20 No. 4113 e/ 41 y 47, Playa, 10600 La Habana (Cuba)

    2014-08-15

    This work determined the detection efficiency of the identiFINDER detector for {sup 125}I and {sup 131}I in the thyroid using the Monte Carlo method. The suitability of the calibration method was analyzed by comparing the results of the direct Monte Carlo method with those of the corrected method; the latter was chosen because its differences from the real efficiency stayed below 10%. To simulate the detector, its geometric parameters were optimized using a tomographic study, which allowed the uncertainties of the estimates to be minimized. Detector/point-source geometries were then simulated to obtain the correction factors at 5 cm, 15 cm and 25 cm, together with the detector-phantom arrangement used for validation of the method and the final calculation of the efficiency. It was demonstrated that if the Monte Carlo simulation is performed at a greater distance than that used in the laboratory measurements, the efficiency is overestimated, while at a shorter distance it is underestimated; the simulation must therefore be carried out at the same distance at which the measurements will actually be made. Efficiency curves and the minimum detectable activity were also obtained for the measurement of {sup 131}I and {sup 125}I. Overall, the Monte Carlo methodology was implemented for the identiFINDER calibration with the purpose of estimating the measured activity of iodine in the thyroid. This method is an ideal way to compensate for the lack of standard solutions and phantoms, ensuring that the capacities of the Internal Contamination Laboratory of the Centro de Proteccion e Higiene de las Radiaciones remain calibrated for the measurement of iodine in the thyroid. (author)

  9. The determination of the efficiency of a Compton suppressed HPGe detector using Monte Carlo simulations.

    Science.gov (United States)

    McNamara, A L; Heijnis, H; Fierro, D; Reinhard, M I

    2012-04-01

    A Compton-suppressed high-purity germanium (HPGe) detector is well suited to the analysis of low levels of radioactivity in environmental samples. Differences in the geometry, density and composition of environmental calibration standards (e.g. soil) can contribute excessive experimental uncertainty to the measured efficiency curve. Furthermore, multiple detectors, like those used in a Compton-suppressed system, can add complexity to the calibration process. Monte Carlo simulations can be a powerful complement in calibrating these types of detector systems, provided enough physical information on the system is known. A full detector model using the Geant4 simulation toolkit is presented, and the system is modelled in both the suppressed and unsuppressed modes of operation. The full energy peak efficiencies of radionuclides from a standard source sample are calculated and compared to experimental measurements. The experimental results agree relatively well with the simulated values (within ∼5-20%). The simulations show that coincidence losses in the Compton suppression system can cause radionuclide-specific effects on the detector efficiency, especially in the Compton-suppressed mode. Additionally, since low-energy photons are more sensitive to small inaccuracies in the computational detector model than high-energy photons, large discrepancies may occur at energies below ∼100 keV. PMID:22304994

  10. Analysis of the effect of true coincidence summing on efficiency calibration for an HP GE detector

    Energy Technology Data Exchange (ETDEWEB)

    Rodenas, J.; Gallardo, S.; Ballester, S.; Primault, V. [Valencia Univ. Politecnica, Dept. de Ingenieria Quimica y Nuclear (Spain); Ortiz, J. [Valencia Univ. Politecnica, Lab. de Radiactividad Ambiental (Spain)

    2006-07-01

    The HPGe (high-purity germanium) detector is commonly used for gamma spectrometry in environmental radioactivity laboratories. The efficiency of the detector must be calibrated for each geometry considered. This calibration is performed using a standard solution containing gamma-emitting sources, the usual goal being to obtain an efficiency curve for determining the activity of samples with the same geometry. The importance of detector calibration is evident. However, the procedure presents some problems, as it depends on the source geometry (shape, volume, distance to detector, etc.) and must be repeated when these factors change. That means an increasing use of standard solutions and consequently an increasing generation of radioactive waste. Simulation of the calibration procedure with a validated computer program is clearly an important auxiliary tool for environmental radioactivity laboratories, useful both for optimising calibration procedures and for reducing the amount of radioactive waste produced. The MCNP code, based on the Monte Carlo method, has been used in this work for the simulation of detector calibration. A model has been developed for the detector as well as for the source contained in a Petri box. The source is a standard solution that contains the following radionuclides: {sup 241}Am, {sup 109}Cd, {sup 57}Co, {sup 139}Ce, {sup 203}Hg, {sup 113}Sn, {sup 85}Sr, {sup 137}Cs, {sup 88}Y and {sup 60}Co, covering a wide energy range (50 to 2000 keV). However, two radionuclides in the solution ({sup 60}Co and {sup 88}Y) emit gamma rays in true coincidence. The true coincidence summing effect produces a distortion of the calibration curve at higher energies. To decrease this effect some measurements have been performed at increasing distances between the source and the detector. As the true coincidence effect is observed in experimental measurements but not in the Monte Carlo
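
    The distance dependence noted above follows from the standard first-order summing-out correction; a minimal sketch (an illustration of the general technique, not the authors' code):

      # For a prompt two-gamma cascade, the gamma-1 full-energy peak is
      # depressed by the probability that gamma-2 deposits any energy in
      # the detector, i.e. by the *total* efficiency at E2.
      def summing_out_correction(total_eff_e2):
          """Factor by which the apparent gamma-1 peak count is corrected."""
          return 1.0 / (1.0 - total_eff_e2)

      # Total efficiency falls roughly with the inverse square of distance,
      # so moving the source away shrinks the correction toward 1.
      print(summing_out_correction(0.05))   # ~1.053 close to the detector
      print(summing_out_correction(0.005))  # ~1.005 farther away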

  11. New data concerning the efficiency calibration of a drum waste assay system

    International Nuclear Information System (INIS)

    The study is focused on the efficiency calibration of a gamma spectroscopy system for drum waste assay. The measurement of radioactive drum waste is usually difficult because of the drum's large volume, the varied distribution of the waste within it and its high self-attenuation. To solve these problems, a complex calibration of the system is required. For this purpose, a calibration drum provided with seven tubes, placed at different distances from its center, was used, the rest of the drum being filled with Portland cement. For the efficiency determination of a uniformly distributed source, a linear 152Eu source was used. The linear calibration source was introduced successively into the seven tubes, the gamma spectra being recorded while the drum was translated and simultaneously rotated. Using the GENIE-PC software, the gamma spectra were analyzed and the detection efficiencies for shell sources were obtained. Using these efficiencies, the total response of the detector and the detection efficiency appropriate to a uniform volume source were calculated. For the efficiency determination of a non-homogeneous source, additional measurements in the following geometries were made: first, with a 152Eu point source placed in front of the detector, measured in all seven tubes, the drum being only rotated; second, with the linear 152Eu source placed in front of the detector, measured in all seven tubes, only the drum being rotated. For each position the gamma spectra were recorded and the detection efficiency was calculated. The obtained efficiency values were verified using the GESPECOR software, which has been developed for computing the efficiency of Ge detectors for a wide class of measurement configurations using the Monte Carlo method. (authors)
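
    A minimal sketch of how shell-source efficiencies can be combined into a uniform volume-source efficiency (the shell boundaries and volume weighting below are illustrative assumptions, not the paper's exact procedure):

      def volume_efficiency(tube_radii_cm, shell_eff, drum_radius_cm):
          """Volume-weighted mean of shell efficiencies for a uniform
          cylindrical source. Each tube radius (sorted ascending)
          represents a shell whose weight is r_outer^2 - r_inner^2."""
          mids = [0.5 * (a + b)
                  for a, b in zip(tube_radii_cm, tube_radii_cm[1:])]
          bounds = [0.0] + mids + [drum_radius_cm]
          weights = [bounds[i + 1] ** 2 - bounds[i] ** 2
                     for i in range(len(tube_radii_cm))]
          total = sum(weights)
          return sum(w * e for w, e in zip(weights, shell_eff)) / total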

  12. O5S, Calibration of Organic Scintillation Detector by Monte-Carlo

    International Nuclear Information System (INIS)

    1 - Nature of physical problem solved: O5S is designed to directly simulate the experimental techniques used to obtain the pulse height distribution for a parallel beam of mono-energetic neutrons incident on organic scintillator systems. Developed to accurately calibrate the nominally 2 in. by 2 in. liquid organic scintillator NE-213 (composition CH-1.2), the programme should be readily adaptable to many similar problems. 2 - Method of solution: O5S is a Monte Carlo programme patterned after the general-purpose Monte Carlo neutron transport programme system, O5R. The O5S Monte Carlo experiment follows the course of each neutron through the scintillator and obtains the energy-deposits of the ions produced by elastic scatterings and reactions. The light pulse produced by the neutron is obtained by summing up the contributions of the various ions with the use of appropriate light vs. ion-energy tables. Because of the specialized geometry and simpler cross section needs O5S is able to by-pass many features included in O5R. For instance, neutrons may be followed individually, their histories analyzed as they occur, and upon completion of the experiment, the results analyzed to obtain the pulse-height distribution during one pass on the computer. O5S does allow the absorption of neutrons, but does not allow splitting or Russian roulette (biased weighting schemes). SMOOTHIE is designed to smooth O5S histogram data using Gaussian functions with parameters specified by the user
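
    A hedged sketch of the light-summation step O5S performs (the table values are illustrative placeholders, not NE-213 data):

      import bisect

      ENERGY_MEV = [0.1, 0.5, 1.0, 2.0, 5.0, 10.0]      # recoil proton energy
      LIGHT_MEVEE = [0.006, 0.06, 0.17, 0.5, 1.9, 4.7]  # light output (MeVee)

      def light_from_energy(e_mev, xs=ENERGY_MEV, ys=LIGHT_MEVEE):
          """Linear interpolation in a light-vs-ion-energy table."""
          if e_mev <= xs[0]:
              return ys[0] * e_mev / xs[0]
          if e_mev >= xs[-1]:
              return ys[-1]
          i = bisect.bisect_left(xs, e_mev)
          x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
          return y0 + (y1 - y0) * (e_mev - x0) / (x1 - x0)

      def pulse_height(ion_energy_deposits_mev):
          """Total light pulse: sum of the per-ion light contributions."""
          return sum(light_from_energy(e) for e in ion_energy_deposits_mev)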

  13. Calibration of AGILE-GRID with in-flight data and Monte Carlo simulations

    Science.gov (United States)

    Chen, A. W.; Argan, A.; Bulgarelli, A.; Cattaneo, P. W.; Contessi, T.; Giuliani, A.; Pittori, C.; Pucella, G.; Tavani, M.; Trois, A.; Verrecchia, F.; Barbiellini, G.; Caraveo, P.; Colafrancesco, S.; Costa, E.; De Paris, G.; Del Monte, E.; Di Cocco, G.; Donnarumma, I.; Evangelista, Y.; Ferrari, A.; Feroci, M.; Fioretti, V.; Fiorini, M.; Fuschino, F.; Galli, M.; Gianotti, F.; Giommi, P.; Giusti, M.; Labanti, C.; Lapshov, I.; Lazzarotto, F.; Lipari, P.; Longo, F.; Lucarelli, F.; Marisaldi, M.; Mereghetti, S.; Morelli, E.; Moretti, E.; Morselli, A.; Pacciani, L.; Pellizzoni, A.; Perotti, F.; Piano, G.; Picozza, P.; Pilia, M.; Prest, M.; Rapisarda, M.; Rappoldi, A.; Rubini, A.; Sabatini, S.; Santolamazza, P.; Soffitta, P.; Striani, E.; Trifoglio, M.; Valentini, G.; Vallazza, E.; Vercellone, S.; Vittorini, V.; Zanello, D.

    2013-10-01

    Context. AGILE is a γ-ray astrophysics mission which has been in orbit since 23 April 2007 and continues to operate reliably. The γ-ray detector, AGILE-GRID, has observed Galactic and extragalactic sources, many of which were collected in the first AGILE Catalog. Aims: We present the calibration of the AGILE-GRID using in-flight data and Monte Carlo simulations, producing instrument response functions (IRFs) for the effective area (Aeff), energy dispersion probability (EDP), and point spread function (PSF), each as a function of incident direction in instrument coordinates and energy. Methods: We performed Monte Carlo simulations at different γ-ray energies and incident angles, including background rejection filters and Kalman filter-based γ-ray reconstruction. Long integrations of in-flight observations of the Vela, Crab and Geminga sources in broad and narrow energy bands were used to validate and improve the accuracy of the instrument response functions. Results: The weighted average PSFs as a function of spectra correspond well to the data for all sources and energy bands. Conclusions: Changes in the interpolation of the PSF from Monte Carlo data and in the procedure for construction of the energy-weighted effective areas have improved the correspondence between predicted and observed fluxes and spectra of celestial calibration sources, reducing false positives and obviating the need for post-hoc energy-dependent scaling factors. The new IRFs have been publicly available from the AGILE Science Data Center since November 25, 2011, while the changes in the analysis software will be distributed in an upcoming release.

  14. An Efficient Approach to Ab Initio Monte Carlo Simulation

    CERN Document Server

    Leiding, Jeff

    2013-01-01

    We present a Nested Markov Chain Monte Carlo (NMC) scheme for building equilibrium averages based on accurate potentials such as density functional theory. Metropolis sampling of a reference system, defined by an inexpensive but approximate potential, is used to substantially decorrelate configurations at which the potential of interest is evaluated, thereby dramatically reducing the number needed to build ensemble averages at a given level of precision. The efficiency of this procedure is maximized on-the-fly through variation of the reference system thermodynamic state (characterized here by its inverse temperature β⁰), which is otherwise unconstrained. Local density approximation (LDA) results are presented for shocked states in argon at pressures from 4 to 60 GPa. Depending on the quality of the reference potential, the acceptance probability is enhanced by factors of 1.2-28 relative to unoptimized NMC sampling, and the procedure's efficiency is found to be competitive with that of standard ab initio...
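
    A sketch of the nested acceptance step (function and variable names are assumptions; the paper's implementation details may differ): the endpoint y of a cheap reference sub-chain run at inverse temperature β⁰ is proposed to the expensive chain at β and accepted with probability min{1, exp(−β[E_t(y)−E_t(x)] + β⁰[E_r(y)−E_r(x)])}.

      import math
      import random

      def nmc_accept(e_target_x, e_target_y, e_ref_x, e_ref_y, beta, beta0):
          """Accept/reject the reference-chain endpoint y against the
          current target-chain state x (nested Markov chain step)."""
          log_a = (-beta * (e_target_y - e_target_x)
                   + beta0 * (e_ref_y - e_ref_x))
          return random.random() < math.exp(min(0.0, log_a))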

  15. Improved photon counting efficiency calibration using superconducting single photon detectors

    Science.gov (United States)

    Gan, Haiyong; Xu, Nan; Li, Jianwei; Sun, Ruoduan; Feng, Guojin; Wang, Yanfei; Ma, Chong; Lin, Yandong; Zhang, Labao; Kang, Lin; Chen, Jian; Wu, Peiheng

    2015-10-01

    The quantum efficiency of photon counters can be measured with a standard uncertainty below the 1% level using correlated photon pairs generated through the spontaneous parametric down-conversion process. Normally a laser in the UV, blue or green wavelength range with sufficient photon energy is applied to produce energy- and momentum-conserved photon pairs in two channels with the desired wavelengths for calibration. One channel is used as the heralding trigger, and the other is used for the calibration of the detector under test. A superconducting nanowire single photon detector, with advantages such as high photon counting speed and broad spectral responsivity (UV to near infrared), is used as the trigger detector, enabling correlated-photon calibration capabilities into the shortwave visible range. For a 355 nm single-longitudinal-mode pump laser, when a superconducting nanowire single photon detector is used as the trigger detector at 1064 nm and 1560 nm in the near infrared range, photon counting efficiency calibration can be realized at 532 nm and 460 nm. The quantum efficiency measurement of photon counters such as photomultiplier tubes and avalanche photodiodes can then be further extended over a wide wavelength range (e.g. 400-1000 nm) using a flat spectral photon flux source to meet the calibration demands of cutting-edge low-light applications such as time-resolved fluorescence and nonlinear optical spectroscopy, super-resolution microscopy, deep space observation, and so on.
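
    The underlying estimate is the Klyshko method: the efficiency of the detector under test is the ratio of accidental-corrected coincidences to herald counts. A minimal sketch with assumed variable names:

      def klyshko_efficiency(n_coinc, n_herald, n_dut_singles,
                             coinc_window_s, live_time_s):
          """Heralded detection efficiency of the device under test.
          Accidentals are estimated from the singles counts and the
          coincidence window (uncorrelated-background approximation)."""
          accidentals = n_herald * n_dut_singles * coinc_window_s / live_time_s
          return (n_coinc - accidentals) / n_herald

    A full calibration would additionally correct for transmission losses in the test channel, which this sketch omits.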

  16. Monte Carlo studies and optimization for the calibration system of the GERDA experiment

    Science.gov (United States)

    Baudis, L.; Ferella, A. D.; Froborg, F.; Tarka, M.

    2013-11-01

    The GERmanium Detector Array, GERDA, searches for neutrinoless double β decay in 76Ge using bare high-purity germanium detectors submerged in liquid argon. For the calibration of these detectors γ-emitting sources have to be lowered from their parking position on the top of the cryostat over more than 5 m down to the germanium crystals. With the help of Monte Carlo simulations, the relevant parameters of the calibration system were determined. It was found that three 228Th sources with an activity of 20 kBq each at two different vertical positions will be necessary to reach sufficient statistics in all detectors in less than 4 h of calibration time. These sources will contribute to the background of the experiment with a total of (1.07±0.04(stat)−0.19+0.13(sys))×10⁻⁴ cts/(keV kg yr) when shielded from below with 6 cm of tantalum in the parking position.

  17. Monte Carlo studies and optimization for the calibration system of the GERDA experiment

    Energy Technology Data Exchange (ETDEWEB)

    Baudis, L. [Physics Institute, University of Zurich, Winterthurerstrasse 190, 8057 Zürich (Switzerland); Ferella, A.D. [Physics Institute, University of Zurich, Winterthurerstrasse 190, 8057 Zürich (Switzerland); INFN Laboratori Nazionali del Gran Sasso, 67010 Assergi (Italy); Froborg, F., E-mail: francis@froborg.de [Physics Institute, University of Zurich, Winterthurerstrasse 190, 8057 Zürich (Switzerland); Tarka, M. [Physics Institute, University of Zurich, Winterthurerstrasse 190, 8057 Zürich (Switzerland); Physics Department, University of Illinois, 1110 West Green Street, Urbana, IL 61801 (United States)

    2013-11-21

    The GERmanium Detector Array, GERDA, searches for neutrinoless double β decay in {sup 76}Ge using bare high-purity germanium detectors submerged in liquid argon. For the calibration of these detectors γ-emitting sources have to be lowered from their parking position on the top of the cryostat over more than 5 m down to the germanium crystals. With the help of Monte Carlo simulations, the relevant parameters of the calibration system were determined. It was found that three {sup 228}Th sources with an activity of 20 kBq each at two different vertical positions will be necessary to reach sufficient statistics in all detectors in less than 4 h of calibration time. These sources will contribute to the background of the experiment with a total of (1.07±0.04(stat){sub −0.19}{sup +0.13}(sys))×10{sup −4} cts/(keV kg yr) when shielded from below with 6 cm of tantalum in the parking position.

  18. Monte Carlo Based Calibration and Uncertainty Analysis of a Coupled Plant Growth and Hydrological Model

    Science.gov (United States)

    Houska, Tobias; Multsch, Sebastian; Kraft, Philipp; Frede, Hans-Georg; Breuer, Lutz

    2014-05-01

    Computer simulations are widely used to support decision making and planning in the agriculture sector. On the one hand, many plant growth models use simplified hydrological processes and structures, e.g. by the use of a small number of soil layers or by the application of simple water flow approaches. On the other hand, in many hydrological models plant growth processes are poorly represented. Hence, fully coupled models with a high degree of process representation would allow a more detailed analysis of the dynamic behaviour of the soil-plant interface. We used the Python programming language to couple two such process-oriented independent models and to calibrate both models simultaneously. The Catchment Modelling Framework (CMF) simulated soil hydrology based on the Richards equation and the Van Genuchten-Mualem retention curve. CMF was coupled with the Plant growth Modelling Framework (PMF), which predicts plant growth on the basis of radiation use efficiency, degree days, water shortage and dynamic root biomass allocation. The Monte Carlo based Generalised Likelihood Uncertainty Estimation (GLUE) method was applied to parameterize the coupled model and to investigate the associated uncertainty of the model predictions. Overall, 19 model parameters (4 for CMF and 15 for PMF) were analysed through 2 × 10⁶ model runs randomly drawn from a uniformly distributed parameter space. Three objective functions were used to evaluate the model performance: the coefficient of determination (R²), the bias and the Nash-Sutcliffe model efficiency (NSE). The model was applied to three sites with different management in Muencheberg (Germany) for the simulation of winter wheat (Triticum aestivum L.) in a cross-validation experiment. Field observations for model evaluation included soil water content and the dry matter of roots, storage organs, stems and leaves. The best parameter sets resulted in an NSE of 0.57 for the simulation of soil moisture across all three sites. The shape
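
    A minimal sketch of the Nash-Sutcliffe objective function and a GLUE-style behavioural filter (the threshold value is an assumption for illustration):

      import numpy as np

      def nse(observed, simulated):
          """Nash-Sutcliffe efficiency: 1 is perfect, <0 worse than mean."""
          obs = np.asarray(observed, float)
          sim = np.asarray(simulated, float)
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def behavioural_sets(param_sets, nse_values, threshold=0.5):
          """GLUE keeps parameter sets whose likelihood measure (here NSE)
          exceeds a user-chosen behavioural threshold."""
          return [p for p, e in zip(param_sets, nse_values) if e > threshold]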

  19. Analysis of the Tandem Calibration Method for Kerma Area Product Meters Via Monte Carlo Simulations

    International Nuclear Information System (INIS)

    The IAEA recommends that uncertainties of dosimetric measurements in diagnostic radiology for risk assessment and quality assurance should be less than 7% at the 95% confidence level. This accuracy is difficult to achieve with the kerma area product (KAP) meters currently used in clinics. The reasons range from the high energy dependence of KAP meters to the wide variety of configurations in which KAP meters are used and calibrated. The tandem calibration method, introduced by Poeyry, Komppa and Kosunen in 2005, has the potential to make the calibration procedure simpler and more accurate compared to the traditional beam-area method. In this method, two positions of the reference KAP meter are of interest: (a) a position close to the field KAP meter and (b) a position 20 cm above the couch. In the close position, the distance between the two KAP meters should be at least 30 cm to reduce the effect of backscatter. For the other position, which is recommended for the beam-area calibration method, a distance of 70 cm between the KAP meters was used in this study. The aim of this work was to complement existing experimental data comparing the two configurations with Monte Carlo (MC) simulations. In a geometry consisting of a simplified model of the VacuTec 70157 type KAP meter, the MCNP code was used to simulate the kerma area product, PKA, for the two (close and distant) reference planes. It was found that PKA values for a tube voltage of 40 kV were about 2.5% lower for the distant plane than for the close one. For higher tube voltages, the difference was smaller. The difference was mainly caused by attenuation of the X ray beam in air. Since the problem of high uncertainties in PKA measurements is also caused by the current design of X ray machines, possible solutions are discussed. (author)

  20. Calibration of a gamma spectrometer for natural radioactivity measurement. Experimental measurements and Monte Carlo modelling

    International Nuclear Information System (INIS)

    This thesis was carried out in the context of thermoluminescence dating. The method requires laboratory measurements of the natural radioactivity, for which a germanium spectrometer is used. To refine the calibration of this spectrometer, it was modelled with the Monte Carlo code Geant4. A geometrical model was developed which takes into account the presence of inactive zones and zones of poor charge collection within the germanium crystal. The parameters of the model were adjusted by comparison with experimental results obtained with a 137Cs source. It appeared that the shape of the inactive zones is less simple than presented in the specialized literature. The model was then extended to a more complex source, with cascade effects and angular correlations between photons: 60Co. Finally, applied to extended sources, it gave correct results and allowed us to validate the simulation of matrix effects. (author)

  1. Improving computational efficiency of Monte Carlo simulations with variance reduction

    International Nuclear Information System (INIS)

    CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)
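
    A schematic of the underlying weight-window game with a capped splitting factor (an illustration of the idea, not CCFE's MCNP modification):

      import random

      def apply_weight_window(weight, w_lower, w_upper, max_split=10):
          """Return the list of daughter weights for one particle.
          Splitting is capped at max_split to bound history length;
          expected total weight is conserved, so the result is unbiased."""
          if weight > w_upper:                       # split high weights
              n = min(int(weight / w_upper) + 1, max_split)
              return [weight / n] * n
          if weight < w_lower:                       # roulette low weights
              survival = weight / w_lower
              if random.random() < survival:
                  return [w_lower]
              return []
          return [weight]                            # inside the window

    Lowering max_split trades some variance-reduction performance for shorter worst-case histories, which is the trade-off described above.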

  2. Calibration of AGILE-GRID with In-Flight Data and Monte Carlo Simulations

    CERN Document Server

    Chen, Andrew W; Bulgarelli, A; Cattaneo, P W; Contessi, T; Giuliani, A; Pittori, C; Pucella, G; Tavani, M; Trois, A; Verrecchia, F; Barbiellini, G; Caraveo, P; Colafrancesco, S; Costa, E; De Paris, G; Del Monte, E; Di Cocco, G; Donnarumma, I; Evangelista, Y; Ferrari, A; Feroci, M; Fioretti, V; Fiorini, M; Fuschino, F; Galli, M; Gianotti, F; Giommi, P; Giusti, M; Labanti, C; Lapshov, I; Lazzarotto, F; Lipari, P; Longo, F; Lucarelli, F; Marisaldi, M; Mereghetti, S; Morelli, E; Moretti, E; Morselli, A; Pacciani, L; Pellizzoni, A; Perotti, F; Piano, G; Picozza, P; Pilia, M; Prest, M; Rapisarda, M; Rappoldi, A; Rubini, A; Sabatini, S; Santolamazza, P; Soffitta, P; Striani, E; Trifoglio, M; Valentini, G; Vallazza, E; Vercellone, S; Vittorini, V; Zanello, D

    2013-01-01

    Context: AGILE is a gamma-ray astrophysics mission which has been in orbit since 23 April 2007 and continues to operate reliably. The gamma-ray detector, AGILE-GRID, has observed Galactic and extragalactic sources, many of which were collected in the first AGILE Catalog. Aims: We present the calibration of the AGILE-GRID using in-flight data and Monte Carlo simulations, producing Instrument Response Functions (IRFs) for the effective area A_eff), Energy Dispersion Probability (EDP), and Point Spread Function (PSF), each as a function of incident direction in instrument coordinates and energy. Methods: We performed Monte Carlo simulations at different gamma-ray energies and incident angles, including background rejection filters and Kalman filter-based gamma-ray reconstruction. Long integrations of in-flight observations of the Vela, Crab and Geminga sources in broad and narrow energy bands were used to validate and improve the accuracy of the instrument response functions. Results: The weighted average PSFs a...

  3. Effect of lung volume on counting efficiency: a Monte Carlo investigation.

    Science.gov (United States)

    Kramer, Gary H; Capello, Kevin

    2005-04-01

    Lung counters are usually calibrated with an anthropometric phantom that has a fixed lung size; however, people have widely varying lung sizes (both volume and dimensions). This work uses a simple Monte Carlo simulation to investigate the effect on the counting efficiency of a lung counter based on a four detector array of 50 mm diameter, 70 mm diameter, or 85 mm diameter as lung size varies. The simulations were carried out at several photon energies (17, 60, 120, and 1,000 keV). Comparing the simulated efficiencies with a reference value close to the lung volume of Reference Man, biases in the range of -21% to 63% were discovered. The values from the Monte Carlo simulation have also been compared with some literature data based on experimental measurements, and the agreement was found to be comparable suggesting that lung volume is indeed a factor that should be considered when trying to make an accurate estimate of a lung burden. PMID:15761297

  4. Efficiencies of dynamic Monte Carlo algorithms for off-lattice particle systems with a single impurity

    KAUST Repository

    Novotny, M.A.

    2010-02-01

    The efficiency of dynamic Monte Carlo algorithms for off-lattice systems composed of particles is studied for the case of a single impurity particle. The theoretical efficiencies of the rejection-free method and of the Monte Carlo with Absorbing Markov Chains method are given. Simulation results are presented to confirm the theoretical efficiencies. © 2010.
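
    For context, a minimal rejection-free (n-fold way) step: the next event is chosen with probability proportional to its rate and the clock advances by an exponential deviate, so no moves are wasted on rejections. The rates are assumed inputs.

      import math
      import random

      def rejection_free_step(rates):
          """Pick event index i with probability rates[i]/sum(rates) and
          draw the corresponding time increment."""
          total = sum(rates)
          r, acc = random.uniform(0.0, total), 0.0
          chosen = len(rates) - 1
          for i, rate in enumerate(rates):
              acc += rate
              if r <= acc:
                  chosen = i
                  break
          dt = -math.log(1.0 - random.random()) / total
          return chosen, dt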

  5. A Generic Algorithm for IACT Optical Efficiency Calibration using Muons

    CERN Document Server

    Mitchell, A M W; Parsons, R D

    2015-01-01

    Muons produced in Extensive Air Showers (EAS) generate ring-like images in Imaging Atmospheric Cherenkov Telescopes when travelling near parallel to the optical axis. From geometrical parameters of these images, the absolute amount of light emitted may be calculated analytically. Comparing the amount of light recorded in these images to expectation is a well established technique for telescope optical efficiency calibration. However, this calculation is usually performed under the assumption of an approximately circular telescope mirror. The H.E.S.S. experiment entered its second phase in 2012, with the addition of a fifth telescope with a non-circular 600 m² mirror. Due to the differing mirror shape of this telescope to the original four H.E.S.S. telescopes, adaptations to the standard muon calibration were required. We present a generalised muon calibration procedure, adaptable to telescopes of differing shapes and sizes, and demonstrate its performance on the H.E.S.S. II array.

  6. Monte-Carlo investigation of radiation beam quality of the CRNA neutron irradiator for calibration purposes

    Energy Technology Data Exchange (ETDEWEB)

    Mazrou, Hakim, E-mail: mazrou_h@crna.d [Centre de Recherche Nucleaire d' Alger (CRNA), 02 Boulevard Frantz, Fanon, B.P. 399, Alger-RP 16000 (Algeria); Sidahmed, Tassadit [Centre de Recherche Nucleaire d' Alger (CRNA), 02 Boulevard Frantz, Fanon, B.P. 399, Alger-RP 16000 (Algeria); Allab, Malika [Faculte de Physique, Universite des Sciences et de la Technologie de Houari-Boumediene (USTHB), 16111, Alger (Algeria)

    2010-10-15

    An irradiation system has been acquired by the Nuclear Research Center of Algiers (CRNA) to provide neutron references for metrology and dosimetry purposes. It consists of an {sup 241}Am-Be radionuclide source of 185 GBq (5 Ci) activity inside a cylindrical steel-enveloped polyethylene container with a radially positioned beam channel. Because the container is filled with hydrogenous material, a configuration not recommended by ISO standards, large changes in the physical quantities of primary importance are expected compared to a free-field source. The main goal of the present work is therefore to fully characterize the neutron field of this particular set-up. This was done through both extensive Monte Carlo calculations and experimental measurements using BF{sub 3} and {sup 3}He based neutron area dosimeters. The effect of each component present in the bunker facility of the Algerian Secondary Standard Dosimetry Laboratory (SSDL) on the neutron energy spectrum has been investigated by simulating four irradiation configurations, and a comparison with the ISO spectrum has been performed. The ambient dose equivalent rate was determined from an estimate of the mean fluence-to-ambient-dose-equivalent conversion factors at different irradiation positions by means of the 3-D transport code MCNP5. Finally, according to the practical requirements established for calibration purposes, an optimal irradiation position has been suggested to the SSDL staff for performing their routine calibrations in an appropriate manner.
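
    The H*(10) determination described above amounts to folding the simulated spectrum with fluence-to-dose conversion coefficients; a minimal sketch (the per-bin values are placeholders to be taken from tabulations such as ICRP 74):

      def ambient_dose_rate_pSv_per_s(fluence_rates_cm2s, h_coeffs_pSv_cm2):
          """H*(10) rate = sum over energy bins of fluence rate x h*(10)(E)."""
          return sum(phi * h for phi, h in
                     zip(fluence_rates_cm2s, h_coeffs_pSv_cm2))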

  7. Simplified methods for coincidence summing corrections in HPGe efficiency calibration

    International Nuclear Information System (INIS)

    Simple and practical coincidence summing corrections for n-type HPGe detectors are presented for the common calibration nuclides 57Co and 60Co using a defined “virtual peak” and accounting for the summing of gamma photons with x-rays having energies up to 40 keV (88Y and 139Ce). These corrections make it possible to easily and effectively establish peak and total efficiency curves suitable for subsequent summing corrections in routine gamma spectrometry analyses. Experimental verification of the methods shows excellent agreement for measurements of different reference solutions. - Highlights: ► Coincidence summing corrections are important in environmental gamma-ray spectrometry. ► Simple and practical corrections are presented for HPGe efficiency calibrations. ► Emphasis is placed on summing with low-energy photons in n-type detectors. ► The experimental validations of the methods show excellent agreement.

  8. Analysis of the influence of germanium dead layer on detector calibration simulation for environmental radioactive samples using the Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Rodenas, J. E-mail: jrodenas@iqn.upv.es; Pascual, A.; Zarza, I.; Serradell, V.; Ortiz, J.; Ballesteros, L

    2003-01-11

    Germanium crystals have a dead layer that causes a decrease in efficiency, since the layer is not useful for detection, but strongly attenuates photons. The thickness of this inactive layer is not well known due to the existence of a transition zone where photons are increasingly absorbed. Therefore, using data provided by manufacturers in the detector simulation model, some strong discrepancies appear between calculated and measured efficiencies. The Monte Carlo method is applied to simulate the calibration of a HP Ge detector in order to determine the total inactive germanium layer thickness and the active volume that are needed in order to obtain the minimum discrepancy between estimated and experimental efficiency. Calculations and measurements were performed for all of the radionuclides included in a standard calibration gamma cocktail solution. A Marinelli beaker was considered for this analysis, as it is one of the most commonly used sample container for environmental radioactivity measurements. Results indicated that a good agreement between calculated and measured efficiencies is obtained using a value for the inactive germanium layer thickness equal to approximately twice the value provided by the detector manufacturer. For all energy peaks included in the calibration, the best agreement with experimental efficiency was found using a combination of a small thickness of the inactive germanium layer and a small detection active volume.

  9. Analysis of the influence of germanium dead layer on detector calibration simulation for environmental radioactive samples using the Monte Carlo method

    Science.gov (United States)

    Ródenas, J.; Pascual, A.; Zarza, I.; Serradell, V.; Ortiz, J.; Ballesteros, L.

    2003-01-01

    Germanium crystals have a dead layer that causes a decrease in efficiency, since the layer is not useful for detection, but strongly attenuates photons. The thickness of this inactive layer is not well known due to the existence of a transition zone where photons are increasingly absorbed. Therefore, using data provided by manufacturers in the detector simulation model, some strong discrepancies appear between calculated and measured efficiencies. The Monte Carlo method is applied to simulate the calibration of a HP Ge detector in order to determine the total inactive germanium layer thickness and the active volume that are needed in order to obtain the minimum discrepancy between estimated and experimental efficiency. Calculations and measurements were performed for all of the radionuclides included in a standard calibration gamma cocktail solution. A Marinelli beaker was considered for this analysis, as it is one of the most commonly used sample container for environmental radioactivity measurements. Results indicated that a good agreement between calculated and measured efficiencies is obtained using a value for the inactive germanium layer thickness equal to approximately twice the value provided by the detector manufacturer. For all energy peaks included in the calibration, the best agreement with experimental efficiency was found using a combination of a small thickness of the inactive germanium layer and a small detection active volume.

  10. Design and fabrication of an in situ gamma radioactivity measurement system for marine environment and its calibration with Monte Carlo method.

    Science.gov (United States)

    Abdollahnejad, Hamed; Vosoughi, Naser; Zare, Mohammad Reza

    2016-08-01

    The simulation, design and fabrication of a sealing enclosure were carried out for a NaI(Tl) 2″×2″ detector, to be used as an in situ gamma radioactivity measurement system in the marine environment. The effect of the sealing enclosure on the performance of the system in the laboratory and in a marine environment (a test tank of 10 m³ volume) was studied using point sources. The marine volumetric efficiency for radiation of 1461 keV energy (from (40)K) was measured with a KCl volumetric liquid source diluted in the tank. The experimental and simulated efficiency values agreed well. A marine volumetric efficiency calibration curve was calculated for the 60 keV to 1461 keV energy range with the Monte Carlo method. This curve indicates that the efficiency increases rapidly up to 140.5 keV but then drops exponentially. PMID:27213808

  11. Energy Self-calibration and low-energy efficiency calibration for an underwater in-situ LaBr3:Ce spectrometer

    CERN Document Server

    Zeng, Zhi; Ma, Hao; He, Jianhua; Cang, Jirong; Zeng, Ming; Cheng, Jianping

    2016-01-01

    An underwater in situ gamma-ray spectrometer based on LaBr3:Ce was developed and optimized to monitor marine radioactivity. The intrinsic background of LaBr3:Ce, mainly from 138La and 227Ac, was well determined by low-background measurement and a pulse-shape discrimination method. A self-calibration method using three internal contamination peaks is proposed to eliminate peak shift during long-term monitoring. Experiments at different temperatures showed the method to be helpful for maintaining long-term stability. To monitor marine radioactivity, the spectrometer's efficiency was calculated via a water-tank experiment as well as Monte Carlo simulation.
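
    A sketch of the self-calibration step (the peak channels and reference energies below are placeholders, not values from the paper):

      import numpy as np

      def self_calibrate(peak_channels, peak_energies_keV):
          """Least-squares linear energy scale, E = gain * channel + offset,
          fitted to the observed channels of known internal peaks."""
          gain, offset = np.polyfit(peak_channels, peak_energies_keV, 1)
          return gain, offset

      # Example with three hypothetical internal peaks:
      gain, offset = self_calibrate([512.3, 941.0, 1703.8],
                                    [788.7, 1435.8, 2599.0])
      energies = gain * np.arange(2048) + offset   # recalibrated energy axis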

  12. Usability of potassium compounds as efficiency calibrators for gamma spectrometer

    International Nuclear Information System (INIS)

    K-40 is widely used for the efficiency calibration of gamma spectrometers because it is easy to obtain, cheap, present in every environmental sample and gives a clear peak at 1460.8 keV. In this study, the activity concentrations of 18 chemical compounds with different potassium fractions, constituent elements, densities and particle sizes were measured, and the effect of these parameters on the activity of the compounds was investigated. As a result, it is seen that compounds with low density, low molecular weight and a high potassium fraction have high potassium activity concentrations, and compounds with these properties are the most suitable for use as calibration sources.
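
    The link between potassium fraction and activity is direct; a minimal sketch using standard nuclear data (half-life 1.248e9 yr, isotopic abundance 0.0117%):

      import math

      N_A = 6.02214e23                 # Avogadro constant (1/mol)
      T_HALF_S = 1.248e9 * 3.1557e7    # K-40 half-life (s)
      ABUNDANCE = 1.17e-4              # K-40 isotopic abundance
      M_K = 39.0983                    # molar mass of natural K (g/mol)

      def k40_activity_bq_per_g(potassium_mass_fraction):
          """Specific K-40 activity of a compound from its K mass fraction."""
          n_k40_per_g = potassium_mass_fraction * ABUNDANCE * N_A / M_K
          return math.log(2.0) / T_HALF_S * n_k40_per_g

      # KCl is about 52.4% potassium by mass -> roughly 16.6 Bq/g:
      print(k40_activity_bq_per_g(0.524))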

  13. Calibration and simulation of a HPGe well detector using Monte Carlo computer code

    International Nuclear Information System (INIS)

    Monte Carlo methods are often used to simulate physical and mathematical systems. They are a class of computational algorithms that rely on repeated random sampling to compute their results. Because of their reliance on repeated computation of random or pseudo-random numbers, these methods are most suited to calculation by a computer and tend to be used when it is unfeasible or impossible to compute an exact result with a deterministic algorithm. The Monte Carlo method is used here to determine a detector's response curves, which are difficult to obtain experimentally. It uses random numbers to simulate the decay conditions and the angle of incidence at a given energy value, thus studying the random behavior of the radiation and providing response and efficiency curves. The MCNP5 computer code provides means to simulate gamma-ray detectors and has been used in this work for the 50 keV - 2000 keV energy range. The HPGe well detector was simulated with the MCNP5 computer code and the results compared with experimental data. The dimensions of both the dead layer and the transition layer were determined, and the response curve for a particular geometry was then obtained and compared with the experimental results, in order to verify the detector simulation. Both results were in very good agreement. (author)

  14. Calibration of an in-situ BEGe detector using semi-empirical and Monte Carlo techniques.

    Science.gov (United States)

    Agrafiotis, K; Karfopoulos, K L; Anagnostakis, M J

    2011-08-01

    In the case of a nuclear or radiological accident a rapid estimation of the qualitative and quantitative characteristics of the potential radioactive pollution is needed. In the case of airborne releases, the radioactive pollutants are eventually deposited on the ground, forming a surface source. In this case, in-situ γ-ray spectrometry is a powerful tool for the determination of ground pollution. In this work, the procedure followed at the Nuclear Engineering Department of the National Technical University of Athens (NED-NTUA) for the calibration of an in-situ Broad Energy Germanium (BEGe) detector, for the determination of gamma-emitting radionuclides deposited on the ground surface, is presented. BEGe detectors, due to their technical characteristics, are suitable for the analysis of photons in a wide energy region. Two different techniques were applied for the full-energy peak efficiency calibration of the BEGe detector in the energy region 60-1600 keV. Full-energy peak efficiencies determined using the two methods agree within statistical uncertainties. PMID:21193317

  15. An efficient framework for photon Monte Carlo treatment planning.

    Science.gov (United States)

    Fix, Michael K; Manser, Peter; Frei, Daniel; Volken, Werner; Mini, Roberto; Born, Ernst J

    2007-10-01

    Currently photon Monte Carlo treatment planning (MCTP) for a patient stored in the patient database of a treatment planning system (TPS) can usually only be performed using a cumbersome multi-step procedure where many user interactions are needed. This means automation is needed for usage in clinical routine. In addition, because of the long computing time in MCTP, optimization of the MC calculations is essential. For these purposes a new graphical user interface (GUI)-based photon MC environment has been developed resulting in a very flexible framework. By this means appropriate MC transport methods are assigned to different geometric regions by still benefiting from the features included in the TPS. In order to provide a flexible MC environment, the MC particle transport has been divided into different parts: the source, beam modifiers and the patient. The source part includes the phase-space source, source models and full MC transport through the treatment head. The beam modifier part consists of one module for each beam modifier. To simulate the radiation transport through each individual beam modifier, one out of three full MC transport codes can be selected independently. Additionally, for each beam modifier a simple or an exact geometry can be chosen. Thereby, different complexity levels of radiation transport are applied during the simulation. For the patient dose calculation, two different MC codes are available. A special plug-in in Eclipse providing all necessary information by means of Dicom streams was used to start the developed MC GUI. The implementation of this framework separates the MC transport from the geometry and the modules pass the particles in memory; hence, no files are used as the interface. The implementation is realized for 6 and 15 MV beams of a Varian Clinac 2300 C/D. Several applications demonstrate the usefulness of the framework. Apart from applications dealing with the beam modifiers, two patient cases are shown. Thereby

  16. Extrapolated HPGe efficiency estimates based on a single calibration measurement

    International Nuclear Information System (INIS)

    Gamma spectroscopists often must analyze samples with geometries for which their detectors are not calibrated. The effort to experimentally recalibrate a detector for a new geometry can be quite time consuming, causing delay in reporting useful results. Such concerns have motivated development of a method for extrapolating HPGe efficiency estimates from an existing single measured efficiency. Overall, the method provides useful preliminary results for analyses that do not require exceptional accuracy, while reliably bracketing the credible range. The estimated efficiency ε for a uniform sample in a geometry with volume V is extrapolated from the measured ε₀ of the base sample of volume V₀. Assuming all samples are centered atop the detector for maximum efficiency, ε decreases monotonically as V increases about V₀, and vice versa. Extrapolation of high and low efficiency estimates ε_h and ε_L provides an average estimate of ε = ½[ε_h + ε_L] ± ½[ε_h − ε_L], where the uncertainty Δε = ½[ε_h − ε_L] brackets the limits of the maximum possible error. Both ε_h and ε_L diverge from ε₀ as V deviates from V₀, causing Δε to increase accordingly. The above concepts guided development of both conservative and refined estimates for ε.
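
    A minimal sketch of the bracketing estimate (variable names assumed):

      def bracketed_efficiency(eff_high, eff_low):
          """Midpoint estimate with half-spread as the maximum-error bound."""
          eff = 0.5 * (eff_high + eff_low)
          delta = 0.5 * (eff_high - eff_low)
          return eff, delta

      eff, delta = bracketed_efficiency(0.052, 0.044)
      print(f"efficiency = {eff:.3f} +/- {delta:.3f}")   # 0.048 +/- 0.004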

  17. Euromet action 428: transfer of ge detectors efficiency calibration from point source geometry to other geometries

    International Nuclear Information System (INIS)

    The EUROMET project 428 examines efficiency transfer computation for Ge gamma-ray spectrometers when the efficiency is known for a reference point-source geometry, in the 60 keV to 2 MeV energy range. Different methods are used for this, such as Monte Carlo simulation or semi-empirical computation. The exercise compares the application of these methods to the same selected experimental cases to determine the usage limitations versus the requested accuracy. To examine the results carefully and derive information for improving the computation codes, the study was limited to a few simple cases, starting from an experimental efficiency calibration for a point source at 10 cm source-to-detector distance. The first part concerns the simplest case of geometry transfer, i.e., using point sources at three source-to-detector distances: 2, 5 and 20 cm; the second part deals with the transfer from point-source geometry to cylindrical geometry with three different matrices. The general results show that the deviations between the computed results and the measured efficiencies are for the most part within 10%. The quality of the results is rather inhomogeneous and shows that these codes cannot be used directly for metrological purposes. However, most of them are operational for routine measurements where efficiency uncertainties of 5-10% are sufficient. (author)

  18. Double Chooz Neutron Detection Efficiency with Calibration System

    Science.gov (United States)

    Chang, Pi-Jung

    2012-03-01

    The Double Chooz experiment is designed to search for a non-vanishing mixing angle θ13 with unprecedented sensitivity. The first results, obtained with the far detector only, indicate a non-zero value of θ13. The Double Chooz detector system consists of a main detector, an outer veto system and a number of calibration systems. The main detector comprises a series of concentric cylinders. The target vessel, filled with liquid scintillator loaded with 0.1% Gd, is surrounded by the gamma-catcher, containing non-loaded liquid scintillator. A buffer region of non-scintillating liquid surrounds the gamma-catcher and serves to decrease the level of accidental background. The Inner Veto region lies outside the buffer. The experiment is calibrated with light sources, radioactive point sources, cosmic rays and natural radioactivity. Radio-isotopes sealed in miniature capsules are deployed in the target and the gamma-catcher. Neutron detection efficiency is one of the major systematic components in the measurement of anti-neutrino disappearance. An untagged 252Cf source was used to determine the fractions of neutron captures on Gd, the neutron capture time systematic and the neutron delayed energy systematic. The details will be explained in the talk.

  19. Highly Efficient Monte-Carlo for Estimating the Unavailability of Markov Dynamic Systems

    Institute of Scientific and Technical Information of China (English)

    XIAO Gang; DENG Li; ZHANG Ben-Ai; ZHU Jian-Shi

    2004-01-01

    Monte Carlo simulation has become an important tool for estimating the reliability and availability of dynamic systems, since conventional numerical methods are no longer efficient when the size of the system to solve is large. However, evaluating the probability of occurrence of very rare events by simulation means playing a very large number of histories of the system, which leads to unacceptable computing times. Highly efficient Monte Carlo schemes must therefore be worked out. In this paper, based on the integral equation describing state transitions of a Markov dynamic system, a uniform Monte Carlo method for estimating unavailability is presented. Using a free-flight estimator, direct statistical estimation Monte Carlo is achieved. Using both the free-flight estimator and a biased sampling probability space, weighted statistical estimation Monte Carlo is also achieved. Five Monte Carlo schemes, including crude simulation, analog simulation, statistical estimation based on crude and analog simulation, and weighted statistical estimation, are used to calculate the unavailability of a repairable Con/3/30:F system. Their efficiencies are compared with each other. The results show that weighted statistical estimation Monte Carlo has the smallest variance and the highest efficiency in very-rare-event simulation.

  20. Efficiency Calibration for Measuring the 12C(n, 2n)11C Cross Section

    Science.gov (United States)

    Eckert, Thomas; Gula, August; Vincett, Laurel; Yuly, Mark; Padalino, Stephen; Russ, Megan; Bienstock, Mollie; Simone, Angela; Ellison, Drew; Desmitt, Holly; Sangster, Craig; Regan, Sean; Fitzgerald, Ryan

    2015-11-01

    One possible inertial confinement fusion diagnostic involves tertiary neutron activation via the 12C(n, 2n)11C reaction. A recent experiment to measure this reaction cross-section involved coincidence counting the annihilation gamma rays produced by the positron decay of 11C. This requires an accurate value for the full-peak coincidence efficiency of the NaI detector system. The GEANT 4 toolkit was used to develop a Monte Carlo simulation of the detector system which can be used to calculate the required efficiencies. For validation, simulation predictions have been compared with the results of two experiments. In the first, full-peak coincidence positron annihilation efficiencies were measured for 22Na decay positrons that annihilate in a small plastic scintillator. In the second, a NIST-calibrated 68Ge source was used. A comparison of calculated with measured efficiencies, as well as 12C(n, 2n)11C cross sections are presented. Funded in part by a grant from the DOE through the Laboratory for Laser Energetics.

  1. A Monte Carlo (MC) based individual calibration method for in vivo x-ray fluorescence analysis (XRF)

    Science.gov (United States)

    Hansson, Marie; Isaksson, Mats

    2007-04-01

    X-ray fluorescence analysis (XRF) is a non-invasive method that can be used for in vivo determination of thyroid iodine content. System calibrations with phantoms resembling the neck may give misleading results in the cases when the measurement situation largely differs from the calibration situation. In such cases, Monte Carlo (MC) simulations offer a possibility of improving the calibration by better accounting for individual features of the measured subjects. This study investigates the prospects of implementing MC simulations in a calibration procedure applicable to in vivo XRF measurements. Simulations were performed with Penelope 2005 to examine a procedure where a parameter, independent of the iodine concentration, was used to get an estimate of the expected detector signal if the thyroid had been measured outside the neck. An attempt to increase the simulation speed and reduce the variance by exclusion of electrons and by implementation of interaction forcing was conducted. Special attention was given to the geometry features: analysed volume, source-sample-detector distances, thyroid lobe size and position in the neck. Implementation of interaction forcing and exclusion of electrons had no obvious adverse effect on the quotients while the simulation time involved in an individual calibration was low enough to be clinically feasible.

  2. Russian roulette efficiency in Monte Carlo resonant absorption calculations

    Energy Technology Data Exchange (ETDEWEB)

    Ghassoun, J. E-mail: ghassoun@ucam.ac.ma; Jehouani, A

    2000-11-15

    The resonant absorption calculation in media containing heavy resonant nuclei is one of the most difficult problems treated in reactor physics. Deterministic techniques need many approximations to solve this kind of problem. On the other hand, the Monte Carlo method is a reliable mathematical tool for evaluating the neutron resonance escape probability, but it suffers from large statistical deviations of results and long computation times. In order to overcome this problem, we have used the Splitting and Russian Roulette technique coupled separately to survival biasing and to importance sampling for the energy parameter. These techniques have been used to calculate the neutron resonance absorption in infinite homogeneous media containing hydrogen and uranium characterized by the dilution (ratio of the concentrations of hydrogen to uranium). The energy of the point neutron source is taken at E{sub s}=2 MeV and E{sub s}=676.45 eV, whereas the energy cut-off is fixed at E{sub c}=2.768 eV. The results show a large reduction of computation time and statistical deviation, without altering the mean resonance escape probability compared to the usual analog simulation. Splitting and Russian Roulette coupled to survival biasing is found to be the best method for studying neutron resonant absorption, particularly at high energies. A comparison is made, for several dilutions, between the Monte Carlo results and those of deterministic methods based on the numerical solution of the neutron slowing-down equations by an iterative method.
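
    A minimal sketch of the basic splitting / Russian roulette game on crossing between regions of importance I1 and I2 (an illustration of the general technique, not the authors' code):

      import random

      def split_or_roulette(weight, imp_from, imp_to):
          """Return daughter weights; expected total weight is conserved,
          so the estimator stays unbiased."""
          ratio = imp_to / imp_from
          if ratio >= 1.0:                      # more important: split
              n = int(ratio)
              if random.random() < ratio - n:   # probabilistic extra copy
                  n += 1
              return [weight / ratio] * n
          if random.random() < ratio:           # less important: roulette
              return [weight / ratio]           # survivor weight increases
          return []                             # particle killed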

  3. Russian roulette efficiency in Monte Carlo resonant absorption calculations

    Science.gov (United States)

    Ghassoun; Jehouani

    2000-10-01

    The resonant absorption calculation in media containing heavy resonant nuclei is one of the most difficult problems treated in reactor physics. Deterministic techniques need many approximations to solve this kind of problem. On the other hand, the Monte Carlo method is a reliable mathematical tool for evaluating the neutron resonance escape probability, but it suffers from large statistical deviations of results and long computation times. In order to overcome this problem, we have used the Splitting and Russian Roulette technique coupled separately to survival biasing and to importance sampling for the energy parameter. These techniques have been used to calculate the neutron resonance absorption in infinite homogeneous media containing hydrogen and uranium characterized by the dilution (ratio of the concentrations of hydrogen to uranium). The energy of the point neutron source is taken at Es = 2 MeV and Es = 676.45 eV, whereas the energy cut-off is fixed at Ec = 2.768 eV. The results show a large reduction of computation time and statistical deviation, without altering the mean resonance escape probability compared to the usual analog simulation. Splitting and Russian Roulette coupled to survival biasing is found to be the best method for studying neutron resonant absorption, particularly at high energies. A comparison is made, for several dilutions, between the Monte Carlo results and those of deterministic methods based on the numerical solution of the neutron slowing-down equations by an iterative method. PMID:11003535

  4. Russian roulette efficiency in Monte Carlo resonant absorption calculations

    International Nuclear Information System (INIS)

    The resonant absorption calculation in media containing heavy resonant nuclei is one of the most difficult problems treated in reactor physics. Deterministic techniques need many approximations to solve this kind of problem. On the other hand, the Monte Carlo method is a reliable mathematical tool for evaluating the neutron resonance escape probability, but it suffers from large statistical deviations of results and long computation times. In order to overcome this problem, we have used the Splitting and Russian Roulette technique coupled separately to survival biasing and to importance sampling for the energy parameter. These techniques have been used to calculate the neutron resonance absorption in infinite homogeneous media containing hydrogen and uranium characterized by the dilution (ratio of the concentrations of hydrogen to uranium). The energy of the point neutron source is taken at Es=2 MeV and Es=676.45 eV, whereas the energy cut-off is fixed at Ec=2.768 eV. The results show a large reduction of computation time and statistical deviation, without altering the mean resonance escape probability compared to the usual analog simulation. Splitting and Russian Roulette coupled to survival biasing is found to be the best method for studying neutron resonant absorption, particularly at high energies. A comparison is made, for several dilutions, between the Monte Carlo results and those of deterministic methods based on the numerical solution of the neutron slowing-down equations by an iterative method.

  5. A combined approach of variance-reduction techniques for the efficient Monte Carlo simulation of linacs

    International Nuclear Information System (INIS)

    A method based on a combination of the variance-reduction techniques of particle splitting and Russian roulette is presented. This method improves the efficiency of radiation transport through linear accelerator geometries simulated with the Monte Carlo method. The method, named 'splitting-roulette', was implemented in the Monte Carlo code PENELOPE and tested on an Elekta linac, although it is general enough to be implemented in any other general-purpose Monte Carlo radiation transport code and applied to any linac geometry. Splitting-roulette uses one of two splitting modes: simple splitting and 'selective splitting'. Selective splitting is a new splitting mode, based on the angular distribution of bremsstrahlung photons, implemented in the Monte Carlo code PENELOPE. Splitting-roulette improves the simulation efficiency of an Elekta SL25 linac by a factor of 45. (paper)

  6. Monte-Carlo based uncertainty analysis: Sampling efficiency and sampling convergence

    International Nuclear Information System (INIS)

    Monte Carlo analysis has become nearly ubiquitous since its introduction, now over 65 years ago. It is an important tool in many assessments of the reliability and robustness of systems, structures or solutions. As the deterministic core simulation can be lengthy, the computational cost of Monte Carlo can be a limiting factor. To reduce that computational expense as much as possible, sampling efficiency and convergence for Monte Carlo are investigated in this paper. The first section shows that non-collapsing space-filling sampling strategies, illustrated here with the maximin and uniform Latin hypercube designs, greatly enhance the sampling efficiency and render a desired level of accuracy of the outcomes attainable with far fewer runs. In the second section it is demonstrated that standard sampling statistics are inapplicable to Latin hypercube strategies. A sample-splitting approach is put forward, which in combination with replicated Latin hypercube sampling allows the accuracy of Monte Carlo outcomes to be assessed. The assessment in turn permits halting the Monte Carlo simulation when the desired level of accuracy is reached. Both measures are fairly simple upgrades of the current state of the art in Monte-Carlo based uncertainty analysis, yet they substantially advance its applicability.
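
    One plausible reading of the replicated-LHS accuracy assessment can be sketched in a few lines of Python with numpy: several independent Latin hypercube designs are run, and the spread of the per-replicate estimates, rather than within-design statistics (which the record notes are inapplicable), provides the accuracy measure used to decide when to stop. The model function and all sample sizes below are placeholders.

      import numpy as np

      def lhs(n, d, rng):
          # one Latin hypercube: each column is a random permutation of the n
          # strata, jittered uniformly inside each stratum
          perms = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
          return (perms + rng.random((n, d))) / n

      def model(x):  # stand-in for the expensive deterministic core simulation
          return np.sin(6 * x[:, 0]) + x[:, 1] ** 2

      rng = np.random.default_rng(1)
      reps = [model(lhs(200, 2, rng)).mean() for _ in range(10)]
      mean = np.mean(reps)
      stderr = np.std(reps, ddof=1) / np.sqrt(len(reps))
      print(f"estimate {mean:.4f} +/- {stderr:.4f}; halt once the spread is small enough")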

  7. Refined Stratified Sampling for efficient Monte Carlo based uncertainty quantification

    International Nuclear Information System (INIS)

    A general adaptive approach rooted in stratified sampling (SS) is proposed for sample-based uncertainty quantification (UQ). To motivate its use in this context, the space-filling, orthogonality, and projective properties of SS are compared with simple random sampling and Latin hypercube sampling (LHS). SS is demonstrated to provide attractive properties for certain classes of problems. The proposed approach, Refined Stratified Sampling (RSS), capitalizes on these properties through an adaptive process that adds samples sequentially by dividing the existing subspaces of a stratified design. RSS is proven to reduce variance compared to traditional stratified sample extension methods, while providing comparable or enhanced variance reduction when compared to sample-size extension methods for LHS, which do not afford the same degree of flexibility to facilitate a truly adaptive UQ process. An initial investigation of optimal stratification is presented and motivates the potential for major advances in variance reduction through optimally designed RSS. Potential paths for extension of the method to high dimension are discussed. Two examples are provided. The first involves UQ for a low-dimensional function where convergence is evaluated analytically. The second presents a study to assess the response variability of a floating structure to an underwater shock. - Highlights: • An adaptive process, rooted in stratified sampling, is proposed for Monte Carlo-based uncertainty quantification. • Space-filling, orthogonality, and projective properties of stratified sampling are investigated. • Stratified sampling is shown to possess attractive properties for certain classes of problems. • Refined Stratified Sampling, a new sampling method, is proposed that enables the adaptive UQ process. • Optimality of RSS stratum division is explored.
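
    The following one-dimensional Python sketch conveys the flavour of sequentially dividing existing strata, though with a much cruder refinement criterion than the variance-based one developed in the record: the proxy used here (stratum width times |f| at its sample) and the test function are our own illustrative choices.

      import random

      def f(x):  # stand-in model; integral of x**3 over [0, 1] is exactly 0.25
          return x ** 3

      random.seed(0)
      strata = [(0.0, 1.0)]
      samples = {(0.0, 1.0): random.uniform(0.0, 1.0)}

      for _ in range(63):  # grow adaptively from 1 to 64 samples
          # refine the stratum with the largest width * |f(sample)| proxy
          a, b = max(strata, key=lambda s: (s[1] - s[0]) * abs(f(samples[s])))
          m = 0.5 * (a + b)
          x = samples.pop((a, b))
          lo, hi = (a, m), (m, b)
          keep = lo if x < m else hi   # the existing sample stays valid in its half
          new = hi if x < m else lo    # only the empty half needs a fresh sample
          strata.remove((a, b))
          strata += [lo, hi]
          samples[keep] = x
          samples[new] = random.uniform(*new)

      # stratified estimate: one sample per stratum, weighted by stratum width
      est = sum((b - a) * f(x) for (a, b), x in samples.items())
      print("integral estimate:", est)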

  8. Source-free efficiency calibration and verification of a γ-ray spectral system

    International Nuclear Information System (INIS)

    In this paper, the source-free efficiency calibration method is used to verify an HPGe γ-spectrometer. A 60Co standard point source and three body sources were used in the efficiency calibration with the LabSOCS code. The results show that this method can be used as a supplementary method for efficiency calibration and for laboratory quality control. The factors affecting the source-free efficiency fitting were also analyzed. (authors)

  9. Whole body counter calibration using Monte Carlo modeling with an array of phantom sizes based on national anthropometric reference data

    Science.gov (United States)

    During construction of the whole body counter (WBC) at the Children’s Nutrition Research Center (CNRC), efficiency calibration was needed to translate acquired counts of 40K to actual grams of potassium for measurement of total body potassium (TBK) in a diverse subject population. The MCNP Monte Car...

  10. Efficient planar camera calibration via automatic image selection

    OpenAIRE

    Byrne, Brendan P.; Mallon, John; Whelan, Paul F.

    2009-01-01

    This paper details a novel approach to automatically selecting images which improve camera calibration results. An algorithm is presented which identifies calibration images that inherently improve camera parameter estimates based on their geometric configuration or image network geometry. Analysing images in a more intuitive geometric framework allows image networks to be formed based on the relationship between their world-to-image homographies. Geometrically, it is equivalent to enforcing ma...

  11. Calibration of a neutron moisture gauge by Monte-Carlo simulation

    International Nuclear Information System (INIS)

    Neutron transport calculations using the MCNP code have been used to determine flux distributions in soils and to derive the calibration curves of a neutron gauge. The calculations were carried out for a typical geometry identical to that of the moisture gauge HUMITERRA developed by the Laboratorio Nacional de Engenharia e Tecnologia Industrial, Portugal. To test the reliability of the method, a comparison of computed and experimental results was made. The effect on the gauge calibration curve of varying the values of several parameters which characterize the measurement system was studied, namely the soil dry bulk density, the active length of the neutron detector, and the materials and wall thickness of the probe casing and of the access tubes. The usefulness of the method in the design, development and calibration of neutron gauges for soil moisture determinations is discussed. (Author)

  12. Response and Monte Carlo evaluation of a reference ionization chamber for radioprotection level at calibration laboratories

    International Nuclear Information System (INIS)

    A special parallel plate ionization chamber, inserted in a slab phantom for the determination of the personal dose equivalent Hp(10), was developed and characterized in this work. This ionization chamber has collecting electrodes and a window made of graphite, with the walls and phantom made of PMMA. The tests comprised experimental evaluation following international standards and Monte Carlo simulations employing the PENELOPE code to evaluate the design of this new dosimeter. The experimental tests were conducted with the radiation protection quality N-60 established at IPEN, and all results were within the recommended standards. - Highlights: • A special ionization chamber, inserted in a slab phantom, was designed and evaluated. • This dosimeter was utilized for the Hp(10) determination. • The evaluation of this dosimeter followed international standards. • The PENELOPE Monte Carlo code was used to evaluate the design of this dosimeter. • The tests indicated that this dosimeter may be used as a reference dosimeter.

  13. Monte Carlo simulation of GM probe and NaI detector efficiency for surface activity measurements

    International Nuclear Information System (INIS)

    This paper deals with the direct measurement of total (fixed plus removable) surface activity in the presence of interfering radiation fields. Two methods based on Monte Carlo simulations are used: one for a Geiger–Muller (GM) ionisation probe and the other for sodium iodide (NaI) detector with lead collimators; equations for the most general case and the geometry models for Monte Carlo simulation of both (GM and NaI) detectors are employed. Finally, an example of application is discussed. - Highlights: • Two methods for direct measurements of beta/gamma surface activity are proposed. • Monte Carlo simulated efficiency of detectors was validated and tested. • The calculated and measured efficiencies of detection systems were very similar. • The comparison between two different methods shows good agreement. • Methods can be used for rapid and accurate direct measurements of surface activity

  14. Monte Carlo calculation of efficiencies of whole-body counter, by microcomputer

    International Nuclear Information System (INIS)

    A computer program using the Monte Carlo method to calculate whole-body counting efficiencies for different body radiation distributions is presented. An analytical simulator (for man and for child) incorporating 99mTc, 131I and 42K is used. (M.A.C.)

  15. Calibration of a gamma spectrometer for measuring natural radioactivity. Experimental measurements and modeling by Monte-Carlo methods

    International Nuclear Information System (INIS)

    This thesis was carried out in the context of thermoluminescence dating, a method that requires laboratory measurements of natural radioactivity. For that purpose, we used a germanium spectrometer. To refine its calibration, we modelled it with the Monte Carlo code Geant4. We developed a geometrical model which takes into account the presence of inactive zones and zones of poor charge collection within the germanium crystal. The parameters of the model were adjusted by comparison with experimental results obtained with a 137Cs source. It appeared that the shape of the inactive zones is less simple than presented in the specialized literature. The model was then extended to a more complex source, with cascade emission and angular correlations between photons: 60Co. Lastly, applied to extended sources, it gave correct results and allowed us to validate the simulation of matrix effects. (author)

  16. Efficiency of Static Knowledge Bias in Monte-Carlo Tree Search

    OpenAIRE

    Ikeda, Kokolo; Viennot, Simon

    2014-01-01

    Monte-Carlo methods are currently the best known algorithms for the game of Go. It is already known that Monte-Carlo simulations based on a probability model containing static knowledge of the game are more efficient than random simulations. Such probability models are also used by some programs in the tree search policy to limit the search to a subset of the legal moves or to bias the search, but this aspect is not so well documented. In this article, we try to describe more precisely how st...

  17. Monte Carlo studies on the hadronic calibration of the H1 calorimeter with HERA events

    International Nuclear Information System (INIS)

    Two different methods to calibrate the H1 calorimeter with hadrons from HERA events are investigated. For these studies the LEPTO/JETSET event generator and the fast H1 detector simulation program P.S.I. were used. Isolated particles, measured and reconstructed with the track chambers, may cause isolated showers within the calorimeter. The measured momenta of hadrons (up to about 20 GeV/c) can be compared with the measured energy in the calorimeter. The influence of neutral particles and of neighbouring showers on the energy deposition is discussed. It is shown that a calibration is possible by comparing the transverse momentum of the scattered electron and of secondary hadrons. Disturbing effects on this measurement (e.g. energy losses in the beamhole) are presented. In both cases the number of events with Q2>10 GeV2 corresponding to 1 pb-1 is found to be sufficient to apply the mentioned methods for a global calibration. (orig./HSI)

  18. Simulation of germanium detector calibration using the Monte Carlo method: comparison between point and surface source models.

    Science.gov (United States)

    Ródenas, J; Burgos, M C; Zarza, I; Gallardo, S

    2005-01-01

    Simulation of detector calibration using the Monte Carlo method is very convenient. The computational calibration procedure using the MCNP code was validated by comparing results of the simulation with laboratory measurements. The standard source used for this validation was a disc-shaped filter on which fission and activation products were deposited. Some discrepancies between the MCNP results and laboratory measurements were attributed to the point source model adopted. In this paper, the standard source has been simulated using both point and surface source models. Results from both models are compared with each other as well as with experimental measurements. Two variables, namely the collimator diameter and the detector-source distance, have been considered in the comparison analysis. The disc model is seen to be the better model, as expected. However, the point source model is adequate for large collimator diameters and when the distance from detector to source increases, whereas for smaller collimators and shorter distances a surface source model is necessary. PMID:16604596
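
    The geometric core of the comparison can be sketched with numpy: the fraction of isotropically emitted photons that reach a coaxial detector face is estimated for a point source and for a uniformly sampled disc source, and the two converge as the source-detector distance grows, in line with the record's conclusion. The dimensions are invented and no photon interactions are modelled.

      import numpy as np

      rng = np.random.default_rng(0)
      R_SRC, R_DET = 2.5, 3.0  # cm, hypothetical filter and detector radii

      def hit_fraction(dist, r_src, n=200_000):
          # emission points: r_src = 0 gives a point source, r_src > 0 a uniform disc
          r = r_src * np.sqrt(rng.random(n))
          phi = 2 * np.pi * rng.random(n)
          x, y = r * np.cos(phi), r * np.sin(phi)
          # isotropic directions; only photons travelling toward the detector count
          cos_t = rng.uniform(-1.0, 1.0, n)
          az = 2 * np.pi * rng.random(n)
          fwd = cos_t > 0
          sin_t = np.sqrt(1.0 - cos_t[fwd] ** 2)
          scale = dist / cos_t[fwd]  # project each ray onto the detector plane
          xs = x[fwd] + scale * sin_t * np.cos(az[fwd])
          ys = y[fwd] + scale * sin_t * np.sin(az[fwd])
          return np.count_nonzero(xs ** 2 + ys ** 2 <= R_DET ** 2) / n

      for d in (1.0, 5.0, 15.0):
          print(f"d = {d:5.1f} cm  point: {hit_fraction(d, 0.0):.4f}  "
                f"disc: {hit_fraction(d, R_SRC):.4f}")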

  19. An Efficient Feedback Calibration Algorithm for Direct Imaging Radio Telescopes

    CERN Document Server

    Beardsley, Adam P; Bowman, Judd D; Morales, Miguel F

    2016-01-01

    We present the E-field Parallel Imaging Calibration (EPICal) algorithm, which addresses the need for a real-time calibration method for direct imaging radio astronomy correlators. Direct imaging involves a spatial fast Fourier transform of antenna voltages, alleviating the harsh $\mathcal{O}(N_a^2)$ computational scaling to a more gentle $\mathcal{O}(N_a \log_2 N_a)$, which can save orders of magnitude in computation cost for next generation arrays consisting of hundreds to thousands of antennas. However, because signals are mixed in the correlator, gain correction must be applied on the front end. We develop the EPICal algorithm to form gain solutions in real time without ever forming visibilities. This method scales as the number of antennas and produces results comparable to those from visibilities. Through simulations and application to Long Wavelength Array data, we show this algorithm is a promising solution for next generation instruments.

  20. Calibration of the Top-Quark Monte Carlo Mass.

    Science.gov (United States)

    Kieseler, Jan; Lipka, Katerina; Moch, Sven-Olaf

    2016-04-22

    We present a method to establish, experimentally, the relation between the top-quark mass m_{t}^{MC} as implemented in Monte Carlo generators and the Lagrangian mass parameter m_{t} in a theoretically well-defined renormalization scheme. We propose a simultaneous fit of m_{t}^{MC} and an observable sensitive to m_{t}, which does not rely on any prior assumptions about the relation between m_{t} and m_{t}^{MC}. The measured observable is independent of m_{t}^{MC} and can be used subsequently for a determination of m_{t}. The analysis strategy is illustrated with examples for the extraction of m_{t} from inclusive and differential cross sections for hadroproduction of top quarks. PMID:27152794

  1. Validation of a Monte Carlo Model of the Fork Detector with a Calibrated Neutron Source

    Science.gov (United States)

    Borella, Alessandro; Mihailescu, Liviu-Cristian

    2014-02-01

    The investigation of experimental methods for safeguarding spent fuel elements is one of the research areas at the Belgian Nuclear Research Centre SCK•CEN. A version of the so-called Fork Detector has been designed at SCK•CEN and is in use at the Belgian Nuclear Power Plant of Doel for burnup determination purposes. The Fork Detector relies on passive neutron and gamma measurements for burnup assessment and safeguards verification activities. In order to better evaluate and understand the method, and with a view to extending its capabilities, the Fork Detector was modeled with the code MCNPX. A validation of the model was done in the past using spent fuel measurement data. This paper reports on measurements carried out at the Laboratory for Nuclear Calibrations (LNK) of SCK•CEN with a 252Cf source calibrated according to ISO 8529 standards. The experimental data are presented and compared with simulations. In the simulations, not only the detector but also the measurement room was modeled, based on the available design information. The results of this comparison exercise are also presented in this paper.

  2. Study of efficiency calibration of the HPGe detector for aurum

    International Nuclear Information System (INIS)

    Neutron flux measurement in the PUSPATI TRIGA Reactor is carried out using the Au-198 foil activation technique, and a high-purity germanium (HPGe) detector is used to count the activated foils. The quality of the results of this gamma-spectrometry measurement depends directly on the accuracy of the detection efficiency in the specific measurement geometry. Experimental efficiency determination was restricted to several measurement geometries and cannot be applied directly to all measurement configurations. In this work, an approach using efficiencies measured with disk sources is applied to plot the detector efficiency as a function of gamma energy, ε(E), and to obtain the detection efficiency for gold as a function of distance with the HPGe detector. The gold detection efficiency is found to decrease exponentially as the source-to-detector distance increases. The efficiency curves obtained in this way will be applied to the measurement of irradiated foils for neutron flux mapping of the PUSPATI TRIGA Reactor. (author)
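
    If the efficiency indeed falls off exponentially with distance, as the record states, the curve can be recovered by a straight-line fit in log space; the sketch below uses invented data points, not the paper's measurements.

      import numpy as np

      d = np.array([5.0, 10.0, 15.0, 20.0, 25.0])  # cm, assumed source-detector distances
      eps = np.array([2.1e-2, 9.5e-3, 4.4e-3, 2.0e-3, 9.1e-4])  # assumed efficiencies

      slope, ln_a = np.polyfit(d, np.log(eps), 1)  # ln eps = ln a - b * d
      a, b = np.exp(ln_a), -slope
      print(f"eps(d) ~ {a:.3e} * exp(-{b:.4f} * d)")
      print("interpolated eps at 12 cm:", a * np.exp(-b * 12.0))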

  3. A Proposal on the Geometry Splitting Strategy to Enhance the Calculation Efficiency in Monte Carlo Simulation

    International Nuclear Information System (INIS)

    In this study, a geometry splitting method was proposed to increase the calculation efficiency in Monte Carlo simulation, and the way the geometry splitting strategy affects that efficiency was analyzed. First, the neutron population distribution in a deep penetration problem was analyzed. Then, considering this distribution, a geometry splitting method was devised. Using the proposed method, the figures of merit (FOMs) for benchmark problems were estimated and compared with those of the conventional geometry splitting strategy. The results show that the proposed method can considerably increase the calculation efficiency of the geometry splitting method. It is expected that the proposed method will contribute to optimizing the computational cost as well as reducing human error in Monte Carlo simulation. Geometry splitting in Monte Carlo (MC) calculation is one of the most popular variance reduction techniques due to its simplicity, reliability and efficiency. To use geometry splitting, the user should determine the locations of the splitting surfaces and assign the relative importance of each region. Generally, these splitting parameters are decided by the user's experience; in this process, however, the splitting parameters can be selected ineffectively or erroneously. To prevent this, a common recommendation, which helps the user eliminate guesswork, is to split the geometry evenly; the importance of each region is then estimated by a few iterations so as to preserve the population of particles penetrating each region. However, splitting the geometry evenly can make the calculation inefficient because of the change in the mean free path (MFP) of the particles.
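
    A minimal sketch of the importance-assignment step discussed in this record: a short pilot run estimates how the particle population decays from cell to cell, and each cell's importance is set to the inverse of its relative population, so that splitting on entry keeps the tracked population roughly constant. The pilot numbers below are invented.

      pilot_population = [1.0, 0.31, 0.095, 0.028, 0.0081]  # assumed pilot estimates

      importances = [round(pilot_population[0] / p) for p in pilot_population]
      for cell, (pop, imp) in enumerate(zip(pilot_population, importances)):
          ratio = imp / importances[cell - 1] if cell else 1.0
          print(f"cell {cell}: population {pop:.4f} -> importance {imp} "
                f"(split ~{ratio:.1f}x on entry)")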

  4. Theoretical and practical study of the variance and efficiency of a Monte Carlo calculation due to Russian roulette

    International Nuclear Information System (INIS)

    Although Russian roulette is applied very often in Monte Carlo calculations, little literature exists on its quantitative influence on the variance and efficiency of a Monte Carlo calculation. Elaborating on the work of Lux and Koblinger using moment equations, new relevant equations are derived to calculate the variance of a Monte Carlo simulation using Russian roulette. To demonstrate its practical application, the theory is applied to a simplified transport model, resulting in explicit analytical expressions for the variance of a Monte Carlo calculation and for the expected number of collisions per history. From these expressions numerical results are shown and compared with actual Monte Carlo calculations, showing excellent agreement. By considering the number of collisions in a Monte Carlo calculation as a measure of the CPU time, the efficiency of Russian roulette can also be studied. This opens the way for further investigations, including optimization of Russian roulette parameters. (authors)

  5. An efficient interpolation technique for jump proposals in reversible-jump Markov chain Monte Carlo calculations

    OpenAIRE

    Farr, W. M.; Stevens, D; Mandel, Ilya

    2015-01-01

    Selection among alternative theoretical models given an observed dataset is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, but cannot efficiently explore both parameter spaces at once. Thus, a naive jump between parameter spaces is unlikely to be accepted ...

  6. Characterizing a thermal neutron calibration assembly by solid state, nuclear track dosemeters and Monte Carlo technique

    Energy Technology Data Exchange (ETDEWEB)

    Vilela, E.; Morelli, B.; Gualdrini, G.; Burn, K.W.; Monteventi, F.; Fantuzzi, E. [ENEA, Centro Ricerche Ezio Clementel, Bologna (Italy). Dipt. Ambiente

    1998-07-01

    In the present report a thermal neutron assembly consisting of a polyethylene cube for calibrating dosimetric instruments at the ENEA (National Agency for New Technology, Energy and the Environment) Institute for Radiation Protection is described. The characterization of such a facility in terms of the spectral neutron fluence and the ambient dose equivalent rates according to the ICRP60 document is illustrated in detail. Special variance reduction algorithms developed at ENEA allowed satisfactory statistics to be obtained over the whole investigated energy domain.

  7. Evidence-Based Model Calibration for Efficient Building Energy Services

    OpenAIRE

    Bertagnolio, Stéphane

    2012-01-01

    Energy services play a growing role in the control of energy consumption and the improvement of energy efficiency in non-residential buildings. Most of the energy use analyses involved in the energy efficiency service process require on-field measurements and energy use analysis. Today, while detailed on-field measurements and energy counting stay generally expensive and time-consuming, energy simulations are increasingly cheaper due to the continuous improvement of computer speed. This work ...

  8. Monte Carlo simulation of a scanning detector whole body counter and the effect of BOMAB phantom size on the calibration.

    Science.gov (United States)

    Kramer, Gary H; Burns, Linda C; Guerriere, Steven

    2002-10-01

    Monte Carlo simulation has been used to model the Human Monitoring Laboratory's scanning detector whole body counter. This paper shows that a scanning detector counting system can be satisfactorily simulated by putting the detector in different places relative to the phantom and averaging the results; this technique was verified by experimental work that obtained an agreement of 96% between scanning and averaging. The BOMAB phantom family in use at the Human Monitoring Laboratory was also modeled so that both counting efficiency and size correction factors could be estimated. The size correction factors were found to range from 2.4 to 0.66, depending on phantom size and photon energy. The efficiency results from the MCNP scanning simulations were 97% of the measured scanning efficiency. A single function that fits counting efficiency, size, and photon energy was also developed; it gives predicted efficiencies that are within +10% to -8% of the true value. PMID:12240728
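
    The scan-by-averaging technique verified in this record reduces to a few lines once single-position efficiencies are available; here a placeholder function stands in for a full MCNP run at each detector stop, and all numbers are invented.

      import math

      def eff_at(z_detector, z_source=100.0, spread=35.0):
          # hypothetical fixed-position efficiency, peaking when the detector
          # sits directly over the source region of the phantom
          return 4e-3 * math.exp(-((z_detector - z_source) / spread) ** 2)

      stops = [20.0 * k for k in range(10)]  # detector stops along the scan, cm
      scan_eff = sum(eff_at(z) for z in stops) / len(stops)
      print(f"averaged 'scanning' efficiency: {scan_eff:.2e}")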

  9. Calculation of neutron detection efficiency for the thick lithium glass using Monte Carlo method

    International Nuclear Information System (INIS)

    The neutron detection efficiencies of a NE912 lithium glass (45 mm in diameter, 9.55 mm in thickness) and of two ST601 lithium glasses (40 mm in diameter, 3 and 10 mm in thickness, respectively) have been calculated with a Monte Carlo computer code. The energy range of the calculation is 10 keV to 2.0 MeV. The effect of the time delay caused by neutron multiple scattering in the detectors (prompt neutron detection efficiency) has been considered.

  10. Study On The Peak Efficiency Curve Of HPGe Detector With Marinelli Beakers By Monte Carlo Method

    International Nuclear Information System (INIS)

    In this paper, the peak efficiency curves of an HPGe detector for Marinelli beakers were determined with the MCNP4C2 code of the Los Alamos Laboratory for three Marinelli beakers of different sizes. The influence of matrix density on the efficiency was also studied by simulating the efficiency curves for different matrix materials: a U-CaCO3-MgCO3 calibrated mixture with a density of 1.51 g/cm3, a ThO2-SiO2-polyester calibrated mixture with a density of 1.95 g/cm3, and Ta2O5 with a density of 8.20 g/cm3. The K-edge absorption effect was observed for the thorium and tantalum matrices, which contain high-Z elements at rather high concentrations. (author)

  11. Mathematical efficiency calibration with uncertain source geometries using smart optimization

    Energy Technology Data Exchange (ETDEWEB)

    Menaa, N. [AREVA/CANBERRA Nuclear Measurements Business Unit, Saint Quentin-en-Yvelines 78182 (France); Bosko, A.; Bronson, F.; Venkataraman, R.; Russ, W. R.; Mueller, W. [AREVA/CANBERRA Nuclear Measurements Business Unit, Meriden, CT (United States); Nizhnik, V. [International Atomic Energy Agency, Vienna (Austria); Mirolo, L. [AREVA/CANBERRA Nuclear Measurements Business Unit, Saint Quentin-en-Yvelines 78182 (France)

    2011-07-01

    The In Situ Object Counting Software (ISOCS), a mathematical method developed by CANBERRA, is a well-established technique for computing High Purity Germanium (HPGe) detector efficiencies for a wide variety of source shapes and sizes. In the ISOCS method, the user needs to input the geometry-related parameters, such as the source dimensions, matrix composition and density, along with the source-to-detector distance. In many applications, the source dimensions, the matrix material and the density may not be well known. Under such circumstances, the efficiencies may not be very accurate, since the modeled source geometry may not be very representative of the measured geometry. CANBERRA developed an efficiency optimization software known as 'Advanced ISOCS' that varies the poorly known parameters within user-specified intervals and determines the optimal efficiency shape and magnitude based on benchmarks available in the measured spectra. The benchmarks could be results from isotopic codes such as MGAU, MGA, IGA, or FRAM, activities from multi-line nuclides, or multiple counts of the same item taken in different geometries (from the side, bottom, top, etc.). The efficiency optimization is carried out using either a random search based on standard probability distributions, or numerical techniques that carry out a more directed (referred to as 'smart' in this paper) search. Measurements were carried out using representative source geometries and radionuclide distributions. The radionuclide activities were determined using the optimum efficiency and compared against the true activities. The 'Advanced ISOCS' method has many applications, among which are safeguards, decommissioning and decontamination, non-destructive assay systems and nuclear reactor outage maintenance. (authors)

  12. Mathematical efficiency calibration with uncertain source geometries using smart optimization

    International Nuclear Information System (INIS)

    The In Situ Object Counting Software (ISOCS), a mathematical method developed by CANBERRA, is a well-established technique for computing High Purity Germanium (HPGe) detector efficiencies for a wide variety of source shapes and sizes. In the ISOCS method, the user needs to input the geometry-related parameters, such as the source dimensions, matrix composition and density, along with the source-to-detector distance. In many applications, the source dimensions, the matrix material and the density may not be well known. Under such circumstances, the efficiencies may not be very accurate, since the modeled source geometry may not be very representative of the measured geometry. CANBERRA developed an efficiency optimization software known as 'Advanced ISOCS' that varies the poorly known parameters within user-specified intervals and determines the optimal efficiency shape and magnitude based on benchmarks available in the measured spectra. The benchmarks could be results from isotopic codes such as MGAU, MGA, IGA, or FRAM, activities from multi-line nuclides, or multiple counts of the same item taken in different geometries (from the side, bottom, top, etc.). The efficiency optimization is carried out using either a random search based on standard probability distributions, or numerical techniques that carry out a more directed (referred to as 'smart' in this paper) search. Measurements were carried out using representative source geometries and radionuclide distributions. The radionuclide activities were determined using the optimum efficiency and compared against the true activities. The 'Advanced ISOCS' method has many applications, among which are safeguards, decommissioning and decontamination, non-destructive assay systems and nuclear reactor outage maintenance. (authors)

  13. Verification of Absolute Calibration of Quantum Efficiency for LSST CCDs

    Science.gov (United States)

    Coles, Rebecca; Chiang, James; Cinabro, David; Gilbertson, Woodrow; Haupt, justine; Kotov, Ivan; Neal, Homer; Nomerotski, Andrei; O'Connor, Paul; Stubbs, Christopher; Takacs, Peter

    2016-01-01

    We describe a system to measure the Quantum Efficiency in the wavelength range of 300nm to 1100nm of 40x40 mm n-channel CCD sensors for the construction of the 3.2 gigapixel LSST focal plane. The technique uses a series of instruments to create a very uniform flux of photons of controllable intensity in the wavelength range of interest across the face of the sensor. This allows the absolute Quantum Efficiency to be measured with an accuracy in the 1% range. This system will be part of a production facility at Brookhaven National Lab for the basic components of the LSST camera.

  14. Calibrating the photon detection efficiency in IceCube

    CERN Document Server

    Tosi, Delia

    2015-01-01

    The IceCube neutrino observatory is composed of more than five thousand light sensors, Digital Optical Modules (DOMs), installed on the surface and at depths between 1450 and 2450 m in clear ice at the South Pole. Each DOM incorporates a 10-inch diameter photomultiplier tube (PMT) intended to detect light emitted when high energy neutrinos interact with atoms in the ice. Depending on the energy of the neutrino and the distance from secondary particle tracks, PMTs can be hit by up to several thousand photons within a few hundred nanoseconds. The number of photons per PMT and their time distribution is used to reject background events and to determine the energy and direction of each neutrino. The detector energy scale was established from previous lab measurements of DOM optical sensitivity, then refined based on observed light yield from stopping muons and calibration of ice properties. A laboratory setup has now been developed to more precisely measure the DOM optical sensitivity as a function of angle and w...

  15. A computationally efficient moment-preserving Monte Carlo electron transport method with implementation in Geant4

    Energy Technology Data Exchange (ETDEWEB)

    Dixon, D.A., E-mail: ddixon@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663, MS P365, Los Alamos, NM 87545 (United States); Prinja, A.K., E-mail: prinja@unm.edu [Department of Nuclear Engineering, MSC01 1120, 1 University of New Mexico, Albuquerque, NM 87131-0001 (United States); Franke, B.C., E-mail: bcfrank@sandia.gov [Sandia National Laboratories, Albuquerque, NM 87123 (United States)

    2015-09-15

    This paper presents the theoretical development and numerical demonstration of a moment-preserving Monte Carlo electron transport method. Foremost, a full implementation of the moment-preserving (MP) method within the Geant4 particle simulation toolkit is demonstrated. Beyond implementation details, it is shown that the MP method is a viable alternative to the condensed history (CH) method for inclusion in current and future generation transport codes through demonstration of the key features of the method including: systematically controllable accuracy, computational efficiency, mathematical robustness, and versatility. A wide variety of results common to electron transport are presented illustrating the key features of the MP method. In particular, it is possible to achieve accuracy that is statistically indistinguishable from analog Monte Carlo, while remaining up to three orders of magnitude more efficient than analog Monte Carlo simulations. Finally, it is shown that the MP method can be generalized to any applicable analog scattering DCS model by extending previous work on the MP method beyond analytical DCSs to the partial-wave (PW) elastic tabulated DCS data.

  16. Developing an Efficient Calibration System for Joint Offset of Industrial Robots

    OpenAIRE

    Bingtuan Gao; Yong Liu; Ning Xi; Yantao Shen

    2014-01-01

    Joint offset calibration is one of the most important methods of improving the positioning accuracy of industrial robots. This paper presents an efficient method to calibrate industrial robot joint offsets. The proposed method mainly relies on a laser pointer mounted on the robot end-effector and a position sensitive device (PSD) arbitrarily located in the workspace. Vision-based control was employed to aid the laser beam in shooting at the center of the PSD surface from several initial robot p...

  17. Detection efficiency calibration of a semiconductor γ-spectrometer for environmental samples

    International Nuclear Information System (INIS)

    A semi-empirical formula was adopted to calibrate an HPGe γ-spectrometer for its full-energy-peak efficiency for environmental samples. The calibration procedure is described, and the results are compared with those obtained with a set of reference sources of 137Cs, the 226Ra series and the 232Th series, prepared in matrices of soil, coal and stone-coal. The two sets of results were found to be consistent within 7.1%.
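
    The record does not give its semi-empirical formula, but full-energy-peak calibrations of this kind are commonly expressed as a low-order polynomial in log-log space; the sketch below fits such a generic stand-in to invented calibration points.

      import numpy as np

      E = np.array([59.5, 122.0, 344.3, 661.7, 1173.2, 1332.5])  # keV, assumed
      eps = np.array([1.2e-2, 2.4e-2, 1.4e-2, 8.1e-3, 5.0e-3, 4.5e-3])  # assumed

      coeffs = np.polyfit(np.log(E), np.log(eps), 2)  # ln eps as a quadratic in ln E

      def efficiency(energy_kev):
          return np.exp(np.polyval(coeffs, np.log(energy_kev)))

      print("eps(911 keV) ~", efficiency(911.0))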

  18. Monte Carlo evaluation of the neutron detection efficiency of a superheated drop detector

    International Nuclear Information System (INIS)

    Neutron dosimetry has recently gained renewed attention, following concerns about the exposure of crew members on board aircraft and of workers around the increasing number of high-energy accelerators for medical and research purposes. At the same time, the new operational quantities for radiation dosimetry introduced by the ICRU and the ICRP, aiming at a unified metrological system applicable to all types of radiation exposure, made it necessary to update current devices to meet the new requirements. Superheated Drop (Bubble) Detectors (SDD) offer an alternative approach to neutron radiation protection dosimetry. The SDDs are currently studied within a large collaborative effort involving Yale University (New Haven, CT), the University of Pisa (Italy), the Physikalisch-Technische Bundesanstalt (Braunschweig, Germany), and the ENEA (Italian National Agency for New Technologies, Energy and the Environment) Centre of Bologna. The detectors were characterised through calibrations with monoenergetic neutron beams; where experimental investigations were inadequate or impossible, such as in the intermediate energy range, parametric Monte Carlo calculations of the response were carried out. This report describes the general characteristics of the SDDs, along with the Monte Carlo computations of the energy response and a comparison with the experimental results.

  19. Validation of an efficiency calibration procedure for a coaxial n-type and a well-type HPGe detector used for the measurement of environmental radioactivity

    Science.gov (United States)

    Morera-Gómez, Yasser; Cartas-Aguila, Héctor A.; Alonso-Hernández, Carlos M.; Nuñez-Duartes, Carlos

    2016-05-01

    To obtain reliable measurements of environmental radionuclide activity using HPGe (High Purity Germanium) detectors, knowledge of the absolute peak efficiency is required. This work presents a practical procedure for the efficiency calibration of a coaxial n-type and a well-type HPGe detector using experimental and Monte Carlo simulation methods. The method covers the energy range from 40 to 1460 keV and can be used for both solid and liquid environmental samples. The calibration was initially verified by measuring several reference materials provided by the IAEA (International Atomic Energy Agency). Finally, through participation in two Proficiency Tests organized by the IAEA for the members of the ALMERA network (Analytical Laboratories for the Measurement of Environmental Radioactivity), the validity of the developed procedure was confirmed. The validation also showed that measurement of 226Ra should be conducted with the coaxial n-type HPGe detector in order to minimize the true coincidence summing effect.

  20. Efficiency calibration of an HPGe X-ray detector for quantitative PIXE analysis

    International Nuclear Information System (INIS)

    Particle Induced X-ray Emission (PIXE) is an analytical technique which provides reliable and accurate quantitative results without the need for standards, provided the efficiency of the X-ray detection system is calibrated. The ion beam microprobe of the Ion Beam Modification and Analysis Laboratory at the University of North Texas is equipped with a 100 mm2 high purity germanium X-ray detector (Canberra GUL0110 Ultra-LEGe). In order to calibrate the efficiency of the detector for standardless PIXE analysis, we have measured the X-ray yield of a set of commercially available X-ray fluorescence standards. The set contained elements from low atomic number, Z = 11 (sodium), to higher atomic numbers, covering the X-ray energy region from 1.25 keV to about 20 keV, where the detector is most efficient. The effective charge was obtained from the proton backscattering yield of a calibrated particle detector.

  1. Analysis of the simulation of Ge-detector calibration for environmental radioactive samples in a Marinelli beaker source using the Monte Carlo method

    International Nuclear Information System (INIS)

    Different models have been developed considering various features of the attenuating geometry. An ideal bare Marinelli model has been compared with the actual plastic model. Concerning the detector, a bare detector model has been improved by including an aluminium absorber layer and a dead layer of inactive germanium. Calculation results of the Monte Carlo simulation have been compared with experimental measurements carried out in the laboratory for various radionuclides from a calibration gamma cocktail solution, with energies ranging over a wide interval. (orig.)

  2. A priori efficiency calculations for Monte Carlo applications in neutron transport

    International Nuclear Information System (INIS)

    In this paper a general derivation is given of equations describing the variance of an arbitrary detector response in a Monte Carlo simulation and the average number of collisions a particle will suffer until its history ends. The theory is validated for a simple slab system using the two-direction transport model and for a two-group infinite system, which both allow analytical solutions. Numerical results from the analytical solutions are compared with actual Monte Carlo calculations, showing excellent agreement. These analytical solutions demonstrate the possibilities for optimizing the weight window settings with respect to variance. Using the average number of collisions as a measure for the simulation time a cost function inversely proportional to the usual figure of merit is defined, which allows optimization with respect to overall efficiency of the Monte Carlo calculation. For practical applications it is outlined how the equations for the variance and average number of collisions can be solved using a suitable existing deterministic neutron transport code with adapted number of energy groups and scattering matrices. (author)

  3. Efficiency calibration of a HPGe detector for [{sup 18}F] FDG activity measurements

    Energy Technology Data Exchange (ETDEWEB)

    Fragoso, Maria da Conceicao de Farias; Lacerda, Isabelle Viviane Batista de; Albuquerque, Antonio Morais de Sa, E-mail: mariacc05@yahoo.com.br, E-mail: isabelle.lacerda@ufpe.br, E-mail: moraisalbuquerque@hotmaiI.com [Universidade Federal de Pernambuco (DEN/UFPE), Recife, PE (Brazil). Departamento de Energia Nuclear; Oliveira, Mercia Liane de; Hazin, Clovis Abrahao; Lima, Fernando Roberto de Andrade, E-mail: mercial@cnen.gov.br, E-mail: chazin@cnen.gov.br, E-mail: falima@cnen.gov.br [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)

    2013-11-01

    The radionuclide {sup 18}F, in the form of fluorodeoxyglucose (FDG), is the most used radiopharmaceutical for Positron Emission Tomography (PET). Due to the increasing demand for [{sup 18}F]FDG, it is important to ensure high-quality activity measurements in nuclear medicine practice. Therefore, standardized reference sources are necessary to calibrate {sup 18}F measuring systems. Usually, the activity measurements are performed in re-entrant ionization chambers, also known as radionuclide calibrators. Among the existing alternatives for the standardization of radioactive sources, the method known as gamma spectrometry is widely used for short-lived radionuclides, since it is essential to minimize source preparation time. The purpose of this work was to perform the standardization of the [{sup 18}F]FDG solution by gamma spectrometry. In addition, the reference sources calibrated by this method can be used to calibrate and test the radionuclide calibrators of the Divisao de Producao de Radiofarmacos (DIPRA) of the Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE). Standard sources of {sup 152}Eu, {sup 137}Cs and {sup 68}Ge were used for the efficiency calibration of the spectrometer system. As a result, the efficiency curve as a function of energy was determined over a wide energy range, from 122 to 1408 keV. Reference sources obtained by this method can be used in [{sup 18}F]FDG activity measurement comparison programs for PET services located in the Brazilian Northeast region. (author)

  4. Efficiency calibration of a HPGe detector for [18F] FDG activity measurements

    International Nuclear Information System (INIS)

    The radionuclide 18F, in the form of fluorodeoxyglucose (FDG), is the most used radiopharmaceutical for Positron Emission Tomography (PET). Due to the increasing demand for [18F]FDG, it is important to ensure high-quality activity measurements in nuclear medicine practice. Therefore, standardized reference sources are necessary to calibrate 18F measuring systems. Usually, the activity measurements are performed in re-entrant ionization chambers, also known as radionuclide calibrators. Among the existing alternatives for the standardization of radioactive sources, the method known as gamma spectrometry is widely used for short-lived radionuclides, since it is essential to minimize source preparation time. The purpose of this work was to perform the standardization of the [18F]FDG solution by gamma spectrometry. In addition, the reference sources calibrated by this method can be used to calibrate and test the radionuclide calibrators of the Divisao de Producao de Radiofarmacos (DIPRA) of the Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE). Standard sources of 152Eu, 137Cs and 68Ge were used for the efficiency calibration of the spectrometer system. As a result, the efficiency curve as a function of energy was determined over a wide energy range, from 122 to 1408 keV. Reference sources obtained by this method can be used in [18F]FDG activity measurement comparison programs for PET services located in the Brazilian Northeast region. (author)

  5. Application of LabSOCS efficiency calibration method in rapid analysis at lab under emergency monitoring of the nuclear incidents

    International Nuclear Information System (INIS)

    Objective: To explore the effectiveness of the LabSOCS (Laboratory Sourceless Calibration Software) efficiency calibration method for rapid laboratory analysis in emergency monitoring of nuclear incidents. Methods: The detection efficiency of three kinds of environmental samples in emergency monitoring was calculated using the LabSOCS efficiency calibration method and compared with the values obtained with the radioactive-source calibration method. Results: The maximum relative deviation of the detection efficiency between the two methods was less than 15%, and values with a relative deviation of less than 5% accounted for 70%. Conclusions: The LabSOCS efficiency calibration method can take the place of the radioactive-source efficiency calibration method and meet the requirement of rapid analysis in emergency monitoring of nuclear incidents. (authors)

  6. Absolute calibration of the antiproton detection efficiency for BESS below 1 GeV

    CERN Document Server

    Asaoka, Y; Yoshida, T; Abe, K; Anraku, K; Fujikawa, M; Fuke, H; Haino, S; Izumi, K; Maeno, T; Makida, Y; Matsui, N; Matsumoto, H; Matsunaga, H; Motoki, M; Nozaki, M; Orito, S; Sanuki, T; Sasaki, M; Shikaze, Y; Sonoda, T; Suzuki, J; Tanaka, K; Toki, Y; Yamamoto, A

    2001-01-01

    An accelerator beam experiment was performed using a low-energy antiproton beam to measure the antiproton detection efficiency of the BESS detector. Measured efficiencies and efficiencies calculated with the BESS Monte Carlo simulation, based on GEANT/GHEISHA, showed good agreement. With this detailed verification of the BESS simulation, the results demonstrate that the relative systematic error of the detection efficiency derived from the BESS simulation is within 5%; it had previously been estimated as 15%, which was the dominant uncertainty in measurements of the cosmic-ray antiproton flux.

  7. Plastic-NaI(Tl) crystal composite virtual calibration method of detection efficiency

    International Nuclear Information System (INIS)

    In order to study the relationship between counting efficiency and crystal size, software was developed on the VC++ platform, based on the Monte Carlo method, that allows the dimensions of the composite plastic-NaI(Tl) crystal to be customized and the detection efficiency to be calculated for γ rays of different energies. From the calculated data matrix, point-source efficiency functions were fitted for detectors of different sizes, and the parameters of the functions were determined. (authors)

  8. Amorphous silicon EPID calibration for dosimetric applications: comparison of a method based on Monte Carlo prediction of response with existing techniques

    International Nuclear Information System (INIS)

    For EPID dosimetry, the calibration should ensure that all pixels have a similar response to a given irradiation. A calibration method (MC), using an analytical fit of a Monte Carlo simulated flood-field EPID image to correct for the flood-field image pixel intensity shape, was proposed. It was compared with the standard flood-field calibration (FF), with the use of a water slab placed in the beam to flatten the flood field (WS), and with a multiple-field calibration in which the EPID was irradiated with a fixed 10 x 10 field at 16 different positions (MF). The EPID was used in its normal configuration (clinical setup) and with an additional 3 mm copper slab (modified setup). Beam asymmetry measured with a diode array was taken into account in the MC and WS methods. For both setups, the MC method provided pixel sensitivity values within 3% of those obtained with the MF and WS methods (mean difference 2%), and reproduced IMRT fields to within 3% of those obtained with the WS and MF calibrations, while differences from FF-calibrated images for fields larger than 10 x 10 cm2 were up to 8%. The MC, WS and MF methods all provided a major improvement on the FF method. Advantages and drawbacks of each method are reviewed.

  9. Ge(Li) intrinsic efficiency calculation using Monte Carlo simulation for γ radiation transport

    International Nuclear Information System (INIS)

    To solve a radiation transport problem using the Monte Carlo simulation method, the evolution of a large number of radiations must be simulated and the analysis of their histories must be done. The evolution of a radiation starts with its emission, followed by unperturbed propagation in the medium between successive interactions, and then by the modification of the radiation parameters at the points where interactions occur. The goal of this paper is to calculate the total detection efficiency and the intrinsic efficiency of a coaxial Ge(Li) detector, using the Monte Carlo method to simulate the γ-radiation transport. A Ge(Li) detector with a 106 cm3 active volume and γ photons with energies in the 50 keV - 2 MeV range, emitted by a point source situated on the detector axis, were considered. The evolution of each γ photon is simulated step by step by an analog process until the photon escapes from the detector or is completely absorbed in the active volume. (author)
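
    The step-by-step analog tracking described in this record can be caricatured in one dimension: each photon is followed from interaction to interaction inside a crystal of thickness T until it escapes or is completely absorbed, and total and full-energy-peak efficiencies are tallied. The cross-section functions below are crude stand-ins, not real Ge(Li) data.

      import math
      import random

      T = 4.4  # cm, assumed active thickness along the detector axis

      def mu_total(E):  # 1/cm, toy total attenuation coefficient
          return 0.3 + 0.25 / E

      def p_photo(E):  # toy photoelectric fraction of interactions
          return min(1.0, 0.05 / E ** 2)

      def track(E0=0.662):  # MeV, e.g. a 137Cs photon entering at x = 0
          E, x, direction, deposited = E0, 0.0, 1.0, 0.0
          while True:
              x += direction * (-math.log(random.random()) / mu_total(E))
              if not 0.0 <= x <= T:
                  return deposited               # photon escapes the crystal
              if random.random() < p_photo(E):
                  return deposited + E           # photoelectric: full absorption
              frac = random.uniform(0.1, 0.8)    # toy Compton energy transfer
              deposited += frac * E
              E *= 1.0 - frac
              direction = random.choice((-1.0, 1.0))  # crude rescattering

      random.seed(2)
      N = 50_000
      hits = [track() for _ in range(N)]
      total = sum(1 for dep in hits if dep > 0.0) / N    # any energy deposited
      full = sum(1 for dep in hits if dep > 0.655) / N   # full-energy peak
      print("intrinsic total:", total, " full-energy peak:", full)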

  10. Simple formalism for efficient derivatives and multi-determinant expansions in quantum Monte Carlo

    Science.gov (United States)

    Filippi, Claudia; Assaraf, Roland; Moroni, Saverio

    2016-05-01

    We present a simple and general formalism to compute efficiently the derivatives of a multi-determinant Jastrow-Slater wave function, the local energy, the interatomic forces, and similar quantities needed in quantum Monte Carlo. Through a straightforward manipulation of matrices evaluated on the occupied and virtual orbitals, we obtain an efficiency equivalent to algorithmic differentiation in the computation of the interatomic forces and the optimization of the orbital parameters. Furthermore, for a large multi-determinant expansion, the significant computational gain afforded by a recently introduced table method is here extended to the local value of any one-body operator and to its derivatives, in both all-electron and pseudopotential calculations.

  11. Study of RPC Barrel maximum efficiency in 2012 and 2015 calibration collision runs

    CERN Document Server

    Cassar, Samwel

    2015-01-01

    The maximum efficiency of each of the 1020 Resistive Plate Chamber (RPC) rolls in the barrel region of the CMS muon detector is calculated from the best sigmoid fit of efficiency against high voltage (HV). Data from the HV scans, collected during calibration runs in 2012 and 2015, were compared and the rolls exhibiting a change in maximum efficiency were identified. The chi-square value of the sigmoid fit for each roll was considered in determining the significance of the maximum efficiency for the respective roll.
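
    A minimal scipy sketch of the per-roll fit: efficiency versus high voltage is fitted with a sigmoid whose plateau parameter plays the role of the maximum efficiency quoted in the record. The scan points below are made up.

      import numpy as np
      from scipy.optimize import curve_fit

      hv = np.array([8.6, 8.8, 9.0, 9.2, 9.4, 9.6])         # kV, assumed HV scan
      eff = np.array([0.03, 0.22, 0.71, 0.92, 0.95, 0.96])  # assumed efficiencies

      def sigmoid(v, eff_max, hv50, s):
          return eff_max / (1.0 + np.exp(-(v - hv50) / s))

      (eff_max, hv50, s), _ = curve_fit(sigmoid, hv, eff, p0=[0.95, 9.0, 0.1])
      print(f"maximum (plateau) efficiency {eff_max:.3f}, knee at {hv50:.2f} kV")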

  12. An efficient calibration method for SQUID measurement system using three orthogonal Helmholtz coils

    Science.gov (United States)

    Hua, Li; Shu-Lin, Zhang; Chao-Xiang, Zhang; Xiang-Yan, Kong; Xiao-Ming, Xie

    2016-06-01

    For a practical superconducting quantum interference device (SQUID) based measurement system, the Tesla/volt coefficient must be accurately calibrated. In this paper, we propose a highly efficient method of calibrating a SQUID magnetometer system using three orthogonal Helmholtz coils. The Tesla/volt coefficient is regarded as the magnitude of a vector pointing in the normal direction of the pickup coil. By applying magnetic fields through a three-dimensional Helmholtz coil, the Tesla/volt coefficient can be directly calculated from the magnetometer responses to the three orthogonally applied magnetic fields. Calibration with an alternating current (AC) field is normally used for a better signal-to-noise ratio in noisy urban environments, and the results are compared with a direct current (DC) calibration to exclude possible effects of eddy currents. In our experiment, a calibration relative error of about 6.89 × 10^-4 is obtained; the error is mainly caused by the non-orthogonality of the three axes of the Helmholtz coils. The method does not need precise alignment of the magnetometer inside the Helmholtz coil, and it can be used to calibrate multichannel magnetometer systems effectively and accurately. Project supported by the “Strategic Priority Research Program (B)” of the Chinese Academy of Sciences (Grant No. XDB04020200) and the Shanghai Municipal Science and Technology Commission Project, China (Grant No. 15DZ1940902).
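
    The vector construction in this record reduces to a few lines: if a known field applied along each Helmholtz axis produces the output voltages Vx, Vy, Vz, the Tesla/volt coefficient is the applied field divided by the magnitude of the response vector, with no need to align the magnetometer. The numbers below are invented.

      import math

      B_APPLIED = 1.0e-6       # T, known applied field amplitude per axis
      V = (0.31, 0.72, 0.55)   # V, measured responses to the x, y, z fields

      # V_i = B_APPLIED * n_i / k for a unit pickup-coil normal n, so |V| = B / k
      k = B_APPLIED / math.sqrt(sum(v * v for v in V))
      normal = tuple(v * k / B_APPLIED for v in V)  # recovered coil orientation
      print(f"Tesla/volt coefficient k = {k:.3e} T/V, coil normal ~ {normal}")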

  13. Efficiency calibration of a planar HPGe spectrometer and measurement method for 210Pb

    International Nuclear Information System (INIS)

    A semi-empirical formula was adopted to calibrate a φ43.8 × 5 mm planar HPGe low-energy γ- and X-ray spectrometer for the full-energy-peak efficiency of environmental samples in the energy range between about 14 and 63 keV. The calibration procedure is described, and the results are compared with those obtained with a set of volume reference sources of 241Am and U-Ra (in equilibrium), prepared in matrices of coal, gangue, soil and coal ash. Results from the semi-empirical formula and from the reference sources were found to be consistent within ±5.5%. A simple technique for the calibration and measurement of 210Pb in environmental samples is also described. The measurement results obtained with the spectrometer calibrated with the semi-empirical formula and with the simple 210Pb technique agreed within ±7.5% for 210Pb and 238U in gangue samples.

  14. Modeling of detection efficiency of HPGe semiconductor detector by Monte Carlo method

    International Nuclear Information System (INIS)

    Over the past ten years, following the gradual adoption of new legislative standards for protection against ionizing radiation, gamma-spectrometry has become widespread among standard radioanalytical methods. In nuclear power plants, gamma-spectrometry has proven to be the most effective method for determining the activities of individual radionuclides. Spectrometric laboratories were gradually equipped with the most modern instrumentation. Nevertheless, the use of costly and time-intensive experimental calibration methods partially limited the possibilities of gamma-spectrometry, mainly during the substantial renovation and modernization works of the late 1990s. For this reason, several calibration procedures based on computer simulations with the GEANT code were developed and tested in the spectrometric laboratory of the Bohunice nuclear power plant, in cooperation with the Department of Nuclear Physics, FMPI, Bratislava. This thesis describes a calibration method for the measurement of bulk samples based on self-absorption factors. The accuracy of the proposed method is at least comparable to that of the other methods used, but it significantly surpasses them in cost, time and simplicity. The method has been used successfully for almost two years in the spectrometric laboratory of the Radiation Protection Division at the Bohunice nuclear power plant, as shown by the results of international comparison measurements and repeated validation measurements performed by the Slovak Institute of Metrology in Bratislava.

  15. On preparation of efficiency calibration standards for gamma-ray spectrometers

    International Nuclear Information System (INIS)

    The work on the preparation of calibration standards was started for the following reasons: the development of gamma-spectrometry hardware and software requires an adequate quality assurance system; the calibration standards offered by established firms are expensive, and preparing standards in geometries to our specification would make them even more expensive; and the analyst community accepted the idea of a uniform quality assurance program and a uniform calibration policy. The studied materials were organic (Styropor, ground coffee, tobacco leaves, seeds, flour, semolina, lentils, sugar, ion-exchange resins, PTFE powder, rice, beans and bee honey) and inorganic (quartz sand, chalcedony sand, activated charcoal, marble powder, zeolite, different clays, barite, soil, perlite, talc powder and their mixtures). Efficiency curves for geometry TB with different densities, efficiencies for different geometries, and a comparison with the Czech standard 540-01 (silicone resin, ρ = 0.98 g/cm³) for 60Co, 57Co, 137Cs and 139Ce are presented. Conclusions: a procedure for the preparation of mixed-nuclide efficiency calibration standards in different geometries and with different densities has been developed. Its advantages are: different natural and artificial matrices used; gravimetrically controlled activity application; activated charcoal used as the carrier of the activity; preparation takes place in the container of the standard, so no losses of activity occur; a high degree of homogeneity of the activity distribution; and a fixed volume of the standard.

  16. TH-C-17A-08: Monte Carlo Based Design of Efficient Scintillating Fiber Dosimeters

    International Nuclear Information System (INIS)

    Purpose: To accurately predict Cherenkov radiation generation in scintillating fiber dosimeters. Quantifying Cherenkov radiation provides a method for optimizing fiber dimensions, orientation, optical filters, and photodiode spectral sensitivity to achieve efficient real-time imaging dosimeter designs. Methods: We developed in-house Monte Carlo simulation software to model polymer scintillating fibers' fluorescence and Cherenkov emission in megavoltage clinical beams. The model computes emissions using generation probabilities, wavelength sampling, fiber photon capture, and fiber transport efficiency, and incorporates the fiber's index of refraction, optical attenuation in the Cherenkov and visible spectrum, and fiber dimensions. Detector component selection based on parameters such as silicon photomultiplier efficiency and optical coupling filters separates Cherenkov radiation from the dose-proportional scintillation emissions. The computation uses spectral and geometrical separation of Cherenkov radiation; other filtering techniques can extend the model. Results: We compute Cherenkov generation per electron, and the fiber capture and transmission of those photons toward the detector, as a function of incident electron beam angle. The model accounts for beam obliquity and non-perpendicular electron-fiber impingement, which increases Cherenkov emission and trapping. Rotating square fibers about their axis varies the trapping efficiency from a minimum at normal incidence to a maximum at 45 degrees of rotation. For rotation in the plane formed by the fiber axis and its surface normal, trapping efficiency increases with angle from the normal. The Cherenkov spectrum follows the theoretical curve from 300 nm to 800 nm, the wavelength range of interest defined by silicon photomultiplier and photodiode spectral efficiency. Conclusion: We are able to compute Cherenkov generation in realistic real-time scintillating fiber dosimeter geometries. Design parameters
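
    For orientation, the Cherenkov yield that such a model samples follows from the Frank-Tamm relation. The sketch below integrates it over the 300-800 nm band quoted above for a constant refractive index; the PMMA-like n = 1.49 and the electron velocity are assumptions, not the paper's inputs.

```python
import numpy as np

ALPHA = 1.0 / 137.036    # fine-structure constant

def cherenkov_photons_per_mm(beta, n=1.49, lam1=300e-9, lam2=800e-9):
    """Frank-Tamm photon yield for a unit-charge particle in a medium of
    constant refractive index n, integrated over [lam1, lam2] (metres)."""
    if beta * n <= 1.0:
        return 0.0                      # below the Cherenkov threshold
    # dN/dx = 2*pi*alpha * (1 - 1/(beta*n)^2) * (1/lam1 - 1/lam2)
    per_m = (2.0 * np.pi * ALPHA * (1.0 - 1.0 / (beta * n) ** 2)
             * (1.0 / lam1 - 1.0 / lam2))
    return per_m * 1e-3                 # photons per mm

# An MeV-scale electron in a PMMA-core fiber (n assumed to be 1.49).
print(cherenkov_photons_per_mm(beta=0.99))   # roughly 52 photons/mm
```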

  17. Quantum Monte Carlo for large chemical systems: implementing efficient strategies for peta scale platforms and beyond

    International Nuclear Information System (INIS)

    Various strategies for efficiently implementing quantum Monte Carlo (QMC) simulations for large chemical systems are presented. These include: (i) the introduction of an efficient algorithm to calculate the computationally expensive Slater matrices, a novel scheme based on the highly localized character of atomic Gaussian basis functions (not the molecular orbitals, as usually done); (ii) the possibility of keeping the memory footprint minimal; (iii) the important enhancement of single-core performance when efficient optimization tools are used; and (iv) the definition of a universal, dynamic, fault-tolerant, and load-balanced framework adapted to all kinds of computational platforms (massively parallel machines, clusters, or distributed grids). These strategies have been implemented in the QMC-Chem code developed at Toulouse and are illustrated with numerical applications on small peptides of increasing size (158, 434, 1056, and 1731 electrons). Using 10,000-80,000 computing cores of the Curie machine (GENCI-TGCC-CEA, France), QMC-Chem has been shown to be capable of running at the petascale level, thus demonstrating that a large part of this machine's peak performance can be achieved. Implementation of large-scale QMC simulations for future exascale platforms with a comparable level of efficiency is expected to be feasible. (authors)

  18. SU-E-J-60: Efficient Monte Carlo Dose Calculation On CPU-GPU Heterogeneous Systems

    International Nuclear Information System (INIS)

    Purpose: It is well known that the performance of GPU-based Monte Carlo dose calculation implementations is bounded by memory bandwidth. One major cause of this bottleneck is the random memory-writing pattern of dose deposition, which leads to several memory-efficiency issues on the GPU such as un-coalesced writes and atomic operations. We propose a new method to alleviate these issues on CPU-GPU heterogeneous systems, achieving an overall performance improvement for Monte Carlo dose calculation. Methods: Dose deposition accumulates dose into the voxels of a dose volume along the trajectories of radiation rays. Our idea is to partition this procedure into the following three steps, each fine-tuned for the CPU or the GPU: (1) each GPU thread writes dose results with location information to a buffer in GPU memory, which achieves fully coalesced, atomic-free memory transactions; (2) the dose results in the buffer are transferred to CPU memory; (3) the dose volume is constructed from the dose buffer on the CPU. We organize the processing of all radiation rays into streams. Since the steps within a stream use different hardware resources (i.e., GPU, DMA, CPU), we can overlap the execution of these steps for different streams by pipelining. Results: We evaluated our method using a Monte Carlo Convolution Superposition (MCCS) program and tested our implementation on various clinical cases on a heterogeneous system containing an Intel i7 quad-core CPU and an NVIDIA TITAN GPU. Compared with a straightforward MCCS implementation on the same system (using both CPU and GPU for radiation ray tracing), our method gained a 2-5X speedup without losing dose-calculation accuracy. Conclusion: The results show that our new method improves the effective memory bandwidth and overall performance of MCCS on CPU-GPU systems. Our proposed method can also be applied to accelerate other Monte Carlo dose calculation approaches. This research was supported in part by NSF under Grants CCF
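
    The three-step buffered deposition can be mimicked in a few lines of numpy. This is an illustrative sketch of the data flow only (the paper's implementation uses CUDA streams with DMA transfers and pipelining); buffer sizes and dose values are hypothetical.

```python
import numpy as np

# Step 1 (GPU in the paper): each ray/thread appends (voxel_index, dose)
# records to a linear buffer instead of scattering atomically into the volume.
rng = np.random.default_rng(0)
n_deposits = 1_000_000
voxel_idx = rng.integers(0, 64 * 64 * 64, size=n_deposits)   # flattened indices
dose_vals = rng.exponential(scale=1e-4, size=n_deposits)     # assumed deposits

# Step 2: the buffer is shipped to CPU memory (a plain copy in practice).

# Step 3 (CPU): reconstruct the dose volume by accumulating the buffer.
dose = np.zeros(64 * 64 * 64)
np.add.at(dose, voxel_idx, dose_vals)    # unordered, duplicate-safe accumulation
dose = dose.reshape(64, 64, 64)
```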

  19. Efficient Monte Carlo and greedy heuristic for the inference of stochastic block models

    CERN Document Server

    Peixoto, Tiago P

    2014-01-01

    We present an efficient algorithm for the inference of stochastic block models in large networks. The algorithm can be used as an optimized Markov chain Monte Carlo (MCMC) method, with a fast mixing time and a much reduced susceptibility to getting trapped in metastable states, or as a greedy agglomerative heuristic, with an almost linear $O(N\ln^2N)$ complexity, where $N$ is the number of nodes in the network, independently of the number of blocks being inferred. We show that the heuristic is capable of delivering results which are indistinguishable from those of the more exact and numerically expensive MCMC method in many artificial and empirical networks, despite being much faster. The method is entirely unbiased towards any specific mixing pattern, and in particular it does not favor assortative community structures.

  20. Efficiency determination of whole-body counter by Monte Carlo method, using a microcomputer

    International Nuclear Information System (INIS)

    The purpose of this investigation was the development of an analytical microcomputer model to evaluate whole-body counter efficiency. The model is based on a modified Snyder phantom. A stretcher-type geometry was used, along with the Monte Carlo method and a Sinclair-type microcomputer. Experimental measurements were performed using two phantoms, one representing an adult and the other a 5-year-old child. The phantoms were made of acrylic, and 99mTc, 131I and 42K were the radioisotopes used. Results showed a close agreement between experimental and predicted data for energies ranging from 250 keV to 2 MeV, but some discrepancies were found at lower energies. (author)

  1. Amorphous silicon EPID calibration for dosimetric applications: comparison of a method based on Monte Carlo prediction of response with existing techniques

    Energy Technology Data Exchange (ETDEWEB)

    Parent, L [Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Sutton (United Kingdom); Fielding, A L [School of Physical and Chemical Sciences, Queensland University of Technology, Brisbane (Australia); Dance, D R [Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, London (United Kingdom); Seco, J [Department of Radiation Oncology, Francis Burr Proton Therapy Center, Massachusetts General Hospital, Harvard Medical School, Boston (United States); Evans, P M [Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Sutton (United Kingdom)

    2007-07-21

    For EPID dosimetry, the calibration should ensure that all pixels have a similar response to a given irradiation. A calibration method (MC), using an analytical fit of a Monte Carlo simulated flood field EPID image to correct for the flood field image pixel intensity shape, was proposed. It was compared with the standard flood field calibration (FF), with the use of a water slab placed in the beam to flatten the flood field (WS), and with a multiple field calibration in which the EPID was irradiated with a fixed 10 x 10 field at 16 different positions (MF). The EPID was used in its normal configuration (clinical setup) and with an additional 3 mm copper slab (modified setup). Beam asymmetry measured with a diode array was taken into account in the MC and WS methods. For both setups, the MC method provided pixel sensitivity values within 3% of those obtained with the MF and WS methods (mean difference <1%, standard deviation <2%). The difference in pixel sensitivity between the MC and FF methods was up to 12.2% (clinical setup) and 11.8% (modified setup). MC calibration provided images of open fields (5 x 5 to 20 x 20 cm²) and IMRT fields to within 3% of those obtained with the WS and MF calibrations, while differences from images calibrated with the FF method for fields larger than 10 x 10 cm² were up to 8%. The MC, WS and MF methods all provided a major improvement on the FF method. Advantages and drawbacks of each method were reviewed.

  2. Efficiency of rejection-free methods for dynamic Monte Carlo studies of off-lattice interacting particles

    KAUST Repository

    Guerra, Marta L.

    2009-02-23

    We calculate the efficiency of a rejection-free dynamic Monte Carlo method for d-dimensional off-lattice homogeneous particles interacting through a repulsive power-law potential r^(-p). Theoretically we find the algorithmic efficiency in the limit of low temperatures and/or high densities to be asymptotically proportional to ρ^((p+2)/2) T^(-d/2), with particle density ρ and temperature T. Dynamic Monte Carlo simulations are performed in one-, two-, and three-dimensional systems with different powers p, and the results agree with the theoretical predictions. © 2009 The American Physical Society.

  3. Efficiency calibration of an HPGe X-ray detector for quantitative PIXE analysis

    Energy Technology Data Exchange (ETDEWEB)

    Mulware, Stephen J., E-mail: Stephenmulware@my.unt.edu; Baxley, Jacob D., E-mail: jacob.baxley351@topper.wku.edu; Rout, Bibhudutta, E-mail: bibhu@unt.edu; Reinert, Tilo, E-mail: tilo@unt.edu

    2014-08-01

    Particle Induced X-ray Emission (PIXE) is an analytical technique which provides reliable and accurate quantitative results without the need for standards, provided the efficiency of the X-ray detection system is calibrated. The ion beam microprobe of the Ion Beam Modification and Analysis Laboratory at the University of North Texas is equipped with a 100 mm² high-purity germanium X-ray detector (Canberra GUL0110 Ultra-LEGe). In order to calibrate the efficiency of the detector for standardless PIXE analysis, we measured the X-ray yield of a set of commercially available X-ray fluorescence standards. The set contained elements from low atomic number Z = 11 (sodium) upwards, covering the X-ray energy region from 1.25 keV to about 20 keV, where the detector is most efficient. The effective charge was obtained from the proton backscattering yield of a calibrated particle detector.

  4. The efficiency calibration for the β-γ coincidence system using 133Xe and 131mXe mixture

    International Nuclear Information System (INIS)

    Background: As one of the sixteen radionuclide laboratories for the CTBT, the Beijing radionuclide laboratory studied a β-γ coincidence system to measure the activities of xenon isotopes (131mXe, 133mXe, 133Xe and 135Xe). Efficiency calibration is an important and difficult technique in β-γ coincidence measurement. Purpose: This study was carried out to calibrate the efficiency of the β-γ coincidence system. Methods: The efficiency for β (γ) particles can be calculated as the ratio of coincidence counts to single γ (β) counts, without knowing the sample activity. A 133Xe and 131mXe mixture of unknown activity is used for the calibration. Results: The efficiency of the β-γ coincidence system was obtained by this method. Conclusions: The method has been used to calibrate the efficiencies of the β-γ coincidence system in our laboratory. (authors)
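
    The ratio method stated above is compact enough to show directly. In the idealized case where every decay emits one β and one γ, the efficiencies, and even the activity, follow from the three count totals alone; the counts below are hypothetical, and a real 133Xe/131mXe analysis must additionally handle branching ratios, conversion electrons, interference corrections and background.

```python
# Hypothetical counts from a xenon-mixture measurement.
N_beta  = 125_400   # singles in the beta channel
N_gamma =  38_200   # singles in the gamma channel
N_coinc =  21_600   # beta-gamma coincidences

# For a cascade where every decay emits one beta and one gamma, the
# efficiencies follow from ratios alone, with no knowledge of the activity:
eff_beta  = N_coinc / N_gamma     # beta detection efficiency
eff_gamma = N_coinc / N_beta      # gamma detection efficiency

# The total number of decays then comes out as a by-product:
decays = N_beta * N_gamma / N_coinc

print(eff_beta, eff_gamma, decays)
```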

  5. Monte Carlo polarimetric efficiency simulations for a single monolithic CdTe thick matrix

    International Nuclear Information System (INIS)

    Polarimetric measurements of hard X- and soft gamma-rays are still quite unexplored in astrophysical source observations. In order to improve the study of these sources through Compton polarimetry, detectors should have a good polarimetric efficiency and also satisfy the demands of the exigent detection environments typical of such missions. Herein we present a simple concept for such systems: a single thick (~10 mm) monolithic CdTe matrix of 32 x 32 pixels, with an active area of about 40 cm². In order to predict the best configuration and dimensions of the detector pixels defined inside the monolithic CdTe piece, a Monte Carlo code based on GEANT4 library modules was developed. Efficiency and polarimetric modulation factor results are presented and discussed as a function of energy and detector thickness. A Q factor of the order of 0.3 has been found up to several hundred keV. (orig.)

  6. Monte Carlo polarimetric efficiency simulations for a single monolithic CdTe thick matrix

    Energy Technology Data Exchange (ETDEWEB)

    Curado da Silva, R.M.; Hage-Ali, M.; Siffert, P. [Lab. PHASE, CNRS, Strasbourg (France); Caroli, E.; Stephen, J.B. [Inst. TESRE/CNR, Bologna (Italy)

    2001-07-01

    Polarimetric measurements of hard X- and soft gamma-rays are still quite unexplored in astrophysical source observations. In order to improve the study of these sources through Compton polarimetry, detectors should have a good polarimetric efficiency and also satisfy the demands of the exigent detection environments typical of such missions. Herein we present a simple concept for such systems: a single thick (~10 mm) monolithic CdTe matrix of 32 x 32 pixels, with an active area of about 40 cm². In order to predict the best configuration and dimensions of the detector pixels defined inside the monolithic CdTe piece, a Monte Carlo code based on GEANT4 library modules was developed. Efficiency and polarimetric modulation factor results are presented and discussed as a function of energy and detector thickness. A Q factor of the order of 0.3 has been found up to several hundred keV. (orig.)
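
    The modulation factor Q quoted in the two records above is conventionally extracted from the azimuthal distribution of Compton-scattered events. A minimal sketch, with synthetic counts standing in for simulated pixel-pair data, recovers Q and the polarization angle from the second Fourier harmonic.

```python
import numpy as np

# Synthetic azimuthal distribution of scattered events, binned in the
# scattering angle phi (radians); all numbers are made up for illustration.
phi = np.linspace(0, 2 * np.pi, 24, endpoint=False)
counts = (1000 + 300 * np.cos(2 * (phi - 0.3))
          + np.random.default_rng(1).normal(0, 20, phi.size))

# Fit N(phi) = A + B*cos(2*(phi - phi0)) via the second Fourier harmonic.
c2 = np.sum(counts * np.cos(2 * phi)) * 2 / phi.size   # = B*cos(2*phi0)
s2 = np.sum(counts * np.sin(2 * phi)) * 2 / phi.size   # = B*sin(2*phi0)
A = counts.mean()
B = np.hypot(c2, s2)

Q = B / A                        # modulation factor, ~0.3 here
phi0 = 0.5 * np.arctan2(s2, c2)  # reconstructed polarization angle
print(Q, phi0)
```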

  7. Efficient Calibration of Computationally Intensive Groundwater Models through Surrogate Modelling with Lower Levels of Fidelity

    Science.gov (United States)

    Razavi, S.; Anderson, D.; Martin, P.; MacMillan, G.; Tolson, B.; Gabriel, C.; Zhang, B.

    2012-12-01

    Many sophisticated groundwater models tend to be computationally intensive as they rigorously represent detailed scientific knowledge about the groundwater systems. Calibration (model inversion), which is a vital step of groundwater model development, can require hundreds or thousands of model evaluations (runs) for different sets of parameters and as such demand prohibitively large computational time and resources. One common strategy to circumvent this computational burden is surrogate modelling which is concerned with developing and utilizing fast-to-run surrogates of the original computationally intensive models (also called fine models). Surrogates can be either based on statistical and data-driven models such as kriging and neural networks or simplified physically-based models with lower fidelity to the original system (also called coarse models). Fidelity in this context refers to the degree of the realism of a simulation model. This research initially investigates different strategies for developing lower-fidelity surrogates of a fine groundwater model and their combinations. These strategies include coarsening the fine model, relaxing the numerical convergence criteria, and simplifying the model geological conceptualisation. Trade-offs between model efficiency and fidelity (accuracy) are of special interest. A methodological framework is developed for coordinating the original fine model with its lower-fidelity surrogates with the objective of efficiently calibrating the parameters of the original model. This framework is capable of mapping the original model parameters to the corresponding surrogate model parameters and also mapping the surrogate model response for the given parameters to the original model response. This framework is general in that it can be used with different optimization and/or uncertainty analysis techniques available for groundwater model calibration and parameter/predictive uncertainty assessment. A real-world computationally

  8. Efficient and robust calibration of the Heston option pricing model for American options using an improved Cuckoo Search Algorithm

    OpenAIRE

    Stefan Haring; Ronald Hochreiter

    2015-01-01

    In this paper an improved Cuckoo Search Algorithm is developed to allow for an efficient and robust calibration of the Heston option pricing model for American options. Calibrating stochastic volatility models such as the Heston model is significantly harder than calibrating classical option pricing models, as more parameters have to be estimated. The difficult task of calibrating one of these models to American put option data is the main objective of this paper. Numerical results are shown to substantiate th...

  9. How effective and efficient are multiobjective evolutionary algorithms at hydrologic model calibration?

    Science.gov (United States)

    Tang, Y.; Reed, P.; Wagener, T.

    2006-05-01

    This study provides a comprehensive assessment of state-of-the-art evolutionary multiobjective optimization (EMO) tools' relative effectiveness in calibrating hydrologic models. The relative computational efficiency, accuracy, and ease-of-use of the following EMO algorithms are tested: Epsilon Dominance Nondominated Sorted Genetic Algorithm-II (ɛ-NSGAII), the Multiobjective Shuffled Complex Evolution Metropolis algorithm (MOSCEM-UA), and the Strength Pareto Evolutionary Algorithm 2 (SPEA2). This study uses three test cases to compare the algorithms' performances: (1) a standardized test function suite from the computer science literature, (2) a benchmark hydrologic calibration test case for the Leaf River near Collins, Mississippi, and (3) a computationally intensive integrated surface-subsurface model application in the Shale Hills watershed in Pennsylvania. One challenge and contribution of this work is the development of a methodology for comprehensively comparing EMO algorithms that have different search operators and randomization techniques. Overall, SPEA2 attained competitive to superior results for most of the problems tested in this study. The primary strengths of the SPEA2 algorithm lie in its search reliability and its diversity preservation operator. The biggest challenge in maximizing the performance of SPEA2 lies in specifying an effective archive size without a priori knowledge of the Pareto set. In practice, this would require significant trial-and-error analysis, which is problematic for more complex, computationally intensive calibration applications. ɛ-NSGAII appears to be superior to MOSCEM-UA and competitive with SPEA2 for hydrologic model calibration. ɛ-NSGAII's primary strength lies in its ease-of-use due to its dynamic population sizing and archiving which lead to rapid convergence to very high quality solutions with minimal user input. MOSCEM-UA is best suited for hydrologic model calibration applications that have small parameter sets

  10. Multi-level Monte Carlo Methods for Efficient Simulation of Coulomb Collisions

    Science.gov (United States)

    Ricketson, Lee

    2013-10-01

    We discuss the use of multi-level Monte Carlo (MLMC) schemes, originally introduced by Giles for financial applications, for the efficient simulation of Coulomb collisions in the Fokker-Planck limit. The scheme is based on a Langevin treatment of collisions, and reduces the computational cost of achieving an RMS error scaling as ε from O(ε⁻³), for standard Langevin methods and binary collision algorithms, to the theoretically optimal scaling O(ε⁻²) for the Milstein discretization, and to O(ε⁻² (log ε)²) with the simpler Euler-Maruyama discretization. In practice, this speeds up simulation by factors of up to 100. We summarize standard MLMC schemes, describe some tricks for achieving the optimal scaling, present results from a test problem, and discuss the method's range of applicability. This work was performed under the auspices of the U.S. DOE by the University of California, Los Angeles, under grant DE-FG02-05ER25710, and by LLNL under contract DE-AC52-07NA27344.
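
    To make the telescoping-sum idea concrete, the sketch below applies a coupled-level MLMC estimator to a toy Langevin equation (an Ornstein-Uhlenbeck velocity process), not to the Coulomb collision operator of the talk: coarse and fine Euler-Maruyama paths share Brownian increments, and the level corrections E[P_l − P_{l−1}] are summed.

```python
import numpy as np

rng = np.random.default_rng(0)

def level_estimator(l, n_samples, T=1.0):
    """Coupled coarse/fine Euler-Maruyama estimate of E[P_l - P_{l-1}] for
    the toy SDE dv = -v dt + sqrt(2) dW, with payoff P = v(T)^2."""
    nf = 2 ** l                              # fine steps on level l
    hf = T / nf
    dW = rng.normal(0.0, np.sqrt(hf), size=(n_samples, nf))
    vf = np.zeros(n_samples)
    for k in range(nf):                      # fine path
        vf += -vf * hf + np.sqrt(2.0) * dW[:, k]
    if l == 0:
        return (vf ** 2).mean()
    vc = np.zeros(n_samples)                 # coarse path: summed increments
    hc = 2 * hf
    for k in range(nf // 2):
        vc += -vc * hc + np.sqrt(2.0) * (dW[:, 2 * k] + dW[:, 2 * k + 1])
    return (vf ** 2 - vc ** 2).mean()

# Telescoping MLMC estimate: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
# with fewer samples on the finer (more expensive) levels.
estimate = sum(level_estimator(l, max(2_000, 50_000 >> l)) for l in range(6))
print(estimate)    # for this SDE, E[v(T)^2] -> 1 - exp(-2T) ~ 0.865
```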

  11. Study of Efficiency Calibrations of HPGe Detectors for Radioactivity Measurements of Environmental Samples

    International Nuclear Information System (INIS)

    In this paper, we describe a method for calibrating the efficiency of HPGe gamma-ray spectrometers for bulk environmental samples (tea, crops, water and soil), a significant part of environmental radioactivity measurements. We discuss the full-energy-peak efficiency (FEPE) of three HPGe detectors; as a consequence of set-up dependence, it is essential that the efficiency be determined for each set-up employed. Moreover, to take full advantage of gamma-ray spectrometry, a set of efficiencies at several energies covering a wide energy range is needed: the wider the energy range, the larger the number of radionuclides whose concentrations can be determined. To measure the main natural gamma-ray emitters, the efficiency should be known at least from 46.54 keV (210Pb) to 1836 keV (88Y). Radioactive sources were prepared from two different standards, a first mixed standard, QC Y40, containing 210Pb, 241Am, 109Cd and 57Co, and a second, QC Y48, containing 241Am, 109Cd, 57Co, 139Ce, 113Sn, 85Sr, 137Cs, 88Y and 60Co, which is necessary in order to calculate the activity of the different radionuclides contained in a sample. In this work, we study the efficiency calibration as a function of different parameters: gamma-ray energy from 46.54 keV (210Pb) to 1836 keV (88Y); three different detectors A, B and C; container geometry (point source, Marinelli beaker, and 1 L cylindrical bottle); height of standard soil samples in a 250 ml bottle; and density of standard environmental samples. These standard environmental samples must be measured before the standard solution is added, because the same environmental samples are used in order to account for self-absorption and composition, especially in the case of volume samples.

  12. A fast, primary-interaction Monte Carlo methodology for determination of total efficiency of cylindrical scintillation gamma-ray detectors

    Directory of Open Access Journals (Sweden)

    Rehman Shakeel U.

    2009-01-01

    A primary-interaction based Monte Carlo algorithm has been developed for the determination of the total efficiency of cylindrical scintillation γ-ray detectors. The methodology has been implemented in a Matlab-based computer program, BPIMC. For point isotropic sources at axial locations with respect to the detector axis, excellent agreement has been found between the predictions of the BPIMC code and the corresponding results obtained by hybrid Monte Carlo as well as by experimental measurements over a wide range of γ-ray energies. For off-axis point sources, comparison of the BPIMC predictions with the corresponding results obtained by direct calculations as well as by conventional Monte Carlo schemes shows good agreement, validating the proposed algorithm. Using the BPIMC program, the energy-dependent detector efficiency has been found to approach an asymptotic profile on increasing either the thickness or the diameter of the scintillator while keeping the other fixed. The variation of the energy-dependent total efficiency of a 3" x 3" NaI(Tl) scintillator with axial distance has been studied using the BPIMC code. About two orders of magnitude change in detector efficiency has been observed as the axial distance varies from zero to 50 cm. For small axial separations, a similarly large variation in total efficiency has also been observed for 137Cs as well as 60Co sources on increasing the axial offset from zero to 50 cm.
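
    A primary-interaction estimate of total efficiency is simple enough to sketch: sample emission directions that hit the front face of the cylinder, compute the chord length through the crystal, and average the probability of at least one interaction. This is our own illustrative version, not the BPIMC code; the attenuation coefficient and geometry below are assumed values.

```python
import numpy as np

def total_efficiency(mu, R, H, d, n=1_000_000, seed=2):
    """Primary-interaction MC total efficiency of a bare cylindrical
    scintillator (radius R, height H, linear attenuation coefficient mu,
    all in cm) for a point isotropic source on the axis, a distance d
    above the front face."""
    rng = np.random.default_rng(seed)
    u_min = d / np.hypot(d, R)              # cos(theta) at the face rim
    u = rng.uniform(u_min, 1.0, n)          # only directions hitting the face
    sin_t = np.sqrt(np.clip(1.0 - u ** 2, 1e-30, None))
    # Chord length: exit through the back face (H/u) or the side wall.
    L = np.minimum(H / u, R / sin_t - d / u)
    p_interact = 1.0 - np.exp(-mu * L)      # >= one primary interaction
    omega_frac = 0.5 * (1.0 - u_min)        # fractional solid angle sampled
    return omega_frac * p_interact.mean()

# 3" x 3" NaI(Tl) (R = 3.81 cm, H = 7.62 cm); mu ~ 0.28 /cm near 662 keV
# and the 10 cm source distance are assumed for illustration.
print(total_efficiency(mu=0.28, R=3.81, H=7.62, d=10.0))
```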

  13. Improved efficiency in Monte Carlo simulation for passive-scattering proton therapy

    International Nuclear Information System (INIS)

    The aim of this work was to improve the computational efficiency of Monte Carlo simulations when tracking protons through a proton therapy treatment head. Two proton therapy facilities were considered: the Francis H Burr Proton Therapy Center (FHBPTC) at the Massachusetts General Hospital and the Crocker Lab eye treatment facility used by the University of California at San Francisco (UCSFETF). The computational efficiency was evaluated for phase-space files scored at the exit of the treatment head, to determine optimal parameters that improve efficiency while maintaining accuracy in the dose calculation. For FHBPTC, particles were split by a factor of 8 upstream of the second scatterer and upstream of the aperture; the radius of the region for Russian roulette was set to 2.5 or 1.5 times the radius of the aperture, and a secondary-particle production cut (PC) of 50 mm was applied. For UCSFETF, particles were split by a factor of 16 upstream of a water absorber column and upstream of the aperture; here, the radius of the region for Russian roulette was set to 4 times the radius of the aperture, and a PC of 0.05 mm was applied. In both setups, the cylindrical symmetry of the proton beam was exploited to position the split particles randomly spaced around the beam axis. When simulating a phase space for subsequent water-phantom simulations, efficiency gains between a factor of 19.9 ± 0.1 and 52.21 ± 0.04 for the FHBPTC setups and of 57.3 ± 0.5 for the UCSFETF setups were obtained. For a phase space used as input for simulations in a patient geometry, the gain was a factor of 78.6 ± 7.5. Lateral-dose curves in water were within the accepted clinical tolerance of 2%, with statistical uncertainties of 0.5% for the two facilities. For the patient geometry, considering the 2% and 2 mm criteria, 98.4% of the voxels showed a gamma index lower than unity. An analysis of the dose distribution resulted in systematic deviations below 0.88% for 20
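
    The two variance-reduction devices named above, particle splitting and Russian roulette, both manipulate statistical weights so that the tally stays unbiased. The sketch below is a schematic illustration of that bookkeeping with hypothetical parameters, not the actual treatment-head simulation.

```python
import numpy as np

rng = np.random.default_rng(3)

def split(particle, factor=8):
    """Split one particle into `factor` copies, each carrying 1/factor of
    the statistical weight (as done upstream of the scatterer/aperture)."""
    return [dict(particle, weight=particle["weight"] / factor)
            for _ in range(factor)]

def russian_roulette(particle, r, r_roulette, survival=0.25):
    """Outside the roulette radius, kill with probability 1 - survival;
    survivors are re-weighted by 1/survival to keep the tally unbiased."""
    if r <= r_roulette:
        return particle                  # inside the important region: keep
    if rng.random() < survival:
        return dict(particle, weight=particle["weight"] / survival)
    return None                          # particle terminated

aperture_radius = 2.0                    # cm, hypothetical
p = {"energy_MeV": 230.0, "weight": 1.0, "r": 3.2}

for copy in split(p):
    kept = russian_roulette(copy, copy["r"], 2.5 * aperture_radius)
    if kept is not None:
        pass                             # continue tracking this copy
```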

  14. Calibration of STUD+ parameters to achieve optimally efficient broadband adiabatic decoupling in a single transient

    Science.gov (United States)

    Bendall; Skinner

    1998-10-01

    To provide the most efficient conditions for spin decoupling with the least RF power, master calibration curves are provided for the maximum centerband amplitude and the minimum amplitude of the largest cycling sideband resulting from STUD+ adiabatic decoupling applied during a single free induction decay. The principal curve is defined as a function of the four most critical experimental input parameters: the maximum amplitude of the RF field, RFmax; the length of the sech/tanh pulse, Tp; the extent of the frequency sweep, bwdth; and the coupling constant, Jo. Less critical parameters, the effective (or actual) decoupled bandwidth, bweff, and the sech/tanh truncation factor, beta, which become more important as bwdth is decreased, are calibrated in separate curves. The relative importance of nine additional factors in determining optimal decoupling performance in a single transient is considered. Specific parameters for efficient adiabatic decoupling can be determined via a set of four equations, which will be most useful for 13C decoupling, covering the range of one-bond 13C-1H coupling constants from 125 to 225 Hz and decoupled bandwidths of 7 to 100 kHz, a bandwidth of 100 kHz being the requirement for a 2 GHz spectrometer. The four equations are derived from a recent vector model of adiabatic decoupling, and experiment, supported by computer simulations. The vector model predicts an inverse linear relation between the centerband and maximum sideband amplitudes, and a simple parabolic relationship between the maximum sideband amplitude and the product JoTp. The ratio bwdth/(RFmax)² can be viewed as a characteristic time scale, τc, affecting sideband levels, with τc ≈ Tp giving the most efficient STUD+ decoupling, as suggested by the adiabatic condition. Functional relationships between bwdth and the less critical parameters, bweff and beta, for efficient decoupling can be derived from Bloch-equation calculations of the inversion profile

  15. Monte Carlo simulation of efficient data acquisition for an entire-body PET scanner

    Energy Technology Data Exchange (ETDEWEB)

    Isnaini, Ismet; Obi, Takashi [Tokyo Institute of Technology, 4259 Nagatsuta-cho, Midori-ku, Yokohama 226-8503 (Japan); Yoshida, Eiji, E-mail: rush@nirs.go.jp [National Institute of Radiological Sciences, 4-9-1 Inage-ku, Chiba 263-8555 (Japan); Yamaya, Taiga [National Institute of Radiological Sciences, 4-9-1 Inage-ku, Chiba 263-8555 (Japan)

    2014-07-01

    Conventional PET scanners image the whole body using many bed positions. An entire-body PET scanner with an extended axial FOV, which can trace whole-body uptake simultaneously and thereby improve sensitivity for dynamic imaging, has long been desired. Such a scanner has to process a large amount of data effectively; as a result, it suffers high dead time in the multiplexed detector-grouping process and records many oblique lines of response. In this work, we study efficient data acquisition for the entire-body PET scanner using Monte Carlo simulation. The simulated entire-body PET scanner, based on depth-of-interaction detectors, has a 2016-mm axial field-of-view (FOV) and an 80-cm ring diameter. Since the entire-body PET scanner has higher singles-data loss at the grouping circuits than a conventional PET scanner, its NECR decreases; this loss is mitigated by separating the axially arranged detectors into multiple parts. Our choice of 3 groups of axially arranged detectors was shown to increase the peak NECR by 41%. An appropriate choice of maximum ring difference (MRD) also maintains high sensitivity and peak NECR while reducing the data size. The extremely oblique lines of response of the large axial FOV do not contribute much to the performance of the scanner: the total sensitivity with full MRD increased by only 15% over that with about half MRD, and the peak NECR saturated at about half MRD. The entire-body PET scanner thus promises a large axial FOV with sufficient performance without using the full data.

  16. Monte Carlo simulation of efficient data acquisition for an entire-body PET scanner

    International Nuclear Information System (INIS)

    Conventional PET scanners image the whole body using many bed positions. An entire-body PET scanner with an extended axial FOV, which can trace whole-body uptake simultaneously and thereby improve sensitivity for dynamic imaging, has long been desired. Such a scanner has to process a large amount of data effectively; as a result, it suffers high dead time in the multiplexed detector-grouping process and records many oblique lines of response. In this work, we study efficient data acquisition for the entire-body PET scanner using Monte Carlo simulation. The simulated entire-body PET scanner, based on depth-of-interaction detectors, has a 2016-mm axial field-of-view (FOV) and an 80-cm ring diameter. Since the entire-body PET scanner has higher singles-data loss at the grouping circuits than a conventional PET scanner, its NECR decreases; this loss is mitigated by separating the axially arranged detectors into multiple parts. Our choice of 3 groups of axially arranged detectors was shown to increase the peak NECR by 41%. An appropriate choice of maximum ring difference (MRD) also maintains high sensitivity and peak NECR while reducing the data size. The extremely oblique lines of response of the large axial FOV do not contribute much to the performance of the scanner: the total sensitivity with full MRD increased by only 15% over that with about half MRD, and the peak NECR saturated at about half MRD. The entire-body PET scanner thus promises a large axial FOV with sufficient performance without using the full data.

  17. Thermodynamics of long supercoiled molecules: insights from highly efficient Monte Carlo simulations.

    Science.gov (United States)

    Lepage, Thibaut; Képès, François; Junier, Ivan

    2015-07-01

    Supercoiled DNA polymer models for which the torsional energy depends on the total twist of the molecules (Tw) are a priori well suited for thermodynamic analysis of long molecules. So far, nevertheless, the exact determination of Tw in these models has been based on computing the writhe of the molecules (Wr) and exploiting the conservation of the linking number, Lk = Tw + Wr, which reflects topological constraints arising from the helical nature of DNA. Because Wr is equal to the number of times the main axis of a DNA molecule winds around itself, current Monte Carlo algorithms have a quadratic time complexity, O(L²), with respect to the contour length (L) of the molecules. Here, we present an efficient method to compute Tw exactly, leading in principle to algorithms with linear complexity, which in practice is O(L^1.2). Specifically, we use a discrete wormlike chain that includes the explicit double-helix structure of DNA and in which the linking number is conserved by continuously preventing the generation of twist between any two consecutive cylinders of the discretized chain. As an application, we show that long (up to 21 kbp) linear molecules stretched by mechanical forces akin to magnetic tweezers contain, in the buckling regime, multiple and branched plectonemes that often coexist with curls and helices, and whose length and number are in good agreement with experiments. By attaching the ends of the molecules to a reservoir of twists with which they can exchange helix turns, we also show how to compute the torques in these models. As an example, we report values that are in good agreement with experiments and that concern the longest molecules studied so far (16 kbp). PMID:26153710

  18. Monte Carlo-derived TLD cross-calibration factors for treatment verification and measurement of skin dose in accelerated partial breast irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Garnica-Garza, H M [Centro de Investigacion y de Estudios Avanzados del Instituto Politecnico Nacional Unidad Monterrey, VIa del Conocimiento 201 Parque de Investigacion e Innovacion Tecnologica, Apodaca NL C.P. 66600 (Mexico)], E-mail: hgarnica@cinvestav.mx

    2009-03-21

    Monte Carlo simulation was employed to calculate the response of TLD-100 chips under irradiation conditions such as those found during accelerated partial breast irradiation with the MammoSite radiation therapy system. The absorbed dose versus radius in the last 0.5 cm of the treated volume was also calculated, employing a resolution of 20 μm, and a function that fits the observed data was determined. Several clinically relevant irradiation conditions were simulated for different combinations of balloon size, balloon-to-surface distance and contents of the contrast solution used to fill the balloon. The thermoluminescent dosemeter (TLD) cross-calibration factors were derived assuming that the calibration of the dosemeters was carried out using a cobalt-60 beam, and in such a way that they provide a set of parameters that reproduce the function that describes the behavior of the absorbed dose versus radius curve. Such factors may also prove to be useful for those standardized laboratories that provide postal dosimetry services.

  19. Monte Carlo-derived TLD cross-calibration factors for treatment verification and measurement of skin dose in accelerated partial breast irradiation

    International Nuclear Information System (INIS)

    Monte Carlo simulation was employed to calculate the response of TLD-100 chips under irradiation conditions such as those found during accelerated partial breast irradiation with the MammoSite radiation therapy system. The absorbed dose versus radius in the last 0.5 cm of the treated volume was also calculated, employing a resolution of 20 μm, and a function that fits the observed data was determined. Several clinically relevant irradiation conditions were simulated for different combinations of balloon size, balloon-to-surface distance and contents of the contrast solution used to fill the balloon. The thermoluminescent dosemeter (TLD) cross-calibration factors were derived assuming that the calibration of the dosemeters was carried out using a Cobalt 60 beam, and in such a way that they provide a set of parameters that reproduce the function that describes the behavior of the absorbed dose versus radius curve. Such factors may also prove to be useful for those standardized laboratories that provide postal dosimetry services.

  20. Determination of relative efficiency of a detector using Monte Carlo method; Determinacao da eficiencia relativa de um detector usando metodo de Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Medeiros, M.P.C.; Rebello, W.F., E-mail: eng.cavaliere@ime.eb.br, E-mail: rebello@ime.eb.br [Instituto Militar de Engenharia (IME), Rio de Janeiro, RJ (Brazil). Secao de Engenharia Nuclear; Lopes, J.M.; Silva, A.X., E-mail: marqueslopez@yahoo.com.br, E-mail: ademir@nuclear.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear

    2015-07-01

    High-purity germanium (HPGe) detectors are mandatory tools for gamma spectrometry because of their excellent energy resolution. The efficiency of such detectors, quoted in the manufacturer's list of specifications, frequently refers to the relative full-energy-peak efficiency, defined relative to the absolute full-energy-peak efficiency of a 7.6 cm x 7.6 cm (diameter x height) NaI(Tl) crystal for the 1.33 MeV peak of a 60Co source positioned 25 cm from the detector. In this study, we used the MCNPX code to simulate an HPGe detector (Canberra GC3020) from the Real-Time Neutrongraphy Laboratory of UFRJ, scoring the spectrum of a 60Co source located 25 cm from the detector in order to calculate and confirm the efficiency declared by the manufacturer. Agreement between experimental and simulated data was achieved. The model under development will be used for calculation and comparison purposes against the detector calibration curve from the Genie2000™ software, also serving as a reference for future studies. (author)
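
    The relative-efficiency convention described above can be reproduced with a one-line calculation once the 1332.5 keV net peak area is known. The counts and activity below are hypothetical; the 1.2 × 10⁻³ reference value is the canonical absolute efficiency of the 7.6 cm × 7.6 cm NaI(Tl) crystal at 25 cm.

```python
# Relative efficiency per the usual IEEE convention: the absolute
# 1332.5 keV full-energy-peak efficiency at 25 cm, divided by the
# canonical 1.2e-3 value of a 3" x 3" NaI(Tl) crystal.
net_peak_counts = 5.2e4     # 1332.5 keV net peak area (hypothetical)
live_time_s     = 3600.0
activity_bq     = 4.0e4     # certified 60Co activity on the measurement date
gamma_yield     = 0.9998    # 1332.5 keV emission probability

abs_eff = net_peak_counts / (live_time_s * activity_bq * gamma_yield)
rel_eff = abs_eff / 1.2e-3  # NaI(Tl) reference absolute efficiency
print(f"relative efficiency: {100 * rel_eff:.1f}%")   # ~30% here
```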

  1. CdTe detector efficiency calibration using thick targets of pure and stable compounds

    International Nuclear Information System (INIS)

    Quantitative PIXE measurements require perfectly calibrated set-ups. Cooled CdTe detectors have good efficiency at energies above those covered by Si(Li) detectors and open up the possibility of studying K X-ray lines instead of L X-ray lines for medium and eventually heavy elements, an important advantage in various cases if only limited-resolution systems are available in the low-energy range. In this work we present and discuss spectra from a CdTe semiconductor detector covering the energy region from Cu (Kα1 = 8.047 keV) to U (Kα1 = 98.439 keV). Pure thick samples were irradiated with proton beams at the ITN 3.0 MV Tandetron accelerator in the High Resolution High Energy PIXE set-up. Results, and an application to the study of a dark stone sample from the Portuguese Ossa Morena region, are presented in this work.

  2. Calibrating Self-Reported Measures of Maternal Smoking in Pregnancy via Bioassays Using a Monte Carlo Approach

    Directory of Open Access Journals (Sweden)

    Lauren S. Wakschlag

    2009-06-01

    Maternal smoking during pregnancy is a major public health problem that has been associated with numerous short- and long-term adverse health outcomes in offspring. However, characterizing smoking exposure during pregnancy precisely has been rather difficult: self-reported measures of smoking often suffer from recall bias, deliberate misreporting, and selective non-disclosure, while single bioassay measures of nicotine metabolites only reflect recent smoking history and cannot capture the fluctuating and complex patterns of varying exposure of the fetus. Recently, Dukic et al. [1] proposed a statistical method for combining information from both sources in order to increase the precision of the exposure measurement and the power to detect more subtle effects of smoking. In this paper, we extend the Dukic et al. [1] method to incorporate individual variation of the metabolic parameters (such as clearance rates) into the calibration model of smoking exposure during pregnancy. We apply the new method to the Family Health and Development Project (FHDP), a small convenience sample of 96 predominantly working-class white pregnant women oversampled for smoking. We find that, on average, misreporters smoke 7.5 cigarettes more than they report, with about one third underreporting by 1.5, one third by about 6.5, and one third by 8.5 cigarettes. Partly due to the limited demographic heterogeneity of the FHDP sample, the results are similar to those obtained by the deterministic calibration model, whose adjustments were slightly lower (by 0.5 cigarettes on average). The new results are also, as expected, less sensitive to the assumed values of the cotinine half-life.

  3. Precise Efficiency Calibration of an HPGe Detector Using the Decay of 180m Hf

    International Nuclear Information System (INIS)

    Superallowed 0+ → 0+ nuclear beta decays provide both the best test of the Conserved Vector Current (CVC) hypothesis and, together with the muon lifetime, the most accurate value for the up-down quark-mixing matrix element, Vud, of the Cabibbo-Kobayashi-Maskawa (CKM) matrix. This matrix should be unitary, and experimental verification of that expectation constitutes an important test of the Standard Model. Aiming for a definitive test of CKM unitarity, we have mounted a program at Texas A&M University to establish (or eliminate) the discrepancy with unitarity. One correction accounts for isospin symmetry breaking, and its accuracy can be tested by measurements of the ft-values of Tz = -1 superallowed emitters (e.g. 22Mg and 30S) to a precision of about ±0.1%. A requirement for these measurements is a detector whose detection efficiency is known to the same precision. However, calibration of a detector's efficiency to this level of precision is extremely challenging, since very few sources provide γ-rays whose intensities (relative or absolute) are known to better than ±0.5%. The isomer 180mHf (t1/2 = 5.5 h) provides a very precise γ-ray calibration source in the 90 to 330 keV energy range. The decay of 180mHf to the 180Hf ground state includes a cascade of three consecutive E2 γ-ray transitions with energies of 93.3, 215.2 and 332.3 keV and no other feeding of the intermediate states. This provides a uniquely well-known calibration standard, since the relative γ-ray intensities emitted depend only on the calculated E2 conversion coefficients. The 180mHf isomer was produced by irradiation of a 0.91 mg sample of HfO2, isotopically enriched to 87% in 179Hf, at the TRIGA reactor in the TAMU Nuclear Science Center. In order to minimise self-absorption of γ-rays in Hf, we required a thin source, which was prepared following a procedure described by Kellog and Norman. The activated HfO2 sample was dissolved in 0.50 ml of hot 48% HF acid to

  4. An Analysis on the Calculation Efficiency of the Responses Caused by the Biased Adjoint Fluxes in Hybrid Monte Carlo Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Khuat, Quang Huy; Kim, Song Hyun; Kim, Do Hyun; Shin, Chang Ho [Hanyang University, Seoul (Korea, Republic of)

    2015-05-15

    This technique is known as the Consistent Adjoint Driven Importance Sampling (CADIS) method and is implemented in the SCALE code system. In the CADIS method, an adjoint transport equation has to be solved to determine deterministic importance functions. When using the CADIS method, it was noted that biased adjoint fluxes estimated by deterministic methods can affect the calculation efficiency and error. The biases of the adjoint function are caused by the methodology, the calculation strategy, the tolerance of the deterministic result, and inaccurate multi-group cross-section libraries. In this paper, a study analyzing the influence of biased adjoint functions on Monte Carlo computational efficiency is pursued. A method to estimate the calculation efficiency was proposed for applying biased adjoint fluxes in the CADIS approach. For a benchmark problem, the responses and figures of merit (FOMs) obtained with the SCALE code system were evaluated as the adjoint fluxes were applied. The results show that biased adjoint fluxes significantly affect the calculation efficiency.
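
    For reference, the core CADIS relations are short: the source is biased proportionally to the adjoint (importance) function, and each particle is born with the compensating weight, which also sets the weight-window centers. The sketch below uses made-up adjoint fluxes over four source cells; it illustrates the algebra, not the SCALE implementation.

```python
import numpy as np

# Hypothetical deterministic adjoint fluxes over source cells, as would
# come from a discrete-ordinates solve in a CADIS sequence.
adjoint_flux = np.array([0.02, 0.15, 1.30, 4.80])
source       = np.array([0.40, 0.30, 0.20, 0.10])   # normalized source q

response = np.sum(source * adjoint_flux)   # estimated detector response R

# CADIS source biasing: sample births in proportion to importance ...
biased_source = source * adjoint_flux / response

# ... and start each particle with the compensating weight w = q / q_hat,
# which equals R / adjoint_flux, the weight-window center in that cell.
birth_weight = source / biased_source      # elementwise R / adjoint_flux

print(biased_source, birth_weight)
```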

  5. Calibration Analyses and Efficiency Studies for the Anti Coincidence Detector on the Fermi Gamma Ray Space Telescope

    Energy Technology Data Exchange (ETDEWEB)

    Kachulis, Chris; /Yale U. /SLAC

    2011-06-22

    The Anti-Coincidence Detector (ACD) on the Fermi Gamma-ray Space Telescope provides charged-particle rejection for the Large Area Telescope (LAT). We use two of the ACD's calibrations to conduct three studies of its performance. We examine the trending of the calibrations to search for damage and to find a timescale over which the calibrations can be considered reliable. We also calculated the number of photoelectrons counted by a PMT on the ACD for a normally incident proton. Third, we calculated the veto efficiencies of the ACD for two different veto settings. The trends of the calibrations exhibited no signs of damage and indicated calibration-reliability timescales of one to two years. The number of photoelectrons calculated ranged from 5 to 25; uncertainty in the energy spectrum of the charged particles gave these values very large errors of around 60 percent. Finally, the veto efficiencies were found to be very high at both veto values, both for charged particles and for the lower-energy backsplash spectrum. The ACD is a detector system built around the silicon-strip tracker of the LAT, and its purpose is to provide charged-particle rejection for the LAT. To do this, the ACD must be calibrated correctly in flight and must be able to veto charged-particle events efficiently while minimizing false vetoes due to 'backsplash' from photons in the calorimeter. There are eleven calibrations used by the ACD. In this paper, we discuss the use of two of these calibrations to perform three studies of the performance of the ACD. The first study examines trending of the calibrations to check for possible hardware degradation. The second study uses the calibrations to explore the efficiency of an on-board hardware veto. The third study uses the calibrations to calculate the number of photoelectrons seen by each PMT when a

  6. The Adjoint Monte Carlo - a viable option for efficient radiotherapy treatment planning

    International Nuclear Information System (INIS)

    In cancer therapy using collimated photon beams, the radiation oncologist must determine a set of beams that delivers the required dose to each point in the tumor while minimizing the risk of damage to healthy tissue and vital organs. Currently, the oncologist determines these beams iteratively, using a sequence of dose calculations based on approximate numerical methods. In this paper, a more accurate and potentially faster approach, based on the Adjoint Monte Carlo method, is presented. (authors)

  7. Nonequilibrium candidate Monte Carlo: A new tool for efficient equilibrium simulation

    Energy Technology Data Exchange (ETDEWEB)

    Nilmeier, Jerome P.; Crooks, Gavin E.; Minh, David D. L.; Chodera, John D.

    2011-11-08

    Metropolis Monte Carlo simulation is a powerful tool for studying the equilibrium properties of matter. In complex condensed-phase systems, however, it is difficult to design Monte Carlo moves with high acceptance probabilities that also rapidly sample uncorrelated configurations. Here, we introduce a new class of moves based on nonequilibrium dynamics: candidate configurations are generated through a finite-time process in which a system is actively driven out of equilibrium, and accepted with criteria that preserve the equilibrium distribution. The acceptance rule is similar to the Metropolis acceptance probability, but related to the nonequilibrium work rather than the instantaneous energy difference. Our method is applicable to sampling from either a single thermodynamic state or a mixture of thermodynamic states, and allows both coordinates and thermodynamic parameters to be driven in nonequilibrium proposals. While generating finite-time switching trajectories incurs an additional cost, driving some degrees of freedom while allowing others to evolve naturally can lead to large enhancements in acceptance probabilities, greatly reducing structural correlation times. Using nonequilibrium driven processes vastly expands the repertoire of useful Monte Carlo proposals in simulations of dense solvated systems.
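
    A minimal sketch of the acceptance rule on a toy one-dimensional potential (our own construction, not the paper's implementation): a parameter λ is driven along a symmetric 0→1→0 protocol, ordinary Metropolis moves relax the coordinate between perturbation steps, the protocol work is accumulated, and the driven candidate is accepted with probability min(1, e^(−βw)).

```python
import numpy as np

rng = np.random.default_rng(4)
beta = 1.0
U = lambda x, lam: 0.5 * (x - lam) ** 2 + np.cos(4 * x)   # toy potential

def metropolis_relax(x, lam, n=5, dx=0.4):
    """Ordinary Metropolis moves at fixed lambda (preserve exp(-beta*U))."""
    for _ in range(n):
        xp = x + rng.normal(0, dx)
        if rng.random() < np.exp(-beta * (U(xp, lam) - U(x, lam))):
            x = xp
    return x

def ncmc_move(x, protocol=np.concatenate([np.linspace(0, 1, 10),
                                          np.linspace(1, 0, 10)])):
    """Drive lambda along a symmetric protocol, accumulating protocol work;
    accept the driven candidate with probability min(1, exp(-beta*work))."""
    x_new, work, lam = x, 0.0, protocol[0]
    for lam_next in protocol[1:]:
        work += U(x_new, lam_next) - U(x_new, lam)   # perturbation work
        lam = lam_next
        x_new = metropolis_relax(x_new, lam)         # relax at new lambda
    return x_new if rng.random() < np.exp(-beta * work) else x

x, samples = 0.0, []
for _ in range(2000):
    x = ncmc_move(x)          # samples exp(-beta*U(x, 0)) at equilibrium
    samples.append(x)
```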

  8. Thermal inertia and energy efficiency – Parametric simulation assessment on a calibrated case study

    International Nuclear Information System (INIS)

    Highlights: • We perform a parametric simulation study on a calibrated building energy model. • We introduce adaptive shadings and night free cooling in simulations. • We analyze the effect of thermal capacity on the parametric simulation results. • We recognize that cooling demand and savings scale linearly with thermal capacity. • We assess the advantage of medium-heavy over medium and light configurations. - Abstract: The reduction of energy consumption for heating and cooling services in the existing building stock is a key challenge for global sustainability today, and buildings' envelope retrofit is one of the main issues. Most existing building envelopes have low levels of insulation, high thermal losses due to thermal bridges and cracks, absence of appropriate solar control, etc. Further, in building refurbishment, the importance of a system-level approach is often undervalued in favour of simplistic “off the shelf” efficient solutions, focused on the reduction of thermal transmittance and on the enhancement of solar control capabilities. In many cases, the importance of the dynamic thermal properties is neglected or underestimated, and the effective thermal capacity is not properly considered as one of the design parameters. The research presented aims to critically assess the influence of the dynamic thermal properties of the building fabric (roof, walls and floors) on sensible heating and cooling energy demand for a case study. The case study chosen is an existing office building which has been retrofitted in recent years and whose energy model has been calibrated according to the data collected in the monitoring process. The research illustrates the variations of the sensible thermal energy demand of the building in different retrofit scenarios, and relates them to the variations of the dynamic thermal properties of the construction components. A parametric simulation study has been performed, encompassing the use of

  9. Efficient Estimation of Highly Structured Posteriors of Gravitational-Wave Signals with Markov-Chain Monte Carlo

    CERN Document Server

    Farr, Benjamin; Luijten, Erik

    2013-01-01

    We introduce a new Markov-chain Monte Carlo (MCMC) approach designed for the efficient sampling of highly correlated and multimodal posteriors. Parallel tempering, though effective, is a costly technique for sampling such posteriors. Our approach minimizes the use of parallel tempering, using it only for a short time to tune a new jump proposal. For complex posteriors we find efficiency improvements of up to a factor of ~13. The estimation of parameters of gravitational-wave signals measured by ground-based detectors is currently done through Bayesian inference, with MCMC one of the leading sampling methods. Posteriors for these signals are typically multimodal with strong non-linear correlations, making sampling difficult. As we enter the advanced-detector era, improved sensitivities and wider bandwidths will drastically increase the computational cost of analyses, demanding more efficient search algorithms to meet these challenges.

  10. Efficient implementation of the Monte Carlo method for lattice gauge theory calculations on the floating point systems FPS-164

    International Nuclear Information System (INIS)

    The computer program calculates the average action per plaquette for SU(6)/Z6 lattice gauge theory. By considering quantum field theory on a space-time lattice, the ultraviolet divergences of the theory are regulated through the finite lattice spacing. The continuum theory results can be obtained by a renormalization group procedure. Making use of the FPS Mathematics Library (MATHLIB), we are able to generate an efficient code for the Monte Carlo algorithm for lattice gauge theory calculations which compares favourably with the performance of the CDC 7600. (orig.)

  11. Experimental characterization and Monte Carlo simulation of Si(Li) detector efficiency by radioactive sources and PIXE

    Energy Technology Data Exchange (ETDEWEB)

    Mesradi, M. [Institut Pluridisciplinaire Hubert-Curien, UMR 7178 CNRS/IN2P3 et Universite Louis Pasteur, 23 rue du Loess, BP 28, F-67037 Strasbourg Cedex 2 (France); Elanique, A. [Departement de Physique, FS/BP 8106, Universite Ibn Zohr, Agadir, Maroc (Morocco); Nourreddine, A. [Institut Pluridisciplinaire Hubert-Curien, UMR 7178 CNRS/IN2P3 et Universite Louis Pasteur, 23 rue du Loess, BP 28, F-67037 Strasbourg Cedex 2 (France)], E-mail: abdelmjid.nourreddine@ires.in2p3.fr; Pape, A.; Raiser, D.; Sellam, A. [Institut Pluridisciplinaire Hubert-Curien, UMR 7178 CNRS/IN2P3 et Universite Louis Pasteur, 23 rue du Loess, BP 28, F-67037 Strasbourg Cedex 2 (France)

    2008-06-15

    This work relates to the study and characterization of the response function of an X-ray spectrometry system. The intrinsic efficiency of a Si(Li) detector has been simulated with the Monte Carlo codes MCNP and GEANT4 in the photon energy range of 2.6-59.5 keV. After it was found necessary to take a radiograph of the detector inside its cryostat to learn the correct dimensions, agreement within 10% between the simulations and experimental measurements with several point-like sources and PIXE results was obtained.

  12. A regional application of the MAGIC model in Wales: calibration and assessment of future recovery using a Monte-Carlo approach

    Directory of Open Access Journals (Sweden)

    C. E. M. Sefton

    1998-01-01

    A survey and resurvey of 77 headwater streams in Wales provides an opportunity for assessing changes in streamwater chemistry in the region. The Model of Acidification of Groundwater In Catchments (MAGIC) has been calibrated to the second of two surveys, taken in 1994-1995, using a Monte-Carlo methodology. The first survey, 1983-1984, provides a basis for model validation. The model simulates a significant decline of water quality across the region since industrialisation. Agreed reductions in sulphur (S) emissions in Europe in accordance with the Second S Protocol will result in a 49% reduction of S deposition across Wales from 1996 to 2010. In response to these reductions, the proportion of streams in the region with mean annual acid neutralising capacity (ANC) > 0 is predicted to increase from 81% in 1995 to 90% by 2030. The greatest recovery between 1984 and 1995 and into the future is at those streams with low ANC. In order to ensure that streams in the most heavily acidified areas of Wales recover to ANC zero by 2030, a reduction of S deposition of 80-85% will be required.

  13. Efficient Monte Carlo modelling of individual tumour cell propagation for hypoxic head and neck cancer

    Science.gov (United States)

    Tuckwell, W.; Bezak, E.; Yeoh, E.; Marcu, L.

    2008-09-01

    A Monte Carlo tumour model has been developed to simulate tumour cell propagation for head and neck squamous cell carcinoma. The model aims to eventually provide a radiobiological tool for radiation oncology clinicians to plan patient treatment schedules based on properties of the individual tumour. The inclusion of an oxygen distribution amongst the tumour cells enables the model to incorporate hypoxia and other associated parameters, which affect tumour growth. The model algorithm has been implemented as an object-oriented program in Fortran 95, with Monte Carlo methods employed to randomly assign many of the cell parameters from probability distributions. Hypoxia has been implemented through random assignment of partial oxygen pressure values to individual cells during tumour growth, based on in vivo Eppendorf probe experimental data. The accumulation of up to 10 million virtual tumour cells in 15 min of computer running time has been achieved. The stem cell percentage and the degree of hypoxia are the parameters which most influence the final tumour growth rate. For a tumour with a doubling time of 40 days, the final stem cell percentage is approximately 1% of the total cell population. The effect of hypoxia on the tumour growth rate is significant. Using a hypoxia-induced cell quiescence limit which affects 50% of cells with oxygen levels less than 1 mm Hg, the tumour doubling time increases to over 200 days and the time of growth to a clinically detectable tumour (10⁹ cells) increases from 3 to 8 years. A biologically plausible Monte Carlo model of hypoxic head and neck squamous cell carcinoma tumour growth has been developed for real-time assessment of the effects of multiple biological parameters which impact upon the response of the individual patient to fractionated radiotherapy.
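
    As a minimal sketch of the oxygenation step (the lognormal pO2 distribution, the threshold and the quiescence fraction below are illustrative assumptions, not the Eppendorf data or limits used by the authors):

        import numpy as np

        rng = np.random.default_rng(42)

        # Hypothetical pO2 values standing in for in vivo Eppendorf probe data.
        po2 = rng.lognormal(mean=1.5, sigma=1.0, size=1_000_000)   # mm Hg

        THRESHOLD = 1.0   # mm Hg, illustrative quiescence limit
        FRACTION = 0.5    # fraction of hypoxic cells that become quiescent

        hypoxic = po2 < THRESHOLD
        quiescent = hypoxic & (rng.random(po2.size) < FRACTION)
        print(f"hypoxic: {hypoxic.mean():.1%}, quiescent: {quiescent.mean():.1%}")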

  14. Efficient Monte Carlo modelling of individual tumour cell propagation for hypoxic head and neck cancer

    International Nuclear Information System (INIS)

    A Monte Carlo tumour model has been developed to simulate tumour cell propagation for head and neck squamous cell carcinoma. The model aims to eventually provide a radiobiological tool for radiation oncology clinicians to plan patient treatment schedules based on properties of the individual tumour. The inclusion of an oxygen distribution amongst the tumour cells enables the model to incorporate hypoxia and other associated parameters, which affect tumour growth. The model algorithm has been implemented as an object-oriented program in Fortran 95, with Monte Carlo methods employed to randomly assign many of the cell parameters from probability distributions. Hypoxia has been implemented through random assignment of partial oxygen pressure values to individual cells during tumour growth, based on in vivo Eppendorf probe experimental data. The accumulation of up to 10 million virtual tumour cells in 15 min of computer running time has been achieved. The stem cell percentage and the degree of hypoxia are the parameters which most influence the final tumour growth rate. For a tumour with a doubling time of 40 days, the final stem cell percentage is approximately 1% of the total cell population. The effect of hypoxia on the tumour growth rate is significant. Using a hypoxia-induced cell quiescence limit which affects 50% of cells with oxygen levels less than 1 mm Hg, the tumour doubling time increases to over 200 days and the time of growth to a clinically detectable tumour (10⁹ cells) increases from 3 to 8 years. A biologically plausible Monte Carlo model of hypoxic head and neck squamous cell carcinoma tumour growth has been developed for real-time assessment of the effects of multiple biological parameters which impact upon the response of the individual patient to fractionated radiotherapy.

  15. Efficient Monte Carlo simulations using a shuffled nested Weyl sequence random number generator.

    Science.gov (United States)

    Tretiakov, K V; Wojciechowski, K W

    1999-12-01

    The pseudorandom number generator proposed recently by Holian et al. [B. L. Holian, O. E. Percus, T. T. Warnock, and P. A. Whitlock, Phys. Rev. E 50, 1607 (1994)] is tested via Monte Carlo computation of the free energy difference between the defectless hcp and fcc hard sphere crystals by the Frenkel-Ladd method [D. Frenkel and A. J. C. Ladd, J. Chem. Phys. 81, 3188 (1984)]. It is shown that this generator, which is fast and convenient for parallel computing, gives results in good agreement with those obtained using other generators. An estimate of high accuracy is obtained for the hcp-fcc free energy difference near melting. PMID:11970727
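
    For illustration, here is a minimal generator in the spirit of the shuffled nested Weyl sequence; the way the shuffled integer index is derived from the nested Weyl value is an assumed form for the sketch, not taken verbatim from Holian et al.

        from math import sqrt

        def shuffled_nested_weyl(seed=sqrt(2) - 1, m=2**32):
            # Nested Weyl value w_n = {n * {n * seed}}: fractional parts of
            # multiples of an irrational seed equidistribute on [0, 1).
            n = 0
            while True:
                n += 1
                w = (n * ((n * seed) % 1.0)) % 1.0
                k = int(w * m) + 1        # shuffled index (assumed form)
                yield (k * ((k * seed) % 1.0)) % 1.0

        gen = shuffled_nested_weyl()
        print([round(next(gen), 6) for _ in range(5)])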

  16. Efficient data management techniques implemented in the Karlsruhe Monte Carlo code KAMCCO

    International Nuclear Information System (INIS)

    The Karlsruhe Monte Carlo Code KAMCCO is a forward neutron transport code with an eigenfunction and a fixed source option, including time-dependence. A continuous energy model is combined with a detailed representation of neutron cross sections, based on linear interpolation, Breit-Wigner resonances and probability tables. All input is processed into densely packed, dynamically addressed parameter fields and networks of pointers (addresses). Estimation routines are decoupled from the random walk and analyze a storage region with sample records. This technique leads to fast execution with moderate storage requirements and without any I/O operations except in the input and output stages. 7 references. (U.S.)

  17. Efficiency Calibration of LaBr3(Ce) γ Spectroscopy in Analyzing Radionuclides in Reactor Loop Water

    Institute of Scientific and Technical Information of China (English)

    CHEN Xi-lin; QIN Guo-xiu; GUO Xiao-qing; CHEN Yong-yong; MENG Jun

    2013-01-01

    Monitoring the occurrence and radioactivity concentration of fission products in nuclear reactor loop water is important for evaluating safe reactor operation, preventing accidents, and protecting working personnel. Study on the efficiency calibration for a LaBr3(Ce) detector experimental

  18. A mathematical model concerned in self-absorption correction to calibrate detection efficiency of the Ge gamma-ray spectrometer

    International Nuclear Information System (INIS)

    A self-absorption correction function for use with cylindrical samples of different densities in gamma-ray spectrum analysis is reported. The effects of the gamma-ray energy and the sample density on the self-absorption are unified in the function model, providing a shortcut for detection efficiency calibration in gamma-ray spectrum analysis.
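
    The abstract does not reproduce the functional form. For orientation, a commonly used first-order self-absorption factor for a homogeneous sample of thickness t viewed along its axis (a simplified slab approximation, not necessarily the authors' cylindrical model) is

        \[
        f_{\mathrm{sa}}(E,\rho) = \frac{1 - e^{-\mu_m(E)\,\rho\,t}}{\mu_m(E)\,\rho\,t},
        \qquad
        \varepsilon(E,\rho) = \varepsilon_0(E)\, f_{\mathrm{sa}}(E,\rho),
        \]

    where \mu_m(E) is the mass attenuation coefficient, \rho the sample density, and \varepsilon_0 the efficiency for a (nearly) non-absorbing reference sample.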

  19. Efficiency Calibration of a Mini-Orange Type beta-Spectrometer by the $\\beta^{-}$-Spectrum of $^{90}$Sr

    CERN Document Server

    Kalinnikov, V G; Solnyshkin, A A; Sereeter, Z; Lebedev, N A; Chumin, V G; Ibrakhin, Ya S

    2002-01-01

    A specific method for the efficiency calibration of a mini-orange type beta-spectrometer by means of the continuous β⁻-spectrum of 90Sr and the conversion electron spectrum of 207Bi in the energy range from 500 to 2200 keV has been elaborated. In the experiment, typical SmCo5 magnets (6A and 8A) were used. The accuracy of the efficiency determination was 5-10%.

  20. Absolute intensity calibration of the Wendelstein 7-X high efficiency extreme ultraviolet overview spectrometer system

    International Nuclear Information System (INIS)

    The new high efficiency extreme ultraviolet overview spectrometer (HEXOS) system for the stellarator Wendelstein 7-X is now mounted for testing and adjustment at the tokamak experiment for technology oriented research (TEXTOR). One part of the testing phase was the intensity calibration of the two double spectrometers, which in total cover a spectral range from 2.5 to 160.0 nm with overlap. This work presents the current intensity calibration curves for HEXOS and describes the method of calibration. The calibration was implemented with calibrated lines of a hollow cathode light source and the branching ratio technique. The hollow cathode light source provides calibrated lines from 16 up to 147 nm. We could extend the calibrated region in the spectrometers down to 2.8 nm by using the branching line pairs emitted by an uncalibrated pinch extreme ultraviolet light source as well as emission lines from boron and carbon in TEXTOR plasmas. In total HEXOS is calibrated from 2.8 up to 147 nm, which covers most of the observable wavelength region. The approximate density of carbon in the range of the minor radius from 18 to 35 cm in a TEXTOR plasma, determined by simulating calibrated vacuum ultraviolet emission lines with a transport code, was 5.5×10¹⁷ m⁻³, which corresponds to a local carbon concentration of 2%.

  1. Absolute intensity calibration of the Wendelstein 7-X high efficiency extreme ultraviolet overview spectrometer system

    Science.gov (United States)

    Greiche, Albert; Biel, Wolfgang; Marchuk, Oleksandr; Burhenn, Rainer

    2008-09-01

    The new high efficiency extreme ultraviolet overview spectrometer (HEXOS) system for the stellarator Wendelstein 7-X is now mounted for testing and adjustment at the tokamak experiment for technology oriented research (TEXTOR). One part of the testing phase was the intensity calibration of the two double spectrometers, which in total cover a spectral range from 2.5 to 160.0 nm with overlap. This work presents the current intensity calibration curves for HEXOS and describes the method of calibration. The calibration was implemented with calibrated lines of a hollow cathode light source and the branching ratio technique. The hollow cathode light source provides calibrated lines from 16 up to 147 nm. We could extend the calibrated region in the spectrometers down to 2.8 nm by using the branching line pairs emitted by an uncalibrated pinch extreme ultraviolet light source as well as emission lines from boron and carbon in TEXTOR plasmas. In total HEXOS is calibrated from 2.8 up to 147 nm, which covers most of the observable wavelength region. The approximate density of carbon in the range of the minor radius from 18 to 35 cm in a TEXTOR plasma, determined by simulating calibrated vacuum ultraviolet emission lines with a transport code, was 5.5×10¹⁷ m⁻³, which corresponds to a local carbon concentration of 2%.

  2. Calibrating and Controlling the Quantum Efficiency Distribution of Inhomogeneously Broadened Quantum Rods Using a Mirror Ball

    CERN Document Server

    Lunnemann, Per; van Dijk-Moes, Relinde J A; Pietra, Francesca; Vanmaekelbergh, Daniël; Koenderink, A Femius

    2013-01-01

    We demonstrate that a simple silver-coated ball lens can be used to accurately measure the entire distribution of radiative transition rates of quantum dot nanocrystals. This simple and cost-effective implementation of Drexhage's method, which uses nanometer-controlled optical mode density variations near a mirror, allows not only the extraction of calibrated ensemble-averaged rates but also, for the first time, quantification of the full inhomogeneous dispersion of radiative and non-radiative decay rates across thousands of nanocrystals. We apply the technique to novel ultra-stable CdSe/CdS dot-in-rod emitters. The emitters are of large current interest due to their improved stability and reduced blinking. We retrieve a room-temperature ensemble-average quantum efficiency of 0.87 ± 0.08 at a mean lifetime around 20 ns. We confirm a log-normal distribution of decay rates as often assumed in literature and we show that the rate distribution width, which amounts to about 30% of the mean decay rate, is strongly dependent on the l...

  3. Development of self-learning Monte Carlo technique for more efficient modeling of nuclear logging measurements

    International Nuclear Information System (INIS)

    The self-learning Monte Carlo technique has been implemented in the commonly used general-purpose neutron transport code MORSE, in order to enhance sampling of the particle histories that contribute to a detector response. The parameters of all the biasing techniques available in MORSE, i.e. of splitting, Russian roulette, source and collision outgoing energy importance sampling, path length transformation and additional biasing of the source angular distribution, are optimized. The learning process is performed iteratively after each batch of particles, by retrieving the data concerning the subset of histories that passed the detector region and energy range in the previous batches. This procedure has been tested on two sample problems in nuclear geophysics, where an unoptimized Monte Carlo calculation is particularly inefficient. The results are encouraging, although the presented method does not directly minimize the variance, and the convergence of our algorithm is restricted by the statistics of successful histories from the previous random walk. Further applications to the modeling of nuclear logging measurements seem promising. 11 refs., 2 figs., 3 tabs. (author)

  4. On stochastic error and computational efficiency of the Markov Chain Monte Carlo method

    KAUST Repository

    Li, Jun

    2014-01-01

    In Markov Chain Monte Carlo (MCMC) simulations, thermal equilibria quantities are estimated by ensemble average over a sample set containing a large number of correlated samples. These samples are selected in accordance with the probability distribution function, known from the partition function of equilibrium state. As the stochastic error of the simulation results is significant, it is desirable to understand the variance of the estimation by ensemble average, which depends on the sample size (i.e., the total number of samples in the set) and the sampling interval (i.e., cycle number between two consecutive samples). Although large sample sizes reduce the variance, they increase the computational cost of the simulation. For a given CPU time, the sample size can be reduced greatly by increasing the sampling interval, while having the corresponding increase in variance be negligible if the original sampling interval is very small. In this work, we report a few general rules that relate the variance with the sample size and the sampling interval. These results are observed and confirmed numerically. These variance rules are derived for the MCMC method but are also valid for the correlated samples obtained using other Monte Carlo methods. The main contribution of this work includes the theoretical proof of these numerical observations and the set of assumptions that lead to them. © 2014 Global-Science Press.
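
    As a numerical sketch of these rules (with assumed parameters, not the authors' derivation), the snippet below uses an AR(1) chain as a surrogate for correlated MCMC samples and estimates, via batch means, the variance of the ensemble average for several sampling intervals:

        import numpy as np

        rng = np.random.default_rng(1)
        phi = 0.95                     # lag-1 autocorrelation of the chain
        x = np.empty(200_000)
        x[0] = 0.0
        for i in range(1, x.size):     # AR(1) surrogate for MCMC output
            x[i] = phi * x[i - 1] + rng.standard_normal()

        for interval in (1, 10, 100):
            thinned = x[::interval]    # larger interval = fewer, less correlated samples
            nb = 100                   # batch-means estimate of Var(mean)
            batch = thinned[: thinned.size // nb * nb].reshape(nb, -1).mean(axis=1)
            print(interval, thinned.size, batch.var(ddof=1) / nb)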

  5. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution

    Czech Academy of Sciences Publication Activity Database

    Mukhopadhyay, N. D.; Sampson, A. J.; Deniz, D.; Carlsson, G. A.; Williamson, J.; Malušek, Alexandr

    2012-01-01

    Vol. 70, No. 1 (2012), pp. 315-323. ISSN 0969-8043. Institutional research plan: CEZ:AV0Z10480505. Keywords: Monte Carlo * correlated sampling * efficiency * uncertainty * bootstrap. Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders. Impact factor: 1.179, year: 2012. http://www.sciencedirect.com/science/article/pii/S0969804311004775

  6. Efficient estimation of adjoint-weighted kinetics parameters in the Monte Carlo Wielandt calculations

    International Nuclear Information System (INIS)

    The effective delayed neutron fraction, β_eff, and the prompt neutron generation time, Λ, in the point kinetics equation are weighted by the adjoint flux to improve the accuracy of the reactivity estimate. Recently, Monte Carlo (MC) kinetics parameter estimation methods using the adjoint flux calculated in the MC forward simulations have been developed and successfully applied for reactor analyses. However, these adjoint estimation methods, based on the cycle-by-cycle genealogical table, require a huge memory size to store the pedigree hierarchy. In this paper, we present a new adjoint estimation method in which the pedigree of a single history is utilized by applying the MC Wielandt method. The algorithm of the new method is derived and its effectiveness is demonstrated in the kinetics parameter estimations for infinite homogeneous two-group problems and critical facilities. (author)

  7. An Efficient Monte Carlo Method for Modeling Radiative Transfer in Protoplanetary Disks

    Science.gov (United States)

    Kim, Stacy

    2011-01-01

    Monte Carlo methods have been shown to be effective and versatile in modeling radiative transfer processes to calculate model temperature profiles for protoplanetary disks. Temperature profiles are important for connecting physical structure to observation and for understanding the conditions for planet formation and migration. However, certain areas of the disk, such as the optically thick disk interior, are under-sampled, or are of particular interest, such as the snow line (where water vapor condenses into ice) and the area surrounding a protoplanet. To improve the sampling, photon packets can be preferentially scattered and reemitted toward the preferred locations at the cost of weighting packet energies to conserve the average energy flux. Here I report on the weighting schemes developed, how they can be applied to various models, and how they affect simulation mechanics and results. We find that improvements in sampling do not always imply similar improvements in temperature accuracies and calculation speeds.

  8. Ninth degree polynomial fit function for calculation of efficiency calibrations for Ge(Li) and HPGe detectors

    International Nuclear Information System (INIS)

    A new 9th-degree polynomial fit function has been constructed to calculate the absolute γ-ray detection efficiencies (η_th) of Ge(Li) and HPGe detectors at any γ-energy of interest in the energy range between 25 and 2000 keV and at distances between 6 and 148 cm. The total absolute γ-ray detection efficiencies have been calculated for six detectors, three of them Ge(Li) and three HPGe, at different distances. The absolute efficiency of each detector was calculated at the specific energies of the standard sources for each measuring distance. In this calculation, experimental (η_exp) and fitted (η_fit) efficiencies have been obtained. Seven calibrated point sources, Am-241, Ba-133, Co-57, Co-60, Cs-137, Eu-152 and Ra-226, were used. The uncertainties of the efficiency calibration have also been calculated for quality control. The measured (η_exp) and calculated (η_fit) efficiency values were compared with the efficiency calculated by the Gray fit function; the results obtained on the basis of (η_exp) and (η_fit) are in very good agreement.
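
    The record does not list the polynomial's coefficients; the generic procedure, fitting ln(η) as a polynomial in ln(E), can be sketched as follows (the calibration points and the degree are illustrative assumptions; a 9th-degree fit would need correspondingly many data points pooled over sources and distances):

        import numpy as np

        # Invented calibration points (keV, full-energy peak efficiency).
        energy = np.array([59.5, 81.0, 122.1, 356.0, 661.7,
                           1173.2, 1332.5, 1408.0])
        eff = np.array([0.012, 0.018, 0.025, 0.014, 0.0082,
                        0.0050, 0.0045, 0.0042])

        # Fit ln(efficiency) as a polynomial in ln(energy).
        coeffs = np.polyfit(np.log(energy), np.log(eff), deg=4)

        def eff_fit(e_kev):
            return np.exp(np.polyval(coeffs, np.log(e_kev)))

        print(eff_fit(500.0))   # interpolated efficiency at 500 keV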

  9. An empirical method for in-situ relative detection efficiency calibration for k0-based IM-NAA method

    International Nuclear Information System (INIS)

    The in-situ relative detection efficiency strongly affects the qualitative aspects of the k0-based internal mono-standard instrumental neutron activation analysis (IM-NAA) method, which is used to analyze small to large size samples with irregular geometries. An empirical method is described for in-situ relative detection efficiency calibration. Two IAEA reference materials (RMs), Soil-7 and 1633b Coal Fly Ash, were irradiated for elemental analysis using the in-situ relative detector efficiency. The efficiency was measured from 0.12 MeV to 2.7 MeV. Both RMs were analyzed and the ξ-score values are within ±1 at the 95% confidence level, whereas the deviation is within ±9% for most of the elements. This reflects the good accuracy of the in-situ relative detection efficiency. (author)

  10. Broad-band efficiency calibration of ITER bolometer prototypes using Pt absorbers on SiN membranes

    International Nuclear Information System (INIS)

    The energy resolved efficiency of two bolometer detector prototypes for ITER, with 4 channels each and absorber thicknesses of 4.5 μm and 12.5 μm, respectively, has been calibrated in a broad spectral range from 1.46 eV up to 25 keV. The calibration in the energy range above 3 eV was performed against previously calibrated silicon photodiodes using monochromatized synchrotron radiation provided by five different beamlines of the Physikalisch-Technische Bundesanstalt at the electron storage rings BESSY II and Metrology Light Source in Berlin. For the measurements in the visible range, a setup was realised using monochromatized halogen lamp radiation and a calibrated laser power meter as reference. The measurements clearly demonstrate that the efficiency of the bolometer prototype detectors in the range from 50 eV up to ≈6 keV is close to unity; at a photon energy of 20 keV the bolometer with the thick absorber detects 80% of the photons, the one with the thin absorber about 50%. This indicates that the detectors will be well capable of measuring the plasma radiation expected from the standard ITER scenario. However, a minimum absorber thickness will be required for the high temperatures in the central plasma. At 11.56 keV, the sharp Pt-L3 absorption edge allowed cross-checking of the absorber thickness by fitting the measured efficiency to the theoretically expected absorption of X-rays in a homogeneous Pt layer. Furthermore, below 50 eV the efficiency first follows the losses due to reflectance expected for Pt, but below 10 eV it is reduced further by a factor of 2 for the thick absorber and a factor of 4 for the thin absorber. Most probably, the different histories in production, storage, and operation led to varying surface conditions and additional loss channels.

  11. Broad-band efficiency calibration of ITER bolometer prototypes using Pt absorbers on SiN membranes.

    Science.gov (United States)

    Meister, H; Willmeroth, M; Zhang, D; Gottwald, A; Krumrey, M; Scholze, F

    2013-12-01

    The energy resolved efficiency of two bolometer detector prototypes for ITER, with 4 channels each and absorber thicknesses of 4.5 μm and 12.5 μm, respectively, has been calibrated in a broad spectral range from 1.46 eV up to 25 keV. The calibration in the energy range above 3 eV was performed against previously calibrated silicon photodiodes using monochromatized synchrotron radiation provided by five different beamlines of the Physikalisch-Technische Bundesanstalt at the electron storage rings BESSY II and Metrology Light Source in Berlin. For the measurements in the visible range, a setup was realised using monochromatized halogen lamp radiation and a calibrated laser power meter as reference. The measurements clearly demonstrate that the efficiency of the bolometer prototype detectors in the range from 50 eV up to ≈6 keV is close to unity; at a photon energy of 20 keV the bolometer with the thick absorber detects 80% of the photons, the one with the thin absorber about 50%. This indicates that the detectors will be well capable of measuring the plasma radiation expected from the standard ITER scenario. However, a minimum absorber thickness will be required for the high temperatures in the central plasma. At 11.56 keV, the sharp Pt-L3 absorption edge allowed cross-checking of the absorber thickness by fitting the measured efficiency to the theoretically expected absorption of X-rays in a homogeneous Pt layer. Furthermore, below 50 eV the efficiency first follows the losses due to reflectance expected for Pt, but below 10 eV it is reduced further by a factor of 2 for the thick absorber and a factor of 4 for the thin absorber. Most probably, the different histories in production, storage, and operation led to varying surface conditions and additional loss channels. PMID:24387428

  12. Broad-band efficiency calibration of ITER bolometer prototypes using Pt absorbers on SiN membranes

    Energy Technology Data Exchange (ETDEWEB)

    Meister, H.; Willmeroth, M. [Max-Planck-Institut für Plasmaphysik (IPP), EURATOM Association, Boltzmannstr. 2, 85748 Garching (Germany); Zhang, D. [Max-Planck-Institut für Plasmaphysik (IPP), EURATOM Association, Teilinstitut Greifswald, Wendelsteinstraße 1, 17491 Greifswald (Germany); Gottwald, A.; Krumrey, M.; Scholze, F. [Physikalisch-Technische Bundesanstalt (PTB), Abbestraße 2-12, 10587 Berlin (Germany)

    2013-12-15

    The energy resolved efficiency of two bolometer detector prototypes for ITER, with 4 channels each and absorber thicknesses of 4.5 μm and 12.5 μm, respectively, has been calibrated in a broad spectral range from 1.46 eV up to 25 keV. The calibration in the energy range above 3 eV was performed against previously calibrated silicon photodiodes using monochromatized synchrotron radiation provided by five different beamlines of the Physikalisch-Technische Bundesanstalt at the electron storage rings BESSY II and Metrology Light Source in Berlin. For the measurements in the visible range, a setup was realised using monochromatized halogen lamp radiation and a calibrated laser power meter as reference. The measurements clearly demonstrate that the efficiency of the bolometer prototype detectors in the range from 50 eV up to ≈6 keV is close to unity; at a photon energy of 20 keV the bolometer with the thick absorber detects 80% of the photons, the one with the thin absorber about 50%. This indicates that the detectors will be well capable of measuring the plasma radiation expected from the standard ITER scenario. However, a minimum absorber thickness will be required for the high temperatures in the central plasma. At 11.56 keV, the sharp Pt-L3 absorption edge allowed cross-checking of the absorber thickness by fitting the measured efficiency to the theoretically expected absorption of X-rays in a homogeneous Pt layer. Furthermore, below 50 eV the efficiency first follows the losses due to reflectance expected for Pt, but below 10 eV it is reduced further by a factor of 2 for the thick absorber and a factor of 4 for the thin absorber. Most probably, the different histories in production, storage, and operation led to varying surface conditions and additional loss channels.

  13. The Role of Mathematical Methods in Efficiency Calibration and Uncertainty Estimation in Gamma Based Non-Destructive Assay - 12311

    International Nuclear Information System (INIS)

    Mathematical methods are being increasingly employed in the efficiency calibration of gamma based systems for non-destructive assay (NDA) of radioactive waste and for the estimation of the Total Measurement Uncertainty (TMU). Recently, ASTM (American Society for Testing and Materials) released a standard guide for use of modeling passive gamma measurements. This is a testimony to the common use and increasing acceptance of mathematical techniques in the calibration and characterization of NDA systems. Mathematical methods offer flexibility and cost savings in terms of rapidly incorporating calibrations for multiple container types, geometries, and matrix types in a new waste assay system or a system that may already be operational. Mathematical methods are also useful in modeling heterogeneous matrices and non-uniform activity distributions. In compliance with good practice, if a computational method is used in waste assay (or in any other radiological application), it must be validated or benchmarked using representative measurements. In this paper, applications involving mathematical methods in gamma based NDA systems are discussed with several examples. The application examples are from NDA systems that were recently calibrated and performance tested. Measurement based verification results are presented. Mathematical methods play an important role in the efficiency calibration of gamma based NDA systems. This is especially true when the measurement program involves a wide variety of complex item geometries and matrix combinations for which the development of physical standards may be impractical. Mathematical methods offer a cost effective means to perform TMU campaigns. Good practice demands that all mathematical estimates be benchmarked and validated using representative sets of measurements. (authors)

  14. Improving the trade-off between simulation time and accuracy in efficiency calibrations with the code DETEFF

    Energy Technology Data Exchange (ETDEWEB)

    Cornejo Diaz, N. [Centre for Radiological Protection and Hygiene, P.O. Box 6195, Habana (Cuba); Jurado Vargas, M., E-mail: mjv@unex.es [Physics Department, University of Extremadura, 06071 Badajoz (Spain)

    2010-07-15

    Quick and relatively simple procedures were incorporated into the Monte Carlo code DETEFF in order to account for the escape of bremsstrahlung radiation and secondary electrons. The relative bias in efficiency values was thus reduced for photon energies between 1500 and 2000 keV, without any noticeable increase in the simulation time. A relatively simple method was also included to account for the rounding of detector edges. The validation studies showed relative deviations of about 1% in the energy range 10-2000 keV.

  15. A Calibration Routine for Efficient ETD in Large-Scale Proteomics

    Science.gov (United States)

    Rose, Christopher M.; Rush, Matthew J. P.; Riley, Nicholas M.; Merrill, Anna E.; Kwiecien, Nicholas W.; Holden, Dustin D.; Mullen, Christopher; Westphall, Michael S.; Coon, Joshua J.

    2015-11-01

    Electron transfer dissociation (ETD) has been broadly adopted and is now available on a variety of commercial mass spectrometers. Unlike collisional activation techniques, optimal performance of ETD requires considerable user knowledge and input. ETD reaction duration is one key parameter that can greatly influence spectral quality and overall experiment outcome. We describe a calibration routine that determines the correct number of reagent anions necessary to reach a defined ETD reaction rate. Implementation of this automated calibration routine on two hybrid Orbitrap platforms illustrates considerable advantages, namely, increased product ion yield with concomitant reduction in scan rates, netting up to 75% more unique peptide identifications in a shotgun experiment.

  16. Efficiency of radiation protection equipment in interventional radiology: a systematic Monte Carlo study of eye lens and whole body doses

    International Nuclear Information System (INIS)

    Monte Carlo calculations were used to investigate the efficiency of radiation protection equipment in reducing eye and whole body doses during fluoroscopically guided interventional procedures. Eye lens doses were determined considering different models of eyewear with various shapes, sizes and lead thickness. The origin of scattered radiation reaching the eyes was also assessed to explain the variation in the protection efficiency of the different eyewear models with exposure conditions. The work also investigates the variation of eye and whole body doses with ceiling-suspended shields of various shapes and positioning. For all simulations, a broad spectrum of configurations typical for most interventional procedures was considered. Calculations showed that ‘wrap around’ glasses are the most efficient eyewear models reducing, on average, the dose by 74% and 21% for the left and right eyes respectively. The air gap between the glasses and the eyes was found to be the primary source of scattered radiation reaching the eyes. The ceiling-suspended screens were more efficient when positioned close to the patient’s skin and to the x-ray field. With the use of such shields, the Hp(10) values recorded at the collar, chest and waist level and the Hp(3) values for both eyes were reduced on average by 47%, 37%, 20% and 56% respectively. Finally, simulations proved that beam quality and lead thickness have little influence on eye dose while beam projection, the position and head orientation of the operator as well as the distance between the image detector and the patient are key parameters affecting eye and whole body doses. (paper)

  17. Efficient Monte Carlo Methods for the Potts Model at Low Temperature

    CERN Document Server

    Molkaraie, Mehdi

    2015-01-01

    We consider the problem of estimating the partition function of the ferromagnetic $q$-state Potts model. We propose an importance sampling algorithm in the dual of the normal factor graph representing the model. The algorithm can efficiently compute an estimate of the partition function in a wide range of parameters; in particular, when the coupling parameters of the model are strong (corresponding to models at low temperature) or when the model contains a mixture of strong and weak couplings. We show that, in this setting, the proposed algorithm significantly outperforms the state of the art methods in the primal and in the dual domains.

  18. Beta-efficiency of a typical gas-flow ionization chamber using GEANT4 Monte Carlo simulations

    Directory of Open Access Journals (Sweden)

    Hussain Abid

    2011-01-01

    GEANT4-based Monte Carlo simulations have been carried out for the determination of efficiency and conversion factors of a gas-flow ionization chamber for beta particles emitted by 86 different radioisotopes covering the average-β energy range of 5.69 keV-2.061 MeV. Good agreement was found between the GEANT4-predicted values and corresponding experimental data, as well as with EGS4-based calculations. For the reported set of β-emitters, the values of the conversion factor have been established in the range of 0.5×10¹³-2.5×10¹³ Bq·cm⁻³/A. The computed xenon-to-air conversion factor ratios attain a minimum value of 0.2 in the range of 0.1-1 MeV. As the radius and/or volume of the ion chamber increases, the conversion factors approach a flat energy response. These simulations show a small, but significant, dependence of the ionization efficiency on the type of wall material.

  19. Determination of photoelectric counting efficiency in a whole body counter using Monte Carlo method and a small micro-computer Sinclair type (16K)

    International Nuclear Information System (INIS)

    A program in the Basic language was developed for a Sinclair-type personal computer. The code is able to calculate the whole-body counting efficiency when a cylindrical detector is applied. The code makes use of the Monte Carlo method. (Author)
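
    The original Basic program is not reproduced in the record; the following is a minimal modern sketch of the same Monte Carlo idea, here reduced to the geometric (solid-angle) part of the counting efficiency for a cylindrical detector face and an on-axis point source:

        import numpy as np

        def geometric_efficiency(radius, distance, n=1_000_000, seed=0):
            # Sample isotropic emission directions; by azimuthal symmetry only
            # the polar angle matters for an on-axis point source.
            rng = np.random.default_rng(seed)
            cos_t = rng.uniform(-1.0, 1.0, n)
            sin_t = np.sqrt(1.0 - cos_t**2)
            r_plane = np.full(n, np.inf)       # misses by default
            fwd = cos_t > 0                    # heading toward the detector
            r_plane[fwd] = distance * sin_t[fwd] / cos_t[fwd]
            return (r_plane <= radius).mean()  # fraction hitting the face

        # Cross-check: analytic solid-angle fraction is (1 - d/sqrt(d^2 + R^2))/2.
        print(geometric_efficiency(radius=2.5, distance=10.0))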

  20. Wavelength-scanning calibration of detection efficiency of single photon detectors by direct comparison with a photodiode

    Science.gov (United States)

    Lee, Hee Jung; Park, Seongchong; Park, Hee Su; Hong, Kee Suk; Lee, Dong-Hoon; Kim, Heonoh; Cha, Myoungsik; Moon, Han Seb

    2016-04-01

    We present a practical method for calibrating the detection efficiency (DE) of single photon detectors (SPDs) in a wide wavelength range from 480 nm to 840 nm. The setup consists of a GaN laser diode emitting a broadband luminescence, a tunable bandpass filter, a beam splitter, and a switched integrating amplifier which can measure the photocurrent down to the 100 fA level. The SPD under test with a fibre-coupled beam input is directly compared with a reference photodiode without using any calibrated attenuator. The relative standard uncertainty of the DE of the SPD is evaluated to be from 0.8% to 2.2% varying with wavelength (k = 1).

  1. Calibration with MCNP of NaI detector for the determination of natural radioactivity levels in the field

    OpenAIRE

    CINELLI GIORGIA; TOSITTI Laura; Mostacci, Domiziano; BARE Jonathan

    2015-01-01

    In view of assessing natural radioactivity with on-site quantitative gamma spectrometry, efficiency calibration of NaI(Tl) detectors is investigated. A calibration based on Monte Carlo simulation of detector response is proposed, to render reliable quantitative analysis practicable in field campaigns. The method is developed with reference to contact geometry, in which measurements are taken placing the NaI(Tl) probe directly against the solid source to be analyzed. The Monte Carlo code us...

  2. Efficient masonry vault inspection by Monte Carlo simulations: Case of hidden defect

    Directory of Open Access Journals (Sweden)

    Abdelmounaim Zanaz

    2016-06-01

    This paper presents a methodology for the probabilistic assessment of the bearing capacity of masonry vaults, taking existing defects into consideration. A comprehensive methodology and software package have been developed and adapted to inspection requirements. First, the mechanical analysis model is explained and validated, showing a good compromise between computation time and accuracy. This compromise is required when a probabilistic approach is considered, as it demands a large number of mechanical analysis runs. To model the defect, an inspection case is simulated by considering a segmental vault. As inspection data are often insufficient, the defect position and size are considered unknown. Since the NDT results could not provide useful and reliable information, it was decided to take core samples while keeping their number as small as possible. In this case the main difficulty is to know on which segment the coring would be most efficient. To find out, all possible positions are studied considering a single core. Using probabilistic approaches, the distribution function of the critical load has been determined for each segment. The results allow the best segment for vault inspection to be identified.

  3. Efficient Orientation and Calibration of Large Aerial Blocks of Multi-Camera Platforms

    Science.gov (United States)

    Karel, W.; Ressl, C.; Pfeifer, N.

    2016-06-01

    Aerial multi-camera platforms typically incorporate a nadir-looking camera accompanied by further cameras that provide oblique views, potentially resulting in utmost coverage, redundancy, and accuracy even on vertical surfaces. However, issues have remained unresolved with the orientation and calibration of the resulting imagery, to two of which we present feasible solutions. First, as standard feature point descriptors used for the automated matching of homologous points are only invariant to the geometric variations of translation, rotation, and scale, they are not invariant to general changes in perspective. While the deviations from local 2D-similarity transforms may be negligible for corresponding surface patches in vertical views of flat land, they become evident at vertical surfaces, and in oblique views in general. Usage of such similarity-invariant descriptors thus limits the number of tie points that stabilize the orientation and calibration of oblique views and cameras. To alleviate this problem, we present the positive impact on image connectivity of using a quasi affine-invariant descriptor. Second, no matter which hard- and software are used, at some point, the number of unknowns of a bundle block may be too large to be handled. With multi-camera platforms, these limits are reached even sooner. Adjustment of sub-blocks is sub-optimal, as it complicates data management, and hinders self-calibration. Simply discarding unreliable tie points of low manifold is not an option either, because these points are needed at the block borders and in poorly textured areas. As a remedy, we present a straightforward method to considerably reduce the number of tie points and hence unknowns before bundle block adjustment, while preserving orientation and calibration quality.

  4. Study on efficiency calibration method using 82Br, 160Tb and 40K for the high purity germanium detector

    International Nuclear Information System (INIS)

    Background: High-purity germanium (HPGe) detectors need their detection efficiency calibrated for the measured sample using radioactivity standard sources. However, if a great variety of samples of different materials, densities or geometries need calibrating, the standard sources required become very expensive and are not beneficial to environmental protection. Purpose: To study a new full-energy peak (FEP) efficiency calibration method for the high-purity germanium detector, without artificial standard sources, using 82Br and 160Tb produced by neutron activation together with 40K. Methods: An HPGe detector (diameter 76 mm) with a relative efficiency of 42% and two soil samples (Φ70 mm × 66 mm) were used in the experiments. The ratios, relative to 554.3 keV, of the FEP counting rates ε_Br(E_i) for the different γ-energies E_i of 82Br were used to fit a relative efficiency function f_Br(E). The ratios U_j relative to 1271.8 keV for the γ-energies E_j of 86.7, 197.0, 215.6, 298.6 and 392.5 keV of 160Tb were calculated and transformed into relative efficiencies normalized to the 554.3 keV γ-energy of 82Br using the formula ε_Br(E_j) = U_j·f_Br(E_1271.8). The data ε_Br(E_i) and ε_Br(E_j) were then fitted to a normalized relative efficiency function f(E). The absolute efficiency ε_K at the 40K γ-energy (E_K = 1460.8 keV), resulting from KCl mixed homogeneously with the sample, can be determined; thus the efficiency at any other energy E can also be determined using the formula ε(E) = ε_K·f(E)/f(E_K). Results: The experiments showed that the change of f_Br(E) is not significant when the sample-detector distance is more than 3 cm. In order to verify the new method, the activities in two soil samples (Φ70 mm × 66 mm) were measured (sample-detector distance = 3.1 cm) and the results were compared with the γ-γ coincidence method. The results for the activity concentrations of seven radionuclides, including 226Ra, 235U, 232Th, 40K, 134Cs, 137Cs and 60Co, in each sample were in good agreement within
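
    Restating the abstract's calibration chain in display form (symbols as reconstructed above; a condensed summary, not a formula set quoted verbatim from the paper):

        \[
        \varepsilon_{\mathrm{Br}}(E_j) = U_j\, f_{\mathrm{Br}}(E_{1271.8}),
        \qquad
        \{\varepsilon_{\mathrm{Br}}(E_i)\} \cup \{\varepsilon_{\mathrm{Br}}(E_j)\} \xrightarrow{\text{fit}} f(E),
        \qquad
        \varepsilon(E) = \varepsilon_K\, \frac{f(E)}{f(E_K)},
        \]

    where \varepsilon_K is the absolute efficiency at E_K = 1460.8 keV, fixed by the known 40K activity of the KCl admixed with the sample.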

  5. Representing full-energy peak gamma-ray efficiency surfaces in energy and density when the calibration data is correlated

    International Nuclear Information System (INIS)

    The non-intrusive quantification of gamma-emitters in waste containers calls for a correction to be made for the perturbation introduced by the container contents relative to the configuration used for the absolute calibration of the measurement system. There are several potential ways of achieving this, including the use of an external transmission beam and the exploitation of the differential attenuation between different energy lines from the same nuclide. The former requires additional hardware, while the second is not always applicable. A third method, which overcomes both these objections and is also commonly applied as the method of choice on systems that do not have the capability to (axially) scan the item, is termed the Multi-Curve approach. When applying the Multi-Curve method, the density of the waste matrix inside the item under study is estimated from the net weight and the fill-height estimate, and the efficiency at the assay energies of interest is obtained by navigating the efficiency-energy-density surface using the (interpolation) scheme developed for the model parameters via the calibration procedure. In addition to the nominal efficiency values, an uncertainty estimate for each is made using an independent analysis engine which is designed to incorporate reasonable deviations in the properties of the item from the ideal conditions of calibration. Prominent amongst these would be deviations from fill-matrix homogeneity, deviations from a uniform activity distribution, and deviations in the atomic composition of the materials present from those used to make the calibration items (this is of concern below about 200 keV, where the photoelectric effect, which has a strong Z-dependence, comes into play). The Multi-Curve approach has proven to be robust and reliable. However, what one finds is that the uncertainties assigned to the traditional Multi-Curve interpolation scheme are underestimated. This is because correlations in the input data are neglected and

  6. ALIS: An efficient method to compute high spectral resolution polarized solar radiances using the Monte Carlo approach

    International Nuclear Information System (INIS)

    An efficient method to compute accurate polarized solar radiance spectra using the (3D) Monte Carlo model MYSTIC has been developed. Such high resolution spectra are measured by various satellite instruments for remote sensing of atmospheric trace gases. ALIS (Absorption Lines Importance Sampling) allows the calculation of spectra by tracing photons at only one wavelength. In order to take into account the spectral dependence of the absorption coefficient, a spectral absorption weight is calculated for each photon path. At each scattering event, the local estimate method is combined with an importance sampling method to take into account the spectral dependence of the scattering coefficient. Since each wavelength grid point is computed from the same set of random photon paths, the statistical error is almost the same for all wavelengths and hence the simulated spectrum is not noisy. The statistical error mainly results in a small relative deviation which is independent of wavelength and can be neglected for those remote sensing applications where differential absorption features are of interest. Two example applications are presented: the simulation of shortwave-infrared polarized spectra as measured by GOSAT, from which CO2 is retrieved, and the simulation of the differential optical thickness in the visible spectral range, which is derived from SCIAMACHY measurements to retrieve NO2. The computational speed of ALIS (for 1D or 3D atmospheres) is of the same order as, or even faster than, that of one-dimensional discrete ordinate methods, in particular when polarization is considered.
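
    The abstract implies that the spectral dependence of absorption enters through a per-path weight rather than by re-tracing photons at every wavelength. One plausible form of such a spectral absorption weight for a path discretized into segments Δs_i (an interpretation of the description above, not a formula quoted from the paper) is

        \[
        w(\lambda) = \exp\!\Big(-\sum_i k_{\mathrm{abs}}(\lambda, x_i)\, \Delta s_i\Big),
        \]

    evaluated once per photon path and applied at each local estimate, so that a single set of paths traced at one wavelength yields the whole spectrum with nearly wavelength-independent statistical error.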

  7. Efficiency calibration and minimum detectable activity concentration of a real-time UAV airborne sensor system with two gamma spectrometers.

    Science.gov (United States)

    Tang, Xiao-Bin; Meng, Jia; Wang, Peng; Cao, Ye; Huang, Xi; Wen, Liang-Sheng; Chen, Da

    2016-04-01

    A small-sized UAV (NH-UAV) airborne system with two gamma spectrometers (LaBr3 detector and HPGe detector) was developed to monitor activity concentration in serious nuclear accidents, such as the Fukushima nuclear accident. The efficiency calibration and determination of minimum detectable activity concentration (MDAC) of the specific system were studied by MC simulations at different flight altitudes, different horizontal distances from the detection position to the source term center, and different source term sizes. Both air and ground radiation were considered in the models. The results obtained may provide instructive suggestions for in-situ radioactivity measurements with the NH-UAV. PMID:26773821

  8. Application of the Gamma Spectrometry Sourceless Efficiency Calibration Method to the Measurement of Radionuclides in Rare Earth Residues

    International Nuclear Information System (INIS)

    The paper investigates and analyses NORM residues from rare earth smelting and separation plants in Jiangsu Province using the high purity germanium gamma spectrometry sourceless efficiency calibration method which was verified by IAEA reference materials. The results show that in the rare earth residues the radioactive equilibrium of uranium and thorium decay series has been broken and the activity concentrations in the samples have obvious differences. Based on the results, the paper makes some suggestions and proposes some protective measures for the disposal of rare earth residues. (author)

  9. Syringe shape and positioning relative to efficiency volume inside dose calibrators and its role in nuclear medicine quality assurance programs

    International Nuclear Information System (INIS)

    A careful analysis of the influence of geometry and source positioning on the activity measurement outcome of a nuclear medicine dose calibrator is presented for 99mTc. The implementation of a quasi-point-source apparent-activity curve measurement is proposed for accurate correction of the activity inside several syringes, and compared with a theoretical geometric efficiency model. Additionally, new geometrical parameters are proposed to test and verify the correct positioning of the syringes as part of acceptance testing and quality control procedures.

  10. Corruption of accuracy and efficiency of Markov chain Monte Carlo simulation by inaccurate numerical implementation of conceptual hydrologic models

    Science.gov (United States)

    Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.

    2010-10-01

    Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
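
    To make the integration issue concrete, the sketch below (invented rate constant and forcing, purely illustrative) compares a first-order, explicit, fixed-step Euler scheme against an adaptive-step, higher-order solver on a linear reservoir, the simplest building block of a conceptual rainfall-runoff model:

        import numpy as np
        from scipy.integrate import solve_ivp

        K = 0.8   # outflow rate constant (1/d), illustrative
        P = 5.0   # constant forcing (mm/d), illustrative

        def dSdt(t, S):
            # Linear reservoir: storage change = inflow - outflow.
            return P - K * S

        # First-order explicit Euler with a fixed daily step ...
        dt, S = 1.0, np.zeros(31)
        for i in range(30):
            S[i + 1] = S[i] + dt * dSdt(i * dt, S[i])

        # ... versus an adaptive-step, higher-order (RK45) reference.
        sol = solve_ivp(dSdt, (0.0, 30.0), [0.0], rtol=1e-8,
                        t_eval=np.arange(31.0))
        print(np.max(np.abs(S - sol.y[0])))  # error of the fixed-step scheme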

  11. Efficiency calibration of a liquid scintillation counter for 90Y Cherenkov counting

    International Nuclear Information System (INIS)

    In this paper a complete and self-consistent method for 90Sr determination in environmental samples is presented. It is based on the Cherenkov counting of 90Y with a conventional liquid scintillation counter. The effects of color quenching on the counting efficiency and background are carefully studied. A working curve is presented which allows quantification of the correction to the counting efficiency depending on the color quenching strength. (orig.)

  12. Calibration of environmental radionuclide transfer models using a Bayesian approach with Markov chain Monte Carlo simulations and model comparisons - Calibration of radionuclides transfer models in the environment using a Bayesian approach with Markov chain Monte Carlo simulation and comparison of models

    Energy Technology Data Exchange (ETDEWEB)

    Nicoulaud-Gouin, V.; Giacalone, M.; Gonze, M.A. [Institut de Radioprotection et de Surete Nucleaire-PRP-ENV/SERIS/LM2E (France); Martin-Garin, A.; Garcia-Sanchez, L. [IRSN-PRP-ENV/SERIS/L2BT (France)

    2014-07-01

    Calibration of transfer models against observation data is a challenge, especially if parameter uncertainty is required and if a choice must be made between competing models. Generally, two main calibration methods are used. The frequentist approach, in which the unknown parameter of interest is supposed fixed and its estimation is based on the data only; in this category, the least-squares method has many restrictions for nonlinear models, and competing models need to be nested in order to be compared. The Bayesian inference, in which the unknown parameter of interest is supposed random and its estimation is based on the data and on prior information; compared to the frequentist method, it provides probability density functions and therefore pointwise estimation with credible intervals. However, in practical cases, Bayesian inference is a complex problem of numerical integration, which explains its low use in operational modeling, including radioecology. This study aims to illustrate the interest and feasibility of the Bayesian approach in radioecology, particularly in the case of ordinary-differential-equation models with non-constant coefficients, which cover most radiological risk assessment models, notably those implemented in the Symbiose platform (Gonze et al, 2010). The Markov chain Monte Carlo (MCMC) method (Metropolis et al., 1953) was used because the posterior expectations are intractable integrals. The invariant distribution of the parameters was obtained with the Metropolis-Hastings algorithm (Hastings, 1970). The GNU MCSim software (Bois and Maszle, 2011), a Bayesian hierarchical framework, was used to deal with nonlinear differential models. Two case studies including this type of model were investigated: an equilibrium-kinetic sorption model (EK) (e.g. van Genuchten et al, 1974), with experimental data concerning 137Cs and 85Sr sorption and desorption in different soils studied in stirred flow-through reactors. This model, generalizing the Kd approach

  13. Calibration of environmental radionuclide transfer models using a Bayesian approach with Markov chain Monte Carlo simulations and model comparisons - Calibration of radionuclides transfer models in the environment using a Bayesian approach with Markov chain Monte Carlo simulation and comparison of models

    International Nuclear Information System (INIS)

    Calibration of transfer models against observation data is a challenge, especially if parameter uncertainty is required and if a choice must be made between competing models. Generally, two main calibration methods are used. The frequentist approach, in which the unknown parameter of interest is supposed fixed and its estimation is based on the data only; in this category, the least-squares method has many restrictions for nonlinear models, and competing models need to be nested in order to be compared. The Bayesian inference, in which the unknown parameter of interest is supposed random and its estimation is based on the data and on prior information; compared to the frequentist method, it provides probability density functions and therefore pointwise estimation with credible intervals. However, in practical cases, Bayesian inference is a complex problem of numerical integration, which explains its low use in operational modeling, including radioecology. This study aims to illustrate the interest and feasibility of the Bayesian approach in radioecology, particularly in the case of ordinary-differential-equation models with non-constant coefficients, which cover most radiological risk assessment models, notably those implemented in the Symbiose platform (Gonze et al, 2010). The Markov chain Monte Carlo (MCMC) method (Metropolis et al., 1953) was used because the posterior expectations are intractable integrals. The invariant distribution of the parameters was obtained with the Metropolis-Hastings algorithm (Hastings, 1970). The GNU MCSim software (Bois and Maszle, 2011), a Bayesian hierarchical framework, was used to deal with nonlinear differential models. Two case studies including this type of model were investigated: an equilibrium-kinetic sorption model (EK) (e.g. van Genuchten et al, 1974), with experimental data concerning 137Cs and 85Sr sorption and desorption in different soils studied in stirred flow-through reactors. This model, generalizing the Kd approach, distinguishes

  14. Efficiency calibration of the ELBE nuclear resonance fluorescence setup using a proton beam

    Energy Technology Data Exchange (ETDEWEB)

    Trompler, Erik; Bemmerer, Daniel; Beyer, Roland; Erhard, Martin; Grosse, Eckart; Hannaske, Roland; Junghans, Arnd Rudolf; Marta, Michele; Nair, Chithra; Schwengner, R.; Wagner, Andreas; Yakorev, Dmitry [Forschungszentrum Dresden-Rossendorf (FZD), Dresden (Germany); Broggini, Carlo; Caciolli, Antonio; Menegazzo, Roberto [INFN Sezione di Padova, Padova (Italy); Fueloep, Zsolt; Gyuerky, Gyoergy; Szuecs, Tamas [Atomki, Debrecen (Hungary)

    2009-07-01

    The nuclear resonance fluorescence (NRF) setup at ELBE uses bremsstrahlung with endpoint energies up to 20 MeV. The setup consists of four high-purity germanium detectors of 100% relative efficiency, each surrounded by a BGO escape-suppression shield and a lead collimator. The detection efficiency up to E{sub {gamma}}=12 MeV has been determined using the proton beam from the FZD Tandetron and well-known resonances in the {sup 11}B(p,{gamma}){sup 12}C, {sup 14}N(p,{gamma}){sup 15}O, and {sup 27}Al(p,{gamma}){sup 28}Si reactions. The deduced efficiency curve allows efficiency curves calculated with GEANT to be checked. Future photon-scattering work can be carried out with improved precision at high energy.

  15. Efficient Calibration/Uncertainty Analysis Using Paired Complex/Surrogate Models.

    Science.gov (United States)

    Burrows, Wesley; Doherty, John

    2015-01-01

    The use of detailed groundwater models to simulate complex environmental processes can be hampered by (1) long run-times and (2) a penchant for solution convergence problems. Collectively, these can undermine the ability of a modeler to reduce and quantify predictive uncertainty, and therefore limit the use of such detailed models in the decision-making context. We explain and demonstrate a novel approach to calibration and the exploration of posterior predictive uncertainty of a complex model that can overcome these problems in many modelling contexts. The methodology relies on the conjunctive use of a simplified surrogate version of the complex model together with the complex model itself. The methodology employs gradient-based subspace analysis and is thus readily adapted for use in highly parameterized contexts. In its most basic form, one or more surrogate models are used for calculation of the partial derivatives that collectively comprise the Jacobian matrix, while parameter upgrades are tested and predictions are made with the original complex model. The methodology is demonstrated using a density-dependent seawater intrusion model in which the model domain is characterized by a heterogeneous distribution of hydraulic conductivity. PMID:25142272
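
    The division of labour the abstract describes — derivatives from the cheap surrogate, upgrade testing and predictions from the complex model — can be sketched with a Gauss-Newton loop. Both model functions, the observations and the step-halving rule below are hypothetical stand-ins, not the authors' gradient-based subspace implementation.

```python
import numpy as np

# Hypothetical stand-ins: `complex_model` is slow but authoritative,
# `surrogate_model` is fast and only approximately correct.
def complex_model(p):
    return np.array([p[0]**2 + 0.9 * p[1], np.sin(p[0]) + p[1]**2])

def surrogate_model(p):
    return np.array([p[0]**2 + p[1], p[0] + p[1]**2])  # cruder physics

obs = np.array([2.0, 1.5])            # hypothetical field observations

def surrogate_jacobian(p, h=1e-6):
    # Finite-difference Jacobian computed entirely with the cheap model.
    J = np.empty((obs.size, p.size))
    f0 = surrogate_model(p)
    for j in range(p.size):
        dp = p.copy()
        dp[j] += h
        J[:, j] = (surrogate_model(dp) - f0) / h
    return J

p = np.array([1.0, 1.0])
for it in range(20):
    r = obs - complex_model(p)        # residuals from the complex model
    if np.linalg.norm(r) < 1e-8:
        break
    J = surrogate_jacobian(p)         # derivatives from the surrogate
    dp = np.linalg.lstsq(J, r, rcond=None)[0]   # Gauss-Newton upgrade
    # Test the upgrade with the complex model; halve the step if it fails.
    while np.linalg.norm(obs - complex_model(p + dp)) > np.linalg.norm(r):
        dp *= 0.5
        if np.linalg.norm(dp) < 1e-12:
            break
    p = p + dp
print("calibrated parameters:", p)
```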

  16. Efficiency calibration of the Ge(Li) detector of the BIPM for SIR-type ampoules

    International Nuclear Information System (INIS)

    The absolute full-energy peak efficiency of the Ge(Li) γ-ray spectrometer has been measured between 50 keV and 2 MeV with a relative uncertainty of around 1 × 10⁻² and for ampoule-to-detector distances of 20 cm and 50 cm. All the corrections applied (self-attenuation, dead time, pile-up, true coincidence summing) are discussed in detail. (authors)

  17. Close-geometry efficiency calibration of LaCl3:Ce detectors: measurements and simulations

    International Nuclear Information System (INIS)

    A large amount of literature on coincidence summing effects is available for HPGe detectors; however, not much work has been done for scintillation detectors. This may be due to the inferior energy resolution of scintillation detectors compared with HPGe detectors, which makes the accurate estimation of counts under individual peaks very difficult. We report here experimental measurements and realistic simulations of absolute efficiencies (both photo-peak and total detection) and of coincidence summing correction factors in LaCl3:Ce scintillation detectors under close geometry. These detectors have drawn interest owing to properties superior to those of NaI(Tl) detectors, such as high light yield (46,000 photons/MeV), energy resolution (about 4%), and decay time (25 ns).

  18. Calibration of the b-tagging efficiency on jets with charm quark for the ATLAS experiment

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00536668; Schiavi, Carlo

    The correct identification of jets originating from a beauty quark (b-jets) is of fundamental importance for many physics analyses performed by the ATLAS experiment, operating at the Large Hadron Collider, CERN. The efficiency to mistakenly tag a jet originating from a charm quark (c-jet) as a b-jet has been measured in data with two different methods: a first one, referred to as the "D* method", uses a sample of jets containing reconstructed D* mesons (adopted for the 7 TeV and 8 TeV data analyses), and a second one, referred to as the "W+c method", uses a sample of c-jets produced in association with a W boson (studied on 7 TeV data). This thesis work focuses on some significant improvements made to the D* method, increasing the measurement precision. A study for the improvement of the W+c method and its first application to 13 TeV data is also presented: by refining the event selection, the W+c signal yield has been considerably increased with respect to the background processes.

  19. Calculation of the photoelectric efficiency with Monte Carlo method of a planar high purity Ge detectors and application to cross sections measurement

    International Nuclear Information System (INIS)

    The aim of this work is to develop a Monte Carlo programme that calculates the photoelectric efficiency of a planar high-purity Ge detector for low-energy photons. The programme calculates the self-absorption, the absorption in the different media crossed by the photon, and the intrinsic and total efficiencies. Its results were very satisfactory, since they reproduce the measured values in the two cases of point and volume sources. The photoelectric efficiency calculated with this programme was applied to determine the cross section of the 166Er(n,2n)165Er reaction induced by 14 MeV neutrons, where only measurement by X-ray spectrometry is possible. The value obtained is consistent with the data given in the literature. 119 figs., 39 tabs., 96 refs. (F.M.)
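
    To illustrate how a computed photoelectric efficiency feeds into a cross-section determination, the sketch below chains the standard activation and counting equations. Every numerical value is a hypothetical placeholder (the abstract quotes none, apart from the 165Er half-life taken from the literature), and the script is a generic sketch rather than the programme described.

```python
import numpy as np

# Hypothetical counting and irradiation data for illustration only.
counts    = 5.0e3          # net counts in the X-ray peak
eff       = 0.012          # photoelectric efficiency from the MC programme
intensity = 0.30           # X-ray emission probability per decay (assumed)
t_irr, t_cool, t_count = 3600.0, 600.0, 1800.0   # s
half_life = 10.36 * 3600.0                       # 165Er, ~10.36 h
lam       = np.log(2.0) / half_life
n_atoms   = 2.0e21         # 166Er target atoms in the beam (assumed)
flux      = 1.0e8          # 14 MeV neutron flux, n cm^-2 s^-1 (assumed)

# Activity at end of irradiation inferred from the counting data,
# undoing decay during cooling and during the counting interval:
activity_eoi = counts * lam / (
    eff * intensity * np.exp(-lam * t_cool) * (1.0 - np.exp(-lam * t_count))
)

# Activation equation: A_eoi = n_atoms * sigma * flux * (1 - exp(-lam*t_irr))
sigma = activity_eoi / (n_atoms * flux * (1.0 - np.exp(-lam * t_irr)))
print(f"sigma = {sigma * 1e24:.3f} barn")   # 1 barn = 1e-24 cm^2
```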

  20. The final calibration of the nuclear power channels of the IPEN/MB-01 reactor by the use of the activation foils and the Monte Carlo method

    International Nuclear Information System (INIS)

    This work presents the final calibration of the nuclear power channels of the IPEN/MB-01 reactor using infinitely dilute gold foils (1% Au - 99% Al), that is, a metallic alloy with concentration levels such that flux-disturbance phenomena, like self-shielding, become negligible. During the irradiations, the nuclear power channels of the reactor were monitored to obtain the neutron flux and consequently the operating power of the reactor; the current values were digitally acquired every second of operation. Once the foils were irradiated, a hyper-pure germanium detection system was used to analyse their induced activity. In addition to this experimental procedure, the computational code MCNP-4C was used as a tool for theoretical modelling of the core of the IPEN/MB-01 reactor. It was thus possible to determine the parameters necessary to obtain the operating power of the reactor, such as the inverse of the thermal disadvantage factor and the fast fission factor. Finally, using the correlation between the average thermal neutron flux, proportional to the operating power, and the average digital current values of the nuclear channels during the foil irradiations, the calibration of the nuclear power channels, ionization chambers number 5 and 6 of the IPEN/MB-01 reactor, was obtained. (author)

  1. Efficiency calibration of a mini-orange type beta-spectrometer by the β⁻-spectrum of ⁹⁰Sr

    CERN Document Server

    Kalinnikov, V G; Ibrakhim, Y S; Lebedev, N A; Samatov, Z K; Sehrehehtehr, Z; Solnyshkin, A A

    2002-01-01

    A specific method for efficiency calibration of a mini-orange type beta-spectrometer by means of the continuous β⁻-spectrum of ⁹⁰Sr and the conversion electron spectrum of ²⁰⁷Bi in the energy range from 500 to 2200 keV has been elaborated. In the experiment typical SmCo₅ magnets (6A and 8A) were used. The accuracy of the efficiency determination was 5-10%.

  2. Calibration of a gamma spectrometer for natural radioactivity measurement. Experimental measurements and Monte Carlo modelling

    Energy Technology Data Exchange (ETDEWEB)

    Courtine, Fabien [Laboratoire de Physique Corpusculaire, Universite Blaise Pascal - CNRS/IN2P3, 63000 Aubiere Cedex (France)

    2007-03-15

    The thesis was carried out in the context of dating by thermoluminescence. This method requires laboratory measurements of natural radioactivity, for which we used a germanium spectrometer. To refine its calibration, we modelled it using a Monte Carlo computer code: Geant4. We developed a geometrical model which takes into account the presence of inactive zones and zones of poor charge collection within the germanium crystal. The parameters of the model were adjusted by comparison with experimental results obtained with a source of {sup 137}Cs. It appeared that the shape of the inactive zones is less simple than presented in the specialized literature. The model was then extended to the case of a more complex source, with cascade effects and angular correlations between photons: {sup 60}Co. Finally, applied to extended sources, it gave correct results and allowed us to validate the simulation of matrix effects. (author)

  3. Determination of mass attenuation coefficient by numerical absorption calibration with Monte-Carlo simulations at 59.54 keV

    Science.gov (United States)

    Degrelle, D.; Mavon, C.; Groetz, J.-E.

    2016-04-01

    This study presents a numerical method for determining the mass attenuation coefficient of a sample of unknown chemical composition at low energy. It is compared with two experimental methods: a graphic method and a transmission method. The method constructs a numerical absorption calibration curve with which experimental results are processed. Demineralised water, whose mass attenuation coefficient is known (0.2066 cm² g⁻¹ at 59.54 keV), was chosen to validate the method. The average value determined by the numerical method is 0.1964 ± 0.0350 cm² g⁻¹, i.e. less than 5% relative deviation, compared with more than 47% for the experimental methods.
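
    A minimal sketch of the transmission relation underlying such a calibration, via the Beer-Lambert law; the count rates and areal density below are hypothetical, and only the water reference coefficient (0.2066 cm² g⁻¹ at 59.54 keV) comes from the abstract.

```python
import numpy as np

# Beer-Lambert: I = I0 * exp(-(mu/rho) * rho * t).  Given a measured
# transmission through a sample of known areal density, invert for mu/rho.
areal_density = 1.0          # rho * t in g/cm^2 (hypothetical)
i0, i = 1.0e5, 0.813e5       # incident / transmitted counts (hypothetical)

mu_over_rho = np.log(i0 / i) / areal_density
print(f"mu/rho = {mu_over_rho:.4f} cm^2/g")

# Sanity check against the demineralised-water reference from the paper:
t_expected = np.exp(-0.2066 * areal_density)
print(f"expected water transmission: {t_expected:.4f}")
```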

  4. Efficiency calibration and coincidence summing correction for large arrays of NaI(Tl) detectors in soccer-ball and castle geometries

    Energy Technology Data Exchange (ETDEWEB)

    Anil Kumar, G., E-mail: anilg@tifr.res.i [Department of Nuclear and Atomic Physics, Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai 400005 (India); Mazumdar, I.; Gothe, D.A. [Department of Nuclear and Atomic Physics, Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai 400005 (India)

    2009-11-21

    Efficiency calibration and coincidence summing correction have been performed for two large arrays of NaI(Tl) detectors in two different configurations. They are, a compact array of 32 conical detectors of pentagonal and hexagonal shapes in soccer-ball geometry and an array of 14 straight hexagonal NaI(Tl) detectors in castle geometry. Both of these arrays provide a large solid angle of detection, leading to considerable coincidence summing of gamma rays. The present work aims to understand the effect of coincidence summing of gamma rays while determining the energy dependence of efficiencies of these two arrays. We have carried out extensive GEANT4 simulations with radio-nuclides that decay with a two-step cascade, considering both arrays in their realistic geometries. The absolute efficiencies have been simulated for gamma energies from 700 to 2800 keV using four different double-photon emitters, namely, {sup 60}Co, {sup 46}Sc, {sup 94}Nb and {sup 24}Na. The efficiencies so obtained have been corrected for coincidence summing using the method proposed by Vidmar et al. The simulations have also been carried out for the same energies assuming mono-energetic point sources, for comparison. Experimental measurements have also been carried out using calibrated point sources of {sup 137}Cs and {sup 60}Co. The simulated and the experimental results are found to be in good agreement. This demonstrates the reliability of the correction method for efficiency calibration of two large arrays in very different configurations.
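
    For a two-step cascade the dominant summing-out effect can be written down directly. The sketch below is the textbook first-order correction, not the full Vidmar et al. procedure applied in the paper, and both efficiency values are hypothetical.

```python
# First-order summing-out correction for a two-step cascade (gamma1 -> gamma2),
# e.g. the 1173/1332 keV pair of 60Co.  Efficiencies below are hypothetical.

eps_peak_1 = 0.18   # full-energy peak efficiency at E(gamma1)
eps_tot_2  = 0.55   # TOTAL efficiency at E(gamma2), large in a near-4pi array

# Apparent peak-1 count rate per decay: the full-energy peak survives only
# if gamma2 deposits nothing in the detector.
apparent = eps_peak_1 * (1.0 - eps_tot_2)

# Summing correction factor restoring the true peak efficiency:
correction = 1.0 / (1.0 - eps_tot_2)
print(f"apparent efficiency {apparent:.4f}, correction factor {correction:.2f}")
```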

  5. A Monte Carlo simulation and setup optimization of output efficiency to PGNAA thermal neutron using 252Cf neutrons

    Science.gov (United States)

    Zhang, Jin-Zhao; Tuo, Xian-Guo

    2014-07-01

    We present the design and optimization of a prompt γ-ray neutron activation analysis (PGNAA) thermal-neutron output setup based on Monte Carlo simulations using the MCNP5 computer code. In these simulations, the moderator materials, reflector materials, and structure of the 252Cf-based PGNAA thermal-neutron output setup are optimized. The simulation results reveal that a thin layer of paraffin followed by a thick layer of heavy water moderates the 252Cf neutron spectrum best. The new design shows significantly improved performance: the thermal neutron flux and flux rate are increased by factors of 3.02 and 3.27, respectively, compared with the conventional neutron source design.

  6. Octree indexing of DICOM images for voxel number reduction and improvement of Monte Carlo simulation computing efficiency

    International Nuclear Information System (INIS)

    The purpose of the present study is to introduce a compression algorithm for the CT (computed tomography) data used in Monte Carlo simulations. Performing simulations on CT data implies large computational costs as well as large memory requirements, since the number of voxels in such data typically reaches into the hundreds of millions. CT data, however, contain homogeneous regions which can be regrouped to form larger voxels without affecting the simulation's accuracy. Based on this property, we propose an octree-based compression algorithm: in homogeneous regions the algorithm replaces groups of voxels with a smaller number of larger voxels. This reduces the number of voxels while keeping the critical high-density-gradient areas. Results obtained using the present algorithm on both phantom and clinical data show that compression rates of up to 75% are possible without losing the dosimetric accuracy of the simulation.
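
    The voxel-merging idea can be sketched as a recursive octree over a cubic volume: a block is replaced by a single leaf when its density spread is below a tolerance, so homogeneous regions collapse while high-gradient regions stay finely resolved. This is a toy illustration under assumed inputs (cubic power-of-two volume, spread-based homogeneity test), not the authors' implementation.

```python
import numpy as np

def octree_compress(vol, tol, x0=0, y0=0, z0=0, size=None, leaves=None):
    """Recursively merge homogeneous cubic regions of a CT-like volume.

    Returns a list of leaves (x, y, z, size, mean_value).  A region is
    merged when its density spread is below `tol`, which preserves the
    high-gradient areas mentioned in the abstract.
    """
    if size is None:
        size = vol.shape[0]          # assume a cubic, power-of-two volume
    if leaves is None:
        leaves = []
    block = vol[x0:x0+size, y0:y0+size, z0:z0+size]
    if size == 1 or block.max() - block.min() <= tol:
        leaves.append((x0, y0, z0, size, float(block.mean())))
        return leaves
    h = size // 2
    for dx in (0, h):                # descend into the eight octants
        for dy in (0, h):
            for dz in (0, h):
                octree_compress(vol, tol, x0+dx, y0+dy, z0+dz, h, leaves)
    return leaves

# Toy 16^3 "CT" volume: homogeneous water with a small dense insert.
vol = np.full((16, 16, 16), 1.0)
vol[4:6, 4:6, 4:6] = 2.7             # aluminium-like inclusion
leaves = octree_compress(vol, tol=0.05)
print(f"{vol.size} voxels -> {len(leaves)} octree leaves")
```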

  7. Multidimensional B-spline parameterization of the detection probability of PET systems to improve the efficiency of Monte Carlo simulations

    International Nuclear Information System (INIS)

    Accurate modeling of system response and scatter distribution is crucial for image reconstruction in emission tomography. Monte Carlo simulations are very well suited to calculating these quantities. However, Monte Carlo simulations are also slow, and many simulated counts are needed to provide a sufficiently exact estimate of the detection probabilities. In order to overcome these problems, we propose to split the simulation into two parts: the detection system and the object to be imaged (the patient). A so-called 'virtual boundary' that separates these two parts is introduced. Within the patient, particles are simulated conventionally. Whenever a photon reaches the virtual boundary, its detection probability is calculated analytically by evaluating a multi-dimensional B-spline that depends on the photon position, direction and energy. The unknown B-spline knot values that define this B-spline are fixed by a prior 'pre-' simulation that needs to be run once for each scanner type. After this pre-simulation, the B-spline model can be used in any subsequent simulation with different patients. We show that this approach yields accurate results when simulating the Biograph 16 HiREZ PET scanner with the Geant4 Application for Emission Tomography (GATE). The execution time is reduced by a factor of about 22× (scanner with voxelized phantom) to 30× (empty scanner) with respect to conventional GATE simulations of the same statistical uncertainty. The pre-simulation and calculation of the B-spline knot values could be performed within half a day on a medium-sized cluster.

  8. Efficiency calibration of scintillation detectors in the neutron energy range 1.5-25 MeV by the associated particle technique

    International Nuclear Information System (INIS)

    The associated particle technique, with a gas target, has been used to measure the absolute central neutron detection efficiency of two scintillators (NE213 and NE102A) with an uncertainty of less than ±2% over the energy range 1.5-25 MeV. A commercial n/γ discrimination system was used with NE213. Efficiencies for various discrimination levels were determined simultaneously by two-parameter computer storage. The average efficiency of each detector was measured by scanning the neutron cone across the front face. The measurements have been compared with two Monte Carlo efficiency programs (Stanton's and 05S), without artificially fitting any parameters. When the discrimination level (in terms of proton energy) is determined from the measured light-output relationship, very good agreement (to about 3%) is obtained between the measurements and the predictions. The agreement of a simple analytical expression is also found to be good over the energy range where n-p scattering dominates. (orig.)

  9. High Efficiency, Digitally Calibrated TR Modules Enabling Lightweight SweepSAR Architectures for DESDynI-Class Radar Instruments Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Develop and demonstrate a next-generation digitally calibrated, highly scalable, L-band Transmit/Receive (TR) module to enable a precision beamforming SweepSAR...

  10. Statistical inference about the relative efficiency of a new survey protocol, based on paired-tow survey calibration data

    OpenAIRE

    Cadigan, Noel G.; Dowden, Jeff J.

    2010-01-01

    Paired-tow calibration studies provide information on changes in survey catchability that may occur because of some necessary change in protocols (e.g., change in vessel or vessel gear) in a fish stock survey. This information is important to ensure the continuity of annual time-series of survey indices of stock size that provide the basis for fish stock assessments. There are several statistical models used to analyze the paired-catch data from calibration studies. Our main contribu...

  11. Evaluation of a commercial software package's ability to calibrate an in vivo counter for emergency response.

    Science.gov (United States)

    Kramer, Gary H; Capello, Kevin; DiNardo, Anthony; Hauck, Barry

    2012-08-01

    A commercial detector calibration package has been assessed for use in calibrating the Human Monitoring Laboratory's Portable Whole Body Counter, which is used for emergency response. The advantage of such calibration software is that calibrations can be derived very quickly once the model has been designed. The commercial package's predictions were compared to experimental point-source data and to predictions from Monte Carlo simulations. It was found that the software's predictions of counting efficiencies for a point-source geometry agreed with values derived from Monte Carlo simulations and experimental work. Both the standing and seated counting geometries agreed sufficiently well that the commercial package could be used in the field. PMID:22739971

  12. The efficiency calibration and development of environmental correction factors for an in situ high-resolution gamma spectroscopy well logging system

    International Nuclear Information System (INIS)

    A Gamma Spectroscopy Logging System (GSLS) has been developed to study sub-surface radionuclide contamination. Absolute efficiency calibration of the GSLS was performed using a simple cylindrical borehole geometry. The calibration source incorporated naturally occurring radioactive material (NORM) emitting photons ranging from 186 keV to 2,614 keV. More complex borehole geometries were modeled using commercially available shielding software. A linear relationship was found between increasing source thickness and relative photon fluence rates at the detector. Examination of varying porosity and moisture content showed that as porosity increases, relative photon fluence rates increase linearly for all energies. Attenuation effects due to iron, water, PVC, and concrete cylindrical shields were found to agree with previous studies. Regression analyses produced energy-dependent equations for efficiency corrections applicable to spectral gamma-ray well logs collected under non-standard borehole conditions.

  13. Calibration of Ge gamma-ray spectrometers for complex sample geometries and matrices

    Energy Technology Data Exchange (ETDEWEB)

    Semkow, T.M., E-mail: thomas.semkow@health.ny.gov [Wadsworth Center, New York State Department of Health, Empire State Plaza, Albany, NY 12201 (United States); Department of Environmental Health Sciences, School of Public Health, University at Albany, State University of New York, Rensselaer, NY 12144 (United States); Bradt, C.J.; Beach, S.E.; Haines, D.K.; Khan, A.J.; Bari, A.; Torres, M.A.; Marrantino, J.C.; Syed, U.-F. [Wadsworth Center, New York State Department of Health, Empire State Plaza, Albany, NY 12201 (United States); Kitto, M.E. [Wadsworth Center, New York State Department of Health, Empire State Plaza, Albany, NY 12201 (United States); Department of Environmental Health Sciences, School of Public Health, University at Albany, State University of New York, Rensselaer, NY 12144 (United States); Hoffman, T.J. [Wadsworth Center, New York State Department of Health, Empire State Plaza, Albany, NY 12201 (United States); Curtis, P. [Kiltel Systems, Inc., Clyde Hill, WA 98004 (United States)

    2015-11-01

    A comprehensive study of the efficiency calibration and calibration verification of Ge gamma-ray spectrometers was performed using semi-empirical, computational Monte-Carlo (MC), and transfer methods. The aim of this study was to evaluate the accuracy of the quantification of gamma-emitting radionuclides in complex matrices normally encountered in environmental and food samples. A wide range of gamma energies from 59.5 to 1836.0 keV and geometries from a 10-mL jar to 1.4-L Marinelli beaker were studied on four Ge spectrometers with the relative efficiencies between 102% and 140%. Density and coincidence summing corrections were applied. Innovative techniques were developed for the preparation of artificial complex matrices from materials such as acidified water, polystyrene, ethanol, sugar, and sand, resulting in the densities ranging from 0.3655 to 2.164 g cm{sup −3}. They were spiked with gamma activity traceable to international standards and used for calibration verifications. A quantitative method of tuning MC calculations to experiment was developed based on a multidimensional chi-square paraboloid. - Highlights: • Preparation and spiking of traceable complex matrices in extended geometries. • Calibration of Ge gamma spectrometers for complex matrices. • Verification of gamma calibrations. • Comparison of semi-empirical, computational Monte Carlo, and transfer methods of Ge calibration. • Tuning of Monte Carlo calculations using a multidimensional paraboloid.
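
    The paraboloid-tuning idea — fit a quadratic surface to chi-square values from a grid of trial simulations and jump to its analytic minimum — can be sketched for two parameters. The parameter ranges and the noisy chi-square stand-in below are hypothetical, not the paper's actual tuning variables.

```python
import numpy as np

rng = np.random.default_rng(1)
p_true = np.array([0.8, 30.0])       # hypothetical "best" MC parameters

def chisq(p):
    # Stand-in for "run the MC, compare with experiment": a quadratic
    # bowl plus a little stochastic Monte Carlo noise.
    d = (p - p_true) / np.array([0.1, 2.0])
    return d @ d + 0.01 * rng.normal()

# Sample chi-square on a small grid of trial parameter values.
P = np.array([(a, b) for a in np.linspace(0.6, 1.0, 5)
                     for b in np.linspace(26.0, 34.0, 5)])
y = np.array([chisq(p) for p in P])

# Design matrix for chi2 ~ c0 + c1*a + c2*b + c3*a^2 + c4*b^2 + c5*a*b
A = np.column_stack([np.ones(len(P)), P[:, 0], P[:, 1],
                     P[:, 0]**2, P[:, 1]**2, P[:, 0] * P[:, 1]])
c = np.linalg.lstsq(A, y, rcond=None)[0]

# Zero gradient of the paraboloid => linear 2x2 system for the minimum.
H = np.array([[2 * c[3], c[5]], [c[5], 2 * c[4]]])
g = -np.array([c[1], c[2]])
p_min = np.linalg.solve(H, g)
print("tuned parameters:", p_min)    # should land close to p_true
```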

  14. Calibration of Ge gamma-ray spectrometers for complex sample geometries and matrices

    International Nuclear Information System (INIS)

    A comprehensive study of the efficiency calibration and calibration verification of Ge gamma-ray spectrometers was performed using semi-empirical, computational Monte-Carlo (MC), and transfer methods. The aim of this study was to evaluate the accuracy of the quantification of gamma-emitting radionuclides in complex matrices normally encountered in environmental and food samples. A wide range of gamma energies from 59.5 to 1836.0 keV and geometries from a 10-mL jar to 1.4-L Marinelli beaker were studied on four Ge spectrometers with the relative efficiencies between 102% and 140%. Density and coincidence summing corrections were applied. Innovative techniques were developed for the preparation of artificial complex matrices from materials such as acidified water, polystyrene, ethanol, sugar, and sand, resulting in the densities ranging from 0.3655 to 2.164 g cm⁻³. They were spiked with gamma activity traceable to international standards and used for calibration verifications. A quantitative method of tuning MC calculations to experiment was developed based on a multidimensional chi-square paraboloid. - Highlights: • Preparation and spiking of traceable complex matrices in extended geometries. • Calibration of Ge gamma spectrometers for complex matrices. • Verification of gamma calibrations. • Comparison of semi-empirical, computational Monte Carlo, and transfer methods of Ge calibration. • Tuning of Monte Carlo calculations using a multidimensional paraboloid.

  15. The GERDA calibration system

    International Nuclear Information System (INIS)

    A system with three identical custom-made units is used for the energy calibration of the GERDA Ge diodes. To perform a calibration, the 228Th sources are lowered from their parking positions at the top of the cryostat. Their positions are measured by two independent modules: one, an incremental encoder, counts the holes in the perforated steel band holding the sources; the other measures the drive shaft's angular position even when not powered. The system can be controlled remotely by a LabVIEW program. The calibration data are analyzed by an iterative calibration algorithm that determines the calibration functions for different energy reconstruction algorithms, and the resolution of several peaks in the 228Th spectrum is determined. A Monte Carlo simulation using the GERDA simulation software MaGe has been performed to determine the background induced by the sources in their parking positions.

  16. Calibration of a compact magnetic proton recoil neutron spectrometer

    Science.gov (United States)

    Zhang, Jianfu; Ouyang, Xiaoping; Zhang, Xianpeng; Ruan, Jinlu; Zhang, Guoguang; Zhang, Xiaodong; Qiu, Suizheng; Chen, Liang; Liu, Jinliang; Song, Jiwen; Liu, Linyue; Yang, Shaohua

    2016-04-01

    The magnetic proton recoil (MPR) neutron spectrometer is considered a powerful instrument for measuring the deuterium-tritium (DT) neutron spectrum, and it is currently used in inertial confinement fusion facilities and large tokamak devices. The energy resolution (ER) and neutron detection efficiency (NDE) are the two most important parameters characterizing a neutron spectrometer. In this work, the ER calibration of the MPR spectrometer was performed using the HI-13 tandem accelerator at the China Institute of Atomic Energy (CIAE), and the NDE calibration was performed using the neutron generator at CIAE. The specific calibration techniques used in this work and the associated accuracies are discussed in detail. The calibration results are presented along with Monte Carlo simulation results.

  17. Beta-gamma coincidence counting efficiency and energy resolution optimization by Geant4 Monte Carlo simulations for a phoswich well detector.

    Science.gov (United States)

    Zhang, Weihua; Mekarski, Pawel; Ungar, Kurt

    2010-12-01

    A single-channel phoswich well detector has been assessed and analysed in order to improve the beta-gamma coincidence measurement sensitivity for (131m)Xe and (133m)Xe. This newly designed phoswich well detector consists of a plastic cell (BC-404) embedded in a CsI(Tl) crystal coupled to a photomultiplier tube (PMT). It can be used to distinguish the 30.0-keV X-ray signals of (131m)Xe and (133m)Xe using their unique coincidence signatures between the conversion electrons (CEs) and the 30.0-keV X-rays. The optimum coincidence efficiency signal depends on the energy resolutions of the two CE peaks, which could be affected by the relative position of the plastic cell with respect to the CsI(Tl), because the embedded plastic cell would interrupt the scintillation light path from the CsI(Tl) crystal to the PMT. In this study, several relative positions between the embedded plastic cell and the CsI(Tl) crystal have been evaluated using Monte Carlo modeling for their effects on coincidence detection efficiency and on X-ray and CE energy resolutions. The results indicate that the energy resolution and beta-gamma coincidence counting efficiency of X-rays and CEs depend significantly on the plastic cell location inside the CsI(Tl). The degradation of the X-ray and CE peak energy resolutions, due to the deterioration of light collection efficiency caused by the embedded cell, can be minimised. The optimum CE and X-ray energy resolution and beta-gamma coincidence efficiency, as well as ease of manufacturing, could be achieved by varying the embedded plastic cell position inside the CsI(Tl) and consequently setting the most efficient geometry. PMID:20598559

  18. Beta-gamma coincidence counting efficiency and energy resolution optimization by Geant4 Monte Carlo simulations for a phoswich well detector

    International Nuclear Information System (INIS)

    A single-channel phoswich well detector has been assessed and analysed in order to improve the beta-gamma coincidence measurement sensitivity for 131mXe and 133mXe. This newly designed phoswich well detector consists of a plastic cell (BC-404) embedded in a CsI(Tl) crystal coupled to a photomultiplier tube (PMT). It can be used to distinguish the 30.0-keV X-ray signals of 131mXe and 133mXe using their unique coincidence signatures between the conversion electrons (CEs) and the 30.0-keV X-rays. The optimum coincidence efficiency signal depends on the energy resolutions of the two CE peaks, which could be affected by the relative position of the plastic cell with respect to the CsI(Tl), because the embedded plastic cell would interrupt the scintillation light path from the CsI(Tl) crystal to the PMT. In this study, several relative positions between the embedded plastic cell and the CsI(Tl) crystal have been evaluated using Monte Carlo modeling for their effects on coincidence detection efficiency and on X-ray and CE energy resolutions. The results indicate that the energy resolution and beta-gamma coincidence counting efficiency of X-rays and CEs depend significantly on the plastic cell location inside the CsI(Tl). The degradation of the X-ray and CE peak energy resolutions, due to the deterioration of light collection efficiency caused by the embedded cell, can be minimised. The optimum CE and X-ray energy resolution and beta-gamma coincidence efficiency, as well as ease of manufacturing, could be achieved by varying the embedded plastic cell position inside the CsI(Tl) and consequently setting the most efficient geometry.

  19. Efficient Markov chain Monte Carlo implementation of Bayesian analysis of additive and dominance genetic variances in noninbred pedigrees.

    Science.gov (United States)

    Waldmann, Patrik; Hallander, Jon; Hoti, Fabian; Sillanpää, Mikko J

    2008-06-01

    Accurate and fast computation of quantitative genetic variance parameters is of great importance in both natural and breeding populations. For experimental designs with complex relationship structures it can be important to include both additive and dominance variance components in the statistical model. In this study, we introduce a Bayesian Gibbs sampling approach for estimation of additive and dominance genetic variances in the traditional infinitesimal model. The method can handle general pedigrees without inbreeding. To optimize between computational time and good mixing of the Markov chain Monte Carlo (MCMC) chains, we used a hybrid Gibbs sampler that combines a single site and a blocked Gibbs sampler. The speed of the hybrid sampler and the mixing of the single-site sampler were further improved by the use of pretransformed variables. Two traits (height and trunk diameter) from a previously published diallel progeny test of Scots pine (Pinus sylvestris L.) and two large simulated data sets with different levels of dominance variance were analyzed. We also performed Bayesian model comparison on the basis of the posterior predictive loss approach. Results showed that models with both additive and dominance components had the best fit for both height and diameter and for the simulated data with high dominance. For the simulated data with low dominance, we needed an informative prior to avoid the dominance variance component becoming overestimated. The narrow-sense heritability estimates in the Scots pine data were lower compared to the earlier results, which is not surprising because the level of dominance variance was rather high, especially for diameter. In general, the hybrid sampler was considerably faster than the blocked sampler and displayed better mixing properties than the single-site sampler. PMID:18558655

  20. Efficient Markov Chain Monte Carlo Implementation of Bayesian Analysis of Additive and Dominance Genetic Variances in Noninbred Pedigrees

    Science.gov (United States)

    Waldmann, Patrik; Hallander, Jon; Hoti, Fabian; Sillanpää, Mikko J.

    2008-01-01

    Accurate and fast computation of quantitative genetic variance parameters is of great importance in both natural and breeding populations. For experimental designs with complex relationship structures it can be important to include both additive and dominance variance components in the statistical model. In this study, we introduce a Bayesian Gibbs sampling approach for estimation of additive and dominance genetic variances in the traditional infinitesimal model. The method can handle general pedigrees without inbreeding. To optimize between computational time and good mixing of the Markov chain Monte Carlo (MCMC) chains, we used a hybrid Gibbs sampler that combines a single site and a blocked Gibbs sampler. The speed of the hybrid sampler and the mixing of the single-site sampler were further improved by the use of pretransformed variables. Two traits (height and trunk diameter) from a previously published diallel progeny test of Scots pine (Pinus sylvestris L.) and two large simulated data sets with different levels of dominance variance were analyzed. We also performed Bayesian model comparison on the basis of the posterior predictive loss approach. Results showed that models with both additive and dominance components had the best fit for both height and diameter and for the simulated data with high dominance. For the simulated data with low dominance, we needed an informative prior to avoid the dominance variance component becoming overestimated. The narrow-sense heritability estimates in the Scots pine data were lower compared to the earlier results, which is not surprising because the level of dominance variance was rather high, especially for diameter. In general, the hybrid sampler was considerably faster than the blocked sampler and displayed better mixing properties than the single-site sampler. PMID:18558655

  1. A more efficient approach to parallel-tempered Markov-chain Monte Carlo for the highly structured posteriors of gravitational-wave signals

    Science.gov (United States)

    Farr, Benjamin; Kalogera, Vicky; Luijten, Erik

    2014-07-01

    We introduce a new Markov-chain Monte Carlo (MCMC) approach designed for the efficient sampling of highly correlated and multimodal posteriors. Parallel tempering, though effective, is a costly technique for sampling such posteriors. Our approach minimizes the use of parallel tempering, only applying it for a short time to build a proposal distribution that is based upon estimation of the kernel density and tuned to the target posterior. This proposal makes subsequent use of parallel tempering unnecessary, allowing all chains to be cooled to sample the target distribution. Gains in efficiency are found to increase with increasing posterior complexity, ranging from tens of percent in the simplest cases to over a factor of 10 for the more complex cases. Our approach is particularly useful in the context of parameter estimation of gravitational-wave signals measured by ground-based detectors, which is currently done through Bayesian inference with MCMC, one of the leading sampling methods. Posteriors for these signals are typically multimodal with strong nonlinear correlations, making sampling difficult. As we enter the advanced-detector era, improved sensitivities and wider bandwidths will drastically increase the computational cost of analyses, demanding more efficient search algorithms to meet these challenges.
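
    The core of the approach — build a kernel-density proposal from a short exploratory run, then sample with independence Metropolis-Hastings — can be sketched in one dimension. The bimodal toy target and the pre-drawn seed samples below stand in for the parallel-tempering phase; nothing here reflects actual gravitational-wave likelihoods.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)

def log_target(x):
    # Bimodal 1-D toy posterior (unnormalized) standing in for a highly
    # structured gravitational-wave parameter posterior.
    return np.logaddexp(-0.5 * ((x - 2.0) / 0.3) ** 2,
                        -0.5 * ((x + 2.0) / 0.3) ** 2)

# Stage 1: seed samples covering both modes; in the paper this comes
# from a short parallel-tempering exploration (faked here).
seed = np.concatenate([rng.normal(2.0, 0.4, 500), rng.normal(-2.0, 0.4, 500)])
kde = gaussian_kde(seed)             # the tuned proposal distribution

# Stage 2: independence Metropolis-Hastings with the KDE as proposal.
n, x = 20000, 0.0
chain = np.empty(n)
lp_x, lq_x = log_target(x), np.log(kde.evaluate([x])[0])
for i in range(n):
    xp = kde.resample(1)[0, 0]
    lp_p, lq_p = log_target(xp), np.log(kde.evaluate([xp])[0])
    # Acceptance ratio for an independence proposal q:
    #   min(1, pi(x') q(x) / (pi(x) q(x')))
    if np.log(rng.uniform()) < (lp_p - lp_x) + (lq_x - lq_p):
        x, lp_x, lq_x = xp, lp_p, lq_p
    chain[i] = x

print("fraction of samples in the +2 mode:", np.mean(chain > 0))
```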

  2. Calibration and validation of a model describing complete autotrophic nitrogen removal in a granular SBR system

    DEFF Research Database (Denmark)

    Vangsgaard, Anna Katrine; Mutlu, Ayten Gizem; Gernaey, Krist;

    2013-01-01

    steady-state in the biofilm system. For oxygen mass transfer coefficient (kLa) estimation, long-term data, removal efficiencies, and the stoichiometry of the reactions were used. For the dynamic calibration a pragmatic model fitting approach was used - in this case an iterative Monte Carlo based...... screening of the parameter space proposed by Sin et al. (2008) - to find the best fit of the model to dynamic data. Finally, the calibrated model was validated with an independent data set. CONCLUSION: The presented calibration procedure is the first customized procedure for this type of system and is...

  3. Optimization of Monte Carlo simulations

    OpenAIRE

    Bryskhe, Henrik

    2009-01-01

    This thesis considers several different techniques for optimizing Monte Carlo simulations. The Monte Carlo system used is Penelope, but most of the techniques are applicable to other systems. The two major techniques are the use of the graphics card to do geometry calculations, and ray tracing. Using the graphics card provides a very efficient way to do fast ray-triangle intersections. Ray tracing provides an approximation of Monte Carlo simulation but is much faster to perform. A program was ...

  4. Comparative study using Monte Carlo methods of the radiation detection efficiency of LSO, LuAP, GSO and YAP scintillators for use in positron emission imaging (PET)

    International Nuclear Information System (INIS)

    The radiation detection efficiency of four scintillators employed, or designed to be employed, in positron emission imaging (PET) was evaluated as a function of the crystal thickness by applying Monte Carlo methods. The scintillators studied were Lu2SiO5 (LSO), LuAlO3 (LuAP), Gd2SiO5 (GSO) and YAlO3 (YAP). Crystal thicknesses ranged from 0 to 50 mm. The study was performed via a previously generated photon-transport Monte Carlo code. All photon track and energy histories were recorded, and the energy transferred or absorbed in the scintillator medium was calculated together with the energy redistributed and retransported as secondary characteristic fluorescence radiation. Various parameters were calculated, e.g. the fraction of the incident photon energy absorbed, transmitted or redistributed as fluorescence radiation, the scatter-to-primary ratio, the photon and energy distribution within each scintillator block, etc. Most significantly, the fraction of the incident photon energy absorbed was found to increase with increasing crystal thickness, tending to form a plateau above 30 mm thickness. For LSO, LuAP, GSO and YAP scintillators, respectively, this fraction had the value of 44.8, 36.9 and 45.7% at the 10 mm thickness and 96.4, 93.2 and 96.9% at the 50 mm thickness. Within the plateau area, approximately 57-59%, 59-63%, 52-63% and 58-61% of this fraction was due to scattered and reabsorbed radiation for the LSO, GSO, YAP and LuAP scintillators, respectively. In all cases, a negligible fraction (<0.1%) of the absorbed energy was found to escape the crystal as fluorescence radiation.

  5. An Efficient Method of Reweighting and Reconstructing Monte Carlo Molecular Simulation Data for Extrapolation to Different Temperature and Density Conditions

    KAUST Repository

    Sun, Shuyu

    2013-06-01

    This paper introduces an efficient technique to generate new molecular simulation Markov chains for different temperature and density conditions, allowing rapid extrapolation of canonical ensemble averages over a range of temperatures and densities different from the original conditions where a single simulation was conducted. Information obtained from the original simulation is reweighted, and even reconstructed, in order to extrapolate our knowledge to the new conditions. Our technique allows not only extrapolation to a new temperature or density, but also double extrapolation to both a new temperature and a new density. The method was implemented for a Lennard-Jones fluid with structureless particles in the single-phase gas region. Extrapolation behaviour as a function of extrapolation range was studied. The limits of the extrapolation ranges showed a remarkable capability, especially along isochores, where only reweighting is required. Various factors that could affect the limits of the extrapolation ranges were investigated and compared. In particular, these limits were shown to be sensitive to the number of particles used and to the starting point where the simulation was originally conducted.
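
    Along an isochore the extrapolation needs only reweighting of the sampled configuration energies: the average of A at a new inverse temperature beta is sum(A_i w_i)/sum(w_i) with w_i = exp(-(beta - beta0) U_i). A minimal sketch with a stand-in energy sample instead of real Lennard-Jones configurations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Suppose a single NVT Monte Carlo run at beta0 produced configuration
# energies U_i (hypothetical Gaussian stand-ins here).
beta0 = 1.0
U = rng.normal(-500.0, 15.0, 100000)

def reweighted_mean(A, U, dbeta):
    # Canonical reweighting; subtract the max exponent for stability.
    logw = -dbeta * U
    w = np.exp(logw - logw.max())
    return np.sum(A * w) / np.sum(w)

# Extrapolate the mean energy to nearby temperatures along the isochore.
for beta in (0.95, 1.0, 1.05):
    print(f"beta={beta:.2f}  <U> ~ {reweighted_mean(U, U, beta - beta0):.1f}")
```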

  6. The MINOS calibration detector

    International Nuclear Information System (INIS)

    This paper describes the MINOS calibration detector (CalDet) and the procedure used to calibrate it. The CalDet, a scaled-down but functionally equivalent model of the MINOS Far and Near detectors, was exposed to test beams in the CERN PS East Area during 2001-2003 to establish the response of the MINOS calorimeters to hadrons, electrons and muons in the range 0.2-10 GeV/c. The CalDet measurements are used to fix the energy scale and constrain Monte Carlo simulations of MINOS

  7. TARC: Carlo Rubbia's Energy Amplifier

    CERN Multimedia

    Laurent Guiraud

    1997-01-01

    Transmutation by Adiabatic Resonance Crossing (TARC) is Carlo Rubbia's energy amplifier. This CERN experiment demonstrated that long-lived fission fragments, such as 99Tc, can be efficiently destroyed.

  8. Calibration of HPGe detector for flowing sample neutron activation analysis

    International Nuclear Information System (INIS)

    This work is concerned with the calibration of the HPGe detector used in the flowing-sample neutron activation analysis technique. The optimum counting configuration and half-life based correction factors have been estimated using Monte Carlo computer simulations. Considering detection efficiency, sample volume, and the flow type around the detector, the optimum geometry was achieved using a 4 mm diameter hose rolled into a spiral around the detector. The derived results showed that the half-life based efficiency correction factors depend strongly on the sample flow rate and the isotope half-life. (author)
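
    The half-life and flow-rate dependence comes down to activity build-up during the in-flux residence time and decay during transit to the detector. The sketch below uses that simplified two-stage picture with hypothetical times; the paper's actual correction factors come from Monte Carlo simulation of the full geometry.

```python
import numpy as np

def flow_correction(half_life_s, t_in_flux_s, t_transit_s):
    # Simplified half-life dependent factor for a flowing sample:
    # build-up toward saturation while in the neutron flux, then decay
    # during transit to the detector.  All inputs are hypothetical.
    lam = np.log(2.0) / half_life_s
    saturation = 1.0 - np.exp(-lam * t_in_flux_s)   # build-up in the flux
    transit    = np.exp(-lam * t_transit_s)         # decay on the way
    return saturation * transit

# Short-lived isotopes are penalised most by the transit delay:
for t_half in (5.0, 60.0, 3600.0):
    print(t_half, flow_correction(t_half, t_in_flux_s=10.0, t_transit_s=4.0))
```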

  9. Efficient simultaneous reverse Monte Carlo modeling of pair-distribution functions and extended x-ray-absorption fine structure spectra of crystalline disordered materials

    Science.gov (United States)

    Németh, Károly; Chapman, Karena W.; Balasubramanian, Mahalingam; Shyam, Badri; Chupas, Peter J.; Heald, Steve M.; Newville, Matt; Klingler, Robert J.; Winans, Randall E.; Almer, Jonathan D.; Sandi, Giselle; Srajer, George

    2012-02-01

    An efficient implementation of simultaneous reverse Monte Carlo (RMC) modeling of pair distribution function (PDF) and EXAFS spectra is reported. This implementation is an extension of the technique established by Krayzman et al. [J. Appl. Cryst. 42, 867 (2009)] in the sense that it enables simultaneous real-space fitting of x-ray PDF, with accurate treatment of the Q-dependence of the scattering cross-sections, and EXAFS, with multiple photoelectron scattering included. The extension also allows for atom swaps during EXAFS fits, thereby enabling modeling of the effects of chemical disorder, such as migrating atoms and vacancies. Significant acceleration of EXAFS computation is achieved via discretization of effective path lengths and subsequent reduction of operation counts. The validity and accuracy of the approach are illustrated on small atomic clusters and on 5500-9000 atom models of bcc-Fe and α-Fe2O3. The accuracy gains of combined simultaneous EXAFS and PDF fits are pointed out against PDF-only and EXAFS-only RMC fits. Our modeling approach may be widely used in PDF and EXAFS based investigations of disordered materials.

  10. IM3D: A parallel Monte Carlo code for efficient simulations of primary radiation displacements and damage in 3D geometry

    Science.gov (United States)

    Li, Yong Gang; Yang, Yang; Short, Michael P.; Ding, Ze Jun; Zeng, Zhi; Li, Ju

    2015-12-01

    SRIM-like codes have limitations in describing general 3D geometries when modeling radiation displacements and damage in nanostructured materials. A universal, computationally efficient and massively parallel 3D Monte Carlo code, IM3D, has been developed with excellent parallel scaling performance. IM3D is based on fast indexing of scattering integrals and the SRIM stopping power database, and allows the user a choice of Constructive Solid Geometry (CSG) or Finite Element Triangle Mesh (FETM) methods for constructing 3D shapes and microstructures. For 2D films and multilayers, IM3D perfectly reproduces SRIM results, and can be ∼10² times faster in serial execution and >10⁴ times faster using parallel computation. For 3D problems, it provides a fast approach for analyzing the spatial distributions of primary displacements and defect generation under ion irradiation. Herein we also provide a detailed discussion of our open-source collision cascade physics engine, revealing the true meaning and limitations of the “Quick Kinchin-Pease” and “Full Cascades” options. The issues of femtosecond to picosecond timescales in defining displacement versus damage, and the limitations of the displacements per atom (DPA) unit in quantifying radiation damage (such as its inadequacy in quantifying the degree of chemical mixing), are discussed.

  11. Krypton calibration of time projection chambers of the NA61/SHINE experiment

    CERN Document Server

    Naskret, Michal

    The NA61/SHINE experiment at CERN is searching for the critical point of the phase transition between quark-gluon plasma and hadronic matter. To do so it uses a very precise apparatus, the Time Projection Chamber (TPC), whose main task is to find the trajectories of particles created in relativistic collisions. In order to improve the efficiency of the TPCs, we introduce a calibration using radioactive krypton gas. Simulating events in a TPC chamber through the decay of excited krypton atoms gives a spectrum, which is then fitted to a model krypton spectrum from a Monte Carlo simulation. The data obtained in this way serve to identify malfunctioning electronics in the TPCs. Thanks to the krypton calibration we can create a map of pad-by-pad gains. In this thesis I will describe in detail the NA61 experimental setup, the krypton calibration procedure, the calibration algorithm, and results for recent calibration runs.

  12. U.S. Department Of Energy's nuclear engineering education research: highlights of recent and current research-I. 2. Monte Carlo Characterization of a Highly Efficient Photon Detector

    International Nuclear Information System (INIS)

    computational tool for the particle transport through the detector geometry. The detector geometry was implemented as a component module within the EGS4/BEAM Monte Carlo code. Each geometric component can be assigned different dimensions and materials. To validate the Monte Carlo calculation, the calculated and measured responses of the detector to a 44-cm-long and 3.56-cm-wide 6-MV fan beam (at the iso-center) were compared (Fig. 1). Although the incident photon fluence intensity has the shape of a centered triangle function along the fan beam, the dose per particle in xenon exhibits a sharp increase with increasing distance from the center of the detector. At larger distances, the response profile drops with the incident intensity. As measures of efficiency, the quantum efficiency QE (i.e., the probability of detecting a single incident quantum), and the detective quantum efficiency at zero frequency DQE(0) were calculated. The results are shown in Table I. It is clearly demonstrated that the out-of-focus position of the detector results in a higher detection efficiency, as the geometrical cross-section for the tungsten plates 'seen' by the incident photons is much larger than for the in-focus position. Compared to other technologies used for portal imaging in radiotherapy (metal/phosphor screens, or indirect and direct active matrix arrays), the efficiencies are one order of magnitude higher. A separate calculation and measurement showed that the line-spread functions (and the corresponding modulation transfer functions) were nearly independent of the detector location even when placed out of focus with the photon source. This is important to guarantee a spatially independent (shift invariant) response of the detector. In conclusion, the combination of a dense, high-atomic material with a low-density signal-generating medium might serve as a model for a future generation of highly efficient photon radiation detectors. (authors)

  13. Determination of full-energy peak efficiency at the center position of a through-hole-type clover detector between 0.05 MeV and 3.2 MeV by source measurements and Monte Carlo simulations

    International Nuclear Information System (INIS)

    Full-energy peak efficiency at the center position of a through-hole-type clover detector was determined by the measurement of standard sources and by Monte Carlo simulation. The coincidence summing under the large-solid-angle condition was corrected using Monte Carlo calculation based on the specific decay scheme for 133Ba, 152,154Eu, and 56Co. This allowed the peak efficiency to be extended from 0.05 MeV to 3.2 MeV with an approximate uncertainty of 3%. - Highlights: • Novel Ge detector having large solid angle for γ-ray measurements was developed. • Correction for coincidence summing was performed with measurements and simulation. • Peak efficiency was determined between 0.05 MeV and 3.2 MeV

  14. EUROMET action 428: transfer of Ge detector efficiency calibration from point-source geometry to other geometries

    Energy Technology Data Exchange (ETDEWEB)

    Lepy, M.Ch

    2000-07-01

    The EUROMET project 428 examines efficiency transfer computation for Ge gamma-ray spectrometers when the efficiency is known for a reference point-source geometry, in the 60 keV to 2 MeV energy range. Different methods are used for this, such as Monte Carlo simulation or semi-empirical computation. The exercise compares the application of these methods to the same selected experimental cases to determine the limits of use versus the required accuracy. In order to examine the results carefully and derive information for improving the computation codes, the study was limited to a few simple cases, starting from an experimental efficiency calibration for a point source at 10 cm source-to-detector distance. The first part concerns the simplest case of geometry transfer, i.e., using point sources at three source-to-detector distances: 2, 5 and 20 cm; the second part deals with the transfer from point-source geometry to cylindrical geometry with three different matrices. The general results show that the deviations between the computed results and the measured efficiencies are for the most part within 10%. The quality of the results is rather inhomogeneous and shows that these codes cannot be used directly for metrological purposes. However, most of them are operational for routine measurements when efficiency uncertainties of 5-10% are sufficient. (author)

  15. An integrated approach to the simultaneous selection of variables, mathematical pre-processing and calibration samples in partial least-squares multivariate calibration.

    Science.gov (United States)

    Allegrini, Franco; Olivieri, Alejandro C

    2013-10-15

    A new optimization strategy for multivariate partial-least-squares (PLS) regression analysis is described. It was achieved by integrating three efficient strategies to improve PLS calibration models: (1) variable selection based on ant colony optimization, (2) mathematical pre-processing selection by a genetic algorithm, and (3) sample selection through a distance-based procedure. Outlier detection has also been included as part of the model optimization. All the above procedures have been combined into a single algorithm, whose aim is to find the best PLS calibration model within a Monte Carlo-type philosophy. Simulated and experimental examples are employed to illustrate the success of the proposed approach. PMID:24054659

  16. The correct and incorrect way to calibrate a Compton suppression counting system for gamma-ray efficiency

    International Nuclear Information System (INIS)

    Gamma-ray efficiency calculations for a germanium detector have been made for a Compton suppression system. Results have shown that for radionuclides that have gamma rays in coincidence, the photopeaks can be severely depressed, leading to erroneous results. While this can be overcome in routine neutron activation analysis using a comparator method, special consideration must be given to determining the suppression for coincident gamma rays when calculating the efficiency curve and radionuclide activities. This is especially important for users of the k0 method and for fission-product identification using Compton suppression methods. (author)

  17. Development of an absolute method for efficiency calibration of a coaxial HPGe detector for large volume sources

    Science.gov (United States)

    Ortiz-Ramírez, Pablo C.

    2015-09-01

    In this work, an absolute method for determining the full-energy peak efficiency of a gamma spectroscopy system for voluminous sources is presented. The method was tested for a high-resolution coaxial HPGe detector and cylindrical homogeneous volume sources. The volume source is represented by a set of point sources filling its volume, and we found that its absolute efficiency can be determined as the average over the volume of the absolute efficiencies of these point sources. Experimentally, we measure the intrinsic efficiency as a function of source-detector position. Then, considering the solid angle and the attenuation of the gamma rays emitted towards the detector by each point source, considered as embedded in the source matrix, the absolute efficiency for each point source inside the volume was determined. The factor associated with the solid angle and the self-attenuation of photons in the sample was deduced from first principles without any mathematical approximation. The method was tested by determining the specific activity of 137Cs in cylindrical homogeneous sources, using IAEA reference materials with specific activities between 14.2 Bq/kg and 9640 Bq/kg at the time of the experiment. The results obtained show good agreement with the expected values; the relative difference was less than 7% in most cases. The main advantage of this method is that it does not require the use of expensive and hard-to-produce standard materials. In addition, it does not require matrix-effect corrections, which are the main source of error in this type of measurement, and it is easy to implement in any nuclear physics laboratory.
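
    The volume-averaging idea can be sketched numerically: sample points uniformly over the cylinder, attach a point efficiency and a self-attenuation factor to each, and average. The point-efficiency model and the attenuation treatment below are crude hypothetical stand-ins; the paper derives these factors from first principles without such approximations.

```python
import numpy as np

rng = np.random.default_rng(3)

def point_efficiency(r, z):
    # Hypothetical intrinsic-efficiency times solid-angle model for a
    # point at radius r and height z above the detector face (on-axis
    # detector, 3 cm from face to crystal centre; all values assumed).
    d2 = r**2 + (z + 3.0)**2
    return 0.5 * 3.0**2 / (4.0 * d2)   # ~ (det. radius)^2 / (4 d^2)

mu = 0.12          # linear attenuation of the source matrix, cm^-1 (assumed)
R, H = 3.5, 4.0    # source radius and height, cm (assumed)

# Monte Carlo volume average; self-attenuation is crudely approximated
# by the vertical depth of matrix below each sampled point.
n = 200000
r = R * np.sqrt(rng.uniform(size=n))   # uniform over the disc
z = H * rng.uniform(size=n)
eff = point_efficiency(r, z) * np.exp(-mu * z)
print(f"full-energy peak efficiency ~ {eff.mean():.4f}")
```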

  18. Development of highly efficient proton recoil counter telescope for absolute measurement of neutron fluences in quasi-monoenergetic neutron calibration fields of high energy

    International Nuclear Information System (INIS)

    Precise calibration of monitors and dosimeters for use with high-energy neutrons necessitates reliable and accurate evaluation of neutron fluences at a reference point. A highly efficient Proton Recoil counter Telescope (PRT) for absolute measurements at a reference point was developed to evaluate neutron fluences in quasi-monoenergetic neutron fields. The relatively large design of the PRT componentry and the relatively thick (approximately 2 mm) polyethylene converter contributed to a high detection efficiency at the reference point over a large irradiation area at a long distance from the target. The polyethylene converter thickness was adjusted to maintain the same carbon density per unit area as the graphite converter, for easy background subtraction. The high detection efficiency and thickness adjustment allowed efficient absolute measurements of the neutron fluences with sufficient statistical precision over a short period of time. The neutron detection efficiencies of the PRT were evaluated using the MCNPX code as 2.61 × 10⁻⁶, 2.16 × 10⁻⁶ and 1.14 × 10⁻⁶ for the respective neutron peak energies of 45, 60 and 75 MeV. The neutron fluences were evaluated with an uncertainty within 6.5% using the measured data and the detection efficiencies. The PRT was also designed to be capable of simultaneously acquiring TOF data. The TOF data also increase the reliability of the neutron fluence measurements and provide useful information for interpreting the source of proton events.

  19. Tau Reconstruction, Energy Calibration and Identification at ATLAS

    CERN Document Server

    Trottier-McDonald, M; The ATLAS collaboration

    2011-01-01

Tau leptons play a central role in the LHC physics programme, in particular as an important signature in many Higgs boson and Supersymmetry searches. They are further used in Standard Model electroweak measurements, as well as detector-related studies like the determination of the missing transverse energy scale. Copious backgrounds from QCD processes call for both efficient identification of hadronically decaying tau leptons and large fake rejection. A solid understanding of the combined performance of the calorimeter and tracking detectors is also required. We present the current status of the tau reconstruction, energy calibration and identification with the ATLAS detector at the LHC. Identification efficiencies are measured in W→τν events in data and compared with predictions from Monte Carlo simulations, whereas the misidentification probabilities of QCD jets and electrons are determined from various jet-enriched data samples and from Z→ee events, respectively. The tau energy scale calibration i...

  20. Monte Carlo molecular simulations: improving the statistical efficiency of samples with the help of artificial evolution algorithms; Simulations moleculaires de Monte Carlo: amelioration de l'efficacite statistique de l'echantillonnage grace aux algorithmes d'evolution artificielle

    Energy Technology Data Exchange (ETDEWEB)

    Leblanc, B.

    2002-03-01

Molecular simulation aims at simulating particles in interaction, describing a physico-chemical system. When considering Markov chain Monte Carlo sampling in this context, we often meet the same problem of statistical efficiency as with Molecular Dynamics for the simulation of complex molecules (polymers, for example). The search for a correct sampling of the space of possible configurations with respect to the Boltzmann-Gibbs distribution is directly related to the statistical efficiency of such algorithms (i.e. the ability to rapidly provide uncorrelated states covering the whole configuration space). We investigated how to improve this efficiency with the help of Artificial Evolution (AE). AE algorithms form a class of stochastic optimization algorithms inspired by Darwinian evolution. Efficiency measures that can be turned into efficiency criteria were identified first, before identifying parameters that could be optimized. The relative frequencies of each type of Monte Carlo move, usually chosen empirically within reasonable ranges, were considered first. We combined parallel simulations with a 'genetic server' in order to dynamically improve the quality of the sampling as the simulations progress. Our results show that, in comparison with some reference settings, it is possible to improve the quality of samples with respect to the chosen criterion. The same algorithm was applied to the Parallel Tempering technique, in order to optimize at the same time the relative frequencies of Monte Carlo moves and the relative frequencies of swaps between sub-systems simulated at different temperatures. Finally, hints for further research on optimizing the choice of the additional temperatures are given. (author)
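A minimal sketch of the idea, assuming a toy efficiency criterion in place of the autocorrelation-based measures evaluated from (parallel) trial runs; the evolutionary loop tunes the relative Monte Carlo move frequencies:

```python
import numpy as np

rng = np.random.default_rng(0)

def efficiency_criterion(freqs):
    """Stand-in for the paper's criterion. In practice this would launch a
    short (parallel) MC run and return, e.g., an inverse autocorrelation
    time; here a toy function peaked at an arbitrary mixture is used."""
    target = np.array([0.5, 0.3, 0.2])
    return -np.sum((freqs - target) ** 2)

def normalize(v):
    v = np.clip(v, 1e-3, None)       # keep every move type selectable
    return v / v.sum()

# Simple (mu + lambda)-style evolution of the relative move frequencies.
population = [normalize(rng.random(3)) for _ in range(8)]
for generation in range(50):
    ranked = sorted(population, key=efficiency_criterion, reverse=True)
    parents = ranked[:4]
    children = [normalize(p + 0.05 * rng.normal(size=3)) for p in parents]
    population = parents + children

best = max(population, key=efficiency_criterion)
print("tuned relative move frequencies:", np.round(best, 3))
```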

  1. Potential of modern technologies for improvement of in-vivo calibration

    International Nuclear Information System (INIS)

Full text: In vivo counting is one of the preferred methods for the monitoring of nuclear workers exposed to a risk of internal contamination. However, some difficulties are still encountered with this technique, principally due to calibration conditions, leading to uncertainties and substantial systematic errors in the results. In consequence, significant corrections may need to be made to phantom-based calibration factors in order to obtain absolute calibration efficiencies applicable to a given individual. These corrections are particularly crucial for in vivo measurements of low-energy photon-emitting radionuclides (such as actinides) deposited in the lung. Indeed, previous work has demonstrated that it was desirable to develop a new method for calibrating in vivo measurement systems that is more sensitive to these types of variability, and demonstrated the possibility of such a calibration using the Monte Carlo technique. In the frame of the IDEA project, our research programme extended such investigations to the reconstruction of numerical anthropomorphic phantoms based on personal physiological data obtained by computed tomography (CT) and MRI. New procedures are being developed to take advantage of recent progress in image-processing codes. After scanning, they allow the direct and fast reconstruction of a realistic voxel phantom, the coupling of voxels with the same density and chemical composition into logical cells, and the conversion into computer files to be used on-line for MCNP (Monte Carlo N-Particle code) calculations. Results of the calculations and a comparison with traditional calibration methods are presented and discussed in this work. (author)

  2. RUN DMC: AN EFFICIENT, PARALLEL CODE FOR ANALYZING RADIAL VELOCITY OBSERVATIONS USING N-BODY INTEGRATIONS AND DIFFERENTIAL EVOLUTION MARKOV CHAIN MONTE CARLO

    International Nuclear Information System (INIS)

In the 20+ years of Doppler observations of stars, scientists have uncovered a diverse population of extrasolar multi-planet systems. A common technique for characterizing the orbital elements of these planets is Markov chain Monte Carlo (MCMC), using a Keplerian model with random-walk proposals paired with the Metropolis-Hastings algorithm. For roughly a couple of dozen planetary systems with Doppler observations, there are strong planet-planet interactions due to the system being in or near a mean-motion resonance (MMR). An N-body model is often required to describe these systems accurately. Further computational difficulties arise from exploring a high-dimensional parameter space (∼7 × number of planets) that can have complex parameter correlations, particularly for systems near an MMR. To surmount these challenges, we introduce a differential evolution MCMC (DEMCMC) algorithm applied to radial velocity data while incorporating self-consistent N-body integrations. Our Radial velocity Using N-body DEMCMC (RUN DMC) algorithm improves upon the random-walk proposal distribution of traditional MCMC by using an ensemble of Markov chains to adaptively improve the proposal distribution. RUN DMC can sample more efficiently from high-dimensional parameter spaces that have strong correlations between model parameters. We describe the methodology behind the algorithm, along with results of tests for accuracy and performance. We find that most algorithm parameters have a modest effect on the rate of convergence. However, the size of the ensemble can have a strong effect on performance. We show that the optimal choice depends on the number of planets in a system, as well as on the computer architecture used and the resulting extent of parallelization. While the exact choices of optimal algorithm parameters will inevitably vary due to the details of individual planetary systems (e.g., number of planets, number of observations, orbital periods, and signal
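The core of any DEMCMC sampler is the differential-evolution proposal, in which a chain is perturbed along the difference of two other randomly chosen chains. A self-contained sketch with a toy posterior standing in for the N-body radial-velocity likelihood:

```python
import numpy as np

rng = np.random.default_rng(42)

def log_post(theta):
    """Toy log-posterior (standard normal). In RUN DMC this would wrap the
    self-consistent N-body integration and the radial-velocity likelihood."""
    return -0.5 * np.sum(theta ** 2)

n_chains, n_dim = 16, 7             # ~7 parameters per planet; one planet here
chains = rng.normal(size=(n_chains, n_dim))
logp = np.array([log_post(c) for c in chains])
gamma = 2.38 / np.sqrt(2 * n_dim)   # commonly used differential-evolution scale

for step in range(2000):
    for i in range(n_chains):
        # Perturb chain i along the difference of two other random chains.
        a, b = rng.choice([j for j in range(n_chains) if j != i], size=2, replace=False)
        prop = chains[i] + gamma * (chains[a] - chains[b]) + 1e-4 * rng.normal(size=n_dim)
        lp = log_post(prop)
        if np.log(rng.random()) < lp - logp[i]:   # Metropolis-Hastings acceptance
            chains[i], logp[i] = prop, lp

print("posterior mean ~", np.round(chains.mean(axis=0), 2))
```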

  3. Absolute calibration in vivo measurement systems

    International Nuclear Information System (INIS)

    Lawrence Livermore National Laboratory (LLNL) is currently investigating a new method for obtaining absolute calibration factors for radiation measurement systems used to measure internally deposited radionuclides in vivo. Absolute calibration of in vivo measurement systems will eliminate the need to generate a series of human surrogate structures (i.e., phantoms) for calibrating in vivo measurement systems. The absolute calibration of in vivo measurement systems utilizes magnetic resonance imaging (MRI) to define physiological structure, size, and composition. The MRI image provides a digitized representation of the physiological structure, which allows for any mathematical distribution of radionuclides within the body. Using Monte Carlo transport codes, the emission spectrum from the body is predicted. The in vivo measurement equipment is calibrated using the Monte Carlo code and adjusting for the intrinsic properties of the detection system. The calibration factors are verified using measurements of existing phantoms and previously obtained measurements of human volunteers. 8 refs

  4. Calibration with MCNP of NaI detector for the determination of natural radioactivity levels in the field.

    Science.gov (United States)

    Cinelli, Giorgia; Tositti, Laura; Mostacci, Domiziano; Baré, Jonathan

    2016-05-01

In view of assessing natural radioactivity with on-site quantitative gamma spectrometry, the efficiency calibration of NaI(Tl) detectors is investigated. A calibration based on Monte Carlo simulation of the detector response is proposed, to render reliable quantitative analysis practicable in field campaigns. The method is developed with reference to contact geometry, in which measurements are taken by placing the NaI(Tl) probe directly against the solid source to be analyzed. The Monte Carlo code used for the simulations was MCNP. Experimental verification of the goodness of the calibration is obtained by comparison with appropriate standards, as reported. On-site measurements yield a quick quantitative assessment of the natural radioactivity levels present (40K, 238U and 232Th). On-site gamma spectrometry can prove particularly useful insofar as it provides information on materials from which samples cannot be taken. PMID:26913974

  5. Parallelizing Monte Carlo with PMC

    Energy Technology Data Exchange (ETDEWEB)

    Rathkopf, J.A.; Jones, T.R.; Nessett, D.M.; Stanberry, L.C.

    1994-11-01

    PMC (Parallel Monte Carlo) is a system of generic interface routines that allows easy porting of Monte Carlo packages of large-scale physics simulation codes to Massively Parallel Processor (MPP) computers. By loading various versions of PMC, simulation code developers can configure their codes to run in several modes: serial, Monte Carlo runs on the same processor as the rest of the code; parallel, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on other MPP processor(s); distributed, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on a different machine. This multi-mode approach allows maintenance of a single simulation code source regardless of the target machine. PMC handles passing of messages between nodes on the MPP, passing of messages between a different machine and the MPP, distributing work between nodes, and providing independent, reproducible sequences of random numbers. Several production codes have been parallelized under the PMC system. Excellent parallel efficiency in both the distributed and parallel modes results if sufficient workload is available per processor. Experiences with a Monte Carlo photonics demonstration code and a Monte Carlo neutronics package are described.
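One of the services such a layer must provide, independent and reproducible random-number sequences per worker, can be illustrated generically; this sketch uses NumPy seed-sequence spawning and a process pool, not the PMC interface itself:

```python
import numpy as np
from multiprocessing import Pool

def tally(seed_seq):
    """Toy Monte Carlo workload: estimate pi from 1e6 random points.
    Each worker gets its own non-overlapping random stream."""
    rng = np.random.default_rng(seed_seq)
    xy = rng.random((1_000_000, 2))
    return 4.0 * np.mean(np.sum(xy ** 2, axis=1) < 1.0)

if __name__ == "__main__":
    # Spawning child seed sequences guarantees independent, reproducible
    # streams regardless of how many workers are used.
    streams = np.random.SeedSequence(2024).spawn(8)
    with Pool(8) as pool:
        estimates = pool.map(tally, streams)
    print(np.mean(estimates))
```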

  6. Calibration uncertainty

    DEFF Research Database (Denmark)

    Heydorn, Kaj; Anglov, Thomas

    2002-01-01

Methods recommended by the International Standardization Organisation and Eurachem are not satisfactory for the correct estimation of calibration uncertainty. A novel approach is introduced and tested on actual calibration data for the determination of Pb by ICP-AES. The improved calibration ... uncertainty was verified from independent measurements of the same sample by demonstrating statistical control of analytical results and the absence of bias. The proposed method takes into account uncertainties of the measurement, as well as of the amount of calibrant. It is applicable to all types of...

  7. The establishment of quasi-monoenergetic 6-7 MeV γ-ray reference radiation field and the absolute calibration for γ-ray detector efficiency at Eγ = 6.13 MeV

    International Nuclear Information System (INIS)

The reference radiation field of quasi-monoenergetic 6-7 MeV γ-rays and its relevant characteristics were described, concentrating on the absolute calibration method for NaI(Tl) γ-detector efficiency at Eγ = 6.13 MeV. The experimental results were given and their errors were discussed

  8. Neutronic analysis for in situ calibration of ITER in-vessel neutron flux monitor with microfission chamber

    International Nuclear Information System (INIS)

Highlights: ► Neutronic analysis is performed for in situ calibration of the microfission chamber (MFC). ► The source transfer system designed in this study does not affect MFC detection efficiency. ► The rotation method is appropriate for full calibration because the calibration time is shorter. ► However, the point-by-point method should be performed to check the accuracy of the MCNP model. ► Combining the two methods is important for performing the in situ calibration efficiently. -- Abstract: Neutronic analysis is performed for in situ calibration of the microfission chamber (MFC), which is the in-vessel neutron-flux monitor at the International Thermonuclear Experimental Reactor (ITER). We present the design of the transfer system for a neutron generator, which consists of two toroidal rings and a neutron-generator holder, and estimate the effect of the system on MFC detection efficiency through neutronic analysis with the Monte Carlo N-particle (MCNP) code. The result indicates that the designed transfer system does not affect MFC detection efficiency. In situ calibrations by the point-by-point method and by the rotation method are also simulated and compared by neutronic analysis. The results indicate that the rotation method is appropriate for full calibration because the calibration time is shorter (all neutron-flux monitors can be calibrated simultaneously). However, the rotation method makes it difficult to compare the results with the neutronic analysis, so the point-by-point method should be performed prior to full calibration to check the accuracy of the MCNP model

9. Energy calibration for the INDRA multidetector using recoil protons from 12C+1H scattering

    CERN Document Server

    Trzcinski, A; Müller, W F J; Trautmann, W; Zwieglinski, B; Auger, G; Bacri, C O; Begemann-Blaich, M L; Bellaize, N; Bittiger, R; Bocage, F; Borderie, B; Bougault, R; Bouriquet, B; Buchet, P; Charvet, J L; Chbihi, A; Dayras, R; Doré, D; Durand, D; Frankland, J D; Galíchet, E; Gourio, D; Guinet, D; Hudan, S; Hurst, B; Lautesse, P; Lavaud, F; Laville, J L; Leduc, C; Lefèvre, A; Legrain, R; López, O; Lynen, U; Nalpas, L; Orth, H; Plagnol, E; Rosato, E; Saija, A; Schwarz, C; Sfienti, C; Steckmeyer, J C; Tabacaru, G; Tamain, B; Turzó, K; Vient, E; Vigilante, M; Volant, C

    2003-01-01

An efficient method of energy scale calibration for the CsI(Tl) modules of the INDRA multidetector (rings 6-12), using elastic and inelastic 12C+1H scattering at E(12C) = 30 MeV per nucleon, is presented. Background-free spectra for the binary channels are generated by requiring the coincident detection of the light and heavy ejectiles. The gain parameter of the calibration curve is obtained by fitting the proton total-charge spectra to the spectra predicted with Monte Carlo simulations using tabulated cross-section data. The method has been applied in multifragmentation experiments with INDRA at GSI.

  10. A Cherenkov radiation source for photomultiplier calibration

    International Nuclear Information System (INIS)

The Sudbury Neutrino Observatory (SNO) will detect the Cherenkov radiation from relativistic electrons produced by neutrino interactions in a heavy water (D2O) target. A Cherenkov radiation source is required to enable in situ calibration of the photomultipliers' efficiency for detecting this radiation. We discuss such a source, based upon the encapsulation of a 90Sr solution in a glass bulb, and describe its construction. The Cherenkov light output of this source is computed using the theory of Frank and Tamm, and an EGS4 Monte Carlo code is used to propagate the beta decay electrons. As an example of the use of this source, the single photoelectron counting efficiency of an EMI 9350 photomultiplier was measured as a function of the applied voltages, given that the quantum efficiency of its photocathode was known. The single photoelectron counting efficiencies obtained were in the range 73-87%, consistent with the measurements of other authors using photomultipliers of a broadly similar design. ((orig.))
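The Frank-Tamm photon yield used for the source's light output can be written down directly; a small sketch with an assumed refractive index and wavelength band (illustrative values, not the paper's):

```python
import math

ALPHA = 1.0 / 137.036   # fine-structure constant

def cherenkov_yield(beta, n, lam1_nm, lam2_nm):
    """Photons emitted per metre of electron track in the band [lam1, lam2],
    from the Frank-Tamm result for a constant refractive index n."""
    if beta * n <= 1.0:
        return 0.0                           # below the Cherenkov threshold
    lam1, lam2 = lam1_nm * 1e-9, lam2_nm * 1e-9
    return 2.0 * math.pi * ALPHA * (1.0 / lam1 - 1.0 / lam2) * (1.0 - 1.0 / (beta * n) ** 2)

# Assumed values: a water-like refractive index and a typical bialkali
# photocathode band; these numbers are illustrative only.
print(f"{cherenkov_yield(0.9, 1.34, 300.0, 600.0):.0f} photons/m")
```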

  11. Application of PHOTON simulation software on calibration of HPGe detectors

    Science.gov (United States)

    Nikolic, J.; Puzovic, J.; Todorovic, D.; Rajacic, M.

    2015-11-01

One of the major difficulties in gamma spectrometry of voluminous environmental samples is the efficiency calibration of the detectors used for the measurement. The direct measurement of different calibration sources containing isolated γ-ray emitters within the energy range of interest, and subsequent fitting to a parametric function, is the most accurate but at the same time most complicated and time-consuming method of efficiency calibration. Many other methods have been developed over time, some of them using Monte Carlo simulation. One such method is PHOTON, a dedicated and user-friendly program developed to simulate the passage of photons through different media with different geometries. This program was used for the efficiency calibration of three HPGe detectors routinely used in the Laboratory for Environment and Radiation Protection of the Institute for Nuclear Sciences Vinca, Belgrade, Serbia. The simulation produced the spectral response of the detectors for fixed energy and for different sample geometries and matrices. The efficiencies thus obtained were compared with the values obtained by measurement of secondary reference materials and with the results of a GEANT4 simulation, in order to establish whether the simulated values agree with the experimental ones. To further analyze the results, a realistic measurement of materials provided by the IAEA within different interlaboratory proficiency tests was performed. The activities obtained using the simulated efficiencies were compared with the reference values provided by the organizer. Good agreement was obtained in the mid-energy section of the spectrum, while at low energies the lack of some parameters in the simulation libraries produced unacceptable discrepancies.

  12. Application of PHOTON simulation software on calibration of HPGe detectors

    International Nuclear Information System (INIS)

One of the major difficulties in gamma spectrometry of voluminous environmental samples is the efficiency calibration of the detectors used for the measurement. The direct measurement of different calibration sources containing isolated γ-ray emitters within the energy range of interest, and subsequent fitting to a parametric function, is the most accurate but at the same time most complicated and time-consuming method of efficiency calibration. Many other methods have been developed over time, some of them using Monte Carlo simulation. One such method is PHOTON, a dedicated and user-friendly program developed to simulate the passage of photons through different media with different geometries. This program was used for the efficiency calibration of three HPGe detectors routinely used in the Laboratory for Environment and Radiation Protection of the Institute for Nuclear Sciences Vinca, Belgrade, Serbia. The simulation produced the spectral response of the detectors for fixed energy and for different sample geometries and matrices. The efficiencies thus obtained were compared with the values obtained by measurement of secondary reference materials and with the results of a GEANT4 simulation, in order to establish whether the simulated values agree with the experimental ones. To further analyze the results, a realistic measurement of materials provided by the IAEA within different interlaboratory proficiency tests was performed. The activities obtained using the simulated efficiencies were compared with the reference values provided by the organizer. Good agreement was obtained in the mid-energy section of the spectrum, while at low energies the lack of some parameters in the simulation libraries produced unacceptable discrepancies

  13. Improvement of personalized Monte Carlo-aided direct internal contamination monitoring: optimization of calculation times and measurement methodology for the establishment of activity distribution

    International Nuclear Information System (INIS)

To optimize the monitoring of female workers using in vivo spectrometry measurements, it is necessary to correct the typical calibration coefficients obtained with the Livermore male physical phantom. To do so, numerical calibrations based on the use of Monte Carlo simulations combined with anthropomorphic 3D phantoms were used. Such computational calibrations require, on the one hand, the development of female phantoms representative of different heights and morphologies and, on the other hand, rapid and reliable Monte Carlo calculations. A library of female torso models was hence developed by fitting the weight of the internal organs and breasts according to the body height and to relevant plastic surgery recommendations. This library was then used to perform a numerical calibration of the AREVA NC La Hague in vivo counting installation. Moreover, the morphology-induced variations of counting efficiency with energy were expressed as equations, and recommendations were given for correcting the typical calibration coefficients for any monitored female worker as a function of body height and breast size. Meanwhile, variance reduction techniques and geometry simplification operations were considered to accelerate the simulations. Furthermore, to determine the activity mapping in the case of complex contaminations, a method combining Monte Carlo simulations with in vivo measurements was developed. It consists of performing several spectrometry measurements with different detector positions; the contribution of each contaminated organ to the count is then assessed from Monte Carlo calculations. The in vivo measurements performed at LEDI, CIEMAT and KIT have demonstrated the effectiveness of the method and highlighted the valuable contribution of Monte Carlo simulations to a more detailed analysis of spectrometry measurements. Thus, a more precise estimate of the activity distribution is obtained in the case of an internal contamination. (author)

  14. A variable acceleration calibration system

    Science.gov (United States)

    Johnson, Thomas H.

    2011-12-01

A variable acceleration calibration system that applies loads using gravitational and centripetal acceleration serves as an alternative, efficient and cost-effective method for calibrating internal wind tunnel force balances. Two proof-of-concept variable acceleration calibration systems were designed, fabricated and tested, with the NASA UT-36 force balance serving as the test balance for the calibration experiments. The variable acceleration calibration systems are shown to be capable of performing three-component calibration experiments with an approximate applied load error on the order of 1% of the full-scale calibration loads. Sources of error are identified using experimental design methods and a propagation-of-uncertainty analysis. Three types of uncertainty are identified for the systems, attributed to prediction error, calibration error and pure error. Angular velocity uncertainty is shown to be the largest identified source of prediction error. The calibration uncertainties using a production variable-acceleration-based system are shown to be potentially equivalent to those of current methods; a production-quality system can be realized using lighter materials and more precise instrumentation. Further research is needed to account for balance deflection, forcing effects due to vibration, and large tare loads. A gyroscope measurement technique is shown to be capable of resolving the balance deflection angle. Long-term research objectives include a demonstration of a six-degree-of-freedom calibration and a large-capacity balance calibration.
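For a single component, the centripetally applied load and the dominant role of angular velocity uncertainty follow from F = m ω² r; a worked sketch with invented numbers:

```python
import math

# Hypothetical single-component example: the load a calibration mass applies
# through centripetal acceleration, and how the angular velocity uncertainty
# (the dominant prediction error identified above) propagates into it.
m = 2.0        # calibration mass, kg (assumed)
r = 1.5        # radius from the spin axis, m (assumed)
omega = 10.0   # angular velocity, rad/s (assumed)
u_omega = 0.02 # absolute uncertainty on omega, rad/s (assumed)

force = m * omega ** 2 * r            # centripetal load, N
rel_u_force = 2.0 * (u_omega / omega) # F ~ omega^2, so the relative error doubles
print(f"applied load = {force:.1f} N  (+/- {100 * rel_u_force:.2f} %)")
```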

  15. Calibration method for a in vivo measurement system using mathematical simulation of the radiation source and the detector

    International Nuclear Information System (INIS)

A Monte Carlo program which uses a voxel phantom has been developed to simulate in vivo measurement systems for calibration purposes. The calibration method presented here employs a mathematical phantom, produced in the form of volume elements (voxels), obtained through magnetic resonance images of the human body. The method uses the Monte Carlo technique to simulate the tissue contamination, the transport of the photons through the tissues and the detection of the radiation. The program simulates the transport and detection of photons between 0.035 and 2 MeV and uses, for the body representation, a voxel phantom with a format of 871 slices, each of 277 x 148 picture elements. The Monte Carlo code was applied to the calibration of in vivo systems and to the estimation of differences in counting efficiency between homogeneous and non-homogeneous radionuclide distributions in the lung. Calculations show a factor of 20 in counting efficiency between 241Am deposited at the back and at the front of the lung. The program was also used to estimate the 137Cs body burden of an internally contaminated individual, counted with an 8 x 4 NaI(Tl) detector, and the 241Am body burden of another internally contaminated individual, who was counted using a planar germanium detector. (author)

  16. Calibration of the Super-Kamiokande Detector

    CERN Document Server

    Abe, K; Iida, T; Iyogi, K; Kameda, J; Kishimoto, Y; Koshio, Y; Marti, Ll; Miura, M; Moriyama, S; Nakahata, M; Nakano, Y; Nakayama, S; Obayashi, Y; Sekiya, H; Shiozawa, M; Suzuki, Y; Takeda, A; Takenaga, Y; Tanaka, H; Tomura, T; Ueno, K; Wendell, R A; Yokozawa, T; Irvine, T J; Kaji, H; Kajita, T; Kaneyuki, K; Lee, K P; Nishimura, Y; Okumura, K; McLachlan, T; Labarga, L; Kearns, E; Raaf, J L; Stone, J L; Sulak, L R; Berkman, S; Tanaka, H A; Tobayama, S; Goldhaber, M; Bays, K; Carminati, G; Kropp, W R; Mine, S; Renshaw, A; Smy, M B; Sobel, H W; Ganezer, K S; Hill, J; Keig, W E; Jang, J S; Kim, J Y; Lim, I T; Hong, N; Akiri, T; Albert, J B; Himmel, A; Scholberg, K; Walter, C W; Wongjirad, T; Ishizuka, T; Tasaka, S; Learned, J G; Matsuno, S; Smith, S N; Hasegawa, T; Ishida, T; Ishii, T; Kobayashi, T; Nakadaira, T; Nakamura, K; Nishikawa, K; Oyama, Y; Sakashita, K; Sekiguchi, T; Tsukamoto, T; Suzuki, A T; Takeuchi, Y; Huang, K; Ieki, K; Ikeda, M; Kikawa, T; Kubo, H; Minamino, A; Murakami, A; Nakaya, T; Otani, M; Suzuki, K; Takahashi, S; Fukuda, Y; Choi, K; Itow, Y; Mitsuka, G; Miyake, M; Mijakowski, P; Tacik, R; Hignight, J; Imber, J; Jung, C K; Taylor, I; Yanagisawa, C; Idehara, Y; Ishino, H; Kibayashi, A; Mori, T; Sakuda, M; Yamaguchi, R; Yano, T; Kuno, Y; Kim, S B; Yang, B S; Okazawa, H; Choi, Y; Nishijima, K; Koshiba, M; Totsuka, Y; Yokoyama, M; Martens, K; Vagins, M R; Martin, J F; de Perio, P; Konaka, A; Wilking, M J; Chen, S; Heng, Y; Sui, H; Yang, Z; Zhang, H; Zhenwei, Y; Connolly, K; Dziomba, M; Wilkes, R J

    2013-01-01

    Procedures and results on hardware level detector calibration in Super-Kamiokande (SK) are presented in this paper. In particular, we report improvements made in our calibration methods for the experimental phase IV in which new readout electronics have been operating since 2008. The topics are separated into two parts. The first part describes the determination of constants needed to interpret the digitized output of our electronics so that we can obtain physical numbers such as photon counts and their arrival times for each photomultiplier tube (PMT). In this context, we developed an in-situ procedure to determine high-voltage settings for PMTs in large detectors like SK, as well as a new method for measuring PMT quantum efficiency and gain in such a detector. The second part describes the modeling of the detector in our Monte Carlo simulation, including in particular the optical properties of its water target and their variability over time. Detailed studies on the water quality are also presented. As a re...

  17. Fast sequential Monte Carlo methods for counting and optimization

    CERN Document Server

    Rubinstein, Reuven Y; Vaisman, Radislav

    2013-01-01

    A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the

  18. Validation of a Monte Carlo model for a GMX detector used for measurements of environmental radioactivity

    International Nuclear Information System (INIS)

In an Environmental Radioactivity Laboratory, samples of several products are analyzed in order to determine the amount of radioactive material they contain. A common method is the gamma activity measurement of these samples, which typically requires the use of High Purity Germanium (HPGe) detectors. GMX (n-type) detectors belong to this group of detectors and have a high efficiency for low-energy emissions. Like any detector, they must be calibrated in energy, efficiency and resolution (FWHM). For this calibration, a gamma standard solution is used, whose composition and activity are certified by a reference laboratory. This source contains several radionuclides, providing a wide energy spectrum. The simulation of the detection process with MCNP5, a code based on the Monte Carlo method, is a useful tool in an Environmental Radioactivity Laboratory, since it can reproduce the experimental conditions of the assay without manipulating radioactive sources, consequently reducing radioactive waste. On the other hand, the simulation of the detector calibration permits analysis of the influence of different variables on the detector efficiency. In this paper, the simulation of the calibration of the GMX detector used in the Environmental Radioactivity Laboratory of the Polytechnic University of Valencia (UPV) is presented. Results obtained with this simulation are compared with laboratory measurements in order to validate the model. (author)

  19. Calculation of the energy dependent efficiency of gridded 3He fast neutron ionization chambers

    International Nuclear Information System (INIS)

The relative efficiency function for total-energy events in a 3He fast neutron ionization chamber has been calculated with a Monte Carlo approach. It is shown that the efficiency function applicable to a point isotropic source located near the surface of the spectrometer differs significantly from that obtained in standard calibration procedures using neutrons from the 7Li(p,n)7Be reaction for En > 1.5 MeV. (orig.)

  20. Monte Carlo modelling for the in vivo lung monitoring of enriched uranium: Results of an international comparison

    International Nuclear Information System (INIS)

In order to assess the reliability of Monte Carlo (MC)-based numerical calibration of in vivo counting systems, the EURADOS network supported a comparison of MC simulations of well-defined experiments. This action also provided training in the use of voxel phantoms. In vivo measurements of enriched uranium in a thoracic phantom were carried out, and the information needed to simulate these measurements was distributed to 17 participants. About half of the participants managed to simulate the measured counting efficiency without support from the organisers. Following additional support, all participants managed to simulate the counting efficiencies within a typical agreement of ±5% with experiment.

  1. The influence of the calibration standard and the chemical composition of the water samples residue in the counting efficiency of proportional detectors for gross alpha and beta counting. Application on the radiologic control of the IPEN-CNEN/SP

    International Nuclear Information System (INIS)

In this work, the efficiency calibration curves of thin-window, low-background gas-flow proportional counters were determined for calibration standards with different energies and different absorber thicknesses. For gross alpha counting we used 241Am and natural uranium standards, and for gross beta counting we used 90Sr/90Y and 137Cs standards, with residue thicknesses ranging from 0 to approximately 18 mg/cm2. These sample thicknesses were built up with a previously characterized salt solution prepared to simulate the chemical composition of the underground water at IPEN. The counting efficiency for alpha emitters ranged from 0.273 ± 0.038 for a weightless residue to only 0.015 ± 0.002 in a planchet containing 15 mg/cm2 of residue for the 241Am standard. For the natural uranium standard the efficiency ranged from 0.322 ± 0.030 for a weightless residue to 0.023 ± 0.003 in a planchet containing 14.5 mg/cm2 of residue. The counting efficiency for beta emitters ranged from 0.430 ± 0.036 for a weightless residue to 0.247 ± 0.020 in a planchet containing 17 mg/cm2 of residue for the 137Cs standard. For the 90Sr/90Y standard the efficiency ranged from 0.489 ± 0.041 for a weightless residue to 0.323 ± 0.026 in a planchet containing 18 mg/cm2 of residue. The results clearly show the variation of counting efficiency with the energy of the alpha or beta emitters and with the thickness of the water sample residue. The calibration standard and the thickness and chemical composition of the residue must therefore always be considered in the determination of gross alpha and beta radioactivity in water samples. (author)
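In practice such calibration curves are applied by interpolating the efficiency at the measured residue thickness; a hedged sketch using the 90Sr/90Y end-point values quoted above, with a log-linear interpolation assumed between them and invented count data:

```python
import numpy as np

# End-point calibration values from the abstract (90Sr/90Y beta standard).
# Intermediate points would be measured in the same way; the log-linear
# interpolation between them is an assumption made for this sketch.
thickness = np.array([0.0, 18.0])      # residue thickness, mg/cm2
efficiency = np.array([0.489, 0.323])  # counting efficiency

def eff_at(t):
    return np.exp(np.interp(t, thickness, np.log(efficiency)))

gross, bkg, t_count = 1250.0, 150.0, 3600.0   # counts, counts, seconds (invented)
t_res = 9.0                                   # sample residue, mg/cm2
activity = (gross - bkg) / t_count / eff_at(t_res)   # net rate / efficiency, Bq
print(f"efficiency ~ {eff_at(t_res):.3f}, activity ~ {activity:.3f} Bq")
```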

  2. Improvement of personalized Monte Carlo-aided direct internal contamination monitoring: optimization of calculation times and measurement methodology for the establishment of activity distribution; Amelioration des mesures anthroporadiametriques personnalisees assistees par calcul Monte Carlo: optimisation des temps de calculs et methodologie de mesure pour l'etablissement de la repartition d'activite

    Energy Technology Data Exchange (ETDEWEB)

    Farah, Jad

    2011-10-06

To optimize the monitoring of female workers using in vivo spectrometry measurements, it is necessary to correct the typical calibration coefficients obtained with the Livermore male physical phantom. To do so, numerical calibrations based on the use of Monte Carlo simulations combined with anthropomorphic 3D phantoms were used. Such computational calibrations require, on the one hand, the development of female phantoms representative of different heights and morphologies and, on the other hand, rapid and reliable Monte Carlo calculations. A library of female torso models was hence developed by fitting the weight of the internal organs and breasts according to the body height and to relevant plastic surgery recommendations. This library was then used to perform a numerical calibration of the AREVA NC La Hague in vivo counting installation. Moreover, the morphology-induced variations of counting efficiency with energy were expressed as equations, and recommendations were given for correcting the typical calibration coefficients for any monitored female worker as a function of body height and breast size. Meanwhile, variance reduction techniques and geometry simplification operations were considered to accelerate the simulations. Furthermore, to determine the activity mapping in the case of complex contaminations, a method combining Monte Carlo simulations with in vivo measurements was developed. It consists of performing several spectrometry measurements with different detector positions; the contribution of each contaminated organ to the count is then assessed from Monte Carlo calculations. The in vivo measurements performed at LEDI, CIEMAT and KIT have demonstrated the effectiveness of the method and highlighted the valuable contribution of Monte Carlo simulations to a more detailed analysis of spectrometry measurements. Thus, a more precise estimate of the activity distribution is obtained in the case of an internal contamination. (author)

  3. Modelling of a simple bunker problem with Monte Carlo codes TRIPOLI 4.3 and MCNPX 2.4 to test the efficiency of the biasing methods

    International Nuclear Information System (INIS)

Monte Carlo codes are widely used at IRSN to simulate particle transport in complex geometries such as multi-element detectors, voxel phantoms and irradiation facilities. Without any optimisation, these calculations can run for several CPU days. The biasing methods of TRIPOLI 4.3 and MCNPX 2.4 appear to be very powerful, but they require careful control in order to obtain reliable results. This is why IRSN users of these codes have developed a simple model, i.e. a bunker room, in order to compare, in terms of CPU time and difficulty of control, the different variance reduction methods proposed by the codes. The geometry of the model is a square room containing an isotropic neutron source of UO2, typical of the sources simulated in engineering calculations to evaluate the protection shields of installation facilities. The ceiling, floor and walls are made of concrete. The purpose of the simulation is to calculate the ambient dose equivalent rate outside the room at 20 cm from a wall. The results obtained with the two codes are presented and compared with respect to CPU time. (authors)

  4. Accurate and efficient radiation transport in optically thick media -- by means of the Symbolic Implicit Monte Carlo method in the difference formulation

    Energy Technology Data Exchange (ETDEWEB)

    Szoke, A; Brooks, E D; McKinley, M; Daffin, F

    2005-03-30

    The equations of radiation transport for thermal photons are notoriously difficult to solve in thick media without resorting to asymptotic approximations such as the diffusion limit. One source of this difficulty is that in thick, absorbing media thermal emission is almost completely balanced by strong absorption. In a previous publication [SB03], the photon transport equation was written in terms of the deviation of the specific intensity from the local equilibrium field. We called the new form of the equations the difference formulation. The difference formulation is rigorously equivalent to the original transport equation. It is particularly advantageous in thick media, where the radiation field approaches local equilibrium and the deviations from the Planck distribution are small. The difference formulation for photon transport also clarifies the diffusion limit. In this paper, the transport equation is solved by the Symbolic Implicit Monte Carlo (SIMC) method and a comparison is made between the standard formulation and the difference formulation. The SIMC method is easily adapted to the derivative source terms of the difference formulation, and a remarkable reduction in noise is obtained when the difference formulation is applied to problems involving thick media.

  5. TARGETLESS CAMERA CALIBRATION

    Directory of Open Access Journals (Sweden)

    L. Barazzetti

    2012-09-01

In photogrammetry a camera is considered calibrated if its interior orientation parameters are known. These encompass the principal distance, the principal point position and some Additional Parameters used to model possible systematic errors. The current state of the art for automated camera calibration relies on the use of coded targets to accurately determine the image correspondences. This paper presents a new methodology for the efficient and rigorous photogrammetric calibration of digital cameras which no longer requires the use of targets. A set of images depicting a scene with good texture is sufficient for the extraction of natural corresponding image points. These are automatically matched with feature-based approaches and robust estimation techniques. The successive photogrammetric bundle adjustment retrieves the unknown camera parameters and their theoretical accuracies. Examples, considerations and comparisons with real data and different case studies are illustrated to show the potential of the proposed methodology.
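The feature-extraction and robust-matching stage can be sketched with standard OpenCV calls (SIFT, a Lowe ratio test, and RANSAC on the epipolar geometry); the image names are placeholders, and the bundle adjustment that actually recovers the interior orientation is omitted:

```python
import cv2
import numpy as np

# Extract and robustly match natural image points between two views, the
# first stage of a target-free calibration pipeline like the one above.
img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(img1, None)
k2, d2 = sift.detectAndCompute(img2, None)

# Lowe ratio test keeps only distinctive correspondences.
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

pts1 = np.float32([k1[m.queryIdx].pt for m in good])
pts2 = np.float32([k2[m.trainIdx].pt for m in good])

# Robust estimation (RANSAC) rejects mismatches via the epipolar constraint.
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
print(f"{int(inlier_mask.sum())} inlier correspondences of {len(good)}")
```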

  6. Camera calibration

    OpenAIRE

    Andrade-Cetto, J.

    2001-01-01

    This report is a tutorial on pattern based camera calibration for computer vision. The methods presented here allow for the computation of the intrinsic and extrinsic parameters of a camera. These methods are widely available in the literature, and they are only summarized here as an easy and comprehensive reference for researchers at the Institute and their collaborators.

  7. Efficiency study of a big volume well type NaI(Tl) detector by point and voluminous sources and Monte-Carlo simulation

    International Nuclear Information System (INIS)

The activity of environmental samples is usually measured with high-resolution HPGe gamma spectrometers. In this work a set-up with a 9 in. x 9 in. NaI well detector of 3 in. thickness and a 3 in. x 3 in. plug detector in a 15-cm-thick lead shielding is considered as an alternative (Hansman, 2014). In spite of its much poorer resolution, it requires shorter measurement times and may possibly give better detection limits. In order to determine the U-238, Th-232 and K-40 content in samples with this NaI(Tl) detector, the corresponding photopeak efficiencies must be known. These efficiencies can be found for a given source matrix and geometry by Geant4 simulation. We found a discrepancy of 5-50% between simulated and experimental efficiencies, which can be attributed mainly to effects of light collection within the detector volume, an effect not taken into account by the simulations. The influence of random coincidence summing on the detection efficiency was negligible for radionuclide activities in the range 130-4000 Bq. This paper also describes how the efficiency of the detector depends on the position of the radioactive point source. To avoid large dead time, relatively weak Mn-54, Co-60 and Na-22 point sources of a few kBq were used. Results for single gamma lines and for coincidence-summing gamma lines are presented. - Highlights: • 9 in. x 9 in. NaI well detector and 3 in. x 3 in. plug detector studied. • Peak efficiency simulated with Geant4. • Results show discrepancies with measurements. • High efficiency useful for environmental samples

  8. Study on efficiency of time computation in x-ray imaging simulation base on Monte Carlo algorithm using graphics processing unit

    Science.gov (United States)

    Setiani, Tia Dwi; Suprijadi, Haryanto, Freddy

    2016-03-01

Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of the radiographic images, and a comparison of the image quality resulting from simulation on the GPU and on the CPU, are evaluated in this paper. The simulations were run on a CPU in serial mode, and on two GPUs with 384 and 2304 cores. In the GPU simulation each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that the simulation times on the GPU were significantly shorter than on the CPU. The simulations on the 2304-core GPU were performed about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU were performed about 20-31 times faster than on a single CPU core. A further result is that optimum image quality was obtained from 10^8 histories upwards and for energies from 60 keV to 90 keV. Analyzed by a statistical approach, the quality of the GPU and CPU images is essentially the same.

  9. Design of a neutron source for calibration

    International Nuclear Information System (INIS)

The neutron spectra produced by an isotopic neutron source located at the center of moderating media were calculated using the Monte Carlo method, with the aim of designing a neutron source for calibration purposes. To improve the evaluation of dosimetric quantities, it is recommended to calibrate radiation protection devices with calibrated neutron sources whose neutron spectra are similar to those met in practice. Here, a 239Pu-Be neutron source was inserted in H2O, D2O and polyethylene cylindrical moderators in order to produce neutron spectra that resemble the spectra found in workplaces

  10. Asynchronous Anytime Sequential Monte Carlo

    OpenAIRE

    Paige, Brooks; Wood, Frank; Doucet, Arnaud; Teh, Yee Whye

    2014-01-01

    We introduce a new sequential Monte Carlo algorithm we call the particle cascade. The particle cascade is an asynchronous, anytime alternative to traditional particle filtering algorithms. It uses no barrier synchronizations which leads to improved particle throughput and memory efficiency. It is an anytime algorithm in the sense that it can be run forever to emit an unbounded number of particles while keeping within a fixed memory budget. We prove that the particle cascade is an unbiased mar...

  11. Signal inference with unknown response: calibration uncertainty renormalized estimator

    CERN Document Server

    Dorn, Sebastian; Greiner, Maksim; Selig, Marco; Böhm, Vanessa

    2014-01-01

The calibration of a measurement device is crucial for every scientific experiment where a signal has to be inferred from data. We present CURE, the calibration uncertainty renormalized estimator, to reconstruct a signal and simultaneously the instrument's calibration from the same data, without knowing the exact calibration but only its covariance structure. The idea of CURE is, starting from an assumed calibration, to successively include more and more portions of calibration uncertainty into the signal inference equations and to absorb the resulting corrections into renormalized signal (and calibration) solutions. Thereby, the signal inference and calibration problem turns into solving a single system of ordinary differential equations and can be identified with common resummation techniques used in field theories. We verify CURE by applying it to a simplistic toy example and compare it against existent self-calibration schemes, Wiener filter solutions, and Markov chain Monte Carlo sampling. We conclude that the...

  12. Discrete diffusion Monte Carlo for frequency-dependent radiative transfer

    Energy Technology Data Exchange (ETDEWEB)

Densmore, Jeffrey D [Los Alamos National Laboratory]; Thompson, Kelly G [Los Alamos National Laboratory]; Urbatsch, Todd J [Los Alamos National Laboratory]

    2010-11-17

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique.

  13. Importance iteration in MORSE Monte Carlo calculations

    International Nuclear Information System (INIS)

    An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example, which shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation. (orig.)
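Generically, point values act as importances: the source (or collision) distribution is biased proportionally to them and the statistical weight is corrected to keep the estimator unbiased. A small sketch with invented numbers, not MORSE's actual parameter format:

```python
import numpy as np

# Invented example: turn (smoothed) point values into source-biasing
# parameters. Sampling proportional to p(E) * I(E) favours important source
# energies; the weight w = p/q compensates so the estimator stays unbiased.
E = np.array([0.1, 0.5, 1.0, 2.0, 5.0])       # energy groups, MeV
p = np.array([0.30, 0.30, 0.20, 0.15, 0.05])  # analogue source spectrum
I = np.array([0.2, 0.5, 1.0, 2.5, 6.0])       # smoothed point values (importances)

q = p * I / np.sum(p * I)   # biased sampling distribution
w = p / q                   # statistical-weight correction per group

rng = np.random.default_rng(7)
idx = rng.choice(len(E), size=10, p=q)        # sample biased source energies
print(list(zip(E[idx], np.round(w[idx], 3))))
```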

  14. Tau reconstruction, energy calibration and identification at ATLAS

    Indian Academy of Sciences (India)

    Michel Trottier-McDonald; on behalf of the ATLAS Collaboration

    2012-11-01

Tau leptons play a central role in the LHC physics programme, in particular as an important signature in many Higgs boson and supersymmetry searches. They are further used in Standard Model electroweak measurements, as well as detector-related studies like the determination of the missing transverse energy scale. Copious backgrounds from QCD processes call for both efficient identification of hadronically decaying tau leptons, as well as large suppression of fake candidates. A solid understanding of the combined performance of the calorimeter and tracking detectors is also required. We present the current status of the tau reconstruction, energy calibration and identification with the ATLAS detector at the LHC. Identification efficiencies are measured in W→τν events in data and compared with predictions from Monte Carlo simulations, whereas the misidentification probabilities of QCD jets and electrons are determined from various jet-enriched data samples and from Z→ee events, respectively. The tau energy scale calibration is described and systematic uncertainties on both energy scale and identification efficiencies discussed.

  15. The ATLAS Electromagnetic Calorimeter Calibration Workshop

    CERN Multimedia

    Hong Ma; Isabelle Wingerter

The ATLAS Electromagnetic Calorimeter Calibration Workshop took place at LAPP-Annecy from the 1st to the 3rd of October; 45 people attended. A detailed program was set up before the workshop. The agenda was organised around very focused presentations where questions were raised to allow arguments to be exchanged and answers to be proposed. The main topics were: electronics calibration; handling of problematic channels; cluster-level corrections for electrons and photons; absolute energy scale; streams for calibration samples; calibration constants processing; and learning from commissioning. The workshop was on the whole lively and fruitful. Based on years of experience with test beam analysis and Monte Carlo simulation, and the recent operation of the detector in the commissioning, the methods to calibrate the electromagnetic calorimeter are well known. Some of the procedures are being exercised in the commissioning, which has demonstrated the c...

  16. Implementing new recommendations for calibrating personal dosemeters

    International Nuclear Information System (INIS)

This paper analyses the differences between the calibration procedures for personal dosemeters recommended by ICRU 47 and ISO 4037-3. The tissue equivalence of the PMMA and ISO water-slab phantoms is analysed by means of the Penelope Monte Carlo code for monoenergetic and filtered X-ray photon beams, and compared with the results of two other independent codes. The influence of the calibration method is also verified experimentally, both on a thermoluminescence dosemeter and on an electronic personal dosemeter. Good consistency between the two calibration procedures is shown, provided that a correction factor for backscatter differences between the PMMA and the ICRU phantom is introduced. The Monte Carlo simulation is used to determine this correction with greater accuracy. (author)

  17. Antenna Calibration and Measurement Equipment

    Science.gov (United States)

    Rochblatt, David J.; Cortes, Manuel Vazquez

    2012-01-01

A document describes the Antenna Calibration & Measurement Equipment (ACME) system that will provide the Deep Space Network (DSN) with instrumentation enabling a trained RF engineer at each complex to perform antenna calibration measurements and to generate antenna calibration data. This data includes continuous-scan auto-bore-based data acquisition with all-sky data gathering in support of 4th-order pointing model generation requirements. Other data includes antenna subreflector focus, system noise temperature and tipping curves, antenna efficiency, system linearity reports, and instrument calibration. The ACME system design is based on the on-the-fly (OTF) mapping technique and architecture. ACME has improved the RF performance of the DSN by approximately a factor of two, as well as the pointing performance of the DSN antennas and the productivity of its personnel and calibration engineers.

  18. Seabed radioactivity based on in situ measurements and Monte Carlo simulations

    International Nuclear Information System (INIS)

Activity concentration measurements were carried out on the seabed by deploying the underwater detection system KATERINA. The efficiency calibration was performed in the energy range 350-2600 keV using in situ and laboratory measurements. The efficiency results were reproduced and extended over a broadened energy range, from 150 to 2600 keV, by Monte Carlo simulations using the MCNP5 code. The concentrations of 40K, 214Bi and 208Tl were determined using the present approach. The results were validated by laboratory measurements. - Highlights: • The KATERINA system was applied to marine sediments. • MC simulations using MCNP5 reproduced the experimental energy spectra and efficiency. • The in-situ method provided quantitative measurements. • The measurements were validated with lab-based methods

  19. Comparison of various anthropomorphic phantom types for in vivo measurements by means of Monte Carlo simulations

    International Nuclear Information System (INIS)

    Three widely used anthropomorphic phantoms are analysed with regard to their suitability for the efficiency calibration of whole-body counters (WBCs): a Bottle Manikin Absorber (BOMAB) phantom consisting of water-filled plastic containers, a St Petersburg block phantom (Research Inst. of Sea Transport Hygiene, St Petersburg) made of polyethylene bricks and a mathematical Medical Internal Radiation Dose (MIRD) phantom, each of them representing a person weighing 70 kg. The analysis was performed by means of Monte Carlo simulations with the Monte Carlo N-Particle transport code using detailed mathematical models of the phantoms and the WBC at Forschungszentrum Juelich (FZJ). The simulated peak efficiencies for the BOMAB phantom and the MIRD phantom agree very well, but the results for the St Petersburg phantom are considerably higher. Therefore, WBCs similar to that at FZJ will probably underestimate the activity of incorporated radionuclides if they are calibrated by means of a St Petersburg phantom. Finally, the results from this work are compared with the conclusions from other studies dealing with block and BOMAB phantoms. (authors)

  20. Calibration Binaries

    Science.gov (United States)

    Drummond, J.

    2011-09-01

Two Excel spreadsheet files are offered to help calibrate telescope or camera image scale and orientation with binary stars at any time. One is a personally selected list of fixed-position binaries and binaries with well-determined orbits, and the other contains all binaries with published orbits. Both are derived from the web site of the Washington Double Star Library. The spreadsheets give the position angle and separation of the binaries for any entered time by taking advantage of Excel's built-in iteration function to solve Kepler's transcendental equation.
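The iteration in question solves Kepler's transcendental equation M = E - e·sin E for the eccentric anomaly; a short sketch using Newton's method (the orbital-element transformation to position angle and separation is omitted):

```python
import math

def eccentric_anomaly(M, e, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) by Newton iteration -- the
    same fixed-point idea the spreadsheets delegate to Excel's built-in
    iteration function."""
    E = M if e < 0.8 else math.pi        # standard starting guess
    while True:
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            return E

# Example: mean anomaly 2.0 rad on an e = 0.5 orbit. From E, the orbital
# elements (omitted here) give the apparent position angle and separation.
E = eccentric_anomaly(2.0, 0.5)
print(f"E = {E:.6f} rad, check M = {E - 0.5 * math.sin(E):.6f}")
```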

  1. ALTEA calibration

    Science.gov (United States)

    Zaconte, V.; Altea Team

The ALTEA project is aimed at studying the possible functional damage to the Central Nervous System (CNS) due to particle radiation in the space environment. The project is an international and multi-disciplinary collaboration. The ALTEA facility is a helmet-shaped device that will study concurrently the passage of cosmic radiation through the brain, the functional status of the visual system and the electrophysiological dynamics of the cortical activity. The basic instrumentation is composed of six active particle telescopes, one ElectroEncephaloGraph (EEG), a visual stimulator and a pushbutton. The telescopes are able to detect the passage of each particle, measuring its energy, trajectory and the energy released into the brain, and identifying the nuclear species. The EEG and the visual stimulator are able to measure the functional status of the visual system and the cortical electrophysiological activity, and to look for correlations between incident particles, brain activity and Light Flash perceptions. These basic instruments can be used separately or in any combination, permitting several different experiments. ALTEA is scheduled to fly to the International Space Station (ISS) on November 15th, 2004. In this paper the calibration of the flight model of the silicon telescopes (Silicon Detector Units - SDUs) is shown. These measurements were taken at the GSI heavy-ion accelerator in Darmstadt. The first calibration was carried out in November 2003 on the SDU-FM1 using C nuclei at different energies: 100, 150, 400 and 600 MeV/n. We performed a complete beam scan of the SDU-FM1 to check the functionality and homogeneity of all strips of the silicon detector planes; for each beam energy we collected data to achieve good statistics, and finally we placed two different thicknesses of aluminium and Plexiglas in front of the detector in order to study fragmentation. This test was carried out with a Test Equipment to simulate the Digital Acquisition Unit (DAU). We are scheduled to

  2. The calibration of radioprotection hardware

    International Nuclear Information System (INIS)

    After recalling recent recommendations on dose limits based on two radioprotection quantities (the equivalent dose and the effective dose), this document indicates some characteristics of these quantities and discusses how they are applied for individual monitoring and for area or ambient monitoring. It presents conventions aimed at simplifying radiation fields. The author then gives a precise overview of some general aspects of calibration operations: legal requirements, radioprotection hardware controls, and calibration chain organisation (calibration definition, general physical quantities, reference radiations, conversion factors, and metrology), together with a comparison between the operational quantities and the protection quantity (irradiation geometries, conversion factors with respect to the geometries, comparison between effective dose and operational quantities). He finally describes the calibration procedures: dosemeter location, energy response, angular response, dose-rate response, and uncertainties.

  3. Exploring Monte Carlo methods

    CERN Document Server

    Dunn, William L

    2012-01-01

    Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle problem"...

  4. Mercury Calibration System

    Energy Technology Data Exchange (ETDEWEB)

    John Schabron; Eric Kalberer; Joseph Rovani; Mark Sanderson; Ryan Boysen; William Schuster

    2009-03-11

    actual capabilities of the current calibration technology. As part of the current effort, WRI worked with Thermo Fisher elemental mercury calibrator units to conduct qualification experiments to demonstrate their performance characteristics under a variety of conditions and to demonstrate that they qualify for use in the CEM calibration program. Monitoring of speciated mercury is another concern of this research. The mercury emissions from coal-fired power plants consist of both elemental and oxidized mercury. Current CEM analyzers are designed to measure elemental mercury only. Oxidized mercury must first be converted to elemental mercury prior to entering the analyzer inlet in order to be measured. CEM systems must demonstrate the ability to measure both elemental and oxidized mercury. This requires the use of oxidized mercury generators with an efficient conversion of the oxidized mercury to elemental mercury. There are currently two basic types of mercuric chloride (HgCl2) generators used for this purpose. One is an evaporative HgCl2 generator, which produces gas standards of known concentration by vaporization of aqueous HgCl2 solutions and quantitative mixing with a diluent carrier gas. The other is a device that converts the output from an elemental Hg generator to HgCl2 by means of a chemical reaction with chlorine gas. The Thermo Fisher oxidizer system involves reaction of elemental mercury vapor with chlorine gas at an elevated temperature. The draft interim protocol for oxidized mercury units involving reaction with chlorine gas requires the vendors to demonstrate high efficiency of oxidation of an elemental mercury stream from an elemental mercury vapor generator. The Thermo Fisher oxidizer unit is designed to operate at the power plant stack at the probe outlet. Following oxidation of elemental mercury from reaction with chlorine gas, a high temperature module reduces the mercuric chloride back to elemental mercury. WRI

  5. Trinocular Calibration Method Based on Binocular Calibration

    OpenAIRE

    CAO Dan-Dan; Luo, Chun; GAO Shu-Yuan; Wang, Yun; Li, Wen-Bin; XU Zhen-Ying

    2012-01-01

    In order to solve the self-occlusion problem in plane-based multi-camera calibration systems and to expand the measurement range, a tri-camera vision system based on binocular calibration is proposed. The three cameras are grouped into two pairs, while the public camera is taken as the reference to build the global coordinate system. Global calibration is realized by comparing the measured absolute distance with the true absolute distance. The MRE (mean relative error) of the global calibration of ...

  6. Calibration of the whole body counter for measurement of actinides in lungs

    International Nuclear Information System (INIS)

    For purposes of in vivo measurements, two whole body counters (WBC), intended for research purposes, monitoring during normal and emergency situations, and for personal dosimetry services, are operated in SURO. One of them was upgraded in 2013 and 2014 and furnished with a new installation and detectors. Therefore, a new calibration of the detection efficiency for the measurement of radionuclides in lungs was required. A Lawrence Livermore National Laboratory (LLNL) torso phantom was used for calibration of the WBC detection system intended for the measurement of transuranium elements in lungs. The phantom comprises three pairs of replaceable lungs: one pair without activity, and two others with added activity of 239Pu or 241Am. Considering the importance of the thickness and composition of the chest wall tissue for the attenuation of the low-energy photon radiation of transuranium elements in lungs, the phantom contains four additional overlayers of different thickness simulating muscle and adipose tissue in a 1:1 ratio. Along with experimental calibrations of the WBC detection system using the physical phantom, the Monte Carlo technique has been used for computational calibration. A voxel phantom has been created from CT scans of the LLNL torso phantom. The voxel model will be used for the study of detection efficiencies in various measuring scenarios. (authors)

  7. Calibration of the whole body counter for measurement of transuranium elements in lungs

    International Nuclear Information System (INIS)

    For purposes of in vivo measurements, two whole body counters (WBC), intended for research purposes, monitoring during normal and emergency situations, and for personal dosimetry services, are operated in SURO. One of them was upgraded in 2013 and 2014 and furnished with a new installation and detectors. Therefore, a new calibration of the detection efficiency for the measurement of radionuclides in lungs was required. A Lawrence Livermore National Laboratory (LLNL) torso phantom was used for calibration of the WBC detection system intended for the measurement of transuranium elements in lungs. The phantom comprises three pairs of replaceable lungs: one pair without activity, and two others with added activity of 239Pu or 241Am. Considering the importance of the thickness and composition of the chest wall tissue for the attenuation of the low-energy photon radiation of transuranium elements in lungs, the phantom contains four additional overlayers of different thickness simulating muscle and adipose tissue in a 1:1 ratio. Along with experimental calibrations of the WBC detection system using the physical phantom, the Monte Carlo technique has been used for computational calibration. A voxel phantom has been created from CT scans of the LLNL torso phantom. The voxel model will be used for the study of detection efficiencies in various measuring scenarios. (authors)

  8. Calibration of the whole body counter at PSI

    International Nuclear Information System (INIS)

    At the Paul Scherrer Institut (PSI), measurements with the whole body counter are routinely carried out for occupationally exposed persons and occasionally for individuals of the population suspected of radioactive intake. In total, about 400 measurements are performed per year. The whole body counter is based on a p-type high-purity germanium (HPGe) coaxial detector mounted above a canvas chair in a small shielded room. The detector is used to detect the presence of radionuclides that emit photons with energies between 50 keV and 2 MeV. The room itself is made of iron from old railway rails to reduce the natural background radiation to 24 nSv/h. The present paper describes the calibration of the system with the IGOR phantom. Different body sizes are realized by different standardized configurations of polyethylene bricks, into which small tubes of calibration sources can be introduced. The efficiency of the detector was determined for four phantom geometries (P1, P2, P4 and P6), simulating human bodies in sitting position of 12 kg, 24 kg, 70 kg and 110 kg, respectively. The measurements were performed serially using five different radionuclide sources (40K, 60Co, 133Ba, 137Cs, 152Eu) within the phantom bricks. Based on the results of the experiment, an efficiency curve for each configuration and the detection limits for relevant radionuclides were determined. For routine measurements, the efficiency curve obtained with the phantom geometry P4 was chosen. The detection limits range from 40 Bq to 1000 Bq for selected radionuclides for a measurement time of 7 min. The proper calibration of the system is, on the one hand, essential for the routine measurements at PSI; on the other hand, it serves as a benchmark for the already initiated characterisation of the system with Monte Carlo simulations. (author)

  9. Monte Carlo modelling for individual monitoring

    International Nuclear Information System (INIS)

    Full text: Individual monitoring techniques provide suitable tools for the estimate of the personal dose equivalent Hp(d), representative of the effective dose in case of external irradiation, or for the evaluation of the committed effective dose by inference from activity measurements in case of internal contamination. In both these fields Monte Carlo techniques play a crucial role: they can provide a series of parameters that are usually difficult, sometimes impossible, to assess experimentally. The aim of this paper is to give a panoramic view of Monte Carlo studies in the field of individual monitoring of external exposures; internal dosimetry applications are briefly summarized in another paper. The operative practice in the field of occupational exposure relies on the employment of personal dosemeters worn appropriately on the body in order to guarantee a reliable estimate of the radiation protection quantities (i.e. effective dose or equivalent dose). Personal dosemeters are calibrated in terms of the ICRU operational quantity personal dose equivalent, Hp(d), which should, in principle, represent a reasonably conservative approximation of the radiation protection quantity (this condition is not fulfilled in a specific neutron energy range). All the theoretical and practical implementation of photon individual monitoring relies on two main aspects: the definition of the operational quantities and the calculation of the corresponding conversion coefficients for the field quantities (fluence and air kerma); and the characterization of individual dosemeters in terms of these operational quantities, with the associated energy and angular type-test evaluations carried out on suitable calibration phantoms. For the first aspect (evaluation of conversion coefficients), rather exhaustive tabulations of Monte Carlo evaluated conversion coefficients have been published in ICRP and ICRU reports as well as in the open literature. For the second aspect (type test and calibration

  10. Monte Carlo simulation of shielded chair whole body counting system with Masonite cut sheet phantom

    International Nuclear Information System (INIS)

    The shielded chair whole-body counting system used at IGCAR is calibrated experimentally using a Masonite cut-sheet phantom loaded with a single radionuclide of known activity. Multiple point sources of a particular radionuclide are distributed at mid-thickness in each segment of the phantom during calibration. Though the detector can be used for the measurement of gamma photons up to 3000 keV, the experimental calibration is done only up to 1500 keV, according to the requirements for the measurement of fission and activation products, which emit gamma energies predominantly in the region below 1500 keV. Expertise in numerical Monte Carlo simulation was utilized to obtain the efficiency values above 1500 keV. This paper focuses on the validation of the shielded chair counting system model using the Masonite cut-sheet phantom measurements and on applying the validated model to extend the energy range of the calibration up to 3 MeV. A good agreement of the theoretically simulated and experimental 137Cs spectra with respect to the spectral shape, the counts in all the energy regions and the photopeak efficiency validated the modelling of the counting system. A mathematical function to fit the counting efficiencies as a function of photon energy was developed and a set of fitting parameters was established, so that the efficiency value at any energy up to 3 MeV can be obtained without performing an experimental efficiency calibration. The efficiency values obtained from the fit were compared with the experimental ones and found to be in agreement, i.e., within 8% for the 250-1500 keV energy range. The Compton scattering factors (CSFs) at different low energies due to high-energy photons were also simulated. The theoretical and experimental CSFs were compared and found to match within ±20%. Simulations with a uniform source distribution inside the Masonite phantom have shown that the current source distribution followed at IGCAR gives efficiency values within ±5% compared to that of uniform
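
    As an illustration of the kind of fit function described above, a common choice in gamma spectrometry is a low-order polynomial in ln E fitted to ln(efficiency), which can then be evaluated beyond the experimentally calibrated range. The sketch below is not the IGCAR parameterization, and its data points are hypothetical:

```python
# Fit ln(eff) = sum_k a_k (ln E)^k to a few calibration points, then evaluate
# the fitted curve at energies outside the measured range (with the usual
# caution that extrapolation should be validated, e.g. by simulation).
import numpy as np

E = np.array([250., 400., 662., 900., 1173., 1332., 1500.])           # keV
eff = np.array([1.9e-3, 1.4e-3, 1.0e-3, 8.2e-4, 7.0e-4, 6.4e-4, 5.9e-4])

coeffs = np.polyfit(np.log(E), np.log(eff), deg=3)   # highest power first

def efficiency(E_keV):
    return np.exp(np.polyval(coeffs, np.log(E_keV)))

print(efficiency(2754.0))   # estimate above 1500 keV, e.g. for 24Na
```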

  11. MontePython: Implementing Quantum Monte Carlo using Python

    OpenAIRE

    J.K. Nilsen

    2006-01-01

    We present a cross-language C++/Python program for simulations of quantum mechanical systems with the use of Quantum Monte Carlo (QMC) methods. We describe a system to which QMC applies, the algorithms of variational Monte Carlo and diffusion Monte Carlo, and how to implement these methods in pure C++ and in C++/Python. Furthermore, we check the efficiency of the implementations in serial and parallel cases to show that the overhead of using Python can be negligible.

  12. Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo

    DEFF Research Database (Denmark)

    Herckenrath, Daan; Langevin, Christian D.; Doherty, John

    2011-01-01

    Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction uncertainty was applied to a saltwater intrusion model. Random noise was added to the observations to approximate realistic field conditions. The NSMC method was used to calculate 1000 calibration-constrained parameter fields. If the dimensionality of the solution space was set appropriately, the estimated uncertainty range from the NSMC analysis encompassed the truth. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. Reducing the dimensionality of the null-space for the processing of the random parameter sets did not result in any significant gains in efficiency and compromised the ability...

  13. Spectrometric methods used in the calibration of radiodiagnostic measuring instruments

    International Nuclear Information System (INIS)

    Recently a set of parameters for checking the quality of radiation for use in diagnostic radiology was established at the calibration facility of the Nederlands Meetinstituut (NMI). The establishment of the radiation qualities required re-evaluation of the correction factors for the primary air-kerma standards. Free-air ionisation chambers require several correction factors to measure air kerma according to its definition. These correction factors were calculated for the NMI free-air chamber by Monte Carlo simulations for monoenergetic photons in the energy range from 10 keV to 320 keV. The actual correction factors follow from weighting these monoenergetic correction factors with the air-kerma spectrum of the photon beam. This paper describes the determination of the photon spectra of the X-ray qualities used for the calibration of dosimetric instruments used in radiodiagnostics. The detector used for these measurements is a planar HPGe detector, placed in the direct beam of the X-ray machine. To convert the measured pulse-height spectrum to the actual photon spectrum, corrections must be made for fluorescent photon escape, single and multiple Compton scattering inside the detector, and detector efficiency. From the calculated photon spectra a number of parameters of the X-ray beam can be derived. The calculated first and second half-value layers in aluminum and copper are compared with the measured values of these parameters to validate the method of spectrum reconstruction. Moreover, the spectrum measurements offer the possibility to calibrate the X-ray generator in terms of maximum high voltage: the maximum photon energy in the spectrum is used as a standard for the calibration of kVp-meters.

  14. Spectrometric methods used in the calibration of radiodiagnostic measuring instruments

    Energy Technology Data Exchange (ETDEWEB)

    De Vries, W. [Rijksuniversiteit Utrecht (Netherlands)

    1995-12-01

    Recently a set of parameters for checking the quality of radiation for use in diagnostic radiology was established at the calibration facility of the Nederlands Meetinstituut (NMI). The establishment of the radiation qualities required re-evaluation of the correction factors for the primary air-kerma standards. Free-air ionisation chambers require several correction factors to measure air kerma according to its definition. These correction factors were calculated for the NMI free-air chamber by Monte Carlo simulations for monoenergetic photons in the energy range from 10 keV to 320 keV. The actual correction factors follow from weighting these monoenergetic correction factors with the air-kerma spectrum of the photon beam. This paper describes the determination of the photon spectra of the X-ray qualities used for the calibration of dosimetric instruments used in radiodiagnostics. The detector used for these measurements is a planar HPGe detector, placed in the direct beam of the X-ray machine. To convert the measured pulse-height spectrum to the actual photon spectrum, corrections must be made for fluorescent photon escape, single and multiple Compton scattering inside the detector, and detector efficiency. From the calculated photon spectra a number of parameters of the X-ray beam can be derived. The calculated first and second half-value layers in aluminum and copper are compared with the measured values of these parameters to validate the method of spectrum reconstruction. Moreover, the spectrum measurements offer the possibility to calibrate the X-ray generator in terms of maximum high voltage: the maximum photon energy in the spectrum is used as a standard for the calibration of kVp-meters.

  15. Efficiency calibration for HPGe detector using different sample densities and different volumes in the energy range 63.3 and 2614.7 keV

    International Nuclear Information System (INIS)

    Samples of different densities were chosen to find the relation between sample density and detector efficiency. All selected samples were mixed with a known weight of monazite material containing known concentrations of 238U and 232Th. These samples are bran, water, soil and sand, which have densities of 0.4513, 1.0, 1.322 and 1.869 g/cm3, respectively. Five gamma-ray energy lines were selected for this study: 92.6 keV of 234Th (U-series), 129.1 and 911.1 keV of 228Ac, and 583.1 and 2614.7 keV of 208Tl (Th-series). The obtained results showed that there is an exponential decay relation between sample density and detector efficiency at all selected gamma-ray energy lines. The relation between sample density and the absolute efficiency of the detector used was also studied at the same gamma-ray energy lines, and the same results were obtained. The variation of the absolute efficiency of the detector with density was attributed mainly to the effect of the mass absorption coefficients of the different materials.
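
    A minimal sketch of the exponential relation described, eff(rho) = A*exp(-B*rho), fitted at one gamma-ray line. The densities follow the abstract, but the efficiency values below are hypothetical placeholders, not the paper's data:

```python
# Fit an exponential decay of peak efficiency vs. sample density, then use
# the fit to interpolate the efficiency for an arbitrary sample density.
import numpy as np
from scipy.optimize import curve_fit

rho = np.array([0.4513, 1.0, 1.322, 1.869])            # g/cm^3 (bran, water, soil, sand)
eff = np.array([2.10e-2, 1.78e-2, 1.62e-2, 1.40e-2])   # hypothetical, at 583.1 keV

def model(rho, A, B):
    return A * np.exp(-B * rho)

(A, B), _ = curve_fit(model, rho, eff, p0=(2e-2, 0.3))
print(f"eff(rho) = {A:.3e} * exp(-{B:.3f} * rho)")
print(model(1.15, A, B))    # e.g. efficiency for a 1.15 g/cm^3 sample
```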

  16. Calibration of a single hexagonal NaI(Tl) detector using a new numerical method based on the efficiency transfer method

    Energy Technology Data Exchange (ETDEWEB)

    Abbas, Mahmoud I., E-mail: mabbas@physicist.net [Physics Department, Faculty of Science, Alexandria University, 21511 Alexandria (Egypt); Badawi, M.S. [Physics Department, Faculty of Science, Alexandria University, 21511 Alexandria (Egypt); Ruskov, I.N. [Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, 1784 Sofia (Bulgaria); El-Khatib, A.M. [Physics Department, Faculty of Science, Alexandria University, 21511 Alexandria (Egypt); Grozdanov, D.N. [Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, 1784 Sofia (Bulgaria); Thabet, A.A. [Department of Medical Equipment Technology, Faculty of Allied Medical Sciences, Pharos University in Alexandria (Egypt); Kopatch, Yu.N. [Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Gouda, M.M. [Physics Department, Faculty of Science, Alexandria University, 21511 Alexandria (Egypt); Skoy, V.R. [Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation)

    2015-01-21

    Gamma-ray detector systems are important instruments in a broad range of sciences, and new setups are continually being developed. The most recent step in the evolution of detectors for nuclear spectroscopy is the construction of large arrays of detectors of different forms (for example, conical, pentagonal, hexagonal, etc.) and sizes, where the performance and the efficiency can be increased. In this work, a new direct numerical method (NAM), in an integral form and based on the efficiency transfer (ET) method, is used to calculate the full-energy peak efficiency of a single hexagonal NaI(Tl) detector. The algorithms and the calculations of the effective solid angle ratios for a point (isotropically radiating) gamma source situated coaxially at different distances from the detector front-end surface, taking into account the attenuation of the gamma rays in the detector's material, end-cap and the other materials between the gamma source and the detector, form the core of this ET method. The full-energy peak efficiency values calculated by the NAM are found to be in good agreement with the measured experimental data.
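
    The efficiency-transfer idea can be sketched numerically: the peak efficiency at a new source-detector distance is estimated as the reference efficiency scaled by the ratio of effective solid angles, where the effective solid angle integrates an attenuation factor over the detector aperture. The sketch below is a strong simplification (a flat circular face stands in for the hexagonal crystal, and only one absorber term is kept); all numbers are hypothetical:

```python
# eff_new = eff_ref * Omega_eff(d_new) / Omega_eff(d_ref), with the effective
# solid angle computed by numerical integration over the polar angle.
import numpy as np
from scipy.integrate import quad

def omega_eff(d, R, mu_t):
    """Effective solid angle for an on-axis point source at distance d from a
    circular face of radius R; mu_t is the total (mu * t) of the absorbers at
    normal incidence, with the slant path scaled by 1/cos(theta)."""
    theta_max = np.arctan(R / d)
    integrand = lambda th: np.exp(-mu_t / np.cos(th)) * np.sin(th)
    val, _ = quad(integrand, 0.0, theta_max)
    return 2.0 * np.pi * val

eff_ref = 1.2e-2                      # hypothetical efficiency at d_ref
d_ref, d_new, R, mu_t = 10.0, 25.0, 3.8, 0.05   # cm, cm, cm, dimensionless
eff_new = eff_ref * omega_eff(d_new, R, mu_t) / omega_eff(d_ref, R, mu_t)
print(eff_new)
```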

  17. Variance and efficiency of contribution Monte Carlo

    International Nuclear Information System (INIS)

    The game of contribution is compared with the game of splitting in radiation transport using numerical results obtained by solving the set of coupled integral equations for first and second moments around the score. The splitting game is found superior. (author)

  18. Efficient Monte Carlo Pricing of Basket Options

    OpenAIRE

    P. Pellizzari

    1998-01-01

    Monte Carlo methods can be used to price derivatives for which closed evaluation formulas are not available or are difficult to derive. A drawback of the method can be its high computational cost, especially if applied to basket options, whose payoffs depend on more than one asset. This article presents two kinds of control variates to reduce the variance of the estimates, based on unconditional and conditional expectations of the assets, respectively. We apply these variance reduction methods to some b...
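
    A minimal sketch of the first kind of control variate mentioned: for a two-asset basket call under Black-Scholes dynamics, the sum of terminal asset prices has a known unconditional expectation E[S_i(T)] = S_i(0)*exp(rT) and so can serve as a control. All market parameters below are hypothetical:

```python
# Basket call priced by plain Monte Carlo and by a control-variate estimator
# that exploits the known expectation of the terminal asset prices.
import numpy as np

rng = np.random.default_rng(1)
S0, K, r, T = np.array([100.0, 100.0]), 100.0, 0.05, 1.0
sigma, corr, n = np.array([0.2, 0.3]), 0.5, 100_000

L = np.linalg.cholesky(np.array([[1.0, corr], [corr, 1.0]]))
Z = rng.standard_normal((n, 2)) @ L.T
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

payoff = np.exp(-r * T) * np.maximum(ST.mean(axis=1) - K, 0.0)
control = ST.sum(axis=1)                    # its expectation is known exactly
c_mean = S0.sum() * np.exp(r * T)

beta = np.cov(payoff, control)[0, 1] / control.var()
cv_estimate = payoff.mean() - beta * (control.mean() - c_mean)
print(f"plain MC: {payoff.mean():.4f}  with control variate: {cv_estimate:.4f}")
print(f"variance ratio: {np.var(payoff - beta * control) / payoff.var():.3f}")
```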

  19. Research of Camera Calibration Based on DSP

    Directory of Open Access Journals (Sweden)

    Zheng Zhang

    2013-09-01

    Full Text Available To take advantage of the high efficiency and stability of DSP in data processing and of the functions of the OpenCV library, this study puts forward a scheme for camera calibration in a DSP embedded system. An algorithm for camera calibration based on OpenCV is designed by analyzing the camera model and lens distortion. The port of EMCV to DSP is completed, and the camera calibration algorithm is migrated and optimized based on the CCS development environment and the DSP/BIOS system. While realizing the calibration function, this algorithm improves the efficiency of program execution and the precision of calibration, and lays the foundation for further research on visual location based on DSP embedded systems.
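
    For reference, the OpenCV calibration routine such a port builds on can be exercised on a desktop first. A minimal chessboard calibration sketch follows; the image directory and board size are placeholders:

```python
# Standard OpenCV chessboard calibration: detect inner corners in a set of
# images, then estimate the intrinsic matrix and lens distortion.
import glob
import cv2
import numpy as np

pattern = (9, 6)                                  # inner corners per row/col
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in glob.glob("calib_images/*.png"):     # hypothetical image set
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Estimate camera matrix and distortion coefficients from the correspondences
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS:", rms, "\ncamera matrix:\n", K)
```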

  20. Calibration of the Cherenkov Telescope Array

    CERN Document Server

    Gaug, Markus; Berge, David; Reyes, Raquel de los; Doro, Michele; Foerster, Andreas; Maccarone, Maria Concetta; Parsons, Dan; van Eldik, Christopher

    2015-01-01

    The construction of the Cherenkov Telescope Array is expected to start soon. We will present the baseline methods, and their extensions currently foreseen, to calibrate the observatory. These must meet the stringent requirements on the allowed systematic uncertainties of the reconstructed gamma-ray energy and flux scales, as well as on the pointing resolution and on the overall duty cycle of the observatory. Onsite calibration activities are designed to include a robust and efficient calibration of the telescope cameras, and various methods and instruments to achieve calibration of the overall optical throughput of each telescope, leading to both inter-telescope calibration and an absolute calibration of the entire observatory. One important aspect of the onsite calibration is a correct understanding of the atmosphere above the telescopes, which constitutes the calorimeter of this detection technique. It is planned to be constantly monitored with state-of-the-art instruments to obtain a full molecular and...

  1. Traceable Pyrgeometer Calibrations

    Energy Technology Data Exchange (ETDEWEB)

    Dooraghi, Mike; Kutchenreiter, Mark; Reda, Ibrahim; Habte, Aron; Sengupta, Manajit; Andreas, Afshin; Newman, Martina

    2016-05-02

    This poster presents the development, implementation, and operation of the Broadband Outdoor Radiometer Calibrations (BORCAL) Longwave (LW) system at the Southern Great Plains Radiometric Calibration Facility for the calibration of pyrgeometers that provide traceability to the World Infrared Standard Group.

  2. Monte Carlo Radiative Transfer

    CERN Document Server

    Whitney, Barbara A

    2011-01-01

    I outline methods for calculating the solution of Monte Carlo Radiative Transfer (MCRT) in scattering, absorption and emission processes of dust and gas, including polarization. I provide a bibliography of relevant papers on methods with astrophysical applications.

  3. Monte Carlo transition probabilities

    OpenAIRE

    Lucy, L. B.

    2001-01-01

    Transition probabilities governing the interaction of energy packets and matter are derived that allow Monte Carlo NLTE transfer codes to be constructed without simplifying the treatment of line formation. These probabilities are such that the Monte Carlo calculation asymptotically recovers the local emissivity of a gas in statistical equilibrium. Numerical experiments with one-point statistical equilibrium problems for Fe II and Hydrogen confirm this asymptotic behaviour. In addition, the re...

  4. Preliminary evaluation of a Neutron Calibration Laboratory

    International Nuclear Information System (INIS)

    In the past few years, Brazil and several other countries in Latin America have experienced a great demand for the calibration of neutron detectors, mainly due to the increase in oil prospection and extraction. The only laboratory for the calibration of neutron detectors in Brazil is located at the Institute for Radioprotection and Dosimetry (IRD/CNEN), Rio de Janeiro, which is part of the IAEA SSDL network. This laboratory is the national standard laboratory in Brazil. With the increase in the demand for the calibration of neutron detectors, there is a need for additional calibration services. In this context, the Calibration Laboratory of IPEN/CNEN, Sao Paulo, which already offers calibration services for radiation detectors with standard X, gamma, beta and alpha beams, has recently designed a new calibration laboratory for neutron detectors. In this work, the ambient dose equivalent rate (H*(10)) was evaluated at several positions inside and around this laboratory using Monte Carlo simulation (MCNP5 code), in order to verify the adequacy of the shielding. The obtained results showed that the shielding is effective, and that this is a low-cost methodology to improve the safety of the workers and evaluate the total staff workload. (author)

  5. Parallel processing Monte Carlo radiation transport codes

    International Nuclear Information System (INIS)

    Issues related to distributed-memory multiprocessing as applied to Monte Carlo radiation transport are discussed. Measurements of communication overhead are presented for the radiation transport code MCNP which employs the communication software package PVM, and average efficiency curves are provided for a homogeneous virtual machine

  6. WFC3: UVIS Dark Calibration

    Science.gov (United States)

    Bourque, Matthew; Biretta, John A.; Anderson, Jay; Baggett, Sylvia M.; Gunning, Heather C.; MacKenty, John W.

    2014-06-01

    Wide Field Camera 3 (WFC3), a fourth-generation imaging instrument on board the Hubble Space Telescope (HST), has exhibited excellent performance since its installation during Servicing Mission 4 in May 2009. The UVIS detector, comprised of two e2v CCDs, is one of two channels available on WFC3 and is named for its ultraviolet and visible light sensitivity. We present the various procedures and results of the WFC3/UVIS dark calibration, which monitors the health and stability of the UVIS detector, provides characterization of hot pixels and dark current, and produces calibration files to be used as a correction for dark current in science images. We describe the long-term growth of hot pixels and the impacts that UVIS Charge Transfer Efficiency (CTE) losses, postflashing, and proximity to the readout amplifiers have on the population. We also discuss the evolution of the median dark current, which has been slowly increasing since the start of the mission and is currently ~6 e-/hr/pix, averaged across each chip. We outline the current algorithm for creating UVIS dark calibration files, which includes aggressive cosmic ray masking, image combination, and hot pixel flagging. Calibration products are available to the user community, typically 3-5 days after initial processing, through the Calibration Database System (CDBS). Finally, we discuss various improvements to the calibration and monitoring procedures. UVIS dark monitoring will continue throughout and beyond HST’s current proposal cycle.

  7. Calibration method for a in vivo measurement system using mathematical simulation of the radiation source and the detector; Metodo de calibracao de um sistema de medida in vivo atraves da simulacao matematica da fonte de radiacao e do detector

    Energy Technology Data Exchange (ETDEWEB)

    Hunt, John

    1998-12-31

    A Monte Carlo program which uses a voxel phantom has been developed to simulate in vivo measurement systems for calibration purposes. The calibration method presented here employs a mathematical phantom, produced in the form of volume elements (voxels), obtained through magnetic resonance images of the human body. The calibration method uses the Monte Carlo technique to simulate the tissue contamination, the transport of the photons through the tissues and the detection of the radiation. The program simulates the transport and detection of photons between 0.035 and 2 MeV and uses, for the body representation, a voxel phantom with a format of 871 slices, each of 277 x 148 picture elements. The Monte Carlo code was applied to the calibration of in vivo systems and to the estimation of differences in counting efficiencies between homogeneous and non-homogeneous radionuclide distributions in the lung. Calculations show a factor of 20 between deposition of 241Am at the back compared with the front of the lung. The program was also used to estimate the 137Cs body burden of an internally contaminated individual, counted with an 8 x 4 NaI(Tl) detector, and the 241Am body burden of an internally contaminated individual, who was counted using a planar germanium detector. (author) 24 refs., 38 figs., 23 tabs.

  8. Construction of Chinese adult male phantom library and its application in the virtual calibration of in vivo measurement

    Science.gov (United States)

    Chen, Yizheng; Qiu, Rui; Li, Chunyan; Wu, Zhen; Li, Junli

    2016-03-01

    In vivo measurement is a main method of internal contamination evaluation, particularly for large numbers of people after a nuclear accident. Before practical application, it is necessary to obtain the counting efficiency of the detector by calibration. Virtual calibration based on Monte Carlo simulation usually uses a reference human computational phantom, and the morphological difference between the monitored personnel and the calibrated phantom may lead to a deviation in the counting efficiency. Therefore, a phantom library containing a wide range of heights and total body masses is needed. In this study, a Chinese reference adult male polygon surface (CRAM_S) phantom was constructed based on the CRAM voxel phantom, with the organ models adjusted to match the Chinese reference data. The CRAM_S phantom was then transformed to a sitting posture for convenience in practical monitoring. Referring to the mass and height distribution of the Chinese adult male, a phantom library containing 84 phantoms was constructed by deforming the reference surface phantom. Phantoms in the library have 7 different heights ranging from 155 cm to 185 cm, and there are 12 phantoms with different total body masses at each height. As an example of application, organ-specific and total counting efficiencies for Ba-133 were calculated using the MCNPX code, with two series of phantoms selected from the library. The influence of morphological variation on the counting efficiency was analyzed. The results show that using only the reference phantom in virtual calibration may lead to an error of 68.9% in the total counting efficiency. Thus the influence of morphological difference on virtual calibration can be greatly reduced by using a phantom library with a wide range of masses and heights instead of a single reference phantom.

  9. The calibration system for the GERDA experiment

    International Nuclear Information System (INIS)

    The GERDA experiment uses the neutrinoless double beta decay of Ge-76 to probe three fundamental questions in neutrino physics: are neutrinos Dirac or Majorana particles? What is their absolute mass? What is the mass hierarchy of the three generations? In my talk I present the calibration system for the Ge semiconductor diodes enriched in Ge-76. The system is used to set the energy scale and to calibrate the pulse shapes, which will be used to further reject background events. The lowest possible background is crucial for the whole experiment, and therefore the calibration system must not interfere with the data acquisition phase, while at the same time operating efficiently during the calibration runs.

  10. Research of Camera Calibration Based on DSP

    OpenAIRE

    Zheng Zhang; Yukun Wan; Lixin Cai

    2013-01-01

    To take advantage of the high efficiency and stability of DSP in data processing and of the functions of the OpenCV library, this study puts forward a scheme for camera calibration in a DSP embedded system. An algorithm for camera calibration based on OpenCV is designed by analyzing the camera model and lens distortion. The port of EMCV to DSP is completed, and the camera calibration algorithm is migrated and optimized based on the CCS development environment and the ...

  11. CERN honours Carlo Rubbia

    CERN Multimedia

    2009-01-01

    Carlo Rubbia turned 75 on March 31, and CERN held a symposium to mark his birthday and pay tribute to his impressive contribution to both CERN and science. Carlo Rubbia, 4th from right, together with the speakers at the symposium. On 7 April CERN hosted a celebration marking Carlo Rubbia's 75th birthday and 25 years since he was awarded the Nobel Prize for Physics. "Today we will celebrate 100 years of Carlo Rubbia," joked CERN's Director-General, Rolf Heuer, in his opening speech, "75 years of his age and 25 years of the Nobel Prize." Rubbia received the Nobel Prize along with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. During the symposium, which was held in the Main Auditorium, several eminent speakers gave lectures on areas of science to which Carlo Rubbia made decisive contributions. Among those who spoke were Michel Spiro, Director of the French National Insti...

  12. Absolute calibration technique for spontaneous fission sources

    International Nuclear Information System (INIS)

    An absolute calibration technique for a spontaneously fissioning nuclide (which involves no arbitrary parameters) allows unique determination of the detector efficiency for that nuclide, and hence of the fission source strength.

  13. Computing Greeks with Multilevel Monte Carlo Methods using Importance Sampling

    OpenAIRE

    Euget, Thomas

    2012-01-01

    This paper presents a new efficient way to reduce the variance of estimators of popular payoffs and Greeks encountered in financial mathematics. The idea is to apply importance sampling with the multilevel Monte Carlo method recently introduced by M.B. Giles. So far, importance sampling has proved successful in combination with the standard Monte Carlo method. We show the efficiency of our approach on the estimation of financial derivative prices and then on the estimation of Greeks (i.e. sensitivitie...
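
    The paper layers importance sampling on top of Giles' multilevel scheme; the sketch below shows only the bare multilevel skeleton for a European call under geometric Brownian motion, with each fine Euler path coupled to a coarse path driven by the same Brownian increments. Parameters and per-level sample sizes are hypothetical:

```python
# Bare-bones multilevel Monte Carlo (MLMC): the price estimate is the sum of
# level-correction means E[P_l - P_{l-1}], each cheap to estimate because the
# coupled fine/coarse payoffs are strongly correlated.
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

def level_estimator(l, N, M=2):
    """Mean of P_fine - P_coarse on level l (P_coarse = 0 on level 0)."""
    nf = M**l
    dt = T / nf
    dW = np.sqrt(dt) * rng.standard_normal((N, nf))
    Sf = np.full(N, S0)
    for k in range(nf):                        # fine Euler path
        Sf = Sf * (1 + r * dt + sigma * dW[:, k])
    Pf = np.exp(-r * T) * np.maximum(Sf - K, 0.0)
    if l == 0:
        return Pf.mean()
    Sc = np.full(N, S0)
    dWc = dW.reshape(N, nf // M, M).sum(axis=2)    # pair up the increments
    for k in range(nf // M):                   # coarse path, same Brownian motion
        Sc = Sc * (1 + r * (M * dt) + sigma * dWc[:, k])
    Pc = np.exp(-r * T) * np.maximum(Sc - K, 0.0)
    return (Pf - Pc).mean()

# Fixed sample sizes per level, for simplicity (Giles adapts them optimally)
price = sum(level_estimator(l, N)
            for l, N in enumerate([200_000, 50_000, 12_500, 3_200]))
print(price)
```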

  14. Calibration of Ge gamma-ray spectrometers for complex sample geometries and matrices

    Science.gov (United States)

    Semkow, T. M.; Bradt, C. J.; Beach, S. E.; Haines, D. K.; Khan, A. J.; Bari, A.; Torres, M. A.; Marrantino, J. C.; Syed, U.-F.; Kitto, M. E.; Hoffman, T. J.; Curtis, P.

    2015-11-01

    A comprehensive study of the efficiency calibration and calibration verification of Ge gamma-ray spectrometers was performed using semi-empirical, computational Monte Carlo (MC), and transfer methods. The aim of this study was to evaluate the accuracy of the quantification of gamma-emitting radionuclides in complex matrices normally encountered in environmental and food samples. A wide range of gamma energies from 59.5 to 1836.0 keV and geometries from a 10-mL jar to a 1.4-L Marinelli beaker were studied on four Ge spectrometers with relative efficiencies between 102% and 140%. Density and coincidence-summing corrections were applied. Innovative techniques were developed for the preparation of artificial complex matrices from materials such as acidified water, polystyrene, ethanol, sugar, and sand, resulting in densities ranging from 0.3655 to 2.164 g cm-3. They were spiked with gamma activity traceable to international standards and used for calibration verifications. A quantitative method of tuning MC calculations to experiment was developed based on a multidimensional chi-square paraboloid.
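
    The chi-square-paraboloid idea can be illustrated numerically: evaluate chi-square between simulation and experiment on a grid of detector-model parameters, fit a quadratic surface, and read off its minimum. The sketch below uses two hypothetical parameters and synthetic chi-square values, not the paper's data:

```python
# Fit chi2 ~ c + b1*x + b2*y + q11*x^2 + q12*x*y + q22*y^2 on a parameter
# grid, then locate the paraboloid minimum by solving grad = 0.
import numpy as np

p1, p2 = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
x, y = p1.ravel(), p2.ravel()
# Synthetic chi-square surface with minimum near (0.2, -0.1), plus noise
chi2 = 3.0 + 4.0 * (x - 0.2)**2 + 2.5 * (y + 0.1)**2 + 0.8 * (x - 0.2) * (y + 0.1)
chi2 += np.random.default_rng(2).normal(0, 0.05, chi2.size)

A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
c, b1, b2, q11, q12, q22 = np.linalg.lstsq(A, chi2, rcond=None)[0]

# grad = b + [[2*q11, q12], [q12, 2*q22]] p = 0  ->  p_min
Q = np.array([[2 * q11, q12], [q12, 2 * q22]])
p_min = np.linalg.solve(Q, -np.array([b1, b2]))
print("tuned parameters:", p_min)   # should land near (0.2, -0.1)
```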

  15. A methodology to develop computational phantoms with adjustable posture for WBC calibration

    International Nuclear Information System (INIS)

    A Whole Body Counter (WBC) is a facility for routinely assessing the internal contamination of exposed workers, especially in the case of radiation release accidents. The calibration of the counting device is usually done by using anthropomorphic physical phantoms representing the human body. Owing to the challenge of constructing representative physical phantoms, virtual calibration has been introduced. The use of computational phantoms and the Monte Carlo method to simulate radiation transport has been demonstrated to be a worthy alternative. In this study we introduce a methodology developed for the creation of realistic computational voxel phantoms with adjustable posture for WBC calibration. The methodology makes use of different software packages to enable the creation and modification of computational voxel phantoms. This allows voxel phantoms to be developed on demand for the calibration of different WBC configurations. This in turn helps to study the major source of uncertainty associated with the in vivo measurement routine, which is the difference between the calibration phantoms and the real persons being counted. The use of realistic computational phantoms also helps the optimization of the counting measurement. Open-source codes such as the MakeHuman and Blender software packages have been used for the creation and modelling of 3D humanoid characters based on polygonal mesh surfaces. Also, in-house software was developed to convert the binary 3D voxel grid into an MCNPX input file. This paper summarizes the development of a library of phantoms of the human body that uses two basic phantoms called MaMP and FeMP (Male and Female Mesh Phantoms) to create a set of male and female phantoms that vary both in height and in weight. Two sets of MaMP and FeMP phantoms were developed and used for efficiency calibration of two different WBC set-ups: the Doel NPP WBC laboratory and the AGM laboratory of SCK-CEN in Mol, Belgium. (paper)

  16. A methodology to develop computational phantoms with adjustable posture for WBC calibration

    Science.gov (United States)

    Ferreira Fonseca, T. C.; Bogaerts, R.; Hunt, John; Vanhavere, F.

    2014-11-01

    A Whole Body Counter (WBC) is a facility for routinely assessing the internal contamination of exposed workers, especially in the case of radiation release accidents. The calibration of the counting device is usually done by using anthropomorphic physical phantoms representing the human body. Owing to the challenge of constructing representative physical phantoms, virtual calibration has been introduced. The use of computational phantoms and the Monte Carlo method to simulate radiation transport has been demonstrated to be a worthy alternative. In this study we introduce a methodology developed for the creation of realistic computational voxel phantoms with adjustable posture for WBC calibration. The methodology makes use of different software packages to enable the creation and modification of computational voxel phantoms. This allows voxel phantoms to be developed on demand for the calibration of different WBC configurations. This in turn helps to study the major source of uncertainty associated with the in vivo measurement routine, which is the difference between the calibration phantoms and the real persons being counted. The use of realistic computational phantoms also helps the optimization of the counting measurement. Open-source codes such as the MakeHuman and Blender software packages have been used for the creation and modelling of 3D humanoid characters based on polygonal mesh surfaces. Also, in-house software was developed to convert the binary 3D voxel grid into an MCNPX input file. This paper summarizes the development of a library of phantoms of the human body that uses two basic phantoms called MaMP and FeMP (Male and Female Mesh Phantoms) to create a set of male and female phantoms that vary both in height and in weight. Two sets of MaMP and FeMP phantoms were developed and used for efficiency calibration of two different WBC set-ups: the Doel NPP WBC laboratory and the AGM laboratory of SCK-CEN in Mol, Belgium.
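
    As an illustration of the final conversion step these two records describe (and emphatically not the authors' software), a voxel grid of tissue IDs can be written out as the FILL array of an MCNPX lattice cell card. Only the fill-card fragment is generated here; a real deck also needs the cell, surface, material and source cards:

```python
# Schematic voxel-grid -> MCNPX lattice FILL card writer. The universe
# numbers and the tiny phantom below are hypothetical.
import numpy as np

def voxels_to_fill_card(voxels, tissue_to_universe, per_line=20):
    """voxels: integer array (nx, ny, nz) of tissue IDs.
    tissue_to_universe: dict mapping tissue ID -> MCNPX universe number."""
    nx, ny, nz = voxels.shape
    header = f"     fill=0:{nx - 1} 0:{ny - 1} 0:{nz - 1}"
    # MCNPX expects the universe list with x varying fastest, then y, then z
    flat = [tissue_to_universe[int(v)]
            for v in voxels.transpose(2, 1, 0).ravel()]
    lines = [header]
    for i in range(0, len(flat), per_line):
        lines.append("      " + " ".join(str(u) for u in flat[i:i + per_line]))
    return "\n".join(lines)

# Tiny hypothetical phantom: 0 = air, 1 = soft tissue, 2 = bone
grid = np.zeros((4, 4, 4), dtype=int)
grid[1:3, 1:3, 1:3] = 1
grid[2, 2, 2] = 2
print(voxels_to_fill_card(grid, {0: 10, 1: 11, 2: 12}))
```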

  17. Optimal calibration of nuclear instrumentation

    International Nuclear Information System (INIS)

    Accurate knowledge of the core power level is essential for the safe and efficient operation of nuclear power plants. Ionization chambers located outside the reactor core have the necessary reliability and response-time characteristics and have been used extensively to indicate the power level. The calibration of the ion chamber, and of the associated nuclear instrumentation (NI), has traditionally been based on the thermal power in the secondary coolant system. The usual NI calibration procedure consists of establishing steady-state operating conditions, calorimetrically determining the power at the secondary side of the steam generator, and adjusting the NI output to correspond to the measured thermal power. This study addresses several questions, including: (a) what sampling rate should be employed, (b) how many measurements are required, and (c) how can additional power-level-related information, such as primary coolant loop measurements and knowledge of plant dynamics, be included in the calibration procedure.

  18. Composite biasing in Monte Carlo radiative transfer

    CERN Document Server

    Baes, Maarten; Lunttila, Tuomas; Bianchi, Simone; Camps, Peter; Juvela, Mika; Kuiper, Rolf

    2016-01-01

    Biasing or importance sampling is a powerful technique in Monte Carlo radiative transfer, and can be applied in different forms to increase the accuracy and efficiency of simulations. One of the drawbacks of the use of biasing is the potential introduction of large weight factors. We discuss a general strategy, composite biasing, to suppress the appearance of large weight factors. We use this composite biasing approach for two different problems faced by current state-of-the-art Monte Carlo radiative transfer codes: the generation of photon packages from multiple components, and the penetration of radiation through high optical depth barriers. In both cases, the implementation of the relevant algorithms is trivial and does not interfere with any other optimisation techniques. Through simple test models, we demonstrate the general applicability, accuracy and efficiency of the composite biasing approach. In particular, for the penetration of high optical depths, the gain in efficiency is spectacular for the spe...
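
    A toy numerical illustration of the weight-bounding property behind composite biasing (not the codes' implementation): instead of sampling from a biased density b alone, one samples from the mixture q = (1 - lam)*p + lam*b, so the weight w = p/q can never exceed 1/(1 - lam). The densities below are simple hypothetical stand-ins:

```python
# Estimate E_p[f(X)] by sampling from a composite (mixture) density and
# weighting; the weights stay bounded by 1/(1 - lam) by construction.
import numpy as np

rng = np.random.default_rng(3)
lam = 0.5                                 # fraction given to the biased part

p = lambda x: 2.0 * x                     # "natural" pdf on [0, 1]
b = lambda x: np.ones_like(x)             # biased pdf: uniform on [0, 1]
q = lambda x: (1 - lam) * p(x) + lam * b(x)

# Sample the mixture: pick a component, then draw from it
n = 100_000
from_p = rng.random(n) < (1 - lam)
x = np.where(from_p, np.sqrt(rng.random(n)), rng.random(n))  # inverse CDF of p is sqrt

w = p(x) / q(x)                           # bounded by 1/(1 - lam) = 2
f = np.exp(-5 * x)                        # an arbitrary tally
print("estimate:", np.mean(w * f), "  max weight:", w.max())
print("exact:   ", (2 / 25) * (1 - 6 * np.exp(-5)))   # analytic integral of f*p
```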

  19. Clinical dosimetry in photon radiotherapy. A Monte Carlo based investigation

    International Nuclear Information System (INIS)

    Practical clinical dosimetry is a fundamental step within the radiation therapy process and aims at quantifying the absorbed radiation dose within a 1-2% uncertainty. To achieve this level of accuracy, corrections are needed for the calibrated, air-filled ionization chambers that are used for dose measurement. The correction procedures are based on the Spencer-Attix cavity theory and are defined in current dosimetry protocols. Energy-dependent corrections for deviations from the calibration beam account for the changed ionization chamber response in the treatment beam. The corrections applied are usually based on semi-analytical models or measurements and are generally hard to determine, since their magnitude is only a few percent or even less. Furthermore, the corrections are defined for fixed geometrical reference conditions and do not apply to non-reference conditions in modern radiotherapy applications. The stochastic Monte Carlo method for the simulation of radiation transport is becoming a valuable tool in the field of medical physics. As a suitable tool for calculating these corrections with high accuracy, the simulations enable the investigation of ionization chambers under various conditions. The aim of this work is the consistent investigation of ionization chamber dosimetry in photon radiation therapy with the use of Monte Carlo methods. Monte Carlo systems now exist which, in principle, enable the accurate calculation of the ionization chamber response. Still, their direct use for studies of this type is limited by the long calculation times needed for a meaningful result with a small statistical uncertainty, inherent to every result of a Monte Carlo simulation. Besides heavy use of computer hardware, variance-reduction techniques can be applied to reduce the required calculation time. Methods for increasing the efficiency of the simulations were developed and incorporated into a modern and established Monte Carlo simulation environment.

  20. A new phantom for use in whole body counters: a Monte Carlo design project.

    Science.gov (United States)

    Kramer, Gary H; Capello, Kevin; Ho, Arnon

    2005-01-01

    A new phantom for the calibration or performance testing of whole body counters has been conceptualized, and the design has been validated by Monte Carlo simulations. The simulations compared the expected counting efficiencies of the new design with those of a conventional phantom; both phantoms were placed in a virtual copy of the Human Monitoring Laboratory's whole body counter. The simulations covered a wide energy range (126-2,754 keV), and the agreement between the two types of phantoms was 0.988 +/- 0.005. Based on these findings, a prototype sliced BOMAB phantom corresponding to a Reference Female will be constructed. Had the results been unfavorable, which was not the case, the expense of building and testing the phantom would have been avoided. PMID:15596993

  1. Fundamentals of Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a PowerPoint presentation that serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of Monte Carlo. Welcome to Los Alamos, the birthplace of “Monte Carlo” for computational physics. Stanislaw Ulam, John von Neumann, and Nicholas Metropolis are credited as the founders of modern Monte Carlo methods. The name “Monte Carlo” was chosen in reference to the Monte Carlo Casino in Monaco (purportedly a place where Ulam’s uncle went to gamble). The central idea (for us) – to use computer-generated “random” numbers to determine expected values or estimate equation solutions – has since spread to many fields. "The first thoughts and attempts I made to practice [the Monte Carlo Method] were suggested by a question which occurred to me in 1946 as I was convalescing from an illness and playing solitaires. The question was what are the chances that a Canfield solitaire laid out with 52 cards will come out successfully? After spending a lot of time trying to estimate them by pure combinatorial calculations, I wondered whether a more practical method than “abstract thinking” might not be to lay it out say one hundred times and simply observe and count the number of successful plays... Later [in 1946], I described the idea to John von Neumann, and we began to plan actual calculations." - Stanislaw Ulam.
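
    Ulam's idea fits in a few lines of code: use pseudo-random numbers to estimate an expected value by counting outcomes. The classic toy example estimates pi from the fraction of random points in the unit square that land inside the quarter circle:

```python
# Monte Carlo estimation of an expected value: here, pi/4 is the probability
# that a uniform point in the unit square falls inside the quarter circle.
import random

def estimate_pi(n):
    hits = sum(1 for _ in range(n)
               if random.random()**2 + random.random()**2 <= 1.0)
    return 4.0 * hits / n

for n in (1_000, 100_000, 10_000_000):
    print(n, estimate_pi(n))    # statistical error shrinks like 1/sqrt(n)
```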

  2. Carlo Caso (1940 - 2007)

    CERN Multimedia

    Leonardo Rossi

    Carlo Caso (1940 - 2007) Our friend and colleague Carlo Caso passed away on July 7th, after several months of courageous fight against cancer. Carlo spent most of his scientific career at CERN, taking an active part in the experimental programme of the laboratory. His long and fruitful involvement in particle physics started in the sixties, in the Genoa group led by G. Tomasini. He then carried out several experiments using the CERN liquid hydrogen bubble chambers (first the 2000HBC and later BEBC) to study various facets of the production and decay of meson and baryon resonances. He later formed his own group and joined the NA27 Collaboration to exploit the EHS Spectrometer with a rapid-cycling bubble chamber as vertex detector. Amongst their many achievements, they were the first to measure, with excellent precision, the lifetime of the charmed D mesons. At the start of the LEP era, Carlo and his group moved to the DELPHI experiment, participating in the construction and running of the HPC electromagnetic c...

  3. Trinocular Calibration Method Based on Binocular Calibration

    Directory of Open Access Journals (Sweden)

    CAO Dan-Dan

    2012-10-01

    Full Text Available In order to solve the self-occlusion problem in plane-based multi-camera calibration systems and to expand the measurement range, a tri-camera vision system based on binocular calibration is proposed. The three cameras are grouped into two pairs, while the public camera is taken as the reference to build the global coordinate system. Global calibration is realized by comparing the measured absolute distance with the true absolute distance. The MRE (mean relative error) of the global calibration of the two camera pairs in the experiments can be as low as 0.277% and 0.328%, respectively. Experimental results show that this method is feasible, simple and effective, and has high precision.

  4. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper, Raftery et al. (Mon Weather Rev 133:1155-1174, 2005) recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
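
    A compact sketch of the EM training the record refers to, for a Gaussian BMA mixture: given K member forecasts f[k, t] and verifying observations y[t], iterate responsibilities (E-step) and weight/variance updates (M-step). The data below are synthetic placeholders, and member-specific variances are used (a common variant of the original formulation):

```python
# EM for BMA weights and variances of a Gaussian mixture of forecasts.
import numpy as np

rng = np.random.default_rng(4)
T_obs, K = 500, 3
truth = rng.normal(20.0, 5.0, T_obs)
f = truth + rng.normal([[0.5], [-1.0], [0.0]], [[1.0], [2.0], [1.5]], (K, T_obs))
y = truth + rng.normal(0.0, 1.0, T_obs)

w = np.full(K, 1.0 / K)      # initial BMA weights
var = np.full(K, 1.0)        # initial member variances

for _ in range(200):
    # E-step: responsibility of member k for observation t
    dens = (np.exp(-0.5 * (y - f)**2 / var[:, None])
            / np.sqrt(2 * np.pi * var[:, None]))
    z = w[:, None] * dens
    z /= z.sum(axis=0, keepdims=True)
    # M-step: update weights and variances
    w = z.mean(axis=1)
    var = (z * (y - f)**2).sum(axis=1) / z.sum(axis=1)

print("BMA weights:", np.round(w, 3), " variances:", np.round(var, 3))
```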

  5. Markov chain Monte Carlo methods: an introductory example

    Science.gov (United States)

    Klauenberg, Katy; Elster, Clemens

    2016-02-01

    When the Guide to the Expression of Uncertainty in Measurement (GUM) and methods from its supplements are not applicable, the Bayesian approach may be a valid and welcome alternative. Evaluating the posterior distribution, estimates or uncertainties involved in Bayesian inferences often requires numerical methods to avoid high-dimensional integrations. Markov chain Monte Carlo (MCMC) sampling is such a method—powerful, flexible and widely applied. Here, a concise introduction is given, illustrated by a simple, typical example from metrology. The Metropolis-Hastings algorithm is the most basic and yet flexible MCMC method. Its underlying concepts are explained and the algorithm is given step by step. The few lines of software code required for its implementation invite interested readers to get started. Diagnostics to evaluate the performance and common algorithmic choices are illustrated to calibrate the Metropolis-Hastings algorithm for efficiency. Routine application of MCMC algorithms may be hindered currently by the difficulty to assess the convergence of MCMC output and thus to assure the validity of results. An example points to the importance of convergence and initiates discussion about advantages as well as areas of research. Available software tools are mentioned throughout.
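
    The "few lines of software code" the record mentions might look like the following random-walk Metropolis-Hastings sampler, here for a toy metrology-style problem (the posterior of a mean given repeated indications, with a flat prior and known repeatability); all numbers are illustrative:

```python
# Random-walk Metropolis-Hastings: propose, accept with probability
# min(1, posterior ratio), and tune the step size via the acceptance rate.
import numpy as np

rng = np.random.default_rng(5)
data = rng.normal(10.0, 0.5, size=20)     # simulated indications
sigma = 0.5                               # assumed known repeatability

def log_post(mu):
    return -0.5 * np.sum((data - mu)**2) / sigma**2   # up to a constant

n_iter, step = 50_000, 0.2
chain = np.empty(n_iter)
mu, lp = data.mean(), log_post(data.mean())
accepted = 0
for i in range(n_iter):
    prop = mu + step * rng.standard_normal()          # symmetric proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:           # MH acceptance test
        mu, lp = prop, lp_prop
        accepted += 1
    chain[i] = mu

burn = chain[5_000:]                                  # discard burn-in
print(f"mean {burn.mean():.3f}, sd {burn.std():.3f}, "
      f"acceptance {accepted / n_iter:.2f}")
```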

  6. Absolute particle flux determination in absence of known detector efficiency. The “Influence Method”

    Energy Technology Data Exchange (ETDEWEB)

    Rios, I.J.; Mayer, R.E., E-mail: mayer@cab.cnea.gov.ar

    2015-03-01

    In this article we introduce a new method, which we call the “Influence Method”, to be employed in the absolute determination of a particle flux, especially applicable to the time-of-flight spectrum determination of a neutron beam. It yields not only the absolute number of particles but also an estimator of the detector efficiencies. It may be useful when no calibration standards are available. The different estimators are introduced, along with some Monte Carlo simulations to further illustrate the method. - Highlights: • “Influence Method”, a new method for absolute particle flux determination. • Absolute detector efficiency determination. • Absolute time-of-flight particle flux determination.

  7. Experience with a factory-calibrated HPGe detector

    Science.gov (United States)

    Bossus, D. A. W.; Swagten, J. J. J. M.; Kleinjans, P. A. M.

    2006-08-01

    For k0-based analysis, an HPGe detector has to be used. This detector has to be absolutely calibrated in a reference position and with a defined geometry so that, using SOLCOI/KAYZERO software package, for example, efficiencies of other positions and sample geometries can be calculated. This reference calibration is a time-consuming procedure during which the detector is not available for analyses. Therefore, DSM Resolve decided to purchase a "factory-calibrated" detector. Efficiency calibrations were ordered for a point-source geometry at a coincidence-free distance from the detector and for two additional distances closer to the detector. After delivery, the factory calibration was checked at DSM Resolve using a limited set of PTB-calibrated reference sources. At the end, we decided nevertheless to perform a standard and full calibration of the detector, because it turned out that the factory-calibrated detector was not accurate enough to be used for quantitative analyses.

  8. Automated Camera Calibration

    Science.gov (United States)

    Chen, Siqi; Cheng, Yang; Willson, Reg

    2006-01-01

    Automated Camera Calibration (ACAL) is a computer program that automates the generation of calibration data for camera models used in machine vision systems. Machine vision camera models describe the mapping between points in three-dimensional (3D) space in front of the camera and the corresponding points in two-dimensional (2D) space in the camera's image. Calibrating a camera model requires a set of calibration data containing known 3D-to-2D point correspondences for the given camera system. Generating calibration data typically involves taking images of a calibration target whose fiducial marks have known 3D locations, and then measuring the 2D locations of the fiducial marks in the images. ACAL automates the analysis of calibration target images and greatly speeds the overall calibration process.
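
    ACAL's own source is not shown here, but the 3D-to-2D calibration step it automates can be sketched with OpenCV's standard routines; the checkerboard layout, square size and file names below are hypothetical.

        import glob
        import cv2
        import numpy as np

        # 3D fiducial locations for a planar 9x6 checkerboard (unit square size)
        objp = np.zeros((9 * 6, 3), np.float32)
        objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

        obj_points, img_points = [], []
        for fname in glob.glob("target_*.png"):       # hypothetical target images
            gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, (9, 6))
            if found:                                 # measured 2D fiducial locations
                obj_points.append(objp)
                img_points.append(corners)

        # Fit the camera model to the known 3D-to-2D point correspondences
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_points, img_points, gray.shape[::-1], None, None)
        print("reprojection RMS:", rms)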

  9. Simple recipes for ground scattering in neutron detector calibration

    International Nuclear Information System (INIS)

    Simple formulae are presented to account for ground scattering of neutrons above an earth or concrete surface without relying on albedo data. These formulae, which should be useful in calibrating neutron fluence or dose responding instruments, have been checked against Monte Carlo calculations and measurements with a PuBe source, with agreement to better than 10%. (author)

  10. Self-calibration and biconvex compressive sensing

    Science.gov (United States)

    Ling, Shuyang; Strohmer, Thomas

    2015-11-01

    The design of high-precision sensing devices becomes ever more difficult and expensive. At the same time, the need for precise calibration of these devices (ranging from tiny sensors to space telescopes) manifests itself as a major roadblock in many scientific and technological endeavors. To achieve optimal performance of advanced high-performance sensors one must carefully calibrate them, which is often difficult or even impossible to do in practice. In this work we bring together three seemingly unrelated concepts, namely self-calibration, compressive sensing, and biconvex optimization. The idea behind self-calibration is to equip a hardware device with a smart algorithm that can compensate automatically for the lack of calibration. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. More specifically, we consider a linear system of equations y = DAx, where both x and the diagonal matrix D (which models the calibration error) are unknown. By ‘lifting’ this biconvex inverse problem we arrive at a convex optimization problem. By exploiting sparsity in the signal model, we derive explicit theoretical guarantees under which both x and D can be recovered exactly, robustly, and numerically efficiently via linear programming. Applications in array calibration and wireless communications are discussed and numerical simulations are presented, confirming and complementing our theoretical analysis.
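
    A minimal sketch of the lifting idea, assuming cvxpy is available: with Z = d x^T, each measurement y_i = sum_j A_ij Z_ij is linear in Z, so the biconvex problem becomes an entrywise l1 minimization; the problem sizes and the rank-1 recovery via SVD are illustrative choices, not the paper's SparseLift implementation.

        import numpy as np
        import cvxpy as cp

        rng = np.random.default_rng(3)
        m, n, s = 60, 40, 3                       # measurements, signal length, sparsity
        A = rng.standard_normal((m, n))
        d = 1.0 + 0.1 * rng.standard_normal(m)    # unknown calibration gains, D = diag(d)
        x = np.zeros(n)
        x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
        y = d * (A @ x)                           # y = D A x

        # Lift: Z = d x^T, so y_i = sum_j A_ij Z_ij is linear in the unknown Z
        Z = cp.Variable((m, n))
        prob = cp.Problem(cp.Minimize(cp.sum(cp.abs(Z))),
                          [cp.sum(cp.multiply(A, Z), axis=1) == y])
        prob.solve()

        # Recover d and x (up to a scale factor) from a rank-1 factorization of Z
        U, S, Vt = np.linalg.svd(Z.value)
        d_hat, x_hat = U[:, 0] * np.sqrt(S[0]), Vt[0] * np.sqrt(S[0])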

  11. Detection efficiency simulation and measurement of 6LiI/natLiI scintillation detector

    International Nuclear Information System (INIS)

    Background: Owing to their very high detection efficiency and small size, lithium iodide (LiI) scintillation detectors are used extensively in neutron measurement and environmental monitoring. Purpose: Using a thermal reactor, neutron detectors are tested and calibrated, and a new neutron detector device is designed and studied. Methods: The relationship between the size and the detection efficiency of the thermal neutron detector 6LiI/natLiI was studied using the Monte Carlo code GEANT4 and the MCNP5 package, and the thermal neutron efficiency of the detector was calibrated with reactor neutrons. Results: The theoretical simulation shows that the thermal neutron detection efficiency of a detector of 10-mm thickness is relatively high, reaching up to 98% for enriched 6LiI and 65% for natural natLiI. The thermal neutron efficiency of the detector was calibrated with reactor thermal neutrons. Taking into account neutron scattering by the lead brick, the high-density polyethylene and the environmental neutron contribution, the detection efficiency of the 6LiI detector is about 90% and that of the natLiI detector about 70%. Conclusion: The detector efficiency can reach the value given by the theoretical calculations. (authors)

  12. Monte Carlo photon benchmark problems

    International Nuclear Information System (INIS)

    Photon benchmark calculations have been performed to validate the MCNP Monte Carlo computer code. The results are compared with those of the COG Monte Carlo computer code and with either experimental or analytic results. The calculated solutions indicate that the Monte Carlo method, and MCNP and COG in particular, can accurately model a wide range of physical problems. 8 refs., 5 figs

  13. ORNL calibrations facility

    International Nuclear Information System (INIS)

    The ORNL Calibrations Facility is operated by the Instrumentation Group of the Industrial Safety and Applied Health Physics Division. Its primary purpose is to maintain radiation calibration standards for calibration of ORNL health physics instruments and personnel dosimeters. This report includes a discussion of the radioactive sources and ancillary equipment in use and a step-by-step procedure for calibration of those survey instruments and personnel dosimeters in routine use at ORNL

  14. Analytical multicollimator camera calibration

    Science.gov (United States)

    Tayman, W.P.

    1978-01-01

    Calibration with the U.S. Geological Survey multicollimator determines the calibrated focal length, the point of symmetry, the radial distortion referred to the point of symmetry, and the asymmetric characteristics of the camera lens. For this project, two cameras were calibrated, a Zeiss RMK A 15/23 and a Wild RC 8. Four test exposures were made with each camera. Results are tabulated for each exposure and averaged for each set. Copies of the standard USGS calibration reports are included. © 1978.

  15. Monte Carlo based method for conversion of in-situ gamma ray spectra obtained with a portable Ge detector to an incident photon flux energy distribution.

    Science.gov (United States)

    Clouvas, A; Xanthos, S; Antonopoulos-Domis, M; Silva, J

    1998-02-01

    A Monte Carlo based method for the conversion of an in-situ gamma-ray spectrum obtained with a portable Ge detector to a photon flux energy distribution is proposed. The spectrum is first stripped of the partial absorption and cosmic-ray events, leaving only the events corresponding to the full absorption of a gamma ray. Applying to the resulting spectrum the full absorption efficiency curve of the detector, determined by calibrated point sources and Monte Carlo simulations, the photon flux energy distribution is deduced. The events corresponding to partial absorption in the detector are determined by Monte Carlo simulations for different incident photon energies and angles using CERN's GEANT library. Using the detector's characteristics given by the manufacturer as input, it is impossible to reproduce the experimental spectra obtained with point sources. A transition zone of increasing charge collection efficiency has to be introduced in the simulation geometry, after the inactive Ge layer, in order to obtain good agreement between the simulated and experimental spectra. The functional form of the charge collection efficiency is deduced from a diffusion model. PMID:9450590
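
    A hedged sketch of the two-step conversion described above: subtract the Monte-Carlo-derived partial-absorption counts from the measured spectrum, then divide the surviving full-absorption counts by the full-energy peak efficiency; the layout of the response matrix is an assumption of this sketch.

        import numpy as np

        def spectrum_to_flux(counts, response, full_peak_eff, live_time):
            """Convert a measured Ge spectrum to a photon flux energy distribution.

            counts:        (n,) measured spectrum, bins ordered low to high energy
            response:      (n, n) MC response; response[i, j] = partial-absorption
                           counts in bin i per full-absorption count in bin j
            full_peak_eff: (n,) MC full-absorption (full-energy peak) efficiency
            """
            stripped = counts.astype(float).copy()
            # Strip from the highest energy down: full absorptions at energy j
            # deposit partial-absorption events only in lower bins i < j
            for j in range(len(counts) - 1, -1, -1):
                stripped[:j] -= stripped[j] * response[:j, j]
            stripped = np.clip(stripped, 0.0, None)
            return stripped / (full_peak_eff * live_time)   # photons per bin per second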

  16. Spiral reader calibration

    International Nuclear Information System (INIS)

    The method used to calibrate the spiral reader (SR) is presented, together with a brief description of the main procedures of the calibration program SCALP, adapted for the IHEP equipment and purposes. The precision characteristics of the IHEP SR have been analysed from the results, which are presented in the form of diagrams. A calibration manual is available for the user

  17. Experimental calibration of transmission grating and theoretical calculation of diffraction efficiency

    Institute of Scientific and Technical Information of China (English)

    尚万里; 杨家敏; 赵屹东; 崔明启; 郑雷; 韩勇; 周克瑾; 马陈燕; 朱托; 熊刚; 赵阳; 张文海; 易荣清; 况龙钰; 曹磊峰; 高宇林

    2011-01-01

    Transmission gratings are widely used in the measurement of soft X rays. In order to obtain the diffraction efficiencies of each order and other parameters of a transmission grating used in inertial confinement fusion research, the grating was calibrated at the Beijing Synchrotron Radiation Facility in the energy region from 200 eV to 1600 eV, and experimental diffraction efficiencies were obtained. The calculation method for the grating diffraction efficiency was extended, and a 7-side quasi-trapezoidal cross-section model was proposed. Fitting the experimental data shows that the theoretical results are in good agreement with the measurements, and the 7-side quasi-trapezoidal cross-section structure of the grating wires was determined.

  18. Multidetector calibration for mass spectrometers

    International Nuclear Information System (INIS)

    The International Atomic Energy Agency's Safeguards Analytical Laboratory has performed calibration experiments to measure the different efficiencies among multi-Faraday detectors for a Finnigan-MAT 261 mass spectrometer. Two types of calibration experiments were performed: (1) peak-shift experiments and (2) peak-jump experiments. For peak-shift experiments, the ion intensities were measured for all isotopes of an element in different Faraday detectors. Repeated measurements were made by shifting the isotopes to various Faraday detectors. Two different peak-shifting schemes were used to measure plutonium (UK Pu5/92138) samples. For peak-jump experiments, ion intensities were measured in a reference Faraday detector for a single isotope and compared with those measured in the other Faraday detectors. Repeated measurements were made by switching back and forth between the reference Faraday detector and a selected Faraday detector, and this switching procedure was repeated for all Faraday detectors. Peak-jump experiments were performed with replicate measurements of 239Pu, 187Re, and 238U. Detector efficiency factors were estimated for both peak-jump and peak-shift experiments using a flexible calibration model to statistically analyze both types of multidetector calibration experiments. The calculated detector efficiency factors were shown to depend on both the material analyzed and the experimental conditions, so the use of a single per-detector efficiency factor to correct routine sample analyses is not recommended. As an alternative, a three-run peak-shift sample analysis should be considered; a statistical analysis of the data from this peak-shift experiment can adjust the isotopic ratio estimates for detector differences in each sample analysis

  19. PERSONALISED BODY COUNTER CALIBRATION USING ANTHROPOMETRIC PARAMETERS.

    Science.gov (United States)

    Pölz, S; Breustedt, B

    2016-09-01

    Current calibration methods for body counting offer personalisation for lung counting predominantly with respect to ratios of body mass and height. Chest wall thickness is used as an intermediate parameter. This work revises and extends these methods using a series of computational phantoms derived from medical imaging data in combination with radiation transport simulation and statistical analysis. As an example, the method is applied to the calibration of the In Vivo Measurement Laboratory (IVM) at Karlsruhe Institute of Technology (KIT) comprising four high-purity germanium detectors in two partial body measurement set-ups. The Monte Carlo N-Particle (MCNP) transport code and the Extended Cardiac-Torso (XCAT) phantom series have been used. Analysis of the computed sample data consisting of 18 anthropometric parameters and calibration factors generated from 26 photon sources for each of the 30 phantoms reveals the significance of those parameters required for producing an accurate estimate of the calibration function. Body circumferences related to the source location perform best in the example, while parameters related to body mass show comparable but lower performances, and those related to body height and other lengths exhibit low performances. In conclusion, it is possible to give more accurate estimates of calibration factors using this proposed approach including estimates of uncertainties related to interindividual anatomical variation of the target population. PMID:26396263

  20. Who Writes Carlos Bulosan?

    OpenAIRE

    Charlie Samuya Veric

    2001-01-01

    The importance of Carlos Bulosan in Filipino and Filipino-American radical history and literature is indisputable. His eminence spans the Pacific, and he is known, diversely, as a radical poet, fictionist, novelist, and labor organizer. Author of the canonical America Is in the Heart, Bulosan is celebrated for chronicling the conditions in America in his time, such as racism and unemployment. In the history of criticism on Bulosan's life and work, however, there is an undeclared general consensus that views Bulosan and his work as coherent permanent texts of radicalism and anti-imperialism.

  1. Fundamentals of Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This PowerPoint presentation serves as lecture material for the Parallel Computing summer school. It covers the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
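
    The introductory example of estimating π fits in a few lines; the sketch below also attaches a Central Limit Theorem error estimate, matching the outline's emphasis on why the method works.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 1_000_000
        x, y = rng.uniform(size=n), rng.uniform(size=n)
        inside = (x**2 + y**2) <= 1.0              # hits inside the quarter circle
        pi_hat = 4.0 * inside.mean()               # area ratio estimates pi/4
        stderr = 4.0 * inside.std() / np.sqrt(n)   # CLT-based statistical uncertainty
        print(f"pi ~ {pi_hat:.4f} +/- {stderr:.4f}")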

  2. Contributon Monte Carlo

    International Nuclear Information System (INIS)

    The contributon Monte Carlo method is based on a new recipe to calculate target responses by means of volume integral of the contributon current in a region between the source and the detector. A comprehensive description of the method, its implementation in the general-purpose MCNP code, and results of the method for realistic nonhomogeneous, energy-dependent problems are presented. 23 figures, 10 tables

  3. Lookahead Strategies for Sequential Monte Carlo

    OpenAIRE

    Lin, Ming; Chen, Rong; Liu, Jun

    2013-01-01

    Based on the principles of importance sampling and resampling, sequential Monte Carlo (SMC) encompasses a large set of powerful techniques dealing with complex stochastic dynamic systems. Many of these systems possess strong memory, with which future information can help sharpen the inference about the current state. By providing theoretical justification of several existing algorithms and introducing several new ones, we study systematically how to construct efficient SMC algorithms to take ...
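
    As a baseline without the lookahead strategies studied in the paper, a bootstrap particle filter (sequential importance sampling with resampling) for a toy linear-Gaussian state-space model can be sketched as follows; the model coefficients are arbitrary illustrative choices.

        import numpy as np

        def bootstrap_filter(obs, n_particles=1000, seed=0):
            """Bootstrap SMC for x_t = 0.9 x_{t-1} + v_t, y_t = x_t + w_t."""
            rng = np.random.default_rng(seed)
            x = rng.standard_normal(n_particles)       # initial particle cloud
            means = []
            for y in obs:
                x = 0.9 * x + rng.standard_normal(n_particles)      # propagate
                logw = -0.5 * (y - x) ** 2                          # Gaussian likelihood
                w = np.exp(logw - logw.max())
                w /= w.sum()                                        # importance weights
                means.append(np.sum(w * x))                         # filtered mean
                x = x[rng.choice(n_particles, n_particles, p=w)]    # resample
            return np.array(means)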

  4. Who Writes Carlos Bulosan?

    Directory of Open Access Journals (Sweden)

    Charlie Samuya Veric

    2001-12-01

    The importance of Carlos Bulosan in Filipino and Filipino-American radical history and literature is indisputable. His eminence spans the Pacific, and he is known, diversely, as a radical poet, fictionist, novelist, and labor organizer. Author of the canonical America Is in the Heart, Bulosan is celebrated for chronicling the conditions in America in his time, such as racism and unemployment. In the history of criticism on Bulosan's life and work, however, there is an undeclared general consensus that views Bulosan and his work as coherent permanent texts of radicalism and anti-imperialism. Central to the existence of such a tradition of critical reception are the generations of critics who, in more ways than one, control the discourse on and of Carlos Bulosan. This essay inquires into the sphere of the critical reception that orders, for our time and for the time ahead, the reading and interpretation of Bulosan. What eye and seeing, the essay asks, determine the perception of Bulosan as the angel of radicalism? What is obscured in constructing Bulosan as an immutable figure of the political? What light does the reader conceive when the personal is brought into the open and situated against the political? The essay explores the answers to these questions in Bulosan's loving letters to various friends, strangers, and white American women. The presence of these interrogations, the essay believes, will ultimately secure the continuing importance of Carlos Bulosan to radical literature and history.

  5. CERN honours Carlo Rubbia

    CERN Multimedia

    2009-01-01

    On 7 April CERN will be holding a symposium to mark the 75th birthday of Carlo Rubbia, who shared the 1984 Nobel Prize for Physics with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. Following a presentation by Rolf Heuer, lectures will be given by eminent speakers on areas of science to which Carlo Rubbia has made decisive contributions. Michel Spiro, Director of the French National Institute of Nuclear and Particle Physics (IN2P3) of the CNRS, Lyn Evans, sLHC Project Leader, and Alan Astbury of the TRIUMF Laboratory will talk about the physics of the weak interaction and the discovery of the W and Z bosons. Former CERN Director-General Herwig Schopper will lecture on CERN’s accelerators from LEP to the LHC. Giovanni Bignami, former President of the Italian Space Agency, will speak about his work with Carlo Rubbia. Finally, Hans Joachim Schellnhuber of the Potsdam Institute for Climate Research and Sven Kul...

  6. CERN honours Carlo Rubbia

    CERN Multimedia

    2009-01-01

    On 7 April CERN will be holding a symposium to mark the 75th birthday of Carlo Rubbia, who shared the 1984 Nobel Prize for Physics with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. Following a presentation by Rolf Heuer, lectures will be given by eminent speakers on areas of science to which Carlo Rubbia has made decisive contributions. Michel Spiro, Director of the French National Institute of Nuclear and Particle Physics (IN2P3) of the CNRS, Lyn Evans, sLHC Project Leader, and Alan Astbury of the TRIUMF Laboratory will talk about the physics of the weak interaction and the discovery of the W and Z bosons. Former CERN Director-General Herwig Schopper will lecture on CERN’s accelerators from LEP to the LHC. Giovanni Bignami, former President of the Italian Space Agency and Professor at the IUSS School for Advanced Studies in Pavia will speak about his work with Carlo Rubbia. Finally, Hans Joachim Sch...

  7. Photon personal dosemeter calibration based on ISO 4037-3

    International Nuclear Information System (INIS)

    The aim of this paper is to present the influence of this new standard compared with the previously approved calibration protocol. The former calibration protocol used a 30 cm x 30 cm x 15 cm PMMA phantom and included back-scatter correction factors estimated from Monte Carlo calculations. Previous studies (Ginjaume et al., 2001) had shown, for a specific type of dosemeter, that the differences between the two calibrations were very small, within 2%. Within the framework of the 2001 national intercomparison, this work planned to extend those preliminary conclusions by studying the influence of the calibration procedure on a larger set of dosemeters, representative of the different Spanish dosimetry services. We were also interested in confirming that the new calibration procedure would not influence the general performance of the services and the corresponding registered doses

  8. TOD to TTP calibration

    Science.gov (United States)

    Bijl, Piet; Reynolds, Joseph P.; Vos, Wouter K.; Hogervorst, Maarten A.; Fanning, Jonathan D.

    2011-05-01

    The TTP (Targeting Task Performance) metric, developed at NVESD, is the current standard US Army model to predict EO/IR Target Acquisition performance. This model, however, does not have a corresponding lab or field test to empirically assess the performance of a camera system. The TOD (Triangle Orientation Discrimination) method, developed at TNO in The Netherlands, provides such a measurement. In this study, we make a direct comparison between TOD performance for a range of sensors and the extensive historical US observer performance database built to develop and calibrate the TTP metric. The US perception data were collected by military personnel performing an identification task on a standard 12-target, 12-aspect tactical vehicle image set that was processed through simulated sensors for which the most fundamental sensor parameters such as blur, sampling, and spatial and temporal noise were varied. In the present study, we measured TOD sensor performance using exactly the same sensors processing a set of TOD triangle test patterns. The study shows that good overall agreement is obtained when the ratio between the target characteristic size and the TOD test pattern size at threshold equals 6.3. Note that this number is purely based on empirical data without any intermediate modeling. The calibration of the TOD to the TTP is highly beneficial to the sensor modeling and testing community for a variety of reasons. These include: i) a connection between requirement specification and acceptance testing, and ii) a very efficient method to quickly validate or extend the TTP range prediction model to new systems and tasks.

  9. Parallel Calibration for Sensor Array Radio Interferometers

    CERN Document Server

    Brossard, Martin; Pesavento, Marius; Boyer, Rémy; Larzabal, Pascal; Wijnholds, Stefan J

    2016-01-01

    In order to meet the theoretically achievable imaging performance, calibration of modern radio interferometers is a mandatory challenge, especially at low frequencies. In this perspective, we propose a novel parallel iterative multi-wavelength calibration algorithm. The proposed algorithm estimates the apparent directions of the calibration sources, the directional and undirectional complex gains of the array elements and their noise powers, with a reasonable computational complexity. Furthermore, the algorithm takes into account the specific variation of the aforementioned parameter values across wavelength. Realistic numerical simulations reveal that the proposed scheme outperforms the mono-wavelength calibration scheme and approaches the derived constrained Cramér-Rao bound even in the presence of non-calibration sources at unknown directions, in a computationally efficient manner.

  10. Residual gas analyzer calibration

    Science.gov (United States)

    Lilienkamp, R. H.

    1972-01-01

    A technique which employs known gas mixtures to calibrate the residual gas analyzer (RGA) is described. The mass spectra from the RGA are recorded for each gas mixture. These mass spectra data and the mixture composition data each form a matrix, and from the two matrices the calibration matrix may be computed. The matrix mathematics requires the number of calibration gas mixtures to be equal to or greater than the number of gases included in the calibration. This technique was evaluated using a mathematical model of an RGA to generate the mass spectra; the model included shot-noise errors in the mass spectra. Errors in the gas concentrations were also included in the evaluation. The effects of these errors were studied by varying their magnitudes and comparing the resulting calibrations. Several methods of evaluating an actual calibration are presented. The effects of the number of gases included, the composition of the calibration mixtures, and the number of mixtures used are discussed.
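
    A minimal sketch of the matrix computation described above, with synthetic data standing in for real RGA measurements: the calibration matrix is obtained by least squares from the spectra and composition matrices, which is why at least as many calibration mixtures as gases are required.

        import numpy as np

        rng = np.random.default_rng(7)
        n_gases, n_channels, n_mixtures = 3, 12, 5    # n_mixtures >= n_gases

        # Hypothetical ground truth: per-gas channel signatures (cracking patterns)
        K_true = rng.uniform(size=(n_gases, n_channels))
        compositions = rng.uniform(size=(n_mixtures, n_gases))  # known gas mixtures
        spectra = compositions @ K_true                         # recorded RGA spectra
        spectra += 0.01 * rng.standard_normal(spectra.shape)    # shot-noise-like errors

        # Calibration matrix from the two data matrices, by least squares
        K, *_ = np.linalg.lstsq(compositions, spectra, rcond=None)

        # Apply the calibration: recover gas amounts from an unknown spectrum
        unknown = np.array([0.5, 0.2, 0.3]) @ K_true
        amounts, *_ = np.linalg.lstsq(K.T, unknown, rcond=None)
        print(amounts)    # approximately [0.5, 0.2, 0.3]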

  11. Adaptive Hamiltonian and Riemann Manifold Monte Carlo Samplers

    OpenAIRE

    Wang, Ziyu; MOHAMED, SHAKIR; De Freitas, Nando

    2013-01-01

    In this paper we address the widely-experienced difficulty in tuning Hamiltonian-based Monte Carlo samplers. We develop an algorithm that adapts Hamiltonian and Riemann manifold Hamiltonian Monte Carlo samplers using Bayesian optimization, allowing for infinite adaptation of the parameters of these samplers. We show that the resulting sampling algorithms are ergodic, and that the use of our adaptive algorithms makes it easy to obtain more efficient samplers, in some ca...

  12. Projector Calibration Using a Markerless Plane

    OpenAIRE

    Draréni, Jamil; Roy, Sébastien; Sturm, Peter

    2009-01-01

    In this paper we address the problem of geometric video projector calibration using a markerless planar surface (wall) and a partially calibrated camera. Instead of using control points to infer the camera-wall orientation, we find this relation by efficiently sampling the hemisphere of possible orientations. This process is so fast that even the focal length of the camera can be estimated during the sampling process. Hence, physical grids and full knowledge of camera parameters are no longer necess...

  13. Development of methodology for characterization of cartridge filters from the IEA-R1 using the Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Priscila

    2014-07-01

    The Cuno filter is part of the water processing circuit of the IEA-R1 reactor and, when saturated, it is replaced and becomes radioactive waste, which must be managed. In this work, the primary characterization of the Cuno filter of the IEA-R1 nuclear reactor at IPEN was carried out using gamma spectrometry associated with the Monte Carlo method. The gamma spectrometry was performed using a hyperpure germanium (HPGe) detector. The germanium crystal represents the active detection volume of the HPGe detector, and it has a region called the dead layer or inactive layer. A difference between theoretical and experimental values when obtaining the efficiency curve of these detectors has been reported in the literature. In this study we used the MCNP-4C code to obtain the detector calibration efficiency for the geometry of the Cuno filter, and the influence of the dead layer and of the cascade summing effect on the HPGe detector were studied. The dead layer values were corrected by varying the thickness and the radius of the germanium crystal. The detector has 75.83 cm³ of active detection volume, according to information provided by the manufacturer. Nevertheless, the results showed that the actual value of the active volume is less than the one specified, with the dead layer representing 16% of the total volume of the crystal. Analysis of the Cuno filter by gamma spectrometry enabled the identification of energy peaks, from which three radionuclides were identified in the filter: 108mAg, 110mAg and 60Co. From the calibration efficiency obtained by the Monte Carlo method, the activity estimated for these radionuclides is of the order of MBq. (author)

  14. Study of the response of an ORTEC GMX45 HPGe detector with a multi-radionuclide volume source using Monte Carlo simulations.

    Science.gov (United States)

    Saraiva, A; Oliveira, C; Reis, M; Portugal, L; Paiva, I; Cruz, C

    2016-07-01

    A model of an n-type ORTEC GMX45 HPGe detector was created using the MCNPX and MCNP-CP codes. In order to validate the model, experimental efficiencies were compared with the Monte Carlo simulation results. The reference source is a NIST-traceable multi-gamma volume source in a water-equivalent epoxy resin matrix (1.15 g cm(-3) density) containing several radionuclides: (210)Pb, (241)Am, (137)Cs and (60)Co in a cylindrical container. Two distances from the source bottom to the end-cap front surface of the detector were considered; the efficiency for the nearer distance is higher than for the longer one. The relative difference between the measured and the simulated full-energy peak efficiency is less than 4.0%, except for the 46.5 keV energy peak of (210)Pb at the longer distance (6.5%), allowing the model to be considered validated. In the absence of adequate standard calibration sources, efficiency and efficiency transfer factors for geometry deviations and matrix effects can be accurately computed using Monte Carlo methods, even when true coincidence summing can occur, as is the case when the (60)Co radioisotope is present in the source. PMID:27131096

  15. Monte-Carlo Simulation for PDC-Based Optical CDMA System

    OpenAIRE

    FAHIM AZIZ UMRANI; AHSAN AHMED URSANI; ABDUL WAHEED UMRANI

    2010-01-01

    This paper presents the Monte-Carlo simulation of Optical CDMA (Code Division Multiple Access) systems, and analyse its performance in terms of the BER (Bit Error Rate). The spreading sequence chosen for CDMA is Perfect Difference Codes. Furthermore, this paper derives the expressions of noise variances from first principles to calibrate the noise for both bipolar (electrical domain) and unipolar (optical domain) signalling required for Monte-Carlo simulation. The simulated res...

  16. Development of methodology for characterization of cartridge filters from the IEA-R1 using the Monte Carlo method

    International Nuclear Information System (INIS)

    The Cuno filter is part of the water processing circuit of the IEA-R1 reactor and, when saturated, it is replaced and becomes radioactive waste, which must be managed. In this work, the primary characterization of the Cuno filter of the IEA-R1 nuclear reactor at IPEN was carried out using gamma spectrometry associated with the Monte Carlo method. The gamma spectrometry was performed using a hyperpure germanium (HPGe) detector. The germanium crystal represents the active detection volume of the HPGe detector, and it has a region called the dead layer or inactive layer. A difference between theoretical and experimental values when obtaining the efficiency curve of these detectors has been reported in the literature. In this study we used the MCNP-4C code to obtain the detector calibration efficiency for the geometry of the Cuno filter, and the influence of the dead layer and of the cascade summing effect on the HPGe detector were studied. The dead layer values were corrected by varying the thickness and the radius of the germanium crystal. The detector has 75.83 cm3 of active detection volume, according to information provided by the manufacturer. Nevertheless, the results showed that the actual value of the active volume is less than the one specified, with the dead layer representing 16% of the total volume of the crystal. Analysis of the Cuno filter by gamma spectrometry enabled the identification of energy peaks, from which three radionuclides were identified in the filter: 108mAg, 110mAg and 60Co. From the calibration efficiency obtained by the Monte Carlo method, the activity estimated for these radionuclides is of the order of MBq. (author)

  17. An integrated hydrological, ecological, and economical (HEE) modeling system for assessing water resources and ecosystem production: calibration and validation in the upper and middle parts of the Yellow River Basin, China

    Science.gov (United States)

    Li, Xianglian; Yang, Xiusheng; Gao, Wei

    2006-08-01

    Effective management of water resources in arid and semi-arid areas demands studies that cross the disciplinary boundaries of the natural and social sciences. An integrated Hydrological, Ecological and Economical (HEE) modeling system at regional scale has been developed to assess water resources use and ecosystem production in arid and semi-arid areas. As a physically-based distributed modeling system, the HEE modeling system requires various input parameters, including those for soil, vegetation, topography, groundwater, and water and agricultural management at different spatial levels. A successful implementation of the modeling system depends highly on how well it is calibrated. This paper presents an automatic calibration procedure for the HEE modeling system and its test in the upper and middle parts of the Yellow River basin. Prior to calibration, a comprehensive literature investigation and sensitivity analysis were performed to identify the important parameters for calibration. The automatic calibration procedure was based on a conventional Monte Carlo sampling method together with a multi-objective criterion for calibration over multiple sites and multiple outputs. The multi-objective function consisted of optimizing the statistics of the mean absolute relative error (MARE), the Nash-Sutcliffe model efficiency coefficient (E_NS), and the coefficient of determination (R²). The modeling system was calibrated against streamflow and harvest yield data from multiple sites/provinces within the basin over 2001 by using the proposed automatic procedure, and validated over 1993-1995. Over the calibration period, the mean absolute relative error of simulated daily streamflow was within 7%, while the statistics R² and E_NS of daily streamflow were 0.61 and 0.49, respectively. The average simulated harvest yield over the calibration period was about 9.2% less than that of the observations. Overall, the calibration results have indicated that the calibration procedures developed in this study can efficiently calibrate
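
    The three calibration statistics named above can be computed as follows; how the HEE procedure weights them into a single objective is not stated in the abstract, so the equal weighting in the combined score is purely an assumption of this sketch.

        import numpy as np

        def calibration_statistics(obs, sim):
            """Return (MARE, E_NS, R2) for observed vs. simulated series."""
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            mare = np.mean(np.abs(obs - sim) / obs)     # mean absolute relative error
            e_ns = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
            r2 = np.corrcoef(obs, sim)[0, 1] ** 2       # coefficient of determination
            return mare, e_ns, r2

        def multi_objective_score(obs, sim):
            """Single score to minimize; equal weighting is an assumption."""
            mare, e_ns, r2 = calibration_statistics(obs, sim)
            return mare + (1.0 - e_ns) + (1.0 - r2)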

  18. The role of research efficiency in the evolution of scientific productivity and impact: An agent-based model

    Science.gov (United States)

    You, Zhi-Qiang; Han, Xiao-Pu; Hadzibeganovic, Tarik

    2016-02-01

    We introduce an agent-based model to investigate the effects of production efficiency (PE) and hot field tracing capability (HFTC) on productivity and impact of scientists embedded in a competitive research environment. Agents compete to publish and become cited by occupying the nodes of a citation network calibrated by real-world citation datasets. Our Monte-Carlo simulations reveal that differences in individual performance are strongly related to PE, whereas HFTC alone cannot provide sustainable academic careers under intensely competitive conditions. Remarkably, the negative effect of high competition levels on productivity can be buffered by elevated research efficiency if simultaneously HFTC is sufficiently low.

  19. RF impedance measurement calibration

    International Nuclear Information System (INIS)

    The intent of this note is not to explain all of the available calibration methods in detail. Instead, we will focus on the calibration methods of interest for RF impedance coupling measurements and attempt to explain: (1) the standards and measurements necessary for the various calibration techniques; (2) the advantages and disadvantages of each technique; (3) the mathematical manipulations that need to be applied to the measured standards and devices; and (4) an outline of the steps needed for writing a calibration routine that operates from a remote computer. For further details of the various techniques presented in this note, the reader should consult the references

  20. The COS Calibration Pipeline

    Science.gov (United States)

    Hodge, Philip E.; Keyes, C.; Kaiser, M.

    2007-12-01

    The COS calibration pipeline (CALCOS) includes three main components: basic calibration, wavelength calibration, and spectral extraction. Calibration of modes using the far ultraviolet (FUV) and near ultraviolet (NUV) detectors shares a common structure, although the individual reference files differ and there are some additional steps for the FUV channel. The pipeline is designed to calibrate data acquired in either ACCUM or time-tag mode. The basic calibration includes pulse-height filtering and geometric correction for FUV, and flat-field, deadtime, and Doppler correction for both detectors. Wavelength calibration can be done either by using separate lamp exposures or by taking several short lamp exposures concurrently with a science exposure. For time-tag data, the latter mode ("tagflash") will allow better correction of potential drift of the spectrum on the detector. One-dimensional spectra will be extracted and saved in a FITS binary table. Separate columns will be used for the flux-calibrated spectrum, the error estimate, and the associated wavelengths. CALCOS is written in Python, with some functions in C. It is similar in style to other HST pipeline code in that it uses an association table to specify which files are to be included, and the calibration steps to be performed and the reference files to be used are specified by header keywords. Currently, in conjunction with the Instrument Definition Team (led by J. Green), the ground-based reference files are being refined, delivered, and tested with the pipeline.

  1. Energy calibration via correlation

    Science.gov (United States)

    Maier, Daniel; Limousin, Olivier

    2016-03-01

    The main task of an energy calibration is to find a relation between pulse-height values and the corresponding energies. Doing this for each pulse-height channel individually requires an elaborate input spectrum with excellent counting statistics and a sophisticated data analysis. This work presents an easy-to-handle energy calibration process which can operate reliably on calibration measurements with low counting statistics. The method uses a parameter-based model for the energy calibration and determines the optimal parameters of the model by finding the best correlation between the measured pulse-height spectrum and multiple synthetic pulse-height spectra constructed with different sets of calibration parameters. A CdTe-based semiconductor detector and the line emissions of an 241Am source were used to test the performance of the correlation method in terms of systematic calibration errors for different counting statistics. Up to energies of 60 keV, systematic errors were measured to be less than ~ 0.1 keV. Energy calibration via correlation can be applied to any kind of calibration spectrum and shows robust behavior at low counting statistics. It enables a fast and accurate calibration that can be used to monitor the spectroscopic properties of a detector system in near real time.
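
    A hedged sketch of the correlation approach: assume a linear channel-to-energy model, build a synthetic pulse-height spectrum from the known 241Am line energies for each candidate (gain, offset) pair, and keep the pair with the highest Pearson correlation against the measurement; the linear model, Gaussian line shapes and grid search are illustrative simplifications.

        import numpy as np

        def synthetic_spectrum(channels, gain, offset, lines_kev, fwhm_kev=1.0):
            """Synthetic pulse-height spectrum for the model E = gain*channel + offset."""
            energies = gain * channels + offset
            sigma = fwhm_kev / 2.355                    # FWHM to Gaussian sigma
            return sum(np.exp(-0.5 * ((energies - e) / sigma) ** 2) for e in lines_kev)

        def calibrate_by_correlation(measured, lines_kev, gains, offsets):
            """Grid search for the (gain, offset) maximizing the correlation."""
            channels = np.arange(len(measured))
            best, best_r = None, -np.inf
            for g in gains:
                for o in offsets:
                    synth = synthetic_spectrum(channels, g, o, lines_kev)
                    if synth.std() == 0.0:              # no line falls in range
                        continue
                    r = np.corrcoef(measured, synth)[0, 1]
                    if r > best_r:
                        best, best_r = (g, o), r
            return best, best_r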

  2. Quantum Gibbs ensemble Monte Carlo

    International Nuclear Information System (INIS)

    We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of 4He in two dimensions

  3. Uncertainty budget for a whole body counter in the scan geometry and computer simulation of the calibration phantoms

    International Nuclear Information System (INIS)

    At the Austrian Research Centers Seibersdorf (ARCS), a whole body counter (WBC) in the scan geometry is used to perform routine measurements for the determination of the radioactive intake of workers. The calibration of the WBC is made using bottle phantoms with a homogeneous activity distribution. The same calibration procedures have been simulated using the Monte Carlo N-Particle (MCNP) code and FLUKA, and the resulting full-energy peak efficiencies for eight energies and five phantoms have been compared with the experimental results. The deviation between the experimental and simulation results is within 10%. Furthermore, uncertainty budget evaluations have been performed to find out which parameters make substantial contributions to these differences; statistical errors of the Monte Carlo simulation, uncertainties in the cross-section tables and differences due to geometrical considerations have been taken into account. Comparisons have also been performed with inhomogeneous distributions, in which the activity is concentrated only in certain parts of the body (such as the head, lungs, arms and legs); a maximum deviation of 43% from the homogeneous case was found when the activity is concentrated in the arms. (authors)

  4. Multilevel Monte Carlo Approaches for Numerical Homogenization

    KAUST Repository

    Efendiev, Yalchin R.

    2015-10-01

    In this article, we study the application of multilevel Monte Carlo (MLMC) approaches to numerical random homogenization. Our objective is to compute the expectation of some functionals of the homogenized coefficients, or of the homogenized solutions. This is accomplished within MLMC by considering different sizes of representative volumes (RVEs). Many inexpensive computations with the smallest RVE size are combined with fewer expensive computations performed on larger RVEs. Likewise, when it comes to homogenized solutions, different levels of coarse-grid meshes are used to solve the homogenized equation. We show that, by carefully selecting the number of realizations at each level, we can achieve a speed-up in the computations in comparison to a standard Monte Carlo method. Numerical results are presented for both one-dimensional and two-dimensional test-cases that illustrate the efficiency of the approach.
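
    A generic sketch of the MLMC telescoping estimator E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]; the level-dependent solver (RVE size or coarse-grid mesh) is stubbed as a user-supplied function, and coupling the fine and coarse evaluations through a shared random seed is what keeps the variance of the correction terms small.

        import numpy as np

        def mlmc_estimate(sampler, n_per_level, seed=0):
            """Multilevel Monte Carlo estimator.

            sampler(level, rng) returns the quantity of interest computed at
            `level` (e.g. on a given RVE size or mesh) for one random realization;
            n_per_level is typically decreasing, e.g. [1000, 100, 10].
            """
            rng = np.random.default_rng(seed)
            total = 0.0
            for level, n in enumerate(n_per_level):
                diffs = []
                for _ in range(n):
                    s = rng.integers(2**32)   # same realization on both levels
                    fine = sampler(level, np.random.default_rng(s))
                    coarse = sampler(level - 1, np.random.default_rng(s)) if level else 0.0
                    diffs.append(fine - coarse)
                total += np.mean(diffs)
            return total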

  5. Coevolution Based Adaptive Monte Carlo Localization (CEAMCL)

    Directory of Open Access Journals (Sweden)

    Luo Ronghua

    2008-11-01

    An adaptive Monte Carlo localization algorithm based on the coevolution mechanism of ecological species is proposed. Samples are clustered into species, each of which represents a hypothesis of the robot's pose. Since the coevolution between the species ensures that multiple distinct hypotheses can be tracked stably, the problem of premature convergence when using MCL in highly symmetric environments can be solved. The sample size can be adjusted adaptively over time according to the uncertainty of the robot's pose by using the population growth model. In addition, by using the crossover and mutation operators of evolutionary computation, intra-species evolution can drive the samples towards the regions where the desired posterior density is large, so that a small set of samples can represent the desired density well enough to allow precise localization. The new algorithm is termed coevolution-based adaptive Monte Carlo localization (CEAMCL). Experiments have been carried out to prove the efficiency of the new localization algorithm.

  6. Calibrating Gyrochronology using Kepler Asteroseismic targets

    CERN Document Server

    Angus, Ruth; Foreman-Mackey, Daniel; McQuillan, Amy

    2015-01-01

    Among the available methods for dating stars, gyrochronology is a powerful one because it requires knowledge of only the star's mass and rotation period. Gyrochronology relations have previously been calibrated using young clusters, with the Sun providing the only age dependence, and are therefore poorly calibrated at late ages. We used rotation period measurements of 310 Kepler stars with asteroseismic ages, 50 stars from the Hyades and Coma Berenices clusters and 6 field stars (including the Sun) with precise age measurements to calibrate the gyrochronology relation, whilst fully accounting for measurement uncertainties in all observable quantities. We calibrated a relation of the form $P = a\,A^n\times(B-V-c)^b$, where $P$ is rotation period in days, $A$ is age in Myr, $B$ and $V$ are magnitudes and $a$, $b$ and $n$ are the free parameters of our model. We found $a = 0.40^{+0.3}_{-0.05}$, $b = 0.31^{+0.05}_{-0.02}$ and $n = 0.55^{+0.02}_{-0.09}$. Markov Chain Monte Carlo methods were used to explore the posteri...

  7. Quasi Monte Carlo methods for optimization models of the energy industry with pricing and load processes

    International Nuclear Information System (INIS)

    We discuss progress in quasi Monte Carlo methods for the numerical calculation of integrals or expected values and justify why these methods are more efficient than classic Monte Carlo methods. Quasi Monte Carlo methods are found to be particularly efficient if the integrands have a low effective dimension. We therefore also discuss the concept of effective dimension and show, using the example of a stochastic optimization model from the energy industry, that such models can possess a low effective dimension. Modern quasi Monte Carlo methods are therefore very promising for such models.
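
    A small comparison in the spirit of the discussion above, assuming scipy is available: plain Monte Carlo versus scrambled Sobol' quasi Monte Carlo on a smooth product integrand with known mean 1; the integrand is an arbitrary example, not one of the energy-industry models.

        import numpy as np
        from scipy.stats import qmc

        d, n = 8, 2**12

        def f(u):                    # smooth integrand on [0,1]^d with exact mean 1.0
            return np.prod(1.0 + 0.5 * (u - 0.5), axis=1)

        rng = np.random.default_rng(0)
        mc_est = f(rng.uniform(size=(n, d))).mean()          # plain Monte Carlo
        sobol = qmc.Sobol(d=d, scramble=True, seed=0)
        qmc_est = f(sobol.random(n)).mean()                  # quasi Monte Carlo
        print(abs(mc_est - 1.0), abs(qmc_est - 1.0))         # QMC error is typically smaller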

  8. Absolute calibration of photomultiplier based detectors - difficulties and uncertainties

    International Nuclear Information System (INIS)

    Photomultiplier manufacturers can provide a calibration of quantum efficiency over a range of wavelengths with an accuracy of up to 2%. To convert these figures to absolute counting efficiency requires knowledge of photomultiplier collection efficiency, F. Traditional methods for determining F are discussed with emphasis on sources of error. Light sources emitting at a known photon rate allow the absolute quantum efficiency to be determined directly. It is important in all attempts at absolute calibration to appreciate the conditions which manufacturers apply when calibrating photomultipliers

  9. Photogrammetric camera calibration

    Science.gov (United States)

    Tayman, W.P.; Ziemann, H.

    1984-01-01

    Section 2 (Calibration) of the document "Recommended Procedures for Calibrating Photogrammetric Cameras and Related Optical Tests" from the International Archives of Photogrammetry, Vol. XIII, Part 4, is reviewed in the light of recent practical work, and suggestions for changes are made. These suggestions are intended as a basis for further discussion. © 1984.

  10. Calculation of HPGe efficiency for environmental samples: comparison of EFFTRAN and GEANT4

    Energy Technology Data Exchange (ETDEWEB)

    Nikolic, Jelena, E-mail: jnikolic@vinca.rs [University of Belgrade Institut for Nuclear Sciences Vinča, Mike Petrovica Alasa 12-16, 11001 Belgrade (Serbia); Vidmar, Tim [SCK.CEN, Belgian Nuclear Research Centre, Boeretang 200, BE-2400 Mol (Belgium); Jokovic, Dejan [University of Belgrade, Institute for Physics, Pregrevica 18, Belgrade (Serbia); Rajacic, Milica; Todorovic, Dragana [University of Belgrade Institut for Nuclear Sciences Vinča, Mike Petrovica Alasa 12-16, 11001 Belgrade (Serbia)

    2014-11-01

    Determination of the full-energy peak efficiency is one of the most important tasks that has to be performed before gamma spectrometry of environmental samples. Many methods, including measurement of specific reference materials, Monte Carlo simulations, efficiency transfer and semi-empirical calculations, have been developed to complete this task. In this work, Monte Carlo simulation based on the GEANT4 simulation package and the EFFTRAN efficiency transfer software were applied to the efficiency calibration of three detectors readily used in the Environment and Radiation Protection Laboratory of the Institute for Nuclear Sciences Vinca for the measurement of environmental samples. Efficiencies were calculated for water, soil and aerosol samples. The aim of this paper is to perform efficiency calculations for HPGe detectors using both GEANT4 simulation and the EFFTRAN efficiency transfer software and to compare the obtained results with experimental results. This comparison should show how well the two methods agree with the experimentally obtained efficiencies of our measurement system and in which part of the spectrum discrepancies appear. Detailed knowledge of the accuracy and precision of both methods should enable us to choose an appropriate method for each situation that presents itself in our and other laboratories on a daily basis.

  11. Calibration and measurement of 210Pb using two independent techniques

    International Nuclear Information System (INIS)

    An experimental procedure has been developed for the rapid and accurate determination of the activity concentration of 210Pb in sediments by liquid scintillation counting (LSC). Additionally, an alternative technique using γ-spectrometry and Monte Carlo simulation has been developed. A radiochemical procedure based on radium and barium sulphate co-precipitation has been applied to isolate the Pb isotopes. 210Pb activity measurements were made in a low-background scintillation spectrometer Quantulus 1220. A calibration of the liquid scintillation spectrometer, including its α/β discrimination system, has been made in order to minimize background and, additionally, some improvements are suggested for the calculation of the 210Pb activity concentration, taking into account that the 210Pb counting efficiency cannot be accurately determined. Therefore, the use of an effective radiochemical yield, which can be empirically evaluated, is proposed. The 210Pb activity concentration in riverbed sediments from an area affected by NORM wastes has been determined using both of the proposed methods. Results using γ-spectrometry and LSC are compared with the results obtained following the indirect α-spectrometry (210Po) method

  12. Sandia WIPP calibration traceability

    Energy Technology Data Exchange (ETDEWEB)

    Schuhen, M.D. [Sandia National Labs., Albuquerque, NM (United States); Dean, T.A. [RE/SPEC, Inc., Albuquerque, NM (United States)

    1996-05-01

    This report summarizes the work performed to establish calibration traceability for the instrumentation used by Sandia National Laboratories at the Waste Isolation Pilot Plant (WIPP) during testing from 1980-1985. Identifying the calibration traceability is an important part of establishing a pedigree for the data and is part of the qualification of existing data. In general, the requirement states that the calibration of Measuring and Test equipment must have a valid relationship to nationally recognized standards or the basis for the calibration must be documented. Sandia recognized that just establishing calibration traceability would not necessarily mean that all QA requirements were met during the certification of test instrumentation. To address this concern, the assessment was expanded to include various activities.

  13. Sandia WIPP calibration traceability

    International Nuclear Information System (INIS)

    This report summarizes the work performed to establish calibration traceability for the instrumentation used by Sandia National Laboratories at the Waste Isolation Pilot Plant (WIPP) during testing from 1980-1985. Identifying the calibration traceability is an important part of establishing a pedigree for the data and is part of the qualification of existing data. In general, the requirement states that the calibration of Measuring and Test equipment must have a valid relationship to nationally recognized standards or the basis for the calibration must be documented. Sandia recognized that just establishing calibration traceability would not necessarily mean that all QA requirements were met during the certification of test instrumentation. To address this concern, the assessment was expanded to include various activities

  14. Development and calibration of a real-time airborne radioactivity monitor using direct gamma-ray spectrometry with two scintillation detectors

    International Nuclear Information System (INIS)

    The implementation of in-situ gamma-ray spectrometry in an automatic real-time environmental radiation surveillance network can help to identify and characterize abnormal radioactivity increases quickly. For this reason, a Real-time Airborne Radioactivity Monitor using direct gamma-ray spectrometry with two scintillation detectors (RARM-D2) was developed. The two scintillation detectors in the RARM-D2 are strategically shielded with Pb to permit the separate measurement of airborne isotopes with respect to deposited isotopes. In this paper, we describe the main aspects of the development and calibration of the RARM-D2 when using NaI(Tl) or LaBr3(Ce) detectors. The calibration of the monitor was performed experimentally, with the exception of the efficiency curve, which was set using Monte Carlo (MC) simulations with the EGS5 code system. Prior to setting the efficiency curve, the effect of the radioactive source term size on the efficiency calculations was studied for the gamma rays from 137Cs. Finally, to study the measurement capabilities of the RARM-D2, the minimum detectable activity concentrations for 131I and 137Cs were calculated for typical spectra at different integration times. - Highlights: • A real-time airborne radioactivity monitor was developed. • The monitor is formed using two scintillation detectors for gamma-ray spectrometry. • The detectors are shielded with Pb. One detector points up and the other down. • The monitor was calibrated using experimental data and Monte Carlo simulations. • The efficiency calculations and MDAC values are given

  15. Monte Carlo techniques

    International Nuclear Information System (INIS)

    The course ''Monte Carlo Techniques'' will try to give a general overview of how to build up a method, based on a given theory, that allows one to compare the outcome of an experiment with that theory. Concepts related to the construction of the method, such as random variables, distributions of random variables, generation of random variables, and random-based numerical methods, will be introduced in this course. Examples of some of the current theories in High Energy Physics describing the e+e- annihilation processes (QED, Electro-Weak, QCD) will also be briefly introduced. A second step in the employment of this method is related to the detector: the interactions that a particle can undergo on its way through the detector, as well as the response of the different materials that compose the detector, will be discussed in this course. An example of a detector of the LEP era, in which these techniques are being applied, will close the course. (orig.)

  16. Absolute angular calibration of a submarine km3 neutrino telescope

    International Nuclear Information System (INIS)

    A requirement for a neutrino telescope is the ability to resolve point sources of neutrinos. In order to understand its resolving power, a way to perform absolute angular calibration with muons is required. Muons produced by cosmic rays in the atmosphere offer an abundant calibration source. By covering a surface vessel with 200 modules of 5 m2 plastic scintillator, a surface air shower array can be set up. Running this array in coincidence with a deep-sea km3-size neutrino detector, where a coincidence is defined by the absolute clock timing stamp of each event, would allow absolute angular calibration to be performed. Monte Carlo results simulating the absolute angular calibration of the km3-size neutrino detector will be presented. Future work and directions will be discussed.

  17. MCMini: Monte Carlo on GPGPU

    Energy Technology Data Exchange (ETDEWEB)

    Marcus, Ryan C. [Los Alamos National Laboratory

    2012-07-25

    MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

  18. Monte Carlo methods for the self-avoiding walk

    Energy Technology Data Exchange (ETDEWEB)

    Janse van Rensburg, E J [Department of Mathematics and Statistics, York University, Toronto, ON M3J 1P3 (Canada)], E-mail: rensburg@yorku.ca

    2009-08-14

    The numerical simulation of self-avoiding walks remains a significant component in the study of random objects in lattices. In this review, I give a comprehensive overview of the current state of Monte Carlo simulations of models of self-avoiding walks. The self-avoiding walk model is revisited, and the motivations for Monte Carlo simulations of this model are discussed. Efficient sampling of self-avoiding walks remains an elusive objective, but significant progress has been made over the last three decades. The model still poses challenging numerical questions, however, and I review specific Monte Carlo methods for improved sampling, including general Monte Carlo techniques such as Metropolis sampling, umbrella sampling and multiple Markov Chain sampling. In addition, specific static and dynamic algorithms for walks are presented, and I give an overview of recent innovations in this field, including algorithms such as flatPERM, flatGARM and flatGAS. (topical review)
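
    A concrete way to see why efficient sampling of self-avoiding walks is elusive is simple (rejection) sampling, whose acceptance rate decays exponentially with walk length. A minimal sketch on the square lattice follows; this is the naive baseline, not one of the advanced algorithms (flatPERM, flatGARM, flatGAS) the review covers.

```python
import random

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # square-lattice moves

def try_self_avoiding_walk(n):
    """Grow an n-step random walk; return it only if it never revisits a site."""
    pos = (0, 0)
    visited = {pos}
    walk = [pos]
    for _ in range(n):
        dx, dy = random.choice(STEPS)
        pos = (pos[0] + dx, pos[1] + dy)
        if pos in visited:            # self-intersection: reject the whole walk
            return None
        visited.add(pos)
        walk.append(pos)
    return walk

# The acceptance rate falls off exponentially with the walk length n,
# which is what makes simple sampling impractical for long walks.
n, trials = 10, 20000
accepted = sum(try_self_avoiding_walk(n) is not None for _ in range(trials))
print(f"acceptance rate for n={n}: {accepted / trials:.4f}")
```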

  19. Monte Carlo methods for the self-avoiding walk

    International Nuclear Information System (INIS)

    The numerical simulation of self-avoiding walks remains a significant component in the study of random objects in lattices. In this review, I give a comprehensive overview of the current state of Monte Carlo simulations of models of self-avoiding walks. The self-avoiding walk model is revisited, and the motivations for Monte Carlo simulations of this model are discussed. Efficient sampling of self-avoiding walks remains an elusive objective, but significant progress has been made over the last three decades. The model still poses challenging numerical questions, however, and I review specific Monte Carlo methods for improved sampling, including general Monte Carlo techniques such as Metropolis sampling, umbrella sampling and multiple Markov Chain sampling. In addition, specific static and dynamic algorithms for walks are presented, and I give an overview of recent innovations in this field, including algorithms such as flatPERM, flatGARM and flatGAS. (topical review)

  20. Non-iterative method for camera calibration.

    Science.gov (United States)

    Hong, Yuzhen; Ren, Guoqiang; Liu, Enhai

    2015-09-01

    This paper presents a new and effective technique to calibrate a camera without nonlinear iterative optimization. To this end, the centre-of-distortion is first estimated accurately. Based on the radial distortion division model, point correspondences between the model plane and its image are then used to compute the homography and the distortion coefficients. Once the homographies of the calibration images are obtained, the camera intrinsic parameters are solved analytically. All the solution techniques applied in this calibration process are non-iterative, require no initial guess, and carry no risk of local minima. Moreover, estimation of the distortion coefficients and the intrinsic parameters can be successfully decoupled, yielding a more stable and reliable result. Both simulated and real experiments have been carried out to show that the proposed method is reliable and effective. Without nonlinear iterative optimization, the proposed method is computationally efficient and can be applied to real-time online calibration. PMID:26368490
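
    The homography computation from point correspondences referred to above is commonly carried out with the direct linear transform (DLT), which is likewise non-iterative. A minimal sketch with illustrative correspondences follows; the distortion-model estimation of the paper is not reproduced here.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate H such that dst ~ H @ src (homogeneous) from >= 4 point
    correspondences via the direct linear transform (SVD of the constraints)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows)
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)       # null vector holds the homography entries
    return H / H[2, 2]             # fix the arbitrary scale

# Illustrative correspondences: unit square mapped to a quadrilateral.
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(10, 10), (52, 8), (50, 51), (12, 49)]
print(homography_dlt(src, dst))
```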

  1. Calibration of the JEM-EUSO detector

    Directory of Open Access Journals (Sweden)

    Gorodetzky P.

    2013-06-01

    Full Text Available In order to unveil the mystery of ultra high energy cosmic rays (UHECRs), JEM-EUSO (Extreme Universe Space Observatory on-board the Japanese Experiment Module) will observe extensive air showers induced by UHECRs from the International Space Station orbit with a huge acceptance. Calibration of the JEM-EUSO instrument, which consists of Fresnel optics and a focal surface detector with 5000 photomultipliers, is very important for discussing the origin of UHECRs precisely on the basis of the observed results. In this paper, the calibration before launch and on-orbit is described. The calibration before flight will be performed as precisely as possible with integrating spheres. In orbit, the relative change of the performance will be checked regularly with on-board and on-ground light sources. The absolute calibration of the photon detection efficiency may be performed with the moon, which is a stable natural light source.

  2. Monte Carlo Methods in Physics

    International Nuclear Information System (INIS)

    The method of Monte Carlo integration is reviewed briefly and some of its applications in physics are explained. A numerical experiment on the random generators used in Monte Carlo techniques is carried out to show the behaviour of the randomness of the various methods of generating them. To account for the weight function involved in the Monte Carlo, the Metropolis method is used. From the results of the experiment, one can see that there are no regular patterns in the numbers generated, showing that the program generators are reasonably good, while the experimental results show a statistical distribution obeying the expected statistical law. Further, some applications of the Monte Carlo methods in physics are given. The physical problems are chosen such that the models have available solutions, either exact or approximate, against which the Monte Carlo calculations can be compared. The comparisons show that, for the models considered, good agreement has been obtained.
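
    The Metropolis method mentioned above accepts a proposed move from x to x' with probability min(1, w(x')/w(x)), so that the chain samples the weight function w. A minimal sketch for a Gaussian weight (an illustration, not the numerical experiment described in the record):

```python
import math
import random

def metropolis_normal(n_samples, step=1.0):
    """Metropolis sampling of the weight w(x) = exp(-x^2/2)."""
    x, samples = 0.0, []
    for _ in range(n_samples):
        x_new = x + random.uniform(-step, step)   # symmetric proposal
        # Accept with probability min(1, w(x_new)/w(x)).
        if random.random() < math.exp((x * x - x_new * x_new) / 2.0):
            x = x_new
        samples.append(x)
    return samples

samples = metropolis_normal(100000)
mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples) - mean ** 2
print(mean, var)   # should approach 0 and 1 for the standard normal
```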

  3. Monte Carlo Form-Finding Method for Tensegrity Structures

    Science.gov (United States)

    Li, Yue; Feng, Xi-Qiao; Cao, Yan-Ping

    2010-05-01

    In this paper, we propose a Monte Carlo-based approach to solve tensegrity form-finding problems. It uses a stochastic procedure to find the deterministic equilibrium configuration of a tensegrity structure. The suggested Monte Carlo form-finding (MCFF) method is highly efficient because it involves neither complicated matrix operations nor symmetry analysis, and it works for arbitrary initial configurations. Both regular and irregular tensegrity problems of large scale can be solved. Some representative examples are presented to demonstrate the efficiency and accuracy of this versatile method.

  4. Remote calibration of ionization chambers for radioactivity calibration

    International Nuclear Information System (INIS)

    A new calibration technique, referred to as e-trace, has been developed by the National Institute of Advanced Industrial Science and Technology (AIST). The e-trace technique enables rapid remote calibration of measurement equipment and requires minimal resources. We calibrated radioisotope calibrators of the Japan Radioisotope Association (JRIA) and the Nishina Memorial Cyclotron Center (NMCC) remotely and confirmed that remote calibration provided results that are consistent with the results obtained by existing methods within the limits of uncertainty. Accordingly, e-trace has been approved as the standard calibration method at AIST. We intend to apply remote calibration to radioisotope calibrators in hospitals and isotope facilities. (author)

  5. Calibrations of photomultiplier tubes

    International Nuclear Information System (INIS)

    The experimental methods for calibrating photomultiplier tubes used in the multichannel fast-pulse-detection system of Thomson scattering measurements for nuclear fusion devices are reported. The most important parameters of the photomultiplier tubes to be calibrated include: linearity of output electric signals to input light signals, response time to pulsed light, spectral response, absolute responsivity, and sensitivity as a function of the chain voltage. The calibrations of all these parameters are carried out using EMI 9558 B and RCA 7265 photomultiplier tubes, respectively. The experimental methods presented in the paper are common to quantitative measurements that require photomultiplier tubes as detectors.

  6. Equipment for dosemeter calibration

    International Nuclear Information System (INIS)

    The device is used for precise calibration of dosimetric instrumentation, such as that used at nuclear facilities. The high precision of the calibration procedure is primarily due to the fact that a single, steady radiation source is used. The accurate alignment of the source and the absence of shielding materials in the beam axis make for high homogeneity of the beam and reproducibility of the measurement; this is also aided by the horizontal displacement of the optical bench, which ensures a constant temperature field and the possibility of placing the radiation source at a sufficient distance from the instrument to be calibrated. (Z.S.). 3 figs

  7. Lidar Calibration Centre

    Science.gov (United States)

    Pappalardo, Gelsomina; Freudenthaler, Volker; Nicolae, Doina; Mona, Lucia; Belegante, Livio; D'Amico, Giuseppe

    2016-06-01

    This paper presents the newly established Lidar Calibration Centre, a distributed infrastructure in Europe, whose goal is to offer services for complete characterization and calibration of lidars and ceilometers. Mobile reference lidars, laboratories for testing and characterization of optics and electronics, facilities for inspection and debugging of instruments, as well as for training in good practices are open to users from the scientific community, operational services and private sector. The Lidar Calibration Centre offers support for trans-national access through the EC HORIZON2020 project ACTRIS-2.

  8. Monte Carlo Methods for Rough Free Energy Landscapes: Population Annealing and Parallel Tempering

    OpenAIRE

    Machta, Jon; Ellis, Richard S.

    2011-01-01

    Parallel tempering and population annealing are both effective methods for simulating equilibrium systems with rough free energy landscapes. Parallel tempering, also known as replica exchange Monte Carlo, is a Markov chain Monte Carlo method while population annealing is a sequential Monte Carlo method. Both methods overcome the exponential slowing associated with high free energy barriers. The convergence properties and efficiency of the two methods are compared. For large systems, populatio...

  9. Calibration curve for germanium spectrometers from solutions calibrated by liquid scintillation counting

    International Nuclear Information System (INIS)

    The beta-gamma emitters 60Co, 137Cs, 131I, 210Pb and 129I are radionuclides for which calibration by the CIEMAT/NIST method is possible with uncertainties of less than 1%. We prepared, from standardized solutions of these radionuclides, samples in 20 ml vials. We obtained the calibration curves, efficiency as a function of energy, for two germanium detectors. (Author) 5 refs

  10. Device calibration impacts security of quantum key distribution

    OpenAIRE

    Jain, Nitin; Wittmann, Christoffer; Lydersen, Lars; Wiechers, Carlos; Elser, Dominique; Marquardt, Christoph; Makarov, Vadim; Leuchs, Gerd

    2011-01-01

    Characterizing the physical channel and calibrating the cryptosystem hardware are prerequisites for establishing a quantum channel for quantum key distribution (QKD). Moreover, an inappropriately implemented calibration routine can open a fatal security loophole. We propose and experimentally demonstrate a method to induce a large temporal detector efficiency mismatch in a commercial QKD system by deceiving a channel length calibration routine. We then devise an optimal and realistic strategy...

  11. SPOTS Calibration Example

    Directory of Open Access Journals (Sweden)

    Patterson E.

    2010-06-01

    Full Text Available The results are presented using the procedure outlined by the Standardisation Project for Optical Techniques of Strain measurement to calibrate a digital image correlation system. The process involves comparing the experimental data obtained with the optical measurement system to the theoretical values for a specially designed specimen. The standard states the criteria which must be met in order to achieve successful calibration, in addition to quantifying the measurement uncertainty in the system. The system was evaluated at three different displacement load levels, generating strain ranges from 289 µstrain to 2110 µstrain. At the 289 µstrain range, the calibration uncertainty was found to be 14.1 µstrain, and at the 2110 µstrain range it was found to be 28.9 µstrain. This calibration procedure was performed without painting a speckle pattern on the surface of the metal. Instead, the specimen surface was prepared using different grades of grit paper to produce the desired texture.

  12. Traceable Pyrgeometer Calibrations

    Energy Technology Data Exchange (ETDEWEB)

    Dooraghi, Mike; Kutchenreiter, Mark; Reda, Ibrahim; Habte, Aron; Sengupta, Manajit; Andreas, Afshin; Newman, Martina; Webb, Craig

    2016-05-02

    This presentation provides a high-level overview of the progress on the Broadband Outdoor Radiometer Calibrations for all shortwave and longwave radiometers that are deployed by the Atmospheric Radiation Measurement program.

  13. Air Data Calibration Facility

    Data.gov (United States)

    Federal Laboratory Consortium — This facility is for low altitude subsonic altimeter system calibrations of air vehicles. Mission is a direct support of the AFFTC mission. Postflight data merge is...

  14. Monte Carlo Simulation of an American Option

    Directory of Open Access Journals (Sweden)

    Gikiri Thuo

    2007-04-01

    Full Text Available We implement gradient estimation techniques for sensitivity analysis of option pricing which can be efficiently employed in Monte Carlo simulation. Using these techniques we can simultaneously obtain an estimate of the option value together with the estimates of sensitivities of the option value to various parameters of the model. After deriving the gradient estimates we incorporate them in an iterative stochastic approximation algorithm for pricing an option with early exercise features. We illustrate the procedure using an example of an American call option with a single dividend that is analytically tractable. In particular we incorporate estimates for the gradient with respect to the early exercise threshold level.
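
    For context on the gradient estimation the paper builds on, here is a minimal sketch of a pathwise delta estimator for a European call under Black-Scholes dynamics; the early-exercise machinery and stochastic approximation algorithm of the paper are not reproduced here.

```python
import math
import random

def mc_call_price_and_delta(s0, k, r, sigma, t, n_paths):
    """Monte Carlo price of a European call plus its pathwise delta:
    delta estimator = exp(-r t) * 1{S_T > K} * S_T / S_0."""
    disc = math.exp(-r * t)
    payoff_sum = delta_sum = 0.0
    for _ in range(n_paths):
        z = random.gauss(0.0, 1.0)
        st = s0 * math.exp((r - 0.5 * sigma ** 2) * t
                           + sigma * math.sqrt(t) * z)
        if st > k:
            payoff_sum += st - k
            delta_sum += st / s0
    return disc * payoff_sum / n_paths, disc * delta_sum / n_paths

price, delta = mc_call_price_and_delta(s0=100, k=100, r=0.05, sigma=0.2,
                                       t=1.0, n_paths=200000)
print(price, delta)  # Black-Scholes values are about 10.45 and 0.637
```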

  15. Monte Carlo modelling for neutron guide losses

    International Nuclear Information System (INIS)

    In modern research reactors, neutron guides are commonly used for beam transport. The neutron guide is a well-polished or equivalently smooth glass tube covered inside by a sputtered or evaporated film of natural Ni or the 58Ni isotope, on which the neutrons are totally reflected. A Monte Carlo calculation was carried out to establish the real efficiency and the spectral as well as spatial distribution of the neutron beam at the end of a glass mirror guide. The losses caused by mechanical inaccuracy and mirror quality were considered, and the effects due to the geometrical arrangement were analyzed. (author) 2 refs.; 2 figs

  16. Approximation Behooves Calibration

    DEFF Research Database (Denmark)

    da Silva Ribeiro, André Manuel; Poulsen, Rolf

    2013-01-01

    Calibration based on an expansion approximation for option prices in the Heston stochastic volatility model gives stable, accurate, and fast results for S&P500-index option data over the period 2005–2009.

  17. Scanner calibration revisited

    Directory of Open Access Journals (Sweden)

    Pozhitkov Alexander E

    2010-07-01

    Full Text Available Abstract Background Calibration of a microarray scanner is critical for accurate interpretation of microarray results. Shi et al. (BMC Bioinformatics, 2005, 6, Art. No. S11 Suppl. 2) reported the usage of a Full Moon BioSystems slide for calibration. Inspired by the Shi et al. work, we calibrated microarray scanners in our previous research. We were puzzled, however, that most of the signal intensities from a biological sample fell below the sensitivity threshold level determined by the calibration slide. This conundrum led us to re-investigate the quality of calibration provided by the Full Moon BioSystems slide as well as the accuracy of the analysis performed by Shi et al. Methods Signal intensities were recorded on three different microarray scanners at various photomultiplier gain levels using the same calibration slide from Full Moon BioSystems. Data analysis was conducted on raw signal intensities without normalization or transformation of any kind. A weighted least-squares method was used to fit the data. Results We found that the initial analysis performed by Shi et al. did not take into account the autofluorescence of the Full Moon BioSystems slide, which led to a grossly distorted microarray scanner response. Our analysis revealed that a power-law function, explicitly accounting for the slide autofluorescence, perfectly described the relationship between signal intensities and fluorophore quantities. Conclusions Microarray scanners respond in a much less distorted fashion than was reported by Shi et al. Full Moon BioSystems calibration slides are inadequate for performing calibration. We recommend against using these slides.
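
    The power-law response with an explicit autofluorescence term can be written S(Q) = S0 + a·Q^b and fitted by weighted least squares. A minimal sketch with synthetic data follows; the parameter values and noise model are illustrative, not those of the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def scanner_response(q, s0, a, b):
    """Signal = slide autofluorescence s0 plus power-law term a * q**b."""
    return s0 + a * np.power(q, b)

# Synthetic fluorophore quantities and signals (illustrative values only).
q = np.linspace(0.1, 100.0, 40)
rng = np.random.default_rng(0)
signal = scanner_response(q, 200.0, 35.0, 0.9) * rng.normal(1.0, 0.02, q.size)

# Weighted least squares: sigma proportional to the signal approximates
# intensity-dependent scanner noise.
popt, pcov = curve_fit(scanner_response, q, signal, p0=(100.0, 10.0, 1.0),
                       sigma=signal, absolute_sigma=False)
print("s0, a, b =", popt)
```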

  18. Calibrating nacelle lidars

    Energy Technology Data Exchange (ETDEWEB)

    Courtney, M.

    2013-01-15

    Nacelle mounted, forward looking wind lidars are beginning to be used to provide reference wind speed measurements for the power performance testing of wind turbines. In such applications, a formal calibration procedure with a corresponding uncertainty assessment will be necessary. This report presents four concepts for performing such a nacelle lidar calibration. Of the four methods, two are found to be immediately relevant and are pursued in some detail. The first of these is a line of sight calibration method in which both lines of sight (for a two beam lidar) are individually calibrated by accurately aligning the beam to pass close to a reference wind speed sensor. A testing procedure is presented, reporting requirements outlined and the uncertainty of the method analysed. It is seen that the main limitation of the line of sight calibration method is the time required to obtain a representative distribution of radial wind speeds. An alternative method is to place the nacelle lidar on the ground and incline the beams upwards to bisect a mast equipped with reference instrumentation at a known height and range. This method will be easier and faster to implement and execute but the beam inclination introduces extra uncertainties. A procedure for conducting such a calibration is presented and initial indications of the uncertainties given. A discussion of the merits and weaknesses of the two methods is given together with some proposals for the next important steps to be taken in this work. (Author)

  19. Energy calibration via correlation

    CERN Document Server

    Maier, Daniel

    2015-01-01

    The main task of an energy calibration is to find a relation between pulse-height values and the corresponding energies. Doing this for each pulse-height channel individually requires an elaborate input spectrum with excellent counting statistics and a sophisticated data analysis. This work presents an easy-to-handle energy calibration process which can operate reliably on calibration measurements with low counting statistics. The method uses a parameter-based model for the energy calibration and arrives at the optimal parameters of the model by finding the best correlation between the measured pulse-height spectrum and multiple synthetic pulse-height spectra which are constructed with different sets of calibration parameters. A CdTe-based semiconductor detector and the line emissions of an 241Am source were used to test the performance of the correlation method in terms of systematic calibration errors for different counting statistics. Up to energies of 60 keV systematic errors were measured to be le...
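
    A minimal sketch of the correlation idea, assuming a linear calibration model E = gain·channel + offset and a synthetic stand-in for the measured spectrum; the detector response details of the paper are not reproduced.

```python
import numpy as np

def synthetic_spectrum(channels, gain, offset, line_energies, width=1.5):
    """Pulse-height spectrum predicted for a (gain, offset) pair: Gaussian
    peaks placed at the channels where the line energies would fall."""
    energies = gain * channels + offset
    spectrum = np.zeros_like(energies)
    for e0 in line_energies:
        spectrum += np.exp(-0.5 * ((energies - e0) / width) ** 2)
    return spectrum

channels = np.arange(1024, dtype=float)
lines = [13.9, 17.8, 26.3, 59.5]   # keV, an 241Am-like line set
# Stand-in for a measured spectrum (true gain 0.062 keV/ch, offset 1.3 keV):
measured = synthetic_spectrum(channels, 0.062, 1.3, lines)

# Grid search: choose the parameter pair whose synthetic spectrum has the
# highest correlation with the measured pulse-height spectrum.
best = max(((g, o) for g in np.linspace(0.05, 0.07, 81)
                   for o in np.linspace(0.0, 3.0, 61)),
           key=lambda p: np.corrcoef(
               measured, synthetic_spectrum(channels, *p, lines))[0, 1])
print("gain, offset =", best)
```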

  20. Solutions to the linear camera calibration problem

    Science.gov (United States)

    Grosky, William I.; Tamburino, Louis A.

    1987-01-01

    The general linear camera calibration problem is formulated and several classification schemes for various subcases of this problem are developed. For each subcase, simple solutions are found that satisfy all necessary constraints. The results improve those already in the literature with respect to simplicity, efficiency, and coverage. However, the classification scheme is not exhaustive.

  1. Estimation of population variance in contributon Monte Carlo

    International Nuclear Information System (INIS)

    Based on the theory of contributons, a new Monte Carlo method known as the contributon Monte Carlo method has recently been developed. The method has found applications in several practical shielding problems. The authors theoretically analyze the variance and efficiency of the new method by taking moments around the score. In order to compare the contributon game with a game of simple geometrical splitting, and also to find the optimal placement of the contributon volume, the moments equations were solved numerically for a one-dimensional, one-group problem using a 10-mfp-thick homogeneous slab. It is found that the optimal placement of the contributon volume is adjacent to the detector; even at its most optimal, the contributon Monte Carlo is less efficient than geometrical splitting.

  2. Monte Carlo simulation of mixed neutron-gamma radiation fields and dosimetry devices

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Guoqing

    2011-12-22

    Monte Carlo methods based on random sampling are widely used in different fields for their capability of solving problems with a large number of coupled degrees of freedom. In this work, Monte Carlo methods are successfully applied to the simulation of the mixed neutron-gamma field in an interim storage facility and of neutron dosimeters of different types. Details are discussed in two parts: In the first part, the method of simulating an interim storage facility loaded with CASTORs is presented. The size of a CASTOR is rather large (several meters) and the CASTOR wall is very thick (tens of centimeters). Obtaining dose rates outside a CASTOR with reasonable errors usually costs hours or even days. For the simulation of a large number of CASTORs in an interim storage facility, weeks or even months would be needed to finish a calculation. Variance reduction techniques were used to reduce the calculation time and to achieve reasonable relative errors. Source clones were applied to avoid unnecessary repeated calculations. In addition, the simulations were performed on a cluster system. With the calculation techniques discussed above, the efficiency of the calculations can be improved considerably. In the second part, the methods of simulating the response of neutron dosimeters are presented. An Alnor albedo dosimeter was modelled in MCNP, and it was simulated in the facility to calculate the calibration factor and obtain the evaluated response to a Cf-252 source. The angular response of Makrofol detectors to fast neutrons has also been investigated. As a kind of SSNTD, Makrofol can detect fast neutrons by recording the neutron-induced heavy charged recoils. To obtain the information on charged recoils, general-purpose Monte Carlo codes were used for transporting incident neutrons. The response of Makrofol to fast neutrons depends on several factors. Based on the parameters which affect the track revealing, the formation of visible tracks was determined. For

  3. Monte Carlo simulation of mixed neutron-gamma radiation fields and dosimetry devices

    International Nuclear Information System (INIS)

    Monte Carlo methods based on random sampling are widely used in different fields for their capability of solving problems with a large number of coupled degrees of freedom. In this work, Monte Carlo methods are successfully applied to the simulation of the mixed neutron-gamma field in an interim storage facility and of neutron dosimeters of different types. Details are discussed in two parts: In the first part, the method of simulating an interim storage facility loaded with CASTORs is presented. The size of a CASTOR is rather large (several meters) and the CASTOR wall is very thick (tens of centimeters). Obtaining dose rates outside a CASTOR with reasonable errors usually costs hours or even days. For the simulation of a large number of CASTORs in an interim storage facility, weeks or even months would be needed to finish a calculation. Variance reduction techniques were used to reduce the calculation time and to achieve reasonable relative errors. Source clones were applied to avoid unnecessary repeated calculations. In addition, the simulations were performed on a cluster system. With the calculation techniques discussed above, the efficiency of the calculations can be improved considerably. In the second part, the methods of simulating the response of neutron dosimeters are presented. An Alnor albedo dosimeter was modelled in MCNP, and it was simulated in the facility to calculate the calibration factor and obtain the evaluated response to a Cf-252 source. The angular response of Makrofol detectors to fast neutrons has also been investigated. As a kind of SSNTD, Makrofol can detect fast neutrons by recording the neutron-induced heavy charged recoils. To obtain the information on charged recoils, general-purpose Monte Carlo codes were used for transporting incident neutrons. The response of Makrofol to fast neutrons depends on several factors. Based on the parameters which affect the track revealing, the formation of visible tracks was determined. For

  4. Variations of Dose to the Lung During Computed Tomography (CT) Thorax Examinations: A Monte Carlo Study

    International Nuclear Information System (INIS)

    This study determined the influence of patient individuality on lung organ doses for chest computed tomography (CT) examinations, viewed in the context of the recommendation of ICRP 103, in which a more individualized dose estimation is emphasized. The new ICRP 110 voxelized adult phantom was used and compared with calculations of lung doses for patient chest CT studies with identical scan parameters (120 kV, 135 mAs, 100 mm collimation, 1.5 pitch). For all patient images, the lung was contoured, and the scanning geometry was simulated using the Monte Carlo method. The lungs were completely included in the scan area. A user code was developed for the Monte Carlo package EGSnrc, which enables the simulation of a CT examination procedure and allows an efficient dose scoring within a patient geometry. All simulations were calculated with the same CT source model and calibrated to a realistic CTDIair value. Simulation values were grouped into 1 mSv classes. The organ dose classes fit well to a Gaussian distribution (adjusted correlation coefficient R2 = 0.95). The mean value of the fit was 10 mSv, with a standard deviation of 2 mSv. The variability was about ±30%, with a minimum of 8 mSv and a maximum of 13 mSv. The calculated lung dose of the ICRP adult female phantom was approximately 11 mSv and thus within the calculated standard deviation of the patient pool. The correlation between lung volume and dose was weak (adjusted correlation coefficient R2 = 0.33). Gender-specific differences between the ICRP male and female phantoms were about 17%. In comparison, the differences between the female and a limited set of male patient studies were not statistically significant. Further, the relation between the HU values of CT scans and the material/density information necessary for the Monte Carlo simulations was investigated. The simple but commonly employed relationship was found to lead to significant deviations compared to the defined materials in the ICRP phantoms.

  5. Establishing a NORM based radiation calibration facility.

    Science.gov (United States)

    Wallace, J

    2016-05-01

    An environmental radiation calibration facility has been constructed by the Radiation and Nuclear Sciences unit of Queensland Health at the Forensic and Scientific Services Coopers Plains campus in Brisbane. This facility consists of five low density concrete pads, spiked with a NORM source, to simulate soil and effectively provide a number of semi-infinite uniformly distributed sources for improved energy response calibrations of radiation equipment used in NORM measurements. The pads have been sealed with an environmental epoxy compound to restrict radon loss and so enhance the quality of secular equilibrium achieved. Monte Carlo models (MCNP), used to establish suitable design parameters and identify appropriate geometric correction factors linking the air kerma measured above these calibration pads to that predicted for an infinite plane using adjusted ICRU53 data, are discussed. Use of these correction factors, as well as adjustments for cosmic radiation and the impact of surrounding low levels of NORM in the soil, allows for good agreement between the radiation fields predicted and measured above the pads at both 0.15 m and 1 m. PMID:26921707

  6. Calibration algorithms of RPC detectors at Daya Bay Neutrino Experiment

    International Nuclear Information System (INIS)

    During the commissioning of RPC detector systems at the Daya Bay Reactor Neutrino Experiment, calibration algorithms were developed and tuned, in order to evaluate and optimize the performance of the RPC detectors. Based on a description of the hardware structure of the RPC detector systems, this paper introduces the algorithms used for detector calibration, including trigger rate, efficiency, noise rate, purity and muon flux.
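
    Layer efficiency in such calibrations is typically estimated as the fraction of reconstructed muon tracks that leave a matched hit in the layer under study. A minimal sketch with a simple binomial uncertainty; the counts are illustrative, not Daya Bay data.

```python
import math

def layer_efficiency(hits, tracks):
    """Efficiency = matched hits / tracks crossing the layer, with the
    simple binomial standard error sqrt(eff * (1 - eff) / N)."""
    eff = hits / tracks
    err = math.sqrt(eff * (1.0 - eff) / tracks)
    return eff, err

eff, err = layer_efficiency(hits=9421, tracks=10000)
print(f"efficiency = {eff:.4f} +/- {err:.4f}")
```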

  7. Improvement of the WBC calibration of the Internal Dosimetry Laboratory of the CDTN/CNEN using MCNPX code

    Energy Technology Data Exchange (ETDEWEB)

    Guerra P, F.; Heeren de O, A. [Universidade Federal de Minas Gerais, Departamento de Engenharia Nuclear, Programa de Pos Graduacao em Ciencias e Tecnicas Nucleares, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil); Melo, B. M.; Lacerda, M. A. S.; Da Silva, T. A.; Ferreira F, T. C., E-mail: tcff01@gmail.com [Centro de Desenvolvimento da Tecnologia Nuclear, Programa de Pos Graduacao / CNEN, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil)

    2015-10-15

    The Plan of Radiological Protection licensed by the National Nuclear Energy Commission (CNEN) in Brazil includes the assessment of the risks of internal and external exposure by implementing a program of individual monitoring, which is responsible for controlling exposures and ensuring the maintenance of radiation safety. The Laboratory of Internal Dosimetry of the Center for Development of Nuclear Technology (LID/CDTN) is responsible for routine monitoring of internal contamination of Individuals Occupationally Exposed (IOEs). These include the IOEs involved in handling the 18F sources produced by the Unit for Research and Production of Radiopharmaceuticals, as well as whole-body monitoring of workers from the TRIGA IPR-R1/CDTN Research Reactor or whenever there is any risk of accidental incorporation. The determination of photon-emitting radionuclides in the human body requires calibration of the counting geometries in order to obtain an efficiency curve. The calibration process normally makes use of physical phantoms containing certified activities of the radionuclides of interest. The objective of this project is the calibration of the WBC facility of the LID/CDTN using the BOMAB physical phantom and Monte Carlo simulations. Three steps were needed to complete the calibration process. First, the BOMAB was filled with a KCl solution and several measurements of the gamma-ray energy (1.46 MeV) emitted by 40K were performed. Second, simulations using the MCNPX code were performed to calculate the counting efficiency (Ce) for the BOMAB phantom model, and the results were compared with the measured Ce. Third, the modelled BOMAB phantom was used to calculate the Ce over the energy range of interest. The results showed good agreement and are within the expected ratio between the measured and simulated results. (Author)

  8. Carlos Chagas: biographical sketch.

    Science.gov (United States)

    Moncayo, Alvaro

    2010-01-01

    Carlos Chagas was born on 9 July 1878 on the farm "Bom Retiro", located close to the city of Oliveira in the interior of the State of Minas Gerais, Brazil. He started his medical studies in 1897 at the School of Medicine of Rio de Janeiro. In the late XIX century, the works of Louis Pasteur and Robert Koch induced a change in the medical paradigm, with emphasis on experimental demonstrations of the causal link between microbes and disease. During the same years in Germany appeared the pathological concept of disease, linking organic lesions with symptoms. All these innovations were adopted by the reforms of the medical schools in Brazil and influenced the scientific formation of Chagas. Chagas completed his medical studies between 1897 and 1903, and his examinations during these years were always ranked with high grades. Oswaldo Cruz accepted Chagas as a doctoral candidate and directed his thesis on "Hematological studies of Malaria", which was received with honors by the examiners. In 1903 the director appointed Chagas as research assistant at the Institute. In those years, the Institute of Manguinhos, under the direction of Oswaldo Cruz, initiated a process of institutional growth and gathered a distinguished group of Brazilian and foreign scientists. In 1907, he was requested to investigate and control a malaria outbreak in Lassance, Minas Gerais. At that moment Chagas could not have imagined that this field research was the beginning of one of the most notable medical discoveries. At the age of 28, Chagas was a Research Assistant at the Institute of Manguinhos, studying a new flagellate parasite isolated from triatomine insects captured in the State of Minas Gerais. Chagas made his discoveries in this order: first the causal agent, then the vector and finally the human cases. These notable discoveries were carried out by Chagas in twenty months. At the age of 33 Chagas had completed his discoveries and published the scientific articles that gave him world

  9. A hybrid Monte Carlo and response matrix Monte Carlo method in criticality calculation

    International Nuclear Information System (INIS)

    Full core calculations are very useful and important in reactor physics analysis, especially in computing full core power distributions, optimizing refueling strategies and analyzing the depletion of fuels. To reduce the computing time and accelerate the convergence, a method named Response Matrix Monte Carlo (RMMC), based on analog Monte Carlo simulation, was used to calculate fixed source neutron transport problems in repeated structures. To make the calculations more accurate, we put forward the RMMC method based on non-analog Monte Carlo simulation and investigate the way to use the RMMC method in criticality calculations. A new hybrid RMMC and MC (RMMC+MC) method is then put forward to solve criticality problems with combined repeated and flexible geometries. This new RMMC+MC method, having the advantages of both the MC method and the RMMC method, can not only increase the efficiency of the calculations but also simulate more complex geometries than repeated structures. Several 1-D numerical problems are constructed to test the new RMMC and RMMC+MC methods. The results show that the RMMC and RMMC+MC methods can efficiently reduce the computing time and the variances in the calculations. Finally, future research directions are mentioned and discussed at the end of this paper to make the RMMC and RMMC+MC methods more powerful. (authors)

  10. Methods for fitting of efficiency curves obtained by means of HPGe gamma rays spectrometers

    International Nuclear Information System (INIS)

    The present work describes several methodologies developed for fitting efficiency curves obtained by means of an HPGe gamma-ray spectrometer. The interpolated values were determined by simple polynomial fitting, and by polynomial fitting of the ratio between the experimental peak efficiency and the total efficiency calculated by the Monte Carlo technique, as a function of gamma-ray energy. Moreover, non-linear fitting was performed using a segmented polynomial function and applying the Gauss-Marquardt method. For obtaining the peak area, different methodologies were developed in order to estimate the background area under the peak. This information was obtained by numerical integration or by using analytical functions associated with the background. One non-calibrated radioactive source was included in the efficiency curve in order to provide additional calibration points. As a by-product, it was possible to determine the activity of this non-calibrated source. For all fittings developed in the present work the covariance matrix methodology was used, which is an essential procedure in order to give a complete description of the partial uncertainties involved. (author)
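
    For fits that carry a full covariance description, generalized least squares gives both the parameters and their covariance matrix, p = (A^T V^-1 A)^-1 A^T V^-1 y. A minimal sketch of a weighted polynomial fit of an efficiency curve in log-log space; the efficiency points and uncertainties below are synthetic placeholders.

```python
import numpy as np

def fit_efficiency_curve(energies, efficiencies, sigmas, degree=2):
    """Weighted polynomial fit of ln(eff) vs ln(E); returns the parameters
    and their covariance matrix (A^T V^-1 A)^-1 from generalized least squares."""
    x = np.log(energies)
    y = np.log(efficiencies)
    sig_y = sigmas / efficiencies          # uncertainty propagated to ln(eff)
    A = np.vander(x, degree + 1)           # design matrix (highest power first)
    v_inv = np.diag(1.0 / sig_y ** 2)      # inverse covariance (uncorrelated)
    cov = np.linalg.inv(A.T @ v_inv @ A)
    params = cov @ A.T @ v_inv @ y
    return params, cov

e = np.array([122.0, 344.0, 662.0, 1173.0, 1332.0])       # keV
eff = np.array([0.012, 0.0062, 0.0035, 0.0021, 0.0019])   # peak efficiency
sig = 0.03 * eff                                           # 3% uncertainties
params, cov = fit_efficiency_curve(e, eff, sig)
print(params)
print(np.sqrt(np.diag(cov)))   # parameter standard uncertainties
```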

  11. A residual Monte Carlo method for discrete thermal radiative diffusion

    International Nuclear Information System (INIS)

    Residual Monte Carlo methods reduce statistical error at a rate of exp(-bN), where b is a positive constant and N is the number of particle histories. Contrast this convergence rate with 1/√N, which is the rate of statistical error reduction for conventional Monte Carlo methods. Thus, residual Monte Carlo methods hold great promise for increased efficiency relative to conventional Monte Carlo methods. Previous research has shown that the application of residual Monte Carlo methods to the solution of continuum equations, such as the radiation transport equation, is problematic for all but the simplest of cases. However, the residual method readily applies to discrete systems as long as those systems are monotone, i.e., they produce positive solutions given positive sources. We develop a residual Monte Carlo method for solving a discrete 1D non-linear thermal radiative equilibrium diffusion equation, and we compare its performance with that of the discrete conventional Monte Carlo method upon which it is based. We find that the residual method provides efficiency gains of many orders of magnitude. Part of the residual gain is due to the fact that we begin each timestep with an initial guess equal to the solution from the previous timestep. Moreover, fully consistent non-linear solutions can be obtained in a reasonable amount of time because of the effective lack of statistical noise. We conclude that the residual approach has great potential and that further research into such methods should be pursued for more general discrete and continuum systems

  12. Multiple Tree for Partially Observable Monte-Carlo Tree Search

    OpenAIRE

    Auger, David

    2011-01-01

    We propose an algorithm for computing approximate Nash equilibria of partially observable games using Monte-Carlo tree search based on recent bandit methods. We obtain experimental results for the game of phantom tic-tac-toe, showing that strong strategies can be efficiently computed by our algorithm.

  13. Monte Carlo Greeks for financial products via approximative transition densities

    OpenAIRE

    Joerg Kampen; Anastasia Kolodko; John Schoenmakers

    2008-01-01

    In this paper we introduce efficient Monte Carlo estimators for the valuation of high-dimensional derivatives and their sensitivities ("Greeks"). These estimators are based on an analytical, usually approximative representation of the underlying density. We study approximative densities obtained by the WKB method. The results are applied in the context of a Libor market model.

  14. Analytical band Monte Carlo analysis of electron transport in silicene

    Science.gov (United States)

    Yeoh, K. H.; Ong, D. S.; Ooi, C. H. Raymond; Yong, T. K.; Lim, S. K.

    2016-06-01

    An analytical band Monte Carlo (AMC) model with linear energy band dispersion has been developed to study electron transport in suspended silicene and in silicene on an aluminium oxide (Al2O3) substrate. We have calibrated our model against full band Monte Carlo (FMC) results by matching the velocity-field curve. Using this model, we find that the collective effects of charge impurity scattering and surface optical phonon scattering can degrade the electron mobility down to about 400 cm2 V‑1 s‑1, beyond which it is less sensitive to changes in the substrate charge impurity density and surface optical phonons. We also find that the further reduction of mobility to ∼100 cm2 V‑1 s‑1, as experimentally demonstrated by Tao et al (2015 Nat. Nanotechnol. 10 227), can only be explained by the renormalization of the Fermi velocity due to interaction with the Al2O3 substrate.

  15. HAWC Timing Calibration

    CERN Document Server

    Huentemeyer, Petra; Dingus, Brenda

    2009-01-01

    The High-Altitude Water Cherenkov (HAWC) Experiment is a second-generation high-sensitivity gamma-ray and cosmic-ray detector that builds on the experience and technology of the Milagro observatory. Like Milagro, HAWC utilizes the water Cherenkov technique to measure extensive air showers. Instead of a pond filled with water (as in Milagro), an array of closely packed water tanks is used. The event direction will be reconstructed using the times when the PMTs in each tank are triggered. Therefore, the timing calibration will be crucial for reaching an angular resolution as low as 0.25 degrees. We propose to use a laser calibration system, patterned after the calibration system in Milagro. Like Milagro, the HAWC optical calibration system will use ~1 ns laser light pulses. Unlike Milagro, the PMTs are optically isolated and require their own optical fiber calibration. For HAWC the laser light pulses will be directed through a series of optical fan-outs and fibers to illuminate the PMTs in approximately one half o...

  16. Calibration Under Uncertainty.

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton; Trucano, Timothy Guy

    2005-03-01

    This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.

  17. GTC Photometric Calibration

    Science.gov (United States)

    di Cesare, M. A.; Hammersley, P. L.; Rodriguez Espinosa, J. M.

    2006-06-01

    We are currently developing the calibration programme for GTC using techniques similar to those used for space telescope calibration (Hammersley et al. 1998, A&AS, 128, 207; Cohen et al. 1999, AJ, 117, 1864). We are planning to produce a catalogue of calibration stars which are suitable for a 10-m telescope. These sources will be non-variable, non-binary and, if they are to be used in the infrared, free of infrared excesses. The GTC science instruments require photometric calibration between 0.35 and 2.5 microns. The instruments are: OSIRIS (Optical System for Imaging low Resolution Integrated Spectroscopy), ELMER and EMIR (Espectrógrafo Multiobjeto Infrarrojo) and the Acquisition and Guiding boxes (Di Césare, Hammersley, & Rodriguez Espinosa 2005, RevMexAA Ser. Conf., 24, 231). The catalogue will consist of 30 star fields distributed across the Northern Hemisphere. We will use fields containing sources over the range 12 to 22 magnitudes, spanning a wide range of spectral types (A to M) for the visible and near infrared. In the poster we show the method used for selecting these fields and we present the analysis of the data on the first calibration fields observed.

  18. Characterization of the water filters cartridges from the iea-r1 reactor using the Monte Carlo method

    International Nuclear Information System (INIS)

    Filter cartridges are part of the primary water treatment system of the IEA-R1 Research Reactor and, when saturated, they are replaced and become radioactive waste. The IEA-R1 is located at the Nuclear and Energy Research Institute (IPEN), in Sao Paulo, Brazil. The primary characterization is the main step of radioactive waste management, in which the physical, chemical and radiological properties are determined. It is a very important step because the information obtained at this point enables the choice of the appropriate management process and the definition of final disposal options. In this paper, a non-destructive method for primary characterization is presented, using the Monte Carlo method associated with gamma spectrometry. Gamma spectrometry allows the identification of radionuclides and their activity values. The detection efficiency is an important parameter, which is related to the photon energy, the detector geometry and the matrix of the sample to be analyzed. Due to the difficulty of obtaining a standard source with the same geometry as the filter cartridge, another technique is necessary to calibrate the detector. The technique described in this paper uses the Monte Carlo method for the primary characterization of the IEA-R1 filter cartridges. (author)

  19. Feedback-optimized parallel tempering Monte Carlo

    Science.gov (United States)

    Katzgraber, Helmut G.; Trebst, Simon; Huse, David A.; Troyer, Matthias

    2006-03-01

    We introduce an algorithm for systematically improving the efficiency of parallel tempering Monte Carlo simulations by optimizing the simulated temperature set. Our approach is closely related to a recently introduced adaptive algorithm that optimizes the simulated statistical ensemble in generalized broad-histogram Monte Carlo simulations. Conventionally, a temperature set is chosen in such a way that the acceptance rates for replica swaps between adjacent temperatures are independent of the temperature and large enough to ensure frequent swaps. In this paper, we show that by choosing the temperatures with a modified version of the optimized ensemble feedback method we can minimize the round-trip times between the lowest and highest temperatures which effectively increases the efficiency of the parallel tempering algorithm. In particular, the density of temperatures in the optimized temperature set increases at the 'bottlenecks' of the simulation, such as phase transitions. In turn, the acceptance rates are now temperature dependent in the optimized temperature ensemble. We illustrate the feedback-optimized parallel tempering algorithm by studying the two-dimensional Ising ferromagnet and the two-dimensional fully frustrated Ising model, and briefly discuss possible feedback schemes for systems that require configurational averages, such as spin glasses.
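
    The replica-exchange step underlying parallel tempering accepts a swap between two adjacent temperatures with probability min(1, exp(Δβ·ΔE)). A minimal sketch of that acceptance rule alone (the feedback optimization of the temperature set is not reproduced):

```python
import math
import random

def attempt_swap(beta_cold, beta_hot, e_cold, e_hot):
    """Replica-exchange acceptance: swap configurations between adjacent
    replicas with probability min(1, exp((beta_cold - beta_hot)*(e_cold - e_hot)))."""
    delta = (beta_cold - beta_hot) * (e_cold - e_hot)
    return delta >= 0.0 or random.random() < math.exp(delta)

# Replicas at T = 1.0 and T = 1.2 (beta = 1/T); the energies are illustrative.
trials = 100000
accepted = sum(attempt_swap(1.0, 1.0 / 1.2, -50.2, -48.9) for _ in range(trials))
print(f"empirical swap acceptance: {accepted / trials:.3f}")
```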

  20. HENC performance evaluation and plutonium calibration

    International Nuclear Information System (INIS)

    The authors have designed a high-efficiency neutron counter (HENC) to assay the plutonium content in 200-L waste drums. The counter uses totals neutron counting, coincidence counting, and multiplicity counting to determine the plutonium mass. The HENC was developed as part of a Cooperative Research and Development Agreement between the Department of Energy and Canberra Industries. This report presents the results of the detector modifications, the performance tests, the add-a-source calibration, and the plutonium calibration at Los Alamos National Laboratory (TA-35) in 1996.

  1. Calibrating nacelle lidars

    DEFF Research Database (Denmark)

    Courtney, Michael

    Nacelle mounted, forward looking wind lidars are beginning to be used to provide reference wind speed measurements for the power performance testing of wind turbines. In such applications, a formal calibration procedure with a corresponding uncertainty assessment will be necessary. This report...... accurately aligning the beam to pass close to a reference wind speed sensor. A testing procedure is presented, reporting requirements outlined and the uncertainty of the method analysed. It is seen that the main limitation of the line of sight calibration method is the time required to obtain a...... inclination introduces extra uncertainties. A procedure for conducting such a calibration is presented and initial indications of the uncertainties given. A discussion of the merits and weaknesses of the two methods is given together with some proposals for the next important steps to be taken in this work....

  2. Some improvements of BES II TOF Monte Carlo simulation

    International Nuclear Information System (INIS)

    The BES II detector has been upgraded since 1995; the TOF time resolution is about 180 ps for Bhabha events, a big improvement compared with the 330 ps of BES I. With the upgrade of the detector, the software, including calibration, reconstruction and Monte Carlo (M.C.) simulation, needs corresponding improvement, especially the M.C. simulation. Using 50 M J/ψ events taken in the last two years at BES II, the authors studied the TOF resolution carefully and made some improvements to the TOF M.C. simulation. After these improvements, the authors compared the TOF resolutions of real data and M.C. data and found that they agree with each other.

  3. Individual dosimetry and calibration

    International Nuclear Information System (INIS)

    In 1995 both the Individual Dosimetry and Calibration Sections worked under status quo conditions and concentrated fully on the routine part of their work. Nevertheless, the machine for printing the bar code which is glued onto the film holder, and hence identifies people entering high radiation areas, was put into operation, and most of the holders were equipped with the new identification. As far as the Calibration Section is concerned, the project of the new source control system, realized by the Technical Support Section, was somewhat accelerated.

  4. Example of Monte Carlo uncertainty assessment in the field of radionuclide metrology

    Science.gov (United States)

    Cassette, Philippe; Bochud, François; Keightley, John

    2015-06-01

    This chapter presents possible uses and examples of Monte Carlo methods for the evaluation of uncertainties in the field of radionuclide metrology. The method is already well documented in GUM supplement 1, but here we present a more restrictive approach, where the quantities of interest calculated by the Monte Carlo method are estimators of the expectation and standard deviation of the measurand, and the Monte Carlo method is used to propagate the uncertainties of the input parameters through the measurement model. This approach is illustrated by an example of the activity calibration of a 103Pd source by liquid scintillation counting and the calculation of a linear regression on experimental data points. An electronic supplement presents some algorithms which may be used to generate random numbers with various statistical distributions, for the implementation of this Monte Carlo calculation method.
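
    A minimal sketch of this restricted use of the method: sample the input quantities from their assumed distributions, push each draw through the measurement model, and report the mean and standard deviation of the output. The model A = C/(ε·t) and the numbers below are generic placeholders, not the 103Pd example.

```python
import random
import statistics

def propagate(n_trials=100000):
    """Propagate input uncertainties through A = C / (eps * t) by sampling
    the inputs; report the mean and standard deviation of the output."""
    results = []
    for _ in range(n_trials):
        counts = random.gauss(125000, 500)       # net counts +/- u(C)
        efficiency = random.gauss(0.92, 0.01)    # detection efficiency +/- u(eps)
        live_time = random.gauss(600.0, 0.5)     # live time in s +/- u(t)
        results.append(counts / (efficiency * live_time))
    return statistics.mean(results), statistics.stdev(results)

mean_a, u_a = propagate()
print(f"A = {mean_a:.1f} Bq, u(A) = {u_a:.1f} Bq")
```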

  5. Self-optimizing Monte Carlo method for nuclear well logging simulation

    Science.gov (United States)

    Liu, Lianyan

    1997-09-01

    In order to increase the efficiency of Monte Carlo simulation for nuclear well logging problems, a new method has been developed for variance reduction. With this method, an importance map is generated in the regular Monte Carlo calculation as a by-product, and the importance map is later used to conduct the splitting and Russian roulette for particle population control. By adopting a spatial mesh system, which is independent of the physical geometrical configuration, the method allows superior user-friendliness. This new method is incorporated into the general-purpose Monte Carlo code MCNP4A through a patch file. Two nuclear well logging problems, a neutron porosity tool and a gamma-ray lithology density tool, are used to test the performance of this new method. The calculations are sped up over analog simulation by 120 and 2600 times, for the neutron porosity tool and for the gamma-ray lithology density log, respectively. The new method outperforms MCNP's cell-based weight window by a factor of 4-6, as per the converged figures-of-merit. An indirect comparison indicates that the new method also outperforms the AVATAR process for gamma-ray density tool problems. Even though it takes quite some time to generate a reasonable importance map from an analog run, a good initial map can create significant CPU time savings. This makes the method especially suitable for nuclear well logging problems, since one or several reference importance maps are usually available for a given tool. The study shows that the spatial mesh sizes should be chosen according to the mean free path. The overhead of the importance map generator is 6% and 14% for the neutron and gamma-ray cases, respectively. The learning ability towards a correct importance map is also demonstrated. Although false learning may happen, physical judgement can help diagnose it with contributon maps. Calibration and analysis are performed for the neutron tool and the gamma-ray tool. Due to the fact that a very

  6. Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions

    Directory of Open Access Journals (Sweden)

    Jianhua Zhang

    2014-01-01

    Full Text Available This paper proposes a fast and accurate method to calibrate multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head and the multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple-view calibration process is implemented to obtain the transformations between the multiple views. We first develop an efficient local repair algorithm to improve the depth map, and then a special calibration body is designed. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method achieves good performance, and it can further be applied to EEG source localization applications on the human brain.

  7. Muon Calibration at SoLid

    CERN Document Server

    Saunders, Daniel

    2016-01-01

    The SoLid experiment aims to make a measurement of very short distance neutrino oscillations using reactor antineutrinos. Key to its sensitivity are the experiment's high spatial and energy resolution, combined with a very suitable reactor source and efficient background rejection. The fine segmentation of the detector (cubes of side 5 cm), and the ability to resolve signals in space and time, give SoLid the capability to track cosmic muons. While in principle a source of background, these turn into a valuable calibration source if they can be cleanly identified. This work presents the first energy calibration results, using cosmic muons, of the 288 kg SoLid prototype SM1. This includes the methodology of tracking at SoLid, cosmic-ray angular analyses at the reactor site, estimates of the time resolution, and calibrations at the cube level.

  8. Analysis of some splitting and roulette algorithms in shield calculations by the Monte Carlo method

    International Nuclear Information System (INIS)

    Different schemes for applying the splitting and Russian roulette methods to Monte Carlo calculations of radiation transport in nuclear facility shields are considered. The efficiency of the schemes is assessed through test calculations.

  9. SOLFAST, a Ray-Tracing Monte-Carlo software for solar concentrating facilities

    International Nuclear Information System (INIS)

    In this communication the software SOLFAST is presented, a simulation tool based on the Monte Carlo method and accelerated ray-tracing techniques for efficiently evaluating the energy flux in concentrating solar installations.
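
    At its core, such a ray-tracing Monte Carlo flux evaluation reduces to sampling many rays and counting those that reach the receiver. The toy sketch below estimates an intercept factor under a simple Gaussian angular-error model; the parameters and the optical model are illustrative assumptions, not SOLFAST's actual algorithms.

        import math, random

        def intercept_factor(n_rays=100_000, sigma_mrad=3.0, slant_m=100.0, r_receiver_m=0.5):
            """Monte Carlo estimate of the fraction of reflected rays that
            hit a circular receiver, assuming a Gaussian angular error cone
            (a toy model for illustration only)."""
            hits = 0
            for _ in range(n_rays):
                # Sample the angular deviation of the reflected ray (radians).
                dx = random.gauss(0.0, sigma_mrad * 1e-3)
                dy = random.gauss(0.0, sigma_mrad * 1e-3)
                # Lateral displacement at the receiver plane after the slant range.
                if math.hypot(dx, dy) * slant_m <= r_receiver_m:
                    hits += 1
            return hits / n_rays

        print(intercept_factor())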

  10. Calibration System for Dosimetry and Radioactivity of Beta-Emitting Sources

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Chang Heon; Ye, Sungjoon [Seoul National Univ., Seoul (Korea, Republic of); Son, Kwangjae; Park, Uljae [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-07-01

    This study aims to develop a calibration system for the radioactivity of beta sources, using a calibration constant derived by comparing measurement and simulation. The activity of a beta-emitting isotope is hard to measure because of self-absorption and scattering, so activity values carry large uncertainties. The surface dose of a standard Sr/Y-90 source was measured with an extrapolation chamber and calculated by Monte Carlo simulation. The activity of the source (4.077 kBq) was measured through the NIST measurement assurance program, and several correction factors were calculated by the Monte Carlo method. The measurement result was corrected by these factors. The calibration constant was defined as the ratio of surface dose to activity. It was 4.5×10⁻⁸ and 6.52×10⁻⁸ for measurement and Monte Carlo, respectively, about a 15.4% difference in the calibration constant determined by the two techniques. The difference is attributed to depth uncertainty in the presence of high dose gradients. Some correction factors carry errors due to scattering from the detector geometry. A test source will be produced at HANARO, its activity will be calculated using the calibration constant, and a cross-calibration with NIST will be performed. Finally, the system will provide accurate information on sources.

  11. Monte Carlo simulation of source-excited in vivo x-ray fluorescence measurements of heavy metals.

    Science.gov (United States)

    O'Meara, J M; Chettle, D R; McNeill, F E; Prestwich, W V; Svensson, C E

    1998-06-01

    This paper reports on the Monte Carlo simulation of in vivo x-ray fluorescence (XRF) measurements. Our model is an improvement on previously reported simulations in that it relies on a theoretical basis for modelling Compton momentum broadening as well as detector efficiency. Furthermore, this model is an accurate simulation of experimentally detected spectra when comparisons are made in absolute counts; preceding models have generally only achieved agreement with spectra normalized to unit area. Our code is sufficiently flexible to be applied to the investigation of numerous source-excited in vivo XRF systems. Thus far the simulation has been applied to the modelling of two different systems. The first application was the investigation of various aspects of a new in vivo XRF system, the measurement of uranium in bone with 57Co in a backscatter (approximately 180 degrees) geometry. The Monte Carlo simulation was critical in assessing the potential of applying XRF to the measurement of uranium in bone. Currently the Monte Carlo code is being used to evaluate a potential means of simplifying an established in vivo XRF system, the measurement of lead in bone with 57Co in a 90 degrees geometry. The results from these simulations may demonstrate that calibration procedures can be significantly simplified and subject dose may be reduced. As well as providing an excellent tool for optimizing designs of new systems and improving existing techniques, this model can be used in the investigation of the dosimetry of various XRF systems. Our simulation allows a detailed understanding of the numerous processes involved when heavy metal concentrations are measured in vivo with XRF. PMID:9651014

  12. Monte Carlo simulation of source-excited in vivo x-ray fluorescence measurements of heavy metals

    Science.gov (United States)

    O'Meara, J. M.; Chettle, D. R.; McNeill, F. E.; Prestwich, W. V.; Svensson, C. E.

    1998-06-01

    This paper reports on the Monte Carlo simulation of in vivo x-ray fluorescence (XRF) measurements. Our model is an improvement on previously reported simulations in that it relies on a theoretical basis for modelling Compton momentum broadening as well as detector efficiency. Furthermore, this model is an accurate simulation of experimentally detected spectra when comparisons are made in absolute counts; preceding models have generally only achieved agreement with spectra normalized to unit area. Our code is sufficiently flexible to be applied to the investigation of numerous source-excited in vivo XRF systems. Thus far the simulation has been applied to the modelling of two different systems. The first application was the investigation of various aspects of a new in vivo XRF system, the measurement of uranium in bone with 57Co in a backscatter (approximately 180 degrees) geometry. The Monte Carlo simulation was critical in assessing the potential of applying XRF to the measurement of uranium in bone. Currently the Monte Carlo code is being used to evaluate a potential means of simplifying an established in vivo XRF system, the measurement of lead in bone with 57Co in a 90 degrees geometry. The results from these simulations may demonstrate that calibration procedures can be significantly simplified and subject dose may be reduced. As well as providing an excellent tool for optimizing designs of new systems and improving existing techniques, this model can be used in the investigation of the dosimetry of various XRF systems. Our simulation allows a detailed understanding of the numerous processes involved when heavy metal concentrations are measured in vivo with XRF.

  13. Weighted-delta-tracking for Monte Carlo particle transport

    International Nuclear Information System (INIS)

    Highlights: • This paper presents an alteration to the Monte Carlo Woodcock tracking technique. • The alteration improves computational efficiency within regions of high absorbers. • The rejection technique is replaced by a statistical weighting mechanism. • The modified Woodcock method is shown to be faster than standard Woodcock tracking. • The modified Woodcock method achieves a lower variance, given a specified accuracy. - Abstract: Monte Carlo particle transport (MCPT) codes are incredibly powerful and versatile tools for simulating particle behavior in a multitude of scenarios, such as core/criticality studies, radiation protection, shielding, medicine and fusion research, to name just a small subset of applications. However, MCPT codes can be very computationally expensive to run when the model geometry contains large attenuation depths and/or many components. This paper proposes a simple modification to the Woodcock tracking method used by some Monte Carlo particle transport codes. The Woodcock method uses rejection sampling of virtual collisions to avoid resampling collision distances at material boundaries, but it suffers from poor computational efficiency when the sample acceptance rate is low. The proposed method removes rejection sampling from the Woodcock method in favor of a statistical weighting scheme, which improves the computational efficiency of a Monte Carlo particle tracking code. It is shown that the modified Woodcock method is less computationally expensive than standard ray-tracing and rejection-based Woodcock tracking methods and achieves a lower variance, given a specified accuracy.
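
    The core idea of replacing virtual-collision rejection by statistical weighting can be seen in a minimal transmission estimate. The sketch below applies it to a purely absorbing heterogeneous slab: multiplying the weight by (1 - sigma_t/sigma_maj) at every tentative collision reproduces exp(-integral of sigma_t) in expectation. This is a generic illustration under these simplifying assumptions, not the paper's implementation.

        import math, random

        def transmission_weighted_delta(sigma_t, sigma_maj, depth, n=100_000, w_cut=0.05):
            """Estimate transmission through a purely absorbing slab.

            sigma_t: callable x -> total cross-section; sigma_maj: majorant.
            Every tentative collision scales the weight by (1 - sigma_t/sigma_maj)
            instead of being accepted or rejected; Russian roulette removes
            low-weight histories."""
            total = 0.0
            for _ in range(n):
                x, w = 0.0, 1.0
                while True:
                    x += -math.log(random.random()) / sigma_maj  # tentative collision
                    if x >= depth:
                        total += w        # escaped: score the surviving weight
                        break
                    w *= 1.0 - sigma_t(x) / sigma_maj  # weighting replaces rejection
                    if w < w_cut:         # Russian roulette on low weights
                        if random.random() < 0.5:
                            w *= 2.0
                        else:
                            break
            return total / n

        # Layered slab: sigma_t = 0.5 for x < 1 and 1.5 beyond; analytic answer exp(-2.0).
        est = transmission_weighted_delta(lambda x: 0.5 if x < 1.0 else 1.5, 2.0, 2.0)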

  14. Measurement and simulation of neutron detection efficiency in lead-scintillating fiber calorimeters

    Science.gov (United States)

    Anelli, M.; Bertolucci, S.; Bini, C.; Branchini, P.; Curceanu, C.; De Zorzi, G.; Di Domenico, A.; Di Micco, B.; Ferrari, A.; Fiore, S.; Gauzzi, P.; Giovannella, S.; Happacher, F.; Iliescu, M.; Martini, M.; Miscetti, S.; Nguyen, F.; Passeri, A.; Prokofiev, A.; Sciascia, B.; Sirghi, F.

    2009-12-01

    The overall detection efficiency to neutrons of a small prototype of the KLOE lead-scintillating fiber calorimeter has been measured at the neutron beam facility of The Svedberg Laboratory, TSL, Uppsala, in the kinetic energy range [5-175] MeV. The measurement of the neutron detection efficiency of a NE110 scintillator provided a reference calibration. At the lowest trigger threshold, the overall calorimeter efficiency ranges from 30% to 50%. This value largely exceeds the estimated 8-15% expected if the response were proportional only to the scintillator equivalent thickness. A detailed simulation of the calorimeter and of the TSL beam line has been performed with the FLUKA Monte Carlo code. First data-MC comparisons are encouraging and make it possible to disentangle a neutron-halo component in the beam.

  15. Measurement and simulation of neutron detection efficiency in lead-scintillating fiber calorimeters

    Energy Technology Data Exchange (ETDEWEB)

    Anelli, M.; Bertolucci, S. [Laboratori Nazionali di Frascati, INFN (Italy); Bini, C. [Dipartimento di Fisica dell'Universita 'La Sapienza', Roma (Italy); INFN Sezione di Roma, Roma (Italy); Branchini, P. [INFN Sezione di Roma Tre, Roma (Italy); Curceanu, C. [Laboratori Nazionali di Frascati, INFN (Italy); De Zorzi, G.; Di Domenico, A. [Dipartimento di Fisica dell'Universita 'La Sapienza', Roma (Italy); INFN Sezione di Roma, Roma (Italy); Di Micco, B. [Dipartimento di Fisica dell'Universita 'Roma Tre', Roma (Italy); INFN Sezione di Roma Tre, Roma (Italy); Ferrari, A. [Fondazione CNAO, Milano (Italy); Fiore, S.; Gauzzi, P. [Dipartimento di Fisica dell'Universita 'La Sapienza', Roma (Italy); INFN Sezione di Roma, Roma (Italy); Giovannella, S., E-mail: simona.giovannella@lnf.infn.i [Laboratori Nazionali di Frascati, INFN (Italy); Happacher, F. [Laboratori Nazionali di Frascati, INFN (Italy); Iliescu, M. [Laboratori Nazionali di Frascati, INFN (Italy); IFIN-HH, Bucharest (Romania); Martini, M. [Laboratori Nazionali di Frascati, INFN (Italy); Dipartimento di Energetica dell'Universita 'La Sapienza', Roma (Italy); Miscetti, S. [Laboratori Nazionali di Frascati, INFN (Italy); Nguyen, F. [Dipartimento di Fisica dell'Universita 'Roma Tre', Roma (Italy); INFN Sezione di Roma Tre, Roma (Italy); Passeri, A. [INFN Sezione di Roma Tre, Roma (Italy); Prokofiev, A. [Svedberg Laboratory, Uppsala University (Sweden); Sciascia, B. [Laboratori Nazionali di Frascati, INFN (Italy)

    2009-12-15

    The overall detection efficiency to neutrons of a small prototype of the KLOE lead-scintillating fiber calorimeter has been measured at the neutron beam facility of The Svedberg Laboratory, TSL, Uppsala, in the kinetic energy range [5-175] MeV. The measurement of the neutron detection efficiency of a NE110 scintillator provided a reference calibration. At the lowest trigger threshold, the overall calorimeter efficiency ranges from 30% to 50%. This value largely exceeds the estimated 8-15% expected if the response were proportional only to the scintillator equivalent thickness. A detailed simulation of the calorimeter and of the TSL beam line has been performed with the FLUKA Monte Carlo code. First data-MC comparisons are encouraging and make it possible to disentangle a neutron-halo component in the beam.

  16. Calibration of farmer dosemeters

    International Nuclear Information System (INIS)

    The Farmer dosemeters of the Atomic Energy Medical Centre (AEMC), Jamshoro, were calibrated in the Secondary Standard Dosimetry Laboratory (SSDL) at PINSTECH, using the NPL secondary standard therapy-level X-ray exposure meter. The results are presented in this report. (authors)

  17. Calibration Of Oxygen Monitors

    Science.gov (United States)

    Zalenski, M. A.; Rowe, E. L.; Mcphee, J. R.

    1988-01-01

    Readings are corrected for the temperature, pressure, and humidity of air. A program for a handheld computer was developed to ensure the accuracy of oxygen monitors in the National Transonic Facility, where liquid nitrogen is stored. Calibration values, determined daily, are based on entered data on barometric pressure, temperature, and relative humidity. Output is provided directly in millivolts.

  18. Commodity-Free Calibration

    Science.gov (United States)

    2008-01-01

    Commodity-free calibration is a reaction rate calibration technique that does not require the addition of any commodities. This technique is a specific form of the reaction rate technique, where all of the necessary reactants, other than the sample being analyzed, are either inherent in the analyzing system or specifically added or provided to the system for a reason other than calibration. After introduction, the component of interest is exposed to other reactants or flow paths already present in the system. The instrument detector records one of the following to determine the rate of reaction: the increase in the response of the reaction product, a decrease in the signal of the analyte response, or a decrease in the signal from the inherent reactant. With these data, the initial concentration of the analyte is calculated. This type of system can analyze and calibrate simultaneously, reduce the risk of false positives and of exposure to toxic vapors, and improve accuracy. Moreover, having an excess of the reactant already present in the system eliminates the need to add commodities, which further reduces cost, logistic problems, and potential contamination. The calculations involved are also simpler than those of the general reaction rate technique. We conducted tests with hypergols as an initial investigation into the feasibility of the technique.

  19. Calibration bench of flowmeters

    International Nuclear Information System (INIS)

    This equipment is devoted to the comparison of signals from two turbines installed in the Cabri experimental loop, where each signal is compared with that of a standard turbine. The characteristics and performance of the calibration bench are presented. (A.L.B.)

  20. Measurement System & Calibration report

    DEFF Research Database (Denmark)

    Vesth, Allan; Kock, Carsten Weber

    This Measurement System & Calibration report describes DTU's measurement system installed at a specific wind turbine. A major part of the sensors was installed by others (see [1]); the remaining sensors were installed by DTU. The results of the measurements, described in this report...

  1. Calibration with Absolute Shrinkage

    DEFF Research Database (Denmark)

    Øjelund, Henrik; Madsen, Henrik; Thyregod, Poul

    2001-01-01

    In this paper, penalized regression using the L1 norm on the estimated parameters is proposed for chemometric calibration. The algorithm is of the lasso type, introduced by Tibshirani in 1996 as a linear regression method with a bound on the absolute length of the parameters, but a modification...
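
    As a concrete illustration of this kind of L1-penalized calibration, the sketch below fits a lasso model to a synthetic sparse spectral data set with scikit-learn; the data, penalty weight and channel indices are made-up assumptions, and the fragment shows the general lasso technique rather than the paper's modified algorithm.

        import numpy as np
        from sklearn.linear_model import Lasso

        # Toy chemometric calibration: predict a concentration from a
        # 50-channel spectrum in which only a few channels carry signal.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(40, 50))                  # 40 samples, 50 wavelengths
        true_coef = np.zeros(50)
        true_coef[[3, 17, 31]] = [1.5, -2.0, 0.8]      # sparse ground truth
        y = X @ true_coef + 0.1 * rng.normal(size=40)

        model = Lasso(alpha=0.05).fit(X, y)   # L1 penalty zeroes most coefficients
        print(np.flatnonzero(model.coef_))    # indices of the surviving channels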

  2. Calibration issues for MUSE

    Science.gov (United States)

    Kelz, Andreas; Roth, Martin; Bauer, Svend; Gerssen, Joris; Hahn, Thomas; Weilbacher, Peter; Laux, Uwe; Loupias, Magali; Kosmalski, Johan; McDermid, Richard; Bacon, Roland

    2008-07-01

    The Multi-Unit Spectroscopic Explorer (MUSE) is an integral-field spectrograph for the VLT for the next decade. Using an innovative field-splitting and slicing design, combined with an assembly of 24 spectrographs, MUSE will provide some 90,000 spectra in one exposure, covering a simultaneous spectral range from 465 to 930 nm. The design and manufacture of the Calibration Unit, the alignment tests of the Spectrograph and Detector sub-systems, and the development of the Data Reduction Software for MUSE are work packages under the responsibility of the AIP, a partner in a Europe-wide consortium of six institutes and ESO led by the Centre de Recherche Astronomique de Lyon. MUSE will be operated, and therefore has to be calibrated, in a variety of modes, which include seeing-limited and AO-assisted operations providing wide and narrow fields of view. MUSE aims to obtain unprecedented ultra-deep 3D-spectroscopic exposures, involving integration times of the order of 80 hours at the VLT. To achieve the corresponding science goals, instrumental stability, accurate calibration and adequate data reduction tools are needed. The paper describes the status at PDR of the AIP-related work packages, in particular with respect to the spatial, spectral, image quality, and geometrical calibration and related data reduction aspects.

  3. Entropic calibration revisited

    Energy Technology Data Exchange (ETDEWEB)

    Brody, Dorje C. [Blackett Laboratory, Imperial College, London SW7 2BZ (United Kingdom)]. E-mail: d.brody@imperial.ac.uk; Buckley, Ian R.C. [Centre for Quantitative Finance, Imperial College, London SW7 2AZ (United Kingdom); Constantinou, Irene C. [Blackett Laboratory, Imperial College, London SW7 2BZ (United Kingdom); Meister, Bernhard K. [Blackett Laboratory, Imperial College, London SW7 2BZ (United Kingdom)

    2005-04-11

    The entropic calibration of the risk-neutral density function is effective in recovering the strike dependence of options, but encounters difficulties in determining the relevant greeks. By use of put-call reversal we apply the entropic method to the time-reversed economy, which allows us to obtain the spot-price dependence of options and the relevant greeks.
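
    For orientation, entropic calibration generally selects the risk-neutral density of minimum relative entropy to a prior p, subject to reproducing the observed option prices. The standard exponential-family solution takes the form below, with payoffs C_i and quoted prices c_i; this is a textbook statement of the technique, not a formula quoted from the paper:

        q^{*}(x) = \frac{p(x)\,\exp\big(\sum_i \lambda_i C_i(x)\big)}
                        {\int p(u)\,\exp\big(\sum_i \lambda_i C_i(u)\big)\,du},
        \qquad
        \mathbb{E}_{q^{*}}\!\left[C_i(S_T)\right] = c_i,

    where the Lagrange multipliers lambda_i are fixed by the price constraints.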

  4. Physiotherapy ultrasound calibrations

    International Nuclear Information System (INIS)

    Calibration of physiotherapy ultrasound equipment has long been a problem. Numerous surveys around the world over the past 20 years have all found that only a low percentage of the units tested had an output within 30% of that indicated. In New Zealand, a survey carried out by the NRL in 1985 found that only 24% had an output, at the maximum setting, within ±20% of that indicated. The present performance Standard for new equipment (NZS 3200.2.5:1992) requires that the measured output should not deviate from that indicated by more than ±30%. This may be tightened to ±20% in the next few years. Any calibration is only as good as the calibration equipment. Some force balances can be tested with small weights to simulate the force exerted by an ultrasound beam, but with others this is not possible. For such balances, testing may only be feasible with a calibrated source, which could be used like a transfer standard. (author). 4 refs., 3 figs

  5. NVLAP calibration laboratory program

    Energy Technology Data Exchange (ETDEWEB)

    Cigler, J.L.

    1993-12-31

    This paper presents an overview of the progress up to April 1993 in the development of the Calibration Laboratories Accreditation Program within the framework of the National Voluntary Laboratory Accreditation Program (NVLAP) at the National Institute of Standards and Technology (NIST).

  6. LOFAR Facet Calibration

    Science.gov (United States)

    van Weeren, R. J.; Williams, W. L.; Hardcastle, M. J.; Shimwell, T. W.; Rafferty, D. A.; Sabater, J.; Heald, G.; Sridhar, S. S.; Dijkema, T. J.; Brunetti, G.; Brüggen, M.; Andrade-Santos, F.; Ogrean, G. A.; Röttgering, H. J. A.; Dawson, W. A.; Forman, W. R.; de Gasperin, F.; Jones, C.; Miley, G. K.; Rudnick, L.; Sarazin, C. L.; Bonafede, A.; Best, P. N.; Bîrzan, L.; Cassano, R.; Chyży, K. T.; Croston, J. H.; Ensslin, T.; Ferrari, C.; Hoeft, M.; Horellou, C.; Jarvis, M. J.; Kraft, R. P.; Mevius, M.; Intema, H. T.; Murray, S. S.; Orrú, E.; Pizzo, R.; Simionescu, A.; Stroe, A.; van der Tol, S.; White, G. J.

    2016-03-01

    LOFAR, the Low-Frequency Array, is a powerful new radio telescope operating between 10 and 240 MHz. LOFAR allows detailed sensitive high-resolution studies of the low-frequency radio sky. At the same time LOFAR also provides excellent short-baseline coverage to map diffuse extended emission. However, producing high-quality deep images is challenging due to the presence of direction-dependent calibration errors, caused by imperfect knowledge of the station beam shapes and the ionosphere. Furthermore, the large data volume and the presence of station clock errors present additional difficulties. In this paper we present a new calibration scheme, which we name facet calibration, to obtain deep high-resolution LOFAR High Band Antenna images using the Dutch part of the array. This scheme solves and corrects the direction-dependent errors in a number of facets that cover the observed field of view. Facet calibration provides close to thermal noise limited images for a typical 8 hr observing run at ~5 arcsec resolution, meeting the specifications of the LOFAR Tier-1 northern survey.

  7. Radiation monitor calibration technique

    International Nuclear Information System (INIS)

    Reference radiations in the Secondary Standard Dosimetry Laboratory, OAEP, have been improved and modified by employing lead attenuators. To determine low-level exposure rates, the shadow-cone method has been applied. The secondary standard dosemeter has been used periodically to check the constancy of the reference radiations, to assure the calibration of dosemeters and dose-ratemeters used for radiation protection.

  8. LOFAR facet calibration

    CERN Document Server

    van Weeren, R J; Hardcastle, M J; Shimwell, T W; Rafferty, D A; Sabater, J; Heald, G; Sridhar, S S; Dijkema, T J; Brunetti, G; Brüggen, M; Andrade-Santos, F; Ogrean, G A; Röttgering, H J A; Dawson, W A; Forman, W R; de Gasperin, F; Jones, C; Miley, G K; Rudnick, L; Sarazin, C L; Bonafede, A; Best, P N; Bîrzan, L; Cassano, R; Chyży, K T; Croston, J H; Ensslin, T; Ferrari, C; Hoeft, M; Horellou, C; Jarvis, M J; Kraft, R P; Mevius, M; Intema, H T; Murray, S S; Orrú, E; Pizzo, R; Simionescu, A; Stroe, A; van der Tol, S; White, G J

    2016-01-01

    LOFAR, the Low-Frequency Array, is a powerful new radio telescope operating between 10 and 240 MHz. LOFAR allows detailed sensitive high-resolution studies of the low-frequency radio sky. At the same time LOFAR also provides excellent short-baseline coverage to map diffuse extended emission. However, producing high-quality deep images is challenging due to the presence of direction-dependent calibration errors, caused by imperfect knowledge of the station beam shapes and the ionosphere. Furthermore, the large data volume and the presence of station clock errors present additional difficulties. In this paper we present a new calibration scheme, which we name facet calibration, to obtain deep high-resolution LOFAR High Band Antenna images using the Dutch part of the array. This scheme solves and corrects the direction-dependent errors in a number of facets that cover the observed field of view. Facet calibration provides close to thermal noise limited images for a typical 8 hr observing run at ~5 arcsec resolu...

  9. Pseudo Linear Gyro Calibration

    Science.gov (United States)

    Harman, Richard; Bar-Itzhack, Itzhack Y.

    2003-01-01

    Previous high-fidelity onboard attitude algorithms estimated only the spacecraft attitude and gyro bias. The desire to promote spacecraft and ground autonomy, and improvements in onboard computing power, have spurred development of more sophisticated calibration algorithms. Namely, there is a desire to provide for sensor calibration through calibration-parameter estimation onboard the spacecraft as well as autonomous estimation on the ground. Gyro calibration is a particularly challenging area of research. There is a variety of gyro devices available for any prospective mission, ranging from inexpensive low-fidelity gyros with potentially unstable scale factors to much more expensive, extremely stable high-fidelity units. Much research has been devoted to designing dedicated estimators such as particular Extended Kalman Filter (EKF) algorithms or Square Root Information Filters. This paper builds upon previous attitude, rate, and specialized gyro parameter estimation work performed with the Pseudo-Linear Kalman Filter (PSELIKA). The PSELIKA advantage is the use of the standard linear Kalman filter algorithm. A PSELIKA algorithm for an orthogonal gyro set, which includes estimates of attitude, rate, gyro misalignments, gyro scale factors, and gyro bias, is developed and tested using simulated and flight data. The measurements PSELIKA uses include gyro and quaternion tracker data.
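
    The building block that PSELIKA reuses is the standard linear Kalman filter cycle. The sketch below shows that generic predict/update step applied to a toy two-element state of gyro bias and scale-factor error; the state layout and all matrices are illustrative assumptions, not the paper's filter design.

        import numpy as np

        def kalman_step(x, P, F, Q, H, R, z):
            """One predict/update cycle of a standard linear Kalman filter.
            x: state, P: covariance, F: transition, Q: process noise,
            H: measurement model, R: measurement noise, z: measurement."""
            x = F @ x                            # predict
            P = F @ P @ F.T + Q
            S = H @ P @ H.T + R                  # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
            x = x + K @ (z - H @ x)              # update
            P = (np.eye(len(x)) - K @ H) @ P
            return x, P

        # Illustrative 2-state example: [gyro bias, scale-factor error].
        x, P = np.zeros(2), np.eye(2)
        F, Q = np.eye(2), 1e-6 * np.eye(2)       # errors drift slowly
        H = np.array([[1.0, 0.5]])               # hypothetical observation row
        R = np.array([[1e-3]])
        x, P = kalman_step(x, P, F, Q, H, R, z=np.array([0.01]))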

  10. Pleiades Absolute Calibration : Inflight Calibration Sites and Methodology

    Science.gov (United States)

    Lachérade, S.; Fourest, S.; Gamet, P.; Lebègue, L.

    2012-07-01

    In-flight calibration of space sensors once in orbit is a decisive step toward fulfilling the mission objectives. This article presents the methods of in-flight absolute calibration performed during the commissioning phase. Four in-flight calibration methods are used: absolute calibration, cross-calibration with reference sensors such as PARASOL or MERIS, multi-temporal monitoring, and inter-band calibration. These algorithms are based on acquisitions over natural targets such as African deserts, Antarctic sites, La Crau (automatic calibration station) and oceans (calibration over molecular scattering), as well as new extra-terrestrial sites such as the Moon and selected stars. After an overview of the instrument and a description of the calibration sites, it is pointed out how each method is able to address one or several aspects of the calibration. We focus on how these methods complement each other in operational use, and how they help build a coherent set of information that addresses all aspects of in-orbit calibration. Finally, we present the perspectives that the high agility of PLEIADES offers for the improvement of its calibration and a better characterization of the calibration sites.

  11. Mercury CEM Calibration

    Energy Technology Data Exchange (ETDEWEB)

    John F. Schabron; Joseph F. Rovani; Susan S. Sorini

    2007-03-31

    The Clean Air Mercury Rule (CAMR) which was published in the Federal Register on May 18, 2005, requires that calibration of mercury continuous emissions monitors (CEMs) be performed with NIST-traceable standards. Western Research Institute (WRI) is working closely with the Electric Power Research Institute (EPRI), the National Institute of Standards and Technology (NIST), and the Environmental Protection Agency (EPA) to facilitate the development of the experimental criteria for a NIST traceability protocol for dynamic elemental mercury vapor generators. The traceability protocol will be written by EPA. Traceability will be based on the actual analysis of the output of each calibration unit at several concentration levels ranging from about 2-40 µg/m³, and this analysis will be directly traceable to analyses by NIST using isotope dilution inductively coupled plasma/mass spectrometry (ID ICP/MS) through a chain of analyses linking the calibration unit in the power plant to the NIST ID ICP/MS. Prior to this project, NIST did not provide a recommended mercury vapor pressure equation or list mercury vapor pressure in its vapor pressure database. The NIST Physical and Chemical Properties Division in Boulder, Colorado was subcontracted under this project to study the issue in detail and to recommend a mercury vapor pressure equation that the vendors of mercury vapor pressure calibration units can use to calculate the elemental mercury vapor concentration in an equilibrium chamber at a particular temperature. As part of this study, a preliminary evaluation of calibration units from five vendors was made. The work was performed by NIST in Gaithersburg, MD and Joe Rovani from WRI who traveled to NIST as a Visiting Scientist.

  12. Calibrated Properties Model

    International Nuclear Information System (INIS)

    The purpose of this Model Report is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Office of Repository Development (ORD). The UZ contains the unsaturated rock layers overlying the repository and host unit, which constitute a natural barrier to flow, and the unsaturated rock layers below the repository, which constitute a natural barrier to flow and transport. This work followed, and was planned in, "Technical Work Plan (TWP) for: Performance Assessment Unsaturated Zone" (BSC 2002 [160819], Section 1.10.8 [under Work Package (WP) AUZM06, Climate Infiltration and Flow], and Section I-1-1 [in Attachment I, Model Validation Plans]). In Section 4.2, four acceptance criteria (ACs) are identified for acceptance of this Model Report; only one of these (Section 4.2.1.3.6.3, AC 3) was identified in the TWP (BSC 2002 [160819], Table 3-1). These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, and drift-scale and mountain-scale coupled-process models from the UZ Flow, Transport and Coupled Processes Department in the Natural Systems Subproject of the Performance Assessment (PA) Project. The Calibrated Properties Model output will also be used by the Engineered Barrier System Department in the Engineering Systems Subproject. The Calibrated Properties Model provides input, through the UZ Model and other process models of natural and engineered systems, to the Total System Performance Assessment (TSPA) models, in accord with the PA Strategy and Scope in the PA Project of the Bechtel SAIC Company, LLC (BSC). The UZ process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and to predict flow and transport behavior under a variety of climatic and thermal-loading conditions. UZ flow is a TSPA model component.

  13. Hierarchical Bayesian Data Analysis in Radiometric SAR System Calibration: A Case Study on Transponder Calibration with RADARSAT-2 Data

    Directory of Open Access Journals (Sweden)

    Björn J. Döring

    2013-12-01

    A synthetic aperture radar (SAR) system requires external absolute calibration so that radiometric measurements can be exploited in numerous scientific and commercial applications. Besides estimating a calibration factor, metrological standards also demand the derivation of a respective calibration uncertainty. This uncertainty is currently not systematically determined. Here, for the first time, it is proposed to use hierarchical modeling and Bayesian statistics as a consistent method for handling and analyzing the hierarchical data typically acquired during external calibration campaigns. Through the use of Markov chain Monte Carlo simulations, a joint posterior probability can be conveniently derived from measurement data despite the necessary grouping of data samples. The applicability of the method is demonstrated through a case study: the radar reflectivity of DLR's new C-band Kalibri transponder is derived through a series of RADARSAT-2 acquisitions and a comparison with reference point targets (corner reflectors). The systematic derivation of calibration uncertainties is seen as an important step toward traceable radiometric calibration of synthetic aperture radars.
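
    The flavor of such an analysis can be conveyed with a small sketch: a random-walk Metropolis sampler for a two-level normal hierarchy in which repeated acquisitions group the point-target measurements. The model, the flat priors and all numbers are illustrative assumptions; the paper's actual model and data are richer.

        import numpy as np

        rng = np.random.default_rng(1)
        # Toy grouped data: three acquisition passes, each with a few
        # measurements of the same calibration factor (made-up values).
        groups = [np.array([34.9, 35.1, 35.0]),
                  np.array([35.3, 35.2]),
                  np.array([34.8, 34.7, 34.9, 35.0])]

        def log_post(mu, tau, sigma):
            """Group means ~ N(mu, tau^2); data ~ N(group mean, sigma^2).
            Group means are marginalised analytically; priors are flat."""
            if tau <= 0 or sigma <= 0:
                return -np.inf
            lp = 0.0
            for g in groups:
                n = len(g)
                var = tau**2 + sigma**2 / n      # variance of the group sample mean
                lp += -0.5 * ((g.mean() - mu)**2 / var + np.log(var))
                lp += -0.5 * np.sum((g - g.mean())**2) / sigma**2 - (n - 1) * np.log(sigma)
            return lp

        theta = np.array([35.0, 0.1, 0.1])       # (mu, tau, sigma) start
        lp = log_post(*theta)
        samples = []
        for _ in range(20_000):                  # random-walk Metropolis
            prop = theta + rng.normal(scale=0.02, size=3)
            lp_prop = log_post(*prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            samples.append(theta[0])
        print(np.mean(samples[5000:]), np.std(samples[5000:]))  # mu and its spread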

  14. Measurement and simulation of the neutron detection efficiency with a Pb-scintillating fiber calorimeter

    Science.gov (United States)

    Anelli, M.; Battistoni, G.; Bertolucci, S.; Bini, C.; Branchini, P.; Curceanu, C.; De Zorzi, G.; Di Domenico, A.; Di Micco, B.; Ferrari, A.; Fiore, S.; Gauzzi, P.; Giovannella, S.; Happacher, F.; Iliescu, M.; Martini, M.; Miscetti, S.; Nguyen, F.; Passeri, A.; Prokofiev, A.; Sala, P.; Sciascia, B.; Sirghi, F.

    2009-04-01

    We have measured the overall detection efficiency to neutrons of a small prototype of the KLOE Pb-scintillating fiber calorimeter in the kinetic energy range [5-175] MeV. The measurement has been done in a dedicated test beam at the neutron beam facility of The Svedberg Laboratory, TSL, Uppsala. The measurement of the neutron detection efficiency of a NE110 scintillator provided a reference calibration. At the lowest trigger threshold, the overall calorimeter efficiency ranges from 28% to 33%. This value largely exceeds the estimated ~8% expected if the response were proportional only to the scintillator equivalent thickness. A detailed simulation of the calorimeter and of the TSL beam line has been performed with the FLUKA Monte Carlo code. The simulated response of the detector to neutrons is presented together with the first data-to-Monte Carlo comparison. The results show an overall neutron efficiency of about 35%. The reasons for such an efficiency enhancement, in comparison with typical scintillator-based neutron counters, are explained, opening the road to a novel neutron detector.

  15. Measurement and simulation of the neutron detection efficiency with a Pb-scintillating fiber calorimeter

    Energy Technology Data Exchange (ETDEWEB)

    Anelli, M.; Bertolucci, S.; Curceanu, C.; Giovannella, S.; Happacher, F.; Iliescu, M.; Martini, M.; Miscetti, S. [Laboratori Nazionali di Frascati, INFN (Italy); Battistoni, G. [Sezione INFN di Milano (Italy); Bini, C.; De Zorzi, G.; Di Domenico, A.; Gauzzi, P. [Universita degli Studi 'La Sapienza' e Sezione INFN di Roma (Italy); Branchini, P.; Di Micco, B.; Nguyen, F.; Passeri, A. [Universita degli Studi 'Roma Tre' e Sezione INFN di Roma Tre (Italy); Ferrari, A. [Fondazione CNAO, Milano (Italy); Prokofiev, A. [Svedberg Laboratory, Uppsala University (Sweden); Fiore, S., E-mail: matteo.martino@inf.infn.i

    2009-04-01

    We have measured the overall detection efficiency to neutrons of a small prototype of the KLOE Pb-scintillating fiber calorimeter in the kinetic energy range [5-175] MeV. The measurement has been done in a dedicated test beam at the neutron beam facility of The Svedberg Laboratory, TSL, Uppsala. The measurement of the neutron detection efficiency of a NE110 scintillator provided a reference calibration. At the lowest trigger threshold, the overall calorimeter efficiency ranges from 28% to 33%. This value largely exceeds the estimated ~8% expected if the response were proportional only to the scintillator equivalent thickness. A detailed simulation of the calorimeter and of the TSL beam line has been performed with the FLUKA Monte Carlo code. The simulated response of the detector to neutrons is presented together with the first data-to-Monte Carlo comparison. The results show an overall neutron efficiency of about 35%. The reasons for such an efficiency enhancement, in comparison with typical scintillator-based neutron counters, are explained, opening the road to a novel neutron detector.

  16. Enhancing Seismic Calibration Research Through Software Automation

    Energy Technology Data Exchange (ETDEWEB)

    Ruppert, S; Dodge, D; Elliott, A; Ganzberger, M; Hauk, T; Matzel, E; Ryall, F

    2004-07-09

    The National Nuclear Security Administration (NNSA) Ground-Based Nuclear Explosion Monitoring Research and Engineering (GNEM R&E) Program has made significant progress enhancing the process of deriving seismic calibrations and performing scientific integration with automation tools. We present an overview of our software automation efforts and the framework that addresses the problematic issues of very large datasets and varied formats utilized during seismic calibration research. The software and scientific automation initiatives directly support the rapid collection of raw and contextual seismic data used in research, provide efficient interfaces for researchers to measure and analyze data, and provide a framework for research dataset integration. The automation also improves the researcher's ability to assemble quality-controlled research products for delivery into the NNSA Knowledge Base (KB). The software and scientific automation tasks provide the robust foundation upon which synergistic and efficient development of GNEM R&E Program seismic calibration research may be built. The task of constructing many seismic calibration products is labor intensive and complex, hence expensive. However, aspects of calibration product construction are susceptible to automation and future economies. We are applying software and scientific automation to problems within two distinct phases or 'tiers' of the seismic calibration process. The first tier involves the initial collection of the waveform and parameter (bulletin) data that comprise the 'raw materials' from which signal travel-time and amplitude correction surfaces are derived, and is highly suited to software automation. The second tier of seismic research content development activities includes the development of correction surfaces and other calibrations. This second tier is less susceptible to complete automation, as these activities require the judgment of scientists skilled in the interpretation of often highly

  17. VARIATIONAL MONTE-CARLO APPROACH FOR ARTICULATED OBJECT TRACKING

    Directory of Open Access Journals (Sweden)

    Kartik Dwivedi

    2013-12-01

    In this paper, we describe a novel variational Monte Carlo approach for modeling and tracking body parts of articulated objects. An articulated object (human target) is represented as a dynamic Markov network of its constituent parts. The proposed approach combines local information about individual body parts with spatial constraints imposed by neighboring parts. The movement of the parts of the articulated body is modeled with local displacement information from the Markov network and global information from neighboring parts. We explore the effect of certain model parameters (including the number of parts tracked, the number of Monte Carlo cycles, etc.) on system accuracy, and show that our variational Monte Carlo approach achieves better efficiency and effectiveness than other methods on a number of real-time video datasets containing single targets.
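
    For readers unfamiliar with sequential Monte Carlo trackers, the sketch below shows a baseline bootstrap particle filter on a toy one-dimensional track. It is the generic building block such approaches refine, not the paper's variational Markov-network scheme, and the noise levels are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(0)

        def particle_filter(observations, n_particles=500, proc_std=1.0, obs_std=2.0):
            """Bootstrap particle filter for a 1-D position track."""
            particles = rng.normal(0.0, 5.0, n_particles)     # initial belief
            estimates = []
            for z in observations:
                particles = particles + rng.normal(0.0, proc_std, n_particles)  # predict
                w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)  # likelihood weights
                w /= w.sum()
                particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample
                estimates.append(particles.mean())
            return np.array(estimates)

        truth = np.cumsum(rng.normal(0.0, 1.0, 50))           # simulated target path
        obs = truth + rng.normal(0.0, 2.0, 50)
        print(np.abs(particle_filter(obs) - truth).mean())    # mean tracking error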

  18. Application of biasing techniques to the contributon Monte Carlo method

    International Nuclear Information System (INIS)

    Recently, a new Monte Carlo method, called the contributon Monte Carlo method, was developed. The method is based on the theory of contributons, and uses a new recipe for estimating target responses by a volume integral over the contributon current. The analog features of the new method were discussed in previous publications. The application of some biasing methods to the new contributon scheme is examined here. A theoretical model is developed that enables an analytic prediction of the benefit to be expected when these biasing schemes are applied to both the contributon method and regular Monte Carlo. This model is verified by a variety of numerical experiments and is shown to yield satisfactory results, especially for deep-penetration problems. Other considerations regarding the efficient use of the new method are also discussed, and remarks are made as to the application of other biasing methods. 14 figures, 1 table

  19. Field calibration of cup anemometers

    DEFF Research Database (Denmark)

    Schmidt Paulsen, Uwe; Mortensen, Niels Gylling; Hansen, Jens Carsten;

    2007-01-01

    A field calibration method and results are described, along with the experience gained with the method. The cup anemometers to be calibrated are mounted in a row on a 10 m high rig and calibrated in the free wind against a reference cup anemometer. The method has been reported [1] to improve the statistical bias on the data relative to calibrations carried out in a wind tunnel. The methodology is sufficiently accurate for calibration of cup anemometers used for wind resource assessments and provides a simple, reliable and cost-effective solution to cup anemometer calibration, especially suited for...

  20. Proton Upset Monte Carlo Simulation

    Science.gov (United States)

    O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.

    2009-01-01

    The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as low Earth orbit, lunar orbit, and the like) from proton bombardment, based on the results of heavy-ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled, and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.

  1. Reduced Calibration Curve for Proton Computed Tomography

    Science.gov (United States)

    Yevseyeva, Olga; de Assis, Joaquim; Evseev, Ivan; Schelin, Hugo; Paschuk, Sergei; Milhoretto, Edney; Setti, João; Díaz, Katherin; Hormaza, Joel; Lopes, Ricardo

    2010-05-01

    pCT deals with relatively thick targets like the human head or trunk. Thus, the fidelity of pCT as a tool for proton therapy planning depends on the accuracy of the physical formulas used for proton interaction with thick absorbers. Although the actual overall accuracy of the proton stopping power in the Bethe-Bloch domain is about 1%, the analytical calculations and the Monte Carlo simulations with codes like TRIM/SRIM, MCNPX and GEANT4 do not agree with one another. Attempts to validate the codes against experimental data for thick absorbers face difficulties: few data are available, and the existing data sets were acquired at different initial proton energies and for different absorber materials. In this work we compare the results of our Monte Carlo simulations with existing experimental data in terms of a reduced calibration curve, i.e., the range-energy dependence normalized on the range scale by the full projected CSDA range (taken from the NIST PSTAR database) for the given initial proton energy in a given material, and on the final-energy scale by the given initial proton energy. This approach is almost energy and material independent. The results of our analysis are important for pCT development because the contradictions observed at arbitrarily low initial proton energies can now be easily scaled to typical pCT energies.

  2. Calibration of filters for detection of airborne I-131 in the environment of nuclear power plant

    International Nuclear Information System (INIS)

    A simple and clean method for the efficiency calibration of filters for the collection of airborne I-131, and of the corresponding Ge(Li) spectrometer, is described. A radiopharmaceutical aqueous solution of NaI is used as the calibrated source of gaseous I-131. As a calibration example, the absolute activity distribution of I-131 measured in a charcoal filter is shown. (author)

  3. Applications of Monte Carlo methods in nuclear science and engineering

    International Nuclear Information System (INIS)

    With the advent of inexpensive computing power over the past two decades and the development of variance reduction techniques, applications of Monte Carlo radiation transport techniques have proliferated dramatically. The motivation for variance reduction techniques is computational efficiency. The typical variance reduction techniques worth mentioning here are: importance sampling, implicit capture, energy and angular biasing, Russian roulette, the exponential transform, the next-event estimator, the weight window generator, and the range rejection technique (only for charged particles). Applications of Monte Carlo in radiation transport include nuclear safeguards, accelerator applications, homeland security, nuclear criticality, health physics, radiological safety, radiography, radiotherapy physics, radiation standards, and nuclear medicine (dosimetry and imaging). In health care, Monte Carlo particle transport techniques offer exciting tools for radiotherapy research (cancer treatments involving photons, electrons, neutrons, protons, pions and other heavy ions), where they play an increasingly important role. Research and applications of Monte Carlo techniques in radiotherapy span a very wide range, from fundamental studies of cross sections and development of particle transport algorithms to clinical evaluation of treatment plans for a variety of radiotherapy modalities. A recent development is voxel-based Monte Carlo radiotherapy treatment planning involving external electron beams and patient data in the form of DICOM (Digital Imaging and Communications in Medicine) images. Articles relevant to the INIS are indexed separately.
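
    Of the techniques listed, implicit capture is the simplest to show in code. The sketch below is a generic illustration of survival biasing with a Russian roulette clean-up; the cross-section arguments and the weight cutoff are arbitrary assumptions.

        import random

        def implicit_capture_collision(weight, sigma_t, sigma_a, w_cut=0.01):
            """One collision under implicit capture (survival biasing):
            rather than killing the particle with probability sigma_a/sigma_t,
            absorb only a weight fraction and always scatter; Russian roulette
            then removes histories whose weight has become negligible."""
            weight *= 1.0 - sigma_a / sigma_t    # absorb a fraction of the weight
            if weight < w_cut:                   # Russian roulette
                if random.random() < 0.5:
                    weight *= 2.0                # survivor carries doubled weight
                else:
                    weight = 0.0                 # history terminated
            return weight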

  4. Quantum Monte Carlo Endstation for Petascale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Lubos Mitas

    2011-01-26

    The NCSU research group has been focused on accomplishing the key goals of this initiative: establishing a new generation of quantum Monte Carlo (QMC) computational tools as part of the Endstation petaflop initiative, for use at the DOE ORNL computational facilities and by the computational electronic structure community at large; carrying out high-accuracy quantum Monte Carlo demonstration projects applying these tools to forefront electronic structure problems in molecular and solid systems; and expanding, explaining and enhancing the impact of these advanced computational approaches. In particular, we have developed the quantum Monte Carlo code QWalk (www.qwalk.org), which was significantly expanded and optimized using funds from this support and has become an actively used tool in the petascale regime by ORNL researchers and beyond. These developments build upon efforts undertaken by the PI's group and collaborators over the last decade. The code was optimized and tested extensively on a number of parallel architectures, including the petaflop ORNL Jaguar machine. We have developed and redesigned a number of code modules, such as the evaluation of wave functions and orbitals, calculations of pfaffians and the introduction of backflow coordinates, together with the overall organization of the code and random-walker distribution over multicore architectures. We have addressed several bottlenecks such as load balancing, and verified the efficiency and accuracy of the calculations with the other groups of the Endstation team. The QWalk package contains about 50,000 lines of high-quality object-oriented C++ and also includes interfaces to data files from other conventional electronic structure codes such as Gamess, Gaussian, Crystal and others. This grant supported the PI for one month during summers, a full-time postdoc, and partially three graduate students over the grant duration; it has resulted in 13

  5. A continuation multilevel Monte Carlo algorithm

    KAUST Repository

    Collier, Nathan

    2014-09-05

    We propose a novel Continuation Multi Level Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending when the required error tolerance is satisfied. CMLMC assumes discretization hierarchies that are defined a priori for each level and are geometrically refined across levels. The actual choice of computational work across levels is based on parametric models for the average cost per sample and the corresponding variance and weak error. These parameters are calibrated using Bayesian estimation, taking particular notice of the deepest levels of the discretization hierarchy, where only few realizations are available to produce the estimates. The resulting CMLMC estimator exhibits a non-trivial splitting between bias and statistical contributions. We also show the asymptotic normality of the statistical error in the MLMC estimator and justify in this way our error estimate that allows prescribing both required accuracy and confidence in the final result. Numerical results substantiate the above results and illustrate the corresponding computational savings in examples that are described in terms of differential equations either driven by random measures or with random coefficients. © 2014, Springer Science+Business Media Dordrecht.
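
    The telescoping estimator that the CMLMC algorithm builds on can be sketched compactly. The example below runs plain (non-continuation) multilevel Monte Carlo on a toy Euler-discretised geometric Brownian motion, coupling coarse and fine paths through shared Brownian increments; the model, parameters and sample allocations are illustrative assumptions, not the paper's calibrated choices.

        import numpy as np

        rng = np.random.default_rng(0)

        def level_difference(l, n, T=1.0, mu=0.05, vol=0.2):
            """Sample mean of P_l - P_{l-1} over n coupled paths, where P_l is
            the terminal value of an Euler scheme with 2**l steps. Coarse and
            fine paths share the same Brownian increments."""
            m = 2 ** l
            dt = T / m
            dW = rng.normal(0.0, np.sqrt(dt), size=(n, m))
            s_fine = np.ones(n)
            for i in range(m):                       # fine path
                s_fine *= 1.0 + mu * dt + vol * dW[:, i]
            if l == 0:
                return s_fine.mean()
            s_coarse = np.ones(n)
            dW_c = dW[:, 0::2] + dW[:, 1::2]         # pairwise-summed increments
            for i in range(m // 2):                  # coupled coarse path
                s_coarse *= 1.0 + mu * 2 * dt + vol * dW_c[:, i]
            return (s_fine - s_coarse).mean()

        # MLMC estimate of E[S_T]; the analytic limit is exp(0.05), about 1.051.
        levels, n_samples = [0, 1, 2, 3, 4], [20_000, 8_000, 3_000, 1_200, 500]
        print(sum(level_difference(l, n) for l, n in zip(levels, n_samples)))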

  6. Monte Carlo simulation framework for TMT

    Science.gov (United States)

    Vogiatzis, Konstantinos; Angeli, George Z.

    2008-07-01

    This presentation describes a strategy for assessing the performance of the Thirty Meter Telescope (TMT). A Monte Carlo Simulation Framework has been developed to combine optical modeling with Computational Fluid Dynamics (CFD) simulations, Finite Element Analysis (FEA) and controls to model the overall performance of TMT. The framework consists of a two-year record of observed environmental parameters such as atmospheric seeing, site wind speed and direction, ambient temperature and local sunset and sunrise times, along with telescope azimuth and elevation, at a given sampling rate. The modeled optical, dynamic and thermal seeing aberrations are available in matrix form for distinct values within the range of influencing parameters. These parameters are either part of the framework parameter set or can be derived from it at each time step. As time advances, the aberrations are interpolated and combined based on the current values of their parameters. Different scenarios can be generated based on operating parameters such as venting strategy, optical calibration frequency and heat-source control. Performance probability distributions are obtained and provide design guidance. The sensitivity of the system to design, operating and environmental parameters can be assessed in order to maximize the percentage of time the system meets the performance specifications.

  7. Information Geometry and Sequential Monte Carlo

    CERN Document Server

    Sim, Aaron; Stumpf, Michael P H

    2012-01-01

    This paper explores the application of methods from information geometry to the sequential Monte Carlo (SMC) sampler. In particular the Riemannian manifold Metropolis-adjusted Langevin algorithm (mMALA) is adapted for the transition kernels in SMC. Similar to its function in Markov chain Monte Carlo methods, the mMALA is a fully adaptable kernel which allows for efficient sampling of high-dimensional and highly correlated parameter spaces. We set up the theoretical framework for its use in SMC with a focus on the application to the problem of sequential Bayesian inference for dynamical systems as modelled by sets of ordinary differential equations. In addition, we argue that defining the sequence of distributions on geodesics optimises the effective sample sizes in the SMC run. We illustrate the application of the methodology by inferring the parameters of simulated Lotka-Volterra and Fitzhugh-Nagumo models. In particular we demonstrate that compared to employing a standard adaptive random walk kernel, the SM...
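
    As background, the sketch below implements one step of plain MALA, the kernel that mMALA generalises; the manifold variant additionally preconditions the drift and noise with a position-dependent metric tensor. The target density and step size are toy assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def mala_step(x, log_post, grad_log_post, eps=0.5):
            """One Metropolis-adjusted Langevin (MALA) proposal/accept step."""
            # Langevin drift toward higher posterior density plus Gaussian noise.
            prop = x + 0.5 * eps**2 * grad_log_post(x) + eps * rng.normal(size=x.shape)

            def log_q(a, b):  # transition density q(a | b), up to constants
                mean = b + 0.5 * eps**2 * grad_log_post(b)
                return -np.sum((a - mean) ** 2) / (2.0 * eps**2)

            log_alpha = (log_post(prop) - log_post(x)
                         + log_q(x, prop) - log_q(prop, x))
            return prop if np.log(rng.uniform()) < log_alpha else x

        # Example: sample a 2-D standard Gaussian posterior.
        log_post = lambda x: -0.5 * np.sum(x**2)
        grad = lambda x: -x
        x, draws = np.zeros(2), []
        for _ in range(5_000):
            x = mala_step(x, log_post, grad)
            draws.append(x.copy())
        print(np.cov(np.array(draws[1_000:]).T))  # should approach the identity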

  8. Quantum Monte Carlo for atoms and molecules

    International Nuclear Information System (INIS)

    The diffusion quantum Monte Carlo with fixed nodes (QMC) approach has been employed in studying energy eigenstates for 1-4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H2, LiH, Li2, and H2O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li2, and H2O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schroedinger equation were studied extensively. These parameters include the time step, renormalization time and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90-100% of the correlation energy) have been computed with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions.

  9. Self-attenuation and coincidence-summing corrections calculated by Monte Carlo simulations for gamma-spectrometric measurements with well-type germanium detectors

    International Nuclear Information System (INIS)

    A Monte Carlo simulation package for the computation of the full-energy peak efficiency, self-attenuation correction factors and coincidence-summing corrections has been developed to assist in the calibration of well-type germanium detector measurements. Due to the almost 4π solid angle of this measurement geometry, particularly strong coincidence-summing effects occur in the case of multi-photon emitting nuclides. Besides pair coincidences, the correction terms required to describe higher-order coincidences have been included in the computation of the coincidence-summing effects. A general algorithm for self-attenuation calculations, working accurately also in the case of highly attenuating media, has been implemented. The computed results are in good agreement with the experimental data. (Author)
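
    For orientation, the pair-coincidence (summing-out) relation that such packages evaluate has, for a simple two-step cascade, the textbook form below, where n_1^0 = A p_1 eps_p(E_1) is the coincidence-free peak rate of the first photon, eps_t the total efficiency, and p_12 the probability that the second photon accompanies the first; this is a general statement of the technique, not a formula quoted from the paper. Higher-order cascades add analogous terms.

        n_1 = n_1^{0}\,\bigl[1 - p_{12}\,\varepsilon_t(E_2)\bigr],
        \qquad
        C_1 = \frac{n_1^{0}}{n_1} = \frac{1}{1 - p_{12}\,\varepsilon_t(E_2)}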

  10. Comparison of the St. Petersburg phantom with a BOMAB phantom in the ORTEC StandFast whole body counter: a Monte Carlo simulation.

    Science.gov (United States)

    Kramer, Gary H; Capello, Kevin; Sung, Jeremy

    2008-05-01

    Three sizes of the St. Petersburg phantom have been compared to six sizes of BOMAB phantoms, measured with a virtual StandFast whole-body counter using Monte Carlo simulations, to investigate whether the counting efficiencies are equivalent. This work shows that previously published data comparing the Reference Man-sized phantom at 662 keV are supported; however, the simulations also show that the smaller-sized St. Petersburg phantoms do not agree well with BOMAB phantoms. It is concluded that, compared with BOMAB phantoms, the St. Petersburg phantoms are system dependent and that they should be validated over a wide photon-energy range against the corresponding BOMAB phantoms prior to their use for calibrating whole-body counters. PMID:18403961

  11. Streak camera time calibration procedures

    Science.gov (United States)

    Long, J.; Jackson, I.

    1978-01-01

    Time calibration procedures for streak cameras utilizing a modulated laser beam are described. The time calibration determines a writing rate accuracy of 0.15% with a rotating mirror camera and 0.3% with an image converter camera.

  12. The Calibration Reference Data System

    Science.gov (United States)

    Greenfield, P.; Miller, T.

    2016-07-01

    We describe a software architecture and implementation for using rules to determine which calibration files are appropriate for calibrating a given observation. This new system, the Calibration Reference Data System (CRDS), replaces what had been previously used for the Hubble Space Telescope (HST) calibration pipelines, the Calibration Database System (CDBS). CRDS will be used for the James Webb Space Telescope (JWST) calibration pipelines, and is currently being used for HST calibration pipelines. CRDS can be easily generalized for use in similar applications that need a rules-based system for selecting the appropriate item for a given dataset; we give some examples of such generalizations that will likely be used for JWST. The core functionality of the Calibration Reference Data System is available under an Open Source license. CRDS is briefly contrasted with a sampling of other similar systems used at other observatories.

  13. Calibration of thin-film dosimeters irradiated with 80-120 keV electrons

    DEFF Research Database (Denmark)

    Helt-Hansen, J.; Miller, A.; McEwen, M.;

    2004-01-01

    A method for calibration of thin-film dosimeters irradiated with 80-120 keV electrons has been developed. The method is based on measurement of dose with a totally absorbing graphite calorimeter, and conversion of dose in the graphite calorimeter to dose in the film dosimeter by Monte Carlo calcul...

  14. Calibration Report for the WRAP Facility Gamma Energy Analysis System

    International Nuclear Information System (INIS)

    The Waste Receiving And Processing facility (WRAP) provides gamma-ray spectroscopy instrument calibrations traceable to National Institute of Standards and Technology (NIST) standards. The detectors are used to produce quantitative results for the Waste Isolation Pilot Plant (WIPP) and must meet programmatic calibration goals, as well as the applicable portions of the ANSI N42.14-1978 guide for germanium detectors. The Non-Destructive Assay (NDA) Gamma Energy Analysis (GEA) system uses NIST-traceable line-source standards for the detector calibrations. The counting configuration is a series of drums containing the line sources and filler matrices of different densities; the drums are used to develop system efficiencies as a function of density. The efficiency and density correction factors are required for processing drummed waste materials of similar densities. Calibration verification is carried out after the calibration is deemed final, by counting a second drum of NIST-traceable sources. Three in-depth calibrations have been completed on one of the two systems to date, the first being the system acceptance plan. This report has a secondary function: the development of the instrument calibration errors, which are folded into the Total Instrument Uncertainty document, HNF-4050.
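
    The density-correction idea can be sketched as follows (all numbers invented, not WRAP calibration data): efficiencies measured on calibration drums of different filler densities are interpolated to the density of the assayed drum, and the interpolated efficiency converts net peak counts to activity.

        import bisect

        # (filler density g/cm^3, full-energy peak efficiency) -- hypothetical drums
        CAL_POINTS = [(0.1, 3.2e-3), (0.4, 2.1e-3), (0.8, 1.3e-3), (1.2, 0.8e-3)]

        def efficiency_at(density):
            """Linearly interpolate the calibrated efficiency at the drum density."""
            xs = [d for d, _ in CAL_POINTS]
            i = min(max(bisect.bisect_left(xs, density), 1), len(xs) - 1)
            (d0, e0), (d1, e1) = CAL_POINTS[i - 1], CAL_POINTS[i]
            return e0 + (e1 - e0) * (density - d0) / (d1 - d0)

        def activity_bq(net_counts, live_time_s, gamma_yield, density):
            """Net peak counts -> activity via the density-matched efficiency."""
            return net_counts / (live_time_s * gamma_yield * efficiency_at(density))

        print(f"{activity_bq(5.0e4, 600, 0.85, 0.6):.3e} Bq")  # placeholder numbers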

  16. Optical tweezers absolute calibration

    CERN Document Server

    Dutra, R S; Neto, P A Maia; Nussenzveig, H M

    2014-01-01

    Optical tweezers are highly versatile laser traps for neutral microparticles, with fundamental applications in physics and in single-molecule cell biology. Force measurements are performed by converting the displacement of trapped transparent microspheres, employed as force transducers, via the trap stiffness. Usually, calibration is indirect, by comparison with fluid drag forces; this can lead to discrepancies by sizable factors. Progress achieved in a program aiming at absolute calibration, conducted over the past fifteen years, is briefly reviewed. Here we overcome its last major obstacle, a theoretical overestimation of the peak stiffness within the range most employed in applications, and we perform experimental validation. The discrepancy is traced to the effect of primary aberrations of the optical system, which are now included in the theory. All required experimental parameters are readily accessible. Astigmatism, the dominant effect, is measured by analyzing reflected images of the focused laser spot.
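
    For contrast with absolute calibration, the conventional drag-based route can be sketched as follows (illustrative numbers only, not from the paper): the trap stiffness follows from the corner frequency of the trapped bead's position power spectrum together with the Stokes drag coefficient.

        import math

        def stokes_drag(viscosity_pa_s, radius_m):
            """Stokes drag coefficient gamma = 6*pi*eta*a for a sphere."""
            return 6.0 * math.pi * viscosity_pa_s * radius_m

        def trap_stiffness(corner_freq_hz, viscosity_pa_s, radius_m):
            """Stiffness k = 2*pi*gamma*f_c from the Lorentzian corner frequency."""
            return 2.0 * math.pi * stokes_drag(viscosity_pa_s, radius_m) * corner_freq_hz

        # 1 um-diameter bead in water (eta ~ 1e-3 Pa*s), corner frequency 700 Hz:
        k = trap_stiffness(700.0, 1.0e-3, 0.5e-6)
        print(f"k = {k * 1e6:.1f} pN/um")   # ~41.5 pN/um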

  17. Astrid-2 SSC ASU Magnetic Calibration

    DEFF Research Database (Denmark)

    Primdahl, Fritz

    1997-01-01

    Report of the inter-calibration between the star camera and the fluxgate magnetometer onboard the ASTRID-2 satellite. The calibration was performed during the night of 15-16 May 1997 at the Lovö magnetic observatory.

  18. Optical Calibration of SNO+

    Science.gov (United States)

    Maneira, J.; Peeters, S.; Sinclair, J.

    2015-04-01

    SNO is being upgraded to SNO+, whose main goal is the search for neutrinoless double-beta decay. The upgrade is defined by filling the detector with a novel scintillator mixture containing 130Te. With a lower energy threshold than SNO, SNO+ will also be sensitive to other exciting new physics. Here we describe a new optical calibration system that has been developed to meet new, more stringent radiopurity requirements.

  19. Camera Calibration Using Silhouettes

    OpenAIRE

    Boyer, Edmond

    2005-01-01

    This report addresses the problem of estimating camera parameters from images in which only object silhouettes are known. Several modeling applications make use of silhouettes, and while calibration methods are well known when points or lines are matched along image sequences, the problem is more difficult for silhouettes. However, such primitives also encode information on camera parameters, through the fact that their associated viewing cones should present a common intersection.

  20. Program Calibrates Strain Gauges

    Science.gov (United States)

    Okazaki, Gary D.

    1991-01-01

    Program dramatically reduces personnel and time requirements for acceptance tests of hardware. Data-acquisition system reads output from Wheatstone full-bridge strain-gauge circuit and calculates strain by use of shunt-calibration technique. Program nearly instantaneously tabulates and plots strain data against load-cell outputs. Can be modified to acquire strain data for other specimens wherever full-bridge strain-gauge circuits are used. Written in HP BASIC.
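
    The shunt-calibration arithmetic the program relies on can be sketched as follows (a generic illustration, not the HP BASIC source): shunting a gauge of resistance R_g with a known resistor R_s simulates the strain eps = R_g / (GF * (R_g + R_s)), which fixes the scale between bridge output and engineering units.

        def simulated_strain(r_gauge_ohm, r_shunt_ohm, gauge_factor):
            """Equivalent strain produced by shunting one arm of the bridge."""
            return r_gauge_ohm / (gauge_factor * (r_gauge_ohm + r_shunt_ohm))

        # 350-ohm gauge, gauge factor 2.0, 100 kohm shunt -> ~1744 microstrain
        eps = simulated_strain(350.0, 100_000.0, 2.0)
        print(f"{eps * 1e6:.0f} microstrain")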