Monte Carlo efficiency calibration of a neutron generator-based total-body irradiator
Many body composition measurement systems are calibrated against a single-sized reference phantom. Prompt-gamma neutron activation (PGNA) provides the only direct measure of total body nitrogen (TBN), an index of the body's lean tissue mass. In PGNA systems, body size influences neutron flux attenuation, induced gamma signal distribution, and counting efficiency. Thus, calibration based on a single-sized phantom could result in inaccurate TBN values. We used Monte Carlo simulations (MCNP-5; Los Alamos National Laboratory) to map a system's response across the range of body weights (65-160 kg) and body fat distributions (25-60%) found in obese humans. Calibration curves were constructed to derive body-size correction factors relative to a standard reference phantom, providing customized adjustments to account for differences in body habitus of obese adults. The use of MCNP-generated calibration curves should allow for a better estimate of the true changes in lean tissue mass that may occur during intervention programs focused only on weight loss. (author)
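The body-size correction described above can be sketched as a lookup against an MCNP-derived calibration curve. All curve values, the reference weight, and the function names below are illustrative assumptions, not data from the study:

```python
def interp(x, xs, ys):
    """Piecewise-linear interpolation (xs must be strictly increasing)."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + t * (ys[i] - ys[i - 1])

# Hypothetical MCNP-derived calibration curve: relative system response
# versus body weight, normalized to the reference phantom (placeholders).
WEIGHTS_KG = [65.0, 90.0, 115.0, 140.0, 160.0]
REL_RESPONSE = [1.00, 0.94, 0.88, 0.83, 0.79]
REFERENCE_KG = 65.0  # assumed reference phantom weight

def body_size_correction(weight_kg):
    """Factor that rescales a subject's measured signal to the reference response."""
    return interp(REFERENCE_KG, WEIGHTS_KG, REL_RESPONSE) / interp(
        weight_kg, WEIGHTS_KG, REL_RESPONSE)
```

A heavier subject attenuates more of the induced gamma signal, so the correction factor grows above 1 with body weight under these assumed curve values.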
Carrazana Gonzalez, J.; Cornejo Diaz, N. [Centre for Radiological Protection and Hygiene, P.O. Box 6195, Habana (Cuba); Jurado Vargas, M., E-mail: mjv@unex.es [Departamento de Fisica, Universidad de Extremadura, 06071 Badajoz (Spain)
2012-05-15
We studied the applicability of the Monte Carlo code DETEFF for the efficiency calibration of detectors for in situ gamma-ray spectrometry determinations of ground deposition activity levels. For this purpose, the code DETEFF was applied to a study case, and the calculated 137Cs activity deposition levels at four sites were compared with published values obtained both by soil sampling and by in situ measurements. The 137Cs ground deposition levels obtained with DETEFF were found to be equivalent to the results of the study case within the uncertainties involved. The code DETEFF could thus be used for the efficiency calibration of in situ gamma-ray spectrometry for the determination of ground deposition activity using the uniform slab model. It has the advantage of requiring far less simulation time than general Monte Carlo codes adapted for efficiency computation, which is essential for in situ gamma-ray spectrometry, where the measurement configuration yields low detection efficiency. - Highlights: ► Application of the code DETEFF to in situ gamma-ray spectrometry. ► 137Cs ground deposition levels evaluated assuming a uniform slab model. ► Code DETEFF allows a rapid efficiency calibration.
A High Purity Germanium (HPGe) detector is widely used to measure the γ-rays from neutron-activated foils used for neutron spectrum measurements, owing to its superior energy resolution and photopeak efficiency. To determine the neutron-induced activity in foils, it is very important to carry out an absolute calibration of the photopeak efficiency over a wide range of γ-ray energies. Neutron-activated foils are considered extended γ-ray sources, whereas the sources available for efficiency calibration are usually point sources. It is therefore difficult to determine the photopeak efficiency for extended sources using these point sources. A method has been developed to address this problem. It combines experimental measurements with point sources and the development of an optimized model for the Monte Carlo N-Particle code (MCNP) with the help of these measurements. This MCNP model can then be used to find the photopeak efficiency for any kind of source at any energy. (author)
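As a sketch of the point-source step in such a calibration, the full-energy-peak efficiency at a given energy follows from the net peak area, the source activity, the counting time, and the γ-emission probability. The numerical values below are hypothetical, not taken from the paper:

```python
def photopeak_efficiency(net_counts, live_time_s, activity_bq, emission_prob):
    """Full-energy-peak efficiency from a calibrated point-source measurement:
    epsilon = N_net / (A * t * P_gamma)."""
    return net_counts / (activity_bq * live_time_s * emission_prob)

# Example: a hypothetical 137Cs check source (661.7 keV line, P_gamma = 0.851).
eff = photopeak_efficiency(net_counts=8510.0, live_time_s=1000.0,
                           activity_bq=1000.0, emission_prob=0.851)
```

Measuring this at several energies gives the experimental points against which the MCNP detector model is tuned before it is used for extended-source geometries.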
Detector efficiency calibration of in vivo bioassay measurements is based on physical anthropomorphic phantoms that can be loaded with radionuclides of the suspected incorporation. Systematic errors of traditional calibration methods can cause considerable over- or underestimation of the incorporated activity and hence the absorbed dose in the human body. In this work, Monte Carlo methods for the radiation transport problem are used. Virtual models of the in vivo measurement equipment used at the Institute of Radiation Research, including detectors and anthropomorphic phantoms, have been developed. Software tools have been coded to handle memory-intensive human models for the visualization, preparation and evaluation of simulations of in vivo measurement scenarios. The tools, methods, and models used have been validated. Various parameters have been investigated for the sensitivity of the detector efficiency to them, in order to identify and quantify possible systematic errors. Measures have been implemented to improve the determination of the detector efficiency, with a view to applying them in the routine work of the institute's in vivo measurement laboratory. A positioning system has been designed and installed in the Partial Body Counter measurement chamber to measure the relative position of the detector with respect to the test person, which was identified as a sensitive parameter. A computer cluster has been set up to facilitate the Monte Carlo simulations and reduce computing time. Methods based on image registration techniques have been developed to transform existing human models to match an individual test person. The measures and methods developed have successfully improved the classic detector efficiency methods. (orig.)
Casanovas, R., E-mail: ramon.casanovas@urv.cat [Unitat de Fisica Medica, Facultat de Medicina i Ciencies de la Salut, Universitat Rovira i Virgili, ES-43201 Reus (Tarragona) (Spain); Morant, J.J. [Servei de Proteccio Radiologica, Facultat de Medicina i Ciencies de la Salut, Universitat Rovira i Virgili, ES-43201 Reus (Tarragona) (Spain); Salvado, M. [Unitat de Fisica Medica, Facultat de Medicina i Ciencies de la Salut, Universitat Rovira i Virgili, ES-43201 Reus (Tarragona) (Spain)
2012-05-21
Radiation detectors yield optimal performance only if they are accurately calibrated. This paper presents the energy, resolution and efficiency calibrations for two scintillation detectors, NaI(Tl) and LaBr3(Ce). For the two former calibrations, several fitting functions were tested. To perform the efficiency calculations, a Monte Carlo user code for the EGS5 code system was developed with several important implementations. The correct performance of the simulations was validated by comparing the simulated spectra with the experimental spectra and by reproducing a number of efficiency and activity calculations. - Highlights: ► NaI(Tl) and LaBr3(Ce) scintillation detectors are used for gamma-ray spectrometry. ► Energy, resolution and efficiency calibrations are discussed for both detectors. ► For the two former calibrations, several fitting functions are tested. ► A Monte Carlo user code for EGS5 was developed for the efficiency calculations. ► The code was validated by reproducing some efficiency and activity calculations.
Well-type high-purity germanium (HPGe) detectors are well suited to the analysis of small amounts of environmental samples, as they can combine both low background and high detection efficiency. A low-background well-type detector is installed in the Modane underground Laboratory. In the well geometry, coincidence-summing effects are large and make the construction of the full-energy-peak efficiency curve a difficult task with a usual calibration standard, especially in the high-energy range. Using the GEANT code and taking into account a detailed description of the detector and the source, efficiency curves have been modelled for several filling heights of the vial. With a special routine taking into account the decay schemes of the radionuclides, corrections for the coincidence-summing effects that occur when measuring samples containing 238U, 232Th or 134Cs have been computed. The results are found to be in good agreement with the experimental data. It is shown that the triple-coincidence effect on counting losses accounts for 7-15% of the pair-coincidence effect in the case of the 604 and 796 keV lines of 134Cs.
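A minimal sketch of how a computed summing correction enters an activity determination. The correction factor itself must come from a decay-scheme-aware computation such as the routine described above; the function name and numbers here are illustrative assumptions:

```python
def summing_corrected_activity(net_counts, live_time_s, emission_prob,
                               peak_efficiency, summing_factor):
    """Activity (Bq) with a coincidence-summing correction applied.
    A summing_factor > 1 compensates for counts lost to summing-out
    (typical for cascading emitters measured in a well geometry)."""
    return (net_counts * summing_factor
            / (live_time_s * emission_prob * peak_efficiency))

# Hypothetical example: 10% of peak counts lost to summing-out.
activity = summing_corrected_activity(net_counts=100.0, live_time_s=100.0,
                                      emission_prob=1.0, peak_efficiency=0.1,
                                      summing_factor=1.1)
```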
A virtual point source calibration method was developed to perform the efficiency calibration of voluminous samples. We used a mixed point source to obtain the parameters of the efficiency function and thereby the virtual position of a voluminous sample. The detection efficiencies of xenon samples and standard soil samples were then calibrated by placing the point source at their virtual positions. The Monte Carlo method was also used to simulate the detector efficiency for xenon samples. Deviations between the virtual source method and the Monte Carlo simulation are within 2.2% for xenon samples. We have thus developed two robust efficiency calibration methods, based on Monte Carlo simulations and on a virtual point source, respectively. (author)
Mathematical efficiency calibration in gamma spectroscopy
Kaminski, S; Wilhelm, C
2003-01-01
Mathematical efficiency calibration with the LabSOCS software was introduced for two detectors in the measurement laboratory of the Central Safety Department of Forschungszentrum Karlsruhe. In the present contribution, conventional efficiency calibration of gamma spectroscopy systems and mathematical efficiency calibration with LabSOCS are compared with respect to their performance, uncertainties, expenses, and results. The experience gained is reported, and the advantages and disadvantages of both methods of efficiency calibration are listed. The results allow the conclusion that mathematical efficiency calibration is a real alternative to conventional efficiency calibration of gamma spectroscopy systems as obtained by measurements of mixed gamma-ray standard sources.
Monte Carlo simulation: tool for the calibration in analytical determination of radionuclides
This work shows how the traceability of the analytical determinations is established using this calibration method. It highlights the advantages offered by Monte Carlo simulation for applying corrections for differences in the chemical composition, density and height of the samples analyzed. Likewise, the results obtained by the LVRA in two exercises organized by the International Atomic Energy Agency (IAEA) are presented. In these exercises (an intercomparison and a proficiency test), all reported analytical results were obtained from efficiency calibrations by Monte Carlo simulation using the DETEFF program.
Detector characterization for efficiency calibration in different measurement geometries
In order to perform an accurate efficiency calibration for different measurement geometries, a good knowledge of the detector characteristics is required. The Monte Carlo simulation program GESPECOR is applied. The detector characterization required for the Monte Carlo simulation is achieved using the efficiency values obtained from measuring a point source. The point source was measured in two significant geometries: with the source placed in a vertical plane containing the vertical symmetry axis of the detector, and in a horizontal plane containing the centre of the active volume of the detector. The measurements were made using the gamma spectrometry technique. (authors)
Efficiency calibration of low background gamma spectrometer
A method of efficiency calibration is described. The authors used standard ores of U, Ra and Th (in powder form), KCl and Cs-137 sources to prepare calibration volume sources, which were placed directly on the detector end cap. In such a measuring geometry, it is not necessary to make a coincidence-summing correction. The efficiency calibration curve obtained by this method was compared with results measured with Am-241, Cd-109 and Eu-152 calibration sources; they agree within an error of about 5%.
Gonzalez, Jorge A. Carrazana; Ferrera, Eduardo A. Capote; Gomez, Isis M. Fernandez; Castro, Gloria V. Rodriguez; Ricardo, Niury Martinez, E-mail: cphr@cphr.edu.cu [Centro de Proteccion e Higiene de las Radiaciones (CPHR), La Habana (Cuba)
2013-07-01
Results of Monte Carlo calibrations of a low energy germanium detector
Normally, measurements of the peak efficiency of a gamma-ray detector are performed with calibrated samples which are prepared to match the measured ones in all important characteristics, such as volume, chemical composition and density. Another way to determine the peak efficiency is to calculate it with dedicated Monte Carlo programs. In principle, the program 'Pencyl' from the code package PENELOPE 2003 can be used for the peak efficiency calibration of a cylindrically symmetric detector; however, exact data for the geometries and the materials are needed. The interpretation of the simulation results is not straightforward, but we found a way to convert the data into values which can be compared with our measurement results. It is possible to find other simulation parameters which produce the same or better results. Further improvements can be expected from longer simulation times and more simulations in the questionable ranges of densities and filling heights. (N.C.)
Top Quark Mass Calibration for Monte Carlo Event Generators
Butenschoen, Mathias; Hoang, Andre H; Mateu, Vicent; Preisser, Moritz; Stewart, Iain W
2016-01-01
The most precise top quark mass measurements use kinematic reconstruction methods, determining the top mass parameter of a Monte Carlo event generator, $m_t^{\rm MC}$. Due to hadronization and parton shower dynamics, relating $m_t^{\rm MC}$ to a field theory mass is difficult. We present a calibration procedure to determine this relation using hadron level QCD predictions for observables with kinematic mass sensitivity. Fitting $e^+e^-$ 2-Jettiness calculations at NLL/NNLL order to Pythia 8.205, $m_t^{\rm MC}$ differs from the pole mass by $900$/$600$ MeV, and agrees with the MSR mass within uncertainties, $m_t^{\rm MC}\simeq m_{t,1\,{\rm GeV}}^{\rm MSR}$.
The contribution describes a technique of determination of calibration coefficients of a radioactivity monitor using Monte Carlo calculations. The monitor is installed at the NPP Temelin adjacent to lines with a radioactive medium. The output quantity is the activity concentration (in Bq/m3) that is converted from the number of counts per minute measured by the monitor. The value of this conversion constant, i.e. calibration coefficient, was calculated for gamma photons emitted by Co-60 and compared to the data stated by the manufacturer and supplier of these monitors, General Atomic Electronic Systems, Inc., USA. Results of the comparison show very good agreement between calculations and manufacturer data; the differences are lower than the quadratic sum of uncertainties. (authors)
van der Graaf, E. R.; Limburg, J.; Koomans, R. L.; Tijs, M.
2011-01-01
The calibration of scintillation detectors for gamma radiation in a well characterized setup can be transferred to other geometries using Monte Carlo simulations to account for the differences between the calibration and the other geometry. In this study a calibration facility was used that is const
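The efficiency-transfer idea described here can be sketched in one line: the efficiency measured in the well-characterized calibration geometry is scaled by the ratio of Monte Carlo efficiencies computed for the two geometries. The function name and the numbers below are illustrative:

```python
def transfer_efficiency(eff_reference, mc_eff_reference, mc_eff_target):
    """Transfer a measured efficiency from the calibration geometry to a
    target geometry using the ratio of Monte Carlo-computed efficiencies.
    Systematic model errors common to both simulations cancel in the ratio."""
    return eff_reference * (mc_eff_target / mc_eff_reference)

# Hypothetical: measured 0.02 in the calibration setup; MC gives 0.01 there
# and 0.005 in the field geometry, so the transferred efficiency is 0.01.
eff_field = transfer_efficiency(0.02, 0.01, 0.005)
```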
High-precision efficiency calibration of a high-purity co-axial germanium detector
A high-purity co-axial germanium detector has been calibrated in efficiency to a precision of about 0.15% over a wide energy range. High-precision scans of the detector crystal and γ-ray source measurements have been compared to Monte Carlo simulations to adjust the dimensions of a detector model. For this purpose, standard calibration sources and short-lived online sources have been used. The resulting efficiency calibration reaches the precision needed, e.g., for branching-ratio measurements of superallowed β decays for tests of the weak-interaction standard model.
This work determines the detection efficiency of the identiFINDER detector for 125I and 131I in the thyroid using the Monte Carlo method. The suitability of the calibration method was analyzed by comparing the results of the direct Monte Carlo method with those of the corrected method; the latter was chosen because its differences from the real efficiency stayed below 10%. To simulate the detector, its geometric parameters were optimized using a tomographic study, which minimized the uncertainties of the estimates. Simulations of the detector and a point source were then performed to find the correction factors at 5 cm, 15 cm and 25 cm, together with those corresponding to the detector-phantom arrangement, for validation of the method and the final calculation of the efficiency. These showed that if the Monte Carlo simulation is performed at a greater distance than that used in the laboratory measurements, the efficiency is overestimated, while at a shorter distance it is underestimated; the simulation should therefore be performed at the same distance at which the real measurement will be made. The efficiency curves and the minimum detectable activity for the measurement of 131I and 125I were also obtained. Overall, the Monte Carlo methodology was implemented for the identiFINDER calibration with the purpose of estimating the measured activity of iodine in the thyroid. This method is an ideal way to compensate for the lack of standard solutions and phantoms, ensuring that the capabilities of the Internal Contamination Laboratory of the Centro de Proteccion e Higiene de las Radiaciones remain calibrated for iodine measurement in the thyroid. (author)
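The distance-matched correction described above can be sketched as scaling a simulated efficiency by a measured-to-simulated ratio obtained at the same source-detector distance as the real measurement. The function name and values are illustrative assumptions, not figures from the study:

```python
def distance_matched_efficiency(eff_simulated, eff_measured_check,
                                eff_simulated_check):
    """Correct a Monte Carlo efficiency with a check-source ratio taken at
    the SAME distance as the real measurement; simulating at a different
    distance biases the result (over- or underestimation)."""
    return eff_simulated * (eff_measured_check / eff_simulated_check)

# Hypothetical check at 15 cm: measured 0.009 vs simulated 0.010,
# applied to a simulated thyroid-geometry efficiency of 0.020.
eff_corrected = distance_matched_efficiency(0.020, 0.009, 0.010)
```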
Calibration and Monte Carlo modelling of neutron long counters
Tagziria, H
2000-01-01
The Monte Carlo technique has become a very powerful tool in radiation transport as full advantage is taken of enhanced cross-section data, more powerful computers and statistical techniques, together with better characterisation of neutron and photon source spectra. At the National Physical Laboratory, calculations using the Monte Carlo radiation transport code MCNP-4B have been combined with accurate measurements to characterise two long counters routinely used to standardise monoenergetic neutron fields. New and more accurate response function curves have been produced for both long counters. A novel approach using Monte Carlo methods has been developed, validated and used to model the response function of the counters and determine more accurately their effective centres, which have always been difficult to establish experimentally. Calculations and measurements agree well, especially for the De Pangher long counter for which details of the design and constructional material are well known. The sensitivit...
Force calibration using errors-in-variables regression and Monte Carlo uncertainty evaluation
Bartel, Thomas; Stoudt, Sara; Possolo, Antonio
2016-06-01
An errors-in-variables regression method is presented as an alternative to the ordinary least-squares regression computation currently employed for determining the calibration function for force measuring instruments from data acquired during calibration. A Monte Carlo uncertainty evaluation for the errors-in-variables regression is also presented. The corresponding function (which we call measurement function, often called analysis function in gas metrology) necessary for the subsequent use of the calibrated device to measure force, and the associated uncertainty evaluation, are also derived from the calibration results. Comparisons are made, using real force calibration data, between the results from the errors-in-variables and ordinary least-squares analyses, as well as between the Monte Carlo uncertainty assessment and the conventional uncertainty propagation employed at the National Institute of Standards and Technology (NIST). The results show that the errors-in-variables analysis properly accounts for the uncertainty in the applied calibrated forces, and that the Monte Carlo method, owing to its intrinsic ability to model uncertainty contributions accurately, yields a better representation of the calibration uncertainty throughout the transducer’s force range than the methods currently in use. These improvements notwithstanding, the differences between the results produced by the current and by the proposed new methods generally are small because the relative uncertainties of the inputs are small and most contemporary load cells respond approximately linearly to such inputs. For this reason, there will be no compelling need to revise any of the force calibration reports previously issued by NIST.
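A minimal sketch of the Monte Carlo uncertainty evaluation for a calibration fit, in the errors-in-variables spirit of perturbing both the applied forces and the responses before refitting. All data and uncertainty values below are made up for illustration; this is not the NIST implementation:

```python
import random
import statistics

random.seed(12345)

# Illustrative calibration data: applied force (kN) and instrument response,
# each with an assumed standard uncertainty (placeholder values).
forces = [10.0, 20.0, 30.0, 40.0, 50.0]
responses = [2.01, 3.98, 6.02, 8.01, 9.97]
u_force = 0.05      # std. uncertainty of each applied force
u_response = 0.02   # std. uncertainty of each response reading

def fit_slope_intercept(x, y):
    """Ordinary least-squares straight-line fit."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Monte Carlo loop: perturb BOTH variables (errors-in-variables idea),
# refit, and take the spread of the fitted slope as its uncertainty.
slopes = []
for _ in range(2000):
    xs = [f + random.gauss(0.0, u_force) for f in forces]
    ys = [r + random.gauss(0.0, u_response) for r in responses]
    slopes.append(fit_slope_intercept(xs, ys)[0])

slope_mean = statistics.fmean(slopes)
slope_sd = statistics.stdev(slopes)
```

The spread of the refitted slopes reflects both input uncertainties, which ordinary least squares alone (treating the forces as exact) would understate.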
Calibration and validation of a Monte Carlo model for PGNAA of chlorine in soil
A prompt gamma-ray neutron activation analysis (PGNAA) system was used to calibrate and validate a Monte Carlo model as a proof of principle for the quantification of chlorine in soil. First, the response of an n-type HPGe detector to point sources of 60Co and 152Eu was determined experimentally and used to calibrate an MCNP4a model of the detector. The refined MCNP4a detector model can predict the absolute peak detection efficiency within 12% in the energy range of 120-1400 keV. Second, a PGNAA system consisting of a light-water moderated 252Cf (1.06 μg) neutron source, and the shielded and collimated HPGe detector was used to collect prompt gamma-ray spectra from Savannah River Site (SRS) soil spiked with chlorine. The spectra were used to calculate the minimum detectable concentration (MDC) of chlorine and the prompt gamma-ray detection probability. Using the 252Cf based PGNAA system, the MDC for Cl in the SRS soil is 4400 μg/g for an 1800-second irradiation based on the analysis of the 6110 keV prompt gamma-ray. MCNP4a was used to predict the PGNAA detection probability, which was accomplished by modeling the neutron and gamma-ray transport components separately. In the energy range of 788 to 6110 keV, the MCNP4a predictions of the prompt gamma-ray detection probability were generally within 60% of the experimental value, thus validating the Monte Carlo model. (author)
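The MDC calculation referenced above is commonly based on Currie's detection limit, L_D = 2.71 + 4.65*sqrt(B) counts. A hedged sketch follows, with a simplified conversion to concentration; the abstract's μg/g result additionally involves the chlorine prompt-gamma production probability per gram, which is folded into a single sensitivity factor here:

```python
import math

def minimum_detectable_concentration(bkg_counts, sensitivity):
    """Currie-style detection limit converted to a concentration.
    bkg_counts: background counts under the peak region in the counting time.
    sensitivity: net counts per unit concentration (detector efficiency,
    irradiation time, and per-gram gamma yield folded together; assumed known).
    """
    l_d = 2.71 + 4.65 * math.sqrt(bkg_counts)  # Currie detection limit (counts)
    return l_d / sensitivity

# Hypothetical numbers: 10000 background counts, sensitivity 0.1 counts per ug/g.
mdc = minimum_detectable_concentration(10000.0, 0.1)
```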
Monte Carlo simulation for the calibration of neutron source strength measurement of JT-60 upgrade
The calibration of the relation between the neutron source strength in the whole plasma and the output of the neutron monitor is important for evaluating the fusion gain in tokamaks with DD or DT operation. JT-60 will be modified into a deuterium-plasma tokamak with Ip ≤ 7 MA and V ≤ 110 m3. The source strength of JT-60 Upgrade will be measured with 235U and 238U fission chambers. Detection efficiencies for source neutrons are calculated with the Monte Carlo code MCNP using a three-dimensional model of JT-60 Upgrade and a poloidally distributed neutron source. More than 90% of the fission chamber counts are contributed by the source within about -85° to +85° of poloidal angle for both the 235U and 238U detectors. The detection efficiencies are sensitive to the major radius of the detector position, but not so sensitive to vertical and toroidal shifts of the detector positions. The total uncertainties, combining detector position errors, are ±13% and ±9% for the 235U and 238U detectors, respectively. The modelling errors of the detection efficiencies are so large for the 238U detector that more precise modelling, including the port boxes, is needed. (author)
Calibration of the Top-Quark Monte Carlo Mass
Kieseler, Jan; Lipka, Katerina; Moch, Sven-Olaf
2016-04-01
We present a method to establish, experimentally, the relation between the top-quark mass $m_t^{MC}$ as implemented in Monte Carlo generators and the Lagrangian mass parameter $m_t$ in a theoretically well-defined renormalization scheme. We propose a simultaneous fit of $m_t^{MC}$ and an observable sensitive to $m_t$, which does not rely on any prior assumptions about the relation between $m_t$ and $m_t^{MC}$. The measured observable is independent of $m_t^{MC}$ and can be used subsequently for a determination of $m_t$. The analysis strategy is illustrated with examples for the extraction of $m_t$ from inclusive and differential cross sections for hadroproduction of top quarks.
The peak efficiency calibration of volume sources using a 152Eu point source by computer
The author describes a method for the peak efficiency calibration of volume sources by means of a 152Eu point source for an HPGe γ spectrometer. The peak efficiency can be computed by Monte Carlo simulation after inputting the detector parameters. The computed results agree with the experimental results within an error of ±3.8%, with one exception at about ±7.4%.
Confidence and efficiency scaling in Variational Quantum Monte Carlo calculations
Delyon, François; Holzmann, Markus
2016-01-01
Based on the central limit theorem, we discuss the problem of evaluation of the statistical error of Monte Carlo calculations using a time discretized diffusion process. We present a robust and practical method to determine the effective variance of general observables and show how to verify the equilibrium hypothesis by the Kolmogorov-Smirnov test. We then derive scaling laws of the efficiency illustrated by Variational Monte Carlo calculations on the two dimensional electron gas.
Calibration of the top-quark Monte-Carlo mass
Kieseler, Jan; Lipka, Katerina [DESY Hamburg (Germany); Moch, Sven-Olaf [Hamburg Univ. (Germany). II. Inst. fuer Theoretische Physik
2015-11-15
We present a method to establish experimentally the relation between the top-quark mass m_t^MC as implemented in Monte-Carlo generators and the Lagrangian mass parameter m_t in a theoretically well-defined renormalization scheme. We propose a simultaneous fit of m_t^MC and an observable sensitive to m_t, which does not rely on any prior assumptions about the relation between m_t and m_t^MC. The measured observable is independent of m_t^MC and can be used subsequently for a determination of m_t. The analysis strategy is illustrated with examples for the extraction of m_t from inclusive and differential cross sections for hadro-production of top-quarks.
Calibration of the Top-Quark Monte-Carlo Mass
Kieseler, Jan; Moch, Sven-Olaf
2015-01-01
We present a method to establish experimentally the relation between the top-quark mass $m_t^{MC}$ as implemented in Monte-Carlo generators and the Lagrangian mass parameter $m_t$ in a theoretically well-defined renormalization scheme. We propose a simultaneous fit of $m_t^{MC}$ and an observable sensitive to $m_t$, which does not rely on any prior assumptions about the relation between $m_t$ and $m_t^{MC}$. The measured observable is independent of $m_t^{MC}$ and can be used subsequently for a determination of $m_t$. The analysis strategy is illustrated with examples for the extraction of $m_t$ from inclusive and differential cross sections for hadro-production of top-quarks.
Strategies for improving the efficiency of quantum Monte Carlo calculations
Lee, R M; Nemec, N; Rios, P Lopez; Drummond, N D
2010-01-01
We describe a number of strategies for optimizing the efficiency of quantum Monte Carlo (QMC) calculations. We investigate the dependence of the efficiency of the variational Monte Carlo method on the sampling algorithm. Within a unified framework, we compare several commonly used variants of diffusion Monte Carlo (DMC). We then investigate the behavior of DMC calculations on parallel computers and the details of parallel implementations, before proposing a technique to optimize the efficiency of the extrapolation of DMC results to zero time step, finding that a relative time step ratio of 1:4 is optimal. Finally, we discuss the removal of serial correlation from data sets by reblocking, setting out criteria for the choice of block length and quantifying the effects of the uncertainty in the estimated correlation length and the presence of divergences in the local energy on estimated error bars on QMC energies.
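The reblocking procedure discussed above can be sketched on synthetic serially correlated data: averaging over blocks longer than the correlation time washes out the autocorrelation, so the blocked error estimate rises toward the true error of the mean. The AR(1) parameters below are arbitrary illustrative choices:

```python
import random
import statistics

random.seed(7)

# Synthetic serially correlated "energy" trace: an AR(1) process with
# autocorrelation coefficient rho, standing in for raw QMC output.
n, rho = 4096, 0.9
data = [0.0]
for _ in range(n - 1):
    data.append(rho * data[-1] + random.gauss(0.0, 1.0))

def blocked_error(series, block_len):
    """Standard error of the mean estimated from block averages."""
    nblocks = len(series) // block_len
    means = [statistics.fmean(series[i * block_len:(i + 1) * block_len])
             for i in range(nblocks)]
    return statistics.stdev(means) / nblocks ** 0.5

naive_err = blocked_error(data, 1)      # treats samples as independent
blocked_err = blocked_error(data, 128)  # blocks exceed the correlation time
```

With positively correlated data the naive estimate is too small; in practice one increases the block length until the blocked error estimate plateaus.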
Optimum and efficient sampling for variational quantum Monte Carlo
Trail, John Robert; 10.1063/1.3488651
2010-01-01
Quantum mechanics for many-body systems may be reduced to the evaluation of integrals in 3N dimensions using Monte Carlo methods, providing the quantum Monte Carlo ab initio methods. Here we limit ourselves to expectation values for trial wavefunctions, that is, to variational quantum Monte Carlo. Almost all previous implementations employ samples distributed as the physical probability density of the trial wavefunction, and assume the central limit theorem to be valid. In this paper we provide an analysis of random error in estimation and optimisation that leads naturally to new sampling strategies with improved computational and statistical properties. A rigorous lower limit to the random error is derived, and an efficient sampling strategy is presented that significantly increases computational efficiency. In addition, the infinite-variance, heavy-tailed random errors of optimum parameters in conventional methods are replaced with a normal random error, strengthening the theoretical basis of optimisation. The method is ...
HPGe Detector Efficiency Calibration Using HEU Standards
Salaymeh, S.R.
2000-10-12
The Analytical Development Section of SRTC was requested by the Facilities Disposition Division (FDD) to determine the holdup of enriched uranium in the 321-M facility as part of an overall deactivation project of the facility. The 321-M facility was used to fabricate enriched uranium fuel assemblies, lithium-aluminum target tubes, neptunium assemblies, and miscellaneous components for the production reactors. The facility also includes the 324-M storage building and the passageway connecting it to 321-M. The results of the holdup assays are essential for determining compliance with Solid Waste's Waste Acceptance Criteria and Material Control and Accountability, and for meeting criticality safety controls. Two measurement systems will be used to determine highly enriched uranium (HEU) holdup. One is a portable HPGe detector and an EG&G DART system that contains the high-voltage power supply and signal-processing electronics; a personal computer with Gamma-Vision software provides an MCA card and space to store and manipulate multiple 4096-channel γ-ray spectra. The other is a 2 in. x 2 in. NaI crystal with an MCA that uses a portable computer with a Canberra NaI plus card installed. This card converts the PC to a full-function MCA and contains the ancillary electronics, high-voltage power supply and amplifier, required for data acquisition. This report describes and documents the HPGe point, line, area, and constant geometry-constant transmission detector efficiency calibrations acquired and calculated for use in conducting holdup measurements as part of the overall deactivation project of building 321-M.
Rodenas, Jose [Departamento de Ingenieria Quimica y Nuclear, Universidad Politecnica de Valencia, Apartado 22012, E-46071 Valencia (Spain)], E-mail: jrodenas@iqn.upv.es; Gallardo, Sergio; Ballester, Silvia; Primault, Virginie [Departamento de Ingenieria Quimica y Nuclear, Universidad Politecnica de Valencia, Apartado 22012, E-46071 Valencia (Spain); Ortiz, Josefina [Laboratorio de Radiactividad Ambiental, Universidad Politecnica de Valencia, Apartado 22012, E-46071 Valencia (Spain)
2007-10-15
A gamma spectrometer including an HPGe detector is commonly used for environmental radioactivity measurements. The efficiency of the detector should be calibrated for each geometry considered. Simulation of the calibration procedure with a validated computer program is an important auxiliary tool for environmental radioactivity laboratories. The MCNP code, based on the Monte Carlo method, has been applied to simulate the detection process in order to obtain spectrum peaks and determine the efficiency curve for each modelled geometry. The source used for measurements was a mixed radionuclide gamma reference calibration solution covering a wide energy range (50-2000 keV). Two measurement geometries - Marinelli beaker and Petri boxes - as well as different materials containing the source - water, charcoal, sand - have been considered. Results obtained from the Monte Carlo model have been compared with experimental measurements in the laboratory in order to validate the model.
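The efficiency points extracted from measured or simulated spectrum peaks are conventionally reduced to a curve by a polynomial fit in log-log space. A minimal sketch of that reduction in Python; all numerical values are illustrative, not data from the study:

```python
import numpy as np

def peak_efficiency(net_counts, activity_bq, live_time_s, gamma_yield):
    """Full-energy-peak efficiency from one peak: eff = N / (A * t * p_gamma)."""
    return net_counts / (activity_bq * live_time_s * gamma_yield)

# Illustrative efficiency points (energy in keV); hypothetical, not from the paper
energies = np.array([59.5, 122.1, 344.3, 661.7, 1173.2, 1332.5])
effs = np.array([0.010, 0.034, 0.019, 0.011, 0.0066, 0.0059])

# Conventional log-log polynomial: ln(eff) = sum_k a_k * ln(E)^k
coeffs = np.polyfit(np.log(energies), np.log(effs), deg=3)

def eff_curve(energy_kev):
    """Interpolated full-energy-peak efficiency within roughly 50-2000 keV."""
    return np.exp(np.polyval(coeffs, np.log(energy_kev)))
```

The fitted curve is then evaluated at the gamma energies of the sample nuclides to convert net peak areas into activities.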
Rodenas, J.; Pascual, A.; Zarza, I.; Serradell, V.; Ortiz, J.; Ballesteros, L.
2001-07-01
One of the most powerful tools used for environmental radioactivity measurements is a gamma spectrometer, which usually includes an HPGe detector. The detector should be calibrated in efficiency for each considered geometry. Simulation of the calibration procedure with a validated computer program becomes an important auxiliary tool for an environmental radioactivity laboratory, since it permits one to optimise calibration procedures and reduce the amount of radioactive waste produced. The Monte Carlo method is applied to simulate the detection process and obtain spectrum peaks for each modelled geometry. An accurate detector model should be developed in order to achieve good accuracy in the output of the calibration simulation. An important parameter in the detector model is the thickness of any absorber layer surrounding the Ge crystal, particularly the inactive Ge layer. In this paper, a sensitivity analysis of the inactive Ge layer thickness is performed using the MCNP 4B code. Results are compared with experimentally measured efficiencies. A sensitivity analysis is also performed on the aluminium cap thickness. (Author) 5 refs.
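To first order, the sensitivity of the efficiency to the inactive Ge layer is photon attenuation in that layer, which is why it matters most at low energies. A rough sketch; the attenuation coefficients below are assumed, order-of-magnitude values, not taken from the paper:

```python
import math

# Assumed linear attenuation coefficients of germanium [1/cm]; illustrative only
MU_GE = {60.0: 10.0, 122.0: 2.3, 662.0: 0.42}

def dead_layer_transmission(energy_kev, dead_layer_um):
    """Fraction of normally incident photons surviving an inactive Ge layer."""
    return math.exp(-MU_GE[energy_kev] * dead_layer_um * 1e-4)  # um -> cm
```

With these assumed values, a few hundred micrometres of dead layer noticeably suppresses a 60 keV peak while barely affecting 662 keV, which is the behaviour a sensitivity analysis of this parameter probes.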
The results of Monte Carlo simulation studies of the timing calibration accuracy required by the NEMO underwater neutrino telescope are presented. The NEMO Collaboration is conducting a long-term R&D activity toward the installation of a km3 apparatus in the Mediterranean Sea. An optimal site has been found and characterized at 3500 m depth off the Sicilian coast. Monte Carlo simulation shows that the angular resolution of the telescope remains approximately unchanged if the offset errors of the timing calibration are less than 1 ns. This value is tolerable because the apparatus performance is not significantly changed when such inaccuracies are added to the other sources of error (e.g., the position accuracy of the optical modules). We also discuss the effect of the optical background rate on the angular resolution of the apparatus, and we compare the present version of the NEMO telescope with a different configuration.
Shypailo, R. J.; Ellis, K. J.
2011-05-01
During construction of the whole body counter (WBC) at the Children's Nutrition Research Center (CNRC), efficiency calibration was needed to translate acquired counts of 40K to actual grams of potassium for measurement of total body potassium (TBK) in a diverse subject population. The MCNP (Monte Carlo N-Particle) simulation program was used to describe the WBC (54 detectors plus shielding), test individual detector counting response, and create a series of virtual anthropomorphic phantoms based on national reference anthropometric data. Each phantom included an outer layer of adipose tissue and an inner core of lean tissue. Phantoms were designed for both genders, representing ages 3.5 to 18.5 years with body sizes from the 5th to the 95th percentile based on body weight. In addition, a spherical surface source surrounding the WBC was modeled in order to measure the effects of subject mass on room background interference. Individual detector measurements showed good agreement with the MCNP model. The background source model came close to agreement with empirical measurements, but showed a trend deviating from unity with increasing subject size. Results from the MCNP simulation of the CNRC WBC agreed well with empirical measurements using BOMAB phantoms. Individual detector efficiency corrections were used to improve the accuracy of the model. Nonlinear multiple regression efficiency calibration equations were derived for each gender. Room background correction is critical in improving the accuracy of the WBC calibration.
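A size-dependent calibration equation of this kind maps body size to counting efficiency. As a sketch of how such a fit can be set up, here is one simple nonlinear form fitted by least squares; the functional form and every number are hypothetical, not the CNRC equations or data:

```python
import numpy as np

# Hypothetical phantom calibration points: weight [kg], height [cm],
# simulated 40K counting efficiency. Not the CNRC data.
W = np.array([15.0, 25.0, 40.0, 55.0, 70.0, 85.0])
H = np.array([100.0, 125.0, 145.0, 160.0, 170.0, 178.0])
eff = np.array([0.021, 0.019, 0.017, 0.015, 0.0135, 0.0125])

# Assumed form: eff = exp(a + b*W + c*H), which is linear in log space
A = np.column_stack([np.ones_like(W), W, H])
(a, b, c), *_ = np.linalg.lstsq(A, np.log(eff), rcond=None)

def predicted_efficiency(weight_kg, height_cm):
    """Size-dependent counting efficiency from the fitted surface."""
    return float(np.exp(a + b * weight_kg + c * height_cm))
```

The fitted surface reproduces the expected physics: larger subjects attenuate more of their own 40K gammas, so the predicted efficiency falls with body size.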
Self-absorption correction in γ-ray efficiency calibration of fission gas nuclide
In order to solve the problem of self-absorption correction in the γ-ray efficiency calibration of fission gas nuclides, the parameters of the source container, detector, and source matrix were described, and a Monte Carlo model and efficiency-computation program for the HPGe detector were established according to the experimental layout. The efficiencies for fission gas nuclides were calculated in different source matrices, and the corresponding self-absorption coefficients were obtained. The reliability of the model was validated against experimental data. (authors)
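Once the matrix-dependent efficiencies are computed, the self-absorption coefficient is simply the ratio of the efficiency with the real source matrix to that of a reference geometry, and the activity determination uses the corrected efficiency. A minimal sketch; function names and numbers are illustrative, not from the paper:

```python
def self_absorption_coefficient(eff_with_matrix, eff_reference):
    """Ratio of full-energy-peak efficiencies with and without the gas matrix."""
    return eff_with_matrix / eff_reference

def activity_bq(net_counts, live_time_s, gamma_yield, eff_reference, c_sa):
    """Activity using the reference efficiency corrected for self-absorption."""
    return net_counts / (live_time_s * gamma_yield * eff_reference * c_sa)
```

For a gas matrix the coefficient is close to unity, but omitting it biases every measured activity by the same factor.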
Monte Carlo Studies for the Calibration System of the GERDA Experiment
Baudis, Laura; Froborg, Francis; Tarka, Michal
2013-01-01
The GERmanium Detector Array, GERDA, searches for neutrinoless double beta decay in Ge-76 using bare high-purity germanium detectors submerged in liquid argon. For the calibration of these detectors, gamma-emitting sources have to be lowered from their parking position on top of the cryostat over more than five meters down to the germanium crystals. With the help of Monte Carlo simulations, the relevant parameters of the calibration system were determined. It was found that three Th-228 sources with an activity of 20 kBq each, at two different vertical positions, will be necessary to reach sufficient statistics in all detectors in less than four hours of calibration time. These sources will contribute to the background of the experiment with a total of (1.07 ± 0.04(stat) +0.13/−0.19(sys)) × 10^{-4} cts/(keV kg yr) when shielded from below with 6 cm of tantalum in the parking position.
Model calibration for building energy efficiency simulation
Highlights: • Developing a 3D model relating to building architecture, occupancy and HVAC operation. • Two calibration stages developed, with the final model providing accurate results. • Using an onsite weather station for generating the weather data file in EnergyPlus. • Predicting the thermal behaviour of underfloor heating, heat pump and natural ventilation. • Monthly energy saving opportunities of 20–27% related to the heat pump were identified. - Abstract: This research work deals with an Environmental Research Institute (ERI) building where an underfloor heating system and natural ventilation are the main systems used to maintain comfort conditions throughout 80% of the building areas. Firstly, this work involved developing a 3D model relating to building architecture, occupancy and HVAC operation. Secondly, a two-level calibration methodology was applied in order to ensure accuracy and reduce the likelihood of errors. To further improve the accuracy of calibration, a historical weather data file for the year 2011 was created from the on-site local weather station of the ERI building. After applying the second level of the calibration process, the values of Mean Bias Error (MBE) and Coefficient of Variation of the Root Mean Squared Error (CV(RMSE)) on an hourly basis for heat pump electricity consumption varied within the following ranges: (MBE)hourly from −5.6% to 7.5% and CV(RMSE)hourly from 7.3% to 25.1%. Finally, the building was simulated with EnergyPlus to identify further possibilities of energy savings for the water-to-water heat pump supplying the underfloor heating system. It was found that electricity consumption savings from the heat pump can vary between 20% and 27% on a monthly basis.
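The two calibration statistics quoted have standard definitions (as used in ASHRAE Guideline 14-style calibration checks); a minimal implementation:

```python
import math

def mbe_percent(measured, simulated):
    """Mean Bias Error: net bias of the model relative to measurements [%]."""
    return 100.0 * sum(m - s for m, s in zip(measured, simulated)) / sum(measured)

def cv_rmse_percent(measured, simulated):
    """Coefficient of Variation of the RMSE, relative to the measured mean [%]."""
    n = len(measured)
    rmse = math.sqrt(sum((m - s) ** 2 for m, s in zip(measured, simulated)) / n)
    return 100.0 * rmse / (sum(measured) / n)
```

Note that MBE can be near zero while CV(RMSE) is large, because positive and negative hourly errors cancel in the bias but not in the RMSE, which is why both ranges are reported above.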
Efficient Monte Carlo methods for light transport in scattering media
Jarosz, Wojciech
2008-01-01
In this dissertation we focus on developing accurate and efficient Monte Carlo methods for synthesizing images containing general participating media. Participating media such as clouds, smoke, and fog are ubiquitous in the world and are responsible for many important visual phenomena which are of interest to computer graphics as well as related fields. When present, the medium participates in lighting interactions by scattering or absorbing photons as they travel through the scene. Though th...
Efficiency calibration of solid track spark auto counter
The factors influencing the detection efficiency of the solid track spark auto counter were analyzed, and the best etching conditions and charging parameters were reconfirmed. With a small plate fission ionization chamber, the efficiency of the solid track spark auto counter for various experimental assemblies was re-calibrated, and the efficiency under various experimental conditions was obtained. (authors)
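Calibration against a reference fission chamber reduces to a ratio of counts. A sketch with Poisson counting uncertainties combined in quadrature; the numbers in the usage are hypothetical:

```python
import math

def counter_efficiency(tracks_counted, fissions_reference):
    """Track-counter efficiency versus a reference fission chamber.

    Returns (efficiency, absolute uncertainty), treating both counts as
    Poisson-distributed and combining relative uncertainties in quadrature.
    """
    eff = tracks_counted / fissions_reference
    u = eff * math.sqrt(1.0 / tracks_counted + 1.0 / fissions_reference)
    return eff, u
```

For example, 9000 recorded tracks against 10000 reference fissions gives an efficiency of 0.90 with an uncertainty of about 0.013.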
The Australian Radiation Protection and Nuclear Safety Agency (ARPANSA) has established a method for ionisation chamber calibrations using megavoltage photon reference beams. The new method will reduce the calibration uncertainty compared to a 60Co calibration combined with the TRS-398 energy correction factor. The calibration method employs a graphite calorimeter and a Monte Carlo (MC) conversion factor to convert the absolute dose to graphite to absorbed dose to water. EGSnrc is used to model the linac head and doses in the calorimeter and water phantom. The linac model is validated by comparing measured and modelled PDDs and profiles. The relative standard uncertainties in the calibration factors at the ARPANSA beam qualities were found to be 0.47% at 6 MV, 0.51% at 10 MV and 0.46% for the 18 MV beam. A comparison with the Bureau International des Poids et Mesures (BIPM) as part of the key comparison BIPM.RI(I)-K6 gave results of 0.9965(55), 0.9924(60) and 0.9932(59) for the 6, 10 and 18 MV beams, respectively, with all beams within 1σ of the participant average. The measured kQ values for an NE2571 Farmer chamber were found to be lower than those in TRS-398, but are consistent with published measured and modelled values. Users can expect a shift of 0.4-1.1% in the calibration factor of an NE2571 chamber at user energies, across the range of calibration energies, compared to the current calibration method. (paper)
The interpretation of data obtained from fixed, ground-based dose rate monitoring stations of environmental networks, in terms of deposited radionuclide activity per unit area, requires not only the knowledge of the nuclide spectrum and the deposition mechanism, but also the knowledge of the situation in the vicinity of the probe if it significantly differs from ideal conditions, which are defined to be an infinitely-extended plane surface. Distance-dependent calibration factors for different gamma-ray energies and depth profiles are calculated with the new Monte Carlo code PENELOPE. How these distance-dependent calibration factors can take inhomogeneous surface types in the vicinity of the probe into account will also be discussed. In addition, calibration factors for different detector heights and calibration factors for gamma sources in the air will also be calculated. The main irregularities that, in practice, occur in the vicinity of such probes are discussed. Their impact on the representativeness of the site is assessed. For some typical irregularities parameterized calibration factors are calculated and discussed. Sewage plants and sandboxes are discussed as further special examples. The application of these results to real sites allows for the improved interpretation of data and the quantitative assessment of the representativeness of the site. A semi-quantitative scoring scheme helps to decide to what extent irregularities can be tolerated. Its application is straightforward and provides a coarse but objective description of the site-specific conditions of a dose rate probe. (orig.)
At the ENEA Institute for Radiation Protection (IRP), the fast neutron calibration facility consists of a remote control device which allows locating different sources (Am-Be, Pu-Li, bare and D2O-moderated 252Cf) at the reference position, at the desired height from the floor, inside a 10x10x3 m3 irradiation room. Both the ISO reference sources and the Pu-Li source have been characterised in terms of uncollided H*(10) and neutron fluence according to the ISO calibration procedures. A spectral fluence mapping, carried out with the Monte Carlo code MCNP, allowed characterising the calibration point, in scattered field conditions, according to the most recent international recommendations. Moreover, the irradiation of personal dosemeters on the ISO water-filled slab phantom was simulated to determine the field homogeneity of the calibration area and the variability of the neutron field (including the backscattered component) along the phantom surface. At the ENEA Institute for Radiation Protection the calibration of neutron area monitors as well as personal dosemeters can now be performed according to the international standards, at the same time guaranteeing suitable conditions for research and qualification purposes in the field of neutron dosimetry
Efficient Monte Carlo methods for continuum radiative transfer
Juvela, M
2005-01-01
We discuss the efficiency of Monte Carlo methods in solving continuum radiative transfer problems. The sampling of the radiation field and convergence of dust temperature calculations in the case of optically thick clouds are both studied. For spherically symmetric clouds we find that the computational cost of Monte Carlo simulations can be reduced, in some cases by orders of magnitude, with simple importance weighting schemes. This is particularly true for models consisting of cells of different sizes for which the run times would otherwise be determined by the size of the smallest cell. We present a new idea of extending importance weighting to scattered photons. This is found to be useful in calculations of scattered flux and could be important for three-dimensional models when observed intensity is needed only for one general direction of observations. Convergence of dust temperature calculations is studied for models with optical depths 10-10000. We examine acceleration methods where radiative interactio...
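The gain from importance weighting can be seen in one dimension: drawing samples from a density proportional to the integrand removes the variance entirely, while uniform sampling of a strongly peaked integrand converges slowly. A toy sketch of that contrast, not the radiative transfer scheme itself:

```python
import math
import random

random.seed(1)
N = 20000
f = lambda x: math.exp(-10.0 * x)           # sharply peaked integrand on [0, 1]
exact = (1.0 - math.exp(-10.0)) / 10.0      # analytic integral, ~0.0999955

# Plain Monte Carlo: uniform samples, high relative variance
plain = sum(f(random.random()) for _ in range(N)) / N

# Importance sampling from q(x) = f(x) / exact via the inverse CDF;
# the weight f(x) / q(x) = exact is constant: the zero-variance limiting case
def draw_q():
    u = random.random()
    return -math.log(1.0 - u * (1.0 - math.exp(-10.0))) / 10.0

weighted = sum(f(x) * exact / f(x) for x in (draw_q() for _ in range(N))) / N
```

In practice the importance density only approximates the integrand, so the variance is reduced rather than eliminated, but the mechanism is the same one exploited by the weighting schemes described above.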
Intensity Modulated Radiation Therapy (IMRT) treatments are some of the most complex delivered by modern megavoltage radiotherapy accelerators. Verification of the dose, or the prescribed Monitor Units (MU), predicted by the planning system is therefore a key element in ensuring that patients receive an accurate radiation dose during IMRT. One inherently accurate method is comparison with absolutely calibrated Monte Carlo simulations of the IMRT delivery by the linac head and the corresponding delivery of the plan to a patient-based phantom. In this work this approach has been taken using BEAMnrc for simulation of the treatment head, and both DOSXYZnrc and Geant4 for the phantom dose calculation. The two Monte Carlo codes agreed to within 1% of each other, and both matched our planning system very well for IMRT plans to the brain, nasopharynx, and head and neck.
Ramos M, D.; Yera S, Y.; Lopez B, G. M.; Acosta R, N.; Vergara G, A., E-mail: dayana@cphr.edu.cu [Centro de Proteccion e Higiene de las Radiaciones, Calle 20 No. 4113 e/ 41 y 47, Playa, 10600 La Habana (Cuba)
2014-08-15
This work determines the detection efficiency of the identiFINDER detector for {sup 125}I and {sup 131}I in the thyroid using the Monte Carlo method. The suitability of the calibration method is analyzed by comparing the results of the direct Monte Carlo method with the corrected method; the latter was chosen because its differences from the real efficiency stayed below 10%. To simulate the detector, its geometric parameters were optimized using a tomographic study, which allowed the uncertainties of the estimates to be minimized. Finally, detector geometry-point source simulations were performed to find the correction factors at 5 cm, 15 cm and 25 cm, together with those corresponding to the detector-phantom arrangement for the validation of the method and the final calculation of the efficiency. These show that, in the Monte Carlo implementation, simulating at a greater distance than that used in the laboratory measurements leads to an efficiency overestimation, while simulating at a shorter distance leads to an underestimation, so the simulation should be performed at the same distance as the actual measurement. Efficiency curves and minimum detectable activities for the measurement of {sup 131}I and {sup 125}I were also obtained. Overall, the Monte Carlo methodology was implemented for the calibration of the identiFINDER with the purpose of estimating the measured activity of iodine in the thyroid. This method is an ideal way to compensate for the lack of standard solutions and phantoms, ensuring that the capabilities of the Internal Contamination Laboratory of the Centro de Proteccion e Higiene de las Radiaciones remain calibrated for the measurement of iodine in the thyroid. (author)
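The distance dependence emphasized here is dominated by solid angle. A back-of-the-envelope sketch for an on-axis point source and a disc detector face; the detector radius and distances are assumed for illustration, not taken from the paper:

```python
import math

def solid_angle_fraction(distance_cm, radius_cm):
    """Fraction of 4*pi subtended by a disc of given radius seen on-axis."""
    return 0.5 * (1.0 - distance_cm / math.hypot(distance_cm, radius_cm))

# Simulating at 25 cm a measurement actually done at 15 cm (2.5 cm detector
# radius assumed) misrepresents the geometric efficiency by roughly this factor:
ratio = solid_angle_fraction(15.0, 2.5) / solid_angle_fraction(25.0, 2.5)
```

With these assumed dimensions the factor is close to 3, which is why the simulated and measured source-detector distances must match.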
McNamara, A L; Heijnis, H; Fierro, D; Reinhard, M I
2012-04-01
A Compton suppressed high-purity germanium (HPGe) detector is well suited to the analysis of low levels of radioactivity in environmental samples. Differences in the geometry, density and composition of environmental calibration standards (e.g. soil) can contribute excessive experimental uncertainty to the measured efficiency curve. Furthermore, multiple detectors, like those used in a Compton suppressed system, can add complexities to the calibration process. Monte Carlo simulations can be a powerful complement in calibrating these types of detector systems, provided enough physical information on the system is known. A full detector model using the Geant4 simulation toolkit is presented, and the system is modelled in both the suppressed and unsuppressed modes of operation. The full energy peak efficiencies of radionuclides from a standard source sample are calculated and compared to experimental measurements. The experimental results agree relatively well with the simulated values (within ∼5-20%). The simulations show that coincidence losses in the Compton suppression system can cause radionuclide-specific effects on the detector efficiency, especially in the Compton suppressed mode of the detector. Additionally, since low energy photons are more sensitive to small inaccuracies in the computational detector model than high energy photons, large discrepancies may occur at energies lower than ∼100 keV. PMID:22304994
Analysis of the effect of true coincidence summing on efficiency calibration for an HPGe detector
Rodenas, J.; Gallardo, S.; Ballester, S.; Primault, V. [Valencia Univ. Politecnica, Dept. de Ingenieria Quimica y Nuclear (Spain); Ortiz, J. [Valencia Univ. Politecnica, Lab. de Radiactividad Ambiental (Spain)
2006-07-01
The HPGe (high-purity germanium) detector is commonly used for gamma spectrometry in environmental radioactivity laboratories. The efficiency of the detector must be calibrated for each geometry considered. This calibration is performed using a standard solution containing gamma-emitting sources. The usual goal is to obtain an efficiency curve to be used in the determination of the activity of samples with the same geometry. The importance of detector calibration is evident. However, the procedure presents some problems, as it depends on the source geometry (shape, volume, distance to detector, etc.) and must be repeated when these factors change. That means an increasing use of standard solutions and consequently an increasing generation of radioactive wastes. Simulation of the calibration procedure with a validated computer program is clearly an important auxiliary tool for environmental radioactivity laboratories. This simulation is useful both for optimising calibration procedures and for reducing the amount of radioactive waste produced. The MCNP code, based on the Monte Carlo method, has been used in this work for the simulation of detector calibration. A model has been developed for the detector as well as for the source contained in a Petri box. The source is a standard solution that contains the following radionuclides: {sup 241}Am, {sup 109}Cd, {sup 57}Co, {sup 139}Ce, {sup 203}Hg, {sup 113}Sn, {sup 85}Sr, {sup 137}Cs, {sup 88}Y and {sup 60}Co; covering a wide energy range (50 to 2000 keV). However, two radionuclides in the solution ({sup 60}Co and {sup 88}Y) emit gamma rays in true coincidence. The effect of true coincidence summing produces a distortion of the calibration curve at higher energies. To decrease this effect, some measurements have been performed at increasing distances between the source and the detector. As the true coincidence effect is observed in experimental measurements but not in the Monte Carlo
New data concerning the efficiency calibration of a drum waste assay system
The study is focused on the efficiency calibration of a gamma spectroscopy system for drum waste assay. The measurement of a radioactive waste drum is usually difficult because of its large volume, the varied distribution of the waste within the drum, and its high self-attenuation. To solve these problems, a complex calibration of the system is required. For this purpose, a calibration drum provided with seven tubes, placed at different distances from its center, was used, the rest of the drum being filled with Portland cement. For the efficiency determination of a uniformly distributed source, a linear source of 152 Eu was used. The linear calibration source was introduced successively inside the seven tubes, the gamma spectra being recorded while the drum was translated and simultaneously rotated. Using the GENIE-PC software, the gamma spectra were analyzed and the detection efficiencies for shell sources were obtained. Using these efficiencies, the total response of the detector and the detection efficiency appropriate to a uniform volume source were calculated. For the efficiency determination of a non-homogeneous source, additional measurements in the following geometries were made: first, with a 152 Eu point source placed in front of the detector, measured in all seven tubes, the drum being only rotated; second, with the linear source of 152 Eu placed in front of the detector, measured in all seven tubes, the drum again being only rotated. For each position the gamma spectrum was recorded and the detection efficiency was calculated. The obtained efficiency values were verified using the GESPECOR software, which has been developed for the computation of the efficiency of Ge detectors for a wide class of measurement configurations using the Monte Carlo method. (authors)
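Given shell-source efficiencies at several radii, the uniform-volume efficiency is their volume-weighted average (the drum height cancels for cylindrical shells). A sketch of that combination; the tube radii and efficiencies in the test are hypothetical, not those of the calibration drum:

```python
def uniform_volume_efficiency(tube_radii_cm, shell_effs, drum_radius_cm):
    """Volume-weighted average of shell-source efficiencies.

    Each calibration tube is taken to represent the annulus of drum
    cross-section closest to it; weights are annulus areas ~ r_out^2 - r_in^2.
    """
    bounds = [0.0]
    for r1, r2 in zip(tube_radii_cm, tube_radii_cm[1:]):
        bounds.append(0.5 * (r1 + r2))      # midpoints between adjacent tubes
    bounds.append(drum_radius_cm)
    weights = [bounds[i + 1] ** 2 - bounds[i] ** 2
               for i in range(len(tube_radii_cm))]
    return sum(w * e for w, e in zip(weights, shell_effs)) / sum(weights)
```

Because the outer annuli carry most of the volume, the result is pulled toward the efficiencies of the outermost shells, which is exactly why a single central source is a poor calibration for a drum.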
O5S, Calibration of Organic Scintillation Detector by Monte-Carlo
1 - Nature of physical problem solved: O5S is designed to directly simulate the experimental techniques used to obtain the pulse height distribution for a parallel beam of mono-energetic neutrons incident on organic scintillator systems. Developed to accurately calibrate the nominally 2 in. by 2 in. liquid organic scintillator NE-213 (composition CH-1.2), the programme should be readily adaptable to many similar problems. 2 - Method of solution: O5S is a Monte Carlo programme patterned after the general-purpose Monte Carlo neutron transport programme system, O5R. The O5S Monte Carlo experiment follows the course of each neutron through the scintillator and obtains the energy-deposits of the ions produced by elastic scatterings and reactions. The light pulse produced by the neutron is obtained by summing up the contributions of the various ions with the use of appropriate light vs. ion-energy tables. Because of the specialized geometry and simpler cross section needs O5S is able to by-pass many features included in O5R. For instance, neutrons may be followed individually, their histories analyzed as they occur, and upon completion of the experiment, the results analyzed to obtain the pulse-height distribution during one pass on the computer. O5S does allow the absorption of neutrons, but does not allow splitting or Russian roulette (biased weighting schemes). SMOOTHIE is designed to smooth O5S histogram data using Gaussian functions with parameters specified by the user
Calibration of AGILE-GRID with in-flight data and Monte Carlo simulations
Chen, A. W.; Argan, A.; Bulgarelli, A.; Cattaneo, P. W.; Contessi, T.; Giuliani, A.; Pittori, C.; Pucella, G.; Tavani, M.; Trois, A.; Verrecchia, F.; Barbiellini, G.; Caraveo, P.; Colafrancesco, S.; Costa, E.; De Paris, G.; Del Monte, E.; Di Cocco, G.; Donnarumma, I.; Evangelista, Y.; Ferrari, A.; Feroci, M.; Fioretti, V.; Fiorini, M.; Fuschino, F.; Galli, M.; Gianotti, F.; Giommi, P.; Giusti, M.; Labanti, C.; Lapshov, I.; Lazzarotto, F.; Lipari, P.; Longo, F.; Lucarelli, F.; Marisaldi, M.; Mereghetti, S.; Morelli, E.; Moretti, E.; Morselli, A.; Pacciani, L.; Pellizzoni, A.; Perotti, F.; Piano, G.; Picozza, P.; Pilia, M.; Prest, M.; Rapisarda, M.; Rappoldi, A.; Rubini, A.; Sabatini, S.; Santolamazza, P.; Soffitta, P.; Striani, E.; Trifoglio, M.; Valentini, G.; Vallazza, E.; Vercellone, S.; Vittorini, V.; Zanello, D.
2013-10-01
Context. AGILE is a γ-ray astrophysics mission which has been in orbit since 23 April 2007 and continues to operate reliably. The γ-ray detector, AGILE-GRID, has observed Galactic and extragalactic sources, many of which were collected in the first AGILE Catalog. Aims: We present the calibration of the AGILE-GRID using in-flight data and Monte Carlo simulations, producing instrument response functions (IRFs) for the effective area (Aeff), energy dispersion probability (EDP), and point spread function (PSF), each as a function of incident direction in instrument coordinates and energy. Methods: We performed Monte Carlo simulations at different γ-ray energies and incident angles, including background rejection filters and Kalman filter-based γ-ray reconstruction. Long integrations of in-flight observations of the Vela, Crab and Geminga sources in broad and narrow energy bands were used to validate and improve the accuracy of the instrument response functions. Results: The weighted average PSFs as a function of spectra correspond well to the data for all sources and energy bands. Conclusions: Changes in the interpolation of the PSF from Monte Carlo data and in the procedure for construction of the energy-weighted effective areas have improved the correspondence between predicted and observed fluxes and spectra of celestial calibration sources, reducing false positives and obviating the need for post-hoc energy-dependent scaling factors. The new IRFs have been publicly available from the AGILE Science Data Center since November 25, 2011, while the changes in the analysis software will be distributed in an upcoming release.
An Efficient Approach to Ab Initio Monte Carlo Simulation
Leiding, Jeff
2013-01-01
We present a Nested Markov Chain Monte Carlo (NMC) scheme for building equilibrium averages based on accurate potentials such as density functional theory. Metropolis sampling of a reference system, defined by an inexpensive but approximate potential, is used to substantially decorrelate configurations at which the potential of interest is evaluated, thereby dramatically reducing the number needed to build ensemble averages at a given level of precision. The efficiency of this procedure is maximized on-the-fly through variation of the reference system thermodynamic state (characterized here by its inverse temperature β^0), which is otherwise unconstrained. Local density approximation (LDA) results are presented for shocked states in argon at pressures from 4 to 60 GPa. Depending on the quality of the reference potential, the acceptance probability is enhanced by factors of 1.2-28 relative to unoptimized NMC sampling, and the procedure's efficiency is found to be competitive with that of standard ab initio...
Improved photon counting efficiency calibration using superconducting single photon detectors
Gan, Haiyong; Xu, Nan; Li, Jianwei; Sun, Ruoduan; Feng, Guojin; Wang, Yanfei; Ma, Chong; Lin, Yandong; Zhang, Labao; Kang, Lin; Chen, Jian; Wu, Peiheng
2015-10-01
The quantum efficiency of photon counters can be measured with standard uncertainty below the 1% level using correlated photon pairs generated through the spontaneous parametric down-conversion process. Normally a laser in the UV, blue or green wavelength range, with sufficient photon energy, is applied to produce energy- and momentum-conserved photon pairs in two channels with the desired wavelengths for calibration. One channel is used as the heralding trigger, and the other is used for the calibration of the detector under test. A superconducting nanowire single photon detector, with advantages such as high photon counting speed and broad spectral responsivity (UV to near infrared), is used as the trigger detector, enabling correlated-photon calibration capabilities into the shortwave visible range. For a 355 nm single longitudinal mode pump laser, when a superconducting nanowire single photon detector is used as the trigger detector at 1064 nm and 1560 nm in the near infrared range, photon counting efficiency calibration capabilities can be realized at 532 nm and 460 nm. The quantum efficiency measurement on photon counters such as photomultiplier tubes and avalanche photodiodes can then be further extended over a wide wavelength range (e.g. 400-1000 nm) using a flat spectral photon flux source to meet the calibration demands in cutting-edge low light applications such as time resolved fluorescence and nonlinear optical spectroscopy, super resolution microscopy, deep space observation, and so on.
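The conjugate wavelengths quoted follow directly from energy conservation in down-conversion, 1/λ_pump = 1/λ_signal + 1/λ_idler:

```python
def conjugate_wavelength_nm(pump_nm, partner_nm):
    """Wavelength of the second SPDC photon from energy conservation:
    1/pump = 1/partner + 1/conjugate."""
    return 1.0 / (1.0 / pump_nm - 1.0 / partner_nm)
```

For a 355 nm pump, heralding at 1064 nm or 1560 nm pairs with photons near 533 nm and 460 nm respectively, matching the calibration wavelengths stated above.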
Monte Carlo studies and optimization for the calibration system of the GERDA experiment
Baudis, L.; Ferella, A. D.; Froborg, F.; Tarka, M.
2013-11-01
The GERmanium Detector Array, GERDA, searches for neutrinoless double β decay in 76Ge using bare high-purity germanium detectors submerged in liquid argon. For the calibration of these detectors, γ-emitting sources have to be lowered from their parking position on the top of the cryostat over more than 5 m down to the germanium crystals. With the help of Monte Carlo simulations, the relevant parameters of the calibration system were determined. It was found that three 228Th sources with an activity of 20 kBq each at two different vertical positions will be necessary to reach sufficient statistics in all detectors in less than 4 h of calibration time. These sources will contribute to the background of the experiment with a total of (1.07 ± 0.04(stat) +0.13/−0.19(sys)) × 10^−4 cts/(keV kg yr) when shielded from below with 6 cm of tantalum in the parking position.
Houska, Tobias; Multsch, Sebastian; Kraft, Philipp; Frede, Hans-Georg; Breuer, Lutz
2014-05-01
Computer simulations are widely used to support decision making and planning in the agriculture sector. On the one hand, many plant growth models use simplified hydrological processes and structures, e.g. by the use of a small number of soil layers or by the application of simple water flow approaches. On the other hand, in many hydrological models plant growth processes are poorly represented. Hence, fully coupled models with a high degree of process representation would allow a more detailed analysis of the dynamic behaviour of the soil-plant interface. We used the Python programming language to couple two such highly process-oriented independent models and to calibrate both models simultaneously. The Catchment Modelling Framework (CMF) simulated soil hydrology based on the Richards equation and the Van-Genuchten-Mualem retention curve. CMF was coupled with the Plant growth Modelling Framework (PMF), which predicts plant growth on the basis of radiation use efficiency, degree days, water shortage and dynamic root biomass allocation. The Monte Carlo based Generalised Likelihood Uncertainty Estimation (GLUE) method was applied to parameterize the coupled model and to investigate the associated uncertainty of the model predictions. Overall, 19 model parameters (4 for CMF and 15 for PMF) were analysed through 2 × 10^6 model runs randomly drawn from a uniformly distributed parameter space. Three objective functions were used to evaluate the model performance, i.e. coefficient of determination (R2), bias and model efficiency according to Nash-Sutcliffe (NSE). The model was applied to three sites with different management in Muencheberg (Germany) for the simulation of winter wheat (Triticum aestivum L.) in a cross-validation experiment. Field observations for model evaluation included soil water content and the dry matter of roots, storage organs, stems and leaves. The best parameter sets resulted in an NSE of 0.57 for the simulation of soil moisture across all three sites. The shape
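The GLUE procedure applied in this abstract boils down to uniform random sampling of the parameter space, scoring each run with a likelihood measure (here NSE), and retaining the "behavioural" parameter sets above a threshold. A minimal sketch with a hypothetical toy model; the threshold and bounds are illustrative, not the study's:

```python
import numpy as np

rng = np.random.default_rng(42)

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def glue(model, bounds, obs, n_runs=5000, threshold=0.5):
    """Minimal GLUE sketch: uniform Monte Carlo sampling of the parameter
    space; runs whose NSE exceeds `threshold` are kept as behavioural."""
    names = list(bounds)
    lo = np.array([bounds[k][0] for k in names])
    hi = np.array([bounds[k][1] for k in names])
    behavioural = []
    for _ in range(n_runs):
        theta = lo + rng.random(len(names)) * (hi - lo)
        params = dict(zip(names, theta))
        score = nse(obs, model(params))
        if score > threshold:
            behavioural.append((score, params))
    return sorted(behavioural, reverse=True, key=lambda t: t[0])
```

The spread of the behavioural ensemble, rather than a single best fit, is what GLUE uses to express predictive uncertainty.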
Analysis of the Tandem Calibration Method for Kerma Area Product Meters Via Monte Carlo Simulations
The IAEA recommends that uncertainties of dosimetric measurements in diagnostic radiology for risk assessment and quality assurance should be less than 7% on the confidence level of 95%. This accuracy is difficult to achieve with kerma area product (KAP) meters currently used in clinics. The reasons range from the high energy dependence of KAP meters to the wide variety of configurations in which KAP meters are used and calibrated. The tandem calibration method introduced by Poeyry, Komppa and Kosunen in 2005 has the potential to make the calibration procedure simpler and more accurate compared to the traditional beam-area method. In this method, two positions of the reference KAP meter are of interest: (a) a position close to the field KAP meter and (b) a position 20 cm above the couch. In the close position, the distance between the two KAP meters should be at least 30 cm to reduce the effect of back scatter. For the other position, which is recommended for the beam-area calibration method, the distance of 70 cm between the KAP meters was used in this study. The aim of this work was to complement existing experimental data comparing the two configurations with Monte Carlo (MC) simulations. In a geometry consisting of a simplified model of the VacuTec 70157 type KAP meter, the MCNP code was used to simulate the kerma area product, PKA, for the two (close and distant) reference planes. It was found that PKA values for the tube voltage of 40 kV were about 2.5% lower for the distant plane than for the close one. For higher tube voltages, the difference was smaller. The difference was mainly caused by attenuation of the X ray beam in air. Since the problem with high uncertainties in PKA measurements is also caused by the current design of X ray machines, possible solutions are discussed. (author)
The thesis proceeded in the context of dating by thermoluminescence. This method requires laboratory measurements of the natural radioactivity. For that purpose, we have been using a germanium spectrometer. To refine its calibration, we modelled the spectrometer using the Monte Carlo computer code Geant4. We developed a geometrical model which takes into account the presence of inactive zones and zones of poor charge collection within the germanium crystal. The parameters of the model were adjusted by comparison with experimental results obtained with a 137Cs source. It appeared that the form of the inactive zones is less simple than presented in the specialized literature. The model was then extended to the case of a more complex source, with cascade effects and angular correlations between photons: 60Co. Lastly, applied to extended sources, it gave correct results and allowed us to validate the simulation of matrix effects. (author)
Improving computational efficiency of Monte Carlo simulations with variance reduction
CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)
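The weight-window mechanics at issue here can be sketched generically: split a particle that arrives above the window, play Russian roulette on one that arrives below it, and cap the number of splits in the spirit of the "de-optimisation" described for long histories. This is textbook splitting/roulette with illustrative names and cap, not MCNP's exact logic:

```python
import math
import random

def apply_weight_window(w, w_low, w_up, w_survive, rng=random):
    """Process one particle weight against a weight window [w_low, w_up].
    Returns the list of particle weights that continue the history:
    splitting when too heavy, Russian roulette when too light.
    The split cap (100 here, purely illustrative) limits the extreme
    splitting that produces intractably long histories."""
    if w > w_up:                                   # too heavy: split
        n = min(int(math.ceil(w / w_survive)), 100)
        return [w / n] * n                         # total weight conserved
    if w < w_low:                                  # too light: roulette
        if rng.random() < w / w_survive:
            return [w_survive]                     # survivor weight restored
        return []                                  # killed
    return [w]                                     # inside the window
```

Splitting conserves weight exactly, while roulette conserves it only in expectation, which is why the game is unbiased but a split cap trades variance-reduction performance for parallel efficiency, as the abstract discusses.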
Calibration of AGILE-GRID with In-Flight Data and Monte Carlo Simulations
Chen, Andrew W; Bulgarelli, A; Cattaneo, P W; Contessi, T; Giuliani, A; Pittori, C; Pucella, G; Tavani, M; Trois, A; Verrecchia, F; Barbiellini, G; Caraveo, P; Colafrancesco, S; Costa, E; De Paris, G; Del Monte, E; Di Cocco, G; Donnarumma, I; Evangelista, Y; Ferrari, A; Feroci, M; Fioretti, V; Fiorini, M; Fuschino, F; Galli, M; Gianotti, F; Giommi, P; Giusti, M; Labanti, C; Lapshov, I; Lazzarotto, F; Lipari, P; Longo, F; Lucarelli, F; Marisaldi, M; Mereghetti, S; Morelli, E; Moretti, E; Morselli, A; Pacciani, L; Pellizzoni, A; Perotti, F; Piano, G; Picozza, P; Pilia, M; Prest, M; Rapisarda, M; Rappoldi, A; Rubini, A; Sabatini, S; Santolamazza, P; Soffitta, P; Striani, E; Trifoglio, M; Valentini, G; Vallazza, E; Vercellone, S; Vittorini, V; Zanello, D
2013-01-01
Context: AGILE is a gamma-ray astrophysics mission which has been in orbit since 23 April 2007 and continues to operate reliably. The gamma-ray detector, AGILE-GRID, has observed Galactic and extragalactic sources, many of which were collected in the first AGILE Catalog. Aims: We present the calibration of the AGILE-GRID using in-flight data and Monte Carlo simulations, producing Instrument Response Functions (IRFs) for the effective area (A_eff), Energy Dispersion Probability (EDP), and Point Spread Function (PSF), each as a function of incident direction in instrument coordinates and energy. Methods: We performed Monte Carlo simulations at different gamma-ray energies and incident angles, including background rejection filters and Kalman filter-based gamma-ray reconstruction. Long integrations of in-flight observations of the Vela, Crab and Geminga sources in broad and narrow energy bands were used to validate and improve the accuracy of the instrument response functions. Results: The weighted average PSFs a...
Effect of lung volume on counting efficiency: a Monte Carlo investigation.
Kramer, Gary H; Capello, Kevin
2005-04-01
Lung counters are usually calibrated with an anthropometric phantom that has a fixed lung size; however, people have widely varying lung sizes (both volume and dimensions). This work uses a simple Monte Carlo simulation to investigate the effect of varying lung size on the counting efficiency of a lung counter based on a four-detector array of 50 mm, 70 mm, or 85 mm diameter detectors. The simulations were carried out at several photon energies (17, 60, 120, and 1,000 keV). Comparing the simulated efficiencies with a reference value close to the lung volume of Reference Man, biases in the range of -21% to 63% were discovered. The values from the Monte Carlo simulation have also been compared with some literature data based on experimental measurements, and the agreement was found to be comparable, suggesting that lung volume is indeed a factor that should be considered when trying to make an accurate estimate of a lung burden. PMID:15761297
Novotny, M.A.
2010-02-01
The efficiency of dynamic Monte Carlo algorithms for off-lattice systems composed of particles is studied for the case of a single impurity particle. The theoretical efficiencies of the rejection-free method and of the Monte Carlo with Absorbing Markov Chains method are given. Simulation results are presented to confirm the theoretical efficiencies.
A Generic Algorithm for IACT Optical Efficiency Calibration using Muons
Mitchell, A M W; Parsons, R D
2015-01-01
Muons produced in Extensive Air Showers (EAS) generate ring-like images in Imaging Atmospheric Cherenkov Telescopes when travelling near parallel to the optical axis. From geometrical parameters of these images, the absolute amount of light emitted may be calculated analytically. Comparing the amount of light recorded in these images to expectation is a well established technique for telescope optical efficiency calibration. However, this calculation is usually performed under the assumption of an approximately circular telescope mirror. The H.E.S.S. experiment entered its second phase in 2012, with the addition of a fifth telescope with a non-circular 600m$^2$ mirror. Due to the differing mirror shape of this telescope to the original four H.E.S.S. telescopes, adaptations to the standard muon calibration were required. We present a generalised muon calibration procedure, adaptable to telescopes of differing shapes and sizes, and demonstrate its performance on the H.E.S.S. II array.
Mazrou, Hakim, E-mail: mazrou_h@crna.d [Centre de Recherche Nucleaire d' Alger (CRNA), 02 Boulevard Frantz, Fanon, B.P. 399, Alger-RP 16000 (Algeria); Sidahmed, Tassadit [Centre de Recherche Nucleaire d' Alger (CRNA), 02 Boulevard Frantz, Fanon, B.P. 399, Alger-RP 16000 (Algeria); Allab, Malika [Faculte de Physique, Universite des Sciences et de la Technologie de Houari-Boumediene (USTHB), 16111, Alger (Algeria)
2010-10-15
An irradiation system has been acquired by the Nuclear Research Center of Algiers (CRNA) to provide neutron references for metrology and dosimetry purposes. It consists of an {sup 241}Am-Be radionuclide source of 185 GBq (5 Ci) activity inside a cylindrical steel-enveloped polyethylene container with a radially positioned beam channel. Because of its composition, filled with hydrogenous material, which is not recommended by ISO standards, large changes are expected in the physical quantities of primary importance of the source compared to a free-field situation. Thus, the main goal of the present work is to fully characterize the neutron field of this specially delivered set-up. This was conducted by both extensive Monte Carlo calculations and experimental measurements obtained using BF{sub 3} and {sup 3}He based neutron area dosimeters. The effects of each component present in the bunker facility of the Algerian Secondary Standard Dosimetry Laboratory (SSDL) on the neutron energy spectrum have been investigated by simulating four irradiation configurations, and a comparison to the ISO spectrum has been performed. The ambient dose equivalent rate was determined based upon a correct estimate of the mean fluence to ambient dose equivalent conversion factors at different irradiation positions by means of the 3-D transport code MCNP5. Finally, according to practical requirements established for calibration purposes, an optimal irradiation position has been suggested to the SSDL staff to perform their routine calibrations in an appropriate manner.
Simplified methods for coincidence summing corrections in HPGe efficiency calibration
Simple and practical coincidence summing corrections for n-type HPGe detectors are presented for the common calibration nuclides 57Co and 60Co using a defined “virtual peak” and accounting for the summing of gamma photons with x-rays having energies up to 40 keV (88Y and 139Ce). These corrections make it possible to easily and effectively establish peak and total efficiency curves suitable for subsequent summing corrections in routine gamma spectrometry analyses. Experimental verification of the methods shows excellent agreement for measurements of different reference solutions. - Highlights: ► Coincidence summing corrections are important in environmental gamma-ray spectrometry. ► Simple and practical corrections are presented for HPGe efficiency calibrations. ► Emphasis is placed on summing with low-energy photons in n-type detectors. ► The experimental validations of the methods show excellent agreement.
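The summing-out effect behind these corrections can be stated compactly: for a two-photon cascade (e.g. the two 60Co lines), a count is lost from the full-energy peak of one gamma whenever the coincident gamma deposits any energy at all, so to first order (ignoring angular correlations) the measured peak area is divided by (1 − ε_t) of the coincident photon. This is the generic textbook correction, not the paper's virtual-peak construction:

```python
def summing_out_factor(eps_total_coincident):
    """First-order true-coincidence summing-out correction for a
    two-photon cascade: the measured full-energy peak area is multiplied
    by 1 / (1 - eps_t), where eps_t is the TOTAL (not peak) efficiency
    for the coincident gamma ray."""
    if not 0.0 <= eps_total_coincident < 1.0:
        raise ValueError("total efficiency must lie in [0, 1)")
    return 1.0 / (1.0 - eps_total_coincident)

def corrected_peak_area(measured_area, eps_total_coincident):
    """Recover the summing-free peak area from the measured one."""
    return measured_area * summing_out_factor(eps_total_coincident)

# e.g. a total efficiency of 5% for the coincident line inflates the
# peak area by about 5.3%: 10000 counts correct to ~10526
```

This is why the abstract stresses establishing both peak and total efficiency curves: the correction needs the total efficiency, which a peak calibration alone does not provide.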
Rodenas, J. E-mail: jrodenas@iqn.upv.es; Pascual, A.; Zarza, I.; Serradell, V.; Ortiz, J.; Ballesteros, L
2003-01-11
Germanium crystals have a dead layer that causes a decrease in efficiency, since the layer is not useful for detection, but strongly attenuates photons. The thickness of this inactive layer is not well known due to the existence of a transition zone where photons are increasingly absorbed. Therefore, using data provided by manufacturers in the detector simulation model, some strong discrepancies appear between calculated and measured efficiencies. The Monte Carlo method is applied to simulate the calibration of a HP Ge detector in order to determine the total inactive germanium layer thickness and the active volume that are needed in order to obtain the minimum discrepancy between estimated and experimental efficiency. Calculations and measurements were performed for all of the radionuclides included in a standard calibration gamma cocktail solution. A Marinelli beaker was considered for this analysis, as it is one of the most commonly used sample container for environmental radioactivity measurements. Results indicated that a good agreement between calculated and measured efficiencies is obtained using a value for the inactive germanium layer thickness equal to approximately twice the value provided by the detector manufacturer. For all energy peaks included in the calibration, the best agreement with experimental efficiency was found using a combination of a small thickness of the inactive germanium layer and a small detection active volume.
Abdollahnejad, Hamed; Vosoughi, Naser; Zare, Mohammad Reza
2016-08-01
Simulation, design and fabrication of a sealing enclosure were carried out for a NaI(Tl) 2″×2″ detector, to be used as an in situ gamma radioactivity measurement system in the marine environment. The effect of the sealing enclosure on the performance of the system in the laboratory and in a marine environment (a distinct tank with 10 m³ volume) was studied using point sources. The marine volumetric efficiency for radiation of 1461 keV energy (from (40)K) was measured with a KCl volumetric liquid source diluted in the tank. The experimental and simulated efficiency values agreed well. A marine volumetric efficiency calibration curve was calculated for the 60 keV to 1461 keV energy range with the Monte Carlo method. This curve indicates that the efficiency increases rapidly up to 140.5 keV but then drops exponentially. PMID:27213808
Zeng, Zhi; Ma, Hao; He, Jianhua; Cang, Jirong; Zeng, Ming; Cheng, Jianping
2016-01-01
An underwater in situ gamma-ray spectrometer based on LaBr was developed and optimized to monitor marine radioactivity. The intrinsic background of LaBr, mainly from 138La and 227Ac, was well determined by low-background measurement and a pulse shape discrimination method. A method of self-calibration using three internal contaminant peaks was proposed to eliminate peak shift during long-term monitoring. Through experiments under different temperatures, the method was shown to be helpful for maintaining long-term stability. To monitor marine radioactivity, the spectrometer's efficiency was calculated via a water tank experiment as well as Monte Carlo simulation.
Usability of potassium compounds as efficiency calibrators for gamma spectrometer
K-40 is widely used for the efficiency calibration of gamma spectrometers because it is easy to obtain, inexpensive, present in every environmental sample, and gives a clear peak at 1460.8 keV. In this study, the activity concentrations of 18 different chemical compounds, with different potassium fractions, constituent elements, densities and particle sizes, are measured, and the effect of these parameters on the activity of the compounds is investigated. As a result, it is seen that compounds with low density, low molecular weight and a high potassium fraction have high potassium activity concentrations, and compounds with these properties are the most suitable for use as calibration sources.
Calibration and simulation of a HPGe well detector using Monte Carlo computer code
Monte Carlo methods are often used in simulating physical and mathematical systems. They are a class of computational algorithms that rely on repeated random sampling to compute their results. Because of their reliance on repeated computation of random or pseudo-random numbers, these methods are most suited to calculation by a computer and tend to be used when it is unfeasible or impossible to compute an exact result with a deterministic algorithm. The Monte Carlo method is used here to determine a detector's response curves, which are difficult to obtain experimentally. It deals with random numbers for the simulation of the decay conditions and angle of incidence at a given energy value, studying, thus, the random behavior of the radiation and providing response and efficiency curves. The MCNP5 computer code provides means to simulate gamma-ray detectors and has been used in this work for the 50-2000 keV energy range. The HPGe well detector was simulated with the MCNP5 computer code and compared with experimental data. The dimensions of both the dead layer and the transition layer were determined, and the response curve for a particular geometry was then obtained and compared with the experimental results in order to verify the detector simulation. Both results were in very good agreement. (author)
Calibration of an in-situ BEGe detector using semi-empirical and Monte Carlo techniques.
Agrafiotis, K; Karfopoulos, K L; Anagnostakis, M J
2011-08-01
In the case of a nuclear or radiological accident, a rapid estimation of the qualitative and quantitative characteristics of the potential radioactive pollution is needed. For aerial releases the radioactive pollutants are finally deposited on the ground, forming a surface source. In this case, in-situ γ-ray spectrometry is a powerful tool for the determination of ground pollution. In this work, the procedure followed at the Nuclear Engineering Department of the National Technical University of Athens (NED-NTUA) for the calibration of an in-situ Broad Energy Germanium (BEGe) detector, for the determination of gamma-emitting radionuclides deposited on the ground surface, is presented. BEGe detectors, due to their technical characteristics, are suitable for the analysis of photons in a wide energy region. Two different techniques, a semi-empirical method and Monte Carlo simulation, were applied for the full-energy peak efficiency calibration of the BEGe detector in the energy region 60-1600 keV. Full-energy peak efficiencies determined using the two methods agree within statistical uncertainties. PMID:21193317
An efficient framework for photon Monte Carlo treatment planning.
Fix, Michael K; Manser, Peter; Frei, Daniel; Volken, Werner; Mini, Roberto; Born, Ernst J
2007-10-01
Currently photon Monte Carlo treatment planning (MCTP) for a patient stored in the patient database of a treatment planning system (TPS) can usually only be performed using a cumbersome multi-step procedure where many user interactions are needed. This means automation is needed for usage in clinical routine. In addition, because of the long computing time in MCTP, optimization of the MC calculations is essential. For these purposes a new graphical user interface (GUI)-based photon MC environment has been developed resulting in a very flexible framework. By this means appropriate MC transport methods are assigned to different geometric regions by still benefiting from the features included in the TPS. In order to provide a flexible MC environment, the MC particle transport has been divided into different parts: the source, beam modifiers and the patient. The source part includes the phase-space source, source models and full MC transport through the treatment head. The beam modifier part consists of one module for each beam modifier. To simulate the radiation transport through each individual beam modifier, one out of three full MC transport codes can be selected independently. Additionally, for each beam modifier a simple or an exact geometry can be chosen. Thereby, different complexity levels of radiation transport are applied during the simulation. For the patient dose calculation, two different MC codes are available. A special plug-in in Eclipse providing all necessary information by means of Dicom streams was used to start the developed MC GUI. The implementation of this framework separates the MC transport from the geometry and the modules pass the particles in memory; hence, no files are used as the interface. The implementation is realized for 6 and 15 MV beams of a Varian Clinac 2300 C/D. Several applications demonstrate the usefulness of the framework. Apart from applications dealing with the beam modifiers, two patient cases are shown. Thereby
Extrapolated HPGe efficiency estimates based on a single calibration measurement
Gamma spectroscopists often must analyze samples with geometries for which their detectors are not calibrated. The effort to experimentally recalibrate a detector for a new geometry can be quite time consuming, causing delay in reporting useful results. Such concerns have motivated development of a method for extrapolating HPGe efficiency estimates from an existing single measured efficiency. Overall, the method provides useful preliminary results for analyses that do not require exceptional accuracy, while reliably bracketing the credible range. The estimated efficiency ε for a uniform sample in a geometry with volume V is extrapolated from the measured ε_0 of the base sample of volume V_0. Assuming all samples are centered atop the detector for maximum efficiency, ε decreases monotonically as V increases about V_0, and vice versa. Extrapolation of high and low efficiency estimates ε_h and ε_L provides an average estimate of ε = ½(ε_h + ε_L) ± ½(ε_h − ε_L) (general), where the uncertainty Δε = ½(ε_h − ε_L) brackets the maximum possible error. Both ε_h and ε_L diverge from ε_0 as V deviates from V_0, causing Δε to increase accordingly. The above concepts guided development of both conservative and refined estimates for ε.
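The bracketing arithmetic in this abstract is simple enough to state directly. The bounding pair below (a high estimate equal to the base efficiency and a low estimate scaled by inverse volume, for a sample larger than the base) is purely an illustrative assumption, not the paper's conservative or refined estimators:

```python
def bracketed_efficiency(eps_h, eps_l):
    """Combine high/low efficiency brackets into the estimate
    eps = (eps_h + eps_l) / 2 with maximum-error bound
    d_eps = (eps_h - eps_l) / 2, as stated in the abstract."""
    return 0.5 * (eps_h + eps_l), 0.5 * (eps_h - eps_l)

def illustrative_brackets(eps0, v0, v):
    """Hypothetical bounding model for a sample larger than the base
    geometry (v >= v0): efficiency can neither exceed eps0 nor fall
    faster than inverse volume. An assumption for illustration only."""
    assert v >= v0 > 0
    return eps0, eps0 * v0 / v

# e.g. base efficiency 0.10 measured on a 0.5 L sample, extrapolated to
# a 1.0 L sample of the same matrix
```

As the abstract notes, the further V strays from V_0, the wider the bracket Δε becomes, which is exactly what this construction reproduces.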
The EUROMET project 428 examines efficiency transfer computation for Ge gamma-ray spectrometers when the efficiency is known for a reference point source geometry in the 60 keV to 2 MeV energy range. For this, different methods are used, such as Monte Carlo simulation or semi-empirical computation. The exercise compares the application of these methods to the same selected experimental cases to determine the usage limitations versus the requested accuracy. To examine these results carefully and derive information for improving the computation codes, this study was limited to a few simple cases, starting from an experimental efficiency calibration for a point source at 10 cm source-to-detector distance. The first part concerns the simplest case of geometry transfer, i.e., using point sources at three source-to-detector distances: 2, 5 and 20 cm; the second part deals with transfer from point source geometry to cylindrical geometry with three different matrices. The general results show that the deviations between the computed results and the measured efficiencies are for the most part within 10%. The quality of the results is rather inhomogeneous and shows that these codes cannot be used directly for metrological purposes. However, most of them are operational for routine measurements where efficiency uncertainties of 5-10% are sufficient. (author)
Double Chooz Neutron Detection Efficiency with Calibration System
Chang, Pi-Jung
2012-03-01
The Double Chooz experiment is designed to search for a non-vanishing mixing angle theta13 with unprecedented sensitivity. The first results obtained with the far detector only indicate a non-zero value of theta13. The Double Chooz detector system consists of a main detector, an outer veto system and a number of calibration systems. The main detector consists of a series of concentric cylinders. The target vessel, a liquid scintillator loaded with 0.1% Gd, is surrounded by the gamma-catcher, a non-loaded liquid scintillator. A buffer region of non-scintillating liquid surrounds the gamma-catcher and serves to decrease the level of accidental background. The Inner Veto region lies outside the buffer. The experiment is calibrated with light sources, radioactive point sources, cosmic rays and natural radioactivity. The radio-isotopes, sealed in miniature capsules, are deployed in the target and the gamma-catcher. Neutron detection efficiency is one of the major systematic components in the measurement of anti-neutrino disappearance. An untagged 252Cf source was used to determine the fractions of neutron captures on Gd, the neutron capture time systematic and the neutron delayed energy systematic. The details will be explained in the talk.
Highly Efficient Monte-Carlo for Estimating the Unavailability of Markov Dynamic System
XIAO Gang; DENG Li; ZHANG Ben-Ai; ZHU Jian-Shi
2004-01-01
Monte Carlo simulation has become an important tool for estimating the reliability and availability of dynamic systems, since conventional numerical methods are no longer efficient when the size of the system to solve is large. However, evaluating by simulation the probability of occurrence of very rare events means playing a very large number of histories of the system, which leads to unacceptable computing time. Highly efficient Monte Carlo schemes must therefore be worked out. In this paper, based on the integral equation describing state transitions of a Markov dynamic system, a uniform Monte Carlo method for estimating unavailability is presented. Using a free-flight estimator, direct statistical estimation Monte Carlo is achieved. Using both the free-flight estimator and a biased probability space of sampling, weighted statistical estimation Monte Carlo is also achieved. Five Monte Carlo schemes, including crude simulation, analog simulation, statistical estimation based on crude and analog simulation, and weighted statistical estimation, are used for calculating the unavailability of a repairable Con/3/30:F system. Their efficiencies are compared with each other. The results show that the weighted statistical estimation Monte Carlo has the smallest variance and the highest efficiency in very rare events simulation.
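As a point of reference for the crude/analog baseline this abstract compares against, analog Monte Carlo unavailability estimation for a single repairable component (a far simpler system than the Con/3/30:F example) looks like this; the exponential failure and repair rates are hypothetical:

```python
import random

def analog_unavailability(lam, mu, t_end, n_hist=20000, seed=1):
    """Analog Monte Carlo sketch: each history alternates exponential
    up-sojourns (failure rate lam) and down-sojourns (repair rate mu);
    the point unavailability at t_end is estimated as the fraction of
    histories found in the failed state at that time."""
    rng = random.Random(seed)
    down = 0
    for _ in range(n_hist):
        t, failed = 0.0, False
        while True:
            rate = mu if failed else lam
            t += rng.expovariate(rate)    # next transition time
            if t > t_end:
                break                     # state at t_end is `failed`
            failed = not failed
        down += failed
    return down / n_hist
```

For long times this converges to the analytic steady-state unavailability lam/(lam + mu); the variance of such an analog estimator is what makes rare-event cases intractable and motivates the weighted schemes the paper studies.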
Efficiency Calibration for Measuring the 12C(n, 2n)11C Cross Section
Eckert, Thomas; Gula, August; Vincett, Laurel; Yuly, Mark; Padalino, Stephen; Russ, Megan; Bienstock, Mollie; Simone, Angela; Ellison, Drew; Desmitt, Holly; Sangster, Craig; Regan, Sean; Fitzgerald, Ryan
2015-11-01
One possible inertial confinement fusion diagnostic involves tertiary neutron activation via the 12C(n, 2n)11C reaction. A recent experiment to measure this reaction cross-section involved coincidence counting the annihilation gamma rays produced by the positron decay of 11C. This requires an accurate value for the full-peak coincidence efficiency of the NaI detector system. The GEANT 4 toolkit was used to develop a Monte Carlo simulation of the detector system which can be used to calculate the required efficiencies. For validation, simulation predictions have been compared with the results of two experiments. In the first, full-peak coincidence positron annihilation efficiencies were measured for 22Na decay positrons that annihilate in a small plastic scintillator. In the second, a NIST-calibrated 68Ge source was used. A comparison of calculated with measured efficiencies, as well as 12C(n, 2n)11C cross sections are presented. Funded in part by a grant from the DOE through the Laboratory for Laser Energetics.
A Monte Carlo (MC) based individual calibration method for in vivo x-ray fluorescence analysis (XRF)
Hansson, Marie; Isaksson, Mats
2007-04-01
X-ray fluorescence analysis (XRF) is a non-invasive method that can be used for in vivo determination of thyroid iodine content. System calibrations with phantoms resembling the neck may give misleading results in the cases when the measurement situation largely differs from the calibration situation. In such cases, Monte Carlo (MC) simulations offer a possibility of improving the calibration by better accounting for individual features of the measured subjects. This study investigates the prospects of implementing MC simulations in a calibration procedure applicable to in vivo XRF measurements. Simulations were performed with Penelope 2005 to examine a procedure where a parameter, independent of the iodine concentration, was used to get an estimate of the expected detector signal if the thyroid had been measured outside the neck. An attempt to increase the simulation speed and reduce the variance by exclusion of electrons and by implementation of interaction forcing was conducted. Special attention was given to the geometry features: analysed volume, source-sample-detector distances, thyroid lobe size and position in the neck. Implementation of interaction forcing and exclusion of electrons had no obvious adverse effect on the quotients while the simulation time involved in an individual calibration was low enough to be clinically feasible.
Russian roulette efficiency in Monte Carlo resonant absorption calculations
Ghassoun, J. E-mail: ghassoun@ucam.ac.ma; Jehouani, A
2000-11-15
The resonant absorption calculation in media containing heavy resonant nuclei is one of the most difficult problems treated in reactor physics. Deterministic techniques need many approximations to solve this kind of problem. On the other hand, the Monte Carlo method is a reliable mathematical tool for evaluating the neutron resonance escape probability. But it suffers from large statistical deviations of results and long computation times. In order to overcome this problem, we have used the Splitting and Russian Roulette technique coupled separately to the survival biasing and to the importance sampling for the energy parameter. These techniques have been used to calculate the neutron resonance absorption in infinite homogenous media containing hydrogen and uranium characterized by the dilution (ratio of the concentrations of hydrogen to uranium). The punctual neutron source energy is taken at E{sub s}=2 MeV and E{sub s}=676.45 eV, whereas the energy cut-off is fixed at E{sub c}=2.768 eV. The results show a large reduction of computation time and statistical deviation, without altering the mean resonance escape probability compared to the usual analog simulation. The Splitting and Russian Roulette coupled to the survival biasing method is found to be the best methods for studying the neutron resonant absorption, particularly for high energies. A comparison is done between the Monte Carlo and deterministic methods based on the numerical solution of the neutron slowing down equations by the iterative method results for several dilutions.
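The pairing of survival biasing with Russian roulette described above can be illustrated on a deliberately simple 1-D, forward-only rod model, for which the transmission probability is known analytically. This is a hedged sketch of the general technique, not the authors' resonance-absorption code; the cross-section values and weight threshold are arbitrary.

```python
import math
import random

def transmission_sb_rr(sigma_t, p_abs, length, n_hist, w_min=0.1, seed=1):
    """Transmission through a 1-D forward-only rod, estimated with survival
    biasing (no analog absorption; the particle weight is reduced instead)
    plus Russian roulette to kill low-weight histories without bias."""
    rng = random.Random(seed)
    score = 0.0
    for _ in range(n_hist):
        x, w = 0.0, 1.0
        while True:
            x += rng.expovariate(sigma_t)      # distance to next collision
            if x >= length:                    # escaped: tally the weight
                score += w
                break
            w *= 1.0 - p_abs                   # survival biasing
            if w < w_min:                      # Russian roulette
                if rng.random() < 0.5:
                    w *= 2.0                   # survivor carries doubled weight
                else:
                    break                      # killed; unbiased on average
    return score / n_hist

def transmission_exact(sigma_t, p_abs, length):
    """Analytic answer for this toy model: exp(-sigma_t * p_abs * length)."""
    return math.exp(-sigma_t * p_abs * length)
```

The roulette step preserves the expected weight (survive with probability 1/2, doubling the weight), which is why the estimate stays unbiased while short-circuiting histories that would contribute little.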
A method based on a combination of the variance-reduction techniques of particle splitting and Russian roulette is presented. This method improves the efficiency of radiation transport through linear accelerator geometries simulated with the Monte Carlo method. The method, named 'splitting-roulette', was implemented in the Monte Carlo code PENELOPE and tested on an Elekta linac, although it is general enough to be implemented in any other general-purpose Monte Carlo radiation transport code and linac geometry. Splitting-roulette uses either of two splitting modes: simple splitting and 'selective splitting', a new splitting mode based on the angular distribution of bremsstrahlung photons implemented in the Monte Carlo code PENELOPE. Splitting-roulette improves the simulation efficiency of an Elekta SL25 linac by a factor of 45. (paper)
Monte-Carlo based uncertainty analysis: Sampling efficiency and sampling convergence
Monte Carlo analysis has become nearly ubiquitous since its introduction, now over 65 years ago. It is an important tool in many assessments of the reliability and robustness of systems, structures or solutions. As the deterministic core simulation can be lengthy, the computational cost of Monte Carlo can be a limiting factor. To reduce that cost as much as possible, sampling efficiency and convergence for Monte Carlo are investigated in this paper. The first section shows that non-collapsing space-filling sampling strategies, illustrated here with the maximin and uniform Latin hypercube designs, greatly enhance the sampling efficiency and render a desired level of accuracy of the outcomes attainable with far fewer runs. In the second section it is demonstrated that standard sampling statistics are inapplicable for Latin hypercube strategies. A sample-splitting approach is put forward, which in combination with replicated Latin hypercube sampling allows assessing the accuracy of Monte Carlo outcomes. The assessment in turn permits halting the Monte Carlo simulation when the desired level of accuracy is reached. Both measures are fairly simple upgrades of the current state of the art in Monte Carlo-based uncertainty analysis, but they substantially extend its applicability.
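A Latin hypercube design of the kind discussed above is easy to construct: every axis is cut into n equal-width strata and each stratum is visited exactly once. The sketch below (plain Python, illustrative test function and parameters) shows the variance advantage over simple random sampling for a smooth integrand; it is not the maximin-optimized design of the paper.

```python
import random

def latin_hypercube(n, dim, rng):
    """One Latin hypercube sample of n points in [0, 1]^dim: each of the n
    equal-width strata of every coordinate axis is visited exactly once."""
    cols = []
    for _ in range(dim):
        perm = list(range(n))
        rng.shuffle(perm)                      # random stratum order per axis
        cols.append([(k + rng.random()) / n for k in perm])
    return list(zip(*cols))

def estimate_mean(points, f):
    """Sample mean of f over a list of sample points."""
    return sum(f(*p) for p in points) / len(points)
```

For the smooth function f(x, y) = x + y², whose exact mean over the unit square is 5/6, replicated Latin hypercube estimates scatter far less than simple random sampling at the same budget, which is the "far fewer runs" effect the abstract describes.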
Refined Stratified Sampling for efficient Monte Carlo based uncertainty quantification
A general adaptive approach rooted in stratified sampling (SS) is proposed for sample-based uncertainty quantification (UQ). To motivate its use in this context, the space-filling, orthogonality, and projective properties of SS are compared with simple random sampling and Latin hypercube sampling (LHS). SS is demonstrated to provide attractive properties for certain classes of problems. The proposed approach, Refined Stratified Sampling (RSS), capitalizes on these properties through an adaptive process that adds samples sequentially by dividing the existing subspaces of a stratified design. RSS is proven to reduce variance compared to traditional stratified sample extension methods, while providing comparable or enhanced variance reduction when compared to sample size extension methods for LHS, which do not afford the same degree of flexibility to facilitate a truly adaptive UQ process. An initial investigation of optimal stratification is presented and motivates the potential for major advances in variance reduction through optimally designed RSS. Potential paths for extension of the method to high dimension are discussed. Two examples are provided. The first involves UQ for a low dimensional function where convergence is evaluated analytically. The second presents a study to assess the response variability of a floating structure to an underwater shock. - Highlights: • An adaptive process, rooted in stratified sampling, is proposed for Monte Carlo-based uncertainty quantification. • Space-filling, orthogonality, and projective properties of stratified sampling are investigated. • Stratified sampling is shown to possess attractive properties for certain classes of problems. • Refined Stratified Sampling, a new sampling method, is proposed that enables the adaptive UQ process. • Optimality of RSS stratum division is explored
Source-free efficiency calibration and verification of a γ-ray spectral system
In this paper, a source-free efficiency calibration method is used to verify an HPGe γ-spectrometer. A 60Co standard point source and three body sources were used in the efficiency calibration with the LabSOCS code. The results show that this method can be used as a supplementary method for efficiency calibration and for laboratory quality control. The impact factors of source-free efficiency fitting were also analyzed. (authors)
During construction of the whole body counter (WBC) at the Children’s Nutrition Research Center (CNRC), efficiency calibration was needed to translate acquired counts of 40K to actual grams of potassium for measurement of total body potassium (TBK) in a diverse subject population. The MCNP Monte Car...
Efficient planar camera calibration via automatic image selection
Byrne, Brendan P.; Mallon, John; Whelan, Paul F.
2009-01-01
This paper details a novel approach to automatically selecting images which improve camera calibration results. An algorithm is presented which identifies calibration images that inherently improve camera parameter estimates based on their geometric configuration or image network geometry. Analysing images in a more intuitive geometric framework allows image networks to be formed based on the relationship between their world to image homographies. Geometrically, it is equivalent to enforcing ma...
Calibration of a neutron moisture gauge by Monte-Carlo simulation
Neutron transport calculations using the MCNP code have been used to determine flux distributions in soils and to derive the calibration curves of a neutron gauge. The calculations were carried out for a typical geometry identical to that of the moisture gauge HUMITERRA developed by the Laboratorio Nacional de Engenharia e Tecnologia Industrial, Portugal. To test the reliability of the method, computed and experimental results were compared. The effect on the gauge calibration curve of varying several parameters which characterize the measurement system was studied, namely the soil dry bulk density, the active length of the neutron detector, and the materials and wall thickness of the probe casing and of the access tubes. The usefulness of the method in the design, development and calibration of neutron gauges for soil moisture determinations is discussed. (Author)
A special parallel plate ionization chamber, inserted in a slab phantom for the personal dose equivalent Hp(10) determination, was developed and characterized in this work. This ionization chamber has collecting electrodes and window made of graphite, and the walls and phantom made of PMMA. The tests comprise experimental evaluation following international standards and Monte Carlo simulations, employing the PENELOPE code to evaluate the design of this new dosimeter. The experimental tests were conducted employing the radioprotection level quality N-60 established at the IPEN, and all results were within the recommended standards. - Highlights: • A special ionization chamber, inserted in a slab phantom, was designed and evaluated. • This dosimeter was utilized for the Hp(10) determination. • The evaluation of this dosimeter followed international standards. • The PENELOPE Monte Carlo code was used to evaluate the design of this dosimeter. • The tests indicated that this dosimeter may be used as a reference dosimeter
Monte Carlo simulation of GM probe and NaI detector efficiency for surface activity measurements
This paper deals with the direct measurement of total (fixed plus removable) surface activity in the presence of interfering radiation fields. Two methods based on Monte Carlo simulations are used: one for a Geiger–Muller (GM) ionisation probe and the other for sodium iodide (NaI) detector with lead collimators; equations for the most general case and the geometry models for Monte Carlo simulation of both (GM and NaI) detectors are employed. Finally, an example of application is discussed. - Highlights: • Two methods for direct measurements of beta/gamma surface activity are proposed. • Monte Carlo simulated efficiency of detectors was validated and tested. • The calculated and measured efficiencies of detection systems were very similar. • The comparison between two different methods shows good agreement. • Methods can be used for rapid and accurate direct measurements of surface activity
Monte Carlo calculation of efficiencies of whole-body counter, by microcomputer
A computer program using the Monte Carlo method to calculate whole-body counting efficiencies for different body radiation distributions is presented. An analytical simulator (for man and for child) incorporated with 99mTc, 131I and 42K is used. (M.A.C.)
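The core of such an efficiency calculation is a geometric acceptance estimate: sample isotropic emission directions and count the fraction whose straight path intersects the detector. The sketch below does this for an idealized on-axis point source and a bare circular detector face, a far simpler setting than the anthropomorphic simulator of the abstract, and can be checked against the closed-form solid-angle fraction.

```python
import math
import random

def geometric_efficiency(radius, dist, n, seed=0):
    """Fraction of isotropically emitted photons from an on-axis point
    source at distance dist that hit a circular detector face of the
    given radius (straight-line geometry, no attenuation or interactions)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        cos_t = 2.0 * rng.random() - 1.0       # isotropic: cos(theta) uniform
        if cos_t <= 0.0:
            continue                           # emitted away from the face
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        if dist * sin_t / cos_t <= radius:     # radial offset at the plane
            hits += 1
    return hits / n

def geometric_efficiency_exact(radius, dist):
    """Closed-form solid-angle fraction for the same geometry."""
    return 0.5 * (1.0 - dist / math.hypot(dist, radius))
```

A full whole-body counter simulation would additionally transport the photon through tissue and detector materials; this sketch isolates only the geometric factor.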
The thesis was carried out in the context of dating by thermoluminescence. This method requires laboratory measurements of the natural radioactivity, for which we used a germanium spectrometer. To refine its calibration, we modelled the spectrometer with the Monte Carlo code Geant4. We developed a geometrical model which takes into account the presence of inactive zones and zones of poor charge collection within the germanium crystal. The parameters of the model were adjusted by comparison with experimental results obtained with a 137Cs source. It appeared that the shape of the inactive zones is less simple than presented in the specialized literature. The model was then extended to the case of a more complex source, with cascade effect and angular correlations between photons: 60Co. Lastly, applied to extended sources, it gave correct results and allowed us to validate the simulation of matrix effects. (author)
Efficiency of Static Knowledge Bias in Monte-Carlo Tree Search
Ikeda, Kokolo; Viennot, Simon
2014-01-01
Monte-Carlo methods are currently the best known algorithms for the game of Go. It is already known that Monte-Carlo simulations based on a probability model containing static knowledge of the game are more efficient than random simulations. Such probability models are also used by some programs in the tree search policy to limit the search to a subset of the legal moves or to bias the search, but this aspect is not so well documented. In this article, we try to describe more precisely how st...
Monte Carlo studies on the hadronic calibration of the H1 calorimeter with HERA events
Two different methods to calibrate the H1 calorimeter with hadrons from HERA events are investigated. For these studies the LEPTO/JETSET event generator and the fast H1 detector simulation program P.S.I. were used. Isolated particles, measured and reconstructed with the track chambers, may cause isolated showers within the calorimeter. The measured momenta of hadrons (up to about 20 GeV/c) can be compared with the measured energy in the calorimeter. The influence of neutral particles and of neighbouring showers on the energy deposition is discussed. It is shown that a calibration is possible by comparing the transverse momentum of the scattered electron and of secondary hadrons. Disturbing effects on this measurement (e.g. energy losses in the beamhole) are presented. In both cases the number of events with Q2>10 GeV2 corresponding to 1 pb-1 is found to be sufficient to apply the mentioned methods for a global calibration. (orig./HSI)
Ródenas, J; Burgos, M C; Zarza, I; Gallardo, S
2005-01-01
Simulation of detector calibration using the Monte Carlo method is very convenient. The computational calibration procedure using the MCNP code was validated by comparing results of the simulation with laboratory measurements. The standard source used for this validation was a disc-shaped filter where fission and activation products were deposited. Some discrepancies between the MCNP results and laboratory measurements were attributed to the point source model adopted. In this paper, the standard source has been simulated using both point and surface source models. Results from both models are compared with each other as well as with experimental measurements. Two variables, namely, the collimator diameter and detector-source distance have been considered in the comparison analysis. The disc model is seen to be a better model as expected. However, the point source model is good for large collimator diameter and also when the distance from detector to source increases, although for smaller sizes of the collimator and lower distances a surface source model is necessary. PMID:16604596
An Efficient Feedback Calibration Algorithm for Direct Imaging Radio Telescopes
Beardsley, Adam P; Bowman, Judd D; Morales, Miguel F
2016-01-01
We present the E-field Parallel Imaging Calibration (EPICal) algorithm, which addresses the need for a real-time calibration method for direct imaging radio astronomy correlators. Direct imaging involves a spatial fast Fourier transform of antenna voltages, alleviating the harsh $\mathcal{O}(N_a^2)$ computational scaling to a more gentle $\mathcal{O}(N_a \log_2 N_a)$, which can save orders of magnitude in computation cost for next generation arrays consisting of hundreds to thousands of antennas. However, because signals are mixed in the correlator, gain correction must be applied on the front end. We develop the EPICal algorithm to form gain solutions in real time without ever forming visibilities. This method scales as the number of antennas, and produces results comparable to those from visibilities. Through simulations and application to Long Wavelength Array data we show this algorithm is a promising solution for next generation instruments.
Calibration of the Top-Quark Monte Carlo Mass.
Kieseler, Jan; Lipka, Katerina; Moch, Sven-Olaf
2016-04-22
We present a method to establish, experimentally, the relation between the top-quark mass m_{t}^{MC} as implemented in Monte Carlo generators and the Lagrangian mass parameter m_{t} in a theoretically well-defined renormalization scheme. We propose a simultaneous fit of m_{t}^{MC} and an observable sensitive to m_{t}, which does not rely on any prior assumptions about the relation between m_{t} and m_{t}^{MC}. The measured observable is independent of m_{t}^{MC} and can be used subsequently for a determination of m_{t}. The analysis strategy is illustrated with examples for the extraction of m_{t} from inclusive and differential cross sections for hadroproduction of top quarks. PMID:27152794
Validation of a Monte Carlo Model of the Fork Detector with a Calibrated Neutron Source
Borella, Alessandro; Mihailescu, Liviu-Cristian
2014-02-01
The investigation of experimental methods for safeguarding spent fuel elements is one of the research areas at the Belgian Nuclear Research Centre SCK•CEN. A version of the so-called Fork Detector has been designed at SCK•CEN and is in use at the Belgian Nuclear Power Plant of Doel for burnup determination purposes. The Fork Detector relies on passive neutron and gamma measurements for burnup assessment and safeguards verification activities. In order to better evaluate and understand the method, and with a view to extending its capabilities, the Fork Detector was modeled with the code MCNPX. A validation of the model was done in the past using spent fuel measurement data. This paper reports on measurements carried out at the Laboratory for Nuclear Calibrations (LNK) of SCK•CEN with a 252Cf source calibrated according to ISO 8529 standards. The experimental data are presented and compared with simulations. In the simulations, not only the detector but also the measurement room was modeled, based on the available design information. The results of this comparison exercise are also presented in this paper.
Study of efficiency calibration of the HPGe detector for aurum
Neutron flux measurement in the PUSPATI TRIGA Reactor is carried out using the Au-198 foil activation technique, and a high-purity germanium (HPGe) detector is used to count the foils. The quality of this gamma spectrometry measurement depends directly on the accuracy of the detection efficiency in the specific measurement geometry. Experimental efficiency determination was restricted to several measurement geometries and cannot be applied directly to all measurement configurations. In this work, an approach using efficiencies measured with disk sources is applied to plot the detector efficiency as a function of gamma energy, ε(E), and to obtain the detection efficiency for gold (aurum) as a function of distance with the HPGe detector. The gold detection efficiency is found to decrease exponentially as the source-to-detector distance increases. The efficiency curves obtained in this way will be applied to the measurement of irradiated foils for neutron flux mapping of the PUSPATI TRIGA Reactor. (author)
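The reported exponential fall-off of efficiency with distance suggests a simple calibration model eff(d) = a·exp(-b·d). A minimal least-squares fit of that model, via the linearised form ln eff = ln a - b·d, might look as follows; the fitting procedure and the parameter values in the example are illustrative assumptions, not taken from the abstract.

```python
import math

def fit_exponential(distances, effs):
    """Least-squares fit of eff = a * exp(-b * d), performed on the
    linearised model ln(eff) = ln(a) - b * d.  Returns (a, b)."""
    n = len(distances)
    ys = [math.log(e) for e in effs]
    sx, sy = sum(distances), sum(ys)
    sxx = sum(d * d for d in distances)
    sxy = sum(d * y for d, y in zip(distances, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return math.exp(intercept), -slope
```

Note that fitting in log space implicitly weights small efficiencies more heavily; for noisy calibration data a weighted or nonlinear fit may be preferable.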
In this study, a geometry splitting method was proposed to increase the calculation efficiency of Monte Carlo simulation, and the effect of the splitting strategy on that efficiency was analyzed. First, the neutron population distribution in a deep-penetration problem was characterized; a geometry splitting method was then devised that takes this distribution into account. Using the proposed method, figures of merit (FOMs) were estimated for benchmark problems and compared with those of the conventional geometry splitting strategy. The results show that the proposed method can considerably increase the calculation efficiency of geometry splitting. It is expected that the proposed method will contribute to optimizing the computational cost as well as reducing human error in Monte Carlo simulation. Geometry splitting in Monte Carlo (MC) calculations is one of the most popular variance reduction techniques owing to its simplicity, reliability and efficiency. To use geometry splitting, the user must choose the splitting locations and assign a relative importance to each region. Generally, these splitting parameters are decided by the user's experience, so they can be selected ineffectively or erroneously. To prevent this, a common recommendation that eliminates guesswork is to split the geometry evenly and then estimate the importances with a few iterations that preserve the population of particles penetrating each region. However, even geometry splitting can make the calculation inefficient because of the change in the mean free path (MFP) of the particles.
Although Russian roulette is applied very often in Monte Carlo calculations, not much literature exists on its quantitative influence on the variance and efficiency of a Monte Carlo calculation. Elaborating on the work of Lux and Koblinger using moment equations, new relevant equations are derived to calculate the variance of a Monte Carlo simulation using Russian roulette. To demonstrate its practical application the theory is applied to a simplified transport model resulting in explicit analytical expressions for the variance of a Monte Carlo calculation and for the expected number of collisions per history. From these expressions numerical results are shown and compared with actual Monte Carlo calculations, showing an excellent agreement. By considering the number of collisions in a Monte Carlo calculation as a measure of the CPU time, also the efficiency of the Russian roulette can be studied. It opens the way for further investigations, including optimization of Russian roulette parameters. (authors)
Farr, W. M.; Stevens, D; Mandel, Ilya
2015-01-01
Selection among alternative theoretical models given an observed dataset is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, yet cannot efficiently explore both parameter spaces at once. Thus, a naive jump between parameter spaces is unlikely to be accepted ...
Vilela, E.; Morelli, B.; Gualdrini, G.; Burn, K.W.; Monteventi, F.; Fantuzzi, E. [ENEA, Centro Ricerche Ezio Clementel, Bologna (Italy). Dipt. Ambiente
1998-07-01
In the present report a thermal neutron assembly consisting of a polyethylene cube for calibrating dosimetric instruments at the ENEA (National Agency for New Technology, Energy and the Environment) Institute for Radiation Protection is described. The characterization of such a facility in terms of the spectral neutron fluence and the ambient dose equivalent rates according to the ICRP60 document is illustrated in detail. Special variance reduction algorithms developed at ENEA allowed satisfactory statistics to be obtained over the whole investigated energy domain.
Evidence-Based Model Calibration for Efficient Building Energy Services
Bertagnolio, Stéphane
2012-01-01
Energy services play a growing role in the control of energy consumption and the improvement of energy efficiency in non-residential buildings. Most of the energy use analyses involved in the energy efficiency service process require on-field measurements and energy use analysis. Today, while detailed on-field measurements and energy counting stay generally expensive and time-consuming, energy simulations are increasingly cheaper due to the continuous improvement of computer speed. This work ...
Kramer, Gary H; Burns, Linda C; Guerriere, Steven
2002-10-01
Monte Carlo simulation has been used to model the Human Monitoring Laboratory's scanning detector whole body counter. The paper also shows that a scanning detector counting system can be satisfactorily simulated by placing the detector at different positions relative to the phantom and averaging the results. This technique was verified by experimental work that obtained 96% agreement between scanning and averaging. The BOMAB phantom family in use at the Human Monitoring Laboratory was also modeled so that both counting efficiency and size correction factors could be estimated. The size correction factors were found to lie in the range 2.4 to 0.66, depending on phantom size and photon energy. The efficiencies from the MCNP scanning simulations were 97% of the measured scanning efficiency. A single function that fits counting efficiency, size, and photon energy was also developed; it gives predicted efficiencies within +10% to -8% of the true value. PMID:12240728
Calculation of neutron detection efficiency for the thick lithium glass using Monte Carlo method
The neutron detection efficiencies of an NE912 lithium glass (45 mm in diameter, 9.55 mm thick) and of two ST601 lithium glasses (40 mm in diameter, 3 and 10 mm thick, respectively) have been calculated with a Monte Carlo computer code. The energy range of the calculation is 10 keV to 2.0 MeV. The effect of the time delay caused by neutron multiple scattering in the detectors (prompt neutron detection efficiency) has been considered.
Study On The Peak Efficiency Curve Of HPGe Detector With Marinelli Beakers By Monte Carlo Method
In this paper, the peak efficiency curves of an HPGe detector for Marinelli beakers were determined with the MCNP4C2 code of the Los Alamos Laboratory for three Marinelli beakers of different sizes. The influence of matrix density on the efficiency was also studied by simulating the efficiency curves for different matrix materials: a U-CaCO3-MgCO3 calibrated mixture with a density of 1.51 g/cm3; a ThO2-SiO2-polyester calibrated mixture with a density of 1.95 g/cm3; and Ta2O5 with a density of 8.20 g/cm3. The effect of K-edge absorption was observed for the thorium and tantalum matrices, which contain rather high contents of high-Z elements. (author)
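Between calibrated energies, a peak-efficiency curve like the ones simulated above is often evaluated by log-log interpolation of tabulated (E, eps) points. The following sketch shows that idea in plain Python; it is a generic practice, not the MCNP4C2 procedure of the abstract, and the tabulated values in the example are hypothetical.

```python
import math

def eff_interp(energy, energies, effs):
    """Log-log linear interpolation of a full-energy-peak efficiency curve
    from tabulated (energy, efficiency) calibration points.  Exact for a
    power-law curve, a common local approximation for HPGe efficiencies."""
    xs = [math.log(e) for e in energies]
    ys = [math.log(f) for f in effs]
    x = math.log(energy)
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return math.exp(ys[i] + t * (ys[i + 1] - ys[i]))
    raise ValueError("energy outside the calibrated range")
```

Because the interpolation is piecewise power-law, it cannot reproduce sharp features such as the K-edge absorption effects noted in the abstract; those require calibration points (or simulation) on both sides of the edge.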
Mathematical efficiency calibration with uncertain source geometries using smart optimization
Menaa, N. [AREVA/CANBERRA Nuclear Measurements Business Unit, Saint Quentin-en-Yvelines 78182 (France); Bosko, A.; Bronson, F.; Venkataraman, R.; Russ, W. R.; Mueller, W. [AREVA/CANBERRA Nuclear Measurements Business Unit, Meriden, CT (United States); Nizhnik, V. [International Atomic Energy Agency, Vienna (Austria); Mirolo, L. [AREVA/CANBERRA Nuclear Measurements Business Unit, Saint Quentin-en-Yvelines 78182 (France)
2011-07-01
The In Situ Object Counting Software (ISOCS), a mathematical method developed by CANBERRA, is a well-established technique for computing High Purity Germanium (HPGe) detector efficiencies for a wide variety of source shapes and sizes. In the ISOCS method, the user needs to input the geometry-related parameters such as the source dimensions, matrix composition and density, along with the source-to-detector distance. In many applications, the source dimensions, the matrix material and the density may not be well known. Under such circumstances, the efficiencies may not be very accurate, since the modeled source geometry may not be representative of the measured geometry. CANBERRA developed an efficiency optimization software known as 'Advanced ISOCS' that varies the poorly known parameters within user-specified intervals and determines the optimal efficiency shape and magnitude based on available benchmarks in the measured spectra. The benchmarks could be results from isotopic codes such as MGAU, MGA, IGA, or FRAM, activities from multi-line nuclides, and multiple counts of the same item taken in different geometries (from the side, bottom, top, etc.). The efficiency optimization is carried out using either a random search based on standard probability distributions, or numerical techniques that carry out a more directed (referred to as 'smart' in this paper) search. Measurements were carried out using representative source geometries and radionuclide distributions. The radionuclide activities were determined using the optimum efficiency and compared against the true activities. The 'Advanced ISOCS' method has many applications, among which are safeguards, decommissioning and decontamination, non-destructive assay systems, and nuclear reactor outage maintenance. (authors)
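The random-search variant described above can be illustrated with a toy model. The `simulated_efficiency` function below is a hypothetical stand-in for the ISOCS efficiency engine (not the real algorithm or API), and the benchmark activity, count rate, and density bounds are invented for the example: the poorly known matrix density is varied within user bounds until the implied activity best matches the benchmark.

```python
import random

def simulated_efficiency(density):
    """Hypothetical stand-in for an efficiency engine: efficiency falls
    with matrix density through self-absorption (toy model, not ISOCS)."""
    return 0.05 / (1.0 + 0.8 * density)

def random_search(benchmark_activity, measured_count_rate, bounds,
                  n_trials=2000, seed=1):
    """Sample the uncertain density within bounds; keep the value whose
    implied activity best matches the benchmark activity."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        rho = rng.uniform(*bounds)
        implied_activity = measured_count_rate / simulated_efficiency(rho)
        err = abs(implied_activity - benchmark_activity)
        if best is None or err < best[0]:
            best = (err, rho)
    return best[1]

# Toy setup: the "true" density is 1.5 g/cm3, benchmark activity 100 units.
true_rho = 1.5
count_rate = 100.0 * simulated_efficiency(true_rho)  # what would be measured
rho_opt = random_search(100.0, count_rate, (0.5, 3.0))
```

A 'smart' (directed) search would replace the uniform sampling with a numerical optimizer, converging in far fewer efficiency evaluations.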
Verification of Absolute Calibration of Quantum Efficiency for LSST CCDs
Coles, Rebecca; Chiang, James; Cinabro, David; Gilbertson, Woodrow; Haupt, Justine; Kotov, Ivan; Neal, Homer; Nomerotski, Andrei; O'Connor, Paul; Stubbs, Christopher; Takacs, Peter
2016-01-01
We describe a system to measure the Quantum Efficiency in the wavelength range of 300nm to 1100nm of 40x40 mm n-channel CCD sensors for the construction of the 3.2 gigapixel LSST focal plane. The technique uses a series of instruments to create a very uniform flux of photons of controllable intensity in the wavelength range of interest across the face of the sensor. This allows the absolute Quantum Efficiency to be measured with an accuracy in the 1% range. This system will be part of a production facility at Brookhaven National Lab for the basic components of the LSST camera.
Calibrating the photon detection efficiency in IceCube
Tosi, Delia
2015-01-01
The IceCube neutrino observatory is composed of more than five thousand light sensors, Digital Optical Modules (DOMs), installed on the surface and at depths between 1450 and 2450 m in clear ice at the South Pole. Each DOM incorporates a 10-inch diameter photomultiplier tube (PMT) intended to detect light emitted when high energy neutrinos interact with atoms in the ice. Depending on the energy of the neutrino and the distance from secondary particle tracks, PMTs can be hit by up to several thousand photons within a few hundred nanoseconds. The number of photons per PMT and their time distribution is used to reject background events and to determine the energy and direction of each neutrino. The detector energy scale was established from previous lab measurements of DOM optical sensitivity, then refined based on observed light yield from stopping muons and calibration of ice properties. A laboratory setup has now been developed to more precisely measure the DOM optical sensitivity as a function of angle and w...
Dixon, D.A., E-mail: ddixon@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663, MS P365, Los Alamos, NM 87545 (United States); Prinja, A.K., E-mail: prinja@unm.edu [Department of Nuclear Engineering, MSC01 1120, 1 University of New Mexico, Albuquerque, NM 87131-0001 (United States); Franke, B.C., E-mail: bcfrank@sandia.gov [Sandia National Laboratories, Albuquerque, NM 87123 (United States)
2015-09-15
This paper presents the theoretical development and numerical demonstration of a moment-preserving Monte Carlo electron transport method. Foremost, a full implementation of the moment-preserving (MP) method within the Geant4 particle simulation toolkit is demonstrated. Beyond implementation details, it is shown that the MP method is a viable alternative to the condensed history (CH) method for inclusion in current and future generation transport codes through demonstration of the key features of the method including: systematically controllable accuracy, computational efficiency, mathematical robustness, and versatility. A wide variety of results common to electron transport are presented illustrating the key features of the MP method. In particular, it is possible to achieve accuracy that is statistically indistinguishable from analog Monte Carlo, while remaining up to three orders of magnitude more efficient than analog Monte Carlo simulations. Finally, it is shown that the MP method can be generalized to any applicable analog scattering DCS model by extending previous work on the MP method beyond analytical DCSs to the partial-wave (PW) elastic tabulated DCS data.
Developing an Efficient Calibration System for Joint Offset of Industrial Robots
Bingtuan Gao; Yong Liu; Ning Xi; Yantao Shen
2014-01-01
Joint offset calibration is one of the most important methods to improve the positioning accuracy for industrial robots. This paper presents an efficient method to calibrate industrial robot joint offset. The proposed method mainly relies on a laser pointer mounted on the robot end-effector and a position sensitive device (PSD) located in the work space arbitrarily. A vision based control was employed to aid the laser beam shooting at the center of PSD surface from several initial robot p...
Detection efficiency calibration of a semiconductor γ-spectrometer for environmental samples
A semi-empirical formula was adopted to calibrate an HPGe γ-spectrometer for its full energy peak efficiency for environmental samples. The calibration procedure is described, and the results were compared with those obtained with a set of reference sources of 137Cs, 226Ra-series and 232Th-series, prepared in soil, coal and stone-coal matrices. Both results were found to be consistent within 7.1%.
Monte Carlo evaluation of the neutron detection efficiency of a superheated drop detector
Neutron dosimetry has recently gained renewed attention, following concerns about the exposure of crew members on board aircraft and of workers around the increasing number of high-energy accelerators for medical and research purposes. At the same time, the new operational quantities for radiation dosimetry introduced by the ICRU and the ICRP, aiming at a unified metrological system applicable to all types of radiation exposure, created the need to update current devices to meet the new requirements. Superheated Drop (Bubble) Detectors (SDD) offer an alternative approach to neutron radiation protection dosimetry. The SDDs are currently studied within a large collaborative effort involving Yale University, New Haven, CT; the University of Pisa, Italy; the Physikalisch-Technische Bundesanstalt, Braunschweig, Germany; and the ENEA (Italian National Agency for New Technologies, Energy and the Environment) Centre of Bologna. The detectors were characterised through calibrations with monoenergetic neutron beams, and where experimental investigations were inadequate or impossible, such as in the intermediate energy range, parametric Monte Carlo calculations of the response were carried out. This report describes the general characteristics of the SDDs along with the Monte Carlo computations of the energy response and a comparison with the experimental results.
Morera-Gómez, Yasser; Cartas-Aguila, Héctor A.; Alonso-Hernández, Carlos M.; Nuñez-Duartes, Carlos
2016-05-01
To obtain reliable measurements of environmental radionuclide activity using HPGe (High Purity Germanium) detectors, knowledge of the absolute peak efficiency is required. This work presents a practical procedure for efficiency calibration of a coaxial n-type and a well-type HPGe detector using experimental and Monte Carlo simulation methods. The method was performed in an energy range from 40 to 1460 keV and can be used for both solid and liquid environmental samples. The calibration was initially verified by measuring several reference materials provided by the IAEA (International Atomic Energy Agency). Finally, the validity of the developed procedure was confirmed through participation in two Proficiency Tests organized by the IAEA for the members of the ALMERA network (Analytical Laboratories for the Measurement of Environmental Radioactivity). The validation also showed that measurement of 226Ra should be conducted using the coaxial n-type HPGe detector in order to minimize the true coincidence summing effect.
Efficiency calibration of an HPGe X-ray detector for quantitative PIXE analysis
Particle Induced X-ray Emission (PIXE) is an analytical technique which provides reliable and accurate quantitative results without the need for standards, once the efficiency of the X-ray detection system is calibrated. The ion beam microprobe of the Ion Beam Modification and Analysis Laboratory at the University of North Texas is equipped with a 100 mm2 high purity germanium X-ray detector (Canberra GUL0110 Ultra-LEGe). In order to calibrate the efficiency of the detector for standardless PIXE analysis, we have measured the X-ray yield of a set of commercially available X-ray fluorescence standards. The set contained elements from low atomic number Z = 11 (sodium) up to higher atomic numbers, covering the X-ray energy region from 1.25 keV to about 20 keV, where the detector is most efficient. The effective charge was obtained from the proton backscattering yield of a calibrated particle detector.
Different models have been developed considering some features of the attenuating geometry. An ideal bare Marinelli model has been compared with the actual plastic model. Concerning the detector, a bare detector model has been improved by including an aluminium absorber layer and a dead layer of inactive germanium. Calculation results of the Monte Carlo simulation have been compared with experimental measurements carried out in the laboratory for various radionuclides from a calibration gamma cocktail solution, with energies spanning a wide interval. (orig.)
A priori efficiency calculations for Monte Carlo applications in neutron transport
In this paper a general derivation is given of equations describing the variance of an arbitrary detector response in a Monte Carlo simulation and the average number of collisions a particle will suffer until its history ends. The theory is validated for a simple slab system using the two-direction transport model and for a two-group infinite system, which both allow analytical solutions. Numerical results from the analytical solutions are compared with actual Monte Carlo calculations, showing excellent agreement. These analytical solutions demonstrate the possibilities for optimizing the weight window settings with respect to variance. Using the average number of collisions as a measure for the simulation time a cost function inversely proportional to the usual figure of merit is defined, which allows optimization with respect to overall efficiency of the Monte Carlo calculation. For practical applications it is outlined how the equations for the variance and average number of collisions can be solved using a suitable existing deterministic neutron transport code with adapted number of energy groups and scattering matrices. (author)
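The figure of merit referenced above is conventionally defined as FOM = 1/(R²T), where R is the relative error of the tally and T the computing time; the cost function defined in the paper is inversely proportional to it. A brief numeric illustration with invented values:

```python
def figure_of_merit(rel_error, cpu_time_s):
    """Standard Monte Carlo figure of merit: FOM = 1 / (R^2 * T).
    For a well-behaved calculation, FOM is roughly constant with run time."""
    return 1.0 / (rel_error ** 2 * cpu_time_s)

# Illustrative: halving R costs ~4x the time, leaving FOM unchanged,
# which is exactly why FOM is a fair basis for comparing variance-reduction
# settings such as weight windows.
fom_a = figure_of_merit(0.02, 100.0)
fom_b = figure_of_merit(0.01, 400.0)
```

Optimizing weight windows then amounts to maximizing FOM: reducing variance per history while keeping the average number of collisions (a proxy for T) under control.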
Efficiency calibration of a HPGe detector for [{sup 18}F] FDG activity measurements
Fragoso, Maria da Conceicao de Farias; Lacerda, Isabelle Viviane Batista de; Albuquerque, Antonio Morais de Sa, E-mail: mariacc05@yahoo.com.br, E-mail: isabelle.lacerda@ufpe.br, E-mail: moraisalbuquerque@hotmaiI.com [Universidade Federal de Pernambuco (DEN/UFPE), Recife, PE (Brazil). Departamento de Energia Nuclear; Oliveira, Mercia Liane de; Hazin, Clovis Abrahao; Lima, Fernando Roberto de Andrade, E-mail: mercial@cnen.gov.br, E-mail: chazin@cnen.gov.br, E-mail: falima@cnen.gov.br [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)
2013-11-01
The radionuclide {sup 18}F, in the form of fludeoxyglucose (FDG), is the most used radiopharmaceutical for Positron Emission Tomography (PET). Due to the increasing demand for [{sup 18}F]FDG, it is important to ensure high quality activity measurements in nuclear medicine practice. Therefore, standardized reference sources are necessary to calibrate {sup 18}F measuring systems. Usually, the activity measurements are performed in re-entrant ionization chambers, also known as radionuclide calibrators. Among the existing alternatives for the standardization of radioactive sources, the method known as gamma spectrometry is widely used for short-lived radionuclides, since it is essential to minimize source preparation time. The purpose of this work was to perform the standardization of the [{sup 18}F]FDG solution by gamma spectrometry. In addition, the reference sources calibrated by this method can be used to calibrate and test the radionuclide calibrators of the Divisao de Producao de Radiofarmacos (DIPRA) of the Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE). Standard sources of {sup 152}Eu, {sup 137}Cs and {sup 68}Ge were used for the efficiency calibration of the spectrometer system. As a result, the efficiency curve as a function of energy was determined in a wide energy range from 122 to 1408 keV. Reference sources obtained by this method can be used in [{sup 18}F]FDG activity measurement comparison programs for PET services located in the Brazilian Northeast region. (author)
Objective: To explore the effectiveness of the LabSOCS (Laboratory Sourceless Calibration Software) efficiency calibration method for rapid laboratory analysis in emergency monitoring of nuclear incidents. Methods: The detection efficiency of three kinds of environmental samples in emergency monitoring was calculated using the LabSOCS efficiency calibration method and compared with the values obtained by the radioactive-source calibration method. Results: The maximum relative deviation of the detection efficiency between the two methods was less than 15%, and values with a relative deviation of less than 5% accounted for 70%. Conclusions: The LabSOCS efficiency calibration method could take the place of the radioactive-source efficiency calibration method and meet the requirement of rapid analysis in emergency monitoring of nuclear incidents. (authors)
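The comparison statistics quoted above (maximum relative deviation, fraction of values below 5%) can be computed as follows; the efficiency pairs are hypothetical, not the study's data:

```python
def relative_deviations(eff_computed, eff_source):
    """Per-sample relative deviation of computed vs. source-based
    detection efficiency."""
    return [abs(a - b) / b for a, b in zip(eff_computed, eff_source)]

# Hypothetical efficiency pairs (LabSOCS-style computed vs. source-based):
computed = [0.0102, 0.0235, 0.0078]
reference = [0.0100, 0.0240, 0.0080]
devs = relative_deviations(computed, reference)
max_dev = max(devs)
frac_under_5pct = sum(d < 0.05 for d in devs) / len(devs)
```

The study's acceptance argument is exactly this pattern: a bounded maximum deviation plus a large fraction of samples inside the tighter 5% band.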
Absolute calibration of the antiproton detection efficiency for BESS below 1 GeV
Asaoka, Y; Yoshida, T; Abe, K; Anraku, K; Fujikawa, M; Fuke, H; Haino, S; Izumi, K; Maeno, T; Makida, Y; Matsui, N; Matsumoto, H; Matsunaga, H; Motoki, M; Nozaki, M; Orito, S; Sanuki, T; Sasaki, M; Shikaze, Y; Sonoda, T; Suzuki, J; Tanaka, K; Toki, Y; Yamamoto, A
2001-01-01
An accelerator beam experiment was performed using a low-energy antiproton beam to measure the antiproton detection efficiency of the BESS detector. Measured efficiencies and calculated efficiencies derived from the BESS Monte Carlo simulation, based on GEANT/GHEISHA, showed good agreement. With this detailed verification of the BESS simulation, the results demonstrate that the relative systematic error of the detection efficiency derived from the BESS simulation is within 5%, compared with the previous estimate of 15%, which had been the dominant uncertainty for measurements of the cosmic-ray antiproton flux.
Plastic-NaI(Tl) composite crystal virtual calibration method for detection efficiency
In order to study the relationship between counting efficiency and crystal size, software was developed on the VC++ platform, based on the Monte Carlo method and related tools, that allows the plastic-NaI(Tl) crystal dimensions to be customized and the detection efficiency for γ-rays of different energies to be calculated. From the calculated data matrix, point-source detection efficiency functions were fitted for detectors of different sizes, and the parameters of these functions were determined. (authors)
For EPID dosimetry, the calibration should ensure that all pixels have a similar response to a given irradiation. A calibration method (MC), using an analytical fit of a Monte Carlo simulated flood field EPID image to correct for the flood field image pixel intensity shape, was proposed. It was compared with the standard flood field calibration (FF), with the use of a water slab placed in the beam to flatten the flood field (WS), and with a multiple field calibration where the EPID was irradiated with a fixed 10 x 10 field at 16 different positions (MF). The EPID was used in its normal configuration (clinical setup) and with an additional 3 mm copper slab (modified setup). Beam asymmetry measured with a diode array was taken into account in the MC and WS methods. For both setups, the MC method provided pixel sensitivity values within 3% of those obtained with the MF and WS methods (mean difference of 2%), and reproduced IMRT fields to within 3% of those obtained with the WS and MF calibrations, while differences from images calibrated with the FF method for fields larger than 10 x 10 cm2 were up to 8%. The MC, WS and MF methods all provided a major improvement on the FF method. Advantages and drawbacks of each method are reviewed.
Ge(Li) intrinsic efficiency calculation using Monte Carlo simulation for γ radiation transport
To solve a radiation transport problem using the Monte Carlo simulation method, the evolution of a large number of radiations must be simulated and their histories analysed. The evolution of a radiation starts with its emission, followed by unperturbed propagation in the medium between successive interactions, and by modification of the radiation parameters at the points where interactions occur. The goal of this paper is the calculation of the total detection efficiency and the intrinsic efficiency of a coaxial Ge(Li) detector, using the Monte Carlo method to simulate γ-radiation transport. A Ge(Li) detector with a 106 cm3 active volume and γ photons with energies in the 50 keV - 2 MeV range, emitted by a point source situated on the detector axis, were considered. Each γ photon's evolution is simulated step by step by an analogue process until the photon escapes from the detector or is completely absorbed in the active volume of the detector. (author)
Simple formalism for efficient derivatives and multi-determinant expansions in quantum Monte Carlo
Filippi, Claudia; Assaraf, Roland; Moroni, Saverio
2016-05-01
We present a simple and general formalism to compute efficiently the derivatives of a multi-determinant Jastrow-Slater wave function, the local energy, the interatomic forces, and similar quantities needed in quantum Monte Carlo. Through a straightforward manipulation of matrices evaluated on the occupied and virtual orbitals, we obtain an efficiency equivalent to algorithmic differentiation in the computation of the interatomic forces and the optimization of the orbital parameters. Furthermore, for a large multi-determinant expansion, the significant computational gain afforded by a recently introduced table method is here extended to the local value of any one-body operator and to its derivatives, in both all-electron and pseudopotential calculations.
Study of RPC Barrel maximum efficiency in 2012 and 2015 calibration collision runs
Cassar, Samwel
2015-01-01
The maximum efficiency of each of the 1020 Resistive Plate Chamber (RPC) rolls in the barrel region of the CMS muon detector is calculated from the best sigmoid fit of efficiency against high voltage (HV). Data from the HV scans, collected during calibration runs in 2012 and 2015, were compared and the rolls exhibiting a change in maximum efficiency were identified. The chi-square value of the sigmoid fit for each roll was considered in determining the significance of the maximum efficiency for the respective roll.
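A sigmoid (logistic) fit of efficiency against HV, as used above, can be sketched with a brute-force least-squares search. The HV scan below is synthetic, generated from assumed parameters (not CMS RPC data), and a production analysis would use a proper optimizer with per-point uncertainties and a chi-square goodness-of-fit check:

```python
import math

def sigmoid(hv, eff_max, hv50, slope):
    """Efficiency plateau model: eff_max / (1 + exp(-(HV - HV50)/slope))."""
    return eff_max / (1.0 + math.exp(-(hv - hv50) / slope))

def fit_sigmoid(hvs, effs):
    """Brute-force least-squares fit over a coarse parameter grid;
    a real analysis would use a numerical optimizer instead."""
    best = None
    for eff_max in [0.90 + 0.005 * i for i in range(21)]:   # 0.90 .. 1.00
        for hv50 in range(9000, 9501, 10):                  # volts
            for slope in range(50, 201, 10):                # volts
                sse = sum((sigmoid(h, eff_max, hv50, slope) - e) ** 2
                          for h, e in zip(hvs, effs))
                if best is None or sse < best[0]:
                    best = (sse, eff_max, hv50, slope)
    return best[1:]

# Synthetic HV scan generated from assumed "true" parameters:
true_params = (0.96, 9200, 100)
hvs = list(range(8800, 9801, 100))
effs = [sigmoid(h, *true_params) for h in hvs]
eff_max, hv50, slope = fit_sigmoid(hvs, effs)
```

Here `eff_max` is the quantity the study tracks between runs: a drop in the fitted plateau between the 2012 and 2015 scans flags a roll whose maximum efficiency has changed.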
An efficient calibration method for SQUID measurement system using three orthogonal Helmholtz coils
Hua, Li; Shu-Lin, Zhang; Chao-Xiang, Zhang; Xiang-Yan, Kong; Xiao-Ming, Xie
2016-06-01
For a practical superconducting quantum interference device (SQUID) based measurement system, the Tesla/volt coefficient must be accurately calibrated. In this paper, we propose a highly efficient method of calibrating a SQUID magnetometer system using three orthogonal Helmholtz coils. The Tesla/volt coefficient is regarded as the magnitude of a vector pointing in the normal direction of the pickup coil. By applying magnetic fields through a three-dimensional Helmholtz coil, the Tesla/volt coefficient can be directly calculated from the magnetometer responses to the three orthogonally applied magnetic fields. Calibration with an alternating current (AC) field is normally used for better signal-to-noise ratio in noisy urban environments, and the results are compared with a direct current (DC) calibration to exclude possible effects due to eddy currents. In our experiment, a calibration relative error of about 6.89 × 10⁻⁴ is obtained; the error is mainly caused by the non-orthogonality of the three axes of the Helmholtz coils. The method does not need precise alignment of the magnetometer inside the Helmholtz coil and can be used to calibrate multichannel magnetometer systems effectively and accurately. Project supported by the “Strategic Priority Research Program (B)” of the Chinese Academy of Sciences (Grant No. XDB04020200) and the Shanghai Municipal Science and Technology Commission Project, China (Grant No. 15DZ1940902).
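The vector interpretation above reduces to a Euclidean norm: the coefficient is the magnitude of the vector whose components are the responses to the three orthogonally applied fields. The response values below are illustrative, not measured data:

```python
import math

def tesla_per_volt(bx_response, by_response, bz_response):
    """Magnitude of the coefficient vector built from the magnetometer's
    responses (in T/V) to fields applied along three orthogonal axes."""
    return math.sqrt(bx_response ** 2 + by_response ** 2 + bz_response ** 2)

# Hypothetical axis responses; a 3-4-12 right-triangle-style example:
coeff = tesla_per_volt(3e-9, 4e-9, 12e-9)
```

Because the norm is taken over all three components, the pickup coil's orientation inside the Helmholtz coil does not need to be precisely aligned, which is the practical advantage the paper claims.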
Efficiency calibration of a planar HPGe spectrometer and measurement method for 210Pb
A semi-empirical formula was adopted to calibrate a φ43.8 mm × 5 mm planar HPGe low-energy γ- and X-ray spectrometer for the full energy peak efficiency of environmental samples in the energy range between about 14 and 63 keV. The calibration procedure is described, and the results were compared with those obtained with a set of volume reference sources of 241Am and U-Ra (in equilibrium), prepared in coal, gangue, soil and coal-ash matrices. Results from the semi-empirical formula and the reference sources were found to be consistent within ±5.5%. Another simple technique for the calibration and measurement of 210Pb in environmental samples is also described. The measurement results obtained with the spectrometer calibrated by the semi-empirical formula and by the simple 210Pb technique agreed within ±7.5% for 210Pb and 238U in gangue samples.
Modeling of detection efficiency of HPGe semiconductor detector by Monte Carlo method
Over the past ten years, following the gradual adoption of new legislative standards for protection against ionizing radiation, gamma-spectrometry has significantly penetrated the set of standard radioanalytical methods. In nuclear power plants, gamma-spectrometry has proven to be the most effective method for determining the activity of individual radionuclides, and spectrometric laboratories have gradually been equipped with the most modern instrumentation. Nevertheless, owing to the use of costly and time-intensive experimental calibration methods, the possibilities of gamma-spectrometry remained partially limited, mainly during the substantial renovation and modernization works of the late 1990s. For this reason, several calibration procedures based on computer simulations with the GEANT program were developed and tested in the spectrometric laboratory of the Bohunice Nuclear Power Plant, in cooperation with the Department of Nuclear Physics, FMPI, Bratislava. This thesis describes a calibration method for the measurement of bulk samples based on auto-absorption factors. The accuracy of the proposed method is at least comparable with that of the other methods in use, while significantly surpassing them in cost, time and simplicity. The method has been used successfully for almost two years in the spectrometric laboratory of the Radiation Protection Division at the Bohunice nuclear power plant, as shown by the results of international comparison measurements and by repeated validation measurements performed by the Slovak Institute of Metrology in Bratislava.
On preparation of efficiency calibration standards for gamma-ray spectrometers
The work on preparation of calibration standards started for the following reasons: development of gamma-spectrometry hardware and software requires an adequate quality assurance system; the calibration standards offered by established firms are expensive, and preparing standards in geometries to our specification would make them even more expensive; and the analyst community accepted the idea of a uniform quality assurance program and a uniform calibration policy. The studied materials were organic (styropore, ground coffee, tobacco leaves, seeds, flour, semolina, lentils, sugar, ion exchange resins, PTFE powder, rice, beans and bee honey) and inorganic (quartz sand, chalcedony sand, active charcoal, marble powder, zeolite, different clays, barite, soil, perlite, talc powder and their mixtures). Efficiency curves for geometry TB with different densities, efficiencies for different geometries, and a comparison with the Czech source 540-01 (silicone resin, ρ = 0.98 g/cm3) for Co-60, Co-57, Cs-137 and Cs-139 are presented. Conclusions: A procedure for the preparation of mixed-nuclide efficiency calibration standards in different geometries and with different densities has been developed. Advantages: different natural and artificial matrices used; gravimetrically controlled activity application; activated charcoal used as support of the activity; the preparation takes place in the container of the standard, so no losses of activity occur; high degree of activity distribution homogeneity; fixed volume of the standard.
TH-C-17A-08: Monte Carlo Based Design of Efficient Scintillating Fiber Dosimeters
Purpose: To accurately predict Cherenkov radiation generation in scintillating fiber dosimeters. Quantifying Cherenkov radiation provides a method for optimizing fiber dimensions, orientation, optical filters, and photodiode spectral sensitivity to achieve efficient real time imaging dosimeter designs. Methods: We develop in-house Monte Carlo simulation software to model polymer scintillation fibers' fluorescence and Cherenkov emission in megavoltage clinical beams. The model computes emissions using generation probabilities, wavelength sampling, fiber photon capture, and fiber transport efficiency and incorporates the fiber's index of refraction, optical attenuation in the Cherenkov and visible spectrum and fiber dimensions. Detector component selection based on parameters such as silicon photomultiplier efficiency and optical coupling filters separates Cherenkov radiation from the dose-proportional scintillating emissions. The computation uses spectral and geometrical separation of Cherenkov radiation, however other filtering techniques can expand the model. Results: We compute Cherenkov generation per electron and fiber capture and transmission of those photons toward the detector with incident electron beam angle dependence. The model accounts for beam obliquity and nonperpendicular electron fiber impingement, which increases Cherenkov emission and trapping. The rotational angle around square fibers shows trapping efficiency variation from the normally incident minimum to a maximum at 45 degrees rotation. For rotation in the plane formed by the fiber axis and its surface normal, trapping efficiency increases with angle from the normal. The Cherenkov spectrum follows the theoretical curve from 300nm to 800nm, the wavelength range of interest defined by silicon photomultiplier and photodiode spectral efficiency. Conclusion: We are able to compute Cherenkov generation in realistic real time scintillating fiber dosimeter geometries. Design parameters
Various strategies for implementing quantum Monte Carlo (QMC) simulations efficiently for large chemical systems are presented. These include: (i) the introduction of an efficient algorithm to calculate the computationally expensive Slater matrices; this novel scheme is based on the highly localized character of atomic Gaussian basis functions (not the molecular orbitals, as usually done); (ii) the possibility of keeping the memory footprint minimal; (iii) the important enhancement of single-core performance when efficient optimization tools are used; and (iv) the definition of a universal, dynamic, fault-tolerant, and load-balanced framework adapted to all kinds of computational platforms (massively parallel machines, clusters, or distributed grids). These strategies have been implemented in the QMC=Chem code developed at Toulouse and are illustrated with numerical applications on small peptides of increasing sizes (158, 434, 1056, and 1731 electrons). Using 10,000-80,000 computing cores of the Curie machine (GENCI-TGCC-CEA, France), QMC=Chem has been shown to be capable of running at the petascale level, thus demonstrating that a large part of this machine's peak performance can be achieved. Implementation of large-scale QMC simulations for future exascale platforms with a comparable level of efficiency is expected to be feasible. (authors)
SU-E-J-60: Efficient Monte Carlo Dose Calculation On CPU-GPU Heterogeneous Systems
Purpose: It is well known that the performance of GPU-based Monte Carlo dose calculation implementations is bounded by memory bandwidth. One major cause of this bottleneck is the random memory writing pattern in dose deposition, which leads to several memory efficiency issues on the GPU, such as un-coalesced writes and atomic operations. We propose a new method to alleviate these issues on CPU-GPU heterogeneous systems, achieving an overall performance improvement for Monte Carlo dose calculation. Methods: Dose deposition accumulates dose into the voxels of a dose volume along the trajectories of radiation rays. Our idea is to partition this procedure into the following three steps, each fine-tuned for the CPU or GPU: (1) each GPU thread writes dose results with location information to a buffer in GPU memory, which achieves fully coalesced and atomic-free memory transactions; (2) the dose results in the buffer are transferred to CPU memory; (3) the dose volume is constructed from the dose buffer on the CPU. We organize the processing of all radiation rays into streams. Since the steps within a stream use different hardware resources (i.e., GPU, DMA, CPU), we can overlap the execution of these steps for different streams by pipelining. Results: We evaluated our method using a Monte Carlo Convolution Superposition (MCCS) program and tested our implementation for various clinical cases on a heterogeneous system containing an Intel i7 quad-core CPU and an NVIDIA TITAN GPU. Compared with a straightforward MCCS implementation on the same system (using both CPU and GPU for radiation ray tracing), our method achieved a 2-5X speedup without losing dose calculation accuracy. Conclusion: The results show that our new method improves the effective memory bandwidth and overall performance of MCCS on CPU-GPU systems. Our proposed method can also be applied to accelerate other Monte Carlo dose calculation approaches. This research was supported in part by NSF under Grants CCF
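The buffer-then-accumulate idea in steps (1) and (3) can be mimicked on the CPU with NumPy: deposition records are appended to a flat buffer (the coalesced, atomic-free write pattern), and a scatter-add then builds the dose volume. The array sizes and random records below are placeholders, not the MCCS data.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (32, 32, 32)      # toy dose volume
n_deposits = 100_000      # deposition events produced by ray tracing (toy data)

# Step (1) of the scheme: each thread appends (flat voxel index, dose) records
# to a buffer; on the GPU this is a fully coalesced, atomic-free write pattern.
voxel_idx = rng.integers(0, np.prod(shape), size=n_deposits)
dose_vals = rng.random(n_deposits)

# Steps (2)-(3): after the buffer is transferred, the CPU accumulates it into
# the dose volume. np.add.at is an unbuffered scatter-add, so repeated voxel
# indices are all accumulated (a plain fancy-indexed += would drop repeats).
dose = np.zeros(np.prod(shape))
np.add.at(dose, voxel_idx, dose_vals)
dose = dose.reshape(shape)
```

The scatter-add is the serial stand-in for the paper's step (3); in the real pipeline, steps for different streams overlap on GPU, DMA, and CPU.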
Efficient Monte Carlo and greedy heuristic for the inference of stochastic block models
Peixoto, Tiago P
2014-01-01
We present an efficient algorithm for the inference of stochastic block models in large networks. The algorithm can be used as an optimized Markov chain Monte Carlo (MCMC) method, with a fast mixing time and a much reduced susceptibility to getting trapped in metastable states, or as a greedy agglomerative heuristic with an almost linear O(N ln² N) complexity, where N is the number of nodes in the network, independent of the number of blocks being inferred. We show that the heuristic is capable of delivering results which are indistinguishable from those of the more exact and numerically expensive MCMC method on many artificial and empirical networks, despite being much faster. The method is entirely unbiased towards any specific mixing pattern, and in particular it does not favor assortative community structures.
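For intuition, the objective that block-label moves optimize can be written in a few lines for the plain (non-degree-corrected) SBM as a profile log-likelihood; the abstract's method uses a more sophisticated formulation, so this is only a toy illustration with a hypothetical planted network.

```python
import math, random

random.seed(1)

def sbm_log_likelihood(edges, labels, B):
    """Profile log-likelihood of a plain (non-degree-corrected) SBM: each
    block pair uses its empirical edge density as the maximum-likelihood
    estimate of its connection probability."""
    n_r = [0] * B
    for b in labels:
        n_r[b] += 1
    e = [[0] * B for _ in range(B)]
    for u, v in edges:
        r, s = sorted((labels[u], labels[v]))
        e[r][s] += 1
    ll = 0.0
    for r in range(B):
        for s in range(r, B):
            pairs = n_r[r] * (n_r[r] - 1) // 2 if r == s else n_r[r] * n_r[s]
            p = e[r][s] / pairs if pairs else 0.0
            if 0.0 < p < 1.0:
                ll += e[r][s] * math.log(p) + (pairs - e[r][s]) * math.log(1.0 - p)
    return ll

# Hypothetical planted partition: two blocks of 20 nodes, dense within,
# sparse between; the planted labels should score higher than a shuffle.
n = 40
planted = [0] * 20 + [1] * 20
edges = [(u, v) for u in range(n) for v in range(u + 1, n)
         if random.random() < (0.5 if planted[u] == planted[v] else 0.05)]
scrambled = random.sample(planted, len(planted))
```

An MCMC or greedy heuristic proposes label changes (or block merges) and keeps those that raise this score, under whichever model-selection criterion is in use.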
Efficiency determination of whole-body counter by Monte Carlo method, using a microcomputer
The purpose of this investigation was the development of an analytical microcomputer model to evaluate whole-body counter efficiency. The model is based on a modified Snyder model. A stretcher-type geometry was used, along with the Monte Carlo method and a Sinclair-type microcomputer. Experimental measurements were performed using two phantoms, one representing an adult and the other a 5-year-old child. The phantoms were made of acrylic, and 99mTc, 131I and 42K were the radioisotopes utilized. Results showed a close relationship between experimental and predicted data for energies ranging from 250 keV to 2 MeV, but some discrepancies were found at lower energies. (author)
Parent, L [Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Sutton (United Kingdom); Fielding, A L [School of Physical and Chemical Sciences, Queensland University of Technology, Brisbane (Australia); Dance, D R [Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, London (United Kingdom); Seco, J [Department of Radiation Oncology, Francis Burr Proton Therapy Center, Massachusetts General Hospital, Harvard Medical School, Boston (United States); Evans, P M [Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Sutton (United Kingdom)
2007-07-21
For EPID dosimetry, the calibration should ensure that all pixels have a similar response to a given irradiation. A calibration method (MC), using an analytical fit of a Monte Carlo simulated flood field EPID image to correct for the flood field image pixel intensity shape, was proposed. It was compared with the standard flood field calibration (FF), with the use of a water slab placed in the beam to flatten the flood field (WS), and with a multiple field calibration where the EPID was irradiated with a fixed 10 x 10 field at 16 different positions (MF). The EPID was used in its normal configuration (clinical setup) and with an additional 3 mm copper slab (modified setup). Beam asymmetry measured with a diode array was taken into account in the MC and WS methods. For both setups, the MC method provided pixel sensitivity values within 3% of those obtained with the MF and WS methods (mean difference <1%, standard deviation <2%). The difference in pixel sensitivity between the MC and FF methods was up to 12.2% (clinical setup) and 11.8% (modified setup). MC calibration provided images of open fields (5 x 5 to 20 x 20 cm²) and IMRT fields to within 3% of those obtained with WS and MF calibrations, while differences from images calibrated with the FF method for fields larger than 10 x 10 cm² were up to 8%. The MC, WS and MF methods all provided a major improvement on the FF method. Advantages and drawbacks of each method were reviewed.
Guerra, Marta L.
2009-02-23
We calculate the efficiency of a rejection-free dynamic Monte Carlo method for d-dimensional off-lattice homogeneous particles interacting through a repulsive power-law potential r^-p. Theoretically we find that the algorithmic efficiency in the limit of low temperatures and/or high densities is asymptotically proportional to ρ^((p+2)/2) T^(-d/2), with the particle density ρ and the temperature T. Dynamic Monte Carlo simulations are performed in one-, two-, and three-dimensional systems with different powers p, and the results agree with the theoretical predictions. © 2009 The American Physical Society.
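A rejection-free scheme draws the next event directly from the full set of rates and advances the clock by an exponential waiting time, so no proposals are wasted. A generic kinetic Monte Carlo (BKL, or n-fold way) step, not the authors' off-lattice algorithm, looks like:

```python
import bisect, math, random

random.seed(0)

def kmc_step(rates):
    """One rejection-free (BKL / n-fold-way) step: pick event i with
    probability rates[i]/R, where R = sum(rates), and advance the clock
    by an exponentially distributed waiting time with mean 1/R."""
    cum, total = [], 0.0
    for r in rates:
        total += r
        cum.append(total)
    i = bisect.bisect_left(cum, random.random() * total)
    dt = -math.log(random.random()) / total
    return i, dt

# Three toy event channels with rates 1 : 2 : 7; selection frequencies
# should match the rate ratios.
counts = [0, 0, 0]
for _ in range(20000):
    i, _ = kmc_step([1.0, 2.0, 7.0])
    counts[i] += 1
```

The efficiency analysis in the abstract concerns how the total rate R, and hence the simulated time per step, scales with density and temperature.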
Efficiency calibration of an HPGe X-ray detector for quantitative PIXE analysis
Mulware, Stephen J., E-mail: Stephenmulware@my.unt.edu; Baxley, Jacob D., E-mail: jacob.baxley351@topper.wku.edu; Rout, Bibhudutta, E-mail: bibhu@unt.edu; Reinert, Tilo, E-mail: tilo@unt.edu
2014-08-01
Particle Induced X-ray Emission (PIXE) is an analytical technique which provides reliable and accurate quantitative results without the need for standards, provided the efficiency of the X-ray detection system is calibrated. The ion beam microprobe of the Ion Beam Modification and Analysis Laboratory at the University of North Texas is equipped with a 100 mm² high-purity germanium X-ray detector (Canberra GUL0110 Ultra-LEGe). In order to calibrate the efficiency of the detector for standardless PIXE analysis, we measured the X-ray yield of a set of commercially available X-ray fluorescence standards. The set contained elements from low atomic number Z = 11 (sodium) upward, covering the X-ray energy region from 1.25 keV to about 20 keV, where the detector is most efficient. The effective charge was obtained from the proton backscattering yield of a calibrated particle detector.
The efficiency calibration for the β-γ coincidence system using 133Xe and 131mXe mixture
Background: As one of the sixteen radionuclide laboratories for the CTBT, the Beijing radionuclide laboratory studied the β-γ coincidence system for measuring the activities of xenon isotopes (131mXe, 133mXe, 133Xe and 135Xe). Efficiency calibration is an important and difficult technique in β-γ coincidence measurement. Purpose: This study was carried out to calibrate the efficiency of the β-γ coincidence system. Methods: The efficiency for the β (γ) particle can be calculated as the ratio of coincidence counts to single γ (β) counts, without knowing the sample activity. A 133Xe and 131mXe mixture, whose activity is not known, is used to calibrate the efficiency. Results: The efficiency of the β-γ coincidence system is obtained by this method. Conclusions: The method has been used to calibrate the efficiencies of the β-γ coincidence system in our laboratory. (authors)
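The ratio trick in the Methods section follows from N_β = A·t·ε_β, N_γ = A·t·ε_γ and N_c = A·t·ε_β·ε_γ for an ideal, background-free β-γ emitter, so the unknown activity cancels. A minimal sketch with hypothetical counts:

```python
def coincidence_efficiencies(n_beta, n_gamma, n_coinc, live_time):
    """For an ideal, background-free beta-gamma emitter:
       N_beta = A*t*eps_beta, N_gamma = A*t*eps_gamma,
       N_coinc = A*t*eps_beta*eps_gamma,
    so both efficiencies and the activity follow from count ratios alone."""
    eps_beta = n_coinc / n_gamma
    eps_gamma = n_coinc / n_beta
    activity = n_beta * n_gamma / (n_coinc * live_time)
    return eps_beta, eps_gamma, activity

# Hypothetical counts consistent with A = 100 Bq, eps_beta = 0.8,
# eps_gamma = 0.3 over 1000 s of live time:
eb, eg, act = coincidence_efficiencies(80_000, 30_000, 24_000, 1000.0)
```

Real xenon measurements add background subtraction, decay-scheme branching, and interference corrections on top of these idealized relations.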
Monte Carlo polarimetric efficiency simulations for a single monolithic CdTe thick matrix
Polarimetric measurements of hard X-rays and soft gamma-rays are still largely unexplored in astrophysical source observations. In order to improve the study of these sources through Compton polarimetry, detectors should have good polarimetric efficiency and also satisfy the demands of the typically exigent detection environments of such missions. Herein we present a simple concept for such systems: a single thick (~10 mm) monolithic matrix of CdTe with 32 x 32 pixels and an active area of about 40 cm². In order to predict the best configuration and dimensions of the detector pixels defined inside the CdTe monolithic piece, a Monte Carlo code based on GEANT4 library modules was developed. Efficiency and polarimetric modulation factor results, as a function of energy and detector thickness, are presented and discussed. A Q factor of the order of 0.3 has been found up to several hundreds of keV. (orig.)
Curado da Silva, R.M.; Hage-Ali, M.; Siffert, P. [Lab. PHASE, CNRS, Strasbourg (France); Caroli, E.; Stephen, J.B. [Inst. TESRE/CNR, Bologna (Italy)
2001-07-01
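The modulation (Q) factor quoted above is defined from the azimuthal distribution of Compton-scatter directions. A minimal sketch, with a synthetic modulation curve standing in for simulated CdTe events:

```python
import math

def modulation_factor(phi_counts):
    """Polarimetric modulation factor Q = (C_max - C_min) / (C_max + C_min)
    from an azimuthal-scatter histogram."""
    c_max, c_min = max(phi_counts), min(phi_counts)
    return (c_max - c_min) / (c_max + c_min)

# Synthetic 100%-polarized modulation curve C(phi) = A*(1 + q*cos(2*phi)),
# with q = 0.3, the order of magnitude reported in the abstract:
counts = [1000.0 * (1.0 + 0.3 * math.cos(2.0 * math.radians(phi)))
          for phi in range(0, 360, 10)]
q_measured = modulation_factor(counts)
```

In practice the curve is fitted rather than read off its extremes, and Q for a partially polarized source is scaled by the polarization fraction.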
Razavi, S.; Anderson, D.; Martin, P.; MacMillan, G.; Tolson, B.; Gabriel, C.; Zhang, B.
2012-12-01
Many sophisticated groundwater models tend to be computationally intensive as they rigorously represent detailed scientific knowledge about the groundwater systems. Calibration (model inversion), which is a vital step of groundwater model development, can require hundreds or thousands of model evaluations (runs) for different sets of parameters and as such demand prohibitively large computational time and resources. One common strategy to circumvent this computational burden is surrogate modelling which is concerned with developing and utilizing fast-to-run surrogates of the original computationally intensive models (also called fine models). Surrogates can be either based on statistical and data-driven models such as kriging and neural networks or simplified physically-based models with lower fidelity to the original system (also called coarse models). Fidelity in this context refers to the degree of the realism of a simulation model. This research initially investigates different strategies for developing lower-fidelity surrogates of a fine groundwater model and their combinations. These strategies include coarsening the fine model, relaxing the numerical convergence criteria, and simplifying the model geological conceptualisation. Trade-offs between model efficiency and fidelity (accuracy) are of special interest. A methodological framework is developed for coordinating the original fine model with its lower-fidelity surrogates with the objective of efficiently calibrating the parameters of the original model. This framework is capable of mapping the original model parameters to the corresponding surrogate model parameters and also mapping the surrogate model response for the given parameters to the original model response. This framework is general in that it can be used with different optimization and/or uncertainty analysis techniques available for groundwater model calibration and parameter/predictive uncertainty assessment. A real-world computationally
Stefan Haring; Ronald Hochreiter
2015-01-01
In this paper an improved Cuckoo Search algorithm is developed to allow for an efficient and robust calibration of the Heston option pricing model for American options. Calibration of stochastic volatility models like the Heston model is significantly harder than for classical option pricing models, as more parameters have to be estimated. The difficult task of calibrating one of these models to American put options data is the main objective of this paper. Numerical results are shown to substantiate th...
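For readers unfamiliar with the heuristic, a minimal (not the paper's improved) cuckoo search is sketched below on a toy two-parameter least-squares objective; calibrating Heston to American puts would substitute an option-pricing error for `obj`. The step sizes and population settings are arbitrary choices.

```python
import math, random

random.seed(0)

def levy_step(beta=1.5):
    """Heavy-tailed random step via Mantegna's algorithm, the generator
    normally used for the Levy flights in cuckoo search."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = random.gauss(0.0, sigma), random.gauss(0.0, 1.0)
    return u / abs(v) ** (1.0 / beta)

def cuckoo_search(f, dim, n_nests=15, iters=300, pa=0.25, scale=0.01):
    """Minimal cuckoo search minimizing f over the box [-2, 2]^dim."""
    rand_nest = lambda: [random.uniform(-2.0, 2.0) for _ in range(dim)]
    nests = [rand_nest() for _ in range(n_nests)]
    fit = [f(x) for x in nests]
    for _ in range(iters):
        # One cuckoo: Levy flight from a random nest; replace a random nest if better.
        new = [xi + scale * levy_step() for xi in random.choice(nests)]
        fn = f(new)
        j = random.randrange(n_nests)
        if fn < fit[j]:
            nests[j], fit[j] = new, fn
        # Abandon the worst pa fraction of nests and rebuild them at random.
        for k in sorted(range(n_nests), key=fit.__getitem__, reverse=True)[: int(pa * n_nests)]:
            nests[k] = rand_nest()
            fit[k] = f(nests[k])
    best = min(range(n_nests), key=fit.__getitem__)
    return nests[best], fit[best]

# Toy "calibration": recover (a, b) = (1.2, -0.7) by least squares.
obj = lambda x: (x[0] - 1.2) ** 2 + (x[1] + 0.7) ** 2
best_x, best_f = cuckoo_search(obj, 2)
```

The Levy flights give occasional long jumps that help escape local minima, which is the property the paper exploits for the multimodal Heston objective.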
Tang, Y.; Reed, P.; Wagener, T.
2006-05-01
This study provides a comprehensive assessment of state-of-the-art evolutionary multiobjective optimization (EMO) tools' relative effectiveness in calibrating hydrologic models. The relative computational efficiency, accuracy, and ease-of-use of the following EMO algorithms are tested: Epsilon Dominance Nondominated Sorted Genetic Algorithm-II (ɛ-NSGAII), the Multiobjective Shuffled Complex Evolution Metropolis algorithm (MOSCEM-UA), and the Strength Pareto Evolutionary Algorithm 2 (SPEA2). This study uses three test cases to compare the algorithms' performances: (1) a standardized test function suite from the computer science literature, (2) a benchmark hydrologic calibration test case for the Leaf River near Collins, Mississippi, and (3) a computationally intensive integrated surface-subsurface model application in the Shale Hills watershed in Pennsylvania. One challenge and contribution of this work is the development of a methodology for comprehensively comparing EMO algorithms that have different search operators and randomization techniques. Overall, SPEA2 attained competitive to superior results for most of the problems tested in this study. The primary strengths of the SPEA2 algorithm lie in its search reliability and its diversity preservation operator. The biggest challenge in maximizing the performance of SPEA2 lies in specifying an effective archive size without a priori knowledge of the Pareto set. In practice, this would require significant trial-and-error analysis, which is problematic for more complex, computationally intensive calibration applications. ɛ-NSGAII appears to be superior to MOSCEM-UA and competitive with SPEA2 for hydrologic model calibration. ɛ-NSGAII's primary strength lies in its ease-of-use due to its dynamic population sizing and archiving which lead to rapid convergence to very high quality solutions with minimal user input. MOSCEM-UA is best suited for hydrologic model calibration applications that have small parameter sets
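The ε-dominance mechanism that gives ɛ-NSGAII its bounded archive and dynamic population sizing can be stated in a couple of lines; this is one common additive variant for minimization, not the exact operator of any one implementation:

```python
def eps_dominates(a, b, eps):
    """True if objective vector `a` additively epsilon-dominates `b`
    (minimization): a is at least as good as b in every objective once b is
    relaxed by eps, and strictly better in at least one. With eps = 0 this
    reduces to ordinary Pareto dominance."""
    return all(ai <= bi + e for ai, bi, e in zip(a, b, eps)) and \
           any(ai < bi + e for ai, bi, e in zip(a, b, eps))
```

Archiving only ε-nondominated points caps the archive at one representative per ε-box, which is why ɛ-NSGAII needs less tuning than a fixed-size archive such as SPEA2's.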
Multi-level Monte Carlo Methods for Efficient Simulation of Coulomb Collisions
Ricketson, Lee
2013-10-01
We discuss the use of multi-level Monte Carlo (MLMC) schemes, originally introduced by Giles for financial applications, for the efficient simulation of Coulomb collisions in the Fokker-Planck limit. The scheme is based on a Langevin treatment of collisions, and reduces the computational cost of achieving an RMS error scaling as ɛ from O(ɛ⁻³), for standard Langevin methods and binary collision algorithms, to the theoretically optimal scaling O(ɛ⁻²) for the Milstein discretization, and to O(ɛ⁻²(log ɛ)²) with the simpler Euler-Maruyama discretization. In practice, this speeds up simulation by factors up to 100. We summarize standard MLMC schemes, describe some tricks for achieving the optimal scaling, present results from a test problem, and discuss the method's range of applicability. This work was performed under the auspices of the U.S. DOE by the University of California, Los Angeles, under grant DE-FG02-05ER25710, and by LLNL under contract DE-AC52-07NA27344.
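The telescoping structure of MLMC is easy to sketch for a toy Langevin-like SDE: coarse and fine Euler-Maruyama paths share Brownian increments, and the estimator sums the coarsest level plus level corrections. The OU process and the fixed per-level sample counts below are simplifications; a real MLMC chooses sample counts from estimated level variances.

```python
import math, random

random.seed(0)

def euler_pair(T, n_fine, x0=1.0, sigma=0.3):
    """Coupled coarse/fine Euler-Maruyama paths of dX = -X dt + sigma dW that
    share the same Brownian increments (the coupling MLMC relies on).
    The coarse path uses every two fine increments as one step."""
    dt = T / n_fine
    xf = xc = x0
    dw_c = 0.0
    for k in range(n_fine):
        dw = random.gauss(0.0, math.sqrt(dt))
        xf += -xf * dt + sigma * dw
        dw_c += dw
        if k % 2 == 1:
            xc += -xc * (2.0 * dt) + sigma * dw_c
            dw_c = 0.0
    return xf, xc

def mlmc_estimate(T=1.0, levels=5, n_samples=4000):
    """Telescoping MLMC sum E[P_L] = E[P_0] + sum over l of E[P_l - P_(l-1)].
    Per-level sample counts are fixed here for brevity."""
    est = 0.0
    for lvl in range(levels):
        n_fine = 2 ** (lvl + 1)
        acc = 0.0
        for _ in range(n_samples):
            xf, xc = euler_pair(T, n_fine)
            acc += xf if lvl == 0 else xf - xc
        est += acc / n_samples
    return est

x_hat = mlmc_estimate()  # exact mean of this OU process at T = 1 is exp(-1)
```

Because the coupled differences have small variance, most samples can be spent on cheap coarse levels, which is the source of the cost reduction quoted in the abstract.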
In this paper, we describe a method for the efficiency calibration of HPGe gamma-ray spectrometry of bulk environmental samples (tea, crops, water, and soil), a significant part of environmental radioactivity measurements. We discuss the full-energy-peak efficiency (FEPE) of three HPGe detectors; since the efficiency depends on the measurement configuration, it must be determined for each set-up employed. To take full advantage of gamma-ray spectrometry, the efficiency should be known at a set of energies covering a wide range: the wider the energy range, the larger the number of radionuclides whose concentrations can be determined. To measure the main natural gamma-ray emitters, the efficiency should be known at least from 46.54 keV (210Pb) to 1836 keV (88Y). Radioactive sources, which are necessary in order to calculate the activities of the different radionuclides contained in a sample, were prepared from two different standards: a first mixed standard, QC Y40, containing 210Pb, 241Am, 109Cd, and 57Co, and a second, QC Y48, containing 241Am, 109Cd, 57Co, 139Ce, 113Sn, 85Sr, 137Cs, 88Y, and 60Co. In this work, we study the efficiency calibration as a function of several parameters: gamma-ray energy from 46.54 keV (210Pb) to 1836 keV (88Y); three different detectors (A, B, and C); container geometry (point source, Marinelli beaker, and 1 L cylindrical bottle); height of standard soil samples in a 250 ml bottle; and density of the standard environmental samples. The environmental samples must be measured before the standard solution is added, because the same samples are used in order to account for self-absorption and composition effects, especially in the case of volume samples.
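FEPE curves of this kind are commonly parametrised as a low-order polynomial of ln ε versus ln E. A sketch with purely illustrative efficiency values (not data from detectors A, B or C):

```python
import numpy as np

# Hypothetical FEPE measurements (energy in keV vs. efficiency) spanning the
# 46.54 keV (210Pb) to 1836 keV (88Y) range discussed above; the numbers are
# illustrative only.
energy = np.array([46.5, 59.5, 88.0, 122.1, 165.9, 391.7, 514.0, 661.7,
                   898.0, 1173.2, 1332.5, 1836.1])
eff = np.array([0.021, 0.034, 0.052, 0.058, 0.055, 0.038, 0.032, 0.027,
                0.022, 0.018, 0.016, 0.013])

# Standard HPGe parametrisation: ln(efficiency) as a low-order polynomial
# in ln(energy), fitted by least squares.
coeffs = np.polyfit(np.log(energy), np.log(eff), 4)

def fepe(e_kev):
    """Interpolated full-energy-peak efficiency (valid inside 46.5-1836 keV)."""
    return float(np.exp(np.polyval(coeffs, np.log(e_kev))))
```

One such curve is fitted per detector and per counting geometry, which is why the parameter study above varies detector, container, fill height, and density.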
Rehman Shakeel U.
2009-01-01
A primary-interaction based Monte Carlo algorithm has been developed for determination of the total efficiency of cylindrical scintillation γ-ray detectors. This methodology has been implemented in a Matlab-based computer program, BPIMC. For point isotropic sources at axial locations with respect to the detector axis, excellent agreement has been found between the predictions of the BPIMC code and the corresponding results obtained by hybrid Monte Carlo as well as by experimental measurements over a wide range of γ-ray energies. For off-axis point sources, the comparison of the BPIMC predictions with the corresponding results obtained by direct calculations as well as by conventional Monte Carlo schemes shows good agreement, validating the proposed algorithm. Using the BPIMC program, the energy-dependent detector efficiency has been found to approach an asymptotic profile on increasing either the thickness or the diameter of the scintillator while keeping the other fixed. The variation of the energy-dependent total efficiency of a 3″x3″ NaI(Tl) scintillator with axial distance has been studied using the BPIMC code. About two orders of magnitude change in detector efficiency has been observed for zero to 50 cm variation in the axial distance. For small values of axial separation, a similarly large variation has also been observed in the total efficiency for 137Cs as well as for 60Co sources on increasing the axial offset from zero to 50 cm.
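The total efficiency computed here, the probability that an emitted photon interacts at least once in the crystal, can be estimated for an on-axis point source with a few lines of ray tracing. The attenuation coefficient and geometry below are assumptions for a 3″x3″ NaI(Tl) example, and escape of scattered photons is ignored; this is not the BPIMC algorithm itself.

```python
import math, random

def total_efficiency(mu, radius, height, dist, n=200_000, seed=1):
    """Monte Carlo total efficiency of a bare cylindrical detector for an
    isotropic point source on the symmetry axis, `dist` from the front face.
    mu is the total linear attenuation coefficient (1/cm); lengths in cm.
    Only the probability of at least one interaction is scored."""
    rng = random.Random(seed)
    score = 0.0
    for _ in range(n):
        cos_t = rng.uniform(-1.0, 1.0)      # isotropic emission direction
        if cos_t <= 0.0:
            continue                         # emitted away from the detector
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        tan_t = sin_t / cos_t
        if dist * tan_t >= radius:
            continue                         # misses the front face
        # Exit through the back face or the side wall, whichever comes first.
        z_exit = min(height, radius / tan_t - dist) if tan_t > 0.0 else height
        path = z_exit / cos_t
        score += 1.0 - math.exp(-mu * path)  # interaction probability
    return score / n

# 3"x3" NaI(Tl) (7.62 cm x 7.62 cm) at 662 keV (mu ~ 0.28 /cm, assumed),
# source 10 cm from the front face:
eff = total_efficiency(0.28, 3.81, 7.62, 10.0)
```

The strong fall-off of efficiency with axial distance reported in the abstract comes from the shrinking solid angle, which this estimator reproduces directly.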
Improved efficiency in Monte Carlo simulation for passive-scattering proton therapy
The aim of this work was to improve the computational efficiency of Monte Carlo simulations when tracking protons through a proton therapy treatment head. Two proton therapy facilities were considered, the Francis H Burr Proton Therapy Center (FHBPTC) at the Massachusetts General Hospital and the Crocker Lab eye treatment facility used by the University of California at San Francisco (UCSFETF). The computational efficiency was evaluated for phase space files scored at the exit of the treatment head, to determine optimal parameters that improve efficiency while maintaining accuracy in the dose calculation. For FHBPTC, particles were split by a factor of 8 upstream of the second scatterer and upstream of the aperture. The radius of the region for Russian roulette was set to 2.5 or 1.5 times the radius of the aperture, and a secondary particle production cut (PC) of 50 mm was applied. For UCSFETF, particles were split by a factor of 16 upstream of a water absorber column and upstream of the aperture. Here, the radius of the region for Russian roulette was set to 4 times the radius of the aperture and a PC of 0.05 mm was applied. In both setups, the cylindrical symmetry of the proton beam was exploited to position the split particles randomly spaced around the beam axis. When simulating a phase space for subsequent water phantom simulations, efficiency gains between a factor of 19.9 ± 0.1 and 52.21 ± 0.04 for the FHBPTC setups and of 57.3 ± 0.5 for the UCSFETF setups were obtained. For a phase space used as input for simulations in a patient geometry, the gain was a factor of 78.6 ± 7.5. Lateral-dose curves in water were within the accepted clinical tolerance of 2%, with statistical uncertainties of 0.5% for the two facilities. For the patient geometry, considering the 2% and 2 mm criteria, 98.4% of the voxels showed a gamma index lower than unity. An analysis of the dose distribution resulted in systematic deviations below 0.88% for 20
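Particle splitting and Russian roulette both preserve the expected score by compensating with statistical weights. A generic sketch of the two operations (not the treatment-head implementation):

```python
import random

random.seed(3)

def split(particle, factor):
    """Split into `factor` copies, each with 1/factor of the weight, so the
    expected weight transported forward is unchanged."""
    w = particle["weight"] / factor
    return [dict(particle, weight=w) for _ in range(factor)]

def russian_roulette(particle, survival_prob):
    """Kill with probability 1 - survival_prob; survivors get their weight
    boosted by 1/survival_prob, keeping the estimator unbiased while
    culling unimportant histories."""
    if random.random() < survival_prob:
        return dict(particle, weight=particle["weight"] / survival_prob)
    return None

# Unbiasedness check: the average surviving weight equals the input weight.
survived = [russian_roulette({"weight": 1.0}, 0.25) for _ in range(50_000)]
mean_weight = sum(p["weight"] for p in survived if p) / 50_000
```

In the paper's setup, splitting multiplies particles headed toward the aperture while roulette culls those outside the chosen radius, which is where the quoted efficiency gains come from.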
Bendall; Skinner
1998-10-01
To provide the most efficient conditions for spin decoupling with the least RF power, master calibration curves are provided for the maximum centerband amplitude, and the minimum amplitude of the largest cycling sideband, resulting from STUD+ adiabatic decoupling applied during a single free induction decay. The principal curve is defined as a function of the four most critical experimental input parameters: the maximum amplitude of the RF field, RFmax; the length of the sech/tanh pulse, Tp; the extent of the frequency sweep, bwdth; and the coupling constant, Jo. Less critical parameters, the effective (or actual) decoupled bandwidth, bweff, and the sech/tanh truncation factor, beta, which become more important as bwdth is decreased, are calibrated in separate curves. The relative importance of nine additional factors in determining optimal decoupling performance in a single transient is considered. Specific parameters for efficient adiabatic decoupling can be determined via a set of four equations, which will be most useful for 13C decoupling, covering the range of one-bond 13C-1H coupling constants from 125 to 225 Hz and decoupled bandwidths of 7 to 100 kHz, a bandwidth of 100 kHz being the requirement for a 2 GHz spectrometer. The four equations are derived from a recent vector model of adiabatic decoupling, and experiment, supported by computer simulations. The vector model predicts an inverse linear relation between the centerband and maximum sideband amplitudes, and a simple parabolic relationship between the maximum sideband amplitude and the product JoTp. The ratio bwdth/(RFmax)² can be viewed as a characteristic time scale, tauc, affecting sideband levels, with tauc ≈ Tp giving the most efficient STUD+ decoupling, as suggested by the adiabatic condition. Functional relationships between bwdth and the less critical parameters, bweff and beta, for efficient decoupling can be derived from Bloch-equation calculations of the inversion profile
Monte Carlo simulation of efficient data acquisition for an entire-body PET scanner
Isnaini, Ismet; Obi, Takashi [Tokyo Institute of Technology, 4259 Nagatsuta-cho, Midori-ku, Yokohama 226-8503 (Japan); Yoshida, Eiji, E-mail: rush@nirs.go.jp [National Institute of Radiological Sciences, 4-9-1 Inage-ku, Chiba 263-8555 (Japan); Yamaya, Taiga [National Institute of Radiological Sciences, 4-9-1 Inage-ku, Chiba 263-8555 (Japan)
2014-07-01
Conventional PET scanners image the whole body using many bed positions. An entire-body PET scanner with an extended axial FOV, which can trace whole-body uptake simultaneously and with dynamically improved sensitivity, has therefore been desired. Such a scanner must process a large amount of data effectively; as a result, it has high dead time in the multiplex detector grouping process, and it has many oblique lines of response. In this work, we study efficient data acquisition for the entire-body PET scanner using Monte Carlo simulation. The simulated entire-body PET scanner, based on depth-of-interaction detectors, has a 2016-mm axial field-of-view (FOV) and an 80-cm ring diameter. Since the entire-body PET scanner suffers higher single-data loss at the grouping circuits than a conventional PET scanner, its NECR decreases. However, single-data loss is mitigated by separating the axially arranged detectors into multiple parts. Our choice of 3 groups of axially arranged detectors was shown to increase the peak NECR by 41%. An appropriate choice of maximum ring difference (MRD) also maintains high sensitivity and high peak NECR while reducing the data size. The extremely oblique lines of response of the large axial FOV do not contribute much to the performance of the scanner: the total sensitivity with full MRD was only 15% higher than with about half the MRD, and the peak NECR saturated at about half the MRD. The entire-body PET scanner promises to provide a large axial FOV and sufficient performance without using the full data.
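The peak NECR figure of merit used above combines true (T), scattered (S) and random (R) coincidence rates. A minimal sketch with an illustrative activity sweep (all rate models hypothetical):

```python
import math

def necr(trues, scatters, randoms, k=2.0):
    """Noise-equivalent count rate, NECR = T^2 / (T + S + k*R); k = 2 when
    randoms are estimated online with the delayed-window method, 1 when a
    noiseless randoms estimate is subtracted."""
    return trues ** 2 / (trues + scatters + k * randoms)

# Toy activity sweep: trues and scatters scale linearly (with a mild
# dead-time roll-off), randoms quadratically, so NECR rises and then rolls
# over; the maximum is the 'peak NECR'. All coefficients are illustrative.
rates = [necr(100 * a * math.exp(-0.02 * a),
              40 * a * math.exp(-0.02 * a),
              5 * a * a)
         for a in range(1, 60)]
```

Because randoms and dead-time losses grow faster than trues with activity, design changes that cut single-data loss, such as the 3-group detector arrangement above, raise the achievable peak.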
Lepage, Thibaut; Képès, François; Junier, Ivan
2015-07-01
Supercoiled DNA polymer models for which the torsional energy depends on the total twist of molecules (Tw) are a priori well suited for thermodynamic analysis of long molecules. So far, nevertheless, the exact determination of Tw in these models has been based on a computation of the writhe of the molecules (Wr) by exploiting the conservation of the linking number, Lk=Tw+Wr, which reflects topological constraints coming from the helical nature of DNA. Because Wr is equal to the number of times the main axis of a DNA molecule winds around itself, current Monte Carlo algorithms have a quadratic time complexity, O(L²), with respect to the contour length (L) of the molecules. Here, we present an efficient method to compute Tw exactly, leading in principle to algorithms with a linear complexity, which in practice is O(L^1.2). Specifically, we use a discrete wormlike chain that includes the explicit double-helix structure of DNA and where the linking number is conserved by continuously preventing the generation of twist between any two consecutive cylinders of the discretized chain. As an application, we show that long (up to 21 kbp) linear molecules stretched by mechanical forces akin to magnetic tweezers contain, in the buckling regime, multiple and branched plectonemes that often coexist with curls and helices, and whose length and number are in good agreement with experiments. By attaching the ends of the molecules to a reservoir of twists with which these can exchange helix turns, we also show how to compute the torques in these models. As an example, we report values that are in good agreement with experiments and that concern the longest molecules that have been studied so far (16 kbp). PMID:26153710
Garnica-Garza, H M [Centro de Investigacion y de Estudios Avanzados del Instituto Politecnico Nacional Unidad Monterrey, VIa del Conocimiento 201 Parque de Investigacion e Innovacion Tecnologica, Apodaca NL C.P. 66600 (Mexico)], E-mail: hgarnica@cinvestav.mx
2009-03-21
Monte Carlo simulation was employed to calculate the response of TLD-100 chips under irradiation conditions such as those found during accelerated partial breast irradiation with the MammoSite radiation therapy system. The absorbed dose versus radius in the last 0.5 cm of the treated volume was also calculated, employing a resolution of 20 μm, and a function that fits the observed data was determined. Several clinically relevant irradiation conditions were simulated for different combinations of balloon size, balloon-to-surface distance and contents of the contrast solution used to fill the balloon. The thermoluminescent dosemeter (TLD) cross-calibration factors were derived assuming that the calibration of the dosemeters was carried out using a Cobalt 60 beam, and in such a way that they provide a set of parameters that reproduce the function that describes the behavior of the absorbed dose versus radius curve. Such factors may also prove to be useful for those standardized laboratories that provide postal dosimetry services.
Medeiros, M.P.C.; Rebello, W.F., E-mail: eng.cavaliere@ime.eb.br, E-mail: rebello@ime.eb.br [Instituto Militar de Engenharia (IME), Rio de Janeiro, RJ (Brazil). Secao de Engenharia Nuclear; Lopes, J.M.; Silva, A.X., E-mail: marqueslopez@yahoo.com.br, E-mail: ademir@nuclear.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear
2015-07-01
High-purity germanium detectors (HPGe) are essential tools for gamma-ray spectrometry because of their excellent energy resolution. The efficiency of such detectors, quoted in the list of specifications by the manufacturer, frequently refers to the relative full-energy peak efficiency, related to the absolute full-energy peak efficiency of a 7.6 cm x 7.6 cm (diameter x height) NaI(Tl) crystal, based on the 1.33 MeV peak of a {sup 60}Co source positioned 25 cm from the detector. In this study, we used the MCNPX code to simulate an HPGe detector (Canberra GC3020), from the Real-Time Neutrongraphy Laboratory of UFRJ, to survey the spectrum of a {sup 60}Co source located 25 cm from the detector in order to calculate and confirm the efficiency declared by the manufacturer. Agreement between experimental and simulated data was achieved. The model under development will be used to calculate the detector calibration curve and compare it with the one obtained from the Genie2000™ software, also serving as a reference for future studies. (author)
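The manufacturer's relative efficiency described above can be recomputed from a simulated {sup 60}Co run in a few lines (a sketch; the 1.2e-3 reference value is the conventional absolute full-energy-peak efficiency of a 7.6 cm x 7.6 cm NaI(Tl) crystal at 25 cm for the 1.33 MeV line, and the function name is an assumption):

```python
# Conventional absolute full-energy-peak efficiency of a 7.6 cm x 7.6 cm
# NaI(Tl) crystal for the 1.33 MeV 60Co line at 25 cm source distance.
NAI_REFERENCE_EFF = 1.2e-3

def relative_efficiency(peak_counts, emitted_gammas):
    """Relative FEP efficiency in percent, from a simulated or measured
    60Co run (assumed helper for this sketch)."""
    absolute_eff = peak_counts / emitted_gammas
    return 100.0 * absolute_eff / NAI_REFERENCE_EFF
```

For example, 360 full-energy counts per 10^6 emitted 1.33 MeV photons corresponds to a 30% relative efficiency, consistent with a GC3020-class detector.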
CdTe detector efficiency calibration using thick targets of pure and stable compounds
Quantitative PIXE measurements require perfectly calibrated set-ups. Cooled CdTe detectors have good efficiency at energies above those covered by Si(Li) detectors and open up the possibility of studying K X-ray lines instead of L X-ray lines for medium and even heavy elements, which is an important advantage in various cases when only limited-resolution systems are available in the low-energy range. In this work we present and discuss spectra from a CdTe semiconductor detector covering the energy region from Cu (Kα1 = 8.047 keV) to U (Kα1 = 98.439 keV). Pure thick samples were irradiated with proton beams at the ITN 3.0 MV Tandetron accelerator in the High Resolution High Energy PIXE set-up. Results and the application to the study of a Portuguese Ossa Morena region Dark Stone sample are presented in this work.
Lauren S. Wakschlag
2009-06-01
Maternal smoking during pregnancy is a major public health problem that has been associated with numerous short- and long-term adverse health outcomes in offspring. However, characterizing smoking exposure during pregnancy precisely has been rather difficult: self-reported measures of smoking often suffer from recall bias, deliberate misreporting, and selective non-disclosure, while single bioassay measures of nicotine metabolites only reflect recent smoking history and cannot capture the fluctuating and complex patterns of varying exposure of the fetus. Recently, Dukic et al. [1] have proposed a statistical method for combining information from both sources in order to increase the precision of the exposure measurement and the power to detect more subtle effects of smoking. In this paper, we extend the Dukic et al. [1] method to incorporate individual variation of the metabolic parameters (such as clearance rates) into the calibration model of smoking exposure during pregnancy. We apply the new method to the Family Health and Development Project (FHDP), a small convenience sample of 96 predominantly working-class white pregnant women oversampled for smoking. We find that, on average, misreporters smoke 7.5 more cigarettes than they report, with about one third underreporting by 1.5, one third by about 6.5, and one third by 8.5 cigarettes. Partly due to the limited demographic heterogeneity in the FHDP sample, the results are similar to those obtained by the deterministic calibration model, whose adjustments were slightly lower (by 0.5 cigarettes on average). The new results are also, as expected, less sensitive to assumed values of cotinine half-life.
Precise Efficiency Calibration of an HPGe Detector Using the Decay of 180m Hf
Superallowed 0+ → 0+ nuclear beta decays provide both the best test of the Conserved Vector Current (CVC) hypothesis and, together with the muon lifetime, the most accurate value for the up-down quark-mixing matrix element, Vud, of the Cabibbo-Kobayashi-Maskawa (CKM) matrix. This matrix should be unitary, and experimental verification of that expectation constitutes an important test of the Standard Model. Aiming for a definitive test of CKM unitarity, we have mounted a program at Texas A&M University to establish (or eliminate) the discrepancy with unitarity. One correction accounts for isospin symmetry breaking, and its accuracy can be tested by measurements of the ft-values of Tz = -1 superallowed emitters (e.g. 22 Mg and 30 S) to a precision of about ±0.1%. A requirement for these measurements is a detector whose detection efficiency is known to the same precision. However, calibration of a detector's efficiency to this level of precision is extremely challenging, since very few sources provide γ-rays whose intensities (relative or absolute) are known to better than ±0.5%. The isomer 180m Hf (t1/2 = 5.5 h) provides a very precise γ-ray calibration source in the 90 to 330 keV energy range. The decay of 180m Hf to the 180 Hf ground state includes a cascade of three consecutive E2 γ-ray transitions of energies 93.3, 215.2 and 332.3 keV with no other feeding of the intermediate states. This provides a uniquely well-known calibration standard, since the relative γ-ray intensities emitted depend only on the calculated E2 conversion coefficients. The 180m Hf isomer was produced by irradiation of a 0.91 mg sample of HfO2, isotopically enriched to 87% in 179 Hf, at the TRIGA reactor in the TAMU Nuclear Science Center. In order to minimise the self-absorption of γ-rays in Hf we required a thin source, which was prepared following a procedure described by Kellog and Norman. The activated HfO2 sample was dissolved in 0.50 ml of hot 48% HF acid to
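The calibration principle stated above — in a stretched cascade with no side feeding, every transition carries the same total intensity, so the relative γ-ray intensities depend only on the conversion coefficients — can be sketched as (function name assumed):

```python
def relative_gamma_intensities(alphas):
    """Relative gamma-ray intensities of a stretched cascade with no side
    feeding: each transition carries the same total (gamma plus conversion
    electron) intensity, so I_gamma = 1 / (1 + alpha_total) per transition.
    `alphas` are the calculated total conversion coefficients (here E2)."""
    return [1.0 / (1.0 + a) for a in alphas]
```

Feeding in the calculated E2 coefficients for the 93.3, 215.2 and 332.3 keV transitions would then give the intensity ratios used to fix the efficiency curve.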
Khuat, Quang Huy; Kim, Song Hyun; Kim, Do Hyun; Shin, Chang Ho [Hanyang University, Seoul (Korea, Republic of)
2015-05-15
The Consistent Adjoint Driven Importance Sampling (CADIS) method is implemented in the SCALE code system. In the CADIS method, the adjoint transport equation has to be solved to determine deterministic importance functions. When using the CADIS method, it was noted that the biased adjoint flux estimated by deterministic methods can affect the calculation efficiency and error. The biases of the adjoint function are caused by the methodology, the calculation strategy, the tolerance of the results calculated by the deterministic method, and inaccurate multi-group cross-section libraries. In this paper, a study analyzing the influence of biased adjoint functions on Monte Carlo computational efficiency is pursued. A method to estimate the calculation efficiency was proposed for applying the biased adjoint fluxes in the CADIS approach. For a benchmark problem, the responses and FOMs were evaluated with the SCALE code system as the adjoint fluxes were applied. The results show that biased adjoint fluxes significantly affect the calculation efficiencies.
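The FOM used in the benchmark comparison is the standard Monte Carlo figure of merit; a minimal sketch (names assumed):

```python
def figure_of_merit(relative_error, cpu_time):
    """Monte Carlo figure of merit, FOM = 1 / (R^2 * T).  For a converged
    tally the FOM is roughly constant in time, so a higher value means the
    variance-reduction scheme (here, CADIS adjoint biasing) works better."""
    return 1.0 / (relative_error ** 2 * cpu_time)
```

A tally reaching R = 1% in 100 minutes thus has FOM = 100; halving R at fixed time would quadruple it.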
Kachulis, Chris; /Yale U. /SLAC
2011-06-22
The Anti Coincidence Detector (ACD) on the Fermi Gamma Ray Space Telescope provides charged particle rejection for the Large Area Telescope (LAT). We use two calibrations employed by the ACD to conduct three studies on its performance. First, we examine the trending of the calibrations to search for damage and to find a timescale over which the calibrations can be considered reliable. Second, we calculated the number of photoelectrons counted by a PMT on the ACD from a normal proton. Third, we calculated the veto efficiencies of the ACD for two different veto settings. The trends of the calibrations exhibited no signs of damage, and indicated timescales of reliability for the calibrations of one to two years. The number of photoelectrons calculated ranged from 5 to 25; uncertainty in the energy spectrum of the charged particles gave these values large errors of around 60 percent. Finally, the veto efficiencies were found to be very high at both veto values, both for charged particles and for the lower-energy backsplash spectrum. The ACD is a detector system built around the silicon strip tracker on the LAT. Its purpose is to provide charged particle rejection for the LAT. To do this, the ACD must be calibrated correctly in flight, and must be able to efficiently veto charged particle events while minimizing false vetoes due to 'backsplash' from photons in the calorimeter. There are eleven calibrations used by the ACD. In this paper, we discuss the use of two of these calibrations to perform three studies on the performance of the ACD. The first study examines trending of the calibrations to check for possible hardware degradation. The second study uses the calibrations to explore the efficiency of an on-board hardware veto. The third study uses the calibrations to calculate the number of photoelectrons seen by each PMT when a
The Adjoint Monte Carlo - a viable option for efficient radiotherapy treatment planning
In cancer therapy using collimated beams of photons, the radiation oncologist must determine a set of beams that delivers the required dose to each point in the tumor and minimizes the risk of damage to the healthy tissue and vital organs. Currently, the oncologist determines these beams iteratively, by using a sequence of dose calculations using approximate numerical methods. In this paper, a more accurate and potentially faster approach, based on the Adjoint Monte Carlo method, is presented (authors)
Nonequilibrium candidate Monte Carlo: A new tool for efficient equilibrium simulation
Nilmeier, Jerome P.; Crooks, Gavin E.; Minh, David D. L.; Chodera, John D.
2011-11-08
Metropolis Monte Carlo simulation is a powerful tool for studying the equilibrium properties of matter. In complex condensed-phase systems, however, it is difficult to design Monte Carlo moves with high acceptance probabilities that also rapidly sample uncorrelated configurations. Here, we introduce a new class of moves based on nonequilibrium dynamics: candidate configurations are generated through a finite-time process in which a system is actively driven out of equilibrium, and accepted with criteria that preserve the equilibrium distribution. The acceptance rule is similar to the Metropolis acceptance probability, but related to the nonequilibrium work rather than the instantaneous energy difference. Our method is applicable to sampling from either a single thermodynamic state or a mixture of thermodynamic states, and allows both coordinates and thermodynamic parameters to be driven in nonequilibrium proposals. While generating finite-time switching trajectories incurs an additional cost, driving some degrees of freedom while allowing others to evolve naturally can lead to large enhancements in acceptance probabilities, greatly reducing structural correlation times. Using nonequilibrium driven processes vastly expands the repertoire of useful Monte Carlo proposals in simulations of dense solvated systems.
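The acceptance rule described above can be caricatured in a few lines (a deliberately simplified sketch: the full NCMC criterion also involves the conditional path probabilities of the forward and reverse driving protocols, which are omitted here):

```python
import math
import random

def ncmc_accept(beta, protocol_work, rng=random):
    """Metropolis-like acceptance for a nonequilibrium candidate move:
    the instantaneous energy difference is replaced by the work
    accumulated while actively driving the system (simplified sketch)."""
    if protocol_work <= 0.0:
        return True                      # downhill in work: always accept
    return rng.random() < math.exp(-beta * protocol_work)
```

Moves whose driving protocol dissipates little work are accepted often, which is why driving only a few degrees of freedom while relaxing the rest can keep acceptance high.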
Thermal inertia and energy efficiency – Parametric simulation assessment on a calibrated case study
Highlights: • We perform a parametric simulation study on a calibrated building energy model. • We introduce adaptive shadings and night free cooling in simulations. • We analyze the effect of thermal capacity on the parametric simulations results. • We recognize that cooling demand and savings scales linearly with thermal capacity. • We assess the advantage of medium-heavy over medium and light configurations. - Abstract: The reduction of energy consumption for heating and cooling services in the existing building stock is a key challenge for global sustainability today, and the retrofit of building envelopes is one of the main issues. Most existing building envelopes have low levels of insulation, high thermal losses due to thermal bridges and cracks, absence of appropriate solar control, etc. Further, in building refurbishment, the importance of a system-level approach is often undervalued in favour of simplistic “off the shelf” efficient solutions, focused on the reduction of thermal transmittance and on the enhancement of solar control capabilities. In many cases, the importance of the dynamic thermal properties is neglected or underestimated, and the effective thermal capacity is not properly considered as one of the design parameters. The research presented aims to critically assess the influence of the dynamic thermal properties of the building fabric (roof, walls and floors) on sensible heating and cooling energy demand for a case study. The case study chosen is an existing office building which has been retrofitted in recent years and whose energy model has been calibrated according to the data collected in the monitoring process. The research illustrates the variations of the sensible thermal energy demand of the building in different retrofit scenarios, and relates them to the variations of the dynamic thermal properties of the construction components. A parametric simulation study has been performed, encompassing the use of
Farr, Benjamin; Luijten, Erik
2013-01-01
We introduce a new Markov-chain Monte Carlo (MCMC) approach designed for efficient sampling of highly correlated and multimodal posteriors. Parallel tempering, though effective, is a costly technique for sampling such posteriors. Our approach minimizes the use of parallel tempering, employing it only for a short time to tune a new jump proposal. For complex posteriors we find efficiency improvements up to a factor of ~13. The estimation of parameters of gravitational-wave signals measured by ground-based detectors is currently done through Bayesian inference, with MCMC being one of the leading sampling methods. Posteriors for these signals are typically multimodal with strong non-linear correlations, making sampling difficult. As we enter the advanced-detector era, improved sensitivities and wider bandwidths will drastically increase the computational cost of analyses, demanding more efficient search algorithms to meet these challenges.
The computer program calculates the average action per plaquette for SU(6)/Z6 lattice gauge theory. By considering quantum field theory on a space-time lattice, the ultraviolet divergences of the theory are regulated through the finite lattice spacing. The continuum theory results can be obtained by a renormalization group procedure. Making use of the FPS Mathematics Library (MATHLIB), we are able to generate an efficient code for the Monte Carlo algorithm for lattice gauge theory calculations which compares favourably with the performance of the CDC 7600. (orig.)
Mesradi, M. [Institut Pluridisciplinaire Hubert-Curien, UMR 7178 CNRS/IN2P3 et Universite Louis Pasteur, 23 rue du Loess, BP 28, F-67037 Strasbourg Cedex 2 (France); Elanique, A. [Departement de Physique, FS/BP 8106, Universite Ibn Zohr, Agadir, Maroc (Morocco); Nourreddine, A. [Institut Pluridisciplinaire Hubert-Curien, UMR 7178 CNRS/IN2P3 et Universite Louis Pasteur, 23 rue du Loess, BP 28, F-67037 Strasbourg Cedex 2 (France)], E-mail: abdelmjid.nourreddine@ires.in2p3.fr; Pape, A.; Raiser, D.; Sellam, A. [Institut Pluridisciplinaire Hubert-Curien, UMR 7178 CNRS/IN2P3 et Universite Louis Pasteur, 23 rue du Loess, BP 28, F-67037 Strasbourg Cedex 2 (France)
2008-06-15
This work relates to the study and characterization of the response function of an X-ray spectrometry system. The intrinsic efficiency of a Si(Li) detector has been simulated with the Monte Carlo codes MCNP and GEANT4 in the photon energy range of 2.6-59.5 keV. A radiograph of the detector inside its cryostat proved necessary to establish the correct internal dimensions; with these, agreement within 10% between the simulations and experimental measurements with several point-like sources and PIXE results was obtained.
C. E. M. Sefton
1998-01-01
A survey and resurvey of 77 headwater streams in Wales provides an opportunity for assessing changes in streamwater chemistry in the region. The Model of Acidification of Groundwater In Catchments (MAGIC) has been calibrated to the second of two surveys, taken in 1994-1995, using a Monte Carlo methodology. The first survey, 1983-1984, provides a basis for model validation. The model simulates a significant decline of water quality across the region since industrialisation. Agreed reductions in sulphur (S) emissions in Europe in accordance with the Second S Protocol will result in a 49% reduction of S deposition across Wales from 1996 to 2010. In response to these reductions, the proportion of streams in the region with mean annual acid neutralising capacity (ANC) > 0 is predicted to increase from 81% in 1995 to 90% by 2030. The greatest recovery between 1984 and 1995 and into the future is at those streams with low ANC. In order to ensure that streams in the most heavily acidified areas of Wales recover to ANC zero by 2030, a reduction of S deposition of 80-85% will be required.
Tuckwell, W.; Bezak, E.; Yeoh, E.; Marcu, L.
2008-09-01
A Monte Carlo tumour model has been developed to simulate tumour cell propagation for head and neck squamous cell carcinoma. The model aims to eventually provide a radiobiological tool for radiation oncology clinicians to plan patient treatment schedules based on properties of the individual tumour. The inclusion of an oxygen distribution amongst the tumour cells enables the model to incorporate hypoxia and other associated parameters which affect tumour growth. The model algorithm was written in Fortran 95 using object-oriented features, with Monte Carlo methods employed to randomly assign many of the cell parameters from probability distributions. Hypoxia has been implemented through random assignment of partial oxygen pressure values to individual cells during tumour growth, based on in vivo Eppendorf probe experimental data. The accumulation of up to 10 million virtual tumour cells in 15 min of computer running time has been achieved. The stem cell percentage and the degree of hypoxia are the parameters which most influence the final tumour growth rate. For a tumour with a doubling time of 40 days, the final stem cell percentage is approximately 1% of the total cell population. The effect of hypoxia on the tumour growth rate is significant. Using a hypoxia-induced cell quiescence limit which affects 50% of cells with oxygen levels less than 1 mm Hg, the tumour doubling time increases to over 200 days and the time of growth to a clinically detectable tumour (10^9 cells) increases from 3 to 8 years. A biologically plausible Monte Carlo model of hypoxic head and neck squamous cell carcinoma tumour growth has been developed for real-time assessment of the effects of multiple biological parameters which impact upon the response of the individual patient to fractionated radiotherapy.
Efficient Monte Carlo simulations using a shuffled nested Weyl sequence random number generator.
Tretiakov, K V; Wojciechowski, K W
1999-12-01
The pseudorandom number generator proposed recently by Holian et al. [B. L. Holian, O. E. Percus, T. T. Warnock, and P. A. Whitlock, Phys. Rev. E 50, 1607 (1994)] is tested via Monte Carlo computation of the free energy difference between the defectless hcp and fcc hard sphere crystals by the Frenkel-Ladd method [D. Frenkel and A. J. C. Ladd, J. Chem. Phys. 81, 3188 (1984)]. It is shown that this generator, which is fast and convenient for parallel computing, gives results in good agreement with results obtained by other generators. An estimate of high accuracy is obtained for the hcp-fcc free energy difference near melting. PMID:11970727
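The generator under test can be sketched as follows (the nested Weyl sequence only; the shuffled variant of Holian et al. adds a second sequence to permute the outputs and suppress serial correlations, which is omitted here):

```python
import math

def nested_weyl(n, alpha):
    """Nested Weyl sequence x_n = frac(n * frac(n * alpha)) for an
    irrational seed alpha; the values are equidistributed on [0, 1)."""
    return math.fmod(n * math.fmod(n * alpha, 1.0), 1.0)

alpha = math.sqrt(2.0) - 1.0            # any irrational seed in (0, 1)
stream = [nested_weyl(n, alpha) for n in range(1, 10001)]
```

Because each term depends only on its index n, the sequence is trivially parallelizable: every processor can evaluate its own slice of indices independently, which is the property exploited in massively parallel simulations.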
Efficient data management techniques implemented in the Karlsruhe Monte Carlo code KAMCCO
The Karlsruhe Monte Carlo Code KAMCCO is a forward neutron transport code with an eigenfunction and a fixed source option, including time-dependence. A continuous energy model is combined with a detailed representation of neutron cross sections, based on linear interpolation, Breit-Wigner resonances and probability tables. All input is processed into densely packed, dynamically addressed parameter fields and networks of pointers (addresses). Estimation routines are decoupled from random walk and analyze a storage region with sample records. This technique leads to fast execution with moderate storage requirements and without any I/O-operations except in the input and output stages. 7 references. (U.S.)
Efficiency Calibration of LaBr3(Ce) γ Spectroscopy in Analyzing Radionuclides in Reactor Loop Water
CHEN Xi-lin; QIN Guo-xiu; GUO Xiao-qing; CHEN Yong-yong; MENG Jun
2013-01-01
Monitoring the occurrence and radioactivity concentration of fission products in nuclear reactor loop water is important for evaluating the safe running of the reactor, preventing accidents, and protecting working personnel. A study on the efficiency calibration of a LaBr3(Ce) detector experimental
A self-absorption correction function used for cylindrical samples with different densities in gamma-ray spectrum analysis is reported. The effects of the gamma-ray energy and sample density on the self-absorption are unified in the function model, providing a shortcut for detection efficiency calibration in gamma-ray spectrum analysis.
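A common closed form for such a correction, shown here only as an illustrative slab-geometry approximation (not necessarily the paper's fitted function), scales the efficiency by C = μt/(1 − e^(−μt)) with μ = μ_m·ρ, which captures both the energy dependence (through μ_m) and the density dependence:

```python
import math

def self_absorption_correction(mu_mass, density, thickness):
    """Slab-geometry self-absorption correction factor,
    C = mu*t / (1 - exp(-mu*t)) with mu = mu_mass * density.
    mu_mass: mass attenuation coefficient (cm^2/g), density: g/cm^3,
    thickness: cm.  C -> 1 for an optically thin sample and grows as
    the sample becomes more self-absorbing."""
    mu_t = mu_mass * density * thickness
    if mu_t < 1e-9:                      # optically thin: no correction
        return 1.0
    return mu_t / (1.0 - math.exp(-mu_t))
```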
Kalinnikov, V G; Solnyshkin, A A; Sereeter, Z; Lebedev, N A; Chumin, V G; Ibrakhin, Ya S
2002-01-01
A specific method for efficiency calibration of a mini-orange type beta-spectrometer by means of the continuous beta^{-}-spectrum of ^{90}Sr and the conversion electron spectrum of ^{207}Bi in the energy range from 500 to 2200 keV has been developed. In the experiment typical SmCo_5 magnets (6A and 8A) were used. The accuracy of the efficiency determination was 5-10%.
Greiche, Albert; Biel, Wolfgang; Marchuk, Oleksandr; Burhenn, Rainer
2008-09-01
The new high-efficiency extreme ultraviolet overview spectrometer (HEXOS) system for the stellarator Wendelstein 7-X is now mounted for testing and adjustment at the tokamak experiment for technology oriented research (TEXTOR). One part of the testing phase was the intensity calibration of the two double spectrometers, which in total cover a spectral range from 2.5 to 160.0 nm with overlap. This work presents the current intensity calibration curves for HEXOS and describes the method of calibration. The calibration was implemented with calibrated lines of a hollow cathode light source and the branching ratio technique. The hollow cathode light source provides calibrated lines from 16 up to 147 nm. We could extend the calibrated region in the spectrometers down to 2.8 nm by using the branching line pairs emitted by an uncalibrated pinch extreme ultraviolet light source as well as emission lines from boron and carbon in TEXTOR plasmas. In total HEXOS is calibrated from 2.8 up to 147 nm, which covers most of the observable wavelength region. The approximate density of carbon in the range of the minor radius from 18 to 35 cm in a TEXTOR plasma, determined by simulating calibrated vacuum ultraviolet emission lines with a transport code, was 5.5×10^17 m^-3, which corresponds to a local carbon concentration of 2%.
Lunnemann, Per; van Dijk-Moes, Relinde J A; Pietra, Francesca; Vanmaekelbergh, Daniël; Koenderink, A Femius
2013-01-01
We demonstrate that a simple silver-coated ball lens can be used to accurately measure the entire distribution of radiative transition rates of quantum dot nanocrystals. This simple and cost-effective implementation of Drexhage's method, which uses nanometer-controlled optical mode density variations near a mirror, allows not only the extraction of calibrated ensemble-averaged rates but also, for the first time, quantification of the full inhomogeneous dispersion of radiative and non-radiative decay rates across thousands of nanocrystals. We apply the technique to novel ultra-stable CdSe/CdS dot-in-rod emitters. The emitters are of large current interest due to their improved stability and reduced blinking. We retrieve a room-temperature ensemble-average quantum efficiency of 0.87 ± 0.08 at a mean lifetime around 20 ns. We confirm a log-normal distribution of decay rates as often assumed in literature and we show that the width of the rate distribution, which amounts to about 30% of the mean decay rate, is strongly dependent on the l...
The self-learning Monte Carlo technique has been implemented in the commonly used general purpose neutron transport code MORSE, in order to enhance sampling of the particle histories that contribute to a detector response. The parameters of all the biasing techniques available in MORSE, i.e. of splitting, Russian roulette, source and collision outgoing energy importance sampling, path length transformation and additional biasing of the source angular distribution, are optimized. The learning process is performed iteratively after each batch of particles, by retrieving the data concerning the subset of histories that passed the detector region and energy range in the previous batches. This procedure has been tested on two sample problems in nuclear geophysics, where an unoptimized Monte Carlo calculation is particularly inefficient. The results are encouraging, although the presented method does not directly minimize the variance and the convergence of our algorithm is restricted by the statistics of successful histories from the previous random walks. Further applications for modeling of nuclear logging measurements seem promising. 11 refs., 2 figs., 3 tabs. (author)
On stochastic error and computational efficiency of the Markov Chain Monte Carlo method
Li, Jun
2014-01-01
In Markov Chain Monte Carlo (MCMC) simulations, thermal equilibrium quantities are estimated by ensemble averages over a sample set containing a large number of correlated samples. These samples are selected in accordance with the probability distribution function, known from the partition function of the equilibrium state. As the stochastic error of the simulation results is significant, it is desirable to understand the variance of the estimation by ensemble average, which depends on the sample size (i.e., the total number of samples in the set) and the sampling interval (i.e., the cycle number between two consecutive samples). Although large sample sizes reduce the variance, they increase the computational cost of the simulation. For a given CPU time, the sample size can be reduced greatly by increasing the sampling interval, while having the corresponding increase in variance be negligible if the original sampling interval is very small. In this work, we report a few general rules that relate the variance with the sample size and the sampling interval. These results are observed and confirmed numerically. These variance rules are derived for the MCMC method but are also valid for correlated samples obtained using other Monte Carlo methods. The main contribution of this work includes the theoretical proof of these numerical observations and the set of assumptions that lead to them. © 2014 Global-Science Press.
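The variance relation discussed above, Var(mean) ≈ (σ²/N)(1 + 2τ) with τ the integrated autocorrelation time, suggests a simple estimator of the effective number of independent samples in a correlated chain (a sketch with a crude cutoff rule; names are assumed):

```python
import numpy as np

def integrated_autocorr_time(x, max_lag=None):
    """Integrated autocorrelation time tau, so that
    Var(sample mean) ~ (sigma^2 / N) * (1 + 2 * tau).
    Uses the empirical autocorrelation with a crude cutoff at the
    first non-positive coefficient."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    var = float(np.dot(x, x)) / n
    if max_lag is None:
        max_lag = n // 10
    tau = 0.0
    for k in range(1, max_lag):
        rho = float(np.dot(x[:-k], x[k:])) / (n * var)
        if rho <= 0.0:                   # noise floor reached: stop summing
            break
        tau += rho
    return tau

def effective_sample_size(x):
    """Number of effectively independent samples in a correlated chain."""
    return len(x) / (1.0 + 2.0 * integrated_autocorr_time(x))
```

For independent samples the effective size is close to N, while a strongly correlated chain (e.g. an AR(1) process with coefficient 0.9) yields a much smaller value, which is exactly the trade-off between sample size and sampling interval discussed above.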
Mukhopadhyay, N. D.; Sampson, A. J.; Deniz, D.; Carlsson, G. A.; Williamson, J.; Malušek, Alexandr
2012-01-01
Roč. 70, č. 1 (2012), s. 315-323. ISSN 0969-8043 Institutional research plan: CEZ:AV0Z10480505 Keywords : Monte Carlo * correlated sampling * efficiency * uncertainty * bootstrap Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders Impact factor: 1.179, year: 2012 http://www.sciencedirect.com/science/article/pii/S0969804311004775
The effective delayed neutron fraction, βeff, and the prompt neutron generation time, Λ, in the point kinetics equation are weighted by the adjoint flux to improve the accuracy of the reactivity estimate. Recently, Monte Carlo (MC) kinetics-parameter estimation methods using the adjoint flux calculated in MC forward simulations have been developed and successfully applied to reactor analyses. However, these adjoint estimation methods, based on a cycle-by-cycle genealogical table, require a huge amount of memory to store the pedigree hierarchy. In this paper, we present a new adjoint estimation method in which the pedigree of a single history is utilized by applying the MC Wielandt method. The algorithm of the new method is derived, and its effectiveness is demonstrated in kinetics-parameter estimations for infinite homogeneous two-group problems and critical facilities. (author)
An Efficient Monte Carlo Method for Modeling Radiative Transfer in Protoplanetary Disks
Kim, Stacy
2011-01-01
Monte Carlo methods have been shown to be effective and versatile in modeling radiative transfer processes to calculate model temperature profiles for protoplanetary disks. Temperature profiles are important for connecting physical structure to observation and for understanding the conditions for planet formation and migration. However, certain areas of the disk, such as the optically thick disk interior, are under-sampled, while others, such as the snow line (where water vapor condenses into ice) and the area surrounding a protoplanet, are of particular interest. To improve the sampling, photon packets can be preferentially scattered and reemitted toward the preferred locations at the cost of weighting packet energies to conserve the average energy flux. Here I report on the weighting schemes developed, how they can be applied to various models, and how they affect simulation mechanics and results. We find that improvements in sampling do not always imply similar improvements in temperature accuracy and calculation speed.
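The energy-conserving weighting the abstract alludes to can be sketched in a few lines; the biased angular density q(μ) = (2 + μ)/4 and the isotropic true density p(μ) = 1/2 below are illustrative choices, not the schemes developed in the work.

```python
import math
import random

rng = random.Random(42)

def sample_biased_direction():
    """Draw mu = cos(theta) from the biased pdf q(mu) = (2 + mu)/4 on
    [-1, 1], which favours 'upward' directions, while the true emission
    is isotropic, p(mu) = 1/2. Returns (mu, weight) with weight = p/q,
    so weighted averages reproduce the unbiased energy flux."""
    u = rng.random()
    mu = -2.0 + math.sqrt(1.0 + 8.0 * u)   # inverse CDF of q
    weight = 2.0 / (2.0 + mu)              # p(mu) / q(mu)
    return mu, weight

n = 100_000
upward_weighted = total_weighted = 0.0
upward_count = 0
for _ in range(n):
    mu, w = sample_biased_direction()
    total_weighted += w
    if mu > 0.0:
        upward_weighted += w
        upward_count += 1

print(upward_weighted / total_weighted)  # ~0.5, the isotropic answer
print(upward_count / n)                  # ~0.625: more packets upward
```

More packets are spent in the preferred hemisphere, yet the weighted estimate of the upward energy fraction stays at the isotropic value of 0.5, which is the conservation property the paper relies on.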
A new 9th-degree polynomial fit function has been constructed to calculate the absolute γ-ray detection efficiencies (ηth) of Ge(Li) and HPGe detectors at any γ-energy of interest in the energy range between 25 and 2000 keV and at distances between 6 and 148 cm. The total absolute γ-ray detection efficiencies were calculated for six detectors, three Ge(Li) and three HPGe, at different distances. The absolute efficiency of each detector was calculated at the specific energies of the standard sources for each measuring distance. In this calculation, both experimental (ηexp) and fitted (ηfit) efficiencies were obtained. Seven calibrated point sources (Am-241, Ba-133, Co-57, Co-60, Cs-137, Eu-152 and Ra-226) were used. The uncertainties of the efficiency calibration were also calculated for quality control. The measured (ηexp) and calculated (ηfit) efficiency values were compared with the efficiency calculated by the Gray fit function; the results obtained on the basis of (ηexp) and (ηfit) are in very good agreement.
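Fits of this kind are commonly done as a polynomial in ln(efficiency) versus ln(energy). The sketch below uses synthetic calibration points and a lower (cubic) degree, since nine invented points cannot support a 9th-degree fit; none of the numbers come from the paper.

```python
import numpy as np

# Synthetic calibration points (energy in keV, absolute efficiency);
# invented for illustration, not from the detectors of the study.
energies = np.array([59.5, 88.0, 122.1, 344.3, 661.7, 778.9,
                     1173.2, 1332.5, 1408.0])
u = np.log(energies / 122.1)
effs = 0.3 * np.exp(-0.85 * u - 0.05 * u ** 2)   # smooth toy curve

# Fit ln(efficiency) as a polynomial in ln(E); the paper uses degree 9,
# but with nine toy points a cubic is the sane choice.
coeffs = np.polyfit(np.log(energies), np.log(effs), deg=3)

def eff_fit(e_kev):
    """Interpolated absolute efficiency at e_kev (keV)."""
    return float(np.exp(np.polyval(coeffs, np.log(e_kev))))

print(eff_fit(661.7))   # reproduces the calibration point at 661.7 keV
```

Working in log-log space keeps the fitted efficiency positive and handles the several-decades span of efficiency values with uniform relative weighting.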
An empirical method for in-situ relative detection efficiency calibration for k0-based IM-NAA method
The in-situ relative detection efficiency strongly affects the qualitative aspects of the k0-based internal mono standard instrumental neutron activation analysis (IM-NAA) method, which is used to analyze small to large-size samples with irregular geometries. An empirical method is described for in-situ relative detection efficiency calibration. Two IAEA reference materials (RMs), Soil-7 and 1633b Coal Fly Ash, were irradiated for elemental analysis using the in-situ relative detector efficiency. The efficiency was measured from 0.12 MeV to 2.7 MeV. Both RMs were analyzed, and the ξ-score values are within ±1 at the 95% confidence level, whereas the deviation is within ±9% for most of the elements. This reflects the good accuracy of the in-situ relative detection efficiency. (author)
Broad-band efficiency calibration of ITER bolometer prototypes using Pt absorbers on SiN membranes
Meister, H.; Willmeroth, M. [Max-Planck-Institut für Plasmaphysik (IPP), EURATOM Association, Boltzmannstr. 2, 85748 Garching (Germany); Zhang, D. [Max-Planck-Institut für Plasmaphysik (IPP), EURATOM Association, Teilinstitut Greifswald, Wendelsteinstraße 1, 17491 Greifswald (Germany); Gottwald, A.; Krumrey, M.; Scholze, F. [Physikalisch-Technische Bundesanstalt (PTB), Abbestraße 2-12, 10587 Berlin (Germany)
2013-12-15
The energy-resolved efficiency of two bolometer detector prototypes for ITER, with 4 channels each and absorber thicknesses of 4.5 μm and 12.5 μm, respectively, has been calibrated in a broad spectral range from 1.46 eV up to 25 keV. The calibration in the energy range above 3 eV was performed against previously calibrated silicon photodiodes using monochromatized synchrotron radiation provided by five different beamlines of the Physikalisch-Technische Bundesanstalt at the electron storage rings BESSY II and Metrology Light Source in Berlin. For the measurements in the visible range, a setup was realised using monochromatized halogen lamp radiation and a calibrated laser power meter as reference. The measurements clearly demonstrate that the efficiency of the bolometer prototype detectors in the range from 50 eV up to ≈6 keV is close to unity; at a photon energy of 20 keV the bolometer with the thick absorber detects 80% of the photons, the one with the thin absorber about 50%. This indicates that the detectors will be well capable of measuring the plasma radiation expected from the standard ITER scenario. However, a minimum absorber thickness will be required for the high temperatures in the central plasma. At 11.56 keV, the sharp Pt-L3 absorption edge allowed us to cross-check the absorber thickness by fitting the measured efficiency to the theoretically expected absorption of X-rays in a homogeneous Pt layer. Furthermore, below 50 eV the efficiency first follows the losses due to reflectance expected for Pt, but below 10 eV it is reduced further by a factor of 2 for the thick absorber and a factor of 4 for the thin absorber. Most probably, the different histories in production, storage, and operation led to varying surface conditions and additional loss channels.
Mathematical methods are being increasingly employed in the efficiency calibration of gamma-based systems for non-destructive assay (NDA) of radioactive waste and for the estimation of the Total Measurement Uncertainty (TMU). Recently, ASTM (American Society for Testing and Materials) released a standard guide for the use of modeling in passive gamma measurements, a testimony to the common use and increasing acceptance of mathematical techniques in the calibration and characterization of NDA systems. Mathematical methods offer flexibility and cost savings by rapidly incorporating calibrations for multiple container types, geometries, and matrix types in a new waste assay system or in a system that is already operational. They are also useful in modeling heterogeneous matrices and non-uniform activity distributions. In compliance with good practice, if a computational method is used in waste assay (or in any other radiological application), it must be validated or benchmarked using representative measurements. In this paper, applications involving mathematical methods in gamma-based NDA systems are discussed with several examples drawn from NDA systems that were recently calibrated and performance tested; measurement-based verification results are presented. Mathematical methods play an important role in the efficiency calibration of gamma-based NDA systems, especially when the measurement program involves a wide variety of complex item geometries and matrix combinations for which the development of physical standards may be impractical. They also offer a cost-effective means to perform TMU campaigns. Good practice demands that all mathematical estimates be benchmarked and validated using representative sets of measurements. (authors)
Cornejo Diaz, N. [Centre for Radiological Protection and Hygiene, P.O. Box 6195, Habana (Cuba); Jurado Vargas, M., E-mail: mjv@unex.e [Physics Department, University of Extremadura, 06071 Badajoz (Spain)
2010-07-15
Quick and relatively simple procedures were incorporated into the Monte Carlo code DETEFF to account for the escape of bremsstrahlung radiation and secondary electrons. The relative bias in efficiency values was thus reduced for photon energies between 1500 and 2000 keV, without any noticeable increase in simulation time. A relatively simple method was also included to account for the rounding of detector edges. The validation studies showed relative deviations of about 1% in the energy range 10-2000 keV.
A Calibration Routine for Efficient ETD in Large-Scale Proteomics
Rose, Christopher M.; Rush, Matthew J. P.; Riley, Nicholas M.; Merrill, Anna E.; Kwiecien, Nicholas W.; Holden, Dustin D.; Mullen, Christopher; Westphall, Michael S.; Coon, Joshua J.
2015-11-01
Electron transfer dissociation (ETD) has been broadly adopted and is now available on a variety of commercial mass spectrometers. Unlike collisional activation techniques, optimal performance of ETD requires considerable user knowledge and input. ETD reaction duration is one key parameter that can greatly influence spectral quality and overall experiment outcome. We describe a calibration routine that determines the correct number of reagent anions necessary to reach a defined ETD reaction rate. Implementation of this automated calibration routine on two hybrid Orbitrap platforms illustrates considerable advantages, namely, increased product ion yield with a concomitant reduction in scan rates, netting up to 75% more unique peptide identifications in a shotgun experiment.
Monte Carlo calculations were used to investigate the efficiency of radiation protection equipment in reducing eye and whole body doses during fluoroscopically guided interventional procedures. Eye lens doses were determined considering different models of eyewear with various shapes, sizes and lead thickness. The origin of scattered radiation reaching the eyes was also assessed to explain the variation in the protection efficiency of the different eyewear models with exposure conditions. The work also investigates the variation of eye and whole body doses with ceiling-suspended shields of various shapes and positioning. For all simulations, a broad spectrum of configurations typical for most interventional procedures was considered. Calculations showed that ‘wrap around’ glasses are the most efficient eyewear models reducing, on average, the dose by 74% and 21% for the left and right eyes respectively. The air gap between the glasses and the eyes was found to be the primary source of scattered radiation reaching the eyes. The ceiling-suspended screens were more efficient when positioned close to the patient’s skin and to the x-ray field. With the use of such shields, the Hp(10) values recorded at the collar, chest and waist level and the Hp(3) values for both eyes were reduced on average by 47%, 37%, 20% and 56% respectively. Finally, simulations proved that beam quality and lead thickness have little influence on eye dose while beam projection, the position and head orientation of the operator as well as the distance between the image detector and the patient are key parameters affecting eye and whole body doses. (paper)
Efficient Monte Carlo Methods for the Potts Model at Low Temperature
Molkaraie, Mehdi
2015-01-01
We consider the problem of estimating the partition function of the ferromagnetic $q$-state Potts model. We propose an importance sampling algorithm in the dual of the normal factor graph representing the model. The algorithm can efficiently compute an estimate of the partition function in a wide range of parameters; in particular, when the coupling parameters of the model are strong (corresponding to models at low temperature) or when the model contains a mixture of strong and weak couplings. We show that, in this setting, the proposed algorithm significantly outperforms state-of-the-art methods in both the primal and the dual domains.
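For contrast with the paper's dual-domain algorithm, here is the naive importance-sampling baseline in the primal domain, on a toy 4-site Potts cycle small enough to check against exact enumeration; the graph, q, and β are arbitrary choices, and this is precisely the kind of estimator that degrades at low temperature.

```python
import itertools
import math
import random

Q, BETA = 3, 1.0                          # 3-state Potts, moderate coupling
EDGES = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a 4-site cycle: tiny toy graph

def hamiltonian(state):
    """Number of satisfied (equal-spin) edges."""
    return sum(1 for i, j in EDGES if state[i] == state[j])

def exact_z():
    """Brute-force partition function over all Q**4 states."""
    return sum(math.exp(BETA * hamiltonian(s))
               for s in itertools.product(range(Q), repeat=4))

def is_estimate_z(n, seed=1):
    """Plain importance sampling with a uniform proposal:
    Z = Q**N * E_uniform[exp(beta * H)]. The paper's algorithm instead
    samples in the dual factor graph; this is only the naive baseline."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        state = [rng.randrange(Q) for _ in range(4)]
        acc += math.exp(BETA * hamiltonian(state))
    return Q ** 4 * acc / n

print(exact_z())               # ~513.1 by direct summation
print(is_estimate_z(200_000))  # close to the exact value at this beta
```

At large β the uniform proposal almost never hits the few dominant aligned configurations, so the estimator's variance explodes; working in the dual domain is one way to avoid exactly that failure mode.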
Beta-efficiency of a typical gas-flow ionization chamber using GEANT4 Monte Carlo simulations
Hussain Abid
2011-01-01
GEANT4-based Monte Carlo simulations have been carried out to determine the efficiency and conversion factors of a gas-flow ionization chamber for beta particles emitted by 86 different radioisotopes covering the average-β energy range of 5.69 keV-2.061 MeV. Good agreement was found between the GEANT4-predicted values and the corresponding experimental data, as well as with EGS4-based calculations. For the reported set of β-emitters, the values of the conversion factor were established in the range of 0.5×10¹³-2.5×10¹³ Bq·cm⁻³/A. The computed xenon-to-air conversion factor ratios attain a minimum value of 0.2 in the range of 0.1-1 MeV. As the radius and/or volume of the ion chamber increases, the conversion factors approach a flat energy response. These simulations show a small but significant dependence of the ionization efficiency on the type of wall material.
A program was developed in BASIC for a Sinclair-type personal computer. The code calculates, by the Monte Carlo method, the total counting efficiency for a cylindrical detector. (Author)
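The BASIC original is not available, so here is a modern re-creation of the core of such a calculation, reduced to the geometric (solid-angle) part for a point source on the axis of a cylindrical detector; the dimensions are invented, and the analytic solid-angle formula provides a check.

```python
import math
import random

def geometric_efficiency_mc(radius, distance, n=200_000, seed=7):
    """Monte Carlo estimate of the geometric (solid-angle) efficiency
    of a cylindrical detector face of given radius, for an isotropic
    point source on the axis at `distance` from the face. Intrinsic
    efficiency is taken as 1, so this is geometry only."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        mu = 2.0 * rng.random() - 1.0       # cos(theta), isotropic
        if mu <= 0.0:
            continue                        # emitted away from detector
        # radial offset where the ray crosses the detector face plane
        r = distance * math.sqrt(1.0 - mu * mu) / mu
        if r <= radius:
            hits += 1
    return hits / n

R, d = 2.5, 5.0   # cm, arbitrary example geometry
analytic = 0.5 * (1.0 - d / math.sqrt(d * d + R * R))
print(geometric_efficiency_mc(R, d), analytic)   # should agree closely
```

For an on-axis point source the exact answer is Ω/4π = (1 − d/√(d² + R²))/2, which the sampled estimate reproduces to within statistical noise.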
Lee, Hee Jung; Park, Seongchong; Park, Hee Su; Hong, Kee Suk; Lee, Dong-Hoon; Kim, Heonoh; Cha, Myoungsik; Seb Moon, Han
2016-04-01
We present a practical calibration method of the detection efficiency (DE) of single photon detectors (SPDs) in a wide wavelength range from 480 nm to 840 nm. The setup consists of a GaN laser diode emitting a broadband luminescence, a tunable bandpass filter, a beam splitter, and a switched integrating amplifier which can measure the photocurrent down to the 100 fA level. The SPD under test with a fibre-coupled beam input is directly compared with a reference photodiode without using any calibrated attenuator. The relative standard uncertainty of the DE of the SPD is evaluated to be from 0.8% to 2.2% varying with wavelength (k = 1).
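The photon-rate bookkeeping behind such a substitution calibration can be written down directly: the reference photodiode gives an optical power, which converts to a photon rate via E = hc/λ. The numbers below are illustrative, not from the paper.

```python
# Photon-rate arithmetic for single-photon-detector efficiency
# calibration (illustrative values only).
H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s

def photon_rate(power_w, wavelength_m):
    """Photons per second carried by a beam of given optical power."""
    return power_w * wavelength_m / (H * C)

def detection_efficiency(counts_per_s, power_w, wavelength_m):
    """DE of the single-photon detector: registered counts divided by
    the photon rate inferred from the reference photodiode power."""
    return counts_per_s / photon_rate(power_w, wavelength_m)

# e.g. 100 fW at 633 nm arriving at the detector:
rate = photon_rate(100e-15, 633e-9)
de = detection_efficiency(1.6e5, 100e-15, 633e-9)
print(rate, de)   # ~3.2e5 photons/s, DE ~0.5
```

In the actual setup the beam-splitter ratio and fibre-coupling losses enter this chain as additional measured factors; they are omitted here for brevity.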
Cinelli, Giorgia; Tositti, Laura; Mostacci, Domiziano; Bare, Jonathan
2015-01-01
In view of assessing natural radioactivity with on-site quantitative gamma spectrometry, efficiency calibration of NaI(Tl) detectors is investigated. A calibration based on Monte Carlo simulation of detector response is proposed, to render reliable quantitative analysis practicable in field campaigns. The method is developed with reference to contact geometry, in which measurements are taken placing the NaI(Tl) probe directly against the solid source to be analyzed. The Monte Carlo code us...
Efficient masonry vault inspection by Monte Carlo simulations: Case of hidden defect
Abdelmounaim Zanaz
2016-06-01
This paper presents a methodology for probabilistic assessment of the bearing capacity of masonry vaults, taking existing defects into account. A comprehensive methodology and software package have been developed and adapted to inspection requirements. First, the mechanical analysis model is explained and validated, showing a good compromise between computation time and accuracy. This compromise is required when a probabilistic approach is considered, as it demands a large number of mechanical analysis runs. To model the defect, an inspection case is simulated by considering a segmental vault. As inspection data are often insufficient, the defect position and size are considered unknown. As the NDT results could not provide useful and reliable information, it was decided to take core samples while minimizing their number as far as possible. In this case the main difficulty is knowing on which segment coring would be most efficient. To find out, all possible positions are studied considering a single core. Using probabilistic approaches, the distribution function of the critical load is determined for each segment. The results make it possible to identify the best segment for vault inspection.
Efficient Orientation and Calibration of Large Aerial Blocks of Multi-Camera Platforms
Karel, W.; Ressl, C.; Pfeifer, N.
2016-06-01
Aerial multi-camera platforms typically incorporate a nadir-looking camera accompanied by further cameras that provide oblique views, potentially resulting in utmost coverage, redundancy, and accuracy even on vertical surfaces. However, issues have remained unresolved with the orientation and calibration of the resulting imagery, to two of which we present feasible solutions. First, as the standard feature point descriptors used for the automated matching of homologous points are only invariant to the geometric variations of translation, rotation, and scale, they are not invariant to general changes in perspective. While the deviations from local 2D similarity transforms may be negligible for corresponding surface patches in vertical views of flat land, they become evident at vertical surfaces, and in oblique views in general. Usage of such similarity-invariant descriptors thus limits the number of tie points that stabilize the orientation and calibration of oblique views and cameras. To alleviate this problem, we present the positive impact on image connectivity of using a quasi affine-invariant descriptor. Second, no matter which hardware and software are used, at some point the number of unknowns of a bundle block may be too large to be handled. With multi-camera platforms, these limits are reached even sooner. Adjustment of sub-blocks is sub-optimal, as it complicates data management and hinders self-calibration. Simply discarding unreliable tie points of low manifold is not an option either, because these points are needed at the block borders and in poorly textured areas. As a remedy, we present a straightforward method to considerably reduce the number of tie points, and hence unknowns, before bundle block adjustment, while preserving orientation and calibration quality.
Background: High-purity germanium (HPGe) detectors need their detection efficiency calibrated for the measured sample using radioactivity standard sources. However, if a great variety of samples with different materials, densities or geometries needs calibrating, the standard sources used will be very expensive and are not beneficial to environmental protection. Purpose: To study a new full-energy-peak (FEP) efficiency calibration method, without artificial standard sources, for the HPGe detector, using 82Br and 160Tb produced by neutron activation together with 40K. Methods: An HPGe detector (diameter 76 mm) with a relative efficiency of 42% and two soil samples (Φ70 mm × 66 mm) were used in the experiments. The ratios, relative to 554.3 keV, of the FEP counting rates, εBr(Ei), for the different γ-energies Ei of 82Br were used to fit a relative efficiency function fBr(E). The ratios Uj, relative to 1271.8 keV, for the γ-energies Ej of 86.7, 197.0, 215.6, 298.6 and 392.5 keV of 160Tb were calculated and transformed into relative efficiencies normalized to the 554.3 keV γ-energy of 82Br using the formula εBr(Ej) = Uj·fBr(E1271.8). The data εBr(Ei) and εBr(Ej) were then fitted to a normalized relative efficiency function f(E). The absolute efficiency εK at the 40K γ-energy (EK = 1460.8 keV), resulting from KCl mixed homogeneously with the sample, can be determined; the absolute efficiency at any other energy E then follows from ε(E) = εK·f(E)/f(EK). Results: The experiments showed no significant change of fBr(E) when the sample-detector distance is more than 3 cm. To verify the new method, the activity in two soil samples (Φ70 mm × 66 mm) was measured (sample-detector distance = 3.1 cm) and the results were compared with the γ-γ coincidence method. The activity concentrations of seven radionuclides, including 226Ra, 235U, 232Th, 40K, 134Cs, 137Cs and 60Co, in each sample were in good agreement within uncertainties.
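The anchoring step ε(E) = εK·f(E)/f(EK) can be sketched as follows, with a hypothetical power-law relative-efficiency curve standing in for the fitted 82Br/160Tb curve; all point values and the anchor efficiency εK are invented for illustration.

```python
import math

# Hypothetical normalized relative-efficiency points f(E) (E in keV),
# standing in for the fitted 82Br/160Tb curve of the method.
rel_points = [(200.0, 2.10), (554.3, 1.00), (800.0, 0.74),
              (1271.8, 0.50), (1460.8, 0.45)]

# Least-squares straight line in log-log space: ln f = a + b ln E.
xs = [math.log(e) for e, _ in rel_points]
ys = [math.log(v) for _, v in rel_points]
m = len(xs)
xbar, ybar = sum(xs) / m, sum(ys) / m
b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
     / sum((x - xbar) ** 2 for x in xs))
a = ybar - b * xbar

def f(e_kev):
    """Fitted relative-efficiency curve."""
    return math.exp(a + b * math.log(e_kev))

E_K = 1460.8    # keV, the 40K line used as the absolute anchor
eps_K = 0.012   # hypothetical absolute FEP efficiency at E_K

def eps(e_kev):
    """Absolute efficiency from the anchor: eps_K * f(E) / f(E_K)."""
    return eps_K * f(e_kev) / f(E_K)

print(eps(661.7))   # absolute efficiency at the 137Cs line, for example
```

The single absolute point at 1460.8 keV rescales the whole relative curve, which is why no artificial standard source is needed anywhere else on the energy axis.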
The non-intrusive quantification of gamma-emitters in waste containers calls for a correction for the perturbation introduced by the container contents relative to the configuration used for the absolute calibration of the measurement system. There are several potential ways of achieving this, including the use of an external transmission beam and the exploitation of the differential attenuation between different energy lines from the same nuclide. The former requires additional hardware, while the second is not always applicable. A third method, which overcomes both these objections and is also commonly applied as the method of choice on systems that cannot (axially) scan the item, is termed the Multi-Curve approach. When applying the Multi-Curve method, the density of the waste matrix inside the item under study is estimated from the net weight and the fill-height estimate, and the efficiency at the assay energies of interest is obtained by navigating the efficiency-energy-density surface using the (interpolation) scheme developed for the model parameters via the calibration procedure. In addition to the nominal efficiency values, an uncertainty estimate for each is made using an independent analysis engine designed to incorporate reasonable deviations of the item's properties from the ideal conditions of calibration. Prominent amongst these are deviations from fill-matrix homogeneity, deviations from a uniform activity distribution, and deviations of the atomic composition of the materials present from those used to make the calibration items (this is of concern below about 200 keV, where the photoelectric effect, which has a strong Z-dependence, comes into play). The Multi-Curve approach has proven to be robust and reliable. However, one finds that the uncertainties assigned by the traditional Multi-Curve interpolation scheme are underestimated, because correlations in the input data are neglected.
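Navigating an efficiency-energy-density surface is, at its simplest, a bilinear interpolation. The sketch below assumes a log-energy axis and an invented calibration grid; it is not the actual Multi-Curve parameterization, only an illustration of the lookup.

```python
import bisect
import math

# Hypothetical calibration grid: efficiency at a few energies (keV)
# and matrix densities (g/cm3). All values invented for illustration.
ENERGIES = [100.0, 400.0, 1000.0]
DENSITIES = [0.3, 0.8, 1.5]
EFF = [  # rows: energies, cols: densities
    [0.020, 0.012, 0.006],
    [0.010, 0.007, 0.004],
    [0.006, 0.0045, 0.003],
]

def _bracket(grid, v):
    """Index of the lower grid node and the fractional position of v."""
    i = min(max(bisect.bisect_right(grid, v) - 1, 0), len(grid) - 2)
    t = (v - grid[i]) / (grid[i + 1] - grid[i])
    return i, t

def efficiency(e_kev, density):
    """Bilinear interpolation on the efficiency-energy-density surface,
    with energy handled on a log scale (a common choice)."""
    log_e = [math.log(e) for e in ENERGIES]
    i, ti = _bracket(log_e, math.log(e_kev))
    j, tj = _bracket(DENSITIES, density)
    top = EFF[i][j] * (1 - tj) + EFF[i][j + 1] * tj
    bot = EFF[i + 1][j] * (1 - tj) + EFF[i + 1][j + 1] * tj
    return top * (1 - ti) + bot * ti

print(efficiency(661.7, 1.0))  # density from net weight / fill volume
```

The density argument is exactly the quantity estimated from the item's net weight and fill height in the Multi-Curve procedure; the uncertainty analysis then perturbs this lookup for matrix inhomogeneity and composition effects.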
An efficient method to compute accurate polarized solar radiance spectra using the (3D) Monte Carlo model MYSTIC has been developed. Such high resolution spectra are measured by various satellite instruments for remote sensing of atmospheric trace gases. ALIS (Absorption Lines Importance Sampling) allows the calculation of spectra by tracing photons at only one wavelength. In order to take into account the spectral dependence of the absorption coefficient a spectral absorption weight is calculated for each photon path. At each scattering event the local estimate method is combined with an importance sampling method to take into account the spectral dependence of the scattering coefficient. Since each wavelength grid point is computed based on the same set of random photon paths, the statistical error is almost same for all wavelengths and hence the simulated spectrum is not noisy. The statistical error mainly results in a small relative deviation which is independent of wavelength and can be neglected for those remote sensing applications, where differential absorption features are of interest. Two example applications are presented: The simulation of shortwave-infrared polarized spectra as measured by GOSAT from which CO2 is retrieved, and the simulation of the differential optical thickness in the visible spectral range which is derived from SCIAMACHY measurements to retrieve NO2. The computational speed of ALIS (for 1D or 3D atmospheres) is of the order of or even faster than that of one-dimensional discrete ordinate methods, in particular when polarization is considered.
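The core ALIS idea of reusing one sampled path for all wavelengths via spectral absorption weights can be reduced to a few lines; the absorption coefficients and path below are invented, and the real algorithm additionally importance-samples the scattering events.

```python
import math

# One random photon path is traced once; each wavelength then gets an
# absorption weight w(lambda) = exp(-k_abs(lambda) * path_length), so
# the Monte Carlo noise is identical across the spectrum and cancels
# in differential absorption features. All coefficients are invented.
k_abs = {758.0: 0.02, 760.0: 0.85, 762.0: 0.30, 765.0: 0.01}  # 1/km
segments_km = [1.2, 3.5, 0.8]   # segment lengths of one sampled path

def spectral_weights(path, k):
    """Absorption weight of a single geometric path, per wavelength (nm)."""
    total = sum(path)
    return {lam: math.exp(-kl * total) for lam, kl in k.items()}

w = spectral_weights(segments_km, k_abs)
print(w[760.0] / w[765.0])   # deep line-core absorption vs. continuum
```

In a real atmosphere k_abs also varies along the path with pressure and temperature, so the sum runs over per-segment coefficients; the single-coefficient form here keeps the weighting idea visible.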
Tang, Xiao-Bin; Meng, Jia; Wang, Peng; Cao, Ye; Huang, Xi; Wen, Liang-Sheng; Chen, Da
2016-04-01
A small-sized UAV (NH-UAV) airborne system with two gamma spectrometers (LaBr3 detector and HPGe detector) was developed to monitor activity concentration in serious nuclear accidents, such as the Fukushima nuclear accident. The efficiency calibration and determination of minimum detectable activity concentration (MDAC) of the specific system were studied by MC simulations at different flight altitudes, different horizontal distances from the detection position to the source term center and different source term sizes. Both air and ground radiation were considered in the models. The results obtained may provide instructive suggestions for in-situ radioactivity measurements of NH-UAV. PMID:26773821
The paper investigates and analyses NORM residues from rare-earth smelting and separation plants in Jiangsu Province using the high-purity germanium gamma spectrometry sourceless efficiency calibration method, which was verified with IAEA reference materials. The results show that in the rare-earth residues the radioactive equilibrium of the uranium and thorium decay series has been broken and that the activity concentrations in the samples differ markedly. Based on the results, the paper makes suggestions and proposes protective measures for the disposal of rare-earth residues. (author)
A careful analysis of geometry and source positioning influence in the activity measurement outcome of a nuclear medicine dose calibrator is presented for 99mTc. The implementation of a quasi-point source apparent activity curve measurement is proposed for an accurate correction of the activity inside several syringes, and compared with a theoretical geometric efficiency model. Additionally, new geometrical parameters are proposed to test and verify the correct positioning of the syringes as part of acceptance testing and quality control procedures.
Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.
2010-10-01
Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
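The cost of a first-order fixed-step scheme can be seen on the simplest conceptual store, a linear reservoir dS/dt = P − kS, for which the exact per-step solution that an operator-splitting scheme can exploit is available in closed form; the parameter values are arbitrary.

```python
import math

# Linear reservoir dS/dt = P - k*S over one day:
# explicit fixed-step Euler vs. the exact (analytical) solution.
P, k = 10.0, 1.5    # inflow (mm/d), recession constant (1/d)
S0, T = 0.0, 1.0    # initial storage (mm), horizon (d)

def euler(n_steps):
    """First-order explicit Euler with a fixed step, as in the cheap
    implementations the paper criticizes."""
    dt, s = T / n_steps, S0
    for _ in range(n_steps):
        s += dt * (P - k * s)
    return s

def exact():
    """Closed-form solution S(t) = P/k + (S0 - P/k) * exp(-k t)."""
    return P / k + (S0 - P / k) * math.exp(-k * T)

print(exact())      # ~5.18 mm
print(euler(1))     # one big step badly overshoots (10.0 mm)
print(euler(100))   # small steps converge to the exact value
```

A daily-step Euler solver is thus not just slightly inaccurate but qualitatively wrong for fast-responding stores, and the resulting discontinuous error surface is what distorts the MCMC posterior in the study.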
Efficiency calibration of a liquid scintillation counter for 90Y Cherenkov counting
In this paper a complete and self-consistent method for 90Sr determination in environmental samples is presented. It is based on Cherenkov counting of 90Y with a conventional liquid scintillation counter. The effects of color quenching on the counting efficiency and background are carefully studied. A working curve is presented which allows quantification of the correction in the counting efficiency depending on the color-quenching strength. (orig.)
Nicoulaud-Gouin, V.; Giacalone, M.; Gonze, M.A. [Institut de Radioprotection et de Surete Nucleaire-PRP-ENV/SERIS/LM2E (France); Martin-Garin, A.; Garcia-Sanchez, L. [IRSN-PRP-ENV/SERIS/L2BT (France)
2014-07-01
Calibration of transfer models against observation data is a challenge, especially when parameter uncertainty is required and when a choice must be made between competing models. Two main calibration methods are generally used. In the frequentist approach, the unknown parameter of interest is supposed fixed and its estimation is based on the data only; in this category, the least-squares method has many restrictions for nonlinear models, and competing models need to be nested in order to be compared. In Bayesian inference, the unknown parameter of interest is supposed random and its estimation is based on the data and on prior information. Compared to the frequentist method, it provides probability density functions and therefore pointwise estimates with credible intervals. However, in practical cases, Bayesian inference is a complex problem of numerical integration, which explains its low use in operational modeling, including radioecology. This study aims to illustrate the interest and feasibility of the Bayesian approach in radioecology, particularly in the case of ordinary differential equation models with non-constant coefficients, which cover most radiological risk assessment models, notably those implemented in the Symbiose platform (Gonze et al., 2010). The Markov chain Monte Carlo (MCMC) method (Metropolis et al., 1953) was used because the posterior expectations are intractable integrals. The invariant distribution of the parameters was obtained with the Metropolis-Hastings algorithm (Hastings, 1970). The GNU MCSim software (Bois and Maszle, 2011), a Bayesian hierarchical framework, was used to deal with nonlinear differential models. Two case studies including this type of model were investigated: an equilibrium-kinetic sorption model (EK) (e.g. van Genuchten et al., 1974), with experimental data concerning {sup 137}Cs and {sup 85}Sr sorption and desorption in different soils studied in stirred flow-through reactors. This model, generalizing the K{sub d} approach
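The Metropolis-Hastings machinery referred to above can be illustrated generically. The sketch below infers the rate constant of a toy first-order kinetic model C(t) = C0 exp(-kt) from invented noisy data; it is not the EK sorption model and does not use GNU MCSim.

```python
import math, random

random.seed(1)

# Toy first-order kinetic model C(t) = C0*exp(-k*t); infer k from noisy
# synthetic observations with a Gaussian likelihood and a flat prior on k > 0,
# using a random-walk Metropolis-Hastings sampler.

C0, sigma = 1.0, 0.05
times = [0.5, 1.0, 2.0, 4.0]
k_true = 0.8
data = [C0 * math.exp(-k_true * t) + random.gauss(0.0, sigma) for t in times]

def log_post(k):
    if k <= 0.0:
        return float("-inf")        # flat prior restricted to k > 0
    sse = sum((y - C0 * math.exp(-k * t)) ** 2 for t, y in zip(times, data))
    return -sse / (2.0 * sigma ** 2)

k, lp = 1.0, log_post(1.0)
samples = []
for i in range(20000):
    k_prop = k + random.gauss(0.0, 0.1)           # symmetric random-walk proposal
    lp_prop = log_post(k_prop)
    if math.log(random.random()) < lp_prop - lp:  # Metropolis acceptance rule
        k, lp = k_prop, lp_prop
    if i >= 5000:                                 # discard burn-in
        samples.append(k)

k_mean = sum(samples) / len(samples)              # posterior mean estimate
```

The chain of retained samples approximates the posterior of k, so credible intervals come directly from its quantiles, which is the practical advantage over least squares noted in the abstract.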
Efficiency calibration of the ELBE nuclear resonance fluorescence setup using a proton beam
Trompler, Erik; Bemmerer, Daniel; Beyer, Roland; Erhard, Martin; Grosse, Eckart; Hannaske, Roland; Junghans, Arnd Rudolf; Marta, Michele; Nair, Chithra; Schwengner, R.; Wagner, Andreas; Yakorev, Dmitry [Forschungszentrum Dresden-Rossendorf (FZD), Dresden (Germany); Broggini, Carlo; Caciolli, Antonio; Menegazzo, Roberto [INFN Sezione di Padova, Padova (Italy); Fueloep, Zsolt; Gyuerky, Gyoergy; Szuecs, Tamas [Atomki, Debrecen (Hungary)
2009-07-01
The nuclear resonance fluorescence (NRF) setup at ELBE uses bremsstrahlung with endpoint energies up to 20 MeV. The setup consists of four 100% high-purity germanium detectors, each surrounded by a BGO escape-suppression shield and a lead collimator. The detection efficiency up to E{sub {gamma}}=12 MeV has been determined using the proton beam from the FZD Tandetron and well-known resonances in the {sup 11}B(p,{gamma}){sup 12}C, {sup 14}N(p,{gamma}){sup 15}O, and {sup 27}Al(p,{gamma}){sup 28}Si reactions. The deduced efficiency curve allows one to check efficiency curves calculated with GEANT. Future photon-scattering work can be carried out with improved precision at high energy.
Efficient Calibration/Uncertainty Analysis Using Paired Complex/Surrogate Models.
Burrows, Wesley; Doherty, John
2015-01-01
The use of detailed groundwater models to simulate complex environmental processes can be hampered by (1) long run-times and (2) a penchant for solution convergence problems. Collectively, these can undermine the ability of a modeler to reduce and quantify predictive uncertainty, and therefore limit the use of such detailed models in the decision-making context. We explain and demonstrate a novel approach to calibration and the exploration of posterior predictive uncertainty of a complex model that can overcome these problems in many modelling contexts. The methodology relies on the conjunctive use of a simplified surrogate version of the complex model in combination with the complex model itself. It employs gradient-based subspace analysis and is thus readily adapted for use in highly parameterized contexts. In its most basic form, one or more surrogate models are used to calculate the partial derivatives that collectively comprise the Jacobian matrix, while the testing of parameter upgrades and the making of predictions is done by the original complex model. The methodology is demonstrated using a density-dependent seawater intrusion model in which the model domain is characterized by a heterogeneous distribution of hydraulic conductivity. PMID:25142272
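The paired complex/surrogate idea can be sketched generically. Both "models" below are invented two-parameter stand-ins, not groundwater simulators: finite-difference derivatives for the Jacobian come from the cheap surrogate, while each Gauss-Newton upgrade is tested against the expensive model before being accepted.

```python
# Toy sketch of paired complex/surrogate calibration: the Jacobian used for
# parameter upgrades is computed by finite differences on a cheap surrogate,
# while each candidate upgrade is accepted or rejected based on the expensive
# "complex" model. Both models here are hypothetical stand-ins.

def complex_model(p):        # stand-in for a slow, detailed simulator
    return [p[0] ** 2 + 0.1 * p[1], p[1] ** 2 - 0.05 * p[0]]

def surrogate_model(p):      # cheaper approximation, slightly biased
    return [p[0] ** 2, p[1] ** 2]

obs = complex_model([1.5, 2.0])          # synthetic "observations"

def residuals(model, p):
    return [m - o for m, o in zip(model(p), obs)]

def phi(p):                              # sum-of-squares objective (complex model)
    return sum(r * r for r in residuals(complex_model, p))

p = [1.0, 1.0]
for _ in range(30):
    h = 1e-6
    r0 = residuals(surrogate_model, p)   # Jacobian from the SURROGATE
    J = [[(residuals(surrogate_model,
                     [p[j] + (h if j == k else 0.0) for j in range(2)])[i]
           - r0[i]) / h
          for k in range(2)] for i in range(2)]
    r = residuals(complex_model, p)      # residuals from the COMPLEX model
    # Gauss-Newton step: solve J*dp = -r (2x2 solve by Cramer's rule)
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    dp = [(-r[0] * J[1][1] + r[1] * J[0][1]) / det,
          (-r[1] * J[0][0] + r[0] * J[1][0]) / det]
    trial = [p[0] + dp[0], p[1] + dp[1]]
    if phi(trial) < phi(p):              # test the upgrade with the complex model
        p = trial
    else:
        break
```

Because the surrogate's derivatives only need to point in roughly the right direction, the iteration still converges on the complex model's optimum while the expensive model is run just once per upgrade test.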
Efficiency calibration of the Ge(Li) detector of the BIPM for SIR-type ampoules
The absolute full-energy peak efficiency of the Ge(Li) γ-ray spectrometer has been measured between 50 keV and 2 MeV with a relative uncertainty around 1 x 10-2 and for ampoule-to-detector distances of 20 cm and 50 cm. All the corrections applied (self-attenuation, dead time, pile up, true coincidence summing) are discussed in detail. (authors)
Close-geometry efficiency calibration of LaCl3:Ce detectors: measurements and simulations
A large body of literature is available for HPGe detectors. However, not much work has been done on coincidence summing effects in scintillation detectors. This may be due to the inferiority of scintillation detectors to HPGe detectors in terms of energy resolution, which makes accurate estimation of the counts under individual peaks very difficult. We report here experimental measurements and realistic simulations of absolute efficiencies (both photo-peak and total detection) and of coincidence summing correction factors in LaCl3:Ce scintillation detectors under close geometry. These detectors have drawn interest owing to properties superior to those of NaI(Tl) detectors, such as high light yield (46,000 photons/MeV), energy resolution (about 4%) and decay time (25 ns).
Calibration of the b-tagging efficiency on jets with charm quark for the ATLAS experiment
AUTHOR|(INSPIRE)INSPIRE-00536668; Schiavi, Carlo
The correct identification of jets originating from a beauty quark (b-jets) is of fundamental importance for many physics analyses performed by the ATLAS experiment, operating at the Large Hadron Collider, CERN. The efficiency of mistakenly tagging a jet originating from a charm quark (c-jet) as a b-jet has been measured in data with two different methods: a first one, referred to as the "D* method", uses a sample of jets containing reconstructed D* mesons (adopted for the 7 TeV and 8 TeV data analyses), and a second one, referred to as the "W+c method", uses a sample of c-jets produced in association with a W boson (studied on 7 TeV data). This thesis work focuses on some significant improvements made to the D* method, increasing the measurement precision. A study for the improvement of the W+c method and its first application to 13 TeV data is also presented: by refining the event selection, the W+c signal yield has been considerably increased with respect to the background processes.
The aim of this work is to develop a Monte Carlo programme that calculates the photoelectric efficiency of a planar high-purity Ge detector for low-energy photons. The programme calculates the self-absorption, the absorption in the different media crossed by the photon, and the intrinsic and total efficiencies. The results of this programme were very satisfactory, since they reproduce the measured values in the two cases of point and volume sources. The photoelectric efficiency calculated with this programme has been applied to determine the cross section of the {sup 166}Er(n,2n){sup 165}Er reaction induced by 14 MeV neutrons, where only measurement by X-ray spectrometry is possible. The value obtained is in agreement with the data given in the literature. 119 figs., 39 tabs., 96 refs. (F.M.)
This work presents the final calibration of the nuclear power channels of the IPEN/MB-01 reactor using infinitely dilute gold foils (1% Au - 99% Al), that is, a metallic alloy with concentration levels such that flux disturbance phenomena, like self-shielding, become negligible. During the irradiations, the nuclear power channels of the reactor were monitored to obtain the neutron flux and consequently the reactor power; the current values were digitally acquired every second of operation. Once the foils were irradiated, a hyper-pure germanium detection system was used to analyse their induced activity. Together with this experimental procedure, the computational code MCNP-4C was used as a tool for theoretical modeling of the IPEN/MB-01 reactor core. It was thus possible to determine the parameters necessary to obtain the reactor power, such as the inverse of the thermal disadvantage factor and the fast fission factor. Using the correlation between the average thermal neutron flux, proportional to the power, and the average digital current values of the nuclear channels during the foil irradiations, the calibration of the nuclear power channels, ionization chambers number 5 and 6 of the IPEN/MB-01 reactor, was obtained. (author)
Kalinnikov, V G; Ibrakhim, Y S; Lebedev, N A; Samatov, Z K; Sehrehehtehr, Z; Solnyshkin, A A
2002-01-01
A specific method for efficiency calibration of a mini-orange type beta-spectrometer by means of the continuous β{sup -} spectrum of {sup 90}Sr and the conversion electron spectrum of {sup 207}Bi in the energy range from 500 to 2200 keV has been elaborated. In the experiment typical SmCo{sub 5} magnets (6A and 8A) were used. The accuracy of the efficiency determination was 5-10%.
Courtine, Fabien [Laboratoire de Physique Corpusculaire, Universite Blaise Pascal - CNRS/IN2P3, 63000 Aubiere Cedex (France)
2007-03-15
This thesis was carried out in the context of thermoluminescence dating. This method requires laboratory measurements of the natural radioactivity, for which we used a germanium spectrometer. To refine its calibration, we modelled it using the Monte Carlo code Geant4. We developed a geometrical model which takes into account the presence of inactive zones and zones of poor charge collection within the germanium crystal. The parameters of the model were adjusted by comparison with experimental results obtained with a {sup 137}Cs source. It appeared that the form of the inactive zones is less simple than presented in the specialized literature. The model was then extended to a more complex source, with cascade effects and angular correlations between photons: {sup 60}Co. Lastly, applied to extended sources, it gave correct results and allowed us to validate the simulation of matrix effects. (author)
Degrelle, D.; Mavon, C.; Groetz, J.-E.
2016-04-01
This study presents a numerical method to determine the mass attenuation coefficient of a sample with an unknown chemical composition at low energy. It is compared with two experimental methods: a graphic method and a transmission method. The method consists of constructing a numerical absorption calibration curve to process the experimental results. Demineralised water, with a known mass attenuation coefficient (0.2066 cm{sup 2} g{sup -1} at 59.54 keV), was chosen to validate the method. The average value determined by the numerical method is 0.1964 ± 0.0350 cm{sup 2} g{sup -1}, that is to say less than 5% relative deviation, compared to more than 47% for the experimental methods.
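The transmission method mentioned above rests on the Beer-Lambert law, μ/ρ = ln(I0/I)/(ρx). The noise-free sketch below uses an assumed sample thickness and incident count; only the water coefficient at 59.54 keV comes from the abstract.

```python
import math

# Beer-Lambert sketch of a transmission determination of the mass attenuation
# coefficient: mu/rho = ln(I0/I) / (rho * x). Thickness and counts are
# illustrative assumptions; the coefficient for water at 59.54 keV is 0.2066.

rho = 1.0            # g/cm3, demineralised water
x = 2.0              # cm, sample thickness (assumed)
mu_rho_true = 0.2066 # cm2/g, tabulated value for water at 59.54 keV

I0 = 1.0e6                                   # incident counts (assumed)
I = I0 * math.exp(-mu_rho_true * rho * x)    # transmitted counts, noise-free

mu_rho = math.log(I0 / I) / (rho * x)        # recovered coefficient, cm2/g
```

With real counting data, statistical noise in I and I0 propagates into μ/ρ, which is one source of the large experimental deviations the abstract reports.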
Anil Kumar, G., E-mail: anilg@tifr.res.i [Department of Nuclear and Atomic Physics, Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai 400005 (India); Mazumdar, I.; Gothe, D.A. [Department of Nuclear and Atomic Physics, Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai 400005 (India)
2009-11-21
Efficiency calibration and coincidence summing correction have been performed for two large arrays of NaI(Tl) detectors in two different configurations: a compact array of 32 conical detectors of pentagonal and hexagonal shapes in soccer-ball geometry, and an array of 14 straight hexagonal NaI(Tl) detectors in castle geometry. Both of these arrays provide a large solid angle of detection, leading to considerable coincidence summing of gamma rays. The present work aims to understand the effect of coincidence summing of gamma rays when determining the energy dependence of the efficiencies of these two arrays. We have carried out extensive GEANT4 simulations with radio-nuclides that decay with a two-step cascade, considering both arrays in their realistic geometries. The absolute efficiencies have been simulated for gamma energies from 700 to 2800 keV using four different double-photon emitters, namely {sup 60}Co, {sup 46}Sc, {sup 94}Nb and {sup 24}Na. The efficiencies so obtained have been corrected for coincidence summing using the method proposed by Vidmar et al. The simulations have also been carried out for the same energies assuming mono-energetic point sources, for comparison. Experimental measurements have also been carried out using calibrated point sources of {sup 137}Cs and {sup 60}Co. The simulated and experimental results are found to be in good agreement. This demonstrates the reliability of the correction method for efficiency calibration of two large arrays in very different configurations.
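For a two-step cascade, the dominant "summing-out" loss in a large solid-angle array has a simple closed form: peak counts for gamma-1 are lost whenever gamma-2 deposits any energy in coincidence. The sketch below uses invented efficiencies, not those of the arrays described.

```python
# Summing-out for a two-step cascade (e.g. a 60Co-like decay): counts in the
# full-energy peak of gamma-1 are lost when gamma-2 is detected in coincidence,
#   n1 = A * eps_p1 * (1 - eps_t2),
# so the summing correction factor is C1 = 1 / (1 - eps_t2).
# All numbers below are illustrative assumptions.

A = 1.0e4          # number of decays (assumed)
eps_p1 = 0.05      # full-energy peak efficiency for gamma-1 (assumed)
eps_t2 = 0.30      # total efficiency for gamma-2 (large solid angle -> large)

n1_observed = A * eps_p1 * (1.0 - eps_t2)    # peak counts with summing losses
C1 = 1.0 / (1.0 - eps_t2)                    # summing-out correction factor
n1_corrected = n1_observed * C1              # recovers A * eps_p1
```

Because eps_t2 grows with solid angle, compact arrays like those above need large corrections, which is why the total (not just peak) efficiencies must be simulated.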
Zhang, Jin-Zhao; Tuo, Xian-Guo
2014-07-01
We present the design and optimization of a prompt γ-ray neutron activation analysis (PGNAA) thermal neutron output setup based on Monte Carlo simulations using the MCNP5 computer code. In these simulations, the moderator materials, reflector materials, and structure of the 252Cf-based PGNAA thermal neutron output setup are optimized. The simulation results reveal that a thin layer of paraffin combined with a thick layer of heavy water moderates the 252Cf neutron spectrum most effectively. Our new design shows significantly improved performance: the thermal neutron flux and flux rate are increased by factors of 3.02 and 3.27, respectively, compared with the conventional neutron source design.
The purpose of the present study is to introduce a compression algorithm for the CT (computed tomography) data used in Monte Carlo simulations. Performing simulations on CT data implies large computational costs as well as large memory requirements, since the number of voxels in such data typically reaches hundreds of millions. CT data, however, contain homogeneous regions which can be regrouped to form larger voxels without affecting the simulation's accuracy. Based on this property, we propose a compression algorithm based on octrees: in homogeneous regions the algorithm replaces groups of voxels with a smaller number of larger voxels. This reduces the number of voxels while preserving the critical high-density-gradient areas. Results obtained using the present algorithm on both phantom and clinical data show that compression rates of up to 75% are possible without losing the dosimetric accuracy of the simulation
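The octree idea can be sketched in a few lines. The grid below is a toy dictionary of material labels, not the authors' implementation: homogeneous octants collapse to a single merged voxel, inhomogeneous ones are split recursively into eight children.

```python
# Minimal octree-style merging sketch: a cubic grid (side a power of two) is
# recursively split into octants; a region whose voxels all share one value is
# replaced by a single node. Returns the number of leaf nodes, i.e. the voxel
# count after compression. The grid is a dict {(x, y, z): value} for simplicity.

def compress(grid, x0, y0, z0, size):
    first = grid[(x0, y0, z0)]
    if all(grid[(x, y, z)] == first
           for x in range(x0, x0 + size)
           for y in range(y0, y0 + size)
           for z in range(z0, z0 + size)):
        return 1                      # homogeneous region -> one merged voxel
    half = size // 2
    return sum(compress(grid, x0 + dx * half, y0 + dy * half, z0 + dz * half, half)
               for dx in (0, 1) for dy in (0, 1) for dz in (0, 1))

# Example: an 8x8x8 grid of water with a single bone voxel in one corner.
grid = {(x, y, z): "water" for x in range(8) for y in range(8) for z in range(8)}
grid[(0, 0, 0)] = "bone"
n_leaves = compress(grid, 0, 0, 0, 8)    # 512 voxels collapse to 22 leaves
```

Only the octants along the path to the inhomogeneity stay subdivided, which is exactly how the method preserves resolution at density gradients while merging uniform tissue.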
Accurate modeling of system response and scatter distribution is crucial for image reconstruction in emission tomography. Monte Carlo simulations are very well suited to calculate these quantities. However, Monte Carlo simulations are also slow and many simulated counts are needed to provide a sufficiently exact estimate of the detection probabilities. In order to overcome these problems, we propose to split the simulation into two parts, the detection system and the object to be imaged (the patient). A so-called 'virtual boundary' that separates these two parts is introduced. Within the patient, particles are simulated conventionally. Whenever a photon reaches the virtual boundary, its detection probability is calculated analytically by evaluating a multi-dimensional B-spline that depends on the photon position, direction and energy. The unknown B-spline knot values that define this B-spline are fixed by a prior 'pre-' simulation that needs to be run once for each scanner type. After this pre-simulation, the B-spline model can be used in any subsequent simulation with different patients. We show that this approach yields accurate results when simulating the Biograph 16 HiREZ PET scanner with Geant4 Application for Emission Tomography (GATE). The execution time is reduced by a factor of about 22 x (scanner with voxelized phantom) to 30 x (empty scanner) with respect to conventional GATE simulations of same statistical uncertainty. The pre-simulation and calculation of the B-spline knots values could be performed within half a day on a medium-sized cluster.
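The boundary cache described above amounts to evaluating a fitted B-spline instead of tracking the photon through the detector. Below is a one-dimensional de Boor evaluation sketch; the paper's spline is multi-dimensional (position, direction, energy), and the knot vector and coefficients here are invented.

```python
# De Boor's algorithm for evaluating a degree-p B-spline with knot vector t
# and coefficients c at a point x. A 1D stand-in for the multi-dimensional
# detection-probability spline described in the abstract.

def de_boor(x, t, c, p):
    # locate the knot span k with t[k] <= x < t[k+1]
    k = max(j for j in range(p, len(t) - p - 1) if t[j] <= x)
    d = [c[j + k - p] for j in range(p + 1)]
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            alpha = (x - t[j + k - p]) / (t[j + 1 + k - r] - t[j + k - p])
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]

# Quadratic spline on [0, 3] with an open uniform knot vector; the
# coefficients play the role of invented "detection probabilities".
t = [0.0, 0.0, 0.0, 1.0, 2.0, 3.0, 3.0, 3.0]
c = [0.1, 0.4, 0.7, 0.5, 0.2]
val = de_boor(1.5, t, c, 2)
```

Once the knot coefficients are fixed by the pre-simulation, each boundary crossing costs only this handful of arithmetic operations, which is where the reported 22-30x speed-up comes from.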
The associated particle technique, with a gas target, has been used to measure the absolute central neutron detection efficiency of two scintillators (NE213 and NE102A) with an uncertainty of less than ±2% over the energy range 1.5-25 MeV. A commercial n/γ discrimination system was used with NE213. Efficiencies for various discrimination levels were determined simultaneously by two-parameter computer storage. The average efficiency of each detector was measured by scanning the neutron cone across the front face. The measurements have been compared with two Monte Carlo efficiency programs (Stanton's and 05S), without artificially fitting any parameters. When the discrimination level (in terms of proton energy) is determined from the measured light-output relationship, very good agreement (to about 3%) is obtained between the measurements and the predictions. The agreement of a simple analytical expression is also found to be good over the energy range where n-p scattering dominates. (orig.)
National Aeronautics and Space Administration — Develop and demonstrate a next-generation digitally calibrated, highly scalable, L-band Transmit/Receive (TR) module to enable a precision beamforming SweepSAR...
Cadigan, Noel G.; Dowden, Jeff J.
2010-01-01
Paired-tow calibration studies provide information on changes in survey catchability that may occur because of some necessary change in protocols (e.g., change in vessel or vessel gear) in a fish stock survey. This information is important to ensure the continuity of annual time-series of survey indices of stock size that provide the basis for fish stock assessments. There are several statistical models used to analyze the paired-catch data from calibration studies. Our main contribu...
Kramer, Gary H; Capello, Kevin; DiNardo, Anthony; Hauck, Barry
2012-08-01
A commercial detector calibration package has been assessed for use in calibrating the Human Monitoring Laboratory's Portable Whole Body Counter, which is used for emergency response. The advantage of such calibration software is that calibrations can be derived very quickly once the model has been designed. The commercial package's predictions were compared to experimental point-source data and to predictions from Monte Carlo simulations. The software's predicted counting efficiencies for a point-source geometry agreed adequately with values derived from Monte Carlo simulations and experimental work. Both the standing and seated counting geometries agreed sufficiently well that the commercial package could be used in the field. PMID:22739971
A Gamma Spectroscopy Logging System (GSLS) has been developed to study sub-surface radionuclide contamination. Absolute efficiency calibration of the GSLS was performed using simple cylindrical borehole geometry. The calibration source incorporated naturally occurring radioactive material (NORM) that emitted photons ranging from 186 keV to 2,614 keV. More complex borehole geometries were modeled using commercially available shielding software. A linear relationship was found between increasing source thickness and relative photon fluence rates at the detector. Examination of varying porosity and moisture content showed that as porosity increases, relative photon fluence rates increase linearly for all energies. Attenuation effects due to iron, water, PVC, and concrete cylindrical shields were found to agree with previous studies. Regression analyses produced energy-dependent equations for efficiency corrections applicable to spectral gamma-ray well logs collected under non-standard borehole conditions
Calibration of Ge gamma-ray spectrometers for complex sample geometries and matrices
Semkow, T.M., E-mail: thomas.semkow@health.ny.gov [Wadsworth Center, New York State Department of Health, Empire State Plaza, Albany, NY 12201 (United States); Department of Environmental Health Sciences, School of Public Health, University at Albany, State University of New York, Rensselaer, NY 12144 (United States); Bradt, C.J.; Beach, S.E.; Haines, D.K.; Khan, A.J.; Bari, A.; Torres, M.A.; Marrantino, J.C.; Syed, U.-F. [Wadsworth Center, New York State Department of Health, Empire State Plaza, Albany, NY 12201 (United States); Kitto, M.E. [Wadsworth Center, New York State Department of Health, Empire State Plaza, Albany, NY 12201 (United States); Department of Environmental Health Sciences, School of Public Health, University at Albany, State University of New York, Rensselaer, NY 12144 (United States); Hoffman, T.J. [Wadsworth Center, New York State Department of Health, Empire State Plaza, Albany, NY 12201 (United States); Curtis, P. [Kiltel Systems, Inc., Clyde Hill, WA 98004 (United States)
2015-11-01
A comprehensive study of the efficiency calibration and calibration verification of Ge gamma-ray spectrometers was performed using semi-empirical, computational Monte-Carlo (MC), and transfer methods. The aim of this study was to evaluate the accuracy of the quantification of gamma-emitting radionuclides in complex matrices normally encountered in environmental and food samples. A wide range of gamma energies from 59.5 to 1836.0 keV and geometries from a 10-mL jar to 1.4-L Marinelli beaker were studied on four Ge spectrometers with the relative efficiencies between 102% and 140%. Density and coincidence summing corrections were applied. Innovative techniques were developed for the preparation of artificial complex matrices from materials such as acidified water, polystyrene, ethanol, sugar, and sand, resulting in the densities ranging from 0.3655 to 2.164 g cm{sup −3}. They were spiked with gamma activity traceable to international standards and used for calibration verifications. A quantitative method of tuning MC calculations to experiment was developed based on a multidimensional chi-square paraboloid. - Highlights: • Preparation and spiking of traceable complex matrices in extended geometries. • Calibration of Ge gamma spectrometers for complex matrices. • Verification of gamma calibrations. • Comparison of semi-empirical, computational Monte Carlo, and transfer methods of Ge calibration. • Tuning of Monte Carlo calculations using a multidimensional paraboloid.
A system with three identical custom-made units is used for the energy calibration of the GERDA Ge diodes. To perform a calibration, the 228Th sources are lowered from their parking positions at the top of the cryostat. Their positions are measured by two independent modules: one, the incremental encoder, counts the holes in the perforated steel band holding the sources; the other measures the drive shaft's angular position even when not powered. The system can be controlled remotely by a LabVIEW program. The calibration data are analyzed by an iterative calibration algorithm that determines the calibration functions for different energy reconstruction algorithms, and the resolution of several peaks in the 228Th spectrum is determined. A Monte Carlo simulation using the GERDA simulation software MAGE has been performed to determine the background induced by the sources in their parking positions.
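The core of such a calibration algorithm is a least-squares fit of known 228Th-chain peak energies against measured peak centroids. The sketch below fits a linear calibration E = a + b·ch; the peak energies are the standard 228Th-chain lines, while the channel centroids are invented for illustration.

```python
# Linear energy calibration sketch: fit E = a + b*ch by ordinary least squares
# to known 228Th-chain peak energies (keV) versus measured peak centroids.
# The channel values below are invented illustrative numbers.

energies = [583.2, 727.3, 860.6, 1592.5, 2103.5, 2614.5]        # keV
channels = [1176.6, 1464.3, 1731.3, 3194.8, 4217.3, 5238.9]     # assumed centroids

n = len(channels)
sx = sum(channels)
sy = sum(energies)
sxx = sum(ch * ch for ch in channels)
sxy = sum(ch * e for ch, e in zip(channels, energies))

b = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # gain, keV per channel
a = (sy - b * sx) / n                           # offset, keV

def calibrate(ch):
    return a + b * ch                           # channel -> energy in keV
```

An iterative algorithm of the kind described would refit a and b after re-locating the peak centroids with the updated calibration, repeating until the coefficients stabilize.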
Calibration of a compact magnetic proton recoil neutron spectrometer
Zhang, Jianfu; Ouyang, Xiaoping; Zhang, Xianpeng; Ruan, Jinlu; Zhang, Guoguang; Zhang, Xiaodong; Qiu, Suizheng; Chen, Liang; Liu, Jinliang; Song, Jiwen; Liu, Linyue; Yang, Shaohua
2016-04-01
The magnetic proton recoil (MPR) neutron spectrometer is considered a powerful instrument for measuring deuterium-tritium (DT) neutron spectra, and is currently used in inertial confinement fusion facilities and large tokamak devices. The energy resolution (ER) and neutron detection efficiency (NDE) are the two most important parameters characterizing a neutron spectrometer. In this work, the ER calibration of the MPR spectrometer was performed using the HI-13 tandem accelerator at the China Institute of Atomic Energy (CIAE), and the NDE calibration was performed using the neutron generator at CIAE. The specific calibration techniques used in this work and the associated accuracies are discussed in detail, and the calibration results are presented along with Monte Carlo simulation results.
Zhang, Weihua; Mekarski, Pawel; Ungar, Kurt
2010-12-01
A single-channel phoswich well detector has been assessed and analysed in order to improve the beta-gamma coincidence measurement sensitivity for (131m)Xe and (133m)Xe. This newly designed phoswich well detector consists of a plastic cell (BC-404) embedded in a CsI(Tl) crystal coupled to a photomultiplier tube (PMT). It can be used to distinguish the 30.0-keV X-ray signals of (131m)Xe and (133m)Xe using their unique coincidence signatures between the conversion electrons (CEs) and the 30.0-keV X-rays. The optimum coincidence efficiency signal depends on the energy resolutions of the two CE peaks, which could be affected by the relative position of the plastic cell within the CsI(Tl), because the embedded plastic cell interrupts the scintillation light path from the CsI(Tl) crystal to the PMT. In this study, several relative positions between the embedded plastic cell and the CsI(Tl) crystal have been evaluated using Monte Carlo modeling for their effects on coincidence detection efficiency and on X-ray and CE energy resolutions. The results indicate that the energy resolution and beta-gamma coincidence counting efficiency of X-rays and CEs depend significantly on the plastic cell location inside the CsI(Tl). The degradation of X-ray and CE peak energy resolutions due to light collection losses caused by the embedded cell can be minimised. The optimum CE and X-ray energy resolution and beta-gamma coincidence efficiency, as well as ease of manufacturing, could be achieved by varying the embedded plastic cell position inside the CsI(Tl) and thereby selecting the most efficient geometry. PMID:20598559
Waldmann, Patrik; Hallander, Jon; Hoti, Fabian; Sillanpää, Mikko J
2008-06-01
Accurate and fast computation of quantitative genetic variance parameters is of great importance in both natural and breeding populations. For experimental designs with complex relationship structures it can be important to include both additive and dominance variance components in the statistical model. In this study, we introduce a Bayesian Gibbs sampling approach for estimation of additive and dominance genetic variances in the traditional infinitesimal model. The method can handle general pedigrees without inbreeding. To balance computational time against good mixing of the Markov chain Monte Carlo (MCMC) chains, we used a hybrid Gibbs sampler that combines a single-site and a blocked Gibbs sampler. The speed of the hybrid sampler and the mixing of the single-site sampler were further improved by the use of pretransformed variables. Two traits (height and trunk diameter) from a previously published diallel progeny test of Scots pine (Pinus sylvestris L.) and two large simulated data sets with different levels of dominance variance were analyzed. We also performed Bayesian model comparison on the basis of the posterior predictive loss approach. Results showed that models with both additive and dominance components had the best fit for both height and diameter and for the simulated data with high dominance. For the simulated data with low dominance, an informative prior was needed to avoid overestimation of the dominance variance component. The narrow-sense heritability estimates in the Scots pine data were lower than the earlier results, which is not surprising because the level of dominance variance was rather high, especially for diameter. In general, the hybrid sampler was considerably faster than the blocked sampler and displayed better mixing properties than the single-site sampler. PMID:18558655
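The hybrid strategy described above, alternating single-site sweeps with blocked joint draws, can be illustrated on a toy target. The sketch below samples a correlated bivariate normal; it is a minimal illustration of the mixing idea only, not the authors' additive/dominance variance-component model, and all names and settings are ours.

```python
import numpy as np

# Toy hybrid Gibbs sampler: alternate single-site conditional updates with a
# blocked joint draw, here targeting a bivariate normal with correlation rho.
# Illustrative sketch only; not the authors' quantitative-genetics model.
rng = np.random.default_rng(2)
rho = 0.95
x = np.zeros(2)
draws = []
for it in range(20000):
    if it % 2 == 0:
        # single-site sweep: each coordinate drawn from its full conditional
        x[0] = rng.normal(rho * x[1], np.sqrt(1 - rho ** 2))
        x[1] = rng.normal(rho * x[0], np.sqrt(1 - rho ** 2))
    else:
        # blocked update: draw both coordinates jointly (fast mixing)
        x = rng.multivariate_normal(np.zeros(2), [[1.0, rho], [rho, 1.0]])
    draws.append(x.copy())
draws = np.asarray(draws)
```

With high correlation, the single-site sweeps alone would mix slowly; interleaving the blocked draw restores fast mixing, which is the trade-off the hybrid sampler exploits.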
Farr, Benjamin; Kalogera, Vicky; Luijten, Erik
2014-07-01
We introduce a new Markov-chain Monte Carlo (MCMC) approach designed for the efficient sampling of highly correlated and multimodal posteriors. Parallel tempering, though effective, is a costly technique for sampling such posteriors. Our approach minimizes the use of parallel tempering, only applying it for a short time to build a proposal distribution that is based upon estimation of the kernel density and tuned to the target posterior. This proposal makes subsequent use of parallel tempering unnecessary, allowing all chains to be cooled to sample the target distribution. Gains in efficiency are found to increase with increasing posterior complexity, ranging from tens of percent in the simplest cases to over a factor of 10 for the more complex cases. Our approach is particularly useful in the context of parameter estimation of gravitational-wave signals measured by ground-based detectors, which is currently done through Bayesian inference with MCMC, one of the leading sampling methods. Posteriors for these signals are typically multimodal with strong nonlinear correlations, making sampling difficult. As we enter the advanced-detector era, improved sensitivities and wider bandwidths will drastically increase the computational cost of analyses, demanding more efficient search algorithms to meet these challenges.
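The core idea above, building a density-estimate proposal from a short exploratory run and then using it as an independence proposal in Metropolis-Hastings, can be sketched on a toy bimodal target. This is a hedged illustration with a hand-rolled Gaussian KDE; it is not the authors' gravitational-wave pipeline, and the target and training samples are synthetic.

```python
import numpy as np

# Independence Metropolis-Hastings with a KDE proposal built from an
# exploratory sample (standing in for a short parallel-tempering run).
rng = np.random.default_rng(1)

def log_target(x):
    # toy bimodal target: mixture of Gaussians at +/-3 (unnormalised)
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

# pretend these samples came from a short exploratory (tempered) run
train = np.concatenate([rng.normal(3, 1, 500), rng.normal(-3, 1, 500)])
h = 1.06 * train.std() * len(train) ** -0.2   # Silverman bandwidth

def log_kde(x):
    # log of the Gaussian KDE built on the training samples (constant dropped;
    # it cancels in the acceptance ratio)
    return np.logaddexp.reduce(-0.5 * ((x - train) / h) ** 2) - np.log(len(train) * h)

def kde_draw():
    return rng.choice(train) + h * rng.normal()

x = kde_draw()
samples = []
for _ in range(5000):
    x_new = kde_draw()
    # independence-sampler acceptance: p(x')q(x) / (p(x)q(x'))
    log_a = (log_target(x_new) - log_target(x)) + (log_kde(x) - log_kde(x_new))
    if np.log(rng.random()) < log_a:
        x = x_new
    samples.append(x)
samples = np.asarray(samples)
```

Because the proposal already covers both modes, the chain hops between them freely, which is exactly what makes further tempering unnecessary once the proposal is built.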
Vangsgaard, Anna Katrine; Mutlu, Ayten Gizem; Gernaey, Krist;
2013-01-01
steady-state in the biofilm system. For oxygen mass transfer coefficient (kLa) estimation, long-term data, removal efficiencies, and the stoichiometry of the reactions were used. For the dynamic calibration, a pragmatic model-fitting approach was used, in this case an iterative Monte Carlo based screening of the parameter space proposed by Sin et al. (2008), to find the best fit of the model to dynamic data. Finally, the calibrated model was validated with an independent data set. CONCLUSION: The presented calibration procedure is the first customized procedure for this type of system and is ...
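A Monte Carlo screening of a parameter space, in the spirit of the fitting approach referenced above, can be sketched with a toy model: draw random parameter sets over plausible ranges, score each against the data, and keep the best. The first-order decay model, parameter ranges, and synthetic "measurements" below are purely illustrative; they are not the biofilm model or its data.

```python
import numpy as np

# Monte Carlo based screening of a parameter space (illustrative sketch):
# random parameter draws are scored by sum of squared errors against data.
rng = np.random.default_rng(4)

def model(t, k, y0):
    # toy first-order decay; k and y0 are the parameters being calibrated
    return y0 * np.exp(-k * t)

t = np.linspace(0.0, 10.0, 50)
data = model(t, 0.3, 5.0) + rng.normal(0.0, 0.05, t.size)  # synthetic data

# uniform random draws over assumed plausible ranges
k_draws = rng.uniform(0.01, 1.0, 2000)
y0_draws = rng.uniform(1.0, 10.0, 2000)
sse = [np.sum((model(t, k, y0) - data) ** 2) for k, y0 in zip(k_draws, y0_draws)]
best = int(np.argmin(sse))
best_k, best_y0 = k_draws[best], y0_draws[best]
```

In an iterative variant, the ranges would be narrowed around the best-scoring region and the screening repeated.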
Optimization of Monte Carlo simulations
Bryskhe, Henrik
2009-01-01
This thesis considers several different techniques for optimizing Monte Carlo simulations. The Monte Carlo system used is Penelope, but most of the techniques are applicable to other systems. The two major techniques are the use of the graphics card for geometry calculations, and raytracing. Using the graphics card provides a very efficient way to perform fast ray-triangle intersections. Raytracing provides an approximation of the Monte Carlo simulation but is much faster to perform. A program was ...
The radiation detection efficiency of four scintillators employed, or designed to be employed, in positron emission tomography (PET) was evaluated as a function of crystal thickness by applying Monte Carlo methods. The scintillators studied were Lu2SiO5 (LSO), LuAlO3 (LuAP), Gd2SiO5 (GSO) and YAlO3 (YAP). Crystal thicknesses ranged from 0 to 50 mm. The study was performed with a previously developed photon transport Monte Carlo code. All photon track and energy histories were recorded, and the energy transferred or absorbed in the scintillator medium was calculated together with the energy redistributed and retransported as secondary characteristic fluorescence radiation. Various parameters were calculated, e.g. the fraction of the incident photon energy absorbed, transmitted or redistributed as fluorescence radiation, the scatter-to-primary ratio, and the photon and energy distribution within each scintillator block. Most significantly, the fraction of the incident photon energy absorbed was found to increase with increasing crystal thickness, tending to form a plateau above 30 mm thickness. For the LSO, LuAP, GSO and YAP scintillators, respectively, this fraction had the value of 44.8, 36.9 and 45.7% at 10 mm thickness and 96.4, 93.2 and 96.9% at 50 mm thickness. Within the plateau area, approximately (57-59)%, (59-63)%, (52-63)% and (58-61)% of this fraction was due to scattered and reabsorbed radiation for the LSO, GSO, YAP and LuAP scintillators, respectively. In all cases, a negligible fraction (<0.1%) of the absorbed energy was found to escape the crystal as fluorescence radiation.
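The growth of the absorbed fraction with crystal thickness follows from exponential photon attenuation, which a minimal Monte Carlo sketch reproduces by sampling interaction depths along a pencil beam. The attenuation coefficient below is an assumed illustrative value, not a tabulated one for any of the listed scintillators, and the sketch ignores scatter and fluorescence transport.

```python
import numpy as np

# Pencil-beam sketch of interaction fraction vs. crystal thickness: sample
# exponential free path lengths and count photons that interact within the
# crystal. mu is an assumed illustrative attenuation coefficient (1/cm).
rng = np.random.default_rng(5)
mu = 0.87                                   # assumed total attenuation, 1/cm
depths = rng.exponential(1.0 / mu, 200000)  # sampled free path lengths, cm
fracs = {t_mm: float(np.mean(depths < t_mm / 10.0)) for t_mm in (10, 30, 50)}
```

The estimated fractions rise toward a plateau with thickness, matching the analytic form 1 - exp(-mu*t) and the qualitative trend reported above.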
Sun, Shuyu
2013-06-01
This paper introduces an efficient technique to generate new molecular simulation Markov chains for different temperature and density conditions, allowing rapid extrapolation of canonical ensemble averages over a range of temperatures and densities different from the original conditions at which a single simulation is conducted. Information obtained from the original simulation is reweighted, and where necessary reconstructed, in order to extrapolate to the new conditions. The technique allows not only extrapolation to a new temperature or density, but also double extrapolation to both a new temperature and a new density. The method was implemented for a Lennard-Jones fluid with structureless particles in the single-phase gas region. Extrapolation behaviour as a function of extrapolation range was studied. The limits of the extrapolation ranges showed a remarkable capability, especially along isochores, where only reweighting is required. Various factors that could affect the limits of the extrapolation ranges were investigated and compared. In particular, these limits were shown to be sensitive to the number of particles used and to the starting point at which the simulation was originally conducted.
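The reweighting step at the heart of such extrapolation can be sketched for temperature: samples collected at inverse temperature beta0 are weighted by exp(-(beta1 - beta0) * E) to estimate an ensemble average at beta1. The energies below are synthetic stand-ins, not output of an actual Lennard-Jones simulation.

```python
import numpy as np

# Single-histogram temperature reweighting sketch: estimate <E> at a new
# inverse temperature beta1 from samples generated at beta0.
rng = np.random.default_rng(0)
beta0, beta1 = 1.0, 1.1
energies = rng.normal(-100.0, 5.0, 10000)   # mock per-sample total energies

log_w = -(beta1 - beta0) * energies         # log Boltzmann reweighting factors
log_w -= log_w.max()                        # stabilise the exponentials
w = np.exp(log_w)
mean_E_new = np.sum(w * energies) / np.sum(w)  # <E> extrapolated to beta1
```

The estimate shifts below the original mean, as expected when cooling: the reweighting favours low-energy configurations. The usable extrapolation range is limited by how far the reweighted distribution overlaps the sampled one.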
The MINOS calibration detector
This paper describes the MINOS calibration detector (CalDet) and the procedure used to calibrate it. The CalDet, a scaled-down but functionally equivalent model of the MINOS Far and Near detectors, was exposed to test beams in the CERN PS East Area during 2001-2003 to establish the response of the MINOS calorimeters to hadrons, electrons and muons in the range 0.2-10 GeV/c. The CalDet measurements are used to fix the energy scale and constrain Monte Carlo simulations of MINOS.
TARC: Carlo Rubbia's Energy Amplifier
Laurent Guiraud
1997-01-01
Transmutation by Adiabatic Resonance Crossing (TARC) is Carlo Rubbia's energy amplifier. This CERN experiment demonstrated that long-lived fission fragments, such as 99Tc, can be efficiently destroyed.
Calibration of HPGe detector for flowing sample neutron activation analysis
This work is concerned with the calibration of the HPGe detector used in the flowing-sample neutron activation analysis technique. The optimum counting configuration and half-life based correction factors were estimated using Monte Carlo computer simulations. Considering the detection efficiency, sample volume and flow pattern around the detector, the optimum geometry was achieved using a 4 mm diameter hose coiled in a spiral around the detector. The results showed that the half-life based efficiency correction factors depend strongly on the sample flow rate and the isotope half-life. (author)
Németh, Károly; Chapman, Karena W.; Balasubramanian, Mahalingam; Shyam, Badri; Chupas, Peter J.; Heald, Steve M.; Newville, Matt; Klingler, Robert J.; Winans, Randall E.; Almer, Jonathan D.; Sandi, Giselle; Srajer, George
2012-02-01
An efficient implementation of simultaneous reverse Monte Carlo (RMC) modeling of pair distribution function (PDF) and EXAFS spectra is reported. This implementation extends the technique established by Krayzman et al. [J. Appl. Cryst. 42, 867 (2009)] in that it enables simultaneous real-space fitting of the x-ray PDF, with accurate treatment of the Q-dependence of the scattering cross-sections, and of EXAFS, with multiple photoelectron scattering included. The extension also allows for atom swaps during EXAFS fits, thereby enabling modeling of the effects of chemical disorder, such as migrating atoms and vacancies. Significant acceleration of the EXAFS computation is achieved via discretization of effective path lengths and a subsequent reduction of operation counts. The validity and accuracy of the approach are illustrated on small atomic clusters and on 5500-9000 atom models of bcc-Fe and α-Fe2O3. The accuracy gains of combined simultaneous EXAFS and PDF fits are demonstrated relative to PDF-only and EXAFS-only RMC fits. Our modeling approach may be widely used in PDF and EXAFS based investigations of disordered materials.
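The RMC acceptance rule, always accept moves that improve the fit and sometimes accept moves that worsen it, can be sketched on a toy one-dimensional "structure". The observable (nearest-neighbour spacing on a chain) and the uncertainty sigma are illustrative stand-ins for PDF/EXAFS data, not the authors' setup.

```python
import numpy as np

# Toy reverse Monte Carlo loop: perturb one "atom", recompute the misfit,
# and accept with probability min(1, exp(-dchi2 / (2*sigma**2))).
rng = np.random.default_rng(3)
n = 20
pos = np.cumsum(rng.uniform(0.5, 1.5, n))   # disordered 1D chain of atoms
sigma = 0.1                                  # assumed data uncertainty

def chi2(p):
    # misfit of nearest-neighbour spacings against a target spacing of 1.0
    return np.sum((np.diff(np.sort(p)) - 1.0) ** 2)

c0 = c = chi2(pos)
for _ in range(20000):
    trial = pos.copy()
    trial[rng.integers(n)] += rng.normal(0.0, 0.1)  # move one atom
    c_new = chi2(trial)
    # RMC acceptance: improvements always, worse fits with Boltzmann-like prob
    if c_new < c or rng.random() < np.exp(-(c_new - c) / (2 * sigma ** 2)):
        pos, c = trial, c_new
```

In a simultaneous PDF+EXAFS fit, chi2 would be the weighted sum of both data misfits, and the move set would include the atom swaps described above in addition to displacements.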
Li, Yong Gang; Yang, Yang; Short, Michael P.; Ding, Ze Jun; Zeng, Zhi; Li, Ju
2015-12-01
SRIM-like codes have limitations in describing general 3D geometries when modeling radiation displacements and damage in nanostructured materials. A universal, computationally efficient and massively parallel 3D Monte Carlo code, IM3D, has been developed with excellent parallel scaling performance. IM3D is based on fast indexing of scattering integrals and the SRIM stopping power database, and allows the user a choice of Constructive Solid Geometry (CSG) or Finite Element Triangle Mesh (FETM) methods for constructing 3D shapes and microstructures. For 2D films and multilayers, IM3D perfectly reproduces SRIM results, and can be ∼10^2 times faster in serial execution and >10^4 times faster using parallel computation. For 3D problems, it provides a fast approach for analyzing the spatial distributions of primary displacements and defect generation under ion irradiation. Herein we also provide a detailed discussion of our open-source collision cascade physics engine, revealing the true meaning and limitations of the "Quick Kinchin-Pease" and "Full Cascades" options. The issues of femtosecond-to-picosecond timescales in defining displacement versus damage, and the limitations of the displacements-per-atom (DPA) unit in quantifying radiation damage (such as its inadequacy in quantifying the degree of chemical mixing), are discussed.