Monte Carlo efficiency calibration of a neutron generator-based total-body irradiator
International Nuclear Information System (INIS)
Shypailo, R.J.; Ellis, K.J.
2009-01-01
Many body composition measurement systems are calibrated against a single-sized reference phantom. Prompt-gamma neutron activation (PGNA) provides the only direct measure of total body nitrogen (TBN), an index of the body's lean tissue mass. In PGNA systems, body size influences neutron flux attenuation, induced gamma signal distribution, and counting efficiency. Thus, calibration based on a single-sized phantom could result in inaccurate TBN values. We used Monte Carlo simulations (MCNP-5; Los Alamos National Laboratory) to map a system's response to the range of body weights (65-160 kg) and body fat distributions (25-60%) in obese humans. Calibration curves were constructed to derive body-size correction factors relative to a standard reference phantom, providing customized adjustments to account for differences in body habitus of obese adults. The use of MCNP-generated calibration curves should allow for a better estimate of the true changes in lean tissue mass that may occur during intervention programs focused only on weight loss. (author)
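The record above describes deriving body-size correction factors from MC-generated calibration curves. A minimal sketch of how such factors might be applied is given below; the calibration points are entirely hypothetical stand-ins, not the paper's MCNP-derived values.

```python
# Sketch: applying body-size correction factors derived from MC calibration
# curves. The (weight -> relative response) points below are assumed values,
# normalized to a reference phantom (factor 1.0).
from bisect import bisect_left

calib = [(65, 1.00), (90, 0.94), (120, 0.88), (160, 0.81)]  # hypothetical

def correction_factor(weight_kg):
    """Linear interpolation between MC-generated calibration points."""
    xs = [w for w, _ in calib]
    ys = [f for _, f in calib]
    if weight_kg <= xs[0]:
        return ys[0]
    if weight_kg >= xs[-1]:
        return ys[-1]
    i = bisect_left(xs, weight_kg)
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (weight_kg - x0) / (x1 - x0)

def corrected_tbn(tbn_raw, weight_kg):
    """TBN adjusted for body habitus relative to the reference phantom."""
    return tbn_raw / correction_factor(weight_kg)
```

In practice the curves would be functions of both weight and fat fraction; a one-dimensional interpolation is shown only to make the correction step concrete.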
Energy Technology Data Exchange (ETDEWEB)
Zhai, Y. John [Vanderbilt University, Vanderbilt-Ingram Cancer Center, Nashville, TN 37232 (United States)
2016-06-15
Purpose: To obtain an improved, precise gamma efficiency calibration curve of an HPGe (High-Purity Germanium) detector with a new comprehensive approach. Methods: Both radioactive sources and Monte Carlo simulation (CYLTRAN) are used to determine the HPGe gamma efficiency over the energy range of 0-8 MeV. The HPGe is a GMX coaxial 280 cm³ N-type 70% gamma detector. Using the Momentum Achromat Recoil Spectrometer (MARS) at the K500 superconducting cyclotron of Texas A&M University, the radioactive nucleus ²⁴Al was produced and separated. This nucleus has positron decays followed by gamma transitions up to 8 MeV from ²⁴Mg excited states, which are used for the HPGe efficiency calibration. Results: With the ²⁴Al gamma energy spectrum up to 8 MeV, the efficiency for the 7.07 MeV γ ray at 4.9 cm from the ²⁴Al source was obtained at a value of 0.194(4)%, by carefully considering various factors such as positron annihilation, the peak summing effect, beta detector efficiency and the internal conversion effect. The Monte Carlo simulation (CYLTRAN) gave a value of 0.189%, in agreement with the experimental measurement. Applying the same procedure to different energy points, a precise efficiency calibration curve of the HPGe detector up to 7.07 MeV at 4.9 cm from the ²⁴Al source was obtained. Using the same data analysis procedure, the efficiency for the 7.07 MeV gamma ray at 15.1 cm from the source was obtained at a value of 0.0387(6)%; the MC simulation gave a similar value of 0.0395%. This discrepancy led us to assign an uncertainty of 3% to the efficiency at 15.1 cm up to 7.07 MeV. The MC calculations also reproduced the intensity of the observed single- and double-escape peaks, provided that the effects of positron annihilation in flight were incorporated. Conclusion: The precision-improved gamma efficiency calibration curve provides more accurate radiation detection and dose calculation for cancer radiotherapy treatment.
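The quoted 0.194(4)% is a full-energy-peak efficiency: peak counts divided by the number of decays times the gamma emission probability, with independent uncertainties combined in quadrature. A sketch with hypothetical counts, not the paper's raw data:

```python
import math

def peak_efficiency(n_peak, n_decays, i_gamma):
    """Full-energy-peak efficiency = peak counts / (decays x emission prob.)."""
    return n_peak / (n_decays * i_gamma)

def rel_unc(*rel_terms):
    """Combine independent relative uncertainties in quadrature."""
    return math.sqrt(sum(r * r for r in rel_terms))

# Hypothetical numbers chosen only to land near the quoted 0.194% scale:
eps = peak_efficiency(n_peak=1940, n_decays=1_000_000, i_gamma=1.0)
u = eps * rel_unc(1 / math.sqrt(1940), 0.01)  # counting stats + assumed 1% syst.
print(f"efficiency = {eps:.5f} +/- {u:.5f}")
```

The paper's analysis additionally corrects for summing, beta-detector efficiency and annihilation effects, which this sketch folds into the single systematic term.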
Monte Carlo efficiency calibration of a neutron generator-based total-body irradiator
The increasing prevalence of obesity world-wide has focused attention on the need for accurate body composition assessments, especially of large subjects. However, many body composition measurement systems are calibrated against a single-sized phantom, often based on the standard Reference Man mode...
International Nuclear Information System (INIS)
Baccouche, S.; Al-Azmi, D.; Karunakara, N.; Trabelsi, A.
2012-01-01
Gamma-ray measurements in terrestrial/environmental samples require highly efficient detectors because of the low radionuclide activity concentrations in the samples; scintillators are therefore suitable for this purpose. Two scintillation detectors of identical size, CsI(Tl) and NaI(Tl), were studied in this work for the measurement of terrestrial samples. This work describes a Monte Carlo method for making the full-energy efficiency calibration curves for both detectors using gamma-ray energies associated with the decay of naturally occurring radionuclides 137 Cs (661 keV), 40 K (1460 keV), 238 U (214 Bi, 1764 keV) and 232 Th (208 Tl, 2614 keV), which are found in terrestrial samples. The magnitude of the coincidence summing effect occurring for the 2614 keV emission of 208 Tl is assessed by simulation. The method provides an efficient tool to make the full-energy efficiency calibration curve for scintillation detectors for any sample geometry and volume in order to determine accurate activity concentrations in terrestrial samples. - Highlights: ► CsI(Tl) and NaI(Tl) detectors were studied for the measurement of terrestrial samples. ► A Monte Carlo method was used for efficiency calibration using natural gamma-emitting terrestrial radionuclides. ► The coincidence summing effect occurring for the 2614 keV emission of 208 Tl is assessed by simulation.
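The coincidence summing effect assessed in the record above can be illustrated with a first-order summing-out correction: counts are lost from a full-energy peak whenever a coincident gamma is also detected. The emission probability and total efficiency below are assumed placeholders, not values from the paper.

```python
def summing_out_correction(coincident):
    """
    First-order summing-out correction factor for a full-energy peak.
    coincident: list of (emission probability of the coincident gamma along
    the same decay path, total detection efficiency of that gamma).
    Values supplied by the caller are assumptions for illustration.
    """
    loss = sum(p * eff_tot for p, eff_tot in coincident)
    return 1.0 / (1.0 - loss)

# e.g. the 2614 keV line of 208 Tl summed out by the coincident 583 keV line,
# with assumed p = 0.85 and total efficiency 0.05 for a close geometry:
corr = summing_out_correction([(0.85, 0.05)])
```

Close sample-on-detector geometries make the total efficiency, and hence the correction, larger, which is why the paper evaluates it by simulation for each geometry.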
Energy Technology Data Exchange (ETDEWEB)
Baccouche, S., E-mail: souad.baccouche@cnstn.rnrt.tn [UR-MDTN, National Center for Nuclear Sciences and Technology, Technopole Sidi Thabet, 2020 Sidi Thabet (Tunisia); Al-Azmi, D., E-mail: ds.alazmi@paaet.edu.kw [Department of Applied Sciences, College of Technological Studies, Public Authority for Applied Education and Training, Shuwaikh, P.O. Box 42325, Code 70654 (Kuwait); Karunakara, N., E-mail: karunakara_n@yahoo.com [University Science Instrumentation Centre, Mangalore University, Mangalagangotri 574199 (India); Trabelsi, A., E-mail: adel.trabelsi@fst.rnu.tn [UR-MDTN, National Center for Nuclear Sciences and Technology, Technopole Sidi Thabet, 2020 Sidi Thabet (Tunisia); UR-UPNHE, Faculty of Sciences of Tunis, El-Manar University, 2092 Tunis (Tunisia)
2012-01-15
Gamma-ray measurements in terrestrial/environmental samples require highly efficient detectors because of the low radionuclide activity concentrations in the samples; scintillators are therefore suitable for this purpose. Two scintillation detectors of identical size, CsI(Tl) and NaI(Tl), were studied in this work for the measurement of terrestrial samples. This work describes a Monte Carlo method for making the full-energy efficiency calibration curves for both detectors using gamma-ray energies associated with the decay of naturally occurring radionuclides ¹³⁷Cs (661 keV), ⁴⁰K (1460 keV), ²³⁸U (²¹⁴Bi, 1764 keV) and ²³²Th (²⁰⁸Tl, 2614 keV), which are found in terrestrial samples. The magnitude of the coincidence summing effect occurring for the 2614 keV emission of ²⁰⁸Tl is assessed by simulation. The method provides an efficient tool to make the full-energy efficiency calibration curve for scintillation detectors for any sample geometry and volume in order to determine accurate activity concentrations in terrestrial samples. - Highlights: ► CsI(Tl) and NaI(Tl) detectors were studied for the measurement of terrestrial samples. ► A Monte Carlo method was used for efficiency calibration using natural gamma-emitting terrestrial radionuclides. ► The coincidence summing effect occurring for the 2614 keV emission of ²⁰⁸Tl is assessed by simulation.
Monte Carlo simulation: tool for the calibration in analytical determination of radionuclides
International Nuclear Information System (INIS)
Gonzalez, Jorge A. Carrazana; Ferrera, Eduardo A. Capote; Gomez, Isis M. Fernandez; Castro, Gloria V. Rodriguez; Ricardo, Niury Martinez
2013-01-01
This work shows how the traceability of analytical determinations is established using this calibration method. It highlights the advantages offered by Monte Carlo simulation for applying corrections for differences in chemical composition, density and height of the analyzed samples. Likewise, the results obtained by the LVRA in two exercises organized by the International Atomic Energy Agency (IAEA) are presented. In these exercises (an intercomparison and a proficiency test), all reported analytical results were obtained from efficiency calibrations by Monte Carlo simulation using the DETEFF program
Detector characterization for efficiency calibration in different measurement geometries
International Nuclear Information System (INIS)
Toma, M.; Dinescu, L.; Sima, O.
2005-01-01
In order to perform an accurate efficiency calibration for different measurement geometries, a good knowledge of the detector characteristics is required. The Monte Carlo simulation program GESPECOR is applied. The detector characterization required for the Monte Carlo simulation is achieved using the efficiency values obtained from measuring a point source. The point source was measured in two significant geometries: placed in a vertical plane containing the vertical symmetry axis of the detector, and in a horizontal plane containing the centre of the active volume of the detector. The measurements were made using the gamma spectrometry technique. (authors)
Absolute efficiency calibration of HPGe detector by simulation method
International Nuclear Information System (INIS)
Narayani, K.; Pant, Amar D.; Verma, Amit K.; Bhosale, N.A.; Anilkumar, S.
2018-01-01
High-resolution gamma-ray spectrometry with HPGe detectors is a powerful radioanalytical technique for estimating the activity of various radionuclides. In the present work, absolute efficiency calibration of the HPGe detector was carried out using the Monte Carlo simulation technique, and the results are compared with those obtained experimentally using standard radionuclides 152 Eu and 133 Ba. The coincidence summing correction factors for the measurement of these nuclides were also calculated.
Calibration simulation. A calibration Monte-Carlo program for the OPAL jet chamber
International Nuclear Information System (INIS)
Biebel, O.
1989-12-01
A calibration Monte Carlo program has been developed as a tool to investigate the interdependence of track reconstruction and calibration constants. Three categories of calibration effects have been considered: The precise knowledge of sense wire positions, necessary to reconstruct the particle trajectories in the jet chamber. Included are the staggering and the sag of the sense wires as well as tilts and rotations of their support structures. The various contributions to the measured drift time, with special emphasis on the aberration due to the track angle and the presence of a transverse magnetic field. A very precise knowledge of the drift velocity and the Lorentz angle of the drift paths with respect to the drift field is also required. The effects degrading particle identification via energy loss dE/dx. Impurities of the gas mixture and saturation effects depending on the track angle as well as the influence of the pulse-shaping electronics have been studied. These effects have been parametrised with coefficients corresponding to the calibration constants required for track reconstruction. Excellent agreement with the input data has been achieved when determining calibration constants from Monte Carlo data generated with these parametrisations. (orig.) [de
Energy Technology Data Exchange (ETDEWEB)
Gonzalez, Jorge A. Carrazana; Ferrera, Eduardo A. Capote; Gomez, Isis M. Fernandez; Castro, Gloria V. Rodriguez; Ricardo, Niury Martinez, E-mail: cphr@cphr.edu.cu [Centro de Proteccion e Higiene de las Radiaciones (CPHR), La Habana (Cuba)
2013-07-01
This work shows how is established the traceability of the analytical determinations using this calibration method. Highlights the advantages offered by Monte Carlo simulation for the application of corrections by differences in chemical composition, density and height of the samples analyzed. Likewise, the results obtained by the LVRA in two exercises organized by the International Agency for Atomic Energy (IAEA) are presented. In these exercises (an intercomparison and a proficiency test) all reported analytical results were obtained based on calibrations in efficiency by Monte Carlo simulation using the DETEFF program.
Results of Monte Carlo calibrations of a low energy germanium detector
International Nuclear Information System (INIS)
Brettner-Messler, R.; Brettner-Messler, R.; Maringer, F.J.
2006-01-01
Normally, measurements of the peak efficiency of a gamma-ray detector are performed with calibrated samples prepared to match the measured ones in all important characteristics, such as volume, chemical composition and density. Another way to determine the peak efficiency is to calculate it with dedicated Monte Carlo programs. In principle, the program 'Pencyl' from the PENELOPE 2003 code system can be used for the peak efficiency calibration of a cylindrically symmetric detector; however, exact data on the geometries and materials are needed. The interpretation of the simulation results is not straightforward, but we found a way to convert the data into values that can be compared with our measurement results. It is possible to find other simulation parameters that yield the same or better results. Further improvements can be expected from longer simulation times and from additional simulations in the questionable ranges of densities and filling heights. (N.C.)
International Nuclear Information System (INIS)
Solc, J.; Suran, J.; Novotna, M.; Pavlis, J.
2008-01-01
The contribution describes a technique of determination of calibration coefficients of a radioactivity monitor using Monte Carlo calculations. The monitor is installed at the NPP Temelin adjacent to lines with a radioactive medium. The output quantity is the activity concentration (in Bq/m3) that is converted from the number of counts per minute measured by the monitor. The value of this conversion constant, i.e. calibration coefficient, was calculated for gamma photons emitted by Co-60 and compared to the data stated by the manufacturer and supplier of these monitors, General Atomic Electronic Systems, Inc., USA. Results of the comparison show very good agreement between calculations and manufacturer data; the differences are lower than the quadratic sum of uncertainties. (authors)
Sourceless efficiency calibration for HPGe detector based on medical images
International Nuclear Information System (INIS)
Chen Chaobin; She Ruogu; Xiao Gang; Zuo Li
2012-01-01
A digital phantom of the patient and a region of interest (assumed to be filled with an isotropic volume source) are built from medical CT images. They are used to calculate the detection efficiency of HPGe detectors located outside the human body by a sourceless calibration method, based on a fast integral technique and on the MCNP code respectively; the results from the two codes are in good agreement apart from a maximum difference of about 5% in the intermediate energy region. The software produced in this work outperforms the Monte Carlo code not only in computing time but also in the complexity of the problems it can handle. (authors)
Calibration of detector efficiency of neutron detector
International Nuclear Information System (INIS)
Guo Hongsheng; He Xijun; Xu Rongkun; Peng Taiping
2001-01-01
A BF₃ neutron detector has been set up. Its detection efficiency was calibrated by the associated-particle technique and is about 3.17 × 10⁻⁴ (1 ± 18%). The neutron yield of a neutron generator per pulse (10⁷/pulse) is measured using the detector.
Hydrogen analysis depth calibration by CORTEO Monte-Carlo simulation
Energy Technology Data Exchange (ETDEWEB)
Moser, M., E-mail: marcus.moser@unibw.de [Universität der Bundeswehr München, Institut für Angewandte Physik und Messtechnik LRT2, Fakultät für Luft- und Raumfahrttechnik, 85577 Neubiberg (Germany); Reichart, P.; Bergmaier, A.; Greubel, C. [Universität der Bundeswehr München, Institut für Angewandte Physik und Messtechnik LRT2, Fakultät für Luft- und Raumfahrttechnik, 85577 Neubiberg (Germany); Schiettekatte, F. [Université de Montréal, Département de Physique, Montréal, QC H3C 3J7 (Canada); Dollinger, G., E-mail: guenther.dollinger@unibw.de [Universität der Bundeswehr München, Institut für Angewandte Physik und Messtechnik LRT2, Fakultät für Luft- und Raumfahrttechnik, 85577 Neubiberg (Germany)
2016-03-15
Hydrogen imaging with sub-μm lateral resolution and sub-ppm sensitivity has become possible with coincident proton–proton (pp) scattering analysis (Reichart et al., 2004). Depth information is evaluated from the energy sum signal with respect to the energy loss of both protons on their path through the sample. To first order, there is no angular dependence due to elastic scattering. To second order, a path length effect due to different energy loss on the paths of the protons causes an angular dependence of the energy sum. Therefore, the energy sum signal has to be de-convoluted depending on the matrix composition, i.e. mainly the atomic number Z, in order to get a depth-calibrated hydrogen profile. Although the path effect can be calculated analytically to first order, multiple scattering effects lead to significant deviations in the depth profile. Hence, in our new approach, we use the CORTEO Monte Carlo code (Schiettekatte, 2008) in order to calculate the depth of a coincidence event depending on the scattering angle. The code takes the individual detector geometry into account. In this paper we show that the code correctly reproduces measured pp-scattering energy spectra when roughness effects are considered. With more than 100 μm thick Mylar-sandwich targets (Si, Fe, Ge) we demonstrate the deconvolution of the energy spectra on our current multistrip detector at the microprobe SNAKE at the Munich tandem accelerator lab. As a result, hydrogen profiles can be evaluated with an accuracy in depth of about 1% of the sample thickness.
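The first-order path-length effect described above admits a closed-form depth estimate. The sketch below assumes transmission geometry, symmetric 45° scattering and a constant stopping power; these simplifications are exactly what breaks down under multiple scattering, which is why the authors resort to the CORTEO Monte Carlo code.

```python
import math

def depth_from_energy_sum(e0, e_sum, s, t, theta=math.radians(45.0)):
    """
    First-order depth estimate for coincident pp scattering in transmission:
    the incoming proton loses s*d before the scattering vertex at depth d,
    and both outgoing protons traverse (t - d)/cos(theta) to the exit face.
    s: assumed constant stopping power (energy per unit depth), t: thickness.
    """
    de = e0 - e_sum                  # measured energy-sum deficit
    a = 2.0 / math.cos(theta)        # combined outgoing path factor
    # de = s*d + a*s*(t - d)  ->  solve linearly for d
    return (de / s - a * t) / (1.0 - a)

# hypothetical numbers: 20 MeV beam, 100 um target, s = 0.01 MeV/um
d = depth_from_energy_sum(e0=20.0, e_sum=17.5, s=0.01, t=100.0)
```

In the real analysis s varies with proton energy and matrix composition (the Z dependence mentioned above), so the linear solve becomes an integral inversion.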
High precision efficiency calibration of a HPGe detector
International Nuclear Information System (INIS)
Nica, N.; Hardy, J.C.; Iacob, V.E.; Helmer, R.G.
2003-01-01
Many experiments involving measurements of γ rays require a very precise efficiency calibration. Since γ-ray detection and identification also require good energy resolution, the most commonly used detectors are of the coaxial HPGe type. We have calibrated our 70% HPGe to ∼0.2% precision, motivated by the measurement of precise branching ratios (BR) in superallowed 0⁺ → 0⁺ β decays. These BRs are essential ingredients in extracting the ft-values needed to test the Standard Model via the unitarity of the Cabibbo-Kobayashi-Maskawa matrix, a test that it currently fails by more than two standard deviations. To achieve the required high precision in our efficiency calibration, we measured 17 radioactive sources at a source-detector distance of 15 cm. Some of these were commercial 'standard' sources, but we achieved the highest relative precision with 'home-made' sources selected because they have simple decay schemes with negligible side feeding, thus providing exactly matched γ-ray intensities. These latter sources were produced by us at Texas A&M by n-activation or by nuclear reactions. Another critical source among the 17 was a 60 Co source produced by Physikalisch-Technische Bundesanstalt, Braunschweig, Germany: its absolute activity was quoted to better than 0.06%. We used it to establish our absolute efficiency, while all the other sources were used to determine relative efficiencies, extending our calibration over a large energy range (40-3500 keV). Efficiencies were also determined with Monte Carlo calculations performed with the CYLTRAN code. The physical parameters of the Ge crystal were independently determined, and only two (unmeasurable) dead layers were adjusted, within physically reasonable limits, to achieve precise absolute agreement with our measured efficiencies. The combination of measured efficiencies at more than 60 individual energies and Monte Carlo calculations to interpolate between them allows us to quote the efficiency of our
Lu, Dan; Ricciuto, Daniel; Walker, Anthony; Safta, Cosmin; Munger, William
2017-09-01
Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC) sampling provides a comprehensive framework to estimate model parameters and associated uncertainties using their posterior distributions. The effectiveness and efficiency of the method strongly depend on the MCMC algorithm used. In this work, a differential evolution adaptive Metropolis (DREAM) algorithm is used to estimate posterior distributions of 21 parameters for the data assimilation linked ecosystem carbon (DALEC) model using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. The calibration with DREAM results in a better model fit and predictive performance compared to the popular adaptive Metropolis (AM) scheme. Moreover, DREAM indicates that two parameters controlling autumn phenology have multiple modes in their posterior distributions, while AM identifies only one mode. The application suggests that DREAM is well suited to calibrate complex terrestrial ecosystem models, where the number of uncertain parameters is usually large and the existence of local optima is always a concern. In addition, residual analysis in this effort justifies the assumptions of the error model used in the Bayesian calibration. The result indicates that a heteroscedastic, correlated, Gaussian error model is appropriate for the problem, and the likelihood function constructed from it can alleviate the underestimation of parameter uncertainty that is usually caused by using uncorrelated error models.
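For contrast with DREAM, a plain random-walk Metropolis sampler (the family that the AM scheme extends) on a toy one-dimensional posterior might look like the sketch below; it is illustrative only and has nothing of the paper's DALEC setup.

```python
import math
import random

def metropolis(log_post, x0, step, n, seed=1):
    """Plain random-walk Metropolis sampler. DREAM improves on this family by
    generating proposals from differences of parallel chains, which helps it
    find multiple posterior modes such as the phenology parameters above."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)          # symmetric Gaussian proposal
        lpp = log_post(xp)
        if rng.random() < math.exp(min(0.0, lpp - lp)):  # accept/reject
            x, lp = xp, lpp
        chain.append(x)
    return chain

# toy posterior: standard normal log-density (up to an additive constant)
chain = metropolis(lambda x: -0.5 * x * x, x0=0.0, step=1.0, n=20000)
mean = sum(chain) / len(chain)
```

A single-chain sampler like this can sit in one mode indefinitely; multi-chain, adaptive proposals are the practical remedy the paper relies on.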
Efficiency and accuracy of Monte Carlo (importance) sampling
Waarts, P.H.
2003-01-01
Monte Carlo analysis is often regarded as the simplest and most accurate reliability method. Besides, it is the most transparent method. The only problem is the trade-off between accuracy and efficiency: Monte Carlo becomes less efficient, or less accurate, when very low probabilities are to be computed
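The low-probability problem named above is exactly what importance sampling addresses: draw from a proposal concentrated in the rare-event region and reweight by the density ratio. A self-contained sketch for a Gaussian tail probability, where naive sampling would need millions of draws:

```python
import math
import random

def tail_prob_is(threshold, n=100_000, seed=2):
    """Importance-sampling estimate of p = P(X > threshold) for X ~ N(0,1).
    Samples are drawn from the shifted proposal N(threshold, 1) and each hit
    is reweighted by the density ratio
    w(x) = phi(x) / phi(x - threshold) = exp(threshold**2 / 2 - threshold*x)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)
        if x > threshold:
            acc += math.exp(0.5 * threshold ** 2 - threshold * x)
    return acc / n

p = tail_prob_is(4.0)   # exact value is about 3.17e-5
```

With the proposal centred on the threshold, roughly half the samples land in the rare region, so the estimator converges with orders of magnitude fewer draws than crude Monte Carlo.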
Calibration of lung counter using a CT model of Torso phantom and Monte Carlo method
International Nuclear Information System (INIS)
Zhang Binquan; Ma Jizeng; Yang Duanjie; Liu Liye; Cheng Jianping
2006-01-01
A tomography image of a Torso phantom was obtained from a CT scan. The Torso phantom represents the trunk of an adult man 170 cm tall and weighing 65 kg. After these images were segmented, cropped, and resized, a 3-dimensional voxel phantom was created. The voxel phantom includes more than 2 million voxels, whose size was 2.73 mm x 2.73 mm x 3 mm. This model could be used for the calibration of a lung counter with the Monte Carlo method. On the assumption that radioactive material was homogeneously distributed throughout the lung, counting efficiencies of a HPGe detector at different positions were calculated for different adipose mass fractions (AMF) in the soft tissue of the chest. The results showed that counting efficiencies of the lung counter changed by up to 67% for the 17.5 keV γ ray and 20% for the 25 keV γ ray when the AMF changed from 0 to 40%. (authors)
High-precision efficiency calibration of a high-purity co-axial germanium detector
Energy Technology Data Exchange (ETDEWEB)
Blank, B., E-mail: blank@cenbg.in2p3.fr [Centre d' Etudes Nucléaires de Bordeaux Gradignan, UMR 5797, CNRS/IN2P3, Université de Bordeaux, Chemin du Solarium, BP 120, 33175 Gradignan Cedex (France); Souin, J.; Ascher, P.; Audirac, L.; Canchel, G.; Gerbaux, M.; Grévy, S.; Giovinazzo, J.; Guérin, H.; Nieto, T. Kurtukian; Matea, I. [Centre d' Etudes Nucléaires de Bordeaux Gradignan, UMR 5797, CNRS/IN2P3, Université de Bordeaux, Chemin du Solarium, BP 120, 33175 Gradignan Cedex (France); Bouzomita, H.; Delahaye, P.; Grinyer, G.F.; Thomas, J.C. [Grand Accélérateur National d' Ions Lourds, CEA/DSM, CNRS/IN2P3, Bvd Henri Becquerel, BP 55027, F-14076 CAEN Cedex 5 (France)
2015-03-11
A high-purity co-axial germanium detector has been calibrated in efficiency to a precision of about 0.15% over a wide energy range. High-precision scans of the detector crystal and γ-ray source measurements have been compared to Monte-Carlo simulations to adjust the dimensions of a detector model. For this purpose, standard calibration sources and short-lived online sources have been used. The resulting efficiency calibration reaches the precision needed e.g. for branching ratio measurements of super-allowed β decays for tests of the weak-interaction standard model.
International Nuclear Information System (INIS)
Ramos M, D.; Yera S, Y.; Lopez B, G. M.; Acosta R, N.; Vergara G, A.
2014-08-01
This work determines the detection efficiency for 125 I and 131 I in the thyroid with the identiFINDER detector using the Monte Carlo method. The suitability of the calibration method is analyzed by comparing the results of the direct Monte Carlo method with the corrected one; the latter was chosen because its differences from the real efficiency stayed below 10%. To simulate the detector, its geometric parameters were optimized using a tomographic study, which allowed the uncertainties of the estimates to be minimized. Finally, simulations of the detector and a point source were performed to find the correction factors at 5 cm, 15 cm and 25 cm, and those corresponding to the detector-phantom arrangement, for validation of the method and the final calculation of the efficiency. These showed that if the Monte Carlo simulation is performed at a greater distance than that used in the laboratory measurements, the efficiency is overestimated, while at a shorter distance it is underestimated; the simulation should therefore be performed at the same distance at which the actual measurement will be made. Efficiency curves and the minimum detectable activity for the measurement of 131 I and 125 I were also obtained. Overall, the Monte Carlo methodology was implemented for the identiFINDER calibration with the purpose of estimating the measured activity of iodine in the thyroid. This method represents an ideal way to compensate for the lack of standard solutions and phantoms, ensuring that the capabilities of the Internal Contamination Laboratory of the Centro de Proteccion e Higiene de las Radiaciones remain calibrated for iodine measurement in the thyroid. (author)
Calibration and Monte Carlo modelling of neutron long counters
Tagziria, H
2000-01-01
The Monte Carlo technique has become a very powerful tool in radiation transport as full advantage is taken of enhanced cross-section data, more powerful computers and statistical techniques, together with better characterisation of neutron and photon source spectra. At the National Physical Laboratory, calculations using the Monte Carlo radiation transport code MCNP-4B have been combined with accurate measurements to characterise two long counters routinely used to standardise monoenergetic neutron fields. New and more accurate response function curves have been produced for both long counters. A novel approach using Monte Carlo methods has been developed, validated and used to model the response function of the counters and determine more accurately their effective centres, which have always been difficult to establish experimentally. Calculations and measurements agree well, especially for the De Pangher long counter for which details of the design and constructional material are well known. The sensitivit...
Time step length versus efficiency of Monte Carlo burnup calculations
International Nuclear Information System (INIS)
Dufek, Jan; Valtavirta, Ville
2014-01-01
Highlights: • Time step length largely affects efficiency of MC burnup calculations. • Efficiency of MC burnup calculations improves with decreasing time step length. • Results were obtained from SIE-based Monte Carlo burnup calculations. - Abstract: We demonstrate that efficiency of Monte Carlo burnup calculations can be largely affected by the selected time step length. This study employs the stochastic implicit Euler based coupling scheme for Monte Carlo burnup calculations that performs a number of inner iteration steps within each time step. In a series of calculations, we vary the time step length and the number of inner iteration steps; the results suggest that Monte Carlo burnup calculations get more efficient as the time step length is reduced. More time steps must be simulated as they get shorter; however, this is more than compensated by the decrease in computing cost per time step needed for achieving a certain accuracy
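The time-step effect can be illustrated with the simplest possible depletion model: a single nuclide with a constant one-group reaction rate, advanced by explicit Euler. This toy sketch only mirrors the qualitative conclusion (shorter steps, smaller coupling error); it is not the stochastic implicit Euler scheme of the paper.

```python
import math

def deplete_euler(n0, lam, t_end, n_steps):
    """Explicit Euler depletion of dN/dt = -lam * N, the simplest stand-in
    for the flux-nuclide coupling in a Monte Carlo burnup calculation: the
    reaction rate (folded into lam) is held constant over each time step."""
    n, dt = n0, t_end / n_steps
    for _ in range(n_steps):
        n -= lam * n * dt
    return n

exact = math.exp(-1.0)                 # n0 = 1, lam = 1, t_end = 1
err_coarse = abs(deplete_euler(1.0, 1.0, 1.0, 4) - exact)
err_fine = abs(deplete_euler(1.0, 1.0, 1.0, 64) - exact)
# shorter steps -> smaller discretization error, at higher total cost,
# echoing the trade-off the paper studies for SIE inner iterations
```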
The peak efficiency calibration of volume source using 152Eu point source in computer
International Nuclear Information System (INIS)
Shen Tingyun; Qian Jianfu; Nan Qinliang; Zhou Yanguo
1997-01-01
The author describes a method for the peak efficiency calibration of a volume source by means of a 152 Eu point source for an HPGe γ spectrometer. The peak efficiency can be computed by Monte Carlo simulation after inputting the detector parameters. The computed results agree with the experimental results within ±3.8%, with one exception of about ±7.4%.
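Transferring a point-source calibration to other geometries rests on Monte Carlo estimates of the detection probability. A minimal sketch of the geometric part is given below: solid-angle sampling for an on-axis point source and a disk detector face. Intrinsic efficiency and the volume-source integration are deliberately omitted.

```python
import math
import random

def geometric_efficiency(distance, radius, n=200_000, seed=3):
    """Monte Carlo estimate of the solid-angle (geometric) efficiency of a
    disk detector face of given radius for an on-axis point source at
    'distance'. Emission is isotropic: cos(theta) is uniform on [-1, 1],
    and a hit requires cos(theta) above the cone limit toward the face."""
    rng = random.Random(seed)
    cos_max = distance / math.hypot(distance, radius)
    hits = sum(1 for _ in range(n) if rng.uniform(-1.0, 1.0) > cos_max)
    return hits / n

eff = geometric_efficiency(distance=10.0, radius=3.0)
# analytic check: 0.5 * (1 - d / sqrt(d*d + r*r)) ~ 0.0211 for d=10, r=3
```

A full simulation such as the one in the record tracks each photon into the crystal and scores full-energy depositions, which is what turns this geometric factor into a peak efficiency.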
Calibration of the top-quark Monte-Carlo mass
International Nuclear Information System (INIS)
Kieseler, Jan; Lipka, Katerina; Moch, Sven-Olaf
2015-11-01
We present a method to establish experimentally the relation between the top-quark mass m_t^MC as implemented in Monte-Carlo generators and the Lagrangian mass parameter m_t in a theoretically well-defined renormalization scheme. We propose a simultaneous fit of m_t^MC and an observable sensitive to m_t, which does not rely on any prior assumptions about the relation between m_t and m_t^MC. The measured observable is independent of m_t^MC and can be used subsequently for a determination of m_t. The analysis strategy is illustrated with examples for the extraction of m_t from inclusive and differential cross sections for hadro-production of top-quarks.
A Monte Carlo modeling alternative for the API Gamma Ray Calibration Facility
International Nuclear Information System (INIS)
Galford, J.E.
2017-01-01
The gamma ray pit at the API Calibration Facility, located on the University of Houston campus, defines the API unit for natural gamma ray logs used throughout the petroleum logging industry. Future use of the facility is uncertain. An alternative method is proposed to preserve the gamma ray API unit definition as an industry standard by using Monte Carlo modeling to obtain accurate counting rate-to-API unit conversion factors for gross-counting and spectral gamma ray tool designs. - Highlights: • A Monte Carlo alternative is proposed to replace empirical calibration procedures. • The proposed Monte Carlo alternative preserves the original API unit definition. • MCNP source and materials descriptions are provided for the API gamma ray pit. • Simulated results are presented for several wireline logging tool designs. • The proposed method can be adapted for use with logging-while-drilling tools.
International Nuclear Information System (INIS)
Bhati, S.; Patni, H.K.; Singh, I.S.; Garg, S.P.
2005-01-01
A shadow shield scanning bed whole body monitor incorporating a (102 mm dia x 76 mm thick) NaI(Tl) detector, is employed for assessment of high-energy photon emitters at BARC. The monitor is calibrated using a Reference BOMAB phantom representative of an average Indian radiation worker. However to account for the size variation in the physique of workers, it is required to calibrate the system with different size BOMAB phantoms which is both difficult and expensive. Therefore, a theoretical approach based on Monte Carlo techniques has been employed to calibrate the system with BOMAB phantoms of different sizes for several radionuclides of interest. A computer program developed for this purpose, simulates the scanning geometry of the whole body monitor and computes detection efficiencies for the BARC Reference phantom (63 kg/168 cm), ICRP Reference phantom (70 kg/170 cm) and several of its scaled versions covering a wide range of body builds. The detection efficiencies computed for different photon energies for BARC Reference phantom were found to be in very good agreement with experimental data, thus validating the Monte Carlo scheme used in the computer code. The results from this study could be used for assessment of internal contamination due to high-energy photon emitters for radiation workers of different physiques. (author)
International Nuclear Information System (INIS)
Shypailo, R J; Ellis, K J
2011-01-01
During construction of the whole body counter (WBC) at the Children's Nutrition Research Center (CNRC), efficiency calibration was needed to translate acquired counts of 40K to actual grams of potassium for measurement of total body potassium (TBK) in a diverse subject population. The MCNP Monte Carlo n-particle simulation program was used to describe the WBC (54 detectors plus shielding), test individual detector counting response, and create a series of virtual anthropomorphic phantoms based on national reference anthropometric data. Each phantom included an outer layer of adipose tissue and an inner core of lean tissue. Phantoms were designed for both genders representing ages 3.5 to 18.5 years with body sizes from the 5th to the 95th percentile based on body weight. In addition, a spherical surface source surrounding the WBC was modeled in order to measure the effects of subject mass on room background interference. Individual detector measurements showed good agreement with the MCNP model. The background source model came close to agreement with empirical measurements, but showed a trend deviating from unity with increasing subject size. Results from the MCNP simulation of the CNRC WBC agreed well with empirical measurements using BOMAB phantoms. Individual detector efficiency corrections were used to improve the accuracy of the model. Nonlinear multiple regression efficiency calibration equations were derived for each gender. Room background correction is critical in improving the accuracy of the WBC calibration.
Model calibration for building energy efficiency simulation
International Nuclear Information System (INIS)
Mustafaraj, Giorgio; Marini, Dashamir; Costa, Andrea; Keane, Marcus
2014-01-01
Highlights: • Developing a 3D model relating to building architecture, occupancy and HVAC operation. • Two calibration stages developed, with the final model providing accurate results. • Using an onsite weather station to generate the weather data file in EnergyPlus. • Predicting the thermal behaviour of underfloor heating, heat pump and natural ventilation. • Monthly energy saving opportunities of 20–27% related to the heat pump were identified. - Abstract: This research work deals with an Environmental Research Institute (ERI) building where an underfloor heating system and natural ventilation are the main systems used to maintain comfort conditions throughout 80% of the building areas. Firstly, this work involved developing a 3D model relating to building architecture, occupancy and HVAC operation. Secondly, a two-level calibration methodology was applied in order to ensure accuracy and reduce the likelihood of errors. To further improve the accuracy of calibration, a historical weather data file for the year 2011 was created from the on-site local weather station of the ERI building. After applying the second level of the calibration process, the values of the Mean Bias Error (MBE) and the Coefficient of Variation of the Root Mean Squared Error (CV(RMSE)) for hourly heat pump electricity consumption varied within the following ranges: hourly MBE from −5.6% to 7.5% and hourly CV(RMSE) from 7.3% to 25.1%. Finally, the building was simulated with EnergyPlus to identify further possibilities of energy savings supplied by a water-to-water heat pump to the underfloor heating system. It was found that electricity consumption savings from the heat pump can vary between 20% and 27% on a monthly basis.
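The two agreement metrics used above, MBE and CV(RMSE), are standard in building-energy model calibration (e.g. ASHRAE Guideline 14). A minimal sketch of how they are computed from measured and simulated hourly series (function names and data are illustrative):

```python
import numpy as np

def mbe(measured, simulated):
    """Mean Bias Error in percent: net over- or under-prediction."""
    m = np.asarray(measured, dtype=float)
    s = np.asarray(simulated, dtype=float)
    return 100.0 * np.sum(m - s) / np.sum(m)

def cv_rmse(measured, simulated):
    """Coefficient of Variation of the RMSE in percent: scatter of the
    simulation around the measurements, normalized by the measured mean."""
    m = np.asarray(measured, dtype=float)
    s = np.asarray(simulated, dtype=float)
    return 100.0 * np.sqrt(np.mean((m - s) ** 2)) / np.mean(m)
```

A model is commonly considered calibrated on hourly data when |MBE| and CV(RMSE) fall below fixed thresholds (10% and 30%, respectively, in ASHRAE Guideline 14).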
International Nuclear Information System (INIS)
Mallett, M.W.
1991-01-01
Lawrence Livermore National Laboratory (LLNL) is currently investigating a new method for obtaining absolute calibration factors for radiation measurement systems used to measure internally deposited radionuclides in vivo. This method uses magnetic resonance imaging (MRI) to determine the anatomical makeup of an individual. A new MRI technique is also employed that is capable of resolving the fat and water content of human tissue. This anatomical and biochemical information is used to construct a mathematical phantom. Monte Carlo methods are then used to simulate the transport of radiation throughout the phantom. By including a model of the detection equipment of the in vivo measurement system in the code, calibration factors are generated that are specific to the individual. Furthermore, this method eliminates the need for surrogate human structures in the calibration process. A demonstration of the proposed method is being performed using a fat/water matrix.
Efficiency calibration of solid track spark auto counter
International Nuclear Information System (INIS)
Wang Mei; Wen Zhongwei; Lin Jufang; Liu Rong; Jiang Li; Lu Xinxin; Zhu Tonghua
2008-01-01
The factors influencing the detection efficiency of the solid track spark auto counter were analyzed, and the optimal etching conditions and charging parameters were reconfirmed. Using a small plate fission ionization chamber, the efficiency of the solid track spark auto counter was re-calibrated for various experimental assemblies. The efficiency of the solid track spark auto counter under various experimental conditions was thus obtained. (authors)
New approach for calibrating the efficiency of HPGe detectors
International Nuclear Information System (INIS)
Alnour, I.A.; Wagiran, H.; Suhaimi Hamzah; Siong, W.B.; Mohd Suhaimi Elias
2013-01-01
Full-text: This work evaluates the efficiency calibration of two HPGe detectors, a Canberra GC3018 with Genie 2000 software and an Ortec GEM25-76-XLB-C with Gamma Vision software, available at the neutron activation analysis laboratory of the Malaysian Nuclear Agency (NM). The efficiency calibration curve was constructed from measurements of an IAEA standard gamma point-source set composed of 241Am, 57Co, 133Ba, 152Eu, 137Cs and 60Co. The efficiency calibrations were performed for three different geometries: 5, 10 and 15 cm distances from the detector end cap. Polynomial functions were fitted with a MATLAB program in order to find an accurate fit to the experimental data points. The efficiency equation was established from the fitted parameters, allowing the efficiency to be evaluated at any particular energy of interest. The study shows significant deviations in the efficiency depending on the source-detector distance and photon energy. (author)
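The fitting step described above, polynomial functions fitted to measured efficiency points so the efficiency can be evaluated at any energy, is commonly done in log-log space. A hedged sketch using NumPy rather than MATLAB (the energies and efficiency values below are illustrative, not the paper's data):

```python
import numpy as np

# Illustrative point-source calibration data: energy (keV) vs measured
# full-energy-peak efficiency at a fixed source-detector distance.
energy = np.array([59.5, 122.1, 356.0, 661.7, 1173.2, 1332.5, 1408.0])
eff = np.array([0.012, 0.028, 0.019, 0.011, 0.0071, 0.0064, 0.0060])

# Fit ln(eff) as a polynomial in ln(E), a common HPGe parameterization.
coeffs = np.polyfit(np.log(energy), np.log(eff), deg=4)

def efficiency(e_kev):
    """Evaluate the fitted efficiency curve at an energy of interest."""
    return np.exp(np.polyval(coeffs, np.log(e_kev)))
```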
Study on efficiency calibration of tritium in liquid scintillation spectrometry
International Nuclear Information System (INIS)
Zhai Xiufang; Wang Yaoqin; Li Weiping; Liang Wei; Xu Hui; Zhang Ruirong
2014-01-01
A method for the efficiency calibration of tritium samples in liquid scintillation spectrometry is presented. The quenching effects of different chemical quenchers (acid/base, CH3NO2, CCl4, CH3COCH3) and a color quencher (Na2CrO4) were studied. For each quencher, the sample channel ratio (SCR), spectral index of the sample (SIS) and spectral quench parameter of the external standard (SQP(E)) methods were used for efficiency calibration, and the three methods were compared. The results show that the quenching from the various chemical quenchers can be unified into a single chemical quench curve for efficiency calibration. The correction curves for chemical quenching and color quenching differ greatly, regardless of which efficiency calibration method is used. The SCR method is not advantageous for tritium samples with low radioactivity or strong quenching. The SQP(E) method is independent of the sample count rate and is especially suitable for the efficiency calibration of low-radioactivity tritium. The SIS method can be used for samples with high radioactivity. Accurate efficiency calibration for various quenching conditions can be carried out by combining the SIS and SQP(E) methods. (authors)
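The SQP(E) method amounts to building a quench curve, counting efficiency versus the external-standard quench parameter, from a set of quenched standards and reading off the efficiency for each unknown sample. A minimal sketch with illustrative numbers (not the paper's data):

```python
import numpy as np

# Quenched tritium standards: SQP(E) value vs measured counting efficiency.
sqpe = np.array([850.0, 800.0, 750.0, 700.0, 650.0, 600.0])
eff = np.array([0.42, 0.38, 0.33, 0.27, 0.20, 0.13])

# Quadratic quench curve, fitted once per quencher family.
coeffs = np.polyfit(sqpe, eff, deg=2)

def activity_bq(net_cpm, sample_sqpe):
    """Convert a net count rate (counts per minute) to activity in Bq
    using the efficiency interpolated from the quench curve."""
    efficiency = np.polyval(coeffs, sample_sqpe)
    return net_cpm / 60.0 / efficiency
```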
Energy Technology Data Exchange (ETDEWEB)
Ramos M, D.; Yera S, Y.; Lopez B, G. M.; Acosta R, N.; Vergara G, A., E-mail: dayana@cphr.edu.cu [Centro de Proteccion e Higiene de las Radiaciones, Calle 20 No. 4113 e/ 41 y 47, Playa, 10600 La Habana (Cuba)
2014-08-15
This work determines the detection efficiency of the identiFINDER detector for {sup 125}I and {sup 131}I in the thyroid using the Monte Carlo method. The suitability of the calibration method was analyzed by comparing the results of the direct Monte Carlo method with the corrected one; the latter was chosen because its differences from the real efficiency stayed below 10%. To simulate the detector, its geometric parameters were optimized using a tomographic study, which allowed the uncertainties of the estimates to be minimized. Finally, simulations of the detector and point-source geometry were performed to find the correction factors at 5 cm, 15 cm and 25 cm, together with those for the detector-phantom arrangement, for method validation and the final calculation of the efficiency. These showed that if the Monte Carlo implementation simulates a greater distance than that used in the laboratory measurements, the efficiency is overestimated, while at a shorter distance it is underestimated; the simulation should therefore be performed at the same distance at which the actual measurement will be made. Efficiency curves and minimum detectable activities for the measurement of {sup 131}I and {sup 125}I were also obtained. Overall, the Monte Carlo methodology was implemented for the identiFINDER calibration with the purpose of estimating the measured activity of iodine in the thyroid. This method represents an ideal way to compensate for the lack of standard solutions and phantoms, ensuring that the capabilities of the Internal Contamination Laboratory of the Centro de Proteccion e Higiene de las Radiaciones remain calibrated for the measurement of iodine in the thyroid. (author)
A Monte Carlo modeling alternative for the API Gamma Ray Calibration Facility.
Galford, J E
2017-04-01
The gamma ray pit at the API Calibration Facility, located on the University of Houston campus, defines the API unit for natural gamma ray logs used throughout the petroleum logging industry. Future use of the facility is uncertain. An alternative method is proposed to preserve the gamma ray API unit definition as an industry standard by using Monte Carlo modeling to obtain accurate counting rate-to-API unit conversion factors for gross-counting and spectral gamma ray tool designs.
Coincidence corrected efficiency calibration of Compton-suppressed HPGe detectors
Energy Technology Data Exchange (ETDEWEB)
Aucott, Timothy [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Brand, Alexander [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); DiPrete, David [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2015-04-20
The authors present a reliable method to calibrate the full-energy efficiency and the coincidence correction factors using a commonly-available mixed source gamma standard. This is accomplished by measuring the peak areas from both summing and non-summing decay schemes and simultaneously fitting both the full-energy efficiency, as well as the total efficiency, as functions of energy. By using known decay schemes, these functions can then be used to provide correction factors for other nuclides not included in the calibration standard.
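The coincidence correction factors follow, to first order, from the total efficiency: a full-energy peak is depressed by the probability that a coincident photon from the same cascade also deposits energy in the detector. A hedged one-function sketch of that summing-out relation (not the authors' full simultaneous-fit procedure):

```python
def summing_correction(p_coinc, total_eff):
    """Multiplicative correction for coincidence summing-out: p_coinc is
    the probability that the line of interest is emitted in coincidence
    with another photon, and total_eff is the detector's total efficiency
    at that coincident photon's energy."""
    return 1.0 / (1.0 - p_coinc * total_eff)
```

For example, a line always in coincidence with a photon detected at 20% total efficiency needs its peak area scaled up by 25%.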
O5S, Calibration of Organic Scintillation Detector by Monte-Carlo
International Nuclear Information System (INIS)
1985-01-01
1 - Nature of physical problem solved: O5S is designed to directly simulate the experimental techniques used to obtain the pulse height distribution for a parallel beam of mono-energetic neutrons incident on organic scintillator systems. Developed to accurately calibrate the nominally 2 in. by 2 in. liquid organic scintillator NE-213 (composition CH-1.2), the programme should be readily adaptable to many similar problems. 2 - Method of solution: O5S is a Monte Carlo programme patterned after the general-purpose Monte Carlo neutron transport programme system, O5R. The O5S Monte Carlo experiment follows the course of each neutron through the scintillator and obtains the energy deposits of the ions produced by elastic scatterings and reactions. The light pulse produced by the neutron is obtained by summing up the contributions of the various ions with the use of appropriate light vs. ion-energy tables. Because of its specialized geometry and simpler cross-section needs, O5S is able to by-pass many features included in O5R. For instance, neutrons may be followed individually, their histories analyzed as they occur, and upon completion of the experiment, the results analyzed to obtain the pulse-height distribution during one pass on the computer. O5S does allow the absorption of neutrons, but does not allow splitting or Russian roulette (biased weighting schemes). SMOOTHIE is designed to smooth O5S histogram data using Gaussian functions with parameters specified by the user.
On the efficiency calibration of a drum waste assay system
Dinescu, L; Cazan, I L; Macrin, R; Caragheorgheopol, G; Rotarescu, G
2002-01-01
The efficiency calibration of a gamma spectroscopy waste assay system, constructed by IFIN-HH, was performed. The calibration technique was based on the assumption of a uniform distribution of the source activity in the drum and also a uniform sample matrix. A collimated detector (HPGe, 20% relative efficiency) placed at 30 cm from the drum was used. The detection limit for 137Cs and 60Co is approximately 45 Bq/kg for a sample of about 400 kg and a counting time of 10 min. A total measurement uncertainty of -70% to +40% was estimated.
Efficient sampling algorithms for Monte Carlo based treatment planning
International Nuclear Information System (INIS)
DeMarco, J.J.; Solberg, T.D.; Chetty, I.; Smathers, J.B.
1998-01-01
Efficient sampling algorithms are necessary for producing a fast Monte Carlo based treatment planning code. This study evaluates several aspects of a photon-based tracking scheme and the effect of optimal sampling algorithms on the efficiency of the code. Four areas were tested: pseudo-random number generation, generalized sampling of a discrete distribution, sampling from the exponential distribution, and delta scattering as applied to photon transport through a heterogeneous simulation geometry. Generalized sampling of a discrete distribution using the cutpoint method can produce speedup gains of one order of magnitude versus conventional sequential sampling. Photon transport modifications based upon the delta scattering method were implemented and compared with a conventional boundary and collision checking algorithm. The delta scattering algorithm is faster by a factor of six versus the conventional algorithm for a boundary size of 5 mm within a heterogeneous geometry. A comparison of portable pseudo-random number algorithms and exponential sampling techniques is also discussed.
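The delta-scattering method mentioned above (also called Woodcock tracking) removes boundary checking by transporting with a single majorant cross-section and rejecting "virtual" collisions. A minimal 1-D sketch (names illustrative):

```python
import math
import random

def woodcock_track(sigma_of_x, sigma_max, length, rng=random.random):
    """Sample the position of the next real collision in a heterogeneous
    1-D slab [0, length]: fly with the majorant cross-section sigma_max,
    then accept each tentative collision as real with probability
    sigma(x)/sigma_max; otherwise it is a virtual collision and the
    flight continues without any geometry lookup."""
    x = 0.0
    while True:
        x += -math.log(rng()) / sigma_max   # exponential free flight
        if x >= length:
            return None                     # particle escaped the slab
        if rng() < sigma_of_x(x) / sigma_max:
            return x                        # real collision
```

In a homogeneous medium the samples reproduce the usual exponential free-path distribution, which is a quick sanity check of the implementation.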
Monte Carlo studies and optimization for the calibration system of the GERDA experiment
Baudis, L.; Ferella, A. D.; Froborg, F.; Tarka, M.
2013-11-01
The GERmanium Detector Array, GERDA, searches for neutrinoless double β decay in 76Ge using bare high-purity germanium detectors submerged in liquid argon. For the calibration of these detectors γ emitting sources have to be lowered from their parking position on the top of the cryostat over more than 5 m down to the germanium crystals. With the help of Monte Carlo simulations, the relevant parameters of the calibration system were determined. It was found that three 228Th sources with an activity of 20 kBq each at two different vertical positions will be necessary to reach sufficient statistics in all detectors in less than 4 h of calibration time. These sources will contribute to the background of the experiment with a total of (1.07 ± 0.04(stat) +0.13/−0.19(sys)) × 10^−4 cts/(keV kg yr) when shielded from below with 6 cm of tantalum in the parking position.
Monte Carlo studies and optimization for the calibration system of the GERDA experiment
Energy Technology Data Exchange (ETDEWEB)
Baudis, L. [Physics Institute, University of Zurich, Winterthurerstrasse 190, 8057 Zürich (Switzerland); Ferella, A.D. [Physics Institute, University of Zurich, Winterthurerstrasse 190, 8057 Zürich (Switzerland); INFN Laboratori Nazionali del Gran Sasso, 67010 Assergi (Italy); Froborg, F., E-mail: francis@froborg.de [Physics Institute, University of Zurich, Winterthurerstrasse 190, 8057 Zürich (Switzerland); Tarka, M. [Physics Institute, University of Zurich, Winterthurerstrasse 190, 8057 Zürich (Switzerland); Physics Department, University of Illinois, 1110 West Green Street, Urbana, IL 61801 (United States)
2013-11-21
The GERmanium Detector Array, GERDA, searches for neutrinoless double β decay in {sup 76}Ge using bare high-purity germanium detectors submerged in liquid argon. For the calibration of these detectors γ emitting sources have to be lowered from their parking position on the top of the cryostat over more than 5 m down to the germanium crystals. With the help of Monte Carlo simulations, the relevant parameters of the calibration system were determined. It was found that three {sup 228}Th sources with an activity of 20 kBq each at two different vertical positions will be necessary to reach sufficient statistics in all detectors in less than 4 h of calibration time. These sources will contribute to the background of the experiment with a total of (1.07±0.04(stat){sub −0.19}{sup +0.13}(sys))×10{sup −4} cts/(keV kg yr) when shielded from below with 6 cm of tantalum in the parking position.
The EURADOS-KIT training course on Monte Carlo methods for the calibration of body counters
International Nuclear Information System (INIS)
Breustedt, B.; Broggio, D.; Gomez-Ros, J.M.; Lopez, M.A.; Leone, D.; Poelz, S.; Marzocchi, O.; Shutt, A.
2016-01-01
Monte Carlo (MC) methods are numerical simulation techniques that can be used to extend the scope of calibrations performed in in vivo monitoring laboratories. These methods allow calibrations to be carried out for a much wider range of body shapes and sizes than would be feasible using physical phantoms. Unfortunately, this powerful technique is still used mainly in research institutions. In 2013, EURADOS and the in vivo monitoring laboratory of the Karlsruhe Institute of Technology (KIT) organized a three-day training course to disseminate knowledge on the application of MC methods for in vivo monitoring. It was intended as a hands-on course centered on an exercise that guided the participants step by step through the calibration process using a simplified version of KIT's equipment. Only introductory lectures on in vivo monitoring and voxel models were given. The course was based on MC codes of the MCNP family, which are widespread in the community. The strong involvement of the participants and the working atmosphere in the classroom, as well as the formal evaluation of the course, showed that the approach chosen was appropriate. Participants liked the hands-on approach and the extensive course materials on the exercise. (authors)
International Nuclear Information System (INIS)
Courtine, Fabien
2007-03-01
The thesis was carried out in the context of dating by thermoluminescence. This method requires laboratory measurements of natural radioactivity, for which we used a germanium spectrometer. To refine its calibration, we modelled it using the Monte Carlo code Geant4. We developed a geometrical model which takes into account the presence of inactive zones and zones of poor charge collection within the germanium crystal. The parameters of the model were adjusted by comparison with experimental results obtained with a 137Cs source. It appeared that the shape of the inactive zones is less simple than presented in the specialized literature. The model was then extended to a more complex source, with cascade effects and angular correlations between photons: 60Co. Finally, applied to extended sources, it gave correct results and allowed us to validate the simulation of matrix effects. (author)
Improving computational efficiency of Monte Carlo simulations with variance reduction
International Nuclear Information System (INIS)
Turner, A.; Davis, A.
2013-01-01
CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)
A Monte Carlo error simulation applied to calibration-free X-ray diffraction phase analysis
International Nuclear Information System (INIS)
Braun, G.E.
1986-01-01
Quantitative phase analysis of a system of n phases can be effected without the need for calibration standards provided at least n different mixtures of these phases are available. A series of linear equations relating diffracted X-ray intensities, weight fractions and quantitation factors coupled with mass balance relationships can be solved for the unknown weight fractions and factors. Uncertainties associated with the measured X-ray intensities, owing to counting of random X-ray quanta, are used to estimate the errors in the calculated parameters utilizing a Monte Carlo simulation. The Monte Carlo approach can be generalized and applied to any quantitative X-ray diffraction phase analysis method. Two examples utilizing mixtures of CaCO3, Fe2O3 and CaF2 with an α-SiO2 (quartz) internal standard illustrate the quantitative method and corresponding error analysis. One example is well conditioned; the other is poorly conditioned and, therefore, very sensitive to errors in the measured intensities. (orig.)
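The error analysis described, perturbing the measured intensities within their counting statistics and re-solving the linear system, can be sketched as follows (the intensity matrix and weight fractions are illustrative, not the CaCO3/Fe2O3/CaF2 data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-phase system: intensities I = K @ w, where K relates
# the weight fractions w to the diffracted peak intensities.
K = np.array([[3.0, 1.0, 0.5],
              [1.0, 2.5, 0.8],
              [0.6, 0.9, 2.0]])
w_true = np.array([0.5, 0.3, 0.2])
counts = 1e4 * (K @ w_true)          # expected X-ray quanta per peak

# Monte Carlo: resample Poisson counts, re-solve, collect solutions.
solutions = np.array([
    np.linalg.solve(K, rng.poisson(counts) / 1e4) for _ in range(2000)
])
w_hat = solutions.mean(axis=0)       # estimated weight fractions
w_err = solutions.std(axis=0)        # propagated counting uncertainty
```

A poorly conditioned K inflates w_err even when the counting errors themselves are small, which is exactly the sensitivity the second example illustrates.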
Novotny, M.A.
2010-02-01
The efficiency of dynamic Monte Carlo algorithms for off-lattice systems composed of particles is studied for the case of a single impurity particle. The theoretical efficiencies of the rejection-free method and of the Monte Carlo with Absorbing Markov Chains method are given. Simulation results are presented to confirm the theoretical efficiencies.
Monte Carlo based geometrical model for efficiency calculation of an n-type HPGe detector
Energy Technology Data Exchange (ETDEWEB)
Padilla Cabal, Fatima, E-mail: fpadilla@instec.c [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba); Lopez-Pino, Neivy; Luis Bernal-Castillo, Jose; Martinez-Palenzuela, Yisel; Aguilar-Mena, Jimmy; D' Alessandro, Katia; Arbelo, Yuniesky; Corrales, Yasser; Diaz, Oscar [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba)
2010-12-15
A procedure to optimize the geometrical model of an n-type detector is described. Sixteen lines from seven point sources ({sup 241}Am, {sup 133}Ba, {sup 22}Na, {sup 60}Co, {sup 57}Co, {sup 137}Cs and {sup 152}Eu) placed at three different source-to-detector distances (10, 20 and 30 cm) were used to calibrate a low-background gamma spectrometer between 26 and 1408 keV. Direct Monte Carlo techniques using the MCNPX 2.6 and GEANT4 9.2 codes, and a semi-empirical procedure, were performed to obtain theoretical efficiency curves. Since discrepancies were found between experimental and calculated data using the manufacturer's parameters of the detector, a detailed study of the crystal dimensions and the geometrical configuration was carried out. The relative deviation from experimental data decreased from a mean value of 18% to 4% after the parameters were optimized.
International Nuclear Information System (INIS)
Srinivasan, P.; Raman, Anand; Sharma, D.N.
2009-01-01
Aerial gamma spectrometry is a very effective method for quickly surveying a large area which might get contaminated following a nuclear accident or due to nuclear weapon fallout. The technique not only helps in identifying the contaminating radionuclide but also in assessing the magnitude and the extent of contamination. These two factors are of importance for the authorities to quickly plan and execute effective countermeasures and controls if required. The development and application of airborne gamma ray spectrometry systems have been reported by several institutions and authors. The Radiation Safety Systems Division of the Bhabha Atomic Research Centre has developed an Aerial Gamma Spectrometry System (AGSS) and the associated surveying methodology. For online assessment of contamination levels, it is essential to calibrate the system (AGSS) either by flying it over a known contaminated area or over a contaminated surface simulated by deploying sealed sources on the ground. AGSS has been calibrated for different detectors in aerial exercises using such simulated contamination on the ground. The calibration methodology essentially needs net photo-peak counts in selected energy windows to finally arrive at the Air-to-Ground Correlation Factors at selected flight parameters such as altitude, speed of flight and the time interval at which each spectrum is acquired. This paper describes the methodology to predict all the necessary parameters, such as the photon fluence at various altitudes, the photo-peak counts in different energy windows, the Air-to-Ground Correlation Factors (AGCF), and the dose rate at any height due to air-scattered gamma ray photons. These parameters are predicted for a given source deployment matrix, detector and flying altitude using the Monte Carlo code MCNP (Monte Carlo Neutron and Photon Transport Code, CCC-200, RSIC, ORNL, Tennessee, USA). A methodology to generate the completely folded gamma ray count
Novotny, M.A.; Watanabe, Hiroshi; Ito, Nobuyasu
2010-01-01
The efficiency of dynamic Monte Carlo algorithms for off-lattice systems composed of particles is studied for the case of a single impurity particle. The theoretical efficiencies of the rejection-free method and of the Monte Carlo with Absorbing Markov Chains method are given.
Efficiency Calibration of Phantom Family for Use in Direct Bioassay of Radionuclide in the Body
International Nuclear Information System (INIS)
Kim, Ji Seok; Ha, Wi Ho; Kim, Hyun Ki; Park, Gyung Deok; Lee, Jai Ki
2008-01-01
A major source of uncertainty in in vivo bioassay using a whole body counter calibrated against a body phantom containing known radioactivity is the variation of counting geometry caused by differences between the body size of the subject and that of the phantom. Phantoms such as the BOMAB phantom are based on the body size of the reference man, and usually a single phantom is used in routine calibration of the counter, because it is difficult to apply a set of phantoms of different sizes. In order to reduce the potential errors due to variation of counting geometry, the use of a set of phantoms having different body shapes has been attempted. The efficiency files are stored in the computer analyzing the measurement data, and a suitable one is retrieved for the specific subject. Either an experimental or a computational approach can be employed to generate the efficiency files. Carlan et al. demonstrated that Monte Carlo simulations can provide acceptable efficiencies through use of the IGOR phantom family. The body size of the individual subject undergoing in vivo bioassay should be determined by an appropriate method.
An efficient framework for photon Monte Carlo treatment planning
International Nuclear Information System (INIS)
Fix, Michael K; Manser, Peter; Frei, Daniel; Volken, Werner; Mini, Roberto; Born, Ernst J
2007-01-01
Currently, photon Monte Carlo treatment planning (MCTP) for a patient stored in the patient database of a treatment planning system (TPS) can usually only be performed using a cumbersome multi-step procedure requiring many user interactions. This means automation is needed for use in clinical routine. In addition, because of the long computing times in MCTP, optimization of the MC calculations is essential. For these purposes a new graphical user interface (GUI)-based photon MC environment has been developed, resulting in a very flexible framework. By this means, appropriate MC transport methods are assigned to different geometric regions while still benefiting from the features included in the TPS. In order to provide a flexible MC environment, the MC particle transport has been divided into different parts: the source, the beam modifiers and the patient. The source part includes the phase-space source, source models and full MC transport through the treatment head. The beam modifier part consists of one module for each beam modifier. To simulate the radiation transport through each individual beam modifier, one of three full MC transport codes can be selected independently. Additionally, for each beam modifier a simple or an exact geometry can be chosen. Thereby, different complexity levels of radiation transport are applied during the simulation. For the patient dose calculation, two different MC codes are available. A special plug-in in Eclipse, providing all necessary information by means of DICOM streams, was used to start the developed MC GUI. The implementation of this framework separates the MC transport from the geometry, and the modules pass the particles in memory; hence, no files are used as the interface. The implementation is realized for 6 and 15 MV beams of a Varian Clinac 2300 C/D. Several applications demonstrate the usefulness of the framework. Apart from applications dealing with the beam modifiers, two patient cases are shown.
Extrapolated HPGe efficiency estimates based on a single calibration measurement
International Nuclear Information System (INIS)
Winn, W.G.
1994-01-01
Gamma spectroscopists often must analyze samples with geometries for which their detectors are not calibrated. The effort to experimentally recalibrate a detector for a new geometry can be quite time consuming, causing delay in reporting useful results. Such concerns have motivated development of a method for extrapolating HPGe efficiency estimates from a single existing measured efficiency. Overall, the method provides useful preliminary results for analyses that do not require exceptional accuracy, while reliably bracketing the credible range. The estimated efficiency ε for a uniform sample in a geometry with volume V is extrapolated from the measured ε_0 of the base sample of volume V_0. Assuming all samples are centered atop the detector for maximum efficiency, ε decreases monotonically as V increases about V_0, and vice versa. Extrapolation of high and low efficiency estimates ε_h and ε_L provides an average estimate of ε = 1/2 [ε_h + ε_L] ± 1/2 [ε_h − ε_L], where the uncertainty Δε = 1/2 [ε_h − ε_L] brackets the limits of the maximum possible error. Both ε_h and ε_L diverge from ε_0 as V deviates from V_0, causing Δε to increase accordingly. The above concepts guided development of both conservative and refined estimates for ε.
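The averaging-and-bracketing formula above is a two-line computation; a sketch using the same symbols:

```python
def extrapolated_efficiency(eps_h, eps_l):
    """Average the high and low extrapolated efficiencies and bracket
    the maximum possible error with half their difference."""
    estimate = 0.5 * (eps_h + eps_l)
    max_error = 0.5 * (eps_h - eps_l)
    return estimate, max_error
```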
Offline Calibration of b-Jet Identification Efficiencies
Lowette, Steven; Heyninck, Jan; Vanlaer, Pascal
2006-01-01
A new method to calibrate b-tagging algorithms can be exploited at the LHC, due to the high energy and luminosity of the accelerator. The abundantly produced ttbar pairs can be used to isolate jet samples with a highly enriched b-jet content, on which the b-jet identification algorithms can be calibrated. Two methods are described to extract a b-enriched jet sample in ttbar events, using the semileptonic and the fully leptonic decay modes. The selection of jets is based on a likelihood ratio method. On the selected b-jet enriched jet samples the b-tagging performance is measured, taking into account the impurities of the samples. The most important contributions to the systematic uncertainties are evaluated, resulting in an estimate of the expected precision on the measurement of the b-jet identification performance. For 1 fb-1 (10 fb-1) of integrated luminosity the relative accuracy on the b-jet identification efficiency is expected to be about 6% (4%) in the barrel region and about 10% (5%) in the endcaps.
Ramgraber, M.; Schirmer, M.
2017-12-01
As computational power grows and wireless sensor networks find their way into common practice, it becomes increasingly feasible to pursue on-line numerical groundwater modelling. The reconciliation of model predictions with sensor measurements often necessitates the application of Sequential Monte Carlo (SMC) techniques, most prominently represented by the Ensemble Kalman Filter. In the pursuit of on-line predictions it seems advantageous to transcend the scope of pure data assimilation and incorporate on-line parameter calibration as well. Unfortunately, the interplay between shifting model parameters and transient states is non-trivial. Several recent publications (e.g. Chopin et al., 2013, Kantas et al., 2015) in the field of statistics discuss potential algorithms addressing this issue. However, most of these are computationally intractable for on-line application. In this study, we investigate to what extent compromises between mathematical rigour and computational restrictions can be made within the framework of on-line numerical modelling of groundwater. Preliminary studies are conducted in a synthetic setting, with the goal of transferring the conclusions drawn into application in a real-world setting. To this end, a wireless sensor network has been established in the valley aquifer around Fehraltorf, characterized by a highly dynamic groundwater system and located about 20 km to the East of Zürich, Switzerland. By providing continuous probabilistic estimates of the state and parameter distribution, a steady base for branched-off predictive scenario modelling could be established, providing water authorities with advanced tools for assessing the impact of groundwater management practices. Chopin, N., Jacob, P.E. and Papaspiliopoulos, O. (2013): SMC2: an efficient algorithm for sequential analysis of state space models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 75 (3), p. 397-426. Kantas, N., Doucet, A., Singh, S
Efficient Monte Carlo Simulations of Gas Molecules Inside Porous Materials.
Kim, Jihan; Smit, Berend
2012-07-10
Monte Carlo (MC) simulations are commonly used to obtain adsorption properties of gas molecules inside porous materials. In this work, we discuss various optimization strategies that lead to faster MC simulations, using CO2 gas molecules inside host zeolite structures as a test system. The reciprocal space contribution of the gas-gas Ewald summation and both the direct and the reciprocal gas-host potential energy interactions are stored inside energy grids to reduce the wall time in the MC simulations. Additional speedup can be obtained by selectively calling the routine that computes the gas-gas Ewald summation, which does not impact the accuracy of the zeolite's adsorption characteristics. We utilize a two-level density-biased sampling technique in the grand canonical Monte Carlo (GCMC) algorithm to restrict CO2 insertion moves to low-energy regions within the zeolite materials to accelerate convergence. Finally, we make use of graphics processing unit (GPU) hardware to conduct multiple MC simulations in parallel by judiciously mapping the GPU threads to the available workload. As a result, we can obtain a CO2 adsorption isotherm curve with 14 pressure values (up to 10 atm) for a zeolite structure within a minute of total compute wall time.
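The energy-grid idea, tabulating an expensive potential once and replacing per-move energy evaluations with interpolation, can be sketched as below. This is a toy illustration with an assumed analytic potential and grid size, not the paper's Ewald/zeolite machinery:

```python
import numpy as np

def make_grid(potential, n=16, box=10.0):
    """Tabulate a (toy) gas-host potential once on an n^3 grid over a cubic
    box, so that MC moves can use cheap lookups instead of recomputation."""
    xs = np.linspace(0.0, box, n)
    X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
    return potential(X, Y, Z), box / (n - 1)

def lookup(grid, h, x, y, z):
    """Trilinear interpolation of the tabulated potential at (x, y, z)."""
    i, j, k = int(x / h), int(y / h), int(z / h)
    fx, fy, fz = x / h - i, y / h - j, z / h - k
    c = grid[i:i + 2, j:j + 2, k:k + 2]          # the 8 surrounding nodes
    c = c[0] * (1 - fx) + c[1] * fx              # collapse x
    c = c[0] * (1 - fy) + c[1] * fy              # collapse y
    return c[0] * (1 - fz) + c[1] * fz           # collapse z

# build once, then call lookup() per trial insertion/translation move
grid, h = make_grid(lambda x, y, z: x + 2.0 * y)
```

In a real GCMC code the grid stores the direct and reciprocal gas-host contributions per energy component; the cost of building it is amortized over millions of trial moves.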
Bayesian calibration of coarse-grained forces: Efficiently addressing transferability
International Nuclear Information System (INIS)
Patrone, Paul N.; Rosch, Thomas W.; Phelan, Frederick R.
2016-01-01
Generating and calibrating forces that are transferable across a range of state-points remains a challenging task in coarse-grained (CG) molecular dynamics. In this work, we present a coarse-graining workflow, inspired by ideas from uncertainty quantification and numerical analysis, to address this problem. The key idea behind our approach is to introduce a Bayesian correction algorithm that uses functional derivatives of CG simulations to rapidly and inexpensively recalibrate initial estimates f₀ of forces anchored by standard methods such as force-matching. Taking density-temperature relationships as a running example, we demonstrate that this algorithm, in concert with various interpolation schemes, can be used to efficiently compute physically reasonable force curves on a fine grid of state-points. Importantly, we show that our workflow is robust to several choices available to the modeler, including the interpolation schemes and tools used to construct f₀. In a related vein, we also demonstrate that our approach can speed up coarse-graining by reducing the number of atomistic simulations needed as inputs to standard methods for generating CG forces.
International Nuclear Information System (INIS)
Dongming, L.; Shuhai, J.; Houwen, L.
2016-01-01
In order to routinely evaluate workers' internal exposure due to intake of radionuclides, a whole-body counter (WBC) at the Third Qinshan Nuclear Power Co. Ltd. (TQNPC) is used. Counting would typically occur immediately after a confirmed or suspected inhalation exposure. The counting geometry would differ as a result of the height of the individual being counted, which would result in over- or underestimated intake(s). In this study, Monte Carlo simulation was applied to evaluate the counting efficiency when performing a lung count using the WBC at the TQNPC. In order to validate the simulated efficiencies for lung counting, the WBC was benchmarked for various lung positions using a ¹³⁷Cs source. The results show that the simulated efficiencies are fairly consistent with the measured ones for ¹³⁷Cs, with a relative error of 0.289%. For a lung organ simulation, the discrepancy between the calibration phantom and the Chinese reference adult person (170 cm) was within 6% for peak energies ranging from 59.5 keV to 2000 keV. The relative errors vary from 4.63% to 8.41% depending on the person's height and photon energy. Therefore, the simulation technique is effective and practical for lung counting, which is difficult to calibrate using a physical phantom. (authors)
Energy Technology Data Exchange (ETDEWEB)
Morera-Gómez, Yasser, E-mail: ymore24@gamail.com [Centro de Estudios Ambientales de Cienfuegos, AP 5. Ciudad Nuclear, CP 59350 Cienfuegos (Cuba); Departamento de Química y Edafología, Universidad de Navarra, Irunlarrea No 1, Pamplona 31009, Navarra (Spain); Cartas-Aguila, Héctor A.; Alonso-Hernández, Carlos M.; Nuñez-Duartes, Carlos [Centro de Estudios Ambientales de Cienfuegos, AP 5. Ciudad Nuclear, CP 59350 Cienfuegos (Cuba)
2016-05-11
To obtain reliable measurements of environmental radionuclide activity using HPGe (High Purity Germanium) detectors, knowledge of the absolute peak efficiency is required. This work presents a practical procedure for efficiency calibration of a coaxial n-type and a well-type HPGe detector using experimental and Monte Carlo simulation methods. The method was performed over an energy range from 40 to 1460 keV and can be used for both solid and liquid environmental samples. The calibration was initially verified by measuring several reference materials provided by the IAEA (International Atomic Energy Agency). Finally, through participation in two Proficiency Tests organized by the IAEA for the members of the ALMERA network (Analytical Laboratories for the Measurement of Environmental Radioactivity), the validity of the developed procedure was confirmed. The validation also showed that measurement of ²²⁶Ra should be conducted using the coaxial n-type HPGe detector in order to minimize the true coincidence summing effect. - Highlights: • An efficiency calibration for a coaxial and a well-type HPGe detector was performed. • The calibration was made using experimental and Monte Carlo simulation methods. • The procedure was verified measuring several reference materials provided by IAEA. • Calibrations were validated through the participation in 2 ALMERA Proficiency Tests.
Efficient mass calibration of magnetic sector mass spectrometers
International Nuclear Information System (INIS)
Roddick, J.C.
1996-01-01
Magnetic sector mass spectrometers used for automatic acquisition of precise isotopic data are usually controlled with Hall probes and software that uses polynomial equations to define and calibrate the mass-field relations required for mass focusing. This procedure requires a number of reference masses and careful tuning to define and maintain an accurate mass calibration. A simplified equation is presented and applied to several different magnetically controlled mass spectrometers. The equation accounts for nonlinearity in typical Hall probe controlled mass-field relations, reduces calibration to a linear fitting procedure, and is sufficiently accurate to permit calibration over a mass range of 2 to 200 amu with only two defining masses. Procedures developed can quickly correct for normal drift in calibrations and compensate for drift during isotopic analysis over a limited mass range such as a single element. The equation is: Field = A·Mass^(1/2) + B·Mass^p, where A, B, and p are constants. The power value p has a characteristic value for a Hall probe/controller and is insensitive to changing conditions, thus reducing calibration to a linear regression to determine optimum A and B. (author). 1 ref., 1 tab., 6 figs
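Because p is fixed by the probe/controller, the two-term field equation is linear in A and B, so two defining masses determine the calibration exactly. A minimal sketch with NumPy (the p value and the synthetic field readings are illustrative):

```python
import numpy as np

def calibrate(masses, fields, p):
    """Solve Field = A*M**0.5 + B*M**p for A and B by linear least squares,
    given the Hall-probe characteristic exponent p."""
    M = np.asarray(masses, dtype=float)
    X = np.column_stack([np.sqrt(M), M**p])        # design matrix
    (A, B), *_ = np.linalg.lstsq(X, np.asarray(fields, dtype=float), rcond=None)
    return A, B

# two defining masses suffice to fix A and B (illustrative numbers)
p = 0.8
A_true, B_true = 120.0, 3.5
masses = [2.0, 200.0]
fields = [A_true * m**0.5 + B_true * m**p for m in masses]
A, B = calibrate(masses, fields, p)
```

With more than two reference masses the same call gives the least-squares drift correction described in the abstract.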
International Nuclear Information System (INIS)
Greacen, E.L.; Correll, R.L.; Cunningham, R.B.; Johns, G.G.; Nicolls, K.D.
1981-01-01
Procedures common to different methods of calibration of neutron moisture meters are outlined and laboratory and field calibration methods compared. Gross errors which arise from faulty calibration techniques are described. The count rate can be affected by the dry bulk density of the soil, the volumetric content of constitutional hydrogen and other chemical components of the soil and soil solution. Calibration is further complicated by the fact that the neutron meter responds more strongly to the soil properties close to the detector and source. The differences in slope of calibration curves for different soils can be as much as 40%
A parameter for the selection of an optimum balance calibration model by Monte Carlo simulation
CSIR Research Space (South Africa)
Bidgood, Peter M
2013-09-01
The current trend in balance calibration-matrix generation is to use non-linear regression and statistical methods. Methods typically include Modified-Design-of-Experiment (MDOE), Response-Surface-Models (RSMs) and Analysis of Variance (ANOVA...
Adding computationally efficient realism to Monte Carlo turbulence simulation
Campbell, C. W.
1985-01-01
Frequently in aerospace vehicle flight simulation, random turbulence is generated using the assumption that the craft is small compared to the length scales of turbulence. The turbulence is presumed to vary only along the flight path of the vehicle but not across the vehicle span. The addition of the realism of three-dimensionality is a worthy goal, but any such attempt will not gain acceptance in the simulator community unless it is computationally efficient. A concept for adding three-dimensional realism with a minimum of computational complexity is presented. The concept involves the use of close rational approximations to irrational spectra and cross-spectra so that systems of stable, explicit difference equations can be used to generate the turbulence.
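The difference-equation idea in the abstract can be illustrated with the simplest rational-spectrum model: a first-order recursion whose output has an exponential correlation. All parameters below are illustrative, not from the paper:

```python
import math, random

def turbulence_series(n, length_scale, speed, dt, sigma, seed=2):
    """First-order (AR(1)) difference equation producing turbulence samples
    with exponential correlation, i.e. the simplest rational spectrum.
    length_scale: turbulence length scale; speed: vehicle speed; dt: step."""
    a = math.exp(-speed * dt / length_scale)  # per-step correlation
    b = sigma * math.sqrt(1.0 - a * a)        # keeps stationary variance sigma^2
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = a * x + b * rng.gauss(0.0, 1.0)   # stable explicit recursion
        out.append(x)
    return out
```

Higher-order rational approximations to irrational spectra (e.g. von Karman) lead to the same kind of stable, explicit recursions, just with more coefficients and more cross-spectral coupling terms.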
Simple formalism for efficient derivatives and multi-determinant expansions in quantum Monte Carlo
Filippi, Claudia; Assaraf, R.; Moroni, S.
2016-01-01
We present a simple and general formalism to compute efficiently the derivatives of a multi-determinant Jastrow-Slater wave function, the local energy, the interatomic forces, and similar quantities needed in quantum Monte Carlo. Through a straightforward manipulation of matrices evaluated on the
Monte Carlo calculation of efficiencies of whole-body counter, by microcomputer
International Nuclear Information System (INIS)
Fernandes Neto, J.M.
1987-01-01
A computer program using the Monte Carlo method for calculating whole-body counting efficiencies for radiation distributed in the body is presented. An analytical simulator (for man and for child) incorporating ⁹⁹ᵐTc, ¹³¹I and ⁴²K is used. (M.A.C.) [pt]
High-efficiency wavefunction updates for large scale Quantum Monte Carlo
Kent, Paul; McDaniel, Tyler; Li, Ying Wai; D'Azevedo, Ed
Within ab initio Quantum Monte Carlo (QMC) simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunctions. The evaluation of each Monte Carlo move requires finding the determinant of a dense matrix, which is traditionally iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. For calculations with thousands of electrons, this operation dominates the execution profile. We propose a novel rank-k delayed update scheme. This strategy enables probability evaluation for multiple successive Monte Carlo moves, with application of accepted moves to the matrices delayed until after a predetermined number of moves, k. Accepted events grouped in this manner are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency. This procedure does not change the underlying Monte Carlo sampling or the sampling efficiency. For large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order of magnitude speedups can be obtained on both multi-core CPUs and on GPUs, making this algorithm highly advantageous for current petascale and future exascale computations.
Energy Technology Data Exchange (ETDEWEB)
Hurtado, S. [Servicio de Radioisotopos, Centro de Investigacion, Tecnologia e Innovacion (CITIUS), Universidad de Sevilla, Avda. Reina Mercedes s/n, 41012 Sevilla (Spain)], E-mail: shurtado@us.es; Garcia-Leon, M. [Departamento de Fisica Atomica, Molecular y Nuclear, Facultad de Fisica, Universidad de Sevilla, Aptd. 1065, 41080 Sevilla (Spain); Garcia-Tenorio, R. [Departamento de Fisica Aplicada II, E.T.S.A. Universidad de Sevilla, Avda, Reina Mercedes 2, 41012 Sevilla (Spain)
2008-09-11
In this work several mathematical functions are compared in order to perform the full-energy peak efficiency calibration of HPGe detectors, using a 126 cm³ HPGe coaxial detector and gamma-ray energies ranging from 36 to 1460 keV. Statistical tests and Monte Carlo simulations were used to study the performance of the fitting curve equations. Furthermore, fitting these complex functional forms to experimental data is a non-linear multi-parameter minimization problem. In gamma-ray spectrometry, non-linear least-squares fitting algorithms (Levenberg-Marquardt method) usually provide fast convergence while minimizing χ²_R; however, they sometimes reach only local minima. In order to overcome that shortcoming, a hybrid algorithm based on simulated annealing (HSA) techniques is proposed. Additionally, a new function is suggested that models the efficiency curve of germanium detectors in gamma-ray spectrometry.
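The local-minimum problem motivating the hybrid approach can be illustrated with a bare-bones simulated annealing minimizer for an efficiency-curve fit. The log-polynomial model form, cooling schedule and step size below are assumptions for illustration, not the paper's HSA algorithm:

```python
import math, random

def chi2(params, data):
    """Sum of squared residuals for ln(eff) = a + b*ln(E) + c*ln(E)^2,
    with data given as (energy_keV, measured_efficiency) pairs."""
    a, b, c = params
    return sum((math.log(eff) - (a + b * math.log(E) + c * math.log(E)**2))**2
               for E, eff in data)

def anneal(data, start, steps=20000, T0=1.0, seed=1):
    """Minimize chi2 by simulated annealing with a linear cooling schedule;
    uphill moves are accepted with Metropolis probability exp(-dF/T)."""
    rng = random.Random(seed)
    cur, cur_f = list(start), chi2(start, data)
    best, best_f = list(cur), cur_f
    for n in range(steps):
        T = T0 * (1.0 - n / steps) + 1e-9            # cool towards zero
        cand = [x + rng.gauss(0.0, 0.05) for x in cur]
        f = chi2(cand, data)
        if f < cur_f or rng.random() < math.exp((cur_f - f) / T):
            cur, cur_f = cand, f
            if f < best_f:
                best, best_f = list(cand), f
    return best, best_f
```

A hybrid scheme in the spirit of the abstract would follow the annealing stage with a local Levenberg-Marquardt polish started from `best`.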
International Nuclear Information System (INIS)
Courtine, Fabien
2007-01-01
The thesis proceeded in the context of dating by thermoluminescence. This method requires laboratory measurements of the natural radioactivity, for which we have been using a germanium spectrometer. To refine its calibration, we modelled it using a Monte Carlo computer code: Geant4. We developed a geometrical model which takes into account the presence of inactive zones and zones of poor charge collection within the germanium crystal. The parameters of the model were adjusted by comparison with experimental results obtained with a ¹³⁷Cs source. It appeared that the form of the inactive zones is less simple than presented in the specialized literature. The model was extended to the case of a more complex source, with cascade effects and angular correlations between photons: ⁶⁰Co. Lastly, applied to extended sources, it gave correct results and allowed us to validate the simulation of matrix effects. (author)
DETEF a Monte Carlo system for the calculation of gamma spectrometers efficiency
International Nuclear Information System (INIS)
Cornejo, N.; Mann, G.
1996-01-01
The Monte Carlo program DETEF calculates the efficiency of cylindrical NaI, CsI, Ge, or Si detectors for photon energies up to 2 MeV and several sample geometries. The sources can be point, plane, cylindrical, or rectangular. The energy spectrum appears on the screen simultaneously with the statistical simulation. The calculated and experimentally estimated efficiencies agree well within the standard deviation intervals.
Lawler, J. E.; Den Hartog, E. A.
2018-03-01
The Ar I and II branching ratio calibration method is discussed with the goal of improving the technique. This method of establishing a relative radiometric calibration is important in ongoing research to improve atomic transition probabilities for quantitative spectroscopy in astrophysics and other fields. Specific suggestions are presented along with Monte Carlo simulations of wavelength dependent effects from scattering/reflecting of photons in a hollow cathode.
Shielding calculations for neutron calibration bunker using Monte Carlo code MCNP-4C
International Nuclear Information System (INIS)
Suman, H.; Kharita, M. H.; Yousef, S.
2008-02-01
In this work, the dose arising from an Am-Be source of 10⁸ neutrons/s strength located inside the newly constructed neutron calibration bunker in the National Radiation Metrology Laboratories was calculated using the MCNP-4C code. It was found that the shielding of the neutron calibration bunker is sufficient, as the calculated dose in inhabited areas is not expected to exceed 0.183 μSv/h, which is 10 times smaller than the regulatory dose constraints. Hence, it can be concluded that the calibration bunker can house, from the external exposure point of view, an Am-Be neutron source of 10⁹ neutrons/s strength. It turned out that the neutron dose from the source is a few times greater than the photon dose. The sky shine was found to contribute significantly to the total dose; this contribution was estimated to be 60% of the neutron dose and 10% of the photon dose. The systematic uncertainties due to various factors were assessed and found to be between 4 and 10% due to concrete density variations, 15% due to the dose estimation method, and 4-10% due to weather variations (temperature and moisture). The calculated dose was highly sensitive to changes in the source spectrum; the uncertainty due to the use of two different neutron spectra is about 70%. (author)
Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo
McDaniel, T.; D'Azevedo, E. F.; Li, Y. W.; Wong, K.; Kent, P. R. C.
2017-11-01
Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is, therefore, formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple rank delayed update scheme. This strategy enables probability evaluation with an application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo, where the acceptance ratio is high, order of magnitude improvements in the update time can be obtained on both multi-core central processing units and graphical processing units.
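The rank-1 baseline that the delayed scheme generalizes is the Sherman-Morrison update of an inverse when one row of the Slater matrix changes (one electron moves). A minimal sketch of that baseline (the example matrices are illustrative); the paper's contribution is to accumulate K such accepted changes and apply them en bloc via matrix-matrix (Woodbury-style) operations:

```python
import numpy as np

def replace_row(Ainv, A, r, new_row):
    """Sherman-Morrison rank-1 update of A^{-1} when row r of A is replaced
    by new_row, i.e. A_new = A + e_r (new_row - A[r])^T."""
    v = new_row - A[r]                 # change in row r
    Ainv_er = Ainv[:, r]               # A^{-1} e_r
    vT_Ainv = v @ Ainv                 # v^T A^{-1}
    # (A + e_r v^T)^{-1} = A^{-1} - (A^{-1} e_r)(v^T A^{-1}) / (1 + v^T A^{-1} e_r)
    return Ainv - np.outer(Ainv_er, vT_Ainv) / (1.0 + vT_Ainv[r])
```

Each call costs O(N²) in matrix-vector work; grouping K accepted rows before touching the inverse trades many such low-intensity updates for one high-intensity matrix-matrix operation, which is what yields the reported speedups on CPUs and GPUs.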
Energy Technology Data Exchange (ETDEWEB)
Han, Gi Yeong; Kim, Song Hyun; Kim, Do Hyun; Shin, Chang Ho; Kim, Jong Kyung [Hanyang Univ., Seoul (Korea, Republic of)
2014-05-15
In this study, how the geometry splitting strategy affects the calculation efficiency was analyzed. In this study, a geometry splitting method was proposed to increase the calculation efficiency in Monte Carlo simulation. First, the analysis of the neutron distribution characteristics in a deep penetration problem was performed. Then, considering the neutron population distribution, a geometry splitting method was devised. Using the proposed method, the FOMs with benchmark problems were estimated and compared with the conventional geometry splitting strategy. The results show that the proposed method can considerably increase the calculation efficiency in using geometry splitting method. It is expected that the proposed method will contribute to optimizing the computational cost as well as reducing the human errors in Monte Carlo simulation. Geometry splitting in Monte Carlo (MC) calculation is one of the most popular variance reduction techniques due to its simplicity, reliability and efficiency. For the use of the geometry splitting, the user should determine locations of geometry splitting and assign the relative importance of each region. Generally, the splitting parameters are decided by the user's experience. However, in this process, the splitting parameters can ineffectively or erroneously be selected. In order to prevent it, there is a recommendation to help the user eliminate guesswork, which is to split the geometry evenly. And then, the importance is estimated by a few iterations for preserving population of particle penetrating each region. However, evenly geometry splitting method can make the calculation inefficient due to the change in mean free path (MFP) of particles.
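The weight bookkeeping that makes geometry splitting unbiased can be shown in a few lines: a particle crossing into a region of higher importance is split, one crossing into a lower-importance region is rouletted, and in both cases the expected weight is preserved. This is a generic MCNP-style sketch under assumed importances, not the authors' proposed splitting strategy:

```python
import random

def cross_boundary(particle, imp_from, imp_to, rng=random):
    """Split or roulette a particle crossing between regions of different
    importance, preserving its expected weight."""
    ratio = imp_to / imp_from
    w = particle["weight"]
    if ratio >= 1.0:                       # more important region: split
        n = int(ratio)
        if rng.random() < ratio - n:       # stochastic rounding, E[n] = ratio
            n += 1
        return [{"weight": w / ratio} for _ in range(n)]
    if rng.random() < ratio:               # less important region: roulette
        return [{"weight": w / ratio}]     # survivor carries the lost weight
    return []                              # killed
```

The paper's point is about *where* to place the surfaces and how to set the importances: spacing them evenly ignores the changing mean free path, while spacing them by the neutron population distribution keeps roughly constant particle populations per region.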
International Nuclear Information System (INIS)
Querol, A.; Gallardo, S.; Ródenas, J.; Verdú, G.
2015-01-01
In environmental radioactivity measurements, High Purity Germanium (HPGe) detectors are commonly used due to their excellent resolution. Efficiency calibration of detectors is essential to determine activity of radionuclides. The Monte Carlo method has been proved to be a powerful tool to complement efficiency calculations. In aged detectors, efficiency is partially deteriorated due to the dead layer increasing and consequently, the active volume decreasing. The characterization of the radiation transport in the dead layer is essential for a realistic HPGe simulation. In this work, the MCNP5 code is used to calculate the detector efficiency. The F4MESH tally is used to determine the photon and electron fluence in the dead layer and the active volume. The energy deposited in the Ge has been analyzed using the *F8 tally. The F8 tally is used to obtain spectra and to calculate the detector efficiency. When the photon fluence and the energy deposition in the crystal are known, some unfolding methods can be used to estimate the activity of a given source. In this way, the efficiency is obtained and serves to verify the value obtained by other methods. - Highlights: • The MCNP5 code is used to estimate the dead layer thickness of an HPGe detector. • The F4MESH tally is applied to verify where interactions occur into the Ge crystal. • PHD and the energy deposited are obtained with F8 and *F8 tallies, respectively. • An average dead layer between 70 and 80 µm is obtained for the HPGe studied. • The efficiency is calculated applying the TSVD method to the response matrix.
An efficient feedback calibration algorithm for direct imaging radio telescopes
Beardsley, Adam P.; Thyagarajan, Nithyanandan; Bowman, Judd D.; Morales, Miguel F.
2017-10-01
We present the E-field Parallel Imaging Calibration (EPICal) algorithm, which addresses the need for a fast calibration method for direct imaging radio astronomy correlators. Direct imaging involves a spatial fast Fourier transform of antenna signals, alleviating an O(N_a²) computational bottleneck typical in radio correlators and yielding a more gentle O(N_g log₂ N_g) scaling, where N_a is the number of antennas in the array and N_g is the number of grid points in the imaging analysis. This can save orders of magnitude in computation cost for next generation arrays consisting of hundreds or thousands of antennas. However, because antenna signals are mixed in the imaging correlator without creating visibilities, gain correction must be applied prior to imaging, rather than on visibilities post-correlation. We develop the EPICal algorithm to form gain solutions quickly and without ever forming visibilities. This method scales as the number of antennas, and produces results comparable to those from visibilities. We use simulations to demonstrate the EPICal technique and study the noise properties of our gain solutions, showing they are similar to visibility-based solutions in realistic situations. By applying EPICal to 2 s of Long Wavelength Array data, we achieve a 65 per cent dynamic range improvement compared to uncalibrated images, showing this algorithm is a promising solution for next generation instruments.
International Nuclear Information System (INIS)
Hoogenboom, J. E.
2004-01-01
Although Russian roulette is applied very often in Monte Carlo calculations, not much literature exists on its quantitative influence on the variance and efficiency of a Monte Carlo calculation. Elaborating on the work of Lux and Koblinger using moment equations, new relevant equations are derived to calculate the variance of a Monte Carlo simulation using Russian roulette. To demonstrate its practical application the theory is applied to a simplified transport model resulting in explicit analytical expressions for the variance of a Monte Carlo calculation and for the expected number of collisions per history. From these expressions numerical results are shown and compared with actual Monte Carlo calculations, showing an excellent agreement. By considering the number of collisions in a Monte Carlo calculation as a measure of the CPU time, also the efficiency of the Russian roulette can be studied. It opens the way for further investigations, including optimization of Russian roulette parameters. (authors)
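The mechanism whose variance impact the paper analyzes is simple to state in code: below a weight threshold, a particle survives with some probability and is reweighted so that the expected weight is unchanged. A generic sketch (threshold and survival probability are the tunable parameters the paper's framework would optimize; the values here are illustrative):

```python
import random

def russian_roulette(weight, threshold=0.1, survival=0.5, rng=random):
    """Kill low-weight particles with probability 1 - survival; survivors
    are reweighted by 1/survival so the expected weight is unchanged."""
    if weight >= threshold:
        return weight             # above threshold: leave untouched
    if rng.random() < survival:
        return weight / survival  # survivor carries the lost weight
    return 0.0                    # history terminated
```

Termination is unbiased but adds variance; since killed histories also cost no further collisions (CPU time), the efficiency trade-off depends on the threshold and survival parameters, which is exactly the optimization question the moment-equation analysis opens up.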
International Nuclear Information System (INIS)
Hsu, H.H.; Dowdy, E.J.; Estes, G.P.; Lucas, M.C.; Mack, J.M.; Moss, C.E.; Hamm, M.E.
1983-01-01
Monte Carlo calculations of a bismuth-germanate scintillator's efficiency agree closely with experimental measurements. For this comparison, we studied the absolute gamma-ray photopeak efficiency of a scintillator (7.62 cm long by 7.62 cm in diameter) at several gamma-ray energies from 166 to 2615 keV and at distances from 0.5 to 152.4 cm. Computer calculations were done in a two-dimensional cylindrical geometry with the Monte Carlo coupled photon-electron code CYLTRAN. For the experiment, we measured 11 sources with simple spectra and precisely known strengths. The average deviation between the calculations and the measurements is 3%. Our calculated results also agree closely with recently published calculations.
Efficiency determination of whole-body counters by Monte Carlo method, using a microcomputer
International Nuclear Information System (INIS)
Fernandes Neto, J.M.
1987-01-01
A computer program using the Monte Carlo method to calculate the whole-body counting efficiency for radiation sources distributed in the human body is developed. A simulator (phantom) of human proportions was used, which was filled with a uniform solution containing a known quantity of radioisotopes. 99mTc, 131I and 42K were used in this experiment, and their activities were compared with those obtained by liquid scintillation counting. (C.G.C.) [pt
Calculation of neutron detection efficiency for the thick lithium glass using Monte Carlo method
International Nuclear Information System (INIS)
Tang Guoyou; Bao Shanglian; Li Yulin; Zhong Wenguan
1989-08-01
The neutron detection efficiencies of an NE912 lithium glass (45 mm in diameter, 9.55 mm thick) and of two ST601 lithium glasses (40 mm in diameter, 3 and 10 mm thick, respectively) have been calculated with a Monte Carlo computer code. The energy range in the calculation is 10 keV to 2.0 MeV. The effect of the time delay caused by neutron multiple scattering in the detectors (i.e., on the prompt neutron detection efficiency) has been considered.
Monte Carlo simulation of gamma-ray total counting efficiency for a Phoswich detector
Energy Technology Data Exchange (ETDEWEB)
Yalcin, S. [Education Faculty, Kastamonu University, 37200 Kastamonu (Turkey)], E-mail: syalcin@kastamonu.edu.tr; Gurler, O. [Department of Physics, Faculty of Arts and Sciences, Uludag University, Gorukle Campus, 16059 Bursa (Turkey); Gundogdu, O. [Department of Physics, Faculty of Engineering and Physical Sciences, University of Surrey, Guildford, GU2 7XH (United Kingdom); NCCPM, Medical Physics, Royal Surrey County Hospital, Guildford, GU2 7XX (United Kingdom); Kaynak, G. [Department of Physics, Faculty of Arts and Sciences, Uludag University, Gorukle Campus, 16059 Bursa (Turkey)
2009-01-15
The LB 1000-PW detector is mainly used for determining total alpha, beta and gamma activity of low activity natural sources such as water, soil, air filters and any other environmental sources. Detector efficiency needs to be known in order to measure the absolute activity of such samples. This paper presents results on the total gamma counting efficiency of a Phoswich detector from point and disk sources. The directions of photons emitted from the source were determined by Monte Carlo techniques and the true path lengths in the detector were determined by analytical equations depending on photon directions. Results are tabulated for various gamma energies.
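The two-step scheme described here (Monte Carlo sampling of photon directions, analytic path lengths through the detector) can be sketched for an idealised case: an on-axis point source in front of a bare cylindrical detector. All dimensions and the attenuation coefficient below are illustrative assumptions, not the LB 1000-PW geometry or the paper's tabulated values.

```python
import numpy as np

rng = np.random.default_rng(2)

def total_efficiency(n, d=5.0, R=2.5, t=5.0, mu=0.06):
    """MC estimate of total counting efficiency for an on-axis point source.

    Directions are sampled isotropically (the Monte Carlo step); the chord
    length through the cylinder is then computed analytically from the
    direction, as in the abstract. d = source-to-face distance, R = detector
    radius, t = thickness (cm); mu is a hypothetical linear attenuation
    coefficient (1/cm)."""
    cos_t = rng.uniform(-1.0, 1.0, n)          # isotropic: uniform in cos(theta)
    hits = cos_t > d / np.hypot(d, R)          # ray must strike the front face
    ct = cos_t[hits]
    st = np.sqrt(1.0 - ct**2)
    r_front = d * st / ct                      # entry radius on the front face
    exit_back = (d + t) * st / ct <= R         # does the ray reach the back face?
    L = np.where(exit_back,
                 t / ct,                       # exits through the back face
                 (R - r_front) / np.maximum(st, 1e-12))   # exits through the side
    eff = np.zeros(n)
    eff[hits] = 1.0 - np.exp(-mu * L)          # interaction probability
    return eff.mean()                          # averaged over the full 4*pi

eps = total_efficiency(200_000)
```

The split between "Monte Carlo for directions" and "analytic path lengths" keeps the variance low: the only sampled quantity is the direction, and everything downstream of it is deterministic.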
Mathematical efficiency calibration with uncertain source geometries using smart optimization
International Nuclear Information System (INIS)
Menaa, N.; Bosko, A.; Bronson, F.; Venkataraman, R.; Russ, W. R.; Mueller, W.; Nizhnik, V.; Mirolo, L.
2011-01-01
The In Situ Object Counting Software (ISOCS), a mathematical method developed by CANBERRA, is a well-established technique for computing High Purity Germanium (HPGe) detector efficiencies for a wide variety of source shapes and sizes. In the ISOCS method, the user needs to input the geometry-related parameters, such as the source dimensions, matrix composition and density, along with the source-to-detector distance. In many applications, the source dimensions, the matrix material and the density may not be well known. Under such circumstances, the efficiencies may not be very accurate, since the modeled source geometry may not be representative of the measured geometry. CANBERRA developed an efficiency optimization software known as 'Advanced ISOCS' that varies the poorly known parameters within user-specified intervals and determines the optimal efficiency shape and magnitude based on available benchmarks in the measured spectra. The benchmarks could be results from isotopic codes such as MGAU, MGA, IGA, or FRAM, activities from multi-line nuclides, and multiple counts of the same item taken in different geometries (from the side, bottom, top, etc.). The efficiency optimization is carried out using either a random search based on standard probability distributions, or numerical techniques that carry out a more directed (referred to as 'smart' in this paper) search. Measurements were carried out using representative source geometries and radionuclide distributions. The radionuclide activities were determined using the optimum efficiency and compared against the true activities. The 'Advanced ISOCS' method has many applications, among which are safeguards, decommissioning and decontamination, non-destructive assay systems, and nuclear reactor outage maintenance. (authors)
Energy Technology Data Exchange (ETDEWEB)
Dixon, D.A., E-mail: ddixon@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663, MS P365, Los Alamos, NM 87545 (United States); Prinja, A.K., E-mail: prinja@unm.edu [Department of Nuclear Engineering, MSC01 1120, 1 University of New Mexico, Albuquerque, NM 87131-0001 (United States); Franke, B.C., E-mail: bcfrank@sandia.gov [Sandia National Laboratories, Albuquerque, NM 87123 (United States)
2015-09-15
This paper presents the theoretical development and numerical demonstration of a moment-preserving Monte Carlo electron transport method. Foremost, a full implementation of the moment-preserving (MP) method within the Geant4 particle simulation toolkit is demonstrated. Beyond implementation details, it is shown that the MP method is a viable alternative to the condensed history (CH) method for inclusion in current and future generation transport codes through demonstration of the key features of the method including: systematically controllable accuracy, computational efficiency, mathematical robustness, and versatility. A wide variety of results common to electron transport are presented illustrating the key features of the MP method. In particular, it is possible to achieve accuracy that is statistically indistinguishable from analog Monte Carlo, while remaining up to three orders of magnitude more efficient than analog Monte Carlo simulations. Finally, it is shown that the MP method can be generalized to any applicable analog scattering DCS model by extending previous work on the MP method beyond analytical DCSs to the partial-wave (PW) elastic tabulated DCS data.
Chapoutier, Nicolas; Mollier, François; Nolin, Guillaume; Culioli, Matthieu; Mace, Jean-Reynald
2017-09-01
In the context of the rise of Monte Carlo transport calculations for all kinds of applications, AREVA recently improved its suite of engineering tools in order to produce an efficient Monte Carlo workflow. Monte Carlo codes such as MCNP or TRIPOLI are recognized as reference codes for a large range of radiation transport problems. However, the inherent drawbacks of these codes - laborious input file creation and long computation times - contrast with the maturity of their treatment of the physical phenomena. The goal of the recent AREVA developments was to reach an efficiency similar to that of other mature engineering disciplines such as finite element analysis (e.g. structural or fluid dynamics). Among the main objectives, the creation of a graphical user interface offering CAD tools for geometry creation and other graphical features dedicated to the radiation field (source definition, tally definition) has been achieved. Computation times are drastically reduced compared to a few years ago thanks to the use of massively parallel runs and, above all, the implementation of hybrid variance reduction techniques. Engineering teams are now able to deliver much more prompt support to any nuclear project dealing with reactors or fuel cycle facilities, from the conceptual phase to decommissioning.
Monte Carlo evaluation of the neutron detection efficiency of a superheated drop detector
Energy Technology Data Exchange (ETDEWEB)
Gualdrini, G.F. [ENEA, Centro Ricerche 'Ezio Clementel', Bologna (Italy). Dipt. Ambiente; D'Errico, F.; Noccioni, P. [Pisa Univ. (Italy). Dipt. di Costruzioni Meccaniche e Nucleari
1997-03-01
Neutron dosimetry has recently gained renewed attention, following concerns about the exposure of crew members on board aircraft and of workers around the increasing number of high-energy accelerators for medical and research purposes. At the same time, the new operational quantities for radiation dosimetry introduced by the ICRU and the ICRP, aiming at a unified metrological system applicable to all types of radiation exposure, created the need to update current devices in order to meet the new requirements. Superheated Drop (Bubble) Detectors (SDD) offer an alternative approach to neutron radiation protection dosimetry. The SDDs are currently studied within a large collaborative effort involving Yale University, New Haven CT, the University of Pisa (IT), the Physikalisch-Technische Bundesanstalt, Braunschweig (D), and the ENEA (Italian National Agency for New Technologies, Energy and the Environment) Centre of Bologna. The detectors were characterised through calibrations with monoenergetic neutron beams and, where experimental investigations were inadequate or impossible, such as in the intermediate energy range, parametric Monte Carlo calculations of the response were carried out. This report describes the general characteristics of the SDDs, along with the Monte Carlo computations of the energy response and a comparison with the experimental results.
Farr, W. M.; Mandel, I.; Stevens, D.
2015-01-01
Selection among alternative theoretical models given an observed dataset is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, yet cannot efficiently explore both parameter spaces at once. Thus, a naive jump between parameter spaces is unlikely to be accepted in the Markov chain Monte Carlo (MCMC) algorithm and convergence is correspondingly slow. Here, we demonstrate an interpolation technique that uses samples from single-model MCMCs to propose intermodel jumps from an approximation to the single-model posterior of the target parameter space. The interpolation technique, based on a kD-tree data structure, is adaptive and efficient in modest dimensionality. We show that our technique leads to improved convergence over naive jumps in an RJMCMC, and compare it to other proposals in the literature to improve the convergence of RJMCMCs. We also demonstrate the use of the same interpolation technique as a way to construct efficient 'global' proposal distributions for single-model MCMCs without prior knowledge of the structure of the posterior distribution, and discuss improvements that permit the method to be used in higher dimensional spaces efficiently. PMID:26543580
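The idea of proposing jumps from stored single-model samples can be illustrated with a minimal sketch on a toy 2-D Gaussian "posterior". A brute-force nearest-neighbour search stands in for the paper's kD-tree (which is what makes the lookup efficient at scale), and the local-box proposal is a deliberate simplification of the interpolated density.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for stored single-model MCMC samples: a correlated 2-D Gaussian.
cov = np.array([[1.0, 0.8],
                [0.8, 1.0]])
samples = rng.multivariate_normal([0.0, 0.0], cov, size=2000)

def interpolated_proposal(stored, k=10):
    """Draw an intermodel jump proposal from stored posterior samples.

    A stored sample is chosen at random and the proposal is drawn uniformly
    within the bounding box of its k nearest neighbours -- a crude local
    approximation to the single-model posterior. The brute-force distance
    computation here is the O(N) stand-in for a kD-tree query."""
    centre = stored[rng.integers(len(stored))]
    d2 = np.sum((stored - centre) ** 2, axis=1)
    nbrs = stored[np.argsort(d2)[:k]]
    lo, hi = nbrs.min(axis=0), nbrs.max(axis=0)
    return rng.uniform(lo, hi)                 # uniform draw in the local box

props = np.array([interpolated_proposal(samples) for _ in range(500)])
corr = np.corrcoef(props.T)[0, 1]              # proposals inherit the correlation
```

Because the proposals track the stored samples, they land in regions of non-negligible posterior mass, which is precisely why such jumps are accepted far more often than naive draws from a broad prior.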
Schwarz, Karsten; Rieger, Heiko
2013-03-01
We present an efficient Monte Carlo method to simulate reaction-diffusion processes with spatially varying particle annihilation or transformation rates as it occurs for instance in the context of motor-driven intracellular transport. Like Green's function reaction dynamics and first-passage time methods, our algorithm avoids small diffusive hops by propagating sufficiently distant particles in large hops to the boundaries of protective domains. Since for spatially varying annihilation or transformation rates the single particle diffusion propagator is not known analytically, we present an algorithm that generates efficiently either particle displacements or annihilations with the correct statistics, as we prove rigorously. The numerical efficiency of the algorithm is demonstrated with an illustrative example.
International Nuclear Information System (INIS)
Goshtasbi, K.; Ahmadi, M; Naeimi, Y.
2008-01-01
Locating the critical slip surface and the associated minimum factor of safety are two complementary parts of a slope stability analysis. A large number of computer programs exist to solve slope stability problems. Most of these programs, however, have used inefficient and unreliable search procedures to locate the global minimum factor of safety. This paper presents an efficient and reliable method to determine the global minimum factor of safety, coupled with a modified version of the Monte Carlo technique. Examples are presented to illustrate the reliability of the proposed method.
Cheng, Qin-Bo; Chen, Xi; Xu, Chong-Yu; Reinhardt-Imjela, Christian; Schulte, Achim
2014-11-01
In this study, the likelihood functions for uncertainty analysis of hydrological models are compared and improved through the following steps: (1) the equivalent relationship between the Nash-Sutcliffe Efficiency coefficient (NSE) and the likelihood function with Gaussian independent and identically distributed residuals is proved; (2) a new estimation method for the Box-Cox transformation (BC) parameter is developed to improve the effective elimination of the heteroscedasticity of model residuals; and (3) three likelihood functions-NSE, Generalized Error Distribution with BC (BC-GED) and Skew Generalized Error Distribution with BC (BC-SGED)-are applied for SWAT-WB-VSA (Soil and Water Assessment Tool - Water Balance - Variable Source Area) model calibration in the Baocun watershed, Eastern China. Performances of the calibrated models are compared using the observed river discharges and groundwater levels. The results show that the minimum variance constraint can effectively estimate the BC parameter. The form of the likelihood function significantly impacts the calibrated parameters and the simulated results of the high and low flow components. SWAT-WB-VSA with the NSE approach simulates floods well but baseflow badly, owing to the assumption of a Gaussian error distribution, in which large errors have low probability while small errors around zero are nearly equiprobable. By contrast, SWAT-WB-VSA with the BC-GED or BC-SGED approach mimics baseflow well, as confirmed by the groundwater level simulation. The assumption of skewness of the error distribution may be unnecessary, because all the results of the BC-SGED approach are nearly the same as those of the BC-GED approach.
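Step (1), the equivalence between NSE and the Gaussian i.i.d. likelihood, can be checked numerically: with the error variance profiled out at its maximum-likelihood value sigma^2 = SSE/n, the log-likelihood is a strictly decreasing function of the sum of squared errors, so it ranks candidate simulations exactly as NSE does. The synthetic series below is illustrative only, not the Baocun watershed data.

```python
import numpy as np

rng = np.random.default_rng(4)
obs = 10 + np.cumsum(rng.standard_normal(200))        # synthetic "observed" series

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 - SSE / (sum of squares about the mean)."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def gauss_loglik(sim, obs):
    """Profile log-likelihood for i.i.d. Gaussian residuals.

    With sigma^2 at its MLE, sigma^2 = SSE/n, the log-likelihood collapses to
        logL = -(n/2) * (log(2*pi*SSE/n) + 1),
    a strictly decreasing function of SSE -- hence maximising it is equivalent
    to maximising NSE."""
    n = len(obs)
    sse = np.sum((obs - sim) ** 2)
    return -0.5 * n * (np.log(2 * np.pi * sse / n) + 1.0)

sim_good = obs + 0.5 * rng.standard_normal(200)       # two candidate simulations
sim_poor = obs + 2.0 * rng.standard_normal(200)
```

Both criteria must therefore prefer `sim_good` over `sim_poor`; the two objective functions can only differ in the uncertainty they imply, not in the optimum they select.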
Efficiency calibration of a HPGe detector for [18F] FDG activity measurements
International Nuclear Information System (INIS)
Fragoso, Maria da Conceicao de Farias; Lacerda, Isabelle Viviane Batista de; Albuquerque, Antonio Morais de Sa
2013-01-01
The radionuclide 18F, in the form of fluorodeoxyglucose (FDG), is the most used radiopharmaceutical for Positron Emission Tomography (PET). Due to the increasing demand for [18F]FDG, it is important to ensure high-quality activity measurements in nuclear medicine practice. Therefore, standardized reference sources are necessary to calibrate 18F measuring systems. Usually, the activity measurements are performed in re-entrant ionization chambers, also known as radionuclide calibrators. Among the existing alternatives for the standardization of radioactive sources, the method known as gamma spectrometry is widely used for short-lived radionuclides, since it is essential to minimize source preparation time. The purpose of this work was to perform the standardization of the [18F]FDG solution by gamma spectrometry. In addition, the reference sources calibrated by this method can be used to calibrate and test the radionuclide calibrators of the Divisao de Producao de Radiofarmacos (DIPRA) of the Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE). Standard sources of 152Eu, 137Cs and 68Ge were used for the efficiency calibration of the spectrometer system. As a result, the efficiency curve as a function of energy was determined over a wide energy range, from 122 to 1408 keV. Reference sources obtained by this method can be used in [18F]FDG activity measurement comparison programs for PET services located in the Brazilian Northeast region. (author)
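A common way to build such an efficiency curve is a low-order polynomial fit of ln(efficiency) against ln(energy). The abstract does not state the fit form the authors used, and the data points below are invented stand-ins for 152Eu/137Cs-type calibration lines, not the paper's measurements.

```python
import numpy as np

# Hypothetical (energy in keV, full-energy-peak efficiency) calibration pairs
# of the kind obtained from 152Eu, 137Cs and 68Ge standards. Illustrative only.
energy = np.array([122, 245, 344, 511, 662, 779, 964, 1112, 1408], float)
eff    = np.array([2.1e-2, 1.4e-2, 1.1e-2, 8.0e-3, 6.6e-3,
                   5.8e-3, 4.9e-3, 4.4e-3, 3.7e-3])

# Fit ln(eff) as a quadratic in ln(E) -- a standard HPGe parametrisation
# above ~100 keV, where the curve is smooth and slowly varying.
coef = np.polyfit(np.log(energy), np.log(eff), deg=2)

def efficiency(e_kev):
    """Interpolate the calibration curve at an arbitrary energy (keV)."""
    return np.exp(np.polyval(coef, np.log(e_kev)))

eff_511 = efficiency(511.0)    # e.g. the annihilation line relevant for 18F
```

Fitting in log-log space keeps the efficiency positive by construction and turns the roughly power-law energy dependence into a nearly straight line, so a low polynomial order suffices.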
Hutton, C.; Wagener, T.; Freer, J. E.; Duffy, C.; Han, D.
2015-12-01
Distributed models offer the potential to resolve catchment systems in more detail, and therefore simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models may contain a large number of model parameters which are computationally expensive to calibrate. Even when calibration is possible, insufficient data can result in model parameter and structural equifinality. In order to help reduce the space of feasible models and supplement traditional outlet discharge calibration data, semi-quantitative information (e.g. knowledge of relative groundwater levels), may also be used to identify behavioural models when applied to constrain spatially distributed predictions of states and fluxes. The challenge is to combine these different sources of information together to identify a behavioural region of state-space, and efficiently search a large, complex parameter space to identify behavioural parameter sets that produce predictions that fall within this behavioural region. Here we present a methodology to incorporate different sources of data to efficiently calibrate distributed catchment models. Metrics of model performance may be derived from multiple sources of data (e.g. perceptual understanding and measured or regionalised hydrologic signatures). For each metric, an interval or inequality is used to define the behaviour of the catchment system, accounting for data uncertainties. These intervals are then combined to produce a hyper-volume in state space. The state space is then recast as a multi-objective optimisation problem, and the Borg MOEA is applied to first find, and then populate the hyper-volume, thereby identifying acceptable model parameter sets. We apply the methodology to calibrate the PIHM model at Plynlimon, UK by incorporating perceptual and hydrologic data into the calibration problem. Furthermore, we explore how to improve calibration efficiency through search initialisation from shorter model runs.
International Nuclear Information System (INIS)
Parent, L; Fielding, A L; Dance, D R; Seco, J; Evans, P M
2007-01-01
For EPID dosimetry, the calibration should ensure that all pixels have a similar response to a given irradiation. A calibration method (MC), using an analytical fit of a Monte Carlo simulated flood field EPID image to correct for the flood field image pixel intensity shape, was proposed. It was compared with the standard flood field calibration (FF), with the use of a water slab placed in the beam to flatten the flood field (WS) and with a multiple field calibration where the EPID was irradiated with a fixed 10 x 10 field for 16 different positions (MF). The EPID was used in its normal configuration (clinical setup) and with an additional 3 mm copper slab (modified setup). Beam asymmetry measured with a diode array was taken into account in the MC and WS methods. For both setups, the MC method provided pixel sensitivity values within 3% of those obtained with the MF and WS methods (mean difference 2) and IMRT fields to within 3% of that obtained with the WS and MF calibrations, while differences with images calibrated with the FF method for fields larger than 10 x 10 cm^2 were up to 8%. The MC, WS and MF methods all provided a major improvement on the FF method. Advantages and drawbacks of each method were reviewed.
Simple formalism for efficient derivatives and multi-determinant expansions in quantum Monte Carlo
Energy Technology Data Exchange (ETDEWEB)
Filippi, Claudia, E-mail: c.filippi@utwente.nl [MESA+ Institute for Nanotechnology, University of Twente, P.O. Box 217, 7500 AE Enschede (Netherlands); Assaraf, Roland, E-mail: assaraf@lct.jussieu.fr [Sorbonne Universités, UPMC Univ Paris 06, CNRS, Laboratoire de Chimie Théorique CC 137-4, place Jussieu F-75252 Paris Cedex 05 (France); Moroni, Saverio, E-mail: moroni@democritos.it [CNR-IOM DEMOCRITOS, Istituto Officina dei Materiali, and SISSA Scuola Internazionale Superiore di Studi Avanzati, Via Bonomea 265, I-34136 Trieste (Italy)
2016-05-21
We present a simple and general formalism to compute efficiently the derivatives of a multi-determinant Jastrow-Slater wave function, the local energy, the interatomic forces, and similar quantities needed in quantum Monte Carlo. Through a straightforward manipulation of matrices evaluated on the occupied and virtual orbitals, we obtain an efficiency equivalent to algorithmic differentiation in the computation of the interatomic forces and the optimization of the orbital parameters. Furthermore, for a large multi-determinant expansion, the significant computational gain afforded by a recently introduced table method is here extended to the local value of any one-body operator and to its derivatives, in both all-electron and pseudopotential calculations.
Cai, Han-Jie; Zhang, Zhi-Lei; Fu, Fen; Li, Jian-Yang; Zhang, Xun-Chao; Zhang, Ya-Ling; Yan, Xue-Song; Lin, Ping; Xv, Jian-Ya; Yang, Lei
2018-02-01
The dense granular flow spallation target is a new target concept chosen for the Accelerator-Driven Subcritical (ADS) project in China. For the R&D of this target concept, a dedicated Monte Carlo (MC) program named GMT was developed to perform simulation studies of the beam-target interaction. Owing to the complexity of the target geometry, the MC simulation of particle tracks is computationally very expensive. Thus, improvement of computational efficiency is essential for detailed MC simulation studies of the dense granular target. Here we present the special design of the GMT program and its high-efficiency performance. In addition, the speedup potential of the GPU-accelerated spallation models is discussed.
Study on calibration of neutron efficiency and relative photo-yield of plastic scintillator
International Nuclear Information System (INIS)
Peng Taiping; Zhang Chuanfei; Li Rurong; Zhang Jianhua; Luo Xiaobing; Xia Yijun; Yang Zhihua
2002-01-01
A method for the calibration of the neutron efficiency and the relative photo yield of plastic scintillators is studied. The T(p,n) and D(d,n) reactions are used as neutron sources. The neutron efficiencies and the relative photo yields of plastic scintillator 1421 (40 mm in diameter and 5 mm in thickness) are determined in the neutron energy range of 0.655-5 MeV.
CREPT-MCNP code for efficiency calibration of HPGe detectors with the representative point method.
Saegusa, Jun
2008-01-01
The representative point method for the efficiency calibration of volume samples has been proposed previously. To implement the method smoothly, a calculation code named CREPT-MCNP has been developed. The code estimates the position of a representative point, which is intrinsic to each shape of volume sample. Self-absorption correction factors are also given, to correct the efficiencies measured at the representative point with a standard point source. Features of the CREPT-MCNP code are presented.
Yasin, Zafar; Negoita, Florin; Tabbassum, Sana; Borcea, Ruxandra; Kisyov, Stanimir
2017-12-01
Plastic scintillators are used in many areas of science and technology. One use of these scintillator detectors is as beam loss monitors (BLM) for a new generation of high-intensity heavy-ion superconducting linear accelerators. Operated in pulse-counting mode with rather high thresholds, and shielded by a few centimeters of lead in order to cope with radiofrequency noise and the X-ray background emitted by accelerator cavities, they preserve high efficiency for the high-energy gamma rays and neutrons produced in nuclear reactions of lost beam particles with accelerator components. Efficiency calculation and calibration of the detectors is very important before their practical use. In the present work, the efficiency of plastic scintillator detectors is simulated using FLUKA for different gamma and neutron sources such as 60Co, 137Cs and 238Pu-Be. The sources are placed at different positions around the detector. Calculated values are compared with measured values and reasonable agreement is observed.
International Nuclear Information System (INIS)
Munoz-Cobo, J.L.; Pena, J.; Chiva, S.; Mendez, S.
2007-01-01
This paper presents a study of the estimation of the correction factors for the interfacial area concentration and the bubble velocity in two-phase flow measurements using the double-sensor conductivity probe. Monte Carlo calculations of these correction factors have been performed for different values of the relative distance (ΔS/D) between the tips of the conductivity probe and different values of the relative bubble velocity fluctuation parameter. This paper also presents the Monte Carlo calculation of the expected value of the calibration factors for bubbly flow, assuming a log-normal distribution of the bubble sizes. We have computed the variation of the expected values of the calibration factors with the relative distance (ΔS/D) between the tips and the velocity fluctuation parameter. Finally, we have performed a sensitivity study of the variation of the average values of the calibration factors for bubbly flow with the geometric standard deviation of the log-normal distribution of bubble sizes. The results of these calculations show that the total interfacial area correction factor is very close to 2, and depends only weakly on the velocity fluctuation and the relative distance between tips. For the velocity calibration factor, the Monte Carlo results show that, for moderate values of the relative bubble velocity fluctuation parameter (H_max ≤ 0.3) and values of the relative distance between tips that are not too small (ΔS/D ≥ 0.2), the velocity correction factor for the double-sensor conductivity probe is close to unity, ranging from 0.96 to 1.
The quantile regression approach to efficiency measurement: insights from Monte Carlo simulations.
Liu, Chunping; Laporte, Audrey; Ferguson, Brian S
2008-09-01
In the health economics literature there is an ongoing debate over approaches used to estimate the efficiency of health systems at various levels, from the level of the individual hospital - or nursing home - up to that of the health system as a whole. The two most widely used approaches to evaluating the efficiency with which various units deliver care are non-parametric data envelopment analysis (DEA) and parametric stochastic frontier analysis (SFA). Productivity researchers tend to have very strong preferences over which methodology to use for efficiency estimation. In this paper, we use Monte Carlo simulation to compare the performance of DEA and SFA in terms of their ability to accurately estimate efficiency. We also evaluate quantile regression as a potential alternative approach. A Cobb-Douglas production function, random error terms and a technical inefficiency term with different distributions are used to calculate the observed output. The results, based on these experiments, suggest that neither DEA nor SFA can be regarded as clearly dominant, and that, depending on the quantile estimated, the quantile regression approach may be a useful addition to the armamentarium of methods for estimating technical efficiency.
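The Monte Carlo design described (Cobb-Douglas production, random noise, one-sided inefficiency) can be sketched as follows. The estimator shown is a deliberately simple corrected OLS (COLS), standing in for the DEA, SFA and quantile estimators actually compared in the paper; all parameter values and distributions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500

# Cobb-Douglas data-generating process of the kind used in such experiments:
# ln y = b0 + b1*ln x + v - u, with v symmetric noise and u >= 0 the
# technical inefficiency term (half-normal here).
ln_x = rng.uniform(0.0, 2.0, n)
u = np.abs(rng.normal(0.0, 0.3, n))            # half-normal inefficiency
v = rng.normal(0.0, 0.1, n)
ln_y = 1.0 + 0.6 * ln_x + v - u

# Corrected OLS (COLS): fit the average function, then shift the intercept up
# to the largest residual so the fitted line becomes a frontier. A crude but
# standard benchmark estimator in this literature.
b1, b0 = np.polyfit(ln_x, ln_y, 1)
resid = ln_y - (b0 + b1 * ln_x)
u_hat = resid.max() - resid                    # estimated inefficiency per unit

corr = np.corrcoef(u_hat, u)[0, 1]             # recovery of the true ranking
```

Because the true `u` is known to the experimenter, any estimator's accuracy can be scored directly, which is exactly what makes the Monte Carlo comparison of DEA, SFA and quantile regression possible.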
Modeling of detection efficiency of HPGe semiconductor detector by Monte Carlo method
International Nuclear Information System (INIS)
Rapant, T.
2003-01-01
Over the past ten years, following the gradual adoption of new legislative standards for protection against ionizing radiation, gamma spectrometry has significantly penetrated the set of standard radioanalytical methods. For nuclear power plants, gamma spectrometry has proved to be the most effective method of determining the activity of individual radionuclides, and spectrometric laboratories were gradually equipped with the most modern technical equipment. Nevertheless, owing to the use of costly and time-intensive experimental calibration methods, the possibilities of gamma spectrometry remained partially limited, mainly during the substantial renovation and modernization works of the late 1990s. For this reason, several calibration procedures based on computer simulations with the GEANT program were developed and tested in the spectrometric laboratory of the Bohunice Nuclear Power Plant, in cooperation with the Department of Nuclear Physics, FMPI, Bratislava. In this thesis, a calibration method for the measurement of bulk samples based on auto-absorption factors is described. The accuracy of the proposed method is at least comparable with the other methods in use, but it surpasses them significantly in cost, time and simplicity. The described method has been used successfully for almost two years in the spectrometric laboratory of the Radiation Protection Division at the Bohunice nuclear power plant, as shown by the results of international comparison measurements and by repeated validation measurements performed by the Slovak Institute of Metrology in Bratislava.
New Hybrid Monte Carlo methods for efficient sampling. From physics to biology and statistics
International Nuclear Information System (INIS)
Akhmatskaya, Elena; Reich, Sebastian
2011-01-01
We introduce a class of novel hybrid methods for detailed simulations of large complex systems in physics, biology, materials science and statistics. These generalized shadow Hybrid Monte Carlo (GSHMC) methods combine the advantages of stochastic and deterministic simulation techniques. They utilize a partial momentum update to retain some of the dynamical information, employ modified Hamiltonians to overcome exponential performance degradation with the system's size, and make use of the multi-scale nature of complex systems. Variants of GSHMC were developed for atomistic simulation, particle simulation and statistics: GSHMC (a thermodynamically consistent implementation of constant-temperature molecular dynamics), MTS-GSHMC (multiple-time-stepping GSHMC), meso-GSHMC (a Metropolis-corrected dissipative particle dynamics (DPD) method), and a generalized shadow Hamiltonian Monte Carlo, GSHmMC (a GSHMC for statistical simulations). All of these are compatible with other enhanced-sampling techniques and suitable for massively parallel computing, allowing for a range of multi-level parallel strategies. A brief description of the GSHMC approach, examples of its application on high-performance computers, and comparisons with other existing techniques are given. Our approach is shown to resolve such problems as resonance instabilities of the MTS methods and non-preservation of thermodynamic equilibrium properties in DPD, and to outperform known methods in sampling efficiency by an order of magnitude. (author)
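The partial momentum update at the heart of GSHMC can be illustrated with a plain generalized HMC sampler; note that this sketch omits the modified (shadow) Hamiltonians and importance weights that distinguish GSHMC proper, and targets a simple standard-normal distribution:

```python
import numpy as np

rng = np.random.default_rng(1)

def leapfrog(x, p, grad_U, eps, L):
    """Leapfrog (kick-drift-kick) integration of Hamiltonian dynamics."""
    p = p - 0.5*eps*grad_U(x)
    for i in range(L):
        x = x + eps*p
        if i < L - 1:
            p = p - eps*grad_U(x)
    p = p - 0.5*eps*grad_U(x)
    return x, p

def ghmc_samples(n_steps=20_000, eps=0.2, L=5, alpha=0.9):
    """Generalized HMC for the target U(x) = x^2/2 (standard normal).
    The partial momentum refresh p <- alpha*p + sqrt(1-alpha^2)*xi
    retains dynamical information between steps; alpha = 0 recovers
    ordinary HMC with full momentum resampling."""
    U = lambda x: 0.5*x*x
    grad_U = lambda x: x
    x, p = 0.0, rng.standard_normal()
    out = np.empty(n_steps)
    for n in range(n_steps):
        p = alpha*p + np.sqrt(1 - alpha**2)*rng.standard_normal()
        x_new, p_new = leapfrog(x, p, grad_U, eps, L)
        dH = (U(x_new) + 0.5*p_new**2) - (U(x) + 0.5*p**2)
        if rng.random() < np.exp(-dH):
            x, p = x_new, p_new
        else:
            p = -p   # momentum flip on rejection preserves detailed balance
        out[n] = x
    return out

xs = ghmc_samples()
print(xs.mean(), xs.var())  # both close to the standard-normal values 0 and 1
```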
International Nuclear Information System (INIS)
Scemama, Anthony; Caffarel, Michel; Oseret, Emmanuel; Jalby, William
2013-01-01
Various strategies for efficiently implementing quantum Monte Carlo (QMC) simulations for large chemical systems are presented. These include: (i) the introduction of an efficient algorithm to calculate the computationally expensive Slater matrices. This novel scheme is based on the use of the highly localized character of atomic Gaussian basis functions (not the molecular orbitals, as usually done); (ii) the possibility of keeping the memory footprint minimal; (iii) the important enhancement of single-core performance when efficient optimization tools are used; and (iv) the definition of a universal, dynamic, fault-tolerant, and load-balanced framework adapted to all kinds of computational platforms (massively parallel machines, clusters, or distributed grids). These strategies have been implemented in the QMC-Chem code developed at Toulouse and illustrated with numerical applications on small peptides of increasing sizes (158, 434, 1056, and 1731 electrons). Using 10-80 k computing cores of the Curie machine (GENCI-TGCC-CEA, France), QMC-Chem has been shown to be capable of running at the petascale level, thus demonstrating that for this machine a large part of the peak performance can be achieved. Implementation of large-scale QMC simulations for future exascale platforms with a comparable level of efficiency is expected to be feasible. (authors)
SU-E-J-60: Efficient Monte Carlo Dose Calculation On CPU-GPU Heterogeneous Systems
Energy Technology Data Exchange (ETDEWEB)
Xiao, K; Chen, D. Z; Hu, X. S [University of Notre Dame, Notre Dame, IN (United States); Zhou, B [Altera Corp., San Jose, CA (United States)
2014-06-01
Purpose: It is well-known that the performance of GPU-based Monte Carlo dose calculation implementations is bounded by memory bandwidth. One major cause of this bottleneck is the random memory writing pattern in dose deposition, which leads to several memory efficiency issues on GPU such as un-coalesced writing and atomic operations. We propose a new method to alleviate such issues on CPU-GPU heterogeneous systems, which achieves overall performance improvement for Monte Carlo dose calculation. Methods: Dose deposition accumulates dose into the voxels of a dose volume along the trajectories of radiation rays. Our idea is to partition this procedure into the following three steps, which are fine-tuned for CPU or GPU: (1) each GPU thread writes dose results with location information to a buffer in GPU memory, which achieves fully-coalesced and atomic-free memory transactions; (2) the dose results in the buffer are transferred to CPU memory; (3) the dose volume is constructed from the dose buffer on the CPU. We organize the processing of all radiation rays into streams. Since the steps within a stream use different hardware resources (i.e., GPU, DMA, CPU), we can overlap the execution of these steps for different streams by pipelining. Results: We evaluated our method using a Monte Carlo Convolution Superposition (MCCS) program and tested our implementation for various clinical cases on a heterogeneous system containing an Intel i7 quad-core CPU and an NVIDIA TITAN GPU. Compared with a straightforward MCCS implementation on the same system (using both CPU and GPU for radiation ray tracing), our method achieved a 2-5X speedup without losing dose calculation accuracy. Conclusion: The results show that our new method improves the effective memory bandwidth and overall performance for MCCS on CPU-GPU systems. Our proposed method can also be applied to accelerate other Monte Carlo dose calculation approaches. This research was supported in part by NSF under Grants CCF
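A NumPy analogue of the three-step buffering idea (illustrative only; the actual implementation concerns GPU memory transactions, which NumPy cannot model):

```python
import numpy as np

rng = np.random.default_rng(2)

n_voxels, n_deposits = 1000, 100_000
idx = rng.integers(0, n_voxels, n_deposits)   # voxel hit by each dose deposition
dose = rng.random(n_deposits)                 # dose deposited at each hit

# Step 1 (GPU analogue): each "thread" appends an (index, dose) pair to a
# linear buffer -- sequential, coalesced writes with no atomics required.
buffer_idx, buffer_dose = idx.copy(), dose.copy()

# Step 3 (CPU analogue): build the dose volume from the buffer.
volume = np.zeros(n_voxels)
np.add.at(volume, buffer_idx, buffer_dose)    # unbuffered add handles repeated voxels

# Reference: a direct (atomic-like) scatter-add yields the same volume.
reference = np.bincount(idx, weights=dose, minlength=n_voxels)
print(np.allclose(volume, reference))  # True
```

The point of the reordering is not the arithmetic (which is identical) but the memory access pattern: step 1 is write-contiguous, and only step 3 performs the scattered accumulation.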
International Nuclear Information System (INIS)
Geraldo, L.P.; Smith, D.L.
1989-01-01
The methodology of covariance matrices and least-squares methods has been applied to the relative efficiency calibration of a Ge(Li) detector. Procedures employed to generate, manipulate and test covariance matrices, which serve to properly represent the uncertainties of experimental data, are discussed. Calibration data fitting using least-squares methods has been performed for a particular experimental data set. (author)
The calculation of the detection efficiency in the calibration of gross alpha-beta systems
International Nuclear Information System (INIS)
Marian Romeo Calin; Ileana Radulescu; Alexandru Erminiu Druker
2013-01-01
This paper presents a method for the efficiency calibration of a PROTEAN ORTEC MPC-2000-DP alpha-beta measuring system using standard radioactive sources. The system is used to measure gross alpha-beta activity concentrations in environmental samples. The calculated detection efficiencies were subsequently entered into the system for two working geometries. For the gross alpha-beta measuring geometry: alpha efficiency ε_α,g = 31.37 ± 0.25%, beta efficiency ε_β,g = 44.94 ± 0.69%, with spillover (crosstalk) factor X_talk,g = 25.59 ± 0.50%. For the "up" alpha-beta measuring geometry: alpha efficiency ε_α,u = 36.23 ± 0.29%, beta efficiency ε_β,u = 48.53 ± 0.74%, with spillover factor X_talk,u = 31.08 ± 0.60%. (author)
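Once channel efficiencies and a spillover factor are known, alpha and beta activities follow from a small linear system. A sketch using the gross-geometry figures above, under an assumed crosstalk model (a fraction of detected alpha events spilling into the beta channel); the real instrument's crosstalk model may differ:

```python
import numpy as np

# Calibration constants from the gross alpha-beta geometry, as fractions.
eps_alpha, eps_beta, x_talk = 0.3137, 0.4494, 0.2559

def activities(r_alpha, r_beta):
    """Recover alpha and beta activities (Bq) from net channel count
    rates (cps), assuming the linear model
        r_alpha = eps_alpha*A_alpha
        r_beta  = x_talk*eps_alpha*A_alpha + eps_beta*A_beta
    i.e. a fraction x_talk of detected alpha events spills into the
    beta channel.  This model is an illustration, not the instrument's
    documented algorithm."""
    m = np.array([[eps_alpha, 0.0],
                  [x_talk*eps_alpha, eps_beta]])
    return np.linalg.solve(m, [r_alpha, r_beta])

a_alpha, a_beta = activities(1.0, 2.0)   # hypothetical net count rates in cps
print(a_alpha, a_beta)                   # activities reproducing those rates
```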
Efficiency determination of whole-body counter by Monte Carlo method, using a microcomputer
International Nuclear Information System (INIS)
Fernandes Neto, Jose Maria
1986-01-01
The purpose of this investigation was the development of an analytical microcomputer model to evaluate a whole-body counter efficiency. The model is based on a modified Snyder model. A stretcher-type geometry was used, along with the Monte Carlo method and a Sinclair-type microcomputer. Experimental measurements were performed using two phantoms, one representing an adult and the other a 5-year-old child. The phantoms were made of acrylic, and 99mTc, 131I and 42K were the radioisotopes utilized. Results showed a close agreement between experimental and predicted data for energies ranging from 250 keV to 2 MeV, but some discrepancies were found for lower energies. (author)
Demonstration of pumping efficiency for rotating disks by Monte Carlo simulation
International Nuclear Information System (INIS)
Ogiwara, Norio
2010-01-01
We investigated the concept of creating a radial gas flow by employing the molecular drag effect upon gas molecules on rotating disks. All the gas molecules acquire a circumferential velocity rω (r: distance from the rotation axis, ω: angular velocity) each time they leave a surface of the rotating disks. As a result, the gas molecules between the rotating disks tend on average to move outward from the center; that is, a radial flow appears. This idea was demonstrated by Monte Carlo simulation of two types of rotating disks (flat and corrugated). Pumping efficiency was clearly demonstrated for both types of disks when the velocity ratio rω/v̄ (v̄: mean thermal velocity) became larger than 1. (author)
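The outward drift can be reproduced with a few lines of free-molecular bookkeeping (a toy illustration, not the simulation used in the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_radius_gain(r=1.0, omega=0.0, v_mean=1.0, t=0.5, n=200_000):
    """Free-molecular sketch: molecules leave the disk surface at radius r
    with an in-plane thermal velocity of magnitude ~v_mean in a random
    direction, plus the circumferential disk velocity r*omega, then fly
    ballistically for time t.  Returns the mean increase in distance
    from the rotation axis."""
    theta = rng.uniform(0, 2*np.pi, n)
    vx = v_mean*np.cos(theta)              # radial thermal component
    vy = v_mean*np.sin(theta) + r*omega    # tangential thermal + disk drag
    r_final = np.hypot(r + vx*t, vy*t)
    return r_final.mean() - r

slow = mean_radius_gain(omega=0.5)   # velocity ratio r*omega/v_mean = 0.5
fast = mean_radius_gain(omega=2.0)   # velocity ratio r*omega/v_mean = 2.0
print(slow, fast)  # outward drift grows with the velocity ratio
```

The tangential kick rω carries molecules onto straight paths that recede from the axis, so the mean radius gain increases monotonically with rω/v̄.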
Energy Technology Data Exchange (ETDEWEB)
Matsui, S., E-mail: smatsui@gpi.ac.jp; Mori, Y. [The Graduate School for the Creation of New Photonics Industries, 1955-1 Kurematsucho, Nishiku, Hamamatsu, Shizuoka 431-1202 (Japan); Nonaka, T.; Hattori, T.; Kasamatsu, Y.; Haraguchi, D.; Watanabe, Y.; Uchiyama, K.; Ishikawa, M. [Hamamatsu Photonics K.K. Electron Tube Division, 314-5 Shimokanzo, Iwata, Shizuoka 438-0193 (Japan)
2016-05-15
For evaluation of on-site dosimetry and process design in the industrial use of ultra-low energy electron beam (ULEB) processes, we evaluate the energy deposition using a thin radiochromic film and a Monte Carlo simulation. The response of the film dosimeter was calibrated using a high-energy electron beam with an acceleration voltage of 2 MV and alanine dosimeters with an uncertainty of 11% at a coverage factor of 2. Using this response function, the results of absorbed dose measurements for ULEB were evaluated from 10 kGy to 100 kGy as relative doses. The deviation between the responses of deposited energy on the films and the Monte Carlo simulations was within 15%. Within this limitation, relative dose estimation using thin-film dosimeters, with a response function obtained from high-energy electron irradiation, combined with simulation results, is effective for the management of ULEB irradiation processes.
Energy Technology Data Exchange (ETDEWEB)
Parent, L [Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Sutton (United Kingdom); Fielding, A L [School of Physical and Chemical Sciences, Queensland University of Technology, Brisbane (Australia); Dance, D R [Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, London (United Kingdom); Seco, J [Department of Radiation Oncology, Francis Burr Proton Therapy Center, Massachusetts General Hospital, Harvard Medical School, Boston (United States); Evans, P M [Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Sutton (United Kingdom)
2007-07-21
For EPID dosimetry, the calibration should ensure that all pixels have a similar response to a given irradiation. A calibration method (MC), using an analytical fit of a Monte Carlo simulated flood field EPID image to correct for the flood field image pixel intensity shape, was proposed. It was compared with the standard flood field calibration (FF), with the use of a water slab placed in the beam to flatten the flood field (WS) and with a multiple field calibration where the EPID was irradiated with a fixed 10 x 10 field for 16 different positions (MF). The EPID was used in its normal configuration (clinical setup) and with an additional 3 mm copper slab (modified setup). Beam asymmetry measured with a diode array was taken into account in MC and WS methods. For both setups, the MC method provided pixel sensitivity values within 3% of those obtained with the MF and WS methods (mean difference <1%, standard deviation <2%). The difference of pixel sensitivity between MC and FF methods was up to 12.2% (clinical setup) and 11.8% (modified setup). MC calibration provided images of open fields (5 x 5 to 20 x 20 cm{sup 2}) and IMRT fields to within 3% of that obtained with WS and MF calibrations while differences with images calibrated with the FF method for fields larger than 10 x 10 cm{sup 2} were up to 8%. MC, WS and MF methods all provided a major improvement on the FF method. Advantages and drawbacks of each method were reviewed.
Energy Technology Data Exchange (ETDEWEB)
Hansen, J; Culberson, W; DeWerd, L [University of Wisconsin Medical Radiation Research Center, Madison, WI (United States); Soares, C [NIST (retired), Gaithersburg, MD (United States)
2016-06-15
Purpose: To test the validity of a windowless extrapolation chamber used to measure surface dose rate from planar ophthalmic applicators and to compare different Monte Carlo based codes for deriving correction factors. Methods: Dose rate measurements were performed using a windowless, planar extrapolation chamber with a {sup 90}Sr/{sup 90}Y Tracerlab RA-1 ophthalmic applicator previously calibrated at the National Institute of Standards and Technology (NIST). Capacitance measurements were performed to estimate the initial air gap width between the source face and collecting electrode. Current was measured as a function of air gap, and Bragg-Gray cavity theory was used to calculate the absorbed dose rate to water. To determine correction factors for backscatter, divergence, and attenuation from the Mylar entrance window found in the NIST extrapolation chamber, both EGSnrc Monte Carlo user code and Monte Carlo N-Particle Transport Code (MCNP) were utilized. Simulation results were compared with experimental current readings from the windowless extrapolation chamber as a function of air gap. Additionally, measured dose rate values were compared with the expected result from the NIST source calibration to test the validity of the windowless chamber design. Results: Better agreement was seen between EGSnrc simulated dose results and experimental current readings at very small air gaps (<100 µm) for the windowless extrapolation chamber, while MCNP results demonstrated divergence at these small gap widths. Three separate dose rate measurements were performed with the RA-1 applicator. The average observed difference from the expected result based on the NIST calibration was −1.88% with a statistical standard deviation of 0.39% (k=1). Conclusion: EGSnrc user code will be used during future work to derive correction factors for extrapolation chamber measurements. Additionally, experiment results suggest that an entrance window is not needed in order for an extrapolation
Efficient heterogeneous execution of Monte Carlo shielding calculations on a Beowulf cluster
International Nuclear Information System (INIS)
Dewar, D.; Hulse, P.; Cooper, A.; Smith, N.
2005-01-01
Recent work has been done in using a high-performance 'Beowulf' cluster computer system for the efficient distribution of Monte Carlo shielding calculations. This has enabled the rapid solution of complex shielding problems at low cost and with greater modularity and scalability than traditional platforms. The work has shown that a simple approach to distributing the workload is as efficient as using more traditional techniques such as PVM (Parallel Virtual Machine). In addition, when used in an operational setting this technique is fairer with the use of resources than traditional methods, in that it does not tie up a single computing resource but instead shares the capacity with other tasks. These developments in computing technology have enabled shielding problems to be solved that would have taken an unacceptably long time to run on traditional platforms. This paper discusses the BNFL Beowulf cluster and a number of tests that have recently been run to demonstrate the efficiency of the asynchronous technique in running the MCBEND program. The BNFL Beowulf currently consists of 84 standard PCs running RedHat Linux. Current performance of the machine has been estimated to be between 40 and 100 Gflop s⁻¹. When the whole system is employed on one problem up to four million particles can be tracked per second. There are plans to review its size in line with future business needs. (authors)
Guerra, Marta L.; Novotny, M. A.; Watanabe, Hiroshi; Ito, Nobuyasu
2009-01-01
We calculate the efficiency of a rejection-free dynamic Monte Carlo method for d-dimensional off-lattice homogeneous particles interacting through a repulsive power-law potential r^(-p). Theoretically we find the algorithmic efficiency in the limit of low temperatures and/or high densities is asymptotically proportional to ρ^((p+2)/2) T^(-d/2), with the particle density ρ and the temperature T. Dynamic Monte Carlo simulations are performed in one-, two-, and three-dimensional systems with different powers p, and the results agree with the theoretical predictions. © 2009 The American Physical Society.
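For context, the core step of a rejection-free (n-fold way) kinetic Monte Carlo scheme, which the efficiency analysis above presumes, selects a move with probability proportional to its rate and advances the clock by the corresponding exponential waiting time. A minimal sketch with illustrative rates:

```python
import numpy as np

rng = np.random.default_rng(4)

def rejection_free_step(rates):
    """One step of rejection-free (n-fold way / BKL) kinetic Monte Carlo:
    pick move k with probability rates[k]/sum(rates) and draw the time
    increment from an exponential with mean 1/sum(rates)."""
    total = rates.sum()
    k = np.searchsorted(np.cumsum(rates), rng.random()*total)  # weighted choice
    dt = -np.log(rng.random())/total                           # waiting time
    return k, dt

rates = np.array([0.1, 0.5, 2.0, 0.05])
moves = [rejection_free_step(rates)[0] for _ in range(20_000)]
freq = np.bincount(moves, minlength=4)/20_000
print(freq)  # ≈ rates/rates.sum() = [0.038, 0.189, 0.755, 0.019]
```

No proposed move is ever rejected; the cost shifts to the weighted selection, which is what makes the density and temperature scaling of the efficiency nontrivial.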
Determination of relative efficiency of a detector using Monte Carlo method
International Nuclear Information System (INIS)
Medeiros, M.P.C.; Rebello, W.F.; Lopes, J.M.; Silva, A.X.
2015-01-01
High-purity germanium (HPGe) detectors are mandatory tools for gamma spectrometry because of their excellent energy resolution. The efficiency of such detectors, quoted in the list of specifications by the manufacturer, frequently refers to the relative full-energy peak efficiency, defined with respect to the absolute full-energy peak efficiency of a 7.6 cm x 7.6 cm (diameter x height) NaI(Tl) crystal for the 1.33 MeV peak of a 60Co source positioned 25 cm from the detector. In this study, we used the MCNPX code to simulate an HPGe detector (Canberra GC3020) from the Real-Time Neutrongraphy Laboratory of UFRJ, surveying the spectrum of a 60Co source located 25 cm from the detector in order to calculate and confirm the efficiency declared by the manufacturer. Agreement between experimental and simulated data was achieved. The model under development will be used for calculation and for comparison against the detector calibration curve from the Genie2000™ software, also serving as a reference for future studies. (author)
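The relative efficiency figure reduces to a one-line calculation once the absolute peak efficiency has been measured, since the NaI(Tl) reference efficiency at 1.33 MeV and 25 cm is conventionally taken as 1.2 x 10^-3. A sketch (the measurement numbers below are invented for illustration):

```python
# Relative efficiency of an HPGe detector (IEEE 325 convention): the
# absolute full-energy peak efficiency at 1.33 MeV with the source at
# 25 cm, divided by that of a 7.6 cm x 7.6 cm NaI(Tl) crystal, which is
# conventionally taken as 1.2e-3.

NAI_ABS_EFF = 1.2e-3   # conventional NaI(Tl) reference value

def relative_efficiency(net_peak_counts, live_time_s, activity_bq,
                        branching=0.9998):
    """Relative efficiency (%) from a 60Co 1.33 MeV peak measurement."""
    emission_rate = activity_bq*branching            # 1.33 MeV gammas per second
    abs_eff = net_peak_counts/(live_time_s*emission_rate)
    return 100.0*abs_eff/NAI_ABS_EFF

# Hypothetical measurement: 36,000 net peak counts in 1000 s live time
# from a 100 kBq source.
rel_eff = relative_efficiency(36_000, 1000.0, 100_000.0)
print(rel_eff)  # ≈ 30 (%), i.e. the class of detector discussed above
```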
Calibration method for a radwaste assay system
International Nuclear Information System (INIS)
Dulama, C.; Dobrin, R.; Toma, Al.; Paunoiu, C.
2004-01-01
A waste assay system entirely designed and manufactured in the Institute for Nuclear Research is used in radwaste treatment and conditioning stream to ensure compliance with national repository radiological requirements. Usually, waste assay systems are calibrated by using various experimental arrangements including calibration phantoms. The paper presents a comparative study concerning the efficiency calibration performed by shell source method and a semiempirical, computational method based on a Monte Carlo algorithm. (authors)
Limits on the efficiency of event-based algorithms for Monte Carlo neutron transport
Directory of Open Access Journals (Sweden)
Paul K. Romano
2017-09-01
The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC were then used in conjunction with the models to calculate the speedup due to vectorization as a function of the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than the vector size to achieve a vector efficiency greater than 90%. When the execution times for events are allowed to vary, the vector speedup is also limited by differences in the execution time for events being carried out in a single event-iteration.
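One source of vector inefficiency, fragmentation of the particle bank across event types, can be illustrated with a toy model (this is an illustration of the general mechanism, not the model developed in the study):

```python
import math
import random

random.seed(5)

def vector_efficiency(bank_size, vector_width, n_event_types=3, iters=2000):
    """Toy model: each iteration, every particle in the bank requires one
    of n_event_types event kinds (chosen at random); particles needing
    the same kind are processed together in SIMD batches of
    vector_width lanes.  Returns useful lanes / total lanes issued,
    averaged over iterations."""
    useful = lanes = 0
    for _ in range(iters):
        counts = [0]*n_event_types
        for _ in range(bank_size):
            counts[random.randrange(n_event_types)] += 1
        useful += bank_size
        lanes += sum(math.ceil(c/vector_width)*vector_width for c in counts)
    return useful/lanes

small = vector_efficiency(bank_size=16, vector_width=8)
large = vector_efficiency(bank_size=160, vector_width=8)   # 20x the vector width
print(small, large)  # efficiency improves markedly with bank size
```

With a small bank, each event queue holds only a few particles, so most SIMD lanes in the final partial batch are wasted; a bank much larger than the vector width amortizes that padding, in the spirit of the 20x rule of thumb above.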
Malasics, Attila; Boda, Dezso
2010-06-28
Two iterative procedures have been proposed recently to calculate the chemical potentials corresponding to prescribed concentrations from grand canonical Monte Carlo (GCMC) simulations. Both are based on repeated GCMC simulations with updated excess chemical potentials until the desired concentrations are established. In this paper, we propose combining our robust and fast converging iteration algorithm [Malasics, Gillespie, and Boda, J. Chem. Phys. 128, 124102 (2008)] with the suggestion of Lamperski [Mol. Simul. 33, 1193 (2007)] to average the chemical potentials in the iterations (instead of just using the chemical potentials obtained in the last iteration). We apply the unified method for various electrolyte solutions and show that our algorithm is more efficient if we use the averaging procedure. We discuss the convergence problems arising from violation of charge neutrality when inserting/deleting individual ions instead of neutral groups of ions (salts). We suggest a correction term to the iteration procedure that makes the algorithm efficient to determine the chemical potentials of individual ions too.
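The iteration-with-averaging scheme can be sketched with a mock "simulation" standing in for the GCMC runs. The model equation of state and all constants below are invented; only the structure of the update (adjust μ by kT ln(c_target/c), then average the iterates) reflects the procedure described:

```python
import math

KT, A_EX = 1.0, 0.5   # kT and a toy excess-interaction coefficient

def mock_gcmc(mu, n_inner=200):
    """Stand-in for a GCMC run: returns the concentration c satisfying
    mu = kT*ln(c) + A_EX*c (an arbitrary model equation of state),
    found by fixed-point iteration."""
    c = math.exp(mu/KT)
    for _ in range(n_inner):
        c = math.exp((mu - A_EX*c)/KT)
    return c

def calibrate_mu(c_target, n_iter=30):
    """Iterate mu <- mu + kT*ln(c_target/c) (the Malasics-Gillespie-Boda
    update) and, following Lamperski's suggestion, return the average of
    the iterates rather than only the last one."""
    mu = KT*math.log(c_target)          # ideal-gas starting guess
    mus = []
    for _ in range(n_iter):
        c = mock_gcmc(mu)
        mu += KT*math.log(c_target/c)
        mus.append(mu)
    return sum(mus)/len(mus)

mu_hat = calibrate_mu(1.0)
print(mu_hat, mock_gcmc(mu_hat))  # recovered concentration ≈ 1.0, as prescribed
```

In a real GCMC setting each `mock_gcmc` call is a noisy simulation, which is precisely why averaging the chemical-potential iterates improves the estimate.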
Mesta, M.; van Eersel, H.; Coehoorn, R.; Bobbert, P.A.
2016-01-01
Triplet-triplet annihilation (TTA) and triplet-polaron quenching (TPQ) in organic light-emitting devices (OLEDs) lead to a roll-off of the internal quantum efficiency (IQE) with increasing current density J. We employ a kinetic Monte Carlo modeling study to analyze the measured IQE and color balance
International Nuclear Information System (INIS)
Moreira, M.C.F.; Conti, C.C.; Schirru, R.
2010-01-01
An NaI(Tl) multidetector layout combined with the use of Monte Carlo (MC) calculations and artificial neural networks(ANN) is proposed to assess the radioactive contamination of urban and semi-urban environment surfaces. A very simple urban environment like a model street composed of a wall on either side and the ground surface was the study case. A layout of four NaI(Tl) detectors was used, and the data corresponding to the response of the detectors were obtained by the Monte Carlo method. Two additional data sets with random values for the contamination and for detectors' response were also produced to test the ANNs. For this work, 18 feedforward topologies with backpropagation learning algorithm ANNs were chosen and trained. The results showed that some trained ANNs were able to accurately predict the contamination on the three urban surfaces when submitted to values within the training range. Other results showed that generalization outside the training range of values could not be achieved. The use of Monte Carlo calculations in combination with ANNs has been proven to be a powerful tool to perform detection calibration for highly complicated detection geometries.
Efficient photon treatment planning by the use of Swiss Monte Carlo Plan
International Nuclear Information System (INIS)
Fix, M K; Manser, P; Frei, D; Volken, W; Mini, R; Born, E J
2007-01-01
Currently, photon Monte Carlo treatment planning (MCTP) for a patient stored in the patient database of a treatment planning system (TPS) can usually only be performed using a cumbersome multi-step procedure requiring many user interactions. Automation is needed for use in clinical routine. In addition, because of the long computing times in MCTP, optimization of the MC calculations is essential. For these purposes a new GUI-based photon MC environment has been developed, resulting in a very flexible framework, namely the Swiss Monte Carlo Plan (SMCP). Appropriate MC transport methods are assigned to different geometric regions while still benefiting from the features included in the TPS. In order to provide a flexible MC environment, the MC particle transport has been divided into different parts: the source, the beam modifiers, and the patient. The source part includes phase-space sources, source models, and full MC transport through the treatment head. The beam modifier part consists of one module for each beam modifier. To simulate the radiation transport through each individual beam modifier, one out of three full MC transport codes can be selected independently. Additionally, for each beam modifier a simple or an exact geometry can be chosen. Thereby, different complexity levels of radiation transport are applied during the simulation. For the patient dose calculation two different MC codes are available. A special plug-in in Eclipse providing all necessary information by means of Dicom streams was used to start the developed MC GUI. The implementation of this framework separates the MC transport from the geometry, and the modules pass the particles in memory; hence no files are used as interface. The implementation is realized for 6 and 15 MV beams of a Varian Clinac 2300 C/D. Several applications demonstrate the usefulness of the framework. Apart from applications dealing with the beam modifiers, three patient cases are shown. Thereby, comparisons between MC
Sharma, Diksha; Sempau, Josep; Badano, Aldo
2018-02-01
Monte Carlo simulations require a large number of histories to obtain reliable estimates of the quantity of interest and its associated statistical uncertainty. Numerous variance reduction techniques (VRTs) have been employed to increase computational efficiency by reducing the statistical uncertainty. We investigate the effect of two VRTs for optical transport methods on accuracy and computing time for the estimation of variance (noise) in x-ray imaging detectors. We describe two VRTs. In the first, we preferentially alter the direction of the optical photons to increase detection probability. In the second, we follow only a fraction of the total optical photons generated. In both techniques, the statistical weight of photons is altered to maintain the signal mean. We use fastdetect2, an open-source, freely available optical transport routine from the hybridmantis package. We simulate VRTs for a variety of detector models and energy sources. The imaging data from the VRT simulations are then compared to the analog case (no VRT) using pulse height spectra, the Swank factor, and the variance of the Swank estimate. We analyze the effect of VRTs on the statistical uncertainty associated with Swank factors. VRTs increased the relative efficiency by as much as a factor of 9. We demonstrate that we can achieve the same variance of the Swank factor with less computing time. With this approach, the simulations can be stopped when the variance of the variance estimates reaches the desired level of uncertainty. We implemented analytic estimates of the variance of the Swank factor and demonstrated the effect of VRTs on image quality calculations. Our findings indicate that the Swank factor is dominated by the x-ray interaction profile as compared to the additional uncertainty introduced in the optical transport by the use of VRTs. For simulation experiments that aim at reducing the uncertainty in the Swank factor estimate, either of the proposed VRTs can be used for increasing the relative
Toltz, Allison; Hoesl, Michaela; Schuemann, Jan; Seuntjens, Jan; Lu, Hsiao-Ming; Paganetti, Harald
2017-11-01
Our group previously introduced an in vivo proton range verification methodology in which a silicon diode array system is used to correlate the dose rate profile per range modulation wheel cycle of the detector signal to the water-equivalent path length (WEPL) for passively scattered proton beam delivery. The implementation of this system requires a set of calibration data to establish a beam-specific response to WEPL fit for the selected 'scout' beam (a 1 cm overshoot of the predicted detector depth with a dose of 4 cGy) in water-equivalent plastic. This necessitates a separate set of measurements for every 'scout' beam that may be appropriate to the clinical case. The current study demonstrates the use of Monte Carlo simulations for calibration of the time-resolved diode dosimetry technique. Measurements for three 'scout' beams were compared against simulated detector response with Monte Carlo methods using the Tool for Particle Simulation (TOPAS). The 'scout' beams were then applied in the simulation environment to simulated water-equivalent plastic, a CT of water-equivalent plastic, and a patient CT data set to assess uncertainty. Simulated detector response in water-equivalent plastic was validated against measurements for 'scout' spread out Bragg peaks of range 10 cm, 15 cm, and 21 cm (168 MeV, 177 MeV, and 210 MeV) to within 3.4 mm for all beams, and to within 1 mm in the region where the detector is expected to lie. Feasibility has been shown for performing the calibration of the detector response for three 'scout' beams through simulation for the time-resolved diode dosimetry technique in passive scattered proton delivery. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
On an efficient multiple time step Monte Carlo simulation of the SABR model
Leitao Rodriguez, A.; Grzelak, L.A.; Oosterlee, C.W.
2017-01-01
In this paper, we will present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho model. The proposed method is an extension of the one time step Monte Carlo method that we proposed in an accompanying paper Leitao et al. [Appl. Math.
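The baseline that such multiple-time-step schemes improve upon is plain Euler Monte Carlo simulation of the SABR dynamics. The sketch below is an illustrative baseline pricer, not the authors' accelerated method; all parameter values are generic:

```python
import numpy as np

def sabr_mc_price(F0, alpha0, beta, rho, nu, K, T, n_steps, n_paths, rng):
    """Plain Euler Monte Carlo for the SABR model:
    dF = alpha * F^beta dW1, dalpha = nu * alpha dW2, corr(dW1, dW2) = rho.
    Returns the undiscounted European call payoff estimate E[(F_T - K)^+]."""
    dt = T / n_steps
    F = np.full(n_paths, F0)
    a = np.full(n_paths, alpha0)
    for _ in range(n_steps):
        z1 = rng.normal(size=n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.normal(size=n_paths)
        # Euler step for the forward; floor at zero inside the power
        F = F + a * np.maximum(F, 0.0) ** beta * np.sqrt(dt) * z1
        # exact lognormal step for the stochastic volatility
        a = a * np.exp(nu * np.sqrt(dt) * z2 - 0.5 * nu ** 2 * dt)
    return np.mean(np.maximum(F - K, 0.0))
```

With nu = 0, beta = 1 and rho = 0 the model degenerates to lognormal Black dynamics, which gives a convenient sanity check against the Black formula.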
Multi-level Monte Carlo Methods for Efficient Simulation of Coulomb Collisions
Ricketson, Lee
2013-10-01
We discuss the use of multi-level Monte Carlo (MLMC) schemes, originally introduced by Giles for financial applications, for the efficient simulation of Coulomb collisions in the Fokker-Planck limit. The scheme is based on a Langevin treatment of collisions, and reduces the computational cost of achieving an RMS error scaling as ε from O(ε^-3), for standard Langevin methods and binary collision algorithms, to the theoretically optimal scaling O(ε^-2) for the Milstein discretization, and to O(ε^-2 (log ε)^2) with the simpler Euler-Maruyama discretization. In practice, this speeds up simulation by factors up to 100. We summarize standard MLMC schemes, describe some tricks for achieving the optimal scaling, present results from a test problem, and discuss the method's range of applicability. This work was performed under the auspices of the U.S. DOE by the University of California, Los Angeles, under grant DE-FG02-05ER25710, and by LLNL under contract DE-AC52-07NA27344.
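The telescoping idea behind MLMC can be illustrated on a toy Langevin-type SDE, dX = -X dt + dW, rather than the Coulomb-collision problem itself (an illustrative sketch only): coarse and fine Euler-Maruyama paths share the same Brownian increments, so the level-correction terms have small variance and need few samples.

```python
import numpy as np

def euler_pair(x0, T, n_fine, rng):
    """Coupled fine (n_fine steps) and coarse (n_fine//2 steps) Euler-Maruyama
    paths of dX = -X dt + dW, driven by the same Brownian increments."""
    dt = T / n_fine
    xf, xc = x0, x0
    for _ in range(n_fine // 2):
        dW1 = rng.normal(0.0, np.sqrt(dt))
        dW2 = rng.normal(0.0, np.sqrt(dt))
        xf += -xf * dt + dW1          # two fine steps
        xf += -xf * dt + dW2
        xc += -xc * (2 * dt) + (dW1 + dW2)   # one coarse step, summed noise
    return xf, xc

def mlmc_estimate(x0, T, levels, samples_per_level, rng):
    """Telescoping estimator E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]."""
    total = 0.0
    for l, n_samp in zip(levels, samples_per_level):
        n_fine = 2 ** (l + 1)  # grid refines by 2 per level
        acc = 0.0
        for _ in range(n_samp):
            xf, xc = euler_pair(x0, T, n_fine, rng)
            acc += xf if l == levels[0] else (xf - xc)
        total += acc / n_samp
    return total
```

For this linear SDE the exact answer is E[X(T)] = x0 * exp(-T), which the telescoped sum approaches as levels are added.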
Chen, Yunjie; Roux, Benoît
2014-09-21
Hybrid schemes combining the strength of molecular dynamics (MD) and Metropolis Monte Carlo (MC) offer a promising avenue to improve the sampling efficiency of computer simulations of complex systems. A number of recently proposed hybrid methods consider new configurations generated by driving the system via a non-equilibrium MD (neMD) trajectory, which are subsequently treated as putative candidates for Metropolis MC acceptance or rejection. To obey microscopic detailed balance, it is necessary to alter the momentum of the system at the beginning and/or the end of the neMD trajectory. This strict rule then guarantees that the random walk in configurational space generated by such hybrid neMD-MC algorithm will yield the proper equilibrium Boltzmann distribution. While a number of different constructs are possible, the most commonly used prescription has been to simply reverse the momenta of all the particles at the end of the neMD trajectory ("one-end momentum reversal"). Surprisingly, it is shown here that the choice of momentum reversal prescription can have a considerable effect on the rate of convergence of the hybrid neMD-MC algorithm, with the simple one-end momentum reversal encountering particularly acute problems. In these neMD-MC simulations, different regions of configurational space end up being essentially isolated from one another due to a very small transition rate between regions. In the worst-case scenario, it is almost as if the configurational space does not constitute a single communicating class that can be sampled efficiently by the algorithm, and extremely long neMD-MC simulations are needed to obtain proper equilibrium probability distributions. To address this issue, a novel momentum reversal prescription, symmetrized with respect to both the beginning and the end of the neMD trajectory ("symmetric two-ends momentum reversal"), is introduced. Illustrative simulations demonstrate that the hybrid neMD-MC algorithm robustly yields a correct
Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport
Energy Technology Data Exchange (ETDEWEB)
Romano, Paul K.; Siegel, Andrew R.
2017-04-16
The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC were then used in conjunction with the models to calculate the speedup due to vectorization as a function of two parameters: the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than the vector size in order to achieve a vector efficiency greater than 90%. When the execution times for events are allowed to vary, however, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration. For some problems, this implies that vector efficiencies over 50% may not be attainable. While there are many factors impacting the performance of an event-based algorithm that are not captured by our model, it nevertheless provides insights into factors that may be limiting in a real implementation.
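The flavor of such an efficiency model can be sketched with a deliberately simplified toy (an assumption for illustration, not the paper's actual model): particles in the bank are split across event types, each event type is processed in full vector batches, and the last partial batch wastes idle lanes, so efficiency rises with bank size.

```python
import math

def vector_efficiency(bank_size, vector_width, n_event_types=4):
    """Toy vector-lane utilization model: assume the bank splits evenly
    across n_event_types event kinds, each processed in batches of
    vector_width; the final partial batch leaves lanes idle."""
    per_event = bank_size / n_event_types
    batches = math.ceil(per_event / vector_width)
    return per_event / (batches * vector_width)
```

Even this crude model reproduces the qualitative conclusion that the bank must be much larger than the vector width before utilization approaches 100%.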
International Nuclear Information System (INIS)
Picton, D.J.; Harris, R.G.; Randle, K.; Weaver, D.R.
1995-01-01
This paper describes a simple, accurate and efficient technique for the calculation of materials perturbation effects in Monte Carlo photon transport calculations. It is particularly suited to the application for which it was developed, namely the modelling of a dual detector density tool as used in borehole logging. However, the method would be appropriate to any photon transport calculation in the energy range 0.1 to 2 MeV, in which the predominant processes are Compton scattering and photoelectric absorption. The method enables a single set of particle histories to provide results for an array of configurations in which material densities or compositions vary. It can calculate the effects of small perturbations very accurately, but is by no means restricted to such cases. For the borehole logging application described here the method has been found to be efficient for a moderate range of variation in the bulk density (of the order of ±30% from a reference value) or even larger changes to a limited portion of the system (e.g. a low density mudcake of the order of a few tens of mm in thickness). The effective speed enhancement over an equivalent set of individual calculations is in the region of an order of magnitude or more. Examples of calculations on a dual detector density tool are given. It is demonstrated that the method predicts, to a high degree of accuracy, the variation of detector count rates with formation density, and that good results are also obtained for the effects of mudcake layers. An interesting feature of the results is that relative count rates (the ratios of count rates obtained with different configurations) can usually be determined more accurately than the absolute values of the count rates. (orig.)
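The core trick of reusing one set of histories for an array of perturbed configurations can be shown with a minimal correlated-sampling sketch (an illustrative analogue, not the authors' full borehole-tool scheme): free paths are sampled at a reference cross section, and each perturbed result is obtained by reweighting the same histories.

```python
import numpy as np

def correlated_transmission(sigma_ref, sigmas_perturbed, L, n, rng):
    """Estimate transmission through a purely absorbing slab of thickness L
    for several total cross sections from ONE set of histories sampled at
    sigma_ref, by reweighting with the survival-probability ratio."""
    s = rng.exponential(1.0 / sigma_ref, size=n)  # free paths at reference
    transmitted = s > L
    results = {}
    for sig in sigmas_perturbed:
        # weight = exp(-sig*L) / exp(-sigma_ref*L) for surviving histories
        w = np.exp(-(sig - sigma_ref) * L)
        results[sig] = transmitted.mean() * w
    return results
```

The analytic answer for each configuration is exp(-sigma * L), so the quality of the single-history-set estimate is easy to verify.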
International Nuclear Information System (INIS)
Harb, S.; Salahel Din, K.; Abbady, A.
2009-01-01
In this paper, we describe a method for the efficiency calibration of HPGe gamma-ray spectrometry of bulk environmental samples (tea, crops, water, and soil), a significant part of environmental radioactivity measurements. We discuss the full energy peak efficiency (FEPE) of three HPGe detectors; as a consequence, it is essential that the efficiency be determined for each set-up employed. To take full advantage of gamma-ray spectrometry, the efficiency should be known at several energies covering a wide range: the wider the energy range, the larger the number of radionuclides whose concentrations can be determined. To measure the main natural gamma-ray emitters, the efficiency should be known at least from 46.54 keV ( 210 Pb) to 1836 keV ( 88 Y). Radioactive sources, needed in order to calculate the activity of the different radionuclides contained in a sample, were prepared from two different standards: the first mixed standard, QC Y 40, containing 210 Pb, 241 Am, 109 Cd, and 57 Co, and the second, QC Y 48, containing 241 Am, 109 Cd, 57 Co, 139 Ce, 113 Sn, 85 Sr, 137 Cs, 88 Y, and 60 Co. In this work, we study the efficiency calibration as a function of several parameters: gamma-ray energy from 46.54 keV ( 210 Pb) to 1836 keV ( 88 Y); three different detectors, A, B, and C; container geometry (point source, Marinelli beaker, and 1 L cylindrical bottle); height of standard soil samples in a 250 ml bottle; and density of standard environmental samples. These environmental samples must be measured before the standard solution is added, because the same samples are used in order to account for self-absorption and composition, especially in the case of volume samples.
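A common way to turn such calibration points into a usable curve is to fit ln(efficiency) as a low-order polynomial in ln(energy). The sketch below uses made-up efficiency values purely for illustration, not the paper's measured data:

```python
import numpy as np

# Illustrative full-energy-peak efficiencies (invented values for the sketch):
# energy in keV, efficiency as a fraction.
energies = np.array([46.5, 122.0, 344.3, 661.7, 898.0, 1173.2, 1332.5, 1836.0])
effs     = np.array([0.020, 0.065, 0.040, 0.025, 0.019, 0.015, 0.014, 0.011])

# Standard HPGe parametrization: ln(eff) ~ polynomial in ln(E).
coeffs = np.polyfit(np.log(energies), np.log(effs), deg=3)

def efficiency(E_keV):
    """Evaluate the fitted calibration curve at energy E_keV."""
    return np.exp(np.polyval(coeffs, np.log(E_keV)))
```

In practice each detector and counting geometry would get its own fitted coefficient set, exactly as the abstract's parameter study suggests.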
Absolute efficiency calibration of 6LiF-based solid state thermal neutron detectors
Finocchiaro, Paolo; Cosentino, Luigi; Lo Meo, Sergio; Nolte, Ralf; Radeck, Desiree
2018-03-01
The demand for new thermal neutron detectors as an alternative to 3He tubes in research, industrial, safety and homeland security applications is growing. These needs have triggered research and development activities on new generations of thermal neutron detectors, characterized by reasonable efficiency and gamma rejection comparable to 3He tubes. In this paper we show the state of the art of a promising low-cost technique, based on commercial solid state silicon detectors coupled with thin neutron converter layers of 6LiF deposited onto carbon fiber substrates. A few configurations were studied with the GEANT4 simulation code, and the intrinsic efficiency of the corresponding detectors was calibrated at the PTB Thermal Neutron Calibration Facility. The results show that the measured intrinsic detection efficiency is well reproduced by the simulations, therefore validating the simulation tool in view of new designs. These neutron detectors have also been tested at neutron beam facilities like ISIS (Rutherford Appleton Laboratory, UK) and n_TOF (CERN), where a few samples are already in operation for beam flux and 2D profile measurements. Forthcoming applications are foreseen for the online monitoring of spent nuclear fuel casks in interim storage sites.
Davies, Stephen R; Alamgir, Mahiuddin; Chan, Benjamin K H; Dang, Thao; Jones, Kai; Krishnaswami, Maya; Luo, Yawen; Mitchell, Peter S R; Moawad, Michael; Swan, Hilton; Tarrant, Greg J
2015-10-01
The purity determination of organic calibration standards using the traditional mass balance approach is described. Demonstrated examples highlight the potential for bias in each measurement and the need to implement an approach that provides a cross-check for each result, affording fit for purpose purity values in a timely and cost-effective manner. Chromatographic techniques such as gas chromatography with flame ionisation detection (GC-FID) and high-performance liquid chromatography with UV detection (HPLC-UV), combined with mass and NMR spectroscopy, provide a detailed impurity profile allowing an efficient conversion of chromatographic peak areas into relative mass fractions, generally avoiding the need to calibrate each impurity present. For samples analysed by GC-FID, a conservative measurement uncertainty budget is described, including a component to cover potential variations in the response of each unidentified impurity. An alternative approach is also detailed in which extensive purification eliminates the detector response factor issue, facilitating the certification of a super-pure calibration standard which can be used to quantify the main component in less-pure candidate materials. This latter approach is particularly useful when applying HPLC analysis with UV detection. Key to the success of this approach is the application of both qualitative and quantitative (1)H NMR spectroscopy.
Efficiency calibration of electron spectrometers with the help of a standard spectrum
International Nuclear Information System (INIS)
Toth, J.; Cserny, I.; Varga, D.; Koever, I.; Toekesi, K.
2004-01-01
Complete text of publication follows. Quantitative analytical applications of electron spectroscopic techniques are of great importance for studying thin films and surface nanostructures. The most frequently used techniques in quantitative surface electron spectroscopy are XPS, XAES and AES. When applying these techniques, changes in the detection efficiency versus electron kinetic energy alter the measured electron peak intensity ratios, so neglecting the energy dependence of the spectrometer efficiency can influence the derived surface atomic concentrations. Precise determination of the atomic concentrations is crucial, especially in the determination of non-destructive depth profiles with AR-XPS, in which small changes in the relative concentrations can dramatically change the concentration depth profiles over depth ranges of a few nanometers. In the present study the REELS technique was used to determine the relative detection efficiency with the help of a standard spectrum measured on the surface of a fine microcrystalline Cu specimen. The experimental relative efficiency curves versus electron kinetic energy were compared to the calculated efficiency curve. The efficiency calibration is discussed from the point of view of quantitative XPS, AR-XPS and AES, and from the point of view of IMFP determination by XPS. The work was supported by the Hungarian National Science Foundation, OTKA T038016. For the Cu specimen and the standard spectrum the authors are indebted to the Surface Analysis Society of Japan, to Dr. Shigeo Tanuma and Professor Keisuke Goto (NIT). (author)
International Nuclear Information System (INIS)
Xie Liang-Hai; Li Lei; Wang Jing-Dong; Tao Ran; Cheng Bing-Jun; Zhang Yi-Teng
2014-01-01
The barium release experiment is an effective method to explore the near-earth environment and to study various space physics processes. The first space barium release experiment in China was successfully carried out with a sounding rocket on April 5, 2013. This work is devoted to calculating the release efficiency of the barium release by analyzing the optical images observed during the experiment. First, we present a method to calibrate the image grey values of the barium cloud against reference stars to obtain the radiant fluxes at different moments. The release efficiency is then obtained by a curve fit to the theoretical evolution model of the barium cloud. The calculated result is basically consistent with the ground test value.
Grefenstette, Brian W.; Bhalerao, Varun; Cook, W. Rick; Harrison, Fiona A.; Kitaguchi, Takao; Madsen, Kristin K.; Mao, Peter H.; Miyasaka, Hiromasa; Rana, Vikram
2017-08-01
Pixelated Cadmium Zinc Telluride (CdZnTe) detectors are currently flying on the Nuclear Spectroscopic Telescope ARray (NuSTAR) NASA Astrophysics Small Explorer. While the pixel pitch of the detectors is ≈ 605 μm, we can leverage the detector readout architecture to determine the interaction location of an individual photon to much higher spatial accuracy. The sub-pixel spatial location allows us to finely oversample the point spread function of the optics and reduces imaging artifacts due to pixelation. In this paper we demonstrate how the sub-pixel information is obtained, how the detectors were calibrated, and provide ground verification of the quantum efficiency of our Monte Carlo model of the detector response.
Monte Carlo simulation of efficient data acquisition for an entire-body PET scanner
Energy Technology Data Exchange (ETDEWEB)
Isnaini, Ismet; Obi, Takashi [Tokyo Institute of Technology, 4259 Nagatsuta-cho, Midori-ku, Yokohama 226-8503 (Japan); Yoshida, Eiji, E-mail: rush@nirs.go.jp [National Institute of Radiological Sciences, 4-9-1 Inage-ku, Chiba 263-8555 (Japan); Yamaya, Taiga [National Institute of Radiological Sciences, 4-9-1 Inage-ku, Chiba 263-8555 (Japan)
2014-07-01
Conventional PET scanners image the whole body using many bed positions. An entire-body PET scanner with an extended axial FOV, which can trace whole-body uptake at one time and dynamically improve sensitivity, has therefore been desired. Such a scanner must process a large amount of data effectively, and consequently suffers high dead time at the multiplex detector grouping process; it also has many oblique lines-of-response. In this work, we study efficient data acquisition for the entire-body PET scanner using Monte Carlo simulation. The simulated entire-body PET scanner, based on depth-of-interaction detectors, has a 2016-mm axial field-of-view (FOV) and an 80-cm ring diameter. Since the entire-body PET scanner has higher single data loss than a conventional PET scanner at the grouping circuits, its NECR decreases. However, single data loss is mitigated by separating the axially arranged detectors into multiple parts: our choice of 3 groups of axially arranged detectors increased the peak NECR by 41%. An appropriate choice of maximum ring difference (MRD) also maintains high sensitivity and high peak NECR while reducing the data size. Extremely oblique lines of response in the large axial FOV do not contribute much to scanner performance: the total sensitivity with full MRD was only 15% higher than that with about half MRD, and the peak NECR saturated at about half MRD. The entire-body PET scanner promises to provide a large axial FOV and sufficient performance without using the full data.
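The noise-equivalent count rate used as the performance metric above is a standard PET quantity, not specific to this scanner; the common delayed-window form can be computed as:

```python
def necr(trues, scatters, randoms):
    """Noise-equivalent count rate, NECR = T^2 / (T + S + 2R).
    The factor 2R assumes randoms are estimated from a delayed
    coincidence window; with a noiseless randoms estimate it is R."""
    return trues ** 2 / (trues + scatters + 2.0 * randoms)
```

For example, doubling the true rate at fixed scatter and randoms more than doubles NECR, which is why dead-time losses at the grouping circuits hurt so much.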
International Nuclear Information System (INIS)
Arsenault, Benoit; Le Tellier, Romain; Hebert, Alain
2008-01-01
The paper presents the results of a first implementation of a Monte Carlo module in DRAGON Version 4 based on the delta-tracking technique. The Monte Carlo module uses the geometry and the self-shielded multigroup cross-sections calculated with a deterministic model. The module has been tested with three different configurations of an ACR TM -type lattice. The paper also discusses the impact of this approach on the efficiency of the Monte Carlo module. (authors)
More efficient evolutionary strategies for model calibration, demonstrated with a watershed model
Baggett, J. S.; Skahill, B. E.
2008-12-01
Evolutionary strategies allow automatic calibration of more complex models than traditional gradient based approaches, but they are more computationally intensive. We present several efficiency enhancements for evolution strategies, many of which are not new, but when combined have been shown to dramatically decrease the number of model runs required for calibration of synthetic problems. To reduce the number of expensive model runs we employ a surrogate objective function for an adaptively determined fraction of the population at each generation (Kern et al., 2006). We demonstrate improvements to the adaptive ranking strategy that increase its efficiency while sacrificing little reliability and further reduce the number of model runs required in densely sampled parts of parameter space. Furthermore, we include a gradient individual in each generation that is usually not selected when the search is in a global phase or when the derivatives are poorly approximated, but when selected near a smooth local minimum can dramatically increase convergence speed (Tahk et al., 2007). Finally, the selection of the gradient individual is used to adapt the size of the population near local minima. We show, by incorporating these enhancements into the Covariance Matrix Adaptation Evolution Strategy (CMAES; Hansen, 2006), that their synergistic effect is greater than their individual parts. This hybrid evolutionary strategy exploits smooth structure when it is present but degrades to an ordinary evolutionary strategy, at worst, if smoothness is not present. Calibration of 2D-3D synthetic models with the modified CMAES requires approximately 10%-25% of the model runs of ordinary CMAES. Preliminary demonstration of this hybrid strategy will be shown for watershed model calibration problems. Hansen, N. (2006). The CMA Evolution Strategy: A Comparing Review. In J.A. Lozano, P. Larrañaga, I. Inza and E. Bengoetxea (Eds.). Towards a new evolutionary computation. Advances in estimation of
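The evolution-strategy loop that CMA-ES elaborates can be illustrated by its simplest relative, a (1+1)-ES with multiplicative step-size adaptation (an illustrative sketch only; CMA-ES itself adapts a full covariance matrix and uses a population):

```python
import numpy as np

def one_plus_one_es(f, x0, sigma, n_iter, rng):
    """Minimal (1+1) evolution strategy: mutate the parent, keep the child
    only if it improves, and adapt the step size so that roughly 1/5 of
    mutations succeed (success-rule constants here are illustrative)."""
    x = np.array(x0, dtype=float)
    fx = f(x)
    for _ in range(n_iter):
        cand = x + sigma * rng.normal(size=len(x))
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
            sigma *= 1.5   # expand on success
        else:
            sigma *= 0.9   # shrink on failure
    return x, fx
```

On a smooth objective this converges linearly; the paper's enhancements (surrogates, gradient individuals) aim to cut the number of expensive objective evaluations such a loop consumes.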
Computational efficiency using the CYBER-205 computer for the PACER Monte Carlo Program
International Nuclear Information System (INIS)
Candelore, N.R.; Maher, C.M.; Gast, R.C.
1985-09-01
The use of the large memory of the CYBER-205 and its vector data handling logic produced speedups over scalar code ranging from a factor of 7 for unit cell calculations with relatively few compositions to a factor of 5 for problems having more detailed geometry and materials. By vectorizing the neutron tracking in PACER (the collision analysis remained in scalar code), an asymptotic value of 200 neutrons/cpu-second was achieved for a batch size of 10,000 neutrons. The complete vectorization of the Monte Carlo method as performed by Brown resulted in even higher speedups in neutron processing rates over the use of scalar code. Large speedups in neutron processing rates are beneficial not only to achieve more accurate results for the neutronics calculations which are routinely done using Monte Carlo, but also to extend the use of the Monte Carlo method to applications that were previously considered impractical because of large running times
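The kind of restructuring behind vectorized neutron tracking can be sketched generically in NumPy (an illustrative analogue, not PACER code): the scalar version handles one neutron at a time, while the vector version samples free paths for the whole batch at once.

```python
import numpy as np

def track_scalar(sigma, n, rng):
    """History-by-history free-path sampling (scalar style)."""
    out = np.empty(n)
    for i in range(n):
        out[i] = -np.log(1.0 - rng.uniform()) / sigma
    return out

def track_vector(sigma, n, rng):
    """The same exponential free-path sampling applied to an entire
    batch of neutrons in one array operation."""
    return -np.log(1.0 - rng.uniform(size=n)) / sigma
```

Both produce exponentially distributed flight distances with mean 1/sigma; the batch form is what lets vector hardware keep its pipelines full.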
Efficiencies of joint non-local update moves in Monte Carlo simulations of coarse-grained polymers
Austin, Kieran S.; Marenz, Martin; Janke, Wolfhard
2018-03-01
In this study four update methods are compared in their performance in a Monte Carlo simulation of polymers in continuum space. The efficiencies of the update methods and combinations thereof are compared with the aid of the autocorrelation time with a fixed (optimal) acceptance ratio. Results are obtained for polymer lengths N = 14, 28 and 42 and temperatures below, at and above the collapse transition. In terms of autocorrelation, the optimal acceptance ratio is approximately 0.4. Furthermore, an overview of the step sizes of the update methods that correspond to this optimal acceptance ratio is given. This shall serve as a guide for future studies that rely on efficient computer simulations.
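The autocorrelation-time criterion used above can be computed directly from a time series of a simulation observable. The sketch below uses a fixed summation window for simplicity (production analyses typically choose the window adaptively):

```python
import numpy as np

def integrated_autocorr_time(x, window=None):
    """Integrated autocorrelation time tau = 1 + 2 * sum_k rho(k),
    with the normalized autocorrelation rho summed up to a cutoff window."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    # biased autocovariance estimate for lags 0..n-1
    acov = np.correlate(xc, xc, mode="full")[n - 1:] / n
    rho = acov / acov[0]
    if window is None:
        window = n // 100
    return 1.0 + 2.0 * rho[1:window + 1].sum()
```

White noise gives tau ≈ 1, while a strongly correlated chain (e.g. AR(1) with coefficient 0.9) gives tau near (1+0.9)/(1-0.9) = 19, which is the quantity the update-move comparison minimizes.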
Xie, Wei-Qi; Gong, Yi-Xian; Yu, Kong-Xian
2018-06-01
An automated and accurate headspace gas chromatographic (HS-GC) technique was investigated for rapidly quantifying water content in edible oils. In this method, multiple headspace extraction (MHE) procedures were used to analyse the integrated water content from the edible oil sample. A simple vapour phase calibration technique with an external vapour standard was used to calibrate both the water content in the gas phase and the total weight of water in the edible oil sample. After that, the water in edible oils can be quantified. The data showed that the relative standard deviation of the present HS-GC method in the precision test was less than 1.13%, and the relative differences between the new method and a reference method (i.e. the oven-drying method) were no more than 1.62%. The present HS-GC method is automated, accurate, efficient, and can be a reliable tool for quantifying water content in edible oil related products and research. © 2017 Society of Chemical Industry.
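The arithmetic underlying MHE is a geometric series: successive extractions remove a constant fraction of the analyte, so peak areas decay as A_i = A_1 * k^(i-1) and the total (exhaustive) area is A_1 / (1 - k). A minimal fit looks like:

```python
import numpy as np

def mhe_total_area(areas):
    """Fit ln(A_i) versus extraction index to get the decay constant k and
    the first-extraction area A_1, then sum the geometric series to obtain
    the total peak area corresponding to exhaustive extraction."""
    areas = np.asarray(areas, dtype=float)
    i = np.arange(len(areas))
    slope, intercept = np.polyfit(i, np.log(areas), 1)
    k = np.exp(slope)       # per-extraction retention fraction
    a1 = np.exp(intercept)  # fitted first-extraction area
    return a1 / (1.0 - k)
```

With an external vapour standard converting area to mass, this total area yields the water content without ever extracting to completion.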
Energy Technology Data Exchange (ETDEWEB)
Zhang, X. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Izaurralde, R. C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Zong, Z. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Zhao, K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Thomson, A. M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2012-08-20
The efficiency of calibrating physically-based complex hydrologic models is a major concern in applying those models to understand and manage natural and human activities that affect watershed systems. In this study, we developed a multi-core aware multi-objective evolutionary optimization algorithm (MAMEOA) to improve the efficiency of calibrating a widely used watershed model, the Soil and Water Assessment Tool (SWAT). The test results show that MAMEOA can save about 1-9%, 26-51%, and 39-56% of the time consumed in calibrating SWAT, as compared with the sequential method, by using dual-core, quad-core, and eight-core machines, respectively. Potential and limitations of MAMEOA for calibrating SWAT are discussed. MAMEOA is open source software.
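The multi-core gain comes from evaluating a generation of candidate parameter sets in parallel, since each model run is independent. A minimal sketch with Python's standard library (the quadratic objective is a stand-in for an expensive SWAT run, not part of the paper):

```python
import multiprocessing as mp

def objective(params):
    """Stand-in for one expensive watershed-model run; a real calibration
    would launch the model with these parameters and score the fit."""
    x, y = params
    return (x - 1.0) ** 2 + (y + 2.0) ** 2

def evaluate_generation(population, n_cores):
    """Score every candidate parameter set of one generation in parallel,
    the core idea behind a multi-core aware evolutionary optimizer."""
    with mp.Pool(n_cores) as pool:
        return pool.map(objective, population)
```

The speedup saturates below the core count because selection, recombination, and I/O remain sequential, consistent with the sub-linear savings reported above.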
International Nuclear Information System (INIS)
Nikezic, D.
1994-01-01
The detection efficiency, ρ, (or a calibration coefficient k) for radon measurements with the solid state nuclear track detector CR-39 has been determined by many authors. There is considerable discrepancy among the reported values of ρ. This situation motivated the development of a software program to calculate ρ. The software is based on the Bethe-Bloch expression for the stopping power of heavy charged particles in a medium, as well as on the Monte Carlo method. Track parameters were calculated using an iterative procedure as given in G. Somogyi et al., Nucl. Instr. and Meth. 109 (1973) 211. Results for an open detector and for the detector in a diffusion chamber are presented in this article. (orig.)
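The geometric core of such a calculation can be sketched with a heavily simplified Monte Carlo (an illustrative stand-in for the Bethe-Bloch based program, ignoring energy loss and range): alpha particles arrive with isotropic directions, and a track is registered only if the angle between the trajectory and the detector surface exceeds the critical angle.

```python
import numpy as np

def track_registration_efficiency(critical_angle_deg, n, rng):
    """Fraction of isotropically incident particles whose dip angle
    (angle above the detector surface) exceeds the critical angle.
    Analytically this is 1 - sin(theta_c) for isotropic directions."""
    theta_c = np.radians(critical_angle_deg)
    # isotropic over the hemisphere: cos(theta) uniform on (0, 1),
    # theta measured from the surface normal
    cos_theta = rng.uniform(0.0, 1.0, size=n)
    dip = np.pi / 2 - np.arccos(cos_theta)
    return np.mean(dip > theta_c)
```

Real CR-39 efficiency calculations add the alpha range, the etching response, and the chamber geometry on top of this angular criterion, which is where the literature spread in ρ arises.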
Kinetic Monte Carlo study of sensitivity of OLED efficiency and lifetime to materials parameters
Coehoorn, R.; Eersel, van H.; Bobbert, P.A.; Janssen, R.A.J.
2015-01-01
The performance of organic light-emitting diodes (OLEDs) is determined by a complex interplay of the optoelectronic processes in the active layer stack. In order to enable simulation-assisted layer stack development, a three-dimensional kinetic Monte Carlo OLED simulation method which includes the
Directory of Open Access Journals (Sweden)
Lauren S. Wakschlag
2009-06-01
Maternal smoking during pregnancy is a major public health problem that has been associated with numerous short- and long-term adverse health outcomes in offspring. However, characterizing smoking exposure during pregnancy precisely has been rather difficult: self-reported measures of smoking often suffer from recall bias, deliberate misreporting, and selective non-disclosure, while single bioassay measures of nicotine metabolites only reflect recent smoking history and cannot capture the fluctuating and complex patterns of varying exposure of the fetus. Recently, Dukic et al. [1] have proposed a statistical method for combining information from both sources in order to increase the precision of the exposure measurement and the power to detect more subtle effects of smoking. In this paper, we extend the Dukic et al. [1] method to incorporate individual variation of the metabolic parameters (such as clearance rates) into the calibration model of smoking exposure during pregnancy. We apply the new method to the Family Health and Development Project (FHDP), a small convenience sample of 96 predominantly working-class white pregnant women oversampled for smoking. We find that, on average, misreporters smoke 7.5 cigarettes more than they report, with about one third underreporting by 1.5, one third by about 6.5, and one third by 8.5 cigarettes. Partly due to the limited demographic heterogeneity in the FHDP sample, the results are similar to those obtained by the deterministic calibration model, whose adjustments were slightly lower (by 0.5 cigarettes on average). The new results are also, as expected, less sensitive to assumed values of cotinine half-life.
International Nuclear Information System (INIS)
Khuat, Quang Huy; Kim, Song Hyun; Kim, Do Hyun; Shin, Chang Ho
2015-01-01
This technique is known as the Consistent Adjoint Driven Importance Sampling (CADIS) method and is implemented in the SCALE code system. In the CADIS method, an adjoint transport equation has to be solved to determine deterministic importance functions. When using the CADIS method, it has been noted that bias in the adjoint flux estimated by deterministic methods can affect the calculation efficiency and error. The biases of the adjoint function are caused by the methodology, the calculation strategy, the tolerance of the result calculated by the deterministic method, and inaccurate multi-group cross section libraries. In this paper, a study to analyze the influence of the biased adjoint functions on Monte Carlo computational efficiency is pursued. A method to estimate the calculation efficiency was proposed for applying the biased adjoint fluxes in the CADIS approach. For a benchmark problem, the responses and FOMs were evaluated with the SCALE code system as the adjoint fluxes were applied. The results show that the biased adjoint fluxes significantly affect the calculation efficiencies.
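The figure of merit (FOM) referred to above is the standard Monte Carlo efficiency measure: it is (ideally) independent of run length for a given variance-reduction scheme, so a rise in FOM indicates that the importance sampling is working.

```python
def figure_of_merit(rel_error, cpu_time):
    """FOM = 1 / (R^2 * T) for relative error R and computing time T.
    Halving R by brute force costs 4x the time, leaving FOM unchanged;
    effective biasing (e.g. CADIS) raises it."""
    return 1.0 / (rel_error ** 2 * cpu_time)
```

For example, a run reaching R = 1% in 100 s and another reaching R = 0.5% in 400 s have the same FOM, i.e. the same intrinsic efficiency.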
Energy Technology Data Exchange (ETDEWEB)
Bootsma, G. J., E-mail: Gregory.Bootsma@rmp.uhn.on.ca [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Verhaegen, F. [Department of Radiation Oncology - MAASTRO, GROW—School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Medical Physics Unit, Department of Oncology, McGill University, Montreal, Quebec H3G 1A4 (Canada); Jaffray, D. A. [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Ontario Cancer Institute, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9 (Canada)
2015-01-15
Purpose: X-ray scatter is a significant impediment to image quality improvements in cone-beam CT (CBCT). The authors present and demonstrate a novel scatter correction algorithm using a scatter estimation method that simultaneously combines multiple Monte Carlo (MC) CBCT simulations through the use of a concurrently evaluated fitting function, referred to as concurrent MC fitting (CMCF). Methods: The CMCF method uses concurrently run MC CBCT scatter projection simulations that are a subset of the projection angles used in the projection set, P, to be corrected. The scattered photons reaching the detector in each MC simulation are simultaneously aggregated by an algorithm which computes the scatter detector response, S_MC. S_MC is fit to a function, S_F, and if the fit of S_F is within a specified goodness of fit (GOF), the simulations are terminated. The fit, S_F, is then used to interpolate the scatter distribution over all pixel locations for every projection angle in the set P. The CMCF algorithm was tested using a frequency-limited sum of sines and cosines as the fitting function on both simulated and measured data. The simulated data consisted of an anthropomorphic head and a pelvis phantom created from CT data, simulated with and without the use of a compensator. The measured data were a pelvis scan of a phantom and patient taken on an Elekta Synergy platform. The simulated data were used to evaluate various GOF metrics as well as determine a suitable fitness value. The simulated data were also used to quantitatively evaluate the image quality improvements provided by the CMCF method. A qualitative analysis was performed on the measured data by comparing the CMCF scatter corrected reconstruction to the original uncorrected reconstruction, to a reconstruction corrected with a constant scatter estimate, and to a reconstruction created using a set of projections taken with a small cone angle. Results: Pearson's correlation, r, proved to be a
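The fitting step at the heart of CMCF, a frequency-limited sum of sines and cosines fit by least squares to noisy MC scatter samples, can be sketched in one dimension (the paper fits over detector pixels and projection angles; this reduced sketch is for illustration):

```python
import numpy as np

def fourier_design(x, n_freq, period):
    """Design matrix for a frequency-limited sum of sines and cosines."""
    cols = [np.ones_like(x)]
    for k in range(1, n_freq + 1):
        w = 2.0 * np.pi * k / period
        cols.append(np.cos(w * x))
        cols.append(np.sin(w * x))
    return np.column_stack(cols)

def fit_scatter_profile(x, samples, n_freq, period):
    """Least-squares fit of the low-frequency model to noisy scatter
    samples; returns a callable that interpolates at new positions."""
    A = fourier_design(x, n_freq, period)
    coef, *_ = np.linalg.lstsq(A, samples, rcond=None)
    return lambda xq: fourier_design(xq, n_freq, period) @ coef
```

Because scatter varies slowly across the detector, a handful of Fourier terms suppresses the MC noise while preserving the underlying distribution, which is what allows the simulations to be terminated early once the GOF criterion is met.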
The Adjoint Monte Carlo - a viable option for efficient radiotherapy treatment planning
Energy Technology Data Exchange (ETDEWEB)
Goldstein, M [Israel Atomic Energy Commission, Beersheba (Israel). Nuclear Research Center-Negev
1996-12-01
In cancer therapy using collimated beams of photons, the radiation oncologist must determine a set of beams that delivers the required dose to each point in the tumor and minimizes the risk of damage to healthy tissue and vital organs. Currently, the oncologist determines these beams iteratively, using a sequence of dose calculations based on approximate numerical methods. In this paper, a more accurate and potentially faster approach, based on the Adjoint Monte Carlo method, is presented. (authors)
Efficient Geometry and Data Handling for Large-Scale Monte Carlo - Thermal-Hydraulics Coupling
Hoogenboom, J. Eduard
2014-06-01
Detailed coupling of thermal-hydraulics calculations to Monte Carlo reactor criticality calculations requires each axial layer of each fuel pin to be defined separately in the input to the Monte Carlo code, in order to assign to each volume the temperature resulting from the TH calculation and, if the volume contains coolant, also the coolant density. This leads to huge input files for even small systems. In this paper a methodology for dynamic assignment of temperatures and the corresponding cross-section data is demonstrated to overcome this problem. The method is implemented in MCNP5. The method is verified for an infinite lattice with 3x3 BWR-type fuel pins with fuel, cladding and moderator/coolant explicitly modeled. For each pin 60 axial zones are considered with different temperatures and coolant densities. The results of the axial power distribution per fuel pin are compared to a standard MCNP5 run in which all 9x60 cells for fuel, cladding and coolant are explicitly defined and their respective temperatures determined from the TH calculation. Full agreement is obtained. For large-scale application the method is demonstrated for an infinite lattice with 17x17 PWR-type fuel assemblies with 25 rods replaced by guide tubes. Again all geometrical detail is retained. The method was used in a procedure for coupled Monte Carlo and thermal-hydraulics iterations. Using an optimised iteration technique, convergence was obtained in 11 iteration steps.
Nonequilibrium candidate Monte Carlo is an efficient tool for equilibrium simulation
Energy Technology Data Exchange (ETDEWEB)
Nilmeier, J. P.; Crooks, G. E.; Minh, D. D. L.; Chodera, J. D.
2011-10-24
Metropolis Monte Carlo simulation is a powerful tool for studying the equilibrium properties of matter. In complex condensed-phase systems, however, it is difficult to design Monte Carlo moves with high acceptance probabilities that also rapidly sample uncorrelated configurations. Here, we introduce a new class of moves based on nonequilibrium dynamics: candidate configurations are generated through a finite-time process in which a system is actively driven out of equilibrium, and accepted with criteria that preserve the equilibrium distribution. The acceptance rule is similar to the Metropolis acceptance probability, but related to the nonequilibrium work rather than the instantaneous energy difference. Our method is applicable to sampling from either a single thermodynamic state or a mixture of thermodynamic states, and allows both coordinates and thermodynamic parameters to be driven in nonequilibrium proposals. While generating finite-time switching trajectories incurs an additional cost, driving some degrees of freedom while allowing others to evolve naturally can lead to large enhancements in acceptance probabilities, greatly reducing structural correlation times. Using nonequilibrium driven processes vastly expands the repertoire of useful Monte Carlo proposals in simulations of dense solvated systems.
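A minimal sketch of the scheme, assuming a toy 1D harmonic system whose well center λ is driven out and back so that the λ = 0 equilibrium distribution is the target: perturbation steps accumulate protocol work w, propagation steps are ordinary Metropolis moves at the instantaneous λ, and the whole trajectory is accepted with probability min(1, e^(−βw)). All parameters here are illustrative, not from the paper:

```python
import math, random

BETA = 1.0

def U(x, lam):
    # Toy potential: harmonic well whose center is driven by lam (illustrative)
    return 0.5 * (x - lam) ** 2

def metropolis_step(x, lam, dx=0.5):
    # Equilibrium propagation kernel at fixed lam
    xp = x + random.uniform(-dx, dx)
    if random.random() < math.exp(-BETA * (U(xp, lam) - U(x, lam))):
        return xp
    return x

def ncmc_move(x, n_switch=10):
    # Drive lam 0 -> 1 -> 0, alternating perturbation (work) and propagation steps
    lams = [i / n_switch for i in range(n_switch + 1)]
    protocol = lams + lams[-2::-1]            # symmetric: ends back at lam = 0
    w, lam = 0.0, protocol[0]
    for lam_next in protocol[1:]:
        w += U(x, lam_next) - U(x, lam)       # protocol work, not an instantaneous dU
        lam = lam_next
        x = metropolis_step(x, lam)
    return x, w

random.seed(1)
x, n_acc, n_moves = 0.0, 0, 20000
samples = []
for _ in range(n_moves):
    xp, w = ncmc_move(x)
    if random.random() < math.exp(-BETA * w):  # accept with min(1, e^{-beta w})
        x, n_acc = xp, n_acc + 1
    samples.append(x)

mean = sum(samples) / n_moves
var = sum((s - mean) ** 2 for s in samples) / n_moves
```

Because the protocol returns λ to its initial value and the propagation kernels preserve equilibrium at each intermediate λ, the accepted samples are distributed according to the λ = 0 Boltzmann distribution, here a standard normal.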
Thermal inertia and energy efficiency – Parametric simulation assessment on a calibrated case study
International Nuclear Information System (INIS)
Aste, Niccolò; Leonforte, Fabrizio; Manfren, Massimiliano; Mazzon, Manlio
2015-01-01
Highlights: • We perform a parametric simulation study on a calibrated building energy model. • We introduce adaptive shadings and night free cooling in simulations. • We analyze the effect of thermal capacity on the parametric simulation results. • We find that cooling demand and savings scale linearly with thermal capacity. • We assess the advantage of medium-heavy over medium and light configurations. - Abstract: The reduction of energy consumption for heating and cooling services in the existing building stock is a key challenge for global sustainability today, and the retrofit of building envelopes is one of the main issues. Most existing building envelopes have low levels of insulation, high thermal losses due to thermal bridges and cracks, absence of appropriate solar control, etc. Further, in building refurbishment, the importance of a system-level approach is often undervalued in favour of simplistic "off the shelf" efficient solutions, focused on the reduction of thermal transmittance and on the enhancement of solar control capabilities. In many cases, the importance of the dynamic thermal properties is neglected or underestimated, and the effective thermal capacity is not properly considered as one of the design parameters. The research presented aims to critically assess the influence of the dynamic thermal properties of the building fabric (roof, walls and floors) on sensible heating and cooling energy demand for a case study. The case study chosen is an existing office building which has been retrofitted in recent years and whose energy model has been calibrated according to the data collected in the monitoring process. The research illustrates the variations of the sensible thermal energy demand of the building in different retrofit scenarios, and relates them to the variations of the dynamic thermal properties of the construction components. A parametric simulation study has been performed, encompassing the use of
Energy Technology Data Exchange (ETDEWEB)
Mesradi, M. [Institut Pluridisciplinaire Hubert-Curien, UMR 7178 CNRS/IN2P3 et Universite Louis Pasteur, 23 rue du Loess, BP 28, F-67037 Strasbourg Cedex 2 (France); Elanique, A. [Departement de Physique, FS/BP 8106, Universite Ibn Zohr, Agadir, Maroc (Morocco); Nourreddine, A. [Institut Pluridisciplinaire Hubert-Curien, UMR 7178 CNRS/IN2P3 et Universite Louis Pasteur, 23 rue du Loess, BP 28, F-67037 Strasbourg Cedex 2 (France)], E-mail: abdelmjid.nourreddine@ires.in2p3.fr; Pape, A.; Raiser, D.; Sellam, A. [Institut Pluridisciplinaire Hubert-Curien, UMR 7178 CNRS/IN2P3 et Universite Louis Pasteur, 23 rue du Loess, BP 28, F-67037 Strasbourg Cedex 2 (France)
2008-06-15
This work relates to the study and characterization of the response function of an X-ray spectrometry system. The intrinsic efficiency of a Si(Li) detector has been simulated with the Monte Carlo codes MCNP and GEANT4 in the photon energy range of 2.6-59.5 keV. After it was found necessary to take a radiograph of the detector inside its cryostat to learn the correct dimensions, agreement within 10% between the simulations and experimental measurements with several point-like sources and PIXE results was obtained.
International Nuclear Information System (INIS)
Moriarty, K.J.M.; Blackshaw, J.E.
1983-01-01
The computer program calculates the average action per plaquette for SU(6)/Z6 lattice gauge theory. By considering quantum field theory on a space-time lattice, the ultraviolet divergences of the theory are regulated through the finite lattice spacing. The continuum theory results can be obtained by a renormalization group procedure. Making use of the FPS Mathematics Library (MATHLIB), we are able to generate an efficient code for the Monte Carlo algorithm for lattice gauge theory calculations which compares favourably with the performance of the CDC 7600. (orig.)
Efficient data management techniques implemented in the Karlsruhe Monte Carlo code KAMCCO
International Nuclear Information System (INIS)
Arnecke, G.; Borgwaldt, H.; Brandl, V.; Lalovic, M.
1974-01-01
The Karlsruhe Monte Carlo Code KAMCCO is a forward neutron transport code with an eigenfunction and a fixed source option, including time dependence. A continuous energy model is combined with a detailed representation of neutron cross sections, based on linear interpolation, Breit-Wigner resonances and probability tables. All input is processed into densely packed, dynamically addressed parameter fields and networks of pointers (addresses). Estimation routines are decoupled from the random walk and analyze a storage region with sample records. This technique leads to fast execution with moderate storage requirements and without any I/O operations except in the input and output stages. 7 references. (U.S.)
Efficient pseudo-random number generation for monte-carlo simulations using graphic processors
Mohanty, Siddhant; Mohanty, A. K.; Carminati, F.
2012-06-01
A hybrid approach based on the combination of three Tausworthe generators and one linear congruential generator for pseudo-random number generation for GPU programming, as suggested in the NVIDIA-CUDA library, has been used for Monte Carlo sampling. On each GPU thread, a random seed is generated on the fly in a simple way using the quick-and-dirty algorithm, where the mod operation is not performed explicitly because unsigned integer overflow provides it implicitly. Using this hybrid generator, multivariate correlated sampling based on the alias technique has been carried out using both the CUDA and OpenCL languages.
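The hybrid generator, three Tausworthe steps combined with one LCG step, can be sketched as follows. On a GPU the mod 2^32 comes free from unsigned-integer overflow; here it is made explicit with a mask. The shift/mask parameters are the ones published in NVIDIA's CUDA examples; the seed values are arbitrary (Tausworthe seeds must exceed small thresholds tied to the masks):

```python
M32 = 0xFFFFFFFF  # emulate 32-bit unsigned arithmetic

def taus_step(z, s1, s2, s3, m):
    # One step of a Tausworthe generator
    b = (((z << s1) & M32) ^ z) >> s2
    return (((z & m) << s3) & M32) ^ b

def lcg_step(z):
    # Linear congruential step; on a GPU the mod 2^32 happens via overflow
    return (1664525 * z + 1013904223) & M32

def hybrid_taus(state):
    # state = [z1, z2, z3, z4]; returns a float in [0, 1)
    state[0] = taus_step(state[0], 13, 19, 12, 4294967294)
    state[1] = taus_step(state[1], 2, 25, 4, 4294967288)
    state[2] = taus_step(state[2], 3, 11, 17, 4294967280)
    state[3] = lcg_step(state[3])
    return (state[0] ^ state[1] ^ state[2] ^ state[3]) * 2.3283064365387e-10

state = [129, 12345, 98765, 1]
u = [hybrid_taus(state) for _ in range(100000)]
```

In the GPU setting each thread carries its own four-word state, so the streams are independent and no explicit mod instruction is ever issued.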
Understanding and improving the efficiency of full configuration interaction quantum Monte Carlo.
Vigor, W A; Spencer, J S; Bearpark, M J; Thom, A J W
2016-03-07
Within full configuration interaction quantum Monte Carlo, we investigate how the statistical error behaves as a function of the parameters which control the stochastic sampling. We define the inefficiency as a measure of the statistical error per particle sampling the space and per time step and show there is a sizeable parameter regime where this is minimised. We find that this inefficiency increases sublinearly with Hilbert space size and can be reduced by localising the canonical Hartree-Fock molecular orbitals, suggesting that the choice of basis impacts the method beyond that of the sign problem.
International Nuclear Information System (INIS)
Courtine, Fabien; Pilleyre, Thierry; Sanzelle, Serge; Miallier, Didier
2008-01-01
The project aimed at modelling an HPGe well detector in order to predict its photon-counting efficiency by means of the Monte Carlo simulation code GEANT4. Although a qualitative and quantitative description of the crystal and housing was available, uncertainties were associated with the parameters controlling the detector response. This induced poor agreement between the efficiency calculated on the basis of nominal data and the actual efficiency experimentally measured with a ¹³⁷Cs point source. It was then decided to improve the model by parameterizing it through a trial-and-error method. The distribution of the dead layers was adopted as the unique parameter, in order to explore the possibilities and pertinence of this parameter. In the course of the work, it appeared necessary to introduce the possibility that the thickness of the dead layers was not uniform over a given surface. At the end of the process, the results allowed the conclusion that the approach was able to give a model adapted to practical application with a satisfactory precision in the calculated efficiency. The pattern of the 'dead layers' that was obtained is characterized by a variable thickness which seems to be physically relevant. It implicitly and partly accounts for effects that do not originate from actual dead layers, such as incomplete charge collection. Such effects, which are not easily accounted for, can, to a first approximation, be represented by 'dead layers'; this is an advantage of the parameterization that was adopted.
Energy Technology Data Exchange (ETDEWEB)
Maidana, Nora L., E-mail: nmaidana@if.usp.br [Instituto de Física, Universidade de São Paulo, Travessa R 187, Cidade Universitária, CEP:05508-900 São Paulo, SP (Brazil); Vanin, Vito R.; Jahnke, Viktor [Instituto de Física, Universidade de São Paulo, Travessa R 187, Cidade Universitária, CEP:05508-900 São Paulo, SP (Brazil); Fernández-Varea, José M. [Facultat de Física (ECM and ICC), Universitat de Barcelona, Diagonal 645, E-08028 Barcelona (Spain); Martins, Marcos N. [Instituto de Física, Universidade de São Paulo, Travessa R 187, Cidade Universitária, CEP:05508-900 São Paulo, SP (Brazil); Brualla, Lorenzo [NCTeam, Strahlenklinik, Universitätsklinikum Essen, Hufelandstraße 55, D-45122 Essen (Germany)
2013-11-21
We report on the efficiency calibration of an HPGe x-ray detector using radioactive sources and an analytical expression taken from the literature, in two different arrangements, with and without a broad-angle collimator. The frontal surface of the Ge crystal was scanned with pencil beams of photons. The Ge dead layer was found to be nonuniform, with central and intermediate regions that have thin (μm range) and thick (mm range) dead layers, respectively, surrounded by an insensitive ring. We discuss how this fact explains the observed efficiency curves and generalize the adopted model. We show that changes in the thickness of the Ge-crystal dead layer affect the efficiency of x-ray detectors, but the use of an appropriate broad-beam external collimator limiting the photon flux to the thin dead layer in the central region leads to the expected efficiency dependence with energy and renders the calibration simpler.
Energy Technology Data Exchange (ETDEWEB)
An, Z. E-mail: anzhu@scu.edu.cn; Liu, M.T
2002-10-01
In this paper, the efficiency calibration of a Si(Li) detector in the low-energy region down to 0.58 keV has been performed using thick-carbon-target bremsstrahlung produced by 19 keV electron impact. The shape of the efficiency calibration curve was determined from the thick-carbon-target bremsstrahlung spectrum, and the absolute value for the efficiency calibration was obtained using a ²⁴¹Am radioactive standard source. The modified Wentzel formula for thick-target bremsstrahlung was employed, and it was also compared with the most recently developed theoretical model based upon the doubly differential cross-sections for bremsstrahlung of Kissel, Quarles and Pratt. In the present calculation of theoretical bremsstrahlung, the self-absorption correction and the convolution of the detector's response function with the bremsstrahlung spectrum have been taken into account simultaneously. The accuracy of the efficiency calibration in the low-energy region with the method described here was estimated to be about 6%. Moreover, the self-absorption correction calculation based upon the prescription of Wolters et al. has also been presented as an analytical factor with an accuracy of ≈1%.
International Nuclear Information System (INIS)
Zazula, J.M.
1988-01-01
The self-learning Monte Carlo technique has been implemented in the commonly used general-purpose neutron transport code MORSE, in order to enhance sampling of the particle histories that contribute to a detector response. The parameters of all the biasing techniques available in MORSE, i.e. splitting, Russian roulette, source and collision outgoing-energy importance sampling, path-length transformation and additional biasing of the source angular distribution, are optimized. The learning process is performed iteratively after each batch of particles, by retrieving the data concerning the subset of histories that passed the detector region and energy range in the previous batches. This procedure has been tested on two sample problems in nuclear geophysics, where an unoptimized Monte Carlo calculation is particularly inefficient. The results are encouraging, although the presented method does not directly minimize the variance and the convergence of our algorithm is restricted by the statistics of successful histories from the previous random walks. Further applications for modeling of nuclear logging measurements seem to be promising. 11 refs., 2 figs., 3 tabs. (author)
On stochastic error and computational efficiency of the Markov Chain Monte Carlo method
Li, Jun
2014-01-01
In Markov Chain Monte Carlo (MCMC) simulations, thermal equilibrium quantities are estimated by ensemble averages over a sample set containing a large number of correlated samples. These samples are selected in accordance with the probability distribution function, known from the partition function of the equilibrium state. As the stochastic error of the simulation results is significant, it is desirable to understand the variance of the estimation by ensemble average, which depends on the sample size (i.e., the total number of samples in the set) and the sampling interval (i.e., the cycle number between two consecutive samples). Although large sample sizes reduce the variance, they increase the computational cost of the simulation. For a given CPU time, the sample size can be reduced greatly by increasing the sampling interval, while the corresponding increase in variance is negligible if the original sampling interval is very small. In this work, we report a few general rules that relate the variance to the sample size and the sampling interval. These results are observed and confirmed numerically. The variance rules are derived for the MCMC method but are also valid for correlated samples obtained using other Monte Carlo methods. The main contributions of this work include the theoretical proof of these numerical observations and the set of assumptions that lead to them.
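The tradeoff can be illustrated with an AR(1) process standing in for correlated MCMC output (an assumption; real chains differ). For a fixed chain length, i.e. fixed CPU time, increasing the sampling interval shrinks the stored sample set while leaving the variance of the ensemble average nearly unchanged, as long as the interval stays small compared with the correlation time:

```python
import numpy as np

def ar1_chains(n_chains, n_steps, phi, rng):
    # Stationary AR(1): x_i = phi*x_{i-1} + eps_i, marginal variance 1/(1-phi^2)
    x = np.empty((n_chains, n_steps))
    x[:, 0] = rng.standard_normal(n_chains) / np.sqrt(1.0 - phi ** 2)
    eps = rng.standard_normal((n_chains, n_steps))
    for i in range(1, n_steps):
        x[:, i] = phi * x[:, i - 1] + eps[:, i]
    return x

rng = np.random.default_rng(2)
chains = ar1_chains(400, 2000, 0.9, rng)  # 400 independent chains of length 2000

var_of_mean = {}
for interval in (1, 5, 20):
    # Same chain length (same CPU time); fewer stored samples as interval grows
    means = chains[:, ::interval].mean(axis=1)
    var_of_mean[interval] = means.var(ddof=1)
```

With a correlation time of roughly 1/(1−0.9) = 10 cycles, thinning by 5 barely changes the variance of the mean, while thinning by 20 starts to discard information.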
International Nuclear Information System (INIS)
Kelsey IV, Charles T.; Prinja, Anil K.
2011-01-01
We evaluate the Monte Carlo calculation efficiency for multigroup transport relative to continuous energy transport using the MCNPX code system to evaluate secondary neutron doses from a proton beam. We consider both fully forward simulation and application of a midway forward adjoint coupling method to the problem. Previously we developed tools for building coupled multigroup proton/neutron cross section libraries and showed consistent results for continuous energy and multigroup proton/neutron transport calculations. We observed that forward multigroup transport could be more efficient than continuous energy. Here we quantify solution efficiency differences for a secondary radiation dose problem characteristic of proton beam therapy problems. We begin by comparing figures of merit for forward multigroup and continuous energy MCNPX transport and find that multigroup is 30 times more efficient. Next we evaluate efficiency gains for coupling out-of-beam adjoint solutions with forward in-beam solutions. We use a variation of a midway forward-adjoint coupling method developed by others for neutral particle transport. Our implementation makes use of the surface source feature in MCNPX and we use spherical harmonic expansions for coupling in angle rather than solid angle binning. The adjoint out-of-beam transport for organs of concern in a phantom or patient can be coupled with numerous forward, continuous energy or multigroup, in-beam perturbations of a therapy beam line configuration. Out-of-beam dose solutions are provided without repeating out-of-beam transport. (author)
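The comparison above rests on the standard Monte Carlo figure of merit, FOM = 1/(R²T), where R is the relative error of the tally and T the computing time. The run times below are made-up numbers chosen only to reproduce a 30x efficiency ratio at equal statistical error:

```python
def figure_of_merit(rel_err, cpu_time_min):
    # Standard MC figure of merit: FOM = 1 / (R^2 * T).
    # A more efficient method attains the same R in less T, hence a larger FOM.
    return 1.0 / (rel_err ** 2 * cpu_time_min)

fom_continuous = figure_of_merit(0.05, 600.0)  # hypothetical continuous-energy run
fom_multigroup = figure_of_merit(0.05, 20.0)   # hypothetical multigroup run
efficiency_gain = fom_multigroup / fom_continuous
```

Because R² scales as 1/T for a converging tally, the FOM is roughly constant over a run, which is what makes it a fair basis for comparing transport options.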
Monte Carlo simulation of neutron detection efficiency for NE213 scintillation detector
International Nuclear Information System (INIS)
Xi Yinyin; Song Yushou; Chen Zhiqiang; Yang Kun; Zhangsu Yalatu; Liu Xingquan
2013-01-01
An NE213 liquid scintillation neutron detector was simulated using the FLUKA code. The light output of the detector was obtained by transforming the energy deposition of the secondary particles using the Birks formula. Given the measurement threshold, detection efficiencies can be calculated by integrating the light output. The light output, the central efficiency and the average efficiency as a function of the front-surface radius of the detector were simulated, and the results agreed well with experimental results. (authors)
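Birks' formula converts deposited energy to scintillation light as dL/dE = S / (1 + kB·dE/dx), quenching the response of densely ionizing particles. A hedged numerical sketch; the stopping-power models and the kB value are placeholders, not FLUKA's:

```python
import numpy as np

def birks_light_output(e_dep, stopping_power, S=1.0, kB=0.0126):
    # Integrate dL/dE = S / (1 + kB * dE/dx) over the deposited energy (MeV).
    # kB ~ 0.0126 (units folded into the stopping-power model) is a placeholder.
    e = np.linspace(1e-3, e_dep, 2000)
    vals = 1.0 / (1.0 + kB * stopping_power(e))
    return S * np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(e))  # trapezoid rule

def proton_dedx(e):
    # Crude 1/E-like stand-in for the recoil-proton stopping power (illustrative)
    return 100.0 / e

# Electrons (low dE/dx): nearly linear response; recoil protons: strongly quenched
L_electron = birks_light_output(1.0, lambda e: np.full_like(e, 2.0))
L_proton = birks_light_output(1.0, proton_dedx)
```

Applying such a transform to the simulated energy depositions and integrating the resulting light-output spectrum above the threshold yields the detection efficiency described in the abstract.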
Czech Academy of Sciences Publication Activity Database
Mukhopadhyay, N. D.; Sampson, A. J.; Deniz, D.; Carlsson, G. A.; Williamson, J.; Malušek, Alexandr
2012-01-01
Vol. 70, No. 1 (2012), pp. 315-323. ISSN 0969-8043. Institutional research plan: CEZ:AV0Z10480505. Keywords: Monte Carlo; correlated sampling; efficiency; uncertainty; bootstrap. Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders. Impact factor: 1.179, year: 2012. http://www.sciencedirect.com/science/article/pii/S0969804311004775
International Nuclear Information System (INIS)
Uosif, M.A.M.
2006-01-01
A new 9th-degree polynomial fit function has been constructed to calculate the absolute γ-ray detection efficiencies (η_th) of Ge(Li) and HPGe detectors, allowing the absolute efficiency to be calculated at any γ-energy of interest in the range between 25 and 2000 keV and at distances between 6 and 148 cm. The total absolute γ-ray detection efficiencies have been calculated for six detectors, three Ge(Li) and three HPGe, at different distances. The absolute efficiency of each detector was calculated at the specific energies of the standard sources for each measuring distance. In this calculation, experimental (η_exp) and fitted (η_fit) efficiencies have been obtained. Seven calibrated point sources, Am-241, Ba-133, Co-57, Co-60, Cs-137, Eu-152 and Ra-226, were used. The uncertainties of the efficiency calibration have also been calculated for quality control. The measured η_exp and fitted η_fit efficiency values were compared with the efficiency calculated by the Gray fit function; the results obtained on the basis of η_exp and η_fit are in very good agreement.
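A common way to build such a fit is a polynomial in ln(E) for ln(η). The calibration points below are invented placeholders, not the paper's data, and the degree is reduced to 5 because only eight points are used here, whereas the paper fits a 9th-degree function over its full data set:

```python
import numpy as np

# Hypothetical calibration points (energy in keV, absolute efficiency) - illustrative
energy = np.array([59.5, 81.0, 122.1, 356.0, 661.7, 1173.2, 1332.5, 1764.5])
eff = np.array([1.2e-3, 2.0e-3, 2.6e-3, 1.5e-3, 9.0e-4, 5.5e-4, 5.0e-4, 4.0e-4])

# Fit ln(eff) as a polynomial in centered ln(E); centering improves conditioning
log_e = np.log(energy)
x_mean = log_e.mean()
coef = np.polyfit(log_e - x_mean, np.log(eff), deg=5)

def eta_fit(e_kev):
    # Evaluate the fitted efficiency at an arbitrary energy (keV)
    return np.exp(np.polyval(coef, np.log(e_kev) - x_mean))
```

The exponential of a log-log polynomial keeps the fitted efficiency strictly positive, which a direct polynomial in E would not guarantee.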
Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr
2012-01-01
Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via the efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency gain estimates, and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed.
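The bootstrap idea can be sketched on synthetic per-history scores (an assumption; the paper bootstraps its actual scored dose differences). The efficiency gain is taken here as the variance ratio at equal per-history cost, and rare large-weight scores are injected to mimic the heavy tail described above:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic per-history scores: conventional MC vs correlated sampling, with a few
# rare large-weight contributions in the correlated-sampling scores (illustrative)
conv = rng.normal(1.0, 0.5, size=5000)
corr = rng.normal(1.0, 0.1, size=5000)
mask = rng.random(corr.size) < 0.002
corr[mask] += rng.exponential(5.0, size=mask.sum())

def gain(a, b):
    # Efficiency gain proxy: ratio of score variances at equal per-history cost
    return a.var(ddof=1) / b.var(ddof=1)

point = gain(conv, corr)
boot = np.array([
    gain(rng.choice(conv, conv.size), rng.choice(corr, corr.size))
    for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])  # the paper uses the shortest interval
```

The width of (lo, hi) is driven almost entirely by how many of the rare heavy-weight scores each resample happens to catch, which is the mechanism behind the large relative uncertainty reported above.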
Fromm, Steven
2017-09-01
In an effort to study and improve the optical trapping efficiency of the 225Ra Electric Dipole Moment experiment, a fully parallelized Monte Carlo simulation of the laser cooling and trapping apparatus was created at Argonne National Laboratory and is now maintained and upgraded at Michigan State University. The simulation allows us to study optimizations and upgrades without having to use limited quantities of 225Ra (15-day half-life) in the experiment's apparatus. It predicts a trapping efficiency that differs from the observed value in the experiment by approximately a factor of thirty. The effects of varying oven geometry, background gas interactions, laboratory magnetic fields, MOT laser beam configurations and laser frequency noise were studied and ruled out as causes of the discrepancy between measured and predicted values of the overall trapping efficiency. Presently, the simulation is being used to help optimize a planned blue slower laser upgrade to the experiment's apparatus, which will increase the overall trapping efficiency by up to two orders of magnitude. This work is supported by Michigan State University, the Director's Research Scholars Program at the National Superconducting Cyclotron Laboratory, and the U.S. DOE, Office of Science, Office of Nuclear Physics, under Contract DE-AC02-06CH11357.
Memory-efficient calculations of adjoint-weighted tallies by the Monte Carlo Wielandt method
International Nuclear Information System (INIS)
Choi, Sung Hoon; Shim, Hyung Jin
2016-01-01
Highlights: • The MC Wielandt method is applied to reduce memory for the adjoint estimation. • The adjoint-weighted kinetics parameters are estimated in the MC Wielandt calculations. • The MC S/U analyses are conducted in the MC Wielandt calculations. - Abstract: The current Monte Carlo (MC) adjoint-weighted tally techniques based on the iterated fission probability (IFP) concept require a memory amount proportional to the numbers of adjoint-weighted tallies and histories per cycle, needed to store history-wise tally estimates during the convergence of the adjoint flux. The conventional MC adjoint-weighted perturbation (AWP) calculations for nuclear data sensitivity and uncertainty (S/U) analysis especially suffer from the huge memory consumption required to realize the IFP concept. In order to reduce the memory requirement drastically, we present a new adjoint estimation method in which the memory usage is independent of the number of histories per cycle, obtained by applying the IFP concept to MC Wielandt calculations. The new algorithms for the adjoint-weighted kinetics parameter estimation and the AWP calculations in the MC Wielandt method are implemented in the Seoul National University MC code McCARD, and their validity is demonstrated in critical facility problems. From the comparison of the nuclear data S/U analyses, it is demonstrated that the memory required to store the sensitivity estimates in the proposed method becomes negligibly small.
Monte Carlo calculations on efficiency of boron neutron capture therapy for brain cancer
International Nuclear Information System (INIS)
Awadalla, Galaleldin Mohamed Suliman
2015-11-01
The search for ways to treat cancer has led to many different treatments, including surgery, chemotherapy, and radiation therapy. Among these treatments, boron neutron capture therapy (BNCT) has shown promising results. BNCT is a radiotherapy treatment modality that has been proposed to treat brain cancer. In this technique, cancerous cells are injected with ¹⁰B and irradiated by thermal neutrons to increase the probability of the ¹⁰B(n,α)⁷Li reaction occurring. By concentrating boron in cancer cells, this reaction can potentially deliver a radiation dose sufficient to kill them. The short range of the ¹⁰B(n,α)⁷Li reaction products limits the damage to cancerous cells without affecting healthy tissues. The effectiveness and safety of radiotherapy depend on the radiation dose delivered to the tumor and to healthy tissues. In this thesis, after reviewing the basics and working principles of boron neutron capture therapy (BNCT), Monte Carlo simulations were carried out to model a thermal neutron source suitable for BNCT and to examine the performance of the proposed model when used to irradiate a sample of boron containing both ¹⁰B and ¹¹B isotopes. The MCNP5 code was used to examine the modeled neutron source through different shielding materials. The results are presented, analyzed and discussed at the end of the work. (author)
Zhang, Guannan; Del-Castillo-Negrete, Diego
2017-10-01
Kinetic descriptions of runaway electrons (RE) are usually based on the bounce-averaged Fokker-Planck model that determines the PDFs of RE. Despite the simplifications involved, the Fokker-Planck equation can rarely be solved analytically, and direct numerical approaches (e.g., continuum and particle-based Monte Carlo (MC)) can be time consuming, especially in the computation of asymptotic-type observables including the runaway probability, the slowing-down and runaway mean times, and the energy limit probability. Here we present a novel backward MC approach to these problems based on backward stochastic differential equations (BSDEs). The BSDE model can simultaneously describe the PDF of RE and the runaway probabilities by means of the well-known Feynman-Kac theory. The key ingredient of the backward MC algorithm is to place all the particles in a runaway state and simulate them backward from the terminal time to the initial time. As such, our approach can provide much faster convergence than brute-force MC methods, which can significantly reduce the number of particles required to achieve a prescribed accuracy. Moreover, our algorithm can be parallelized as easily as a direct MC code, which paves the way for conducting large-scale RE simulations. This work is supported by DOE FES and ASCR under the Contract Numbers ERKJ320 and ERAT377.
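For intuition, the observable the BSDE formulation computes, e.g. the probability of reaching a runaway state, can be written via Feynman-Kac as u(x, t) = E[1{X_T ≥ x_run} | X_t = x]. A brute-force forward Euler-Maruyama estimator for a toy drift-diffusion model (all coefficients invented) looks like this; the paper's backward algorithm obtains u far more efficiently by simulating from the terminal condition backward:

```python
import numpy as np

def runaway_probability(x0, drift, sigma, x_run, t_final,
                        dt=0.01, n_paths=20000, seed=3):
    # Forward Euler-Maruyama estimate of u(x0, 0) = P(X_T >= x_run | X_0 = x0)
    # for dX = drift(X) dt + sigma dW. The Feynman-Kac theorem identifies this
    # expectation with the solution of the associated backward PDE.
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, float(x0))
    for _ in range(int(t_final / dt)):
        x += drift(x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return float(np.mean(x >= x_run))

# Toy coefficients (invented): constant drift toward the "runaway" threshold
p = runaway_probability(0.0, lambda x: 0.5 * np.ones_like(x), 1.0, 1.0, 2.0)
```

The forward estimator must be rerun for every initial condition of interest, which is exactly the cost the backward approach avoids.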
Efficient SPECT scatter calculation in non-uniform media using correlated Monte Carlo simulation
International Nuclear Information System (INIS)
Beekman, F.J.
1999-01-01
Accurate simulation of scatter in projection data of single photon emission computed tomography (SPECT) is computationally extremely demanding for activity distributions in non-uniform dense media. This paper suggests how the computation time and memory requirements can be significantly reduced. First the scatter projection of a uniform dense object (P_SDSE) is calculated using a previously developed accurate and fast method which includes all orders of scatter (slab-derived scatter estimation), and then P_SDSE is transformed towards the desired projection P which is based on the non-uniform object. The transform of P_SDSE is based on two first-order Compton scatter Monte Carlo (MC) simulated projections. One is based on the uniform object (P_u) and the other on the object with non-uniformities (P_ν). P is estimated by P-tilde = P_SDSE·P_ν/P_u. A tremendous decrease in noise in P-tilde is achieved by tracking photon paths for P_ν identical to those which were tracked for the calculation of P_u and by using analytical rather than stochastic modelling of the collimator. The method was validated by comparing the results with standard MC-simulated scatter projections (P) of 99mTc and 201Tl point sources in a digital thorax phantom. After correction, excellent agreement was obtained between P-tilde and P. The total computation time required to calculate an accurate scatter projection of an extended distribution in a thorax phantom on a PC is only a few tens of seconds per projection, which makes the method attractive for application in accurate scatter correction in clinical SPECT. Furthermore, the method removes the need for the excessive computer memory required by previously proposed 3D model-based scatter correction methods. (author)
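The correction P-tilde = P_SDSE·P_ν/P_u relies on evaluating P_ν and P_u from identical photon paths, so the stochastic noise largely cancels in the ratio. A minimal sketch of this common-random-numbers idea, with toy stand-ins for the two projection integrands (not the actual SPECT transport):

```python
import numpy as np

def correlated_ratio_correction(base_estimate, f_uniform, f_nonuniform,
                                n=100_000, seed=1):
    """Correct a fast uniform-medium estimate with the ratio of two MC
    estimates evaluated on the SAME samples, mimicking P~ = P_SDSE * P_nu / P_u."""
    rng = np.random.default_rng(seed)
    u = rng.random(n)                    # shared "photon paths"
    p_u = f_uniform(u).mean()            # uniform-object estimate
    p_nu = f_nonuniform(u).mean()        # non-uniform estimate, same paths
    return base_estimate * p_nu / p_u

# With common samples the noise cancels exactly for proportional integrands:
corrected = correlated_ratio_correction(2.0, lambda u: u, lambda u: 0.5 * u)
```

Had p_u and p_nu been computed from independent samples, the ratio would carry the noise of both estimates; sharing the samples is what makes the few-seconds-per-projection timing possible.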
International Nuclear Information System (INIS)
Santos, J.A.M.; Carrasco, M.F.; Lencart, J.; Bastos, A.L.
2009-01-01
A careful analysis of the influence of geometry and source positioning on the activity measurements of a nuclear medicine dose calibrator is presented for 99mTc. The implementation of a quasi-point-source apparent-activity curve measurement is proposed for an accurate correction of the activity inside several syringes, and compared with a theoretical geometric efficiency model. Additionally, new geometrical parameters are proposed to test and verify the correct positioning of the syringes as part of acceptance testing and quality control procedures.
de Graaf, C.S.L.; Kandhai, D.; Sloot, P.M.A.
According to Basel III, financial institutions have to charge a credit valuation adjustment (CVA) to account for a possible counterparty default. Calculating this measure and its sensitivities is one of the biggest challenges in risk management. Here, we introduce an efficient method for the
C.S.L. de Graaf (Kees); B.D. Kandhai; P.M.A. Sloot
2017-01-01
According to Basel III, financial institutions have to charge a credit valuation adjustment (CVA) to account for a possible counterparty default. Calculating this measure and its sensitivities is one of the biggest challenges in risk management. Here, we introduce an efficient method
Energy Technology Data Exchange (ETDEWEB)
Yalcin, S. [Education Faculty, Kastamonu University, 37200 Kastamonu (Turkey)], E-mail: yalcin@gazi.edu.tr; Gurler, O.; Kaynak, G. [Department of Physics, Faculty of Arts and Sciences, Uludag University, Gorukle Campus, 16059 Bursa (Turkey); Gundogdu, O. [Department of Physics, School of Engineering and Physical Sciences, University of Surrey, Guildford GU2 7XH (United Kingdom)
2007-10-15
This paper presents results on the total gamma counting efficiency of a NaI(Tl) detector from point and disk sources. The directions of photons emitted from the source were determined by Monte-Carlo techniques and the photon path lengths in the detector were determined by analytic equations depending on photon directions. This is called the hybrid Monte-Carlo method where analytical expressions are incorporated into the Monte-Carlo simulations. A major advantage of this technique is the short computation time compared to other techniques on similar computational platforms. Another advantage is the flexibility for inputting detector-related parameters (such as source-detector distance, detector radius, source radius, detector linear attenuation coefficient) into the algorithm developed, thus making it an easy and flexible method to apply to other detector systems and configurations. The results of the total counting efficiency model put forward for point and disc sources were compared with the previous work reported in the literature.
International Nuclear Information System (INIS)
Yalcin, S.; Gurler, O.; Kaynak, G.; Gundogdu, O.
2007-01-01
This paper presents results on the total gamma counting efficiency of a NaI(Tl) detector from point and disk sources. The directions of photons emitted from the source were determined by Monte-Carlo techniques and the photon path lengths in the detector were determined by analytic equations depending on photon directions. This is called the hybrid Monte-Carlo method where analytical expressions are incorporated into the Monte-Carlo simulations. A major advantage of this technique is the short computation time compared to other techniques on similar computational platforms. Another advantage is the flexibility for inputting detector-related parameters (such as source-detector distance, detector radius, source radius, detector linear attenuation coefficient) into the algorithm developed, thus making it an easy and flexible method to apply to other detector systems and configurations. The results of the total counting efficiency model put forward for point and disc sources were compared with the previous work reported in the literature.
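The hybrid scheme — Monte Carlo for the emission direction, analytic geometry for the path length — can be sketched for the simplest configuration, a point source on the axis of a cylindrical detector; the dimensions and attenuation coefficient below are placeholders, not the paper's values.

```python
import numpy as np

def total_efficiency_point_source(d, R, H, mu, n=200_000, seed=2):
    """Hybrid MC: sample isotropic photon directions, compute the chord
    through a cylindrical detector (radius R, height H, flat face a distance
    d from an on-axis point source) analytically, and average the
    interaction probability 1 - exp(-mu * L)."""
    rng = np.random.default_rng(seed)
    cos_t = rng.uniform(-1.0, 1.0, n)              # isotropic over 4*pi
    hit = cos_t > d / np.hypot(d, R)               # ray strikes the front face
    ct = cos_t[hit]
    st = np.sqrt(np.maximum(1.0 - ct * ct, 1e-300))
    t_in = d / ct                                  # entry point on front face
    t_out = np.minimum(R / st, (d + H) / ct)       # exit via side or back face
    L = t_out - t_in                               # analytic path length
    return float(np.sum(1.0 - np.exp(-mu * L)) / n)
```

Only the direction is sampled stochastically; the chord length is exact, which is where the method's speed advantage over full transport simulation comes from.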
International Nuclear Information System (INIS)
Koukorava, C; Farah, J; Clairand, I; Donadille, L; Struelens, L; Vanhavere, F; Dimitriou, P
2014-01-01
Monte Carlo calculations were used to investigate the efficiency of radiation protection equipment in reducing eye and whole body doses during fluoroscopically guided interventional procedures. Eye lens doses were determined considering different models of eyewear with various shapes, sizes and lead thickness. The origin of scattered radiation reaching the eyes was also assessed to explain the variation in the protection efficiency of the different eyewear models with exposure conditions. The work also investigates the variation of eye and whole body doses with ceiling-suspended shields of various shapes and positioning. For all simulations, a broad spectrum of configurations typical for most interventional procedures was considered. Calculations showed that ‘wrap around’ glasses are the most efficient eyewear models reducing, on average, the dose by 74% and 21% for the left and right eyes respectively. The air gap between the glasses and the eyes was found to be the primary source of scattered radiation reaching the eyes. The ceiling-suspended screens were more efficient when positioned close to the patient’s skin and to the x-ray field. With the use of such shields, the Hp(10) values recorded at the collar, chest and waist level and the Hp(3) values for both eyes were reduced on average by 47%, 37%, 20% and 56% respectively. Finally, simulations proved that beam quality and lead thickness have little influence on eye dose while beam projection, the position and head orientation of the operator as well as the distance between the image detector and the patient are key parameters affecting eye and whole body doses. (paper)
International Nuclear Information System (INIS)
Verma, Amit K.; Narayani, K.; Pant, Amar D.; Bhosale, Nitin; Anilkumar, S.; Palani Selvam, T.
2018-01-01
Scintillation spectrometers are widely used in the detection and spectrometry of gamma photons. Sodium iodide (NaI(Tl)) is the most commonly used scintillation detector for gamma-ray spectrometry. However, for portable applications that require higher efficiency and better resolution, cerium bromide (CeBr3) crystals are more suitable than NaI(Tl) crystals. CeBr3 detectors have high light output (∼68,000 photons/MeV), good proportionality, fast response and better energy resolution (<4% at the 662 keV line of 137Cs), which makes them very promising detectors for gamma-ray spectrometry. In the present work, experimental and Monte Carlo based efficiencies of a CeBr3 detector for 137Cs and 60Co were evaluated.
Efficient Monte Carlo Estimation of the Expected Value of Sample Information Using Moment Matching.
Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca
2018-02-01
The Expected Value of Sample Information (EVSI) is used to calculate the economic value of a new research strategy. Although this value would be important to both researchers and funders, there are very few practical applications of the EVSI. This is due to computational difficulties associated with calculating the EVSI in practical health economic models using nested simulations. We present an approximation method for the EVSI that is framed in a Bayesian setting and is based on estimating the distribution of the posterior mean of the incremental net benefit across all possible future samples, known as the distribution of the preposterior mean. Specifically, this distribution is estimated using moment matching coupled with simulations that are available for probabilistic sensitivity analysis, which is typically mandatory in health economic evaluations. This novel approximation method is applied to a health economic model that has previously been used to assess the performance of other EVSI estimators, and it accurately estimates the EVSI. The computational time for this method is competitive with other methods. We have developed a new calculation method for the EVSI which is computationally efficient and accurate. This novel method relies on some additional simulation, so it can be expensive in models with a large computational cost.
International Nuclear Information System (INIS)
Li Di; Wang Geng; Chen Yang; Li Lin; Shrivastav, Gaurav; Oak, Stimit; Tasch, Al; Banerjee, Sanjay; Obradovic, Borna
2001-01-01
A physically-based three-dimensional Monte Carlo simulator has been developed within UT-MARLOWE, which is capable of simulating ion implantation into multi-material systems and arbitrary topography. Introducing the third dimension can result in a severe CPU time penalty. In order to minimize this penalty, a three-dimensional trajectory replication algorithm has been developed, implemented and verified. More than two orders of magnitude savings in CPU time have been observed. An unbalanced Octree structure was used to decompose three-dimensional structures. It effectively simplifies the structure, offers a good balance between modeling accuracy and computational efficiency, and allows arbitrary precision in mapping the Octree onto the desired structure. Using the well-established and validated physical models in UT-MARLOWE 5.0, this simulator has been extensively verified by comparing integrated one-dimensional simulation results with secondary ion mass spectroscopy (SIMS). Two cases, a typical case and a worst-case scenario, were selected to simulate ion implantation into polysilicon with this simulator: implantation into a random, amorphous network, and implantation under the worst-case channeling condition, into (110)-oriented wafers.
Energy Technology Data Exchange (ETDEWEB)
Mesta, M.; Coehoorn, R.; Bobbert, P. A. [Department of Applied Physics, Technische Universiteit Eindhoven, P.O. Box 513, NL-5600 MB Eindhoven (Netherlands); Eersel, H. van [Simbeyond B.V., P.O. Box 513, NL-5600 MB Eindhoven (Netherlands)
2016-03-28
Triplet-triplet annihilation (TTA) and triplet-polaron quenching (TPQ) in organic light-emitting devices (OLEDs) lead to a roll-off of the internal quantum efficiency (IQE) with increasing current density J. We employ a kinetic Monte Carlo modeling study to analyze the measured IQE and color balance as a function of J in a multilayer hybrid white OLED that combines fluorescent blue with phosphorescent green and red emission. We investigate two models for TTA and TPQ involving the phosphorescent green and red emitters: short-range nearest-neighbor quenching and long-range Förster-type quenching. Short-range quenching predicts roll-off to occur at much higher J than measured. Taking long-range quenching with Förster radii for TTA and TPQ equal to twice the Förster radii for exciton transfer leads to a fair description of the measured IQE-J curve, with the major contribution to the roll-off coming from TPQ. The measured decrease of the ratio of phosphorescent to fluorescent component of the emitted light with increasing J is correctly predicted. A proper description of the J-dependence of the ratio of red and green phosphorescent emission needs further model refinements.
Mesta, M.; van Eersel, H.; Coehoorn, R.; Bobbert, P. A.
2016-03-01
Triplet-triplet annihilation (TTA) and triplet-polaron quenching (TPQ) in organic light-emitting devices (OLEDs) lead to a roll-off of the internal quantum efficiency (IQE) with increasing current density J. We employ a kinetic Monte Carlo modeling study to analyze the measured IQE and color balance as a function of J in a multilayer hybrid white OLED that combines fluorescent blue with phosphorescent green and red emission. We investigate two models for TTA and TPQ involving the phosphorescent green and red emitters: short-range nearest-neighbor quenching and long-range Förster-type quenching. Short-range quenching predicts roll-off to occur at much higher J than measured. Taking long-range quenching with Förster radii for TTA and TPQ equal to twice the Förster radii for exciton transfer leads to a fair description of the measured IQE-J curve, with the major contribution to the roll-off coming from TPQ. The measured decrease of the ratio of phosphorescent to fluorescent component of the emitted light with increasing J is correctly predicted. A proper description of the J-dependence of the ratio of red and green phosphorescent emission needs further model refinements.
International Nuclear Information System (INIS)
Mesta, M.; Coehoorn, R.; Bobbert, P. A.; Eersel, H. van
2016-01-01
Triplet-triplet annihilation (TTA) and triplet-polaron quenching (TPQ) in organic light-emitting devices (OLEDs) lead to a roll-off of the internal quantum efficiency (IQE) with increasing current density J. We employ a kinetic Monte Carlo modeling study to analyze the measured IQE and color balance as a function of J in a multilayer hybrid white OLED that combines fluorescent blue with phosphorescent green and red emission. We investigate two models for TTA and TPQ involving the phosphorescent green and red emitters: short-range nearest-neighbor quenching and long-range Förster-type quenching. Short-range quenching predicts roll-off to occur at much higher J than measured. Taking long-range quenching with Förster radii for TTA and TPQ equal to twice the Förster radii for exciton transfer leads to a fair description of the measured IQE-J curve, with the major contribution to the roll-off coming from TPQ. The measured decrease of the ratio of phosphorescent to fluorescent component of the emitted light with increasing J is correctly predicted. A proper description of the J-dependence of the ratio of red and green phosphorescent emission needs further model refinements.
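The long-range quenching model above uses the standard point-dipole Förster rate, which scales as the sixth power of the Förster radius; doubling the radius therefore boosts the rate at fixed separation by a factor 2^6 = 64. A minimal sketch (lifetime and distances in arbitrary placeholder units):

```python
import numpy as np

def forster_rate(r, R_F, tau=1.0):
    """Point-dipole Foerster transfer/quenching rate k(r) = (1/tau)*(R_F/r)**6,
    with tau the donor exciton lifetime. The steep r**-6 dependence is why
    Foerster radii twice the exciton-transfer radii dominate the roll-off."""
    return (R_F / np.asarray(r, dtype=float)) ** 6 / tau

# Rate ratio when the quenching radius is twice the exciton-transfer radius:
boost = forster_rate(3.0, 2.0) / forster_rate(3.0, 1.0)   # ~ 64
```

This factor-64 enhancement is the quantitative reason the long-range TTA/TPQ model reproduces the measured roll-off while nearest-neighbor quenching does not.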
Energy Technology Data Exchange (ETDEWEB)
Santos, J.A.M. [Servico de Fisica Medica, Instituto Portugues de Oncologia do Porto Francisco Gentil, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200072 Porto (Portugal); Centro de Investigacao, Instituto Portugues de Oncologia do Porto Francisco Gentil, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200072 Porto (Portugal)], E-mail: a.miranda@portugalmail.pt; Carrasco, M.F. [Servico de Fisica Medica, Instituto Portugues de Oncologia do Porto Francisco Gentil, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200072 Porto (Portugal); Centro de Investigacao, Instituto Portugues de Oncologia do Porto Francisco Gentil, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200072 Porto (Portugal); Lencart, J. [Servico de Fisica Medica, Instituto Portugues de Oncologia do Porto Francisco Gentil, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200072 Porto (Portugal); Bastos, A.L. [Servico de Medicina Nuclear, Instituto Portugues de Oncologia do Porto Francisco Gentil, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200072 Porto (Portugal)
2009-06-15
A careful analysis of geometry and source positioning influence in the activity measurement outcome of a nuclear medicine dose calibrator is presented for {sup 99m}Tc. The implementation of a quasi-point source apparent activity curve measurement is proposed for an accurate correction of the activity inside several syringes, and compared with a theoretical geometric efficiency model. Additionally, new geometrical parameters are proposed to test and verify the correct positioning of the syringes as part of acceptance testing and quality control procedures.
Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.
2010-10-01
Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
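The accuracy gap between first-order explicit Euler and a second-order explicit scheme on a conceptual store can be seen in a few lines; the linear reservoir, rate constant and step size below are illustrative assumptions, not the case-study model of the abstract.

```python
import numpy as np

def simulate_reservoir(k, precip, dt, method="euler"):
    """Linear reservoir dS/dt = P(t) - k*S, the archetypal conceptual
    rainfall-runoff store, integrated with a fixed step dt by either
    first-order explicit Euler or second-order Heun (explicit trapezoid)."""
    S, out = 0.0, []
    for P in precip:
        f0 = P - k * S
        if method == "euler":
            S = S + dt * f0
        else:                       # Heun: predictor-corrector, second order
            S_pred = S + dt * f0
            S = S + 0.5 * dt * (f0 + (P - k * S_pred))
        out.append(S)
    return np.array(out)

# Constant forcing has the exact solution S(t) = (P/k) * (1 - exp(-k*t)).
k, dt, n = 0.5, 0.5, 40
exact = (1.0 / k) * (1.0 - np.exp(-k * dt * np.arange(1, n + 1)))
err_euler = np.max(np.abs(simulate_reservoir(k, [1.0] * n, dt, "euler") - exact))
err_heun = np.max(np.abs(simulate_reservoir(k, [1.0] * n, dt, "heun") - exact))
```

At this step size the Euler error is an order of magnitude larger than Heun's; in an MCMC setting such integration error is re-sampled at every parameter proposal, which is how the artificial posterior bimodality arises.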
Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans
2015-02-01
The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has been previously made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then, by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this
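The two initial designs compared above differ only in where each Latin hypercube interval is sampled. A minimal sketch of both variants (an illustration of the definitions, not the OLHS codes discussed):

```python
import numpy as np

def lhs(n_samples, n_dims, midpoint=False, seed=3):
    """Latin hypercube design on [0, 1]**n_dims: each dimension is split into
    n_samples equal intervals and each interval is used exactly once.
    midpoint=True places points at interval centers (midpoint LHS);
    otherwise points fall uniformly at random within intervals (random LHS)."""
    rng = np.random.default_rng(seed)
    # one independent random permutation of the intervals per dimension
    perms = np.argsort(rng.random((n_samples, n_dims)), axis=0)
    offsets = 0.5 if midpoint else rng.random((n_samples, n_dims))
    return (perms + offsets) / n_samples
```

Either design would then be fed to the space-filling optimizer; the study's point is that starting from the midpoint variant yields better-optimized designs.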
Griesbach, J.; Wetterer, C.; Sydney, P.; Gerber, J.
Photometric processing of non-resolved Electro-Optical (EO) images has commonly required the use of dark and flat calibration frames that are obtained to correct for charge coupled device (CCD) dark (thermal) noise and CCD quantum efficiency/optical path vignetting effects respectively. It is necessary to account/calibrate for these effects so that the brightness of objects of interest (e.g. stars or resident space objects (RSOs)) may be measured in a consistent manner across the CCD field of view. Detected objects typically require further calibration using aperture photometry to compensate for sky background (shot noise). For this, annuluses are measured around each detected object whose contained pixels are used to estimate an average background level that is subtracted from the detected pixel measurements. In a new photometric calibration software tool developed for AFRL/RD, called Efficient Photometry In-Frame Calibration (EPIC), an automated background normalization technique is proposed that eliminates the requirement to capture dark and flat calibration images. The proposed technique simultaneously corrects for dark noise, shot noise, and CCD quantum efficiency/optical path vignetting effects. With this, a constant detection threshold may be applied for constant false alarm rate (CFAR) object detection without the need for aperture photometry corrections. The detected pixels may be simply summed (without further correction) for an accurate instrumental magnitude estimate. The noise distribution associated with each pixel is assumed to be sampled from a Poisson distribution. Since Poisson distributed data closely resembles Gaussian data for parameterized means greater than 10, the data may be corrected by applying bias subtraction and standard-deviation division. EPIC performs automated background normalization on rate-tracked satellite images using the following technique. A deck of approximately 50-100 images is combined by performing an independent median
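The bias-subtraction and standard-deviation-division step described above can be sketched per pixel over a frame stack; using the median across frames as the bias estimate is our assumption for robustness against transiting stars and RSOs, not necessarily EPIC's exact estimator.

```python
import numpy as np

def normalize_stack(stack):
    """Per-pixel background normalization across a stack of frames
    (axis 0 = frame index). Under the Gaussian approximation to Poisson
    noise (valid for means greater than ~10), subtracting a per-pixel bias
    and dividing by the per-pixel standard deviation yields ~N(0, 1)
    background pixels, enabling a constant CFAR detection threshold."""
    bias = np.median(stack, axis=0)           # robust per-pixel bias estimate
    sigma = np.std(stack, axis=0)
    return (stack - bias) / np.where(sigma > 0, sigma, 1.0)

# Simulated stack: Poisson background (mean 50) over 100 frames of 64x64 pixels
frames = np.random.default_rng(8).poisson(50.0, size=(100, 64, 64)).astype(float)
z = normalize_stack(frames)
```

Because the normalization is estimated from the science frames themselves, no separate dark or flat calibration exposures are needed, which is the central claim of the EPIC tool.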
International Nuclear Information System (INIS)
Tang, Xiao-Bin; Meng, Jia; Wang, Peng; Cao, Ye; Huang, Xi; Wen, Liang-Sheng; Chen, Da
2016-01-01
A small-sized UAV (NH-UAV) airborne system with two gamma spectrometers (LaBr3 detector and HPGe detector) was developed to monitor activity concentration in serious nuclear accidents, such as the Fukushima nuclear accident. The efficiency calibration and determination of minimum detectable activity concentration (MDAC) of the specific system were studied by MC simulations at different flight altitudes, different horizontal distances from the detection position to the source term center and different source term sizes. Both air and ground radiation were considered in the models. The results obtained may provide instructive suggestions for in-situ radioactivity measurements of NH-UAV. - Highlights: • A small-sized UAV airborne sensor system was developed. • Three radioactive models were chosen to simulate the Fukushima accident. • Both the air and ground radiation were considered in the models. • The efficiency calculations and MDAC values were given. • The sensor system is able to monitor in serious nuclear accidents.
Efficiency calibration of a liquid scintillation counter for 90Y Cherenkov counting
International Nuclear Information System (INIS)
Vaca, F.; Garcia-Leon, M.
1998-01-01
In this paper a complete and self-consistent method for 90Sr determination in environmental samples is presented. It is based on Cherenkov counting of 90Y with a conventional liquid scintillation counter. The effects of color quenching on the counting efficiency and background are carefully studied. A working curve is presented which allows the correction to the counting efficiency to be quantified as a function of the color quenching strength. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Nicoulaud-Gouin, V.; Giacalone, M.; Gonze, M.A. [Institut de Radioprotection et de Surete Nucleaire-PRP-ENV/SERIS/LM2E (France); Martin-Garin, A.; Garcia-Sanchez, L. [IRSN-PRP-ENV/SERIS/L2BT (France)
2014-07-01
Calibration of transfer models against observation data is a challenge, especially if parameter uncertainty is required and if competing models must be discriminated. Two main calibration methods are generally used. In the frequentist approach, the unknown parameter of interest is assumed fixed and its estimation is based on the data only; in this category, the least-squares method has many restrictions for nonlinear models, and competing models need to be nested in order to be compared. In Bayesian inference, the unknown parameter of interest is treated as random and its estimation is based on the data and on prior information. Compared to the frequentist method, it provides probability density functions and therefore pointwise estimates with credible intervals. However, in practical cases, Bayesian inference is a complex problem of numerical integration, which explains its low use in operational modeling, including radioecology. This study aims to illustrate the interest and feasibility of the Bayesian approach in radioecology, particularly in the case of ordinary-differential-equation models with non-constant coefficients, which cover most radiological risk assessment models, notably those implemented in the Symbiose platform (Gonze et al, 2010). The Markov chain Monte Carlo (MCMC) method (Metropolis et al., 1953) was used because the posterior expectations are intractable integrals. The invariant distribution of the parameters was obtained with the Metropolis-Hastings algorithm (Hastings, 1970). The GNU MCSim software (Bois and Maszle, 2011), a Bayesian hierarchical framework, was used to deal with nonlinear differential models. Two case studies including this type of model were investigated: an equilibrium-kinetic sorption model (EK) (e.g. van Genuchten et al, 1974), with experimental data concerning {sup 137}Cs and {sup 85}Sr sorption and desorption in different soils studied in stirred flow-through reactors. This model, generalizing the K{sub d} approach
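The Metropolis-Hastings sampler cited above (Hastings, 1970) is compact enough to sketch. The first-order decay model, noise level and flat prior below are illustrative placeholders, not the EK sorption model of the study.

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_iter=5000, step=0.05, seed=4):
    """Random-walk Metropolis-Hastings: propose x' = x + step*N(0,1) and
    accept with probability min(1, post(x')/post(x))."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        x_new = x + step * rng.standard_normal()
        lp_new = log_post(x_new)
        if np.log(rng.random()) < lp_new - lp:   # Metropolis acceptance test
            x, lp = x_new, lp_new
        chain[i] = x
    return chain

# Toy calibration: rate constant k of first-order decay y = exp(-k*t),
# observed with Gaussian noise (sigma = 0.02) and a flat prior on k > 0.
t = np.linspace(0.0, 5.0, 20)
y_obs = np.exp(-0.8 * t) + 0.02 * np.random.default_rng(5).standard_normal(t.size)

def log_post(k):
    if k <= 0.0:
        return -np.inf                            # prior support: k > 0
    return -0.5 * np.sum((y_obs - np.exp(-k * t)) ** 2) / 0.02 ** 2

chain = metropolis_hastings(log_post, x0=1.0)
```

The retained chain (after burn-in) approximates the posterior of k, from which pointwise estimates and credible intervals follow directly, as described in the abstract.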
Calorimetry of energy-efficient glow discharge apparatus design and calibration
International Nuclear Information System (INIS)
Benson, Thomas B.; Passell, Thomas O.
2006-01-01
This work aims to develop a 'family' of low-powered, calorimetrically accurate glow discharge units, similar to that reported by Dardik et al. at ICCF-10, and to use these to test a wide range of cathode materials, electrode coatings, gas types, gas pressures, and power input levels. We describe the design and calibration of these units. The strategy is to use a large number of very similar units so that the calorimetric response does not vary significantly for a given power level. The design is metal or sealed glass cylindrical tubes, charged with 0.4 - 50 Torr mixtures of deuterium, hydrogen, argon, or helium gases. Units operate from 0.2 to >2 W power input. The units have low mass ( 1.2 with more than 95% certainty. It provides a valuable new platform for large-scale exploration of excess heat effects in the gas phase, using low-power inputs in the 0-3 W range. This method proves to be inexpensive, quick, accurate, and easy to perform once the basics are mastered. The authors are interested in testing electrode materials from other sources, especially those that have already been successful in a liquid (electrolytic) environment.
Hansen, T. M.; Cordua, K. S.
2017-12-01
Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on for example, complex geostatistical models and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a numerical complex evaluation of the forward problem, with a trained neural network that can be evaluated very fast. This will introduce a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution), than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
Calibration efficiency of an HPGe detector in the 50-1800 keV energy range
International Nuclear Information System (INIS)
Venturini, Luzia
1996-01-01
This paper describes the efficiency of an HPGe detector in the 50 - 1800 keV energy range, for two geometries for water measurements: a Marinelli beaker (850 ml) and a polyethylene flask (100 ml). The experimental data were corrected for the summing effect and fitted to a continuous, differentiable, energy-dependent function given by ln(ε) = b0 + b1·ln(E/E0) + β·ln²(E/E0), where β = b2 if E > E0 and β = a2 if E ≤ E0; ε is the full-absorption-peak efficiency, E is the gamma-ray energy, and {b0, b1, b2, a2, E0} is the parameter set to be fitted. (author)
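The fitted efficiency function is piecewise in β and can be evaluated as below; the coefficient values used in the check are invented placeholders, since the paper's fitted parameters are not reproduced here. Note that both branches coincide at E = E0 (where ln(E/E0) = 0), so the curve is continuous and differentiable there by construction.

```python
import numpy as np

def peak_efficiency(E, b0, b1, b2, a2, E0):
    """Piecewise log-quadratic efficiency curve:
    ln(eps) = b0 + b1*ln(E/E0) + beta*ln(E/E0)**2,
    with beta = b2 for E > E0 and beta = a2 for E <= E0.
    Parameter values must be fitted per counting geometry."""
    x = np.log(np.asarray(E, dtype=float) / E0)
    beta = np.where(E > E0, b2, a2)
    return np.exp(b0 + b1 * x + beta * x * x)
```
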
Energy Technology Data Exchange (ETDEWEB)
Pai, S [iCAD Inc., Los Gatos, CA (United States)
2015-06-15
Purpose: The objective is to improve the efficiency and efficacy of Xoft™ Axxent™ electronic brachytherapy (EBT) calibration of the source & surface applicator using the AAPM TG-61 formalism. Methods: The current method of Xoft EBT source calibration involves determination of the absolute dose rate of the source in each of the four conical surface applicators using in-air chamber measurements and TG-61 formalism. We propose a simplified TG-61 calibration methodology involving initial characterization of the surface cone applicators. This is accomplished by calibrating dose rates for all 4 surface applicator sets (for 10 sources), which establishes the “applicator output ratios” with respect to the selected reference applicator (20 mm applicator). After this initial characterization, Xoft™ Axxent™ source TG-61 calibration is carried out only in the reference applicator. Using the established applicator output ratios, dose rates for the other applicators are calculated. Results: 200 sources & 8 surface applicator sets were calibrated cumulatively using a Standard Imaging A20 ion chamber in accordance with manufacturer-recommended protocols. Dose rates of the 10, 20, 35 & 50 mm applicators were normalized to the reference (20 mm) applicator. The data in Figure 1 indicate that the normalized dose rate variation for each applicator for all 200 sources is better than ±3%. The average output ratios are 1.11, 1.02 and 0.49 for the 10 mm, 35 mm and 50 mm applicators, respectively, which are in good agreement with the manufacturer’s published output ratios of 1.13, 1.02 and 0.49. Conclusion: Our measurements successfully demonstrate the accuracy of a new calibration method using a single surface applicator for Xoft EBT sources and deriving the dose rates of the other applicators. The accuracy of the calibration is improved as this method minimizes the source position variation inside the applicator during individual source calibrations. The new method significantly reduces the calibration time to less
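With the output ratios established, routine calibration reduces to one reference-applicator measurement and a multiplication; a trivial sketch using the average ratios quoted above (the reference dose rate value is a placeholder):

```python
def applicator_dose_rate(ref_dose_rate, output_ratio):
    """Dose rate for a surface applicator, derived from the TG-61 calibration
    of the reference (20 mm) applicator and the pre-established output ratio."""
    return ref_dose_rate * output_ratio

# Average output ratios from the abstract, normalized to the 20 mm applicator
output_ratios = {"10mm": 1.11, "20mm": 1.00, "35mm": 1.02, "50mm": 0.49}
```
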
Tang, Xiao-Bin; Meng, Jia; Wang, Peng; Cao, Ye; Huang, Xi; Wen, Liang-Sheng; Chen, Da
2016-04-01
A small-sized UAV (NH-UAV) airborne system with two gamma spectrometers (LaBr3 detector and HPGe detector) was developed to monitor activity concentration in serious nuclear accidents, such as the Fukushima nuclear accident. The efficiency calibration and determination of minimum detectable activity concentration (MDAC) of the specific system were studied by MC simulations at different flight altitudes, different horizontal distances from the detection position to the source term center and different source term sizes. Both air and ground radiation were considered in the models. The results obtained may provide instructive suggestions for in-situ radioactivity measurements of NH-UAV. Copyright © 2016 Elsevier Ltd. All rights reserved.
A Calibrated Lumped Element Model for the Prediction of PSJ Actuator Efficiency Performance
Directory of Open Access Journals (Sweden)
Matteo Chiatto
2018-03-01
Full Text Available Among the various active flow control techniques, Plasma Synthetic Jet (PSJ) actuators, or Sparkjets, represent a very promising technology, especially because of their high velocities and short response times. A practical tool, employed for design and manufacturing purposes, consists of the definition of a low-order model, the lumped element model (LEM), which is able to predict the dynamic response of the actuator in a relatively quick way and with reasonable fidelity and accuracy. After a brief description of an innovative lumped model, this work addresses the experimental investigation of a home-designed and manufactured PSJ actuator, for different frequencies and energy discharges. Particular attention has been paid to the design of the power supply system. A specific home-made Pitot tube has allowed the detection of velocity profiles along the jet radial direction, for various energy discharges, as well as the tuning of the lumped model with experimental data, where the total device efficiency has been assumed as a fitting parameter. The best-fitting value not only contains information on the actual device efficiency, but also includes some modeling and experimental uncertainties, related in part to the measurement technique used.
Coehoorn, R.; van Eersel, H.; Bobbert, P.A.; Janssen, R.A.J.
2015-01-01
The performance of Organic Light Emitting Diodes (OLEDs) is determined by a complex interplay of the charge transport and excitonic processes in the active layer stack. We have developed a three-dimensional kinetic Monte Carlo (kMC) OLED simulation method which includes all these processes in an
International Nuclear Information System (INIS)
Abdi, M. R.; Mostajaboddavati, M.; Hassanzadeh, S.; Faghihian, H.; Rezaee, Kh.; Kamali, M.
2006-01-01
A nonlinear function, in combination with a mixed-activity calibration method, is applied to fit the experimental peak efficiency of HPGe spectrometers in the 59-2614 keV energy range. The preparation of Marinelli beaker standards of mixed gamma emitters and an RG set at secular equilibrium with its daughter radionuclides was studied. Standards were prepared by mixing known amounts of 133Ba, 241Am, 152Eu, 207Bi, 24Na, Al2O3 powder and soil. The validity of these standards was checked by comparison with the certified standard reference materials RG set and IAEA-Soil-6. Self-absorption was measured for the activity calculation of the gamma-ray lines of the 238U daughter series, the 232Th series, 137Cs and 40K in soil samples. Self-absorption in the sample depends on a number of factors, including sample composition, density, sample size and gamma-ray energy. Seven Marinelli beaker standards were prepared with different degrees of compaction, with bulk densities (ρ) of 1.000 to 1.600 g cm−3. The detection efficiency versus density was obtained and the equation for the self-absorption correction factors was calculated for soil samples.
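The density-dependent correction described above can be sketched as follows. The efficiency-versus-density points are hypothetical stand-ins for the measured curve, and a linear trend is assumed purely for illustration.

```python
import numpy as np

# Hypothetical peak-efficiency vs bulk-density points for one gamma line
# (the study measured such curves for densities of 1.000-1.600 g/cm^3).
rho = np.array([1.0, 1.2, 1.4, 1.6])              # g/cm^3
eff = np.array([0.0310, 0.0298, 0.0287, 0.0276])  # counts per emitted photon

a, b = np.polyfit(rho, eff, 1)  # linear fit: eff(rho) = a*rho + b

def self_absorption_correction(sample_rho, calib_rho=1.0):
    """Factor applied to a sample's apparent activity to refer it back
    to the density used for the efficiency calibration."""
    return (a * calib_rho + b) / (a * sample_rho + b)
```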
International Nuclear Information System (INIS)
Kim, Jong Woo; Woo, Myeong Hyeon; Kim, Jae Hyun; Kim, Do Hyun; Shin, Chang Ho; Kim, Jong Kyung
2017-01-01
In this study, a hybrid Monte Carlo/deterministic method for radiation transport analysis in a global system is explained. The FW-CADIS methodology constructs the weight-window parameters and is useful for most global MC calculations. However, due to the assumption that a particle is scored at a tally, fewer particles are transported to the periphery of the mesh tallies. To compensate for this space dependency, we modified the module in the ADVANTG code to add the proposed method. We solved a simple test problem for comparison with the result from the FW-CADIS methodology, and it was confirmed that a uniform statistical error was secured as intended. In the future, more practical problems will be added. It might be useful to perform radiation transport analysis using the hybrid Monte Carlo/deterministic method in global transport problems.
Energy Technology Data Exchange (ETDEWEB)
Trzcinski, A.; Zwieglinski, B. [Soltan Inst. for Nuclear Studies, Warsaw (Poland); Lynen, U. [Gesellschaft fuer Schwerionenforschung mbH, Darmstadt (Germany); Pochodzalla, J. [Max-Planck-Institut fuer Kernphysik, Heidelberg (Germany)
1998-10-01
This paper reports on a Monte-Carlo program, MSX, developed to evaluate the performance of large-volume, Gd-loaded liquid scintillation detectors used in neutron multiplicity measurements. The results of simulations are presented for the detector intended to count neutrons emitted by the excited target residue in coincidence with the charged products of the projectile fragmentation following relativistic heavy-ion collisions. The latter products could be detected with the ALADIN magnetic spectrometer at GSI-Darmstadt. (orig.) 61 refs.
Crop physiology calibration in the CLM
Directory of Open Access Journals (Sweden)
I. Bilionis
2015-04-01
scalable and adaptive scheme based on sequential Monte Carlo (SMC). The model showed significant improvement in crop productivity with the newly calibrated parameters. We demonstrate that the calibrated parameters are applicable across alternative years and different sites.
Directory of Open Access Journals (Sweden)
É. Gaborit
2017-09-01
This work explores the potential of the distributed GEM-Hydro runoff modeling platform, developed at Environment and Climate Change Canada (ECCC) over the last decade. More precisely, the aim is to develop a robust implementation methodology to perform reliable streamflow simulations with a distributed model over large and partly ungauged basins, in an efficient manner. The latest version of GEM-Hydro combines the SVS (Soil, Vegetation and Snow) land-surface scheme and the WATROUTE routing scheme. SVS has never been evaluated from a hydrological point of view, which is done here for all major rivers flowing into Lake Ontario. Two established hydrological models are compared with GEM-Hydro, namely MESH and WATFLOOD, which share the same routing scheme (WATROUTE) but rely on different land-surface schemes. All models are calibrated using the same meteorological forcings, objective function, calibration algorithm, and basin delineation. GEM-Hydro is shown to be competitive with MESH and WATFLOOD: the NSE √ (Nash–Sutcliffe criterion computed on the square root of the flows) is for example equal to 0.83 for MESH and GEM-Hydro in validation on the Moira River basin, and to 0.68 for WATFLOOD. A computationally efficient strategy is proposed to calibrate SVS: a simple unit hydrograph is used for routing instead of WATROUTE. Global and local calibration strategies are compared in order to estimate runoff for ungauged portions of the Lake Ontario basin. Overall, streamflow predictions obtained using a global calibration strategy, in which a single parameter set is identified for the whole basin of Lake Ontario, show accuracy comparable to the predictions based on local calibration: the average NSE √ in validation and over seven subbasins is 0.73 and 0.61, respectively for local and global calibrations. Hence, global calibration provides spatially consistent parameter values, robust performance at gauged locations, and reduces the
Gaborit, Étienne; Fortin, Vincent; Xu, Xiaoyong; Seglenieks, Frank; Tolson, Bryan; Fry, Lauren M.; Hunter, Tim; Anctil, François; Gronewold, Andrew D.
2017-09-01
This work explores the potential of the distributed GEM-Hydro runoff modeling platform, developed at Environment and Climate Change Canada (ECCC) over the last decade. More precisely, the aim is to develop a robust implementation methodology to perform reliable streamflow simulations with a distributed model over large and partly ungauged basins, in an efficient manner. The latest version of GEM-Hydro combines the SVS (Soil, Vegetation and Snow) land-surface scheme and the WATROUTE routing scheme. SVS has never been evaluated from a hydrological point of view, which is done here for all major rivers flowing into Lake Ontario. Two established hydrological models are compared with GEM-Hydro, namely MESH and WATFLOOD, which share the same routing scheme (WATROUTE) but rely on different land-surface schemes. All models are calibrated using the same meteorological forcings, objective function, calibration algorithm, and basin delineation. GEM-Hydro is shown to be competitive with MESH and WATFLOOD: the NSE √ (Nash-Sutcliffe criterion computed on the square root of the flows) is for example equal to 0.83 for MESH and GEM-Hydro in validation on the Moira River basin, and to 0.68 for WATFLOOD. A computationally efficient strategy is proposed to calibrate SVS: a simple unit hydrograph is used for routing instead of WATROUTE. Global and local calibration strategies are compared in order to estimate runoff for ungauged portions of the Lake Ontario basin. Overall, streamflow predictions obtained using a global calibration strategy, in which a single parameter set is identified for the whole basin of Lake Ontario, show accuracy comparable to the predictions based on local calibration: the average NSE √ in validation and over seven subbasins is 0.73 and 0.61, respectively for local and global calibrations. Hence, global calibration provides spatially consistent parameter values, robust performance at gauged locations, and reduces the complexity and computation burden of the
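The NSE√ metric quoted above is the standard Nash-Sutcliffe efficiency applied to square-root-transformed flows, which reduces the dominance of high-flow events. A minimal sketch, with made-up flow values for the example:

```python
import numpy as np

def nse_sqrt(observed, simulated):
    """Nash-Sutcliffe efficiency on the square roots of the flows
    (1.0 = perfect agreement; values can be negative for poor fits)."""
    o = np.sqrt(np.asarray(observed, dtype=float))
    s = np.sqrt(np.asarray(simulated, dtype=float))
    return 1.0 - np.sum((o - s) ** 2) / np.sum((o - o.mean()) ** 2)

# Example with hypothetical daily flows (m^3/s)
obs = [12.0, 30.0, 55.0, 41.0, 18.0]
sim = [10.0, 33.0, 50.0, 44.0, 20.0]
score = nse_sqrt(obs, sim)
```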
Directory of Open Access Journals (Sweden)
M. Gålfalk
2018-03-01
The calibration and validation of remote sensing land cover products are highly dependent on accurate field reference data, which are costly and practically challenging to collect. We describe an optical method for collection of field reference data that is a fast, cost-efficient, and robust alternative to field surveys and UAV imaging. A lightweight, waterproof, remote-controlled RGB camera (GoPro HERO4 Silver, GoPro Inc.) was used to take wide-angle images from 3.1 to 4.5 m in altitude using an extendable monopod, as well as representative near-ground (< 1 m) images to identify spectral and structural features that correspond to various land covers in present lighting conditions. A semi-automatic classification was made based on six surface types (graminoids, water, shrubs, dry moss, wet moss, and rock). The method enables collection of detailed field reference data, which is critical in many remote sensing applications, such as satellite-based wetland mapping. The method uses common non-expensive equipment, does not require special skills or training, and is facilitated by a step-by-step manual that is included in the Supplement. Over time a global ground cover database can be built that can be used as reference data for studies of non-forested wetlands from satellites such as Sentinel 1 and 2 (10 m pixel size).
Gålfalk, Magnus; Karlson, Martin; Crill, Patrick; Bousquet, Philippe; Bastviken, David
2018-03-01
The calibration and validation of remote sensing land cover products are highly dependent on accurate field reference data, which are costly and practically challenging to collect. We describe an optical method for collection of field reference data that is a fast, cost-efficient, and robust alternative to field surveys and UAV imaging. A lightweight, waterproof, remote-controlled RGB camera (GoPro HERO4 Silver, GoPro Inc.) was used to take wide-angle images from 3.1 to 4.5 m in altitude using an extendable monopod, as well as representative near-ground (< 1 m) images to identify spectral and structural features that correspond to various land covers in present lighting conditions. A semi-automatic classification was made based on six surface types (graminoids, water, shrubs, dry moss, wet moss, and rock). The method enables collection of detailed field reference data, which is critical in many remote sensing applications, such as satellite-based wetland mapping. The method uses common non-expensive equipment, does not require special skills or training, and is facilitated by a step-by-step manual that is included in the Supplement. Over time a global ground cover database can be built that can be used as reference data for studies of non-forested wetlands from satellites such as Sentinel 1 and 2 (10 m pixel size).
Energy Technology Data Exchange (ETDEWEB)
Courtine, Fabien [Laboratoire de Physique Corpusculaire, Universite Blaise Pascal - CNRS/IN2P3, 63000 Aubiere Cedex (France)
2007-03-15
This thesis was carried out in the context of thermoluminescence dating. This method requires laboratory measurements of the natural radioactivity. For that purpose, we have been using a germanium spectrometer. To refine its calibration, we modelled the spectrometer using a Monte-Carlo computer code, Geant4. We developed a geometrical model which takes into account the presence of inactive zones and zones of poor charge collection within the germanium crystal. The parameters of the model were adjusted by comparison with experimental results obtained with a {sup 137}Cs source. It appeared that the shape of the inactive zones is less simple than presented in the specialized literature. The model was then extended to the case of a more complex source, with cascade effects and angular correlations between photons: {sup 60}Co. Lastly, applied to extended sources, it gave correct results and allowed us to validate the simulation of matrix effects. (author)
Abuhaimed, Abdullah; J Martin, Colin; Sankaralingam, Marimuthu; J Gentle, David; McJury, Mark
2014-11-07
The IEC has introduced a practical approach to overcome shortcomings of the CTDI100 for measurements on wide beams employed for cone beam (CBCT) scans. This study evaluated the efficiency of this approach (CTDIIEC) for different arrangements using Monte Carlo simulation techniques, and compared CTDIIEC to the efficiency of CTDI100 for CBCT. Monte Carlo EGSnrc/BEAMnrc and EGSnrc/DOSXYZnrc codes were used to simulate the kV imaging system mounted on a Varian TrueBeam linear accelerator. The Monte Carlo model was benchmarked against experimental measurements and good agreement was shown. Standard PMMA head and body phantoms with lengths 150, 600, and 900 mm were simulated. Beam widths studied ranged from 20 to 300 mm, and four scanning protocols using two acquisition modes were utilized. The efficiency values were calculated at the centre (εc) and periphery (εp) of the phantoms and for the weighted CTDI (εw). The efficiency values for CTDI100 were approximately constant for beam widths 20-40 mm, where εc(CTDI100), εp(CTDI100), and εw(CTDI100) were 74.7 ± 0.6%, 84.6 ± 0.3%, and 80.9 ± 0.4%, for the head phantom and 59.7 ± 0.3%, 82.1 ± 0.3%, and 74.9 ± 0.3%, for the body phantom, respectively. When beam width increased beyond 40 mm, ε(CTDI100) values fell steadily reaching ~30% at a beam width of 300 mm. In contrast, the efficiency of the CTDIIEC was approximately constant over all beam widths, demonstrating its suitability for assessment of CBCT. εc(CTDIIEC), εp(CTDIIEC), and εw(CTDIIEC) were 76.1 ± 0.9%, 85.9 ± 1.0%, and 82.2 ± 0.9% for the head phantom and 60.6 ± 0.7%, 82.8 ± 0.8%, and 75.8 ± 0.7%, for the body phantom, respectively, within 2% of ε(CTDI100) values for narrower beam widths. CTDI100,w and CTDIIEC,w underestimate CTDI∞,w by ~55% and ~18% for the head phantom and by ~56% and ~24% for the body phantom, respectively, using a clinical beam width of 198 mm. The
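The weighted quantities above combine centre and periphery values in the usual 1/3 : 2/3 proportion. A small sketch with illustrative inputs (not the paper's raw dose data); note that weighting the component efficiencies directly gives about 81.3%, close to but not identical to the reported εw of 80.9%, since the exact value depends on the underlying dose values:

```python
def weighted_ctdi(centre, periphery):
    """Weighted CTDI: one third centre plus two thirds periphery."""
    return centre / 3.0 + 2.0 * periphery / 3.0

def efficiency_percent(ctdi_finite, ctdi_infinite):
    """Efficiency of a finite-integration-length CTDI relative to the
    converged infinite-phantom value, in percent."""
    return 100.0 * ctdi_finite / ctdi_infinite

# Illustrative: directly weighting the narrow-beam head-phantom component
# efficiencies from the abstract (74.7% centre, 84.6% periphery)
eps_w_approx = weighted_ctdi(74.7, 84.6)
```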
Directory of Open Access Journals (Sweden)
Kowal Robert
2016-12-01
A simple linear regression model is one of the pillars of classic econometrics, and multiple areas of research function within its scope. One of the many fundamental questions in the model concerns proving the efficiency of the most commonly used OLS estimators and examining their properties. In the literature of the subject one can find approaches to this question and certain solutions in that regard; methodically, they are borrowed from the multiple regression model or from a boundary partial model. Not everything here, however, is complete and consistent. In this paper a completely new scheme is proposed, based on applying the Cauchy-Schwarz inequality to a constraint aggregated from appropriately calibrated secondary unbiasedness constraints; choosing the appropriate calibrator for each variable then leads directly to showing this property. The choice of such a calibrator is a separate issue. These deliberations, on account of their volume and the kinds of calibration involved, were divided into a few parts. In this part, the efficiency of the OLS estimators is proven in a mixed scheme of calibration by averages, that is, a preliminary scheme, within the most basic framework of the proposed methodology. Within this framework, the future outlines and general premises constituting the basis of further generalizations are laid out.
The ATLAS collaboration
2015-01-01
The reconstruction algorithm, energy calibration, and identification methods for hadronically decaying tau leptons in ATLAS used at the start of Run-2 of the Large Hadron Collider are described in this note. All algorithms have been optimised for Run-2 conditions. The energy calibration relies on Monte Carlo samples with hadronic tau lepton decays, and applies multiplicative factors based on the pT of the reconstructed tau lepton to the energy measurements in the calorimeters. The identification employs boosted decision trees. Systematic uncertainties on the energy scale, reconstruction efficiency and identification efficiency of hadronically decaying tau leptons are determined using Monte Carlo samples that simulate varying conditions.
International Nuclear Information System (INIS)
Anil Kumar, G.; Mazumdar, I.; Gothe, D.A.
2009-01-01
Efficiency calibration and coincidence summing correction have been performed for two large arrays of NaI(Tl) detectors in two different configurations. They are a compact array of 32 conical detectors of pentagonal and hexagonal shapes in soccer-ball geometry, and an array of 14 straight hexagonal NaI(Tl) detectors in castle geometry. Both of these arrays provide a large solid angle of detection, leading to considerable coincidence summing of gamma rays. The present work aims to understand the effect of coincidence summing of gamma rays while determining the energy dependence of efficiencies of these two arrays. We have carried out extensive GEANT4 simulations with radio-nuclides that decay with a two-step cascade, considering both arrays in their realistic geometries. The absolute efficiencies have been simulated for gamma energies from 700 to 2800 keV using four different double-photon emitters, namely, 60Co, 46Sc, 94Nb and 24Na. The efficiencies so obtained have been corrected for coincidence summing using the method proposed by Vidmar et al. The simulations have also been carried out for the same energies assuming mono-energetic point sources, for comparison. Experimental measurements have also been carried out using calibrated point sources of 137Cs and 60Co. The simulated and the experimental results are found to be in good agreement. This demonstrates the reliability of the correction method for efficiency calibration of two large arrays in very different configurations.
Energy Technology Data Exchange (ETDEWEB)
Anil Kumar, G., E-mail: anilg@tifr.res.i [Department of Nuclear and Atomic Physics, Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai 400005 (India); Mazumdar, I.; Gothe, D.A. [Department of Nuclear and Atomic Physics, Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai 400005 (India)
2009-11-21
Efficiency calibration and coincidence summing correction have been performed for two large arrays of NaI(Tl) detectors in two different configurations. They are a compact array of 32 conical detectors of pentagonal and hexagonal shapes in soccer-ball geometry, and an array of 14 straight hexagonal NaI(Tl) detectors in castle geometry. Both of these arrays provide a large solid angle of detection, leading to considerable coincidence summing of gamma rays. The present work aims to understand the effect of coincidence summing of gamma rays while determining the energy dependence of efficiencies of these two arrays. We have carried out extensive GEANT4 simulations with radio-nuclides that decay with a two-step cascade, considering both arrays in their realistic geometries. The absolute efficiencies have been simulated for gamma energies from 700 to 2800 keV using four different double-photon emitters, namely, {sup 60}Co, {sup 46}Sc, {sup 94}Nb and {sup 24}Na. The efficiencies so obtained have been corrected for coincidence summing using the method proposed by Vidmar et al. The simulations have also been carried out for the same energies assuming mono-energetic point sources, for comparison. Experimental measurements have also been carried out using calibrated point sources of {sup 137}Cs and {sup 60}Co. The simulated and the experimental results are found to be in good agreement. This demonstrates the reliability of the correction method for efficiency calibration of two large arrays in very different configurations.
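For a two-step cascade, the summing-out loss on the first gamma's full-energy peak has a simple first-order form when angular correlations are neglected. The numbers below are illustrative, not the simulated efficiencies of the arrays:

```python
def true_peak_efficiency(apparent_peak_eff, total_eff_coincident):
    """First-order summing-out correction for a two-step cascade: a
    full-energy count for gamma-1 survives only if the coincident
    gamma-2 deposits nothing, so
        eps_apparent = eps_true * (1 - eps_total_2).
    Invert to recover the true peak efficiency (angular correlations
    are neglected in this sketch)."""
    return apparent_peak_eff / (1.0 - total_eff_coincident)

# Illustrative: 8% apparent peak efficiency with a 20% total efficiency
# for the coincident gamma implies a 10% true peak efficiency.
eps_true = true_peak_efficiency(0.08, 0.20)
```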
An Efficient Monte Carlo Approach to Compute PageRank for Large Graphs on a Single PC
Directory of Open Access Journals (Sweden)
Sonobe Tomohiro
2016-03-01
This paper describes a novel Monte Carlo based random walk to compute PageRanks of nodes in a large graph on a single PC. The target graphs of this paper are ones whose size is larger than the physical memory. In such an environment, memory management is a difficult task for simulating the random walk among the nodes. We propose a novel method that partitions the graph into subgraphs in order to make them fit into the physical memory, and conducts the random walk for each subgraph. By evaluating the walks lazily, we can conduct the walks only in a subgraph and approximate the random walk by rotating the subgraphs. In computational experiments, the proposed method exhibits good performance for existing large graphs with several passes of the graph data.
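A minimal in-memory version of the random-walk PageRank estimator (without the paper's subgraph partitioning and lazy evaluation, which are its actual contribution) can be sketched as:

```python
import random

def monte_carlo_pagerank(graph, walks_per_node=1000, damping=0.85, seed=0):
    """Estimate PageRank by random walks: from each node start several
    walks; at every step terminate with probability 1 - damping,
    otherwise move to a uniformly chosen out-neighbour (dangling nodes
    terminate the walk).  Visit frequencies estimate the PageRank."""
    rng = random.Random(seed)
    visits = dict.fromkeys(graph, 0)
    total = 0
    for start in graph:
        for _ in range(walks_per_node):
            node = start
            while True:
                visits[node] += 1
                total += 1
                if not graph[node] or rng.random() > damping:
                    break
                node = rng.choice(graph[node])
    return {v: c / total for v, c in visits.items()}

# Example: a 3-cycle, whose PageRank is uniform (1/3 each) by symmetry
pr = monte_carlo_pagerank({"a": ["b"], "b": ["c"], "c": ["a"]})
```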
International Nuclear Information System (INIS)
Hubert-Tremblay, Vincent; Archambault, Louis; Tubic, Dragan; Roy, Rene; Beaulieu, Luc
2006-01-01
The purpose of the present study is to introduce a compression algorithm for the CT (computed tomography) data used in Monte Carlo simulations. Performing simulations on CT data implies large computational costs as well as large memory requirements, since the number of voxels in such data typically reaches into the hundreds of millions. CT data, however, contain homogeneous regions which can be regrouped to form larger voxels without affecting the simulation's accuracy. Based on this property, we propose an octree-based compression algorithm: in homogeneous regions the algorithm replaces groups of voxels with a smaller number of larger voxels. This reduces the number of voxels while keeping the critical high-density-gradient areas. Results obtained using the present algorithm on both phantom and clinical data show that compression rates of up to 75% are possible without losing the dosimetric accuracy of the simulation.
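The merging idea can be sketched as a recursive octree pass over a cubic volume whose side is a power of two. This is an illustration of the principle, not the authors' implementation:

```python
import numpy as np

def octree_compress(vol, tol=0.0):
    """Merge homogeneous octants of a cubic volume (side = power of two)
    into single super-voxels.  Returns (origin, size, value) leaves;
    a block is kept whole when its max-min spread is within `tol`."""
    leaves = []

    def recurse(x, y, z, n):
        block = vol[x:x + n, y:y + n, z:z + n]
        if n == 1 or float(block.max() - block.min()) <= tol:
            leaves.append(((x, y, z), n, float(block.mean())))
            return
        h = n // 2
        for dx in (0, h):
            for dy in (0, h):
                for dz in (0, h):
                    recurse(x + dx, y + dy, z + dz, h)

    recurse(0, 0, 0, vol.shape[0])
    return leaves

# A uniform 4x4x4 volume collapses to a single leaf; adding one distinct
# voxel forces subdivision only around the density gradient.
vol = np.zeros((4, 4, 4))
uniform_leaves = octree_compress(vol)   # one leaf covering the whole cube
vol[0, 0, 0] = 1.0
leaves = octree_compress(vol)
```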
Hubert, S.; Boubault, F.
2018-03-01
In this article, we present the first X-ray calibration performed over the 0.1-1.5 keV spectral range by means of a soft X-ray Manson source and the monochromator SYMPAX. This monochromator, based on a classical Rowland geometry, has the novel capability of carrying two detectors simultaneously and moving them under vacuum in front of the exit slit of the monochromatizing stage. This provides the great advantage of performing radiometric measurements of the monochromatic X-ray photon flux with one reference detector while calibrating another X-ray detector. To achieve this, at least one secondary standard must be operated with SYMPAX. This paper thereby presents an efficiency-transfer experiment between a secondary-standard silicon drift detector (SDD), previously calibrated at the BESSY II synchrotron facility, and another one (the "unknown" SDD), intended for permanent use with SYMPAX. The associated calibration process is described, as well as the corresponding results. Comparison with calibrated measurements performed at the Physikalisch-Technische Bundesanstalt (PTB) Radiometric Laboratory shows very good agreement between the secondary standard and the unknown SDD.
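At each monochromatic energy point, the transfer step described above reduces to scaling the reference detector's known efficiency by the ratio of count rates, assuming both detectors see the same incident flux. The numbers below are hypothetical:

```python
def transferred_efficiency(eff_reference, rate_unknown, rate_reference):
    """Efficiency transfer at one photon energy: if both detectors view
    the same monochromatic flux, the unknown detector's efficiency is
    the reference efficiency scaled by the count-rate ratio."""
    return eff_reference * rate_unknown / rate_reference

# Illustrative numbers: reference SDD efficiency 0.92 at some energy,
# unknown SDD counting 5% slower under the same flux.
eff_unknown = transferred_efficiency(0.92, 9500.0, 10000.0)
```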
Monte Carlo principles and applications
Energy Technology Data Exchange (ETDEWEB)
Raeside, D E [Oklahoma Univ., Oklahoma City (USA). Health Sciences Center
1976-03-01
The principles underlying the use of Monte Carlo methods are explained, for readers who may not be familiar with the approach. The generation of random numbers is discussed, and the connection between Monte Carlo methods and random numbers is indicated. Outlines of two well established Monte Carlo sampling techniques are given, together with examples illustrating their use. The general techniques for improving the efficiency of Monte Carlo calculations are considered. The literature relevant to the applications of Monte Carlo calculations in medical physics is reviewed.
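As a concrete instance of the principles reviewed above, crude Monte Carlo integration averages a function at uniform random points; the statistical error shrinks as 1/sqrt(N). A minimal sketch:

```python
import math
import random

def mc_integrate(f, n, rng):
    """Crude Monte Carlo estimate of the integral of f over [0, 1]:
    the sample mean of f at n uniform random points."""
    return sum(f(rng.random()) for _ in range(n)) / n

rng = random.Random(42)
estimate = mc_integrate(math.exp, 10_000, rng)  # true value: e - 1 ~ 1.718
```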
Energy Technology Data Exchange (ETDEWEB)
Sharma, D; Badano, A [Division of Imaging, Diagnostics and Software Reliability, OSEL/CDRH, Food & Drug Administration, MD (United States); Sempau, J [Technical University of Catalonia, Barcelona (Spain)
2016-06-15
Purpose: Variance reduction techniques (VRTs) are employed in Monte Carlo simulations to obtain estimates with reduced statistical uncertainty for a given simulation time. In this work, we study the bias and efficiency of a VRT for estimating the response of imaging detectors. Methods: We implemented Directed Sampling (DS), preferentially directing a fraction of emitted optical photons directly towards the detector by altering the isotropic model. The weight of each optical photon is appropriately modified to maintain simulation estimates unbiased. We use a Monte Carlo tool called fastDETECT2 (part of the hybridMANTIS open-source package) for optical transport, modified for VRT. The weight of each photon is calculated as the ratio of original probability (no VRT) and the new probability for a particular direction. For our analysis of bias and efficiency, we use pulse height spectra, point response functions, and Swank factors. We obtain results for a variety of cases including analog (no VRT, isotropic distribution), and DS with 0.2 and 0.8 optical photons directed towards the sensor plane. We used 10,000, 25-keV primaries. Results: The Swank factor for all cases in our simplified model converged fast (within the first 100 primaries) to a stable value of 0.9. The root mean square error per pixel for DS VRT for the point response function between analog and VRT cases was approximately 5e-4. Conclusion: Our preliminary results suggest that DS VRT does not affect the estimate of the mean for the Swank factor. Our findings indicate that it may be possible to design VRTs for imaging detector simulations to increase computational efficiency without introducing bias.
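The weighting rule described above (weight = analog probability over biased probability) can be sketched for a simplified detector cone; this is an illustration of the directed-sampling idea under stated assumptions, not the fastDETECT2 implementation. The final sum checks unbiasedness: the weighted fraction of photons entering the cone reproduces the analog solid-angle fraction.

```python
import math
import random

def sample_direction(f, cone_cos, rng):
    """Directed sampling sketch: with probability f emit uniformly inside
    the detector cone (cos(theta) > cone_cos), otherwise isotropically
    over 4*pi.  Returns (cos_theta, weight) with
    weight = analog pdf / biased pdf, keeping weighted tallies unbiased."""
    omega = 2.0 * math.pi * (1.0 - cone_cos)   # cone solid angle
    p_iso = 1.0 / (4.0 * math.pi)              # analog (isotropic) pdf
    pdf_in = f / omega + (1.0 - f) * p_iso     # biased pdf inside the cone
    pdf_out = (1.0 - f) * p_iso                # biased pdf outside the cone
    if rng.random() < f:
        mu = cone_cos + rng.random() * (1.0 - cone_cos)  # uniform in cone
        return mu, p_iso / pdf_in
    mu = 2.0 * rng.random() - 1.0                        # isotropic
    return mu, p_iso / (pdf_in if mu > cone_cos else pdf_out)

# Unbiasedness check: weighted hit fraction ~ (1 - cone_cos) / 2 = 0.05
rng = random.Random(1)
est = sum(w for mu, w in (sample_direction(0.8, 0.9, rng)
                          for _ in range(20000)) if mu > 0.9) / 20000
```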
International Nuclear Information System (INIS)
Giles, J.R.
1996-05-01
A Gamma Spectroscopy Logging System (GSLS) has been developed to study sub-surface radionuclide contamination. Absolute efficiency calibration of the GSLS was performed using simple cylindrical borehole geometry. The calibration source incorporated naturally occurring radioactive material (NORM) that emitted photons ranging from 186-keV to 2,614-keV. More complex borehole geometries were modeled using commercially available shielding software. A linear relationship was found between increasing source thickness and relative photon fluence rates at the detector. Examination of varying porosity and moisture content showed that as porosity increases, relative photon fluence rates increase linearly for all energies. Attenuation effects due to iron, water, PVC, and concrete cylindrical shields were found to agree with previous studies. Regression analyses produced energy-dependent equations for efficiency corrections applicable to spectral gamma-ray well logs collected under non-standard borehole conditions.
Energy Technology Data Exchange (ETDEWEB)
Baudis, Laura; Froborg, Francis; Tarka, Michael; Bruch, Tobias; Ferella, Alfredo [Physik-Institut, Universitaet Zuerich (Switzerland); Collaboration: GERDA-Collaboration
2012-07-01
A system with three identical custom-made units is used for the energy calibration of the GERDA Ge diodes. To perform a calibration the {sup 228}Th sources are lowered from the parking positions at the top of the cryostat. Their positions are measured by two independent modules. One, the incremental encoder, counts the holes in the perforated steel band holding the sources; the other measures the drive shaft's angular position even when not powered. The system can be controlled remotely by a LabVIEW program. The calibration data are analyzed by an iterative calibration algorithm determining the calibration functions for different energy reconstruction algorithms, and the resolution of several peaks in the {sup 228}Th spectrum is determined. A Monte Carlo simulation using the GERDA simulation software MAGE has been performed to determine the background induced by the sources in the parking positions.
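At first order, determining a calibration function from known 228Th-chain lines reduces to a least-squares fit of energy against fitted peak centroid. The line energies below are real 228Th-chain gammas; the channel centroids are hypothetical values for one detector:

```python
import numpy as np

# Prominent 228Th-chain lines (keV) and hypothetical fitted peak
# centroids (ADC channels) for one Ge diode.
energies = np.array([238.6, 583.2, 727.3, 1620.5, 2614.5])
channels = np.array([477.8, 1166.9, 1455.1, 3241.6, 5229.4])

# Linear calibration E(ch) = gain * ch + offset via least squares
gain, offset = np.polyfit(channels, energies, 1)

def energy_kev(channel):
    """Linear energy-calibration function for this detector."""
    return gain * channel + offset
```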
Calibration of Ge gamma-ray spectrometers for complex sample geometries and matrices
Energy Technology Data Exchange (ETDEWEB)
Semkow, T.M., E-mail: thomas.semkow@health.ny.gov [Wadsworth Center, New York State Department of Health, Empire State Plaza, Albany, NY 12201 (United States); Department of Environmental Health Sciences, School of Public Health, University at Albany, State University of New York, Rensselaer, NY 12144 (United States); Bradt, C.J.; Beach, S.E.; Haines, D.K.; Khan, A.J.; Bari, A.; Torres, M.A.; Marrantino, J.C.; Syed, U.-F. [Wadsworth Center, New York State Department of Health, Empire State Plaza, Albany, NY 12201 (United States); Kitto, M.E. [Wadsworth Center, New York State Department of Health, Empire State Plaza, Albany, NY 12201 (United States); Department of Environmental Health Sciences, School of Public Health, University at Albany, State University of New York, Rensselaer, NY 12144 (United States); Hoffman, T.J. [Wadsworth Center, New York State Department of Health, Empire State Plaza, Albany, NY 12201 (United States); Curtis, P. [Kiltel Systems, Inc., Clyde Hill, WA 98004 (United States)
2015-11-01
A comprehensive study of the efficiency calibration and calibration verification of Ge gamma-ray spectrometers was performed using semi-empirical, computational Monte-Carlo (MC), and transfer methods. The aim of this study was to evaluate the accuracy of the quantification of gamma-emitting radionuclides in complex matrices normally encountered in environmental and food samples. A wide range of gamma energies from 59.5 to 1836.0 keV and geometries from a 10-mL jar to 1.4-L Marinelli beaker were studied on four Ge spectrometers with the relative efficiencies between 102% and 140%. Density and coincidence summing corrections were applied. Innovative techniques were developed for the preparation of artificial complex matrices from materials such as acidified water, polystyrene, ethanol, sugar, and sand, resulting in the densities ranging from 0.3655 to 2.164 g cm{sup −3}. They were spiked with gamma activity traceable to international standards and used for calibration verifications. A quantitative method of tuning MC calculations to experiment was developed based on a multidimensional chi-square paraboloid. - Highlights: • Preparation and spiking of traceable complex matrices in extended geometries. • Calibration of Ge gamma spectrometers for complex matrices. • Verification of gamma calibrations. • Comparison of semi-empirical, computational Monte Carlo, and transfer methods of Ge calibration. • Tuning of Monte Carlo calculations using a multidimensional paraboloid.
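The paraboloid-based tuning can be sketched as a least-squares quadratic fit to chi-square values sampled over the parameter space, whose stationary point gives the tuned parameters. This two-parameter illustration with synthetic data shows the principle, not the authors' multidimensional implementation:

```python
import numpy as np

def paraboloid_minimum(points, chi2):
    """Fit chi2(x, y) = a + b*x + c*y + d*x^2 + e*y^2 + f*x*y by least
    squares and return the stationary point (the tuned parameter pair)."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x**2, y**2, x * y])
    a, b, c, d, e, f = np.linalg.lstsq(A, chi2, rcond=None)[0]
    H = np.array([[2 * d, f], [f, 2 * e]])  # Hessian of the fitted surface
    return np.linalg.solve(H, -np.array([b, c]))

# Synthetic check: a paraboloid with its minimum at (1.0, -0.5)
g = np.linspace(-2.0, 2.0, 5)
X, Y = np.meshgrid(g, g)
pts = np.column_stack([X.ravel(), Y.ravel()])
chi2 = 3.0 + (pts[:, 0] - 1.0) ** 2 + 2.0 * (pts[:, 1] + 0.5) ** 2
best = paraboloid_minimum(pts, chi2)
```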
Calibration of Ge gamma-ray spectrometers for complex sample geometries and matrices
International Nuclear Information System (INIS)
Semkow, T.M.; Bradt, C.J.; Beach, S.E.; Haines, D.K.; Khan, A.J.; Bari, A.; Torres, M.A.; Marrantino, J.C.; Syed, U.-F.; Kitto, M.E.; Hoffman, T.J.; Curtis, P.
2015-01-01
A comprehensive study of the efficiency calibration and calibration verification of Ge gamma-ray spectrometers was performed using semi-empirical, computational Monte-Carlo (MC), and transfer methods. The aim of this study was to evaluate the accuracy of the quantification of gamma-emitting radionuclides in complex matrices normally encountered in environmental and food samples. A wide range of gamma energies from 59.5 to 1836.0 keV and geometries from a 10-mL jar to a 1.4-L Marinelli beaker were studied on four Ge spectrometers with relative efficiencies between 102% and 140%. Density and coincidence summing corrections were applied. Innovative techniques were developed for the preparation of artificial complex matrices from materials such as acidified water, polystyrene, ethanol, sugar, and sand, resulting in densities ranging from 0.3655 to 2.164 g cm⁻³. They were spiked with gamma activity traceable to international standards and used for calibration verifications. A quantitative method of tuning MC calculations to experiment was developed based on a multidimensional chi-square paraboloid. - Highlights: • Preparation and spiking of traceable complex matrices in extended geometries. • Calibration of Ge gamma spectrometers for complex matrices. • Verification of gamma calibrations. • Comparison of semi-empirical, computational Monte Carlo, and transfer methods of Ge calibration. • Tuning of Monte Carlo calculations using a multidimensional paraboloid.
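The multidimensional chi-square paraboloid tuning highlighted in the record above can be illustrated with a short sketch (not the authors' code; the function name, parameter grid, and test surface are hypothetical): chi-square is evaluated for a set of trial MC parameter vectors, a quadratic surface is fitted by linear least squares, and its analytic minimum gives the tuned parameters.

```python
import numpy as np

def fit_chi2_paraboloid(params, chi2):
    """Fit chi2(p) ~ c + b.p + 0.5 * p.A.p by linear least squares and
    return the minimizing parameter vector p* = -A^{-1} b.
    params: (n_samples, n_dim) trial parameter sets for the MC model
    chi2:   (n_samples,) chi-square of each trial against experiment"""
    n, d = params.shape
    cols = [np.ones(n)] + [params[:, i] for i in range(d)]
    quad_idx = [(i, j) for i in range(d) for j in range(i, d)]
    cols += [params[:, i] * params[:, j] for i, j in quad_idx]
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), chi2, rcond=None)
    b = coef[1:1 + d]
    A = np.zeros((d, d))
    for c, (i, j) in zip(coef[1 + d:], quad_idx):
        if i == j:
            A[i, i] = 2.0 * c       # coefficient of p_i^2 is A_ii / 2
        else:
            A[i, j] = A[j, i] = c   # coefficient of p_i p_j is A_ij
    return np.linalg.solve(A, -b)

# Synthetic check: chi2 = 1 + (x - 2)^2 + (y + 1)^2, minimum at (2, -1)
g = np.array([(x, y) for x in np.linspace(0, 4, 5) for y in np.linspace(-3, 1, 5)])
c2 = 1 + (g[:, 0] - 2) ** 2 + (g[:, 1] + 1) ** 2
print(fit_chi2_paraboloid(g, c2))  # ~ [2, -1]
```

Because chi-square is exactly quadratic near its minimum, the fitted paraboloid also yields parameter uncertainties from the curvature matrix A.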
Calibration of a compact magnetic proton recoil neutron spectrometer
Energy Technology Data Exchange (ETDEWEB)
Zhang, Jianfu, E-mail: zhang_jianfu@163.com [School of Nuclear Science and Technology, Xi' an Jiaotong University, Xi' an 710049 (China); Northwest Institute of Nuclear Technology, Xi' an 710024 (China); Ouyang, Xiaoping; Zhang, Xianpeng [School of Nuclear Science and Technology, Xi' an Jiaotong University, Xi' an 710049 (China); Northwest Institute of Nuclear Technology, Xi' an 710024 (China); Ruan, Jinlu [Northwest Institute of Nuclear Technology, Xi' an 710024 (China); Zhang, Guoguang [Applied Institute of Nuclear Technology, China Institute of Atomic Energy, Beijing 102413 (China); Zhang, Xiaodong [Northwest Institute of Nuclear Technology, Xi' an 710024 (China); Qiu, Suizheng, E-mail: szqiu@mail.xjtu.edu.cn [School of Nuclear Science and Technology, Xi' an Jiaotong University, Xi' an 710049 (China); Chen, Liang; Liu, Jinliang; Song, Jiwen; Liu, Linyue; Yang, Shaohua [Northwest Institute of Nuclear Technology, Xi' an 710024 (China)
2016-04-21
The magnetic proton recoil (MPR) neutron spectrometer is considered a powerful instrument for measuring deuterium–tritium (DT) neutron spectra, and it is currently used in inertial confinement fusion facilities and large tokamak devices. The energy resolution (ER) and neutron detection efficiency (NDE) are the two most important parameters characterizing a neutron spectrometer. In this work, the ER calibration of the MPR spectrometer was performed using the HI-13 tandem accelerator at the China Institute of Atomic Energy (CIAE), and the NDE calibration was performed using the neutron generator at CIAE. The specific calibration techniques used in this work and the associated accuracies are discussed in detail, and the calibration results are presented along with Monte Carlo simulation results.
Zambri, Brian; Djellouli, Rabia; Laleg-Kirati, Taous-Meriem
2017-01-01
We propose a computational strategy that falls into the category of prediction/correction iterative-type approaches, for calibrating the hemodynamic model introduced by Friston et al. (2000). The proposed method is employed to estimate consecutively the values of the biophysiological system parameters and the external stimulus characteristics of the model. Numerical results corresponding to both synthetic and real functional Magnetic Resonance Imaging (fMRI) measurements for a single stimulus as well as for multiple stimuli are reported to highlight the capability of this computational methodology to fully calibrate the considered hemodynamic model. This article is protected by copyright. All rights reserved.
Zambri, Brian
2017-02-22
We propose a computational strategy that falls into the category of prediction/correction iterative-type approaches, for calibrating the hemodynamic model introduced by Friston et al. (2000). The proposed method is employed to estimate consecutively the values of the biophysiological system parameters and the external stimulus characteristics of the model. Numerical results corresponding to both synthetic and real functional Magnetic Resonance Imaging (fMRI) measurements for a single stimulus as well as for multiple stimuli are reported to highlight the capability of this computational methodology to fully calibrate the considered hemodynamic model. This article is protected by copyright. All rights reserved.
Djellouli, Rabia
2017-01-01
We propose a computational strategy that falls into the category of prediction/correction iterative-type approaches, for calibrating the hemodynamic model introduced by Friston et al. (2000). The proposed method is employed to estimate consecutively the values of the biophysiological system parameters and the external stimulus characteristics of the model. Numerical results corresponding to both synthetic and real functional Magnetic Resonance Imaging (fMRI) measurements for a single stimulus as well as for multiple stimuli are reported to highlight the capability of this computational methodology to fully calibrate the considered hemodynamic model.
Monte Carlo Calculation Of HPGe GEM 15P4 Detector Efficiency In The 59 - 2000 keV Energy Range
International Nuclear Information System (INIS)
Trinh Hoai Vinh; Pham Nguyen Thanh Vinh; Hoang Ba Kim; Vo Xuan An
2011-01-01
A precise model of a 15% relative efficiency p-type HPGe GEM 15P4 detector was created to determine peak efficiency curves using the MCNP5 code developed by Los Alamos National Laboratory. The dependence of peak efficiency on source-to-detector distance was also investigated. The model was validated by comparing experimental and calculated results for six standard point sources: 133 Ba, 109 Cd, 57 Co, 60 Co, 22 Na and 65 Zn. Additional sources simulated were 241 Am, 75 Se, 113 Sn, 85 Sr, 54 Mn, 137 Cs, 56 Co, 94 Nb, 111 In, 139 Ce, 228 Th, 243 Am, 154 Eu, 152 Eu and 88 Y, following the IAEA-TECDOC-619 document. All these sources have the same geometry. The ratios of the experimental efficiencies to the calculated ones are higher than 0.94. This result indicates that our simulation program based on the MCNP5 code is adequate for later studies on this HPGe spectrometer, which is located in the Nuclear Physics Laboratory at HCMC University of Pedagogy. (author)
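The validation criterion quoted above, experimental-to-calculated efficiency ratios above 0.94, amounts to a simple ratio check per source; a minimal sketch with invented, illustrative numbers (not the paper's data):

```python
# Hypothetical peak efficiencies for three of the calibration sources
# (illustrative values only, not the record's measured data)
exp_eff = {"Ba-133": 0.0123, "Co-57": 0.0151, "Cs-137": 0.0064}  # measured
mc_eff = {"Ba-133": 0.0128, "Co-57": 0.0157, "Cs-137": 0.0066}   # MCNP5

ratios = {src: exp_eff[src] / mc_eff[src] for src in exp_eff}
for src, r in sorted(ratios.items()):
    print(f"{src}: exp/calc = {r:.3f}")
assert min(ratios.values()) > 0.94  # acceptance level quoted in the record
```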
International Nuclear Information System (INIS)
Singh, Sarbjit; Agarwal, Chhavi; Ramaswami, A.; Manchanda, V.K.
2007-01-01
Regular monitoring of off-gases released to the environment from a nuclear reactor is mandatory. The gaseous fission products are estimated by gamma-ray spectrometry using an HPGe detector coupled to a multichannel analyser. In view of the lack of available gaseous fission-product standards, an indirect method based on the charcoal absorption technique was developed for the efficiency calibration of the HPGe detector system using 133Ba and 152Eu standards. Known activities of 133Ba and 152Eu were uniformly distributed in a vial containing activated charcoal and counted on the HPGe detector system at liquid nitrogen temperature to determine the gamma-ray efficiency for the charcoal-filled vial. The ratio of the gamma-ray efficiencies of off-gas in the normal vial and in the charcoal-filled vial at liquid nitrogen temperature is then used to determine the gamma-ray efficiency of off-gas in the normal vial. (author)
International Nuclear Information System (INIS)
Nikolopoulos, Dimitrios; Kandarakis, Ioannis; Tsantilas, Xenophon; Valais, Ioannis; Cavouras, Dionisios; Louizi, Anna
2006-01-01
The radiation detection efficiency of four scintillators employed, or designed to be employed, in positron emission tomography (PET) was evaluated as a function of crystal thickness by applying Monte Carlo methods. The scintillators studied were Lu2SiO5 (LSO), LuAlO3 (LuAP), Gd2SiO5 (GSO) and YAlO3 (YAP). Crystal thicknesses ranged from 0 to 50 mm. The study was performed via a previously generated photon-transport Monte Carlo code. All photon track and energy histories were recorded, and the energy transferred or absorbed in the scintillator medium was calculated together with the energy redistributed and retransported as secondary characteristic fluorescence radiation. Various parameters were calculated, e.g. the fraction of the incident photon energy absorbed, transmitted or redistributed as fluorescence radiation, the scatter-to-primary ratio, and the photon and energy distributions within each scintillator block. Most significantly, the fraction of the incident photon energy absorbed was found to increase with increasing crystal thickness, tending to form a plateau above 30 mm thickness. For LSO, LuAP, GSO and YAP scintillators, respectively, this fraction had the value of 44.8, 36.9 and 45.7% at 10 mm thickness and 96.4, 93.2 and 96.9% at 50 mm thickness. Within the plateau area, approximately (57-59)%, (59-63)%, (52-63)% and (58-61)% of this fraction was due to scattered and reabsorbed radiation for the LSO, GSO, YAP and LuAP scintillators, respectively. In all cases, a negligible fraction (<0.1%) of the absorbed energy was found to escape the crystal as fluorescence radiation.
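The reported plateau of absorbed fraction versus crystal thickness follows the qualitative trend of a simple narrow-beam interaction estimate, 1 − exp(−μt). A sketch under that simplification (the attenuation coefficient is an assumed, roughly LSO-like value at 511 keV; the full Monte Carlo additionally tracks scattered and fluorescence photons, so its absorbed fractions differ):

```python
import math

def interacting_fraction(mu_per_cm, t_mm):
    """Narrow-beam estimate of the fraction of normally incident photons
    that interact within a crystal of thickness t_mm (mm), given the
    linear attenuation coefficient mu_per_cm (1/cm)."""
    return 1.0 - math.exp(-mu_per_cm * t_mm / 10.0)

# Assumed mu ~ 0.87 /cm for 511 keV photons in a dense scintillator
for t in (10, 30, 50):
    print(f"t = {t} mm: interacting fraction = {interacting_fraction(0.87, t):.2f}")
```

The saturation toward 1 with increasing thickness mirrors the plateau the study observes above 30 mm.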
Monte Carlo method to characterize radioactive waste drums
International Nuclear Information System (INIS)
Lima, Josenilson B.; Dellamano, Jose C.; Potiens Junior, Ademar J.
2013-01-01
Non-destructive methods for characterizing radioactive waste drums have been developed in the Waste Management Department (GRR) at the Nuclear and Energy Research Institute (IPEN). This study was conducted as part of the radioactive waste characterization program, in order to meet the specifications and acceptance criteria for final disposal imposed by regulatory control, by means of gamma spectrometry. One of the main difficulties in the detector calibration process is obtaining the counting efficiencies, which can be solved by the use of mathematical techniques. The aim of this work was to develop a methodology to characterize drums using gamma spectrometry and the Monte Carlo method. Monte Carlo is a widely used mathematical technique that simulates radiation transport in the medium, thus yielding the efficiency calibration of the detector. The equipment used in this work is a heavily shielded hyperpure germanium (HPGe) detector coupled to an electronic setup composed of a high-voltage source, an amplifier and a multiport multichannel analyzer, with the MCNP software used for the Monte Carlo simulation. The development of this methodology will allow the characterization of solid radioactive wastes packed in drums and stored at GRR. (author)
Monte Carlo dose calibration in CT scanner
International Nuclear Information System (INIS)
Yadav, Poonam; Ramasubramanian, V.; Subbaiah, K.V.; Thayalan, K.
2008-01-01
Computed tomography (CT) is a high-radiation-dose imaging modality compared with radiography. The dose from a CT examination can vary greatly depending on the particular CT scanner used, the area of the body examined and the operating parameters of the scan. CT is a major contributor to the collective effective dose in diagnostic radiology. Apart from the clinical benefits, the widespread use of multislice scanners is increasing the radiation level to patients in comparison with conventional CT scanners, so it becomes necessary to increase awareness of CT scanner dose. (author)
Sun, Shuyu
2013-06-01
This paper introduces an efficient technique for generating new molecular simulation Markov chains at different temperature and density conditions, which allows rapid extrapolation of canonical ensemble averages to a range of temperatures and densities different from the original conditions at which a single simulation was conducted. Information obtained from the original simulation is reweighted, and even reconstructed, in order to extrapolate to the new conditions. The technique allows not only extrapolation to a new temperature or density, but also double extrapolation to both a new temperature and a new density. The method was implemented for a Lennard-Jones fluid of structureless particles in the single-phase gas region. Extrapolation behavior was studied as a function of the extrapolation range. The limits of the extrapolation range proved remarkably wide, especially along isochores, where only reweighting is required. Various factors that could affect these limits were investigated and compared; in particular, the limits were shown to be sensitive to the number of particles used and to the state point at which the original simulation was conducted.
Sun, Shuyu; Kadoura, Ahmad Salim; Salama, Amgad
2013-01-01
This paper introduces an efficient technique for generating new molecular simulation Markov chains at different temperature and density conditions, which allows rapid extrapolation of canonical ensemble averages to a range of temperatures and densities different from the original conditions at which a single simulation was conducted. Information obtained from the original simulation is reweighted, and even reconstructed, in order to extrapolate to the new conditions. The technique allows not only extrapolation to a new temperature or density, but also double extrapolation to both a new temperature and a new density. The method was implemented for a Lennard-Jones fluid of structureless particles in the single-phase gas region. Extrapolation behavior was studied as a function of the extrapolation range. The limits of the extrapolation range proved remarkably wide, especially along isochores, where only reweighting is required. Various factors that could affect these limits were investigated and compared; in particular, the limits were shown to be sensitive to the number of particles used and to the state point at which the original simulation was conducted.
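The single-temperature extrapolation described above rests on standard Boltzmann reweighting of canonical samples: averages at a new inverse temperature β' are recovered from a simulation at β using weights exp(−(β'−β)E). A minimal sketch on a toy two-level system (not the authors' Lennard-Jones implementation):

```python
import numpy as np

def reweight_average(obs, energies, beta_sim, beta_new):
    """Estimate <obs> at inverse temperature beta_new from a canonical
    sample generated at beta_sim, via Boltzmann reweighting."""
    logw = -(beta_new - beta_sim) * energies
    w = np.exp(logw - logw.max())  # subtract max for numerical stability
    return np.sum(w * obs) / np.sum(w)

# Toy check: a two-level system with energies 0 and 1
rng = np.random.default_rng(0)
beta = 1.0
p1 = np.exp(-beta) / (1 + np.exp(-beta))        # P(E=1) at beta
E = (rng.random(200_000) < p1).astype(float)    # canonical sample at beta
beta_new = 1.5
est = reweight_average(E, E, beta, beta_new)    # <E> extrapolated to beta_new
exact = np.exp(-beta_new) / (1 + np.exp(-beta_new))
print(est, exact)  # should agree to within ~0.01
```

The reweighting degrades as β' moves away from β, because fewer samples carry significant weight, which is exactly the extrapolation-range limit the paper quantifies.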
Czech Academy of Sciences Publication Activity Database
Kučera, Jan; Kubešová, Marie; Lebeda, Ondřej
2018-01-01
Roč. 315, č. 3 (2018), s. 671-675 ISSN 0236-5731. [7th International K0-Users Workshop. Montreal, 03.09.2017-08.09.2017] R&D Projects: GA ČR(CZ) GBP108/12/G108; GA MŠk LM2015056 Institutional support: RVO:61389005 Keywords : k(0)-INAA * Ca determination * HPGe detector * High-energy efficiency calibration * Co-56 activity standard Subject RIV: CB - Analytical Chemistry, Separation OBOR OECD: Analytical chemistry Impact factor: 1.282, year: 2016
International Nuclear Information System (INIS)
Christmas, P.; Nichols, A.L.; Lemmel, H.D.
1989-07-01
The final official meeting of the IAEA Coordinated Research Programme on the Measurement and Evaluation of X- and Gamma-ray Standards for Detector Efficiency Calibration was held in Braunschweig from 31 May to 2 June 1989. Work undertaken by the participants was reviewed in detail, and actions were agreed to resolve specific issues and problems. Initial steps were also made to establish a format and procedure for the preparation by mid-1990 of an IAEA Technical Reports Series booklet; the measurements and recommended data will be listed, and an IAEA data file established for issue to all interested organisations. (author). 3 tabs
TARC: Carlo Rubbia's Energy Amplifier
Laurent Guiraud
1997-01-01
Transmutation by Adiabatic Resonance Crossing (TARC) demonstrates the principle behind Carlo Rubbia's energy amplifier. This CERN experiment showed that long-lived fission fragments, such as 99Tc, can be efficiently destroyed.
Alexiadis, Orestis; Daoulas, Kostas Ch; Mavrantzas, Vlasis G
2008-01-31
A new Monte Carlo algorithm is presented for the simulation of atomistically detailed alkanethiol self-assembled monolayers (R-SH) on a Au(111) surface. Built on a set of simpler but also more complex (sometimes nonphysical) moves, the new algorithm is capable of efficiently driving all alkanethiol molecules to the Au(111) surface, thereby leading to full surface coverage, irrespective of the initial setup of the system. This circumvents a significant limitation of previous methods, in which the simulations typically started from optimally packed structures on the substrate close to thermal equilibrium. Further, by considering an extended ensemble of configurations, each of which corresponds to a different value of the sulfur-sulfur repulsive core potential σ_ss, and by allowing configurations to swap between systems characterized by different σ_ss values, the new algorithm can adequately simulate model R-SH/Au(111) systems for values of σ_ss ranging from 4.25 Å, corresponding to the Hautman-Klein molecular model (J. Chem. Phys. 1989, 91, 4994; 1990, 93, 7483), to 4.97 Å, corresponding to the Siepmann-McDonald model (Langmuir 1993, 9, 2351), and practically any chain length. Detailed results are presented quantifying the efficiency and robustness of the new method. Representative simulation data for the dependence of the structural and conformational properties of the formed monolayer on the details of the employed molecular model are reported and discussed; an investigation of the variation of molecular organization and ordering on the Au(111) substrate for three CH3-(CH2)n-SH/Au(111) systems with n=9, 15, and 21 is also included.
Coehoorn, Reinder; van Eersel, Harm; Bobbert, Peter A.; Janssen, Rene A. J.
2015-10-01
The performance of Organic Light Emitting Diodes (OLEDs) is determined by a complex interplay of the charge transport and excitonic processes in the active layer stack. We have developed a three-dimensional kinetic Monte Carlo (kMC) OLED simulation method which includes all these processes in an integral manner. The method employs a physically transparent mechanistic approach, and is based on measurable parameters. All processes can be followed with molecular-scale spatial resolution and with sub-nanosecond time resolution, for any layer structure and any mixture of materials. In the talk, applications to the efficiency roll-off, emission color and lifetime of white and monochrome phosphorescent OLEDs [1,2] are demonstrated, and a comparison with experimental results is given. The simulations show to which extent the triplet-polaron quenching (TPQ) and triplet-triplet-annihilation (TTA) contribute to the roll-off, and how the microscopic parameters describing these processes can be deduced properly from dedicated experiments. Degradation is treated as a result of the (accelerated) conversion of emitter molecules to non-emissive sites upon a triplet-polaron quenching (TPQ) process. The degradation rate, and hence the device lifetime, is shown to depend on the emitter concentration and on the precise type of TPQ process. Results for both single-doped and co-doped OLEDs are presented, revealing that the kMC simulations enable efficient simulation-assisted layer stack development. [1] H. van Eersel et al., Appl. Phys. Lett. 105, 143303 (2014). [2] R. Coehoorn et al., Adv. Funct. Mater. (2015), publ. online (DOI: 10.1002/adfm.201402532)
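Kinetic Monte Carlo of the kind described selects the next event (exciton hop, radiative decay, quenching, and so on) with rate-proportional probability and advances time by an exponentially distributed waiting time. A minimal Gillespie-style sketch with invented, hypothetical rates (the actual OLED simulator tracks sites, mixtures, and many more processes):

```python
import bisect
import math
import random

def kmc_step(rates, rng=random):
    """One kinetic Monte Carlo step: choose event i with probability
    rates[i] / sum(rates), and advance time by an exponentially
    distributed waiting time with mean 1 / sum(rates)."""
    total = sum(rates)
    cum, acc = [], 0.0
    for r in rates:
        acc += r
        cum.append(acc)
    i = bisect.bisect_left(cum, rng.random() * total)
    dt = -math.log(1.0 - rng.random()) / total
    return i, dt

# Hypothetical competing rates (s^-1): exciton hop, radiative decay, TPQ event
rates = [1e9, 1.5e6, 2e5]
random.seed(1)
counts = [0, 0, 0]
for _ in range(100_000):
    i, _dt = kmc_step(rates)
    counts[i] += 1
print(counts)  # event 0 dominates, ~ rates[0] / sum(rates) of all steps
```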
Krypton calibration of time projection chambers of the NA61/SHINE experiment
Naskret, Michal
The NA61/SHINE experiment at CERN is searching for the critical point of the phase transition between quark-gluon plasma and hadronic matter. To do so it uses a highly precise apparatus, the Time Projection Chamber (TPC), whose main task is to find the trajectories of particles created in a relativistic collision. In order to improve the efficiency of the TPCs, we introduce a calibration using radioactive krypton gas. Simulation of events in a TPC chamber through the decay of excited krypton atoms gives a spectrum, which is then fitted to the model krypton spectrum from a Monte Carlo simulation. The data obtained in this way serve to identify malfunctioning electronics in the TPCs. Thanks to the krypton calibration we can create a map of pad-by-pad gains. In this thesis I describe in detail the NA61 experimental setup, the krypton calibration procedure, the calibration algorithm and the results of recent calibration runs.
Shecter, Liat; Oiknine, Yaniv; August, Isaac; Stern, Adrian
2017-09-01
Recently we presented a Compressive Sensing Miniature Ultra-spectral Imaging System (CS-MUSI) [1]. This system consists of a single liquid crystal (LC) phase retarder as a spectral modulator and a grayscale sensor array that captures a multiplexed signal of the imaged scene. By designing the LC spectral modulator in compliance with compressive sensing (CS) guidelines and applying appropriate algorithms, we demonstrated reconstruction of spectral (hyper/ultra) datacubes from an order of magnitude fewer samples than taken by conventional sensors. The LC modulator is designed to have an effective width of a few tens of micrometers and is therefore prone to imperfections and spatial nonuniformity. In this work we study this nonuniformity and present a mathematical algorithm that allows the spectral transmission over the entire cell area to be inferred from only a few calibration measurements.
International Nuclear Information System (INIS)
Brown, F.B.
1981-01-01
Examination of the global algorithms and local kernels of conventional general-purpose Monte Carlo codes shows that multigroup Monte Carlo methods have sufficient structure to permit efficient vectorization. A structured multigroup Monte Carlo algorithm for vector computers is developed in which many particle events are treated at once on a cell-by-cell basis. Vectorization of kernels for tracking and variance reduction is described, and a new method for discrete sampling is developed to facilitate the vectorization of collision analysis. To demonstrate the potential of the new method, a vectorized Monte Carlo code for multigroup radiation transport analysis was developed. This code incorporates many features of conventional general-purpose production codes, including general geometry, splitting and Russian roulette, survival biasing, variance estimation via batching, a number of cutoffs, and generalized tallies of collision, tracklength, and surface-crossing estimators with response functions. Predictions of vectorized performance characteristics for the CYBER-205 were made using emulated coding and a dynamic model of vector instruction timing. Computation rates were examined for a variety of test problems to determine sensitivities to batch size and vector lengths. Significant speedups are predicted for even a few hundred particles per batch, and asymptotic speedups by about 40 over equivalent Amdahl 470V/8 scalar codes are predicted for a few thousand particles per batch. The principal conclusion is that vectorization of a general-purpose multigroup Monte Carlo code is well worth the significant effort required for stylized coding and major algorithmic changes
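The idea of treating many particle events at once maps naturally onto array operations. A toy single-group, forward-streaming batch-tracking sketch with survival biasing (hypothetical cross sections; far simpler than the general-geometry code described, since scattering here does not change direction):

```python
import numpy as np

def track_batch(n, sigma_t, sigma_a, slab_cm, rng):
    """Toy event-based batch tracking: one energy group, forward streaming
    only, absorption handled by weight reduction (survival biasing).
    Returns the transmitted (leaked) weight fraction."""
    x = np.zeros(n)                  # particle positions (cm)
    w = np.ones(n)                   # statistical weights
    alive = np.ones(n, dtype=bool)
    leakage = 0.0
    while alive.any():
        # Sample flight distances for all live particles at once
        d = -np.log(1.0 - rng.random(alive.sum())) / sigma_t
        x[alive] += d
        escaped = alive & (x >= slab_cm)
        leakage += w[escaped].sum()
        alive &= ~escaped
        # Survival biasing: scale weight instead of terminating on absorption
        w[alive] *= 1.0 - sigma_a / sigma_t
        alive &= w >= 1e-3           # crude weight cutoff (no roulette)
    return leakage / n

rng = np.random.default_rng(42)
leak = track_batch(100_000, sigma_t=1.0, sigma_a=0.5, slab_cm=2.0, rng=rng)
print(leak)  # analytic answer for this forward-only toy: exp(-0.5 * 2) ~ 0.368
```

Every step in the loop is a whole-array operation, which is the vectorization strategy the abstract describes, as opposed to a scalar history-by-history loop.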
Energy Technology Data Exchange (ETDEWEB)
Lepy, M.Ch
2000-07-01
The EUROMET project 428 examines efficiency transfer computation for Ge gamma-ray spectrometers when the efficiency is known for a reference point-source geometry, in the 60 keV to 2 MeV energy range. For this, different methods are used, such as Monte Carlo simulation or semi-empirical computation. The exercise compares the application of these methods to the same selected experimental cases to determine the usage limitations versus the requested accuracy. To examine these results carefully and derive information for improving the computation codes, the study was limited to a few simple cases, starting from an experimental efficiency calibration for a point source at 10-cm source-to-detector distance. The first part concerns the simplest case of geometry transfer, i.e., using point sources at three source-to-detector distances: 2, 5 and 20 cm; the second part deals with the transfer from point-source geometry to cylindrical geometry with three different matrices. The general results show that the deviations between the computed results and the measured efficiencies are for the most part within 10%. The quality of the results is rather inhomogeneous and shows that these codes cannot be used directly for metrological purposes. However, most of them are operational for routine measurements where efficiency uncertainties of 5-10% are sufficient. (author)
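The transfer method compared in the exercise scales a measured reference-geometry efficiency by the ratio of computed efficiencies, so that model errors common to both geometries largely cancel. A minimal sketch with invented numbers:

```python
def transfer_efficiency(eff_ref_meas, eff_ref_calc, eff_target_calc):
    """Efficiency transfer: scale the measured reference-geometry efficiency
    by the computed target/reference ratio; systematic model errors common
    to both geometries cancel to first order."""
    return eff_ref_meas * (eff_target_calc / eff_ref_calc)

# Hypothetical values: point source moved from the 10 cm reference to 2 cm
eff_10cm_meas = 2.1e-3   # measured full-energy-peak efficiency at 10 cm
eff_10cm_calc = 2.0e-3   # computed, same geometry
eff_2cm_calc = 3.4e-2    # computed, 2 cm geometry
eff_2cm = transfer_efficiency(eff_10cm_meas, eff_10cm_calc, eff_2cm_calc)
print(eff_2cm)  # ~ 0.0357
```

Only the ratio of the two computed values enters, which is why the transfer approach is more forgiving of detector-model inaccuracies than a direct MC efficiency calculation.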
Sahmani, S; Fattahi, A M
2017-08-01
New ceramic materials containing nanoscaled crystalline phases are a major object of scientific interest due to their attractive advantages, such as biocompatibility. Zirconia, a transparent glass ceramic, is one of the most useful binary oxides in a wide range of applications. In the present study, a new size-dependent plate model is constructed to predict the nonlinear axial instability characteristics of zirconia nanosheets under axial compressive load. To this end, the nonlocal continuum elasticity of Eringen is incorporated into a refined exponential shear deformation plate theory. A perturbation-based solution process is used to derive explicit expressions for the nonlocal equilibrium paths of axially loaded nanosheets. Molecular dynamics (MD) simulations are then performed for the axial instability response of square zirconia nanosheets with different side lengths, and their results are matched with those of the developed nonlocal plate model to capture the proper value of the nonlocal parameter. It is demonstrated that the calibrated nonlocal plate model, with a nonlocal parameter equal to 0.37 nm, predicts the axial instability characteristics of zirconia nanosheets with accuracy comparable to that of the MD simulations. Copyright © 2017 Elsevier Inc. All rights reserved.
Calibration of Flick standards
International Nuclear Information System (INIS)
Thalmann, Ruedi; Spiller, Jürg; Küng, Alain; Jusko, Otto
2012-01-01
Flick standards or magnification standards are widely used for an efficient and functional calibration of the sensitivity of form measuring instruments. The results of a recent measurement comparison proved to be partially unsatisfactory and revealed problems related to the calibration of these standards. In this paper the influence factors for the calibration of Flick standards using roundness measurement instruments are discussed in detail, in particular the bandwidth of the measurement chain, residual form errors of the device under test, profile distortions due to the diameter of the probing element, and questions related to the definition of the measurand. The different contributions are estimated using simulations and are verified experimentally. Alternative methods to calibrate Flick standards are also investigated. Finally, the practical limitations of Flick standard calibration are shown, and the usability of Flick standards both to calibrate the sensitivity of roundness instruments and to check the filter function of such instruments is analysed. (paper)
Kowalski, M P; Barbee, T W; Heidemann, K F; Gursky, H; Rife, J C; Hunter, W R; Fritz, G G; Cruddace, R G
1999-11-01
We have fabricated the four flight gratings for a sounding rocket high-resolution spectrometer using a holographic ion-etching technique. The gratings are spherical (4000-mm radius of curvature), large (160 mm x 90 mm), and have a laminar groove profile of high density (3600 grooves/mm). They have been coated with a high-reflectance multilayer of Mo/Si. Using an atomic force microscope, we examined the surface characteristics of the first grating before and after multilayer coating. The average roughness is approximately 3 Å rms after coating. Using synchrotron radiation, we completed an efficiency calibration map over the wavelength range 225-245 Å. At an angle of incidence of 5 degrees and a wavelength of 234 Å, the average efficiency in the first inside order is 10.4 +/- 0.5%, and the derived groove efficiency is 34.8 +/- 1.6%. These values exceed all previously published results for a high-density grating.
Zambri, Brian
2015-11-05
Our aim is to propose a numerical strategy for retrieving accurately and efficiently the biophysiological parameters as well as the external stimulus characteristics corresponding to the hemodynamic mathematical model that describes changes in blood flow and blood oxygenation during brain activation. The proposed method employs the TNM-CKF method developed in [1], but in a prediction/correction framework. We present numerical results using both real and synthetic functional Magnetic Resonance Imaging (fMRI) measurements to highlight the performance characteristics of this computational methodology. © 2015 IEEE.
Zambri, Brian; Djellouli, Rabia; Laleg-Kirati, Taous-Meriem
2015-01-01
Our aim is to propose a numerical strategy for retrieving accurately and efficiently the biophysiological parameters as well as the external stimulus characteristics corresponding to the hemodynamic mathematical model that describes changes in blood flow and blood oxygenation during brain activation. The proposed method employs the TNM-CKF method developed in [1], but in a prediction/correction framework. We present numerical results using both real and synthetic functional Magnetic Resonance Imaging (fMRI) measurements to highlight the performance characteristics of this computational methodology. © 2015 IEEE.
Comparison between two calibration models of a measurement system for thyroid monitoring
International Nuclear Information System (INIS)
Venturini, Luzia
2005-01-01
This paper presents a comparison between two theoretical calibrations that use two mathematical models to represent the neck region. In the first model the thyroid is considered simply a region bounded by two concentric cylinders whose dimensions are those of the trachea and the neck. The second model uses functional forms that give a better representation of the thyroid geometry. Efficiency values are obtained using Monte Carlo simulation. (author)
DEFF Research Database (Denmark)
Heydorn, Kaj; Anglov, Thomas
2002-01-01
Methods recommended by the International Standardization Organisation and Eurachem are not satisfactory for the correct estimation of calibration uncertainty. A novel approach is introduced and tested on actual calibration data for the determination of Pb by ICP-AES. The improved calibration...
DEFF Research Database (Denmark)
Rose, Matthias; Bjørner, Jakob; Gandek, Barbara
2014-01-01
OBJECTIVE: To document the development and psychometric evaluation of the Patient-Reported Outcomes Measurement Information System (PROMIS) Physical Function (PF) item bank and static instruments. STUDY DESIGN AND SETTING: The items were evaluated using qualitative and quantitative methods. A total...... response model was used to estimate item parameters, which were normed to a mean of 50 (standard deviation [SD]=10) in a US general population sample. RESULTS: The final bank consists of 124 PROMIS items covering upper, central, and lower extremity functions and instrumental activities of daily living...... to identify differences between age and disease groups. CONCLUSION: The item bank provides a common metric and can improve the measurement of PF by facilitating the standardization of patient-reported outcome measures and implementation of CATs for more efficient PF assessments over a larger range....
Hudoklin, Domen; Drnovšek, Janko
2008-10-01
In the field of hygrometry, a primary dew-point standard can be realized according to several proven principles, such as single-pressure (1-P), two-pressure (2-P), or divided flow. Different realizations have been introduced by various national laboratories, each resulting in a stand-alone complex generation system. Recent trends in generator design favor the single-pressure principle without recirculation because it promises theoretically lower uncertainty and because it avoids problems regarding the leak tightness of the recirculation. Instead of recirculation, the efficiency of saturation, the key factor, is increased by preconditioning the inlet gas entering the saturator. For preconditioning, a presaturator or purifier is used to bring the dew point of the inlet stream close to the saturator temperature. The purpose of the paper is to identify the minimum requirements for the preconditioning system and the main saturator to assure efficient saturation for the LMK generator. Moreover, the aim is also to find out if the preconditioning system can be avoided despite the rather simple construction of the main saturator. If this proves to be the case, the generator design can be simplified while maintaining an accurate value of the generated dew point. Experiments were carried out within the scope of improving our existing primary generator in the above-ambient dew-point range up to +70°C. These results show the generated dew point is within the measurement uncertainty for any dew-point value of the inlet gas. Thus, the preconditioning subsystem can be avoided, which leads to a simplified generator design.
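The generated dew point can be related to saturation vapor pressure with a Magnus-type formula; a sketch assuming WMO-style Magnus coefficients (an assumption for illustration; the generator's actual formulation, enhancement factors, and pressure corrections may differ):

```python
import math

def magnus_es(t_c):
    """Saturation vapor pressure over water (hPa), Magnus-type formula
    with WMO-style coefficients (assumed, not the paper's formulation)."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def dew_point(e_hpa):
    """Dew point (deg C) obtained by inverting the Magnus formula."""
    x = math.log(e_hpa / 6.112)
    return 243.12 * x / (17.62 - x)

# An ideal saturator at +70 deg C outputs gas whose dew point equals the
# saturator temperature; incomplete saturation would lower it.
print(round(dew_point(magnus_es(70.0)), 3))  # 70.0
```

Comparing the measured dew point of the outlet gas against the saturator temperature, as in this identity check, is precisely how saturation efficiency is judged.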
Energy Technology Data Exchange (ETDEWEB)
Leblanc, B.
2002-03-01
Molecular simulation aims at simulating interacting particles that describe a physico-chemical system. When considering Markov chain Monte Carlo sampling in this context, we often meet the same problem of statistical efficiency as with molecular dynamics for the simulation of complex molecules (polymers, for example). The search for a correct sampling of the space of possible configurations with respect to the Boltzmann-Gibbs distribution is directly related to the statistical efficiency of such algorithms (i.e. their ability to rapidly provide uncorrelated states covering all of configuration space). We investigated how to improve this efficiency with the help of Artificial Evolution (AE), a class of stochastic optimization algorithms inspired by Darwinian evolution. We first sought efficiency measures that can be turned into efficiency criteria, before identifying the parameters that could be optimized. The relative frequencies of the different types of Monte Carlo moves, usually chosen empirically within reasonable ranges, were considered first. We combined parallel simulations with a 'genetic server' in order to dynamically improve the quality of the sampling as the simulations progress. Our results show that, in comparison with some reference settings, it is possible to improve the quality of samples with respect to the chosen criterion. The same algorithm was applied to improve the parallel tempering technique, optimizing simultaneously the relative frequencies of Monte Carlo moves and the relative frequencies of swaps between sub-systems simulated at different temperatures. Finally, hints for further research on optimizing the choice of additional temperatures are given. (author)
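A minimal sketch of the idea of evolving Monte Carlo move frequencies against an efficiency criterion. Everything here is illustrative rather than the author's actual system: the "molecule" is a toy double-well potential, the criterion is mean accepted displacement, and the evolutionary loop is a bare keep-the-best-and-mutate scheme:

```python
import math, random

def energy(x):
    """Toy double-well potential standing in for a molecular energy."""
    return (x * x - 1.0) ** 2

def sampling_score(p_large, n=2000, beta=2.0, seed=0):
    """Metropolis chain mixing small and large moves with frequency p_large;
    the score (mean accepted displacement) is a crude efficiency criterion."""
    rng = random.Random(seed)
    x, score = 0.0, 0.0
    for _ in range(n):
        step = 2.0 if rng.random() < p_large else 0.2
        y = x + rng.uniform(-step, step)
        # Metropolis acceptance: u < exp(-beta * dE)
        if math.log(rng.random() + 1e-300) < -beta * (energy(y) - energy(x)):
            score += abs(y - x)
            x = y
    return score / n

# Evolutionary loop: keep the best move frequencies, mutate them as children.
rng = random.Random(1)
pop = [rng.random() for _ in range(6)]
for _ in range(8):
    parents = sorted(pop, key=sampling_score, reverse=True)[:3]
    children = [min(1.0, max(0.0, p + rng.gauss(0.0, 0.1))) for p in parents]
    pop = parents + children
best_p = max(pop, key=sampling_score)
```

In the paper's setting the individuals would be full sets of move frequencies evaluated on parallel simulations, with the genetic server collecting scores; the loop above only conveys the shape of that optimization.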
Directory of Open Access Journals (Sweden)
2008-05-01
Full Text Available Interview (in Spanish). Presentation: Carlos Romero, political scientist, is a professor and researcher at the Instituto de Estudios Políticos of the Facultad de Ciencias Jurídicas y Políticas of the Universidad Central de Venezuela, where he has served as doctoral program coordinator, deputy director and director of the Centro de Estudios de Postgrado. He has published eight books on political analysis and international relations, one of the most recent being Jugando con el globo. La política exter...
Rose, Matthias; Bjorner, Jakob B; Gandek, Barbara; Bruce, Bonnie; Fries, James F; Ware, John E
2014-05-01
To document the development and psychometric evaluation of the Patient-Reported Outcomes Measurement Information System (PROMIS) Physical Function (PF) item bank and static instruments. The items were evaluated using qualitative and quantitative methods. A total of 16,065 adults answered item subsets (n>2,200/item) on the Internet, with oversampling of the chronically ill. Classical test and item response theory methods were used to evaluate 149 PROMIS PF items plus 10 Short Form-36 and 20 Health Assessment Questionnaire-Disability Index items. A graded response model was used to estimate item parameters, which were normed to a mean of 50 (standard deviation [SD]=10) in a US general population sample. The final bank consists of 124 PROMIS items covering upper, central, and lower extremity functions and instrumental activities of daily living. In simulations, a 10-item computerized adaptive test (CAT) eliminated floor and decreased ceiling effects, achieving higher measurement precision than any comparable length static tool across four SDs of the measurement range. Improved psychometric properties were transferred to the CAT's superior ability to identify differences between age and disease groups. The item bank provides a common metric and can improve the measurement of PF by facilitating the standardization of patient-reported outcome measures and implementation of CATs for more efficient PF assessments over a larger range. Copyright © 2014. Published by Elsevier Inc.
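The graded response model and the CAT item-selection step described above can be sketched as follows. The three-item bank and its parameters are invented for illustration; only the model form and the 50/10 T-score norming follow the abstract:

```python
import math

def grm_probs(theta, a, bs):
    """Graded response model: category probabilities given discrimination a
    and ordered thresholds bs."""
    cum = [1.0] + [1.0 / (1.0 + math.exp(-a * (theta - b))) for b in bs] + [0.0]
    return [cum[k] - cum[k + 1] for k in range(len(cum) - 1)]

def t_score(theta):
    """PROMIS metric: theta (general-population z-score) normed to 50/10."""
    return 50.0 + 10.0 * theta

def item_information(theta, a, bs, eps=1e-4):
    """Numeric Fisher information of one graded item at ability theta."""
    p0 = grm_probs(theta, a, bs)
    p_hi = grm_probs(theta + eps, a, bs)
    p_lo = grm_probs(theta - eps, a, bs)
    return sum(((h - l) / (2 * eps)) ** 2 / max(p, 1e-12)
               for p, h, l in zip(p0, p_hi, p_lo))

# Hypothetical 3-item mini-bank: (a, thresholds).  A CAT step administers the
# item that is most informative at the current ability estimate.
bank = [(2.0, [-1.0, 0.0, 1.0]), (1.2, [-0.5, 0.5, 1.5]), (2.5, [0.0, 1.0, 2.0])]
theta_hat = 0.3
next_item = max(range(len(bank)), key=lambda i: item_information(theta_hat, *bank[i]))
```

Selecting by information at the provisional estimate is what lets a 10-item CAT match or beat longer static forms across the measurement range.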
Absolute calibration of in vivo measurement systems
International Nuclear Information System (INIS)
Kruchten, D.A.; Hickman, D.P.
1991-02-01
Lawrence Livermore National Laboratory (LLNL) is currently investigating a new method for obtaining absolute calibration factors for radiation measurement systems used to measure internally deposited radionuclides in vivo. Absolute calibration of in vivo measurement systems will eliminate the need to generate a series of human surrogate structures (i.e., phantoms) for calibrating in vivo measurement systems. The absolute calibration of in vivo measurement systems utilizes magnetic resonance imaging (MRI) to define physiological structure, size, and composition. The MRI image provides a digitized representation of the physiological structure, which allows for any mathematical distribution of radionuclides within the body. Using Monte Carlo transport codes, the emission spectrum from the body is predicted. The in vivo measurement equipment is calibrated using the Monte Carlo code and adjusting for the intrinsic properties of the detection system. The calibration factors are verified using measurements of existing phantoms and previously obtained measurements of human volunteers. 8 refs
Energy Technology Data Exchange (ETDEWEB)
Brockway, D.; Soran, P.; Whalen, P.
1985-01-01
A Monte Carlo algorithm to efficiently calculate static alpha eigenvalues, N = n e^(αt), for supercritical systems has been developed and tested. A direct Monte Carlo approach to calculating a static α is to simply follow the buildup in time of neutrons in a supercritical system and evaluate the logarithmic derivative of the neutron population with respect to time. This procedure is expensive, and the solution is very noisy and almost useless for a system near critical. The modified approach is to convert the time-dependent problem to a static α-eigenvalue problem and regress α on solutions of a k-eigenvalue problem. In practice, this procedure is much more efficient than the direct calculation, and produces much more accurate results. Because the Monte Carlo codes are intrinsically three-dimensional and use elaborate continuous-energy cross sections, this technique is now used as a standard for evaluating other calculational techniques in odd geometries or with group cross sections.
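The regression idea behind the direct approach can be illustrated with a toy example: generate a noisy exponential population buildup and recover α as the least-squares slope of log-population versus time. The lifetime, multiplication factor and noise level are assumed, and the prompt-only relation α ≈ (k_eff − 1)/τ is a simplification, not the paper's k-eigenvalue regression:

```python
import math, random

# Toy supercritical system: neutron lifetime tau, multiplication k_eff, and
# the prompt-only approximation alpha ~= (k_eff - 1) / tau (both assumed).
tau, k_eff = 1.0e-3, 1.05
alpha_true = (k_eff - 1.0) / tau          # 50 per second

# "Measured" populations: exponential growth plus statistical noise.
rng = random.Random(42)
t_grid = [i * 5.0e-4 for i in range(1, 11)]
pops = [1.0e4 * math.exp(alpha_true * t) * (1.0 + rng.gauss(0.0, 0.01))
        for t in t_grid]

# Estimate alpha as the least-squares slope of log(population) vs time.
ys = [math.log(p) for p in pops]
tbar = sum(t_grid) / len(t_grid)
ybar = sum(ys) / len(ys)
alpha_est = sum((t - tbar) * (y - ybar) for t, y in zip(t_grid, ys)) \
            / sum((t - tbar) ** 2 for t in t_grid)
```

Near critical, α approaches zero while the statistical noise does not, which is exactly why the direct log-derivative estimate degrades and the k-eigenvalue regression is preferred.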
Calibration factor or calibration coefficient?
International Nuclear Information System (INIS)
Meghzifene, A.; Shortt, K.R.
2002-01-01
Full text: The IAEA/WHO network of SSDLs was set up in order to establish links between SSDL members and the international measurement system. At the end of 2001, there were 73 network members in 63 Member States. The SSDL network members provide calibration services to end-users at the national or regional level. The results of the calibrations are summarized in a document called a calibration report or calibration certificate. The IAEA has been using the term calibration certificate and will continue using the same terminology. The most important information in a calibration certificate is a list of calibration factors and their related uncertainties that apply to the calibrated instrument for well-defined irradiation and ambient conditions. The IAEA has recently decided to change the term calibration factor to calibration coefficient, to be fully in line with ISO [ISO 31-0], which recommends the use of the term coefficient for a quantity linking two quantities A and B that have different dimensions. The term factor should be reserved for a dimensionless k linking two terms A and B of the same dimensions, as in A = k·B. However, in a typical calibration, an ion chamber is calibrated in terms of a physical quantity such as air kerma, dose to water, ambient dose equivalent, etc. If the chamber is calibrated together with its electrometer, then the calibration refers to the physical quantity to be measured per electrometer unit reading. In this case, the quantities related have different dimensions. The adoption by the Agency of the term coefficient to express the results of calibrations is consistent with the 'International vocabulary of basic and general terms in metrology' prepared jointly by the BIPM, IEC, ISO, OIML and other organizations. The BIPM has changed from factor to coefficient. The authors believe that this is more than just a matter of semantics and recommend that the SSDL network members adopt this change in terminology. (author)
Cinelli, Giorgia; Tositti, Laura; Mostacci, Domiziano; Baré, Jonathan
2016-05-01
In view of assessing natural radioactivity with on-site quantitative gamma spectrometry, the efficiency calibration of NaI(Tl) detectors is investigated. A calibration based on Monte Carlo simulation of the detector response is proposed, to render reliable quantitative analysis practicable in field campaigns. The method is developed with reference to contact geometry, in which measurements are taken by placing the NaI(Tl) probe directly against the solid source to be analyzed. The Monte Carlo code used for the simulations was MCNP. Experimental verification of the calibration goodness is obtained by comparison with appropriate standards, as reported. On-site measurements yield a quick quantitative assessment of the natural radioactivity levels present (40K, 238U and 232Th). On-site gamma spectrometry can prove particularly useful insofar as it provides information on materials from which samples cannot be taken. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Monte Carlo simulation for IRRMA
International Nuclear Information System (INIS)
Gardner, R.P.; Liu Lianyan
2000-01-01
Monte Carlo simulation is fast becoming a standard approach for many radiation applications that were previously treated almost entirely by experimental techniques. This is certainly true for Industrial Radiation and Radioisotope Measurement Applications - IRRMA. The reasons for this include: (1) the increased cost and inadequacy of experimentation for design and interpretation purposes; (2) the availability of low cost, large memory, and fast personal computers; and (3) the general availability of general purpose Monte Carlo codes that are increasingly user-friendly, efficient, and accurate. This paper discusses the history and present status of Monte Carlo simulation for IRRMA including the general purpose (GP) and specific purpose (SP) Monte Carlo codes and future needs - primarily from the experience of the authors
Statistical estimation Monte Carlo for unreliability evaluation of highly reliable system
International Nuclear Information System (INIS)
Xiao Gang; Su Guanghui; Jia Dounan; Li Tianduo
2000-01-01
Based on analog Monte Carlo simulation, statistical Monte Carlo methods for the unreliability evaluation of highly reliable systems are constructed, including a direct statistical estimation Monte Carlo method and a weighted statistical estimation Monte Carlo method. The basic element is given, and the statistical estimation Monte Carlo estimators are derived. The direct Monte Carlo simulation method, the bounding-sampling method, the forced-transitions Monte Carlo method, direct statistical estimation Monte Carlo and weighted statistical estimation Monte Carlo are used to evaluate the unreliability of the same system. The comparison shows that the weighted statistical estimation Monte Carlo estimator has the smallest variance and the highest calculational efficiency
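A minimal sketch of the direct (analog) estimation idea, applied to a two-unit parallel system with assumed constant failure rates (not the system studied in the paper):

```python
import math, random

# Two-unit parallel (redundant) system: it fails over mission time T only if
# both units fail.  Failure rates and T are assumed for illustration.
lam1, lam2, T = 1.0e-3, 2.0e-3, 100.0
exact = (1.0 - math.exp(-lam1 * T)) * (1.0 - math.exp(-lam2 * T))

rng = random.Random(7)
n, failures = 100000, 0
for _ in range(n):
    t1 = rng.expovariate(lam1)               # sampled unit failure times
    t2 = rng.expovariate(lam2)
    if t1 <= T and t2 <= T:                  # both fail within the mission
        failures += 1
u_hat = failures / n                         # analog estimator of unreliability
```

The analog estimator wastes almost all histories on non-failing trajectories when the system is highly reliable, which is precisely why the weighted (statistically estimated) variants with smaller variance are of interest.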
Adjoint electron Monte Carlo calculations
International Nuclear Information System (INIS)
Jordan, T.M.
1986-01-01
Adjoint Monte Carlo is the most efficient method for accurate analysis of space systems exposed to natural and artificially enhanced electron environments. Recent adjoint calculations for isotropic electron environments include: comparative data for experimental measurements on electronics boxes; benchmark problem solutions for comparing total dose prediction methodologies; preliminary assessment of sectoring methods used during space system design; and total dose predictions on an electronics package. Adjoint Monte Carlo, forward Monte Carlo, and experiment are in excellent agreement for electron sources that simulate space environments. For electron space environments, adjoint Monte Carlo is clearly superior to forward Monte Carlo, requiring one to two orders of magnitude less computer time for relatively simple geometries. The solid-angle sectoring approximations used for routine design calculations can err by more than a factor of 2 on dose in simple shield geometries. For critical space systems exposed to severe electron environments, these potential sectoring errors demand the establishment of large design margins and/or verification of shield design by adjoint Monte Carlo/experiment
Calibration of a large multi-element neutron counter in the energy range 85-430 MeV
Strong, J A; Esterling, R J; Garvey, J; Green, M G; Harnew, N; Jane, M R; Jobes, M; Mawson, J; McMahon, T; Robertson, A W; Thomas, D H
1978-01-01
Describes the calibration of a large 60-element neutron counter with a threshold of 2.7 MeV equivalent electron energy. The performance of the counter has been measured in the neutron kinetic energy range 85-430 MeV using a neutron beam at the CERN Synchrocyclotron. The results obtained for the efficiency as a function of energy are in reasonable agreement with a Monte Carlo calculation. (7 refs).
Nested Sampling with Constrained Hamiltonian Monte Carlo
Betancourt, M. J.
2010-01-01
Nested sampling is a powerful approach to Bayesian inference ultimately limited by the computationally demanding task of sampling from a heavily constrained probability distribution. An effective algorithm in its own right, Hamiltonian Monte Carlo is readily adapted to efficiently sample from any smooth, constrained distribution. Utilizing this constrained Hamiltonian Monte Carlo, I introduce a general implementation of the nested sampling algorithm.
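A minimal 1-D nested sampling loop conveys the structure. The constrained draw below uses naive rejection sampling, which is exactly the step the paper replaces with constrained HMC; the prior, likelihood and run lengths are illustrative:

```python
import math, random

def loglike(x):
    """Standard-normal log likelihood."""
    return -0.5 * x * x - 0.5 * math.log(2.0 * math.pi)

rng = random.Random(3)
n_live, n_iter = 200, 1500
live = [rng.uniform(-5.0, 5.0) for _ in range(n_live)]   # uniform prior

z, x_prev = 0.0, 1.0                 # evidence accumulator, prior mass left
for i in range(1, n_iter + 1):
    worst = min(range(n_live), key=lambda j: loglike(live[j]))
    l_star = loglike(live[worst])
    x_i = math.exp(-i / n_live)      # deterministic shrinkage X_i = e^(-i/n)
    z += math.exp(l_star) * (x_prev - x_i)
    x_prev = x_i
    while True:                      # constrained prior draw by rejection
        y = rng.uniform(-5.0, 5.0)
        if loglike(y) > l_star:
            live[worst] = y
            break
z += x_prev * sum(math.exp(loglike(x)) for x in live) / n_live

# Analytic evidence here: (1/10) * P(|N(0,1)| < 5), approximately 0.1.
```

The rejection step's acceptance rate collapses as the likelihood constraint tightens, so its cost grows roughly like the inverse of the remaining prior mass; a sampler that moves within the constrained region, such as constrained HMC, avoids that blow-up.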
Energy Technology Data Exchange (ETDEWEB)
Trzcinski, A.; Lukasik, J.; Mueller, W.F.J.; Trautmann, W.; Zwieglinski, B. E-mail: bzw@fuw.edu.pl; Auger, G.; Bacri, Ch.O.; Begemann-Blaich, M.L.; Bellaize, N.; Bittiger, R.; Bocage, F.; Borderie, B.; Bougault, R.; Bouriquet, B.; Buchet, Ph.; Charvet, J.L.; Chbihi, A.; Dayras, R.; Dore, D.; Durand, D.; Frankland, J.D.; Galichet, E.; Gourio, D.; Guinet, D.; Hudan, S.; Hurst, B.; Lautesse, P.; Lavaud, F.; Laville, J.L.; Leduc, C.; Le Fevre, A.; Legrain, R.; Lopez, O.; Lynen, U.; Nalpas, L.; Orth, H.; Plagnol, E.; Rosato, E.; Saija, A.; Schwarz, C.; Sfienti, C.; Steckmeyer, J.C.; Tabacaru, G.; Tamain, B.; Turzo, K.; Vient, E.; Vigilante, M.; Volant, C
2003-04-01
An efficient method of energy scale calibration for the CsI(Tl) modules of the INDRA multidetector (rings 6-12) using elastic and inelastic 12C+1H scattering at E(12C) = 30 MeV per nucleon is presented. Background-free spectra for the binary channels are generated by requiring the coincident detection of the light and heavy ejectiles. The gain parameter of the calibration curve is obtained by fitting the proton total charge spectra to the spectra predicted with Monte-Carlo simulations using tabulated cross section data. The method has been applied in multifragmentation experiments with INDRA at GSI.
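The gain-fitting step can be sketched as a one-parameter least-squares problem. The channel positions, energies and pedestal below are invented, and the paper fits simulated spectra rather than peak centroids, but the structure of the fit is the same:

```python
# Hypothetical peak centroids (channels) and their reference energies (MeV);
# the pedestal (electronics offset) is taken as known and fixed.
channels = [102.0, 205.0, 410.0, 820.0]
energies = [2.5, 5.0, 10.0, 20.0]
pedestal = 2.0

# Closed-form least squares for the single gain g in E = g * (channel - pedestal):
xs = [c - pedestal for c in channels]
g = sum(x * e for x, e in zip(xs, energies)) / sum(x * x for x in xs)
residuals = [g * x - e for x, e in zip(xs, energies)]
```

With only one free parameter, the normal equations collapse to a single ratio, so the fit is exact and cheap to repeat per detector module.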
Application of PHOTON simulation software on calibration of HPGe detectors
Energy Technology Data Exchange (ETDEWEB)
Nikolic, J., E-mail: jnikolic@vinca.rs [University of Belgrade Institute for Nuclear Sciences Vinča, Mike Petrovica Alasa 12-16, 11001 Belgrade (Serbia); Puzovic, J. [University of Belgrade Faculty of Physics, Studentski trg 6, 11000 Belgrade (Serbia); Todorovic, D.; Rajacic, M. [University of Belgrade Institute for Nuclear Sciences Vinča, Mike Petrovica Alasa 12-16, 11001 Belgrade (Serbia)
2015-11-01
One of the major difficulties in gamma spectrometry of voluminous environmental samples is the efficiency calibration of the detectors used for the measurement. The direct measurement of different calibration sources, containing isolated γ-ray emitters within the energy range of interest, followed by fitting to a parametric function, is the most accurate, and at the same time the most complicated and time-consuming, method of efficiency calibration. Many other methods have been developed over time, some of them using Monte Carlo simulation. One such method is the dedicated, user-friendly program PHOTON, developed to simulate the passage of photons through different media and geometries. This program was used for the efficiency calibration of three HPGe detectors routinely used in the Laboratory for Environment and Radiation Protection of the Institute for Nuclear Sciences Vinca, Belgrade, Serbia. The simulation produced the spectral response of the detectors for fixed energies and for different sample geometries and matrices. The efficiencies thus obtained were compared with the values obtained by measurement of secondary reference materials and with the results of a GEANT4 simulation, in order to establish whether the simulated values agree with the experimental ones. To further analyze the results, a realistic measurement of materials provided by the IAEA within different interlaboratory proficiency tests was performed. The activities obtained using the simulated efficiencies were compared with the reference values provided by the organizer. Good agreement was obtained in the mid-energy section of the spectrum, while at low energies the lack of some parameters in the simulation libraries produced unacceptable discrepancies.
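The "fit measured efficiencies to a parametric function" step is commonly done as a polynomial in log-log space. A sketch with invented efficiency points (not the Vinca detectors' data), using the frequently chosen form ln(eff) = a0 + a1·lnE + a2·(lnE)²:

```python
import math

# Hypothetical full-energy-peak efficiency points (energy in keV, efficiency),
# standing in for measured calibration-source data:
points = [(59.5, 0.0095), (88.0, 0.020), (122.1, 0.026), (344.3, 0.019),
          (661.7, 0.012), (1173.2, 0.0080), (1332.5, 0.0072)]

def fit_loglog_quadratic(pts):
    """Least-squares fit of ln(eff) = a0 + a1*lnE + a2*(lnE)^2."""
    xs = [math.log(e) for e, _ in pts]
    ys = [math.log(eff) for _, eff in pts]
    s = [sum(x ** k for x in xs) for k in range(5)]
    m = [[s[i + j] for j in range(3)] for i in range(3)]
    b = [sum((x ** i) * y for x, y in zip(xs, ys)) for i in range(3)]
    for col in range(3):                         # Gaussian elimination
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 3):
                m[r][c] -= f * m[col][c]
            b[r] -= f * b[col]
    a = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):                          # back substitution
        a[r] = (b[r] - sum(m[r][c] * a[c] for c in range(r + 1, 3))) / m[r][r]
    return a

a0, a1, a2 = fit_loglog_quadratic(points)

def efficiency(e_kev):
    x = math.log(e_kev)
    return math.exp(a0 + a1 * x + a2 * x * x)
```

Real calibrations often use higher-degree polynomials or piecewise fits below and above the efficiency knee; the quadratic here just shows the mechanics.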
An intercomparison of Monte Carlo codes used for in-situ gamma-ray spectrometry
International Nuclear Information System (INIS)
Hurtado, S.; Villa, M.
2010-01-01
In-situ gamma-ray spectrometry is widely used for the monitoring of natural as well as man-made radionuclides and the corresponding gamma fields in the environment or in working places. It finds effective application in the operational and accidental monitoring of nuclear facilities and their vicinity, waste repositories, radioactive contamination measurements, and environmental mapping or geological prospecting. In order to determine accurate radionuclide concentrations in these research fields, Monte Carlo codes have recently been used to obtain the efficiency calibration of in-situ gamma-ray detectors. This work presents an intercomparison between two Monte Carlo codes applied to in-situ gamma-ray spectrometry. On the commercial market, Canberra offers its LABSOCS/ISOCS software, which is relatively inexpensive. The ISOCS mathematical efficiency calibration software uses a combination of Monte Carlo calculations and discrete-ordinates attenuation computations. Efficiencies can be generated in a few minutes in the field and can be modified easily if needed. However, it has been reported in the literature that the ISOCS computation method is accurate on average only to within 5%, and using LABSOCS/ISOCS additionally requires a prior characterization of the detector by Canberra, which is an expensive process. On the other hand, the multipurpose and open-source GEANT4 takes significant computer time and offers a powerful, though less user-friendly, toolkit that is independent of the detector manufacturer. Different experimental measurements of calibrated sources were performed with a Canberra portable HPGe detector and compared to the results obtained using both Monte Carlo codes. Furthermore, a variety of efficiency calibrations for different radioactive source distributions were calculated and tested, such as plane shapes or containers filled with different materials such as soil, water, etc. LabSOCS simulated efficiencies for medium and high energies were given within an
International Nuclear Information System (INIS)
Farah, Jad
2011-01-01
To optimize the monitoring of female workers using in vivo spectrometry measurements, it is necessary to correct the typical calibration coefficients obtained with the Livermore male physical phantom. To do so, numerical calibrations based on Monte Carlo simulations combined with anthropomorphic 3D phantoms were performed. Such computational calibrations require, on the one hand, the development of representative female phantoms of different sizes and morphologies and, on the other hand, rapid and reliable Monte Carlo calculations. A library of female torso models was hence developed by fitting the weight of internal organs and breasts according to body height and to relevant plastic surgery recommendations. This library was then used to perform a numerical calibration of the AREVA NC La Hague in vivo counting installation. Moreover, the morphology-induced variations of counting efficiency with energy were parameterized, and recommendations were given for correcting the typical calibration coefficients for any monitored female worker as a function of body height and breast size. Meanwhile, variance reduction techniques and geometry simplification operations were considered to accelerate the simulations. Furthermore, to determine the activity mapping in the case of complex contaminations, a method combining Monte Carlo simulations with in vivo measurements was developed. This method consists of performing several spectrometry measurements with different detector positions. The contribution of each contaminated organ to the count is then assessed from Monte Carlo calculations. The in vivo measurements performed at LEDI, CIEMAT and KIT have demonstrated the effectiveness of the method and highlighted the valuable contribution of Monte Carlo simulations to a more detailed analysis of spectrometry measurements. Thus, a more precise estimate of the activity distribution is given in the case of an internal contamination. (author)
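The idea of morphology-dependent correction factors can be sketched as a small lookup-and-interpolate step. The table values below are invented, not the study's fitted equations; they only illustrate how a height- and breast-size-dependent relative efficiency turns into a correction applied to the reference calibration:

```python
# Hypothetical relative counting efficiencies (female phantom vs. the male
# Livermore reference), tabulated by body height (cm) and breast cup size:
table = {
    150.0: {"A": 0.92, "C": 0.84, "E": 0.76},
    165.0: {"A": 0.95, "C": 0.88, "E": 0.81},
    180.0: {"A": 0.98, "C": 0.92, "E": 0.86},
}

def correction_factor(height_cm, cup):
    """Factor applied to the reference calibration coefficient:
    1 / relative efficiency, with linear interpolation in height."""
    hs = sorted(table)
    h0 = max(h for h in hs if h <= height_cm)
    h1 = min(h for h in hs if h >= height_cm)
    if h0 == h1:
        rel = table[h0][cup]
    else:
        w = (height_cm - h0) / (h1 - h0)
        rel = (1.0 - w) * table[h0][cup] + w * table[h1][cup]
    return 1.0 / rel

cf = correction_factor(160.0, "C")   # e.g. a 160 cm worker, cup size C
```

Because the attenuation by breast tissue grows with photon path length, the relative efficiencies (and hence the corrections) are in reality energy dependent, which is what the study parameterized.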
Calibration of the Super-Kamiokande detector
Energy Technology Data Exchange (ETDEWEB)
Abe, K. [Kamioka Observatory, Institute for Cosmic Ray Research, University of Tokyo, Kamioka, Gifu 506-1205 (Japan); Hayato, Y. [Kamioka Observatory, Institute for Cosmic Ray Research, University of Tokyo, Kamioka, Gifu 506-1205 (Japan); Kavli Institute for the Physics and Mathematics of the Universe (WPI), Todai Institutes for Advanced Study, University of Tokyo, Kashiwa, Chiba 277-8583 (Japan); Iida, T.; Iyogi, K.; Kameda, J. [Kamioka Observatory, Institute for Cosmic Ray Research, University of Tokyo, Kamioka, Gifu 506-1205 (Japan); Kishimoto, Y. [Kamioka Observatory, Institute for Cosmic Ray Research, University of Tokyo, Kamioka, Gifu 506-1205 (Japan); Kavli Institute for the Physics and Mathematics of the Universe (WPI), Todai Institutes for Advanced Study, University of Tokyo, Kashiwa, Chiba 277-8583 (Japan); Koshio, Y., E-mail: koshio@fphy.hep.okayama-u.ac.jp [Kamioka Observatory, Institute for Cosmic Ray Research, University of Tokyo, Kamioka, Gifu 506-1205 (Japan); Marti, Ll.; Miura, M. [Kamioka Observatory, Institute for Cosmic Ray Research, University of Tokyo, Kamioka, Gifu 506-1205 (Japan); Moriyama, S.; Nakahata, M. [Kamioka Observatory, Institute for Cosmic Ray Research, University of Tokyo, Kamioka, Gifu 506-1205 (Japan); Kavli Institute for the Physics and Mathematics of the Universe (WPI), Todai Institutes for Advanced Study, University of Tokyo, Kashiwa, Chiba 277-8583 (Japan); Nakano, Y.; Nakayama, S.; Obayashi, Y.; Sekiya, H. [Kamioka Observatory, Institute for Cosmic Ray Research, University of Tokyo, Kamioka, Gifu 506-1205 (Japan); Shiozawa, M.; Suzuki, Y. [Kamioka Observatory, Institute for Cosmic Ray Research, University of Tokyo, Kamioka, Gifu 506-1205 (Japan); Kavli Institute for the Physics and Mathematics of the Universe (WPI), Todai Institutes for Advanced Study, University of Tokyo, Kashiwa, Chiba 277-8583 (Japan); Takeda, A.; Takenaga, Y.; Tanaka, H. 
[Kamioka Observatory, Institute for Cosmic Ray Research, University of Tokyo, Kamioka, Gifu 506-1205 (Japan); and others
2014-02-11
Procedures and results on hardware-level detector calibration in Super-Kamiokande (SK) are presented in this paper. In particular, we report improvements made in our calibration methods for the experimental phase IV in which new readout electronics have been operating since 2008. The topics are separated into two parts. The first part describes the determination of constants needed to interpret the digitized output of our electronics so that we can obtain physical numbers such as photon counts and their arrival times for each photomultiplier tube (PMT). In this context, we developed an in situ procedure to determine high-voltage settings for PMTs in large detectors like SK, as well as a new method for measuring PMT quantum efficiency and gain in such a detector. The second part describes modeling of the detector in Monte Carlo simulations, including, in particular, the optical properties of the water target and their variability over time. Detailed studies on water quality are also presented. As a result of this work, we have achieved a precision sufficient for physics analyses over a wide energy range (from a few MeV to above 1 TeV). For example, charge determination was at the level of 1%, and the timing resolution was 2.1 ns at the one-photoelectron charge level and 0.5 ns at the 100-photoelectron charge level.
Fast sequential Monte Carlo methods for counting and optimization
Rubinstein, Reuven Y; Vaisman, Radislav
2013-01-01
A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the
International Nuclear Information System (INIS)
Hunt, John
1998-01-01
A Monte Carlo program which uses a voxel phantom has been developed to simulate in vivo measurement systems for calibration purposes. The calibration method presented here employs a mathematical phantom, produced in the form of volume elements (voxels), obtained through magnetic resonance images of the human body. The method uses the Monte Carlo technique to simulate the tissue contamination, the transport of the photons through the tissues and the detection of the radiation. The program simulates the transport and detection of photons between 0.035 and 2 MeV and uses, for the body representation, a voxel phantom with a format of 871 slices each of 277 x 148 picture elements. The Monte Carlo code was applied to the calibration of in vivo systems and to estimating differences in counting efficiency between homogeneous and non-homogeneous radionuclide distributions in the lung. Calculations show a factor of 20 difference in counting efficiency between 241Am deposited at the back of the lung and at the front. The program was also used to estimate the 137Cs body burden of an internally contaminated individual counted with an 8 x 4 NaI(Tl) detector, and the 241Am body burden of an internally contaminated individual counted with a planar germanium detector. (author)
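A stripped-down version of the depth effect can be demonstrated with a 1-D row of voxels: photons emitted in deep voxels must survive a larger optical depth to reach the detector. The attenuation coefficients and voxel size are assumed, and a real voxel-phantom code tracks full 3-D transport with scattering:

```python
import math, random

# 1-D row of voxels between a buried source and the detector face; each entry
# is an assumed linear attenuation coefficient (1/mm) for that voxel.
mu = [0.02, 0.02, 0.15, 0.15, 0.04]
voxel_mm = 5.0

def escape_probability(start_voxel):
    """Analytic attenuation from the centre of start_voxel to the row's end."""
    depth = 0.5 * voxel_mm * mu[start_voxel] + voxel_mm * sum(mu[start_voxel + 1:])
    return math.exp(-depth)

def mc_escape(start_voxel, n=20000, seed=1):
    """Monte Carlo estimate: sample the optical depth each photon survives."""
    rng = random.Random(seed)
    segs = [0.5 * voxel_mm] + [voxel_mm] * (len(mu) - start_voxel - 1)
    mus = mu[start_voxel:]
    escaped = 0
    for _ in range(n):
        budget = rng.expovariate(1.0)        # survivable optical depth
        for m, s in zip(mus, segs):
            budget -= m * s
            if budget < 0.0:
                break                        # photon interacts before escaping
        else:
            escaped += 1
    return escaped / n

deep, shallow = escape_probability(0), escape_probability(4)
```

Even this toy row gives a severalfold difference between deep and shallow emission sites, the same kind of position dependence that produced the factor of 20 for the lung geometry.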
International Nuclear Information System (INIS)
Santos, Cecilia Martins
2003-01-01
In this work the efficiency calibration curves of thin-window, low-background gas-flow proportional counters were determined for calibration standards of different energies and different absorber thicknesses. For gross alpha counting we used 241Am and natural uranium standards, and for gross beta counting we used 90Sr/90Y and 137Cs standards, in residue thicknesses ranging from 0 to approximately 18 mg/cm2. These sample thicknesses were built up with a previously characterized salt solution prepared to simulate the chemical composition of the underground water of IPEN. The counting efficiency for alpha emitters ranged from 0.273 ± 0.038 for a weightless residue to only 0.015 ± 0.002 in a planchet containing 15 mg/cm2 of residue for the 241Am standard. For the natural uranium standard the efficiency ranged from 0.322 ± 0.030 for a weightless residue to 0.023 ± 0.003 in a planchet containing 14.5 mg/cm2 of residue. The counting efficiency for beta emitters ranged from 0.430 ± 0.036 for a weightless residue to 0.247 ± 0.020 in a planchet containing 17 mg/cm2 of residue for the 137Cs standard. For the 90Sr/90Y standard the efficiency ranged from 0.489 ± 0.041 for a weightless residue to 0.323 ± 0.026 in a planchet containing 18 mg/cm2 of residue. The results demonstrate how the counting efficiency varies with the energy of the alpha or beta emitters and with the thickness of the water-sample residue. The calibration standard, the thickness and the chemical composition of the residue must therefore always be considered when determining gross alpha and beta radioactivity in water samples. (author)
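The thickness dependence reported above becomes a usable correction once a model is fitted through the measured points. The sketch below anchors a single-exponential self-absorption curve on the two 241Am endpoint values; a single exponential is a crude assumption for illustration, not the authors' calibration curve:

```python
import math

# Self-absorption model eff(x) = eff0 * exp(-k*x), anchored on the 241Am
# values: 0.273 at ~0 residue and 0.015 at 15 mg/cm2.
eff0, eff15, x15 = 0.273, 0.015, 15.0
k = math.log(eff0 / eff15) / x15            # effective attenuation slope

def alpha_efficiency(x_mg_cm2):
    return eff0 * math.exp(-k * x_mg_cm2)

def gross_alpha_activity(net_cps, x_mg_cm2):
    """Net count rate corrected by the efficiency at this residue thickness."""
    return net_cps / alpha_efficiency(x_mg_cm2)
```

Using the weightless-residue efficiency for a thick sample would understate the alpha activity by more than an order of magnitude at 15 mg/cm2, which is the paper's central caution.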
Browne, William J; Steele, Fiona; Golalizadeh, Mousa; Green, Martin J
2009-06-01
We consider the application of Markov chain Monte Carlo (MCMC) estimation methods to random-effects models, in particular the family of discrete-time survival models. Survival models can be used in many situations in the medical and social sciences, and we illustrate their use through two examples that differ in both substantive area and data structure. A multilevel discrete-time survival analysis involves expanding the data set so that the model can be cast as a standard multilevel binary-response model. For such models it has been shown that MCMC methods have advantages in terms of reducing estimate bias. However, the data expansion results in very large data sets for which MCMC estimation is often slow and can produce chains that exhibit poor mixing. Any improvement in mixing will both speed up the methods and increase confidence in the estimates produced. The MCMC methodological literature is full of alternative algorithms designed to improve the mixing of chains, and we describe three reparameterization techniques that are easy to implement in available software. We consider two examples of multilevel survival analysis: incidence of mastitis in dairy cattle and contraceptive-use dynamics in Indonesia. For each application we show where the reparameterization techniques can be used and assess their performance.
International Nuclear Information System (INIS)
Szoke, A; Brooks, E D; McKinley, M; Daffin, F
2005-01-01
The equations of radiation transport for thermal photons are notoriously difficult to solve in thick media without resorting to asymptotic approximations such as the diffusion limit. One source of this difficulty is that in thick, absorbing media thermal emission is almost completely balanced by strong absorption. In a previous publication [SB03], the photon transport equation was written in terms of the deviation of the specific intensity from the local equilibrium field. We called the new form of the equations the difference formulation. The difference formulation is rigorously equivalent to the original transport equation. It is particularly advantageous in thick media, where the radiation field approaches local equilibrium and the deviations from the Planck distribution are small. The difference formulation for photon transport also clarifies the diffusion limit. In this paper, the transport equation is solved by the Symbolic Implicit Monte Carlo (SIMC) method and a comparison is made between the standard formulation and the difference formulation. The SIMC method is easily adapted to the derivative source terms of the difference formulation, and a remarkable reduction in noise is obtained when the difference formulation is applied to problems involving thick media
International Nuclear Information System (INIS)
Bird, A.J.; Barlow, E.J.; Tikkanen, T.; Bazzano, A.; Del Santo, M.; Ubertini, P.; Blondel, C.; Laurent, P.; Lebrun, F.; Di Cocco, G.; Malaguti, E.; Gabriele, M.; La Rosa, G.; Segreto, A.; Quadrini, E.; Volkmer, R.
2003-01-01
We present an overview of results obtained from IBIS ground calibrations. The spectral and spatial characteristics of the detector planes and surrounding passive materials have been determined through a series of calibration campaigns. Measurements of pixel gain, energy resolution, detection uniformity, efficiency and imaging capability are presented. The key results obtained from the ground calibration have been: - optimization of the instrument tunable parameters, - determination of energy linearity for all detection modes, - determination of energy resolution as a function of energy through the range 20 keV - 3 MeV, - demonstration of imaging capability in each mode, - measurement of intrinsic detector non-uniformity and understanding of the effects of passive materials surrounding the detector plane, and - discovery (and closure) of various leakage paths through the passive shielding system.
International Nuclear Information System (INIS)
Mack, D.A.
1976-08-01
Procedures for the calibration of different types of laboratory equipment are described. Provisions for maintaining the integrity of reference and working standards, traceable back to a national standard, are discussed. Methods of validation and certification are included. An appendix lists available publications and services of national standardizing agencies.
Is Monte Carlo embarrassingly parallel?
Energy Technology Data Exchange (ETDEWEB)
Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)
2012-07-01
Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation, used for synchronization and exchange of data between processors. This happens at least at the end of each cycle of fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
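The speedup limitation from per-cycle rendezvous can be illustrated with a toy cost model; the constants below are arbitrary, not measurements from the paper.

```python
# Toy cost model for one fission-source cycle on p processors: perfectly
# parallel transport work plus a rendezvous (synchronization and data
# exchange) cost that grows with the number of processors.
def cycle_time(p, work=1.0, sync=0.001):
    return work / p + sync * p

def speedup(p, **kw):
    return cycle_time(1, **kw) / cycle_time(p, **kw)

# Past the optimum processor count, adding processors makes things slower.
times = {p: cycle_time(p) for p in (1, 8, 32, 64, 128)}
```

In this model the optimum is near sqrt(work/sync) processors; beyond it the synchronization term dominates, reproducing qualitatively the observation that execution time can increase with processor count.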
Is Monte Carlo embarrassingly parallel?
International Nuclear Information System (INIS)
Hoogenboom, J. E.
2012-01-01
Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation, used for synchronization and exchange of data between processors. This happens at least at the end of each cycle of fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
Validation of the ATLAS hadronic calibration with the LAr End-Cap beam tests data
International Nuclear Information System (INIS)
Barillari, Teresa
2009-01-01
The high granularity of the ATLAS calorimeter and the large number of expected particles per event require a clustering algorithm that is able to suppress noise and pile-up efficiently. The cluster reconstruction is therefore the essential first step in the hadronic calibration. The identification of electromagnetic components within a hadronic cluster using cluster shape variables is the next step in the hadronic calibration procedure. Finally, the energy density of individual cells is used to assign the proper weights that correct for the invisible energy deposits of hadrons, due to the non-compensating nature of the ATLAS calorimeter, and for energy losses in material not instrumented with read-out. Since the Monte Carlo simulation is used to define the weighting parameters and energy correction algorithms, its validation is an essential step in the hadronic calibration procedure. Pion data obtained in a beam test, corresponding to the pseudorapidity region 2.5 < |η| < 4.0 in ATLAS and to the energy range 40 GeV ≤ E ≤ 200 GeV, have been compared with Monte Carlo simulations using the full ATLAS hadronic calibration procedure.
International Nuclear Information System (INIS)
Setiani, Tia Dwi; Suprijadi; Haryanto, Freddy
2016-01-01
Monte Carlo (MC) is a powerful technique for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images, and a comparison of the image quality resulting from simulations on the GPU and CPU, are evaluated in this paper. The simulations were run serially on a CPU and on two GPUs with 384 cores and 2304 cores. In the GPU simulations each core tracks one photon, so a large number of photons are calculated simultaneously. Results show that the simulations on the GPU were significantly accelerated compared to the CPU. The simulations on the 2304-core GPU ran about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU ran about 20-31 times faster than on a single CPU core. Another result shows that optimum image quality was obtained with histories starting from 10^8 and energies from 60 keV to 90 keV. Analyzed by a statistical approach, the quality of the GPU and CPU images is essentially the same.
Energy Technology Data Exchange (ETDEWEB)
Setiani, Tia Dwi, E-mail: tiadwisetiani@gmail.com [Computational Science, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132 (Indonesia); Suprijadi [Computational Science, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132 (Indonesia); Nuclear Physics and Biophysics Reaserch Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132 (Indonesia); Haryanto, Freddy [Nuclear Physics and Biophysics Reaserch Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132 (Indonesia)
2016-03-11
Monte Carlo (MC) is a powerful technique for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images, and a comparison of the image quality resulting from simulations on the GPU and CPU, are evaluated in this paper. The simulations were run serially on a CPU and on two GPUs with 384 cores and 2304 cores. In the GPU simulations each core tracks one photon, so a large number of photons are calculated simultaneously. Results show that the simulations on the GPU were significantly accelerated compared to the CPU. The simulations on the 2304-core GPU ran about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU ran about 20-31 times faster than on a single CPU core. Another result shows that optimum image quality was obtained with histories starting from 10{sup 8} and energies from 60 keV to 90 keV. Analyzed by a statistical approach, the quality of the GPU and CPU images is essentially the same.
Saizu, Mirela Angela
2016-09-01
The development of high-purity germanium detectors matches very well the requirements of in-vivo human body measurements regarding the gamma energy ranges of the radionuclides to be measured, the shape of the extended radioactive sources, and the measurement geometries. The Whole Body Counter (WBC) at IFIN-HH is based on an "over-square" high-purity germanium (HPGe) detector that performs accurate measurements of incorporated radionuclides emitting X and gamma rays in the energy range 10 keV-1500 keV, under conditions of good shielding, suitable collimation, and calibration. As an alternative to the experimental efficiency calibration method, which uses reference calibration sources with gamma energy lines covering the whole considered energy range, it is proposed to calibrate the efficiency of the WBC by the Monte Carlo method, using the radiation transport code MCNP5. The HPGe detector was modelled and the gamma energy lines of 241Am, 57Co, 133Ba, 137Cs, 60Co, and 152Eu were simulated in order to obtain the virtual efficiency calibration curve of the WBC. The Monte Carlo method was validated by comparing the simulated results with experimental measurements using point-like sources. For optimum agreement, the impact of variations of the front dead layer thickness and of the detector's photon-absorbing layer materials on the HPGe detector efficiency was studied, and the detector model was refined. In order to perform the WBC efficiency calibration for realistic people monitoring, further numerical calculations were generated, simulating extended sources of specific shape according to the standard man characteristics.
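A common way to turn simulated full-energy-peak efficiencies into a calibration curve is a low-order polynomial fit in log-log space. A sketch with made-up efficiency points (these are not the paper's MCNP5 results):

```python
import math

import numpy as np

# Illustrative (energy in keV, full-energy-peak efficiency) pairs, roughly
# shaped like a typical HPGe curve; the numbers are invented.
points = [(59.5, 0.020), (122.1, 0.035), (356.0, 0.022),
          (661.7, 0.014), (1173.2, 0.009), (1332.5, 0.008)]

# Fit ln(eff) = a0 + a1*ln(E) + a2*ln(E)^2 by least squares.
x = np.log([e for e, _ in points])
y = np.log([eff for _, eff in points])
coeffs = np.polyfit(x, y, 2)

def efficiency(energy_kev):
    """Interpolated efficiency from the fitted log-log curve."""
    return math.exp(np.polyval(coeffs, math.log(energy_kev)))
```

The fitted curve then gives an efficiency at any gamma line within the calibrated range, which is how a discrete set of simulated lines becomes a usable calibration curve.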
Energy Technology Data Exchange (ETDEWEB)
Venturini, Luzia [Instituto de Pesquisas Energeicas e Nucleares (IPEN), Sao Paulo, Sp (Brazil). Dept. de Metrologia das Radiacoes]. E-mail: lventur@net.ipen.br
2005-07-01
This paper presents a comparison between two theoretical calibrations that use two mathematical models to represent the neck region. In the first model the thyroid is considered to be simply a region limited by two concentric cylinders whose dimensions are those of the trachea and the neck. The second model uses analytical functions to obtain a better representation of the thyroid geometry. Efficiency values are obtained using Monte Carlo simulation. (author)
Calibration of germanium detectors
International Nuclear Information System (INIS)
Bjurman, B.; Erlandsson, B.
1985-01-01
This paper describes problems concerning the calibration of germanium detectors for the measurement of gamma radiation from environmental samples. It also contains a brief description of some ways of reducing the uncertainties in the activity determination. These uncertainties have many sources, such as counting statistics, full-energy-peak efficiency determination, density corrections, and radionuclide-specific coincidence effects when environmental samples are investigated at close source-to-detector distances.
Sheykhizadeh, Saheleh; Naseri, Abdolhossein
2018-04-01
Variable selection plays a key role in classification and multivariate calibration. Variable selection methods aim at choosing a set of variables, from a large pool of available predictors, relevant to estimating the analyte concentrations or to achieving better classification results. Many variable selection techniques have been introduced, among which those based on swarm intelligence optimization have attracted particular attention in recent decades, since they are mainly inspired by nature. In this work, a simple new variable selection algorithm is proposed based on the invasive weed optimization (IWO) concept. IWO is a bio-inspired metaheuristic mimicking the ecological behavior of weeds in colonizing and finding a suitable place for growth and reproduction; it has been shown to be very adaptive and robust to environmental changes. In this paper, the first application of IWO, as a very simple and powerful method, to variable selection is reported, using different experimental datasets including FTIR and NIR data, to undertake classification and multivariate calibration tasks. Accordingly, invasive weed optimization-linear discriminant analysis (IWO-LDA) and invasive weed optimization-partial least squares (IWO-PLS) are introduced for multivariate classification and calibration, respectively.
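The core IWO loop can be sketched on a toy continuous objective; this is a generic illustration of the metaheuristic, not the authors' IWO-PLS/IWO-LDA code. In their setting, the fitness would instead be a cross-validated calibration or classification error over candidate variable subsets.

```python
import random

random.seed(1)

def fitness(x):
    # Toy objective to minimize (sphere); in IWO-PLS this would be a
    # cross-validated calibration error for the selected variables.
    return sum(v * v for v in x)

DIM, POP_MAX, ITERS = 5, 20, 60
SEED_MAX, SEED_MIN = 5, 1
SIGMA_INIT, SIGMA_FINAL, EXPONENT = 1.0, 0.01, 3

population = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(5)]

for it in range(ITERS):
    # Spatial dispersal: seed spread decreases nonlinearly over iterations
    sigma = SIGMA_FINAL + (SIGMA_INIT - SIGMA_FINAL) * ((ITERS - it) / ITERS) ** EXPONENT
    fits = [fitness(w) for w in population]
    best_f, worst_f = min(fits), max(fits)
    offspring = []
    for weed, f in zip(population, fits):
        # Reproduction: fitter weeds spread more seeds
        ratio = (worst_f - f) / (worst_f - best_f) if worst_f > best_f else 1.0
        n_seeds = int(SEED_MIN + ratio * (SEED_MAX - SEED_MIN))
        for _ in range(n_seeds):
            offspring.append([v + random.gauss(0.0, sigma) for v in weed])
    # Competitive exclusion: keep only the best POP_MAX weeds
    population = sorted(population + offspring, key=fitness)[:POP_MAX]

best = population[0]
```

For variable selection the weeds would be binary inclusion masks rather than real vectors, with the Gaussian dispersal replaced by bit-flip perturbations.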
Discrete diffusion Monte Carlo for frequency-dependent radiative transfer
International Nuclear Information System (INIS)
Densmore, Jeffery D.; Thompson, Kelly G.; Urbatsch, Todd J.
2011-01-01
Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold; above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique. (author)
Energy Technology Data Exchange (ETDEWEB)
Moskvin, V; Tsiamas, P; Axente, M; Farr, J [St. Jude Children’s Research Hospital, Memphis, TN (United States); Stewart, R [University of Washington, Seattle, WA. (United States)
2015-06-15
Purpose: One of the more critical initiating events for reproductive cell death is the creation of a DNA double strand break (DSB). In this study, we present a computationally efficient way to determine spatial variations in the relative biological effectiveness (RBE) of proton therapy beams within the FLUKA Monte Carlo (MC) code. Methods: We used the independently tested Monte Carlo Damage Simulation (MCDS) developed by Stewart and colleagues (Radiat. Res. 176, 587-602, 2011) to estimate the RBE for DSB induction of monoenergetic protons, tritium, deuterium, helium-3 and helium-4 ions, and delta-electrons. The dose-weighted RBE coefficients were incorporated into FLUKA to determine the equivalent {sup 60}Co γ-ray dose for representative proton beams incident on cells in aerobic and anoxic environments. Results: We found that the proton beam RBE for DSB induction at the tip of the Bragg peak, including primary and secondary particles, is close to 1.2. Furthermore, the RBE increases laterally from the beam axis in the region of the Bragg peak. At the distal edge, the RBE is in the range 1.3-1.4 for cells irradiated under aerobic conditions and may be as large as 1.5-1.8 for cells irradiated under anoxic conditions. Across the plateau region, the recorded RBE for DSB induction is 1.02 for aerobic cells and 1.05 for cells irradiated under anoxic conditions. The contribution to total effective dose from secondary heavy ions decreases with depth and is higher at shallow depths (e.g., at the surface of the skin). Conclusion: Multiscale simulation of the RBE for DSB induction provides useful insights into spatial variations in proton RBE within pristine Bragg peaks. This methodology is potentially useful for the biological optimization of proton therapy for the treatment of cancer. The study highlights the need to incorporate spatial variations in proton RBE into proton therapy treatment plans.
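The dose-weighted RBE bookkeeping behind an equivalent-dose tally can be sketched as below; the voxel numbers are illustrative, not results from the paper.

```python
# Equivalent dose from per-contribution (dose, rbe) pairs: each particle
# type's dose deposit is weighted by its RBE for DSB induction, so the
# cobalt-60 equivalent dose is the dose-weighted sum.
def equivalent_dose(deposits):
    return sum(d * r for d, r in deposits)

def dose_weighted_rbe(deposits):
    total = sum(d for d, _ in deposits)
    return equivalent_dose(deposits) / total

# e.g. a voxel near the distal edge: mostly low-energy protons plus a small
# dose fraction from heavier secondaries (numbers are made up)
voxel = [(0.9, 1.35), (0.1, 2.0)]
```

Incorporated into a transport code, the same weighting is applied per energy deposition event, with the RBE looked up from the particle type and energy.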
Importance iteration in MORSE Monte Carlo calculations
International Nuclear Information System (INIS)
Kloosterman, J.L.; Hoogenboom, J.E.
1994-01-01
An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example that shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation
Importance iteration in MORSE Monte Carlo calculations
International Nuclear Information System (INIS)
Kloosterman, J.L.; Hoogenboom, J.E.
1994-02-01
An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example, which shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Eersel, H. van, E-mail: h.v.eersel@tue.nl; Coehoorn, R. [Department of Applied Physics, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven (Netherlands); Philips Research Laboratories, High Tech Campus 4, 5656 AE Eindhoven (Netherlands); Bobbert, P. A.; Janssen, R. A. J. [Department of Applied Physics, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven (Netherlands)
2014-10-06
We present an advanced molecular-scale organic light-emitting diode (OLED) model, integrating both electronic and excitonic processes. Using this model, we can reproduce the measured efficiency roll-off for prototypical phosphorescent OLED stacks based on the green dye tris[2-phenylpyridine]iridium (Ir(ppy){sub 3}) and the red dye octaethylporphine platinum (PtOEP), and study the cause of the roll-off as a function of the current density. Both the voltage versus current density characteristics and the roll-off agree well with experimental data. Surprisingly, the results of the simulations lead us to conclude that, contrary to what is often assumed, not triplet-triplet annihilation but triplet-polaron quenching is the dominant mechanism causing the roll-off under realistic operating conditions. Simulations for devices with an optimized recombination profile, achieved by carefully tuning the dye trap depth, show that it will be possible to fabricate OLEDs with a drastically reduced roll-off. It is envisaged that J{sub 90}, the current density at which the efficiency is reduced to 90%, can be increased by almost one order of magnitude compared to the experimental state-of-the-art.
International Nuclear Information System (INIS)
Eersel, H. van; Coehoorn, R.; Bobbert, P. A.; Janssen, R. A. J.
2014-01-01
We present an advanced molecular-scale organic light-emitting diode (OLED) model, integrating both electronic and excitonic processes. Using this model, we can reproduce the measured efficiency roll-off for prototypical phosphorescent OLED stacks based on the green dye tris[2-phenylpyridine]iridium (Ir(ppy)3) and the red dye octaethylporphine platinum (PtOEP), and study the cause of the roll-off as a function of the current density. Both the voltage versus current density characteristics and the roll-off agree well with experimental data. Surprisingly, the results of the simulations lead us to conclude that, contrary to what is often assumed, not triplet-triplet annihilation but triplet-polaron quenching is the dominant mechanism causing the roll-off under realistic operating conditions. Simulations for devices with an optimized recombination profile, achieved by carefully tuning the dye trap depth, show that it will be possible to fabricate OLEDs with a drastically reduced roll-off. It is envisaged that J90, the current density at which the efficiency is reduced to 90%, can be increased by almost one order of magnitude compared to the experimental state-of-the-art.
Tau reconstruction, energy calibration and identification at ATLAS
International Nuclear Information System (INIS)
Trottier-Mcdonald, Michel
2012-01-01
Tau leptons play a central role in the LHC physics programme, in particular as an important signature in many Higgs boson and supersymmetry searches. They are further used in Standard Model electroweak measurements, as well as detector-related studies like the determination of the missing transverse energy scale. Copious backgrounds from QCD processes call for both efficient identification of hadronically decaying tau leptons and large suppression of fake candidates. A solid understanding of the combined performance of the calorimeter and tracking detectors is also required. We present the current status of the tau reconstruction, energy calibration and identification with the ATLAS detector at the LHC. Identification efficiencies are measured in W → τν events in data and compared with predictions from Monte Carlo simulations, whereas the misidentification probabilities of QCD jets and electrons are determined from various jet-enriched data samples and from Z → ee events, respectively. The tau energy scale calibration is described, and systematic uncertainties on both the energy scale and the identification efficiencies are discussed. (author)
The ATLAS Electromagnetic Calorimeter Calibration Workshop
Hong Ma; Isabelle Wingerter
The ATLAS Electromagnetic Calorimeter Calibration Workshop took place at LAPP-Annecy from the 1st to the 3rd of October; 45 people attended the workshop. A detailed program was set up before the workshop. The agenda was organised around very focused presentations, where questions were raised to allow arguments to be exchanged and answers to be proposed. The main topics were: electronics calibration; handling of problematic channels; cluster-level corrections for electrons and photons; absolute energy scale; streams for calibration samples; calibration constants processing; and learning from commissioning. The workshop was on the whole lively and fruitful. Based on years of experience with test beam analysis and Monte Carlo simulation, and the recent operation of the detector in the commissioning, the methods to calibrate the electromagnetic calorimeter are well known. Some of the procedures are being exercised in the commissioning, which have demonstrated the c...
Monte Carlo simulation of Markov unreliability models
International Nuclear Information System (INIS)
Lewis, E.E.; Boehm, F.
1984-01-01
A Monte Carlo method is formulated for the evaluation of the unreliability of complex systems with known component failure and repair rates. The formulation is in terms of a Markov process, allowing dependences between components to be modeled and computational efficiencies to be achieved in the Monte Carlo simulation. Two variance reduction techniques, forced transition and failure biasing, are employed to increase the computational efficiency of the random walk procedure. For an example problem these result in improved computational efficiency by more than three orders of magnitude over analog Monte Carlo. The method is generalized to treat problems with distributed failure and repair rate data, and a batching technique is introduced and shown to result in substantial increases in computational efficiency for an example problem. A method for separating the variance due to data uncertainty from that due to the finite number of random walks is presented. (orig.)
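The analog baseline that forced transition and failure biasing improve upon can be sketched for a simple case; the failure rates and mission time below are illustrative, and no repair or component dependence is modeled.

```python
import math
import random

random.seed(2)

# Analog Monte Carlo estimate of the unreliability at time T of a
# two-component parallel system: the system fails only when both
# components have failed (exponential failure times, no repair).
LAM1, LAM2, T = 0.5, 0.8, 1.0

def analog_trial():
    t1 = random.expovariate(LAM1)   # failure time of component 1
    t2 = random.expovariate(LAM2)   # failure time of component 2
    return 1.0 if max(t1, t2) <= T else 0.0

N = 200000
estimate = sum(analog_trial() for _ in range(N)) / N

# Closed form for independent components, used here as a check
exact = (1 - math.exp(-LAM1 * T)) * (1 - math.exp(-LAM2 * T))
```

For highly reliable systems the failure indicator is almost always zero, which is exactly why analog sampling becomes inefficient and biasing techniques such as forced transitions pay off by orders of magnitude.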
Energy Technology Data Exchange (ETDEWEB)
Tian, Z; Folkerts, M; Jiang, S; Jia, X [UT Southwestern Medical Ctr, Dallas, TX (United States); Li, Y [Beihang University, Beijing (China)
2016-06-15
Purpose: We have previously developed a GPU-OpenCL-based MC dose engine named goMC with a built-in analytical linac beam model. To move goMC towards routine clinical use, we have developed an automatic beam-commissioning method and an efficient source sampling strategy to facilitate dose calculations for real treatment plans. Methods: Our commissioning method automatically adjusts the relative weights among the sub-sources through an optimization process that minimizes the discrepancies between calculated dose and measurements. Six models built for Varian TrueBeam linac photon beams (6MV, 10MV, 15MV, 18MV, 6MVFFF, 10MVFFF) were commissioned using measurement data acquired at our institution. To facilitate dose calculations for real treatment plans, we employed an inverse sampling method to efficiently incorporate MLC leaf-sequencing into source sampling. Specifically, instead of sampling source particles control point by control point and rejecting the particles blocked by the MLC, we assigned a control-point index to each sampled source particle according to the MLC leaf-open duration of each control point at the pixel where the particle intersects the iso-center plane. Results: Our auto-commissioning method decreased the distance-to-agreement (DTA) of the depth dose in build-up regions by 36.2% on average, bringing it within 1 mm. Lateral profiles were better matched for all beams, with the biggest improvement found at 15MV, for which the root-mean-square difference was reduced from 1.44% to 0.50%. Maximum differences of output factors were reduced to less than 0.7% for all beams, with the largest decrease, from 1.70% to 0.37%, found at 10MVFFF. Our new sampling strategy was tested on a head-and-neck VMAT patient case. While achieving clinically acceptable accuracy, the new strategy could reduce the required number of histories by a factor of ∼2.8 at a given statistical uncertainty level and hence achieve a similar speed-up factor. Conclusion: Our studies have demonstrated the feasibility and effectiveness of the proposed auto-commissioning method and source sampling strategy.
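The inverse-sampling idea described above, drawing a control-point index for each source particle in proportion to the MLC leaf-open duration at the pixel it crosses, can be sketched as follows; the durations are illustrative, for a single pixel.

```python
import bisect
import random

random.seed(3)

# Leaf-open duration of each control point at one pixel (made-up values;
# control point 0 is fully blocked at this pixel).
open_durations = [0.0, 0.2, 0.5, 0.3]

cumulative = []
total = 0.0
for d in open_durations:
    total += d
    cumulative.append(total)

def sample_control_point():
    """Inverse-CDF draw: index chosen with probability proportional
    to its leaf-open duration, so blocked control points are never
    sampled and no particles are wasted on rejection."""
    u = random.random() * total
    return bisect.bisect_right(cumulative, u)

counts = [0] * len(open_durations)
for _ in range(100000):
    counts[sample_control_point()] += 1
```

Because fully blocked control points get zero probability, this replaces the reject-if-blocked loop with a single table lookup per particle, which is where the history-count saving comes from.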
Dunn, William L
2012-01-01
Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle problem"...
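Buffon's needle, mentioned at the end of the record, is the classic Monte Carlo illustration: a needle of length L dropped on a floor ruled with lines spaced D apart (L ≤ D) crosses a line with probability 2L/(πD), so π can be estimated from the observed crossing frequency.

```python
import math
import random

random.seed(4)

L, D, N = 1.0, 1.0, 200000
hits = 0
for _ in range(N):
    y = random.uniform(0.0, D / 2)            # centre distance to nearest line
    theta = random.uniform(0.0, math.pi / 2)  # acute angle with the lines
    if y <= (L / 2) * math.sin(theta):        # needle crosses the line
        hits += 1

# P(cross) = 2L / (pi * D)  =>  pi ≈ 2 * L * N / (D * hits)
pi_estimate = 2 * L * N / (D * hits)
```

The estimate converges only as 1/sqrt(N), which is itself a standard first lesson about Monte Carlo error.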
Directory of Open Access Journals (Sweden)
Bardenet Rémi
2013-07-01
Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow these integrals to be computed numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among them rejection sampling, importance sampling, and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretical justification of the algorithms as well as practical advice, trying to relate the two. We discuss the application of Monte Carlo in experimental physics, and point to landmarks in the literature for the curious reader.
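Of the algorithms reviewed, rejection sampling is the simplest to show in full: draw from a proposal, and accept with probability p(x)/(M·q(x)), where M bounds the density ratio. A minimal example with a Beta(2, 2) target and a uniform proposal:

```python
import random

random.seed(5)

# Target: Beta(2, 2) density p(x) = 6x(1-x) on [0, 1].
# Proposal: uniform q(x) = 1, with envelope constant M >= max p(x) = 1.5.
def p(x):
    return 6.0 * x * (1.0 - x)

M = 1.5

def sample():
    while True:
        x = random.random()               # draw from the uniform proposal
        if random.random() <= p(x) / M:   # accept with probability p(x)/(M*q(x))
            return x

draws = [sample() for _ in range(100000)]
mean = sum(draws) / len(draws)            # Beta(2, 2) has mean 1/2
```

The expected acceptance rate is 1/M here; when M must be large (poorly matched proposal), rejection sampling wastes most draws, which is the usual motivation for importance sampling and MCMC.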
Zimmerman, George B.
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, inflight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.
International Nuclear Information System (INIS)
Zimmerman, G.B.
1997-01-01
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, inflight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials. copyright 1997 American Institute of Physics
International Nuclear Information System (INIS)
Zimmerman, George B.
1997-01-01
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, inflight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials
Borghi, Giacomo; Tabacchini, Valerio; Seifert, Stefan; Schaart, Dennis R.
2015-02-01
Monolithic scintillator detectors can achieve excellent spatial resolution and coincidence resolving time. However, their practical use for positron emission tomography (PET) and other applications in the medical imaging field is still limited due to drawbacks of the different methods used to estimate the position of interaction. Common statistical methods, for example, require the collection of an extensive dataset of reference events with a narrow pencil beam aimed at a fine grid of reference positions. Such procedures are time consuming and not straightforwardly implemented in systems composed of many detectors. Here, we experimentally demonstrate for the first time a new calibration procedure for k-nearest neighbor (k-NN) position estimation that utilizes reference data acquired with a fan beam. The procedure is tested on two detectors consisting of 16 mm × 16 mm × 10 mm and 16 mm × 16 mm × 20 mm monolithic, Ca-codoped LSO:Ce crystals and digital photon counter (DPC) arrays. For both detectors, the spatial resolution and the bias obtained with the new method are found to be practically the same as those obtained with the previously used method based on pencil-beam irradiation, while the calibration time is reduced by a factor of 20. Specifically, a FWHM of 1.1 mm and a FWTM of 2.7 mm were obtained using the fan-beam method with the 10 mm crystal, whereas a FWHM of 1.5 mm and a FWTM of 6 mm were achieved with the 20 mm crystal. Using a fan beam made with a 4.5 MBq 22Na point source and a tungsten slit collimator with a 0.5 mm aperture, the total measurement time needed to acquire the reference dataset was 3 hours for the thinner crystal and 2 hours for the thicker one.
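The k-NN position estimation underlying this calibration can be sketched as follows. The "detector response" here is a made-up smooth light distribution over pixels plus noise, standing in for the real DPC signals; the geometry and parameters are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy detector response: light distribution over 16 pixels, peaked at the
# interaction position, plus Gaussian readout noise (all values invented).
def response(pos, n_pixels=16, noise=0.02):
    centres = np.linspace(0.0, 16.0, n_pixels)
    signal = np.exp(-0.5 * ((centres - pos) / 2.0) ** 2)
    return signal + noise * rng.standard_normal(n_pixels)

# Reference events recorded at known beam positions (the calibration set)
ref_positions = np.repeat(np.linspace(1.0, 15.0, 29), 50)
ref_events = np.array([response(p) for p in ref_positions])

def knn_estimate(event, k=25):
    # Unknown event position = mean position of its k nearest reference
    # events in the space of pixel-signal vectors
    d2 = np.sum((ref_events - event) ** 2, axis=1)
    nearest = np.argsort(d2)[:k]
    return float(ref_positions[nearest].mean())

true_pos = 7.3
estimate = knn_estimate(response(true_pos))
```

The paper's contribution is how the reference set is acquired (fan beam instead of pencil beam); the estimator itself stays the same, which is why resolution and bias are preserved while calibration time drops.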
International Nuclear Information System (INIS)
Satoh, Daiki; Sato, Tatsuhiko; Shigyo, Nobuhiro; Ishibashi, Kenji
2006-11-01
The Monte Carlo based computer code SCINFUL-QMD has been developed to evaluate the response function and detection efficiency of a liquid organic scintillator for neutrons from 0.1 MeV to 3 GeV. This code is a modified version of SCINFUL, developed at Oak Ridge National Laboratory in 1988 to provide a calculated full response anticipated for neutron interactions in a scintillator. The upper limit of the applicable energy was extended from 80 MeV to 3 GeV by introducing quantum molecular dynamics incorporated with the statistical decay model (QMD+SDM) in the high-energy nuclear reaction part. The particles generated in QMD+SDM are neutrons, protons, deuterons, tritons, 3He nuclei, alpha particles, and charged pions. Secondary reactions by neutrons, protons, and pions inside the scintillator are also taken into account. With the extension of the applicable energy, the database of total cross sections for hydrogen and carbon nuclei was upgraded. This report describes the physical model, the computational flow, and how to use the code. (author)
Monte Carlo simulation of a stand-up type whole body counter using different sized BOMAB phantoms
International Nuclear Information System (INIS)
Park, Minjung; Yoo, Jaeryong; Park, Seyoung; Ha, Wiho; Lee, Seungsook; Park, Minjung; Yoo, Jaeryong; Kim, Kwangpyo
2013-01-01
It is necessary to assess the internal contamination level to determine the need for medical intervention. A whole body counter (WBC) is used to measure incorporated radioactive materials inside the human body. The WBC is also the standard in vivo method used in preparedness for response to radiological emergencies. To operate this equipment correctly, proper energy and efficiency calibrations must be performed. A WBC is usually calibrated using a Bottle Manikin ABsorber (BOMAB) phantom, which is the industrial standard. A problem occurs when the subjects to be measured have physical characteristics (height or weight) different from the phantom used in calibration. In radiation emergency situations, this problem is expected to worsen because there are special populations whose physical characteristics differ from the reference male, for example children and women. The aim of this study is to resolve this problem by simulating the counting efficiency of different sized BOMAB phantoms using Monte Carlo techniques. The counting efficiency response of the WBC has been modeled for four different sized BOMAB phantoms using MCNPX. The stand-up type WBC shows an efficiency response that depends on phantom size, since its geometry differs from that of scanning-type or non-linear geometry WBCs. In emergency monitoring situations, it is important to estimate the activity of various sized persons. Therefore, it is necessary to apply the appropriate counting efficiency according to person size. Further investigations are needed to optimize the methodology for measuring small objects in the stand-up type WBC.
Murthy, K. P. N.
2001-01-01
An introduction to the basics of Monte Carlo is given. The topics covered include sample space, events, probabilities, random variables, mean, variance, covariance, characteristic functions, the Chebyshev inequality, the law of large numbers, the central limit theorem (stable distributions, the Levy distribution), random numbers (generation and testing), random sampling techniques (inversion, rejection, sampling from a Gaussian, Metropolis sampling), analogue Monte Carlo and importance sampling (exponential b...
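Two of the sampling techniques listed, inversion and rejection, can be sketched in a few lines (illustrative code, not taken from the lecture notes):

```python
import math, random

random.seed(42)

def sample_exponential(lam=1.0):
    """Inversion: solve F(x) = u for u ~ U(0,1), with F(x) = 1 - exp(-lam*x)."""
    return -math.log(1.0 - random.random()) / lam

def sample_half_gaussian():
    """Rejection: target f(x) = sqrt(2/pi) exp(-x^2/2) on x >= 0, with
    exponential envelope g(x) = exp(-x) and M = sqrt(2e/pi);
    accept with probability f(x) / (M g(x)) = exp(-(x - 1)^2 / 2)."""
    while True:
        x = sample_exponential()
        if random.random() < math.exp(-0.5 * (x - 1.0) ** 2):
            return x

n = 100_000
mean_exp = sum(sample_exponential() for _ in range(n)) / n
mean_half = sum(sample_half_gaussian() for _ in range(n)) / n
print(mean_exp, mean_half)  # expect ~1.0 and ~sqrt(2/pi) ~ 0.798
```

The rejection step here is the classical exponential-envelope construction for the half-normal distribution; mirroring the sign with probability 1/2 would give a full Gaussian sampler.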
International Nuclear Information System (INIS)
Kryeziu, D.
2006-09-01
The aim of this work was to test and validate the Monte Carlo (MC) ionization chamber simulation method for calculating the activity of radioactive solutions. This is required when no, or insufficient, experimental calibration figures are available, as well as to improve the accuracy of activity measurements for other radionuclides. Well-type or 4π γ ISOCAL IV ionization chambers (IC) are widely used in many national standards laboratories around the world. As secondary standard measuring systems, these radionuclide calibrators serve to maintain measurement consistency checks and to ensure the quality of standards disseminated to users for a wide range of radionuclides, many of which are of special interest in nuclear medicine as well as in other applications of radionuclide metrology. For the studied radionuclides, the calibration figures (efficiencies) and their respective volume correction factors are determined using the PENELOPE MC computer code system. The ISOCAL IV IC filled with nitrogen gas at approximately 1 MPa is simulated. The simulated models of the chamber are designed by means of reduced quadric equations, applying the appropriate mathematical transformations. The simulations are done for various container geometries of the standard solution: i) a sealed Jena glass 5 ml PTB standard ampoule, ii) a 10 ml (P6) vial and iii) a 10 R Schott Type 1+ vial. Simulation of the ISOCAL IV IC is explained. The effect of density variation of the nitrogen filling gas on the sensitivity of the chamber is investigated. The code is also used to examine the effects of using lead and copper shields, and to evaluate the sensitivity of the chamber to electrons and positrons. The Monte Carlo simulation method has been validated by comparing the calculated calibration figures with experimental ones available from the National Physical Laboratory (NPL), England, which are deduced from absolute activity measurements.
Quantum Monte Carlo approaches for correlated systems
Becca, Federico
2017-01-01
Over the past several decades, computational approaches to studying strongly-interacting systems have become increasingly varied and sophisticated. This book provides a comprehensive introduction to state-of-the-art quantum Monte Carlo techniques relevant for applications in correlated systems. It offers a clear overview of variational wave functions and a detailed presentation of stochastic sampling, including Markov chains and Langevin dynamics, which are developed into a discussion of Monte Carlo methods. The variational technique is described, from its foundations to a detailed description of its algorithms. Further topics discussed include optimisation techniques, real-time dynamics and projection methods, including Green's function, reptation and auxiliary-field Monte Carlo, from basic definitions to advanced algorithms for efficient codes, and the book concludes with recent developments on the continuum space. Quantum Monte Carlo Approaches for Correlated Systems provides an extensive reference ...
Monte Carlo Transport for Electron Thermal Transport
Chenhall, Jeffrey; Cao, Duc; Moses, Gregory
2015-11-01
The iSNB (implicit Schurtz-Nicolai-Busquet) multigroup electron thermal transport method of Cao et al. is adapted into a Monte Carlo transport method in order to better model the effects of non-local behavior. The end goal is a hybrid transport-diffusion method that combines Monte Carlo transport with discrete diffusion Monte Carlo (DDMC). The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the method will be presented. This work was supported by Sandia National Laboratories - Albuquerque and the University of Rochester Laboratory for Laser Energetics.
Calibration of the whole body counter at PSI
International Nuclear Information System (INIS)
Mayer, Sabine; Boschung, Markus; Fiechtner, Annette; Habegger, Ruedi; Meier, Kilian; Wernli, Christian
2008-01-01
At the Paul Scherrer Institut (PSI), measurements with the whole body counter are routinely carried out for occupationally exposed persons and occasionally for individuals of the population suspected of radioactive intake. In total about 400 measurements are performed per year. The whole body counter is based on a p-type high purity germanium (HPGe) coaxial detector mounted above a canvas chair in a shielded small room. The detector is used to detect the presence of radionuclides that emit photons with energies between 50 keV and 2 MeV. The room itself is made of iron from old railway rails to reduce the natural background radiation to 24 nSv/h. The present paper describes the calibration of the system with the IGOR phantom. Different body sizes are realized by different standardized configurations of polyethylene bricks, in which small tubes of calibration sources can be introduced. The efficiency of the detector was determined for four phantom geometries (P1, P2, P4 and P6), simulating human bodies in sitting position of 12 kg, 24 kg, 70 kg and 110 kg, respectively. The measurements were performed serially using five different radionuclide sources (40K, 60Co, 133Ba, 137Cs, 152Eu) within the phantom bricks. Based on the results of the experiment, an efficiency curve for each configuration and the detection limits for relevant radionuclides were determined. For routine measurements, the efficiency curve obtained with the phantom geometry P4 was chosen. The detection limits range from 40 Bq to 1000 Bq for selected radionuclides, applying a measurement time of 7 min. The proper calibration of the system, on the one hand, is essential for the routine measurements at PSI; on the other hand, it serves as a benchmark for the already initiated characterisation of the system with Monte Carlo simulations. (author)
Monte Carlo simulation applied to alpha spectrometry
International Nuclear Information System (INIS)
Baccouche, S.; Gharbi, F.; Trabelsi, A.
2007-01-01
Alpha-particle spectrometry is a widely used analytical method, in particular for pure alpha-emitting radionuclides. Monte Carlo simulation is an adequate tool to investigate the influence of various phenomena on this analytical method. We performed an investigation of those phenomena using the simulation code GEANT of CERN. The results concerning the geometrical detection efficiency in different measurement geometries agree with analytical calculations. This work confirms that Monte Carlo simulation of the solid angle of detection is a very useful tool to determine the detection efficiency with very good accuracy.
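The geometrical (solid-angle) detection efficiency mentioned above can be estimated by a simple ray-tracing Monte Carlo and checked against the analytic on-axis formula. A minimal sketch (illustrative geometry, not the GEANT setup of the paper):

```python
import math, random

random.seed(7)

def mc_geometric_efficiency(d, r, n=500_000):
    """Fraction of isotropically emitted particles from an on-axis point source
    at distance d that strike a coaxial disc detector of radius r."""
    hits = 0
    for _ in range(n):
        cos_t = random.uniform(-1.0, 1.0)          # isotropic: cos(theta) uniform
        if cos_t <= 0.0:
            continue                               # heading away from the detector plane
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        if d * sin_t / cos_t <= r:                 # radius where the ray crosses the plane
            hits += 1
    return hits / n

d, r = 2.0, 1.0
analytic = 0.5 * (1.0 - d / math.sqrt(d * d + r * r))   # exact solid-angle fraction
print(mc_geometric_efficiency(d, r), analytic)           # the two agree to MC statistics
```

For off-axis sources or collimated geometries the analytic formula becomes intractable, which is exactly where the Monte Carlo estimate earns its keep.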
Monte Carlo simulation of the standardization of {sup 22}Na using scintillation detector arrays
Energy Technology Data Exchange (ETDEWEB)
Sato, Y., E-mail: yss.sato@aist.go.j [National Metrology Institute of Japan, National Institute of Advanced Industrial Science and Technology, Quantum Radiation Division, Radioactivity and Neutron Section, Tsukuba Central 2, 1-1-1 Umezono, Tsukuba, Ibaraki 305-8568 (Japan); Murayama, H. [National Institute of Radiological Sciences, 4-9-1, Anagawa, Inage, Chiba 263-8555 (Japan); Yamada, T. [Japan Radioisotope Association, 2-28-45, Hon-komagome, Bunkyo, Tokyo 113-8941 (Japan); National Metrology Institute of Japan, National Institute of Advanced Industrial Science and Technology, Quantum Radiation Division, Radioactivity and Neutron Section, Tsukuba Central 2, 1-1-1 Umezono, Tsukuba, Ibaraki 305-8568 (Japan); Tohoku University, 6-6, Aoba, Aramaki, Aoba, Sendai 980-8579 (Japan); Hasegawa, T. [Kitasato University, 1-15-1, Kitasato, Sagamihara, Kanagawa 228-8555 (Japan); Oda, K. [Tokyo Metropolitan Institute of Gerontology, 1-1 Nakacho, Itabashi-ku, Tokyo 173-0022 (Japan); Unno, Y.; Yunoki, A. [National Metrology Institute of Japan, National Institute of Advanced Industrial Science and Technology, Quantum Radiation Division, Radioactivity and Neutron Section, Tsukuba Central 2, 1-1-1 Umezono, Tsukuba, Ibaraki 305-8568 (Japan)
2010-07-15
In order to calibrate PET devices by a sealed point source, we contrived an absolute activity measurement method for the sealed point source using scintillation detector arrays. This new method was verified by EGS5 Monte Carlo simulation.
Diffusion Monte Carlo approach versus adiabatic computation for local Hamiltonians
Bringewatt, Jacob; Dorland, William; Jordan, Stephen P.; Mink, Alan
2018-02-01
Most research regarding quantum adiabatic optimization has focused on stoquastic Hamiltonians, whose ground states can be expressed with only real non-negative amplitudes and thus for whom destructive interference is not manifest. This raises the question of whether classical Monte Carlo algorithms can efficiently simulate quantum adiabatic optimization with stoquastic Hamiltonians. Recent results have given counterexamples in which path-integral and diffusion Monte Carlo fail to do so. However, most adiabatic optimization algorithms, such as for solving MAX-k-SAT problems, use k-local Hamiltonians, whereas our previous counterexample for diffusion Monte Carlo involved n-body interactions. Here we present a 6-local counterexample which demonstrates that even for these local Hamiltonians there are cases where diffusion Monte Carlo cannot efficiently simulate quantum adiabatic optimization. Furthermore, we perform empirical testing of diffusion Monte Carlo on a standard well-studied class of permutation-symmetric tunneling problems and similarly find large advantages for quantum optimization over diffusion Monte Carlo.
Generalized hybrid Monte Carlo - CMFD methods for fission source convergence
International Nuclear Information System (INIS)
Wolters, Emily R.; Larsen, Edward W.; Martin, William R.
2011-01-01
In this paper, we generalize the recently published 'CMFD-Accelerated Monte Carlo' method and present two new methods that reduce the statistical error in CMFD-Accelerated Monte Carlo. The CMFD-Accelerated Monte Carlo method uses Monte Carlo to estimate nonlinear functionals used in low-order CMFD equations for the eigenfunction and eigenvalue. The Monte Carlo fission source is then modified to match the resulting CMFD fission source in a 'feedback' procedure. The two proposed methods differ from CMFD-Accelerated Monte Carlo in the definition of the required nonlinear functionals, but they have identical CMFD equations. The proposed methods are compared with CMFD-Accelerated Monte Carlo on a high dominance ratio test problem. All hybrid methods converge the Monte Carlo fission source almost immediately, leading to a large reduction in the number of inactive cycles required. The proposed methods stabilize the fission source more efficiently than CMFD-Accelerated Monte Carlo, leading to a reduction in the number of active cycles required. Finally, as in CMFD-Accelerated Monte Carlo, the apparent variance of the eigenfunction is approximately equal to the real variance, so the real error is well-estimated from a single calculation. This is an advantage over standard Monte Carlo, in which the real error can be underestimated due to inter-cycle correlation. (author)
Energy Technology Data Exchange (ETDEWEB)
Abbas, Mahmoud I., E-mail: mabbas@physicist.net [Physics Department, Faculty of Science, Alexandria University, 21511 Alexandria (Egypt); Badawi, M.S. [Physics Department, Faculty of Science, Alexandria University, 21511 Alexandria (Egypt); Ruskov, I.N. [Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, 1784 Sofia (Bulgaria); El-Khatib, A.M. [Physics Department, Faculty of Science, Alexandria University, 21511 Alexandria (Egypt); Grozdanov, D.N. [Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, 1784 Sofia (Bulgaria); Thabet, A.A. [Department of Medical Equipment Technology, Faculty of Allied Medical Sciences, Pharos University in Alexandria (Egypt); Kopatch, Yu.N. [Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Gouda, M.M. [Physics Department, Faculty of Science, Alexandria University, 21511 Alexandria (Egypt); Skoy, V.R. [Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation)
2015-01-21
Gamma-ray detector systems are important instruments in a broad range of sciences, and new setups are continually being developed. The most recent step in the evolution of detectors for nuclear spectroscopy is the construction of large arrays of detectors of different forms (for example, conical, pentagonal, hexagonal, etc.) and sizes, where the performance and the efficiency can be increased. In this work, a new direct numerical method (NAM), in an integral form and based on the efficiency transfer (ET) method, is used to calculate the full-energy peak efficiency of a single hexagonal NaI(Tl) detector. The algorithms and the calculations of the effective solid angle ratios for a point (isotropically irradiating) gamma source situated coaxially at different distances from the detector front-end surface, taking into account the attenuation of the gamma rays in the detector's material, end-cap and the other materials in-between the gamma source and the detector, are considered the core of this ET method. The full-energy peak efficiency values calculated by the NAM are found to be in good agreement with the measured experimental data.
International Nuclear Information System (INIS)
Cramer, S.N.
1984-01-01
The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described
Spectrometric methods used in the calibration of radiodiagnostic measuring instruments
Energy Technology Data Exchange (ETDEWEB)
De Vries, W [Rijksuniversiteit Utrecht (Netherlands)
1995-12-01
Recently a set of parameters for checking the quality of radiation for use in diagnostic radiology was established at the calibration facility of the Nederlands Meetinstituut (NMi). The establishment of the radiation qualities required re-evaluation of the correction factors for the primary air-kerma standards. Free-air ionisation chambers require several correction factors to measure air kerma according to its definition. These correction factors were calculated for the NMi free-air chamber by Monte Carlo simulations for monoenergetic photons in the energy range from 10 keV to 320 keV. The actual correction factors follow from weighting these monoenergetic correction factors with the air-kerma spectrum of the photon beam. This paper describes the determination of the photon spectra of the X-ray qualities used for the calibration of dosimetric instruments used in radiodiagnostics. The detector used for these measurements is a planar HPGe detector, placed in the direct beam of the X-ray machine. To convert the measured pulse-height spectrum to the actual photon spectrum, corrections must be made for fluorescent photon escape, single and multiple Compton scattering inside the detector, and detector efficiency. From the calculated photon spectra a number of parameters of the X-ray beam can be derived. The calculated first and second half-value layers in aluminium and copper are compared with the measured values of these parameters to validate the method of spectrum reconstruction. Moreover, the spectrum measurements offer the possibility to calibrate the X-ray generator in terms of maximum high voltage: the maximum photon energy in the spectrum is used as a standard for the calibration of kVp meters.
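The half-value-layer determination from a reconstructed spectrum can be sketched as follows: the kerma-weighted transmission is computed line by line and the thickness that halves the kerma is found by bisection. The fluences and attenuation coefficients below are made-up illustrative numbers, not NMi data, and the kerma weight is simplified to fluence times energy:

```python
import math

# illustrative spectrum: (fluence, energy_MeV, mu_Al_per_mm) per spectral line
spectrum = [
    (1.0, 0.030, 0.30),
    (2.0, 0.050, 0.10),
    (1.5, 0.080, 0.055),
]

def transmitted_kerma(t_mm):
    """Air-kerma-weighted transmission through t_mm of aluminium
    (energy-absorption weighting folded into fluence * energy for simplicity)."""
    return sum(phi * e * math.exp(-mu * t_mm) for phi, e, mu in spectrum)

def first_hvl(lo=0.0, hi=100.0, tol=1e-9):
    """Bisection for the aluminium thickness that halves the air kerma."""
    k0 = transmitted_kerma(0.0)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if transmitted_kerma(mid) > 0.5 * k0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

hvl = first_hvl()
print(hvl, transmitted_kerma(hvl) / transmitted_kerma(0.0))  # transmission ratio -> 0.5
```

Because the beam hardens as it is filtered, the second HVL computed the same way from the once-filtered spectrum comes out thicker than the first, which is what the paper's comparison exploits.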
International Nuclear Information System (INIS)
Cornejo Diaz, N.; Vergara Gil, A.; Jurado Vargas, M.
2010-01-01
The Monte Carlo method has become a valuable numerical laboratory framework in which to simulate complex physical systems. It is based on the generation of pseudo-random number sequences by numerical algorithms called random generators. In this work we assessed the suitability of different well-known random number generators for the simulation of gamma-ray spectrometry systems during efficiency calibrations. The assessment was carried out in two stages. The generators considered (Delphi's linear congruential, Mersenne Twister, XorShift, multiply-with-carry, universal virtual array, and a non-periodic logistic map based generator) were first evaluated with different statistical empirical tests, including moments, correlations, uniformity, independence of terms and the DIEHARD battery of tests. In a second step, an application-specific test was conducted by implementing the generators in our Monte Carlo program DETEFF and comparing the results obtained with them. The calculations were performed with two different CPUs, for a typical HPGe detector and a water sample in Marinelli geometry, with gamma rays between 59 and 1800 keV. For the non-periodic logistic map based generator, dependence of the most significant bits was evident. This explains the bias, in excess of 5%, of the efficiency values obtained with this generator. The results of the application-specific assessment and the statistical performance of the other algorithms studied indicate their suitability for the Monte Carlo simulation of gamma-ray spectrometry systems for efficiency calculations.
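A minimal version of the moments and uniformity checks described above (a sketch only; the paper's battery, including DIEHARD, is far more extensive), applied here to the Park-Miller "minimal standard" LCG:

```python
def minstd(seed=12345):
    """Park-Miller minimal-standard linear congruential generator,
    x_{n+1} = 48271 * x_n mod (2^31 - 1), yielding uniforms in (0, 1)."""
    m, a = 2**31 - 1, 48271
    x = seed
    while True:
        x = (a * x) % m
        yield x / m

gen = minstd()
n, bins = 100_000, 100
counts = [0] * bins
total = 0.0
for _ in range(n):
    u = next(gen)
    total += u                    # first-moment accumulator
    counts[int(u * bins)] += 1    # histogram for the chi-square test

mean = total / n
expected = n / bins
chi2 = sum((c - expected) ** 2 / expected for c in counts)
# for a uniform stream, chi2 should be near bins - 1 = 99 and the mean near 0.5
print(mean, chi2)
```

A generator with correlated high-order bits, like the logistic-map generator criticized in the abstract, would fail this kind of test long before it distorted an efficiency calculation.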
An overview of the dynamic calibration of piezoelectric pressure transducers
Theodoro, F. R. F.; Reis, M. L. C. C.; d’ Souto, C.
2018-03-01
Dynamic calibration is a research area that is still under development and is of great interest to the aerospace and automotive industries. This study discusses some concepts regarding dynamic measurements of pressure quantities and presents an overview of the dynamic calibration of pressure transducers. Studies conducted by the Institute of Aeronautics and Space on piezoelectric pressure transducer calibration in a shock tube are presented. We employed the Guide to the Expression of Uncertainty in Measurement (GUM) and a Monte Carlo method in the methodology. The results show that both the device and the methodology employed are adequate for calibrating the piezoelectric sensor.
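The GUM and Monte Carlo approaches can be contrasted on a toy calibration model. The sensitivity model S = V/P and all numbers below are hypothetical illustrations, not the shock-tube data from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical model: sensitivity S = V / P, with the measured voltage V and
# the reference pressure step P treated as independent Gaussian inputs
V, u_V = 10.0, 0.10     # mV
P, u_P = 2.0, 0.02      # bar

# first-order GUM propagation for S = V / P (relative uncertainties in quadrature)
S = V / P
u_S_gum = S * np.hypot(u_V / V, u_P / P)

# Monte Carlo propagation (GUM Supplement 1 style): sample the inputs,
# push them through the model, take the sample std as the combined uncertainty
n = 200_000
samples = rng.normal(V, u_V, n) / rng.normal(P, u_P, n)
u_S_mc = samples.std(ddof=1)

print(S, u_S_gum, u_S_mc)   # the two uncertainty estimates agree closely here
```

For a nearly linear model with small relative uncertainties the two routes coincide; the Monte Carlo route remains valid when the model is strongly non-linear or the output distribution is non-Gaussian, which is why the paper uses both.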
Variational Monte Carlo Technique
Indian Academy of Sciences (India)
Variational Monte Carlo Technique: Ground State Energies of Quantum Mechanical Systems. Sukanta Deb. General Article, Resonance – Journal of Science Education, Volume 19, Issue 8, August 2014, pp 713-739.
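The variational Monte Carlo technique the article covers can be sketched for the 1D harmonic oscillator, where the trial wavefunction exp(-alpha x^2/2) reproduces the exact ground state at alpha = 1 (illustrative code, not from the article):

```python
import math, random

random.seed(3)

def vmc_energy(alpha, n_steps=50_000, step=1.0):
    """Variational Monte Carlo for the 1D harmonic oscillator (hbar = m = omega = 1)
    with trial wavefunction psi(x) = exp(-alpha x^2 / 2).
    Local energy: E_L(x) = alpha/2 + x^2 (1 - alpha^2) / 2."""
    x, e_sum = 0.0, 0.0
    for _ in range(n_steps):
        x_new = x + random.uniform(-step, step)
        # Metropolis acceptance on |psi|^2 = exp(-alpha x^2)
        if random.random() < math.exp(-alpha * (x_new * x_new - x * x)):
            x = x_new
        e_sum += alpha / 2.0 + x * x * (1.0 - alpha * alpha) / 2.0
    return e_sum / n_steps

for a in (0.6, 0.8, 1.0, 1.2):
    print(a, vmc_energy(a))   # variational minimum, exactly 0.5, at alpha = 1
```

At alpha = 1 the local energy is constant, so the estimator has zero variance there; away from the optimum the energy rises and the variance grows, both signatures of the variational principle the article develops.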
Monte Carlo simulation of neutron counters for safeguards applications
International Nuclear Information System (INIS)
Looman, Marc; Peerani, Paolo; Tagziria, Hamid
2009-01-01
MCNP-PTA is a new Monte Carlo code for the simulation of neutron counters for nuclear safeguards applications, developed at the Joint Research Centre (JRC) in Ispra (Italy). After some preliminary considerations outlining the general aspects involved in the computational modelling of neutron counters, this paper describes the specific details and approximations which make up the basis of the model implemented in the code. One of the major improvements allowed by the use of Monte Carlo simulation is a considerable reduction in both the experimental work and the reference materials required for the calibration of the instruments. This new approach to the calibration of counters using Monte Carlo simulation techniques is also discussed.
Parallel processing Monte Carlo radiation transport codes
International Nuclear Information System (INIS)
McKinney, G.W.
1994-01-01
Issues related to distributed-memory multiprocessing as applied to Monte Carlo radiation transport are discussed. Measurements of communication overhead are presented for the radiation transport code MCNP which employs the communication software package PVM, and average efficiency curves are provided for a homogeneous virtual machine
Wielandt acceleration for MCNP5 Monte Carlo eigenvalue calculations
International Nuclear Information System (INIS)
Brown, F.
2007-01-01
Monte Carlo criticality calculations use the power iteration method to determine the eigenvalue (k_eff) and eigenfunction (fission source distribution) of the fundamental mode. A recently proposed method for accelerating convergence of the Monte Carlo power iteration using Wielandt's method has been implemented in a test version of MCNP5. The method is shown to provide dramatic improvements in convergence rates and to greatly reduce the possibility of false convergence assessment. The method is effective and efficient, improving the Monte Carlo figure-of-merit for many problems. In addition, the method should eliminate most of the underprediction bias in confidence intervals for Monte Carlo criticality calculations. (authors)
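The effect of Wielandt's shift can be demonstrated on a small matrix analogue of the fission-source iteration (a sketch of the linear-algebra idea, not the MCNP5 implementation): the eigenproblem A x = k x is replaced by (I - A/k_w)^(-1) A x = k' x with k' = k/(1 - k/k_w), which shares eigenvectors with A but has a much smaller dominance ratio, so the source converges in far fewer cycles.

```python
import numpy as np

def power_iteration(M, tol=1e-12, max_it=10_000):
    """Plain power iteration; returns the dominant eigenvalue and cycle count."""
    x = np.ones(M.shape[0])
    k_old = 0.0
    for it in range(1, max_it + 1):
        y = M @ x
        k = np.linalg.norm(y) / np.linalg.norm(x)
        x = y / np.linalg.norm(y)
        if abs(k - k_old) < tol:
            return k, it
        k_old = k
    return k, max_it

# a small symmetric "fission matrix" with eigenvalues 1.0, 0.9, 0.5
# (dominance ratio 0.9, i.e. slow standard power iteration)
Q, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(3, 3)))
A = Q @ np.diag([1.0, 0.9, 0.5]) @ Q.T

k_w = 1.2                                     # Wielandt shift, chosen above k_eff
B = np.linalg.solve(np.eye(3) - A / k_w, A)   # shifted operator

k_std, it_std = power_iteration(A)
k_shift, it_shift = power_iteration(B)
k_rec = k_shift / (1.0 + k_shift / k_w)       # invert k' = k / (1 - k/k_w)
print(it_std, it_shift, k_std, k_rec)         # shifted iteration needs far fewer cycles
```

Here the shifted dominance ratio drops from 0.9 to 0.6, which is the mechanism behind the reduced inactive-cycle count reported for the MCNP5 test version.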
The LOFAR long baseline snapshot calibrator survey
Moldón, J.; et al.; Carbone, D.; Markoff, S.; Wise, M.W.
2015-01-01
Aims. An efficient means of locating calibrator sources for international LOw Frequency ARray (LOFAR) is developed and used to determine the average density of usable calibrator sources on the sky for subarcsecond observations at 140 MHz. Methods. We used the multi-beaming capability of LOFAR to
Energy Technology Data Exchange (ETDEWEB)
Hunt, John
1998-12-31
A Monte Carlo program which uses a voxel phantom has been developed to simulate in vivo measurement systems for calibration purposes. The calibration method presented here employs a mathematical phantom, produced in the form of volume elements (voxels), obtained through magnetic resonance images of the human body. The calibration method uses the Monte Carlo technique to simulate the tissue contamination, the transport of the photons through the tissues and the detection of the radiation. The program simulates the transport and detection of photons between 0.035 and 2 MeV and uses, for the body representation, a voxel phantom with a format of 871 slices each of 277 x 148 picture elements. The Monte Carlo code was applied to the calibration of in vivo systems and to estimate differences in counting efficiencies between homogeneous and non-homogeneous radionuclide distributions in the lung. Calculations show a factor of 20 between deposition of {sup 241} Am at the back of the lung compared with the front. The program was also used to estimate the {sup 137} Cs body burden of an internally contaminated individual, counted with an 8 x 4 NaI(Tl) detector, and the {sup 241} Am body burden of an internally contaminated individual who was counted using a planar germanium detector. (author) 24 refs., 38 figs., 23 tabs.
Monte Carlo codes and Monte Carlo simulator program
International Nuclear Information System (INIS)
Higuchi, Kenji; Asai, Kiyoshi; Suganuma, Masayuki.
1990-03-01
Four typical Monte Carlo codes, KENO-IV, MORSE, MCNP and VIM, have been vectorized on the VP-100 at the Computing Center, JAERI. The problems in vector processing of Monte Carlo codes on vector processors have become clear through this work. As a result, it is recognized that there are difficulties in obtaining good performance in vector processing of Monte Carlo codes. A Monte Carlo computing machine, which processes Monte Carlo codes with high performance, has been under development at our Computing Center since 1987. The concept of the Monte Carlo computing machine and its performance have been investigated and estimated by using a software simulator. In this report the problems in the vectorization of Monte Carlo codes, the Monte Carlo pipelines proposed to mitigate these difficulties, and the results of the performance estimation of the Monte Carlo computing machine by the simulator are described. (author)
Improvements for Monte Carlo burnup calculation
Energy Technology Data Exchange (ETDEWEB)
Shenglong, Q.; Dong, Y.; Danrong, S.; Wei, L., E-mail: qiangshenglong@tsinghua.org.cn, E-mail: d.yao@npic.ac.cn, E-mail: songdr@npic.ac.cn, E-mail: luwei@npic.ac.cn [Nuclear Power Inst. of China, Cheng Du, Si Chuan (China)
2015-07-01
Monte Carlo burnup calculation is a development trend of reactor physics, and much work remains to be done for engineering applications. Based on the Monte Carlo burnup code MOI, non-fuel burnup calculation methods and critical search suggestions are presented in this paper. For non-fuel burnup, a mixed burnup mode will improve the accuracy and efficiency of the burnup calculation. For critical search of the control rod position, a new method called ABN, based on the ABA method used by MC21, is proposed for the first time in this paper. (author)
2009-01-01
Carlo Rubbia turned 75 on March 31, and CERN held a symposium to mark his birthday and pay tribute to his impressive contribution to both CERN and science. Carlo Rubbia, 4th from right, together with the speakers at the symposium.On 7 April CERN hosted a celebration marking Carlo Rubbia’s 75th birthday and 25 years since he was awarded the Nobel Prize for Physics. "Today we will celebrate 100 years of Carlo Rubbia" joked CERN’s Director-General, Rolf Heuer in his opening speech, "75 years of his age and 25 years of the Nobel Prize." Rubbia received the Nobel Prize along with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. During the symposium, which was held in the Main Auditorium, several eminent speakers gave lectures on areas of science to which Carlo Rubbia made decisive contributions. Among those who spoke were Michel Spiro, Director of the French National Insti...
International Nuclear Information System (INIS)
Chen, Yizheng; Qiu, Rui; Li, Chunyan; Wu, Zhen; Li, Junli
2016-01-01
In vivo measurement is a main method of internal contamination evaluation, particularly for large numbers of people after a nuclear accident. Before practical application, it is necessary to obtain the counting efficiency of the detector by calibration. Virtual calibration based on Monte Carlo simulation usually uses a reference human computational phantom, and the morphological difference between the monitored individual and the calibrated phantom may lead to a deviation in the counting efficiency. Therefore, a phantom library covering a wide range of heights and total body masses is needed. In this study, a Chinese reference adult male polygon surface (CRAM-S) phantom was constructed based on the CRAM voxel phantom, with the organ models adjusted to match the Chinese reference data. The CRAM-S phantom was then transformed to a sitting posture for convenience in practical monitoring. Referring to the mass and height distribution of the Chinese adult male, a phantom library containing 84 phantoms was constructed by deforming the reference surface phantom. Phantoms in the library have 7 different heights ranging from 155 cm to 185 cm, with 12 phantoms of different total body masses at each height. As an example of application, organ-specific and total counting efficiencies of Ba-133 were calculated using the MCNPX code, with two series of phantoms selected from the library. The influence of morphological variation on the counting efficiency was analyzed. The results show that using only the reference phantom in virtual calibration may lead to an error of 68.9% in the total counting efficiency. Thus the influence of morphological difference on virtual calibration can be greatly reduced by using a phantom library with a wide range of masses and heights instead of a single reference phantom. (paper)
Chen, Yizheng; Qiu, Rui; Li, Chunyan; Wu, Zhen; Li, Junli
2016-03-01
In vivo measurement is a main method of internal contamination evaluation, particularly for large numbers of people after a nuclear accident. Before practical application, it is necessary to obtain the counting efficiency of the detector by calibration. Virtual calibration based on Monte Carlo simulation usually uses a reference human computational phantom, and the morphological difference between the monitored individual and the calibrated phantom may lead to a deviation in the counting efficiency. Therefore, a phantom library covering a wide range of heights and total body masses is needed. In this study, a Chinese reference adult male polygon surface (CRAM_S) phantom was constructed based on the CRAM voxel phantom, with the organ models adjusted to match the Chinese reference data. The CRAM_S phantom was then transformed to a sitting posture for convenience in practical monitoring. Referring to the mass and height distribution of the Chinese adult male, a phantom library containing 84 phantoms was constructed by deforming the reference surface phantom. Phantoms in the library have 7 different heights ranging from 155 cm to 185 cm, with 12 phantoms of different total body masses at each height. As an example of application, organ-specific and total counting efficiencies of Ba-133 were calculated using the MCNPX code, with two series of phantoms selected from the library. The influence of morphological variation on the counting efficiency was analyzed. The results show that using only the reference phantom in virtual calibration may lead to an error of 68.9% in the total counting efficiency. Thus the influence of morphological difference on virtual calibration can be greatly reduced by using a phantom library with a wide range of masses and heights instead of a single reference phantom.
Absolute calibration technique for spontaneous fission sources
International Nuclear Information System (INIS)
Zucker, M.S.; Karpf, E.
1984-01-01
An absolute calibration technique for a spontaneously fissioning nuclide (which involves no arbitrary parameters) allows unique determination of the detector efficiency for that nuclide, and hence of the fission source strength.
Energy Technology Data Exchange (ETDEWEB)
Kim, Hyun Suk; Ye, Sung Joon [Seoul National University, Seoul (Korea, Republic of); Smith, Martin B.; Koslowsky, Martin R. [Bubble Technology Industries Inc., Chalk River (Canada); Kwak, Sung Woo [Korea Institute of Nuclear Nonproliferation And Control (KINAC), Daejeon (Korea, Republic of); Kim Gee Hyun [Sejong University, Seoul (Korea, Republic of)
2017-03-15
Simultaneous detection of neutrons and gamma rays has become much more practicable by taking advantage of the good gamma-ray discrimination properties of the pulse shape discrimination (PSD) technique. Recently, we introduced a commercial CLYC system in Korea and performed initial characterization and simulation studies of the CLYC detector system to provide references for the future implementation of the dual-mode scintillator system in various studies and applications. We evaluated a CLYC detector with 95% {sup 6}Li enrichment using various gamma-ray sources and a {sup 252}Cf neutron source, validating our Monte Carlo simulation results against measurement experiments. Absolute full-energy peak efficiency values were calculated for the gamma-ray sources and the neutron source using MCNP6 and compared with measurements of the calibration sources. In addition, the behavioral characteristics of neutrons were validated by comparing simulations and experiments on neutron moderation with various polyethylene (PE) moderator thicknesses. Both results showed good agreement in the overall characteristics of the gamma and neutron detection efficiencies, with a consistent ~20% discrepancy. Furthermore, moderation of neutrons emitted from {sup 252}Cf showed similarities between the simulation and the experiment in terms of their relative ratios depending on the thickness of the PE moderator. A CLYC detector system was characterized for its energy resolution and detection efficiency, and Monte Carlo simulations of the detector system were validated experimentally. Validation of the simulation results in the overall trend of the CLYC detector behavior will provide the fundamental basis and validity of follow-up Monte Carlo simulation studies for the development of our dual-particle imager using a rotational modulation collimator.
Random Numbers and Monte Carlo Methods
Scherer, Philipp O. J.
Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. Monte Carlo methods, which sample the integration volume at randomly chosen points, are very useful for the calculation of thermodynamic averages. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with a given probability distribution, which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by preferentially sampling the important configurations. Finally the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
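The Metropolis step summarized above can be sketched in a few lines of Python; the target density (a standard normal) and all numerical choices are illustrative, not taken from the book:

```python
import math
import random

random.seed(42)

def metropolis_gauss(n_samples, step=1.0):
    """Random-walk Metropolis sampling of the standard normal density."""
    x, samples = 0.0, []
    for _ in range(n_samples):
        x_new = x + random.uniform(-step, step)
        # Acceptance ratio p(x_new)/p(x) for p(x) proportional to exp(-x^2/2).
        if random.random() < min(1.0, math.exp((x * x - x_new * x_new) / 2.0)):
            x = x_new
        samples.append(x)
    return samples

samples = metropolis_gauss(200_000)
# The second moment of the samples should be close to 1, the target's variance.
second_moment = sum(s * s for s in samples) / len(samples)
```

Because the proposal is symmetric, only the ratio of target densities enters the acceptance test, so the (generally unknown) normalization constant of the distribution is never needed.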
CERN radiation protection (RP) calibration facilities
Energy Technology Data Exchange (ETDEWEB)
Pozzi, Fabio
2016-04-14
Radiation protection calibration facilities are essential to ensure the correct operation of radiation protection instrumentation. Calibrations are performed in specific radiation fields according to the type of instrument to be calibrated: neutrons, photons, X-rays, beta and alpha particles. Some of the instruments are also tested in mixed radiation fields such as those often encountered close to high-energy particle accelerators. Moreover, calibration facilities are of great importance for evaluating the performance of prototype detectors; testing and measuring the response of a prototype detector to well-known and well-characterized radiation fields contributes to improving and optimizing its design and capabilities. The CERN Radiation Protection group is in charge of performing the regular calibrations of all CERN radiation protection devices; these include operational and passive dosimeters, neutron and photon survey meters, and fixed radiation detectors that monitor the ambient dose equivalent, H*(10), inside CERN accelerators and at the CERN borders. A new state-of-the-art radiation protection calibration facility was designed, constructed and commissioned following the related ISO recommendations, replacing the previous ageing (more than 30 years old) laboratory. The new laboratory also aims at official accreditation according to the ISO standards in order to be able to issue certified calibrations. Four radiation fields are provided: neutron, photon and beta sources and an X-ray generator. The construction involved more than civil engineering work: many radiation protection studies were performed to provide a facility that could answer CERN's calibration needs and fulfil all related safety requirements. Monte Carlo simulations proved to be a valuable tool for the optimization of the building design, the radiation protection aspects, e.g. shielding, and, as a consequence, the overall cost. After the source and irradiator installation
A methodology to develop computational phantoms with adjustable posture for WBC calibration
Ferreira Fonseca, T. C.; Bogaerts, R.; Hunt, John; Vanhavere, F.
2014-11-01
A Whole Body Counter (WBC) is a facility used to routinely assess the internal contamination of exposed workers, especially in the case of radiation release accidents. The calibration of the counting device is usually done using anthropomorphic physical phantoms representing the human body. Because constructing representative physical phantoms is challenging, virtual calibration has been introduced. The use of computational phantoms and the Monte Carlo method to simulate radiation transport has been demonstrated to be a worthy alternative. In this study we introduce a methodology developed for the creation of realistic computational voxel phantoms with adjustable posture for WBC calibration. The methodology makes use of different software packages to enable the creation and modification of computational voxel phantoms. This allows voxel phantoms to be developed on demand for the calibration of different WBC configurations, which in turn helps to study the major source of uncertainty associated with the in vivo measurement routine: the difference between the calibration phantoms and the real persons being counted. The use of realistic computational phantoms also helps to optimize the counting measurement. Open-source packages such as MakeHuman and Blender have been used for the creation and modelling of 3D humanoid characters based on polygonal mesh surfaces. In addition, in-house software was developed to convert the binary 3D voxel grid into an MCNPX input file. This paper summarizes the development of a library of phantoms of the human body that uses two basic phantoms, MaMP and FeMP (Male and Female Mesh Phantoms), to create a set of male and female phantoms that vary both in height and in weight. Two sets of MaMP and FeMP phantoms were developed and used for efficiency calibration of two different WBC set-ups: the Doel NPP WBC laboratory and the AGM laboratory of SCK-CEN in Mol, Belgium.
A methodology to develop computational phantoms with adjustable posture for WBC calibration
International Nuclear Information System (INIS)
Fonseca, T C Ferreira; Vanhavere, F; Bogaerts, R; Hunt, John
2014-01-01
A Whole Body Counter (WBC) is a facility used to routinely assess the internal contamination of exposed workers, especially in the case of radiation release accidents. The calibration of the counting device is usually done using anthropomorphic physical phantoms representing the human body. Because constructing representative physical phantoms is challenging, virtual calibration has been introduced. The use of computational phantoms and the Monte Carlo method to simulate radiation transport has been demonstrated to be a worthy alternative. In this study we introduce a methodology developed for the creation of realistic computational voxel phantoms with adjustable posture for WBC calibration. The methodology makes use of different software packages to enable the creation and modification of computational voxel phantoms. This allows voxel phantoms to be developed on demand for the calibration of different WBC configurations, which in turn helps to study the major source of uncertainty associated with the in vivo measurement routine: the difference between the calibration phantoms and the real persons being counted. The use of realistic computational phantoms also helps to optimize the counting measurement. Open-source packages such as MakeHuman and Blender have been used for the creation and modelling of 3D humanoid characters based on polygonal mesh surfaces. In addition, in-house software was developed to convert the binary 3D voxel grid into an MCNPX input file. This paper summarizes the development of a library of phantoms of the human body that uses two basic phantoms, MaMP and FeMP (Male and Female Mesh Phantoms), to create a set of male and female phantoms that vary both in height and in weight. Two sets of MaMP and FeMP phantoms were developed and used for efficiency calibration of two different WBC set-ups: the Doel NPP WBC laboratory and the AGM laboratory of SCK-CEN in Mol, Belgium. (paper)
Recommender engine for continuous-time quantum Monte Carlo methods
Huang, Li; Yang, Yi-feng; Wang, Lei
2017-03-01
Recommender systems play an essential role in the modern business world. They recommend favorable items such as books, movies, and search queries to users based on their past preferences. Applying similar ideas and techniques to Monte Carlo simulations of physical systems boosts their efficiency without sacrificing accuracy. Exploiting the quantum to classical mapping inherent in the continuous-time quantum Monte Carlo methods, we construct a classical molecular gas model to reproduce the quantum distributions. We then utilize powerful molecular simulation techniques to propose efficient quantum Monte Carlo updates. The recommender engine approach provides a general way to speed up the quantum impurity solvers.
Direct megavoltage photon calibration service in Australia
International Nuclear Information System (INIS)
Butler, D.J.; Ramanthan, G.; Oliver, C.; Cole, A.; Harty, P.D.; Wright, T.; Webb, D.V.; Lye, J.; Followill, D.S.
2014-01-01
The Australian Radiation Protection and Nuclear Safety Agency (ARPANSA) maintains the Australian primary standard of absorbed dose. Until recently, the standard was used to calibrate ionisation chambers only in {sup 60}Co gamma rays. These chambers are then used by radiotherapy clinics to determine linac output, using a correction factor (k{sub Q}) to take into account the different spectra of {sup 60}Co and the linac. Over the period 2010–2013, ARPANSA adapted the primary standard to work in megavoltage linac beams, and has developed a calibration service at three photon beams (6, 10 and 18 MV) from an Elekta Synergy linac. We describe the details of the new calibration service, the method validation and the use of the new calibration factors with the International Atomic Energy Agency's TRS-398 dosimetry Code of Practice. The expected changes in absorbed dose measurements in the clinic when shifting from {sup 60}Co to the direct calibration are determined. For a Farmer chamber (model 2571), the measured chamber calibration coefficient is expected to be reduced by 0.4, 1.0 and 1.1% respectively for these three beams when compared to the factor derived from {sup 60}Co. These results are in overall agreement with international absorbed dose standards and with the 2010 calculations of k{sub Q} factors by Muir and Rogers using Monte Carlo techniques. The reasons for and against moving to the new service are discussed in the light of the requirements of clinical dosimetry.
Clinical dosimetry in photon radiotherapy. A Monte Carlo based investigation
International Nuclear Information System (INIS)
Wulff, Joerg
2010-01-01
Practical clinical dosimetry is a fundamental step within the radiation therapy process and aims at quantifying the absorbed radiation dose within a 1-2% uncertainty. To achieve this level of accuracy, corrections are needed for calibrated, air-filled ionization chambers, which are used for dose measurement. The correction procedures are based on the Spencer-Attix cavity theory and are defined in current dosimetry protocols. Energy-dependent corrections for deviations from calibration beams account for the changed ionization chamber response in the treatment beam. The corrections applied are usually based on semi-analytical models or measurements and are generally hard to determine because their magnitude is only a few percent or even less. Furthermore, the corrections are defined for fixed geometrical reference conditions and do not apply to non-reference conditions in modern radiotherapy applications. The stochastic Monte Carlo method for the simulation of radiation transport has become a valuable tool in the field of medical physics. As a suitable tool for calculating these corrections with high accuracy, the simulations enable the investigation of ionization chambers under various conditions. The aim of this work is the consistent investigation of ionization chamber dosimetry in photon radiation therapy with the use of Monte Carlo methods. Monte Carlo systems now exist which in principle enable the accurate calculation of ionization chamber response. Still, their direct use for studies of this type is limited by the long calculation times needed for a meaningful result with a small statistical uncertainty, inherent to every result of a Monte Carlo simulation. Besides heavy use of computer hardware, variance reduction techniques can be applied to reduce the required calculation time. Methods for increasing the efficiency of the simulations were developed and incorporated into a modern and established Monte Carlo simulation environment
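Variance reduction, which the thesis identifies as essential for tractable ionization-chamber simulations, can be illustrated with a generic importance-sampling sketch; the tail-probability problem below is purely illustrative and unrelated to chamber dosimetry:

```python
import math
import random

random.seed(1)

def tail_prob_importance(n, a=3.0):
    """Estimate P(X > a) for X ~ N(0, 1) by importance sampling.

    Proposal: Y = a + Exp(1), concentrated in the tail of interest.
    Weight:   phi(y) / g(y), with g(y) = exp(-(y - a)) for y > a.
    """
    total = 0.0
    for _ in range(n):
        y = a + random.expovariate(1.0)
        phi = math.exp(-y * y / 2.0) / math.sqrt(2.0 * math.pi)
        total += phi * math.exp(y - a)   # phi(y) / g(y)
    return total / n

estimate = tail_prob_importance(20_000)   # exact value is about 1.35e-3
```

Sampling X directly from N(0, 1) would waste nearly every draw for a = 3; shifting the proposal into the tail and reweighting gives a far smaller variance at the same computational cost, which is the same principle exploited by variance reduction in radiation transport codes.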
International Nuclear Information System (INIS)
Waller, W.C.; Cram, M.E.; Hall, J.E.
1975-01-01
For any measurement to have meaning, it must be related to generally accepted standard units by a valid and specified system of comparison. To calibrate well-logging tools, sensing systems are designed which produce consistent and repeatable indications over the range for which the tool was intended. The basics of calibration theory, procedures, and calibration record presentation are reviewed. Calibrations for induction, electrical, radioactivity, and sonic logging tools are discussed. The authors' intent is to provide an understanding of the sources of errors, of the way errors are minimized in the calibration process, and of the significance of changes in recorded calibration data.
Leonardo Rossi
Carlo Caso (1940 - 2007) Our friend and colleague Carlo Caso passed away on July 7th, after several months of courageous fight against cancer. Carlo spent most of his scientific career at CERN, taking an active part in the experimental programme of the laboratory. His long and fruitful involvement in particle physics started in the sixties, in the Genoa group led by G. Tomasini. He then carried out several experiments using the CERN liquid hydrogen bubble chambers (first the 2000HBC and later BEBC) to study various facets of the production and decay of meson and baryon resonances. He later formed his own group and joined the NA27 Collaboration to exploit the EHS Spectrometer with a rapid cycling bubble chamber as vertex detector. Amongst their many achievements, they were the first to measure, with excellent precision, the lifetime of the charmed D mesons. At the start of the LEP era, Carlo and his group moved to the DELPHI experiment, participating in the construction and running of the HPC electromagnetic c...
Variational Monte Carlo Technique
Indian Academy of Sciences (India)
Indian Academy of Sciences (India)
Markov Chain Monte Carlo - Examples. Arnab Chakraborty. General Article, Resonance – Journal of Science Education, Volume 7, Issue 3, March 2002, pp. 25-34. Permanent link: https://www.ias.ac.in/article/fulltext/reso/007/03/0025-0034
Monte Carlo and Quasi-Monte Carlo Sampling
Lemieux, Christiane
2009-01-01
Presents essential tools for using quasi-Monte Carlo sampling in practice. This book focuses on core issues of Monte Carlo methods: uniform and non-uniform random number generation and variance reduction techniques. It also covers several aspects of quasi-Monte Carlo methods.
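The contrast between pseudo-random and quasi-Monte Carlo sampling that the book treats can be seen with a van der Corput sequence, the simplest low-discrepancy construction (a minimal sketch; the integrand is illustrative):

```python
def van_der_corput(n, base=2):
    """n-th element of the base-b van der Corput low-discrepancy sequence."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)   # peel off base-b digits of n...
        q += r * bk              # ...and mirror them about the radix point
        bk /= base
    return q

# Quasi-Monte Carlo estimate of the integral of x^2 over [0, 1] (exact: 1/3).
N = 1024
points = [van_der_corput(i) for i in range(1, N + 1)]
qmc_estimate = sum(x * x for x in points) / N
```

Because the points fill the unit interval far more evenly than pseudo-random draws, the integration error decays roughly like log(N)/N rather than the 1/sqrt(N) of plain Monte Carlo.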
International Nuclear Information System (INIS)
Guerra, J.G.; Rubiano, J.G.; Winter, G.; Guerra, A.G.; Alonso, H.; Arnedo, M.A.; Tejera, A.; Gil, J.M.; Rodríguez, R.; Martel, P.; Bolivar, J.P.
2015-01-01
The determination of the activity concentration of a specific radionuclide in a sample by gamma spectrometry requires knowledge of the full energy peak efficiency (FEPE) at the energy of interest. The difficulties of experimental calibration make it advisable to have alternative methods for FEPE determination, such as simulation of photon transport in the crystal by the Monte Carlo method, which requires accurate knowledge of the characteristics and geometry of the detector. The characterization process is mainly carried out by Canberra Industries Inc. using proprietary techniques and methodologies developed by that company. It is a costly procedure (due to shipping and to the cost of the process itself), and for some research laboratories an alternative in situ procedure can be very useful. The main goal of this paper is to find an alternative to this costly characterization process by establishing a method for optimizing the parameters characterizing the detector, through a computational procedure which can be reproduced at a standard research lab. The method determines the detector geometric parameters by using Monte Carlo simulation in parallel with an optimization process based on evolutionary algorithms, starting from a set of reference FEPEs determined experimentally or computationally. The proposed method has proven to be effective and simple to implement. It provides a set of characterization parameters which have been successfully validated for different source-detector geometries, and also for a wide range of environmental samples and certified materials. - Highlights: • A computational method for characterizing an HPGe spectrometer has been developed. • The detector is characterized using as reference photopeak efficiencies obtained experimentally or by Monte Carlo calibration. • The characterization obtained has been validated for samples with different geometries and compositions. • Good agreement
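The optimization loop described, detector parameters adjusted until simulated efficiencies match a set of reference FEPEs, can be sketched with a toy power-law efficiency model standing in for the actual Monte Carlo transport; the model form, energies and parameter values are all hypothetical:

```python
import random

random.seed(7)

# Toy stand-in for the Monte Carlo-computed efficiency: a power law in energy.
def fepe(energy, a, b):
    return a * energy ** (-b)

energies = [0.06, 0.122, 0.344, 0.662, 1.173, 1.332]   # MeV, hypothetical lines
reference = [fepe(e, 0.05, 0.9) for e in energies]      # "reference" FEPEs

def loss(params):
    a, b = params
    return sum((fepe(e, a, b) - r) ** 2 for e, r in zip(energies, reference))

# (1+1) evolution strategy: mutate the parameters, keep a child only if it
# improves the agreement with the reference efficiencies.
best = [0.1, 0.5]
initial_loss = best_loss = loss(best)
sigma = 0.05
for _ in range(5000):
    child = [p + random.gauss(0.0, sigma) for p in best]
    child_loss = loss(child)
    if child_loss < best_loss:
        best, best_loss = child, child_loss
    sigma *= 0.999   # slowly shrink the mutation strength
```

In the paper's setting each loss evaluation would be a full Monte Carlo transport run, so the evolutionary search trades a modest number of expensive simulations for freedom from gradients and from the proprietary characterization procedure.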
A novel iterative energy calibration method for composite germanium detectors
International Nuclear Information System (INIS)
Pattabiraman, N.S.; Chintalapudi, S.N.; Ghugre, S.S.
2004-01-01
An automatic method for energy calibration of the observed experimental spectrum has been developed. The method is based on an iterative algorithm and provides an efficient way to perform energy calibrations after establishing the weights of the calibration data. An application of this novel technique to data acquired using composite detectors in an in-beam γ-ray spectroscopy experiment is presented.
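The idea of weighting calibration data within an iterative fit can be sketched as follows; the channel positions, energies and reweighting scheme are hypothetical and do not reproduce the paper's algorithm:

```python
# Linear energy calibration E = gain * channel + offset, fitted by weighted
# least squares and iterated: weights are recomputed from the current
# residuals so that poorly fitting peaks contribute less.
channels = [406.1, 815.5, 1370.9, 2596.0]     # hypothetical peak centroids
energies = [121.78, 244.70, 411.12, 778.90]   # keV, hypothetical lines

def weighted_linefit(x, y, w):
    """Weighted least-squares straight-line fit via the normal equations."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    gain = sxy / sxx
    return gain, my - gain * mx

weights = [1.0] * len(channels)
for _ in range(5):                      # a few reweighting passes suffice here
    gain, offset = weighted_linefit(channels, energies, weights)
    residuals = [e - (gain * c + offset) for c, e in zip(channels, energies)]
    weights = [1.0 / (1e-6 + r * r) for r in residuals]
```

Each pass refits the line and then down-weights peaks with large residuals, so a single mis-identified peak cannot drag the calibration; for clean data the fit converges after the first one or two iterations.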
A novel iterative energy calibration method for composite germanium detectors
Energy Technology Data Exchange (ETDEWEB)
Pattabiraman, N.S.; Chintalapudi, S.N.; Ghugre, S.S. E-mail: ssg@alpha.iuc.res.in
2004-07-01
An automatic method for energy calibration of the observed experimental spectrum has been developed. The method is based on an iterative algorithm and provides an efficient way to perform energy calibrations after establishing the weights of the calibration data. An application of this novel technique to data acquired using composite detectors in an in-beam {gamma}-ray spectroscopy experiment is presented.
Ensemble bayesian model averaging using markov chain Monte Carlo sampling
Energy Technology Data Exchange (ETDEWEB)
Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL
2008-01-01
Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper, Raftery et al. (Mon Weather Rev 133:1155-1174, 2005) recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
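As an illustration of the MCMC side of this comparison, a random-walk Metropolis chain (a simple stand-in for DREAM) can sample the posterior of the BMA weight for a synthetic two-member ensemble; the observations, forecasts and predictive spread are invented for the sketch:

```python
import math
import random

random.seed(3)

# Synthetic two-member ensemble: sample the posterior of the BMA weight w of
# model 1 under a Gaussian mixture likelihood with a flat prior on [0, 1].
obs    = [1.1, 1.9, 3.2, 3.9, 5.1]
model1 = [1.0, 2.0, 3.0, 4.0, 5.0]   # forecasts close to the observations
model2 = [2.0, 1.0, 4.5, 3.0, 6.5]   # a weaker set of forecasts
sigma = 0.5                          # assumed common predictive spread

def log_lik(w):
    total = 0.0
    for y, f1, f2 in zip(obs, model1, model2):
        p1 = math.exp(-(y - f1) ** 2 / (2 * sigma ** 2))
        p2 = math.exp(-(y - f2) ** 2 / (2 * sigma ** 2))
        total += math.log(w * p1 + (1.0 - w) * p2 + 1e-300)
    return total

w, chain = 0.5, []
for _ in range(20_000):
    w_new = min(1.0, max(0.0, w + random.gauss(0.0, 0.1)))  # crude boundary clip
    if math.log(random.random() + 1e-300) < log_lik(w_new) - log_lik(w):
        w = w_new
    chain.append(w)

w_mean = sum(chain[5000:]) / len(chain[5000:])   # posterior mean after burn-in
```

Unlike an EM point estimate, the chain yields the full posterior of the weight, so the spread of the retained samples directly quantifies the uncertainty the abstract refers to; the boundary clipping is a simplification kept for brevity.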
Synthesis Polarimetry Calibration
Moellenbrock, George
2017-10-01
Synthesis instrumental polarization calibration fundamentals for both linear (ALMA) and circular (EVLA) feed bases are reviewed, with special attention to the calibration heuristics supported in CASA. Practical problems affecting modern instruments are also discussed.
International Nuclear Information System (INIS)
Berger, C.D.; Gupton, E.D.; Lane, B.H.; Miller, J.H.; Nichols, S.W.
1982-08-01
The ORNL Calibrations Facility is operated by the Instrumentation Group of the Industrial Safety and Applied Health Physics Division. Its primary purpose is to maintain radiation calibration standards for calibration of ORNL health physics instruments and personnel dosimeters. This report includes a discussion of the radioactive sources and ancillary equipment in use and a step-by-step procedure for calibration of those survey instruments and personnel dosimeters in routine use at ORNL
Technical preparations for the in-vessel 14 MeV neutron calibration at JET
International Nuclear Information System (INIS)
Batistoni, P.; Popovichev, S.; Crowe, R.; Cufar, A.; Ghani, Z.; Keogh, K.; Peacock, A.; Price, R.; Baranov, A.; Korotkov, S.; Lykin, P.; Samoshin, A.
2017-01-01
Highlights: • The JET 14 MeV neutron calibration requires a neutron generator to be deployed inside the vacuum vessel by means of the remote handling system. • A neutron generator of suitable intensity and compliant with physics, remote handling and safety requirements has been identified and procured. The scientific programme of the preparatory phase, devoted to fully characterizing the selected 14 MeV neutron generator, is discussed. • The aim is to measure the absolute neutron emission rate to within ±5% and the energy spectrum of the emitted neutrons as a function of angle. • The physics preparations, source issues, safety and engineering aspects required to calibrate the JET neutron detectors directly are discussed. - Abstract: The power output of fusion devices is measured from their neutron yields, which relate directly to the fusion yield. In this paper we describe the devices and methods that have been prepared to perform a new in situ 14 MeV neutron calibration at JET in view of the new DT campaign planned at JET in the coming years. The target accuracy of this calibration is ±10%, as required for ITER, where a precise neutron yield measurement is important, e.g., for tritium accountancy. In this paper, the constraints and early decisions which defined the main calibration approach are discussed, e.g., the choice of the 14 MeV neutron source and the deployment method. The physics preparations, source issues, safety and engineering aspects required to calibrate the JET neutron detectors directly are also discussed. The existing JET remote-handling system will be used to deploy the neutron source inside the JET vessel. For this purpose, compatible tooling and systems necessary to ensure safe and efficient deployment have been developed. The scientific programme of the preparatory phase is devoted to fully characterizing the selected 14 MeV neutron generator to be used as the calibrating source, obtain a better understanding of the limitations of the
Technical preparations for the in-vessel 14 MeV neutron calibration at JET
Energy Technology Data Exchange (ETDEWEB)
Batistoni, P., E-mail: paola.batistoni@enea.it [ENEA, Department of Fusion and Nuclear Safety Technology, I-00044, Frascati, Rome (Italy); EUROfusion Consortium, Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); Popovichev, S. [CCFE, Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); EUROfusion Consortium, Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); Crowe, R. [Remote Applications in Challenging Environments (RACE), Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); EUROfusion Consortium, Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); Cufar, A. [Reactor Physics Division, Jožef Stefan Institute, Jamova cesta 39, SI-1000, Ljubljana (Slovenia); EUROfusion Consortium, Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); Ghani, Z. [CCFE, Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); EUROfusion Consortium, Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); Keogh, K. [Remote Applications in Challenging Environments (RACE), Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); EUROfusion Consortium, Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); Peacock, A. [JET Exploitation Unit, Abingdon, Oxon, OX14 3DB (United Kingdom); Price, R. [Remote Applications in Challenging Environments (RACE), Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); EUROfusion Consortium, Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); Baranov, A.; Korotkov, S.; Lykin, P.; Samoshin, A. [All-Russia Research Institute of Automatics (VNIIA), 22, Sushchevskaya str., 127055, Moscow (Russian Federation)
2017-04-15
Highlights: • The JET 14 MeV neutron calibration requires a neutron generator to be deployed inside the vacuum vessel by means of the remote handling system. • A neutron generator of suitable intensity and compliant with physics, remote handling and safety requirements has been identified and procured. The scientific programme of the preparatory phase devoted to fully characterizing the selected 14 MeV neutron generator is discussed. • The aim is to measure the absolute neutron emission rate to within ±5% and the energy spectrum of the emitted neutrons as a function of angle. • The physics preparations, source issues, safety and engineering aspects required to calibrate directly the JET neutron detectors are discussed. - Abstract: The power output of fusion devices is measured from their neutron yields, which relate directly to the fusion yield. In this paper we describe the devices and methods that have been prepared to perform a new in situ 14 MeV neutron calibration at JET in view of the new DT campaign planned at JET in the coming years. The target accuracy of this calibration is ±10%, as required for ITER, where a precise neutron yield measurement is important, e.g., for tritium accountancy. In this paper, the constraints and early decisions which defined the main calibration approach are discussed, e.g., the choice of the 14 MeV neutron source and the deployment method. The physics preparations, source issues, safety and engineering aspects required to calibrate directly the JET neutron detectors are also discussed. The existing JET remote-handling system will be used to deploy the neutron source inside the JET vessel. For this purpose, compatible tooling and systems necessary to ensure safe and efficient deployment have been developed. The scientific programme of the preparatory phase is devoted to fully characterizing the selected 14 MeV neutron generator to be used as the calibrating source, obtaining a better understanding of the limitations of the
Efficient and accurate calibration for radio interferometers
Kazemi, Sanaz
2013-01-01
Optical telescopes have detectors that are sensitive to the individual photons striking the detector surface, allowing the brightness of a light source in the sky to be measured. Optical telescopes are sensitive to photons with wavelengths between 400 nanometers (violet) and 700
Multidetector calibration for mass spectrometers
International Nuclear Information System (INIS)
Bayne, C.K.; Donohue, D.L.; Fiedler, R.
1994-06-01
The International Atomic Energy Agency's Safeguards Analytical Laboratory has performed calibration experiments to measure the different efficiencies among multi-Faraday detectors for a Finnigan-MAT 261 mass spectrometer. Two types of calibration experiments were performed: (1) peak-shift experiments and (2) peak-jump experiments. For peak-shift experiments, the ion intensities were measured for all isotopes of an element in different Faraday detectors. Repeated measurements were made by shifting the isotopes to various Faraday detectors. Two different peak-shifting schemes were used to measure plutonium (UK Pu5/92138) samples. For peak-jump experiments, ion intensities were measured in a reference Faraday detector for a single isotope and compared with those measured in the other Faraday detectors. Repeated measurements were made by switching back-and-forth between the reference Faraday detector and a selected Faraday detector. This switching procedure is repeated for all Faraday detectors. Peak-jump experiments were performed with replicate measurements of 239 Pu, 187 Re, and 238 U. Detector efficiency factors were estimated for both peak-jump and peak-shift experiments using a flexible calibration model to statistically analyze both types of multidetector calibration experiments. Calculated detector efficiency factors were shown to depend on both the material analyzed and the experimental conditions. A single detector efficiency factor is not recommended for each detector that would be used to correct routine sample analyses. An alternative three-run peak-shift sample analysis should be considered. A statistical analysis of the data from this peak-shift experiment can adjust the isotopic ratio estimates for detector differences due to each sample analysis
International Nuclear Information System (INIS)
Thomas, D.J.; Horwood, N.; Taylor, G.C.
1999-01-01
The use of realistic neutron calibration fields to overcome some of the problems associated with the response functions of presently available dosemeters, both area survey instruments and personal dosemeters, has been investigated. Realistic calibration fields have spectra which, compared to conventional radionuclide source based calibration fields, more closely match those of the workplace fields in which dosemeters are used. Monte Carlo simulations were performed to identify laboratory systems which would produce appropriate workplace-like calibration fields. A detailed analysis was then undertaken of the predicted under- and over-responses of dosemeters in a wide selection of measured workplace field spectra assuming calibration in a selection of calibration fields. These included both conventional radionuclide source calibration fields, and also several proposed realistic calibration fields. The present state of the art for dosemeter performance, and the possibilities of improving accuracy by using realistic calibration fields are both presented. (author)
Research on perturbation based Monte Carlo reactor criticality search
International Nuclear Information System (INIS)
Li Zeguang; Wang Kan; Li Yangliu; Deng Jingkang
2013-01-01
Criticality search is an important aspect of reactor physics analysis. Owing to the advantages of the Monte Carlo method and the development of computer technologies, Monte Carlo criticality search is becoming increasingly necessary and feasible. The traditional Monte Carlo criticality search method suffers from the large number of individual criticality runs required and from the statistical uncertainty and fluctuation of Monte Carlo results. A new Monte Carlo criticality search method based on perturbation calculation is put forward in this paper to overcome the disadvantages of the traditional method. Using only one criticality run to obtain the initial k_eff and the differential coefficients of the concerned parameter, a polynomial estimator of the k_eff response function is solved to find the critical value of the parameter. The feasibility of this method was tested. The results show that the accuracy and efficiency of the perturbation-based criticality search method are encouraging, and that the method overcomes the disadvantages of the traditional one. (authors)
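The polynomial search described in this abstract can be sketched in a few lines: from one run's k_eff and its parameter derivatives, build a Taylor polynomial and solve for the parameter value giving k_eff = 1. This is a minimal illustration, not the authors' code; the function name and the boron-concentration example are assumptions.

```python
import math

import numpy as np

def critical_parameter(k0, derivs, p0, target_k=1.0):
    """Estimate the parameter value at which k_eff reaches target_k,
    given k_eff (k0) and its derivatives at p0 from a single criticality run."""
    # Taylor polynomial in dp = p - p0: k(dp) = k0 + k'*dp + (k''/2!)*dp^2 + ...
    coeffs = [k0] + [d / math.factorial(n + 1) for n, d in enumerate(derivs)]
    poly = np.array(coeffs[::-1], dtype=float)  # np.roots wants highest order first
    poly[-1] -= target_k                        # solve k(dp) - target_k = 0
    roots = np.roots(poly)
    real = roots[np.isreal(roots)].real
    dp = real[np.argmin(np.abs(real))]          # smallest perturbation from p0
    return p0 + dp

# Illustrative: k_eff = 1.02 at 500 (e.g., ppm boron), dk/dp = -0.01 per unit
p_crit = critical_parameter(1.02, [-0.01], 500.0)
```

With a linear estimator this reduces to the familiar dp = (1 - k0)/k'; higher derivatives simply add polynomial terms.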
Monte Carlo method for array criticality calculations
International Nuclear Information System (INIS)
Dickinson, D.; Whitesides, G.E.
1976-01-01
The Monte Carlo method for solving neutron transport problems consists of mathematically tracing paths of individual neutrons collision by collision until they are lost by absorption or leakage. The fate of the neutron after each collision is determined by the probability distribution functions that are formed from the neutron cross-section data. These distributions are sampled statistically to establish the successive steps in the neutron's path. The resulting data, accumulated from following a large number of batches, are analyzed to give estimates of k/sub eff/ and other collision-related quantities. The use of electronic computers to produce the simulated neutron histories, initiated at Los Alamos Scientific Laboratory, made the use of the Monte Carlo method practical for many applications. In analog Monte Carlo simulation, the calculation follows the physical events of neutron scattering, absorption, and leakage. To increase calculational efficiency, modifications such as the use of statistical weights are introduced. The Monte Carlo method permits the use of a three-dimensional geometry description and a detailed cross-section representation. Some of the problems in using the method are the selection of the spatial distribution for the initial batch, the preparation of the geometry description for complex units, and the calculation of error estimates for region-dependent quantities such as fluxes. The Monte Carlo method is especially appropriate for criticality safety calculations since it permits an accurate representation of interacting units of fissile material. Dissimilar units, units of complex shape, moderators between units, and reflected arrays may be calculated. Monte Carlo results must be correlated with relevant experimental data, and caution must be used to ensure that a representative set of neutron histories is produced
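The collision-by-collision tracing described above starts with sampling the distance to the next collision by inverse-transform sampling of the exponential attenuation law, then choosing the collision outcome from the cross-section ratios. A toy sketch, in which the slab geometry, cross sections, and the fixed-direction simplification are illustrative assumptions, not from the text:

```python
import math
import random

def flight_distance(sigma_t, rng=random.random):
    """Inverse-transform sample of the distance to the next collision for
    total macroscopic cross section sigma_t: d = -ln(xi) / sigma_t."""
    return -math.log(rng()) / sigma_t

def absorbed_fraction(sigma_t, sigma_a, slab_thickness, n=100_000):
    """Toy analog MC: fraction of mono-directional neutrons absorbed in a
    slab, deciding each collision by sampling the absorption probability."""
    random.seed(1)
    absorbed = 0
    for _ in range(n):
        x = 0.0
        while True:
            x += flight_distance(sigma_t)
            if x > slab_thickness:                      # leaked out the far side
                break
            if random.random() < sigma_a / sigma_t:     # absorbed at collision
                absorbed += 1
                break
            # otherwise scattered; this toy keeps the direction unchanged
    return absorbed / n
```

For a pure absorber (sigma_a = sigma_t) the result approaches the analytic 1 - exp(-sigma_t * L), which is a convenient sanity check on the sampling.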
International Nuclear Information System (INIS)
Rajabalinejad, M.
2010-01-01
To reduce the cost of Monte Carlo (MC) simulations for time-consuming processes, Bayesian Monte Carlo (BMC) is introduced in this paper. The BMC method reduces the number of realizations in MC according to the desired accuracy level. BMC also makes it possible to consider additional priors; in other words, different priors can be integrated into one model using BMC to further reduce the cost of simulations. This study suggests speeding up the simulation process by treating the logical dependence of neighboring points as prior information. This information is used in the BMC method to produce a predictive tool during the simulation process. The general methodology and algorithm of the BMC method are presented in this paper. The BMC method is applied to a simplified breakwater model as well as to the finite element model of the 17th Street Canal in New Orleans, and the results are compared with the MC and Dynamic Bounds methods.
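The core idea of stopping once the desired accuracy level is reached can be illustrated with a plain-MC batch stopping rule. This is a simplification: the actual BMC method also exploits prior information (e.g., dependence of neighboring points), which this sketch omits.

```python
import math
import random

def mc_until_accuracy(sample, rel_tol=0.01, batch=1000, max_n=10**6):
    """Run Monte Carlo in batches until the standard error of the mean
    falls below rel_tol * |mean|, instead of a fixed number of realizations."""
    total, total_sq, n = 0.0, 0.0, 0
    while n < max_n:
        for _ in range(batch):
            x = sample()
            total += x
            total_sq += x * x
        n += batch
        mean = total / n
        var = max(total_sq / n - mean * mean, 0.0)  # population variance estimate
        sem = math.sqrt(var / n)                    # standard error of the mean
        if sem < rel_tol * abs(mean):
            break
    return mean, sem, n

# Illustrative target: E[U^2] = 1/3 for U uniform on [0, 1]
random.seed(3)
mean, sem, n_used = mc_until_accuracy(lambda: random.random() ** 2)
```

The payoff is that cheap, low-variance targets terminate after a few batches, while only genuinely noisy targets consume the full budget.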
Proton beam monitor chamber calibration
International Nuclear Information System (INIS)
Gomà, C; Meer, D; Safai, S; Lorentini, S
2014-01-01
The first goal of this paper is to clarify the reference conditions for the reference dosimetry of clinical proton beams. A clear distinction is made between proton beam delivery systems which should be calibrated with a spread-out Bragg peak field and those that should be calibrated with a (pseudo-)monoenergetic proton beam. For the latter, this paper also compares two independent dosimetry techniques to calibrate the beam monitor chambers: absolute dosimetry (of the number of protons exiting the nozzle) with a Faraday cup and reference dosimetry (i.e. determination of the absorbed dose to water under IAEA TRS-398 reference conditions) with an ionization chamber. To compare the two techniques, Monte Carlo simulations were performed to convert dose-to-water to proton fluence. A good agreement was found between the Faraday cup technique and the reference dosimetry with a plane-parallel ionization chamber. The differences—of the order of 3%—were found to be within the uncertainty of the comparison. For cylindrical ionization chambers, however, the agreement was only possible when positioning the effective point of measurement of the chamber at the reference measurement depth—i.e. not complying with IAEA TRS-398 recommendations. In conclusion, for cylindrical ionization chambers, IAEA TRS-398 reference conditions for monoenergetic proton beams led to a systematic error in the determination of the absorbed dose to water, especially relevant for low-energy proton beams. To overcome this problem, the effective point of measurement of cylindrical ionization chambers should be taken into account when positioning the reference point of the chamber. Within the current IAEA TRS-398 recommendations, it seems advisable to use plane-parallel ionization chambers—rather than cylindrical chambers—for the reference dosimetry of pseudo-monoenergetic proton beams. (paper)
International Nuclear Information System (INIS)
Dubi, A.; Gerstl, S.A.W.
1979-05-01
The contributon Monte Carlo method is based on a new recipe to calculate target responses by means of a volume integral of the contributon current in a region between the source and the detector. A comprehensive description of the method, its implementation in the general-purpose MCNP code, and results of the method for realistic nonhomogeneous, energy-dependent problems are presented. 23 figures, 10 tables
Directory of Open Access Journals (Sweden)
Pedro Medina Avendaño
1981-01-01
Full Text Available Carlos Vega Duarte had the simplicity of elemental and pure beings. His heart was as clean as alluvial gold. His direct, colloquial manner revealed a Santander native, free of affectation, who loved the gleam of weapons and was dazzled by the flash of perfect phrases.
International Nuclear Information System (INIS)
Wollaber, Allan Benton
2016-01-01
This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
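The "estimating π" example from the outline is the classic first Monte Carlo exercise; a minimal sketch (the function name is assumed):

```python
import random

def estimate_pi(n, seed=0):
    """Estimate pi by sampling points in the unit square and counting the
    fraction that lands inside the quarter circle of radius 1."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n
```

By the Law of Large Numbers the estimate converges to π, and by the Central Limit Theorem its error shrinks like 1/sqrt(n) — exactly the two results the outline invokes.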
International Nuclear Information System (INIS)
Creutz, M.
1986-01-01
The author discusses a recently developed algorithm for simulating statistical systems. The procedure interpolates between molecular dynamics methods and canonical Monte Carlo. The primary advantages are extremely fast simulations of discrete systems such as the Ising model and a relative insensitivity to random number quality. A variation of the algorithm gives rise to a deterministic dynamics for Ising spins. This model may be useful for high speed simulation of non-equilibrium phenomena
Energy Technology Data Exchange (ETDEWEB)
Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
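Of the sampling techniques listed in the outline, rejection sampling can be shown in a few lines. The target density p(x) = 2x on [0, 1] is an arbitrary illustrative choice:

```python
import random

def rejection_sample(pdf, pdf_max, lo, hi, rng):
    """Rejection sampling: propose x uniformly on [lo, hi] and accept with
    probability pdf(x) / pdf_max (pdf_max must bound pdf on [lo, hi])."""
    while True:
        x = lo + (hi - lo) * rng.random()
        if rng.random() * pdf_max <= pdf(x):
            return x

# Example: draw from p(x) = 2x on [0, 1], whose maximum value is 2
rng = random.Random(42)
samples = [rejection_sample(lambda x: 2.0 * x, 2.0, 0.0, 1.0, rng)
           for _ in range(50_000)]
```

The acceptance rate equals the area under the density divided by the bounding box area (here 1/2), so a tight pdf_max matters for efficiency.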
2009-01-01
On 7 April CERN will be holding a symposium to mark the 75th birthday of Carlo Rubbia, who shared the 1984 Nobel Prize for Physics with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. Following a presentation by Rolf Heuer, lectures will be given by eminent speakers on areas of science to which Carlo Rubbia has made decisive contributions. Michel Spiro, Director of the French National Institute of Nuclear and Particle Physics (IN2P3) of the CNRS, Lyn Evans, sLHC Project Leader, and Alan Astbury of the TRIUMF Laboratory will talk about the physics of the weak interaction and the discovery of the W and Z bosons. Former CERN Director-General Herwig Schopper will lecture on CERN’s accelerators from LEP to the LHC. Giovanni Bignami, former President of the Italian Space Agency and Professor at the IUSS School for Advanced Studies in Pavia will speak about his work with Carlo Rubbia. Finally, Hans Joachim Sch...
2009-01-01
On 7 April CERN will be holding a symposium to mark the 75th birthday of Carlo Rubbia, who shared the 1984 Nobel Prize for Physics with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. Following a presentation by Rolf Heuer, lectures will be given by eminent speakers on areas of science to which Carlo Rubbia has made decisive contributions. Michel Spiro, Director of the French National Institute of Nuclear and Particle Physics (IN2P3) of the CNRS, Lyn Evans, sLHC Project Leader, and Alan Astbury of the TRIUMF Laboratory will talk about the physics of the weak interaction and the discovery of the W and Z bosons. Former CERN Director-General Herwig Schopper will lecture on CERN’s accelerators from LEP to the LHC. Giovanni Bignami, former President of the Italian Space Agency, will speak about his work with Carlo Rubbia. Finally, Hans Joachim Schellnhuber of the Potsdam Institute for Climate Research and Sven Kul...
Directory of Open Access Journals (Sweden)
Charlie Samuya Veric
2001-12-01
Full Text Available The importance of Carlos Bulosan in Filipino and Filipino-American radical history and literature is indisputable. His eminence spans the Pacific, and he is known, diversely, as a radical poet, fictionist, novelist, and labor organizer. Author of the canonical America Is in the Heart, Bulosan is celebrated for chronicling the conditions in America in his time, such as racism and unemployment. In the history of criticism on Bulosan's life and work, however, there is an undeclared general consensus that views Bulosan and his work as coherent, permanent texts of radicalism and anti-imperialism. Central to the existence of such a tradition of critical reception are the generations of critics who, in more ways than one, control the discourse on and of Carlos Bulosan. This essay inquires into the sphere of critical reception that orders, for our time and for the time ahead, the reading and interpretation of Bulosan. What eye and seeing, the essay asks, determine the perception of Bulosan as the angel of radicalism? What is obscured in constructing Bulosan as an immutable figure of the political? What light does the reader conceive when the personal is brought into the open and situated against the political? The essay explores the answers to these questions in Bulosan's loving letters to various friends, strangers, and white American women. The presence of these interrogations, the essay believes, will ultimately secure the continuing importance of Carlos Bulosan to radical literature and history.
Calibration of Nanopositioning Stages
Directory of Open Access Journals (Sweden)
Ning Tan
2015-12-01
Full Text Available Accuracy is one of the most important criteria for the performance evaluation of micro- and nanorobots or systems. Nanopositioning stages are used to achieve high positioning resolution and accuracy for a wide and growing scope of applications. However, their positioning accuracy and repeatability are not well known and are difficult to guarantee, which induces many drawbacks for many applications. For example, in the mechanical characterisation of biological samples, it is difficult to perform several cycles in a repeatable way so as not to induce negative influences on the study. It also prevents one from accurately controlling a tool with respect to a sample without adding additional sensors for closed-loop control. This paper aims at quantifying the positioning repeatability and accuracy based on the ISO 9283:1998 standard, and at analyzing factors influencing positioning accuracy through a case study of a 1-DoF (Degree-of-Freedom) nanopositioning stage. The influence of thermal drift is notably quantified. Performance improvements of the nanopositioning stage are then investigated through robot calibration (i.e., an open-loop approach). Two models (static and adaptive) are proposed to compensate for both geometric errors and thermal drift. Validation experiments are conducted over a long period (several days), showing that the accuracy of the stage is improved from the typical micrometer range to 400 nm using the static model and even down to 100 nm using the adaptive model. In addition, we extend the 1-DoF calibration to multi-DoF with a case study of a 2-DoF nanopositioning robot. Results demonstrate that the model efficiently improved the 2D accuracy from 1400 nm to 200 nm.
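A calibration model of the kind described (geometric gain and offset, optionally a thermal-drift term) can be fitted by ordinary least squares. The sketch below uses assumed variable names and a linear drift term; it illustrates the idea, not the authors' actual models:

```python
import numpy as np

def fit_static_model(commanded, measured):
    """Least-squares fit of a static calibration model:
    measured ≈ a * commanded + b (gain and offset geometric errors)."""
    A = np.column_stack([commanded, np.ones_like(commanded)])
    (a, b), *_ = np.linalg.lstsq(A, measured, rcond=None)
    return a, b

def fit_thermal_model(commanded, temperature, measured):
    """Adds a thermal-drift term:
    measured ≈ a * commanded + b + c * temperature."""
    A = np.column_stack([commanded, np.ones_like(commanded), temperature])
    coeffs, *_ = np.linalg.lstsq(A, measured, rcond=None)
    return coeffs  # a, b, c
```

Once fitted, the open-loop correction is to command (target - b) / a (with the thermal term subtracted first), so the stage lands on the target without closed-loop sensing.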
Calibration of the CREAM calorimeter with beam test data
Han, J H; Amare, Y
The Cosmic Ray Energetics And Mass (CREAM) calorimeter (CAL) is designed to measure cosmic-ray elemental energy spectra from 10^12 eV to 10^15 eV. It comprises 20 layers of tungsten interleaved with 20 layers of scintillating fiber ribbons. Before each flight, the CAL is exposed to an electron beam. For CREAM-IV through CREAM-VI, beams of 150 GeV electrons were used for the calibration, and 100 GeV was used for CREAM-VII. For calibration purposes, we compare electron beam data with simulation results to find calibration constants in units of MeV/ADC. In this paper, we present calibration results, including energy resolutions for electrons and uniformity of response. We also discuss CAL calibration using various beam test data compared with Monte Carlo (MC) simulation data.
Direct illumination LED calibration for telescope photometry
International Nuclear Information System (INIS)
Barrelet, E.; Juramy, C.
2008-01-01
A calibration method for telescope photometry, based on the direct illumination of a telescope with a calibrated light source regrouping multiple LEDs, is proposed. Its purpose is to calibrate the instrument response. The main emphasis of the proposed method is the traceability of the calibration process and a continuous monitoring of the instrument in order to maintain a 0.2% accuracy over a period of years. Its specificity is to map finely the response of the telescope and its camera as a function of all light ray parameters. This feature is essential to implement a computer model of the instrument representing the variation of the overall light collection efficiency of each pixel for various filter configurations. We report on hardware developments done for SNDICE, the first application of this direct illumination calibration system, which will be installed in the Canada-France-Hawaii Telescope (CFHT) for its leading supernova experiment (SNLS)
Implementation of Fast Emulator-based Code Calibration
Energy Technology Data Exchange (ETDEWEB)
Bowman, Nathaniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Risk & Reliability Analysis; Denman, Matthew R [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Risk & Reliability Analysis
2016-08-01
Calibration is the process of using experimental data to gain more precise knowledge of simulator inputs. This process commonly involves the use of Markov-chain Monte Carlo, which requires running a simulator thousands of times. If we can create a faster program, called an emulator, that mimics the outputs of the simulator for an input range of interest, then we can speed up the process enough to make it feasible for expensive simulators. To this end, we implement a Gaussian-process emulator capable of reproducing the behavior of various long-running simulators to within acceptable tolerance. This fast emulator can be used in place of a simulator to run Markov-chain Monte Carlo in order to calibrate simulation parameters to experimental data. As a demonstration, this emulator is used to calibrate the inputs of an actual simulator against two sodium-fire experiments.
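The emulator-plus-MCMC workflow described here can be sketched in miniature: fit a Gaussian-process emulator to a handful of simulator runs, then run random-walk Metropolis against the emulator instead of the simulator. The kernel choice, the x² stand-in for the "expensive simulator", and all tuning constants below are illustrative assumptions, not the implementation from this report:

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential kernel matrix between 1-D point sets a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

class GPEmulator:
    """Minimal 1-D Gaussian-process emulator of an expensive simulator."""
    def __init__(self, x_train, y_train, length=1.0, noise=1e-8):
        self.x, self.length = x_train, length
        K = rbf(x_train, x_train, length) + noise * np.eye(len(x_train))
        self.alpha = np.linalg.solve(K, y_train)
    def predict(self, x_new):
        return rbf(x_new, self.x, self.length) @ self.alpha

def calibrate(emulator, y_obs, sigma, n_steps=5000, step=0.2, seed=0):
    """Random-walk Metropolis over the simulator input, with the cheap
    emulator standing in for the simulator inside a Gaussian likelihood."""
    rng = np.random.default_rng(seed)
    def logpost(t):
        pred = emulator.predict(np.array([t]))[0]
        return -0.5 * ((pred - y_obs) / sigma) ** 2
    theta, lp, chain = 0.0, logpost(0.0), []
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal()
        lp_prop = logpost(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta)
    return np.array(chain)

# Train on 9 (slow) simulator runs, then calibrate to an observation of 1.0
x_train = np.linspace(-2.0, 2.0, 9)
y_train = x_train ** 2            # stand-in for the expensive simulator
em = GPEmulator(x_train, y_train, length=0.8)
chain = calibrate(em, y_obs=1.0, sigma=0.1)
```

Because each posterior evaluation costs a kernel product rather than a simulator run, the thousands of MCMC steps the abstract mentions become affordable.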
Calculation Analysis of Calibration Factors of Airborne Gamma-ray Spectrometer
International Nuclear Information System (INIS)
Zhao Jun; Zhu Jinhui; Xie Honggang; He Qinglin
2009-01-01
To determine the calibration factors of an airborne gamma-ray spectrometer measuring a large-area gamma-ray-emitting source at different flying heights, a series of Monte Carlo simulations was performed. Response energy spectra of the NaI crystals in the airplane caused by natural-decay-series calibration pads, and calibration factors at different heights above a Cs-137 plane source, were obtained. The calculated results agreed well with the experimental data. (authors)
Discrete Diffusion Monte Carlo for Electron Thermal Transport
Chenhall, Jeffrey; Cao, Duc; Wollaeger, Ryan; Moses, Gregory
2014-10-01
The iSNB (implicit Schurtz-Nicolai-Busquet) electron thermal transport method of Cao et al. is adapted to a Discrete Diffusion Monte Carlo (DDMC) solution method for eventual inclusion in a hybrid IMC-DDMC (Implicit Monte Carlo-DDMC) method. The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the iSNB-DDMC method will be presented. This work was supported by Sandia National Laboratory - Albuquerque.
Financial model calibration using consistency hints.
Abu-Mostafa, Y S
2001-01-01
We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to the Japanese Yen swaps market and the US dollar yield market.
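The structure of the hint-augmented error function can be sketched generically: the curve-fitting error plus a weighted Kullback-Leibler penalty that measures how far the model-implied distribution strays from the hint. The discrete KL form and the fixed weight below are illustrative simplifications (the paper balances weights using canonical errors):

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """Kullback-Leibler distance between two discrete distributions."""
    p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def augmented_error(params, fit_error, hint_dist, model_dist, weight=1.0):
    """Curve-fitting error augmented with a consistency-hint penalty:
    total = fit_error(params) + weight * KL(hint || model(params))."""
    return fit_error(params) + weight * kl(hint_dist, model_dist(params))
```

Minimizing this total error keeps the fitted parameters consistent with the hint distribution even when the raw curve fit alone would drift to invalid values.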
International Nuclear Information System (INIS)
Mainegra, E.; Capote, R.
1997-01-01
A methodology was developed for the characterization of scintillation detectors with NaI(Tl) crystals, based on Monte Carlo simulation with the Electron-Gamma-Shower system version 4. The simulation considered the aluminum cover of the crystal and the protective shielding of the detection system. The experimental spectrum was reproduced with precision, except for energies below the backscatter peak. This divergence is explained by the fact that the real dimensions of the source, and hence the scattering of the gamma radiation within it, were not considered. (author)
Multivariate calibration applied to the quantitative analysis of infrared spectra
Energy Technology Data Exchange (ETDEWEB)
Haaland, D.M.
1991-01-01
Multivariate calibration methods are very useful for improving the precision, accuracy, and reliability of quantitative spectral analyses. Spectroscopists can more effectively use these sophisticated statistical tools if they have a qualitative understanding of the techniques involved. A qualitative picture of the factor analysis multivariate calibration methods of partial least squares (PLS) and principal component regression (PCR) is presented using infrared calibrations based upon spectra of phosphosilicate glass thin films on silicon wafers. Comparisons of the relative prediction abilities of four different multivariate calibration methods are given based on Monte Carlo simulations of spectral calibration and prediction data. The success of multivariate spectral calibrations is demonstrated for several quantitative infrared studies. The infrared absorption and emission spectra of thin-film dielectrics used in the manufacture of microelectronic devices demonstrate rapid, nondestructive at-line and in-situ analyses using PLS calibrations. Finally, the application of multivariate spectral calibrations to reagentless analysis of blood is presented. We have found that the determination of glucose in whole blood taken from diabetics can be precisely monitored from the PLS calibration of either mid- or near-infrared spectra of the blood. Progress toward the non-invasive determination of glucose levels in diabetics is an ultimate goal of this research. 13 refs., 4 figs.
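PCR, one of the factor-analysis methods compared here, can be sketched with a singular value decomposition: project the centered spectra onto the top principal components and regress the property of interest on the scores. An illustrative implementation, not the author's code:

```python
import numpy as np

def pcr_fit(X, y, n_components):
    """Principal component regression: regress y on the scores of the
    top principal components of the (centered) spectral matrix X."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc = X - x_mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T          # loadings: (n_wavelengths, k)
    scores = Xc @ V                  # sample scores: (n_samples, k)
    b, *_ = np.linalg.lstsq(scores, y - y_mean, rcond=None)
    return x_mean, y_mean, V, b

def pcr_predict(model, X_new):
    x_mean, y_mean, V, b = model
    return y_mean + (X_new - x_mean) @ V @ b
```

Truncating to a few components is what gives PCR its noise rejection: directions with little spectral variance (mostly noise) are excluded from the regression.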
Experimental and simulated efficiency of a HPGe detector in the energy range of 0.06∼11 MeV
International Nuclear Information System (INIS)
Park, Chang Su; Choi, H. D.; Sun, Gwang Min
2003-01-01
The full energy peak efficiency of a Hyper Pure Germanium (HPGe) detector was calibrated over a wide energy range from 0.06 to 11 MeV. Both an experimental technique and the Monte Carlo method were used for the efficiency calibration. The measurement was performed using standard radioisotopes in the low energy region of 60∼1408 keV, and was extended up to 11 MeV by using the 14 N(n,γ) and 35 Cl(n,γ) reactions. The GEANT Monte Carlo code was used for the efficiency calculation. The calculated efficiency showed the same dependence on γ-ray energy as the measurement, and the discrepancy between calculation and measurement was minimized by fine-tuning the detector geometry. From the calculated result, the efficiency curve of the HPGe detector was reliably determined, particularly in the high energy region above several MeV, where the number of measured efficiency points is relatively small despite the wide energy region. The calculated efficiency agreed with the measurement to within about 7%. In addition to the efficiency calculation, the origin of the local minimum near 600 keV on the efficiency curve was analyzed as a general characteristic of HPGe detectors
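Discrete efficiency points like these are commonly interpolated with a log-log polynomial, ln(eff) = Σ aᵢ (ln E)ⁱ. The sketch below uses that conventional model form, which is standard HPGe practice rather than something stated in this abstract:

```python
import numpy as np

def fit_efficiency_curve(energies_kev, efficiencies, degree=3):
    """Fit the common log-log polynomial model for full-energy peak
    efficiency: ln(eff) = sum_i a_i * (ln E)^i. Returns polyfit coefficients
    (highest order first)."""
    return np.polyfit(np.log(energies_kev), np.log(efficiencies), degree)

def efficiency(coeffs, energy_kev):
    """Evaluate the fitted curve at an arbitrary energy (keV)."""
    return np.exp(np.polyval(coeffs, np.log(energy_kev)))
```

Fitting in log-log space keeps the curve smooth over several decades of energy and guarantees positive interpolated efficiencies.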
New radiation protection calibration facility at CERN.
Brugger, Markus; Carbonez, Pierre; Pozzi, Fabio; Silari, Marco; Vincke, Helmut
2014-10-01
The CERN radiation protection group has designed a new state-of-the-art calibration laboratory to replace the present facility, which is >20 y old. The new laboratory, presently under construction, will be equipped with neutron and gamma sources, as well as an X-ray generator and a beta irradiator. The present work describes the project to design the facility, including the facility placement criteria, the 'point-zero' measurements and the shielding study performed via FLUKA Monte Carlo simulations.
Alternative implementations of the Monte Carlo power method
International Nuclear Information System (INIS)
Blomquist, R.N.; Gelbard, E.M.
2002-01-01
We compare nominal efficiencies, i.e. variances in power shapes for equal running time, of different versions of the Monte Carlo eigenvalue computation, as applied to criticality safety analysis calculations. The two main methods considered here are "conventional" Monte Carlo and the superhistory method, and both are used in criticality safety codes. Within each of these major methods, different variants are available for the main steps of the basic Monte Carlo algorithm. Thus, for example, different treatments of the fission process may vary in the extent to which they follow, in analog fashion, the details of real-world fission, or may vary in details of the methods by which they choose next-generation source sites. In general the same options are available in both the superhistory method and conventional Monte Carlo, but there seems not to have been much examination of the special properties of the two major methods and their minor variants. We find, first, that the superhistory method is just as efficient as conventional Monte Carlo and, secondly, that use of different variants of the basic algorithms may, in special cases, have a surprisingly large effect on Monte Carlo computational efficiency
Energy Technology Data Exchange (ETDEWEB)
Costa, Priscila
2014-07-01
The Cuno filter is part of the water processing circuit of the IEA-R1 reactor and, when saturated, it is replaced and becomes radioactive waste, which must be managed. In this work, the primary characterization of the Cuno filter of the IEA-R1 nuclear reactor at IPEN was carried out using gamma spectrometry associated with the Monte Carlo method. The gamma spectrometry was performed using a hyperpure germanium (HPGe) detector. The germanium crystal represents the active detection volume of the HPGe detector, which has a region called the dead layer or inactive layer. A difference between theoretical and experimental values when obtaining the efficiency curve of these detectors has been reported in the literature. In this study we used the MCNP-4C code to obtain the detector calibration efficiency for the geometry of the Cuno filter, and the influence of the dead layer and the cascade-summing effect in the HPGe detector were studied. The correction of the dead layer values was made by varying the thickness and the radius of the germanium crystal. The detector has 75.83 cm{sup 3} of active detection volume, according to information provided by the manufacturer. Nevertheless, the results showed that the actual value of the active volume is less than the one specified, with the dead layer representing 16% of the total volume of the crystal. Analysis of the Cuno filter by gamma spectrometry enabled the identification of energy peaks. Using these peaks, three radionuclides were identified in the filter: {sup 108m}Ag, {sup 110m}Ag and {sup 60}Co. From the calibration efficiency obtained by the Monte Carlo method, the activity estimated for these radionuclides is on the order of MBq. (author)
International Nuclear Information System (INIS)
Gerlach, M.; Krumrey, M.; Cibik, L.; Mueller, P.; Ulm, G.
2009-01-01
Monte Carlo techniques are powerful tools to simulate the interaction of electromagnetic radiation with matter. One of the most widespread simulation program packages is Geant4. Almost all physical interaction processes can be included. However, it is not evident what accuracy can be obtained by a simulation. In this work, results of scattering experiments using monochromatized synchrotron radiation in the X-ray regime are quantitatively compared to the results of simulations using Geant4. Experiments were performed for various scattering foils made of different materials such as copper and gold. For energy-dispersive measurements of the scattered radiation, a cadmium telluride detector was used. The detector was fully characterized and calibrated with calculable undispersed as well as monochromatized synchrotron radiation. The obtained quantum efficiency and the response functions are in very good agreement with the corresponding Geant4 simulations. At the electron storage ring BESSY II the number of incident photons in the scattering experiments was measured with a photodiode that had been calibrated against a cryogenic radiometer, so that a direct comparison of scattering experiments with Monte Carlo simulations using Geant4 was possible. It was shown that Geant4 describes the photoeffect, including fluorescence as well as the Compton and Rayleigh scattering, with high accuracy, resulting in a deviation of typically less than 20%. Even polarization effects are widely covered by Geant4, and for Doppler broadening of Compton-scattered radiation the extension G4LECS can be included, but the fact that both features cannot be combined is a limitation. For most polarization-dependent simulations, good agreement with the experimental results was found, except for some orientations where Rayleigh scattering was overestimated in the simulation.
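The "deviation of typically less than 20%" claim amounts to a channel-by-channel relative comparison of measured and simulated spectra. A minimal sketch of such a check (the spectra here are made-up toy numbers, not data from this work):

```python
def max_relative_deviation(measured, simulated):
    """Largest |measured - simulated| / measured over paired spectrum bins."""
    return max(abs(m - s) / m for m, s in zip(measured, simulated))

# Toy four-bin "spectra" for illustration only.
measured  = [120.0, 95.0, 60.0, 33.0]
simulated = [118.0, 99.0, 55.0, 36.0]
dev = max_relative_deviation(measured, simulated)
```

A simulation would pass the paper's quoted tolerance whenever `dev` stays below 0.20.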
International Nuclear Information System (INIS)
Sarkar, S.; Kosson, D.S.; Mahadevan, S.; Meeussen, J.C.L.; Sloot, H. van der; Arnold, J.R.; Brown, K.G.
2012-01-01
Chemical equilibrium modeling of cementitious materials requires aqueous–solid equilibrium constants of the controlling mineral phases (Ksp) and the available concentrations of primary components. Inherent randomness of the input and model parameters, experimental measurement error, the assumptions and approximations required for numerical simulation, and inadequate knowledge of the chemical process contribute to uncertainty in model prediction. A numerical simulation framework is developed in this paper to assess uncertainty in Ksp values used in geochemical speciation models. A Bayesian statistical method is used in combination with an efficient, adaptive Metropolis sampling technique to develop probability density functions for Ksp values. One set of leaching experimental observations is used for calibration and another set is used for comparison to evaluate the applicability of the approach. The estimated probability distributions of Ksp values can be used in Monte Carlo simulation to assess uncertainty in the behavior of aqueous–solid partitioning of constituents in cement-based materials.
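The core of the sampling machinery is a random-walk Metropolis chain whose step size adapts toward a reasonable acceptance rate. A toy sketch on a Gaussian stand-in for a log-Ksp posterior (the paper's actual adaptive Metropolis scheme and likelihood are more elaborate; every number here is illustrative):

```python
import math
import random

def metropolis(log_post, x0, n, step=0.5, seed=1):
    """Random-walk Metropolis with a crude step-size adaptation
    nudging acceptance toward ~40% every 100 iterations."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain, accepted = [], 0
    for i in range(1, n + 1):
        y = x + rng.gauss(0.0, step)
        lq = log_post(y)
        if math.log(rng.random()) < lq - lp:  # accept with prob min(1, q/p)
            x, lp = y, lq
            accepted += 1
        if i % 100 == 0:
            step *= 1.1 if accepted / i > 0.4 else 0.9
        chain.append(x)
    return chain

# Toy "posterior": Gaussian log-Ksp with mean -8 and sd 0.3 (hypothetical).
log_post = lambda x: -0.5 * ((x + 8.0) / 0.3) ** 2
chain = metropolis(log_post, x0=-7.0, n=5000)
post_mean = sum(chain[1000:]) / len(chain[1000:])
```

After discarding burn-in, the chain's histogram approximates the desired probability density function for the parameter.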
Calibrations of a tritium extraction facility
International Nuclear Information System (INIS)
Bretscher, M.M.; Oliver, B.M.; Farrar, H. IV.
1983-01-01
A tritium extraction facility has been built for the purpose of measuring the absolute tritium concentration in neutron-irradiated lithium metal samples. Two independent calibration procedures have been used to determine what fraction, if any, of the tritium is lost during the extraction process. The first procedure compares independently measured 4He and 3H concentrations from the 6Li(n,α)T reaction. The second procedure compares measured 6Li(n,α)T / 197Au(n,γ)198Au thermal neutron reaction rate ratios with those obtained from Monte Carlo calculations using well-known cross sections. Both calibration methods show that, within experimental errors (approx. 1.5%), no tritium is lost during the extraction process.
RF impedance measurement calibration
International Nuclear Information System (INIS)
Matthews, P.J.; Song, J.J.
1993-01-01
The intent of this note is not to explain all of the available calibration methods in detail. Instead, we focus on the calibration methods of interest for RF impedance coupling measurements and attempt to explain: (1) the standards and measurements necessary for the various calibration techniques; (2) the advantages and disadvantages of each technique; (3) the mathematical manipulations that need to be applied to the measured standards and devices; and (4) an outline of the steps needed for writing a calibration routine that operates from a remote computer. For further details of the various techniques presented in this note, the reader should consult the references.
Uncertainty analysis in Monte Carlo criticality computations
International Nuclear Information System (INIS)
Qi Ao
2011-01-01
Highlights: ► Two types of uncertainty methods for keff Monte Carlo computations are examined. ► The sampling method places the fewest restrictions on perturbations but demands the most computing resources. ► The analytical method is limited to small perturbations of material properties. ► Practicality relies on efficiency, multiparameter applicability and data availability. - Abstract: Uncertainty analysis is imperative for nuclear criticality risk assessments when using Monte Carlo neutron transport methods to predict the effective neutron multiplication factor (keff) for fissionable material systems. For the validation of Monte Carlo codes for criticality computations against benchmark experiments, code accuracy and precision are measured by both the computational bias and the uncertainty in the bias. The uncertainty in the bias accounts for known or quantified experimental, computational and model uncertainties. For the application of Monte Carlo codes to criticality analysis of fissionable material systems, an administrative margin of subcriticality must be imposed to provide additional assurance of subcriticality for any unknown or unquantified uncertainties. Because the administrative margin of subcriticality substantially affects the economics and safety of nuclear fuel cycle operations, recent growing interest in reducing it has made uncertainty analysis in criticality safety computations more risk-significant. This paper provides an overview of the two most popular keff uncertainty analysis methods for Monte Carlo criticality computations: (1) sampling-based methods, and (2) analytical methods. Examples are given to demonstrate their use in analyzing keff uncertainty due to uncertainties in both neutronic and non-neutronic parameters of fissionable material systems.
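The sampling-based method can be sketched in a few lines: perturb the input data within their assumed uncertainties, rerun the model, and take the spread of the results as the keff uncertainty. Here a one-group ratio stands in for a full transport code, and the 1% input uncertainties are purely illustrative:

```python
import random
import statistics

def keff_model(nu_sigma_f, sigma_a):
    """Toy one-group stand-in for a transport code: k = nu*Sigma_f / Sigma_a."""
    return nu_sigma_f / sigma_a

def sampled_keff(n=2000, seed=2):
    """Sampling-based uncertainty analysis: draw the nuclear data from
    their (assumed 1%) uncertainty distributions and observe the spread."""
    rng = random.Random(seed)
    ks = [keff_model(rng.gauss(1.0, 0.01), rng.gauss(1.02, 0.0102))
          for _ in range(n)]
    return statistics.mean(ks), statistics.stdev(ks)

mean_k, sd_k = sampled_keff()
```

With independent 1% uncertainties on both inputs, the spread in k comes out near sqrt(2) x 1%, which is what error propagation predicts for a ratio.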
Multilevel Monte Carlo Approaches for Numerical Homogenization
Efendiev, Yalchin R.
2015-10-01
In this article, we study the application of multilevel Monte Carlo (MLMC) approaches to numerical random homogenization. Our objective is to compute the expectation of some functionals of the homogenized coefficients, or of the homogenized solutions. This is accomplished within MLMC by considering different sizes of representative volumes (RVEs). Many inexpensive computations with the smallest RVE size are combined with fewer expensive computations performed on larger RVEs. Likewise, when it comes to homogenized solutions, different levels of coarse-grid meshes are used to solve the homogenized equation. We show that, by carefully selecting the number of realizations at each level, we can achieve a speed-up in the computations in comparison to a standard Monte Carlo method. Numerical results are presented for both one-dimensional and two-dimensional test-cases that illustrate the efficiency of the approach.
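The MLMC idea is a telescoping sum: estimate the coarsest level with many cheap samples and each successive correction with progressively fewer samples, coupling adjacent levels through the same random input. A minimal sketch on a scalar toy problem (the approximation family and sample counts are illustrative, not the paper's RVE setup):

```python
import random

def mlmc_estimate(approx, n_per_level, rng):
    """Multilevel telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
    approx(level, u) evaluates the level-`level` approximation on random
    input u; coupling comes from reusing the same u at adjacent levels."""
    total = 0.0
    for level, n in enumerate(n_per_level):
        s = 0.0
        for _ in range(n):
            u = rng.random()
            p = approx(level, u)
            s += p if level == 0 else p - approx(level - 1, u)
        total += s / n
    return total

# Toy level-l approximation of E[u^2] with O(2^-l) bias; many cheap
# samples at level 0, few at the finest level.
approx = lambda level, u: (1.0 - 2.0 ** -(level + 1)) * u * u
est = mlmc_estimate(approx, [4000, 1000, 250], random.Random(3))
```

Because the level-l corrections shrink geometrically, so does their variance, which is why far fewer samples suffice at the expensive fine levels.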
Monte Carlo simulations in skin radiotherapy
International Nuclear Information System (INIS)
Sarvari, A.; Jeraj, R.; Kron, T.
2000-01-01
The primary goal of this work was to develop a procedure for calculating the appropriate filter shape for a brachytherapy applicator used in skin radiotherapy. In the applicator, a radioactive source is positioned close to the skin. Without a filter, the resultant dose distribution would be highly nonuniform; high uniformity is usually required, however. This can be achieved using an appropriately shaped filter, which flattens the dose profile. Because of the complexity of the transport and geometry, Monte Carlo simulations had to be used. A 192Ir high dose rate photon source was used. All necessary transport parameters were simulated with the MCNP4B Monte Carlo code. A highly efficient iterative procedure was developed, which enabled calculation of the optimal filter shape in only a few iterations. The initially nonuniform dose distributions became uniform to within a percent when the filter calculated by this procedure was applied. (author)
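The iterative idea can be illustrated with a deliberately simplified 1-D model: at each pass, thicken the filter where the dose exceeds the target by log(dose/target)/mu, assuming pure exponential attenuation. The real procedure drives the update with MCNP dose calculations rather than this analytic model; all numbers below are hypothetical:

```python
import math

def iterate_filter(unfiltered, target, mu, n_iter=5):
    """Adjust local filter thickness by log(dose/target)/mu each pass,
    assuming simple exponential attenuation exp(-mu * t)."""
    t = [0.0] * len(unfiltered)
    for _ in range(n_iter):
        dose = [d * math.exp(-mu * ti) for d, ti in zip(unfiltered, t)]
        t = [max(0.0, ti + math.log(di / target) / mu)
             for ti, di in zip(t, dose)]
    return t

profile = [1.5, 1.2, 1.0, 1.2, 1.5]   # hypothetical relative unfiltered dose
thickness = iterate_filter(profile, target=1.0, mu=0.5)
```

In this idealized model the profile is flat after one pass; with a Monte Carlo dose engine, scatter makes the update inexact, which is why a few iterations are needed.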
Coevolution Based Adaptive Monte Carlo Localization (CEAMCL)
Directory of Open Access Journals (Sweden)
Luo Ronghua
2008-11-01
An adaptive Monte Carlo localization algorithm based on the coevolution mechanism of ecological species is proposed. Samples are clustered into species, each of which represents a hypothesis of the robot's pose. Since the coevolution between species ensures that multiple distinct hypotheses can be tracked stably, the problem of premature convergence when using MCL in highly symmetric environments can be solved. The sample size can also be adjusted adaptively over time according to the uncertainty of the robot's pose, using a population growth model. In addition, by using the crossover and mutation operators of evolutionary computation, intra-species evolution can drive the samples toward the regions where the desired posterior density is large, so a small set of samples can represent the desired density well enough for precise localization. The new algorithm is termed coevolution based adaptive Monte Carlo localization (CEAMCL). Experiments have been carried out to demonstrate the efficiency of the new localization algorithm.
Calibration of moisture monitors
International Nuclear Information System (INIS)
Gutierrez, R.L.
1979-02-01
A method for calibrating an aluminum oxide hygrometer against an optical chilled-mirror dew-point hygrometer has been established. A theoretical cross-point line of dew points from both hygrometers and a maximum moisture content of 10 ppmv are used to define an area for calibrating the sensor probes of the aluminum oxide hygrometer.
DEFF Research Database (Denmark)
Yordanova, Ginka; Vesth, Allan
The report describes site calibration measurements carried out on a site in Denmark. The measurements are carried out in accordance with Ref. [1]. The site calibration is carried out before a power performance measurement on a given turbine to clarify the influence of the terrain on the ratio...
Topics in Statistical Calibration
2014-03-27
[Fragmentary excerpt] The text covers natural cubic spline calibration and a basic "calibrate" function ("the most basic calibration problem, the one often encountered in more advanced..."), and cites A. M. Mood, F. A. Graybill, and D. C. Boes, Introduction to the Theory of Statistics, McGraw-Hill, 1974.
Sandia WIPP calibration traceability
Energy Technology Data Exchange (ETDEWEB)
Schuhen, M.D. [Sandia National Labs., Albuquerque, NM (United States); Dean, T.A. [RE/SPEC, Inc., Albuquerque, NM (United States)
1996-05-01
This report summarizes the work performed to establish calibration traceability for the instrumentation used by Sandia National Laboratories at the Waste Isolation Pilot Plant (WIPP) during testing from 1980-1985. Identifying the calibration traceability is an important part of establishing a pedigree for the data and is part of the qualification of existing data. In general, the requirement states that the calibration of Measuring and Test equipment must have a valid relationship to nationally recognized standards or the basis for the calibration must be documented. Sandia recognized that just establishing calibration traceability would not necessarily mean that all QA requirements were met during the certification of test instrumentation. To address this concern, the assessment was expanded to include various activities.
Calibration and measurement of 210Pb using two independent techniques
International Nuclear Information System (INIS)
Villa, M.; Hurtado, S.; Manjon, G.; Garcia-Tenorio, R.
2007-01-01
An experimental procedure has been developed for rapid and accurate determination of the activity concentration of 210Pb in sediments by liquid scintillation counting (LSC). Additionally, an alternative technique using γ-spectrometry and Monte Carlo simulation has been developed. A radiochemical procedure, based on co-precipitation of radium and barium sulphates, has been applied to isolate the Pb isotopes. 210Pb activity measurements were made in a low-background scintillation spectrometer, a Quantulus 1220. A calibration of the liquid scintillation spectrometer, including its α/β discrimination system, has been performed in order to minimize background, and some improvements are suggested for the calculation of the 210Pb activity concentration, taking into account that the 210Pb counting efficiency cannot be accurately determined. Therefore, the use of an effective radiochemical yield, which can be evaluated empirically, is proposed. The 210Pb activity concentration in riverbed sediments from an area affected by NORM wastes has been determined using both of the proposed methods. Results using γ-spectrometry and LSC are compared to the results obtained with the indirect α-spectrometry (210Po) method.
Investigating the impossible: Monte Carlo simulations
International Nuclear Information System (INIS)
Kramer, Gary H.; Crowley, Paul; Burns, Linda C.
2000-01-01
Designing and testing new equipment can be an expensive and time-consuming process, or the desired performance characteristics may preclude its construction due to technological shortcomings. Cost may also prevent equipment from being purchased for other scenarios to be tested. An alternative is to use Monte Carlo simulations to make the investigations. This presentation exemplifies how Monte Carlo code calculations can be used to fill the gap. An example is given for the investigation of two sizes of germanium detector (70 mm and 80 mm diameter) at four different crystal thicknesses (15, 20, 25, and 30 mm), with predictions of how the size affects the counting efficiency and the Minimum Detectable Activity (MDA). The Monte Carlo simulations have shown that detector efficiencies can be adequately modelled using photon transport if the data are used to investigate trends. The investigation of the effect of detector thickness on the counting efficiency has shown that, for a fixed-diameter detector of either 70 mm or 80 mm, thickness is unimportant up to 60 keV. At higher photon energies, the counting efficiency begins to decrease as the thickness decreases, as expected. The simulations predict that the MDA of the 70 mm and 80 mm diameter detectors does not differ by more than a factor of 1.15 at 17 keV or 1.2 at 60 keV when comparing detectors of equivalent thickness. The MDA is slightly increased at 17 keV, and rises by about 52% at 660 keV, when the thickness is decreased from 30 mm to 15 mm. One could conclude from this information that the extra cost associated with the larger-area Ge detectors may not be justified for the slight improvement predicted in the MDA. (author)
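MDA comparisons of this kind typically rest on the Currie detection-limit formula, which combines the background counts with the simulated efficiency. A sketch with illustrative inputs (not the detectors of this study):

```python
import math

def mda_bq(background_counts, live_time_s, efficiency, gamma_yield=1.0):
    """Currie-style detection limit L_D = 2.71 + 4.65*sqrt(B),
    converted to activity via efficiency, emission probability and time."""
    l_d = 2.71 + 4.65 * math.sqrt(background_counts)
    return l_d / (efficiency * gamma_yield * live_time_s)

# Hypothetical numbers: 100 background counts under the peak region,
# a one-hour count, and a 5% full-energy-peak efficiency.
mda = mda_bq(background_counts=100.0, live_time_s=3600.0, efficiency=0.05)
```

Because efficiency sits in the denominator, a simulated efficiency change maps directly into the MDA ratios quoted in the abstract.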
Modeling the Efficiency of a Germanium Detector
Hayton, Keith; Prewitt, Michelle; Quarles, C. A.
2006-10-01
We are using the Monte Carlo Program PENELOPE and the cylindrical geometry program PENCYL to develop a model of the detector efficiency of a planar Ge detector. The detector is used for x-ray measurements in an ongoing experiment to measure electron bremsstrahlung. While we are mainly interested in the efficiency up to 60 keV, the model ranges from 10.1 keV (below the Ge absorption edge at 11.1 keV) to 800 keV. Measurements of the detector efficiency have been made in a well-defined geometry with calibrated radioactive sources: Co-57, Se-75, Ba-133, Am-241 and Bi-207. The model is compared with the experimental measurements and is expected to provide a better interpolation formula for the detector efficiency than simply using x-ray absorption coefficients for the major constituents of the detector. Using PENELOPE, we will discuss several factors, such as Ge dead layer, surface ice layer and angular divergence of the source, that influence the efficiency of the detector.
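One common way to turn a handful of calibrated-source efficiency points into an interpolation formula is piecewise log-log interpolation, since efficiency curves are close to straight lines on log-log axes over limited ranges. A sketch with hypothetical calibration points (not the measured values of this work):

```python
import math

def loglog_interp(energies, effs, e):
    """Piecewise log-log interpolation between calibration points,
    a common way to build an efficiency curve from discrete sources."""
    for (e0, f0), (e1, f1) in zip(zip(energies, effs),
                                  zip(energies[1:], effs[1:])):
        if e0 <= e <= e1:
            w = math.log(e / e0) / math.log(e1 / e0)
            return math.exp((1.0 - w) * math.log(f0) + w * math.log(f1))
    raise ValueError("energy outside calibrated range")

# Hypothetical (energy keV, efficiency) pairs for illustration.
eff = loglog_interp([59.5, 122.1, 661.7], [0.032, 0.041, 0.012], 122.1)
```

A PENELOPE-based model can then be checked against such an interpolation at the measured source energies.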
Low-cost programmable pulse generator for particle telescope calibration
Sanchez, S; Seisdedos, M; Meziat, D; Carbajo, M; Medina, J; Bronchalo, E; Peral, L D; Rodríguez-Pacheco, J
1999-01-01
In this paper we present a new calibration system for particle telescopes, including a multipulse generator and a digital controller. The calibration system generates synchronized pulses of variable height for every detector channel of the telescope. The control system is based on a commercial microcontroller linked to a personal computer through a bidirectional RS-232 line. The aim of the device is to perform laboratory calibration of multi-detector telescopes prior to calibration at an accelerator. This task includes evaluation of the linearity and resolution of each detector channel, as well as of the coincidence logic. The heights of the pulses sent to the detectors are obtained by Monte Carlo simulation of the telescope response to a particle flux of any desired geometry and composition.
Monte Carlo Methods in Physics
International Nuclear Information System (INIS)
Santoso, B.
1997-01-01
The method of Monte Carlo integration is reviewed briefly and some of its applications in physics are explained. A numerical experiment on the random generators used in Monte Carlo techniques is carried out to show the behavior of the randomness of various methods of generating them. To account for the weight function involved in Monte Carlo, the Metropolis method is used. From the results of the experiment, one can see that there are no regular patterns in the numbers generated, showing that the program generators are reasonably good, while the experimental results show a statistical distribution obeying the expected statistical distribution law. Some applications of Monte Carlo methods in physics are then given. The physical problems are chosen such that the models have available solutions, either exact or approximate, with which the Monte Carlo calculations can be compared. The comparisons show that, for the models considered, good agreement has been obtained.
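The simplest of the randomness checks alluded to above is a Pearson chi-square test of a generator's output against the uniform distribution. A minimal sketch (the bin count and sample size are arbitrary choices, not taken from the paper):

```python
import random

def chi_square_uniform(samples, bins=10):
    """Pearson chi-square statistic of [0,1) samples against uniformity;
    for a good generator it should hover near bins - 1."""
    counts = [0] * bins
    for u in samples:
        counts[min(int(u * bins), bins - 1)] += 1
    expected = len(samples) / bins
    return sum((c - expected) ** 2 / expected for c in counts)

rng = random.Random(42)
chi2 = chi_square_uniform([rng.random() for _ in range(10_000)])
```

A statistic far above the chi-square critical value (about 16.9 at the 5% level for 9 degrees of freedom) would flag a suspect generator.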
Calibration of ALIBAVA readout system
Energy Technology Data Exchange (ETDEWEB)
Trofymov, Artur [DESY, Hamburg (Germany); Collaboration: ATLAS experiment-Collaboration
2015-07-01
The High Luminosity Large Hadron Collider (HL-LHC) is the upgrade of the LHC that is foreseen to increase the instantaneous luminosity by a factor of ten, with a total integrated luminosity of 3000 fb⁻¹. The ATLAS experiment will need to build a new tracker to operate in the severe HL-LHC conditions (increasing detector granularity to cope with much higher channel occupancy, and designing radiation-hard sensors and electronics to cope with radiation damage). The charge collection efficiency (CCE) of silicon strip sensors for the new ATLAS tracker can be measured with the ALIBAVA analog readout system (an analog system gives more information about the signal from all strips than a digital one). In this work, preliminary results of the ALIBAVA calibration using two different methods (with "source data" and with "calibration data") are presented. The calibration constant obtained by these methods is necessary for knowing the charge collected on the silicon strip sensors and for comparing it with measurements made at the test beam.
Compact radiometric microwave calibrator
International Nuclear Information System (INIS)
Fixsen, D. J.; Wollack, E. J.; Kogut, A.; Limon, M.; Mirel, P.; Singal, J.; Fixsen, S. M.
2006-01-01
The calibration methods for the ARCADE II instrument are described and their accuracy estimated. The Steelcast-coated aluminum cones that comprise the calibrator have low reflection while maintaining 94% of the absorber volume within 5 mK of the base temperature (modeled). The calibrator demonstrates an absorber whose active part is less than one wavelength thick and only marginally larger than the mouth of the largest horn, yet black (less than -40 dB, or 0.01%, reflection) over five octaves in frequency.
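The -40 dB figure and the 0.01% reflection quoted above are the same number in different units; the conversion from a dB level to a power fraction is a one-liner (a sketch, not code from the paper):

```python
def reflected_power_fraction(level_db):
    """Convert a reflection level in dB to a power fraction:
    fraction = 10 ** (dB / 10)."""
    return 10.0 ** (level_db / 10.0)

frac = reflected_power_fraction(-40.0)  # 1e-4, i.e. 0.01%
```

The same conversion shows why each additional -10 dB of absorber performance cuts the reflected power by another factor of ten.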
DEFF Research Database (Denmark)
Fernandez Garcia, Sergio; Villanueva, Héctor
This report presents the result of the lidar-to-lidar calibration performed for a ground-based lidar. Calibration is here understood as the establishment of a relation between the reference lidar wind speed measurements, with measurement uncertainties provided by a measurement standard, and the corresponding lidar wind speed indications with associated measurement uncertainties. The lidar calibration concerns the 10-minute mean wind speed measurements. The comparison of the lidar measurements of the wind direction with those from the reference lidar measurements is given for information only.
Biased Monte Carlo optimization: the basic approach
International Nuclear Information System (INIS)
Campioni, Luca; Scardovelli, Ruben; Vestrucci, Paolo
2005-01-01
It is well known that the Monte Carlo method is very successful in tackling several kinds of system simulations. It often happens that one has to deal with rare events, and the use of a variance reduction technique is then almost mandatory in order to obtain efficient Monte Carlo applications. The main issue associated with variance reduction techniques is the choice of the value of the biasing parameter. In practice, this task is typically left to the experience of the Monte Carlo user, who has to make many attempts before achieving an advantageous biasing. A valuable result is provided: a methodology and a practical rule for establishing a priori guidance on the choice of the optimal value of the biasing parameter. This result, which has been obtained for a single-component system, has the notable property of being valid for any multicomponent system. In particular, in this paper, the exponential and uniform biases of exponentially distributed phenomena are investigated thoroughly.
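The exponential biasing discussed above can be sketched with importance sampling: estimate P(X > threshold) for an exponential X by drawing from a biased exponential with a smaller rate, so the rare region is hit often, and reweighting each hit by the likelihood ratio. The specific rates below are illustrative, not the paper's optimal choice:

```python
import math
import random

def rare_event_is(threshold, rate, bias_rate, n, seed=4):
    """Estimate P(X > threshold) for X ~ Exp(rate) by sampling from the
    biased Exp(bias_rate) and reweighting with the likelihood ratio
    w(x) = (rate / bias_rate) * exp((bias_rate - rate) * x)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(bias_rate)
        if x > threshold:
            total += (rate / bias_rate) * math.exp((bias_rate - rate) * x)
    return total / n

# True probability is exp(-10) ~ 4.5e-5; unbiased sampling would need
# millions of histories, the biased estimator only thousands.
p = rare_event_is(threshold=10.0, rate=1.0, bias_rate=0.1, n=20_000)
```

The quality of the estimate hinges on `bias_rate`, which is exactly the biasing-parameter choice the paper's a priori rule addresses.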
International Nuclear Information System (INIS)
Grau, A.; Navarro, N.; Rodriguez, L.; Alvarez, A.; Salvador, S.; Diaz, C.
1996-01-01
The beta-gamma emitters 60Co, 137Cs, 131I, 210Pb and 129I are radionuclides for which calibration by the CIEMAT/NIST method is possible with uncertainties of less than 1%. From standardized solutions of these radionuclides, we prepared samples in 20 ml vials. We obtained the calibration curves, efficiency as a function of energy, for two germanium detectors. (Author) 5 refs
Mielke, Steven L; Dinpajooh, Mohammadhasan; Siepmann, J Ilja; Truhlar, Donald G
2013-01-07
We present a procedure to calculate ensemble averages, thermodynamic derivatives, and coordinate distributions by effective classical potential methods. In particular, we consider the displaced-points path integral (DPPI) method, which yields exact quantal partition functions and ensemble averages for a harmonic potential and approximate quantal ones for general potentials, and we discuss the implementation of the new procedure in two Monte Carlo simulation codes, one that uses uncorrelated samples to calculate absolute free energies, and another that employs Metropolis sampling to calculate relative free energies. The results of the new DPPI method are compared to those from accurate path integral calculations as well as to results of two other effective classical potential schemes for the case of an isolated water molecule. In addition to the partition function, we consider the heat capacity and expectation values of the energy, the potential energy, the bond angle, and the OH distance. We also consider coordinate distributions. The DPPI scheme performs best among the three effective potential schemes considered and achieves very good accuracy for all of the properties considered. A key advantage of the effective potential schemes is that they display much lower statistical sampling variances than those for accurate path integral calculations. The method presented here shows great promise for including quantum effects in calculations on large systems.
Federal Laboratory Consortium — This facility is for low altitude subsonic altimeter system calibrations of air vehicles. Mission is a direct support of the AFFTC mission. Postflight data merge is...
U.S. Environmental Protection Agency — an UV calibration curve for SRHA quantitation. This dataset is associated with the following publication: Chang, X., and D. Bouchard. Surfactant-Wrapped Multiwalled...
International Nuclear Information System (INIS)
Zhang Bingyun; Li Xiaonan; Zhu Kejun; Zhang Jiawen; Gong Mingyu
2003-01-01
We constructed the BES (Beijing Spectrometer) online calibration system to ensure the consistency of the readout electronics channels, given the huge data volume in a high energy physics experiment. This paper describes the structure of the hardware and software, and their characteristics and functions
International Nuclear Information System (INIS)
Ahlers, C.F.; Liu, H.H.
2001-01-01
The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the AMR Development Plan for U0035 Calibrated Properties Model REV00 (CRWMS M and O 1999c). These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models, as well as models used by Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and to predict flow and transport behavior under a variety of climatic and thermal-loading conditions
International Nuclear Information System (INIS)
Ahlers, C.; Liu, H.
2000-01-01
The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the AMR Development Plan for U0035 Calibrated Properties Model REV00. These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models, as well as models used by Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and to predict flow and transport behavior under a variety of climatic and thermal-loading conditions
Directory of Open Access Journals (Sweden)
Patterson E.
2010-06-01
Full Text Available The results are presented using the procedure outlined by the Standardisation Project for Optical Techniques of Strain measurement to calibrate a digital image correlation system. The process involves comparing the experimental data obtained with the optical measurement system to the theoretical values for a specially designed specimen. The standard states the criteria which must be met in order to achieve successful calibration, in addition to quantifying the measurement uncertainty in the system. The system was evaluated at three different displacement load levels, generating strain ranges from 289 µstrain to 2110 µstrain. At the 289 µstrain range, the calibration uncertainty was found to be 14.1 µstrain, and at the 2110 µstrain range it was found to be 28.9 µstrain. This calibration procedure was performed without painting a speckle pattern on the surface of the metal. Instead, the specimen surface was prepared using different grades of grit paper to produce the desired texture.
Monte Carlo Simulation of an American Option
Directory of Open Access Journals (Sweden)
Gikiri Thuo
2007-04-01
Full Text Available We implement gradient estimation techniques for sensitivity analysis of option pricing which can be efficiently employed in Monte Carlo simulation. Using these techniques we can simultaneously obtain an estimate of the option value together with the estimates of sensitivities of the option value to various parameters of the model. After deriving the gradient estimates we incorporate them in an iterative stochastic approximation algorithm for pricing an option with early exercise features. We illustrate the procedure using an example of an American call option with a single dividend that is analytically tractable. In particular we incorporate estimates for the gradient with respect to the early exercise threshold level.
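The gradient-estimation idea the abstract describes can be illustrated with a pathwise (infinitesimal perturbation analysis) estimator for a plain European call, a simpler cousin of the American-option setting treated by the author; the function name and parameter values below are illustrative, not the paper's code:

```python
import math
import random

def mc_call_price_and_delta(s0, k, r, sigma, t, n_paths, seed=0):
    """Monte Carlo price of a European call together with a pathwise estimate
    of its delta.  Terminal price: S_T = S0*exp((r - sigma^2/2)*t + sigma*sqrt(t)*Z);
    the pathwise derivative dS_T/dS0 = S_T/S0 yields the unbiased delta
    estimator exp(-r*t) * 1{S_T > K} * S_T/S0 alongside the price estimate."""
    rng = random.Random(seed)
    disc = math.exp(-r * t)
    price_sum = delta_sum = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        st = s0 * math.exp((r - 0.5 * sigma ** 2) * t + sigma * math.sqrt(t) * z)
        price_sum += disc * max(st - k, 0.0)
        if st > k:                      # payoff differentiable off {S_T = K}
            delta_sum += disc * st / s0
    return price_sum / n_paths, delta_sum / n_paths

price, delta = mc_call_price_and_delta(100.0, 100.0, 0.05, 0.2, 1.0, 200_000)
```

For these parameters the Black-Scholes values are about 10.45 and 0.637, which the two estimates approach as the path count grows; in the paper's setting such gradient estimates feed a stochastic approximation loop that searches for the early exercise threshold.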
Atomistic Monte Carlo simulation of lipid membranes
DEFF Research Database (Denmark)
Wüstner, Daniel; Sklenar, Heinz
2014-01-01
Biological membranes are complex assemblies of many different molecules whose analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction into the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate for a concrete example how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches.
Monte Carlo modelling for neutron guide losses
International Nuclear Information System (INIS)
Cser, L.; Rosta, L.; Toeroek, Gy.
1989-09-01
In modern research reactors, neutron guides are commonly used for beam conducting. The neutron guide is a well polished or equivalently smooth glass tube covered inside by a sputtered or evaporated film of natural Ni or the 58 Ni isotope, in which the neutrons are totally reflected. A Monte Carlo calculation was carried out to establish the real efficiency and the spectral as well as spatial distribution of the neutron beam at the end of a glass mirror guide. The losses caused by mechanical inaccuracy and mirror quality were considered, and the effects due to the geometrical arrangement were analyzed. (author) 2 refs.; 2 figs
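A toy two-dimensional version of such a guide-loss calculation can be sketched as follows; the geometry, critical angle, and reflectivity values are assumptions for illustration, not the parameters used by the authors:

```python
import math
import random

def guide_transmission(length_m, width_m, theta_c, reflectivity, n=50_000, seed=1):
    """2-D toy model of a straight mirror guide: a neutron enters at a random
    transverse position with a random divergence angle; it is lost if the
    grazing angle at a wall exceeds the critical angle theta_c, and otherwise
    each bounce succeeds with probability `reflectivity`."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n):
        x = rng.uniform(0.0, width_m)                        # transverse position (m)
        theta = rng.uniform(-2.0 * theta_c, 2.0 * theta_c)   # divergence (rad)
        z = 0.0
        alive = True
        while alive and z < length_m:
            if abs(theta) < 1e-12:
                break                                        # travels straight through
            # axial distance to the next wall contact
            dz = (width_m - x) / math.tan(theta) if theta > 0 else x / -math.tan(theta)
            if z + dz >= length_m:
                break                                        # exits before the next bounce
            z += dz
            if abs(theta) > theta_c or rng.random() > reflectivity:
                alive = False                                # not reflected, or absorbed
            else:
                x = width_m if theta > 0 else 0.0            # bounce off the wall
                theta = -theta
        if alive:
            transmitted += 1
    return transmitted / n

t_perfect = guide_transmission(10.0, 0.05, 0.02, 1.00)   # ideal mirror
t_lossy = guide_transmission(10.0, 0.05, 0.02, 0.90)     # imperfect coating
```

Comparing the two runs shows how a sub-unity reflectivity (mirror quality) eats into the transmission, the kind of loss the abstract's full calculation quantifies together with spectral and spatial effects.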
Directory of Open Access Journals (Sweden)
Pozhitkov Alexander E
2010-07-01
Full Text Available Abstract Background Calibration of a microarray scanner is critical for accurate interpretation of microarray results. Shi et al. (BMC Bioinformatics, 2005, 6, Art. No. S11 Suppl. 2 reported usage of a Full Moon BioSystems slide for calibration. Inspired by the Shi et al. work, we have calibrated microarray scanners in our previous research. We were puzzled, however, that most of the signal intensities from a biological sample fell below the sensitivity threshold level determined by the calibration slide. This conundrum led us to re-investigate the quality of calibration provided by the Full Moon BioSystems slide, as well as the accuracy of the analysis performed by Shi et al. Methods Signal intensities were recorded on three different microarray scanners at various photomultiplier gain levels using the same calibration slide from Full Moon BioSystems. Data analysis was conducted on raw signal intensities without normalization or transformation of any kind. A weighted least-squares method was used to fit the data. Results We found that the initial analysis performed by Shi et al. did not take into account autofluorescence of the Full Moon BioSystems slide, which led to a grossly distorted microarray scanner response. Our analysis revealed that a power-law function, which explicitly accounts for the slide autofluorescence, perfectly described the relationship between signal intensities and fluorophore quantities. Conclusions Microarray scanners respond in a much less distorted fashion than was reported by Shi et al. Full Moon BioSystems calibration slides are inadequate for performing calibration. We recommend against using these slides.
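The power-law-with-offset model S = a*Q**b + c from the Results can be fitted with a simple offset scan plus log-linear least squares. This is a minimal stdlib sketch on synthetic data, not the authors' weighted fit:

```python
import math

def fit_power_law_with_offset(q, s, c_grid):
    """Fit S = a*Q**b + c, where c models slide autofluorescence.  For each
    candidate offset c, linearise log(S - c) = log(a) + b*log(Q), solve by
    ordinary least squares, and keep the c with the smallest residual sum of
    squares in log space."""
    best = None
    for c in c_grid:
        if any(si <= c for si in s):
            continue  # offset must stay below every signal value
        xs = [math.log(qi) for qi in q]
        ys = [math.log(si - c) for si in s]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
        loga = my - b * mx
        sse = sum((y - (loga + b * x)) ** 2 for x, y in zip(xs, ys))
        if best is None or sse < best[0]:
            best = (sse, math.exp(loga), b, c)
    _, a, b, c = best
    return a, b, c

# synthetic scanner data: a=2.0, b=0.8, autofluorescence offset c=5.0
q = [1, 2, 5, 10, 20, 50, 100]
s = [2.0 * qi ** 0.8 + 5.0 for qi in q]
a, b, c = fit_power_law_with_offset(q, s, [i * 0.5 for i in range(12)])
```

With noise-free synthetic data the fit recovers the generating parameters exactly; on real scans, omitting the offset c is precisely the mistake the authors attribute to the earlier analysis.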
Calibration of thermoluminescent materials
International Nuclear Information System (INIS)
Bos, A.J.J.
1989-07-01
In this report the relation between exposure and absorbed radiation dose in various materials is presented, on the basis of recent data. From this, a calibration procedure for thermoluminescent materials is derived, adapted to the IRI radiation standard, which is still the exposure in roentgen. On switching to the air kerma standard, the calibration procedure will have to be adapted. (author). 6 refs.; 4 tabs
Lectures on Monte Carlo methods
Madras, Neal
2001-01-01
Monte Carlo methods form an experimental branch of mathematics that employs simulations driven by random number generators. These methods are often used when others fail, since they are much less sensitive to the "curse of dimensionality", which plagues deterministic methods in problems with a large number of variables. Monte Carlo methods are used in many fields: mathematics, statistics, physics, chemistry, finance, computer science, and biology, for instance. This book is an introduction to Monte Carlo methods for anyone who would like to use these methods to study various kinds of mathematical models.
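The "curse of dimensionality" point can be made concrete with the textbook hit-or-miss estimator, whose statistical error shrinks as 1/sqrt(n) regardless of dimension; a minimal sketch:

```python
import random

def mc_unit_ball_volume(dim, n, seed=0):
    """Estimate the volume of the d-dimensional unit ball: sample points
    uniformly in the cube [-1, 1]^d and scale the hit fraction by the cube
    volume 2^d.  The error decreases as 1/sqrt(n) in any dimension."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(n)
        if sum(rng.uniform(-1.0, 1.0) ** 2 for _ in range(dim)) <= 1.0
    )
    return (2.0 ** dim) * hits / n

vol2 = mc_unit_ball_volume(2, 100_000)   # ~ pi
vol3 = mc_unit_ball_volume(3, 100_000)   # ~ 4*pi/3
```

A deterministic grid with the same point budget degrades rapidly as dim grows, which is the contrast the blurb alludes to.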
Energy Technology Data Exchange (ETDEWEB)
Courtney, M.
2013-01-15
Nacelle mounted, forward looking wind lidars are beginning to be used to provide reference wind speed measurements for the power performance testing of wind turbines. In such applications, a formal calibration procedure with a corresponding uncertainty assessment will be necessary. This report presents four concepts for performing such a nacelle lidar calibration. Of the four methods, two are found to be immediately relevant and are pursued in some detail. The first of these is a line of sight calibration method in which both lines of sight (for a two beam lidar) are individually calibrated by accurately aligning the beam to pass close to a reference wind speed sensor. A testing procedure is presented, reporting requirements outlined and the uncertainty of the method analysed. It is seen that the main limitation of the line of sight calibration method is the time required to obtain a representative distribution of radial wind speeds. An alternative method is to place the nacelle lidar on the ground and incline the beams upwards to bisect a mast equipped with reference instrumentation at a known height and range. This method will be easier and faster to implement and execute but the beam inclination introduces extra uncertainties. A procedure for conducting such a calibration is presented and initial indications of the uncertainties given. A discussion of the merits and weaknesses of the two methods is given together with some proposals for the next important steps to be taken in this work. (Author)
Geometry and Dynamics for Markov Chain Monte Carlo
Barp, Alessandro; Briol, François-Xavier; Kennedy, Anthony D.; Girolami, Mark
2018-03-01
Markov Chain Monte Carlo methods have revolutionised mathematical computation and enabled statistical inference within many previously intractable models. In this context, Hamiltonian dynamics have been proposed as an efficient way of building chains which can explore probability densities efficiently. The method emerges from physics and geometry and these links have been extensively studied by a series of authors through the last thirty years. However, there is currently a gap between the intuitions and knowledge of users of the methodology and our deep understanding of these theoretical foundations. The aim of this review is to provide a comprehensive introduction to the geometric tools used in Hamiltonian Monte Carlo at a level accessible to statisticians, machine learners and other users of the methodology with only a basic understanding of Monte Carlo methods. This will be complemented with some discussion of the most recent advances in the field which we believe will become increasingly relevant to applied scientists.
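The Hamiltonian dynamics the review builds on reduce, in the simplest case, to a leapfrog integrator wrapped in a Metropolis correction; a minimal sketch for a one-dimensional standard normal target (all parameter values are illustrative, not from the paper):

```python
import math
import random

def hmc_standard_normal(n_samples, step=0.2, n_leapfrog=10, seed=0):
    """Hamiltonian Monte Carlo targeting a 1-D standard normal.  The potential
    is U(q) = q^2/2 (so grad U = q); momentum is resampled every iteration and
    the trajectory is integrated with the leapfrog scheme, then accepted or
    rejected based on the total-energy change."""
    rng = random.Random(seed)
    q = 0.0
    samples = []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)
        q_new, p_new = q, p
        # leapfrog integration of Hamilton's equations
        p_new -= 0.5 * step * q_new
        for _ in range(n_leapfrog - 1):
            q_new += step * p_new
            p_new -= step * q_new
        q_new += step * p_new
        p_new -= 0.5 * step * q_new
        # Metropolis accept/reject on the Hamiltonian (energy) change
        h_old = 0.5 * q * q + 0.5 * p * p
        h_new = 0.5 * q_new * q_new + 0.5 * p_new * p_new
        if rng.random() < math.exp(min(0.0, h_old - h_new)):
            q = q_new
        samples.append(q)
    return samples

draws = hmc_standard_normal(20_000)
```

The geometric machinery surveyed in the review generalises exactly this picture: the leapfrog step is a symplectic integrator, and curvature-aware variants replace the identity mass matrix with a position-dependent metric.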
Estimation of photon energy distribution in gamma calibration field
International Nuclear Information System (INIS)
Takahashi, Fumiaki; Shimizu, Shigeru; Yamaguchi, Yasuhiro
1997-03-01
Photon survey instruments used for radiation protection are usually calibrated at gamma radiation fields, which are traceable to the national standard with regard to exposure. Whereas scattered radiations as well as primary gamma-rays exist in the calibration field, no consideration of the effect of the scattered radiations on the energy distribution is given in routine calibration work. The scattered radiations can change photon energy spectra in the field, and this can result in misinterpretations of energy-dependent instrument responses. Construction materials in the field affect the energy distribution and magnitude of the scattered radiations. The geometric relationship between a gamma source and an instrument can determine the energy distribution at the calibration point. It is therefore essential for the assurance of quality calibration to estimate the energy spectra at the gamma calibration fields. Accordingly, photon energy distributions at some fields in the Facility of Radiation Standard of the Japan Atomic Energy Research Institute (JAERI) were estimated by measurements using a NaI(Tl) detector and by Monte Carlo calculations. It was found that the use of a collimator gives a different feature in the photon energy distribution. The origin of the scattered radiations and the ratio of the scattered radiations to the primary gamma-rays were obtained. The results can help to improve the calibration of photon survey instruments in the JAERI. (author)
Monte Carlo simulation experiments on box-type radon dosimeter
International Nuclear Information System (INIS)
Jamil, Khalid; Kamran, Muhammad; Illahi, Ahsan; Manzoor, Shahid
2014-01-01
Epidemiological studies show that inhalation of radon gas ( 222 Rn) may be carcinogenic, especially to mine workers, people living in closed indoor energy-conserved environments, and underground dwellers. It is, therefore, of paramount importance to measure the 222 Rn concentrations (Bq/m 3 ) in indoor environments. For this purpose, box-type passive radon dosimeters employing an ion track detector such as CR-39 are widely used. The fraction of the radon alphas emitted in the volume of the box-type dosimeter that results in latent track formation on the CR-39 is the latent track registration efficiency. The latent track registration efficiency is ultimately required to evaluate the radon concentration, which in turn determines the effective dose and the radiological hazards. In this research, Monte Carlo simulation experiments were carried out to study the alpha latent track registration efficiency for a box-type radon dosimeter as a function of the dosimeter's dimensions and the range of alpha particles in air. Two different self-developed Monte Carlo simulation techniques were employed, namely: (a) the surface ratio (SURA) method and (b) the ray hitting (RAHI) method. The Monte Carlo simulation experiments revealed that there are two types of efficiencies, i.e. intrinsic efficiency (η int ) and alpha hit efficiency (η hit ). The η int depends only upon the dimensions of the dosimeter, whereas η hit depends upon both the dimensions of the dosimeter and the range of the alpha particles. The total latent track registration efficiency is the product of the intrinsic and hit efficiencies. It has been concluded that if the diagonal length of the box-type dosimeter is kept smaller than the range of the alpha particle, a hit efficiency of 100% is achieved; nevertheless, the intrinsic efficiency still plays its role. The Monte Carlo simulation results have been found helpful for understanding the intricate track registration mechanisms in the box-type dosimeter.
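The ray-hitting idea can be sketched by sampling decay points and isotropic emission directions and checking whether each alpha reaches the detector face within its range; the box dimensions and range below are illustrative, and this is not the authors' SURA/RAHI code:

```python
import math
import random

def hit_efficiency(lx, ly, lz, alpha_range, n=100_000, seed=0):
    """Ray-hitting estimate for a box dosimeter: sample decay points uniformly
    in the box volume and isotropic emission directions, then score a hit when
    the ray reaches the detector foil on the bottom face (z = 0) within the
    alpha range in air.  All lengths share one unit (here: cm)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y, z = rng.uniform(0, lx), rng.uniform(0, ly), rng.uniform(0, lz)
        cos_t = rng.uniform(-1.0, 1.0)           # isotropic: uniform cos(theta)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        dx, dy, dz = sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t
        if dz >= 0.0:
            continue                              # moving away from the detector
        t = -z / dz                               # path length to the z = 0 plane
        if t > alpha_range:
            continue                              # alpha stops before arriving
        xh, yh = x + t * dx, y + t * dy
        if 0.0 <= xh <= lx and 0.0 <= yh <= ly:
            hits += 1
    return hits / n

eff_short = hit_efficiency(4.0, 4.0, 4.0, 3.0)    # range shorter than the box
eff_long = hit_efficiency(4.0, 4.0, 4.0, 30.0)    # range exceeds the diagonal
```

Once the range exceeds the box diagonal the efficiency saturates at its geometric maximum, mirroring the paper's conclusion about the 100% hit efficiency.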
A New Automated Instrument Calibration Facility at the Savannah River Site
International Nuclear Information System (INIS)
Polz, E.; Rushton, R.O.; Wilkie, W.H.; Hancock, R.C.
1998-01-01
The Health Physics Instrument Calibration Facility at the Savannah River Site in Aiken, SC was expressly designed and built to calibrate portable radiation survey instruments. The facility incorporates recent advances in automation technology, building layout and construction, and computer software to improve the calibration process. Nine new calibration systems automate instrument calibration and data collection. The building is laid out so that instruments are moved from one area to another in a logical, efficient manner. New software and hardware integrate all functions such as shipping/receiving, work flow, calibration, testing, and report generation. Benefits include a streamlined and integrated program, improved efficiency, reduced errors, and better accuracy
Advanced Multilevel Monte Carlo Methods
Jasra, Ajay; Law, Kody; Suciu, Carina
2017-01-01
This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature and we describe different strategies which facilitate the application of MLMC within these methods.
Advanced Multilevel Monte Carlo Methods
Jasra, Ajay
2017-04-24
This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature and we describe different strategies which facilitate the application of MLMC within these methods.
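The telescoping representation at the heart of MLMC can be sketched on a toy problem: estimating E[S_T] for geometric Brownian motion with Euler discretizations, where fine and coarse paths on each level share the same Brownian increments (this is the classical coupled case the abstracts say the ideas originated from; all parameters are illustrative):

```python
import math
import random

def euler_gbm(s0, mu, sigma, increments, dt):
    """One Euler path of dS = mu*S dt + sigma*S dW, given Brownian increments."""
    s = s0
    for dw in increments:
        s += mu * s * dt + sigma * s * dw
    return s

def mlmc_gbm_mean(s0, mu, sigma, t, max_level, n_per_level, seed=0):
    """Multilevel Monte Carlo estimate of E[S_t]: level l uses 2**l Euler
    steps, and each level-l correction sample drives the fine (2**l) and
    coarse (2**(l-1)) paths with the SAME Brownian increments, so the
    corrections have small variance and the telescoping sum converges cheaply."""
    rng = random.Random(seed)
    estimate = 0.0
    for level in range(max_level + 1):
        n_fine = 2 ** level
        dt = t / n_fine
        level_sum = 0.0
        for _ in range(n_per_level):
            dws = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(n_fine)]
            fine = euler_gbm(s0, mu, sigma, dws, dt)
            if level == 0:
                level_sum += fine
            else:
                # coarse increments are pairwise sums of the fine ones
                coarse_dws = [dws[2 * i] + dws[2 * i + 1] for i in range(n_fine // 2)]
                level_sum += fine - euler_gbm(s0, mu, sigma, coarse_dws, 2.0 * dt)
        estimate += level_sum / n_per_level
    return estimate

est = mlmc_gbm_mean(1.0, 0.05, 0.2, 1.0, 5, 20_000)   # exact mean is e**0.05
```

The MCMC and SMC strategies reviewed in these articles address exactly the situation where such coupled exact sampling of (fine, coarse) pairs is unavailable.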
Theoretical calibration of a NaI(Tl) detector for the measurement of 131I in a thyroid phantom
International Nuclear Information System (INIS)
Venturini, Luzia
2002-01-01
This paper describes the theoretical calibration of a NaI(Tl) detector using the Monte Carlo method, for the measurement of 131 I in a thyroid phantom. The thyroid is represented by the region between two concentric cylinders, where the inner one represents the trachea and the outer one represents the neck. 133 Ba was used for the experimental calibration. The results show that the calibration procedure is suitable for 131 I measurements. (author)
Monte Carlo simulation of mixed neutron-gamma radiation fields and dosimetry devices
International Nuclear Information System (INIS)
Zhang, Guoqing
2011-01-01
Monte Carlo methods based on random sampling are widely used in different fields for their capability of solving problems with a large number of coupled degrees of freedom. In this work, Monte Carlo methods are successfully applied to the simulation of the mixed neutron-gamma field in an interim storage facility and of neutron dosimeters of different types. Details are discussed in two parts: In the first part, the method of simulating an interim storage facility loaded with CASTORs is presented. The size of a CASTOR is rather large (several meters) and the CASTOR wall is very thick (tens of centimeters). Obtaining dose rates outside a CASTOR with reasonable errors usually costs hours or even days, and the simulation of a large number of CASTORs in an interim storage facility needs weeks or even months to finish a calculation. Variance reduction techniques were used to reduce the calculation time and to achieve reasonable relative errors. Source clones were applied to avoid unnecessary repeated calculations. In addition, the simulations were performed on a cluster system. With the calculation techniques discussed above, the efficiency of the calculations can be improved considerably. In the second part, the methods of simulating the response of neutron dosimeters are presented. An Alnor albedo dosimeter was modelled in MCNP and simulated in the facility to calculate the calibration factor, i.e. the evaluated response to a Cf-252 source. The angular response of Makrofol detectors to fast neutrons has also been investigated. As a kind of SSNTD, Makrofol can detect fast neutrons by recording neutron-induced heavy charged recoils. To obtain the information on the charged recoils, general-purpose Monte Carlo codes were used to transport the incident neutrons. The response of Makrofol to fast neutrons depends on several factors. Based on the parameters which affect the track revealing, the formation of visible tracks was determined.
Monte Carlo simulation of mixed neutron-gamma radiation fields and dosimetry devices
Energy Technology Data Exchange (ETDEWEB)
Zhang, Guoqing
2011-12-22
Monte Carlo methods based on random sampling are widely used in different fields for their capability of solving problems with a large number of coupled degrees of freedom. In this work, Monte Carlo methods are successfully applied to the simulation of the mixed neutron-gamma field in an interim storage facility and of neutron dosimeters of different types. Details are discussed in two parts: In the first part, the method of simulating an interim storage facility loaded with CASTORs is presented. The size of a CASTOR is rather large (several meters) and the CASTOR wall is very thick (tens of centimeters). Obtaining dose rates outside a CASTOR with reasonable errors usually costs hours or even days, and the simulation of a large number of CASTORs in an interim storage facility needs weeks or even months to finish a calculation. Variance reduction techniques were used to reduce the calculation time and to achieve reasonable relative errors. Source clones were applied to avoid unnecessary repeated calculations. In addition, the simulations were performed on a cluster system. With the calculation techniques discussed above, the efficiency of the calculations can be improved considerably. In the second part, the methods of simulating the response of neutron dosimeters are presented. An Alnor albedo dosimeter was modelled in MCNP and simulated in the facility to calculate the calibration factor, i.e. the evaluated response to a Cf-252 source. The angular response of Makrofol detectors to fast neutrons has also been investigated. As a kind of SSNTD, Makrofol can detect fast neutrons by recording neutron-induced heavy charged recoils. To obtain the information on the charged recoils, general-purpose Monte Carlo codes were used to transport the incident neutrons. The response of Makrofol to fast neutrons depends on several factors. Based on the parameters which affect the track revealing, the formation of visible tracks was determined.
Considerations of MCNP Monte Carlo code to be used as a radiotherapy treatment planning tool.
Juste, B; Miro, R; Gallardo, S; Verdu, G; Santos, A
2005-01-01
The present work has simulated the photon and electron transport in a Theratron 780® (MDS Nordion) 60Co radiotherapy unit, using the Monte Carlo transport code MCNP (Monte Carlo N-Particle). This work mainly explains the different methodologies carried out to speed up calculations in order to apply this code efficiently in radiotherapy treatment planning.
Monte Carlo analysis of a control technique for a tunable white lighting system
DEFF Research Database (Denmark)
Chakrabarti, Maumita; Thorseth, Anders; Jepsen, Jørgen
2017-01-01
A simulated colour control mechanism for a multi-coloured LED lighting system is presented. The system achieves adjustable and stable white light output and allows for system-to-system reproducibility after application of the control mechanism. The control unit works using a pre-calibrated lookup table for an experimentally realized system, with a calibrated tristimulus colour sensor. A Monte Carlo simulation is used to examine the system performance concerning the variation of luminous flux and chromaticity of the light output. The inputs to the Monte Carlo simulation are variations of the LED peak wavelength, the LED rated luminous flux bin, the influence of the operating conditions, ambient temperature, driving current, and the spectral response of the colour sensor. The system performance is investigated by evaluating the outputs from the Monte Carlo simulation.
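The Monte Carlo tolerance-analysis idea can be sketched with a toy model in which per-channel LED flux varies within its bin tolerance and a noisy sensor-based control rescales each channel toward its lookup-table target; the tolerance and noise figures are invented for illustration, not taken from the paper:

```python
import random
import statistics

def flux_samples(n, led_tol=0.08, sensor_noise=0.01, closed_loop=False, seed=0):
    """One sample = total flux of a three-channel unit (target 1.0 per channel).
    Each channel's rated flux varies within its bin tolerance; with the control
    loop on, the channel is rescaled toward the lookup-table target using a
    sensor reading with small multiplicative noise, so only the sensor error
    remains in the output."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n):
        total = 0.0
        for _channel in range(3):
            flux = 1.0 + rng.uniform(-led_tol, led_tol)    # bin tolerance
            if closed_loop:
                measured = flux * (1.0 + rng.gauss(0.0, sensor_noise))
                flux /= measured                            # drive toward 1.0
            total += flux
        totals.append(total)
    return totals

open_loop = flux_samples(5_000)
closed = flux_samples(5_000, closed_loop=True, seed=1)
```

Comparing the spreads of the two populations shows the system-to-system reproducibility gain that the abstract attributes to the control mechanism.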
Monte Carlo theory and practice
International Nuclear Information System (INIS)
James, F.
1987-01-01
Historically, the first large-scale calculations to make use of the Monte Carlo method were studies of neutron scattering and absorption, random processes for which it is quite natural to employ random numbers. Such calculations, a subset of Monte Carlo calculations, are known as direct simulation, since the 'hypothetical population' of the narrower definition above corresponds directly to the real population being studied. The Monte Carlo method may be applied wherever it is possible to establish equivalence between the desired result and the expected behaviour of a stochastic system. The problem to be solved may already be of a probabilistic or statistical nature, in which case its Monte Carlo formulation will usually be a straightforward simulation, or it may be of a deterministic or analytic nature, in which case an appropriate Monte Carlo formulation may require some imagination and may appear contrived or artificial. In any case, the suitability of the method chosen will depend on its mathematical properties and not on its superficial resemblance to the problem to be solved. The authors show how Monte Carlo techniques may be compared with other methods of solution of the same physical problem
International Nuclear Information System (INIS)
Ren, Huiying; Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi; Bao, Jie; Swiler, Laura
2017-01-01
In this paper we developed an efficient Bayesian inversion framework for interpreting marine seismic Amplitude Versus Angle and Controlled-Source Electromagnetic data for marine reservoir characterization. The framework uses a multi-chain Markov-chain Monte Carlo sampler, which is a hybrid of DiffeRential Evolution Adaptive Metropolis and Adaptive Metropolis samplers. The inversion framework is tested by estimating reservoir-fluid saturations and porosity based on marine seismic and Controlled-Source Electromagnetic data. The multi-chain Markov-chain Monte Carlo is scalable in terms of the number of chains, and is useful for computationally demanding Bayesian model calibration in scientific and engineering problems. As a demonstration, the approach is used to efficiently and accurately estimate the porosity and saturations in a representative layered synthetic reservoir. The results indicate that the seismic Amplitude Versus Angle and Controlled-Source Electromagnetic joint inversion provides better estimation of reservoir saturations than the seismic Amplitude Versus Angle only inversion, especially for the parameters in deep layers. The performance of the inversion approach for various levels of noise in observational data was evaluated — reasonable estimates can be obtained with noise levels up to 25%. Sampling efficiency due to the use of multiple chains was also checked and was found to have almost linear scalability.
Ren, Huiying; Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi; Bao, Jie; Swiler, Laura
2017-12-01
In this study we developed an efficient Bayesian inversion framework for interpreting marine seismic Amplitude Versus Angle and Controlled-Source Electromagnetic data for marine reservoir characterization. The framework uses a multi-chain Markov-chain Monte Carlo sampler, which is a hybrid of DiffeRential Evolution Adaptive Metropolis and Adaptive Metropolis samplers. The inversion framework is tested by estimating reservoir-fluid saturations and porosity based on marine seismic and Controlled-Source Electromagnetic data. The multi-chain Markov-chain Monte Carlo is scalable in terms of the number of chains, and is useful for computationally demanding Bayesian model calibration in scientific and engineering problems. As a demonstration, the approach is used to efficiently and accurately estimate the porosity and saturations in a representative layered synthetic reservoir. The results indicate that the seismic Amplitude Versus Angle and Controlled-Source Electromagnetic joint inversion provides better estimation of reservoir saturations than the seismic Amplitude Versus Angle only inversion, especially for the parameters in deep layers. The performance of the inversion approach for various levels of noise in observational data was evaluated - reasonable estimates can be obtained with noise levels up to 25%. Sampling efficiency due to the use of multiple chains was also checked and was found to have almost linear scalability.
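The differential-evolution flavour of multi-chain MCMC (ter Braak's DE-MC, the idea underlying DREAM-type samplers such as DiffeRential Evolution Adaptive Metropolis) can be sketched as follows; this minimal version omits the adaptive features of the hybrid sampler and uses an illustrative one-dimensional target:

```python
import math
import random

def de_mc(n_chains, n_iters, log_post, dim=1, seed=0):
    """Differential Evolution MCMC: each chain proposes a jump along the
    difference of two OTHER randomly chosen chains, scaled by
    gamma = 2.38/sqrt(2*dim), plus a little noise, followed by a standard
    Metropolis accept/reject step."""
    rng = random.Random(seed)
    gamma = 2.38 / math.sqrt(2.0 * dim)
    chains = [[rng.gauss(0.0, 5.0) for _ in range(dim)] for _ in range(n_chains)]
    history = []
    for _ in range(n_iters):
        for i in range(n_chains):
            a, b = rng.sample([j for j in range(n_chains) if j != i], 2)
            prop = [chains[i][d] + gamma * (chains[a][d] - chains[b][d])
                    + rng.gauss(0.0, 1e-3) for d in range(dim)]
            if rng.random() < math.exp(min(0.0, log_post(prop) - log_post(chains[i]))):
                chains[i] = prop
        history.extend(list(c) for c in chains)
    return history

# illustrative target: 1-D standard normal (log density up to a constant)
draws = de_mc(8, 3_000, lambda x: -0.5 * x[0] * x[0])
vals = [d[0] for d in draws[len(draws) // 2:]]   # discard first half as burn-in
```

Because the chains are updated independently within a sweep, the scheme parallelises naturally, which is the near-linear scalability in the number of chains reported in the study.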
Energy Technology Data Exchange (ETDEWEB)
Guerra P, F.; Heeren de O, A. [Universidade Federal de Minas Gerais, Departamento de Engenharia Nuclear, Programa de Pos Graduacao em Ciencias e Tecnicas Nucleares, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil); Melo, B. M.; Lacerda, M. A. S.; Da Silva, T. A.; Ferreira F, T. C., E-mail: tcff01@gmail.com [Centro de Desenvolvimento da Tecnologia Nuclear, Programa de Pos Graduacao / CNEN, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil)
2015-10-15
The Plan of Radiological Protection licensed by the National Nuclear Energy Commission (CNEN) in Brazil includes the assessment of risks of internal and external exposure by implementing a program of individual monitoring, which is responsible for controlling exposures and ensuring the maintenance of radiation safety. The Laboratory of Internal Dosimetry of the Center for Development of Nuclear Technology (LID/CDTN) is responsible for routine monitoring of internal contamination of Individuals Occupationally Exposed (IOEs). These include the IOEs involved in handling {sup 18}F sources produced by the Unit for Research and Production of Radiopharmaceuticals, as well as whole-body monitoring of workers from the Research Reactor TRIGA IPR-R1/CDTN, or whenever there is any risk of accidental incorporation. The determination of photon-emitting radionuclides in the human body requires calibration of the counting geometries in order to obtain an efficiency curve. The calibration process normally makes use of physical phantoms containing certified activities of the radionuclides of interest. The objective of this project is the calibration of the WBC facility of the LID/CDTN using the BOMAB physical phantom and Monte Carlo simulations. Three steps were needed to complete the calibration process. First, the BOMAB was filled with a KCl solution and several measurements of the gamma-ray energy (1.46 MeV) emitted by {sup 40}K were made. Second, simulations using the MCNPX code were performed to calculate the counting efficiency (Ce) for the BOMAB model phantom, and the results were compared with the measured Ce. Third and last, the modeled BOMAB phantom was used to calculate the Ce covering the energy range of interest. The results showed good agreement and are within the expected ratio between the measured and simulated results. (Author)
A residual Monte Carlo method for discrete thermal radiative diffusion
International Nuclear Information System (INIS)
Evans, T.M.; Urbatsch, T.J.; Lichtenstein, H.; Morel, J.E.
2003-01-01
Residual Monte Carlo methods reduce statistical error at a rate of exp(-bN), where b is a positive constant and N is the number of particle histories. Contrast this convergence rate with 1/√N, the rate of statistical error reduction for conventional Monte Carlo methods. Thus, residual Monte Carlo methods hold great promise for increased efficiency relative to conventional Monte Carlo methods. Previous research has shown that the application of residual Monte Carlo methods to the solution of continuum equations, such as the radiation transport equation, is problematic for all but the simplest of cases. However, the residual method readily applies to discrete systems as long as those systems are monotone, i.e., they produce positive solutions given positive sources. We develop a residual Monte Carlo method for solving a discrete 1D non-linear thermal radiative equilibrium diffusion equation, and we compare its performance with that of the discrete conventional Monte Carlo method upon which it is based. We find that the residual method provides efficiency gains of many orders of magnitude. Part of the residual gain is due to the fact that we begin each timestep with an initial guess equal to the solution from the previous timestep. Moreover, fully consistent non-linear solutions can be obtained in a reasonable amount of time because of the effective lack of statistical noise. We conclude that the residual approach has great potential and that further research into such methods should be pursued for more general discrete and continuum systems.
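The 1/√N rate quoted above is easy to see in a toy conventional Monte Carlo calculation. The example below (a π estimate, purely illustrative and not from the paper) shows the error shrinking by roughly one decade for every hundredfold increase in N, which is the behaviour the exponential exp(-bN) residual rate is designed to beat.

```python
import random, math

def mc_pi_error(n, seed=0):
    """Conventional Monte Carlo estimate of pi; error shrinks like 1/sqrt(N)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 < 1.0)
    return abs(4.0 * hits / n - math.pi)

# Each 100x increase in N buys roughly 10x accuracy: the 1/sqrt(N) rate.
errors = [mc_pi_error(n) for n in (10**2, 10**4, 10**6)]
print(errors)
```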
Methods for fitting of efficiency curves obtained by means of HPGe gamma rays spectrometers
International Nuclear Information System (INIS)
Cardoso, Vanderlei
2002-01-01
The present work describes a few methodologies developed for fitting efficiency curves obtained by means of an HPGe gamma-ray spectrometer. The interpolated values were determined by simple polynomial fitting, and by polynomial fitting of the ratio between the experimental peak efficiency and the total efficiency calculated by the Monte Carlo technique, as a function of gamma-ray energy. Moreover, non-linear fitting has been performed using a segmented polynomial function and applying the Gauss-Marquardt method. To obtain the peak area, different methodologies were developed for estimating the background area under the peak; this information was obtained by numerical integration or by using analytical functions associated with the background. One non-calibrated radioactive source was included in the efficiency curve in order to provide additional calibration points. As a by-product, it was possible to determine the activity of this non-calibrated source. For all fittings developed in the present work the covariance matrix methodology was used, which is an essential procedure in order to give a complete description of the partial uncertainties involved. (author)
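A minimal sketch of the simple-polynomial approach: fit a polynomial in log energy to log efficiency and read interpolated values from it. The energies and efficiency values below are invented for illustration and are not data from the thesis; `numpy.polyfit` with `cov=True` also returns the coefficient covariance matrix, the kind of object a covariance-matrix uncertainty analysis would propagate.

```python
import numpy as np

# Hypothetical HPGe peak-efficiency points (energy in keV, efficiency),
# invented for illustration.
energy = np.array([122., 344., 662., 1173., 1332.])
eff    = np.array([0.030, 0.015, 0.009, 0.006, 0.0055])

# Common parameterisation: polynomial in log(E) fitted to log(efficiency).
# cov=True returns the covariance matrix of the fitted coefficients.
coeffs, cov = np.polyfit(np.log(energy), np.log(eff), deg=2, cov=True)

def efficiency(e_kev):
    """Interpolated full-energy peak efficiency at energy e_kev."""
    return np.exp(np.polyval(coeffs, np.log(e_kev)))

print(efficiency(500.0))  # interpolated value between the 344 and 662 keV points
```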
The Hybrid Monte Carlo (HMC) method and dynamic fermions
International Nuclear Information System (INIS)
Amaral, Marcia G. do
1994-01-01
Although the Monte Carlo method has been extensively used in the simulation of many types of theories, successful application has been established only for models containing boson fields. With the present computer generation, the development of faster and more efficient algorithms has become necessary and urgent. This paper studies the HMC method and dynamic fermions
POLCAL - POLARIMETRIC RADAR CALIBRATION
Vanzyl, J.
1994-01-01
Calibration of polarimetric radar systems is a field of research in which great progress has been made over the last few years. POLCAL (Polarimetric Radar Calibration) is a software tool intended to assist in the calibration of Synthetic Aperture Radar (SAR) systems. In particular, POLCAL calibrates Stokes matrix format data produced as the standard product by the NASA/Jet Propulsion Laboratory (JPL) airborne imaging synthetic aperture radar (AIRSAR). POLCAL was designed to be used in conjunction with data collected by the NASA/JPL AIRSAR system. AIRSAR is a multifrequency (6 cm, 24 cm, and 68 cm wavelength), fully polarimetric SAR system which produces 12 x 12 km imagery at 10 m resolution. AIRSAR was designed as a testbed for NASA's Spaceborne Imaging Radar program. While the images produced after 1991 are thought to be calibrated (phase calibrated, cross-talk removed, channel imbalance removed, and absolutely calibrated), POLCAL can and should still be used to check the accuracy of the calibration and to correct it if necessary. Version 4.0 of POLCAL is an upgrade of POLCAL version 2.0 released to AIRSAR investigators in June, 1990. New options in version 4.0 include automatic absolute calibration of 89/90 data, distributed target analysis, calibration of nearby scenes with calibration parameters from a scene with corner reflectors, altitude or roll angle corrections, and calibration of errors introduced by known topography. Many sources of error can lead to false conclusions about the nature of scatterers on the surface. Errors in the phase relationship between polarization channels result in incorrect synthesis of polarization states. Cross-talk, caused by imperfections in the radar antenna itself, can also lead to error. POLCAL reduces cross-talk and corrects phase calibration without the use of ground calibration equipment. Removing the antenna patterns during SAR processing also forms a very important part of the calibration of SAR data. Errors in the
Vibration transducer calibration techniques
Brinkley, D. J.
1980-09-01
Techniques for the calibration of vibration transducers used in the Aeronautical Quality Assurance Directorate of the British Ministry of Defence are presented. Following a review of the types of measurements necessary in the calibration of vibration transducers, the performance requirements of vibration transducers, which can be used to measure acceleration, velocity or vibration amplitude, are discussed, with particular attention given to the piezoelectric accelerometer. Techniques for the accurate measurement of sinusoidal vibration amplitude in reference-grade transducers are then considered, including the use of a position sensitive photocell and the use of a Michelson laser interferometer. Means of comparing the output of working-grade accelerometers with that of previously calibrated reference-grade devices are then outlined, with attention given to a method employing a capacitance bridge technique and a method to be used at temperatures between -50 and 200 C. Automatic calibration procedures developed to speed up the calibration process are outlined, and future possible extensions of system software are indicated.
Calibration Under Uncertainty.
Energy Technology Data Exchange (ETDEWEB)
Swiler, Laura Painton; Trucano, Timothy Guy
2005-03-01
This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
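The deterministic formulation described above, finding the parameters that minimize the squared model-data difference, reduces to a single linear solve when the model is linear in its parameters. The sketch below uses invented data and a hypothetical linear model; a CUU treatment would additionally place error bars on both the model and the data.

```python
import numpy as np

# "Experimental" data and a linear model y = a*x + b, both invented
# for illustration of the least-squares calibration formulation.
x = np.array([0.0, 1.0, 2.0, 3.0])
y_exp = np.array([1.1, 2.9, 5.2, 6.8])

# For a linear model, argmin_theta ||model(theta, x) - y_exp||^2
# is found by np.linalg.lstsq on the design matrix [x, 1].
A = np.column_stack([x, np.ones_like(x)])
theta_hat, residuals, *_ = np.linalg.lstsq(A, y_exp, rcond=None)
print(theta_hat)  # calibrated (a, b)
```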
Acceleration of monte Carlo solution by conjugate gradient method
International Nuclear Information System (INIS)
Toshihisa, Yamamoto
2005-01-01
The conjugate gradient (CG) method was applied to accelerate Monte Carlo solutions of fixed-source problems. The equilibrium-model-based formulation makes it possible to use the CG scheme, as well as an initial guess, to maximize computational performance. The method is applicable to arbitrary geometries provided that the neutron source distribution in each subregion can be regarded as flat. Even when that is not the case, the method can still be used as a powerful tool to provide an initial guess very close to the converged solution. The major difference between Monte Carlo CG and deterministic CG is that the residual error is estimated by Monte Carlo sampling, so statistical error exists in the residual. This leads to a flow diagram specific to Monte Carlo CG. Three pre-conditioners were proposed for the CG scheme and their performance was compared on a simple 1-D heterogeneous slab test problem. One of them, the Sparse-M option, showed excellent convergence performance. The performance per unit cost was improved by a factor of four in the test problem. Although direct estimation of the efficiency of the method is impossible, mainly because of the strong problem dependence of the optimized pre-conditioner in CG, the method appears to have potential as a fast solution algorithm for Monte Carlo calculations. (author)
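For reference, the deterministic CG iteration that the Monte Carlo variant builds on can be sketched in a few lines. In the Monte Carlo version described above the residual would be estimated by sampling and therefore carry statistical noise; here it is computed exactly, and the test system is invented for illustration.

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=200):
    """Textbook CG for a symmetric positive-definite system A x = b.
    In the Monte Carlo variant the residual r is *estimated* by sampling;
    here it is exact."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x          # residual (sampled, with noise, in MC-CG)
    p = r.copy()
    for _ in range(max_iter):
        rr = r @ r
        if np.sqrt(rr) < tol:
            break
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        beta = (r @ r) / rr
        p = r + beta * p
    return x

# Small SPD test system, for illustration only.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))
```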
Option price calibration from Renyi entropy
International Nuclear Information System (INIS)
Brody, Dorje C.; Buckley, Ian R.C.; Constantinou, Irene C.
2007-01-01
The calibration of the risk-neutral density function for the future asset price, based on the maximisation of the entropy measure of Renyi, is proposed. Whilst the conventional approach based on the use of the logarithmic entropy measure fails to produce the observed power-law distribution when calibrated against option prices, the approach outlined here is shown to produce the desired form of the distribution. Procedures for the maximisation of the Renyi entropy under constraints are outlined in detail, and a number of interesting properties of the resulting power-law distributions are also derived. The result is applied to efficiently evaluate prices of path-independent derivatives.
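For orientation, the Renyi entropy functional being maximised has the standard form (the paper's exact constraint set is not reproduced here; this is only the schematic shape of the result):

```latex
S_q[p] \;=\; \frac{1}{1-q}\,\ln\!\int p(x)^q \, dx , \qquad q > 0,\; q \neq 1 .
```

Maximising $S_q$ subject to linear (moment or price) constraints yields, schematically, densities of the power-law form $p^*(x) \propto \bigl[1 + (q-1)\,\beta\,(x-\mu)\bigr]^{-1/(q-1)}$, whose tails decay polynomially with an exponent controlled by $q$; in the limit $q \to 1$ the functional reduces to the logarithmic (Shannon) entropy and the exponential-family densities of the conventional approach are recovered.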
HENC performance evaluation and plutonium calibration
International Nuclear Information System (INIS)
Menlove, H.O.; Baca, J.; Pecos, J.M.; Davidson, D.R.; McElroy, R.D.; Brochu, D.B.
1997-10-01
The authors have designed a high-efficiency neutron counter (HENC) to assay the plutonium content of 200-L waste drums. The counter uses totals neutron counting, coincidence counting, and multiplicity counting to determine the plutonium mass. The HENC was developed as part of a Cooperative Research and Development Agreement between the Department of Energy and Canberra Industries. This report presents the results of the detector modifications, the performance tests, the add-a-source calibration, and the plutonium calibration at Los Alamos National Laboratory (TA-35) in 1996
Gamma counter calibration system
International Nuclear Information System (INIS)
1977-01-01
A method and apparatus are described for the calibration of a gamma radiation measurement instrument to be used over any of a number of different absolute energy ranges. The method includes the steps of adjusting the overall signal gain associated with pulses which are derived from detected gamma rays, until the instrument is calibrated for a particular absolute energy range; then storing parameter settings corresponding to the adjusted overall signal gain, and repeating the process for other desired absolute energy ranges. The stored settings can be subsequently retrieved and reapplied so that test measurements can be made using a selected one of the absolute energy ranges. Means are provided for adjusting the overall signal gain and a specific technique is disclosed for making coarse, then fine adjustments to the signal gain, for rapid convergence of the required calibration settings. (C.F.)
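The coarse-then-fine gain adjustment described above can be sketched as a two-stage search. Everything below is illustrative: the function names, the idealised instrument model (peak channel proportional to gain), and the step sizes are assumptions, not details from the patent.

```python
def calibrate_gain(measure_peak, target, lo, hi, coarse_step=0.5, fine_step=0.01):
    """Two-stage gain search: coarse steps to bracket the target peak
    position, then fine steps for rapid convergence.  `measure_peak(gain)`
    is a hypothetical callable returning the observed peak position of a
    reference gamma line at a given gain setting."""
    # Coarse pass: step until the measured peak reaches the target.
    gain = lo
    while gain < hi and measure_peak(gain) < target:
        gain += coarse_step
    # Fine pass: back up one coarse step, then refine.
    gain = max(lo, gain - coarse_step)
    while gain < hi and measure_peak(gain) < target:
        gain += fine_step
    return gain

# Idealised instrument: peak channel proportional to gain (illustrative).
peak = lambda g: 100.0 * g
gain = calibrate_gain(peak, target=662.0, lo=1.0, hi=10.0)
print(gain)
```

The returned gain would then be stored as the parameter setting for that absolute energy range and retrieved when the range is reselected, as the abstract describes.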
International Nuclear Information System (INIS)
Costa, Priscila; Potiens Junior, Ademar J.
2015-01-01
Filter cartridges are part of the primary water treatment system of the IEA-R1 Research Reactor and, when saturated, they are replaced and become radioactive waste. The IEA-R1 is located at the Nuclear and Energy Research Institute (IPEN), in Sao Paulo, Brazil. Primary characterization is the main step of radioactive waste management, in which the physical, chemical and radiological properties are determined. It is a very important step because the information obtained at this point enables the choice of the appropriate management process and the definition of final disposal options. In this paper, a non-destructive method for primary characterization is presented, using the Monte Carlo method associated with gamma spectrometry. Gamma spectrometry allows the identification of radionuclides and their activity values. The detection efficiency is an important parameter, related to the photon energy, the detector geometry and the matrix of the sample to be analyzed. Due to the difficulty of obtaining a standard source with the same geometry as the filter cartridge, another technique is necessary to calibrate the detector. The technique described in this paper uses the Monte Carlo method for primary characterization of the IEA-R1 filter cartridges. (author)
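Once a Monte Carlo model supplies the detection efficiency for the cartridge geometry, the activity follows from the standard gamma-spectrometry relation A = N / (ε · I_γ · t). The numbers below are purely illustrative, not IPEN measurement data.

```python
# Activity determination from a photopeak once the Monte Carlo detection
# efficiency is known.  All values are invented for illustration.
net_counts  = 12500.0   # net counts in the photopeak
live_time_s = 3600.0    # acquisition live time (s)
efficiency  = 0.012     # simulated full-energy peak efficiency for the cartridge geometry
gamma_yield = 0.851     # emission probability of the gamma line

activity_bq = net_counts / (efficiency * gamma_yield * live_time_s)
print(activity_bq)      # activity in Bq
```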
Individual dosimetry and calibration
International Nuclear Information System (INIS)
Hoefert, M.; Nielsen, M.
1996-01-01
In 1995 both the Individual Dosimetry and Calibration Sections worked under status quo conditions and concentrated fully on the routine part of their work. Nevertheless, the machine for printing the bar code that is glued onto the film holder, identifying personnel entering high-radiation areas, was put into operation, and most of the holders were equipped with the new identification. As for the Calibration Section, the project for the new source control system, realized by the Technical Support Section, was somewhat accelerated
Radiation Calibration Measurements
International Nuclear Information System (INIS)
Omondi, C.
2017-01-01
The KEBS Radiation Dosimetry mandates are: custodianship of Kenya standards on ionizing radiation, ensuring traceability to the International System (SI), and calibration of radiation equipment. RAF 8/040 on radioisotope applications for troubleshooting and optimizing industrial processes established the Radiotracer Laboratory, whose objective is to introduce and implement radiotracer techniques for solving industrial problems. The gamma-ray scanning technique is applied to locate blockages, locate liquid in vapor lines, locate areas of lost refractory or lining in a pipe, and measure flowing densities. Equipment used for diagnostics and radiation protection must be calibrated to ensure accuracy and traceability
Directory of Open Access Journals (Sweden)
Frederick Schauer
2017-09-01
Objective: to study the notion and essence of calibration of legal judgments, the possibilities of using it in law-enforcement activity, and the costs and advantages of its use. Methods: a dialectic approach to the cognition of social phenomena, which enables their analysis in historical development and functioning in the context of the integrity of objective and subjective factors; this determined the choice of the following research methods: formal-legal and comparative-legal methods, sociological methods, and methods of cognitive psychology and philosophy. Results: In ordinary life, people who assess other people's judgments typically take into account the other judgments of those they are assessing, in order to calibrate the judgment presently being assessed. The restaurant and hotel rating website TripAdvisor is exemplary because it facilitates calibration by providing access to a rater's previous ratings. Such information allows a user to see whether a particular rating comes from a rater who is enthusiastic about every place she patronizes, or instead from someone who is incessantly hard to please. And even when less systematized, as in assessing a letter of recommendation or a college transcript, calibration by recourse to the decisional history of those whose judgments are being assessed is ubiquitous. Yet despite the ubiquity and utility of such calibration, the legal system seems perversely to reject it. Appellate courts do not openly adjust their standard of review based on the previous judgments of the judge whose decision they are reviewing; nor do judges in reviewing legislative or administrative decisions, magistrates in evaluating search warrant representations, or jurors in assessing witness perception. In most legal domains, calibration by reference to the prior decisions of the reviewee is invisible, either because it does not exist or because reviewing bodies are unwilling to admit using what they in fact know and employ. Scientific novelty: for the first
DEFF Research Database (Denmark)
Gómez Arranz, Paula; Courtney, Michael
This report describes the tests carried out on a scanning lidar at the DTU Test Station for large wind turbines, Høvsøre. The tests were divided in two parts. In the first part, the purpose was to obtain wind speed calibrations at two heights against two cup anemometers mounted on a mast. ...
Geometry and Dynamics for Markov Chain Monte Carlo
Barp, Alessandro; Briol, Francois-Xavier; Kennedy, Anthony D.; Girolami, Mark
2017-01-01
Markov Chain Monte Carlo methods have revolutionised mathematical computation and enabled statistical inference within many previously intractable models. In this context, Hamiltonian dynamics have been proposed as an efficient way of building chains which can explore probability densities efficiently. The method emerges from physics and geometry and these links have been extensively studied by a series of authors through the last thirty years. However, there is currently a gap between the in...
Stocker, Sabrina; Foschum, Florian; Kienle, Alwin
2017-07-01
A calibration-free method to detect particle size information is presented. A possible application for such measurements is the investigation of raw milk, since not only the fat and protein content but also the fat droplet size varies. The newly developed method is sensitive to the scattering phase function, which makes it applicable to many other applications as well. By simulating light propagation with Monte Carlo simulations, a calibration-free device can be developed from this principle.
Calibration of thin-film dosimeters irradiated with 80-120 keV electrons
DEFF Research Database (Denmark)
Helt-Hansen, J.; Miller, A.; McEwen, M.
2004-01-01
A method for calibration of thin-film dosimeters irradiated with 80-120 keV electrons has been developed. The method is based on measurement of dose with a totally absorbing graphite calorimeter, and conversion of dose in the graphite calorimeter to dose in the film dosimeter by Monte Carlo calculations ... V electron irradiation. The two calibrations were found to be equal within the estimated uncertainties of +/-10% at 1 s.d. (C) 2004 Elsevier Ltd. All rights reserved.
Control Variates for Monte Carlo Valuation of American Options
DEFF Research Database (Denmark)
Rasmussen, Nicki S.
2005-01-01
This paper considers two applications of control variates to the Monte Carlo valuation of American options. The main contribution of the paper lies in the particular choice of a control variate for American or Bermudan options. It is shown that for any martingale process used as a control variate ... The technique is used for improving the least-squares Monte Carlo (LSM) approach for determining exercise strategies. The suggestions made allow for more efficient estimation of the continuation value, used in determining the strategy. An additional suggestion is made in order to improve the stability ...
Multilevel sequential Monte Carlo samplers
Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan
2016-01-01
In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level hL. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h0 > h1 > ⋯ > hL. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, relative to exact sampling and Monte Carlo for the distribution at the finest level hL. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.
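The telescoping identity at the heart of MLMC, E[P_L] = E[P_0] + Σ_l E[P_l − P_{l-1}], can be sketched on a toy problem. The "discretised" quantity below is invented for illustration (no PDE is solved, and this is the plain MLMC identity, not its SMC extension): each level's correction couples the fine and coarse approximations on the same random sample so that the correction variance shrinks with level.

```python
import random

def mlmc_estimate(levels, n_per_level, seed=1):
    """Telescoping multilevel Monte Carlo estimator sketch.
    P(level, u) is a toy 'discretisation' of f(u) = u*u with step
    h_l = 2**-l; corrections use the SAME sample on both levels."""
    rng = random.Random(seed)

    def P(level, u):
        h = 2.0 ** (-level)
        return u * u + h * (u - 0.5)   # toy level-dependent error term

    total = 0.0
    for level in range(levels + 1):
        n = n_per_level[level]
        s = 0.0
        for _ in range(n):
            u = rng.random()                         # shared sample
            fine = P(level, u)
            coarse = P(level - 1, u) if level > 0 else 0.0
            s += fine - coarse                       # level-l correction
        total += s / n
    return total   # estimates E[U^2] = 1/3 for U ~ Uniform(0, 1)

est = mlmc_estimate(levels=4, n_per_level=[4000, 2000, 1000, 500, 250])
print(est)
```

Note the decreasing sample counts per level: because the coupled corrections have small variance at fine levels, few samples are needed there, which is the source of the computational saving the abstract describes.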
International Nuclear Information System (INIS)
Ishikawa, Tetsuo; Matsumoto, Masaki; Uchiyama, Masafumi; Kobayashi, Sadayoshi; Mizushita, Seiichi.
1995-01-01
To standardize the calibration methods of whole-body counters, three anthropometric phantoms were manufactured based on dozens of Japanese average body-size data. Using these phantoms, calibrations of several whole-body counters were carried out, and the counting efficiencies of the anthropometric phantoms were compared with those of the block phantoms generally used for whole-body counter calibration. Five whole-body counters were used for this study: one scanning system, two stationary systems and two chair systems. The following results were derived: as an example, in the NIRS scanning system, the counting efficiency of the anthropometric phantom of 162 cm height was 12.7% greater than that of the block phantom of the same height. This means that 137Cs body burdens in adult men used to be overestimated by about 10%. Body burdens tended to be overestimated in adults because the difference in counting efficiency between the anthropometric phantom and the block phantom increases with height. To standardize body burden data measured with various whole-body counters, each whole-body counter should be calibrated using both anthropometric phantoms and the phantoms previously used for its calibration. (author)
Essay on Option Pricing, Hedging and Calibration
DEFF Research Database (Denmark)
da Silva Ribeiro, André Manuel
Quantitative finance is concerned with applying mathematics to financial markets. This thesis is a collection of essays that study different problems in this field: How efficient are option price approximations for calibrating a stochastic volatility model? (Chapter 2) How different is the discretely ... of dynamics? (Chapter 5) How can we formulate a simple arbitrage-free model to price correlation swaps? (Chapter 6) A summary of the work presented in this thesis: Approximation Behooves Calibration. In this paper we show that calibration based on an expansion approximation for option prices in the Heston stochastic volatility model gives stable, accurate, and fast results for S&P500-index option data over the period 2005 to 2009. Discretely Sampled Variance Options: A Stochastic Approximation Approach. In this paper, we expand the Drimus and Farkas (2012) framework to price variance options on discretely sampled ...
Independent System Calibration of Sentinel-1B
Directory of Open Access Journals (Sweden)
Marco Schwerdt
2017-05-01
Full Text Available Sentinel-1B is the second of two C-Band Synthetic Aperture Radar (SAR satellites of the Sentinel-1 mission, launched in April 2016—two years after the launch of the first satellite, Sentinel-1A. In addition to the commissioning of Sentinel-1B executed by the European Space Agency (ESA, an independent system calibration was performed by the German Aerospace Center (DLR on behalf of ESA. Based on an efficient calibration strategy and the different calibration procedures already developed and applied for Sentinel-1A, extensive measurement campaigns were executed by initializing and aligning DLR’s reference targets deployed on the ground. This paper describes the different activities performed by DLR during the Sentinel-1B commissioning phase and presents the results derived from the analysis and the evaluation of a multitude of data takes and measurements.
Multilevel markov chain monte carlo method for high-contrast single-phase flow problems
Efendiev, Yalchin R.; Jin, Bangti; Michael, Presho; Tan, Xiaosi
2014-01-01
Carlo (MLMC) methods. The former provides a hierarchy of approximations of different resolution, whereas the latter gives an efficient way to estimate quantities of interest using samples on different levels. The number of basis functions in the online
SOLFAST, a Ray-Tracing Monte-Carlo software for solar concentrating facilities
International Nuclear Information System (INIS)
Roccia, J P; Piaud, B; Coustet, C; Caliot, C; Guillot, E; Flamant, G; Delatorre, J
2012-01-01
In this communication, the software SOLFAST is presented. It is a simulation tool based on the Monte-Carlo method and accelerated Ray-Tracing techniques to evaluate efficiently the energy flux in concentrated solar installations.
CERN. Geneva
2015-01-01
My talk will be covering my work as a whole over the course of the semester. The focus will be on using energy flow calibration in ECAL to check the precision of the corrections made by the light monitoring system used to account for transparency loss within ECAL crystals due to radiation damage over time.
Calibration with Absolute Shrinkage
DEFF Research Database (Denmark)
Øjelund, Henrik; Madsen, Henrik; Thyregod, Poul
2001-01-01
In this paper, penalized regression using the L-1 norm on the estimated parameters is proposed for chemometric calibration. The algorithm is of the lasso type, introduced by Tibshirani in 1996 as a linear regression method with a bound on the absolute length of the parameters, but a modification
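A minimal sketch of L1-penalised (lasso) estimation by coordinate descent, the kind of absolute-shrinkage estimator the abstract refers to. This is the plain lasso, not the paper's modified algorithm, and the toy calibration data are invented: the response depends on the first predictor only, so the L1 penalty should drive the irrelevant coefficients to exactly zero.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=500):
    """Lasso by coordinate descent:
    minimises ||y - X b||^2 / (2n) + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]      # partial residual excluding j
            rho = X[:, j] @ r / n
            # soft-thresholding step: shrinks, and zeroes small coefficients
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_ss[j]
    return b

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = 2.0 * X[:, 0] + 0.1 * rng.standard_normal(100)
b = lasso_cd(X, y, lam=0.2)
print(b)   # first coefficient large (shrunk toward 2), the rest near zero
```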
Calibration bench of flowmeters
International Nuclear Information System (INIS)
Bremond, J.; Da Costa, D.; Calvet, A.; Vieuxmaire, C.
1966-01-01
This equipment is devoted to the comparison of signals from two turbines installed in the Cabri experimental loop. Each signal is compared to that of the standard turbine. The characteristics and the performance of the calibration bench are presented. (A.L.B.)
Calibration of farmer dosemeters
International Nuclear Information System (INIS)
Ahmad, S.S.; Anwar, K.; Arshed, W.; Mubarak, M.A.; Orfi, S.D.
1984-08-01
The Farmer Dosemeters of Atomic Energy Medical Centre (AEMC) Jamshoro were calibrated in the Secondary Standard Dosimetry Laboratory (SSDL) at PINSTECH, using the NPL Secondary Standard Therapy level X-ray exposure meter. The results are presented in this report. (authors)
Physiotherapy ultrasound calibrations
International Nuclear Information System (INIS)
Gledhill, M.
1996-01-01
Calibration of physiotherapy ultrasound equipment has long been a problem. Numerous surveys around the world over the past 20 years have all found that only a low percentage of the units tested had an output within 30% of that indicated. In New Zealand, a survey carried out by the NRL in 1985 found that only 24% had an output, at the maximum setting, within + or - 20% of that indicated. The present performance standard for new equipment (NZS 3200.2.5:1992) requires that the measured output should not deviate from that indicated by more than + or - 30%. This may be tightened to + or - 20% in the next few years. Any calibration is only as good as the calibration equipment. Some force balances can be tested with small weights to simulate the force exerted by an ultrasound beam, but with others this is not possible. For such balances, testing may only be feasible with a calibrated source which could be used like a transfer standard. (author). 4 refs., 3 figs
NVLAP calibration laboratory program
Energy Technology Data Exchange (ETDEWEB)
Cigler, J.L.
1993-12-31
This paper presents an overview of the progress up to April 1993 in the development of the Calibration Laboratories Accreditation Program within the framework of the National Voluntary Laboratory Accreditation Program (NVLAP) at the National Institute of Standards and Technology (NIST).
NVLAP calibration laboratory program
International Nuclear Information System (INIS)
Cigler, J.L.
1993-01-01
This paper presents an overview of the progress up to April 1993 in the development of the Calibration Laboratories Accreditation Program within the framework of the National Voluntary Laboratory Accreditation Program (NVLAP) at the National Institute of Standards and Technology (NIST)
International Nuclear Information System (INIS)
Rosauer, P.J.; Flaherty, J.J.
1981-01-01
This invention is in the field of gamma ray inspection devices for tubular products and the like employing an improved calibrating block which prevents the sensing system from being overloaded when no tubular product is present, and also provides the operator with a means for visually detecting the presence of wall thicknesses which are less than a required minimum. (author)
Weighted-delta-tracking for Monte Carlo particle transport
International Nuclear Information System (INIS)
Morgan, L.W.G.; Kotlyar, D.
2015-01-01
Highlights: • This paper presents an alteration to the Monte Carlo Woodcock tracking technique. • The alteration improves computational efficiency within regions of high absorbers. • The rejection technique is replaced by a statistical weighting mechanism. • The modified Woodcock method is shown to be faster than standard Woodcock tracking. • The modified Woodcock method achieves a lower variance, given a specified accuracy. - Abstract: Monte Carlo particle transport (MCPT) codes are incredibly powerful and versatile tools to simulate particle behavior in a multitude of scenarios, such as core/criticality studies, radiation protection, shielding, medicine and fusion research, to name just a small subset of applications. However, MCPT codes can be very computationally expensive to run when the model geometry contains large attenuation depths and/or many components. This paper proposes a simple modification to the Woodcock tracking method used by some Monte Carlo particle transport codes. The Woodcock method uses rejection sampling of virtual collisions to remove collision-distance sampling at material boundaries. However, it suffers from poor computational efficiency when the sample acceptance rate is low. The proposed method removes rejection sampling from the Woodcock method in favor of a statistical weighting scheme, which improves the computational efficiency of a Monte Carlo particle tracking code. It is shown that the modified Woodcock method is less computationally expensive than standard ray-tracing and rejection-based Woodcock tracking methods and achieves a lower variance, given a specified accuracy.
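The idea can be illustrated with a deliberately small sketch: a 1-D purely absorbing slab with a spatially varying cross-section (geometry, cross-sections and all numbers are invented for illustration). Classic Woodcock tracking accepts a real collision by rejection; the weighted variant instead multiplies a survival weight at every virtual collision, so no samples are discarded. Both estimators reproduce the analytic transmission exp(-∫σ_t dx).

```python
import math, random

def sigma_t(x):
    """Spatially varying total cross-section (cm^-1), absorber-only toy."""
    return 0.5 + 1.5 * x            # rises across the 1 cm slab

SIG_MAJ, L = 2.0, 1.0               # majorant cross-section, slab thickness

def woodcock(rng):
    """Classic rejection-based Woodcock tracking: 1 if transmitted."""
    x = 0.0
    while True:
        x += rng.expovariate(SIG_MAJ)            # flight to next tentative collision
        if x >= L:
            return 1.0
        if rng.random() < sigma_t(x) / SIG_MAJ:  # real collision -> absorbed
            return 0.0

def weighted(rng):
    """Weighted delta-tracking: rejection replaced by weight multiplication."""
    x, w = 0.0, 1.0
    while True:
        x += rng.expovariate(SIG_MAJ)
        if x >= L:
            return w
        w *= 1.0 - sigma_t(x) / SIG_MAJ          # survive the virtual collision
        if w < 1e-6:                             # crude roulette to bound the walk
            return 0.0

rng = random.Random(1)
n = 20000
t_rej = sum(woodcock(rng) for _ in range(n)) / n
t_wgt = sum(weighted(rng) for _ in range(n)) / n
exact = math.exp(-(0.5 * L + 0.75 * L * L))      # exp(-integral of sigma_t)
print(t_rej, t_wgt, exact)
```

Because every history contributes a continuous weight instead of a 0/1 outcome, the weighted estimator has lower variance per history, which is the effect the paper quantifies.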
The contribution to the calibration of LAr calorimeters at the ATLAS experiment
International Nuclear Information System (INIS)
Pecsy, M.
2011-01-01
This thesis presents several contributions to the testing and validation of the ATLAS calorimeter calibration. Since the ATLAS calorimeter is non-compensating, sophisticated software calibration of the calorimeter response is needed. One of the official ATLAS calibration methods is local hadron calibration. This method is based on detailed simulations providing information about the true energy deposited in the calorimeter. The calibration consists of several independent steps, starting with the basic electromagnetic-scale signal calibration and proceeding to the particle energy calibration. It starts from the reconstruction of topological clusters and their calibration at the EM scale. These clusters are classified as electromagnetic or hadronic, and the hadronic ones receive weights to correct for the invisible energy deposits of hadrons. To obtain the final reconstructed energy, out-of-cluster and dead-material corrections are applied in the subsequent steps. Tests of the calorimeter response with the first real data, from cosmic-ray muons and from LHC collisions, are presented in the thesis. Detailed studies of the full hadronic calibration performance in the dedicated combined end-cap calorimeter beam test of 2004 are presented as well. To optimise the performance of the calibration, Monte Carlo based studies are necessary. Two alternative methods of cluster classification are discussed, and a software tool for particle track extrapolation has been developed. (author)
PLEIADES ABSOLUTE CALIBRATION : INFLIGHT CALIBRATION SITES AND METHODOLOGY
Directory of Open Access Journals (Sweden)
S. Lachérade
2012-07-01
In-flight calibration of space sensors once in orbit is a decisive step towards fulfilling the mission objectives. This article presents the methods of in-flight absolute calibration processed during the commissioning phase. Four in-flight calibration methods are used: absolute calibration, cross-calibration with reference sensors such as PARASOL or MERIS, multi-temporal monitoring and inter-band calibration. These algorithms are based on acquisitions over natural targets such as African deserts, Antarctic sites, La Crau (automatic calibration station) and oceans (calibration over molecular scattering), and also new extra-terrestrial targets such as the Moon and selected stars. After an overview of the instrument and a description of the calibration sites, it is pointed out how each method addresses one or several aspects of the calibration. We focus on how these methods complement each other in operational use, and how they help build a coherent set of information that addresses all aspects of in-orbit calibration. Finally, we present the perspectives that the high agility of PLEIADES offers for the improvement of its calibration and a better characterization of the calibration sites.
Energy Technology Data Exchange (ETDEWEB)
John F. Schabron; Joseph F. Rovani; Susan S. Sorini
2007-03-31
The Clean Air Mercury Rule (CAMR) which was published in the Federal Register on May 18, 2005, requires that calibration of mercury continuous emissions monitors (CEMs) be performed with NIST-traceable standards. Western Research Institute (WRI) is working closely with the Electric Power Research Institute (EPRI), the National Institute of Standards and Technology (NIST), and the Environmental Protection Agency (EPA) to facilitate the development of the experimental criteria for a NIST traceability protocol for dynamic elemental mercury vapor generators. The traceability protocol will be written by EPA. Traceability will be based on the actual analysis of the output of each calibration unit at several concentration levels ranging from about 2-40 ug/m{sup 3}, and this analysis will be directly traceable to analyses by NIST using isotope dilution inductively coupled plasma/mass spectrometry (ID ICP/MS) through a chain of analyses linking the calibration unit in the power plant to the NIST ID ICP/MS. Prior to this project, NIST did not provide a recommended mercury vapor pressure equation or list mercury vapor pressure in its vapor pressure database. The NIST Physical and Chemical Properties Division in Boulder, Colorado was subcontracted under this project to study the issue in detail and to recommend a mercury vapor pressure equation that the vendors of mercury vapor pressure calibration units can use to calculate the elemental mercury vapor concentration in an equilibrium chamber at a particular temperature. As part of this study, a preliminary evaluation of calibration units from five vendors was made. The work was performed by NIST in Gaithersburg, MD and Joe Rovani from WRI who traveled to NIST as a Visiting Scientist.
Measurement and simulation of neutron detection efficiency in lead-scintillating fiber calorimeters
Energy Technology Data Exchange (ETDEWEB)
Anelli, M.; Bertolucci, S. [Laboratori Nazionali di Frascati, INFN (Italy); Bini, C. [Dipartimento di Fisica dell' Universita ' La Sapienza' , Roma (Italy); INFN Sezione di Roma, Roma (Italy); Branchini, P. [INFN Sezione di Roma Tre, Roma (Italy); Curceanu, C. [Laboratori Nazionali di Frascati, INFN (Italy); De Zorzi, G.; Di Domenico, A. [Dipartimento di Fisica dell' Universita ' La Sapienza' , Roma (Italy); INFN Sezione di Roma, Roma (Italy); Di Micco, B. [Dipartimento di Fisica dell' Universita ' Roma Tre' , Roma (Italy); INFN Sezione di Roma Tre, Roma (Italy); Ferrari, A. [Fondazione CNAO, Milano (Italy); Fiore, S.; Gauzzi, P. [Dipartimento di Fisica dell' Universita ' La Sapienza' , Roma (Italy); INFN Sezione di Roma, Roma (Italy); Giovannella, S., E-mail: simona.giovannella@lnf.infn.i [Laboratori Nazionali di Frascati, INFN (Italy); Happacher, F. [Laboratori Nazionali di Frascati, INFN (Italy); Iliescu, M. [Laboratori Nazionali di Frascati, INFN (Italy); IFIN-HH, Bucharest (Romania); Martini, M. [Laboratori Nazionali di Frascati, INFN (Italy); Dipartimento di Energetica dell' Universita ' La Sapienza' , Roma (Italy); Miscetti, S. [Laboratori Nazionali di Frascati, INFN (Italy); Nguyen, F. [Dipartimento di Fisica dell' Universita ' Roma Tre' , Roma (Italy); INFN Sezione di Roma Tre, Roma (Italy); Passeri, A. [INFN Sezione di Roma Tre, Roma (Italy); Prokofiev, A. [Svedberg Laboratory, Uppsala University (Sweden); Sciascia, B. [Laboratori Nazionali di Frascati, INFN (Italy)
2009-12-15
The overall detection efficiency to neutrons of a small prototype of the KLOE lead-scintillating fiber calorimeter has been measured at the neutron beam facility of The Svedberg Laboratory, TSL, Uppsala, in the kinetic energy range [5-175] MeV. The measurement of the neutron detection efficiency of a NE110 scintillator provided a reference calibration. At the lowest trigger threshold, the overall calorimeter efficiency ranges from 30% to 50%. This value largely exceeds the estimated 8-15% expected if the response were proportional only to the scintillator equivalent thickness. A detailed simulation of the calorimeter and of the TSL beam line has been performed with the FLUKA Monte Carlo code. First data-MC comparisons are encouraging and make it possible to disentangle a neutron halo component in the beam.
Measurement and simulation of neutron detection efficiency in lead-scintillating fiber calorimeters
International Nuclear Information System (INIS)
Anelli, M.; Bertolucci, S.; Bini, C.; Branchini, P.; Curceanu, C.; De Zorzi, G.; Di Domenico, A.; Di Micco, B.; Ferrari, A.; Fiore, S.; Gauzzi, P.; Giovannella, S.; Happacher, F.; Iliescu, M.; Martini, M.; Miscetti, S.; Nguyen, F.; Passeri, A.; Prokofiev, A.; Sciascia, B.
2009-01-01
The overall detection efficiency to neutrons of a small prototype of the KLOE lead-scintillating fiber calorimeter has been measured at the neutron beam facility of The Svedberg Laboratory, TSL, Uppsala, in the kinetic energy range [5-175] MeV. The measurement of the neutron detection efficiency of a NE110 scintillator provided a reference calibration. At the lowest trigger threshold, the overall calorimeter efficiency ranges from 30% to 50%. This value largely exceeds the estimated 8-15% expected if the response were proportional only to the scintillator equivalent thickness. A detailed simulation of the calorimeter and of the TSL beam line has been performed with the FLUKA Monte Carlo code. First data-MC comparisons are encouraging and make it possible to disentangle a neutron halo component in the beam.
Field calibration of cup anemometers
DEFF Research Database (Denmark)
Schmidt Paulsen, Uwe; Mortensen, Niels Gylling; Hansen, Jens Carsten
2007-01-01
A field calibration method and results are described along with the experience gained with the method. The cup anemometers to be calibrated are mounted in a row on a 10-m high rig and calibrated in the free wind against a reference cup anemometer. The method has been reported [1] to improve the statistical bias on the data relative to calibrations carried out in a wind tunnel. The methodology is sufficiently accurate for calibration of cup anemometers used for wind resource assessments and provides a simple, reliable and cost-effective solution to cup anemometer calibration, especially suited...
Improved Monte Carlo Method for PSA Uncertainty Analysis
International Nuclear Information System (INIS)
Choi, Jongsoo
2016-01-01
The treatment of uncertainty is an important issue for regulatory decisions. Uncertainties exist from knowledge limitations. A probabilistic approach has exposed some of these limitations and provided a framework to assess their significance and assist in developing a strategy to accommodate them in the regulatory process. The uncertainty analysis (UA) is usually based on the Monte Carlo method. This paper proposes a Monte Carlo UA approach to calculate the mean risk metrics accounting for the state-of-knowledge correlation (SOKC) between basic events (including CCFs) using efficient random number generators, and to meet Capability Category III of the ASME/ANS PRA standard. Audit calculation is needed in PSA regulatory reviews of uncertainty analysis results submitted for licensing. The proposed Monte Carlo UA approach provides a high degree of confidence in PSA reviews. All PSAs need to account for the SOKC between event probabilities to meet the ASME/ANS PRA standard.
Improved Monte Carlo Method for PSA Uncertainty Analysis
Energy Technology Data Exchange (ETDEWEB)
Choi, Jongsoo [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)
2016-10-15
The treatment of uncertainty is an important issue for regulatory decisions. Uncertainties exist from knowledge limitations. A probabilistic approach has exposed some of these limitations and provided a framework to assess their significance and assist in developing a strategy to accommodate them in the regulatory process. The uncertainty analysis (UA) is usually based on the Monte Carlo method. This paper proposes a Monte Carlo UA approach to calculate the mean risk metrics accounting for the state-of-knowledge correlation (SOKC) between basic events (including CCFs) using efficient random number generators, and to meet Capability Category III of the ASME/ANS PRA standard. Audit calculation is needed in PSA regulatory reviews of uncertainty analysis results submitted for licensing. The proposed Monte Carlo UA approach provides a high degree of confidence in PSA reviews. All PSAs need to account for the SOKC between event probabilities to meet the ASME/ANS PRA standard.
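The effect of the SOKC can be shown with a deliberately small sketch: an AND gate of two basic events that share one epistemic distribution (the lognormal parameters below are invented). Reusing the same sample for identically distributed events raises the mean of the product by the factor e^{σ²}, which independent sampling misses; this is why ignoring the SOKC underestimates mean risk.

```python
import random

def mc_top_event_mean(n, correlated, seed=0):
    """Mean of an AND-gate top event q = p1*p2, where both basic events
    share the same lognormal epistemic distribution.

    With the state-of-knowledge correlation (SOKC) the SAME sample is
    used for both events; without it, they are sampled independently.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        p1 = rng.lognormvariate(-6.0, 1.0)   # invented epistemic distribution
        p2 = p1 if correlated else rng.lognormvariate(-6.0, 1.0)
        total += p1 * p2
    return total / n

n = 100000
m_ind = mc_top_event_mean(n, correlated=False)
m_cor = mc_top_event_mean(n, correlated=True)
print(m_ind, m_cor, m_cor / m_ind)   # SOKC raises the mean by ~e^{sigma^2}
```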
Multiple-time-stepping generalized hybrid Monte Carlo methods
Energy Technology Data Exchange (ETDEWEB)
Escribano, Bruno, E-mail: bescribano@bcamath.org [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); Akhmatskaya, Elena [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao (Spain); Reich, Sebastian [Universität Potsdam, Institut für Mathematik, D-14469 Potsdam (Germany); Azpiroz, Jon M. [Kimika Fakultatea, Euskal Herriko Unibertsitatea (UPV/EHU) and Donostia International Physics Center (DIPC), P.K. 1072, Donostia (Spain)
2015-01-01
Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved superior in sampling efficiency to its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only improve the performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by the generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.
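For readers unfamiliar with the baseline the paper builds on, a minimal plain-HMC sampler shows the leapfrog-plus-Metropolis structure. This is not GSHMC or the MTS variants: the target is an invented 1-D standard normal, there is no force splitting, no shadow Hamiltonian, and the momentum is fully refreshed each trajectory.

```python
import math, random

def hmc_1d(n_samples, step=0.2, n_leap=10, seed=2):
    """Plain hybrid Monte Carlo for a 1-D standard normal target.

    U(q) = q^2/2; the leapfrog integrator conserves a shadow Hamiltonian
    closely, so the Metropolis test on the energy error keeps sampling exact.
    """
    rng = random.Random(seed)
    q, samples = 0.0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)                 # fresh momentum draw
        q_new, p_new = q, p
        # leapfrog: half-kick, (n_leap-1) full drift/kick pairs, final half-kick
        p_new -= 0.5 * step * q_new             # dU/dq = q
        for _ in range(n_leap - 1):
            q_new += step * p_new
            p_new -= step * q_new
        q_new += step * p_new
        p_new -= 0.5 * step * q_new
        dH = 0.5 * (q_new**2 + p_new**2) - 0.5 * (q**2 + p**2)
        if math.log(rng.random()) < -dH:        # Metropolis accept/reject
            q = q_new
        samples.append(q)
    return samples

s = hmc_1d(20000)
mean = sum(s) / len(s)
var = sum(x * x for x in s) / len(s) - mean**2
print(round(mean, 2), round(var, 2))   # near 0 and 1
```

GSHMC departs from this sketch by sampling with respect to a modified (shadow) density and reweighting, and by only partially refreshing the momentum.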
Monte Carlo tree search strategies
VODOPIVEC, TOM
2018-01-01
Following the breakthrough in the game of Go, Monte Carlo tree search (MCTS) methods triggered rapid progress in game-playing agents: the research community has since developed many variants and improvements of the MCTS algorithm, thereby advancing artificial intelligence not only in games but also in numerous other domains. Although MCTS methods combine the generality of random sampling with the precision of tree search, in practice they can suffer from slow conv...
Energy Technology Data Exchange (ETDEWEB)
Anelli, M.; Bertolucci, S.; Curceanu, C.; Giovannella, S.; Happacher, F.; Iliescu, M.; Martini, M.; Miscetti, S. [Laboratori Nazionali di Frascati, INFN (Italy)]; Battistoni, G. [Sezione INFN di Milano (Italy)]; Bini, C.; De Zorzi, G.; Di Domenico, A.; Gauzzi, P. [Universita degli Studi 'La Sapienza' e Sezione INFN di Roma (Italy)]; Branchini, P.; Di Micco, B.; Nguyen, F.; Passeri, A. [Universita degli Studi 'Roma Tre' e Sezione INFN di Roma Tre (Italy)]; Ferrari, A. [Fondazione CNAO, Milano (Italy)]; Prokofiev, A. [Svedberg Laboratory, Uppsala University (Sweden)]; Fiore, S., E-mail: matteo.martino@inf.infn.i
2009-04-01
We have measured the overall detection efficiency to neutrons of a small prototype of the KLOE lead-scintillating fiber calorimeter in the kinetic energy range [5,175] MeV. The measurement has been done in a dedicated test beam at the neutron beam facility of the Svedberg Laboratory, TSL Uppsala. The measurement of the neutron detection efficiency of a NE110 scintillator provided a reference calibration. At the lowest trigger threshold, the overall calorimeter efficiency ranges from 28% to 33%. This value largely exceeds the estimated ~8% expected if the response were proportional only to the scintillator equivalent thickness. A detailed simulation of the calorimeter and of the TSL beam line has been performed with the FLUKA Monte Carlo code. The simulated response of the detector to neutrons is presented together with the first data-to-Monte Carlo comparison. The results show an overall neutron efficiency of about 35%. The reasons for such an efficiency enhancement, in comparison with typical scintillator-based neutron counters, are explained, opening the road to a novel neutron detector.
Reduced Calibration Curve for Proton Computed Tomography
International Nuclear Information System (INIS)
Yevseyeva, Olga; Assis, Joaquim de; Evseev, Ivan; Schelin, Hugo; Paschuk, Sergei; Milhoretto, Edney; Setti, Joao; Diaz, Katherin; Hormaza, Joel; Lopes, Ricardo
2010-01-01
The pCT deals with relatively thick targets like the human head or trunk. Thus, the fidelity of pCT as a tool for proton therapy planning depends on the accuracy of the physical formulas used for proton interaction with thick absorbers. Although the actual overall accuracy of the proton stopping power in the Bethe-Bloch domain is about 1%, analytical calculations and Monte Carlo simulations with codes like TRIM/SRIM, MCNPX and GEANT4 do not agree with each other. An attempt to validate the codes against experimental data for thick absorbers faces some difficulties: only a few data sets are available, and the existing data sets have been acquired at different initial proton energies and for different absorber materials. In this work we compare the results of our Monte Carlo simulations with existing experimental data in terms of the reduced calibration curve, i.e. the range-energy dependence normalized, on the range scale, by the full projected CSDA range for the given initial proton energy in the given material (taken from the NIST PSTAR database), and, on the final proton energy scale, by the given initial energy of the protons. This approach is almost energy- and material-independent. The results of our analysis are important for pCT development because the contradictions observed at arbitrary low initial proton energies can now be easily scaled to typical pCT energies.
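The normalization can be sketched under a Bragg-Kleeman power-law range-energy model R = alpha*E**p. The constants below are approximate textbook values for protons in water, not the NIST PSTAR data the authors use; under this model the reduced curve 1 - (E/E0)**p is independent of both the initial energy E0 and the material constant alpha, which is what makes data sets taken at different beam energies comparable.

```python
def csda_range(E, alpha=0.0022, p=1.77):
    """Bragg-Kleeman power-law approximation R = alpha * E**p (cm, MeV)."""
    return alpha * E ** p

def reduced_curve(E0, n=6):
    """Reduced calibration curve: depth and exit energy both normalised.

    Depth x into the absorber is R(E0) - R(E); dividing by R(E0) gives
    x/R(E0) = 1 - (E/E0)**p, free of E0 and of the material constant alpha.
    """
    pts = []
    for i in range(n + 1):
        E = E0 * i / n                       # exit energy from 0 to E0
        x = (csda_range(E0) - csda_range(E)) / csda_range(E0)
        pts.append((E / E0, x))
    return pts

# curves for two very different initial energies coincide point by point
c100 = reduced_curve(100.0)
c230 = reduced_curve(230.0)
for (e1, x1), (e2, x2) in zip(c100, c230):
    assert abs(x1 - x2) < 1e-12
print(c100[3])   # midpoint of the reduced curve
```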
Individual dosimetry and calibration
International Nuclear Information System (INIS)
Otto, T.
1997-01-01
In 1996, the Dosimetry and Calibration Section was, as in previous years, mainly engaged in routine tasks: the distribution of over 6000 dosimeters (with a total of more than 10,000 films) every two months and the calibration of about 900 fixed and mobile instruments used in the radiation survey sections of RP group. These tasks were, thanks to an experienced team, well mastered. Special efforts had to be made in a number of areas to modernize the service or to keep it in line with new prescriptions. The Individual Dosimetry Service had to assure that CERN's contracting firms comply with the prescriptions in the Radiation Safety Manual (1996) that had been inspired by the Swiss Ordinance of 1994: Companies must file for authorizations with the Swiss Federal Office for Public Health requiring that in every company an 'Expert in Radiation Protection' be nominated and subsequently trained. CERN's Individual Dosimetry Service is accredited by the Swiss Federal Authorities and works closely together with other, similar services on a rigorous quality assurance programme. Within this framework, CERN was mandated to organize this year the annual Swiss 'Intercomparison of Dosimeters'. All ten accredited dosimetry services - among others those of the Paul Scherrer Institute (PSI) in Villigen and of the four Swiss nuclear power stations - sent dosimeters to CERN, where they were irradiated in CERN's calibration facility with precise photon doses. After return to their origin they were processed and evaluated. The results were communicated to CERN and were compared with the originally given doses. A report on the results was subsequently prepared and submitted to the Swiss 'Group of Experts on Personal Dosimetry'. Reference monitors for photon and neutron radiation were brought to standard laboratories to assure the traceability of CERN's calibration service to the fundamental quantities. For photon radiation, a set of ionization chambers was calibrated in the reference field
A continuation multilevel Monte Carlo algorithm
Collier, Nathan
2014-09-05
We propose a novel Continuation Multi Level Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending when the required error tolerance is satisfied. CMLMC assumes discretization hierarchies that are defined a priori for each level and are geometrically refined across levels. The actual choice of computational work across levels is based on parametric models for the average cost per sample and the corresponding variance and weak error. These parameters are calibrated using Bayesian estimation, taking particular notice of the deepest levels of the discretization hierarchy, where only few realizations are available to produce the estimates. The resulting CMLMC estimator exhibits a non-trivial splitting between bias and statistical contributions. We also show the asymptotic normality of the statistical error in the MLMC estimator and justify in this way our error estimate that allows prescribing both required accuracy and confidence in the final result. Numerical results substantiate the above results and illustrate the corresponding computational savings in examples that are described in terms of differential equations either driven by random measures or with random coefficients. © 2014, Springer Science+Business Media Dordrecht.
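A bare-bones (non-continuation) multilevel Monte Carlo estimator illustrates the telescoping structure that CMLMC builds on, here for E[S_T] of geometric Brownian motion with Euler discretization. All parameters are invented, the per-level sample counts are fixed rather than optimized, and the paper's tolerance continuation and Bayesian calibration of the cost/variance models are omitted.

```python
import math, random

def mlmc_gbm(L, n_per_level, r=0.05, sig=0.2, S0=1.0, T=1.0, seed=3):
    """Multilevel Monte Carlo estimate of E[S_T] for geometric Brownian
    motion dS = r*S*dt + sig*S*dW, using 2**l Euler steps on level l.

    E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]; each correction couples
    fine and coarse paths through the same Brownian increments.
    """
    rng = random.Random(seed)

    def euler(dW, dt):
        S = S0
        for w in dW:
            S += r * S * dt + sig * S * w
        return S

    est = 0.0
    for l in range(L + 1):
        nf = 2 ** l                      # fine steps on this level
        dtf = T / nf
        acc = 0.0
        for _ in range(n_per_level):
            dW = [rng.gauss(0.0, math.sqrt(dtf)) for _ in range(nf)]
            Pf = euler(dW, dtf)
            if l == 0:
                acc += Pf
            else:                        # coarse path: pairwise-summed increments
                dWc = [dW[2 * i] + dW[2 * i + 1] for i in range(nf // 2)]
                acc += Pf - euler(dWc, 2 * dtf)
        est += acc / n_per_level
    return est

est = mlmc_gbm(L=5, n_per_level=10000)
print(est, math.exp(0.05))   # exact E[S_T] = S0 * e^{rT}
```

Because the corrections have small variance, most samples are spent on the cheap coarse level; CMLMC's contribution is choosing the per-level sample counts and the final tolerance adaptively instead of fixing them as done here.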
Quantum Monte Carlo for atoms and molecules
International Nuclear Information System (INIS)
Barnett, R.N.
1989-11-01
The diffusion quantum Monte Carlo with fixed nodes (QMC) approach has been employed in studying energy eigenstates for 1-4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H2, LiH, Li2, and H2O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li2, and H2O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schroedinger equation were studied extensively. These parameters include the time step, renormalization time and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90-100% of the correlation energy) have been obtained with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions.
Parallel Monte Carlo Search for Hough Transform
Lopes, Raul H. C.; Franqueira, Virginia N. L.; Reid, Ivan D.; Hobson, Peter R.
2017-10-01
We investigate the problem of line detection in digital image processing, and in particular how state-of-the-art algorithms behave in the presence of noise and whether CPU efficiency can be improved by the combination of a Monte Carlo Tree Search, hierarchical space decomposition, and parallel computing. The starting point of the investigation is the method introduced in 1962 by Paul Hough for detecting lines in binary images. Extended in the 1970s to the detection of space forms, what came to be known as the Hough Transform (HT) has been proposed, for example, in the context of track fitting in the LHC ATLAS and CMS projects. The Hough Transform transfers the problem of line detection into one of optimizing the peak in a vote-counting process over cells which contain the possible points of candidate lines. The detection algorithm can be computationally expensive both in the demands made upon the processor and on memory. Additionally, it can have reduced detection effectiveness in the presence of noise. Our first contribution is an evaluation of the use of a variation of the Radon Transform as a means of improving the effectiveness of line detection in the presence of noise. Then, parallel algorithms for variations of the Hough Transform and the Radon Transform for line detection are introduced. An algorithm for Parallel Monte Carlo Search applied to line detection is also introduced. Their algorithmic complexities are discussed. Finally, implementations on multi-GPU and multicore architectures are discussed.
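The vote-counting process at the heart of the Hough Transform can be sketched as follows: a minimal (theta, rho) accumulator on an invented point set, without the hierarchical decomposition, Monte Carlo search, or parallelism discussed above. Collinear points pile votes into a single cell, and the accumulator peak recovers the line even with outliers present.

```python
import math

def hough_lines(points, n_theta=180, n_rho=200, rho_max=100.0):
    """Minimal Hough-transform vote accumulator for line detection.

    Each point votes for every (theta, rho) cell consistent with
    rho = x*cos(theta) + y*sin(theta); the peak cell is the detected line.
    """
    acc = [[0] * n_rho for _ in range(n_theta)]
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            r = int((rho + rho_max) / (2 * rho_max) * n_rho)
            if 0 <= r < n_rho:
                acc[t][r] += 1
    # peak of the vote count
    votes, t, r = max((acc[t][r], t, r)
                      for t in range(n_theta) for r in range(n_rho))
    theta = math.pi * t / n_theta
    rho = (r + 0.5) / n_rho * 2 * rho_max - rho_max   # bin centre
    return votes, theta, rho

# 20 points on the line y = x (theta = 3*pi/4, rho = 0) plus two outliers
pts = [(i, i) for i in range(20)] + [(3, 15), (17, 2)]
votes, theta, rho = hough_lines(pts)
print(votes, round(theta, 2), round(rho, 1))
```

The quadratic cost visible here (every point votes in every theta bin) is exactly what motivates the hierarchical and parallel strategies the abstract describes.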
MAVEN SEP Calibrated Data Bundle
National Aeronautics and Space Administration — The maven.sep.calibrated Level 2 Science Data Bundle contains fully calibrated SEP data, as well as the raw count data from which they are derived, and ancillary...
Ultrasonic calibration assembly
International Nuclear Information System (INIS)
1981-01-01
Ultrasonic transducers for in-service inspection of nuclear reactor vessels have several problems associated with them which this invention seeks to overcome. The first is that of calibration or referencing a zero start point for the vertical axis of transducer movement to locate a weld defect. The second is that of verifying the positioning (vertically or at a predetermined angle). Thirdly there is the problem of ascertaining the speed per unit distance in the operating medium of the transducer beam prior to the actual inspection. The apparatus described is a calibration assembly which includes a fixed, generally spherical body having a surface for reflecting an ultrasonic beam from one of the transducers which can be moved until the reflection from the spherical body is the highest amplitude return signal indicating radial alignment from the body. (U.K.)
Travelling gradient thermocouple calibration
International Nuclear Information System (INIS)
Broomfield, G.H.
1975-01-01
A short discussion of the origins of the thermocouple EMF is used to re-introduce the idea that the Peltier and Thomson effects are indistinguishable from one another. Thermocouples may be viewed as devices which generate an EMF at junctions, or as integrators of EMFs developed in thermal gradients. The thermal-gradient view is considered the more appropriate: because of its better accord with theory and behaviour, the correct approach to calibration and to the investigation of service effects is immediately obvious. Inhomogeneities arise in thermocouples during manufacture and in service. The results of travelling-gradient measurements are used to show that such effects are revealed with a resolution which depends on the length of the gradient, although they may be masked during simple immersion calibration. Proposed tests on thermocouples irradiated in a nuclear reactor are discussed.
Mesoscale hybrid calibration artifact
Tran, Hy D.; Claudet, Andre A.; Oliver, Andrew D.
2010-09-07
A mesoscale calibration artifact, also called a hybrid artifact, suitable for hybrid dimensional measurement, and a method for making the artifact. The hybrid artifact has structural characteristics that make it suitable for dimensional measurement in both vision-based systems and touch-probe-based systems. The hybrid artifact employs the intersection of bulk-micromachined planes to fabricate edges that are sharp to the nanometer level and intersecting planes with crystal-lattice-defined angles.
Lorefice, Salvatore; Malengo, Andrea
2006-10-01
After a brief description of the different methods employed in the periodic calibration of hydrometers, used in most cases to measure the density of liquids in the range between 500 kg m-3 and 2000 kg m-3, particular emphasis is given to the multipoint procedure based on hydrostatic weighing, also known as Cuckow's method. The features of the calibration apparatus and the procedure used at the INRiM (formerly IMGC-CNR) density laboratory have been considered to assess all relevant contributions involved in the calibration of different kinds of hydrometers. The uncertainty is strongly dependent on the kind of hydrometer; in particular, the results highlight the importance of the density of the reference buoyant liquid, the temperature of calibration and the skill of the operator in reading the scale in the overall assessment of the uncertainty. It is also interesting to realize that for high-resolution hydrometers (division of 0.1 kg m-3), the uncertainty contribution of the density of the reference liquid is the main source of the total uncertainty, but its share falls to about 50% for hydrometers with a division of 0.5 kg m-3 and becomes almost negligible for hydrometers with a division of 1 kg m-3, for which the reading uncertainty is the predominant part of the total uncertainty. At present the best INRiM result is obtained with commercially available hydrometers having a scale division of 0.1 kg m-3, for which the relative uncertainty is about 12 × 10-6.
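The dominance argument above can be illustrated with a simple root-sum-square uncertainty budget. The numbers below are invented for illustration only (they are not the INRiM values); the point is that when one contribution is a few times larger than the rest, it dominates the quadrature sum.

```python
import math

def combined_uncertainty(contribs):
    """Root-sum-square combination of independent standard uncertainties."""
    return math.sqrt(sum(u * u for u in contribs.values()))

# illustrative (made-up) budget for a high-resolution hydrometer, in kg/m^3;
# the reference-liquid density term dominates, as the abstract reports
budget = {"reference liquid density": 0.010,
          "scale reading": 0.004,
          "temperature": 0.003,
          "weighing": 0.002}
u_c = combined_uncertainty(budget)
share = budget["reference liquid density"] ** 2 / u_c ** 2
print(round(u_c, 4), round(share, 2))   # dominant term carries most of the variance
```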
Golobokov, M.; Danilevich, S.
2018-04-01
To assess calibration reliability and automate that assessment, procedures for data collection and a simulation study of the thermal imager calibration procedure have been elaborated. The existing calibration techniques do not always provide high reliability, so a new method for analyzing existing calibration techniques and developing efficient new ones has been suggested and tested. A class of software has been studied that generates instrument calibration reports automatically, monitors their proper configuration, processes measurement results and assesses instrument validity. Such software reduces the man-hours spent finalizing calibration data by a factor of 2 to 5 and eliminates a whole set of typical operator errors.
Exact Monte Carlo for molecules
International Nuclear Information System (INIS)
Lester, W.A. Jr.; Reynolds, P.J.
1985-03-01
A brief summary of the fixed-node quantum Monte Carlo method is presented. Results obtained for binding energies, the classical barrier height for H + H₂, and the singlet-triplet splitting in methylene are presented and discussed. 17 refs
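The projector idea behind fixed-node quantum Monte Carlo can be illustrated with a toy diffusion Monte Carlo run. The sketch below, assuming a 1D harmonic oscillator in atomic units (V = x²/2, exact ground-state energy 0.5), is not the molecular fixed-node implementation of the paper: in 1D the ground state is nodeless, so no nodal constraint or importance sampling is needed, and only the diffusion-plus-branching core survives:

```python
import numpy as np

def dmc_harmonic(n_walkers=2000, n_steps=2000, dt=0.01, seed=0):
    """Minimal diffusion Monte Carlo for V(x) = x^2 / 2.

    Walkers diffuse (kinetic term) and branch with weight
    exp(-dt * (V - E_ref)) (potential term); a feedback on E_ref keeps
    the population near its target, and the averaged E_ref estimates E0.
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n_walkers)          # initial walker positions
    e_ref = 0.5                             # trial reference energy
    energies = []
    for step in range(n_steps):
        x = x + rng.normal(scale=np.sqrt(dt), size=x.size)   # diffusion step
        v = 0.5 * x * x
        w = np.exp(-dt * (v - e_ref))                        # branching weight
        m = np.minimum((w + rng.random(x.size)).astype(int), 3)  # stochastic copies
        x = np.repeat(x, m)
        e_ref += 0.1 * np.log(n_walkers / x.size)            # population control
        if step > n_steps // 2:                              # discard equilibration
            energies.append(e_ref)
    return float(np.mean(energies))
```

The estimate carries a time-step bias of order dt and statistical noise; for molecules, the fixed-node method additionally constrains walkers to the nodal pockets of a trial wavefunction to keep the projected state fermionic.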
A Study on Relative Radiometric Calibration without Calibration Field for YG-25
Directory of Open Access Journals (Sweden)
ZHANG Guo
2017-08-01
YG-25 is China's first agile optical remote sensing satellite to acquire sub-meter imagery of the Earth. The side-slither calibration technique is an on-orbit maneuver that has been used to flat-field image data acquired over a uniform calibration field. However, imaging a single uniform calibration field cannot cover the full dynamic response range of the sensor and reduces efficiency. This paper proposes a new relative radiometric calibration method for YG-25 in which a 90-degree yaw maneuver is performed over arbitrary non-uniform features of the Earth. An enhanced side-slither image horizontal correction method based on the line segment detector (LSD) algorithm is used to solve the over-shift problem of side-slither images, and the shifted results are compared with other horizontal correction methods. The histogram match algorithm is then used to calculate the relative gains of all detectors. The correctness and validity of the proposed method are validated using YG-25 on-board side-slither data. The results show that the mean streaking metric of the relatively corrected YG-25 images is better than 0.07%, that the noticeable striping artifacts and residual noise are removed, and that the calibration accuracy of the side-slither technique based on non-uniform features is superior to that obtained from image statistics accumulated over the sensor's life span.
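The per-detector gain estimation step can be sketched as follows. This is a simplified moment-matching surrogate for the histogram match algorithm named in the abstract (matching each column's mean instead of its full cumulative histogram), assuming a horizontally corrected side-slither image in which every detector column has viewed statistically the same scene; the function name is illustrative, not from the paper:

```python
import numpy as np

def relative_gains_by_mean_match(image, ref_detector=0):
    """Estimate per-detector relative gain corrections from a 2D image of
    shape (lines, detectors). After side-slither horizontal correction,
    each column is assumed to sample the same scene statistics, so any
    systematic column-to-column difference is attributed to detector gain.
    Returns multiplicative corrections that flatten the columns to the
    reference detector's level.
    """
    col_means = image.mean(axis=0)
    return col_means[ref_detector] / col_means

# Applying the corrections removes the column striping:
#     corrected = image * relative_gains_by_mean_match(image)
```

A full histogram match would instead build each column's cumulative histogram and remap gray levels through the reference column's inverse CDF, which also corrects nonlinear detector response rather than a single gain factor.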
Observation models in radiocarbon calibration
International Nuclear Information System (INIS)
Jones, M.D.; Nicholls, G.K.
2001-01-01
The observation model underlying any calibration process dictates the precise mathematical details of the calibration calculations. Accordingly, it is important that an appropriate observation model is used. Here this is illustrated with reference to the use of reservoir offsets, where the standard calibration approach is based on a different model from the one that practitioners clearly believe is being applied. This sort of error can give rise to significantly erroneous calibration results. (author). 12 refs., 1 fig
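The kind of model mismatch the abstract alludes to can be made concrete with a toy sketch. Assuming the reservoir offset is itself an uncertain measurement, the two candidate observation models differ in whether the offset's uncertainty is propagated into the radiocarbon determination passed to calibration; the function names and numbers are illustrative, not from the paper:

```python
import math

def offset_as_constant(age_bp, sigma, offset):
    """Model A: treat the reservoir offset as an exactly known constant.
    The offset is subtracted but the quoted error is left unchanged."""
    return age_bp - offset, sigma

def offset_as_measurement(age_bp, sigma, offset, sigma_offset):
    """Model B: treat the offset as a measured quantity with its own
    uncertainty, which propagates in quadrature into the determination
    actually used for calibration."""
    return age_bp - offset, math.sqrt(sigma**2 + sigma_offset**2)
```

Under Model A the calibrated probability distribution is artificially narrow whenever the offset is poorly known, which is one way an inappropriate observation model yields significantly erroneous calibration results.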