Monte Carlo efficiency calibration of a neutron generator-based total-body irradiator
International Nuclear Information System (INIS)
Shypailo, R.J.; Ellis, K.J.
2009-01-01
Many body composition measurement systems are calibrated against a single-sized reference phantom. Prompt-gamma neutron activation (PGNA) provides the only direct measure of total body nitrogen (TBN), an index of the body's lean tissue mass. In PGNA systems, body size influences neutron flux attenuation, induced gamma signal distribution, and counting efficiency. Thus, calibration based on a single-sized phantom could result in inaccurate TBN values. We used Monte Carlo simulations (MCNP-5; Los Alamos National Laboratory) to map a system's response to the range of body weights (65-160 kg) and body fat distributions (25-60%) in obese humans. Calibration curves were constructed to derive body-size correction factors relative to a standard reference phantom, providing customized adjustments to account for differences in body habitus of obese adults. The use of MCNP-generated calibration curves should allow for a better estimate of the true changes in lean tissue mass that may occur during intervention programs focused only on weight loss. (author)
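The body-size correction described above amounts to a lookup on an MCNP-derived calibration curve. A minimal sketch of that step follows; all numbers (phantom weights, correction factors, nitrogen mass) are hypothetical illustrations, not values from the study.

```python
# Sketch: apply a body-size correction factor from an MCNP-derived
# calibration curve. The (weight, factor) pairs are invented for
# illustration; a real curve would come from the phantom simulations.

def interp(x, xs, ys):
    """Piecewise-linear interpolation, clamped at the curve's ends."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + t * (ys[i + 1] - ys[i])

weights = [65, 90, 115, 140, 160]          # kg, phantom weights (hypothetical)
factors = [1.00, 1.06, 1.13, 1.21, 1.27]   # correction vs. reference phantom (hypothetical)

raw_tbn = 1.80  # kg nitrogen from the single-phantom calibration (hypothetical)
corrected_tbn = raw_tbn * interp(120, weights, factors)  # subject weighing 120 kg
```

The correction grows with body size because a heavier, fatter subject attenuates more of the neutron flux and the induced gamma signal.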
International Nuclear Information System (INIS)
Carrazana González, J.; Cornejo Díaz, N.; Jurado Vargas, M.
2012-01-01
We studied the applicability of the Monte Carlo code DETEFF for the efficiency calibration of detectors for in situ gamma-ray spectrometry determinations of ground deposition activity levels. For this purpose, the code DETEFF was applied to a study case, and the calculated 137Cs activity deposition levels at four sites were compared with published values obtained both by soil sampling and by in situ measurements. The 137Cs ground deposition levels obtained with DETEFF were found to be equivalent to the results of the study case within the uncertainties involved. The code DETEFF could thus be used for the efficiency calibration of in situ gamma-ray spectrometry for the determination of ground deposition activity using the uniform slab model. It has the advantage of requiring far less simulation time than general Monte Carlo codes adapted for efficiency computation, which is essential for in situ gamma-ray spectrometry where the measurement configuration yields low detection efficiency. - Highlights: ► Application of the code DETEFF to in situ gamma-ray spectrometry. ► 137Cs ground deposition levels evaluated assuming a uniform slab model. ► Code DETEFF allows a rapid efficiency calibration.
Bhati, S; Patni, H K; Ghare, V P; Singh, I S; Nadar, M Y
2012-03-01
Internal contamination due to high-energy photon (HEP) emitters is assessed using a scanning bed whole-body monitor housed in a steel room at the Bhabha Atomic Research Centre (BARC). The monitor consists of a (203 mm diameter × 102 mm thickness) NaI(Tl) detector and is calibrated using a Reference BOMAB phantom representative of an average Indian radiation worker. However, a series of different size physical phantoms are required to account for size variability in workers, which is both expensive and time consuming. Therefore, a theoretical approach based on Monte Carlo techniques has been employed to calibrate the system in scanning geometry with BOMAB phantoms of different sizes characterised by their weight (W) and height (H) for several radionuclides of interest (131I, 137Cs, 60Co and 40K). A computer program developed for this purpose generates the detector response and the detection efficiencies (DEs) for the BARC Reference phantom (63 kg/168 cm), ICRP Reference male phantom (70 kg/170 cm) and several of its scaled versions. The results obtained for different size phantoms indicated a decreasing trend of DEs with the increase in W/H values of the phantoms. The computed DEs for uniform distribution of 137Cs in the BOMAB phantom varied from 3.52 × 10^-3 to 2.88 × 10^-3 counts per photon as the W/H values increased from 0.26 to 0.50. The theoretical results obtained for the BARC Reference phantom have been verified with experimental measurements. The Monte Carlo results from this study will be useful for in vivo assessment of HEP emitters in radiation workers of different physiques.
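Since the abstract reports the 137Cs detection efficiency at the two ends of the W/H range (3.52 × 10^-3 counts per photon at W/H = 0.26, 2.88 × 10^-3 at 0.50), an efficiency for an intermediate body build can be sketched by interpolation. Assuming, only for illustration, that the trend is roughly linear in W/H:

```python
# Sketch: interpolate the 137Cs detection efficiency between the two
# W/H endpoints reported in the abstract. Linearity over this range is
# an assumption made here, not a result of the paper.

def efficiency_cs137(w_over_h):
    x0, y0 = 0.26, 3.52e-3   # counts per photon at W/H = 0.26
    x1, y1 = 0.50, 2.88e-3   # counts per photon at W/H = 0.50
    t = (w_over_h - x0) / (x1 - x0)
    return y0 + t * (y1 - y0)

# BARC Reference phantom: 63 kg / 168 cm -> W/H = 0.375
eff = efficiency_cs137(63 / 168)
```

A worker-specific W/H thus maps directly to a corrected counting efficiency without a dedicated physical phantom.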
International Nuclear Information System (INIS)
Verma, Amit K.; Anilkumar, S.; Narayani, K.; Babu, D.A.R.; Sharma, D.N.
2012-01-01
The gamma-ray spectrometry technique is commonly used for the assessment of radioactivity in environmental matrices such as water, soil and vegetation. The detector used in a gamma-ray spectrometer should be calibrated for each geometry considered and for different gamma energies. It is very difficult to obtain radionuclide standards covering all photon energies, and it is also not feasible to make standard geometries for common applications. There is therefore a need to develop computational techniques to determine the absolute efficiencies of these detectors for practical geometries and for the energies of common radioactive sources. A Monte Carlo based simulation method is proposed to study the response of the detector for various energies and geometries. From the simulated spectrum it is possible to calculate the efficiency at a given gamma energy for the particular geometry modeled. The efficiency calculated by this method has to be validated experimentally using standard sources under laboratory conditions for selected geometries. For the present work, simulation studies were undertaken for the 3″ x 3″ NaI(Tl) detector based gamma spectrometry system set up in our laboratory. In order to assess the effectiveness of the method for low-level radioactivity measurement, a low-activity 40K standard source in cylindrical container geometry was used. A suitable detector and geometry model was developed using a KCl standard of the same size, composition, density and radionuclide content. The simulation data generated were compared with experimental spectral data taken for counting periods of 20000 s and 50000 s. The peak areas obtained from the simulated spectra were compared with those of the experimental spectral data. It was observed that the count rate in the simulated peak area (9.03 cps) is in agreement with the experimental peak area count rates (8.44 cps and 8.4 cps). The efficiency of the detector calculated by this method offers an alternative.
International Nuclear Information System (INIS)
Baccouche, S.; Al-Azmi, D.; Karunakara, N.; Trabelsi, A.
2012-01-01
Gamma-ray measurements in terrestrial/environmental samples require highly efficient detectors because of the low radionuclide activity concentrations in the samples; scintillators are therefore suitable for this purpose. Two scintillation detectors of identical size, CsI(Tl) and NaI(Tl), were studied in this work for the measurement of terrestrial samples. This work describes a Monte Carlo method for constructing the full-energy efficiency calibration curves for both detectors using gamma-ray energies associated with the decay of the naturally occurring radionuclides 137Cs (661 keV), 40K (1460 keV), 238U (214Bi, 1764 keV) and 232Th (208Tl, 2614 keV), which are found in terrestrial samples. The magnitude of the coincidence summing effect occurring for the 2614 keV emission of 208Tl is assessed by simulation. The method provides an efficient tool for constructing the full-energy efficiency calibration curve of scintillation detectors for any sample geometry and volume, in order to determine accurate activity concentrations in terrestrial samples. - Highlights: ► CsI(Tl) and NaI(Tl) detectors were studied for the measurement of terrestrial samples. ► A Monte Carlo method was used for efficiency calibration using naturally occurring gamma-emitting terrestrial radionuclides. ► The coincidence summing effect occurring for the 2614 keV emission of 208Tl is assessed by simulation.
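A full-energy efficiency calibration curve of this kind is often parameterized as a power law in energy, fit in log-log space through the calibration points. A sketch under that assumption, with hypothetical efficiency values at the four energies named in the abstract:

```python
import math

# Sketch: fit eff(E) = a * E**b through full-energy efficiencies at the
# four natural-emitter energies from the abstract. The efficiency values
# are hypothetical placeholders, and a single power law is an assumed
# (and only approximate) shape above a few hundred keV.

energies = [661.0, 1460.0, 1764.0, 2614.0]     # keV
effs     = [3.0e-2, 1.5e-2, 1.3e-2, 9.0e-3]    # counts/photon (hypothetical)

# closed-form least squares for a straight line in log-log space
lx = [math.log(E) for E in energies]
ly = [math.log(e) for e in effs]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
b = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / sum((x - mx) ** 2 for x in lx)
a = math.exp(my - b * mx)

def efficiency(E_keV):
    """Interpolated full-energy peak efficiency at energy E_keV."""
    return a * E_keV ** b
```

Once fit, the curve converts a net peak area at any of these energies into an activity concentration.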
International Nuclear Information System (INIS)
Hegenbart, Lars
2010-01-01
Detector efficiency calibration of in vivo bioassay measurements is based on physical anthropomorphic phantoms that can be loaded with radionuclides of the suspected incorporation. Systematic errors of traditional calibration methods can cause considerable over- or underestimation of the incorporated activity, and hence of the absorbed dose in the human body. In this work, Monte Carlo methods for radiation transport problems are used. Virtual models of the in vivo measurement equipment used at the Institute of Radiation Research, including detectors and anthropomorphic phantoms, have been developed. Software tools have been written to handle memory-intensive human models for the visualization, preparation and evaluation of simulations of in vivo measurement scenarios. The tools, methods and models used have been validated. Various parameters have been investigated for their influence on the detector efficiency, in order to identify and quantify possible systematic errors. Measures to improve the determination of the detector efficiency have been implemented for routine application in the in vivo measurement laboratory of the institute. A positioning system has been designed and installed in the Partial Body Counter measurement chamber to measure the relative position of the detector with respect to the test person, which was identified as a sensitive parameter. A computer cluster has been set up to facilitate the Monte Carlo simulations and reduce computing time. Methods based on image registration techniques have been developed to transform existing human models to match an individual test person. The measures and methods developed have successfully improved on the classic detector efficiency calibration methods. (orig.)
Monte Carlo simulation: tool for the calibration in analytical determination of radionuclides
International Nuclear Information System (INIS)
Gonzalez, Jorge A. Carrazana; Ferrera, Eduardo A. Capote; Gomez, Isis M. Fernandez; Castro, Gloria V. Rodriguez; Ricardo, Niury Martinez
2013-01-01
This work shows how the traceability of the analytical determinations is established using this calibration method. It highlights the advantages offered by Monte Carlo simulation for applying corrections for differences in chemical composition, density and height of the samples analyzed. Likewise, the results obtained by the LVRA in two exercises organized by the International Atomic Energy Agency (IAEA) are presented. In these exercises (an intercomparison and a proficiency test) all reported analytical results were obtained from efficiency calibrations by Monte Carlo simulation using the DETEFF program.
Results of Monte Carlo calibrations of a low energy germanium detector
International Nuclear Information System (INIS)
Brettner-Messler, R.; Maringer, F.J.
2006-01-01
Normally, measurements of the peak efficiency of a gamma-ray detector are performed with calibrated samples which are prepared to match the measured ones in all important characteristics, such as volume, chemical composition and density. Another way to determine the peak efficiency is to calculate it with special Monte Carlo programs. In principle, the program 'Pencyl' from the code package PENELOPE 2003 can be used for the peak efficiency calibration of a cylindrically symmetric detector; however, exact data for the geometries and the materials are needed. The interpretation of the simulation results is not straightforward, but we found a way to convert the data into values which can be compared with our measurement results. It is possible to find other simulation parameters which yield the same or better results. Further improvements can be expected from longer simulation times and more simulations in the questionable ranges of densities and filling heights. (N.C.)
Hydrogen analysis depth calibration by CORTEO Monte-Carlo simulation
Energy Technology Data Exchange (ETDEWEB)
Moser, M., E-mail: marcus.moser@unibw.de [Universität der Bundeswehr München, Institut für Angewandte Physik und Messtechnik LRT2, Fakultät für Luft- und Raumfahrttechnik, 85577 Neubiberg (Germany); Reichart, P.; Bergmaier, A.; Greubel, C. [Universität der Bundeswehr München, Institut für Angewandte Physik und Messtechnik LRT2, Fakultät für Luft- und Raumfahrttechnik, 85577 Neubiberg (Germany); Schiettekatte, F. [Université de Montréal, Département de Physique, Montréal, QC H3C 3J7 (Canada); Dollinger, G., E-mail: guenther.dollinger@unibw.de [Universität der Bundeswehr München, Institut für Angewandte Physik und Messtechnik LRT2, Fakultät für Luft- und Raumfahrttechnik, 85577 Neubiberg (Germany)
2016-03-15
Hydrogen imaging with sub-μm lateral resolution and sub-ppm sensitivity has become possible with coincident proton–proton (pp) scattering analysis (Reichart et al., 2004). Depth information is evaluated from the energy sum signal with respect to the energy loss of both protons on their path through the sample. To first order, there is no angular dependence due to elastic scattering. To second order, a path length effect due to different energy losses on the paths of the protons causes an angular dependence of the energy sum. Therefore, the energy sum signal has to be de-convoluted depending on the matrix composition, i.e. mainly the atomic number Z, in order to get a depth-calibrated hydrogen profile. Although the path effect can be calculated analytically to first order, multiple scattering effects lead to significant deviations in the depth profile. Hence, in our new approach, we use the CORTEO Monte-Carlo code (Schiettekatte, 2008) to calculate the depth of a coincidence event depending on the scattering angle. The code takes the individual detector geometry into account. In this paper we show that the code correctly reproduces measured pp-scattering energy spectra with roughness effects considered. With more than 100 μm thick Mylar-sandwich targets (Si, Fe, Ge) we demonstrate the deconvolution of the energy spectra on our current multistrip detector at the microprobe SNAKE at the Munich tandem accelerator lab. As a result, hydrogen profiles can be evaluated with a depth accuracy of about 1% of the sample thickness.
Efficiency and accuracy of Monte Carlo (importance) sampling
Waarts, P.H.
2003-01-01
Monte Carlo analysis is often regarded as the simplest and most accurate reliability method. Besides, it is the most transparent method. The only problem is the trade-off between accuracy and efficiency: Monte Carlo becomes less efficient, or less accurate, when very low probabilities are to be computed.
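The accuracy/efficiency problem for small probabilities, and the importance-sampling remedy, can be illustrated with a toy rare-event estimate. Everything here, including the shifted proposal, is an illustrative choice, not drawn from the abstract:

```python
import math, random

# Sketch: estimate p = P(X > 4) for X ~ N(0,1), a probability of order
# 3e-5. Plain Monte Carlo almost never scores a hit at this sample
# size; sampling from the shifted proposal N(4,1) and reweighting by
# the density ratio scores on roughly half the draws.

random.seed(1)
N = 100_000

def weight(x, shift):
    # density ratio pdf_N(0,1)(x) / pdf_N(shift,1)(x)
    return math.exp(-0.5 * x * x + 0.5 * (x - shift) ** 2)

# plain Monte Carlo: indicator average
plain = sum(random.gauss(0, 1) > 4 for _ in range(N)) / N

# importance sampling with proposal N(4,1)
shift = 4.0
acc = 0.0
for _ in range(N):
    x = random.gauss(shift, 1)
    if x > 4:
        acc += weight(x, shift)
importance = acc / N

exact = 3.167e-5  # 1 - Phi(4), for reference
```

With the same number of samples, the importance-sampling estimate lands within a fraction of a percent of the exact value, while the plain estimate rests on a handful of hits at best.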
Calibration of detector efficiency of neutron detector
International Nuclear Information System (INIS)
Guo Hongsheng; He Xijun; Xu Rongkun; Peng Taiping
2001-01-01
BF 3 neutron detector has been set up. Detector efficiency is calibrated by associated particle technique. It is about 3.17 x 10 -4 (1 +- 18%). Neutron yield of neutron generator per pulse (10 7 /pulse) is measured by using the detector
Lu, Dan; Ricciuto, Daniel; Walker, Anthony; Safta, Cosmin; Munger, William
2017-09-01
Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC) sampling provides a comprehensive framework to estimate model parameters and associated uncertainties using their posterior distributions. The effectiveness and efficiency of the method strongly depend on the MCMC algorithm used. In this work, a differential evolution adaptive Metropolis (DREAM) algorithm is used to estimate posterior distributions of 21 parameters for the data assimilation linked ecosystem carbon (DALEC) model using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. The calibration of DREAM results in a better model fit and predictive performance compared to the popular adaptive Metropolis (AM) scheme. Moreover, DREAM indicates that two parameters controlling autumn phenology have multiple modes in their posterior distributions while AM only identifies one mode. The application suggests that DREAM is very suitable for calibrating complex terrestrial ecosystem models, where the number of uncertain parameters is usually large and the existence of local optima is always a concern. In addition, this effort justifies the assumptions of the error model used in Bayesian calibration according to the residual analysis. The result indicates that a heteroscedastic, correlated, Gaussian error model is appropriate for the problem, and the consequently constructed likelihood function can alleviate the underestimation of parameter uncertainty that is usually caused by using uncorrelated error models.
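The MCMC machinery underlying both AM and DREAM builds on the Metropolis acceptance rule. A minimal random-walk Metropolis sketch on a toy one-parameter model (not the DALEC model, and without DREAM's adaptive, population-based proposals) looks like this:

```python
import math, random

# Sketch of the basic building block behind AM and DREAM: random-walk
# Metropolis sampling of the posterior of one parameter. The "model"
# is a toy y = k * x with Gaussian observation error; all data and
# settings are invented for illustration.

random.seed(0)
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]   # synthetic observations, generated near k = 2
sigma = 0.2                  # assumed observation error (homoscedastic here)

def log_post(k):
    # flat prior, Gaussian likelihood -> log posterior up to a constant
    return -0.5 * sum((y - k * x) ** 2 for x, y in zip(xs, ys)) / sigma ** 2

k, lp = 1.0, log_post(1.0)
chain = []
for _ in range(20_000):
    kp = k + random.gauss(0, 0.05)        # symmetric random-walk proposal
    lpp = log_post(kp)
    if math.log(random.random()) < lpp - lp:
        k, lp = kp, lpp                   # accept
    chain.append(k)

burn = chain[5_000:]                      # discard burn-in
k_mean = sum(burn) / len(burn)
```

DREAM replaces the fixed proposal with differential-evolution jumps across a population of chains, which is what lets it find the multiple posterior modes mentioned above.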
Calibration of lung counter using a CT model of Torso phantom and Monte Carlo method
International Nuclear Information System (INIS)
Zhang Binquan; Ma Jizeng; Yang Duanjie; Liu Liye; Cheng Jianping
2006-01-01
Tomographic images of a torso phantom were obtained from a CT scan. The torso phantom represents the trunk of an adult man 170 cm tall and weighing 65 kg. After these images were segmented, cropped, and resized, a 3-dimensional voxel phantom was created. The voxel phantom includes more than 2 million voxels, each of size 2.73 mm x 2.73 mm x 3 mm. This model could be used for the calibration of a lung counter with the Monte Carlo method. On the assumption that radioactive material was homogeneously distributed throughout the lung, counting efficiencies of a HPGe detector in different positions were calculated for different values of the adipose mass fraction (AMF) in the soft tissue of the chest. The results showed that the counting efficiencies of the lung counter changed by up to 67% for the 17.5 keV γ ray and 20% for the 25 keV γ ray when the AMF changed from 0 to 40%. (authors)
High precision efficiency calibration of a HPGe detector
International Nuclear Information System (INIS)
Nica, N.; Hardy, J.C.; Iacob, V.E.; Helmer, R.G.
2003-01-01
Many experiments involving measurements of γ rays require a very precise efficiency calibration. Since γ-ray detection and identification also require good energy resolution, the most commonly used detectors are of the coaxial HPGe type. We have calibrated our 70% HPGe to ∼0.2% precision, motivated by the measurement of precise branching ratios (BR) in superallowed 0+ → 0+ β decays. These BRs are essential ingredients in extracting the ft-values needed to test the Standard Model via the unitarity of the Cabibbo-Kobayashi-Maskawa matrix, a test that it currently fails by more than two standard deviations. To achieve the required high precision in our efficiency calibration, we measured 17 radioactive sources at a source-detector distance of 15 cm. Some of these were commercial 'standard' sources, but we achieved the highest relative precision with 'home-made' sources selected because they have simple decay schemes with negligible side feeding, thus providing exactly matched γ-ray intensities. These latter sources were produced by us at Texas A and M by neutron activation or by nuclear reactions. Another critical source among the 17 was a 60Co source produced by the Physikalisch-Technische Bundesanstalt, Braunschweig, Germany: its absolute activity was quoted to better than 0.06%. We used it to establish our absolute efficiency, while all the other sources were used to determine relative efficiencies, extending our calibration over a large energy range (40-3500 keV). Efficiencies were also determined with Monte Carlo calculations performed with the CYLTRAN code. The physical parameters of the Ge crystal were independently determined, and only two (unmeasurable) dead-layers were adjusted, within physically reasonable limits, to achieve precise absolute agreement with our measured efficiencies. The combination of measured efficiencies at more than 60 individual energies and Monte Carlo calculations to interpolate between them allows us to quote the efficiency of our
Nakano, Y; Yamazaki, A; Watanabe, K; Uritani, A; Ogawa, K; Isobe, M
2014-11-01
Neutron monitoring is important for managing the safety of fusion experiment facilities, because neutrons are generated in fusion reactions. Monte Carlo simulations play an important role in evaluating the influence of neutron scattering from various structures and in correcting differences between deuterium plasma experiments and in situ calibration experiments. We evaluated these influences based on the differences between the two experiments at the Large Helical Device using the Monte Carlo simulation code MCNP5. The difference between the two experiments in absolute detection efficiency is estimated to be largest for the fission chamber between O-ports. We additionally evaluated correction coefficients for some neutron monitors.
International Nuclear Information System (INIS)
Ramos M, D.; Yera S, Y.; Lopez B, G. M.; Acosta R, N.; Vergara G, A.
2014-08-01
This work determines the detection efficiency for 125I and 131I in the thyroid with the identiFINDER detector using the Monte Carlo method. The suitability of the calibration method was analyzed by comparing the results of the direct Monte Carlo method with those of the corrected method; the latter was chosen because its differences from the real efficiency stayed below 10%. To simulate the detector, its geometric parameters were optimized using a tomographic study, which minimized the uncertainties of the estimates. Detector/point-source simulations were then performed to obtain the correction factors at 5 cm, 15 cm and 25 cm, together with the detector/phantom arrangement used for method validation and the final efficiency calculation. These showed that if the Monte Carlo simulation is run at a greater distance than that used in the laboratory measurements, the efficiency is overestimated, while at a shorter distance it is underestimated; the simulation should therefore be run at the same distance at which the real measurement will be made. Efficiency curves and minimum detectable activities for the measurement of 131I and 125I were also obtained. Overall, the Monte Carlo methodology was implemented for the identiFINDER calibration in order to estimate the measured activity of iodine in the thyroid. This method is an ideal substitute for the missing standard solutions and phantoms, ensuring that the capabilities of the Internal Contamination Laboratory of the Centro de Proteccion e Higiene de las Radiaciones remain calibrated for iodine measurement in the thyroid. (author)
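Once an efficiency calibration like this is in hand, the minimum detectable activity follows from the standard Currie formula. A sketch with hypothetical counting conditions for the 364 keV line of 131I (only the gamma yield is a tabulated value; the efficiency, background and counting time are invented for illustration):

```python
import math

# Sketch: minimum detectable activity (MDA) from a calibrated
# efficiency, using the Currie formula at 95% confidence.

def mda_bq(background_counts, efficiency, gamma_yield, live_time_s):
    """MDA in Bq: (2.71 + 4.65 * sqrt(B)) / (eps * p * t)."""
    ld = 2.71 + 4.65 * math.sqrt(background_counts)  # detection limit, counts
    return ld / (efficiency * gamma_yield * live_time_s)

# 131I, 364 keV line: gamma yield ~0.815; other inputs hypothetical
mda = mda_bq(background_counts=400, efficiency=2.0e-3,
             gamma_yield=0.815, live_time_s=600)
```

Lower background, higher efficiency or longer counting time each push the MDA down, which is why the calibrated geometry matters so much for thyroid monitoring.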
Time step length versus efficiency of Monte Carlo burnup calculations
International Nuclear Information System (INIS)
Dufek, Jan; Valtavirta, Ville
2014-01-01
Highlights: • Time step length largely affects efficiency of MC burnup calculations. • Efficiency of MC burnup calculations improves with decreasing time step length. • Results were obtained from SIE-based Monte Carlo burnup calculations. - Abstract: We demonstrate that efficiency of Monte Carlo burnup calculations can be largely affected by the selected time step length. This study employs the stochastic implicit Euler based coupling scheme for Monte Carlo burnup calculations that performs a number of inner iteration steps within each time step. In a series of calculations, we vary the time step length and the number of inner iteration steps; the results suggest that Monte Carlo burnup calculations get more efficient as the time step length is reduced. More time steps must be simulated as they get shorter; however, this is more than compensated by the decrease in computing cost per time step needed for achieving a certain accuracy
Research and application of sourceless efficiency calibration technic
International Nuclear Information System (INIS)
Fu Jie; Xu Cuihua
2007-01-01
An introduction is given to the research on and application of sourceless efficiency calibration for HPGe detectors. The ISOCS and LabSOCS tools are described, and the advantages and disadvantages of sourceless and source-based efficiency calibration are compared. We conclude that sourceless efficiency calibration is a fast and accurate method. (authors)
The peak efficiency calibration of volume source using 152Eu point source in computer
International Nuclear Information System (INIS)
Shen Tingyun; Qian Jianfu; Nan Qinliang; Zhou Yanguo
1997-01-01
The authors describe a method for the peak efficiency calibration of volume sources by means of a 152Eu point source for an HPGe γ spectrometer. The peak efficiency can be computed by Monte Carlo simulation after inputting the detector parameters. The computed results are in agreement with the experimental results within ±3.8%, with one exception at about ±7.4%.
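The point-to-volume transfer can be sketched by Monte Carlo averaging of a point-source efficiency model over source positions sampled inside the cylindrical container. The 1/r^2 point-efficiency model and all dimensions below are illustrative assumptions (self-attenuation and angular response are ignored):

```python
import math, random

# Sketch: volume-source efficiency as the Monte Carlo average of a
# point-source efficiency over a cylindrical sample. The inverse-square
# point model and the geometry are invented for illustration.

def eff_point(r_cm, eff_ref, r_ref_cm=10.0):
    # hypothetical point efficiency falling off as inverse square distance
    return eff_ref * (r_ref_cm / r_cm) ** 2

def eff_volume(eff_ref, radius=3.5, height=6.0, z_offset=2.0, n=50_000):
    """Average the point efficiency over a cylinder whose base sits
    z_offset cm above the detector (taken as the origin)."""
    random.seed(42)
    acc = 0.0
    for _ in range(n):
        rho = radius * math.sqrt(random.random())   # uniform over the disc
        phi = 2 * math.pi * random.random()
        z = z_offset + height * random.random()     # uniform in height
        x, y = rho * math.cos(phi), rho * math.sin(phi)
        acc += eff_point(math.sqrt(x * x + y * y + z * z), eff_ref)
    return acc / n

ev = eff_volume(eff_ref=1.0e-3)   # volume-averaged efficiency
```

A full simulation would replace `eff_point` with transported-photon tallies, which is what absorbs the attenuation and geometry effects this sketch leaves out.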
Calibration of the top-quark Monte-Carlo mass
International Nuclear Information System (INIS)
Kieseler, Jan; Lipka, Katerina; Moch, Sven-Olaf
2015-11-01
We present a method to establish experimentally the relation between the top-quark mass m_t^MC as implemented in Monte-Carlo generators and the Lagrangian mass parameter m_t in a theoretically well-defined renormalization scheme. We propose a simultaneous fit of m_t^MC and an observable sensitive to m_t, which does not rely on any prior assumptions about the relation between m_t and m_t^MC. The measured observable is independent of m_t^MC and can be used subsequently for a determination of m_t. The analysis strategy is illustrated with examples for the extraction of m_t from inclusive and differential cross sections for hadro-production of top-quarks.
Energy Technology Data Exchange (ETDEWEB)
Liu, Zhe [China Institute of Atomic Energy, Beijing 102413 (China); Zhang, Li [Key Laboratory of Particle and Radiation Imaging, Tsinghua University, Ministry of Education, Department of Engineering Physics, Tsinghua University, Beijing 100084 (China)
2015-07-01
In radioactive waste assay with gamma-ray computed tomography, calibration of the intrinsic efficiency of the system is important for the reconstruction of the radioactivity distribution. Due to the geometric characteristics of the system, the non-uniformity of the intrinsic efficiency for gamma-rays with different incident positions and directions is often non-negligible. Intrinsic efficiency curves versus the geometric parameters of the incident gamma-ray are obtained by Monte-Carlo simulation, and two intrinsic efficiency models are suggested to characterize, in the system matrix, the intrinsic efficiency determined by the relative source-detector position and the system geometry. Monte-Carlo simulation is performed to compare the different intrinsic efficiency models. Both suggested models achieve better reconstruction of the radioactivity distribution than the uniform intrinsic efficiency model. Compared to the model based on detector position, the model based on point response increases the reconstruction accuracy, as well as the complexity and time of the calculation. (authors)
An efficient parallel computing scheme for Monte Carlo criticality calculations
International Nuclear Information System (INIS)
Dufek, Jan; Gudowski, Waclaw
2009-01-01
The existing parallel computing schemes for Monte Carlo criticality calculations suffer from a low efficiency when applied on many processors. We suggest a new fission matrix based scheme for efficient parallel computing. The results are derived from the fission matrix that is combined from all parallel simulations. The scheme allows for a practically ideal parallel scaling as no communication among the parallel simulations is required, and inactive cycles are not needed.
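The fission-matrix combination described above can be sketched in a few lines: each independent run tallies its own matrix, the matrices are averaged element-wise, and the fundamental mode (k_eff and fission source shape) is recovered by power iteration. A minimal illustration in Python; all function names and the toy matrices are hypothetical, not the authors' code:

```python
def combine_fission_matrices(matrices):
    """Element-wise average of the fission matrices tallied by independent runs."""
    n = len(matrices[0])
    return [[sum(m[i][j] for m in matrices) / len(matrices)
             for j in range(n)] for i in range(n)]

def fundamental_mode(fm, iters=200):
    """Power iteration on the combined fission matrix: returns (k_eff, source)."""
    n = len(fm)
    source = [1.0 / n] * n
    k_eff = 1.0
    for _ in range(iters):
        nxt = [sum(fm[i][j] * source[j] for j in range(n)) for i in range(n)]
        k_eff = sum(nxt)  # with an L1-normalised source, this estimates the eigenvalue
        source = [x / k_eff for x in nxt]
    return k_eff, source
```

Because each run only contributes a tally, no inter-process communication is needed during the simulations, which is what allows the near-ideal parallel scaling claimed above.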
Shypailo, R. J.; Ellis, K. J.
2011-05-01
During construction of the whole body counter (WBC) at the Children's Nutrition Research Center (CNRC), efficiency calibration was needed to translate acquired counts of 40K to actual grams of potassium for measurement of total body potassium (TBK) in a diverse subject population. The MCNP Monte Carlo n-particle simulation program was used to describe the WBC (54 detectors plus shielding), test individual detector counting response, and create a series of virtual anthropomorphic phantoms based on national reference anthropometric data. Each phantom included an outer layer of adipose tissue and an inner core of lean tissue. Phantoms were designed for both genders representing ages 3.5 to 18.5 years with body sizes from the 5th to the 95th percentile based on body weight. In addition, a spherical surface source surrounding the WBC was modeled in order to measure the effects of subject mass on room background interference. Individual detector measurements showed good agreement with the MCNP model. The background source model came close to agreement with empirical measurements, but showed a trend deviating from unity with increasing subject size. Results from the MCNP simulation of the CNRC WBC agreed well with empirical measurements using BOMAB phantoms. Individual detector efficiency corrections were used to improve the accuracy of the model. Nonlinear multiple regression efficiency calibration equations were derived for each gender. Room background correction is critical in improving the accuracy of the WBC calibration.
International Nuclear Information System (INIS)
Mallett, M.W.
1991-01-01
Lawrence Livermore National Laboratory (LLNL) is currently investigating a new method for obtaining absolute calibration factors for radiation measurement systems used to measure internally deposited radionuclides in vivo. This method uses magnetic resonance imaging (MRI) to determine the anatomical makeup of an individual. A new MRI technique is also employed that is capable of resolving the fat and water content of the human tissue. This anatomical and biochemical information is used to model a mathematical phantom. Monte Carlo methods are then used to simulate the transport of radiation throughout the phantom. By modeling the detection equipment of the in vivo measurement system into the code, calibration factors are generated that are specific to the individual. Furthermore, this method eliminates the need for surrogate human structures in the calibration process. A demonstration of the proposed method is being performed using a fat/water matrix
Model calibration for building energy efficiency simulation
International Nuclear Information System (INIS)
Mustafaraj, Giorgio; Marini, Dashamir; Costa, Andrea; Keane, Marcus
2014-01-01
Highlights: • Developing a 3D model relating to building architecture, occupancy and HVAC operation. • Two calibration stages developed, with the final model providing accurate results. • Using an onsite weather station to generate the weather data file in EnergyPlus. • Predicting the thermal behaviour of underfloor heating, heat pump and natural ventilation. • Monthly energy saving opportunities of 20-27% related to the heat pump were identified. - Abstract: This research work deals with an Environmental Research Institute (ERI) building where an underfloor heating system and natural ventilation are the main systems used to maintain comfort conditions throughout 80% of the building area. Firstly, this work involved developing a 3D model relating to building architecture, occupancy and HVAC operation. Secondly, a two-level calibration methodology was applied in order to ensure accuracy and reduce the likelihood of errors. To further improve the accuracy of calibration, a historical weather data file for the year 2011 was created from the on-site local weather station of the ERI building. After applying the second level of the calibration process, the values of the Mean Bias Error (MBE) and the Coefficient of Variation of the Root Mean Squared Error (CV(RMSE)) on an hourly basis for heat pump electricity consumption varied within the following ranges: MBE (hourly) from −5.6% to 7.5% and CV(RMSE) (hourly) from 7.3% to 25.1%. Finally, the building was simulated with EnergyPlus to identify further possibilities of energy savings in the water-to-water heat pump supplying the underfloor heating system. It was found that electricity consumption savings from the heat pump can vary between 20% and 27% on a monthly basis
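The hourly MBE and CV(RMSE) criteria used in such calibration work have standard definitions (e.g. ASHRAE Guideline 14). A minimal sketch, using the divide-by-n convention as one common choice (some formulations use n−1):

```python
from math import sqrt

def mbe_percent(measured, simulated):
    """Mean Bias Error (%): normalised sum of hourly residuals."""
    return 100.0 * sum(m - s for m, s in zip(measured, simulated)) / sum(measured)

def cv_rmse_percent(measured, simulated):
    """Coefficient of Variation of the RMSE (%), normalised by the measured mean."""
    n = len(measured)
    rmse = sqrt(sum((m - s) ** 2 for m, s in zip(measured, simulated)) / n)
    return 100.0 * rmse / (sum(measured) / n)
```

A symmetric over/under-prediction gives MBE near zero while CV(RMSE) stays large, which is why both criteria are checked together.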
Efficiency calibration of solid track spark auto counter
International Nuclear Information System (INIS)
Wang Mei; Wen Zhongwei; Lin Jufang; Liu Rong; Jiang Li; Lu Xinxin; Zhu Tonghua
2008-01-01
The factors influencing the detection efficiency of the solid track spark auto counter were analyzed, and the best etching conditions and charging parameters were reconfirmed. Using a small-plate fission ionization chamber, the efficiency of the solid track spark auto counter was re-calibrated for various experimental assemblies, and its efficiency under various experimental conditions was obtained. (authors)
Energy Technology Data Exchange (ETDEWEB)
Ramos M, D.; Yera S, Y.; Lopez B, G. M.; Acosta R, N.; Vergara G, A., E-mail: dayana@cphr.edu.cu [Centro de Proteccion e Higiene de las Radiaciones, Calle 20 No. 4113 e/ 41 y 47, Playa, 10600 La Habana (Cuba)
2014-08-15
This work determines the detection efficiency for {sup 125}I and {sup 131}I in the thyroid with the identiFINDER detector using the Monte Carlo method. The suitability of the calibration method was analyzed by comparing the results of the direct Monte Carlo method with those of the corrected method; the latter was chosen because its differences from the real efficiency stayed below 10%. To simulate the detector, its geometric parameters were optimized using a tomographic study, which minimized the uncertainties of the estimates. Detector geometry-point source simulations were then performed to find the correction factors at 5 cm, 15 cm and 25 cm, together with those for the detector-phantom arrangement, for method validation and the final efficiency calculation. These showed that if the Monte Carlo simulation is performed at a greater distance than that used in the laboratory measurements the efficiency is overestimated, while at a shorter distance it is underestimated; the simulation should therefore be performed at the same distance at which the measurements will actually be made. Efficiency curves and the minimum detectable activity for the measurement of {sup 131}I and {sup 125}I were also obtained. Overall, the Monte Carlo methodology was implemented for the identiFINDER calibration in order to estimate the measured activity of iodine in the thyroid. This method is an ideal way to compensate for the lack of standard solutions and phantoms, ensuring that the capabilities of the Internal Contamination Laboratory of the Centro de Proteccion e Higiene de las Radiaciones remain calibrated for iodine measurement in the thyroid. (author)
A Monte Carlo modeling alternative for the API Gamma Ray Calibration Facility.
Galford, J E
2017-04-01
The gamma ray pit at the API Calibration Facility, located on the University of Houston campus, defines the API unit for natural gamma ray logs used throughout the petroleum logging industry. Future use of the facility is uncertain. An alternative method is proposed to preserve the gamma ray API unit definition as an industry standard by using Monte Carlo modeling to obtain accurate counting rate-to-API unit conversion factors for gross-counting and spectral gamma ray tool designs. Copyright © 2017 Elsevier Ltd. All rights reserved.
Monte Carlo modeling provides accurate calibration factors for radionuclide activity meters
International Nuclear Information System (INIS)
Zagni, F.; Cicoria, G.; Lucconi, G.; Infantino, A.; Lodi, F.; Marengo, M.
2014-01-01
Accurate determination of calibration factors for radionuclide activity meters is crucial for quantitative studies and in the optimization step of radiation protection, as these detectors are widespread in radiopharmacy and nuclear medicine facilities. In this work we developed a Monte Carlo model of a widely used activity meter, using the Geant4 simulation toolkit; more precisely, the “PENELOPE” EM physics models were employed. The model was validated by means of several certified sources, traceable to primary activity standards, and other sources locally standardized with spectrometry measurements, plus other experimental tests. Great care was taken to accurately reproduce the geometrical details of the gas chamber and the activity sources, each of which is different in shape and enclosed in a unique container. Both relative calibration factors and ionization currents obtained with simulations were compared against experimental measurements; further tests were carried out, such as comparing the relative response of the chamber for a source placed at different positions. The results showed a satisfactory level of accuracy in the energy range of interest, with discrepancies below 4% for all the tested parameters. This shows that accurate Monte Carlo modeling of this type of detector is feasible using the low-energy physics models embedded in Geant4. The resulting Monte Carlo model establishes a powerful tool for first-instance determination of new calibration factors for non-standard radionuclides or custom containers when a reference source is not available. Moreover, the model provides an experimental setup for further research and optimization with regard to materials and geometrical details of the measuring setup, such as the ionization chamber itself or the container configuration. - Highlights: • We developed a Monte Carlo model of a radionuclide activity meter using Geant4. • The model was validated using several
A calibration method for whole-body counters, using Monte Carlo simulation
International Nuclear Information System (INIS)
Ishikawa, T.; Matsumoto, M.; Uchiyama, M.
1996-01-01
A Monte Carlo simulation code was developed to estimate the counting efficiencies in whole-body counting for various body sizes. The code consists of mathematical models and parameters which are categorised into three groups: a geometrical model for phantom and detectors, a photon transport model, and a detection system model. Photon histories were simulated with these models. The counting efficiencies for five 137 Cs block phantoms of different sizes were calculated by the code and compared with those measured with a whole-body counter at NIRS (Japan). The phantoms corresponded to a newborn, a 5 month old, a 6 year old, an 11 year old and an adult. The differences between the measured and calculated values were within 6%; for the adult phantom, the difference was 0.5%. The results suggest that the Monte Carlo simulation code can be used to estimate the counting efficiencies for various body sizes. (Author)
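The kind of counting-efficiency estimate described above can be illustrated with a deliberately simplified analog photon simulation: an isotropic point source below a circular detector face, with exponential attenuation through a slab of overlying tissue. The geometry and all names here are hypothetical toy choices, not the NIRS code:

```python
import math, random

def counting_efficiency(n, det_radius, det_dist, mu, tissue_cm, seed=1):
    """Toy analog MC: fraction of source photons that reach a circular
    detector face after surviving attenuation through `tissue_cm` of tissue."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        cos_t = 2.0 * rng.random() - 1.0   # isotropic: cos(theta) uniform on [-1, 1]
        if cos_t <= 0.0:
            continue                        # heading away from the detector plane
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        r = det_dist * sin_t / cos_t        # radius where the ray crosses the plane
        if r > det_radius:
            continue
        slant_path = tissue_cm / cos_t      # slant path through the tissue slab
        if rng.random() < math.exp(-mu * slant_path):
            hits += 1
    return hits / n
```

With no attenuation this converges to the geometric solid-angle fraction (1 − d/√(d²+R²))/2; adding tissue attenuation lowers it, which is the body-size effect the calibration must capture.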
A calibration method for whole-body counters, using Monte Carlo simulation
Energy Technology Data Exchange (ETDEWEB)
Ishikawa, T.; Matsumoto, M.; Uchiyama, M. [National Inst. of Radiological Sciences, Chiba (Japan)
1996-11-01
A Monte Carlo simulation code was developed to estimate the counting efficiencies in whole-body counting for various body sizes. The code consists of mathematical models and parameters which are categorised into three groups: a geometrical model for phantom and detectors, a photon transport model, and a detection system model. Photon histories were simulated with these models. The counting efficiencies for five {sup 137}Cs block phantoms of different sizes were calculated by the code and compared with those measured with a whole-body counter at NIRS (Japan). The phantoms corresponded to a newborn, a 5 month old, a 6 year old, an 11 year old and an adult. The differences between the measured and calculated values were within 6%; for the adult phantom, the difference was 0.5%. The results suggest that the Monte Carlo simulation code can be used to estimate the counting efficiencies for various body sizes. (Author).
Efficient sampling algorithms for Monte Carlo based treatment planning
International Nuclear Information System (INIS)
DeMarco, J.J.; Solberg, T.D.; Chetty, I.; Smathers, J.B.
1998-01-01
Efficient sampling algorithms are necessary for producing a fast Monte Carlo based treatment planning code. This study evaluates several aspects of a photon-based tracking scheme and the effect of optimal sampling algorithms on the efficiency of the code. Four areas were tested: pseudo-random number generation, generalized sampling of a discrete distribution, sampling from the exponential distribution, and delta scattering as applied to photon transport through a heterogeneous simulation geometry. Generalized sampling of a discrete distribution using the cutpoint method can produce speedup gains of one order of magnitude versus conventional sequential sampling. Photon transport modifications based upon the delta scattering method were implemented and compared with a conventional boundary and collision checking algorithm. The delta scattering algorithm is faster by a factor of six versus the conventional algorithm for a boundary size of 5 mm within a heterogeneous geometry. A comparison of portable pseudo-random number algorithms and exponential sampling techniques is also discussed
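The cutpoint method mentioned above replaces a full sequential scan of the CDF with a table lookup followed by a short local scan, which is where the order-of-magnitude speedup comes from. A sketch of one common formulation (Chen-Asau style; names are hypothetical):

```python
import random

def build_cutpoints(probs, m):
    """Precompute the CDF and m cutpoints I_j = min{i : cdf[i] > j/m}."""
    cdf, total = [], 0.0
    for p in probs:
        total += p
        cdf.append(total)
    cdf[-1] = 1.0  # guard against round-off
    cut, i = [], 0
    for j in range(m):
        while cdf[i] <= j / m:
            i += 1
        cut.append(i)
    return cdf, cut

def sample(cdf, cut, m, u=None):
    """O(1) expected-time sampling: jump to the cutpoint, then scan locally."""
    if u is None:
        u = random.random()
    i = cut[int(u * m)]
    while cdf[i] < u:
        i += 1
    return i
```

With m comparable to the number of outcomes, the expected scan length per sample is close to one, versus O(n/2) for sequential inversion.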
Coincidence corrected efficiency calibration of Compton-suppressed HPGe detectors
Energy Technology Data Exchange (ETDEWEB)
Aucott, Timothy [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Brand, Alexander [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); DiPrete, David [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2015-04-20
The authors present a reliable method to calibrate the full-energy efficiency and the coincidence correction factors using a commonly-available mixed source gamma standard. This is accomplished by measuring the peak areas from both summing and non-summing decay schemes and simultaneously fitting both the full-energy efficiency, as well as the total efficiency, as functions of energy. By using known decay schemes, these functions can then be used to provide correction factors for other nuclides not included in the calibration standard.
International Nuclear Information System (INIS)
Torii, T.
1995-01-01
The ionization efficiencies of a cylindrical ionization chamber have been calculated using a Monte Carlo electron-photon transport code. The results agreed well with experimental values for radioactive gases. The present calculational procedure can be applied to the estimation of ionization efficiencies for radioactive gases, such as the short half-lived nuclides and positron emitters, which have been difficult to estimate through experimental calibrations. It can also be applied to evaluations of efficiencies in gases other than air. This paper describes the calculation method and results, and also presents the effects of shape and volume variations of the ionization chamber. ((orig.))
The EURADOS-KIT training course on Monte Carlo methods for the calibration of body counters
International Nuclear Information System (INIS)
Breustedt, B.; Broggio, D.; Gomez-Ros, J.M.; Lopez, M.A.; Leone, D.; Poelz, S.; Marzocchi, O.; Shutt, A.
2016-01-01
Monte Carlo (MC) methods are numerical simulation techniques that can be used to extend the scope of calibrations performed in in vivo monitoring laboratories. These methods allow calibrations to be carried out for a much wider range of body shapes and sizes than would be feasible using physical phantoms. Unfortunately, this powerful technique is still used mainly in research institutions. In 2013, EURADOS and the in vivo monitoring laboratory of the Karlsruhe Institute of Technology (KIT) organized a three-day training course to disseminate knowledge on the application of MC methods for in vivo monitoring. It was intended as a hands-on course centered on an exercise that guided the participants step by step through the calibration process using a simplified version of KIT's equipment; only introductory lectures on in vivo monitoring and voxel models were given. The course was based on MC codes of the MCNP family, which are widespread in the community. The strong involvement of the participants and the working atmosphere in the classroom, as well as the formal evaluation of the course, showed that the approach chosen was appropriate. Participants liked the hands-on approach and the extensive course materials on the exercise. (authors)
Houska, Tobias; Multsch, Sebastian; Kraft, Philipp; Frede, Hans-Georg; Breuer, Lutz
2014-05-01
Computer simulations are widely used to support decision making and planning in the agriculture sector. On the one hand, many plant growth models use simplified hydrological processes and structures, e.g. a small number of soil layers or simple water flow approaches. On the other hand, plant growth processes are poorly represented in many hydrological models. Fully coupled models with a high degree of process representation would therefore allow a more detailed analysis of the dynamic behaviour of the soil-plant interface. We used the Python programming language to couple two such highly process-oriented independent models and to calibrate both models simultaneously. The Catchment Modelling Framework (CMF) simulated soil hydrology based on the Richards equation and the Van-Genuchten-Mualem retention curve. CMF was coupled with the Plant growth Modelling Framework (PMF), which predicts plant growth on the basis of radiation use efficiency, degree days, water shortage and dynamic root biomass allocation. The Monte Carlo based Generalised Likelihood Uncertainty Estimation (GLUE) method was applied to parameterize the coupled model and to investigate the related uncertainty of the model predictions. Overall, 19 model parameters (4 for CMF and 15 for PMF) were analysed through 2 × 10⁶ model runs randomly drawn from a uniformly distributed parameter space. Three objective functions were used to evaluate the model performance: the coefficient of determination (R²), the bias and the Nash-Sutcliffe model efficiency (NSE). The model was applied to three sites with different management in Muencheberg (Germany) for the simulation of winter wheat (Triticum aestivum L.) in a cross-validation experiment. Field observations for model evaluation included soil water content and the dry matter of roots, storage organs, stems and leaves. The best parameter sets resulted in an NSE of 0.57 for the simulation of soil moisture across all three sites. The shape
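Two of the objective functions named above have compact closed forms; a minimal sketch (the GLUE sampling machinery itself is not reproduced, and the function names are illustrative):

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the
    observed mean, negative values are worse than the mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def bias(obs, sim):
    """Mean error of the simulation relative to the observations."""
    return sum(s - o for o, s in zip(obs, sim)) / len(obs)
```

In a GLUE setting, each of the randomly drawn parameter sets is scored with such functions and only sets exceeding a behavioural threshold are retained.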
On the efficiency calibration of a drum waste assay system
Dinescu, L; Cazan, I L; Macrin, R; Caragheorgheopol, G; Rotarescu, G
2002-01-01
The efficiency calibration of a gamma spectroscopy waste assay system, constructed by IFIN-HH, was performed. The calibration technique was based on the assumption of a uniform distribution of the source activity in the drum and a uniform sample matrix. A collimated detector (HPGe, 20% relative efficiency) placed at 30 cm from the drum was used. The detection limit for {sup 137}Cs and {sup 60}Co is approximately 45 Bq/kg for a sample of about 400 kg and a counting time of 10 min. A total measurement uncertainty of -70% to +40% was estimated.
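Detection limits of this kind are commonly derived from Currie's formula and then converted to specific activity using the counting efficiency, gamma emission probability, counting time and sample mass. A hedged sketch; the parameter names and the exact convention are illustrative, not necessarily the authors':

```python
from math import sqrt

def detection_limit_bq_per_kg(background_counts, efficiency, gamma_yield,
                              live_time_s, mass_kg):
    """Currie detection limit (~95% confidence, paired blank):
    L_D = 2.71 + 4.65*sqrt(B) counts, converted to specific activity."""
    ld_counts = 2.71 + 4.65 * sqrt(background_counts)
    return ld_counts / (efficiency * gamma_yield * live_time_s * mass_kg)
```

The limit scales roughly with the square root of the background, so shielding or collimation (as used here) directly improves the achievable Bq/kg figure.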
Improving computational efficiency of Monte Carlo simulations with variance reduction
International Nuclear Information System (INIS)
Turner, A.; Davis, A.
2013-01-01
CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)
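The weight-window splitting/roulette step, with a cap on the number of splits standing in for the dynamic 'de-optimisation' described above, might look like the following simplified sketch (not the CCFE MCNP patch; all names hypothetical):

```python
import random

def apply_weight_window(weight, w_low, w_up, max_split=10):
    """Split or roulette one particle against a weight window [w_low, w_up].

    Capping the number of splits (max_split) mimics de-optimising the window
    so a single heavy particle cannot spawn an intractably long history."""
    if weight > w_up:
        n = min(int(weight / w_up) + 1, max_split)
        return [weight / n] * n            # weight is conserved across the split
    if weight < w_low:
        w_survive = 0.5 * (w_low + w_up)
        if random.random() < weight / w_survive:
            return [w_survive]             # survives roulette with boosted weight
        return []                          # killed; expected weight is conserved
    return [weight]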
Novotny, M.A.
2010-02-01
The efficiency of dynamic Monte Carlo algorithms for off-lattice systems composed of particles is studied for the case of a single impurity particle. The theoretical efficiencies of the rejection-free method and of the Monte Carlo with Absorbing Markov Chains method are given. Simulation results are presented to confirm the theoretical efficiencies. © 2010.
Monte Carlo based geometrical model for efficiency calculation of an n-type HPGe detector
Energy Technology Data Exchange (ETDEWEB)
Padilla Cabal, Fatima, E-mail: fpadilla@instec.c [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba); Lopez-Pino, Neivy; Luis Bernal-Castillo, Jose; Martinez-Palenzuela, Yisel; Aguilar-Mena, Jimmy; D' Alessandro, Katia; Arbelo, Yuniesky; Corrales, Yasser; Diaz, Oscar [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba)
2010-12-15
A procedure to optimize the geometrical model of an n-type detector is described. Sixteen lines from seven point sources ({sup 241}Am, {sup 133}Ba, {sup 22}Na, {sup 60}Co, {sup 57}Co, {sup 137}Cs and {sup 152}Eu) placed at three different source-to-detector distances (10, 20 and 30 cm) were used to calibrate a low-background gamma spectrometer between 26 and 1408 keV. Direct Monte Carlo techniques using the MCNPX 2.6 and GEANT4 9.2 codes, and a semi-empirical procedure, were performed to obtain theoretical efficiency curves. Since discrepancies were found between the experimental and calculated data using the manufacturer's parameters for the detector, a detailed study of the crystal dimensions and the geometrical configuration was carried out. After the parameters were optimized, the mean relative deviation from the experimental data decreased from 18% to 4%.
Research of pulse formation neutron detector efficiency by Monte Carlo method
International Nuclear Information System (INIS)
Zhang Jianmin; Deng Li; Xie Zhongsheng; Yu Weidong; Zhong Zhenqian
2001-01-01
A study on detection efficiency of the neutron detector used in oil logging by Monte Carlo method is presented. Detection efficiency of the thermal and epithermal neutron detectors used in oil logging was calculated by Monte Carlo method using the MCNP code. The calculation results were satisfactory
Calibration and neutron detection efficiency in Double Chooz
Energy Technology Data Exchange (ETDEWEB)
Almazan, Helena; Buck, Christian; Haser, Julia; Lindner, Manfred [MPIK, Heidelberg (Germany); Collaboration: Double Chooz-Collaboration
2016-07-01
As an intense and pure source of low-energy electron antineutrinos, nuclear reactors are among the most powerful tools for analysing neutrino oscillations. The Double Chooz experiment aims for a precise determination of the neutrino mixing angle θ{sub 13} with the new data from the near detector. In order to reach this precision, a high and accurately known detection efficiency for the inverse beta decay (IBD) signal - the antineutrino interaction - is required. Several methods are available for detector calibration. Cosmic muons and spallation neutron captures are examples of natural sources that are used; furthermore, signals created by artificial sources contribute to the calibration with well-defined classes of events. LED light injection systems in the Inner Detector and the Inner Veto are used to measure PMT gains and time responses. Radioactive sources deployed inside the detector are used to determine the energy scale and the detector stability. The {sup 252}Cf source plays an important role in the detector calibration: in the spontaneous fissions of this isotope, neutrons are produced with high multiplicity. An analysis of the neutron interactions in the scintillator can be used to estimate the detection efficiency of the delayed coincidence signal of the IBD reaction. New results from recent calibration campaigns will be presented, providing a crucial input for the reactor antineutrino analysis with two detectors.
Boon, Niels
2017-01-01
Proposed here is a dynamic Monte-Carlo algorithm that is efficient in simulating dense systems of long flexible chain molecules. It expands on the configurational-bias Monte-Carlo method through the simultaneous generation of a large set of trial configurations. This process is directed by attempting to terminate unfinished chains with a low statistical weight, and replacing these chains with clones (enrichments) of stronger chains. The efficiency of the resulting method is explored by simula...
Extrapolated HPGe efficiency estimates based on a single calibration measurement
International Nuclear Information System (INIS)
Winn, W.G.
1994-01-01
Gamma spectroscopists often must analyze samples with geometries for which their detectors are not calibrated. The effort to experimentally recalibrate a detector for a new geometry can be quite time consuming, causing delay in reporting useful results. Such concerns have motivated development of a method for extrapolating HPGe efficiency estimates from an existing single measured efficiency. Overall, the method provides useful preliminary results for analyses that do not require exceptional accuracy, while reliably bracketing the credible range. The estimated efficiency ε for a uniform sample in a geometry with volume V is extrapolated from the measured ε₀ of the base sample of volume V₀. Assuming all samples are centered atop the detector for maximum efficiency, ε decreases monotonically as V increases about V₀, and vice versa. Extrapolation of high and low efficiency estimates ε_h and ε_L provides an average estimate ε = ½[ε_h + ε_L] ± ½[ε_h − ε_L], where the uncertainty Δε = ½[ε_h − ε_L] brackets the limits for the maximum possible error. Both ε_h and ε_L diverge from ε₀ as V deviates from V₀, causing Δε to increase accordingly. These concepts guided development of both conservative and refined estimates for ε.
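The bracketing described above, reporting the midpoint of the high and low extrapolations together with the half-range as a maximum-error bound, reduces to a two-line computation; a minimal sketch with hypothetical names:

```python
def extrapolated_efficiency(eff_high, eff_low):
    """Midpoint estimate with a maximum-error half-range bracket."""
    return 0.5 * (eff_high + eff_low), 0.5 * (eff_high - eff_low)
```

As the sample volume moves further from the calibrated base volume, the two extrapolations spread apart and the reported half-range grows with them.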
Ramgraber, M.; Schirmer, M.
2017-12-01
As computational power grows and wireless sensor networks find their way into common practice, it becomes increasingly feasible to pursue on-line numerical groundwater modelling. The reconciliation of model predictions with sensor measurements often necessitates the application of Sequential Monte Carlo (SMC) techniques, most prominently represented by the Ensemble Kalman Filter. In the pursuit of on-line predictions it seems advantageous to transcend the scope of pure data assimilation and incorporate on-line parameter calibration as well. Unfortunately, the interplay between shifting model parameters and transient states is non-trivial. Several recent publications (e.g. Chopin et al., 2013, Kantas et al., 2015) in the field of statistics discuss potential algorithms addressing this issue. However, most of these are computationally intractable for on-line application. In this study, we investigate to what extent compromises between mathematical rigour and computational restrictions can be made within the framework of on-line numerical modelling of groundwater. Preliminary studies are conducted in a synthetic setting, with the goal of transferring the conclusions drawn into application in a real-world setting. To this end, a wireless sensor network has been established in the valley aquifer around Fehraltorf, characterized by a highly dynamic groundwater system and located about 20 km to the East of Zürich, Switzerland. By providing continuous probabilistic estimates of the state and parameter distribution, a steady base for branched-off predictive scenario modelling could be established, providing water authorities with advanced tools for assessing the impact of groundwater management practices. Chopin, N., Jacob, P.E. and Papaspiliopoulos, O. (2013): SMC2: an efficient algorithm for sequential analysis of state space models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 75 (3), p. 397-426. Kantas, N., Doucet, A., Singh, S
Efficient Monte Carlo Simulations of Gas Molecules Inside Porous Materials.
Kim, Jihan; Smit, Berend
2012-07-10
Monte Carlo (MC) simulations are commonly used to obtain adsorption properties of gas molecules inside porous materials. In this work, we discuss various optimization strategies that lead to faster MC simulations with CO2 gas molecules inside host zeolite structures used as a test system. The reciprocal space contribution of the gas-gas Ewald summation and both the direct and the reciprocal gas-host potential energy interactions are stored inside energy grids to reduce the wall time in the MC simulations. Additional speedup can be obtained by selectively calling the routine that computes the gas-gas Ewald summation, which does not impact the accuracy of the zeolite's adsorption characteristics. We utilize two-level density-biased sampling technique in the grand canonical Monte Carlo (GCMC) algorithm to restrict CO2 insertion moves into low-energy regions within the zeolite materials to accelerate convergence. Finally, we make use of the graphics processing units (GPUs) hardware to conduct multiple MC simulations in parallel via judiciously mapping the GPU threads to available workload. As a result, we can obtain a CO2 adsorption isotherm curve with 14 pressure values (up to 10 atm) for a zeolite structure within a minute of total compute wall time.
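Storing the gas-host interactions on a precomputed energy grid trades memory for speed: during the MC run, the potential at an arbitrary gas-molecule position is read by interpolation rather than recomputed from all framework atoms. A generic trilinear interpolation sketch (the grid construction itself is not shown, and the names are illustrative):

```python
def trilinear(grid, spacing, x, y, z):
    """Interpolate a pretabulated 3D energy grid at a continuous point.

    grid[i][j][k] holds the tabulated value at (i*spacing, j*spacing, k*spacing);
    the point (x, y, z) must lie inside the tabulated box."""
    fx, fy, fz = x / spacing, y / spacing, z / spacing
    i, j, k = int(fx), int(fy), int(fz)
    tx, ty, tz = fx - i, fy - j, fz - k
    val = 0.0
    for di in (0, 1):                     # blend the 8 surrounding grid nodes
        for dj in (0, 1):
            for dk in (0, 1):
                w = ((tx if di else 1 - tx) *
                     (ty if dj else 1 - ty) *
                     (tz if dk else 1 - tz))
                val += w * grid[i + di][j + dj][k + dk]
    return val
```

Trilinear interpolation is exact for fields that are linear in each coordinate, so a sufficiently fine grid keeps the energy error well below thermal noise in the GCMC acceptance step.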
A modified version of the Monte Carlo computer code for calculating neutron detection efficiencies
International Nuclear Information System (INIS)
Nakayama, K.; Pessoa, E.F.; Douglas, R.A.
1980-12-01
A calculation of neutron detection efficiencies has been performed for organic scintillators using the Monte Carlo method. Effects which contribute to the detection efficiency have been incorporated in the calculations as thoroughly as possible. The reliability of the results is verified by comparison with the efficiency measurements available in the literature for neutrons in the energy range between 1 and 170 MeV with neutron detection thresholds between 0.1 and 22.3 MeV. (Author) [pt
Energy Technology Data Exchange (ETDEWEB)
Morera-Gómez, Yasser, E-mail: ymore24@gamail.com [Centro de Estudios Ambientales de Cienfuegos, AP 5. Ciudad Nuclear, CP 59350 Cienfuegos (Cuba); Departamento de Química y Edafología, Universidad de Navarra, Irunlarrea No 1, Pamplona 31009, Navarra (Spain); Cartas-Aguila, Héctor A.; Alonso-Hernández, Carlos M.; Nuñez-Duartes, Carlos [Centro de Estudios Ambientales de Cienfuegos, AP 5. Ciudad Nuclear, CP 59350 Cienfuegos (Cuba)
2016-05-11
To obtain reliable measurements of environmental radionuclide activity using HPGe (High Purity Germanium) detectors, knowledge of the absolute peak efficiency is required. This work presents a practical procedure for the efficiency calibration of a coaxial n-type and a well-type HPGe detector using experimental and Monte Carlo simulation methods. The method covers an energy range from 40 to 1460 keV and can be used for both solid and liquid environmental samples. The calibration was initially verified by measuring several reference materials provided by the IAEA (International Atomic Energy Agency). Finally, the validity of the developed procedure was confirmed through participation in two Proficiency Tests organized by the IAEA for the members of the ALMERA network (Analytical Laboratories for the Measurement of Environmental Radioactivity). The validation also showed that measurement of {sup 226}Ra should be conducted using the coaxial n-type HPGe detector in order to minimize the true coincidence summing effect. - Highlights: • An efficiency calibration for a coaxial and a well-type HPGe detector was performed. • The calibration was made using experimental and Monte Carlo simulation methods. • The procedure was verified by measuring several reference materials provided by the IAEA. • Calibrations were validated through participation in 2 ALMERA Proficiency Tests.
Seichter, Felicia; Vogt, Josef; Radermacher, Peter; Mizaikoff, Boris
2017-01-25
The calibration of analytical systems is time-consuming, and the effort for daily calibration routines should therefore be minimized while maintaining analytical accuracy and precision. The 'calibration transfer' approach proposes to combine calibration data already recorded with actual calibration measurements. However, this strategy was developed for the multivariate, linear analysis of spectroscopic data and thus cannot be applied to sensors with a single response channel and/or a non-linear relationship between signal and desired analyte concentration. To fill this gap for a non-linear calibration equation, we assume that the coefficients of the equation, collected over several calibration runs, are normally distributed. Considering that the coefficients of an actual calibration are a sample from this distribution, only a few standards are needed for a complete calibration data set. The resulting calibration transfer approach is demonstrated for a fluorescence oxygen sensor and implemented as a hierarchical Bayesian model, combined with a Lagrange multipliers technique and Markov chain Monte Carlo sampling. The latter provides realistic estimates for coefficients and predictions together with accurate error bounds by simulating known measurement errors and system fluctuations. Performance criteria for validation and optimal selection of a reduced set of calibration samples were developed and lead to a setup which maintains the analytical performance of a full calibration. Strategies for rapid identification of problems occurring in a daily calibration routine are proposed, thereby opening the possibility of correcting the problem just in time. Copyright © 2016 Elsevier B.V. All rights reserved.
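A minimal sketch of the coefficient-transfer idea: a hypothetical single-channel response y = a/(1 + b·c) stands in for the sensor's non-linear calibration equation, Gaussian priors on (a, b) summarize earlier full calibration runs, and a plain Metropolis sampler stands in for the full hierarchical Bayesian machinery (the Lagrange-multiplier step is omitted). All numbers are assumptions for illustration.

```python
import math
import random

def log_post(a, b, standards, sigma, prior):
    # Gaussian likelihood for a few fresh standards plus Gaussian priors
    # on the coefficients, summarising earlier full calibration runs.
    (ma, sa), (mb, sb) = prior
    lp = -0.5 * ((a - ma) / sa) ** 2 - 0.5 * ((b - mb) / sb) ** 2
    for c, y in standards:
        lp += -0.5 * ((y - a / (1.0 + b * c)) / sigma) ** 2
    return lp

def transfer_calibration(standards, sigma, prior, n_iter=5000, seed=1):
    # Metropolis sampling of the posterior over (a, b); returns the
    # posterior means as the transferred calibration coefficients.
    random.seed(seed)
    a, b = prior[0][0], prior[1][0]            # start at the prior means
    lp = log_post(a, b, standards, sigma, prior)
    sum_a = sum_b = 0.0
    for _ in range(n_iter):
        na, nb = a + random.gauss(0, 0.02), b + random.gauss(0, 0.01)
        nlp = log_post(na, nb, standards, sigma, prior)
        if math.log(random.random()) < nlp - lp:   # accept/reject
            a, b, lp = na, nb, nlp
        sum_a += a
        sum_b += b
    return sum_a / n_iter, sum_b / n_iter
```

With informative priors, even two or three fresh standards pin down the coefficients, which is the point of the transfer approach.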
Monte Carlo calculation of efficiencies of whole-body counter, by microcomputer
International Nuclear Information System (INIS)
Fernandes Neto, J.M.
1987-01-01
A computer program using the Monte Carlo method for the calculation of whole-body counting efficiencies for different body radiation distributions is presented. An analytical simulator (for man and for child) incorporating 99mTc, 131I and 42K is used. (M.A.C.)
Simple formalism for efficient derivatives and multi-determinant expansions in quantum Monte Carlo
Filippi, Claudia; Assaraf, R.; Moroni, S.
2016-01-01
We present a simple and general formalism to compute efficiently the derivatives of a multi-determinant Jastrow-Slater wave function, the local energy, the interatomic forces, and similar quantities needed in quantum Monte Carlo. Through a straightforward manipulation of matrices evaluated on the occupied and virtual orbitals, we obtain an efficiency equivalent to algorithmic differentiation in the computation of the interatomic forces and the optimization of the orbital parameters.
Adding computationally efficient realism to Monte Carlo turbulence simulation
Campbell, C. W.
1985-01-01
Frequently in aerospace vehicle flight simulation, random turbulence is generated using the assumption that the craft is small compared to the length scales of turbulence. The turbulence is presumed to vary only along the flight path of the vehicle but not across the vehicle span. The addition of the realism of three-dimensionality is a worthy goal, but any such attempt will not gain acceptance in the simulator community unless it is computationally efficient. A concept for adding three-dimensional realism with a minimum of computational complexity is presented. The concept involves the use of close rational approximations to irrational spectra and cross-spectra so that systems of stable, explicit difference equations can be used to generate the turbulence.
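The baseline this record starts from, turbulence varying only along the flight path and generated by a stable, explicit difference equation whose spectrum is a rational approximation, reduces in the simplest one-dimensional case to an AR(1) shaping filter. This sketch is illustrative (no cross-spectra, no three-dimensionality); the noise gain is chosen so the stationary variance is exactly sigma^2.

```python
import math
import random

def turbulence_series(n, sigma, scale_len, speed, dt, seed=0):
    # One-dimensional gust series from a stable, explicit difference
    # equation (an AR(1) filter) whose spectrum approximates a
    # first-order rational (Dryden-like) form.
    random.seed(seed)
    a = 1.0 - speed * dt / scale_len       # pole of the shaping filter
    b = sigma * math.sqrt(1.0 - a * a)     # exact-variance noise gain
    u, out = 0.0, []
    for _ in range(n):
        u = a * u + b * random.gauss(0.0, 1.0)
        out.append(u)
    return out
```

The three-dimensional extension the abstract proposes replaces this scalar recursion with systems of such difference equations whose cross-spectra approximate the spanwise correlation of the turbulence field.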
DETEF a Monte Carlo system for the calculation of gamma spectrometers efficiency
International Nuclear Information System (INIS)
Cornejo, N.; Mann, G.
1996-01-01
The Monte Carlo program DETEF calculates the efficiency of cylindrical NaI, CsI, Ge or Si detectors for photon energies up to 2 MeV and several sample geometries. The sources can be point, plane, cylindrical or rectangular. The energy spectrum appears on the screen simultaneously with the statistical simulation. The calculated and experimentally estimated efficiencies agree well within the standard deviation intervals.
Efficient mass calibration of magnetic sector mass spectrometers
International Nuclear Information System (INIS)
Roddick, J.C.
1996-01-01
Magnetic sector mass spectrometers used for automatic acquisition of precise isotopic data are usually controlled with Hall probes and software that uses polynomial equations to define and calibrate the mass-field relations required for mass focusing. This procedure requires a number of reference masses and careful tuning to define and maintain an accurate mass calibration. A simplified equation is presented and applied to several different magnetically controlled mass spectrometers. The equation accounts for nonlinearity in typical Hall probe controlled mass-field relations, reduces calibration to a linear fitting procedure, and is sufficiently accurate to permit calibration over a mass range of 2 to 200 amu with only two defining masses. Procedures developed can quickly correct for normal drift in calibrations and compensate for drift during isotopic analysis over a limited mass range such as a single element. The equation is: Field = A·Mass^(1/2) + B·Mass^p, where A, B, and p are constants. The power value p has a characteristic value for a Hall probe/controller and is insensitive to changing conditions, thus reducing calibration to a linear regression to determine optimum A and B. (author). 1 ref., 1 tab., 6 figs
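Since p is a fixed instrument characteristic, calibration against the two defining masses reduces to a 2x2 linear solve for A and B. A sketch (the numerical values of A, B and p in the test are hypothetical, not the paper's):

```python
def calibrate(p, refs):
    # Fit Field = A*Mass**0.5 + B*Mass**p through two reference masses
    # (m1, f1), (m2, f2) by solving the 2x2 linear system for A and B.
    (m1, f1), (m2, f2) = refs
    x11, x12 = m1 ** 0.5, m1 ** p
    x21, x22 = m2 ** 0.5, m2 ** p
    det = x11 * x22 - x12 * x21
    A = (f1 * x22 - f2 * x12) / det      # Cramer's rule
    B = (x11 * f2 - x21 * f1) / det
    return A, B

def field(A, B, p, mass):
    # Predicted field setting for a given mass.
    return A * mass ** 0.5 + B * mass ** p
```

Drift correction then amounts to repeating the two-point solve, which is why the method is fast enough to run during an analysis.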
Franck, D; Borissov, N; de Carlan, L; Pierrat, N; Genicot, J L; Etherington, G
2003-01-01
This paper reports on a new utility for development of computational phantoms for Monte Carlo calculations and data analysis for in vivo measurements of radionuclides deposited in tissues. The individual parameters of each worker can be acquired for an exact geometric representation of his or her anatomy, which is particularly important for low-energy gamma ray emitting sources such as thorium, uranium, plutonium and other actinides. The software discussed here enables automatic creation of an MCNP input data file based on computed tomography (CT) scanning data. The utility was first tested for low- and medium-energy actinide emitters on Livermore phantoms, the mannequins generally used for lung counting, in order to compare the results of simulation and measurement. From these results, the utility's ability to study uncertainties in in vivo calibration were investigated. Calculations and comparison with the experimental data are presented and discussed in this paper.
Lawler, J. E.; Den Hartog, E. A.
2018-03-01
The Ar I and II branching ratio calibration method is discussed with the goal of improving the technique. This method of establishing a relative radiometric calibration is important in ongoing research to improve atomic transition probabilities for quantitative spectroscopy in astrophysics and other fields. Specific suggestions are presented along with Monte Carlo simulations of wavelength dependent effects from scattering/reflecting of photons in a hollow cathode.
Velazquez, L.; Castro-Palacio, J. C.
2013-07-01
Recently, Velazquez and Curilef proposed a methodology to extend Monte Carlo algorithms based on a canonical ensemble which aims to overcome slow sampling problems associated with temperature-driven discontinuous phase transitions. We show in this work that Monte Carlo algorithms extended with this methodology also exhibit a remarkable efficiency near a critical point. Our study is performed for the particular case of a two-dimensional four-state Potts model on a square lattice with periodic boundary conditions. This analysis reveals that the extended version of Metropolis importance sampling is more efficient than the usual Swendsen-Wang and Wolff cluster algorithms. These results demonstrate the effectiveness of this methodology to improve the efficiency of MC simulations of systems that undergo any type of temperature-driven phase transition.
Shielding calculations for neutron calibration bunker using Monte Carlo code MCNP-4C
International Nuclear Information System (INIS)
Suman, H.; Kharita, M. H.; Yousef, S.
2008-02-01
In this work, the dose arising from an Am-Be source of 10^8 neutron/s strength located inside the newly constructed neutron calibration bunker in the National Radiation Metrology Laboratories was calculated using the MCNP-4C code. It was found that the shielding of the neutron calibration bunker is sufficient, as the calculated dose in inhabited areas is not expected to exceed 0.183 μSv/h, which is 10 times smaller than the regulatory dose constraints. Hence, it can be concluded that the calibration bunker can house - from the external exposure point of view - an Am-Be neutron source of 10^9 neutron/s strength. The neutron dose from the source turned out to be a few times greater than the photon dose. The sky shine was found to contribute significantly to the total dose; this contribution was estimated to be 60% of the neutron dose and 10% of the photon dose. The systematic uncertainties due to various factors were assessed and found to be between 4 and 10% due to concrete density variations, 15% due to the dose estimation method, and 4-10% due to weather variations (temperature and moisture). The calculated dose was highly sensitive to changes in the source spectrum; the uncertainty due to the use of two different neutron spectra is about 70%. (author)
International Nuclear Information System (INIS)
Greacen, E.L.; Correll, R.L.; Cunningham, R.B.; Johns, G.G.; Nicolls, K.D.
1981-01-01
Procedures common to different methods of calibration of neutron moisture meters are outlined, and laboratory and field calibration methods are compared. Gross errors which arise from faulty calibration techniques are described. The count rate can be affected by the dry bulk density of the soil, the volumetric content of constitutional hydrogen, and other chemical components of the soil and soil solution. Calibration is further complicated by the fact that the neutron meter responds more strongly to the soil properties close to the detector and source. The differences in slope of calibration curves for different soils can be as much as 40%.
A parameter for the selection of an optimum balance calibration model by Monte Carlo simulation
CSIR Research Space (South Africa)
Bidgood, Peter M
2013-09-01
Full Text Available in (1). The additional modulated terms are included to give a degree of functionality to balances whose characteristics are dependent on the sign of the applied load(s). In this paper, only the linear and quadratic terms are considered. Equation (1...]. MDOE methods are directed at obtaining an optimum balance loading scheme that can be applied to effectively determine the coefficients of a pre-defined calibration model: this model may be linear, quadratic or cubic. The most general of these models...
Efficient Markov Chain Monte Carlo Sampling for Hierarchical Hidden Markov Models
Turek, Daniel; de Valpine, Perry; Paciorek, Christopher J.
2016-01-01
Traditional Markov chain Monte Carlo (MCMC) sampling of hidden Markov models (HMMs) involves latent states underlying an imperfect observation process, and generates posterior samples for top-level parameters concurrently with nuisance latent variables. When potentially many HMMs are embedded within a hierarchical model, this can result in prohibitively long MCMC runtimes. We study combinations of existing methods, which are shown to vastly improve computational efficiency for these hierarchical models.
CSIR Research Space (South Africa)
Mwila, MK
2014-06-01
Full Text Available Conference on Ambient Systems, Networks and Technologies (ANT-2014): Approach to Sensor Node Calibration for Efficient Localisation in Wireless Sensor Networks in Realistic Scenarios, by Martin K. Mwila, Karim Djouani and Anish Kurien
Directory of Open Access Journals (Sweden)
Chapoutier Nicolas
2017-01-01
Full Text Available In the context of the rise of Monte Carlo transport calculations for all kinds of applications, AREVA recently improved its suite of engineering tools in order to produce an efficient Monte Carlo workflow. Monte Carlo codes, such as MCNP or TRIPOLI, are recognized as reference codes for dealing with a large range of radiation transport problems. However, the inherent drawbacks of these codes - laborious input file creation and long computation times - contrast with the maturity of their treatment of the physical phenomena. The goal of the recent AREVA developments was to reach an efficiency similar to that of other mature engineering sciences such as finite element analysis (e.g. structural or fluid dynamics). Among the main objectives, the creation of a graphical user interface offering CAD tools for geometry creation and other graphical features dedicated to the radiation field (source definition, tally definition) has been reached. Computation times are drastically reduced compared to a few years ago thanks to the use of massively parallel runs and, above all, the implementation of hybrid variance reduction techniques. Engineering teams are now able to deliver much more prompt support to any nuclear project dealing with reactors or fuel cycle facilities, from the conceptual phase to decommissioning.
Chapoutier, Nicolas; Mollier, François; Nolin, Guillaume; Culioli, Matthieu; Mace, Jean-Reynald
2017-09-01
In the context of the rise of Monte Carlo transport calculations for all kinds of applications, AREVA recently improved its suite of engineering tools in order to produce an efficient Monte Carlo workflow. Monte Carlo codes, such as MCNP or TRIPOLI, are recognized as reference codes for dealing with a large range of radiation transport problems. However, the inherent drawbacks of these codes - laborious input file creation and long computation times - contrast with the maturity of their treatment of the physical phenomena. The goal of the recent AREVA developments was to reach an efficiency similar to that of other mature engineering sciences such as finite element analysis (e.g. structural or fluid dynamics). Among the main objectives, the creation of a graphical user interface offering CAD tools for geometry creation and other graphical features dedicated to the radiation field (source definition, tally definition) has been reached. Computation times are drastically reduced compared to a few years ago thanks to the use of massively parallel runs and, above all, the implementation of hybrid variance reduction techniques. Engineering teams are now able to deliver much more prompt support to any nuclear project dealing with reactors or fuel cycle facilities, from the conceptual phase to decommissioning.
An efficient feedback calibration algorithm for direct imaging radio telescopes
Beardsley, Adam P.; Thyagarajan, Nithyanandan; Bowman, Judd D.; Morales, Miguel F.
2017-10-01
We present the E-field Parallel Imaging Calibration (EPICal) algorithm, which addresses the need for a fast calibration method for direct imaging radio astronomy correlators. Direct imaging involves a spatial fast Fourier transform of antenna signals, alleviating an O(Na^2) computational bottleneck typical in radio correlators and yielding a more gentle O(Ng log2 Ng) scaling, where Na is the number of antennas in the array and Ng is the number of grid points in the imaging analysis. This can save orders of magnitude in computation cost for next generation arrays consisting of hundreds or thousands of antennas. However, because antenna signals are mixed in the imaging correlator without creating visibilities, gain correction must be applied prior to imaging, rather than on visibilities post-correlation. We develop the EPICal algorithm to form gain solutions quickly and without ever forming visibilities. This method scales as the number of antennas, and produces results comparable to those from visibilities. We use simulations to demonstrate the EPICal technique and study the noise properties of our gain solutions, showing they are similar to visibility-based solutions in realistic situations. By applying EPICal to 2 s of Long Wavelength Array data, we achieve a 65 per cent dynamic range improvement compared to uncalibrated images, showing this algorithm is a promising solution for next generation instruments.
Farr, W. M.; Mandel, I.; Stevens, D.
2015-06-01
Selection among alternative theoretical models given an observed dataset is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, yet cannot efficiently explore both parameter spaces at once. Thus, a naive jump between parameter spaces is unlikely to be accepted in the Markov chain Monte Carlo (MCMC) algorithm, and convergence is correspondingly slow. Here, we demonstrate an interpolation technique that uses samples from single-model MCMCs to propose intermodel jumps from an approximation to the single-model posterior of the target parameter space. The interpolation technique, based on a kD-tree data structure, is adaptive and efficient in modest dimensionality. We show that our technique leads to improved convergence over naive jumps in an RJMCMC, and compare it to other proposals in the literature to improve the convergence of RJMCMCs. We also demonstrate the use of the same interpolation technique as a way to construct efficient "global" proposal distributions for single-model MCMCs without prior knowledge of the structure of the posterior distribution, and discuss improvements that permit the method to be used in higher dimensional spaces efficiently.
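The proposal construction can be illustrated with a brute-force nearest-neighbour version; the paper's kD-tree serves to make the neighbour search fast, not to change the idea. This sketch draws a jump proposal from stored single-model posterior samples; a full RJMCMC acceptance ratio would also need the corresponding proposal density, which is omitted here.

```python
import random

def propose_jump(samples, k=5, seed=None):
    # Draw a jump proposal from a rough approximation to the target
    # model's posterior: pick a stored posterior sample, find its k
    # nearest neighbours (brute force here; a kD-tree makes this fast),
    # and draw uniformly inside their bounding box.
    rng = random.Random(seed)
    centre = rng.choice(samples)
    near = sorted(samples,
                  key=lambda s: sum((a - b) ** 2 for a, b in zip(s, centre)))[:k]
    lo = [min(s[d] for s in near) for d in range(len(centre))]
    hi = [max(s[d] for s in near) for d in range(len(centre))]
    return [rng.uniform(l, h) for l, h in zip(lo, hi)]
```

Because the boxes shrink where the samples are dense, proposals concentrate in regions of high posterior mass, which is what makes intermodel jumps acceptably probable.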
Efficient Accommodation of Local Minima in Watershed Model Calibration
National Research Council Canada - National Science Library
Skahill, Brian E; Doherty, John
2006-01-01
.... Nevertheless, the method also has advantages, chief among these being its model-run efficiency, and its ability to report useful information on parameter sensitivities and covariances as a by-product of its use...
Calibration of school spectrometer for measuring light bulb efficiency
Valenčič, Matej
2015-01-01
This diploma thesis presents the efficiency of various light sources which are used for lighting on a daily basis (incandescent light bulb, halogen lamp, compact fluorescent lamp, light-emitting diode). The theoretical part of the thesis presents the instruments used for measurements, the physical laws used in order to calculate the efficiency of each light source and the physical background needed for the understanding of further calculations. The experimental part of the thesis presents the ...
On computing efficiency of Monte-Carlo methods in solving Dirichlet's problem
International Nuclear Information System (INIS)
Androsenko, P.A.; Lomtev, V.L.
1990-01-01
Algorithms of the Monte Carlo method based on boundary random walks and the application of Fredholm series, intended for the solution of the stationary and non-stationary Dirichlet boundary value problem for Laplace's equation, are presented. The code systems BRANDB, BRANDBT and BRANDF, which realize the above algorithms and allow the calculation of the values of the solution and its derivatives for three-dimensional geometrical systems, are described. The results of computing experiments on solving a number of problems in systems with convex and non-convex geometries are presented, and conclusions are drawn on the computing efficiency of the methods involved. 13 refs.; 4 figs.; 2 tabs
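For the stationary Dirichlet problem for Laplace's equation, a boundary random walk of the kind this record describes can be sketched with the classic walk-on-spheres estimator. The sketch below is two-dimensional on the unit disk for brevity (the BRAND codes handle three-dimensional systems): each walk repeatedly jumps to a random point on the largest circle contained in the domain, and the boundary value at the exit point is averaged.

```python
import math
import random

def walk_on_spheres(x, y, boundary_fn, eps=1e-3, n_walks=4000, seed=7):
    # Estimate the solution of Laplace's equation on the unit disk at
    # (x, y) with Dirichlet data boundary_fn, by averaging the boundary
    # value reached by walk-on-spheres random walks.
    random.seed(seed)
    total = 0.0
    for _ in range(n_walks):
        px, py = x, y
        while True:
            r = 1.0 - math.hypot(px, py)   # distance to the unit circle
            if r < eps:
                break
            t = random.uniform(0.0, 2.0 * math.pi)
            px += r * math.cos(t)          # jump to the sphere's surface
            py += r * math.sin(t)
        norm = math.hypot(px, py)          # project onto the boundary
        total += boundary_fn(px / norm, py / norm)
    return total / n_walks
```

The appeal, as for the BRAND systems, is that no volume mesh is needed: the solution is sampled pointwise directly from boundary data.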
Dawn, Sandipan; Bakshi, A K; Sathian, Deepa; Selvam, T Palani
2017-06-15
Neutron scatter contributions as a function of distance along the transverse axis of a 241Am-Be source were estimated by three different methods: shadow cone, semi-empirical and Monte Carlo. The Monte Carlo-based FLUKA code was used to simulate the existing room used for the calibration of the CR-39 detector as well as the LB6411 dose rate meter for selected distances from the 241Am-Be source. The modified 241Am-Be spectra at different irradiation geometries, such as at different source-detector distances, behind the shadow cone, and at the surface of the water phantom, were also evaluated using Monte Carlo calculations. Neutron scatter contributions estimated using the three different methods compare reasonably well. It is proposed to use the scattering correction factors estimated through Monte Carlo simulation and the other methods for the calibration of the CR-39 detector and the dose rate meter at 0.75 and 1 m distance from the source. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
International Nuclear Information System (INIS)
Vanin, V.R.; Castro, R.M.; Helene, O.A.M.; Pascholati, P.R.; Koskinas, M.F.; Dias, M.S.
1999-01-01
The aim of this work is to determine the steps needed to establish the correlations between decay data. This problem is linked to the inverse problem, that is, detector efficiency calibration with multigamma-ray sources taking into account the correlations between the decay data. The calibration procedure must give sound results even in unfavorable geometry, for instance when the source is placed near the detector. The calibration method is described first, followed by the procedure for determining the decay data and the variance matrix.
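Taking decay-data correlations into account means weighting the calibration fit by a full covariance matrix rather than by independent uncertainties, i.e. generalized least squares. The sketch below uses a deliberately simple linear-in-parameters efficiency model and invented covariance numbers purely for illustration; real efficiency curves are non-linear in energy.

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gls_fit(X, y, V):
    # Generalised least squares: b = (X^T V^-1 X)^-1 X^T V^-1 y,
    # propagating the full covariance matrix V of the measurements.
    n, p = len(X), len(X[0])
    cols = [solve(V, [1.0 if r == c else 0.0 for r in range(n)]) for c in range(n)]
    Vinv = [[cols[c][r] for c in range(n)] for r in range(n)]
    XtVi = [[sum(X[r][i] * Vinv[r][c] for r in range(n)) for c in range(n)]
            for i in range(p)]
    N = [[sum(XtVi[i][r] * X[r][j] for r in range(n)) for j in range(p)]
         for i in range(p)]
    rhs = [sum(XtVi[i][r] * y[r] for r in range(n)) for i in range(p)]
    return solve(N, rhs)
```

The off-diagonal terms of V are where the decay-data correlations enter; ignoring them biases the fitted uncertainties even when the central values barely move.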
Simple formalism for efficient derivatives and multi-determinant expansions in quantum Monte Carlo
Energy Technology Data Exchange (ETDEWEB)
Filippi, Claudia, E-mail: c.filippi@utwente.nl [MESA+ Institute for Nanotechnology, University of Twente, P.O. Box 217, 7500 AE Enschede (Netherlands); Assaraf, Roland, E-mail: assaraf@lct.jussieu.fr [Sorbonne Universités, UPMC Univ Paris 06, CNRS, Laboratoire de Chimie Théorique CC 137-4, place Jussieu F-75252 Paris Cedex 05 (France); Moroni, Saverio, E-mail: moroni@democritos.it [CNR-IOM DEMOCRITOS, Istituto Officina dei Materiali, and SISSA Scuola Internazionale Superiore di Studi Avanzati, Via Bonomea 265, I-34136 Trieste (Italy)
2016-05-21
We present a simple and general formalism to compute efficiently the derivatives of a multi-determinant Jastrow-Slater wave function, the local energy, the interatomic forces, and similar quantities needed in quantum Monte Carlo. Through a straightforward manipulation of matrices evaluated on the occupied and virtual orbitals, we obtain an efficiency equivalent to algorithmic differentiation in the computation of the interatomic forces and the optimization of the orbital parameters. Furthermore, for a large multi-determinant expansion, the significant computational gain afforded by a recently introduced table method is here extended to the local value of any one-body operator and to its derivatives, in both all-electron and pseudopotential calculations.
Cai, Han-Jie; Zhang, Zhi-Lei; Fu, Fen; Li, Jian-Yang; Zhang, Xun-Chao; Zhang, Ya-Ling; Yan, Xue-Song; Lin, Ping; Xv, Jian-Ya; Yang, Lei
2018-02-01
The dense granular flow spallation target is a new target concept chosen for the Accelerator-Driven Subcritical (ADS) project in China. For the R&D of this target concept, a dedicated Monte Carlo (MC) program named GMT was developed to perform simulation studies of the beam-target interaction. Owing to the complexities of the target geometry, the MC simulation of particle tracks is computationally expensive. Thus, improvement of computational efficiency is essential for detailed MC simulation studies of the dense granular target. Here we present the special design of the GMT program and its high efficiency performance. In addition, the speedup potential of the GPU-accelerated spallation models is discussed.
Simple formalism for efficient derivatives and multi-determinant expansions in quantum Monte Carlo.
Filippi, Claudia; Assaraf, Roland; Moroni, Saverio
2016-05-21
We present a simple and general formalism to compute efficiently the derivatives of a multi-determinant Jastrow-Slater wave function, the local energy, the interatomic forces, and similar quantities needed in quantum Monte Carlo. Through a straightforward manipulation of matrices evaluated on the occupied and virtual orbitals, we obtain an efficiency equivalent to algorithmic differentiation in the computation of the interatomic forces and the optimization of the orbital parameters. Furthermore, for a large multi-determinant expansion, the significant computational gain afforded by a recently introduced table method is here extended to the local value of any one-body operator and to its derivatives, in both all-electron and pseudopotential calculations.
Efficiency Studies with Gamma Ray Portion of Specialized Reactor-Shield Monte Carlo Program 18-0
Energy Technology Data Exchange (ETDEWEB)
Capo, M. A.
1961-08-01
Application studies were made with the Specialized Reactor-Shield Monte Carlo Program 18-0 to determine the efficiency and feasibility of calculating the energy deposition due to primary core gamma rays throughout the XNJ140E-1 reactor-shield assembly. Monte Carlo results are presented in tabular form for all geometrical regions used to describe the shield. Described here is a means of obtaining adequate and valid heating rates in about 47 hours on the IBM-704 digital computer. A comparison of Monte Carlo and point kernel data is included.
Yasin, Zafar; Negoita, Florin; Tabbassum, Sana; Borcea, Ruxandra; Kisyov, Stanimir
2017-12-01
Plastic scintillators are used in different areas of science and technology. One use of these scintillator detectors is as beam loss monitors (BLM) for a new generation of high-intensity heavy-ion superconducting linear accelerators. Operated in pulse counting mode with rather high thresholds and shielded by a few centimeters of lead in order to cope with radiofrequency noise and the X-ray background emitted by accelerator cavities, they preserve high efficiency for the high-energy gamma rays and neutrons produced in the nuclear reactions of lost beam particles with accelerator components. Efficiency calculation and calibration of detectors is very important before their practical usage. In the present work, the efficiency of plastic scintillator detectors is simulated using FLUKA for different gamma and neutron sources such as 60Co, 137Cs and 238Pu-Be. The sources are placed at different positions around the detector. Calculated values are compared with the measured values, and a reasonable agreement is observed.
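A useful sanity check on any simulated efficiency is the purely geometric (solid-angle) acceptance, which has a closed form for a point source on the axis of a circular detector face. The sketch below compares a Monte Carlo estimate against that closed form; the source distance and detector radius are hypothetical, and all interaction physics that FLUKA would model (intrinsic efficiency, shielding, scatter) is deliberately ignored.

```python
import math
import random

def geometric_efficiency_mc(d, r, n=200000, seed=5):
    # Monte Carlo estimate of the solid-angle efficiency of a circular
    # detector face of radius r at distance d from an isotropic point
    # source on its axis: sample cos(theta) uniformly and count rays
    # whose polar angle is small enough to hit the face.
    random.seed(seed)
    cos_min = d / math.sqrt(d * d + r * r)
    hits = sum(1 for _ in range(n) if random.uniform(-1.0, 1.0) >= cos_min)
    return hits / n

def geometric_efficiency_exact(d, r):
    # Closed-form solid angle fraction Omega / (4*pi).
    return 0.5 * (1.0 - d / math.sqrt(d * d + r * r))
```

Agreement between the sampled and analytic values verifies the geometry setup before any transport physics is added.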
Efficiency calibration of a HPGe detector for [{sup 18}F] FDG activity measurements
Energy Technology Data Exchange (ETDEWEB)
Fragoso, Maria da Conceicao de Farias; Lacerda, Isabelle Viviane Batista de; Albuquerque, Antonio Morais de Sa, E-mail: mariacc05@yahoo.com.br, E-mail: isabelle.lacerda@ufpe.br, E-mail: moraisalbuquerque@hotmaiI.com [Universidade Federal de Pernambuco (DEN/UFPE), Recife, PE (Brazil). Departamento de Energia Nuclear; Oliveira, Mercia Liane de; Hazin, Clovis Abrahao; Lima, Fernando Roberto de Andrade, E-mail: mercial@cnen.gov.br, E-mail: chazin@cnen.gov.br, E-mail: falima@cnen.gov.br [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)
2013-11-01
The radionuclide {sup 18}F, in the form of fluorodeoxyglucose (FDG), is the most used radiopharmaceutical for Positron Emission Tomography (PET). Due to the increasing demand for [{sup 18}F]FDG, it is important to ensure high-quality activity measurements in nuclear medicine practice. Therefore, standardized reference sources are necessary to calibrate {sup 18}F measuring systems. Usually, the activity measurements are performed in re-entrant ionization chambers, also known as radionuclide calibrators. Among the existing alternatives for the standardization of radioactive sources, the method known as gamma spectrometry is widely used for short-lived radionuclides, since it is essential to minimize source preparation time. The purpose of this work was to perform the standardization of an [{sup 18}F]FDG solution by gamma spectrometry. In addition, the reference sources calibrated by this method can be used to calibrate and test the radionuclide calibrators of the Divisao de Producao de Radiofarmacos (DIPRA) of the Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE). Standard sources of {sup 152}Eu, {sup 137}Cs and {sup 68}Ge were used for the efficiency calibration of the spectrometer system. As a result, the efficiency curve as a function of energy was determined over a wide energy range, from 122 to 1408 keV. Reference sources obtained by this method can be used in [{sup 18}F]FDG activity measurement comparison programs for PET services located in the Brazilian Northeast region. (author)
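A common way to represent such an efficiency curve is a low-order polynomial in ln(E) fitted to the calibration points. The sketch below passes a quadratic exactly through three points and interpolates at the 511 keV annihilation energy relevant to {sup 18}F; the data points are hypothetical illustrations, not the CRCN-NE measurements.

```python
import math

def fit_efficiency_curve(points):
    # Fit ln(eff) = a0 + a1*ln(E) + a2*ln(E)^2 exactly through three
    # calibration points (E in keV, eff as absolute efficiency) by
    # Gaussian elimination on the 3x3 system.
    M = [[1.0, math.log(E), math.log(E) ** 2, math.log(eff)] for E, eff in points]
    for c in range(3):
        piv = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, 3):
            f = M[r][c] / M[c][c]
            for k in range(c, 4):
                M[r][k] -= f * M[c][k]
    a2 = M[2][3] / M[2][2]
    a1 = (M[1][3] - M[1][2] * a2) / M[1][1]
    a0 = (M[0][3] - M[0][1] * a1 - M[0][2] * a2) / M[0][0]
    return a0, a1, a2

def efficiency(coeffs, E):
    # Evaluate the fitted curve at energy E (keV).
    a0, a1, a2 = coeffs
    x = math.log(E)
    return math.exp(a0 + a1 * x + a2 * x * x)
```

With more than three calibration lines one would fit in the least-squares sense instead of interpolating exactly.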
Calibration of RPC-based muon detector at BESIII
International Nuclear Information System (INIS)
Xie Yuguang; Bian Jianming; Cao Guofu; Cao Xuexiang; He Miao; Huang Bin; Liu Qiuguang; Ma Xiang; Sun Yongzhao; Wang Jike; Wang Liangliang; Wu Linghui; Yan Liang; Li Weidong; Zhang Jiawen; Deng Ziyan; He Kanglin; Ji Xiaobin; Li Fei; Li Haibo; Liu Chunxiu; Liu Huaimin; Liu Yingjie; Ma Qinmei; Mao Zepu; Mo Xiaohu; Ping Ronggang; Qiu Jinfa; Sun Shengsen; Wen Shuopin; Yuan Changzheng; Yuan Ye; Zhang Bingyun; Zhang Changchun; Zhang Jianyong; Zhang Yao; Zhu Kejun; Zhu Yongsheng; Liang Yutie; You Zheng yun; Mao Yajun; Chen Shenjian; Fu Chengdong; Gao Yuanning; Hua Chunfei; Qin Yahong; Huang Xingtao; Zhang Xueyao; Zou Jiaheng; Liu Suo; Pan Minghua; Pang Caiying; Zhu Zhili; Xu Min
2010-01-01
A calibration algorithm for the RPC-based muon detector at BESIII has been developed, and the calibration method, calibration errors and algorithm performance are studied. Preliminary results for efficiency and noise at the layer, module and strip levels have been obtained with cosmic ray data. The calibration constants are available for simulation and reconstruction tuning. The results from Monte Carlo and data are also compared to check the validity and reliability of the algorithm. (authors)
International Nuclear Information System (INIS)
Munoz-Cobo, J.L.; Pena, J.; Chiva, S.; Mendez, S.
2007-01-01
This paper presents a study of the correction factors for the interfacial area concentration and the bubble velocity in two-phase flow measurements using the double sensor conductivity probe. Monte Carlo calculations of these correction factors have been performed for different values of the relative distance (ΔS/D) between the tips of the conductivity probe and different values of the relative bubble velocity fluctuation parameter. This paper also presents the Monte Carlo calculation of the expected value of the calibration factors for bubbly flow assuming a log-normal distribution of the bubble sizes. We have computed the variation of the expected values of the calibration factors with the relative distance (ΔS/D) between the tips and the velocity fluctuation parameter. Finally, we have performed a sensitivity study of the variation of the average values of the calibration factors for bubbly flow with the geometric standard deviation of the log-normal distribution of bubble sizes. The results of these calculations show that the total interfacial area correction factor is very close to 2 and depends only weakly on the velocity fluctuation and the relative distance between tips. For the velocity calibration factor, the Monte Carlo results show that for moderate values of the relative bubble velocity fluctuation parameter (H_max ≤ 0.3) and values of the relative distance between tips that are not too small (ΔS/D ≥ 0.2), the velocity correction factor for the double sensor conductivity probe is close to unity, ranging from 0.96 to 1.
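An expectation over a log-normal bubble-size distribution of the kind described above can be estimated by straightforward Monte Carlo sampling. The sketch below is illustrative only: the `velocity_factor` model, the median diameter, and the geometric standard deviation are invented placeholders, not the paper's probe model.

```python
import numpy as np

rng = np.random.default_rng(0)

def velocity_factor(d, ds=1.0):
    """Placeholder correction-factor model: a hypothetical dependence on
    bubble diameter d relative to the tip separation ds (not the paper's model)."""
    return 1.0 / (1.0 + 0.05 * ds / d)

# Log-normal bubble-diameter distribution (median 3 mm, geometric std 1.5)
median, sigma_g = 3.0, 1.5
d = rng.lognormal(mean=np.log(median), sigma=np.log(sigma_g), size=100_000)

# Monte Carlo estimate of the expected calibration factor over bubble sizes
expected_factor = velocity_factor(d).mean()
print(f"E[f] = {expected_factor:.4f}")
```

Sensitivity to the geometric standard deviation, as studied in the paper, amounts to repeating the estimate while varying `sigma_g`.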
Modeling of detection efficiency of HPGe semiconductor detector by Monte Carlo method
International Nuclear Information System (INIS)
Rapant, T.
2003-01-01
Over the past ten years, following the gradual adoption of new legislative standards for protection against ionizing radiation, gamma-spectrometry has become firmly established among standard radioanalytical methods. In the nuclear power plant setting, gamma-spectrometry has proven to be the most effective method for determining the activities of individual radionuclides, and spectrometric laboratories have gradually been equipped with the most modern instrumentation. Nevertheless, the costly and time-intensive experimental calibration methods in use partially limited the possibilities of gamma-spectrometry, particularly during the substantial renovation and modernization works of the late 1990s. For this reason, the spectrometric laboratory of the Bohunice Nuclear Power Plants, in cooperation with the Department of Nuclear Physics of FMPI in Bratislava, developed and tested several calibration procedures based on computer simulations using the GEANT program. This thesis describes a calibration method for the measurement of bulk samples based on self-absorption factors. The accuracy of the proposed method is at least comparable to that of the other methods in use, while surpassing them significantly in cost, time, and simplicity. The method has been used successfully for almost two years in the spectrometric laboratory of the Radiation Protection Division at the Bohunice nuclear power plant, as confirmed by the results of international comparison measurements and by repeated validation measurements performed by the Slovak Institute of Metrology in Bratislava.
SU-E-J-60: Efficient Monte Carlo Dose Calculation On CPU-GPU Heterogeneous Systems
International Nuclear Information System (INIS)
Xiao, K; Chen, D. Z; Hu, X. S; Zhou, B
2014-01-01
Purpose: It is well known that the performance of GPU-based Monte Carlo dose calculation implementations is bounded by memory bandwidth. One major cause of this bottleneck is the random memory writing pattern in dose deposition, which leads to several memory efficiency issues on the GPU, such as un-coalesced writes and atomic operations. We propose a new method to alleviate these issues on CPU-GPU heterogeneous systems, achieving an overall performance improvement for Monte Carlo dose calculation. Methods: Dose deposition accumulates dose into the voxels of a dose volume along the trajectories of radiation rays. Our idea is to partition this procedure into the following three steps, each fine-tuned for the CPU or the GPU: (1) each GPU thread writes dose results with location information to a buffer in GPU memory, which achieves fully-coalesced and atomic-free memory transactions; (2) the dose results in the buffer are transferred to CPU memory; (3) the dose volume is constructed from the dose buffer on the CPU. We organize the processing of all radiation rays into streams. Since the steps within a stream use different hardware resources (i.e., GPU, DMA, CPU), we can overlap the execution of these steps for different streams by pipelining. Results: We evaluated our method using a Monte Carlo Convolution Superposition (MCCS) program and tested our implementation on various clinical cases on a heterogeneous system containing an Intel i7 quad-core CPU and an NVIDIA TITAN GPU. Compared with a straightforward MCCS implementation on the same system (using both CPU and GPU for radiation ray tracing), our method gained a 2-5X speedup without losing dose calculation accuracy. Conclusion: The results show that our new method improves the effective memory bandwidth and overall performance of MCCS on CPU-GPU systems. Our proposed method can also be applied to accelerate other Monte Carlo dose calculation approaches. This research was supported in part by NSF under Grants CCF
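The three-step deposition scheme can be illustrated in plain NumPy. This is a minimal CPU-only sketch of the idea (record (voxel, dose) pairs into a buffer, then accumulate in one pass), not the authors' CUDA implementation; the voxel counts and dose values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox, n_deposits = 1000, 100_000

# Step 1 (GPU in the paper): each "thread" appends (voxel index, dose) to a
# buffer instead of scattering into the dose volume -> coalesced, atomic-free.
vox = rng.integers(0, n_vox, size=n_deposits)
dose = rng.random(n_deposits)
buffer = np.stack([vox, dose])  # illustrative record buffer

# Step 2: buffer transferred to host memory (a no-op in this NumPy sketch).

# Step 3 (CPU): build the dose volume from the buffer in one accumulation pass.
# np.add.at performs unbuffered accumulation, so repeated voxel indices add up.
volume = np.zeros(n_vox)
np.add.at(volume, buffer[0].astype(int), buffer[1])

assert np.isclose(volume.sum(), dose.sum())  # total dose is conserved
```

The pipelining across streams described in the abstract would overlap these three steps for successive buffers; that part is omitted here.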
International Nuclear Information System (INIS)
Scemama, Anthony; Caffarel, Michel; Oseret, Emmanuel; Jalby, William
2013-01-01
Various strategies to efficiently implement quantum Monte Carlo (QMC) simulations for large chemical systems are presented. These include: (i) the introduction of an efficient algorithm to calculate the computationally expensive Slater matrices; this novel scheme is based on the use of the highly localized character of atomic Gaussian basis functions (not the molecular orbitals, as is usually done), (ii) the possibility of keeping the memory footprint minimal, (iii) the important enhancement of single-core performance when efficient optimization tools are used, and (iv) the definition of a universal, dynamic, fault-tolerant, and load-balanced framework adapted to all kinds of computational platforms (massively parallel machines, clusters, or distributed grids). These strategies have been implemented in the QMC-Chem code developed at Toulouse and illustrated with numerical applications on small peptides of increasing sizes (158, 434, 1056, and 1731 electrons). Using 10k-80k computing cores of the Curie machine (GENCI-TGCC-CEA, France), QMC-Chem has been shown to be capable of running at the petascale level, thus demonstrating that for this machine a large part of the peak performance can be achieved. Implementation of large-scale QMC simulations for future exascale platforms with a comparable level of efficiency is expected to be feasible. (authors)
Study of RPC Barrel maximum efficiency in 2012 and 2015 calibration collision runs
Cassar, Samwel
2015-01-01
The maximum efficiency of each of the 1020 Resistive Plate Chamber (RPC) rolls in the barrel region of the CMS muon detector is calculated from the best sigmoid fit of efficiency against high voltage (HV). Data from the HV scans, collected during calibration runs in 2012 and 2015, were compared and the rolls exhibiting a change in maximum efficiency were identified. The chi-square value of the sigmoid fit for each roll was considered in determining the significance of the maximum efficiency for the respective roll.
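The sigmoid fit of efficiency against HV can be sketched with SciPy's `curve_fit`. The functional form, HV points, and efficiency values below are illustrative assumptions, not CMS data; the plateau parameter of the sigmoid plays the role of the maximum efficiency.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(hv, eff_max, hv50, slope):
    """Sigmoid efficiency curve; eff_max is the plateau (maximum) efficiency,
    hv50 the half-efficiency point, slope the steepness of the turn-on."""
    return eff_max / (1.0 + np.exp(-slope * (hv - hv50)))

# Synthetic HV-scan points for one roll (illustrative only)
hv = np.array([8.8, 9.0, 9.2, 9.4, 9.6, 9.8])        # kV
eff = np.array([0.05, 0.30, 0.75, 0.92, 0.95, 0.96])

popt, pcov = curve_fit(sigmoid, hv, eff, p0=[1.0, 9.2, 10.0])
eff_max, hv50, slope = popt
print(f"maximum efficiency ~ {eff_max:.3f} at HV50 ~ {hv50:.2f} kV")
```

Comparing `eff_max` between two scan campaigns, as done in the study, then reduces to fitting each roll's data from both years and differencing the plateau values; the fit covariance (`pcov`) or the chi-square of the residuals indicates how significant such a change is.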
Akhmatskaya, Elena; Fernández-Pendás, Mario; Radivojević, Tijana; Sanz-Serna, J M
2017-10-24
Modified Hamiltonian Monte Carlo (MHMC) methods, i.e., importance sampling methods that use modified Hamiltonians within a Hybrid Monte Carlo (HMC) framework, often outperform standard techniques such as molecular dynamics (MD) and HMC in sampling efficiency. The performance of MHMC may be enhanced further through a rational choice of the simulation parameters and by replacing the standard Verlet integrator with more sophisticated splitting algorithms. Unfortunately, it is not easy to identify appropriate values for the parameters that appear in those algorithms. We propose a technique, which we call MAIA (Modified Adaptive Integration Approach), that, for a given simulation system and a given time step, automatically selects the optimal integrator within a useful family of two-stage splitting formulas. Extended MAIA (or e-MAIA) is an enhanced version of MAIA that additionally supplies a value of the method-specific parameter which, for the problem under consideration, keeps the momentum acceptance rate at a user-desired level. The MAIA and e-MAIA algorithms have been implemented, with no computational overhead during simulations, in MultiHMC-GROMACS, a modified version of the popular software package GROMACS. Tests performed on well-known molecular models demonstrate the superiority of the suggested approaches over a range of integrators (both standard and recently developed), as well as their capacity to improve the sampling efficiency of GSHMC, a notable method for molecular simulation in the MHMC family. GSHMC combined with e-MAIA shows remarkably good performance compared to MD and HMC coupled with the appropriate adaptive integrators.
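A two-stage splitting integrator of the kind referred to above can be sketched as a kick-drift-kick-drift-kick composition with one free parameter b (b = 1/4 recovers a concatenation of Verlet half-steps). The sketch below applies it to a harmonic oscillator and checks approximate energy conservation; the parameter value and test system are assumptions for illustration, and none of this reproduces the MAIA selection logic or the GROMACS implementation.

```python
def two_stage_step(q, p, h, b, force):
    """One step of the two-stage splitting B(b h) A(h/2) B((1-2b) h) A(h/2) B(b h),
    where A is a position drift and B a momentum kick (unit mass assumed)."""
    p += b * h * force(q)
    q += 0.5 * h * p
    p += (1.0 - 2.0 * b) * h * force(q)
    q += 0.5 * h * p
    p += b * h * force(q)
    return q, p

force = lambda q: -q                  # harmonic oscillator, H = (p^2 + q^2)/2
q, p, h, b = 1.0, 0.0, 0.1, 0.211781  # b value assumed near a minimum-error choice
e0 = 0.5 * (p * p + q * q)
for _ in range(10_000):
    q, p = two_stage_step(q, p, h, b, force)
energy_drift = abs(0.5 * (p * p + q * q) - e0)
print(f"energy error after 10^4 steps: {energy_drift:.2e}")
```

Because the composition is symplectic, the energy error stays bounded rather than drifting; choosing b (and the time step) to control that error for a given system is precisely the tuning problem that MAIA automates.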
Study on calibration of neutron efficiency and relative photo-yield of plastic scintillator
International Nuclear Information System (INIS)
Peng Taiping; Zhang Chuanfei; Li Rurong; Zhang Jianhua; Luo Xiaobing; Xia Yijun; Yang Zhihua
2002-01-01
A method for calibrating the neutron efficiency and the relative photo yield of plastic scintillator is studied. The T(p,n) and D(d,n) reactions are used as neutron sources. The neutron efficiencies and relative photo yields of plastic scintillator 1421 (40 mm in diameter and 5 mm in thickness) were determined in the neutron energy range of 0.655-5 MeV.
Lodwick, Camille J; Spitz, Henry B
2008-12-01
An anthropometric surrogate (phantom) of the human leg was defined in the constructs of the Monte Carlo N-Particle (MCNP) code to predict the response when used in calibrating K x-ray fluorescence (K-XRF) spectrometry measurements of stable lead in bone. The predicted response compared favorably with measurements using the anthropometric phantom containing a tibia with increasing stable lead content. These benchmark measurements confirmed the validity of a modified MCNP code to accurately simulate K-XRF spectrometry measurements of stable lead in bone. A second, cylindrical leg phantom was simulated to determine whether the shape of the calibration phantom is a significant factor in evaluating K-XRF performance. Simulations of the cylindrical and anthropometric calibration phantoms suggest that a cylindrical calibration standard overestimates the lead content of a human leg by up to 4%. A two-way analysis of variance determined that phantom shape is a statistically significant factor in predicting the K-XRF response. These results suggest that an anthropometric phantom provides a more accurate calibration standard than the conventional cylindrical shape, and that a cylindrical shape introduces a 4% positive bias in measured lead values.
Guerra, Marta L.
2009-02-23
We calculate the efficiency of a rejection-free dynamic Monte Carlo method for d-dimensional off-lattice homogeneous particles interacting through a repulsive power-law potential r^{-p}. Theoretically we find the algorithmic efficiency in the limit of low temperatures and/or high densities is asymptotically proportional to ρ^{(p+2)/2} T^{-d/2}, with the particle density ρ and the temperature T. Dynamic Monte Carlo simulations are performed in one-, two-, and three-dimensional systems with different powers p, and the results agree with the theoretical predictions. © 2009 The American Physical Society.
Efficient heterogeneous execution of Monte Carlo shielding calculations on a Beowulf cluster.
Dewar, David; Hulse, Paul; Cooper, Andrew; Smith, Nigel
2005-01-01
Recent work has been done in using a high-performance 'Beowulf' cluster computer system for the efficient distribution of Monte Carlo shielding calculations. This has enabled the rapid solution of complex shielding problems at low cost and with greater modularity and scalability than traditional platforms. The work has shown that a simple approach to distributing the workload is as efficient as using more traditional techniques such as PVM (Parallel Virtual Machine). In addition, when used in an operational setting this technique is fairer with the use of resources than traditional methods, in that it does not tie up a single computing resource but instead shares the capacity with other tasks. These developments in computing technology have enabled shielding problems to be solved that would have taken an unacceptably long time to run on traditional platforms. This paper discusses the BNFL Beowulf cluster and a number of tests that have recently been run to demonstrate the efficiency of the asynchronous technique in running the MCBEND program. The BNFL Beowulf currently consists of 84 standard PCs running RedHat Linux. Current performance of the machine has been estimated to be between 40 and 100 Gflop s⁻¹. When the whole system is employed on one problem up to four million particles can be tracked per second. There are plans to review its size in line with future business needs.
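The simple batch-farming approach described above can be illustrated with a thread pool: independent batches of particle histories run concurrently, each with its own seed, and their tallies are combined at the end. The toy pi estimate below merely stands in for an MCBEND shielding run; nothing here reproduces the BNFL cluster setup.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def run_batch(args):
    """Track one batch of 'particles' independently; this toy Monte Carlo
    pi estimate stands in for a shielding calculation batch."""
    seed, n = args
    rng = random.Random(seed)  # per-batch generator, so batches are independent
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))
    return hits, n

# Distribute 8 batches of 50k histories over 4 workers; results combine by summing.
batches = [(seed, 50_000) for seed in range(8)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_batch, batches))

hits = sum(h for h, _ in results)
total = sum(n for _, n in results)
print("pi estimate:", 4.0 * hits / total)
```

Because batches only communicate at the final reduction, this scheme needs no message-passing layer such as PVM, which is the point made in the abstract.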
International Nuclear Information System (INIS)
Matsui, S.; Mori, Y.; Nonaka, T.; Hattori, T.; Kasamatsu, Y.; Haraguchi, D.; Watanabe, Y.; Uchiyama, K.; Ishikawa, M.
2016-01-01
For the evaluation of on-site dosimetry and process design in the industrial use of ultra-low energy electron beam (ULEB) processes, we evaluated the energy deposition using a thin radiochromic film and a Monte Carlo simulation. The response of the film dosimeter was calibrated using a high energy electron beam with an acceleration voltage of 2 MV and alanine dosimeters with an uncertainty of 11% at a coverage factor of 2. Using this response function, absorbed dose measurements for ULEB were evaluated from 10 kGy to 100 kGy as relative doses. The deviation between the responses of deposited energy on the films and the Monte Carlo simulations was within 15%. Within this limitation, relative dose estimation using thin film dosimeters, with a response function obtained from high energy electron irradiation and simulation results, is effective for ULEB irradiation process management.
Determination of relative efficiency of a detector using Monte Carlo method
International Nuclear Information System (INIS)
Medeiros, M.P.C.; Rebello, W.F.; Lopes, J.M.; Silva, A.X.
2015-01-01
High-purity germanium (HPGe) detectors are mandatory tools for spectrometry because of their excellent energy resolution. The efficiency of such detectors, quoted in the list of specifications by the manufacturer, frequently refers to the relative full-energy peak efficiency, defined with respect to the absolute full-energy peak efficiency of a 7.6 cm x 7.6 cm (diameter x height) NaI(Tl) crystal for the 1.33 MeV peak of a 60 Co source positioned 25 cm from the detector. In this study, we used the MCNPX code to simulate an HPGe detector (Canberra GC3020) from the Real-Time Neutrongraphy Laboratory of UFRJ, surveying the spectrum of a 60 Co source located 25 cm from the detector in order to calculate and confirm the efficiency declared by the manufacturer. Agreement between experimental and simulated data was achieved. The model under development will be used for calculation and for comparison with the detector calibration curve from the Genie2000™ software, also serving as a reference for future studies. (author)
Lu, D.; Ricciuto, D. M.; Evans, K. J.
2017-12-01
Data-worth analysis plays an essential role in improving the understanding of the subsurface system, in developing and refining subsurface models, and in supporting rational water resources management. However, data-worth analysis is computationally expensive, as it requires quantifying parameter uncertainty, prediction uncertainty, and both current and potential data uncertainties. Assessment of these uncertainties in large-scale stochastic subsurface simulations using standard Monte Carlo (MC) sampling or advanced surrogate modeling is extremely computationally intensive, sometimes even infeasible. In this work, we propose an efficient Bayesian data-worth analysis using a multilevel Monte Carlo (MLMC) method. Compared to standard MC, which requires a significantly large number of high-fidelity model executions to achieve a prescribed accuracy in estimating expectations, MLMC can substantially reduce the computational cost through the use of multifidelity approximations. As the data-worth analysis involves a great deal of expectation estimation, the cost savings from MLMC can be substantial. While the proposed MLMC-based data-worth analysis is broadly applicable, we apply it to a highly heterogeneous oil reservoir simulation to select an optimal candidate data set that gives the largest uncertainty reduction in predicting mass flow rates at four production wells. The choices made by the MLMC estimation are validated by the actual measurements of the potential data, and are consistent with the estimates obtained from standard MC. Compared to standard MC, however, MLMC greatly reduces the computational cost of the uncertainty reduction estimation, saving up to 600 days of computation when a single processor is used.
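The telescoping idea behind MLMC can be sketched with a two-level toy problem: many cheap coarse-model runs estimate the bulk of the expectation, and a few correlated fine/coarse pairs estimate the correction term. The models and input distribution below are invented stand-ins, not the reservoir simulation.

```python
import numpy as np

rng = np.random.default_rng(2)

fine   = np.exp                           # expensive "high-fidelity" model (toy)
coarse = lambda x: 1.0 + x + 0.5 * x * x  # cheap low-fidelity surrogate (toy)

sigma = 0.5                               # input uncertainty, X ~ N(0, sigma^2)

# Level 0: many cheap coarse-model evaluations.
x0 = rng.normal(0.0, sigma, 200_000)
y0 = coarse(x0).mean()

# Level 1: few correlated fine/coarse pairs (same inputs) estimate E[fine - coarse].
x1 = rng.normal(0.0, sigma, 2_000)
y1 = (fine(x1) - coarse(x1)).mean()

# Telescoping sum: E[fine] = E[coarse] + E[fine - coarse]
mlmc_estimate = y0 + y1
print(mlmc_estimate, "vs analytic", np.exp(0.5 * sigma**2))
```

The saving comes from the correlation at level 1: the difference `fine - coarse` has small variance, so only a few expensive evaluations are needed, while the cheap level carries the large sample count.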
An efficient Bayesian data-worth analysis using a multilevel Monte Carlo method
Lu, Dan; Ricciuto, Daniel; Evans, Katherine
2018-03-01
Improving the understanding of subsurface systems, and thus reducing prediction uncertainty, requires the collection of data. As the collection of subsurface data is costly, it is important that the data collection scheme be cost-effective. Design of a cost-effective data collection scheme, i.e., data-worth analysis, requires quantifying model parameter, prediction, and both current and potential data uncertainties. Assessment of these uncertainties in large-scale stochastic subsurface hydrological model simulations using standard Monte Carlo (MC) sampling or surrogate modeling is extremely computationally intensive, sometimes even infeasible. In this work, we propose an efficient Bayesian data-worth analysis using a multilevel Monte Carlo (MLMC) method. Compared to standard MC, which requires a significantly large number of high-fidelity model executions to achieve a prescribed accuracy in estimating expectations, MLMC can substantially reduce computational costs using multifidelity approximations. Since the Bayesian data-worth analysis involves a great deal of expectation estimation, the cost saving of the MLMC in the assessment can be substantial. While the proposed MLMC-based data-worth analysis is broadly applicable, we use it for a highly heterogeneous two-phase subsurface flow simulation to select an optimal candidate data set that gives the largest uncertainty reduction in predicting mass flow rates at four production wells. The choices made by the MLMC estimation are validated by the actual measurements of the potential data, and are consistent with the standard MC estimation. But compared to the standard MC, the MLMC greatly reduces the computational costs.
Malasics, Attila; Boda, Dezso
2010-06-28
Two iterative procedures have been proposed recently to calculate the chemical potentials corresponding to prescribed concentrations from grand canonical Monte Carlo (GCMC) simulations. Both are based on repeated GCMC simulations with updated excess chemical potentials until the desired concentrations are established. In this paper, we propose combining our robust and fast-converging iteration algorithm [Malasics, Gillespie, and Boda, J. Chem. Phys. 128, 124102 (2008)] with the suggestion of Lamperski [Mol. Simul. 33, 1193 (2007)] to average the chemical potentials over the iterations (instead of just using the chemical potentials obtained in the last iteration). We apply the unified method to various electrolyte solutions and show that our algorithm is more efficient when the averaging procedure is used. We discuss the convergence problems arising from violation of charge neutrality when inserting/deleting individual ions instead of neutral groups of ions (salts), and suggest a correction term to the iteration procedure that makes the algorithm efficient for determining the chemical potentials of individual ions as well.
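The iterate-and-average idea can be sketched with a toy stand-in for the GCMC run. The update below follows the cited Malasics-Gillespie-Boda form (shift the chemical potential by kT ln of the target-to-observed concentration ratio), and the averaging follows Lamperski's suggestion; the ideal-gas-like `gcmc_concentration` relation, the noise level, and the iteration counts are assumptions for illustration.

```python
import math, random

random.seed(3)
kT = 1.0
c_target = 0.5

def gcmc_concentration(mu):
    """Stand-in for a GCMC run: the concentration produced by chemical
    potential mu in an ideal-gas-like toy system, with sampling noise."""
    return math.exp(mu / kT) * (1.0 + random.gauss(0.0, 0.02))

mu, mus = 0.0, []
for _ in range(40):
    c = gcmc_concentration(mu)
    mu = mu + kT * math.log(c_target / c)   # Malasics-Gillespie-Boda-type update
    mus.append(mu)

# Lamperski-style averaging over iterations (discarding a short burn-in)
mu_avg = sum(mus[10:]) / len(mus[10:])
print(mu_avg, "vs exact", math.log(c_target))
```

Averaging damps the sampling noise of the individual iterates, which is why the combined scheme converges to a better estimate than the last iterate alone.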
Calibration method for a radwaste assay system
International Nuclear Information System (INIS)
Dulama, C.; Dobrin, R.; Toma, Al.; Paunoiu, C.
2004-01-01
A waste assay system entirely designed and manufactured at the Institute for Nuclear Research is used in the radwaste treatment and conditioning stream to ensure compliance with national repository radiological requirements. Usually, waste assay systems are calibrated using various experimental arrangements, including calibration phantoms. The paper presents a comparative study of the efficiency calibration performed by the shell source method and by a semiempirical computational method based on a Monte Carlo algorithm. (authors)
Sharma, Diksha; Sempau, Josep; Badano, Aldo
2018-02-01
Monte Carlo simulations require a large number of histories to obtain reliable estimates of the quantity of interest and its associated statistical uncertainty. Numerous variance reduction techniques (VRTs) have been employed to increase computational efficiency by reducing the statistical uncertainty. We investigate the effect of two VRTs for optical transport methods on accuracy and computing time for the estimation of variance (noise) in x-ray imaging detectors. We describe two VRTs. In the first, we preferentially alter the direction of the optical photons to increase detection probability. In the second, we follow only a fraction of the total optical photons generated. In both techniques, the statistical weight of the photons is altered to maintain the signal mean. We use fastdetect2, an open-source, freely available optical transport routine from the hybridmantis package. We simulate VRTs for a variety of detector models and energy sources. The imaging data from the VRT simulations are then compared to the analog case (no VRT) using pulse height spectra, Swank factor, and the variance of the Swank estimate. We analyze the effect of VRTs on the statistical uncertainty associated with Swank factors. VRTs increased the relative efficiency by as much as a factor of 9. We demonstrate that we can achieve the same variance of the Swank factor with less computing time. With this approach, the simulations can be stopped when the variance of the variance estimates reaches the desired level of uncertainty. We implemented analytic estimates of the variance of the Swank factor and demonstrated the effect of VRTs on image quality calculations. Our findings indicate that the Swank factor is dominated by the x-ray interaction profile as compared to the additional uncertainty introduced in the optical transport by the use of VRTs. For simulation experiments that aim at reducing the uncertainty in the Swank factor estimate, any of the proposed VRTs can be used for increasing the relative
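The second VRT (following only a fraction of the generated optical photons while scaling their statistical weight) can be sketched as follows. The detection probability and photon counts are arbitrary assumptions, and this toy omits the directional-biasing VRT and the actual fastdetect2 transport; it only shows why the reweighting keeps the signal mean unbiased.

```python
import random

random.seed(4)
f = 0.2                 # follow only 20% of the generated optical photons
n_photons = 200_000
p_detect = 0.3          # per-photon detection probability (illustrative)

# Analog case: every photon is transported.
analog = sum(random.random() < p_detect for _ in range(n_photons))

# VRT case: track a fraction f, weight each detection by 1/f.
tracked, signal = 0, 0.0
for _ in range(n_photons):
    if random.random() < f:          # photon selected for transport
        tracked += 1
        if random.random() < p_detect:
            signal += 1.0 / f        # weight restores the unbiased mean
print(analog, signal, tracked)
```

Only about a fifth of the photons are transported, so computing time drops accordingly; the price is extra variance from the weights, which is the trade-off the paper quantifies through the variance of the Swank estimate.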
Energy Technology Data Exchange (ETDEWEB)
Moreira, M.C.F., E-mail: marcos@ird.gov.b [Universidade Federal do Rio de Janeiro, COPPE, Programa de Engenharia Nuclear, Laboratorio de Monitoracao de Processos (Federal University of Rio de Janeiro, COPPE, Nuclear Engineering Program, Process Monitoring Laboratory), P.O. Box 68509, 21941-972 Rio de Janeiro (Brazil); Instituto de Radioprotecao e Dosimetria, CNEN/IRD (Radiation Protection and Dosimetry Institute, CNEN/IRD), Av. Salvador Allende s/no, P.O. Box 37750, 22780-160 Rio de Janeiro (Brazil); Conti, C.C. [Instituto de Radioprotecao e Dosimetria, CNEN/IRD (Radiation Protection and Dosimetry Institute, CNEN/IRD), Av. Salvador Allende s/no, P.O. Box 37750, 22780-160 Rio de Janeiro (Brazil); Schirru, R. [Universidade Federal do Rio de Janeiro, COPPE, Programa de Engenharia Nuclear, Laboratorio de Monitoracao de Processos (Federal University of Rio de Janeiro, COPPE, Nuclear Engineering Program, Process Monitoring Laboratory), P.O. Box 68509, 21941-972 Rio de Janeiro (Brazil)
2010-09-21
An NaI(Tl) multidetector layout, combined with the use of Monte Carlo (MC) calculations and artificial neural networks (ANN), is proposed to assess the radioactive contamination of urban and semi-urban environment surfaces. A very simple urban environment, a model street composed of a wall on either side and the ground surface, was the study case. A layout of four NaI(Tl) detectors was used, and the data corresponding to the response of the detectors were obtained by the Monte Carlo method. Two additional data sets with random values for the contamination and for the detectors' response were also produced to test the ANNs. For this work, 18 feedforward-topology ANNs with the backpropagation learning algorithm were chosen and trained. The results showed that some trained ANNs were able to accurately predict the contamination on the three urban surfaces when submitted to values within the training range. Other results showed that generalization outside the training range of values could not be achieved. The use of Monte Carlo calculations in combination with ANNs has proven to be a powerful tool for performing detection calibration for highly complicated detection geometries.
International Nuclear Information System (INIS)
Le Tuan Anh; Bui Minh Hue; Bui Van Loat; Nguyen Cong Tam
2011-01-01
Abundance and age are two significant parameters of uranium material. Traditionally, they are determined by alpha spectrometry and mass spectrometry, which are destructive methods. In this report, we present abundance and age results determined by gamma spectrometry using intrinsic efficiency calibration. This is a nondestructive method that requires no standard source or reference samples and is applicable to material of any geometrical shape. The results obtained are in good agreement with those of other authors. (author)
On an efficient multiple time step Monte Carlo simulation of the SABR model
A. Leitao Rodriguez (Álvaro); L.A. Grzelak (Lech Aleksander); C.W. Oosterlee (Cornelis)
2017-01-01
textabstractIn this paper, we will present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho model. The proposed method is an extension of the one time step Monte Carlo method that we proposed in an accompanying paper Leitao et al. [Appl.
Efficient 3D Kinetic Monte Carlo Method for Modeling of Molecular Structure and Dynamics
DEFF Research Database (Denmark)
Panshenskov, Mikhail; Solov'yov, Ilia; Solov'yov, Andrey V.
2014-01-01
Self-assembly of molecular systems is an important and general problem that intertwines physics, chemistry, biology, and material sciences. Through understanding of the physical principles of self-organization, it often becomes feasible to control the process and to obtain complex structures with tailored properties, for example, bacteria colonies of cells or nanodevices with desired properties. Theoretical studies and simulations provide an important tool for unraveling the principles of self-organization and, therefore, have recently gained an increasing interest. The present article features an extension of a popular code MBN EXPLORER (MesoBioNano Explorer) aiming to provide a universal approach to study self-assembly phenomena in biology and nanoscience. In particular, this extension involves a highly parallelized module of MBN EXPLORER that allows simulating stochastic processes using the kinetic Monte Carlo approach in a three-dimensional space. We describe the computational side of the developed code, discuss its efficiency, and apply it for studying an exemplary system.
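The kinetic Monte Carlo core loop (choose an event with probability proportional to its rate, then advance time by an exponentially distributed waiting time with mean 1/total-rate) can be sketched as below. The three rates are arbitrary placeholders, not an MBN EXPLORER model, and the 3D spatial bookkeeping and parallelization are omitted.

```python
import random

random.seed(5)

def kmc_step(rates, t):
    """One kinetic Monte Carlo step: pick event i with probability r_i / R,
    then advance time by an exponential waiting time with rate R."""
    R = sum(rates)
    x = random.random() * R           # point into the cumulative rate ladder
    i, acc = 0, rates[0]
    while acc < x:
        i += 1
        acc += rates[i]
    t += random.expovariate(R)        # stochastic waiting time
    return i, t

# Illustrative: three competing processes (e.g., diffusion hops and attachment)
rates = [2.0, 1.0, 0.5]
t, counts = 0.0, [0, 0, 0]
for _ in range(30_000):
    i, t = kmc_step(rates, t)
    counts[i] += 1
print(counts, t)
```

Event frequencies come out proportional to the rates, and the accumulated `t` carries physical time, which is what distinguishes kinetic Monte Carlo from plain Metropolis sampling.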
Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport
Energy Technology Data Exchange (ETDEWEB)
Romano, Paul K.; Siegel, Andrew R.
2017-04-16
The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC were then used in conjunction with the models to calculate the speedup due to vectorization as a function of two parameters: the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than the vector width in order to achieve vector efficiency greater than 90%. When the execution times for events are allowed to vary, however, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration. For some problems, this implies that vector efficiencies over 50% may not be attainable. While there are many factors impacting the performance of an event-based algorithm that are not captured by our model, it nevertheless provides insights into factors that may be limiting in a real implementation.
International Nuclear Information System (INIS)
Picton, D.J.; Harris, R.G.; Randle, K.; Weaver, D.R.
1995-01-01
This paper describes a simple, accurate and efficient technique for the calculation of materials perturbation effects in Monte Carlo photon transport calculations. It is particularly suited to the application for which it was developed, namely the modelling of a dual detector density tool as used in borehole logging. However, the method would be appropriate to any photon transport calculation in the energy range 0.1 to 2 MeV, in which the predominant processes are Compton scattering and photoelectric absorption. The method enables a single set of particle histories to provide results for an array of configurations in which material densities or compositions vary. It can calculate the effects of small perturbations very accurately, but is by no means restricted to such cases. For the borehole logging application described here the method has been found to be efficient for a moderate range of variation in the bulk density (of the order of ±30% from a reference value) or even larger changes to a limited portion of the system (e.g. a low density mudcake of the order of a few tens of mm in thickness). The effective speed enhancement over an equivalent set of individual calculations is in the region of an order of magnitude or more. Examples of calculations on a dual detector density tool are given. It is demonstrated that the method predicts, to a high degree of accuracy, the variation of detector count rates with formation density, and that good results are also obtained for the effects of mudcake layers. An interesting feature of the results is that relative count rates (the ratios of count rates obtained with different configurations) can usually be determined more accurately than the absolute values of the count rates. (orig.)
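The correlated-sampling idea described in this abstract (one set of particle histories scored simultaneously for several material configurations by likelihood-ratio reweighting) can be illustrated with a deliberately minimal sketch. This toy estimates uncollided transmission through a 1D slab for three attenuation coefficients from a single set of free paths sampled at the reference value; the geometry and μ values are invented and this is not the authors' code:

```python
import math
import random

random.seed(1)
t = 2.0                      # slab thickness in cm (invented)
mu0 = 0.5                    # reference attenuation coefficient, 1/cm (invented)
mus = [0.4, 0.5, 0.6]        # perturbed configurations scored simultaneously

n = 200_000
scores = {mu: 0.0 for mu in mus}
for _ in range(n):
    s = random.expovariate(mu0)      # free path sampled once, at the reference
    if s > t:                        # this history crosses the slab uncollided
        for mu in mus:
            # likelihood ratio of "survive past t": exp(-mu*t) / exp(-mu0*t)
            scores[mu] += math.exp(-(mu - mu0) * t)

results = {mu: scores[mu] / n for mu in mus}
# each results[mu] estimates exp(-mu * t) from the SAME set of histories
```

Because all configurations share the same random histories, their statistical fluctuations are correlated, which is exactly why ratios between configurations come out more accurately than absolute values, as the abstract notes.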
Directory of Open Access Journals (Sweden)
Rehman Shakeel U.
2009-01-01
A primary-interaction-based Monte Carlo algorithm has been developed for determination of the total efficiency of cylindrical scintillation γ-ray detectors. This methodology has been implemented in a MATLAB-based computer program, BPIMC. For point isotropic sources at axial locations with respect to the detector axis, excellent agreement has been found between the predictions of the BPIMC code and the corresponding results obtained by hybrid Monte Carlo as well as by experimental measurements over a wide range of γ-ray energies. For off-axis point sources, the comparison of the BPIMC predictions with the corresponding results obtained by direct calculations as well as by conventional Monte Carlo schemes shows good agreement, validating the proposed algorithm. Using the BPIMC program, the energy-dependent detector efficiency has been found to approach an asymptotic profile on increasing either the thickness or the diameter of the scintillator while keeping the other fixed. The variation of the energy-dependent total efficiency of a 3″×3″ NaI(Tl) scintillator with axial distance has been studied using the BPIMC code. A change of about two orders of magnitude in detector efficiency has been observed as the axial distance varies from zero to 50 cm; a similarly large variation in total efficiency has also been observed for 137Cs as well as 60Co sources on increasing the axial offset from zero to 50 cm.
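For an on-axis point source, a primary-interaction estimate of total efficiency reduces to averaging the first-interaction probability 1 - exp(-μL) over the solid angle subtended by the crystal. A hedged sketch (not the BPIMC code; the μ value and source distance are assumed, and rays entering through the crystal's curved side from outside are neglected, which is exact for an on-axis source):

```python
import math
import random

random.seed(0)
mu = 0.35          # total linear attenuation coefficient, 1/cm (assumed value)
R, T = 3.81, 7.62  # radius and thickness of a 3"x3" NaI crystal, cm
d = 10.0           # source-to-front-face distance on the axis, cm (invented)

n = 100_000
acc = 0.0
for _ in range(n):
    c = random.random()                  # cos(theta), detector-side hemisphere
    if c == 0.0:
        continue
    tan_t = math.sqrt(1.0 - c * c) / c
    if d * tan_t >= R:                   # ray misses the front face
        continue
    z_exit = min(d + T, R / tan_t)       # back-face exit or side-wall exit
    L = (z_exit - d) / c                 # chord length inside the crystal
    acc += 1.0 - math.exp(-mu * L)       # probability of >= 1 interaction
eff = 0.5 * acc / n                      # the far hemisphere contributes 0
```

The result is necessarily smaller than the geometric solid-angle fraction (about 0.033 for this geometry), and approaches it as μT grows.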
Achleitner, S; Rinderer, M; Kirnbauer, R
2009-01-01
For the Tyrolean part of the river Inn, a hybrid model for flood forecasting has been set up and is currently in its test phase. The system comprises a 1D hydraulic model of the river Inn, the hydrological rainfall-runoff model HQsim, and the snow- and ice-melt model SES, the latter two modeling the rainfall runoff from non-glaciated and glaciated tributary catchments, respectively. Within this paper the focus is put on the hydrological modeling of the 49 connected non-glaciated catchments realized with the software HQsim. In the course of model calibration, the identification of the most sensitive parameters is important for an efficient calibration procedure. The indicators used for explaining the parameter sensitivities were chosen specifically for the purpose of flood forecasting. Finally, five model parameters could be identified as being sensitive for model calibration when aiming for a well-calibrated model for flood conditions. In addition, two parameters were identified which are sensitive in situations where the snow line plays an important role.
Calibration method for carbon dioxide sensors to investigate direct methanol fuel cell efficiency
Stähler, M.; Burdzik, A.
2014-09-01
Methanol crossover is a process in direct methanol fuel cells which causes a significant reduction of cell efficiency. Methanol permeates through the membrane electrode assembly and reacts at the cathode with oxygen to form carbon dioxide. This process is undesirable because it does not generate electric energy, but only increases heat production. Different procedures have been used to investigate this crossover. One method detects carbon dioxide in the cathode exhaust gas by means of a carbon dioxide sensor. This technique is inexpensive and enables real-time measurements, but its disadvantage is low accuracy. This paper demonstrates a simple method of generating gas mixtures for the calibration of the sensor in order to increase the accuracy. The advantages of this technique are that only the existing devices of a direct methanol fuel cell test rig are needed and that the operator can adjust the carbon dioxide concentration for the calibration process, which is important for dealing with nonlinearities of the sensor. A detailed error analysis accompanies the experiments. Finally, it is shown that the accuracy of the determined Faraday efficiency can be improved by using the presented calibration technique.
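One way a test rig alone could generate a known CO2 concentration is via Faraday's law: anodic methanol oxidation (CH3OH + H2O → CO2 + 6H+ + 6e-) releases one CO2 molecule per six electrons. Whether this is exactly the authors' procedure is an assumption; the current and flow values below are invented, back-of-envelope inputs:

```python
F = 96485.0    # Faraday constant, C/mol

def co2_ppm(current_a, carrier_flow_mol_s):
    """CO2 volume fraction (ppm) when faradaic CO2 production is mixed
    into a carrier gas stream.

    current_a          -- cell current in amperes (assumed fully faradaic)
    carrier_flow_mol_s -- molar flow rate of the carrier gas, mol/s
    """
    n_co2 = current_a / (6.0 * F)                  # mol CO2 per second
    return 1e6 * n_co2 / (carrier_flow_mol_s + n_co2)

ppm = co2_ppm(1.0, 1e-3)   # e.g. 1 A into a 1 mmol/s air stream
```

Sweeping the current then sweeps the calibration concentration, which is what allows the operator to probe the sensor's nonlinearity over its working range.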
Directory of Open Access Journals (Sweden)
Y. Tang
2006-01-01
This study provides a comprehensive assessment of the relative effectiveness of state-of-the-art evolutionary multiobjective optimization (EMO) tools in calibrating hydrologic models. The relative computational efficiency, accuracy, and ease-of-use of the following EMO algorithms are tested: Epsilon Dominance Nondominated Sorted Genetic Algorithm-II (ε-NSGAII), the Multiobjective Shuffled Complex Evolution Metropolis algorithm (MOSCEM-UA), and the Strength Pareto Evolutionary Algorithm 2 (SPEA2). This study uses three test cases to compare the algorithms' performance: (1) a standardized test function suite from the computer science literature, (2) a benchmark hydrologic calibration test case for the Leaf River near Collins, Mississippi, and (3) a computationally intensive integrated surface-subsurface model application in the Shale Hills watershed in Pennsylvania. One challenge and contribution of this work is the development of a methodology for comprehensively comparing EMO algorithms that have different search operators and randomization techniques. Overall, SPEA2 attained competitive to superior results for most of the problems tested in this study. The primary strengths of the SPEA2 algorithm lie in its search reliability and its diversity preservation operator. The biggest challenge in maximizing the performance of SPEA2 lies in specifying an effective archive size without a priori knowledge of the Pareto set. In practice, this would require significant trial-and-error analysis, which is problematic for more complex, computationally intensive calibration applications. ε-NSGAII appears to be superior to MOSCEM-UA and competitive with SPEA2 for hydrologic model calibration. ε-NSGAII's primary strength lies in its ease-of-use due to its dynamic population sizing and archiving, which lead to rapid convergence to very high quality solutions with minimal user input. MOSCEM-UA is best suited for hydrologic model calibration applications that have small
Variation in thyroid calibration efficiencies as a function of phantom design for I-125
International Nuclear Information System (INIS)
Miltenberger, R.P.; Langille, E.; Sheetz, M.; Ricci, T.
1987-01-01
Four commercially available thyroid phantoms were evaluated to determine the effect that the choice of phantom has on the determination of I-125 activity in the thyroid. Efficiency calibration values for a 5.0-cm diameter, 1-mm thick NaI(Tl) detector were determined as a function of distance from the phantom along the central axis and at angles up to 45° off axis, using I-125 as the radionuclide of interest. Results indicate that substantial variations in the estimate of radioactivity in the thyroid will occur depending on the choice of calibration phantom. Using the Humanoid System, Inc. Realistic Phantom as the reference phantom, the estimated activity in the human thyroid could differ by factors ranging from 0.86 to 2.94. 6 refs., 6 figs., 3 tabs
International Nuclear Information System (INIS)
Harb, S.; Salahel Din, K.; Abbady, A.
2009-01-01
Efficiency calibration of HPGe gamma-ray spectrometry for bulk environmental samples (tea, crops, water, and soil) is a significant part of environmental radioactivity measurements. Here we discuss the full energy peak efficiency (FEPE) of three HPGe detectors; since the efficiency depends on the measurement conditions, it is essential that it be determined for each set-up employed. To take full advantage of gamma-ray spectrometry, a set of efficiencies covering a wide energy range is needed: the wider the range, the larger the number of radionuclides whose concentrations can be determined. To measure the main natural gamma-ray emitters, the efficiency should be known at least from 46.54 keV (210Pb) to 1836 keV (88Y). Radioactive sources were prepared from two different standards, a first mixed standard QC Y 40 containing 210Pb, 241Am, 109Cd, and 57Co, and a second, QC Y 48, containing 241Am, 109Cd, 57Co, 139Ce, 113Sn, 85Sr, 137Cs, 88Y, and 60Co; such sources are necessary in order to calculate the activity of the different radionuclides contained in a sample. In this work we study the efficiency calibration as a function of several parameters: gamma-ray energy from 46.54 keV (210Pb) to 1836 keV (88Y); three different detectors A, B, and C; container geometry (point source, Marinelli beaker, and 1 L cylindrical bottle); height of standard soil samples in a 250 ml bottle; and density of standard environmental samples. These environmental samples must be measured before the standard solution is added, because the same samples are then used in order to account for self-absorption and composition, especially in the case of volume samples.
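A common way to turn a handful of measured FEPE points into a continuous calibration curve is to interpolate in log-log coordinates, where the efficiency curve is smooth. A minimal sketch with invented efficiency values (not the detectors of this study), using the smallest workable model, a quadratic through three calibration points:

```python
import math

# Invented calibration points (energy in keV, measured FEPE):
cal = [(46.54, 0.042), (122.1, 0.060), (661.7, 0.019)]
xs = [math.log(e) for e, _ in cal]
ys = [math.log(f) for _, f in cal]

def fepe(e_kev):
    """Quadratic Lagrange interpolation of ln(efficiency) vs ln(energy)."""
    x = math.log(e_kev)
    y = 0.0
    for i in range(3):
        term = ys[i]
        for j in range(3):
            if j != i:
                term *= (x - xs[j]) / (xs[i] - xs[j])
        y += term
    return math.exp(y)
```

In practice one fits a higher-order log-log polynomial to many points by least squares and treats density and geometry corrections separately; the three-point version only illustrates the coordinate transformation.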
Joglekar, Ajit; Chen, Renjie; Lawrimore, Joshua
2013-01-01
Macromolecular machines participate in almost every cell biological function. These machines can take the form of well-defined protein structures such as the kinetochore, or more loosely organized protein assemblies such as the endocytic coat. The protein architecture of these machines, that is, the arrangement of multiple copies of protein subunits at the nanoscale, must be known in order to understand their cell biological function and biophysical mechanism. Defining this architecture in vivo presents a major challenge. The high density of protein molecules within macromolecular machines severely limits the effectiveness of super-resolution microscopy. However, this density is ideal for Förster Resonance Energy Transfer (FRET), which can determine the proximity between neighboring molecules. Here, we present a simple FRET quantitation scheme that calibrates a standard epifluorescence microscope for measuring donor-acceptor separations. This calibration can be used to deduce FRET efficiency from fluorescence intensity measurements. The method allows accurate determination of FRET efficiency over a wide range of efficiency values and FRET pair numbers. It also allows dynamic FRET measurements with high spatiotemporal resolution under cell biological conditions. Although the poor maturation efficiency of genetically encoded fluorescent proteins presents a challenge, we show that its effects can be alleviated. To demonstrate this methodology, we probe the in vivo architecture of the γ-Tubulin Ring. Our technique can be applied to study the architecture and dynamics of a wide range of macromolecular machines.
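The intensity-based part of such a scheme can be sketched in a few lines: apparent FRET efficiency from donor quenching, plus a first-order correction for incomplete acceptor maturation. This is a simplified model (single donor-acceptor pairs, linear dilution of the signal), not the authors' full calibration:

```python
def fret_efficiency(f_da, f_d):
    """Apparent FRET efficiency from donor quenching: E = 1 - F_DA / F_D,
    where F_DA is donor intensity with the acceptor present, F_D without."""
    return 1.0 - f_da / f_d

def correct_for_maturation(e_apparent, p_mature):
    """If only a fraction p_mature of acceptor fluorophores mature, a donor
    has probability (1 - p_mature) of lacking a FRET partner, linearly
    diluting the measured signal: E_meas = p_mature * E_true."""
    return e_apparent / p_mature

e_app = fret_efficiency(70.0, 100.0)           # donor quenched by 30%
e_true = correct_for_maturation(e_app, 0.75)   # only 3/4 of acceptors mature
```

The arbitrary intensities 70/100 and the 75% maturation fraction are illustrative numbers, not values from the paper.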
Absolute efficiency calibration of 6LiF-based solid state thermal neutron detectors
Finocchiaro, Paolo; Cosentino, Luigi; Lo Meo, Sergio; Nolte, Ralf; Radeck, Desiree
2018-03-01
The demand for new thermal neutron detectors as an alternative to 3He tubes in research, industrial, safety and homeland security applications is growing. These needs have triggered research and development activities on new generations of thermal neutron detectors, characterized by reasonable efficiency and gamma rejection comparable to 3He tubes. In this paper we show the state of the art of a promising low-cost technique, based on commercial solid-state silicon detectors coupled with thin neutron converter layers of 6LiF deposited onto carbon fiber substrates. A few configurations were studied with the GEANT4 simulation code, and the intrinsic efficiency of the corresponding detectors was calibrated at the PTB Thermal Neutron Calibration Facility. The results show that the measured intrinsic detection efficiency is well reproduced by the simulations, thereby validating the simulation tool in view of new designs. These neutron detectors have also been tested at neutron beam facilities such as ISIS (Rutherford Appleton Laboratory, UK) and n_TOF (CERN), where a few samples are already in operation for beam flux and 2D profile measurements. Forthcoming applications are foreseen for the online monitoring of spent nuclear fuel casks in interim storage sites.
Efficiencies of joint non-local update moves in Monte Carlo simulations of coarse-grained polymers
Austin, Kieran S.; Marenz, Martin; Janke, Wolfhard
2018-03-01
In this study four update methods are compared in their performance in a Monte Carlo simulation of polymers in continuum space. The efficiencies of the update methods and combinations thereof are compared with the aid of the autocorrelation time with a fixed (optimal) acceptance ratio. Results are obtained for polymer lengths N = 14, 28 and 42 and temperatures below, at and above the collapse transition. In terms of autocorrelation, the optimal acceptance ratio is approximately 0.4. Furthermore, an overview of the step sizes of the update methods that correspond to this optimal acceptance ratio is given. This shall serve as a guide for future studies that rely on efficient computer simulations.
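Autocorrelation-time comparisons like the one in this study rest on an estimator of the integrated autocorrelation time. A simple self-contained version (using the common truncate-at-first-negative rule, not necessarily the windowing used by the authors), checked against an AR(1) surrogate chain whose exact value is (1+a)/(2(1-a)):

```python
import random

def integrated_autocorr_time(x):
    """tau_int = 1/2 + sum_{t>=1} rho(t), summed until the estimated
    autocovariance first drops below zero (a simple truncation rule)."""
    n = len(x)
    mean = sum(x) / n
    d = [v - mean for v in x]
    c0 = sum(v * v for v in d) / n
    tau = 0.5
    for t in range(1, n // 2):
        ct = sum(d[i] * d[i + t] for i in range(n - t)) / (n - t)
        if ct <= 0.0:
            break
        tau += ct / c0
    return tau

# Surrogate chain: AR(1) with coefficient a has exact tau_int = 4.5 for a = 0.8
random.seed(2)
a = 0.8
x = [0.0]
for _ in range(20000):
    x.append(a * x[-1] + random.gauss(0.0, 1.0))
tau = integrated_autocorr_time(x)
```

Comparing tau across update moves at a fixed acceptance ratio is then exactly the kind of measurement the abstract describes.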
Energy Technology Data Exchange (ETDEWEB)
Garnica-Garza, H M [Centro de Investigacion y de Estudios Avanzados del Instituto Politecnico Nacional Unidad Monterrey, Vía del Conocimiento 201 Parque de Investigacion e Innovacion Tecnologica, Apodaca NL C.P. 66600 (Mexico)], E-mail: hgarnica@cinvestav.mx
2009-03-21
Monte Carlo simulation was employed to calculate the response of TLD-100 chips under irradiation conditions such as those found during accelerated partial breast irradiation with the MammoSite radiation therapy system. The absorbed dose versus radius in the last 0.5 cm of the treated volume was also calculated, employing a resolution of 20 μm, and a function that fits the observed data was determined. Several clinically relevant irradiation conditions were simulated for different combinations of balloon size, balloon-to-surface distance and contents of the contrast solution used to fill the balloon. The thermoluminescent dosemeter (TLD) cross-calibration factors were derived assuming that the calibration of the dosemeters was carried out using a cobalt-60 beam, and in such a way that they provide a set of parameters that reproduce the function describing the behavior of the absorbed dose versus radius curve. Such factors may also prove useful for standards laboratories that provide postal dosimetry services.
Directory of Open Access Journals (Sweden)
Lauren S. Wakschlag
2009-06-01
Maternal smoking during pregnancy is a major public health problem that has been associated with numerous short- and long-term adverse health outcomes in offspring. However, characterizing smoking exposure during pregnancy precisely has been rather difficult: self-reported measures of smoking often suffer from recall bias, deliberate misreporting, and selective non-disclosure, while single bioassay measures of nicotine metabolites only reflect recent smoking history and cannot capture the fluctuating and complex patterns of varying exposure of the fetus. Recently, Dukic et al. [1] proposed a statistical method for combining information from both sources in order to increase the precision of the exposure measurement and the power to detect more subtle effects of smoking. In this paper, we extend the Dukic et al. [1] method to incorporate individual variation of the metabolic parameters (such as clearance rates) into the calibration model of smoking exposure during pregnancy. We apply the new method to the Family Health and Development Project (FHDP), a small convenience sample of 96 predominantly working-class white pregnant women oversampled for smoking. We find that, on average, misreporters smoke 7.5 cigarettes more than they report, with about one third underreporting by 1.5, one third by about 6.5, and one third by 8.5 cigarettes. Partly due to the limited demographic heterogeneity in the FHDP sample, the results are similar to those obtained by the deterministic calibration model, whose adjustments were slightly lower (by 0.5 cigarettes on average). The new results are also, as expected, less sensitive to assumed values of cotinine half-life.
Özgün, Özlem
2013-01-01
Statistical properties of scattered fields (or radar cross section values) in electromagnetic scattering from objects (such as ship- and decoy-like objects) on or above random rough sea surfaces are predicted by using transformation electromagnetics, finite element method (FEM) and Monte Carlo technique. The rough sea surface is modeled as a random process and is randomly generated by using the Pierson-Moskowitz spectrum. For each realization of the sea surface, scattered fields and the radar...
Energy Technology Data Exchange (ETDEWEB)
Zhang, X. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Izaurralde, R. C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Zong, Z. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Zhao, K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Thomson, A. M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2012-08-20
The efficiency of calibrating physically-based complex hydrologic models is a major concern in the application of these models to understand and manage natural and human activities that affect watershed systems. In this study, we developed a multi-core aware multi-objective evolutionary optimization algorithm (MAMEOA) to improve the efficiency of calibrating a widely used watershed model, the Soil and Water Assessment Tool (SWAT). The test results show that MAMEOA can reduce the time consumed in calibrating SWAT by about 1-9%, 26-51%, and 39-56% compared with the sequential method when using dual-core, quad-core, and eight-core machines, respectively. The potential and limitations of MAMEOA for calibrating SWAT are discussed. MAMEOA is open source software.
Nonequilibrium candidate Monte Carlo is an efficient tool for equilibrium simulation
Energy Technology Data Exchange (ETDEWEB)
Nilmeier, J. P.; Crooks, G. E.; Minh, D. D. L.; Chodera, J. D.
2011-10-24
Metropolis Monte Carlo simulation is a powerful tool for studying the equilibrium properties of matter. In complex condensed-phase systems, however, it is difficult to design Monte Carlo moves with high acceptance probabilities that also rapidly sample uncorrelated configurations. Here, we introduce a new class of moves based on nonequilibrium dynamics: candidate configurations are generated through a finite-time process in which a system is actively driven out of equilibrium, and accepted with criteria that preserve the equilibrium distribution. The acceptance rule is similar to the Metropolis acceptance probability, but related to the nonequilibrium work rather than the instantaneous energy difference. Our method is applicable to sampling from either a single thermodynamic state or a mixture of thermodynamic states, and allows both coordinates and thermodynamic parameters to be driven in nonequilibrium proposals. While generating finite-time switching trajectories incurs an additional cost, driving some degrees of freedom while allowing others to evolve naturally can lead to large enhancements in acceptance probabilities, greatly reducing structural correlation times. Using nonequilibrium driven processes vastly expands the repertoire of useful Monte Carlo proposals in simulations of dense solvated systems.
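A minimal toy of the perturbation/propagation pattern described here: drive a single coordinate toward a shifted value in K increments, apply one Metropolis propagation step after each increment, and accept the candidate with the work-based criterion min(1, e^(-βW)). This is an illustrative sketch on a harmonic potential with invented parameters, not the authors' implementation:

```python
import math
import random

random.seed(3)
beta = 1.0
U = lambda x: 0.5 * x * x      # toy harmonic potential; target is N(0, 1)

def ncmc_move(x, delta=2.0, K=10, prop_step=0.3):
    """One NCMC trial: shift the coordinate by +-delta in K small increments
    (perturbation), with one Metropolis step after each (propagation).
    Work accumulates only from the driven increments."""
    sign = random.choice((-1.0, 1.0))  # symmetric protocol choice
    dx = sign * delta / K
    y, work = x, 0.0
    for _ in range(K):
        work += U(y + dx) - U(y)                    # perturbation work
        y += dx
        z = y + random.uniform(-prop_step, prop_step)
        if random.random() < math.exp(-beta * (U(z) - U(y))):
            y = z                                   # propagation (Metropolis)
    if random.random() < math.exp(-beta * work):
        return y               # accept the nonequilibrium candidate
    return x                   # reject: the chain keeps its old state

x, samples = 0.0, []
for _ in range(20000):
    x = ncmc_move(x)
    samples.append(x)
# if the acceptance rule is right, samples are distributed ~ N(0, 1)
```

The key point the sketch illustrates is that acceptance depends on the accumulated protocol work, not on the instantaneous energy difference between the initial and final states.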
International Nuclear Information System (INIS)
Santos, Cecilia Martins; Pecequilo, Brigitte Roxana Soreanu
2002-01-01
In this paper, efficiency calibration curves were determined for a thin-window, low-background, gas-flow proportional counter using calibration standards with different energies and different absorber thicknesses. For gross alpha counting we used a 241Am standard and natural uranium, and for gross beta counting we used 90Sr/90Y and 137Cs, with residue thicknesses ranging from 0 to approximately 18 mg/cm2. The counting efficiency for alpha emitters with the 241Am standard calibration planchets ranged from 0.266 ± 0.032 for a weightless residue to 0.023 ± 0.003 in a planchet containing 15 mg/cm2 of residue. Counting efficiency values obtained with the natural uranium standard calibration planchets ranged from 0.322 ± 0.030 for a weightless residue to 0.023 ± 0.003 in a planchet containing 14.5 mg/cm2 of residue. The counting efficiency for beta emitters with the 137Cs standard ranged from 0.430 ± 0.036 for a weightless residue to 0.247 ± 0.020 in a planchet containing 17 mg/cm2 of residue. Counting efficiency values obtained with the 90Sr/90Y standard calibration planchets ranged from 0.489 ± 0.041 for a weightless residue to 0.323 ± 0.026 for a residue thickness of 18 mg/cm2. (author)
Yun, Yong-Huan; Li, Hong-Dong; Wood, Leslie R. E.; Fan, Wei; Wang, Jia-Jun; Cao, Dong-Sheng; Xu, Qing-Song; Liang, Yi-Zeng
2013-07-01
Wavelength selection is a critical step for producing better prediction performance when applied to spectral data. Considering the fact that vibrational and rotational spectra have continuous spectral bands, we propose a novel method of wavelength interval selection based on random frog, called interval random frog (iRF). To obtain all possible continuous intervals, the spectra are first divided into intervals by a moving window of fixed width over the whole spectrum. These overlapping intervals are ranked by applying random frog coupled with PLS, and the optimal ones are chosen. The method has been applied to two near-infrared spectral datasets, displaying higher efficiency in wavelength interval selection than other methods. The source code of iRF can be freely downloaded for academic research at the website: http://code.google.com/p/multivariate-calibration/downloads/list.
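The interval pool that iRF ranks is simply the set of all fixed-width windows over the wavelength axis. A trivial sketch of that first step (the width and spectrum length are invented; the random-frog ranking itself is not reproduced here):

```python
def moving_window_intervals(n_points, width):
    """All continuous windows of fixed `width` over a spectrum with
    `n_points` wavelengths; this is the candidate pool that a random-frog
    ranking would subsequently score."""
    return [(s, s + width) for s in range(n_points - width + 1)]

ivs = moving_window_intervals(700, 30)   # 671 overlapping candidate intervals
```

Because consecutive windows overlap by all but one point, every continuous band of the chosen width is represented exactly once in the pool.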
Efficient data management techniques implemented in the Karlsruhe Monte Carlo code KAMCCO
International Nuclear Information System (INIS)
Arnecke, G.; Borgwaldt, H.; Brandl, V.; Lalovic, M.
1974-01-01
The Karlsruhe Monte Carlo Code KAMCCO is a forward neutron transport code with an eigenfunction and a fixed source option, including time-dependence. A continuous energy model is combined with a detailed representation of neutron cross sections, based on linear interpolation, Breit-Wigner resonances and probability tables. All input is processed into densely packed, dynamically addressed parameter fields and networks of pointers (addresses). Estimation routines are decoupled from random walk and analyze a storage region with sample records. This technique leads to fast execution with moderate storage requirements and without any I/O-operations except in the input and output stages. 7 references. (U.S.)
Understanding and improving the efficiency of full configuration interaction quantum Monte Carlo.
Vigor, W A; Spencer, J S; Bearpark, M J; Thom, A J W
2016-03-07
Within full configuration interaction quantum Monte Carlo, we investigate how the statistical error behaves as a function of the parameters which control the stochastic sampling. We define the inefficiency as a measure of the statistical error per particle sampling the space and per time step and show there is a sizeable parameter regime where this is minimised. We find that this inefficiency increases sublinearly with Hilbert space size and can be reduced by localising the canonical Hartree-Fock molecular orbitals, suggesting that the choice of basis impacts the method beyond that of the sign problem.
Understanding and improving the efficiency of full configuration interaction quantum Monte Carlo
Energy Technology Data Exchange (ETDEWEB)
Vigor, W. A.; Bearpark, M. J. [Department of Chemistry, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom); Spencer, J. S. [Department of Physics, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom); Department of Materials, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom); Thom, A. J. W. [Department of Chemistry, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom); University Chemical Laboratory, Lensfield Road, Cambridge CB2 1EW (United Kingdom)
2016-03-07
Within full configuration interaction quantum Monte Carlo, we investigate how the statistical error behaves as a function of the parameters which control the stochastic sampling. We define the inefficiency as a measure of the statistical error per particle sampling the space and per time step and show there is a sizeable parameter regime where this is minimised. We find that this inefficiency increases sublinearly with Hilbert space size and can be reduced by localising the canonical Hartree–Fock molecular orbitals, suggesting that the choice of basis impacts the method beyond that of the sign problem.
Efficient pseudo-random number generation for monte-carlo simulations using graphic processors
Mohanty, Siddhant; Mohanty, A. K.; Carminati, F.
2012-06-01
A hybrid approach based on the combination of three Tausworthe generators and one linear congruential generator for pseudo-random number generation for GPU programming, as suggested in the NVIDIA-CUDA library, has been used for Monte Carlo sampling. On each GPU thread, a random seed is generated on the fly in a simple way using the quick-and-dirty algorithm, where the mod operation is not performed explicitly due to unsigned integer overflow. Using this hybrid generator, multivariate correlated sampling based on the alias technique has been carried out using both CUDA and OpenCL languages.
Efficient pseudo-random number generation for Monte-Carlo simulations using graphic processors
International Nuclear Information System (INIS)
Mohanty, Siddhant; Mohanty, A K; Carminati, F
2012-01-01
A hybrid approach based on the combination of three Tausworthe generators and one linear congruential generator for pseudo-random number generation for GPU programming, as suggested in the NVIDIA-CUDA library, has been used for Monte Carlo sampling. On each GPU thread, a random seed is generated on the fly in a simple way using the quick-and-dirty algorithm, where the mod operation is not performed explicitly due to unsigned integer overflow. Using this hybrid generator, multivariate correlated sampling based on the alias technique has been carried out using both CUDA and OpenCL languages.
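The three-Tausworthe-plus-LCG combination referred to in both records is the well-known HybridTaus generator from the NVIDIA literature (GPU Gems 3, ch. 37). A plain-Python transcription for illustration, with the shift/mask parameters as published there; on the CPU, explicit 32-bit masking stands in for the unsigned overflow the abstract mentions:

```python
M32 = 0xFFFFFFFF   # explicit 32-bit mask replaces unsigned integer overflow

def taus_step(z, s1, s2, s3, m):
    b = ((((z << s1) & M32) ^ z) >> s2)
    return (((z & m) << s3) & M32) ^ b

def lcg_step(z, a, c):
    return (a * z + c) & M32

class HybridTaus:
    """Three Tausworthe streams XOR-combined with one LCG
    (GPU Gems 3, ch. 37). Tausworthe seeds should exceed 128."""
    def __init__(self, z1, z2, z3, z4):
        self.z = [z1, z2, z3, z4]

    def next_float(self):
        z = self.z
        z[0] = taus_step(z[0], 13, 19, 12, 0xFFFFFFFE)
        z[1] = taus_step(z[1], 2, 25, 4, 0xFFFFFFF8)
        z[2] = taus_step(z[2], 3, 11, 17, 0xFFFFFFF0)
        z[3] = lcg_step(z[3], 1664525, 1013904223)
        # 2.3283064365387e-10 ~= 2**-32: map the XOR combination to [0, 1)
        return 2.3283064365387e-10 * (z[0] ^ z[1] ^ z[2] ^ z[3])

rng = HybridTaus(0x1234, 0x5678, 0x9ABC, 0xDEF0)
vals = [rng.next_float() for _ in range(10000)]
```

On a GPU each thread would hold its own four-word state, which is what makes the scheme attractive for per-thread Monte Carlo sampling.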
On stochastic error and computational efficiency of the Markov Chain Monte Carlo method
Li, Jun
2014-01-01
In Markov Chain Monte Carlo (MCMC) simulations, thermal equilibrium quantities are estimated by ensemble averages over a sample set containing a large number of correlated samples. These samples are selected in accordance with the probability distribution function, known from the partition function of the equilibrium state. As the stochastic error of the simulation results is significant, it is desirable to understand the variance of the estimation by ensemble average, which depends on the sample size (i.e., the total number of samples in the set) and the sampling interval (i.e., the cycle number between two consecutive samples). Although large sample sizes reduce the variance, they increase the computational cost of the simulation. For a given CPU time, the sample size can be reduced greatly by increasing the sampling interval, while the corresponding increase in variance is negligible if the original sampling interval is very small. In this work, we report a few general rules that relate the variance to the sample size and the sampling interval. These results are observed and confirmed numerically. The variance rules are derived for the MCMC method but are also valid for correlated samples obtained using other Monte Carlo methods. The main contribution of this work includes the theoretical proof of these numerical observations and the set of assumptions that lead to them. © 2014 Global-Science Press.
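The trade-off stated here (thinning costs almost nothing while the sampling interval stays well below the autocorrelation time) is easy to see numerically on a correlated surrogate chain. A sketch using an AR(1) process in place of a real MCMC chain, with all parameters invented:

```python
import random
import statistics

random.seed(4)
a = 0.9        # AR(1) coefficient: tau_int = (1 + 0.9) / (2 * 0.1) = 9.5

def chain_mean(n_steps, interval):
    """Ensemble average over one correlated chain, keeping only every
    `interval`-th sample."""
    x, acc, k = 0.0, 0.0, 0
    for i in range(n_steps):
        x = a * x + random.gauss(0.0, 1.0)
        if i % interval == 0:
            acc += x
            k += 1
    return acc / k

spreads = {}
for interval in (1, 5, 20):
    means = [chain_mean(2000, interval) for _ in range(300)]
    spreads[interval] = statistics.stdev(means)
# spreads[5] is barely larger than spreads[1]: keeping 5x fewer samples
# loses almost no information while the interval stays below tau_int
```

Only once the interval approaches or exceeds the autocorrelation time (here interval = 20) does the spread of the estimated mean grow noticeably.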
International Nuclear Information System (INIS)
Zazula, J.M.
1988-01-01
The self-learning Monte Carlo technique has been implemented in the commonly used general-purpose neutron transport code MORSE, in order to enhance sampling of the particle histories that contribute to a detector response. The parameters of all the biasing techniques available in MORSE, i.e. splitting, Russian roulette, source and collision outgoing-energy importance sampling, path-length transformation, and additional biasing of the source angular distribution, are optimized. The learning process is performed iteratively after each batch of particles, by retrieving the data concerning the subset of histories that passed through the detector region and energy range in the previous batches. This procedure has been tested on two sample problems in nuclear geophysics, where an unoptimized Monte Carlo calculation is particularly inefficient. The results are encouraging, although the presented method does not directly minimize the variance, and the convergence of our algorithm is restricted by the statistics of successful histories from the previous random walks. Further applications to the modeling of nuclear logging measurements seem promising. 11 refs., 2 figs., 3 tabs. (author)
Czech Academy of Sciences Publication Activity Database
Mukhopadhyay, N. D.; Sampson, A. J.; Deniz, D.; Carlsson, G. A.; Williamson, J.; Malušek, Alexandr
2012-01-01
Roč. 70, č. 1 (2012), s. 315-323 ISSN 0969-8043 Institutional research plan: CEZ:AV0Z10480505 Keywords : Monte Carlo * correlated sampling * efficiency * uncertainty * bootstrap Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders Impact factor: 1.179, year: 2012 http://www.sciencedirect.com/science/article/pii/S0969804311004775
Saghamanesh, S.; Aghamiri, S. M.; Kamali-Asl, A.; Yashiro, W.
2017-09-01
An important challenge in real-world biomedical applications of x-ray phase contrast imaging (XPCI) techniques is the efficient use of the photon flux generated by an incoherent and polychromatic x-ray source. This efficiency can directly influence dose and exposure time and ideally should not affect the superior contrast and sensitivity of XPCI. In this paper, we present a quantitative evaluation of the photon detection efficiency of two laboratory-based XPCI methods, grating interferometry (GI) and coded-aperture (CA). We adopt a Monte Carlo approach to simulate existing prototypes of those systems, tailored for mammography applications. Our simulations were validated by means of a simple experiment performed on a CA XPCI system. Our results show that the fraction of detected photons in the standard energy range of mammography is about 1.4% and 10% for the GI and CA techniques, respectively. The simulations indicate that the design of the optical components plays an important role in the higher efficiency of CA compared to the GI method. It is shown that the use of lower-absorbing materials as the substrates for GI gratings can improve its flux efficiency by up to four times. Along similar lines, we also show that an optimized and compact configuration of GI could lead to a 3.5 times higher fraction of detected counts compared to a standard and non-optimized GI implementation.
Fromm, Steven
2017-09-01
In an effort to study and improve the optical trapping efficiency of the 225Ra Electric Dipole Moment experiment, a fully parallelized Monte Carlo simulation of the laser cooling and trapping apparatus was created at Argonne National Laboratory and is now maintained and upgraded at Michigan State University. The simulation allows us to study optimizations and upgrades without having to use limited quantities of 225Ra (15-day half-life) in the experiment's apparatus. It predicts a trapping efficiency that differs from the value observed in the experiment by approximately a factor of thirty. The effects of varying oven geometry, background gas interactions, laboratory magnetic fields, MOT laser beam configurations and laser frequency noise were studied and ruled out as causes of the discrepancy between measured and predicted values of the overall trapping efficiency. Presently, the simulation is being used to help optimize a planned blue slower laser upgrade of the experiment's apparatus, which will increase the overall trapping efficiency by up to two orders of magnitude. This work is supported by Michigan State University, the Director's Research Scholars Program at the National Superconducting Cyclotron Laboratory, and the U.S. DOE, Office of Science, Office of Nuclear Physics, under Contract DE-AC02-06CH11357.
Monte Carlo calculations on efficiency of boron neutron capture therapy for brain cancer
International Nuclear Information System (INIS)
Awadalla, Galaleldin Mohamed Suliman
2015-11-01
The search for ways to treat cancer has led to many different treatments, including surgery, chemotherapy, and radiation therapy. Among these, boron neutron capture therapy (BNCT) has shown promising results. BNCT is a radiotherapy treatment modality that has been proposed to treat brain cancer. In this technique, cancerous cells are injected with 10B and irradiated by thermal neutrons to increase the probability of the 10B(n,α)7Li reaction occurring. By concentrating boron in the cancer cells, this reaction can deliver a radiation dose high enough to kill them. The short range of the 10B(n,α)7Li reaction products limits the damage to the cancerous cells without affecting healthy tissues. The effectiveness and safety of radiotherapy depend on the radiation dose delivered to the tumor and to healthy tissues. In this thesis, after reviewing the basics and working principles of boron neutron capture therapy (BNCT), Monte Carlo simulations were carried out to model a thermal neutron source suitable for BNCT and to examine the performance of the proposed model when used to irradiate a sample of boron containing both the 10B and 11B isotopes. The MCNP5 code was used to examine the modeled neutron source through different shielding materials. The results are presented, analyzed and discussed at the end of the work. (author)
Atamas, Alexander A; Cuppen, Herma M; Koudriachova, Marina V; de Leeuw, Simon W
2013-01-31
The thermodynamics of binary sII hydrogen clathrates with secondary guest molecules is studied with Monte Carlo simulations. The small cages of the sII unit cell are occupied by one H2 guest molecule. Different promoter molecules entrapped in the large cages are considered. Simulations are conducted at a pressure of 1000 atm in a temperature range of 233-293 K. To determine the stabilizing effect of different promoter molecules on the clathrate, the Gibbs free energies of fully and partially occupied sII hydrogen clathrates are calculated. Our aim is to predict what would be an efficient promoter molecule using properties such as size, dipole moment, and hydrogen-bonding capability. The gas clathrate configurational and free energies are compared. The entropy makes a considerable contribution to the free energy and should be taken into account in determining the stability conditions of binary sII hydrogen clathrates.
An efficient Monte Carlo-SubSampling approach to first passage problems
Energy Technology Data Exchange (ETDEWEB)
Marseguerra, M., E-mail: marzio.marseguerra@polimi.it [Polytechnic of Milan, Via Ponzio 34/3, 20133 Milan (Italy)
2011-02-15
In the realm of the Continuous Time Random Walk (CTRW) and in conjunction with the Monte Carlo (MC) approach, we consider the transport of a chemical or radioactive pollutant in a 3D heterogeneous medium, focusing on the first passage time (FPT), defined as the time required by the walkers representative of the dangerous particles to travel within the medium until crossing a target disk, thus entering another medium which should instead remain clean, such as a water well or an aquifer. The advantage of the MC approach is the possibility of simulating different features of the travel, such as different waiting-time probabilities, space-dependent jump lengths, absorption and desorption phenomena, and Galilei-invariant and -variant velocities driven by external forces. When the computer time required to collect a suitable number of target crossings is excessive, we propose to hybridize the MC approach with the recent SubSampling computational procedure, usefully applied in the engineering reliability field to compute very small failure probabilities in short computer time. To tackle the FPT problem we iteratively consider groups of a few thousand walkers: in each iteration we select the fraction of them closest to the target, ignoring the remaining ones, and then restore the group by creating with the MC technique new walkers even closer to the target. The successive groups have large conditional probabilities which can be estimated one after the other in short time and whose product yields the total probability. In this way the FPT can be computed in much shorter times: we report examples of Gaussian and anomalous distributions in which the reduction with respect to a pure MC computation is of orders of magnitude.
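The iterative group-selection idea can be sketched in a toy setting where the walkers' "distance to target" statistic is standard normal, so the exact crossing probability is known in closed form. Everything below (level fraction p0, group size, Metropolis proposal width) is a hypothetical choice for illustration, not the author's CTRW implementation.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def subset_estimate(threshold=4.0, n=2000, p0=0.1, max_levels=20):
    # Toy SubSampling (subset simulation) estimate of the rare probability
    # P(Z > threshold) for Z ~ N(0, 1): each level keeps the fraction p0 of
    # walkers closest to the target, then replenishes the group with a
    # Metropolis move conditioned on staying beyond the current level.
    samples = rng.standard_normal(n)
    prob = 1.0
    for _ in range(max_levels):
        level = np.quantile(samples, 1.0 - p0)
        if level >= threshold:
            # Final level: count direct crossings of the true threshold
            return prob * np.mean(samples > threshold)
        prob *= p0
        seeds = samples[samples > level]
        chains = [seeds.copy()]
        while sum(len(c) for c in chains) < n:
            prop = seeds + 0.5 * rng.standard_normal(len(seeds))
            log_ratio = 0.5 * (seeds**2 - prop**2)   # N(0,1) density ratio
            accept = (np.log(rng.uniform(size=len(seeds))) < log_ratio) \
                     & (prop > level)
            seeds = np.where(accept, prop, seeds)
            chains.append(seeds.copy())
        samples = np.concatenate(chains)[:n]
    return prob

est = subset_estimate()
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))   # P(Z > 4), about 3.2e-5
```

The product of the easily estimated conditional probabilities (each near p0) recovers a probability of order 1e-5 from only a few groups of 2000 walkers, where a pure MC run would need millions of walkers for a comparable count of crossings.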
International Nuclear Information System (INIS)
Uosif, M.A.M.
2006-01-01
A new 9th-degree polynomial fit function has been constructed to calculate the absolute γ-ray detection efficiencies (η_th) of Ge(Li) and HPGe detectors, allowing the absolute efficiency to be calculated at any γ-energy of interest in the range between 25 and 2000 keV and at distances between 6 and 148 cm. The total absolute γ-ray detection efficiencies have been calculated for six detectors, three Ge(Li) and three HPGe, at different distances. The absolute efficiency of each detector was calculated at the specific energies of the standard sources for each measuring distance. In this calculation, experimental (η_exp) and fitted (η_fit) efficiencies were obtained. Seven calibrated point sources (Am-241, Ba-133, Co-57, Co-60, Cs-137, Eu-152 and Ra-226) were used. The uncertainties of the efficiency calibration were also calculated for quality control. The measured (η_exp) and calculated (η_fit) efficiency values were compared with the efficiency calculated by the Gray fit function; the results obtained on the basis of η_exp and η_fit seem to be in very good agreement
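A fit of this kind can be sketched by fitting ln(efficiency) as a polynomial in ln(energy) and exponentiating to interpolate at any energy of interest. The efficiency values below are invented for illustration, and a cubic is used instead of the paper's 9th-degree polynomial because only a handful of points are simulated.

```python
import numpy as np

# Hypothetical full-energy-peak efficiencies at the calibration energies of
# common standard point sources (values are illustrative only).
energy_keV = np.array([59.5, 81.0, 122.1, 356.0, 661.7, 1173.2, 1332.5, 1764.5])
efficiency = np.array([0.012, 0.020, 0.025, 0.013, 0.0080, 0.0050, 0.0045, 0.0036])

# Fit ln(efficiency) as a polynomial in ln(E); exponentiating the fitted
# polynomial then gives the efficiency at any energy in the covered range.
coeffs = np.polyfit(np.log(energy_keV), np.log(efficiency), deg=3)

def eta_fit(E):
    """Interpolated absolute efficiency at energy E (keV)."""
    return np.exp(np.polyval(coeffs, np.log(E)))

# Relative residuals at the calibration points, for quality control
resid = np.abs(eta_fit(energy_keV) / efficiency - 1)
```

Working in log-log space keeps the fitted efficiency strictly positive and captures the characteristic rise to a knee around 100-150 keV followed by a near power-law fall-off.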
International Nuclear Information System (INIS)
Koukorava, C; Farah, J; Clairand, I; Donadille, L; Struelens, L; Vanhavere, F; Dimitriou, P
2014-01-01
Monte Carlo calculations were used to investigate the efficiency of radiation protection equipment in reducing eye and whole body doses during fluoroscopically guided interventional procedures. Eye lens doses were determined considering different models of eyewear with various shapes, sizes and lead thickness. The origin of scattered radiation reaching the eyes was also assessed to explain the variation in the protection efficiency of the different eyewear models with exposure conditions. The work also investigates the variation of eye and whole body doses with ceiling-suspended shields of various shapes and positioning. For all simulations, a broad spectrum of configurations typical for most interventional procedures was considered. Calculations showed that ‘wrap around’ glasses are the most efficient eyewear models reducing, on average, the dose by 74% and 21% for the left and right eyes respectively. The air gap between the glasses and the eyes was found to be the primary source of scattered radiation reaching the eyes. The ceiling-suspended screens were more efficient when positioned close to the patient’s skin and to the x-ray field. With the use of such shields, the Hp(10) values recorded at the collar, chest and waist level and the Hp(3) values for both eyes were reduced on average by 47%, 37%, 20% and 56% respectively. Finally, simulations proved that beam quality and lead thickness have little influence on eye dose while beam projection, the position and head orientation of the operator as well as the distance between the image detector and the patient are key parameters affecting eye and whole body doses. (paper)
International Nuclear Information System (INIS)
Fernandes Neto, J.M.; Mesquita, C.H. de; Deus, S.F.
1986-01-01
A program in the Basic language was developed for a Sinclair-type personal computer. The code calculates the whole counting efficiency when a cylindrical detector is used, making use of the Monte Carlo method. (Author) [pt
Efficient Monte Carlo Estimation of the Expected Value of Sample Information Using Moment Matching.
Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca
2018-02-01
The Expected Value of Sample Information (EVSI) is used to calculate the economic value of a new research strategy. Although this value would be important to both researchers and funders, there are very few practical applications of the EVSI. This is due to computational difficulties associated with calculating the EVSI in practical health economic models using nested simulations. We present an approximation method for the EVSI that is framed in a Bayesian setting and is based on estimating the distribution of the posterior mean of the incremental net benefit across all possible future samples, known as the distribution of the preposterior mean. Specifically, this distribution is estimated using moment matching coupled with simulations that are available for probabilistic sensitivity analysis, which is typically mandatory in health economic evaluations. This novel approximation method is applied to a health economic model that has previously been used to assess the performance of other EVSI estimators and accurately estimates the EVSI. The computational time for this method is competitive with other methods. We have developed a new calculation method for the EVSI which is computationally efficient and accurate. This novel method relies on some additional simulation and so can be expensive in models with a large computational cost.
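The preposterior-mean idea can be worked through in a conjugate normal toy model, where the distribution of the posterior mean across future samples is itself normal and the EVSI reduces to a one-dimensional expectation, with no nested simulation needed. All parameter values below are hypothetical and the model is far simpler than a practical health economic model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy conjugate-normal sketch: the incremental net benefit is
# theta ~ N(mu0, s0^2), and a future study of size n would report a sample
# mean xbar ~ N(theta, s^2/n).  The posterior mean across possible future
# samples (the "preposterior mean") is then itself normal, so the EVSI can
# be estimated by simulating that distribution directly.
mu0, s0, s, n = 500.0, 2000.0, 4000.0, 50
tau2 = s0**4 / (s0**2 + s**2 / n)        # variance of the preposterior mean
post_mean = rng.normal(mu0, np.sqrt(tau2), size=200_000)

# EVSI = expected value of deciding with the study minus deciding without it
evsi = np.mean(np.maximum(post_mean, 0.0)) - max(mu0, 0.0)
```

The moment-matching method in the abstract generalizes this step: it estimates the first two moments of the preposterior mean from probabilistic-sensitivity-analysis simulations instead of deriving them in closed form.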
Energy Technology Data Exchange (ETDEWEB)
Mesta, M.; Coehoorn, R.; Bobbert, P. A. [Department of Applied Physics, Technische Universiteit Eindhoven, P.O. Box 513, NL-5600 MB Eindhoven (Netherlands); Eersel, H. van [Simbeyond B.V., P.O. Box 513, NL-5600 MB Eindhoven (Netherlands)
2016-03-28
Triplet-triplet annihilation (TTA) and triplet-polaron quenching (TPQ) in organic light-emitting devices (OLEDs) lead to a roll-off of the internal quantum efficiency (IQE) with increasing current density J. We employ a kinetic Monte Carlo modeling study to analyze the measured IQE and color balance as a function of J in a multilayer hybrid white OLED that combines fluorescent blue with phosphorescent green and red emission. We investigate two models for TTA and TPQ involving the phosphorescent green and red emitters: short-range nearest-neighbor quenching and long-range Förster-type quenching. Short-range quenching predicts roll-off to occur at much higher J than measured. Taking long-range quenching with Förster radii for TTA and TPQ equal to twice the Förster radii for exciton transfer leads to a fair description of the measured IQE-J curve, with the major contribution to the roll-off coming from TPQ. The measured decrease of the ratio of phosphorescent to fluorescent component of the emitted light with increasing J is correctly predicted. A proper description of the J-dependence of the ratio of red and green phosphorescent emission needs further model refinements.
Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.
2010-10-01
Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
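The core numerical point, that a first-order, explicit, fixed-step scheme can be noticeably less accurate than a careful integration of the same model, can be reproduced with a one-parameter linear reservoir, a minimal stand-in for a lumped water balance. The rates and step sizes below are arbitrary illustrative choices.

```python
import numpy as np

# Linear reservoir dS/dt = P - k*S with constant forcing: a toy stand-in
# for a lumped rainfall-runoff water balance with a known exact solution.
P, k, S0, T = 2.0, 1.5, 10.0, 5.0

def euler(dt):
    # First-order, explicit, fixed-step integration (the scheme criticized
    # in the abstract when used with coarse steps).
    S = S0
    for _ in range(int(round(T / dt))):
        S += dt * (P - k * S)
    return S

S_exact = P / k + (S0 - P / k) * np.exp(-k * T)

err_osc = abs(euler(1.0) - S_exact)     # k*dt = 1.5: oscillatory, qualitatively wrong
err_coarse = abs(euler(0.5) - S_exact)  # stable but visibly biased
err_fine = abs(euler(0.001) - S_exact)  # small steps recover the exact solution
```

Because the fixed-step error is a deterministic function of the parameters, it can distort the shape of the posterior surface explored by MCMC, which is how the artificial bimodality described in the abstract arises.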
Tau reconstruction, energy calibration and identification at ATLAS
Indian Academy of Sciences (India)
2012-11-10
We present the current status of the tau reconstruction, energy calibration and identification with the ATLAS detector at the LHC. Identification efficiencies are measured in W → τν events in data and compared with predictions from Monte Carlo simulations, whereas the misidentification probabilities of QCD ...
Self-calibration: an efficient method to control systematic effects in bolometric interferometry
Bigot-Sazy, M.-A.; Charlassier, R.; Hamilton, J.-Ch.; Kaplan, J.; Zahariade, G.
2013-02-01
Context. The QUBIC collaboration is building a bolometric interferometer dedicated to the detection of B-mode polarization fluctuations in the cosmic microwave background. Aims: We introduce a self-calibration procedure related to those used in radio-interferometry to control a wide range of instrumental systematic errors in polarization-sensitive instruments. Methods: This procedure takes advantage of the requirement that measurements on redundant baselines match each other exactly in the absence of systematic effects. For a given systematic error model, measuring each baseline independently therefore allows writing a system of nonlinear equations whose unknowns are the systematic error model parameters (gains and couplings of Jones matrices, for instance). Results: We give the mathematical basis of the self-calibration. We implement this method numerically in the context of bolometric interferometry. We show that, for large enough arrays of horns, the nonlinear system can be solved numerically using a standard nonlinear least-squares fit, and that the accuracy achievable on systematic effects is limited only by the time spent in calibration mode on each baseline, within the validity of the systematic error model.
Energy Technology Data Exchange (ETDEWEB)
Santos, J.A.M. [Servico de Fisica Medica, Instituto Portugues de Oncologia do Porto Francisco Gentil, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200072 Porto (Portugal); Centro de Investigacao, Instituto Portugues de Oncologia do Porto Francisco Gentil, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200072 Porto (Portugal)], E-mail: a.miranda@portugalmail.pt; Carrasco, M.F. [Servico de Fisica Medica, Instituto Portugues de Oncologia do Porto Francisco Gentil, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200072 Porto (Portugal); Centro de Investigacao, Instituto Portugues de Oncologia do Porto Francisco Gentil, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200072 Porto (Portugal); Lencart, J. [Servico de Fisica Medica, Instituto Portugues de Oncologia do Porto Francisco Gentil, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200072 Porto (Portugal); Bastos, A.L. [Servico de Medicina Nuclear, Instituto Portugues de Oncologia do Porto Francisco Gentil, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200072 Porto (Portugal)
2009-06-15
A careful analysis of geometry and source positioning influence in the activity measurement outcome of a nuclear medicine dose calibrator is presented for {sup 99m}Tc. The implementation of a quasi-point source apparent activity curve measurement is proposed for an accurate correction of the activity inside several syringes, and compared with a theoretical geometric efficiency model. Additionally, new geometrical parameters are proposed to test and verify the correct positioning of the syringes as part of acceptance testing and quality control procedures.
Griesbach, J.; Wetterer, C.; Sydney, P.; Gerber, J.
Photometric processing of non-resolved Electro-Optical (EO) images has commonly required the use of dark and flat calibration frames that are obtained to correct for charge coupled device (CCD) dark (thermal) noise and CCD quantum efficiency/optical path vignetting effects respectively. It is necessary to account/calibrate for these effects so that the brightness of objects of interest (e.g. stars or resident space objects (RSOs)) may be measured in a consistent manner across the CCD field of view. Detected objects typically require further calibration using aperture photometry to compensate for sky background (shot noise). For this, annuluses are measured around each detected object whose contained pixels are used to estimate an average background level that is subtracted from the detected pixel measurements. In a new photometric calibration software tool developed for AFRL/RD, called Efficient Photometry In-Frame Calibration (EPIC), an automated background normalization technique is proposed that eliminates the requirement to capture dark and flat calibration images. The proposed technique simultaneously corrects for dark noise, shot noise, and CCD quantum efficiency/optical path vignetting effects. With this, a constant detection threshold may be applied for constant false alarm rate (CFAR) object detection without the need for aperture photometry corrections. The detected pixels may be simply summed (without further correction) for an accurate instrumental magnitude estimate. The noise distribution associated with each pixel is assumed to be sampled from a Poisson distribution. Since Poisson distributed data closely resembles Gaussian data for parameterized means greater than 10, the data may be corrected by applying bias subtraction and standard-deviation division. EPIC performs automated background normalization on rate-tracked satellite images using the following technique. A deck of approximately 50-100 images is combined by performing an independent median
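The bias-subtraction and standard-deviation-division step can be sketched on synthetic frames: for Poisson-distributed pixel counts with means above roughly 10, per-pixel normalization across a stack yields approximately zero-mean, unit-variance data, so one constant detection threshold applies across the field of view. The frame sizes and the background pattern below are made up, and EPIC's actual median-deck processing is more involved than this toy.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic star-free stack: every frame shares a fixed (dark + vignetting)
# background pattern; each pixel count is Poisson around that pattern.
pattern = 50.0 + 30.0 * rng.random((32, 32))        # hypothetical background
stack = rng.poisson(pattern, size=(60, 32, 32)).astype(float)

# Per-pixel bias (median across frames) and noise scale (std across frames)
bias = np.median(stack, axis=0)
scale = np.std(stack, axis=0)

# After normalization, every pixel is ~N(0, 1) regardless of its position
# in the vignetting pattern, so a single CFAR threshold works everywhere.
normalized = (stack - bias) / scale

m = normalized.mean()
s = normalized.std()
```

The Poisson-to-Gaussian approximation invoked in the abstract is what justifies treating the normalized counts as Gaussian when setting a constant-false-alarm-rate threshold.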
Hansen, T. M.; Cordua, K. S.
2017-12-01
Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on, for example, complex geostatistical models, and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a numerically complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution) than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
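A minimal sketch of the surrogate idea: train a cheap stand-in for an "expensive" forward model, quantify the surrogate's modeling error probabilistically, and fold that error into the likelihood used by a Metropolis sampler. The forward model, the polynomial surrogate family (used here in place of the paper's neural network) and all noise levels are toy choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def forward_exact(m):
    # Pretend this is a costly numerical forward model (monotone toy)
    return m + 0.3 * np.sin(m)

# Train the surrogate on a modest number of exact evaluations
m_train = np.linspace(-3, 3, 40)
surrogate = np.polynomial.Polynomial.fit(m_train, forward_exact(m_train), deg=7)

# Quantify the modeling (approximation) error probabilistically
model_err = np.std(forward_exact(m_train) - surrogate(m_train))

# Metropolis sampling of p(m | d_obs) using only the fast surrogate,
# with the modeling error added to the data noise in the likelihood.
m_true, sigma_d = 1.2, 0.05
d_obs = forward_exact(m_true) + rng.normal(0.0, sigma_d)
sigma_tot = np.hypot(sigma_d, model_err)

def log_post(m):
    # Gaussian likelihood (surrogate forward) + N(0, 3^2) prior
    return -0.5 * ((d_obs - surrogate(m)) / sigma_tot) ** 2 - 0.5 * (m / 3.0) ** 2

chain, m_cur = [], 0.0
for _ in range(20000):
    m_prop = m_cur + 0.3 * rng.normal()
    if np.log(rng.uniform()) < log_post(m_prop) - log_post(m_cur):
        m_cur = m_prop
    chain.append(m_cur)
post = np.array(chain[2000:])   # discard burn-in
```

Every posterior evaluation calls only the surrogate; in the paper's setting this is what buys the three-orders-of-magnitude speed-up, while `sigma_tot` keeps the posterior honest about the surrogate's imperfection.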
New method for calibrating a Ge detector by using only zero to four efficiency points
International Nuclear Information System (INIS)
Gunnink, R.
1990-01-01
We have developed a method whereby the efficiency of a coaxial germanium detector can be determined with an accuracy of a few percent for the energy region from 0.05 to 4.0 MeV by using only the specifications provided by the manufacturer. The accuracy can be further improved to within 1-2% by using one to four (or more) measured efficiency points. Our method also allows the detector efficiency-versus-energy curve to be applied to any source-to-detector distance. (orig.)
Efficiency calibration of a liquid scintillation counter for 90Y Cherenkov counting
International Nuclear Information System (INIS)
Vaca, F.; Garcia-Leon, M.
1998-01-01
In this paper a complete and self-consistent method for 90Sr determination in environmental samples is presented. It is based on Cherenkov counting of 90Y with a conventional liquid scintillation counter. The effects of color quenching on the counting efficiency and background are carefully studied. A working curve is presented which allows the correction to the counting efficiency to be quantified as a function of the color quenching strength. (orig.)
Construction and Calibration of Optically Efficient LCD-based Multi-Layer Light Field Displays
International Nuclear Information System (INIS)
Hirsch, Matthew; Lanman, Douglas; Wetzstein, Gordon; Raskar, Ramesh
2013-01-01
Near-term commercial multi-view displays currently employ ray-based 3D or 4D light field techniques. Conventional approaches to ray-based display typically include lens arrays or heuristic barrier patterns combined with integral interlaced views on a display screen such as an LCD panel. Recent work has placed an emphasis on the co-design of optics and image formation algorithms to achieve increased frame rates, brighter images, and wider fields-of-view using optimization-in-the-loop and novel arrangements of commodity LCD panels. In this paper we examine the construction and calibration methods of computational, multi-layer LCD light field displays. We present several experimental configurations that are simple to build and can be tuned to sufficient precision to achieve a research quality light field display. We also present an analysis of moiré interference in these displays, and guidelines for diffuser placement and display alignment to reduce the effects of moiré. We describe a technique using the moiré magnifier to fine-tune the alignment of the LCD layers.
A multi-frequency, self-calibrating, in-situ soil sensor with energy efficient wireless interface
Pandey, Gunjan; Kumar, Ratnesh; Weber, Robert J.
2013-05-01
Real time and accurate measurement of sub-surface soil moisture and nutrients is critical for agricultural and environmental studies. This paper presents a novel on-board solution for a robust, accurate and self-calibrating soil moisture and nutrient sensor with inbuilt wireless transmission and reception capability that makes it ideally suited to act as a node in a network spread over a large area. The sensor works on the principle of soil impedance measurement by comparing the amplitude and phase of signals incident on and reflected from the soil in proximity of the sensor. Accuracy of measurements is enhanced by considering a distributed transmission line model for the on-board connections. Presence of an inbuilt self-calibrating mechanism which operates on the standard short-open-load (SOL) technique makes the sensor independent of inaccuracies that may occur due to variations in temperature and surroundings. Moreover, to minimize errors, the parasitic impedances of the board are taken into account in the measurements. Measurements of both real and imaginary parts of soil impedance at multiple frequencies give the sensor an ability to detect variations in ionic concentrations other than soil moisture content. A switch-controlled multiple power mode transmission and reception is provided to support highly energy efficient medium access control.
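The reflection-coefficient measurement with SOL correction can be sketched for a standard one-port error model: the three standards determine the error terms, after which any soil measurement can be corrected and converted to a complex impedance. The error-term values and the soil impedance below are hypothetical.

```python
# One-port short-open-load (SOL) error correction followed by impedance
# extraction from the corrected reflection coefficient (hypothetical values).
e00, e11, tr = 0.05 + 0.02j, 0.10 - 0.05j, 0.9 + 0.1j   # "true" error terms

def measured(gamma_actual):
    # One-port error model: what the raw sensor electronics would report
    return e00 + tr * gamma_actual / (1 - e11 * gamma_actual)

# Step 1: characterize the error terms from the three standards
# (load: gamma = 0, short: gamma = -1, open: gamma = +1)
m_load, m_short, m_open = measured(0), measured(-1), measured(1)
a, b = m_short - m_load, m_open - m_load
r = -b / a                       # equals (1 + e11) / (1 - e11)
e11_est = (r - 1) / (r + 1)
tr_est = b * (1 - e11_est)

def corrected(gamma_m):
    # Step 2: invert the error model for any subsequent soil measurement
    d = gamma_m - m_load
    return d / (e11_est * d + tr_est)

# Step 3: soil impedance from the corrected reflection coefficient
z0 = 50.0
gamma_soil = (30 - 20j - z0) / (30 - 20j + z0)   # soil impedance 30 - 20j ohms
gamma_corr = corrected(measured(gamma_soil))
z_est = z0 * (1 + gamma_corr) / (1 - gamma_corr)
```

Repeating the correction at each operating frequency gives the multi-frequency real and imaginary impedance parts that the sensor uses to separate ionic-concentration effects from moisture content.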
Efficiency calibration of HPGe detector in the 50-1800 keV energy range
International Nuclear Information System (INIS)
Venturini, Luzia
1996-01-01
This paper describes the efficiency of an HPGe detector in the 50-1800 keV energy range, for two geometries used for water measurements: a Marinelli beaker (850 ml) and a polyethylene flask (100 ml). The experimental data were corrected for the summing effect and fitted to a continuous, differentiable, energy-dependent function given by ln(ε) = b0 + b1·ln(E/E0) + β·ln(E/E0)², where β = b2 if E > E0 and β = a2 if E ≤ E0; ε is the full-absorption-peak efficiency, E is the gamma-ray energy, and {b0, b1, b2, a2, E0} is the parameter set to be fitted. (author)
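The fitted function is straightforward to implement directly; the parameter values below are placeholders for illustration, not the author's fitted coefficients.

```python
import numpy as np

# Piecewise log-log fit function of the abstract, with illustrative
# (not published) parameters:
#   ln(eff) = b0 + b1*ln(E/E0) + beta*ln(E/E0)^2,
# where beta = b2 above the knee energy E0 and beta = a2 below it.
b0, b1, b2, a2, E0 = -4.8, -0.85, -0.10, -2.0, 120.0

def efficiency(E_keV):
    """Full-absorption-peak efficiency at energy E_keV (scalar or array)."""
    x = np.log(np.asarray(E_keV, dtype=float) / E0)
    beta = np.where(x > 0, b2, a2)
    return np.exp(b0 + b1 * x + beta * x**2)

# The function is continuous and differentiable at the knee because both
# branches share b0 and b1 and the quadratic term vanishes at E = E0.
```

Only the curvature (second derivative) changes at E0, which lets one quadratic branch track the steep low-energy roll-off while the other follows the gentler high-energy fall-off.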
Efficiency calibration of the Ge(Li) detector of the BIPM for SIR-type ampoules
International Nuclear Information System (INIS)
Michotte, C.
1999-01-01
The absolute full-energy peak efficiency of the Ge(Li) γ-ray spectrometer has been measured between 50 keV and 2 MeV with a relative uncertainty of around 1 × 10⁻² and for ampoule-to-detector distances of 20 cm and 50 cm. All the corrections applied (self-attenuation, dead time, pile-up, true coincidence summing) are discussed in detail. (authors)
Energy Technology Data Exchange (ETDEWEB)
Pai, S [iCAD Inc., Los Gatos, CA (United States)
2015-06-15
Purpose: The objective is to improve the efficiency and efficacy of Xoft™ Axxent™ electronic brachytherapy (EBT) calibration of the source & surface applicator using AAPM TG-61 formalism. Methods: Current method of Xoft EBT source calibration involves determination of absolute dose rate of the source in each of the four conical surface applicators using in-air chamber measurements & TG61 formalism. We propose a simplified TG-61 calibration methodology involving initial characterization of surface cone applicators. This is accomplished by calibrating dose rates for all 4 surface applicator sets (for 10 sources) which establishes the “applicator output ratios” with respect to the selected reference applicator (20 mm applicator). After the initial time, Xoft™ Axxent™ source TG61 Calibration is carried out only in the reference applicator. Using the established applicator output ratios, dose rates for other applicators will be calculated. Results: 200 sources & 8 surface applicator sets were calibrated cumulatively using a Standard Imaging A20 ion-chamber in accordance with manufacturer-recommended protocols. Dose rates of 10, 20, 35 & 50mm applicators were normalized to the reference (20mm) applicator. The data in Figure 1 indicates that the normalized dose rate variation for each applicator for all 200 sources is better than ±3%. The average output ratios are 1.11, 1.02 and 0.49 for the 10 mm,35 mm and 50 mm applicators, respectively, which are in good agreement with the manufacturer’s published output ratios of 1.13, 1.02 and 0.49. Conclusion: Our measurements successfully demonstrate the accuracy of a new calibration method using a single surface applicator for Xoft EBT sources and deriving the dose rates of other applicators. The accuracy of the calibration is improved as this method minimizes the source position variation inside the applicator during individual source calibrations. The new method significantly reduces the calibration time to less
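The proposed workflow reduces routine calibration to one reference measurement plus fixed applicator output ratios. As a sketch, using the output ratios quoted above and a hypothetical reference dose rate:

```python
# Applicator output ratios relative to the 20 mm reference applicator,
# taken from the abstract's reported averages (20 mm is 1.00 by definition).
output_ratio = {"10mm": 1.11, "20mm": 1.00, "35mm": 1.02, "50mm": 0.49}

def applicator_dose_rates(ref_dose_rate):
    """Dose rate for each surface applicator derived from a single TG-61
    measurement in the reference (20 mm) applicator."""
    return {name: ratio * ref_dose_rate
            for name, ratio in output_ratio.items()}

# Hypothetical reference dose rate (arbitrary units) for a new source
rates = applicator_dose_rates(1.50)
```

A single reference measurement per source then yields all four applicator dose rates, which is the source of the time saving (and the reduced positioning variability) claimed in the abstract.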
DEFF Research Database (Denmark)
Hansen, Per Lunnemann; Rabouw, Freddy T.; van Dijk-Moes, Relinde J. A.
2013-01-01
We demonstrate that a simple silver-coated ball lens can be used to accurately measure the entire distribution of radiative transition rates of quantum dot nanocrystals. This simple and cost-effective implementation of Drexhage’s method, which uses nanometer-controlled optical mode density variations near a mirror, not only allows an extraction of calibrated ensemble-averaged rates, but for the first time also quantifies the full inhomogeneous dispersion of radiative and non-radiative decay rates across thousands of nanocrystals. We apply the technique to novel ultrastable CdSe/CdS dot-in-rod emitters, which are of large current interest due to their improved stability and reduced blinking. We retrieve a room-temperature ensemble-average quantum efficiency of 0.87 ± 0.08 at a mean lifetime around 20 ns, and confirm the log-normal distribution of decay rates often assumed in the literature.
Gålfalk, Magnus; Karlson, Martin; Crill, Patrick; Bousquet, Philippe; Bastviken, David
2018-03-01
The calibration and validation of remote sensing land cover products are highly dependent on accurate field reference data, which are costly and practically challenging to collect. We describe an optical method for collection of field reference data that is a fast, cost-efficient, and robust alternative to field surveys and UAV imaging. A lightweight, waterproof, remote-controlled RGB camera (GoPro HERO4 Silver, GoPro Inc.) was used to take wide-angle images from 3.1 to 4.5 m in altitude using an extendable monopod, as well as representative near-ground (< 1 m) images to identify spectral and structural features that correspond to various land covers in present lighting conditions. The method enables collection of detailed field reference data, which is critical in many remote sensing applications, such as satellite-based wetland mapping. It uses common non-expensive equipment, does not require special skills or training, and is facilitated by a step-by-step manual that is included in the Supplement. Over time a global ground cover database can be built that can be used as reference data for studies of non-forested wetlands from satellites such as Sentinel 1 and 2 (10 m pixel size).
Energy Technology Data Exchange (ETDEWEB)
Trzcinski, A.; Zwieglinski, B. [Soltan Inst. for Nuclear Studies, Warsaw (Poland)]; Lynen, U. [Gesellschaft fuer Schwerionenforschung mbH, Darmstadt (Germany)]; Pochodzalla, J. [Max-Planck-Institut fuer Kernphysik, Heidelberg (Germany)]
1998-10-01
This paper reports on a Monte-Carlo program, MSX, developed to evaluate the performance of large-volume, Gd-loaded liquid scintillation detectors used in neutron multiplicity measurements. The results of simulations are presented for the detector intended to count neutrons emitted by the excited target residue in coincidence with the charged products of the projectile fragmentation following relativistic heavy-ion collisions. The latter products could be detected with the ALADIN magnetic spectrometer at GSI-Darmstadt. (orig.) 61 refs.
Sugawara, Hirotake
2018-03-01
In Monte Carlo simulations of electron swarms, sample electrons were copied periodically so that a sufficient number of samples are obtained in equilibrium after relaxation even under a severe attachment-dominated condition where most electrons vanish during the relaxation. The final sampling results were equivalent to those sampled by a conventional method, and the computational time conventionally wasted for the tracking of vanishing electrons was reduced drastically. The time saved can be utilized for tracking more samples to reduce statistical fluctuation. The efficiency of this technique and its limitation are discussed quantitatively together with details on its implementation.
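The periodic copying of surviving sample electrons described above can be sketched as a weighted resampling step. This is a minimal illustration, not the paper's implementation: the electron dictionary fields and the rescaling policy are assumptions; the key invariant is that total statistical weight is conserved so ensemble averages stay unbiased.

```python
def resample_swarm(electrons, target_n):
    """When electron attachment has depleted the swarm below target_n,
    duplicate each surviving sample electron (copying its phase-space
    state) so that enough samples remain for equilibrium statistics.
    Each copy carries a reduced statistical weight, conserving the
    total weight so that sampled averages remain unbiased."""
    if not electrons or len(electrons) >= target_n:
        return electrons
    factor = target_n // len(electrons)   # copies per surviving electron
    out = []
    for e in electrons:
        for _ in range(factor):
            copy = dict(e)
            copy["weight"] = e["weight"] / factor
            out.append(copy)
    return out
```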
Energy Technology Data Exchange (ETDEWEB)
Courtine, Fabien [Laboratoire de Physique Corpusculaire, Universite Blaise Pascal - CNRS/IN2P3, 63000 Aubiere Cedex (France)]
2007-03-15
The thesis proceeded in the context of dating by thermoluminescence. This method requires laboratory measurements of the natural radioactivity, for which we used a germanium spectrometer. To refine its calibration, we modelled the spectrometer with the Monte-Carlo computer code Geant4. We developed a geometrical model which takes into account the presence of inactive zones and zones of poor charge collection within the germanium crystal. The parameters of the model were adjusted by comparison with experimental results obtained with a {sup 137}Cs source. It appeared that the form of the inactive zones is less simple than presented in the specialized literature. The model was extended to the case of a more complex source, with cascade effects and angular correlations between photons: {sup 60}Co. Lastly, applied to extended sources, it gave correct results and allowed us to validate the simulation of matrix effects. (author)
Directory of Open Access Journals (Sweden)
É. Gaborit
2017-09-01
Full Text Available This work explores the potential of the distributed GEM-Hydro runoff modeling platform, developed at Environment and Climate Change Canada (ECCC) over the last decade. More precisely, the aim is to develop a robust implementation methodology to perform reliable streamflow simulations with a distributed model over large and partly ungauged basins, in an efficient manner. The latest version of GEM-Hydro combines the SVS (Soil, Vegetation and Snow) land-surface scheme and the WATROUTE routing scheme. SVS has never been evaluated from a hydrological point of view, which is done here for all major rivers flowing into Lake Ontario. Two established hydrological models are compared with GEM-Hydro, namely MESH and WATFLOOD, which share the same routing scheme (WATROUTE) but rely on different land-surface schemes. All models are calibrated using the same meteorological forcings, objective function, calibration algorithm, and basin delineation. GEM-Hydro is shown to be competitive with MESH and WATFLOOD: the NSE √ (Nash–Sutcliffe criterion computed on the square root of the flows) is for example equal to 0.83 for MESH and GEM-Hydro in validation on the Moira River basin, and to 0.68 for WATFLOOD. A computationally efficient strategy is proposed to calibrate SVS: a simple unit hydrograph is used for routing instead of WATROUTE. Global and local calibration strategies are compared in order to estimate runoff for ungauged portions of the Lake Ontario basin. Overall, streamflow predictions obtained using a global calibration strategy, in which a single parameter set is identified for the whole basin of Lake Ontario, show accuracy comparable to the predictions based on local calibration: the average NSE √ in validation over seven subbasins is 0.73 and 0.61, respectively, for local and global calibrations. Hence, global calibration provides spatially consistent parameter values, robust performance at gauged locations, and reduces the
Gaborit, Étienne; Fortin, Vincent; Xu, Xiaoyong; Seglenieks, Frank; Tolson, Bryan; Fry, Lauren M.; Hunter, Tim; Anctil, François; Gronewold, Andrew D.
2017-09-01
This work explores the potential of the distributed GEM-Hydro runoff modeling platform, developed at Environment and Climate Change Canada (ECCC) over the last decade. More precisely, the aim is to develop a robust implementation methodology to perform reliable streamflow simulations with a distributed model over large and partly ungauged basins, in an efficient manner. The latest version of GEM-Hydro combines the SVS (Soil, Vegetation and Snow) land-surface scheme and the WATROUTE routing scheme. SVS has never been evaluated from a hydrological point of view, which is done here for all major rivers flowing into Lake Ontario. Two established hydrological models are compared with GEM-Hydro, namely MESH and WATFLOOD, which share the same routing scheme (WATROUTE) but rely on different land-surface schemes. All models are calibrated using the same meteorological forcings, objective function, calibration algorithm, and basin delineation. GEM-Hydro is shown to be competitive with MESH and WATFLOOD: the NSE √ (Nash–Sutcliffe criterion computed on the square root of the flows) is for example equal to 0.83 for MESH and GEM-Hydro in validation on the Moira River basin, and to 0.68 for WATFLOOD. A computationally efficient strategy is proposed to calibrate SVS: a simple unit hydrograph is used for routing instead of WATROUTE. Global and local calibration strategies are compared in order to estimate runoff for ungauged portions of the Lake Ontario basin. Overall, streamflow predictions obtained using a global calibration strategy, in which a single parameter set is identified for the whole basin of Lake Ontario, show accuracy comparable to the predictions based on local calibration: the average NSE √ in validation over seven subbasins is 0.73 and 0.61, respectively, for local and global calibrations. Hence, global calibration provides spatially consistent parameter values, robust performance at gauged locations, and reduces the complexity and computation burden of the
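The NSE √ criterion used in this study (Nash–Sutcliffe efficiency computed on square-root-transformed flows) can be written in a few lines. This is the standard definition applied to the transformed series, sketched here for illustration:

```python
import math

def nse_sqrt(obs, sim):
    """Nash-Sutcliffe efficiency on square-rooted flows (NSE sqrt):
    1 - SSE/SST, where both the observed and simulated discharge
    series are sqrt-transformed first. 1.0 is a perfect fit; 0.0 means
    the model is no better than the mean of the (transformed) observations."""
    o = [math.sqrt(q) for q in obs]
    s = [math.sqrt(q) for q in sim]
    mean_o = sum(o) / len(o)
    sse = sum((oi - si) ** 2 for oi, si in zip(o, s))
    sst = sum((oi - mean_o) ** 2 for oi in o)
    return 1.0 - sse / sst
```

The square-root transform reduces the dominance of flood peaks in the objective function, giving low flows more weight than the plain NSE.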
International Nuclear Information System (INIS)
Hubert-Tremblay, Vincent; Archambault, Louis; Tubic, Dragan; Roy, Rene; Beaulieu, Luc
2006-01-01
The purpose of the present study is to introduce a compression algorithm for the CT (computed tomography) data used in Monte Carlo simulations. Performing simulations on CT data implies large computational costs as well as large memory requirements, since the number of voxels in such data typically reaches hundreds of millions. CT data, however, contain homogeneous regions which could be regrouped to form larger voxels without affecting the simulation's accuracy. Based on this property we propose an octree-based compression algorithm: in homogeneous regions the algorithm replaces groups of voxels with a smaller number of larger voxels. This reduces the number of voxels while keeping the critical high-density gradient areas. Results obtained using the present algorithm on both phantom and clinical data show that compression rates up to 75% are possible without losing the dosimetric accuracy of the simulation.
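The octree idea can be sketched as a recursive merge: a cubic region whose densities vary by less than a tolerance becomes one macro-voxel; otherwise it is split into eight octants and the test recurses. This is an illustrative sketch (cubic, power-of-two volumes; the paper's actual criterion and data structures may differ):

```python
import numpy as np

def octree_compress(vol, tol, x0=0, y0=0, z0=0, size=None, out=None):
    """Octree-style merging of a cubic density volume: a region whose
    densities vary by less than `tol` is replaced by one large voxel
    (its mean density); otherwise it is split into eight octants and
    processed recursively. Returns (x, y, z, size, density) macro-voxels."""
    if out is None:
        out = []
        size = vol.shape[0]          # assumes a cubic, power-of-two volume
    block = vol[x0:x0 + size, y0:y0 + size, z0:z0 + size]
    if size == 1 or block.max() - block.min() <= tol:
        out.append((x0, y0, z0, size, float(block.mean())))
        return out
    h = size // 2
    for dx in (0, h):
        for dy in (0, h):
            for dz in (0, h):
                octree_compress(vol, tol, x0 + dx, y0 + dy, z0 + dz, h, out)
    return out
```

A uniform volume collapses to a single macro-voxel, while an isolated high-density voxel forces only its own octant to be subdivided, which is why homogeneous tissue compresses well yet sharp density gradients are preserved.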
An Efficient Monte Carlo Approach to Compute PageRank for Large Graphs on a Single PC
Directory of Open Access Journals (Sweden)
Sonobe Tomohiro
2016-03-01
Full Text Available This paper describes a novel Monte Carlo based random walk to compute PageRanks of nodes in a large graph on a single PC. The target graphs are those whose size exceeds the physical memory. In such an environment, memory management is a difficult task when simulating the random walk among the nodes. We propose a novel method that partitions the graph into subgraphs in order to make them fit into the physical memory, and conducts the random walk for each subgraph. By evaluating the walks lazily, we can conduct the walks only in a subgraph and approximate the random walk by rotating the subgraphs. In computational experiments, the proposed method exhibits good performance for existing large graphs with several passes of the graph data.
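The Monte Carlo estimator that the paper makes out-of-core can be shown in its basic in-memory form: start walks at every node, terminate each step with probability 1 − d, and estimate PageRank from visit frequencies. The partitioning/lazy-evaluation machinery of the paper is not reproduced here; this is only the underlying random-walk estimator, with illustrative parameter values.

```python
import random

def mc_pagerank(graph, num_walks_per_node=100, damping=0.85):
    """Monte Carlo PageRank: start `num_walks_per_node` random walks at
    every node; at each step the walk terminates with probability
    1 - damping, otherwise it follows a random out-edge (jumping to a
    uniformly random node at dangling nodes). PageRank is estimated
    from normalized visit counts. In-memory sketch only; the paper's
    contribution is running such walks out of core by partitioning the
    graph and evaluating walks lazily."""
    nodes = list(graph)
    visits = {v: 0 for v in nodes}
    for start in nodes:
        for _ in range(num_walks_per_node):
            v = start
            while True:
                visits[v] += 1
                if random.random() > damping:
                    break
                v = random.choice(graph[v]) if graph[v] else random.choice(nodes)
    total = sum(visits.values())
    return {v: c / total for v, c in visits.items()}
```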
Directory of Open Access Journals (Sweden)
M. Gålfalk
2018-03-01
Full Text Available The calibration and validation of remote sensing land cover products are highly dependent on accurate field reference data, which are costly and practically challenging to collect. We describe an optical method for collection of field reference data that is a fast, cost-efficient, and robust alternative to field surveys and UAV imaging. A lightweight, waterproof, remote-controlled RGB camera (GoPro HERO4 Silver, GoPro Inc.) was used to take wide-angle images from 3.1 to 4.5 m in altitude using an extendable monopod, as well as representative near-ground (< 1 m) images to identify spectral and structural features that correspond to various land covers in present lighting conditions. A semi-automatic classification was made based on six surface types (graminoids, water, shrubs, dry moss, wet moss, and rock). The method enables collection of detailed field reference data, which is critical in many remote sensing applications, such as satellite-based wetland mapping. The method uses common non-expensive equipment, does not require special skills or training, and is facilitated by a step-by-step manual that is included in the Supplement. Over time a global ground cover database can be built that can be used as reference data for studies of non-forested wetlands from satellites such as Sentinel 1 and 2 (10 m pixel size).
Directory of Open Access Journals (Sweden)
Kowal Robert
2016-12-01
Full Text Available A simple linear regression model is one of the pillars of classic econometrics, and multiple areas of research function within its scope. One of the many fundamental questions concerning the model is proving the efficiency of the most commonly used OLS estimators and examining their properties. The literature offers some partial solutions in this regard, methodically borrowed from the multiple regression model or from a boundary partial model; not everything, however, is complete and consistent. In this paper a completely new scheme is proposed, based on applying the Cauchy–Schwarz inequality to a constraint aggregated from appropriately calibrated secondary unbiasedness constraints; choosing the appropriate calibrator for each variable then leads directly to the efficiency property. The choice of such a calibrator is a separate issue. Owing to the volume and the kinds of calibration involved, these deliberations were divided into several parts. In this one, the efficiency of OLS estimators is proven in a mixed scheme of calibration by averages, that is, the preliminary and most basic frame of the proposed methodology; within this frame, the outlines and general premises underlying further generalizations are established.
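The textbook Cauchy–Schwarz route to OLS efficiency in simple regression runs as follows; this is the standard argument, sketched for orientation, and is not necessarily the paper's exact calibration scheme.

```latex
Let $y_i = \alpha + \beta x_i + \varepsilon_i$ with $\operatorname{Var}(\varepsilon_i)=\sigma^2$,
and let $\hat\beta = \sum_i c_i y_i$ be any linear unbiased estimator of $\beta$.
Unbiasedness forces $\sum_i c_i = 0$ and $\sum_i c_i x_i = 1$, hence
$\sum_i c_i (x_i - \bar x) = 1$. By the Cauchy--Schwarz inequality,
\[
1 = \Bigl(\sum_i c_i (x_i - \bar x)\Bigr)^2
  \le \Bigl(\sum_i c_i^2\Bigr)\Bigl(\sum_i (x_i - \bar x)^2\Bigr),
\]
so
\[
\operatorname{Var}(\hat\beta) = \sigma^2 \sum_i c_i^2
  \ \ge\ \frac{\sigma^2}{\sum_i (x_i - \bar x)^2},
\]
with equality exactly for the OLS weights
$c_i = (x_i - \bar x)\big/\sum_j (x_j - \bar x)^2$.
```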
International Nuclear Information System (INIS)
Mekarski, Pawel; Zhang, Weihua; Liu, Chuanlei; Ungar, Kurt
2017-01-01
The efficiencies of a broad energy germanium detector were characterized down to energies below 10 keV. The K shell X-ray absorption in germanium was seen as a sharp drop in efficiency around 11 keV. This feature was used to determine the thickness of this detector's dead layer. By comparing efficiency simulations with different dead layer thicknesses to those determined from empirical measurements, the best fit dead layer thickness was determined to be 5.7 ± 1.5 µm. Additional measurements, taken with a collimator, made analytical calculations of the efficiency possible and these results also supported the dead layer thickness determination. (author)
Waldmann, Patrik; Hallander, Jon; Hoti, Fabian; Sillanpää, Mikko J
2008-06-01
Accurate and fast computation of quantitative genetic variance parameters is of great importance in both natural and breeding populations. For experimental designs with complex relationship structures it can be important to include both additive and dominance variance components in the statistical model. In this study, we introduce a Bayesian Gibbs sampling approach for estimation of additive and dominance genetic variances in the traditional infinitesimal model. The method can handle general pedigrees without inbreeding. To optimize between computational time and good mixing of the Markov chain Monte Carlo (MCMC) chains, we used a hybrid Gibbs sampler that combines a single site and a blocked Gibbs sampler. The speed of the hybrid sampler and the mixing of the single-site sampler were further improved by the use of pretransformed variables. Two traits (height and trunk diameter) from a previously published diallel progeny test of Scots pine (Pinus sylvestris L.) and two large simulated data sets with different levels of dominance variance were analyzed. We also performed Bayesian model comparison on the basis of the posterior predictive loss approach. Results showed that models with both additive and dominance components had the best fit for both height and diameter and for the simulated data with high dominance. For the simulated data with low dominance, we needed an informative prior to avoid the dominance variance component becoming overestimated. The narrow-sense heritability estimates in the Scots pine data were lower compared to the earlier results, which is not surprising because the level of dominance variance was rather high, especially for diameter. In general, the hybrid sampler was considerably faster than the blocked sampler and displayed better mixing properties than the single-site sampler.
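The conditional structure of such a Gibbs sampler can be illustrated on a toy version of the variance-component model: independent individuals, identity relationship matrix, single-site updates only. This is an assumption-laden stand-in for the study's pedigree-aware hybrid sampler, shown only to make the alternation between location-parameter and variance-parameter conditionals concrete.

```python
import numpy as np

def gibbs_variance_components(y, n_iter=1000, a=2.0, b=1.0, seed=0):
    """Minimal single-site Gibbs sampler for y_i = u_i + e_i with
    u_i ~ N(0, s2u), e_i ~ N(0, s2e), and inverse-gamma IG(a, b) priors
    on both variances (toy model: identity relationship matrix, so only
    the total variance s2u + s2e is well identified by the data).
    Returns posterior draws of (s2u, s2e)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    s2u, s2e = 1.0, 1.0
    draws = []
    for _ in range(n_iter):
        # u | variances, y: normal, shrinking y toward zero
        var = s2u * s2e / (s2u + s2e)
        mean = (s2u / (s2u + s2e)) * y
        u = rng.normal(mean, np.sqrt(var))
        # variances | u, y: inverse-gamma conditionals
        # (if X ~ Gamma(shape, scale=1/rate) then 1/X ~ IG(shape, rate))
        s2u = 1.0 / rng.gamma(a + n / 2, 1.0 / (b + 0.5 * np.sum(u ** 2)))
        s2e = 1.0 / rng.gamma(a + n / 2, 1.0 / (b + 0.5 * np.sum((y - u) ** 2)))
        draws.append((s2u, s2e))
    return np.array(draws)
```

A blocked sampler would update all of u jointly from its multivariate conditional; the study's hybrid alternates the two styles to trade computational cost against MCMC mixing.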
Hubert, S.; Boubault, F.
2018-03-01
In this article, we present the first X-ray calibration performed over the 0.1-1.5 keV spectral range by means of a soft X-ray Manson source and the monochromator SYMPAX. This monochromator, based on a classical Rowland geometry, has the novelty of carrying two detectors simultaneously and moving them under vacuum in front of the exit slit of the monochromatizing stage. This offers the great advantage of performing radiometric measurements of the monochromatic X-ray photon flux with one reference detector while calibrating another X-ray detector. To achieve this, at least one secondary standard must be operated with SYMPAX. This paper thereby presents an efficiency transfer experiment between a secondary standard silicon drift detector (SDD), previously calibrated at the BESSY II synchrotron facility, and another one (the "unknown" SDD), devoted to be used permanently with SYMPAX. The associated calibration process is described together with the corresponding results. Comparison with calibrated measurements performed at the Physikalisch-Technische Bundesanstalt (PTB) Radiometric Laboratory shows a very good agreement between the secondary standard and the unknown SDD.
Farr, Benjamin; Kalogera, Vicky; Luijten, Erik
2014-07-01
We introduce a new Markov-chain Monte Carlo (MCMC) approach designed for the efficient sampling of highly correlated and multimodal posteriors. Parallel tempering, though effective, is a costly technique for sampling such posteriors. Our approach minimizes the use of parallel tempering, only applying it for a short time to build a proposal distribution that is based upon estimation of the kernel density and tuned to the target posterior. This proposal makes subsequent use of parallel tempering unnecessary, allowing all chains to be cooled to sample the target distribution. Gains in efficiency are found to increase with increasing posterior complexity, ranging from tens of percent in the simplest cases to over a factor of 10 for the more complex cases. Our approach is particularly useful in the context of parameter estimation of gravitational-wave signals measured by ground-based detectors, which is currently done through Bayesian inference with MCMC, one of the leading sampling methods. Posteriors for these signals are typically multimodal with strong nonlinear correlations, making sampling difficult. As we enter the advanced-detector era, improved sensitivities and wider bandwidths will drastically increase the computational cost of analyses, demanding more efficient search algorithms to meet these challenges.
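The core of the approach — an independence Metropolis–Hastings step whose proposal is a kernel density estimate built from pilot (e.g. short parallel-tempering) samples — can be sketched in one dimension. This is a schematic reconstruction under stated assumptions (Gaussian kernels, fixed bandwidth, 1-D parameter), not the authors' production code:

```python
import numpy as np

def kde_mh(log_target, pilot, n_steps=5000, bandwidth=0.3, seed=1):
    """Independence Metropolis-Hastings whose proposal is a Gaussian
    kernel density estimate (KDE) built from pilot samples. Because the
    proposal already approximates the posterior, acceptance is high and
    no further tempering is needed. 1-D sketch; `log_target` is the
    log posterior density (up to a constant)."""
    rng = np.random.default_rng(seed)
    pilot = np.asarray(pilot, float)

    def q_logpdf(x):
        # log density of the equal-weight Gaussian mixture centred on pilots
        z = (x - pilot) / bandwidth
        return (np.log(np.mean(np.exp(-0.5 * z * z)))
                - 0.5 * np.log(2.0 * np.pi) - np.log(bandwidth))

    def q_sample():
        # draw from the KDE: random pilot centre plus kernel noise
        return rng.choice(pilot) + bandwidth * rng.standard_normal()

    x = q_sample()
    chain = []
    for _ in range(n_steps):
        xp = q_sample()
        # independence-sampler acceptance ratio: p(x')q(x) / (p(x)q(x'))
        log_alpha = (log_target(xp) - log_target(x)) + (q_logpdf(x) - q_logpdf(xp))
        if np.log(rng.random()) < log_alpha:
            x = xp
        chain.append(x)
    return np.array(chain)
```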
National Aeronautics and Space Administration — Develop and demonstrate a next-generation digitally calibrated, highly scalable, L-band Transmit/Receive (TR) module to enable a precision beamforming SweepSAR...
Monte Carlo Calculation Of HPGe GEM 15P4 Detector Efficiency In The 59 - 2000 keV Energy Range
International Nuclear Information System (INIS)
Trinh Hoai Vinh; Pham Nguyen Thanh Vinh; Hoang Ba Kim; Vo Xuan An
2011-01-01
A precise model of a 15% relative efficiency p-type HPGe GEM 15P4 detector was created for determining peak efficiency curves using the MCNP5 code developed by the Los Alamos National Laboratory. The dependence of peak efficiency on the source-to-detector distance was also investigated. The model was validated by comparing experimental and calculated results using six standard point sources: 133 Ba, 109 Cd, 57 Co, 60 Co, 22 Na and 65 Zn. Additional sources used in the simulations were 241 Am, 75 Se, 113 Sn, 85 Sr, 54 Mn, 137 Cs, 56 Co, 94 Nb, 111 In, 139 Ce, 228 Th, 243 Am, 154 Eu, 152 Eu and 88 Y, according to the IAEA-TECDOC-619 document. All these sources have the same geometry. The ratios of the experimental efficiencies to the calculated ones are higher than 0.94. This result indicates that our simulation program based on the MCNP5 code is good enough for later studies on this HPGe spectrometer, which is located in the Nuclear Physics Laboratory at HCMC University of Pedagogy. (author)
International Nuclear Information System (INIS)
Giles, J.R.
1996-05-01
A Gamma Spectroscopy Logging System (GSLS) has been developed to study sub-surface radionuclide contamination. Absolute efficiency calibration of the GSLS was performed using simple cylindrical borehole geometry. The calibration source incorporated naturally occurring radioactive material (NORM) that emitted photons ranging from 186-keV to 2,614-keV. More complex borehole geometries were modeled using commercially available shielding software. A linear relationship was found between increasing source thickness and relative photon fluence rates at the detector. Examination of varying porosity and moisture content showed that as porosity increases, relative photon fluence rates increase linearly for all energies. Attenuation effects due to iron, water, PVC, and concrete cylindrical shields were found to agree with previous studies. Regression analyses produced energy-dependent equations for efficiency corrections applicable to spectral gamma-ray well logs collected under non-standard borehole conditions
International Nuclear Information System (INIS)
Nikolopoulos, Dimitrios; Kandarakis, Ioannis; Tsantilas, Xenophon; Valais, Ioannis; Cavouras, Dionisios; Louizi, Anna
2006-01-01
The radiation detection efficiency of four scintillators employed, or designed to be employed, in positron emission tomography (PET) was evaluated as a function of the crystal thickness by applying Monte Carlo methods. The scintillators studied were Lu2SiO5 (LSO), LuAlO3 (LuAP), Gd2SiO5 (GSO) and YAlO3 (YAP). Crystal thicknesses ranged from 0 to 50 mm. The study was performed via a previously generated photon transport Monte Carlo code. All photon track and energy histories were recorded, and the energy transferred or absorbed in the scintillator medium was calculated together with the energy redistributed and retransported as secondary characteristic fluorescence radiation. Various parameters were calculated, e.g. the fraction of the incident photon energy absorbed, transmitted or redistributed as fluorescence radiation, the scatter-to-primary ratio, and the photon and energy distribution within each scintillator block. Most significantly, the fraction of the incident photon energy absorbed was found to increase with increasing crystal thickness, tending to form a plateau above the 30 mm thickness. For the LSO, LuAP, GSO and YAP scintillators, respectively, this fraction had the value of 44.8, 36.9 and 45.7% at the 10 mm thickness and 96.4, 93.2 and 96.9% at the 50 mm thickness. Within the plateau area approximately (57-59)%, (59-63)%, (52-63)% and (58-61)% of this fraction was due to scattered and reabsorbed radiation for the LSO, GSO, YAP and LuAP scintillators, respectively. In all cases, a negligible fraction (<0.1%) of the absorbed energy was found to escape the crystal as fluorescence radiation.
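The saturation of absorbed fraction with thickness has a simple first-order analogue in the exponential attenuation law, shown below for orientation. The Monte Carlo study goes well beyond this (tracking scatter, reabsorption and fluorescence escape, which the quoted percentages quantify); the attenuation coefficient in the example is a hypothetical placeholder, not a value from the study.

```python
import math

def first_interaction_fraction(mu_per_mm, thickness_mm):
    """First-order estimate of the fraction of incident photons that
    interact at least once in a scintillator slab: 1 - exp(-mu * t).
    Captures only the qualitative behaviour reported above: a rise
    with thickness that plateaus for thick crystals. `mu_per_mm` is a
    hypothetical linear attenuation coefficient."""
    return 1.0 - math.exp(-mu_per_mm * thickness_mm)
```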
Zambri, Brian
2017-02-22
We propose a computational strategy that falls into the category of prediction/correction iterative-type approaches, for calibrating the hemodynamic model introduced by Friston et al. (2000). The proposed method is employed to estimate consecutively the values of the biophysiological system parameters and the external stimulus characteristics of the model. Numerical results corresponding to both synthetic and real functional Magnetic Resonance Imaging (fMRI) measurements for a single stimulus as well as for multiple stimuli are reported to highlight the capability of this computational methodology to fully calibrate the considered hemodynamic model.
Djellouli, Rabia
2017-01-01
We propose a computational strategy that falls into the category of prediction/correction iterative-type approaches, for calibrating the hemodynamic model introduced by Friston et al. (2000). The proposed method is employed to estimate consecutively the values of the biophysiological system parameters and the external stimulus characteristics of the model. Numerical results corresponding to both synthetic and real functional Magnetic Resonance Imaging (fMRI) measurements for a single stimulus as well as for multiple stimuli are reported to highlight the capability of this computational methodology to fully calibrate the considered hemodynamic model.
Camacho, R; The ATLAS collaboration
2011-01-01
The accurate measurement of jets at high transverse momentum produced in proton-proton collisions at a centre-of-mass energy of √s = 7 TeV is important in many physics analyses at the LHC. Due to the non-compensating nature of the ATLAS calorimeter and to signal losses caused by noise thresholds and dead material, the jet energy needs to be calibrated. Presently, the ATLAS experiment derives the jet calibration from Monte Carlo simulation using a simple correction that relates the true and the reconstructed jet energy. The jet energy scale and its uncertainty are derived from in-situ measurements and variations in the Monte Carlo simulation. Other calibration schemes have also been developed; they use hadronic cell calibrations or the topology of the jet constituents to reduce hadronic fluctuations in the jet response, improving in that way the jet resolution. The performances of the various calibration schemes using data and simulation, the evaluation of the modelling of the properties used to derive each calibration...
TARC: Carlo Rubbia's Energy Amplifier
Laurent Guiraud
1997-01-01
Transmutation by Adiabatic Resonance Crossing (TARC) is Carlo Rubbia's energy amplifier. This CERN experiment demonstrated that long-lived fission fragments, such as 99Tc, can be efficiently destroyed.
Sun, Shuyu
2013-06-01
This paper introduces an efficient technique to generate new molecular simulation Markov chains for different temperature and density conditions, which allows for rapid extrapolation of canonical ensemble averages over a range of temperatures and densities different from the original conditions where a single simulation is conducted. Information obtained from the original simulation is reweighted and even reconstructed in order to extrapolate our knowledge to the new conditions. Our technique allows not only extrapolation to a new temperature or density, but also double extrapolation to both a new temperature and density. The method was implemented for a Lennard-Jones fluid with structureless particles in the single-phase gas region. Extrapolation behaviors as functions of extrapolation ranges were studied. The limits of the extrapolation ranges showed a remarkable capability, especially along isochors where only reweighting is required. Various factors that could affect these limits were investigated and compared. In particular, the limits were shown to be sensitive to the number of particles used and to the starting point where the simulation was originally conducted.
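The reweighting step along an isochor can be sketched directly: stored per-configuration energies from the original simulation at inverse temperature β are reused to estimate an average at a nearby β′, by reweighting each configuration with exp(−(β′−β)E). This is the standard single-histogram reweighting identity, shown for illustration; the paper's reconstruction step for new densities is not reproduced.

```python
import math

def reweight_average(energies, observables, beta_old, beta_new):
    """Single-histogram reweighting of a canonical ensemble average:
    <A>_new = sum_i A_i w_i / sum_i w_i  with  w_i = exp(-(beta_new - beta_old) * E_i),
    where (E_i, A_i) are per-configuration energies and observable values
    sampled at beta_old. Reliable only over a limited extrapolation range,
    as the abstract discusses."""
    dbeta = beta_new - beta_old
    exps = [-dbeta * e for e in energies]
    m = max(exps)                       # subtract max exponent for stability
    w = [math.exp(x - m) for x in exps]
    return sum(a * wi for a, wi in zip(observables, w)) / sum(w)
```

At β′ = β the weights are uniform and the plain sample mean is recovered; increasing β′ shifts weight toward low-energy configurations, which is what limits how far one can extrapolate before the overlap between the sampled and target ensembles collapses.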
Case, J.B.; Buesch, D.C.
2004-01-01
Predictions of waste canister and repository driftwall temperatures as functions of space and time are important to evaluate pre-closure performance of the proposed repository for spent nuclear fuel and high-level radioactive waste at Yucca Mountain, Nevada. Variations in the lithostratigraphic features in densely welded and crystallized rocks of the 12.8-million-year-old Topopah Spring Tuff, especially the porosity resulting from lithophysal cavities, affect thermal properties. A simulated emplacement drift is based on projecting lithophysal cavity porosity values 50 to 800 m from the Enhanced Characterization of the Repository Block cross drift. Lithophysal cavity porosity varies from 0.00 to 0.05 cm3/cm3 in the middle nonlithophysal zone and from 0.03 to 0.28 cm3/cm3 in the lower lithophysal zone. A ventilation model and computer program titled "Monte Carlo Simulation of Ventilation" (MCSIMVENT), which is based on a composite thermal-pulse calculation, simulates statistical variability and uncertainty of rock-mass thermal properties and ventilation performance along a simulated emplacement drift for a pre-closure period of 50 years. Although ventilation efficiency is relatively insensitive to thermal properties, variations in lithophysal porosity along the drift can result in peak driftwall temperatures ranging from 40 to 85 °C for the pre-closure period. Copyright © 2004 by ASME.
Energy Technology Data Exchange (ETDEWEB)
Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou (China)]; Li, Weixuan [Pacific Northwest National Laboratory, Richland, Washington (USA)]; Lin, Guang [Department of Mathematics and School of Mechanical Engineering, Purdue University, West Lafayette, Indiana (USA)]; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou (China)]; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside, California (USA)]
2017-03-01
In decision-making for groundwater management and contamination remediation, it is important to accurately evaluate the probability of the occurrence of a failure event. For small failure probability analysis, a large number of model evaluations are needed in the Monte Carlo (MC) simulation, which is impractical for CPU-demanding models. One approach to alleviate the computational cost caused by the model evaluations is to construct a computationally inexpensive surrogate model instead. However, using a surrogate approximation can cause an extra error in the failure probability analysis. Moreover, constructing accurate surrogates is challenging for high-dimensional models, i.e., models containing many uncertain input parameters. To address these issues, we propose an efficient two-stage MC approach for small failure probability analysis in high-dimensional groundwater contaminant transport modeling. In the first stage, a low-dimensional representation of the original high-dimensional model is sought with Karhunen–Loève expansion and sliced inverse regression jointly, which allows for the easy construction of a surrogate with polynomial chaos expansion. Then a surrogate-based MC simulation is implemented. In the second stage, the small number of samples that are close to the failure boundary are re-evaluated with the original model, which corrects the bias introduced by the surrogate approximation. The proposed approach is tested with a numerical case study and is shown to be 100 times faster than the traditional MC approach in achieving the same level of estimation accuracy.
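The two-stage logic — screen all samples with a cheap surrogate, then re-evaluate only the samples near the failure boundary with the expensive model — can be sketched generically. The surrogate-construction pipeline of the paper (Karhunen–Loève expansion, sliced inverse regression, polynomial chaos) is not reproduced; the functions and the `band` parameter below are illustrative assumptions.

```python
import numpy as np

def two_stage_failure_prob(model, surrogate, sample, n=100000, band=0.1, seed=0):
    """Two-stage Monte Carlo estimate of a small failure probability
    P[g(x) > 0]. Stage 1: evaluate the cheap surrogate on all n samples.
    Stage 2: re-evaluate with the expensive model only those samples whose
    surrogate response lies within `band` of the failure threshold, thereby
    correcting the surrogate's bias near the boundary. Returns the estimate
    and the number of expensive re-evaluations."""
    rng = np.random.default_rng(seed)
    xs = sample(rng, n)
    g_s = np.array([surrogate(x) for x in xs])   # stage 1: cheap screening
    fail = g_s > 0.0
    near = np.abs(g_s) <= band                   # ambiguous, near-boundary samples
    for i in np.flatnonzero(near):
        fail[i] = model(xs[i]) > 0.0             # stage 2: expensive correction
    return fail.mean(), int(near.sum())
```

If the surrogate error is smaller than `band`, every possibly misclassified sample is re-checked, so the estimate matches a full expensive-model MC run while paying the model's cost on only a small fraction of the samples.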
International Nuclear Information System (INIS)
Christmas, P.; Nichols, A.L.; Lemmel, H.D.
1989-07-01
The final official meeting of the IAEA Coordinated Research Programme on the Measurement and Evaluation of X- and Gamma-ray Standards for Detector Efficiency Calibration was held in Braunschweig from 31 May to 2 June 1989. Work undertaken by the participants was reviewed in detail, and actions were agreed to resolve specific issues and problems. Initial steps were also made to establish a format and procedure for the preparation by mid-1990 of an IAEA Technical Reports Series booklet; the measurements and recommended data will be listed, and an IAEA data file established for issue to all interested organisations. (author). 3 tabs
Czech Academy of Sciences Publication Activity Database
Kučera, Jan; Kubešová, Marie; Lebeda, Ondřej
2018-01-01
Roč. 315, č. 3 (2018), s. 671-675 ISSN 0236-5731. [7th International K0-Users Workshop. Montreal, 03.09.2017-08.09.2017] R&D Projects: GA ČR(CZ) GBP108/12/G108; GA MŠk LM2015056 Institutional support: RVO:61389005 Keywords: k(0)-INAA * Ca determination * HPGe detector * High-energy efficiency calibration * Co-56 activity standard Subject RIV: CB - Analytical Chemistry, Separation OBOR OECD: Analytical chemistry Impact factor: 1.282, year: 2016
Calibration of HPGe detector for flowing sample neutron activation analysis
International Nuclear Information System (INIS)
Abdo, F.S.; Atomic Energy Authority, Cairo; Mohamed Soliman; Ahmed, M.M.; Rizk, R.A.M.; Megahid, R.M.
2016-01-01
This work is concerned with the calibration of the HPGe detector used in the flowing sample neutron activation analysis technique. The optimum counting configuration and half-life based correction factors were estimated using Monte Carlo computer simulations. Considering detection efficiency, sample volume, and the flow type around the detector, the optimum geometry was achieved using a 4 mm diameter hose rolled in a spiral shape around the detector. The derived results showed that the half-life based efficiency correction factors depend strongly on the sample flow rate and the isotope half-life. (author)
International Nuclear Information System (INIS)
Shikaze, Yoshiaki; Tanimura, Yoshihiko; Saegusa, Jun; Tsutsumi, Masahiro
2010-01-01
Precise calibration of monitors and dosimeters for use with high-energy neutrons requires reliable and accurate neutron fluences evaluated at a reference point. A highly efficient Proton Recoil counter Telescope (PRT) was developed to make absolute measurements of neutron fluences at a reference point in quasi-monoenergetic neutron fields. The relatively large design of the PRT componentry and the relatively thick (approximately 2 mm) polyethylene converter contributed to a high detection efficiency at the reference point over a large irradiation area at a long distance from the target. The polyethylene converter thickness was adjusted to maintain the same carbon density per unit area as the graphite converter, for easy background subtraction. The high detection efficiency and thickness adjustment enabled efficient absolute measurements of neutron fluences with sufficient statistical precision over a short period of time. The neutron detection efficiencies of the PRT were evaluated with the MCNPX code as 2.61×10⁻⁶, 2.16×10⁻⁶ and 1.14×10⁻⁶ for the respective neutron peak energies of 45, 60 and 75 MeV. From analysis of the measured data and these detection efficiencies, the neutron fluences were evaluated with an uncertainty of within 6.5%. The PRT was also designed to be capable of simultaneously obtaining TOF data, which increased the reliability of the neutron fluence measurements and provided useful information for interpreting the source of proton events.
Biegun, A K; van Goethem, M-J; van der Graaf, E R; van Beuzekom, M; Koffeman, E N; Nakaji, T; Takatsu, J; Visser, J; Brandenburg, S
2017-09-01
Proton radiography is a novel imaging modality that allows direct measurement of the proton energy loss in various tissues. Currently, due to the conversion of so-called Hounsfield units from X-ray Computed Tomography (CT) into relative proton stopping powers (RPSP), the uncertainties of RPSP are 3-5% or higher; these need to be reduced to 1% to make proton treatment plans more accurate. In this work, we simulated a proton radiography system with position-sensitive detectors (PSDs) and a residual energy detector (RED). The simulations were built using Geant4, a Monte Carlo simulation toolkit. A phantom consisting of several materials was placed between PSDs of various Water Equivalent Thicknesses (WET), corresponding to an ideal detector, a gaseous detector, and silicon and plastic scintillator detectors. The energy loss radiograph and the scattering angle distributions of the protons were studied for proton beam energies of 150 MeV, 190 MeV and 230 MeV. To improve the image quality deteriorated by multiple Coulomb scattering (MCS), protons with small angles were selected. Two ways of calculating a scattering angle were considered, using the proton's direction and position. A scattering angle cut of 8.7 mrad was applied, giving an optimal balance between quality and efficiency of the radiographic image. For the three proton beam energies, the number of protons used in image reconstruction with the direction method was half the number of protons kept using the position method. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
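The angle-cut filtering described above can be illustrated with toy tracks: compute the scattering angle either from the proton's direction vectors or from its hit positions on the tracking planes, then keep protons below the 8.7 mrad cut. Every number here (deflection width, detector spacing, positions) is an assumption for illustration, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
CUT_MRAD = 8.7                        # angle cut from the paper, in mrad

# Toy proton tracks (assumed data): beam along +z, small random deflections
# with ~5 mrad rms standing in for multiple Coulomb scattering.
n = 10_000
theta = np.abs(rng.normal(0.0, 5e-3, n))       # polar deflection angle (rad)
phi = rng.uniform(0.0, 2.0 * np.pi, n)
d_in = np.tile([0.0, 0.0, 1.0], (n, 1))        # upstream unit directions
d_out = np.column_stack([np.sin(theta) * np.cos(phi),
                         np.sin(theta) * np.sin(phi),
                         np.cos(theta)])        # downstream unit directions

# "Direction" method: angle between upstream and downstream direction vectors.
cosang = np.clip(np.sum(d_in * d_out, axis=1), -1.0, 1.0)
ang_dir = np.arccos(cosang)

# "Position" method: angle of the entry-to-exit chord relative to the beam
# axis, from hit positions on tracking planes separated by L (assumed 0.3 m).
L = 0.3
pos_in = rng.uniform(-0.01, 0.01, (n, 2))      # (x, y) hits on upstream PSD
pos_out = pos_in + d_out[:, :2] / d_out[:, 2:] * L
chord = np.column_stack([pos_out - pos_in, np.full(n, L)])
ang_pos = np.arccos(np.clip(chord[:, 2] / np.linalg.norm(chord, axis=1),
                            -1.0, 1.0))

keep = ang_dir < CUT_MRAD * 1e-3
print(f"kept {keep.mean():.1%} of protons under the {CUT_MRAD} mrad cut")
```

With straight-line propagation between the planes the two angle definitions coincide; in a real measurement they differ because of scattering inside the phantom, which is what makes the comparison in the paper meaningful.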
Energy Technology Data Exchange (ETDEWEB)
Lepy, M.Ch
2000-07-01
The EUROMET project 428 examines efficiency transfer computation for Ge gamma-ray spectrometers when the efficiency is known for a reference point-source geometry in the 60 keV to 2 MeV energy range. Different methods are used for this, such as Monte Carlo simulation or semi-empirical computation. The exercise compares the application of these methods to the same selected experimental cases to determine their limitations of use versus the requested accuracy. To examine the results carefully and derive information for improving the computation codes, the study was limited to a few simple cases, starting from an experimental efficiency calibration for a point source at 10 cm source-to-detector distance. The first part concerns the simplest case of geometry transfer, i.e., using point sources at three source-to-detector distances: 2, 5 and 20 cm; the second part deals with transfer from point-source geometry to cylindrical geometry with three different matrices. The general results show that the deviations between the computed results and the measured efficiencies are for the most part within 10%. The quality of the results is rather inhomogeneous and shows that these codes cannot be used directly for metrological purposes. However, most of them are operational for routine measurements, where efficiency uncertainties of 5-10% can be sufficient. (author)
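A minimal sketch of what an efficiency transfer computation does geometrically: estimate the fractional solid angle seen from the source at each distance by Monte Carlo, and form the ratio to the 10 cm reference geometry. This toy ignores attenuation, scattering and the intrinsic detector response, which the codes compared in the project do model; the detector radius is an assumed value.

```python
import numpy as np

rng = np.random.default_rng(7)

def geometric_efficiency(d_cm, r_cm=3.0, n=200_000):
    """MC estimate of the fractional solid angle subtended by a disk detector
    face (assumed radius r_cm) seen from an on-axis point source at d_cm."""
    cos_t = rng.uniform(-1.0, 1.0, n)          # isotropic emission directions
    sin_t = np.sqrt(1.0 - cos_t**2)
    # A ray hits the face if it points forward and tan(theta) < r/d.
    hits = (cos_t > 0.0) & (d_cm * sin_t < r_cm * cos_t)
    return hits.mean()

ref = geometric_efficiency(10.0)               # reference geometry at 10 cm
for d in (2.0, 5.0, 20.0):
    f = geometric_efficiency(d) / ref          # transfer factor vs. reference
    print(f"d = {d:4.1f} cm: transfer factor ~ {f:.2f}")
```

Taking the ratio to the reference geometry is the essential trick: many detector-specific uncertainties cancel in the ratio, which is why transfer methods can outperform absolute efficiency computation.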
Energy Technology Data Exchange (ETDEWEB)
Leblanc, B.
2002-03-01
Molecular simulation aims at simulating particles in interaction, describing a physico-chemical system. When considering Markov Chain Monte Carlo sampling in this context, we often meet the same problem of statistical efficiency as with Molecular Dynamics for the simulation of complex molecules (polymers, for example). The search for a correct sampling of the space of possible configurations with respect to the Boltzmann-Gibbs distribution is directly related to the statistical efficiency of such algorithms (i.e., the ability to rapidly provide uncorrelated states covering all of configuration space). We investigated how to improve this efficiency with the help of Artificial Evolution (AE). AE algorithms form a class of stochastic optimization algorithms inspired by Darwinian evolution. Efficiency measures that can be turned into efficiency criteria were first sought, before identifying parameters that could be optimized. Relative frequencies for each type of Monte Carlo move, usually chosen empirically within reasonable ranges, were considered first. We combined parallel simulations with a 'genetic server' in order to dynamically improve the quality of the sampling as the simulations progress. Our results show that, in comparison with some reference settings, it is possible to improve the quality of samples with respect to the chosen criterion. The same algorithm was applied to improve the Parallel Tempering technique, optimizing at the same time the relative frequencies of Monte Carlo moves and the relative frequencies of swapping between sub-systems simulated at different temperatures. Finally, hints for further research on optimizing the choice of additional temperatures are given. (author)
Sahmani, S; Fattahi, A M
2017-08-01
New ceramic materials containing nanoscaled crystalline phases are a major object of scientific interest due to attractive advantages such as biocompatibility. Zirconia, a transparent glass ceramic, is one of the most useful binary oxides in a wide range of applications. In the present study, a new size-dependent plate model is constructed to predict the nonlinear axial instability characteristics of zirconia nanosheets under axial compressive load. To this end, the nonlocal continuum elasticity of Eringen is incorporated into a refined exponential shear deformation plate theory. A perturbation-based solution process is used to derive explicit expressions for the nonlocal equilibrium paths of axially loaded nanosheets. Molecular dynamics (MD) simulations are then performed for the axial instability response of square zirconia nanosheets with different side lengths, and the results are matched with those of the developed nonlocal plate model to capture the proper value of the nonlocal parameter. It is demonstrated that the calibrated nonlocal plate model, with a nonlocal parameter equal to 0.37 nm, predicts the axial instability characteristics of zirconia nanosheets very well, with accuracy comparable to that of the MD simulations. Copyright © 2017 Elsevier Inc. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Brockway, D.; Soran, P.; Whalen, P.
1985-01-01
A Monte Carlo algorithm to efficiently calculate static alpha eigenvalues, N = n e^(αt), for supercritical systems has been developed and tested. A direct Monte Carlo approach to calculating a static alpha is to simply follow the buildup in time of neutrons in a supercritical system and evaluate the logarithmic derivative of the neutron population with respect to time. This procedure is expensive, and the solution is very noisy and almost useless for a system near critical. The modified approach is to convert the time-dependent problem to a static alpha-eigenvalue problem and regress alpha on solutions of a k-eigenvalue problem. In practice, this procedure is much more efficient than the direct calculation, and produces much more accurate results. Because the Monte Carlo codes are intrinsically three-dimensional and use elaborate continuous-energy cross sections, this technique is now used as a standard for evaluating other calculational techniques in odd geometries or with group cross sections.
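The conversion from the time-dependent problem to a static eigenvalue problem can be illustrated with a toy one-group, infinite-medium model (hypothetical constants, not from the report): adding a time-absorption term α/v makes k a function of α, and the static alpha is the value at which k(α) = 1.

```python
# Toy one-group, infinite-medium constants (hypothetical values).
nu_sigma_f = 0.16    # production cross section, nu*Sigma_f (1/cm)
sigma_a = 0.15       # absorption cross section, Sigma_a (1/cm)
v = 2.2e5            # neutron speed (cm/s)

def k_of_alpha(alpha):
    # Adding the "time absorption" term alpha/v converts the time-dependent
    # problem into a static k-eigenvalue problem.
    return nu_sigma_f / (sigma_a + alpha / v)

# Find the alpha with k(alpha) = 1 by bisection (the regression of alpha on
# k-eigenvalue solutions, trivial here because k(alpha) is in closed form).
lo, hi = -1.0e4, 1.0e6
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if k_of_alpha(mid) > 1.0:
        lo = mid          # k too large: more time absorption needed
    else:
        hi = mid
alpha = 0.5 * (lo + hi)
print(f"static alpha: {alpha:.1f} 1/s")   # analytic: v*(nu_sigma_f - sigma_a)
```

In a real code each k(α) evaluation is itself a Monte Carlo k-eigenvalue calculation, so the regression is over noisy points rather than an exact curve, but the structure of the search is the same.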
Comparison between two calibration models of a measurement system for thyroid monitoring
International Nuclear Information System (INIS)
Venturini, Luzia
2005-01-01
This paper shows a comparison between two theoretical calibrations that use two mathematical models to represent the neck region. In the first model, the thyroid is considered to be just the region limited by two concentric cylinders whose dimensions are those of the trachea and the neck. The second model uses functional forms to obtain a better representation of the thyroid geometry. Efficiency values are obtained using Monte Carlo simulation. (author)
Kalos, Melvin H
2008-01-01
This introduction to Monte Carlo methods seeks to identify and study the unifying elements that underlie their effective application. Initial chapters provide a short treatment of the probability and statistics needed as background, enabling those without experience in Monte Carlo techniques to apply these ideas to their research. The book focuses on two basic themes: the first is the importance of random walks as they occur both in natural stochastic systems and in their relationship to integral and differential equations. The second theme is that of variance reduction in general, and importance sampling in particular, as a technique for efficient use of the methods. Random walks are introduced with an elementary example in which the modeling of radiation transport arises directly from a schematic probabilistic description of the interaction of radiation with matter. Building on this example, the relationship between random walks and integral equations is outlined
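The second theme, importance sampling as a variance-reduction technique, is easy to demonstrate on a rare-event estimate: for P(X > 4) with X standard normal, naive sampling almost never lands in the region of interest, while sampling from a shifted density and reweighting recovers the answer with a modest sample size. This is a generic textbook example, not one taken from the book.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Naive Monte Carlo: almost no samples land in the rare region x > 4.
x = rng.standard_normal(n)
p_naive = np.mean(x > 4.0)

# Importance sampling: draw from N(4, 1), reweight by the likelihood ratio
# w(y) = phi(y) / phi(y - 4) = exp(-4y + 8).
y = rng.normal(4.0, 1.0, n)
w = np.exp(-4.0 * y + 8.0)
p_is = np.mean((y > 4.0) * w)

print(f"naive: {p_naive:.2e}, importance sampling: {p_is:.3e} "
      f"(exact ~ 3.167e-5)")
```

The shifted density concentrates samples where the integrand actually contributes, which is precisely the "efficient use of the methods" the book's variance-reduction chapters develop.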
Parallelizing Monte Carlo with PMC
International Nuclear Information System (INIS)
Rathkopf, J.A.; Jones, T.R.; Nessett, D.M.; Stanberry, L.C.
1994-11-01
PMC (Parallel Monte Carlo) is a system of generic interface routines that allows easy porting of Monte Carlo packages of large-scale physics simulation codes to Massively Parallel Processor (MPP) computers. By loading various versions of PMC, simulation code developers can configure their codes to run in several modes: serial, Monte Carlo runs on the same processor as the rest of the code; parallel, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on other MPP processor(s); distributed, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on a different machine. This multi-mode approach allows maintenance of a single simulation code source regardless of the target machine. PMC handles passing of messages between nodes on the MPP, passing of messages between a different machine and the MPP, distributing work between nodes, and providing independent, reproducible sequences of random numbers. Several production codes have been parallelized under the PMC system. Excellent parallel efficiency in both the distributed and parallel modes results if sufficient workload is available per processor. Experiences with a Monte Carlo photonics demonstration code and a Monte Carlo neutronics package are described
Zambri, Brian
2015-11-05
Our aim is to propose a numerical strategy for retrieving accurately and efficiently the biophysiological parameters as well as the external stimulus characteristics corresponding to the hemodynamic mathematical model that describes changes in blood flow and blood oxygenation during brain activation. The proposed method employs the TNM-CKF method developed in [1], but in a prediction/correction framework. We present numerical results using both real and synthetic functional Magnetic Resonance Imaging (fMRI) measurements to highlight the performance characteristics of this computational methodology. © 2015 IEEE.
Monte Carlo simulation for IRRMA
International Nuclear Information System (INIS)
Gardner, R.P.; Liu Lianyan
2000-01-01
Monte Carlo simulation is fast becoming a standard approach for many radiation applications that were previously treated almost entirely by experimental techniques. This is certainly true for Industrial Radiation and Radioisotope Measurement Applications - IRRMA. The reasons for this include: (1) the increased cost and inadequacy of experimentation for design and interpretation purposes; (2) the availability of low cost, large memory, and fast personal computers; and (3) the general availability of general purpose Monte Carlo codes that are increasingly user-friendly, efficient, and accurate. This paper discusses the history and present status of Monte Carlo simulation for IRRMA including the general purpose (GP) and specific purpose (SP) Monte Carlo codes and future needs - primarily from the experience of the authors
Absolute calibration in vivo measurement systems
International Nuclear Information System (INIS)
Kruchten, D.A.; Hickman, D.P.
1991-02-01
Lawrence Livermore National Laboratory (LLNL) is currently investigating a new method for obtaining absolute calibration factors for radiation measurement systems used to measure internally deposited radionuclides in vivo. Absolute calibration of in vivo measurement systems will eliminate the need to generate a series of human surrogate structures (i.e., phantoms) for calibrating in vivo measurement systems. The absolute calibration of in vivo measurement systems utilizes magnetic resonance imaging (MRI) to define physiological structure, size, and composition. The MRI image provides a digitized representation of the physiological structure, which allows for any mathematical distribution of radionuclides within the body. Using Monte Carlo transport codes, the emission spectrum from the body is predicted. The in vivo measurement equipment is calibrated using the Monte Carlo code and adjusting for the intrinsic properties of the detection system. The calibration factors are verified using measurements of existing phantoms and previously obtained measurements of human volunteers. 8 refs
DEFF Research Database (Denmark)
Heydorn, Kaj; Anglov, Thomas
2002-01-01
Methods recommended by the International Standardization Organisation and Eurachem are not satisfactory for the correct estimation of calibration uncertainty. A novel approach is introduced and tested on actual calibration data for the determination of Pb by ICP-AES. The improved calibration...
Rose, Matthias; Bjorner, Jakob B; Gandek, Barbara; Bruce, Bonnie; Fries, James F; Ware, John E
2014-05-01
To document the development and psychometric evaluation of the Patient-Reported Outcomes Measurement Information System (PROMIS) Physical Function (PF) item bank and static instruments. The items were evaluated using qualitative and quantitative methods. A total of 16,065 adults answered item subsets (n>2,200/item) on the Internet, with oversampling of the chronically ill. Classical test and item response theory methods were used to evaluate 149 PROMIS PF items plus 10 Short Form-36 and 20 Health Assessment Questionnaire-Disability Index items. A graded response model was used to estimate item parameters, which were normed to a mean of 50 (standard deviation [SD]=10) in a US general population sample. The final bank consists of 124 PROMIS items covering upper, central, and lower extremity functions and instrumental activities of daily living. In simulations, a 10-item computerized adaptive test (CAT) eliminated floor and decreased ceiling effects, achieving higher measurement precision than any comparable length static tool across four SDs of the measurement range. Improved psychometric properties were transferred to the CAT's superior ability to identify differences between age and disease groups. The item bank provides a common metric and can improve the measurement of PF by facilitating the standardization of patient-reported outcome measures and implementation of CATs for more efficient PF assessments over a larger range. Copyright © 2014. Published by Elsevier Inc.
Adjoint electron Monte Carlo calculations
International Nuclear Information System (INIS)
Jordan, T.M.
1986-01-01
Adjoint Monte Carlo is the most efficient method for accurate analysis of space systems exposed to natural and artificially enhanced electron environments. Recent adjoint calculations for isotropic electron environments include: comparative data for experimental measurements on electronics boxes; benchmark problem solutions for comparing total dose prediction methodologies; preliminary assessment of sectoring methods used during space system design; and total dose predictions on an electronics package. Adjoint Monte Carlo, forward Monte Carlo, and experiment are in excellent agreement for electron sources that simulate space environments. For electron space environments, adjoint Monte Carlo is clearly superior to forward Monte Carlo, requiring one to two orders of magnitude less computer time for relatively simple geometries. The solid-angle sectoring approximations used for routine design calculations can err by more than a factor of 2 on dose in simple shield geometries. For critical space systems exposed to severe electron environments, these potential sectoring errors demand the establishment of large design margins and/or verification of shield design by adjoint Monte Carlo/experiment
Energy Technology Data Exchange (ETDEWEB)
Gallardo, S.; Querol, A.; Rodenas, J.; Verdu, G.
2014-07-01
In this paper we propose a simulation model using the MCNP5 code and a mesh tally to improve the simulated efficiency of the detector in the energy range from 50 to 2000 keV. The mesh is built with the FMESH tally card of MCNP5, which allows cells of a few microns. The photon and electron flux is calculated in the different cells of the mesh, which is superimposed on the detector geometry. The variation of the efficiency (related to the variation of the energy deposited in the active volume) is analyzed. (Author)
International Nuclear Information System (INIS)
Takano, M.; Masukawa, F.; Naito, Y.
1994-01-01
The MCACE code, a radiation shielding analysis code by the Monte Carlo method is examined and modified to execute on a parallel computer. The parallelized MCACE code has achieved a speed-up of 52.5 times when random walk processes are executed by 128 batches of 400 particles on the parallel computer AP-1000 equipped with 64 cell processors. In order to achieve high performance, the number of particles for each batch must be large enough to reduce a fluctuation among the execution times in the cell processors, which are mainly caused by differences in random walk processes. (authors). 3 refs., 2 figs., 1 tab
Fast sequential Monte Carlo methods for counting and optimization
Rubinstein, Reuven Y; Vaisman, Radislav
2013-01-01
A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the
International Nuclear Information System (INIS)
Farah, Jad
2011-01-01
To optimize the monitoring of female workers using in vivo spectrometry measurements, it is necessary to correct the typical calibration coefficients obtained with the Livermore male physical phantom. To do so, numerical calibrations based on the use of Monte Carlo simulations combined with anthropomorphic 3D phantoms were used. Such computational calibrations require on the one hand the development of representative female phantoms of different size and morphologies and on the other hand rapid and reliable Monte Carlo calculations. A library of female torso models was hence developed by fitting the weight of internal organs and breasts according to the body height and to relevant plastic surgery recommendations. This library was next used to realize a numerical calibration of the AREVA NC La Hague in vivo counting installation. Moreover, the morphology-induced counting efficiency variations with energy were put into equation and recommendations were given to correct the typical calibration coefficients for any monitored female worker as a function of body height and breast size. Meanwhile, variance reduction techniques and geometry simplification operations were considered to accelerate simulations. Furthermore, to determine the activity mapping in the case of complex contaminations, a method that combines Monte Carlo simulations with in vivo measurements was developed. This method consists of realizing several spectrometry measurements with different detector positioning. Next, the contribution of each contaminated organ to the count is assessed from Monte Carlo calculations. The in vivo measurements realized at LEDI, CIEMAT and KIT have demonstrated the effectiveness of the method and highlighted the valuable contribution of Monte Carlo simulations for a more detailed analysis of spectrometry measurements. Thus, a more precise estimate of the activity distribution is given in the case of an internal contamination. (author)
Trzcinski, A; Müller, W F J; Trautmann, W; Zwieglinski, B; Auger, G; Bacri, C O; Begemann-Blaich, M L; Bellaize, N; Bittiger, R; Bocage, F; Borderie, B; Bougault, R; Bouriquet, B; Buchet, P; Charvet, J L; Chbihi, A; Dayras, R; Doré, D; Durand, D; Frankland, J D; Galíchet, E; Gourio, D; Guinet, D; Hudan, S; Hurst, B; Lautesse, P; Lavaud, F; Laville, J L; Leduc, C; Lefèvre, A; Legrain, R; López, O; Lynen, U; Nalpas, L; Orth, H; Plagnol, E; Rosato, E; Saija, A; Schwarz, C; Sfienti, C; Steckmeyer, J C; Tabacaru, G; Tamain, B; Turzó, K; Vient, E; Vigilante, M; Volant, C
2003-01-01
An efficient method of energy scale calibration for the CsI(Tl) modules of the INDRA multidetector (rings 6-12), using elastic and inelastic ¹²C+¹H scattering at E(¹²C)=30 MeV per nucleon, is presented. Background-free spectra for the binary channels are generated by requiring the coincident detection of the light and heavy ejectiles. The gain parameter of the calibration curve is obtained by fitting the proton total charge spectra to the spectra predicted with Monte-Carlo simulations using tabulated cross section data. The method has been applied in multifragmentation experiments with INDRA at GSI.
Is Monte Carlo embarrassingly parallel?
International Nuclear Information System (INIS)
Hoogenboom, J. E.
2012-01-01
Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
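The rendezvous the author identifies, collecting the full fission bank at the end of each cycle to estimate k_eff and renormalize the population, is visible in the structure of even a toy generational calculation (serial here; hypothetical multiplication constant, no geometry or transport):

```python
import numpy as np

rng = np.random.default_rng(5)
k_true = 1.2                  # hypothetical multiplication factor of the system
n_source = 10_000
k_cycles = []

for cycle in range(30):
    # Transport phase: each source neutron yields a random number of fission
    # neutrons (a Poisson stand-in for real random-walk histories).
    bank = rng.poisson(k_true, n_source)
    # --- Rendezvous point: in a parallel run, every processor must stop
    # --- here so the full fission bank can be collected; the k estimate
    # --- below feeds population control for the next cycle.
    k_cycles.append(bank.sum() / n_source)
    # Population control: the bank is renormalized back to n_source sites
    # (in a real code the source sites themselves would be resampled here).

k_eff = np.mean(k_cycles[10:])     # discard early cycles as inactive
print(f"k_eff estimate: {k_eff:.3f}")
```

The transport phase is embarrassingly parallel; the per-cycle gather is not, and its cost grows with processor count, which is the scaling limit the abstract describes.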
Variational Monte Carlo Technique
Indian Academy of Sciences (India)
ias
Variational Monte Carlo Technique: Ground State Energies of Quantum Mechanical Systems. Sukanta Deb. Resonance, August 2014 (General Article). Keywords: variational methods, Monte Carlo techniques, harmonic oscillators, quantum mechanical systems. Sukanta Deb is an Assistant Professor in the …
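The technique named in this article title can be shown concretely for the 1D harmonic oscillator (a standard textbook exercise, not code from the article): sample |ψ|² with Metropolis for a trial wavefunction ψ_α(x) = exp(-αx²) and average the local energy E_L(x) = α + x²(1/2 - 2α²); the variational minimum ⟨E⟩ = 1/2 occurs at α = 1/2, the exact ground state.

```python
import numpy as np

rng = np.random.default_rng(11)

def vmc_energy(alpha, n_steps=100_000, delta=1.0):
    """Metropolis VMC estimate of <E> for the trial wavefunction
    psi(x) = exp(-alpha x^2) in the 1D harmonic oscillator (hbar=m=omega=1)."""
    x = 0.0
    e_sum = 0.0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-delta, delta)
        # Accept with probability |psi(x_new)|^2 / |psi(x)|^2.
        if rng.random() < np.exp(-2.0 * alpha * (x_new**2 - x**2)):
            x = x_new
        # Local energy: E_L(x) = alpha + x^2 * (1/2 - 2 alpha^2).
        e_sum += alpha + x * x * (0.5 - 2.0 * alpha**2)
    return e_sum / n_steps

for a in (0.3, 0.5, 0.7):
    print(f"alpha = {a}: <E> ~ {vmc_energy(a):.4f}")
# The minimum <E> = 0.5 is attained at alpha = 0.5.
```

At α = 1/2 the local energy is constant, so the variance of the estimate vanishes; this zero-variance property at the exact eigenstate is the usual diagnostic for the quality of a trial wavefunction.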
Indian Academy of Sciences (India)
Keywords: Gibbs sampling, Markov Chain Monte Carlo, Bayesian inference, stationary distribution, convergence, image restoration. Arnab Chakraborty. We describe the mathematics behind the Markov Chain Monte Carlo method of ...
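Gibbs sampling, the first keyword, reduces to a few lines for a bivariate normal target, where both full conditionals are known in closed form (a standard illustration, unrelated to the image-restoration application mentioned):

```python
import numpy as np

rng = np.random.default_rng(8)
rho = 0.8                        # target correlation of the bivariate normal
sd = np.sqrt(1.0 - rho**2)       # std dev of each full conditional

x = y = 0.0
samples = []
for _ in range(60_000):
    # Alternately draw each coordinate from its full conditional:
    #   x | y ~ N(rho*y, 1 - rho^2),  y | x ~ N(rho*x, 1 - rho^2)
    x = rng.normal(rho * y, sd)
    y = rng.normal(rho * x, sd)
    samples.append((x, y))

xs, ys = np.array(samples[5_000:]).T      # discard burn-in
print(f"empirical correlation: {np.corrcoef(xs, ys)[0, 1]:.3f}")
```

The chain's stationary distribution is the joint normal, so after burn-in the empirical correlation converges to ρ; the stronger the correlation, the slower the convergence, which is the convergence issue the article's keywords point to.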
DEFF Research Database (Denmark)
Kock, Carsten Weber; Vesth, Allan
This Site Calibration report describes the results of a measured site calibration for a site in Denmark. The calibration was carried out by DTU Wind Energy in accordance with Ref.[3] and Ref.[4]. The measurement period is given. The site calibration is carried out before a power performance measurement on a given turbine, to clarify the influence of the terrain on the ratio between the wind speed at the center of the turbine hub and at the met mast. The wind speed at the turbine is measured by a temporary mast placed at the foundation for the turbine. The site and measurement equipment are described in detail in [1] and [2]. All parts of the sensors and the measurement system were installed by DTU Wind Energy.
Strategies for CT tissue segmentation for Monte Carlo calculations in nuclear medicine dosimetry
DEFF Research Database (Denmark)
Braad, Poul-Erik; Andersen, Thomas; Hansen, Søren Baarsgaard
2016-01-01
Purpose: CT images are used for patient-specific Monte Carlo treatment planning in radionuclide therapy. The authors investigated the impact of tissue classification, CT image segmentation, and CT errors on Monte Carlo calculated absorbed dose estimates in nuclear medicine. Methods: CT errors … calibration of the CT number-to-density conversion ramp. Tissue segmentation by a 13-tissue CT conversion ramp, calibrated by a stoichiometric method, resulted in low (… isotopes. Conclusions: A calibrated CT scanner-specific conversion ramp is required for accurate …
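The CT number-to-density conversion ramp at the heart of this study is, operationally, a piecewise-linear lookup plus an HU-interval tissue classification. The sketch below uses illustrative breakpoints and thresholds (assumptions for demonstration, not the authors' calibrated 13-tissue stoichiometric ramp):

```python
import numpy as np

# Piecewise-linear CT number-to-mass-density ramp. Breakpoints are
# illustrative assumptions, not clinical or calibrated values.
hu_points      = [-1000.0, -100.0, 0.0, 100.0, 1000.0, 3000.0]   # HU
density_points = [0.001,    0.93,  1.0, 1.09,  1.55,   2.7]      # g/cm^3

def hu_to_density(hu):
    return np.interp(hu, hu_points, density_points)

def classify(hu):
    # Coarse tissue classes by HU interval (assumed thresholds).
    bins = [-950.0, -150.0, 100.0, 300.0]
    names = ["air", "lung", "soft tissue", "soft bone", "bone"]
    return names[int(np.digitize(hu, bins))]

for hu in (-1000, -700, 40, 200, 1200):
    print(f"{hu:6d} HU -> {hu_to_density(hu):.3f} g/cm^3 ({classify(hu)})")
```

A scanner-specific ramp amounts to refitting the breakpoints to that scanner's measured HU values for known materials, which is why the abstract stresses that an uncalibrated generic ramp propagates directly into dose errors.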
International Nuclear Information System (INIS)
Szoke, A; Brooks, E D; McKinley, M; Daffin, F
2005-01-01
The equations of radiation transport for thermal photons are notoriously difficult to solve in thick media without resorting to asymptotic approximations such as the diffusion limit. One source of this difficulty is that in thick, absorbing media thermal emission is almost completely balanced by strong absorption. In a previous publication [SB03], the photon transport equation was written in terms of the deviation of the specific intensity from the local equilibrium field. We called the new form of the equations the difference formulation. The difference formulation is rigorously equivalent to the original transport equation. It is particularly advantageous in thick media, where the radiation field approaches local equilibrium and the deviations from the Planck distribution are small. The difference formulation for photon transport also clarifies the diffusion limit. In this paper, the transport equation is solved by the Symbolic Implicit Monte Carlo (SIMC) method and a comparison is made between the standard formulation and the difference formulation. The SIMC method is easily adapted to the derivative source terms of the difference formulation, and a remarkable reduction in noise is obtained when the difference formulation is applied to problems involving thick media
International Nuclear Information System (INIS)
Setiani, Tia Dwi; Suprijadi; Haryanto, Freddy
2016-01-01
Monte Carlo (MC) is a powerful technique for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to model radiation transport in complex systems. One of the MC-based codes widely used for simulating radiographic images is MC-GPU, a code developed by Andreu Badal. This study investigated the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of the radiographic images and a comparison of the image quality obtained from the GPU and CPU simulations are evaluated in this paper. The simulations were run serially on a CPU and in parallel on two GPUs with 384 and 2304 cores. In the GPU simulations each core tracks one photon, so a large number of photons are computed simultaneously. The results show that the simulations on the GPU were significantly faster than on the CPU: the 2304-core GPU performed about 64-114 times faster, and the 384-core GPU about 20-31 times faster, than a single CPU core. A further result is that optimum image quality was obtained with at least 10⁸ histories and photon energies from 60 keV to 90 keV. According to a statistical analysis, the quality of the GPU and CPU images is essentially the same.
Discrete diffusion Monte Carlo for frequency-dependent radiative transfer
Energy Technology Data Exchange (ETDEWEB)
Densmore, Jeffery D [Los Alamos National Laboratory; Thompson, Kelly G [Los Alamos National Laboratory; Urbatsch, Todd J [Los Alamos National Laboratory
2010-11-17
Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique.
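The core of the hybrid scheme, routing each photon by comparing its frequency to a threshold and treating the low-frequency (diffusive) population cheaply, can be caricatured in a few lines of Python. Everything here (the threshold value, the frequency distribution, and both step models) is an illustrative toy choice, not the paper's actual discretization:

```python
import random

rng = random.Random(0)
NU_TH = 1.0  # threshold frequency (arbitrary units) -- an assumed toy value

def step(nu):
    # Hybrid rule: below the threshold, take a cheap diffusion-style step
    # (stand-in for a DDMC move); above it, sample a standard Monte Carlo
    # free flight from an exponential distribution.
    if nu < NU_TH:
        return "ddmc", rng.gauss(0.0, 0.1)
    return "imc", rng.expovariate(1.0)

# Route a batch of photons with frequencies drawn from a toy distribution.
labels = [step(rng.expovariate(1.0))[0] for _ in range(10000)]
```

The payoff in the real method is that the "ddmc" branch replaces many expensive scattering events with one discretized diffusion move.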
Energy Technology Data Exchange (ETDEWEB)
Holzmannhofer, J. [Landeskrankenhaus Salzburg, Universitaetsklinikum der Paracelsus Medizinischen Privatuniv. (Austria). Universitaetsklinik fuer Nuklearmedizin und Endokrinologie
2009-07-01
A possible method of measurement for the monitoring of workers concerning internal I-131 radiation exposure is the in vivo measurement of the I-131 activity in the thyroid gland by means of a thyroid uptake system. For this method of measurement, and for a maximum time interval between measurements of 14 days, the Austrian standard ON S 5220 part 2 defines a lower limit of detection (LLD) of 84 Bq or less. This standard also requires the trueness Br to be in the following range: -0.25 < Br < +0.5. The trueness Br is externally checked every second year within the framework of a so-called 'messtechnische Kontrolle' (conformity assessment). This conformity assessment can also cover the calculation of the committed effective dose according to part 3 of the Austrian standard ON S 5220. In Salzburg we use a thyroid uptake system with a 3-inch NaI detector and a suitably collimated shield. The person being measured is positioned as close as possible to the detector. The efficiency calibration is done according to the manufacturer's instructions. The calibration source is positioned in a neck phantom with an eccentric insert. The efficiency calibration gave the following results: FKal = 0.446 cpm/Bq I-131, LLD = 65 Bq and decision limit DL = 31 Bq (counting time: 2 min, background counting time: 30 min, background counting rate: 129 cpm). Thus the required LLD of 84 Bq is easily achieved with a pleasantly short counting time. Part 2 of the Austrian standard ON S 5220 requires that the (internal) verification or (external) conformity assessment of the efficiency calibration factor be done with an activity in the range between 10 and 1000 times the LLD. During verification the exact source position within the neck phantom is well known. This exact source position might not be known when an (external) conformity assessment is performed. With every measurement (personnel measurement or conformity assessment) a distance correction for
Importance iteration in MORSE Monte Carlo calculations
International Nuclear Information System (INIS)
Kloosterman, J.L.; Hoogenboom, J.E.
1994-02-01
An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example, which shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation. (orig.)
DEFF Research Database (Denmark)
Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan
2008-01-01
uncertainty estimation (GLUE) procedure based on Markov chain Monte Carlo sampling is applied in order to improve the performance of the methodology in estimating parameters and posterior output distributions. The description of the spatial variations of the hydrological processes is accounted for by defining … -distributed responses are, however, still quite unexplored. Especially for complex models, rigorous parameterization, reduction of the parameter space and use of efficient and effective algorithms are essential to facilitate the calibration process and make it more robust. Moreover, for these models multi- … the identifiability of the parameters and results in satisfactory multi-variable simulations and uncertainty estimates. However, the parameter uncertainty alone cannot explain the total uncertainty at all the sites, due to limitations in the distributed data included in the model calibration. The study also indicates …
Monte Carlo simulation of Markov unreliability models
International Nuclear Information System (INIS)
Lewis, E.E.; Boehm, F.
1984-01-01
A Monte Carlo method is formulated for the evaluation of the unreliability of complex systems with known component failure and repair rates. The formulation is in terms of a Markov process, allowing dependences between components to be modeled and computational efficiencies to be achieved in the Monte Carlo simulation. Two variance reduction techniques, forced transition and failure biasing, are employed to increase the computational efficiency of the random walk procedure. For an example problem these result in improved computational efficiency by more than three orders of magnitude over analog Monte Carlo. The method is generalized to treat problems with distributed failure and repair rate data, and a batching technique is introduced and shown to result in substantial increases in computational efficiency for an example problem. A method for separating the variance due to the data uncertainty from that due to the finite number of random walks is presented. (orig.)
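Of the two variance-reduction techniques named, forced transitions are the easier to sketch. The toy below compares an analog estimate of a component's failure probability within a mission time against a forced-transition estimate, in which every history is forced to fail within (0, T] and carries the weight P(t < T). The rate and mission time are invented illustration values, and this single-component case is degenerate (the forced estimator has zero variance), which is precisely why forcing pays off for rare failures:

```python
import math
import random

def analog_unreliability(lam, T, n, rng):
    # Analog Monte Carlo: sample the failure time directly. When lam*T is
    # small, almost every history survives past T and scores nothing.
    return sum(1 for _ in range(n) if rng.expovariate(lam) < T) / n

def forced_unreliability(lam, T, n, rng):
    # Forced transition: condition the failure on occurring inside (0, T]
    # and carry the weight w = P(t < T) = 1 - exp(-lam*T) instead.
    w = 1.0 - math.exp(-lam * T)
    return sum(w for _ in range(n)) / n  # every forced history scores w

rng = random.Random(2024)
LAM, T = 1.0e-4, 1.0          # assumed toy failure rate and mission time
exact = 1.0 - math.exp(-LAM * T)
forced = forced_unreliability(LAM, T, 1000, rng)
```

With only a thousand histories the analog estimator will typically score zero here, while the forced estimator reproduces the exact answer.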
Energy Technology Data Exchange (ETDEWEB)
Venturini, Luzia [Instituto de Pesquisas Energeicas e Nucleares (IPEN), Sao Paulo, Sp (Brazil). Dept. de Metrologia das Radiacoes]. E-mail: lventur@net.ipen.br
2005-07-01
This paper presents a comparison between two theoretical calibrations that use two mathematical models to represent the neck region. In the first model the thyroid is considered to be simply the region limited by two concentric cylinders whose dimensions are those of the trachea and the neck. The second model uses mathematical functions to obtain a better representation of the thyroid geometry. Efficiency values are obtained using Monte Carlo simulation. (author)
Energy Technology Data Exchange (ETDEWEB)
Eersel, H. van, E-mail: h.v.eersel@tue.nl; Coehoorn, R. [Department of Applied Physics, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven (Netherlands); Philips Research Laboratories, High Tech Campus 4, 5656 AE Eindhoven (Netherlands); Bobbert, P. A.; Janssen, R. A. J. [Department of Applied Physics, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven (Netherlands)
2014-10-06
We present an advanced molecular-scale organic light-emitting diode (OLED) model, integrating both electronic and excitonic processes. Using this model, we can reproduce the measured efficiency roll-off for prototypical phosphorescent OLED stacks based on the green dye tris[2-phenylpyridine]iridium (Ir(ppy)₃) and the red dye octaethylporphine platinum (PtOEP) and study the cause of the roll-off as a function of the current density. Both the voltage versus current density characteristics and the roll-off agree well with experimental data. Surprisingly, the results of the simulations lead us to conclude that, contrary to what is often assumed, not triplet-triplet annihilation but triplet-polaron quenching is the dominant mechanism causing the roll-off under realistic operating conditions. Simulations for devices with an optimized recombination profile, achieved by carefully tuning the dye trap depth, show that it will be possible to fabricate OLEDs with a drastically reduced roll-off. It is envisaged that J₉₀, the current density at which the efficiency is reduced to 90%, can be increased by almost one order of magnitude as compared to the experimental state-of-the-art.
Schläger, Martin
2011-03-01
Three widely used anthropomorphic phantoms are analysed with regard to their suitability for the efficiency calibration of whole-body counters (WBCs): a Bottle Manikin Absorber (BOMAB) phantom consisting of water-filled plastic containers, a St Petersburg block phantom (Research Institute of Sea Transport Hygiene, St Petersburg) made of polyethylene bricks and a mathematical Medical Internal Radiation Dose (MIRD) phantom, each of them representing a person weighing 70 kg. The analysis was performed by means of Monte Carlo simulations with the Monte Carlo N-Particle transport code using detailed mathematical models of the phantoms and the WBC at Forschungszentrum Jülich (FZJ). The simulated peak efficiencies for the BOMAB phantom and the MIRD phantom agree very well, but the results for the St Petersburg phantom are considerably higher. Therefore, WBCs similar to that at FZJ will probably underestimate the activity of incorporated radionuclides if they are calibrated by means of a St Petersburg phantom. Finally, the results from this work are compared with the conclusions from other studies dealing with block and BOMAB phantoms.
Dunn, William L
2012-01-01
Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle problem" …
Directory of Open Access Journals (Sweden)
Bardenet Rémi
2013-07-01
Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow one to compute these integrals numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which rejection sampling, importance sampling and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretical justification of the algorithms as well as practical advice, trying to relate both. We discuss the application of Monte Carlo in experimental physics, and point to landmarks in the literature for the curious reader.
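As a concrete illustration of two of the algorithms the review covers, here is a minimal sketch of rejection sampling (target p(x) = sin(x)/2 on [0, π] with a uniform envelope) and importance sampling (a rare-event tail probability of an exponential, re-weighted from a heavier-tailed proposal). Both targets are arbitrary textbook choices, not examples from the paper:

```python
import math
import random

rng = random.Random(42)

def rejection_sample(n):
    # Target density p(x) = sin(x)/2 on [0, pi]; with a uniform envelope
    # and M = pi/2, the acceptance probability p(x)/(M*q(x)) is sin(x).
    out = []
    while len(out) < n:
        x = rng.uniform(0.0, math.pi)
        if rng.random() <= math.sin(x):
            out.append(x)
    return out

def importance_tail_prob(n):
    # Estimate P(X > 4) for X ~ Exp(1), exact value exp(-4) ~ 0.0183,
    # by sampling from the heavier proposal Exp(0.25) and re-weighting
    # with the likelihood ratio p(x)/q(x) = 4*exp(-0.75*x).
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(0.25)
        if x > 4.0:
            total += 4.0 * math.exp(-0.75 * x)
    return total / n

samples = rejection_sample(20000)
tail = importance_tail_prob(100000)
```

By symmetry the rejection samples should average π/2, and the importance-sampled tail probability should land near exp(-4) with far fewer wasted histories than naive sampling from Exp(1).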
Prieto, E.; Casanovas, R.; Salvadó, M.
2018-03-01
A scintillation gamma-ray spectrometry water monitor with a 2″ × 2″ LaBr3(Ce) detector was characterized in this study. This monitor measures gamma-ray spectra of river water. Energy and resolution calibrations were performed experimentally, whereas the detector efficiency was determined using Monte Carlo simulations with the EGS5 code system. Values of the minimum detectable activity concentration for 131I and 137Cs were calculated for different integration times. As an example of the monitor's performance after calibration, a radiological increment during a rainfall episode was studied.
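Minimum detectable activity figures of this kind are conventionally derived from a Currie-style detection limit. A sketch of that arithmetic follows, with entirely hypothetical numbers; the efficiency, emission probability, counting time and volume below are illustrative assumptions, not values from the study:

```python
import math

def mda_bq_per_litre(bkg_counts, efficiency, emission_prob, live_time_s, volume_l):
    # Currie-style detection limit in counts, L_D ~ 2.71 + 4.65*sqrt(B),
    # converted to an activity concentration in Bq/L.
    l_d = 2.71 + 4.65 * math.sqrt(bkg_counts)
    return l_d / (efficiency * emission_prob * live_time_s * volume_l)

# Hypothetical numbers for a 137Cs line (662 keV): 400 background counts
# in the peak region, 5% full-energy-peak efficiency, emission
# probability 0.85, one hour of counting, 10 L of water.
mda = mda_bq_per_litre(400, 0.05, 0.85, 3600.0, 10.0)
```

Longer integration times raise B roughly linearly while the denominator grows linearly too, which is why the MDA improves roughly as the square root of the counting time.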
International Nuclear Information System (INIS)
Zimmerman, G.B.
1997-01-01
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, inflight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials. copyright 1997 American Institute of Physics
Calibration of germanium detectors
International Nuclear Information System (INIS)
Bjurman, B.; Erlandsson, B.
1985-01-01
This paper describes problems concerning the calibration of germanium detectors for the measurement of gamma radiation from environmental samples. It also contains a brief description of some ways of reducing the uncertainties in the activity determination. These uncertainties have many sources, such as counting statistics, full-energy-peak efficiency determination, density correction and radionuclide-specific coincidence effects, when environmental samples are investigated at close source-to-detector distances
Sheykhizadeh, Saheleh; Naseri, Abdolhossein
2018-04-05
Variable selection plays a key role in classification and multivariate calibration. Variable selection methods aim at choosing a set of variables, from a large pool of available predictors, that is relevant to estimating the analyte concentrations or to achieving better classification results. Many variable selection techniques have been introduced, among which those based on swarm-intelligence optimization have received particular attention over the last few decades, since they are mainly inspired by nature. In this work, a simple new variable selection algorithm is proposed based on the invasive weed optimization (IWO) concept. IWO is a bio-inspired metaheuristic mimicking weeds' ecological behaviour in colonizing and finding an appropriate place for growth and reproduction; it has been shown to be very adaptive and robust to environmental changes. In this paper, the first application of IWO, as a very simple and powerful method, to variable selection is reported, using different experimental datasets including FTIR and NIR data, to undertake classification and multivariate calibration tasks. Accordingly, invasive weed optimization - linear discriminant analysis (IWO-LDA) and invasive weed optimization - partial least squares (IWO-PLS) are introduced for multivariate classification and calibration, respectively.
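For orientation only, here is a one-dimensional sketch of the IWO loop described above: fitness-proportional seeding, a dispersal radius that shrinks over iterations, and competitive exclusion, applied to a made-up quadratic objective. The population sizes, seed counts and schedule are arbitrary choices, not the authors' settings:

```python
import random

def iwo_minimize(f, lo, hi, iters=60, init_pop=8, max_pop=20, seed=0):
    # Invasive weed optimization in 1-D: fitter weeds scatter more seeds,
    # the seed dispersal radius shrinks over time, and competitive
    # exclusion keeps only the best max_pop plants each generation.
    rng = random.Random(seed)
    plants = [rng.uniform(lo, hi) for _ in range(init_pop)]
    for it in range(iters):
        fits = [f(p) for p in plants]
        best, worst = min(fits), max(fits)
        sigma = 0.5 * (hi - lo) * (1.0 - it / iters) ** 3  # shrinking spread
        offspring = []
        for p, fit in zip(plants, fits):
            if worst == best:
                n_seeds = 3
            else:
                # linear ranking: between 1 and 5 seeds per plant
                n_seeds = 1 + round(4 * (worst - fit) / (worst - best))
            for _ in range(n_seeds):
                offspring.append(min(hi, max(lo, p + rng.gauss(0.0, sigma))))
        plants = sorted(plants + offspring, key=f)[:max_pop]
    return plants[0]

# Hypothetical 1-D "calibration" objective with its optimum at x = 2.
best_x = iwo_minimize(lambda x: (x - 2.0) ** 2, -10.0, 10.0)
```

In the paper's setting each "plant" would instead encode a candidate subset of spectral variables and f would be a cross-validated classification or calibration error.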
The ATLAS Electromagnetic Calorimeter Calibration Workshop
Hong Ma; Isabelle Wingerter
The ATLAS Electromagnetic Calorimeter Calibration Workshop took place at LAPP-Annecy from the 1st to the 3rd of October; 45 people attended. A detailed program was set up before the workshop. The agenda was organised around very focused presentations where questions were raised to allow arguments to be exchanged and answers to be proposed. The main topics were: electronics calibration; handling of problematic channels; cluster-level corrections for electrons and photons; absolute energy scale; streams for calibration samples; calibration constants processing; learning from commissioning. The workshop was on the whole lively and fruitful. Based on years of experience with test beam analysis and Monte Carlo simulation, and the recent operation of the detector in the commissioning, the methods to calibrate the electromagnetic calorimeter are well known. Some of the procedures are being exercised in the commissioning, which have demonstrated the c...
Quantum Monte Carlo approaches for correlated systems
Becca, Federico
2017-01-01
Over the past several decades, computational approaches to studying strongly-interacting systems have become increasingly varied and sophisticated. This book provides a comprehensive introduction to state-of-the-art quantum Monte Carlo techniques relevant to applications in correlated systems. It gives a clear overview of variational wave functions and a detailed presentation of stochastic sampling, including Markov chains and Langevin dynamics, which are developed into a discussion of Monte Carlo methods. The variational technique is described from its foundations to a detailed description of its algorithms. Further topics discussed include optimisation techniques, real-time dynamics and projection methods, including Green's function, reptation and auxiliary-field Monte Carlo, from basic definitions to advanced algorithms for efficient codes; the book concludes with recent developments on the continuum space. Quantum Monte Carlo Approaches for Correlated Systems provides an extensive reference ...
Monte Carlo simulation of the standardization of {sup 22}Na using scintillation detector arrays
Energy Technology Data Exchange (ETDEWEB)
Sato, Y., E-mail: yss.sato@aist.go.j [National Metrology Institute of Japan, National Institute of Advanced Industrial Science and Technology, Quantum Radiation Division, Radioactivity and Neutron Section, Tsukuba Central 2, 1-1-1 Umezono, Tsukuba, Ibaraki 305-8568 (Japan); Murayama, H. [National Institute of Radiological Sciences, 4-9-1, Anagawa, Inage, Chiba 263-8555 (Japan); Yamada, T. [Japan Radioisotope Association, 2-28-45, Hon-komagome, Bunkyo, Tokyo 113-8941 (Japan); National Metrology Institute of Japan, National Institute of Advanced Industrial Science and Technology, Quantum Radiation Division, Radioactivity and Neutron Section, Tsukuba Central 2, 1-1-1 Umezono, Tsukuba, Ibaraki 305-8568 (Japan); Tohoku University, 6-6, Aoba, Aramaki, Aoba, Sendai 980-8579 (Japan); Hasegawa, T. [Kitasato University, 1-15-1, Kitasato, Sagamihara, Kanagawa 228-8555 (Japan); Oda, K. [Tokyo Metropolitan Institute of Gerontology, 1-1 Nakacho, Itabashi-ku, Tokyo 173-0022 (Japan); Unno, Y.; Yunoki, A. [National Metrology Institute of Japan, National Institute of Advanced Industrial Science and Technology, Quantum Radiation Division, Radioactivity and Neutron Section, Tsukuba Central 2, 1-1-1 Umezono, Tsukuba, Ibaraki 305-8568 (Japan)
2010-07-15
In order to calibrate PET devices with a sealed point source, we devised an absolute activity measurement method for the sealed point source using scintillation detector arrays. This new method was verified by EGS5 Monte Carlo simulation.
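The principle behind absolute activity measurement with a detector pair can be stated in three lines: if the singles rates are R1 = A·ε1 and R2 = A·ε2 and the coincidence rate is Rc = A·ε1·ε2, then the unknown efficiencies cancel in A = R1·R2/Rc. The rates below are hypothetical, and for the back-to-back 511 keV annihilation quanta of 22Na the real method needs geometry-dependent corrections that this sketch ignores:

```python
def absolute_activity(singles_1, singles_2, coincidences):
    # Classical coincidence relation: R1 = A*e1, R2 = A*e2, Rc = A*e1*e2,
    # so A = R1*R2/Rc, with both detector efficiencies cancelling out.
    return singles_1 * singles_2 / coincidences

# Hypothetical rates (counts/s) for two opposing detectors with assumed
# efficiencies 0.20 and 0.30 viewing a 1000 Bq source:
activity = absolute_activity(200.0, 300.0, 60.0)
```

The appeal for PET calibration is that no reference source of known activity is needed, only the measured singles and coincidence rates.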
Generalized hybrid Monte Carlo - CMFD methods for fission source convergence
International Nuclear Information System (INIS)
Wolters, Emily R.; Larsen, Edward W.; Martin, William R.
2011-01-01
In this paper, we generalize the recently published 'CMFD-Accelerated Monte Carlo' method and present two new methods that reduce the statistical error in CMFD-Accelerated Monte Carlo. The CMFD-Accelerated Monte Carlo method uses Monte Carlo to estimate nonlinear functionals used in low-order CMFD equations for the eigenfunction and eigenvalue. The Monte Carlo fission source is then modified to match the resulting CMFD fission source in a 'feedback' procedure. The two proposed methods differ from CMFD-Accelerated Monte Carlo in the definition of the required nonlinear functionals, but they have identical CMFD equations. The proposed methods are compared with CMFD-Accelerated Monte Carlo on a high dominance ratio test problem. All hybrid methods converge the Monte Carlo fission source almost immediately, leading to a large reduction in the number of inactive cycles required. The proposed methods stabilize the fission source more efficiently than CMFD-Accelerated Monte Carlo, leading to a reduction in the number of active cycles required. Finally, as in CMFD-Accelerated Monte Carlo, the apparent variance of the eigenfunction is approximately equal to the real variance, so the real error is well-estimated from a single calculation. This is an advantage over standard Monte Carlo, in which the real error can be underestimated due to inter-cycle correlation. (author)
Calculation of the counting efficiency for 137Cs using voxel phantoms with lungs and a skeleton
International Nuclear Information System (INIS)
Ishikawa, T.; Uchiyama, M.
1997-01-01
Calibration of whole-body counters for 137 Cs is generally conducted with physical phantoms that are uniformly filled with 137 Cs solutions. To check the conventional calibration, two types of mathematical voxel phantoms were prepared: a phantom with uniform density and one with lungs and a skeleton. Five sizes, from newborn to adult, were prepared for each type of voxel phantom. For the whole-body counter at NIRS (Japan), counting efficiencies for these voxel phantoms were calculated using a Monte Carlo simulation code. The counting efficiency for the phantom with the lungs and skeleton was larger than that for the uniform-density phantom for each size. The difference in counting efficiency ranged from 3 to 5% across the five sizes; for the adult, it was about 5%. This indicates that the conventional calibration phantoms, assuming uniform activity distribution and uniform density, are acceptable for 137 Cs whole-body counting. (author)
A simple accelerator calibration procedure
Lane, D. W.; Avery, A. J.; Partridge, G.; Healy, M.
1993-04-01
A calibration procedure for an accelerator is described which is based upon the principles of Rutherford backscattering spectrometry and uses existing experimental apparatus. The procedure enables calibration to be performed both rapidly and efficiently. Details of the calibration of a 2.5 MV Van de Graaff generator are given as an example, and the results are compared to the 19F(p,αγ)16O resonant nuclear reactions at proton energies of 872 keV and 1373 keV.
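The link between a backscattering edge and the beam-energy calibration rests on elastic-scattering kinematics: the detected energy is E1 = K·E0, where K is the standard kinematic factor. A sketch of K follows; the 4He-on-28Si example is illustrative, not taken from the paper:

```python
import math

def kinematic_factor(m_projectile, m_target, theta_deg):
    # Elastic-scattering kinematic factor K for a projectile of mass
    # m_projectile scattering from m_target through lab angle theta:
    # the backscattered energy E1 = K * E0 ties the measured edge
    # position to the incident beam energy.
    th = math.radians(theta_deg)
    root = math.sqrt(m_target ** 2 - (m_projectile * math.sin(th)) ** 2)
    return ((m_projectile * math.cos(th) + root) / (m_projectile + m_target)) ** 2

# 4He backscattered at 180 degrees from 28Si: K = ((28-4)/(28+4))^2
k_si = kinematic_factor(4.0, 28.0, 180.0)
```

Measuring the edge energy of a known target element at a known angle therefore fixes E0, and hence the accelerator's terminal-voltage calibration.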
Energy Technology Data Exchange (ETDEWEB)
Cramer, S.N.
1984-01-01
The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.
International Nuclear Information System (INIS)
Cornejo Diaz, N.; Vergara Gil, A.; Jurado Vargas, M.
2010-01-01
The Monte Carlo method has become a valuable numerical laboratory framework in which to simulate complex physical systems. It is based on the generation of pseudo-random number sequences by numerical algorithms called random generators. In this work we assessed the suitability of different well-known random number generators for the simulation of gamma-ray spectrometry systems during efficiency calibrations. The assessment was carried out in two stages. The generators considered (Delphi's linear congruential, Mersenne Twister, XorShift, multiply-with-carry, universal virtual array, and a non-periodic logistic map based generator) were first evaluated with different statistical empirical tests, including moments, correlations, uniformity, independence of terms and the DIEHARD battery of tests. In a second step, an application-specific test was conducted by implementing the generators in our Monte Carlo program DETEFF and comparing the results obtained with them. The calculations were performed with two different CPUs, for a typical HpGe detector and a water sample in Marinelli geometry, with gamma-rays between 59 and 1800 keV. For the non-periodic logistic map based generator, dependence of the most significant bits was evident. This explains the bias, in excess of 5%, of the efficiency values obtained with this generator. The results of the application-specific assessment and the statistical performance of the other algorithms studied indicate their suitability for the Monte Carlo simulation of gamma-ray spectrometry systems for efficiency calculations.
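A minimal example of the kind of empirical uniformity test mentioned, here a Pearson chi-square test on binned uniform deviates, applied to Python's built-in Mersenne Twister rather than to any of the generators studied:

```python
import random

def chi_square_uniformity(samples, bins=10):
    # Pearson chi-square statistic against a uniform distribution on
    # [0, 1); for a healthy generator it should behave like a chi-square
    # variate with (bins - 1) degrees of freedom.
    n = len(samples)
    counts = [0] * bins
    for u in samples:
        counts[min(int(u * bins), bins - 1)] += 1
    expected = n / bins
    return sum((c - expected) ** 2 / expected for c in counts)

rng = random.Random(123)
stat = chi_square_uniformity([rng.random() for _ in range(10000)])
# For 9 degrees of freedom the 95% critical value is about 16.9.
```

A defective generator, like the logistic-map one flagged in the abstract, would show up as a statistic far out in the tail of the reference chi-square distribution.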
Calibration of the whole body counter at PSI
International Nuclear Information System (INIS)
Mayer, Sabine; Boschung, Markus; Fiechtner, Annette; Habegger, Ruedi; Meier, Kilian; Wernli, Christian
2008-01-01
At the Paul Scherrer Institut (PSI), measurements with the whole body counter are routinely carried out for occupationally exposed persons and occasionally for individuals of the population suspected of radioactive intake. In total about 400 measurements are performed per year. The whole body counter is based on a p-type high purity germanium (HPGe) coaxial detector mounted above a canvas chair in a shielded small room. The detector is used to detect the presence of radionuclides that emit photons with energies between 50 keV and 2 MeV. The room itself is made of iron from old railway rails to reduce the natural background radiation to 24 nSv/h. The present paper describes the calibration of the system with the IGOR phantom. Different body sizes are realized by different standardized configurations of polyethylene bricks, in which small tubes of calibration sources can be introduced. The efficiency of the detector was determined for four phantom geometries (P1, P2, P4 and P6), simulating human bodies in sitting position of 12 kg, 24 kg, 70 kg and 110 kg, respectively. The measurements were performed serially using five different radionuclide sources (40K, 60Co, 133Ba, 137Cs, 152Eu) within the phantom bricks. Based on the results of the experiment, an efficiency curve for each configuration and the detection limits for relevant radionuclides were determined. For routine measurements, the efficiency curve obtained with the phantom geometry P4 was chosen. The detection limits range from 40 Bq to 1000 Bq for selected radionuclides with a measurement time of 7 min. The proper calibration of the system, on the one hand, is essential for the routine measurements at PSI. On the other hand, it serves as a benchmark for the already initiated characterisation of the system with Monte Carlo simulations. (author)
Variational Monte Carlo Technique
Indian Academy of Sciences (India)
Home; Journals; Resonance – Journal of Science Education; Volume 19; Issue 8. Variational Monte Carlo Technique: Ground State Energies of Quantum Mechanical Systems. Sukanta Deb. General Article Volume 19 Issue 8 August 2014 pp 713-739 ...
Wielandt acceleration for MCNP5 Monte Carlo eigenvalue calculations
International Nuclear Information System (INIS)
Brown, F.
2007-01-01
Monte Carlo criticality calculations use the power iteration method to determine the eigenvalue (k-eff) and eigenfunction (fission source distribution) of the fundamental mode. A recently proposed method for accelerating convergence of the Monte Carlo power iteration using Wielandt's method has been implemented in a test version of MCNP5. The method is shown to provide dramatic improvements in convergence rates and to greatly reduce the possibility of false convergence assessment. The method is effective and efficient, improving the Monte Carlo figure-of-merit for many problems. In addition, the method should eliminate most of the underprediction bias in confidence intervals for Monte Carlo criticality calculations. (authors)
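The convergence gain from Wielandt's shift can be illustrated with a deterministic analogue: power iteration on a toy fission matrix whose dominance ratio is close to one, versus iteration on the shifted, inverted operator. The matrix and shift below are illustrative assumptions, not MCNP5 quantities:

```python
import numpy as np

# Toy symmetric "fission matrix" whose two eigenvalues are close
# (~1.012 and ~0.968), so plain power iteration converges slowly.
A = np.array([[1.00, 0.02],
              [0.02, 0.98]])

def iterations_to_converge(B, tol=1e-10, max_iter=100_000):
    """Power iteration on B; count steps until the eigenvalue
    estimate stabilizes to within tol."""
    x = np.ones(B.shape[0])
    lam_old = 0.0
    for n in range(1, max_iter + 1):
        x = B @ x
        lam = np.linalg.norm(x)
        x = x / lam
        if abs(lam - lam_old) < tol:
            return n
        lam_old = lam
    return max_iter

plain = iterations_to_converge(A)

# Wielandt-style acceleration: iterate instead on (shift*I - A)^-1, with
# the shift chosen just above the fundamental eigenvalue.  The shifted
# operator's dominance ratio is much smaller, so convergence is far
# faster; the physical eigenvalue is recovered from the shift afterwards.
shift = 1.05
accelerated = iterations_to_converge(np.linalg.inv(shift * np.eye(2) - A))

print(plain, accelerated)
```

In the Monte Carlo setting the shift is applied by moving part of the fission source to the left-hand side of the transport equation rather than by matrix inversion, but the effect on the dominance ratio is the same.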
Monte Carlo simulation of neutron counters for safeguards applications
International Nuclear Information System (INIS)
Looman, Marc; Peerani, Paolo; Tagziria, Hamid
2009-01-01
MCNP-PTA is a new Monte Carlo code for the simulation of neutron counters for nuclear safeguards applications, developed at the Joint Research Centre (JRC) in Ispra (Italy). After some preliminary considerations outlining the general aspects involved in the computational modelling of neutron counters, this paper describes the specific details and approximations which make up the basis of the model implemented in the code. One of the major improvements allowed by the use of Monte Carlo simulation is a considerable reduction in both the experimental work and the reference materials required for the calibration of the instruments. This new approach to the calibration of counters using Monte Carlo simulation techniques is also discussed.
Parallel processing Monte Carlo radiation transport codes
International Nuclear Information System (INIS)
McKinney, G.W.
1994-01-01
Issues related to distributed-memory multiprocessing as applied to Monte Carlo radiation transport are discussed. Measurements of communication overhead are presented for the radiation transport code MCNP, which employs the communication software package PVM, and average efficiency curves are provided for a homogeneous virtual machine.
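The shape of such efficiency curves can be understood with a toy cost model; the numbers and the per-processor overhead below are illustrative assumptions, not the MCNP/PVM measurements themselves:

```python
# Toy model of distributed-memory Monte Carlo efficiency: the transport
# work divides evenly across n processors, while communication overhead
# (a hypothetical fixed cost per processor, standing in for PVM message
# passing) grows linearly with n.  Parallel efficiency = speedup / n.

def parallel_efficiency(t_serial, n, t_comm_per_proc):
    t_parallel = t_serial / n + t_comm_per_proc * n
    return (t_serial / t_parallel) / n

for n in (1, 2, 4, 8, 16, 32):
    print(n, round(parallel_efficiency(3600.0, n, 0.5), 3))
```

Because particle histories are independent, Monte Carlo transport parallelizes almost perfectly until communication and tally-merging costs become comparable to the per-processor workload.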
Validation of efficiency transfer for Marinelli geometries
International Nuclear Information System (INIS)
Ferreux, Laurent; Pierre, Sylvie; Thanh, Tran Thien; Lépy, Marie-Christine
2013-01-01
In the framework of environmental measurements by gamma-ray spectrometry, some laboratories need to characterize samples in geometries for which a calibration is not directly available. A possibility is to use an efficiency transfer code, e.g., ETNA. However, validation for large volume sources, such as Marinelli geometries, is needed. With this aim in mind, ETNA is compared, initially to a Monte Carlo simulation (PENELOPE) and subsequently to experimental data obtained with a high-purity germanium detector (HPGe). - Highlights: • Validation of ETNA efficiency transfer calculations for simple geometries using the PENELOPE code. • Validation of two Marinelli geometries: comparison of the ETNA software with PENELOPE simulations. • ETNA efficiency transfer calculations and experimental values are compared for a Marinelli geometry
Spectrometric methods used in the calibration of radiodiagnostic measuring instruments
Energy Technology Data Exchange (ETDEWEB)
De Vries, W. [Rijksuniversiteit Utrecht (Netherlands)
1995-12-01
Recently a set of parameters for checking the quality of radiation for use in diagnostic radiology was established at the calibration facility of the Nederlands Meetinstituut (NMi). The establishment of these radiation qualities required re-evaluation of the correction factors for the primary air-kerma standards. Free-air ionisation chambers require several correction factors to measure air-kerma according to its definition. These correction factors were calculated for the NMi free-air chamber by Monte Carlo simulations for monoenergetic photons in the energy range from 10 keV to 320 keV. The actual correction factors follow from weighting these monoenergetic correction factors with the air-kerma spectrum of the photon beam. This paper describes the determination of the photon spectra of the X-ray qualities used for the calibration of dosimetric instruments used in radiodiagnostics. The detector used for these measurements is a planar HPGe detector, placed in the direct beam of the X-ray machine. To convert the measured pulse height spectrum to the actual photon spectrum, corrections must be made for fluorescent photon escape, single and multiple Compton scattering inside the detector, and detector efficiency. From the calculated photon spectra a number of parameters of the X-ray beam can be derived. The calculated first and second half value layers in aluminum and copper are compared with the measured values of these parameters to validate the method of spectrum reconstruction. Moreover, the spectrum measurements offer the possibility to calibrate the X-ray generator in terms of maximum high voltage: the maximum photon energy in the spectrum is used as a standard for the calibration of kVp-meters.
Monte Carlo codes and Monte Carlo simulator program
International Nuclear Information System (INIS)
Higuchi, Kenji; Asai, Kiyoshi; Suganuma, Masayuki.
1990-03-01
Four typical Monte Carlo codes, KENO-IV, MORSE, MCNP and VIM, have been vectorized on the VP-100 at the Computing Center, JAERI. Through this work, the problems in vector processing of Monte Carlo codes on vector processors have become clear: it is difficult to obtain good performance when vectorizing Monte Carlo codes. A Monte Carlo computing machine, which processes Monte Carlo codes with high performance, has been under development at our Computing Center since 1987. The concept of the Monte Carlo computing machine and its performance have been investigated and estimated by using a software simulator. In this report the problems in vectorization of Monte Carlo codes, the Monte Carlo pipelines proposed to mitigate these difficulties, and the results of the performance estimation of the Monte Carlo computing machine by the simulator are described. (author)
SU-E-T-274: Monte Carlo Simulations of Output Factors for a Small Animal Irradiator.
Pidikiti, R; Stojadinovic, S; Song, K; Speiser, M; Solberg, T
2012-06-01
Measurement of dosimetric parameters of small photon beams, with field sizes as small as 1 mm in diameter, is particularly challenging. This work utilizes Monte Carlo techniques to calculate percent depth dose (PDD) and output factors for small photon fields from a kV x-ray based small animal irradiator. Absolute dose calibration of a commercial small animal stereotactic irradiator (XRAD225, Precision X-ray) was performed in accordance with the recommendations of the AAPM TG-61 protocol. Both in-air and in-water calibrations were performed at a 30.4 cm source-to-surface distance (SSD) for a reference collimator 50 mm in diameter. The BEAM/EGS code was used to model the 225 kV photon beams used for most therapeutic applications. The Monte Carlo model provided good agreement with measured beam characteristics, e.g. PDD and off-axis ratios. Subsequently, output factors for various square and circular applicators were measured using an ionization chamber and radiochromic film, and compared with MC simulations. Directional Bremsstrahlung splitting (DBS) was utilized for variance reduction to improve the efficiency of the output factor simulations. The statistical uncertainty on the MC-calculated results is between 0.5% and 1% for most points. The absolute dose measured for the reference collimator at 30.4 cm SSD was 4.1 Gy/min in water and 4.12 Gy/min in air. The agreement between simulated and measured output factors was excellent, with differences ranging from 1% to 2.84%. The MC-simulated and measured depth dose data, normalized at the surface, show excellent agreement, with a maximum deviation of approximately 2.5%. Monte Carlo simulation provides an indispensable tool for validating measurements of the smallest field sizes used in preclinical small animal irradiation. © 2012 American Association of Physicists in Medicine.
2009-01-01
Carlo Rubbia turned 75 on March 31, and CERN held a symposium to mark his birthday and pay tribute to his impressive contribution to both CERN and science. Carlo Rubbia, 4th from right, together with the speakers at the symposium.On 7 April CERN hosted a celebration marking Carlo Rubbia’s 75th birthday and 25 years since he was awarded the Nobel Prize for Physics. "Today we will celebrate 100 years of Carlo Rubbia" joked CERN’s Director-General, Rolf Heuer in his opening speech, "75 years of his age and 25 years of the Nobel Prize." Rubbia received the Nobel Prize along with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. During the symposium, which was held in the Main Auditorium, several eminent speakers gave lectures on areas of science to which Carlo Rubbia made decisive contributions. Among those who spoke were Michel Spiro, Director of the French National Insti...
Energy Technology Data Exchange (ETDEWEB)
Ishikawa, T.; Uchiyama, M. [National Inst. of Radiological Sciences, Chiba (Japan)
1997-07-01
Calibration of whole-body counters for 137Cs is generally conducted with physical phantoms uniformly filled with 137Cs solutions. To check the conventional calibration, two types of mathematical voxel phantoms were prepared: a phantom with uniform density, and one with lungs and a skeleton. Five different sizes, from newborn to adult, were prepared for each type of voxel phantom. For the whole-body counter at NIRS (Japan), counting efficiencies for these voxel phantoms were calculated using a Monte Carlo simulation code. The counting efficiency for the phantom with lungs and skeleton was larger than that for the uniform density phantom for each size. The difference in counting efficiency ranged from 3 to 5% across the five sizes; for the adult, it was about 5%. This indicates that the conventional calibration phantoms, assuming uniform activity distribution and uniform density, are acceptable for 137Cs whole-body counting. (author).
Traceable Pyrgeometer Calibrations
Energy Technology Data Exchange (ETDEWEB)
Dooraghi, Mike; Kutchenreiter, Mark; Reda, Ibrahim; Habte, Aron; Sengupta, Manajit; Andreas, Afshin; Newman, Martina
2016-05-02
This poster presents the development, implementation, and operation of the Broadband Outdoor Radiometer Calibrations (BORCAL) Longwave (LW) system at the Southern Great Plains Radiometric Calibration Facility for the calibration of pyrgeometers that provide traceability to the World Infrared Standard Group.
Bayesian statistics and Monte Carlo methods
Koch, K. R.
2018-03-01
The Bayesian approach allows an intuitive way to derive the methods of statistics. Probability is defined as a measure of the plausibility of statements or propositions. Three rules are sufficient to obtain the laws of probability. If the statements refer to the numerical values of variables, the so-called random variables, univariate and multivariate distributions follow. They lead to point estimation, by which unknown quantities, i.e. unknown parameters, are computed from measurements. The unknown parameters are random variables; in traditional statistics, which is not founded on Bayes' theorem, they are fixed quantities. Bayesian statistics therefore recommends itself for Monte Carlo methods, which generate random variates from given distributions. Monte Carlo methods, of course, can also be applied in traditional statistics. The unknown parameters are introduced as functions of the measurements, and the Monte Carlo methods give the covariance matrix and the expectation of these functions. A confidence region is derived where the unknown parameters are situated with a given probability. Following a method of traditional statistics, hypotheses are tested by determining whether a value for an unknown parameter lies inside or outside the confidence region. The error propagation of a random vector by the Monte Carlo methods is presented as an application. If the random vector results from a nonlinearly transformed vector, its covariance matrix and its expectation follow from the Monte Carlo estimate. This saves a considerable number of derivatives to be computed, and errors of linearization are avoided; the Monte Carlo method is therefore efficient. If the functions of the measurements are given by a sum of two or more random vectors with different multivariate distributions, the resulting distribution is generally not known. The Monte Carlo methods are then needed to obtain the covariance matrix and the expectation of the sum.
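The nonlinear error-propagation idea fits in a few lines: instead of linearizing y = f(x) and propagating the covariance analytically, draw samples from the distribution of x and transform them directly. The polar-to-Cartesian example and its numbers below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo error propagation of a nonlinear transform: a measured
# distance r and angle theta (hypothetical means and sigmas) are
# converted to Cartesian coordinates by sampling, not by derivatives.
n = 200_000
r = rng.normal(100.0, 0.5, n)       # distance: mean 100, sigma 0.5
theta = rng.normal(0.6, 0.002, n)   # angle [rad]: mean 0.6, sigma 0.002

xy = np.column_stack((r * np.cos(theta), r * np.sin(theta)))

mean = xy.mean(axis=0)          # expectation of the transformed vector
cov = np.cov(xy, rowvar=False)  # its covariance matrix, no Jacobian needed

print(mean)
print(cov)
```

No derivatives of the transform are computed, and the estimate remains valid even where a first-order linearization would be poor.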
Adiabatic optimization versus diffusion Monte Carlo methods
Jarret, Michael; Jordan, Stephen P.; Lackey, Brad
2016-10-01
Most experimental and theoretical studies of adiabatic optimization use stoquastic Hamiltonians, whose ground states are expressible using only real nonnegative amplitudes. This raises a question as to whether classical Monte Carlo methods can simulate stoquastic adiabatic algorithms with polynomial overhead. Here we analyze diffusion Monte Carlo algorithms. We argue that, based on differences between L1 and L2 normalized states, these algorithms suffer from certain obstructions preventing them from efficiently simulating stoquastic adiabatic evolution in generality. In practice, however, we obtain good performance by introducing a method that we call Substochastic Monte Carlo. In fact, our simulations are good classical optimization algorithms in their own right, competitive with the best previously known heuristic solvers for MAX-k-SAT at k = 2, 3, 4.
Random Numbers and Monte Carlo Methods
Scherer, Philipp O. J.
Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages, Monte Carlo methods, which sample the integration volume at randomly chosen points, are very useful. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with a given probability distribution, which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally, the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
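The gain from sampling the important configurations preferentially can be seen in a one-dimensional toy integral; the integrand and the proposal density below are illustrative choices, not taken from the book:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy integral I = ∫_0^1 exp(-x) / (1 + x^2) dx  (≈ 0.525).
f = lambda x: np.exp(-x) / (1 + x**2)

# Plain Monte Carlo: uniform samples on [0, 1].
x = rng.uniform(0.0, 1.0, n)
plain_samples = f(x)

# Importance sampling: draw y from p(y) = exp(-y) / (1 - exp(-1)) on
# [0, 1] via the inverse CDF, so samples concentrate where the integrand
# is large, and weight each sample by f(y) / p(y).
u = rng.uniform(0.0, 1.0, n)
y = -np.log(1.0 - u * (1.0 - np.exp(-1.0)))
weighted_samples = f(y) * (1.0 - np.exp(-1.0)) / np.exp(-y)

print(plain_samples.mean(), weighted_samples.mean())
print(plain_samples.std(), weighted_samples.std())  # spread is reduced
```

Both estimators are unbiased, but the importance-sampled one has a visibly smaller sample standard deviation, so fewer points are needed for the same statistical error.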
Hrivnacova, I; Berejnov, V V; Brun, R; Carminati, F; Fassò, A; Futo, E; Gheata, A; Caballero, I G; Morsch, Andreas
2003-01-01
The concept of Virtual Monte Carlo (VMC) has been developed by the ALICE Software Project to allow different Monte Carlo simulation programs to run without changing the user code, such as the geometry definition, the detector response simulation or input and output formats. Recently, the VMC classes have been integrated into the ROOT framework, and the other relevant packages have been separated from the AliRoot framework and can be used individually by any other HEP project. The general concept of the VMC and its set of base classes provided in ROOT will be presented. Existing implementations for Geant3, Geant4 and FLUKA and simple examples of usage will be described.
TWSTFT Link Calibration Report
2015-09-01
traveling calibration station (calibrator) consisting of N (≥2) GNSS receivers+antennas+cables and PPS/frequency-distributors. It is a pre-cabled black...the PTB is taken as the reference of the calibration, a GNSS time link correction is equal to the classic GNSS equipment calibration correction [8...TWSTFT link calibration. If we replace the TWSTFT link by a GNSS link or an optical fiber (OF), it becomes a GNSS or an OF time link calibration. This
Preliminary evaluation of a Neutron Calibration Laboratory
Energy Technology Data Exchange (ETDEWEB)
Alvarenga, Talysson S.; Neves, Lucio P.; Perini, Ana P.; Sanches, Matias P.; Mitake, Malvina B.; Caldas, Linda V.E., E-mail: talvarenga@ipen.br, E-mail: lpneves@ipen.br, E-mail: aperini@ipen.br, E-mail: msanches@ipen.br, E-mail: mbmitake@ipen.br, E-mail: lcaldas@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Federico, Claudio A., E-mail: claudiofederico@ieav.cta.br [Instituto de Estudos Avancados (IEAv/DCTA), Sao Jose dos Campos, SP (Brazil). Dept. de Ciencia e Tecnologia Aeroespacial
2013-07-01
In the past few years, Brazil and several other countries in Latin America have experienced a great demand for the calibration of neutron detectors, mainly due to the increase in oil prospecting and extraction. The only laboratory for the calibration of neutron detectors in Brazil is located at the Institute for Radioprotection and Dosimetry (IRD/CNEN), Rio de Janeiro, which is part of the IAEA SSDL network. This laboratory is the national standard laboratory in Brazil. With the increase in the demand for the calibration of neutron detectors, there is a need for additional calibration services. In this context, the Calibration Laboratory of IPEN/CNEN, Sao Paulo, which already offers calibration services for radiation detectors with standard X, gamma, beta and alpha beams, has recently designed a new calibration laboratory for neutron detectors. In this work, the ambient dose equivalent rate (H*(10)) was evaluated at several positions inside and around this laboratory, using Monte Carlo simulation (MCNP5 code), in order to verify the adequacy of the shielding. The results showed that the shielding is effective, and that this is a low-cost methodology to improve the safety of the workers and evaluate the total staff workload. (author)
Energy Technology Data Exchange (ETDEWEB)
Hunt, John
1998-12-31
A Monte Carlo program which uses a voxel phantom has been developed to simulate in vivo measurement systems for calibration purposes. The calibration method presented here employs a mathematical phantom, produced in the form of volume elements (voxels), obtained through magnetic resonance images of the human body. The calibration method uses the Monte Carlo technique to simulate the tissue contamination, the transport of the photons through the tissues and the detection of the radiation. The program simulates the transport and detection of photons between 0.035 and 2 MeV and uses, for the body representation, a voxel phantom with a format of 871 slices each of 277 x 148 picture elements. The Monte Carlo code was applied to the calibration of in vivo systems and to estimate differences in counting efficiencies between homogeneous and non-homogeneous radionuclide distributions in the lung. Calculations show a factor of 20 difference between 241Am deposited at the back of the lung compared with the front. The program was also used to estimate the 137Cs body burden of an internally contaminated individual counted with an 8 x 4 NaI(Tl) detector, and the 241Am body burden of an internally contaminated individual counted using a planar germanium detector. (author) 24 refs., 38 figs., 23 tabs.
Calibration of thoron (220Rn) activity concentration monitors
International Nuclear Information System (INIS)
Sabot, Benoit
2015-01-01
The goal of this PhD is to develop an activity standard for use in calibrating monitors used to measure the thoron (220Rn) concentration in air. The device, which has been designed to be the standard, is a volume with a silicon semiconductor detector and an electric field which allows the charged decay products of thoron to be trapped on the detector surface. A finite element method has been used for the electric field simulations. This electric field is high enough to catch the decay products of thoron at the detector surface even with the high flow rate inside the volume. Monte Carlo calculations were used to define the detection efficiency of the system and to optimize the geometry shape and size. The calculated detection efficiencies have been compared with the results obtained for a reference radon (222Rn) atmosphere produced with a new gas dilution setup. These experiments allowed the sensitivity of the system to be evaluated as a function of the air properties. It has been demonstrated that the measurement system is independent of the pressure, the relative humidity and the flow rate over a large range of values. Through the analysis of measured alpha spectra, the experimental gas detection efficiency was found to be consistent with the Monte Carlo simulations. This portable system can now be used to evaluate precisely the thoron activity concentration with a well-defined associated uncertainty. Comparison measurements have been performed at the Italian metrological institute; both systems are consistent within their uncertainties. (author) [fr]
Variational Monte Carlo Technique
Indian Academy of Sciences (India)
nonprobabilistic) problem [5]. ... In quantum mechanics, the MC methods are used to simulate many-particle systems using random ... D Ceperley, G V Chester and M H Kalos, Monte Carlo simulation of a many-fermion study, Physical Review Vol.
Indian Academy of Sciences (India)
Home; Journals; Resonance – Journal of Science Education; Volume 7; Issue 3. Markov Chain Monte Carlo - Examples. Arnab Chakraborty. General Article Volume 7 Issue 3 March 2002 pp 25-34. Fulltext. Click here to view fulltext PDF. Permanent link: https://www.ias.ac.in/article/fulltext/reso/007/03/0025-0034. Keywords.
Leonardo Rossi
Carlo Caso (1940 - 2007) Our friend and colleague Carlo Caso passed away on July 7th, after several months of courageous fight against cancer. Carlo spent most of his scientific career at CERN, taking an active part in the experimental programme of the laboratory. His long and fruitful involvement in particle physics started in the sixties, in the Genoa group led by G. Tomasini. He then made several experiments using the CERN liquid hydrogen bubble chambers -first the 2000HBC and later BEBC- to study various facets of the production and decay of meson and baryon resonances. He later made his own group and joined the NA27 Collaboration to exploit the EHS Spectrometer with a rapid cycling bubble chamber as vertex detector. Amongst their many achievements, they were the first to measure, with excellent precision, the lifetime of the charmed D mesons. At the start of the LEP era, Carlo and his group moved to the DELPHI experiment, participating in the construction and running of the HPC electromagnetic c...
Chen, Yizheng; Qiu, Rui; Li, Chunyan; Wu, Zhen; Li, Junli
2016-03-07
In vivo measurement is a main method of internal contamination evaluation, particularly for large numbers of people after a nuclear accident. Before practical application, it is necessary to obtain the counting efficiency of the detector by calibration. Virtual calibration based on Monte Carlo simulation usually uses a reference human computational phantom, and the morphological difference between the monitored personnel and the calibrated phantom may lead to deviations in the counting efficiency. Therefore, a phantom library covering a wide range of heights and total body masses is needed. In this study, a Chinese reference adult male polygon surface (CRAM_S) phantom was constructed based on the CRAM voxel phantom, with the organ models adjusted to match the Chinese reference data. The CRAM_S phantom was then transformed to a sitting posture for convenience in practical monitoring. Referring to the mass and height distribution of the Chinese adult male, a phantom library containing 84 phantoms was constructed by deforming the reference surface phantom. Phantoms in the library have 7 different heights ranging from 155 cm to 185 cm, with 12 phantoms of different total body masses at each height. As an example of application, organ-specific and total counting efficiencies of Ba-133 were calculated using the MCNPX code, with two series of phantoms selected from the library. The influence of morphological variation on the counting efficiency was analyzed. The results show that using only the reference phantom in virtual calibration may lead to an error of 68.9% in total counting efficiency. The influence of morphological difference on virtual calibration can thus be greatly reduced by using a phantom library with a wide range of masses and heights instead of a single reference phantom.
International Nuclear Information System (INIS)
Guerra, J.G.; Rubiano, J.G.; Winter, G.; Guerra, A.G.; Alonso, H.; Arnedo, M.A.; Tejera, A.; Gil, J.M.; Rodríguez, R.; Martel, P.; Bolivar, J.P.
2015-01-01
The determination of the activity concentration of a specific radionuclide in a sample by gamma spectrometry requires knowledge of the full energy peak efficiency (FEPE) at the energy of interest. The difficulties related to experimental calibration make it advisable to have alternative methods for FEPE determination, such as simulation of the transport of photons in the crystal by the Monte Carlo method, which requires an accurate knowledge of the characteristics and geometry of the detector. The characterization process is mainly carried out by Canberra Industries Inc. using proprietary techniques and methodologies developed by that company. It is a costly procedure (due to shipping and to the cost of the process itself), and for some research laboratories an alternative in situ procedure can be very useful. The main goal of this paper is to find an alternative to this costly characterization process by establishing a method for optimizing the parameters characterizing the detector, through a computational procedure which could be reproduced at a standard research lab. This method consists in the determination of the detector geometric parameters by Monte Carlo simulation in parallel with an optimization process, based on evolutionary algorithms, starting from a set of reference FEPEs determined experimentally or computationally. The proposed method has proven to be effective and simple to implement. It provides a set of characterization parameters which has been successfully validated for different source-detector geometries, and also for a wide range of environmental samples and certified materials. - Highlights: • A computational method for characterizing an HPGe spectrometer has been developed. • Detector characterized using as reference photopeak efficiencies obtained experimentally or by Monte Carlo calibration. • The characterization obtained has been validated for samples with different geometries and composition. • Good agreement
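The optimization loop described above, evolving detector parameters until model-predicted efficiencies reproduce a set of reference FEPEs, can be sketched as follows. Everything here is a stand-in: the two parameters, the closed-form "model" (replacing the real Monte Carlo transport run, which is far too slow to inline), and the reference values are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical forward model: efficiency as a function of two detector
# parameters (say, an effective crystal radius and a dead-layer term).
def model_efficiency(params, energies):
    radius, dead_layer = params
    return radius**2 * np.exp(-dead_layer * 50.0 / energies) / energies**0.8

energies = np.array([60.0, 122.0, 662.0, 1332.0])
true_params = np.array([3.0, 0.07])
reference = model_efficiency(true_params, energies)  # plays the role of reference FEPEs

def fitness(params):
    """Sum of squared deviations from the reference efficiencies."""
    return np.sum((model_efficiency(params, energies) - reference) ** 2)

# A tiny (1+lambda) evolution strategy: mutate the best candidate and
# keep any child that improves the fit, narrowing the search over time.
best = np.array([2.0, 0.2])
sigma = 0.3
for generation in range(200):
    children = best + rng.normal(0.0, sigma, size=(20, 2))
    scores = np.array([fitness(c) for c in children])
    if scores.min() < fitness(best):
        best = children[scores.argmin()]
    sigma *= 0.97

print(best)
```

In the real procedure each fitness evaluation is a Monte Carlo transport calculation, so the practical art lies in keeping the number of evaluations small.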
Jet energy calibration at the LHC
INSPIRE-00053381
2015-01-01
Jets are one of the most prominent physics signatures of high energy proton-proton (p-p) collisions at the Large Hadron Collider (LHC). They are key physics objects for precision measurements and searches for new phenomena. This review provides an overview of the reconstruction and calibration of jets at the LHC during its first Run. ATLAS and CMS developed different approaches for the reconstruction of jets, but use similar methods for the energy calibration. ATLAS reconstructs jets from calorimeter signals and uses charged particle tracks to refine the energy measurement and suppress the effects of multiple p-p interactions (pileup). CMS, instead, combines calorimeter and tracking information to build jets from particle flow objects. Jets are calibrated using Monte Carlo (MC) simulations, and a residual in situ calibration derived from collision data is applied to correct for the differences in jet response between data and Monte Carlo. Large samples of dijet, Z+jets, and photon+jet e...
Well GeHP detector calibration for environmental measurements using reference materials
Energy Technology Data Exchange (ETDEWEB)
Tedjani, A. [Laboratoire Chrono-Environnement, UMR CNRS 6249, Université de Bourgogne Franche-Comté, F-25030 Besançon (France); Laboratoire de Physique des Rayonnements et Applications, Université de Jijel, B.P. 98, Oueled Aissa, Jijel 18000 (Algeria); Mavon, C. [Laboratoire Chrono-Environnement, UMR CNRS 6249, Université de Bourgogne Franche-Comté, F-25030 Besançon (France); Belafrites, A. [Laboratoire de Physique des Rayonnements et Applications, Université de Jijel, B.P. 98, Oueled Aissa, Jijel 18000 (Algeria); Degrelle, D. [Laboratoire Chrono-Environnement, UMR CNRS 6249, Université de Bourgogne Franche-Comté, F-25030 Besançon (France); Boumala, D. [Laboratoire Chrono-Environnement, UMR CNRS 6249, Université de Bourgogne Franche-Comté, F-25030 Besançon (France); Laboratoire de Physique des Rayonnements et Applications, Université de Jijel, B.P. 98, Oueled Aissa, Jijel 18000 (Algeria); Rius, D. [Laboratoire Chrono-Environnement, UMR CNRS 6249, Université de Bourgogne Franche-Comté, F-25030 Besançon (France); Groetz, J.-E., E-mail: jegroetz@univ-fcomte.fr [Laboratoire Chrono-Environnement, UMR CNRS 6249, Université de Bourgogne Franche-Comté, F-25030 Besançon (France)
2016-12-01
A well-type detector installed in the Modane underground Laboratory (LSM) combines low background with high detection efficiency and is well suited for the analysis of small amounts of environmental samples. Reference materials such as IAEA-447 (moss-soil), IAEA-RG-Th1 and IAEA-RG-U1 were used for the detector calibration, owing to a chemical composition close to that of the environmental samples. Nevertheless, matrix effects and true coincidence summing effects must be corrected in the full energy peak efficiency (FEPE). The FEPE was determined over a wide range of energies by a semi-empirical method using Monte Carlo simulation (MCNP6), intended for environmental measurements such as lake sediment dating. In the well geometry, the true coincidence summing effects can be very important, and correction factors have been computed in three different ways.
Ensemble bayesian model averaging using markov chain Monte Carlo sampling
Energy Technology Data Exchange (ETDEWEB)
Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL
2008-01-01
Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper (Raftery et al., Mon Weather Rev 133:1155-1174, 2005), the authors recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
Markov chain Monte Carlo methods: an introductory example
Klauenberg, Katy; Elster, Clemens
2016-02-01
When the Guide to the Expression of Uncertainty in Measurement (GUM) and methods from its supplements are not applicable, the Bayesian approach may be a valid and welcome alternative. Evaluating the posterior distribution, estimates or uncertainties involved in Bayesian inferences often requires numerical methods to avoid high-dimensional integrations. Markov chain Monte Carlo (MCMC) sampling is such a method—powerful, flexible and widely applied. Here, a concise introduction is given, illustrated by a simple, typical example from metrology. The Metropolis-Hastings algorithm is the most basic and yet flexible MCMC method. Its underlying concepts are explained and the algorithm is given step by step. The few lines of software code required for its implementation invite interested readers to get started. Diagnostics to evaluate the performance and common algorithmic choices are illustrated to calibrate the Metropolis-Hastings algorithm for efficiency. Routine application of MCMC algorithms is currently hindered by the difficulty of assessing the convergence of MCMC output and thus of assuring the validity of results. An example points to the importance of convergence and initiates discussion about advantages as well as areas of research. Available software tools are mentioned throughout.
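The "few lines of software code" the abstract refers to really do fit on a screen. A sketch of random-walk Metropolis-Hastings for a standard-normal target; the target density, step size and chain length are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def log_target(x):
    """Unnormalized log-density of the target; a standard normal here."""
    return -0.5 * x * x

def metropolis_hastings(n, step=1.0, x0=0.0):
    """Random-walk Metropolis: propose x' ~ N(x, step^2),
    accept with probability min(1, p(x')/p(x))."""
    x = x0
    chain = np.empty(n)
    accepted = 0
    for i in range(n):
        xp = x + step * rng.normal()
        if np.log(rng.uniform()) < log_target(xp) - log_target(x):
            x, accepted = xp, accepted + 1
        chain[i] = x
    return chain, accepted / n

chain, acc = metropolis_hastings(20000, step=2.4)
post = chain[2000:]   # discard burn-in; moments should match the N(0,1) target
```

The acceptance rate `acc` is one of the simple diagnostics the paper discusses: too small a `step` accepts nearly everything but explores slowly, too large a `step` rejects nearly everything.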
A methodology to develop computational phantoms with adjustable posture for WBC calibration
Ferreira Fonseca, T. C.; Bogaerts, R.; Hunt, John; Vanhavere, F.
2014-11-01
A Whole Body Counter (WBC) is a facility to routinely assess the internal contamination of exposed workers, especially in the case of radiation release accidents. The calibration of the counting device is usually done using anthropomorphic physical phantoms representing the human body. Given the challenge of constructing representative physical phantoms, virtual calibration has been introduced. The use of computational phantoms and the Monte Carlo method to simulate radiation transport has been demonstrated to be a worthy alternative. In this study we introduce a methodology for the creation of realistic computational voxel phantoms with adjustable posture for WBC calibration. The methodology makes use of different software packages to enable the creation and modification of computational voxel phantoms. This allows voxel phantoms to be developed on demand for the calibration of different WBC configurations, which in turn helps to study the major source of uncertainty associated with the in vivo measurement routine: the difference between the calibration phantoms and the real persons being counted. The use of realistic computational phantoms also helps to optimize the counting measurement. The open-source MakeHuman and Blender software packages were used for the creation and modelling of 3D humanoid characters based on polygonal mesh surfaces. In addition, in-house software was developed to convert the binary 3D voxel grid into an MCNPX input file. This paper summarizes the development of a library of phantoms of the human body that uses two basic phantoms, called MaMP and FeMP (Male and Female Mesh Phantoms), to create sets of male and female phantoms that vary both in height and in weight. Two sets of MaMP and FeMP phantoms were developed and used for efficiency calibration of two different WBC set-ups: the Doel NPP WBC laboratory and the AGM laboratory of SCK-CEN in Mol, Belgium.
Research on perturbation based Monte Carlo reactor criticality search
International Nuclear Information System (INIS)
Li Zeguang; Wang Kan; Li Yangliu; Deng Jingkang
2013-01-01
Criticality search is a very important aspect of reactor physics analysis. Due to the advantages of the Monte Carlo method and the development of computer technologies, Monte Carlo criticality search is becoming more and more necessary and feasible. The traditional Monte Carlo criticality search method suffers from the large number of individual criticality runs required and from the statistical uncertainty and fluctuation of Monte Carlo results. A new Monte Carlo criticality search method based on perturbation calculation is put forward in this paper to overcome the disadvantages of the traditional method. Using only one criticality run to obtain the initial k_eff and the differential coefficients with respect to the concerned parameter, a polynomial estimator of the k_eff response function is solved to find the critical value of the parameter. The feasibility of this method was tested. The results show that the accuracy and efficiency of the perturbation-based criticality search method are quite encouraging, and that the method overcomes the disadvantages of the traditional one. (authors)
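The search step described above reduces to solving a low-order Taylor polynomial for k_eff around the single-run point. A sketch with invented numbers (the boron concentration, k_eff and worth coefficient below are hypothetical, not values from the paper):

```python
import numpy as np

def critical_value(p0, keff0, dk_dp, d2k_dp2=0.0):
    """Solve keff(p) = 1 from a Taylor polynomial around the single-run point p0:
    keff(p) ~ keff0 + dk_dp*(p - p0) + 0.5*d2k_dp2*(p - p0)**2."""
    roots = np.roots([0.5 * d2k_dp2, dk_dp, keff0 - 1.0])  # np.roots drops leading zeros
    real = roots[np.abs(roots.imag) < 1e-12].real
    return p0 + real[np.argmin(np.abs(real))]              # smallest perturbation wins

# illustrative numbers only: keff = 1.02 at 500 ppm boron, worth -1.0e-4 dk per ppm
p_crit = critical_value(p0=500.0, keff0=1.02, dk_dp=-1.0e-4)   # ~700 ppm
```

Choosing the real root closest to `p0` keeps the answer inside the range where the single-run perturbation expansion can be trusted.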
CERN radiation protection (RP) calibration facilities
Energy Technology Data Exchange (ETDEWEB)
Pozzi, Fabio
2016-04-14
Radiation protection calibration facilities are essential to ensure the correct operation of radiation protection instrumentation. Calibrations are performed in specific radiation fields according to the type of instrument to be calibrated: neutrons, photons, X-rays, beta and alpha particles. Some of the instruments are also tested in mixed radiation fields, as often encountered close to high-energy particle accelerators. Moreover, calibration facilities are of great importance for evaluating the performance of prototype detectors; testing and measuring the response of a prototype detector to well-known and well-characterized radiation fields contributes to improving and optimizing its design and capabilities. The CERN Radiation Protection group is in charge of performing the regular calibrations of all CERN radiation protection devices; these include operational and passive dosimeters, neutron and photon survey meters, and fixed radiation detectors that monitor the ambient dose equivalent, H*(10), inside CERN accelerators and at the CERN borders. A new state-of-the-art radiation protection calibration facility was designed, constructed and commissioned following the related ISO recommendations, replacing the previous ageing (more than 30 years old) laboratory. The new laboratory also aims at official accreditation according to the ISO standards in order to be able to release certified calibrations. Four radiation fields are provided: neutron, photon and beta sources and an X-ray generator. Its construction involved more than civil engineering work: many radiation protection studies were performed to provide a facility that could answer CERN's calibration needs and fulfill all related safety requirements. Monte Carlo simulations were confirmed to be a valuable tool for the optimization of the building design, the radiation protection aspects, e.g. shielding, and, as a consequence, the overall cost. After the source and irradiator installation
Marković, Nikola; Roos, Per; Hou, Xiaolin; Nielsen, Sven Poul
2018-02-01
An X-ray-gamma coincidence measurement method for efficiency calibration of a HPGe-HPGe system, using the methodology for activity standardisation of 125I, has been developed. By taking one list-mode, time-stamped measurement of the 125I source, six spectra were generated in post-processing: total spectra, coincidence spectra and energy-gated coincidence spectra for each of the two detectors. The method provides enough observables for the source activity to be determined without prior knowledge of the detector efficiencies. In addition, once the source is calibrated in this way, the same spectra can be used to perform efficiency calibration of the individual detectors in the low energy range. This new methodology for source activity determination is an alternative to the established X-ray-(X-ray, gamma) coincidence counting method with two NaI(Tl) detectors and to the sum-peak method using a single HPGe detector. Compared to the coincidence counting method using two NaI(Tl) detectors, the newly developed method benefits from the improved energy resolution of HPGe detectors combined with measurement of only full peak areas, without the need for total efficiency determination. This enables activity determination even in the presence of other gamma emitters in the sample. Standard coincidence counting with NaI(Tl) detectors provides lower uncertainties. The method has been used for calibration of a coincidence HPGe spectrometer in the low energy range of 125I and for fine adjustments of a Monte Carlo model of the coincidence system.
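For intuition on why activity can be determined without prior knowledge of the efficiencies, the textbook two-detector coincidence relations suffice: in the idealized single-photon-per-decay case, the two singles rates and the coincidence rate fix all three unknowns. This toy model ignores the 125I decay-scheme and angular-correlation details handled in the paper, and the count rates below are invented:

```python
def coincidence_activity(n1, n2, nc):
    """Idealized two-detector coincidence relations for a single-photon-per-decay toy
    source: n1 = A*e1, n2 = A*e2, nc = A*e1*e2, so all three unknowns follow from
    the three observed count rates."""
    A = n1 * n2 / nc   # activity
    e1 = nc / n2       # efficiency of detector 1
    e2 = nc / n1       # efficiency of detector 2
    return A, e1, e2

# invented count rates (s^-1): singles in each detector and coincidences
A, e1, e2 = coincidence_activity(n1=500.0, n2=400.0, nc=20.0)
# A = 10000.0 Bq, e1 = 0.05, e2 = 0.04
```

Consistency check: n1 = A*e1 = 10000 * 0.05 = 500, matching the input, which is exactly the self-calibrating property the abstract exploits.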
Calibration and Rating of Photovoltaics: Preprint
Energy Technology Data Exchange (ETDEWEB)
Emery, K.
2012-06-01
Rating the performance of photovoltaic (PV) modules is critical to determining the cost per watt, and efficiency is useful to assess the relative progress among PV concepts. Procedures for determining the efficiency for PV technologies from 1-sun to low concentration to high concentration are discussed. We also discuss the state of the art in primary and secondary calibration of PV reference cells used by calibration laboratories around the world. Finally, we consider challenges to rating PV technologies and areas for improvement.
Direct megavoltage photon calibration service in Australia
International Nuclear Information System (INIS)
Butler, D.J.; Ramanthan, G.; Oliver, C.; Cole, A.; Harty, P.D.; Wright, T.; Webb, D.V.; Lye, J.; Followill, D.S.
2014-01-01
The Australian Radiation Protection and Nuclear Safety Agency (ARPANSA) maintains the Australian primary standard of absorbed dose. Until recently, the standard was used to calibrate ionisation chambers only in 60Co gamma rays. These chambers are then used by radiotherapy clinics to determine linac output, using a correction factor (k_Q) to take into account the different spectra of 60Co and the linac. Over the period 2010–2013, ARPANSA adapted the primary standard to work in megavoltage linac beams, and has developed a calibration service at three photon beams (6, 10 and 18 MV) from an Elekta Synergy linac. We describe the details of the new calibration service, the method validation and the use of the new calibration factors with the International Atomic Energy Agency's TRS-398 dosimetry Code of Practice. The expected changes in absorbed dose measurements in the clinic when shifting from 60Co to the direct calibration are determined. For a Farmer chamber (model 2571), the measured chamber calibration coefficient is expected to be reduced by 0.4, 1.0 and 1.1% respectively for these three beams when compared to the factor derived from 60Co. These results are in overall agreement with international absorbed dose standards and with the 2010 Monte Carlo calculations of k_Q factors by Muir and Rogers. The reasons for and against moving to the new service are discussed in the light of the requirements of clinical dosimetry.
Directory of Open Access Journals (Sweden)
Pedro Medina Avendaño
1981-01-01
Full Text Available Carlos Vega Duarte had the simplicity of elemental and pure beings. His heart was as clean as alluvial gold. His direct, colloquial manner revealed a Santanderean untouched by contamination, who loved the gleam of weapons and was dazzled by the sparkle of perfect phrases.
Energy Technology Data Exchange (ETDEWEB)
Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a PowerPoint presentation that serves as lecture material for the Parallel Computing summer school. It covers the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background; a simple example: estimating π), Why does this even work? (the Law of Large Numbers, the Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
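The π example and inverse transform sampling from the outline can be sketched in a few lines; both are standard illustrations, not code taken from the slides:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# hit-or-miss estimate of pi: fraction of uniform points inside the unit disc, times 4
xy = rng.uniform(-1.0, 1.0, size=(n, 2))
inside = (xy ** 2).sum(axis=1) <= 1.0
pi_hat = 4.0 * inside.mean()
se = 4.0 * np.sqrt(inside.mean() * (1.0 - inside.mean()) / n)  # CLT standard error

# inverse transform sampling: u ~ U(0,1) mapped through the inverse CDF of Exp(rate=2)
u = rng.uniform(size=n)
x = -np.log1p(-u) / 2.0   # sample mean should be close to 1/rate = 0.5
```

The `se` line is the Central Limit Theorem at work: the error of the estimate shrinks as 1/sqrt(n), which is why the slides pair the example with the Law of Large Numbers.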
Wormhole Hamiltonian Monte Carlo
Lan, S; Streets, J; Shahbaba, B
2014-01-01
Copyright © 2014, Association for the Advancement of Artificial Intelligence. In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, espe...
International Nuclear Information System (INIS)
Creutz, M.
1986-01-01
The author discusses a recently developed algorithm for simulating statistical systems. The procedure interpolates between molecular dynamics methods and canonical Monte Carlo. The primary advantages are extremely fast simulations of discrete systems such as the Ising model and a relative insensitivity to random number quality. A variation of the algorithm gives rise to a deterministic dynamics for Ising spins. This model may be useful for high speed simulation of non-equilibrium phenomena
Directory of Open Access Journals (Sweden)
Charlie Samuya Veric
2001-12-01
Full Text Available The importance of Carlos Bulosan in Filipino and Filipino-American radical history and literature is indisputable. His eminence spans the Pacific, and he is known, diversely, as a radical poet, fictionist, novelist, and labor organizer. Author of the canonical America Is in the Heart, Bulosan is celebrated for chronicling the conditions in America in his time, such as racism and unemployment. In the history of criticism on Bulosan's life and work, however, there is an undeclared general consensus that views Bulosan and his work as coherent, permanent texts of radicalism and anti-imperialism. Central to the existence of such a tradition of critical reception are the generations of critics who, in more ways than one, control the discourse on and of Carlos Bulosan. This essay inquires into the sphere of critical reception that orders, for our time and for the time ahead, the reading and interpretation of Bulosan. What eye and seeing, the essay asks, determine the perception of Bulosan as the angel of radicalism? What is obscured in constructing Bulosan as an immutable figure of the political? What light does the reader conceive when the personal is brought into the open and situated against the political? The essay explores the answers to these questions in Bulosan's loving letters to various friends, strangers, and white American women. The presence of these interrogations, the essay believes, will ultimately secure the continuing importance of Carlos Bulosan to radical literature and history.
2009-01-01
On 7 April CERN will be holding a symposium to mark the 75th birthday of Carlo Rubbia, who shared the 1984 Nobel Prize for Physics with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. Following a presentation by Rolf Heuer, lectures will be given by eminent speakers on areas of science to which Carlo Rubbia has made decisive contributions. Michel Spiro, Director of the French National Institute of Nuclear and Particle Physics (IN2P3) of the CNRS, Lyn Evans, sLHC Project Leader, and Alan Astbury of the TRIUMF Laboratory will talk about the physics of the weak interaction and the discovery of the W and Z bosons. Former CERN Director-General Herwig Schopper will lecture on CERN’s accelerators from LEP to the LHC. Giovanni Bignami, former President of the Italian Space Agency, will speak about his work with Carlo Rubbia. Finally, Hans Joachim Schellnhuber of the Potsdam Institute for Climate Research and Sven Kul...
Optimal calibration of nuclear instrumentation
International Nuclear Information System (INIS)
Griffith, J.M.; Bray, M.A.; Feeley, J.J.
1981-01-01
Accurate knowledge of core power level is essential for the safe and efficient operation of nuclear power plants. Ionization chambers located outside the reactor core have the necessary reliability and response time characteristics and have been used extensively to indicate power level. The calibration of the ion chamber, and of the associated nuclear instrumentation (NI), has traditionally been based on the thermal power in the secondary coolant system. The usual NI calibration procedure consists of establishing steady-state operating conditions, calorimetrically determining the power at the secondary side of the steam generator, and adjusting the NI output to correspond to the measured thermal power. This study addresses several questions, including: (a) what sampling rate should be employed; (b) how many measurements are required; and (c) how additional power-level-related information, such as primary coolant loop measurements and knowledge of plant dynamics, can be included in the calibration procedure.
Efficient uncertainty quantification methodologies for high-dimensional climate land models
Energy Technology Data Exchange (ETDEWEB)
Sargsyan, Khachik [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Safta, Cosmin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Berry, Robert Dan [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Debusschere, Bert J. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Najm, Habib N. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)
2011-11-01
In this report, we proposed, examined and implemented approaches for performing efficient uncertainty quantification (UQ) in climate land models. Specifically, we applied a Bayesian compressive sensing framework to polynomial chaos spectral expansions, enhanced it with an iterative basis-reduction algorithm, and investigated the results on test models as well as on the Community Land Model (CLM). Furthermore, we discussed the construction of efficient quadrature rules for forward propagation of uncertainties from a high-dimensional, constrained input space to output quantities of interest. The work lays the groundwork for efficient forward UQ for high-dimensional, strongly non-linear and computationally costly climate models. Moreover, to investigate parameter inference approaches, we applied two variants of the Markov chain Monte Carlo (MCMC) method to a soil moisture dynamics submodel of the CLM. The evaluation of these algorithms gave us a good foundation for further building out the Bayesian calibration framework towards the goal of robust component-wise calibration.
Chiarucci, Simone; Wijnholds, Stefan J.
2018-02-01
Blind calibration, i.e. calibration without a priori knowledge of the source model, is robust to the presence of unknown sources such as transient phenomena or (low-power) broad-band radio frequency interference that escaped detection. In this paper, we present a novel method for blind calibration of a radio interferometric array assuming that the observed field only contains a small number of discrete point sources. We show the huge computational advantage over previous blind calibration methods and we assess its statistical efficiency and robustness to noise and the quality of the initial estimate. We demonstrate the method on actual data from a Low-Frequency Array low-band antenna station showing that our blind calibration is able to recover the same gain solutions as the regular calibration approach, as expected from theory and simulations. We also discuss the implications of our findings for the robustness of regular self-calibration to poor starting models.
A novel iterative energy calibration method for composite germanium detectors
International Nuclear Information System (INIS)
Pattabiraman, N.S.; Chintalapudi, S.N.; Ghugre, S.S.
2004-01-01
An automatic method for energy calibration of the observed experimental spectrum has been developed. The method is based on an iterative algorithm and provides an efficient way to perform energy calibrations after establishing the weights of the calibration data. An application of this novel technique to data acquired using composite detectors in an in-beam γ-ray spectroscopy experiment is presented.
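As an illustration of the kind of weighted fit such a calibration rests on, here is a minimal weighted least-squares energy calibration E = a + b·channel. The peak positions and centroid uncertainties are invented, 152Eu-like values, and the paper's iterative reweighting scheme is not reproduced:

```python
import numpy as np

def weighted_linear_calibration(channels, energies, sigmas):
    """Weighted least-squares fit of E = a + b*channel with weights 1/sigma^2
    (sigma = peak-centroid uncertainty)."""
    channels = np.asarray(channels, dtype=float)
    energies = np.asarray(energies, dtype=float)
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    X = np.vstack([np.ones_like(channels), channels]).T   # design matrix [1, ch]
    WX = X * w[:, None]
    a, b = np.linalg.solve(X.T @ WX, WX.T @ energies)     # normal equations
    return a, b

# hypothetical calibration peaks: (centroid channel, energy in keV, centroid sigma)
ch = [243.8, 688.9, 1928.5, 2815.7]
E = [121.8, 344.3, 964.1, 1408.0]
sig = [0.1, 0.1, 0.2, 0.3]
a, b = weighted_linear_calibration(ch, E, sig)   # gain b is about 0.5 keV/channel here
```

An iterative scheme such as the one in the abstract would refit after updating the weights from the residuals of the previous pass; the core fit at each pass looks like the above.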
Technical preparations for the in-vessel 14 MeV neutron calibration at JET
Energy Technology Data Exchange (ETDEWEB)
Batistoni, P., E-mail: paola.batistoni@enea.it [ENEA, Department of Fusion and Nuclear Safety Technology, I-00044, Frascati, Rome (Italy); EUROfusion Consortium, Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); Popovichev, S. [CCFE, Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); EUROfusion Consortium, Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); Crowe, R. [Remote Applications in Challenging Environments (RACE), Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); EUROfusion Consortium, Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); Cufar, A. [Reactor Physics Division, Jožef Stefan Institute, Jamova cesta 39, SI-1000, Ljubljana (Slovenia); EUROfusion Consortium, Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); Ghani, Z. [CCFE, Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); EUROfusion Consortium, Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); Keogh, K. [Remote Applications in Challenging Environments (RACE), Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); EUROfusion Consortium, Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); Peacock, A. [JET Exploitation Unit, Abingdon, Oxon, OX14 3DB (United Kingdom); Price, R. [Remote Applications in Challenging Environments (RACE), Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); EUROfusion Consortium, Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); Baranov, A.; Korotkov, S.; Lykin, P.; Samoshin, A. [All-Russia Research Institute of Automatics (VNIIA), 22, Sushchevskaya str., 127055, Moscow (Russian Federation)
2017-04-15
Highlights: • The JET 14 MeV neutron calibration requires a neutron generator to be deployed inside the vacuum vessel by means of the remote handling system. • A neutron generator of suitable intensity and compliant with physics, remote handling and safety requirements has been identified and procured. The scientific programme of the preparatory phase, devoted to fully characterizing the selected 14 MeV neutron generator, is discussed. • The aim is to measure the absolute neutron emission rate to within ±5% and the energy spectrum of the emitted neutrons as a function of angle. • The physics preparations, source issues, safety and engineering aspects required to calibrate the JET neutron detectors directly are discussed. - Abstract: The power output of fusion devices is measured from their neutron yields, which relate directly to the fusion yield. In this paper we describe the devices and methods that have been prepared to perform a new in situ 14 MeV neutron calibration at JET in view of the new DT campaign planned at JET in the coming years. The target accuracy of this calibration is ±10%, as required for ITER, where a precise neutron yield measurement is important, e.g., for tritium accountancy. In this paper, the constraints and early decisions which defined the main calibration approach are discussed, e.g., the choice of 14 MeV neutron source and the deployment method. The physics preparations, source issues, safety and engineering aspects required to calibrate the JET neutron detectors directly are also discussed. The existing JET remote-handling system will be used to deploy the neutron source inside the JET vessel. For this purpose, compatible tooling and systems necessary to ensure safe and efficient deployment have been developed. The scientific programme of the preparatory phase is devoted to fully characterizing the selected 14 MeV neutron generator to be used as the calibrating source, obtain a better understanding of the limitations of the
Multiscale Monte Carlo equilibration: Pure Yang-Mills theory
Endres, Michael G.; Brower, Richard C.; Detmold, William; Orginos, Kostas; Pochinsky, Andrew V.
2015-12-01
We present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.
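As a toy illustration of the coarse-to-fine idea (a 1-D Ising chain, not the paper's Yang-Mills setting), a thermalized coarse configuration can be prolonged by spin replication and then rethermalized with only a few heat-bath sweeps; all parameters below are illustrative:

```python
import math
import random

def heat_bath_sweep(spins, beta):
    """One heat-bath sweep of a 1-D Ising chain with periodic boundaries."""
    n = len(spins)
    for i in range(n):
        h = spins[(i - 1) % n] + spins[(i + 1) % n]   # local field
        p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * h))
        spins[i] = 1 if random.random() < p_up else -1
    return spins

def multiscale_thermalize(n_coarse, beta, coarse_sweeps=200, fine_sweeps=20):
    """Thermalize a coarse chain, prolong it (each coarse spin -> 2 fine
    spins), then rethermalize the fine chain with a short sweep budget."""
    coarse = [random.choice([-1, 1]) for _ in range(n_coarse)]
    for _ in range(coarse_sweeps):
        heat_bath_sweep(coarse, beta)
    fine = [s for s in coarse for _ in range(2)]      # prolongation by replication
    for _ in range(fine_sweeps):
        heat_bath_sweep(fine, beta)
    return fine
```

The point of the construction is that the prolonged configuration starts close to equilibrium, so the fine level needs far fewer sweeps than a cold or random start would.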
Calibration of cylindrical detectors using a simplified theoretical approach
International Nuclear Information System (INIS)
Abbas, Mahmoud I.; Nafee, Sherif; Selim, Younis S.
2006-01-01
The calibration of cylindrical detectors using different types of radioactive sources is a matter of routine. The most accurate method, direct experiment, is limited by several factors when the energy interval is broad: it requires a relatively large number of primary standards, implying a considerable investment of money and time. Several other techniques can be used instead, including Monte Carlo simulations and semi-empirical methods. Calculations based on the first technique require a good definition of the geometry and materials, including the dead layer and window thickness, together with an accurate set of cross-sections. The second technique requires two different types of experimental input: sources emitting cascade γ rays, to provide coincidence-summing corrections, and sources emitting isolated γ rays, to cover the wide energy range. Here, we introduce a new theoretical approach based on the Direct Statistical method proposed by Selim and Abbas to calculate the total and full-energy peak (photopeak) efficiencies for both point and thin circular-disk sources for scintillation and semiconductor detectors. The present method combines calculation of the average path length covered by the photon inside the detector active volume with the geometrical solid angle Ω to obtain a simple formula for the different efficiencies. Results from the present model were tested against data sets obtained with previous treatments in order to underline how simple and fast our calculations are.
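The core idea, combining an average chord length with the geometrical solid angle, can be sketched numerically for the simplest case of an on-axis point source and a right-cylindrical crystal. The dimensions and attenuation coefficient below are illustrative assumptions, not values from the paper:

```python
import math

def total_efficiency(d, R, L, mu):
    """Total efficiency for an on-axis point source at distance d (cm) from
    the front face of a cylindrical crystal (radius R, length L, linear
    attenuation coefficient mu in cm^-1). Only rays entering the front face
    are counted. Returns (efficiency, Omega/4pi)."""
    theta_max = math.atan(R / d)          # half-angle subtended by the front face
    n = 2000
    dtheta = theta_max / n
    eff = omega = 0.0
    for k in range(n):
        theta = (k + 0.5) * dtheta
        w = math.sin(theta)               # solid-angle weight
        # chord length: the photon exits through the rear face or the side wall
        path = min(L / math.cos(theta),
                   R / math.sin(theta) - d / math.cos(theta))
        eff += w * (1.0 - math.exp(-mu * path))
        omega += w
    # efficiency = (1/4pi) * int 2*pi*sin(theta) * (1 - exp(-mu*path)) dtheta
    return 0.5 * eff * dtheta, 0.5 * omega * dtheta
```

The total efficiency is always bounded by the geometrical factor Ω/4π, which it approaches as μ grows.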
International Nuclear Information System (INIS)
Talley, T.L.; Evans, F.
1988-01-01
Prior work demonstrated the importance of nuclear scattering to fusion product energy deposition in hot plasmas. This suggests careful examination of nuclear physics details in burning plasma simulations. An existing Monte Carlo fast ion transport code is being expanded to be a test bed for this examination. An initial extension, the energy deposition of fast alpha particles in a hot deuterium plasma, is reported. The deposition times and deposition ranges are modified by allowing nuclear scattering. Up to 10% of the initial alpha particle energy is carried to greater ranges and times by the more mobile recoil deuterons. 4 refs., 5 figs., 2 tabs
Uncertainty analysis in Monte Carlo criticality computations
International Nuclear Information System (INIS)
Qi Ao
2011-01-01
Highlights: ► Two types of uncertainty methods for k_eff Monte Carlo computations are examined. ► The sampling method places the fewest restrictions on perturbations but demands the most computing resources. ► The analytical method is limited to small perturbations of material properties. ► Practicality relies on efficiency, multiparameter applicability and data availability. - Abstract: Uncertainty analysis is imperative for nuclear criticality risk assessments when using Monte Carlo neutron transport methods to predict the effective neutron multiplication factor (k_eff) for fissionable material systems. For the validation of Monte Carlo codes for criticality computations against benchmark experiments, code accuracy and precision are measured by both the computational bias and the uncertainty in the bias. The uncertainty in the bias accounts for known or quantified experimental, computational and model uncertainties. For the application of Monte Carlo codes to criticality analysis of fissionable material systems, an administrative margin of subcriticality must be imposed to provide additional assurance of subcriticality for any unknown or unquantified uncertainties. Because the administrative margin of subcriticality has a substantial impact on the economics and safety of nuclear fuel cycle operations, recent interest in reducing it has made uncertainty analysis in criticality safety computations more risk-significant. This paper provides an overview of the two most popular k_eff uncertainty analysis methods for Monte Carlo criticality computations: (1) sampling-based methods, and (2) analytical methods. Examples are given to demonstrate their usage in k_eff uncertainty analysis due to uncertainties in both neutronic and non-neutronic parameters of fissionable material systems.
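A minimal sketch of the sampling-based method, applied to a toy infinite-medium multiplication factor rather than a transport calculation (all input distributions are assumed, not taken from the abstract):

```python
import random
import statistics

def k_inf(nu, sigma_f, sigma_a):
    """Toy infinite-medium multiplication factor (illustration only)."""
    return nu * sigma_f / sigma_a

def sampled_k_uncertainty(n=5000, seed=1):
    """Sampling-based propagation: draw inputs from assumed normal
    distributions, re-evaluate k, and report the sample mean and
    standard deviation of the resulting k population."""
    rng = random.Random(seed)
    ks = []
    for _ in range(n):
        nu = rng.gauss(2.43, 0.01)    # neutrons per fission (assumed)
        sf = rng.gauss(0.050, 0.001)  # macroscopic fission xs, cm^-1 (assumed)
        sa = rng.gauss(0.120, 0.002)  # macroscopic absorption xs, cm^-1 (assumed)
        ks.append(k_inf(nu, sf, sa))
    return statistics.mean(ks), statistics.stdev(ks)
```

Nothing in the loop depends on the perturbations being small, which is the method's advantage over analytical propagation; the price is one full re-evaluation per sample.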
Proton beam monitor chamber calibration
International Nuclear Information System (INIS)
Gomà, C; Meer, D; Safai, S; Lorentini, S
2014-01-01
The first goal of this paper is to clarify the reference conditions for the reference dosimetry of clinical proton beams. A clear distinction is made between proton beam delivery systems which should be calibrated with a spread-out Bragg peak field and those that should be calibrated with a (pseudo-)monoenergetic proton beam. For the latter, this paper also compares two independent dosimetry techniques to calibrate the beam monitor chambers: absolute dosimetry (of the number of protons exiting the nozzle) with a Faraday cup and reference dosimetry (i.e. determination of the absorbed dose to water under IAEA TRS-398 reference conditions) with an ionization chamber. To compare the two techniques, Monte Carlo simulations were performed to convert dose-to-water to proton fluence. A good agreement was found between the Faraday cup technique and the reference dosimetry with a plane-parallel ionization chamber. The differences—of the order of 3%—were found to be within the uncertainty of the comparison. For cylindrical ionization chambers, however, the agreement was only possible when positioning the effective point of measurement of the chamber at the reference measurement depth—i.e. not complying with IAEA TRS-398 recommendations. In conclusion, for cylindrical ionization chambers, IAEA TRS-398 reference conditions for monoenergetic proton beams led to a systematic error in the determination of the absorbed dose to water, especially relevant for low-energy proton beams. To overcome this problem, the effective point of measurement of cylindrical ionization chambers should be taken into account when positioning the reference point of the chamber. Within the current IAEA TRS-398 recommendations, it seems advisable to use plane-parallel ionization chambers—rather than cylindrical chambers—for the reference dosimetry of pseudo-monoenergetic proton beams. (paper)
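The dose-to-fluence conversion underlying the comparison can be sketched as follows; the stopping-power value used in the example is an assumed round number for illustration, not a value from the paper:

```python
def dose_to_fluence(dose_gy, mass_stopping_power_mev_cm2_per_g):
    """Convert absorbed dose to water (Gy) into proton fluence (cm^-2),
    assuming the dose is deposited by monoenergetic protons:
        D [Gy] = Phi [cm^-2] * (S/rho) [MeV cm^2/g] * 1.602e-10 [Gy g/MeV]
    """
    MEV_PER_G_TO_GY = 1.602176634e-10   # 1 MeV/g expressed in Gy
    return dose_gy / (mass_stopping_power_mev_cm2_per_g * MEV_PER_G_TO_GY)
```

With an assumed mass stopping power of about 4.6 MeV cm²/g (roughly the value for ~100 MeV protons in water), 1 Gy corresponds to a fluence on the order of 10⁹ protons/cm², which is the kind of quantity a Faraday cup measurement can be checked against.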
Implementation of Fast Emulator-based Code Calibration.
Energy Technology Data Exchange (ETDEWEB)
Bowman, Nathaniel; Denman, Matthew R
2016-08-01
Calibration is the process of using experimental data to gain more precise knowledge of simulator inputs. This process commonly involves the use of Markov-chain Monte Carlo, which requires running a simulator thousands of times. If we can create a faster program, called an emulator, that mimics the outputs of the simulator for an input range of interest, then we can speed up the process enough to make it feasible for expensive simulators. To this end, we implement a Gaussian-process emulator capable of reproducing the behavior of various long-running simulators to within acceptable tolerance. This fast emulator can be used in place of a simulator to run Markov-chain Monte Carlo in order to calibrate simulation parameters to experimental data. As a demonstration, this emulator is used to calibrate the inputs of an actual simulator against two sodium-fire experiments.
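A minimal sketch of the emulate-then-calibrate loop, with a toy simulator f(θ) = θ² standing in for the long-running code (all names, priors and values are illustrative, not from the report):

```python
import math
import random

def rbf(a, b, ell=1.0):
    """Squared-exponential (RBF) kernel."""
    return math.exp(-0.5 * (a - b) ** 2 / ell ** 2)

def gp_fit(xs, ys, noise=1e-6):
    """Solve K w = y by Gaussian elimination; returns emulator weights."""
    n = len(xs)
    A = [[rbf(xs[i], xs[j]) + (noise if i == j else 0.0) for j in range(n)]
         + [ys[i]] for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(n):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return xs, [A[i][n] / A[i][i] for i in range(n)]

def gp_predict(model, x):
    xs, w = model
    return sum(wi * rbf(xi, x) for xi, wi in zip(xs, w))

# Emulate the "expensive" simulator from 9 training runs, then calibrate theta
# against an observation y_obs with random-walk Metropolis MCMC.
train_x = [i * 0.5 for i in range(9)]              # theta in [0, 4]
model = gp_fit(train_x, [t * t for t in train_x])  # 9 simulator evaluations

def calibrate(y_obs, sigma=0.1, steps=4000, seed=3):
    rng = random.Random(seed)
    theta, samples = 2.0, []
    def logp(t):
        if not 0.0 <= t <= 4.0:
            return -float("inf")                   # flat prior on [0, 4]
        r = (y_obs - gp_predict(model, t)) / sigma
        return -0.5 * r * r
    lp = logp(theta)
    for _ in range(steps):
        prop = theta + rng.gauss(0.0, 0.2)
        lp_prop = logp(prop)
        if math.log(rng.random() + 1e-300) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta)
    return samples
```

The MCMC loop only ever queries the cheap `gp_predict`, so the simulator cost is fixed at the nine training runs regardless of chain length.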
International Nuclear Information System (INIS)
Gerlach, M.; Krumrey, M.; Cibik, L.; Mueller, P.; Ulm, G.
2009-01-01
Monte Carlo techniques are powerful tools to simulate the interaction of electromagnetic radiation with matter. One of the most widespread simulation program packages is Geant4. Almost all physical interaction processes can be included. However, it is not evident what accuracy can be obtained by a simulation. In this work, results of scattering experiments using monochromatized synchrotron radiation in the X-ray regime are quantitatively compared to the results of simulations using Geant4. Experiments were performed for various scattering foils made of different materials such as copper and gold. For energy-dispersive measurements of the scattered radiation, a cadmium telluride detector was used. The detector was fully characterized and calibrated with calculable undispersed as well as monochromatized synchrotron radiation. The obtained quantum efficiency and the response functions are in very good agreement with the corresponding Geant4 simulations. At the electron storage ring BESSY II the number of incident photons in the scattering experiments was measured with a photodiode that had been calibrated against a cryogenic radiometer, so that a direct comparison of scattering experiments with Monte Carlo simulations using Geant4 was possible. It was shown that Geant4 describes the photoeffect, including fluorescence as well as the Compton and Rayleigh scattering, with high accuracy, resulting in a deviation of typically less than 20%. Even polarization effects are widely covered by Geant4, and for Doppler broadening of Compton-scattered radiation the extension G4LECS can be included, but the fact that both features cannot be combined is a limitation. For most polarization-dependent simulations, good agreement with the experimental results was found, except for some orientations where Rayleigh scattering was overestimated in the simulation.
International Nuclear Information System (INIS)
Costa, Priscila
2014-01-01
The Cuno filter is part of the water processing circuit of the IEA-R1 reactor; when saturated, it is replaced and becomes radioactive waste, which must be managed. In this work, the primary characterization of the Cuno filter of the IEA-R1 nuclear reactor at IPEN was carried out using gamma spectrometry associated with the Monte Carlo method. The gamma spectrometry was performed using a hyperpure germanium (HPGe) detector. The germanium crystal represents the active detection volume of the HPGe detector, which has a region called the dead (or inactive) layer. A difference between theoretical and experimental values when obtaining the efficiency curve of these detectors has been reported in the literature. In this study we used the MCNP-4C code to obtain the detector calibration efficiency for the geometry of the Cuno filter, and the influences of the dead layer and of the cascade-summing effect in the HPGe detector were studied. The dead layer values were corrected by varying the thickness and the radius of the germanium crystal. The detector has 75.83 cm³ of active detection volume, according to information provided by the manufacturer. Nevertheless, the results showed that the actual active volume is smaller than specified, with the dead layer representing 16% of the total volume of the crystal. Analysis of the Cuno filter by gamma spectrometry enabled the identification of energy peaks, from which three radionuclides were identified in the filter: 108mAg, 110mAg and 60Co. From the calibration efficiency obtained by the Monte Carlo method, the activity estimated for these radionuclides is of the order of MBq. (author)
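The dead-layer adjustment can be sketched as a one-parameter geometry fit. The crystal dimensions below are illustrative choices giving roughly the quoted 75.8 cm³ total volume; they are not the actual detector specification:

```python
import math

def active_volume(radius, length, dead):
    """Active volume (cm^3) of a cylindrical Ge crystal with a uniform dead
    layer of thickness `dead` on all outer surfaces (a simplification)."""
    r = max(radius - dead, 0.0)
    h = max(length - 2.0 * dead, 0.0)
    return math.pi * r * r * h

def dead_fraction(radius, length, dead):
    """Fraction of the total crystal volume occupied by the dead layer."""
    return 1.0 - active_volume(radius, length, dead) / (math.pi * radius ** 2 * length)

def solve_dead_layer(radius, length, target_fraction):
    """Bisect for the dead-layer thickness giving the target dead fraction
    (the fraction is monotone increasing in the thickness)."""
    lo, hi = 0.0, min(radius, length / 2.0)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if dead_fraction(radius, length, mid) < target_fraction:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a crystal of radius 2.5 cm and length 3.862 cm, a dead fraction of 16% corresponds to a dead layer of roughly a millimetre, which is the order of magnitude such corrections typically take.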
Calculation Analysis of Calibration Factors of Airborne Gamma-ray Spectrometer
International Nuclear Information System (INIS)
Zhao Jun; Zhu Jinhui; Xie Honggang; He Qinglin
2009-01-01
To determine the calibration factors of an airborne gamma-ray spectrometer measuring a large-area gamma-ray-emitting source at different flying heights, a series of Monte Carlo simulations was performed. Response energy spectra of the NaI crystals in the airplane for natural-decay-series calibration pads, and calibration factors at different heights above a Cs-137 plane source, were obtained. The calculated results agreed well with the experimental data. (authors)
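For the unscattered component above a uniform infinite plane source, the height dependence reduces to an exponential-integral attenuation, which can be evaluated numerically; the air attenuation coefficient in the example is an assumed value, not from the paper:

```python
import math

def plane_source_flux(height_m, mu_air_per_m, s_area=1.0, n=20000, r_max=50000.0):
    """Unscattered photon flux at height h above an infinite uniform plane
    source with areal emission rate s_area. Integrating exp(-mu*r)/(4*pi*r^2)
    over the plane gives (s_area/2) * int_h^inf exp(-mu*r)/r dr, i.e. the
    closed form (s_area/2) * E1(mu*h); here the integral is done by midpoint."""
    lo, hi = height_m, height_m + r_max
    step = (hi - lo) / n
    acc = 0.0
    for k in range(n):
        r = lo + (k + 0.5) * step
        acc += math.exp(-mu_air_per_m * r) / r
    return 0.5 * s_area * acc * step

def height_correction(h1, h2, mu):
    """Calibration correction factor between two flying heights."""
    return plane_source_flux(h1, mu) / plane_source_flux(h2, mu)
```

With an assumed μ of about 0.009 m⁻¹ for 662 keV photons in air, the unscattered flux drops by more than half between 50 m and 100 m, which is why height-dependent calibration factors are needed.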
First results about on-ground calibration of the silicon tracker for the AGILE satellite
International Nuclear Information System (INIS)
Cattaneo, P.W.; Argan, A.; Boffelli, F.; Bulgarelli, A.; Buonomo, B.; Chen, A.W.; D'Ammando, F.; Froysland, T.; Fuschino, F.; Galli, M.; Gianotti, F.; Giuliani, A.; Longo, F.; Marisaldi, M.; Mazzitelli, G.; Pellizzoni, A.; Prest, M.; Pucella, G.; Quintieri, L.; Rappoldi, A.
2011-01-01
The AGILE scientific instrument has been calibrated with a tagged γ-ray beam at the Beam Test Facility (BTF) of the INFN Laboratori Nazionali di Frascati (LNF). The goals of the calibration were the measurement of the Point Spread Function (PSF) as a function of the photon energy and incident angle, and the validation of the Monte Carlo (MC) simulation of the silicon tracker operation. The calibration setup is described and some preliminary results are presented.
Multilevel Monte Carlo Approaches for Numerical Homogenization
Efendiev, Yalchin R.
2015-10-01
In this article, we study the application of multilevel Monte Carlo (MLMC) approaches to numerical random homogenization. Our objective is to compute the expectation of some functionals of the homogenized coefficients, or of the homogenized solutions. This is accomplished within MLMC by considering different sizes of representative volumes (RVEs). Many inexpensive computations with the smallest RVE size are combined with fewer expensive computations performed on larger RVEs. Likewise, when it comes to homogenized solutions, different levels of coarse-grid meshes are used to solve the homogenized equation. We show that, by carefully selecting the number of realizations at each level, we can achieve a speed-up in the computations in comparison to a standard Monte Carlo method. Numerical results are presented for both one-dimensional and two-dimensional test-cases that illustrate the efficiency of the approach.
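A minimal MLMC sketch for a 1-D analogue, where the homogenized coefficient of a layered medium is the harmonic mean of the cell conductivities and level l uses an RVE of 2^l cells (cell distribution and sample counts are illustrative):

```python
import random

def harmonic_mean(vals):
    return len(vals) / sum(1.0 / v for v in vals)

def mlmc_homogenized(L=6, n_samples=(4000, 2000, 1000, 500, 250, 125, 60), seed=7):
    """MLMC estimate of the expected 1-D homogenized coefficient (harmonic
    mean of i.i.d. cell conductivities uniform on [1, 2]). The telescoping
    sum E[A_L] = E[A_0] + sum_l E[A_l - A_{l-1}] is sampled level by level;
    levels are coupled by computing the coarse value from the first half of
    the same fine-level cells, so the corrections have small variance."""
    rng = random.Random(seed)
    total = 0.0
    for l in range(L + 1):
        n_cells = 2 ** l
        acc = 0.0
        for _ in range(n_samples[l]):
            cells = [rng.uniform(1.0, 2.0) for _ in range(n_cells)]
            fine = harmonic_mean(cells)
            if l == 0:
                acc += fine                          # base level: plain MC
            else:
                acc += fine - harmonic_mean(cells[: n_cells // 2])
        total += acc / n_samples[l]
    return total
```

Because the level corrections shrink as the RVE grows, most samples are spent on the cheap small-RVE levels, which is exactly the cost balance the article exploits.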
Calibration of Nanopositioning Stages
Directory of Open Access Journals (Sweden)
Ning Tan
2015-12-01
Accuracy is one of the most important criteria for the performance evaluation of micro- and nanorobots or systems. Nanopositioning stages are used to achieve high positioning resolution and accuracy for a wide and growing scope of applications. However, their positioning accuracy and repeatability are not well known and are difficult to guarantee, which induces many drawbacks for many applications. For example, in the mechanical characterisation of biological samples, it is difficult to perform several cycles in a repeatable way so as not to induce negative influences on the study. It also prevents one from controlling a tool accurately with respect to a sample without adding additional sensors for closed-loop control. This paper aims at quantifying positioning repeatability and accuracy based on the ISO 9283:1998 standard, and at analyzing factors influencing positioning accuracy through a case study of a 1-DoF (Degree-of-Freedom) nanopositioning stage. The influence of thermal drift is notably quantified. Performance improvements of the nanopositioning stage are then investigated through robot calibration (i.e., an open-loop approach). Two models (static and adaptive) are proposed to compensate for both geometric errors and thermal drift. Validation experiments conducted over a long period (several days) show that the accuracy of the stage is improved from the typical micrometer range to 400 nm using the static model, and even down to 100 nm using the adaptive model. In addition, we extend the 1-DoF calibration to multi-DoF with a case study of a 2-DoF nanopositioning robot. Results demonstrate that the model efficiently improved the 2D accuracy from 1400 nm to 200 nm.
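A static calibration model of this kind amounts to a least-squares fit of a gain-and-offset error law, inverted at command time; a minimal 1-DoF sketch (the adaptive model would add a temperature-dependent term to the same fit):

```python
def fit_static_model(commanded, measured):
    """Least-squares fit of measured = a * commanded + b, capturing the
    stage's gain and offset errors from a set of calibration points."""
    n = len(commanded)
    sx = sum(commanded)
    sy = sum(measured)
    sxx = sum(x * x for x in commanded)
    sxy = sum(x * y for x, y in zip(commanded, measured))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def corrected_command(target, a, b):
    """Invert the static error model so the stage lands on `target`."""
    return (target - b) / a
```

In open-loop calibration the model is identified once against a reference instrument and then applied to every subsequent command, with no extra sensor in the loop.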
Calibration of Geodetic Instruments
Directory of Open Access Journals (Sweden)
Marek Bajtala
2005-06-01
The problems of metrology and of securing systematic unification, correctness and reproducibility of standards are among the preferred requirements of theory and technical practice in geodesy. Requirements on the control and verification of measuring instruments and equipment are increasing, bringing the importance and timeliness of calibration to the foreground. This paper discusses calibration possibilities for the length scales (electronic rangefinders) and angle scales (horizontal circles) of geodetic instruments: the calibration of electronic rangefinders on a linear comparative baseline in the terrain; the primary standard of plane angle (an optical traverse) and its exploitation for the calibration of the horizontal circles of theodolites; the calibration equipment of the Institute of Slovak Metrology in Bratislava; and the calibration process and results from the calibration of the horizontal circles of selected geodetic instruments.
Kuzuoğlu, Mustafa; Özgün, Özlem
2013-01-01
The statistical properties of the electromagnetic field scattered from objects such as ships and decoys located on a rough sea surface (or of the radar cross-section parameter derived from this field) were obtained using transformation electromagnetics, the finite element method (FEM) and Monte Carlo approaches. The sea surface profile was modeled as a random process, and sample functions for the sea surface profile were obtained using the spectral power density function of this process....
Calibration of a microprobe array
International Nuclear Information System (INIS)
Schrader, Christian; Tutsch, Rainer
2012-01-01
Conventional coordinate measurement machines are not well adapted to the specific needs for the measurement of mechanical microstructures that are made in a highly parallel production process. In particular, the increase of the measurement speed is addressed by using an array of microprobes to measure a number of objects in parallel. It consists of multiple microprobes that are etched into the same silicon substrate. The styli are glued onto a boss structure in the middle of a silicon membrane. To facilitate the alignment of an array and the underlying wafer, the array is mounted on three stacked rotational stages. Due to the production tolerances, the positions of the touching balls of the probes relative to their pivot have to be calibrated. The probe sensitivity is another field of calibration. This paper describes an efficient calibration procedure of the probe array which is usable for arrays with a large number of probes and different array layouts. The validation method of this procedure is explained and calibration results are discussed (paper)
Mean field theory of the swap Monte Carlo algorithm.
Ikeda, Harukuni; Zamponi, Francesco; Ikeda, Atsushi
2017-12-21
The swap Monte Carlo algorithm combines translational motion with the exchange of particle species and is unprecedentedly efficient for some models of glass formers. In order to clarify the physics underlying this acceleration, we study the problem within the mean-field replica liquid theory. We extend the Gaussian ansatz so as to take into account the exchange of particles of different species, and we calculate analytically the dynamical glass transition points corresponding to the swap and standard Monte Carlo algorithms. We show that a system evolved with the standard Monte Carlo algorithm exhibits the dynamical transition before that of the swap Monte Carlo algorithm. We also test this result by performing computer simulations of a binary mixture of the Mari-Kurchan model, with both standard and swap Monte Carlo. This scenario provides a possible explanation for the efficiency of the swap Monte Carlo algorithm. Finally, we discuss how the thermodynamic theory of the glass transition should be modified based on our results.
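A minimal sketch of the swap move for a binary soft-sphere mixture in one dimension (the potential and all parameters are illustrative, not the Mari-Kurchan model):

```python
import math
import random

def pair_energy(r, sigma):
    """Soft repulsive inverse-power pair potential, cut off at 1.25*sigma."""
    rc = 1.25 * sigma
    if r >= rc:
        return 0.0
    return (sigma / r) ** 12 - (sigma / rc) ** 12

def total_energy(pos, dia, box):
    """Total energy of a 1-D periodic configuration with additive diameters."""
    e = 0.0
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = abs(pos[i] - pos[j])
            d = min(d, box - d)                      # minimum image
            e += pair_energy(d, 0.5 * (dia[i] + dia[j]))
    return e

def swap_move(pos, dia, box, beta, rng):
    """Attempt to exchange the diameters of two random particles, accepting
    with the Metropolis rule; returns True if the swap was accepted."""
    i, j = rng.sample(range(len(pos)), 2)
    e_old = total_energy(pos, dia, box)
    dia[i], dia[j] = dia[j], dia[i]
    de = total_energy(pos, dia, box) - e_old
    if de > 0.0 and rng.random() >= math.exp(-beta * de):
        dia[i], dia[j] = dia[j], dia[i]              # reject: swap back
        return False
    return True
```

Interleaving such swaps with ordinary displacement moves lets particles effectively change size in place, which is the mechanism credited for the algorithm's acceleration.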
Investigating the impossible: Monte Carlo simulations
International Nuclear Information System (INIS)
Kramer, Gary H.; Crowley, Paul; Burns, Linda C.
2000-01-01
Designing and testing new equipment can be an expensive and time consuming process or the desired performance characteristics may preclude its construction due to technological shortcomings. Cost may also prevent equipment being purchased for other scenarios to be tested. An alternative is to use Monte Carlo simulations to make the investigations. This presentation exemplifies how Monte Carlo code calculations can be used to fill the gap. An example is given for the investigation of two sizes of germanium detector (70 mm and 80 mm diameter) at four different crystal thicknesses (15, 20, 25, and 30 mm) and makes predictions on how the size affects the counting efficiency and the Minimum Detectable Activity (MDA). The Monte Carlo simulations have shown that detector efficiencies can be adequately modelled using photon transport if the data is used to investigate trends. The investigation of the effect of detector thickness on the counting efficiency has shown that thickness for a fixed diameter detector of either 70 mm or 80 mm is unimportant up to 60 keV. At higher photon energies, the counting efficiency begins to decrease as the thickness decreases as expected. The simulations predict that the MDA of either the 70 mm or 80 mm diameter detectors does not differ by more than a factor of 1.15 at 17 keV or 1.2 at 60 keV when comparing detectors of equivalent thicknesses. The MDA is slightly increased at 17 keV, and rises by about 52% at 660 keV, when the thickness is decreased from 30 mm to 15 mm. One could conclude from this information that the extra cost associated with the larger area Ge detectors may not be justified for the slight improvement predicted in the MDA. (author)
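The MDA comparisons above follow the Currie detection-limit formula, in which the MDA scales inversely with counting efficiency; a minimal sketch with illustrative counting parameters:

```python
import math

def currie_mda(background_counts, efficiency, live_time_s, gamma_yield):
    """Currie minimum detectable activity (Bq): the detection limit in counts,
    L_D = 2.71 + 4.65 * sqrt(B), divided by counting efficiency, live time
    and gamma emission probability."""
    l_d = 2.71 + 4.65 * math.sqrt(background_counts)
    return l_d / (efficiency * live_time_s * gamma_yield)
```

Halving the efficiency doubles the MDA, which is why the predicted 1.15 to 1.2 efficiency ratios between the detector sizes translate into only modest MDA differences.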
Calibration of HPGe well-type detectors using the direct mathematical method
International Nuclear Information System (INIS)
Abbas, M.I.; Nafee, S.S.
2006-01-01
A new theoretical approach is presented here to calibrate HPGe well-type detectors using cylindrical sources. This approach depends on the accurate calculation of two important factors: the first is the average path length covered by the photon inside the active volume of a gamma detector, whereas the second is the geometrical solid angle (Ω_geo) subtended by the source at the detector. These two factors are derived in straightforward analytical formulae. The present model leads to a direct evaluation of the full-energy peak efficiency (ε_p) for these source-detector geometries. Its validity is demonstrated via systematic comparisons for these geometries with the experimental method and with Monte Carlo simulations, as well as its advantage over the traditional direct mathematical method reported by Abbas, and by Abbas and Selim. Simple programming is possible and short computer time is needed. (author)
Overcoming the reference large-area sources non-uniformity in surface area monitor calibration
Energy Technology Data Exchange (ETDEWEB)
Junior, Iremar Alves S.; Siqueira, Paulo de T.D.; Xavier, Marcs; Nascimento, Eduardo do; Potiens, Maria da Penha A., E-mail: iremarjr@usp.br, E-mail: ptsiquei@ipen.br, E-mail: mxavier@ipen.br, E-mail: eduardon@ufba.br, E-mail: mppalbu@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)
2015-07-01
This paper describes a study using MCNP5, a Monte Carlo based radiation transport code, to evaluate the possibility of using reference large-area sources that do not meet the uniformity recommendations of ISO 8769:2010 for the calibration of surface contamination monitors. 14C, 36Cl, 99Tc, 137Cs and 90Sr+90Y large-area reference sources were simulated, as well as the setup and the detector probe. Simulations were carried out for both uniform and non-uniform surface distributions. In the non-uniform case, specific weights for each region, obtained in the uniformity evaluation measurements, were considered. For each simulation, the average number of signals generated in the detector probe was considered, i.e., the fraction of histories depositing energy in the corresponding gas-filled region of the detector was determined. Simulation results show differences in detection efficiency of up to 15%. (author)
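The weighting of regional efficiencies by measured emission shares can be sketched as a weighted average (all values are illustrative):

```python
def weighted_efficiency(region_effs, region_weights):
    """Detector response to a non-uniform large-area source: each region's
    (simulated) efficiency weighted by its measured share of total emission."""
    total = sum(region_weights)
    return sum(e * w for e, w in zip(region_effs, region_weights)) / total

def uniformity_bias(region_effs, region_weights):
    """Relative difference between the non-uniform-source response and the
    response that a perfectly uniform source would give."""
    uniform = sum(region_effs) / len(region_effs)
    return weighted_efficiency(region_effs, region_weights) / uniform - 1.0
```

If the emission happens to concentrate in a region where the probe is most sensitive, the bias is positive; this is the kind of effect behind the up-to-15% differences reported.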
New radiation protection calibration facility at CERN.
Brugger, Markus; Carbonez, Pierre; Pozzi, Fabio; Silari, Marco; Vincke, Helmut
2014-10-01
The CERN radiation protection group has designed a new state-of-the-art calibration laboratory to replace the present facility, which is >20 y old. The new laboratory, presently under construction, will be equipped with neutron and gamma sources, as well as an X-ray generator and a beta irradiator. The present work describes the project to design the facility, including the facility placement criteria, the 'point-zero' measurements and the shielding study performed via FLUKA Monte Carlo simulations.
Monte Carlo methods for the self-avoiding walk
International Nuclear Information System (INIS)
Janse van Rensburg, E J
2009-01-01
The numerical simulation of self-avoiding walks remains a significant component in the study of random objects in lattices. In this review, I give a comprehensive overview of the current state of Monte Carlo simulations of models of self-avoiding walks. The self-avoiding walk model is revisited, and the motivations for Monte Carlo simulations of this model are discussed. Efficient sampling of self-avoiding walks remains an elusive objective, but significant progress has been made over the last three decades. The model still poses challenging numerical questions however, and I review specific Monte Carlo methods for improved sampling including general Monte Carlo techniques such as Metropolis sampling, umbrella sampling and multiple Markov Chain sampling. In addition, specific static and dynamic algorithms for walks are presented, and I give an overview of recent innovations in this field, including algorithms such as flatPERM, flatGARM and flatGAS. (topical review)
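One classic static method from this literature, Rosenbluth sampling, can be sketched in a few lines; the weight normalization below is a convention chosen for this sketch, not prescribed by the review:

```python
import random

def rosenbluth_saw(n_steps, rng):
    """Grow a 2-D self-avoiding walk on the square lattice with Rosenbluth
    weighting: at each step choose uniformly among the unvisited neighbours,
    and accumulate a weight that corrects the resulting sampling bias.
    Returns (walk, weight); weight 0.0 means the walk got trapped."""
    walk = [(0, 0)]
    occupied = {(0, 0)}
    weight = 1.0
    for _ in range(n_steps):
        x, y = walk[-1]
        free = [(x + dx, y + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if (x + dx, y + dy) not in occupied]
        if not free:
            return walk, 0.0                 # trapped: dead-end configuration
        weight *= len(free) / 3.0            # 3 = max continuations; a convention
        step = rng.choice(free)
        walk.append(step)
        occupied.add(step)
    return walk, weight
```

Averages over walks must be weighted by these factors; the attrition from trapped walks is one of the sampling difficulties that motivated the pruned-and-enriched methods (PERM and its flat-histogram variants) mentioned above.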
Monte Carlo Methods in Physics
International Nuclear Information System (INIS)
Santoso, B.
1997-01-01
The method of Monte Carlo integration is reviewed briefly and some of its applications in physics are explained. A numerical experiment on the random generators used in Monte Carlo techniques is carried out to show the behavior of the randomness of the various methods of generating them. To account for the weight function involved in the Monte Carlo, the Metropolis method is used. The results of the experiment show no regular pattern in the numbers generated, indicating that the generators are reasonably good, while the sampled values follow the expected statistical distribution law. Some applications of the Monte Carlo methods in physics are then given. The physical problems are chosen such that the models have available solutions, either exact or approximate, against which the Monte Carlo calculations can be compared. The comparisons show that good agreement is obtained for the models considered.
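The two ingredients named above, plain Monte Carlo integration and Metropolis sampling of a weight function, can be sketched as:

```python
import math
import random

def plain_mc_integral(f, a, b, n, rng):
    """Plain Monte Carlo estimate of the integral of f over [a, b]."""
    return (b - a) * sum(f(a + (b - a) * rng.random()) for _ in range(n)) / n

def metropolis_expectation(g, log_weight, n, step, rng, x0=0.0, burn=500):
    """Estimate E_w[g] where samples follow the (unnormalized) weight
    exp(log_weight(x)), using a random-walk Metropolis chain."""
    x, lw = x0, log_weight(x0)
    acc = 0.0
    for i in range(n + burn):
        prop = x + rng.uniform(-step, step)
        lw_prop = log_weight(prop)
        if math.log(rng.random() + 1e-300) < lw_prop - lw:
            x, lw = prop, lw_prop
        if i >= burn:
            acc += g(x)
    return acc / n
```

The Metropolis chain only ever evaluates the weight up to a constant, which is what makes it usable when the normalization of the weight function is unknown.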
Calibrations of a tritium extraction facility
International Nuclear Information System (INIS)
Bretscher, M.M.; Oliver, B.M.; Farrar, H. IV.
1983-01-01
A tritium extraction facility has been built for the purpose of measuring the absolute tritium concentration in neutron-irradiated lithium metal samples. Two independent calibration procedures have been used to determine what fraction, if any, of tritium is lost during the extraction process. The first procedure compares independently measured 4 He and 3 H concentrations from the 6 Li(n,α)T reaction. The second procedure compares measured 6 Li(n,α)T/ 197 Au(n,γ) 198 Au thermal neutron reaction rate ratios with those obtained from Monte Carlo calculations using well-known cross sections. Both calibration methods show that, within experimental errors (approx. 1.5%), no tritium is lost during the extraction process
Monte Carlo Form-Finding Method for Tensegrity Structures
Li, Yue; Feng, Xi-Qiao; Cao, Yan-Ping
2010-05-01
In this paper, we propose a Monte Carlo-based approach to solve tensegrity form-finding problems. It uses a stochastic procedure to find the deterministic equilibrium configuration of a tensegrity structure. The suggested Monte Carlo form-finding (MCFF) method is highly efficient because it does not involve complicated matrix operations and symmetry analysis and it works for arbitrary initial configurations. Both regular and non-regular tensegrity problems of large scale can be solved. Some representative examples are presented to demonstrate the efficiency and accuracy of this versatile method.
Energy Technology Data Exchange (ETDEWEB)
Moralles, M. [Centro do Reator de Pesquisas, Instituto de Pesquisas Energeticas e Nucleares, Caixa Postal 11049, CEP 05422-970, Sao Paulo SP (Brazil)], E-mail: moralles@ipen.br; Bonifacio, D.A.B. [Centro do Reator de Pesquisas, Instituto de Pesquisas Energeticas e Nucleares, Caixa Postal 11049, CEP 05422-970, Sao Paulo SP (Brazil); Bottaro, M.; Pereira, M.A.G. [Instituto de Eletrotecnica e Energia, Universidade de Sao Paulo, Av. Prof. Luciano Gualberto, 1289, CEP 05508-010, Sao Paulo SP (Brazil)
2007-09-21
Spectra of calibration sources and X-ray beams were measured with a cadmium telluride (CdTe) detector. The response function of the detector was simulated using the GEANT4 Monte Carlo toolkit. Trapping of charge carriers was taken into account using the Hecht equation in the active zone of the CdTe crystal, combined with a continuous function to reproduce the drop of charge-collection efficiency near the metallic contacts and borders. Rise-time discrimination is approximated by a cut on the depth of the interaction relative to the cathode, with corrections that depend on the pulse amplitude. The least-squares method with truncation was employed to unfold X-ray spectra typically used in medical diagnostics, and the results were compared with reference data.
Moralles, M.; Bonifácio, D. A. B.; Bottaro, M.; Pereira, M. A. G.
2007-09-01
Spectra of calibration sources and X-ray beams were measured with a cadmium telluride (CdTe) detector. The response function of the detector was simulated using the GEANT4 Monte Carlo toolkit. Trapping of charge carriers was taken into account using the Hecht equation in the active zone of the CdTe crystal, combined with a continuous function to reproduce the drop of charge-collection efficiency near the metallic contacts and borders. Rise-time discrimination is approximated by a cut on the depth of the interaction relative to the cathode, with corrections that depend on the pulse amplitude. The least-squares method with truncation was employed to unfold X-ray spectra typically used in medical diagnostics, and the results were compared with reference data.
Metropolis Methods for Quantum Monte Carlo Simulations
Ceperley, D. M.
2003-01-01
Since its first description fifty years ago, the Metropolis Monte Carlo method has been used in a variety of different ways for the simulation of continuum quantum many-body systems. This paper will consider some of the generalizations of the Metropolis algorithm employed in quantum Monte Carlo: variational Monte Carlo, dynamical methods for projector Monte Carlo (i.e. diffusion Monte Carlo with rejection), multilevel sampling in path integral Monte Carlo, the sampling of permutations, ...
Calibration platforms for gravimeters
Vanruymbeke, M.
Several methods for calibrating gravimeters using the inertial acceleration produced by a vertical motion are described. The VRR 8601 calibrating platform is especially designed to calibrate La Coste and Romberg gravimeters. For heavier gravimeters, such as tidal La Coste or superconducting instruments, two other principles can be used to lift the platform sinusoidally: a mercury crapaudine or rotation on an inclined plane.
Biased Monte Carlo optimization: the basic approach
International Nuclear Information System (INIS)
Campioni, Luca; Scardovelli, Ruben; Vestrucci, Paolo
2005-01-01
It is well-known that the Monte Carlo method is very successful in tackling several kinds of system simulations. It often happens that one has to deal with rare events, and the use of a variance reduction technique is almost mandatory, in order to have Monte Carlo efficient applications. The main issue associated with variance reduction techniques is related to the choice of the value of the biasing parameter. Actually, this task is typically left to the experience of the Monte Carlo user, who has to make many attempts before achieving an advantageous biasing. A valuable result is provided: a methodology and a practical rule addressed to establish an a priori guidance for the choice of the optimal value of the biasing parameter. This result, which has been obtained for a single component system, has the notable property of being valid for any multicomponent system. In particular, in this paper, the exponential and the uniform biases of exponentially distributed phenomena are investigated thoroughly
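As an illustration of the biasing idea discussed above, the following sketch estimates the rare-event probability P(T > t) for an exponentially distributed T by sampling from an exponentially biased density and reweighting with the likelihood ratio; the rates, threshold and sample size are assumptions for the example, not values from the paper:

```python
import math
import random

def rare_event_prob(lmbda, threshold, lmbda_b, n=50000, seed=2):
    """Estimate P(T > threshold) for T ~ Exp(lmbda) by drawing from the
    biased density Exp(lmbda_b) and reweighting each hit with the
    likelihood ratio f(x)/g(x); taking lmbda_b < lmbda pushes samples
    toward the rare-event region and reduces the variance."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(lmbda_b)
        if x > threshold:
            total += (lmbda / lmbda_b) * math.exp(-(lmbda - lmbda_b) * x)
    return total / n

# P(T > 10) for rate 1: the exact value is exp(-10), far too rare
# for plain Monte Carlo with this sample size
est = rare_event_prob(lmbda=1.0, threshold=10.0, lmbda_b=0.1)
```

The choice lmbda_b = 1/threshold roughly centers the biased samples on the rare-event boundary, which is the kind of a-priori tuning rule the paper addresses.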
DEFF Research Database (Denmark)
Courtney, Michael
presents four concepts for performing such a nacelle lidar calibration. Of the four methods, two are found to be immediately relevant and are pursued in some detail. The first of these is a line of sight calibration method in which both lines of sight (for a two beam lidar) are individually calibrated...... by accurately aligning the beam to pass close to a reference wind speed sensor. A testing procedure is presented, reporting requirements outlined and the uncertainty of the method analysed. It is seen that the main limitation of the line of sight calibration method is the time required to obtain...
ATLAS Muon Calibration Framework
Carlino, Dr; The ATLAS collaboration; Jha, Dr; Kortner, Dr; Mazzaferro, Dr; Petrucci, Dr; Salvo, Dr; Simone, Dr; WALKER, Dr
2010-01-01
Automated calibration of ATLAS detector subsystems (such as the MDT and RPC chambers) is performed at remote sites, called Remote Calibration Centers. The calibration data for the assigned part of the detector are processed at these centers, and the results are sent back to CERN for general use in reconstruction and analysis. In this work, we present recent developments in the data discovery mechanism and in the integration of Ganga as a backend, which allows for the specification, submission, bookkeeping and post-processing of calibration tasks on a wide set of available heterogeneous resources at the remote centers.
ATLAS Muon Calibration Framework
CARLINO, G; The ATLAS collaboration; Di Simone, A; Doria, A; Jha, MK; Mazzaferro, L; Walker, R
2011-01-01
Automated calibration of ATLAS detector subsystems (such as the MDT and RPC chambers) is performed at remote sites, called Remote Calibration Centers. The calibration data for the assigned part of the detector are processed at these centers, and the results are sent back to CERN for general use in reconstruction and analysis. In this work, we present recent developments in the data discovery mechanism and in the integration of Ganga as a backend, which allows for the specification, submission, bookkeeping and post-processing of calibration tasks on a wide set of available heterogeneous resources at the remote centers.
RF impedance measurement calibration
International Nuclear Information System (INIS)
Matthews, P.J.; Song, J.J.
1993-01-01
The intent of this note is not to explain all of the available calibration methods in detail. Instead, we focus on the calibration methods of interest for RF impedance coupling measurements and attempt to explain: (1) the standards and measurements necessary for the various calibration techniques; (2) the advantages and disadvantages of each technique; (3) the mathematical manipulations that need to be applied to the measured standards and devices; (4) an outline of the steps needed for writing a calibration routine that operates from a remote computer. For further details of the various techniques presented in this note, the reader should consult the references
Atomistic Monte Carlo simulation of lipid membranes
DEFF Research Database (Denmark)
Wüstner, Daniel; Sklenar, Heinz
2014-01-01
Biological membranes are complex assemblies of many different molecules of which analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction...... into the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate for a concrete example, how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches...
Lectures on Monte Carlo methods
Madras, Neal
2001-01-01
Monte Carlo methods form an experimental branch of mathematics that employs simulations driven by random number generators. These methods are often used when others fail, since they are much less sensitive to the "curse of dimensionality", which plagues deterministic methods in problems with a large number of variables. Monte Carlo methods are used in many fields: mathematics, statistics, physics, chemistry, finance, computer science, and biology, for instance. This book is an introduction to Monte Carlo methods for anyone who would like to use these methods to study various kinds of mathemati
Wormhole Hamiltonian Monte Carlo
Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak
2015-01-01
In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function. PMID:25861551
Wormhole Hamiltonian Monte Carlo.
Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak
2014-07-31
In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function.
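The residual-energy idea in the mode search can be sketched in one dimension as follows; the target, mode list and mixture widths are illustrative, and the actual algorithm operates on high-dimensional targets:

```python
import math

def residual_energy(x, log_target, modes, weights, sigmas):
    """Energy landscape for the mode search: the (unnormalized) target
    density minus a Gaussian-mixture approximation built from the modes
    discovered so far. Near already-known modes the residual is
    flattened toward zero, so exploring this surface favours genuinely
    new modes. One-dimensional sketch."""
    mixture = sum(w * math.exp(-0.5 * ((x - m) / s) ** 2)
                  / (s * math.sqrt(2.0 * math.pi))
                  for m, w, s in zip(modes, weights, sigmas))
    return -(math.exp(log_target(x)) - mixture)

# standard normal target whose single mode has already been discovered:
log_target = lambda x: -0.5 * x * x - 0.5 * math.log(2.0 * math.pi)
r_known = residual_energy(0.0, log_target, [0.0], [1.0], [1.0])  # flattened
r_fresh = residual_energy(0.0, log_target, [], [], [])           # untouched
```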
Mielke, Steven L; Dinpajooh, Mohammadhasan; Siepmann, J Ilja; Truhlar, Donald G
2013-01-07
We present a procedure to calculate ensemble averages, thermodynamic derivatives, and coordinate distributions by effective classical potential methods. In particular, we consider the displaced-points path integral (DPPI) method, which yields exact quantal partition functions and ensemble averages for a harmonic potential and approximate quantal ones for general potentials, and we discuss the implementation of the new procedure in two Monte Carlo simulation codes, one that uses uncorrelated samples to calculate absolute free energies, and another that employs Metropolis sampling to calculate relative free energies. The results of the new DPPI method are compared to those from accurate path integral calculations as well as to results of two other effective classical potential schemes for the case of an isolated water molecule. In addition to the partition function, we consider the heat capacity and expectation values of the energy, the potential energy, the bond angle, and the OH distance. We also consider coordinate distributions. The DPPI scheme performs best among the three effective potential schemes considered and achieves very good accuracy for all of the properties considered. A key advantage of the effective potential schemes is that they display much lower statistical sampling variances than those for accurate path integral calculations. The method presented here shows great promise for including quantum effects in calculations on large systems.
DEFF Research Database (Denmark)
Yordanova, Ginka; Vesth, Allan
The report describes site calibration measurements carried out on a site in Denmark. The measurements are carried out in accordance to Ref. [1]. The site calibration is carried out before a power performance measurement on a given turbine to clarify the influence from the terrain on the ratio...
Advanced Multilevel Monte Carlo Methods
Jasra, Ajay
2017-04-24
This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature and we describe different strategies which facilitate the application of MLMC within these methods.
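A minimal sketch of the telescoping MLMC idea, using coupled Euler discretizations of geometric Brownian motion as the hierarchy of biased approximations (the model, level count and per-level sample sizes are illustrative, not from the article):

```python
import math
import random

def euler_gbm(s0, r, sigma, T, n_steps, dWs):
    """One Euler path of dS = r*S*dt + sigma*S*dW using the supplied
    Brownian increments (len(dWs) == n_steps)."""
    dt = T / n_steps
    s = s0
    for dW in dWs:
        s += r * s * dt + sigma * s * dW
    return s

def mlmc_mean(s0=1.0, r=0.05, sigma=0.2, T=1.0,
              levels=4, n_per_level=20000, seed=3):
    """Telescoping MLMC estimate of E[S_T]: level 0 uses a single Euler
    step, level l uses 2**l steps, and each correction term couples the
    fine and coarse paths by sharing Brownian increments."""
    rng = random.Random(seed)
    est = 0.0
    for level in range(levels):
        nf = 2 ** level
        dt = T / nf
        acc = 0.0
        for _ in range(n_per_level):
            dWs = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(nf)]
            fine = euler_gbm(s0, r, sigma, T, nf, dWs)
            if level == 0:
                acc += fine
            else:
                # coarse path reuses the same randomness, summed pairwise
                coarse_dWs = [dWs[2 * k] + dWs[2 * k + 1]
                              for k in range(nf // 2)]
                acc += fine - euler_gbm(s0, r, sigma, T, nf // 2, coarse_dWs)
        est += acc / n_per_level
    return est

# exact mean of geometric Brownian motion: s0 * exp(r * T)
est = mlmc_mean()
```

The coupling makes the correction terms have small variance, so fewer samples are needed at the expensive fine levels.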
Handbook of Monte Carlo methods
National Research Council Canada - National Science Library
Kroese, Dirk P; Taimre, Thomas; Botev, Zdravko I
2011-01-01
... in rapid succession, the staggering number of related techniques, ideas, concepts and algorithms makes it difficult to maintain an overall picture of the Monte Carlo approach. This book attempts to encapsulate the emerging dynamics of this field of study"--
Sandia WIPP calibration traceability
Energy Technology Data Exchange (ETDEWEB)
Schuhen, M.D. [Sandia National Labs., Albuquerque, NM (United States); Dean, T.A. [RE/SPEC, Inc., Albuquerque, NM (United States)
1996-05-01
This report summarizes the work performed to establish calibration traceability for the instrumentation used by Sandia National Laboratories at the Waste Isolation Pilot Plant (WIPP) during testing from 1980-1985. Identifying the calibration traceability is an important part of establishing a pedigree for the data and is part of the qualification of existing data. In general, the requirement states that the calibration of Measuring and Test equipment must have a valid relationship to nationally recognized standards or the basis for the calibration must be documented. Sandia recognized that just establishing calibration traceability would not necessarily mean that all QA requirements were met during the certification of test instrumentation. To address this concern, the assessment was expanded to include various activities.
Carlos Chagas: biographical sketch.
Moncayo, Alvaro
2010-01-01
Carlos Chagas was born on 9 July 1878 in the farm "Bon Retiro" located close to the City of Oliveira in the interior of the State of Minas Gerais, Brazil. He started his medical studies in 1897 at the School of Medicine of Rio de Janeiro. In the late 19th century, the works by Louis Pasteur and Robert Koch induced a change in the medical paradigm with emphasis on experimental demonstrations of the causal link between microbes and disease. During the same years in Germany appeared the pathological concept of disease, linking organic lesions with symptoms. All these innovations were adopted by the reforms of the medical schools in Brazil and influenced the scientific formation of Chagas. Chagas completed his medical studies between 1897 and 1903 and his examinations during these years were always ranked with high grades. Oswaldo Cruz accepted Chagas as a doctoral candidate and directed his thesis on "Hematological studies of Malaria" which was received with honors by the examiners. In 1903 the director appointed Chagas as research assistant at the Institute. In those years, the Institute of Manguinhos, under the direction of Oswaldo Cruz, initiated a process of institutional growth and gathered a distinguished group of Brazilian and foreign scientists. In 1907, he was requested to investigate and control a malaria outbreak in Lassance, Minas Gerais. At that moment, Chagas could not have imagined that this field research was the beginning of one of the most notable medical discoveries. Chagas was, at the age of 28, a Research Assistant at the Institute of Manguinhos and was studying a new flagellate parasite isolated from triatomine insects captured in the State of Minas Gerais. Chagas made his discoveries in this order: first the causal agent, then the vector and finally the human cases. These notable discoveries were carried out by Chagas in twenty months. At the age of 33 Chagas had completed his discoveries and published the scientific articles that gave him world
Monte Carlo simulation of mixed neutron-gamma radiation fields and dosimetry devices
International Nuclear Information System (INIS)
Zhang, Guoqing
2011-01-01
Monte Carlo methods based on random sampling are widely used in different fields for their capability of solving problems with a large number of coupled degrees of freedom. In this work, Monte Carlo methods are successfully applied to the simulation of the mixed neutron-gamma field in an interim storage facility and of neutron dosimeters of different types. Details are discussed in two parts: In the first part, the method of simulating an interim storage facility loaded with CASTORs is presented. The size of a CASTOR is rather large (several meters) and the CASTOR wall is very thick (tens of centimeters). Obtaining dose rates outside a CASTOR with reasonable errors usually costs hours or even days. Simulating a large number of CASTORs in an interim storage facility would need weeks or even months per calculation. Variance reduction techniques were therefore used to reduce the calculation time and to achieve reasonable relative errors. Source clones were applied to avoid unnecessary repeated calculations. In addition, the simulations were performed on a cluster system. With the calculation techniques discussed above, the efficiency of the calculations can be improved considerably. In the second part, the methods of simulating the response of neutron dosimeters are presented. An Alnor albedo dosimeter was modelled in MCNP and simulated in the facility to calculate the calibration factor, giving the evaluated response to a Cf-252 source. The angular response of Makrofol detectors to fast neutrons has also been investigated. As a kind of SSNTD, Makrofol can detect fast neutrons by recording neutron-induced heavy charged recoils. To obtain the information on the charged recoils, general-purpose Monte Carlo codes were used for transporting the incident neutrons. The response of Makrofol to fast neutrons depends on several factors. Based on the parameters which affect the track revealing, the formation of visible tracks was determined. For
Monte Carlo simulation of mixed neutron-gamma radiation fields and dosimetry devices
Energy Technology Data Exchange (ETDEWEB)
Zhang, Guoqing
2011-12-22
Monte Carlo methods based on random sampling are widely used in different fields for their capability of solving problems with a large number of coupled degrees of freedom. In this work, Monte Carlo methods are successfully applied to the simulation of the mixed neutron-gamma field in an interim storage facility and of neutron dosimeters of different types. Details are discussed in two parts: In the first part, the method of simulating an interim storage facility loaded with CASTORs is presented. The size of a CASTOR is rather large (several meters) and the CASTOR wall is very thick (tens of centimeters). Obtaining dose rates outside a CASTOR with reasonable errors usually costs hours or even days. Simulating a large number of CASTORs in an interim storage facility would need weeks or even months per calculation. Variance reduction techniques were therefore used to reduce the calculation time and to achieve reasonable relative errors. Source clones were applied to avoid unnecessary repeated calculations. In addition, the simulations were performed on a cluster system. With the calculation techniques discussed above, the efficiency of the calculations can be improved considerably. In the second part, the methods of simulating the response of neutron dosimeters are presented. An Alnor albedo dosimeter was modelled in MCNP and simulated in the facility to calculate the calibration factor, giving the evaluated response to a Cf-252 source. The angular response of Makrofol detectors to fast neutrons has also been investigated. As a kind of SSNTD, Makrofol can detect fast neutrons by recording neutron-induced heavy charged recoils. To obtain the information on the charged recoils, general-purpose Monte Carlo codes were used for transporting the incident neutrons. The response of Makrofol to fast neutrons depends on several factors. Based on the parameters which affect the track revealing, the formation of visible tracks was determined. For
Calibration and Performance Testing of Sodium Iodide, NaI (Tl ...
African Journals Online (AJOL)
The performance testing of a newly acquired sodium iodide detector (NaI), (Tl)) at Ghana Atomic Energy Commission (GAEC) was investigated by carrying out energy and efficiency calibration on the detector, as well as validation of its calibration. The energy and efficiency calibrations were performed using mixed ...
Reconstruction and Calibration of Small Radius Jets in the ATLAS Experiment for LHC Run 2
Loch, Peter; The ATLAS collaboration
2017-01-01
Small-radius jets with R = 0.4 are standard tools in ATLAS physics analyses. They are calibrated using a sequence of Monte Carlo simulation-derived calibrations and corrections, followed by in-situ calibrations based on the transverse momentum balance between the probed jets and well-measured reference signals. In this talk the inputs to jet reconstruction in LHC Run 2, comprising calorimeter cell clusters, reconstructed charged-particle tracks, and particle flow objects, are discussed together with the jet energy calibration scheme. Selected results on the performance of the procedure and the associated systematic uncertainties are presented.
Ren, Huiying; Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi; Bao, Jie; Swiler, Laura
2017-12-01
In this study we developed an efficient Bayesian inversion framework for interpreting marine seismic Amplitude Versus Angle and Controlled-Source Electromagnetic data for marine reservoir characterization. The framework uses a multi-chain Markov-chain Monte Carlo sampler, which is a hybrid of DiffeRential Evolution Adaptive Metropolis and Adaptive Metropolis samplers. The inversion framework is tested by estimating reservoir-fluid saturations and porosity based on marine seismic and Controlled-Source Electromagnetic data. The multi-chain Markov-chain Monte Carlo is scalable in terms of the number of chains, and is useful for computationally demanding Bayesian model calibration in scientific and engineering problems. As a demonstration, the approach is used to efficiently and accurately estimate the porosity and saturations in a representative layered synthetic reservoir. The results indicate that the seismic Amplitude Versus Angle and Controlled-Source Electromagnetic joint inversion provides better estimation of reservoir saturations than the seismic Amplitude Versus Angle only inversion, especially for the parameters in deep layers. The performance of the inversion approach for various levels of noise in observational data was evaluated - reasonable estimates can be obtained with noise levels up to 25%. Sampling efficiency due to the use of multiple chains was also checked and was found to have almost linear scalability.
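A toy sketch of a multi-chain differential-evolution Metropolis step, in the spirit of the DREAM-style samplers described above (one-dimensional target; all tuning constants are illustrative and this is not the authors' implementation):

```python
import math
import random

def demc(log_post, n_chains=6, n_iter=3000, seed=5):
    """One-dimensional sketch of a differential-evolution Metropolis
    sampler: each chain proposes a jump along the difference of two
    other randomly chosen chains, followed by a standard Metropolis
    accept/reject against the log-posterior."""
    rng = random.Random(seed)
    gamma = 2.38 / math.sqrt(2.0)            # classic DE-MC scale for d = 1
    chains = [rng.uniform(-5.0, 5.0) for _ in range(n_chains)]
    samples = []
    for _ in range(n_iter):
        for i in range(n_chains):
            a, b = rng.sample([j for j in range(n_chains) if j != i], 2)
            prop = (chains[i] + gamma * (chains[a] - chains[b])
                    + rng.gauss(0.0, 1e-6))  # small jitter keeps moves ergodic
            if math.log(rng.random() + 1e-300) < log_post(prop) - log_post(chains[i]):
                chains[i] = prop
            samples.append(chains[i])
    return samples

# sample a standard normal posterior
samples = demc(lambda x: -0.5 * x * x)
```

Because the jump direction is taken from the current population, the proposal scale adapts automatically to the posterior, which is the key appeal of this family of samplers.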
International Nuclear Information System (INIS)
Khabaz, Rahim
2012-01-01
In this study, for calibration of the emission rate of radioisotopic neutron sources, a suitable vanadium salt was proposed instead of manganese sulfate, because the shorter half-life of 52 V would allow faster neutron yield measurements with a shorter delay time between subsequent measurements. Using the Monte Carlo method, different correction factors of the manganese and vanadyl sulfate baths were calculated and compared with each other. The results showed that to obtain an appropriate efficiency from the VBS, high solution concentrations must be used. - Highlights: ► The widely used method for measuring the emission rate of radioisotopic neutron sources is the MBS. ► One limitation of the MBS is the time required for activation to saturation and for background measurement. ► The aim of this study is to evaluate a calibration method by VBS with about a one-hour full-cycle time. ► The relative efficiency per neutron of the VBS was just about half that obtained from the MBS. ► The lower relative efficiency of the VBS is compensated by increasing the concentration of the solution.
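The timing advantage of the shorter 52 V half-life can be illustrated with the activation saturation factor 1 - exp(-λt); the 30-minute irradiation time below is an assumed example, not a value from the paper:

```python
import math

def saturation_fraction(half_life_s, t_irradiate_s):
    """Fraction of the saturation activity reached after irradiating
    for t_irradiate_s seconds: 1 - exp(-lambda * t)."""
    lam = math.log(2.0) / half_life_s
    return 1.0 - math.exp(-lam * t_irradiate_s)

# approximate half-lives: 52V about 3.74 min, 56Mn about 2.58 h
t = 30 * 60                                   # an assumed 30-minute activation
v52 = saturation_fraction(3.74 * 60, t)       # essentially saturated
mn56 = saturation_fraction(2.58 * 3600, t)    # only a small fraction
```

The same short half-life also means the 52 V activity decays away quickly, so background can be remeasured soon after each run.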
DEFF Research Database (Denmark)
Fernandez Garcia, Sergio; Villanueva, Héctor
This report presents the result of the lidar to lidar calibration performed for ground-based lidar. Calibration is here understood as the establishment of a relation between the reference lidar wind speed measurements with measurement uncertainties provided by measurement standard and corresponding...... lidar wind speed indications with associated measurement uncertainties. The lidar calibration concerns the 10 minute mean wind speed measurements. The comparison of the lidar measurements of the wind direction with that from the reference lidar measurements are given for information only....
Calibration Fixture For Anemometer Probes
Lewis, Charles R.; Nagel, Robert T.
1993-01-01
Fixture facilitates calibration of three-dimensional sideflow thermal anemometer probes. With the fixture, the probe is oriented at a number of angles throughout its design range, and its readings are calibrated as a function of orientation in the airflow. The calibration is repeatable and verifiable.
Clustered calibration: an improvement to radio interferometric direction-dependent self-calibration
Kazemi, S.; Yatawatta, S.; Zaroubi, S.
2013-04-01
The new generation of radio synthesis arrays, such as Low Frequency Array and Square Kilometre Array, have been designed to surpass existing arrays in terms of sensitivity, angular resolution and frequency coverage. This evolution has led to the development of advanced calibration techniques that ensure the delivery of accurate results at the lowest possible computational cost. However, the performance of such calibration techniques is still limited by the compact, bright sources in the sky, used as calibrators. It is important to have a bright enough source that is well distinguished from the background noise level in order to achieve satisfactory results in calibration. This paper presents `clustered calibration' as a modification to traditional radio interferometric calibration, in order to accommodate faint sources that are almost below the background noise level into the calibration process. The main idea is to employ the information of the bright sources' measured signals as an aid to calibrate fainter sources that are near the bright sources. In the case where we do not have bright enough sources, a source cluster could act as a bright source that can be distinguished from background noise. For this purpose, we construct a number of source clusters assuming that the signals of the sources belonging to a single cluster are corrupted by almost the same errors. Under this assumption, each cluster is calibrated as a single source, using the combined coherencies of its sources simultaneously. This upgrades the power of an individual faint source by the effective power of its cluster. The solutions thus obtained for every cluster are assigned to each individual source in the cluster. We give a performance analysis of clustered calibration to show the superiority of this approach compared to the traditional unclustered calibration. We also provide analytical criteria to choose the optimum number of clusters for a given observation in an efficient manner.
Head calibration phantoms for actinides: measurements and simulations.
Vrba, T
2011-03-01
The paper deals with the physical skull phantoms Bundesinstitut fuer Strahlenschutz and BPAM-001, which are used in order to calibrate in vivo detection systems for estimation of (241)Am activity in the skeleton. Their voxel models were made and used in the Monte Carlo simulations. The results of the simulation were compared with measurements and reasonable agreement was observed. Several aspects such as materials and source distributions used in the models were discussed.
Guerra, J G; Rubiano, J G; Winter, G; Guerra, A G; Alonso, H; Arnedo, M A; Tejera, A; Gil, J M; Rodríguez, R; Martel, P; Bolivar, J P
2015-11-01
The determination of the activity concentration of a specific radionuclide in a sample by gamma spectrometry requires knowledge of the full energy peak efficiency (FEPE) for the energy of interest. The difficulties related to experimental calibration make it advisable to have alternative methods for FEPE determination, such as simulation of the transport of photons in the crystal by the Monte Carlo method, which requires an accurate knowledge of the characteristics and geometry of the detector. The characterization process is mainly carried out by Canberra Industries Inc. using proprietary techniques and methodologies developed by that company. It is a costly procedure (due to shipping and to the cost of the process itself), and for some research laboratories an alternative in situ procedure can be very useful. The main goal of this paper is to find an alternative to this costly characterization process by establishing a method for optimizing the parameters characterizing the detector, through a computational procedure that could be reproduced at a standard research lab. This method consists in determining the detector geometric parameters by using Monte Carlo simulation in parallel with an optimization process, based on evolutionary algorithms, starting from a set of reference FEPEs determined experimentally or computationally. The proposed method has proven to be effective and simple to implement. It provides a set of characterization parameters which has been successfully validated for different source-detector geometries, and also for a wide range of environmental samples and certified materials. Copyright © 2015 Elsevier Ltd. All rights reserved.
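A schematic of the optimization loop described here, pairing a hypothetical stand-in for the Monte Carlo-computed FEPE with a simple (mu+lambda) evolution strategy; the model function, its parameters and the reference energies are all illustrative, and a real workflow would call a photon-transport simulation instead of `toy_fepe`:

```python
import math
import random

def toy_fepe(params, energies):
    """Hypothetical stand-in for the simulated full energy peak
    efficiency as a smooth function of two detector parameters
    (an effective 'radius' and a 'dead layer' thickness)."""
    radius, dead = params
    return [radius * math.exp(-dead * e) / (1.0 + e) for e in energies]

def evolve(reference, energies, generations=200, pop=30, seed=4):
    """Fit the detector parameters to a set of reference FEPEs with a
    simple elitist (mu+lambda) evolution strategy: mutate every parent,
    then keep the best pop individuals of parents plus children."""
    rng = random.Random(seed)

    def loss(p):
        model = toy_fepe(p, energies)
        return sum((m - r) ** 2 for m, r in zip(model, reference))

    parents = [(rng.uniform(0.1, 2.0), rng.uniform(0.1, 2.0))
               for _ in range(pop)]
    for _ in range(generations):
        children = [(max(1e-3, r + rng.gauss(0.0, 0.05)),
                     max(1e-3, d + rng.gauss(0.0, 0.05)))
                    for r, d in parents]
        parents = sorted(parents + children, key=loss)[:pop]
    return parents[0]

energies = [0.1, 0.3, 0.662, 1.173, 1.332]   # MeV, illustrative energies
best = evolve(toy_fepe((1.2, 0.8), energies), energies)
```

In the paper's setting, each loss evaluation would involve a Monte Carlo transport run, so the population size and generation count trade off accuracy against simulation cost.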
International Nuclear Information System (INIS)
Costa, Priscila; Potiens Junior, Ademar J.
2015-01-01
Filter cartridges are part of the primary water treatment system of the IEA-R1 Research Reactor and, when saturated, they are replaced and become radioactive waste. The IEA-R1 is located at the Nuclear and Energy Research Institute (IPEN), in Sao Paulo, Brazil. Primary characterization is the main step of radioactive waste management, in which the physical, chemical and radiological properties are determined. It is a very important step because the information obtained at this stage enables the choice of the appropriate management process and the definition of final disposal options. This paper presents a non-destructive method for primary characterization, using the Monte Carlo method in association with gamma spectrometry. Gamma spectrometry allows the identification of radionuclides and the determination of their activities. The detection efficiency is an important parameter, related to the photon energy, the detector geometry and the matrix of the sample to be analyzed. Due to the difficulty of obtaining a standard source with the same geometry as the filter cartridge, another technique is necessary to calibrate the detector. The technique described in this paper uses the Monte Carlo method for primary characterization of the IEA-R1 filter cartridges. (author)
Monte Carlo simulation of NSE at reactor and spallation sources
Energy Technology Data Exchange (ETDEWEB)
Zsigmond, G.; Wechsler, D.; Mezei, F. [Hahn-Meitner-Institut Berlin, Berlin (Germany)
2001-03-01
An MC (Monte Carlo) computational study of NSE (Neutron Spin Echo) has been performed by means of VITESS, investigating the classic and TOF-NSE options at spallation sources. The use of white beams in TOF-NSE makes the flipper efficiency as a function of the neutron wavelength an important issue. The emphasis was put on exact evaluation of flipper efficiencies for wide wavelength-band instruments. (author)
Calibration of Multiple Fish-Eye Cameras Using a Wand
Fu, Qiang; Quan, Quan; Cai, Kai-Yuan
2014-01-01
Fish-eye cameras are becoming increasingly popular in computer vision, but their use for 3D measurement is limited partly due to the lack of an accurate, efficient and user-friendly calibration procedure. For such a purpose, we propose a method to calibrate the intrinsic and extrinsic parameters (including radial distortion parameters) of two/multiple fish-eye cameras simultaneously by using a wand under general motions. Thanks to the generic camera model used, the proposed calibration method...
U.S. Environmental Protection Agency — a UV calibration curve for SRHA quantitation. This dataset is associated with the following publication: Chang, X., and D. Bouchard. Surfactant-Wrapped Multiwalled...
Energy Technology Data Exchange (ETDEWEB)
C. Ahlers; H. Liu
2000-03-12
The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the ''AMR Development Plan for U0035 Calibrated Properties Model REV00''. These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models, as well as Performance Assessment (PA) models used by other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and to predict flow and transport behavior under a variety of climatic and thermal-loading conditions.
Energy Technology Data Exchange (ETDEWEB)
C.F. Ahlers, H.H. Liu
2001-12-18
The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the AMR Development Plan for U0035 Calibrated Properties Model REV00 (CRWMS M&O 1999c). These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models, as well as Performance Assessment (PA) models used by other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and to predict flow and transport behavior under a variety of climatic and thermal-loading conditions.
Traceable Pyrgeometer Calibrations
Energy Technology Data Exchange (ETDEWEB)
Dooraghi, Mike; Kutchenreiter, Mark; Reda, Ibrahim; Habte, Aron; Sengupta, Manajit; Andreas, Afshin; Newman, Martina; Webb, Craig
2016-05-02
This presentation provides a high-level overview of the progress on the Broadband Outdoor Radiometer Calibrations for all shortwave and longwave radiometers that are deployed by the Atmospheric Radiation Measurement program.
Directory of Open Access Journals (Sweden)
Patterson E.
2010-06-01
Results are presented from using the procedure outlined by the Standardisation Project for Optical Techniques of Strain measurement to calibrate a digital image correlation system. The process involves comparing the experimental data obtained with the optical measurement system to the theoretical values for a specially designed specimen. The standard states the criteria which must be met to achieve successful calibration, in addition to quantifying the measurement uncertainty in the system. The system was evaluated at three different displacement load levels, generating strain ranges from 289 µstrain to 2110 µstrain. At the 289 µstrain range, the calibration uncertainty was found to be 14.1 µstrain, and at the 2110 µstrain range it was found to be 28.9 µstrain. This calibration procedure was performed without painting a speckle pattern on the surface of the metal. Instead, the specimen surface was prepared using different grades of grit paper to produce the desired texture.
DEFF Research Database (Denmark)
Kock, Carsten Weber; Vesth, Allan
This site calibration report describes the results of a measured site calibration for a site in Denmark. The calibration is carried out by DTU Wind Energy in accordance with Ref.[3] and Ref.[4]. The measurement period is given. The site calibration is carried out before a power performance measurement on a given turbine to clarify the influence of the terrain on the ratio between the wind speed at the center of the turbine hub and at the met mast. The wind speed at the turbine is measured by a temporary mast placed at the foundation for the turbine. The site and the measurement equipment are described in detail in [1] and [2]. All parts of the sensors and the measurement system have been installed by DTU Wind Energy.
Federal Laboratory Consortium — This facility is for low altitude subsonic altimeter system calibrations of air vehicles. Mission is a direct support of the AFFTC mission. Postflight data merge is...
Theoretical calibration of a NaI(Tl) detector for the measurement of 131I in a thyroid phantom
International Nuclear Information System (INIS)
Venturini, Luzia
2002-01-01
This paper describes the theoretical calibration of a NaI(Tl) detector using the Monte Carlo method, for the measurement of 131 I in a thyroid phantom. The thyroid is represented by the region between two concentric cylinders, where the inner one represents the trachea and the outer one represents the neck. 133 Ba was used for the experimental calibration. The results show that the calibration procedure is suitable for 131 I measurements. (author)
Calibration of thermoluminescent materials
International Nuclear Information System (INIS)
Bos, A.J.J.
1989-07-01
In this report the relation between exposure and absorbed radiation dose in various materials is presented, based on recent data. From this, a calibration procedure for thermoluminescent materials adapted to the IRI radiation standard is derived; at present that standard is still the exposure in röntgen. In switching to the air kerma standard the calibration procedure will have to be adapted. (author). 6 refs.; 4 tabs
Directory of Open Access Journals (Sweden)
Pozhitkov Alexander E
2010-07-01
Background: Calibration of a microarray scanner is critical for accurate interpretation of microarray results. Shi et al. (BMC Bioinformatics, 2005, 6, Art. No. S11 Suppl. 2) reported usage of a Full Moon BioSystems slide for calibration. Inspired by the Shi et al. work, we calibrated microarray scanners in our previous research. We were puzzled, however, that most of the signal intensities from a biological sample fell below the sensitivity threshold level determined by the calibration slide. This conundrum led us to re-investigate the quality of calibration provided by the Full Moon BioSystems slide as well as the accuracy of the analysis performed by Shi et al. Methods: Signal intensities were recorded on three different microarray scanners at various photomultiplier gain levels using the same calibration slide from Full Moon BioSystems. Data analysis was conducted on raw signal intensities without normalization or transformation of any kind. A weighted least-squares method was used to fit the data. Results: We found that the initial analysis performed by Shi et al. did not take into account autofluorescence of the Full Moon BioSystems slide, which led to a grossly distorted microarray scanner response. Our analysis revealed that a power-law function, explicitly accounting for the slide autofluorescence, perfectly described the relationship between signal intensities and fluorophore quantities. Conclusions: Microarray scanners respond in a much less distorted fashion than was reported by Shi et al. Full Moon BioSystems calibration slides are inadequate for performing calibration. We recommend against using these slides.
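The effect the authors describe, a constant autofluorescence term biasing a pure power-law fit, can be illustrated with a small sketch. The constants and the blank-based offset estimate are invented for illustration, not values from the study:

```python
import numpy as np

# Synthetic scanner response following the power-law-plus-autofluorescence
# form: signal = a * q**b + c, with invented constants (a, b, c).
a, b, c = 50.0, 0.9, 200.0
q = np.logspace(-2, 2, 30)              # fluorophore quantity, arbitrary units
signal = a * q**b + c

# A naive log-log (pure power-law) fit is biased low because the constant
# autofluorescence c dominates at small q.
naive_slope = np.polyfit(np.log(q), np.log(signal), 1)[0]

# Estimating c separately (here: taken as known, standing in for a
# blank-spot measurement) and subtracting it recovers the true exponent.
c_hat = c
slope = np.polyfit(np.log(q), np.log(signal - c_hat), 1)[0]
print(round(slope, 3))                  # 0.9
```

The naive slope comes out well below the true exponent because the flat, autofluorescence-dominated low-q region drags the regression down, which is the distortion the re-analysis attributes to the original study.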
Approximation Behooves Calibration
DEFF Research Database (Denmark)
da Silva Ribeiro, André Manuel; Poulsen, Rolf
2013-01-01
Calibration based on an expansion approximation for option prices in the Heston stochastic volatility model gives stable, accurate, and fast results for S&P500-index option data over the period 2005–2009.
Multilevel sequential Monte Carlo samplers
Beskos, Alexandros
2016-08-29
In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, that is, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.
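The telescoping identity behind MLMC can be sketched on a toy problem. The sketch below estimates E[S(1)] for the SDE dS = S dW (true value 1.0) with Euler–Maruyama steps h_l = 2^-l, coupling levels through shared Brownian increments; it illustrates the telescoping idea only, not the paper's SMC-MLMC sampler or its Bayesian inverse problem:

```python
import math
import random

# Toy MLMC estimator of E[S(1)] for dS = S dW, S(0) = 1 (true value 1.0),
# discretized by Euler–Maruyama with step h_l = 2**-l. Levels are coupled
# by reusing the fine Brownian increments, summed pairwise, on the coarse path.
def level_sample(l, rng):
    n = 2 ** l
    h = 1.0 / n
    fine = coarse = 1.0
    dw_pair = 0.0
    for i in range(n):
        dw = rng.gauss(0.0, math.sqrt(h))
        fine += fine * dw
        dw_pair += dw
        if i % 2 == 1:                  # one coarse step per two fine steps
            coarse += coarse * dw_pair
            dw_pair = 0.0
    return fine if l == 0 else fine - coarse

def mlmc(L, samples_per_level, seed=0):
    rng = random.Random(seed)
    # Telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]
    return sum(sum(level_sample(l, rng) for _ in range(n)) / n
               for l, n in enumerate(samples_per_level[: L + 1]))

est = mlmc(4, [4000, 2000, 1000, 500, 250])   # close to the true value 1.0
```

The cost saving comes from the decreasing per-level sample counts: the coupled corrections P_l - P_{l-1} have small variance, so few expensive fine-level samples are needed.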
Topics in Statistical Calibration
2014-03-27
type of garden cress called nasturtium. The response is weight of the plant in milligrams (mg) after three weeks of growth, and the predictor is the...
National Research Council Canada - National Science Library
Neuman, S. P
2002-01-01
.... Our objective was to avoid the need for either Monte Carlo simulation or upscaling, by developing ways to render predictions and uncertainty assessments directly, in a computationally efficient and accurate manner...
Energy Technology Data Exchange (ETDEWEB)
Courtney, M.
2013-01-15
Nacelle mounted, forward looking wind lidars are beginning to be used to provide reference wind speed measurements for the power performance testing of wind turbines. In such applications, a formal calibration procedure with a corresponding uncertainty assessment will be necessary. This report presents four concepts for performing such a nacelle lidar calibration. Of the four methods, two are found to be immediately relevant and are pursued in some detail. The first of these is a line of sight calibration method in which both lines of sight (for a two beam lidar) are individually calibrated by accurately aligning the beam to pass close to a reference wind speed sensor. A testing procedure is presented, reporting requirements outlined and the uncertainty of the method analysed. It is seen that the main limitation of the line of sight calibration method is the time required to obtain a representative distribution of radial wind speeds. An alternative method is to place the nacelle lidar on the ground and incline the beams upwards to bisect a mast equipped with reference instrumentation at a known height and range. This method will be easier and faster to implement and execute but the beam inclination introduces extra uncertainties. A procedure for conducting such a calibration is presented and initial indications of the uncertainties given. A discussion of the merits and weaknesses of the two methods is given together with some proposals for the next important steps to be taken in this work. (Author)
Energy Technology Data Exchange (ETDEWEB)
Guerra P, F.; Heeren de O, A. [Universidade Federal de Minas Gerais, Departamento de Engenharia Nuclear, Programa de Pos Graduacao em Ciencias e Tecnicas Nucleares, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil); Melo, B. M.; Lacerda, M. A. S.; Da Silva, T. A.; Ferreira F, T. C., E-mail: tcff01@gmail.com [Centro de Desenvolvimento da Tecnologia Nuclear, Programa de Pos Graduacao / CNEN, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil)
2015-10-15
The Plan of Radiological Protection licensed by the National Nuclear Energy Commission (CNEN) in Brazil includes the assessment of the risks of internal and external exposure by implementing a program of individual monitoring, which is responsible for controlling exposures and ensuring the maintenance of radiation safety. The Laboratory of Internal Dosimetry of the Center for Development of Nuclear Technology (LID/CDTN) is responsible for routine monitoring of internal contamination of Individuals Occupationally Exposed (IOEs). These are the IOEs involved in handling 18F sources produced by the Unit for Research and Production of Radiopharmaceuticals, as well as whole-body monitoring of workers from the Research Reactor TRIGA IPR-R1/CDTN, or whenever there is any risk of accidental incorporation. The determination of photon-emitting radionuclides in the human body requires calibration techniques for the counting geometries, in order to obtain an efficiency curve. The calibration process normally makes use of physical phantoms containing certified activities of the radionuclides of interest. The objective of this project is the calibration of the WBC facility of the LID/CDTN using the BOMAB physical phantom and Monte Carlo simulations. Three steps were needed to complete the calibration process. First, the BOMAB was filled with a KCl solution and several measurements of the gamma rays (1.46 MeV) emitted by 40K were made. Second, simulations using the MCNPX code were performed to calculate the counting efficiency (Ce) for the BOMAB model phantom, and the results were compared with the measured Ce. Third and last, the modeled BOMAB phantom was used to calculate the Ce covering the energy range of interest. The results showed good agreement and are within the expected ratio between the measured and simulated results. (Author)
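The counting-efficiency quantity compared between measurement and simulation above can be sketched as a simple ratio; all numbers below are illustrative, not the LID/CDTN measurement values:

```python
# Counting efficiency from a phantom measurement, Ce = N_net / (A * p * t):
# net peak counts divided by (certified activity [Bq] x gamma emission
# probability per decay x live time [s]). The numbers are illustrative only.
def counting_efficiency(net_counts, activity_bq, emission_prob, live_time_s):
    return net_counts / (activity_bq * emission_prob * live_time_s)

# 40K emits its 1.46 MeV gamma in roughly 10.7% of decays.
ce = counting_efficiency(net_counts=5000, activity_bq=8000,
                         emission_prob=0.107, live_time_s=3600)
print(f"{ce:.2e}")   # 1.62e-03 counts per emitted photon
```

In the paper the same Ce is obtained twice, once from the KCl-filled phantom measurement and once from the MCNPX model, and the agreement between the two validates the modeled phantom before it is used across the full energy range.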
A New Automated Instrument Calibration Facility at the Savannah River Site
International Nuclear Information System (INIS)
Polz, E.; Rushton, R.O.; Wilkie, W.H.; Hancock, R.C.
1998-01-01
The Health Physics Instrument Calibration Facility at the Savannah River Site in Aiken, SC was expressly designed and built to calibrate portable radiation survey instruments. The facility incorporates recent advances in automation technology, building layout and construction, and computer software to improve the calibration process. Nine new calibration systems automate instrument calibration and data collection. The building is laid out so that instruments are moved from one area to another in a logical, efficient manner. New software and hardware integrate all functions such as shipping/receiving, work flow, calibration, testing, and report generation. Benefits include a streamlined and integrated program, improved efficiency, reduced errors, and better accuracy
Application of biasing techniques to the contributon Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Dubi, A.; Gerstl, S.A.W.
1980-01-01
Recently, a new Monte Carlo method called the contributon Monte Carlo method was developed. The method is based on the theory of contributons, and uses a new recipe for estimating target responses by a volume integral over the contributon current. The analog features of the new method were discussed in previous publications. The application of some biasing methods to the new contributon scheme is examined here. A theoretical model is developed that enables an analytic prediction of the benefit to be expected when these biasing schemes are applied to both the contributon method and regular Monte Carlo. This model is verified by a variety of numerical experiments and is shown to yield satisfying results, especially for deep-penetration problems. Other considerations regarding the efficient use of the new method are also discussed, and remarks are made as to the application of other biasing methods. 14 figures, 1 table.
Markov Chain Monte Carlo Methods-Simple Monte Carlo
Indian Academy of Sciences (India)
Resonance – Journal of Science Education, Volume 8, Issue 4.
Quantum Monte Carlo Endstation for Petascale Computing
Energy Technology Data Exchange (ETDEWEB)
Lubos Mitas
2011-01-26
The NCSU research group has been focused on accomplishing the key goals of this initiative: establishing a new generation of quantum Monte Carlo (QMC) computational tools as part of the Endstation petaflop initiative for use at the DOE ORNL computational facilities and by the computational electronic structure community at large; carrying out high-accuracy quantum Monte Carlo demonstration projects applying these tools to forefront electronic structure problems in molecular and solid systems; and expanding, explaining and enhancing the impact of these advanced computational approaches. In particular, we have developed the quantum Monte Carlo code QWalk (www.qwalk.org), which was significantly expanded and optimized using funds from this support and at present has become an actively used tool in the petascale regime by ORNL researchers and beyond. These developments have been built upon efforts undertaken by the PI's group and collaborators over the last decade. The code was optimized and tested extensively on a number of parallel architectures including the petaflop ORNL Jaguar machine. We have developed and redesigned a number of code modules such as evaluation of wave functions and orbitals, calculation of pfaffians, and introduction of backflow coordinates, together with the overall organization of the code and random-walker distribution over multicore architectures. We have addressed several bottlenecks such as load balancing, and verified the efficiency and accuracy of the calculations with the other groups of the Endstation team. The QWalk package contains about 50,000 lines of high-quality object-oriented C++ and also includes interfaces to data files from conventional electronic structure codes such as Gamess, Gaussian, Crystal and others. This grant supported the PI for one month during summers, a full-time postdoc, and partially three graduate students over the duration of the grant; it has resulted in 13
Calibration Under Uncertainty.
Energy Technology Data Exchange (ETDEWEB)
Swiler, Laura Painton; Trucano, Timothy Guy
2005-03-01
This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
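The deterministic least-squares formulation the report starts from can be sketched as below, with a synthetic model and data set standing in for a real computer model; this is exactly the formulation CUU moves beyond, since it treats the model as error-free:

```python
import numpy as np

# Deterministic calibration: pick the parameters that minimize the squared
# difference between model-computed and experimental data. The exponential
# model and the "experimental" data are synthetic stand-ins.
def model(theta, x):
    return theta[0] * np.exp(-theta[1] * x)

x = np.linspace(0.0, 4.0, 40)
rng = np.random.default_rng(3)
data = model((2.0, 0.7), x) + rng.normal(0.0, 0.02, x.size)   # "experiment"

# Coarse grid search for the SSE minimizer (standing in for a real optimizer).
grid_a = np.linspace(1.5, 2.5, 101)
grid_b = np.linspace(0.4, 1.0, 61)
sse = [((model((a_, b_), x) - data) ** 2).sum()
       for a_ in grid_a for b_ in grid_b]
i = int(np.argmin(sse))
a_hat, b_hat = grid_a[i // 61], grid_b[i % 61]    # near (2.0, 0.7)
```

The point estimate (a_hat, b_hat) carries no uncertainty; the Bayesian CUU approach discussed in the report would instead produce a posterior distribution over the parameters that reflects both data noise and model error.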
POLCAL - POLARIMETRIC RADAR CALIBRATION
Vanzyl, J.
1994-01-01
Calibration of polarimetric radar systems is a field of research in which great progress has been made over the last few years. POLCAL (Polarimetric Radar Calibration) is a software tool intended to assist in the calibration of Synthetic Aperture Radar (SAR) systems. In particular, POLCAL calibrates Stokes matrix format data produced as the standard product by the NASA/Jet Propulsion Laboratory (JPL) airborne imaging synthetic aperture radar (AIRSAR). POLCAL was designed to be used in conjunction with data collected by the NASA/JPL AIRSAR system. AIRSAR is a multifrequency (6 cm, 24 cm, and 68 cm wavelength), fully polarimetric SAR system which produces 12 x 12 km imagery at 10 m resolution. AIRSAR was designed as a testbed for NASA's Spaceborne Imaging Radar program. While the images produced after 1991 are thought to be calibrated (phase calibrated, cross-talk removed, channel imbalance removed, and absolutely calibrated), POLCAL can and should still be used to check the accuracy of the calibration and to correct it if necessary. Version 4.0 of POLCAL is an upgrade of POLCAL version 2.0, released to AIRSAR investigators in June 1990. New options in version 4.0 include automatic absolute calibration of 89/90 data, distributed target analysis, calibration of nearby scenes with calibration parameters from a scene with corner reflectors, altitude or roll angle corrections, and calibration of errors introduced by known topography. Many sources of error can lead to false conclusions about the nature of scatterers on the surface. Errors in the phase relationship between polarization channels result in incorrect synthesis of polarization states. Cross-talk, caused by imperfections in the radar antenna itself, can also lead to error. POLCAL reduces cross-talk and corrects phase calibration without the use of ground calibration equipment. Removing the antenna patterns during SAR processing also forms a very important part of the calibration of SAR data. Errors in the
Vibration transducer calibration techniques
Brinkley, D. J.
1980-09-01
Techniques for the calibration of vibration transducers used in the Aeronautical Quality Assurance Directorate of the British Ministry of Defence are presented. Following a review of the types of measurements necessary in the calibration of vibration transducers, the performance requirements of vibration transducers, which can be used to measure acceleration, velocity or vibration amplitude, are discussed, with particular attention given to the piezoelectric accelerometer. Techniques for the accurate measurement of sinusoidal vibration amplitude in reference-grade transducers are then considered, including the use of a position sensitive photocell and the use of a Michelson laser interferometer. Means of comparing the output of working-grade accelerometers with that of previously calibrated reference-grade devices are then outlined, with attention given to a method employing a capacitance bridge technique and a method to be used at temperatures between -50 and 200 C. Automatic calibration procedures developed to speed up the calibration process are outlined, and future possible extensions of system software are indicated.
Andueza, Donato; Picard, Fabienne; Dozias, Dominique; Aufrère, Jocelyne
2017-09-01
Determining the forage feed value, expressed as organic matter digestibility (OMD) and voluntary intake (VI), is laborious and expensive. Thus, several indirect methods, such as near infrared reflectance (NIR) spectroscopy, have been developed for predicting the feed value of forages. In this study, NIR spectra of 1040 samples of feces from sheep fed fresh temperate forages were used to develop calibration models for the prediction of fecal crude ash (CA), fecal crude protein (CP), fresh forage OMD, and VI. Another 136 samples of feces were used to assess these models. Four calibration strategies were compared: (1) species-specific calibration; (2) family-specific calibration; (3) a global procedure; and (4) a LOCAL approach. The first three strategies were based on classical regression models developed on different sample populations, whereas the LOCAL approach builds models from selected samples spectrally similar to the sample to be predicted. The first two strategies group feces samples based on the species or the family of the forage ingested. Forage calibration data sets gave value ranges of 79-327 g/kg dry matter (DM) for CA, 65-243 g/kg DM for CP, 0.52-0.85 g/g for OMD, and 34.7-100.5 g DM/kg metabolic body weight (BW 0.75) for VI. The prediction of CA and CP content in feces by species-specific fecal NIR (FNIR) spectroscopy models showed a lower standard error of prediction (SEP) (CA 15.03 and CP 7.48 g/kg DM) than family-specific (CA 21.93 and CP 7.69 g/kg DM), global (CA 19.83 and CP 7.98 g/kg DM), or LOCAL (CA 30.85 and CP 8.10 g/kg DM) models. For OMD, the LOCAL procedure led to a lower SEP (0.018 g/g) than the other approaches (0.023, 0.024, and 0.023 g/g for species-specific, family-specific, and global models, respectively). For VI, the LOCAL approach again led to a lower SEP (6.15 g/kg BW 0.75) than the other approaches (7.35, 8.00, and 8.13 g/kg BW 0.75 for the species-specific, family-specific, and global models
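The standard error of prediction (SEP) used above to rank the calibration strategies can be computed as the bias-corrected standard deviation of the residuals between reference and predicted values; this is one common definition, assumed here for illustration rather than taken from the paper:

```python
import math

# SEP as the bias-corrected standard deviation of the residuals between
# reference (e.g. wet-chemistry) and NIR-predicted values.
def sep(reference, predicted):
    n = len(reference)
    resid = [r - p for r, p in zip(reference, predicted)]
    bias = sum(resid) / n                       # mean residual (prediction bias)
    return math.sqrt(sum((d - bias) ** 2 for d in resid) / (n - 1))

# Tiny worked example with made-up reference/predicted pairs.
print(round(sep([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.9]), 4))   # 0.15
```

Because the bias is removed before squaring, SEP measures the scatter of the predictions around their own offset, which is why it is the usual figure of merit when comparing calibration strategies on an independent validation set.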
Multilevel Monte Carlo simulation of Coulomb collisions
Energy Technology Data Exchange (ETDEWEB)
Rosin, M.S., E-mail: msr35@math.ucla.edu [Mathematics Department, University of California at Los Angeles, Los Angeles, CA 90036 (United States); Department of Mathematics and Science, Pratt Institute, Brooklyn, NY 11205 (United States); Ricketson, L.F. [Mathematics Department, University of California at Los Angeles, Los Angeles, CA 90036 (United States); Dimits, A.M. [Lawrence Livermore National Laboratory, L-637, P.O. Box 808, Livermore, CA 94511-0808 (United States); Caflisch, R.E. [Mathematics Department, University of California at Los Angeles, Los Angeles, CA 90036 (United States); Institute for Pure and Applied Mathematics, University of California at Los Angeles, Los Angeles, CA 90095 (United States); Cohen, B.I. [Lawrence Livermore National Laboratory, L-637, P.O. Box 808, Livermore, CA 94511-0808 (United States)
2014-10-01
We present a multilevel Monte Carlo numerical method for simulating Coulomb collisions that is new to plasma physics and highly efficient. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε^−2) or O(ε^−2 (ln ε)^2), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε^−3) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10^−5. We discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.
Parallel Monte Carlo Search for Hough Transform
Lopes, Raul H. C.; Franqueira, Virginia N. L.; Reid, Ivan D.; Hobson, Peter R.
2017-10-01
We investigate the problem of line detection in digital image processing, in particular how state-of-the-art algorithms behave in the presence of noise and whether CPU efficiency can be improved by the combination of Monte Carlo Tree Search, hierarchical space decomposition, and parallel computing. The starting point of the investigation is the method introduced in 1962 by Paul Hough for detecting lines in binary images. Extended in the 1970s to the detection of space forms, what came to be known as the Hough Transform (HT) has been proposed, for example, in the context of track fitting in the LHC ATLAS and CMS projects. The Hough Transform transfers the problem of line detection into one of optimizing the peak in a vote-counting process over cells which contain the possible points of candidate lines. The detection algorithm can be computationally expensive both in the demands made upon the processor and on memory. Additionally, it can have reduced effectiveness in detection in the presence of noise. Our first contribution is an evaluation of the use of a variation of the Radon Transform as a way of improving the effectiveness of line detection in the presence of noise. Then, parallel algorithms for variations of the Hough Transform and the Radon Transform for line detection are introduced. An algorithm for Parallel Monte Carlo Search applied to line detection is also introduced. Their algorithmic complexities are discussed. Finally, implementations on multi-GPU and multicore architectures are discussed.
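The vote-counting process at the heart of the Hough Transform can be sketched as a minimal accumulator; the binning choices and resolutions below are arbitrary illustrations, not those of the paper:

```python
import numpy as np

# Minimal Hough-transform vote accumulator for lines, using the
# parameterization rho = x*cos(theta) + y*sin(theta).
def hough_votes(points, n_theta=180, n_rho=200):
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = max(np.hypot(x, y) for x, y in points)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        # map rho in [-rho_max, rho_max] onto row indices 0..n_rho-1
        rows = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[rows, np.arange(n_theta)] += 1
    return acc, thetas

# Ten collinear points on y = x: the peak cell collects one vote per point.
pts = [(i, i) for i in range(10)]
acc, thetas = hough_votes(pts)
rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
```

Each image point votes once per theta column, so collinear points pile their votes into a single (rho, theta) cell; finding that peak is the optimization step the abstract refers to, and the cost of filling and scanning the accumulator is what the paper's parallel and tree-search variants try to reduce.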
Quantum Monte Carlo for atoms and molecules
International Nuclear Information System (INIS)
Barnett, R.N.
1989-11-01
The fixed-node diffusion quantum Monte Carlo (QMC) approach has been employed in studying energy eigenstates for 1-4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H₂, LiH, Li₂, and H₂O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li₂, and H₂O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schroedinger equation were studied extensively. These parameters include the time step, renormalization time, and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90-100% of the correlation energy) have been computed with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions.
A continuation multilevel Monte Carlo algorithm
Collier, Nathan
2014-09-05
We propose a novel Continuation Multi Level Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending when the required error tolerance is satisfied. CMLMC assumes discretization hierarchies that are defined a priori for each level and are geometrically refined across levels. The actual choice of computational work across levels is based on parametric models for the average cost per sample and the corresponding variance and weak error. These parameters are calibrated using Bayesian estimation, taking particular notice of the deepest levels of the discretization hierarchy, where only a few realizations are available to produce the estimates. The resulting CMLMC estimator exhibits a non-trivial splitting between bias and statistical contributions. We also show the asymptotic normality of the statistical error in the MLMC estimator and justify in this way our error estimate that allows prescribing both required accuracy and confidence in the final result. Numerical results substantiate the above results and illustrate the corresponding computational savings in examples that are described in terms of differential equations either driven by random measures or with random coefficients. © 2014, Springer Science+Business Media Dordrecht.
Option price calibration from Renyi entropy
International Nuclear Information System (INIS)
Brody, Dorje C.; Buckley, Ian R.C.; Constantinou, Irene C.
2007-01-01
The calibration of the risk-neutral density function for the future asset price, based on the maximisation of the entropy measure of Renyi, is proposed. Whilst the conventional approach based on the use of the logarithmic entropy measure fails to produce the observed power-law distribution when calibrated against option prices, the approach outlined here is shown to produce the desired form of the distribution. Procedures for the maximisation of the Renyi entropy under constraints are outlined in detail, and a number of interesting properties of the resulting power-law distributions are also derived. The result is applied to efficiently evaluate prices of path-independent derivatives.
HENC performance evaluation and plutonium calibration
International Nuclear Information System (INIS)
Menlove, H.O.; Baca, J.; Pecos, J.M.; Davidson, D.R.; McElroy, R.D.; Brochu, D.B.
1997-10-01
The authors have designed a high-efficiency neutron counter (HENC) to measure the plutonium content in 200-L waste drums. The counter uses totals neutron counting, coincidence counting, and multiplicity counting to determine the plutonium mass. The HENC was developed as part of a Cooperative Research and Development Agreement between the Department of Energy and Canberra Industries. This report presents the results of the detector modifications, the performance tests, the add-a-source calibration, and the plutonium calibration at Los Alamos National Laboratory (TA-35) in 1996.
Measurement and simulation of neutron detection efficiency in lead-scintillating fiber calorimeters
International Nuclear Information System (INIS)
Anelli, M.; Bertolucci, S.; Bini, C.; Branchini, P.; Curceanu, C.; De Zorzi, G.; Di Domenico, A.; Di Micco, B.; Ferrari, A.; Fiore, S.; Gauzzi, P.; Giovannella, S.; Happacher, F.; Iliescu, M.; Martini, M.; Miscetti, S.; Nguyen, F.; Passeri, A.; Prokofiev, A.; Sciascia, B.
2009-01-01
The overall detection efficiency to neutrons of a small prototype of the KLOE lead-scintillating fiber calorimeter has been measured at the neutron beam facility of The Svedberg Laboratory, TSL, Uppsala, in the kinetic energy range [5-175] MeV. The measurement of the neutron detection efficiency of a NE110 scintillator provided a reference calibration. At the lowest trigger threshold, the overall calorimeter efficiency ranges from 30% to 50%. This value largely exceeds the estimated 8-15% expected if the response were proportional only to the scintillator equivalent thickness. A detailed simulation of the calorimeter and of the TSL beam line has been performed with the FLUKA Monte Carlo code. First data-MC comparisons are encouraging and allow us to disentangle a neutron halo component in the beam.
Computer-Based Model Calibration and Uncertainty Analysis: Terms and Concepts
2015-07-01
…importance sampling and (2) Markov chain Monte Carlo (MCMC) sampling. Multiple-solution PE methods are generally more computationally intensive than single… …reject candidate points. Unlike in the traditional Monte Carlo method, where the random samples are statistically independent, the samples in MCMC are correlated. MCMC is generally more efficient than other Monte Carlo methods. The ability to sample from the posterior probability distribution for…
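The contrast drawn in these excerpts, MCMC's correlated chain versus independent Monte Carlo draws, can be illustrated with a minimal random-walk Metropolis sampler. This is a generic toy targeting a standard normal posterior, not the report's calibration procedure; all names are hypothetical:

```python
import math
import random

def metropolis(log_post, x0, n_samples, step=1.0, seed=0):
    # Random-walk Metropolis: propose x' = x + step*N(0,1) and accept with
    # probability min(1, p(x')/p(x)). Accepted-or-repeated states form a
    # Markov chain, so successive samples are correlated, unlike the
    # independent draws of traditional Monte Carlo.
    rng = random.Random(seed)
    x = x0
    chain = []
    for _ in range(n_samples):
        prop = x + step * rng.gauss(0.0, 1.0)
        diff = log_post(prop) - log_post(x)
        if diff >= 0.0 or rng.random() < math.exp(diff):
            x = prop
        chain.append(x)
    return chain

# target: standard normal posterior, log p(x) = -x^2/2 (up to a constant)
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 50000)
mean = sum(chain) / len(chain)
var = sum((s - mean) ** 2 for s in chain) / len(chain)
```

Because rejected proposals repeat the current state, neighbouring samples are dependent; the chain is nonetheless ergodic, so long-run averages recover the posterior's moments.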
Exact Monte Carlo for molecules
Energy Technology Data Exchange (ETDEWEB)
Lester, W.A. Jr.; Reynolds, P.J.
1985-03-01
A brief summary of the fixed-node quantum Monte Carlo method is presented. Results obtained for binding energies, the classical barrier height for H + H2, and the singlet-triplet splitting in methylene are presented and discussed. 17 refs.
Markov Chain Monte Carlo Methods
Indian Academy of Sciences (India)
…time Technical Consultant to Systat Software Asia-Pacific (P) Ltd., Bangalore, where the technical work for the development of the statistical software Systat takes place. His research interests have been in statistical pattern recognition and biostatistics. Keywords: Markov chain, Monte Carlo sampling, Markov chain Monte…
Markov Chain Monte Carlo Methods
Indian Academy of Sciences (India)
Markov Chain Monte Carlo Methods. 2. The Markov Chain Case. K B Athreya, Mohan Delampady and T Krishnan. K B Athreya is a Professor at. Cornell University. His research interests include mathematical analysis, probability theory and its application and statistics. He enjoys writing for Resonance. His spare time is ...
Markov Chain Monte Carlo Methods
Indian Academy of Sciences (India)
Markov Chain Monte Carlo Methods. 3. Statistical Concepts. K B Athreya, Mohan Delampady and T Krishnan. K B Athreya is a Professor at Cornell University. His research interests include mathematical analysis, probability theory and its application and statistics. He enjoys writing for Resonance.
Monte Carlo calculations of nuclei
Energy Technology Data Exchange (ETDEWEB)
Pieper, S.C. [Argonne National Lab., IL (United States). Physics Div.
1997-10-01
Nuclear many-body calculations have the complication of strong spin- and isospin-dependent potentials. In these lectures the author discusses the variational and Green's function Monte Carlo techniques that have been developed to address this complication, and presents a few results.
Markov Chain Monte Carlo Methods
Indian Academy of Sciences (India)
…ter of the 20th century, due to rapid developments in computing technology… early part of this development saw a host of Monte… These iterative Monte Carlo procedures typically generate a random sequence with the Markov property such that the Markov chain is ergodic with a limiting distribution coinciding with the…
BATSE spectroscopy detector calibration
Band, D.; Ford, L.; Matteson, J.; Lestrade, J. P.; Teegarden, B.; Schaefer, B.; Cline, T.; Briggs, M.; Paciesas, W.; Pendleton, G.
1992-01-01
We describe the channel-to-energy calibration of the Spectroscopy Detectors of the Burst and Transient Source Experiment (BATSE) on the Compton Gamma Ray Observatory (GRO). These detectors consist of NaI(Tl) crystals viewed by photomultiplier tubes whose output in turn is measured by a pulse height analyzer. The calibration of these detectors has been complicated by frequent gain changes and by nonlinearities specific to the BATSE detectors. Nonlinearities in the light output from the NaI crystal and in the pulse height analyzer are shifted relative to each other by changes in the gain of the photomultiplier tube. We present the analytical model which is the basis of our calibration methodology, and outline how the empirical coefficients in this approach were determined. We also describe the complications peculiar to the Spectroscopy Detectors, and how our understanding of the detectors' operation led us to a solution to these problems.
Calibrating the Athena telescope
de Bruijne, J.; Guainazzi, M.; den Herder, J.; Bavdaz, M.; Burwitz, V.; Ferrando, P.; Lumb, D.; Natalucci, L.; Pajot, F.; Pareschi, G.
2017-10-01
Athena is ESA's upcoming X-ray mission, currently set for launch in 2028. With two nationally-funded, state-of-the-art instruments (a high-resolution spectrograph named X-IFU and a wide-field imager named WFI), and a telescope collecting area of 1.4-2 m^2 at 1 keV, the calibration of the spacecraft is a challenge in itself. This poster presents the current (spring 2017) plan of how to calibrate the Athena telescope. It is based on a hybrid approach, using bulk manufacturing and integration data as well as dedicated calibration measurements combined with a refined software model to simulate the full response of the optics.
Monte Carlo - Advances and Challenges
International Nuclear Information System (INIS)
Brown, Forrest B.; Mosteller, Russell D.; Martin, William R.
2008-01-01
Abstract only, full text follows: With ever-faster computers and mature Monte Carlo production codes, there has been tremendous growth in the application of Monte Carlo methods to the analysis of reactor physics and reactor systems. In the past, Monte Carlo methods were used primarily for calculating k_eff of a critical system. More recently, Monte Carlo methods have been increasingly used for determining reactor power distributions and many design parameters, such as β_eff, l_eff, τ, reactivity coefficients, Doppler defect, dominance ratio, etc. These advanced applications of Monte Carlo methods are now becoming common, not just feasible, but bring new challenges to both developers and users: convergence of 3D power distributions must be assured; confidence interval bias must be eliminated; iterated fission probabilities are required, rather than single-generation probabilities; temperature effects including Doppler and feedback must be represented; isotopic depletion and fission product buildup must be modeled. This workshop focuses on recent advances in Monte Carlo methods and their application to reactor physics problems, and on the resulting challenges faced by code developers and users. The workshop is partly tutorial, partly a review of the current state-of-the-art, and partly a discussion of future work that is needed. It should benefit both novice and expert Monte Carlo developers and users. In each of the topic areas, we provide an overview of needs, perspective on past and current methods, a review of recent work, and discussion of further research and capabilities that are required. Electronic copies of all workshop presentations and material will be available. The workshop is structured as 2 morning and 2 afternoon segments: - Criticality Calculations I - convergence diagnostics, acceleration methods, confidence intervals, and the iterated fission probability, - Criticality Calculations II - reactor kinetics parameters, dominance ratio, temperature
Energy Technology Data Exchange (ETDEWEB)
Anelli, M; Bertolucci, S; Curceanu, C; Giovannella, S; Happacher, F; Iliescu, M; Martini, M; Miscetti, S [Laboratori Nazionali di Frascati, INFN (Italy)]; Battistoni, G [Sezione INFN di Milano (Italy)]; Bini, C; De Zorzi, G; Di Domenico, A; Gauzzi, P [Universita degli Studi 'La Sapienza' e Sezione INFN di Roma (Italy)]; Branchini, P; Di Micco, B; Nguyen, F; Passeri, A [Universita degli Studi 'Roma Tre' e Sezione INFN di Roma Tre (Italy)]; Ferrari, A [Fondazione CNAO, Milano (Italy)]; Prokofiev, A [Svedberg Laboratory, Uppsala University (Sweden)]; Fiore, S, E-mail: matteo.martino@inf.infn.i
2009-04-01
We have measured the overall detection efficiency of a small prototype of the KLOE lead-scintillating fiber calorimeter to neutrons in the kinetic energy range [5,175] MeV. The measurement has been done in a dedicated test beam at the neutron beam facility of The Svedberg Laboratory, TSL, Uppsala. The measurement of the neutron detection efficiency of a NE110 scintillator provided a reference calibration. At the lowest trigger threshold, the overall calorimeter efficiency ranges from 28% to 33%. This value largely exceeds the estimated ~8% expected if the response were proportional only to the scintillator equivalent thickness. A detailed simulation of the calorimeter and of the TSL beam line has been performed with the FLUKA Monte Carlo code. The simulated response of the detector to neutrons is presented together with the first data-to-Monte Carlo comparison. The results show an overall neutron efficiency of about 35%. The reasons for such an efficiency enhancement, in comparison with typical scintillator-based neutron counters, are explained, opening the road to a novel neutron detector.
Calibrating gyrochronology using Kepler asteroseismic targets
Angus, Ruth; Aigrain, Suzanne; Foreman-Mackey, Daniel; McQuillan, Amy
2015-06-01
Among the available methods for dating stars, gyrochronology is a powerful one because it requires knowledge of only the star's mass and rotation period. Gyrochronology relations have previously been calibrated using young clusters, with the Sun providing the only age dependence, and are therefore poorly calibrated at late ages. We used rotation period measurements of 310 Kepler stars with asteroseismic ages, 50 stars from the Hyades and Coma Berenices clusters and 6 field stars (including the Sun) with precise age measurements to calibrate the gyrochronology relation, whilst fully accounting for measurement uncertainties in all observable quantities. We calibrated a relation of the form P = Aⁿ × a(B − V − c)ᵇ, where P is rotation period in days, A is age in Myr, B and V are magnitudes and a, b and n are the free parameters of our model. We found a = 0.40^{+0.3}_{-0.05}, b = 0.31^{+0.05}_{-0.02} and n = 0.55^{+0.02}_{-0.09}. Markov Chain Monte Carlo methods were used to explore the posterior probability distribution functions of the gyrochronology parameters and we carefully checked the effects of leaving out parts of our sample, leading us to find that no single relation between rotation period, colour and age can adequately describe all the subsets of our data. The Kepler asteroseismic stars, cluster stars and local field stars cannot all be described by the same gyrochronology relation. The Kepler asteroseismic stars may be subject to observational biases; however, the clusters show unexpected deviations from the predicted behaviour, providing concerns for the overall reliability of gyrochronology as a dating method.
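A calibrated relation of this form can be evaluated directly. The sketch below uses the quoted best-fit a, b, n and assumes a colour offset c = 0.45, a value commonly fixed in gyrochronology relations but not quoted in the abstract, so it should be treated as an illustrative assumption:

```python
def gyro_period(age_myr, b_minus_v, a=0.40, b=0.31, n=0.55, c=0.45):
    # P = A**n * a * (B - V - c)**b, with P in days and A (age) in Myr;
    # a, b, n are the quoted best-fit values; c = 0.45 is an assumed offset.
    return (age_myr ** n) * a * (b_minus_v - c) ** b

# the Sun: age ~4570 Myr, B - V ~0.65; expect a period of roughly 25 days
p_sun = gyro_period(4570.0, 0.65)
```

With these numbers the Sun comes out near its observed ~26-day rotation period, which is the kind of consistency check the calibration relies on.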
Individual dosimetry and calibration
International Nuclear Information System (INIS)
Hoefert, M.; Nielsen, M.
1996-01-01
In 1995 both the Individual Dosimetry and Calibration Sections worked under status-quo conditions and concentrated fully on the routine part of their work. Nevertheless, the machine for printing the bar codes that are glued onto the film holders, and hence identify people entering high-radiation areas, was put into operation, and most of the holders were equipped with the new identification. As far as the Calibration Section is concerned, the project for the new source control system, realized by the Technical Support Section, was somewhat accelerated.
DEFF Research Database (Denmark)
Gómez Arranz, Paula; Courtney, Michael
This report describes the tests carried out on a scanning lidar at the DTU Test Station for large wind turbines, Høvsøre. The tests were divided in two parts. In the first part, the purpose was to obtain wind speed calibrations at two heights against two cup anemometers mounted on a mast. Additio…
Calibration with Absolute Shrinkage
DEFF Research Database (Denmark)
Øjelund, Henrik; Madsen, Henrik; Thyregod, Poul
2001-01-01
In this paper, penalized regression using the L-1 norm on the estimated parameters is proposed for chemometric calibration. The algorithm is of the lasso type, introduced by Tibshirani in 1996 as a linear regression method with a bound on the absolute length of the parameters, but a modification… to the lasso. The lasso is applied both directly as a calibration method and as a method to select important variables/wavelengths. It is demonstrated that the lasso algorithm, in general, leads to parameter estimates of which some are zero while others are quite large (compared to, e.g., the traditional PLS…
Radiation Calibration Measurements
International Nuclear Information System (INIS)
Omondi, C.
2017-01-01
The KEBS Radiation Dosimetry mandates are: custodian of Kenya standards on ionizing radiation; ensuring traceability to the International System (SI); and calibration of radiation equipment. RAF 8/040, on radioisotope applications for troubleshooting and optimizing industrial processes, established a Radiotracer Laboratory whose objective is to introduce and implement radiotracer techniques for solving industrial problems. The gamma-ray scanning technique is applied to locate blockages, locate liquid in vapour lines, locate areas of lost refractory or lining in a pipe, and measure flowing densities. Equipment used for diagnostics and radiation protection must be calibrated to ensure accuracy and traceability.
Directory of Open Access Journals (Sweden)
Frederick Schauer
2017-09-01
Full Text Available Objective: to study the notion and essence of legal judgment calibration, the possibilities of using it in law-enforcement activity, and to explore its costs and advantages. Methods: a dialectic approach to the cognition of social phenomena, which enables analyzing them in historical development and functioning in the context of the integrity of objective and subjective factors; it determined the choice of the following research methods: formal-legal and comparative-legal methods, and sociological methods of cognitive psychology and philosophy. Results: In ordinary life, people who assess other people's judgments typically take into account the other judgments of those they are assessing in order to calibrate the judgment presently being assessed. The restaurant and hotel rating website TripAdvisor is exemplary because it facilitates calibration by providing access to a rater's previous ratings. Such information allows a user to see whether a particular rating comes from a rater who is enthusiastic about every place she patronizes, or instead from someone who is incessantly hard to please. And even when less systematized, as in assessing a letter of recommendation or a college transcript, calibration by recourse to the decisional history of those whose judgments are being assessed is ubiquitous. Yet despite the ubiquity and utility of such calibration, the legal system seems perversely to reject it. Appellate courts do not openly adjust their standard of review based on the previous judgments of the judge whose decision they are reviewing; nor do judges in reviewing legislative or administrative decisions, magistrates in evaluating search warrant representations, or jurors in assessing witness perception. In most legal domains, calibration by reference to the prior decisions of the reviewee is invisible, either because it does not exist or because reviewing bodies are unwilling to admit using what they in fact know and employ. Scientific novelty: for the first
San Carlos Apache Tribe - Energy Organizational Analysis
Energy Technology Data Exchange (ETDEWEB)
Rapp, James; Albert, Steve
2012-04-01
The San Carlos Apache Tribe (SCAT) was awarded $164,000 in late-2011 by the U.S. Department of Energy (U.S. DOE) Tribal Energy Program's "First Steps Toward Developing Renewable Energy and Energy Efficiency on Tribal Lands" Grant Program. This grant funded: The analysis and selection of preferred form(s) of tribal energy organization (this Energy Organization Analysis, hereinafter referred to as "EOA"). Start-up staffing and other costs associated with the Phase 1 SCAT energy organization. An intern program. Staff training. Tribal outreach and workshops regarding the new organization and SCAT energy programs and projects, including two annual tribal energy summits (2011 and 2012). This report documents the analysis and selection of preferred form(s) of a tribal energy organization.
Methods for Monte Carlo simulations of biomacromolecules.
Vitalis, Andreas; Pappu, Rohit V
2009-01-01
The state-of-the-art for Monte Carlo (MC) simulations of biomacromolecules is reviewed. Available methodologies for sampling conformational equilibria and associations of biomacromolecules in the canonical ensemble, given a continuum description of the solvent environment, are covered. Detailed sections are provided dealing with the choice of degrees of freedom, the efficiencies of MC algorithms and algorithmic peculiarities, as well as the optimization of simple movesets. The issue of introducing correlations into elementary MC moves, and the applicability of such methods to simulations of biomacromolecules, is discussed. A brief discussion of multicanonical methods and an overview of recent simulation work highlighting the potential of MC methods are also provided. It is argued that MC simulations, while underutilized by the biomacromolecular simulation community, hold promise for simulations of complex systems and phenomena that span multiple length scales, especially when used in conjunction with implicit solvation models or other coarse-graining strategies.
Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo
Herckenrath, Daan; Langevin, Christian D.; Doherty, John
2011-01-01
Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction uncertainty was tested for a synthetic saltwater intrusion model patterned after the Henry problem. Saltwater intrusion caused by a reduction in fresh groundwater discharge was simulated for 1000 randomly generated hydraulic conductivity distributions, representing a mildly heterogeneous aquifer. From these 1000 simulations, the hydraulic conductivity distribution giving rise to the most extreme case of saltwater intrusion was selected and was assumed to represent the "true" system. Head and salinity values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. The NSMC method was used to calculate 1000 calibration-constrained parameter fields. If the dimensionality of the solution space was set appropriately, the estimated uncertainty range from the NSMC analysis encompassed the truth. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. Reducing the dimensionality of the null-space for the processing of the random parameter sets did not result in any significant gains in efficiency and compromised the ability of the NSMC method to encompass the true prediction value. The addition of intrapilot point heterogeneity to the NSMC process was also tested. According to a variogram comparison, this provided the same scale of heterogeneity that was used to generate the truth. However, incorporation of intrapilot point variability did not make a noticeable difference to the uncertainty of the prediction. With this higher level of heterogeneity, however, the computational burden of
Monte-Carlo Simulation for PDC-Based Optical CDMA System
Directory of Open Access Journals (Sweden)
FAHIM AZIZ UMRANI
2010-10-01
Full Text Available This paper presents the Monte-Carlo simulation of Optical CDMA (Code Division Multiple Access) systems, and analyses its performance in terms of the BER (Bit Error Rate). The spreading sequence chosen for CDMA is Perfect Difference Codes. Furthermore, this paper derives the expressions of noise variances from first principles to calibrate the noise for both bipolar (electrical domain) and unipolar (optical domain) signalling required for Monte-Carlo simulation. The simulated results conform to the theory and show that the receiver gain mismatch and splitter loss at the transceiver degrade the system performance.
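The noise-calibrated Monte-Carlo BER estimation described above can be illustrated for the simpler bipolar case. This is a generic antipodal-signalling sketch over an AWGN channel, not the paper's PDC-based optical model; the names and parameters are assumptions:

```python
import math
import random

def ber_monte_carlo(ebn0_db, n_bits=200000, seed=1):
    # Antipodal (bipolar) signalling over an AWGN channel: send +/-1 with
    # unit bit energy, add Gaussian noise whose variance is calibrated
    # from Eb/N0, and count threshold-detector errors.
    rng = random.Random(seed)
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sigma = math.sqrt(1.0 / (2.0 * ebn0))
    errors = 0
    for _ in range(n_bits):
        bit = rng.choice((-1.0, 1.0))
        rx = bit + sigma * rng.gauss(0.0, 1.0)
        if (rx >= 0.0) != (bit > 0.0):
            errors += 1
    return errors / n_bits

# theory for antipodal signalling: BER = 0.5 * erfc(sqrt(Eb/N0))
ber = ber_monte_carlo(4.0)
theory = 0.5 * math.erfc(math.sqrt(10.0 ** 0.4))
```

Calibrating the noise variance from first principles, as the paper does for both signalling domains, is what makes the simulated BER directly comparable with the analytic curve.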
Essay on Option Pricing, Hedging and Calibration
DEFF Research Database (Denmark)
da Silva Ribeiro, André Manuel
Quantitative finance is concerned with applying mathematics to financial markets. This thesis is a collection of essays that study different problems in this field: How efficient are option price approximations for calibrating a stochastic volatility model? (Chapter 2) How different is the discretely… of dynamics? (Chapter 5) How can we formulate a simple arbitrage-free model to price correlation swaps? (Chapter 6) A summary of the work presented in this thesis: Approximation Behooves Calibration. In this paper we show that calibration based on an expansion approximation for option prices in the Heston stochastic volatility model gives stable, accurate, and fast results for S&P500-index option data over the period 2005 to 2009. Discretely Sampled Variance Options: A Stochastic Approximation Approach. In this paper, we expand the Drimus and Farkas (2012) framework to price variance options on discretely sampled…
(U) Introduction to Monte Carlo Methods
Energy Technology Data Exchange (ETDEWEB)
Hungerford, Aimee L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-03-20
Monte Carlo methods are very valuable for representing solutions to particle transport problems. Here we describe a “cook book” approach to handling the terms in a transport equation using Monte Carlo methods. Focus is on the mechanics of a numerical Monte Carlo code, rather than the mathematical foundations of the method.
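In the "cook book" spirit of handling transport-equation terms one at a time, the most basic term, sampling free-flight distances from an exponential distribution, can be illustrated with a toy slab-transmission estimate. This is purely pedagogical and not from the report; all names are hypothetical:

```python
import math
import random

def slab_transmission(sigma_t, thickness, n_particles=100000, seed=2):
    # Toy transport term: sample a free-flight distance from the exponential
    # distribution with total cross-section sigma_t; a particle whose first
    # flight exceeds the slab thickness is transmitted (pure absorber,
    # no scattering).
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_particles):
        d = -math.log(1.0 - rng.random()) / sigma_t  # inverse-CDF sampling
        if d > thickness:
            transmitted += 1
    return transmitted / n_particles

# analytic answer for sigma_t = 1, thickness = 2 is exp(-2) ~ 0.1353
t_mc = slab_transmission(1.0, 2.0)
```

Each term of a real transport equation (scattering, absorption, sources) gets its own sampling recipe of this kind; the mechanics stay the same while the distributions change.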
Directory of Open Access Journals (Sweden)
Björn J. Döring
2013-12-01
Full Text Available A synthetic aperture radar (SAR) system requires external absolute calibration so that radiometric measurements can be exploited in numerous scientific and commercial applications. Besides estimating a calibration factor, metrological standards also demand the derivation of a respective calibration uncertainty. This uncertainty is currently not systematically determined. Here for the first time it is proposed to use hierarchical modeling and Bayesian statistics as a consistent method for handling and analyzing the hierarchical data typically acquired during external calibration campaigns. Through the use of Markov chain Monte Carlo simulations, a joint posterior probability can be conveniently derived from measurement data despite the necessary grouping of data samples. The applicability of the method is demonstrated through a case study: The radar reflectivity of DLR's new C-band Kalibri transponder is derived through a series of RADARSAT-2 acquisitions and a comparison with reference point targets (corner reflectors). The systematic derivation of calibration uncertainties is seen as an important step toward traceable radiometric calibration of synthetic aperture radars.
Calibration bench of flowmeters
International Nuclear Information System (INIS)
Bremond, J.; Da Costa, D.; Calvet, A.; Vieuxmaire, C.
1966-01-01
This equipment is devoted to the comparison of signals from two turbines installed in the Cabri experimental loop. Each signal is compared to that of a standard turbine. The characteristics and the performance of the calibration bench are presented. (A.L.B.)
2008-01-01
Commodity-free calibration is a reaction rate calibration technique that does not require the addition of any commodities. This technique is a specific form of the reaction rate technique, where all of the necessary reactants, other than the sample being analyzed, are either inherent in the analyzing system or specifically added or provided to the system for a reason other than calibration. After introduction, the component of interest is exposed to other reactants or flow paths already present in the system. The instrument detector records one of the following to determine the rate of reaction: the increase in the response of the reaction product, a decrease in the signal of the analyte response, or a decrease in the signal from the inherent reactant. With this data, the initial concentration of the analyte is calculated. This type of system can analyze and calibrate simultaneously, reduce the risk of false positives and exposure to toxic vapors, and improve accuracy. Moreover, having an excess of the reactant already present in the system eliminates the need to add commodities, which further reduces cost, logistic problems, and potential contamination. Also, the calculations involved can be simplified by comparison to those of the reaction rate technique. We conducted tests with hypergols as an initial investigation into the feasibility of the technique.
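The rate-to-concentration step described above can be sketched for the simplest case, a pseudo-first-order decay of the inherent reactant's signal. The kinetics, names, and numbers below are illustrative assumptions, not from the tested hypergol system:

```python
import math

def initial_concentration(times, signal, k):
    # Pseudo-first-order sketch: the inherent reactant's signal decays as
    # S(t) = S0 * exp(-k * C * t), so the analyte concentration C follows
    # from the least-squares slope of ln S versus t.
    n = len(times)
    xbar = sum(times) / n
    ybar = sum(math.log(s) for s in signal) / n
    num = sum((t - xbar) * (math.log(s) - ybar)
              for t, s in zip(times, signal))
    den = sum((t - xbar) ** 2 for t in times)
    return -(num / den) / k

# synthetic decay trace generated with C = 2.0 and k = 0.5 (units arbitrary)
times = [0.1 * i for i in range(10)]
signal = [100.0 * math.exp(-0.5 * 2.0 * t) for t in times]
c_est = initial_concentration(times, signal, 0.5)
```

Because the rate constant k and the decaying signal are measured by the instrument itself, the analyte concentration follows without adding any calibration commodity, which is the point of the technique.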
Calibration with Absolute Shrinkage
DEFF Research Database (Denmark)
Øjelund, Henrik; Madsen, Henrik; Thyregod, Poul
2001-01-01
In this paper, penalized regression using the L-1 norm on the estimated parameters is proposed for chemometric calibration. The algorithm is of the lasso type, introduced by Tibshirani in 1996 as a linear regression method with a bound on the absolute length of the parameters, but a modification...
Calibrating Communication Competencies
Surges Tatum, Donna
2016-11-01
The Many-faceted Rasch measurement model is used in the creation of a diagnostic instrument by which communication competencies can be calibrated, the severity of observers/raters can be determined, the ability of speakers measured, and comparisons made between various groups.
NVLAP calibration laboratory program
International Nuclear Information System (INIS)
Cigler, J.L.
1993-01-01
This paper presents an overview of the progress up to April 1993 in the development of the Calibration Laboratories Accreditation Program within the framework of the National Voluntary Laboratory Accreditation Program (NVLAP) at the National Institute of Standards and Technology (NIST).
CERN. Geneva
2015-01-01
My talk will be covering my work as a whole over the course of the semester. The focus will be on using energy flow calibration in ECAL to check the precision of the corrections made by the light monitoring system used to account for transparency loss within ECAL crystals due to radiation damage over time.
Measurement System & Calibration report
DEFF Research Database (Denmark)
Gómez Arranz, Paula; Vesth, Allan
This Measurement System & Calibration report describes DTU’s measurement system installed at a specific wind turbine. A major part of the sensors has been installed by others (see [1]); the rest of the sensors have been installed by DTU. The results of the measurements, described in this report...
Measurement System & Calibration report
DEFF Research Database (Denmark)
Georgieva Yankova, Ginka; Federici, Paolo
This Measurement System & Calibration report describes DTU’s measurement system installed at a specific wind turbine. A part of the sensors has been installed by others; the rest of the sensors have been installed by DTU. The results of the measurements, described in this report, are only valid...
Measurement System & Calibration report
DEFF Research Database (Denmark)
Gómez Arranz, Paula; Villanueva, Héctor
This Measurement System & Calibration report describes DTU’s measurement system installed at a specific wind turbine. A part of the sensors has been installed by others; the rest of the sensors have been installed by DTU. The results of the measurements, described in this report, are only valid...
Entropic calibration revisited
Energy Technology Data Exchange (ETDEWEB)
Brody, Dorje C. [Blackett Laboratory, Imperial College, London SW7 2BZ (United Kingdom)]. E-mail: d.brody@imperial.ac.uk; Buckley, Ian R.C. [Centre for Quantitative Finance, Imperial College, London SW7 2AZ (United Kingdom); Constantinou, Irene C. [Blackett Laboratory, Imperial College, London SW7 2BZ (United Kingdom); Meister, Bernhard K. [Blackett Laboratory, Imperial College, London SW7 2BZ (United Kingdom)
2005-04-11
The entropic calibration of the risk-neutral density function is effective in recovering the strike dependence of options, but encounters difficulties in determining the relevant greeks. By use of put-call reversal we apply the entropic method to the time reversed economy, which allows us to obtain the spot price dependence of options and the relevant greeks.
Physiotherapy ultrasound calibrations
International Nuclear Information System (INIS)
Gledhill, M.
1996-01-01
Calibration of physiotherapy ultrasound equipment has long been a problem. Numerous surveys around the world over the past 20 years have all found that only a low percentage of the units tested had an output within 30% of that indicated. In New Zealand, a survey carried out by the NRL in 1985 found that only 24% had an output, at the maximum setting, within ±20% of that indicated. The present performance Standard for new equipment (NZS 3200.2.5:1992) requires that the measured output should not deviate from that indicated by more than ±30%. This may be tightened to ±20% in the next few years. Any calibration is only as good as the calibration equipment. Some force balances can be tested with small weights to simulate the force exerted by an ultrasound beam, but with others this is not possible. For such balances, testing may only be feasible with a calibrated source which could be used like a transfer standard. (author). 4 refs., 3 figs
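The tolerance criterion in the Standard reduces to a simple relative-deviation check; a minimal sketch (the wattage values are invented for illustration):

```python
# Tolerance check implied by NZS 3200.2.5:1992: the measured ultrasound
# output must lie within +/-30% of the indicated value (+/-20% if tightened).
def within_tolerance(indicated_w, measured_w, tol=0.30):
    return abs(measured_w - indicated_w) <= tol * indicated_w

# 2.2 W measured vs 3.0 W indicated -> deviation 26.7%, inside +/-30%
print(within_tolerance(3.0, 2.2))
```

The same check with `tol=0.20` would fail this unit, which is the practical effect of the proposed tightening.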
International Nuclear Information System (INIS)
Rosauer, P.J.; Flaherty, J.J.
1981-01-01
This invention is in the field of gamma ray inspection devices for tubular products and the like employing an improved calibrating block which prevents the sensing system from being overloaded when no tubular product is present, and also provides the operator with a means for visually detecting the presence of wall thicknesses which are less than a required minimum. (author)
PLEIADES ABSOLUTE CALIBRATION : INFLIGHT CALIBRATION SITES AND METHODOLOGY
Directory of Open Access Journals (Sweden)
S. Lachérade
2012-07-01
In-flight calibration of space sensors once in orbit is a decisive step to be able to fulfil the mission objectives. This article presents the methods of the in-flight absolute calibration processed during the commissioning phase. Four in-flight calibration methods are used: absolute calibration, cross-calibration with reference sensors such as PARASOL or MERIS, multi-temporal monitoring, and inter-band calibration. These algorithms are based on acquisitions over natural targets such as African deserts, Antarctic sites, La Crau (automatic calibration station), and oceans (calibration over molecular scattering), or also new extra-terrestrial sites such as the Moon and selected stars. After an overview of the instrument and a description of the calibration sites, it is pointed out how each method is able to address one or several aspects of the calibration. We focus on how these methods complement each other in their operational use, and how they help build a coherent set of information that addresses all aspects of in-orbit calibration. Finally, we present the perspectives that the high level of agility of PLEIADES offers for the improvement of its calibration and a better characterization of the calibration sites.
Energy Technology Data Exchange (ETDEWEB)
John F. Schabron; Joseph F. Rovani; Susan S. Sorini
2007-03-31
The Clean Air Mercury Rule (CAMR) which was published in the Federal Register on May 18, 2005, requires that calibration of mercury continuous emissions monitors (CEMs) be performed with NIST-traceable standards. Western Research Institute (WRI) is working closely with the Electric Power Research Institute (EPRI), the National Institute of Standards and Technology (NIST), and the Environmental Protection Agency (EPA) to facilitate the development of the experimental criteria for a NIST traceability protocol for dynamic elemental mercury vapor generators. The traceability protocol will be written by EPA. Traceability will be based on the actual analysis of the output of each calibration unit at several concentration levels ranging from about 2-40 ug/m{sup 3}, and this analysis will be directly traceable to analyses by NIST using isotope dilution inductively coupled plasma/mass spectrometry (ID ICP/MS) through a chain of analyses linking the calibration unit in the power plant to the NIST ID ICP/MS. Prior to this project, NIST did not provide a recommended mercury vapor pressure equation or list mercury vapor pressure in its vapor pressure database. The NIST Physical and Chemical Properties Division in Boulder, Colorado was subcontracted under this project to study the issue in detail and to recommend a mercury vapor pressure equation that the vendors of mercury vapor pressure calibration units can use to calculate the elemental mercury vapor concentration in an equilibrium chamber at a particular temperature. As part of this study, a preliminary evaluation of calibration units from five vendors was made. The work was performed by NIST in Gaithersburg, MD and Joe Rovani from WRI who traveled to NIST as a Visiting Scientist.
Energy Technology Data Exchange (ETDEWEB)
J. Wang
2003-06-24
The purpose of this Model Report is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Office of Repository Development (ORD). The UZ contains the unsaturated rock layers overlying the repository and host unit, which constitute a natural barrier to flow, and the unsaturated rock layers below the repository which constitute a natural barrier to flow and transport. This work followed, and was planned in, ''Technical Work Plan (TWP) for: Performance Assessment Unsaturated Zone'' (BSC 2002 [160819], Section 1.10.8 [under Work Package (WP) AUZM06, Climate Infiltration and Flow], and Section I-1-1 [in Attachment I, Model Validation Plans]). In Section 4.2, four acceptance criteria (ACs) are identified for acceptance of this Model Report; only one of these (Section 4.2.1.3.6.3, AC 3) was identified in the TWP (BSC 2002 [160819], Table 3-1). These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, and drift-scale and mountain-scale coupled-process models from the UZ Flow, Transport and Coupled Processes Department in the Natural Systems Subproject of the Performance Assessment (PA) Project. The Calibrated Properties Model output will also be used by the Engineered Barrier System Department in the Engineering Systems Subproject. The Calibrated Properties Model provides input through the UZ Model and other process models of natural and engineered systems to the Total System Performance Assessment (TSPA) models, in accord with the PA Strategy and Scope in the PA Project of the Bechtel SAIC Company, LLC (BSC). The UZ process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions. UZ flow is a TSPA model component.
Parallel Monte Carlo Simulation of Aerosol Dynamics
Directory of Open Access Journals (Sweden)
Kun Zhou
2014-02-01
A highly efficient Monte Carlo (MC) algorithm is developed for the numerical simulation of aerosol dynamics, that is, nucleation, surface growth, and coagulation. Nucleation and surface growth are handled with deterministic means, while coagulation is simulated with a stochastic method (Marcus-Lushnikov stochastic process). Operator splitting techniques are used to synthesize the deterministic and stochastic parts in the algorithm. The algorithm is parallelized using the Message Passing Interface (MPI). The parallel computing efficiency is investigated through numerical examples. Near 60% parallel efficiency is achieved for the maximum testing case with 3.7 million MC particles running on 93 parallel computing nodes. The algorithm is verified through simulating various testing cases and comparing the simulation results with available analytical and/or other numerical solutions. Generally, it is found that only a small number (hundreds or thousands) of MC particles is necessary to accurately predict the aerosol particle number density, volume fraction, and so forth, that is, low order moments of the Particle Size Distribution (PSD) function. Accurately predicting the high order moments of the PSD needs to dramatically increase the number of MC particles.
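The operator-splitting idea above can be sketched in serial form: a deterministic growth half-step followed by a stochastic coagulation half-step over a small particle ensemble. The growth law, kernel value, and event-rate normalisation are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical MC ensemble of particle volumes (arbitrary units).
v = np.ones(200)

def grow(v, dt, rate=0.1):
    # Deterministic surface growth: dv/dt = rate (linear growth law assumed).
    return v + rate * dt

def coagulate(v, dt, kernel=0.01):
    # One step of a Marcus-Lushnikov-style stochastic coagulation process
    # with a constant kernel: randomly chosen pairs merge, conserving volume.
    v = list(v)
    n = len(v)
    n_events = rng.poisson(0.5 * kernel * (n - 1) * dt)  # assumed normalisation
    for _ in range(min(n_events, n // 2)):
        i, j = rng.choice(len(v), size=2, replace=False)
        v[i] += v[j]          # merge j into i (volume conserved)
        del v[j]
    return np.array(v)

dt, steps = 0.1, 50
for _ in range(steps):
    v = grow(v, dt)        # deterministic part of the split step
    v = coagulate(v, dt)   # stochastic part of the split step

print(len(v), round(v.mean(), 2))
```

Coagulation only reduces the particle count and growth only adds volume, so the mean particle volume increases monotonically, matching the qualitative behavior the splitting is meant to capture.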
Field calibration of cup anemometers
DEFF Research Database (Denmark)
Schmidt Paulsen, Uwe; Mortensen, Niels Gylling; Hansen, Jens Carsten
2007-01-01
A field calibration method and results are described along with the experience gained with the method. The cup anemometers to be calibrated are mounted in a row on a 10-m high rig and calibrated in the free wind against a reference cup anemometer. The method has been reported [1] to improve...... the statistical bias on the data relative to calibrations carried out in a wind tunnel. The methodology is sufficiently accurate for calibration of cup anemometers used for wind resource assessments and provides a simple, reliable and cost-effective solution to cup anemometer calibration, especially suited...
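The field-calibration idea of fitting a test anemometer against a reference in the free wind reduces, in its simplest form, to a linear regression. The gain, offset, and noise figures below are invented for illustration, not values from the Risø rig:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic free-wind data: the test anemometer reads the same wind as the
# reference but with an (assumed) gain error, offset, and scatter.
ref = rng.uniform(4, 16, 300)                        # reference speed, m/s
test = 0.97 * ref + 0.3 + rng.normal(0, 0.1, 300)    # test-anemometer reading

# Calibration fit: ref = gain * test + offset.
gain, offset = np.polyfit(test, ref, 1)
corrected = gain * test + offset
resid_sd = np.std(corrected - ref)                   # residual scatter, m/s
print(round(gain, 3), round(offset, 3), round(resid_sd, 3))
```

The fitted gain/offset pair is the calibration; the residual standard deviation is the figure of merit one would compare against a wind-tunnel calibration.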
Shell model Monte Carlo methods
International Nuclear Information System (INIS)
Koonin, S.E.
1996-01-01
We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, thermal behavior of γ-soft nuclei, and calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs
Subtle Monte Carlo Updates in Dense Molecular Systems
DEFF Research Database (Denmark)
Bottaro, Sandro; Boomsma, Wouter; Johansson, Kristoffer E.
2012-01-01
Although Markov chain Monte Carlo (MC) simulation is a potentially powerful approach for exploring conformational space, it has been unable to compete with molecular dynamics (MD) in the analysis of high density structural states, such as the native state of globular proteins. Here, we introduce...... a kinetic algorithm, CRISP, that greatly enhances the sampling efficiency in all-atom MC simulations of dense systems. The algorithm is based on an exact analytical solution to the classic chain-closure problem, making it possible to express the interdependencies among degrees of freedom in the molecule...... as correlations in a multivariate Gaussian distribution. We demonstrate that our method reproduces structural variation in proteins with greater efficiency than current state-of-the-art Monte Carlo methods and has real-time simulation performance on par with molecular dynamics simulations. The presented results......
Parallelism in continuous energy Monte Carlo method for neutron transport
Energy Technology Data Exchange (ETDEWEB)
Uenohara, Yuji (Nuclear Engineering Lab., Toshiba Corp. (Japan))
1993-04-01
The continuous energy Monte Carlo code VIM was implemented on a prototype highly parallel computer called PRODIGY developed by TOSHIBA Corporation. The author tried to distribute nuclear data to the processing elements (PEs) for the purpose of studying domain decomposition for the velocity space. Eigenvalue problems for a 1-D plate-cell infinite lattice mockup of ZPR-6-7 were examined. For the geometrical space, the PEs were assigned to domains corresponding to nuclear fuel bundles in a typical boiling water reactor. The author estimated the parallelization efficiencies for both a highly parallel and a massively parallel computer. Negligible communication overhead from neutron transport resulted from the heavy computing loads of the Monte Carlo simulations. In the case of highly parallel computers, the communication overheads scarcely affected the parallelization efficiency. In the case of massively parallel computers, the control of PEs resulted in considerable communication overheads. (orig.)
Calibration of thin-film dosimeters irradiated with 80-120 kev electrons
DEFF Research Database (Denmark)
Helt-Hansen, J.; Miller, A.; McEwen, M.
2004-01-01
A method for calibration of thin-film dosimeters irradiated with 80-120 keV electrons has been developed. The method is based on measurement of dose with a totally absorbing graphite calorimeter, and conversion of dose in the graphite calorimeter to dose in the film dosimeter by Monte Carlo...... calculations. A thermal model was developed to estimate the temperature contributions from the air above the calorimeter that is heated by the electron beam. As an example, Risø B3 thin-film dosimeters were calibrated by 80-120 keV electron irradiation and compared with a calibration carried out at 10 Me...
Adaptive Multilevel Monte Carlo Simulation
Hoel, H
2011-08-23
This work generalizes a multilevel forward Euler Monte Carlo method introduced by Michael B. Giles (Oper. Res. 56(3):607-617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. Giles proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single level, forward Euler Monte Carlo method. This work introduces an adaptive hierarchy of non uniform time discretizations, generated by an adaptive algorithm introduced in (Anna Dzougoutov et al. Adaptive Monte Carlo algorithms for stopped diffusion. In Multiscale methods in science and engineering, volume 44 of Lect. Notes Comput. Sci. Eng., pages 59-88. Springer, Berlin, 2005; Kyoung-Sook Moon et al. Stoch. Anal. Appl. 23(3):511-558, 2005; Kyoung-Sook Moon et al. An adaptive algorithm for ordinary, stochastic and partial differential equations. In Recent advances in adaptive computation, volume 383 of Contemp. Math., pages 325-343. Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates stochastic, path dependent, time steps and is based on a posteriori error expansions first developed in (Anders Szepessy et al. Comm. Pure Appl. Math. 54(10):1169-1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL): from O(TOL^-3) with a single level version of the adaptive algorithm to O((TOL^-1 log(TOL))^2).
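The (uniform-step) multilevel idea being generalized here can be sketched for a geometric Brownian motion, where the telescoping sum of coupled coarse/fine corrections replaces a single fine-level estimator. The SDE coefficients and sample counts are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Multilevel Monte Carlo for E[X_T] of dX = a X dt + b X dW (geometric
# Brownian motion), forward Euler, levels l = 0..L with 2**l time steps.
a, b, T, X0 = 0.05, 0.2, 1.0, 1.0

def euler_paths(n_steps, n_paths):
    # Fine path and the coarse (half-resolution) path driven by the *same*
    # Brownian increments -- the coupling that makes the correction small.
    dt = T / n_steps
    dW = rng.normal(0, np.sqrt(dt), size=(n_paths, n_steps))
    Xf = np.full(n_paths, X0)
    for k in range(n_steps):
        Xf = Xf + a * Xf * dt + b * Xf * dW[:, k]
    if n_steps == 1:
        return Xf, None           # coarsest level has no coarse partner
    Xc = np.full(n_paths, X0)
    for k in range(0, n_steps, 2):
        dWc = dW[:, k] + dW[:, k + 1]
        Xc = Xc + a * Xc * 2 * dt + b * Xc * dWc
    return Xf, Xc

L, N = 5, 20000
est = 0.0  # telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_(l-1)]
for l in range(L + 1):
    Xf, Xc = euler_paths(2 ** l, N)
    est += np.mean(Xf if Xc is None else Xf - Xc)

print(round(est, 3))  # exact answer is X0 * exp(a*T) ~= 1.051
```

The adaptive method of the abstract replaces the uniform hierarchy `2**l` with stochastic, path-dependent step sizes, but the telescoping structure is the same.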
Extending canonical Monte Carlo methods
International Nuclear Information System (INIS)
Velazquez, L; Curilef, S
2010-01-01
In this paper, we discuss the implications of a recently obtained equilibrium fluctuation-dissipation relation for the extension of the available Monte Carlo methods on the basis of the consideration of the Gibbs canonical ensemble, to account for the existence of an anomalous regime with negative heat capacities C < 0, illustrated for the particular case of the 2D ten-state Potts model.
Parallel Monte Carlo reactor neutronics
International Nuclear Information System (INIS)
Blomquist, R.N.; Brown, F.B.
1994-01-01
The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved
Energy Technology Data Exchange (ETDEWEB)
Ortiz Lora, A.; Miras del Rio, H.; Terron Leon, J. A.
2013-07-01
Following the recommendations of the IAEA, and as a further check, a Monte Carlo simulation has been performed of each of the plates available at the Hospital. The objective of this work is the verification of the calibration certificates, and it aims to establish acceptance criteria for them. (Author)
Difficult Sudoku Puzzles Created by Replica Exchange Monte Carlo Method
Watanabe, Hiroshi
2013-01-01
An algorithm to create difficult Sudoku puzzles is proposed. An Ising spin-glass-like Hamiltonian describing the difficulty of puzzles is defined, and difficult puzzles are created by minimizing the energy of the Hamiltonian. We adopt the replica exchange Monte Carlo method with simultaneous temperature adjustments to search lower energy states efficiently, and we succeed in creating a puzzle which is, by our definition and to the best of our knowledge, the hardest ever created. (Added on Mar. 11, the ...
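Replica exchange as used above can be sketched on a stand-in energy function; here a 1-D double well replaces the Sudoku difficulty Hamiltonian, and the temperature ladder and move sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Replica exchange (parallel tempering) on a 1-D double-well energy,
# a hypothetical stand-in for the paper's difficulty Hamiltonian.
def energy(x):
    return (x**2 - 1.0) ** 2   # minima at x = +/-1

temps = [0.05, 0.2, 0.8, 3.0]  # temperature ladder (assumed)
x = [2.5 for _ in temps]       # all replicas start far from the minima

for sweep in range(4000):
    # Local Metropolis move within each replica at its own temperature.
    for i, T in enumerate(temps):
        prop = x[i] + rng.normal(0, 0.3)
        if np.log(rng.uniform()) < -(energy(prop) - energy(x[i])) / T:
            x[i] = prop
    # Attempt a swap between a random pair of neighbouring temperatures:
    # accept with probability min(1, exp[(b_i - b_j)(E_i - E_j)]).
    i = rng.integers(len(temps) - 1)
    d = (1 / temps[i] - 1 / temps[i + 1]) * (energy(x[i + 1]) - energy(x[i]))
    if np.log(rng.uniform()) < -d:
        x[i], x[i + 1] = x[i + 1], x[i]

print(round(energy(x[0]), 3))  # coldest replica should sit near a minimum
```

The hot replicas cross barriers easily and feed good configurations down the ladder, which is why exchange outperforms a single cold Metropolis chain on rugged landscapes.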
Scaling Monte Carlo Tree Search on Intel Xeon Phi
Mirsoleimani, S. Ali; Plaat, Aske; Herik, Jaap van den; Vermaseren, Jos
2015-01-01
Many algorithms have been parallelized successfully on the Intel Xeon Phi coprocessor, especially those with regular, balanced, and predictable data access patterns and instruction flows. Irregular and unbalanced algorithms are harder to parallelize efficiently. They are, for instance, present in artificial intelligence search algorithms such as Monte Carlo Tree Search (MCTS). In this paper we study the scaling behavior of MCTS, on a highly optimized real-world application, on real hardware. ...
Strings, Projected Entangled Pair States, and variational Monte Carlo methods
Schuch, Norbert; Wolf, Michael M.; Verstraete, Frank; Cirac, J. Ignacio
2007-01-01
We introduce string-bond states, a class of states obtained by placing strings of operators on a lattice, which encompasses the relevant states in Quantum Information. For string-bond states, expectation values of local observables can be computed efficiently using Monte Carlo sampling, making them suitable for a variational algorithm which extends DMRG to higher dimensional and irregular systems. Numerical results demonstrate the applicability of these states to the simulation of many-body s...
Golobokov, M.; Danilevich, S.
2018-04-01
In order to assess calibration reliability and to automate that assessment, procedures for data collection and for a simulation study of the thermal imager calibration procedure have been elaborated. The existing calibration techniques do not always provide high reliability. A new method for analyzing the existing calibration techniques and developing new, efficient ones has been suggested and tested. A type of software has been studied that generates instrument calibration reports automatically, monitors their proper configuration, processes measurement results, and assesses instrument validity. The use of such software reduces the man-hours spent on finalizing calibration data by a factor of 2 to 5 and eliminates a whole set of typical operator errors.
Spada, F.M.; Krol, M.C.; Stammes, P.
2006-01-01
A new multiple-scattering Monte Carlo 3-D radiative transfer model named McSCIA (Monte Carlo for SCIAmachy) is presented. The backward technique is used to efficiently simulate narrow field of view instruments. The McSCIA algorithm has been formulated as a function of the Earth’s radius, and can
Monte-Carlo-based uncertainty propagation with hierarchical models—a case study in dynamic torque
Klaus, Leonard; Eichstädt, Sascha
2018-04-01
For a dynamic calibration, a torque transducer is described by a mechanical model, and the corresponding model parameters are to be identified from measurement data. A measuring device for the primary calibration of dynamic torque, and a corresponding model-based calibration approach, have recently been developed at PTB. The complete mechanical model of the calibration set-up is very complex, and involves several calibration steps—making a straightforward implementation of a Monte Carlo uncertainty evaluation tedious. With this in mind, we here propose to separate the complete model into sub-models, with each sub-model being treated with individual experiments and analysis. The uncertainty evaluation for the overall model then has to combine the information from the sub-models in line with Supplement 2 of the Guide to the Expression of Uncertainty in Measurement. In this contribution, we demonstrate how to carry this out using the Monte Carlo method. The uncertainty evaluation involves various input quantities of different origin and the solution of a numerical optimisation problem.
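The sub-model combination described above can be sketched as a plain Monte Carlo propagation in the spirit of GUM Supplement 2: each sub-model contributes samples of its parameter, and the overall model is evaluated sample by sample. The model form, parameter values, and uncertainties below are illustrative assumptions, not PTB's actual calibration model:

```python
import numpy as np

rng = np.random.default_rng(4)
M = 100_000

# Hypothetical sub-model results: each calibration step yields a parameter
# with its own uncertainty (plain Gaussians here; in practice these samples
# would come from the sub-models' own Monte Carlo analyses).
sensitivity = rng.normal(2.00, 0.02, M)    # sub-model 1: V/(N*m), assumed
head_inertia = rng.normal(0.50, 0.01, M)   # sub-model 2: kg*m^2, assumed

# Overall (toy) model: torque from a measured voltage and an angular
# acceleration, combining both sub-model parameters.
voltage, alpha = 1.0, 3.0
torque = voltage / sensitivity + head_inertia * alpha

mean = torque.mean()
u = torque.std(ddof=1)                        # standard uncertainty
lo, hi = np.percentile(torque, [2.5, 97.5])   # 95% coverage interval
print(round(mean, 3), round(u, 3))
```

Separating the sub-models means each one can be re-measured or refined independently, and only its sample set needs regenerating before the combination step is re-run.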
MAVEN SEP Calibrated Data Bundle
National Aeronautics and Space Administration — The maven.sep.calibrated Level 2 Science Data Bundle contains fully calibrated SEP data, as well as the raw count data from which they are derived, and ancillary...
Neutron calibration field of bare {sup 252}Cf source in Vietnam
Energy Technology Data Exchange (ETDEWEB)
Le, Ngoc Thiem; Tran, Hoai Nam; Nguyen, Khai Tuan [Institute for Nuclear Science and Technology, Hanoi (Viet Nam); Trinh, Glap Van [Institute of Research and Development, Duy Tan University, Da Nang (Viet Nam)
2017-02-15
This paper presents the establishment and characterization of a neutron calibration field using a bare {sup 252}Cf source of low neutron source strength in Vietnam. The characterization of the field in terms of neutron flux spectra and neutron ambient dose equivalent rates were performed by Monte Carlo simulations using the MCNP5 code. The anisotropy effect of the source was also investigated. The neutron ambient dose equivalent rates at three reference distances of 75, 125, and 150 cm from the source were calculated and compared with the measurements using the Aloka TPS-451C neutron survey meters. The discrepancy between the calculated and measured values is found to be about 10%. To separate the scattered and the direct components from the total neutron flux spectra, an in-house shadow cone of 10% borated polyethylene was used. The shielding efficiency of the shadow cone was estimated using the MCNP5 code. The results confirmed that the shielding efficiency of the shadow cone is acceptable.
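The shadow-cone separation described above reduces to a subtraction: the cone blocks the direct beam, so a measurement behind it sees only the room-scattered component. The dose-rate numbers below are invented for illustration, not values from the Vietnamese facility:

```python
# Shadow-cone technique: direct = total - scattered.
total_rate = 120.0      # uSv/h with the detector facing the bare source (assumed)
scattered_rate = 18.0   # uSv/h with the shadow cone interposed (assumed)

direct_rate = total_rate - scattered_rate
scatter_fraction = scattered_rate / total_rate
print(direct_rate, round(scatter_fraction, 2))
```

Only the direct component follows the inverse-square law from the source, which is why the subtraction must be repeated at each reference distance.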
A Study on Relative Radiometric Calibration without Calibration Field for YG-25
Directory of Open Access Journals (Sweden)
ZHANG Guo
2017-08-01
YG-25 is the first agile optical remote sensing satellite of China to acquire sub-meter imagery of the Earth. The side slither calibration technique is an on-orbit maneuver that has been used to flat-field image data acquired over a uniform calibration field. However, imaging a single uniform calibration field cannot calibrate the full dynamic response range of the sensor and reduces efficiency. This paper proposes a new relative radiometric calibration method in which a 90-degree yaw maneuver is performed over arbitrary non-uniform features of the Earth for YG-25. Meanwhile, we use an enhanced side slither image horizontal correction method based on a line segment detector (LSD) algorithm to solve the side slither image over-shift problem. The shifted results are compared with those of other horizontal correction methods. The histogram match algorithm is used to calculate the relative gains of all detectors. The correctness and validity of the proposed method are validated using the YG-25 on-board side slither data. The results prove that the mean streaking metric of the relative correction images of YG-25 is better than 0.07%, that the noticeable striping artifacts and residual noise are removed, and that the calibration accuracy of the side slither technique based on non-uniform features is superior to that of image statistics over the sensor's life span.
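The histogram-match step for relative gains can be sketched on synthetic striped data: during a slither-style acquisition every detector (column) sees the same scene statistics, so mapping each column through its empirical CDF into a reference detector's quantiles removes the striping. The scene model, gains, and offsets are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "side slither" data: all 8 detectors see the same scene
# statistics, but each has its own gain/offset striping (assumed values).
scene = rng.uniform(100, 200, size=(2000, 8))
gains = rng.normal(1.0, 0.05, 8)
offsets = rng.normal(0.0, 3.0, 8)
raw = scene * gains + offsets

def histogram_match(col, ref):
    # Map each value through its own empirical CDF into the reference
    # detector's quantile function (classic histogram matching).
    ranks = np.searchsorted(np.sort(col), col, side="right") / len(col)
    return np.quantile(ref, np.clip(ranks, 0, 1))

ref = raw[:, 0]  # detector 0 taken as the reference
flat = np.column_stack([histogram_match(raw[:, j], ref) for j in range(8)])

# Striping metric: spread of per-detector means, before vs after.
print(round(raw.mean(axis=0).std(), 2), round(flat.mean(axis=0).std(), 4))
```

After matching, every detector reproduces the reference detector's histogram, so the per-detector mean spread (a crude streaking metric) collapses.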
CSIR Research Space (South Africa)
Bidgood, Peter M
2017-01-01
55th AIAA Aerospace Sciences Meeting, 9-13 January 2017, Grapevine, Texas. AIAA 2017-0106. Copyright © 2017 by CSIR-South Africa. Published by the American Institute of Aeronautics and Astronautics. “Tunnel Balance Calibration Models Using Monte Carlo to Propagate Elemental Errors from Calibration to Installation.” Paper presented at the 51st AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition, Grapevine, Texas...
DEFF Research Database (Denmark)
Ejsing Jørgensen, Hans; Mikkelsen, T.; Streicher, J.
1997-01-01
A series of atmospheric aerosol diffusion experiments combined with lidar detection was conducted to evaluate and calibrate an existing retrieval algorithm for aerosol backscatter lidar systems. The calibration experiments made use of two (almost) identical mini-lidar systems for aerosol cloud...... detection to test the reproducibility and uncertainty of lidars. Lidar data were obtained from both single-ended and double-ended Lidar configurations. A backstop was introduced in one of the experiments and a new method was developed where information obtained from the backstop can be used in the inversion...... algorithm. Independent in-situ aerosol plume concentrations were obtained from a simultaneous tracer gas experiment with SF6, and comparisons with the two lidars were made. The study shows that the reproducibility of the lidars is within 15%, including measurements from both sides of a plume...
Travelling gradient thermocouple calibration
International Nuclear Information System (INIS)
Broomfield, G.H.
1975-01-01
A short discussion of the origins of the thermocouple EMF is used to re-introduce the idea that the Peltier and Thomson effects are indistinguishable from one another. Thermocouples may be viewed either as devices which generate an EMF at junctions or as integrators of EMFs developed in thermal gradients. The thermal-gradient view is considered the more appropriate: because of its better accord with theory and behaviour, the correct approach to calibration and to the investigation of service effects is immediately obvious. Inhomogeneities arise in thermocouples during manufacture and in service. The results of travelling gradient measurements are used to show that such effects are revealed with a resolution which depends on the length of the gradient, although they may be masked during simple immersion calibration. Proposed tests on thermocouples irradiated in a nuclear reactor are discussed.
Ultrasonic calibration assembly
International Nuclear Information System (INIS)
1981-01-01
Ultrasonic transducers for in-service inspection of nuclear reactor vessels have several problems associated with them which this invention seeks to overcome. The first is that of calibration or referencing a zero start point for the vertical axis of transducer movement to locate a weld defect. The second is that of verifying the positioning (vertically or at a predetermined angle). Thirdly there is the problem of ascertaining the speed per unit distance in the operating medium of the transducer beam prior to the actual inspection. The apparatus described is a calibration assembly which includes a fixed, generally spherical body having a surface for reflecting an ultrasonic beam from one of the transducers which can be moved until the reflection from the spherical body is the highest amplitude return signal indicating radial alignment from the body. (U.K.)
ALEPH: An optimal approach to Monte Carlo burn-up
International Nuclear Information System (INIS)
Verboomen, B.
2007-01-01
The incentive for creating Monte Carlo burn-up codes arises from their ability to provide the most accurate locally dependent spectra and flux values in realistic 3D geometries of any type. These capabilities, linked with the ability to handle nuclear data not only in its most basic but also its most complex form (namely continuous energy cross sections, detailed energy-angle correlations, multi-particle physics, etc.), could make Monte Carlo burn-up codes very powerful, especially for hybrid and advanced nuclear systems (for instance Accelerator Driven Systems). Still, such Monte Carlo burn-up codes have had limited success, mainly due to the rather long CPU time required to carry out very detailed and accurate calculations, even with modern computer technology. To work around this issue, users often have to reduce the number of nuclides in the evolution chains, or to consider longer irradiation time steps and/or larger spatial burn-up cells, jeopardizing the accuracy of the calculation in all cases. There should always be a balance between accuracy and what is (reasonably) achievable. So when the Monte Carlo simulation time is as low as possible, and if calculating the cross sections and flux values required for the depletion calculation takes little or no extra time compared to this simulation time, then we can actually be as accurate as we want. That is the optimum situation for Monte Carlo burn-up calculations. The ultimate goal of this work is to provide the Monte Carlo community with an efficient, flexible and easy-to-use alternative for Monte Carlo burn-up and activation calculations, which is what we did with ALEPH. ALEPH is a Monte Carlo burn-up code that uses ORIGEN 2.2 as the depletion module and any version of MCNP or MCNPX as the transport module. For now, ALEPH has been limited to updating microscopic cross section data only. By providing an easy-to-understand user interface, we also take the burden away from the user. For the user, it is as if he is...
Mesoscale hybrid calibration artifact
Tran, Hy D.; Claudet, Andre A.; Oliver, Andrew D.
2010-09-07
A mesoscale calibration artifact, also called a hybrid artifact, suitable for hybrid dimensional measurement, and a method for making the artifact. The hybrid artifact has structural characteristics that make it suitable for dimensional measurement in both vision-based systems and touch-probe-based systems. The hybrid artifact employs the intersection of bulk-micromachined planes to fabricate edges that are sharp to the nanometer level, and intersecting planes with crystal-lattice-defined angles.
Energy Technology Data Exchange (ETDEWEB)
H. H. Liu
2003-02-14
This report has documented the methodologies and the data used for developing rock property sets for three infiltration maps. Model calibration is necessary to obtain parameter values appropriate for the scale of the process being modeled. Although some hydrogeologic property data (prior information) are available, these data cannot be directly used to predict flow and transport processes because they were measured on scales smaller than those characterizing property distributions in models used for the prediction. Since model calibrations were done directly on the scales of interest, the upscaling issue was automatically considered. On the other hand, joint use of data and the prior information in inversions can further increase the reliability of the developed parameters compared with those for the prior information. Rock parameter sets were developed for both the mountain and drift scales because of the scale-dependent behavior of fracture permeability. Note that these parameter sets, except those for faults, were determined using the 1-D simulations. Therefore, they cannot be directly used for modeling lateral flow because of perched water in the unsaturated zone (UZ) of Yucca Mountain. Further calibration may be needed for two- and three-dimensional modeling studies. As discussed above in Section 6.4, uncertainties for these calibrated properties are difficult to accurately determine, because of the inaccuracy of simplified methods for this complex problem or the extremely large computational expense of more rigorous methods. One estimate of uncertainty that may be useful to investigators using these properties is the uncertainty used for the prior information. In most cases, the inversions did not change the properties very much with respect to the prior information. The Output DTNs (including the input and output files for all runs) from this study are given in Section 9.4.
Frederick Schauer; Barbara A. Spellman
2017-01-01
Objective: to study the notion and essence of legal judgment calibration, the possibilities of using it in law-enforcement activity, and the expenses and advantages of its use. Methods: a dialectic approach to the cognition of social phenomena, which enables their analysis in historical development and functioning in the context of the integrity of objective and subjective factors; this determined the choice of the following research methods: formal-legal, comparative-legal, sociolog...
Energy Technology Data Exchange (ETDEWEB)
T. Ghezzehej
2004-10-04
The purpose of this model report is to document the calibrated properties model that provides calibrated property sets for unsaturated zone (UZ) flow and transport process models (UZ models). The calibration of the property sets is performed through inverse modeling. This work followed, and was planned in, ''Technical Work Plan (TWP) for: Unsaturated Zone Flow Analysis and Model Report Integration'' (BSC 2004 [DIRS 169654], Sections 1.2.6 and 2.1.1.6). Direct inputs to this model report were derived from the following upstream analysis and model reports: ''Analysis of Hydrologic Properties Data'' (BSC 2004 [DIRS 170038]); ''Development of Numerical Grids for UZ Flow and Transport Modeling'' (BSC 2004 [DIRS 169855]); ''Simulation of Net Infiltration for Present-Day and Potential Future Climates'' (BSC 2004 [DIRS 170007]); ''Geologic Framework Model'' (GFM2000) (BSC 2004 [DIRS 170029]). Additionally, this model report incorporates errata of the previous version and closure of the Key Technical Issue agreement TSPAI 3.26 (Section 6.2.2 and Appendix B), and it is revised for improved transparency.
Efficient quadrature rules for illumination integrals from quasi Monte Carlo to Bayesian Monte Carlo
Marques, Ricardo; Santos, Luís Paulo; Bouatouch, Kadi
2015-01-01
Rendering photorealistic images is a costly process which can take up to several days in the case of high quality images. In most cases, the task of sampling the incident radiance function to evaluate the illumination integral is responsible for an important share of the computation time. Therefore, to reach acceptable rendering times, the illumination integral must be evaluated using a limited set of samples. Such a restriction raises the question of how to obtain the most accurate approximation possible with such a limited set of samples. One must thus ensure that sampling produces the highe
Improved Calibration Shows Images True Colors
2015-01-01
Innovative Imaging and Research, located at Stennis Space Center, used a single SBIR contract with the center to build a large-scale integrating sphere, capable of calibrating a whole array of cameras simultaneously, at a fraction of the usual cost for such a device. Through the use of LEDs, the company also made the sphere far more efficient than existing products and able to mimic sunlight.
Gamma ray energy loss spectra simulation in NaI detectors with the Monte Carlo method
International Nuclear Information System (INIS)
Vieira, W.J.
1982-01-01
With the aim of studying and applying the Monte Carlo method, a computer code was developed to calculate the pulse height spectra and detector efficiencies for gamma rays incident on NaI(Tl) crystals. The basic detector processes in NaI(Tl) detectors are given, together with an outline of Monte Carlo methods and a general review of relevant published works. A detailed description of the application of Monte Carlo methods to γ-ray detection in NaI(Tl) detectors is given. Comparisons are made with published calculated and experimental data. (Author)
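A minimal sketch of the kind of sampling such a code performs: for a photon normally incident on a crystal of thickness d, the probability of at least one interaction is 1 − exp(−μd), which an analog Monte Carlo estimates by sampling exponential free paths. The attenuation coefficient below (0.3 cm⁻¹, roughly NaI at 662 keV) and the 3-inch thickness are assumed illustrative values, not taken from the paper.

```python
import math
import random

def interaction_probability_mc(mu_cm, thickness_cm, n=100_000, seed=5):
    """Analog MC: sample an exponential free path for each incident photon
    and count those that interact inside the crystal (path < thickness)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if rng.expovariate(mu_cm) < thickness_cm)
    return hits / n

mu, d = 0.3, 7.62                     # assumed: ~NaI at 662 keV, 3-inch crystal
mc = interaction_probability_mc(mu, d)
exact = 1.0 - math.exp(-mu * d)       # closed form for comparison
```

With 100 000 histories the MC estimate agrees with the closed form to well under a percent, which is the basic sanity check any such detector code is run through before tackling full pulse-height spectra.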
Microwave transport in EBT distribution manifolds using Monte Carlo ray-tracing techniques
International Nuclear Information System (INIS)
Lillie, R.A.; White, T.L.; Gabriel, T.A.; Alsmiller, R.G. Jr.
1983-01-01
Ray-tracing Monte Carlo calculations have been carried out using an existing Monte Carlo radiation transport code to obtain estimates of the microwave power exiting the torus coupling links in EBT microwave manifolds. The microwave power loss and polarization at surface reflections were accounted for by treating the microwaves as plane waves reflecting off plane surfaces. Agreement on the order of 10% was obtained between the measured and calculated output power distribution for an existing EBT-S toroidal manifold. A cost-effective iterative procedure utilizing the Monte Carlo history data was implemented to predict design changes which could produce increased manifold efficiency and improved output power uniformity.
Calibration of Underwater Sound Transducers
H.R.S. Sastry
1983-01-01
The techniques of calibration of underwater sound transducers for far-field, near-field and closed-environment conditions are reviewed in this paper. The design of an acoustic calibration tank is mentioned. The facilities available at the Naval Physical & Oceanographic Laboratory, Cochin, for calibration of transducers are also listed.
BESIII online electronics calibration system
International Nuclear Information System (INIS)
Wang Liang; Lei Guangkun; Zhu Kejun; Zhao Jingwei; Li Fei
2006-01-01
This paper introduces the components of the BESIII DAQ system. It describes the relationships among the components of the online electronics calibration, the mechanism of data flow and message flow, and the implementation of the system functions. When BESIII is running, the system will be used to calibrate electronics channels online and to provide the calibration parameters for adjusting electronics data. (authors)
Present Status and Extensions of the Monte Carlo Performance Benchmark
Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.
2014-06-01
The NEA Monte Carlo Performance benchmark started in 2011, aiming to monitor over the years the ability to perform a full-size Monte Carlo reactor core calculation with a detailed power production for each fuel pin with axial distribution. This paper gives an overview of the contributed results thus far. It shows that reaching a statistical accuracy of 1% for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters with common-type computer nodes. On true supercomputers, however, the speedup of parallel calculations continues to increase up to large numbers of processor cores. More experience is needed from calculations on true supercomputers using large numbers of processors in order to predict whether the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark test are well suited for further investigations of full-core Monte Carlo calculations, and a need is felt for testing issues other than computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems: for evaluating fission source convergence for a system with a high dominance ratio, for coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities, and for studying the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition will be discussed.
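The "100 billion histories for 1%" figure follows from the 1/√N law for Monte Carlo tallies: a tally that scores with probability p has relative error ≈ 1/√(Np). A toy Bernoulli tally (the scoring probability below is illustrative, not a benchmark value) shows the scaling:

```python
import math
import random

def tally_rel_error(n_histories, p_score=0.01, seed=1):
    """Relative statistical error of a Bernoulli tally after n histories."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_histories) if rng.random() < p_score)
    mean = hits / n_histories
    sigma = math.sqrt(mean * (1.0 - mean) / n_histories)  # std error of the mean
    return sigma / mean

# 100x more histories -> ~10x smaller relative error (1/sqrt(N) scaling)
coarse = tally_rel_error(10_000)     # ~10% relative error
fine = tally_rel_error(1_000_000)    # ~1% relative error
```

Scaling this up, a small fuel-pin zone intercepting a tiny fraction of all histories is exactly why the full-core benchmark needs on the order of 10¹¹ histories for 1% per-pin statistics.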
A separable shadow Hamiltonian hybrid Monte Carlo method
Sweet, Christopher R.; Hampton, Scott S.; Skeel, Robert D.; Izaguirre, Jesús A.
2009-11-01
Hybrid Monte Carlo (HMC) is a rigorous sampling method that uses molecular dynamics (MD) as a global Monte Carlo move. The acceptance rate of HMC decays exponentially with system size. The shadow hybrid Monte Carlo (SHMC) was previously introduced to reduce this performance degradation by sampling instead from the shadow Hamiltonian defined for MD when using a symplectic integrator. SHMC's performance is limited by the need to generate momenta for the MD step from a nonseparable shadow Hamiltonian. We introduce the separable shadow Hamiltonian hybrid Monte Carlo (S2HMC) method based on a formulation of the leapfrog/Verlet integrator that corresponds to a separable shadow Hamiltonian, which allows efficient generation of momenta. S2HMC gives the acceptance rate of a fourth order integrator at the cost of a second-order integrator. Through numerical experiments we show that S2HMC consistently gives a speedup greater than two over HMC for systems with more than 4000 atoms for the same variance. By comparison, SHMC gave a maximum speedup of only 1.6 over HMC. S2HMC has the additional advantage of not requiring any user parameters beyond those of HMC. S2HMC is available in the program PROTOMOL 2.1. A Python version, adequate for didactic purposes, is also in MDL (http://mdlab.sourceforge.net/s2hmc).
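For readers unfamiliar with the baseline method, a minimal HMC sketch on a 1D Gaussian target (U(q) = q²/2, so ∇U = q) shows the structure S2HMC builds on: leapfrog integration of Hamiltonian dynamics followed by a Metropolis test on the energy error. This toy is not S2HMC itself, which additionally draws momenta from a separable shadow Hamiltonian; the step size and trajectory length below are illustrative.

```python
import math
import random

def hmc(n_samples, step=0.2, n_leap=10, seed=0):
    """Plain hybrid Monte Carlo on a standard Gaussian: U(q) = q^2/2."""
    rng = random.Random(seed)
    q, samples, accepted = 0.0, [], 0
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)            # fresh momentum draw
        q_old, h_old = q, 0.5 * (q * q + p * p)
        p -= 0.5 * step * q                # leapfrog: half kick ...
        for i in range(n_leap):
            q += step * p                  # ... full drift ...
            if i < n_leap - 1:
                p -= step * q              # ... full kick ...
        p -= 0.5 * step * q                # ... final half kick
        if rng.random() < math.exp(-(0.5 * (q * q + p * p) - h_old)):
            accepted += 1                  # accept the trajectory endpoint
        else:
            q = q_old                      # Metropolis rejection: stay put
        samples.append(q)
    return samples, accepted / n_samples

samples, acc_rate = hmc(5000)
```

Because the leapfrog integrator nearly conserves a shadow Hamiltonian, the energy error, and hence the rejection rate, stays small for this toy; the abstract's point is that in large MD systems this error grows with system size, which is what SHMC and S2HMC mitigate.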
Directory of Open Access Journals (Sweden)
Alía Rubén García
2018-01-01
We describe an approach to calibrate Single Event Effect (SEE)-based detectors in monoenergetic fields and apply the resulting semi-empirical responses to more general mixed-field cases in which a broad variety of particle species and energy spectra are present. The calibration of the response functions is based both on experimental proton (30–200 MeV) and neutron (5–300 MeV) data and on considerations derived from Monte Carlo simulations using the FLUKA Monte Carlo code. The application environments include the quasi-monoenergetic neutrons at RCNP, the atmospheric-like VESUVIO spallation spectrum and the CHARM high-energy accelerator test facility. The agreement between the mixed-field response and that predicted through the mono-energetic calibration is within ±30% for the broad variety of cases considered, and is thus regarded as highly successful for mixed-field monitoring applications.
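Applying a calibrated response to a mixed field amounts to folding a cross-section curve with the particle spectrum, R = Σ σ(E)·φ(E)·ΔE over energy bins. The Weibull parameterization below is a common SEE convention, but its parameters and the toy spectrum are illustrative assumptions, not the paper's fitted values.

```python
import math

def weibull_sigma(e_mev, sigma_sat=1e-14, e_onset=1.0, width=30.0, shape=1.5):
    """Illustrative Weibull SEE cross section (cm^2/bit) versus energy (MeV)."""
    if e_mev <= e_onset:
        return 0.0
    return sigma_sat * (1.0 - math.exp(-(((e_mev - e_onset) / width) ** shape)))

def see_rate(spectrum):
    """Fold the response with a binned differential flux spectrum given as
    [(E_MeV, phi, dE), ...], phi in particles/cm^2/s/MeV -> events/s/bit."""
    return sum(weibull_sigma(e) * phi * de for e, phi, de in spectrum)

# toy 3-bin spectrum: (energy, differential flux, bin width)
spectrum = [(10.0, 1e2, 10.0), (50.0, 5e1, 10.0), (150.0, 1e1, 100.0)]
rate = see_rate(spectrum)
```

The mono-energetic calibration pins down σ(E); the folding then predicts the counter's rate in any environment whose spectrum is known, which is the comparison the abstract reports agreeing to within ±30%.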
Accelerated GPU based SPECT Monte Carlo simulations.
Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris
2016-06-07
Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99mTc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency.
International Nuclear Information System (INIS)
Xiao Gang; Li Zhizhong
2004-01-01
Based on the integral equation describing the life-history of a Markov system, six types of estimators of the current unavailability of a Markov system with dependent repair are propounded. Combined with biased sampling of the system's state transition times, six Monte Carlo methods for estimating the current unavailability are given. Two numerical examples are given to compare the variances and efficiencies of the six Monte Carlo methods. (authors)
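For the simplest case, a single repairable component with failure rate λ and repair rate μ, a crude analog Monte Carlo estimate of the point unavailability can be checked against the closed form q(t) = λ/(λ+μ)·(1 − e^{−(λ+μ)t}). The rates below are illustrative; the paper's six estimators are variance-reduced refinements of this analog scheme, not reproduced here.

```python
import math
import random

def mc_unavailability(lam, mu, t, n_hist=20_000, seed=3):
    """Analog MC: simulate up/down transitions of one repairable component
    and score the fraction of histories found down at time t."""
    rng = random.Random(seed)
    down = 0
    for _ in range(n_hist):
        clock, state = 0.0, 0            # state 0 = up, 1 = down
        while True:
            clock += rng.expovariate(lam if state == 0 else mu)
            if clock > t:
                break                    # time t falls inside this sojourn
            state = 1 - state            # a failure or repair completes
        down += state
    return down / n_hist

lam, mu, t = 1e-3, 1e-2, 500.0           # illustrative rates and mission time
q_mc = mc_unavailability(lam, mu, t)
q_exact = lam / (lam + mu) * (1.0 - math.exp(-(lam + mu) * t))
```

The analog scheme wastes most histories on components that never fail before t; biasing the sampled transition times toward failure and reweighting, as the abstract describes, is what recovers efficiency for highly reliable systems.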
The determination of beam quality correction factors: Monte Carlo simulations and measurements.
González-Castaño, D M; Hartmann, G H; Sánchez-Doblado, F; Gómez, F; Kapsch, R-P; Pena, J; Capote, R
2009-08-07
Modern dosimetry protocols are based on the use of ionization chambers provided with a calibration factor in terms of absorbed dose to water. The basic formula to determine the absorbed dose at a user's beam contains the well-known beam quality correction factor that is required whenever the quality of radiation used at calibration differs from that of the user's radiation. The dosimetry protocols describe the whole ionization chamber calibration procedure and include tabulated beam quality correction factors which refer to 60Co gamma radiation used as the calibration quality. They have been calculated for a series of ionization chambers and radiation qualities based on formulae which are also described in the protocols. In the case of high-energy photon beams, the relative standard uncertainty of the beam quality correction factor is estimated to amount to 1%. In the present work, two alternative methods to determine beam quality correction factors are described: Monte Carlo simulation using the EGSnrc system, and an experimental method based on a comparison with a reference chamber. Both Monte Carlo calculations and ratio measurements were carried out for nine chambers at several radiation beams. Four chamber types are not included in the current dosimetry protocols. Beam quality corrections for the reference chamber at two beam qualities were also measured using a calorimeter at the PTB Primary Standards Dosimetry Laboratory. Good agreement between the Monte Carlo calculated (1% uncertainty) and measured (0.5% uncertainty) beam quality correction factors was obtained. Based on these results we propose that beam quality correction factors can be generated both by measurements and by Monte Carlo simulations with an uncertainty at least comparable to that given in current dosimetry protocols.
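The "basic formula" the abstract refers to is D_w,Q = M_Q · N_D,w · k_Q, with the independent relative uncertainties combined in quadrature. The sketch below uses purely hypothetical numbers for the chamber reading, calibration factor and k_Q; they are not values from the paper or from any protocol table.

```python
import math

def absorbed_dose_to_water(m_q, n_dw, k_q):
    """D_w,Q = M_Q * N_D,w * k_Q: corrected chamber reading (nC) times the
    Co-60 calibration factor (mGy/nC) times the beam quality correction."""
    return m_q * n_dw * k_q

def combined_rel_uncertainty(*rel_u):
    """Combine independent relative standard uncertainties in quadrature."""
    return math.sqrt(sum(u * u for u in rel_u))

# hypothetical chamber and beam, for illustration only
dose_mgy = absorbed_dose_to_water(m_q=12.5, n_dw=53.7, k_q=0.992)
```

Since k_Q enters the product directly, cutting its uncertainty from the tabulated 1% to the measured 0.5% propagates one-for-one into the final dose uncertainty, which is the practical payoff of the paper's two alternative determination methods.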
Effect of phantom dimension variation on Monte Carlo simulation speed and precision
International Nuclear Information System (INIS)
Lin Hui; Xu Yuanying; Xu Liangfeng; Li Guoli; Jiang Jia
2007-01-01
There is a correlation between Monte Carlo simulation speed and the phantom dimensions. The effect of the phantom dimensions on Monte Carlo simulation speed and precision was studied with the fast Monte Carlo code DPM. The results showed that when the thickness of the phantom was reduced, the efficiency increased exponentially without compromising precision, except in the tail region. When the width of the phantom was reduced to outside the penumbra, the effect on the efficiency was negligible; however, when it was reduced to within the penumbra, the efficiency increased to some extent without loss of precision. This result was applied to a clinical head case, and a remarkable increase in efficiency was achieved. (authors)
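Trade-offs of this kind are conventionally quantified with the Monte Carlo figure of merit, FOM = 1/(σ²T), where σ is the relative error and T the CPU time; the abstract does not state this metric explicitly, so the sketch below is a standard-definition illustration with made-up numbers.

```python
def figure_of_merit(rel_sigma, cpu_time_s):
    """Monte Carlo efficiency: FOM = 1 / (sigma^2 * T).  Roughly constant
    for a given tally as histories accumulate; higher is better."""
    return 1.0 / (rel_sigma ** 2 * cpu_time_s)

# same 1% precision, but the trimmed phantom runs in half the CPU time
fom_full = figure_of_merit(0.01, 100.0)   # full-size phantom (illustrative)
fom_trim = figure_of_merit(0.01, 50.0)    # reduced thickness (illustrative)
```

Halving the run time at unchanged variance doubles the FOM, which is the sense in which trimming the phantom "increases efficiency without precision loss" in the study above.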
Cuartel San Carlos. Yacimiento veterano
Directory of Open Access Journals (Sweden)
Mariana Flores
2007-01-01
The Cuartel San Carlos is a national historic monument (1986) dating from the end of the 18th century (1785-1790), marked by various adversities during its construction and by having withstood the earthquakes of 1812 and 1900. In 2006, the body responsible for its custody, the Instituto de Patrimonio Cultural of the Ministry of Culture, carried out three stages of archaeological exploration, covering the rear courtyard, the central courtyard, and the east and west wings of the building. This paper reviews the analysis of the archaeological documentation obtained at the site through this project, called EACUSAC (Estudio Arqueológico del Cuartel San Carlos), which also represents the third campaign carried out at the site. The importance of this historic site lies in its role in the events that gave rise to power struggles during the emergence of the Republic and in the political events of the 20th century. Likewise, a broad sample of archaeological materials was found at the site that documents a style of everyday military life, as well as the internal social dynamics that took place in the San Carlos as a strategic place for the defense of the different regimes the country passed through, from the era of Spanish imperialism to the present day.
Carlos Battilana: Profesor, Gestor, Amigo
Directory of Open Access Journals (Sweden)
José Pacheco
2009-12-01
The Editorial Committee of Anales has lost one of its most distinguished members. A brilliant teacher at our Faculty, Carlos Alberto Battilana Guanilo (1945-2009) knew how to transmit knowledge and capture the attention of his audiences, whether young students or his no-longer-so-young contemporaries. He drew his students toward continuing education and research, and he engaged distinguished physicians in forming and leading groups interested in science and friendship. His teaching vocation linked him to medical schools, academies and scientific societies, where he coordinated fondly remembered courses and congresses. His scientific output was devoted to nephrology, immunology, cancer, and the costs of medical treatment. His management and leadership abilities, evident since his student days, allowed him to become regional director of a highly prestigious pharmaceutical laboratory, to organize a medical school, and later to serve as dean of the faculty of health sciences of that private university. Carlos was instrumental in Anales attaining a place of privilege among Peruvian biomedical journals. In the profile we publish here, we attempt to summarize concisely the career of Carlos Battilana, weeks after his departure without return.
Energy Technology Data Exchange (ETDEWEB)
John Schabron; Joseph Rovani; Mark Sanderson
2008-02-29
Mercury continuous emissions monitoring systems (CEMS) are being implemented in over 800 coal-fired power plant stacks. The power industry desires to conduct at least a full year of monitoring before the formal monitoring and reporting requirement begins on January 1, 2009. It is important for the industry to have available reliable, turnkey equipment from CEM vendors. Western Research Institute (WRI) is working closely with the Electric Power Research Institute (EPRI), the National Institute of Standards and Technology (NIST), and the Environmental Protection Agency (EPA) to facilitate the development of the experimental criteria for a NIST traceability protocol for dynamic elemental mercury vapor generators. The generators are used to calibrate mercury CEMs at power plant sites. The Clean Air Mercury Rule (CAMR), which was published in the Federal Register on May 18, 2005, requires that calibration be performed with NIST-traceable standards (Federal Register 2007). Traceability procedures will be defined by EPA. An initial draft traceability protocol was issued by EPA in May 2007 for comment. In August 2007, EPA issued an interim traceability protocol for elemental mercury generators (EPA 2007). The protocol is based on the actual analysis of the output of each calibration unit at several concentration levels, ranging initially from about 2-40 µg/m³ elemental mercury, and in the future down to 0.2 µg/m³, and this analysis will be directly traceable to analyses by NIST. The document is divided into two separate sections. The first deals with the qualification of generators by the vendors for use in mercury CEM calibration. The second describes the procedure that the vendors must use to certify the generator models that meet the qualification specifications. The NIST-traceable certification is performance based, traceable to analysis using isotope dilution inductively coupled plasma/mass spectrometry performed by NIST in Gaithersburg, MD.
Elsheikh, Ahmed H.
2014-02-01
An efficient Bayesian calibration method based on the nested sampling (NS) algorithm and non-intrusive polynomial chaos method is presented. Nested sampling is a Bayesian sampling algorithm that builds a discrete representation of the posterior distributions by iteratively re-focusing a set of samples to high likelihood regions. NS allows representing the posterior probability density function (PDF) with a smaller number of samples and reduces the curse of dimensionality effects. The main difficulty of the NS algorithm is in the constrained sampling step which is commonly performed using a random walk Markov Chain Monte-Carlo (MCMC) algorithm. In this work, we perform a two-stage sampling using a polynomial chaos response surface to filter out rejected samples in the Markov Chain Monte-Carlo method. The combined use of nested sampling and the two-stage MCMC based on approximate response surfaces provides significant computational gains in terms of the number of simulation runs. The proposed algorithm is applied for calibration and model selection of subsurface flow models. © 2013.
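The two-stage idea can be sketched as delayed-acceptance Metropolis-Hastings: a cheap surrogate log-posterior screens each proposal, and the expensive model is evaluated only for proposals that survive stage one; the stage-two test corrects for the surrogate's bias so the exact posterior is still targeted. The Gaussian target and shifted-Gaussian surrogate below are illustrative stand-ins for the subsurface flow model and the polynomial chaos response surface, and this sketch omits the nested-sampling outer loop entirely.

```python
import math
import random

def two_stage_mh(log_post, log_surr, x0, n_iter=2000, step=0.8, seed=7):
    """Delayed-acceptance MH: stage 1 tests against the surrogate, stage 2
    corrects with the exact posterior; returns the chain and the number of
    expensive evaluations actually performed."""
    rng = random.Random(seed)
    x, lp, ls = x0, log_post(x0), log_surr(x0)
    chain, n_full = [], 0
    for _ in range(n_iter):
        y = x + rng.gauss(0.0, step)
        sy = log_surr(y)
        if math.log(rng.random()) < sy - ls:        # stage 1: surrogate screen
            n_full += 1
            ly = log_post(y)                        # stage 2: expensive model
            if math.log(rng.random()) < (ly - lp) - (sy - ls):
                x, lp, ls = y, ly, sy               # accept
        chain.append(x)
    return chain, n_full

log_exact = lambda x: -0.5 * x * x                  # stand-in expensive model
log_surrogate = lambda x: -0.5 * (x - 0.1) ** 2     # slightly biased surrogate
chain, n_full = two_stage_mh(log_exact, log_surrogate, x0=0.0)
```

Every proposal rejected at stage one saves a full model run, which is where the "significant computational gains in terms of the number of simulation runs" come from; the stage-two ratio divides out the surrogate so the surrogate's bias costs efficiency, never correctness.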
Directory of Open Access Journals (Sweden)
Rafael Maya
1979-04-01
Among the poets of the Centenario, Luis Carlos López enjoyed great popularity abroad from the publication of his first book. I believe his work drew the attention of philosophers such as Unamuno and, if I am not mistaken, Darío referred to it in flattering terms. In Colombia it has been praised hyperbolically by some, while others grant it no great merit.
Antitwilight II: Monte Carlo simulations.
Richtsmeier, Steven C; Lynch, David K; Dearborn, David S P
2017-07-01
For this paper, we employ the Monte Carlo scene (MCScene) radiative transfer code to elucidate the underlying physics giving rise to the structure and colors of the antitwilight, i.e., twilight opposite the Sun. MCScene calculations successfully reproduce colors and spatial features observed in videos and still photos of the antitwilight taken under clear, aerosol-free sky conditions. Through simulations, we examine the effects of solar elevation angle, Rayleigh scattering, molecular absorption, aerosol scattering, multiple scattering, and surface reflectance on the appearance of the antitwilight. We also compare MCScene calculations with predictions made by the MODTRAN radiative transfer code for a solar elevation angle of +1°.
Carlos Restrepo. Un verdadero Maestro
Pelayo Correa
2009-01-01
Carlos Restrepo was the first professor of Pathology and an illustrious member of the group of pioneers who founded the Faculty of Medicine of the Universidad del Valle. These pioneers converged on Cali in the 1950s, possessed of a renovating and creative spirit that undertook, with great success, the task of changing the academic culture of the Valle del Cauca. They found a peaceful society that enjoyed the generosity of its surroundings, with no desire to break with centuries-old traditions...
Monte Carlo techniques in radiation therapy
Verhaegen, Frank
2013-01-01
Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book-the first of its kind-addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...
Wildi, F.; Chazelas, B.; Deline, A.; Sarajlic, M.; Sordet, M.
2017-09-01
CHEOPS is an ESA Class S Mission aiming at the characterization of exoplanets through the precise measurement of their radii, using the transit method [1]. To achieve this goal, the payload is designed to be a high-precision "absolute" photometer, looking at one star at a time. It will be able to cover a large fraction of the sky by repointing. Its launch is expected at the end of 2017 [2, this conference]. CHEOPS' main science is the measurement of transits of exoplanets with radii ranging from 1 to 6 Earth radii orbiting bright stars. The photometric stability required to reach this goal is 20 ppm in 6 hours for a 9th magnitude star. CHEOPS' only instrument is a Ritchey-Chretien style telescope with a 300 mm effective aperture diameter, which provides a defocussed image of the target star on a single frame-transfer backside-illuminated CCD detector cooled to -40°C and stabilized to within 10 mK [2]. CHEOPS being in a LEO, it is equipped with a high-performance baffle. The spacecraft platform provides the required pointing stability. In the rest of this article we will refer to the only CHEOPS instrument simply as "CHEOPS". Its behavior will be calibrated thoroughly on the ground, and only a small subset of the calibrations can be redone in flight. The main focuses of the calibration are the stability of the photonic gain, its sensitivity to environmental variations, and the flat field, which has to be known to a precision better than 0.1%.
Novel Quantum Monte Carlo Approaches for Quantum Liquids
Rubenstein, Brenda M.
While the eventual hope is to apply this algorithm to the exploration of yet unidentified high-pressure, low-temperature phases of hydrogen, I employ this algorithm to determine whether or not quantum hard spheres can form a low-temperature bcc solid if exchange is not taken into account. In the final chapter of this thesis, I use Path Integral Monte Carlo once again to explore whether glassy para-hydrogen exhibits superfluidity. Physicists have long searched for ways to coax hydrogen into becoming a superfluid. I present evidence that, while glassy hydrogen does not crystallize at the temperatures at which hydrogen might become a superfluid, it nevertheless does not exhibit superfluidity. This is because the average binding energy per p-H2 molecule poses a severe barrier to exchange regardless of whether the system is crystalline. All in all, this work extends the reach of Quantum Monte Carlo methods to new systems and brings the power of existing methods to bear on new problems. Portions of this work have been published in Rubenstein, PRE (2010) and Rubenstein, PRA (2012) [167;169]. Other papers not discussed here published during my Ph.D. include Rubenstein, BPJ (2008) and Rubenstein, PRL (2012) [166;168]. The work in Chapters 6 and 7 is currently unpublished. [166] Brenda M. Rubenstein, Ivan Coluzza, and Mark A. Miller. Controlling the folding and substrate-binding of proteins using polymer brushes. Physical Review Letters, 108(20):208104, May 2012. [167] Brenda M. Rubenstein, J.E. Gubernatis, and J.D. Doll. Comparative Monte Carlo efficiency by Monte Carlo analysis. Physical Review E, 82(3):036701, September 2010. [168] Brenda M. Rubenstein and Laura J. Kaufman. The role of extracellular matrix in glioma invasion: A cellular Potts model approach. Biophysical Journal, 95(12):5661--5680, December 2008. [169] Brenda M. Rubenstein, Shiwei Zhang, and David R. Reichman. Finite-temperature auxiliary-field quantum Monte Carlo for Bose-Fermi mixtures. Physical Review A, 86
Self-calibrating interferometer
International Nuclear Information System (INIS)
Nussmeier, T.A.
1982-01-01
A self-calibrating interferometer is disclosed which forms therein a pair of Michelson interferometers, with one beam length of each Michelson interferometer being controlled by a common phase shifter. The transfer function measured from the phase shifter to either of a pair of detectors is sinusoidal, with a full cycle for each half wavelength of phase shifter travel. The phase difference between these two sinusoidal detector outputs represents the optical phase difference between a path of known distance and a path of unknown distance.
Mean field simulation for Monte Carlo integration
Del Moral, Pierre
2013-01-01
In the last three decades, there has been a dramatic increase in the use of interacting particle methods as a powerful tool in real-world applications of Monte Carlo simulation in computational physics, population biology, computer sciences, and statistical machine learning. Ideally suited to parallel and distributed computation, these advanced particle algorithms include nonlinear interacting jump diffusions; quantum, diffusion, and resampled Monte Carlo methods; Feynman-Kac particle models; genetic and evolutionary algorithms; sequential Monte Carlo methods; and adaptive and interacting Markov chain Monte Carlo methods.
International Nuclear Information System (INIS)
Schrader, Heinrich
2000-01-01
The activity calibration of the ionization-chamber secondary standard measuring systems at the PTB is described. The measurement results of a Centronic IG12/A20, a Vinten ISOCAL IV, and a radionuclide calibrator chamber for nuclear medicine applications are discussed, their energy-dependent efficiency curves are established, and the consistency is checked using recently evaluated radionuclide decay data. Criteria for evaluating and transferring calibration factors (or efficiencies) are given.
Calibration aspects of the JEM-EUSO mission
Adams, J. H.; Ahmad, S.; Albert, J.-N.; Allard, D.; Anchordoqui, L.; Andreev, V.; Anzalone, A.; Arai, Y.; Asano, K.; Ave Pernas, M.; Baragatti, P.; Barrillon, P.; Batsch, T.; Bayer, J.; Bechini, R.; Belenguer, T.; Bellotti, R.; Belov, K.; Berlind, A. A.; Bertaina, M.; Biermann, P. L.; Biktemerova, S.; Blaksley, C.; Blanc, N.; Błȩcki, J.; Blin-Bondil, S.; Blümer, J.; Bobik, P.; Bogomilov, M.; Bonamente, M.; Briggs, M. S.; Briz, S.; Bruno, A.; Cafagna, F.; Campana, D.; Capdevielle, J.-N.; Caruso, R.; Casolino, M.; Cassardo, C.; Castellinic, G.; Catalano, C.; Catalano, G.; Cellino, A.; Chikawa, M.; Christl, M. J.; Cline, D.; Connaughton, V.; Conti, L.; Cordero, G.; Crawford, H. J.; Cremonini, R.; Csorna, S.; Dagoret-Campagne, S.; de Castro, A. J.; De Donato, C.; de la Taille, C.; De Santis, C.; del Peral, L.; Dell'Oro, A.; De Simone, N.; Di Martino, M.; Distratis, G.; Dulucq, F.; Dupieux, M.; Ebersoldt, A.; Ebisuzaki, T.; Engel, R.; Falk, S.; Fang, K.; Fenu, F.; Fernández-Gómez, I.; Ferrarese, S.; Finco, D.; Flamini, M.; Fornaro, C.; Franceschi, A.; Fujimoto, J.; Fukushima, M.; Galeotti, P.; Garipov, G.; Geary, J.; Gelmini, G.; Giraudo, G.; Gonchar, M.; González Alvarado, C.; Gorodetzky, P.; Guarino, F.; Guzmán, A.; Hachisu, Y.; Harlov, B.; Haungs, A.; Hernández Carretero, J.; Higashide, K.; Ikeda, D.; Ikeda, H.; Inoue, N.; Inoue, S.; Insolia, A.; Isgrò, F.; Itow, Y.; Joven, E.; Judd, E. G.; Jung, A.; Kajino, F.; Kajino, T.; Kaneko, I.; Karadzhov, Y.; Karczmarczyk, J.; Karus, M.; Katahira, K.; Kawai, K.; Kawasaki, Y.; Keilhauer, B.; Khrenov, B. A.; Kim, J.-S.; Kim, S.-W.; Kim, S.-W.; Kleifges, M.; Klimov, P. A.; Kolev, D.; Kreykenbohm, I.; Kudela, K.; Kurihara, Y.; Kusenko, A.; Kuznetsov, E.; Lacombe, M.; Lachaud, C.; Lee, J.; Licandro, J.; Lim, H.; López, F.; Maccarone, M. 
C.; Mannheim, K.; Maravilla, D.; Marcelli, L.; Marini, A.; Martinez, O.; Masciantonio, G.; Mase, K.; Matev, R.; Medina-Tanco, G.; Mernik, T.; Miyamoto, H.; Miyazaki, Y.; Mizumoto, Y.; Modestino, G.; Monaco, A.; Monnier-Ragaigne, D.; Morales de los Ríos, J. A.; Moretto, C.; Morozenko, V. S.; Mot, B.; Murakami, T.; Murakami, M. Nagano; Nagata, M.; Nagataki, S.; Nakamura, T.; Napolitano, T.; Naumov, D.; Nava, R.; Neronov, A.; Nomoto, K.; Nonaka, T.; Ogawa, T.; Ogio, S.; Ohmori, H.; Olinto, A. V.; Orleański, P.; Osteria, G.; Panasyuk, M. I.; Parizot, E.; Park, I. H.; Park, H. W.; Pastircak, B.; Patzak, T.; Paul, T.; Pennypacker, C.; Perez Cano, S.; Peter, T.; Picozza, P.; Pierog, T.; Piotrowski, L. W.; Piraino, S.; Plebaniak, Z.; Pollini, A.; Prat, P.; Prévôt, G.; Prieto, H.; Putis, M.; Reardon, P.; Reyes, M.; Ricci, M.; Rodríguez, I.; Rodríguez Frías, M. D.; Ronga, F.; Roth, M.; Rothkaehl, H.; Roudil, G.; Rusinov, I.; Rybczyński, M.; Sabau, M. D.; Sáez-Cano, G.; Sagawa, H.; Saito, A.; Sakaki, N.; Sakata, M.; Salazar, H.; Sánchez, S.; Santangelo, A.; Santiago Crúz, L.; Sanz Palomino, M.; Saprykin, O.; Sarazin, F.; Sato, H.; Sato, M.; Schanz, T.; Schieler, H.; Scotti, V.; Segreto, A.; Selmane, S.; Semikoz, D.; Serra, M.; Sharakin, S.; Shibata, T.; Shimizu, H. M.; Shinozaki, K.; Shirahama, T.; Siemieniec-Oziȩbło, G.; Silva López, H. H.; Sledd, J.; Słomińska, K.; Sobey, A.; Sugiyama, T.; Supanitsky, D.; Suzuki, M.; Szabelska, B.; Szabelski, J.; Tajima, F.; Tajima, N.; Tajima, T.; Takahashi, Y.; Takami, H.; Takeda, M.; Takizawa, Y.; Tenzer, C.; Tibolla, O.; Tkachev, L.; Tokuno, H.; Tomida, T.; Tone, N.; Toscano, S.; Trillaud, F.; Tsenov, R.; Tsunesada, Y.; Tsuno, K.; Tymieniecka, T.; Uchihori, Y.; Unger, M.; Vaduvescu, O.; Valdés-Galicia, J. F.; Vallania, P.; Valore, L.; Vankova, G.; Vigorito, C.; Villaseñor, L.; von Ballmoos, P.; Wada, S.; Watanabe, J.; Watanabe, S.; Watts, J.; Weber, M.; Weiler, T. 
J.; Wibig, T.; Wiencke, L.; Wille, M.; Wilms, J.; Włodarczyk, Z.; Yamamoto, T.; Yamamoto, Y.; Yang, J.; Yano, H.; Yashin, I. V.; Yonetoku, D.; Yoshida, K.; Yoshida, S.; Young, R.; Zotov, M. Yu.; Zuccaro Marchi, A.
2015-11-01
The JEM-EUSO telescope will be, after calibration, a very accurate instrument which yields the number of received photons from the number of measured photo-electrons. The project is in phase A (demonstration of the concept), with prototype instruments already operating, i.e. many parts of the instrument have been constructed and tested. Calibration is a crucial part of the instrument and its use. The focal surface (FS) of the JEM-EUSO telescope will consist of about 5000 photo-multiplier tubes (PMTs), which have to be well calibrated to reach the required accuracy in reconstructing the air-shower parameters. The optics system consists of 3 plastic Fresnel (double-sided) lenses of 2.5 m diameter. The aim of the calibration system is to measure the efficiencies (transmittances) of the optics and the absolute efficiencies of the entire focal surface detector. The system consists of 3 main components: (i) Pre-flight calibration devices on the ground, where the efficiency and gain of the PMTs, as well as the transmittance of the optics, will be measured absolutely. (ii) An on-board relative calibration system applying two methods: a) operating during the day, when the JEM-EUSO lid will be closed, with small light sources on board; b) operating during the night, together with data taking, by monitoring the background rate over identical sites. (iii) Absolute in-flight calibration, again applying two methods: a) measurement of moonlight reflected from high-altitude, high-albedo clouds; b) measurements of calibrated flashes and tracks produced by the Global Light System (GLS). Some details of each calibration method are described in this paper.
Calibration of piezoelectric RL shunts with explicit residual mode correction
DEFF Research Database (Denmark)
Høgsberg, Jan Becker; Krenk, Steen
2017-01-01
Piezoelectric RL (resistive-inductive) shunts are passive resonant devices used for damping of dominant vibration modes of a flexible structure, and their efficiency relies on the precise calibration of the shunt components. In the present paper improved calibration accuracy is attained by an extension of the local piezoelectric transducer displacement by two additional terms, representing the flexibility and inertia contributions from the residual vibration modes not directly addressed by the shunt damping. This results in an augmented dynamic model for the targeted resonant vibration mode...
Redundant interferometric calibration as a complex optimization problem
Grobler, T. L.; Bernardi, G.; Kenyon, J. S.; Parsons, A. R.; Smirnov, O. M.
2018-05-01
Observations of the redshifted 21 cm line from the epoch of reionization have recently motivated the construction of low-frequency radio arrays with highly redundant configurations. These configurations provide an alternative calibration strategy - `redundant calibration' - and boost sensitivity on specific spatial scales. In this paper, we formulate calibration of redundant interferometric arrays as a complex optimization problem. We solve this optimization problem via the Levenberg-Marquardt algorithm. This calibration approach is more robust to initial conditions than current algorithms and, by leveraging an approximate matrix inversion, allows for further optimization and an efficient implementation (`redundant STEFCAL'). We also investigated using the preconditioned conjugate gradient method as an alternative to the approximate matrix inverse, but found that its computational performance is not competitive with respect to `redundant STEFCAL'. The efficient implementation of this new algorithm is made publicly available.
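As a toy illustration of the formulation described above (not the authors' redundant STEFCAL implementation), the sketch below poses redundant calibration of a hypothetical five-element east-west array as a real-valued least-squares problem and solves it with SciPy's Levenberg-Marquardt solver; the antenna layout, gains, and sky visibilities are all invented for the example.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy redundant array: 5 antennas on a regular east-west line, so baselines
# of equal separation are redundant and share one true sky visibility y_k.
pairs = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 2), (1, 3), (2, 4),
         (0, 3), (1, 4), (0, 4)]
groups = [0, 0, 0, 0, 1, 1, 1, 2, 2, 3]   # redundant-group index per baseline

rng = np.random.default_rng(0)
g_true = np.exp(1j * rng.uniform(-0.1, 0.1, 5))              # antenna gains
y_true = rng.normal(1, 0.2, 4) + 1j * rng.normal(0, 0.2, 4)  # sky per group
V_obs = np.array([g_true[p] * np.conj(g_true[q]) * y_true[k]
                  for (p, q), k in zip(pairs, groups)])

def unpack(x):
    # Antenna 0's gain is fixed to 1 to remove the overall amplitude/phase.
    g = np.concatenate([[1.0 + 0j], x[0:4] + 1j * x[4:8]])
    y = x[8:12] + 1j * x[12:16]
    return g, y

def residuals(x):
    g, y = unpack(x)
    model = np.array([g[p] * np.conj(g[q]) * y[k]
                      for (p, q), k in zip(pairs, groups)])
    r = V_obs - model
    return np.concatenate([r.real, r.imag])   # LM needs real-valued residuals

x0 = np.concatenate([np.ones(4), np.zeros(4), np.ones(4), np.zeros(4)])
sol = least_squares(residuals, x0, method="lm")   # Levenberg-Marquardt
print("converged:", sol.success,
      "max residual:", np.abs(residuals(sol.x)).max())
```

Note that, as in the paper, the solution is only defined up to the degeneracies of redundant calibration; fixing the first gain removes the overall amplitude and phase, and the residuals still vanish at any point on the remaining degenerate manifold.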
ATLAS Tile Calorimeter calibration and monitoring systems
Cortés-González, Arely
2018-01-01
The ATLAS Tile Calorimeter is the central section of the hadronic calorimeter of the ATLAS experiment and provides important information for the reconstruction of hadrons, jets, hadronic decays of tau leptons and missing transverse energy. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength-shifting fibres to photomultiplier tubes located in the outer part of the calorimeter. Neutral particles may also produce a signal after interacting with the material and producing charged particles. The readout is segmented into about 5000 cells, each of them read out by two photomultipliers in parallel. To calibrate and monitor the stability and performance of each part of the readout chain during data taking, a set of calibration systems is used. This comprises Cesium radioactive sources, a laser system, charge injection elements and an integrator-based readout system. Information from all systems makes it possible to monitor and equalise the calorimeter response at each stage of the signal production, from scintillation light to digitisation. Calibration runs are monitored from a data quality perspective and used as a cross-check for physics runs. The data quality efficiency achieved during 2016 was 98.9%. The calibration and stability results reported here show that the TileCal performance is within the design requirements and that the calorimeter has made an essential contribution to reconstructed objects and physics results.
Monte Carlo simulations of neutron scattering instruments
International Nuclear Information System (INIS)
Aestrand, Per-Olof; Copenhagen Univ.; Lefmann, K.; Nielsen, K.
2001-01-01
A Monte Carlo simulation is an important computational tool used in many areas of science and engineering. The use of Monte Carlo techniques for simulating neutron scattering instruments is discussed. The basic ideas, techniques and approximations are presented. Since the construction of a neutron scattering instrument is very expensive, Monte Carlo software used for the design of instruments has to be validated and tested extensively. The McStas software was designed with these aspects in mind, and some of its basic principles will be discussed. Finally, future prospects for using Monte Carlo simulations to optimize neutron scattering experiments are discussed. (R.P.)
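A minimal, hypothetical example of the ray-tracing style of simulation such packages perform (this is not McStas itself): neutron rays with random start positions and divergences are propagated from a source plane to a slit one metre downstream, and the transmitted fraction is estimated from per-ray statistical weights.

```python
import numpy as np

# Toy McStas-style ray trace: rays leave a 1 cm wide source with up to
# +/-10 mrad divergence and must pass a 1 cm wide slit 1 m downstream.
rng = np.random.default_rng(42)
n = 100_000
x0 = rng.uniform(-0.005, 0.005, n)      # start position [m]
div = rng.uniform(-0.01, 0.01, n)       # divergence angle [rad]
x1 = x0 + 1.0 * np.tan(div)             # transverse position at the slit
w = np.ones(n)                          # per-ray weights (all equal here)
passed = np.abs(x1) <= 0.005            # rays inside the slit opening
transmission = w[passed].sum() / w.sum()
print("transmission ~", round(transmission, 3))
```

For this simple geometry the transmission can also be obtained analytically (it is 0.5), which is exactly the kind of cross-check used when validating instrument-simulation software.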
Status of Monte Carlo dose planning
International Nuclear Information System (INIS)
Mackie, T.R.
1995-01-01
Monte Carlo simulation will become increasingly important for treatment planning in radiotherapy. The EGS4 Monte Carlo system, a general particle transport system, has been used most often for simulation tasks in radiotherapy, although ETRAN/ITS and MCNP have also been used. Monte Carlo treatment planning requires that the beam characteristics, such as the energy spectrum and angular distribution of particles emerging from clinical accelerators, be accurately represented. An EGS4 Monte Carlo code, called BEAM, was developed by the OMEGA Project (a collaboration between the University of Wisconsin and the National Research Council of Canada) to transport particles through linear accelerator heads. This information was used as input to simulate the passage of particles through CT-based representations of phantoms or patients using both an EGS4 code (DOSXYZ) and the macro Monte Carlo (MMC) method. Monte Carlo computed 3-D electron beam dose distributions compare well to measurements obtained in simple and complex heterogeneous phantoms. The present drawback of most Monte Carlo codes is that simulation times are longer than those of most non-stochastic dose computation algorithms; this is especially true for photon dose planning. In the future, dedicated Monte Carlo treatment planning systems like Peregrine (from Lawrence Livermore National Laboratory), which will be capable of computing the dose from all beam types, or the Macro Monte Carlo (MMC) system, which is an order of magnitude faster than other algorithms, may dominate the field.
Energy Technology Data Exchange (ETDEWEB)
Venturini, Luzia [Instituto de Pesquisas Energeticas e Nucleares (IPEN), Sao Paulo, SP (Brazil). Servico de Monitoracao Ambiental
1996-07-01
This paper describes the efficiency of an HPGe detector in the 50-1800 keV energy range, for two geometries used for water measurements: a Marinelli beaker (850 ml) and a polyethylene flask (100 ml). The experimental data were corrected for the summing effect and fitted to a continuous, differentiable, energy-dependent function given by ln(ε) = b0 + b1·ln(E/E0) + β·[ln(E/E0)]², where β = b2 if E > E0 and β = a2 if E ≤ E0; ε is the full absorption peak efficiency, E is the gamma-ray energy, and {b0, b1, b2, a2, E0} is the parameter set to be fitted. (author)
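The fitted function above can be reproduced numerically. The sketch below, with invented energies and parameter values, fits the same piecewise log-quadratic form to synthetic noiseless data using scipy.optimize.curve_fit and recovers the parameters (E0 is held fixed here for simplicity, although in the paper it belongs to the fitted parameter set).

```python
import numpy as np
from scipy.optimize import curve_fit

E0 = 200.0  # knee energy [keV]; held fixed in this sketch

def log_eff(E, b0, b1, b2, a2):
    # ln(eff) = b0 + b1*ln(E/E0) + beta*ln(E/E0)^2, with beta = b2 above E0
    # and beta = a2 at or below E0 (continuous and differentiable at E0).
    x = np.log(E / E0)
    beta = np.where(E > E0, b2, a2)
    return b0 + b1 * x + beta * x**2

# Synthetic, noiseless calibration points at invented gamma-ray energies
E = np.array([60., 88., 122., 166., 200., 392., 662., 898., 1173., 1332., 1836.])
true = (-5.0, -0.8, -0.05, 0.3)
y = log_eff(E, *true)

popt, pcov = curve_fit(log_eff, E, y, p0=(-4., -1., 0., 0.))
print("recovered parameters:", np.round(popt, 4))
```

Because the function reduces to b0 + b1·x at E = E0 for either branch, the fit stays continuous and differentiable across the knee, matching the property stated in the abstract.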
Device calibration impacts security of quantum key distribution.
Jain, Nitin; Wittmann, Christoffer; Lydersen, Lars; Wiechers, Carlos; Elser, Dominique; Marquardt, Christoph; Makarov, Vadim; Leuchs, Gerd
2011-09-09
Characterizing the physical channel and calibrating the cryptosystem hardware are prerequisites for establishing a quantum channel for quantum key distribution (QKD). Moreover, an inappropriately implemented calibration routine can open a fatal security loophole. We propose and experimentally demonstrate a method to induce a large temporal detector efficiency mismatch in a commercial QKD system by deceiving a channel length calibration routine. We then devise an optimal and realistic strategy using faked states to break the security of the cryptosystem. A fix for this loophole is also suggested.
Dosimetry and Calibration Section
International Nuclear Information System (INIS)
Otto, T.
1999-01-01
The Dosimetry and Calibration Section fulfils two tasks within CERN's Radiation Protection Group: the Individual Dosimetry Service monitors more than 5000 persons potentially exposed to ionizing radiation on the CERN sites, and the Calibration Laboratory verifies throughout the year, at regular intervals, over 1000 instruments, monitors, and electronic dosimeters used by RP Group. The establishment of a Quality Assurance System for the Individual Dosimetry Service, a requirement of the new Swiss Ordinance for personal dosimetry, put a considerable workload on the section. Together with an external consultant it was decided to identify and then describe the different 'processes' of the routine work performed in the dosimetry service. The resulting Quality Manual was submitted to the Federal Office for Public Health in Bern in autumn. The CERN Individual Dosimetry Service will eventually be officially endorsed after a successful technical test in March 1999. On the technical side, the introduction of an automatic development machine for gamma films was very successful. It processes the dosimetric films without an operator being present, and its built-in regeneration mechanism keeps the concentration of the processing chemicals at a constant level
Monte Carlo simulation of tomography techniques using the platform Gate
International Nuclear Information System (INIS)
Barbouchi, Asma
2007-01-01
Simulations play a key role in functional imaging, with applications ranging from scanner design to scatter correction and protocol optimisation. GATE (Geant4 Application for Tomographic Emission) is a platform for Monte Carlo simulation. It is based on Geant4 to generate and track particles and to model geometry and physics processes. Explicit modelling of time includes detector motion, time of flight, and tracer kinetics. Interfaces to voxellised models and image reconstruction packages improve the integration of GATE in the global modelling cycle. In this work, Monte Carlo simulations are used to understand and optimise the gamma camera's performance. We study the effects of the source-to-collimator distance, the hole diameter and the collimator thickness on the spatial resolution, energy resolution and efficiency of the gamma camera. We also study ways to reduce the simulation time and implement a model of the left ventricle in GATE. (Author). 7 refs
Monte Carlo Euler approximations of HJM term structure financial models
Björk, Tomas
2012-11-22
We present Monte Carlo-Euler methods for a weak approximation problem related to the Heath-Jarrow-Morton (HJM) term structure model, based on Itô stochastic differential equations in infinite dimensional spaces, and prove strong and weak error convergence estimates. The weak error estimates are based on stochastic flows and discrete dual backward problems, and they can be used to identify different error contributions arising from time and maturity discretization as well as the classical statistical error due to finite sampling. Explicit formulas for efficient computation of sharp error approximation are included. Due to the structure of the HJM models considered here, the computational effort devoted to the error estimates is low compared to the work to compute Monte Carlo solutions to the HJM model. Numerical examples with known exact solution are included in order to show the behavior of the estimates. © 2012 Springer Science+Business Media Dordrecht.
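The Monte Carlo-Euler idea can be illustrated on a scalar SDE with a known expectation (geometric Brownian motion rather than the infinite-dimensional HJM dynamics of the paper): the observed error then combines the O(dt) weak bias of the Euler scheme with the statistical error from finite sampling, the two contributions the paper's estimates separate.

```python
import numpy as np

# Monte Carlo-Euler weak approximation of E[X_T] for geometric Brownian
# motion dX = mu*X dt + sigma*X dW, whose exact expectation is x0*exp(mu*T).
rng = np.random.default_rng(1)
mu, sigma, x0, T = 0.05, 0.2, 1.0, 1.0
steps, paths = 50, 200_000
dt = T / steps

X = np.full(paths, x0)
for _ in range(steps):                      # Euler-Maruyama time stepping
    X = X + mu * X * dt + sigma * X * np.sqrt(dt) * rng.standard_normal(paths)

estimate = X.mean()                         # Monte Carlo average over paths
exact = x0 * np.exp(mu * T)
print("estimate:", round(estimate, 4), "exact:", round(exact, 4))
```

Halving dt halves the time-discretization bias, while quadrupling the number of paths halves the statistical error; balancing the two is exactly the point of the error decomposition described in the abstract.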
Monte Carlo Methods in ICF (LIRPP Vol. 13)
Zimmerman, George B.
2016-10-01
Monte Carlo methods appropriate for simulating the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, but can be improved significantly in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.
Path integral Monte Carlo and the electron gas
Brown, Ethan W.
Path integral Monte Carlo is a proven method for accurately simulating quantum mechanical systems at finite temperature. By stochastically sampling Feynman's path integral representation of the quantum many-body density matrix, path integral Monte Carlo includes non-perturbative effects like thermal fluctuations and particle correlations in a natural way. Over the past 30 years, path integral Monte Carlo has been successfully employed to study the low-density electron gas, high-pressure hydrogen, and superfluid helium. For systems where the role of Fermi statistics is important, however, traditional path integral Monte Carlo simulations have an exponentially decreasing efficiency with decreased temperature and increased system size. In this thesis, we work towards improving this efficiency, both through approximate and exact methods, as specifically applied to the homogeneous electron gas. We begin with a brief overview of the current state of atomic simulations at finite temperature before we delve into a pedagogical review of the path integral Monte Carlo method. We then spend some time discussing the one major issue preventing exact simulation of Fermi systems, the sign problem. Afterwards, we introduce a way to circumvent the sign problem in PIMC simulations through a fixed-node constraint. We then apply this method to the homogeneous electron gas at a large swath of densities and temperatures in order to map out the warm-dense matter regime. The electron gas can be a representative model for a host of real systems, from simple metals to stellar interiors. However, its most common use is as input into density functional theory. To this end, we aim to build an accurate representation of the electron gas from the ground state to the classical limit and examine its use in finite-temperature density functional formulations. The latter half of this thesis focuses on possible routes beyond the fixed-node approximation. As a first step, we utilize the variational
A new sewage exfiltration model--parameters and calibration.
Karpf, Christian; Krebs, Peter
2011-01-01
Exfiltration of waste water from sewer systems represents a potential danger for the soil and the aquifer. Common models, which are used to describe the exfiltration process, are based on the law of Darcy, extended by a more or less detailed consideration of the expansion of leaks, the characteristics of the soil and the colmation layer. But, due to the complexity of the exfiltration process, the calibration of these models includes a significant uncertainty. In this paper, a new exfiltration approach is introduced, which implements the dynamics of the clogging process and the structural conditions near sewer leaks. The calibration is realised according to experimental studies and analysis of groundwater infiltration to sewers. Furthermore, exfiltration rates and the sensitivity of the approach are estimated and evaluated, respectively, by Monte-Carlo simulations.
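A minimal illustration of the kind of Monte-Carlo evaluation mentioned above, using a plain Darcy-type leak model with invented parameter distributions (not the paper's calibrated exfiltration approach): uncertain inputs are sampled, exfiltration rates are computed, and rank correlations serve as a crude sensitivity indicator.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical Darcy-type leak model: Q = k * A * dh / L, where k is the
# conductivity of the colmation layer, A the leak area, dh the head
# difference and L the layer thickness. All distributions are invented.
rng = np.random.default_rng(7)
n = 50_000
k = rng.lognormal(mean=np.log(1e-6), sigma=1.0, size=n)  # conductivity [m/s]
A = rng.uniform(0.5e-4, 2e-4, n)                         # leak area [m^2]
dh = rng.uniform(0.1, 0.5, n)                            # head diff. [m]
L = 0.01                                                 # layer thickness [m]

Q = k * A * dh / L                                       # exfiltration [m^3/s]

# Spearman rank correlation of each input with Q as a sensitivity measure
rho = {name: spearmanr(p, Q)[0] for name, p in [("k", k), ("A", A), ("dh", dh)]}
print({name: round(r, 2) for name, r in rho.items()})
```

With these assumed distributions the log-normal conductivity dominates the output uncertainty, which mirrors the paper's emphasis on the clogging (colmation) layer as the key calibrated quantity.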
Energy Technology Data Exchange (ETDEWEB)
Cardoso, Vanderlei
2002-07-01
The present work describes several methodologies developed for fitting efficiency curves obtained with an HPGe gamma-ray spectrometer. The interpolated values were determined by simple polynomial fitting, and by polynomial fitting of the ratio between the experimental peak efficiency and the total efficiency calculated by the Monte Carlo technique, as a function of gamma-ray energy. Moreover, non-linear fitting was performed using a segmented polynomial function and applying the Gauss-Marquardt method. To obtain the peak areas, different methodologies were developed for estimating the background area under the peak; this information was obtained by numerical integration or by using analytical functions associated with the background. One non-calibrated radioactive source was included in the efficiency curve in order to provide additional calibration points. As a by-product, it was possible to determine the activity of this non-calibrated source. For all fittings developed in the present work the covariance matrix methodology was used, an essential procedure for a complete description of the partial uncertainties involved. (author)
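The covariance-matrix methodology can be sketched with synthetic data: a polynomial fit of log-efficiency versus log-energy returns a parameter covariance matrix, which is then propagated to an interpolated energy. The efficiency model, energies, and uncertainties below are invented for illustration.

```python
import numpy as np

# Invented "true" efficiency curve, quadratic in ln(E), used to make data
E = np.array([122., 245., 344., 411., 444., 779., 867., 964., 1112., 1408.])
eff_true = 0.02 * (E / 122.0) ** -0.9 * np.exp(0.05 * np.log(E / 122.0) ** 2)

rng = np.random.default_rng(3)
x = np.log(E)
y = np.log(eff_true) + rng.normal(0, 0.01, E.size)   # ~1% relative scatter

coef, cov = np.polyfit(x, y, deg=2, cov=True)        # fit + covariance matrix

xi = np.log(500.0)                                   # interpolation energy
g = np.array([xi**2, xi, 1.0])                       # gradient wrt coefficients
eff_500 = np.exp(np.polyval(coef, xi))
rel_unc = float(np.sqrt(g @ cov @ g))                # propagated relative unc.
print("eff(500 keV) ~", round(eff_500, 5), "+/-", round(100 * rel_unc, 2), "%")
```

The quadratic form g·C·g propagates the full parameter covariance, including the off-diagonal terms that a naive per-parameter error estimate would miss, which is the point of the covariance methodology the abstract emphasizes.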
Design of a transportable high efficiency fast neutron spectrometer
Energy Technology Data Exchange (ETDEWEB)
Roecker, C., E-mail: calebroecker@berkeley.edu [Department of Nuclear Engineering, University of California at Berkeley, CA 94720 (United States); Bernstein, A.; Bowden, N.S. [Nuclear and Chemical Sciences Division, Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States); Cabrera-Palmer, B. [Radiation and Nuclear Detection Systems, Sandia National Laboratories, Livermore, CA 94550 (United States); Dazeley, S. [Nuclear and Chemical Sciences Division, Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States); Gerling, M.; Marleau, P.; Sweany, M.D. [Radiation and Nuclear Detection Systems, Sandia National Laboratories, Livermore, CA 94550 (United States); Vetter, K. [Department of Nuclear Engineering, University of California at Berkeley, CA 94720 (United States); Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)
2016-08-01
A transportable fast neutron detection system has been designed and constructed for measuring neutron energy spectra and flux ranging from tens to hundreds of MeV. The transportability of the spectrometer reduces the detector-related systematic bias between different neutron spectra and flux measurements, which allows for the comparison of measurements above or below ground. The spectrometer will measure neutron fluxes that are of prohibitively low intensity compared to the site-specific background rates targeted by other transportable fast neutron detection systems. To measure low-intensity high-energy neutron fluxes, a conventional capture-gating technique is used for measuring neutron energies above 20 MeV and a novel multiplicity technique is used for measuring neutron energies above 100 MeV. The spectrometer is composed of two Gd-containing plastic scintillator detectors arranged around a lead spallation target. To calibrate and characterize the position-dependent response of the spectrometer, a Monte Carlo model was developed and used in conjunction with experimental data from gamma-ray sources. Multiplicity event identification algorithms were developed and used with a Cf-252 neutron multiplicity source to validate the Gd concentration and secondary neutron capture efficiency of the Monte Carlo model. The validated Monte Carlo model was used to predict an effective area for the multiplicity and capture-gating analyses. For incident neutron energies between 100 MeV and 1000 MeV with an isotropic angular distribution, the multiplicity analysis predicted an effective area of 500 cm² rising to 5000 cm². For neutron energies above 20 MeV, the capture-gating analysis predicted an effective area between 1800 cm² and 2500 cm². The multiplicity mode was found to be sensitive to the incident neutron angular distribution.
Soybean Physiology Calibration in the Community Land Model
Drewniak, B. A.; Bilionis, I.; Constantinescu, E. M.
2014-12-01
With the large influence of agricultural land use on biophysical and biogeochemical cycles, integrating cultivation into Earth System Models (ESMs) is increasingly important. The Community Land Model (CLM) was augmented with a CLM-Crop extension that simulates the development of three crop types: maize, soybean, and spring wheat. The CLM-Crop model is a complex system that relies on a suite of parametric inputs that govern plant growth under a given atmospheric forcing and available resources. However, the strong nonlinearity of ESMs makes parameter fitting a difficult task. In this study, our goal is to calibrate ten of the CLM-Crop parameters for one crop type, soybean, in order to improve model projections of plant development and carbon fluxes. We used measurements of gross primary productivity, net ecosystem exchange, and plant biomass from AmeriFlux sites to choose parameter values that optimize crop productivity in the model. Calibration is performed in a Bayesian framework by developing a scalable and adaptive scheme based on sequential Monte Carlo (SMC). Our scheme can perform model calibration using very few evaluations and, by exploiting parallelism, at a fraction of the time required by plain-vanilla Markov chain Monte Carlo (MCMC). We present results from a twin experiment (self-validation), together with calibration and validation results using real observations from an AmeriFlux tower site in the Midwestern United States, for the soybean crop type. The improved model will help researchers understand how climate affects crop production and the resulting carbon fluxes and, additionally, how cultivation impacts climate.
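A stripped-down sketch of the sequential Monte Carlo idea (likelihood tempering with resampling and a jitter move), calibrating a single parameter of a stand-in response curve rather than CLM-Crop; all numbers are invented, and the jitter move is an uncorrected approximation to the move step of a full SMC sampler.

```python
import numpy as np

# Minimal SMC (likelihood tempering) for one hypothetical crop parameter.
rng = np.random.default_rng(5)

def model(theta):
    # Stand-in saturating productivity response, NOT CLM-Crop itself
    return 10.0 * theta / (1.0 + theta)

theta_true, sigma_obs = 0.8, 0.1
obs = model(theta_true) + rng.normal(0, sigma_obs)   # one noisy observation

def loglik(theta):
    return -0.5 * ((obs - model(theta)) / sigma_obs) ** 2

n = 5000
particles = rng.uniform(0.1, 2.0, n)         # sample from a uniform prior
for _ in range(5):                           # temper likelihood in 5 stages
    logw = 0.2 * loglik(particles)           # incremental weight, delta = 0.2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(n, n, p=w)              # multinomial resampling
    particles = particles[idx] + rng.normal(0, 0.02, n)   # small jitter move

print("posterior mean ~", round(particles.mean(), 2))
```

Bringing the likelihood in gradually keeps the particle weights from degenerating in one step, which is what lets SMC calibrate with far fewer model evaluations than plain MCMC, the advantage claimed in the abstract.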
Calibration philosophy for reactor instrumentation
International Nuclear Information System (INIS)
Saroja, A.R.; Ilango Sambasivan, S.; Swaminathan, P.
2004-01-01
All electronic test and measuring systems and process control instruments constitute a critical and important area of instrumentation in nuclear and conventional power plants, process plants and research laboratories. All of these instruments need periodic calibration, so a standards laboratory is one of the essential tools for enforcing quality. Calibration of these instruments plays a vital role in achieving the targeted performance, reliability, and quality; it is thus a must where speed and quality are needed. (author)