WorldWideScience

Sample records for carlo efficiency calibration

  1. Monte Carlo efficiency calibration of a neutron generator-based total-body irradiator

    International Nuclear Information System (INIS)

    Many body composition measurement systems are calibrated against a single-sized reference phantom. Prompt-gamma neutron activation (PGNA) provides the only direct measure of total body nitrogen (TBN), an index of the body's lean tissue mass. In PGNA systems, body size influences neutron flux attenuation, induced gamma signal distribution, and counting efficiency. Thus, calibration based on a single-sized phantom could result in inaccurate TBN values. We used Monte Carlo simulations (MCNP-5; Los Alamos National Laboratory) to map a system's response to the range of body weights (65-160 kg) and body fat distributions (25-60%) in obese humans. Calibration curves were constructed to derive body-size correction factors relative to a standard reference phantom, providing customized adjustments to account for differences in body habitus of obese adults. The use of MCNP-generated calibration curves should allow for a better estimate of the true changes in lean tissue mass that may occur during intervention programs focused only on weight loss. (author)
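
    As a sketch of how such MCNP-generated calibration curves could be applied in practice, the snippet below interpolates a body-size correction factor from a precomputed grid and scales a phantom-calibrated TBN value; the grid values, function names and numbers are hypothetical, not the paper's data.

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        # Hypothetical grid of MCNP-derived correction factors relative to the
        # reference phantom: rows = body weight (kg), columns = body fat (%).
        weights_kg = np.array([65.0, 100.0, 130.0, 160.0])
        fat_pct = np.array([25.0, 40.0, 60.0])
        correction = np.array([
            [1.00, 1.03, 1.07],
            [1.05, 1.09, 1.14],
            [1.09, 1.14, 1.20],
            [1.13, 1.19, 1.26],
        ])  # illustrative values only

        interp = RegularGridInterpolator((weights_kg, fat_pct), correction)

        def corrected_tbn(tbn_phantom_calibrated, weight_kg, body_fat_pct):
            """Scale a TBN value from the single-phantom calibration by the
            body-size correction factor for this subject."""
            factor = interp([[weight_kg, body_fat_pct]])[0]
            return tbn_phantom_calibrated * factor

        print(corrected_tbn(1800.0, 120.0, 45.0))  # grams of nitrogen (illustrative)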

  2. Application of the Monte Carlo code DETEFF to efficiency calibrations for in situ gamma-ray spectrometry.

    Science.gov (United States)

    Carrazana González, J; Cornejo Díaz, N; Jurado Vargas, M

    2012-05-01

    We studied the applicability of the Monte Carlo code DETEFF for the efficiency calibration of detectors for in situ gamma-ray spectrometry determinations of ground deposition activity levels. For this purpose, the code DETEFF was applied to a study case, and the calculated (137)Cs activity deposition levels at four sites were compared with published values obtained both by soil sampling and by in situ measurements. The (137)Cs ground deposition levels obtained with DETEFF were found to be equivalent to the results of the study case within the uncertainties involved. The code DETEFF could thus be used for the efficiency calibration of in situ gamma-ray spectrometry for the determination of ground deposition activity using the uniform slab model. It has the advantage of requiring far less simulation time than general Monte Carlo codes adapted for efficiency computation, which is essential for in situ gamma-ray spectrometry where the measurement configuration yields low detection efficiency. PMID:22336296
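
    For reference (this relation is standard for in situ gamma-ray spectrometry and is not specific to DETEFF), the efficiency calibration enters the activity determination through

        $A = \frac{n_{\rm net}}{\varepsilon(E)\, p_\gamma(E)}$

    where $A$ is the ground deposition activity per unit area, $n_{\rm net}$ the net count rate in the full-energy peak, $\varepsilon(E)$ the effective full-energy peak efficiency computed for the assumed source distribution (here the uniform slab model), and $p_\gamma(E)$ the gamma emission probability of the line.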

  3. Application of the Monte Carlo code DETEFF to efficiency calibrations for in situ gamma-ray spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Carrazana Gonzalez, J.; Cornejo Diaz, N. [Centre for Radiological Protection and Hygiene, P.O. Box 6195, Habana (Cuba); Jurado Vargas, M., E-mail: mjv@unex.es [Departamento de Fisica, Universidad de Extremadura, 06071 Badajoz (Spain)

    2012-05-15

    We studied the applicability of the Monte Carlo code DETEFF for the efficiency calibration of detectors for in situ gamma-ray spectrometry determinations of ground deposition activity levels. For this purpose, the code DETEFF was applied to a study case, and the calculated 137Cs activity deposition levels at four sites were compared with published values obtained both by soil sampling and by in situ measurements. The 137Cs ground deposition levels obtained with DETEFF were found to be equivalent to the results of the study case within the uncertainties involved. The code DETEFF could thus be used for the efficiency calibration of in situ gamma-ray spectrometry for the determination of ground deposition activity using the uniform slab model. It has the advantage of requiring far less simulation time than general Monte Carlo codes adapted for efficiency computation, which is essential for in situ gamma-ray spectrometry where the measurement configuration yields low detection efficiency. - Highlights: • Application of the code DETEFF to in situ gamma-ray spectrometry. • 137Cs ground deposition levels evaluated assuming a uniform slab model. • Code DETEFF allows a rapid efficiency calibration.

  4. Monte Carlo calculation of the efficiency calibration curve and coincidence-summing corrections in low-level gamma-ray spectrometry using well-type HPGe detectors

    International Nuclear Information System (INIS)

    Well-type high-purity germanium (HPGe) detectors are well suited to the analysis of small amounts of environmental samples, as they can combine both low background and high detection efficiency. A low-background well-type detector is installed in the Modane Underground Laboratory. In the well geometry, coincidence-summing effects are large and make the construction of the full-energy peak efficiency curve a difficult task with a usual calibration standard, especially in the high-energy range. Using the GEANT code and taking into account a detailed description of the detector and the source, efficiency curves have been modelled for several filling heights of the vial. With a special routine taking into account the decay schemes of the radionuclides, corrections for the coincidence-summing effects that occur when measuring samples containing 238U, 232Th or 134Cs have been computed. The results are found to be in good agreement with the experimental data. It is shown that the effect of triple coincidences on counting losses accounts for 7-15% of the pair-coincidence effect for the 604 and 796 keV lines of 134Cs.

  5. Virtual point source efficiency calibration method for voluminous sample of radio-xenon based on efficiency function of point source

    International Nuclear Information System (INIS)

    A virtual point source calibration method was developed to perform the efficiency calibration of voluminous samples. A mixed point source was used to obtain the parameters of the efficiency function, from which the virtual position of a voluminous sample is derived. The detection efficiencies for xenon samples and standard soil samples were then calibrated by placing the point source at their virtual positions. The Monte Carlo method was also used to simulate the detector efficiency for xenon samples. Deviations between the virtual point source method and the Monte Carlo simulation are within 2.2% for xenon samples. Thus, two robust efficiency calibration methods, based on the virtual point source and on Monte Carlo simulation respectively, have been established. (author)
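
    A minimal sketch of the virtual point source idea, under the assumption that the point-source efficiency follows a smooth function of source-detector distance; the fit form, helper names and all numbers below are hypothetical.

        import numpy as np
        from scipy.optimize import brentq, curve_fit

        # Hypothetical point-source efficiencies vs. source-detector distance (cm)
        d = np.array([1.0, 2.0, 5.0, 10.0, 15.0])
        eff = np.array([0.120, 0.080, 0.030, 0.010, 0.005])

        def eff_fn(x, a, b):
            # Assumed efficiency function of the point source
            return a / (x + b) ** 2

        (a, b), _ = curve_fit(eff_fn, d, eff, p0=[0.5, 1.0])

        # The "virtual position" of a voluminous sample is the distance at which
        # a point source reproduces the sample's measured efficiency.
        eff_volume = 0.020
        d_virtual = brentq(lambda x: eff_fn(x, a, b) - eff_volume, 0.1, 50.0)
        print(f"virtual point-source distance: {d_virtual:.2f} cm")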

  6. Detector characterization for efficiency calibration in different measurement geometries

    International Nuclear Information System (INIS)

    In order to perform an accurate efficiency calibration for different measurement geometries, a good knowledge of the detector characteristics is required. The Monte Carlo simulation program GESPECOR is applied. The detector characterization required for the Monte Carlo simulation is achieved using the efficiency values obtained from measuring a point source. The point source was measured in two significant geometries: with the source placed in a vertical plane containing the vertical symmetry axis of the detector, and in a horizontal plane containing the centre of the active volume of the detector. The measurements were made using the gamma spectrometry technique. (authors)

  7. Monte Carlo simulation: tool for the calibration in analytical determination of radionuclides; Simulacion Monte Carlo: herramienta para la calibracion en determinaciones analiticas de radionucleidos

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez, Jorge A. Carrazana; Ferrera, Eduardo A. Capote; Gomez, Isis M. Fernandez; Castro, Gloria V. Rodriguez; Ricardo, Niury Martinez, E-mail: cphr@cphr.edu.cu [Centro de Proteccion e Higiene de las Radiaciones (CPHR), La Habana (Cuba)

    2013-07-01

    This work shows how the traceability of analytical determinations is established using this calibration method. It highlights the advantages offered by Monte Carlo simulation for applying corrections for differences in the chemical composition, density and height of the analyzed samples. Likewise, the results obtained by the LVRA in two exercises organized by the International Atomic Energy Agency (IAEA) are presented. In these exercises (an intercomparison and a proficiency test), all reported analytical results were obtained from efficiency calibrations by Monte Carlo simulation using the DETEFF program.

  8. Efficiency calibration of low background gamma spectrometer

    International Nuclear Information System (INIS)

    A method of efficiency calibration is described. The authors used standard ores of U, Ra and Th (powder form), KCl and Cs-137 sources to prepare calibration volume sources, which were placed directly on the detector end cap. In such a measuring geometry it is not necessary to apply a coincidence-summing correction. The efficiency calibration curve obtained by this method was compared with results measured with Am-241, Cd-109 and Eu-152 calibration sources; they agree within an error of about 5%.

  9. Top Quark Mass Calibration for Monte Carlo Event Generators

    CERN Document Server

    Butenschoen, Mathias; Hoang, Andre H; Mateu, Vicent; Preisser, Moritz; Stewart, Iain W

    2016-01-01

    The most precise top quark mass measurements use kinematic reconstruction methods, determining the top mass parameter of a Monte Carlo event generator, $m_t^{\rm MC}$. Due to hadronization and parton shower dynamics, relating $m_t^{\rm MC}$ to a field theory mass is difficult. We present a calibration procedure to determine this relation using hadron level QCD predictions for observables with kinematic mass sensitivity. Fitting $e^+e^-$ 2-Jettiness calculations at NLL/NNLL order to Pythia 8.205, we find that $m_t^{\rm MC}$ differs from the pole mass by $900$/$600$ MeV and agrees with the MSR mass within uncertainties, $m_t^{\rm MC}\simeq m_{t,1\,{\rm GeV}}^{\rm MSR}$.

  10. Monte Carlo based calibration of scintillation detectors for laboratory and in situ gamma ray measurements

    NARCIS (Netherlands)

    van der Graaf, E. R.; Limburg, J.; Koomans, R. L.; Tijs, M.

    2011-01-01

    The calibration of scintillation detectors for gamma radiation in a well characterized setup can be transferred to other geometries using Monte Carlo simulations to account for the differences between the calibration and the other geometry. In this study a calibration facility was used that is const

  11. Calibration of the identiFINDER detector for the iodine measurement in thyroid using the Monte Carlo method

    International Nuclear Information System (INIS)

    This work determines the detection efficiency of the identiFINDER detector for 125I and 131I in the thyroid using the Monte Carlo method. The suitability of the calibration method was analyzed by comparing the results of the direct Monte Carlo method with those of the corrected method; the latter was chosen because its differences from the real efficiency stayed below 10%. To simulate the detector, its geometric parameters were optimized using a tomographic study, which allowed the uncertainties of the estimates to be minimized. Detector geometry-point source simulations were then performed to find the correction factors at 5 cm, 15 cm and 25 cm, together with those corresponding to the detector-phantom arrangement used for validating the method and for the final calculation of the efficiency. These simulations showed that, in a Monte Carlo implementation, simulating at a greater distance than that used in the laboratory measurements overestimates the efficiency, while simulating at a shorter distance underestimates it; the simulation should therefore be performed at the same distance at which the actual measurement will be made. The efficiency curves and the minimum detectable activity for the measurement of 131I and 125I were also obtained. Overall, the Monte Carlo methodology was implemented for the identiFINDER calibration with the purpose of estimating the measured activity of iodine in the thyroid. This method is an ideal way to compensate for the lack of standard solutions and phantoms, ensuring that the capabilities of the Internal Contamination Laboratory of the Centro de Proteccion e Higiene de las Radiaciones remain calibrated for the measurement of iodine in the thyroid. (author)

  12. Calibration and Monte Carlo modelling of neutron long counters

    CERN Document Server

    Tagziria, H

    2000-01-01

    The Monte Carlo technique has become a very powerful tool in radiation transport as full advantage is taken of enhanced cross-section data, more powerful computers and statistical techniques, together with better characterisation of neutron and photon source spectra. At the National Physical Laboratory, calculations using the Monte Carlo radiation transport code MCNP-4B have been combined with accurate measurements to characterise two long counters routinely used to standardise monoenergetic neutron fields. New and more accurate response function curves have been produced for both long counters. A novel approach using Monte Carlo methods has been developed, validated and used to model the response function of the counters and determine more accurately their effective centres, which have always been difficult to establish experimentally. Calculations and measurements agree well, especially for the De Pangher long counter for which details of the design and constructional material are well known. The sensitivit...

  13. Force calibration using errors-in-variables regression and Monte Carlo uncertainty evaluation

    Science.gov (United States)

    Bartel, Thomas; Stoudt, Sara; Possolo, Antonio

    2016-06-01

    An errors-in-variables regression method is presented as an alternative to the ordinary least-squares regression computation currently employed for determining the calibration function for force measuring instruments from data acquired during calibration. A Monte Carlo uncertainty evaluation for the errors-in-variables regression is also presented. The corresponding function (which we call measurement function, often called analysis function in gas metrology) necessary for the subsequent use of the calibrated device to measure force, and the associated uncertainty evaluation, are also derived from the calibration results. Comparisons are made, using real force calibration data, between the results from the errors-in-variables and ordinary least-squares analyses, as well as between the Monte Carlo uncertainty assessment and the conventional uncertainty propagation employed at the National Institute of Standards and Technology (NIST). The results show that the errors-in-variables analysis properly accounts for the uncertainty in the applied calibrated forces, and that the Monte Carlo method, owing to its intrinsic ability to model uncertainty contributions accurately, yields a better representation of the calibration uncertainty throughout the transducer’s force range than the methods currently in use. These improvements notwithstanding, the differences between the results produced by the current and by the proposed new methods generally are small because the relative uncertainties of the inputs are small and most contemporary load cells respond approximately linearly to such inputs. For this reason, there will be no compelling need to revise any of the force calibration reports previously issued by NIST.
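
    As an illustration of the Monte Carlo uncertainty evaluation described above (a simplified stand-in, not NIST's implementation: the full errors-in-variables fit is replaced here by refitting ordinary least squares to jointly perturbed data), all data and uncertainties below are invented.

        import numpy as np

        rng = np.random.default_rng(1)

        force = np.array([10.0, 20.0, 40.0, 80.0, 160.0])        # applied force, kN
        reading = np.array([0.501, 1.004, 2.010, 4.017, 8.041])  # transducer output, mV/V
        u_force, u_reading = 0.02, 0.002  # standard uncertainties (hypothetical)

        n_mc = 10_000
        pred = np.empty(n_mc)
        for i in range(n_mc):
            # Perturb BOTH variables, reflecting the errors-in-variables viewpoint
            f = force + rng.normal(0.0, u_force, force.size)
            r = reading + rng.normal(0.0, u_reading, reading.size)
            slope, intercept = np.polyfit(f, r, 1)
            pred[i] = slope * 100.0 + intercept  # predicted reading at 100 kN

        print(f"reading @ 100 kN = {pred.mean():.4f} +/- {pred.std(ddof=1):.4f} mV/V")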

  14. Calibration and validation of a Monte Carlo model for PGNAA of chlorine in soil

    International Nuclear Information System (INIS)

    A prompt gamma-ray neutron activation analysis (PGNAA) system was used to calibrate and validate a Monte Carlo model as a proof of principle for the quantification of chlorine in soil. First, the response of an n-type HPGe detector to point sources of 60Co and 152Eu was determined experimentally and used to calibrate an MCNP4a model of the detector. The refined MCNP4a detector model can predict the absolute peak detection efficiency within 12% in the energy range of 120-1400 keV. Second, a PGNAA system consisting of a light-water moderated 252Cf (1.06 μg) neutron source, and the shielded and collimated HPGe detector was used to collect prompt gamma-ray spectra from Savannah River Site (SRS) soil spiked with chlorine. The spectra were used to calculate the minimum detectable concentration (MDC) of chlorine and the prompt gamma-ray detection probability. Using the 252Cf based PGNAA system, the MDC for Cl in the SRS soil is 4400 μg/g for an 1800-second irradiation based on the analysis of the 6110 keV prompt gamma-ray. MCNP4a was used to predict the PGNAA detection probability, which was accomplished by modeling the neutron and gamma-ray transport components separately. In the energy range of 788 to 6110 keV, the MCNP4a predictions of the prompt gamma-ray detection probability were generally within 60% of the experimental value, thus validating the Monte Carlo model. (author)
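
    For context, the minimum detectable concentration quoted above is conventionally obtained from Currie's detection limit; assuming the widely used paired-observation form (the abstract does not state which formula was applied), the detection limit in counts is

        $L_D \approx 2.71 + 4.65\sqrt{B}$

    where $B$ is the background counts in the peak region; this is converted to a concentration by dividing by the counting time, the prompt gamma-ray detection probability and the sample mass.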

  15. Calibration of the Top-Quark Monte Carlo Mass

    Science.gov (United States)

    Kieseler, Jan; Lipka, Katerina; Moch, Sven-Olaf

    2016-04-01

    We present a method to establish, experimentally, the relation between the top-quark mass mtMC as implemented in Monte Carlo generators and the Lagrangian mass parameter mt in a theoretically well-defined renormalization scheme. We propose a simultaneous fit of mtMC and an observable sensitive to mt, which does not rely on any prior assumptions about the relation between mt and mtMC. The measured observable is independent of mtMC and can be used subsequently for a determination of mt. The analysis strategy is illustrated with examples for the extraction of mt from inclusive and differential cross sections for hadroproduction of top quarks.

  16. Monte Carlo simulation for the calibration of neutron source strength measurement of JT-60 upgrade

    International Nuclear Information System (INIS)

    The calibration of the relation between the neutron source strength of the whole plasma and the output of the neutron monitors is important for evaluating the fusion gain in tokamaks with DD or DT operation. JT-60 will be modified into a deuterium-plasma tokamak with Ip ≤ 7 MA and V ≤ 110 m3. The source strength of JT-60 Upgrade will be measured with 235U and 238U fission chambers. Detection efficiencies for source neutrons are calculated with the Monte Carlo code MCNP using a 3-dimensional model of JT-60 Upgrade and a poloidally distributed neutron source. More than 90% of the fission chambers' counts are contributed by the source within a poloidal range of about -85° to 85° for the 235U and 238U detectors, respectively. Detection efficiencies are sensitive to the major radius of the detector position, but not so sensitive to vertical and toroidal shifts of the detector positions. The total uncertainties, combining detector position errors, are ±13% and ±9% for the 235U and 238U detectors, respectively. The modelling errors of the detection efficiencies are so large for the 238U detector that more precise modelling, including the port boxes, is needed. (author)

  17. Calibration of the top-quark Monte-Carlo mass

    Energy Technology Data Exchange (ETDEWEB)

    Kieseler, Jan; Lipka, Katerina [DESY Hamburg (Germany); Moch, Sven-Olaf [Hamburg Univ. (Germany). II. Inst. fuer Theoretische Physik

    2015-11-15

    We present a method to establish experimentally the relation between the top-quark mass mtMC as implemented in Monte-Carlo generators and the Lagrangian mass parameter mt in a theoretically well-defined renormalization scheme. We propose a simultaneous fit of mtMC and an observable sensitive to mt, which does not rely on any prior assumptions about the relation between mt and mtMC. The measured observable is independent of mtMC and can be used subsequently for a determination of mt. The analysis strategy is illustrated with examples for the extraction of mt from inclusive and differential cross sections for hadro-production of top-quarks.

  18. Calibration of the Top-Quark Monte-Carlo Mass

    CERN Document Server

    Kieseler, Jan; Moch, Sven-Olaf

    2015-01-01

    We present a method to establish experimentally the relation between the top-quark mass $m_t^{MC}$ as implemented in Monte-Carlo generators and the Lagrangian mass parameter $m_t$ in a theoretically well-defined renormalization scheme. We propose a simultaneous fit of $m_t^{MC}$ and an observable sensitive to $m_t$, which does not rely on any prior assumptions about the relation between $m_t$ and $m_t^{MC}$. The measured observable is independent of $m_t^{MC}$ and can be used subsequently for a determination of $m_t$. The analysis strategy is illustrated with examples for the extraction of $m_t$ from inclusive and differential cross sections for hadro-production of top-quarks.

  19. Calibration of the top-quark Monte-Carlo mass

    International Nuclear Information System (INIS)

    We present a method to establish experimentally the relation between the top-quark mass mtMC as implemented in Monte-Carlo generators and the Lagrangian mass parameter mt in a theoretically well-defined renormalization scheme. We propose a simultaneous fit of mtMC and an observable sensitive to mt, which does not rely on any prior assumptions about the relation between mt and mtMC. The measured observable is independent of mtMC and can be used subsequently for a determination of mt. The analysis strategy is illustrated with examples for the extraction of mt from inclusive and differential cross sections for hadro-production of top-quarks.

  20. Evaluation of Monte Carlo-based calibrations of HPGe detectors for in situ gamma-ray spectrometry.

    Science.gov (United States)

    Boson, Jonas; Plamboeck, Agneta H; Ramebäck, Henrik; Agren, Göran; Johansson, Lennart

    2009-11-01

    The aim of this work was to evaluate the use of Monte Carlo-based calibrations for in situ gamma-ray spectrometry. We have performed in situ measurements at five different sites in Sweden using HPGe detectors to determine ground deposition activity levels of (137)Cs from the 1986 Chernobyl accident. Monte Carlo-calculated efficiency calibration factors were compared with corresponding values calculated using a more traditional semi-empirical method. In addition, results for the activity ground deposition were also compared with activity densities found in soil samples. In order to facilitate meaningful comparisons between the different types of results, the combined standard uncertainty of in situ measurements was assessed for both calibration methods. Good agreement, both between the two calibration methods, and between in situ measurements and soil samples, was found at all five sites. Uncertainties in in situ measurements for the given measurement conditions, about 20 years after the fallout occurred, were found to be in the range 15-20% (with a coverage factor k=1, i.e. with a confidence interval of about 68%). PMID:19604609

  1. Confidence and efficiency scaling in Variational Quantum Monte Carlo calculations

    CERN Document Server

    Delyon, François; Holzmann, Markus

    2016-01-01

    Based on the central limit theorem, we discuss the problem of evaluation of the statistical error of Monte Carlo calculations using a time discretized diffusion process. We present a robust and practical method to determine the effective variance of general observables and show how to verify the equilibrium hypothesis by the Kolmogorov-Smirnov test. We then derive scaling laws of the efficiency illustrated by Variational Monte Carlo calculations on the two dimensional electron gas.
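
    A minimal illustration of this kind of equilibrium check, applying the Kolmogorov-Smirnov test to block averages of a Monte Carlo trace; the data and block length are made up, and estimating the mean and variance from the data makes the test approximate.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        trace = rng.normal(0.0, 1.0, 100_000)  # stand-in for an observable's MC history

        # If the chain is equilibrated and blocks exceed the correlation time,
        # block means should be approximately Normal.
        block = 1_000
        means = trace.reshape(-1, block).mean(axis=1)
        z = (means - means.mean()) / means.std(ddof=1)
        stat, pvalue = stats.kstest(z, "norm")
        print(f"KS statistic = {stat:.3f}, p-value = {pvalue:.3f}")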

  2. The peak efficiency calibration of volume source using 152Eu point source in computer

    International Nuclear Information System (INIS)

    The author describes a method for the peak efficiency calibration of volume sources by means of a 152Eu point source for an HPGe γ spectrometer. The peak efficiency can be computed by Monte Carlo simulation after inputting the detector parameters. The computed results agree with the experimental results within an error of ±3.8%, with one exception of about ±7.4%.

  3. Strategies for improving the efficiency of quantum Monte Carlo calculations

    CERN Document Server

    Lee, R M; Nemec, N; Rios, P Lopez; Drummond, N D

    2010-01-01

    We describe a number of strategies for optimizing the efficiency of quantum Monte Carlo (QMC) calculations. We investigate the dependence of the efficiency of the variational Monte Carlo method on the sampling algorithm. Within a unified framework, we compare several commonly used variants of diffusion Monte Carlo (DMC). We then investigate the behavior of DMC calculations on parallel computers and the details of parallel implementations, before proposing a technique to optimize the efficiency of the extrapolation of DMC results to zero time step, finding that a relative time step ratio of 1:4 is optimal. Finally, we discuss the removal of serial correlation from data sets by reblocking, setting out criteria for the choice of block length and quantifying the effects of the uncertainty in the estimated correlation length and the presence of divergences in the local energy on estimated error bars on QMC energies.
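
    A sketch of the zero-time-step extrapolation mentioned above, assuming (as is standard in DMC) that the time-step bias is approximately linear near zero; the two runs use the 1:4 time-step ratio found optimal in the paper, and the energies are invented.

        import numpy as np

        # DMC energies at two time steps with ratio 1:4 (hypothetical values, hartree)
        tau = np.array([0.0025, 0.01])
        energy = np.array([-14.6572, -14.6544])
        sigma = np.array([0.0004, 0.0002])  # statistical error bars

        # Weighted linear fit E(tau) = a*tau + E0, extrapolated to tau = 0
        coeffs, cov = np.polyfit(tau, energy, 1, w=1.0 / sigma, cov="unscaled")
        e0, u_e0 = coeffs[1], np.sqrt(cov[1, 1])
        print(f"E(tau -> 0) = {e0:.4f} +/- {u_e0:.4f} Ha")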

  4. Optimum and efficient sampling for variational quantum Monte Carlo

    CERN Document Server

    Trail, John Robert; 10.1063/1.3488651

    2010-01-01

    Quantum mechanics for many-body systems may be reduced to the evaluation of integrals in 3N dimensions using Monte-Carlo, providing the Quantum Monte Carlo ab initio methods. Here we limit ourselves to expectation values for trial wavefunctions, that is to Variational quantum Monte Carlo. Almost all previous implementations employ samples distributed as the physical probability density of the trial wavefunction, and assume the Central Limit Theorem to be valid. In this paper we provide an analysis of random error in estimation and optimisation that leads naturally to new sampling strategies with improved computational and statistical properties. A rigorous lower limit to the random error is derived, and an efficient sampling strategy presented that significantly increases computational efficiency. In addition the infinite variance heavy tailed random errors of optimum parameters in conventional methods are replaced with a Normal random error, strengthening the theoretical basis of optimisation. The method is ...

  5. Efficiency of Monte Carlo sampling in chaotic systems.

    Science.gov (United States)

    Leitão, Jorge C; Lopes, J M Viana Parente; Altmann, Eduardo G

    2014-11-01

    In this paper we investigate how the complexity of chaotic phase spaces affects the efficiency of importance sampling Monte Carlo simulations. We focus on flat-histogram simulations of the distribution of the finite-time Lyapunov exponent in a simple chaotic system and obtain analytically that the computational effort: (i) scales polynomially with the finite time, a tremendous improvement over the exponential scaling obtained in uniform sampling simulations; and (ii) shows suboptimal polynomial scaling, a phenomenon known as critical slowing down. We show that critical slowing down appears because of the limited possibilities to issue a local proposal in the Monte Carlo procedure when it is applied to chaotic systems. These results show how generic properties of chaotic systems limit the efficiency of Monte Carlo simulations.

  6. Application of the Monte Carlo method to the analysis of measurement geometries for the calibration of a HP Ge detector in an environmental radioactivity laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Rodenas, Jose [Departamento de Ingenieria Quimica y Nuclear, Universidad Politecnica de Valencia, Apartado 22012, E-46071 Valencia (Spain)], E-mail: jrodenas@iqn.upv.es; Gallardo, Sergio; Ballester, Silvia; Primault, Virginie [Departamento de Ingenieria Quimica y Nuclear, Universidad Politecnica de Valencia, Apartado 22012, E-46071 Valencia (Spain); Ortiz, Josefina [Laboratorio de Radiactividad Ambiental, Universidad Politecnica de Valencia, Apartado 22012, E-46071 Valencia (Spain)

    2007-10-15

    A gamma spectrometer including an HP Ge detector is commonly used for environmental radioactivity measurements. The efficiency of the detector should be calibrated for each geometry considered. Simulation of the calibration procedure with a validated computer program is an important auxiliary tool for environmental radioactivity laboratories. The MCNP code based on the Monte Carlo method has been applied to simulate the detection process in order to obtain spectrum peaks and determine the efficiency curve for each modelled geometry. The source used for measurements was a calibration mixed radionuclide gamma reference solution, covering a wide energy range (50-2000 keV). Two measurement geometries - Marinelli beaker and Petri boxes - as well as different materials - water, charcoal, sand - containing the source have been considered. Results obtained from the Monte Carlo model have been compared with experimental measurements in the laboratory in order to validate the model.

  7. Calibration coefficient of reference brachytherapy ionization chamber using analytical and Monte Carlo methods.

    Science.gov (United States)

    Kumar, Sudhir; Srinivasan, P; Sharma, S D

    2010-06-01

    A cylindrical graphite ionization chamber of sensitive volume 1002.4 cm(3) was designed and fabricated at Bhabha Atomic Research Centre (BARC) for use as a reference dosimeter to measure the strength of high dose rate (HDR) (192)Ir brachytherapy sources. The air kerma calibration coefficient (N(K)) of this ionization chamber was estimated analytically using Burlin general cavity theory and by the Monte Carlo method. In the analytical method, calibration coefficients were calculated for each spectral line of an HDR (192)Ir source and the weighted mean was taken as N(K). In the Monte Carlo method, the geometry of the measurement setup and physics related input data of the HDR (192)Ir source and the surrounding material were simulated using the Monte Carlo N-particle code. The total photon energy fluence was used to arrive at the reference air kerma rate (RAKR) using mass energy absorption coefficients. The energy deposition rates were used to simulate the value of charge rate in the ionization chamber and N(K) was determined. The Monte Carlo calculated N(K) agreed within 1.77 % of that obtained using the analytical method. The experimentally determined RAKR of HDR (192)Ir sources, using this reference ionization chamber by applying the analytically estimated N(K), was found to be in agreement with the vendor quoted RAKR within 1.43%.
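
    For context, the quantity estimated by both approaches is, by definition (a standard metrological relation, not specific to this paper),

        $N_K = \dot{K}_{\rm air} / \dot{M}$

    where $\dot{K}_{\rm air}$ is the air kerma rate at the point of measurement and $\dot{M}$ is the corrected ionization current of the chamber, so that once $N_K$ is known, the reference air kerma rate of a source follows directly from the chamber reading.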

  8. Whole body counter calibration using Monte Carlo modeling with an array of phantom sizes based on national anthropometric reference data

    International Nuclear Information System (INIS)

    During construction of the whole body counter (WBC) at the Children's Nutrition Research Center (CNRC), efficiency calibration was needed to translate acquired counts of 40K to actual grams of potassium for measurement of total body potassium (TBK) in a diverse subject population. The MCNP Monte Carlo n-particle simulation program was used to describe the WBC (54 detectors plus shielding), test individual detector counting response, and create a series of virtual anthropomorphic phantoms based on national reference anthropometric data. Each phantom included an outer layer of adipose tissue and an inner core of lean tissue. Phantoms were designed for both genders representing ages 3.5 to 18.5 years with body sizes from the 5th to the 95th percentile based on body weight. In addition, a spherical surface source surrounding the WBC was modeled in order to measure the effects of subject mass on room background interference. Individual detector measurements showed good agreement with the MCNP model. The background source model came close to agreement with empirical measurements, but showed a trend deviating from unity with increasing subject size. Results from the MCNP simulation of the CNRC WBC agreed well with empirical measurements using BOMAB phantoms. Individual detector efficiency corrections were used to improve the accuracy of the model. Nonlinear multiple regression efficiency calibration equations were derived for each gender. Room background correction is critical in improving the accuracy of the WBC calibration.
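
    A sketch of what a nonlinear regression efficiency calibration of this kind might look like, fitting counting efficiency against body weight for one gender; the model form, the 40K gamma yield handling and all numbers are invented, not the CNRC equations.

        import numpy as np
        from scipy.optimize import curve_fit

        # Simulated counting efficiencies for phantoms of increasing body weight (kg)
        weight = np.array([15.0, 25.0, 40.0, 55.0, 70.0, 90.0])
        efficiency = np.array([0.0285, 0.0271, 0.0252, 0.0238, 0.0226, 0.0213])

        def model(w, e0, k, c):
            # Assumed form: efficiency decays with weight toward a floor
            return e0 * np.exp(-k * w) + c

        popt, _ = curve_fit(model, weight, efficiency, p0=[0.02, 0.02, 0.015])

        def grams_potassium(net_counts_40k, subject_weight_kg, live_time_s):
            """Convert net 40K counts to grams of potassium; natural K emits
            roughly 3.3 gammas/s per gram at 1.46 MeV (approximate figure)."""
            eff = model(subject_weight_kg, *popt)
            return net_counts_40k / (eff * 3.3 * live_time_s)

        print(grams_potassium(net_counts_40k=1.0e4, subject_weight_kg=60.0, live_time_s=900.0))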

  9. Whole body counter calibration using Monte Carlo modeling with an array of phantom sizes based on national anthropometric reference data.

    Science.gov (United States)

    Shypailo, R J; Ellis, K J

    2011-05-21

    During construction of the whole body counter (WBC) at the Children's Nutrition Research Center (CNRC), efficiency calibration was needed to translate acquired counts of (40)K to actual grams of potassium for measurement of total body potassium (TBK) in a diverse subject population. The MCNP Monte Carlo n-particle simulation program was used to describe the WBC (54 detectors plus shielding), test individual detector counting response, and create a series of virtual anthropomorphic phantoms based on national reference anthropometric data. Each phantom included an outer layer of adipose tissue and an inner core of lean tissue. Phantoms were designed for both genders representing ages 3.5 to 18.5 years with body sizes from the 5th to the 95th percentile based on body weight. In addition, a spherical surface source surrounding the WBC was modeled in order to measure the effects of subject mass on room background interference. Individual detector measurements showed good agreement with the MCNP model. The background source model came close to agreement with empirical measurements, but showed a trend deviating from unity with increasing subject size. Results from the MCNP simulation of the CNRC WBC agreed well with empirical measurements using BOMAB phantoms. Individual detector efficiency corrections were used to improve the accuracy of the model. Nonlinear multiple regression efficiency calibration equations were derived for each gender. Room background correction is critical in improving the accuracy of the WBC calibration.

  10. Whole body counter calibration using Monte Carlo modeling with an array of phantom sizes based on national anthropometric reference data

    Science.gov (United States)

    Shypailo, R. J.; Ellis, K. J.

    2011-05-01

    During construction of the whole body counter (WBC) at the Children's Nutrition Research Center (CNRC), efficiency calibration was needed to translate acquired counts of 40K to actual grams of potassium for measurement of total body potassium (TBK) in a diverse subject population. The MCNP Monte Carlo n-particle simulation program was used to describe the WBC (54 detectors plus shielding), test individual detector counting response, and create a series of virtual anthropomorphic phantoms based on national reference anthropometric data. Each phantom included an outer layer of adipose tissue and an inner core of lean tissue. Phantoms were designed for both genders representing ages 3.5 to 18.5 years with body sizes from the 5th to the 95th percentile based on body weight. In addition, a spherical surface source surrounding the WBC was modeled in order to measure the effects of subject mass on room background interference. Individual detector measurements showed good agreement with the MCNP model. The background source model came close to agreement with empirical measurements, but showed a trend deviating from unity with increasing subject size. Results from the MCNP simulation of the CNRC WBC agreed well with empirical measurements using BOMAB phantoms. Individual detector efficiency corrections were used to improve the accuracy of the model. Nonlinear multiple regression efficiency calibration equations were derived for each gender. Room background correction is critical in improving the accuracy of the WBC calibration.

  11. HPGe Detector Efficiency Calibration Using HEU Standards

    Energy Technology Data Exchange (ETDEWEB)

    Salaymeh, S.R.

    2000-10-12

    The Analytical Development Section of SRTC was requested by the Facilities Disposition Division (FDD) to determine the holdup of enriched uranium in the 321-M facility as part of an overall deactivation project of the facility. The 321-M facility was used to fabricate enriched uranium fuel assemblies, lithium-aluminum target tubes, neptunium assemblies, and miscellaneous components for the production reactors. The facility also includes the 324-M storage building and the passageway connecting it to 321-M. The results of the holdup assays are essential for determining compliance with the Solid Waste's Waste Acceptance Criteria, Material Control and Accountability, and to meet criticality safety controls. Two measurement systems will be used to determine highly enriched uranium (HEU) holdup: one is a portable HPGe detector and EG&G Dart system that contains the high voltage power supply and signal processing electronics. A personal computer with Gamma-Vision software was used to provide an MCA card, and space to store and manipulate multiple 4096-channel γ-ray spectra. The other is a 2 in. x 2 in. NaI crystal with an MCA that uses a portable computer with a Canberra NaI plus card installed. This card converts the PC to a full function MCA and contains the ancillary electronics, high voltage power supply and amplifier, required for data acquisition. This report describes and documents the HPGe point, line, area, and constant geometry-constant transmission detector efficiency calibrations acquired and calculated for use in conducting holdup measurements as part of the overall deactivation project of building 321-M.

  12. Monte Carlo Studies for the Calibration System of the GERDA Experiment

    CERN Document Server

    Baudis, Laura; Froborg, Francis; Tarka, Michal

    2013-01-01

    The GERmanium Detector Array, GERDA, searches for neutrinoless double beta decay in Ge-76 using bare high-purity germanium detectors submerged in liquid argon. For the calibration of these detectors, gamma emitting sources have to be lowered from their parking position on top of the cryostat over more than five meters down to the germanium crystals. With the help of Monte Carlo simulations, the relevant parameters of the calibration system were determined. It was found that three Th-228 sources with an activity of 20 kBq each at two different vertical positions will be necessary to reach sufficient statistics in all detectors in less than four hours of calibration time. These sources will contribute to the background of the experiment with a total of (1.07 ± 0.04(stat) +0.13/−0.19(sys)) × 10^-4 cts/(keV kg yr) when shielded from below with 6 cm of tantalum in the parking position.

  13. Efficient, Automated Monte Carlo Methods for Radiation Transport.

    Science.gov (United States)

    Kong, Rong; Ambrose, Martin; Spanier, Jerome

    2008-11-20

    Monte Carlo simulations provide an indispensable model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k + 1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. In case still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed. PMID:23226872

  14. Model calibration for building energy efficiency simulation

    International Nuclear Information System (INIS)

    Highlights: • Developing a 3D model relating to building architecture, occupancy and HVAC operation. • Two calibration stages developed, with the final model providing accurate results. • Using an onsite weather station for generating the weather data file in EnergyPlus. • Predicting thermal behaviour of underfloor heating, heat pump and natural ventilation. • Monthly energy saving opportunities of 20–27% related to the heat pump were identified. - Abstract: This research work deals with an Environmental Research Institute (ERI) building where an underfloor heating system and natural ventilation are the main systems used to maintain comfort conditions throughout 80% of the building areas. Firstly, this work involved developing a 3D model relating to building architecture, occupancy and HVAC operation. Secondly, the calibration methodology, which consists of two levels, was then applied in order to ensure accuracy and reduce the likelihood of errors. To further improve the accuracy of the calibration, a historical weather data file for the year 2011 was created from the on-site local weather station of the ERI building. After applying the second level of the calibration process, the values of the Mean Bias Error (MBE) and the Coefficient of Variation of the Root Mean Squared Error (CV(RMSE)) in an hourly analysis of heat pump electricity consumption varied within the following ranges: (MBE)hourly from −5.6% to 7.5% and CV(RMSE)hourly from 7.3% to 25.1%. Finally, the building was simulated with EnergyPlus to identify further possibilities of energy savings supplied by a water-to-water heat pump to the underfloor heating system. It was found that electricity consumption savings from the heat pump can vary between 20% and 27% on a monthly basis
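
    For reference, the two calibration metrics used in this record are commonly defined (e.g. in ASHRAE Guideline 14; the abstract itself does not spell out the formulas) as

        $\mathrm{MBE} = \frac{\sum_{i=1}^{n} (m_i - s_i)}{\sum_{i=1}^{n} m_i} \times 100$    and    $\mathrm{CV(RMSE)} = \frac{100}{\bar{m}} \sqrt{\frac{1}{n} \sum_{i=1}^{n} (m_i - s_i)^2}$

    where $m_i$ are the measured values, $s_i$ the simulated values and $\bar{m}$ the mean of the measurements.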

  15. Calibration of the identiFINDER detector for the iodine measurement in thyroid using the Monte Carlo method; Calibracion del detector identiFINDER para la medicion de yodo en tiroides utilizando el metodo Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Ramos M, D.; Yera S, Y.; Lopez B, G. M.; Acosta R, N.; Vergara G, A., E-mail: dayana@cphr.edu.cu [Centro de Proteccion e Higiene de las Radiaciones, Calle 20 No. 4113 e/ 41 y 47, Playa, 10600 La Habana (Cuba)

    2014-08-15

    This work determines the detection efficiency of the identiFINDER detector for 125I and 131I in the thyroid using the Monte Carlo method. The suitability of the calibration method was analyzed by comparing the results of the direct Monte Carlo method with those of the corrected method; the latter was chosen because its differences from the real efficiency stayed below 10%. To simulate the detector, its geometric parameters were optimized using a tomographic study, which allowed the uncertainties of the estimates to be minimized. Detector geometry-point source simulations were then performed to find the correction factors at 5 cm, 15 cm and 25 cm, together with those corresponding to the detector-phantom arrangement used for validating the method and for the final calculation of the efficiency. These simulations showed that, in a Monte Carlo implementation, simulating at a greater distance than that used in the laboratory measurements overestimates the efficiency, while simulating at a shorter distance underestimates it; the simulation should therefore be performed at the same distance at which the actual measurement will be made. The efficiency curves and the minimum detectable activity for the measurement of 131I and 125I were also obtained. Overall, the Monte Carlo methodology was implemented for the identiFINDER calibration with the purpose of estimating the measured activity of iodine in the thyroid. This method is an ideal way to compensate for the lack of standard solutions and phantoms, ensuring that the capabilities of the Internal Contamination Laboratory of the Centro de Proteccion e Higiene de las Radiaciones remain calibrated for the measurement of iodine in the thyroid. (author)

  16. Efficiency calibration of solid track spark auto counter

    International Nuclear Information System (INIS)

    The factors influencing the detection efficiency of the solid track spark auto counter were analyzed, and the optimum etching conditions and charging parameters were reconfirmed. With a small plate fission ionization chamber, the efficiency of the solid track spark auto counter was re-calibrated for various experimental assemblies. The efficiency of the solid track spark auto counter under various experimental conditions was obtained. (authors)

  17. The determination of the efficiency of a Compton suppressed HPGe detector using Monte Carlo simulations.

    Science.gov (United States)

    McNamara, A L; Heijnis, H; Fierro, D; Reinhard, M I

    2012-04-01

    A Compton suppressed high-purity germanium (HPGe) detector is well suited to the analysis of low levels of radioactivity in environmental samples. Differences in geometry, density and composition of environmental calibration standards (e.g. soil) can contribute excessive experimental uncertainty to the measured efficiency curve. Furthermore, multiple detectors, like those used in a Compton suppressed system, can add complexity to the calibration process. Monte Carlo simulations can be a powerful complement in calibrating these types of detector systems, provided enough physical information on the system is known. A full detector model using the Geant4 simulation toolkit is presented, and the system is modelled in both the suppressed and unsuppressed modes of operation. The full energy peak efficiencies of radionuclides from a standard source sample are calculated and compared to experimental measurements. The experimental results agree relatively well with the simulated values (within ∼5-20%). The simulations show that coincidence losses in the Compton suppression system can cause radionuclide-specific effects on the detector efficiency, especially in the Compton suppressed mode of the detector. Additionally, since low energy photons are more sensitive to small inaccuracies in the computational detector model than high energy photons, large discrepancies may occur at energies below ∼100 keV. PMID:22304994

  18. O5S, Calibration of Organic Scintillation Detector by Monte-Carlo

    International Nuclear Information System (INIS)

    1 - Nature of physical problem solved: O5S is designed to directly simulate the experimental techniques used to obtain the pulse height distribution for a parallel beam of mono-energetic neutrons incident on organic scintillator systems. Developed to accurately calibrate the nominally 2 in. by 2 in. liquid organic scintillator NE-213 (composition CH-1.2), the programme should be readily adaptable to many similar problems. 2 - Method of solution: O5S is a Monte Carlo programme patterned after the general-purpose Monte Carlo neutron transport programme system, O5R. The O5S Monte Carlo experiment follows the course of each neutron through the scintillator and obtains the energy deposits of the ions produced by elastic scatterings and reactions. The light pulse produced by the neutron is obtained by summing up the contributions of the various ions with the use of appropriate light vs. ion-energy tables. Because of the specialized geometry and simpler cross-section needs, O5S is able to by-pass many features included in O5R. For instance, neutrons may be followed individually, their histories analyzed as they occur, and, upon completion of the experiment, the results analyzed to obtain the pulse-height distribution during one pass on the computer. O5S does allow the absorption of neutrons, but does not allow splitting or Russian roulette (biased weighting schemes). SMOOTHIE is designed to smooth O5S histogram data using Gaussian functions with parameters specified by the user

  19. Calibration of AGILE-GRID with in-flight data and Monte Carlo simulations

    Science.gov (United States)

    Chen, A. W.; Argan, A.; Bulgarelli, A.; Cattaneo, P. W.; Contessi, T.; Giuliani, A.; Pittori, C.; Pucella, G.; Tavani, M.; Trois, A.; Verrecchia, F.; Barbiellini, G.; Caraveo, P.; Colafrancesco, S.; Costa, E.; De Paris, G.; Del Monte, E.; Di Cocco, G.; Donnarumma, I.; Evangelista, Y.; Ferrari, A.; Feroci, M.; Fioretti, V.; Fiorini, M.; Fuschino, F.; Galli, M.; Gianotti, F.; Giommi, P.; Giusti, M.; Labanti, C.; Lapshov, I.; Lazzarotto, F.; Lipari, P.; Longo, F.; Lucarelli, F.; Marisaldi, M.; Mereghetti, S.; Morelli, E.; Moretti, E.; Morselli, A.; Pacciani, L.; Pellizzoni, A.; Perotti, F.; Piano, G.; Picozza, P.; Pilia, M.; Prest, M.; Rapisarda, M.; Rappoldi, A.; Rubini, A.; Sabatini, S.; Santolamazza, P.; Soffitta, P.; Striani, E.; Trifoglio, M.; Valentini, G.; Vallazza, E.; Vercellone, S.; Vittorini, V.; Zanello, D.

    2013-10-01

    Context. AGILE is a γ-ray astrophysics mission which has been in orbit since 23 April 2007 and continues to operate reliably. The γ-ray detector, AGILE-GRID, has observed Galactic and extragalactic sources, many of which were collected in the first AGILE Catalog. Aims: We present the calibration of the AGILE-GRID using in-flight data and Monte Carlo simulations, producing instrument response functions (IRFs) for the effective area (Aeff), energy dispersion probability (EDP), and point spread function (PSF), each as a function of incident direction in instrument coordinates and energy. Methods: We performed Monte Carlo simulations at different γ-ray energies and incident angles, including background rejection filters and Kalman filter-based γ-ray reconstruction. Long integrations of in-flight observations of the Vela, Crab and Geminga sources in broad and narrow energy bands were used to validate and improve the accuracy of the instrument response functions. Results: The weighted average PSFs as a function of spectra correspond well to the data for all sources and energy bands. Conclusions: Changes in the interpolation of the PSF from Monte Carlo data and in the procedure for construction of the energy-weighted effective areas have improved the correspondence between predicted and observed fluxes and spectra of celestial calibration sources, reducing false positives and obviating the need for post-hoc energy-dependent scaling factors. The new IRFs have been publicly available from the AGILE Science Data Center since November 25, 2011, while the changes in the analysis software will be distributed in an upcoming release.

  20. Analysis of the effect of true coincidence summing on efficiency calibration for an HP GE detector

    Energy Technology Data Exchange (ETDEWEB)

    Rodenas, J.; Gallardo, S.; Ballester, S.; Primault, V. [Valencia Univ. Politecnica, Dept. de Ingenieria Quimica y Nuclear (Spain); Ortiz, J. [Valencia Univ. Politecnica, Lab. de Radiactividad Ambiental (Spain)

    2006-07-01

    The HPGe (high-purity germanium) detector is commonly used for gamma spectrometry in environmental radioactivity laboratories. The efficiency of the detector must be calibrated for each geometry considered. This calibration is performed using a standard solution containing gamma emitter sources. The usual goal is to obtain an efficiency curve to be used in the determination of the activity of samples with the same geometry. The importance of the detector calibration is evident. However, the procedure presents some problems, as it depends on the source geometry (shape, volume, distance to detector, etc.) and must be repeated when these factors change. That means an increasing use of standard solutions and consequently an increasing generation of radioactive wastes. Simulation of the calibration procedure with a validated computer program is clearly an important auxiliary tool for environmental radioactivity laboratories. This simulation is useful both for optimising calibration procedures and for reducing the amount of radioactive waste produced. The MCNP code, based on the Monte Carlo method, has been used in this work for the simulation of detector calibration. A model has been developed for the detector as well as for the source contained in a Petri box. The source is a standard solution that contains the following radionuclides: 241Am, 109Cd, 57Co, 139Ce, 203Hg, 113Sn, 85Sr, 137Cs, 88Y and 60Co; covering a wide energy range (50 to 2000 keV). However, two radionuclides in the solution (60Co and 88Y) emit gamma rays in true coincidence. The effect of true coincidence summing produces a distortion of the calibration curve at higher energies. To decrease this effect, some measurements have been performed at increasing distances between the source and the detector. As the true coincidence effect is observed in experimental measurements but not in the Monte Carlo

  1. An Efficient Approach to Ab Initio Monte Carlo Simulation

    CERN Document Server

    Leiding, Jeff

    2013-01-01

    We present a Nested Markov Chain Monte Carlo (NMC) scheme for building equilibrium averages based on accurate potentials such as density functional theory. Metropolis sampling of a reference system, defined by an inexpensive but approximate potential, is used to substantially decorrelate configurations at which the potential of interest is evaluated, thereby dramatically reducing the number needed to build ensemble averages at a given level of precision. The efficiency of this procedure is maximized on-the-fly through variation of the reference system thermodynamic state (characterized here by its inverse temperature $\beta^0$), which is otherwise unconstrained. Local density approximation (LDA) results are presented for shocked states in argon at pressures from 4 to 60 GPa. Depending on the quality of the reference potential, the acceptance probability is enhanced by factors of 1.2-28 relative to unoptimized NMC sampling, and the procedure's efficiency is found to be competitive with that of standard ab initio...

  2. Monte Carlo studies and optimization for the calibration system of the GERDA experiment

    Science.gov (United States)

    Baudis, L.; Ferella, A. D.; Froborg, F.; Tarka, M.

    2013-11-01

    The GERmanium Detector Array, GERDA, searches for neutrinoless double β decay in 76Ge using bare high-purity germanium detectors submerged in liquid argon. For the calibration of these detectors, γ emitting sources have to be lowered from their parking position on the top of the cryostat over more than 5 m down to the germanium crystals. With the help of Monte Carlo simulations, the relevant parameters of the calibration system were determined. It was found that three 228Th sources with an activity of 20 kBq each at two different vertical positions will be necessary to reach sufficient statistics in all detectors in less than 4 h of calibration time. These sources will contribute to the background of the experiment with a total of (1.07 ± 0.04(stat) +0.13/−0.19(sys)) × 10^-4 cts/(keV kg yr) when shielded from below with 6 cm of tantalum in the parking position.

  3. Monte Carlo studies and optimization for the calibration system of the GERDA experiment

    Energy Technology Data Exchange (ETDEWEB)

    Baudis, L. [Physics Institute, University of Zurich, Winterthurerstrasse 190, 8057 Zürich (Switzerland); Ferella, A.D. [Physics Institute, University of Zurich, Winterthurerstrasse 190, 8057 Zürich (Switzerland); INFN Laboratori Nazionali del Gran Sasso, 67010 Assergi (Italy); Froborg, F., E-mail: francis@froborg.de [Physics Institute, University of Zurich, Winterthurerstrasse 190, 8057 Zürich (Switzerland); Tarka, M. [Physics Institute, University of Zurich, Winterthurerstrasse 190, 8057 Zürich (Switzerland); Physics Department, University of Illinois, 1110 West Green Street, Urbana, IL 61801 (United States)

    2013-11-21

    The GERmanium Detector Array, GERDA, searches for neutrinoless double β decay in 76Ge using bare high-purity germanium detectors submerged in liquid argon. For the calibration of these detectors, γ emitting sources have to be lowered from their parking position on the top of the cryostat over more than 5 m down to the germanium crystals. With the help of Monte Carlo simulations, the relevant parameters of the calibration system were determined. It was found that three 228Th sources with an activity of 20 kBq each at two different vertical positions will be necessary to reach sufficient statistics in all detectors in less than 4 h of calibration time. These sources will contribute to the background of the experiment with a total of (1.07 ± 0.04(stat) +0.13/−0.19(sys)) × 10^-4 cts/(keV kg yr) when shielded from below with 6 cm of tantalum in the parking position.

  4. Coincidence corrected efficiency calibration of Compton-suppressed HPGe detectors

    Energy Technology Data Exchange (ETDEWEB)

    Aucott, T.

    2015-04-20

    The authors present a reliable method to calibrate the full-energy efficiency and the coincidence correction factors using a commonly available mixed-source gamma standard. This is accomplished by measuring the peak areas from both summing and non-summing decay schemes and simultaneously fitting both the full-energy efficiency and the total efficiency as functions of energy. By using known decay schemes, these functions can then be used to provide correction factors for other nuclides not included in the calibration standard.
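    A common way to realize such energy-dependent efficiency functions is a polynomial fit in log-log space. The sketch below, using hypothetical data points, shows only the full-energy part; the simultaneous fit with total efficiency and coincidence corrections described above would add further terms.

        import numpy as np

        # Hypothetical peak efficiencies from non-summing lines (energy in keV).
        energies = np.array([59.5, 88.0, 122.1, 661.7, 898.0, 1836.1])
        eff = np.array([0.050, 0.062, 0.058, 0.018, 0.014, 0.0075])

        # Fit ln(efficiency) as a polynomial in ln(E), a common HPGe parameterization.
        coeffs = np.polyfit(np.log(energies), np.log(eff), deg=3)

        def full_energy_eff(e_kev):
            return np.exp(np.polyval(coeffs, np.log(e_kev)))

        print(full_energy_eff(1332.5))  # interpolated efficiency at a Co-60 line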

  5. A Monte Carlo study of lung counting efficiency for female workers of different breast sizes using deformable phantoms

    Science.gov (United States)

    Hegenbart, L.; Na, Y. H.; Zhang, J. Y.; Urban, M.; Xu, X. George

    2008-10-01

    There are currently no physical phantoms available for calibrating in vivo counting devices that represent women with different breast sizes, because such phantoms are difficult, time-consuming and expensive to fabricate. In this work, a feasible alternative involving computational phantoms was explored. A series of new female voxel phantoms with different breast sizes were developed and ported into a Monte Carlo radiation transport code for performing virtual lung counting efficiency calibrations. The phantoms are based on the RPI adult female phantom, a boundary representation (BREP) model. They were created with novel deformation techniques and then voxelized for the Monte Carlo simulations. Eight models were selected with cup sizes ranging from AA to G according to brassiere industry standards. Monte Carlo simulations of a lung counting system were performed with these phantoms to study the effect of breast size on lung counting efficiencies, which are needed to determine the activity of a radionuclide deposited in the lung and hence to estimate the resulting dose to the worker. Contamination scenarios involving three different radionuclides, namely Am-241, Cs-137 and Co-60, were considered. The results show that detector efficiencies decrease considerably with increasing breast size, especially for low-energy photon-emitting radionuclides. When the counting efficiencies of models with cup size AA were compared to those with cup size G, a difference of up to 50% was observed. The detector efficiencies for each radionuclide can be approximated by curve fitting in the total breast mass (second-order polynomial) or the cup size (power law).
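    The curve fits mentioned in the last sentence amount to a line of numpy; a sketch with hypothetical numbers for the second-order polynomial in total breast mass:

        import numpy as np

        # Hypothetical counting efficiencies (arbitrary units) vs total breast mass (kg).
        mass = np.array([0.2, 0.5, 1.0, 1.5, 2.0, 2.8])
        eff = np.array([1.00, 0.92, 0.80, 0.70, 0.62, 0.52])

        p = np.polyfit(mass, eff, deg=2)   # second-order polynomial in mass
        print(np.polyval(p, 1.2))          # efficiency estimate at 1.2 kg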

  6. Monte Carlo Based Calibration and Uncertainty Analysis of a Coupled Plant Growth and Hydrological Model

    Science.gov (United States)

    Houska, Tobias; Multsch, Sebastian; Kraft, Philipp; Frede, Hans-Georg; Breuer, Lutz

    2014-05-01

    Computer simulations are widely used to support decision making and planning in the agriculture sector. On the one hand, many plant growth models use simplified hydrological processes and structures, e.g. a small number of soil layers or simple water flow approaches. On the other hand, in many hydrological models plant growth processes are poorly represented. Hence, fully coupled models with a high degree of process representation would allow a more detailed analysis of the dynamic behaviour of the soil-plant interface. We used the Python programming language to couple two such highly process-oriented, independent models and to calibrate them simultaneously. The Catchment Modelling Framework (CMF) simulated soil hydrology based on the Richards equation and the Van Genuchten-Mualem retention curve. CMF was coupled with the Plant growth Modelling Framework (PMF), which predicts plant growth on the basis of radiation use efficiency, degree days, water shortage and dynamic root biomass allocation. The Monte Carlo based Generalised Likelihood Uncertainty Estimation (GLUE) method was applied to parameterize the coupled model and to investigate the associated uncertainty of the model predictions. Overall, 19 model parameters (4 for CMF and 15 for PMF) were analysed through 2 × 10⁶ model runs randomly drawn from a uniformly distributed parameter space. Three objective functions were used to evaluate the model performance, i.e. the coefficient of determination (R²), bias and the Nash-Sutcliffe model efficiency (NSE). The model was applied to three sites with different management in Muencheberg (Germany) for the simulation of winter wheat (Triticum aestivum L.) in a cross-validation experiment. Field observations for model evaluation included soil water content and the dry matter of roots, storage organs, stems and leaves. The best parameter sets resulted in an NSE of 0.57 for the simulation of soil moisture across all three sites. The shape ...
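    The core of a GLUE analysis of this kind is compact enough to sketch: draw parameter sets uniformly from their prior ranges, run the model, score each run with an objective function such as NSE, and retain the 'behavioral' runs. A minimal Python version, with the model, bounds and threshold left as placeholders:

        import numpy as np

        def nse(obs, sim):
            """Nash-Sutcliffe efficiency of a simulated vs observed series."""
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def glue(model, bounds, obs, n_runs=10000, threshold=0.5, seed=1):
            """Uniform Monte Carlo sampling of the parameter space; runs scoring
            above the behavioral threshold are retained with their likelihoods."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, dtype=float).T
            behavioral = []
            for _ in range(n_runs):
                theta = rng.uniform(lo, hi)     # one random parameter set
                score = nse(obs, model(theta))  # run the model, evaluate fit
                if score >= threshold:
                    behavioral.append((score, theta))
            return behavioral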

  7. New data concerning the efficiency calibration of a drum waste assay system

    International Nuclear Information System (INIS)

    The study is focused on the efficiency calibration of a gamma spectroscopy system for drum waste assay. The measurement of radioactive drum waste is usually difficult because of the drum's large volume, the varied distribution of the waste within the drum and its high self-attenuation. To solve these problems, a complex calibration of the system is required. For this purpose, a calibration drum provided with seven tubes, placed at different distances from its center, was used, the rest of the drum being filled with Portland cement. For the efficiency determination of a uniformly distributed source, a linear source of ¹⁵²Eu was used. The linear calibration source was introduced successively inside the seven tubes, the gamma spectra being recorded while the drum was translated and simultaneously rotated. Using the GENIE-PC software, the gamma spectra were analyzed and the detection efficiencies for shell sources were obtained. Using these efficiencies, the total response of the detector and the detection efficiency appropriate to a uniform volume source were calculated. For the efficiency determination of a non-homogeneous source, additional measurements in the following geometries were made: first, with a ¹⁵²Eu point source placed in front of the detector, measured in all seven tubes, the drum being only rotated; second, with the linear ¹⁵²Eu source placed in front of the detector, measured in all seven tubes, the drum again being only rotated. For each position the gamma spectrum was recorded and the detection efficiency was calculated. The obtained efficiency values were verified using the GESPECOR software, which has been developed for computing the efficiency of Ge detectors for a wide class of measurement configurations using the Monte Carlo method. (authors)

  8. Calibration of a gamma spectrometer for natural radioactivity measurement. Experimental measurements and Monte Carlo modelling

    International Nuclear Information System (INIS)

    This thesis was carried out in the context of thermoluminescence dating, a method that requires laboratory measurements of the natural radioactivity. For that purpose, we used a germanium spectrometer. To refine its calibration, we modelled it using a Monte Carlo computer code, Geant4. We developed a geometrical model which takes into account the presence of inactive zones and zones of poor charge collection within the germanium crystal. The parameters of the model were adjusted by comparison with experimental results obtained with a ¹³⁷Cs source. It appeared that the shape of the inactive zones is less simple than presented in the specialized literature. The model was then extended to a more complex source, with cascade effects and angular correlations between photons: ⁶⁰Co. Lastly, applied to extended sources, it gave correct results and allowed us to validate the simulation of matrix effects. (author)

  9. On the efficiency calibration of a drum waste assay system

    CERN Document Server

    Dinescu, L; Cazan, I L; Macrin, R; Caragheorgheopol, G; Rotarescu, G

    2002-01-01

    The efficiency calibration of a gamma spectroscopy waste assay system, constructed by IFIN-HH, was performed. The calibration technique was based on the assumption of a uniform distribution of the source activity in the drum and also a uniform sample matrix. A collimated detector (HPGe, 20% relative efficiency) placed at 30 cm from the drum was used. The detection limit for ¹³⁷Cs and ⁶⁰Co is approximately 45 Bq/kg for a sample of about 400 kg and a counting time of 10 min. A total measurement uncertainty of -70% to +40% was estimated.

  10. Calibration of AGILE-GRID with In-Flight Data and Monte Carlo Simulations

    CERN Document Server

    Chen, Andrew W; Bulgarelli, A; Cattaneo, P W; Contessi, T; Giuliani, A; Pittori, C; Pucella, G; Tavani, M; Trois, A; Verrecchia, F; Barbiellini, G; Caraveo, P; Colafrancesco, S; Costa, E; De Paris, G; Del Monte, E; Di Cocco, G; Donnarumma, I; Evangelista, Y; Ferrari, A; Feroci, M; Fioretti, V; Fiorini, M; Fuschino, F; Galli, M; Gianotti, F; Giommi, P; Giusti, M; Labanti, C; Lapshov, I; Lazzarotto, F; Lipari, P; Longo, F; Lucarelli, F; Marisaldi, M; Mereghetti, S; Morelli, E; Moretti, E; Morselli, A; Pacciani, L; Pellizzoni, A; Perotti, F; Piano, G; Picozza, P; Pilia, M; Prest, M; Rapisarda, M; Rappoldi, A; Rubini, A; Sabatini, S; Santolamazza, P; Soffitta, P; Striani, E; Trifoglio, M; Valentini, G; Vallazza, E; Vercellone, S; Vittorini, V; Zanello, D

    2013-01-01

    Context: AGILE is a gamma-ray astrophysics mission which has been in orbit since 23 April 2007 and continues to operate reliably. The gamma-ray detector, AGILE-GRID, has observed Galactic and extragalactic sources, many of which were collected in the first AGILE Catalog. Aims: We present the calibration of the AGILE-GRID using in-flight data and Monte Carlo simulations, producing Instrument Response Functions (IRFs) for the effective area (A_eff), Energy Dispersion Probability (EDP), and Point Spread Function (PSF), each as a function of incident direction in instrument coordinates and energy. Methods: We performed Monte Carlo simulations at different gamma-ray energies and incident angles, including background rejection filters and Kalman filter-based gamma-ray reconstruction. Long integrations of in-flight observations of the Vela, Crab and Geminga sources in broad and narrow energy bands were used to validate and improve the accuracy of the instrument response functions. Results: The weighted average PSFs a...

  11. Efficiencies of dynamic Monte Carlo algorithms for off-lattice particle systems with a single impurity

    KAUST Repository

    Novotny, M.A.

    2010-02-01

    The efficiency of dynamic Monte Carlo algorithms for off-lattice systems composed of particles is studied for the case of a single impurity particle. The theoretical efficiencies of the rejection-free method and of the Monte Carlo with Absorbing Markov Chains method are given. Simulation results are presented to confirm the theoretical efficiencies. © 2010.

  12. A Generic Algorithm for IACT Optical Efficiency Calibration using Muons

    CERN Document Server

    Mitchell, A M W; Parsons, R D

    2015-01-01

    Muons produced in Extensive Air Showers (EAS) generate ring-like images in Imaging Atmospheric Cherenkov Telescopes when travelling nearly parallel to the optical axis. From geometrical parameters of these images, the absolute amount of light emitted may be calculated analytically. Comparing the amount of light recorded in these images to expectation is a well-established technique for telescope optical efficiency calibration. However, this calculation is usually performed under the assumption of an approximately circular telescope mirror. The H.E.S.S. experiment entered its second phase in 2012, with the addition of a fifth telescope with a non-circular 600 m$^2$ mirror. Because this telescope's mirror shape differs from that of the original four H.E.S.S. telescopes, adaptations to the standard muon calibration were required. We present a generalised muon calibration procedure, adaptable to telescopes of differing shapes and sizes, and demonstrate its performance on the H.E.S.S. II array.

  13. Design and fabrication of an in situ gamma radioactivity measurement system for marine environment and its calibration with Monte Carlo method.

    Science.gov (United States)

    Abdollahnejad, Hamed; Vosoughi, Naser; Zare, Mohammad Reza

    2016-08-01

    Simulation, design and fabrication of a sealing enclosure were carried out for a 2″×2″ NaI(Tl) detector, to be used as an in situ gamma radioactivity measurement system in the marine environment. The effect of the sealing enclosure on the performance of the system in the laboratory and in a marine environment (a tank of 10 m³ volume) was studied using point sources. The marine volumetric efficiency for radiation of 1461 keV energy (from ⁴⁰K) was measured with a KCl volumetric liquid source diluted in the tank. The experimental and simulated efficiency values agreed well. A marine volumetric efficiency calibration curve was calculated for energies from 60 keV to 1461 keV with the Monte Carlo method. This curve indicates that the efficiency increases rapidly up to 140.5 keV and then drops exponentially. PMID:27213808

  14. Calibration of an in-situ BEGe detector using semi-empirical and Monte Carlo techniques.

    Science.gov (United States)

    Agrafiotis, K; Karfopoulos, K L; Anagnostakis, M J

    2011-08-01

    In the case of a nuclear or radiological accident, a rapid estimation of the qualitative and quantitative characteristics of the potential radioactive pollution is needed. For aerial releases, the radioactive pollutants are finally deposited on the ground, forming a surface source. In this case, in-situ γ-ray spectrometry is a powerful tool for the determination of ground pollution. In this work, the procedure followed at the Nuclear Engineering Department of the National Technical University of Athens (NED-NTUA) for the calibration of an in-situ Broad Energy Germanium (BEGe) detector, for the determination of gamma-emitting radionuclides deposited on the ground surface, is presented. BEGe detectors, due to their technical characteristics, are suitable for the analysis of photons in a wide energy region. Two different techniques were applied for the full-energy peak efficiency calibration of the BEGe detector in the energy region 60-1600 keV. Full-energy peak efficiencies determined using the two methods agree within statistical uncertainties. PMID:21193317

  15. Energy Self-calibration and low-energy efficiency calibration for an underwater in-situ LaBr3:Ce spectrometer

    CERN Document Server

    Zeng, Zhi; Ma, Hao; He, Jianhua; Cang, Jirong; Zeng, Ming; Cheng, Jianping

    2016-01-01

    An underwater in situ gamma-ray spectrometer based on LaBr₃:Ce was developed and optimized to monitor marine radioactivity. The intrinsic background, mainly from ¹³⁸La and ²²⁷Ac in the LaBr₃ crystal, was well determined by low-background measurement and a pulse-shape discrimination method. A self-calibration method using three internal contaminant peaks was proposed to eliminate peak shift during long-term monitoring. Through experiments at different temperatures, the method was shown to be helpful for maintaining long-term stability. To monitor marine radioactivity, the spectrometer's efficiency was calculated via a water-tank experiment as well as Monte Carlo simulation.

  16. Highly Efficient Monte-Carlo for Estimating the Unavailability of Markov Dynamic System

    Institute of Scientific and Technical Information of China (English)

    XIAO Gang; DENG Li; ZHANG Ben-Ai; ZHU Jian-Shi

    2004-01-01

    Monte Carlo simulation has become an important tool for estimating the reliability and availability of dynamic systems, since conventional numerical methods are no longer efficient when the size of the system to solve is large. However, evaluating by simulation the probability of occurrence of very rare events means playing a very large number of histories of the system, which leads to unacceptable computing times. Highly efficient Monte Carlo schemes must therefore be worked out. In this paper, based on the integral equation describing the state transitions of a Markov dynamic system, a uniform Monte Carlo method for estimating unavailability is presented. Using a free-flight estimator, direct statistical estimation Monte Carlo is achieved. Using both a free-flight estimator and a biased sampling probability space, weighted statistical estimation Monte Carlo is also achieved. Five Monte Carlo schemes, including crude simulation, analog simulation, statistical estimation based on crude and analog simulation, and weighted statistical estimation, are used for calculating the unavailability of a repairable Con/3/30:F system. Their efficiencies are compared with each other. The results show that the weighted statistical estimation Monte Carlo has the smallest variance and the highest efficiency in very rare event simulation.

  17. Euromet action 428: transfer of Ge detector efficiency calibration from point source geometry to other geometries

    International Nuclear Information System (INIS)

    The EUROMET project 428 examines efficiency transfer computation for Ge gamma-ray spectrometers when the efficiency is known for a reference point-source geometry, in the 60 keV to 2 MeV energy range. Different methods are used for this, such as Monte Carlo simulation or semi-empirical computation. The exercise compares the application of these methods to the same selected experimental cases to determine their usage limitations versus the requested accuracy. In order to examine the results carefully and derive information for improving the computation codes, the study was limited to a few simple cases, starting from an experimental efficiency calibration for a point source at 10 cm source-to-detector distance. The first part concerns the simplest case of geometry transfer, i.e., using point sources at three source-to-detector distances: 2, 5 and 20 cm; the second part deals with transfer from point-source geometry to cylindrical geometry with three different matrices. The general results show that the deviations between the computed results and the measured efficiencies are for the most part within 10%. The quality of the results is rather inhomogeneous and shows that these codes cannot be used directly for metrological purposes. However, most of them are operational for routine measurements where efficiency uncertainties of 5-10% are sufficient. (author)

  18. A Monte Carlo (MC) based individual calibration method for in vivo x-ray fluorescence analysis (XRF)

    Science.gov (United States)

    Hansson, Marie; Isaksson, Mats

    2007-04-01

    X-ray fluorescence analysis (XRF) is a non-invasive method that can be used for in vivo determination of thyroid iodine content. System calibrations with phantoms resembling the neck may give misleading results in the cases when the measurement situation largely differs from the calibration situation. In such cases, Monte Carlo (MC) simulations offer a possibility of improving the calibration by better accounting for individual features of the measured subjects. This study investigates the prospects of implementing MC simulations in a calibration procedure applicable to in vivo XRF measurements. Simulations were performed with Penelope 2005 to examine a procedure where a parameter, independent of the iodine concentration, was used to get an estimate of the expected detector signal if the thyroid had been measured outside the neck. An attempt to increase the simulation speed and reduce the variance by exclusion of electrons and by implementation of interaction forcing was conducted. Special attention was given to the geometry features: analysed volume, source-sample-detector distances, thyroid lobe size and position in the neck. Implementation of interaction forcing and exclusion of electrons had no obvious adverse effect on the quotients while the simulation time involved in an individual calibration was low enough to be clinically feasible.

  19. Russian roulette efficiency in Monte Carlo resonant absorption calculations

    Energy Technology Data Exchange (ETDEWEB)

    Ghassoun, J. E-mail: ghassoun@ucam.ac.ma; Jehouani, A

    2000-11-15

    The resonant absorption calculation in media containing heavy resonant nuclei is one of the most difficult problems treated in reactor physics. Deterministic techniques need many approximations to solve this kind of problem. On the other hand, the Monte Carlo method is a reliable mathematical tool for evaluating the neutron resonance escape probability, but it suffers from large statistical deviations of results and long computation times. In order to overcome this problem, we have used the splitting and Russian roulette technique coupled separately to survival biasing and to importance sampling for the energy parameter. These techniques have been used to calculate the neutron resonance absorption in infinite homogeneous media containing hydrogen and uranium, characterized by the dilution (ratio of the concentrations of hydrogen to uranium). The point neutron source energy is taken at Es = 2 MeV and Es = 676.45 eV, whereas the energy cut-off is fixed at Ec = 2.768 eV. The results show a large reduction of computation time and statistical deviation, without altering the mean resonance escape probability, compared to the usual analog simulation. Splitting and Russian roulette coupled to survival biasing is found to be the best method for studying neutron resonant absorption, particularly at high energies. A comparison between the Monte Carlo results and those of deterministic methods, based on the numerical solution of the neutron slowing-down equations by the iterative method, is made for several dilutions.

  20. Russian roulette efficiency in Monte Carlo resonant absorption calculations

    Science.gov (United States)

    Ghassoun; Jehouani

    2000-10-01

    The resonant absorption calculation in media containing heavy resonant nuclei is one of the most difficult problems treated in reactor physics. Deterministic techniques need many approximations to solve this kind of problem. On the other hand, the Monte Carlo method is a reliable mathematical tool for evaluating the neutron resonance escape probability, but it suffers from large statistical deviations of results and long computation times. In order to overcome this problem, we have used the splitting and Russian roulette technique coupled separately to survival biasing and to importance sampling for the energy parameter. These techniques have been used to calculate the neutron resonance absorption in infinite homogeneous media containing hydrogen and uranium, characterized by the dilution (ratio of the concentrations of hydrogen to uranium). The point neutron source energy is taken at Es = 2 MeV and Es = 676.45 eV, whereas the energy cut-off is fixed at Ec = 2.768 eV. The results show a large reduction of computation time and statistical deviation, without altering the mean resonance escape probability, compared to the usual analog simulation. Splitting and Russian roulette coupled to survival biasing is found to be the best method for studying neutron resonant absorption, particularly at high energies. A comparison between the Monte Carlo results and those of deterministic methods, based on the numerical solution of the neutron slowing-down equations by the iterative method, is made for several dilutions. PMID:11003535

  1. Russian roulette efficiency in Monte Carlo resonant absorption calculations

    International Nuclear Information System (INIS)

    The resonant absorption calculation in media containing heavy resonant nuclei is one of the most difficult problems treated in reactor physics. Deterministic techniques need many approximations to solve this kind of problem. On the other hand, the Monte Carlo method is a reliable mathematical tool for evaluating the neutron resonance escape probability, but it suffers from large statistical deviations of results and long computation times. In order to overcome this problem, we have used the splitting and Russian roulette technique coupled separately to survival biasing and to importance sampling for the energy parameter. These techniques have been used to calculate the neutron resonance absorption in infinite homogeneous media containing hydrogen and uranium, characterized by the dilution (ratio of the concentrations of hydrogen to uranium). The point neutron source energy is taken at Es = 2 MeV and Es = 676.45 eV, whereas the energy cut-off is fixed at Ec = 2.768 eV. The results show a large reduction of computation time and statistical deviation, without altering the mean resonance escape probability, compared to the usual analog simulation. Splitting and Russian roulette coupled to survival biasing is found to be the best method for studying neutron resonant absorption, particularly at high energies. A comparison between the Monte Carlo results and those of deterministic methods, based on the numerical solution of the neutron slowing-down equations by the iterative method, is made for several dilutions.
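    For readers unfamiliar with the variance-reduction game discussed in these records, a minimal Python sketch of Russian roulette applied to a particle weight is given below; the threshold and survival probability are illustrative choices, not values from the paper.

        import numpy as np

        def russian_roulette(weight, threshold=0.1, survival=0.5, rng=None):
            """Play Russian roulette on a low-weight particle: kill it with
            probability 1 - survival, otherwise boost its weight so the
            expected weight is unchanged and the estimate stays unbiased."""
            rng = rng or np.random.default_rng()
            if weight >= threshold:
                return weight              # above threshold: no game played
            if rng.random() < survival:
                return weight / survival   # survivor carries the killed weight
            return 0.0                     # particle terminated

    Splitting is the complementary move: a particle whose weight exceeds an upper threshold is divided into several copies with proportionally reduced weights.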

  2. Auto-calibration of a one-dimensional hydrodynamic-ecological model using a Monte Carlo approach: simulation of hypoxic events in a polymictic lake

    Science.gov (United States)

    Luo, L.

    2011-12-01

    Automated calibration of complex deterministic water quality models with a large number of biogeochemical parameters can reduce time-consuming iterative simulations involving empirical judgements of model fit. We undertook auto-calibration of the one-dimensional hydrodynamic-ecological lake model DYRESM-CAEDYM, using a Monte Carlo sampling (MCS) method, in order to test the applicability of this procedure for shallow, polymictic Lake Rotorua (New Zealand). The calibration procedure involved independently minimizing the root-mean-square error (RMSE) and maximizing the Pearson correlation coefficient (r) and Nash-Sutcliffe efficiency coefficient (Nr) for comparisons of model state variables against measured data. An assigned number of parameter permutations was used for 10,000 simulation iterations. The 'optimal' temperature calibration produced an RMSE of 0.54 °C, an Nr value of 0.99 and an r value of 0.98 through the whole water column, based on comparisons with 540 observed water temperatures collected between 13 July 2007 and 13 January 2009. The modeled bottom dissolved oxygen concentration (20.5 m below surface) was compared with 467 available observations. The calculated RMSE of the simulations compared with the measurements was 1.78 mg L-1, the Nr value was 0.75 and the r value was 0.87. The auto-calibrated model was further tested on an independent data set by simulating bottom-water hypoxia events for the period 15 January 2009 to 8 June 2011 (875 days). This verification produced an accurate simulation of five hypoxic events; the auto-calibration procedure could be applied to similarly complex water quality models.

  3. Calibration of a neutron moisture gauge by Monte-Carlo simulation

    International Nuclear Information System (INIS)

    Neutron transport calculations using the MCNP code have been used to determine flux distributions in soils and to derive the calibration curves of a neutron gauge. The calculations were carried out for a typical geometry identical with that of the moisture gauge HUMITERRA developed by the Laboratorio Nacional de Engenharia e Tecnologia Industrial, Portugal. To test the reliability of the method, a comparison of computed and experimental results was made. The effect on the gauge calibration curve of varying the values of several parameters which characterize the measurement system was studied, namely the soil dry bulk density, the active length of the neutron detector, and the materials and wall thickness of the probe casing and of the access tubes. The usefulness of the method in the design, development and calibration of neutron gauges for soil moisture determinations is discussed. (Author)

  4. Calibration of a gamma spectrometer for measuring natural radioactivity. Experimental measurements and modeling by Monte-Carlo methods

    International Nuclear Information System (INIS)

    This thesis was carried out in the context of thermoluminescence dating, a method that requires laboratory measurements of the natural radioactivity. For that purpose, we used a germanium spectrometer. To refine its calibration, we modelled it using a Monte Carlo computer code, Geant4. We developed a geometrical model which takes into account the presence of inactive zones and zones of poor charge collection within the germanium crystal. The parameters of the model were adjusted by comparison with experimental results obtained with a ¹³⁷Cs source. It appeared that the shape of the inactive zones is less simple than presented in the specialized literature. The model was then extended to a more complex source, with cascade effects and angular correlations between photons: ⁶⁰Co. Lastly, applied to extended sources, it gave correct results and allowed us to validate the simulation of matrix effects. (author)

  5. Response and Monte Carlo evaluation of a reference ionization chamber for radioprotection level at calibration laboratories

    Science.gov (United States)

    Neves, Lucio P.; Vivolo, Vitor; Perini, Ana P.; Caldas, Linda V. E.

    2015-07-01

    A special parallel plate ionization chamber, inserted in a slab phantom for the personal dose equivalent Hp(10) determination, was developed and characterized in this work. This ionization chamber has collecting electrodes and a window made of graphite, with the walls and phantom made of PMMA. The tests comprised experimental evaluation following international standards, and Monte Carlo simulations employing the PENELOPE code to evaluate the design of this new dosimeter. The experimental tests were conducted with the N-60 radiation protection quality established at IPEN, and all results were within the recommended standards.

  6. Efficiency of Static Knowledge Bias in Monte-Carlo Tree Search

    OpenAIRE

    Ikeda, Kokolo; Viennot, Simon

    2014-01-01

    Monte-Carlo methods are currently the best known algorithms for the game of Go. It is already known that Monte-Carlo simulations based on a probability model containing static knowledge of the game are more efficient than random simulations. Such probability models are also used by some programs in the tree search policy to limit the search to a subset of the legal moves or to bias the search, but this aspect is not so well documented. In this article, we try to describe more precisely how st...

  7. Calibration of the Top-Quark Monte Carlo Mass.

    Science.gov (United States)

    Kieseler, Jan; Lipka, Katerina; Moch, Sven-Olaf

    2016-04-22

    We present a method to establish, experimentally, the relation between the top-quark mass m_{t}^{MC} as implemented in Monte Carlo generators and the Lagrangian mass parameter m_{t} in a theoretically well-defined renormalization scheme. We propose a simultaneous fit of m_{t}^{MC} and an observable sensitive to m_{t}, which does not rely on any prior assumptions about the relation between m_{t} and m_{t}^{MC}. The measured observable is independent of m_{t}^{MC} and can be used subsequently for a determination of m_{t}. The analysis strategy is illustrated with examples for the extraction of m_{t} from inclusive and differential cross sections for hadroproduction of top quarks. PMID:27152794

  8. Comparing Two Different Methods of Preferential Flow Simulation, Using Calibration Constrained Monte Carlo Uncertainty analysis

    Science.gov (United States)

    Schirmer, M.; Ghasemizade, M.; Radny, D.

    2014-12-01

    Many different methods and approaches have been suggested for the simulation of preferential flows. However, most of these methods have been tested at lab scales, where boundary conditions and material properties are known and under control. The focus of this study is to compare two different approaches for simulating preferential flows in a weighing lysimeter, where the scale of simulation is closer to field scale than in lab simulations. To do so, we applied dual permeability and spatially distributed heterogeneity as two competing approaches for simulating slow and rapid flow out of a lysimeter. While the dual-permeability approach assumes that there is a structure among soil aggregates that can be captured as a fraction of the porosity, the other method attributes the existence of preferential flows to heterogeneity distributed within the domain. The two aforementioned approaches were used to simulate daily recharge values of a lysimeter. The analysis included a calibration phase, from March 2012 until March 2013, and a validation phase which lasted a year following the calibration period. The simulations were performed with the numerical, 3-D, physically based model HydroGeoSphere. The nonlinear uncertainty analysis of the results indicates that they are comparable.

  9. Improving the efficiency of Monte Carlo simulations of systems that undergo temperature-driven phase transitions

    Science.gov (United States)

    Velazquez, L.; Castro-Palacio, J. C.

    2013-07-01

    Recently, Velazquez and Curilef proposed a methodology to extend Monte Carlo algorithms based on a canonical ensemble which aims to overcome slow sampling problems associated with temperature-driven discontinuous phase transitions. We show in this work that Monte Carlo algorithms extended with this methodology also exhibit a remarkable efficiency near a critical point. Our study is performed for the particular case of a two-dimensional four-state Potts model on a square lattice with periodic boundary conditions. This analysis reveals that the extended version of Metropolis importance sampling is more efficient than the usual Swendsen-Wang and Wolff cluster algorithms. These results demonstrate the effectiveness of this methodology to improve the efficiency of MC simulations of systems that undergo any type of temperature-driven phase transition.

  10. Theoretical and practical study of the variance and efficiency of a Monte Carlo calculation due to Russian roulette

    International Nuclear Information System (INIS)

    Although Russian roulette is applied very often in Monte Carlo calculations, not much literature exists on its quantitative influence on the variance and efficiency of a Monte Carlo calculation. Elaborating on the work of Lux and Koblinger using moment equations, new relevant equations are derived to calculate the variance of a Monte Carlo simulation using Russian roulette. To demonstrate its practical application the theory is applied to a simplified transport model resulting in explicit analytical expressions for the variance of a Monte Carlo calculation and for the expected number of collisions per history. From these expressions numerical results are shown and compared with actual Monte Carlo calculations, showing an excellent agreement. By considering the number of collisions in a Monte Carlo calculation as a measure of the CPU time, also the efficiency of the Russian roulette can be studied. It opens the way for further investigations, including optimization of Russian roulette parameters. (authors)

  11. An Efficient Feedback Calibration Algorithm for Direct Imaging Radio Telescopes

    CERN Document Server

    Beardsley, Adam P; Bowman, Judd D; Morales, Miguel F

    2016-01-01

    We present the E-field Parallel Imaging Calibration (EPICal) algorithm, which addresses the need for a real-time calibration method for direct imaging radio astronomy correlators. Direct imaging involves a spatial fast Fourier transform of antenna voltages, alleviating the harsh $\\mathcal{O}(N_a^2)$ computational scaling to a more gentle $\\mathcal{O}(N_a \\log_2 N_a)$, which can save orders of magnitude in computation cost for next generation arrays consisting of hundreds to thousands of antennas. However, because signals are mixed in the correlator, gain correction must be applied on the front end. We develop the EPICal algorithm to form gain solutions in real time without ever forming visibilities. This method scales as the number of antennas, and produces results comparable to those from visibilities. Through simulations and application to Long Wavelength Array data we show this algorithm is a promising solution for next generation instruments.

  12. Efficient implementation of the Hellmann-Feynman theorem in a diffusion Monte Carlo calculation.

    Science.gov (United States)

    Vitiello, S A

    2011-02-01

    Kinetic and potential energies of systems of ⁴He atoms in the solid phase are computed at T = 0. Results at two densities of the liquid phase are presented as well. Calculations are performed by the multiweight extension to the diffusion Monte Carlo method that allows the application of the Hellmann-Feynman theorem in a robust and efficient way. This is a general method that can be applied in other situations of interest as well.

  13. Calibration and efficiency curve of SANAEM ionization chamber for activity measurements.

    Science.gov (United States)

    Yeltepe, Emin; Kossert, Karsten; Dirican, Abdullah; Nähle, Ole; Niedergesäß, Christiane; Kemal Şahin, Namik

    2016-03-01

    A commercially available Fidelis ionization chamber was calibrated and assessed in PTB with activity standard solutions. The long-term stability and linearity of the system was checked. Energy-dependent efficiency curves for photons and beta particles were determined, using an iterative method in Excel™, to enable calibration factors to be calculated for radionuclides which were not used in the calibration. Relative deviations between experimental and calculated radionuclide efficiencies are of the order of 1% for most photon emitters and below 5% for pure beta emitters. The system will enable TAEK-SANAEM to provide traceable activity measurements.
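    The calibration-factor calculation for nuclides outside the calibration set follows the usual recipe: weight the energy-dependent efficiency curve by the nuclide's photon emission intensities. A schematic Python version with hypothetical emission data follows; it illustrates the principle only, not the iterative procedure used in the paper.

        # photon_eff is the chamber's energy-dependent efficiency curve (a callable);
        # emissions is a list of (energy_keV, photons_per_decay) pairs.
        def nuclide_efficiency(emissions, photon_eff):
            return sum(intensity * photon_eff(energy) for energy, intensity in emissions)

        # e.g. a two-line emitter with hypothetical emission data:
        lines = [(1173.2, 0.999), (1332.5, 1.000)]
        # factor = nuclide_efficiency(lines, photon_eff)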

  14. Monte Carlo evaluation of the neutron detection efficiency of a superheated drop detector

    Energy Technology Data Exchange (ETDEWEB)

    Gualdrini, G. F. [ENEA, Centro Ricerche 'Ezio Clementel', Bologna (Italy). Dipt. Ambiente]; D'Errico, F.; Noccioni, P. [Pisa, Univ. (Italy). Dipt. di Costruzioni Meccaniche e Nucleari]

    1997-06-01

    Neutron dosimetry has recently gained renewed attention, following concerns about the exposure of crew members on board aircraft and of workers around the increasing number of high-energy accelerators for medical and research purposes. At the same time, the new operational quantities for radiation dosimetry introduced by the ICRU and the ICRP, aiming at a unified metrological system applicable to all types of radiation exposure, created the need to update current devices in order to meet new requirements. Superheated Drop (Bubble) Detectors (SDD) offer an alternative approach to neutron radiation protection dosimetry. The SDDs are currently studied within a large collaborative effort involving Yale University, New Haven CT, the Università degli Studi di Pisa, the Physikalisch-Technische Bundesanstalt, Braunschweig (D), and the ENEA (National Agency for New Technology, Energy and the Environment) research centre in Bologna. The detectors were characterised through calibrations with monoenergetic neutron beams and, where experimental investigations were inadequate or impossible, such as in the intermediate energy range, parametric Monte Carlo calculations of the response were carried out. This report describes the general characteristics of the SDDs along with the Monte Carlo computations of the energy response and a comparison with the experimental results.

  15. Monte Carlo evaluation of the neutron detection efficiency of a superheated drop detector

    Energy Technology Data Exchange (ETDEWEB)

    Gualdrini, G.F. [ENEA, Centro Ricerche 'Ezio Clementel', Bologna (Italy). Dipt. Ambiente]; D'Errico, F.; Noccioni, P. [Pisa, Univ. (Italy). Dipt. di Costruzioni Meccaniche e Nucleari]

    1997-03-01

    Neutron dosimetry has recently gained renewed attention, following concerns about the exposure of crew members on board aircraft and of workers around the increasing number of high-energy accelerators for medical and research purposes. At the same time, the new operational quantities for radiation dosimetry introduced by the ICRU and the ICRP, aiming at a unified metrological system applicable to all types of radiation exposure, created the need to update current devices in order to meet new requirements. Superheated Drop (Bubble) Detectors (SDD) offer an alternative approach to neutron radiation protection dosimetry. The SDDs are currently studied within a large collaborative effort involving Yale University, New Haven CT, the University of Pisa (IT), the Physikalisch-Technische Bundesanstalt, Braunschweig (D), and the ENEA (Italian National Agency for New Technologies, Energy and the Environment) centre in Bologna. The detectors were characterised through calibrations with monoenergetic neutron beams and, where experimental investigations were inadequate or impossible, such as in the intermediate energy range, parametric Monte Carlo calculations of the response were carried out. This report describes the general characteristics of the SDDs along with the Monte Carlo computations of the energy response and a comparison with the experimental results.

  16. Monte Carlo evaluation of the neutron detection efficiency of a superheated drop detector

    International Nuclear Information System (INIS)

    Neutron dosimetry has recently gained renewed attention, following concerns about the exposure of crew members on board aircraft and of workers around the increasing number of high-energy accelerators for medical and research purposes. At the same time, the new operational quantities for radiation dosimetry introduced by the ICRU and the ICRP, aiming at a unified metrological system applicable to all types of radiation exposure, created the need to update current devices in order to meet new requirements. Superheated Drop (Bubble) Detectors (SDD) offer an alternative approach to neutron radiation protection dosimetry. The SDDs are currently studied within a large collaborative effort involving Yale University, New Haven CT, the University of Pisa (IT), the Physikalisch-Technische Bundesanstalt, Braunschweig (D), and the ENEA (Italian National Agency for New Technologies, Energy and the Environment) centre in Bologna. The detectors were characterised through calibrations with monoenergetic neutron beams and, where experimental investigations were inadequate or impossible, such as in the intermediate energy range, parametric Monte Carlo calculations of the response were carried out. This report describes the general characteristics of the SDDs along with the Monte Carlo computations of the energy response and a comparison with the experimental results.

  17. Evidence-Based Model Calibration for Efficient Building Energy Services

    OpenAIRE

    Bertagnolio, Stéphane

    2012-01-01

    Energy services play a growing role in the control of energy consumption and the improvement of energy efficiency in non-residential buildings. Most of the analyses involved in the energy efficiency service process require on-field measurements and energy use analysis. Today, while detailed on-field measurements and energy counting remain generally expensive and time-consuming, energy simulations are increasingly cheap due to the continuous improvement of computer speed. This work ...

  18. Mathematical efficiency calibration with uncertain source geometries using smart optimization

    Energy Technology Data Exchange (ETDEWEB)

    Menaa, N. [AREVA/CANBERRA Nuclear Measurements Business Unit, Saint Quentin-en-Yvelines 78182 (France); Bosko, A.; Bronson, F.; Venkataraman, R.; Russ, W. R.; Mueller, W. [AREVA/CANBERRA Nuclear Measurements Business Unit, Meriden, CT (United States); Nizhnik, V. [International Atomic Energy Agency, Vienna (Austria); Mirolo, L. [AREVA/CANBERRA Nuclear Measurements Business Unit, Saint Quentin-en-Yvelines 78182 (France)

    2011-07-01

    The In Situ Object Counting Software (ISOCS), a mathematical method developed by CANBERRA, is a well-established technique for computing High Purity Germanium (HPGe) detector efficiencies for a wide variety of source shapes and sizes. In the ISOCS method, the user needs to input the geometry-related parameters such as the source dimensions, matrix composition and density, along with the source-to-detector distance. In many applications, the source dimensions, the matrix material and density may not be well known. Under such circumstances, the efficiencies may not be very accurate, since the modeled source geometry may not be representative of the measured geometry. CANBERRA developed an efficiency optimization software known as 'Advanced ISOCS' that varies the poorly known parameters within user-specified intervals and determines the optimal efficiency shape and magnitude based on available benchmarks in the measured spectra. The benchmarks could be results from isotopic codes such as MGAU, MGA, IGA, or FRAM, activities from multi-line nuclides, and multiple counts of the same item taken in different geometries (from the side, bottom, top, etc.). The efficiency optimization is carried out using either a random search based on standard probability distributions, or numerical techniques that carry out a more directed (referred to as 'smart' in this paper) search. Measurements were carried out using representative source geometries and radionuclide distributions. The radionuclide activities were determined using the optimum efficiency and compared against the true activities. The 'Advanced ISOCS' method has many applications, among which are safeguards, decommissioning and decontamination, non-destructive assay systems and nuclear reactor outage maintenance. (authors)
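    In the spirit of the random-search variant described above, a minimal sketch might draw the poorly known geometry parameters from their user-specified intervals and keep the set whose computed efficiencies best reproduce the benchmarks. All names and interfaces below are illustrative, not the actual ISOCS API:

        import numpy as np

        def optimize_geometry(compute_eff, benchmarks, bounds, n_trials=2000, seed=0):
            """Random search over the uncertain geometry parameters: sample each
            within its user-set interval and keep the parameter set whose modeled
            efficiencies agree best with the benchmark values."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, dtype=float).T
            best, best_err = None, np.inf
            for _ in range(n_trials):
                params = rng.uniform(lo, hi)
                err = np.sum((compute_eff(params) - benchmarks) ** 2)
                if err < best_err:
                    best, best_err = params, err
            return best, best_err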

  19. Quantum efficiency calibration of opto-electronic detector by means of correlated photons method

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A new calibration method for detectors can be realized by using correlated photons generated in the spontaneous parametric down-conversion (SPDC) effect of a nonlinear crystal. An absolute calibration system for detector quantum efficiency was built, and its principle and experimental setup are introduced. A continuous-wave (CW) ultraviolet (351 nm), diode-pumped, frequency-doubled, solid-state laser is used to pump a BBO crystal. The quantum efficiencies of the photomultiplier at 633, 702, and 789 nm are measured. The coincidence peaks are observed using a coincidence circuit. Several measurement factors, including the filter bandwidth of the trigger channel, the detector position alignment and the polarization of the pump light, are analyzed. The uncertainties of this calibration method are also analyzed; the relative uncertainty of the total calibration is less than 5.8%. The accuracy of this method could be improved in the future.

  20. Developing an Efficient Calibration System for Joint Offset of Industrial Robots

    OpenAIRE

    Bingtuan Gao; Yong Liu; Ning Xi; Yantao Shen

    2014-01-01

    Joint offset calibration is one of the most important methods to improve the positioning accuracy of industrial robots. This paper presents an efficient method to calibrate industrial robot joint offsets. The proposed method mainly relies on a laser pointer mounted on the robot end-effector and a position-sensitive device (PSD) arbitrarily located in the workspace. A vision-based control was employed to aim the laser beam at the center of the PSD surface from several initial robot p...

  1. Ge(Li) intrinsic efficiency calculation using Monte Carlo simulation for γ radiation transport

    International Nuclear Information System (INIS)

    To solve a radiation transport problem using the Monte Carlo simulation method, the evolution of a large number of radiations must be simulated and their histories analysed. The evolution of a radiation starts with its emission, followed by unperturbed propagation in the medium between successive interactions, and the modification of the radiation parameters at the points where interactions occur. The goal of this paper is to calculate the total detection efficiency and the intrinsic efficiency of a coaxial Ge(Li) detector, using the Monte Carlo method to simulate γ-radiation transport. A Ge(Li) detector with a 106 cm³ active volume and γ photons with energies in the 50 keV - 2 MeV range, emitted by a point source situated on the detector axis, were considered. Each γ photon's evolution is simulated step by step by an analogue process until the photon escapes from the detector or is completely absorbed in the active volume. (author)
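    The first step of such a step-by-step analogue simulation is sampling each photon's free path from the exponential attenuation law. A minimal sketch for the fraction of photons interacting in the crystal (the attenuation coefficient is an assumed input, e.g. from tabulated cross sections):

        import numpy as np

        def interaction_fraction(mu_cm, thickness_cm, n_photons=100_000, seed=0):
            """Sample exponential free paths for photons entering the crystal
            along its axis; a photon interacts if its free path is shorter than
            the crystal thickness. mu_cm is the total linear attenuation
            coefficient at the photon energy."""
            rng = np.random.default_rng(seed)
            paths = -np.log(1.0 - rng.random(n_photons)) / mu_cm
            return np.mean(paths < thickness_cm)

        # e.g. ~662 keV photons in Ge (mu of roughly 0.3 /cm, assumed) in a 4 cm crystal:
        print(interaction_fraction(0.3, 4.0))  # close to 1 - exp(-mu*L) ≈ 0.70

    For this simplified geometry the Monte Carlo estimate can be checked against the analytic value 1 - exp(-μL); the full calculation in the paper additionally follows scattered photons until escape or complete absorption.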

  2. Quantum Monte Carlo for large chemical systems: implementing efficient strategies for peta scale platforms and beyond

    International Nuclear Information System (INIS)

    Various strategies to efficiently implement quantum Monte Carlo (QMC) simulations for large chemical systems are presented. These include: (i) the introduction of an efficient algorithm to calculate the computationally expensive Slater matrices. This novel scheme is based on the use of the highly localized character of atomic Gaussian basis functions (not the molecular orbitals as usually done), (ii) the possibility of keeping the memory footprint minimal, (iii) the important enhancement of single-core performance when efficient optimization tools are used, and (iv) the definition of a universal, dynamic, fault-tolerant, and load-balanced framework adapted to all kinds of computational platforms (massively parallel machines, clusters, or distributed grids). These strategies have been implemented in the QMC-Chem code developed at Toulouse and illustrated with numerical applications on small peptides of increasing sizes (158, 434, 1056, and 1731 electrons). Using 10-80 k computing cores of the Curie machine (GENCI-TGCC-CEA, France), QMC-Chem has been shown to be capable of running at the petascale level, thus demonstrating that for this machine a large part of the peak performance can be achieved. Implementation of large-scale QMC simulations for future exascale platforms with a comparable level of efficiency is expected to be feasible. (authors)

  3. Quantum Monte Carlo for large chemical systems: implementing efficient strategies for petascale platforms and beyond.

    Science.gov (United States)

    Scemama, Anthony; Caffarel, Michel; Oseret, Emmanuel; Jalby, William

    2013-04-30

    Various strategies to implement efficiently quantum Monte Carlo (QMC) simulations for large chemical systems are presented. These include: (i) the introduction of an efficient algorithm to calculate the computationally expensive Slater matrices. This novel scheme is based on the use of the highly localized character of atomic Gaussian basis functions (not the molecular orbitals as usually done), (ii) the possibility of keeping the memory footprint minimal, (iii) the important enhancement of single-core performance when efficient optimization tools are used, and (iv) the definition of a universal, dynamic, fault-tolerant, and load-balanced framework adapted to all kinds of computational platforms (massively parallel machines, clusters, or distributed grids). These strategies have been implemented in the QMC=Chem code developed at Toulouse and illustrated with numerical applications on small peptides of increasing sizes (158, 434, 1056, and 1731 electrons). Using 10-80 k computing cores of the Curie machine (GENCI-TGCC-CEA, France), QMC=Chem has been shown to be capable of running at the petascale level, thus demonstrating that for this machine a large part of the peak performance can be achieved. Implementation of large-scale QMC simulations for future exascale platforms with a comparable level of efficiency is expected to be feasible.

  4. An efficient calibration method for SQUID measurement system using three orthogonal Helmholtz coils

    Science.gov (United States)

    Hua, Li; Shu-Lin, Zhang; Chao-Xiang, Zhang; Xiang-Yan, Kong; Xiao-Ming, Xie

    2016-06-01

    For a practical superconducting quantum interference device (SQUID) based measurement system, the Tesla/volt coefficient must be accurately calibrated. In this paper, we propose a highly efficient method of calibrating a SQUID magnetometer system using three orthogonal Helmholtz coils. The Tesla/volt coefficient is regarded as the magnitude of a vector pointing in the normal direction of the pickup coil. By applying magnetic fields through a three-dimensional Helmholtz coil, the Tesla/volt coefficient can be directly calculated from the magnetometer responses to the three orthogonally applied magnetic fields. Calibration with an alternating current (AC) field is normally used for better signal-to-noise ratio in noisy urban environments, and the results are compared with the direct current (DC) calibration to avoid possible effects due to eddy currents. In our experiment, a calibration relative error of about 6.89 × 10⁻⁴ is obtained, and the error is mainly caused by the non-orthogonality of the three axes of the Helmholtz coils. The method does not need precise alignment of the magnetometer inside the Helmholtz coil. It can be used for multichannel magnetometer system calibration effectively and accurately. Project supported by the “Strategic Priority Research Program (B)” of the Chinese Academy of Sciences (Grant No. XDB04020200) and the Shanghai Municipal Science and Technology Commission Project, China (Grant No. 15DZ1940902).

  5. An efficient calibration method for SQUID measurement system using three orthogonal Helmholtz coils

    Science.gov (United States)

    Hua, Li; Shu-Lin, Zhang; Chao-Xiang, Zhang; Xiang-Yan, Kong; Xiao-Ming, Xie

    2016-06-01

    For a practical superconducting quantum interference device (SQUID) based measurement system, the Tesla/volt coefficient must be accurately calibrated. In this paper, we propose a highly efficient method of calibrating a SQUID magnetometer system using three orthogonal Helmholtz coils. The Tesla/volt coefficient is regarded as the magnitude of a vector pointing in the normal direction of the pickup coil. By applying magnetic fields through a three-dimensional Helmholtz coil, the Tesla/volt coefficient can be directly calculated from the magnetometer responses to the three orthogonally applied magnetic fields. Calibration with an alternating current (AC) field is normally used for better signal-to-noise ratio in noisy urban environments, and the results are compared with the direct current (DC) calibration to avoid possible effects due to eddy currents. In our experiment, a calibration relative error of about 6.89 × 10⁻⁴ is obtained, and the error is mainly caused by the non-orthogonality of the three axes of the Helmholtz coils. The method does not need precise alignment of the magnetometer inside the Helmholtz coil. It can be used for multichannel magnetometer system calibration effectively and accurately. Project supported by the “Strategic Priority Research Program (B)” of the Chinese Academy of Sciences (Grant No. XDB04020200) and the Shanghai Municipal Science and Technology Commission Project, China (Grant No. 15DZ1940902).
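    The vector picture used in both records above reduces to a few lines of arithmetic: the responses to unit fields applied along three orthogonal axes are the components of a vector whose magnitude is the volt-per-tesla coefficient of the pickup coil. With hypothetical response values:

        import numpy as np

        # Magnetometer responses to unit fields applied along x, y, z (hypothetical).
        responses_v_per_t = np.array([3.2e5, 1.1e5, 8.9e5])

        v_per_t = np.linalg.norm(responses_v_per_t)  # magnitude of the response vector
        tesla_per_volt = 1.0 / v_per_t
        print(tesla_per_volt)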

  6. Study on calibration of neutron efficiency and relative photo-yield of plastic scintillator

    CERN Document Server

    Peng Tai Ping; Li Ru Rong; Zhang Jian Hua; Luo Xiao Bing; Xia Yi Jun; Yang Zhi Hu

    2002-01-01

    A method used for the calibration of the neutron efficiency and the relative photo yield of plastic scintillators is studied. T(p, n) and D(d, n) reactions are used as neutron sources. The neutron efficiencies and the relative photo yields of plastic scintillator 1421 (40 mm in diameter and 5 mm in thickness) were determined in the neutron energy range of 0.655-5 MeV.

  7. Study of RPC Barrel maximum efficiency in 2012 and 2015 calibration collision runs

    CERN Document Server

    Cassar, Samwel

    2015-01-01

    The maximum efficiency of each of the 1020 Resistive Plate Chamber (RPC) rolls in the barrel region of the CMS muon detector is calculated from the best sigmoid fit of efficiency against high voltage (HV). Data from the HV scans, collected during calibration runs in 2012 and 2015, were compared and the rolls exhibiting a change in maximum efficiency were identified. The chi-square value of the sigmoid fit for each roll was considered in determining the significance of the maximum efficiency for the respective roll.
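    A sigmoid fit of efficiency versus HV of the kind described can be done with any least-squares routine; below is a minimal scipy sketch with hypothetical scan points for one roll.

        import numpy as np
        from scipy.optimize import curve_fit

        def sigmoid(hv, eff_max, slope, hv50):
            """Efficiency plateau curve: eff_max is the maximum efficiency,
            hv50 the voltage at half maximum, slope the turn-on steepness."""
            return eff_max / (1.0 + np.exp(-slope * (hv - hv50)))

        # Hypothetical HV-scan points for one roll (kV, efficiency).
        hv = np.array([8.6, 8.8, 9.0, 9.2, 9.4, 9.6])
        eff = np.array([0.05, 0.25, 0.70, 0.90, 0.94, 0.95])

        popt, _ = curve_fit(sigmoid, hv, eff, p0=[0.95, 10.0, 9.0])
        print("maximum efficiency:", popt[0])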

  8. Efficient Monte Carlo and greedy heuristic for the inference of stochastic block models

    CERN Document Server

    Peixoto, Tiago P

    2014-01-01

    We present an efficient algorithm for the inference of stochastic block models in large networks. The algorithm can be used as an optimized Markov chain Monte Carlo (MCMC) method, with a fast mixing time and a much reduced susceptibility to getting trapped in metastable states, or as a greedy agglomerative heuristic, with an almost linear $O(N\ln^2N)$ complexity, where $N$ is the number of nodes in the network, independent of the number of blocks being inferred. We show that the heuristic is capable of delivering results which are indistinguishable from the more exact and numerically expensive MCMC method in many artificial and empirical networks, despite being much faster. The method is entirely unbiased towards any specific mixing pattern, and in particular it does not favor assortative community structures.
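
    To make the MCMC side concrete, here is a bare-bones Metropolis sweep over block labels; the delta_log_posterior callback is a hypothetical stand-in for the block-model likelihood change, and the paper's optimized proposals and agglomerative heuristic are not reproduced here.

        import numpy as np

        def metropolis_sweep(labels, n_blocks, delta_log_posterior, rng):
            # One sweep of single-node moves: propose a random new block for
            # each node and accept with probability min(1, exp(delta)).
            for v in rng.permutation(len(labels)):
                proposal = int(rng.integers(n_blocks))
                if proposal == labels[v]:
                    continue
                delta = delta_log_posterior(v, labels[v], proposal)
                if np.log(rng.random()) < delta:
                    labels[v] = proposal
            return labels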

  9. Efficiency of rejection-free methods for dynamic Monte Carlo studies of off-lattice interacting particles

    KAUST Repository

    Guerra, Marta L.

    2009-02-23

    We calculate the efficiency of a rejection-free dynamic Monte Carlo method for d-dimensional off-lattice homogeneous particles interacting through a repulsive power-law potential r^-p. Theoretically we find that the algorithmic efficiency in the limit of low temperatures and/or high densities is asymptotically proportional to ρ^((p+2)/2) T^(-d/2), with ρ the particle density and T the temperature. Dynamic Monte Carlo simulations are performed in one-, two-, and three-dimensional systems with different powers p, and the results agree with the theoretical predictions. © 2009 The American Physical Society.

  10. Efficient heterogeneous execution of Monte Carlo shielding calculations on a Beowulf cluster.

    Science.gov (United States)

    Dewar, David; Hulse, Paul; Cooper, Andrew; Smith, Nigel

    2005-01-01

    Recent work has been done in using a high-performance 'Beowulf' cluster computer system for the efficient distribution of Monte Carlo shielding calculations. This has enabled the rapid solution of complex shielding problems at low cost and with greater modularity and scalability than traditional platforms. The work has shown that a simple approach to distributing the workload is as efficient as using more traditional techniques such as PVM (Parallel Virtual Machine). In addition, when used in an operational setting this technique is fairer in its use of resources than traditional methods, in that it does not tie up a single computing resource but instead shares the capacity with other tasks. These developments in computing technology have enabled shielding problems to be solved that would have taken an unacceptably long time to run on traditional platforms. This paper discusses the BNFL Beowulf cluster and a number of tests that have recently been run to demonstrate the efficiency of the asynchronous technique in running the MCBEND program. The BNFL Beowulf currently consists of 84 standard PCs running RedHat Linux. Current performance of the machine has been estimated to be between 40 and 100 Gflop s^-1. When the whole system is employed on one problem, up to four million particles can be tracked per second. There are plans to review its size in line with future business needs.

  11. A new NaI(Tl) four-detector layout for field contamination assessment using artificial neural networks and the Monte Carlo method for system calibration

    Energy Technology Data Exchange (ETDEWEB)

    Moreira, M.C.F., E-mail: marcos@ird.gov.b [Universidade Federal do Rio de Janeiro, COPPE, Programa de Engenharia Nuclear, Laboratorio de Monitoracao de Processos (Federal University of Rio de Janeiro, COPPE, Nuclear Engineering Program, Process Monitoring Laboratory), P.O. Box 68509, 21941-972 Rio de Janeiro (Brazil); Instituto de Radioprotecao e Dosimetria, CNEN/IRD (Radiation Protection and Dosimetry Institute, CNEN/IRD), Av. Salvador Allende s/no, P.O. Box 37750, 22780-160 Rio de Janeiro (Brazil); Conti, C.C. [Instituto de Radioprotecao e Dosimetria, CNEN/IRD (Radiation Protection and Dosimetry Institute, CNEN/IRD), Av. Salvador Allende s/no, P.O. Box 37750, 22780-160 Rio de Janeiro (Brazil); Schirru, R. [Universidade Federal do Rio de Janeiro, COPPE, Programa de Engenharia Nuclear, Laboratorio de Monitoracao de Processos (Federal University of Rio de Janeiro, COPPE, Nuclear Engineering Program, Process Monitoring Laboratory), P.O. Box 68509, 21941-972 Rio de Janeiro (Brazil)

    2010-09-21

    An NaI(Tl) multidetector layout combined with the use of Monte Carlo (MC) calculations and artificial neural networks (ANN) is proposed to assess the radioactive contamination of urban and semi-urban environment surfaces. A very simple urban environment, like a model street composed of a wall on either side and the ground surface, was the study case. A layout of four NaI(Tl) detectors was used, and the data corresponding to the response of the detectors were obtained by the Monte Carlo method. Two additional data sets with random values for the contamination and for the detectors' response were also produced to test the ANNs. For this work, 18 feedforward ANN topologies with a backpropagation learning algorithm were chosen and trained. The results showed that some trained ANNs were able to accurately predict the contamination on the three urban surfaces when submitted to values within the training range. Other results showed that generalization outside the training range of values could not be achieved. The use of Monte Carlo calculations in combination with ANNs has proven to be a powerful tool to perform detection calibration for highly complicated detection geometries.

  12. Increasing innovation in home energy efficiency: Monte Carlo simulation of potential improvements

    Energy Technology Data Exchange (ETDEWEB)

    Soratana, Kullapa; Marriott, Joe [Civil and Environmental Engineering Department, University of Pittsburgh, 949 Benedum Hall, 3700 O' Hara Street, Pittsburgh, PA 15261 (United States)

    2010-06-15

    Despite the enormous potential for savings, there is little penetration of market-based solutions in the residential energy efficiency market. We hypothesize that there is a failure in the residential efficiency improvement market: due to lack of customer knowledge and capital to invest in improvements, there are unrecovered savings. In this paper, we model a means of extracting profit from those unrecovered energy savings with a market-based residential energy services company, or RESCO. We use a Monte Carlo simulation of the cost and performance of various improvements along with a hypothetical business model to derive general information about the financial viability of these companies. Despite the large amount of energy savings potential, we find that an average contract length with residential customers needs to be nearly 35 years to recoup the cost of the improvements. However, our modeling of an installer knowledge parameter indicates that experience plays a large part in minimizing the time to profitability for each home. Large numbers of inexperienced workers driven by government investment in this area could result in the installation of improvements with long payback periods, whereas a free market might eliminate companies making poor decisions. (author)
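
    A toy Monte Carlo of the payback logic such a model rests on; the distributions and parameter values are purely illustrative, not the authors' calibrated inputs.

        import numpy as np

        def payback_distribution(cost_mu, cost_sd, save_mu, save_sd,
                                 n=100_000, seed=0):
            # Draw improvement costs ($) and annual savings ($/yr), then
            # summarize the simple-payback distribution (5th, 50th, 95th
            # percentiles, in years).
            rng = np.random.default_rng(seed)
            cost = np.clip(rng.normal(cost_mu, cost_sd, n), 1.0, None)
            save = np.clip(rng.normal(save_mu, save_sd, n), 1.0, None)
            return np.percentile(cost / save, [5, 50, 95])

        print(payback_distribution(8000, 2000, 350, 150))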

  13. Monte Carlo polarimetric efficiency simulations for a single monolithic CdTe thick matrix

    Energy Technology Data Exchange (ETDEWEB)

    Curado da Silva, R.M.; Hage-Ali, M.; Siffert, P. [Lab. PHASE, CNRS, Strasbourg (France); Caroli, E.; Stephen, J.B. [Inst. TESRE/CNR, Bologna (Italy)

    2001-07-01

    Polarimetric measurements for hard X- and soft gamma-rays are still quite unexplored in astrophysical source observations. In order to improve the study of these sources through Compton polarimetry, detectors should have a good polarimetric efficiency and also satisfy the demands of the typically exigent detection environments for this kind of mission. Herein we present a simple concept for such systems, since we propose the use of a single thick (~10 mm) monolithic matrix of CdTe of 32 x 32 pixels, with an active area of about 40 cm^2. In order to predict the best configuration and dimension of the detector pixels defined inside the CdTe monolithic piece, a Monte Carlo code based on GEANT4 library modules was developed. Efficiency and polarimetric modulation factor results as a function of energy and detector thickness are presented and discussed. A Q factor of the order of 0.3 has been found up to several hundreds of keV. (orig.)

  14. Monte Carlo polarimetric efficiency simulations for a single monolithic CdTe thick matrix

    International Nuclear Information System (INIS)

    Polarimetric measurements for hard X- and soft gamma-rays are still quite unexplored in astrophysical source observations. In order to improve the study of these sources through Compton polarimetry, detectors should have a good polarimetric efficiency and also satisfy the demands of the typically exigent detection environments for this kind of mission. Herein we present a simple concept for such systems, since we propose the use of a single thick (~10 mm) monolithic matrix of CdTe of 32 x 32 pixels, with an active area of about 40 cm^2. In order to predict the best configuration and dimension of the detector pixels defined inside the CdTe monolithic piece, a Monte Carlo code based on GEANT4 library modules was developed. Efficiency and polarimetric modulation factor results as a function of energy and detector thickness are presented and discussed. A Q factor of the order of 0.3 has been found up to several hundreds of keV. (orig.)

  15. Efficiency calibration of an HPGe X-ray detector for quantitative PIXE analysis

    Energy Technology Data Exchange (ETDEWEB)

    Mulware, Stephen J., E-mail: Stephenmulware@my.unt.edu; Baxley, Jacob D., E-mail: jacob.baxley351@topper.wku.edu; Rout, Bibhudutta, E-mail: bibhu@unt.edu; Reinert, Tilo, E-mail: tilo@unt.edu

    2014-08-01

    Particle Induced X-ray Emission (PIXE) is an analytical technique which provides reliable and accurate quantitative results without the need for standards when the efficiency of the X-ray detection system is calibrated. The ion beam microprobe of the Ion Beam Modification and Analysis Laboratory at the University of North Texas is equipped with a 100 mm^2 high purity germanium X-ray detector (Canberra GUL0110 Ultra-LEGe). In order to calibrate the efficiency of the detector for standardless PIXE analysis, we have measured the X-ray yield of a set of commercially available X-ray fluorescence standards. The set contained elements from low atomic number Z = 11 (sodium) to higher atomic numbers, covering the X-ray energy region from 1.25 keV to about 20 keV where the detector is most efficient. The effective charge was obtained from the proton backscattering yield of a calibrated particle detector.

  16. Multi-level Monte Carlo Methods for Efficient Simulation of Coulomb Collisions

    Science.gov (United States)

    Ricketson, Lee

    2013-10-01

    We discuss the use of multi-level Monte Carlo (MLMC) schemes--originally introduced by Giles for financial applications--for the efficient simulation of Coulomb collisions in the Fokker-Planck limit. The scheme is based on a Langevin treatment of collisions, and reduces the computational cost of achieving an RMS error of ε from O(ε^-3)--for standard Langevin methods and binary collision algorithms--to the theoretically optimal scaling O(ε^-2) for the Milstein discretization, and to O(ε^-2 (log ε)^2) with the simpler Euler-Maruyama discretization. In practice, this speeds up simulation by factors up to 100. We summarize standard MLMC schemes, describe some tricks for achieving the optimal scaling, present results from a test problem, and discuss the method's range of applicability. This work was performed under the auspices of the U.S. DOE by the University of California, Los Angeles, under grant DE-FG02-05ER25710, and by LLNL under contract DE-AC52-07NA27344.
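
    A generic two-level coupling in the Euler-Maruyama discretization, sketched for geometric Brownian motion rather than the Langevin collision operator discussed in the talk; the coarse path reuses summed pairs of fine Brownian increments, which is what keeps the level correction small.

        import numpy as np

        def level_correction(l, n_paths, T=1.0, x0=1.0, mu=0.05, sig=0.2, seed=0):
            # Estimate E[X_T^fine - X_T^coarse] on level l (2^l fine steps),
            # coupling the two discretizations through shared increments.
            rng = np.random.default_rng(seed + l)
            nf, dt = 2 ** l, T / 2 ** l
            dW = rng.normal(0.0, np.sqrt(dt), (n_paths, nf))
            xf = np.full(n_paths, x0)
            for i in range(nf):
                xf += mu * xf * dt + sig * xf * dW[:, i]
            if l == 0:
                return xf.mean()
            xc = np.full(n_paths, x0)
            dWc = dW[:, 0::2] + dW[:, 1::2]           # coarse increments
            for i in range(nf // 2):
                xc += mu * xc * 2 * dt + sig * xc * dWc[:, i]
            return (xf - xc).mean()

        # MLMC estimator: sum of level corrections, fewer paths on finer levels
        mlmc_estimate = sum(level_correction(l, max(100_000 >> 2 * l, 100))
                            for l in range(6))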

  17. A fast, primary-interaction Monte Carlo methodology for determination of total efficiency of cylindrical scintillation gamma-ray detectors

    Directory of Open Access Journals (Sweden)

    Rehman Shakeel U.

    2009-01-01

    A primary-interaction based Monte Carlo algorithm has been developed for the determination of the total efficiency of cylindrical scintillation γ-ray detectors. This methodology has been implemented in a Matlab based computer program, BPIMC. For point isotropic sources at axial locations with respect to the detector axis, excellent agreement has been found between the predictions of the BPIMC code and the corresponding results obtained by hybrid Monte Carlo as well as by experimental measurements over a wide range of γ-ray energy values. For off-axis located point sources, the comparison of the BPIMC predictions with the corresponding results obtained by direct calculations as well as by conventional Monte Carlo schemes shows good agreement, validating the proposed algorithm. Using the BPIMC program, the energy dependent detector efficiency has been found to approach an asymptotic profile on increasing either the thickness or the diameter of the scintillator while keeping the other fixed. The variation of the energy dependent total efficiency of a 3"x3" NaI(Tl) scintillator with axial distance has been studied using the BPIMC code. About two orders of magnitude change in detector efficiency has been observed for zero to 50 cm variation in the axial distance. For small values of axial separation, a similarly large variation has also been observed in the total efficiency for 137Cs as well as for 60Co sources on increasing the axial offset from zero to 50 cm.
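
    The geometry behind a primary-interaction calculation of total efficiency can be captured in a few lines for the on-axis point-source case; this is an illustrative reimplementation of the idea, not the BPIMC code itself, and it ignores housing and scattering-in from outside the crystal.

        import numpy as np

        def total_efficiency(mu, R=3.81, H=7.62, d=25.0, n=1_000_000, seed=1):
            # Point isotropic source on the detector axis, a distance d (cm)
            # from the front face of a cylinder of radius R and height H (cm);
            # mu is the total linear attenuation coefficient (1/cm).
            rng = np.random.default_rng(seed)
            u = rng.uniform(-1.0, 1.0, n)             # cos(theta) of emission
            hit = u > d / np.hypot(d, R)              # ray crosses the front face
            ct = u[hit]
            st = np.sqrt(np.maximum(1.0 - ct**2, 1e-24))
            t_in = d / ct                             # entry at the front face
            t_out = np.minimum(R / st, (d + H) / ct)  # exit: side or back face
            p = np.zeros(n)
            p[hit] = 1.0 - np.exp(-mu * (t_out - t_in))
            return p.mean()                           # fraction that interacts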

  18. Efficient and robust calibration of the Heston option pricing model for American options using an improved Cuckoo Search Algorithm

    OpenAIRE

    Stefan Haring; Ronald Hochreiter

    2015-01-01

    In this paper an improved Cuckoo Search Algorithm is developed to allow for an efficient and robust calibration of the Heston option pricing model for American options. Calibration of stochastic volatility models like the Heston is significantly harder than classical option pricing models as more parameters have to be estimated. The difficult task of calibrating one of these models to American Put options data is the main objective of this paper. Numerical results are shown to substantiate th...

  19. Improved efficiency in Monte Carlo simulation for passive-scattering proton therapy

    International Nuclear Information System (INIS)

    The aim of this work was to improve the computational efficiency of Monte Carlo simulations when tracking protons through a proton therapy treatment head. Two proton therapy facilities were considered, the Francis H Burr Proton Therapy Center (FHBPTC) at the Massachusetts General Hospital and the Crocker Lab eye treatment facility used by the University of California at San Francisco (UCSFETF). The computational efficiency was evaluated for phase space files scored at the exit of the treatment head to determine optimal parameters that improve efficiency while maintaining accuracy in the dose calculation. For FHBPTC, particles were split by a factor of 8 upstream of the second scatterer and upstream of the aperture. The radius of the region for Russian roulette was set to 2.5 or 1.5 times the radius of the aperture, and a secondary particle production cut (PC) of 50 mm was applied. For UCSFETF, particles were split by a factor of 16 upstream of a water absorber column and upstream of the aperture. Here, the radius of the region for Russian roulette was set to 4 times the radius of the aperture and a PC of 0.05 mm was applied. In both setups, the cylindrical symmetry of the proton beam was exploited to position the split particles randomly spaced around the beam axis. When simulating a phase space for subsequent water phantom simulations, efficiency gains between a factor of 19.9 ± 0.1 and 52.21 ± 0.04 for the FHBPTC setups and 57.3 ± 0.5 for the UCSFETF setups were obtained. For a phase space used as input for simulations in a patient geometry, the gain was a factor of 78.6 ± 7.5. Lateral-dose curves in water were within the accepted clinical tolerance of 2%, with statistical uncertainties of 0.5% for the two facilities. For the patient geometry, considering the 2% and 2 mm criteria, 98.4% of the voxels showed a gamma index lower than unity. An analysis of the dose distribution resulted in systematic deviations below 0.88% for 20
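
    Generic geometry-based splitting and Russian roulette of the kind tuned in this study can be sketched as follows; the factors are placeholders, not the validated facility parameters.

        import numpy as np

        def split_or_roulette(weight, inside_region, n_split=8,
                              survival=0.25, rng=None):
            # Inside the region of interest: split into n_split copies with
            # reduced weight. Outside: play Russian roulette, boosting the
            # weight of survivors so the estimator stays unbiased.
            rng = rng or np.random.default_rng()
            if inside_region:
                return [weight / n_split] * n_split
            if rng.random() < survival:
                return [weight / survival]
            return []                                 # particle terminated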

  20. How effective and efficient are multiobjective evolutionary algorithms at hydrologic model calibration?

    Science.gov (United States)

    Tang, Y.; Reed, P.; Wagener, T.

    2006-05-01

    This study provides a comprehensive assessment of state-of-the-art evolutionary multiobjective optimization (EMO) tools' relative effectiveness in calibrating hydrologic models. The relative computational efficiency, accuracy, and ease-of-use of the following EMO algorithms are tested: Epsilon Dominance Nondominated Sorted Genetic Algorithm-II (ε-NSGAII), the Multiobjective Shuffled Complex Evolution Metropolis algorithm (MOSCEM-UA), and the Strength Pareto Evolutionary Algorithm 2 (SPEA2). This study uses three test cases to compare the algorithms' performances: (1) a standardized test function suite from the computer science literature, (2) a benchmark hydrologic calibration test case for the Leaf River near Collins, Mississippi, and (3) a computationally intensive integrated surface-subsurface model application in the Shale Hills watershed in Pennsylvania. One challenge and contribution of this work is the development of a methodology for comprehensively comparing EMO algorithms that have different search operators and randomization techniques. Overall, SPEA2 attained competitive to superior results for most of the problems tested in this study. The primary strengths of the SPEA2 algorithm lie in its search reliability and its diversity preservation operator. The biggest challenge in maximizing the performance of SPEA2 lies in specifying an effective archive size without a priori knowledge of the Pareto set. In practice, this would require significant trial-and-error analysis, which is problematic for more complex, computationally intensive calibration applications. ε-NSGAII appears to be superior to MOSCEM-UA and competitive with SPEA2 for hydrologic model calibration. ε-NSGAII's primary strength lies in its ease-of-use due to its dynamic population sizing and archiving, which lead to rapid convergence to very high quality solutions with minimal user input. MOSCEM-UA is best suited for hydrologic model calibration applications that have small parameter sets.
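
    The ε-dominance test at the heart of ε-NSGAII can be written compactly; the additive form for minimization used here is one common convention, sketched for illustration only.

        def eps_dominates(a, b, eps):
            # a epsilon-dominates b (minimization) when a is within eps of
            # being at least as good in every objective and strictly better
            # in at least one.
            no_worse = all(ai <= bi + e for ai, bi, e in zip(a, b, eps))
            better = any(ai < bi + e for ai, bi, e in zip(a, b, eps))
            return no_worse and better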

  1. Thermodynamics of long supercoiled molecules: insights from highly efficient Monte Carlo simulations.

    Science.gov (United States)

    Lepage, Thibaut; Képès, François; Junier, Ivan

    2015-07-01

    Supercoiled DNA polymer models for which the torsional energy depends on the total twist of molecules (Tw) are a priori well suited for thermodynamic analysis of long molecules. So far, nevertheless, the exact determination of Tw in these models has been based on a computation of the writhe of the molecules (Wr) by exploiting the conservation of the linking number, Lk = Tw + Wr, which reflects topological constraints coming from the helical nature of DNA. Because Wr is equal to the number of times the main axis of a DNA molecule winds around itself, current Monte Carlo algorithms have a quadratic time complexity, O(L^2), with respect to the contour length (L) of the molecules. Here, we present an efficient method to compute Tw exactly, leading in principle to algorithms with a linear complexity, which in practice is O(L^1.2). Specifically, we use a discrete wormlike chain that includes the explicit double-helix structure of DNA and where the linking number is conserved by continuously preventing the generation of twist between any two consecutive cylinders of the discretized chain. As an application, we show that long (up to 21 kbp) linear molecules stretched by mechanical forces akin to magnetic tweezers contain, in the buckling regime, multiple and branched plectonemes that often coexist with curls and helices, and whose length and number are in good agreement with experiments. By attaching the ends of the molecules to a reservoir of twists with which these can exchange helix turns, we also show how to compute the torques in these models. As an example, we report values that are in good agreement with experiments and that concern the longest molecules that have been studied so far (16 kbp).

  2. Monte Carlo simulation of efficient data acquisition for an entire-body PET scanner

    Energy Technology Data Exchange (ETDEWEB)

    Isnaini, Ismet; Obi, Takashi [Tokyo Institute of Technology, 4259 Nagatsuta-cho, Midori-ku, Yokohama 226-8503 (Japan); Yoshida, Eiji, E-mail: rush@nirs.go.jp [National Institute of Radiological Sciences, 4-9-1 Inage-ku, Chiba 263-8555 (Japan); Yamaya, Taiga [National Institute of Radiological Sciences, 4-9-1 Inage-ku, Chiba 263-8555 (Japan)

    2014-07-01

    Conventional PET scanners can image the whole body using many bed positions. On the other hand, an entire-body PET scanner with an extended axial FOV, which can trace whole-body uptake images at the same time and improve sensitivity dynamically, has been desired. The entire-body PET scanner would have to process a large amount of data effectively. As a result, the entire-body PET scanner has high dead time at the multiplex detector grouping process. Also, the entire-body PET scanner has many oblique lines-of-response. In this work, we study efficient data acquisition for the entire-body PET scanner using Monte Carlo simulation. The simulated entire-body PET scanner, based on depth-of-interaction detectors, has a 2016-mm axial field-of-view (FOV) and an 80-cm ring diameter. Since the entire-body PET scanner has higher single data loss than a conventional PET scanner at the grouping circuits, the NECR of the entire-body PET scanner decreases. However, single data loss is mitigated by separating the axially arranged detectors into multiple parts. Our choice of 3 groups of axially-arranged detectors was shown to increase the peak NECR by 41%. An appropriate choice of maximum ring difference (MRD) will also maintain the same high sensitivity and high peak NECR while at the same time reducing the data size. The extremely oblique lines of response for the large axial FOV do not contribute much to the performance of the scanner. The total sensitivity with full MRD increased by only 15% compared with about half MRD. The peak NECR was saturated at about half MRD. The entire-body PET scanner promises to provide a large axial FOV and to have sufficient performance values without using the full data.

  3. Monte Carlo-derived TLD cross-calibration factors for treatment verification and measurement of skin dose in accelerated partial breast irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Garnica-Garza, H M [Centro de Investigacion y de Estudios Avanzados del Instituto Politecnico Nacional Unidad Monterrey, VIa del Conocimiento 201 Parque de Investigacion e Innovacion Tecnologica, Apodaca NL C.P. 66600 (Mexico)], E-mail: hgarnica@cinvestav.mx

    2009-03-21

    Monte Carlo simulation was employed to calculate the response of TLD-100 chips under irradiation conditions such as those found during accelerated partial breast irradiation with the MammoSite radiation therapy system. The absorbed dose versus radius in the last 0.5 cm of the treated volume was also calculated, employing a resolution of 20 μm, and a function that fits the observed data was determined. Several clinically relevant irradiation conditions were simulated for different combinations of balloon size, balloon-to-surface distance and contents of the contrast solution used to fill the balloon. The thermoluminescent dosemeter (TLD) cross-calibration factors were derived assuming that the calibration of the dosemeters was carried out using a Cobalt 60 beam, and in such a way that they provide a set of parameters that reproduce the function that describes the behavior of the absorbed dose versus radius curve. Such factors may also prove to be useful for those standardized laboratories that provide postal dosimetry services.

  4. The development of an efficient mass balance approach for the purity assignment of organic calibration standards.

    Science.gov (United States)

    Davies, Stephen R; Alamgir, Mahiuddin; Chan, Benjamin K H; Dang, Thao; Jones, Kai; Krishnaswami, Maya; Luo, Yawen; Mitchell, Peter S R; Moawad, Michael; Swan, Hilton; Tarrant, Greg J

    2015-10-01

    The purity determination of organic calibration standards using the traditional mass balance approach is described. Demonstrated examples highlight the potential for bias in each measurement and the need to implement an approach that provides a cross-check for each result, affording fit-for-purpose purity values in a timely and cost-effective manner. Chromatographic techniques such as gas chromatography with flame ionisation detection (GC-FID) and high-performance liquid chromatography with UV detection (HPLC-UV), combined with mass and NMR spectroscopy, provide a detailed impurity profile allowing an efficient conversion of chromatographic peak areas into relative mass fractions, generally avoiding the need to calibrate each impurity present. For samples analysed by GC-FID, a conservative measurement uncertainty budget is described, including a component to cover potential variations in the response of each unidentified impurity. An alternative approach is also detailed in which extensive purification eliminates the detector response factor issue, facilitating the certification of a super-pure calibration standard which can be used to quantify the main component in less-pure candidate materials. This latter approach is particularly useful when applying HPLC analysis with UV detection. Key to the success of this approach is the application of both qualitative and quantitative 1H NMR spectroscopy. PMID: 26342310
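
    The arithmetic of the traditional mass balance assignment is worth making explicit; function and variable names are illustrative, and all inputs are mass fractions except the chromatographic main-peak area fraction.

        def mass_balance_purity(main_peak_area_frac, water, residual_solvent,
                                nonvolatiles):
            # Purity = (organic fraction remaining after subtracting water,
            # residual solvent and non-volatile content) scaled by the
            # main-peak fraction of that organic material seen
            # chromatographically.
            organic = 1.0 - (water + residual_solvent + nonvolatiles)
            return main_peak_area_frac * organic

        # e.g. 99.5 % main peak, 0.2 % water, 0.1 % solvent, 0.05 % ash
        purity = mass_balance_purity(0.995, 0.002, 0.001, 0.0005)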

  5. Calibration of STUD+ parameters to achieve optimally efficient broadband adiabatic decoupling in a single transient

    Science.gov (United States)

    Bendall; Skinner

    1998-10-01

    To provide the most efficient conditions for spin decoupling with the least RF power, master calibration curves are provided for the maximum centerband amplitude, and the minimum amplitude for the largest cycling sideband, resulting from STUD+ adiabatic decoupling applied during a single free induction decay. The principal curve is defined as a function of the four most critical experimental input parameters: the maximum amplitude of the RF field, RFmax, the length of the sech/tanh pulse, Tp, the extent of the frequency sweep, bwdth, and the coupling constant, Jo. Less critical parameters, the effective (or actual) decoupled bandwidth, bweff, and the sech/tanh truncation factor, beta, which become more important as bwdth is decreased, are calibrated in separate curves. The relative importance of nine additional factors in determining optimal decoupling performance in a single transient is considered. Specific parameters for efficient adiabatic decoupling can be determined via a set of four equations, which will be most useful for 13C decoupling, covering the range of one-bond 13C-1H coupling constants from 125 to 225 Hz, and decoupled bandwidths of 7 to 100 kHz, a bandwidth of 100 kHz being the requirement for a 2 GHz spectrometer. The four equations are derived from a recent vector model of adiabatic decoupling, and experiment, supported by computer simulations. The vector model predicts an inverse linear relation between the centerband and maximum sideband amplitudes, and it predicts a simple parabolic relationship between the maximum sideband amplitude and the product JoTp. The ratio bwdth/(RFmax)^2 can be viewed as a characteristic time scale, tauc, affecting sideband levels, with tauc approximately equal to Tp giving the most efficient STUD+ decoupling, as suggested by the adiabatic condition. Functional relationships between bwdth and the less critical parameters, bweff and beta, for efficient decoupling can be derived from Bloch-equation calculations of the inversion profile

  6. Determination of relative efficiency of a detector using Monte Carlo method; Determinacao da eficiencia relativa de um detector usando metodo de Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Medeiros, M.P.C.; Rebello, W.F., E-mail: eng.cavaliere@ime.eb.br, E-mail: rebello@ime.eb.br [Instituto Militar de Engenharia (IME), Rio de Janeiro, RJ (Brazil). Secao de Engenharia Nuclear; Lopes, J.M.; Silva, A.X., E-mail: marqueslopez@yahoo.com.br, E-mail: ademir@nuclear.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear

    2015-07-01

    High-purity germanium (HPGe) detectors are mandatory tools for spectrometry because of their excellent energy resolution. The efficiency of such detectors, quoted in the list of specifications by the manufacturer, frequently refers to the relative full-energy peak efficiency, related to the absolute full-energy peak efficiency of a 7.6 cm x 7.6 cm (diameter x height) NaI(Tl) crystal, based on the 1.33 MeV peak of a 60Co source positioned 25 cm from the detector. In this study, we used the MCNPX code to simulate an HPGe detector (Canberra GC3020), from the Real-Time Neutrongraphy Laboratory of UFRJ, to survey the spectrum of a 60Co source located 25 cm from the detector in order to calculate and confirm the efficiency declared by the manufacturer. Agreement between experimental and simulated data was achieved. The model under development will be used for calculation and comparison purposes with the detector calibration curve from the software Genie2000™, also serving as a reference for future studies. (author)
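
    The quantity being confirmed can be computed directly from a measured 60Co spectrum; the sketch below assumes the conventional reference value of 1.2e-3 for the absolute 1.33 MeV full-energy-peak efficiency of a 7.6 cm x 7.6 cm NaI(Tl) crystal at 25 cm, and all names are illustrative.

        def relative_efficiency(net_peak_counts, live_time_s, activity_bq,
                                branching=0.9998, nai_ref=1.2e-3):
            # Absolute full-energy-peak efficiency of the HPGe for the
            # 1332.5 keV line of 60Co at 25 cm, divided by the NaI(Tl)
            # reference value, gives the manufacturer-style relative figure.
            abs_eff = net_peak_counts / (live_time_s * activity_bq * branching)
            return abs_eff / nai_ref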

  7. Calibrating Self-Reported Measures of Maternal Smoking in Pregnancy via Bioassays Using a Monte Carlo Approach

    Directory of Open Access Journals (Sweden)

    Lauren S. Wakschlag

    2009-06-01

    Maternal smoking during pregnancy is a major public health problem that has been associated with numerous short- and long-term adverse health outcomes in offspring. However, characterizing smoking exposure during pregnancy precisely has been rather difficult: self-reported measures of smoking often suffer from recall bias, deliberate misreporting, and selective non-disclosure, while single bioassay measures of nicotine metabolites only reflect recent smoking history and cannot capture the fluctuating and complex patterns of varying exposure of the fetus. Recently, Dukic et al. [1] proposed a statistical method for combining information from both sources in order to increase the precision of the exposure measurement and the power to detect more subtle effects of smoking. In this paper, we extend the Dukic et al. [1] method to incorporate individual variation of the metabolic parameters (such as clearance rates) into the calibration model of smoking exposure during pregnancy. We apply the new method to the Family Health and Development Project (FHDP), a small convenience sample of 96 predominantly working-class white pregnant women oversampled for smoking. We find that, on average, misreporters smoke 7.5 cigarettes more than they report, with about one third underreporting by 1.5, one third by about 6.5, and one third by 8.5 cigarettes. Partly due to the limited demographic heterogeneity in the FHDP sample, the results are similar to those obtained by the deterministic calibration model, whose adjustments were slightly lower (by 0.5 cigarettes on average). The new results are also, as expected, less sensitive to assumed values of the cotinine half-life.

  8. The Adjoint Monte Carlo - a viable option for efficient radiotherapy treatment planning

    International Nuclear Information System (INIS)

    In cancer therapy using collimated beams of photons, the radiation oncologist must determine a set of beams that delivers the required dose to each point in the tumor and minimizes the risk of damage to the healthy tissue and vital organs. Currently, the oncologist determines these beams iteratively, by using a sequence of dose calculations using approximate numerical methods. In this paper, a more accurate and potentially faster approach, based on the Adjoint Monte Carlo method, is presented (authors)

  9. Nonequilibrium candidate Monte Carlo is an efficient tool for equilibrium simulation

    Energy Technology Data Exchange (ETDEWEB)

    Nilmeier, J. P.; Crooks, G. E.; Minh, D. D. L.; Chodera, J. D.

    2011-10-24

    Metropolis Monte Carlo simulation is a powerful tool for studying the equilibrium properties of matter. In complex condensed-phase systems, however, it is difficult to design Monte Carlo moves with high acceptance probabilities that also rapidly sample uncorrelated configurations. Here, we introduce a new class of moves based on nonequilibrium dynamics: candidate configurations are generated through a finite-time process in which a system is actively driven out of equilibrium, and accepted with criteria that preserve the equilibrium distribution. The acceptance rule is similar to the Metropolis acceptance probability, but related to the nonequilibrium work rather than the instantaneous energy difference. Our method is applicable to sampling from both a single thermodynamic state or a mixture of thermodynamic states, and allows both coordinates and thermodynamic parameters to be driven in nonequilibrium proposals. While generating finite-time switching trajectories incurs an additional cost, driving some degrees of freedom while allowing others to evolve naturally can lead to large enhancements in acceptance probabilities, greatly reducing structural correlation times. Using nonequilibrium driven processes vastly expands the repertoire of useful Monte Carlo proposals in simulations of dense solvated systems.
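
    One way to realize such a driven proposal, sketched for a 1-D system with a time-symmetric protocol in a parameter lambda (the potential U, schedule and step sizes are all assumptions): perturbation steps accumulate protocol work, relaxation steps are ordinary Metropolis at fixed lambda, and the caller accepts the trajectory with probability min(1, exp(-beta*W)).

        import numpy as np

        def ncmc_proposal(x, beta, U, lam_schedule, n_relax, step, rng):
            # Returns the candidate configuration and the accumulated protocol
            # work W; the caller accepts with probability min(1, exp(-beta*W))
            # and otherwise keeps the initial configuration.
            W, lam_old = 0.0, lam_schedule[0]
            for lam in lam_schedule[1:]:
                W += U(x, lam) - U(x, lam_old)        # perturbation work
                lam_old = lam
                for _ in range(n_relax):              # relax at fixed lambda
                    y = x + rng.normal(0.0, step)
                    if np.log(rng.random()) < -beta * (U(y, lam) - U(x, lam)):
                        x = y
            return x, W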

  10. CdTe detector efficiency calibration using thick targets of pure and stable compounds

    Science.gov (United States)

    Chaves, P. C.; Taborda, A.; Reis, M. A.

    2012-02-01

    Quantitative PIXE measurements require perfectly calibrated set-ups. Cooled CdTe detectors have good efficiency for energies above those covered by Si(Li) detectors and open up the possibility of studying K X-ray lines instead of L X-ray lines for medium and eventually heavy elements, which is an important advantage in various cases if only limited-resolution systems are available in the low energy range. In this work we present and discuss spectra from a CdTe semiconductor detector covering the energy region from Cu (Kα1 = 8.047 keV) to U (Kα1 = 98.439 keV). Pure thick samples were irradiated with proton beams at the ITN 3.0 MV Tandetron accelerator in the High Resolution High Energy PIXE set-up. Results and the application to the study of a Portuguese Ossa Morena region Dark Stone sample are presented in this work.

  11. CdTe detector efficiency calibration using thick targets of pure and stable compounds

    International Nuclear Information System (INIS)

    Quantitative PIXE measurements require perfectly calibrated set-ups. Cooled CdTe detectors have good efficiency for energies above those covered by Si(Li) detectors and open up the possibility of studying K X-ray lines instead of L X-ray lines for medium and eventually heavy elements, which is an important advantage in various cases if only limited-resolution systems are available in the low energy range. In this work we present and discuss spectra from a CdTe semiconductor detector covering the energy region from Cu (Kα1 = 8.047 keV) to U (Kα1 = 98.439 keV). Pure thick samples were irradiated with proton beams at the ITN 3.0 MV Tandetron accelerator in the High Resolution High Energy PIXE set-up. Results and the application to the study of a Portuguese Ossa Morena region Dark Stone sample are presented in this work.

  12. Precise Efficiency Calibration of an HPGe Detector Using the Decay of 180m Hf

    International Nuclear Information System (INIS)

    Superallowed 0+ → 0+ nuclear beta decays provide both the best test of the Conserved Vector Current (CVC) hypothesis and, together with the muon lifetime, the most accurate value for the up-down quark-mixing matrix element, Vud, of the Cabibbo-Kobayashi-Maskawa (CKM) matrix. This matrix should be unitary, and experimental verification of that expectation constitutes an important test of the Standard Model. Aiming for a definitive test of CKM unitarity, we have mounted a program at Texas A and M University to establish (or eliminate) the discrepancy with unitarity. One correction accounts for isospin symmetry breaking, and its accuracy can be tested by measurements of the ft-values of Tz = -1 superallowed emitters (e.g. 22Mg and 30S) to a precision of about ±0.1%. A requirement for these measurements is a detector whose detection efficiency is known to the same precision. However, calibration of a detector's efficiency to this level of precision is extremely challenging since very few sources provide γ-rays whose intensities (relative or absolute) are known to better than ±0.5%. The isomer 180mHf (t1/2 = 5.5 h) provides a very precise γ-ray calibration source in the 90 to 330 keV energy range. The decay of 180mHf to the 180Hf ground state includes a cascade of three consecutive E2 γ-ray transitions of energies 93.3, 215.2 and 332.3 keV with no other feeding of the intermediate states. This provides a uniquely well-known calibration standard since the relative γ-ray intensities emitted depend only on the calculated E2 conversion coefficients. The 180mHf isomer was produced by irradiation of a 0.91 mg sample of HfO2, isotopically enriched to 87% in 179Hf, at the TRIGA reactor in the TAMU Nuclear Science Center. In order to minimise the self-absorption of γ-rays in Hf, we required a thin source that was prepared following a procedure described by Kellog and Norman. The activated HfO2 sample was dissolved in 0.50 ml of hot 48% HF acid to

  13. A regional application of the MAGIC model in Wales: calibration and assessment of future recovery using a Monte-Carlo approach

    Directory of Open Access Journals (Sweden)

    C. E. M. Sefton

    1998-01-01

    A survey and resurvey of 77 headwater streams in Wales provides an opportunity for assessing changes in streamwater chemistry in the region. The Model of Acidification of Groundwater In Catchments (MAGIC) has been calibrated to the second of the two surveys, taken in 1994-1995, using a Monte-Carlo methodology. The first survey, 1983-1984, provides a basis for model validation. The model simulates a significant decline of water quality across the region since industrialisation. Agreed reductions in sulphur (S) emissions in Europe in accordance with the Second S Protocol will result in a 49% reduction of S deposition across Wales from 1996 to 2010. In response to these reductions, the proportion of streams in the region with mean annual acid neutralising capacity (ANC) > 0 is predicted to increase from 81% in 1995 to 90% by 2030. The greatest recovery between 1984 and 1995 and into the future is at those streams with low ANC. In order to ensure that streams in the most heavily acidified areas of Wales recover to ANC zero by 2030, a reduction of S deposition of 80-85% will be required.

  14. Experimental characterization and Monte Carlo simulation of Si(Li) detector efficiency by radioactive sources and PIXE

    Energy Technology Data Exchange (ETDEWEB)

    Mesradi, M. [Institut Pluridisciplinaire Hubert-Curien, UMR 7178 CNRS/IN2P3 et Universite Louis Pasteur, 23 rue du Loess, BP 28, F-67037 Strasbourg Cedex 2 (France); Elanique, A. [Departement de Physique, FS/BP 8106, Universite Ibn Zohr, Agadir, Maroc (Morocco); Nourreddine, A. [Institut Pluridisciplinaire Hubert-Curien, UMR 7178 CNRS/IN2P3 et Universite Louis Pasteur, 23 rue du Loess, BP 28, F-67037 Strasbourg Cedex 2 (France)], E-mail: abdelmjid.nourreddine@ires.in2p3.fr; Pape, A.; Raiser, D.; Sellam, A. [Institut Pluridisciplinaire Hubert-Curien, UMR 7178 CNRS/IN2P3 et Universite Louis Pasteur, 23 rue du Loess, BP 28, F-67037 Strasbourg Cedex 2 (France)

    2008-06-15

    This work relates to the study and characterization of the response function of an X-ray spectrometry system. The intrinsic efficiency of a Si(Li) detector has been simulated with the Monte Carlo codes MCNP and GEANT4 in the photon energy range of 2.6-59.5 keV. After finding it necessary to take a radiograph of the detector inside its cryostat to learn the correct dimensions, agreement within 10% between the simulations and experimental measurements with several point-like sources and PIXE results was obtained.

  15. Ge well detector calibration by means of a trial and error procedure using the dead layers as a unique parameter in a Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Courtine, Fabien; Pilleyre, Thierry; Sanzelle, Serge [Laboratoire de Physique Corpusculaire, IN2P3-CNRS, Universite Blaise Pascal, F-63177 Aubiere Cedex (France); Miallier, Didier [Laboratoire de Physique Corpusculaire, IN2P3-CNRS, Universite Blaise Pascal, F-63177 Aubiere Cedex (France)], E-mail: miallier@clermont.in2p3.fr

    2008-11-01

    The project aimed at modelling an HPGe well detector with a view to predicting its photon-counting efficiency by means of the Monte Carlo simulation code GEANT4. Although a qualitative and quantitative description of the crystal and housing was available, uncertainties were associated with the parameters controlling the detector response. This induced poor agreement between the efficiency calculated on the basis of nominal data and the actual efficiency experimentally measured with a 137Cs point source. It was then decided to improve the model by parameterization of a trial and error method. The distribution of the dead layers was adopted as a unique parameter, in order to explore the possibilities and pertinence of this parameter. In the course of the work, it appeared necessary to introduce the possibility that the thickness of the dead layers was not uniform over a given surface. At the end of the process, the results allowed us to conclude that the approach was able to give a model adapted to practical application with a satisfactory precision in the calculated efficiency. The pattern of the 'dead layers' that was obtained is characterized by a variable thickness, which seems to be physically relevant. It implicitly and partly accounts for effects that do not originate from actual dead layers, such as incomplete charge collection. But such effects, which are not easily accounted for, can, in a first approximation, be represented by 'dead layers'; this is an advantage of the parameterization that was adopted.

  16. Efficient Monte Carlo simulations using a shuffled nested Weyl sequence random number generator.

    Science.gov (United States)

    Tretiakov, K V; Wojciechowski, K W

    1999-12-01

    The pseudorandom number generator proposed recently by Holian et al. [B. L. Holian, O. E. Percus, T. T. Warnock, and P. A. Whitlock, Phys. Rev. E 50, 1607 (1994)] is tested via Monte Carlo computation of the free energy difference between the defectless hcp and fcc hard sphere crystals by the Frenkel-Ladd method [D. Frenkel and A. J. C. Ladd, J. Chem. Phys. 81, 3188 (1984)]. It is shown that this fast generator, convenient for parallel computing, gives results in good agreement with results obtained by other generators. An estimate of high accuracy is obtained for the hcp-fcc free energy difference near melting. PMID: 11970727
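
    The nested Weyl construction at the core of that generator is easy to state; the sketch below omits the shuffling pass of Holian et al. and should be read as an illustration only, since double-precision round-off limits the usable sequence length.

        import numpy as np

        def nested_weyl(n, alpha=np.sqrt(2.0)):
            # x_k = frac(k * frac(k * alpha)) for k = 1..n, alpha irrational
            # (typically the square root of a prime). Floating-point error in
            # k*alpha degrades quality for very large k, one motivation for
            # the shuffled variant.
            k = np.arange(1, n + 1, dtype=np.float64)
            return (k * ((k * alpha) % 1.0)) % 1.0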

  17. Efficient data management techniques implemented in the Karlsruhe Monte Carlo code KAMCCO

    International Nuclear Information System (INIS)

    The Karlsruhe Monte Carlo Code KAMCCO is a forward neutron transport code with an eigenfunction and a fixed source option, including time-dependence. A continuous energy model is combined with a detailed representation of neutron cross sections, based on linear interpolation, Breit-Wigner resonances and probability tables. All input is processed into densely packed, dynamically addressed parameter fields and networks of pointers (addresses). Estimation routines are decoupled from random walk and analyze a storage region with sample records. This technique leads to fast execution with moderate storage requirements and without any I/O-operations except in the input and output stages. 7 references. (U.S.)

  18. On stochastic error and computational efficiency of the Markov Chain Monte Carlo method

    KAUST Repository

    Li, Jun

    2014-01-01

    In Markov Chain Monte Carlo (MCMC) simulations, thermal equilibrium quantities are estimated by ensemble averages over a sample set containing a large number of correlated samples. These samples are selected in accordance with the probability distribution function, known from the partition function of the equilibrium state. As the stochastic error of the simulation results is significant, it is desirable to understand the variance of the estimation by ensemble average, which depends on the sample size (i.e., the total number of samples in the set) and the sampling interval (i.e., the cycle number between two consecutive samples). Although large sample sizes reduce the variance, they increase the computational cost of the simulation. For a given CPU time, the sample size can be reduced greatly by increasing the sampling interval, while having the corresponding increase in variance be negligible if the original sampling interval is very small. In this work, we report a few general rules that relate the variance to the sample size and the sampling interval. These results are observed and confirmed numerically. These variance rules are derived for the MCMC method but are also valid for correlated samples obtained using other Monte Carlo methods. The main contribution of this work includes the theoretical proof of these numerical observations and the set of assumptions that lead to them. © 2014 Global-Science Press.
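
    The dependence of the ensemble-average variance on correlations can be made concrete with the standard integrated autocorrelation time estimator, Var(mean) ≈ Var(x) · 2τint/N for a chain of N correlated samples; the window choice below is an assumption.

        import numpy as np

        def integrated_autocorr_time(x, window=200):
            # tau_int = 1/2 + sum_k rho(k) over a finite window; thinning the
            # chain by about 2*tau_int leaves nearly independent samples.
            x = np.asarray(x, dtype=float) - np.mean(x)
            acf = np.correlate(x, x, mode="full")[x.size - 1:]
            acf /= acf[0]
            return 0.5 + acf[1:window].sum()

        def error_of_mean(x):
            # Correlation-corrected standard error of the ensemble average.
            tau = integrated_autocorr_time(x)
            return np.std(x, ddof=1) * np.sqrt(2.0 * tau / len(x))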

  19. Development of self-learning Monte Carlo technique for more efficient modeling of nuclear logging measurements

    International Nuclear Information System (INIS)

    The self-learning Monte Carlo technique has been implemented to the commonly used general purpose neutron transport code MORSE, in order to enhance sampling of the particle histories that contribute to a detector response. The parameters of all the biasing techniques available in MORSE, i.e. of splitting, Russian roulette, source and collision outgoing energy importance sampling, path length transformation and additional biasing of the source angular distribution are optimized. The learning process is iteratively performed after each batch of particles, by retrieving the data concerning the subset of histories that passed the detector region and energy range in the previous batches. This procedure has been tested on two sample problems in nuclear geophysics, where an unoptimized Monte Carlo calculation is particularly inefficient. The results are encouraging, although the presented method does not directly minimize the variance and the convergence of our algorithm is restricted by the statistics of successful histories from previous random walk. Further applications for modeling of the nuclear logging measurements seem to be promising. 11 refs., 2 figs., 3 tabs. (author)

  20. Thermal inertia and energy efficiency – Parametric simulation assessment on a calibrated case study

    International Nuclear Information System (INIS)

    Highlights: • We perform a parametric simulation study on a calibrated building energy model. • We introduce adaptive shadings and night free cooling in the simulations. • We analyze the effect of thermal capacity on the parametric simulation results. • We recognize that cooling demand and savings scale linearly with thermal capacity. • We assess the advantage of medium-heavy over medium and light configurations. - Abstract: The reduction of energy consumption for heating and cooling services in the existing building stock is a key challenge for global sustainability today, and building envelope retrofit is one of the main issues. Most existing building envelopes have low levels of insulation, high thermal losses due to thermal bridges and cracks, absence of appropriate solar control, etc. Further, in building refurbishment, the importance of a system-level approach is often undervalued in favour of simplistic “off the shelf” efficient solutions, focused on the reduction of thermal transmittance and on the enhancement of solar control capabilities. In many cases, the importance of the dynamic thermal properties is neglected or underestimated, and the effective thermal capacity is not properly considered as one of the design parameters. The research presented aims to critically assess the influence of the dynamic thermal properties of the building fabric (roof, walls and floors) on sensible heating and cooling energy demand for a case study. The case study chosen is an existing office building which has been retrofitted in recent years and whose energy model has been calibrated according to the data collected in the monitoring process. The research illustrates the variations of the sensible thermal energy demand of the building in different retrofit scenarios, and relates them to the variations of the dynamic thermal properties of the construction components. A parametric simulation study has been performed, encompassing the use of

  1. Efficiency Calibration of LaBr3(Ce) γ Spectroscopy in Analyzing Radionucles in Reactor Loop Water

    Institute of Scientific and Technical Information of China (English)

    CHEN; Xi-lin; QIN; Guo-xiu; GUO; Xiao-qing; CHEN; Yong-yong; MENG; Jun

    2013-01-01

    Monitoring the occurrence and radioactivity concentration of fission products in nuclear reactor loop water is important for reactor safety evaluation, accident prevention and the protection of working personnel. Study on the efficiency calibration for a LaBr3(Ce) detector experimental

  2. CdTe detector efficiency calibration using thick targets of pure and stable compounds

    Energy Technology Data Exchange (ETDEWEB)

    Chaves, P.C.; Taborda, A., E-mail: ataborda@itn.pt; Reis, M.A.

    2012-02-15

    Quantitative PIXE measurements require perfectly calibrated set-ups. Cooled CdTe detectors have good efficiency for energies above those covered by Si(Li) detectors and open up the possibility of studying K X-ray lines instead of L X-ray lines for medium and eventually heavy elements, which is an important advantage in various cases if only limited-resolution systems are available in the low energy range. In this work we present and discuss spectra from a CdTe semiconductor detector covering the energy region from Cu (Kα1 = 8.047 keV) to U (Kα1 = 98.439 keV). Pure thick samples were irradiated with proton beams at the ITN 3.0 MV Tandetron accelerator in the High Resolution High Energy PIXE set-up. Results and the application to the study of a Portuguese Ossa Morena region Dark Stone sample are presented in this work.

  3. Monte Carlo calculations of the free energy of binary sII hydrogen clathrate hydrates for identifying efficient promoter molecules.

    Science.gov (United States)

    Atamas, Alexander A; Cuppen, Herma M; Koudriachova, Marina V; de Leeuw, Simon W

    2013-01-31

    The thermodynamics of binary sII hydrogen clathrates with secondary guest molecules is studied with Monte Carlo simulations. The small cages of the sII unit cell are occupied by one H2 guest molecule. Different promoter molecules entrapped in the large cages are considered. Simulations are conducted at a pressure of 1000 atm in a temperature range of 233-293 K. To determine the stabilizing effect of different promoter molecules on the clathrate, the Gibbs free energies of fully and partially occupied sII hydrogen clathrates are calculated. Our aim is to predict what would be an efficient promoter molecule using properties such as size, dipole moment, and hydrogen bonding capability. The gas clathrate configurational and free energies are compared. The entropy makes a considerable contribution to the free energy and should be taken into account in determining the stability conditions of binary sII hydrogen clathrates.

  4. An investigation of HPGe gamma efficiency calibration software (ANGLE V.3) for applications in nuclear decommissioning.

    Science.gov (United States)

    Bell, S J; Judge, S M; Regan, P H

    2012-12-01

    High resolution gamma spectrometry offers a rapid method to characterise waste materials on a decommissioning nuclear site. To meet regulatory requirements, measurements must be traceable to national standards, meaning that the spectrometers must be calibrated for a wide range of materials. Semi-empirical modelling software (such as ANGLE™) offers a convenient method to carry out such calibrations. This paper describes an assessment of the modelling software for use by a small laboratory based on a nuclear site. The results confirmed the need for accurate information on the detector construction if the calibration is to be accurate to within 10%. PMID: 23041778

  5. Monte-Carlo simulation to determine detector efficiency of plastic scintillating fiber

    Institute of Scientific and Technical Information of China (English)

    Mohammad Mehdi NASSERI; MA Qing-Li; YIN Ze-Jie; WU Xiao-Yi

    2004-01-01

    Fundamental characteristics of the plastic scintillating fiber (PSF) as a detector for electromagnetic radiation (X & γ) are obtained with the GEANT4 detector simulation tool package. The detector response to radiation with energies of 10-400 keV is determined. The energy deposition as well as the detection efficiency (DE) of the PSF are studied. In order to make a linear array of the PSF for imaging purposes, the optimum length of fiber is also estimated.

  6. Efficiency of radiation protection equipment in interventional radiology: a systematic Monte Carlo study of eye lens and whole body doses

    International Nuclear Information System (INIS)

    Monte Carlo calculations were used to investigate the efficiency of radiation protection equipment in reducing eye and whole body doses during fluoroscopically guided interventional procedures. Eye lens doses were determined considering different models of eyewear with various shapes, sizes and lead thickness. The origin of scattered radiation reaching the eyes was also assessed to explain the variation in the protection efficiency of the different eyewear models with exposure conditions. The work also investigates the variation of eye and whole body doses with ceiling-suspended shields of various shapes and positioning. For all simulations, a broad spectrum of configurations typical for most interventional procedures was considered. Calculations showed that ‘wrap around’ glasses are the most efficient eyewear models reducing, on average, the dose by 74% and 21% for the left and right eyes respectively. The air gap between the glasses and the eyes was found to be the primary source of scattered radiation reaching the eyes. The ceiling-suspended screens were more efficient when positioned close to the patient’s skin and to the x-ray field. With the use of such shields, the Hp(10) values recorded at the collar, chest and waist level and the Hp(3) values for both eyes were reduced on average by 47%, 37%, 20% and 56% respectively. Finally, simulations proved that beam quality and lead thickness have little influence on eye dose while beam projection, the position and head orientation of the operator as well as the distance between the image detector and the patient are key parameters affecting eye and whole body doses. (paper)

  7. Beta-efficiency of a typical gas-flow ionization chamber using GEANT4 Monte Carlo simulations

    Directory of Open Access Journals (Sweden)

    Hussain Abid

    2011-01-01

    GEANT4 based Monte Carlo simulations have been carried out for the determination of the efficiency and conversion factors of a gas-flow ionization chamber for beta particles emitted by 86 different radioisotopes covering the average-β energy range of 5.69 keV-2.061 MeV. Good agreement was found between the GEANT4 predicted values and the corresponding experimental data, as well as with EGS4 based calculations. For the reported set of β-emitters, the values of the conversion factor have been established in the range of 0.5×10^13-2.5×10^13 Bq·cm^-3/A. The computed xenon-to-air conversion factor ratios attain a minimum value of 0.2 in the range of 0.1-1 MeV. As the radius and/or volume of the ion chamber increases, the conversion factors approach a flat energy response. These simulations show a small but significant dependence of the ionization efficiency on the type of wall material.

  8. Broad-band efficiency calibration of ITER bolometer prototypes using Pt absorbers on SiN membranes.

    Science.gov (United States)

    Meister, H; Willmeroth, M; Zhang, D; Gottwald, A; Krumrey, M; Scholze, F

    2013-12-01

    The energy-resolved efficiency of two bolometer detector prototypes for ITER, with 4 channels each and absorber thicknesses of 4.5 μm and 12.5 μm, respectively, has been calibrated in a broad spectral range from 1.46 eV up to 25 keV. The calibration in the energy range above 3 eV was performed against previously calibrated silicon photodiodes using monochromatized synchrotron radiation provided by five different beamlines of the Physikalisch-Technische Bundesanstalt at the electron storage rings BESSY II and Metrology Light Source in Berlin. For the measurements in the visible range, a setup was realised using monochromatized halogen lamp radiation and a calibrated laser power meter as reference. The measurements clearly demonstrate that the efficiency of the bolometer prototype detectors in the range from 50 eV up to ≈6 keV is close to unity; at a photon energy of 20 keV the bolometer with the thick absorber detects 80% of the photons, the one with the thin absorber about 50%. This indicates that the detectors will be well capable of measuring the plasma radiation expected from the standard ITER scenario. However, a minimum absorber thickness will be required for the high temperatures in the central plasma. At 11.56 keV, the sharp Pt-L3 absorption edge made it possible to cross-check the absorber thickness by fitting the measured efficiency to the theoretically expected absorption of X-rays in a homogeneous Pt layer. Furthermore, below 50 eV the efficiency first follows the losses due to the reflectance expected for Pt, but below 10 eV it is reduced further by a factor of 2 for the thick absorber and a factor of 4 for the thin absorber. Most probably, the different histories in production, storage, and operation led to varying surface conditions and additional loss channels.

  9. Broad-band efficiency calibration of ITER bolometer prototypes using Pt absorbers on SiN membranes

    International Nuclear Information System (INIS)

    The energy-resolved efficiency of two bolometer detector prototypes for ITER, with 4 channels each and absorber thicknesses of 4.5 μm and 12.5 μm, respectively, has been calibrated in a broad spectral range from 1.46 eV up to 25 keV. The calibration in the energy range above 3 eV was performed against previously calibrated silicon photodiodes using monochromatized synchrotron radiation provided by five different beamlines of the Physikalisch-Technische Bundesanstalt at the electron storage rings BESSY II and Metrology Light Source in Berlin. For the measurements in the visible range, a setup was realised using monochromatized halogen lamp radiation and a calibrated laser power meter as reference. The measurements clearly demonstrate that the efficiency of the bolometer prototype detectors in the range from 50 eV up to ≈6 keV is close to unity; at a photon energy of 20 keV the bolometer with the thick absorber detects 80% of the photons, the one with the thin absorber about 50%. This indicates that the detectors will be well capable of measuring the plasma radiation expected from the standard ITER scenario. However, a minimum absorber thickness will be required for the high temperatures in the central plasma. At 11.56 keV, the sharp Pt-L3 absorption edge made it possible to cross-check the absorber thickness by fitting the measured efficiency to the theoretically expected absorption of X-rays in a homogeneous Pt layer. Furthermore, below 50 eV the efficiency first follows the losses due to the reflectance expected for Pt, but below 10 eV it is reduced further by a factor of 2 for the thick absorber and a factor of 4 for the thin absorber. Most probably, the different histories in production, storage, and operation led to varying surface conditions and additional loss channels.

  10. Broad-band efficiency calibration of ITER bolometer prototypes using Pt absorbers on SiN membranes.

    Science.gov (United States)

    Meister, H; Willmeroth, M; Zhang, D; Gottwald, A; Krumrey, M; Scholze, F

    2013-12-01

    The energy-resolved efficiency of two bolometer detector prototypes for ITER, with 4 channels each and absorber thicknesses of 4.5 μm and 12.5 μm, respectively, has been calibrated in a broad spectral range from 1.46 eV up to 25 keV. The calibration in the energy range above 3 eV was performed against previously calibrated silicon photodiodes using monochromatized synchrotron radiation provided by five different beamlines of the Physikalisch-Technische Bundesanstalt at the electron storage rings BESSY II and Metrology Light Source in Berlin. For the measurements in the visible range, a setup was realised using monochromatized halogen lamp radiation and a calibrated laser power meter as reference. The measurements clearly demonstrate that the efficiency of the bolometer prototype detectors in the range from 50 eV up to ≈6 keV is close to unity; at a photon energy of 20 keV the bolometer with the thick absorber detects 80% of the photons, the one with the thin absorber about 50%. This indicates that the detectors will be well capable of measuring the plasma radiation expected from the standard ITER scenario. However, a minimum absorber thickness will be required for the high temperatures in the central plasma. At 11.56 keV, the sharp Pt-L3 absorption edge made it possible to cross-check the absorber thickness by fitting the measured efficiency to the theoretically expected absorption of X-rays in a homogeneous Pt layer. Furthermore, below 50 eV the efficiency first follows the losses due to the reflectance expected for Pt, but below 10 eV it is reduced further by a factor of 2 for the thick absorber and a factor of 4 for the thin absorber. Most probably, the different histories in production, storage, and operation led to varying surface conditions and additional loss channels. PMID:24387428

  11. Broad-band efficiency calibration of ITER bolometer prototypes using Pt absorbers on SiN membranes

    Energy Technology Data Exchange (ETDEWEB)

    Meister, H.; Willmeroth, M. [Max-Planck-Institut für Plasmaphysik (IPP), EURATOM Association, Boltzmannstr. 2, 85748 Garching (Germany); Zhang, D. [Max-Planck-Institut für Plasmaphysik (IPP), EURATOM Association, Teilinstitut Greifswald, Wendelsteinstraße 1, 17491 Greifswald (Germany); Gottwald, A.; Krumrey, M.; Scholze, F. [Physikalisch-Technische Bundesanstalt (PTB), Abbestraße 2-12, 10587 Berlin (Germany)

    2013-12-15

    The energy-resolved efficiency of two bolometer detector prototypes for ITER, with 4 channels each and absorber thicknesses of 4.5 μm and 12.5 μm, respectively, has been calibrated in a broad spectral range from 1.46 eV up to 25 keV. The calibration in the energy range above 3 eV was performed against previously calibrated silicon photodiodes using monochromatized synchrotron radiation provided by five different beamlines of the Physikalisch-Technische Bundesanstalt at the electron storage rings BESSY II and Metrology Light Source in Berlin. For the measurements in the visible range, a setup was realised using monochromatized halogen lamp radiation and a calibrated laser power meter as reference. The measurements clearly demonstrate that the efficiency of the bolometer prototype detectors in the range from 50 eV up to ≈6 keV is close to unity; at a photon energy of 20 keV the bolometer with the thick absorber detects 80% of the photons, the one with the thin absorber about 50%. This indicates that the detectors will be well capable of measuring the plasma radiation expected from the standard ITER scenario. However, a minimum absorber thickness will be required for the high temperatures in the central plasma. At 11.56 keV, the sharp Pt-L3 absorption edge made it possible to cross-check the absorber thickness by fitting the measured efficiency to the theoretically expected absorption of X-rays in a homogeneous Pt layer. Furthermore, below 50 eV the efficiency first follows the losses due to the reflectance expected for Pt, but below 10 eV it is reduced further by a factor of 2 for the thick absorber and a factor of 4 for the thin absorber. Most probably, the different histories in production, storage, and operation led to varying surface conditions and additional loss channels.

  12. A Time Efficient Adaptive Gridding Approach and Improved Calibrations in Five-Hole Probe Measurements

    Directory of Open Access Journals (Sweden)

    Jason Town

    2015-01-01

    Five-hole probes (FHP), being a dependable and accurate aerodynamic tool, are an excellent choice for measuring three-dimensional flow fields in turbomachinery. To improve spatial resolution, a subminiature FHP with a diameter of 1.68 mm is employed. The high length-to-diameter ratio of the tubing and manual pitch and yaw calibration cause increased uncertainty. A new FHP calibrator was designed and built to reduce this uncertainty through precise, computer-controlled movements and reduced calibration time. The calibrated FHP is then placed downstream of the nozzle guide vane (NGV) assembly of a low-speed, large-scale, axial-flow turbine. The cold-flow HP turbine stage contains 29 vanes and 36 blades. A fast and computer-controllable traversing system is implemented using an adaptive grid method for the refinement of measurements in regions such as the vane wake, secondary flows, and boundary layers. The current approach increases the possible number of measurement points in a two-hour period by 160%. Flow structures behind the NGV measurement plane are identified with high spatial resolution and reduced uncertainty. The automated pitch and yaw calibration and the adaptive grid approach introduced in this study are shown to be a highly effective way of measuring complex flow fields in the research turbine.

  13. The Role of Mathematical Methods in Efficiency Calibration and Uncertainty Estimation in Gamma Based Non-Destructive Assay - 12311

    International Nuclear Information System (INIS)

    Mathematical methods are being increasingly employed in the efficiency calibration of gamma-based systems for non-destructive assay (NDA) of radioactive waste and for the estimation of the Total Measurement Uncertainty (TMU). Recently, ASTM (American Society for Testing and Materials) released a standard guide for the use of modeling in passive gamma measurements. This is testimony to the common use and increasing acceptance of mathematical techniques in the calibration and characterization of NDA systems. Mathematical methods offer flexibility and cost savings in terms of rapidly incorporating calibrations for multiple container types, geometries, and matrix types in a new waste assay system or a system that may already be operational. Mathematical methods are also useful in modeling heterogeneous matrices and non-uniform activity distributions. In compliance with good practice, if a computational method is used in waste assay (or in any other radiological application), it must be validated or benchmarked using representative measurements. In this paper, applications involving mathematical methods in gamma-based NDA systems are discussed with several examples. The application examples are from NDA systems that were recently calibrated and performance tested. Measurement-based verification results are presented. Mathematical methods play an important role in the efficiency calibration of gamma-based NDA systems. This is especially true when the measurement program involves a wide variety of complex item geometries and matrix combinations for which the development of physical standards may be impractical. Mathematical methods offer a cost-effective means to perform TMU campaigns. Good practice demands that all mathematical estimates be benchmarked and validated using representative sets of measurements. (authors)

  14. Improving the trade-off between simulation time and accuracy in efficiency calibrations with the code DETEFF

    Energy Technology Data Exchange (ETDEWEB)

    Cornejo Diaz, N. [Centre for Radiological Protection and Hygiene, P.O. Box 6195, Habana (Cuba); Jurado Vargas, M., E-mail: mjv@unex.es [Physics Department, University of Extremadura, 06071 Badajoz (Spain)

    2010-07-15

    Quick and relatively simple procedures were incorporated into the Monte Carlo code DETEFF to account for the escape of bremsstrahlung radiation and secondary electrons. The relative bias in efficiency values was thus reduced for photon energies between 1500 and 2000 keV, without any noticeable increase in simulation time. A relatively simple method was also included to account for the rounding of detector edges. The validation studies showed relative deviations of about 1% in the energy range 10-2000 keV.

  15. An efficient Monte Carlo method for calculating ab initio transition state theory reaction rates in solution

    CERN Document Server

    Iftimie, R; Schofield, J P; Iftimie, Radu; Salahub, Dennis; Schofield, Jeremy

    2003-01-01

    In this article, we propose an efficient method for sampling the relevant state space in condensed-phase reactions. In the present method, the reaction is described by solving the electronic Schrödinger equation for the solute atoms in the presence of explicit solvent molecules. The sampling algorithm uses a molecular mechanics guiding potential in combination with simulated tempering ideas and allows thorough exploration of the solvent state space in the context of an ab initio calculation, even when the dielectric relaxation time of the solvent is long. The method is applied to the study of the double proton transfer reaction that takes place between a molecule of acetic acid and a molecule of methanol in tetrahydrofuran. It is demonstrated that calculations of rates of chemical transformations occurring in solvents of medium polarity can be performed with an increase in CPU time by factors ranging from 4 to 15 with respect to gas-phase calculations.

  16. A Calibration Routine for Efficient ETD in Large-Scale Proteomics

    Science.gov (United States)

    Rose, Christopher M.; Rush, Matthew J. P.; Riley, Nicholas M.; Merrill, Anna E.; Kwiecien, Nicholas W.; Holden, Dustin D.; Mullen, Christopher; Westphall, Michael S.; Coon, Joshua J.

    2015-11-01

    Electron transfer dissociation (ETD) has been broadly adopted and is now available on a variety of commercial mass spectrometers. Unlike collisional activation techniques, optimal performance of ETD requires considerable user knowledge and input. ETD reaction duration is one key parameter that can greatly influence spectral quality and overall experiment outcome. We describe a calibration routine that determines the correct number of reagent anions necessary to reach a defined ETD reaction rate. Implementation of this automated calibration routine on two hybrid Orbitrap platforms illustrates considerable advantages, namely increased product ion yield with a concomitant reduction in scan rates, netting up to 75% more unique peptide identifications in a shotgun experiment.
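
    A sketch of the kinetics behind such a routine, under the common assumption that ion-ion reactions are pseudo-first-order in the reagent population; the rate constant, charges, and times below are illustrative, not instrument values:

    ```python
    import math

    # Surviving precursor fraction after reaction time t is exp(-k * N_reagent * t),
    # so one calibration point fixes k, and the reagent target for any desired
    # conversion and reaction time follows by inversion.

    def rate_constant(surviving_fraction, n_reagent, t_ms):
        """Infer the pseudo-first-order rate constant from one calibration point."""
        return -math.log(surviving_fraction) / (n_reagent * t_ms)

    def reagent_target(k, t_ms, desired_conversion=0.95):
        """Reagent anions needed to convert the desired precursor fraction."""
        return -math.log(1.0 - desired_conversion) / (k * t_ms)

    k = rate_constant(surviving_fraction=0.37, n_reagent=2e5, t_ms=25.0)
    print(f"reagent target: {reagent_target(k, t_ms=25.0):.3g} charges")
    ```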

  17. Calibration with MCNP of NaI detector for the determination of natural radioactivity levels in the field

    OpenAIRE

    CINELLI GIORGIA; TOSITTI Laura; Mostacci, Domiziano; BARE Jonathan

    2015-01-01

    In view of assessing natural radioactivity with on-site quantitative gamma spectrometry, efficiency calibration of NaI(Tl) detectors is investigated. A calibration based on Monte Carlo simulation of detector response is proposed, to render reliable quantitative analysis practicable in field campaigns. The method is developed with reference to contact geometry, in which measurements are taken placing the NaI(Tl) probe directly against the solid source to be analyzed. The Monte Carlo code us...

  18. Efficient masonry vault inspection by Monte Carlo simulations: Case of hidden defect

    Directory of Open Access Journals (Sweden)

    Abdelmounaim Zanaz

    2016-06-01

    This paper presents a methodology for the probabilistic assessment of the bearing capacity of masonry vaults, taking existing defects into consideration. A comprehensive methodology and software package have been developed and adapted to inspection requirements. First, the mechanical analysis model is explained and validated, showing a good compromise between computation time and accuracy. This compromise is required when a probabilistic approach is considered, as it demands a large number of mechanical analysis runs. To model the defect, an inspection case is simulated by considering a segmental vault. As inspection data are often insufficient, the defect position and size are considered unknown. As the NDT results could not provide useful and reliable information, it was decided to take samples while minimizing their number as much as possible. In this case the main difficulty is knowing on which segment coring would be most efficient. To find out, all possible positions are studied considering one single core. Using probabilistic approaches, the distribution function of the critical load has been determined for each segment. The results make it possible to identify the best segment for vault inspection.

  19. ALIS: An efficient method to compute high spectral resolution polarized solar radiances using the Monte Carlo approach

    International Nuclear Information System (INIS)

    An efficient method to compute accurate polarized solar radiance spectra using the (3D) Monte Carlo model MYSTIC has been developed. Such high-resolution spectra are measured by various satellite instruments for remote sensing of atmospheric trace gases. ALIS (Absorption Lines Importance Sampling) allows the calculation of spectra by tracing photons at only one wavelength. To take into account the spectral dependence of the absorption coefficient, a spectral absorption weight is calculated for each photon path. At each scattering event, the local estimate method is combined with an importance sampling method to take into account the spectral dependence of the scattering coefficient. Since each wavelength grid point is computed from the same set of random photon paths, the statistical error is almost the same for all wavelengths, and hence the simulated spectrum is not noisy. The statistical error mainly results in a small relative deviation which is independent of wavelength and can be neglected for those remote sensing applications where differential absorption features are of interest. Two example applications are presented: the simulation of shortwave-infrared polarized spectra as measured by GOSAT, from which CO2 is retrieved, and the simulation of the differential optical thickness in the visible spectral range, which is derived from SCIAMACHY measurements to retrieve NO2. The computational speed of ALIS (for 1D or 3D atmospheres) is of the order of, or even faster than, that of one-dimensional discrete ordinate methods, in particular when polarization is considered.
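
    A schematic illustration of the spectral-weighting idea, not the MYSTIC implementation: photon paths are generated once, and the wavelength dependence of gas absorption enters only through per-path weights, so every spectral point shares the same random paths and nearly the same statistical noise. Path lengths, contributions, and the absorption line are all invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    wavelengths = np.linspace(760.0, 770.0, 51)                       # nm, toy band
    k_abs = 0.02 + 0.5 * np.exp(-((wavelengths - 765.0) / 0.5) ** 2)  # per km, toy line

    n_paths = 10_000
    spectrum = np.zeros_like(wavelengths)
    for _ in range(n_paths):
        # Toy "path": a random absorber path length (km) and a local-estimate
        # contribution, standing in for a full scattering path through a 3D grid.
        path_length = rng.exponential(scale=2.0)
        contribution = rng.random()
        spectrum += contribution * np.exp(-k_abs * path_length)  # spectral weight

    spectrum /= n_paths
    print(spectrum[::10])  # same paths for all wavelengths -> smooth spectrum
    ```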

  20. Efficient Orientation and Calibration of Large Aerial Blocks of Multi-Camera Platforms

    Science.gov (United States)

    Karel, W.; Ressl, C.; Pfeifer, N.

    2016-06-01

    Aerial multi-camera platforms typically incorporate a nadir-looking camera accompanied by further cameras that provide oblique views, potentially resulting in utmost coverage, redundancy, and accuracy even on vertical surfaces. However, issues have remained unresolved with the orientation and calibration of the resulting imagery, for two of which we present feasible solutions. First, as the standard feature point descriptors used for the automated matching of homologous points are only invariant to the geometric variations of translation, rotation, and scale, they are not invariant to general changes in perspective. While the deviations from local 2D similarity transforms may be negligible for corresponding surface patches in vertical views of flat land, they become evident at vertical surfaces, and in oblique views in general. Usage of such similarity-invariant descriptors thus limits the number of tie points that stabilize the orientation and calibration of oblique views and cameras. To alleviate this problem, we present the positive impact on image connectivity of using a quasi affine-invariant descriptor. Second, no matter which hardware and software are used, at some point the number of unknowns of a bundle block may be too large to be handled. With multi-camera platforms, these limits are reached even sooner. Adjustment of sub-blocks is sub-optimal, as it complicates data management and hinders self-calibration. Simply discarding unreliable tie points of low manifold is not an option either, because these points are needed at the block borders and in poorly textured areas. As a remedy, we present a straightforward method to considerably reduce the number of tie points, and hence unknowns, before bundle block adjustment, while preserving orientation and calibration quality.

  1. Calibration of environmental radionuclide transfer models using a Bayesian approach with Markov chain Monte Carlo simulations and model comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Nicoulaud-Gouin, V.; Giacalone, M.; Gonze, M.A. [Institut de Radioprotection et de Surete Nucleaire-PRP-ENV/SERIS/LM2E (France); Martin-Garin, A.; Garcia-Sanchez, L. [IRSN-PRP-ENV/SERIS/L2BT (France)

    2014-07-01

    Calibration of transfer models against observation data is a challenge, especially if parameter uncertainty is required and if competing models must be decided between. Generally, two main calibration methods are used. The first is the frequentist approach, in which the unknown parameter of interest is supposed fixed and its estimation is based on the data only; in this category, the least-squares method has many restrictions for nonlinear models, and competing models need to be nested in order to be compared. The second is Bayesian inference, in which the unknown parameter of interest is supposed random and its estimation is based on the data and on prior information. Compared to the frequentist method, it provides probability density functions and therefore pointwise estimation with credible intervals. However, in practical cases, Bayesian inference is a complex problem of numerical integration, which explains its low use in operational modelling, including radioecology. This study aims to illustrate the interest and feasibility of the Bayesian approach in radioecology, particularly in the case of ordinary differential equation models with non-constant coefficients, which cover most radiological risk assessment models, notably those implemented in the Symbiose platform (Gonze et al, 2010). The Markov chain Monte Carlo (MCMC) method (Metropolis et al., 1953) was used because the posterior expectations are intractable integrals. Sampling from the invariant distribution of the parameters was performed by the Metropolis-Hastings algorithm (Hastings, 1970). The GNU-MCSim software (Bois and Maszle, 2011), a Bayesian hierarchical framework, was used to deal with nonlinear differential models. Two case studies including this type of model were investigated: an equilibrium-kinetic sorption model (EK) (e.g. van Genuchten et al, 1974), with experimental data concerning 137Cs and 85Sr sorption and desorption in different soils studied in stirred flow-through reactors. This model, generalizing the Kd approach ...
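
    For illustration, a minimal random-walk Metropolis-Hastings sampler of the kind described, here calibrating the distribution coefficient Kd of a toy linear sorption model against synthetic data (the study itself used GNU-MCSim with ODE models; everything below is invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def model(kd, conc=np.linspace(0.1, 1.0, 10)):
        return kd * conc                      # sorbed = Kd * dissolved (toy model)

    kd_true = 3.0
    obs = model(kd_true) + rng.normal(0.0, 0.1, 10)   # synthetic observations

    def log_post(kd, sigma=0.1):
        if kd <= 0.0:                         # flat prior on positive Kd
            return -np.inf
        resid = obs - model(kd)
        return -0.5 * np.sum((resid / sigma) ** 2)

    chain, kd = [], 1.0
    for _ in range(20_000):
        prop = kd + rng.normal(0.0, 0.2)      # random-walk proposal
        if np.log(rng.random()) < log_post(prop) - log_post(kd):
            kd = prop                         # Metropolis acceptance step
        chain.append(kd)

    post = np.array(chain[5000:])             # discard burn-in
    print(f"Kd = {post.mean():.2f}, 95% CI [{np.quantile(post, .025):.2f}, "
          f"{np.quantile(post, .975):.2f}]")
    ```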

  2. Efficiency calibration and minimum detectable activity concentration of a real-time UAV airborne sensor system with two gamma spectrometers.

    Science.gov (United States)

    Tang, Xiao-Bin; Meng, Jia; Wang, Peng; Cao, Ye; Huang, Xi; Wen, Liang-Sheng; Chen, Da

    2016-04-01

    A small-sized UAV (NH-UAV) airborne system with two gamma spectrometers (LaBr3 detector and HPGe detector) was developed to monitor activity concentration in serious nuclear accidents, such as the Fukushima nuclear accident. The efficiency calibration and determination of minimum detectable activity concentration (MDAC) of the specific system were studied by MC simulations at different flight altitudes, different horizontal distances from the detection position to the source term center and different source term sizes. Both air and ground radiation were considered in the models. The results obtained may provide instructive suggestions for in-situ radioactivity measurements of NH-UAV. PMID:26773821

  3. Efficiency calibration and minimum detectable activity concentration of a real-time UAV airborne sensor system with two gamma spectrometers.

    Science.gov (United States)

    Tang, Xiao-Bin; Meng, Jia; Wang, Peng; Cao, Ye; Huang, Xi; Wen, Liang-Sheng; Chen, Da

    2016-04-01

    A small-sized UAV (NH-UAV) airborne system with two gamma spectrometers (LaBr3 detector and HPGe detector) was developed to monitor activity concentration in serious nuclear accidents, such as the Fukushima nuclear accident. The efficiency calibration and determination of minimum detectable activity concentration (MDAC) of the specific system were studied by MC simulations at different flight altitudes, different horizontal distances from the detection position to the source term center and different source term sizes. Both air and ground radiation were considered in the models. The results obtained may provide instructive suggestions for in-situ radioactivity measurements of NH-UAV.

  4. Calibrating and Controlling the Quantum Efficiency Distribution of Inhomogeneously Broadened Quantum Rods by Using a Mirror Ball

    DEFF Research Database (Denmark)

    Hansen, Per Lunnemann; Rabouw, Freddy T.; van Dijk-Moes, Relinde J. A.;

    2013-01-01

    ...near a mirror, not only allows an extraction of calibrated ensemble-averaged rates, but for the first time also allows quantification of the full inhomogeneous dispersion of radiative and nonradiative decay rates across thousands of nanocrystals. We apply the technique to novel ultrastable CdSe/CdS dot-in-rod emitters. The emitters are of large current interest due to their improved stability and reduced blinking. We retrieve a room-temperature ensemble-average quantum efficiency of 0.87 ± 0.08 at a mean lifetime around 20 ns. We confirm a log-normal distribution of decay rates as often assumed in the literature...

  5. Application of the Gamma Spectrometry Sourceless Efficiency Calibration Method to the Measurement of Radionuclides in Rare Earth Residues

    International Nuclear Information System (INIS)

    The paper investigates and analyses NORM residues from rare earth smelting and separation plants in Jiangsu Province using the high-purity germanium gamma spectrometry sourceless efficiency calibration method, which was verified with IAEA reference materials. The results show that in the rare earth residues the radioactive equilibrium of the uranium and thorium decay series has been broken and that the activity concentrations in the samples show obvious differences. Based on the results, the paper makes some suggestions and proposes some protective measures for the disposal of rare earth residues. (author)

  6. Efficiency calibration of a liquid scintillation counter for 90Y Cherenkov counting

    International Nuclear Information System (INIS)

    In this paper a complete and self-consistent method for 90Sr determination in environmental samples is presented. It is based on the Cherenkov counting of 90Y with a conventional liquid scintillation counter. The effects of color quenching on the counting efficiency and background are carefully studied. A working curve is presented which allows the correction to the counting efficiency to be quantified as a function of the color quenching strength. (orig.)
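
    A sketch of how such a working curve is applied in practice; the (quench indicator, efficiency) calibration points and the count rate below are invented, standing in for a set of quenched 90Y standards:

    ```python
    import numpy as np

    quench_index = np.array([0.0, 0.2, 0.4, 0.6, 0.8])   # e.g. a spectral quench indicator
    efficiency   = np.array([0.65, 0.58, 0.49, 0.38, 0.26])

    coeffs = np.polyfit(quench_index, efficiency, deg=2)  # the working curve

    def activity_bq(net_cpm, sample_quench):
        """Quench-corrected activity from a net Cherenkov count rate."""
        eff = np.polyval(coeffs, sample_quench)
        return net_cpm / 60.0 / eff                       # counts/min -> Bq

    print(f"A = {activity_bq(net_cpm=1200.0, sample_quench=0.35):.1f} Bq")
    ```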

  7. Efficiency calibration of the ELBE nuclear resonance fluorescence setup using a proton beam

    Energy Technology Data Exchange (ETDEWEB)

    Trompler, Erik; Bemmerer, Daniel; Beyer, Roland; Erhard, Martin; Grosse, Eckart; Hannaske, Roland; Junghans, Arnd Rudolf; Marta, Michele; Nair, Chithra; Schwengner, R.; Wagner, Andreas; Yakorev, Dmitry [Forschungszentrum Dresden-Rossendorf (FZD), Dresden (Germany); Broggini, Carlo; Caciolli, Antonio; Menegazzo, Roberto [INFN Sezione di Padova, Padova (Italy); Fueloep, Zsolt; Gyuerky, Gyoergy; Szuecs, Tamas [Atomki, Debrecen (Hungary)

    2009-07-01

    The nuclear resonance fluorescence (NRF) setup at ELBE uses bremsstrahlung with endpoint energies up to 20 MeV. The setup consists of four 100% high-purity germanium detectors, each surrounded by a BGO escape-suppression shield and a lead collimator. The detection efficiency up to Eγ = 12 MeV has been determined using the proton beam from the FZD Tandetron and well-known resonances in the 11B(p,γ)12C, 14N(p,γ)15O, and 27Al(p,γ)28Si reactions. The deduced efficiency curve makes it possible to check efficiency curves calculated with GEANT. Future photon-scattering work can be carried out with improved precision at high energy.

  8. Efficient Calibration/Uncertainty Analysis Using Paired Complex/Surrogate Models.

    Science.gov (United States)

    Burrows, Wesley; Doherty, John

    2015-01-01

    The use of detailed groundwater models to simulate complex environmental processes can be hampered by (1) long run-times and (2) a penchant for solution convergence problems. Collectively, these can undermine the ability of a modeler to reduce and quantify predictive uncertainty, and therefore limit the use of such detailed models in the decision-making context. We explain and demonstrate a novel approach to the calibration and exploration of the posterior predictive uncertainty of a complex model that can overcome these problems in many modelling contexts. The methodology relies on conjunctive use of a simplified surrogate version of the complex model in combination with the complex model itself. The methodology employs gradient-based subspace analysis and is thus readily adapted for use in highly parameterized contexts. In its most basic form, one or more surrogate models are used for calculation of the partial derivatives that collectively comprise the Jacobian matrix. Meanwhile, testing of parameter upgrades and the making of predictions is done by the original complex model. The methodology is demonstrated using a density-dependent seawater intrusion model in which the model domain is characterized by a heterogeneous distribution of hydraulic conductivity. PMID:25142272
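
    A minimal sketch of the paired complex/surrogate scheme under stated assumptions: finite-difference Jacobian columns come from the cheap surrogate, while candidate parameter upgrades are tested against the expensive model itself. Both "models" here are toy functions standing in for real simulators:

    ```python
    import numpy as np

    def complex_model(p):       # expensive model (toy stand-in)
        return np.array([p[0] ** 2 + 0.1 * p[1], np.sin(p[1]) + 0.01 * p[0]])

    def surrogate_model(p):     # fast, approximate version of the same physics
        return np.array([p[0] ** 2, np.sin(p[1])])

    obs = complex_model(np.array([1.5, 0.7]))   # synthetic calibration targets

    def jacobian(p, h=1e-5):
        """Finite-difference Jacobian computed with the surrogate only."""
        base = surrogate_model(p)
        cols = []
        for i in range(p.size):
            dp = p.copy()
            dp[i] += h
            cols.append((surrogate_model(dp) - base) / h)
        return np.column_stack(cols)

    p = np.array([1.0, 0.3])
    for _ in range(10):
        r = obs - complex_model(p)            # residuals from the complex model
        J = jacobian(p)                       # derivatives from the surrogate
        step, *_ = np.linalg.lstsq(J, r, rcond=None)   # Gauss-Newton upgrade
        if np.linalg.norm(obs - complex_model(p + step)) < np.linalg.norm(r):
            p = p + step                      # upgrade accepted only if the
                                              # complex model itself improves
    print(p)
    ```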

  9. Detection of 15 dB Squeezed States of Light and their Application for the Absolute Calibration of Photoelectric Quantum Efficiency

    Science.gov (United States)

    Vahlbruch, Henning; Mehmet, Moritz; Danzmann, Karsten; Schnabel, Roman

    2016-09-01

    Squeezed states of light belong to the most prominent nonclassical resources. They have compelling applications in metrology, which has been demonstrated by their routine exploitation for improving the sensitivity of a gravitational-wave detector since 2010. Here, we report on the direct measurement of 15 dB squeezed vacuum states of light and their application to calibrate the quantum efficiency of photoelectric detection. The object of calibration is a customized InGaAs positive intrinsic negative (p-i-n) photodiode optimized for high external quantum efficiency. The calibration yields a value of 99.5% with a 0.5% (k = 2) uncertainty for a photon flux of the order of 10¹⁷ s⁻¹ at a wavelength of 1064 nm. The calibration neither requires any standard nor knowledge of the incident light power and thus represents a valuable application of squeezed states of light in quantum metrology.

  10. Crop physiology calibration in the CLM

    Directory of Open Access Journals (Sweden)

    I. Bilionis

    2015-04-01

    ...scalable and adaptive scheme based on sequential Monte Carlo (SMC). The model showed significant improvement of crop productivity with the newly calibrated parameters. We demonstrate that the calibrated parameters are applicable across alternative years and different sites.

  11. Close-geometry efficiency calibration of LaCl3:Ce detectors: measurements and simulations

    International Nuclear Information System (INIS)

    A large amount of literature is available, in particular, for HPGe detectors; however, not much work has been done on coincidence summing effects in scintillation detectors. This may be due to the inferiority of scintillation detectors to HPGe detectors in terms of energy resolution, which makes the accurate estimation of counts under individual peaks very difficult. We report here experimental measurements and realistic simulations of absolute efficiencies (both photo-peak and total detection) and of coincidence summing correction factors in LaCl3(Ce) scintillation detectors under close geometry. These detectors have drawn interest owing to properties superior to those of NaI(Tl) detectors, such as high light yield (46,000 photons/MeV), energy resolution (about 4%), and decay time (25 ns).

  12. Calibration of a gamma spectrometer for natural radioactivity measurement. Experimental measurements and Monte Carlo modelling

    Energy Technology Data Exchange (ETDEWEB)

    Courtine, Fabien [Laboratoire de Physique Corpusculaire, Universite Blaise Pascal - CNRS/IN2P3, 63000 Aubiere Cedex (France)

    2007-03-15

    This thesis was carried out in the context of thermoluminescence dating. This method requires laboratory measurements of natural radioactivity, for which we have been using a germanium spectrometer. To refine its calibration, we modelled it using a Monte Carlo computer code: Geant4. We developed a geometrical model which takes into account the presence of inactive zones and zones of poor charge collection within the germanium crystal. The parameters of the model were adjusted by comparison with experimental results obtained with a 137Cs source. It appeared that the shape of the inactive zones is less simple than presented in the specialized literature. This model was extended to the case of a more complex source, with cascade effects and angular correlations between photons: 60Co. Lastly, applied to extended sources, it gave correct results and allowed us to validate the simulation of matrix effects. (author)

  13. Calculation with the Monte Carlo method of the photoelectric efficiency of a planar high-purity Ge detector and application to cross-section measurement

    International Nuclear Information System (INIS)

    The aim of this work is to develop a Monte Carlo programme which calculates the photoelectric efficiency of a planar high-purity Ge detector for low-energy photons. This programme calculates the self-absorption, the absorption in the different media crossed by the photon, and the intrinsic and total efficiencies. The results of this programme were very satisfactory, since they reproduce the measured values in the two different cases of point and volume sources. The photoelectric efficiency calculated with this programme has been applied to determine the cross section of the 166Er(n,2n)165Er reaction induced by 14 MeV neutrons, where only measurement by X-ray spectrometry is possible. The value obtained is consistent with the data given in the literature. 119 figs., 39 tabs., 96 refs. (F.M.)

  14. Reconstruction, Energy Calibration, and Identification of Hadronically Decaying Tau Leptons in the ATLAS Experiment for Run-2 of the LHC

    CERN Document Server

    The ATLAS collaboration

    2015-01-01

    The reconstruction algorithm, energy calibration, and identification methods for hadronically decaying tau leptons in ATLAS used at the start of Run-2 of the Large Hadron Collider are described in this note. All algorithms have been optimised for Run-2 conditions. The energy calibration relies on Monte Carlo samples with hadronic tau lepton decays, and applies multiplicative factors based on the pT of the reconstructed tau lepton to the energy measurements in the calorimeters. The identification employs boosted decision trees. Systematic uncertainties on the energy scale, reconstruction efficiency and identification efficiency of hadronically decaying tau leptons are determined using Monte Carlo samples that simulate varying conditions.

  15. Monte Carlo probabilistic sensitivity analysis for patient level simulation models: efficient estimation of mean and variance using ANOVA.

    Science.gov (United States)

    O'Hagan, Anthony; Stevenson, Matt; Madan, Jason

    2007-10-01

    Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially.
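
    A small numerical sketch of the ANOVA-style decomposition underlying such a method (not the authors' full formulae): with n patients per run, the variance of the observed run means overstates the PSA variance by the average within-run variance divided by n, so subtracting that term recovers the input-driven component. All numbers are synthetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    n_runs, n_patients = 200, 500
    run_means, within_vars = [], []
    for _ in range(n_runs):
        true_mean = rng.normal(1000.0, 50.0)              # input (parameter) uncertainty
        costs = rng.normal(true_mean, 300.0, n_patients)  # patient-level sampling noise
        run_means.append(costs.mean())
        within_vars.append(costs.var(ddof=1))

    run_means = np.asarray(run_means)
    between = run_means.var(ddof=1)               # variance of observed run means
    within = np.mean(within_vars) / n_patients    # Monte Carlo noise component
    print(f"estimated PSA variance: {between - within:.0f} (true value 2500)")
    ```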

  16. Sequential Monte Carlo on large binary sampling spaces

    CERN Document Server

    Schäfer, Christian

    2011-01-01

    A Monte Carlo algorithm is said to be adaptive if it automatically calibrates its current proposal distribution using past simulations. The choice of the parametric family that defines the set of proposal distributions is critical for good performance. In this paper, we present such a parametric family for adaptive sampling on high-dimensional binary spaces. A practical motivation for this problem is variable selection in a linear regression context. We want to sample from a Bayesian posterior distribution on the model space using an appropriate version of Sequential Monte Carlo. Raw versions of Sequential Monte Carlo are easily implemented using binary vectors with independent components. For high-dimensional problems, however, these simple proposals do not yield satisfactory results. The key to an efficient adaptive algorithm is binary parametric families which take correlations into account, analogously to the multivariate normal distribution on continuous spaces. We provide a review of models for binar...

  17. Rotated-Random-Scanning: a simple method for set-valued model calibration

    NARCIS (Netherlands)

    Janssen PHM; Heuberger PSC; CWM

    1995-01-01

    A simple method is proposed for calibrating models in ill-defined and information-poor situations, which are frequently encountered in environmental applications. The method performs an efficient scan of the parameter space, based on Monte Carlo sampling in combination with rotations. Software has been...

  18. Efficiency calibration of a mini-orange type beta-spectrometer by the β⁻-spectrum of 90Sr

    CERN Document Server

    Kalinnikov, V G; Ibrakhim, Y S; Lebedev, N A; Samatov, Z K; Sehrehehtehr, Z; Solnyshkin, A A

    2002-01-01

    A specific method for the efficiency calibration of a mini-orange type beta-spectrometer by means of the continuous β⁻-spectrum of 90Sr and the conversion-electron spectrum of 207Bi in the energy range from 500 to 2200 keV has been elaborated. In the experiment, typical SmCo5 magnets (6A and 8A) were used. The accuracy of the efficiency determination was 5-10%.

  19. Development of a stochastic detection efficiency calibration procedure for studying collimation effects on a broad energy germanium detector

    Energy Technology Data Exchange (ETDEWEB)

    Altavilla, Massimo [High Institute for Environmental Protection and Research (ISPRA)—Department for Nuclear, Technological and Industrial Risk. Via Vitaliano Brancati 48, 00144 Rome (Italy); Remetti, Romolo, E-mail: romolo.remetti@uniroma1.it [“Sapienza”—University of Rome, Department BASE—Basic and Applied Sciences for Engineering. Via Antonio Scarpa 14, 00161 Rome (Italy)

    2013-06-01

    ISPRA, the Italian nuclear safety regulatory body, has started a measurement campaign to validate the performance of in situ gamma-ray spectrometry based on BEGe detectors and the ISOCS software. The goal of the validation program is to verify whether the mathematical algorithms used by Canberra to account for collimation effects of HpGe detectors also work well for BEGe detectors. This has required the development of a calibration methodology, based on the MCNPX code, which avoids any mathematical algorithm and is therefore purely stochastic. Experimental results obtained with this new procedure were generally found to be within 5% of the reference values, while in the case of gamma-ray energies greater than 400 keV and small collimation angles, results given by the ISOCS software showed larger deviations, around 20%. This work presents a detailed description of the simulation procedure and of the first experimental results. -- Highlights: ► Broad Energy Germanium Detector modeled using the MCNPX code. ► MCNPX Gaussian Energy Broadening option. ► Coincidence of simulated spectrum and experimental photopeaks. ► Validation with reference source and comparison with ISOCS efficiency determination.

  20. Octree indexing of DICOM images for voxel number reduction and improvement of Monte Carlo simulation computing efficiency

    International Nuclear Information System (INIS)

    The purpose of the present study is to introduce a compression algorithm for the CT (computed tomography) data used in Monte Carlo simulations. Performing simulations on CT data implies large computational costs as well as large memory requirements, since the number of voxels in such data typically reaches hundreds of millions. CT data, however, contain homogeneous regions which can be regrouped to form larger voxels without affecting the simulation's accuracy. Based on this property, we propose a compression algorithm based on octrees: in homogeneous regions the algorithm replaces groups of voxels with a smaller number of larger voxels. This reduces the number of voxels while keeping the critical high-density-gradient areas. Results obtained using the present algorithm on both phantom and clinical data show that compression rates of up to 75% are possible without losing the dosimetric accuracy of the simulation
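
    A toy recursive implementation of the octree idea described above, not the authors' code: cubes whose densities lie within a tolerance are merged into single leaves, heterogeneous cubes are split into eight octants:

    ```python
    import numpy as np

    def octree_compress(vol, x0, y0, z0, size, tol, out):
        """Append (x, y, z, size, mean_density) leaves covering the block."""
        block = vol[x0:x0 + size, y0:y0 + size, z0:z0 + size]
        if size == 1 or block.max() - block.min() <= tol:
            out.append((x0, y0, z0, size, float(block.mean())))  # merged voxel
            return
        h = size // 2
        for dx in (0, h):                     # recurse into the eight octants
            for dy in (0, h):
                for dz in (0, h):
                    octree_compress(vol, x0 + dx, y0 + dy, z0 + dz, h, tol, out)

    vol = np.ones((64, 64, 64))               # homogeneous "water" phantom...
    vol[20:28, 20:28, 20:28] = 1.8             # ...with a small dense insert
    voxels = []
    octree_compress(vol, 0, 0, 0, 64, tol=0.05, out=voxels)
    print(f"{vol.size} voxels -> {len(voxels)} octree leaves")
    ```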

  1. A Monte Carlo simulation and setup optimization of output efficiency to PGNAA thermal neutron using 252Cf neutrons

    Science.gov (United States)

    Zhang, Jin-Zhao; Tuo, Xian-Guo

    2014-07-01

    We present the design and optimization of a prompt γ-ray neutron activation analysis (PGNAA) thermal neutron output setup based on Monte Carlo simulations using the MCNP5 computer code. In these simulations, the moderator materials, reflector materials, and structure of the 252Cf-based PGNAA thermal neutron output setup are optimized. The simulation results reveal that a thin layer of paraffin and a thick layer of heavy water provide the best moderation of the 252Cf neutron spectrum. Our new design shows significantly improved performance: the thermal neutron flux and flux rate are increased by factors of 3.02 and 3.27, respectively, compared with the conventional neutron source design.

  2. Technology for radiation efficiency measurement of high-power halogen tungsten lamp used in calibration of high-energy laser energy meter.

    Science.gov (United States)

    Wei, Ji Feng; Hu, Xiao Yang; Sun, Li Qun; Zhang, Kai; Chang, Yan

    2015-03-20

    The calibration method using a high-power halogen tungsten lamp as a calibration source has many advantages, such as strong equivalence and high power, so it is well suited to the calibration of high-energy laser energy meters. However, a high-power halogen tungsten lamp still retains much residual energy after power-off and continues to radiate energy, which is difficult to measure. Two measuring systems were developed to solve this problem. One system is composed of an integrating sphere and two optical spectrometers, which can accurately characterize the radiative spectra and the power-time variation of the halogen tungsten lamp. This measuring system was then calibrated using a normal halogen tungsten lamp made of the same material as the high-power halogen tungsten lamp. In this way, the radiation efficiency of the halogen tungsten lamp after power-off can be quantitatively measured. In the other measuring system, a wide-spectrum power meter was installed far away from the halogen tungsten lamp, so that the lamp can be regarded as a point light source. The radiation efficiency of the residual energy from the halogen tungsten lamp was computed on the basis of geometrical relations. The results show that the lamp's radiation efficiency improved with power-on time but did not change under constant power-on time/energy. All the tested halogen tungsten lamps reached 89.3% radiation efficiency at 50 s after power-on. After power-off, the residual energy in the halogen tungsten lamp gradually dropped to less than 10% of the initial radiation power, and the radiation efficiency changed with time. The final total radiation energy is determined by the lamp's radiation efficiency, the radiation efficiency of the residual energy, and the total power consumption. The measurement uncertainty of the total radiation energy was 2.4% (coverage factor of two).
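
    A sketch of the far-field (point-source) geometry used by the second measuring system, assuming isotropic emission; all numerical values are illustrative, not the paper's data:

    ```python
    import math

    # When the meter is far enough away that the lamp acts as a point source,
    # the total radiated power follows from the meter reading scaled by the
    # sphere-to-aperture solid-angle ratio.
    meter_reading_W = 0.012      # power collected by the meter head (assumed)
    aperture_cm2 = 1.0           # sensitive area of the meter (assumed)
    distance_cm = 100.0          # lamp-to-meter distance (assumed)

    total_power_W = meter_reading_W * 4.0 * math.pi * distance_cm ** 2 / aperture_cm2
    electrical_power_W = 2000.0  # assumed lamp drive power
    print(f"radiated {total_power_W:.0f} W, "
          f"efficiency {total_power_W / electrical_power_W:.1%}")
    ```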

  3. Efficiency calibration and coincidence summing correction for large arrays of NaI(Tl) detectors in soccer-ball and castle geometries

    Energy Technology Data Exchange (ETDEWEB)

    Anil Kumar, G., E-mail: anilg@tifr.res.in [Department of Nuclear and Atomic Physics, Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai 400005 (India); Mazumdar, I.; Gothe, D.A. [Department of Nuclear and Atomic Physics, Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai 400005 (India)

    2009-11-21

    Efficiency calibration and coincidence summing correction have been performed for two large arrays of NaI(Tl) detectors in two different configurations: a compact array of 32 conical detectors of pentagonal and hexagonal shapes in soccer-ball geometry, and an array of 14 straight hexagonal NaI(Tl) detectors in castle geometry. Both of these arrays provide a large solid angle of detection, leading to considerable coincidence summing of gamma rays. The present work aims to understand the effect of coincidence summing of gamma rays while determining the energy dependence of the efficiencies of these two arrays. We have carried out extensive GEANT4 simulations with radionuclides that decay via a two-step cascade, considering both arrays in their realistic geometries. The absolute efficiencies have been simulated for gamma energies from 700 to 2800 keV using four different double-photon emitters, namely 60Co, 46Sc, 94Nb and 24Na. The efficiencies so obtained have been corrected for coincidence summing using the method proposed by Vidmar et al. The simulations have also been carried out for the same energies assuming mono-energetic point sources, for comparison. Experimental measurements have also been carried out using calibrated point sources of 137Cs and 60Co. The simulated and experimental results are found to be in good agreement. This demonstrates the reliability of the correction method for the efficiency calibration of two large arrays in very different configurations.
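
    A minimal analytic sketch of summing-out for a two-gamma cascade (ignoring angular correlations), which is the effect such correction methods address; the efficiencies are invented to mimic a high-solid-angle array:

    ```python
    # A count is lost from the full-energy peak of gamma-1 whenever gamma-2
    # deposits anything at all in the detector, so the apparent peak efficiency
    # is eps_peak_1 * (1 - eps_total_2) and the correction factor inverts that.
    eps_peak_1 = 0.25   # full-energy peak efficiency at E1 (assumed)
    eps_total_2 = 0.60  # total efficiency at E2 (assumed, large for a 4pi array)

    apparent_peak_eff = eps_peak_1 * (1.0 - eps_total_2)   # what is measured
    correction = 1.0 / (1.0 - eps_total_2)                 # summing correction
    print(f"apparent eff {apparent_peak_eff:.3f}, "
          f"correction factor {correction:.2f}")
    ```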

  4. Calibration of Ge gamma-ray spectrometers for complex sample geometries and matrices

    Energy Technology Data Exchange (ETDEWEB)

    Semkow, T.M., E-mail: thomas.semkow@health.ny.gov [Wadsworth Center, New York State Department of Health, Empire State Plaza, Albany, NY 12201 (United States); Department of Environmental Health Sciences, School of Public Health, University at Albany, State University of New York, Rensselaer, NY 12144 (United States); Bradt, C.J.; Beach, S.E.; Haines, D.K.; Khan, A.J.; Bari, A.; Torres, M.A.; Marrantino, J.C.; Syed, U.-F. [Wadsworth Center, New York State Department of Health, Empire State Plaza, Albany, NY 12201 (United States); Kitto, M.E. [Wadsworth Center, New York State Department of Health, Empire State Plaza, Albany, NY 12201 (United States); Department of Environmental Health Sciences, School of Public Health, University at Albany, State University of New York, Rensselaer, NY 12144 (United States); Hoffman, T.J. [Wadsworth Center, New York State Department of Health, Empire State Plaza, Albany, NY 12201 (United States); Curtis, P. [Kiltel Systems, Inc., Clyde Hill, WA 98004 (United States)

    2015-11-01

    A comprehensive study of the efficiency calibration and calibration verification of Ge gamma-ray spectrometers was performed using semi-empirical, computational Monte Carlo (MC), and transfer methods. The aim of this study was to evaluate the accuracy of the quantification of gamma-emitting radionuclides in complex matrices normally encountered in environmental and food samples. A wide range of gamma energies from 59.5 to 1836.0 keV and geometries from a 10-mL jar to a 1.4-L Marinelli beaker were studied on four Ge spectrometers with relative efficiencies between 102% and 140%. Density and coincidence summing corrections were applied. Innovative techniques were developed for the preparation of artificial complex matrices from materials such as acidified water, polystyrene, ethanol, sugar, and sand, resulting in densities ranging from 0.3655 to 2.164 g cm⁻³. They were spiked with gamma activity traceable to international standards and used for calibration verifications. A quantitative method of tuning MC calculations to experiment was developed based on a multidimensional chi-square paraboloid. - Highlights: • Preparation and spiking of traceable complex matrices in extended geometries. • Calibration of Ge gamma spectrometers for complex matrices. • Verification of gamma calibrations. • Comparison of semi-empirical, computational Monte Carlo, and transfer methods of Ge calibration. • Tuning of Monte Carlo calculations using a multidimensional paraboloid.
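
    A small sketch of tuning by a multidimensional chi-square paraboloid under simplifying assumptions: chi-square is evaluated at sampled parameter settings, a quadratic surface is fitted, and its analytic minimum gives the tuned parameters. The two-parameter chi-square function here is a toy stand-in for an MC-vs-experiment misfit:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def chi2(p):                         # stand-in for the MC/experiment misfit
        return 4.0 + 2.0 * (p[0] - 1.2) ** 2 + 5.0 * (p[1] - 0.4) ** 2

    pts = rng.uniform([0.5, 0.0], [2.0, 1.0], size=(30, 2))
    y = np.array([chi2(p) for p in pts])

    # Design matrix for a full 2D quadratic: 1, x, y, x^2, y^2, xy
    X = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1],
                         pts[:, 0] ** 2, pts[:, 1] ** 2, pts[:, 0] * pts[:, 1]])
    c = np.linalg.lstsq(X, y, rcond=None)[0]

    # Minimum of the paraboloid: solve grad(chi2) = 0, a 2x2 linear system.
    H = np.array([[2 * c[3], c[5]], [c[5], 2 * c[4]]])
    p_min = np.linalg.solve(H, -np.array([c[1], c[2]]))
    print(f"tuned parameters: {p_min}")   # should recover ~(1.2, 0.4)
    ```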

  5. Optimization of Monte Carlo simulations

    OpenAIRE

    Bryskhe, Henrik

    2009-01-01

    This thesis considers several different techniques for optimizing Monte Carlo simulations. The Monte Carlo system used is Penelope, but most of the techniques are applicable to other systems. The two major techniques are the use of the graphics card to do geometry calculations, and raytracing. Using the graphics card provides a very efficient way to do fast ray-triangle intersections. Raytracing provides an approximation of Monte Carlo simulation but is much faster to perform. A program was ...

  6. High Efficiency, Digitally Calibrated TR Modules Enabling Lightweight SweepSAR Architectures for DESDynI-Class Radar Instruments Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Develop and demonstrate a next-generation digitally calibrated, highly scalable, L-band Transmit/Receive (TR) module to enable a precision beamforming SweepSAR...

  7. The GERDA calibration system

    International Nuclear Information System (INIS)

    A system with three identical custom-made units is used for the energy calibration of the GERDA Ge diodes. To perform a calibration, the 228Th sources are lowered from their parking positions at the top of the cryostat. Their positions are measured by two independent modules: one, the incremental encoder, counts the holes in the perforated steel band holding the sources; the other measures the drive shaft's angular position even when not powered. The system can be controlled remotely by a LabVIEW program. The calibration data are analyzed by an iterative calibration algorithm determining the calibration functions for different energy reconstruction algorithms, and the resolution of several peaks in the 228Th spectrum is determined. A Monte Carlo simulation using the GERDA simulation software MAGE has been performed to determine the background induced by the sources in the parking positions.
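
    A minimal sketch of the energy-calibration step, assuming fitted peak positions are already available: a linear calibration is fitted to known 228Th-chain line energies. The channel values below are invented; the real algorithm iterates this per detector and per energy-reconstruction algorithm:

    ```python
    import numpy as np

    # Prominent lines in a 228Th spectrum: 208Tl and 212Bi full-energy peaks
    # plus the double- and single-escape peaks of the 2614.5 keV line.
    known_keV = np.array([583.2, 727.3, 860.6, 1592.5, 2103.5, 2614.5])
    channels  = np.array([1166.8, 1455.1, 1721.9, 3186.0, 4207.8, 5229.7])  # assumed

    gain, offset = np.polyfit(channels, known_keV, deg=1)
    residuals = known_keV - (gain * channels + offset)
    print(f"E(ch) = {gain:.4f}*ch + {offset:.2f} keV, "
          f"max residual {np.abs(residuals).max():.2f} keV")
    ```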

  8. Statistical inference about the relative efficiency of a new survey protocol, based on paired-tow survey calibration data

    OpenAIRE

    Cadigan, Noel G.; Dowden, Jeff J.

    2010-01-01

    Paired-tow calibration studies provide information on changes in survey catchability that may occur because of some necessary change in protocols (e.g., change in vessel or vessel gear) in a fish stock survey. This information is important to ensure the continuity of annual time-series of survey indices of stock size that provide the basis for fish stock assessments. There are several statistical models used to analyze the paired-catch data from calibration studies. Our main contribu...

  9. Efficient Markov Chain Monte Carlo Implementation of Bayesian Analysis of Additive and Dominance Genetic Variances in Noninbred Pedigrees

    Science.gov (United States)

    Waldmann, Patrik; Hallander, Jon; Hoti, Fabian; Sillanpää, Mikko J.

    2008-01-01

    Accurate and fast computation of quantitative genetic variance parameters is of great importance in both natural and breeding populations. For experimental designs with complex relationship structures it can be important to include both additive and dominance variance components in the statistical model. In this study, we introduce a Bayesian Gibbs sampling approach for estimation of additive and dominance genetic variances in the traditional infinitesimal model. The method can handle general pedigrees without inbreeding. To optimize between computational time and good mixing of the Markov chain Monte Carlo (MCMC) chains, we used a hybrid Gibbs sampler that combines a single site and a blocked Gibbs sampler. The speed of the hybrid sampler and the mixing of the single-site sampler were further improved by the use of pretransformed variables. Two traits (height and trunk diameter) from a previously published diallel progeny test of Scots pine (Pinus sylvestris L.) and two large simulated data sets with different levels of dominance variance were analyzed. We also performed Bayesian model comparison on the basis of the posterior predictive loss approach. Results showed that models with both additive and dominance components had the best fit for both height and diameter and for the simulated data with high dominance. For the simulated data with low dominance, we needed an informative prior to avoid the dominance variance component becoming overestimated. The narrow-sense heritability estimates in the Scots pine data were lower compared to the earlier results, which is not surprising because the level of dominance variance was rather high, especially for diameter. In general, the hybrid sampler was considerably faster than the blocked sampler and displayed better mixing properties than the single-site sampler. PMID:18558655

  10. Efficient Markov chain Monte Carlo implementation of Bayesian analysis of additive and dominance genetic variances in noninbred pedigrees.

    Science.gov (United States)

    Waldmann, Patrik; Hallander, Jon; Hoti, Fabian; Sillanpää, Mikko J

    2008-06-01

    Accurate and fast computation of quantitative genetic variance parameters is of great importance in both natural and breeding populations. For experimental designs with complex relationship structures it can be important to include both additive and dominance variance components in the statistical model. In this study, we introduce a Bayesian Gibbs sampling approach for estimation of additive and dominance genetic variances in the traditional infinitesimal model. The method can handle general pedigrees without inbreeding. To optimize between computational time and good mixing of the Markov chain Monte Carlo (MCMC) chains, we used a hybrid Gibbs sampler that combines a single site and a blocked Gibbs sampler. The speed of the hybrid sampler and the mixing of the single-site sampler were further improved by the use of pretransformed variables. Two traits (height and trunk diameter) from a previously published diallel progeny test of Scots pine (Pinus sylvestris L.) and two large simulated data sets with different levels of dominance variance were analyzed. We also performed Bayesian model comparison on the basis of the posterior predictive loss approach. Results showed that models with both additive and dominance components had the best fit for both height and diameter and for the simulated data with high dominance. For the simulated data with low dominance, we needed an informative prior to avoid the dominance variance component becoming overestimated. The narrow-sense heritability estimates in the Scots pine data were lower compared to the earlier results, which is not surprising because the level of dominance variance was rather high, especially for diameter. In general, the hybrid sampler was considerably faster than the blocked sampler and displayed better mixing properties than the single-site sampler. PMID:18558655

  11. An assessment of the efficiency of methods for measurement of the computed tomography dose index (CTDI) for cone beam (CBCT) dosimetry by Monte Carlo simulation

    Science.gov (United States)

    Abuhaimed, Abdullah; Martin, Colin J.; Sankaralingam, Marimuthu; Gentle, David J.; McJury, Mark

    2014-10-01

    The IEC has introduced a practical approach to overcome shortcomings of the CTDI100 for measurements on the wide beams employed for cone beam (CBCT) scans. This study evaluated the efficiency of this approach (CTDIIEC) for different arrangements using Monte Carlo simulation techniques, and compared CTDIIEC to the efficiency of CTDI100 for CBCT. The Monte Carlo EGSnrc/BEAMnrc and EGSnrc/DOSXYZnrc codes were used to simulate the kV imaging system mounted on a Varian TrueBeam linear accelerator. The Monte Carlo model was benchmarked against experimental measurements, and good agreement was shown. Standard PMMA head and body phantoms with lengths 150, 600, and 900 mm were simulated. Beam widths studied ranged from 20 to 300 mm, and four scanning protocols using two acquisition modes were utilized. The efficiency values were calculated at the centre (ɛc) and periphery (ɛp) of the phantoms and for the weighted CTDI (ɛw). The efficiency values for CTDI100 were approximately constant for beam widths of 20-40 mm, where ɛc(CTDI100), ɛp(CTDI100), and ɛw(CTDI100) were 74.7 ± 0.6%, 84.6 ± 0.3%, and 80.9 ± 0.4% for the head phantom and 59.7 ± 0.3%, 82.1 ± 0.3%, and 74.9 ± 0.3% for the body phantom, respectively. When the beam width increased beyond 40 mm, ɛ(CTDI100) values fell steadily, reaching ~30% at a beam width of 300 mm. In contrast, the efficiency of the CTDIIEC was approximately constant over all beam widths, demonstrating its suitability for assessment of CBCT. ɛc(CTDIIEC), ɛp(CTDIIEC), and ɛw(CTDIIEC) were 76.1 ± 0.9%, 85.9 ± 1.0%, and 82.2 ± 0.9% for the head phantom and 60.6 ± 0.7%, 82.8 ± 0.8%, and 75.8 ± 0.7% for the body phantom, respectively, within 2% of the ɛ(CTDI100) values for narrower beam widths. CTDI100,w and CTDIIEC,w underestimate CTDI∞,w by ~55% and ~18% for the head phantom and by ~56% and ~24% for the body phantom, respectively, using a clinical beam width of 198 mm.
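
    For orientation, the efficiency evaluated here is the ratio of the axial dose-profile integral captured by a 100 mm chamber (±50 mm) to the integral over an effectively infinite phantom. A toy numerical sketch, with an assumed rectangular beam core plus exponential scatter tails standing in for a real CBCT dose profile (shape and constants are illustrative, not from the study):

```python
import numpy as np

def ctdi_efficiency(z, profile, half_range=50.0):
    """Fraction of the axial dose-profile integral captured within +/-50 mm
    of centre, relative to the integral over the full phantom length."""
    inside = np.abs(z) <= half_range
    return profile[inside].sum() / profile.sum()   # uniform grid, dz cancels

z = np.arange(-450.0, 450.0, 0.1)                  # mm, 900 mm phantom
core = np.where(np.abs(z) <= 99.0, 1.0, 0.0)       # 198 mm primary beam (assumed)
tails = 0.35 * np.exp(-np.abs(z) / 120.0)          # scatter tails (assumed)
print(f"CTDI100 efficiency ~ {ctdi_efficiency(z, core + tails):.2f}")
```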

  12. The efficiency calibration and development of environmental correction factors for an in situ high-resolution gamma spectroscopy well logging system

    International Nuclear Information System (INIS)

    A Gamma Spectroscopy Logging System (GSLS) has been developed to study sub-surface radionuclide contamination. Absolute efficiency calibration of the GSLS was performed using simple cylindrical borehole geometry. The calibration source incorporated naturally occurring radioactive material (NORM) that emitted photons ranging from 186-keV to 2,614-keV. More complex borehole geometries were modeled using commercially available shielding software. A linear relationship was found between increasing source thickness and relative photon fluence rates at the detector. Examination of varying porosity and moisture content showed that as porosity increases, relative photon fluence rates increase linearly for all energies. Attenuation effects due to iron, water, PVC, and concrete cylindrical shields were found to agree with previous studies. Regression analyses produced energy-dependent equations for efficiency corrections applicable to spectral gamma-ray well logs collected under non-standard borehole conditions

  13. Calibration and validation of a model describing complete autotrophic nitrogen removal in a granular SBR system

    DEFF Research Database (Denmark)

    Vangsgaard, Anna Katrine; Mutlu, Ayten Gizem; Gernaey, Krist;

    2013-01-01

    steady-state in the biofilm system. For oxygen mass transfer coefficient (kLa) estimation, long-term data, removal efficiencies, and the stoichiometry of the reactions were used. For the dynamic calibration a pragmatic model fitting approach was used - in this case an iterative Monte Carlo based...... screening of the parameter space proposed by Sin et al. (2008) - to find the best fit of the model to dynamic data. Finally, the calibrated model was validated with an independent data set. CONCLUSION: The presented calibration procedure is the first customized procedure for this type of system and is...

  14. Comparative study using Monte Carlo methods of the radiation detection efficiency of LSO, LuAP, GSO and YAP scintillators for use in positron emission imaging (PET)

    International Nuclear Information System (INIS)

    The radiation detection efficiency of four scintillators employed, or designed to be employed, in positron emission imaging (PET) was evaluated as a function of the crystal thickness by applying Monte Carlo methods. The scintillators studied were Lu2SiO5 (LSO), LuAlO3 (LuAP), Gd2SiO5 (GSO) and YAlO3 (YAP). Crystal thicknesses ranged from 0 to 50 mm. The study was performed via a previously generated photon-transport Monte Carlo code. All photon track and energy histories were recorded, and the energy transferred or absorbed in the scintillator medium was calculated together with the energy redistributed and retransported as secondary characteristic fluorescence radiation. Various parameters were calculated, e.g., the fraction of the incident photon energy absorbed, transmitted or redistributed as fluorescence radiation, the scatter-to-primary ratio, and the photon and energy distribution within each scintillator block. Most significantly, the fraction of the incident photon energy absorbed was found to increase with increasing crystal thickness, tending to form a plateau above 30 mm thickness. For the LSO, LuAP, GSO and YAP scintillators, respectively, this fraction had the value of 44.8, 36.9 and 45.7% at 10 mm thickness and 96.4, 93.2 and 96.9% at 50 mm thickness. Within the plateau area approximately (57-59)%, (59-63)%, (52-63)% and (58-61)% of this fraction was due to scattered and reabsorbed radiation for the LSO, GSO, YAP and LuAP scintillators, respectively. In all cases, a negligible fraction (<0.1%) of the absorbed energy was found to escape the crystal as fluorescence radiation

  15. TARC: Carlo Rubbia's Energy Amplifier

    CERN Multimedia

    Laurent Guiraud

    1997-01-01

    Transmutation by Adiabatic Resonance Crossing (TARC) is Carlo Rubbia's energy amplifier. This CERN experiment demonstrated that long-lived fission fragments, such as 99Tc, can be efficiently destroyed.

  16. An Efficient Method of Reweighting and Reconstructing Monte Carlo Molecular Simulation Data for Extrapolation to Different Temperature and Density Conditions

    KAUST Repository

    Sun, Shuyu

    2013-06-01

    This paper introduces an efficient technique to generate new molecular simulation Markov chains for different temperature and density conditions, which allows for rapid extrapolation of canonical ensemble averages over a range of temperatures and densities different from the original conditions where a single simulation is conducted. Information obtained from the original simulation is reweighted, and even reconstructed, in order to extrapolate our knowledge to the new conditions. Our technique allows not only extrapolation to a new temperature or density, but also double extrapolation to both a new temperature and a new density. The method was implemented for a Lennard-Jones fluid of structureless particles in the single-phase gas region. Extrapolation behaviors as functions of the extrapolation range were studied. The limits of the extrapolation ranges showed a remarkable reach, especially along isochores, where only reweighting is required. Various factors that could affect the limits of the extrapolation ranges were investigated and compared. In particular, these limits were shown to be sensitive to the number of particles used and to the starting point where the simulation was originally conducted.
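
    The reweighting step along an isochore can be written down in a few lines: a canonical average at inverse temperature beta1 is recovered from configurations sampled at beta0 by weighting each sampled property value with the ratio of Boltzmann factors. A generic sketch of that standard identity, not the authors' implementation, using stored per-configuration energies:

```python
import numpy as np

def reweight(A, E, beta0, beta1):
    """Canonical average of property A at beta1 from configurations
    (A_i, E_i) sampled at beta0; valid along an isochore."""
    dE = E - E.mean()                    # shift by a constant for stability
    logw = -(beta1 - beta0) * dE
    w = np.exp(logw - logw.max())        # guard against overflow
    return np.sum(A * w) / np.sum(w)

# e.g. pressures stored during a run at T0, extrapolated to T1:
# p_T1 = reweight(pressures, energies, 1/(kB*T0), 1/(kB*T1))
```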

  17. The MINOS calibration detector

    International Nuclear Information System (INIS)

    This paper describes the MINOS calibration detector (CalDet) and the procedure used to calibrate it. The CalDet, a scaled-down but functionally equivalent model of the MINOS Far and Near detectors, was exposed to test beams in the CERN PS East Area during 2001-2003 to establish the response of the MINOS calorimeters to hadrons, electrons and muons in the range 0.2-10 GeV/c. The CalDet measurements are used to fix the energy scale and constrain Monte Carlo simulations of MINOS

  18. Simulation of ventilation efficiency, and pre-closure temperatures in emplacement drifts at Yucca Mountain, Nevada, using Monte Carlo and composite thermal-pulse methods

    Science.gov (United States)

    Case, J.B.; Buesch, D.C.

    2004-01-01

    Predictions of waste canister and repository driftwall temperatures as functions of space and time are important to evaluate pre-closure performance of the proposed repository for spent nuclear fuel and high-level radioactive waste at Yucca Mountain, Nevada. Variations in the lithostratigraphic features in densely welded and crystallized rocks of the 12.8-million-year-old Topopah Spring Tuff, especially the porosity resulting from lithophysal cavities, affect thermal properties. A simulated emplacement drift is based on projecting lithophysal cavity porosity values 50 to 800 m from the Enhanced Characterization of the Repository Block cross drift. Lithophysal cavity porosity varies from 0.00 to 0.05 cm3/cm3 in the middle nonlithophysal zone and from 0.03 to 0.28 cm3/cm3 in the lower lithophysal zone. A ventilation model and computer program titled "Monte Carlo Simulation of Ventilation" (MCSIMVENT), which is based on a composite thermal-pulse calculation, simulates statistical variability and uncertainty of rock-mass thermal properties and ventilation performance along a simulated emplacement drift for a pre-closure period of 50 years. Although ventilation efficiency is relatively insensitive to thermal properties, variations in lithophysal porosity along the drift can result in peak driftwall temperatures ranging from 40 to 85 °C for the pre-closure period. Copyright © 2004 by ASME.

  19. Kinetic Monte Carlo simulation of the efficiency roll-off, emission color, and degradation of organic light-emitting diodes (Presentation Recording)

    Science.gov (United States)

    Coehoorn, Reinder; van Eersel, Harm; Bobbert, Peter A.; Janssen, Rene A. J.

    2015-10-01

    The performance of Organic Light Emitting Diodes (OLEDs) is determined by a complex interplay of the charge transport and excitonic processes in the active layer stack. We have developed a three-dimensional kinetic Monte Carlo (kMC) OLED simulation method which includes all these processes in an integral manner. The method employs a physically transparent mechanistic approach and is based on measurable parameters. All processes can be followed with molecular-scale spatial resolution and with sub-nanosecond time resolution, for any layer structure and any mixture of materials. In the talk, applications to the efficiency roll-off, emission color and lifetime of white and monochrome phosphorescent OLEDs [1,2] are demonstrated, and a comparison with experimental results is given. The simulations show to what extent triplet-polaron quenching (TPQ) and triplet-triplet annihilation (TTA) contribute to the roll-off, and how the microscopic parameters describing these processes can be properly deduced from dedicated experiments. Degradation is treated as a result of the (accelerated) conversion of emitter molecules to non-emissive sites upon a triplet-polaron quenching (TPQ) process. The degradation rate, and hence the device lifetime, is shown to depend on the emitter concentration and on the precise type of TPQ process. Results for both single-doped and co-doped OLEDs are presented, revealing that the kMC simulations enable efficient simulation-assisted layer stack development. [1] H. van Eersel et al., Appl. Phys. Lett. 105, 143303 (2014). [2] R. Coehoorn et al., Adv. Funct. Mater. (2015), publ. online (DOI: 10.1002/adfm.201402532)

  20. U.S. Department Of Energy's nuclear engineering education research: highlights of recent and current research-I. 2. Monte Carlo Characterization of a Highly Efficient Photon Detector

    International Nuclear Information System (INIS)

    computational tool for the particle transport through the detector geometry. The detector geometry was implemented as a component module within the EGS4/BEAM Monte Carlo code. Each geometric component can be assigned different dimensions and materials. To validate the Monte Carlo calculation, the calculated and measured responses of the detector to a 44-cm-long and 3.56-cm-wide 6-MV fan beam (at the iso-center) were compared (Fig. 1). Although the incident photon fluence intensity has the shape of a centered triangle function along the fan beam, the dose per particle in xenon exhibits a sharp increase with increasing distance from the center of the detector. At larger distances, the response profile drops with the incident intensity. As measures of efficiency, the quantum efficiency QE (i.e., the probability of detecting a single incident quantum) and the detective quantum efficiency at zero frequency DQE(0) were calculated. The results are shown in Table I. It is clearly demonstrated that the out-of-focus position of the detector results in a higher detection efficiency, as the geometrical cross-section of the tungsten plates 'seen' by the incident photons is much larger than for the in-focus position. Compared to other technologies used for portal imaging in radiotherapy (metal/phosphor screens, or indirect and direct active matrix arrays), the efficiencies are one order of magnitude higher. A separate calculation and measurement showed that the line-spread functions (and the corresponding modulation transfer functions) were nearly independent of the detector location, even when placed out of focus with the photon source. This is important to guarantee a spatially independent (shift-invariant) response of the detector. In conclusion, the combination of a dense, high-atomic-number material with a low-density signal-generating medium might serve as a model for a future generation of highly efficient photon radiation detectors. (authors)

  1. Krypton calibration of time projection chambers of the NA61/SHINE experiment

    CERN Document Server

    Naskret, Michal

    The NA61/SHINE experiment at CERN is searching for the critical point of the phase transition between quark-gluon plasma and hadronic matter. To do so we use a highly precise apparatus, the Time Projection Chamber (TPC), whose main task is to find the trajectories of particles created in a relativistic collision. In order to improve the efficiency of the TPCs, we introduce a calibration using radioactive krypton gas. Simulation of events in a TPC chamber through the decay of excited krypton atoms gives us a spectrum, which is later fitted to the model spectrum of krypton from a Monte Carlo simulation. The data obtained in this way serve to identify malfunctioning electronics in the TPCs. Thanks to the krypton calibration we can create a map of pad-by-pad gains. In this thesis I describe in detail the NA61 experimental setup, the krypton calibration procedure, the calibration algorithm and the results of recent calibration runs

  2. An integrated approach to the simultaneous selection of variables, mathematical pre-processing and calibration samples in partial least-squares multivariate calibration.

    Science.gov (United States)

    Allegrini, Franco; Olivieri, Alejandro C

    2013-10-15

    A new optimization strategy for multivariate partial-least-squares (PLS) regression analysis is described. It was achieved by integrating three efficient strategies to improve PLS calibration models: (1) variable selection based on ant colony optimization, (2) mathematical pre-processing selection by a genetic algorithm, and (3) sample selection through a distance-based procedure. Outlier detection has also been included as part of the model optimization. All the above procedures have been combined into a single algorithm, whose aim is to find the best PLS calibration model within a Monte Carlo-type philosophy. Simulated and experimental examples are employed to illustrate the success of the proposed approach. PMID:24054659

  3. Euromet action 428: transfer of ge detectors efficiency calibration from point source geometry to other geometries; Action euromet 428: transfert de l'etalonnage en rendement de detecteurs au germanium pour une source ponctuelle vers d'autres geometries

    Energy Technology Data Exchange (ETDEWEB)

    Lepy, M.Ch

    2000-07-01

    The EUROMET project 428 examines efficiency transfer computation for Ge gamma-ray spectrometers when the efficiency is known for a reference point-source geometry in the 60 keV to 2 MeV energy range. For this, different methods are used, such as Monte Carlo simulation or semi-empirical computation. The exercise compares the application of these methods to the same selected experimental cases to determine the usage limitations versus the requested accuracy. To allow careful examination of the results and to derive information for improving the computation codes, the study was limited to a few simple cases, starting from an experimental efficiency calibration for a point source at 10 cm source-to-detector distance. The first part concerns the simplest case of geometry transfer, i.e., using point sources at three source-to-detector distances: 2, 5 and 20 cm; the second part deals with the transfer from point-source geometry to cylindrical geometry with three different matrices. The general results show that the deviations between the computed results and the measured efficiencies are for the most part within 10%. The quality of the results is rather inhomogeneous and shows that these codes cannot be used directly for metrological purposes. However, most of them are operational for routine measurements when efficiency uncertainties of 5-10% are sufficient. (author)

  4. The correct and incorrect way to calibrate a Compton suppression counting system for gamma-ray efficiency

    International Nuclear Information System (INIS)

    Gamma-ray efficiency calculations for a germanium detector have been made for a Compton suppression system. Results have shown that for radionuclides that emit gamma rays in coincidence, the photopeaks can be severely depressed, leading to erroneous results. While this can be overcome in routine neutron activation analysis using a comparator method, special consideration must be given to determining the suppression of coincident gamma rays when calculating the efficiency curve and radionuclide activities. This is especially important for users of the k0 method and for fission product identification using Compton suppression methods. (author)

  5. Tau Reconstruction, Energy Calibration and Identification at ATLAS

    CERN Document Server

    Trottier-McDonald, M; The ATLAS collaboration

    2011-01-01

    Tau leptons play a central role in the LHC physics programme, in particular as an important signature in many Higgs boson and Supersymmetry searches. They are further used in Standard Model electroweak measurements, as well as detector-related studies like the determination of the missing transverse energy scale. Copious backgrounds from QCD processes call for both efficient identification of hadronically decaying tau leptons and large fake rejection. A solid understanding of the combined performance of the calorimeter and tracking detectors is also required. We present the current status of tau reconstruction, energy calibration and identification with the ATLAS detector at the LHC. Identification efficiencies are measured in W→τν events in data and compared with predictions from Monte Carlo simulations, whereas the misidentification probabilities of QCD jets and electrons are determined from various jet-enriched data samples and from Z→ee events, respectively. The tau energy scale calibration i...

  6. Monte Carlo molecular simulations: improving the statistical efficiency of samples with the help of artificial evolution algorithms; Simulations moleculaires de Monte Carlo: amelioration de l'efficacite statistique de l'echantillonnage grace aux algorithmes d'evolution artificielle

    Energy Technology Data Exchange (ETDEWEB)

    Leblanc, B.

    2002-03-01

    Molecular simulation aims at simulating particles in interaction, describing a physico-chemical system. When considering Markov chain Monte Carlo sampling in this context, we often meet the same problem of statistical efficiency as with molecular dynamics for the simulation of complex molecules (polymers, for example). The search for correct sampling of the space of possible configurations with respect to the Boltzmann-Gibbs distribution is directly related to the statistical efficiency of such algorithms (i.e., the ability to rapidly provide uncorrelated states covering all of configuration space). We investigated how to improve this efficiency with the help of Artificial Evolution (AE). AE algorithms form a class of stochastic optimization algorithms inspired by Darwinian evolution. Efficiency measures that can be turned into optimization criteria were first identified, before identifying the parameters that could be optimized. Relative frequencies for each type of Monte Carlo move, usually chosen empirically within reasonable ranges, were considered first. We combined parallel simulations with a 'genetic server' in order to dynamically improve the quality of the sampling as the simulations progress. Our results show that, in comparison with some reference settings, it is possible to improve the quality of samples with respect to the chosen criterion. The same algorithm was applied to improve the Parallel Tempering technique, in order to optimize at the same time the relative frequencies of Monte Carlo moves and the relative frequencies of swapping between sub-systems simulated at different temperatures. Finally, hints for further research on optimizing the choice of additional temperatures are given. (author)

  7. Development of an absolute method for efficiency calibration of a coaxial HPGe detector for large volume sources

    Science.gov (United States)

    Ortiz-Ramírez, Pablo C.

    2015-09-01

    In this work an absolute method for the determination of the full-energy-peak efficiency of a gamma spectroscopy system for voluminous sources is presented. The method was tested for a high-resolution coaxial HPGe detector and a cylindrical homogeneous volume source. The volume source is represented by a set of point sources filling its volume. We found that the absolute efficiency for a volume source can be determined as the average over its volume of the absolute efficiencies of the point sources. Experimentally, we measured the intrinsic efficiency as a function of source-detector position. Then, considering the solid angle and the attenuation of the gamma rays emitted toward the detector by each point source, considered as embedded in the source matrix, the absolute efficiency for each point source inside the volume was determined. The factor associated with the solid angle and the self-attenuation of photons in the sample was deduced from first principles, without any mathematical approximation. The method was tested by determining the specific activity of 137Cs in cylindrical homogeneous sources, using IAEA reference materials with specific activities between 14.2 Bq/kg and 9640 Bq/kg at the time of the experiments. The results obtained showed good agreement with the expected values; the relative difference was less than 7% in most cases. The main advantage of this method is that it does not require the use of expensive and hard-to-produce standard materials. In addition, it does not require matrix-effect corrections, which are the main source of error in this type of measurement, and it is easy to implement in any nuclear physics laboratory.
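
    A minimal numerical sketch of the central idea: sample points uniformly in the cylinder, weight each by a solid-angle factor and by self-attenuation along the chord to the detector, and average. The geometry, the small-detector solid-angle approximation, and all parameter values here are illustrative assumptions, not the paper's exact treatment (which avoids such approximations):

```python
import numpy as np

def volume_efficiency(R, H, gap, mu, a_det, eps_int, n=200_000, seed=1):
    """Average point-source efficiency over a homogeneous cylinder of radius
    R and height H (cm), sitting 'gap' cm above a small on-axis detector of
    radius a_det; mu is the matrix linear attenuation coefficient (1/cm)."""
    rng = np.random.default_rng(seed)
    r = R * np.sqrt(rng.random(n))             # uniform over the disc
    h = H * rng.random(n)                      # height above the source bottom
    dist = np.hypot(r, gap + h)                # point -> detector centre
    geo = a_det**2 / (4.0 * dist**2)           # small-detector solid-angle factor
    chord = h * dist / (gap + h)               # path length inside the matrix
    return np.mean(geo * np.exp(-mu * chord) * eps_int)

# 662 keV (137Cs) in a water-like matrix, mu ~ 0.086 /cm (all values assumed)
print(volume_efficiency(R=3.5, H=4.0, gap=3.0, mu=0.086, a_det=2.5, eps_int=0.2))
```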

  8. Parallelizing Monte Carlo with PMC

    Energy Technology Data Exchange (ETDEWEB)

    Rathkopf, J.A.; Jones, T.R.; Nessett, D.M.; Stanberry, L.C.

    1994-11-01

    PMC (Parallel Monte Carlo) is a system of generic interface routines that allows easy porting of Monte Carlo packages of large-scale physics simulation codes to Massively Parallel Processor (MPP) computers. By loading various versions of PMC, simulation code developers can configure their codes to run in several modes: serial, Monte Carlo runs on the same processor as the rest of the code; parallel, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on other MPP processor(s); distributed, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on a different machine. This multi-mode approach allows maintenance of a single simulation code source regardless of the target machine. PMC handles passing of messages between nodes on the MPP, passing of messages between a different machine and the MPP, distributing work between nodes, and providing independent, reproducible sequences of random numbers. Several production codes have been parallelized under the PMC system. Excellent parallel efficiency in both the distributed and parallel modes results if sufficient workload is available per processor. Experiences with a Monte Carlo photonics demonstration code and a Monte Carlo neutronics package are described.
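
    The sketch below illustrates the pattern PMC provides (distributing histories across workers while guaranteeing independent, reproducible random-number streams) using Python's multiprocessing rather than PMC's own interface; everything here is illustrative and is not PMC's API.

```python
import numpy as np
from multiprocessing import Pool

def run_histories(args):
    seed_seq, n_hist = args
    rng = np.random.default_rng(seed_seq)     # independent stream per worker
    # toy "transport": fraction of photons surviving 5 mean free paths
    return float(np.mean(rng.exponential(1.0, n_hist) > 5.0))

if __name__ == "__main__":
    root = np.random.SeedSequence(2024)
    tasks = [(s, 1_000_000) for s in root.spawn(8)]   # 8 reproducible streams
    with Pool(8) as pool:
        tallies = pool.map(run_histories, tasks)
    print("estimate:", np.mean(tallies))              # ~= exp(-5) ~ 6.7e-3
```

    Spawning child seed sequences from one root is what makes the run reproducible regardless of how work is distributed, the same property PMC supplies for its serial, parallel, and distributed modes.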

  9. Absolute calibration of in vivo measurement systems

    International Nuclear Information System (INIS)

    Lawrence Livermore National Laboratory (LLNL) is currently investigating a new method for obtaining absolute calibration factors for radiation measurement systems used to measure internally deposited radionuclides in vivo. Absolute calibration of in vivo measurement systems will eliminate the need to generate a series of human surrogate structures (i.e., phantoms) for calibrating in vivo measurement systems. The absolute calibration of in vivo measurement systems utilizes magnetic resonance imaging (MRI) to define physiological structure, size, and composition. The MRI image provides a digitized representation of the physiological structure, which allows for any mathematical distribution of radionuclides within the body. Using Monte Carlo transport codes, the emission spectrum from the body is predicted. The in vivo measurement equipment is calibrated using the Monte Carlo code and adjusting for the intrinsic properties of the detection system. The calibration factors are verified using measurements of existing phantoms and previously obtained measurements of human volunteers. 8 refs

  10. Calibration with MCNP of NaI detector for the determination of natural radioactivity levels in the field.

    Science.gov (United States)

    Cinelli, Giorgia; Tositti, Laura; Mostacci, Domiziano; Baré, Jonathan

    2016-05-01

    In view of assessing natural radioactivity with on-site quantitative gamma spectrometry, efficiency calibration of NaI(Tl) detectors is investigated. A calibration based on Monte Carlo simulation of detector response is proposed, to render reliable quantitative analysis practicable in field campaigns. The method is developed with reference to contact geometry, in which measurements are taken placing the NaI(Tl) probe directly against the solid source to be analyzed. The Monte Carlo code used for the simulations was MCNP. Experimental verification of the calibration goodness is obtained by comparison with appropriate standards, as reported. On-site measurements yield a quick quantitative assessment of natural radioactivity levels present ((40)K, (238)U and (232)Th). On-site gamma spectrometry can prove particularly useful insofar as it provides information on materials from which samples cannot be taken. PMID:26913974
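
    Once a Monte Carlo efficiency is in hand, converting an on-site net peak count into a specific activity is a one-line calculation. A sketch for a finite sample of known mass; the numbers and the assumed simulated efficiency are placeholders, not values from the study:

```python
# A [Bq/kg] = net_counts / (live_time * efficiency * gamma_yield * mass)
# for a finite sample of known mass; all numbers below are placeholders.
def specific_activity(net_counts, live_time_s, eff_mc, gamma_yield, mass_kg):
    return net_counts / (live_time_s * eff_mc * gamma_yield * mass_kg)

# 40K via its 1460.8 keV line (emission probability ~0.1066),
# with a hypothetical simulated full-energy-peak efficiency of 1.2e-3
print(specific_activity(1500, 3600, 1.2e-3, 0.1066, 1.0), "Bq/kg")
```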

  12. Efficient solution methodology for calibrating the hemodynamic model using functional Magnetic Resonance Imaging (fMRI) measurements

    KAUST Repository

    Zambri, Brian

    2015-11-05

    Our aim is to propose a numerical strategy for retrieving accurately and efficiently the biophysiological parameters as well as the external stimulus characteristics corresponding to the hemodynamic mathematical model that describes changes in blood flow and blood oxygenation during brain activation. The proposed method employs the TNM-CKF method developed in [1], but in a prediction/correction framework. We present numerical results using both real and synthetic functional Magnetic Resonance Imaging (fMRI) measurements to highlight the performance characteristics of this computational methodology. © 2015 IEEE.

  13. The PROMIS Physical Function item bank was calibrated to a standardized metric and shown to improve measurement efficiency

    DEFF Research Database (Denmark)

    Rose, Matthias; Bjørner, Jakob; Gandek, Barbara;

    2014-01-01

    of 16,065 adults answered item subsets (n>2,200/item) on the Internet, with oversampling of the chronically ill. Classical test and item response theory methods were used to evaluate 149 PROMIS PF items plus 10 Short Form-36 and 20 Health Assessment Questionnaire-Disability Index items. A graded....... In simulations, a 10-item computerized adaptive test (CAT) eliminated floor and decreased ceiling effects, achieving higher measurement precision than any comparable length static tool across four SDs of the measurement range. Improved psychometric properties were transferred to the CAT's superior ability...... to identify differences between age and disease groups. CONCLUSION: The item bank provides a common metric and can improve the measurement of PF by facilitating the standardization of patient-reported outcome measures and implementation of CATs for more efficient PF assessments over a larger range....

  14. Fusion yield measurements on JET and their calibration

    Energy Technology Data Exchange (ETDEWEB)

    Syme, D.B., E-mail: brian.syme@ccfe.ac.uk [EURATOM-CCFE Fusion Association, Culham Science Centre, Abingdon, OXON OX14 3DB (United Kingdom); Popovichev, S. [EURATOM-CCFE Fusion Association, Culham Science Centre, Abingdon, OXON OX14 3DB (United Kingdom); Conroy, S. [EURATOM-VR Association, Department of Physics and Astronomy, Uppsala University, Box 516, SE-75120 Uppsala (Sweden); Lengar, I.; Snoj, L. [EURATOM-MHEST Association, Reactor Physics Division, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Sowden, C. [EURATOM-CCFE Fusion Association, Culham Science Centre, Abingdon, OXON OX14 3DB (United Kingdom); Giacomelli, L. [EURATOM-ENEA-CNR Association, CNR-IFP and Univ. di Milano-Bicocca, Milan (Italy); Hermon, G.; Allan, P.; Macheta, P.; Plummer, D.; Stephens, J. [EURATOM-CCFE Fusion Association, Culham Science Centre, Abingdon, OXON OX14 3DB (United Kingdom); Batistoni, P. [EURATOM-ENEA Association, Via E. Fermi,40, 00044 Frascati (Italy); Prokopowicz, R.; Jednorog, S. [EURATOM-IPPLM Association, Institute of Plasma Physics and Laser Microfusion, Hery 23, 01-497 Warsaw (Poland); Abhangi, M.R.; Makwana, R. [Institute for Plasma Research, Bhat, Gandhinagar, 382 428 Gujarat (India)

    2014-11-15

    The power output of fusion experiments and fusion reactor-like devices is measured in terms of the neutron yields, which relate directly to the fusion yield. In this paper we describe the devices and methods used to make the new in situ calibration of JET in April 2013 and its early results. The target accuracy of this calibration was 10%, as in the earlier JET calibration and as required for ITER, where a precise neutron yield measurement is important, e.g., for tritium accountancy. We discuss the constraints and early decisions which defined the main calibration approach, e.g., the choice of source type and the deployment method. We describe the physics, source issues, safety and engineering aspects required to calibrate directly the Fission Chambers and the Activation System which carry the JET neutron calibration. In particular, a direct calibration of the Activation System was planned for the first time at JET. We used the existing JET remote-handling system to deploy the 252Cf source and developed the compatible tooling and systems necessary to ensure safe and efficient deployment in these cases. The scientific programme has sought to better understand the limitations of the calibration, to optimise the measurements and other provisions, to provide corrections for perturbing factors (e.g., the presence of the remote-handling boom and other non-standard torus conditions) and to ensure personnel safety and safe working conditions. Much of this work has been based on an extensive programme of Monte Carlo calculations which, e.g., revealed a potential contribution to the neutron yield via a direct line of sight through the ports, which varies from port to port depending on the details of the port geometry.

  15. Calibration uncertainty

    DEFF Research Database (Denmark)

    Heydorn, Kaj; Anglov, Thomas

    2002-01-01

    uncertainty was verified from independent measurements of the same sample by demonstrating statistical control of analytical results and the absence of bias. The proposed method takes into account uncertainties of the measurement, as well as of the amount of calibrant. It is applicable to all types......Methods recommended by the International Standardization Organisation and Eurachem are not satisfactory for the correct estimation of calibration uncertainty. A novel approach is introduced and tested on actual calibration data for the determination of Pb by ICP-AES. The improved calibration...

  16. Improvements in the simulation of the efficiency of a HPGe detector with Monte Carlo code MCNP5; Mejoras en la simulacion de la eficiencia de un detector HPGe con el codigo Monte Carlo MCNP5

    Energy Technology Data Exchange (ETDEWEB)

    Gallardo, S.; Querol, A.; Rodenas, J.; Verdu, G.

    2014-07-01

    In this paper we propose a simulation model using the MCNP5 code and a mesh tally to improve the simulated efficiency of the detector in the energy range from 50 to 2000 keV. The mesh is built with the FMESH tally of the MCNP5 code, which allows cells of a few microns. The photon and electron flux is calculated in the different cells of the mesh, which is superimposed on the detector geometry. The variation of the efficiency (related to the variation of the energy deposited in the active volume) is analyzed. (Author)

  17. Improvement of personalized Monte Carlo-aided direct internal contamination monitoring: optimization of calculation times and measurement methodology for the establishment of activity distribution

    International Nuclear Information System (INIS)

    To optimize the monitoring of female workers using in vivo spectrometry measurements, it is necessary to correct the typical calibration coefficients obtained with the Livermore male physical phantom. To do so, numerical calibrations based on the use of Monte Carlo simulations combined with anthropomorphic 3D phantoms were used. Such computational calibrations require, on the one hand, the development of representative female phantoms of different sizes and morphologies and, on the other hand, rapid and reliable Monte Carlo calculations. A library of female torso models was hence developed by fitting the weight of the internal organs and breasts according to body height and to relevant plastic surgery recommendations. This library was then used to perform a numerical calibration of the AREVA NC La Hague in vivo counting installation. Moreover, the morphology-induced counting efficiency variations with energy were expressed as equations, and recommendations were given for correcting the typical calibration coefficients for any monitored female worker as a function of body height and breast size. Meanwhile, variance reduction techniques and geometry simplification operations were considered to accelerate the simulations. Furthermore, to determine the activity mapping in the case of complex contaminations, a method that combines Monte Carlo simulations with in vivo measurements was developed. This method consists of performing several spectrometry measurements with different detector positions. The contribution of each contaminated organ to the count is then assessed from Monte Carlo calculations. The in vivo measurements performed at LEDI, CIEMAT and KIT have demonstrated the effectiveness of the method and highlighted the valuable contribution of Monte Carlo simulations to a more detailed analysis of spectrometry measurements. Thus, a more precise estimate of the activity distribution is given in the case of an internal contamination. (author)

  18. Fast sequential Monte Carlo methods for counting and optimization

    CERN Document Server

    Rubinstein, Reuven Y; Vaisman, Radislav

    2013-01-01

    A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the

  19. Validation of a Monte Carlo model for a GMX detector used for measurements of environmental radioactivity

    International Nuclear Information System (INIS)

    In an Environmental Radioactivity Laboratory, samples from several products are analyzed in order to determine the amount of radioactive material they contain. A usual method is the gamma activity measurement of these samples, which typically requires the use of High Purity Germanium (HPGe) detectors. GMX (n-type) detectors can be found among this group of detectors; they have a high efficiency for low-energy emissions. As any detector, it must be calibrated in energy, efficiency and resolution (FWHM). For this calibration, a gamma standard solution is used, whose composition and activity are certified by a reference laboratory. This source contains several radionuclides, providing a wide energy spectrum. The simulation of the detection process with MCNP5, a code based on the Monte Carlo method, is a useful tool in an Environmental Radioactivity Laboratory, since it can reproduce the experimental conditions of the assay without manipulating radioactive sources, consequently reducing radioactive waste. On the other hand, the simulation of the detector calibration permits analysis of the influence of different variables on the detector efficiency. In this paper, the simulation of the calibration of the GMX detector used in the Environmental Radioactivity Laboratory of the Polytechnic University of Valencia (UPV) is presented. Results obtained with this simulation are compared with laboratory measurements in order to validate the model. (author)

  20. Monte Carlo simulation by GEANT 4 and GESPECOR of in situ gamma-ray spectrometry measurements.

    Science.gov (United States)

    Chirosca, Alecsandru; Suvaila, Rares; Sima, Octavian

    2013-11-01

    The application of GEANT 4 and GESPECOR Monte Carlo simulation codes for efficiency calibration of in situ gamma-ray spectrometry was studied. The long computing time required by GEANT 4 prevents its use in routine simulations. Due to the application of variance reduction techniques, GESPECOR is much faster. In this code specific procedures for incorporating the depth profile of the activity were implemented. In addition procedures for evaluating the effect of non-homogeneity of the source were developed. The code was validated by comparison with test simulations carried out with GEANT 4 and by comparison with published results. PMID:23566809

  1. Calibration method for a in vivo measurement system using mathematical simulation of the radiation source and the detector

    International Nuclear Information System (INIS)

    A Monte Carlo program which uses a voxel phantom has been developed to simulate in vivo measurement systems for calibration purposes. The calibration method presented here employs a mathematical phantom, produced in the form of volume elements (voxels), obtained through magnetic resonance images of the human body. The calibration method uses the Monte Carlo technique to simulate the tissue contamination, the transport of the photons through the tissues and the detection of the radiation. The program simulates the transport and detection of photons between 0.035 and 2 MeV and uses, for the body representation, a voxel phantom with a format of 871 slices each of 277 x 148 picture elements. The Monte Carlo code was applied to the calibration of in vivo systems and to estimate differences in counting efficiencies between homogeneous and non-homogeneous radionuclide distributions in the lung. Calculations show a factor of 20 between deposition of 241Am at the back compared with the front of the lung. The program was also used to estimate the 137Cs body burden of an internally contaminated individual, counted with an 8 x 4 NaI(Tl) detector, and the 241Am body burden of an internally contaminated individual, who was counted using a planar germanium detector. (author)

  2. A variable acceleration calibration system

    Science.gov (United States)

    Johnson, Thomas H.

    2011-12-01

    A variable acceleration calibration system that applies loads using gravitational and centripetal acceleration serves as an alternative, efficient and cost-effective method for calibrating internal wind tunnel force balances. Two proof-of-concept variable acceleration calibration systems were designed, fabricated and tested. The NASA UT-36 force balance served as the test balance for the calibration experiments. The variable acceleration calibration systems are shown to be capable of performing three-component calibration experiments with an approximate applied load error on the order of 1% of the full-scale calibration loads. Sources of error are identified using experimental design methods and a propagation-of-uncertainty analysis. Three types of uncertainty are identified for the systems and are attributed to prediction error, calibration error and pure error. Angular velocity uncertainty is shown to be the largest identified source of prediction error. The calibration uncertainties using a production variable-acceleration-based system are shown to be potentially equivalent to current methods. A production-quality system can be realized using lighter materials and more precise instrumentation. Further research is needed to account for balance deflection, forcing effects due to vibration, and large tare loads. A gyroscope measurement technique is shown to be capable of resolving the balance deflection angle calculation. Long-term research objectives include a demonstration of a six-degree-of-freedom calibration and a large-capacity balance calibration.
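
    The applied load and the dominant role of angular-velocity uncertainty follow directly from F = m ω² r: first-order propagation doubles the relative ω term. A small sketch with illustrative numbers (not from the study):

```python
import math

def centripetal_load(m_kg, omega_rad_s, r_m):
    return m_kg * omega_rad_s**2 * r_m            # newtons

def rel_load_uncertainty(rel_m, rel_omega, rel_r):
    # first-order propagation for F = m * omega^2 * r
    return math.sqrt(rel_m**2 + (2.0 * rel_omega)**2 + rel_r**2)

F = centripetal_load(2.0, 10.0, 0.5)              # 100 N applied load
u = rel_load_uncertainty(1e-4, 5e-3, 1e-4)        # omega term dominates
print(F, u)                                       # u ~ 0.01, i.e. ~1% of load
```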

  3. Calibration of the Super-Kamiokande Detector

    CERN Document Server

    Abe, K; Iida, T; Iyogi, K; Kameda, J; Kishimoto, Y; Koshio, Y; Marti, Ll; Miura, M; Moriyama, S; Nakahata, M; Nakano, Y; Nakayama, S; Obayashi, Y; Sekiya, H; Shiozawa, M; Suzuki, Y; Takeda, A; Takenaga, Y; Tanaka, H; Tomura, T; Ueno, K; Wendell, R A; Yokozawa, T; Irvine, T J; Kaji, H; Kajita, T; Kaneyuki, K; Lee, K P; Nishimura, Y; Okumura, K; McLachlan, T; Labarga, L; Kearns, E; Raaf, J L; Stone, J L; Sulak, L R; Berkman, S; Tanaka, H A; Tobayama, S; Goldhaber, M; Bays, K; Carminati, G; Kropp, W R; Mine, S; Renshaw, A; Smy, M B; Sobel, H W; Ganezer, K S; Hill, J; Keig, W E; Jang, J S; Kim, J Y; Lim, I T; Hong, N; Akiri, T; Albert, J B; Himmel, A; Scholberg, K; Walter, C W; Wongjirad, T; Ishizuka, T; Tasaka, S; Learned, J G; Matsuno, S; Smith, S N; Hasegawa, T; Ishida, T; Ishii, T; Kobayashi, T; Nakadaira, T; Nakamura, K; Nishikawa, K; Oyama, Y; Sakashita, K; Sekiguchi, T; Tsukamoto, T; Suzuki, A T; Takeuchi, Y; Huang, K; Ieki, K; Ikeda, M; Kikawa, T; Kubo, H; Minamino, A; Murakami, A; Nakaya, T; Otani, M; Suzuki, K; Takahashi, S; Fukuda, Y; Choi, K; Itow, Y; Mitsuka, G; Miyake, M; Mijakowski, P; Tacik, R; Hignight, J; Imber, J; Jung, C K; Taylor, I; Yanagisawa, C; Idehara, Y; Ishino, H; Kibayashi, A; Mori, T; Sakuda, M; Yamaguchi, R; Yano, T; Kuno, Y; Kim, S B; Yang, B S; Okazawa, H; Choi, Y; Nishijima, K; Koshiba, M; Totsuka, Y; Yokoyama, M; Martens, K; Vagins, M R; Martin, J F; de Perio, P; Konaka, A; Wilking, M J; Chen, S; Heng, Y; Sui, H; Yang, Z; Zhang, H; Zhenwei, Y; Connolly, K; Dziomba, M; Wilkes, R J

    2013-01-01

    Procedures and results on hardware level detector calibration in Super-Kamiokande (SK) are presented in this paper. In particular, we report improvements made in our calibration methods for the experimental phase IV in which new readout electronics have been operating since 2008. The topics are separated into two parts. The first part describes the determination of constants needed to interpret the digitized output of our electronics so that we can obtain physical numbers such as photon counts and their arrival times for each photomultiplier tube (PMT). In this context, we developed an in-situ procedure to determine high-voltage settings for PMTs in large detectors like SK, as well as a new method for measuring PMT quantum efficiency and gain in such a detector. The second part describes the modeling of the detector in our Monte Carlo simulation, including in particular the optical properties of its water target and their variability over time. Detailed studies on the water quality are also presented. As a re...

  4. State-of-the-art Monte Carlo 1988

    Energy Technology Data Exchange (ETDEWEB)

    Soran, P.D.

    1988-06-28

    Particle transport calculations in highly dimensional and physically complex geometries, such as detector calibration, radiation shielding, space reactors, and oil-well logging, generally require Monte Carlo transport techniques. Monte Carlo particle transport can be performed on a variety of computers ranging from APOLLOs to VAXs. Some of the hardware and software developments, which now permit Monte Carlo methods to be routinely used, are reviewed in this paper. The development of inexpensive, large, fast computer memory, coupled with fast central processing units, permits Monte Carlo calculations to be performed on workstations, minicomputers, and supercomputers. The Monte Carlo renaissance is further aided by innovations in computer architecture and software development. Advances in vectorization and parallelization architecture have resulted in the development of new algorithms which have greatly reduced processing times. Finally, the renewed interest in Monte Carlo has spawned new variance reduction techniques which are being implemented in large computer codes. 45 refs.

  5. Accurate and efficient radiation transport in optically thick media -- by means of the Symbolic Implicit Monte Carlo method in the difference formulation

    Energy Technology Data Exchange (ETDEWEB)

    Szoke, A; Brooks, E D; McKinley, M; Daffin, F

    2005-03-30

    The equations of radiation transport for thermal photons are notoriously difficult to solve in thick media without resorting to asymptotic approximations such as the diffusion limit. One source of this difficulty is that in thick, absorbing media thermal emission is almost completely balanced by strong absorption. In a previous publication [SB03], the photon transport equation was written in terms of the deviation of the specific intensity from the local equilibrium field. We called the new form of the equations the difference formulation. The difference formulation is rigorously equivalent to the original transport equation. It is particularly advantageous in thick media, where the radiation field approaches local equilibrium and the deviations from the Planck distribution are small. The difference formulation for photon transport also clarifies the diffusion limit. In this paper, the transport equation is solved by the Symbolic Implicit Monte Carlo (SIMC) method and a comparison is made between the standard formulation and the difference formulation. The SIMC method is easily adapted to the derivative source terms of the difference formulation, and a remarkable reduction in noise is obtained when the difference formulation is applied to problems involving thick media.

  6. Validation of the ATLAS hadronic calibration with the LAr End-Cap beam tests data

    Science.gov (United States)

    Barillari, Teresa

    2009-04-01

    The high granularity of the ATLAS calorimeter and the large number of expected particles per event require a clustering algorithm that is able to suppress noise and pile-up efficiently. The cluster reconstruction is therefore the essential first step in the hadronic calibration. The identification of electromagnetic components within a hadronic cluster using cluster shape variables is the next step in the hadronic calibration procedure. Finally, the energy density of individual cells is used to assign the proper weight to correct for the invisible energy deposits of hadrons, due to the non-compensating nature of the ATLAS calorimeter, and to correct for energy losses in material not instrumented with read-out. Since the weighting scheme employs the energy density in individual cells, the validation of the Monte Carlo simulation, which is used to define the weighting parameters and energy correction algorithms, is an essential step in the hadronic calibration procedure. Pion data, obtained in a beam test corresponding to the pseudorapidity region 2.5 < |η| < 4.0 in ATLAS and in the energy range 40 GeV <= E <= 200 GeV, have been compared with Monte Carlo simulations, using the full ATLAS hadronic calibration procedure.

  7. Monte Carlo simulation of gamma-ray interactions in an over-square high-purity germanium detector for in-vivo measurements

    Science.gov (United States)

    Saizu, Mirela Angela

    2016-09-01

    The developments of high-purity germanium detectors match very well the requirements of in-vivo human body measurements regarding the gamma energy ranges of the radionuclides to be measured, the shape of the extended radioactive sources, and the measurement geometries. The Whole Body Counter (WBC) at IFIN-HH is based on an “over-square” high-purity germanium (HPGe) detector, used to perform accurate measurements of incorporated radionuclides emitting X and gamma rays in the energy range of 10 keV-1500 keV, under conditions of good shielding, suitable collimation, and calibration. As an alternative to the experimental efficiency calibration method, which uses reference calibration sources with gamma energy lines covering the entire energy range considered, it is proposed to use the Monte Carlo method for the efficiency calibration of the WBC with the radiation transport code MCNP5. The HPGe detector was modelled and the gamma energy lines of 241Am, 57Co, 133Ba, 137Cs, 60Co, and 152Eu were simulated in order to obtain the virtual efficiency calibration curve of the WBC. The Monte Carlo method was validated by comparing the simulated results with experimental measurements using point-like sources. To optimize the match between them, the impact of the front dead-layer thickness and of the detector photon-absorbing-layer materials on the HPGe detector efficiency was studied, and the detector model was refined. In order to perform the WBC efficiency calibration for realistic people monitoring, further numerical calculations were generated simulating extended sources of specific shape according to standard-man characteristics.
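
    A common way to turn a set of simulated full-energy-peak efficiencies into a usable calibration curve is a polynomial fit of ln ε versus ln E. A sketch of that standard step; the energy/efficiency pairs below are placeholders standing in for the simulated 241Am, 57Co, 133Ba, 137Cs, 60Co and 152Eu points:

```python
import numpy as np

E_kev = np.array([59.5, 122.1, 356.0, 661.7, 1173.2, 1332.5])
eff   = np.array([2.1e-2, 3.0e-2, 1.6e-2, 9.5e-3, 6.0e-3, 5.4e-3])  # assumed

coeffs = np.polyfit(np.log(E_kev), np.log(eff), deg=3)

def eff_curve(energy_kev):
    return np.exp(np.polyval(coeffs, np.log(energy_kev)))

print(eff_curve(834.8))   # interpolated efficiency at, e.g., the 54Mn line
```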

  8. Study on efficiency of time computation in x-ray imaging simulation base on Monte Carlo algorithm using graphics processing unit

    Science.gov (United States)

    Setiani, Tia Dwi; Suprijadi; Haryanto, Freddy

    2016-03-01

    Monte Carlo (MC) is one of the powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study was aimed at investigating the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and a comparison of the image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run on a CPU in serial mode and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that the simulation times on the GPU were significantly shorter than on the CPU. The simulations on the 2304-core GPU were performed about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU were performed about 20-31 times faster than on a single CPU core. Another result shows that the optimum image quality from the simulation was obtained with the number of histories starting from 10^8 and energies from 60 keV to 90 keV. Analyzed by a statistical approach, the quality of the GPU and CPU images is essentially the same.

  9. TARGETLESS CAMERA CALIBRATION

    Directory of Open Access Journals (Sweden)

    L. Barazzetti

    2012-09-01

    In photogrammetry a camera is considered calibrated if its interior orientation parameters are known. These encompass the principal distance, the principal point position and some Additional Parameters used to model possible systematic errors. The current state of the art for automated camera calibration relies on the use of coded targets to accurately determine the image correspondences. This paper presents a new methodology for the efficient and rigorous photogrammetric calibration of digital cameras which no longer requires the use of targets. A set of images depicting a scene with good texture is sufficient for the extraction of natural corresponding image points. These are automatically matched with feature-based approaches and robust estimation techniques. The successive photogrammetric bundle adjustment retrieves the unknown camera parameters and their theoretical accuracies. Examples, considerations and comparisons with real data and different case studies are illustrated to show the potentialities of the proposed methodology.

  10. Discrete diffusion Monte Carlo for frequency-dependent radiative transfer

    Energy Technology Data Exchange (ETDEWEB)

    Densmore, Jeffrey D [Los Alamos National Laboratory]; Thompson, Kelly G [Los Alamos National Laboratory]; Urbatsch, Todd J [Los Alamos National Laboratory]

    2010-11-17

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique.

  11. Experiments and Monte Carlo modeling of a higher resolution Cadmium Zinc Telluride detector for safeguards applications

    Science.gov (United States)

    Borella, Alessandro

    2016-09-01

    The Belgian Nuclear Research Centre is engaged in R&D activity in the field of Non Destructive Analysis on nuclear materials, with a focus on spent fuel characterization. A 500 mm3 Cadmium Zinc Telluride (CZT) detector with enhanced resolution was recently purchased. With a full width at half maximum of 1.3% at 662 keV, the detector is very promising in view of its use for applications such as the determination of uranium enrichment and plutonium isotopic composition, as well as measurements on spent fuel. In this paper, I report on the characterization of this detector. The detector energy calibration, peak shape and efficiency were determined from experimental data. The data included measurements with calibrated sources, both in a bare and in a shielded environment. In addition, Monte Carlo calculations with the MCNPX code were carried out and benchmarked against experiments.
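
    The energy-calibration and resolution figures above come from standard steps that can be sketched as follows; the channel centroids and peak width in this example are hypothetical stand-ins, not the paper's data.

```python
# Channel-to-energy calibration by linear fit, plus resolution at 662 keV.
import numpy as np

channels = np.array([662.0, 1174.0, 1333.0, 1766.0])  # hypothetical peak centroids
energies = np.array([661.7, 1173.2, 1332.5, 1764.5])  # known gamma lines (keV)
gain, offset = np.polyfit(channels, energies, 1)

fwhm_channels = 8.7                                   # hypothetical fitted peak width
fwhm_kev = fwhm_channels * gain
print(f"E(ch) = {gain:.4f}*ch + {offset:.2f} keV")
print(f"resolution at 662 keV: {100 * fwhm_kev / 661.7:.2f} %")
```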

  12. The influence of the calibration standard and the chemical composition of the water sample residue on the counting efficiency of proportional detectors for gross alpha and beta counting. Application to the radiological control of IPEN-CNEN/SP

    International Nuclear Information System (INIS)

    In this work the efficiency calibration curves of thin-window, low-background gas-flow proportional counters were determined for calibration standards with different energies and different absorber thicknesses. For the gross alpha counting we used 241Am and natural uranium standards, and for the gross beta counting we used 90Sr/90Y and 137Cs standards, with residue thicknesses ranging from 0 to approximately 18 mg/cm2. These sample thicknesses were built up with a previously characterized salt solution prepared to simulate the chemical composition of the underground water of IPEN. The counting efficiency for alpha emitters ranged from 0.273 ± 0.038 for a weightless residue to only 0.015 ± 0.002 in a planchet containing 15 mg/cm2 of residue for the 241Am standard. For the natural uranium standard the efficiency ranged from 0.322 ± 0.030 for a weightless residue to 0.023 ± 0.003 in a planchet containing 14.5 mg/cm2 of residue. The counting efficiency for beta emitters ranged from 0.430 ± 0.036 for a weightless residue to 0.247 ± 0.020 in a planchet containing 17 mg/cm2 of residue for the 137Cs standard. For the 90Sr/90Y standard the efficiency ranged from 0.489 ± 0.041 for a weightless residue to 0.323 ± 0.026 in a planchet containing 18 mg/cm2 of residue. The results make evident the variation of the counting efficiency with the energy of the alpha or beta emitters and with the thickness of the water sample residue. Thus, the calibration standard, the thickness and the chemical composition of the residue must always be considered in gross alpha and beta radioactivity determinations in water samples. (author)
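
    A minimal sketch of how such efficiency-versus-thickness data become a usable calibration curve, assuming a simple exponential self-absorption model and using only the two 241Am points quoted above (the intermediate thicknesses are not given in the abstract).

```python
# Fit eff(t) = e0 * exp(-k*t) to the quoted 241Am efficiency endpoints.
import numpy as np
from scipy.optimize import curve_fit

thickness = np.array([0.0, 15.0])       # residue thickness (mg/cm^2)
efficiency = np.array([0.273, 0.015])   # from the abstract

def model(t, e0, k):
    return e0 * np.exp(-k * t)

(e0, k), _ = curve_fit(model, thickness, efficiency, p0=(0.3, 0.1))
print(f"eff(t) ~ {e0:.3f} * exp(-{k:.3f} t); eff(5 mg/cm2) ~ {model(5.0, e0, k):.3f}")
```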

  13. Efficiency

    NARCIS (Netherlands)

    I.P. van Staveren (Irene)

    2009-01-01

    The dominant economic theory, neoclassical economics, employs a single economic evaluative criterion: efficiency. Moreover, it assigns this criterion a very specific meaning. Other – heterodox – schools of thought in economics tend to use more open concepts of efficiency, related to comm...

  14. Radium needle used to calibrate germanium gamma-ray detector.

    Science.gov (United States)

    Kamboj, S; Lovett, D; Kahn, B; Walker, D

    1993-03-01

    A standard platinum-iridium needle that contains 374 MBq 226Ra was tested as a source for calibrating a portable germanium detector used with a gamma-ray spectrometer for environmental radioactivity measurements. The counting efficiencies of the 11 most intense gamma rays emitted by 226Ra and its short-lived radioactive progeny at energies between 186 and 2,448 keV were determined, at the full energy peaks, to construct a curve of counting efficiency vs. energy. The curve was compared to another curve between 43 and 1,596 keV obtained with a NIST mixed-radionuclide standard. It was also compared to the results of a Monte Carlo simulation. The 226Ra source results were consistent with the NIST standard between 248 and 1,596 keV. The Monte Carlo simulation gave a curve parallel to the curve for the combined radium and NIST standard data between 250 and 2,000 keV, but at higher efficiency.
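
    A sketch of constructing the counting-efficiency versus energy curve from full-energy-peak data, using the common log-log polynomial form; the energies below are 226Ra-chain lines, but the efficiency values are hypothetical, not the paper's measurements.

```python
# Log-log polynomial efficiency curve from full-energy-peak efficiencies.
import numpy as np

energy_kev = np.array([186.2, 295.2, 351.9, 609.3, 1120.3, 1764.5, 2447.9])
efficiency = np.array([8.1e-3, 6.2e-3, 5.5e-3, 3.6e-3, 2.2e-3, 1.5e-3, 1.2e-3])

# ln(eff) as a polynomial in ln(E): a standard germanium calibration form.
coeffs = np.polyfit(np.log(energy_kev), np.log(efficiency), deg=3)

def eff(e_kev):
    return np.exp(np.polyval(coeffs, np.log(e_kev)))

print(f"interpolated efficiency at 662 keV: {eff(662.0):.2e}")
```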

  15. Design of a neutron source for calibration

    International Nuclear Information System (INIS)

    The neutron spectra produced by an isotopic neutron source located at the center of moderating media were calculated using the Monte Carlo method, with the aim of designing a neutron source for calibration purposes. To improve the evaluation of dosimetric quantities, it is recommended to calibrate radiation protection devices with calibrated neutron sources whose neutron spectra are similar to those met in practice. Here, a 239Pu-Be neutron source was inserted in H2O, D2O and polyethylene cylindrical moderators in order to produce neutron spectra that resemble the spectra found in workplaces

  16. Radio Interferometric Calibration Using a Riemannian Manifold

    CERN Document Server

    Yatawatta, Sarod

    2013-01-01

    In order to cope with the increased data volumes generated by modern radio interferometers such as LOFAR (Low Frequency Array) or SKA (Square Kilometre Array), fast and efficient calibration algorithms are essential. Traditional radio interferometric calibration is performed using nonlinear optimization techniques such as the Levenberg-Marquardt algorithm in Euclidean space. In this paper, we reformulate radio interferometric calibration as a nonlinear optimization problem on a Riemannian manifold. The reformulated calibration problem is solved using the Riemannian trust-region method. We show that calibration on a Riemannian manifold has faster convergence with reduced computational cost compared to conventional calibration in Euclidean space.
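
    The conventional Euclidean approach that the paper improves upon can be sketched as a complex-gain least-squares problem solved with a Levenberg-Marquardt backend; the array size, model visibilities and noise level below are illustrative, and the global phase is fixed by construction.

```python
# Toy gain calibration: solve V_ij = g_i g_j* M_ij by Levenberg-Marquardt.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
N = 8                                    # antennas
g_true = rng.normal(1, 0.1, N) + 1j * rng.normal(0, 0.1, N)
M = np.ones((N, N), dtype=complex)       # hypothetical unit model visibilities
V = np.outer(g_true, g_true.conj()) * M
V += 0.01 * (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))

def residuals(x):
    # Fix imag(g_0) = 0 to remove the global phase degeneracy.
    g = x[:N] + 1j * np.concatenate([[0.0], x[N:]])
    R = V - np.outer(g, g.conj()) * M
    iu = np.triu_indices(N, 1)           # use each baseline once
    return np.concatenate([R[iu].real, R[iu].imag])

x0 = np.concatenate([np.ones(N), np.zeros(N - 1)])
sol = least_squares(residuals, x0, method="lm")
print("converged:", sol.success, "final cost:", sol.cost)
```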

  17. Exploring Monte Carlo methods

    CERN Document Server

    Dunn, William L

    2012-01-01

    Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle proble
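
    The Buffon's needle problem mentioned in the abstract is easy to sketch as a Monte Carlo estimator of π (needle length and line spacing below are arbitrary choices with L <= D).

```python
# Buffon's needle: crossing probability is 2L/(pi*D) for needle length L <= D.
import numpy as np

rng = np.random.default_rng(7)
L, D, N = 1.0, 1.0, 10**6
y = rng.uniform(0, D / 2, N)            # distance of needle centre to nearest line
theta = rng.uniform(0, np.pi / 2, N)    # needle angle against the lines
crossings = np.sum(y <= (L / 2) * np.sin(theta))
pi_est = 2 * L * N / (D * crossings)
print(f"pi estimate from {N} needle drops: {pi_est:.4f}")
```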

  18. The ATLAS Electromagnetic Calorimeter Calibration Workshop

    CERN Multimedia

    Hong Ma; Isabelle Wingerter

    The ATLAS Electromagnetic Calorimeter Calibration Workshop took place at LAPP-Annecy from the 1st to the 3rd of October; 45 people attended. A detailed program was set up before the workshop. The agenda was organised around very focused presentations where questions were raised to allow arguments to be exchanged and answers to be proposed. The main topics were: electronics calibration; handling of problematic channels; cluster-level corrections for electrons and photons; absolute energy scale; streams for calibration samples; calibration constants processing; learning from commissioning. The workshop was on the whole lively and fruitful. Based on years of experience with test beam analysis and Monte Carlo simulation, and the recent operation of the detector in the commissioning, the methods to calibrate the electromagnetic calorimeter are well known. Some of the procedures are being exercised in the commissioning, which have demonstrated the c...

  19. Tau reconstruction, energy calibration and identification at ATLAS

    Indian Academy of Sciences (India)

    Michel Trottier-McDonald; on behalf of the ATLAS Collaboration

    2012-11-01

    Tau leptons play a central role in the LHC physics programme, in particular as an important signature in many Higgs boson and supersymmetry searches. They are further used in Standard Model electroweak measurements, as well as detector-related studies like the determination of the missing transverse energy scale. Copious backgrounds from QCD processes call for both efficient identification of hadronically decaying tau leptons and large suppression of fake candidates. A solid understanding of the combined performance of the calorimeter and tracking detectors is also required. We present the current status of the tau reconstruction, energy calibration and identification with the ATLAS detector at the LHC. Identification efficiencies are measured in Z → ττ events in data and compared with predictions from Monte Carlo simulations, whereas the misidentification probabilities of QCD jets and electrons are determined from various jet-enriched data samples and from Z → ee events, respectively. The tau energy scale calibration is described, and systematic uncertainties on both the energy scale and the identification efficiencies are discussed.

  20. SAN CARLOS APACHE PAPERS.

    Science.gov (United States)

    Roessel, Robert A., Jr.

    The first section of this book covers the historical and cultural background of the San Carlos Apache Indians, as well as an historical sketch of the development of their formal educational system. The second section is devoted to the problems of teachers of the Indian children in Globe and San Carlos, Arizona. It is divided into three parts--(1)…

  1. San Carlo Operaen

    DEFF Research Database (Denmark)

    Holm, Bent

    2005-01-01

    A contextualization of the opera house San Carlo within a cultural-historical context of representation, with particular focus on the concept of napolalità.

  2. Antenna Calibration and Measurement Equipment

    Science.gov (United States)

    Rochblatt, David J.; Cortes, Manuel Vazquez

    2012-01-01

    A document describes the Antenna Calibration & Measurement Equipment (ACME) system that will provide the Deep Space Network (DSN) with instrumentation enabling a trained RF engineer at each complex to perform antenna calibration measurements and to generate antenna calibration data. These data include continuous-scan auto-bore-based acquisition with all-sky data gathering in support of 4th-order pointing model generation requirements. Other data include antenna subreflector focus, system noise temperature and tipping curves, antenna efficiency, system linearity reports, and instrument calibration. The ACME system design is based on the on-the-fly (OTF) mapping technique and architecture. ACME has contributed to improving the RF performance of the DSN by approximately a factor of two. It improved the pointing performance of the DSN antennas and the productivity of its personnel and calibration engineers.

  3. Smart detectors for Monte Carlo radiative transfer

    CERN Document Server

    Baes, Maarten

    2008-01-01

    Many optimization techniques have been invented to reduce the noise that is inherent in Monte Carlo radiative transfer simulations. As the typical detectors used in Monte Carlo simulations do not take into account all the information contained in the impacting photon packages, there is still room to optimize this detection process and the corresponding estimate of the surface brightness distributions. We want to investigate how all the information contained in the distribution of impacting photon packages can be optimally used to decrease the noise in the surface brightness distributions and hence to increase the efficiency of Monte Carlo radiative transfer simulations. We demonstrate that the estimate of the surface brightness distribution in a Monte Carlo radiative transfer simulation is similar to the estimate of the density distribution in an SPH simulation. Based on this similarity, a recipe is constructed for smart detectors that take full advantage of the exact location of the impact of the photon pack...

  4. Monte Carlo simulation of the standardization of {sup 22}Na using scintillation detector arrays

    Energy Technology Data Exchange (ETDEWEB)

    Sato, Y., E-mail: yss.sato@aist.go.j [National Metrology Institute of Japan, National Institute of Advanced Industrial Science and Technology, Quantum Radiation Division, Radioactivity and Neutron Section, Tsukuba Central 2, 1-1-1 Umezono, Tsukuba, Ibaraki 305-8568 (Japan); Murayama, H. [National Institute of Radiological Sciences, 4-9-1, Anagawa, Inage, Chiba 263-8555 (Japan); Yamada, T. [Japan Radioisotope Association, 2-28-45, Hon-komagome, Bunkyo, Tokyo 113-8941 (Japan); National Metrology Institute of Japan, National Institute of Advanced Industrial Science and Technology, Quantum Radiation Division, Radioactivity and Neutron Section, Tsukuba Central 2, 1-1-1 Umezono, Tsukuba, Ibaraki 305-8568 (Japan); Tohoku University, 6-6, Aoba, Aramaki, Aoba, Sendai 980-8579 (Japan); Hasegawa, T. [Kitasato University, 1-15-1, Kitasato, Sagamihara, Kanagawa 228-8555 (Japan); Oda, K. [Tokyo Metropolitan Institute of Gerontology, 1-1 Nakacho, Itabashi-ku, Tokyo 173-0022 (Japan); Unno, Y.; Yunoki, A. [National Metrology Institute of Japan, National Institute of Advanced Industrial Science and Technology, Quantum Radiation Division, Radioactivity and Neutron Section, Tsukuba Central 2, 1-1-1 Umezono, Tsukuba, Ibaraki 305-8568 (Japan)

    2010-07-15

    In order to calibrate PET devices with a sealed point source, we devised an absolute activity measurement method for the sealed point source using scintillation detector arrays. This new method was verified by EGS5 Monte Carlo simulation.

  5. MontePython: Implementing Quantum Monte Carlo using Python

    OpenAIRE

    J.K. Nilsen

    2006-01-01

    We present a cross-language C++/Python program for simulations of quantum mechanical systems with the use of Quantum Monte Carlo (QMC) methods. We describe a system to which to apply QMC, the algorithms of variational Monte Carlo and diffusion Monte Carlo, and how to implement these methods in pure C++ and in C++/Python. Furthermore, we check the efficiency of the implementations in serial and parallel cases to show that the overhead of using Python can be negligible.
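
    In the spirit of the paper (though not its code), a minimal variational Monte Carlo loop for a 1D harmonic oscillator with trial wavefunction psi(x) = exp(-a x^2); the analytic local energy makes the exact minimum <E> = 0.5 at a = 0.5 easy to verify.

```python
# Minimal variational Monte Carlo: Metropolis sampling of |psi|^2.
import numpy as np

def local_energy(x, a):
    # E_L = a + x^2 (1/2 - 2 a^2) for psi = exp(-a x^2) and V = x^2/2.
    return a + x * x * (0.5 - 2.0 * a * a)

def vmc(a, n_steps=200_000, step=1.0, seed=3):
    rng = np.random.default_rng(seed)
    x, e_sum, n_acc = 0.0, 0.0, 0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # Metropolis acceptance with ratio |psi(x_new)/psi(x)|^2.
        if rng.random() < np.exp(-2.0 * a * (x_new**2 - x**2)):
            x, n_acc = x_new, n_acc + 1
        e_sum += local_energy(x, a)
    return e_sum / n_steps, n_acc / n_steps

for a in (0.4, 0.5, 0.6):
    e, acc = vmc(a)
    print(f"a={a}: <E> ~ {e:.4f} (exact minimum 0.5 at a=0.5), acceptance {acc:.2f}")
```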

  6. Accounting for Calibration Uncertainties in X-ray Analysis: Effective Areas in Spectral Fitting

    CERN Document Server

    Lee, Hyunsook; van Dyk, David A; Connors, Alanna; Drake, Jeremy J; Izem, Rima; Meng, Xiao-Li; Min, Shandong; Park, Taeyoung; Ratzlaff, Pete; Siemiginowska, Aneta; Zezas, Andreas

    2011-01-01

    While considerable advances have been made in accounting for statistical uncertainties in astronomical analyses, systematic instrumental uncertainties have been generally ignored. This can be crucial to a proper interpretation of analysis results because instrumental calibration uncertainty is a form of systematic uncertainty. Ignoring it can underestimate error bars and introduce bias into the fitted values of model parameters. Accounting for such uncertainties currently requires extensive case-specific simulations if using existing analysis packages. Here we present general statistical methods that incorporate calibration uncertainties into spectral analysis of high-energy data. We first present a method based on multiple imputation that can be applied with any fitting method, but is necessarily approximate. We then describe a more exact Bayesian approach that works in conjunction with a Markov chain Monte Carlo based fitting. We explore methods for improving computational efficiency, and in particular detail a ...
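
    The multiple-imputation route can be sketched with Rubin's combining rules: refit the spectrum under M plausible calibration products, then pool the results. The per-imputation fits below are hypothetical stand-ins for refits under sampled effective-area curves.

```python
# Rubin's rules for pooling fits over multiply-imputed calibration products.
import numpy as np

def combine_imputations(estimates, variances):
    """Pool point estimates and within/between variances (Rubin's rules)."""
    estimates = np.asarray(estimates)
    variances = np.asarray(variances)
    m = len(estimates)
    qbar = estimates.mean()
    within = variances.mean()
    between = estimates.var(ddof=1)
    total_var = within + (1 + 1 / m) * between
    return qbar, np.sqrt(total_var)

# Hypothetical photon-index fits under 10 sampled effective-area curves.
fits = [(2.01, 0.05**2), (1.97, 0.05**2), (2.08, 0.06**2), (2.03, 0.05**2),
        (1.95, 0.05**2), (2.06, 0.05**2), (2.00, 0.06**2), (2.04, 0.05**2),
        (1.99, 0.05**2), (2.02, 0.05**2)]
gamma, sigma = combine_imputations([f[0] for f in fits], [f[1] for f in fits])
print(f"pooled photon index: {gamma:.3f} +/- {sigma:.3f}")
```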

  7. Calibration of the whole body counter at PSI

    International Nuclear Information System (INIS)

    At the Paul Scherrer Institut (PSI), measurements with the whole body counter are routinely carried out for occupationally exposed persons and occasionally for individuals of the population suspected of radioactive intake. In total about 400 measurements are performed per year. The whole body counter is based on a p-type high purity germanium (HPGe) coaxial detector mounted above a canvas chair in a shielded small room. The detector is used to detect the presence of radionuclides that emit photons with energies between 50 keV and 2 MeV. The room itself is made of iron from old railway rails to reduce the natural background radiation to 24 nSv/h. The present paper describes the calibration of the system with the IGOR phantom. Different body sizes are realized by different standardized configurations of polyethylene bricks, in which small tubes of calibration sources can be introduced. The efficiency of the detector was determined for four phantom geometries (P1, P2, P4 and P6), simulating human bodies in a sitting position of 12 kg, 24 kg, 70 kg and 110 kg, respectively. The measurements were performed serially using five different radionuclide sources (40K, 60Co, 133Ba, 137Cs, 152Eu) within the phantom bricks. Based on the results of the experiment, an efficiency curve for each configuration and the detection limits for relevant radionuclides were determined. For routine measurements, the efficiency curve obtained with the phantom geometry P4 was chosen. The detection limits range from 40 Bq to 1000 Bq for selected radionuclides applying a measurement time of 7 min. The proper calibration of the system, on one hand, is essential for the routine measurements at PSI. On the other hand, it serves as a benchmark for the already initiated characterisation of the system with Monte Carlo simulations. (author)
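
    The detection-limit step can be sketched with Currie's formula, which converts a background count into a minimum detectable activity given the efficiency, gamma yield and counting time; the numbers below are illustrative, not the PSI values.

```python
# Currie detection limit -> minimum detectable activity (MDA).
import numpy as np

def mda_bq(background_counts, efficiency, gamma_yield, live_time_s):
    # Currie (95%/95%): L_D = 2.71 + 4.65 * sqrt(B) counts.
    ld_counts = 2.71 + 4.65 * np.sqrt(background_counts)
    return ld_counts / (efficiency * gamma_yield * live_time_s)

# Hypothetical 137Cs case: 7 min count, 0.3% peak efficiency, 85% yield.
mda = mda_bq(background_counts=120, efficiency=0.003, gamma_yield=0.85, live_time_s=420)
print(f"MDA ~ {mda:.0f} Bq")
```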

  8. Monte Carlo Radiative Transfer

    CERN Document Server

    Whitney, Barbara A

    2011-01-01

    I outline methods for calculating the solution of Monte Carlo Radiative Transfer (MCRT) in scattering, absorption and emission processes of dust and gas, including polarization. I provide a bibliography of relevant papers on methods with astrophysical applications.

  9. Monte Carlo transition probabilities

    OpenAIRE

    Lucy, L. B.

    2001-01-01

    Transition probabilities governing the interaction of energy packets and matter are derived that allow Monte Carlo NLTE transfer codes to be constructed without simplifying the treatment of line formation. These probabilities are such that the Monte Carlo calculation asymptotically recovers the local emissivity of a gas in statistical equilibrium. Numerical experiments with one-point statistical equilibrium problems for Fe II and Hydrogen confirm this asymptotic behaviour. In addition, the re...

  10. Calibration of the RSS-131 high efficiency ionization chamber for radiation dose monitoring during plasma experiments conducted on plasma focus device

    Science.gov (United States)

    Szewczak, Kamil; Jednoróg, Sławomir

    2014-10-01

    Plasma research poses a radiation hazard. Within the programme of deuterium plasma research using the PF-1000 device, the device is an intense source of neutrons (up to 10^11 n·pulse^-1) with an energy of 2.45 MeV, and of ionizing electromagnetic radiation with a broad energy spectrum. Both types of radiation are mostly emitted in ultra-short pulses (~100 ns). The aim of this work was to test and calibrate the RSS-131 radiometer for its application in measurements of ultra-short electromagnetic radiation pulses with a broad energy spectrum emitted during PF-1000 discharges. In addition, the results of raw measurements performed in the control room are presented.

  11. Advances in Monte Carlo computer simulation

    Science.gov (United States)

    Swendsen, Robert H.

    2011-03-01

    Since the invention of the Metropolis method in 1953, Monte Carlo methods have been shown to provide an efficient, practical approach to the calculation of physical properties in a wide variety of systems. In this talk, I will discuss some of the advances in the MC simulation of thermodynamic systems, with an emphasis on optimization to obtain a maximum of useful information.

  12. Variance and efficiency of contribution Monte Carlo

    International Nuclear Information System (INIS)

    The game of contribution is compared with the game of splitting in radiation transport using numerical results obtained by solving the set of coupled integral equations for first and second moments around the score. The splitting game is found superior. (author)

  13. Spectrometric methods used in the calibration of radiodiagnostic measuring instruments

    Energy Technology Data Exchange (ETDEWEB)

    De Vries, W. [Rijksuniversiteit Utrecht (Netherlands)

    1995-12-01

    Recently a set of parameters for checking the quality of radiation for use in diagnostic radiology was established at the calibration facility of the Nederlands Meetinstituut (NMi). The establishment of the radiation qualities required re-evaluation of the correction factors for the primary air-kerma standards. Free-air ionisation chambers require several correction factors to measure air kerma according to its definition. These correction factors were calculated for the NMi free-air chamber by Monte Carlo simulations for monoenergetic photons in the energy range from 10 keV to 320 keV. The actual correction factors follow from weighting these monoenergetic correction factors with the air-kerma spectrum of the photon beam. This paper describes the determination of the photon spectra of the X-ray qualities used for the calibration of dosimetric instruments used in diagnostic radiology. The detector used for these measurements is a planar HPGe detector, placed in the direct beam of the X-ray machine. To convert the measured pulse-height spectrum to the actual photon spectrum, corrections must be made for fluorescent photon escape, single and multiple Compton scattering inside the detector, and detector efficiency. From the calculated photon spectra a number of parameters of the X-ray beam can be derived. The calculated first and second half-value layers in aluminium and copper are compared with the measured values of these parameters to validate the method of spectrum reconstruction. Moreover, the spectrum measurements offer the possibility to calibrate the X-ray generator in terms of maximum high voltage: the maximum photon energy in the spectrum is used as a standard for the calibration of kVp meters.
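
    Deriving a half-value layer from a reconstructed spectrum can be sketched as follows; the spectrum and the rough attenuation coefficients below are hypothetical stand-ins for measured data.

```python
# First HVL: attenuate the spectrum through Al and find the kerma-halving thickness.
import numpy as np
from scipy.optimize import brentq

e_kev = np.array([20.0, 30.0, 40.0, 50.0, 60.0, 70.0])
fluence = np.array([0.2, 1.0, 0.8, 0.5, 0.3, 0.1])         # hypothetical spectrum
mu_al = np.array([3.44, 1.13, 0.57, 0.37, 0.28, 0.23])     # approx. Al mu (1/cm)
muen_air = np.array([0.78, 0.15, 0.07, 0.04, 0.03, 0.03])  # approx. air mu_en/rho

def kerma(t_cm):
    # Air kerma ~ sum of fluence * E * (mu_en/rho)_air after Al filtration.
    return np.sum(fluence * np.exp(-mu_al * t_cm) * e_kev * muen_air)

hvl = brentq(lambda t: kerma(t) - 0.5 * kerma(0.0), 0.0, 5.0)
print(f"first HVL ~ {hvl:.2f} cm Al (illustrative data)")
```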

  14. Geometric calibration for a SPECT system dedicated to breast imaging

    Institute of Scientific and Technical Information of China (English)

    WU Li-Wei; WEI Long; CAO Xue-Xiang; WANG Lu; HUANG Xian-Chao; CHAI Pei; YUN Ming-Kai; ZHANG Yu-Bao; ZHANG Long; SHAN Bao-Ci

    2012-01-01

    Geometric calibration is critical to accurate SPECT reconstruction. In this paper, a geometric calibration method was developed for a dedicated breast SPECT system with a tilted parallel beam (TPB) orbit. The acquisition geometry of the breast SPECT system was first characterized. Its projection model was then established based on the acquisition geometry. Finally, the calibration results were obtained using a nonlinear optimization method that fitted the measured projections to the model. Monte Carlo data of the breast SPECT system were used to verify the calibration method. Simulation results showed that geometric parameters with reasonable accuracy could be obtained by the proposed method.
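
    A toy version of the fit-projections-to-model step, under the simplifying assumption of an ideal parallel-beam orbit where a point source traces a sinusoid on the detector; the source position and offset below are invented.

```python
# Fit a point-source sinogram to a parallel-beam projection model.
import numpy as np
from scipy.optimize import least_squares

theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
rng = np.random.default_rng(5)
u_meas = 12.0 * np.cos(theta) - 7.5 * np.sin(theta) + 1.3 \
         + rng.normal(0, 0.05, theta.size)

def resid(p):
    # Model: u(theta) = x0*cos(theta) + y0*sin(theta) + detector offset u0.
    x0, y0, u0 = p
    return x0 * np.cos(theta) + y0 * np.sin(theta) + u0 - u_meas

fit = least_squares(resid, x0=[0.0, 0.0, 0.0])
print("estimated (x0, y0, detector offset):", np.round(fit.x, 3))
```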

  15. Traceable Pyrgeometer Calibrations

    Energy Technology Data Exchange (ETDEWEB)

    Dooraghi, Mike; Kutchenreiter, Mark; Reda, Ibrahim; Habte, Aron; Sengupta, Manajit; Andreas, Afshin; Newman, Martina

    2016-05-02

    This poster presents the development, implementation, and operation of the Broadband Outdoor Radiometer Calibrations (BORCAL) Longwave (LW) system at the Southern Great Plains Radiometric Calibration Facility for the calibration of pyrgeometers that provide traceability to the World Infrared Standard Group.

  16. CERN honours Carlo Rubbia

    CERN Document Server

    2009-01-01

    Carlo Rubbia turned 75 on March 31, and CERN held a symposium to mark his birthday and pay tribute to his impressive contribution to both CERN and science. [Photo: Carlo Rubbia, 4th from right, together with the speakers at the symposium.] On 7 April CERN hosted a celebration marking Carlo Rubbia’s 75th birthday and 25 years since he was awarded the Nobel Prize for Physics. "Today we will celebrate 100 years of Carlo Rubbia" joked CERN’s Director-General, Rolf Heuer in his opening speech, "75 years of his age and 25 years of the Nobel Prize." Rubbia received the Nobel Prize along with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. During the symposium, which was held in the Main Auditorium, several eminent speakers gave lectures on areas of science to which Carlo Rubbia made decisive contributions. Among those who spoke were Michel Spiro, Director of the French National Insti...

  17. Calibration of the Cherenkov Telescope Array

    CERN Document Server

    Gaug, Markus; Berge, David; Reyes, Raquel de los; Doro, Michele; Foerster, Andreas; Maccarone, Maria Concetta; Parsons, Dan; van Eldik, Christopher

    2015-01-01

    The construction of the Cherenkov Telescope Array is expected to start soon. We will present the baseline methods and their extensions currently foreseen to calibrate the observatory. These are bound to achieve the strong requirements on allowed systematic uncertainties for the reconstructed gamma-ray energy and flux scales, as well as on the pointing resolution, and on the overall duty cycle of the observatory. Onsite calibration activities are designed to include a robust and efficient calibration of the telescope cameras, and various methods and instruments to achieve calibration of the overall optical throughput of each telescope, leading to both inter-telescope calibration and an absolute calibration of the entire observatory. One important aspect of the onsite calibration is a correct understanding of the atmosphere above the telescopes, which constitutes the calorimeter of this detection technique. It is planned to be constantly monitored with state-of-the-art instruments to obtain a full molecular and...

  18. Research of Camera Calibration Based on DSP

    Directory of Open Access Journals (Sweden)

    Zheng Zhang

    2013-09-01

    To take advantage of the high efficiency and stability of DSP in data processing and of the functions of the OpenCV library, this study puts forward a scheme for camera calibration in a DSP embedded system. An algorithm for camera calibration based on OpenCV is designed by analyzing the camera model and lens distortion. The port of EMCV to DSP is completed, and the calibration algorithm is migrated and optimized based on the CCS development environment and the DSP/BIOS system. While realizing the calibration function, this algorithm improves the efficiency of program execution and the precision of calibration, and lays the foundation for further research on DSP-based embedded visual localization.

  19. Monte carlo simulations of organic photovoltaics.

    Science.gov (United States)

    Groves, Chris; Greenham, Neil C

    2014-01-01

    Monte Carlo simulations are a valuable tool to model the generation, separation, and collection of charges in organic photovoltaics where charges move by hopping in a complex nanostructure and Coulomb interactions between charge carriers are important. We review the Monte Carlo techniques that have been applied to this problem, and describe the results of simulations of the various recombination processes that limit device performance. We show how these processes are influenced by the local physical and energetic structure of the material, providing information that is useful for design of efficient photovoltaic systems.

  20. New radiation protection calibration facility at CERN.

    Science.gov (United States)

    Brugger, Markus; Carbonez, Pierre; Pozzi, Fabio; Silari, Marco; Vincke, Helmut

    2014-10-01

    The CERN radiation protection group has designed a new state-of-the-art calibration laboratory to replace the present facility, which is >20 y old. The new laboratory, presently under construction, will be equipped with neutron and gamma sources, as well as an X-ray generator and a beta irradiator. The present work describes the project to design the facility, including the facility placement criteria, the 'point-zero' measurements and the shielding study performed via FLUKA Monte Carlo simulations.

  1. Preliminary evaluation of a Neutron Calibration Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Alvarenga, Talysson S.; Neves, Lucio P.; Perini, Ana P.; Sanches, Matias P.; Mitake, Malvina B.; Caldas, Linda V.E., E-mail: talvarenga@ipen.br, E-mail: lpneves@ipen.br, E-mail: aperini@ipen.br, E-mail: msanches@ipen.br, E-mail: mbmitake@ipen.br, E-mail: lcaldas@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Federico, Claudio A., E-mail: claudiofederico@ieav.cta.br [Instituto de Estudos Avancados (IEAv/DCTA), Sao Jose dos Campos, SP (Brazil). Dept. de Ciencia e Tecnologia Aeroespacial

    2013-07-01

    In the past few years, Brazil and several other countries in Latin America have experienced a great demand for the calibration of neutron detectors, mainly due to the increase in oil prospecting and extraction. The only laboratory for the calibration of neutron detectors in Brazil is located at the Institute for Radioprotection and Dosimetry (IRD/CNEN), Rio de Janeiro, which is part of the IAEA SSDL network and is the national standard laboratory in Brazil. With the increase in demand for the calibration of neutron detectors, additional calibration services are needed. In this context, the Calibration Laboratory of IPEN/CNEN, Sao Paulo, which already offers calibration services for radiation detectors with standard X, gamma, beta and alpha beams, has recently designed a new calibration laboratory for neutron detectors. In this work, the ambient dose equivalent rate (H*(10)) was evaluated at several positions inside and around this laboratory, using Monte Carlo simulation (MCNP5 code), in order to verify the adequacy of the shielding. The results showed that the shielding is effective, and that this is a low-cost methodology to improve the safety of the workers and evaluate the total staff workload. (author)

  2. Calibration of a single hexagonal NaI(Tl) detector using a new numerical method based on the efficiency transfer method

    Energy Technology Data Exchange (ETDEWEB)

    Abbas, Mahmoud I., E-mail: mabbas@physicist.net [Physics Department, Faculty of Science, Alexandria University, 21511 Alexandria (Egypt); Badawi, M.S. [Physics Department, Faculty of Science, Alexandria University, 21511 Alexandria (Egypt); Ruskov, I.N. [Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, 1784 Sofia (Bulgaria); El-Khatib, A.M. [Physics Department, Faculty of Science, Alexandria University, 21511 Alexandria (Egypt); Grozdanov, D.N. [Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, 1784 Sofia (Bulgaria); Thabet, A.A. [Department of Medical Equipment Technology, Faculty of Allied Medical Sciences, Pharos University in Alexandria (Egypt); Kopatch, Yu.N. [Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Gouda, M.M. [Physics Department, Faculty of Science, Alexandria University, 21511 Alexandria (Egypt); Skoy, V.R. [Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation)

    2015-01-21

    Gamma-ray detector systems are important instruments in a broad range of sciences, and new setups are continually being developed. The most recent step in the evolution of detectors for nuclear spectroscopy is the construction of large arrays of detectors of different shapes (for example, conical, pentagonal, hexagonal, etc.) and sizes, where the performance and the efficiency can be increased. In this work, a new direct numerical method (NAM), in an integral form and based on the efficiency transfer (ET) method, is used to calculate the full-energy peak efficiency of a single hexagonal NaI(Tl) detector. The core of the ET method is the calculation of the effective solid angle ratios for a point (isotropically emitting) gamma source situated coaxially at different distances from the detector front-end surface, taking into account the attenuation of the gamma rays in the detector material, the end-cap and the other materials between the gamma source and the detector. The full-energy peak efficiency values calculated by the NAM are found to be in good agreement with the measured experimental data.
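
    The bare geometric core of the efficiency-transfer idea can be sketched as a solid-angle ratio; the attenuation terms that the paper's effective solid angles include are omitted here, and all numbers are illustrative.

```python
# Efficiency transfer, geometric core: eff_new = eff_ref * Omega_new / Omega_ref.
import numpy as np

def solid_angle(distance_cm, radius_cm):
    # Point source on the axis of a circular detector face.
    return 2 * np.pi * (1 - distance_cm / np.hypot(distance_cm, radius_cm))

R = 2.54             # hypothetical crystal face radius (cm)
eff_ref = 1.2e-2     # hypothetical reference efficiency measured at 5 cm
eff_new = eff_ref * solid_angle(10.0, R) / solid_angle(5.0, R)
print(f"transferred efficiency at 10 cm: {eff_new:.2e}")
```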

  3. The Virtual Monte Carlo

    CERN Document Server

    Hrivnacova, I; Berejnov, V V; Brun, R; Carminati, F; Fassò, A; Futo, E; Gheata, A; Caballero, I G; Morsch, Andreas

    2003-01-01

    The concept of Virtual Monte Carlo (VMC) has been developed by the ALICE Software Project to allow different Monte Carlo simulation programs to run without changing the user code, such as the geometry definition, the detector response simulation or input and output formats. Recently, the VMC classes have been integrated into the ROOT framework, and the other relevant packages have been separated from the AliRoot framework and can be used individually by any other HEP project. The general concept of the VMC and its set of base classes provided in ROOT will be presented. Existing implementations for Geant3, Geant4 and FLUKA and simple examples of usage will be described.

  4. Calibration method for a in vivo measurement system using mathematical simulation of the radiation source and the detector; Metodo de calibracao de um sistema de medida in vivo atraves da simulacao matematica da fonte de radiacao e do detector

    Energy Technology Data Exchange (ETDEWEB)

    Hunt, John

    1998-12-31

    A Monte Carlo program which uses a voxel phantom has been developed to simulate in vivo measurement systems for calibration purposes. The calibration method presented here employs a mathematical phantom, produced in the form of volume elements (voxels), obtained through Magnetic Resonance Images of the human body. The calibration method uses the Monte Carlo technique to simulate the tissue contamination, the transport of the photons through the tissues and the detection of the radiation. The program simulates the transport and detection of photons between 0.035 and 2 MeV and uses, for the body representation, a voxel phantom with a format of 871 slices each of 277 x 148 picture elements. The Monte Carlo code was applied to the calibration of in vivo systems and to estimate differences in counting efficiencies between homogeneous and non-homogeneous radionuclide distributions in the lung. Calculations show a factor of 20 between deposition of 241Am at the back compared with the front of the lung. The program was also used to estimate the 137Cs body burden of an internally contaminated individual, counted with an 8 x 4 NaI(Tl) detector, and the 241Am body burden of an internally contaminated individual, who was counted using a planar germanium detector. (author) 24 refs., 38 figs., 23 tabs.

  5. Computing Greeks with Multilevel Monte Carlo Methods using Importance Sampling

    OpenAIRE

    Euget, Thomas

    2012-01-01

    This paper presents a new efficient way to reduce the variance of an estimator of popular payoffs and Greeks encountered in financial mathematics. The idea is to apply Importance Sampling with the Multilevel Monte Carlo method recently introduced by M.B. Giles. So far, Importance Sampling has proved successful in combination with the standard Monte Carlo method. We show the efficiency of our approach on the estimation of financial derivative prices and then on the estimation of Greeks (i.e. sensitivitie...

  6. WFC3: UVIS Dark Calibration

    Science.gov (United States)

    Bourque, Matthew; Biretta, John A.; Anderson, Jay; Baggett, Sylvia M.; Gunning, Heather C.; MacKenty, John W.

    2014-06-01

    Wide Field Camera 3 (WFC3), a fourth-generation imaging instrument on board the Hubble Space Telescope (HST), has exhibited excellent performance since its installation during Servicing Mission 4 in May 2009. The UVIS detector, comprised of two e2v CCDs, is one of two channels available on WFC3 and is named for its ultraviolet and visible light sensitivity. We present the various procedures and results of the WFC3/UVIS dark calibration, which monitors the health and stability of the UVIS detector, provides characterization of hot pixels and dark current, and produces calibration files to be used as a correction for dark current in science images. We describe the long-term growth of hot pixels and the impacts that UVIS Charge Transfer Efficiency (CTE) losses, postflashing, and proximity to the readout amplifiers have on the population. We also discuss the evolution of the median dark current, which has been slowly increasing since the start of the mission and is currently ~6 e-/hr/pix, averaged across each chip. We outline the current algorithm for creating UVIS dark calibration files, which includes aggressive cosmic ray masking, image combination, and hot pixel flagging. Calibration products are available to the user community, typically 3-5 days after initial processing, through the Calibration Database System (CDBS). Finally, we discuss various improvements to the calibration and monitoring procedures. UVIS dark monitoring will continue throughout and beyond HST’s current proposal cycle.

  7. Development of Monte Carlo depletion code MCDEP

    Energy Technology Data Exchange (ETDEWEB)

    Kim, K. S.; Kim, K. Y.; Lee, J. C.; Ji, S. K. [KAERI, Taejon (Korea, Republic of)

    2003-07-01

    Monte Carlo neutron transport calculation has been used to obtain reference solutions in reactor physics analysis. The typical and widely used Monte Carlo transport code is MCNP (Monte Carlo N-Particle Transport Code), developed at Los Alamos National Laboratory. The drawbacks of Monte Carlo transport codes are the lack of capabilities for depletion and temperature-dependent calculations. In this research we developed MCDEP (Monte Carlo Depletion Code Package), which adds depletion capability to MCNP. This code package integrates MCNP with the depletion module of ORIGEN-2 using the matrix exponential method. It enables automatic MCNP and depletion calculations with only the initial MCNP and MCDEP inputs prepared by users. Depletion chains were simplified for computing-time efficiency and for the treatment of short-lived nuclides without cross-section data. The results of MCDEP showed that the reactivity and pin power distributions for PWR fuel pins and assemblies are consistent with those of CASMO-3 and HELIOS.
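
    The matrix exponential method named above solves the linear depletion system dN/dt = AN as N(t) = exp(At) N(0). A toy three-nuclide chain with hypothetical decay constants shows the mechanics (the real code couples this to MCNP fluxes and cross sections).

```python
# Matrix exponential solution of a small decay/depletion chain.
import numpy as np
from scipy.linalg import expm

l1, l2 = 1.0e-5, 5.0e-6            # hypothetical decay constants (1/s)
A = np.array([[-l1, 0.0, 0.0],     # nuclide 1 decays to 2, 2 decays to 3
              [ l1, -l2, 0.0],
              [0.0,  l2, 0.0]])
N0 = np.array([1.0e20, 0.0, 0.0])  # initial atom densities

t = 30 * 24 * 3600.0               # one-month burn step (s)
N = expm(A * t) @ N0
print("densities after one step:", N)
```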

  8. Composite biasing in Monte Carlo radiative transfer

    CERN Document Server

    Baes, Maarten; Lunttila, Tuomas; Bianchi, Simone; Camps, Peter; Juvela, Mika; Kuiper, Rolf

    2016-01-01

    Biasing or importance sampling is a powerful technique in Monte Carlo radiative transfer, and can be applied in different forms to increase the accuracy and efficiency of simulations. One of the drawbacks of the use of biasing is the potential introduction of large weight factors. We discuss a general strategy, composite biasing, to suppress the appearance of large weight factors. We use this composite biasing approach for two different problems faced by current state-of-the-art Monte Carlo radiative transfer codes: the generation of photon packages from multiple components, and the penetration of radiation through high optical depth barriers. In both cases, the implementation of the relevant algorithms is trivial and does not interfere with any other optimisation techniques. Through simple test models, we demonstrate the general applicability, accuracy and efficiency of the composite biasing approach. In particular, for the penetration of high optical depths, the gain in efficiency is spectacular for the spe...
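
    The weight-control idea can be sketched in one dimension: sampling from a mixture of the natural density and a bias density bounds the packet weight above by 1/(1-w), unlike plain biasing whose weights can become arbitrarily large. The densities and mixing fraction below are toy choices, not the paper's setup.

```python
# Composite biasing: sample from q = (1-w)*p + w*b, weight = p/q <= 1/(1-w).
import numpy as np

rng = np.random.default_rng(11)
w = 0.3                                      # mixing fraction of the bias term

def p(x):  return np.exp(-x)                 # natural path-length density on [0, inf)
def b(x):  return 0.2 * np.exp(-0.2 * x)     # bias favouring deep penetration

def sample_composite(n):
    from_bias = rng.random(n) < w
    x = np.where(from_bias,
                 rng.exponential(1 / 0.2, n),  # draw from b
                 rng.exponential(1.0, n))      # draw from p
    weight = p(x) / ((1 - w) * p(x) + w * b(x))
    return x, weight

x, wt = sample_composite(100_000)
print(f"max weight {wt.max():.3f} <= bound {1 / (1 - w):.3f}")
# Weighted estimate of the deep-penetration probability P(x > 8):
print(f"P(x>8) ~ {np.mean(wt * (x > 8)):.2e} (exact {np.exp(-8.0):.2e})")
```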

  9. Clinical dosimetry in photon radiotherapy. A Monte Carlo based investigation

    International Nuclear Information System (INIS)

    Practical clinical dosimetry is a fundamental step within the radiation therapy process and aims at quantifying the absorbed radiation dose within a 1-2% uncertainty. To achieve this level of accuracy, corrections are needed for calibrated, air-filled ionization chambers, which are used for dose measurement. The correction procedures are based on the Spencer-Attix cavity theory and are defined in current dosimetry protocols. Energy-dependent corrections for deviations from the calibration beams account for the changed ionization chamber response in the treatment beam. The corrections applied are usually based on semi-analytical models or measurements and are generally hard to determine, since their magnitude is only a few percent or even less. Furthermore, the corrections are defined for fixed geometrical reference conditions and do not apply to non-reference conditions in modern radiotherapy applications. The stochastic Monte Carlo method for the simulation of radiation transport has become a valuable tool in the field of Medical Physics. As a suitable tool for calculating these corrections with high accuracy, the simulations enable the investigation of ionization chambers under various conditions. The aim of this work is the consistent investigation of ionization chamber dosimetry in photon radiation therapy with the use of Monte Carlo methods. Monte Carlo systems now exist which enable, in principle, the accurate calculation of ionization chamber response. Still, their direct use for studies of this type is limited by the long calculation times needed for a meaningful result with a small statistical uncertainty, inherent to every result of a Monte Carlo simulation. Besides heavy use of computer hardware, variance-reduction techniques can be applied to reduce the required calculation time. Methods for increasing the efficiency of the simulations were developed and incorporated into a modern and established Monte Carlo simulation environment

  10. Quantitative modeling of Cerenkov light production efficiency from medical radionuclides.

    Science.gov (United States)

    Beattie, Bradley J; Thorek, Daniel L J; Schmidtlein, Charles R; Pentlow, Keith S; Humm, John L; Hielscher, Andreas H

    2012-01-01

    There has been recent and growing interest in applying Cerenkov radiation (CR) for biological applications. Knowledge of the production efficiency and other characteristics of the CR produced by various radionuclides would help in assessing the feasibility of proposed applications and guide the choice of radionuclides. To generate this information we developed models of CR production efficiency based on the Frank-Tamm equation, and models of CR distribution based on Monte Carlo simulations of photon and β-particle transport. All models were validated against direct measurements using multiple radionuclides and then applied to a number of radionuclides commonly used in biomedical applications. We show that two radionuclides, Ac-225 and In-111, which have been reported to produce CR in water, do not in fact produce CR directly. We also propose a simple means of using this information to calibrate high-sensitivity luminescence imaging systems, and show evidence suggesting that this calibration may be more accurate than methods in routine current use.
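
    A Frank-Tamm photon-yield estimate of the kind such models are built on, assuming a constant refractive index across the chosen optical band; for beta near 1 in water this gives on the order of a couple of hundred photons per cm.

```python
# Frank-Tamm yield: dN/dx = 2*pi*alpha*(1 - 1/(beta^2 n^2))*(1/lam1 - 1/lam2).
import numpy as np

ALPHA = 1.0 / 137.036          # fine-structure constant
N_WATER = 1.33                 # refractive index of water (assumed constant)

def cerenkov_photons_per_cm(beta, lam1_nm=400.0, lam2_nm=700.0, n=N_WATER):
    if beta * n <= 1.0:
        return 0.0             # below threshold: no Cerenkov light
    lam1, lam2 = lam1_nm * 1e-7, lam2_nm * 1e-7   # nm -> cm
    return 2 * np.pi * ALPHA * (1 - 1 / (beta**2 * n**2)) * (1 / lam1 - 1 / lam2)

print(f"beta=0.999: ~{cerenkov_photons_per_cm(0.999):.0f} photons/cm in 400-700 nm")
```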

  11. Carlo Caso (1940 - 2007)

    CERN Multimedia

    Leonardo Rossi

    Carlo Caso (1940 - 2007) Our friend and colleague Carlo Caso passed away on July 7th, after several months of courageous fight against cancer. Carlo spent most of his scientific career at CERN, taking an active part in the experimental programme of the laboratory. His long and fruitful involvement in particle physics started in the sixties, in the Genoa group led by G. Tomasini. He then performed several experiments using the CERN liquid hydrogen bubble chambers -first the 2000HBC and later BEBC- to study various facets of the production and decay of meson and baryon resonances. He later formed his own group and joined the NA27 Collaboration to exploit the EHS Spectrometer with a rapid cycling bubble chamber as vertex detector. Amongst their many achievements, they were the first to measure, with excellent precision, the lifetime of the charmed D mesons. At the start of the LEP era, Carlo and his group moved to the DELPHI experiment, participating in the construction and running of the HPC electromagnetic c...

  12. Fundamentals of Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of Monte Carlo. Welcome to Los Alamos, the birthplace of “Monte Carlo” for computational physics. Stanislaw Ulam, John von Neumann, and Nicholas Metropolis are credited as the founders of modern Monte Carlo methods. The name “Monte Carlo” was chosen in reference to the Monte Carlo Casino in Monaco (purportedly a place where Ulam’s uncle went to gamble). The central idea (for us) – to use computer-generated “random” numbers to determine expected values or estimate equation solutions – has since spread to many fields. "The first thoughts and attempts I made to practice [the Monte Carlo Method] were suggested by a question which occurred to me in 1946 as I was convalescing from an illness and playing solitaires. The question was what are the chances that a Canfield solitaire laid out with 52 cards will come out successfully? After spending a lot of time trying to estimate them by pure combinatorial calculations, I wondered whether a more practical method than “abstract thinking” might not be to lay it out say one hundred times and simply observe and count the number of successful plays... Later [in 1946], I described the idea to John von Neumann, and we began to plan actual calculations." - Stanislaw Ulam.

  13. Construction of Chinese adult male phantom library and its application in the virtual calibration of in vivo measurement

    Science.gov (United States)

    Chen, Yizheng; Qiu, Rui; Li, Chunyan; Wu, Zhen; Li, Junli

    2016-03-01

    In vivo measurement is a main method of internal contamination evaluation, particularly for large numbers of people after a nuclear accident. Before practical application, it is necessary to obtain the counting efficiency of the detector by calibration. Virtual calibration based on Monte Carlo simulation usually uses a reference human computational phantom, and the morphological difference between the monitored individual and the calibrated phantom may bias the counting efficiency. Therefore, a phantom library covering a wide range of heights and total body masses is needed. In this study, a Chinese reference adult male polygon surface (CRAM_S) phantom was constructed based on the CRAM voxel phantom, with the organ models adjusted to match the Chinese reference data. The CRAM_S phantom was then transformed to a sitting posture for convenience in practical monitoring. Referring to the mass and height distribution of the Chinese adult male, a phantom library containing 84 phantoms was constructed by deforming the reference surface phantom. Phantoms in the library have 7 different heights ranging from 155 cm to 185 cm, with 12 phantoms of different total body masses at each height. As an example of application, organ-specific and total counting efficiencies for Ba-133 were calculated using the MCNPX code, with two series of phantoms selected from the library. The influence of morphological variation on the counting efficiency was analyzed. The results show that using only the reference phantom in virtual calibration may lead to an error of 68.9% in the total counting efficiency. This influence can be greatly reduced by using a phantom library with a wide range of masses and heights instead of a single reference phantom.

  14. The calibration system for the GERDA experiment

    International Nuclear Information System (INIS)

    The GERDA experiment uses the neutrinoless double beta decay to probe three fundamental questions in neutrino physics - Are they Dirac or Majorana particles? What is their absolute mass? What is the mass hierarchy of the three generations? In my talk I present the calibration system for the Ge semiconductor diodes enriched in Ge-76. The system is used to set the energy scale and calibrate the pulse shapes which will be used to further reject background events. The lowest possible background is crucial for the whole experiment and therefore the calibration system must not interfere with the data acquisition phase while at the same time operate efficiently during the calibration runs.

  15. Research of Camera Calibration Based on DSP

    OpenAIRE

    Zheng Zhang; Yukun Wan; Lixin Cai

    2013-01-01

    To take advantage of the high efficiency and stability of DSP in data processing and of the functions of the OpenCV library, this study puts forward a scheme for camera calibration in a DSP embedded system. An algorithm for camera calibration based on OpenCV is designed by analyzing the camera model and lens distortion. The port of EMCV to DSP is completed, and the calibration algorithm is migrated and optimized based on the CCS development environment and the ...

  16. Absolute calibration technique for spontaneous fission sources

    International Nuclear Information System (INIS)

    An absolute calibration technique for a spontaneously fissioning nuclide (which involves no arbitrary parameters) allows unique determination of the detector efficiency for that nuclide, hence of the fission source strength

  17. Calibration of Ge gamma-ray spectrometers for complex sample geometries and matrices

    Science.gov (United States)

    Semkow, T. M.; Bradt, C. J.; Beach, S. E.; Haines, D. K.; Khan, A. J.; Bari, A.; Torres, M. A.; Marrantino, J. C.; Syed, U.-F.; Kitto, M. E.; Hoffman, T. J.; Curtis, P.

    2015-11-01

    A comprehensive study of the efficiency calibration and calibration verification of Ge gamma-ray spectrometers was performed using semi-empirical, computational Monte-Carlo (MC), and transfer methods. The aim of this study was to evaluate the accuracy of the quantification of gamma-emitting radionuclides in complex matrices normally encountered in environmental and food samples. A wide range of gamma energies from 59.5 to 1836.0 keV and geometries from a 10-mL jar to a 1.4-L Marinelli beaker were studied on four Ge spectrometers with relative efficiencies between 102% and 140%. Density and coincidence-summing corrections were applied. Innovative techniques were developed for the preparation of artificial complex matrices from materials such as acidified water, polystyrene, ethanol, sugar, and sand, resulting in densities ranging from 0.3655 to 2.164 g cm-3. They were spiked with gamma activity traceable to international standards and used for calibration verifications. A quantitative method of tuning MC calculations to experiment was developed, based on a multidimensional chi-square paraboloid.

  18. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper (Raftery et al., Mon Weather Rev 133:1155-1174, 2005) the authors recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
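
    The EM training step that the paper benchmarks against DREAM can be sketched for Gaussian predictive kernels; the forecasts and observations below are synthetic, and real applications would add the paper's bias-correction step.

```python
# EM estimation of BMA weights and a common predictive variance.
import numpy as np

rng = np.random.default_rng(8)
n, K = 500, 3
f = rng.normal(0, 1, (n, K)) + np.array([0.0, 0.3, -0.2])   # synthetic forecasts
y = 0.7 * f[:, 0] + 0.3 * f[:, 1] + rng.normal(0, 0.5, n)   # synthetic observations

w = np.full(K, 1.0 / K)
var = 1.0
for _ in range(200):
    # E-step: responsibility of each model for each observation.
    dens = np.exp(-0.5 * (y[:, None] - f) ** 2 / var) / np.sqrt(2 * np.pi * var)
    z = w * dens
    z /= z.sum(axis=1, keepdims=True)
    # M-step: update weights and the shared predictive variance.
    w = z.mean(axis=0)
    var = np.sum(z * (y[:, None] - f) ** 2) / n
print("BMA weights:", np.round(w, 3), "sigma:", round(float(np.sqrt(var)), 3))
```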

  19. Monte Carlo and nonlinearities

    CERN Document Server

    Dauchet, Jérémi; Blanco, Stéphane; Caliot, Cyril; Charon, Julien; Coustet, Christophe; Hafi, Mouna El; Eymet, Vincent; Farges, Olivier; Forest, Vincent; Fournier, Richard; Galtier, Mathieu; Gautrais, Jacques; Khuong, Anaïs; Pelissier, Lionel; Piaud, Benjamin; Roger, Maxime; Terrée, Guillaume; Weitz, Sebastian

    2016-01-01

    The Monte Carlo method is widely used to numerically predict systems behaviour. However, its powerful incremental design assumes a strong premise which has severely limited application so far: the estimation process must combine linearly over dimensions. Here we show that this premise can be alleviated by projecting nonlinearities on a polynomial basis and increasing the configuration-space dimension. Considering phytoplankton growth in light-limited environments, radiative transfer in planetary atmospheres, electromagnetic scattering by particles and concentrated-solar-power-plant productions, we prove the real world usability of this advance on four test-cases that were so far regarded as impracticable by Monte Carlo approaches. We also illustrate an outstanding feature of our method when applied to sharp problems with interacting particles: handling rare events is now straightforward. Overall, our extension preserves the features that made the method popular: addressing nonlinearities does not compromise o...

  20. Fundamentals of Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
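
    Two of the outlined topics, estimating π and inverse-transform sampling, in a minimal sketch.

```python
# Monte Carlo pi estimate and inverse-transform sampling of an exponential.
import numpy as np

rng = np.random.default_rng(2016)

# Estimate pi: the fraction of uniform points in the unit square that fall
# inside the quarter circle is pi/4.
pts = rng.random((10**6, 2))
pi_est = 4.0 * np.mean(np.sum(pts**2, axis=1) <= 1.0)
print(f"pi ~ {pi_est:.4f}")

# Inverse transform sampling: for an exponential with rate lam,
# F^-1(u) = -ln(1-u)/lam turns uniform draws into exponential draws.
lam = 2.0
u = rng.random(10**6)
x = -np.log1p(-u) / lam
print(f"sample mean {x.mean():.4f} vs 1/lam = {1 / lam:.4f}")
```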

  1. CERN honours Carlo Rubbia

    CERN Multimedia

    2009-01-01

    On 7 April CERN will be holding a symposium to mark the 75th birthday of Carlo Rubbia, who shared the 1984 Nobel Prize for Physics with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. Following a presentation by Rolf Heuer, lectures will be given by eminent speakers on areas of science to which Carlo Rubbia has made decisive contributions. Michel Spiro, Director of the French National Institute of Nuclear and Particle Physics (IN2P3) of the CNRS, Lyn Evans, sLHC Project Leader, and Alan Astbury of the TRIUMF Laboratory will talk about the physics of the weak interaction and the discovery of the W and Z bosons. Former CERN Director-General Herwig Schopper will lecture on CERN’s accelerators from LEP to the LHC. Giovanni Bignami, former President of the Italian Space Agency and Professor at the IUSS School for Advanced Studies in Pavia will speak about his work with Carlo Rubbia. Finally, Hans Joachim Sch...

  3. Who Writes Carlos Bulosan?

    Directory of Open Access Journals (Sweden)

    Charlie Samuya Veric

    2001-12-01

    Full Text Available The importance of Carlos Bulosan in Filipino and Filipino-American radical history and literature is indisputable. His eminence spans the Pacific, and he is known, diversely, as a radical poet, fictionist, novelist, and labor organizer. Author of the canonical America Is in the Heart, Bulosan is celebrated for chronicling the conditions in America in his time, such as racism and unemployment. In the history of criticism on Bulosan's life and work, however, there is an undeclared general consensus that views Bulosan and his work as coherent permanent texts of radicalism and anti-imperialism. Central to the existence of such a tradition of critical reception are the generations of critics who, in more ways than one, control the discourse on and of Carlos Bulosan. This essay inquires into the sphere of critical reception that orders, for our time and for the time ahead, the reading and interpretation of Bulosan. What eye and seeing, the essay asks, determine the perception of Bulosan as the angel of radicalism? What is obscured in constructing Bulosan as an immutable figure of the political? What light does the reader conceive when the personal is brought into the open and situated against the political? The essay explores the answers to these questions in Bulosan's loving letters to various friends, strangers, and white American women. These interrogations, the essay believes, will ultimately secure the continuing importance of Carlos Bulosan to radical literature and history.

  4. Trinocular Calibration Method Based on Binocular Calibration

    Directory of Open Access Journals (Sweden)

    CAO Dan-Dan

    2012-10-01

    Full Text Available In order to solve the self-occlusion problem in plane-based multi-camera calibration systems and expand the measurement range, a tri-camera vision system based on binocular calibration is proposed. The three cameras are grouped into two pairs, with the camera common to both pairs taken as the reference to build the global coordinate frame. Global calibration is realized by comparing the measured absolute distances with the true absolute distances. The MRE (mean relative error) of the global calibration of the two camera pairs in the experiments can be as low as 0.277% and 0.328%, respectively. Experimental results show that this method is feasible, simple and effective, and has high precision.

  5. Lookahead Strategies for Sequential Monte Carlo

    OpenAIRE

    Lin, Ming; Chen, Rong; Liu, Jun

    2013-01-01

    Based on the principles of importance sampling and resampling, sequential Monte Carlo (SMC) encompasses a large set of powerful techniques dealing with complex stochastic dynamic systems. Many of these systems possess strong memory, with which future information can help sharpen the inference about the current state. By providing theoretical justification of several existing algorithms and introducing several new ones, we study systematically how to construct efficient SMC algorithms to take ...
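
    For reference, a plain bootstrap filter with no lookahead — the baseline that lookahead strategies improve on — can be written in a few lines. The linear-Gaussian model and all parameter values below are illustrative choices of ours, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_filter(ys, n_particles=1000, phi=0.9, q=1.0, r=1.0):
    """Plain SMC for x_t = phi*x_{t-1} + N(0,q), y_t = x_t + N(0,r):
    propagate with the transition prior, reweight by the likelihood,
    resample when the effective sample size degenerates."""
    x = rng.normal(0.0, np.sqrt(q / (1 - phi ** 2)), n_particles)
    w = np.full(n_particles, 1.0 / n_particles)
    means = []
    for y in ys:
        x = phi * x + rng.normal(0.0, np.sqrt(q), n_particles)  # propose
        w *= np.exp(-(y - x) ** 2 / (2 * r))                    # reweight
        w /= w.sum()
        means.append(np.sum(w * x))                             # filtered mean
        if 1.0 / np.sum(w ** 2) < n_particles / 2:              # ESS check
            idx = rng.choice(n_particles, n_particles, p=w)     # resample
            x = x[idx]
            w = np.full(n_particles, 1.0 / n_particles)
    return np.array(means)
```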

  6. Crop physiology calibration in CLM

    Directory of Open Access Journals (Sweden)

    I. Bilionis

    2014-10-01

    Full Text Available Farming occupies an increasing share of terrestrial land, as population grows and agriculture is increasingly used for non-nutritional purposes such as biofuel production. This agricultural expansion exerts an increasing impact on the terrestrial carbon cycle. In order to understand the impact of such processes, the Community Land Model (CLM) has been augmented with a CLM-Crop extension that simulates the development of three crop types: maize, soybean, and spring wheat. The CLM-Crop model is a complex system that relies on a suite of parametric inputs that govern plant growth under a given atmospheric forcing and available resources. CLM-Crop development used measurements of gross primary productivity and net ecosystem exchange from AmeriFlux sites to choose parameter values that optimize crop productivity in the model. In this paper we calibrate these parameters for one crop type, soybean, in order to provide a faithful projection in terms of both plant development and net carbon exchange. Calibration is performed in a Bayesian framework by developing a scalable and adaptive scheme based on sequential Monte Carlo (SMC).

  7. AXAF calibration: the HXDS flow proportional counters

    Science.gov (United States)

    Wargelin, Bradford J.; Kellogg, Edwin M.; McDermott, Walter C.; Evans, Ian N.; Vitek, S. A.

    1997-07-01

    The design, performance, and calibration of the seven flow proportional counters (FPCs) used during AXAF ground calibration are described. Five of the FPCs served as beam normalization detectors (BNDs), and two were used in the telescope focal plane in combination with a set of apertures to measure the point response functions and effective areas of the AXAF mirrors and transmission gratings. The BNDs also provide standards for determining the effective areas of the several telescope/grating/flight-detector combinations. With useful energy resolution and quantum efficiency over the entire 100 eV to 10 keV AXAF energy band, the FPCs provided most of the data acquired during AXAF calibration. Although the principles of proportional counter operation are relatively simple, AXAF's stringent calibration goals require detailed calibration and modeling of such effects as window-support-wire obscuration, window deformation between the support wires, electron diffusion and avalanche processes, gain nonuniformities, and gas pressure and temperature variations. Detector aperture areas and signal-processing deadtime must also be precisely determined, and detector degradation during the many months of AXAF calibration must be prevented. The FPC calibration program is based on measurement of individual components (such as window transmission and aperture size) and the relative quantum efficiencies of complete detector systems, as well as absolute QE calibration of selected detectors at the BESSY synchrotron, an x-ray source of precisely known intensity.

  8. ORNL calibrations facility

    International Nuclear Information System (INIS)

    The ORNL Calibrations Facility is operated by the Instrumentation Group of the Industrial Safety and Applied Health Physics Division. Its primary purpose is to maintain radiation calibration standards for calibration of ORNL health physics instruments and personnel dosimeters. This report includes a discussion of the radioactive sources and ancillary equipment in use and a step-by-step procedure for calibration of those survey instruments and personnel dosimeters in routine use at ORNL

  9. Spiral reader calibration

    International Nuclear Information System (INIS)

    The method used to calibrate the spiral reader (SR) is presented, together with a brief description of the main procedures of the calibration program SCALP, adapted for the IHEP equipment and purposes. The precision characteristics of the IHEP SR have been analysed; the results are presented in the form of diagrams. A calibration manual is provided for the user

  10. PERSONALISED BODY COUNTER CALIBRATION USING ANTHROPOMETRIC PARAMETERS.

    Science.gov (United States)

    Pölz, S; Breustedt, B

    2016-09-01

    Current calibration methods for body counting offer personalisation for lung counting predominantly with respect to ratios of body mass and height. Chest wall thickness is used as an intermediate parameter. This work revises and extends these methods using a series of computational phantoms derived from medical imaging data in combination with radiation transport simulation and statistical analysis. As an example, the method is applied to the calibration of the In Vivo Measurement Laboratory (IVM) at Karlsruhe Institute of Technology (KIT) comprising four high-purity germanium detectors in two partial body measurement set-ups. The Monte Carlo N-Particle (MCNP) transport code and the Extended Cardiac-Torso (XCAT) phantom series have been used. Analysis of the computed sample data consisting of 18 anthropometric parameters and calibration factors generated from 26 photon sources for each of the 30 phantoms reveals the significance of those parameters required for producing an accurate estimate of the calibration function. Body circumferences related to the source location perform best in the example, while parameters related to body mass show comparable but lower performances, and those related to body height and other lengths exhibit low performances. In conclusion, it is possible to give more accurate estimates of calibration factors using this proposed approach including estimates of uncertainties related to interindividual anatomical variation of the target population. PMID:26396263

  11. Detection efficiency simulation and measurement of 6LiI/natLiI scintillation detector

    International Nuclear Information System (INIS)

    Background: Owing to its very high detection efficiency and small size, the lithium iodide (LiI) scintillation detector is used extensively in neutron measurement and environmental monitoring. Purpose: Using a thermal reactor, neutron detectors were tested and calibrated, and a new neutron detector device was designed and studied. Methods: The relationship between the size and detection efficiency of the thermal neutron detector 6LiI/natLiI was studied using the Monte Carlo codes GEANT4 and MCNP5, and the thermal neutron efficiency of the detector was calibrated with reactor neutrons. Results: The theoretical simulation shows that the thermal neutron detection efficiency of a detector of 10-mm thickness is relatively high: up to 98% for enriched 6LiI and 65% for natural natLiI. The thermal neutron efficiency of the detector was calibrated with reactor thermal neutrons. Taking into account neutron scattering by the lead brick, the high-density polyethylene and the environmental neutron contribution, the detection efficiency of the 6LiI detector is about 90% and that of the natLiI detector about 70%. Conclusion: The detector efficiency can reach the value given by the theoretical calculations. (authors)
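
    The thickness-efficiency trend reported above can be illustrated with a toy calculation: for normally incident thermal neutrons on a purely absorbing slab, the capture probability is 1 - exp(-Σt), which a one-line Monte Carlo reproduces. The macroscopic cross-section value below is a placeholder of ours, not the detector data from this study.

```python
import numpy as np

rng = np.random.default_rng(1)

def capture_fraction(sigma_per_mm, thickness_mm, n=200_000):
    """Toy MC: sample exponential free paths for normally incident
    neutrons in a purely absorbing slab and count the captures."""
    free_path = rng.exponential(1.0 / sigma_per_mm, n)
    return np.mean(free_path < thickness_mm)

sigma = 0.5  # placeholder macroscopic absorption cross section, mm^-1
for t in (2.0, 5.0, 10.0):
    print(f"t = {t:4.1f} mm: MC {capture_fraction(sigma, t):.3f}"
          f" vs analytic {1.0 - np.exp(-sigma * t):.3f}")
```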

  12. Calibration of cathode strip gains in multiwire drift chambers of the GlueX experiment

    Energy Technology Data Exchange (ETDEWEB)

    Berdnikov, V. V.; Somov, S. V.; Pentchev, L.; Somov, A.

    2016-07-01

    A technique for calibrating the cathode strip gains in the multiwire drift chambers of the GlueX experiment is described. The accuracy of the technique is estimated using Monte Carlo data generated with known gain coefficients in the strip signal channels. One of the four detector sections has been calibrated using cosmic rays. Results of calibrating the drift chambers in the accelerator beam, after their integration into the GlueX experimental setup, are presented.

  13. Experimental calibration of transmission grating and theoretical calculation of diffraction efficiency

    Institute of Scientific and Technical Information of China (English)

    尚万里; 杨家敏; 赵屹东; 崔明启; 郑雷; 韩勇; 周克瑾; 马陈燕; 朱托; 熊刚; 赵阳; 张文海; 易荣清; 况龙钰; 曹磊峰; 高宇林

    2011-01-01

    Transmission gratings are widely used in the measurement of soft X rays. In order to obtain the diffraction efficiencies of each order and other parameters of a transmission grating used in inertial confinement fusion research, the grating was calibrated at the Beijing Synchrotron Radiation Facility over the 200-1600 eV energy range, and experimental diffraction-efficiency results were obtained. The method for calculating transmission-grating diffraction efficiency was extended, and a seven-side quasi-trapezoidal cross-section model was proposed. Fits to the experimental data show that the theoretical results agree well with the measurements, yielding the seven-side quasi-trapezoidal cross-section structure of the grating wires.

  14. Residual gas analyzer calibration

    Science.gov (United States)

    Lilienkamp, R. H.

    1972-01-01

    A technique which employs known gas mixtures to calibrate the residual gas analyzer (RGA) is described. The mass spectra from the RGA are recorded for each gas mixture. These mass-spectra data and the mixture-composition data each form a matrix, and from the two matrices the calibration matrix may be computed. The matrix mathematics requires that the number of calibration gas mixtures be equal to or greater than the number of gases included in the calibration. This technique was evaluated using a mathematical model of an RGA to generate the mass spectra. The model included shot-noise errors in the mass spectra; errors in the gas concentrations were also included in the evaluation. The effects of these errors were studied by varying their magnitudes and comparing the resulting calibrations. Several methods of evaluating an actual calibration are presented, and the effects of the number of gases included, the composition of the calibration mixtures, and the number of mixtures used are discussed.
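
    The matrix relation described above can be sketched as follows: stack the recorded spectra and the known compositions row by row and solve for the calibration matrix in the least-squares sense; requiring at least as many mixtures as gases keeps the composition side determined. Array shapes and names here are our assumptions, not the paper's notation.

```python
import numpy as np

def rga_calibration(spectra, compositions):
    """spectra:      (m, p) peak heights for m calibration mixtures.
    compositions: (m, g) known gas fractions; requires m >= g.
    Returns B (p, g) such that spectrum @ B estimates a composition."""
    m, g = compositions.shape
    if m < g:
        raise ValueError("need at least as many mixtures as gases")
    B, *_ = np.linalg.lstsq(spectra, compositions, rcond=None)
    return B

# usage: composition of an unknown sample from its measured spectrum
# c_hat = measured_spectrum @ B
```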

  15. TOD to TTP calibration

    Science.gov (United States)

    Bijl, Piet; Reynolds, Joseph P.; Vos, Wouter K.; Hogervorst, Maarten A.; Fanning, Jonathan D.

    2011-05-01

    The TTP (Targeting Task Performance) metric, developed at NVESD, is the current standard US Army model to predict EO/IR target acquisition performance. This model, however, does not have a corresponding lab or field test to empirically assess the performance of a camera system. The TOD (Triangle Orientation Discrimination) method, developed at TNO in The Netherlands, provides such a measurement. In this study, we make a direct comparison between TOD performance for a range of sensors and the extensive historical US observer performance database built to develop and calibrate the TTP metric. The US perception data were collected with military personnel performing an identification task on a standard 12-target, 12-aspect tactical vehicle image set that was processed through simulated sensors for which the most fundamental sensor parameters, such as blur, sampling, and spatial and temporal noise, were varied. In the present study, we measured TOD sensor performance using exactly the same sensors processing a set of TOD triangle test patterns. The study shows that good overall agreement is obtained when the ratio between target characteristic size and TOD test pattern size at threshold equals 6.3. Note that this number is purely based on empirical data without any intermediate modeling. The calibration of the TOD to the TTP is highly beneficial to the sensor modeling and testing community for a variety of reasons. These include: i) a connection between requirement specification and acceptance testing, and ii) a very efficient method to quickly validate or extend the TTP range prediction model to new systems and tasks.

  16. Calibration of Nanopositioning Stages

    Directory of Open Access Journals (Sweden)

    Ning Tan

    2015-12-01

    Full Text Available Accuracy is one of the most important criteria for the performance evaluation of micro- and nanorobots or systems. Nanopositioning stages are used to achieve high positioning resolution and accuracy for a wide and growing scope of applications. However, their positioning accuracy and repeatability are not well known and are difficult to guarantee, which induces many drawbacks for many applications. For example, in the mechanical characterisation of biological samples, it is difficult to perform several cycles in a repeatable way so as not to induce negative influences on the study. It also prevents one from controlling a tool accurately with respect to a sample without adding additional sensors for closed-loop control. This paper aims at quantifying the positioning repeatability and accuracy based on the ISO 9283:1998 standard, and at analyzing factors influencing positioning accuracy in a case study of a 1-DoF (Degree-of-Freedom) nanopositioning stage. The influence of thermal drift is notably quantified. Performance improvements of the nanopositioning stage are then investigated through robot calibration (i.e., an open-loop approach). Two models (static and adaptive) are proposed to compensate for both geometric errors and thermal drift. Validation experiments conducted over a long period (several days) show that the accuracy of the stage is improved from the typical micrometer range to 400 nm using the static model and even down to 100 nm using the adaptive model. In addition, we extend the 1-DoF calibration to multi-DoF with a case study of a 2-DoF nanopositioning robot. Results demonstrate that the model efficiently improved the 2D accuracy from 1400 nm to 200 nm.
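
    The static compensation model described above can be sketched as an ordinary least-squares fit of the positioning error against position and temperature; the quadratic-in-position, linear-in-temperature regressors are our assumption for illustration, not the paper's exact model.

```python
import numpy as np

def fit_static_model(commanded, temperature, measured):
    """Fit e = a0 + a1*x + a2*x^2 + a3*T to the observed positioning
    error; subtracting the predicted error compensates the stage."""
    x = np.asarray(commanded, float)
    T = np.asarray(temperature, float)
    e = np.asarray(measured, float) - x               # observed error
    A = np.column_stack([np.ones_like(x), x, x ** 2, T])
    coef, *_ = np.linalg.lstsq(A, e, rcond=None)
    return coef

def compensate(coef, x, T):
    a0, a1, a2, a3 = coef
    return x - (a0 + a1 * x + a2 * x ** 2 + a3 * T)   # corrected command
```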

  17. Development of methodology for characterization of cartridge filters from the IEA-R1 using the Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Priscila

    2014-07-01

    The Cuno filter is part of the water processing circuit of the IEA-R1 reactor and, when saturated, it is replaced and becomes radioactive waste, which must be managed. In this work, the primary characterization of the Cuno filter of the IEA-R1 nuclear reactor at IPEN was carried out using gamma spectrometry associated with the Monte Carlo method. The gamma spectrometry was performed using a hyperpure germanium (HPGe) detector. The germanium crystal represents the active detection volume of the HPGe detector, which has a region called the dead layer or inactive layer. A difference between theoretical and experimental values when obtaining the efficiency curve of these detectors has been reported in the literature. In this study we used the MCNP-4C code to obtain the detector calibration efficiency for the geometry of the Cuno filter, and the influence of the dead layer and the cascade-summing effect in the HPGe detector were studied. The dead-layer values were corrected by varying the thickness and the radius of the germanium crystal. The detector has 75.83 cm{sup 3} of active detection volume, according to information provided by the manufacturer. Nevertheless, the results showed that the actual active volume is less than the one specified, with the dead layer representing 16% of the total volume of the crystal. Analysis of the Cuno filter by gamma spectrometry enabled the identification of energy peaks, from which three radionuclides were identified in the filter: {sup 108m}Ag, {sup 110m}Ag and {sup 60}Co. From the calibration efficiency obtained by the Monte Carlo method, the activity estimated for these radionuclides is of the order of MBq. (author)
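
    Full-energy peak efficiencies obtained point by point (whether from MCNP, as here, or from standard sources) are commonly interpolated with a log-log polynomial so the efficiency at any peak energy can be read off. The data points and function names below are illustrative, not this work's MCNP results.

```python
import numpy as np

def fit_efficiency_curve(energies_keV, efficiencies, order=3):
    """Fit ln(eff) as a polynomial in ln(E), a common HPGe
    parameterization; returns a callable eff(E)."""
    coef = np.polyfit(np.log(energies_keV), np.log(efficiencies), order)
    return lambda E: np.exp(np.polyval(coef, np.log(E)))

E = np.array([122.0, 344.0, 662.0, 1173.0, 1332.0])      # keV, illustrative
eff = np.array([0.012, 0.0065, 0.0040, 0.0026, 0.0023])  # illustrative
curve = fit_efficiency_curve(E, eff)

def activity_Bq(net_counts, live_time_s, energy_keV, gamma_yield):
    """Activity from a net peak area using the fitted efficiency."""
    return net_counts / (live_time_s * curve(energy_keV) * gamma_yield)
```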

  18. Parallel Calibration for Sensor Array Radio Interferometers

    CERN Document Server

    Brossard, Martin; Pesavento, Marius; Boyer, Rémy; Larzabal, Pascal; Wijnholds, Stefan J

    2016-01-01

    In order to meet the theoretically achievable imaging performance, calibration of modern radio interferometers is a mandatory challenge, especially at low frequencies. In this perspective, we propose a novel parallel iterative multi-wavelength calibration algorithm. The proposed algorithm estimates the apparent directions of the calibration sources, the direction-dependent and direction-independent complex gains of the array elements, and their noise powers, with a reasonable computational complexity. Furthermore, the algorithm takes into account the specific variation of the aforementioned parameter values across wavelength. Realistic numerical simulations reveal that the proposed scheme outperforms the mono-wavelength calibration scheme and approaches the derived constrained Cramér-Rao bound, even in the presence of non-calibration sources at unknown directions, in a computationally efficient manner.

  20. Study of the response of an ORTEC GMX45 HPGe detector with a multi-radionuclide volume source using Monte Carlo simulations.

    Science.gov (United States)

    Saraiva, A; Oliveira, C; Reis, M; Portugal, L; Paiva, I; Cruz, C

    2016-07-01

    A model of an n-type ORTEC GMX45 HPGe detector was created using the MCNPX and MCNP-CP codes. In order to validate the model, experimental efficiencies were compared with the Monte Carlo simulation results. The reference source is a NIST-traceable multi-gamma volume source in a water-equivalent epoxy resin matrix (density 1.15 g cm(-3)) containing several radionuclides, (210)Pb, (241)Am, (137)Cs and (60)Co, in a cylindrical container. Two distances from the source bottom to the front surface of the detector end cap were considered; the efficiency at the shorter distance is higher than at the longer one. The relative difference between the measured and simulated full-energy peak efficiencies is less than 4.0%, except for the 46.5 keV energy peak of (210)Pb at the longer distance (6.5%), allowing the model to be considered validated. In the absence of adequate standard calibration sources, efficiencies and efficiency transfer factors for geometry deviations and matrix effects can be accurately computed using Monte Carlo methods, even if true coincidence summing can occur, as is the case when the (60)Co radioisotope is present in the source. PMID:27131096

  1. Multilevel Monte Carlo Approaches for Numerical Homogenization

    KAUST Repository

    Efendiev, Yalchin R.

    2015-10-01

    In this article, we study the application of multilevel Monte Carlo (MLMC) approaches to numerical random homogenization. Our objective is to compute the expectation of some functionals of the homogenized coefficients, or of the homogenized solutions. This is accomplished within MLMC by considering different sizes of representative volumes (RVEs). Many inexpensive computations with the smallest RVE size are combined with fewer expensive computations performed on larger RVEs. Likewise, when it comes to homogenized solutions, different levels of coarse-grid meshes are used to solve the homogenized equation. We show that, by carefully selecting the number of realizations at each level, we can achieve a speed-up in the computations in comparison to a standard Monte Carlo method. Numerical results are presented for both one-dimensional and two-dimensional test-cases that illustrate the efficiency of the approach.
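
    The telescoping idea can be sketched generically: write E[P_L] = E[P_0] + the sum of E[P_l - P_{l-1}] over levels and estimate each level mean with its own sample size — many cheap coarse samples, few expensive corrections. The toy sampler below only mimics a discretization error that shrinks per level; it is not an RVE solver, and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(2)

def mlmc_estimate(sampler, n_per_level):
    """sampler(level, n) returns n samples of Y_l = P_l - P_{l-1}
    (plain P_0 at level 0); the estimator sums the level means."""
    return sum(sampler(level, n).mean()
               for level, n in enumerate(n_per_level))

def toy_sampler(level, n):
    """Quantity of interest E[X^2] = 1, with level-l 'discretization
    noise' whose size shrinks by a factor of 4 per level."""
    x = rng.standard_normal(n)
    noise = lambda l: rng.standard_normal(n) * 0.5 ** (2 * l)
    if level == 0:
        return x ** 2 + noise(0)
    return (x ** 2 + noise(level)) - (x ** 2 + noise(level - 1))

# decreasing sample counts at the finer (more expensive) levels
print(mlmc_estimate(toy_sampler, [40000, 10000, 2500]))  # ~1.0
```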

  2. Quantum Monte Carlo Calculations of Neutron Matter

    CERN Document Server

    Carlson, J; Ravenhall, D G

    2003-01-01

    Uniform neutron matter is approximated by a cubic box containing a finite number of neutrons, with periodic boundary conditions. We report variational and Green's function Monte Carlo calculations of the ground state of fourteen neutrons in a periodic box using the Argonne v8' two-nucleon interaction at densities up to one and a half times the nuclear matter density. The effects of the finite box size are estimated using variational wave functions together with cluster expansion and chain summation techniques. They are small at subnuclear densities. We discuss the expansion of the energy of low-density neutron gas in powers of its Fermi momentum. This expansion is strongly modified by the large nn scattering length, and does not begin with the Fermi-gas kinetic energy as assumed in both Skyrme and relativistic mean field theories. The leading term of the neutron gas energy is approximately half the Fermi-gas kinetic energy. The quantum Monte Carlo results are also used to calibrate the accuracy of variational calculations ...

  3. Quasi Monte Carlo methods for optimization models of the energy industry with pricing and load processes

    International Nuclear Information System (INIS)

    We discuss progress in quasi Monte Carlo methods for the numerical calculation of integrals or expected values, and justify why these methods are more efficient than classic Monte Carlo methods. Quasi Monte Carlo methods are found to be particularly efficient if the integrands have a low effective dimension. That is why we also discuss the concept of effective dimension and prove, using the example of a stochastic optimization model from the energy industry, that such models can possess a low effective dimension. Modern quasi Monte Carlo methods are therefore very promising for such models.
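
    The efficiency gain on a low-effective-dimension integrand is easy to demonstrate with a hand-rolled Halton sequence (one standard low-discrepancy construction; the integrand and sample sizes below are our own, not from the paper).

```python
import numpy as np

def halton(n, dim):
    """First n points of the Halton low-discrepancy sequence."""
    primes = [2, 3, 5, 7, 11, 13][:dim]
    pts = np.empty((n, dim))
    for d, base in enumerate(primes):
        for i in range(n):
            f, r, k = 1.0, 0.0, i + 1
            while k > 0:                    # radical inverse in this base
                f /= base
                r += f * (k % base)
                k //= base
            pts[i, d] = r
    return pts

# nearly additive integrand over [0,1]^5 with exact integral 1
f = lambda x: np.prod(1 + 0.1 * (x - 0.5), axis=1)
n, dim = 4096, 5
rng = np.random.default_rng(3)
print("MC  error:", abs(f(rng.random((n, dim))).mean() - 1.0))
print("QMC error:", abs(f(halton(n, dim)).mean() - 1.0))
```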

  4. Monte Carlo techniques

    International Nuclear Information System (INIS)

    The course ''Monte Carlo Techniques'' will try to give a general overview of how to build up a method based on a given theory, allowing you to compare the outcome of an experiment with that theory. Concepts related to the construction of the method, such as random variables, distributions of random variables, generation of random variables, and random-based numerical methods, will be introduced in this course. Examples of some of the current theories in High Energy Physics describing e+e- annihilation processes (QED, Electro-Weak, QCD) will also be briefly introduced. A second step in the employment of this method is related to the detector. The interactions that a particle can undergo on its way through the detector, as well as the response of the different materials that compose the detector, will be covered in this course. An example of a detector from the LEP era, in which these techniques are being applied, will close the course. (orig.)

  5. MCMini: Monte Carlo on GPGPU

    Energy Technology Data Exchange (ETDEWEB)

    Marcus, Ryan C. [Los Alamos National Laboratory

    2012-07-25

    MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

  6. An integrated hydrological, ecological, and economical (HEE) modeling system for assessing water resources and ecosystem production: calibration and validation in the upper and middle parts of the Yellow River Basin, China

    Science.gov (United States)

    Li, Xianglian; Yang, Xiusheng; Gao, Wei

    2006-08-01

    Effective management of water resources in arid and semi-arid areas demands studies that cross the disciplinary boundaries of the natural and social sciences. An integrated Hydrological, Ecological and Economical (HEE) modeling system at the regional scale has been developed to assess water resources use and ecosystem production in arid and semi-arid areas. As a physically based distributed modeling system, the HEE modeling system requires various input parameters, including those for soil, vegetation, topography, groundwater, and water and agricultural management at different spatial levels. A successful implementation of the modeling system highly depends on how well it is calibrated. This paper presents an automatic calibration procedure for the HEE modeling system and its test in the upper and middle parts of the Yellow River basin. Prior to calibration, a comprehensive literature investigation and sensitivity analysis were performed to identify important parameters for calibration. The automatic calibration procedure was based on a conventional Monte Carlo sampling method together with a multi-objective criterion for calibration over multiple sites and multiple outputs. The multi-objective function consisted of optimizing the statistics of mean absolute relative error (MARE), Nash-Sutcliffe model efficiency coefficient (E_NS), and coefficient of determination (R2). The modeling system was calibrated against streamflow and harvest yield data from multiple sites/provinces within the basin over 2001 using the proposed automatic procedure, and validated over 1993-1995. Over the calibration period, the mean absolute relative error of simulated daily streamflow was within 7%, while the statistics R2 and E_NS of daily streamflow were 0.61 and 0.49, respectively. Average simulated harvest yield over the calibration period was about 9.2% less than that of observations. Overall, the calibration results have indicated that the calibration procedures developed in this study can efficiently calibrate
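
    The three calibration criteria named above are standard goodness-of-fit statistics; as a reference, their usual definitions are sketched below (the paper's exact multi-objective combination may differ).

```python
import numpy as np

def calibration_metrics(obs, sim):
    """MARE, Nash-Sutcliffe efficiency and R^2 for one output series."""
    obs = np.asarray(obs, float)
    sim = np.asarray(sim, float)
    mare = np.mean(np.abs((sim - obs) / obs))   # mean absolute relative error
    e_ns = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    r = np.corrcoef(obs, sim)[0, 1]             # Pearson correlation
    return {"MARE": mare, "E_NS": e_ns, "R2": r ** 2}
```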

  7. Monte Carlo methods for electromagnetics

    CERN Document Server

    Sadiku, Matthew NO

    2009-01-01

    Until now, novices had to painstakingly dig through the literature to discover how to use Monte Carlo techniques for solving electromagnetic problems. Written by one of the foremost researchers in the field, Monte Carlo Methods for Electromagnetics provides a solid understanding of these methods and their applications in electromagnetic computation. Including much of his own work, the author brings together essential information from several different publications. Using a simple, clear writing style, the author begins with a historical background and review of electromagnetic theory. After addressing probability and statistics, he introduces the finite difference method as well as the fixed and floating random walk Monte Carlo methods. The text then applies the Exodus method to Laplace's and Poisson's equations and presents Monte Carlo techniques for handling Neumann problems. It also deals with whole-field computation using the Markov chain, applies Monte Carlo methods to time-varying diffusion problems, and ...
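
    The fixed random walk method mentioned above is compact enough to show in full for Laplace's equation on a grid: the potential at an interior node equals the expected boundary potential seen by a symmetric random walk started there. Grid size and boundary values below are our toy choices, not examples from the book.

```python
import numpy as np

rng = np.random.default_rng(4)

def laplace_walk(start, grid_n, boundary, n_walks=10_000):
    """Estimate the Laplace solution at one node by averaging the
    boundary potential reached by equal-probability 4-way walks."""
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    total = 0.0
    for _ in range(n_walks):
        i, j = start
        while 0 < i < grid_n and 0 < j < grid_n:
            di, dj = steps[rng.integers(4)]
            i, j = i + di, j + dj
        total += boundary(i, j)
    return total / n_walks

# unit square, top edge held at 100 V, other edges grounded
top_hot = lambda i, j: 100.0 if j == 20 else 0.0
print(laplace_walk((10, 10), 20, top_hot))  # ~25 V at the centre by symmetry
```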

  8. Using a Monte-Carlo-based approach to evaluate the uncertainty on fringe projection technique

    CERN Document Server

    Molimard, Jérôme

    2013-01-01

    A complete uncertainty analysis of a given fringe projection set-up has been performed using a Monte Carlo approach, with the calibration procedure taken into account. Two applications are given: at the macroscopic scale, phase noise is predominant, whilst at the microscopic scale, both phase noise and calibration errors are important. Finally, the uncertainty found at the macroscopic scale is close to that of some experimental tests (~100 μm).

  9. Uncertainty budget for a whole body counter in the scan geometry and computer simulation of the calibration phantoms

    International Nuclear Information System (INIS)

    At the Austrian Research Centers Seibersdorf (ARCS), a whole body counter (WBC) in the scan geometry is used to perform routine measurements for the determination of radioactive intake of workers. The calibration of the WBC is made using bottle phantoms with a homogeneous activity distribution. The same calibration procedures have been simulated using the Monte Carlo N-Particle (MCNP) code and FLUKA, and the resulting full-energy peak efficiencies for eight energies and five phantoms have been compared with the experimental results. The deviation between experimental and simulation results is within 10%. Furthermore, uncertainty budget evaluations have been performed to find out which parameters make substantial contributions to these differences. To this end, statistical errors of the Monte Carlo simulation, uncertainties in the cross-section tables and differences due to geometrical considerations have been taken into account. Comparisons have also been made between these results and those for inhomogeneous distributions, in which the activity is concentrated only in certain parts of the body (such as the head, lungs, arms and legs). The maximum deviation from the homogeneous case, 43%, was found when the activity is concentrated in the arms. (authors)

  10. The role of research efficiency in the evolution of scientific productivity and impact: An agent-based model

    Science.gov (United States)

    You, Zhi-Qiang; Han, Xiao-Pu; Hadzibeganovic, Tarik

    2016-02-01

    We introduce an agent-based model to investigate the effects of production efficiency (PE) and hot field tracing capability (HFTC) on productivity and impact of scientists embedded in a competitive research environment. Agents compete to publish and become cited by occupying the nodes of a citation network calibrated by real-world citation datasets. Our Monte-Carlo simulations reveal that differences in individual performance are strongly related to PE, whereas HFTC alone cannot provide sustainable academic careers under intensely competitive conditions. Remarkably, the negative effect of high competition levels on productivity can be buffered by elevated research efficiency if simultaneously HFTC is sufficiently low.

  11. Ground calibrations of Nuclear Compton Telescope

    Science.gov (United States)

    Chiu, Jeng-Lun; Liu, Zhong-Kai; Bandstra, Mark S.; Bellm, Eric C.; Liang, Jau-Shian; Perez-Becker, Daniel; Zoglauer, Andreas; Boggs, Steven E.; Chang, Hsiang-Kuang; Chang, Yuan-Hann; Huang, Minghuey A.; Amman, Mark; Chiang, Shiuan-Juang; Hung, Wei-Che; Lin, Chih-Hsun; Luke, Paul N.; Run, Ray-Shine; Wunderer, Cornelia B.

    2010-07-01

    The Nuclear Compton Telescope (NCT) is a balloon-borne soft gamma-ray (0.2-10 MeV) telescope designed to study astrophysical sources of nuclear line emission and polarization. The heart of NCT is an array of 12 cross-strip germanium detectors designed to provide 3D positions for each photon interaction; this full 3D position resolution enables imaging, effectively reduces background, and enables the measurement of polarization. The keys to Compton imaging with NCT's detectors are determining the energy deposited at each strip and tracking the gamma-ray photon interactions within the detector. The 3D positions are provided by the orthogonal X and Y strips, and by determining the interaction depth using the charge collection time difference (CTD) between the anode and cathode. Calibrations of the energy as well as the 3D position of interactions have been completed, and extensive calibration campaigns for the whole system were also conducted using radioactive sources prior to our flights from Ft. Sumner, New Mexico, USA in Spring 2009, and from Alice Springs, Australia in Spring 2010. Here we present the techniques and results of our ground calibrations so far, and then compare the calibration results for the effective area throughout NCT's field of view with Monte Carlo simulations using a detailed mass model.

  12. Calibrating Gyrochronology using Kepler Asteroseismic targets

    CERN Document Server

    Angus, Ruth; Foreman-Mackey, Daniel; McQuillan, Amy

    2015-01-01

    Among the available methods for dating stars, gyrochronology is a powerful one because it requires knowledge of only the star's mass and rotation period. Gyrochronology relations have previously been calibrated using young clusters, with the Sun providing the only age dependence, and are therefore poorly calibrated at late ages. We used rotation period measurements of 310 Kepler stars with asteroseismic ages, 50 stars from the Hyades and Coma Berenices clusters and 6 field stars (including the Sun) with precise age measurements to calibrate the gyrochronology relation, whilst fully accounting for measurement uncertainties in all observable quantities. We calibrated a relation of the form $P = A^n \times a(B-V-c)^b$, where $P$ is the rotation period in days, $A$ is age in Myr, $B$ and $V$ are magnitudes, and $a$, $b$ and $n$ are the free parameters of our model. We found $a = 0.40^{+0.3}_{-0.05}$, $b = 0.31^{+0.05}_{-0.02}$ and $n = 0.55^{+0.02}_{-0.09}$. Markov Chain Monte Carlo methods were used to explore the posteri...
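
    With the quoted best-fit values, the relation is straightforward to apply in both directions. The colour offset c = 0.45 used below is the value conventionally fixed in this family of relations and is our assumption, since the abstract does not quote it.

```python
# best-fit parameters quoted in the abstract; c = 0.45 assumed (fixed offset)
a, b, n, c = 0.40, 0.31, 0.55, 0.45

def rotation_period(age_myr, bv):
    """Gyrochronology relation P = A^n * a * (B-V - c)^b, in days."""
    return a * age_myr ** n * (bv - c) ** b

def gyro_age_myr(period_days, bv):
    """Invert the relation to date a star from its rotation period."""
    return (period_days / (a * (bv - c) ** b)) ** (1.0 / n)

print(rotation_period(4570.0, 0.65))   # solar age and colour: ~25 days
print(gyro_age_myr(25.0, 0.65))        # recovers ~4570 Myr
```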

  13. Monte Carlo Form-Finding Method for Tensegrity Structures

    Science.gov (United States)

    Li, Yue; Feng, Xi-Qiao; Cao, Yan-Ping

    2010-05-01

    In this paper, we propose a Monte Carlo-based approach to solve tensegrity form-finding problems. It uses a stochastic procedure to find the deterministic equilibrium configuration of a tensegrity structure. The suggested Monte Carlo form-finding (MCFF) method is highly efficient because it does not involve complicated matrix operations and symmetry analysis and it works for arbitrary initial configurations. Both regular and non-regular tensegrity problems of large scale can be solved. Some representative examples are presented to demonstrate the efficiency and accuracy of this versatile method.

  14. Algorithm research for efficient global tallying in Monte Carlo criticality calculations

    Institute of Scientific and Technical Information of China (English)

    上官丹骅; 邓力; 李刚; 张宝印; 马彦; 付元光; 李瑞; 胡小利

    2016-01-01

    Based on research into the uniform fission site algorithm, a uniform tally density algorithm and a uniform track number density algorithm are proposed and compared with the original uniform fission site algorithm, seeking high overall performance of global tallying in Monte Carlo criticality calculations. Because reducing the largest uncertainties to an acceptable level simply by running a large number of neutron histories is often prohibitively expensive, such research is indispensable for calculations to reach the goal of practical application (the so-called 95/95 standard). Using the global volume-averaged cell flux tally and the energy deposition tally of a pin-by-pin model of the Daya Bay nuclear reactor as two examples, both new algorithms achieve higher overall efficiency than the uniform fission site algorithm. Although the uniform tally density algorithm has the best performance, the uniform track number density algorithm retains the advantage of being applicable, without modification, to any type of tally based on the track length estimator. All the algorithms have been implemented in the independently developed parallel Monte Carlo particle transport code JMCT.

  15. A FAST FOREGROUND DIGITAL CALIBRATION TECHNIQUE FOR PIPELINED ADC

    Institute of Scientific and Technical Information of China (English)

    Wang Yu; Yang Haigang; Cheng Xin; Liu Fei; Yin Tao

    2012-01-01

    Digital calibration techniques are widely developed to cancel the non-idealities of pipelined Analog-to-Digital Converters (ADCs). This letter presents a fast foreground digital calibration technique based on an analysis of the error sources that influence the resolution of pipelined ADCs. The method quickly estimates the gain error of the ADC prototype and calibrates the ADC simultaneously during operation. Finally, a 10-bit, 100-MS/s pipelined ADC is implemented and calibrated. The simulation results show that the digital calibration technique is effective while requiring fewer operation cycles.

  16. Monte Carlo Methods for Rough Free Energy Landscapes: Population Annealing and Parallel Tempering

    OpenAIRE

    Machta, Jon; Ellis, Richard S.

    2011-01-01

    Parallel tempering and population annealing are both effective methods for simulating equilibrium systems with rough free energy landscapes. Parallel tempering, also known as replica exchange Monte Carlo, is a Markov chain Monte Carlo method while population annealing is a sequential Monte Carlo method. Both methods overcome the exponential slowing associated with high free energy barriers. The convergence properties and efficiency of the two methods are compared. For large systems, populatio...

  17. Calibration and measurement of 210Pb using two independent techniques

    International Nuclear Information System (INIS)

    An experimental procedure has been developed for the rapid and accurate determination of the activity concentration of 210Pb in sediments by liquid scintillation counting (LSC). Additionally, an alternative technique using γ-spectrometry and Monte Carlo simulation has been developed. A radiochemical procedure based on radium and barium sulphate co-precipitation has been applied to isolate the Pb isotopes. 210Pb activity measurements were made in a low-background Quantulus 1220 liquid scintillation spectrometer. A calibration of the liquid scintillation spectrometer, including its α/β discrimination system, has been made in order to minimize background. Additionally, some improvements are suggested for the calculation of the 210Pb activity concentration, taking into account that the 210Pb counting efficiency cannot be accurately determined; therefore, the use of an effective radiochemical yield, which can be empirically evaluated, is proposed. The 210Pb activity concentration in riverbed sediments from an area affected by NORM wastes has been determined using both of the proposed methods. Results using γ-spectrometry and LSC are compared with the results obtained following the indirect α-spectrometry (210Po) method.

  18. OLI Radiometric Calibration

    Science.gov (United States)

    Markham, Brian; Morfitt, Ron; Kvaran, Geir; Biggar, Stuart; Leisso, Nathan; Czapla-Myers, Jeff

    2011-01-01

    Goals: (1) present an overview of the pre-launch radiance, reflectance and uniformity calibration of the Operational Land Imager (OLI), including (a) the transfer to orbit/heliostat and (b) linearity; and (2) discuss on-orbit plans for the radiance, reflectance and uniformity calibration of the OLI.

  19. Absolute angular calibration of a submarine km3 neutrino telescope

    International Nuclear Information System (INIS)

    A requirement for a neutrino telescope is the ability to resolve point sources of neutrinos. In order to understand its resolving power, a way to perform absolute angular calibration with muons is required. Muons produced by cosmic rays in the atmosphere offer an abundant calibration source. By covering a surface vessel with 200 modules of 5 m2 plastic scintillator, a surface air-shower array can be set up. Running this array in coincidence with a deep-sea km3-size neutrino detector, where the coincidence is defined by the absolute clock timing stamp of each event, would allow absolute angular calibration to be performed. Monte Carlo results simulating the absolute angular calibration of the km3-size neutrino detector will be presented. Future work and directions will be discussed.

  20. Sandia WIPP calibration traceability

    International Nuclear Information System (INIS)

    This report summarizes the work performed to establish calibration traceability for the instrumentation used by Sandia National Laboratories at the Waste Isolation Pilot Plant (WIPP) during testing from 1980-1985. Identifying the calibration traceability is an important part of establishing a pedigree for the data and is part of the qualification of existing data. In general, the requirement states that the calibration of Measuring and Test equipment must have a valid relationship to nationally recognized standards or the basis for the calibration must be documented. Sandia recognized that just establishing calibration traceability would not necessarily mean that all QA requirements were met during the certification of test instrumentation. To address this concern, the assessment was expanded to include various activities

  1. Monte Carlo Simulation of an American Option

    Directory of Open Access Journals (Sweden)

    Gikiri Thuo

    2007-04-01

    Full Text Available We implement gradient estimation techniques for sensitivity analysis of option pricing which can be efficiently employed in Monte Carlo simulation. Using these techniques we can simultaneously obtain an estimate of the option value together with estimates of the sensitivities of the option value to various parameters of the model. After deriving the gradient estimates, we incorporate them into an iterative stochastic approximation algorithm for pricing an option with early exercise features. We illustrate the procedure using the example of an American call option with a single dividend that is analytically tractable. In particular, we incorporate estimates of the gradient with respect to the early exercise threshold level.
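
    The flavour of simultaneous value-and-gradient estimation can be shown on the simpler, analytically tractable European call (our stand-in; the paper treats the American case with early exercise): a pathwise delta estimate comes from the same simulated paths as the price.

```python
import numpy as np

rng = np.random.default_rng(5)

def mc_call_price_and_delta(s0, k, r, sigma, t, n=200_000):
    """Monte Carlo price of a European call under Black-Scholes, plus a
    pathwise estimate of delta from the same paths."""
    z = rng.standard_normal(n)
    st = s0 * np.exp((r - 0.5 * sigma ** 2) * t + sigma * np.sqrt(t) * z)
    disc = np.exp(-r * t)
    payoff = disc * np.maximum(st - k, 0.0)
    # pathwise derivative: d(payoff)/d(s0) = disc * 1{st > k} * st / s0
    delta = disc * (st > k) * st / s0
    return payoff.mean(), delta.mean()

price, delta = mc_call_price_and_delta(100, 100, 0.05, 0.2, 1.0)
print(f"price ~ {price:.3f}, delta ~ {delta:.3f}")  # Black-Scholes: 10.45, 0.637
```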

  2. Calculation of HPGe efficiency for environmental samples: comparison of EFFTRAN and GEANT4

    Energy Technology Data Exchange (ETDEWEB)

    Nikolic, Jelena, E-mail: jnikolic@vinca.rs [University of Belgrade Institut for Nuclear Sciences Vinča, Mike Petrovica Alasa 12-16, 11001 Belgrade (Serbia); Vidmar, Tim [SCK.CEN, Belgian Nuclear Research Centre, Boeretang 200, BE-2400 Mol (Belgium); Jokovic, Dejan [University of Belgrade, Institute for Physics, Pregrevica 18, Belgrade (Serbia); Rajacic, Milica; Todorovic, Dragana [University of Belgrade Institut for Nuclear Sciences Vinča, Mike Petrovica Alasa 12-16, 11001 Belgrade (Serbia)

    2014-11-01

    Determination of the full-energy peak efficiency is one of the most important tasks that has to be performed before gamma spectrometry of environmental samples. Many methods, including measurement of specific reference materials, Monte Carlo simulations, efficiency transfer and semi-empirical calculations, have been developed to complete this task. Monte Carlo simulation based on the GEANT4 simulation package, and the EFFTRAN efficiency transfer software, were applied for the efficiency calibration of three detectors routinely used in the Environment and Radiation Protection Laboratory of the Institute for Nuclear Sciences Vinca for the measurement of environmental samples. Efficiencies were calculated for water, soil and aerosol samples. The aim of this paper is to perform efficiency calculations for HPGe detectors using both GEANT4 simulation and the EFFTRAN efficiency transfer software and to compare the results with experiment. This comparison should show how well the two methods agree with the experimentally obtained efficiencies of our measurement system and in which parts of the spectrum discrepancies appear. Detailed knowledge of the accuracy and precision of both methods should enable us to choose the appropriate method for each situation that arises in our and other laboratories on a daily basis.

  3. Estimation of population variance in contributon Monte Carlo

    International Nuclear Information System (INIS)

    Based on the theory of contributons, a new Monte Carlo method known as the contributon Monte Carlo method has recently been developed. The method has found applications in several practical shielding problems. The authors analyze theoretically the variance and efficiency of the new method, by taking moments around the score. In order to compare the contributon game with a game of simple geometrical splitting and also to get the optimal placement of the contributon volume, the moments equations were solved numerically for a one-dimensional, one-group problem using a 10-mfp-thick homogeneous slab. It is found that the optimal placement of the contributon volume is adjacent to the detector; even at its most optimal the contributon Monte Carlo is less efficient than geometrical splitting

  4. Calibration of the JEM-EUSO detector

    Directory of Open Access Journals (Sweden)

    Gorodetzky P.

    2013-06-01

    Full Text Available In order to unveil the mystery of ultra-high-energy cosmic rays (UHECRs), JEM-EUSO (the Extreme Universe Space Observatory on board the Japanese Experiment Module) will observe extensive air showers induced by UHECRs from the International Space Station orbit with a huge acceptance. Calibration of the JEM-EUSO instrument, which consists of Fresnel optics and a focal surface detector with 5000 photomultipliers, is very important for drawing precise conclusions about the origin of UHECRs from the observed results. In this paper, the calibration before launch and in orbit is described. The calibration before flight will be performed as precisely as possible with integrating spheres. In orbit, the relative change of the performance will be checked regularly with on-board and on-ground light sources. The absolute calibration of the photon detection efficiency may be performed using the moon, which is a stable natural light source.

  5. Segment Based Camera Calibration

    Institute of Scientific and Technical Information of China (English)

    马颂德; 魏国庆; 等

    1993-01-01

    The basic idea of calibrating a camera system in previous approaches is to determine camera parameters by using a set of known 3D points as calibration reference. In this paper, we present a method of camera calibration in which camera parameters are determined by a set of 3D lines. A set of constraints is derived on camera parameters in terms of perspective line mapping. From these constraints, the same perspective transformation matrix as that for point mapping can be computed linearly. The minimum number of calibration lines is 6. This result generalizes that of Liu, Huang and Faugeras [12] for camera location determination, in which at least 8 line correspondences are required for linear computation of the camera location. Since line segments in an image can be located easily and more accurately than points, the use of lines as calibration reference tends to ease the computation in image preprocessing and to improve calibration accuracy. Experimental results on the calibration along with stereo reconstruction are reported.
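
    The linear computation from line correspondences can be sketched via the incidence constraint: if image line l corresponds to a space line, then l^T P X = 0 for any 3D point X on that line, so two points per line give two homogeneous equations in the 12 entries of P, and six lines suffice. The implementation details below are our own illustration, not the paper's algorithm.

```python
import numpy as np

def calibrate_from_lines(lines_2d, lines_3d):
    """lines_2d: image lines l = (a, b, c) with a*u + b*v + c = 0.
    lines_3d: matching (X1, X2) pairs of homogeneous 3D points
    spanning each space line. Returns the 3x4 projection matrix P
    (up to scale) from the constraints l.T @ P @ X = 0."""
    rows = []
    for l, (x1, x2) in zip(lines_2d, lines_3d):
        for x in (x1, x2):
            rows.append(np.kron(l, x))    # coefficients of row-major vec(P)
    A = np.asarray(rows)                  # needs >= 6 lines (12 equations)
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 4)           # right null vector, reshaped
```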

  6. Monte Carlo simulation of mixed neutron-gamma radiation fields and dosimetry devices

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Guoqing

    2011-12-22

    Monte Carlo methods based on random sampling are widely used in different fields for their capability of solving problems with a large number of coupled degrees of freedom. In this work, Monte Carlo methods are successfully applied to the simulation of the mixed neutron-gamma field in an interim storage facility and of neutron dosimeters of different types. Details are discussed in two parts: In the first part, the method of simulating an interim storage facility loaded with CASTORs is presented. The size of a CASTOR is rather large (several meters) and the CASTOR wall is very thick (tens of centimeters). Obtaining dose rates outside a CASTOR with reasonable errors usually costs hours or even days. For the simulation of a large number of CASTORs in an interim storage facility, weeks or even months are needed to finish a calculation. Variance reduction techniques were used to reduce the calculation time and to achieve reasonable relative errors. Source clones were applied to avoid unnecessary repeated calculations. In addition, the simulations were performed on a cluster system. With the calculation techniques discussed above, the efficiency of the calculations can be improved considerably. In the second part, the methods of simulating the response of neutron dosimeters are presented. An Alnor albedo dosimeter was modelled in MCNP, and it was simulated in the facility to calculate the calibration factor for the evaluated response to a Cf-252 source. The angular response of Makrofol detectors to fast neutrons has also been investigated. As a kind of SSNTD, Makrofol can detect fast neutrons by recording the neutron-induced heavy charged recoils. To obtain the information on charged recoils, general-purpose Monte Carlo codes were used for transporting the incident neutrons. The response of Makrofol to fast neutrons depends on several factors. Based on the parameters which affect the track revealing, the formation of visible tracks was determined. For

  8. Lidar Calibration Centre

    Science.gov (United States)

    Pappalardo, Gelsomina; Freudenthaler, Volker; Nicolae, Doina; Mona, Lucia; Belegante, Livio; D'Amico, Giuseppe

    2016-06-01

    This paper presents the newly established Lidar Calibration Centre, a distributed infrastructure in Europe, whose goal is to offer services for complete characterization and calibration of lidars and ceilometers. Mobile reference lidars, laboratories for testing and characterization of optics and electronics, facilities for inspection and debugging of instruments, as well as for training in good practices are open to users from the scientific community, operational services and private sector. The Lidar Calibration Centre offers support for trans-national access through the EC HORIZON2020 project ACTRIS-2.

  9. Equipment for dosemeter calibration

    International Nuclear Information System (INIS)

    The device is used for precise calibration of dosimetric instrumentation, such as that used at nuclear facilities. The high precision of the calibration procedure is primarily due to the fact that one single, steady radiation source is used. The accurate alignment of the source and the absence of shielding materials in the beam axis make for high homogeneity of the beam and reproducibility of the measurement; this is further aided by the horizontal displacement of the optical bench, which ensures a constant temperature field and allows the radiation source to be positioned at a sufficient distance from the instrument being calibrated. (Z.S.). 3 figs

  10. A hybrid Monte Carlo and response matrix Monte Carlo method in criticality calculation

    International Nuclear Information System (INIS)

    Full core calculations are very useful and important in reactor physics analysis, especially for computing full core power distributions, optimizing refueling strategies and analyzing fuel depletion. To reduce the computing time and accelerate convergence, a method named the Response Matrix Monte Carlo (RMMC) method, based on analog Monte Carlo simulation, was used to calculate fixed-source neutron transport problems in repeated structures. To make the calculations more accurate, we put forward an RMMC method based on non-analog Monte Carlo simulation and investigate how to use the RMMC method in criticality calculations. A new hybrid RMMC and MC (RMMC+MC) method is then put forward to solve criticality problems with combined repeated and flexible geometries. This new RMMC+MC method, having the advantages of both the MC method and the RMMC method, can not only increase the efficiency of the calculations but also simulate geometries more complex than repeated structures. Several 1-D numerical problems are constructed to test the new RMMC and RMMC+MC methods. The results show that the RMMC and RMMC+MC methods can efficiently reduce the computing time and the variances of the calculations. Finally, future research directions are mentioned and discussed at the end of this paper to make the RMMC and RMMC+MC methods more powerful. (authors)
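
    The criticality (k-eigenvalue) problem such methods address converges by fission-source (power) iteration. The toy sketch below shows that iteration on a small matrix standing in for the response-matrix coupling between lattice regions; the operator entries are invented for illustration, not taken from the paper.

        import numpy as np

        # Minimal power-iteration sketch of a k-eigenvalue (criticality) solve.
        # M plays the role of a response-matrix production/transfer operator.
        rng = np.random.default_rng(0)
        n = 6
        M = rng.random((n, n)) * 0.3 + np.eye(n) * 0.8   # toy operator

        source = np.ones(n) / n
        k = 1.0
        for it in range(200):
            new = M @ source
            k_new = new.sum() / source.sum()   # eigenvalue estimate
            source = new / new.sum()           # renormalise fission source
            if abs(k_new - k) < 1e-10:
                break
            k = k_new

        print(f"k-eff ~ {k:.6f} after {it} iterations")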

  11. Analytical band Monte Carlo analysis of electron transport in silicene

    Science.gov (United States)

    Yeoh, K. H.; Ong, D. S.; Ooi, C. H. Raymond; Yong, T. K.; Lim, S. K.

    2016-06-01

    An analytical band Monte Carlo (AMC) model with linear energy band dispersion has been developed to study electron transport in suspended silicene and in silicene on an aluminium oxide (Al2O3) substrate. We calibrated our model against full band Monte Carlo (FMC) results by matching the velocity-field curve. Using this model, we find that the collective effects of charge impurity scattering and surface optical phonon scattering can degrade the electron mobility down to about 400 cm2 V‑1 s‑1, beyond which it is less sensitive to changes in the substrate charge impurity density and surface optical phonons. We also found that the further reduction of mobility to ∼100 cm2 V‑1 s‑1 demonstrated experimentally by Tao et al (2015 Nat. Nanotechnol. 10 227) can only be explained by the renormalization of the Fermi velocity due to interaction with the Al2O3 substrate.

  12. Comparison of experimental and calculated calibration coefficients for a high sensitivity ionization chamber.

    Science.gov (United States)

    Amiot, M N; Mesradi, M R; Chisté, V; Morin, M; Rigoulay, F

    2012-09-01

    The response of a Vacutec 70129 ionization chamber was calculated using the PENELOPE-2008 Monte Carlo code and compared to experimental data. The composition of the filling gas mixture and its pressure were determined by adjusting the simulated chamber response to the experimental results. The Monte Carlo simulation revealed a physical effect in the detector response to photons due to the presence of xenon in the chamber. Very good agreement is found between calculated and experimental calibration coefficients for 17 radionuclides.

  13. Monte Carlo Greeks for financial products via approximative transition densities

    OpenAIRE

    Joerg Kampen; Anastasia Kolodko; John Schoenmakers

    2008-01-01

    In this paper we introduce efficient Monte Carlo estimators for the valuation of high-dimensional derivatives and their sensitivities ("Greeks"). These estimators are based on an analytical, usually approximate representation of the underlying density. We study approximate densities obtained by the WKB method. The results are applied in the context of a Libor market model.

  14. SPOTS Calibration Example

    Directory of Open Access Journals (Sweden)

    Patterson E.

    2010-06-01

    The results are presented using the procedure outlined by the Standardisation Project for Optical Techniques of Strain measurement to calibrate a digital image correlation system. The process involves comparing the experimental data obtained with the optical measurement system to the theoretical values for a specially designed specimen. The standard states the criteria which must be met in order to achieve successful calibration, in addition to quantifying the measurement uncertainty in the system. The system was evaluated at three different displacement load levels, generating strain ranges from 289 µstrain to 2110 µstrain. At the 289 µstrain range, the calibration uncertainty was found to be 14.1 µstrain, and at the 2110 µstrain range it was found to be 28.9 µstrain. This calibration procedure was performed without painting a speckle pattern on the surface of the metal. Instead, the specimen surface was prepared using different grades of grit paper to produce the desired texture.

  15. Traceable Pyrgeometer Calibrations

    Energy Technology Data Exchange (ETDEWEB)

    Dooraghi, Mike; Kutchenreiter, Mark; Reda, Ibrahim; Habte, Aron; Sengupta, Manajit; Andreas, Afshin; Newman, Martina; Webb, Craig

    2016-05-02

    This presentation provides a high-level overview of the progress on the Broadband Outdoor Radiometer Calibrations for all shortwave and longwave radiometers that are deployed by the Atmospheric Radiation Measurement program.

  16. Air Data Calibration Facility

    Data.gov (United States)

    Federal Laboratory Consortium — This facility is for low altitude subsonic altimeter system calibrations of air vehicles. Mission is a direct support of the AFFTC mission. Postflight data merge is...

  17. Device calibration impacts security of quantum key distribution

    OpenAIRE

    Jain, Nitin; Wittmann, Christoffer; Lydersen, Lars; Wiechers, Carlos; Elser, Dominique; Marquardt, Christoph; Makarov, Vadim; Leuchs, Gerd

    2011-01-01

    Characterizing the physical channel and calibrating the cryptosystem hardware are prerequisites for establishing a quantum channel for quantum key distribution (QKD). Moreover, an inappropriately implemented calibration routine can open a fatal security loophole. We propose and experimentally demonstrate a method to induce a large temporal detector efficiency mismatch in a commercial QKD system by deceiving a channel length calibration routine. We then devise an optimal and realistic strategy...

  18. Approximation Behooves Calibration

    DEFF Research Database (Denmark)

    da Silva Ribeiro, André Manuel; Poulsen, Rolf

    2013-01-01

    Calibration based on an expansion approximation for option prices in the Heston stochastic volatility model gives stable, accurate, and fast results for S&P500-index option data over the period 2005–2009.

  19. Scanner calibration revisited

    Directory of Open Access Journals (Sweden)

    Pozhitkov Alexander E

    2010-07-01

    Background: Calibration of a microarray scanner is critical for accurate interpretation of microarray results. Shi et al. (BMC Bioinformatics, 2005, 6, Art. No. S11 Suppl. 2) reported usage of a Full Moon BioSystems slide for calibration. Inspired by the Shi et al. work, we have calibrated microarray scanners in our previous research. We were puzzled, however, that most of the signal intensities from a biological sample fell below the sensitivity threshold level determined by the calibration slide. This conundrum led us to re-investigate the quality of calibration provided by the Full Moon BioSystems slide as well as the accuracy of the analysis performed by Shi et al. Methods: Signal intensities were recorded on three different microarray scanners at various photomultiplier gain levels using the same calibration slide from Full Moon BioSystems. Data analysis was conducted on raw signal intensities without normalization or transformation of any kind. A weighted least-squares method was used to fit the data. Results: We found that the initial analysis performed by Shi et al. did not take into account autofluorescence of the Full Moon BioSystems slide, which led to a grossly distorted microarray scanner response. Our analysis revealed that a power-law function, which explicitly accounts for the slide autofluorescence, perfectly described the relationship between signal intensities and fluorophore quantities. Conclusions: Microarray scanners respond in a much less distorted fashion than was reported by Shi et al. Full Moon BioSystems calibration slides are inadequate for performing calibration. We recommend against using these slides.

  20. Energy calibration via correlation

    CERN Document Server

    Maier, Daniel

    2015-01-01

    The main task of an energy calibration is to find a relation between pulse-height values and the corresponding energies. Doing this for each pulse-height channel individually requires an elaborate input spectrum with excellent counting statistics and a sophisticated data analysis. This work presents an easy-to-handle energy calibration process which can operate reliably on calibration measurements with low counting statistics. The method uses a parameter-based model for the energy calibration and infers the optimal parameters of the model by finding the best correlation between the measured pulse-height spectrum and multiple synthetic pulse-height spectra which are constructed with different sets of calibration parameters. A CdTe-based semiconductor detector and the line emissions of a 241Am source were used to test the performance of the correlation method in terms of systematic calibration errors for different counting statistics. Up to energies of 60 keV systematic errors were measured to be le...
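
    The core of the correlation approach can be sketched in a few lines: scan candidate calibration parameters, build a synthetic pulse-height spectrum for each, and keep the pair that correlates best with the measurement. The 241Am line energies below are standard values, but the detector model (pure Gaussian peaks of fixed width) and the parameter ranges are simplifying assumptions.

        import numpy as np

        LINES_KEV = np.array([13.9, 17.8, 26.3, 59.5])   # 241Am emissions (approx.)
        CHANNELS = np.arange(1024)

        def synthetic(gain, offset, fwhm_ch=6.0):
            """Synthetic spectrum: Gaussian peaks at channel = (E - offset)/gain."""
            centers = (LINES_KEV - offset) / gain
            sigma = fwhm_ch / 2.355
            return sum(np.exp(-0.5 * ((CHANNELS - c) / sigma) ** 2) for c in centers)

        def calibrate(measured):
            best = (None, -1.0)
            for gain in np.linspace(0.04, 0.08, 81):       # keV per channel
                for offset in np.linspace(-2.0, 2.0, 41):  # keV
                    r = np.corrcoef(measured, synthetic(gain, offset))[0, 1]
                    if r > best[1]:
                        best = ((gain, offset), r)
            return best

        # Fake low-statistics "measurement" with known parameters, then recovered:
        truth = synthetic(0.06, 0.5)
        noisy = np.random.default_rng(1).poisson(truth * 200 + 1)
        params, r = calibrate(noisy)
        print("recovered (gain, offset):", params, "corr:", round(r, 4))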

  1. Calibrating nacelle lidars

    Energy Technology Data Exchange (ETDEWEB)

    Courtney, M.

    2013-01-15

    Nacelle mounted, forward looking wind lidars are beginning to be used to provide reference wind speed measurements for the power performance testing of wind turbines. In such applications, a formal calibration procedure with a corresponding uncertainty assessment will be necessary. This report presents four concepts for performing such a nacelle lidar calibration. Of the four methods, two are found to be immediately relevant and are pursued in some detail. The first of these is a line of sight calibration method in which both lines of sight (for a two beam lidar) are individually calibrated by accurately aligning the beam to pass close to a reference wind speed sensor. A testing procedure is presented, reporting requirements outlined and the uncertainty of the method analysed. It is seen that the main limitation of the line of sight calibration method is the time required to obtain a representative distribution of radial wind speeds. An alternative method is to place the nacelle lidar on the ground and incline the beams upwards to bisect a mast equipped with reference instrumentation at a known height and range. This method will be easier and faster to implement and execute but the beam inclination introduces extra uncertainties. A procedure for conducting such a calibration is presented and initial indications of the uncertainties given. A discussion of the merits and weaknesses of the two methods is given together with some proposals for the next important steps to be taken in this work. (Author)

  2. A simple methodology for characterization of germanium coaxial detectors by using Monte Carlo simulation and evolutionary algorithms.

    Science.gov (United States)

    Guerra, J G; Rubiano, J G; Winter, G; Guerra, A G; Alonso, H; Arnedo, M A; Tejera, A; Gil, J M; Rodríguez, R; Martel, P; Bolivar, J P

    2015-11-01

    The determination in a sample of the activity concentration of a specific radionuclide by gamma spectrometry requires knowing the full energy peak efficiency (FEPE) for the energy of interest. The difficulties related to the experimental calibration make it advisable to have alternative methods for FEPE determination, such as the simulation of the transport of photons in the crystal by the Monte Carlo method, which requires an accurate knowledge of the characteristics and geometry of the detector. The characterization process is mainly carried out by Canberra Industries Inc. using proprietary techniques and methodologies developed by that company. It is a costly procedure (due to shipping and to the cost of the process itself), and for some research laboratories an alternative in situ procedure can be very useful. The main goal of this paper is to find an alternative to this costly characterization process by establishing a method for optimizing the parameters characterizing the detector, through a computational procedure which could be reproduced at a standard research lab. This method consists in determining the detector geometric parameters by using Monte Carlo simulation in parallel with an optimization process, based on evolutionary algorithms, starting from a set of reference FEPEs determined experimentally or computationally. The proposed method has proven to be effective and simple to implement. It provides a set of characterization parameters which has been successfully validated for different source-detector geometries, and also for a wide range of environmental samples and certified materials.
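
    The optimization loop described here (an evolutionary search over detector parameters until simulated FEPEs match a reference set) can be mimicked with SciPy's differential evolution. In this hedged sketch a cheap analytic function stands in for the Monte Carlo transport run that a real characterization would perform per trial; the parameter names, bounds and model are invented.

        import numpy as np
        from scipy.optimize import differential_evolution

        ENERGIES = np.array([60., 122., 344., 662., 1173., 1332.])   # keV

        def toy_fepe(energy, radius_cm, length_cm, dead_layer_mm):
            # Analytic stand-in for a Monte Carlo efficiency calculation.
            vol = np.pi * radius_cm**2 * length_cm
            return vol * energy**-0.9 * np.exp(-dead_layer_mm * 30.0 / energy)

        TRUE = (3.1, 5.2, 0.7)                    # the "unknown" detector
        reference = toy_fepe(ENERGIES, *TRUE)     # would come from measurements

        def cost(params):
            model = toy_fepe(ENERGIES, *params)
            return np.sum((np.log(model) - np.log(reference)) ** 2)

        result = differential_evolution(
            cost, bounds=[(2, 4.5), (3, 8), (0.1, 1.5)], seed=0, tol=1e-10)
        print("recovered (radius, length, dead layer):", np.round(result.x, 3))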

  3. Establishing a NORM based radiation calibration facility.

    Science.gov (United States)

    Wallace, J

    2016-05-01

    An environmental radiation calibration facility has been constructed by the Radiation and Nuclear Sciences unit of Queensland Health at the Forensic and Scientific Services Coopers Plains campus in Brisbane. This facility consists of five low-density concrete pads, spiked with a NORM source, to simulate soil and effectively provide a number of semi-infinite uniformly distributed sources for improved energy-response calibrations of radiation equipment used in NORM measurements. The pads have been sealed with an environmental epoxy compound to restrict radon loss and so enhance the quality of secular equilibrium achieved. Monte Carlo models (MCNP), used to establish suitable design parameters and identify appropriate geometric correction factors linking the air kerma measured above these calibration pads to that predicted for an infinite plane using adjusted ICRU 53 data, are discussed. Use of these correction factors, as well as adjustments for cosmic radiation and for the impact of surrounding low levels of NORM in the soil, allows for good agreement between the radiation fields predicted and measured above the pads at both 0.15 m and 1 m. PMID:26921707

  4. Example of Monte Carlo uncertainty assessment in the field of radionuclide metrology

    Science.gov (United States)

    Cassette, Philippe; Bochud, François; Keightley, John

    2015-06-01

    This chapter presents possible uses and examples of Monte Carlo methods for the evaluation of uncertainties in the field of radionuclide metrology. The method is already well documented in GUM supplement 1, but here we present a more restrictive approach, where the quantities of interest calculated by the Monte Carlo method are estimators of the expectation and standard deviation of the measurand, and the Monte Carlo method is used to propagate the uncertainties of the input parameters through the measurement model. This approach is illustrated by an example of the activity calibration of a 103Pd source by liquid scintillation counting and the calculation of a linear regression on experimental data points. An electronic supplement presents some algorithms which may be used to generate random numbers with various statistical distributions, for the implementation of this Monte Carlo calculation method.
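
    The restrictive approach described here reduces to a few lines of code: sample the input quantities, push each draw through the measurement model, and report the mean and standard deviation of the output. The model and numbers below are illustrative placeholders, not those of the 103Pd calibration in the record.

        import numpy as np

        rng = np.random.default_rng(42)
        N = 1_000_000

        counts     = rng.normal(125_000, 400,   N)   # net counts (invented)
        live_time  = rng.normal(600.0,   0.5,   N)   # s
        efficiency = rng.normal(0.823,   0.006, N)   # counting efficiency

        activity = counts / (live_time * efficiency)  # measurement model (Bq)

        print(f"A = {activity.mean():.1f} Bq, u(A) = {activity.std(ddof=1):.1f} Bq")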

  5. Improvement of the WBC calibration of the Internal Dosimetry Laboratory of the CDTN/CNEN using MCNPX code

    Energy Technology Data Exchange (ETDEWEB)

    Guerra P, F.; Heeren de O, A. [Universidade Federal de Minas Gerais, Departamento de Engenharia Nuclear, Programa de Pos Graduacao em Ciencias e Tecnicas Nucleares, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil); Melo, B. M.; Lacerda, M. A. S.; Da Silva, T. A.; Ferreira F, T. C., E-mail: tcff01@gmail.com [Centro de Desenvolvimento da Tecnologia Nuclear, Programa de Pos Graduacao / CNEN, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil)

    2015-10-15

    The Plan of Radiological Protection licensed by the National Nuclear Energy Commission (CNEN) in Brazil includes the assessment of the risks of internal and external exposure by implementing a program of individual monitoring, which is responsible for controlling exposures and ensuring the maintenance of radiation safety. The Laboratory of Internal Dosimetry of the Center for Development of Nuclear Technology (LID/CDTN) is responsible for routine monitoring of internal contamination of Individuals Occupationally Exposed (IOEs). These are the IOEs involved in handling the {sup 18}F sources produced by the Unit for Research and Production of Radiopharmaceuticals, as well as workers from the Research Reactor TRIGA IPR-R1/CDTN subject to whole-body monitoring, or anyone at risk of accidental incorporation. The determination of photon-emitting radionuclides in the human body requires calibration of the counting geometries in order to obtain an efficiency curve. The calibration process normally makes use of physical phantoms containing certified activities of the radionuclides of interest. The objective of this project is the calibration of the WBC facility of the LID/CDTN using the BOMAB physical phantom and Monte Carlo simulations. Three steps were needed to complete the calibration process. First, the BOMAB was filled with a KCl solution and several measurements of the gamma-ray energy (1.46 MeV) emitted by {sup 40}K were performed. Second, simulations using the MCNPX code were performed to calculate the counting efficiency (Ce) for the BOMAB phantom model, and the results were compared with the measured Ce. Third, the modeled BOMAB phantom was used to calculate the Ce covering the energy range of interest. The measured and simulated results showed good agreement, within the expected ratio. (Author)
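
    The 40K step can be sketched as back-of-envelope arithmetic: the KCl mass fixes the expected 1.46 MeV emission rate, and the measured net count rate divided by that rate gives the counting efficiency Ce. The nuclear-data constants are standard approximate values; the phantom mass and count rate are invented.

        # Predict the 1.46 MeV emission rate of a KCl filling and turn a
        # measured net count rate into a counting efficiency.
        KG_KCL       = 10.0           # KCl in the phantom (assumed)
        K_FRACTION   = 39.1 / 74.55   # mass fraction of K in KCl
        SP_ACT_K     = 31.2           # Bq of 40K per gram of natural K (approx.)
        GAMMA_BRANCH = 0.1066         # 1.46 MeV gammas per 40K decay (approx.)

        emission_rate = KG_KCL * 1e3 * K_FRACTION * SP_ACT_K * GAMMA_BRANCH

        net_count_rate = 1.9          # counts/s in the 1.46 MeV peak (invented)
        efficiency = net_count_rate / emission_rate
        print(f"emission ~ {emission_rate:.0f} gamma/s, Ce ~ {efficiency:.2e}")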

  6. Parallelization of Monte Carlo codes MVP/GMVP

    Energy Technology Data Exchange (ETDEWEB)

    Nagaya, Yasunobu; Mori, Takamasa; Nakagawa, Masayuki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Sasaki, Makoto

    1998-03-01

    General-purpose Monte Carlo codes MVP/GMVP are well vectorized and thus enable us to perform high-speed Monte Carlo calculations. In order to achieve further speedups, we parallelized the codes on different types of parallel processing platforms. The platforms reported are a distributed-memory vector-parallel computer Fujitsu VPP500, a distributed-memory massively parallel computer Intel Paragon and a distributed-memory scalar-parallel computer Hitachi SR2201. In general, ideal speedup could be obtained for large-scale problems, but parallelization efficiency worsened as the batch size per processing element (PE) became smaller. (author)

  7. Self-optimizing Monte Carlo method for nuclear well logging simulation

    Science.gov (United States)

    Liu, Lianyan

    1997-09-01

    In order to increase the efficiency of Monte Carlo simulation for nuclear well logging problems, a new method has been developed for variance reduction. With this method, an importance map is generated as a by-product of the regular Monte Carlo calculation, and the importance map is later used to conduct splitting and Russian roulette for particle population control. By adopting a spatial mesh system which is independent of the physical geometrical configuration, the method allows superior user-friendliness. This new method is incorporated into the general-purpose Monte Carlo code MCNP4A through a patch file. Two nuclear well logging problems, a neutron porosity tool and a gamma-ray lithology density tool, are used to test the performance of this new method. The calculations are sped up over analog simulation by 120 and 2600 times, for the neutron porosity tool and for the gamma-ray lithology density log, respectively. The new method performs better than MCNP's cell-based weight window by a factor of 4-6, as measured by the converged figures of merit. An indirect comparison indicates that the new method also outperforms the AVATAR process for gamma-ray density tool problems. Even though it takes quite some time to generate a reasonable importance map from an analog run, a good initial map can create significant CPU time savings. This makes the method especially suitable for nuclear well logging problems, since one or several reference importance maps are usually available for a given tool. The study shows that the spatial mesh sizes should be chosen according to the mean free path. The overhead of the importance map generator is 6% and 14% for the neutron and gamma-ray cases, respectively. The learning ability towards a correct importance map is also demonstrated. Although false learning may happen, physical judgement with contributon maps can help diagnose it. Calibration and analysis are performed for the neutron tool and the gamma-ray tool. Due to the fact that a very

  8. Monte Carlo simulation of source-excited in vivo x-ray fluorescence measurements of heavy metals.

    Science.gov (United States)

    O'Meara, J M; Chettle, D R; McNeill, F E; Prestwich, W V; Svensson, C E

    1998-06-01

    This paper reports on the Monte Carlo simulation of in vivo x-ray fluorescence (XRF) measurements. Our model is an improvement on previously reported simulations in that it relies on a theoretical basis for modelling Compton momentum broadening as well as detector efficiency. Furthermore, this model is an accurate simulation of experimentally detected spectra when comparisons are made in absolute counts; preceding models have generally only achieved agreement with spectra normalized to unit area. Our code is sufficiently flexible to be applied to the investigation of numerous source-excited in vivo XRF systems. Thus far the simulation has been applied to the modelling of two different systems. The first application was the investigation of various aspects of a new in vivo XRF system, the measurement of uranium in bone with 57Co in a backscatter (approximately 180 degrees) geometry. The Monte Carlo simulation was critical in assessing the potential of applying XRF to the measurement of uranium in bone. Currently the Monte Carlo code is being used to evaluate a potential means of simplifying an established in vivo XRF system, the measurement of lead in bone with 57Co in a 90 degrees geometry. The results from these simulations may demonstrate that calibration procedures can be significantly simplified and subject dose may be reduced. As well as providing an excellent tool for optimizing designs of new systems and improving existing techniques, this model can be used in the investigation of the dosimetry of various XRF systems. Our simulation allows a detailed understanding of the numerous processes involved when heavy metal concentrations are measured in vivo with XRF. PMID:9651014

  9. Monte Carlo simulation of source-excited in vivo x-ray fluorescence measurements of heavy metals

    Science.gov (United States)

    O'Meara, J. M.; Chettle, D. R.; McNeill, F. E.; Prestwich, W. V.; Svensson, C. E.

    1998-06-01

    This paper reports on the Monte Carlo simulation of in vivo x-ray fluorescence (XRF) measurements. Our model is an improvement on previously reported simulations in that it relies on a theoretical basis for modelling Compton momentum broadening as well as detector efficiency. Furthermore, this model is an accurate simulation of experimentally detected spectra when comparisons are made in absolute counts; preceding models have generally only achieved agreement with spectra normalized to unit area. Our code is sufficiently flexible to be applied to the investigation of numerous source-excited in vivo XRF systems. Thus far the simulation has been applied to the modelling of two different systems. The first application was the investigation of various aspects of a new in vivo XRF system, the measurement of uranium in bone with 57Co in a backscatter (approximately 180 degrees) geometry. The Monte Carlo simulation was critical in assessing the potential of applying XRF to the measurement of uranium in bone. Currently the Monte Carlo code is being used to evaluate a potential means of simplifying an established in vivo XRF system, the measurement of lead in bone with 57Co in a 90 degrees geometry. The results from these simulations may demonstrate that calibration procedures can be significantly simplified and subject dose may be reduced. As well as providing an excellent tool for optimizing designs of new systems and improving existing techniques, this model can be used in the investigation of the dosimetry of various XRF systems. Our simulation allows a detailed understanding of the numerous processes involved when heavy metal concentrations are measured in vivo with XRF.

  10. HAWC Timing Calibration

    CERN Document Server

    Huentemeyer, Petra; Dingus, Brenda

    2009-01-01

    The High-Altitude Water Cherenkov (HAWC) Experiment is a second-generation high-sensitivity gamma-ray and cosmic-ray detector that builds on the experience and technology of the Milagro observatory. Like Milagro, HAWC utilizes the water Cherenkov technique to measure extensive air showers. Instead of a pond filled with water (as in Milagro), an array of closely packed water tanks is used. The event direction will be reconstructed using the times when the PMTs in each tank are triggered. Therefore, the timing calibration will be crucial for reaching an angular resolution as low as 0.25 degrees. We propose to use a laser calibration system, patterned after the calibration system in Milagro. Like Milagro, the HAWC optical calibration system will use ~1 ns laser light pulses. Unlike Milagro, the PMTs are optically isolated and require their own optical fiber calibration. For HAWC the laser light pulses will be directed through a series of optical fan-outs and fibers to illuminate the PMTs in approximately one half o...

  11. Calibration Under Uncertainty.

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton; Trucano, Timothy Guy

    2005-03-01

    This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
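
    The baseline formulation the report starts from, before any treatment of model error, is ordinary least-squares calibration. A minimal sketch with a synthetic exponential model (model form, data and starting point all invented):

        import numpy as np
        from scipy.optimize import least_squares

        def model(theta, x):
            return theta[0] * np.exp(-theta[1] * x) + theta[2]

        x_obs = np.linspace(0.0, 5.0, 25)
        rng = np.random.default_rng(3)
        y_obs = model([2.0, 0.8, 0.3], x_obs) + rng.normal(0.0, 0.05, x_obs.size)

        # Find the parameters minimising the squared misfit to the data.
        fit = least_squares(lambda th: model(th, x_obs) - y_obs, x0=[1.0, 1.0, 0.0])
        print("calibrated parameters:", np.round(fit.x, 3))

    Calibration under Uncertainty would additionally model the error in the simulator itself, e.g. via a Bayesian discrepancy term, rather than treating the model as exact.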

  12. Polarimetric Palsar Calibration

    Science.gov (United States)

    Touzi, R.; Shimada, M.

    2008-11-01

    Polarimetric PALSAR system parameters are assessed using data sets collected over various calibration sites. The data collected over the Amazonian forest permit validating the zero-Faraday-rotation hypothesis near the equator. The analysis of the Amazonian forest data and the response of the corner reflectors deployed during the PALSAR acquisitions lead to the conclusion that the antenna is highly isolated (better than -35 dB). These results are confirmed using data collected over the Sweden and Ottawa calibration sites. The 5-m-high trihedrals deployed at the Sweden calibration site by the Chalmers University of Technology permit accurate measurement of antenna parameters, and detection of a 2-3 degree Faraday rotation during day acquisition, whereas no Faraday rotation was noted during night acquisition. Small Faraday rotation angles (2-3 degrees) have been measured using acquisitions over the DLR Oberpfaffenhofen and Ottawa calibration sites. The presence of a small but still significant Faraday rotation (2-3 degrees) induces a corner-reflector return at the cross-polarizations HV and VH that should not be interpreted as the actual antenna cross-talk. The PALSAR antenna is highly isolated (better than -35 dB), and diagonal antenna distortion matrices (with zero cross-talk terms) can be used for accurate calibration of PALSAR polarimetric data.

  13. GTC Photometric Calibration

    Science.gov (United States)

    di Cesare, M. A.; Hammersley, P. L.; Rodriguez Espinosa, J. M.

    2006-06-01

    We are currently developing the calibration programme for GTC using techniques similar to those used for space telescope calibration (Hammersley et al. 1998, A&AS, 128, 207; Cohen et al. 1999, AJ, 117, 1864). We are planning to produce a catalogue of calibration stars which are suitable for a 10-m telescope. These sources will be non-variable and non-binary, and, if they are to be used in the infrared, free of infrared excesses. The GTC science instruments require photometric calibration between 0.35 and 2.5 microns. The instruments are: OSIRIS (Optical System for Imaging low Resolution Integrated Spectroscopy), ELMER and EMIR (Espectrógrafo Multiobjeto Infrarrojo) and the Acquisition and Guiding boxes (Di Césare, Hammersley, & Rodriguez Espinosa 2005, RevMexAA Ser. Conf., 24, 231). The catalogue will consist of 30 star fields distributed across the Northern Hemisphere. We will use fields containing sources over the range 12 to 22 magnitudes, spanning a wide range of spectral types (A to M) for the visible and near infrared. In the poster we show the method used for selecting these fields and present the analysis of the data on the first calibration fields observed.

  14. Proton Upset Monte Carlo Simulation

    Science.gov (United States)

    O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.

    2009-01-01

    The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.

  15. Methods for fitting of efficiency curves obtained by means of HPGe gamma rays spectrometers

    International Nuclear Information System (INIS)

    The present work describes a few methodologies developed for fitting efficiency curves obtained by means of an HPGe gamma-ray spectrometer. The interpolated values were determined by simple polynomial fitting, and by polynomial fitting of the ratio between the experimental peak efficiency and the total efficiency calculated by the Monte Carlo technique, as a function of gamma-ray energy. Moreover, non-linear fitting has been performed using a segmented polynomial function and applying the Gauss-Marquardt method. For obtaining the peak area, different methodologies were developed to estimate the background area under the peak. This information was obtained by numerical integration or by using analytical functions associated with the background. One non-calibrated radioactive source was included in the efficiency curve in order to provide additional calibration points. As a by-product, it was possible to determine the activity of this non-calibrated source. For all fittings developed in the present work the covariance matrix methodology was used, which is an essential procedure in order to give a complete description of the partial uncertainties involved. (author)
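
    A common concrete instance of such a fit is a low-order polynomial in log-log space with the parameter covariance matrix retained, so that every interpolated efficiency carries a propagated uncertainty. The energies, efficiencies and flat 3% uncertainties below are invented for illustration.

        import numpy as np

        E   = np.array([ 60., 122., 344., 662., 1173., 1332.])   # keV
        eff = np.array([0.052, 0.067, 0.031, 0.018, 0.011, 0.010])
        u   = 0.03 * eff                                          # 3% uncertainties

        deg = 2
        # Weighted polynomial fit of ln(eff) vs ln(E), keeping the covariance.
        coef, cov = np.polyfit(np.log(E), np.log(eff), deg,
                               w=1.0 / (u / eff), cov=True)

        def eff_interp(energy_kev):
            le = np.log(energy_kev)
            p = np.polyval(coef, le)
            J = np.array([le**k for k in range(deg, -1, -1)])     # d p / d coef
            var = J @ cov @ J                                     # propagated variance
            return np.exp(p), np.exp(p) * np.sqrt(var)            # value, 1 sigma

        print(eff_interp(500.0))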

  16. Application of biasing techniques to the contributon Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Dubi, A.; Gerstl, S.A.W.

    1980-01-01

    Recently, a new Monte Carlo method called the contributon Monte Carlo method was developed. The method is based on the theory of contributons, and uses a new recipe for estimating target responses by a volume integral over the contributon current. The analog features of the new method were discussed in previous publications. The application of some biasing methods to the new contributon scheme is examined here. A theoretical model is developed that enables an analytic prediction of the benefit to be expected when these biasing schemes are applied to both the contributon method and regular Monte Carlo. This model is verified by a variety of numerical experiments and is shown to yield satisfactory results, especially for deep-penetration problems. Other considerations regarding the efficient use of the new method are also discussed, and remarks are made as to the application of other biasing methods. 14 figures, 1 table.

  17. VARIATIONAL MONTE-CARLO APPROACH FOR ARTICULATED OBJECT TRACKING

    Directory of Open Access Journals (Sweden)

    Kartik Dwivedi

    2013-12-01

    In this paper, we describe a novel variational Monte Carlo approach for modeling and tracking body parts of articulated objects. An articulated object (human target) is represented as a dynamic Markov network of its different constituent parts. The proposed approach combines local information from individual body parts with spatial constraints imposed by neighboring parts. The movement of the parts of the articulated body is modeled with local displacement information from the Markov network and global information from neighboring parts. We explore the effect of certain model parameters (including the number of parts tracked, the number of Monte Carlo cycles, etc.) on system accuracy, and show that our variational Monte Carlo approach achieves better efficiency and effectiveness compared to other methods on a number of real-time video datasets containing single targets.

  18. Multiple-time-stepping generalized hybrid Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Escribano, Bruno, E-mail: bescribano@bcamath.org [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); Akhmatskaya, Elena [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao (Spain); Reich, Sebastian [Universität Potsdam, Institut für Mathematik, D-14469 Potsdam (Germany); Azpiroz, Jon M. [Kimika Fakultatea, Euskal Herriko Unibertsitatea (UPV/EHU) and Donostia International Physics Center (DIPC), P.K. 1072, Donostia (Spain)

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications of the method not only lead to better performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo and using the natural stochasticity offered by the generalized hybrid Monte Carlo leads to improved stability of MTS and allows larger step sizes in the simulation of complex systems.

  19. A continuation multilevel Monte Carlo algorithm

    KAUST Repository

    Collier, Nathan

    2014-09-05

    We propose a novel Continuation Multi Level Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending when the required error tolerance is satisfied. CMLMC assumes discretization hierarchies that are defined a priori for each level and are geometrically refined across levels. The actual choice of computational work across levels is based on parametric models for the average cost per sample and the corresponding variance and weak error. These parameters are calibrated using Bayesian estimation, taking particular notice of the deepest levels of the discretization hierarchy, where only a few realizations are available to produce the estimates. The resulting CMLMC estimator exhibits a non-trivial splitting between bias and statistical contributions. We also show the asymptotic normality of the statistical error in the MLMC estimator and justify in this way our error estimate, which allows prescribing both the required accuracy and the confidence in the final result. Numerical results substantiate the above results and illustrate the corresponding computational savings in examples that are described in terms of differential equations either driven by random measures or with random coefficients. © 2014, Springer Science+Business Media Dordrecht.
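
    The telescoping sum at the heart of (C)MLMC is easy to show in miniature: each level estimates only the difference between two discretizations driven by the same randomness. The toy payoff and the fixed, geometrically decaying sample sizes below are invented; CMLMC's contribution is precisely to choose the levels and sample sizes adaptively from calibrated cost, variance and bias models.

        import numpy as np

        rng = np.random.default_rng(7)

        def level_difference(level, n):
            """Correlated fine/coarse payoffs P_l - P_{l-1} from shared noise."""
            h = 2.0 ** -level
            w = rng.normal(0.0, 1.0, n)                # shared randomness
            fine = np.sin(w) + h * w**2                # toy discretised quantity
            coarse = np.sin(w) + 2 * h * w**2 if level > 0 else np.zeros(n)
            return fine - coarse

        L = 6
        N = [200_000 // 4**l + 100 for l in range(L + 1)]  # geometric decay
        estimate = sum(level_difference(l, N[l]).mean() for l in range(L + 1))
        print("MLMC estimate of E[P]:", round(estimate, 5))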

  20. Monte Carlo Simulation of River Meander Modelling

    Science.gov (United States)

    Posner, A. J.; Duan, J. G.

    2010-12-01

    This study first compares the first-order analytical solutions for the flow field by Ikeda et al. (1981) and Johanesson and Parker (1989b). Ikeda et al.'s (1981) linear bank erosion model was implemented to predict the rate of bank erosion, in which the bank erosion coefficient is treated as a stochastic variable that varies with physical properties of the bank (e.g. cohesiveness, stratigraphy, vegetation density). The developed model was used to predict the evolution of meandering planforms. The modeling results were then analyzed and compared to the observed data. Since the migration of a meandering channel consists of downstream translation, lateral expansion, and downstream or upstream rotation, several measures are formulated in order to determine which of the resulting planforms is closest to the experimentally measured one. Results from the deterministic model depend strongly on the calibrated erosion coefficient. Since field measurements are always limited, the stochastic model yielded more realistic predictions of meandering planform evolution. Due to the random nature of the bank erosion coefficient, the meandering planform evolution is a stochastic process that can only be accurately predicted by a stochastic model. (Figure: quasi-2D Ikeda (1989) flow solution with Monte Carlo simulation of the bank erosion coefficient.)

  1. Monte Carlo simulation of a clearance box monitor used for nuclear power plant decommissioning.

    Science.gov (United States)

    Bochud, François O; Laedermann, Jean-Pascal; Bailat, Claude J; Schuler, Christoph

    2009-05-01

    When decommissioning a nuclear facility it is important to be able to estimate activity levels of potentially radioactive samples and compare them with clearance values defined by regulatory authorities. This paper presents a method of calibrating a clearance box monitor based on practical experimental measurements and Monte Carlo simulations. Adjusting the simulation to experimental data obtained using a simple point source permits the computation of absolute calibration factors for more complex geometries, with an accuracy of slightly more than 20%. The uncertainty of the calibration factor can be improved to about 10% when the simulation is used relatively, in direct comparison with a measurement performed in the same geometry but with another nuclide. The simulation can also be used to validate the experimental calibration procedure when the sample is supposed to be homogeneous but the calibration factor is derived from a plate phantom. For more realistic geometries, like a small gravel dumpster, Monte Carlo simulation shows that the calibration factor obtained with a larger homogeneous phantom is correct to within about 20%, if sample density is taken as the influencing parameter. Finally, simulation can be used to estimate the effect of a contamination hotspot. The research supporting this paper shows that activity could be largely underestimated in the event of a centrally located hotspot and overestimated for a peripherally located hotspot if the sample is assumed to be homogeneously contaminated. This demonstrates the usefulness of being able to complement experimental methods with Monte Carlo simulations in order to estimate calibration factors that cannot be directly measured because of a lack of available material or specific geometries. PMID:19359851
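
    The scaling step described above amounts to one line of arithmetic: anchor the simulated efficiency ratio to a measured point-source calibration factor so that the absolute simulation-to-experiment scale cancels. All numbers below are placeholders, not values from the paper.

        # Transfer a measured point-source calibration factor to a complex
        # geometry via the simulated efficiency ratio.
        cf_point_measured = 12.4   # cps/kBq, measured with a point source (invented)
        sim_point         = 0.081  # simulated detection efficiency, point source
        sim_gravel        = 0.034  # simulated efficiency, gravel-dumpster geometry

        cf_gravel = cf_point_measured * sim_gravel / sim_point
        print(f"calibration factor, gravel geometry ~ {cf_gravel:.2f} cps/kBq")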

  2. Individual dosimetry and calibration

    International Nuclear Information System (INIS)

    In 1995 both the Individual Dosimetry and Calibration Sections operated under a status quo and concentrated fully on the routine part of their work. Nevertheless, the machine for printing the bar codes that are glued onto the film holders, and hence identify people entering high-radiation areas, was put into operation, and most of the holders were equipped with the new identification. As for the Calibration Section, the project for the new source control system, realized by the Technical Support Section, was somewhat accelerated

  3. Quantum Monte Carlo for atoms and molecules

    International Nuclear Information System (INIS)

    The diffusion quantum Monte Carlo with fixed nodes (QMC) approach has been employed in studying energy eigenstates for 1--4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H2, LiH, Li2, and H2O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li2, and H2O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schroedinger equation were studied extensively. These parameters include the time step, renormalization time and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90--100% of the correlation energy is obtained) have been computed with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions

  4. Information Geometry and Sequential Monte Carlo

    CERN Document Server

    Sim, Aaron; Stumpf, Michael P H

    2012-01-01

    This paper explores the application of methods from information geometry to the sequential Monte Carlo (SMC) sampler. In particular the Riemannian manifold Metropolis-adjusted Langevin algorithm (mMALA) is adapted for the transition kernels in SMC. Similar to its function in Markov chain Monte Carlo methods, the mMALA is a fully adaptable kernel which allows for efficient sampling of high-dimensional and highly correlated parameter spaces. We set up the theoretical framework for its use in SMC with a focus on the application to the problem of sequential Bayesian inference for dynamical systems as modelled by sets of ordinary differential equations. In addition, we argue that defining the sequence of distributions on geodesics optimises the effective sample sizes in the SMC run. We illustrate the application of the methodology by inferring the parameters of simulated Lotka-Volterra and Fitzhugh-Nagumo models. In particular we demonstrate that compared to employing a standard adaptive random walk kernel, the SM...

  5. Synchronous Parallel Kinetic Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Martínez, E; Marian, J; Kalos, M H

    2006-12-14

    A novel parallel kinetic Monte Carlo (kMC) algorithm formulated on the basis of perfect time synchronicity is presented. The algorithm provides an exact generalization of any standard serial kMC model and is trivially implemented in parallel architectures. We demonstrate the mathematical validity and parallel performance of the method by solving several well-understood problems in diffusion.
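
    For context, the standard serial residence-time kMC loop that the synchronous parallel algorithm generalizes looks as follows; the two event classes, their rates and the state update are invented for illustration.

        import math, random

        # Serial kMC: pick an event with probability proportional to its rate,
        # then advance time by an exponentially distributed increment.
        rates = {"hop": 5.0, "swap": 0.5}          # event classes and rates (1/s)
        state, t = 0, 0.0
        random.seed(2)

        for _ in range(10):
            total = sum(rates.values())
            r = random.random() * total            # select an event class
            event = "hop" if r < rates["hop"] else "swap"
            state += 1 if event == "hop" else -1   # toy state update
            t += -math.log(random.random()) / total  # exponential time step
            print(f"t = {t:7.3f} s  event = {event:4s}  state = {state}")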

  6. Monte Carlo Particle Lists: MCPL

    CERN Document Server

    Kittelmann, Thomas; Knudsen, Erik B; Willendrup, Peter; Cai, Xiao Xiao; Kanaki, Kalliopi

    2016-01-01

    A binary format with lists of particle state information, for interchanging particles between various Monte Carlo simulation applications, is presented. Portable C code for file manipulation is made available to the scientific community, along with converters and plugins for several popular simulation packages.

  7. Muon Calibration at SoLid

    CERN Document Server

    Saunders, Daniel

    2016-01-01

    The SoLid experiment aims to make a measurement of very-short-distance neutrino oscillations using reactor antineutrinos. Key to its sensitivity are the experiment's high spatial and energy resolution, combined with a very suitable reactor source and efficient background rejection. The fine segmentation of the detector (cubes of side 5 cm), and the ability to resolve signals in space and time, give SoLid the capability to track cosmic muons. In principle a source of background, these turn into a valuable calibration source if they can be cleanly identified. This work presents the first energy calibration results, using cosmic muons, of the 288 kg SoLid prototype SM1. This includes the methodology of tracking at SoLid, cosmic-ray angular analyses at the reactor site, estimates of the time resolution, and calibrations at the cube level.

  8. Hierarchical Bayesian Data Analysis in Radiometric SAR System Calibration: A Case Study on Transponder Calibration with RADARSAT-2 Data

    Directory of Open Access Journals (Sweden)

    Björn J. Döring

    2013-12-01

    A synthetic aperture radar (SAR) system requires external absolute calibration so that radiometric measurements can be exploited in numerous scientific and commercial applications. Besides estimating a calibration factor, metrological standards also demand the derivation of a respective calibration uncertainty. This uncertainty is currently not systematically determined. Here for the first time it is proposed to use hierarchical modeling and Bayesian statistics as a consistent method for handling and analyzing the hierarchical data typically acquired during external calibration campaigns. Through the use of Markov chain Monte Carlo simulations, a joint posterior probability can be conveniently derived from measurement data despite the necessary grouping of data samples. The applicability of the method is demonstrated through a case study: The radar reflectivity of DLR’s new C-band Kalibri transponder is derived through a series of RADARSAT-2 acquisitions and a comparison with reference point targets (corner reflectors). The systematic derivation of calibration uncertainties is seen as an important step toward traceable radiometric calibration of synthetic aperture radars.
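
    At its simplest, the Bayesian step can be sketched with a Metropolis sampler for a single calibration factor given a few point-target measurements; real use, as in the paper, adds the hierarchical grouping structure. Data, prior, measurement spread and proposal width below are all invented.

        import numpy as np

        rng = np.random.default_rng(5)
        obs = np.array([35.2, 35.6, 34.9, 35.4, 35.1])   # dB, invented measurements
        sigma = 0.3                                       # assumed measurement std (dB)

        def log_post(mu):                                 # flat prior on mu
            return -0.5 * np.sum((obs - mu) ** 2) / sigma**2

        mu, chain = 30.0, []
        for _ in range(20_000):
            prop = mu + rng.normal(0.0, 0.2)              # random-walk proposal
            if np.log(rng.random()) < log_post(prop) - log_post(mu):
                mu = prop                                 # Metropolis accept
            chain.append(mu)

        post = np.array(chain[5000:])                     # drop burn-in
        print(f"calibration factor: {post.mean():.2f} +/- {post.std():.2f} dB")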

  9. Entropic calibration revisited

    Energy Technology Data Exchange (ETDEWEB)

    Brody, Dorje C. [Blackett Laboratory, Imperial College, London SW7 2BZ (United Kingdom)]. E-mail: d.brody@imperial.ac.uk; Buckley, Ian R.C. [Centre for Quantitative Finance, Imperial College, London SW7 2AZ (United Kingdom); Constantinou, Irene C. [Blackett Laboratory, Imperial College, London SW7 2BZ (United Kingdom); Meister, Bernhard K. [Blackett Laboratory, Imperial College, London SW7 2BZ (United Kingdom)

    2005-04-11

    The entropic calibration of the risk-neutral density function is effective in recovering the strike dependence of options, but encounters difficulties in determining the relevant greeks. By use of put-call reversal we apply the entropic method to the time reversed economy, which allows us to obtain the spot price dependence of options and the relevant greeks.

  10. LOFAR Facet Calibration

    Science.gov (United States)

    van Weeren, R. J.; Williams, W. L.; Hardcastle, M. J.; Shimwell, T. W.; Rafferty, D. A.; Sabater, J.; Heald, G.; Sridhar, S. S.; Dijkema, T. J.; Brunetti, G.; Brüggen, M.; Andrade-Santos, F.; Ogrean, G. A.; Röttgering, H. J. A.; Dawson, W. A.; Forman, W. R.; de Gasperin, F.; Jones, C.; Miley, G. K.; Rudnick, L.; Sarazin, C. L.; Bonafede, A.; Best, P. N.; Bîrzan, L.; Cassano, R.; Chyży, K. T.; Croston, J. H.; Ensslin, T.; Ferrari, C.; Hoeft, M.; Horellou, C.; Jarvis, M. J.; Kraft, R. P.; Mevius, M.; Intema, H. T.; Murray, S. S.; Orrú, E.; Pizzo, R.; Simionescu, A.; Stroe, A.; van der Tol, S.; White, G. J.

    2016-03-01

    LOFAR, the Low-Frequency Array, is a powerful new radio telescope operating between 10 and 240 MHz. LOFAR allows detailed sensitive high-resolution studies of the low-frequency radio sky. At the same time LOFAR also provides excellent short baseline coverage to map diffuse extended emission. However, producing high-quality deep images is challenging due to the presence of direction-dependent calibration errors, caused by imperfect knowledge of the station beam shapes and the ionosphere. Furthermore, the large data volume and presence of station clock errors present additional difficulties. In this paper we present a new calibration scheme, which we name facet calibration, to obtain deep high-resolution LOFAR High Band Antenna images using the Dutch part of the array. This scheme solves and corrects the direction-dependent errors in a number of facets that cover the observed field of view. Facet calibration provides close to thermal noise limited images for a typical 8 hr observing run at ∼5 arcsec resolution, meeting the specifications of the LOFAR Tier-1 northern survey.

  11. LOFAR facet calibration

    CERN Document Server

    van Weeren, R J; Hardcastle, M J; Shimwell, T W; Rafferty, D A; Sabater, J; Heald, G; Sridhar, S S; Dijkema, T J; Brunetti, G; Brüggen, M; Andrade-Santos, F; Ogrean, G A; Röttgering, H J A; Dawson, W A; Forman, W R; de Gasperin, F; Jones, C; Miley, G K; Rudnick, L; Sarazin, C L; Bonafede, A; Best, P N; Bîrzan, L; Cassano, R; Chyży, K T; Croston, J H; Ensslin, T; Ferrari, C; Hoeft, M; Horellou, C; Jarvis, M J; Kraft, R P; Mevius, M; Intema, H T; Murray, S S; Orrú, E; Pizzo, R; Simionescu, A; Stroe, A; van der Tol, S; White, G J

    2016-01-01

    LOFAR, the Low-Frequency Array, is a powerful new radio telescope operating between 10 and 240 MHz. LOFAR allows detailed sensitive high-resolution studies of the low-frequency radio sky. At the same time LOFAR also provides excellent short baseline coverage to map diffuse extended emission. However, producing high-quality deep images is challenging due to the presence of direction-dependent calibration errors, caused by imperfect knowledge of the station beam shapes and the ionosphere. Furthermore, the large data volume and presence of station clock errors present additional difficulties. In this paper we present a new calibration scheme, which we name facet calibration, to obtain deep high-resolution LOFAR High Band Antenna images using the Dutch part of the array. This scheme solves and corrects the direction-dependent errors in a number of facets that cover the observed field of view. Facet calibration provides close to thermal noise limited images for a typical 8 hr observing run at ∼5 arcsec resolu...

  12. Calibration of farmer dosemeters

    International Nuclear Information System (INIS)

    The Farmer Dosemeters of Atomic Energy Medical Centre (AEMC) Jamshoro were calibrated in the Secondary Standard Dosimetry Laboratory (SSDL) at PINSTECH, using the NPL Secondary Standard Therapy level X-ray exposure meter. The results are presented in this report. (authors)

  13. Calibration Of Oxygen Monitors

    Science.gov (United States)

    Zalenski, M. A.; Rowe, E. L.; Mcphee, J. R.

    1988-01-01

    Readings corrected for temperature, pressure, and humidity of air. Program for handheld computer developed to ensure accuracy of oxygen monitors in the National Transonic Facility, where liquid nitrogen is stored. Calibration values, determined daily, based on entries of data on barometric pressure, temperature, and relative humidity. Output provided directly in millivolts.

  14. Commodity-Free Calibration

    Science.gov (United States)

    2008-01-01

    Commodity-free calibration is a reaction rate calibration technique that does not require the addition of any commodities. This technique is a specific form of the reaction rate technique, where all of the necessary reactants, other than the sample being analyzed, are either inherent in the analyzing system or specifically added or provided to the system for a reason other than calibration. After introduction, the component of interest is exposed to other reactants or flow paths already present in the system. The instrument detector records one of the following to determine the rate of reaction: the increase in the response of the reaction product, a decrease in the signal of the analyte response, or a decrease in the signal from the inherent reactant. With these data, the initial concentration of the analyte is calculated. This type of system can analyze and calibrate simultaneously, reduce the risk of false positives and exposure to toxic vapors, and improve accuracy. Moreover, having an excess of the reactant already present in the system eliminates the need to add commodities, which further reduces cost, logistic problems, and potential contamination. Also, the calculations involved can be simplified by comparison to those of the reaction rate technique. We conducted tests with hypergols as an initial investigation into the feasibility of the technique.

  15. Measurement System & Calibration report

    DEFF Research Database (Denmark)

    Kock, Carsten Weber; Vesth, Allan

    This Measurement System & Calibration report describes DTU's measurement system installed at a specific wind turbine. A major part of the sensors has been installed by others (see [1]); the rest of the sensors have been installed by DTU. The results of the measurements, described in this report...

  16. Calibration with Absolute Shrinkage

    DEFF Research Database (Denmark)

    Øjelund, Henrik; Madsen, Henrik; Thyregod, Poul

    2001-01-01

    In this paper, penalized regression using the L-1 norm on the estimated parameters is proposed for chemometric calibration. The algorithm is of the lasso type, introduced by Tibshirani in 1996 as a linear regression method with a bound on the absolute length of the parameters, but a modification...
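
    A minimal sketch of this kind of L1-penalized calibration, using scikit-learn's Lasso on synthetic stand-ins for chemometric spectra (the data, penalty value and sparsity pattern are illustrative, not those of the paper):

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 200))            # simulated spectra (samples x channels)
    beta = np.zeros(200)
    beta[[10, 50, 120]] = [1.5, -2.0, 0.7]    # only a few informative channels
    y = X @ beta + 0.05 * rng.normal(size=40)

    model = Lasso(alpha=0.05).fit(X, y)       # L1 penalty enforces sparse coefficients
    print("non-zero coefficients:", np.flatnonzero(model.coef_).size)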

  17. NVLAP calibration laboratory program

    Energy Technology Data Exchange (ETDEWEB)

    Cigler, J.L.

    1993-12-31

    This paper presents an overview of the progress up to April 1993 in the development of the Calibration Laboratories Accreditation Program within the framework of the National Voluntary Laboratory Accreditation Program (NVLAP) at the National Institute of Standards and Technology (NIST).

  18. Pleiades Absolute Calibration : Inflight Calibration Sites and Methodology

    Science.gov (United States)

    Lachérade, S.; Fourest, S.; Gamet, P.; Lebègue, L.

    2012-07-01

    In-flight calibration of space sensors once in orbit is a decisive step to be able to fulfil the mission objectives. This article presents the methods of the in-flight absolute calibration processed during the commissioning phase. Four in-flight calibration methods are used: absolute calibration, cross-calibration with reference sensors such as PARASOL or MERIS, multi-temporal monitoring and inter-band calibration. These algorithms are based on acquisitions over natural targets such as African deserts, Antarctic sites, La Crau (automatic calibration station) and oceans (calibration over molecular scattering), as well as new extra-terrestrial sites such as the Moon and selected stars. After an overview of the instrument and a description of the calibration sites, it is pointed out how each method is able to address one or several aspects of the calibration. We focus on how these methods complement each other in their operational use, and how they help to build a coherent set of information that addresses all aspects of in-orbit calibration. Finally, we present the perspectives that the high level of agility of PLEIADES offers for the improvement of its calibration and a better characterization of the calibration sites.

  19. Mercury CEM Calibration

    Energy Technology Data Exchange (ETDEWEB)

    John F. Schabron; Joseph F. Rovani; Susan S. Sorini

    2007-03-31

    The Clean Air Mercury Rule (CAMR) which was published in the Federal Register on May 18, 2005, requires that calibration of mercury continuous emissions monitors (CEMs) be performed with NIST-traceable standards. Western Research Institute (WRI) is working closely with the Electric Power Research Institute (EPRI), the National Institute of Standards and Technology (NIST), and the Environmental Protection Agency (EPA) to facilitate the development of the experimental criteria for a NIST traceability protocol for dynamic elemental mercury vapor generators. The traceability protocol will be written by EPA. Traceability will be based on the actual analysis of the output of each calibration unit at several concentration levels ranging from about 2-40 ug/m{sup 3}, and this analysis will be directly traceable to analyses by NIST using isotope dilution inductively coupled plasma/mass spectrometry (ID ICP/MS) through a chain of analyses linking the calibration unit in the power plant to the NIST ID ICP/MS. Prior to this project, NIST did not provide a recommended mercury vapor pressure equation or list mercury vapor pressure in its vapor pressure database. The NIST Physical and Chemical Properties Division in Boulder, Colorado was subcontracted under this project to study the issue in detail and to recommend a mercury vapor pressure equation that the vendors of mercury vapor pressure calibration units can use to calculate the elemental mercury vapor concentration in an equilibrium chamber at a particular temperature. As part of this study, a preliminary evaluation of calibration units from five vendors was made. The work was performed by NIST in Gaithersburg, MD and Joe Rovani from WRI who traveled to NIST as a Visiting Scientist.
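
    As an illustration of the vapour-pressure arithmetic at stake, the sketch below combines an Antoine-type correlation with the ideal gas law to estimate the saturated elemental-mercury concentration in an equilibrium chamber. The correlation constants are rough illustrative values, not the NIST-recommended equation produced by this project, and real calibration units dilute this saturated vapour down to the 2-40 ug/m3 working range.

    import math

    def hg_concentration_ug_m3(temp_k):
        # Illustrative Antoine-type correlation, log10(P/Pa) ~ 10.12 - 3190/T;
        # replace with the NIST-recommended equation for real work.
        p_pa = 10.0 ** (10.12 - 3190.0 / temp_k)
        M = 200.59e-3                       # kg/mol, molar mass of mercury
        R = 8.314                           # J/(mol K)
        c_kg_m3 = p_pa * M / (R * temp_k)   # ideal gas law: c = P*M/(R*T)
        return c_kg_m3 * 1e9                # kg/m3 -> ug/m3

    print(f"{hg_concentration_ug_m3(293.15):.0f} ug/m3 at 20 C")  # roughly 1.4e4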

  20. Reduced Calibration Curve for Proton Computed Tomography

    Science.gov (United States)

    Yevseyeva, Olga; de Assis, Joaquim; Evseev, Ivan; Schelin, Hugo; Paschuk, Sergei; Milhoretto, Edney; Setti, João; Díaz, Katherin; Hormaza, Joel; Lopes, Ricardo

    2010-05-01

    The pCT deals with relatively thick targets like the human head or trunk. Thus, the fidelity of pCT as a tool for proton therapy planning depends on the accuracy of the physical formulas used for proton interaction with thick absorbers. Although the actual overall accuracy of the proton stopping power in the Bethe-Bloch domain is about 1%, the analytical calculations and the Monte Carlo simulations with codes like TRIM/SRIM, MCNPX and GEANT4 do not agree with each other. An attempt to validate the codes against experimental data for thick absorbers brings some difficulties: only a few data sets are available, and they have been acquired at different initial proton energies and for different absorber materials. In this work we compare the results of our Monte Carlo simulations with existing experimental data in terms of a reduced calibration curve, i.e. the range-energy dependence normalized on the range scale by the full projected CSDA range for the given initial proton energy in a given material, taken from the NIST PSTAR database, and on the final proton energy scale by the given initial energy of the protons. This approach is almost energy and material independent. The results of our analysis are important for pCT development because the contradictions observed at arbitrarily low initial proton energies can now easily be scaled to typical pCT energies.
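
    A sketch of the normalization itself is given below; the residual-energy values are invented, and the CSDA range is the approximate PSTAR value for 200 MeV protons in water.

    import numpy as np

    E0 = 200.0              # MeV, initial proton energy
    csda_range_E0 = 25.9    # g/cm^2, approximate PSTAR CSDA range in water at E0

    # (absorber thickness traversed, residual proton energy); values invented
    depth = np.array([0.0, 5.0, 10.0, 15.0, 20.0])        # g/cm^2
    E_out = np.array([200.0, 170.0, 136.0, 97.0, 48.0])   # MeV

    reduced_depth = depth / csda_range_E0    # normalized on the range scale
    reduced_energy = E_out / E0              # normalized on the energy scale
    for d, e in zip(reduced_depth, reduced_energy):
        print(f"{d:.3f}  {e:.3f}")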

  1. Field calibration of cup anemometers

    DEFF Research Database (Denmark)

    Schmidt Paulsen, Uwe; Mortensen, Niels Gylling; Hansen, Jens Carsten;

    2007-01-01

    A field calibration method and results are described along with the experience gained with the method. The cup anemometers to be calibrated are mounted in a row on a 10-m high rig and calibrated in the free wind against a reference cup anemometer. The method has been reported [1] to improve the statistical bias on the data relative to calibrations carried out in a wind tunnel. The methodology is sufficiently accurate for calibration of cup anemometers used for wind resource assessments and provides a simple, reliable and cost-effective solution to cup anemometer calibration, especially suited for...
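
    The core of such a field calibration is an ordinary linear regression of the test anemometer's 10-minute means on the reference's; the sketch below uses synthetic data rather than measurements from the rig.

    import numpy as np

    rng = np.random.default_rng(1)
    v_ref = rng.uniform(4.0, 16.0, size=200)                   # m/s, reference means
    v_test = 0.98 * v_ref + 0.15 + rng.normal(0, 0.1, 200)     # test anemometer

    gain, offset = np.polyfit(v_ref, v_test, 1)                # calibration line
    residual = v_test - (gain * v_ref + offset)
    print(f"gain = {gain:.3f}, offset = {offset:.3f} m/s, "
          f"rms residual = {residual.std():.3f} m/s")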

  2. Hybrid algorithms in quantum Monte Carlo

    International Nuclear Information System (INIS)

    With advances in algorithms and growing computing power, quantum Monte Carlo (QMC) methods have become a leading contender for high-accuracy calculations of the electronic structure of realistic systems. The performance gain on recent HPC systems is largely driven by increasing parallelism: the number of compute cores of an SMP and the number of SMPs have been going up, as the Top500 list attests. However, the available memory as well as the communication and memory bandwidth per element have not kept pace with the increasing parallelism. This severely limits the applicability of QMC and the problem size it can handle. OpenMP/MPI hybrid programming provides applications with simple but effective solutions to overcome efficiency and scalability bottlenecks on large-scale clusters based on multi/many-core SMPs. We discuss the design and implementation of hybrid methods in QMCPACK and analyze its performance on current HPC platforms characterized by various memory and communication hierarchies.

  3. San Carlos Apache Tribe - Energy Organizational Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Rapp, James; Albert, Steve

    2012-04-01

    The San Carlos Apache Tribe (SCAT) was awarded $164,000 in late 2011 by the U.S. Department of Energy (U.S. DOE) Tribal Energy Program's "First Steps Toward Developing Renewable Energy and Energy Efficiency on Tribal Lands" Grant Program. This grant funded: the analysis and selection of preferred form(s) of tribal energy organization (this Energy Organization Analysis, hereinafter referred to as "EOA"); start-up staffing and other costs associated with the Phase 1 SCAT energy organization; an intern program; staff training; and tribal outreach and workshops regarding the new organization and SCAT energy programs and projects, including two annual tribal energy summits (2011 and 2012). This report documents the analysis and selection of preferred form(s) of a tribal energy organization.

  4. Calibration of thin-film dosimeters irradiated with 80-120 keV electrons

    DEFF Research Database (Denmark)

    Helt-Hansen, J.; Miller, A.; McEwen, M.;

    2004-01-01

    A method for calibration of thin-film dosimeters irradiated with 80-120 keV electrons has been developed. The method is based on measurement of dose with a totally absorbing graphite calorimeter, and conversion of dose in the graphite calorimeter to dose in the film dosimeter by Monte Carlo calcul...

  5. Calibration of the EDGES Receiver to Observe the Global 21-cm Signature from the Epoch of Reionization

    CERN Document Server

    Monsalve, Raul A; Bowman, Judd D; Mozdzen, Thomas J

    2016-01-01

    The EDGES experiment strives to detect the sky-average brightness temperature from the $21$-cm line emitted during the Epoch of Reionization (EoR) in the redshift range $14 \gtrsim z \gtrsim 6$. To probe this signal, EDGES conducts single-antenna measurements in the frequency range $\sim 100-200$ MHz from the Murchison Radio-astronomy Observatory in Western Australia. In this paper we describe the current strategy for calibration of the EDGES instrument and, in particular, of its receiver. The calibration involves measuring accurately modeled passive and active noise sources connected to the receiver input in place of the antenna. We model relevant uncertainties that arise during receiver calibration and propagate them to the calibrated antenna temperature using a Monte Carlo approach. Calibration effects are isolated by assuming that the sky foregrounds and the antenna beam are perfectly known. We find that if five polynomial terms are used to account for calibration systematics, most of the calibration ...
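
    A toy version of this Monte Carlo propagation is sketched below, with a simple linear calibration equation standing in for the full EDGES receiver model and invented uncertainty values.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000
    P_meas = 1500.0                          # uncalibrated power reading (a.u.)

    # Calibration parameters drawn from their assumed uncertainty distributions.
    gain = rng.normal(10.0, 0.02, n)
    offset = rng.normal(250.0, 1.0, n)

    T_ant = (P_meas - offset) / gain         # propagate through the toy model
    print(f"T_ant = {T_ant.mean():.2f} +/- {T_ant.std():.2f} K")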

  6. SKIRT: the design of a suite of input models for Monte Carlo radiative transfer simulations

    CERN Document Server

    Baes, Maarten

    2015-01-01

    The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. In contrast, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can...
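
    One such algorithm is plain rejection sampling from a 3D density. The sketch below shows the idea in Python with an illustrative exponential-disc density and bounding box (SKIRT itself is a C++ code, and its components are more general).

    import numpy as np

    def rho(x, y, z, h_r=1.0, h_z=0.2):
        """Illustrative exponential-disc density, maximum 1 at the origin."""
        r = np.hypot(x, y)
        return np.exp(-r / h_r - abs(z) / h_z)

    def sample_positions(n, box=5.0, rho_max=1.0, rng=np.random.default_rng(0)):
        out = []
        while len(out) < n:
            x, y, z = rng.uniform(-box, box, size=3)
            # Accept the candidate with probability rho / rho_max.
            if rng.uniform(0.0, rho_max) < rho(x, y, z):
                out.append((x, y, z))
        return np.array(out)

    pts = sample_positions(1000)
    print("mean cylindrical radius:", np.hypot(pts[:, 0], pts[:, 1]).mean())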

  7. Kinematics of multigrid Monte Carlo

    International Nuclear Information System (INIS)

    We study the kinematics of multigrid Monte Carlo algorithms by means of acceptance rates for nonlocal Metropolis update proposals. An approximation formula for acceptance rates is derived. We present a comparison of different coarse-to-fine interpolation schemes in free field theory, where the formula is exact. The predictions of the approximation formula for several interacting models are well confirmed by Monte Carlo simulations. The following rule is found: for a critical model with fundamental Hamiltonian H(φ), absence of critical slowing down can only be expected if the expansion of ⟨H(φ+ψ)⟩ in terms of the shift ψ contains no relevant (mass) term. We also introduce a multigrid update procedure for nonabelian lattice gauge theory and study the acceptance rates for gauge group SU(2) in four dimensions. (orig.)

  8. Polarimetric calibration of large mirrors

    CERN Document Server

    Ariste, A Lopez

    2015-01-01

    Aims: To propose a method for the polarimetric calibration of large astronomical mirrors that requires neither special optical devices nor knowledge of the exact polarization properties of the calibration target. Methods: We study the symmetries of the Mueller matrix of mirrors and exploit them for polarimetric calibration under the assumption that only the orientation of the linear polarization plane of the calibration target is known with certainty. Results: A method is proposed to calibrate the polarization effects of single astronomical mirrors by the observation of calibration targets with a known orientation of the linear polarization. We study the uncertainties of the method and the signal-to-noise ratios required for an acceptable calibration. We list astronomical targets ready for the method. We finally extend the method to the calibration of two or more mirrors, in particular to the case when they share the same incidence plane.

  9. The Calibration Reference Data System

    Science.gov (United States)

    Greenfield, P.; Miller, T.

    2016-07-01

    We describe a software architecture and implementation for using rules to determine which calibration files are appropriate for calibrating a given observation. This new system, the Calibration Reference Data System (CRDS), replaces what had been previously used for the Hubble Space Telescope (HST) calibration pipelines, the Calibration Database System (CDBS). CRDS will be used for the James Webb Space Telescope (JWST) calibration pipelines, and is currently being used for HST calibration pipelines. CRDS can be easily generalized for use in similar applications that need a rules-based system for selecting the appropriate item for a given dataset; we give some examples of such generalizations that will likely be used for JWST. The core functionality of the Calibration Reference Data System is available under an Open Source license. CRDS is briefly contrasted with a sampling of other similar systems used at other observatories.
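
    The sketch below illustrates the flavour of rules-based best-reference selection; the rule table, parameter names and file names are hypothetical and do not reflect the actual CRDS rules format or API.

    # Each rule maps a set of dataset-parameter conditions to a reference file;
    # the most specific matching rule (largest condition set) wins.
    RULES = [
        ({"instrument": "CAM", "filter": "F115W"}, "flat_f115w_v3.fits"),
        ({"instrument": "CAM", "filter": "F200W"}, "flat_f200w_v2.fits"),
        ({"instrument": "CAM"},                    "flat_generic_v1.fits"),
    ]

    def best_reference(header):
        matches = [(len(cond), ref) for cond, ref in RULES
                   if all(header.get(k) == v for k, v in cond.items())]
        if not matches:
            raise LookupError("no applicable calibration rule")
        return max(matches)[1]   # most specific rule

    print(best_reference({"instrument": "CAM", "filter": "F200W",
                          "date": "2016-07-01"}))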

  10. Calibration of line structured light vision system based on camera's projective center

    Institute of Scientific and Technical Information of China (English)

    ZHU Ji-gui; LI Yan-jun; YE Sheng-hua

    2005-01-01

    Based on the characteristics of line-structured-light sensors, a fast calibration method was established. Using a coplanar reference target, the spatial pose between the camera and the light plane can be calibrated from the camera's projective center and the image of the light stripe on the camera's image plane. The method can be implemented without restricting the movement of the coplanar reference target and without auxiliary adjustment equipment. It has been used to reduce the cost of calibration equipment, simplify the calibration procedure, and improve calibration efficiency. In experiments, the sensor attained a relative accuracy of about 0.5%, which indicates the soundness and effectiveness of this method.

  11. Monte Carlo simulation as a method for verifying the characterization of sources in ophthalmic brachytherapy

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz Lora, A.; Miras del Rio, H.; Terron Leon, J. A.

    2013-07-01

    Following the recommendations of the IAEA, and as a further check, Monte Carlo simulations have been performed of each of the plaques available at the hospital. The objective of this work is to verify the calibration certificates and to establish acceptance criteria. (Author)

  12. Neural Adaptive Sequential Monte Carlo

    OpenAIRE

    Gu, Shixiang; Ghahramani, Zoubin; Turner, Richard E

    2015-01-01

    Sequential Monte Carlo (SMC), or particle filtering, is a popular class of methods for sampling from an intractable target distribution using a sequence of simpler intermediate distributions. Like other importance sampling-based methods, performance is critically dependent on the proposal distribution: a bad proposal can lead to arbitrarily inaccurate estimates of the target distribution. This paper presents a new method for automatically adapting the proposal using an approximation of the Ku...

  13. Adaptive Multilevel Monte Carlo Simulation

    KAUST Repository

    Hoel, H

    2011-08-23

    This work generalizes the multilevel forward Euler Monte Carlo method introduced by Giles (Oper. Res. 56(3):607-617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. Giles proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single-level forward Euler Monte Carlo method. This work introduces an adaptive hierarchy of non-uniform time discretizations, generated by an adaptive algorithm introduced in earlier work (Dzougoutov et al., Adaptive Monte Carlo algorithms for stopped diffusion, in Multiscale Methods in Science and Engineering, Lect. Notes Comput. Sci. Eng. 44, pp. 59-88, Springer, Berlin, 2005; Moon et al., Stoch. Anal. Appl. 23(3):511-558, 2005; Moon et al., An adaptive algorithm for ordinary, stochastic and partial differential equations, in Recent Advances in Adaptive Computation, Contemp. Math. 383, pp. 325-343, Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates stochastic, path-dependent time steps and is based on a posteriori error expansions first developed by Szepessy et al. (Comm. Pure Appl. Math. 54(10):1169-1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL): from O(TOL^{-3}) for a single-level version of the adaptive algorithm to O((TOL^{-1} log(TOL))^2).
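
    For orientation, the sketch below implements the uniform-timestep multilevel estimator of Giles that this work generalizes (geometric Brownian motion, forward Euler, fixed per-level sample sizes); the adaptive, path-dependent time stepping of the paper itself is not reproduced.

    import numpy as np

    rng = np.random.default_rng(0)

    def euler_pair(n_paths, n_steps, T=1.0, x0=1.0, mu=0.05, sigma=0.2):
        """Coupled fine (n_steps) and coarse (n_steps/2) Euler endpoints."""
        dt = T / n_steps
        xf = np.full(n_paths, x0)
        xc = np.full(n_paths, x0)
        for _ in range(n_steps // 2):
            dw1 = rng.normal(0.0, np.sqrt(dt), n_paths)
            dw2 = rng.normal(0.0, np.sqrt(dt), n_paths)
            xf += mu * xf * dt + sigma * xf * dw1      # two fine steps...
            xf += mu * xf * dt + sigma * xf * dw2
            xc += mu * xc * 2 * dt + sigma * xc * (dw1 + dw2)  # ...one coarse step
        return xf, xc

    g = lambda x: np.maximum(x - 1.0, 0.0)   # illustrative payoff functional

    estimate = 0.0
    for level, n_paths in enumerate([100_000, 50_000, 25_000, 12_500]):
        xf, xc = euler_pair(n_paths, 2 ** (level + 1))
        if level == 0:
            estimate += g(xf).mean()             # coarsest level: plain MC
        else:
            estimate += (g(xf) - g(xc)).mean()   # correction term E[g_l - g_{l-1}]
    # (in practice, per-level sample sizes are chosen from estimated variances)
    print(f"MLMC estimate of E[g(X_T)]: {estimate:.4f}")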

  14. Monomial Gamma Monte Carlo Sampling

    OpenAIRE

    Zhang, Yizhe; Wang, Xiangyu; Chen, Changyou; Fan, Kai; Carin, Lawrence

    2016-01-01

    We unify slice sampling and Hamiltonian Monte Carlo (HMC) sampling by demonstrating their connection under the canonical transformation from Hamiltonian mechanics. This insight enables us to extend HMC and slice sampling to a broader family of samplers, called monomial Gamma samplers (MGS). We analyze theoretically the mixing performance of such samplers by proving that the MGS draws samples from a target distribution with zero-autocorrelation, in the limit of a single parameter. This propert...

  15. Parallel Monte Carlo simulation of aerosol dynamics

    KAUST Repository

    Zhou, K.

    2014-01-01

    A highly efficient Monte Carlo (MC) algorithm is developed for the numerical simulation of aerosol dynamics, that is, nucleation, surface growth, and coagulation. Nucleation and surface growth are handled with deterministic means, while coagulation is simulated with a stochastic method (the Marcus-Lushnikov stochastic process). Operator splitting techniques are used to synthesize the deterministic and stochastic parts in the algorithm. The algorithm is parallelized using the Message Passing Interface (MPI). The parallel computing efficiency is investigated through numerical examples. Nearly 60% parallel efficiency is achieved for the largest test case, with 3.7 million MC particles running on 93 parallel computing nodes. The algorithm is verified by simulating various test cases and comparing the simulation results with available analytical and/or other numerical solutions. Generally, it is found that only a small number (hundreds or thousands) of MC particles is necessary to accurately predict the aerosol particle number density, volume fraction, and so forth, that is, the low-order moments of the particle size distribution (PSD) function. Accurately predicting the high-order moments of the PSD requires a dramatic increase in the number of MC particles.

  16. Lidar calibration experiments

    DEFF Research Database (Denmark)

    Ejsing Jørgensen, Hans; Mikkelsen, T.; Streicher, J.;

    1997-01-01

    A series of atmospheric aerosol diffusion experiments combined with lidar detection was conducted to evaluate and calibrate an existing retrieval algorithm for aerosol backscatter lidar systems. The calibration experiments made use of two (almost) identical mini-lidar systems for aerosol cloud detection to test the reproducibility and uncertainty of lidars. Lidar data were obtained from both single-ended and double-ended lidar configurations. A backstop was introduced in one of the experiments, and a new method was developed where information obtained from the backstop can be used in the inversion algorithm. Independent in-situ aerosol plume concentrations were obtained from a simultaneous tracer gas experiment with SF6, and comparisons with the two lidars were made. The study shows that the reproducibility of the lidars is within 15%, including measurements from both sides of a plume...

  17. Optical tweezers absolute calibration

    CERN Document Server

    Dutra, R S; Neto, P A Maia; Nussenzveig, H M

    2014-01-01

    Optical tweezers are highly versatile laser traps for neutral microparticles, with fundamental applications in physics and in single molecule cell biology. Force measurements are performed by converting the stiffness response to displacement of trapped transparent microspheres, employed as force transducers. Usually, calibration is indirect, by comparison with fluid drag forces. This can lead to discrepancies by sizable factors. Progress achieved in a program aiming at absolute calibration, conducted over the past fifteen years, is briefly reviewed. Here we overcome its last major obstacle, a theoretical overestimation of the peak stiffness, within the most employed range for applications, and we perform experimental validation. The discrepancy is traced to the effect of primary aberrations of the optical system, which are now included in the theory. All required experimental parameters are readily accessible. Astigmatism, the dominant effect, is measured by analyzing reflected images of the focused laser spo...

  18. Astrid-2 SSC ASUMagnetic Calibration

    DEFF Research Database (Denmark)

    Primdahl, Fritz

    1997-01-01

    Report of the inter calibration between the starcamera and the fluxgate magnetometer onboard the ASTRID-2 satellite. This calibration was performed in the night between the 15. and 16. May 1997 at the Lovö magnetic observatory.

  19. Program Calibrates Strain Gauges

    Science.gov (United States)

    Okazaki, Gary D.

    1991-01-01

    Program dramatically reduces personnel and time requirements for acceptance tests of hardware. Data-acquisition system reads output from Wheatstone full-bridge strain-gauge circuit and calculates strain by use of shunt calibration technique. Program nearly instantaneously tabulates and plots strain data against load-cell outputs. Modified to acquire strain data for other specimens wherever full-bridge strain-gauge circuits used. Written in HP BASIC.
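
    The shunt-calibration arithmetic behind such a check is compact; the sketch below uses typical gauge values, not those of the original HP BASIC program.

    # Shunting one bridge arm of resistance R_g with a calibration resistor R_s
    # simulates a known (compressive) strain: eps = -R_g / (GF * (R_g + R_s)).
    def shunt_strain(r_gauge_ohm, r_shunt_ohm, gauge_factor=2.0):
        """Equivalent strain simulated by shunting a gauge with a resistor."""
        return -r_gauge_ohm / (gauge_factor * (r_gauge_ohm + r_shunt_ohm))

    eps = shunt_strain(350.0, 59_650.0)
    print(f"simulated strain: {eps * 1e6:.0f} microstrain")   # about -2917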

  20. Mesoscale hybrid calibration artifact

    Science.gov (United States)

    Tran, Hy D.; Claudet, Andre A.; Oliver, Andrew D.

    2010-09-07

    A mesoscale calibration artifact, also called a hybrid artifact, suitable for hybrid dimensional measurement, and the method for making the artifact. The hybrid artifact has structural characteristics that make it suitable for dimensional measurement in both vision-based systems and touch-probe-based systems. The hybrid artifact employs the intersection of bulk-micromachined planes to fabricate edges that are sharp to the nanometer level and intersecting planes with crystal-lattice-defined angles.

  1. Calibrating recruitment estimates for mourning doves from harvest age ratios

    Science.gov (United States)

    Miller, David A.; Otis, David L.

    2010-01-01

    We examined results from the first national-scale effort to estimate mourning dove (Zenaida macroura) age ratios and developed a simple, efficient, and generalizable methodology for calibrating estimates. Our method predicted age classes of unknown-age wings based on backward projection of molt distributions from fall harvest collections to preseason banding. We estimated 1) the proportion of late-molt individuals in each age class, and 2) the molt rates of juvenile and adult birds. Monte Carlo simulations demonstrated our estimator was minimally biased. We estimated model parameters using 96,811 wings collected from hunters and 42,189 birds banded during preseason from 68 collection blocks in 22 states during the 2005–2007 hunting seasons. We also used estimates to derive a correction factor, based on latitude and longitude of samples, which can be applied to future surveys. We estimated differential vulnerability of age classes to harvest using data from banded birds and applied that to harvest age ratios to estimate population age ratios. Average, uncorrected age ratio of known-age wings for states that allow hunting was 2.25 (SD 0.85) juveniles:adult, and average, corrected ratio was 1.91 (SD 0.68), as determined from harvest age ratios from an independent sample of 41,084 wings collected from random hunters in 2007 and 2008. We used an independent estimate of differential vulnerability to adjust corrected harvest age ratios and estimated the average population age ratio as 1.45 (SD 0.52), a direct measure of recruitment rates. Average annual recruitment rates were highest east of the Mississippi River and in the northwestern United States, with lower rates between. Our results demonstrate a robust methodology for calibrating recruitment estimates for mourning doves and represent the first large-scale estimates of recruitment for the species. Our methods can be used by managers to correct future harvest survey data to generate recruitment estimates for use in
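
    A worked example of the correction chain, using the published averages (the differential-vulnerability value shown is merely the one implied by those averages, not an independently reported figure):

    raw_harvest_ratio = 2.25        # juveniles per adult, uncorrected
    corrected_harvest_ratio = 1.91  # after the molt-based wing-ageing correction
    population_ratio = 1.45         # reported recruitment estimate

    # Population age ratio = corrected harvest ratio / differential vulnerability
    implied_vulnerability = corrected_harvest_ratio / population_ratio
    print(f"implied differential vulnerability: {implied_vulnerability:.2f}")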

  2. Dosimetry and Calibration Section

    International Nuclear Information System (INIS)

    The two tasks of the Dosimetry and Calibration Section at CERN are the Individual Dosimetry Service, which assures the personal monitoring of about 5000 persons potentially exposed to ionizing radiation at CERN, and the Calibration Laboratory, which verifies all the instruments and monitors. This equipment is used by the sections of the RP Group for assuring radiation protection around CERN's accelerators, and by the Environmental Section of TIS-TE. In addition, nearly 250 electronic and 300 quartz fibre dosimeters, employed in operational dosimetry, are calibrated at least once a year. The Individual Dosimetry Service uses an extended database (INDOS) which contains information about all the individual doses ever received at CERN. For most of 1997 it was operated without the support of a database administrator, as the technician who had been responsible for this work retired. The Software Support Section of TIS-TE took over the technical responsibility for the database, but in view of the many other tasks of this Section and the lack of personnel, only a few interventions for solving immediate problems were possible.

  3. Calibration of Underwater Sound Transducers

    Directory of Open Access Journals (Sweden)

    H.R.S. Sastry

    1983-07-01

    The techniques of calibration of underwater sound transducers for far-field, near-field and closed-environment conditions are reviewed in this paper. The design of the acoustic calibration tank is described. The facilities available at the Naval Physical & Oceanographic Laboratory, Cochin, for the calibration of transducers are also listed.

  4. Library Design in Combinatorial Chemistry by Monte Carlo Methods

    OpenAIRE

    Falcioni, Marco; Michael W. Deem

    2000-01-01

    Strategies for searching the space of variables in combinatorial chemistry experiments are presented, and a random energy model of combinatorial chemistry experiments is introduced. The search strategies, derived by analogy with the computer modeling technique of Monte Carlo, effectively search the variable space even in combinatorial chemistry experiments of modest size. Efficient implementations of the library design and redesign strategies are feasible with current experimental capabilities.

  5. Monte Carlo maximum likelihood estimation for discretely observed diffusion processes

    OpenAIRE

    Beskos, Alexandros; Papaspiliopoulos, Omiros; Roberts, Gareth

    2009-01-01

    This paper introduces a Monte Carlo method for maximum likelihood inference in the context of discretely observed diffusion processes. The method gives unbiased and almost surely continuous estimators of the likelihood function for a family of diffusion models, and its performance in numerical examples is computationally efficient. It uses a recently developed technique for the exact simulation of diffusions, and involves no discretization error. We show that, under regularity conditions, the Monte C...

  6. Panoptes: Calibration of a dosimetry system for eye brachytherapy

    International Nuclear Information System (INIS)

    Intraocular cancer is a serious threat to the lives of those that suffer from it. Dosimetry for eye brachytherapy presents a significant challenge due to the inherently steep dose gradients that are needed to treat such small tumours in close proximity to sensitive normal structures. To address this issue by providing much-needed quality assurance to eye brachytherapy, a novel volumetric dosimetry system, called PANOPTES, was developed. This study focuses on the preliminary characterisation and calibration of the system. Using ion beam facilities, the custom, pixelated silicon detector of PANOPTES was shown to have good charge collection uniformity and a well-defined sensitive volume. Flat-field calibration was conducted on the device using a 250 kVp orthovoltage beam. Finally, the detector and phantom were simulated with Monte Carlo in Geant4, to create water-equivalent dose correction factors for each pixel across a range of angles. - Highlights: • Volumetric detector system produced for plaque brachytherapy. • Orthovoltage, flat-field calibration performed for detector pixels. • Monte Carlo simulation showed mostly little angular deviation across all angles. • Ion beam induced charge collection showed pixels uniform and fully depleted

  7. An analysis of dependency of counting efficiency on worker anatomy for in vivo measurements: whole-body counting

    Science.gov (United States)

    Zhang, Binquan; Mille, Matthew; Xu, X. George

    2008-07-01

    In vivo radiobioassay is integral to many health physics and radiological protection programs dealing with internal exposures. The Bottle Manikin Absorber (BOMAB) physical phantom has been widely used for whole-body counting calibrations. However, the shape of BOMAB phantoms (a collection of plastic, cylindrical shells which contain no bones or internal organs) does not represent realistic human anatomy. Furthermore, workers who come in contact with radioactive materials have rather different body shapes and sizes. To date, there is a lack of understanding about how the counting efficiency would change when the calibrated counter is applied to a worker with complicated internal organs or tissues. This paper presents a study on various in vivo counting efficiencies obtained from Monte Carlo simulations of two BOMAB phantoms and three tomographic image-based models (VIP-Man, NORMAN and CNMAN) for a scenario involving homogeneous whole-body radioactivity contamination. The results reveal that a phantom's counting efficiency is strongly dependent on the shape and size of the phantom. Contrary to what was expected, it was found that only small differences in efficiency were observed when the density and material composition of all internal organs and tissues of the tomographic phantoms were changed to water. The results of this study indicate that BOMAB phantoms with appropriately adjusted size and shape can be sufficient for whole-body counting calibrations when the internal contamination is homogeneous.

  8. An analysis of dependency of counting efficiency on worker anatomy for in vivo measurements: whole-body counting

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Binquan; Mille, Matthew; Xu, X George [Nuclear Engineering and Engineering Physics, Rensselaer Polytechnic Institute, Troy, NY 12180 (United States)], E-mail: xug2@rpi.edu

    2008-07-07

    In vivo radiobioassay is integral to many health physics and radiological protection programs dealing with internal exposures. The Bottle Manikin Absorber (BOMAB) physical phantom has been widely used for whole-body counting calibrations. However, the shape of BOMAB phantoms (a collection of plastic, cylindrical shells which contain no bones or internal organs) does not represent realistic human anatomy. Furthermore, workers who come in contact with radioactive materials have rather different body shapes and sizes. To date, there is a lack of understanding about how the counting efficiency would change when the calibrated counter is applied to a worker with complicated internal organs or tissues. This paper presents a study on various in vivo counting efficiencies obtained from Monte Carlo simulations of two BOMAB phantoms and three tomographic image-based models (VIP-Man, NORMAN and CNMAN) for a scenario involving homogeneous whole-body radioactivity contamination. The results reveal that a phantom's counting efficiency is strongly dependent on the shape and size of the phantom. Contrary to what was expected, it was found that only small differences in efficiency were observed when the density and material composition of all internal organs and tissues of the tomographic phantoms were changed to water. The results of this study indicate that BOMAB phantoms with appropriately adjusted size and shape can be sufficient for whole-body counting calibrations when the internal contamination is homogeneous.

  9. Monte Carlo simulation experiments on box-type radon dosimeter

    Energy Technology Data Exchange (ETDEWEB)

    Jamil, Khalid, E-mail: kjamil@comsats.edu.pk; Kamran, Muhammad; Illahi, Ahsan; Manzoor, Shahid

    2014-11-11

    Epidemiological studies show that inhalation of radon gas ({sup 222}Rn) may be carcinogenic, especially to mine workers, people living in closed indoor energy-conserved environments and underground dwellers. It is, therefore, of paramount importance to measure {sup 222}Rn concentrations (Bq/m{sup 3}) in indoor environments. For this purpose, box-type passive radon dosimeters employing an ion track detector such as CR-39 are widely used. The fraction of the radon alphas emitted in the volume of the box-type dosimeter that results in latent track formation on the CR-39 is the latent track registration efficiency. The latent track registration efficiency is ultimately required to evaluate the radon concentration, which consequently determines the effective dose and the radiological hazards. In this research, Monte Carlo simulation experiments were carried out to study the alpha latent track registration efficiency for a box-type radon dosimeter as a function of the dosimeter's dimensions and the range of alpha particles in air. Two different self-developed Monte Carlo simulation techniques were employed, namely: (a) the surface ratio (SURA) method and (b) the ray hitting (RAHI) method. The Monte Carlo simulation experiments revealed that there are two types of efficiencies, i.e. the intrinsic efficiency (η{sub int}) and the alpha hit efficiency (η{sub hit}). η{sub int} depends only on the dimensions of the dosimeter, while η{sub hit} depends on both the dimensions of the dosimeter and the range of the alpha particles. The total latent track registration efficiency is the product of the intrinsic and hit efficiencies. It has been concluded that if the diagonal length of the box-type dosimeter is kept smaller than the range of the alpha particles, a hit efficiency of 100% is achieved; nevertheless, the intrinsic efficiency still plays its role. The Monte Carlo simulation experimental results have been found helpful in understanding the intricate track registration mechanisms in the box-type dosimeter. This paper

  10. Monte Carlo simulation experiments on box-type radon dosimeter

    Science.gov (United States)

    Jamil, Khalid; Kamran, Muhammad; Illahi, Ahsan; Manzoor, Shahid

    2014-11-01

    Epidemiological studies show that inhalation of radon gas (222Rn) may be carcinogenic, especially to mine workers, people living in closed indoor energy-conserved environments and underground dwellers. It is, therefore, of paramount importance to measure 222Rn concentrations (Bq/m3) in indoor environments. For this purpose, box-type passive radon dosimeters employing an ion track detector such as CR-39 are widely used. The fraction of the radon alphas emitted in the volume of the box-type dosimeter that results in latent track formation on the CR-39 is the latent track registration efficiency. The latent track registration efficiency is ultimately required to evaluate the radon concentration, which consequently determines the effective dose and the radiological hazards. In this research, Monte Carlo simulation experiments were carried out to study the alpha latent track registration efficiency for a box-type radon dosimeter as a function of the dosimeter's dimensions and the range of alpha particles in air. Two different self-developed Monte Carlo simulation techniques were employed, namely: (a) the surface ratio (SURA) method and (b) the ray hitting (RAHI) method. The Monte Carlo simulation experiments revealed that there are two types of efficiencies, i.e. the intrinsic efficiency (ηint) and the alpha hit efficiency (ηhit). ηint depends only on the dimensions of the dosimeter, while ηhit depends on both the dimensions of the dosimeter and the range of the alpha particles. The total latent track registration efficiency is the product of the intrinsic and hit efficiencies. It has been concluded that if the diagonal length of the box-type dosimeter is kept smaller than the range of the alpha particles, a hit efficiency of 100% is achieved; nevertheless, the intrinsic efficiency still plays its role. The Monte Carlo simulation experimental results have been found helpful in understanding the intricate track registration mechanisms in the box-type dosimeter. This paper explains how the radon concentration from the
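
    The hit-efficiency geometry lends itself to a compact Monte Carlo sketch. The version below is a generic reconstruction of the idea (not the SURA or RAHI codes of the paper): it assumes the whole bottom face of the box is the CR-39 detector, ignores the minimum residual energy needed to register a track, and uses illustrative dimensions and an approximate alpha range in air.

    import numpy as np

    rng = np.random.default_rng(0)
    LX, LY, LZ = 5.0, 5.0, 5.0     # box dimensions in cm, illustrative
    ALPHA_RANGE = 4.1              # cm in air for 5.49 MeV alphas, approximate

    def hit_efficiency(n=200_000):
        pos = rng.uniform([0.0, 0.0, 0.0], [LX, LY, LZ], size=(n, 3))
        cos_t = rng.uniform(-1.0, 1.0, n)        # isotropic emission directions
        phi = rng.uniform(0.0, 2.0 * np.pi, n)
        sin_t = np.sqrt(1.0 - cos_t ** 2)
        d = np.stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t], axis=1)
        down = d[:, 2] < 0.0                     # heading towards the bottom face
        t = np.full(n, np.inf)
        t[down] = -pos[down, 2] / d[down, 2]     # path length to the z = 0 plane
        reach = down & (t <= ALPHA_RANGE)        # reachable within the alpha range
        xh = pos[reach, 0] + t[reach] * d[reach, 0]
        yh = pos[reach, 1] + t[reach] * d[reach, 1]
        hits = np.zeros(n, dtype=bool)
        hits[reach] = (xh >= 0) & (xh <= LX) & (yh >= 0) & (yh <= LY)
        return hits.mean()

    print(f"hit efficiency ~ {hit_efficiency():.3f}")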

  11. The calibration of DD neutron indium activation diagnostic for Shenguang-III facility

    CERN Document Server

    Song, Zi-Feng; Liu, Zhong-Jie; Zhan, Xia-Yu; Tang, Qi

    2014-01-01

    The indium activation diagnostic was calibrated on an accelerator neutron source in order to diagnose deuterium-deuterium (DD) neutron yields of implosion experiments on the Shenguang-III facility. The scattered-neutron background of the accelerator room was measured by placing a polypropylene shield in front of the indium sample, in order to correct the calibration factor of this activation diagnostic. The proper size of this shield was determined by Monte Carlo simulation. The effect of other activated nuclei on the calibration was checked by verifying that the measured decay curve obeys exponential decay and by comparing it with the half-life of the activated sample. The calibration results showed that the linear range reached up to a 100 cps net count rate in the full-energy peak of interest, that the scattered-neutron background of the accelerator room was about 9% of the total neutrons, and that interfering activities were scarcely mixed into the sample. Subtracting the portion induced by the neutron background, the calibration factor of ...

  12. Development and calibration of a real-time airborne radioactivity monitor using direct gamma-ray spectrometry with two scintillation detectors.

    Science.gov (United States)

    Casanovas, R; Morant, J J; Salvadó, M

    2014-07-01

    The implementation of in-situ gamma-ray spectrometry in an automatic real-time environmental radiation surveillance network can help to identify and characterize abnormal radioactivity increases quickly. For this reason, a Real-time Airborne Radioactivity Monitor using direct gamma-ray spectrometry with two scintillation detectors (RARM-D2) was developed. The two scintillation detectors in the RARM-D2 are strategically shielded with Pb to permit the separate measurement of the airborne isotopes with respect to the deposited isotopes. In this paper, we describe the main aspects of the development and calibration of the RARM-D2 when using NaI(Tl) or LaBr3(Ce) detectors. The calibration of the monitor was performed experimentally, with the exception of the efficiency curve, which was set using Monte Carlo (MC) simulations with the EGS5 code system. Prior to setting the efficiency curve, the effect of the radioactive source term size on the efficiency calculations was studied for the gamma-rays from (137)Cs. Finally, to study the measurement capabilities of the RARM-D2, the minimum detectable activity concentrations for (131)I and (137)Cs were calculated for typical spectra at different integration times. PMID:24607535

  13. Poster — Thur Eve — 14: Improving Tissue Segmentation for Monte Carlo Dose Calculation using DECT

    Energy Technology Data Exchange (ETDEWEB)

    Di Salvio, A.; Bedwani, S.; Carrier, J-F. [Centre hospitalier de l' Université de Montréal (Canada); Bouchard, H. [National Physics Laboratory, Teddington (United Kingdom)

    2014-08-15

    Purpose: To improve Monte Carlo dose calculation accuracy through a new tissue segmentation technique with dual energy CT (DECT). Methods: Electron density (ED) and effective atomic number (EAN) can be extracted directly from DECT data with a stoichiometric calibration method. Images are acquired with Monte Carlo CT projections using the user code egs-cbct and reconstructed using an FDK backprojection algorithm. Calibration is performed using projections of a numerical RMI phantom. A weighted parameter algorithm then uses both EAN and ED to assign materials to voxels from DECT simulated images. This new method is compared to a standard tissue characterization from single energy CT (SECT) data using a segmented calibrated Hounsfield unit (HU) to ED curve. Both methods are compared to the reference numerical head phantom. Monte Carlo simulations on uniform phantoms of different tissues using dosxyz-nrc show discrepancies in depth-dose distributions. Results: Both SECT and DECT segmentation methods show similar performance assigning soft tissues. Performance is however improved with DECT in regions with higher density, such as bones, where it assigns materials correctly 8% more often than segmentation with SECT, considering the same set of tissues and simulated clinical CT images, i.e. including noise and reconstruction artifacts. Furthermore, Monte Carlo results indicate that kV photon beam depth-dose distributions can double between two tissues of density higher than muscle. Conclusions: A direct acquisition of ED and the added information of EAN with DECT data improves tissue segmentation and increases the accuracy of Monte Carlo dose calculation in kV photon beams.
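
    A toy version of the weighted (EAN, ED) material assignment might look as follows; the reference tissue values and the weight are rough placeholders, not the stoichiometric calibration of the abstract.

    TISSUES = {              # (relative ED, EAN), approximate reference values
        "adipose": (0.95, 6.3),
        "muscle":  (1.04, 7.6),
        "bone":    (1.45, 11.6),
    }

    def assign(ed, ean, w=0.5):
        """Pick the tissue minimizing a weighted relative ED/EAN distance."""
        def dist(ref):
            ed_r, ean_r = ref
            return w * abs(ed - ed_r) / ed_r + (1 - w) * abs(ean - ean_r) / ean_r
        return min(TISSUES, key=lambda name: dist(TISSUES[name]))

    print(assign(1.42, 11.0))   # -> "bone"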

  14. Calibration effects on orbit determination

    Science.gov (United States)

    Madrid, G. A.; Winn, F. B.; Zielenbach, J. W.; Yip, K. B.

    1974-01-01

    The effects of charged particle and tropospheric calibrations on the orbit determination (OD) process are analyzed. The calibration process consisted of correcting the Doppler observables for the media effects. Calibrated and uncalibrated Doppler data sets were used to obtain OD results for past missions as well as Mariner Mars 1971. Comparisons of these Doppler reductions show the significance of the calibrations. For the MM'71 mission, the media calibrations proved themselves effective in diminishing the overall B-plane error and reducing the Doppler residual signatures.

  15. Experimental and Monte Carlo evaluation of an ionization chamber in a 60Co beam

    Science.gov (United States)

    Perini, A. P.; Neves, L. P.; Santos, W. S.; Caldas, L. V. E.

    2016-07-01

    Recently a special parallel-plate ionization chamber was developed and characterized at the Instituto de Pesquisas Energeticas e Nucleares. The operational tests presented results within the recommended limits. In order to determine the influence of some components of the ionization chamber on its response, Monte Carlo simulations were carried out. The experimental and simulation results pointed out that the dosimeter evaluated in the present work has favorable properties to be applied to 60Co dosimetry at calibration laboratories.

  16. Experimental and Monte Carlo evaluation of an ionization chamber in a {sup 60}Co beam

    Energy Technology Data Exchange (ETDEWEB)

    Perini, Ana P.; Neves, Lucio Pereira, E-mail: anapaula.perini@ufu.br [Universidade Federal de Uberlandia (INFIS/UFU), MG (Brazil). Instituto de Fisica; Santos, William S.; Caldas, Linda V.E. [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    Recently a special parallel-plate ionization chamber was developed and characterized at the Instituto de Pesquisas Energeticas e Nucleares. The operational tests presented results within the recommended limits. In order to determine the influence of some components of the ionization chamber on its response, Monte Carlo simulations were carried out. The experimental and simulation results pointed out that the dosimeter evaluated in the present work has favorable properties to be applied to {sup 60}Co dosimetry at calibration laboratories. (author)

  17. JCOGIN. A parallel programming infrastructure for Monte Carlo particle transport

    International Nuclear Information System (INIS)

    The advantages of the Monte Carlo method for reactor analysis are well known, but full-core reactor analysis is demanding in both computation time and computer memory. Meanwhile, the exponential growth of computer power over the last 10 years is creating a great opportunity for large-scale parallel computing in Monte Carlo full-core reactor analysis. In this paper, a parallel programming infrastructure for Monte Carlo particle transport is introduced, named JCOGIN, which aims at accelerating the development of Monte Carlo codes for large-scale parallel simulations of the full core. JCOGIN implements a hybrid parallelism combining spatial decomposition with the traditional particle parallelism on MPI and OpenMP. Finally, the JMCT code was developed on JCOGIN; it reaches a parallel efficiency of 70% on 20480 cores for a fixed-source problem. With this hybrid parallelism, a full-core pin-by-pin simulation of the Dayawan reactor was implemented, with up to 10 million cells and flux tallies occupying over 40 GB of memory. (author)

  18. A separable shadow Hamiltonian hybrid Monte Carlo method

    Science.gov (United States)

    Sweet, Christopher R.; Hampton, Scott S.; Skeel, Robert D.; Izaguirre, Jesús A.

    2009-11-01

    Hybrid Monte Carlo (HMC) is a rigorous sampling method that uses molecular dynamics (MD) as a global Monte Carlo move. The acceptance rate of HMC decays exponentially with system size. The shadow hybrid Monte Carlo (SHMC) was previously introduced to reduce this performance degradation by sampling instead from the shadow Hamiltonian defined for MD when using a symplectic integrator. SHMC's performance is limited by the need to generate momenta for the MD step from a nonseparable shadow Hamiltonian. We introduce the separable shadow Hamiltonian hybrid Monte Carlo (S2HMC) method based on a formulation of the leapfrog/Verlet integrator that corresponds to a separable shadow Hamiltonian, which allows efficient generation of momenta. S2HMC gives the acceptance rate of a fourth order integrator at the cost of a second-order integrator. Through numerical experiments we show that S2HMC consistently gives a speedup greater than two over HMC for systems with more than 4000 atoms for the same variance. By comparison, SHMC gave a maximum speedup of only 1.6 over HMC. S2HMC has the additional advantage of not requiring any user parameters beyond those of HMC. S2HMC is available in the program PROTOMOL 2.1. A Python version, adequate for didactic purposes, is also in MDL (http://mdlab.sourceforge.net/s2hmc).
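
    For reference, here is a minimal sketch of the standard HMC baseline that SHMC and S2HMC build on: a leapfrog/Verlet trajectory followed by a Metropolis test on the true Hamiltonian, for a Gaussian target (shadow Hamiltonians are not implemented here, and all values are illustrative).

    import numpy as np

    rng = np.random.default_rng(0)
    U = lambda q: 0.5 * np.sum(q ** 2)     # potential of a standard normal target
    grad_U = lambda q: q

    def hmc_step(q, eps=0.1, n_leap=20):
        p = rng.normal(size=q.shape)                  # fresh momenta (separable H)
        H0 = U(q) + 0.5 * np.sum(p ** 2)
        qn = q.copy()
        pn = p - 0.5 * eps * grad_U(qn)               # leapfrog half kick
        for _ in range(n_leap):
            qn += eps * pn
            pn -= eps * grad_U(qn)
        pn += 0.5 * eps * grad_U(qn)                  # undo the surplus half kick
        H1 = U(qn) + 0.5 * np.sum(pn ** 2)
        return qn if rng.uniform() < np.exp(H0 - H1) else q   # Metropolis test

    q = np.zeros(10)
    draws = []
    for _ in range(2000):
        q = hmc_step(q)
        draws.append(q[0])
    print("sample variance (should be close to 1):", np.var(draws))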

  19. New methods for the Monte Carlo simulation of neutron noise experiments in ADS

    International Nuclear Information System (INIS)

    This paper presents two improvements to speed up the Monte Carlo simulation of neutron noise experiments. The first is to separate the actual Monte Carlo transport calculation from the digital signal processing routines, while the second is to introduce non-analogue techniques to improve the efficiency of the Monte Carlo calculation. For the latter method, adaptations to the theory of neutron noise experiments were made to account for the distortion of the higher moments of the calculated neutron noise. Calculations were performed to test the feasibility of the scheme outlined above and to demonstrate the advantages of the application of the track length estimator. It is shown that the modifications improve the efficiency of these calculations to a high extent, which turns the Monte Carlo method into a powerful tool for the development and design of on-line reactivity measurement systems for ADS

  20. Cell-veto Monte Carlo algorithm for long-range systems

    Science.gov (United States)

    Kapfer, Sebastian C.; Krauth, Werner

    2016-09-01

    We present a rigorous efficient event-chain Monte Carlo algorithm for long-range interacting particle systems. Using a cell-veto scheme within the factorized Metropolis algorithm, we compute each single-particle move with a fixed number of operations. For slowly decaying potentials such as Coulomb interactions, screening line charges allow us to take into account periodic boundary conditions. We discuss the performance of the cell-veto Monte Carlo algorithm for general inverse-power-law potentials, and illustrate how it provides a new outlook on one of the prominent bottlenecks in large-scale atomistic Monte Carlo simulations.

  1. HEXANN-EVALU - a Monte Carlo program system for pressure vessel neutron irradiation calculation

    International Nuclear Information System (INIS)

    The Monte Carlo programs HEXANN and the evaluation program EVALU are intended to calculate Monte Carlo estimates of reaction rates and currents in segments of concentric angular regions around a hexagonal reactor-core region. The report describes the theoretical basis, structure and operation of the programs. Input data preparation guides and a sample problem are also included. Theoretical considerations as well as numerical experimental results suggest to the user a nearly optimal way of using the Monte Carlo efficiency-increasing options included in the program

  2. Measurement of top-quark pair production cross sections and calibration of the top-quark Monte-Carlo mass using LHC run I proton-proton collision data at √(s) = 7 and 8 TeV with the CMS experiment

    International Nuclear Information System (INIS)

    In this thesis, measurements of the production cross sections for top-quark pairs and the determination of the top-quark mass are presented. Dileptonic decays of top-quark pairs (t anti t) with two opposite-charged lepton (electron and muon) candidates in the final state are considered. The studied data samples are collected in proton-proton collisions at the CERN Large Hadron Collider with the CMS detector and correspond to integrated luminosities of 5.0 fb-1 and 19.7 fb-1 at center-of-mass energies of √(s) = 7 TeV and √(s) = 8 TeV, respectively. The cross sections, σt anti t, are measured in the fiducial detector volume (visible phase space), defined by the kinematics of the top-quark decay products, and are extrapolated to the full phase space. The visible cross sections are extracted in a simultaneous binned-likelihood fit to multi-differential distributions of final-state observables, categorized according to the multiplicity of jets associated to b quarks (b jets) and other jets in each event. The fit is performed with emphasis on a consistent treatment of correlations between systematic uncertainties and taking into account features of the t anti t event topology. By comparison with predictions from the Standard Model at next-to-next-to leading order (NNLO) accuracy, the top-quark pole mass, mtpole, is extracted from the measured cross sections for different state-of-the-art PDF sets. Furthermore, the top-quark mass parameter used in Monte-Carlo simulations, mtMC, is determined using the distribution of the invariant mass of a lepton candidate and the leading b jet in the event, mlb. Being defined by the kinematics of the top-quark decay, this observable is unaffected by the description of the top-quark production mechanism. Events are selected from the data collected at √(s) = 8 TeV that contain at least two jets and one b jet in addition to the lepton candidate pair. A novel technique is presented, in which fixed-order calculations in quantum

  3. Measurement of top-quark pair production cross sections and calibration of the top-quark Monte-Carlo mass using LHC run I proton-proton collision data at √(s) = 7 and 8 TeV with the CMS experiment

    Energy Technology Data Exchange (ETDEWEB)

    Kieseler, Jan

    2015-12-15

    In this thesis, measurements of the production cross sections for top-quark pairs and the determination of the top-quark mass are presented. Dileptonic decays of top-quark pairs (t anti t) with two opposite-charged lepton (electron and muon) candidates in the final state are considered. The studied data samples are collected in proton-proton collisions at the CERN Large Hadron Collider with the CMS detector and correspond to integrated luminosities of 5.0 fb{sup -1} and 19.7 fb{sup -1} at center-of-mass energies of √(s) = 7 TeV and √(s) = 8 TeV, respectively. The cross sections, σ{sub t} {sub anti} {sub t}, are measured in the fiducial detector volume (visible phase space), defined by the kinematics of the top-quark decay products, and are extrapolated to the full phase space. The visible cross sections are extracted in a simultaneous binned-likelihood fit to multi-differential distributions of final-state observables, categorized according to the multiplicity of jets associated to b quarks (b jets) and other jets in each event. The fit is performed with emphasis on a consistent treatment of correlations between systematic uncertainties and taking into account features of the t anti t event topology. By comparison with predictions from the Standard Model at next-to-next-to leading order (NNLO) accuracy, the top-quark pole mass, m{sub t}{sup pole}, is extracted from the measured cross sections for different state-of-the-art PDF sets. Furthermore, the top-quark mass parameter used in Monte-Carlo simulations, m{sub t}{sup MC}, is determined using the distribution of the invariant mass of a lepton candidate and the leading b jet in the event, m{sub lb}. Being defined by the kinematics of the top-quark decay, this observable is unaffected by the description of the top-quark production mechanism. Events are selected from the data collected at √(s) = 8 TeV that contain at least two jets and one b jet in addition to the lepton candidate pair. A novel technique is

  4. Accelerated GPU based SPECT Monte Carlo simulations

    Science.gov (United States)

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-01

    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99m Tc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between GATE and GGEMS platform derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency

  5. Efficient quadrature rules for illumination integrals from quasi Monte Carlo to Bayesian Monte Carlo

    CERN Document Server

    Marques, Ricardo; Santos, Luís Paulo; Bouatouch, Kadi

    2015-01-01

    Rendering photorealistic images is a costly process which can take up to several days in the case of high quality images. In most cases, the task of sampling the incident radiance function to evaluate the illumination integral is responsible for an important share of the computation time. Therefore, to reach acceptable rendering times, the illumination integral must be evaluated using a limited set of samples. Such a restriction raises the question of how to obtain the most accurate approximation possible with such a limited set of samples. One must thus ensure that sampling produces the highe

  6. Research of optimizing the microwave wide band blackbody calibration target

    Institute of Scientific and Technical Information of China (English)

    Nian Feng; Yang Yujie; Wang Wei

    2009-01-01

    The blackbody calibration targets used for the prelaunch calibration of microwave radiometers are optimized with respect to their electromagnetic and thermal characteristics. Based on emissivity optimization with radar cross section (RCS) simulation and on the subgrid finite difference time domain (FDTD) method, the following design rules are summarized: the round wedge is better than the square one; the best height-to-bottom-radius ratio is 4:1; for wide band calibration, a multilayer absorbing-material coating is effective in increasing the emissivity; and a coating of gradually varying thickness helps guarantee a uniform surface-temperature distribution while keeping a high emissivity. Finally, following these conclusions, a new type of blackbody calibration target with a cellular array is proposed to improve the uniformity of polarization, which will further increase the performance of the calibration targets.

  7. SAR antenna calibration techniques

    Science.gov (United States)

    Carver, K. R.; Newell, A. C.

    1978-01-01

    Calibration of SAR antennas requires a measurement of gain, elevation and azimuth pattern shape, boresight error, cross-polarization levels, and phase vs. angle and frequency. For spaceborne SAR antennas of SEASAT size operating at C-band or higher, some of these measurements can become extremely difficult using conventional far-field antenna test ranges. Near-field scanning techniques offer an alternative approach and for C-band or X-band SARs, give much improved accuracy and precision as compared to that obtainable with a far-field approach.

  8. Description of a stable scheme for steady-state coupled Monte Carlo-thermal-hydraulic calculations

    OpenAIRE

    Dufek, Jan; Eduard Hoogenboom, J.

    2014-01-01

    We provide a detailed description of a numerically stable and efficient coupling scheme for steady-state Monte Carlo neutronic calculations with thermal-hydraulic feedback. While we have previously derived and published the stochastic approximation based method for coupling the Monte Carlo criticality and thermal-hydraulic calculations, its possible implementation has not been described in a step-by-step manner. As the simple description of the coupling scheme was repeatedly requested from us...

  9. Cuartel San Carlos. Yacimiento veterano

    Directory of Open Access Journals (Sweden)

    Mariana Flores

    2007-01-01

    Full Text Available The Cuartel San Carlos is a national historic monument (1986) dating from the late eighteenth century (1785-1790), whose construction suffered various adversities and which withstood the earthquakes of 1812 and 1900. In 2006, the body responsible for its custody, the Instituto de Patrimonio Cultural of the Ministry of Culture, carried out three stages of archaeological exploration covering the back courtyard (Traspatio), the central courtyard (Patio Central) and the east and west wings of the building. This paper reviews the analysis of the archaeological documentation obtained at the site through that project, named EACUSAC (Estudio Arqueológico del Cuartel San Carlos), which also represents the third campaign carried out at the site. The importance of this historic site lies in its role in the events that fuelled power struggles during the emergence of the Republic and in the political events of the twentieth century. The site also yielded a wide sample of archaeological materials that document everyday military life, as well as the internal social dynamics that took place in the San Carlos as a strategic place for the defence of the different regimes the country passed through, from the era of Spanish imperialism to the present day.

  10. Carlos Restrepo. Un verdadero Maestro

    OpenAIRE

    Pelayo Correa

    2009-01-01

    Carlos Restrepo was the first professor of pathology and an illustrious member of the group of pioneers who founded the Facultad de Medicina of the Universidad del Valle. These pioneers converged on Cali in the 1950s, bringing a renewing and creative spirit that undertook, with great success, the task of changing the academic culture of the Valle del Cauca. They found a peaceful society, one that enjoyed the generosity of its surroundings and had no desire to break with centuries-old traditions...

  11. Calibration of the MACHO photometry database

    Energy Technology Data Exchange (ETDEWEB)

    Alcock, A

    1998-10-23

    The MACHO Project is a microlensing survey that monitors the brightnesses of ~60 million stars in the Large Magellanic Cloud (LMC), Small Magellanic Cloud, and Galactic bulge. The database presently contains more photometric measurements than previously recorded in the history of astronomy. We describe the calibration of the MACHO two-color photometry and transformation to the standard Kron-Cousins V and R system. This allows for proper comparison with all other observations on the Kron-Cousins standard system. The highest precision calibrations are for ~9 million stars in the LMC bar. For these stars, independent photometric measurements in field-overlap regions indicate standard deviations δvR = 0.020 mag. Calibrated MACHO photometry data are compared with published photometric sequences and with new Hubble Space Telescope observations. We additionally describe the first application of these calibrated data: the construction of the "efficiency" color-magnitude diagram which will be used to calculate our experimental sensitivity for detecting microlensing in the LMC.

  12. The LOFAR long baseline snapshot calibrator survey

    CERN Document Server

    Moldón, J; Wucknitz, O; Jackson, N; Drabent, A; Carozzi, T; Conway, J; Kapińska, A D; McKean, P; Morabito, L; Varenius, E; Zarka, P; Anderson, J; Asgekar, A; Avruch, I M; Bell, M E; Bentum, M J; Bernardi, G; Best, P; Bîrzan, L; Bregman, J; Breitling, F; Broderick, J W; Brüggen, M; Butcher, H R; Carbone, D; Ciardi, B; de Gasperin, F; de Geus, E; Duscha, S; Eislöffel, J; Engels, D; Falcke, H; Fallows, R A; Fender, R; Ferrari, C; Frieswijk, W; Garrett, M A; Grießmeier, J; Gunst, A W; Hamaker, J P; Hassall, T E; Heald, G; Hoeft, M; Juette, E; Karastergiou, A; Kondratiev, V I; Kramer, M; Kuniyoshi, M; Kuper, G; Maat, P; Mann, G; Markoff, S; McFadden, R; McKay-Bukowski, D; Morganti, R; Munk, H; Norden, M J; Offringa, A R; Orru, E; Paas, H; Pandey-Pommier, M; Pizzo, R; Polatidis, A G; Reich, W; Röttgering, H; Rowlinson, A; Scaife, A M M; Schwarz, D; Sluman, J; Smirnov, O; Stappers, B W; Steinmetz, M; Tagger, M; Tang, Y; Tasse, C; Thoudam, S; Toribio, M C; Vermeulen, R; Vocks, C; van Weeren, R J; White, S; Wise, M W; Yatawatta, S; Zensus, A

    2014-01-01

    Aims. An efficient means of locating calibrator sources for International LOFAR is developed and used to determine the average density of usable calibrator sources on the sky for subarcsecond observations at 140 MHz. Methods. We used the multi-beaming capability of LOFAR to conduct a fast and computationally inexpensive survey with the full International LOFAR array. Sources were pre-selected on the basis of 325 MHz arcminute-scale flux density using existing catalogues. By observing 30 different sources in each of the 12 sets of pointings per hour, we were able to inspect 630 sources in two hours to determine if they possess a sufficiently bright compact component to be usable as LOFAR delay calibrators. Results. Over 40% of the observed sources are detected on multiple baselines between international stations and 86 are classified as satisfactory calibrators. We show that a flat low-frequency spectrum (from 74 to 325 MHz) is the best predictor of compactness at 140 MHz. We extrapolate from our sample to sho...

  13. On-orbit instrument calibration of CALET

    Science.gov (United States)

    Javaid, Amir; Calet Collaboration

    2015-04-01

    The CALorimetric Electron Telescope (CALET) is a high-energy cosmic ray experiment which will be placed on the International Space Station in 2015. Primary goals of CALET are measurement of cosmic ray electron spectra from 1 GeV to 20 TeV, gamma rays from 10 GeV to 10 TeV, and protons and nuclei from 10 GeV up to 1000 TeV. The detector consists of three main components: a Charge Detector (CHD), Imaging Calorimeter (IMC), and Total Absorption Calorimeter (TASC). As CALET is going to work in the ISS orbit space environment, it needs to be calibrated while it is in orbit. Penetrating non-showering protons and helium nuclei are prime candidates for instrument calibration, as they provide a known energy signal for calibrating the detector response. In the present paper, we discuss estimation of CALET's detector efficiency to protons and helium nuclei. Included is a discussion of different galactic cosmic ray and trapped proton models used for flux calculation and simulations performed for detector geometric area and trigger rate calculation. This paper also discusses the importance of the albedo proton flux for the CALET detector calibration. This research was supported by NASA at Louisiana State University under Grant Number NNX11AE01G.

  14. Calibration aspects of the JEM-EUSO mission

    Science.gov (United States)

    Adams, J. H.; Ahmad, S.; Albert, J.-N.; Allard, D.; Anchordoqui, L.; Andreev, V.; Anzalone, A.; Arai, Y.; Asano, K.; Ave Pernas, M.; Baragatti, P.; Barrillon, P.; Batsch, T.; Bayer, J.; Bechini, R.; Belenguer, T.; Bellotti, R.; Belov, K.; Berlind, A. A.; Bertaina, M.; Biermann, P. L.; Biktemerova, S.; Blaksley, C.; Blanc, N.; Błȩcki, J.; Blin-Bondil, S.; Blümer, J.; Bobik, P.; Bogomilov, M.; Bonamente, M.; Briggs, M. S.; Briz, S.; Bruno, A.; Cafagna, F.; Campana, D.; Capdevielle, J.-N.; Caruso, R.; Casolino, M.; Cassardo, C.; Castellinic, G.; Catalano, C.; Catalano, G.; Cellino, A.; Chikawa, M.; Christl, M. J.; Cline, D.; Connaughton, V.; Conti, L.; Cordero, G.; Crawford, H. J.; Cremonini, R.; Csorna, S.; Dagoret-Campagne, S.; de Castro, A. J.; De Donato, C.; de la Taille, C.; De Santis, C.; del Peral, L.; Dell'Oro, A.; De Simone, N.; Di Martino, M.; Distratis, G.; Dulucq, F.; Dupieux, M.; Ebersoldt, A.; Ebisuzaki, T.; Engel, R.; Falk, S.; Fang, K.; Fenu, F.; Fernández-Gómez, I.; Ferrarese, S.; Finco, D.; Flamini, M.; Fornaro, C.; Franceschi, A.; Fujimoto, J.; Fukushima, M.; Galeotti, P.; Garipov, G.; Geary, J.; Gelmini, G.; Giraudo, G.; Gonchar, M.; González Alvarado, C.; Gorodetzky, P.; Guarino, F.; Guzmán, A.; Hachisu, Y.; Harlov, B.; Haungs, A.; Hernández Carretero, J.; Higashide, K.; Ikeda, D.; Ikeda, H.; Inoue, N.; Inoue, S.; Insolia, A.; Isgrò, F.; Itow, Y.; Joven, E.; Judd, E. G.; Jung, A.; Kajino, F.; Kajino, T.; Kaneko, I.; Karadzhov, Y.; Karczmarczyk, J.; Karus, M.; Katahira, K.; Kawai, K.; Kawasaki, Y.; Keilhauer, B.; Khrenov, B. A.; Kim, J.-S.; Kim, S.-W.; Kim, S.-W.; Kleifges, M.; Klimov, P. A.; Kolev, D.; Kreykenbohm, I.; Kudela, K.; Kurihara, Y.; Kusenko, A.; Kuznetsov, E.; Lacombe, M.; Lachaud, C.; Lee, J.; Licandro, J.; Lim, H.; López, F.; Maccarone, M. C.; Mannheim, K.; Maravilla, D.; Marcelli, L.; Marini, A.; Martinez, O.; Masciantonio, G.; Mase, K.; Matev, R.; Medina-Tanco, G.; Mernik, T.; Miyamoto, H.; Miyazaki, Y.; Mizumoto, Y.; Modestino, G.; Monaco, A.; Monnier-Ragaigne, D.; Morales de los Ríos, J. A.; Moretto, C.; Morozenko, V. S.; Mot, B.; Murakami, T.; Murakami, M. Nagano; Nagata, M.; Nagataki, S.; Nakamura, T.; Napolitano, T.; Naumov, D.; Nava, R.; Neronov, A.; Nomoto, K.; Nonaka, T.; Ogawa, T.; Ogio, S.; Ohmori, H.; Olinto, A. V.; Orleański, P.; Osteria, G.; Panasyuk, M. I.; Parizot, E.; Park, I. H.; Park, H. W.; Pastircak, B.; Patzak, T.; Paul, T.; Pennypacker, C.; Perez Cano, S.; Peter, T.; Picozza, P.; Pierog, T.; Piotrowski, L. W.; Piraino, S.; Plebaniak, Z.; Pollini, A.; Prat, P.; Prévôt, G.; Prieto, H.; Putis, M.; Reardon, P.; Reyes, M.; Ricci, M.; Rodríguez, I.; Rodríguez Frías, M. D.; Ronga, F.; Roth, M.; Rothkaehl, H.; Roudil, G.; Rusinov, I.; Rybczyński, M.; Sabau, M. D.; Sáez-Cano, G.; Sagawa, H.; Saito, A.; Sakaki, N.; Sakata, M.; Salazar, H.; Sánchez, S.; Santangelo, A.; Santiago Crúz, L.; Sanz Palomino, M.; Saprykin, O.; Sarazin, F.; Sato, H.; Sato, M.; Schanz, T.; Schieler, H.; Scotti, V.; Segreto, A.; Selmane, S.; Semikoz, D.; Serra, M.; Sharakin, S.; Shibata, T.; Shimizu, H. M.; Shinozaki, K.; Shirahama, T.; Siemieniec-Oziȩbło, G.; Silva López, H. 
H.; Sledd, J.; Słomińska, K.; Sobey, A.; Sugiyama, T.; Supanitsky, D.; Suzuki, M.; Szabelska, B.; Szabelski, J.; Tajima, F.; Tajima, N.; Tajima, T.; Takahashi, Y.; Takami, H.; Takeda, M.; Takizawa, Y.; Tenzer, C.; Tibolla, O.; Tkachev, L.; Tokuno, H.; Tomida, T.; Tone, N.; Toscano, S.; Trillaud, F.; Tsenov, R.; Tsunesada, Y.; Tsuno, K.; Tymieniecka, T.; Uchihori, Y.; Unger, M.; Vaduvescu, O.; Valdés-Galicia, J. F.; Vallania, P.; Valore, L.; Vankova, G.; Vigorito, C.; Villaseñor, L.; von Ballmoos, P.; Wada, S.; Watanabe, J.; Watanabe, S.; Watts, J.; Weber, M.; Weiler, T. J.; Wibig, T.; Wiencke, L.; Wille, M.; Wilms, J.; Włodarczyk, Z.; Yamamoto, T.; Yamamoto, Y.; Yang, J.; Yano, H.; Yashin, I. V.; Yonetoku, D.; Yoshida, K.; Yoshida, S.; Young, R.; Zotov, M. Yu.; Zuccaro Marchi, A.

    2015-11-01

    The JEM-EUSO telescope will be, after calibration, a very accurate instrument which yields the number of received photons from the number of measured photo-electrons. The project is in phase A (demonstration of the concept), with prototype instruments already operating; many parts of the instrument have been constructed and tested. Calibration is a crucial part of the instrument and its use. The focal surface (FS) of the JEM-EUSO telescope will consist of about 5000 photo-multiplier tubes (PMTs), which have to be well calibrated to reach the required accuracy in reconstructing the air-shower parameters. The optics system consists of 3 plastic Fresnel (double-sided) lenses of 2.5 m diameter. The aim of the calibration system is to measure the efficiencies (transmittances) of the optics and the absolute efficiencies of the entire focal surface detector. The system consists of 3 main components: (i) Pre-flight calibration devices on ground, where the efficiency and gain of the PMTs will be measured absolutely and the transmittance of the optics will also be measured. (ii) An on-board relative calibration system applying two methods: a) operating during the day, when the JEM-EUSO lid will be closed, using small light sources on board; b) operating during the night, together with data taking, by monitoring the background rate over identical sites. (iii) Absolute in-flight calibration, again applying two methods: a) measurement of moonlight reflected from high-altitude, high-albedo clouds; b) measurements of calibrated flashes and tracks produced by the Global Light System (GLS). Some details of each calibration method are described in this paper.

  15. Use of Radiometrically Calibrated Flat-Plate Calibrators in Calibration of Radiation Thermometers

    Science.gov (United States)

    Cárdenas-García, D.; Méndez-Lango, E.

    2015-08-01

    Most commonly used low-temperature infrared thermometers have fields of view too large for them to be calibrated with narrow-aperture blackbodies. Flat-plate calibrators with large emitting surfaces have been proposed for calibrating these infrared thermometers. Because the emissivity of the flat plate is not unity, its radiance temperature is wavelength dependent. For calibration, the wavelength pass band of the device under test should match that of the reference infrared thermometer. If the device under test and the reference radiometer have different pass bands, the corresponding correction can still be calculated, provided the emissivity of the flat plate is known; such a correction is required, for example, when calibrating an infrared thermometer against a radiometrically calibrated flat-plate calibrator operating in a different pass band. A method is described for using a radiometrically calibrated flat-plate calibrator that covers both matched and mismatched working wavelength ranges of the reference infrared thermometer and the infrared thermometers to be calibrated. An application example is also included in this paper.

  16. Monte Carlo techniques in radiation therapy

    CERN Document Server

    Verhaegen, Frank

    2013-01-01

    Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book-the first of its kind-addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...

  17. Monte Carlo primer for health physicists

    International Nuclear Information System (INIS)

    The basic ideas and principles of Monte Carlo calculations are presented in the form of a primer for health physicists. A simple integral with a known answer is evaluated by two different Monte Carlo approaches. Random numbers, which underlie Monte Carlo work, are discussed, and a sample table of random numbers generated by a hand calculator is presented. Monte Carlo calculations of dose and linear energy transfer (LET) from 100-keV neutrons incident on a tissue slab are discussed. The random-number table is used in a hand calculation of the initial sequence of events for a 100-keV neutron entering the slab. Some pitfalls in Monte Carlo work are described. While this primer addresses mainly the bare bones of Monte Carlo, a final section briefly describes some of the more sophisticated techniques used in practice to reduce variance and computing time
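
    The primer's opening exercise, evaluating a simple integral two ways, is easy to reproduce. Below is a minimal Python sketch (our own illustration, not the primer's code) estimating the integral of x^2 over [0, 1], whose exact value is 1/3, by the sample-mean method and by hit-or-miss rejection:

        import random

        random.seed(1)
        N = 100_000
        f = lambda x: x * x          # integrand on [0, 1]; exact integral is 1/3

        # Sample-mean (crude) Monte Carlo: average f over uniform samples.
        mean_est = sum(f(random.random()) for _ in range(N)) / N

        # Hit-or-miss: fraction of uniform points (x, y) in the unit square
        # that fall under the curve; first draw is y, second is x.
        hits = sum(1 for _ in range(N) if random.random() < f(random.random()))
        hit_est = hits / N

        print(f"sample-mean: {mean_est:.4f}  hit-or-miss: {hit_est:.4f}  exact: {1/3:.4f}")

    Both estimates converge as 1/sqrt(N); for smooth integrands the sample-mean estimator typically has the lower variance, which is one of the primer's teaching points.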

  18. Monte Carlo Treatment Planning for Advanced Radiotherapy

    DEFF Research Database (Denmark)

    Cronholm, Rickard

    Three corner stones of Monte Carlo Treatment Planning are identified: building, commissioning and validation of a Monte Carlo model of a medical linear accelerator (i), converting a CT scan of a patient to a Monte Carlo compliant phantom (ii) and translating the treatment plan parameters (including beam energy, angles of incidence, collimator settings etc) to a Monte Carlo input file (iii). A protocol for commissioning of a Monte Carlo model of a medical linear accelerator, ensuring agreement with measurements within 1% for a range of situations, is presented. The resulting Monte Carlo model was validated against measurements for a wider range of situations, including small field output factors, and agreement ... modulating the intensity of the field during the irradiation. The workflow described has the potential to fully model the dynamic delivery, including gantry rotation during irradiation, of modern radiotherapy.

  19. On the long-term stability of calibration standards in different matrices.

    Science.gov (United States)

    Kandić, A; Vukanac, I; Djurašević, M; Novković, D; Šešlak, B; Milošević, Z

    2012-09-01

    In order to assure quality control in accordance with ISO/IEC 17025, it was important, from a metrological point of view, to examine the long-term stability of previously prepared calibration standards. A comprehensive reassessment of the efficiency curves with respect to the ageing of the calibration standards is presented in this paper. The calibration standards were re-used after a period of 5 years, and analysis of the results showed discrepancies in the efficiency values. PMID:22405642

  20. AVATAR -- Automatic variance reduction in Monte Carlo calculations

    Energy Technology Data Exchange (ETDEWEB)

    Van Riper, K.A.; Urbatsch, T.J.; Soran, P.D. [and others]

    1997-05-01

    AVATAR™ (Automatic Variance And Time of Analysis Reduction), accessed through the graphical user interface application Justine™, is a superset of MCNP™ that automatically invokes THREEDANT™ for a three-dimensional deterministic adjoint calculation on a mesh independent of the Monte Carlo geometry, calculates weight windows, and runs MCNP. Computational efficiency increases by a factor of 2 to 5 for a three-detector oil well logging tool model. Human efficiency increases dramatically, since AVATAR eliminates the need for deep intuition and hours of tedious handwork.

  1. Monte Carlo simulations of charge transport in heterogeneous organic semiconductors

    Science.gov (United States)

    Aung, Pyie Phyo; Khanal, Kiran; Luettmer-Strathmann, Jutta

    2015-03-01

    The efficiency of organic solar cells depends on the morphology and electronic properties of the active layer. Research teams have been experimenting with different conducting materials to achieve more efficient solar panels. In this work, we perform Monte Carlo simulations to study charge transport in heterogeneous materials. We have developed a coarse-grained lattice model of polymeric photovoltaics and use it to generate active layers with ordered and disordered regions. We determine carrier mobilities for a range of conditions to investigate the effect of the morphology on charge transport.

  2. A Simple Accelerometer Calibrator

    Science.gov (United States)

    Salam, R. A.; Islamy, M. R. F.; Munir, M. M.; Latief, H.; Irsyam, M.; Khairurrijal

    2016-08-01

    Earthquakes can cause large numbers of victims and can also trigger other hazards such as tsunamis and landslides, so a system that can detect earthquake occurrence is required. One possible approach is a vibration-sensor system based on an accelerometer. However, the output of such a system is usually given as acceleration data, so a calibrator is needed to characterize the accelerometer against known vibrations. In this study, a simple accelerometer calibrator has been developed using a 12 V DC motor, an optocoupler, a Liquid Crystal Display (LCD) and an AVR 328 microcontroller as the control system. The system uses pulse-width modulation (PWM) from the microcontroller to control the motor rotational speed and hence the vibration frequency. The vibration frequency is read by the optocoupler, and these data are used as feedback to the system. The results show that the system can control the rotational speed and the vibration frequencies in accordance with the defined PWM.

  3. Novel Quantum Monte Carlo Approaches for Quantum Liquids

    Science.gov (United States)

    Rubenstein, Brenda M.

    the eventual hope is to apply this algorithm to the exploration of yet unidentified high-pressure, low-temperature phases of hydrogen, I employ this algorithm to determine whether or not quantum hard spheres can form a low-temperature bcc solid if exchange is not taken into account. In the final chapter of this thesis, I use Path Integral Monte Carlo once again to explore whether glassy para-hydrogen exhibits superfluidity. Physicists have long searched for ways to coax hydrogen into becoming a superfluid. I present evidence that, while glassy hydrogen does not crystallize at the temperatures at which hydrogen might become a superfluid, it nevertheless does not exhibit superfluidity. This is because the average binding energy per p-H2 molecule poses a severe barrier to exchange regardless of whether the system is crystalline. All in all, this work extends the reach of Quantum Monte Carlo methods to new systems and brings the power of existing methods to bear on new problems. Portions of this work have been published in Rubenstein, PRE (2010) and Rubenstein, PRA (2012) [167;169]. Other papers not discussed here published during my Ph.D. include Rubenstein, BPJ (2008) and Rubenstein, PRL (2012) [166;168]. The work in Chapters 6 and 7 is currently unpublished. [166] Brenda M. Rubenstein, Ivan Coluzza, and Mark A. Miller. Controlling the folding and substrate-binding of proteins using polymer brushes. Physical Review Letters, 108(20):208104, May 2012. [167] Brenda M. Rubenstein, J.E. Gubernatis, and J.D. Doll. Comparative Monte Carlo efficiency by Monte Carlo analysis. Physical Review E, 82(3):036701, September 2010. [168] Brenda M. Rubenstein and Laura J. Kaufman. The role of extracellular matrix in glioma invasion: a cellular Potts model approach. Biophysical Journal, 95(12):5661--5680, December 2008. [169] Brenda M. Rubenstein, Shiwei Zhang, and David R. Reichman. Finite-temperature auxiliary-field quantum Monte Carlo for Bose-Fermi mixtures. Physical Review A, 86

  4. Mean field simulation for Monte Carlo integration

    CERN Document Server

    Del Moral, Pierre

    2013-01-01

    In the last three decades, there has been a dramatic increase in the use of interacting particle methods as a powerful tool in real-world applications of Monte Carlo simulation in computational physics, population biology, computer sciences, and statistical machine learning. Ideally suited to parallel and distributed computation, these advanced particle algorithms include nonlinear interacting jump diffusions; quantum, diffusion, and resampled Monte Carlo methods; Feynman-Kac particle models; genetic and evolutionary algorithms; sequential Monte Carlo methods; adaptive and interacting Marko

  5. Device calibration impacts security of quantum key distribution

    CERN Document Server

    Jain, Nitin; Lydersen, Lars; Wiechers, Carlos; Elser, Dominique; Marquardt, Christoph; Makarov, Vadim; Leuchs, Gerd

    2011-01-01

    Characterizing the physical channel and calibrating the cryptosystem hardware are prerequisites for establishing a quantum channel for quantum key distribution (QKD). Moreover, an inappropriately implemented calibration routine can open a fatal security loophole. We propose and experimentally demonstrate a method to induce a large temporal detector efficiency mismatch in a commercial QKD system by deceiving a channel length calibration routine. We then devise an optimal and realistic strategy based on a faked-state attack that breaks the security of the cryptosystem. A fix for this loophole is also suggested.

  6. Multidimensional stochastic approximation Monte Carlo.

    Science.gov (United States)

    Zablotskiy, Sergey V; Ivanov, Victor A; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present g(E_{1},E_{2}). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E_{1}+E_{2}) from g(E_{1},E_{2}). PMID:27415383
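
    As a concrete, minimal illustration of the flat-histogram machinery (our own toy sketch in Python, not the authors' code), the following runs one-dimensional SAMC on a 20-spin periodic 1D Ising chain, whose density of states is known exactly (g = 2·C(20, k) for k antiparallel bonds), so the estimate can be checked:

        import math, random

        random.seed(0)
        N = 20                                      # spins, periodic boundary
        E_levels = list(range(-N, N + 1, 4))        # E = 2k - N for even k
        bin_of = {E: i for i, E in enumerate(E_levels)}

        s = [random.choice((-1, 1)) for _ in range(N)]
        E = -sum(s[i] * s[(i + 1) % N] for i in range(N))
        theta = [0.0] * len(E_levels)               # running log-g estimates
        t0 = 5000.0

        for t in range(1, 1_000_000):
            i = random.randrange(N)                 # propose a single spin flip
            dE = 2 * s[i] * (s[i - 1] + s[(i + 1) % N])
            # Flat-histogram acceptance: target density 1/g_est(E).
            if random.random() < math.exp(theta[bin_of[E]] - theta[bin_of[E + dE]]):
                s[i] = -s[i]
                E += dE
            # SAMC gain sequence; the constant -gamma/m drift term is dropped
            # since it cancels in differences of theta.
            theta[bin_of[E]] += t0 / max(t0, t)

        # Compare relative log g(E) to the exact values, anchored at E = -N.
        for E_, th in zip(E_levels, theta):
            k = (E_ + N) // 2
            exact = math.log(2 * math.comb(N, k)) - math.log(2)
            print(E_, round(th - theta[0], 2), round(exact, 2))

    Long runs drive the chain into even the rarest bins, which is exactly the behaviour the abstract exploits for multidimensional densities of states.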

  7. Efficient Bayesian inference of subsurface flow models using nested sampling and sparse polynomial chaos surrogates

    KAUST Repository

    Elsheikh, Ahmed H.

    2014-02-01

    An efficient Bayesian calibration method based on the nested sampling (NS) algorithm and non-intrusive polynomial chaos method is presented. Nested sampling is a Bayesian sampling algorithm that builds a discrete representation of the posterior distributions by iteratively re-focusing a set of samples to high likelihood regions. NS allows representing the posterior probability density function (PDF) with a smaller number of samples and reduces the curse of dimensionality effects. The main difficulty of the NS algorithm is in the constrained sampling step which is commonly performed using a random walk Markov Chain Monte-Carlo (MCMC) algorithm. In this work, we perform a two-stage sampling using a polynomial chaos response surface to filter out rejected samples in the Markov Chain Monte-Carlo method. The combined use of nested sampling and the two-stage MCMC based on approximate response surfaces provides significant computational gains in terms of the number of simulation runs. The proposed algorithm is applied for calibration and model selection of subsurface flow models. © 2013.
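
    The two-stage idea, using a cheap response surface to pre-screen proposals before paying for the expensive model, can be sketched as a delayed-acceptance Metropolis step. The Python fragment below is our own illustration; `full` and `surr` are placeholder log-posteriors, not the paper's models:

        import math, random

        def delayed_acceptance_step(x, lp_full_x, full, surr, step=0.5):
            """One delayed-acceptance Metropolis update (symmetric proposal).
            Stage 1 screens with the cheap surrogate; only survivors get a
            full evaluation, whose acceptance ratio corrects for the
            surrogate so the exact posterior is preserved."""
            y = x + random.gauss(0.0, step)
            # Stage 1: cheap screening (min(1, e^d) written overflow-safely).
            if random.random() >= math.exp(min(0.0, surr(y) - surr(x))):
                return x, lp_full_x                 # rejected cheaply
            # Stage 2: correction with the full (expensive) posterior.
            lp_full_y = full(y)
            a2 = (lp_full_y - lp_full_x) + (surr(x) - surr(y))
            if random.random() < math.exp(min(0.0, a2)):
                return y, lp_full_y
            return x, lp_full_x

        # Toy demo: exact posterior N(0, 1), deliberately biased surrogate.
        full = lambda x: -0.5 * x * x
        surr = lambda x: -0.5 * ((x - 0.1) / 1.2) ** 2
        x, lp, xs = 0.0, full(0.0), []
        for _ in range(50_000):
            x, lp = delayed_acceptance_step(x, lp, full, surr)
            xs.append(x)
        print(sum(xs) / len(xs))    # ~ 0 despite the biased surrogate

    The savings come from the first stage: most bad proposals never trigger a full model run, which mirrors the paper's reported reduction in simulation calls.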

  8. Calibrating Flavour Tagging Algorithms using $t\\bar{t}$ events with the ATLAS Detector at $\\sqrt{s}=13$ TeV

    CERN Document Server

    Bell, Andrew Stuart; The ATLAS collaboration

    2016-01-01

    $b$-jets are identified in the ATLAS experiment using a complex multivariate algorithm. In many analyses, the performance of this algorithm in signal and background processes is estimated using Monte Carlo simulated events. As the event and detector simulation may not give a perfect account of real events, and given the large number of changes between Run-1 and Run-2, it is vital to calibrate the performance of this algorithm with data. The $t\\bar{t}$ Probability Distribution Function method has been employed to measure the $b$-jet identification efficiency in data using a combinatorial likelihood approach. Results are presented incorporating the first $3.2~\\text{fb}^{-1}$ of $pp$ collisions at $\\sqrt{s} = 13~\\text{TeV}$ collected by the ATLAS detector during Run-2.
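
    A drastically simplified cousin of the combinatorial likelihood is plain tag counting: in dileptonic $t\bar{t}$ events with exactly two b-jet candidates, the numbers of events with 0, 1 or 2 tags constrain the efficiency through a binomial model. The Python sketch below uses hypothetical counts and ignores mistags, correlations and backgrounds, unlike the real ATLAS method:

        from math import log

        # Hypothetical tag-count data: events with exactly two b-jet candidates.
        n = {0: 180, 1: 860, 2: 960}

        def nll(eps):
            """Negative log-likelihood: each event has two b jets, each tagged
            independently with probability eps (pure binomial toy model)."""
            p = {0: (1 - eps) ** 2, 1: 2 * eps * (1 - eps), 2: eps ** 2}
            return -sum(n[k] * log(p[k]) for k in n)

        # Maximize the likelihood by a simple scan (a minimizer would do too).
        eps_hat = min((e / 1000 for e in range(1, 1000)), key=nll)
        print(f"fitted b-tagging efficiency: {eps_hat:.3f}")
        # Analytic cross-check: total tags / total jets = 2780/4000 = 0.695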

  9. Experimental determination of electron-hole pair creation energy in 4H-SiC epitaxial layer: An absolute calibration approach

    Energy Technology Data Exchange (ETDEWEB)

    Chaudhuri, Sandeep K.; Zavalla, Kelvin J.; Mandal, Krishna C. [Department of Electrical Engineering, University of South Carolina, Columbia, South Carolina 29208 (United States)

    2013-01-21

    Electron-hole pair creation energy (ε) has been determined from alpha spectroscopy using 4H-SiC epitaxial layer Schottky detectors and a pulser calibration technique. We report an experimentally obtained ε value of 7.28 eV in 4H-SiC. The obtained ε value and theoretical models were used to calculate a Fano factor of 0.128 for 5.48 MeV alpha particles. The contributions of different factors to the ultimate alpha peak broadening in pulse-height spectra were determined using the calculated ε value and Monte-Carlo simulations. The determined ε value was verified using a drift-diffusion model of the variation of charge collection efficiency with applied bias.
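
    The quoted numbers fix the statistical limit on the detector's energy resolution. A back-of-envelope check in Python using the standard relation FWHM = 2.355·sqrt(F·ε·E) (our own arithmetic, not taken from the paper):

        import math

        eps = 7.28      # electron-hole pair creation energy in 4H-SiC (eV)
        F = 0.128       # Fano factor from the paper
        E = 5.48e6      # alpha-particle energy (eV)

        n_pairs = E / eps                           # mean number of pairs created
        fwhm_eV = 2.355 * math.sqrt(F * eps * E)    # Fano-limited line width

        print(f"pairs created: {n_pairs:.3e}")
        print(f"Fano-limited FWHM: {fwhm_eV / 1e3:.1f} keV "
              f"({100 * fwhm_eV / E:.3f}% at 5.48 MeV)")

    This gives roughly 5.3 keV, i.e. about 0.1%; measured peaks are broader, which is why the paper decomposes the remaining broadening into electronic noise and other contributions.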

  10. 1-D EQUILIBRIUM DISCRETE DIFFUSION MONTE CARLO

    Energy Technology Data Exchange (ETDEWEB)

    T. EVANS; ET AL

    2000-08-01

    We present a new hybrid Monte Carlo method for 1-D equilibrium diffusion problems in which the radiation field coexists with matter in local thermodynamic equilibrium. This method, the Equilibrium Discrete Diffusion Monte Carlo (EqDDMC) method, combines Monte Carlo particles with spatially discrete diffusion solutions. We verify the EqDDMC method with computational results from three slab problems. The EqDDMC method represents an incremental step toward applying this hybrid methodology to non-equilibrium diffusion, where it could be simultaneously coupled to Monte Carlo transport.

  11. Potential of modern technologies for improvement of in vivo calibration.

    Science.gov (United States)

    Franck, D; de Carlan, L; Fisher, H; Pierrat, N; Schlagbauer, M; Wahl, W

    2007-01-01

    In the frame of the IDEA project, a research programme has been carried out to study the potential of reconstructing numerical anthropomorphic phantoms, based on personal physiological data obtained by computed tomography (CT) and magnetic resonance imaging (MRI), for calibration in in vivo monitoring. As a result, new procedures have been developed that take advantage of recent progress in image processing codes, which make it possible, after scanning and rapidly reconstructing a realistic voxel phantom, to convert the whole measurement geometry into a computer file used on line for MCNP (Monte Carlo N-Particle code) calculations. The present paper overviews the major capabilities of the OEDIPE software studied in the frame of the IDEA project, using the examples of calibration for lung monitoring and whole-body counting of a real patient.

  12. On chromatic and geometrical calibration

    DEFF Research Database (Denmark)

    Folm-Hansen, Jørgen

    1999-01-01

    The main subject of the present thesis is different methods for the geometrical and chromatic calibration of cameras in various environments. For the monochromatic issues of the calibration we present the acquisition of monochrome images, the classic monochrome aberrations and the various sources of non-uniformity of the illumination of the image plane. Only the image deforming aberrations and the non-uniformity of illumination are included in the calibration models. The topics of the pinhole camera model and the extension to the Direct Linear Transform (DLT) are described. It is shown how ... We present the implementation of a complete calibration method for an accurate colour texture measurement device called VMX2000, the calibration for uneven laser sheet illumination in a flow measuring system and the use of automatic detection of calibration targets for a DLT/warping in a 3D PIV ...

  13. Calibration procedure for zenith plummets

    Directory of Open Access Journals (Sweden)

    Jelena GUČEVIĆ

    2013-09-01

    Full Text Available Zenith plummets are used mainly in applied geodesy, in civil engineering surveying, for the materialization of the local vertical. The error of the vertical deflection of the instrument is transferred directly to the error of the construction being observed. That is why a proper calibration procedure for the zenith plummet is required. The metrological laboratory of the Faculty of Civil Engineering in Belgrade developed such a calibration procedure. Here we present a mathematical model of the calibration and some selected results.

  14. Calibration procedure for zenith plummets

    OpenAIRE

    Jelena GUČEVIĆ; Delčev, Siniša; Vukan OGRIZOVIĆ

    2013-01-01

    Zenith plummets are used mainly in applied geodesy, in civil engineering surveying, for the materialization of the local vertical. The error of the vertical deflection of the instrument is transferred directly to the error of the construction being observed. That is why a proper calibration procedure for the zenith plummet is required. The metrological laboratory of the Faculty of Civil Engineering in Belgrade developed such a calibration procedure. Here we present a mathematical model of the calibration and som...

  15. Calibration of neutron albedo dosemeters.

    Science.gov (United States)

    Schwartz, R B; Eisenhauer, C M

    2002-01-01

    It is shown that, by calibrating neutron albedo dosemeters under the proper conditions, two complicating effects essentially cancel out, allowing accurate calibrations with no need for explicit corrections. The 'proper conditions' are: a large room (≥ 8 m on a side), use of a D2O-moderated 252Cf source, and a source-to-phantom calibration distance of approximately 70 cm. PMID:12212898

  16. Radiological Calibration and Standards Facility

    Data.gov (United States)

    Federal Laboratory Consortium — PNNL maintains a state-of-the-art Radiological Calibration and Standards Laboratory on the Hanford Site at Richland, Washington. Laboratory staff provide expertise...

  17. Calibration Techniques for VERITAS

    CERN Document Server

    Hanna, David

    2007-01-01

    VERITAS is an array of four identical telescopes designed for detecting and measuring astrophysical gamma rays with energies in excess of 100 GeV. Each telescope uses a 12 m diameter reflector to collect Cherenkov light from air showers initiated by incident gamma rays and direct it onto a `camera' comprising 499 photomultiplier tubes read out by flash ADCs. We describe here calibration methods used for determining the values of the parameters which are necessary for converting the digitized PMT pulses to gamma-ray energies and directions. Use of laser pulses to determine and monitor PMT gains is discussed, as are measurements of the absolute throughput of the telescopes using muon rings.

  18. RX130 Robot Calibration

    Science.gov (United States)

    Fugal, Mario

    2012-10-01

    In order to create precision magnets for an experiment at Oak Ridge National Laboratory, a new reverse engineering method has been proposed that uses the magnetic scalar potential to solve for the currents necessary to produce the desired field. To make the magnet, it is proposed to use a copper-coated G10 form upon which a drill, mounted on a robotic arm, will carve the wires. The accuracy required in manufacturing the wires exceeds nominal robot capabilities. However, thanks to their rigidity and their precision servo motors with harmonic gear drives, some robots are capable of meeting this requirement with proper calibration. The goal of this project is to improve the accuracy of an RX130 to within 35 microns, the accuracy necessary for the wires. Using feedback from a displacement sensor or camera, together with inverse kinematics, it is possible to achieve this accuracy.

  19. Accelerate Monte Carlo Simulations with Restricted Boltzmann Machines

    CERN Document Server

    Huang, Li

    2016-01-01

    Despite their exceptional flexibility and popularity, the Monte Carlo methods often suffer from slow mixing times for challenging statistical physics problems. We present a general strategy to overcome this difficulty by adopting ideas and techniques from the machine learning community. We fit the unnormalized probability of the physical model to a feedforward neural network and reinterpret the architecture as a restricted Boltzmann machine. Then, exploiting its feature detection ability, we utilize the restricted Boltzmann machine for efficient Monte Carlo updates and to speed up the simulation of the original physical system. We implement these ideas for the Falicov-Kimball model and demonstrate improved acceptance ratio and autocorrelation time near the phase transition point.

  20. Extending the alias Monte Carlo sampling method to general distributions

    International Nuclear Information System (INIS)

    The alias method is a Monte Carlo sampling technique that offers significant advantages over more traditional methods. It equals the accuracy of table lookup and the speed of equal probable bins. The original formulation of this method sampled from discrete distributions and was easily extended to histogram distributions. We have extended the method further to applications more germane to Monte Carlo particle transport codes: continuous distributions. This paper presents the alias method as originally derived and our extensions to simple continuous distributions represented by piecewise linear functions. We also present a method to interpolate accurately between distributions tabulated at points other than the point of interest. We present timing studies that demonstrate the method's increased efficiency over table lookup and show further speedup achieved through vectorization. 6 refs., 12 figs., 2 tabs
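
    For the discrete case the paper starts from, the table construction (often credited to Walker and Vose) is compact enough to sketch. Below is a Python version of the O(1)-per-sample alias method; it covers only the discrete case, not the continuous extensions described above:

        import random

        def build_alias(probs):
            """Build alias tables for a discrete distribution (Vose's method)."""
            n = len(probs)
            scaled = [p * n for p in probs]
            prob, alias = [0.0] * n, [0] * n
            small = [i for i, p in enumerate(scaled) if p < 1.0]
            large = [i for i, p in enumerate(scaled) if p >= 1.0]
            while small and large:
                s, l = small.pop(), large.pop()
                prob[s], alias[s] = scaled[s], l
                scaled[l] -= 1.0 - scaled[s]        # donate mass to the small bin
                (small if scaled[l] < 1.0 else large).append(l)
            for i in small + large:                 # leftovers hold exactly 1
                prob[i] = 1.0
            return prob, alias

        def sample(prob, alias):
            """Draw one index: uniform bin choice, then one biased coin flip."""
            i = random.randrange(len(prob))
            return i if random.random() < prob[i] else alias[i]

        prob, alias = build_alias([0.1, 0.2, 0.3, 0.4])
        counts = [0] * 4
        for _ in range(100_000):
            counts[sample(prob, alias)] += 1
        print([c / 100_000 for c in counts])        # ~ [0.1, 0.2, 0.3, 0.4]

    Each draw costs one uniform deviate, one table lookup and one comparison regardless of the number of bins, which is the speed advantage the abstract refers to.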

  1. Subtle Monte Carlo Updates in Dense Molecular Systems

    DEFF Research Database (Denmark)

    Bottaro, Sandro; Boomsma, Wouter; Johansson, Kristoffer E.;

    2012-01-01

    Although Markov chain Monte Carlo (MC) simulation is a potentially powerful approach for exploring conformational space, it has been unable to compete with molecular dynamics (MD) in the analysis of high density structural states, such as the native state of globular proteins. Here, we introduce ... as correlations in a multivariate Gaussian distribution. We demonstrate that our method reproduces structural variation in proteins with greater efficiency than current state-of-the-art Monte Carlo methods and has real-time simulation performance on par with molecular dynamics simulations. The presented results suggest our method as a valuable tool in the study of molecules in atomic detail, offering a potential alternative to molecular dynamics for probing long time-scale conformational transitions.

  2. Advanced interacting sequential Monte Carlo sampling for inverse scattering

    Science.gov (United States)

    Giraud, F.; Minvielle, P.; Del Moral, P.

    2013-09-01

    The following electromagnetism (EM) inverse problem is addressed. It consists in estimating the local radioelectric properties of materials covering an object from global EM scattering measurements at various incidences and wave frequencies. This large scale ill-posed inverse problem is explored by an intensive exploitation of an efficient 2D Maxwell solver, distributed on high performance computing machines. Applied to a large training data set, a statistical analysis reduces the problem to a simpler probabilistic metamodel, from which Bayesian inference can be performed. Considering the radioelectric properties as a hidden dynamic stochastic process that evolves according to the frequency, it is shown how advanced Markov chain Monte Carlo methods, known as sequential Monte Carlo or interacting particle methods, can benefit from this structure and provide local EM property estimates.

  3. A Monte Carlo Model of Light Propagation in Nontransparent Tissue

    Institute of Scientific and Technical Information of China (English)

    姚建铨; 朱水泉; 胡海峰; 王瑞康

    2004-01-01

    To sharpen the imaging of structures, it is vital to develop a convenient and efficient quantitative algorithm for optical coherence tomography (OCT) sampling. In this paper a new Monte Carlo model is set up, and how light propagates in bio-tissue is analyzed by means of mathematical and physical equations. We study how the intensities of Class 1 and Class 2 light at different wavelengths change with permeation depth, how the Class 1 (signal) light intensity changes with probing depth, and how the angularly resolved diffuse reflectance and diffuse transmittance change with the exit angle. The results show that the Monte Carlo simulation results are consistent with the theoretical data.
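
    The core loop of such a simulation, launching photon packets into a slab and tallying diffuse reflectance and transmittance, fits in a few lines. A deliberately simplified Python sketch with isotropic scattering and made-up optical coefficients (the paper's model also handles anisotropy and wavelength dependence):

        import math, random

        random.seed(42)
        mu_a, mu_s, d = 0.1, 10.0, 1.0    # absorption, scattering (1/mm), slab (mm)
        mu_t = mu_a + mu_s
        albedo = mu_s / mu_t
        N = 50_000
        R = T = 0.0                       # diffuse reflectance / transmittance

        for _ in range(N):
            z, uz, w = 0.0, 1.0, 1.0      # depth, direction cosine, packet weight
            while True:
                # Free path from the exponential attenuation law.
                z += uz * (-math.log(1.0 - random.random()) / mu_t)
                if z < 0.0:
                    R += w; break         # escaped through the top
                if z > d:
                    T += w; break         # escaped through the bottom
                w *= albedo               # implicit absorption
                uz = 2.0 * random.random() - 1.0    # isotropic rescatter
                if w < 1e-4:              # Russian roulette on faint packets
                    if random.random() < 0.1: w /= 0.1
                    else: break

        print(f"R = {R/N:.3f}, T = {T/N:.3f}")

    Separating the tallied packets by number of scattering events is how Class 1 (weakly scattered, signal) and Class 2 (multiply scattered) light would be distinguished in a fuller model.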

  4. Monte Carlo simulation of tomography techniques using the platform Gate

    International Nuclear Information System (INIS)

    Simulations play a key role in functional imaging, with applications ranging from scanner design to scatter correction and protocol optimisation. GATE (Geant4 Application for Tomographic Emission) is a platform for Monte Carlo simulation. It is based on Geant4 to generate and track particles and to model geometry and physics processes. Explicit modelling of time includes detector motion, time of flight and tracer kinetics. Interfaces to voxellised models and image reconstruction packages improve the integration of GATE in the global modelling cycle. In this work Monte Carlo simulations are used to understand and optimise the gamma camera's performance. We study the effect of the distance between source and collimator, the diameter of the holes and the thickness of the collimator on the spatial resolution, energy resolution and efficiency of the gamma camera. We also study the reduction of the simulation time and implement a model of the left ventricle in GATE. (Author). 7 refs.

  5. Monte Carlo Euler approximations of HJM term structure financial models

    KAUST Repository

    Björk, Tomas

    2012-11-22

    We present Monte Carlo-Euler methods for a weak approximation problem related to the Heath-Jarrow-Morton (HJM) term structure model, based on Itô stochastic differential equations in infinite dimensional spaces, and prove strong and weak error convergence estimates. The weak error estimates are based on stochastic flows and discrete dual backward problems, and they can be used to identify different error contributions arising from time and maturity discretization as well as the classical statistical error due to finite sampling. Explicit formulas for efficient computation of sharp error approximation are included. Due to the structure of the HJM models considered here, the computational effort devoted to the error estimates is low compared to the work to compute Monte Carlo solutions to the HJM model. Numerical examples with known exact solution are included in order to show the behavior of the estimates. © 2012 Springer Science+Business Media Dordrecht.
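
    The weak-approximation setting can be illustrated on the scalar test case with a known solution, geometric Brownian motion, where E[X_T] = X0·e^{rT} is exact and the Monte Carlo-Euler estimate carries both a time-discretization bias and a statistical error. A small Python sketch of this (our own example, far simpler than the infinite-dimensional HJM setting):

        import math, random

        random.seed(7)
        r, sigma, X0, T = 0.05, 0.2, 1.0, 1.0
        M, Nsteps = 200_000, 16
        dt = T / Nsteps

        acc = 0.0
        for _ in range(M):                  # forward Euler paths of GBM
            X = X0
            for _ in range(Nsteps):
                X += r * X * dt + sigma * X * math.sqrt(dt) * random.gauss(0, 1)
            acc += X

        est = acc / M
        exact = X0 * math.exp(r * T)        # E[X_T] for GBM is known exactly
        print(f"MC-Euler: {est:.5f}  exact: {exact:.5f}  error: {est - exact:+.5f}")

    The total error splits into an O(dt) weak time-discretization bias and an O(1/sqrt(M)) statistical part, the same two contributions the paper's dual-based estimates separate.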

  6. Path integral Monte Carlo and the electron gas

    Science.gov (United States)

    Brown, Ethan W.

    Path integral Monte Carlo is a proven method for accurately simulating quantum mechanical systems at finite temperature. By stochastically sampling Feynman's path integral representation of the quantum many-body density matrix, path integral Monte Carlo includes non-perturbative effects like thermal fluctuations and particle correlations in a natural way. Over the past 30 years, path integral Monte Carlo has been successfully employed to study the low density electron gas, high-pressure hydrogen, and superfluid helium. For systems where the role of Fermi statistics is important, however, traditional path integral Monte Carlo simulations have an exponentially decreasing efficiency with decreased temperature and increased system size. In this thesis, we work towards improving this efficiency, both through approximate and exact methods, as specifically applied to the homogeneous electron gas. We begin with a brief overview of the current state of atomic simulations at finite temperature before we delve into a pedagogical review of the path integral Monte Carlo method. We then spend some time discussing the one major issue preventing exact simulation of Fermi systems, the sign problem. Afterwards, we introduce a way to circumvent the sign problem in PIMC simulations through a fixed-node constraint. We then apply this method to the homogeneous electron gas at a large swath of densities and temperatures in order to map out the warm-dense matter regime. The electron gas can be a representative model for a host of real systems, from simple metals to stellar interiors. However, its most common use is as input into density functional theory. To this end, we aim to build an accurate representation of the electron gas from the ground state to the classical limit and examine its use in finite-temperature density functional formulations. The latter half of this thesis focuses on possible routes beyond the fixed-node approximation. As a first step, we utilize the variational

  7. Self Calibrated Wireless Distributed Environmental Sensory Networks

    Science.gov (United States)

    Fishbain, Barak; Moreno-Centeno, Erick

    2016-04-01

    Recent advances in sensory and communication technologies have made Wireless Distributed Environmental Sensory Networks (WDESN) technically and economically feasible. WDESNs are an unprecedented tool for studying many environmental processes in a new way. However, the calibration process is a major obstacle to WDESNs becoming common practice. Here, we present a new, robust and efficient method for aggregating measurements acquired by an uncalibrated WDESN and producing accurate estimates of the observed environmental variable's true levels, rendering the network self-calibrated. The suggested method is novel both in group decision-making and in environmental sensing, offering a valuable tool for aggregating distributed environmental monitoring data. Applying the method to an extensive real-life air-pollution dataset gave markedly more accurate results than both the common practice and the state of the art.

  8. Source geometry factors for HDR 192Ir brachytherapy secondary standard well-type ionization chamber calibrations

    Science.gov (United States)

    Shipley, D. R.; Sander, T.; Nutbrown, R. F.

    2015-03-01

    Well-type ionization chambers are used for measuring the source strength of radioactive brachytherapy sources before clinical use. Initially, the well chambers are calibrated against a suitable national standard. For high dose rate (HDR) 192Ir, this calibration is usually a two-step process. Firstly, the calibration source is traceably calibrated against an air kerma primary standard in terms of either reference air kerma rate or air kerma strength. The calibrated 192Ir source is then used to calibrate the secondary standard well-type ionization chamber. Calibration laboratories are usually only equipped with one type of HDR 192Ir source. If the clinical source type is different from that used for the calibration of the well chamber at the standards laboratory, a source geometry factor, ksg, is required to correct the calibration coefficient for any change of the well chamber response due to geometric differences between the sources. In this work we present source geometry factors for six different HDR 192Ir brachytherapy sources which have been determined using Monte Carlo techniques for a specific ionization chamber, the Standard Imaging HDR 1000 Plus well chamber with a type 70010 HDR iridium source holder. The calculated correction factors were normalized to the old and new type of calibration source used at the National Physical Laboratory. With the old Nucletron microSelectron-v1 (classic) HDR 192Ir calibration source, ksg was found to be in the range 0.983 to 0.999 and with the new Isodose Control HDR 192Ir Flexisource ksg was found to be in the range 0.987 to 1.004 with a relative uncertainty of 0.4% (k = 2). Source geometry factors for different combinations of calibration sources, clinical sources, well chambers and associated source holders, can be calculated with the formalism discussed in this paper.
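
    One plausible way to write the correction, given here only as an illustration and not necessarily the paper's exact formalism, is as a ratio of Monte Carlo-calculated chamber readings M per unit reference air kerma rate \dot{K}_R, which then rescales the calibration coefficient N from the calibration source to the clinical source:

        k_{sg} = \frac{(M/\dot{K}_R)_{\mathrm{cal}}}{(M/\dot{K}_R)_{\mathrm{clin}}},
        \qquad
        N_{\mathrm{clin}} = k_{sg} \, N_{\mathrm{cal}}

    With a convention of this kind, the quoted ranges of 0.983 to 1.004 say that the well-chamber response differs by at most about 2% between the source designs studied.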

  9. Monte Carlo simulations and dosimetric studies of an irradiation facility

    Science.gov (United States)

    Belchior, A.; Botelho, M. L.; Vaz, P.

    2007-09-01

    There is an increasing utilization of ionizing radiation for industrial applications. Additionally, radiation technology offers a variety of advantages in areas such as sterilization and food preservation. For these applications, dosimetric tests are of crucial importance in order to assess the dose distribution throughout the sample being irradiated. The use of Monte Carlo methods and computational tools in support of the assessment of dose distributions in irradiation facilities can prove to be economically effective, representing savings in the utilization of dosemeters, among other benefits. One of the purposes of this study is the development of a Monte Carlo simulation, using a state-of-the-art computational tool, MCNPX, in order to determine the dose distribution inside a cobalt-60 irradiation facility. This irradiation facility is currently in operation at the ITN campus and will feature an automation and robotics component, which will allow its remote utilization by an external user under the REEQ/996/BIO/2005 project. The detailed geometrical description of the irradiation facility has been implemented in MCNPX, which features an accurate and full simulation of the electron-photon processes involved. The validation of the simulation results was performed by chemical dosimetry methods, namely a Fricke solution. The Fricke dosimeter is a standard dosimeter and is widely used in radiation processing for calibration purposes.

  10. Bayesian calibration of the Community Land Model using surrogates

    Energy Technology Data Exchange (ETDEWEB)

    Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi; Swiler, Laura Painton

    2014-02-01

    We present results from the Bayesian calibration of hydrological parameters of the Community Land Model (CLM), which is often used in climate simulations and Earth system models. A statistical inverse problem is formulated for three hydrological parameters, conditional on observations of latent heat surface fluxes over 48 months. Our calibration method uses polynomial and Gaussian process surrogates of the CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. Posterior probability densities for the parameters are developed for two sites with different soil and vegetation covers. Our method also allows us to examine the structural error in CLM under two error models. We find that surrogate models can be created for CLM in most cases. The posterior distributions are more predictive than the default parameter values in CLM. Climatologically averaging the observations does not modify the parameters' distributions significantly. The structural error model reveals a correlation time-scale which can be used to identify the physical process that could be contributing to it. While the calibrated CLM has a higher predictive skill, the calibration is under-dispersive.
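
    Stripped to its essentials, the approach is: fit a cheap surrogate to a handful of expensive model runs, then run MCMC against the surrogate. A self-contained Python toy (quadratic-style polynomial surrogate, Metropolis sampler; `expensive_model` is a placeholder, not CLM):

        import numpy as np

        rng = np.random.default_rng(3)

        def expensive_model(theta):        # stand-in for a CLM-like simulator
            return np.sin(3.0 * theta) + theta ** 2

        # 1) Build a polynomial surrogate from a handful of "expensive" runs.
        train_x = np.linspace(-2.0, 2.0, 15)
        train_y = expensive_model(train_x)
        coeffs = np.polyfit(train_x, train_y, 6)
        surrogate = lambda t: np.polyval(coeffs, t)

        # 2) Metropolis sampling of p(theta | obs) using only the surrogate.
        obs, noise = expensive_model(0.7), 0.1
        def logpost(t):
            return -0.5 * ((obs - surrogate(t)) / noise) ** 2   # flat prior

        theta, lp, chain = 0.0, logpost(0.0), []
        for _ in range(20_000):
            prop = theta + rng.normal(0.0, 0.3)
            lp_prop = logpost(prop)
            if np.log(rng.random()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            chain.append(theta)

        print("posterior mean:", np.mean(chain[5000:]))

    The paper's Gaussian process surrogates and structural-error terms are richer than this polynomial stand-in, but the division of labour, surrogate for speed, MCMC for the posterior, is the same.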

  11. Revised absolute amplitude calibration of the LOPES experiment

    CERN Document Server

    Link, K; Apel, W D; Arteaga-Velázquez, J C; Bähren, L; Bekk, K; Bertaina, M; Biermann, P L; Blümer, J; Bozdog, H; Brancus, I M; Cantoni, E; Chiavassa, A; Daumiller, K; de Souza, V; Di Pierro, F; Doll, P; Engel, R; Falcke, H; Fuchs, B; Gemmeke, H; Grupen, C; Haungs, A; Heck, D; Hiller, R; Hörandel, J R; Horneffer, A; Huber, D; Isar, P G; Kampert, K-H; Kang, D; Krömer, O; Kuijpers, J; Łuczak, P; Ludwig, M; Mathes, H J; Melissas, M; Morello, C; Oehlschläger, J; Palmieri, N; Pierog, T; Rautenberg, J; Rebel, H; Roth, M; Rühle, C; Saftoiu, A; Schieler, H; Schmidt, A; Schoo, S; Schröder, F G; Sima, O; Toma, G; Trinchero, G C; Weindl, A; Wochele, J; Zabierowski, J; Zensus, J A

    2015-01-01

    One of the main aims of the LOPES experiment was the evaluation of the absolute amplitude of the radio signal of air showers. This is of special interest since the radio technique offers the possibility of an independent and highly precise determination of the energy scale of cosmic rays on the basis of signal predictions from Monte Carlo simulations. For the calibration of the amplitude measured by LOPES we used an external source. Previous comparisons of LOPES measurements and simulations of the radio signal amplitude predicted by CoREAS revealed a discrepancy of the order of a factor of two. The manufacturer recently re-measured the reference calibration source, this time under free-field conditions. The updated calibration values lead to a lowering of the reconstructed electric field measured by LOPES by a factor of $2.6 \pm 0.2$ and therefore to a significantly better agreement with the CoREAS simulations. We discuss the updated calibration and its impact on the LOPES analysis results.

  12. Prediction of beam hardening artefacts in computed tomography using Monte Carlo simulations

    Science.gov (United States)

    Thomsen, M.; Knudsen, E. B.; Willendrup, P. K.; Bech, M.; Willner, M.; Pfeiffer, F.; Poulsen, M.; Lefmann, K.; Feidenhans'l, R.

    2015-01-01

    We show how radiological images of both single- and multi-material samples can be simulated using the Monte Carlo simulation tool McXtrace and how these images can be used to make a three-dimensional reconstruction. Good numerical agreement between the X-ray attenuation coefficients in experimental and simulated data can be obtained, which allows us to use simulated projections in the linearisation procedure for single-material samples and in that way reduce beam hardening artefacts. The simulations can be used to predict beam hardening artefacts in multi-material samples with complex geometry, as illustrated with an example. Linearisation requires knowledge of the X-ray transmission at varying sample thickness, but in some cases homogeneous calibration phantoms are hard to manufacture, which affects the accuracy of the calibration. Using simulated data overcomes the manufacturing problems and in that way improves the calibration.
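
    The linearisation step itself is straightforward once simulated transmission data are available: fit the polychromatic attenuation -ln(T) against the ideal monochromatic attenuation at known thicknesses, then map every projection value through the fitted curve. A sketch under these assumptions, with made-up transmission values:

        import numpy as np

        # Simulated calibration data for a single-material sample:
        # thickness t (cm) and polychromatic transmission T(t).
        t = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])
        T = np.array([1.0, 0.62, 0.41, 0.20, 0.11, 0.065])

        p_poly = -np.log(T)            # measured polychromatic attenuation
        mu_mono = 0.95                 # assumed effective monochromatic mu (1/cm)
        p_mono = mu_mono * t           # ideal linear attenuation

        coeff = np.polyfit(p_poly, p_mono, deg=3)   # linearisation curve

        def linearise(projection):
            """Map polychromatic projection values onto the linear scale."""
            return np.polyval(coeff, projection)

        print(linearise(-np.log(0.30)))  # corrected value for a measured transmission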

  13. The language of Carlos Alonso

    Directory of Open Access Journals (Sweden)

    Bárbara Bustamante

    2005-10-01

    Full Text Available The talent of Carlos Alonso (Argentina, 1929) has succeeded in forging a language with a style of its own. His drawings, paintings, pastels and inks, collages and engravings fix the projection of his subjectivity in the visual field. Both image and word make explicit a critical vision of reality that puts the viewer under tension, forcing a reflective stance committed to the message; this is the aspect most highlighted by art historians. The present study, however, aims to focus on the iconic and plastic aspects of his work.

  14. Monte Carlo lattice program KIM

    International Nuclear Information System (INIS)

    The Monte Carlo program KIM solves the steady-state linear neutron transport equation for a fixed-source problem or, by successive fixed-source runs, for the eigenvalue problem, in a two-dimensional thermal reactor lattice. Fluxes and reaction rates are the main quantities computed by the program, from which the power distribution and few-group averaged cross sections are derived. The simulation ranges from 10 MeV down to zero and includes anisotropic and inelastic scattering in the fast energy region, the epithermal Doppler broadening of the resonances of some nuclides, and the thermalization phenomenon, taking into account the thermal velocity distribution of some molecules. Besides the well-known combinatorial geometry, the program allows complex configurations to be represented by a discrete set of points, an approach that greatly improves calculation speed.

  15. General Monte Carlo code MONK

    International Nuclear Information System (INIS)

    The Monte Carlo code MONK is a general program written to provide a high degree of flexibility to the user. MONK is distinguished by its detailed representation of nuclear data in point form, i.e., the cross section is tabulated at specific energies instead of the more usual group representation. The nuclear data are unadjusted in the point form, but recently the code has been modified to accept adjusted group data as used in fast and thermal reactor applications. The various geometrical handling capabilities and importance sampling techniques are described. In addition to the nuclear data aspects, the following features are also described: geometrical handling routines, tracking cycles, neutron source and output facilities. 12 references. (U.S.)

  16. Challenges of Monte Carlo Transport

    Energy Technology Data Exchange (ETDEWEB)

    Long, Alex Roberts [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Computational Physics and Methods (CCS-2)

    2016-06-10

    These are slides from a presentation for the Parallel Summer School at Los Alamos National Laboratory. Solving discretized partial differential equations (PDEs) of interest can require a large number of computations. We can identify concurrency to allow parallel solution of discrete PDEs. Simulated particle histories can be used to solve the Boltzmann transport equation. Particle histories are independent in neutral particle transport, making them amenable to parallel computation. Physical parameters and method type determine the data dependencies of particle histories. Data requirements shape parallel algorithms for Monte Carlo. Parallel computational physics and parallel Monte Carlo are then discussed, and finally the results are given. The mesh passing method greatly simplifies the IMC implementation and allows simple load-balancing. Using MPI windows and passive, one-sided RMA further simplifies the implementation by removing target synchronization. The author is very interested in implementations of PGAS that may allow further optimization for one-sided, read-only memory access (e.g. OpenSHMEM). The MPICH_RMA_OVER_DMAPP option and library is required to make one-sided messaging scale on Trinitite; Moonlight scales poorly. Interconnect-specific libraries or functions are likely necessary to ensure performance. BRANSON has been used to directly compare the current standard method to a proposed method on idealized problems. The mesh passing algorithm performs well on problems that are designed to show the scalability of the particle passing method. BRANSON can now run load-imbalanced, dynamic problems. Potential avenues of improvement in the mesh passing algorithm will be implemented and explored. A suite of test problems that stress DD methods will elucidate a possible path forward for production codes.
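
    The independence of particle histories noted above is what makes naive parallelism work: each worker can transport its own batch with a private random-number stream, and only the tallies are reduced at the end. A minimal sketch with Python's multiprocessing (a toy slab-transmission problem, not the IMC code discussed in the slides):

        import numpy as np
        from multiprocessing import Pool

        SIGMA_T, SLAB = 1.0, 3.0          # total cross section (1/cm), slab width (cm)

        def transmit_batch(seed, n=100000):
            """Count particles crossing a purely absorbing slab (toy problem)."""
            rng = np.random.default_rng(seed)
            path = -np.log(rng.random(n)) / SIGMA_T     # sampled free paths
            return int(np.sum(path > SLAB))

        if __name__ == "__main__":
            with Pool(4) as pool:                        # 4 independent workers
                counts = pool.map(transmit_batch, range(4))
            print(sum(counts) / (4 * 100000))            # estimates exp(-3) ~ 0.0498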

  18. Carlos Restrepo. A true master

    Directory of Open Access Journals (Sweden)

    Pelayo Correa

    2009-12-01

    Full Text Available Carlos Restrepo was the first professor of Pathology and an illustrious member of the group of pioneers who founded the Faculty of Medicine of the Universidad del Valle. These pioneers converged on Cali in the 1950s, possessed of a renewing and creative spirit that undertook, with great success, the task of changing the academic culture of the Valle del Cauca. They found a placid society that enjoyed the generosity of its surroundings, with no desire to break with centuries-old traditions of a simple, contented way of life. When children had the desire and ability to pursue university studies, especially in medicine, the family sent them to cooler climates, which supposedly favored brain function and the accumulation of knowledge. The pioneers of medical education in the Valle del Cauca, largely recruited from national and foreign universities, knew very well that the local environment was no obstacle to a first-class university education. Carlos Restrepo was the prototype of that spirit of change and of the intellectual formation of the new generations. He showed it in many ways, in good part through his cheerful, extroverted, optimistic temperament and his easy, contagious laugh. But this amiable side of his personality did not hide his formative mission; he demanded dedication and hard work from his students, faithfully recorded in memorable caricatures that exaggerated his occasionally explosive temper. The group of pioneers devoted themselves fully (full time and exclusive dedication) and organized the new Faculty into well-defined and well-structured departments: Anatomy, Biochemistry, Physiology, Pharmacology, Pathology, Internal Medicine, Surgery, Obstetrics and Gynecology, Psychiatry, and Preventive Medicine. The departments integrated their primary functions of teaching, research, and service to the community. The center

  19. Frequency-domain deviational Monte Carlo method for linear oscillatory gas flows

    Science.gov (United States)

    Ladiges, Daniel R.; Sader, John E.

    2015-10-01

    Oscillatory non-continuum low Mach number gas flows are often generated by nanomechanical devices in ambient conditions. These flows can be simulated using a range of particle-based Monte Carlo techniques, which in their original form operate exclusively in the time domain. Recently, a frequency-domain weight-based Monte Carlo method was proposed [D. R. Ladiges and J. E. Sader, "Frequency-domain Monte Carlo method for linear oscillatory gas flows," J. Comput. Phys. 284, 351-366 (2015)] that exhibits superior statistical convergence when simulating oscillatory flows. This previous method used the Bhatnagar-Gross-Krook (BGK) kinetic model and contains a "virtual-time" variable to maintain the inherent time-marching nature of existing Monte Carlo algorithms. Here, we propose an alternative frequency-domain deviational Monte Carlo method that facilitates the use of a wider range of molecular models and more efficient collision/relaxation operators. We demonstrate this method with oscillatory Couette flow and the flow generated by an oscillating sphere, utilizing both the BGK kinetic model and hard-sphere particles. We also discuss how oscillatory motion of arbitrary time-dependence can be simulated using computationally efficient parallelization. As in the weight-based method, this deviational frequency-domain Monte Carlo method is shown to offer improved computational speed compared to the equivalent time-domain technique.

  20. Clinical implementation of full Monte Carlo dose calculation in proton beam therapy

    Energy Technology Data Exchange (ETDEWEB)

    Paganetti, Harald; Jiang, Hongyu; Parodi, Katia; Slopsema, Roelf; Engelsman, Martijn [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114 (United States)

    2008-09-07

    The goal of this work was to facilitate the clinical use of Monte Carlo proton dose calculation to support routine treatment planning and delivery. The Monte Carlo code Geant4 was used to simulate the treatment head setup, including a time-dependent simulation of modulator wheels (for broad beam modulation) and magnetic field settings (for beam scanning). Any patient-field-specific setup can be modeled according to the treatment control system of the facility. The code was benchmarked against phantom measurements. Using a simulation of the ionization chamber reading in the treatment head allows the Monte Carlo dose to be specified in absolute units (Gy per ionization chamber reading). Next, the capability of reading CT data information was implemented into the Monte Carlo code to model patient anatomy. To allow time-efficient dose calculation, the standard Geant4 tracking algorithm was modified. Finally, a software link of the Monte Carlo dose engine to the patient database and the commercial planning system was established to allow data exchange, thus completing the implementation of the proton Monte Carlo dose calculation engine ('DoC++'). Monte Carlo re-calculated plans are a valuable tool to revisit decisions in the planning process. Identification of clinically significant differences between Monte Carlo and pencil-beam-based dose calculations may also drive improvements of current pencil-beam methods. As an example, four patients (29 fields in total) with tumors in the head and neck regions were analyzed. Differences between the pencil-beam algorithm and Monte Carlo were identified in particular near the end of range, due both to dose degradation and to overall differences in range prediction caused by bony anatomy in the beam path. Further, the Monte Carlo code reports dose to tissue, whereas the planning system reports dose to water. Our implementation is tailored to a specific Monte Carlo code and the treatment planning system XiO (Computerized Medical Systems).

  1. Methods for fitting of efficiency curves obtained by means of HPGe gamma rays spectrometers; Metodos de ajuste de curvas de eficiencia obtidas por meio de espectrometros de HPGe

    Energy Technology Data Exchange (ETDEWEB)

    Cardoso, Vanderlei

    2002-07-01

    The present work describes several methodologies developed for fitting efficiency curves obtained by means of an HPGe gamma-ray spectrometer. The interpolated values were determined by simple polynomial fitting, and by polynomial fitting of the ratio between the experimental peak efficiency and the total efficiency calculated by the Monte Carlo technique, as a function of gamma-ray energy. Moreover, non-linear fitting was performed using a segmented polynomial function and applying the Gauss-Marquardt method. To obtain the peak areas, different methodologies were developed for estimating the background area under the peak. This information was obtained by numerical integration or by using analytical functions associated with the background. One non-calibrated radioactive source was included in the efficiency curve in order to provide additional calibration points. As a by-product, it was possible to determine the activity of this non-calibrated source. For all fittings developed in the present work the covariance matrix methodology was used, an essential procedure for giving a complete description of the partial uncertainties involved. (author)
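
    The covariance-matrix fitting referred to above amounts to generalized least squares: with design matrix X, data y and covariance V, the parameters and their covariance follow from the normal equations. A compact sketch, assuming a simple polynomial in log-energy and an illustrative diagonal covariance (the efficiencies and uncertainties below are made up):

        import numpy as np

        # Illustrative peak efficiencies and uncertainties for an HPGe detector.
        energy = np.array([122., 344., 662., 1173., 1332.])       # keV
        eff = np.array([0.012, 0.0061, 0.0035, 0.0021, 0.0019])
        sigma = 0.03 * eff                                         # 3% uncertainties

        x, y = np.log(energy), np.log(eff)
        X = np.vander(x, N=3, increasing=True)     # columns: 1, x, x^2
        V = np.diag((sigma / eff) ** 2)            # covariance of log(eff)

        Vinv = np.linalg.inv(V)
        cov_p = np.linalg.inv(X.T @ Vinv @ X)      # parameter covariance matrix
        p = cov_p @ X.T @ Vinv @ y                 # GLS parameter estimate

        def efficiency(e_kev):
            xe = np.log(e_kev)
            return np.exp(p[0] + p[1] * xe + p[2] * xe ** 2)

        print(efficiency(511.0), np.sqrt(np.diag(cov_p)))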

  2. Calibration and measurement of {sup 210}Pb using two independent techniques

    Energy Technology Data Exchange (ETDEWEB)

    Villa, M. [Centro de Investigacion, Tecnologia e Innovacion, CITIUS, Universidad de Sevilla, Av. Reina Mercedes 4B, 41012 Sevilla (Spain)], E-mail: mvilla@us.es; Hurtado, S. [Centro de Investigacion, Tecnologia e Innovacion, CITIUS, Universidad de Sevilla, Av. Reina Mercedes 4B, 41012 Sevilla (Spain); Manjon, G.; Garcia-Tenorio, R. [Departamento de Fisica Aplicada II, E.T.S. Arquitectura, Universidad de Sevilla, Av. Reina Mercedes 2, 41012 Sevilla (Spain)

    2007-10-15

    An experimental procedure has been developed for a rapid and accurate determination of the activity concentration of 210Pb in sediments by liquid scintillation counting (LSC). Additionally, an alternative technique using gamma-spectrometry and Monte Carlo simulation has been developed. A radiochemical procedure, based on radium and barium sulphate co-precipitation, has been applied to isolate the Pb isotopes. 210Pb activity measurements were made in a low-background scintillation spectrometer, a Quantulus 1220. A calibration of the liquid scintillation spectrometer, including its alpha/beta discrimination system, has been made in order to minimize background, and, additionally, some improvements are suggested for the calculation of the 210Pb activity concentration, taking into account that the 210Pb counting efficiency cannot be accurately determined. Therefore, the use of an effective radiochemical yield, which can be empirically evaluated, is proposed. The 210Pb activity concentration in riverbed sediments from an area affected by NORM wastes has been determined using both of the proposed methods. Results using gamma-spectrometry and LSC are compared with the results obtained following the indirect alpha-spectrometry (210Po) method.

  3. Rare event simulation using Monte Carlo methods

    CERN Document Server

    Rubino, Gerardo

    2009-01-01

    In a probabilistic model, a rare event is an event with a very small probability of occurrence. The forecasting of rare events is a formidable task but is important in many areas. For instance, a catastrophic failure in a transport system or in a nuclear power plant, or the failure of an information processing system in a bank or in the communication network of a group of banks, can lead to large financial losses. Being able to evaluate the probability of rare events is therefore a critical issue. Monte Carlo methods, the simulation of corresponding models, are used to analyze rare events. This book sets out to present the mathematical tools available for the efficient simulation of rare events. Importance sampling and splitting are presented along with an exposition of how to apply these tools to a variety of fields ranging from performance and dependability evaluation of complex systems, typically in computer science or in telecommunications, to chemical reaction analysis in biology or particle transport in physics. ...
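
    As a toy illustration of the importance-sampling idea treated in the book, consider estimating the tail probability P(X > 6) for a standard normal variable: naive sampling essentially never hits the event, while sampling from a shifted proposal and reweighting recovers it cheaply. A sketch (the shift of 6 is a simple, not optimal, choice):

        import numpy as np

        rng = np.random.default_rng(1)
        n, a = 100000, 6.0

        naive = np.mean(rng.standard_normal(n) > a)     # almost always 0.0

        # Importance sampling: draw from N(a, 1), reweight by the density ratio
        # w(x) = phi(x) / phi(x - a) = exp(a**2 / 2 - a * x).
        x = rng.standard_normal(n) + a
        w = np.exp(a * a / 2.0 - a * x)
        is_est = np.mean((x > a) * w)

        print(naive, is_est)   # true value is about 9.9e-10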

  4. Tectonic calibrations in molecular dating

    Institute of Scientific and Technical Information of China (English)

    Ullasa KODANDARAMAIAH

    2011-01-01

    Molecular dating techniques require the use of calibrations, which are usually fossil or geological vicariance-based. Fossil calibrations have been criticised because they result only in minimum age estimates. Based on a historical biogeographic perspective, I suggest that vicariance-based calibrations are more dangerous. Almost all analytical methods in historical biogeography are strongly biased towards inferring vicariance, hence vicariance identified through such methods is unreliable. Other studies, especially of groups found on Gondwanan fragments, have simply assumed vicariance. Although it was previously believed that vicariance was the predominant mode of speciation, mounting evidence now indicates that speciation by dispersal is common, dominating vicariance in several groups. Moreover, the possibility of speciation having occurred before the said geological event cannot be precluded. Thus, geological calibrations can under- or overestimate times, whereas fossil calibrations always result in minimum estimates. Another major drawback of vicariant calibrations is the problem of circular reasoning when the resulting estimates are used to infer ages of biogeographic events. I argue that fossil-based dating is a superior alternative to vicariance, primarily because the strongest assumption in the latter, that speciation was caused by the said geological process, is more often than not the most tenuous. When authors prefer to use a combination of fossil and vicariant calibrations, one suggestion is to report results both with and without inclusion of the geological constraints. Relying solely on vicariant calibrations should be strictly avoided.

  5. Comparative evaluation of photon cross section libraries for materials of interest in PET Monte Carlo simulations

    CERN Document Server

    Zaidi, H

    1999-01-01

    The many applications of Monte Carlo modelling in nuclear medicine imaging make it desirable to increase the accuracy and computational speed of Monte Carlo codes. The accuracy of Monte Carlo simulations strongly depends on the accuracy of the probability functions and thus on the cross section libraries used for photon transport calculations. A comparison between different photon cross section libraries and parametrizations implemented in Monte Carlo simulation packages developed for positron emission tomography and the most recent Evaluated Photon Data Library (EPDL97) developed by the Lawrence Livermore National Laboratory was performed for several human tissues and common detector materials for energies from 1 keV to 1 MeV. Different photon cross section libraries and parametrizations show quite large variations as compared to the EPDL97 coefficients. This latter library is more accurate and was carefully designed in the form of look-up tables providing efficient data storage, access, and management. Toge...

  6. Design of a transportable high efficiency fast neutron spectrometer

    Science.gov (United States)

    Roecker, C.; Bernstein, A.; Bowden, N. S.; Cabrera-Palmer, B.; Dazeley, S.; Gerling, M.; Marleau, P.; Sweany, M. D.; Vetter, K.

    2016-08-01

    A transportable fast neutron detection system has been designed and constructed for measuring neutron energy spectra and flux ranging from tens to hundreds of MeV. The transportability of the spectrometer reduces the detector-related systematic bias between different neutron spectra and flux measurements, which allows for the comparison of measurements above or below ground. The spectrometer will measure neutron fluxes that are of prohibitively low intensity compared to the site-specific background rates targeted by other transportable fast neutron detection systems. To measure low-intensity high-energy neutron fluxes, a conventional capture-gating technique is used for measuring neutron energies above 20 MeV and a novel multiplicity technique is used for measuring neutron energies above 100 MeV. The spectrometer is composed of two Gd-containing plastic scintillator detectors arranged around a lead spallation target. To calibrate and characterize the position-dependent response of the spectrometer, a Monte Carlo model was developed and used in conjunction with experimental data from gamma-ray sources. Multiplicity event identification algorithms were developed and used with a Cf-252 neutron multiplicity source to validate the Gd concentration and secondary neutron capture efficiency of the Monte Carlo model. The validated Monte Carlo model was used to predict an effective area for the multiplicity and capture-gating analyses. For incident neutron energies between 100 MeV and 1000 MeV with an isotropic angular distribution, the multiplicity analysis predicted an effective area rising from 500 cm2 to 5000 cm2. For neutron energies above 20 MeV, the capture-gating analysis predicted an effective area between 1800 cm2 and 2500 cm2. The multiplicity mode was found to be sensitive to the incident neutron angular distribution.
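
    Given the effective areas quoted, an expected count rate follows from folding a neutron flux spectrum with the effective area, rate = integral of phi(E) A_eff(E) dE. A toy numerical version is shown below; the flux values and the linear interpolation of the effective area are placeholders, not measured quantities.

        import numpy as np

        # Energy grid (MeV) and an assumed differential flux (1/cm^2/s/MeV);
        # placeholder values, not a measured cosmogenic spectrum.
        E = np.linspace(100.0, 1000.0, 200)
        phi = 1e-6 * (E / 100.0) ** -1.5

        # Multiplicity-mode effective area rising from 500 to 5000 cm^2
        # (the paper's quoted range), here interpolated linearly in energy.
        A_eff = np.interp(E, [100.0, 1000.0], [500.0, 5000.0])

        rate = np.trapz(phi * A_eff, E)      # expected counts per second
        print(rate)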

  7. Improving Langley calibrations by reducing diurnal variations of aerosol Ångström parameters

    Directory of Open Access Journals (Sweden)

    A. Kreuter

    2013-01-01

    Full Text Available Errors in the sun photometer calibration constant lead to artificial diurnal variations, symmetric around solar noon, of the retrieved aerosol optical depth (AOD and the associated Ångström exponent α and its curvature γ. We show in simulations that within the uncertainty of state-of-the-art Langley calibrations, these diurnal variations of α and γ can be significant in low AOD conditions, while those of AOD are negligible. We implement a weighted Monte Carlo method of finding an improved calibration constant by minimizing the diurnal variations in α and γ and apply the method to sun photometer data of a clear day in Innsbruck, Austria. The results show that our method can be used to improve the calibrations in two of the four wavelength channels by up to a factor of 3.6.

  8. Improving Langley calibrations by reducing diurnal variations of aerosol Ångström parameters

    Directory of Open Access Journals (Sweden)

    A. Kreuter

    2012-09-01

    Full Text Available Errors in the sun photometer calibration constant lead to artificial diurnal variations, symmetric around solar noon, of the retrieved Aerosol Optical Depth (AOD and the associated Ångström exponent α and its curvature γ. We show in simulations that within the uncertainty of state-of-the-art Langley calibrations, these diurnal variations of α and γ can be significant in low AOD conditions, while those of AOD are negligible. We implement a weighted Monte Carlo method of finding an improved calibration constant by minimizing the diurnal variations in α and γ and apply the method to sun photometer data of a clear day in Innsbruck, Austria. The results show that our method can be used to improve the calibrations in two of the four wavelength channels by up to a factor of 3.6.
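
    The core of the method described in the two records above can be sketched as follows: perturb the candidate calibration constant V0 of one channel, retrieve the AOD and the Ångström exponent over the day, and keep the perturbation that minimizes the diurnal variation of α. Everything below is a schematic stand-in for the authors' weighted Monte Carlo procedure; the signals, airmasses and wavelengths are synthetic, and a second channel is assumed to be well calibrated.

        import numpy as np

        rng = np.random.default_rng(2)
        lam = np.array([380.0, 500.0])            # nm; channel B assumed calibrated
        m = np.linspace(1.2, 4.0, 40)             # airmasses over one clear day
        true_tau = 0.05 * (lam / 500.0) ** -1.3   # constant AOD, alpha = 1.3
        V = np.exp(-np.outer(m, true_tau))        # Beer-Lambert signals, true V0 = 1

        def alpha_spread(v0_a):
            tau_a = (np.log(v0_a) - np.log(V[:, 0])) / m     # retrieved AOD, channel A
            tau_b = (0.0 - np.log(V[:, 1])) / m              # ln(V0_b) = 0 (known)
            alpha = -np.log(tau_a / tau_b) / np.log(lam[0] / lam[1])
            return np.std(alpha)                             # diurnal variation

        best, best_s = 0.96, np.inf                          # biased initial guess
        for _ in range(5000):                                # Monte Carlo search
            cand = best * (1.0 + 0.002 * rng.standard_normal())
            s = alpha_spread(cand)
            if s < best_s:
                best, best_s = cand, s

        print(best)   # approaches the true calibration constant V0 = 1.0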

  9. Model Independent Approach to the Single Photoelectron Calibration of Photomultiplier Tubes

    CERN Document Server

    Saldanha, R; Guardincerri, Y; Wester, T

    2016-01-01

    The accurate calibration of photomultiplier tubes is critical in a wide variety of applications in which it is necessary to know the absolute number of detected photons or precisely determine the resolution of the signal. Conventional calibration methods rely on fitting the photomultiplier response to a low intensity light source with analytical approximations to the single photoelectron distribution, often leading to biased estimates due to the inability to accurately model the full distribution, especially at low charge values. In this paper we present a simple statistical method to extract the relevant single photoelectron calibration parameters without making any assumptions about the underlying single photoelectron distribution. We illustrate the use of this method through the calibration of a Hamamatsu R11410 photomultiplier tube and study the accuracy and precision of the method using Monte Carlo simulations. The method is found to have significantly reduced bias compared to conventional methods and wo...
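
    The statistical idea can be reduced to a few lines: for Poisson-distributed photoelectrons, the occupancy follows from the fraction of empty events, and the mean and variance of the single-photoelectron charge follow from the first two moments of the illuminated and pedestal (dark) spectra. The sketch below uses synthetic data and is only meant to show the moment bookkeeping, not the paper's full treatment of thresholds and uncertainties.

        import numpy as np

        rng = np.random.default_rng(3)

        # Synthetic data: Poisson(lambda) photoelectrons, each with charge
        # N(q1, s1), on top of Gaussian pedestal noise.
        lam_true, q1, s1, ped_sigma = 0.4, 1.6e6, 0.5e6, 0.2e6
        n = rng.poisson(lam_true, 200000)
        light = rng.normal(0.0, ped_sigma, n.size) + \
                np.array([rng.normal(q1, s1, k).sum() for k in n])
        pedestal = rng.normal(0.0, ped_sigma, 200000)

        # Occupancy from the fraction of empty events: P(0) = exp(-lambda).
        # Here the known zero count is used; in practice P(0) is estimated
        # from the spectrum below a threshold.
        lam_hat = -np.log(np.mean(n == 0))

        # Compound-Poisson moments: mean = lam*q1, var = var_ped + lam*(s1^2 + q1^2)
        q1_hat = (light.mean() - pedestal.mean()) / lam_hat
        s1sq_hat = (light.var() - pedestal.var()) / lam_hat - q1_hat ** 2

        print(lam_hat, q1_hat, np.sqrt(s1sq_hat))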

  10. The Advanced LIGO Photon Calibrators

    CERN Document Server

    Karki, S; Kandhasamy, S; Abbott, B P; Abbott, T D; Anders, E H; Berliner, J; Betzwieser, J; Daveloza, H P; Cahillane, C; Canete, L; Conley, C; Gleason, J R; Goetz, E; Kissel, J S; Izumi, K; Mendell, G; Quetschke, V; Rodruck, M; Sachdev, S; Sadecki, T; Schwinberg, P B; Sottile, A; Wade, M; Weinstein, A J; West, M; Savage, R L

    2016-01-01

    The two interferometers of the Laser Interferometer Gravitational-Wave Observatory (LIGO) recently detected gravitational waves from the mergers of binary black hole systems. Accurate calibration of the output of these detectors was crucial for the observation of these events, and the extraction of parameters of the sources. The principal tools used to calibrate the responses of the second-generation (Advanced) LIGO detectors to gravitational waves are systems based on radiation pressure and referred to as Photon Calibrators. These systems, which were completely redesigned for Advanced LIGO, include several significant upgrades that enable them to meet the calibration requirements of second-generation gravitational wave detectors in the new era of gravitational-wave astronomy. We report on the design, implementation, and operation of these Advanced LIGO Photon Calibrators that are currently providing fiducial displacements on the order of $10^{-18}$ m/$\sqrt{\textrm{Hz}}$ with accuracy and precision of better ...

  11. A calibrated Franklin chimes

    Science.gov (United States)

    Gonta, Igor; Williams, Earle

    1994-05-01

    Benjamin Franklin devised a simple yet intriguing device to measure electrification in the atmosphere during conditions of foul weather. He constructed a system of bells, one of which was attached to a conductor that was suspended vertically above his house. The device is illustrated in a well-known painting of Franklin (Cohen, 1985). The elevated conductor acquired a potential due to the electric field in the atmosphere and caused a brass ball to oscillate between two bells. The purpose of this study is to extend Franklin's idea by constructing a set of 'chimes' which will operate both in fair and in foul weather conditions. In addition, a mathematical relationship will be established between the frequency of oscillation of a metallic sphere in a simplified geometry and the potential on one plate due to the electrification of the atmosphere. Thus it will be possible to calibrate the 'Franklin Chimes' and to obtain a nearly instantaneous measurement of the potential of the elevated conductor in both fair and foul weather conditions.
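
    The anticipated calibration relationship can be sketched for an idealized parallel-plate geometry. A sphere of radius a touching a plate acquires a charge proportional to the local field, shuttles across the gap of width d under the electrostatic force, and the strike rate comes out proportional to the potential. In hedged LaTeX form, with k an order-unity geometry constant that the actual calibration would determine empirically:

        q = k\,\varepsilon_0 a^2 E, \qquad E = \frac{V}{d}, \qquad
        t = \frac{1}{V}\sqrt{\frac{2 m d^{3}}{k\,\varepsilon_0 a^{2}}}, \qquad
        f = \frac{1}{2t} = \frac{V}{2}\sqrt{\frac{k\,\varepsilon_0 a^{2}}{2 m d^{3}}}

    Here t is the transit time of a ball of mass m accelerated uniformly across the gap, so the ringing frequency f is directly proportional to V; any departures from this idealized geometry are absorbed into the empirical calibration constant.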

  12. Systematic study of finite-size effects in quantum Monte Carlo calculations of real metallic systems

    Energy Technology Data Exchange (ETDEWEB)

    Azadi, Sam, E-mail: s.azadi@imperial.ac.uk; Foulkes, W. M. C. [Department of Physics, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom)

    2015-09-14

    We present a systematic and comprehensive study of finite-size effects in diffusion quantum Monte Carlo calculations of metals. Several previously introduced schemes for correcting finite-size errors are compared for accuracy and efficiency, and practical improvements are introduced. In particular, we test a simple but efficient method of finite-size correction based on an accurate combination of twist averaging and density functional theory. Our diffusion quantum Monte Carlo results for lithium and aluminum, as examples of metallic systems, demonstrate excellent agreement between all of the approaches considered.

  13. A Monte Carlo approach to food density corrections in gamma spectroscopy

    International Nuclear Information System (INIS)

    Evaluation of food products by gamma spectroscopy requires a correction for food density for many counting geometries and isotopes. An inexpensive method for developing these corrections has been devised by creating a detailed model of the HPGe crystal and counting geometry for the Monte Carlo transport code MCNP. The Monte Carlo code was then used to generate a series of efficiency curves for a wide range of sample densities. The method was validated by comparing the MCNP-generated efficiency curves against those obtained from measurements of NIST-traceable standards and spiked food samples across a range of food densities. (author)
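
    In use, the Monte Carlo generated curves reduce to a lookup: tabulate efficiency on a grid of energy and density, then interpolate to the measured sample density. A minimal sketch with made-up efficiency values:

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        # Illustrative full-energy-peak efficiencies on an (energy, density) grid.
        energy = np.array([200.0, 662.0, 1332.0])          # keV
        density = np.array([0.4, 0.8, 1.2, 1.6])           # g/cm^3
        eff = np.array([[0.031, 0.026, 0.022, 0.019],
                        [0.018, 0.015, 0.013, 0.011],
                        [0.011, 0.0095, 0.0083, 0.0074]])

        lookup = RegularGridInterpolator((energy, density), eff)

        # Efficiency for a 662 keV line in a food sample of density 1.05 g/cm^3:
        print(lookup([[662.0, 1.05]]))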

  14. Mercury Continuous Emmission Monitor Calibration

    Energy Technology Data Exchange (ETDEWEB)

    John Schabron; Eric Kalberer; Ryan Boysen; William Schuster; Joseph Rovani

    2009-03-12

    Mercury continuous emissions monitoring systems (CEMs) are being implemented in over 800 coal-fired power plant stacks throughout the U.S. Western Research Institute (WRI) is working closely with the Electric Power Research Institute (EPRI), the National Institute of Standards and Technology (NIST), and the Environmental Protection Agency (EPA) to facilitate the development of the experimental criteria for a NIST traceability protocol for dynamic elemental mercury vapor calibrators/generators. These devices are used to calibrate mercury CEMs at power plant sites. The Clean Air Mercury Rule (CAMR), which was published in the Federal Register on May 18, 2005 and vacated by a Federal appeals court in early 2008, required that calibration be performed with NIST-traceable standards. Despite the vacature, mercury emissions regulations in the future will require NIST-traceable calibration standards, and EPA does not want to interrupt the effort towards developing NIST traceability protocols. The traceability procedures will be defined by EPA. An initial draft traceability protocol was issued by EPA in May 2007 for comment. In August 2007, EPA issued a conceptual interim traceability protocol for elemental mercury calibrators. The protocol is based on the actual analysis of the output of each calibration unit at several concentration levels, ranging initially from about 2-40 µg/m3 elemental mercury, and in the future down to 0.2 µg/m3, and this analysis will be directly traceable to analyses by NIST. The EPA traceability protocol document is divided into two separate sections. The first deals with the qualification of calibrator models by the vendors for use in mercury CEM calibration. The second describes the procedure that the vendors must use to certify the calibrators that meet the qualification specifications. The NIST-traceable certification is performance based, traceable to analysis using isotope dilution inductively coupled plasma

  15. Robot Calibration Using Active Vision-based Measurement

    Institute of Scientific and Technical Information of China (English)

    郭剑鹰; 张家梁; 吕恬生

    2004-01-01

    This paper presents an efficient robot calibration method based on non-contact vision metrology. Using a coplanar pattern to calibrate the camera makes active-vision-based end-effector pose measurement feasible and cost-effective. Kinematic parameter errors were linearized and identified through a two-step procedure, overcoming the singularity and non-linearity problems. These errors were then compensated using the inverse model method. The whole calibration process is flexible, easy to implement, and prevents error propagation from the earlier stages to the later ones. Calibration was performed on a MOTOMAN SV3 industrial robot. Experimental results show that the proposed method is easy to set up and achieves satisfactory accuracy.

  16. Radio interferometric gain calibration as a complex optimization problem

    CERN Document Server

    Smirnov, Oleg

    2015-01-01

    Recent developments in optimization theory have extended some traditional algorithms for least-squares optimization of real-valued functions (Gauss-Newton, Levenberg-Marquardt, etc.) into the domain of complex functions of a complex variable. This employs a formalism called the Wirtinger derivative, and derives a full-complex Jacobian counterpart to the conventional real Jacobian. We apply these developments to the problem of radio interferometric gain calibration, and show how the general complex Jacobian formalism, when combined with conventional optimization approaches, yields a whole new family of calibration algorithms, including those for the polarized and direction-dependent gain regime. We further extend the Wirtinger calculus to an operator-based matrix calculus for describing the polarized calibration regime. Using approximate matrix inversion results in computationally efficient implementations; we show that some recently proposed calibration algorithms such as StefCal and peeling can be understood...
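
    As a concrete point of reference, the StefCal algorithm mentioned above reduces, in the unpolarized direction-independent case, to an alternating linear least-squares update of the complex gains. A compact numpy sketch on synthetic, noise-free data (not the operator formalism of the paper; the 0.5 damping is the standard StefCal averaging trick):

        import numpy as np

        rng = np.random.default_rng(4)
        n_ant = 8
        g_true = (1.0 + 0.2 * rng.standard_normal(n_ant)) * \
                 np.exp(1j * 0.3 * rng.standard_normal(n_ant))

        # Hermitian model visibilities M_pq and observed V_pq = g_p M_pq g_q*.
        M = rng.standard_normal((n_ant, n_ant)) + 1j * rng.standard_normal((n_ant, n_ant))
        M = (M + M.conj().T) / 2
        V = np.outer(g_true, g_true.conj()) * M

        g = np.ones(n_ant, dtype=complex)
        for _ in range(100):
            y = M * g.conj()[None, :]                  # y_pq = M_pq g_q*
            g_new = (V * y.conj()).sum(axis=1) / (np.abs(y) ** 2).sum(axis=1)
            g = 0.5 * (g + g_new)                      # damped update

        print(np.abs(g / g_true))                      # ~1; overall phase unconstrained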

  17. The wall correction factor for a spherical ionization chamber used in brachytherapy source calibration

    Energy Technology Data Exchange (ETDEWEB)

    Piermattei, A [Istituto di Fisica, Universita Cattolica S Cuore, Rome (Italy); Azario, L [Istituto di Fisica, Universita Cattolica S Cuore, Rome (Italy); Fidanzio, A [Istituto di Fisica, Universita Cattolica S Cuore, Rome (Italy); Viola, P [Istituto di Fisica, Universita Cattolica S Cuore, Rome (Italy); Dell' Omo, C [Istituto di Fisica, Universita Cattolica S Cuore, Rome (Italy); Iadanza, L [Centro di Riferimento Oncologico della Basilicata-Rionero in Vulture, Pz (Italy); Fusco, V [Centro di Riferimento Oncologico della Basilicata-Rionero in Vulture, Pz (Italy); Lagares, J I [Universidad de Sevilla, Facultad de Medicina, Dpto Fisiologia Medica y Biofisica, Sevilla (Spain); Capote, R [Universidad de Sevilla, Facultad de Medicina, Dpto Fisiologia Medica y Biofisica, Sevilla (Spain)

    2003-12-21

    The effect of wall chamber attenuation and scattering is one of the most important corrections that must be determined when the linear interpolation method between two calibration factors of an ionization chamber is used. For spherical ionization chambers the corresponding correction factors A_w have to be determined from the non-linear trend of the response as a function of the wall thickness. The Monte Carlo and experimental data reported here show that the A_w factors obtained for an Exradin A4 chamber, used in brachytherapy source calibration in terms of reference air kerma rate, are up to 1.2% greater than the values obtained by the linear extrapolation method for the studied beam qualities. Using the A_w factors derived from Monte Carlo calculations, the accuracy of the calibration factor N_K,Ir for the Exradin A4, obtained by interpolation between two calibration factors, improves by about 0.6%. The discrepancy between the new calculated factor and that obtained using the complete calibration curve of the ion chamber and the 192Ir spectrum is only 0.1%.

  18. Seepage Calibration Model and Seepage Testing Data

    Energy Technology Data Exchange (ETDEWEB)

    P. Dixon

    2004-02-17

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM is developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA (see upcoming REV 02 of CRWMS M&O 2000 [153314]), which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model (see BSC 2003 [161530]). The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross Drift to obtain the permeability structure for the seepage model; (3) to use inverse modeling to calibrate the SCM and to estimate seepage-relevant, model-related parameters on the drift scale; (4) to estimate the epistemic uncertainty of the derived parameters, based on the goodness-of-fit to the observed data and the sensitivity of calculated seepage with respect to the parameters of interest; (5) to characterize the aleatory uncertainty of

  19. Screen-Camera Calibration Using Gray Codes

    OpenAIRE

    FRANCKEN, Yannick; Hermans, Chris; Bekaert, Philippe

    2009-01-01

    In this paper we present a method for efficient calibration of a screen-camera setup, in which the camera is not directly facing the screen. A spherical mirror is used to make the screen visible to the camera. Using Gray code illumination patterns, we can uniquely identify the reflection of each screen pixel on the imaged spherical mirror. This allows us to compute a large set of 2D-3D correspondences, using only two sphere locations. Compared to previous work, this means we require less manu...
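
    The pattern side of such a setup is simple to reproduce: encode each screen column (and, in a second pass, each row) in binary-reflected Gray code and display one bit plane per frame; decoding the observed on/off sequence at a camera pixel then yields the screen coordinate it sees. A generic sketch, not the authors' code:

        import numpy as np

        def gray_code_patterns(width):
            """Bit-plane images encoding each column index in reflected Gray code."""
            cols = np.arange(width)
            gray = cols ^ (cols >> 1)                 # binary-reflected Gray code
            n_bits = max(1, int(np.ceil(np.log2(width))))
            # patterns[b, x] is True where bit b of the Gray code of column x is set.
            return np.array([(gray >> b) & 1 for b in range(n_bits)], dtype=bool)

        def decode(bits):
            """Recover the column index from the per-pixel on/off sequence."""
            gray = sum(int(b) << i for i, b in enumerate(bits))
            x = 0
            while gray:                                # Gray -> binary conversion
                x ^= gray
                gray >>= 1
            return x

        patterns = gray_code_patterns(1024)            # 10 patterns for 1024 columns
        print(decode(patterns[:, 600]))                # -> 600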

  20. FAST CONVERGENT MONTE CARLO RECEIVER FOR OFDM SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    Wu Lili; Liao Guisheng; Bao Zheng; Shang Yong

    2005-01-01

    The paper investigates the design of an optimal Orthogonal Frequency Division Multiplexing (OFDM) receiver in the presence of unknown frequency-selective fading. A fast convergent Monte Carlo receiver is proposed. In the proposed method, Markov Chain Monte Carlo (MCMC) methods are employed for blind Bayesian detection without channel estimation. Meanwhile, exploiting the characteristics of OFDM systems, two methods are employed to improve the convergence rate and enhance the efficiency of the MCMC algorithms. One is the integration of the posterior distribution function with respect to the associated channel parameters, which is involved in the derivation of the objective distribution function; the other is intra-symbol differential coding for the elimination of the bimodality problem resulting from the presence of unknown fading channels. Moreover, no matrix inversion is needed owing to the orthogonality of OFDM modulation, and hence the computational load is significantly reduced. Computer simulation results show the effectiveness of the fast convergent Monte Carlo receiver.

  1. Pattern Recognition for a Flight Dynamics Monte Carlo Simulation

    Science.gov (United States)

    Restrepo, Carolina; Hurtado, John E.

    2011-01-01

    The design, analysis, and verification and validation of a spacecraft rely heavily on Monte Carlo simulations. Modern computational techniques are able to generate large amounts of Monte Carlo data, but flight dynamics engineers lack the time and resources to analyze it all. The growing amounts of data combined with the diminished available time of engineers motivate the need to automate the analysis process. Pattern recognition algorithms are an innovative way of analyzing flight dynamics data efficiently. They can search large data sets for specific patterns and highlight critical variables so analysts can focus their analysis efforts. This work combines a few tractable pattern recognition algorithms with basic flight dynamics concepts to build a practical analysis tool for Monte Carlo simulations. Current results show that this tool can quickly and automatically identify individual design parameters, and most importantly, specific combinations of parameters that should be avoided in order to prevent specific system failures. The current version uses a kernel density estimation algorithm and a sequential feature selection algorithm combined with a k-nearest neighbor classifier to find and rank important design parameters. This provides an increased level of confidence in the analysis and saves a significant amount of time.
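
    With scikit-learn (0.24 or later), the combination described (sequential feature selection wrapped around a k-nearest-neighbor classifier) can be assembled in a few lines. This generic sketch uses synthetic stand-in data, not the flight dynamics data sets from the paper:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import SequentialFeatureSelector
        from sklearn.neighbors import KNeighborsClassifier

        # Synthetic stand-in: 1000 Monte Carlo runs, 20 dispersed parameters,
        # pass/fail labels; only a few parameters actually drive the failures.
        X, y = make_classification(n_samples=1000, n_features=20,
                                   n_informative=4, random_state=0)

        knn = KNeighborsClassifier(n_neighbors=5)
        sfs = SequentialFeatureSelector(knn, n_features_to_select=4)
        sfs.fit(X, y)

        print(np.flatnonzero(sfs.get_support()))   # indices of influential parameters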

  2. Monte Carlo studies for medical imaging detector optimization

    Science.gov (United States)

    Fois, G. R.; Cisbani, E.; Garibaldi, F.

    2016-02-01

    This work reports on Monte Carlo optimization studies of detection systems for molecular breast imaging with radionuclides and for bremsstrahlung imaging in nuclear medicine. Molecular breast imaging requires competing performances of the detectors: high efficiency and high spatial resolution; in this direction, an innovative device has been proposed which combines images from two different, and somehow complementary, detectors at the opposite sides of the breast. The dual detector design allows for spot compression and improves significantly the performance of the overall system if all components are well tuned and the layout and processing are carefully optimized; in this direction the Monte Carlo simulation represents a valuable tool. In recent years, the potential of bremsstrahlung imaging in internal radiotherapy (with beta-radiopharmaceuticals) has clearly emerged; bremsstrahlung imaging is currently performed with existing detectors generally used for single-photon radioisotopes. We are evaluating the possibility of adapting an existing compact gamma camera and optimizing its performance by Monte Carlo for bremsstrahlung imaging with the photons emitted by the beta particles from 90Y.

  3. A semianalytic Monte Carlo code for modelling LIDAR measurements

    Science.gov (United States)

    Palazzi, Elisa; Kostadinov, Ivan; Petritoli, Andrea; Ravegnani, Fabrizio; Bortoli, Daniele; Masieri, Samuele; Premuda, Margherita; Giovanelli, Giorgio

    2007-10-01

    LIDAR (LIght Detection and Ranging) is an active optical remote sensing technology with many applications in atmospheric physics. Modelling of LIDAR measurements is a useful approach for evaluating the effects of various environmental variables and scenarios as well as of different measurement geometries and instrumental characteristics. In this regard a Monte Carlo simulation model can provide a reliable answer to these important requirements. A semianalytic Monte Carlo code for modelling LIDAR measurements has been developed at ISAC-CNR. The backscattered laser signal detected by the LIDAR system is calculated in the code taking into account the contributions of the main atmospheric molecular constituents and aerosol particles through processes of single and multiple scattering. The contributions of molecular absorption and of ground and cloud reflection are evaluated too. The code can perform simulations of both monostatic and bistatic LIDAR systems. To enhance the efficiency of the Monte Carlo simulation, analytical estimates and expected-value calculations are performed. Variance-reduction devices (such as forced collision, local forced collision, splitting and Russian roulette) are also provided by the code, enabling the user to drastically reduce the variance of the calculation.
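
    Of the variance-reduction devices listed, splitting and Russian roulette are the simplest to show: a photon whose statistical weight grows past an upper bound is split into copies, while one whose weight falls below a lower bound is killed probabilistically with the survivor's weight restored. A generic weight-window sketch (not the ISAC-CNR code; bounds are illustrative):

        import random

        W_LOW, W_HIGH, W_SURVIVE = 0.25, 4.0, 1.0

        def weight_window(photons):
            """Apply splitting / Russian roulette to (weight, state) photon tuples."""
            out = []
            for weight, state in photons:
                if weight > W_HIGH:                        # splitting
                    n = int(weight / W_SURVIVE)
                    out.extend([(weight / n, state)] * n)
                elif weight < W_LOW:                       # Russian roulette
                    if random.random() < weight / W_SURVIVE:
                        out.append((W_SURVIVE, state))     # survivor, weight restored
                    # else: photon is killed
                else:
                    out.append((weight, state))
            return out

        print(weight_window([(6.0, "a"), (0.1, "b"), (1.0, "c")]))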

  4. Swift/BAT Calibration and Spectral Response

    Science.gov (United States)

    Parsons, A.

    2004-01-01

    The Burst Alert Telescope (BAT) aboard NASA's Swift Gamma-Ray Burst Explorer is a large coded-aperture gamma-ray telescope consisting of a 2.4 m (8 ft) x 1.2 m (4 ft) coded aperture mask supported 1 meter above a 5200 square cm detector plane containing 32,768 individual 4 mm x 4 mm x 2 mm CZT detectors. The BAT is now completely assembled and integrated with the Swift spacecraft in anticipation of an October 2004 launch. Extensive ground calibration measurements using a variety of radioactive sources have resulted in a moderately high fidelity model for the BAT spectral and photometric response. This paper describes these ground calibration measurements as well as related computer simulations used to study the efficiency and individual detector properties of the BAT detector array. The creation of a single spectral response model representative of the fully integrated BAT posed an interesting challenge and is at the heart of the public analysis tool 'batdrmgen', which computes a response matrix for any given sky position within the BAT FOV. This paper will describe the batdrmgen response generator tool and conclude with a description of the on-orbit calibration plans as well as plans for the future improvements needed to produce the more detailed spectral response model that is required for the construction of an all-sky hard X-ray survey.

  5. A Scalable Multi-chain Markov Chain Monte Carlo Method for Inverting Subsurface Hydraulic and Geological Properties

    Science.gov (United States)

    Bao, J.; Ren, H.; Hou, Z.; Ray, J.; Swiler, L.; Huang, M.

    2015-12-01

    We developed a novel scalable multi-chain Markov chain Monte Carlo (MCMC) method for high-dimensional inverse problems. The method is scalable in terms of the number of chains and processors, and is useful for Bayesian calibration of computationally expensive simulators typically used for scientific and engineering calculations. In this study, we demonstrate two applications of this method to hydraulic and geological inverse problems. The first is monitoring soil moisture variations using tomographic ground penetrating radar (GPR) travel time data, where challenges exist in handling the non-uniqueness, nonlinearity, and high dimensionality of the unknowns in the inversion of GPR tomographic data. We integrated the multi-chain MCMC framework with the pilot point concept, a curved-ray GPR forward model, and a sequential Gaussian simulation (SGSIM) algorithm for estimating the dielectric permittivity at pilot point locations distributed within the tomogram, as well as its spatial correlation range, which are used to construct the whole field of dielectric permittivity using SGSIM. The second application is reservoir porosity and saturation estimation using the multi-chain MCMC approach to jointly invert marine seismic amplitude versus angle (AVA) and controlled-source electromagnetic (CSEM) data for a layered reservoir model, where the unknowns to be estimated include the porosity and fluid saturation in each reservoir layer and the electrical conductivity of the overburden and bedrock. The computational efficiency, accuracy, and convergence behavior of the inversion approach are systematically evaluated.

  6. A development of NRESPG Monte Carlo code for the calculation of neutron response function for gas counters

    Energy Technology Data Exchange (ETDEWEB)

    Takeda, N. [Electrotechnical Laboratory, 1-1-4 Umezono, Tsukuba-shi, Ibaraki 305-8568 (Japan); Kudo, K. [Electrotechnical Laboratory, 1-1-4 Umezono, Tsukuba-shi, Ibaraki 305-8568 (Japan); Toyokawa, H. [Electrotechnical Laboratory, 1-1-4 Umezono, Tsukuba-shi, Ibaraki 305-8568 (Japan); Torii, T. [Japan Power Reactor and Nuclear Fuel Development Corporation, Tsuruga Office, Fukui 919-12 (Japan); Hashimoto, M. [Japan Power Reactor and Nuclear Fuel Development Corporation, O-arai Engineering Center, Ibaraki 311-13 (Japan); Sugita, T. [Science System Laboratory, Ibaraki 309-17 (Japan); Dietze, G. [Physikalisch-Technische Bundesanstalt, 38023 Braunschweig (Germany); Yang, X. [China Institute of Atomic Energy (China)

    1999-02-11

    A Monte Carlo code, Neutron RESPonse function for Gas counters (NRESPG), has been developed for the calculation of neutron response functions and efficiencies for neutron energies up to 20 MeV; it can be applied to 3He, H2, or BF3 gas proportional counters with or without a moderator. The code can simulate neutron behavior in a two-dimensional detector configuration and treat the thermal motion of moderator atoms, which becomes important as the neutron energy becomes sufficiently low. Further, more precise measured data were used to simulate the position-dependent gas multiplication in the sensitive and insensitive gas regions of a proportional counter. The NRESPG code has been applied to the calculation of response functions of 3He cylindrical proportional counters to determine neutron energy and neutron fluence in a monoenergetic calibration field. Thus, a remarkable discrepancy in the lower portion of the full-energy peak produced by the 3He(n,p)T reaction can be removed, resulting in good agreement between simulations and experiments. The code has also been used for the simulation of the response of a McTaggart-type long counter consisting of a central cylindrical BF3 counter surrounded by a polyethylene moderator. The results of the NRESPG simulations were compared with those obtained from MCNP calculations.

  8. Determination of scattered gamma radiation in the calibration of environmental dose rate meters

    DEFF Research Database (Denmark)

    Bøtter-Jensen, L.; Hedemann Jensen, P.

    1992-01-01

    Practical free-field and shadow-shield calibration techniques using a variety of environmental dose rate meters were studied, and experimental and theoretical determinations were made of the contribution of scattered photons to the air kerma rate from certified Cs-137, Co-60 and Ra-226 gamma sources ... calculated. The Monte Carlo code used enables the scatter components from ground and air to be separated. Calculated relative air kerma rates scattered from ground and air for the radionuclides Cs-137, Co-60 and Ra-226 are listed with the aim of recommending these values in practical free-field calibrations...

  9. Dependence of the glass badge response on the different calibration phantoms

    International Nuclear Information System (INIS)

    Chiyoda Technol Corporation provides glass badges, including the GD-450 for photon dosimetry and the WNP, composed of a poly-allyldiglycol carbonate (CR-39), for neutron dosimetry. To maintain the quality of the monitoring service, it is very important to establish the dose estimation formula for a calibration phantom. In this study, we evaluated the GD-450 response for photon energies from 10 keV to 1250 keV and the WNP response for neutron energies from thermal to 15 MeV, both by experiment and by Monte Carlo calculation. The dependence of the glass badge response on three different calibration phantoms was clarified. (author)

  10. Mexican national pyranometer network calibration

    Science.gov (United States)

    Valdes, M.; Villarreal, L.; Estevez, H.; Riveros, D.

    2013-12-01

    In order to take advantage of solar radiation as an alternative energy source, it is necessary to evaluate its spatial and temporal availability. The Mexican National Meteorological Service (SMN) has a network of 136 meteorological stations, each equipped with a pyranometer for measuring global solar radiation. Some of these stations had not been calibrated in several years. The Mexican Department of Energy (SENER), in order to obtain a reliable evaluation of the solar resource, funded this project to calibrate the SMN pyranometer network and validate the data. Calibration of the 136 pyranometers by the intercomparison method recommended by the World Meteorological Organization (WMO) requires lengthy observations and specific environmental conditions, such as clear skies and a stable atmosphere, circumstances that determine the site and season of the calibration. The Solar Radiation Section of the Instituto de Geofísica of the Universidad Nacional Autónoma de México is a Regional Center of the WMO and is certified to carry out the calibration procedures and issue certificates. We are responsible for the recalibration of the SMN pyranometer network. A continuous-emission solar simulator with an exposed area 30 cm in diameter was acquired to reduce the calibration time and remove the dependence on atmospheric conditions. We present the results of the calibration of 10 thermopile pyranometers and one photovoltaic cell by the intercomparison method, with more than 10000 observations each, and those obtained with the solar simulator.
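
    The intercomparison principle reduces to fitting the test instrument's signal against the irradiance reported by a calibrated reference. The sketch below uses a handful of invented observations (real calibrations use thousands) and a zero-intercept least-squares fit; all numbers are assumptions.

        import statistics

        # (reference irradiance in W/m^2, test pyranometer output in uV) -- assumed
        observations = [(820.0, 7420.0), (905.0, 8190.0), (760.0, 6890.0),
                        (980.0, 8860.0), (650.0, 5900.0)]

        # zero-intercept least squares: S = sum(G*V) / sum(G*G)
        sensitivity = (sum(g * v for g, v in observations)
                       / sum(g * g for g, _ in observations))
        residuals = [v / sensitivity - g for g, v in observations]
        print(f"sensitivity ~ {sensitivity:.3f} uV per W/m^2")
        print(f"rms residual ~ {statistics.pstdev(residuals):.1f} W/m^2")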

  11. Monte Carlo simulation for soot dynamics

    KAUST Repository

    Zhou, Kun

    2012-01-01

    A new Monte Carlo method, termed Comb-like frame Monte Carlo, is developed to simulate soot dynamics. A detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas-phase solver Chemkin II to simulate soot formation in a 1-D premixed burner-stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with measurements available in the literature. The origin of the bimodal particle size distribution is revealed with quantitative proof.
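
    For orientation, stochastic soot solvers of this family build on particle-population Monte Carlo. The sketch below is a generic constant-kernel coagulation simulation with Gillespie-style waiting times; it is not the Comb-like frame method, and the rate constant and population are assumed.

        import random

        def coagulate(particles, kernel_rate, t_end):
            """Merge random pairs with exponential waiting times until t_end."""
            t = 0.0
            while len(particles) > 1:
                n = len(particles)
                total_rate = kernel_rate * n * (n - 1) / 2.0  # constant kernel
                t += random.expovariate(total_rate)
                if t > t_end:
                    break
                i, j = random.sample(range(n), 2)  # a random pair coagulates
                particles[i] += particles[j]
                particles.pop(j)
            return particles

        pop = coagulate([1] * 2000, kernel_rate=1e-3, t_end=1.0)
        print(f"{len(pop)} particles left, mean size {sum(pop) / len(pop):.1f}")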

  12. Fast quantum Monte Carlo on a GPU

    CERN Document Server

    Lutsyshyn, Y

    2013-01-01

    We present a scheme for the parallelization of quantum Monte Carlo on graphics processing units, focusing on bosonic systems and variational Monte Carlo. We use asynchronous execution schemes with shared memory persistence and obtain excellent acceleration: compared with single-core execution, the GPU-accelerated code runs over 100 times faster. The CUDA code is provided along with the package necessary to execute variational Monte Carlo for a system representing liquid helium-4. The program was benchmarked on several models of Nvidia GPU, including the Fermi GTX560 and M2090 and the latest Kepler-architecture K20 GPU. Kepler-specific optimization is discussed.
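
    The data-parallel structure that makes variational Monte Carlo map well onto GPUs can be sketched in NumPy, with one array slot per walker standing in for one GPU thread. This is a toy 1-D harmonic oscillator, not the paper's liquid-helium CUDA code; the trial wavefunction and sweep counts are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def log_psi(x, alpha=0.5):
            # toy trial wavefunction: ln|psi| for a 1-D harmonic oscillator
            return -alpha * x * x

        walkers = rng.normal(size=50_000)  # one walker per "GPU thread"
        for _ in range(200):               # Metropolis sweeps, all walkers at once
            proposal = walkers + rng.normal(scale=0.5, size=walkers.shape)
            log_ratio = 2.0 * (log_psi(proposal) - log_psi(walkers))
            accept = np.log(rng.random(walkers.shape)) < log_ratio
            walkers = np.where(accept, proposal, walkers)

        alpha = 0.5
        # local energy E_L = alpha + x^2 * (1/2 - 2 alpha^2); constant at alpha = 0.5
        e_local = alpha + walkers**2 * (0.5 - 2.0 * alpha**2)
        print(f"<E> ~ {e_local.mean():.4f} (exact: 0.5 at alpha = 0.5)")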

  13. Advanced computers and Monte Carlo

    International Nuclear Information System (INIS)

    High-performance parallelism that is currently available is synchronous in nature. It is manifested in such architectures as the Burroughs ILLIAC-IV, CDC STAR-100, TI ASC, CRI CRAY-1, ICL DAP, and many special-purpose array processors designed for signal processing. This form of parallelism has apparently not been of significant value to many important Monte Carlo calculations. Nevertheless, there is much asynchronous parallelism in many of these calculations. A model of a production code that requires up to 20 hours per problem on a CDC 7600 is studied for suitability on some asynchronous architectures that are on the drawing board. The code is described, and some of its properties and resource requirements are identified for comparison with the corresponding properties and resources of some asynchronous multiprocessor architectures. Arguments are made for programmer aids and special syntax to identify and support important asynchronous parallelism. 2 figures, 5 tables
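
    In modern terms, the asynchronous parallelism argued for here corresponds to dispatching independent Monte Carlo work units to workers that complete in any order. The Python sketch below is an anachronistic illustration of that pattern, not the production code discussed in the record.

        import random
        from concurrent.futures import ProcessPoolExecutor, as_completed

        def mc_batch(seed, n=100_000):
            # one independent work unit: estimate pi by rejection sampling
            rng = random.Random(seed)
            hits = sum(rng.random()**2 + rng.random()**2 <= 1.0 for _ in range(n))
            return 4.0 * hits / n

        if __name__ == "__main__":
            with ProcessPoolExecutor() as pool:
                futures = [pool.submit(mc_batch, seed) for seed in range(8)]
                # results are consumed asynchronously, in completion order
                estimates = [f.result() for f in as_completed(futures)]
            print(f"pi ~ {sum(estimates) / len(estimates):.4f}")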

  14. A new method for commissioning Monte Carlo treatment planning systems

    Science.gov (United States)

    Aljarrah, Khaled Mohammed

    2005-11-01

    The Monte Carlo method is an accurate method for solving numerical problems in different fields. It has been used for accurate radiation dose calculation in radiation treatment of cancer. However, the modeling of an individual radiation beam produced by a medical linear accelerator for Monte Carlo dose calculation, i.e., the commissioning of a Monte Carlo treatment planning system, has been the bottleneck for the clinical implementation of Monte Carlo treatment planning. In this study a new method has been developed to determine the parameters of the initial electron beam incident on the target for a clinical linear accelerator. The interaction of the initial electron beam with the accelerator target produces x-rays and secondary charged particles. After successive interactions in the linac head components, the x-ray photons and the secondary charged particles interact with the patient's anatomy and deliver dose to the region of interest. The determination of the initial electron beam parameters is important for estimating the dose delivered to patients. These parameters, such as beam energy and radial intensity distribution, are usually estimated through a trial-and-error process. In this work an easy and efficient method was developed to determine these parameters by comparing calculated 3D dose distributions, for a grid of assumed beam energies and radii in a water phantom, with measured data. Different cost functions were studied to choose the appropriate function for the data comparison, and the beam parameters were determined in light of this method. Because linacs of the same type are assumed to be identical in geometry and to differ only in their initial phase-space parameters, the results of this method can serve as source data for commissioning other machines of the same type.
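
    A minimal sketch of the grid-search idea follows: pick the (energy, radius) pair whose precomputed dose curve best matches the measured water-phantom data under a least-squares cost function. Here `fake_pdd` is a hypothetical stand-in for the Monte Carlo dose grids, and the candidate energies and radii are assumed values.

        import numpy as np

        energies = [5.5, 6.0, 6.5]           # assumed initial e- beam energies (MeV)
        radii = [0.5, 1.0, 1.5]              # assumed radial intensity widths (mm)
        depths = np.linspace(0.0, 30.0, 61)  # depth axis in water (cm)

        def fake_pdd(energy, radius):
            # hypothetical stand-in for a Monte Carlo depth-dose curve
            return np.exp(-depths / (4.0 + energy)) * (1.0 + 0.02 * radius)

        measured = fake_pdd(6.0, 1.0) + np.random.default_rng(1).normal(0.0, 0.002, depths.size)

        # least-squares cost over the parameter grid
        best = min(((e, r) for e in energies for r in radii),
                   key=lambda p: float(np.sum((fake_pdd(*p) - measured) ** 2)))
        print(f"best-fit beam parameters: E = {best[0]} MeV, r = {best[1]} mm")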

  15. Monte Carlo model for electron degradation in methane

    CERN Document Server

    Bhardwaj, Anil

    2015-01-01

    We present a Monte Carlo model for the degradation of 1-10,000 eV electrons in an atmosphere of methane. The electron impact cross sections for CH4 are compiled, and analytical representations of these cross sections are used as input to the model. Yield spectra, which provide information about the number of inelastic events that have taken place in each energy bin, are used to calculate the yield (or population) of various inelastic processes. The numerical yield spectra obtained from the Monte Carlo simulations are represented analytically, thus generating the Analytical Yield Spectra (AYS). The AYS are employed to obtain the mean energy per ion pair and the efficiencies of various inelastic processes. The mean energy per ion pair for neutral CH4 is found to be 26 (27.8) eV at 10 (0.1) keV. Efficiency calculations showed that ionization is the dominant process at energies >50 eV, for which more than 50% of the incident electron energy is used. Above 25 eV, dissociation has an efficiency of 27%. Below 10 eV, vibrational e...
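
    The mean energy per ion pair is the simple ratio W = E0/N(ions), where N(ions) counts the ionizations produced while a primary electron of energy E0 degrades. The counts below are assumed values chosen to reproduce the quoted W of about 26 eV at 10 keV.

        E0_EV = 10_000.0
        N_ION_PAIRS = 385  # assumed ionization count for the full cascade
        IP_CH4_EV = 12.6   # approximate CH4 ionization threshold (assumed)

        w_value = E0_EV / N_ION_PAIRS
        ionization_fraction = N_ION_PAIRS * IP_CH4_EV / E0_EV
        print(f"W ~ {w_value:.1f} eV per ion pair")
        print(f"fraction of E0 spent on ionization ~ {ionization_fraction:.0%}")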

  16. Efficiency transfer using the GEANT4 code of CERN for HPGe gamma spectrometry.

    Science.gov (United States)

    Chagren, S; Ben Tekaya, M; Reguigui, N; Gharbi, F

    2016-01-01

    In this work we apply the GEANT4 code of CERN to calculate the peak efficiency in high-purity germanium (HPGe) gamma spectrometry using three different procedures. The first is a direct calculation. The second corresponds to the usual case of efficiency transfer between two different configurations at constant emission energy, assuming a reference point-detection configuration. The third, a new procedure, consists of transferring the peak efficiency between two detection configurations emitting the gamma ray at different energies, assuming a "virtual" reference point-detection configuration. No pre-optimization of the detector's geometrical characteristics was performed before the transfer, in order to test the ability of the efficiency transfer to reduce the effect of ignorance of their real magnitudes on the quality of the transferred efficiency. The calculated and measured efficiencies were found to be in good agreement for the two investigated methods of efficiency transfer. The agreement obtained proves that the Monte Carlo method, and especially the GEANT4 code, constitutes an efficient tool for obtaining accurate detection efficiency values. The second investigated efficiency-transfer procedure is useful for calibrating the HPGe gamma detector at any emission energy for a voluminous source, using one point-source detection efficiency at a different energy as the reference. The calculations performed in this work were applied to the measurement exercise of the EUROMET428 project, in which full-energy peak efficiencies in the energy range 60-2000 keV were evaluated for a typical coaxial p-type HPGe detector and several types of source configuration: point sources located at various distances from the detector and a cylindrical box containing three matrices. PMID:26623928
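
    The energy-shifted transfer amounts to scaling a measured reference efficiency by a Monte Carlo ratio: eff(target, E2) = eff_meas(ref, E1) x [eff_MC(target, E2) / eff_MC(ref, E1)]. The sketch below uses illustrative numbers, not EUROMET428 data.

        # all efficiencies are assumed, for illustration only
        eff_ref_measured = 2.10e-2  # point source at 661.7 keV, measured
        eff_ref_mc = 2.04e-2        # same point geometry and energy, GEANT4
        eff_target_mc = 6.5e-3      # cylindrical box at 1332.5 keV, GEANT4

        # transferred full-energy-peak efficiency for the voluminous source
        eff_target = eff_ref_measured * (eff_target_mc / eff_ref_mc)
        print(f"transferred efficiency ~ {eff_target:.2e}")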

  17. Growing lattice animals and Monte-Carlo methods

    Science.gov (United States)

    Reich, G. R.; Leath, P. L.

    1980-01-01

    We consider the search problems which arise in Monte-Carlo studies involving growing lattice animals. A new periodic hashing scheme (based on a periodic cell), especially suited to these problems, is presented which takes advantage both of the connected geometric structure of the animals and of the traversal-oriented nature of the search. The scheme is motivated by a physical analogy and tested numerically on compact and on ramified animals. In both cases the performance is found to be more efficient than random hashing, to a degree depending on the compactness of the animals.
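
    A periodic hash of this kind maps each lattice site into a fixed periodic cell and chains collisions, so membership tests during animal growth stay O(1) on average. The sketch below is a minimal version under that reading; the cell size is an assumed tuning parameter, not the authors' value.

        class PeriodicSiteHash:
            def __init__(self, cell=64):
                self.cell = cell
                self.table = [[] for _ in range(cell * cell)]

            def _bucket(self, x, y):
                # periodic reduction: nearby sites of a connected animal
                # tend to land in nearby (often the same) buckets
                return (x % self.cell) * self.cell + (y % self.cell)

            def add(self, x, y):
                self.table[self._bucket(x, y)].append((x, y))

            def contains(self, x, y):
                return (x, y) in self.table[self._bucket(x, y)]

        h = PeriodicSiteHash()
        h.add(3, 4)
        print(h.contains(3, 4), h.contains(3, 5))  # True False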

  18. Monte Carlo simulation of the Neutrino-4 experiment

    Energy Technology Data Exchange (ETDEWEB)

    Serebrov, A. P., E-mail: serebrov@pnpi.spb.ru; Fomin, A. K.; Onegin, M. S.; Ivochkin, V. G.; Matrosov, L. N. [National Research Center Kurchatov Institute, Petersburg Nuclear Physics Institute (Russian Federation)

    2015-12-15

    Monte Carlo simulation of the two-section reactor antineutrino detector of the Neutrino-4 experiment is carried out. The scintillation-type detector is based on the inverse beta-decay reaction. The antineutrino is recorded by two successive signals from the positron and the neutron. The simulation of the detector sections and the active shielding is performed. As a result of the simulation, the distributions of photomultiplier signals from the positron and the neutron are obtained. The efficiency of the detector depending on the signal recording thresholds is calculated.
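
    The threshold dependence of the efficiency comes from the prompt/delayed selection: an antineutrino candidate needs a positron-like signal followed by a neutron-capture-like signal within a coincidence window. The toy selection below uses assumed signals, thresholds, and window, purely for illustration.

        # (positron-like signal in MeV, neutron-capture signal in MeV,
        #  delay in microseconds) -- assumed toy events
        events = [(1.8, 0.9, 40.0), (0.4, 1.1, 55.0),
                  (2.2, 1.4, 310.0), (1.5, 1.0, 80.0)]

        E_POS_MIN, E_N_MIN, WINDOW_US = 1.0, 0.8, 200.0

        accepted = [ev for ev in events
                    if ev[0] > E_POS_MIN and ev[1] > E_N_MIN and ev[2] < WINDOW_US]
        print(f"selection efficiency at these thresholds: {len(accepted)}/{len(events)}")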

  19. MATLAB platform for Monte Carlo planning and dosimetry experimental evaluation

    International Nuclear Information System (INIS)

    A new platform has been developed for full Monte Carlo treatment planning (MCTP) and independent experimental evaluation that can be integrated into clinical practice. The tool has proved its usefulness and efficiency and now forms part of our research group's workflow, being used to generate results that are reviewed and published. This software integrates numerous image-processing algorithms together with planning optimization algorithms, allowing the whole MCTP planning process to be carried out from a single interface. In addition, it is a flexible and accurate tool for the evaluation of experimental dosimetric data for the quality control of actual treatments. (Author)