WorldWideScience

Sample records for carlo efficiency calibration

  1. Monte Carlo Simulations for the Purpose of Efficiency Curve Calibration for the Fastscan Whole Body Counter

    Science.gov (United States)

    Graham, Hannah Robyn

    In order to be able to qualify and quantify radiation exposure in terms of dose, a Fastscan whole body counter must be calibrated correctly. Current calibration methods do not take the full range of body types into consideration when creating efficiency curve calibrations. The goal of this work is the creation of a Monte Carlo (MCNP) model that allows the simulation of efficiency curves for a diverse population of subjects. Models were created for both the Darlington and the Pickering Fastscan WBCs, and the simulations were benchmarked against experimental results with good agreement. The Pickering Fastscan was found to have agreement to within +/-9%, and the Darlington Fastscan had agreement to within +/-11%. Further simulations were conducted to investigate the effects of increased body fat on the detected activity, as well as locating the position of external contamination using front/back ratios of activity. Simulations were also conducted to create efficiency calibrations that had good agreement with the manufacturer's efficiency curves. The work completed in this thesis can be used to create efficiency calibration curves for unique body compositions in the future.
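
    As an illustrative sketch of how simulated efficiency points become a usable efficiency calibration curve (the energies and efficiencies below are hypothetical, not values from this work), a log-log polynomial fit is a common choice:

```python
import numpy as np

# Hypothetical full-energy-peak efficiencies (counts per emitted photon)
# at a few calibration energies (keV); values are illustrative only.
energies = np.array([122.0, 344.0, 662.0, 1173.0, 1332.0, 1836.0])
efficiencies = np.array([0.0147, 0.0064, 0.0038, 0.0024, 0.0022, 0.0017])

# A common parameterization: ln(eff) as a low-order polynomial in ln(E).
coeffs = np.polyfit(np.log(energies), np.log(efficiencies), deg=2)

def efficiency(e_kev):
    """Full-energy-peak efficiency interpolated from the fitted curve."""
    return np.exp(np.polyval(coeffs, np.log(e_kev)))

# The fitted curve should reproduce the calibration points closely.
residuals = efficiency(energies) / efficiencies - 1.0
print(np.max(np.abs(residuals)))
```

    Once fitted, the curve can be evaluated at any gamma energy in the calibrated range; simulated points for a specific body composition would simply replace the illustrative arrays.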

  2. Efficiency calibration of an extended-range Ge detector by a detailed Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Peyres, V. [Metrologia de Radiaciones Ionizantes, CIEMAT, Avda. Complutense 22, Madrid 28040 (Spain)], E-mail: Virginia.peyres@ciemat.es; Garcia-Torano, E. [Metrologia de Radiaciones Ionizantes, CIEMAT, Avda. Complutense 22, Madrid 28040 (Spain)

    2007-09-21

    A Monte Carlo simulation has been employed for calibrating an extended-range Ge detector in an energy range from 14 to 1800 keV. A set of point sources of monoenergetic and multi-gamma emitters was measured at 15 cm from the detector window, providing 26 experimental values to which the results of the simulations are compared. Discrepancies between simulated and experimental values are within 1 standard deviation, and relative differences are, in most cases, below 1%.

  3. Energy and resolution calibration of NaI(Tl) and LaBr{sub 3}(Ce) scintillators and validation of an EGS5 Monte Carlo user code for efficiency calculations

    Energy Technology Data Exchange (ETDEWEB)

    Casanovas, R., E-mail: ramon.casanovas@urv.cat [Unitat de Fisica Medica, Facultat de Medicina i Ciencies de la Salut, Universitat Rovira i Virgili, ES-43201 Reus (Tarragona) (Spain); Morant, J.J. [Servei de Proteccio Radiologica, Facultat de Medicina i Ciencies de la Salut, Universitat Rovira i Virgili, ES-43201 Reus (Tarragona) (Spain); Salvado, M. [Unitat de Fisica Medica, Facultat de Medicina i Ciencies de la Salut, Universitat Rovira i Virgili, ES-43201 Reus (Tarragona) (Spain)

    2012-05-21

    Radiation detectors yield optimal performance only if they are accurately calibrated. This paper presents the energy, resolution and efficiency calibrations for two scintillation detectors, NaI(Tl) and LaBr{sub 3}(Ce). For the first two calibrations, several fitting functions were tested. To perform the efficiency calculations, a Monte Carlo user code for the EGS5 code system was developed with several important implementations. The correct performance of the simulations was validated by comparing the simulated spectra with the experimental spectra and by reproducing a number of efficiency and activity calculations. - Highlights: • NaI(Tl) and LaBr{sub 3}(Ce) scintillation detectors are used for gamma-ray spectrometry. • Energy, resolution and efficiency calibrations are discussed for both detectors. • For the first two calibrations, several fitting functions are tested. • A Monte Carlo user code for EGS5 was developed for the efficiency calculations. • The code was validated by reproducing some efficiency and activity calculations.
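
    The two detector-specific calibrations named here (energy and resolution) can be sketched with simple fitting functions. The channel positions, energies and FWHM values below are assumptions for illustration, and FWHM(E) = p0 + p1*sqrt(E) is just one of the candidate functions such studies compare:

```python
import numpy as np

# Illustrative peak centroids (channels) and known gamma energies (keV)
# for a NaI(Tl)-like spectrum; all numbers are assumptions, not measured data.
channels = np.array([178.0, 331.0, 586.0, 666.0])
energies = np.array([356.0, 662.0, 1173.0, 1332.0])

# Energy calibration: a linear channel-to-energy function is often adequate.
a1, a0 = np.polyfit(channels, energies, 1)

def channel_to_energy(ch):
    return a1 * ch + a0

# Resolution calibration: fit FWHM (keV) against sqrt(E), one common choice
# for scintillators among the several fitting functions tested in practice.
fwhm = np.array([30.0, 44.0, 61.0, 65.0])   # illustrative FWHM values (keV)
p1, p0 = np.polyfit(np.sqrt(energies), fwhm, 1)

def resolution(e_kev):
    return p0 + p1 * np.sqrt(e_kev)
```
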

  4. Efficient kinetic Monte Carlo simulation

    Science.gov (United States)

    Schulze, Tim P.

    2008-02-01

    This paper concerns kinetic Monte Carlo (KMC) algorithms that have a single-event execution time independent of the system size. Two methods are presented—one that combines the use of inverted-list data structures with rejection Monte Carlo and a second that combines inverted lists with the Marsaglia-Norman-Cannon algorithm. The resulting algorithms apply to models with rates that are determined by the local environment but are otherwise arbitrary, time-dependent and spatially heterogeneous. While especially useful for crystal growth simulation, the algorithms are presented from the point of view that KMC is the numerical task of simulating a single realization of a Markov process, allowing application to a broad range of areas where heterogeneous random walks are the dominant simulation cost.
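
    The rejection idea underlying the first method can be sketched in a few lines (the inverted-list bookkeeping that makes proposals and rate updates O(1) in the paper is omitted here):

```python
import math
import random

def rejection_kmc(rates, t_end, seed=0):
    """Minimal rejection ("null-event") KMC with constant cost per attempt.

    `rates` is a list of per-site event rates, bounded by r_max. Each attempt
    picks a site uniformly and accepts it with probability rate/r_max; the
    clock always advances as if all sites had rate r_max, which keeps the
    event statistics exact. This sketches only the rejection step; the
    paper's algorithms add inverted-list data structures so that proposals
    and rate updates stay O(1) as the configuration evolves.
    """
    rng = random.Random(seed)
    n, r_max = len(rates), max(rates)
    t, events = 0.0, []
    while True:
        # Exponential waiting time for the total proposal rate n * r_max.
        t += -math.log(1.0 - rng.random()) / (n * r_max)
        if t >= t_end:
            return events
        site = rng.randrange(n)
        if rng.random() < rates[site] / r_max:
            events.append((t, site))   # accepted event; otherwise a null event

events = rejection_kmc([1.0, 0.5, 2.0], t_end=5.0)
print(len(events))
```

    Rejection is cheap when rates are comparable; when a few sites have much larger rates than the rest, most attempts become null events, which is what the inverted-list variants are designed to avoid.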

  5. Monte Carlo simulation: tool for the calibration in analytical determination of radionuclides; Simulacion Monte Carlo: herramienta para la calibracion en determinaciones analiticas de radionucleidos

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez, Jorge A. Carrazana; Ferrera, Eduardo A. Capote; Gomez, Isis M. Fernandez; Castro, Gloria V. Rodriguez; Ricardo, Niury Martinez, E-mail: cphr@cphr.edu.cu [Centro de Proteccion e Higiene de las Radiaciones (CPHR), La Habana (Cuba)

    2013-07-01

    This work shows how the traceability of the analytical determinations is established using this calibration method. It highlights the advantages offered by Monte Carlo simulation for applying corrections for differences in chemical composition, density and height of the analyzed samples. Likewise, the results obtained by the LVRA in two exercises organized by the International Atomic Energy Agency (IAEA) are presented. In these exercises (an intercomparison and a proficiency test) all reported analytical results were obtained based on efficiency calibrations by Monte Carlo simulation using the DETEFF program.

  6. Top Quark Mass Calibration for Monte Carlo Event Generators

    CERN Document Server

    Butenschoen, Mathias; Hoang, Andre H; Mateu, Vicent; Preisser, Moritz; Stewart, Iain W

    2016-01-01

    The most precise top quark mass measurements use kinematic reconstruction methods, determining the top mass parameter of a Monte Carlo event generator, $m_t^{\rm MC}$. Due to hadronization and parton shower dynamics, relating $m_t^{\rm MC}$ to a field theory mass is difficult. We present a calibration procedure to determine this relation using hadron level QCD predictions for observables with kinematic mass sensitivity. Fitting $e^+e^-$ 2-Jettiness calculations at NLL/NNLL order to Pythia 8.205, $m_t^{\rm MC}$ differs from the pole mass by $900$/$600$ MeV, and agrees with the MSR mass within uncertainties, $m_t^{\rm MC}\simeq m_{t,1\,{\rm GeV}}^{\rm MSR}$.

  7. Covariances for Gamma Spectrometer Efficiency Calibrations

    Directory of Open Access Journals (Sweden)

    Williams John G.

    2016-01-01

    An essential part of the efficiency calibration of gamma spectrometers is the determination of uncertainties on the results. Although this is routinely done, it often does not include the correlations between efficiencies at different energies. These can be important in the subsequent use of the detectors to obtain activities for a set of dosimetry reactions. If those values are not mutually independent, that fact could affect the validity of adjustments or of other conclusions resulting from the analysis. Examples are given of detector calibrations in which the correlations are calculated and propagated through an analysis of measured activities.
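
    A minimal way to see such correlations is to propagate the calibration fit by Monte Carlo: perturb the measured efficiency points, refit, and correlate the fitted efficiencies at two analysis energies (all numbers below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical calibration points: energies (keV), "true" full-energy-peak
# efficiencies, and an assumed 2% relative counting uncertainty per point.
energies = np.array([122.0, 344.0, 662.0, 1332.0])
eff_true = np.array([0.0147, 0.0064, 0.0038, 0.0022])
rel_unc = 0.02

# Monte Carlo over calibrations: perturb the measured points, refit the
# log-log efficiency line, and record the fitted efficiency at two nearby
# analysis energies.
e_a, e_b = 500.0, 600.0
samples = []
for _ in range(2000):
    measured = eff_true * (1.0 + rel_unc * rng.standard_normal(4))
    c = np.polyfit(np.log(energies), np.log(measured), 1)
    samples.append([np.exp(np.polyval(c, np.log(e_a))),
                    np.exp(np.polyval(c, np.log(e_b)))])
samples = np.asarray(samples)

# Both values come from the same fitted parameters, so they are strongly
# correlated -- exactly the covariance term that is often omitted.
corr = np.corrcoef(samples.T)[0, 1]
print(corr)
```

    Treating the two efficiencies as independent would understate the covariance of any derived activities that use both.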

  8. Monte Carlo based calibration of scintillation detectors for laboratory and in situ gamma ray measurements

    NARCIS (Netherlands)

    van der Graaf, E. R.; Limburg, J.; Koomans, R. L.; Tijs, M.

    2011-01-01

    The calibration of scintillation detectors for gamma radiation in a well characterized setup can be transferred to other geometries using Monte Carlo simulations to account for the differences between the calibration and the other geometry. In this study a calibration facility was used that is const

  9. Calibration of Li-glass Detector Efficiency

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    A Li-glass detector will be used to measure the neutron beam flux at the Gamma-ray Total Absorption Facility (GTAF). We have calibrated the detection efficiency of the Li-glass detector at the 5SDH-2 accelerator. The neutron beam was produced by the reaction 7Li

  10. Efficiency and accuracy of Monte Carlo (importance) sampling

    NARCIS (Netherlands)

    Waarts, P.H.

    2003-01-01

    Monte Carlo analysis is often regarded as the simplest and most accurate reliability method. Besides, it is the most transparent method. The only problem is the trade-off between accuracy and efficiency: Monte Carlo becomes less efficient or less accurate when very low probabilities are to be computed.
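
    The low-probability difficulty, and the importance-sampling remedy named in the title, can be illustrated with a standard rare-event example: estimating P(X > 4) for a standard normal X (this generic example is not taken from the report itself):

```python
import math
import random

rng = random.Random(1)
TRUE_P = 0.5 * math.erfc(4.0 / math.sqrt(2.0))  # P(X > 4), X ~ N(0,1)

# Crude Monte Carlo: with 10^4 samples the rare region is hit only a
# fraction of a time on average, so the estimate is extremely noisy.
n = 10_000
crude = sum(rng.gauss(0.0, 1.0) > 4.0 for _ in range(n)) / n

# Importance sampling: draw from N(4,1) so the rare region is hit often,
# and reweight each sample by the density ratio phi(x)/q(x) = exp(8 - 4x).
total = 0.0
for _ in range(n):
    x = rng.gauss(4.0, 1.0)
    if x > 4.0:
        total += math.exp(8.0 - 4.0 * x)
importance = total / n

print(TRUE_P, crude, importance)
```

    With the same sample budget, the importance-sampling estimate lands within a few percent of the true probability, while the crude estimate is typically zero.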

  11. Use of Monte Carlo simulations in the assessment of calibration strategies-Part I: an introduction to Monte Carlo mathematics.

    Science.gov (United States)

    Burrows, John

    2013-04-01

    An introduction to the use of the mathematical technique of Monte Carlo simulations to evaluate least squares regression calibration is described. Monte Carlo techniques involve the repeated sampling of data from a population that may be derived from real (experimental) data, but is more conveniently generated by a computer using a model of the analytical system and a randomization process to produce a large database. Datasets are selected from this population and fed into the calibration algorithms under test, thus providing a facile way of producing a sufficiently large number of assessments of the algorithm to enable a statistically valid appraisal of the calibration process to be made. This communication provides a description of the technique that forms the basis of the results presented in Parts II and III of this series, which follow in this issue, and also highlights the issues arising from the use of small data populations in bioanalysis.
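
    A minimal version of the technique: generate a large population of synthetic calibration datasets from an assumed model, feed each to the least-squares algorithm, and appraise the distribution of the results (the model and noise values below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed "true" analytical system: linear response y = 2*x + 1 with
# Gaussian noise. The population of datasets is computer-generated from
# this model and a randomization process, as the technique prescribes.
true_slope, true_intercept, noise_sd = 2.0, 1.0, 0.05
x_cal = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0])

slopes = []
for _ in range(5000):
    y = (true_slope * x_cal + true_intercept
         + noise_sd * rng.standard_normal(x_cal.size))
    slope, _ = np.polyfit(x_cal, y, 1)   # calibration algorithm under test
    slopes.append(slope)
slopes = np.asarray(slopes)

# With thousands of repetitions the appraisal is statistically valid: the
# fitted slope should be unbiased, with a predictable spread.
print(slopes.mean(), slopes.std())
```

    Smaller populations, as the abstract notes for bioanalysis, make the same appraisal much less reliable.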

  12. High-precision efficiency calibration of a high-purity co-axial germanium detector

    Energy Technology Data Exchange (ETDEWEB)

    Blank, B., E-mail: blank@cenbg.in2p3.fr [Centre d' Etudes Nucléaires de Bordeaux Gradignan, UMR 5797, CNRS/IN2P3, Université de Bordeaux, Chemin du Solarium, BP 120, 33175 Gradignan Cedex (France); Souin, J.; Ascher, P.; Audirac, L.; Canchel, G.; Gerbaux, M.; Grévy, S.; Giovinazzo, J.; Guérin, H.; Nieto, T. Kurtukian; Matea, I. [Centre d' Etudes Nucléaires de Bordeaux Gradignan, UMR 5797, CNRS/IN2P3, Université de Bordeaux, Chemin du Solarium, BP 120, 33175 Gradignan Cedex (France); Bouzomita, H.; Delahaye, P.; Grinyer, G.F.; Thomas, J.C. [Grand Accélérateur National d' Ions Lourds, CEA/DSM, CNRS/IN2P3, Bvd Henri Becquerel, BP 55027, F-14076 CAEN Cedex 5 (France)

    2015-03-11

    A high-purity co-axial germanium detector has been calibrated in efficiency to a precision of about 0.15% over a wide energy range. High-precision scans of the detector crystal and γ-ray source measurements have been compared to Monte-Carlo simulations to adjust the dimensions of a detector model. For this purpose, standard calibration sources and short-lived online sources have been used. The resulting efficiency calibration reaches the precision needed e.g. for branching ratio measurements of super-allowed β decays for tests of the weak-interaction standard model.

  13. EMCCD calibration for astronomical imaging: Wide FastCam at the Telescopio Carlos Sánchez

    Science.gov (United States)

    Velasco, S.; Oscoz, A.; López, R. L.; Puga, M.; Pérez-Garrido, A.; Pallé, E.; Ricci, D.; Ayuso, I.; Hernández-Sánchez, M.; Vázquez-Martín, S.; Protasio, C.; Béjar, V.; Truant, N.

    2017-03-01

    The evident benefits of Electron Multiplying CCDs (EMCCDs) - speed, high sensitivity, low noise and the capability of detecting single-photon events whilst maintaining high quantum efficiency - are bringing these detectors to many state-of-the-art astronomical instruments (Velasco et al. 2016; Oscoz et al. 2008). EMCCDs are the perfect answer to the need for high sensitivity levels, as they are not limited by the readout noise of the output amplifier even when operated at high readout frame rates, while conventional CCDs are. Here we present a quantitative on-sky method to calibrate EMCCD detectors dedicated to astronomical imaging, developed during the commissioning process (Velasco et al. 2016) and first observations (Ricci et al. 2016, in prep.) with Wide FastCam (Marga et al. 2014) at the Telescopio Carlos Sánchez (TCS) in the Observatorio del Teide.

  14. Calibration and Monte Carlo modelling of neutron long counters

    CERN Document Server

    Tagziria, H

    2000-01-01

    The Monte Carlo technique has become a very powerful tool in radiation transport as full advantage is taken of enhanced cross-section data, more powerful computers and statistical techniques, together with better characterisation of neutron and photon source spectra. At the National Physical Laboratory, calculations using the Monte Carlo radiation transport code MCNP-4B have been combined with accurate measurements to characterise two long counters routinely used to standardise monoenergetic neutron fields. New and more accurate response function curves have been produced for both long counters. A novel approach using Monte Carlo methods has been developed, validated and used to model the response function of the counters and determine more accurately their effective centres, which have always been difficult to establish experimentally. Calculations and measurements agree well, especially for the De Pangher long counter for which details of the design and constructional material are well known. The sensitivit...

  15. Force calibration using errors-in-variables regression and Monte Carlo uncertainty evaluation

    Science.gov (United States)

    Bartel, Thomas; Stoudt, Sara; Possolo, Antonio

    2016-06-01

    An errors-in-variables regression method is presented as an alternative to the ordinary least-squares regression computation currently employed for determining the calibration function for force measuring instruments from data acquired during calibration. A Monte Carlo uncertainty evaluation for the errors-in-variables regression is also presented. The corresponding function (which we call measurement function, often called analysis function in gas metrology) necessary for the subsequent use of the calibrated device to measure force, and the associated uncertainty evaluation, are also derived from the calibration results. Comparisons are made, using real force calibration data, between the results from the errors-in-variables and ordinary least-squares analyses, as well as between the Monte Carlo uncertainty assessment and the conventional uncertainty propagation employed at the National Institute of Standards and Technology (NIST). The results show that the errors-in-variables analysis properly accounts for the uncertainty in the applied calibrated forces, and that the Monte Carlo method, owing to its intrinsic ability to model uncertainty contributions accurately, yields a better representation of the calibration uncertainty throughout the transducer’s force range than the methods currently in use. These improvements notwithstanding, the differences between the results produced by the current and by the proposed new methods generally are small because the relative uncertainties of the inputs are small and most contemporary load cells respond approximately linearly to such inputs. For this reason, there will be no compelling need to revise any of the force calibration reports previously issued by NIST.
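
    The Monte Carlo part of the approach can be sketched as follows: both the applied forces and the readings are perturbed within their uncertainties and the calibration is refitted each time. All values below are hypothetical, and the fitting step is shown as plain least squares for brevity, whereas the paper uses errors-in-variables regression:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical calibration data: applied forces (kN) and instrument
# readings, BOTH uncertain; every number here is an illustrative assumption.
force_true = np.array([10.0, 20.0, 50.0, 100.0, 200.0])
true_sens = 0.004                  # reading units per kN (assumed)
sd_force, sd_reading = 0.05, 1e-4  # standard uncertainties (assumed)

# Monte Carlo uncertainty evaluation: perturb the applied forces as well as
# the readings -- the point of the errors-in-variables view -- then refit
# and summarize the distribution of the fitted sensitivity.
sens_samples = []
for _ in range(5000):
    f = force_true + sd_force * rng.standard_normal(force_true.size)
    r = true_sens * force_true + sd_reading * rng.standard_normal(force_true.size)
    slope, _ = np.polyfit(f, r, 1)   # simplified fitting step of the sketch
    sens_samples.append(slope)

sens = np.asarray(sens_samples)
print(sens.mean(), sens.std())
```

    The spread of `sens` is the Monte Carlo uncertainty of the calibrated sensitivity; because force uncertainties enter the simulation directly, no linearized propagation formula is needed.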

  16. Calibration, characterisation and Monte Carlo modelling of a fast-UNCL

    Energy Technology Data Exchange (ETDEWEB)

    Tagziria, Hamid, E-mail: hamid.tagziria@jrc.ec.europa.eu [European Commission, Joint Research Center, ITU-Nuclear Security Unit, I-21027 Ispra (Italy); Bagi, Janos; Peerani, Paolo [European Commission, Joint Research Center, ITU-Nuclear Security Unit, I-21027 Ispra (Italy); Belian, Antony [Department of Safeguards, SGTS/TAU, IAEA Vienna Austria (Austria)

    2012-09-21

    This paper describes the calibration, characterisation and Monte Carlo modelling of a new IAEA Uranium Neutron Collar (UNCL) for LWR fuel, which can be operated in both passive and active modes. It can employ either 35 {sup 3}He tubes (in active configuration) or 44 tubes at 10 atm pressure (in its passive configuration) and thus can be operated in fast mode (with Cd liner) as its efficiency is higher than that of the standard UNCL. Furthermore, it has an adjustable internal cavity which allows the measurement of varying sizes of fuel assemblies such as WWER, PWR and BWR. It is intended to be used with Cd liners in active mode (with an AmLi interrogation source in place) by the inspectorate for the determination of the {sup 235}U content in fresh fuel assemblies, especially in cases where high concentrations of burnable poisons cause problems with accurate assays. A campaign of measurements has been carried out at the JRC Performance Laboratories (PERLA) in Ispra (Italy) using various radionuclide neutron sources ({sup 252}Cf, {sup 241}AmLi and PuGa) and our BWR and PWR reference assemblies, in order to calibrate and characterise the counter as well as assess its performance and determine its optimum operational parameters. Furthermore, the fast-UNCL has been extensively modelled at JRC using the Monte Carlo code, MCNP-PTA, which simulates both the neutron transport and the coincidence electronics. The model has been validated using our measurements, which agreed well with calculations. The WWER1000 fuel assembly, for which there are no representative reference materials for an adequate calibration of the counter, has also been modelled and the response of the counter to this fuel assembly has been simulated. Subsequently, numerical calibration curves have been obtained for the above fuel assemblies in various modes (fast and thermal). The sensitivity of the counter to fuel rod substitution as well as other important aspects and the parameters of the fast

  17. Confidence and efficiency scaling in variational quantum Monte Carlo calculations

    Science.gov (United States)

    Delyon, F.; Bernu, B.; Holzmann, Markus

    2017-02-01

    Based on the central limit theorem, we discuss the problem of evaluation of the statistical error of Monte Carlo calculations using a time-discretized diffusion process. We present a robust and practical method to determine the effective variance of general observables and show how to verify the equilibrium hypothesis by the Kolmogorov-Smirnov test. We then derive scaling laws of the efficiency illustrated by variational Monte Carlo calculations on the two-dimensional electron gas.
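
    The practical difficulty addressed here, that successive Monte Carlo samples are correlated and so the naive variance underestimates the true error, can be illustrated with a standard blocking analysis on a synthetic correlated series (an AR(1) process standing in for the time-discretized diffusion; this is a generic technique, not the paper's specific estimator):

```python
import numpy as np

rng = np.random.default_rng(11)

# AR(1) sequence standing in for correlated samples of an observable from a
# time-discretized diffusion; rho sets the correlation time (illustrative).
n, rho = 2**16, 0.9
noise = rng.standard_normal(n) * np.sqrt(1.0 - rho**2)
x = np.empty(n)
x[0] = rng.standard_normal()
for i in range(1, n):
    x[i] = rho * x[i - 1] + noise[i]

def blocked_error(data):
    """Naive standard error of the mean at successive blocking levels."""
    errors = []
    d = np.asarray(data, dtype=float)
    while d.size >= 32:
        errors.append(d.std(ddof=1) / np.sqrt(d.size))
        d = 0.5 * (d[0::2] + d[1::2])   # merge neighbors into larger blocks
    return errors

errs = blocked_error(x)
# Level 0 underestimates the error because samples are correlated; the
# estimate grows with block size and plateaus at the effective variance.
print(errs[0], errs[-1])
```

    The plateau value plays the role of the effective variance; the paper's contribution is a robust way to determine it and to test the equilibrium hypothesis.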

  18. Confidence and efficiency scaling in Variational Quantum Monte Carlo calculations

    CERN Document Server

    Delyon, François; Holzmann, Markus

    2016-01-01

    Based on the central limit theorem, we discuss the problem of evaluation of the statistical error of Monte Carlo calculations using a time discretized diffusion process. We present a robust and practical method to determine the effective variance of general observables and show how to verify the equilibrium hypothesis by the Kolmogorov-Smirnov test. We then derive scaling laws of the efficiency illustrated by Variational Monte Carlo calculations on the two dimensional electron gas.

  19. Calibration of the Top-Quark Monte-Carlo Mass

    CERN Document Server

    Kieseler, Jan; Moch, Sven-Olaf

    2015-01-01

    We present a method to establish experimentally the relation between the top-quark mass $m_t^{MC}$ as implemented in Monte-Carlo generators and the Lagrangian mass parameter $m_t$ in a theoretically well-defined renormalization scheme. We propose a simultaneous fit of $m_t^{MC}$ and an observable sensitive to $m_t$, which does not rely on any prior assumptions about the relation between $m_t$ and $m_t^{MC}$. The measured observable is independent of $m_t^{MC}$ and can be used subsequently for a determination of $m_t$. The analysis strategy is illustrated with examples for the extraction of $m_t$ from inclusive and differential cross sections for hadro-production of top-quarks.

  20. Efficiency of Monte Carlo sampling in chaotic systems.

    Science.gov (United States)

    Leitão, Jorge C; Lopes, J M Viana Parente; Altmann, Eduardo G

    2014-11-01

    In this paper we investigate how the complexity of chaotic phase spaces affects the efficiency of importance sampling Monte Carlo simulations. We focus on flat-histogram simulations of the distribution of finite-time Lyapunov exponents in a simple chaotic system and obtain analytically that the computational effort (i) scales polynomially with the finite time, a tremendous improvement over the exponential scaling obtained in uniform sampling simulations, and (ii) shows a suboptimal polynomial scaling, a phenomenon known as critical slowing down. We show that critical slowing down appears because of the limited possibilities to issue a local proposal in the Monte Carlo procedure when it is applied to chaotic systems. These results show how generic properties of chaotic systems limit the efficiency of Monte Carlo simulations.

  1. Application of the Monte Carlo method to the analysis of measurement geometries for the calibration of a HP Ge detector in an environmental radioactivity laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Rodenas, Jose [Departamento de Ingenieria Quimica y Nuclear, Universidad Politecnica de Valencia, Apartado 22012, E-46071 Valencia (Spain)], E-mail: jrodenas@iqn.upv.es; Gallardo, Sergio; Ballester, Silvia; Primault, Virginie [Departamento de Ingenieria Quimica y Nuclear, Universidad Politecnica de Valencia, Apartado 22012, E-46071 Valencia (Spain); Ortiz, Josefina [Laboratorio de Radiactividad Ambiental, Universidad Politecnica de Valencia, Apartado 22012, E-46071 Valencia (Spain)

    2007-10-15

    A gamma spectrometer including an HP Ge detector is commonly used for environmental radioactivity measurements. The efficiency of the detector should be calibrated for each geometry considered. Simulation of the calibration procedure with a validated computer program is an important auxiliary tool for environmental radioactivity laboratories. The MCNP code based on the Monte Carlo method has been applied to simulate the detection process in order to obtain spectrum peaks and determine the efficiency curve for each modelled geometry. The source used for measurements was a calibration mixed radionuclide gamma reference solution, covering a wide energy range (50-2000 keV). Two measurement geometries - Marinelli beaker and Petri boxes - as well as different materials - water, charcoal, sand - containing the source have been considered. Results obtained from the Monte Carlo model have been compared with experimental measurements in the laboratory in order to validate the model.

  2. An efficiency calibration method without a radioactive source using BOMAB phantom and Monte Carlo simulation for Inspector 2000 gamma spectroscopy system%Study of the Monte Carlo method for efficiency calibration in gamma-ray spectrometry

    Institute of Scientific and Technical Information of China (English)

    张富利; 曲德成; 杨国山; 郑明民

    2009-01-01

    Objective: To establish an efficiency calibration method that requires no radioactive source for the Inspector 2000 gamma spectroscopy system. Methods: First, the geometry parameters of the detectors were specified by adjusting the HPGe and NaI crystal dimensions until the full-energy peak efficiency (FEPE) from Monte Carlo (MC) calculations matched the average values measured with a 137Cs point source; the differences between calculations and measurements were generally within ±10%. Then, based on a mathematical model of the BOMAB phantom and the counting geometry used for in vivo measurements during nuclear emergencies, the Monte Carlo method was used to calculate the counting efficiency of the HPGe and NaI(Tl) detectors for gamma-ray energies from 126 to 1836 keV, and efficiency curves and functions were fitted to the results. Results: The counting efficiencies obtained from the fitted efficiency functions agreed well with those obtained from the MC simulation. The residuals ranged from -19% to 18% for the NaI detector and from -11% to 17% for the HPGe detector, which is acceptable for on-site deployment during nuclear and radiological emergency events. Conclusions: Source-free calibration of a portable gamma-ray spectrometer with the Monte Carlo method and a mathematical BOMAB phantom is time- and labor-saving, practical, and a very convenient way to calibrate the spectrometer.

  3. Calibration coefficient of reference brachytherapy ionization chamber using analytical and Monte Carlo methods.

    Science.gov (United States)

    Kumar, Sudhir; Srinivasan, P; Sharma, S D

    2010-06-01

    A cylindrical graphite ionization chamber of sensitive volume 1002.4 cm(3) was designed and fabricated at Bhabha Atomic Research Centre (BARC) for use as a reference dosimeter to measure the strength of high dose rate (HDR) (192)Ir brachytherapy sources. The air kerma calibration coefficient (N(K)) of this ionization chamber was estimated analytically using Burlin general cavity theory and by the Monte Carlo method. In the analytical method, calibration coefficients were calculated for each spectral line of an HDR (192)Ir source and the weighted mean was taken as N(K). In the Monte Carlo method, the geometry of the measurement setup and the physics-related input data of the HDR (192)Ir source and the surrounding material were simulated using the Monte Carlo N-particle code. The total photon energy fluence was used to arrive at the reference air kerma rate (RAKR) using mass energy absorption coefficients. The energy deposition rates were used to simulate the value of charge rate in the ionization chamber and N(K) was determined. The Monte Carlo calculated N(K) agreed within 1.77% with that obtained using the analytical method. The experimentally determined RAKR of HDR (192)Ir sources, using this reference ionization chamber with the analytically estimated N(K), was found to be in agreement with the vendor quoted RAKR within 1.43%.
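
    The weighted-mean step of the analytical method can be sketched as follows. The (192)Ir line energies and emission probabilities are approximate literature values, the per-line N(K) values are hypothetical, and weighting by emission probability times energy is a simplification of the full air-kerma weighting, which also folds in mass energy-absorption coefficients:

```python
import numpy as np

# Principal (192)Ir gamma lines: energy (keV) and approximate emission
# probabilities; the per-line calibration coefficients nk_line are purely
# hypothetical stand-ins (arbitrary units).
energy = np.array([296.0, 308.5, 316.5, 468.1, 604.4])
emission_prob = np.array([0.287, 0.297, 0.829, 0.478, 0.082])
nk_line = np.array([1.02, 1.02, 1.01, 0.99, 0.98])

# Weight each line by emission probability times energy as a proxy for its
# contribution to the air kerma, then take the weighted mean as N(K).
weights = emission_prob * energy
nk = np.sum(weights * nk_line) / np.sum(weights)
print(nk)
```

    The result necessarily lies between the smallest and largest per-line coefficients, pulled toward the lines that contribute most to the air kerma.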

  4. Whole body counter calibration using Monte Carlo modeling with an array of phantom sizes based on national anthropometric reference data

    Science.gov (United States)

    Shypailo, R. J.; Ellis, K. J.

    2011-05-01

    During construction of the whole body counter (WBC) at the Children's Nutrition Research Center (CNRC), efficiency calibration was needed to translate acquired counts of 40K to actual grams of potassium for measurement of total body potassium (TBK) in a diverse subject population. The MCNP Monte Carlo n-particle simulation program was used to describe the WBC (54 detectors plus shielding), test individual detector counting response, and create a series of virtual anthropomorphic phantoms based on national reference anthropometric data. Each phantom included an outer layer of adipose tissue and an inner core of lean tissue. Phantoms were designed for both genders representing ages 3.5 to 18.5 years with body sizes from the 5th to the 95th percentile based on body weight. In addition, a spherical surface source surrounding the WBC was modeled in order to measure the effects of subject mass on room background interference. Individual detector measurements showed good agreement with the MCNP model. The background source model came close to agreement with empirical measurements, but showed a trend deviating from unity with increasing subject size. Results from the MCNP simulation of the CNRC WBC agreed well with empirical measurements using BOMAB phantoms. Individual detector efficiency corrections were used to improve the accuracy of the model. Nonlinear multiple regression efficiency calibration equations were derived for each gender. Room background correction is critical in improving the accuracy of the WBC calibration.

  5. Whole body counter calibration using Monte Carlo modeling with an array of phantom sizes based on national anthropometric reference data.

    Science.gov (United States)

    Shypailo, R J; Ellis, K J

    2011-05-21

    During construction of the whole body counter (WBC) at the Children's Nutrition Research Center (CNRC), efficiency calibration was needed to translate acquired counts of (40)K to actual grams of potassium for measurement of total body potassium (TBK) in a diverse subject population. The MCNP Monte Carlo n-particle simulation program was used to describe the WBC (54 detectors plus shielding), test individual detector counting response, and create a series of virtual anthropomorphic phantoms based on national reference anthropometric data. Each phantom included an outer layer of adipose tissue and an inner core of lean tissue. Phantoms were designed for both genders representing ages 3.5 to 18.5 years with body sizes from the 5th to the 95th percentile based on body weight. In addition, a spherical surface source surrounding the WBC was modeled in order to measure the effects of subject mass on room background interference. Individual detector measurements showed good agreement with the MCNP model. The background source model came close to agreement with empirical measurements, but showed a trend deviating from unity with increasing subject size. Results from the MCNP simulation of the CNRC WBC agreed well with empirical measurements using BOMAB phantoms. Individual detector efficiency corrections were used to improve the accuracy of the model. Nonlinear multiple regression efficiency calibration equations were derived for each gender. Room background correction is critical in improving the accuracy of the WBC calibration.

  6. Whole body counter calibration using Monte Carlo modeling with an array of phantom sizes based on national anthropometric reference data

    Energy Technology Data Exchange (ETDEWEB)

    Shypailo, R J; Ellis, K J, E-mail: shypailo@bcm.edu [USDA/ARS Children' s Nutrition Research Center, Baylor College of Medicine, 1100 Bates Street, Houston, TX 77030 (United States)

    2011-05-21

    During construction of the whole body counter (WBC) at the Children's Nutrition Research Center (CNRC), efficiency calibration was needed to translate acquired counts of {sup 40}K to actual grams of potassium for measurement of total body potassium (TBK) in a diverse subject population. The MCNP Monte Carlo n-particle simulation program was used to describe the WBC (54 detectors plus shielding), test individual detector counting response, and create a series of virtual anthropomorphic phantoms based on national reference anthropometric data. Each phantom included an outer layer of adipose tissue and an inner core of lean tissue. Phantoms were designed for both genders representing ages 3.5 to 18.5 years with body sizes from the 5th to the 95th percentile based on body weight. In addition, a spherical surface source surrounding the WBC was modeled in order to measure the effects of subject mass on room background interference. Individual detector measurements showed good agreement with the MCNP model. The background source model came close to agreement with empirical measurements, but showed a trend deviating from unity with increasing subject size. Results from the MCNP simulation of the CNRC WBC agreed well with empirical measurements using BOMAB phantoms. Individual detector efficiency corrections were used to improve the accuracy of the model. Nonlinear multiple regression efficiency calibration equations were derived for each gender. Room background correction is critical in improving the accuracy of the WBC calibration.

  7. Physical phantoms for counting efficiency calibration at in vivo measurements; Physikalische Kalibrierphantome in der In-vivo-Messtechnik

    Energy Technology Data Exchange (ETDEWEB)

    Hegenbart, Lars [Karlsruher Institut fuer Technologie, Eggenstein-Leopoldshafen (Germany). Abt. Strahlenschutzforschung; Schwabenland, Florian [Kerntechnische Hilfsdienst GmbH (KHG), Eggenstein-Leopoldshafen (Germany). Gruppe Strahlenschutz

    2011-07-01

    The In Vivo Measurement Laboratory at the Karlsruhe Institute of Technology (KIT) has several physical phantoms for counting efficiency calibration of whole- and partial-body counters. A head phantom from the 1980s containing {sup 241}Am is available for determining the counting efficiency of skull measurements. A virtual model of this head phantom was created with the aim of replacing conventional efficiency calibration methods with Monte Carlo simulations. The counting efficiencies obtained from simulations were compared with real in vivo measurements. Absolute counting efficiency values obtained from the best simulations deviate by 4.1% to 16.0% from the measured values. (orig.)

  8. Using standard calibrated geometries to characterize a coaxial high purity germanium gamma detector for Monte Carlo simulations

    NARCIS (Netherlands)

    van der Graaf, E. R.; Dendooven, P.; Brandenburg, S.

    2014-01-01

    A detector model optimization procedure, based on matching Monte Carlo simulations with measurements for two experimentally calibrated sample geometries frequently used in radioactivity measurement laboratories, results in relative agreement within 5% between simulated and measured efficiencies.

  9. Application of the Monte Carlo efficiency transfer method to an HPGe detector with the purpose of environmental samples measurement.

    Science.gov (United States)

    Morera-Gómez, Yasser; Cartas-Aguila, Héctor A; Alonso-Hernández, Carlos M; Bernal-Castillo, Jose L; Guillén-Arruebarrena, Aniel

    2015-03-01

    The Monte Carlo efficiency transfer method was used to determine the full-energy peak efficiency of a coaxial n-type HPGe detector. The efficiency calibration curves for three Certified Reference Materials were determined by efficiency transfer using a (152)Eu reference source. The efficiency values obtained after the transfer were used to calculate the activity concentrations of the radionuclides detected in the three materials, which were measured in a low-background gamma spectrometry system. Reported and calculated activity concentrations show good agreement, with mean deviations of 5%, which is satisfactory for environmental sample measurement.
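The transfer step described above scales a measured reference-geometry efficiency by the simulated ratio between sample and reference geometries. A minimal sketch, with entirely hypothetical efficiency values and the standard activity relation A = N / (t · ε · I_γ):

```python
# Hypothetical full-energy-peak efficiencies at three 152Eu energies (keV);
# none of these numbers come from the paper.
eff_ref_measured = {121.8: 0.052, 344.3: 0.024, 1408.0: 0.0078}  # measured, reference geometry
eff_ref_mc       = {121.8: 0.050, 344.3: 0.025, 1408.0: 0.0080}  # simulated, reference geometry
eff_sample_mc    = {121.8: 0.031, 344.3: 0.016, 1408.0: 0.0052}  # simulated, sample geometry

def transfer_efficiency(measured, mc_ref, mc_sample):
    """Efficiency transfer: eff_sample(E) = eff_measured(E) * mc_sample(E) / mc_ref(E)."""
    return {e: measured[e] * mc_sample[e] / mc_ref[e] for e in measured}

def activity_bq(net_counts, live_time_s, efficiency, gamma_intensity):
    """Activity from a net peak area: A = N / (t * eps * I_gamma)."""
    return net_counts / (live_time_s * efficiency * gamma_intensity)

eff_sample = transfer_efficiency(eff_ref_measured, eff_ref_mc, eff_sample_mc)
```

The ratio form cancels much of the simulation's systematic error, which is why the transfer can outperform a purely computed efficiency.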

  10. Monte Carlo Studies for the Calibration System of the GERDA Experiment

    CERN Document Server

    Baudis, Laura; Froborg, Francis; Tarka, Michal

    2013-01-01

    The GERmanium Detector Array, GERDA, searches for neutrinoless double beta decay in Ge-76 using bare high-purity germanium detectors submerged in liquid argon. For the calibration of these detectors gamma emitting sources have to be lowered from their parking position on top of the cryostat over more than five meters down to the germanium crystals. With the help of Monte Carlo simulations, the relevant parameters of the calibration system were determined. It was found that three Th-228 sources with an activity of 20 kBq each at two different vertical positions will be necessary to reach sufficient statistics in all detectors in less than four hours of calibration time. These sources will contribute to the background of the experiment with a total of (1.07 +/- 0.04(stat) +0.13 -0.19(sys)) 10^{-4} cts/(keV kg yr) when shielded from below with 6 cm of tantalum in the parking position.

  11. Efficient Word Alignment with Markov Chain Monte Carlo

    Directory of Open Access Journals (Sweden)

    Östling Robert

    2016-10-01

    We present EFMARAL, a new system for efficient and accurate word alignment using a Bayesian model with Markov Chain Monte Carlo (MCMC) inference. Through careful selection of data structures and model architecture we are able to surpass the fast_align system, commonly used for performance-critical word alignment, both in computational efficiency and alignment accuracy. Our evaluation shows that a phrase-based statistical machine translation (SMT) system produces translations of higher quality when using word alignments from EFMARAL than from fast_align, and that translation quality is on par with what is obtained using GIZA++, a tool requiring orders of magnitude more processing time. More generally, we hope to convince the reader that Monte Carlo sampling, rather than being viewed as a slow method of last resort, should be the method of choice for the SMT practitioner and others interested in word alignment.

  12. Calibration of the identiFINDER detector for the iodine measurement in thyroid using the Monte Carlo method; Calibracion del detector identiFINDER para la medicion de yodo en tiroides utilizando el metodo Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Ramos M, D.; Yera S, Y.; Lopez B, G. M.; Acosta R, N.; Vergara G, A., E-mail: dayana@cphr.edu.cu [Centro de Proteccion e Higiene de las Radiaciones, Calle 20 No. 4113 e/ 41 y 47, Playa, 10600 La Habana (Cuba)

    2014-08-15

    This work determines the detection efficiency of the identiFINDER detector for {sup 125}I and {sup 131}I in the thyroid using the Monte Carlo method. The suitability of the calibration method was analyzed by comparing the results of the direct Monte Carlo method with the corrected one; the latter was chosen because its differences from the real efficiency stayed below 10%. To simulate the detector, its geometric parameters were optimized using a tomographic study, which minimized the uncertainties of the estimates. Simulations of the detector geometry with a point source were then performed to find the correction factors at 5 cm, 15 cm and 25 cm, together with those corresponding to the detector-phantom arrangement, for method validation and the final calculation of the efficiency. These showed that, when implementing the Monte Carlo method, simulating at a greater distance than that used in the laboratory measurements overestimates the efficiency, while simulating at a shorter distance underestimates it; the simulation must therefore be performed at the same distance at which the real measurement will be made. Efficiency curves and the minimum detectable activity for the measurement of {sup 131}I and {sup 125}I were also obtained. Overall, the Monte Carlo methodology was implemented for the identiFINDER calibration with the purpose of estimating the measured activity of iodine in the thyroid. This method is an ideal way to compensate for the lack of standard solutions and phantoms, ensuring that the capacities of the Internal Contamination Laboratory of the Centro de Proteccion e Higiene de las Radiaciones remain calibrated for iodine measurement in the thyroid. (author)
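The minimum detectable activity mentioned above is conventionally computed from Currie's detection limit, L_D ≈ 2.71 + 4.65·√B counts at roughly 95% confidence, divided by live time, efficiency, and emission probability. A small sketch; the numerical inputs are hypothetical, not values from the paper:

```python
import math

def mda_bq(background_counts, live_time_s, efficiency, gamma_intensity):
    """Currie minimum detectable activity (~95% confidence):
    L_D = 2.71 + 4.65 * sqrt(B) counts, converted to Bq."""
    l_d = 2.71 + 4.65 * math.sqrt(background_counts)
    return l_d / (live_time_s * efficiency * gamma_intensity)

# Hypothetical inputs: 100 background counts under the peak region,
# 1000 s live time, 1% efficiency, 80% emission probability.
print(round(mda_bq(100, 1000.0, 0.01, 0.8), 3))  # prints 6.151
```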

  13. Coincidence corrected efficiency calibration of Compton-suppressed HPGe detectors

    Energy Technology Data Exchange (ETDEWEB)

    Aucott, Timothy [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Brand, Alexander [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); DiPrete, David [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-04-20

    The authors present a reliable method to calibrate the full-energy efficiency and the coincidence correction factors using a commonly-available mixed source gamma standard. This is accomplished by measuring the peak areas from both summing and non-summing decay schemes and simultaneously fitting both the full-energy efficiency, as well as the total efficiency, as functions of energy. By using known decay schemes, these functions can then be used to provide correction factors for other nuclides not included in the calibration standard.

  14. Analysis of the effect of true coincidence summing on efficiency calibration for an HPGe detector

    Energy Technology Data Exchange (ETDEWEB)

    Rodenas, J.; Gallardo, S.; Ballester, S.; Primault, V. [Valencia Univ. Politecnica, Dept. de Ingenieria Quimica y Nuclear (Spain); Ortiz, J. [Valencia Univ. Politecnica, Lab. de Radiactividad Ambiental (Spain)

    2006-07-01

    The high-purity germanium (HPGe) detector is commonly used for gamma spectrometry in environmental radioactivity laboratories. The efficiency of the detector must be calibrated for each geometry considered. This calibration is performed using a standard solution containing gamma-emitting sources, with the usual goal of obtaining an efficiency curve for determining the activity of samples with the same geometry. The importance of detector calibration is evident. However, the procedure presents some problems, as it depends on the source geometry (shape, volume, distance to detector, etc.) and must be repeated when these factors change. That means an increasing use of standard solutions and, consequently, an increasing generation of radioactive waste. Simulation of the calibration procedure with a validated computer program is therefore an important auxiliary tool for environmental radioactivity laboratories, useful both for optimizing calibration procedures and for reducing the amount of radioactive waste produced. The MCNP code, based on the Monte Carlo method, has been used in this work to simulate the detector calibration. A model was developed for the detector as well as for the source contained in a Petri box. The source is a standard solution that contains the following radionuclides: {sup 241}Am, {sup 109}Cd, {sup 57}Co, {sup 139}Ce, {sup 203}Hg, {sup 113}Sn, {sup 85}Sr, {sup 137}Cs, {sup 88}Y and {sup 60}Co, covering a wide energy range (50 to 2000 keV). However, two radionuclides in the solution ({sup 60}Co and {sup 88}Y) emit gamma rays in true coincidence. True coincidence summing distorts the calibration curve at higher energies. To decrease this effect, some measurements were performed at increasing distances between the source and the detector, as the true coincidence effect is observed in experimental measurements but not in the Monte Carlo

  15. Investigating Transmission Efficiency of Light Guide by Monte Carlo Simulation

    Institute of Scientific and Technical Information of China (English)

    Li Chen; Xiao Guoqing; Guo Zhongyan; Zhan Wenlong; Sun Zhiyu; Wang Meng; Chen Zhiqiang; Mao Ruishi; Bai Jie; Hu Zhengguo; Chen Lixin

    2003-01-01

    A large-area neutron detector to measure the energy of about 1 GeV neutrons by the time-of-flight method will be installed at RIBLL II of CSR. To obtain good energy resolution, the time resolution of the detector is a crucial parameter. For this purpose, the transmission efficiency of the light guide that transports photons from the detector unit to the light-sensitive detector has been investigated by Monte Carlo simulation. Here, the simulations were done mainly with two types of light guides, namely type A and type B, as shown in Figs. 1 and 2 respectively.

  16. Searching for efficient Markov chain Monte Carlo proposal kernels.

    Science.gov (United States)

    Yang, Ziheng; Rodríguez, Carlos E

    2013-11-26

    Markov chain Monte Carlo (MCMC) or the Metropolis-Hastings algorithm is a simulation algorithm that has made modern Bayesian statistical inference possible. Nevertheless, the efficiency of different Metropolis-Hastings proposal kernels has rarely been studied except for the Gaussian proposal. Here we propose a unique class of Bactrian kernels, which avoid proposing values that are very close to the current value, and compare their efficiency with a number of proposals for simulating different target distributions, with efficiency measured by the asymptotic variance of a parameter estimate. The uniform kernel is found to be more efficient than the Gaussian kernel, whereas the Bactrian kernel is even better. When optimal scales are used for both, the Bactrian kernel is at least 50% more efficient than the Gaussian. Implementation in a Bayesian program for molecular clock dating confirms the general applicability of our results to generic MCMC algorithms. Our results refute a previous claim that all proposals had nearly identical performance and will prompt further research into efficient MCMC proposals.
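The Bactrian kernel compared above is a two-humped mixture of normals that avoids proposing values very close to the current point. A minimal Metropolis-Hastings sketch with such a proposal (means ±m, variance 1 − m², following the construction described in the abstract; the target, scale, and chain length below are illustrative choices, not the paper's experiments):

```python
import math
import random

def bactrian_step(rng, x, scale, m=0.95):
    """Symmetric Bactrian proposal: mixture of two normals centered at
    +/- m with variance 1 - m^2, scaled by `scale`."""
    sign = 1.0 if rng.random() < 0.5 else -1.0
    return x + scale * (sign * m + math.sqrt(1.0 - m * m) * rng.gauss(0.0, 1.0))

def mh_sample(logpdf, x0, n, scale, seed=0):
    """Metropolis-Hastings chain; the Bactrian proposal is symmetric, so the
    acceptance ratio reduces to the target density ratio."""
    rng = random.Random(seed)
    x, samples, accepted = x0, [], 0
    for _ in range(n):
        y = bactrian_step(rng, x, scale)
        if math.log(rng.random() + 1e-300) < logpdf(y) - logpdf(x):
            x, accepted = y, accepted + 1
        samples.append(x)
    return samples, accepted / n

# Target: standard normal (log density up to an additive constant).
samples, acc = mh_sample(lambda t: -0.5 * t * t, 0.0, 50_000, 2.3)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(f"mean={mean:.3f} var={var:.3f} acceptance={acc:.2f}")
```

Swapping `bactrian_step` for a plain Gaussian step and comparing the asymptotic variance of the estimates is how the kernels' efficiencies are compared in practice.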

  17. An Efficient Approach to Ab Initio Monte Carlo Simulation

    CERN Document Server

    Leiding, Jeff

    2013-01-01

    We present a Nested Markov Chain Monte Carlo (NMC) scheme for building equilibrium averages based on accurate potentials such as density functional theory. Metropolis sampling of a reference system, defined by an inexpensive but approximate potential, is used to substantially decorrelate configurations at which the potential of interest is evaluated, thereby dramatically reducing the number needed to build ensemble averages at a given level of precision. The efficiency of this procedure is maximized on-the-fly through variation of the reference system thermodynamic state (characterized here by its inverse temperature \\beta^0), which is otherwise unconstrained. Local density approximation (LDA) results are presented for shocked states in argon at pressures from 4 to 60 GPa. Depending on the quality of the reference potential, the acceptance probability is enhanced by factors of 1.2-28 relative to unoptimized NMC sampling, and the procedure's efficiency is found to be competitive with that of standard ab initio...

  18. On the efficiency calibration of a drum waste assay system

    CERN Document Server

    Dinescu, L; Cazan, I L; Macrin, R; Caragheorgheopol, G; Rotarescu, G

    2002-01-01

    The efficiency calibration of a gamma spectroscopy waste assay system, constructed by IFIN-HH, was performed. The calibration technique was based on the assumption of a uniform distribution of the source activity in the drum and also a uniform sample matrix. A collimated detector (HPGe, 20% relative efficiency) placed at 30 cm from the drum was used. The detection limit for 137Cs and 60Co is approximately 45 Bq/kg for a sample of about 400 kg and a counting time of 10 min. A total measurement uncertainty of -70% to +40% was estimated.

  19. Improved photon counting efficiency calibration using superconducting single photon detectors

    Science.gov (United States)

    Gan, Haiyong; Xu, Nan; Li, Jianwei; Sun, Ruoduan; Feng, Guojin; Wang, Yanfei; Ma, Chong; Lin, Yandong; Zhang, Labao; Kang, Lin; Chen, Jian; Wu, Peiheng

    2015-10-01

    The quantum efficiency of photon counters can be measured with standard uncertainty below 1% level using correlated photon pairs generated through spontaneous parametric down-conversion process. Normally a laser in UV, blue or green wavelength range with sufficient photon energy is applied to produce energy and momentum conserved photon pairs in two channels with desired wavelengths for calibration. One channel is used as the heralding trigger, and the other is used for the calibration of the detector under test. A superconducting nanowire single photon detector with advantages such as high photon counting speed (optical spectroscopy, super resolution microscopy, deep space observation, and so on.

  20. Efficiency in nonequilibrium molecular dynamics Monte Carlo simulations

    Science.gov (United States)

    Radak, Brian K.; Roux, Benoît

    2016-10-01

    Hybrid algorithms combining nonequilibrium molecular dynamics and Monte Carlo (neMD/MC) offer a powerful avenue for improving the sampling efficiency of computer simulations of complex systems. These neMD/MC algorithms are also increasingly finding use in applications where conventional approaches are impractical, such as constant-pH simulations with explicit solvent. However, selecting an optimal nonequilibrium protocol for maximum efficiency often represents a non-trivial challenge. This work evaluates the efficiency of a broad class of neMD/MC algorithms and protocols within the theoretical framework of linear response theory. The approximations are validated against constant pH-MD simulations and shown to provide accurate predictions of neMD/MC performance. An assessment of a large set of protocols confirms (both theoretically and empirically) that a linear work protocol gives the best neMD/MC performance. Finally, a well-defined criterion for optimizing the time parameters of the protocol is proposed and demonstrated with an adaptive algorithm that improves the performance on-the-fly with minimal cost.

  1. Efficiencies of dynamic Monte Carlo algorithms for off-lattice particle systems with a single impurity

    KAUST Repository

    Novotny, M.A.

    2010-02-01

    The efficiency of dynamic Monte Carlo algorithms for off-lattice systems composed of particles is studied for the case of a single impurity particle. The theoretical efficiencies of the rejection-free method and of the Monte Carlo with Absorbing Markov Chains method are given. Simulation results are presented to confirm the theoretical efficiencies. © 2010.
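The rejection-free method referenced above replaces repeated rejected trials with a weighted event choice plus a stochastic waiting time. A minimal n-fold-way-style sketch, with hypothetical event rates (the paper's off-lattice particle setting is far richer than this):

```python
import math
import random

def rejection_free_step(rates, rng):
    """One rejection-free (n-fold way) step: choose an event with probability
    proportional to its rate and advance the clock by an exponential waiting
    time governed by the total rate."""
    total = sum(rates)
    u = rng.random() * total
    cumulative = 0.0
    event = len(rates) - 1
    for i, r in enumerate(rates):
        cumulative += r
        if u < cumulative:
            event = i
            break
    dt = -math.log(1.0 - rng.random()) / total
    return event, dt

rng = random.Random(42)
rates = [0.1, 0.7, 0.2]          # hypothetical per-event rates (sum = 1.0)
counts, t = [0, 0, 0], 0.0
for _ in range(10_000):
    event, dt = rejection_free_step(rates, rng)
    counts[event] += 1
    t += dt
print(counts, round(t, 1))
```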

  2. Monte Carlo based geometrical model for efficiency calculation of an n-type HPGe detector.

    Science.gov (United States)

    Cabal, Fatima Padilla; Lopez-Pino, Neivy; Bernal-Castillo, Jose Luis; Martinez-Palenzuela, Yisel; Aguilar-Mena, Jimmy; D'Alessandro, Katia; Arbelo, Yuniesky; Corrales, Yasser; Diaz, Oscar

    2010-12-01

    A procedure to optimize the geometrical model of an n-type detector is described. Sixteen lines from seven point sources ((241)Am, (133)Ba, (22)Na, (60)Co, (57)Co, (137)Cs and (152)Eu) placed at three different source-to-detector distances (10, 20 and 30 cm) were used to calibrate a low-background gamma spectrometer between 26 and 1408 keV. Direct Monte Carlo techniques using the MCNPX 2.6 and GEANT4 9.2 codes, and a semi-empirical procedure, were performed to obtain theoretical efficiency curves. Since discrepancies were found between experimental and calculated data using the manufacturer's detector parameters, a detailed study of the crystal dimensions and the geometrical configuration was carried out. After the parameters were optimized, the mean relative deviation from the experimental data decreased from 18% to 4%.

  3. Monte Carlo based geometrical model for efficiency calculation of an n-type HPGe detector

    Energy Technology Data Exchange (ETDEWEB)

    Padilla Cabal, Fatima, E-mail: fpadilla@instec.c [Instituto Superior de Tecnologias y Ciencias Aplicadas, 'Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba); Lopez-Pino, Neivy; Luis Bernal-Castillo, Jose; Martinez-Palenzuela, Yisel; Aguilar-Mena, Jimmy; D'Alessandro, Katia; Arbelo, Yuniesky; Corrales, Yasser; Diaz, Oscar [Instituto Superior de Tecnologias y Ciencias Aplicadas, 'Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba)

    2010-12-15

    A procedure to optimize the geometrical model of an n-type detector is described. Sixteen lines from seven point sources ({sup 241}Am, {sup 133}Ba, {sup 22}Na, {sup 60}Co, {sup 57}Co, {sup 137}Cs and {sup 152}Eu) placed at three different source-to-detector distances (10, 20 and 30 cm) were used to calibrate a low-background gamma spectrometer between 26 and 1408 keV. Direct Monte Carlo techniques using the MCNPX 2.6 and GEANT4 9.2 codes, and a semi-empirical procedure, were performed to obtain theoretical efficiency curves. Since discrepancies were found between experimental and calculated data using the manufacturer's detector parameters, a detailed study of the crystal dimensions and the geometrical configuration was carried out. After the parameters were optimized, the mean relative deviation from the experimental data decreased from 18% to 4%.

  4. A Generic Algorithm for IACT Optical Efficiency Calibration using Muons

    CERN Document Server

    Mitchell, A M W; Parsons, R D

    2015-01-01

    Muons produced in Extensive Air Showers (EAS) generate ring-like images in Imaging Atmospheric Cherenkov Telescopes when travelling near-parallel to the optical axis. From geometrical parameters of these images, the absolute amount of light emitted may be calculated analytically. Comparing the amount of light recorded in these images to expectation is a well-established technique for telescope optical efficiency calibration. However, this calculation is usually performed under the assumption of an approximately circular telescope mirror. The H.E.S.S. experiment entered its second phase in 2012, with the addition of a fifth telescope with a non-circular 600 m$^2$ mirror. Due to the differing mirror shape of this telescope from the original four H.E.S.S. telescopes, adaptations to the standard muon calibration were required. We present a generalised muon calibration procedure, adaptable to telescopes of differing shapes and sizes, and demonstrate its performance on the H.E.S.S. II array.

  5. Monte-Carlo investigation of radiation beam quality of the CRNA neutron irradiator for calibration purposes

    Energy Technology Data Exchange (ETDEWEB)

    Mazrou, Hakim, E-mail: mazrou_h@crna.d [Centre de Recherche Nucleaire d' Alger (CRNA), 02 Boulevard Frantz, Fanon, B.P. 399, Alger-RP 16000 (Algeria); Sidahmed, Tassadit [Centre de Recherche Nucleaire d' Alger (CRNA), 02 Boulevard Frantz, Fanon, B.P. 399, Alger-RP 16000 (Algeria); Allab, Malika [Faculte de Physique, Universite des Sciences et de la Technologie de Houari-Boumediene (USTHB), 16111, Alger (Algeria)

    2010-10-15

    An irradiation system has been acquired by the Nuclear Research Center of Algiers (CRNA) to provide neutron references for metrology and dosimetry purposes. It consists of an {sup 241}Am-Be radionuclide source of 185 GBq (5 Ci) activity inside a cylindrical steel-enveloped polyethylene container with a radially positioned beam channel. Because the container is filled with hydrogenous material, which is not recommended by ISO standards, large changes are expected in the physical quantities of primary importance of the source compared to a free-field situation. Thus, the main goal of the present work is to fully characterize the neutron field of this special delivered set-up. This was conducted by both extensive Monte Carlo calculations and experimental measurements using BF{sub 3} and {sup 3}He based neutron area dosimeters. The effect of each component present in the bunker facility of the Algerian Secondary Standard Dosimetry Laboratory (SSDL) on the neutron energy spectrum was investigated by simulating four irradiation configurations, and a comparison with the ISO spectrum was performed. The ambient dose equivalent rate was determined from a correct estimate of the mean fluence-to-ambient-dose-equivalent conversion factors at different irradiation positions by means of the 3-D transport code MCNP5. Finally, according to the practical requirements established for calibration purposes, an optimal irradiation position has been suggested to the SSDL staff to perform their routine calibrations in an appropriate manner.

  6. An efficient approach to ab initio Monte Carlo simulation.

    Science.gov (United States)

    Leiding, Jeff; Coe, Joshua D

    2014-01-21

    We present a Nested Markov chain Monte Carlo (NMC) scheme for building equilibrium averages based on accurate potentials such as density functional theory. Metropolis sampling of a reference system, defined by an inexpensive but approximate potential, was used to substantially decorrelate configurations at which the potential of interest was evaluated, thereby dramatically reducing the number needed to build ensemble averages at a given level of precision. The efficiency of this procedure was maximized on the fly through variation of the reference system thermodynamic state (characterized here by its inverse temperature β(0)), which was otherwise unconstrained. Local density approximation results are presented for shocked states of argon at pressures from 4 to 60 GPa, where, depending on the quality of the reference system potential, acceptance probabilities were enhanced by factors of 1.2-28 relative to unoptimized NMC. The optimization procedure compensated strongly for reference potential shortcomings, as evidenced by significantly higher speedups when using a reference potential of lower quality. The efficiency of optimized NMC is shown to be competitive with that of standard ab initio molecular dynamics in the canonical ensemble.
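The nested scheme above decorrelates configurations with a cheap reference potential before a single expensive accept/reject. A toy one-dimensional sketch, assuming the standard nested-chain acceptance log A = −β·ΔU_full + β⁰·ΔU_ref; both potentials here are simple stand-ins, not DFT:

```python
import math
import random

def nested_mc(u_ref, u_full, beta, beta0, x0, n_outer, n_inner, step, seed=7):
    """Nested MC sketch: an inner Metropolis sub-chain on the cheap reference
    potential decorrelates the state; one outer accept/reject with
    log A = -beta*(U_full(y)-U_full(x)) + beta0*(U_ref(y)-U_ref(x))
    corrects the samples to the full-potential ensemble."""
    rng = random.Random(seed)
    x, chain, accepted = x0, [], 0
    for _ in range(n_outer):
        y = x
        for _ in range(n_inner):                      # cheap inner sub-chain
            z = y + rng.uniform(-step, step)
            if math.log(rng.random() + 1e-300) < -beta0 * (u_ref(z) - u_ref(y)):
                y = z
        log_a = -beta * (u_full(y) - u_full(x)) + beta0 * (u_ref(y) - u_ref(x))
        if math.log(rng.random() + 1e-300) < log_a:   # expensive outer step
            x, accepted = y, accepted + 1
        chain.append(x)
    return chain, accepted / n_outer

# Toy potentials standing in for "cheap" and "accurate": the full potential
# adds a small ripple to the harmonic reference.
u_ref = lambda x: 0.5 * x * x
u_full = lambda x: 0.5 * x * x + 0.1 * math.sin(5.0 * x)
chain, acc = nested_mc(u_ref, u_full, 1.0, 1.0, 0.0, 20_000, 10, 1.0)
```

The closer the reference tracks the full potential (and the better β⁰ is tuned), the higher the expensive-level acceptance and the fewer full-potential evaluations are wasted.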

  7. Energy Self-calibration and low-energy efficiency calibration for an underwater in-situ LaBr3:Ce spectrometer

    CERN Document Server

    Zeng, Zhi; Ma, Hao; He, Jianhua; Cang, Jirong; Zeng, Ming; Cheng, Jianping

    2016-01-01

    An underwater in situ gamma-ray spectrometer based on LaBr3:Ce was developed and optimized to monitor marine radioactivity. The intrinsic background of LaBr3:Ce, mainly from 138La and 227Ac, was well determined by low-background measurement and a pulse shape discrimination method. A method of self-calibration using three internal contaminant peaks was proposed to eliminate peak shift during long-term monitoring. Experiments under different temperatures proved the method helpful for maintaining long-term stability. To monitor marine radioactivity, the spectrometer's efficiency was calculated via a water tank experiment as well as Monte Carlo simulation.
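The self-calibration idea above anchors the energy scale to peaks of known energy from the crystal's internal contamination. A minimal least-squares sketch of that channel-to-energy fit; the channel positions and reference energies below are illustrative placeholders, not values from the paper:

```python
def linear_fit(xs, ys):
    """Ordinary least squares for y = a + b*x (stdlib only)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Placeholder channel positions and reference energies (keV) for three
# internal peaks observed in each acquired spectrum.
channels = [480.0, 790.0, 905.0]
energies = [1436.0, 2360.0, 2700.0]

a, b = linear_fit(channels, energies)

def calibrate(channel):
    """Channel -> energy (keV) using the fitted line."""
    return a + b * channel
```

Refitting `a` and `b` on every spectrum is what removes the temperature-driven gain drift over a long deployment.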

  8. Progress Towards Optimally Efficient Schemes for Monte Carlo Thermal Radiation Transport

    Energy Technology Data Exchange (ETDEWEB)

    Smedley-Stevenson, R P; Brooks III, E D

    2007-09-26

    In this summary we review the complementary research being undertaken at AWE and LLNL aimed at developing optimally efficient algorithms for Monte Carlo thermal radiation transport based on the difference formulation. We conclude by presenting preliminary results on the application of Newton-Krylov methods for solving the Symbolic Implicit Monte Carlo (SIMC) energy equation.

  9. Efficiency of Evolutionary Algorithms for Calibration of Watershed Models

    Science.gov (United States)

    Ahmadi, M.; Arabi, M.

    2009-12-01

    Since the promulgation of the Clean Water Act in the U.S. and other similar legislations around the world over the past three decades, watershed management programs have focused on the nexus of pollution prevention and mitigation. In this context, hydrologic/water quality models have been increasingly embedded in the decision making process. Simulation models are now commonly used to investigate the hydrologic response of watershed systems under varying climatic and land use conditions, and also to study the fate and transport of contaminants at various spatiotemporal scales. Adequate calibration and corroboration of models for various outputs at varying scales is an essential component of watershed modeling. The parameter estimation process can be challenging when multiple objectives are important. For example, improving streamflow predictions of the model at a stream location may degrade model predictions for sediments and/or nutrients at the same location or other outlets. This paper aims to evaluate the applicability and efficiency of single- and multi-objective evolutionary algorithms for parameter estimation of complex watershed models. To this end, the Shuffled Complex Evolution (SCE-UA) algorithm, a single-objective genetic algorithm (GA), and a multi-objective genetic algorithm (i.e., NSGA-II) were reconciled with the Soil and Water Assessment Tool (SWAT) to calibrate the model at various locations within the Wildcat Creek Watershed, Indiana. The efficiency of these methods was investigated using different error statistics, including root mean square error, coefficient of determination and Nash-Sutcliffe efficiency coefficient, for the output variables as well as the baseflow component of the stream discharge. A sensitivity analysis was carried out to screen model parameters that bear significant uncertainties.
    Results indicated that while flow processes can be reasonably ascertained, parameterization of nutrient and pesticide processes
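Skill metrics such as the Nash-Sutcliffe efficiency coefficient and root-mean-square error mentioned above are simple to compute directly; a minimal sketch with hypothetical streamflow values:

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / SS_tot. 1.0 is a perfect fit;
    values <= 0 mean the model is no better than the observed mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / ss_tot

def rmse(obs, sim):
    """Root mean square error between observed and simulated series."""
    return (sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs)) ** 0.5

# Hypothetical daily streamflows (m^3/s) for illustration only.
observed  = [1.0, 2.0, 3.0, 4.0]
simulated = [1.1, 1.9, 3.2, 3.8]
print(round(nash_sutcliffe(observed, simulated), 3), round(rmse(observed, simulated), 3))
```

In a multi-objective calibration, one such metric per output variable (flow, sediment, nutrients) forms the objective vector the evolutionary algorithm trades off.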

  10. Highly Efficient Monte-Carlo for Estimating the Unavailability of Markov Dynamic System1)

    Institute of Scientific and Technical Information of China (English)

    XIAOGang; DENGLi; ZHANGBen-Ai; ZHUJian-Shi

    2004-01-01

    Monte Carlo simulation has become an important tool for estimating the reliability and availability of dynamic systems, since conventional numerical methods are no longer efficient when the size of the system to solve is large. However, evaluating by simulation the probability of occurrence of very rare events means playing a very large number of histories of the system, which leads to unacceptable computing time. Highly efficient Monte Carlo methods should therefore be worked out. In this paper, based on the integral equation describing state transitions of a Markov dynamic system, a uniform Monte Carlo method for estimating unavailability is presented. Using a free-flight estimator, direct statistical estimation Monte Carlo is achieved. Using both a free-flight estimator and a biased probability space of sampling, weighted statistical estimation Monte Carlo is also achieved. Five Monte Carlo schemes, including crude simulation, analog simulation, statistical estimation based on crude and analog simulation, and weighted statistical estimation, are used for calculating the unavailability of a repairable Con/3/30:F system. Their efficiencies are compared with each other. The results show the weighted statistical estimation Monte Carlo has the smallest variance and the highest efficiency in very-rare-event simulation.

  11. A Modified Bootstrap Monte Carlo Method to Investigate the Impact of Systematic Effects on Calibrated Optical Interferometry Data

    Science.gov (United States)

    Hasan, Mahmudul; Tycner, Christopher; Sigut, Aaron; Zavala, Robert T.

    2017-01-01

    We describe a modified bootstrap Monte Carlo method that was developed to assess quantitatively the impact of systematic residual errors on calibrated optical interferometry data from the Navy Precision Optical Interferometer. A variety of atmospheric and instrumental effects represent the sources of residual systematic errors that remain in the data after calibration, for example when atmospheric fluctuations occur on time scales shorter than the time between the observations of calibrator-target pairs. The modified bootstrap Monte Carlo method retains the inherent structure of how the underlying data set was acquired, by accounting for the fact that groups of data points are obtained simultaneously instead of as individual data points. When telescope pairs (baselines) and spectral channels corresponding to a specific output beam from a beam combiner are treated as groups, this method provides more realistic (and typically larger) uncertainties for the fitted model parameters, such as angular diameters of resolved stars, than the standard method based solely on formal errors. This work has been supported by NSF grant AST-1614983.
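The modified bootstrap above resamples whole groups of simultaneously acquired points rather than individual points, so within-group correlations survive the resampling. A minimal group-resampling sketch; the squared-visibility values below are hypothetical:

```python
import random

def group_bootstrap(groups, statistic, n_boot=2000, seed=3):
    """Bootstrap that resamples whole groups of simultaneously acquired
    points (e.g. all spectral channels of one observation) rather than
    individual points, preserving within-group correlation."""
    rng = random.Random(seed)
    values = []
    for _ in range(n_boot):
        resample = [groups[rng.randrange(len(groups))] for _ in groups]
        values.append(statistic([v for g in resample for v in g]))
    mean = sum(values) / n_boot
    std = (sum((v - mean) ** 2 for v in values) / (n_boot - 1)) ** 0.5
    return mean, std

# Hypothetical squared-visibility measurements; each inner list is one
# group recorded under the same atmospheric conditions.
groups = [[0.81, 0.83, 0.82], [0.74, 0.75, 0.76], [0.88, 0.86, 0.87]]
mean_v2 = lambda xs: sum(xs) / len(xs)
m, s = group_bootstrap(groups, mean_v2)
print(f"V^2 = {m:.3f} +/- {s:.3f}")
```

Because whole groups move together, the spread of the bootstrap statistic reflects the group-to-group (systematic) scatter, which a point-wise bootstrap would dilute.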

  12. Efficiency calibration for a NaI scintillation detector based on Monte-Carlo process and preliminary measurements of bremsstrahlung%基于蒙特卡罗方法的NaI探测器效率刻度及其测量轫致辐射实验

    Institute of Scientific and Technical Information of China (English)

    黄建微; 王乃彦

    2014-01-01

    In order to better apply the NaI scintillation spectrometer to bremsstrahlung measurements, the energy response function of a NaI detector spectrometer system was studied using 137Cs and 60Co sources together with the Monte Carlo N-Particle transport code (MCNP). Simulated and measured full-energy peak efficiencies are in good agreement. An energy response matrix (ERM) was obtained by simulating photons of given energies incident on the NaI crystal in MCNP; deconvolving the detected NaI spectrum with the ERM gives results that agree well with the original spectrum. Furthermore, the NaI detector was used for a preliminary measurement of its response to bremsstrahlung generated by a high-intensity electron beam bombarding a target of 1.5 mm thickness.

  13. Nonlinear calibration transfer based on hierarchical Bayesian models and Lagrange Multipliers: Error bounds of estimates via Monte Carlo - Markov Chain sampling.

    Science.gov (United States)

    Seichter, Felicia; Vogt, Josef; Radermacher, Peter; Mizaikoff, Boris

    2017-01-25

    The calibration of analytical systems is time-consuming and the effort for daily calibration routines should therefore be minimized, while maintaining the analytical accuracy and precision. The 'calibration transfer' approach proposes to combine calibration data already recorded with actual calibration measurements. However, this strategy was developed for the multivariate, linear analysis of spectroscopic data, and thus, cannot be applied to sensors with a single response channel and/or a non-linear relationship between signal and desired analytical concentration. To fill this gap for a non-linear calibration equation, we assume that the coefficients for the equation, collected over several calibration runs, are normally distributed. Considering that the coefficients of an actual calibration are a sample of this distribution, only a few standards are needed for a complete calibration data set. The resulting calibration transfer approach is demonstrated for a fluorescence oxygen sensor and implemented as a hierarchical Bayesian model, combined with a Lagrange Multipliers technique and Monte-Carlo Markov-Chain sampling. The latter provides realistic estimates for coefficients and predictions together with accurate error bounds by simulating known measurement errors and system fluctuations. Performance criteria for validation and optimal selection of a reduced set of calibration samples were developed and lead to a setup which maintains the analytical performance of a full calibration. Strategies for a rapid determination of problems occurring in a daily calibration routine are proposed, thereby opening the possibility of correcting the problem just in time.
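
    The core idea, a prior on the calibration coefficient learned from earlier runs, updated by a few fresh standards via MCMC, can be sketched in one dimension. All numbers (prior, noise level, linear calibration law) are assumptions for illustration, not the paper's oxygen-sensor model; the sampler is a plain Metropolis chain standing in for the full hierarchical scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

# Prior on the calibration coefficient from earlier calibration runs
# (assumed values): coefficients are taken to be normally distributed.
prior_mu, prior_sigma = 2.0, 0.2

# A few fresh standards suffice for the transfer (hypothetical data, y = a*x + noise).
x = np.array([0.0, 0.5, 1.0])
y = np.array([0.02, 1.03, 1.98])
noise = 0.05

def log_post(a):
    ll = -0.5 * np.sum((y - a * x) ** 2) / noise**2   # likelihood of standards
    lp = -0.5 * (a - prior_mu) ** 2 / prior_sigma**2  # hierarchical prior
    return ll + lp

# Metropolis MCMC: yields a coefficient estimate with realistic error bounds.
a, chain = prior_mu, []
for _ in range(5000):
    prop = a + rng.normal(0, 0.05)
    if np.log(rng.uniform()) < log_post(prop) - log_post(a):
        a = prop
    chain.append(a)

a_mean, a_err = np.mean(chain[1000:]), np.std(chain[1000:])
```

    The posterior spread `a_err` is the "accurate error bound" the abstract refers to: it reflects both the measurement noise and the run-to-run coefficient variability encoded in the prior.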

  14. Simple formalism for efficient derivatives and multi-determinant expansions in quantum Monte Carlo

    NARCIS (Netherlands)

    Filippi, C.; Assaraf, R.; Moroni, S.

    2016-01-01

    We present a simple and general formalism to compute efficiently the derivatives of a multi-determinant Jastrow-Slater wave function, the local energy, the interatomic forces, and similar quantities needed in quantum Monte Carlo. Through a straightforward manipulation of matrices evaluated on the occupied and virtual orbitals, we obtain an efficiency equivalent to algorithmic differentiation in the computation of the interatomic forces and the optimization of the orbital parameters.

  15. Response and Monte Carlo evaluation of a reference ionization chamber for radioprotection level at calibration laboratories

    Science.gov (United States)

    Neves, Lucio P.; Vivolo, Vitor; Perini, Ana P.; Caldas, Linda V. E.

    2015-07-01

    A special parallel plate ionization chamber, inserted in a slab phantom for personal dose equivalent Hp(10) determination, was developed and characterized in this work. This ionization chamber has collecting electrodes and a window made of graphite, with the walls and phantom made of PMMA. The tests comprised experimental evaluation following international standards, as well as Monte Carlo simulations employing the PENELOPE code to evaluate the design of this new dosimeter. The experimental tests were conducted with the N-60 radioprotection-level radiation quality established at IPEN, and all results were within the recommended standards.

  16. Improving the efficiency of Monte Carlo simulations of systems that undergo temperature-driven phase transitions

    Science.gov (United States)

    Velazquez, L.; Castro-Palacio, J. C.

    2013-07-01

    Recently, Velazquez and Curilef proposed a methodology to extend Monte Carlo algorithms based on a canonical ensemble which aims to overcome slow sampling problems associated with temperature-driven discontinuous phase transitions. We show in this work that Monte Carlo algorithms extended with this methodology also exhibit a remarkable efficiency near a critical point. Our study is performed for the particular case of a two-dimensional four-state Potts model on a square lattice with periodic boundary conditions. This analysis reveals that the extended version of Metropolis importance sampling is more efficient than the usual Swendsen-Wang and Wolff cluster algorithms. These results demonstrate the effectiveness of this methodology to improve the efficiency of MC simulations of systems that undergo any type of temperature-driven phase transition.

  17. Study on Gamma Full-Energy Peak Efficiency Calibration of HPGe Detector for Bulky Sources

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    This paper briefly introduces a method for the calibration of the gamma full-energy peak efficiency of HPGe for bulky sources with different geometries and different matrices. Simultaneously, the effects of

  18. Monte Carlo efficiency improvement by multiple sampling of conditioned integration variables

    Science.gov (United States)

    Weitz, Sebastian; Blanco, Stéphane; Charon, Julien; Dauchet, Jérémi; El Hafi, Mouna; Eymet, Vincent; Farges, Olivier; Fournier, Richard; Gautrais, Jacques

    2016-12-01

    We present a technique that increases the efficiency of multidimensional Monte Carlo algorithms when sampling the first, unconditioned random variable consumes much more computational time than sampling the remaining, conditioned random variables, while its variability contributes only little to the total variance. This is particularly relevant for transport problems in complex and randomly distributed geometries. The proposed technique is based on a new Monte Carlo estimator in which the conditioned random variables are sampled more often than the unconditioned one. A significant contribution of the present Short Note is an automatic procedure for calculating the optimal number of samples of the conditioned random variable per sample of the unconditioned one. The technique is illustrated by a current research example in which it increases the efficiency by a factor of 100.
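
    The estimator structure can be sketched as nested sampling: for each expensive unconditioned sample X, draw many cheap conditioned samples Y|X and average them first. The distributions and the fixed inner-sample count below are assumptions for illustration; the paper additionally derives an automatic choice of that count.

```python
import numpy as np

rng = np.random.default_rng(2)

# Estimate E[f(X, Y)] where drawing X is costly (e.g., a path through a complex
# random geometry) but contributes little variance, and Y | X is cheap and noisy.
def sample_x():                 # expensive, low-variance part (hypothetical)
    return rng.normal(10.0, 0.1)

def sample_y_given_x(x, m):     # cheap, high-variance conditioned part
    return rng.normal(x, 2.0, size=m)

n_outer, m_inner = 200, 50      # m_inner >> 1: reuse each expensive X sample
estimates = [np.mean(sample_y_given_x(sample_x(), m_inner))
             for _ in range(n_outer)]
mc_estimate = np.mean(estimates)
```

    For a fixed compute budget, increasing `m_inner` amortizes the cost of `sample_x` over many conditioned samples, which is exactly where the factor-100 gain quoted above comes from.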

  19. Calibration and efficiency curve of SANAEM ionization chamber for activity measurements.

    Science.gov (United States)

    Yeltepe, Emin; Kossert, Karsten; Dirican, Abdullah; Nähle, Ole; Niedergesäß, Christiane; Kemal Şahin, Namik

    2016-03-01

    A commercially available Fidelis ionization chamber was calibrated and assessed at PTB with activity standard solutions. The long-term stability and linearity of the system were checked. Energy-dependent efficiency curves for photons and beta particles were determined, using an iterative method in Excel™, to enable calibration factors to be calculated for radionuclides which were not used in the calibration. Relative deviations between experimental and calculated radionuclide efficiencies are of the order of 1% for most photon emitters and below 5% for pure beta emitters. The system will enable TAEK-SANAEM to provide traceable activity measurements.

  20. The Time-Dependent FX-SABR Model: Efficient Calibration based on Effective Parameters

    OpenAIRE

    van der Stoep, H.; Grzelak, Lech Aleksander; Oosterlee, Cornelis

    2014-01-01

    We present a framework for efficient calibration of the time-dependent SABR model (Fernández et al. (2013) Mathematics and Computers in Simulation 94, 55–75; Hagan et al. (2002) Wilmott Magazine 84–108; Osajima (2007) Available at SSRN 965265.) in a foreign exchange (FX) context. In a similar fashion as in (Piterbarg (2005) Risk 18 (5), 71–75) we derive effective parameters, which yield an accurate and efficient calibration. On top of the calibrated FX-SABR model, we add a non-parametric lo...

  1. Efficient 3D Kinetic Monte Carlo Method for Modeling of Molecular Structure and Dynamics

    DEFF Research Database (Denmark)

    Panshenskov, Mikhail; Solov'yov, Ilia; Solov'yov, Andrey V.

    2014-01-01

    Self-assembly of molecular systems is an important and general problem that intertwines physics, chemistry, biology, and material sciences. Through understanding of the physical principles of self-organization, it often becomes feasible to control the process and to obtain complex structures with...... the kinetic Monte Carlo approach in a three-dimensional space. We describe the computational side of the developed code, discuss its efficiency, and apply it for studying an exemplary system....

  2. Efficient implementation of the Hellmann-Feynman theorem in a diffusion Monte Carlo calculation.

    Science.gov (United States)

    Vitiello, S A

    2011-02-07

    Kinetic and potential energies of systems of ⁴He atoms in the solid phase are computed at T = 0. Results at two densities of the liquid phase are presented as well. Calculations are performed by the multiweight extension to the diffusion Monte Carlo method that allows the application of the Hellmann-Feynman theorem in a robust and efficient way. This is a general method that can be applied in other situations of interest as well.

  3. Monte Carlo calculations of efficiencies for photon interactions in plastic scintillators

    Energy Technology Data Exchange (ETDEWEB)

    Bonzi, E.V.; Mainardi, R.T. (Facultad de Matematica, Astronomia y Fisica, Univ. Nacional de Cordoba (Argentina))

    1992-12-01

    Energy absorption and total peak efficiencies for plastic scintillators have been calculated by means of the Monte Carlo method. These results are of interest for potential uses of plastic scintillators as dosimetric or spectrometric devices. The calculations were carried out for photon energies from 2 keV up to 1 MeV. We considered all of the physical effects involved in each energy range: photoelectric, Compton and Rayleigh. As a consistency test, the same code was used to calculate efficiencies for NaI scintillators. The agreement with results published previously by other authors, within calculated errors, is very satisfactory. (orig.).

  4. Systematic efficiency enhancement in Monte Carlo applications. Final progress report, July 1, 1976-January 31, 1980

    Energy Technology Data Exchange (ETDEWEB)

    Spanier, J.

    1980-06-11

    Research performed under the grant period has been undertaken as part of the principal investigator's long-term efforts to develop new, more efficient estimators for application to a wide variety of practical problems. Two rather different approaches have characterized the work: (1) the use of a multistage analysis to optimize the efficiencies (variances) of families of estimating random variables in a traditional statistical Monte Carlo framework, and (2) the development of parallel quasi-random sampling techniques and corresponding deterministic error bounds.

  5. Calibrating the photon detection efficiency in IceCube

    CERN Document Server

    Tosi, Delia

    2015-01-01

    The IceCube neutrino observatory is composed of more than five thousand light sensors, Digital Optical Modules (DOMs), installed on the surface and at depths between 1450 and 2450 m in clear ice at the South Pole. Each DOM incorporates a 10-inch diameter photomultiplier tube (PMT) intended to detect light emitted when high energy neutrinos interact with atoms in the ice. Depending on the energy of the neutrino and the distance from secondary particle tracks, PMTs can be hit by up to several thousand photons within a few hundred nanoseconds. The number of photons per PMT and their time distribution is used to reject background events and to determine the energy and direction of each neutrino. The detector energy scale was established from previous lab measurements of DOM optical sensitivity, then refined based on observed light yield from stopping muons and calibration of ice properties. A laboratory setup has now been developed to more precisely measure the DOM optical sensitivity as a function of angle and w...

  6. Determination of gossypol content in cottonseeds by near infrared spectroscopy based on Monte Carlo uninformative variable elimination and nonlinear calibration methods.

    Science.gov (United States)

    Li, Cheng; Zhao, Tianlun; Li, Cong; Mei, Lei; Yu, En; Dong, Yating; Chen, Jinhong; Zhu, Shuijin

    2017-04-15

    Near infrared (NIR) spectroscopy combined with Monte Carlo uninformative variable elimination (MC-UVE) and nonlinear calibration methods was investigated for determining gossypol content in cottonseeds. The reference method was high performance liquid chromatography coupled to an ultraviolet detector (HPLC-UV). MC-UVE was employed to extract the effective information from the full NIR spectra. Nonlinear calibration methods were applied to establish models, which were compared with the linear method. The optimal model for gossypol content was obtained by MC-UVE-WLS-SVM, with a root mean square error of prediction (RMSEP) of 0.0422, coefficient of determination (R²) of 0.9331, and residual predictive deviation (RPD) of 3.8374, which is accurate and robust enough to substitute for traditional gossypol measurements. The nonlinear methods performed more reliably than the linear method during the development of calibration models. Furthermore, MC-UVE could provide better and simpler calibration models than the full spectra.
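
    The MC-UVE selection step can be sketched with synthetic data. The data shapes, noise level, and plain least-squares submodels below are assumptions (the paper builds PLS/SVM models on real spectra); the sketch shows only the Monte Carlo scoring idea: refit on random sample subsets and keep variables whose coefficients are stable.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical "spectra": 40 samples x 10 variables; only the first three
# variables carry information about the analyte.
X = rng.normal(size=(40, 10))
y = X[:, 0] + 0.5 * X[:, 1] - 0.8 * X[:, 2] + rng.normal(0, 0.05, 40)

# MC-UVE: fit the model on many random sample subsets and score each variable
# by the stability ratio |mean/std| of its regression coefficient.
B = []
for _ in range(500):
    idx = rng.choice(40, 30, replace=False)
    b, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    B.append(b)
B = np.array(B)
reliability = np.abs(B.mean(axis=0) / B.std(axis=0))
selected = np.argsort(reliability)[-3:]   # keep the most stable variables
```

    Uninformative variables have coefficients that hover around zero with subset-to-subset scatter, so their reliability scores are small and they are eliminated before building the final calibration model.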

  7. Prediction of La0.6Sr0.4Co0.2Fe0.8O3 cathode microstructures during sintering: Kinetic Monte Carlo (KMC) simulations calibrated by artificial neural networks

    Science.gov (United States)

    Yan, Zilin; Kim, Yongtae; Hara, Shotaro; Shikazono, Naoki

    2017-04-01

    The Potts Kinetic Monte Carlo (KMC) model, proven to be a robust tool for studying all stages of the sintering process, is well suited to analyzing the microstructure evolution of electrodes in solid oxide fuel cells (SOFCs). Due to the nature of this model, input parameters of KMC simulations such as simulation temperatures and attempt frequencies are difficult to identify. We propose a rigorous and efficient approach to facilitate the input parameter calibration process using artificial neural networks (ANNs). The trained ANN drastically reduces the number of trial-and-error KMC simulations. The KMC simulation using the calibrated input parameters predicts the microstructures of a La0.6Sr0.4Co0.2Fe0.8O3 cathode material during sintering, showing both qualitative and quantitative congruence with real 3D microstructures obtained by focused ion beam scanning electron microscopy (FIB-SEM) reconstruction.

  8. Monte Carlo evaluation of the neutron detection efficiency of a superheated drop detector

    Energy Technology Data Exchange (ETDEWEB)

    Gualdrini, G.F. [ENEA, Centro Ricerche `Ezio Clementel`, Bologna (Italy). Dipt. Ambiente; D`Errico, F.; Noccioni, P. [Pisa, Univ. (Italy). Dipt. di Costruzioni Meccaniche e Nucleari

    1997-03-01

    Neutron dosimetry has recently gained renewed attention, following concerns on the exposure of crew members on board aircraft, and of workers around the increasing number of high-energy accelerators for medical and research purposes. At the same time, the new operational quantities for radiation dosimetry introduced by the ICRU and the ICRP, aiming at a unified metrological system applicable to all types of radiation exposure, created the need to update current devices in order to meet the new requirements. Superheated Drop (Bubble) Detectors (SDD) offer an alternative approach to neutron radiation protection dosimetry. The SDDs are currently studied within a large collaborative effort involving Yale University, New Haven CT, the University of Pisa (IT), the Physikalisch-Technische Bundesanstalt, Braunschweig (D), and the ENEA (Italian National Agency for New Technologies, Energy and the Environment) Centre of Bologna. The detectors were characterised through calibrations with monoenergetic neutron beams; where experimental investigations were inadequate or impossible, such as in the intermediate energy range, parametric Monte Carlo calculations of the response were carried out. This report describes the general characteristics of the SDDs along with the Monte Carlo computations of the energy response and a comparison with the experimental results.

  9. Monte Carlo evaluation of the neutron detection efficiency of a superheated drop detector

    Energy Technology Data Exchange (ETDEWEB)

    Gualdrini, G. F. [ENEA, Centro Ricerche `Ezio Clementel`, Bologna (Italy). Dipt. Ambiente; D`Errico, F.; Noccioni, P. [Pisa, Univ. (Italy). Dipt. di Costruzioni Meccaniche e Nucleari

    1997-06-01

    Neutron dosimetry has recently gained renewed attention, following concerns on the exposure of crew members on board aircraft, and of workers around the increasing number of high energy accelerators for medical and research purposes. At the same time the new operational quantities for radiation dosimetry introduced by ICRU and the ICRP, aiming at a unified metrological system applicable to all types of radiation exposure, involved the need to update current devices in order to meet new requirements. Superheated Drop (Bubble) Detectors (SDD) offer an alternative approach to neutron radiation protection dosimetry. The SDDs are currently studied within a large collaborative effort involving Yale University, New Haven CT, the `Universita` degli Studi di Pisa`, the Physikalisch-Technische Bundesanstalt, Braunschweig D. and ENEA (National Agency for New Technology, Energy and the Environment)-C.R., Bologna. The detectors were characterised through calibrations with monoenergetic neutron beams and where experimental investigations were inadequate or impossible, such as in the intermediate energy range, parametric Monte Carlo calculations of the response were carried out. This report describes the general characteristics of the SDDs along with the Monte Carlo computations of the energy response and a comparison with the experimental results.

  10. An efficient interpolation technique for jump proposals in reversible-jump Markov chain Monte Carlo calculations.

    Science.gov (United States)

    Farr, W M; Mandel, I; Stevens, D

    2015-06-01

    Selection among alternative theoretical models given an observed dataset is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, yet cannot efficiently explore both parameter spaces at once. Thus, a naive jump between parameter spaces is unlikely to be accepted in the Markov chain Monte Carlo (MCMC) algorithm, and convergence is correspondingly slow. Here, we demonstrate an interpolation technique that uses samples from single-model MCMCs to propose intermodel jumps from an approximation to the single-model posterior of the target parameter space. The interpolation technique, based on a kD-tree data structure, is adaptive and efficient in modest dimensionality. We show that our technique leads to improved convergence over naive jumps in an RJMCMC, and compare it to other proposals in the literature to improve the convergence of RJMCMCs. We also demonstrate the use of the same interpolation technique as a way to construct efficient 'global' proposal distributions for single-model MCMCs without prior knowledge of the structure of the posterior distribution, and discuss improvements that permit the method to be used in higher-dimensional spaces efficiently.
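
    The flavour of the idea can be sketched with a kD-tree over stored single-model samples. This is a simplified stand-in, not the paper's algorithm: the jitter kernel and the k-nearest-neighbour density estimate below are assumptions, used here only to show how stored samples can both generate jump proposals and supply the proposal density needed in the acceptance ratio.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(5)

# Stored samples from a single-model MCMC run (hypothetical 2-D posterior).
samples = rng.normal([1.0, -2.0], 0.3, size=(2000, 2))
tree = cKDTree(samples)

def propose_jump():
    # Draw a stored sample and jitter it, so intermodel jumps land in
    # high-posterior regions of the target model's parameter space.
    s = samples[rng.integers(len(samples))]
    return s + rng.normal(0, 0.05, 2)

def proposal_density(x, k=50):
    # k-nearest-neighbour density estimate via the kD-tree, needed for the
    # Metropolis-Hastings acceptance ratio of the jump.
    r, _ = tree.query(x, k=k)
    return k / (len(samples) * np.pi * r[-1] ** 2)

x = propose_jump()
q = proposal_density(x)
```

    Because the proposal approximates the target-model posterior, jumps are accepted far more often than naive draws from a broad prior.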

  11. Efficient Implementation of the Barnes-Hut Octree Algorithm for Monte Carlo Simulations of Charged Systems

    CERN Document Server

    Gan, Zecheng

    2013-01-01

    Computer simulation with Monte Carlo is an important tool to investigate the function and equilibrium properties of many systems of biological and soft matter materials soluble in solvents. The appropriate treatment of long-range electrostatic interactions is essential for these charged systems, but remains a challenging problem for large-scale simulations. We have developed an efficient Barnes-Hut treecode algorithm for electrostatic evaluation in Monte Carlo simulations of Coulomb many-body systems. The algorithm is based on a divide-and-conquer strategy and fast updates of the octree data structure in each trial move through a local adjustment procedure. We test the accuracy of the tree algorithm, and use it in computer simulations of the electric double layer near a spherical interface. It has been shown that the computational cost of the Monte Carlo method with treecode acceleration scales as $\log N$ in each move. For a typical system with ten thousand particles, by using the new algorithm, the speed has b...

  12. A fast and efficient algorithm for Slater determinant updates in quantum Monte Carlo simulations.

    Science.gov (United States)

    Nukala, Phani K V V; Kent, P R C

    2009-05-28

    We present an efficient low-rank updating algorithm for updating the trial wave functions used in quantum Monte Carlo (QMC) simulations. The algorithm is based on low-rank updating of the Slater determinants. In particular, the computational complexity of the algorithm is O(kN) during the kth step compared to traditional algorithms that require O(N²) computations, where N is the system size. For single determinant trial wave functions the new algorithm is faster than the traditional O(N²) Sherman-Morrison algorithm for up to O(N) updates. For multideterminant configuration-interaction-type trial wave functions of M+1 determinants, the new algorithm is significantly more efficient, saving both O(MN²) work and O(MN²) storage. The algorithm enables more accurate and significantly more efficient QMC calculations using configuration-interaction-type wave functions.
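
    For context, the traditional Sherman-Morrison baseline that the abstract compares against can be sketched as follows. The small well-conditioned matrix is an assumption standing in for a real Slater matrix; the sketch shows the two standard tricks: an O(N) determinant ratio for the acceptance test, and an O(N²) rank-1 inverse update after acceptance.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 6

# Well-conditioned stand-in for a Slater matrix (orbitals x electrons).
A = np.eye(N) + 0.1 * rng.normal(size=(N, N))
Ainv = np.linalg.inv(A)

# QMC trial move: one electron moves, so one row of the Slater matrix changes.
k = 2
new_row = A[k] + 0.3 * rng.normal(size=N)

# Determinant ratio in O(N) via the matrix determinant lemma
# (no O(N^3) determinant recomputation needed for the acceptance test).
ratio = new_row @ Ainv[:, k]

# Sherman-Morrison O(N^2) rank-1 update of the inverse after accepting the move.
A_new = A.copy()
A_new[k] = new_row
u = np.zeros(N); u[k] = 1.0        # row update: A_new = A + u (new_row - A[k])^T
v = new_row - A[k]
Ainv_new = Ainv - np.outer(Ainv @ u, v @ Ainv) / (1.0 + v @ (Ainv @ u))
```

    The paper's contribution is to defer and batch such updates, reducing the per-step cost from O(N²) to O(kN) over k accumulated moves.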

  13. Efficiency calibration of a HPGe detector for [{sup 18}F] FDG activity measurements

    Energy Technology Data Exchange (ETDEWEB)

    Fragoso, Maria da Conceicao de Farias; Lacerda, Isabelle Viviane Batista de; Albuquerque, Antonio Morais de Sa, E-mail: mariacc05@yahoo.com.br, E-mail: isabelle.lacerda@ufpe.br, E-mail: moraisalbuquerque@hotmaiI.com [Universidade Federal de Pernambuco (DEN/UFPE), Recife, PE (Brazil). Departamento de Energia Nuclear; Oliveira, Mercia Liane de; Hazin, Clovis Abrahao; Lima, Fernando Roberto de Andrade, E-mail: mercial@cnen.gov.br, E-mail: chazin@cnen.gov.br, E-mail: falima@cnen.gov.br [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)

    2013-11-01

    The radionuclide ¹⁸F, in the form of fluorodeoxyglucose (FDG), is the most widely used radiopharmaceutical for Positron Emission Tomography (PET). Due to the increasing demand for [¹⁸F]FDG, it is important to ensure high-quality activity measurements in nuclear medicine practice. Therefore, standardized reference sources are necessary to calibrate ¹⁸F measuring systems. Usually, the activity measurements are performed in re-entrant ionization chambers, also known as radionuclide calibrators. Among the existing alternatives for the standardization of radioactive sources, the method known as gamma spectrometry is widely used for short-lived radionuclides, since it is essential to minimize source preparation time. The purpose of this work was to perform the standardization of the [¹⁸F]FDG solution by gamma spectrometry. In addition, the reference sources calibrated by this method can be used to calibrate and test the radionuclide calibrators from the Divisao de Producao de Radiofarmacos (DIPRA) of the Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE). Standard sources of ¹⁵²Eu, ¹³⁷Cs and ⁶⁸Ge were used for the efficiency calibration of the spectrometer system. As a result, the efficiency curve as a function of energy was determined over a wide energy range from 122 to 1408 keV. Reference sources obtained by this method can be used in [¹⁸F]FDG activity measurement comparison programs for PET services located in the Brazilian Northeast region. (author)

  14. Time- and Computation-Efficient Calibration of MEMS 3D Accelerometers and Gyroscopes

    Directory of Open Access Journals (Sweden)

    Sara Stančin

    2014-08-01

    We propose calibration methods for microelectromechanical system (MEMS) 3D accelerometers and gyroscopes that are efficient in terms of time and computational complexity. The calibration process for both sensors is simple, does not require additional expensive equipment, and can be performed in the field before or between motion measurements. The methods rely on a small number of defined calibration measurements that are used to obtain the values of 12 calibration parameters. This process enables the static compensation of sensor inaccuracies. The values detected by the 3D sensor are interpreted using a generalized 3D sensor model. The model assumes that the values detected by the sensor are equal to the projections of the measured value on the sensor sensitivity axes. Although this finding is trivial for 3D accelerometers, its validity for 3D gyroscopes is not immediately apparent; thus, this paper elaborates on this latter topic. For an example sensor device, calibration parameters were established using calibration measurements of approximately 1.5 min in duration for the 3D accelerometer and 2.5 min in duration for the 3D gyroscope. Correction of each detected 3D value using the established calibration parameters in further measurements requires only nine addition and nine multiplication operations.

  15. Time- and computation-efficient calibration of MEMS 3D accelerometers and gyroscopes.

    Science.gov (United States)

    Stančin, Sara; Tomažič, Sašo

    2014-08-13

    We propose calibration methods for microelectromechanical system (MEMS) 3D accelerometers and gyroscopes that are efficient in terms of time and computational complexity. The calibration process for both sensors is simple, does not require additional expensive equipment, and can be performed in the field before or between motion measurements. The methods rely on a small number of defined calibration measurements that are used to obtain the values of 12 calibration parameters. This process enables the static compensation of sensor inaccuracies. The values detected by the 3D sensor are interpreted using a generalized 3D sensor model. The model assumes that the values detected by the sensor are equal to the projections of the measured value on the sensor sensitivity axes. Although this finding is trivial for 3D accelerometers, its validity for 3D gyroscopes is not immediately apparent; thus, this paper elaborates on this latter topic. For an example sensor device, calibration parameters were established using calibration measurements of approximately 1.5 min in duration for the 3D accelerometer and 2.5 min in duration for the 3D gyroscope. Correction of each detected 3D value using the established calibration parameters in further measurements requires only nine addition and nine multiplication operations.
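
    The per-sample correction can be sketched directly from the abstract: 12 parameters split into a 3×3 matrix (gain and axis misalignment) and a 3-vector of offsets. The numeric values below are assumptions for illustration; the operation count matches the quoted nine additions and nine multiplications (three offset subtractions plus a 3×3 matrix-vector product).

```python
import numpy as np

# 12 calibration parameters (hypothetical values): a 3x3 matrix M correcting
# gain and cross-axis sensitivity, plus a 3-vector b of zero offsets.
M = np.array([[1.02,  0.01, -0.02],
              [0.00,  0.98,  0.01],
              [0.01, -0.01,  1.01]])
b = np.array([0.05, -0.03, 0.10])

def calibrate(v_raw):
    # 3 subtractions + 3x3 matrix product: 9 multiplications and 9 additions
    # per sample, matching the cost quoted in the abstract.
    return M @ (v_raw - b)

reading = np.array([0.15, 0.97, 0.30])   # raw accelerometer sample (assumed)
corrected = calibrate(reading)
```

    The same form applies to both the accelerometer and the gyroscope; only the 12 parameter values differ.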

  16. An efficient Monte Carlo interior penalty discontinuous Galerkin method for elastic wave scattering in random media

    Science.gov (United States)

    Feng, X.; Lorton, C.

    2017-03-01

    This paper develops and analyzes an efficient Monte Carlo interior penalty discontinuous Galerkin (MCIP-DG) method for elastic wave scattering in random media. The method is constructed based on a multi-modes expansion of the solution of the governing random partial differential equations. It is proved that the mode functions satisfy a three-term recurrence system of partial differential equations (PDEs) which are nearly deterministic in the sense that the randomness only appears in the right-hand side source terms, not in the coefficients of the PDEs. Moreover, the same differential operator applies to all mode functions. A proven unconditionally stable and optimally convergent IP-DG method is used to discretize the deterministic PDE operator, and an efficient numerical algorithm is proposed based on combining the Monte Carlo method and the IP-DG method with the $LU$ direct linear solver. It is shown that the algorithm converges optimally with respect to both the mesh size $h$ and the sampling number $M$, and practically its total computational complexity amounts only to solving very few deterministic elastic Helmholtz equations using the $LU$ direct linear solver. Numerical experiments are also presented to demonstrate the performance and key features of the proposed MCIP-DG method.

  17. Self-Calibrated Energy-Efficient and Reliable Channels for On-Chip Interconnection Networks

    Directory of Open Access Journals (Sweden)

    Po-Tsang Huang

    2012-01-01

    Energy-efficient and reliable channels are provided for on-chip interconnection networks (OCINs) using a self-calibrated voltage scaling technique with a self-corrected green (SCG) coding scheme. This self-calibrated low-power coding and voltage scaling technique increases reliability and reduces energy consumption simultaneously. The SCG coding is a joint bus and error correction coding scheme that provides a reliable mechanism for channels. In addition, it achieves a significant reduction in energy consumption via a joint triplication bus power model for crosstalk avoidance. Based on the SCG coding scheme, the proposed self-calibrated voltage scaling technique adjusts the voltage swing for energy reduction. Furthermore, this technique tolerates timing variations. Based on UMC 65 nm CMOS technology, the proposed channels reduce energy consumption by nearly 28.3% compared with uncoded channels at the lowest voltage. This approach makes the channels of OCINs tolerant of transient malfunctions and realizes energy efficiency.

  18. Simple formalism for efficient derivatives and multi-determinant expansions in quantum Monte Carlo.

    Science.gov (United States)

    Filippi, Claudia; Assaraf, Roland; Moroni, Saverio

    2016-05-21

    We present a simple and general formalism to compute efficiently the derivatives of a multi-determinant Jastrow-Slater wave function, the local energy, the interatomic forces, and similar quantities needed in quantum Monte Carlo. Through a straightforward manipulation of matrices evaluated on the occupied and virtual orbitals, we obtain an efficiency equivalent to algorithmic differentiation in the computation of the interatomic forces and the optimization of the orbital parameters. Furthermore, for a large multi-determinant expansion, the significant computational gain afforded by a recently introduced table method is here extended to the local value of any one-body operator and to its derivatives, in both all-electron and pseudopotential calculations.

  19. Simple formalism for efficient derivatives and multi-determinant expansions in quantum Monte Carlo

    Science.gov (United States)

    Filippi, Claudia; Assaraf, Roland; Moroni, Saverio

    2016-05-01

    We present a simple and general formalism to compute efficiently the derivatives of a multi-determinant Jastrow-Slater wave function, the local energy, the interatomic forces, and similar quantities needed in quantum Monte Carlo. Through a straightforward manipulation of matrices evaluated on the occupied and virtual orbitals, we obtain an efficiency equivalent to algorithmic differentiation in the computation of the interatomic forces and the optimization of the orbital parameters. Furthermore, for a large multi-determinant expansion, the significant computational gain afforded by a recently introduced table method is here extended to the local value of any one-body operator and to its derivatives, in both all-electron and pseudopotential calculations.

  20. Study on calibration of neutron efficiency and relative photo-yield of plastic scintillator

    CERN Document Server

    Peng Tai Ping; Li Ru Rong; Zhang Jian Hua; Luo Xiao Bing; Xia Yi Jun; Yang Zhi Hu

    2002-01-01

A method for calibrating the neutron efficiency and the relative photo-yield of plastic scintillators is studied. The T(p, n) and D(d, n) reactions are used as neutron sources. The neutron efficiencies and relative photo-yields of plastic scintillator 1421 (40 mm in diameter and 5 mm in thickness) are determined in the neutron energy range of 0.655-5 MeV.

  1. An efficient calibration method for SQUID measurement system using three orthogonal Helmholtz coils

    Science.gov (United States)

    Hua, Li; Shu-Lin, Zhang; Chao-Xiang, Zhang; Xiang-Yan, Kong; Xiao-Ming, Xie

    2016-06-01

For a practical superconducting quantum interference device (SQUID) based measurement system, the Tesla/volt coefficient must be accurately calibrated. In this paper, we propose a highly efficient method of calibrating a SQUID magnetometer system using three orthogonal Helmholtz coils. The Tesla/volt coefficient is regarded as the magnitude of a vector pointing to the normal direction of the pickup coil. By applying magnetic fields through a three-dimensional Helmholtz coil, the Tesla/volt coefficient can be directly calculated from magnetometer responses to the three orthogonally applied magnetic fields. Calibration with alternating current (AC) field is normally used for better signal-to-noise ratio in noisy urban environments and the results are compared with the direct current (DC) calibration to avoid possible effects due to eddy current. In our experiment, a calibration relative error of about 6.89 × 10⁻⁴ is obtained, and the error is mainly caused by the non-orthogonality of three axes of the Helmholtz coils. The method does not need precise alignment of the magnetometer inside the Helmholtz coil. It can be used for the multichannel magnetometer system calibration effectively and accurately. Project supported by the “Strategic Priority Research Program (B)” of the Chinese Academy of Sciences (Grant No. XDB04020200) and the Shanghai Municipal Science and Technology Commission Project, China (Grant No. 15DZ1940902).
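The vector construction described above reduces to a few lines. In this sketch (function and variable names are ours, not from the paper), the coefficient k satisfies k·V_i = B0·n_i for a unit pickup-coil normal n, so applying equal-magnitude fields along three orthogonal axes gives k = B0/|V|:

```python
import math

def tesla_per_volt(b0, v_responses):
    """Tesla/volt coefficient from responses to three orthogonal fields.

    b0          -- magnitude (T) of the field applied along each axis
    v_responses -- magnetometer voltages (V) for the three applied fields

    With k * V_i = b0 * n_i and n a unit vector (the pickup-coil normal),
    summing the squares gives k = b0 / |V|.
    """
    norm_v = math.sqrt(sum(v * v for v in v_responses))
    k = b0 / norm_v
    normal = [k * v / b0 for v in v_responses]  # recovered coil normal
    return k, normal

# Synthetic check: coil normal (0.6, 0, 0.8), k = 2e-9 T/V, b0 = 1e-9 T
k_true, b0 = 2e-9, 1e-9
v = [b0 * ni / k_true for ni in (0.6, 0.0, 0.8)]
k_est, n_est = tesla_per_volt(b0, v)
```

No alignment of the magnetometer inside the coil is needed, since only the magnitude of the response vector enters the result.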

  2. Beyond histograms: Efficiently estimating radial distribution functions via spectral Monte Carlo

    Science.gov (United States)

    Patrone, Paul N.; Rosch, Thomas W.

    2017-03-01

    Despite more than 40 years of research in condensed-matter physics, state-of-the-art approaches for simulating the radial distribution function (RDF) g(r) still rely on binning pair-separations into a histogram. Such methods suffer from undesirable properties, including subjectivity, high uncertainty, and slow rates of convergence. Moreover, such problems go undetected by the metrics often used to assess RDFs. To address these issues, we propose (I) a spectral Monte Carlo (SMC) quadrature method that yields g(r) as an analytical series expansion and (II) a Sobolev norm that assesses the quality of RDFs by quantifying their fluctuations. Using the latter, we show that, relative to histogram-based approaches, SMC reduces by orders of magnitude both the noise in g(r) and the number of pair separations needed for acceptable convergence. Moreover, SMC reduces subjectivity and yields simple, differentiable formulas for the RDF, which are useful for tasks such as coarse-grained force-field calibration via iterative Boltzmann inversion.
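To illustrate the general idea of replacing a histogram with a series expansion (a simplified stand-in for SMC, not the authors' quadrature: we use a plain Legendre expansion of the distribution of pair separations, and the uniform samples are synthetic):

```python
import numpy as np

def spectral_density(samples, r_max, n_terms=12):
    """Estimate the distribution of pair separations as a Legendre series
    instead of a histogram. Map r in [0, r_max] to x in [-1, 1]; by the
    Legendre orthogonality relation, p(x) = sum_k (2k+1)/2 <P_k(x)> P_k(x),
    where each <P_k> is a simple Monte Carlo average over the samples.
    Returns an analytical, differentiable estimate of p(r)."""
    x = 2.0 * np.asarray(samples) / r_max - 1.0
    coeffs = [np.polynomial.legendre.legval(x, [0] * k + [1]).mean()
              for k in range(n_terms)]

    def p_r(r):
        xr = 2.0 * np.asarray(r) / r_max - 1.0
        px = sum((2 * k + 1) / 2.0 * c *
                 np.polynomial.legendre.legval(xr, [0] * k + [1])
                 for k, c in enumerate(coeffs))
        return px * (2.0 / r_max)  # Jacobian of the x -> r mapping
    return p_r

# Synthetic pair separations, uniform on [0, 5], so p(r) should be ~0.2
rng = np.random.default_rng(0)
pair_separations = rng.uniform(0.0, 5.0, 20000)
p_hat = spectral_density(pair_separations, r_max=5.0)
```

The resulting estimate has no bins to choose, and its smooth functional form is the property exploited for tasks like iterative Boltzmann inversion.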

  3. Quantum Monte Carlo for large chemical systems: implementing efficient strategies for petascale platforms and beyond.

    Science.gov (United States)

    Scemama, Anthony; Caffarel, Michel; Oseret, Emmanuel; Jalby, William

    2013-04-30

    Various strategies to implement efficiently quantum Monte Carlo (QMC) simulations for large chemical systems are presented. These include: (i) the introduction of an efficient algorithm to calculate the computationally expensive Slater matrices. This novel scheme is based on the use of the highly localized character of atomic Gaussian basis functions (not the molecular orbitals as usually done), (ii) the possibility of keeping the memory footprint minimal, (iii) the important enhancement of single-core performance when efficient optimization tools are used, and (iv) the definition of a universal, dynamic, fault-tolerant, and load-balanced framework adapted to all kinds of computational platforms (massively parallel machines, clusters, or distributed grids). These strategies have been implemented in the QMC=Chem code developed at Toulouse and illustrated with numerical applications on small peptides of increasing sizes (158, 434, 1056, and 1731 electrons). Using 10-80 k computing cores of the Curie machine (GENCI-TGCC-CEA, France), QMC=Chem has been shown to be capable of running at the petascale level, thus demonstrating that for this machine a large part of the peak performance can be achieved. Implementation of large-scale QMC simulations for future exascale platforms with a comparable level of efficiency is expected to be feasible.

  4. Efficient Monte Carlo and greedy heuristic for the inference of stochastic block models

    CERN Document Server

    Peixoto, Tiago P

    2014-01-01

We present an efficient algorithm for the inference of stochastic block models in large networks. The algorithm can be used as an optimized Markov chain Monte Carlo (MCMC) method, with a fast mixing time and a much reduced susceptibility to getting trapped in metastable states, or as a greedy agglomerative heuristic, with an almost linear $O(N\ln^2N)$ complexity, where $N$ is the number of nodes in the network, independent of the number of blocks being inferred. We show that the heuristic is capable of delivering results which are indistinguishable from the more exact and numerically expensive MCMC method in many artificial and empirical networks, despite being much faster. The method is entirely unbiased towards any specific mixing pattern, and in particular it does not favor assortative community structures.

  5. Amorphous silicon EPID calibration for dosimetric applications: comparison of a method based on Monte Carlo prediction of response with existing techniques

    Energy Technology Data Exchange (ETDEWEB)

    Parent, L [Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Sutton (United Kingdom); Fielding, A L [School of Physical and Chemical Sciences, Queensland University of Technology, Brisbane (Australia); Dance, D R [Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, London (United Kingdom); Seco, J [Department of Radiation Oncology, Francis Burr Proton Therapy Center, Massachusetts General Hospital, Harvard Medical School, Boston (United States); Evans, P M [Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Sutton (United Kingdom)

    2007-07-21

For EPID dosimetry, the calibration should ensure that all pixels have a similar response to a given irradiation. A calibration method (MC), using an analytical fit of a Monte Carlo simulated flood field EPID image to correct for the flood field image pixel intensity shape, was proposed. It was compared with the standard flood field calibration (FF), with the use of a water slab placed in the beam to flatten the flood field (WS) and with a multiple field calibration where the EPID was irradiated with a fixed 10 x 10 field for 16 different positions (MF). The EPID was used in its normal configuration (clinical setup) and with an additional 3 mm copper slab (modified setup). Beam asymmetry measured with a diode array was taken into account in MC and WS methods. For both setups, the MC method provided pixel sensitivity values within 3% of those obtained with the MF and WS methods (mean difference <1%, standard deviation <2%). The difference of pixel sensitivity between MC and FF methods was up to 12.2% (clinical setup) and 11.8% (modified setup). MC calibration provided images of open fields (5 x 5 to 20 x 20 cm²) and IMRT fields to within 3% of that obtained with WS and MF calibrations while differences with images calibrated with the FF method for fields larger than 10 x 10 cm² were up to 8%. MC, WS and MF methods all provided a major improvement on the FF method. Advantages and drawbacks of each method were reviewed.
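The gist of the MC-based calibration can be sketched as follows (illustrative code, not the authors' implementation): the measured flood image is divided by a modeled flood-field shape, so the true beam profile is not flattened away into the sensitivity map.

```python
import numpy as np

def pixel_sensitivity(flood_measured, flood_model):
    """Per-pixel sensitivity: the measured flood image divided by the
    modeled (e.g. Monte Carlo fitted) flood-field shape, normalized to
    unit mean. Unlike the plain flood-field (FF) calibration, the real
    beam shape stays in `flood_model` rather than being absorbed into
    the sensitivity map."""
    s = np.asarray(flood_measured, float) / np.asarray(flood_model, float)
    return s / s.mean()

def apply_calibration(raw_image, sensitivity):
    """Correct a raw EPID image with the sensitivity map."""
    return np.asarray(raw_image, float) / sensitivity
```

Dividing a raw image by this map removes pixel-to-pixel response variations while preserving the fluence shape encoded in the model.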

  6. Multi Objective Optimization for Calibration and Efficient Uncertainty Analysis of Computationally Expensive Watershed Models

    Science.gov (United States)

    Akhtar, T.; Shoemaker, C. A.

    2011-12-01

Assessing the sensitivity of calibration results to different calibration criteria can be done through multi objective optimization that considers multiple calibration criteria. This analysis can be extended to uncertainty analysis by comparing the results of simulation of the model with parameter sets from many points along a Pareto Front. In this study we employ multi-objective optimization in order to understand which parameter values should be used for flow parameters of a SWAT model (Soil and Water Assessment Tool) designed to simulate flow in the Cannonsville Reservoir in upstate New York. The comprehensive analysis procedure encapsulates identification of suitable objectives, analysis of trade-offs obtained through multi-objective optimization, and the impact of the trade-offs on uncertainty. Examples of multiple criteria can include a) quality of the fit in different seasons, b) quality of the fit for high flow events and for low flow events, c) quality of the fit for different constituents (e.g. water versus nutrients). Many distributed watershed models are computationally expensive and include a large number of parameters that are to be calibrated. Efficient optimization algorithms are hence needed to find good solutions to multi-criteria calibration problems in a feasible amount of time. We apply a new algorithm called Gap Optimized Multi-Objective Optimization using Response Surfaces (GOMORS) for efficient multi-criteria optimization of the Cannonsville SWAT watershed calibration problem. GOMORS is a stochastic optimization method which makes use of Radial Basis Functions for approximation of the computationally expensive objectives. GOMORS performance is also compared against other multi-objective algorithms, ParEGO and NSGA-II. ParEGO is a kriging based efficient multi-objective optimization algorithm, whereas NSGA-II is a well-known multi-objective evolutionary optimization algorithm. GOMORS is more efficient than both ParEGO and NSGA-II in providing ...

  7. Efficiency of rejection-free methods for dynamic Monte Carlo studies of off-lattice interacting particles

    KAUST Repository

    Guerra, Marta L.

    2009-02-23

We calculate the efficiency of a rejection-free dynamic Monte Carlo method for d-dimensional off-lattice homogeneous particles interacting through a repulsive power-law potential r^{-p}. Theoretically we find the algorithmic efficiency in the limit of low temperatures and/or high densities is asymptotically proportional to ρ^{(p+2)/2} T^{-d/2} with the particle density ρ and the temperature T. Dynamic Monte Carlo simulations are performed in one-, two-, and three-dimensional systems with different powers p, and the results agree with the theoretical predictions. © 2009 The American Physical Society.

  8. Efficient heterogeneous execution of Monte Carlo shielding calculations on a Beowulf cluster.

    Science.gov (United States)

    Dewar, David; Hulse, Paul; Cooper, Andrew; Smith, Nigel

    2005-01-01

Recent work has been done in using a high-performance 'Beowulf' cluster computer system for the efficient distribution of Monte Carlo shielding calculations. This has enabled the rapid solution of complex shielding problems at low cost and with greater modularity and scalability than traditional platforms. The work has shown that a simple approach to distributing the workload is as efficient as using more traditional techniques such as PVM (Parallel Virtual Machine). In addition, when used in an operational setting this technique is fairer with the use of resources than traditional methods, in that it does not tie up a single computing resource but instead shares the capacity with other tasks. These developments in computing technology have enabled shielding problems to be solved that would have taken an unacceptably long time to run on traditional platforms. This paper discusses the BNFL Beowulf cluster and a number of tests that have recently been run to demonstrate the efficiency of the asynchronous technique in running the MCBEND program. The BNFL Beowulf currently consists of 84 standard PCs running RedHat Linux. Current performance of the machine has been estimated to be between 40 and 100 Gflop s⁻¹. When the whole system is employed on one problem up to four million particles can be tracked per second. There are plans to review its size in line with future business needs.

  9. Efficiency of rejection-free dynamic Monte Carlo methods for homogeneous spin models, hard disk systems, and hard sphere systems.

    Science.gov (United States)

    Watanabe, Hiroshi; Yukawa, Satoshi; Novotny, M A; Ito, Nobuyasu

    2006-08-01

We construct asymptotic arguments for the relative efficiency of rejection-free Monte Carlo (MC) methods compared to the standard MC method. We find that the efficiency is proportional to exp(const·β) in the Ising, √β in the classical XY, and β in the classical Heisenberg spin systems with inverse temperature β, regardless of the dimension. The efficiency in hard particle systems is also obtained, and found to be proportional to (ρ_cp − ρ)^{-d} with the closest packing density ρ_cp, density ρ, and dimension d of the systems. We construct and implement a rejection-free Monte Carlo (RFMC) method for the hard-disk system. The RFMC has a greater computational efficiency at high densities, and the density dependence of the efficiency is as predicted by our arguments.

  10. Increasing innovation in home energy efficiency: Monte Carlo simulation of potential improvements

    Energy Technology Data Exchange (ETDEWEB)

    Soratana, Kullapa; Marriott, Joe [Civil and Environmental Engineering Department, University of Pittsburgh, 949 Benedum Hall, 3700 O' Hara Street, Pittsburgh, PA 15261 (United States)

    2010-06-15

Despite the enormous potential for savings, there is little penetration of market-based solutions in the residential energy efficiency market. We hypothesize that there is a failure in the residential efficiency improvement market: due to lack of customer knowledge and capital to invest in improvements, there are unrecovered savings. In this paper, we model a means of extracting profit from those unrecovered energy savings with a market-based residential energy services company, or RESCO. We use a Monte Carlo simulation of the cost and performance of various improvements along with a hypothetical business model to derive general information about the financial viability of these companies. Despite the large amount of energy savings potential, we find that an average contract length with residential customers needs to be nearly 35 years to recoup the cost of the improvements. However, our modeling of an installer knowledge parameter indicates that experience plays a large part in minimizing the time to profitability for each home. Large numbers of inexperienced workers driven by government investment in this area could result in the installation of improvements with long payback periods, whereas a free market might eliminate companies making poor decisions. (author)
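A stripped-down version of such a simulation is easy to sketch; the uniform cost and savings ranges below are invented placeholders for the paper's improvement-performance distributions:

```python
import random

def simulate_payback(cost_range, annual_saving_range, n=10_000, seed=1):
    """Monte Carlo sample of simple payback periods (years) for a home
    efficiency improvement: draw an installed cost and an annual energy
    saving, then divide. A fuller model would also discount future
    savings and include installer-experience effects."""
    rng = random.Random(seed)
    return [rng.uniform(*cost_range) / rng.uniform(*annual_saving_range)
            for _ in range(n)]

paybacks = simulate_payback((5_000, 15_000), (300, 700))
mean_years = sum(paybacks) / len(paybacks)
```

The spread of the sampled payback periods, not just their mean, is what determines whether a fixed-length contract recovers the investment.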

  11. A new NaI(Tl) four-detector layout for field contamination assessment using artificial neural networks and the Monte Carlo method for system calibration

    Energy Technology Data Exchange (ETDEWEB)

    Moreira, M.C.F., E-mail: marcos@ird.gov.b [Universidade Federal do Rio de Janeiro, COPPE, Programa de Engenharia Nuclear, Laboratorio de Monitoracao de Processos (Federal University of Rio de Janeiro, COPPE, Nuclear Engineering Program, Process Monitoring Laboratory), P.O. Box 68509, 21941-972 Rio de Janeiro (Brazil); Instituto de Radioprotecao e Dosimetria, CNEN/IRD (Radiation Protection and Dosimetry Institute, CNEN/IRD), Av. Salvador Allende s/no, P.O. Box 37750, 22780-160 Rio de Janeiro (Brazil); Conti, C.C. [Instituto de Radioprotecao e Dosimetria, CNEN/IRD (Radiation Protection and Dosimetry Institute, CNEN/IRD), Av. Salvador Allende s/no, P.O. Box 37750, 22780-160 Rio de Janeiro (Brazil); Schirru, R. [Universidade Federal do Rio de Janeiro, COPPE, Programa de Engenharia Nuclear, Laboratorio de Monitoracao de Processos (Federal University of Rio de Janeiro, COPPE, Nuclear Engineering Program, Process Monitoring Laboratory), P.O. Box 68509, 21941-972 Rio de Janeiro (Brazil)

    2010-09-21

An NaI(Tl) multidetector layout combined with the use of Monte Carlo (MC) calculations and artificial neural networks (ANNs) is proposed to assess the radioactive contamination of urban and semi-urban environment surfaces. The study case was a very simple urban environment: a model street composed of a wall on either side and the ground surface. A layout of four NaI(Tl) detectors was used, and the data corresponding to the response of the detectors were obtained by the Monte Carlo method. Two additional data sets with random values for the contamination and for the detectors' response were also produced to test the ANNs. For this work, 18 feedforward topologies with backpropagation learning algorithm ANNs were chosen and trained. The results showed that some trained ANNs were able to accurately predict the contamination on the three urban surfaces when submitted to values within the training range. Other results showed that generalization outside the training range of values could not be achieved. The use of Monte Carlo calculations in combination with ANNs has been proven to be a powerful tool to perform detection calibration for highly complicated detection geometries.

  12. A class of Monte-Carlo-based statistical algorithms for efficient detection of repolarization alternans.

    Science.gov (United States)

    Iravanian, Shahriar; Kanu, Uche B; Christini, David J

    2012-07-01

    Cardiac repolarization alternans is an electrophysiologic condition identified by a beat-to-beat fluctuation in action potential waveform. It has been mechanistically linked to instances of T-wave alternans, a clinically defined ECG alternation in T-wave morphology, and associated with the onset of cardiac reentry and sudden cardiac death. Many alternans detection algorithms have been proposed in the past, but the majority have been designed specifically for use with T-wave alternans. Action potential duration (APD) signals obtained from experiments (especially those derived from optical mapping) possess unique characteristics, which requires the development and use of a more appropriate alternans detection method. In this paper, we present a new class of algorithms, based on the Monte Carlo method, for the detection and quantitative measurement of alternans. Specifically, we derive a set of algorithms (one an analytical and more efficient version of the other) and compare its performance with the standard spectral method and the generalized likelihood ratio test algorithm using synthetic APD sequences and optical mapping data obtained from an alternans control experiment. We demonstrate the benefits of the new algorithm in the presence of Gaussian and Laplacian noise and frame-shift errors. The proposed algorithms are well suited for experimental applications, and furthermore, have low complexity and are implementable using fixed-point arithmetic, enabling potential use with implantable cardiac devices.
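For flavor, a generic Monte Carlo surrogate test for alternans might look like the following sketch (this is not the paper's algorithm; the statistic and the synthetic APD data are illustrative):

```python
import random

def alternans_pvalue(apd, n_surrogates=2000, seed=7):
    """Monte Carlo surrogate-data test for beat-to-beat alternans.

    The statistic is the magnitude of the alternating component of the
    APD sequence; its null distribution is built by shuffling the beat
    order, which destroys any even/odd pairing while keeping the values.
    """
    def alt_stat(seq):
        return abs(sum(((-1) ** i) * v for i, v in enumerate(seq)) / len(seq))

    rng = random.Random(seed)
    observed = alt_stat(apd)
    seq = list(apd)
    hits = 0
    for _ in range(n_surrogates):
        rng.shuffle(seq)
        if alt_stat(seq) >= observed:
            hits += 1
    return (hits + 1) / (n_surrogates + 1)

# Synthetic APD sequence (ms): 10 ms peak-to-peak alternans plus noise
apd = [200 + (5 if i % 2 else -5) + random.Random(i).gauss(0, 1)
       for i in range(40)]
```

A small p-value indicates that the observed alternation is unlikely under random beat ordering.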

  13. An Efficient Interpolation Technique for Jump Proposals in Reversible-Jump Markov Chain Monte Carlo Calculations

    CERN Document Server

    Farr, Will M

    2011-01-01

    Selection among alternative theoretical models given an observed data set is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, but cannot retain a memory of the favored locations in more than one parameter space at a time. Thus, a naive jump between parameter spaces is unlikely to be accepted in the MCMC algorithm and convergence is correspondingly slow. Here we demonstrate an interpolation technique that uses samples from single-model MCMCs to propose inter-model jumps from an approximation to the single-model posterior of the target parameter space. The interpolation technique, based on a kD-tree data structure, is adaptive and efficient in arbitrary dimensions. We show that our technique leads to dramatically improved convergence over naive jumps in an RJMCMC, and compare it ...

  14. A fast, primary-interaction Monte Carlo methodology for determination of total efficiency of cylindrical scintillation gamma-ray detectors

    Directory of Open Access Journals (Sweden)

    Rehman Shakeel U.

    2009-01-01

A primary-interaction based Monte Carlo algorithm has been developed for determination of the total efficiency of cylindrical scintillation γ-ray detectors. This methodology has been implemented in a Matlab based computer program, BPIMC. For point isotropic sources at axial locations with respect to the detector axis, excellent agreement has been found between the predictions of the BPIMC code and the corresponding results obtained by hybrid Monte Carlo as well as by experimental measurements over a wide range of γ-ray energy values. For off-axis located point sources, the comparison of the BPIMC predictions with the corresponding results obtained by direct calculations as well as by conventional Monte Carlo schemes shows good agreement, validating the proposed algorithm. Using the BPIMC program, the energy dependent detector efficiency has been found to approach an asymptotic profile by increasing either the thickness or the diameter of the scintillator while keeping the other fixed. The variation of the energy dependent total efficiency of a 3″ × 3″ NaI(Tl) scintillator with axial distance has been studied using the BPIMC code. About two orders of magnitude change in detector efficiency has been observed for zero to 50 cm variation in the axial distance. For small values of axial separation, a similar large variation has also been observed in the total efficiency for ¹³⁷Cs as well as ⁶⁰Co sources by increasing the axial offset from zero to 50 cm.
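The underlying geometry lends itself to a compact analogue Monte Carlo sketch (a generic illustration of total efficiency for an on-axis point source, not the BPIMC algorithm): sample isotropic emission directions, find the chord length L through the cylinder, and average the interaction probability 1 − exp(−μL).

```python
import math
import random

def total_efficiency(mu, radius, height, dist, n=200_000, seed=3):
    """Analogue MC estimate of the total efficiency of a cylindrical
    detector for a point source on its axis.

    mu             -- linear attenuation coefficient (1/cm)
    radius, height -- detector dimensions (cm)
    dist           -- source to front-face distance (cm)
    """
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        cos_t = rng.uniform(-1.0, 1.0)        # isotropic emission
        if cos_t <= 0.0:
            continue                          # emitted away from detector
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        if sin_t * dist >= cos_t * radius:
            continue                          # misses the front face
        t_in = dist / cos_t                   # entry at the front face
        t_out = (dist + height) / cos_t       # exit via the back face ...
        if sin_t > 0.0:
            t_out = min(t_out, radius / sin_t)  # ... or via the side
        path = t_out - t_in
        acc += 1.0 - math.exp(-mu * path)     # interaction probability
    return acc / n
```

In the limit of large μ the result reduces to the solid-angle fraction subtended by the front face, which is a useful sanity check.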

  15. An efficient surrogate-based simulation-optimization method for calibrating a regional MODFLOW model

    Science.gov (United States)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.

    2017-01-01

The simulation-optimization method entails a large number of model simulations, which is computationally intensive or even prohibitive if the model simulation is extremely time-consuming. Statistical models have been examined as surrogates of the high-fidelity physical model during the simulation-optimization process to tackle this problem. Among them, Multivariate Adaptive Regression Splines (MARS), a non-parametric adaptive regression method, is superior in overcoming problems of high dimensions and discontinuities of the data. Furthermore, the stability and accuracy of the MARS model can be improved by bootstrap aggregating methods, namely, bagging. In this paper, the Bagging MARS (BMARS) method is integrated into a surrogate-based simulation-optimization framework to calibrate a three-dimensional MODFLOW model, which is developed to simulate the groundwater flow in an arid hardrock-alluvium region in northwestern Oman. The physical MODFLOW model is surrogated by the statistical model developed using the BMARS algorithm. The surrogate model, which is fitted and validated using a training dataset generated by the physical model, can approximate solutions rapidly. An efficient Sobol' method is employed to calculate global sensitivities of head outputs to input parameters, which are used to analyze their importance for the model outputs spatiotemporally. Only sensitive parameters are included in the calibration process to further improve the computational efficiency. The normalized root mean square error (NRMSE) between measured and simulated heads at observation wells is used as the objective function to be minimized during optimization. The reasonable history match between the simulated and observed heads demonstrates the feasibility of this highly efficient calibration framework.
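Two ingredients of this framework can be sketched directly (illustrative code: a least-squares line stands in for the MARS base learner, and normalizing NRMSE by the observed range is an assumed convention):

```python
import math
import random

def nrmse(observed, simulated):
    """Normalized root mean square error between observed and simulated
    heads, the calibration objective to be minimized."""
    n = len(observed)
    rmse = math.sqrt(sum((o - s) ** 2 for o, s in zip(observed, simulated)) / n)
    return rmse / (max(observed) - min(observed))

def bagged_surrogate(xs, ys, n_models=25, seed=5):
    """Bootstrap-aggregated (bagged) surrogate in the spirit of BMARS:
    fit a base learner to each bootstrap resample and average the
    predictions. A least-squares line is used here as a stand-in."""
    rng = random.Random(seed)
    n = len(xs)
    models = []
    for _ in range(n_models):
        idx = [rng.randrange(n) for _ in range(n)]     # bootstrap resample
        bx = [xs[i] for i in idx]
        by = [ys[i] for i in idx]
        mx, my = sum(bx) / n, sum(by) / n
        var = sum((x - mx) ** 2 for x in bx) or 1e-12  # guard degenerate
        slope = sum((x - mx) * (y - my) for x, y in zip(bx, by)) / var
        models.append((slope, my - slope * mx))
    return lambda x: sum(a * x + b for a, b in models) / n_models
```

Averaging over resamples stabilizes the surrogate, which is what makes it safe to query many times per expensive model run.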

  16. Thermodynamics of long supercoiled molecules: insights from highly efficient Monte Carlo simulations.

    Science.gov (United States)

    Lepage, Thibaut; Képès, François; Junier, Ivan

    2015-07-01

Supercoiled DNA polymer models for which the torsional energy depends on the total twist of molecules (Tw) are a priori well suited for thermodynamic analysis of long molecules. So far, nevertheless, the exact determination of Tw in these models has been based on a computation of the writhe of the molecules (Wr) by exploiting the conservation of the linking number, Lk = Tw + Wr, which reflects topological constraints coming from the helical nature of DNA. Because Wr is equal to the number of times the main axis of a DNA molecule winds around itself, current Monte Carlo algorithms have a quadratic time complexity, O(L²), with respect to the contour length (L) of the molecules. Here, we present an efficient method to compute Tw exactly, leading in principle to algorithms with a linear complexity, which in practice is O(L^{1.2}). Specifically, we use a discrete wormlike chain that includes the explicit double-helix structure of DNA and where the linking number is conserved by continuously preventing the generation of twist between any two consecutive cylinders of the discretized chain. As an application, we show that long (up to 21 kbp) linear molecules stretched by mechanical forces akin to magnetic tweezers contain, in the buckling regime, multiple and branched plectonemes that often coexist with curls and helices, and whose length and number are in good agreement with experiments. By attaching the ends of the molecules to a reservoir of twists with which these can exchange helix turns, we also show how to compute the torques in these models. As an example, we report values that are in good agreement with experiments and that concern the longest molecules that have been studied so far (16 kbp).

  17. Calibrated multi-subband Monte Carlo modeling of tunnel-FETs in silicon and III-V channel materials

    Science.gov (United States)

    Revelant, A.; Palestri, P.; Osgnach, P.; Selmi, L.

    2013-10-01

    We present a semiclassical model for Tunnel-FET (TFET) devices capable to describe band-to-band tunneling (BtBT) as well as far from equilibrium transport of the generated carriers. BtBT generation is implemented as an add-on into an existing multi-subband Monte Carlo (MSMC) transport simulator that accounts as well for the effects typical to alternative channel materials and high-κ dielectrics. A simple but accurate correction for the calculation of the BtBT generation rate to account for carrier confinement in the subbands is proposed and verified by comparison with full 2D quantum calculation.

  18. Determination of relative efficiency of a detector using Monte Carlo method; Determinacao da eficiencia relativa de um detector usando metodo de Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Medeiros, M.P.C.; Rebello, W.F., E-mail: eng.cavaliere@ime.eb.br, E-mail: rebello@ime.eb.br [Instituto Militar de Engenharia (IME), Rio de Janeiro, RJ (Brazil). Secao de Engenharia Nuclear; Lopes, J.M.; Silva, A.X., E-mail: marqueslopez@yahoo.com.br, E-mail: ademir@nuclear.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear

    2015-07-01

High-purity germanium detectors (HPGe) are mandatory tools for spectrometry because of their excellent energy resolution. The efficiency of such detectors, quoted in the list of specifications by the manufacturer, frequently refers to the relative full-energy peak efficiency, related to the absolute full-energy peak efficiency of a 7.6 cm x 7.6 cm (diameter x height) NaI(Tl) crystal, based on the 1.33 MeV peak of a ⁶⁰Co source positioned 25 cm from the detector. In this study, we used the MCNPX code to simulate an HPGe detector (Canberra GC3020), from the Real-Time Neutrongraphy Laboratory of UFRJ, to survey the spectrum of a ⁶⁰Co source located 25 cm from the detector in order to calculate and confirm the efficiency declared by the manufacturer. Agreement between experimental and simulated data was achieved. The model under development will be used for calculation and comparison purposes with the detector calibration curve from the software Genie2000™, also serving as a reference for future studies. (author)
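The quantity being verified can be written out directly (a sketch; the 1.2 × 10⁻³ NaI(Tl) reference efficiency is the conventional value for this geometry, and all numbers are illustrative):

```python
def relative_efficiency(net_peak_counts, live_time_s, activity_bq,
                        branching=0.9998, nai_reference=1.2e-3):
    """Relative efficiency (%) of an HPGe detector from a 60Co spectrum
    taken at 25 cm: the absolute 1332.5 keV full-energy-peak efficiency
    divided by the conventional 3" x 3" NaI(Tl) reference value
    (commonly taken as 1.2e-3 for this source-detector geometry)."""
    eps_abs = net_peak_counts / (live_time_s * activity_bq * branching)
    return 100.0 * eps_abs / nai_reference
```

For example, 36 000 net counts in the 1332.5 keV peak from a 100 kBq source counted for 1000 s corresponds to a 30% relative efficiency (taking the branching ratio as 1 for simplicity).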

  19. An efficient method of wavelength interval selection based on random frog for multivariate spectral calibration

    Science.gov (United States)

    Yun, Yong-Huan; Li, Hong-Dong; Wood, Leslie R. E.; Fan, Wei; Wang, Jia-Jun; Cao, Dong-Sheng; Xu, Qing-Song; Liang, Yi-Zeng

    2013-07-01

Wavelength selection is a critical step for producing better prediction performance when applied to spectral data. Considering the fact that vibrational and rotational spectra have continuous spectral bands, we propose a novel method of wavelength interval selection based on random frog, called interval random frog (iRF). To obtain all the possible continuous intervals, the spectra are first divided into intervals by moving a window of fixed width over the whole spectrum. These overlapping intervals are ranked by applying random frog coupled with PLS, and the optimal ones are chosen. This method has been applied to two near-infrared spectral datasets, displaying higher efficiency in wavelength interval selection than other methods. The source code of iRF can be freely downloaded for academic research at the website: http://code.google.com/p/multivariate-calibration/downloads/list.
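The first step, enumerating every candidate interval with a moving window, can be sketched in a few lines (illustrative; the subsequent ranking by random frog coupled with PLS is not shown):

```python
def candidate_intervals(n_wavelengths, width):
    """All overlapping fixed-width wavelength intervals obtained by
    sliding a window one point at a time across the spectrum. Each
    (start, stop) pair indexes a contiguous block of wavelengths,
    so every possible continuous interval of that width is generated."""
    return [(start, start + width)
            for start in range(n_wavelengths - width + 1)]

# Toy spectrum of 10 wavelengths, window width 4
intervals = candidate_intervals(10, 4)
```

Each interval would then be scored as a unit, so physically adjacent variables are kept or discarded together.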

  20. CdTe detector efficiency calibration using thick targets of pure and stable compounds

    Science.gov (United States)

    Chaves, P. C.; Taborda, A.; Reis, M. A.

    2012-02-01

Quantitative PIXE measurements require perfectly calibrated set-ups. Cooled CdTe detectors have good efficiency for energies above those covered by Si(Li) detectors and open up the possibility of studying K X-ray lines instead of L X-ray lines for medium and eventually heavy elements, which is an important advantage in various cases if only limited-resolution systems are available in the low energy range. In this work we present and discuss spectra from a CdTe semiconductor detector covering the energy region from Cu (Kα1 = 8.047 keV) to U (Kα1 = 98.439 keV). Pure thick samples were irradiated with proton beams at the ITN 3.0 MV Tandetron accelerator in the High Resolution High Energy PIXE set-up. Results and their application to the study of a Portuguese Ossa Morena region Dark Stone sample are presented in this work.

  1. Calibrating Self-Reported Measures of Maternal Smoking in Pregnancy via Bioassays Using a Monte Carlo Approach

    Directory of Open Access Journals (Sweden)

    Lauren S. Wakschlag

    2009-06-01

Maternal smoking during pregnancy is a major public health problem that has been associated with numerous short- and long-term adverse health outcomes in offspring. However, characterizing smoking exposure during pregnancy precisely has been rather difficult: self-reported measures of smoking often suffer from recall bias, deliberate misreporting, and selective non-disclosure, while single bioassay measures of nicotine metabolites only reflect recent smoking history and cannot capture the fluctuating and complex patterns of varying exposure of the fetus. Recently, Dukic et al. [1] proposed a statistical method for combining information from both sources in order to increase the precision of the exposure measurement and the power to detect more subtle effects of smoking. In this paper, we extend the Dukic et al. [1] method to incorporate individual variation of the metabolic parameters (such as clearance rates) into the calibration model of smoking exposure during pregnancy. We apply the new method to the Family Health and Development Project (FHDP), a small convenience sample of 96 predominantly working-class white pregnant women oversampled for smoking. We find that, on average, misreporters smoke 7.5 cigarettes more than they report, with about one third underreporting by 1.5, one third by about 6.5, and one third by 8.5 cigarettes. Partly due to the limited demographic heterogeneity in the FHDP sample, the results are similar to those obtained by the deterministic calibration model, whose adjustments were slightly lower (by 0.5 cigarettes on average). The new results are also, as expected, less sensitive to the assumed values of cotinine half-life.

  2. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution.

    Science.gov (United States)

    Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr

    2012-01-01

    Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap based algorithm was used to simulate the probability distribution of the efficiency gain estimates and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed.
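
    The bootstrap shortest-interval step can be illustrated independently of the transport code. This is a sketch with made-up heavy-tailed toy data standing in for per-history dose differences; the `gain` definition and all numbers are assumptions, not the authors' geometry or code.

```python
import numpy as np

rng = np.random.default_rng(0)

def shortest_interval(samples, level=0.95):
    """Shortest interval containing a fraction `level` of the bootstrap samples."""
    s = np.sort(np.asarray(samples))
    n = len(s)
    k = int(np.ceil(level * n))             # points each candidate interval must cover
    widths = s[k - 1:] - s[:n - k + 1]      # width of every contiguous k-point window
    i = np.argmin(widths)
    return s[i], s[i + k - 1]

# Toy stand-in for scored dose differences: heavy-tailed, mimicking the rare
# large-weight photons discussed in the abstract.
diff = rng.standard_t(df=3, size=5000)

def gain(d):
    # Toy efficiency gain ~ a fixed reference variance over the estimated variance.
    return 4.0 / np.var(d)

boot = np.array([gain(rng.choice(diff, size=diff.size, replace=True))
                 for _ in range(2000)])
lo, hi = shortest_interval(boot)
```

    The shortest interval is preferred over equal-tail percentiles here because the gain distribution is skewed by the heavy-tailed dose differences.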

  3. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution

    Energy Technology Data Exchange (ETDEWEB)

    Mukhopadhyay, Nitai D. [Department of Biostatistics, Virginia Commonwealth University, Richmond, VA 23298 (United States); Sampson, Andrew J. [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, VA 23298 (United States); Deniz, Daniel; Alm Carlsson, Gudrun [Department of Radiation Physics, Faculty of Health Sciences, Linkoeping University, SE 581 85 (Sweden); Williamson, Jeffrey [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, VA 23298 (United States); Malusek, Alexandr, E-mail: malusek@ujf.cas.cz [Department of Radiation Physics, Faculty of Health Sciences, Linkoeping University, SE 581 85 (Sweden); Department of Radiation Dosimetry, Nuclear Physics Institute AS CR v.v.i., Na Truhlarce 39/64, 180 86 Prague (Czech Republic)

    2012-01-15

    Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap based algorithm was used to simulate the probability distribution of the efficiency gain estimates and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed.

  4. Efficient Monte Carlo evaluation of resampling-based hypothesis tests with applications to genetic epidemiology.

    Science.gov (United States)

    Fung, Wing K; Yu, Kexin; Yang, Yingrui; Zhou, Ji-Yuan

    2016-08-08

    Monte Carlo evaluation of resampling-based tests is often conducted in statistical analysis. However, this procedure is generally computationally intensive. The pooling resampling-based method has been developed to reduce the computational burden, but its validity had not previously been studied. In this article, we first investigate the asymptotic properties of the pooling resampling-based method and then propose a novel Monte Carlo evaluation procedure, namely the n-times pooling resampling-based method. Theorems as well as simulations show that the proposed method can give smaller or comparable root mean squared errors and bias with much less computing time, and can thus be strongly recommended, especially for evaluating highly computationally intensive hypothesis testing procedures in genetic epidemiology.
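
    One natural reading of "pooling" can be sketched as follows: instead of judging each simulated dataset against its own private set of permutations, the resampling statistics of all datasets are merged into one shared null reference. This is a hedged illustration of that idea only, not the authors' exact algorithm; all sizes and the two-sample mean-difference statistic are toy choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def perm_stats(x, y, n_perm):
    """Null statistics (mean differences under random label permutation)."""
    z = np.concatenate([x, y])
    out = np.empty(n_perm)
    for b in range(n_perm):
        rng.shuffle(z)
        out[b] = z[:x.size].mean() - z[x.size:].mean()
    return out

def mc_size_pooled(n_datasets=200, n=20, n_perm=50, alpha=0.05):
    """Monte Carlo size estimate of a permutation test in which the resampling
    statistics of all simulated null datasets form a single pooled reference."""
    obs, pool = [], []
    for _ in range(n_datasets):
        x, y = rng.normal(size=n), rng.normal(size=n)
        obs.append(x.mean() - y.mean())
        pool.append(perm_stats(x, y, n_perm))
    pool = np.abs(np.concatenate(pool))                  # one shared pool
    pvals = [(pool >= abs(t)).mean() for t in obs]       # pooled p-values
    return np.mean([p < alpha for p in pvals])

rate = mc_size_pooled()
```

    With the pooled reference, each dataset contributes only `n_perm` permutations yet is compared against `n_datasets * n_perm` null statistics, which is the source of the computational savings discussed above.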

  5. Nonequilibrium candidate Monte Carlo is an efficient tool for equilibrium simulation

    Energy Technology Data Exchange (ETDEWEB)

    Nilmeier, J. P.; Crooks, G. E.; Minh, D. D. L.; Chodera, J. D.

    2011-10-24

    Metropolis Monte Carlo simulation is a powerful tool for studying the equilibrium properties of matter. In complex condensed-phase systems, however, it is difficult to design Monte Carlo moves with high acceptance probabilities that also rapidly sample uncorrelated configurations. Here, we introduce a new class of moves based on nonequilibrium dynamics: candidate configurations are generated through a finite-time process in which a system is actively driven out of equilibrium, and accepted with criteria that preserve the equilibrium distribution. The acceptance rule is similar to the Metropolis acceptance probability, but related to the nonequilibrium work rather than the instantaneous energy difference. Our method is applicable to sampling from either a single thermodynamic state or a mixture of thermodynamic states, and allows both coordinates and thermodynamic parameters to be driven in nonequilibrium proposals. While generating finite-time switching trajectories incurs an additional cost, driving some degrees of freedom while allowing others to evolve naturally can lead to large enhancements in acceptance probabilities, greatly reducing structural correlation times. Using nonequilibrium driven processes vastly expands the repertoire of useful Monte Carlo proposals in simulations of dense solvated systems.
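
    The shape of the work-based acceptance rule can be shown on a deliberately degenerate toy: a multi-substep symmetric proposal with no relaxation dynamics between substeps, for which the accumulated work telescopes to U(y) - U(x) and the rule reduces exactly to Metropolis. This is a sketch of the acceptance criterion only, under that stated assumption, not an implementation of full NCMC with driven protocols.

```python
import numpy as np

rng = np.random.default_rng(3)

def U(x):          # harmonic potential; with beta = 1 the target is N(0, 1)
    return 0.5 * x * x

def ncmc_move(x, n_sub=5, step=0.5):
    """Candidate built from several driving substeps; accept on accumulated work.
    With no thermostatted relaxation between substeps (as here), the work
    telescopes to U(y) - U(x) and the rule reduces to ordinary Metropolis,
    so the equilibrium distribution is provably preserved."""
    y, work = x, 0.0
    for _ in range(n_sub):
        y_new = y + step * rng.standard_normal()
        work += U(y_new) - U(y)          # work performed by the driving
        y = y_new
    if work <= 0 or rng.random() < np.exp(-work):   # beta = 1
        return y
    return x

x, samples = 0.0, []
for _ in range(20000):
    x = ncmc_move(x)
    samples.append(x)
var = np.var(samples[2000:])             # should approach the target variance 1
```

    In real NCMC the substeps interleave parameter switches with propagation, and the work no longer telescopes; the acceptance rule above is unchanged, which is the point of the construction.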

  6. An efficient measurement-driven sequential Monte Carlo multi-Bernoulli filter for multi-target filtering

    Institute of Scientific and Technical Information of China (English)

    Tong-yang JIANG; Mei-qin LIU; Xie WANG; Sen-lin ZHANG

    2014-01-01

    We propose an efficient measurement-driven sequential Monte Carlo multi-Bernoulli (SMC-MB) filter for multi-target filtering in the presence of clutter and missing detections. The survival and birth measurements are distinguished from the original measurements using the gating technique. The survival measurements are then used to update both survival and birth targets, while the birth measurements are used to update only the birth targets. Since most clutter measurements do not participate in the update step, the computing time is reduced significantly. Simulation results demonstrate that the proposed approach improves the real-time performance without degrading filtering performance.

  7. Efficient implementation of the Monte Carlo method for lattice gauge theory calculations on the floating point systems FPS-164

    Energy Technology Data Exchange (ETDEWEB)

    Moriarty, K.J.M. (Royal Holloway Coll., Englefield Green (UK). Dept. of Mathematics); Blackshaw, J.E. (Floating Point Systems UK Ltd., Bracknell)

    1983-04-01

    The computer program calculates the average action per plaquette for SU(6)/Z_6 lattice gauge theory. By considering quantum field theory on a space-time lattice, the ultraviolet divergences of the theory are regulated through the finite lattice spacing. The continuum theory results can be obtained by a renormalization group procedure. Making use of the FPS Mathematics Library (MATHLIB), we are able to generate an efficient code for the Monte Carlo algorithm for lattice gauge theory calculations which compares favourably with the performance of the CDC 7600.

  8. Ge well detector calibration by means of a trial and error procedure using the dead layers as a unique parameter in a Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Courtine, Fabien; Pilleyre, Thierry; Sanzelle, Serge [Laboratoire de Physique Corpusculaire, IN2P3-CNRS, Universite Blaise Pascal, F-63177 Aubiere Cedex (France); Miallier, Didier [Laboratoire de Physique Corpusculaire, IN2P3-CNRS, Universite Blaise Pascal, F-63177 Aubiere Cedex (France)], E-mail: miallier@clermont.in2p3.fr

    2008-11-01

    The project aimed at modelling an HPGe well detector in order to predict its photon-counting efficiency by means of the Monte Carlo simulation code GEANT4. Although a qualitative and quantitative description of the crystal and housing was available, uncertainties were associated with the parameters controlling the detector response. This led to poor agreement between the efficiency calculated on the basis of nominal data and the actual efficiency measured experimentally with a {sup 137}Cs point source. It was therefore decided to improve the model by a trial and error parameterization. The distribution of the dead layers was adopted as the unique parameter, in order to explore the possibilities and pertinence of this choice. In the course of the work, it appeared necessary to allow the thickness of the dead layers to be non-uniform over a given surface. At the end of the process, the results showed that the approach was able to give a model adapted to practical application, with satisfactory precision in the calculated efficiency. The pattern of the 'dead layers' that was obtained is characterized by a variable thickness which seems to be physically relevant. It implicitly and partly accounts for effects that do not originate from actual dead layers, such as incomplete charge collection. Such effects, which are not easily accounted for explicitly, can in a first approximation be represented by 'dead layers'; this is an advantage of the parameterization that was adopted.
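
    The trial and error loop over a single free parameter can be sketched with a toy stand-in for the transport code. Here the GEANT4 simulation is replaced by a one-line attenuation model in which a dead layer of thickness t absorbs photons before the active volume; all constants are illustrative assumptions, not values from the paper.

```python
import numpy as np

MU = 0.30          # toy attenuation coefficient of the dead layer (1/mm), assumed
EPS_ACTIVE = 0.60  # toy intrinsic efficiency of the active crystal, assumed

def simulated_efficiency(t):
    """Stand-in for a full Monte Carlo run at dead-layer thickness t (mm)."""
    return EPS_ACTIVE * np.exp(-MU * t)

def fit_dead_layer(measured_eff, t_grid=np.linspace(0.0, 5.0, 5001)):
    """Trial and error: scan the single free parameter and keep the thickness
    whose simulated efficiency best matches the 137Cs point-source measurement."""
    err = np.abs(simulated_efficiency(t_grid) - measured_eff)
    return t_grid[np.argmin(err)]

t_best = fit_dead_layer(measured_eff=0.45)
```

    In the real procedure each trial value requires a full GEANT4 run, so the scan is coarse and guided by hand; the structure of the loop is the same.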

  9. Efficiency Calibration of LaBr3(Ce) γ Spectroscopy in Analyzing Radionuclides in Reactor Loop Water

    Institute of Scientific and Technical Information of China (English)

    CHEN; Xi-lin; QIN; Guo-xiu; GUO; Xiao-qing; CHEN; Yong-yong; MENG; Jun

    2013-01-01

    Monitoring the occurrence and radioactivity concentration of fission products in nuclear reactor loop water is important for evaluating safe reactor operation, preventing accidents, and protecting working personnel. Study on the efficiency calibration for a LaBr3(Ce) detector experimental

  10. On stochastic error and computational efficiency of the Markov Chain Monte Carlo method

    KAUST Repository

    Li, Jun

    2014-01-01

    In Markov Chain Monte Carlo (MCMC) simulations, thermal equilibrium quantities are estimated by ensemble averages over a sample set containing a large number of correlated samples. These samples are selected in accordance with the probability distribution function, known from the partition function of the equilibrium state. As the stochastic error of the simulation results is significant, it is desirable to understand the variance of the estimation by ensemble average, which depends on the sample size (i.e., the total number of samples in the set) and the sampling interval (i.e., the cycle number between two consecutive samples). Although large sample sizes reduce the variance, they increase the computational cost of the simulation. For a given CPU time, the sample size can be reduced greatly by increasing the sampling interval, while the corresponding increase in variance is negligible if the original sampling interval is very small. In this work, we report a few general rules that relate the variance to the sample size and the sampling interval. These results are observed and confirmed numerically. These variance rules are derived for the MCMC method but are also valid for correlated samples obtained using other Monte Carlo methods. The main contribution of this work includes the theoretical proof of these numerical observations and the set of assumptions that lead to them. © 2014 Global-Science Press.
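
    The thinning rule can be checked numerically on a stationary AR(1) process standing in for a correlated MCMC chain (a common surrogate; the parameters below are illustrative). For a fixed chain length, keeping only every k-th sample barely changes the variance of the mean estimator when the original sampling interval is much shorter than the correlation time.

```python
import numpy as np

rng = np.random.default_rng(4)

def ar1_chain(n, rho):
    """Stationary AR(1) surrogate for a correlated MCMC sample (unit variance)."""
    x = np.empty(n)
    x[0] = rng.standard_normal()
    noise = rng.standard_normal(n) * np.sqrt(1.0 - rho * rho)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + noise[t]
    return x

rho, n, k, reps = 0.9, 2000, 5, 200
full, thinned = [], []
for _ in range(reps):
    x = ar1_chain(n, rho)
    full.append(x.mean())            # mean over all n correlated samples
    thinned.append(x[::k].mean())    # mean over n/k samples, same chain budget
ratio = np.var(thinned) / np.var(full)   # close to 1: thinning costs little
```

    The correlation time of this chain is roughly 1/(1 - rho) = 10 cycles, so a sampling interval of k = 5 discards mostly redundant samples, in line with the rule stated in the abstract.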

  11. Efficiency Calibration of a Mini-Orange Type beta-Spectrometer by the $\\beta^{-}$-Spectrum of $^{90}$Sr

    CERN Document Server

    Kalinnikov, V G; Solnyshkin, A A; Sereeter, Z; Lebedev, N A; Chumin, V G; Ibrakhin, Ya S

    2002-01-01

    A method for the efficiency calibration of a mini-orange type beta-spectrometer by means of the continuous beta^{-}-spectrum of ^{90}Sr and the conversion electron spectrum of ^{207}Bi in the energy range from 500 to 2200 keV has been developed. In the experiment, typical SmCo_5 magnets (6A and 8A) were used. The accuracy of the efficiency determination was 5-10%.

  12. Calibrating and Controlling the Quantum Efficiency Distribution of Inhomogeneously Broadened Quantum Rods Using a Mirror Ball

    CERN Document Server

    Lunnemann, Per; van Dijk-Moes, Relinde J A; Pietra, Francesca; Vanmaekelbergh, Daniël; Koenderink, A Femius

    2013-01-01

    We demonstrate that a simple silver-coated ball lens can be used to accurately measure the entire distribution of radiative transition rates of quantum dot nanocrystals. This simple and cost-effective implementation of Drexhage's method, which uses nanometer-controlled optical mode density variations near a mirror, allows not only the extraction of calibrated ensemble-averaged rates but also, for the first time, quantification of the full inhomogeneous dispersion of radiative and non-radiative decay rates across thousands of nanocrystals. We apply the technique to novel ultra-stable CdSe/CdS dot-in-rod emitters, which are of large current interest due to their improved stability and reduced blinking. We retrieve a room-temperature ensemble-average quantum efficiency of 0.87 ± 0.08 at a mean lifetime around 20 ns. We confirm a log-normal distribution of decay rates as often assumed in the literature and we show that the rate distribution width, which amounts to about 30% of the mean decay rate, is strongly dependent on the l...

  13. CdTe detector efficiency calibration using thick targets of pure and stable compounds

    Energy Technology Data Exchange (ETDEWEB)

    Chaves, P.C.; Taborda, A., E-mail: ataborda@itn.pt; Reis, M.A.

    2012-02-15

    Quantitative PIXE measurements require perfectly calibrated set-ups. Cooled CdTe detectors have good efficiency at energies above those covered by Si(Li) detectors and open up the possibility of studying K X-ray lines instead of L X-ray lines for medium and even heavy elements, an important advantage in many cases where only limited-resolution systems are available in the low-energy range. In this work we present and discuss spectra from a CdTe semiconductor detector covering the energy region from Cu (K{sub {alpha}1} = 8.047 keV) to U (K{sub {alpha}1} = 98.439 keV). Pure thick samples were irradiated with proton beams at the ITN 3.0 MV Tandetron accelerator in the High Resolution High Energy PIXE set-up. Results, together with their application to the study of a Portuguese Ossa Morena region Dark Stone sample, are presented in this work.

  14. Study of carrier dynamics and radiative efficiency in InGaN/GaN LEDs with Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Lu, I. Lin; Wu, Yuh-Renn [Institute of Photonics and Optoelectronics and Department of Electrical Engineering, National Taiwan University, Taipei (China); Singh, Jasprit [Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI (United States)

    2011-07-15

    In this paper, we apply the Monte Carlo method to study carrier dynamics in InGaN quantum wells. Vertical and lateral transport and their impact on device radiative efficiency are studied for different In compositions, dislocation densities, temperatures, and carrier densities. Our results show that non-radiative recombination caused by defect trapping plays the dominant role at higher indium compositions, and this limits the internal quantum efficiency (IQE). For lower indium compositions, carrier leakage plays some role at mid to high injection, and leakage is strong at very high carrier densities in all cases. Our results suggest that reducing the trap density and the QCSE are still the key factors for improving the IQE. The paper examines the relative roles of leakage and non-radiative processes on IQE. (copyright 2011 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  15. Cross Calibration of Omnidirectional Orbital Neutron Detectors of Lunar Prospector (LP) and Lunar Exploration Neutron Detector (LEND) by Monte Carlo Simulation

    Science.gov (United States)

    Murray, J.; SU, J. J.; Sagdeev, R.; Chin, G.

    2014-12-01

    Introduction: Monte Carlo (MC) simulations have been used to investigate neutron production and leakage from the lunar surface to assess the composition of the lunar soil [1-3]. Orbital measurements of lunar neutron flux have been made by the Lunar Prospector Neutron Spectrometer (LPNS) [4] of the Lunar Prospector mission and the Lunar Exploration Neutron Detector (LEND) [5] of the Lunar Reconnaissance Orbiter mission. While both are cylindrical helium-3 detectors, LEND's SETN (Sensor EpiThermal Neutrons) instrument is shorter, with double the helium-3 pressure of LPNS. The two instruments therefore have different angular sensitivities and neutron detection efficiencies. Furthermore, the Lunar Prospector's spin-stabilized design makes its detection efficiency latitude-dependent, while the SETN instrument faces permanently downward toward the lunar surface. We use the GEANT4 Monte Carlo simulation code [6] to investigate the leakage lunar neutron energy spectrum, which follows a power law of the form E^-0.9 in the epithermal energy range, and the signals detected by LPNS and SETN in the LP and LRO mission epochs, respectively. Using the lunar neutron flux reconstructed for the LPNS epoch, we calculate the signal that would have been observed by SETN at that time. The subsequent deviation from the actual signal observed during the LEND epoch is due to the significantly higher intensity of Galactic Cosmic Rays during the anomalous Solar Minimum of 2009-2010. References: [1] W. C. Feldman, et al., (1998) Science Vol. 281 no. 5382 pp. 1496-1500. [2] Gasnault, O., et al., (2000) J. Geophys. Res., 105(E2), 4263-4271. [3] Little, R. C., et al. (2003), J. Geophys. Res., 108(E5), 5046. [4] W. C. Feldman, et al., (1999) Nucl. Inst. and Methods in Phys. Res. A 422. [5] M. L. Litvak, et al., (2012) J. Geophys. Res. 117, E00H32. [6] J. Allison, et al., (2006) IEEE Trans. on Nucl. Sci., Vol 53, No 1.
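
    A power-law spectrum of the quoted form can be sampled directly by inverse-transform sampling, which is how such a source term is typically fed into a detector simulation. The energy bounds below are illustrative placeholders, not mission values.

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_epithermal(n, e_min=0.5, e_max=1.0e5, gamma=0.9):
    """Inverse-CDF sampling of a leakage spectrum dN/dE ~ E^-gamma (gamma != 1).
    Energies in eV; bounds are illustrative assumptions."""
    a = 1.0 - gamma                  # exponent of the integrated spectrum
    u = rng.random(n)
    return (e_min**a + u * (e_max**a - e_min**a)) ** (1.0 / a)

e = sample_epithermal(200000)
```

    The derivation is one line: the CDF of E^-gamma on [e_min, e_max] is proportional to E^(1-gamma) - e_min^(1-gamma), and solving CDF(E) = u gives the expression above.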

  16. An Efficient Monte Carlo Method for Modeling Radiative Transfer in Protoplanetary Disks

    Science.gov (United States)

    Kim, Stacy

    2011-01-01

    Monte Carlo methods have been shown to be effective and versatile in modeling radiative transfer processes to calculate model temperature profiles for protoplanetary disks. Temperature profiles are important for connecting physical structure to observations and for understanding the conditions for planet formation and migration. However, certain areas of the disk are under-sampled, such as the optically thick disk interior, or are of particular interest, such as the snow line (where water vapor condenses into ice) and the area surrounding a protoplanet. To improve the sampling, photon packets can be preferentially scattered and reemitted toward the preferred locations, at the cost of weighting packet energies to conserve the average energy flux. Here I report on the weighting schemes developed, how they can be applied to various models, and how they affect simulation mechanics and results. We find that improvements in sampling do not always imply similar improvements in temperature accuracy and calculation speed.
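
    The weighting idea is ordinary importance sampling of emission directions and can be sketched in a few lines. This is a hedged toy (a binary "into the region of interest / elsewhere" choice with made-up probabilities), not the paper's weighting schemes.

```python
import numpy as np

rng = np.random.default_rng(6)

p_true, p_bias = 0.1, 0.5     # solid-angle fraction of the region vs. biased rate
n, e_packet = 100000, 1.0     # packet count and energy per packet (toy units)

toward = rng.random(n) < p_bias                    # biased direction choice
weight = np.where(toward, p_true / p_bias,         # weights restore the true
                  (1.0 - p_true) / (1.0 - p_bias)) # average energy flux
energy_roi = np.sum(e_packet * weight[toward])     # weighted tally in the region
expected = n * e_packet * p_true                   # unbiased expectation
```

    Five times as many packets land in the under-sampled region, while each carries a fifth of the energy, so the tally's expectation is unchanged and only its variance improves.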

  17. Leveraging Gibbs Ensemble Molecular Dynamics and Hybrid Monte Carlo/Molecular Dynamics for Efficient Study of Phase Equilibria.

    Science.gov (United States)

    Gartner, Thomas E; Epps, Thomas H; Jayaraman, Arthi

    2016-11-08

    We describe an extension of the Gibbs ensemble molecular dynamics (GEMD) method for studying phase equilibria. Our modifications to GEMD allow for direct control over particle transfer between phases and improve the method's numerical stability. Additionally, we found that the modified GEMD approach had advantages in computational efficiency in comparison to a hybrid Monte Carlo (MC)/MD Gibbs ensemble scheme in the context of the single component Lennard-Jones fluid. We note that this increase in computational efficiency does not compromise the close agreement of phase equilibrium results between the two methods. However, numerical instabilities in the GEMD scheme hamper GEMD's use near the critical point. We propose that the computationally efficient GEMD simulations can be used to map out the majority of the phase window, with hybrid MC/MD used as a follow up for conditions under which GEMD may be unstable (e.g., near-critical behavior). In this manner, we can capitalize on the contrasting strengths of these two methods to enable the efficient study of phase equilibria for systems that present challenges for a purely stochastic GEMC method, such as dense or low temperature systems, and/or those with complex molecular topologies.

  18. A backward Monte Carlo method for efficient computation of runaway probabilities in runaway electron simulation

    Science.gov (United States)

    Zhang, Guannan; Del-Castillo-Negrete, Diego

    2016-10-01

    Kinetic descriptions of runaway electrons (RE) are usually based on the bounce-averaged Fokker-Planck model that determines the PDFs of RE in the 2-dimensional momentum space. Despite the simplification involved, the Fokker-Planck equation can rarely be solved analytically, and direct numerical approaches (e.g., continuum and particle-based Monte Carlo (MC)) can be time consuming, especially in the computation of asymptotic-type observables including the runaway probability, the slowing-down and runaway mean times, and the energy limit probability. Here we present a novel backward MC approach to these problems based on backward stochastic differential equations (BSDEs). The BSDE model can simultaneously describe the PDF of RE and the runaway probabilities by means of the well-known Feynman-Kac theory. The key ingredient of the backward MC algorithm is to place all the particles in a runaway state and simulate them backward from the terminal time to the initial time. As such, our approach can provide much faster convergence than brute-force MC methods, which can significantly reduce the number of particles required to achieve a prescribed accuracy. Moreover, our algorithm can be parallelized as easily as a direct MC code, which paves the way for conducting large-scale RE simulations. This work is supported by DOE FES and ASCR under the Contract Numbers ERKJ320 and ERAT377.

  19. Efficient orbital storage and evaluation for quantum Monte Carlo simulations of solids

    Science.gov (United States)

    Esler, Kenneth

    2008-03-01

    Researchers have applied continuum quantum Monte Carlo methods to solids with great success, but thus far applications have been largely limited to crystals with simple geometry. In these simulations, three-dimensional cubic B-splines have proven to be a fast and accurate means of storing and evaluating electron orbitals. While B-splines require less memory than other spline interpolation schemes, modern cluster nodes often have insufficient memory to store the orbitals for more complex systems. We introduce three techniques, appropriate in different circumstances, to dramatically reduce the memory required for orbital storage, while retaining high accuracy: the generalized tiling of primitive-cell orbitals into a supercell of arbitrary shape, the use of nonuniform grids for localized orbitals, and the periodic replication of localized orbitals. We give examples for cubic boron nitride and wüstite (FeO), and show that these methods can reduce the memory used for orbital storage by more than two orders of magnitude. Finally, we introduce an open-source B-spline library to facilitate the incorporation of these methods into QMC simulation codes.
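
    The storage/evaluation scheme described above rests on the local support of cubic B-splines: any point is reconstructed from just four neighbouring coefficients per dimension. A minimal 1D illustration of uniform cubic B-spline evaluation (not the 3D tricubic code or the authors' library) follows; the indexing convention, with coefficient j centred at x = j, is an assumption of this sketch.

```python
import numpy as np

def bspline_eval(coefs, x):
    """Evaluate a uniform cubic B-spline with control coefficients `coefs`
    (knot spacing 1, coefficient j centred at x = j) at the points x."""
    x = np.asarray(x, dtype=float)
    i = np.floor(x).astype(int)          # index of the containing knot interval
    t = x - i                            # local coordinate in [0, 1)
    # The four cubic B-spline basis weights for the neighbouring coefficients
    w0 = (1 - t) ** 3 / 6
    w1 = (3 * t**3 - 6 * t**2 + 4) / 6
    w2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6
    w3 = t**3 / 6
    return (w0 * coefs[i - 1] + w1 * coefs[i] +
            w2 * coefs[i + 1] + w3 * coefs[i + 2])
```

    In 3D the same four-point stencil applies per axis (64 coefficients per evaluation), which is why memory layout, tiling, and nonuniform grids dominate the cost considerations in the abstract.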

  20. Monte Carlo calculations of the free energy of binary sII hydrogen clathrate hydrates for identifying efficient promoter molecules.

    Science.gov (United States)

    Atamas, Alexander A; Cuppen, Herma M; Koudriachova, Marina V; de Leeuw, Simon W

    2013-01-31

    The thermodynamics of binary sII hydrogen clathrates with secondary guest molecules is studied with Monte Carlo simulations. The small cages of the sII unit cell are occupied by one H(2) guest molecule each. Different promoter molecules entrapped in the large cages are considered. Simulations are conducted at a pressure of 1000 atm in a temperature range of 233-293 K. To determine the stabilizing effect of different promoter molecules on the clathrate, the Gibbs free energies of fully and partially occupied sII hydrogen clathrates are calculated. Our aim is to predict what would make an efficient promoter molecule using properties such as size, dipole moment, and hydrogen bonding capability. The gas clathrate configurational and free energies are compared. The entropy makes a considerable contribution to the free energy and should be taken into account in determining the stability conditions of binary sII hydrogen clathrates.

  1. Quantum Monte Carlo for large chemical systems: Implementing efficient strategies for petascale platforms and beyond

    CERN Document Server

    Scemama, Anthony; Oseret, Emmanuel; Jalby, William

    2012-01-01

    Various strategies to efficiently implement QMC simulations for large chemical systems are presented. These include: i) the introduction of an efficient algorithm to calculate the computationally expensive Slater matrices, a novel scheme based on the highly localized character of atomic Gaussian basis functions (not of the molecular orbitals, as usually done); ii) the possibility of keeping the memory footprint minimal; iii) the important enhancement of single-core performance when efficient optimization tools are employed; and iv) the definition of a universal, dynamic, fault-tolerant, and load-balanced computational framework adapted to all kinds of computational platforms (massively parallel machines, clusters, or distributed grids). These strategies have been implemented in the QMC=Chem code developed at Toulouse and illustrated with numerical applications on small peptides of increasing sizes (158, 434, 1056 and 1731 electrons). Using 10k-80k computing cores of the Curie machine (GENCI-T...

  2. Broad-band efficiency calibration of ITER bolometer prototypes using Pt absorbers on SiN membranes

    Science.gov (United States)

    Meister, H.; Willmeroth, M.; Zhang, D.; Gottwald, A.; Krumrey, M.; Scholze, F.

    2013-12-01

    The energy resolved efficiency of two bolometer detector prototypes for ITER with 4 channels each and absorber thicknesses of 4.5 μm and 12.5 μm, respectively, has been calibrated in a broad spectral range from 1.46 eV up to 25 keV. The calibration in the energy range above 3 eV was performed against previously calibrated silicon photodiodes using monochromatized synchrotron radiation provided by five different beamlines of the Physikalisch-Technische Bundesanstalt at the electron storage rings BESSY II and Metrology Light Source in Berlin. For the measurements in the visible range, a setup was realised using monochromatized halogen lamp radiation and a calibrated laser power meter as reference. The measurements clearly demonstrate that the efficiency of the bolometer prototype detectors in the range from 50 eV up to ≈6 keV is close to unity; at a photon energy of 20 keV the bolometer with the thick absorber detects 80% of the photons, the one with the thin absorber about 50%. This indicates that the detectors will be well capable of measuring the plasma radiation expected from the standard ITER scenario. However, a minimum absorber thickness will be required for the high temperatures in the central plasma. At 11.56 keV, the sharp Pt-L3 absorption edge allowed the absorber thickness to be cross-checked by fitting the measured efficiency to the theoretically expected absorption of X-rays in a homogeneous Pt layer. Furthermore, below 50 eV the efficiency first follows the losses due to reflectance expected for Pt, but below 10 eV it is reduced further by a factor of 2 for the thick absorber and a factor of 4 for the thin absorber. Most probably, the different histories in production, storage, and operation led to varying surface conditions and additional loss channels.
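
    The thickness cross-check mentioned above rests on the Beer-Lambert law: a homogeneous absorber stops a fraction 1 - exp(-mu * t) of incident photons, so a measured efficiency at a known attenuation coefficient pins down t. A minimal sketch, with an illustrative placeholder mu rather than tabulated Pt data:

```python
import numpy as np

MU_PT = 0.19   # toy attenuation coefficient just above the Pt-L3 edge (1/um), assumed

def efficiency(t_um, mu=MU_PT):
    """Fraction of photons absorbed in a homogeneous layer of thickness t (um)."""
    return 1.0 - np.exp(-mu * t_um)

def thickness_from_efficiency(eff, mu=MU_PT):
    """Invert the Beer-Lambert absorption law for the absorber thickness."""
    return -np.log(1.0 - eff) / mu

# Round trip: recover the 4.5 um absorber thickness from its model efficiency.
t = thickness_from_efficiency(efficiency(4.5))
```

    The sharp jump of mu across the L3 edge is what makes the fit sensitive: the same thickness must reproduce the efficiency step on both sides of 11.56 keV.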

  3. Broad-band efficiency calibration of ITER bolometer prototypes using Pt absorbers on SiN membranes.

    Science.gov (United States)

    Meister, H; Willmeroth, M; Zhang, D; Gottwald, A; Krumrey, M; Scholze, F

    2013-12-01

    The energy resolved efficiency of two bolometer detector prototypes for ITER with 4 channels each and absorber thicknesses of 4.5 μm and 12.5 μm, respectively, has been calibrated in a broad spectral range from 1.46 eV up to 25 keV. The calibration in the energy range above 3 eV was performed against previously calibrated silicon photodiodes using monochromatized synchrotron radiation provided by five different beamlines of the Physikalisch-Technische Bundesanstalt at the electron storage rings BESSY II and Metrology Light Source in Berlin. For the measurements in the visible range, a setup was realised using monochromatized halogen lamp radiation and a calibrated laser power meter as reference. The measurements clearly demonstrate that the efficiency of the bolometer prototype detectors in the range from 50 eV up to ≈6 keV is close to unity; at a photon energy of 20 keV the bolometer with the thick absorber detects 80% of the photons, the one with the thin absorber about 50%. This indicates that the detectors will be well capable of measuring the plasma radiation expected from the standard ITER scenario. However, a minimum absorber thickness will be required for the high temperatures in the central plasma. At 11.56 keV, the sharp Pt-L3 absorption edge allowed the absorber thickness to be cross-checked by fitting the measured efficiency to the theoretically expected absorption of X-rays in a homogeneous Pt layer. Furthermore, below 50 eV the efficiency first follows the losses due to reflectance expected for Pt, but below 10 eV it is reduced further by a factor of 2 for the thick absorber and a factor of 4 for the thin absorber. Most probably, the different histories in production, storage, and operation led to varying surface conditions and additional loss channels.

  4. A Time Efficient Adaptive Gridding Approach and Improved Calibrations in Five-Hole Probe Measurements

    Directory of Open Access Journals (Sweden)

    Jason Town

    2015-01-01

Five-hole probes (FHPs), being dependable and accurate aerodynamic tools, are an excellent choice for measuring three-dimensional flow fields in turbomachinery. To improve spatial resolution, a subminiature FHP with a diameter of 1.68 mm is employed. The high length-to-diameter ratio of the tubing and manual pitch and yaw calibration cause increased uncertainty. A new FHP calibrator was designed and built to reduce this uncertainty through precise, computer-controlled movements and reduced calibration time. The calibrated FHP is then placed downstream of the nozzle guide vane (NGV) assembly of a low-speed, large-scale, axial-flow turbine. The cold-flow HP turbine stage contains 29 vanes and 36 blades. A fast, computer-controlled traversing system is implemented using an adaptive grid method to refine measurements in regions such as the vane wake, secondary flows, and boundary layers. The current approach increases the number of measurement points possible in a two-hour period by 160%. Flow structures behind the NGV measurement plane are identified with high spatial resolution and reduced uncertainty. The automated pitch and yaw calibration and the adaptive grid approach introduced in this study are shown to be a highly effective way of measuring complex flow fields in the research turbine.
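
    The adaptive-grid idea, refining the traverse only where the measured quantity changes quickly, can be sketched in one dimension. The wake profile and tolerance below are hypothetical stand-ins, not the paper's data:

    ```python
    import math

    def wake_profile(y):
        """Synthetic velocity deficit behind a vane (hypothetical
        stand-in for the measured flow field)."""
        return 1.0 - 0.5 * math.exp(-((y - 0.5) / 0.05) ** 2)

    def refine(points, f, tol, max_iter=10):
        """Insert midpoints wherever neighbouring samples differ by more
        than `tol`, concentrating points in high-gradient regions."""
        for _ in range(max_iter):
            new = []
            for a, b in zip(points, points[1:]):
                new.append(a)
                if abs(f(b) - f(a)) > tol:
                    new.append(0.5 * (a + b))
            new.append(points[-1])
            if len(new) == len(points):
                break
            points = new
        return points

    coarse = [i / 10 for i in range(11)]          # uniform starting grid
    grid = refine(coarse, wake_profile, tol=0.02)
    in_wake = sum(1 for y in grid if 0.35 <= y <= 0.65)
    print(len(grid), "points,", in_wake, "inside the wake band")
    ```

    Almost all inserted points end up inside the wake band, which is exactly the measurement-time saving the adaptive approach exploits.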

  5. Optimization Efficiency of Monte Carlo Simulation Tool for Evanescent Wave Spectroscopy Fiber-Optic Probe

    Directory of Open Access Journals (Sweden)

    Daniel Khankin

    2012-01-01

In a previous work we described the simulation tool FOPS 3D (Khankin et al., 2001), which can simulate the full three-dimensional geometrical structure of a fiber and the propagation of a light beam sent through it. In this paper we focus on three major points: the first concerns improvements made to the simulation tool itself; the second, optimizations of the calculations' efficiency. Finally, the main advance over our previous work is the presentation of simulation results for the optimal absorbance value as a function of bending angle for a given uncladded-part diameter; these suggest that fiber bending may improve the efficiency of recording the relevant measurements. This is the third iteration of the FOPS development process (Mann et al., 2009), which was significantly optimized by decreasing memory usage and increasing CPU utilization.

  6. A Calibration Routine for Efficient ETD in Large-Scale Proteomics

    Science.gov (United States)

    Rose, Christopher M.; Rush, Matthew J. P.; Riley, Nicholas M.; Merrill, Anna E.; Kwiecien, Nicholas W.; Holden, Dustin D.; Mullen, Christopher; Westphall, Michael S.; Coon, Joshua J.

    2015-11-01

Electron transfer dissociation (ETD) has been broadly adopted and is now available on a variety of commercial mass spectrometers. Unlike collisional activation techniques, optimal performance of ETD requires considerable user knowledge and input. ETD reaction duration is one key parameter that can greatly influence spectral quality and overall experiment outcome. We describe a calibration routine that determines the correct number of reagent anions necessary to reach a defined ETD reaction rate. Implementation of this automated calibration routine on two hybrid Orbitrap platforms illustrates considerable advantages, namely, increased product ion yield with concomitant reduction in scan rates netting up to 75% more unique peptide identifications in a shotgun experiment.
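
    The calibration logic can be sketched as pseudo-first-order ion-ion kinetics: measure the precursor decay rate at a known reagent target once, then scale the reagent number to reach any defined rate. All rate constants and anion counts below are illustrative, not instrument values:

    ```python
    import math

    def fit_rate(times_ms, precursor_fraction):
        """Least-squares slope of ln(remaining precursor) vs reaction time."""
        ys = [math.log(f) for f in precursor_fraction]
        n = len(times_ms)
        mx, my = sum(times_ms) / n, sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(times_ms, ys))
                 / sum((x - mx) ** 2 for x in times_ms))
        return -slope                     # decay rate, 1/ms

    # synthetic calibration data at a known reagent target of 2e5 anions
    k_true, n_ref = 0.05, 2e5             # illustrative values
    data_t = [0.0, 5.0, 10.0, 20.0]
    data_f = [math.exp(-k_true * t) for t in data_t]

    k_meas = fit_rate(data_t, data_f)
    k2 = k_meas / n_ref                   # per-anion rate coefficient
    n_needed = 0.1 / k2                   # anions for a target rate of 0.1/ms
    print(f"reagent anions for target rate: {n_needed:.3g}")
    ```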

  7. Efficient Monte Carlo Methods for the Potts Model at Low Temperature

    CERN Document Server

    Molkaraie, Mehdi

    2015-01-01

We consider the problem of estimating the partition function of the ferromagnetic $q$-state Potts model. We propose an importance sampling algorithm in the dual of the normal factor graph representing the model. The algorithm can efficiently compute an estimate of the partition function in a wide range of parameters; in particular, when the coupling parameters of the model are strong (corresponding to models at low temperature) or when the model contains a mixture of strong and weak couplings. We show that, in this setting, the proposed algorithm significantly outperforms state-of-the-art methods in the primal and in the dual domains.
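
    The quantity being estimated, and why a naive proposal needs help at low temperature, can be illustrated on a tiny Potts model where the partition function is also available by enumeration. The uniform-proposal estimator below is a baseline sketch, not the paper's dual-domain algorithm:

    ```python
    import itertools, math, random

    # Tiny 2x2 ferromagnetic q-state Potts model; the 4 nearest-neighbour
    # bonds form a 4-cycle of sites 0-1-3-2.
    q, beta = 3, 0.5
    bonds = [(0, 1), (2, 3), (0, 2), (1, 3)]

    def weight(state):
        matches = sum(state[i] == state[j] for i, j in bonds)
        return math.exp(beta * matches)   # exp(-beta*E) with E = -#matches

    # exact partition function by enumeration (feasible only for tiny systems)
    Z_exact = sum(weight(s) for s in itertools.product(range(q), repeat=4))

    # naive importance sampling with a uniform proposal; at strong
    # coupling this estimator degrades badly, which is what motivates
    # sampling in the dual domain instead
    random.seed(1)
    n = 50_000
    total = 0.0
    for _ in range(n):
        s = tuple(random.randrange(q) for _ in range(4))
        total += weight(s)
    Z_est = q ** 4 * total / n
    print(round(Z_exact, 2), round(Z_est, 2))
    ```

    At this moderate coupling the uniform proposal is still adequate; raising `beta` concentrates the weight on a handful of aligned states and the estimator's variance explodes.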

  8. Beta-efficiency of a typical gas-flow ionization chamber using GEANT4 Monte Carlo simulations

    Directory of Open Access Journals (Sweden)

    Hussain Abid

    2011-01-01

GEANT4-based Monte Carlo simulations have been carried out to determine the efficiency and conversion factors of a gas-flow ionization chamber for beta particles emitted by 86 different radioisotopes covering the average-β energy range of 5.69 keV to 2.061 MeV. Good agreement was found between the GEANT4-predicted values and the corresponding experimental data, as well as with EGS4-based calculations. For the reported set of β-emitters, the values of the conversion factor lie in the range of 0.5×10^13 to 2.5×10^13 Bq·cm^-3/A. The computed xenon-to-air conversion factor ratios attain a minimum value of 0.2 in the range of 0.1-1 MeV. As the radius and/or volume of the ion chamber increases, the conversion factors approach a flat energy response. These simulations show a small but significant dependence of the ionization efficiency on the type of wall material.
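
    The order of magnitude of the quoted conversion factor (activity concentration per unit saturation current) follows from ion-pair production alone. A back-of-envelope sketch with an assumed chamber volume and the idealization that every decay deposits its average beta energy in the gas (both assumptions, not the paper's geometry):

    ```python
    E_CHARGE = 1.602e-19   # elementary charge, C
    W_AIR = 33.97          # mean energy per ion pair in air, eV

    def conversion_factor(volume_cm3, mean_beta_energy_eV):
        """Activity concentration per unit saturation current, Bq·cm^-3/A,
        assuming full deposition of the average beta energy in the gas:
        I = c * V * (E/W) * e, hence c/I = W / (V * E * e)."""
        return W_AIR / (volume_cm3 * mean_beta_energy_eV * E_CHARGE)

    cf_200keV = conversion_factor(100.0, 200e3)   # assumed 100 cm^3 chamber
    for E in (50e3, 200e3, 1e6):
        print(f"{E/1e3:6.0f} keV -> {conversion_factor(100.0, E):.2e} Bq cm^-3 / A")
    ```

    The inverse dependence on the mean beta energy reproduces the right 10^12-10^13 scale and shows why the factor varies across the 86 isotopes; real chambers deposit only part of each particle's energy, which the full simulation accounts for.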

  9. An efficient Monte Carlo method for calculating ab initio transition state theory reaction rates in solution

    CERN Document Server

Iftimie, Radu; Salahub, Dennis; Schofield, Jeremy

    2003-01-01

In this article, we propose an efficient method for sampling the relevant state space in condensed phase reactions. In the present method, the reaction is described by solving the electronic Schrödinger equation for the solute atoms in the presence of explicit solvent molecules. The sampling algorithm uses a molecular mechanics guiding potential in combination with simulated tempering ideas and allows thorough exploration of the solvent state space in the context of an ab initio calculation, even when the dielectric relaxation time of the solvent is long. The method is applied to the study of the double proton transfer reaction that takes place between a molecule of acetic acid and a molecule of methanol in tetrahydrofuran. It is demonstrated that calculations of rates of chemical transformations occurring in solvents of medium polarity can be performed with an increase in CPU time by factors ranging from 4 to 15 with respect to gas-phase calculations.

  10. Probing gas adsorption in MOFs using an efficient ab initio widom insertion Monte Carlo method.

    Science.gov (United States)

    Lee, Youhan; Poloni, Roberta; Kim, Jihan

    2016-12-15

We propose a novel biased Widom insertion method that can efficiently compute the Henry coefficient, K_H, of gas molecules inside porous materials exhibiting strong adsorption sites by employing purely DFT calculations. This is achieved by partitioning the simulation volume into strongly and weakly adsorbing regions and selectively biasing the Widom insertion moves into the former region. We show that only a few thousand single-point energy calculations are necessary to achieve accurate statistics, compared to many hundreds of thousands or millions of such calculations in conventional random insertions. The methodology is used to compute the Henry coefficient for CO2, N2, CH4, and C2H2 in M-MOF-74 (M = Zn, Mg), yielding good agreement with published experimental data. Our results demonstrate that the DFT binding energy and the heat of adsorption are not sufficiently accurate indicators to rank guest adsorption properties in the Henry regime. © 2016 Wiley Periodicals, Inc.
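
    The core of the biased scheme, partitioning the volume, sampling each region separately, and volume-weighting the averages of exp(-βU), can be illustrated on a toy one-dimensional "framework" potential. All energies and region boundaries here are invented for illustration, not DFT values:

    ```python
    import math, random

    beta = 1.0

    def U(x):
        """Toy framework potential: one deep adsorption well near x = 0.1."""
        return -6.0 * math.exp(-((x - 0.1) / 0.02) ** 2)

    def stratum_mean(a, b, n, rng):
        """Monte Carlo average of exp(-beta*U) over insertions in [a, b]."""
        return sum(math.exp(-beta * U(a + (b - a) * rng.random()))
                   for _ in range(n)) / n

    rng = random.Random(7)
    strong = (0.0, 0.2, 20000)   # small region gets most of the insertions
    weak   = (0.2, 1.0, 5000)
    K_est = sum((b - a) * stratum_mean(a, b, n, rng)
                for a, b, n in (strong, weak))

    # dense-grid reference value of the same average <exp(-beta*U)>
    m = 200_000
    K_ref = sum(math.exp(-beta * U((i + 0.5) / m)) for i in range(m)) / m
    print(round(K_est, 3), round(K_ref, 3))
    ```

    Because the deep well dominates the average, concentrating insertions there is what lets a few thousand energy evaluations match the statistics of blind random insertion.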

  11. An efficient collision limiter Monte Carlo simulation for hypersonic near-continuum flows

    Science.gov (United States)

    Liang, Jie; Li, Zhihui; Li, Xuguo; Fang, Boqiang Du Ming

    2016-11-01

The implementation of a collision limiter DSMC-based hybrid approach is presented to simulate hypersonic near-continuum flow. Continuum breakdown parameters based on the gradient-length local Knudsen number are used to characterize different regions of the flowfield. The collision limiter is used in continuum inviscid regions with large time step and cell size. Dynamic adaptation and refinement of collision and sampling cells, driven by the local density gradient, is employed in high-gradient regions, including strong shocks and the boundary layer near the surface. A variable time step scheme is adopted to ensure a more uniform distribution of model particles per collision cell throughout the computational domain, with a constant ratio of local time step to particle weight to avoid particles being cloned or destroyed when crossing from cell to cell. The surface pressure and friction coefficients of hypersonic reentry flow for a blunt capsule are computed under different conditions and compared with a benchmark case in the transitional regime to examine efficiency and accuracy. The aerodynamic characteristics of a wave rider shape with a sharp leading edge are simulated in the test state for hypersonic near-continuum flow. The computed aerodynamic coefficients agree well with experimental data from the low-density wind tunnel of CARDC, at lower computational expense.
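
    The domain-decomposition step can be sketched with the gradient-length local Knudsen number, Kn_GLL = (λ/Q)|dQ/dx|, evaluated here on a synthetic 1-D density profile; the mean free path, threshold, and profile are illustrative assumptions:

    ```python
    import math

    LAMBDA = 5e-3      # assumed (constant) mean free path, m
    BREAKDOWN = 0.05   # commonly used continuum-breakdown threshold

    xs = [i * 0.01 for i in range(101)]    # 1-D grid, 0..1 m
    # synthetic density profile with a shock-like jump at x = 0.5
    rho = [1.0 + 4.0 / (1.0 + math.exp(-(x - 0.5) / 0.005)) for x in xs]

    def kn_gll(i):
        """Gradient-length local Knudsen number (lambda/rho)|d rho/dx|."""
        drho = (rho[i + 1] - rho[i - 1]) / (xs[i + 1] - xs[i - 1])
        return LAMBDA * abs(drho) / rho[i]

    flags = ["particle" if kn_gll(i) > BREAKDOWN else "continuum"
             for i in range(1, len(xs) - 1)]
    print(flags.count("particle"), "of", len(flags), "cells flagged for DSMC")
    ```

    Only the few cells straddling the steep gradient exceed the threshold; everywhere else the cheaper collision-limited treatment applies.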

  12. Efficient Orientation and Calibration of Large Aerial Blocks of Multi-Camera Platforms

    Science.gov (United States)

    Karel, W.; Ressl, C.; Pfeifer, N.

    2016-06-01

Aerial multi-camera platforms typically incorporate a nadir-looking camera accompanied by further cameras that provide oblique views, potentially resulting in utmost coverage, redundancy, and accuracy even on vertical surfaces. However, issues have remained unresolved with the orientation and calibration of the resulting imagery, to two of which we present feasible solutions. First, as the standard feature point descriptors used for the automated matching of homologous points are only invariant to the geometric variations of translation, rotation, and scale, they are not invariant to general changes in perspective. While the deviations from local 2D similarity transforms may be negligible for corresponding surface patches in vertical views of flat land, they become evident at vertical surfaces, and in oblique views in general. Usage of such similarity-invariant descriptors thus limits the number of tie points that stabilize the orientation and calibration of oblique views and cameras. To alleviate this problem, we present the positive impact on image connectivity of using a quasi affine-invariant descriptor. Second, no matter which hardware and software are used, at some point the number of unknowns of a bundle block may be too large to be handled. With multi-camera platforms, these limits are reached even sooner. Adjustment of sub-blocks is sub-optimal, as it complicates data management and hinders self-calibration. Simply discarding unreliable tie points of low manifold is not an option either, because these points are needed at the block borders and in poorly textured areas. As a remedy, we present a straightforward method to considerably reduce the number of tie points, and hence unknowns, before bundle block adjustment, while preserving orientation and calibration quality.

  13. Efficiency calibration and minimum detectable activity concentration of a real-time UAV airborne sensor system with two gamma spectrometers.

    Science.gov (United States)

    Tang, Xiao-Bin; Meng, Jia; Wang, Peng; Cao, Ye; Huang, Xi; Wen, Liang-Sheng; Chen, Da

    2016-04-01

A small-sized UAV (NH-UAV) airborne system with two gamma spectrometers (a LaBr3 detector and an HPGe detector) was developed to monitor activity concentration in serious nuclear accidents, such as the Fukushima nuclear accident. The efficiency calibration and determination of the minimum detectable activity concentration (MDAC) of the system were studied by Monte Carlo simulations at different flight altitudes, different horizontal distances from the detection position to the source-term center, and different source-term sizes. Both air and ground radiation were considered in the models. The results obtained may provide instructive suggestions for in-situ radioactivity measurements with the NH-UAV.
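
    A Currie-style minimum detectable activity estimate shows how the MC-calibrated efficiency enters the MDAC; every numeric input below (background, efficiency, count time) is a hypothetical stand-in, not a value from the paper:

    ```python
    import math

    def mda_bq(background_counts, efficiency, gamma_yield, live_time_s):
        """Currie-style MDA: L_d = 2.71 + 4.65*sqrt(B) counts, converted
        to activity via the full-energy-peak efficiency and gamma yield."""
        ld = 2.71 + 4.65 * math.sqrt(background_counts)
        return ld / (efficiency * gamma_yield * live_time_s)

    # e.g. Cs-137 (662 keV, gamma yield 0.851) with an assumed 0.5%
    # absolute peak efficiency at flight altitude and 300 s acquisition:
    B, eps, p, t = 400.0, 0.005, 0.851, 300.0
    mda = mda_bq(B, eps, p, t)
    print(f"MDA ~ {mda:.1f} Bq")
    ```

    Since the efficiency sits in the denominator, the altitude- and geometry-dependent efficiencies from the MC calibration translate directly into the altitude-dependent MDAC the paper maps out.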

  14. Syringe shape and positioning relative to efficiency volume inside dose calibrators and its role in nuclear medicine quality assurance programs

    Energy Technology Data Exchange (ETDEWEB)

    Santos, J.A.M. [Servico de Fisica Medica, Instituto Portugues de Oncologia do Porto Francisco Gentil, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200072 Porto (Portugal); Centro de Investigacao, Instituto Portugues de Oncologia do Porto Francisco Gentil, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200072 Porto (Portugal)], E-mail: a.miranda@portugalmail.pt; Carrasco, M.F. [Servico de Fisica Medica, Instituto Portugues de Oncologia do Porto Francisco Gentil, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200072 Porto (Portugal); Centro de Investigacao, Instituto Portugues de Oncologia do Porto Francisco Gentil, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200072 Porto (Portugal); Lencart, J. [Servico de Fisica Medica, Instituto Portugues de Oncologia do Porto Francisco Gentil, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200072 Porto (Portugal); Bastos, A.L. [Servico de Medicina Nuclear, Instituto Portugues de Oncologia do Porto Francisco Gentil, E.P.E., Rua Dr. Antonio Bernardino de Almeida, 4200072 Porto (Portugal)

    2009-06-15

A careful analysis of the influence of geometry and source positioning on the activity measurement outcome of a nuclear medicine dose calibrator is presented for {sup 99m}Tc. The implementation of a quasi-point-source apparent-activity curve measurement is proposed for an accurate correction of the activity inside several syringes, and compared with a theoretical geometric efficiency model. Additionally, new geometrical parameters are proposed to test and verify the correct positioning of the syringes as part of acceptance testing and quality control procedures.

  15. A Monte-Carlo simulation analysis for evaluating the severity distribution functions (SDFs) calibration methodology and determining the minimum sample-size requirements.

    Science.gov (United States)

    Shirazi, Mohammadali; Reddy Geedipally, Srinivas; Lord, Dominique

    2017-01-01

Severity distribution functions (SDFs) are used in highway safety to estimate the severity of crashes and conduct different types of safety evaluations and analyses. Developing a new SDF is a difficult task and demands significant time and resources. To simplify the process, the Highway Safety Manual (HSM) has started to document SDF models for different types of facilities. As such, SDF models have recently been introduced for freeways and ramps in the HSM addendum. However, since these functions are fitted and validated using data from a small number of selected states, they must be calibrated to local conditions when applied to a new jurisdiction. The HSM provides a methodology to calibrate the models through a scalar calibration factor. However, the proposed methodology to calibrate SDFs was never validated through research. Furthermore, there are no concrete guidelines for selecting a reliable sample size. Using extensive simulation, this paper documents an analysis that examined the bias between the 'true' and 'estimated' calibration factors. It was found that as the value of the true calibration factor deviates further from '1', more bias is observed between the 'true' and 'estimated' calibration factors. In addition, simulation studies were performed to determine the calibration sample size for various conditions. It was found that, as the average coefficient of variation (CV) of the 'KAB' and 'C' crashes increases, the analyst needs to collect a larger sample size to calibrate SDF models. Taking this observation into account, sample-size guidelines are proposed based on the average CV of the crash severities used in the calibration process.
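
    The sampling behaviour of a scalar calibration factor can be mimicked with a small simulation: observed counts are drawn around C_true times the model prediction, the factor is estimated as sum(observed)/sum(predicted), and its spread shrinks with sample size. Everything here is a schematic stand-in for the paper's SDF severity data:

    ```python
    import math, random, statistics

    def poisson(lam, rng):
        """Knuth's Poisson sampler; adequate for the small means used here."""
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            k += 1
            p *= rng.random()
            if p <= L:
                return k - 1

    def estimate_C(n_sites, C_true, rng):
        """HSM-style factor: sum(observed)/sum(predicted) over n_sites."""
        predicted = [rng.uniform(0.5, 3.0) for _ in range(n_sites)]
        observed = [poisson(C_true * mu, rng) for mu in predicted]
        return sum(observed) / sum(predicted)

    rng = random.Random(42)
    results = {}
    for n in (20, 100, 500):
        reps = [estimate_C(n, 1.4, rng) for _ in range(300)]
        results[n] = (statistics.mean(reps), statistics.stdev(reps))
        print(n, round(results[n][0], 3), round(results[n][1], 3))
    ```

    Repeating such an experiment over different true factors and count variabilities is, in essence, how sample-size guidelines of the kind proposed above can be derived.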

  16. Corruption of accuracy and efficiency of Markov chain Monte Carlo simulation by inaccurate numerical implementation of conceptual hydrologic models

    Science.gov (United States)

    Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.

    2010-10-01

    Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
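
    The cost of first-order fixed-step integration is easy to reproduce on a linear-reservoir toy model with a known analytical solution; the parameter values are arbitrary and the model far simpler than the study's:

    ```python
    import math

    # Linear reservoir dS/dt = P - k*S, with exact solution
    # S(t) = P/k + (S0 - P/k)*exp(-k*t).
    P, k, S0, T = 2.0, 1.5, 10.0, 5.0

    def euler(dt):
        """First-order explicit fixed-step scheme, as criticized above."""
        S = S0
        for _ in range(round(T / dt)):
            S += dt * (P - k * S)
        return S

    exact = P / k + (S0 - P / k) * math.exp(-k * T)
    errs = [abs(euler(dt) - exact) for dt in (0.5, 0.1, 0.001)]
    for dt, e in zip((0.5, 0.1, 0.001), errs):
        print(f"dt={dt:<6} |error|={e:.2e}")
    ```

    The error decays only linearly with the step size; in an MCMC setting this smooth truncation error becomes a parameter-dependent distortion of the likelihood surface, which is how the artificial posterior bimodality described above arises.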

  17. An efficient stable optical polariser module for calibration of the S4UVN earth observation satellite

    Science.gov (United States)

    Rolt, Stephen; Calcines, Ariadna; Lomanowski, Bartosz; Bramall, David; Shaw, Benjamin

    2016-07-01

We describe here an optical polariser module intended to deliver well-characterised polarised light to an imaging spectrometer instrument. The instrument in question is the Sentinel-4/UVN Earth observation imaging spectrometer due to be deployed in 2019 in a geostationary orbit. The polariser module described here will be used in the ground-based calibration campaign for this instrument. One critical task of the calibration campaign will be the highly accurate characterisation of the polarisation sensitivity of the instrument. The polariser module provides a constant, uniform source of linearly polarised light whose direction can be adjusted without changing the output level or uniformity of the illumination. A critical requirement of the polariser module is that the illumination is uniform across the exit pupil. Unfortunately, a conventional Glan-Taylor arrangement cannot provide this uniformity due to the strong variation in transmission at a refractive surface for angles close to the critical angle. Therefore a modified prism arrangement is proposed and described in detail. Detailed tolerance modelling and stray-light modelling are also reported here.

  18. Efficiency calibration of a liquid scintillation counter for {sup 90}Y Cherenkov counting

    Energy Technology Data Exchange (ETDEWEB)

    Vaca, F. [Huelva Univ. (Spain). Dept. de Fisica Aplicada e Ingenieria Electrica; Manjon, G. [Departamento de Fisica Aplicada, E.T.S. de Arquitectura, Universidad de Sevilla, Av. Reina Mercedes, 2, E-41012 Sevilla (Spain); Garcia-Leon, M. [Departamento de Fisica Atomica, Molecular y Nuclear, Facultad de Fisica, Universidad de Sevilla, Av. Reina Mercedes, s/n. Apartado 1061, E-41080 Sevilla (Spain)

    1998-04-01

In this paper a complete and self-consistent method for {sup 90}Sr determination in environmental samples is presented. It is based on Cherenkov counting of {sup 90}Y with a conventional liquid scintillation counter. The effects of color quenching on the counting efficiency and background are carefully studied. A working curve is presented which allows the counting-efficiency correction to be quantified as a function of the color-quenching strength. (orig.). 6 refs.
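
    Applying such a working curve amounts to interpolating the counting efficiency at the sample's quench level and dividing it out of the net count rate. The quench indicator and efficiency values below are invented for illustration, not the paper's curve:

    ```python
    def interp(x, xs, ys):
        """Piecewise-linear interpolation on a small monotone table."""
        if x <= xs[0]:
            return ys[0]
        if x >= xs[-1]:
            return ys[-1]
        for x0, y0, x1, y1 in zip(xs, ys, xs[1:], ys[1:]):
            if x0 <= x <= x1:
                return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

    # working curve: stronger colour quenching -> lower Cherenkov efficiency
    quench_index = [0.0, 0.2, 0.4, 0.6, 0.8]       # assumed indicator
    efficiency   = [0.65, 0.55, 0.44, 0.33, 0.22]  # counts per decay (invented)

    net_cpm, q = 1200.0, 0.3
    eff = interp(q, quench_index, efficiency)
    activity_bq = net_cpm / 60.0 / eff
    print(f"efficiency {eff:.3f} -> activity {activity_bq:.1f} Bq")
    ```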

  19. Calibration of environmental radionuclide transfer models using a Bayesian approach with Markov chain Monte Carlo simulations and model comparisons - Calibration of radionuclides transfer models in the environment using a Bayesian approach with Markov chain Monte Carlo simulation and comparison of models

    Energy Technology Data Exchange (ETDEWEB)

    Nicoulaud-Gouin, V.; Giacalone, M.; Gonze, M.A. [Institut de Radioprotection et de Surete Nucleaire-PRP-ENV/SERIS/LM2E (France); Martin-Garin, A.; Garcia-Sanchez, L. [IRSN-PRP-ENV/SERIS/L2BT (France)

    2014-07-01

Calibrating transfer models against observational data is a challenge, especially when parameter uncertainty is required and a choice must be made between competing models. Two main calibration methods are generally used. In the frequentist approach, the unknown parameter of interest is assumed fixed and its estimation is based on the data only; in this category, the least-squares method has many restrictions for nonlinear models, and competing models need to be nested in order to be compared. In Bayesian inference, the unknown parameter of interest is treated as random and its estimation is based on the data and on prior information. Compared to the frequentist method, it provides probability density functions and therefore pointwise estimation with credible intervals. However, in practical cases, Bayesian inference poses a complex problem of numerical integration, which explains its low use in operational modelling, including radioecology. This study aims to illustrate the interest and feasibility of the Bayesian approach in radioecology, particularly for models based on ordinary differential equations with non-constant coefficients, which cover most radiological risk assessment models, notably those implemented in the Symbiose platform (Gonze et al., 2010). The Markov chain Monte Carlo (MCMC) method (Metropolis et al., 1953) was used because the posterior expectations are intractable integrals. Sampling from the invariant distribution of the parameters was performed with the Metropolis-Hastings algorithm (Hastings, 1970). The GNU MCSim software (Bois and Maszle, 2011), a Bayesian hierarchical framework, was used to deal with nonlinear differential models. Two case studies including this type of model were investigated: an equilibrium-kinetic sorption model (EK) (e.g. van Genuchten et al., 1974), with experimental data concerning {sup 137}Cs and {sup 85}Sr sorption and desorption in different soils studied in stirred flow-through reactors. This model, generalizing the K{sub d} approach
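
    A minimal sketch of the Metropolis-Hastings machinery used in such calibrations, applied to a hypothetical one-parameter kinetic model C(t) = C0·exp(-k·t) rather than the study's EK model; all data and tuning values are synthetic:

    ```python
    import math, random

    rng = random.Random(0)
    C0, k_true, sigma = 100.0, 0.30, 2.0
    ts = [float(i) for i in range(1, 11)]
    obs = [C0 * math.exp(-k_true * t) + rng.gauss(0, sigma) for t in ts]

    def log_post(kk):
        """Gaussian log-likelihood with a flat prior on k > 0."""
        if kk <= 0:
            return -math.inf
        return -sum((y - C0 * math.exp(-kk * t)) ** 2
                    for t, y in zip(ts, obs)) / (2 * sigma ** 2)

    k, lp, chain = 0.1, log_post(0.1), []
    for i in range(20000):
        prop = k + rng.gauss(0, 0.02)              # random-walk proposal
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:  # Metropolis-Hastings accept
            k, lp = prop, lp_prop
        if i >= 5000:                              # discard burn-in
            chain.append(k)

    post_mean = sum(chain) / len(chain)
    print(f"posterior mean k = {post_mean:.3f} (true {k_true})")
    ```

    The retained chain approximates the posterior of k, so credible intervals come directly from its quantiles, which is precisely the pointwise-estimate-with-uncertainty output that motivates the Bayesian approach above.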

  20. Efficient Calibration/Uncertainty Analysis Using Paired Complex/Surrogate Models.

    Science.gov (United States)

    Burrows, Wesley; Doherty, John

    2015-01-01

    The use of detailed groundwater models to simulate complex environmental processes can be hampered by (1) long run-times and (2) a penchant for solution convergence problems. Collectively, these can undermine the ability of a modeler to reduce and quantify predictive uncertainty, and therefore limit the use of such detailed models in the decision-making context. We explain and demonstrate a novel approach to calibration and the exploration of posterior predictive uncertainty, of a complex model, that can overcome these problems in many modelling contexts. The methodology relies on conjunctive use of a simplified surrogate version of the complex model in combination with the complex model itself. The methodology employs gradient-based subspace analysis and is thus readily adapted for use in highly parameterized contexts. In its most basic form, one or more surrogate models are used for calculation of the partial derivatives that collectively comprise the Jacobian matrix. Meanwhile, testing of parameter upgrades and the making of predictions is done by the original complex model. The methodology is demonstrated using a density-dependent seawater intrusion model in which the model domain is characterized by a heterogeneous distribution of hydraulic conductivity.
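
    The division of labour described above, a cheap surrogate supplying finite-difference Jacobians while the complex model supplies residuals and tests each parameter upgrade, can be sketched with two toy functions standing in for the surrogate/complex pair (a Gauss-Newton step on a 2-parameter fit; everything here is illustrative):

    ```python
    def complex_model(p, xs):        # stand-in for the expensive model
        a, b = p
        return [a * x / (b + x) for x in xs]     # Michaelis-Menten-like

    def surrogate(p, xs):            # cheap, slightly wrong approximation
        a, b = p
        return [a * x / (b + x) * 1.02 for x in xs]

    xs = [0.5, 1.0, 2.0, 4.0, 8.0]
    obs = complex_model((2.0, 1.5), xs)          # synthetic "observations"

    def residuals(p):                # evaluated with the complex model
        return [o - m for o, m in zip(obs, complex_model(p, xs))]

    def surrogate_jacobian(p, h=1e-6):
        """Finite differences on the surrogate: J[j][i] = d model_i / d p_j."""
        base = surrogate(p, xs)
        J = []
        for j in range(len(p)):
            q = list(p)
            q[j] += h
            J.append([(b - a) / h for a, b in zip(base, surrogate(q, xs))])
        return J

    def gauss_newton_step(p):
        r, J = residuals(p), surrogate_jacobian(p)
        # normal equations (J J^T) d = J r, solved by hand for 2 parameters
        A = [[sum(J[i][k] * J[j][k] for k in range(len(xs)))
              for j in range(2)] for i in range(2)]
        g = [sum(J[i][k] * r[k] for k in range(len(xs))) for i in range(2)]
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        d = [(A[1][1] * g[0] - A[0][1] * g[1]) / det,
             (A[0][0] * g[1] - A[1][0] * g[0]) / det]
        return [pi + di for pi, di in zip(p, d)]

    p = [1.0, 1.0]
    for _ in range(8):
        p = gauss_newton_step(p)
    print([round(v, 4) for v in p])   # approaches the true (2.0, 1.5)
    ```

    The 2% error deliberately built into the surrogate's derivatives barely slows convergence, which is the essence of why a simplified model suffices for the Jacobian while the complex model keeps the calibration honest.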

  1. Construction and Calibration of Optically Efficient LCD-based Multi-Layer Light Field Displays

    Science.gov (United States)

    Hirsch, Matthew; Lanman, Douglas; Wetzstein, Gordon; Raskar, Ramesh

    2013-02-01

    Near-term commercial multi-view displays currently employ ray-based 3D or 4D light field techniques. Conventional approaches to ray-based display typically include lens arrays or heuristic barrier patterns combined with integral interlaced views on a display screen such as an LCD panel. Recent work has placed an emphasis on the co-design of optics and image formation algorithms to achieve increased frame rates, brighter images, and wider fields-of-view using optimization-in-the-loop and novel arrangements of commodity LCD panels. In this paper we examine the construction and calibration methods of computational, multi-layer LCD light field displays. We present several experimental configurations that are simple to build and can be tuned to sufficient precision to achieve a research quality light field display. We also present an analysis of moiré interference in these displays, and guidelines for diffuser placement and display alignment to reduce the effects of moiré. We describe a technique using the moiré magnifier to fine-tune the alignment of the LCD layers.

  2. Detection of 15 dB Squeezed States of Light and their Application for the Absolute Calibration of Photoelectric Quantum Efficiency

    Science.gov (United States)

    Vahlbruch, Henning; Mehmet, Moritz; Danzmann, Karsten; Schnabel, Roman

    2016-09-01

Squeezed states of light belong to the most prominent nonclassical resources. They have compelling applications in metrology, which has been demonstrated by their routine exploitation for improving the sensitivity of a gravitational-wave detector since 2010. Here, we report on the direct measurement of 15 dB squeezed vacuum states of light and their application to calibrate the quantum efficiency of photoelectric detection. The object of calibration is a customized InGaAs positive intrinsic negative (p-i-n) photodiode optimized for high external quantum efficiency. The calibration yields a value of 99.5% with a 0.5% (k = 2) uncertainty for a photon flux of the order of 10^17 s^-1 at a wavelength of 1064 nm. The calibration neither requires any standard nor knowledge of the incident light power and thus represents a valuable application of squeezed states of light in quantum metrology.

  3. SU-F-BRA-09: New Efficient Method for Xoft Axxent Electronic Brachytherapy Source Calibration by Pre-Characterizing Surface Applicators

    Energy Technology Data Exchange (ETDEWEB)

    Pai, S [iCAD Inc., Los Gatos, CA (United States)

    2015-06-15

Purpose: The objective is to improve the efficiency and efficacy of Xoft™ Axxent™ electronic brachytherapy (EBT) calibration of the source and surface applicators using the AAPM TG-61 formalism. Methods: The current method of Xoft EBT source calibration involves determining the absolute dose rate of the source in each of the four conical surface applicators using in-air chamber measurements and the TG-61 formalism. We propose a simplified TG-61 calibration methodology involving an initial characterization of the surface cone applicators. This is accomplished by calibrating dose rates for all 4 surface applicator sets (for 10 sources), which establishes the "applicator output ratios" with respect to the selected reference applicator (the 20 mm applicator). After this initial characterization, Xoft™ Axxent™ source TG-61 calibration is carried out only in the reference applicator; using the established applicator output ratios, dose rates for the other applicators are calculated. Results: 200 sources and 8 surface applicator sets were calibrated cumulatively using a Standard Imaging A20 ion chamber in accordance with manufacturer-recommended protocols. Dose rates of the 10, 20, 35 and 50 mm applicators were normalized to the reference (20 mm) applicator. The data in Figure 1 indicate that the normalized dose rate variation for each applicator across all 200 sources is better than ±3%. The average output ratios are 1.11, 1.02 and 0.49 for the 10 mm, 35 mm and 50 mm applicators, respectively, in good agreement with the manufacturer's published output ratios of 1.13, 1.02 and 0.49. Conclusion: Our measurements successfully demonstrate the accuracy of a new calibration method that uses a single surface applicator for Xoft EBT sources and derives the dose rates of the other applicators. The accuracy of the calibration is improved, as this method minimizes source position variation inside the applicator during individual source calibrations. The new method significantly reduces the calibration time to less

  4. Calibration of the b-tagging efficiency on jets with charm quark for the ATLAS experiment

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00536668; Schiavi, Carlo

The correct identification of jets originating from a beauty quark (b-jets) is of fundamental importance for many physics analyses performed by the ATLAS experiment, operating at the Large Hadron Collider, CERN. The efficiency of mistakenly tagging a jet originating from a charm quark (c-jet) as a b-jet has been measured in data with two different methods: the first, referred to as the "D* method", uses a sample of jets containing reconstructed D* mesons (adopted for the 7 TeV and 8 TeV data analyses); the second, referred to as the "W+c method", uses a sample of c-jets produced in association with a W boson (studied on 7 TeV data). This thesis work focuses on some significant improvements made to the D* method, increasing the measurement precision. A study for the improvement of the W+c method and its first application to 13 TeV data is also presented: by focusing on the event selection, the W+c signal yield has been considerably increased with respect to the background processes.

  5. Crop physiology calibration in the CLM

    Directory of Open Access Journals (Sweden)

    I. Bilionis

    2015-04-01

scalable and adaptive scheme based on sequential Monte Carlo (SMC). The model showed significant improvement of crop productivity with the newly calibrated parameters. We demonstrate that the calibrated parameters are applicable across alternative years and different sites.

  6. Multi-Conformation Monte Carlo: A Method for Introducing Flexibility in Efficient Simulations of Many-Protein Systems.

    Science.gov (United States)

    Prytkova, Vera; Heyden, Matthias; Khago, Domarin; Freites, J Alfredo; Butts, Carter T; Martin, Rachel W; Tobias, Douglas J

    2016-08-25

    We present a novel multi-conformation Monte Carlo simulation method that enables the modeling of protein-protein interactions and aggregation in crowded protein solutions. This approach is relevant to a molecular-scale description of realistic biological environments, including the cytoplasm and the extracellular matrix, which are characterized by high concentrations of biomolecular solutes (e.g., 300-400 mg/mL for proteins and nucleic acids in the cytoplasm of Escherichia coli). Simulation of such environments necessitates the inclusion of a large number of protein molecules. Therefore, computationally inexpensive methods, such as rigid-body Brownian dynamics (BD) or Monte Carlo simulations, can be particularly useful. However, as we demonstrate herein, the rigid-body representation typically employed in simulations of many-protein systems gives rise to certain artifacts in protein-protein interactions. Our approach allows us to incorporate molecular flexibility in Monte Carlo simulations at low computational cost, thereby eliminating ambiguities arising from structure selection in rigid-body simulations. We benchmark and validate the methodology using simulations of hen egg white lysozyme in solution, a well-studied system for which extensive experimental data, including osmotic second virial coefficients, small-angle scattering structure factors, and multiple structures determined by X-ray and neutron crystallography and solution NMR, as well as rigid-body BD simulation results, are available for comparison.

  7. Modeling of germanium detector and its sourceless calibration

    Directory of Open Access Journals (Sweden)

    Steljić Milijana

    2008-01-01

    Full Text Available The paper describes the procedure of adapting a coaxial high-purity germanium detector into a device with numerical calibration. The procedure includes the determination of the detector dimensions and the establishment of a corresponding model of the system. In order to calibrate the system without the use of standard sources, Monte Carlo simulations were performed to determine its efficiency and pulse-height response function. A detailed Monte Carlo model was developed using the MCNP-5.0 code. The results indicate that this method is a valuable tool for the quantitative uncertainty analysis of radiation spectrometers and for gamma-ray detector calibration, minimizing the need for the deployment of radioactive sources.

  8. Calibration of a gamma spectrometer for natural radioactivity measurement. Experimental measurements and Monte Carlo modelling; Etalonnage d'un spectrometre gamma en vue de la mesure de la radioactivite naturelle. Mesures experimentales et modelisation par techniques de Monte-Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Courtine, Fabien [Laboratoire de Physique Corpusculaire, Universite Blaise Pascal - CNRS/IN2P3, 63000 Aubiere Cedex (France)

    2007-03-15

    This thesis was carried out in the context of thermoluminescence dating, a method that requires laboratory measurements of natural radioactivity. For that purpose, we used a germanium spectrometer. To refine its calibration, we modelled it with the Geant4 Monte Carlo code. We developed a geometrical model that takes into account the presence of inactive zones and zones of poor charge collection within the germanium crystal. The parameters of the model were adjusted by comparison with experimental results obtained with a {sup 137}Cs source. It appeared that the shape of the inactive zones is less simple than presented in the specialized literature. The model was extended to a more complex source with cascade effects and angular correlations between photons: {sup 60}Co. Lastly, applied to extended sources, it gave correct results and allowed us to validate the simulation of matrix effects. (author)

  9. Efficiency calibration of a mini-orange type beta-spectrometer by the β{sup -}-spectrum of {sup 90}Sr

    CERN Document Server

    Kalinnikov, V G; Ibrakhim, Y S; Lebedev, N A; Samatov, Z K; Sehrehehtehr, Z; Solnyshkin, A A

    2002-01-01

    A specific method for the efficiency calibration of a mini-orange type beta-spectrometer by means of the continuous β{sup -}-spectrum of {sup 90}Sr and the conversion-electron spectrum of {sup 207}Bi in the energy range from 500 to 2200 keV has been elaborated. In the experiment, typical SmCo{sub 5} magnets (6A and 8A) were used. The accuracy of the efficiency determination was 5-10%.

  10. Calibrating and Controlling the Quantum Efficiency Distribution of Inhomogeneously Broadened Quantum Rods by Using a Mirror Ball

    DEFF Research Database (Denmark)

    Hansen, Per Lunnemann; Rabouw, Freddy T.; van Dijk-Moes, Relinde J. A.

    2013-01-01

    We demonstrate that a simple silver-coated ball lens can be used to accurately measure the entire distribution of radiative transition rates of quantum dot nanocrystals. This simple and cost-effective implementation of Drexhage's method, which uses nanometer-controlled optical mode density variation near a mirror, not only allows an extraction of calibrated ensemble-averaged rates, but for the first time also quantifies the full inhomogeneous dispersion of radiative and non-radiative decay rates across thousands of nanocrystals. We apply the technique to novel ultrastable CdSe/CdS dot-in-rod emitters. The emitters are of large current interest due to their improved stability and reduced blinking. We retrieve a room-temperature ensemble-average quantum efficiency of 0.87 ± 0.08 at a mean lifetime around 20 ns. We confirm a log-normal distribution of decay rates, as often assumed in the literature, and we show that the width of the rate distribution, which amounts to about 30% of the mean decay rate, is strongly dependent on the local density of optical states.

  11. Detection Efficiency Calibration of an HPGe γ Spectrometer for an Annular Surface Source

    Institute of Scientific and Technical Information of China (English)

    冯松; 羊奕伟; 王玫; 刘荣; 严小松; 鹿心鑫

    2014-01-01

    The reaction rates of uranium in the natural-uranium decomposition simulation device are important data for studying the reliability of the neutronics design of the fission cladding in the conceptual design of a subcritical reactor. In measuring these reaction rates by the activation method, the detection efficiency of an HPGe spectrometer for an annular source must be calibrated accurately. To develop a fast and effective calibration method, we apply the Monte Carlo approach. The point-source detection efficiency curve was measured on the detector axis at a distance of 6 cm from the surface, using a series of standard gamma point sources. The MCNP5 calculations were brought into agreement with these measurements by adjusting the sizes of the dead layer and the hole in the HPGe model. With the fitted model, the detection efficiency for the annular gamma source was calculated and compared with the measured value, which was obtained by radially integrating the point detection efficiency; this comparison validates the efficiency calibration method. The measured full-energy-peak efficiency for the annular gamma source agreed with the simulated value to within 4% in the energy range 200 to 1400 keV. The method is an accurate and efficient way of calibrating for annular sources.
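    The validation step above — obtaining the annular-source efficiency by radially integrating the point-source efficiency over the source area — can be sketched in a few lines. The radii and efficiency values below are hypothetical; a real calibration would use the measured point-source curve.

```python
import numpy as np

# Hypothetical point-source full-energy-peak efficiencies measured at
# radial offsets r (cm) from the detector axis, at fixed source height.
r = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
eps_point = np.array([2.10e-2, 2.05e-2, 1.92e-2, 1.74e-2, 1.52e-2, 1.30e-2])

def annular_efficiency(r, eps, r_in, r_out, n=1000):
    """Area-weighted average of the point efficiency over an annulus
    r_in <= rho <= r_out, assuming uniform surface activity."""
    edges = np.linspace(r_in, r_out, n + 1)
    mid = 0.5 * (edges[:-1] + edges[1:])
    drho = edges[1] - edges[0]
    eps_mid = np.interp(mid, r, eps)
    num = np.sum(eps_mid * 2.0 * np.pi * mid) * drho   # integral eps(rho) dA
    den = np.pi * (r_out**2 - r_in**2)                 # annulus area
    return num / den

print(annular_efficiency(r, eps_point, 2.0, 4.0))
```

    The result lies between the point efficiencies at the inner and outer radii, weighted toward the outer radius where the annulus has more area.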

  12. Reconstruction, Energy Calibration, and Identification of Hadronically Decaying Tau Leptons in the ATLAS Experiment for Run-2 of the LHC

    CERN Document Server

    The ATLAS collaboration

    2015-01-01

    The reconstruction algorithm, energy calibration, and identification methods for hadronically decaying tau leptons in ATLAS used at the start of Run-2 of the Large Hadron Collider are described in this note. All algorithms have been optimised for Run-2 conditions. The energy calibration relies on Monte Carlo samples with hadronic tau lepton decays, and applies multiplicative factors based on the pT of the reconstructed tau lepton to the energy measurements in the calorimeters. The identification employs boosted decision trees. Systematic uncertainties on the energy scale, reconstruction efficiency and identification efficiency of hadronically decaying tau leptons are determined using Monte Carlo samples that simulate varying conditions.
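    The pT-dependent multiplicative calibration described above amounts to a binned lookup applied to the calorimeter energy measurement. The bin edges and response factors below are invented for illustration; the actual factors are derived from the simulated hadronic tau decays.

```python
import numpy as np

# Hypothetical pT-binned response factors (reconstructed E / true E);
# the calibration divides the measured energy by the relevant factor.
pt_edges = np.array([20.0, 40.0, 60.0, 100.0, 200.0])   # GeV, bin edges
response = np.array([0.92, 0.95, 0.97, 0.99])           # one factor per bin

def calibrate(e_reco, pt):
    # Find the pT bin and apply the multiplicative correction.
    i = np.clip(np.searchsorted(pt_edges, pt) - 1, 0, len(response) - 1)
    return e_reco / response[i]

print(calibrate(50.0, 45.0))
```

    Out-of-range pT values are clipped to the first or last bin, a common fallback when the calibration sample runs out of statistics.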

  13. Monte Carlo probabilistic sensitivity analysis for patient level simulation models: efficient estimation of mean and variance using ANOVA.

    Science.gov (United States)

    O'Hagan, Anthony; Stevenson, Matt; Madan, Jason

    2007-10-01

    Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially.
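    A minimal sketch of the two-level idea, assuming a toy patient-level model: run the model for N sampled input sets with n patients each, then subtract the expected patient-level (within-run) noise from the between-run variance to isolate the input-driven variance, in the spirit of the ANOVA decomposition described above. The model, sample sizes, and distributions here are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def patient_model(theta, n_patients, rng):
    # Toy patient-level model: each simulated patient's outcome is the
    # model input theta plus individual-level noise; a run returns all patients.
    return rng.normal(theta, 2.0, n_patients)

N, n = 200, 100               # N outer input draws, n patients per run
run_means = np.empty(N)
within_vars = np.empty(N)
for i in range(N):
    theta = rng.normal(10.0, 1.0)        # sampled model input (PSA draw)
    y = patient_model(theta, n, rng)
    run_means[i] = y.mean()
    within_vars[i] = y.var(ddof=1)

overall_mean = run_means.mean()
s_between = run_means.var(ddof=1)        # inflated by patient-level noise
s_within = within_vars.mean() / n        # expected noise in each run mean
var_inputs = s_between - s_within        # ANOVA estimate of input-driven variance
print(round(overall_mean, 2), round(var_inputs, 2))
```

    Here the true input-driven variance is 1.0 and the patient-level contribution to each run mean is 4/n = 0.04, so the subtraction matters more as n shrinks, which is exactly the regime where patient-level PSA is expensive.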

  14. Technology for radiation efficiency measurement of high-power halogen tungsten lamp used in calibration of high-energy laser energy meter.

    Science.gov (United States)

    Wei, Ji Feng; Hu, Xiao Yang; Sun, Li Qun; Zhang, Kai; Chang, Yan

    2015-03-20

    The calibration method using a high-power halogen tungsten lamp as a calibration source has many advantages, such as strong equivalence and high power, so it is well suited to the calibration of high-energy laser energy meters. However, after power-off a high-power halogen tungsten lamp still retains much residual energy and continues to radiate, which is difficult to measure. Two measuring systems were devised to solve this problem. One system is composed of an integrating sphere and two optical spectrometers, which can accurately characterize the radiative spectra and power-time variation of the halogen tungsten lamp. This measuring system was calibrated using a normal halogen tungsten lamp made of the same material as the high-power lamp. In this way, the radiation efficiency of the halogen tungsten lamp after power-off can be quantitatively measured. In the other measuring system, a wide-spectrum power meter was installed far away from the halogen tungsten lamp, so that the lamp can be regarded as a point light source. The radiation efficiency of the residual energy from the halogen tungsten lamp was then computed from the geometrical relations. The results show that the lamp's radiation efficiency improved with power-on time but did not change under constant power-on time/energy. All the tested halogen tungsten lamps reached 89.3% radiation efficiency at 50 s after power-on. After power-off, the residual energy in the halogen tungsten lamp gradually dropped to less than 10% of the initial radiation power, and the radiation efficiency changed with time. The final total radiation energy was determined by the lamp's radiation efficiency, the radiation efficiency of the residual energy, and the total power consumption. The measurement uncertainty of the total radiation energy was 2.4% (with a coverage factor of 2).
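    The geometric computation in the second measuring system reduces to scaling the power collected by the meter's aperture up to the full sphere, assuming the lamp radiates isotropically as a point source. All numbers below are illustrative.

```python
import math

def total_radiated_power(p_measured, distance_m, aperture_area_m2):
    """Scale a far-field power-meter reading to the full sphere, assuming
    isotropic point-source radiation (illustrative values only)."""
    # Fraction of the sphere of radius d subtended by the meter aperture
    solid_angle_fraction = aperture_area_m2 / (4.0 * math.pi * distance_m**2)
    return p_measured / solid_angle_fraction

# e.g. 5 mW collected by a 1 cm^2 aperture at 2 m from the lamp
print(total_radiated_power(5e-3, 2.0, 1e-4))
```

    The far-field placement is what justifies the point-source approximation; close to the lamp the filament's extent would invalidate the inverse-square scaling.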

  15. Efficiency calibration and coincidence summing correction for large arrays of NaI(Tl) detectors in soccer-ball and castle geometries

    Energy Technology Data Exchange (ETDEWEB)

    Anil Kumar, G., E-mail: anilg@tifr.res.i [Department of Nuclear and Atomic Physics, Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai 400005 (India); Mazumdar, I.; Gothe, D.A. [Department of Nuclear and Atomic Physics, Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai 400005 (India)

    2009-11-21

    Efficiency calibration and coincidence-summing correction have been performed for two large arrays of NaI(Tl) detectors in two different configurations: a compact array of 32 conical detectors of pentagonal and hexagonal shapes in soccer-ball geometry, and an array of 14 straight hexagonal NaI(Tl) detectors in castle geometry. Both arrays provide a large solid angle of detection, leading to considerable coincidence summing of gamma rays. The present work aims to understand the effect of coincidence summing of gamma rays when determining the energy dependence of the efficiencies of these two arrays. We have carried out extensive GEANT4 simulations with radionuclides that decay by a two-step cascade, considering both arrays in their realistic geometries. The absolute efficiencies have been simulated for gamma energies from 700 to 2800 keV using four different double-photon emitters, namely {sup 60}Co, {sup 46}Sc, {sup 94}Nb and {sup 24}Na. The efficiencies so obtained have been corrected for coincidence summing using the method proposed by Vidmar et al. The simulations have also been carried out for the same energies assuming mono-energetic point sources, for comparison. Experimental measurements have also been carried out using calibrated point sources of {sup 137}Cs and {sup 60}Co. The simulated and experimental results are found to be in good agreement, demonstrating the reliability of the correction method for the efficiency calibration of two large arrays in very different configurations.
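    For a two-step cascade, the dominant summing-out effect can be illustrated with the textbook first-order relation: the apparent full-energy-peak efficiency for one gamma is reduced by the probability that the coincident gamma also deposits energy, ε_apparent = ε_true(1 − ε_total,partner). This simplified formula neglects summing-in, branching, and angular correlations, so it is a sketch of the physics rather than the full Vidmar et al. treatment used above.

```python
def summing_corrected_efficiency(eps_peak_apparent, eps_total_other):
    """Recover the true full-energy-peak efficiency for one gamma of a
    two-step cascade from the apparent (summing-affected) efficiency.
    eps_total_other: total detection efficiency for the coincident gamma.
    First-order losses-only correction, illustrative values."""
    correction = 1.0 / (1.0 - eps_total_other)
    return eps_peak_apparent * correction

# Large-solid-angle array: illustrative 40% total efficiency for the partner gamma
print(summing_corrected_efficiency(0.12, 0.40))
```

    The correction grows rapidly with solid angle, which is why it matters so much for near-4π arrays like those described above.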

  16. Synchrotron calibration and response modelling of back-illuminated XMM-RGS CCDs

    CERN Document Server

    Bootsma, T M V; Brinkman, A C; Herder, J W D; Jong, L D; Korte, P D; Olsthoorn, S M

    2000-01-01

    Back-illuminated CCDs with high quantum efficiency for 0.35-2.5 keV X-rays, combined with low efficiency at optical wavelengths, have been developed for XMM-RGS. As part of the calibration programme, a systematic study of the CCD response in this energy range has been performed at the Synchrotron Radiation Source in Daresbury, UK. These measurements confirm the high quantum efficiency of the CCDs, and the results are consistently described by a Monte Carlo model.

  17. Sequential Monte Carlo on large binary sampling spaces

    CERN Document Server

    Schäfer, Christian

    2011-01-01

    A Monte Carlo algorithm is said to be adaptive if it automatically calibrates its current proposal distribution using past simulations. The choice of the parametric family that defines the set of proposal distributions is critical for good performance. In this paper, we present such a parametric family for adaptive sampling on high-dimensional binary spaces. A practical motivation for this problem is variable selection in a linear regression context, where we want to sample from a Bayesian posterior distribution on the model space using an appropriate version of Sequential Monte Carlo. Raw versions of Sequential Monte Carlo are easily implemented using binary vectors with independent components. For high-dimensional problems, however, these simple proposals do not yield satisfactory results. The key to an efficient adaptive algorithm is binary parametric families which take correlations into account, analogously to the multivariate normal distribution on continuous spaces. We provide a review of models for binar...
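    A raw Sequential Monte Carlo sampler of the kind the paper takes as its baseline — tempering toward a target on {0,1}^d with an independent-Bernoulli proposal refit to the weighted particles at each step — can be sketched as follows. The target here is a toy factorized posterior, so the independence proposal happens to be adequate; the paper's point is that correlated targets need a richer family.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 10, 5000

def log_target(x):
    # Toy posterior over binary inclusion vectors: favours the first three.
    w = np.array([2.0, 2.0, 2.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0])
    return x @ w

def log_q(z, p):
    # Log-probability of z under the independent-Bernoulli proposal.
    return (z * np.log(p) + (1.0 - z) * np.log(1.0 - p)).sum(axis=1)

p = np.full(d, 0.5)                          # proposal marginals
x = (rng.random((n, d)) < p).astype(float)   # initial particles
betas = np.linspace(0.0, 1.0, 11)            # tempering schedule
for b0, b1 in zip(betas[:-1], betas[1:]):
    logw = (b1 - b0) * log_target(x)         # incremental importance weights
    wts = np.exp(logw - logw.max()); wts /= wts.sum()
    p = np.clip(wts @ x, 0.01, 0.99)         # refit marginals (no correlations)
    x = x[rng.choice(n, n, p=wts)]           # multinomial resampling
    # Independence Metropolis-Hastings refresh at temperature b1
    xp = (rng.random((n, d)) < p).astype(float)
    log_acc = b1 * (log_target(xp) - log_target(x)) + log_q(x, p) - log_q(xp, p)
    acc = np.log(rng.random(n)) < log_acc
    x[acc] = xp[acc]
print(x.mean(axis=0).round(2))
```

    When the target has strong pairwise dependence between components, the refit marginals no longer describe the particle cloud well, acceptance rates collapse, and the correlated proposal families of the paper become necessary.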

  18. Some Improved Estimators of Co-efficient of Variation from Bi-variate normal distribution: A Monte Carlo Comparison

    OpenAIRE

    Archana V; Aruna Rao K

    2014-01-01

    Co-efficient of variation is a unitless measure of dispersion and is very frequently used in scientific investigations. This has motivated several researchers to propose estimators and tests concerning the co-efficient of variation of normal distribution(s). While proposing a class of estimators for the co-efficient of variation of a finite population, Tripathi et al., (2002) suggested that the estimator of co-efficient of variation of a finite population can also be used as an estimator of C...

  19. Some Improved Estimators of Co-efficient of Variation from Bi-variate normal distribution: A Monte Carlo Comparison

    Directory of Open Access Journals (Sweden)

    Archana V

    2014-05-01

    Full Text Available Co-efficient of variation is a unitless measure of dispersion and is very frequently used in scientific investigations. This has motivated several researchers to propose estimators and tests concerning the co-efficient of variation of normal distribution(s). While proposing a class of estimators for the co-efficient of variation of a finite population, Tripathi et al. (2002) suggested that the estimator of co-efficient of variation of a finite population can also be used as an estimator of C.V. for any distribution when the sampling design is SRSWR. This has motivated us to propose 28 estimators of finite population co-efficient of variation as estimators of the co-efficient of variation of one component of a bivariate normal distribution when prior information is available regarding the second component. A Cramer-Rao type lower bound is derived for the mean square error of these estimators. Extensive simulation is carried out to compare these estimators. The results indicate that out of these 28 estimators, eight estimators have larger relative efficiency compared to the sample co-efficient of variation. The asymptotic mean square errors of the best estimators are derived to the order of  for the benefit of users of co-efficient of variation.

  20. High Efficiency, Digitally Calibrated TR Modules Enabling Lightweight SweepSAR Architectures for DESDynI-Class Radar Instruments Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Develop and demonstrate a next-generation digitally calibrated, highly scalable, L-band Transmit/Receive (TR) module to enable a precision beamforming SweepSAR...

  1. Calibration of Ge gamma-ray spectrometers for complex sample geometries and matrices

    Energy Technology Data Exchange (ETDEWEB)

    Semkow, T.M., E-mail: thomas.semkow@health.ny.gov [Wadsworth Center, New York State Department of Health, Empire State Plaza, Albany, NY 12201 (United States); Department of Environmental Health Sciences, School of Public Health, University at Albany, State University of New York, Rensselaer, NY 12144 (United States); Bradt, C.J.; Beach, S.E.; Haines, D.K.; Khan, A.J.; Bari, A.; Torres, M.A.; Marrantino, J.C.; Syed, U.-F. [Wadsworth Center, New York State Department of Health, Empire State Plaza, Albany, NY 12201 (United States); Kitto, M.E. [Wadsworth Center, New York State Department of Health, Empire State Plaza, Albany, NY 12201 (United States); Department of Environmental Health Sciences, School of Public Health, University at Albany, State University of New York, Rensselaer, NY 12144 (United States); Hoffman, T.J. [Wadsworth Center, New York State Department of Health, Empire State Plaza, Albany, NY 12201 (United States); Curtis, P. [Kiltel Systems, Inc., Clyde Hill, WA 98004 (United States)

    2015-11-01

    A comprehensive study of the efficiency calibration and calibration verification of Ge gamma-ray spectrometers was performed using semi-empirical, computational Monte Carlo (MC), and transfer methods. The aim of this study was to evaluate the accuracy of the quantification of gamma-emitting radionuclides in complex matrices normally encountered in environmental and food samples. A wide range of gamma energies from 59.5 to 1836.0 keV and geometries from a 10-mL jar to a 1.4-L Marinelli beaker were studied on four Ge spectrometers with relative efficiencies between 102% and 140%. Density and coincidence-summing corrections were applied. Innovative techniques were developed for the preparation of artificial complex matrices from materials such as acidified water, polystyrene, ethanol, sugar, and sand, resulting in densities ranging from 0.3655 to 2.164 g cm{sup −3}. They were spiked with gamma activity traceable to international standards and used for calibration verifications. A quantitative method of tuning MC calculations to experiment was developed based on a multidimensional chi-square paraboloid. - Highlights: • Preparation and spiking of traceable complex matrices in extended geometries. • Calibration of Ge gamma spectrometers for complex matrices. • Verification of gamma calibrations. • Comparison of semi-empirical, computational Monte Carlo, and transfer methods of Ge calibration. • Tuning of Monte Carlo calculations using a multidimensional paraboloid.
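    The chi-square tuning idea — fit a paraboloid to χ² values computed at trial Monte Carlo parameter settings and take its vertex as the tuned value — looks like this in one dimension. The parameter grid and χ² values below are invented; in the multidimensional case the quadratic fit becomes a paraboloid over all tuned parameters.

```python
import numpy as np

# Hypothetical chi-square values from MC runs at trial dead-layer thicknesses (mm)
t = np.array([0.4, 0.6, 0.8, 1.0, 1.2])
chi2 = np.array([30.1, 14.2, 8.3, 12.9, 27.8])

# Fit a parabola chi2(t) ~ a*t^2 + b*t + c and take its vertex as the
# tuned parameter value (polyfit returns highest-degree coefficient first).
a, b, c = np.polyfit(t, chi2, 2)
t_opt = -b / (2.0 * a)
print(round(t_opt, 3))
```

    The curvature of the fitted paraboloid also gives an approximate uncertainty on the tuned parameter, which is part of the appeal of this approach over a plain grid minimum.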

  2. Beta-gamma coincidence counting efficiency and energy resolution optimization by Geant4 Monte Carlo simulations for a phoswich well detector.

    Science.gov (United States)

    Zhang, Weihua; Mekarski, Pawel; Ungar, Kurt

    2010-12-01

    A single-channel phoswich well detector has been assessed and analysed in order to improve the beta-gamma coincidence measurement sensitivity for (131m)Xe and (133m)Xe. This newly designed phoswich well detector consists of a plastic cell (BC-404) embedded in a CsI(Tl) crystal coupled to a photomultiplier tube (PMT). It can be used to distinguish the 30.0-keV X-ray signals of (131m)Xe and (133m)Xe using their unique coincidence signatures between the conversion electrons (CEs) and the 30.0-keV X-rays. The optimum coincidence efficiency signal depends on the energy resolutions of the two CE peaks, which could be affected by the position of the plastic cell relative to the CsI(Tl), because the embedded plastic cell interrupts the scintillation light path from the CsI(Tl) crystal to the PMT. In this study, several relative positions between the embedded plastic cell and the CsI(Tl) crystal have been evaluated using Monte Carlo modeling for their effects on coincidence detection efficiency and on X-ray and CE energy resolutions. The results indicate that the energy resolution and beta-gamma coincidence counting efficiency of X-rays and CEs depend significantly on the plastic cell location inside the CsI(Tl). The degradation of the X-ray and CE peak energy resolutions caused by the light-collection deterioration due to the embedded cell can be minimised. Optimal CE and X-ray energy resolution and beta-gamma coincidence efficiency, as well as ease of manufacturing, can be achieved by varying the embedded plastic cell position inside the CsI(Tl) and setting the most efficient geometry accordingly.

  3. An efficient multi-stage algorithm for full calibration of the hemodynamic model from BOLD signal responses

    KAUST Repository

    Zambri, Brian

    2017-02-22

    We propose a computational strategy that falls into the category of prediction/correction iterative-type approaches, for calibrating the hemodynamic model introduced by Friston et al. (2000). The proposed method is employed to estimate consecutively the values of the biophysiological system parameters and the external stimulus characteristics of the model. Numerical results corresponding to both synthetic and real functional Magnetic Resonance Imaging (fMRI) measurements for a single stimulus as well as for multiple stimuli are reported to highlight the capability of this computational methodology to fully calibrate the considered hemodynamic model.

  4. An Efficient Independence Sampler for Updating Branches in Bayesian Markov chain Monte Carlo Sampling of Phylogenetic Trees.

    Science.gov (United States)

    Aberer, Andre J; Stamatakis, Alexandros; Ronquist, Fredrik

    2016-01-01

    Sampling tree space is the most challenging aspect of Bayesian phylogenetic inference. The sheer number of alternative topologies is problematic by itself. In addition, the complex dependency between branch lengths and topology increases the difficulty of moving efficiently among topologies. Current tree proposals are fast but sample new trees using primitive transformations or re-mappings of old branch lengths. This reduces acceptance rates and presumably slows down convergence and mixing. Here, we explore branch proposals that do not rely on old branch lengths but instead are based on approximations of the conditional posterior. Using a diverse set of empirical data sets, we show that most conditional branch posteriors can be accurately approximated via a [Formula: see text] distribution. We empirically determine the relationship between the logarithmic conditional posterior density, its derivatives, and the characteristics of the branch posterior. We use these relationships to derive an independence sampler for proposing branches with an acceptance ratio of ~90% on most data sets. This proposal samples branches between 2× and 3× more efficiently than traditional proposals with respect to the effective sample size per unit of runtime. We also compare the performance of standard topology proposals with hybrid proposals that use the new independence sampler to update those branches that are most affected by the topological change. Our results show that hybrid proposals can sometimes noticeably decrease the number of generations necessary for topological convergence. Inconsistent performance gains indicate that branch updates are not the limiting factor in improving topological convergence for the currently employed set of proposals. However, our independence sampler might be essential for the construction of novel tree proposals that apply more radical topology changes.
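    The construction described above — approximate the conditional branch-length posterior by a simple parametric density and use it as an independence proposal in Metropolis-Hastings — can be sketched as follows. The approximating family is assumed here to be a Gamma distribution (the abstract's formula is elided above), the target density is a toy stand-in, and the Gamma parameters are matched to the target's mode and log-density curvature rather than by the paper's derivative-based fitting.

```python
import numpy as np

rng = np.random.default_rng(7)

def log_post(x):
    # Toy unnormalised conditional posterior of a branch length (not Gamma).
    return 3.0 * np.log(x) - 5.0 * x - 0.1 * x**2

# Locate the mode and curvature numerically, then match a Gamma proposal:
# for Gamma(a, rate r), mode m = (a-1)/r and d2/dx2 log f at m = -(a-1)/m^2.
xs = np.linspace(1e-3, 5.0, 100000)
m = xs[np.argmax(log_post(xs))]
h = 1e-4
curv = (log_post(m + h) - 2 * log_post(m) + log_post(m - h)) / h**2
a = 1.0 - m**2 * curv
r = (a - 1.0) / m

# Independence Metropolis-Hastings with the fitted Gamma proposal.
log_q = lambda z: (a - 1.0) * np.log(z) - r * z   # unnormalised Gamma log-pdf
x, samples, accepted = m, [], 0
for _ in range(20000):
    xp = rng.gamma(a, 1.0 / r)
    if np.log(rng.random()) < log_post(xp) - log_post(x) + log_q(x) - log_q(xp):
        x, accepted = xp, accepted + 1
    samples.append(x)
print(round(accepted / 20000, 2))
```

    Because the proposal closely matches the target, the acceptance rate is high, which is the property the paper exploits to reach ~90% acceptance on real branch posteriors.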

  5. An Efficient Method of Reweighting and Reconstructing Monte Carlo Molecular Simulation Data for Extrapolation to Different Temperature and Density Conditions

    KAUST Repository

    Sun, Shuyu

    2013-06-01

    This paper introduces an efficient technique to generate new molecular simulation Markov chains for different temperature and density conditions, allowing rapid extrapolation of canonical ensemble averages to a range of temperatures and densities different from the original conditions at which a single simulation was conducted. Information obtained from the original simulation is reweighted, and even reconstructed, in order to extrapolate our knowledge to the new conditions. Our technique allows not only extrapolation to a new temperature or density, but also double extrapolation to both a new temperature and a new density. The method was implemented for a Lennard-Jones fluid of structureless particles in the single-phase gas region. Extrapolation behavior as a function of extrapolation range was studied. The limits of the extrapolation range showed a remarkable capability, especially along isochores, where only reweighting is required. Various factors that could affect the limits of the extrapolation range were investigated and compared. In particular, these limits were shown to be sensitive to the number of particles used and to the starting point at which the simulation was originally conducted.
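    The reweighting step along an isochore is plain Boltzmann reweighting of the sampled configurations: each configuration's contribution to an ensemble average is rescaled by exp(-(β₁-β₀)E). A minimal sketch with toy Gaussian energies, not an actual Lennard-Jones run:

```python
import numpy as np

rng = np.random.default_rng(42)
kB = 1.0                                 # reduced units

# Pretend these are per-configuration potential energies sampled by a
# canonical Monte Carlo run at temperature T0 (toy Gaussian energies).
T0, T1 = 1.0, 1.05
E = rng.normal(-50.0, 3.0, 100000)

# Boltzmann reweighting: <A>_T1 = sum A*exp(-(b1-b0)E) / sum exp(-(b1-b0)E)
db = 1.0 / (kB * T1) - 1.0 / (kB * T0)
logw = -db * E
w = np.exp(logw - logw.max())            # shift for numerical stability
E_T1 = np.sum(w * E) / np.sum(w)
print(E_T1)
```

    The extrapolation range is limited by overlap: once the reweighted distribution sits in the tail of the sampled one, the effective sample size collapses, which is why the paper supplements reweighting with reconstruction for larger jumps.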

  6. TARC: Carlo Rubbia's Energy Amplifier

    CERN Multimedia

    Laurent Guiraud

    1997-01-01

    Transmutation by Adiabatic Resonance Crossing (TARC) is Carlo Rubbia's energy amplifier. This CERN experiment demonstrated that long-lived fission fragments, such as {sup 99}Tc, can be efficiently destroyed.

  7. Simulation of ventilation efficiency, and pre-closure temperatures in emplacement drifts at Yucca Mountain, Nevada, using Monte Carlo and composite thermal-pulse methods

    Science.gov (United States)

    Case, J.B.; Buesch, D.C.

    2004-01-01

    Predictions of waste canister and repository driftwall temperatures as functions of space and time are important to evaluate the pre-closure performance of the proposed repository for spent nuclear fuel and high-level radioactive waste at Yucca Mountain, Nevada. Variations in the lithostratigraphic features in the densely welded and crystallized rocks of the 12.8-million-year-old Topopah Spring Tuff, especially the porosity resulting from lithophysal cavities, affect thermal properties. A simulated emplacement drift is based on projecting lithophysal cavity porosity values 50 to 800 m from the Enhanced Characterization of the Repository Block cross drift. Lithophysal cavity porosity varies from 0.00 to 0.05 cm3/cm3 in the middle nonlithophysal zone and from 0.03 to 0.28 cm3/cm3 in the lower lithophysal zone. A ventilation model and computer program titled "Monte Carlo Simulation of Ventilation" (MCSIMVENT), which is based on a composite thermal-pulse calculation, simulates the statistical variability and uncertainty of rock-mass thermal properties and ventilation performance along a simulated emplacement drift for a pre-closure period of 50 years. Although ventilation efficiency is relatively insensitive to thermal properties, variations in lithophysal porosity along the drift can result in peak driftwall temperatures ranging from 40 to 85 °C over the pre-closure period. Copyright © 2004 by ASME.

  8. Random frog: an efficient reversible jump Markov Chain Monte Carlo-like approach for variable selection with applications to gene selection and disease classification.

    Science.gov (United States)

    Li, Hong-Dong; Xu, Qing-Song; Liang, Yi-Zeng

    2012-08-31

    The identification of disease-relevant genes represents a challenge in microarray-based disease diagnosis, where the sample size is often limited. Among established methods, reversible jump Markov Chain Monte Carlo (RJMCMC) methods have proven to be quite promising for variable selection. However, the design and application of an RJMCMC algorithm requires, for example, special criteria for prior distributions. Also, the simulation from joint posterior distributions of models is computationally extensive, and may even be mathematically intractable. These disadvantages may limit the applications of RJMCMC algorithms. Therefore, the development of algorithms that possess the advantages of RJMCMC methods and are also efficient and easy to follow for selecting disease-associated genes is required. Here we report an RJMCMC-like method, called random frog, that possesses the advantages of RJMCMC methods and is much easier to implement. Using the colon and the estrogen gene expression datasets, we show that random frog is effective in identifying discriminating genes. The top two ranked genes are Z50753 and U00968 for the colon dataset, and Y10871_at and Z22536_at for the estrogen dataset. (The source code, under the GNU General Public License Version 2.0, is freely available to non-commercial users at: http://code.google.com/p/randomfrog/.)

  9. Monte Carlo analysis of the influence of germanium dead layer thickness on the HPGe gamma detector experimental efficiency measured by use of extended sources.

    Science.gov (United States)

    Chham, E; García, F Piñero; El Bardouni, T; Ferro-García, M Angeles; Azahra, M; Benaalilou, K; Krikiz, M; Elyaakoubi, H; El Bakkali, J; Kaddour, M

    2014-09-22

    We have carried out a study to determine the influence of the crystal inactive-layer thickness on gamma spectra measured by an HPGe detector. The thickness of this dead layer (DL) is not known (no information about it was delivered by the manufacturer) due to the existence of a transition zone where photons are increasingly absorbed. To perform this analysis, a virtual model of a Canberra HPGe detector was produced with the aid of the MCNPX 2.7 code. The main objective of this work is to produce an optimal model of our HPGe detector. To this end, the study included the analysis of the total inactive germanium-layer thickness and the active volume needed in order to obtain the smallest discrepancy between calculated and experimental efficiencies. Calculations and measurements were performed for all of the radionuclides included in a standard calibration gamma cocktail solution. Different source geometries were used: a Marinelli beaker and two other new sources, denoted S(1) and S(2). The former was used for the determination of the active volume, whereas the two latter were used for the determination of the face and lateral DL, respectively. The model was validated by comparing calculated and experimental full-energy-peak efficiencies in the 50-1900 keV energy range. The results show that the insertion of the DL parameter in the model is absolutely essential to reproduce the experimental results, and that the thickness of this DL varies from one position to another on the detector surface.
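    The sensitivity that makes the dead-layer thickness fittable at all comes from exponential attenuation of incoming photons in the inactive germanium, which suppresses low-energy peak efficiencies most strongly. A minimal sketch; the mass attenuation coefficient used here is an illustrative round number, not a tabulated Ge value.

```python
import math

def dl_transmission(mu_rho_cm2_g, density_g_cm3, thickness_um):
    """Fraction of photons surviving a germanium dead layer of given
    thickness (simple exponential attenuation, normal incidence)."""
    t_cm = thickness_um * 1e-4
    return math.exp(-mu_rho_cm2_g * density_g_cm3 * t_cm)

rho_ge = 5.323                  # density of germanium, g/cm^3
for t in (300, 600, 900):       # trial dead-layer thicknesses in micrometres
    print(t, round(dl_transmission(2.0, rho_ge, t), 3))
```

    Because the attenuation coefficient falls steeply with energy, the dead layer barely affects high-energy peaks while suppressing low-energy ones, which is exactly the signature used to fit its thickness against measured efficiencies.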

  10. Kinetic Monte Carlo simulation of the efficiency roll-off, emission color, and degradation of organic light-emitting diodes (Presentation Recording)

    Science.gov (United States)

    Coehoorn, Reinder; van Eersel, Harm; Bobbert, Peter A.; Janssen, Rene A. J.

    2015-10-01

The performance of Organic Light Emitting Diodes (OLEDs) is determined by a complex interplay of the charge transport and excitonic processes in the active layer stack. We have developed a three-dimensional kinetic Monte Carlo (kMC) OLED simulation method which includes all these processes in an integral manner. The method employs a physically transparent mechanistic approach, and is based on measurable parameters. All processes can be followed with molecular-scale spatial resolution and with sub-nanosecond time resolution, for any layer structure and any mixture of materials. In the talk, applications to the efficiency roll-off, emission color and lifetime of white and monochrome phosphorescent OLEDs [1,2] are demonstrated, and a comparison with experimental results is given. The simulations show to what extent triplet-polaron quenching (TPQ) and triplet-triplet annihilation (TTA) contribute to the roll-off, and how the microscopic parameters describing these processes can be properly deduced from dedicated experiments. Degradation is treated as a result of the (accelerated) conversion of emitter molecules to non-emissive sites upon a triplet-polaron quenching (TPQ) process. The degradation rate, and hence the device lifetime, is shown to depend on the emitter concentration and on the precise type of TPQ process. Results for both single-doped and co-doped OLEDs are presented, revealing that the kMC simulations enable efficient simulation-assisted layer stack development. [1] H. van Eersel et al., Appl. Phys. Lett. 105, 143303 (2014). [2] R. Coehoorn et al., Adv. Funct. Mater. (2015), publ. online (DOI: 10.1002/adfm.201402532)
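The event-selection core of a kinetic Monte Carlo simulator of this kind can be sketched in a few lines: pick one event with probability proportional to its rate, then advance time by an exponentially distributed waiting time. The event list and rates below are illustrative assumptions, not the authors' parameter set.

```python
# Minimal kinetic Monte Carlo (Gillespie-type) step; toy rates only.
import math
import random

def kmc_step(rates, rng=random):
    """Choose one event with probability proportional to its rate and
    return (event_index, waiting_time)."""
    total = sum(rates)
    r = rng.random() * total
    cumulative = 0.0
    for i, rate in enumerate(rates):
        cumulative += rate
        if r < cumulative:
            break
    dt = -math.log(rng.random()) / total  # exponential waiting time
    return i, dt

# Toy example: charge hop, radiative decay, triplet-polaron quenching
rates = [1e9, 1e6, 1e4]
event, dt = kmc_step(rates)
```

A full simulator repeats this step for every molecular site, updating the rate table after each executed event.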

  11. Euromet action 428: transfer of ge detectors efficiency calibration from point source geometry to other geometries; Action euromet 428: transfert de l'etalonnage en rendement de detecteurs au germanium pour une source ponctuelle vers d'autres geometries

    Energy Technology Data Exchange (ETDEWEB)

    Lepy, M.Ch

    2000-07-01

The EUROMET project 428 examines efficiency transfer computation for Ge gamma-ray spectrometers when the efficiency is known for a reference point-source geometry, in the 60 keV to 2 MeV energy range. For this, different methods are used, such as Monte Carlo simulation or semi-empirical computation. The exercise compares the application of these methods to the same selected experimental cases to determine their usage limitations versus the requested accuracy. To allow a careful examination of the results and to derive information for improving the computation codes, this study was limited to a few simple cases, starting from an experimental efficiency calibration for a point source at 10 cm source-to-detector distance. The first part concerns the simplest case of geometry transfer, i.e., using point sources at three source-to-detector distances: 2, 5 and 20 cm; the second part deals with the transfer from point-source geometry to cylindrical geometry with three different matrices. The general results show that the deviations between the computed results and the measured efficiencies are for the most part within 10%. The quality of the results is rather inhomogeneous and shows that these codes cannot be used directly for metrological purposes. However, most of them are operational for routine measurements when efficiency uncertainties of 5-10% are sufficient. (author)
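A minimal sketch of the efficiency-transfer idea: scale the reference point-source efficiency by the ratio of solid angles between the new and reference geometries. Here a bare geometric solid angle for an on-axis point source and a disk-shaped detector face stands in for the full effective solid angle; a real transfer code also folds in attenuation and detector response. The detector radius and efficiency values are invented.

```python
# Toy efficiency transfer between point-source distances (illustrative only).
import math

def disk_solid_angle(distance_cm, radius_cm):
    """Solid angle (sr) subtended by a disk at an on-axis point."""
    return 2 * math.pi * (1 - distance_cm / math.hypot(distance_cm, radius_cm))

def transfer_efficiency(eff_ref, d_ref_cm, d_new_cm, det_radius_cm):
    """Transfer a reference efficiency (measured at d_ref) to a new distance."""
    return eff_ref * disk_solid_angle(d_new_cm, det_radius_cm) / \
           disk_solid_angle(d_ref_cm, det_radius_cm)

# Illustrative: reference calibration at 10 cm, transferred to 20 cm
eff_20 = transfer_efficiency(eff_ref=0.012, d_ref_cm=10.0,
                             d_new_cm=20.0, det_radius_cm=3.0)
```

The project's comparison essentially measures how much the full codes improve on such purely geometric scaling.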

  12. Distributed Radio Interferometric Calibration

    CERN Document Server

    Yatawatta, Sarod

    2015-01-01

Increasing data volumes delivered by a new generation of radio interferometers require computationally efficient and robust calibration algorithms. In this paper, we propose distributed calibration as a way of improving both the computational cost and the robustness of calibration. We exploit the data parallelism across frequency that is inherent in radio astronomical observations, which are recorded as multiple channels at different frequencies. Moreover, we also exploit the smoothness of the variation of calibration parameters across frequency. Data parallelism enables us to distribute the computing load across a network of compute agents. Smoothness in frequency enables us to reformulate calibration as a consensus optimization problem. With this formulation, we enable the flow of information between compute agents calibrating data at different frequencies, without actually passing the data, and thereby improve robustness. We present simulation results to show the feasibility as well as the advantages of distributed calibration.
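The consensus formulation can be caricatured with a toy ADMM loop: each "compute agent" fits a gain from its own frequency channel, and only the gain estimates, never the data, are exchanged through a shared consensus variable. The linear gain model and the synthetic data below are invented and far simpler than real interferometric calibration.

```python
# Toy consensus (ADMM) calibration across frequency channels; invented data.
import random

def local_fit(x, y, rho, z, u):
    """Gain g minimizing sum((y_k - g*x_k)^2) + (rho/2)*(g - z + u)^2."""
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    return (2 * sxy + rho * (z - u)) / (2 * sxx + rho)

def consensus_calibrate(channels, rho=1.0, iters=50):
    """ADMM consensus: agents share only their current gain estimates."""
    u = [0.0] * len(channels)
    z = 0.0
    for _ in range(iters):
        g = [local_fit(x, y, rho, z, ui)
             for (x, y), ui in zip(channels, u)]           # local solves
        z = sum(gi + ui for gi, ui in zip(g, u)) / len(g)  # consensus update
        u = [ui + gi - z for ui, gi in zip(u, g)]          # dual update
    return z

# Synthetic "channels", all generated with the same true gain of 2.0
rng = random.Random(0)
channels = []
for _ in range(4):
    x = [rng.uniform(0.5, 1.5) for _ in range(20)]
    y = [2.0 * xi + rng.gauss(0.0, 0.05) for xi in x]
    channels.append((x, y))
z = consensus_calibrate(channels)
```

In the paper's setting the shared variable is a smooth function of frequency rather than a single constant, but the local-solve / consensus / dual-update cycle is the same.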

  13. Efficient chain moves for Monte Carlo simulations of a wormlike DNA model: excluded volume, supercoils, site juxtapositions, knots, and comparisons with random-flight and lattice models.

    Science.gov (United States)

    Liu, Zhirong; Chan, Hue Sun

    2008-04-14

We develop two classes of Monte Carlo moves for efficient sampling of wormlike DNA chains that can have significant degrees of supercoiling, a conformational feature that is key to many aspects of biological function including replication, transcription, and recombination. One class of moves entails reversing the coordinates of a segment of the chain along one, two, or three axes of an appropriately chosen local frame of reference. These transformations may be viewed as a generalization, to the continuum, of the Madras-Orlitsky-Shepp algorithm for cubic lattices. Another class of moves, termed T+/-2, allows for interconversions between chains with different lengths by adding or subtracting two beads (monomer units) to or from the chain. Length-changing moves are generally useful for conformational sampling with a given site juxtaposition, as has been shown in previous lattice studies. Here, the continuum T+/-2 moves are designed to enhance their acceptance rate in supercoiled conformations. We apply these moves to a wormlike model in which excluded volume is accounted for by a bond-bond repulsion term. The computed autocorrelation functions for the relaxation of bond length, bond angle, writhe, and branch number indicate that the new moves lead to significantly more efficient sampling than conventional bead displacements and crankshaft rotations. A close correspondence is found in the equilibrium ensemble between the map of writhe computed for pairs of chain segments and the map of site juxtapositions or self-contacts. To evaluate the more coarse-grained freely jointed chain (random-flight) and cubic lattice models that are commonly used in DNA investigations, twisting (torsional) potentials are introduced into these models. Conformational properties for a given superhelical density sigma may then be sampled by computing the writhe and using White's formula to relate the degree of twisting to writhe and sigma. Extensive comparisons of contact patterns and knot

  14. Peak efficiency calibration for attenuation corrected cylindrical sources in gamma ray spectrometry by the use of a point source.

    Science.gov (United States)

    Aguiar, Julio C; Galiano, Eduardo; Fernandez, Jorge

    2006-12-01

A theoretical method of determining the gamma-ray peak efficiency for a cylindrical source, based on a modified expression for point sources, is derived. A term for the photon self-attenuation is included in the calculation. The method is valid for any source material as long as the source activity concentration is considered to be homogeneous. Results of this expression are checked against experimental data obtained with (241)Am, (57)Co, (137)Cs, and (60)Co sources.

  15. Peak efficiency calibration for attenuation corrected cylindrical sources in gamma ray spectrometry by the use of a point source

    Energy Technology Data Exchange (ETDEWEB)

    Aguiar, Julio C. [Autoridad Regulatoria Nuclear, Laboratorio de Espectrometria Gamma, Centro Atomico Ezeiza, B1802AYA, Buenos Aires (Argentina); Galiano, Eduardo [Department of Physics, Laurentian University, Sudbury, Ont., P3E 2C6 (Canada)]. E-mail: egalianoriveros@laurentian.ca; Fernandez, Jorge [Autoridad Regulatoria Nuclear, Laboratorio de Espectrometria Gamma, Centro Atomico Ezeiza, B1802AYA, Buenos Aires (Argentina)

    2006-12-15

A theoretical method of determining the gamma-ray peak efficiency for a cylindrical source, based on a modified expression for point sources, is derived. A term for the photon self-attenuation is included in the calculation. The method is valid for any source material as long as the source activity concentration is considered to be homogeneous. Results of this expression are checked against experimental data obtained with {sup 241}Am, {sup 57}Co, {sup 137}Cs, and {sup 60}Co sources.
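The self-attenuation ingredient of such a method can be illustrated with the standard averaged-transmission factor (1 - e^(-mu*h))/(mu*h) for a homogeneous source viewed along its axis. This particular factor and the parameter values below are illustrative assumptions, not the authors' exact expression.

```python
# Illustrative self-attenuation correction for a homogeneous cylindrical source.
import math

def self_attenuation_factor(mu_linear_cm, height_cm):
    """Average transmission through a homogeneous source of given height
    viewed along its axis: (1 - exp(-mu*h)) / (mu*h)."""
    x = mu_linear_cm * height_cm
    return (1.0 - math.exp(-x)) / x if x > 0 else 1.0

def cylinder_efficiency(point_eff, mu_linear_cm, height_cm):
    """Point-source efficiency corrected for self-attenuation in the source."""
    return point_eff * self_attenuation_factor(mu_linear_cm, height_cm)

# Example: water-like matrix at 662 keV (mu ~ 0.086 1/cm), 5 cm tall source
eff = cylinder_efficiency(point_eff=0.01, mu_linear_cm=0.0857, height_cm=5.0)
```

For denser matrices or lower energies the correction grows quickly, which is why a homogeneity assumption matters.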

  16. Tau Reconstruction, Energy Calibration and Identification at ATLAS

    CERN Document Server

    Trottier-McDonald, M; The ATLAS collaboration

    2011-01-01

Tau leptons play a central role in the LHC physics programme, in particular as an important signature in many Higgs boson and Supersymmetry searches. They are further used in Standard Model electroweak measurements, as well as in detector-related studies such as the determination of the missing transverse energy scale. Copious backgrounds from QCD processes call for both efficient identification of hadronically decaying tau leptons and large fake rejection. A solid understanding of the combined performance of the calorimeter and tracking detectors is also required. We present the current status of the tau reconstruction, energy calibration and identification with the ATLAS detector at the LHC. Identification efficiencies are measured in Wtaunu events in data and compared with predictions from Monte Carlo simulations, whereas the misidentification probabilities of QCD jets and electrons are determined from various jet-enriched data samples and from Zee events, respectively. The tau energy scale calibration i...

  17. Calibration of forcefields for molecular simulation: sequential design of computer experiments for building cost-efficient kriging metamodels.

    Science.gov (United States)

    Cailliez, Fabien; Bourasseau, Arnaud; Pernot, Pascal

    2014-01-15

We present a global strategy for molecular simulation forcefield optimization, using recent advances in Efficient Global Optimization algorithms. During the course of the optimization process, probabilistic kriging metamodels are used that predict molecular simulation results for a given set of forcefield parameter values. This enables a thorough investigation of parameter space, and a global search for the minimum of a score function that properly integrates the relevant uncertainty sources. Additional information about the forcefield parameters is obtained that is inaccessible with standard optimization strategies. In particular, the uncertainty on the optimal forcefield parameters can be estimated and transferred to simulation predictions. This global optimization strategy is benchmarked on the TIP4P water model.
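Efficient Global Optimization drives its sequential design with an acquisition function. The sketch below implements the classic Expected Improvement (EI) criterion for minimization, assuming the kriging metamodel already supplies a predictive mean and standard deviation at a candidate point; the numbers are illustrative.

```python
# Expected Improvement acquisition for a minimization problem.
import math

def expected_improvement(mu, sigma, best_so_far):
    """EI at a candidate point given the kriging prediction (mu, sigma)."""
    if sigma <= 0:
        return max(best_so_far - mu, 0.0)
    z = (best_so_far - mu) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)  # standard normal pdf
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))         # standard normal cdf
    return (best_so_far - mu) * Phi + sigma * phi

# A candidate with a worse mean but large uncertainty can still win:
ei_safe = expected_improvement(mu=1.0, sigma=0.01, best_so_far=1.0)
ei_bold = expected_improvement(mu=1.2, sigma=0.5, best_so_far=1.0)
```

This exploitation/exploration balance is what lets the strategy spend expensive molecular simulations only where they are most informative.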

  18. Efficient solution methodology for calibrating the hemodynamic model using functional Magnetic Resonance Imaging (fMRI) measurements

    KAUST Repository

    Zambri, Brian

    2015-11-05

Our aim is to propose a numerical strategy for accurately and efficiently retrieving the biophysiological parameters as well as the external stimulus characteristics corresponding to the hemodynamic mathematical model that describes changes in blood flow and blood oxygenation during brain activation. The proposed approach employs the TNM-CKF method developed in [1], but in a prediction/correction framework. We present numerical results using both real and synthetic functional Magnetic Resonance Imaging (fMRI) measurements to highlight the performance characteristics of this computational methodology. © 2015 IEEE.

  19. Calibration uncertainty

    DEFF Research Database (Denmark)

    Heydorn, Kaj; Anglov, Thomas

    2002-01-01

    Methods recommended by the International Standardization Organisation and Eurachem are not satisfactory for the correct estimation of calibration uncertainty. A novel approach is introduced and tested on actual calibration data for the determination of Pb by ICP-AES. The improved calibration unce...

  20. Calibration of a gamma ray astronomy telescope in the 5-50 MeV energy range

    Energy Technology Data Exchange (ETDEWEB)

    Lavigne, J.M.; Niel, M.; Vedrenne, G.; Doulade, C.; Giordano, G. (Toulouse-3 Univ., 31 (France). Centre d' Etude Spatiale des Rayonnements); Agrinier, B.; Bonfand, E.; Andrejol, J.; Courtois, J.C.; Gorisse, M. (CEA Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France))

    1982-08-15

The Agathe experiment is a 1 m{sup 2} spark chamber sensitive in the 5-50 MeV energy range, designed for use on a stratospheric balloon. The different methods used to determine the efficiency, angular resolution and energy are described. Efficiency calibrations are made in gamma-ray beams at different energies. The results are verified by a Monte Carlo calculation and then extended to higher energies. The photon energy is determined by calibrations in electron beams and in gamma-ray beams; the two methods are compared and discussed.

  1. Monte Carlo molecular simulations: improving the statistical efficiency of samples with the help of artificial evolution algorithms; Simulations moleculaires de Monte Carlo: amelioration de l'efficacite statistique de l'echantillonnage grace aux algorithmes d'evolution artificielle

    Energy Technology Data Exchange (ETDEWEB)

    Leblanc, B.

    2002-03-01

Molecular simulation aims at simulating particles in interaction, describing a physico-chemical system. When considering Markov Chain Monte Carlo sampling in this context, we often meet the same problem of statistical efficiency as with Molecular Dynamics for the simulation of complex molecules (polymers for example). The search for a correct sampling of the space of possible configurations with respect to the Boltzmann-Gibbs distribution is directly related to the statistical efficiency of such algorithms (i.e. their ability to rapidly provide uncorrelated states covering the whole configuration space). We investigated how to improve this efficiency with the help of Artificial Evolution (AE). AE algorithms form a class of stochastic optimization algorithms inspired by Darwinian evolution. Efficiency measures that can be turned into efficiency criteria were first established, before identifying parameters that could be optimized. The relative frequencies of the different types of Monte Carlo moves, usually chosen empirically within reasonable ranges, were considered first. We combined parallel simulations with a 'genetic server' in order to dynamically improve the quality of the sampling as the simulations progress. Our results show that, in comparison with some reference settings, it is possible to improve the quality of samples with respect to the chosen criterion. The same algorithm was applied to improve the Parallel Tempering technique, in order to optimize at the same time the relative frequencies of Monte Carlo moves and the relative frequencies of swapping between sub-systems simulated at different temperatures. Finally, hints for further research on optimizing the choice of additional temperatures are given. (author)
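The idea of evolving Monte Carlo move frequencies can be caricatured with a tiny elitist genetic algorithm. The quadratic "sampling efficiency" score below is a made-up stand-in for a measured efficiency criterion, and the target mix of move frequencies is invented.

```python
# Toy genetic algorithm tuning relative frequencies of three MC move types.
import random

def normalize(freqs):
    s = sum(freqs)
    return [f / s for f in freqs]

def score(freqs):
    """Invented stand-in: pretend moves mixed near (0.5, 0.3, 0.2) sample best."""
    target = (0.5, 0.3, 0.2)
    return -sum((f - t) ** 2 for f, t in zip(freqs, target))

def evolve(pop_size=30, generations=60, rng=random):
    pop = [normalize([rng.random() for _ in range(3)]) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)          # elitist selection
        survivors = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(survivors)):
            parent = rng.choice(survivors)
            child = [max(1e-6, f + rng.gauss(0.0, 0.05)) for f in parent]
            children.append(normalize(child))      # mutate and renormalize
        pop = survivors + children
    return max(pop, key=score)

best = evolve(rng=random.Random(3))
```

In the paper's setting, evaluating the score requires actually running the parallel simulations, which is why a "genetic server" orchestrates the evaluations.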

  2. Calibration with MCNP of NaI detector for the determination of natural radioactivity levels in the field.

    Science.gov (United States)

    Cinelli, Giorgia; Tositti, Laura; Mostacci, Domiziano; Baré, Jonathan

    2016-05-01

In view of assessing natural radioactivity with on-site quantitative gamma spectrometry, the efficiency calibration of NaI(Tl) detectors is investigated. A calibration based on Monte Carlo simulation of the detector response is proposed, to render reliable quantitative analysis practicable in field campaigns. The method is developed with reference to contact geometry, in which measurements are taken placing the NaI(Tl) probe directly against the solid source to be analyzed. The Monte Carlo code used for the simulations was MCNP. Experimental verification of the calibration accuracy is obtained by comparison with appropriate standards, as reported. On-site measurements yield a quick quantitative assessment of the natural radioactivity levels present ((40)K, (238)U and (232)Th). On-site gamma spectrometry can prove particularly useful insofar as it provides information on materials from which samples cannot be taken.

  3. ALTEA: The instrument calibration

    Energy Technology Data Exchange (ETDEWEB)

    Zaconte, V. [INFN and University of Rome Tor Vergata, Department of Physics, Via della Ricerca Scientifica 1, 00133 Rome (Italy)], E-mail: livio.narici@roma2.infn.it; Belli, F.; Bidoli, V.; Casolino, M.; Di Fino, L.; Narici, L.; Picozza, P.; Rinaldi, A. [INFN and University of Rome Tor Vergata, Department of Physics, Via della Ricerca Scientifica 1, 00133 Rome (Italy); Sannita, W.G. [DISM, University of Genova, Genova (Italy); Department of Psychiatry, SUNY, Stoony Brook, NY (United States); Finetti, N.; Nurzia, G.; Rantucci, E.; Scrimaglio, R.; Segreto, E. [Department of Physics, University and INFN, L' Aquila (Italy); Schardt, D. [GSI/Biophysik, Darmstadt (Germany)

    2008-05-15

The ALTEA program is an international and multi-disciplinary project aimed at studying particle radiation in the space environment and its effects on astronauts' brain functions, such as the anomalous perception of light flashes first reported during the Apollo missions. The ALTEA space facility includes a particle detector composed of six silicon telescopes and has been onboard the International Space Station (ISS) since July 2006. In this paper, the detector calibration at the heavy-ion synchrotron SIS18 at GSI Darmstadt is presented and compared to the Geant 3 Monte Carlo simulation. Finally, the results of a neural network analysis used for ion discrimination on fragmentation data are also presented.

  4. Fusion yield measurements on JET and their calibration

    Energy Technology Data Exchange (ETDEWEB)

    Syme, D.B., E-mail: brian.syme@ccfe.ac.uk [EURATOM-CCFE Fusion Association, Culham Science Centre, Abingdon, OXON OX14 3DB (United Kingdom); Popovichev, S. [EURATOM-CCFE Fusion Association, Culham Science Centre, Abingdon, OXON OX14 3DB (United Kingdom); Conroy, S. [EURATOM-VR Association, Department of Physics and Astronomy, Uppsala University, Box 516, SE-75120 Uppsala (Sweden); Lengar, I.; Snoj, L. [EURATOM-MHEST Association, Reactor Physics Division, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Sowden, C. [EURATOM-CCFE Fusion Association, Culham Science Centre, Abingdon, OXON OX14 3DB (United Kingdom); Giacomelli, L. [EURATOM-ENEA-CNR Association, CNR-IFP and Univ. di Milano-Bicocca, Milan (Italy); Hermon, G.; Allan, P.; Macheta, P.; Plummer, D.; Stephens, J. [EURATOM-CCFE Fusion Association, Culham Science Centre, Abingdon, OXON OX14 3DB (United Kingdom); Batistoni, P. [EURATOM-ENEA Association, Via E. Fermi,40, 00044 Frascati (Italy); Prokopowicz, R.; Jednorog, S. [EURATOM-IPPLM Association, Institute of Plasma Physics and Laser Microfusion, Hery 23, 01-497 Warsaw (Poland); Abhangi, M.R.; Makwana, R. [Institute for Plasma Research, Bhat, Gandhinagar, 382 428 Gujarat (India)

    2014-11-15

The power output of fusion experiments and fusion reactor-like devices is measured in terms of the neutron yields which relate directly to the fusion yield. In this paper we describe the devices and methods used to make the new in situ calibration of JET in April 2013 and its early results. The target accuracy of this calibration was 10%, just as in the earlier JET calibration and as required for ITER, where a precise neutron yield measurement is important, e.g., for tritium accountancy. We discuss the constraints and early decisions which defined the main calibration approach, e.g., the choice of source type and the deployment method. We describe the physics, source issues, safety and engineering aspects required to calibrate directly the Fission Chambers and the Activation System which carry the JET neutron calibration. In particular a direct calibration of the Activation System was planned for the first time in JET. We used the existing JET remote-handling system to deploy the {sup 252}Cf source and developed the compatible tooling and systems necessary to ensure safe and efficient deployment in these cases. The scientific programme has sought to better understand the limitations of the calibration, to optimise the measurements and other provisions, to provide corrections for perturbing factors (e.g., presence of the remote-handling boom and other non-standard torus conditions) and to ensure personnel safety and safe working conditions. Much of this work has been based on an extensive programme of Monte-Carlo calculations which, e.g., revealed a potential contribution to the neutron yield via a direct line of sight through the ports, which differs from port to port depending on the details of the port geometry.

  5. CREATION OF FEMALE COMPUTATIONAL PHANTOMS FOR CALIBRATION OF LUNG COUNTERS.

    Science.gov (United States)

    Lombardo, Pasquale Alessandro; Lebacq, Anne Laure; Vanhavere, Filip

    2016-09-01

Plutonium isotopes are of high concern because they lead to high doses. In case of contamination, the activity burden inside the lungs should be assessed accurately. Many studies have shown that the presence of breasts has a substantial influence on lung counting efficiencies. Currently, the calibration of most lung counting systems is done by means of physical phantoms representing only male chests. A set of female computational phantoms has been developed in order to provide gender-specific efficiency calibrations for the (241)Am gamma emission (59.54 keV). The phantoms were created starting from a library of female chest phantoms provided by the Institut de radioprotection et de sûreté nucléaire (IRSN) (Farah, J. Amélioration des mesures anthroporadiamétriques personnalisées assistées par calcul Monte Carlo: optimisation des temps de calculs et méthodologie de mesure pour l'établissement de la répartition d'activité. PhD Thesis, 2011). While the IRSN phantoms represent a supine measurement position, the SCK•CEN lung counter set-up requires the person to sit in a chair. Using open-source software, the breast shapes of the original phantoms were recreated to simulate the drooping of the breasts in the vertical sitting position. A Monte Carlo approach was chosen for calculating calibration coefficients for female lung counting. The results obtained with MCNPX 2.7 simulations showed a significant decrease in the detection efficiency. For bigger bust and breast sizes, the detection efficiency was shown to be up to 10 times lower than that measured with the Livermore male torso phantom.

  6. Improvements in the simulation of the efficiency of a HPGe detector with Monte Carlo code MCNP5; Mejoras en la simulacion de la eficiencia de un detector HPGe con el codigo Monte Carlo MCNP5

    Energy Technology Data Exchange (ETDEWEB)

    Gallardo, S.; Querol, A.; Rodenas, J.; Verdu, G.

    2014-07-01

In this paper we propose a simulation model using the MCNP5 code and a superimposed mesh tally to improve the simulated efficiency of the detector in the energy range from 50 to 2000 keV. The mesh is built with the FMESH tally of the MCNP5 code, which allows cells of a few microns. The photon and electron fluxes are calculated in the different cells of the mesh, which is superimposed on the detector geometry. The variation of the efficiency (related to the variation of the energy deposited in the active volume) is analyzed. (Author)

  7. Neutronic analysis for in situ calibration of ITER in-vessel neutron flux monitor with microfission chamber

    Energy Technology Data Exchange (ETDEWEB)

    Ishikawa, Masao, E-mail: ishikawa.masao@jaea.go.jp [Fusion Research and Development Directorate, Japan Atomic Energy Agency, Ibaraki 311-0193 (Japan); Kondoh, Takashi; Kusama, Yoshinori [Fusion Research and Development Directorate, Japan Atomic Energy Agency, Ibaraki 311-0193 (Japan); Bertalot, Luciano [ITER Organization, Route de Vinon sur Verdon, 13115 Saint Paul Lez Durance (France)

    2013-10-15

Highlights: ► Neutronic analysis is performed for in situ calibration of the microfission chamber (MFC). ► The source transfer system designed in this study does not affect MFC detection efficiency. ► The rotation method is appropriate for full calibration because the calibration time is shorter. ► But the point-by-point method should be performed to check the accuracy of the MCNP model. ► A combination of the two methods is important to perform in situ calibration efficiently. -- Abstract: Neutronic analysis is performed for in situ calibration of the microfission chamber (MFC), which is the in-vessel neutron-flux monitor at the International Thermonuclear Experimental Reactor (ITER). We present the design of the transfer system for a neutron generator, which consists of two toroidal rings and a neutron-generator holder, and estimate the effect of the system on MFC detection efficiency through neutronic analysis with the Monte Carlo N-particle (MCNP) code. The result indicates that the designed transfer system does not affect MFC detection efficiency. In situ calibrations by the point-by-point method and by the rotation method are also simulated and compared by neutronic analysis. The results indicate that the rotation method is appropriate for full calibration because the calibration time is shorter (all neutron-flux monitors can be calibrated simultaneously). However, the rotation method makes it difficult to compare the results with neutronic analysis, so the point-by-point method should be performed prior to full calibration to check the accuracy of the MCNP model.

  8. Planned missing-data designs in experience-sampling research: Monte Carlo simulations of efficient designs for assessing within-person constructs.

    Science.gov (United States)

    Silvia, Paul J; Kwapil, Thomas R; Walsh, Molly A; Myin-Germeys, Inez

    2014-03-01

    Experience-sampling research involves trade-offs between the number of questions asked per signal, the number of signals per day, and the number of days. By combining planned missing-data designs and multilevel latent variable modeling, we show how to reduce the items per signal without reducing the number of items. After illustrating different designs using real data, we present two Monte Carlo studies that explored the performance of planned missing-data designs across different within-person and between-person sample sizes and across different patterns of response rates. The missing-data designs yielded unbiased parameter estimates but slightly higher standard errors. With realistic sample sizes, even designs with extensive missingness performed well, so these methods are promising additions to an experience-sampler's toolbox.
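A planned missing-data form assignment of the kind the study simulates can be sketched as follows: at each signal, the participant answers a fixed core set of items plus a random rotating subset of the remaining pool, so every item is measured across the protocol without every item appearing at every signal. The item-pool size, core set, and items-per-signal below are invented for illustration.

```python
# Toy planned missing-data ("form sampling") scheme for experience sampling.
import random

def assign_forms(item_pool, core_items, per_signal, n_signals, rng=random):
    """Return, per signal, the core items plus a random rotating subset."""
    optional = [i for i in item_pool if i not in core_items]
    k = per_signal - len(core_items)
    return [core_items + rng.sample(optional, k) for _ in range(n_signals)]

# 12 items total; 3 core items always asked; 7 items shown per signal
items = [f"item{i}" for i in range(12)]
forms = assign_forms(items, core_items=items[:3], per_signal=7,
                     n_signals=40, rng=random.Random(0))
```

Multilevel latent variable models then treat the unasked items at each signal as missing completely at random by design.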

  9. RUN DMC: An efficient, parallel code for analyzing Radial Velocity Observations using N-body Integrations and Differential Evolution Markov chain Monte Carlo

    CERN Document Server

    Nelson, Benjamin E; Payne, Matthew J

    2013-01-01

In the 20+ years of Doppler observations of stars, scientists have uncovered a diverse population of extrasolar multi-planet systems. A common technique for characterizing the orbital elements of these planets is Markov chain Monte Carlo (MCMC), using a Keplerian model with random walk proposals paired with the Metropolis-Hastings algorithm. For approximately a couple of dozen planetary systems with Doppler observations, there are strong planet-planet interactions due to the system being in or near a mean-motion resonance (MMR). An N-body model is often required to accurately describe these systems. Further computational difficulties arise from exploring a high-dimensional parameter space (~7 x number of planets) that can have complex parameter correlations. To surmount these challenges, we introduce a differential evolution MCMC (DEMCMC) applied to radial velocity data while incorporating self-consistent N-body integrations. Our Radial velocity Using N-body DEMCMC (RUN DMC) algorithm improves upon t...
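The proposal at the heart of differential evolution MCMC perturbs one walker along the difference of two other randomly chosen walkers, which automatically adapts proposals to parameter correlations. The sketch below applies it to a 1-D standard normal target, a toy stand-in for the paper's high-dimensional N-body posterior; the tuning constants are illustrative.

```python
# Minimal differential-evolution MCMC (DE-MC) sweep on a toy 1-D target.
import math
import random

def demc_step(walkers, log_post, gamma=0.7, eps=1e-4, rng=random):
    """One sweep over all walkers (mutated in place), Metropolis-accepted."""
    n = len(walkers)
    for i in range(n):
        a, b = rng.sample([j for j in range(n) if j != i], 2)
        proposal = walkers[i] + gamma * (walkers[a] - walkers[b]) \
                   + rng.gauss(0.0, eps)
        if math.log(rng.random()) < log_post(proposal) - log_post(walkers[i]):
            walkers[i] = proposal
    return walkers

log_post = lambda x: -0.5 * x * x          # standard normal target (toy)
rng = random.Random(1)
walkers = [rng.uniform(-3, 3) for _ in range(10)]
for _ in range(2000):
    demc_step(walkers, log_post, rng=rng)
```

In RUN DMC, evaluating `log_post` would itself require an N-body integration, which is why the algorithm is parallelized.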

  10. Fast sequential Monte Carlo methods for counting and optimization

    CERN Document Server

    Rubinstein, Reuven Y; Vaisman, Radislav

    2013-01-01

    A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the

  11. Calibration of the Super-Kamiokande Detector

    CERN Document Server

    Abe, K; Iida, T; Iyogi, K; Kameda, J; Kishimoto, Y; Koshio, Y; Marti, Ll; Miura, M; Moriyama, S; Nakahata, M; Nakano, Y; Nakayama, S; Obayashi, Y; Sekiya, H; Shiozawa, M; Suzuki, Y; Takeda, A; Takenaga, Y; Tanaka, H; Tomura, T; Ueno, K; Wendell, R A; Yokozawa, T; Irvine, T J; Kaji, H; Kajita, T; Kaneyuki, K; Lee, K P; Nishimura, Y; Okumura, K; McLachlan, T; Labarga, L; Kearns, E; Raaf, J L; Stone, J L; Sulak, L R; Berkman, S; Tanaka, H A; Tobayama, S; Goldhaber, M; Bays, K; Carminati, G; Kropp, W R; Mine, S; Renshaw, A; Smy, M B; Sobel, H W; Ganezer, K S; Hill, J; Keig, W E; Jang, J S; Kim, J Y; Lim, I T; Hong, N; Akiri, T; Albert, J B; Himmel, A; Scholberg, K; Walter, C W; Wongjirad, T; Ishizuka, T; Tasaka, S; Learned, J G; Matsuno, S; Smith, S N; Hasegawa, T; Ishida, T; Ishii, T; Kobayashi, T; Nakadaira, T; Nakamura, K; Nishikawa, K; Oyama, Y; Sakashita, K; Sekiguchi, T; Tsukamoto, T; Suzuki, A T; Takeuchi, Y; Huang, K; Ieki, K; Ikeda, M; Kikawa, T; Kubo, H; Minamino, A; Murakami, A; Nakaya, T; Otani, M; Suzuki, K; Takahashi, S; Fukuda, Y; Choi, K; Itow, Y; Mitsuka, G; Miyake, M; Mijakowski, P; Tacik, R; Hignight, J; Imber, J; Jung, C K; Taylor, I; Yanagisawa, C; Idehara, Y; Ishino, H; Kibayashi, A; Mori, T; Sakuda, M; Yamaguchi, R; Yano, T; Kuno, Y; Kim, S B; Yang, B S; Okazawa, H; Choi, Y; Nishijima, K; Koshiba, M; Totsuka, Y; Yokoyama, M; Martens, K; Vagins, M R; Martin, J F; de Perio, P; Konaka, A; Wilking, M J; Chen, S; Heng, Y; Sui, H; Yang, Z; Zhang, H; Zhenwei, Y; Connolly, K; Dziomba, M; Wilkes, R J

    2013-01-01

    Procedures and results on hardware level detector calibration in Super-Kamiokande (SK) are presented in this paper. In particular, we report improvements made in our calibration methods for the experimental phase IV in which new readout electronics have been operating since 2008. The topics are separated into two parts. The first part describes the determination of constants needed to interpret the digitized output of our electronics so that we can obtain physical numbers such as photon counts and their arrival times for each photomultiplier tube (PMT). In this context, we developed an in-situ procedure to determine high-voltage settings for PMTs in large detectors like SK, as well as a new method for measuring PMT quantum efficiency and gain in such a detector. The second part describes the modeling of the detector in our Monte Carlo simulation, including in particular the optical properties of its water target and their variability over time. Detailed studies on the water quality are also presented. As a re...

  12. Equilibrium Statistics: Monte Carlo Methods

    Science.gov (United States)

    Kröger, Martin

Monte Carlo methods use random numbers, or ‘random’ sequences, to sample from a known shape of a distribution, or to extract a distribution by other means, and, in the context of this book, to (i) generate representative equilibrated samples prior to being subjected to external fields, or (ii) evaluate high-dimensional integrals. Recipes for both topics, and some more general methods, are summarized in this chapter. It is important to realize that Monte Carlo should be as artificial as possible to be efficient and elegant. Advanced Monte Carlo ‘moves’, required to optimize the speed of algorithms for a particular problem at hand, are outside the scope of this brief introduction. One particular modern example is the wavelet-accelerated MC sampling of polymer chains [406].
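A minimal Metropolis chain illustrates the "sample from a known distribution" use named above, here drawing from exp(-E(x)/kT) for a 1-D harmonic energy at kT = 1; the step size and energy are illustrative choices.

```python
# Minimal Metropolis Monte Carlo sampler for a 1-D Boltzmann distribution.
import math
import random

def metropolis(energy, x0, n_steps, step=0.5, kT=1.0, rng=random):
    """Standard Metropolis chain: accept moves with min(1, exp(-dE/kT))."""
    x, samples = x0, []
    for _ in range(n_steps):
        trial = x + rng.uniform(-step, step)
        dE = energy(trial) - energy(x)
        if dE <= 0 or rng.random() < math.exp(-dE / kT):
            x = trial
        samples.append(x)
    return samples

# Harmonic energy E(x) = x^2/2 at kT = 1 -> samples follow a unit normal
samples = metropolis(lambda x: 0.5 * x * x, x0=0.0, n_steps=50_000,
                     rng=random.Random(0))
```

The "advanced moves" mentioned in the chapter replace the naive uniform displacement with problem-specific proposals while keeping this same accept/reject skeleton.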

  13. Efficiency

    NARCIS (Netherlands)

    I.P. van Staveren (Irene)

    2009-01-01

    textabstractThe dominant economic theory, neoclassical economics, employs a single economic evaluative criterion: efficiency. Moreover, it assigns this criterion a very specific meaning. Other – heterodox – schools of thought in economics tend to use more open concepts of efficiency, related to comm

  14. Improvement of personalized Monte Carlo-aided direct internal contamination monitoring: optimization of calculation times and measurement methodology for the establishment of activity distribution; Amelioration des mesures anthroporadiametriques personnalisees assistees par calcul Monte Carlo: optimisation des temps de calculs et methodologie de mesure pour l'etablissement de la repartition d'activite

    Energy Technology Data Exchange (ETDEWEB)

    Farah, Jad

    2011-10-06

    To optimize the monitoring of female workers using in vivo spectrometry measurements, it is necessary to correct the typical calibration coefficients obtained with the Livermore male physical phantom. To do so, numerical calibrations based on Monte Carlo simulations combined with anthropomorphic 3D phantoms were used. Such computational calibrations require, on the one hand, the development of representative female phantoms of different sizes and morphologies and, on the other hand, rapid and reliable Monte Carlo calculations. A library of female torso models was hence developed by fitting the weight of internal organs and breasts according to the body height and to relevant plastic surgery recommendations. This library was next used to perform a numerical calibration of the AREVA NC La Hague in vivo counting installation. Moreover, the morphology-induced counting efficiency variations with energy were put into equation and recommendations were given to correct the typical calibration coefficients for any monitored female worker as a function of body height and breast size. Meanwhile, variance reduction techniques and geometry simplification operations were considered to accelerate the simulations. Furthermore, to determine the activity mapping in the case of complex contaminations, a method that combines Monte Carlo simulations with in vivo measurements was developed. This method consists of performing several spectrometry measurements with different detector positions. Next, the contribution of each contaminated organ to the count is assessed from Monte Carlo calculations. The in vivo measurements performed at LEDI, CIEMAT and KIT have demonstrated the effectiveness of the method and highlighted the valuable contribution of Monte Carlo simulations for a more detailed analysis of spectrometry measurements. Thus, a more precise estimate of the activity distribution is given in the case of an internal contamination. (author)

  15. Accurate and efficient radiation transport in optically thick media -- by means of the Symbolic Implicit Monte Carlo method in the difference formulation

    Energy Technology Data Exchange (ETDEWEB)

    Szoke, A; Brooks, E D; McKinley, M; Daffin, F

    2005-03-30

    The equations of radiation transport for thermal photons are notoriously difficult to solve in thick media without resorting to asymptotic approximations such as the diffusion limit. One source of this difficulty is that in thick, absorbing media thermal emission is almost completely balanced by strong absorption. In a previous publication [SB03], the photon transport equation was written in terms of the deviation of the specific intensity from the local equilibrium field. We called the new form of the equations the difference formulation. The difference formulation is rigorously equivalent to the original transport equation. It is particularly advantageous in thick media, where the radiation field approaches local equilibrium and the deviations from the Planck distribution are small. The difference formulation for photon transport also clarifies the diffusion limit. In this paper, the transport equation is solved by the Symbolic Implicit Monte Carlo (SIMC) method and a comparison is made between the standard formulation and the difference formulation. The SIMC method is easily adapted to the derivative source terms of the difference formulation, and a remarkable reduction in noise is obtained when the difference formulation is applied to problems involving thick media.

  16. A new experimental procedure for determination of photoelectric efficiency of a NaI(Tl) detector used for nuclear medicine liquid waste monitoring with traceability to a reference standard radionuclide calibrator.

    Science.gov (United States)

    Ceccatelli, A; Campanella, F; Ciofetta, G; Marracino, F M; Cannatà, V

    2010-02-01

    To determine photopeak efficiency for (99m)Tc of the NaI(Tl) detector used for liquid waste monitoring at the Nuclear Medicine Unit of IRCCS Paediatric Hospital Bambino Gesù in Rome, a specific experimental procedure, with traceability to primary standards, was developed. Working with the Italian National Institute for Occupational Prevention and Safety, two different calibration source geometries were employed and the detector response dependence on geometry was investigated. The large percentage difference (almost 40%) between the two efficiency values obtained showed that geometrical effects cannot be neglected.

  17. Ibis ground calibration

    Energy Technology Data Exchange (ETDEWEB)

    Bird, A.J.; Barlow, E.J.; Tikkanen, T. [Southampton Univ., School of Physics and Astronomy (United Kingdom); Bazzano, A.; Del Santo, M.; Ubertini, P. [Istituto di Astrofisica Spaziale e Fisica Cosmica - IASF/CNR, Roma (Italy); Blondel, C.; Laurent, P.; Lebrun, F. [CEA Saclay - Sap, 91 - Gif sur Yvette (France); Di Cocco, G.; Malaguti, E. [Istituto di Astrofisica Spaziale e Fisica-Bologna - IASF/CNR (Italy); Gabriele, M.; La Rosa, G.; Segreto, A. [Istituto di Astrofisica Spaziale e Fisica- IASF/CNR, Palermo (Italy); Quadrini, E. [Istituto di Astrofisica Spaziale e Fisica-Cosmica, EASF/CNR, Milano (Italy); Volkmer, R. [Institut fur Astronomie und Astrophysik, Tubingen (Germany)

    2003-11-01

    We present an overview of results obtained from IBIS ground calibrations. The spectral and spatial characteristics of the detector planes and surrounding passive materials have been determined through a series of calibration campaigns. Measurements of pixel gain, energy resolution, detection uniformity, efficiency and imaging capability are presented. The key results obtained from the ground calibration have been:
    - optimization of the instrument tunable parameters,
    - determination of energy linearity for all detection modes,
    - determination of energy resolution as a function of energy through the range 20 keV - 3 MeV,
    - demonstration of imaging capability in each mode,
    - measurement of intrinsic detector non-uniformity and understanding of the effects of passive materials surrounding the detector plane, and
    - discovery (and closure) of various leakage paths through the passive shielding system.

  18. TARGETLESS CAMERA CALIBRATION

    Directory of Open Access Journals (Sweden)

    L. Barazzetti

    2012-09-01

    In photogrammetry a camera is considered calibrated if its interior orientation parameters are known. These encompass the principal distance, the principal point position and some Additional Parameters used to model possible systematic errors. The current state of the art for automated camera calibration relies on the use of coded targets to accurately determine the image correspondences. This paper presents a new methodology for the efficient and rigorous photogrammetric calibration of digital cameras which no longer requires the use of targets. A set of images depicting a well-textured scene is sufficient for the extraction of natural corresponding image points. These are automatically matched with feature-based approaches and robust estimation techniques. The subsequent photogrammetric bundle adjustment retrieves the unknown camera parameters and their theoretical accuracies. Examples, considerations and comparisons with real data and different case studies are illustrated to show the potential of the proposed methodology.

  19. Extended Ensemble Monte Carlo

    OpenAIRE

    Iba, Yukito

    2000-01-01

    ``Extended Ensemble Monte Carlo'' is a generic term that indicates a set of algorithms which are now popular in a variety of fields in physics and statistical information processing. Exchange Monte Carlo (Metropolis-Coupled Chain, Parallel Tempering), Simulated Tempering (Expanded Ensemble Monte Carlo), and Multicanonical Monte Carlo (Adaptive Umbrella Sampling) are typical members of this family. Here we give a cross-disciplinary survey of these algorithms with special emphasis on the great f...
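
    A minimal sketch of Exchange Monte Carlo (parallel tempering) on a double-well potential; the energy function, step size and temperature ladder are illustrative choices, not taken from the survey:

```python
import math, random

def exchange_mc(energy, betas, n_sweeps, step=0.5, seed=1):
    """Exchange Monte Carlo: one Metropolis walker per inverse temperature,
    with swaps proposed between neighbouring temperatures so that cold
    chains inherit barrier crossings made by hot chains."""
    rng = random.Random(seed)
    xs = [0.0] * len(betas)          # one configuration per temperature
    cold_samples = []                # samples from the coldest chain
    for _ in range(n_sweeps):
        # Local Metropolis update in every chain.
        for i, beta in enumerate(betas):
            prop = xs[i] + rng.uniform(-step, step)
            d_e = energy(prop) - energy(xs[i])
            if d_e <= 0.0 or rng.random() < math.exp(-beta * d_e):
                xs[i] = prop
        # Replica exchange: accept with prob min(1, exp((b_i - b_j)(E_i - E_j))).
        for i in range(len(betas) - 1):
            delta = (betas[i] - betas[i + 1]) * (energy(xs[i]) - energy(xs[i + 1]))
            if delta >= 0.0 or rng.random() < math.exp(delta):
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
        cold_samples.append(xs[0])
    return cold_samples

# Double well with a barrier of ~20 k_B T at beta = 1: a single cold walker
# essentially never crosses, but with hot chains both wells are visited.
double_well = lambda x: (x * x - 1.0) ** 2 / 0.05
samples = exchange_mc(double_well, betas=[1.0, 0.5, 0.25, 0.1], n_sweeps=20_000)
print(min(samples) < -0.5 < 0.5 < max(samples))
```

    The swap move preserves the joint distribution over all chains, so the cold chain still samples its own Boltzmann distribution while mixing far faster than plain Metropolis.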

  20. Monte Carlo simulation of gamma-ray interactions in an over-square high-purity germanium detector for in-vivo measurements

    Science.gov (United States)

    Saizu, Mirela Angela

    2016-09-01

    The developments of high-purity germanium detectors match very well the requirements of the in-vivo human body measurements regarding the gamma energy ranges of the radionuclides intended to be measured, the shape of the extended radioactive sources, and the measurement geometries. The Whole Body Counter (WBC) from IFIN-HH is based on an “over-square” high-purity germanium detector (HPGe) to perform accurate measurements of the incorporated radionuclides emitting X and gamma rays in the energy range of 10 keV-1500 keV, under conditions of good shielding, suitable collimation, and calibration. As an alternative to the experimental efficiency calibration method consisting of using reference calibration sources with gamma energy lines that cover all the considered energy range, it is proposed to use the Monte Carlo method for the efficiency calibration of the WBC using the radiation transport code MCNP5. The HPGe detector was modelled and the gamma energy lines of 241Am, 57Co, 133Ba, 137Cs, 60Co, and 152Eu were simulated in order to obtain the virtual efficiency calibration curve of the WBC. The Monte Carlo method was validated by comparing the simulated results with the experimental measurements using point-like sources. For their optimum matching, the impact of the variation of the front dead layer thickness and of the detector photon absorbing layers materials on the HPGe detector efficiency was studied, and the detector’s model was refined. In order to perform the WBC efficiency calibration for realistic people monitoring, more numerical calculations were generated simulating extended sources of specific shape according to the standard man characteristics.
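
    The virtual efficiency calibration described above amounts to fitting a smooth curve through simulated full-energy-peak efficiencies. A minimal sketch of such a fit, using the common log-log power-law parametrisation and made-up efficiency values (not those of the WBC study):

```python
import math

def fit_loglog(energies_kev, efficiencies):
    """Least-squares straight-line fit of ln(efficiency) vs ln(energy),
    a standard parametrisation of HPGe peak efficiency above ~150 keV:
    eff(E) ~ a * E**b."""
    xs = [math.log(e) for e in energies_kev]
    ys = [math.log(f) for f in efficiencies]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Illustrative (made-up) simulated peak efficiencies at typical calibration
# gamma lines; in practice these would come from the MCNP5 tallies.
energies = [122, 245, 344, 662, 779, 964, 1112, 1408]   # keV
effs = [0.012, 0.0072, 0.0055, 0.0032, 0.0028, 0.0023, 0.0021, 0.0017]
a, b = fit_loglog(energies, effs)
print(f"eff(E) ~ {a:.3f} * E^{b:.2f}")   # b is negative: efficiency falls with E
```

    The fitted curve can then be evaluated at any photon energy of interest, which is exactly what replaces the discrete set of reference-source measurements in a virtual calibration.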

  1. State-of-the-art Monte Carlo 1988

    Energy Technology Data Exchange (ETDEWEB)

    Soran, P.D.

    1988-06-28

    Particle transport calculations in highly dimensional and physically complex geometries, such as detector calibration, radiation shielding, space reactors, and oil-well logging, generally require Monte Carlo transport techniques. Monte Carlo particle transport can be performed on a variety of computers ranging from APOLLOs to VAXs. Some of the hardware and software developments, which now permit Monte Carlo methods to be routinely used, are reviewed in this paper. The development of inexpensive, large, fast computer memory, coupled with fast central processing units, permits Monte Carlo calculations to be performed on workstations, minicomputers, and supercomputers. The Monte Carlo renaissance is further aided by innovations in computer architecture and software development. Advances in vectorization and parallelization architecture have resulted in the development of new algorithms which have greatly reduced processing times. Finally, the renewed interest in Monte Carlo has spawned new variance reduction techniques which are being implemented in large computer codes. 45 refs.

  2. Study on efficiency of time computation in x-ray imaging simulation base on Monte Carlo algorithm using graphics processing unit

    Science.gov (United States)

    Setiani, Tia Dwi; Suprijadi, Haryanto, Freddy

    2016-03-01

    Monte Carlo (MC) is one of the powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and a comparison of the image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run on a CPU in serial, and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that the simulations on the GPU were significantly accelerated compared to the CPU. The simulations on the 2304-core GPU were performed about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU were performed about 20-31 times faster than on a single CPU core. Another result shows that optimum image quality was obtained with the number of histories starting from 10^8 and energies from 60 keV to 90 keV. Analyzed by a statistical approach, the quality of the GPU and CPU images is essentially the same.

  3. Radium needle used to calibrate germanium gamma-ray detector.

    Science.gov (United States)

    Kamboj, S; Lovett, D; Kahn, B; Walker, D

    1993-03-01

    A standard platinum-iridium needle that contains 374 MBq 226Ra was tested as a source for calibrating a portable germanium detector used with a gamma-ray spectrometer for environmental radioactivity measurements. The counting efficiencies of the 11 most intense gamma rays emitted by 226Ra and its short-lived radioactive progeny at energies between 186 and 2,448 keV were determined, at the full energy peaks, to construct a curve of counting efficiency vs. energy. The curve was compared to another curve between 43 and 1,596 keV obtained with a NIST mixed-radionuclide standard. It was also compared to the results of a Monte Carlo simulation. The 226Ra source results were consistent with the NIST standard between 248 and 1,596 keV. The Monte Carlo simulation gave a curve parallel to the curve for the combined radium and NIST standard data between 250 and 2,000 keV, but at higher efficiency.

  4. Bayesian Calibration of Microsimulation Models.

    Science.gov (United States)

    Rutter, Carolyn M; Miglioretti, Diana L; Savarino, James E

    2009-12-01

    Microsimulation models that describe disease processes synthesize information from multiple sources and can be used to estimate the effects of screening and treatment on cancer incidence and mortality at a population level. These models are characterized by simulation of individual event histories for an idealized population of interest. Microsimulation models are complex and invariably include parameters that are not well informed by existing data. Therefore, a key component of model development is the choice of parameter values. Microsimulation model parameter values are selected to reproduce expected or known results through the process of model calibration. Calibration may be done by perturbing model parameters one at a time or by using a search algorithm. As an alternative, we propose a Bayesian method to calibrate microsimulation models that uses Markov chain Monte Carlo. We show that this approach converges to the target distribution and use a simulation study to demonstrate its finite-sample performance. Although computationally intensive, this approach has several advantages over previously proposed methods, including the use of statistical criteria to select parameter values, simultaneous calibration of multiple parameters to multiple data sources, incorporation of information via prior distributions, description of parameter identifiability, and the ability to obtain interval estimates of model parameters. We develop a microsimulation model for colorectal cancer and use our proposed method to calibrate model parameters. The microsimulation model provides a good fit to the calibration data. We find evidence that some parameters are identified primarily through prior distributions. Our results underscore the need to incorporate multiple sources of variability (i.e., due to calibration data, unknown parameters, and estimated parameters and predicted values) when calibrating and applying microsimulation models.
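
    The Markov chain Monte Carlo machinery behind such Bayesian calibration can be sketched with a toy example; the Poisson count model and flat prior below are illustrative stand-ins, not the colorectal cancer microsimulation of the paper:

```python
import math, random

def metropolis_calibrate(loglike, logprior, theta0, n_iter, step=0.1, seed=3):
    """Random-walk Metropolis: draws samples from the posterior
    proportional to prior x likelihood, the sampler underlying
    MCMC-based calibration of simulation models."""
    rng = random.Random(seed)
    theta = theta0
    lp = logprior(theta) + loglike(theta)
    chain = []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = logprior(prop) + loglike(prop)
        if math.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain

# Toy calibration target: an observed count of 30 events from a Poisson
# model with unknown rate theta, under a flat prior on theta > 0.
obs = 30
loglike = lambda t: obs * math.log(t) - t if t > 0 else -math.inf
logprior = lambda t: 0.0 if t > 0 else -math.inf
chain = metropolis_calibrate(loglike, logprior, theta0=10.0, n_iter=20_000, step=2.0)
post_mean = sum(chain[5000:]) / len(chain[5000:])   # discard burn-in
print(round(post_mean))  # posterior mean is near obs + 1 for this model
```

    In a real calibration, `loglike` would run the microsimulation at the proposed parameter values and score its output against the calibration data, which is why the approach is computationally intensive.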

  5. Radio Interferometric Calibration Using a Riemannian Manifold

    CERN Document Server

    Yatawatta, Sarod

    2013-01-01

    In order to cope with the increased data volumes generated by modern radio interferometers such as LOFAR (Low Frequency Array) or SKA (Square Kilometre Array), fast and efficient calibration algorithms are essential. Traditional radio interferometric calibration is performed using nonlinear optimization techniques such as the Levenberg-Marquardt algorithm in Euclidean space. In this paper, we reformulate radio interferometric calibration as a nonlinear optimization problem on a Riemannian manifold. The reformulated calibration problem is solved using the Riemannian trust-region method. We show that calibration on a Riemannian manifold has faster convergence with reduced computational cost compared to conventional calibration in Euclidean space.

  6. Experiments and Monte Carlo modeling of a higher resolution Cadmium Zinc Telluride detector for safeguards applications

    Science.gov (United States)

    Borella, Alessandro

    2016-09-01

    The Belgian Nuclear Research Centre is engaged in R&D activity in the field of Non Destructive Analysis of nuclear materials, with a focus on spent fuel characterization. A 500 mm3 Cadmium Zinc Telluride (CZT) detector with enhanced resolution was recently purchased. With a full width at half maximum of 1.3% at 662 keV, the detector is very promising in view of its use for applications such as determination of uranium enrichment and plutonium isotopic composition, as well as measurements on spent fuel. In this paper, I report on the characterization work done with this detector. The detector energy calibration, peak shape and efficiency were determined from experimental data. The data included measurements with calibrated sources, both in a bare and in a shielded environment. In addition, Monte Carlo calculations with the MCNPX code were carried out and benchmarked against the experiments.

  7. Signal inference with unknown response: calibration uncertainty renormalized estimator

    CERN Document Server

    Dorn, Sebastian; Greiner, Maksim; Selig, Marco; Böhm, Vanessa

    2014-01-01

    The calibration of a measurement device is crucial for every scientific experiment, where a signal has to be inferred from data. We present CURE, the calibration uncertainty renormalized estimator, to reconstruct a signal and simultaneously the instrument's calibration from the same data without knowing the exact calibration, but its covariance structure. The idea of CURE is, starting from an assumed calibration, to successively include more and more portions of calibration uncertainty into the signal inference equations and to absorb the resulting corrections into renormalized signal (and calibration) solutions. Thereby, the signal inference and calibration problem turns into solving a single system of ordinary differential equations and can be identified with common resummation techniques used in field theories. We verify CURE by applying it to a simplistic toy example and compare it against existent self-calibration schemes, Wiener filter solutions, and Markov Chain Monte Carlo sampling. We conclude that the...

  8. Discrete diffusion Monte Carlo for frequency-dependent radiative transfer

    Energy Technology Data Exchange (ETDEWEB)

    Densmore, Jeffrey D [Los Alamos National Laboratory; Kelly, Thompson G [Los Alamos National Laboratory; Urbatish, Todd J [Los Alamos National Laboratory

    2010-11-17

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique.

  9. Internal calibration of gel dosimeters: A feasibility study

    Energy Technology Data Exchange (ETDEWEB)

    Trapp, J V; Kairn, T; Crowe, S; Fielding, A [School of Physical and Chemical Sciences Queensland University of Technology GPO Box 2434, Brisbane 4001 (Australia)], E-mail: j.trapp@qut.edu.au

    2009-05-01

    In this work we test the feasibility of a new calibration method for gel dosimetry. We examine, through Monte Carlo modelling, whether the inclusion of an organic plastic scintillator system at key points within the gel phantom would perturb the dose map. Such a system would remove the requirement for a separate calibration gel, removing many sources of uncertainty.

  10. TU-EF-304-10: Efficient Multiscale Simulation of the Proton Relative Biological Effectiveness (RBE) for DNA Double Strand Break (DSB) Induction and Bio-Effective Dose in the FLUKA Monte Carlo Radiation Transport Code

    Energy Technology Data Exchange (ETDEWEB)

    Moskvin, V; Tsiamas, P; Axente, M; Farr, J [St. Jude Children’s Research Hospital, Memphis, TN (United States); Stewart, R [University of Washington, Seattle, WA. (United States)

    2015-06-15

    Purpose: One of the more critical initiating events for reproductive cell death is the creation of a DNA double strand break (DSB). In this study, we present a computationally efficient way to determine spatial variations in the relative biological effectiveness (RBE) of proton therapy beams within the FLUKA Monte Carlo (MC) code. Methods: We used the independently tested Monte Carlo Damage Simulation (MCDS) developed by Stewart and colleagues (Radiat. Res. 176, 587–602, 2011) to estimate the RBE for DSB induction of monoenergetic protons, tritium, deuterium, helium-3, helium-4 ions and delta-electrons. The dose-weighted RBE coefficients were incorporated into FLUKA to determine the equivalent 60Co γ-ray dose for representative proton beams incident on cells in an aerobic and anoxic environment. Results: We found that the proton beam RBE for DSB induction at the tip of the Bragg peak, including primary and secondary particles, is close to 1.2. Furthermore, the RBE increases laterally to the beam axis in the area of the Bragg peak. At the distal edge, the RBE is in the range 1.3–1.4 for cells irradiated under aerobic conditions and may be as large as 1.5–1.8 for cells irradiated under anoxic conditions. Across the plateau region, the recorded RBE for DSB induction is 1.02 for aerobic cells and 1.05 for cells irradiated under anoxic conditions. The contribution to total effective dose from secondary heavy ions decreases with depth and is higher at shallow depths (e.g., at the surface of the skin). Conclusion: Multiscale simulation of the RBE for DSB induction provides useful insights into spatial variations in proton RBE within pristine Bragg peaks. This methodology is potentially useful for the biological optimization of proton therapy for the treatment of cancer. The study highlights the need to incorporate spatial variations in proton RBE into proton therapy treatment plans.

  11. The ATLAS Electromagnetic Calorimeter Calibration Workshop

    CERN Multimedia

    Hong Ma; Isabelle Wingerter

    The ATLAS Electromagnetic Calorimeter Calibration Workshop took place at LAPP-Annecy from the 1st to the 3rd of October; 45 people attended the workshop. A detailed program was set up before the workshop. The agenda was organised around very focused presentations where questions were raised to allow arguments to be exchanged and answers to be proposed. The main topics were:
    - Electronics calibration
    - Handling of problematic channels
    - Cluster-level corrections for electrons and photons
    - Absolute energy scale
    - Streams for calibration samples
    - Calibration constants processing
    - Learning from commissioning
    The workshop was on the whole lively and fruitful. Based on years of experience with test beam analysis and Monte Carlo simulation, and the recent operation of the detector in the commissioning, the methods to calibrate the electromagnetic calorimeter are well known. Some of the procedures are being exercised in the commisssioning, which have demonstrated the c...

  12. Tau reconstruction, energy calibration and identification at ATLAS

    Indian Academy of Sciences (India)

    Michel Trottier-McDonald; on behalf of the ATLAS Collaboration

    2012-11-01

    Tau leptons play a central role in the LHC physics programme, in particular as an important signature in many Higgs boson and supersymmetry searches. They are further used in Standard Model electroweak measurements, as well as detector-related studies like the determination of the missing transverse energy scale. Copious backgrounds from QCD processes call for both efficient identification of hadronically decaying tau leptons, as well as large suppression of fake candidates. A solid understanding of the combined performance of the calorimeter and tracking detectors is also required. We present the current status of the tau reconstruction, energy calibration and identification with the ATLAS detector at the LHC. Identification efficiencies are measured in Z → ττ events in data and compared with predictions from Monte Carlo simulations, whereas the misidentification probabilities of QCD jets and electrons are determined from various jet-enriched data samples and from Z → ee events, respectively. The tau energy scale calibration is described and systematic uncertainties on both energy scale and identification efficiencies discussed.

  13. Antenna Calibration and Measurement Equipment

    Science.gov (United States)

    Rochblatt, David J.; Cortes, Manuel Vazquez

    2012-01-01

    A document describes the Antenna Calibration & Measurement Equipment (ACME) system that will provide the Deep Space Network (DSN) with instrumentation enabling a trained RF engineer at each complex to perform antenna calibration measurements and to generate antenna calibration data. This data includes continuous-scan auto-bore-based data acquisition with all-sky data gathering in support of 4th-order pointing model generation requirements. Other data includes antenna subreflector focus, system noise temperature and tipping curves, antenna efficiency, system linearity reports, and instrument calibration. The ACME system design is based on the on-the-fly (OTF) mapping technique and architecture. ACME has contributed to improving the RF performance of the DSN by approximately a factor of two. It improved the pointing performance of the DSN antennas and the productivity of its personnel and calibration engineers.

  14. Monte Carlo methods

    OpenAIRE

    Bardenet, R.

    2012-01-01

    ISBN:978-2-7598-1032-1; International audience; Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow these integrals to be computed numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which are rejection sampling, importance sampling and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretic...
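
    One of the algorithms surveyed, self-normalised importance sampling, can be sketched in a few lines; the Gaussian target and proposal below are illustrative choices, not examples from the review:

```python
import math, random

def importance_sampling(f, log_target, sample_proposal, log_proposal, n, seed=7):
    """Self-normalised importance sampling: estimate E_target[f(X)] using
    draws from a tractable proposal, reweighted by target/proposal ratios.
    The target log-density may be unnormalised."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        x = sample_proposal(rng)
        w = math.exp(log_target(x) - log_proposal(x))   # importance weight
        num += w * f(x)
        den += w
    return num / den

# Target: standard normal (unnormalised log-density is fine here);
# proposal: a wider normal with sigma = 2, easy to sample from.
log_target = lambda x: -0.5 * x * x
log_proposal = lambda x: -0.5 * (x / 2.0) ** 2
sample_proposal = lambda rng: rng.gauss(0.0, 2.0)
est = importance_sampling(lambda x: x * x, log_target, sample_proposal,
                          log_proposal, 100_000)
print(round(est, 2))  # second moment of N(0,1): close to 1.0
```

    The proposal must cover the target's support; a proposal narrower than the target leads to weights with huge variance and a useless estimate.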

  15. Exploring Monte Carlo methods

    CERN Document Server

    Dunn, William L

    2012-01-01

    Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle proble
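
    The Buffon's needle problem mentioned above is the classic entry point to Monte Carlo; a minimal simulation of the standard textbook setup (not code from the book):

```python
import math, random

def buffon_pi(n_throws, needle_len=1.0, line_gap=1.0, seed=11):
    """Buffon's needle: drop a needle of length L onto a floor ruled with
    parallel lines a distance d apart (L <= d).  The crossing probability
    is 2L/(pi*d), so pi can be estimated from the observed crossing rate."""
    rng = random.Random(seed)
    crossings = 0
    for _ in range(n_throws):
        y = rng.uniform(0.0, line_gap / 2.0)      # centre distance to nearest line
        theta = rng.uniform(0.0, math.pi / 2.0)   # acute angle with the lines
        if y <= (needle_len / 2.0) * math.sin(theta):
            crossings += 1
    return 2.0 * needle_len * n_throws / (line_gap * crossings)

print(round(buffon_pi(1_000_000), 1))  # ≈ 3.1
```

    With a million throws the estimate is typically within a few thousandths of π, illustrating the slow 1/sqrt(N) convergence that variance reduction techniques are designed to improve.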

  16. Density matrix quantum Monte Carlo

    CERN Document Server

    Blunt, N S; Spencer, J S; Foulkes, W M C

    2013-01-01

    This paper describes a quantum Monte Carlo method capable of sampling the full density matrix of a many-particle system, thus granting access to arbitrary reduced density matrices and allowing expectation values of complicated non-local operators to be evaluated easily. The direct sampling of the density matrix also raises the possibility of calculating previously inaccessible entanglement measures. The algorithm closely resembles the recently introduced full configuration interaction quantum Monte Carlo method, but works all the way from infinite to zero temperature. We explain the theory underlying the method, describe the algorithm, and introduce an importance-sampling procedure to improve the stochastic efficiency. To demonstrate the potential of our approach, the energy and staggered magnetization of the isotropic antiferromagnetic Heisenberg model on small lattices and the concurrence of one-dimensional spin rings are compared to exact or well-established results. Finally, the nature of the sign problem...

  17. Mercury Calibration System

    Energy Technology Data Exchange (ETDEWEB)

    John Schabron; Eric Kalberer; Joseph Rovani; Mark Sanderson; Ryan Boysen; William Schuster

    2009-03-11

    actual capabilities of the current calibration technology. As part of the current effort, WRI worked with Thermo Fisher elemental mercury calibrator units to conduct qualification experiments to demonstrate their performance characteristics under a variety of conditions and to demonstrate that they qualify for use in the CEM calibration program. Monitoring of speciated mercury is another concern of this research. The mercury emissions from coal-fired power plants are comprised of both elemental and oxidized mercury. Current CEM analyzers are designed to measure elemental mercury only. Oxidized mercury must first be converted to elemental mercury prior to entering the analyzer inlet in order to be measured. CEM systems must demonstrate the ability to measure both elemental and oxidized mercury. This requires the use of oxidized mercury generators with an efficient conversion of the oxidized mercury to elemental mercury. There are currently two basic types of mercuric chloride (HgCl2) generators used for this purpose. One is an evaporative HgCl2 generator, which produces gas standards of known concentration by vaporization of aqueous HgCl2 solutions and quantitative mixing with a diluent carrier gas. The other is a device that converts the output from an elemental Hg generator to HgCl2 by means of a chemical reaction with chlorine gas. The Thermo Fisher oxidizer system involves reaction of elemental mercury vapor with chlorine gas at an elevated temperature. The draft interim protocol for oxidized mercury units involving reaction with chlorine gas requires the vendors to demonstrate high efficiency of oxidation of an elemental mercury stream from an elemental mercury vapor generator. The Thermo Fisher oxidizer unit is designed to operate at the power plant stack at the probe outlet. Following oxidation of elemental mercury from reaction with chlorine gas, a high temperature module reduces the mercuric chloride back to elemental mercury. WRI

  18. San Carlo Operaen

    DEFF Research Database (Denmark)

    Holm, Bent

    2005-01-01

    A contextualization of the San Carlo opera house within the cultural history of representation, with particular emphasis on the concept of napolalità.

  19. SAN CARLOS APACHE PAPERS.

    Science.gov (United States)

    ROESSEL, ROBERT A., JR.

    The first section of this book covers the historical and cultural background of the San Carlos Apache Indians, as well as an historical sketch of the development of their formal educational system. The second section is devoted to the problems of teachers of the Indian children in Globe and San Carlos, Arizona. It is divided into three parts--(1)…

  20. Smart detectors for Monte Carlo radiative transfer

    CERN Document Server

    Baes, Maarten

    2008-01-01

    Many optimization techniques have been invented to reduce the noise that is inherent in Monte Carlo radiative transfer simulations. As the typical detectors used in Monte Carlo simulations do not take into account all the information contained in the impacting photon packages, there is still room to optimize this detection process and the corresponding estimate of the surface brightness distributions. We want to investigate how all the information contained in the distribution of impacting photon packages can be optimally used to decrease the noise in the surface brightness distributions and hence to increase the efficiency of Monte Carlo radiative transfer simulations. We demonstrate that the estimate of the surface brightness distribution in a Monte Carlo radiative transfer simulation is similar to the estimate of the density distribution in an SPH simulation. Based on this similarity, a recipe is constructed for smart detectors that take full advantage of the exact location of the impact of the photon pack...
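    The SPH analogy can be made concrete in one dimension. The sketch below (an illustration of the idea, not the paper's actual recipe; the triangular kernel, pixel grid, and packet weights are all assumptions) contrasts a conventional detector, which bins each photon package into the single pixel it hits, with a kernel-smoothed estimate that spreads each package over neighbouring pixels as in SPH density estimation:

```python
import random

def naive_histogram(impacts, weights, edges):
    """Conventional detector: each photon package deposits all of its
    weight into the one pixel it happens to hit."""
    width = edges[1] - edges[0]
    counts = [0.0] * (len(edges) - 1)
    for x, w in zip(impacts, weights):
        i = int((x - edges[0]) / width)
        if 0 <= i < len(counts):
            counts[i] += w / width          # brightness per unit length
    return counts

def kernel_estimate(impacts, weights, edges, h):
    """Kernel-smoothed detector: spread each package over the pixels
    within a smoothing length h, as in SPH density estimation."""
    centers = [(edges[i] + edges[i + 1]) / 2 for i in range(len(edges) - 1)]
    est = [0.0] * len(centers)
    for x, w in zip(impacts, weights):
        for j, c in enumerate(centers):
            q = abs(c - x) / h
            if q < 1.0:                     # triangular kernel, unit integral
                est[j] += w * (1.0 - q) / h
    return est

rng = random.Random(3)
edges = [i * 0.05 for i in range(21)]       # 20 pixels covering [0, 1]
impacts = [rng.uniform(0.3, 0.7) for _ in range(500)]
weights = [1.0] * len(impacts)
hist = naive_histogram(impacts, weights, edges)
smooth = kernel_estimate(impacts, weights, edges, h=0.2)
```

    Both estimators conserve the total packet weight; the smoothed one trades a small, controllable bias for much lower pixel-to-pixel noise, which is the efficiency gain the abstract describes.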

  1. Monte Carlo simulation of the standardization of {sup 22}Na using scintillation detector arrays

    Energy Technology Data Exchange (ETDEWEB)

    Sato, Y., E-mail: yss.sato@aist.go.j [National Metrology Institute of Japan, National Institute of Advanced Industrial Science and Technology, Quantum Radiation Division, Radioactivity and Neutron Section, Tsukuba Central 2, 1-1-1 Umezono, Tsukuba, Ibaraki 305-8568 (Japan); Murayama, H. [National Institute of Radiological Sciences, 4-9-1, Anagawa, Inage, Chiba 263-8555 (Japan); Yamada, T. [Japan Radioisotope Association, 2-28-45, Hon-komagome, Bunkyo, Tokyo 113-8941 (Japan); National Metrology Institute of Japan, National Institute of Advanced Industrial Science and Technology, Quantum Radiation Division, Radioactivity and Neutron Section, Tsukuba Central 2, 1-1-1 Umezono, Tsukuba, Ibaraki 305-8568 (Japan); Tohoku University, 6-6, Aoba, Aramaki, Aoba, Sendai 980-8579 (Japan); Hasegawa, T. [Kitasato University, 1-15-1, Kitasato, Sagamihara, Kanagawa 228-8555 (Japan); Oda, K. [Tokyo Metropolitan Institute of Gerontology, 1-1 Nakacho, Itabashi-ku, Tokyo 173-0022 (Japan); Unno, Y.; Yunoki, A. [National Metrology Institute of Japan, National Institute of Advanced Industrial Science and Technology, Quantum Radiation Division, Radioactivity and Neutron Section, Tsukuba Central 2, 1-1-1 Umezono, Tsukuba, Ibaraki 305-8568 (Japan)

    2010-07-15

    In order to calibrate PET devices with a sealed point source, we devised an absolute activity measurement method for the sealed point source using scintillation detector arrays. This new method was verified by EGS5 Monte Carlo simulation.

  2. Calibration of the RSS-131 high efficiency ionization chamber for radiation dose monitoring during plasma experiments conducted on plasma focus device

    Science.gov (United States)

    Szewczak, Kamil; Jednoróg, Sławomir

    2014-10-01

    Plasma research poses a radiation hazard. As part of the deuterium plasma research program, the PF-1000 device is an intense source of neutrons (up to 10¹¹ n·pulse⁻¹) with an energy of 2.45 MeV, and of ionizing electromagnetic radiation with a broad energy spectrum. Both types of radiation are mostly emitted in ultra-short pulses (~100 ns). The aim of this work was to test and calibrate the RSS-131 radiometer for its application in measurements of ultra-short electromagnetic radiation pulses with a broad energy spectrum emitted during PF-1000 discharge. In addition, the results of raw measurements performed in the control room are presented.

  3. Accounting for Calibration Uncertainties in X-ray Analysis: Effective Areas in Spectral Fitting

    CERN Document Server

    Lee, Hyunsook; van Dyk, David A; Connors, Alanna; Drake, Jeremy J; Izem, Rima; Meng, Xiao-Li; Min, Shandong; Park, Taeyoung; Ratzlaff, Pete; Siemiginowska, Aneta; Zezas, Andreas

    2011-01-01

    While considerable advances have been made in accounting for statistical uncertainties in astronomical analyses, systematic instrumental uncertainties have been generally ignored. This matters for a proper interpretation of analysis results because instrumental calibration uncertainty is a form of systematic uncertainty: ignoring it underestimates error bars and introduces bias into the fitted values of model parameters. Accounting for such uncertainties currently requires extensive case-specific simulations if using existing analysis packages. Here we present general statistical methods that incorporate calibration uncertainties into spectral analysis of high-energy data. We first present a method based on multiple imputation that can be applied with any fitting method, but is necessarily approximate. We then describe a more exact Bayesian approach that works in conjunction with a Markov chain Monte Carlo based fitting. We explore methods for improving computational efficiency, and in particular detail a ...
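    The multiple-imputation idea can be sketched in a few lines: fit the same data several times, each time with a different plausible draw of the calibration product, then pool the fits with Rubin's combining rules so that the spread between draws inflates the final error bar. The toy below is our own scalar stand-in, not the authors' code: a single effective-area number replaces the full curve, and the 5% calibration scatter is an assumed figure.

```python
import random, statistics

def fit_flux(counts, area, exposure):
    """Toy 'spectral fit': source flux and its statistical (Poisson)
    variance, for an assumed effective area."""
    flux = counts / (area * exposure)
    var = counts / (area * exposure) ** 2      # Var[counts] = counts
    return flux, var

def rubin_combine(estimates, within_vars):
    """Rubin's rules: pooled estimate, and total variance =
    within-fit variance + (1 + 1/M) * between-fit variance."""
    m = len(estimates)
    qbar = statistics.fmean(estimates)
    total = statistics.fmean(within_vars) + (1 + 1 / m) * statistics.variance(estimates)
    return qbar, total

random.seed(1)
area_nominal, exposure, true_flux = 100.0, 1000.0, 0.5
mu = true_flux * area_nominal * exposure
counts = random.gauss(mu, mu ** 0.5)           # Gaussian stand-in for Poisson

M = 50                                         # number of imputed calibrations
fits = [fit_flux(counts, area_nominal * random.gauss(1.0, 0.05), exposure)
        for _ in range(M)]
flux_hat, var_hat = rubin_combine([f for f, _ in fits], [v for _, v in fits])
```

    In this toy the calibration term dominates: var_hat is far larger than the purely statistical variance, which is exactly the underestimate the abstract warns about.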

  4. Adaptive Markov chain Monte Carlo forward projection for statistical analysis in epidemic modelling of human papillomavirus.

    Science.gov (United States)

    Korostil, Igor A; Peters, Gareth W; Cornebise, Julien; Regan, David G

    2013-05-20

    A Bayesian statistical model and estimation methodology based on forward projection adaptive Markov chain Monte Carlo is developed in order to perform the calibration of a high-dimensional nonlinear system of ordinary differential equations representing an epidemic model for human papillomavirus types 6 and 11 (HPV-6, HPV-11). The model is compartmental and involves stratification by age, gender and sexual-activity group. Developing this model and a means to calibrate it efficiently is relevant because HPV is a common sexually transmitted infection with more than 100 types currently known, and the two types studied in this paper, types 6 and 11, cause about 90% of anogenital warts. We extend the development of a sexual mixing matrix on the basis of a formulation first suggested by Garnett and Anderson, frequently used to model sexually transmitted infections. In particular, we consider a stochastic mixing matrix framework that allows us to jointly estimate unknown attributes and parameters of the mixing matrix along with the parameters involved in the calibration of the HPV epidemic model. This matrix describes the sexual interactions between members of the population under study and relies on several quantities that are a priori unknown. The Bayesian model developed allows one to estimate jointly the HPV-6 and HPV-11 epidemic model parameters as well as unknown sexual mixing matrix parameters related to assortativity. Finally, we explore the ability of an extension to the class of adaptive Markov chain Monte Carlo algorithms to incorporate a forward projection strategy for the ordinary differential equation state trajectories. Efficient exploration of the Bayesian posterior distribution developed for the ordinary differential equation parameters provides a challenge for any Markov chain sampling methodology, hence the interest in adaptive Markov chain methods. We conclude with simulation studies on synthetic and recent actual data.
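    The core loop of such a calibration can be sketched in miniature. In the stand-in below (our own illustration, not the paper's method), a one-parameter logistic ODE replaces the HPV system, plain Euler integration replaces the ODE solver, and a Robbins-Monro step-size rule targeting roughly 44% acceptance replaces the full adaptive scheme:

```python
import math, random

def solve_logistic(r, ts, y0=0.1, K=1.0, dt=0.01):
    """Euler integration of dy/dt = r*y*(1 - y/K), sampled at times ts."""
    out, y, t = [], y0, 0.0
    for target in ts:
        while t < target:
            y += dt * r * y * (1 - y / K)
            t += dt
        out.append(y)
    return out

def log_post(r, data, ts, sigma=0.05):
    """Gaussian log-likelihood plus a flat prior on (0, 5)."""
    if not 0.0 < r < 5.0:
        return -math.inf
    model = solve_logistic(r, ts)
    return -sum((m - d) ** 2 for m, d in zip(model, data)) / (2 * sigma ** 2)

random.seed(0)
ts = [0.5 * i for i in range(1, 11)]
data = [y + random.gauss(0, 0.05) for y in solve_logistic(0.8, ts)]

r, scale, samples = 1.5, 0.5, []
lp = log_post(r, data, ts)
for i in range(2000):
    prop = r + random.gauss(0, scale)          # random-walk proposal
    lpp = log_post(prop, data, ts)
    accept = math.log(random.random()) < lpp - lp
    if accept:
        r, lp = prop, lpp
    # Robbins-Monro adaptation of the proposal scale toward 44% acceptance
    scale *= math.exp((i + 1) ** -0.6 * ((1.0 if accept else 0.0) - 0.44))
    if i >= 500:                               # discard burn-in
        samples.append(r)

r_hat = sum(samples) / len(samples)            # posterior mean near r = 0.8
```

    The diminishing adaptation factor (i + 1)**-0.6 is what keeps the chain's ergodicity intact while the proposal scale is still being tuned.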

  5. Spectrometric methods used in the calibration of radiodiagnostic measuring instruments

    Energy Technology Data Exchange (ETDEWEB)

    De Vries, W. [Rijksuniversiteit Utrecht (Netherlands)

    1995-12-01

    Recently a set of parameters for checking the quality of radiation for use in diagnostic radiology was established at the calibration facility of the Nederlands Meetinstituut (NMi). The establishment of these radiation qualities required re-evaluation of the correction factors for the primary air-kerma standards. Free-air ionisation chambers require several correction factors to measure air-kerma according to its definition. These correction factors were calculated for the NMi free-air chamber by Monte Carlo simulations for monoenergetic photons in the energy range from 10 keV to 320 keV. The actual correction factors follow from weighting these monoenergetic correction factors with the air-kerma spectrum of the photon beam. This paper describes the determination of the photon spectra of the X-ray qualities used for the calibration of dosimetric instruments used in radiodiagnostics. The detector used for these measurements is a planar HPGe detector, placed in the direct beam of the X-ray machine. To convert the measured pulse-height spectrum to the actual photon spectrum, corrections must be made for fluorescent photon escape, single and multiple Compton scattering inside the detector, and detector efficiency. From the calculated photon spectra a number of parameters of the X-ray beam can be calculated. The calculated first and second half-value layers in aluminum and copper are compared with the measured values of these parameters to validate the method of spectrum reconstruction. Moreover, the spectrum measurements offer the possibility to calibrate the X-ray generator in terms of maximum high voltage: the maximum photon energy in the spectrum is used as a standard for the calibration of kVp-meters.
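    The last step, deriving half-value layers from a calculated spectrum, reduces to solving for the absorber thickness at which the spectrum-weighted transmission halves. A minimal sketch, with a made-up three-bin spectrum and illustrative attenuation coefficients rather than tabulated data:

```python
import math

def transmission(t, weights, mus):
    """Fraction of the (kerma-weighted) beam surviving thickness t (mm)."""
    total = sum(weights)
    return sum(w * math.exp(-mu * t) for w, mu in zip(weights, mus)) / total

def thickness_for(frac, weights, mus, lo=0.0, hi=100.0):
    """Bisection for the absorber thickness with transmission == frac."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if transmission(mid, weights, mus) > frac:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# toy 3-bin spectrum: (kerma weight, linear attenuation coefficient in 1/mm);
# the coefficients are illustrative placeholders, not tabulated Al data
weights = [0.2, 0.5, 0.3]
mus     = [0.30, 0.15, 0.08]

hvl1 = thickness_for(0.5, weights, mus)          # first half-value layer
hvl2 = thickness_for(0.25, weights, mus) - hvl1  # second half-value layer
```

    For any polyenergetic beam hvl2 exceeds hvl1: the softest component is filtered out first, which is the beam hardening that the first/second HVL comparison quantifies.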

  6. MORSE Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.

  7. Calibration of a single hexagonal NaI(Tl) detector using a new numerical method based on the efficiency transfer method

    Energy Technology Data Exchange (ETDEWEB)

    Abbas, Mahmoud I., E-mail: mabbas@physicist.net [Physics Department, Faculty of Science, Alexandria University, 21511 Alexandria (Egypt); Badawi, M.S. [Physics Department, Faculty of Science, Alexandria University, 21511 Alexandria (Egypt); Ruskov, I.N. [Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, 1784 Sofia (Bulgaria); El-Khatib, A.M. [Physics Department, Faculty of Science, Alexandria University, 21511 Alexandria (Egypt); Grozdanov, D.N. [Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, 1784 Sofia (Bulgaria); Thabet, A.A. [Department of Medical Equipment Technology, Faculty of Allied Medical Sciences, Pharos University in Alexandria (Egypt); Kopatch, Yu.N. [Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation); Gouda, M.M. [Physics Department, Faculty of Science, Alexandria University, 21511 Alexandria (Egypt); Skoy, V.R. [Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation)

    2015-01-21

    Gamma-ray detector systems are important instruments in a broad range of science and new setup are continually developing. The most recent step in the evolution of detectors for nuclear spectroscopy is the construction of large arrays of detectors of different forms (for example, conical, pentagonal, hexagonal, etc.) and sizes, where the performance and the efficiency can be increased. In this work, a new direct numerical method (NAM), in an integral form and based on the efficiency transfer (ET) method, is used to calculate the full-energy peak efficiency of a single hexagonal NaI(Tl) detector. The algorithms and the calculations of the effective solid angle ratios for a point (isotropic irradiating) gamma-source situated coaxially at different distances from the detector front-end surface, taking into account the attenuation of the gamma-rays in the detector's material, end-cap and the other materials in-between the gamma-source and the detector, are considered as the core of this (ET) method. The calculated full-energy peak efficiency values by the (NAM) are found to be in a good agreement with the measured experimental data.
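    At the heart of the efficiency-transfer method is a ratio of effective solid angles. Neglecting the attenuation terms the authors include (so this is the purely geometrical limit, with assumed distances and detector radius, not the paper's full NAM calculation), the transfer of a full-energy-peak efficiency from one source-detector distance to another can be sketched as:

```python
import math

def solid_angle_on_axis(d, radius):
    """Solid angle subtended by a circular detector face of the given
    radius at an on-axis point source a distance d from the face."""
    return 2 * math.pi * (1 - d / math.hypot(d, radius))

def transfer_efficiency(eff_ref, d_ref, d_new, radius):
    """Efficiency transfer: scale a reference full-energy-peak efficiency
    by the ratio of (here purely geometrical) solid angles."""
    return eff_ref * solid_angle_on_axis(d_new, radius) / solid_angle_on_axis(d_ref, radius)

# example: efficiency measured at 10 cm, transferred to 20 cm (radius 3 cm);
# all numbers are illustrative
eff_20 = transfer_efficiency(0.012, d_ref=10.0, d_new=20.0, radius=3.0)
```

    The full calculation replaces this closed-form solid angle with an "effective" one: an integral that also weights each photon direction by its attenuation in the end-cap, the detector material, and anything between source and detector.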

  8. Geometric calibration for a SPECT system dedicated to breast imaging

    Institute of Scientific and Technical Information of China (English)

    WU Li-Wei; WEI Long; CAO Xue-Xiang; WANG Lu; HUANG Xian-Chao; CHAI Pei; YUN Ming-Kai; ZHANG Yu-Bao; ZHANG Long; SHAN Bao-Ci

    2012-01-01

    Geometric calibration is critical to accurate SPECT reconstruction. In this paper, a geometric calibration method was developed for a dedicated breast SPECT system with a tilted parallel beam (TPB) orbit. The acquisition geometry of the breast SPECT was first characterized, and its projection model was then established based on the acquisition geometry. Finally, the calibration results were obtained using a nonlinear optimization method that fitted the measured projections to the model. Monte Carlo data of the breast SPECT were used to verify the calibration method. Simulation results showed that geometric parameters with reasonable accuracy could be obtained by the proposed method.

  9. Traceable Pyrgeometer Calibrations

    Energy Technology Data Exchange (ETDEWEB)

    Dooraghi, Mike; Kutchenreiter, Mark; Reda, Ibrahim; Habte, Aron; Sengupta, Manajit; Andreas, Afshin; Newman, Martina

    2016-05-02

    This poster presents the development, implementation, and operation of the Broadband Outdoor Radiometer Calibrations (BORCAL) Longwave (LW) system at the Southern Great Plains Radiometric Calibration Facility for the calibration of pyrgeometers that provide traceability to the World Infrared Standard Group.

  10. Research of Camera Calibration Based on DSP

    Directory of Open Access Journals (Sweden)

    Zheng Zhang

    2013-09-01

    Full Text Available To take advantage of the high efficiency and stability of DSP in data processing, together with the functions of the OpenCV library, this study puts forward a scheme for camera calibration in a DSP embedded system. An algorithm for camera calibration based on OpenCV is designed by analyzing the camera model and lens distortion. The transplantation of EMCV to DSP is completed, and the camera calibration algorithm is migrated and optimized based on the CCS development environment and the DSP/BIOS system. While realizing the calibration function, this algorithm improves the efficiency of program execution and the precision of calibration, and lays the foundation for further research on visual location based on DSP embedded systems.

  11. Calibration of the Cherenkov Telescope Array

    CERN Document Server

    Gaug, Markus; Berge, David; Reyes, Raquel de los; Doro, Michele; Foerster, Andreas; Maccarone, Maria Concetta; Parsons, Dan; van Eldik, Christopher

    2015-01-01

    The construction of the Cherenkov Telescope Array is expected to start soon. We present the baseline methods, and their currently foreseen extensions, for calibrating the observatory. These must meet stringent requirements on the allowed systematic uncertainties of the reconstructed gamma-ray energy and flux scales, as well as on the pointing resolution and the overall duty cycle of the observatory. Onsite calibration activities are designed to include a robust and efficient calibration of the telescope cameras, and various methods and instruments to achieve calibration of the overall optical throughput of each telescope, leading to both inter-telescope calibration and an absolute calibration of the entire observatory. One important aspect of the onsite calibration is a correct understanding of the atmosphere above the telescopes, which constitutes the calorimeter of this detection technique. It is planned to be constantly monitored with state-of-the-art instruments to obtain a full molecular and...

  12. Calibration of sound calibrators: an overview

    Science.gov (United States)

    Milhomem, T. A. B.; Soares, Z. M. D.

    2016-07-01

    This paper presents an overview of calibration of sound calibrators. Initially, traditional calibration methods are presented. Following, the international standard IEC 60942 is discussed emphasizing parameters, target measurement uncertainty and criteria for conformance to the requirements of the standard. Last, Regional Metrology Organizations comparisons are summarized.

  13. Quantum Monte Carlo simulation

    OpenAIRE

    Wang, Yazhen

    2011-01-01

    Contemporary scientific studies often rely on the understanding of complex quantum systems via computer simulation. This paper initiates the statistical study of quantum simulation and proposes a Monte Carlo method for estimating analytically intractable quantities. We derive the bias and variance for the proposed Monte Carlo quantum simulation estimator and establish the asymptotic theory for the estimator. The theory is used to design a computational scheme for minimizing the mean square er...

  14. Monte Carlo transition probabilities

    OpenAIRE

    Lucy, L. B.

    2001-01-01

    Transition probabilities governing the interaction of energy packets and matter are derived that allow Monte Carlo NLTE transfer codes to be constructed without simplifying the treatment of line formation. These probabilities are such that the Monte Carlo calculation asymptotically recovers the local emissivity of a gas in statistical equilibrium. Numerical experiments with one-point statistical equilibrium problems for Fe II and Hydrogen confirm this asymptotic behaviour. In addition, the re...

  15. New radiation protection calibration facility at CERN.

    Science.gov (United States)

    Brugger, Markus; Carbonez, Pierre; Pozzi, Fabio; Silari, Marco; Vincke, Helmut

    2014-10-01

    The CERN radiation protection group has designed a new state-of-the-art calibration laboratory to replace the present facility, which is >20 y old. The new laboratory, presently under construction, will be equipped with neutron and gamma sources, as well as an X-ray generator and a beta irradiator. The present work describes the project to design the facility, including the facility placement criteria, the 'point-zero' measurements and the shielding study performed via FLUKA Monte Carlo simulations.

  16. Preliminary evaluation of a Neutron Calibration Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Alvarenga, Talysson S.; Neves, Lucio P.; Perini, Ana P.; Sanches, Matias P.; Mitake, Malvina B.; Caldas, Linda V.E., E-mail: talvarenga@ipen.br, E-mail: lpneves@ipen.br, E-mail: aperini@ipen.br, E-mail: msanches@ipen.br, E-mail: mbmitake@ipen.br, E-mail: lcaldas@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Federico, Claudio A., E-mail: claudiofederico@ieav.cta.br [Instituto de Estudos Avancados (IEAv/DCTA), Sao Jose dos Campos, SP (Brazil). Dept. de Ciencia e Tecnologia Aeroespacial

    2013-07-01

    In the past few years, Brazil and several other countries in Latin America have experienced a great demand for the calibration of neutron detectors, mainly due to the increase in oil prospecting and extraction. The only laboratory for the calibration of neutron detectors in Brazil is located at the Institute for Radioprotection and Dosimetry (IRD/CNEN), Rio de Janeiro, which is part of the IAEA SSDL network and is the national standard laboratory in Brazil. With the increase in the demand for the calibration of neutron detectors, additional calibration services are needed. In this context, the Calibration Laboratory of IPEN/CNEN, Sao Paulo, which already offers calibration services for radiation detectors with standard X, gamma, beta and alpha beams, has recently designed a new calibration laboratory for neutron detectors. In this work, the ambient dose equivalent rate (H*(10)) was evaluated at several positions inside and around this laboratory, using Monte Carlo simulation (MCNP5 code), in order to verify the adequacy of the shielding. The results showed that the shielding is effective, and that this is a low-cost methodology to improve the safety of the workers and to evaluate the total staff workload. (author)

  17. Quantitative modeling of Cerenkov light production efficiency from medical radionuclides.

    Science.gov (United States)

    Beattie, Bradley J; Thorek, Daniel L J; Schmidtlein, Charles R; Pentlow, Keith S; Humm, John L; Hielscher, Andreas H

    2012-01-01

    There has been recent and growing interest in applying Cerenkov radiation (CR) for biological applications. Knowledge of the production efficiency and other characteristics of the CR produced by various radionuclides would help in assessing the feasibility of proposed applications and guide the choice of radionuclides. To generate this information, we developed models of CR production efficiency based on the Frank-Tamm equation and models of CR distribution based on Monte Carlo simulations of photon and β particle transport. All models were validated against direct measurements using multiple radionuclides and then applied to a number of radionuclides commonly used in biomedical applications. We show that two radionuclides, Ac-225 and In-111, which have been reported to produce CR in water, do not in fact produce CR directly. We also propose a simple means of using this information to calibrate high-sensitivity luminescence imaging systems and show evidence suggesting that this calibration may be more accurate than methods currently in routine use.
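    The Frank-Tamm part of such a model fits in a few lines. The sketch below is our own illustration (the 400-700 nm band and n = 1.33 for water are assumed values): it gives the photon yield per millimetre of electron path and reproduces the threshold, roughly 264 keV for electrons in water, below which no Cerenkov light is emitted at all.

```python
import math

ALPHA = 1 / 137.036          # fine-structure constant
M_E_KEV = 511.0              # electron rest energy [keV]

def beta_from_ke(ke_kev):
    """Electron speed v/c for a given kinetic energy in keV."""
    gamma = 1 + ke_kev / M_E_KEV
    return math.sqrt(1 - 1 / gamma ** 2)

def cerenkov_photons_per_mm(ke_kev, n=1.33, lam1=400e-9, lam2=700e-9):
    """Frank-Tamm photon yield per mm of path in the band [lam1, lam2] (m):
    dN/dx = 2*pi*alpha * (1/lam1 - 1/lam2) * (1 - 1/(beta*n)^2)."""
    beta = beta_from_ke(ke_kev)
    if beta * n <= 1.0:
        return 0.0           # below the Cerenkov threshold: no light
    per_m = 2 * math.pi * ALPHA * (1 / lam1 - 1 / lam2) * (1 - 1 / (beta * n) ** 2)
    return per_m * 1e-3
```

    The threshold behaviour is the mechanism behind the paper's finding that some reported "Cerenkov emitters" produce no CR directly: their electrons simply never exceed the phase velocity of light in water.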

  18. Spectral calibration for convex grating imaging spectrometer

    Science.gov (United States)

    Zhou, Jiankang; Chen, Xinhua; Ji, Yiqun; Chen, Yuheng; Shen, Weimin

    2013-12-01

    Spectral calibration of an imaging spectrometer plays an important role in acquiring accurate target spectra. In essence there are two types of spectral calibration: wavelength scanning and characteristic-line sampling. In wavelength-scanning methods, only the calibrated pixel is used and the spectral response function (SRF) is constructed from the calibrated pixel itself; the different wavelengths are generated by a monochromator. In characteristic-line sampling methods, the SRF is constructed from the pixels adjacent to the calibrated one; the pixels are illuminated by a narrow spectral line whose center wavelength is exactly known. The result of the scanning method is precise, but it requires much time and data, and the method cannot be used in field or space environments. The characteristic-line sampling method is simple, but its calibration precision is not easy to confirm. A standard spectroscopic lamp is used to calibrate our convex grating imaging spectrometer, which has an Offner concentric structure and can supply a high-resolution and uniform spectral signal. A Gaussian fitting algorithm is used to determine the center position and the Full Width at Half Maximum (FWHM) of each characteristic spectral line. The central wavelengths and FWHMs of the spectral pixels are calibrated by cubic polynomial fitting. By setting a fitting-error threshold and discarding the maximum-deviation point, an optimized fit is achieved. Integrated calibration equipment was developed to enhance calibration efficiency. The spectral calibration results from the lamp method were verified by the monochromator wavelength-scanning technique. The results show that the calibration uncertainties of both the FWHM and the center wavelength are less than 0.08 nm, or 5.2% of the spectral FWHM.
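    The Gaussian line-fitting step can be sketched with the classic log-parabola shortcut: the logarithm of a Gaussian is a parabola, so a parabola through the three samples around the maximum gives the centroid and FWHM in closed form. This is a simplified stand-in for the paper's fit, and it assumes a clean, background-free peak:

```python
import math

def gaussian_line_fit(channels, counts):
    """Estimate centroid and FWHM of an emission line by fitting a
    parabola to the log of the three samples around the maximum
    (log-parabola / Caruana-style method; equally spaced channels)."""
    i = max(range(1, len(counts) - 1), key=lambda k: counts[k])
    x0, x1, x2 = channels[i - 1], channels[i], channels[i + 1]
    y0, y1, y2 = (math.log(counts[k]) for k in (i - 1, i, i + 1))
    h = x1 - x0
    # parabola y = a + b*x + c*x^2 through three equally spaced points
    c = (y0 - 2 * y1 + y2) / (2 * h * h)
    b = (y2 - y0) / (2 * h) - 2 * c * x1
    center = -b / (2 * c)                      # vertex of the parabola
    sigma = math.sqrt(-1 / (2 * c))            # log-Gaussian curvature
    return center, 2 * math.sqrt(2 * math.log(2)) * sigma

# synthetic line: Gaussian centered at channel 512.3 with sigma = 2.0
channels = list(range(500, 525))
counts = [1000 * math.exp(-(x - 512.3) ** 2 / (2 * 2.0 ** 2)) for x in channels]
center, fwhm = gaussian_line_fit(channels, counts)
```

    For a noiseless Gaussian the recovery is exact; on real lamp data one would fit more points by least squares and subtract background first, as the paper does.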

  19. Calibration method for a in vivo measurement system using mathematical simulation of the radiation source and the detector; Metodo de calibracao de um sistema de medida in vivo atraves da simulacao matematica da fonte de radiacao e do detector

    Energy Technology Data Exchange (ETDEWEB)

    Hunt, John

    1998-12-31

    A Monte Carlo program which uses a voxel phantom has been developed to simulate in vivo measurement systems for calibration purposes. The calibration method presented here employs a mathematical phantom, produced in the form of volume elements (voxels), obtained through Magnetic Resonance Images of the human body. The calibration method uses the Monte Carlo technique to simulate the tissue contamination, the transport of the photons through the tissues and the detection of the radiation. The program simulates the transport and detection of photons between 0.035 and 2 MeV and uses, for the body representation, a voxel phantom with a format of 871 slices each of 277 x 148 picture elements. The Monte Carlo code was applied to the calibration of in vivo systems and to estimate differences in counting efficiencies between homogeneous and non-homogeneous radionuclide distributions in the lung. Calculations show a factor of 20 between deposition of {sup 241}Am at the back of the lung compared with the front. The program was also used to estimate the {sup 137}Cs body burden of an internally contaminated individual, counted with an 8 x 4 NaI(Tl) detector, and the {sup 241}Am body burden of an internally contaminated individual, who was counted using a planar germanium detector. (author) 24 refs., 38 figs., 23 tabs.

  20. Monte carlo simulations of organic photovoltaics.

    Science.gov (United States)

    Groves, Chris; Greenham, Neil C

    2014-01-01

    Monte Carlo simulations are a valuable tool to model the generation, separation, and collection of charges in organic photovoltaics where charges move by hopping in a complex nanostructure and Coulomb interactions between charge carriers are important. We review the Monte Carlo techniques that have been applied to this problem, and describe the results of simulations of the various recombination processes that limit device performance. We show how these processes are influenced by the local physical and energetic structure of the material, providing information that is useful for design of efficient photovoltaic systems.
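    The hopping transport these simulations model can be illustrated with a one-dimensional kinetic Monte Carlo walk. The sketch below is our own minimal stand-in, not one of the reviewed models: Miller-Abrahams rates on a ring of sites with assumed Gaussian energetic disorder and kT = 25 meV, and no Coulomb interactions.

```python
import math, random

def ma_rate(e_from, e_to, nu0=1.0, kT=0.025):
    """Miller-Abrahams rate for a hop between neighbouring sites (eV):
    uphill hops are Boltzmann-suppressed, downhill hops occur at nu0."""
    de = e_to - e_from
    return nu0 * math.exp(-de / kT) if de > 0 else nu0

def kmc_walk(energies, start, steps, rng):
    """Kinetic Monte Carlo on a ring: pick the left/right hop with
    probability proportional to its rate, and advance time by an
    exponential waiting time 1/total_rate."""
    n, pos, t = len(energies), start, 0.0
    for _ in range(steps):
        left, right = (pos - 1) % n, (pos + 1) % n
        r_left = ma_rate(energies[pos], energies[left])
        r_right = ma_rate(energies[pos], energies[right])
        total = r_left + r_right
        t += -math.log(rng.random()) / total
        pos = left if rng.random() * total < r_left else right
    return pos, t

rng = random.Random(42)
energies = [rng.gauss(0.0, 0.05) for _ in range(201)]  # site energies (eV)
pos, t = kmc_walk(energies, start=100, steps=500, rng=rng)
```

    Charges spend most of their time trapped in low-energy sites, which is how the local energetic structure ends up controlling recombination and collection in the full device-scale simulations.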

  1. Parallel Markov chain Monte Carlo simulations.

    Science.gov (United States)

    Ren, Ruichao; Orkoulas, G

    2007-06-07

    With strict detailed balance, parallel Monte Carlo simulation through domain decomposition cannot be validated with conventional Markov chain theory, which describes an intrinsically serial stochastic process. In this work, the parallel version of Markov chain theory and its role in accelerating Monte Carlo simulations via cluster computing is explored. It is shown that sequential updating is the key to improving efficiency in parallel simulations through domain decomposition. A parallel scheme is proposed to reduce interprocessor communication or synchronization, which slows down parallel simulation with increasing number of processors. Parallel simulation results for the two-dimensional lattice gas model show substantial reduction of simulation time for systems of moderate and large size.

  2. CERN honours Carlo Rubbia

    CERN Multimedia

    2009-01-01

    Carlo Rubbia turned 75 on March 31, and CERN held a symposium to mark his birthday and pay tribute to his impressive contribution to both CERN and science. Carlo Rubbia, 4th from right, together with the speakers at the symposium.On 7 April CERN hosted a celebration marking Carlo Rubbia’s 75th birthday and 25 years since he was awarded the Nobel Prize for Physics. "Today we will celebrate 100 years of Carlo Rubbia" joked CERN’s Director-General, Rolf Heuer in his opening speech, "75 years of his age and 25 years of the Nobel Prize." Rubbia received the Nobel Prize along with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. During the symposium, which was held in the Main Auditorium, several eminent speakers gave lectures on areas of science to which Carlo Rubbia made decisive contributions. Among those who spoke were Michel Spiro, Director of the French National Insti...

  3. Construction of Chinese adult male phantom library and its application in the virtual calibration of in vivo measurement

    Science.gov (United States)

    Chen, Yizheng; Qiu, Rui; Li, Chunyan; Wu, Zhen; Li, Junli

    2016-03-01

    In vivo measurement is a main method of internal contamination evaluation, particularly for large numbers of people after a nuclear accident. Before the practical application, it is necessary to obtain the counting efficiency of the detector by calibration. The virtual calibration based on Monte Carlo simulation usually uses the reference human computational phantom, and the morphological difference between the monitored personnel with the calibrated phantom may lead to the deviation of the counting efficiency. Therefore, a phantom library containing a wide range of heights and total body masses is needed. In this study, a Chinese reference adult male polygon surface (CRAM_S) phantom was constructed based on the CRAM voxel phantom, with the organ models adjusted to match the Chinese reference data. CRAMS phantom was then transformed to sitting posture for convenience in practical monitoring. Referring to the mass and height distribution of the Chinese adult male, a phantom library containing 84 phantoms was constructed by deforming the reference surface phantom. Phantoms in the library have 7 different heights ranging from 155 cm to 185 cm, and there are 12 phantoms with different total body masses in each height. As an example of application, organ specific and total counting efficiencies of Ba-133 were calculated using the MCNPX code, with two series of phantoms selected from the library. The influence of morphological variation on the counting efficiency was analyzed. The results show only using the reference phantom in virtual calibration may lead to an error of 68.9% for total counting efficiency. Thus the influence of morphological difference on virtual calibration can be greatly reduced using the phantom library with a wide range of masses and heights instead of a single reference phantom.

  4. Fast orthogonal transforms for multi-level quasi-Monte Carlo integration

    OpenAIRE

    Irrgeher, Christian; Leobacher, Gunther

    2015-01-01

    We combine a generic method for finding fast orthogonal transforms for a given quasi-Monte Carlo integration problem with the multilevel Monte Carlo method. It is shown by example that this combined method can vastly improve the efficiency of quasi-Monte Carlo.

  5. Quantum speedup of Monte Carlo methods.

    Science.gov (United States)

    Montanaro, Ashley

    2015-09-08

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.

  6. Adiabatic optimization versus diffusion Monte Carlo methods

    Science.gov (United States)

    Jarret, Michael; Jordan, Stephen P.; Lackey, Brad

    2016-10-01

    Most experimental and theoretical studies of adiabatic optimization use stoquastic Hamiltonians, whose ground states are expressible using only real nonnegative amplitudes. This raises the question of whether classical Monte Carlo methods can simulate stoquastic adiabatic algorithms with polynomial overhead. Here we analyze diffusion Monte Carlo algorithms. We argue that, based on differences between L1- and L2-normalized states, these algorithms suffer from certain obstructions preventing them from efficiently simulating stoquastic adiabatic evolution in general. In practice, however, we obtain good performance by introducing a method that we call Substochastic Monte Carlo. In fact, our simulations are good classical optimization algorithms in their own right, competitive with the best previously known heuristic solvers for MAX-k-SAT at k = 2, 3, 4.

  7. Random Numbers and Monte Carlo Methods

    Science.gov (United States)

    Scherer, Philipp O. J.

    Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages, Monte Carlo methods, which sample the integration volume at randomly chosen points, are very useful. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with a given probability distribution, which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by preferentially sampling the important configurations. Finally, the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
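
    The pseudo-random-number generation step mentioned above is commonly done by inverse transform sampling; a minimal sketch (the exponential distribution is chosen here only as an example):

```python
import math
import random

def sample_exponential(lam, rng=random.random):
    """Inverse transform sampling: if U ~ Uniform(0,1), then
    X = -ln(1-U)/lam follows an Exponential(lam) distribution, because
    the CDF F(x) = 1 - exp(-lam*x) inverts to F^-1(u) = -ln(1-u)/lam."""
    u = rng()
    return -math.log(1.0 - u) / lam

random.seed(0)
samples = [sample_exponential(2.0) for _ in range(100_000)]
mean = sum(samples) / len(samples)   # should approach 1/lam = 0.5
```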

  8. The Virtual Monte Carlo

    CERN Document Server

    Hrivnacova, I; Berejnov, V V; Brun, R; Carminati, F; Fassò, A; Futo, E; Gheata, A; Caballero, I G; Morsch, Andreas

    2003-01-01

    The concept of Virtual Monte Carlo (VMC) has been developed by the ALICE Software Project to allow different Monte Carlo simulation programs to run without changing the user code, such as the geometry definition, the detector response simulation or input and output formats. Recently, the VMC classes have been integrated into the ROOT framework, and the other relevant packages have been separated from the AliRoot framework and can be used individually by any other HEP project. The general concept of the VMC and its set of base classes provided in ROOT will be presented. Existing implementations for Geant3, Geant4 and FLUKA and simple examples of usage will be described.

  9. Composite biasing in Monte Carlo radiative transfer

    CERN Document Server

    Baes, Maarten; Lunttila, Tuomas; Bianchi, Simone; Camps, Peter; Juvela, Mika; Kuiper, Rolf

    2016-01-01

    Biasing or importance sampling is a powerful technique in Monte Carlo radiative transfer, and can be applied in different forms to increase the accuracy and efficiency of simulations. One of the drawbacks of the use of biasing is the potential introduction of large weight factors. We discuss a general strategy, composite biasing, to suppress the appearance of large weight factors. We use this composite biasing approach for two different problems faced by current state-of-the-art Monte Carlo radiative transfer codes: the generation of photon packages from multiple components, and the penetration of radiation through high optical depth barriers. In both cases, the implementation of the relevant algorithms is trivial and does not interfere with any other optimisation techniques. Through simple test models, we demonstrate the general applicability, accuracy and efficiency of the composite biasing approach. In particular, for the penetration of high optical depths, the gain in efficiency is spectacular for the spe...
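
    As a toy illustration of the weighting idea behind biased Monte Carlo (not the authors' composite-biasing algorithm itself), the sketch below estimates a rare-event probability by drawing from a mixture of the original density and a biased one; sampling from the mixture keeps the weights p/q bounded by 1/(1-w), which is exactly the large-weight problem the abstract discusses:

```python
import math
import random

# Estimate P(X > 8) for X ~ Exponential(1), a rare event that plain Monte
# Carlo almost never samples. We draw from a mixture of the original
# density p and a biased density b with mean 8, and weight by p/q.
random.seed(1)

def p_pdf(x):
    return math.exp(-x)                    # Exp(1) density

def b_pdf(x):
    return 0.125 * math.exp(-x / 8.0)      # Exp(mean 8) biased density

def sample_mixture(w):
    # with probability w sample from the biased density, else from p
    if random.random() < w:
        return -8.0 * math.log(1.0 - random.random())
    return -math.log(1.0 - random.random())

w, n, total = 0.5, 200_000, 0.0
for _ in range(n):
    x = sample_mixture(w)
    q = (1.0 - w) * p_pdf(x) + w * b_pdf(x)   # mixture density
    if x > 8.0:
        total += p_pdf(x) / q                  # weight bounded by 1/(1-w)
estimate = total / n    # exact answer is exp(-8), about 3.35e-4
```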

  10. Carlo Caso (1940 - 2007)

    CERN Multimedia

    Leonardo Rossi

    Carlo Caso (1940 - 2007) Our friend and colleague Carlo Caso passed away on July 7th, after several months of courageous fight against cancer. Carlo spent most of his scientific career at CERN, taking an active part in the experimental programme of the laboratory. His long and fruitful involvement in particle physics started in the sixties, in the Genoa group led by G. Tomasini. He then carried out several experiments using the CERN liquid hydrogen bubble chambers (first the 2000HBC and later BEBC) to study various facets of the production and decay of meson and baryon resonances. He later formed his own group and joined the NA27 Collaboration to exploit the EHS Spectrometer with a rapid cycling bubble chamber as vertex detector. Amongst their many achievements, they were the first to measure, with excellent precision, the lifetime of the charmed D mesons. At the start of the LEP era, Carlo and his group moved to the DELPHI experiment, participating in the construction and running of the HPC electromagnetic c...

  11. Carlos Vesga Duarte

    OpenAIRE

    Pedro Medina Avendaño

    1981-01-01

    Carlos Vega Duarte had the simplicity of elemental and pure beings. His heart was as clean as alluvial gold. His direct, colloquial manner revealed an uncontaminated man of Santander who loved the gleam of weapons and was dazzled by the sparkle of perfect phrases

  12. Modeling and Simulation of Efficiency Evaluation about Fire to Coast of Naval Gun Based on Monte -Carlo Method%基于统计方法的舰炮对岸作战效能建模与仿真

    Institute of Scientific and Technical Information of China (English)

    李海林; 姜俊; 彭鹏菲

    2011-01-01

    Modern naval gunfire support against shore targets involves high mission intensity and strong time pressure. Sound planning of ammunition reserves and timely determination of the rounds required for each target are the key factors in effectively organizing and executing such missions. Because the traditional, experience-based planning of naval gunfire against shore targets is inherently uncertain, an efficiency evaluation model for shore bombardment is established based on the Monte Carlo method. According to the forms and characteristics of naval gunfire against shore targets, and after necessary and reasonable simplification of the actual operational procedure, three random factors that introduce firing error (ship motion at sea, the environment, and projectile dispersion) are considered in order to determine the optimal match. The model can serve as a reference for efficient fire-support planning, reduce blind firing decisions by the commander, and help realize the maximum combat effectiveness of the naval gun weapon system against shore targets. Finally, a combat simulation example illustrates the validity and practicality of the evaluation model.

  13. Trinocular Calibration Method Based on Binocular Calibration

    Directory of Open Access Journals (Sweden)

    CAO Dan-Dan

    2012-10-01

    Full Text Available In order to solve the self-occlusion problem in plane-based multi-camera calibration systems and expand the measurement range, a tri-camera vision system based on binocular calibration is proposed. The three cameras are grouped into two pairs, with the shared camera taken as the reference to build the global coordinate frame. Global calibration is realized by comparing the measured absolute distances against the true absolute distances. In the experiments, the mean relative error (MRE) of the global calibration of the two camera pairs can be as low as 0.277% and 0.328%, respectively. Experiment results show that this method is feasible, simple and effective, and has high precision.

  14. Efficient uncertainty quantification methodologies for high-dimensional climate land models

    Energy Technology Data Exchange (ETDEWEB)

    Sargsyan, Khachik [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Safta, Cosmin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Berry, Robert Dan [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Debusschere, Bert J. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Najm, Habib N. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2011-11-01

    In this report, we proposed, examined and implemented approaches for performing efficient uncertainty quantification (UQ) in climate land models. Specifically, we applied a Bayesian compressive sensing framework to polynomial chaos spectral expansions, enhanced it with an iterative basis-reduction algorithm, and investigated the results on test models as well as on the Community Land Model (CLM). Furthermore, we discussed the construction of efficient quadrature rules for forward propagation of uncertainties from a high-dimensional, constrained input space to output quantities of interest. The work lays the groundwork for efficient forward UQ for high-dimensional, strongly non-linear and computationally costly climate models. Moreover, to investigate parameter inference approaches, we applied two variants of the Markov chain Monte Carlo (MCMC) method to a soil moisture dynamics submodel of the CLM. The evaluation of these algorithms gave us a good foundation for further building out the Bayesian calibration framework towards the goal of robust component-wise calibration.
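
    As a hedged sketch of the kind of MCMC parameter inference described above (a generic random-walk Metropolis sampler on a toy one-parameter model, not the CLM soil-moisture submodel itself; all numbers are invented):

```python
import math
import random

# Toy Bayesian calibration: infer a single decay-rate parameter k of the
# model y(t) = exp(-k t) from noisy observations, using a random-walk
# Metropolis sampler (one simple MCMC variant).
random.seed(2)
k_true, sigma = 0.7, 0.05
times = [0.5 * i for i in range(1, 11)]
obs = [math.exp(-k_true * t) + random.gauss(0.0, sigma) for t in times]

def log_post(k):
    """Gaussian likelihood with a flat prior on k > 0."""
    if k <= 0.0:
        return -math.inf
    sse = sum((y - math.exp(-k * t)) ** 2 for t, y in zip(times, obs))
    return -sse / (2.0 * sigma ** 2)

k, lp, chain = 1.0, log_post(1.0), []
for step in range(20_000):
    kp = k + random.gauss(0.0, 0.05)          # random-walk proposal
    lpp = log_post(kp)
    if math.log(random.random()) < lpp - lp:  # Metropolis accept rule
        k, lp = kp, lpp
    if step >= 5_000:                          # discard burn-in
        chain.append(k)
posterior_mean = sum(chain) / len(chain)       # should sit near k_true
```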

  15. Rapid calibration for line structured light vision sensors

    Institute of Scientific and Technical Information of China (English)

    YANG Pei; XU Bin-shi; WU Lin

    2006-01-01

    The mathematical model of the line structured-light 3-D vision sensor was established. A separating-parameter method was proposed to calibrate the structure parameters, and a calibration target was designed. The calibration process avoids complicated measurement and is fast and easy to carry out, which simplifies the calibration procedure. The experimental results show that the spatial measurement accuracy is better than 0.15 mm. This method is highly efficient and practical for vision sensor calibration.

  16. Crop physiology calibration in CLM

    Directory of Open Access Journals (Sweden)

    I. Bilionis

    2014-10-01

    Full Text Available Farming is occupying more terrestrial ground as population increases and agriculture is increasingly used for non-nutritional purposes such as biofuel production. This agricultural expansion exerts an increasing impact on the terrestrial carbon cycle. In order to understand the impact of such processes, the Community Land Model (CLM) has been augmented with a CLM-Crop extension that simulates the development of three crop types: maize, soybean, and spring wheat. The CLM-Crop model is a complex system that relies on a suite of parametric inputs governing plant growth under a given atmospheric forcing and available resources. CLM-Crop development used measurements of gross primary productivity and net ecosystem exchange from AmeriFlux sites to choose parameter values that optimize crop productivity in the model. In this paper we calibrate these parameters for one crop type, soybean, in order to provide a faithful projection in terms of both plant development and net carbon exchange. Calibration is performed in a Bayesian framework by developing a scalable and adaptive scheme based on sequential Monte Carlo (SMC).

  17. Monte Carlo and nonlinearities

    CERN Document Server

    Dauchet, Jérémi; Blanco, Stéphane; Caliot, Cyril; Charon, Julien; Coustet, Christophe; Hafi, Mouna El; Eymet, Vincent; Farges, Olivier; Forest, Vincent; Fournier, Richard; Galtier, Mathieu; Gautrais, Jacques; Khuong, Anaïs; Pelissier, Lionel; Piaud, Benjamin; Roger, Maxime; Terrée, Guillaume; Weitz, Sebastian

    2016-01-01

    The Monte Carlo method is widely used to numerically predict systems behaviour. However, its powerful incremental design assumes a strong premise which has severely limited application so far: the estimation process must combine linearly over dimensions. Here we show that this premise can be alleviated by projecting nonlinearities on a polynomial basis and increasing the configuration-space dimension. Considering phytoplankton growth in light-limited environments, radiative transfer in planetary atmospheres, electromagnetic scattering by particles and concentrated-solar-power-plant productions, we prove the real world usability of this advance on four test-cases that were so far regarded as impracticable by Monte Carlo approaches. We also illustrate an outstanding feature of our method when applied to sharp problems with interacting particles: handling rare events is now straightforward. Overall, our extension preserves the features that made the method popular: addressing nonlinearities does not compromise o...

  18. Carlos Vesga Duarte

    Directory of Open Access Journals (Sweden)

    Pedro Medina Avendaño

    1981-01-01

    Full Text Available Carlos Vega Duarte had the simplicity of elemental and pure beings. His heart was as clean as alluvial gold. His direct, colloquial manner revealed an uncontaminated man of Santander who loved the gleam of weapons and was dazzled by the sparkle of perfect phrases

  19. Fundamentals of Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a PowerPoint presentation which serves as lecture material for the Parallel Computing summer school. It covers the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (the Law of Large Numbers, the Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
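
    The 'estimating π' example mentioned in the outline is the standard hit-or-miss calculation; a minimal version:

```python
import random

# Hit-or-miss estimate of pi: the fraction of uniform points in the unit
# square that fall inside the quarter disc x^2 + y^2 <= 1 approaches pi/4.
random.seed(0)
n = 1_000_000
hits = sum(1 for _ in range(n)
           if random.random() ** 2 + random.random() ** 2 <= 1.0)
pi_estimate = 4.0 * hits / n   # standard error is about 1.6e-3 at this n
```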

  20. CERN honours Carlo Rubbia

    CERN Multimedia

    2009-01-01

    On 7 April CERN will be holding a symposium to mark the 75th birthday of Carlo Rubbia, who shared the 1984 Nobel Prize for Physics with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. Following a presentation by Rolf Heuer, lectures will be given by eminent speakers on areas of science to which Carlo Rubbia has made decisive contributions. Michel Spiro, Director of the French National Institute of Nuclear and Particle Physics (IN2P3) of the CNRS, Lyn Evans, sLHC Project Leader, and Alan Astbury of the TRIUMF Laboratory will talk about the physics of the weak interaction and the discovery of the W and Z bosons. Former CERN Director-General Herwig Schopper will lecture on CERN’s accelerators from LEP to the LHC. Giovanni Bignami, former President of the Italian Space Agency, will speak about his work with Carlo Rubbia. Finally, Hans Joachim Schellnhuber of the Potsdam Institute for Climate Research and Sven Kul...

  1. Who Writes Carlos Bulosan?

    Directory of Open Access Journals (Sweden)

    Charlie Samuya Veric

    2001-12-01

    Full Text Available The importance of Carlos Bulosan in Filipino and Filipino-American radical history and literature is indisputable. His eminence spans the Pacific, and he is known, diversely, as a radical poet, fictionist, novelist, and labor organizer. Author of the canonical America Is in the Heart, Bulosan is celebrated for chronicling the conditions in America in his time, such as racism and unemployment. In the history of criticism on Bulosan's life and work, however, there is an undeclared general consensus that views Bulosan and his work as coherent permanent texts of radicalism and anti-imperialism. Central to the existence of such a tradition of critical reception are the generations of critics who, in more ways than one, control the discourse on and of Carlos Bulosan. This essay inquires into the sphere of the critical reception that orders, for our time and for the time ahead, the reading and interpretation of Bulosan. What eye and seeing, the essay asks, determine the perception of Bulosan as the angel of radicalism? What is obscured in constructing Bulosan as an immutable figure of the political? What light does the reader conceive when the personal is brought into the open and situated against the political? The essay explores the answers to these questions in Bulosan's loving letters to various friends, strangers, and white American women. The presence of these interrogations, the essay believes, will ultimately secure the continuing importance of Carlos Bulosan to radical literature and history.

  2. CERN honours Carlo Rubbia

    CERN Multimedia

    2009-01-01

    On 7 April CERN will be holding a symposium to mark the 75th birthday of Carlo Rubbia, who shared the 1984 Nobel Prize for Physics with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. Following a presentation by Rolf Heuer, lectures will be given by eminent speakers on areas of science to which Carlo Rubbia has made decisive contributions. Michel Spiro, Director of the French National Institute of Nuclear and Particle Physics (IN2P3) of the CNRS, Lyn Evans, sLHC Project Leader, and Alan Astbury of the TRIUMF Laboratory will talk about the physics of the weak interaction and the discovery of the W and Z bosons. Former CERN Director-General Herwig Schopper will lecture on CERN’s accelerators from LEP to the LHC. Giovanni Bignami, former President of the Italian Space Agency and Professor at the IUSS School for Advanced Studies in Pavia will speak about his work with Carlo Rubbia. Finally, Hans Joachim Sch...

  3. The calibration of PIXIE

    Science.gov (United States)

    Fixsen, D. J.; Chuss, D. T.; Kogut, Alan; Mirel, Paul; Wollack, E. J.

    2016-07-01

    The FIRAS instrument demonstrated the use of an external calibrator to compare the sky to an instrumented blackbody. The PIXIE calibrator is improved from -35 dB to -65 dB. Another significant improvement is the ability to insert the calibrator into either input of the FTS. This allows detection and correction of additional errors, reduces the effective calibration noise by a factor of 2, eliminates an entire class of systematics and allows continuous observations. This paper presents the design and use of the PIXIE calibrator.

  4. Quantum Monte Carlo using a Stochastic Poisson Solver

    Energy Technology Data Exchange (ETDEWEB)

    Das, D; Martin, R M; Kalos, M H

    2005-05-06

    Quantum Monte Carlo (QMC) is an extremely powerful method to treat many-body systems. Usually quantum Monte Carlo has been applied in cases where the interaction potential has a simple analytic form, like the 1/r Coulomb potential. However, in a complicated environment such as a semiconductor heterostructure, the evaluation of the interaction itself becomes a non-trivial problem. Obtaining the potential from any grid-based finite-difference method, for every walker and every step, is infeasible. We demonstrate an alternative approach of solving the Poisson equation by a classical Monte Carlo within the overall quantum Monte Carlo scheme. We have developed a modified "Walk On Spheres" algorithm using Green's function techniques, which can efficiently account for the interaction energy of walker configurations typical of quantum Monte Carlo algorithms. This stochastically obtained potential can be easily incorporated within popular quantum Monte Carlo techniques like variational Monte Carlo (VMC) or diffusion Monte Carlo (DMC). We demonstrate the validity of this method by studying a simple problem, the polarization of a helium atom in the electric field of an infinite capacitor.
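
    The classical Walk On Spheres idea referenced above can be sketched for the plain Laplace equation (the paper's Green's-function modifications for the Poisson problem are not reproduced here):

```python
import math
import random

# Walk-on-Spheres sketch for the Laplace equation on the unit square with
# Dirichlet data g(x, y) = x*y (harmonic, so the exact solution is x*y).
# From a point, jump uniformly to the largest circle fitting inside the
# domain; repeat until within eps of the boundary, then score g there.
random.seed(3)

def dist_to_boundary(x, y):
    return min(x, 1.0 - x, y, 1.0 - y)

def wos_estimate(x, y, n_walks=20_000, eps=1e-3):
    total = 0.0
    for _ in range(n_walks):
        px, py = x, y
        while True:
            r = dist_to_boundary(px, py)
            if r < eps:
                break
            theta = 2.0 * math.pi * random.random()
            px += r * math.cos(theta)
            py += r * math.sin(theta)
        total += px * py          # boundary value g at the exit point
    return total / n_walks

u = wos_estimate(0.3, 0.7)        # exact solution there is 0.3 * 0.7 = 0.21
```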

  5. Calibration of Nanopositioning Stages

    Directory of Open Access Journals (Sweden)

    Ning Tan

    2015-12-01

    Full Text Available Accuracy is one of the most important criteria for the performance evaluation of micro- and nanorobots or systems. Nanopositioning stages are used to achieve the high positioning resolution and accuracy for a wide and growing scope of applications. However, their positioning accuracy and repeatability are not well known and difficult to guarantee, which induces many drawbacks for many applications. For example, in the mechanical characterisation of biological samples, it is difficult to perform several cycles in a repeatable way so as not to induce negative influences on the study. It also prevents one from controlling accurately a tool with respect to a sample without adding additional sensors for closed loop control. This paper aims at quantifying the positioning repeatability and accuracy based on the ISO 9283:1998 standard, and analyzing factors influencing positioning accuracy onto a case study of 1-DoF (Degree-of-Freedom nanopositioning stage. The influence of thermal drift is notably quantified. Performances improvement of the nanopositioning stage are then investigated through robot calibration (i.e., open-loop approach. Two models (static and adaptive models are proposed to compensate for both geometric errors and thermal drift. Validation experiments are conducted over a long period (several days showing that the accuracy of the stage is improved from typical micrometer range to 400 nm using the static model and even down to 100 nm using the adaptive model. In addition, we extend the 1-DoF calibration to multi-DoF with a case study of a 2-DoF nanopositioning robot. Results demonstrate that the model efficiently improved the 2D accuracy from 1400 nm to 200 nm.

  6. Parallel Calibration for Sensor Array Radio Interferometers

    CERN Document Server

    Brossard, Martin; Pesavento, Marius; Boyer, Rémy; Larzabal, Pascal; Wijnholds, Stefan J

    2016-01-01

    In order to meet the theoretically achievable imaging performance, calibration of modern radio interferometers is a mandatory challenge, especially at low frequencies. In this perspective, we propose a novel parallel iterative multi-wavelength calibration algorithm. The proposed algorithm estimates the apparent directions of the calibration sources, the directional and undirectional complex gains of the array elements and their noise powers, with a reasonable computational complexity. Furthermore, the algorithm takes into account the specific variation of the aforementioned parameter values across wavelength. Realistic numerical simulations reveal that the proposed scheme outperforms the mono-wavelength calibration scheme and approaches the derived constrained Cramér-Rao bound even with the presence of non-calibration sources at unknown directions, in a computationally efficient manner.

  7. Calibration of cathode strip gains in multiwire drift chambers of the GlueX experiment

    Energy Technology Data Exchange (ETDEWEB)

    Berdnikov, V. V.; Somov, S. V.; Pentchev, L.; Somov, A.

    2016-07-01

    A technique for calibrating cathode strip gains in multiwire drift chambers of the GlueX experiment is described. The accuracy of the technique is estimated based on Monte Carlo generated data with known gain coefficients in the strip signal channels. One of the four detector sections has been calibrated using cosmic rays. Results of drift chamber calibration on the accelerator beam upon inclusion in the GlueX experimental setup are presented.

  8. First results about on-ground calibration of the Silicon Tracker for the AGILE satellite

    CERN Document Server

    Cattaneo, P W; Boffelli, F; Bulgarelli, A; Buonomo, B; Chen, A W; D'Ammando, F; Froysland, T; Fuschino, F; Galli, M; Gianotti, F; Giuliani, A; Longo, F; Marisaldi, M; Mazzitelli, G; Pellizzoni, A; Prest, M; Pucella, G; Quintieri, L; Rappoldi, A; Tavani, M; Trifoglio, M; Trois, A; Valente, P; Vallazza, E; Vercellone, S; Zambra, A; Barbiellini, G; Caraveo, P; Cocco, V; Costa, E; De Paris, G; Del Monte, E; Di Cocco, G; Donnarumma, I; Evangelista, Y; Feroci, M; Ferrari, A; Fiorini, M; Labanti, C; Lapshov, I; Lazzarotto, F; Lipari, P; Mastropietro, M; Mereghetti, S; Morelli, E; Moretti, E; Morselli, A; Pacciani, L; Perotti, F; Piano, G; Picozza, P; Pilia, M; Porrovecchio, G; Rapisarda, M; Rubini, A; Sabatini, S; Soffitta, P; Striani, E; Vittorini, V; Zanello, D; Colafrancesco, S; Giommi, P; Pittori, C; Santolamazza, P; Verrecchia, F; Salotti, L

    2011-01-01

    The AGILE scientific instrument has been calibrated with a tagged $\gamma$-ray beam at the Beam Test Facility (BTF) of the INFN Laboratori Nazionali di Frascati (LNF). The goals of the calibration were the measurement of the Point Spread Function (PSF) as a function of the photon energy and incident angle, and the validation of the Monte Carlo (MC) simulation of the silicon tracker operation. The calibration setup is described and some preliminary results are presented.

  9. Development of methodology for characterization of cartridge filters from the IEA-R1 using the Monte Carlo method; Desenvolvimento de uma metodologia para caracterizacao do filtro cuno do reator IEA-R1 utilizando o Metodo Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Priscila

    2014-07-01

    The Cuno filter is part of the water processing circuit of the IEA-R1 reactor and, when saturated, it is replaced and becomes radioactive waste, which must be managed. In this work, the primary characterization of the Cuno filter of the IEA-R1 nuclear reactor at IPEN was carried out using gamma spectrometry associated with the Monte Carlo method. The gamma spectrometry was performed using a hyperpure germanium (HPGe) detector. The germanium crystal represents the active detection volume of the HPGe detector, which has a region called the dead layer or inactive layer. A difference between theoretical and experimental values when obtaining the efficiency curve of these detectors has been reported in the literature. In this study we used the MCNP-4C code to obtain the detector calibration efficiency for the geometry of the Cuno filter, and the influence of the dead layer and of the cascade-summing effect in the HPGe detector were studied. The dead layer values were corrected by varying the thickness and the radius of the germanium crystal. The detector has 75.83 cm{sup 3} of active detection volume, according to information provided by the manufacturer. Nevertheless, the results showed that the actual active volume is less than the one specified, with the dead layer representing 16% of the total volume of the crystal. Analysis of the Cuno filter by gamma spectrometry enabled the identification of energy peaks. Using these peaks, three radionuclides were identified in the filter: {sup 108m}Ag, {sup 110m}Ag and {sup 60}Co. From the calibration efficiency obtained by the Monte Carlo method, the activity estimated for these radionuclides is on the order of MBq. (author)

  10. Ing. Carlos M. Ochoa

    OpenAIRE

    Montesinos A, Fernando; Facultad de Farmacia y Bioquímica de la Universidad Nacional Mayor de San Marcos, Lima, Perú.

    2014-01-01

    This remarkable researcher dedicated many years to the study of the potato, a tuber of the genus Solanum, and to the countless species and varieties that cover the territories of Peru, Bolivia and Chile, and possibly other countries. Originally wild, the potato, as a result of scientific progress, is today a food of great value throughout the world, from every point of view. Carlos M. Ochoa was born in Cusco; he moved to Bolivia, where he carried out his initial studies,...

  11. An integrated hydrological, ecological, and economical (HEE) modeling system for assessing water resources and ecosystem production: calibration and validation in the upper and middle parts of the Yellow River Basin, China

    Science.gov (United States)

    Li, Xianglian; Yang, Xiusheng; Gao, Wei

    2006-08-01

    Effective management of water resources in arid and semi-arid areas demands studies that cross the disciplines of the natural and social sciences. An integrated Hydrological, Ecological and Economical (HEE) modeling system at regional scale has been developed to assess water resources use and ecosystem production in arid and semi-arid areas. As a physically-based distributed modeling system, the HEE modeling system requires various input parameters, including those for soil, vegetation, topography, groundwater, and water and agricultural management at different spatial levels. A successful implementation of the modeling system depends strongly on how well it is calibrated. This paper presents an automatic calibration procedure for the HEE modeling system and its test in the upper and middle parts of the Yellow River basin. Prior to calibration, a comprehensive literature investigation and sensitivity analysis were performed to identify the important parameters for calibration. The automatic calibration procedure was based on a conventional Monte Carlo sampling method together with a multi-objective criterion for calibration over multiple sites and outputs. The multi-objective function consisted of optimizing the statistics of mean absolute relative error (MARE), Nash-Sutcliffe model efficiency coefficient (E_NS), and coefficient of determination (R2). The modeling system was calibrated against streamflow and harvest yield data from multiple sites/provinces within the basin over 2001 by using the proposed automatic procedure, and validated over 1993-1995. Over the calibration period, the mean absolute relative error of simulated daily streamflow was within 7%, while the statistics R2 and E_NS of daily streamflow were 0.61 and 0.49, respectively. The average simulated harvest yield over the calibration period was about 9.2% less than that of the observations. Overall calibration results have indicated that the calibration procedures developed in this study can efficiently calibrate
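
    The three calibration statistics named above can be written down directly; a minimal sketch with invented observation/simulation values:

```python
# MARE penalises relative bias, Nash-Sutcliffe efficiency (E_NS) compares
# model error to the variance of the observations (1 is a perfect fit),
# and R^2 is the squared linear correlation of simulated vs. observed.

def mare(obs, sim):
    return sum(abs(s - o) / abs(o) for o, s in zip(obs, sim)) / len(obs)

def nash_sutcliffe(obs, sim):
    mean_o = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_o) ** 2 for o in obs)
    return 1.0 - num / den

def r_squared(obs, sim):
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    vo = sum((o - mo) ** 2 for o in obs)
    vs = sum((s - ms) ** 2 for s in sim)
    return cov * cov / (vo * vs)

# Invented daily-streamflow-like series, for illustration only.
obs = [2.0, 3.5, 5.0, 4.0, 3.0]
sim = [2.2, 3.3, 4.8, 4.1, 3.2]
scores = (mare(obs, sim), nash_sutcliffe(obs, sim), r_squared(obs, sim))
```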

  12. The role of research efficiency in the evolution of scientific productivity and impact: An agent-based model

    Science.gov (United States)

    You, Zhi-Qiang; Han, Xiao-Pu; Hadzibeganovic, Tarik

    2016-02-01

    We introduce an agent-based model to investigate the effects of production efficiency (PE) and hot field tracing capability (HFTC) on productivity and impact of scientists embedded in a competitive research environment. Agents compete to publish and become cited by occupying the nodes of a citation network calibrated by real-world citation datasets. Our Monte-Carlo simulations reveal that differences in individual performance are strongly related to PE, whereas HFTC alone cannot provide sustainable academic careers under intensely competitive conditions. Remarkably, the negative effect of high competition levels on productivity can be buffered by elevated research efficiency if simultaneously HFTC is sufficiently low.

  13. The Science of Calibration

    Science.gov (United States)

    Kent, S. M.

    2016-05-01

    This paper presents a broad overview of the many issues involved in calibrating astronomical data, covering the full electromagnetic spectrum from radio waves to gamma rays, and considering both ground-based and space-based missions. These issues include the science drivers for absolute and relative calibration, the physics behind calibration and the mechanisms used to transfer it from the laboratory to an astronomical source, the need for networks of calibrated astronomical standards, and some of the challenges faced by large surveys and missions.

  14. An introduction to Monte Carlo methods

    Science.gov (United States)

    Walter, J.-C.; Barkema, G. T.

    2015-01-01

    Monte Carlo simulations are methods for simulating statistical systems. The aim is to generate a representative ensemble of configurations to access thermodynamical quantities without the need to solve the system analytically or to perform an exact enumeration. The main principles of Monte Carlo simulations are ergodicity and detailed balance. The Ising model is a lattice spin system with nearest-neighbor interactions that is appropriate to illustrate different examples of Monte Carlo simulations. It displays a second-order phase transition between disordered (high temperature) and ordered (low temperature) phases, leading to different strategies of simulation. The Metropolis algorithm and the Glauber dynamics are efficient at high temperature. Close to the critical temperature, where the spins display long-range correlations, cluster algorithms are more efficient. We introduce the rejection-free (or continuous-time) algorithm and describe in detail an interesting alternative representation of the Ising model using graphs instead of spins, with the so-called worm algorithm. We conclude with an important discussion of dynamical effects such as thermalization and correlation time.
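    The Metropolis algorithm for the Ising model mentioned in this abstract can be sketched in a few lines. This is a schematic illustration (coupling J = 1, arbitrary lattice size and temperature), not code from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def metropolis_sweep(spins, beta):
        """One Metropolis sweep of a 2D Ising lattice (J = 1, periodic boundaries)."""
        L = spins.shape[0]
        for _ in range(L * L):
            i, j = rng.integers(0, L, size=2)
            # Energy change of flipping spin (i, j) against its four neighbours
            nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * spins[i, j] * nb
            # Accept with probability min(1, exp(-beta * dE)): detailed balance
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] *= -1

    lattice = rng.choice([-1, 1], size=(16, 16))
    for _ in range(50):
        metropolis_sweep(lattice, beta=0.6)   # beta above the critical ~0.4407: ordered phase
    ```

    Close to the critical temperature this single-spin-flip dynamics suffers from long correlation times, which is exactly why the abstract recommends cluster algorithms there.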

  15. Multilevel Monte Carlo Approaches for Numerical Homogenization

    KAUST Repository

    Efendiev, Yalchin R.

    2015-10-01

    In this article, we study the application of multilevel Monte Carlo (MLMC) approaches to numerical random homogenization. Our objective is to compute the expectation of some functionals of the homogenized coefficients, or of the homogenized solutions. This is accomplished within MLMC by considering different sizes of representative volumes (RVEs). Many inexpensive computations with the smallest RVE size are combined with fewer expensive computations performed on larger RVEs. Likewise, when it comes to homogenized solutions, different levels of coarse-grid meshes are used to solve the homogenized equation. We show that, by carefully selecting the number of realizations at each level, we can achieve a speed-up in the computations in comparison to a standard Monte Carlo method. Numerical results are presented for both one-dimensional and two-dimensional test-cases that illustrate the efficiency of the approach.
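    The level-combination idea behind MLMC can be illustrated on a toy quantity of interest. The random coefficient, the functional, and the sample counts below are invented stand-ins for RVE computations, chosen only to show the telescoping estimator:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def qoi(u, level):
        """Toy quantity of interest at resolution 2**level: a midpoint-rule
        approximation of int_0^1 sin(pi*u*x) dx for a random coefficient u."""
        x = (np.arange(2 ** level) + 0.5) / 2 ** level
        return np.mean(np.sin(np.pi * u * x))

    def mlmc_estimate(n_per_level):
        """E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], with the fine/coarse pair
        on each level coupled through the same random input u."""
        est = 0.0
        for level, n in enumerate(n_per_level):
            u = rng.random(n)
            corr = [qoi(ui, level) - (qoi(ui, level - 1) if level > 0 else 0.0)
                    for ui in u]
            est += np.mean(corr)
        return est

    # Many cheap coarse samples, fewer expensive fine-level corrections
    value = mlmc_estimate([4000, 1000, 250])
    ```

    The coupling of fine and coarse evaluations through the same random input is what makes the correction terms low-variance, so few samples are needed on the expensive levels.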

  16. Ground calibrations of Nuclear Compton Telescope

    Science.gov (United States)

    Chiu, Jeng-Lun; Liu, Zhong-Kai; Bandstra, Mark S.; Bellm, Eric C.; Liang, Jau-Shian; Perez-Becker, Daniel; Zoglauer, Andreas; Boggs, Steven E.; Chang, Hsiang-Kuang; Chang, Yuan-Hann; Huang, Minghuey A.; Amman, Mark; Chiang, Shiuan-Juang; Hung, Wei-Che; Lin, Chih-Hsun; Luke, Paul N.; Run, Ray-Shine; Wunderer, Cornelia B.

    2010-07-01

    The Nuclear Compton Telescope (NCT) is a balloon-borne soft gamma-ray (0.2-10 MeV) telescope designed to study astrophysical sources of nuclear line emission and polarization. The heart of NCT is an array of 12 cross-strip germanium detectors, designed to provide 3D positions for each photon interaction; the full 3D position resolution enables imaging, effectively reduces background, and enables the measurement of polarization. The keys to Compton imaging with NCT's detectors are determining the energy deposited in the detector at each strip and tracking the gamma-ray photon interactions within the detector. The 3D positions are provided by the orthogonal X and Y strips, and by determining the interaction depth from the charge collection time difference (CTD) between the anode and cathode. Calibrations of the energy as well as of the 3D position of interactions have been completed, and extensive calibration campaigns for the whole system were also conducted using radioactive sources prior to our flights from Ft. Sumner, New Mexico, USA in Spring 2009, and from Alice Springs, Australia in Spring 2010. Here we present the techniques and results of our ground calibrations so far, and then compare the calibration results for the effective area throughout NCT's field of view with Monte Carlo simulations using a detailed mass model.

  17. Calibrating Gyrochronology using Kepler Asteroseismic targets

    CERN Document Server

    Angus, Ruth; Foreman-Mackey, Daniel; McQuillan, Amy

    2015-01-01

    Among the available methods for dating stars, gyrochronology is a powerful one because it requires knowledge of only the star's mass and rotation period. Gyrochronology relations have previously been calibrated using young clusters, with the Sun providing the only age dependence, and are therefore poorly calibrated at late ages. We used rotation period measurements of 310 Kepler stars with asteroseismic ages, 50 stars from the Hyades and Coma Berenices clusters and 6 field stars (including the Sun) with precise age measurements to calibrate the gyrochronology relation, whilst fully accounting for measurement uncertainties in all observable quantities. We calibrated a relation of the form $P=a\\times A^n\\times(B-V-c)^b$, where $P$ is rotation period in days, $A$ is age in Myr, $B$ and $V$ are magnitudes and $a$, $b$ and $n$ are the free parameters of our model. We found $a = 0.40^{+0.3}_{-0.05}$, $b = 0.31^{+0.05}_{-0.02}$ and $n = 0.55^{+0.02}_{-0.09}$. Markov Chain Monte Carlo methods were used to explore the posteri...
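    With the fitted parameters quoted above, the relation can be evaluated directly. Note that the colour offset $c$ is not quoted in this excerpt, so the value below (c = 0.45, as in earlier gyrochronology work) is an assumption:

    ```python
    def gyro_period(age_myr, b_minus_v, a=0.40, b=0.31, n=0.55, c=0.45):
        """Rotation period in days from P = a * A^n * (B-V - c)^b.
        a, b, n are the quoted fits; c = 0.45 is an assumed placeholder."""
        return a * age_myr ** n * (b_minus_v - c) ** b

    # Sun-like star: age ~4570 Myr, B-V ~0.65 -> roughly the ~25 d solar period
    p_sun = gyro_period(4570.0, 0.65)
    ```

    Inverting the same relation for age, given an observed period and colour, is the gyrochronological dating step.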

  18. A FAST FOREGROUND DIGITAL CALIBRATION TECHNIQUE FOR PIPELINED ADC

    Institute of Scientific and Technical Information of China (English)

    Wang Yu; Yang Haigang; Cheng Xin; Liu Fei; Yin Tao

    2012-01-01

    Digital calibration techniques are widely developed to cancel the non-idealities of pipelined Analog-to-Digital Converters (ADCs). This letter presents a fast foreground digital calibration technique based on an analysis of the error sources which influence the resolution of pipelined ADCs. The method quickly estimates the gain error of the ADC prototype and calibrates the ADC simultaneously during operation. Finally, a 10 bit, 100 MS/s pipelined ADC is implemented and calibrated. The simulation results show that the digital calibration technique is efficient, requiring fewer operation cycles.

  19. Quantum Monte Carlo Calculations of Neutron Matter

    CERN Document Server

    Carlson, J; Ravenhall, D G

    2003-01-01

    Uniform neutron matter is approximated by a cubic box containing a finite number of neutrons, with periodic boundary conditions. We report variational and Green's function Monte Carlo calculations of the ground state of fourteen neutrons in a periodic box using the Argonne $v_8'$ two-nucleon interaction at densities up to one and a half times the nuclear matter density. The effects of the finite box size are estimated using variational wave functions together with cluster expansion and chain summation techniques. They are small at subnuclear densities. We discuss the expansion of the energy of a low-density neutron gas in powers of its Fermi momentum. This expansion is strongly modified by the large nn scattering length, and does not begin with the Fermi-gas kinetic energy as assumed in both Skyrme and relativistic mean field theories. The leading term of the neutron gas energy is approximately half the Fermi-gas kinetic energy. The quantum Monte Carlo results are also used to calibrate the accuracy of variational calculations ...

  20. OLI Radiometric Calibration

    Science.gov (United States)

    Markham, Brian; Morfitt, Ron; Kvaran, Geir; Biggar, Stuart; Leisso, Nathan; Czapla-Myers, Jeff

    2011-01-01

    Goals: (1) Present an overview of the pre-launch radiance, reflectance & uniformity calibration of the Operational Land Imager (OLI) (1a) Transfer to orbit/heliostat (1b) Linearity (2) Discuss on-orbit plans for radiance, reflectance and uniformity calibration of the OLI

  1. Lidar to lidar calibration

    DEFF Research Database (Denmark)

    Fernandez Garcia, Sergio; Villanueva, Héctor

    This report presents the result of the lidar to lidar calibration performed for ground-based lidar. Calibration is here understood as the establishment of a relation between the reference lidar wind speed measurements with measurement uncertainties provided by measurement standard and correspondi...

  2. Using a Monte-Carlo-based approach to evaluate the uncertainty on fringe projection technique

    CERN Document Server

    Molimard, Jérôme

    2013-01-01

    A complete uncertainty analysis of a given fringe projection set-up has been performed using a Monte Carlo approach. In particular, the calibration procedure is taken into account. Two applications are given: at the macroscopic scale, phase noise is predominant, whilst at the microscopic scale, both phase noise and calibration errors are important. Finally, the uncertainty found at the macroscopic scale is close to some experimental tests (~100 µm).

  3. Algorithm research for efficient global tallying in Monte Carlo criticality calculations

    Institute of Scientific and Technical Information of China (English)

    上官丹骅; 邓力; 李刚; 张宝印; 马彦; 付元光; 李瑞; 胡小利

    2016-01-01

    Based on research into the uniform fission site algorithm, the uniform tally density algorithm and the uniform track number density algorithm are proposed and compared with the original uniform fission site algorithm, seeking high overall performance of global tallying in Monte Carlo criticality calculation. Because reducing the largest uncertainties to an acceptable level simply by running a large number of neutron histories is often prohibitively expensive, such research is indispensable for the calculation to reach the goal of practical application (the so-called 95/95 standard). Using the global volume-averaged cell flux tally and energy deposition tally of the pin-by-pin model of the Dayawan nuclear reactor as two examples, the new algorithms show better results than the uniform fission site algorithm. Although the uniform tally density algorithm has the best performance, the uniform track number density algorithm has the advantage of being applicable, without modification, to any type of tally based on the track length estimator. All the algorithms are implemented in the recently developed parallel Monte Carlo particle transport code JMCT.

  4. Sandia WIPP calibration traceability

    Energy Technology Data Exchange (ETDEWEB)

    Schuhen, M.D. [Sandia National Labs., Albuquerque, NM (United States); Dean, T.A. [RE/SPEC, Inc., Albuquerque, NM (United States)

    1996-05-01

    This report summarizes the work performed to establish calibration traceability for the instrumentation used by Sandia National Laboratories at the Waste Isolation Pilot Plant (WIPP) during testing from 1980-1985. Identifying the calibration traceability is an important part of establishing a pedigree for the data and is part of the qualification of existing data. In general, the requirement states that the calibration of Measuring and Test equipment must have a valid relationship to nationally recognized standards or the basis for the calibration must be documented. Sandia recognized that just establishing calibration traceability would not necessarily mean that all QA requirements were met during the certification of test instrumentation. To address this concern, the assessment was expanded to include various activities.

  5. WFPC2 Polarization Calibration

    Science.gov (United States)

    Biretta, J.; McMaster, M.

    1997-12-01

    We derive a detailed calibration for WFPC2 polarization data which is accurate to about 1.5%. We begin by computing polarizer flats, and show how they are applied to data. A physical model for the polarization effects of the WFPC2 optics is then created using Mueller matrices. This model includes corrections for the instrumental polarization (diattenuation and phase retardance) of the pick-off mirror, as well as the high cross-polarization transmission of the polarizer filter. We compare this model against on-orbit observations of polarization calibrators, and show it predicts relative counts in the different polarizer/aperture settings to 1.5% RMS accuracy. We then show how this model can be used to calibrate GO data, and present two WWW tools which allow observers to easily calibrate their data. Detailed examples are given illustrating the calibration and display of WFPC2 polarization data. In closing we describe future plans and possible improvements.

  6. MCMini: Monte Carlo on GPGPU

    Energy Technology Data Exchange (ETDEWEB)

    Marcus, Ryan C. [Los Alamos National Laboratory

    2012-07-25

    MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

  7. Monte Carlo methods for electromagnetics

    CERN Document Server

    Sadiku, Matthew NO

    2009-01-01

    Until now, novices had to painstakingly dig through the literature to discover how to use Monte Carlo techniques for solving electromagnetic problems. Written by one of the foremost researchers in the field, Monte Carlo Methods for Electromagnetics provides a solid understanding of these methods and their applications in electromagnetic computation. Including much of his own work, the author brings together essential information from several different publications.Using a simple, clear writing style, the author begins with a historical background and review of electromagnetic theory. After addressing probability and statistics, he introduces the finite difference method as well as the fixed and floating random walk Monte Carlo methods. The text then applies the Exodus method to Laplace's and Poisson's equations and presents Monte Carlo techniques for handing Neumann problems. It also deals with whole field computation using the Markov chain, applies Monte Carlo methods to time-varying diffusion problems, and ...

  8. Calibration of the JEM-EUSO detector

    Directory of Open Access Journals (Sweden)

    Gorodetzky P.

    2013-06-01

    Full Text Available In order to unveil the mystery of ultra high energy cosmic rays (UHECRs), JEM-EUSO (Extreme Universe Space Observatory on-board the Japan Experiment Module) will observe extensive air showers induced by UHECRs from the International Space Station orbit with a huge acceptance. Calibration of the JEM-EUSO instrument, which consists of Fresnel optics and a focal surface detector with 5000 photomultipliers, is very important for discussing the origin of UHECRs precisely from the observed results. In this paper, the calibration before launch and on orbit is described. The calibration before flight will be performed as precisely as possible with integrating spheres. In orbit, the relative change of the performance will be checked regularly with on-board and on-ground light sources. The absolute calibration of photon detection efficiency may be performed with the Moon, which is a stable natural light source.

  9. Segment Based Camera Calibration

    Institute of Scientific and Technical Information of China (English)

    马颂德; 魏国庆; 等

    1993-01-01

    The basic idea of calibrating a camera system in previous approaches is to determine camera parameters by using a set of known 3D points as calibration reference. In this paper, we present a method of camera calibration in which camera parameters are determined by a set of 3D lines. A set of constraints on the camera parameters is derived in terms of perspective line mapping. From these constraints, the same perspective transformation matrix as that for point mapping can be computed linearly. The minimum number of calibration lines is 6. This result generalizes that of Liu, Huang and Faugeras [12] for camera location determination, in which at least 8 line correspondences are required for linear computation of the camera location. Since line segments in an image can be located more easily and accurately than points, the use of lines as calibration reference tends to ease the computation in image preprocessing and to improve calibration accuracy. Experimental results on the calibration along with stereo reconstruction are reported.

  10. Metropolis Methods for Quantum Monte Carlo Simulations

    OpenAIRE

    Ceperley, D. M.

    2003-01-01

    Since its first description fifty years ago, the Metropolis Monte Carlo method has been used in a variety of different ways for the simulation of continuum quantum many-body systems. This paper will consider some of the generalizations of the Metropolis algorithm employed in quantum Monte Carlo: variational Monte Carlo, dynamical methods for projector Monte Carlo (i.e. diffusion Monte Carlo with rejection), multilevel sampling in path integral Monte Carlo, the sampling of permutations, ...

  11. Enhancing multi-objective evolutionary algorithm performance with Markov Chain Monte Carlo

    Science.gov (United States)

    Shafii, M.; Vrugt, J. A.; Tolson, B.; Matott, L. S.

    2009-12-01

    Multi-Objective Evolutionary Algorithms (MOEAs) have emerged as successful optimization routines to solve complex and large-scale multi-objective model calibration problems. However, a common drawback of these methods is that they require a relatively high number of function evaluations to produce an accurate approximation of the Pareto front. This requirement can translate into very large computational costs in hydrologic model calibration problems. Most research efforts to address this computational burden are focused on introducing or improving the operators applied in the MOEA structure. However, population initialization, usually done through Random Sampling (RS) or Latin Hypercube Sampling (LHS), can also affect the search efficiency and the quality of MOEA results. This study presents a novel approach to generate the initial population of a MOEA (i.e. NSGA-II) by applying a Markov Chain Monte Carlo (MCMC) sampler. The basis of MCMC methods is a Markov chain generating a random walk through the search space, using a formal likelihood function to sample the high-probability-density regions of the parameter space. Therefore, these solutions, when used as the initial population, are capable of carrying quite valuable information into the MOEA process. Instead of running the MCMC sampler (i.e. DREAM) to convergence, it is applied for a relatively small and fixed number of function evaluations. The MCMC samples are then processed to identify and archive the non-dominated solutions, and this archive is used as NSGA-II's initial population. In order to analyze the applicability of this approach, it is used for a number of benchmark mathematical problems, as well as multi-objective calibration of a rainfall-runoff model (HYMOD). Initial results show promising MOEA improvement when it is initialized with an MCMC-based initial population. Results will be presented that comprehensively compare MOEA results with and without an MCMC-based initial population in terms of the
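    The archiving step, filtering the MCMC samples down to the non-dominated set before seeding NSGA-II, can be sketched as follows (a minimal illustration assuming all objectives are minimized; the function name and toy data are not from the study):

    ```python
    import numpy as np

    def non_dominated(objectives):
        """Return the indices of non-dominated points (all objectives minimized)."""
        pts = np.asarray(objectives, float)
        keep = []
        for i, p in enumerate(pts):
            # p is dominated if some other point is no worse everywhere and better somewhere
            dominated = any(np.all(q <= p) and np.any(q < p)
                            for j, q in enumerate(pts) if j != i)
            if not dominated:
                keep.append(i)
        return keep

    # Three candidate parameter sets scored on two error objectives
    archive = non_dominated([[1.0, 2.0], [2.0, 1.0], [2.0, 2.0]])  # -> [0, 1]
    ```

    In the approach described above, this filter is applied to the DREAM samples, and the surviving points form the initial population handed to NSGA-II.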

  12. Lidar to lidar calibration

    DEFF Research Database (Denmark)

    Georgieva Yankova, Ginka; Courtney, Michael

    This report presents the result of the lidar to lidar calibration performed for a ground-based lidar. Calibration is here understood as the establishment of a relation between the reference lidar wind speed measurements, with measurement uncertainties provided by a measurement standard, and corresponding lidar wind speed indications with associated measurement uncertainties. The lidar calibration concerns the 10 minute mean wind speed measurements. The comparison of the lidar measurements of the wind direction with those from the reference lidar measurements is given for information only.

  13. Site Calibration report

    DEFF Research Database (Denmark)

    Gómez Arranz, Paula; Vesth, Allan

    This report describes the site calibration carried out at Østerild during a given period. The site calibration was performed with two Windcube WLS7 (v1) lidars at ten measurement heights. The lidar is not a sensor approved by the current version of the IEC 61400-12-1 [1] and therefore the site calibration with lidars does not comply with the standard. However, the measurements are carried out following the guidelines of IEC 61400-12-1 where possible, but with some deviations presented in the following chapters.

  14. Calibration Fixture For Anemometer Probes

    Science.gov (United States)

    Lewis, Charles R.; Nagel, Robert T.

    1993-01-01

    Fixture facilitates calibration of three-dimensional sideflow thermal anemometer probes. With the fixture, the probe is oriented at a number of angles throughout its design range, and readings are calibrated as a function of orientation in the airflow. Calibration is repeatable and verifiable.

  15. SPOTS Calibration Example

    Directory of Open Access Journals (Sweden)

    Patterson E.

    2010-06-01

    Full Text Available The results are presented using the procedure outlined by the Standardisation Project for Optical Techniques of Strain measurement to calibrate a digital image correlation system. The process involves comparing the experimental data obtained with the optical measurement system to the theoretical values for a specially designed specimen. The standard states the criteria which must be met in order to achieve successful calibration, in addition to quantifying the measurement uncertainty in the system. The system was evaluated at three different displacement load levels, generating strain ranges from 289 µstrain to 2110 µstrain. At the 289 µstrain range, the calibration uncertainty was found to be 14.1 µstrain, and at the 2110 µstrain range it was found to be 28.9 µstrain. This calibration procedure was performed without painting a speckle pattern on the surface of the metal. Instead, the specimen surface was prepared using different grades of grit paper to produce the desired texture.

  16. Air Data Calibration Facility

    Data.gov (United States)

    Federal Laboratory Consortium — This facility is for low altitude subsonic altimeter system calibrations of air vehicles. Mission is a direct support of the AFFTC mission. Postflight data merge is...

  17. Traceable Pyrgeometer Calibrations

    Energy Technology Data Exchange (ETDEWEB)

    Dooraghi, Mike; Kutchenreiter, Mark; Reda, Ibrahim; Habte, Aron; Sengupta, Manajit; Andreas, Afshin; Newman, Martina; Webb, Craig

    2016-05-02

    This presentation provides a high-level overview of the progress on the Broadband Outdoor Radiometer Calibrations for all shortwave and longwave radiometers that are deployed by the Atmospheric Radiation Measurement program.

  18. SRHA calibration curve

    Data.gov (United States)

    U.S. Environmental Protection Agency — an UV calibration curve for SRHA quantitation This dataset is associated with the following publication: Chang, X., and D. Bouchard. Surfactant-Wrapped Multiwalled...

  19. Calibrating nacelle lidars

    DEFF Research Database (Denmark)

    Courtney, Michael

    Nacelle mounted, forward looking wind lidars are beginning to be used to provide reference wind speed measurements for the power performance testing of wind turbines. In such applications, a formal calibration procedure with a corresponding uncertainty assessment will be necessary. This report...... presents four concepts for performing such a nacelle lidar calibration. Of the four methods, two are found to be immediately relevant and are pursued in some detail. The first of these is a line of sight calibration method in which both lines of sight (for a two beam lidar) are individually calibrated...... a representative distribution of radial wind speeds. An alternative method is to place the nacelle lidar on the ground and incline the beams upwards to bisect a mast equipped with reference instrumentation at a known height and range. This method will be easier and faster to implement and execute but the beam...

  20. Calibrating nacelle lidars

    OpenAIRE

    Courtney, Michael

    2013-01-01

    Nacelle mounted, forward looking wind lidars are beginning to be used to provide reference wind speed measurements for the power performance testing of wind turbines. In such applications, a formal calibration procedure with a corresponding uncertainty assessment will be necessary. This report presents four concepts for performing such a nacelle lidar calibration. Of the four methods, two are found to be immediately relevant and are pursued in some detail. The first of these is a line of sight...

  1. TWSTFT Link Calibration Report

    Science.gov (United States)

    2015-09-01

    box calibrator with unknown but constant total delay during a calibration tour. Total Delay: the total electrical delay from the antenna phase center...to the UTCp, including all the devices/cables that the satellite and clock signals pass through. It numerically equals the sum of all the sub-delays...PTB. To average out the diurnal effects and measurement noise, 5-7 days of continuous measurements are required. 3 Setups at the Lab(k) The setup

  2. Comparison of experimental and calculated calibration coefficients for a high sensitivity ionization chamber.

    Science.gov (United States)

    Amiot, M N; Mesradi, M R; Chisté, V; Morin, M; Rigoulay, F

    2012-09-01

    The response of a Vacutec 70129 ionization chamber (IC) was calculated using the PENELOPE-2008 Monte Carlo code and compared to experimental data. The filling gas mixture composition and its pressure were determined by adjusting the simulated IC response to the experimental results. The Monte Carlo simulation revealed a physical effect in the detector response to photons due to the presence of xenon in the chamber. Very good agreement is found between calculated and experimental calibration coefficients for 17 radionuclides.

  3. Calibrating nacelle lidars

    Energy Technology Data Exchange (ETDEWEB)

    Courtney, M.

    2013-01-15

    Nacelle mounted, forward looking wind lidars are beginning to be used to provide reference wind speed measurements for the power performance testing of wind turbines. In such applications, a formal calibration procedure with a corresponding uncertainty assessment will be necessary. This report presents four concepts for performing such a nacelle lidar calibration. Of the four methods, two are found to be immediately relevant and are pursued in some detail. The first of these is a line of sight calibration method in which both lines of sight (for a two beam lidar) are individually calibrated by accurately aligning the beam to pass close to a reference wind speed sensor. A testing procedure is presented, reporting requirements outlined and the uncertainty of the method analysed. It is seen that the main limitation of the line of sight calibration method is the time required to obtain a representative distribution of radial wind speeds. An alternative method is to place the nacelle lidar on the ground and incline the beams upwards to bisect a mast equipped with reference instrumentation at a known height and range. This method will be easier and faster to implement and execute but the beam inclination introduces extra uncertainties. A procedure for conducting such a calibration is presented and initial indications of the uncertainties given. A discussion of the merits and weaknesses of the two methods is given together with some proposals for the next important steps to be taken in this work. (Author)

  4. Energy calibration via correlation

    CERN Document Server

    Maier, Daniel

    2015-01-01

    The main task of an energy calibration is to find a relation between pulse-height values and the corresponding energies. Doing this for each pulse-height channel individually requires an elaborated input spectrum with an excellent counting statistics and a sophisticated data analysis. This work presents an easy to handle energy calibration process which can operate reliably on calibration measurements with low counting statistics. The method uses a parameter based model for the energy calibration and concludes on the optimal parameters of the model by finding the best correlation between the measured pulse-height spectrum and multiple synthetic pulse-height spectra which are constructed with different sets of calibration parameters. A CdTe-based semiconductor detector and the line emissions of an 241 Am source were used to test the performance of the correlation method in terms of systematic calibration errors for different counting statistics. Up to energies of 60 keV systematic errors were measured to be le...
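    A toy version of the correlation method described above: synthesize pulse-height spectra for candidate (gain, offset) pairs and keep the pair whose synthetic spectrum correlates best with the measurement. The line list, channel count, grids and noise level below are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical line list (keV, relative intensity); 241Am emits lines near these energies
    LINES = [(26.3, 0.3), (59.5, 1.0)]
    N_CH = 256

    def synthetic_spectrum(gain, offset, sigma_ch=2.0):
        """Pulse-height spectrum implied by the calibration E = gain * channel + offset."""
        ch = np.arange(N_CH)
        spec = np.zeros(N_CH)
        for energy, inten in LINES:
            centre = (energy - offset) / gain          # channel where the line lands
            spec += inten * np.exp(-0.5 * ((ch - centre) / sigma_ch) ** 2)
        return spec

    # Fake "measurement": true gain 0.25 keV/channel, offset 1.0 keV, plus noise
    measured = synthetic_spectrum(0.25, 1.0) + rng.normal(0.0, 0.02, N_CH)

    # Keep the (gain, offset) pair whose synthetic spectrum correlates best
    best = max(
        ((g, o) for g in np.linspace(0.20, 0.30, 101) for o in np.linspace(0.0, 2.0, 21)),
        key=lambda p: np.corrcoef(measured, synthetic_spectrum(*p))[0, 1],
    )
    ```

    Because the whole spectrum is compared at once, the grid search stays stable even when individual lines have poor counting statistics, which is the selling point of the correlation approach.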

  5. Perturbation Monte Carlo methods for tissue structure alterations.

    Science.gov (United States)

    Nguyen, Jennifer; Hayakawa, Carole K; Mourant, Judith R; Spanier, Jerome

    2013-01-01

    This paper describes an extension of the perturbation Monte Carlo method to model light transport when the phase function is arbitrarily perturbed. Current perturbation Monte Carlo methods allow perturbation of both the scattering and absorption coefficients; however, the phase function cannot be varied. The more complex method we develop and test here is not limited in this way. We derive a rigorous perturbation Monte Carlo extension that can be applied to a large family of important biomedical light transport problems and demonstrate its greater computational efficiency compared with using conventional Monte Carlo simulations to produce forward transport problem solutions. The gains of the perturbation method occur because only a single baseline Monte Carlo simulation is needed to obtain forward solutions to other closely related problems whose input is described by perturbing one or more parameters of the baseline problem. The new perturbation Monte Carlo methods are tested using tissue light scattering parameters relevant to epithelia, where many tumors originate. The tissue model has parameters for the number density and average size of three classes of scatterers: whole nuclei; organelles such as lysosomes and mitochondria; and small particles such as ribosomes or large protein complexes. When these parameters or the wavelength are varied, the scattering coefficient and the phase function vary. Perturbation calculations give accurate results over variations of ∼15-25% of the scattering parameters.
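    The core of the coefficient reweighting that existing perturbation Monte Carlo methods already handle (before the phase-function extension described above) is a per-path likelihood ratio. A minimal sketch with hypothetical parameter names, not the paper's implementation:

    ```python
    import math

    def pmc_weight(j, path_len, mu_s, mu_a, mu_s_p, mu_a_p):
        """Reweight a baseline photon path with j scattering events and total
        length path_len to perturbed coefficients (mu_s_p, mu_a_p):
            w = (mu_s'/mu_s)**j * exp(-[(mu_s'+mu_a') - (mu_s+mu_a)] * L)
        """
        return (mu_s_p / mu_s) ** j * math.exp(
            -((mu_s_p + mu_a_p) - (mu_s + mu_a)) * path_len)

    # Example: scattering coefficient raised from 90 to 95 (arbitrary units)
    w = pmc_weight(j=8, path_len=0.3, mu_s=90.0, mu_a=0.5, mu_s_p=95.0, mu_a_p=0.5)
    ```

    Applying such a weight to every stored path converts one baseline simulation into estimates for many nearby parameter sets, which is where the claimed efficiency gain comes from.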

  6. Monte Carlo analysis of a control technique for a tunable white lighting system

    DEFF Research Database (Denmark)

    Chakrabarti, Maumita; Thorseth, Anders; Jepsen, Jørgen

    2017-01-01

    A simulated colour control mechanism for a multi-coloured LED lighting system is presented. The system achieves adjustable and stable white light output and allows for system-to-system reproducibility after application of the control mechanism. The control unit works using a pre-calibrated lookup table for an experimentally realized system, with a calibrated tristimulus colour sensor. A Monte Carlo simulation is used to examine the system performance concerning the variation of luminous flux and chromaticity of the light output. The inputs to the Monte Carlo simulation are variations of the LED peak wavelength, the LED rated luminous flux bin, the influence of the operating conditions, ambient temperature, driving current, and the spectral response of the colour sensor. The system performance is investigated by evaluating the outputs from the Monte Carlo simulation. The outputs show

  7. Monte Carlo simulation of mixed neutron-gamma radiation fields and dosimetry devices

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Guoqing

    2011-12-22

    Monte Carlo methods based on random sampling are widely used in different fields for their capability of solving problems with a large number of coupled degrees of freedom. In this work, Monte Carlo methods are successfully applied to the simulation of the mixed neutron-gamma field in an interim storage facility and of neutron dosimeters of different types. Details are discussed in two parts: In the first part, the method of simulating an interim storage facility loaded with CASTORs is presented. The size of a CASTOR is rather large (several meters) and the CASTOR wall is very thick (tens of centimeters). Obtaining dose rates outside a CASTOR with reasonable errors usually costs hours or even days. For the simulation of a large number of CASTORs in an interim storage facility, a calculation needs weeks or even months to finish. Variance reduction techniques were used to reduce the calculation time and to achieve reasonable relative errors. Source clones were applied to avoid unnecessary repeated calculations. In addition, the simulations were performed on a cluster system. With the calculation techniques discussed above, the efficiency of the calculations can be improved considerably. In the second part, the methods of simulating the response of neutron dosimeters are presented. An Alnor albedo dosimeter was modelled in MCNP, and it has been simulated in the facility to calculate the calibration factor for the evaluated response to a Cf-252 source. The angular response of Makrofol detectors to fast neutrons has also been investigated. As a kind of solid state nuclear track detector (SSNTD), Makrofol can detect fast neutrons by recording the neutron-induced heavy charged recoils. To obtain the information on charged recoils, general-purpose Monte Carlo codes were used for transporting the incident neutrons. The response of Makrofol to fast neutrons depends on several factors. Based on the parameters which affect the track revealing, the formation of visible tracks was determined. For

  8. Conversation with Juan Carlos Negrete.

    Science.gov (United States)

    Negrete, Juan Carlos

    2013-08-01

    Juan Carlos Negrete is Emeritus Professor of Psychiatry, McGill University; Founding Director, Addictions Unit, Montreal General Hospital; former President, Canadian Society of Addiction Medicine; and former WHO/PAHO Consultant on Alcoholism, Drug Addiction and Mental Health.

  9. Monte Carlo Study on Singly Tagged D Mesons at BES-Ⅲ

    Institute of Scientific and Technical Information of China (English)

    ZHAO Ming-Gang; YU Chun-Xu; LI Xue-Qian

    2009-01-01

    We present Monte Carlo studies of the singly tagged D mesons, which are crucial in the absolute measurements of D meson decays, based on a full Monte Carlo simulation of the BES-Ⅲ detector with the BES-Ⅲ Offline Software System. The expected detection efficiencies and mass resolutions of the tagged D mesons are well estimated.

  10. Monte Carlo integration on GPU

    OpenAIRE

    Kanzaki, J.

    2010-01-01

    We use a graphics processing unit (GPU) for fast computations of Monte Carlo integrations. Two widely used Monte Carlo integration programs, VEGAS and BASES, are parallelized on the GPU. Using $W^{+}$ plus multi-gluon production processes at the LHC, we compare integrated cross sections and execution times for the FORTRAN and C programs on the CPU with those on the GPU. Integrated results agree with each other within statistical errors. Programs on the GPU run about 50 times faster than those in C...

  11. Improvement of the WBC calibration of the Internal Dosimetry Laboratory of the CDTN/CNEN using MCNPX code

    Energy Technology Data Exchange (ETDEWEB)

    Guerra P, F.; Heeren de O, A. [Universidade Federal de Minas Gerais, Departamento de Engenharia Nuclear, Programa de Pos Graduacao em Ciencias e Tecnicas Nucleares, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil); Melo, B. M.; Lacerda, M. A. S.; Da Silva, T. A.; Ferreira F, T. C., E-mail: tcff01@gmail.com [Centro de Desenvolvimento da Tecnologia Nuclear, Programa de Pos Graduacao / CNEN, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil)

    2015-10-15

    The Plan of Radiological Protection licensed by the National Nuclear Energy Commission (CNEN) in Brazil includes the assessment of risks of internal and external exposure through a program of individual monitoring, which is responsible for controlling exposures and ensuring the maintenance of radiation safety. The Laboratory of Internal Dosimetry of the Center for Development of Nuclear Technology (LID/CDTN) is responsible for routine monitoring of internal contamination of Individuals Occupationally Exposed (IOEs): those involved in handling {sup 18}F sources produced by the Unit for Research and Production of Radiopharmaceuticals, as well as whole-body monitoring of workers from the Research Reactor TRIGA IPR-R1/CDTN or whenever there is any risk of accidental incorporation. The determination of photon-emitting radionuclides in the human body requires calibration of the counting geometries in order to obtain an efficiency curve. The calibration process normally makes use of physical phantoms containing certified activities of the radionuclides of interest. The objective of this project is the calibration of the WBC facility of the LID/CDTN using the BOMAB physical phantom and Monte Carlo simulations. Three steps were needed to complete the calibration process. First, the BOMAB was filled with a KCl solution and several measurements of the gamma rays (1.46 MeV) emitted by {sup 40}K were made. Second, simulations using the MCNPX code were performed to calculate the counting efficiency (Ce) for the BOMAB phantom model, and the results were compared with the measured Ce. Third and last, the modeled BOMAB phantom was used to calculate the Ce over the energy range of interest. The results showed good agreement between measured and simulated values, within the expected ratio. (Author)
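As an illustration of the efficiency-calibration step described above, the sketch below computes a counting efficiency from a measured net count rate and a known phantom activity. This is a minimal sketch only: the function name and all numerical values (counts, live time, activity, gamma yield) are hypothetical and are not taken from the LID/CDTN measurements.

```python
# Hypothetical sketch: counting efficiency (Ce) at 1.46 MeV from a
# KCl-filled phantom measurement. All numbers are illustrative only.

def counting_efficiency(net_counts, live_time_s, activity_bq, gamma_yield):
    """Ce = observed count rate / expected photon emission rate."""
    measured_rate = net_counts / live_time_s     # counts per second
    emission_rate = activity_bq * gamma_yield    # photons per second
    return measured_rate / emission_rate

# Example: 1.8e4 net counts in 3600 s from a phantom assumed to contain
# 12 kBq of K-40 (the 1.46 MeV gamma yield is roughly 0.107 per decay).
ce = counting_efficiency(net_counts=1.8e4, live_time_s=3600.0,
                         activity_bq=12_000.0, gamma_yield=0.107)
print(f"counting efficiency: {ce:.4f}")
```

In a real calibration this ratio would be determined at several energies to build the efficiency curve the abstract refers to.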

  12. A new approach to calibrate steady groundwater flow models with time series of head observations

    Science.gov (United States)

    Obergfell, C.; Bakker, M.; Maas, C.

    2012-04-01

    We developed a new method to calibrate aquifer parameters of steady-state well field models using measured time series of head fluctuations. Our method is an alternative to standard pumping tests and is based on time series analysis using parametric impulse response functions. First, the pumping influence is isolated from the overall groundwater fluctuation observed at monitoring wells around the well field, and response functions are determined for each individual well. Time series parameters are optimized using a quasi-Newton algorithm. For one monitoring well, time series model parameters are also optimized by means of SCEM-UA, a Markov chain Monte Carlo algorithm, as a check on the validity of the parameters obtained by the faster quasi-Newton method. Subsequently, the drawdown corresponding to an average yearly pumping rate is calculated from the response functions determined by time series analysis. The drawdown values estimated with acceptable confidence intervals are used as calibration targets for a steady groundwater flow model. A case study is presented of the drinking water supply well field of Waalwijk (Netherlands). In this case study, a uniform aquifer transmissivity is optimized together with the conductance of ditches in the vicinity of the well field. Groundwater recharge and boundary heads do not have to be entered, which eliminates two important sources of uncertainty. The method constitutes a cost-efficient alternative to pumping tests and allows the determination of pumping influences without changes in well field operation.

  13. Optimization of sequential decisions by least squares Monte Carlo method

    DEFF Research Database (Denmark)

    Nishijima, Kazuyoshi; Anders, Annett

    change adaptation measures, and evacuation of people and assets in the face of an emerging natural hazard event. Focusing on the last example, an efficient solution scheme is proposed by Anders and Nishijima (2011). The proposed solution scheme takes basis in the least squares Monte Carlo method, which...

  14. A simple methodology for characterization of germanium coaxial detectors by using Monte Carlo simulation and evolutionary algorithms.

    Science.gov (United States)

    Guerra, J G; Rubiano, J G; Winter, G; Guerra, A G; Alonso, H; Arnedo, M A; Tejera, A; Gil, J M; Rodríguez, R; Martel, P; Bolivar, J P

    2015-11-01

    The determination by gamma spectrometry of the activity concentration of a specific radionuclide in a sample requires knowledge of the full energy peak efficiency (FEPE) at the energy of interest. The difficulties of experimental calibration make it advisable to have alternative methods for FEPE determination, such as simulating the transport of photons in the crystal by the Monte Carlo method, which requires accurate knowledge of the characteristics and geometry of the detector. The characterization process is mainly carried out by Canberra Industries Inc. using proprietary techniques and methodologies developed by that company. It is a costly procedure (due to shipping and to the cost of the process itself), and for some research laboratories an alternative in situ procedure can be very useful. The main goal of this paper is to find an alternative to this costly characterization process by establishing a method for optimizing the parameters that characterize the detector through a computational procedure which can be reproduced at a standard research laboratory. This method consists in determining the detector's geometric parameters by Monte Carlo simulation coupled with an optimization process based on evolutionary algorithms, starting from a set of reference FEPEs determined experimentally or computationally. The proposed method has proven to be effective and simple to implement. It provides a set of characterization parameters which has been successfully validated for different source-detector geometries, as well as for a wide range of environmental samples and certified materials.
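The optimization loop described above can be sketched in a few lines. This is a toy illustration under strong assumptions: the Monte Carlo photon-transport step is replaced by a closed-form forward model, and all parameter names and values are hypothetical; the paper's actual evolutionary algorithms and detector model are more elaborate.

```python
import random

# Toy stand-in for the characterization procedure: an elitist evolution
# strategy fits detector parameters so that a forward model reproduces
# a set of reference full-energy peak efficiencies (FEPEs).
ENERGIES_KEV = [60.0, 122.0, 662.0, 1332.0]

def forward_model(radius_cm, dead_layer_mm):
    # Hypothetical formula (NOT a transport code): response falls with
    # energy and with dead-layer thickness.
    return [radius_cm / (1.0 + e / 500.0) - 0.01 * dead_layer_mm
            for e in ENERGIES_KEV]

REFERENCE_FEPE = forward_model(3.2, 0.7)   # plays the role of measured FEPEs

def cost(params):
    sim = forward_model(*params)
    return sum((s - r) ** 2 for s, r in zip(sim, REFERENCE_FEPE))

def evolve(generations=200, pop_size=20, sigma=0.2, decay=0.98, seed=1):
    """Minimal elitist (1+lambda) evolution strategy with step-size decay."""
    rng = random.Random(seed)
    best = (rng.uniform(2.0, 5.0), rng.uniform(0.1, 2.0))
    for _ in range(generations):
        offspring = [(best[0] + rng.gauss(0.0, sigma),
                      best[1] + rng.gauss(0.0, sigma))
                     for _ in range(pop_size)]
        best = min(offspring + [best], key=cost)
        sigma *= decay
    return best

fitted = evolve()
print(fitted, cost(fitted))
```

The same loop structure applies when `forward_model` is a real Monte Carlo simulation; the cost of each evaluation then dominates the run time.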

  15. Calibration Under Uncertainty.

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton; Trucano, Timothy Guy

    2005-03-01

    This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
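The deterministic calibration formulation summarized above (choose parameters minimizing the squared difference between model-computed and experimental data) can be illustrated with a toy example. The model, the "experimental" data, and the grid-search optimizer below are all hypothetical stand-ins, not anything from the report.

```python
# Minimal sketch of deterministic least-squares calibration: pick the
# parameters that minimize the sum of squared residuals between a
# computer model's predictions and experimental observations.

def model(theta, x):
    return theta[0] * x + theta[1]          # a simple linear "computer model"

x_data = [0.0, 1.0, 2.0, 3.0]
y_data = [0.1, 1.9, 4.1, 5.9]               # hypothetical observations

def sse(theta):
    """Sum of squared errors between predictions and data."""
    return sum((model(theta, x) - y) ** 2 for x, y in zip(x_data, y_data))

# A coarse grid search stands in for a real optimizer.
candidates = [(a / 10.0, b / 10.0) for a in range(0, 41) for b in range(-20, 21)]
best = min(candidates, key=sse)
print(best, sse(best))
```

The CUU approach discussed in the report goes beyond this by treating the residual itself as uncertain rather than driving it to a point minimum.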

  16. HAWC Timing Calibration

    CERN Document Server

    Huentemeyer, Petra; Dingus, Brenda

    2009-01-01

    The High-Altitude Water Cherenkov (HAWC) Experiment is a second-generation high-sensitivity gamma-ray and cosmic-ray detector that builds on the experience and technology of the Milagro observatory. Like Milagro, HAWC utilizes the water Cherenkov technique to measure extensive air showers. Instead of a pond filled with water (as in Milagro), an array of closely packed water tanks is used. The event direction will be reconstructed using the times when the PMTs in each tank are triggered. Therefore, the timing calibration will be crucial for reaching an angular resolution as low as 0.25 degrees. We propose to use a laser calibration system, patterned after the calibration system in Milagro. Like Milagro, the HAWC optical calibration system will use ~1 ns laser light pulses. Unlike Milagro, the PMTs are optically isolated and require their own optical fiber calibration. For HAWC the laser light pulses will be directed through a series of optical fan-outs and fibers to illuminate the PMTs in approximately one half o...

  17. Calibration Systems Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Myers, Tanya L.; Broocks, Bryan T.; Phillips, Mark C.

    2006-02-01

    The Calibration Systems project at Pacific Northwest National Laboratory (PNNL) is aimed at developing and demonstrating compact Quantum Cascade (QC) laser-based calibration systems for infrared imaging systems. These on-board systems will improve the calibration technology for passive sensors, which enable stand-off detection of the proliferation or use of weapons of mass destruction, by replacing on-board blackbodies with QC laser-based systems. This alternative technology can minimize the impact on instrument size and weight while improving the quality of instruments for a variety of missions. Replacing flight blackbodies is made feasible by the high output, stability, and repeatability of the QC laser spectral radiance.

  18. Monte Carlo simulation of NSE at reactor and spallation sources

    Energy Technology Data Exchange (ETDEWEB)

    Zsigmond, G.; Wechsler, D.; Mezei, F. [Hahn-Meitner-Institut Berlin, Berlin (Germany)

    2001-03-01

    A Monte Carlo (MC) computation study of NSE (neutron spin echo) has been performed by means of VITESS, investigating the classic and TOF-NSE options at spallation sources. The use of white beams in TOF-NSE makes the flipper efficiency as a function of neutron wavelength an important issue. The emphasis was put on the exact evaluation of flipper efficiencies for wide wavelength-band instruments. (author)

  19. Iterative Magnetometer Calibration

    Science.gov (United States)

    Sedlak, Joseph

    2006-01-01

    This paper presents an iterative method for three-axis magnetometer (TAM) calibration that makes use of three existing utilities recently incorporated into the attitude ground support system used at NASA's Goddard Space Flight Center. The method combines attitude-independent and attitude-dependent calibration algorithms with a new spinning spacecraft Kalman filter to solve for biases, scale factors, nonorthogonal corrections to the alignment, and the orthogonal sensor alignment. The method is particularly well-suited to spin-stabilized spacecraft, but may also be useful for three-axis stabilized missions given sufficient data to provide observability.

  20. Parallelization of Monte Carlo codes MVP/GMVP

    Energy Technology Data Exchange (ETDEWEB)

    Nagaya, Yasunobu; Mori, Takamasa; Nakagawa, Masayuki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Sasaki, Makoto

    1998-03-01

    General-purpose Monte Carlo codes MVP/GMVP are well vectorized and thus enable high-speed Monte Carlo calculations. In order to achieve further speedups, we parallelized the codes on different types of parallel processing platforms: a distributed-memory vector-parallel computer (Fujitsu VPP500), a distributed-memory massively parallel computer (Intel Paragon) and a distributed-memory scalar-parallel computer (Hitachi SR2201). In general, ideal speedup was obtained for large-scale problems, but parallelization efficiency degraded as the batch size per processing element (PE) decreased. (author)
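The reported loss of parallelization efficiency at small batch sizes can be illustrated with a simple fixed-overhead timing model; the overhead and history-rate constants below are hypothetical, not MVP/GMVP measurements.

```python
# Illustrative model: parallel efficiency degrades as the batch per
# processing element (PE) shrinks, because a fixed per-batch overhead
# (startup, communication, tallying) stops being amortized.

def parallel_efficiency(total_histories, n_pe, overhead_s=0.5, rate_hz=1.0e5):
    batch_per_pe = total_histories / n_pe
    serial_time = total_histories / rate_hz + overhead_s
    parallel_time = batch_per_pe / rate_hz + overhead_s
    speedup = serial_time / parallel_time
    return speedup / n_pe            # efficiency = speedup / number of PEs

for n_pe in (1, 8, 64, 512):
    print(n_pe, round(parallel_efficiency(1.0e7, n_pe), 3))
```

With these constants the efficiency falls from 1.0 on one PE to well below 0.5 on 512 PEs, mirroring the qualitative behavior the abstract describes.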

  1. Green's function monte carlo and the many-fermion problem

    Science.gov (United States)

    Kalos, M. H.

    The application of Green's function Monte Carlo to many-body problems is outlined. For boson problems, the method is well developed and practical. An "efficiency principle", importance sampling, can be used to reduce variance. Fermion problems are more difficult because spatially antisymmetric functions must be represented as a difference of two density functions. Naively treated, this leads to a rapid growth of Monte Carlo error. Methods for overcoming the difficulty are discussed. Satisfactory algorithms exist for few-body problems; for many-body problems more work is needed, but it is likely that adequate methods will soon be available.
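The importance-sampling idea mentioned above can be illustrated with a standard toy example (unrelated to the Green's function machinery itself): estimating a normal tail probability, where a naive estimator wastes almost every sample while a tail-shifted proposal concentrates them.

```python
import math, random

# Importance sampling as a variance-reduction device: estimate the
# standard normal tail probability P(X > 3) naively and with an
# exponential proposal shifted into the tail.

rng = random.Random(0)
N = 100_000
TRUE = 0.5 * math.erfc(3.0 / math.sqrt(2.0))   # exact value, ~1.35e-3

# Naive Monte Carlo: only ~0.1% of samples ever land in the tail.
naive = sum(rng.gauss(0.0, 1.0) > 3.0 for _ in range(N)) / N

def phi(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

# Importance sampling: draw x = 3 + Exp(1), so q(x) = exp(-(x - 3)),
# and weight each sample by phi(x) / q(x).
total = 0.0
for _ in range(N):
    x = 3.0 + rng.expovariate(1.0)
    total += phi(x) / math.exp(-(x - 3.0))
importance = total / N

print(f"true={TRUE:.3e} naive={naive:.3e} importance={importance:.3e}")
```

At the same sample count the importance-sampled estimate is typically an order of magnitude closer to the true value than the naive one.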

  2. Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions

    Directory of Open Access Journals (Sweden)

    Jianhua Zhang

    2014-01-01

    Full Text Available This paper proposes a fast and accurate calibration method for multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head and multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple-view calibration process is implemented to obtain the transformations between views. We first develop an efficient local repair algorithm to improve the depth map, and then design a special calibration body. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method achieves good performance and can be further applied to EEG source localization applications on the human brain.

  3. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros

    2016-08-29

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.
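The telescoping identity at the heart of MLMC can be sketched in a few lines. This is a toy illustration, not the SMC-MLMC sampler of the article: the PDE solve is replaced by a cheap level-dependent approximation (a truncated Taylor series), and the per-level sample counts are arbitrary.

```python
import math, random

# MLMC telescoping estimator for E[exp(U)], U ~ Uniform(0, 1), whose
# exact value is e - 1. The "level-l approximation" P_l is the
# degree-l Taylor polynomial of exp(u); coupled samples (same u at
# levels l and l-1) make each correction term cheap and low-variance.

def p_level(u, level):
    return sum(u ** k / math.factorial(k) for k in range(level + 1))

def mlmc(max_level=6, n0=40_000, seed=0):
    rng = random.Random(seed)
    estimate = 0.0
    for level in range(max_level + 1):
        n = max(n0 // 4 ** level, 100)      # fewer samples at finer levels
        total = 0.0
        for _ in range(n):
            u = rng.random()
            if level == 0:
                total += p_level(u, 0)
            else:
                total += p_level(u, level) - p_level(u, level - 1)
        estimate += total / n               # telescoping sum of corrections
    return estimate

est = mlmc()
print(est, math.e - 1.0)
```

Because the corrections shrink rapidly with level, most of the sampling budget is spent on the cheap coarse levels, which is exactly the efficiency gain MLMC exploits.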

  4. Quantitative basis for component factors of gas flow proportional counting efficiencies

    Science.gov (United States)

    Nichols, Michael C.

    This dissertation investigates the counting efficiency calibration of a gas flow proportional counter with beta-particle emitters in order to (1) determine by measurement and simulation the values of the component factors of beta-particle counting efficiency for a proportional counter, (2) compare the simulation results and measured counting efficiencies, and (3) determine the uncertainty of the simulation and measurements. Monte Carlo simulation results from the MCNP5 code were compared with measured counting efficiencies as a function of sample thickness for 14C, 89Sr, 90Sr, and 90Y. The Monte Carlo model simulated strontium carbonate with areal thicknesses from 0.1 to 35 mg cm-2. The samples were precipitated as strontium carbonate with areal thicknesses from 3 to 33 mg cm-2, mounted on membrane filters, and counted on a low-background gas flow proportional counter. The estimated fractional standard deviation was 2-4% (except 6% for 14C) for the efficiency measurements of the radionuclides. The Monte Carlo simulations have uncertainties estimated at 5 to 6 percent for carbon-14 and 2.4 percent for strontium-89, strontium-90, and yttrium-90. The curves of simulated counting efficiency vs. sample areal thickness agreed within 3% with the curves of best fit drawn through the 25-49 measured points for each of the four radionuclides. Contributions from this research include development of uncertainty budgets for the analytical processes; evaluation of alternative methods for determining the chemical yield, which is critical to the measurement process; correction of a bias found in the MCNP normalization of beta spectra histograms; clarification of the interpretation of the commonly used ICRU beta-particle spectra for use with MCNP; and evaluation of instrument parameters as applied to the simulation model to obtain estimates of the counting efficiency from simulated pulse-height tallies.
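A common analytic picture of why counting efficiency falls with areal thickness is a uniform self-absorption factor for a thick source. The sketch below uses that textbook-style formula with hypothetical constants; it is not the dissertation's MCNP5 model, only an illustration of the qualitative efficiency-vs-thickness behavior being measured.

```python
import math

# Illustrative self-absorption model: for a uniformly thick beta source,
# efficiency falls with areal thickness t roughly as
#   eps(t) = eps0 * (1 - exp(-mu * t)) / (mu * t),
# where mu is an effective mass attenuation coefficient for the betas.
# eps0 and mu below are hypothetical, not fitted values.

def efficiency(t_mg_cm2, eps0=0.45, mu=0.04):   # mu in cm^2/mg
    if t_mg_cm2 == 0.0:
        return eps0                              # thin-source limit
    x = mu * t_mg_cm2
    return eps0 * (1.0 - math.exp(-x)) / x

for t in (0.1, 3.0, 15.0, 33.0):
    print(f"t = {t:5.1f} mg/cm^2  ->  efficiency = {efficiency(t):.3f}")
```

The Monte Carlo approach of the dissertation replaces this single-parameter curve with a full transport calculation, but the monotone decrease with areal thickness is the same feature being compared.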

  5. Muon Calibration at SoLid

    CERN Document Server

    Saunders, Daniel

    2016-01-01

    The SoLid experiment aims to make a measurement of very short distance neutrino oscillations using reactor antineutrinos. Key to its sensitivity are the experiment's high spatial and energy resolution, combined with a very suitable reactor source and efficient background rejection. The fine segmentation of the detector (cubes of side 5 cm) and the ability to resolve signals in space and time give SoLid the capability to track cosmic muons. Although in principle a source of background, these become a valuable calibration source if they can be cleanly identified. This work presents the first energy calibration results, using cosmic muons, of the 288 kg SoLid prototype SM1. This includes the methodology of tracking at SoLid, cosmic-ray angular analyses at the reactor site, estimates of the time resolution, and calibrations at the cube level.

  6. Smart Calibration of Excavators

    DEFF Research Database (Denmark)

    Bro, Marie; Døring, Kasper; Ellekilde, Lars-Peter

    2005-01-01

    Excavators dig holes. But where is the bucket? The purpose of this report is to treat four different problems concerning calibrations of position indicators for excavators in operation at concrete construction sites. All four problems are related to the question of how to determine the precise ge...

  7. Calibrating Communication Competencies

    Science.gov (United States)

    Surges Tatum, Donna

    2016-11-01

    The Many-faceted Rasch measurement model is used in the creation of a diagnostic instrument by which communication competencies can be calibrated, the severity of observers/raters can be determined, the ability of speakers measured, and comparisons made between various groups.

  8. Entropic calibration revisited

    Energy Technology Data Exchange (ETDEWEB)

    Brody, Dorje C. [Blackett Laboratory, Imperial College, London SW7 2BZ (United Kingdom)]. E-mail: d.brody@imperial.ac.uk; Buckley, Ian R.C. [Centre for Quantitative Finance, Imperial College, London SW7 2AZ (United Kingdom); Constantinou, Irene C. [Blackett Laboratory, Imperial College, London SW7 2BZ (United Kingdom); Meister, Bernhard K. [Blackett Laboratory, Imperial College, London SW7 2BZ (United Kingdom)

    2005-04-11

    The entropic calibration of the risk-neutral density function is effective in recovering the strike dependence of options, but encounters difficulties in determining the relevant greeks. By use of put-call reversal we apply the entropic method to the time reversed economy, which allows us to obtain the spot price dependence of options and the relevant greeks.

  9. Calibration with Absolute Shrinkage

    DEFF Research Database (Denmark)

    Øjelund, Henrik; Madsen, Henrik; Thyregod, Poul

    2001-01-01

    is suggested to cope with the singular design matrix most often seen in chemometric calibration. Furthermore, the proposed algorithm may be generalized to all convex norms of the form Σ_j |β_j|^γ with γ ≥ 1, i.e. a method that continuously varies from ridge regression...

  10. LOFAR facet calibration

    CERN Document Server

    van Weeren, R J; Hardcastle, M J; Shimwell, T W; Rafferty, D A; Sabater, J; Heald, G; Sridhar, S S; Dijkema, T J; Brunetti, G; Brüggen, M; Andrade-Santos, F; Ogrean, G A; Röttgering, H J A; Dawson, W A; Forman, W R; de Gasperin, F; Jones, C; Miley, G K; Rudnick, L; Sarazin, C L; Bonafede, A; Best, P N; Bîrzan, L; Cassano, R; Chyży, K T; Croston, J H; Ensslin, T; Ferrari, C; Hoeft, M; Horellou, C; Jarvis, M J; Kraft, R P; Mevius, M; Intema, H T; Murray, S S; Orrú, E; Pizzo, R; Simionescu, A; Stroe, A; van der Tol, S; White, G J

    2016-01-01

    LOFAR, the Low-Frequency Array, is a powerful new radio telescope operating between 10 and 240 MHz. LOFAR allows detailed sensitive high-resolution studies of the low-frequency radio sky. At the same time LOFAR also provides excellent short baseline coverage to map diffuse extended emission. However, producing high-quality deep images is challenging due to the presence of direction-dependent calibration errors, caused by imperfect knowledge of the station beam shapes and the ionosphere. Furthermore, the large data volume and presence of station clock errors present additional difficulties. In this paper we present a new calibration scheme, which we name facet calibration, to obtain deep high-resolution LOFAR High Band Antenna images using the Dutch part of the array. This scheme solves for and corrects the direction-dependent errors in a number of facets that cover the observed field of view. Facet calibration provides close to thermal-noise-limited images for a typical 8 hr observing run at $\sim$ 5 arcsec resolu...

  11. Measurement System & Calibration report

    DEFF Research Database (Denmark)

    Kock, Carsten Weber; Vesth, Allan

    This Measurement System & Calibration report describes DTU's measurement system installed at a specific wind turbine. Most of the sensors were installed by others (see [1]); the rest were installed by DTU. The results of the measurements, described in this report...

  12. NVLAP calibration laboratory program

    Energy Technology Data Exchange (ETDEWEB)

    Cigler, J.L.

    1993-12-31

    This paper presents an overview of the progress up to April 1993 in the development of the Calibration Laboratories Accreditation Program within the framework of the National Voluntary Laboratory Accreditation Program (NVLAP) at the National Institute of Standards and Technology (NIST).

  13. Microhotplate Temperature Sensor Calibration and BIST.

    Science.gov (United States)

    Afridi, M; Montgomery, C; Cooper-Balis, E; Semancik, S; Kreider, K G; Geist, J

    2011-01-01

    In this paper we describe a novel long-term microhotplate temperature sensor calibration technique suitable for Built-In Self Test (BIST). The microhotplate thermal resistance (thermal efficiency) and the thermal voltage from an integrated platinum-rhodium thermocouple were calibrated against a freshly calibrated four-wire polysilicon microhotplate-heater temperature sensor (heater) that is not stable over long periods of time when exposed to higher temperatures. To stress the microhotplate, its temperature was raised to around 400 °C and held there for days. The heater was then recalibrated as a temperature sensor, and microhotplate temperature measurements were made based on the fresh calibration of the heater, the first calibration of the heater, the microhotplate thermal resistance, and the thermocouple voltage. This procedure was repeated 10 times over a period of 80 days. The results show that the heater calibration drifted substantially during the period of the test while the microhotplate thermal resistance and the thermocouple voltage remained stable to within about ±1 °C over the same period. Therefore, the combination of a microhotplate heater-temperature sensor and either the microhotplate thermal resistance or an integrated thin film platinum-rhodium thermocouple can be used to provide a stable, calibrated microhotplate-temperature sensor, and the combination of the three sensors is suitable for implementing BIST functionality. Alternatively, if a stable microhotplate-heater temperature sensor is available, such as a properly annealed platinum heater-temperature sensor, then the thermal resistance of the microhotplate and the electrical resistance of the platinum heater will be sufficient to implement BIST. It is also shown that aluminum- and polysilicon-based temperature sensors, which are not stable enough for measuring high microhotplate temperatures (>220 °C) without impractically frequent recalibration, can be used to measure the

  14. Hierarchical Bayesian Data Analysis in Radiometric SAR System Calibration: A Case Study on Transponder Calibration with RADARSAT-2 Data

    Directory of Open Access Journals (Sweden)

    Björn J. Döring

    2013-12-01

    Full Text Available A synthetic aperture radar (SAR system requires external absolute calibration so that radiometric measurements can be exploited in numerous scientific and commercial applications. Besides estimating a calibration factor, metrological standards also demand the derivation of a respective calibration uncertainty. This uncertainty is currently not systematically determined. Here for the first time it is proposed to use hierarchical modeling and Bayesian statistics as a consistent method for handling and analyzing the hierarchical data typically acquired during external calibration campaigns. Through the use of Markov chain Monte Carlo simulations, a joint posterior probability can be conveniently derived from measurement data despite the necessary grouping of data samples. The applicability of the method is demonstrated through a case study: The radar reflectivity of DLR’s new C-band Kalibri transponder is derived through a series of RADARSAT-2 acquisitions and a comparison with reference point targets (corner reflectors. The systematic derivation of calibration uncertainties is seen as an important step toward traceable radiometric calibration of synthetic aperture radars.

  15. VARIATIONAL MONTE-CARLO APPROACH FOR ARTICULATED OBJECT TRACKING

    Directory of Open Access Journals (Sweden)

    Kartik Dwivedi

    2013-12-01

    Full Text Available In this paper, we describe a novel variational Monte Carlo approach for modeling and tracking body parts of articulated objects. An articulated object (human target) is represented as a dynamic Markov network of its constituent parts. The proposed approach combines local information from individual body parts with spatial constraints influenced by neighboring parts. The movement of the parts of the articulated body is modeled with local displacement information from the Markov network and global information from neighboring parts. We explore the effect of certain model parameters (including the number of parts tracked, the number of Monte Carlo cycles, etc.) on system accuracy and show that our variational Monte Carlo approach achieves better efficiency and effectiveness compared to other methods on a number of real-time video datasets containing single targets.

  16. TAKING THE NEXT STEP WITH INTELLIGENT MONTE CARLO

    Energy Technology Data Exchange (ETDEWEB)

    Booth, T.E.; Carlson, J.A. [and others

    2000-10-01

    For many scientific calculations, Monte Carlo is the only practical method available. Unfortunately, standard Monte Carlo methods converge slowly, as the square root of the computer time. We have shown, both numerically and theoretically, that the convergence rate can be increased dramatically if the Monte Carlo algorithm is allowed to adapt based on what it has learned from previous samples. As the learning continues, computational efficiency increases, often geometrically fast. The particle transport work achieved geometric convergence for a two-region problem as well as for problems with rapidly changing nuclear data. The statistics work provided theoretical proof of geometric convergence for continuous transport problems and promising initial results for airborne migration of particles. The statistical physics work applied adaptive methods to a variety of physical problems including the three-dimensional Ising glass, quantum scattering, and eigenvalue problems.

  17. Mercury CEM Calibration

    Energy Technology Data Exchange (ETDEWEB)

    John F. Schabron; Joseph F. Rovani; Susan S. Sorini

    2007-03-31

    The Clean Air Mercury Rule (CAMR) which was published in the Federal Register on May 18, 2005, requires that calibration of mercury continuous emissions monitors (CEMs) be performed with NIST-traceable standards. Western Research Institute (WRI) is working closely with the Electric Power Research Institute (EPRI), the National Institute of Standards and Technology (NIST), and the Environmental Protection Agency (EPA) to facilitate the development of the experimental criteria for a NIST traceability protocol for dynamic elemental mercury vapor generators. The traceability protocol will be written by EPA. Traceability will be based on the actual analysis of the output of each calibration unit at several concentration levels ranging from about 2-40 ug/m{sup 3}, and this analysis will be directly traceable to analyses by NIST using isotope dilution inductively coupled plasma/mass spectrometry (ID ICP/MS) through a chain of analyses linking the calibration unit in the power plant to the NIST ID ICP/MS. Prior to this project, NIST did not provide a recommended mercury vapor pressure equation or list mercury vapor pressure in its vapor pressure database. The NIST Physical and Chemical Properties Division in Boulder, Colorado was subcontracted under this project to study the issue in detail and to recommend a mercury vapor pressure equation that the vendors of mercury vapor pressure calibration units can use to calculate the elemental mercury vapor concentration in an equilibrium chamber at a particular temperature. As part of this study, a preliminary evaluation of calibration units from five vendors was made. The work was performed by NIST in Gaithersburg, MD and Joe Rovani from WRI who traveled to NIST as a Visiting Scientist.

  18. Multiple-time-stepping generalized hybrid Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Escribano, Bruno, E-mail: bescribano@bcamath.org [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); Akhmatskaya, Elena [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao (Spain); Reich, Sebastian [Universität Potsdam, Institut für Mathematik, D-14469 Potsdam (Germany); Azpiroz, Jon M. [Kimika Fakultatea, Euskal Herriko Unibertsitatea (UPV/EHU) and Donostia International Physics Center (DIPC), P.K. 1072, Donostia (Spain)

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only improve the performance of GSHMC itself but also allow it to outperform the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo (GHMC) method. The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that placing the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by generalized hybrid Monte Carlo, improves the stability of MTS and allows for larger step sizes in the simulation of complex systems.

  19. Monte Carlo Hamiltonian: Linear Potentials

    Institute of Scientific and Technical Information of China (English)

    LUO Xiang-Qian; LIU Jin-Jiang; HUANG Chun-Qing; JIANG Jun-Qin; Helmut KROGER

    2002-01-01

    We further study the validity of the Monte Carlo Hamiltonian method. The advantage of the method, in comparison with the standard Monte Carlo Lagrangian approach, is its capability to study the excited states. We consider two quantum mechanical models: a symmetric one, V(x) = |x|/2; and an asymmetric one, V(x) = ∞ for x < 0 and V(x) = x for x ≥ 0. The results for the spectrum, wave functions and thermodynamical observables are in agreement with the analytical or Runge-Kutta calculations.

  20. Proton Upset Monte Carlo Simulation

    Science.gov (United States)

    O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.

    2009-01-01

    The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment, based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled, and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.

  1. Field calibration of cup anemometers

    DEFF Research Database (Denmark)

    Schmidt Paulsen, Uwe; Mortensen, Niels Gylling; Hansen, Jens Carsten

    2007-01-01

    A field calibration method and results are described along with the experience gained with the method. The cup anemometers to be calibrated are mounted in a row on a 10-m high rig and calibrated in the free wind against a reference cup anemometer. The method has been reported [1] to improve...... the statistical bias on the data relative to calibrations carried out in a wind tunnel. The methodology is sufficiently accurate for calibration of cup anemometers used for wind resource assessments and provides a simple, reliable and cost-effective solution to cup anemometer calibration, especially suited...

  2. A continuation multilevel Monte Carlo algorithm

    KAUST Repository

    Collier, Nathan

    2014-09-05

    We propose a novel Continuation Multi Level Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending when the required error tolerance is satisfied. CMLMC assumes discretization hierarchies that are defined a priori for each level and are geometrically refined across levels. The actual choice of computational work across levels is based on parametric models for the average cost per sample and the corresponding variance and weak error. These parameters are calibrated using Bayesian estimation, taking particular notice of the deepest levels of the discretization hierarchy, where only few realizations are available to produce the estimates. The resulting CMLMC estimator exhibits a non-trivial splitting between bias and statistical contributions. We also show the asymptotic normality of the statistical error in the MLMC estimator and justify in this way our error estimate that allows prescribing both required accuracy and confidence in the final result. Numerical results substantiate the above results and illustrate the corresponding computational savings in examples that are described in terms of differential equations either driven by random measures or with random coefficients. © 2014, Springer Science+Business Media Dordrecht.
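
    The multilevel telescoping sum at the heart of MLMC can be illustrated without the continuation or Bayesian-calibration machinery. The sketch below estimates a toy expectation; the integrand, the level hierarchy and the geometric sample-allocation rule are illustrative assumptions, not the CMLMC algorithm itself:

```python
import math
import random

def payoff(u, level):
    """Level-l approximation of exp(u): (1 + u/n)^n with n = 2^level.
    The bias decays like O(2^-level), mimicking a discretization hierarchy."""
    n = 2 ** level
    return (1.0 + u / n) ** n

def mlmc_estimate(max_level, n_samples, seed=0):
    """Plain multilevel Monte Carlo estimate of E[exp(U)], U ~ Uniform(0,1).

    Telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], with
    geometrically fewer samples spent on the expensive fine levels."""
    rng = random.Random(seed)
    total = 0.0
    for level in range(max_level + 1):
        n = max(n_samples // 2 ** level, 1)   # geometric sample allocation
        acc = 0.0
        for _ in range(n):
            u = rng.random()                  # the same u couples the two levels
            fine = payoff(u, level)
            coarse = payoff(u, level - 1) if level > 0 else 0.0
            acc += fine - coarse
        total += acc / n
    return total

est = mlmc_estimate(max_level=8, n_samples=200_000)
print(est)  # exact answer is e - 1
```

    The coupling (evaluating both levels at the same random input) is what makes the variance of the correction terms small, so few samples suffice on the fine, expensive levels.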

  3. Monte Carlo Simulation of River Meander Modelling

    Science.gov (United States)

    Posner, A. J.; Duan, J. G.

    2010-12-01

    This study first compares the first-order analytical solutions for the flow field by Ikeda et al. (1981) and Johanesson and Parker (1989b). Ikeda et al.'s (1981) linear bank erosion model was implemented to predict the rate of bank erosion, in which the bank erosion coefficient is treated as a stochastic variable that varies with physical properties of the bank (e.g. cohesiveness, stratigraphy, vegetation density). The developed model was used to predict the evolution of meandering planforms. Then, the modeling results were analyzed and compared to the observed data. Since the migration of a meandering channel consists of downstream translation, lateral expansion, and downstream or upstream rotation, several measures are formulated in order to determine which resulting planform is closest to the experimentally measured one. Results from the deterministic model depend strongly on the calibrated erosion coefficient. Since field measurements are always limited, the stochastic model yielded more realistic predictions of meandering planform evolution. Due to the random nature of the bank erosion coefficient, the meandering planform evolution is a stochastic process that can only be accurately predicted by a stochastic model. Quasi-2D Ikeda (1989) flow solution with Monte Carlo simulation of the bank erosion coefficient.

  4. Quantum Monte Carlo Endstation for Petascale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Lubos Mitas

    2011-01-26

    The NCSU research group has been focused on accomplishing the key goals of this initiative: establishing a new generation of quantum Monte Carlo (QMC) computational tools as a part of the Endstation petaflop initiative for use at the DOE ORNL computational facilities and by the computational electronic structure community at large; carrying out high-accuracy quantum Monte Carlo demonstration projects applying these tools to forefront electronic structure problems in molecular and solid systems; and expanding, explaining and enhancing the impact of these advanced computational approaches. In particular, we have developed the quantum Monte Carlo code QWalk (www.qwalk.org), which was significantly expanded and optimized using funds from this support and has at present become an actively used tool in the petascale regime by ORNL researchers and beyond. These developments have been built upon efforts undertaken by the PI's group and collaborators over the last decade. The code was optimized and tested extensively on a number of parallel architectures including the petaflop ORNL Jaguar machine. We have developed and redesigned a number of code modules such as evaluation of wave functions and orbitals, calculation of pfaffians, and introduction of backflow coordinates, together with the overall organization of the code and random-walker distribution over multicore architectures. We have addressed several bottlenecks such as load balancing, and verified the efficiency and accuracy of the calculations with the other groups of the Endstation team. The QWalk package contains about 50,000 lines of high-quality object-oriented C++ and also includes interfaces to data files from other conventional electronic structure codes such as Gamess, Gaussian, Crystal and others. This grant supported the PI for one month during summers, a full-time postdoc and partially three graduate students over the period of the grant duration; it has resulted in 13

  5. Comparing Monte Carlo methods for finding ground states of Ising spin glasses: Population annealing, simulated annealing, and parallel tempering.

    Science.gov (United States)

    Wang, Wenlong; Machta, Jonathan; Katzgraber, Helmut G

    2015-07-01

    Population annealing is a Monte Carlo algorithm that marries features from simulated-annealing and parallel-tempering Monte Carlo. As such, it is ideal to overcome large energy barriers in the free-energy landscape while minimizing a Hamiltonian. Thus, population-annealing Monte Carlo can be used as a heuristic to solve combinatorial optimization problems. We illustrate the capabilities of population-annealing Monte Carlo by computing ground states of the three-dimensional Ising spin glass with Gaussian disorder, while comparing to simulated-annealing and parallel-tempering Monte Carlo. Our results suggest that population annealing Monte Carlo is significantly more efficient than simulated annealing but comparable to parallel-tempering Monte Carlo for finding spin-glass ground states.
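
    The algorithm compared above can be sketched in a few lines. The toy below anneals a population on a small one-dimensional ferromagnetic Ising chain; the chain length, temperature schedule and population size are illustrative assumptions, not the authors' three-dimensional spin-glass setup:

```python
import math
import random

def energy(s):
    """Open-chain ferromagnetic Ising energy, J = 1: E = -sum_i s_i s_{i+1}."""
    return -sum(a * b for a, b in zip(s, s[1:]))

def metropolis_sweep(s, beta, rng):
    """One Metropolis sweep of single-spin flips at inverse temperature beta."""
    n = len(s)
    for i in range(n):
        left = s[i - 1] if i > 0 else 0
        right = s[i + 1] if i < n - 1 else 0
        dE = 2 * s[i] * (left + right)      # energy change of flipping s[i]
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            s[i] = -s[i]
    return s

def population_annealing(n_spins=20, pop_size=200, n_steps=30, seed=1):
    """Toy population annealing for the ground state of a 1D Ising chain.

    Anneal beta from 0 upward; at each step reweight the population by
    exp(-dbeta * E), resample, then equilibrate with a Metropolis sweep."""
    rng = random.Random(seed)
    pop = [[rng.choice((-1, 1)) for _ in range(n_spins)] for _ in range(pop_size)]
    betas = [3.0 * k / n_steps for k in range(n_steps + 1)]
    best = min(energy(s) for s in pop)
    for b_old, b_new in zip(betas, betas[1:]):
        dbeta = b_new - b_old
        weights = [math.exp(-dbeta * energy(s)) for s in pop]
        pop = [list(rng.choices(pop, weights=weights)[0]) for _ in range(pop_size)]
        pop = [metropolis_sweep(s, b_new, rng) for s in pop]
        best = min(best, min(energy(s) for s in pop))
    return best

gs = population_annealing()
print(gs)  # the true ground-state energy of an open 20-spin chain is -(20 - 1) = -19
```

    The reweight-resample step is what distinguishes population annealing from running many independent simulated-annealing chains: low-energy replicas are duplicated and high-energy ones culled at every temperature step.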

  7. Information Geometry and Sequential Monte Carlo

    CERN Document Server

    Sim, Aaron; Stumpf, Michael P H

    2012-01-01

    This paper explores the application of methods from information geometry to the sequential Monte Carlo (SMC) sampler. In particular the Riemannian manifold Metropolis-adjusted Langevin algorithm (mMALA) is adapted for the transition kernels in SMC. Similar to its function in Markov chain Monte Carlo methods, the mMALA is a fully adaptable kernel which allows for efficient sampling of high-dimensional and highly correlated parameter spaces. We set up the theoretical framework for its use in SMC with a focus on the application to the problem of sequential Bayesian inference for dynamical systems as modelled by sets of ordinary differential equations. In addition, we argue that defining the sequence of distributions on geodesics optimises the effective sample sizes in the SMC run. We illustrate the application of the methodology by inferring the parameters of simulated Lotka-Volterra and Fitzhugh-Nagumo models. In particular we demonstrate that compared to employing a standard adaptive random walk kernel, the SM...

  8. Calibration of line structured light vision system based on camera's projective center

    Institute of Scientific and Technical Information of China (English)

    ZHU Ji-gui; LI Yan-jun; YE Sheng-hua

    2005-01-01

    Based on the characteristics of the line structured light sensor, a fast calibration method was established. With a coplanar reference target, the spatial pose between the camera and the optical plane can be calibrated by using the camera's projective center and the light stripe information on the camera's image plane. The calibration can be implemented without restricting the movement of the coplanar reference target and without auxiliary adjustment equipment. This method has reduced the cost of calibration equipment, simplified the calibration procedure and improved calibration efficiency. Experiments show that the sensor can attain a relative accuracy of about 0.5%, which indicates the rationality and effectiveness of this method.

  9. The Calibration Reference Data System

    Science.gov (United States)

    Greenfield, P.; Miller, T.

    2016-07-01

    We describe a software architecture and implementation for using rules to determine which calibration files are appropriate for calibrating a given observation. This new system, the Calibration Reference Data System (CRDS), replaces what had been previously used for the Hubble Space Telescope (HST) calibration pipelines, the Calibration Database System (CDBS). CRDS will be used for the James Webb Space Telescope (JWST) calibration pipelines, and is currently being used for HST calibration pipelines. CRDS can be easily generalized for use in similar applications that need a rules-based system for selecting the appropriate item for a given dataset; we give some examples of such generalizations that will likely be used for JWST. The core functionality of the Calibration Reference Data System is available under an Open Source license. CRDS is briefly contrasted with a sampling of other similar systems used at other observatories.
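
    The core idea of such a rules-based selection system can be sketched as a small matcher that picks the most specific applicable rule for a dataset; the rules table and metadata keys below are hypothetical illustrations, not the actual CRDS rmap format:

```python
# Hypothetical rules table: each rule maps metadata constraints to a reference file,
# ordered from most specific to least specific (a fallback with no constraints last).
RULES = [
    ({"instrument": "CAM", "filter": "F100"}, "flat_cam_f100.fits"),
    ({"instrument": "CAM"}, "flat_cam_default.fits"),
    ({}, "flat_generic.fits"),
]

def select_reference(metadata):
    """Return the file of the first (most specific) rule whose constraints
    all match the observation's metadata."""
    for constraints, ref_file in RULES:
        if all(metadata.get(k) == v for k, v in constraints.items()):
            return ref_file
    raise LookupError("no applicable rule")

print(select_reference({"instrument": "CAM", "filter": "F100"}))  # flat_cam_f100.fits
print(select_reference({"instrument": "SPEC"}))                   # flat_generic.fits
```

    Keeping the rules as data rather than code is what makes such a system easy to audit and to generalize to other missions, as the abstract notes for JWST.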

  10. Carlos Mayo and Argentine historiography

    Directory of Open Access Journals (Sweden)

    Sara E. Mata

    2012-11-01

    Full Text Available The work of Carlos Mayo is distinguished by its originality and academic excellence. Our goal has been to briefly address his valuable contributions to Argentine historiography, particularly those relating to the agricultural history of the Río de la Plata.

  11. Monte Carlo Particle Lists: MCPL

    CERN Document Server

    Kittelmann, Thomas; Knudsen, Erik B; Willendrup, Peter; Cai, Xiao Xiao; Kanaki, Kalliopi

    2016-01-01

    A binary format with lists of particle state information, for interchanging particles between various Monte Carlo simulation applications, is presented. Portable C code for file manipulation is made available to the scientific community, along with converters and plugins for several popular simulation packages.
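
    The general idea of a fixed-size binary record list of particle states can be sketched as follows; the record layout below is a hypothetical illustration and is not the actual MCPL byte format:

```python
import io
import struct

# Hypothetical minimal record layout (NOT the actual MCPL byte format):
# pdg code (int32), position x, y, z (float64), kinetic energy (float64), weight (float64).
RECORD = struct.Struct("<i5d")

def write_particles(stream, particles):
    """Write (pdg, x, y, z, ekin, weight) tuples as fixed-size binary records."""
    for p in particles:
        stream.write(RECORD.pack(*p))

def read_particles(stream):
    """Read fixed-size binary records back into tuples."""
    out = []
    while True:
        chunk = stream.read(RECORD.size)
        if len(chunk) < RECORD.size:
            break
        out.append(RECORD.unpack(chunk))
    return out

buf = io.BytesIO()
write_particles(buf, [(2112, 0.0, 0.0, 1.0, 2.5e6, 1.0)])  # one neutron (PDG 2112)
buf.seek(0)
records = read_particles(buf)
print(records)
```

    A fixed record size is what allows random access and cheap concatenation of files from different simulation runs, which is the interchange use case the abstract describes.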

  12. Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo

    Science.gov (United States)

    Herckenrath, Daan; Langevin, Christian D.; Doherty, John

    2011-01-01

    Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction uncertainty was tested for a synthetic saltwater intrusion model patterned after the Henry problem. Saltwater intrusion caused by a reduction in fresh groundwater discharge was simulated for 1000 randomly generated hydraulic conductivity distributions, representing a mildly heterogeneous aquifer. From these 1000 simulations, the hydraulic conductivity distribution giving rise to the most extreme case of saltwater intrusion was selected and was assumed to represent the "true" system. Head and salinity values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. The NSMC method was used to calculate 1000 calibration-constrained parameter fields. If the dimensionality of the solution space was set appropriately, the estimated uncertainty range from the NSMC analysis encompassed the truth. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. Reducing the dimensionality of the null-space for the processing of the random parameter sets did not result in any significant gains in efficiency and compromised the ability of the NSMC method to encompass the true prediction value. The addition of intrapilot point heterogeneity to the NSMC process was also tested. According to a variogram comparison, this provided the same scale of heterogeneity that was used to generate the truth. However, incorporation of intrapilot point variability did not make a noticeable difference to the uncertainty of the prediction. With this higher level of heterogeneity, however, the computational burden of

  13. Lidar calibration experiments

    DEFF Research Database (Denmark)

    Ejsing Jørgensen, Hans; Mikkelsen, T.; Streicher, J.

    1997-01-01

    A series of atmospheric aerosol diffusion experiments combined with lidar detection was conducted to evaluate and calibrate an existing retrieval algorithm for aerosol backscatter lidar systems. The calibration experiments made use of two (almost) identical mini-lidar systems for aerosol cloud...... detection to test the reproducibility and uncertainty of lidars. Lidar data were obtained from both single-ended and double-ended Lidar configurations. A backstop was introduced in one of the experiments and a new method was developed where information obtained from the backstop can be used in the inversion...... algorithm. Independent in-situ aerosol plume concentrations were obtained from a simultaneous tracer gas experiment with SF6, and comparisons with the two lidars were made. The study shows that the reproducibility of the lidars is within 15%, including measurements from both sides of a plume...

  14. Optical tweezers absolute calibration

    CERN Document Server

    Dutra, R S; Neto, P A Maia; Nussenzveig, H M

    2014-01-01

    Optical tweezers are highly versatile laser traps for neutral microparticles, with fundamental applications in physics and in single molecule cell biology. Force measurements are performed by converting the stiffness response to displacement of trapped transparent microspheres, employed as force transducers. Usually, calibration is indirect, by comparison with fluid drag forces. This can lead to discrepancies by sizable factors. Progress achieved in a program aiming at absolute calibration, conducted over the past fifteen years, is briefly reviewed. Here we overcome its last major obstacle, a theoretical overestimation of the peak stiffness, within the most employed range for applications, and we perform experimental validation. The discrepancy is traced to the effect of primary aberrations of the optical system, which are now included in the theory. All required experimental parameters are readily accessible. Astigmatism, the dominant effect, is measured by analyzing reflected images of the focused laser spo...

  15. Calibration of the EDGES Receiver to Observe the Global 21-cm Signature from the Epoch of Reionization

    CERN Document Server

    Monsalve, Raul A; Bowman, Judd D; Mozdzen, Thomas J

    2016-01-01

    The EDGES experiment strives to detect the sky-averaged brightness temperature from the $21$-cm line emitted during the Epoch of Reionization (EoR) in the redshift range $14 \\gtrsim z \\gtrsim 6$. To probe this signal, EDGES conducts single-antenna measurements in the frequency range $\\sim 100-200$ MHz from the Murchison Radio-astronomy Observatory in Western Australia. In this paper we describe the current strategy for calibration of the EDGES instrument and, in particular, of its receiver. The calibration involves measuring accurately modeled passive and active noise sources connected to the receiver input in place of the antenna. We model relevant uncertainties that arise during receiver calibration and propagate them to the calibrated antenna temperature using a Monte Carlo approach. Calibration effects are isolated by assuming that the sky foregrounds and the antenna beam are perfectly known. We find that if five polynomial terms are used to account for calibration systematics, most of the calibration ...
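
    Monte Carlo propagation of receiver-calibration uncertainties of the kind described above can be sketched generically; the linear calibration relation and the uncertainty magnitudes below are illustrative assumptions, not the EDGES receiver model:

```python
import random

def propagate(n=100_000, seed=0):
    """Monte Carlo propagation of calibration uncertainties.

    Illustrative model: T_ant = G * (P - P0), where the gain G and offset P0
    carry Gaussian calibration uncertainties and P is the measured quantity."""
    rng = random.Random(seed)
    P = 120.0                      # measured quantity (arbitrary units)
    samples = []
    for _ in range(n):
        G = rng.gauss(1.00, 0.01)  # gain: 1% calibration uncertainty (assumed)
        P0 = rng.gauss(20.0, 0.5)  # offset uncertainty (assumed)
        samples.append(G * (P - P0))
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, var ** 0.5

mean, sigma = propagate()
print(mean, sigma)  # expect ~100 with sigma ~ sqrt((0.01 * 100)**2 + 0.5**2)
```

    Sampling the calibration parameters rather than linearizing them captures any non-Gaussian spread in the calibrated quantity, which is the advantage of the Monte Carlo approach mentioned in the abstract.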

  16. Astrid-2 SSC ASU Magnetic Calibration

    DEFF Research Database (Denmark)

    Primdahl, Fritz

    1997-01-01

    Report of the intercalibration between the star camera and the fluxgate magnetometer onboard the ASTRID-2 satellite. This calibration was performed in the night between 15 and 16 May 1997 at the Lovö magnetic observatory.

  17. Calibration Facilities for NIF

    Energy Technology Data Exchange (ETDEWEB)

    Perry, T.S.

    2000-06-15

    The calibration facilities will be dynamic and will change to meet the needs of experiments. Small sources, such as the Manson Source should be available to everyone at any time. Carrying out experiments at Omega is providing ample opportunity for practice in pre-shot preparation. Hopefully, the needs that are demonstrated in these experiments will assure the development of (or keep in service) facilities at each of the laboratories that will be essential for in-house preparation for experiments at NIF.

  18. Mesoscale hybrid calibration artifact

    Science.gov (United States)

    Tran, Hy D.; Claudet, Andre A.; Oliver, Andrew D.

    2010-09-07

    A mesoscale calibration artifact, also called a hybrid artifact, suitable for hybrid dimensional measurement, and the method for making the artifact. The hybrid artifact has structural characteristics that make it suitable for dimensional measurement in both vision-based systems and touch-probe-based systems. The hybrid artifact employs the intersection of bulk-micromachined planes to fabricate edges that are sharp to the nanometer level and intersecting planes with crystal-lattice-defined angles.

  19. CALET On-orbit Calibration and Performance

    Science.gov (United States)

    Akaike, Yosui; Calet Collaboration

    2017-01-01

    The CALorimetric Electron Telescope (CALET) was installed on the International Space Station (ISS) in August 2015, and has been accumulating high-statistics data to perform high-precision measurements of cosmic ray electrons, nuclei and gamma-rays. CALET has an imaging and a fully active calorimeter, with a total thickness of 30 radiation lengths and 1.3 proton interaction lengths, that allow measurements well into the TeV energy region with excellent energy resolution, 2% for electrons above 100 GeV, and powerful particle identification. CALET's performance has been confirmed by Monte Carlo simulations and beam tests. In order to maximize the detector performance and keep the high resolution over long observation on the ISS, precise calibration of each detector component is required. We have therefore evaluated the detector response and monitored it by using penetrating cosmic ray events such as protons and helium nuclei. In this paper, we will present the on-orbit calibration and detector performance of CALET on the ISS. This research was supported by JSPS postdoctoral fellowships for research abroad.

  20. Applications of Monte Carlo Methods in Calculus.

    Science.gov (United States)

    Gordon, Sheldon P.; Gordon, Florence S.

    1990-01-01

    Discusses the application of probabilistic ideas, especially Monte Carlo simulation, to calculus. Describes some applications using the Monte Carlo method: Riemann sums; maximizing and minimizing a function; mean value theorems; and testing conjectures. (YP)
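
    The Riemann-sum application mentioned above reduces to averaging the integrand at uniformly random points; a minimal sketch:

```python
import math
import random

def mc_integral(f, a, b, n, seed=0):
    """Monte Carlo analogue of a Riemann sum: (b - a) times the average of f
    at n uniformly random sample points in [a, b]."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

# In the spirit of the article: integrate sin on [0, pi] (exact value: 2).
approx = mc_integral(math.sin, 0.0, math.pi, 100_000)
print(approx)
```

    The statistical error shrinks like 1/sqrt(n), which makes the method a good classroom contrast with deterministic Riemann sums, whose error shrinks with the mesh width.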

  1. An analysis of dependency of counting efficiency on worker anatomy for in vivo measurements: whole-body counting

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Binquan; Mille, Matthew; Xu, X George [Nuclear Engineering and Engineering Physics, Rensselaer Polytechnic Institute, Troy, NY 12180 (United States)], E-mail: xug2@rpi.edu

    2008-07-07

    In vivo radiobioassay is integral to many health physics and radiological protection programs dealing with internal exposures. The Bottle Manikin Absorber (BOMAB) physical phantom has been widely used for whole-body counting calibrations. However, the shape of BOMAB phantoms, a collection of plastic cylindrical shells containing no bones or internal organs, does not represent realistic human anatomy. Furthermore, workers who come in contact with radioactive materials have rather different body shapes and sizes. To date, there is a lack of understanding of how the counting efficiency changes when the calibrated counter is applied to a worker with complicated internal organs or tissues. This paper presents a study of various in vivo counting efficiencies obtained from Monte Carlo simulations of two BOMAB phantoms and three tomographic image-based models (VIP-Man, NORMAN and CNMAN) for a scenario involving homogeneous whole-body radioactivity contamination. The results reveal that a phantom's counting efficiency is strongly dependent on its shape and size. Contrary to what was expected, only small differences in efficiency were observed when the density and material composition of all internal organs and tissues of the tomographic phantoms were changed to water. The results of this study indicate that BOMAB phantoms with appropriately adjusted size and shape can be sufficient for whole-body counting calibrations when the internal contamination is homogeneous.

  2. Calibration of Underwater Sound Transducers

    Directory of Open Access Journals (Sweden)

    H.R.S. Sastry

    1983-07-01

    Full Text Available The techniques of calibration of underwater sound transducers for far-field, near-field and closed environment conditions are reviewed in this paper. The design of the acoustic calibration tank is mentioned. The facilities available at the Naval Physical & Oceanographic Laboratory, Cochin, for calibration of transducers are also listed.

  3. Measurement of the efficiency of an albedo neutron dosimeter and of the calibration room floor albedo; Medida de la eficiencia de un dosimetro de neutrones y del Albedo del suelo de la sala de calibracion

    Energy Technology Data Exchange (ETDEWEB)

    Blazquez, J.

    2010-07-01

    In this work, the source-image method was applied to measure the detection efficiency of a Studsvik 2202D dosimeter, using an Am-Be neutron source of known intensity. The conditions for the correction were verified experimentally, and the albedo of the room floor was estimated. Since the correction is independent of the dosimeter, the result provides a measure of the efficiency of other types of neutron counters.

  4. Internet-based calibration of a multifunction calibrator

    Energy Technology Data Exchange (ETDEWEB)

    BUNTING BACA,LISA A.; DUDA JR.,LEONARD E.; WALKER,RUSSELL M.; OLDHAM,NILE; PARKER,MARK

    2000-04-17

    A new way of providing calibration services is evolving which employs the Internet to expand present capabilities and make the calibration process more interactive. Sandia National Laboratories and the National Institute of Standards and Technology are collaborating to set up and demonstrate a remote calibration of multifunction calibrators using this Internet-based technique that is becoming known as e-calibration. This paper describes the measurement philosophy and the Internet resources that can provide real-time audio/video/data exchange, consultation and training, as well as web-accessible test procedures, software and calibration reports. The communication system utilizes commercial hardware and software that should be easy to integrate into most calibration laboratories.

  5. San Carlos Apache Tribe - Energy Organizational Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Rapp, James; Albert, Steve

    2012-04-01

    The San Carlos Apache Tribe (SCAT) was awarded $164,000 in late 2011 by the U.S. Department of Energy (U.S. DOE) Tribal Energy Program's "First Steps Toward Developing Renewable Energy and Energy Efficiency on Tribal Lands" Grant Program. This grant funded: the analysis and selection of preferred form(s) of tribal energy organization (this Energy Organization Analysis, hereinafter referred to as "EOA"); start-up staffing and other costs associated with the Phase 1 SCAT energy organization; an intern program; staff training; and tribal outreach and workshops regarding the new organization and SCAT energy programs and projects, including two annual tribal energy summits (2011 and 2012). This report documents the analysis and selection of preferred form(s) of a tribal energy organization.

  6. Monte Carlo simulation of radiation heat transfer in arrays of fixed discrete surfaces using cell-to-cell photon transport

    Energy Technology Data Exchange (ETDEWEB)

    Drost, M.K. [Pacific Northwest Lab., Richland, WA (United States); Welty, J.R. [Oregon State Univ., Corvallis, OR (United States)

    1992-08-01

    Radiation heat transfer in an array of fixed discrete surfaces is an important problem that is particularly difficult to analyze because of the nonhomogeneous and anisotropic optical properties involved. This article presents an efficient Monte Carlo method for evaluating radiation heat transfer in arrays of fixed discrete surfaces. This Monte Carlo model has been optimized to take advantage of the regular arrangement of surfaces often encountered in these arrays. Monte Carlo model predictions have been compared with analytical and experimental results.
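
    A standard building block of surface-to-surface Monte Carlo radiation codes is view-factor estimation by cosine-weighted ray shooting. The sketch below treats a differential element facing a coaxial disk, a case with the simple closed form F = r^2/(r^2 + h^2); it is a generic illustration, not the authors' cell-to-cell photon transport model:

```python
import math
import random

def view_factor_disk(r, h, n=200_000, seed=0):
    """Monte Carlo view factor from a differential surface element to a
    coaxial disk of radius r at height h, by cosine-weighted ray shooting.

    Exact result for comparison: F = r^2 / (r^2 + h^2)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        u = rng.random()
        cos_t = math.sqrt(1.0 - u)          # cosine-weighted polar angle
        sin_t = math.sqrt(u)
        # The ray from the element crosses the plane z = h at radius h * tan(theta);
        # by symmetry the azimuth does not matter for a coaxial disk.
        if h * sin_t / cos_t <= r:
            hits += 1
    return hits / n

f = view_factor_disk(r=1.0, h=1.0)
print(f)  # exact value for r = h is 0.5
```

    The same hit-counting idea extends to arrays of discrete surfaces; the optimization described in the abstract amounts to exploiting the regular geometry so that the hit test stays cheap.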

  7. Quantitative modeling of Cerenkov light production efficiency from medical radionuclides.

    Directory of Open Access Journals (Sweden)

    Bradley J Beattie

    Full Text Available There has been recent and growing interest in applying Cerenkov radiation (CR) for biological applications. Knowledge of the production efficiency and other characteristics of the CR produced by various radionuclides would help in assessing the feasibility of proposed applications and guide the choice of radionuclides. To generate this information we developed models of CR production efficiency based on the Frank-Tamm equation, and models of CR distribution based on Monte Carlo simulations of photon and β particle transport. All models were validated against direct measurements using multiple radionuclides and then applied to a number of radionuclides commonly used in biomedical applications. We show that two radionuclides, Ac-225 and In-111, which have been reported to produce CR in water, do not in fact produce CR directly. We also propose a simple means of using this information to calibrate high-sensitivity luminescence imaging systems, and show evidence suggesting that this calibration may be more accurate than methods in routine current use.
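
    The Frank-Tamm production model referred to above can be evaluated directly. The sketch below computes the Cerenkov photon yield per centimetre of electron path over a fixed wavelength band, assuming a constant refractive index (no dispersion); the band limits are illustrative:

```python
import math

ALPHA = 1.0 / 137.036        # fine-structure constant
ME_KEV = 511.0               # electron rest energy [keV]

def beta_from_kinetic(t_kev):
    """Electron speed (v/c) from kinetic energy in keV."""
    gamma = 1.0 + t_kev / ME_KEV
    return math.sqrt(1.0 - 1.0 / gamma ** 2)

def cerenkov_yield(t_kev, n_index, lam1_nm=400.0, lam2_nm=700.0):
    """Frank-Tamm photon yield per cm of path in the band [lam1, lam2]:

    dN/dx = 2 * pi * alpha * (1/lam1 - 1/lam2) * (1 - 1/(beta^2 n^2)),
    zero below the Cerenkov threshold beta * n > 1."""
    beta = beta_from_kinetic(t_kev)
    if beta * n_index <= 1.0:
        return 0.0
    band = 1.0 / (lam1_nm * 1e-7) - 1.0 / (lam2_nm * 1e-7)   # per cm
    return 2.0 * math.pi * ALPHA * band * (1.0 - 1.0 / (beta * n_index) ** 2)

# A 1 MeV electron in water (n = 1.33) is above threshold; a 200 keV one is not.
print(cerenkov_yield(1000.0, 1.33), cerenkov_yield(200.0, 1.33))
```

    The hard threshold at beta * n = 1 is what explains why low-energy β emitters produce no direct CR, the point the abstract makes for Ac-225 and In-111.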

  8. A simple new way to help speed up Monte Carlo convergence rates: Energy-scaled displacement Monte Carlo

    Science.gov (United States)

    Goldman, Saul

    1983-10-01

    A method we call energy-scaled displacement Monte Carlo (ESDMC) whose purpose is to improve sampling efficiency and thereby speed up convergence rates in Monte Carlo calculations is presented. The method involves scaling the maximum displacement a particle may make on a trial move to the particle's configurational energy. The scaling is such that, on average, the most stable particles make the smallest moves and the most energetic particles the largest moves. The method is compared to Metropolis Monte Carlo (MMC) and Force Bias Monte Carlo (FBMC) by applying all three methods to a dense Lennard-Jones fluid at two temperatures, and to hot ST2 water. The functions monitored as the Markov chains developed were, for the Lennard-Jones case: melting, radial distribution functions, internal energies, and heat capacities. For hot ST2 water, we monitored energies and heat capacities. The results suggest that ESDMC samples configuration space more efficiently than either MMC or FBMC in these systems for the biasing parameters used here. The benefit from using ESDMC seemed greatest for the Lennard-Jones systems.
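The scaling idea can be sketched for a single particle in 1D. Because the proposal width depends on the current energy, the proposal is asymmetric; the sketch below restores detailed balance with an explicit Metropolis-Hastings factor, which differs in detail from the original ESDMC prescription. All names and parameters are illustrative:

```python
import random, math

def esdmc_1d(u, beta=1.0, n_steps=200_000, seed=2,
             d_min=0.1, d_max=1.5, e_scale=1.0):
    """Energy-scaled displacement MC sketch for one particle in 1D.
    The maximum trial displacement grows with the particle's current
    configurational energy (stable particles move little, energetic
    ones move far); a Hastings factor corrects the asymmetry."""
    rng = random.Random(seed)

    def delta(e):
        # larger energy -> larger allowed move, clamped to [d_min, d_max]
        return min(d_max, d_min + e_scale * max(e, 0.0))

    x, e = 0.0, u(0.0)
    samples = []
    for _ in range(n_steps):
        d_fwd = delta(e)
        xp = x + rng.uniform(-d_fwd, d_fwd)
        ep = u(xp)
        d_rev = delta(ep)
        # the reverse move must be able to reach x from xp at all
        if abs(xp - x) <= d_rev:
            log_acc = -beta * (ep - e) + math.log(d_fwd / d_rev)
            if rng.random() < math.exp(min(0.0, log_acc)):
                x, e = xp, ep
        samples.append(x)
    return samples
```

With u(x) = x²/2 and β = 1 the chain samples a standard normal, which makes the sketch easy to check.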

  9. Calibration of a slim-hole density sonde using MCNPX

    Science.gov (United States)

    Won, Byeongho; Hwang, Seho; Shin, Jehyun; Kim, Jongman

    2014-05-01

    The density log is a well logging tool that continuously records the bulk density of the formation. It is widely applied in fields such as petroleum exploitation, mineral exploration, and geotechnical surveying. The density log is normally run in open holes, but difficult conditions are frequently encountered, such as cased boreholes, variations of borehole diameter, borehole fluid salinity, and tool stand-off. Density correction curves for these various borehole conditions are therefore needed. The primary calibration curve supplied by the manufacturer is used for the formation density calculation. Density logs used in the oil industry apply calibration curves for various borehole environments to the density correction, but commonly used slim-hole density logging sondes normally have only a calibration curve for the variation of borehole diameter. In order to correct for the various borehole environmental conditions, it is necessary to reproduce the primary calibration curve of the density sonde using numerical modeling. Numerical modeling serves as a low-cost substitute for experimental test pits. We performed numerical modeling using MCNP, a Monte Carlo code that records the average behavior of radiation particles. In this study, the primary calibration curve of the FDGS (Formation Density Gamma Sonde) for slim boreholes with a 100 mCi 137Cs gamma source was matched. On the basis of this work, correction curves for various borehole environments were produced.

  10. The calibration of DD neutron indium activation diagnostic for Shenguang-III facility

    CERN Document Server

    Song, Zi-Feng; Liu, Zhong-Jie; Zhan, Xia-Yu; Tang, Qi

    2014-01-01

    The indium activation diagnostic was calibrated on an accelerator neutron source in order to diagnose deuterium-deuterium (DD) neutron yields of implosion experiments on the Shenguang-III facility. The scattered neutron background of the accelerator room was measured by placing a polypropylene shield in front of the indium sample, in order to correct the calibration factor of this activation diagnostic. The proper size of this shield was determined by Monte Carlo simulation. The effect of other activated nuclei on the calibration was assessed by checking whether the measured curve obeys exponential decay and by comparing the half-life of the activated sample. The calibration results showed that the linear range reached up to a 100 cps net count rate in the full-energy peak of interest, the scattered neutron background of the accelerator room was about 9% of the total neutrons, and possible interfering activities in the sample were negligible. Subtracting the portion induced by neutron background, the calibrated factor of ...
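The exponential-decay check described above amounts to a log-linear fit of the count-rate curve and a comparison of the fitted half-life against the expected activation product. The sketch below uses synthetic data with the ~54.3 min half-life of In-116m; it is an illustration, not the authors' analysis code:

```python
import math

def fitted_half_life(times_min, counts):
    """Least-squares fit of ln(counts) versus time; returns the fitted
    half-life in minutes. A decay curve consistent with one activated
    species should give a half-life matching the expected nuclide."""
    xs, ys = times_min, [math.log(c) for c in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.log(2) / -slope

# synthetic pure-exponential curve with an In-116m-like half-life
t = [0, 10, 20, 30, 40, 50, 60]
counts = [1000.0 * 0.5 ** (ti / 54.3) for ti in t]
```

A fitted half-life far from 54.3 min would flag interference from other activated nuclei.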

  11. Development and calibration of a real-time airborne radioactivity monitor using direct gamma-ray spectrometry with two scintillation detectors.

    Science.gov (United States)

    Casanovas, R; Morant, J J; Salvadó, M

    2014-07-01

    The implementation of in-situ gamma-ray spectrometry in an automatic real-time environmental radiation surveillance network can help to identify and characterize abnormal radioactivity increases quickly. For this reason, a Real-time Airborne Radioactivity Monitor using direct gamma-ray spectrometry with two scintillation detectors (RARM-D2) was developed. The two scintillation detectors in the RARM-D2 are strategically shielded with Pb to permit the separate measurement of the airborne isotopes with respect to the deposited isotopes. In this paper, we describe the main aspects of the development and calibration of the RARM-D2 when using NaI(Tl) or LaBr3(Ce) detectors. The calibration of the monitor was performed experimentally with the exception of the efficiency curve, which was set using Monte Carlo (MC) simulations with the EGS5 code system. Prior to setting the efficiency curve, the effect of the radioactive source term size on the efficiency calculations was studied for the gamma-rays from (137)Cs. Finally, to study the measurement capabilities of the RARM-D2, the minimum detectable activity concentrations for (131)I and (137)Cs were calculated for typical spectra at different integration times.
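A Currie-style minimum detectable activity can be sketched from the background counts in the peak region, the MC efficiency, the gamma emission probability, and the counting time. The function and its arguments are illustrative (the paper's own MDA procedure may differ), and the 0.851 emission probability assumed in the test is the 662 keV line of Cs-137:

```python
import math

def mda_currie(bkg_counts, eff, gamma_yield, live_time_s):
    """Currie-type minimum detectable activity (Bq): detection limit
    L_D = 2.71 + 4.65*sqrt(B) in counts, divided by efficiency, gamma
    emission probability, and counting time."""
    ld = 2.71 + 4.65 * math.sqrt(bkg_counts)  # detection limit, counts
    return ld / (eff * gamma_yield * live_time_s)
```

Dividing further by the sampled air volume would convert the result to an activity concentration (Bq/m³).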

  12. Mercury CEM Calibration

    Energy Technology Data Exchange (ETDEWEB)

    John Schabron; Joseph Rovani; Mark Sanderson

    2008-02-29

    Mercury continuous emissions monitoring systems (CEMS) are being implemented in over 800 coal-fired power plant stacks. The power industry desires to conduct at least a full year of monitoring before the formal monitoring and reporting requirement begins on January 1, 2009. It is important for the industry to have available reliable, turnkey equipment from CEM vendors. Western Research Institute (WRI) is working closely with the Electric Power Research Institute (EPRI), the National Institute of Standards and Technology (NIST), and the Environmental Protection Agency (EPA) to facilitate the development of the experimental criteria for a NIST traceability protocol for dynamic elemental mercury vapor generators. The generators are used to calibrate mercury CEMs at power plant sites. The Clean Air Mercury Rule (CAMR), which was published in the Federal Register on May 18, 2005, requires that calibration be performed with NIST-traceable standards (Federal Register 2007). Traceability procedures will be defined by EPA. An initial draft traceability protocol was issued by EPA in May 2007 for comment. In August 2007, EPA issued an interim traceability protocol for elemental mercury generators (EPA 2007). The protocol is based on the actual analysis of the output of each calibration unit at several concentration levels, ranging initially from about 2-40 µg/m³ elemental mercury and in the future down to 0.2 µg/m³, and this analysis will be directly traceable to analyses by NIST. The document is divided into two separate sections. The first deals with the qualification of generators by the vendors for use in mercury CEM calibration. The second describes the procedure that the vendors must use to certify the generator models that meet the qualification specifications. The NIST traceable certification is performance based, traceable to analysis using isotope dilution inductively coupled plasma/mass spectrometry performed by NIST in Gaithersburg, MD.

  13. SKIRT: the design of a suite of input models for Monte Carlo radiative transfer simulations

    CERN Document Server

    Baes, Maarten

    2015-01-01

    The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. In contrast, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can...
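Generating random positions from a 3D density can often be done exactly by composition. For an axisymmetric exponential disc, the enclosed-mass law in radius is proportional to R·exp(-R/h), which is a Gamma(2) distribution, so R is simply the sum of two exponential deviates. The sketch below is a generic illustration of this kind of generator, not SKIRT's actual API:

```python
import random, math

def sample_exponential_disc(h_r=1.0, h_z=0.2, n=100_000, seed=3):
    """Draw positions from rho ∝ exp(-R/h_r) * exp(-|z|/h_z) by exact
    composition sampling: R ~ Gamma(2, h_r), phi uniform, z a signed
    exponential deviate."""
    rng = random.Random(seed)
    pts = []
    for _ in range(n):
        # surface density ∝ R*exp(-R/h_r): sum of two exponentials
        r = h_r * (rng.expovariate(1.0) + rng.expovariate(1.0))
        phi = rng.uniform(0.0, 2.0 * math.pi)
        z = rng.expovariate(1.0) * h_z * (1 if rng.random() < 0.5 else -1)
        pts.append((r * math.cos(phi), r * math.sin(phi), z))
    return pts
```

Azimuthal symmetry makes the three coordinates independent, which is why composition sampling is exact here; the mean cylindrical radius is 2·h_r and the mean |z| is h_z.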

  14. Monte Carlo simulation as a method for verifying the characterization of sources in ophthalmic brachytherapy; Simulacion Monte Carlo como metodo de verificacion de la caracterizacion de fuentes en braquiterapia oftalmica

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz Lora, A.; Miras del Rio, H.; Terron Leon, J. A.

    2013-07-01

    Following the recommendations of the IAEA, and as a further check, Monte Carlo simulations have been performed for each of the plaques available at the hospital. The objective of this work is to verify the calibration certificates and to establish acceptance criteria. (Author)

  15. Parallel Monte Carlo simulation of aerosol dynamics

    KAUST Repository

    Zhou, K.

    2014-01-01

    A highly efficient Monte Carlo (MC) algorithm is developed for the numerical simulation of aerosol dynamics, that is, nucleation, surface growth, and coagulation. Nucleation and surface growth are handled with deterministic means, while coagulation is simulated with a stochastic method (Marcus-Lushnikov stochastic process). Operator splitting techniques are used to synthesize the deterministic and stochastic parts in the algorithm. The algorithm is parallelized using the Message Passing Interface (MPI). The parallel computing efficiency is investigated through numerical examples. Near 60% parallel efficiency is achieved for the maximum testing case with 3.7 million MC particles running on 93 parallel computing nodes. The algorithm is verified through simulating various testing cases and comparing the simulation results with available analytical and/or other numerical solutions. Generally, it is found that only a small number (hundreds or thousands) of MC particles is necessary to accurately predict the aerosol particle number density, volume fraction, and so forth, that is, low-order moments of the Particle Size Distribution (PSD) function. Accurately predicting the high-order moments of the PSD requires dramatically increasing the number of MC particles.

  16. Parallel Monte Carlo Simulation of Aerosol Dynamics

    Directory of Open Access Journals (Sweden)

    Kun Zhou

    2014-02-01

    Full Text Available A highly efficient Monte Carlo (MC) algorithm is developed for the numerical simulation of aerosol dynamics, that is, nucleation, surface growth, and coagulation. Nucleation and surface growth are handled with deterministic means, while coagulation is simulated with a stochastic method (Marcus-Lushnikov stochastic process). Operator splitting techniques are used to synthesize the deterministic and stochastic parts in the algorithm. The algorithm is parallelized using the Message Passing Interface (MPI). The parallel computing efficiency is investigated through numerical examples. Near 60% parallel efficiency is achieved for the maximum testing case with 3.7 million MC particles running on 93 parallel computing nodes. The algorithm is verified through simulating various testing cases and comparing the simulation results with available analytical and/or other numerical solutions. Generally, it is found that only a small number (hundreds or thousands) of MC particles is necessary to accurately predict the aerosol particle number density, volume fraction, and so forth, that is, low-order moments of the Particle Size Distribution (PSD) function. Accurately predicting the high-order moments of the PSD requires dramatically increasing the number of MC particles.
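The stochastic coagulation part can be illustrated with a constant-kernel Marcus-Lushnikov process. With kernel K and N particles in a unit volume, the total event rate is K·N·(N-1)/2, and the Smoluchowski mean-field prediction for the population-halving time is t½ = 2/(K·n₀). This is a toy sketch, not the paper's parallel algorithm:

```python
import random

def time_to_halve(n0=4000, k=1.0, seed=4):
    """Constant-kernel Marcus-Lushnikov coagulation in a unit volume:
    any pair of the N current particles merges at rate k, so waiting
    times between events are exponential with rate k*N*(N-1)/2.
    Returns the simulated time at which the population halves."""
    rng = random.Random(seed)
    n, t = n0, 0.0
    while n > n0 // 2:
        rate = k * n * (n - 1) / 2.0  # total pair-merge rate
        t += rng.expovariate(rate)    # exponential waiting time
        n -= 1                        # one coagulation event: two -> one
    return t
```

For the defaults (n₀ = 4000, K = 1) the prediction is t½ = 5·10⁻⁴, and the stochastic halving time agrees with it in expectation.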

  17. How good is a PCR efficiency estimate: Recommendations for precise and robust qPCR efficiency assessments

    Directory of Open Access Journals (Sweden)

    David Svec

    2015-03-01

    Full Text Available We have examined the imprecision in the estimation of PCR efficiency by means of standard curves, using a strategic experimental design with a large number of technical replicates. In particular, we examined how robust this estimation is with respect to commonly varying factors: the instrument used, the number of technical replicates performed, and the volume transferred throughout the dilution series. We used six different qPCR instruments, performed 1–16 qPCR replicates per concentration, and tested 2–10 μl of transferred analyte. We find that the estimated PCR efficiency varies significantly across different instruments. Using a Monte Carlo approach, we find the uncertainty in the PCR efficiency estimate may be as large as 42.5% (95% CI) if a standard curve with only one qPCR replicate is used in 16 different plates. Based on our investigation we propose recommendations for the precise estimation of PCR efficiency: (1) one robust standard curve with at least 3–4 qPCR replicates at each concentration should be generated, (2) the efficiency is instrument dependent, but reproducibly stable on one platform, and (3) using a larger volume when constructing serial dilution series reduces sampling error and enables calibration across a wider dynamic range.
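The efficiency being estimated comes from the slope of the standard curve via E = 10^(-1/slope) - 1, where E = 1.0 corresponds to perfect doubling per cycle. A minimal sketch (names invented, not the authors' code):

```python
import math

def pcr_efficiency(log10_qty, cq):
    """PCR efficiency from a qPCR standard curve: ordinary least-squares
    fit of Cq against log10(template quantity), then
    E = 10**(-1/slope) - 1."""
    n = len(cq)
    mx, my = sum(log10_qty) / n, sum(cq) / n
    sxx = sum((x - mx) ** 2 for x in log10_qty)
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_qty, cq))
    slope = sxy / sxx  # ideal doubling gives slope = -1/log10(2) ≈ -3.32
    return 10.0 ** (-1.0 / slope) - 1.0
```

Repeating this fit over resampled replicate sets (as in the paper's Monte Carlo approach) gives the spread of the efficiency estimate.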

  18. CALIBRATED HYDRODYNAMIC MODEL

    Directory of Open Access Journals (Sweden)

    Sezar Gülbaz

    2015-01-01

    Full Text Available The land development and increase in urbanization in a watershed affect water quantity and water quality. On one hand, urbanization provokes the adjustment of the geomorphic structure of the streams and ultimately raises the peak flow rate, which causes floods; on the other hand, it diminishes water quality, which results in an increase in Total Suspended Solids (TSS). Consequently, sediment accumulation downstream of urban areas is observed, which is not preferred for a longer life of dams. In order to overcome the sediment accumulation problem in dams, the amount of TSS in streams and in watersheds should be taken under control. Low Impact Development (LID) is a Best Management Practice (BMP) which may be used for this purpose. It is a land planning and engineering design method applied in managing storm water runoff in order to reduce flooding as well as simultaneously improve water quality. LID includes techniques to predict suspended solid loads in surface runoff generated over impervious urban surfaces. In this study, the impact of LID-BMPs on surface runoff and TSS is investigated by employing a calibrated hydrodynamic model for Sazlidere Watershed, which is located in Istanbul, Turkey. For this purpose, a calibrated hydrodynamic model was developed by using the Environmental Protection Agency Storm Water Management Model (EPA SWMM). For model calibration and validation, we set up a rain gauge and a flow meter in the field and obtained rainfall and flow rate data. We then selected several LID types, such as retention basins, vegetative swales and permeable pavement, and obtained their influence on peak flow rate and pollutant buildup and washoff for TSS. Consequently, we observed the possible effects of LID on surface runoff and TSS in Sazlidere Watershed.

  19. Subtle Monte Carlo Updates in Dense Molecular Systems

    DEFF Research Database (Denmark)

    Bottaro, Sandro; Boomsma, Wouter; Johansson, Kristoffer E.;

    2012-01-01

    Although Markov chain Monte Carlo (MC) simulation is a potentially powerful approach for exploring conformational space, it has been unable to compete with molecular dynamics (MD) in the analysis of high density structural states, such as the native state of globular proteins. Here, we introduce a kinetic algorithm, CRISP, that greatly enhances the sampling efficiency in all-atom MC simulations of dense systems. The algorithm is based on an exact analytical solution to the classic chain-closure problem, making it possible to express the interdependencies among degrees of freedom in the molecule as correlations in a multivariate Gaussian distribution. We demonstrate that our method reproduces structural variation in proteins with greater efficiency than current state-of-the-art Monte Carlo methods and has real-time simulation performance on par with molecular dynamics simulations.

  20. Dynamic Torque Calibration Unit

    Science.gov (United States)

    Agronin, Michael L.; Marchetto, Carl A.

    1989-01-01

    Proposed dynamic torque calibration unit (DTCU) measures torque in rotary actuator components such as motors, bearings, gear trains, and flex couplings. Unique because designed specifically for testing components under low rates. Measures torque in device under test during controlled steady rotation or oscillation. Rotor oriented vertically, supported by upper angular-contact bearing and lower radial-contact bearing that floats axially to prevent thermal expansion from loading bearings. High-load capacity air bearing available to replace ball bearings when higher load capacity or reduction in rate noise required.

  1. Adaptive Multilevel Monte Carlo Simulation

    KAUST Repository

    Hoel, H

    2011-08-23

    This work generalizes a multilevel forward Euler Monte Carlo method introduced in Michael B. Giles (Oper. Res. 56(3):607–617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. That work proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single-level forward Euler Monte Carlo method. This work introduces an adaptive hierarchy of non-uniform time discretizations, generated by an adaptive algorithm introduced in (Anna Dzougoutov et al. Adaptive Monte Carlo algorithms for stopped diffusion. In Multiscale Methods in Science and Engineering, volume 44 of Lect. Notes Comput. Sci. Eng., pages 59–88. Springer, Berlin, 2005; Kyoung-Sook Moon et al. Stoch. Anal. Appl. 23(3):511–558, 2005; Kyoung-Sook Moon et al. An adaptive algorithm for ordinary, stochastic and partial differential equations. In Recent Advances in Adaptive Computation, volume 383 of Contemp. Math., pages 325–343. Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates stochastic, path-dependent time steps and is based on a posteriori error expansions first developed in (Anders Szepessy et al. Comm. Pure Appl. Math. 54(10):1169–1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL): from O(TOL⁻³) with a single-level version of the adaptive algorithm to O((TOL⁻¹ log(TOL))²).
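A fixed-sample (non-adaptive) multilevel estimator for E[S_T] of geometric Brownian motion illustrates the telescoping idea the paper builds on; the adaptive, path-dependent time stepping of the paper is not reproduced, and all names and defaults are invented:

```python
import random, math

def mlmc_gbm_mean(s0=1.0, r=0.05, sigma=0.2, T=1.0,
                  levels=4, n_per_level=20_000, seed=5):
    """Multilevel Monte Carlo estimate of E[S_T] for GBM with Euler
    stepping: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], where level l
    uses 2**l steps and the coarse/fine pair shares Brownian noise."""
    rng = random.Random(seed)

    def euler_pair(n_fine):
        # fine path with n_fine steps, coarse with n_fine//2 steps,
        # both driven by the same Gaussian increments
        dt = T / n_fine
        s_f, s_c, dw_c = s0, s0, 0.0
        for i in range(n_fine):
            dw = rng.gauss(0.0, math.sqrt(dt))
            s_f += r * s_f * dt + sigma * s_f * dw
            dw_c += dw
            if i % 2 == 1:
                s_c += r * s_c * (2.0 * dt) + sigma * s_c * dw_c
                dw_c = 0.0
        return s_f, s_c

    est = 0.0
    for lev in range(levels + 1):
        acc = 0.0
        for _ in range(n_per_level):
            if lev == 0:
                # coarsest level: a single Euler step over [0, T]
                dw = rng.gauss(0.0, math.sqrt(T))
                acc += s0 + r * s0 * T + sigma * s0 * dw
            else:
                s_f, s_c = euler_pair(2 ** lev)
                acc += s_f - s_c  # coupled correction term
        est += acc / n_per_level
    return est
```

The shared increments make the correction terms low-variance, so few samples suffice on fine levels; the exact answer here is e^(rT) ≈ 1.051.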

  2. The LOFAR long baseline snapshot calibrator survey

    CERN Document Server

    Moldón, J; Wucknitz, O; Jackson, N; Drabent, A; Carozzi, T; Conway, J; Kapińska, A D; McKean, P; Morabito, L; Varenius, E; Zarka, P; Anderson, J; Asgekar, A; Avruch, I M; Bell, M E; Bentum, M J; Bernardi, G; Best, P; Bîrzan, L; Bregman, J; Breitling, F; Broderick, J W; Brüggen, M; Butcher, H R; Carbone, D; Ciardi, B; de Gasperin, F; de Geus, E; Duscha, S; Eislöffel, J; Engels, D; Falcke, H; Fallows, R A; Fender, R; Ferrari, C; Frieswijk, W; Garrett, M A; Grießmeier, J; Gunst, A W; Hamaker, J P; Hassall, T E; Heald, G; Hoeft, M; Juette, E; Karastergiou, A; Kondratiev, V I; Kramer, M; Kuniyoshi, M; Kuper, G; Maat, P; Mann, G; Markoff, S; McFadden, R; McKay-Bukowski, D; Morganti, R; Munk, H; Norden, M J; Offringa, A R; Orru, E; Paas, H; Pandey-Pommier, M; Pizzo, R; Polatidis, A G; Reich, W; Röttgering, H; Rowlinson, A; Scaife, A M M; Schwarz, D; Sluman, J; Smirnov, O; Stappers, B W; Steinmetz, M; Tagger, M; Tang, Y; Tasse, C; Thoudam, S; Toribio, M C; Vermeulen, R; Vocks, C; van Weeren, R J; White, S; Wise, M W; Yatawatta, S; Zensus, A

    2014-01-01

    Aims. An efficient means of locating calibrator sources for International LOFAR is developed and used to determine the average density of usable calibrator sources on the sky for subarcsecond observations at 140 MHz. Methods. We used the multi-beaming capability of LOFAR to conduct a fast and computationally inexpensive survey with the full International LOFAR array. Sources were pre-selected on the basis of 325 MHz arcminute-scale flux density using existing catalogues. By observing 30 different sources in each of the 12 sets of pointings per hour, we were able to inspect 630 sources in two hours to determine if they possess a sufficiently bright compact component to be usable as LOFAR delay calibrators. Results. Over 40% of the observed sources are detected on multiple baselines between international stations and 86 are classified as satisfactory calibrators. We show that a flat low-frequency spectrum (from 74 to 325 MHz) is the best predictor of compactness at 140 MHz. We extrapolate from our sample to sho...

  3. On-orbit instrument calibration of CALET

    Science.gov (United States)

    Javaid, Amir; Calet Collaboration

    2015-04-01

    The CALorimetric Electron Telescope (CALET) is a high-energy cosmic ray experiment which will be placed on the International Space Station in 2015. The primary goals of CALET are measurements of cosmic ray electron spectra from 1 GeV to 20 TeV, gamma rays from 10 GeV to 10 TeV, and protons and nuclei from 10 GeV up to 1000 TeV. The detector consists of three main components: a Charge Detector (CHD), an Imaging Calorimeter (IMC), and a Total Absorption Calorimeter (TASC). Because CALET will operate in the space environment of the ISS orbit, it needs to be calibrated while in orbit. Penetrating non-showering protons and helium nuclei are prime candidates for instrument calibration, as they provide a known energy signal for calibrating the detector response. In the present paper, we discuss estimation of CALET's detection efficiency for protons and helium nuclei. Included is a discussion of the different galactic cosmic ray and trapped proton models used for flux calculation and of the simulations performed for detector geometric area and trigger rate calculation. This paper also discusses the importance of the albedo proton flux for the CALET detector calibration. This research was supported by NASA at Louisiana State University under Grant Number NNX11AE01G.

  4. Monte Carlo simulation experiments on box-type radon dosimeter

    Energy Technology Data Exchange (ETDEWEB)

    Jamil, Khalid, E-mail: kjamil@comsats.edu.pk; Kamran, Muhammad; Illahi, Ahsan; Manzoor, Shahid

    2014-11-11

    Epidemiological studies show that inhalation of radon gas (²²²Rn) may be carcinogenic, especially to mine workers, people living in closed indoor energy-conserved environments, and underground dwellers. It is, therefore, of paramount importance to measure ²²²Rn concentrations (Bq/m³) in indoor environments. For this purpose, box-type passive radon dosimeters employing an ion track detector like CR-39 are widely used. The fraction of the radon alphas emitted in the volume of the box-type dosimeter that results in latent track formation on CR-39 is the latent track registration efficiency. The latent track registration efficiency is ultimately required to evaluate the radon concentration, which consequently determines the effective dose and the radiological hazards. In this research, Monte Carlo simulation experiments were carried out to study the alpha latent track registration efficiency for a box-type radon dosimeter as a function of the dosimeter's dimensions and the range of alpha particles in air. Two different self-developed Monte Carlo simulation techniques were employed, namely: (a) the surface ratio (SURA) method and (b) the ray hitting (RAHI) method. The Monte Carlo simulation experiments revealed that there are two types of efficiencies, i.e. intrinsic efficiency (ηint) and alpha hit efficiency (ηhit). The ηint depends only upon the dimensions of the dosimeter, while ηhit depends upon both the dimensions of the dosimeter and the range of the alpha particles. The total latent track registration efficiency is the product of both intrinsic and hit efficiencies. It has been concluded that if the diagonal length of the box-type dosimeter is kept smaller than the range of the alpha particle, then a hit efficiency of 100% is achieved. Nevertheless, the intrinsic efficiency keeps playing its role. The Monte Carlo simulation experimental results have been found helpful to understand the intricate track registration mechanisms in the box-type dosimeter. This paper
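A purely geometric hit-efficiency estimate of this kind can be sketched as follows. The toy model below (invented dimensions; not the paper's SURA or RAHI implementation) samples decays uniformly in the box, emits alphas isotropically, and counts those whose straight-line path reaches the bottom face within the alpha range in air:

```python
import random, math

def hit_efficiency(box=(5.0, 5.0, 5.0), alpha_range=4.0,
                   n=100_000, seed=6):
    """Fraction of isotropically emitted alphas, born uniformly in the
    box, that reach the bottom face (the CR-39 detector) within their
    range in air. Dimensions and range are in cm and illustrative."""
    rng = random.Random(seed)
    lx, ly, lz = box
    hits = 0
    for _ in range(n):
        x, y, z = rng.uniform(0, lx), rng.uniform(0, ly), rng.uniform(0, lz)
        cos_t = rng.uniform(-1.0, 1.0)        # isotropic direction
        phi = rng.uniform(0.0, 2.0 * math.pi)
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        dx, dy, dz = sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t
        if dz >= 0.0:
            continue                           # moving away from detector
        s = -z / dz                            # path length to z = 0 plane
        if s > alpha_range:
            continue                           # runs out of range in air
        xh, yh = x + s * dx, y + s * dy
        if 0.0 <= xh <= lx and 0.0 <= yh <= ly:
            hits += 1
    return hits / n
```

At most half of the emissions point toward the detector, so the result is bounded above by 0.5; shrinking the box diagonal below the alpha range drives the range cut toward irrelevance, as the abstract notes.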

  5. Monte Carlo simulation experiments on box-type radon dosimeter

    Science.gov (United States)

    Jamil, Khalid; Kamran, Muhammad; Illahi, Ahsan; Manzoor, Shahid

    2014-11-01

    Epidemiological studies show that inhalation of radon gas (222Rn) may be carcinogenic, especially to mine workers, people living in closed indoor energy-conserved environments, and underground dwellers. It is, therefore, of paramount importance to measure 222Rn concentrations (Bq/m3) in indoor environments. For this purpose, box-type passive radon dosimeters employing an ion track detector like CR-39 are widely used. The fraction of the radon alphas emitted in the volume of the box-type dosimeter that results in latent track formation on CR-39 is the latent track registration efficiency. The latent track registration efficiency is ultimately required to evaluate the radon concentration, which consequently determines the effective dose and the radiological hazards. In this research, Monte Carlo simulation experiments were carried out to study the alpha latent track registration efficiency for a box-type radon dosimeter as a function of the dosimeter's dimensions and the range of alpha particles in air. Two different self-developed Monte Carlo simulation techniques were employed, namely: (a) the surface ratio (SURA) method and (b) the ray hitting (RAHI) method. The Monte Carlo simulation experiments revealed that there are two types of efficiencies, i.e. intrinsic efficiency (ηint) and alpha hit efficiency (ηhit). The ηint depends only upon the dimensions of the dosimeter, while ηhit depends upon both the dimensions of the dosimeter and the range of the alpha particles. The total latent track registration efficiency is the product of both intrinsic and hit efficiencies. It has been concluded that if the diagonal length of the box-type dosimeter is kept smaller than the range of the alpha particle, then a hit efficiency of 100% is achieved. Nevertheless, the intrinsic efficiency keeps playing its role. The Monte Carlo simulation experimental results have been found helpful to understand the intricate track registration mechanisms in the box-type dosimeter. This paper explains how radon concentration from the

  6. A Simple Accelerometer Calibrator

    Science.gov (United States)

    Salam, R. A.; Islamy, M. R. F.; Munir, M. M.; Latief, H.; Irsyam, M.; Khairurrijal

    2016-08-01

    A high probability of earthquakes can lead to a high number of victims, and earthquakes can also cause other hazards such as tsunamis, landslides, etc. This calls for a system that can detect earthquake occurrence. One possible way to detect an earthquake is to create a vibration sensor system using an accelerometer. However, the output of such a system is usually in the form of acceleration data. Therefore, a calibrator system for the accelerometer that senses the vibration is needed. In this study, a simple accelerometer calibrator has been developed using a 12 V DC motor, an optocoupler, a Liquid Crystal Display (LCD) and an AVR 328 microcontroller as the controller system. The system uses Pulse Width Modulation (PWM) from the microcontroller to control the motor rotational speed in response to the vibration frequency. The frequency of vibration is read by the optocoupler, and these data are used as feedback to the system. The results show that the system can control the rotational speed and the vibration frequencies in accordance with the defined PWM.

  7. Efficient quadrature rules for illumination integrals from quasi Monte Carlo to Bayesian Monte Carlo

    CERN Document Server

    Marques, Ricardo; Santos, Luís Paulo; Bouatouch, Kadi

    2015-01-01

    Rendering photorealistic images is a costly process which can take up to several days in the case of high quality images. In most cases, the task of sampling the incident radiance function to evaluate the illumination integral is responsible for an important share of the computation time. Therefore, to reach acceptable rendering times, the illumination integral must be evaluated using a limited set of samples. Such a restriction raises the question of how to obtain the most accurate approximation possible with such a limited set of samples. One must thus ensure that sampling produces the highe

  8. Efficient Bayesian inference of subsurface flow models using nested sampling and sparse polynomial chaos surrogates

    KAUST Repository

    Elsheikh, Ahmed H.

    2014-02-01

    An efficient Bayesian calibration method based on the nested sampling (NS) algorithm and non-intrusive polynomial chaos method is presented. Nested sampling is a Bayesian sampling algorithm that builds a discrete representation of the posterior distributions by iteratively re-focusing a set of samples to high likelihood regions. NS allows representing the posterior probability density function (PDF) with a smaller number of samples and reduces the curse of dimensionality effects. The main difficulty of the NS algorithm is in the constrained sampling step which is commonly performed using a random walk Markov Chain Monte-Carlo (MCMC) algorithm. In this work, we perform a two-stage sampling using a polynomial chaos response surface to filter out rejected samples in the Markov Chain Monte-Carlo method. The combined use of nested sampling and the two-stage MCMC based on approximate response surfaces provides significant computational gains in terms of the number of simulation runs. The proposed algorithm is applied for calibration and model selection of subsurface flow models.
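The two-stage filtering of MCMC proposals can be illustrated with a delayed-acceptance Metropolis step: a cheap surrogate log-posterior screens proposals, and only survivors trigger the expensive model, with a second acceptance test that preserves the exact target. This is a generic Christen-Fox-style sketch, not the paper's nested-sampling code, and the Gaussian surrogate in the test is an invented stand-in for a polynomial chaos response surface:

```python
import random, math

def delayed_acceptance_mh(logp, logp_cheap, x0=0.0, step=1.0,
                          n=100_000, seed=7):
    """Two-stage (delayed-acceptance) Metropolis sampler in 1D.
    Stage 1 accepts with the surrogate ratio; stage 2 corrects with
    the expensive/surrogate ratio, so the chain targets exp(logp)."""
    rng = random.Random(seed)
    x, samples = x0, []
    lp, lpc = logp(x), logp_cheap(x)
    for _ in range(n):
        xp = x + rng.uniform(-step, step)
        lpc_p = logp_cheap(xp)
        # stage 1: screen with the cheap surrogate only
        if rng.random() < math.exp(min(0.0, lpc_p - lpc)):
            lp_p = logp(xp)  # expensive evaluation happens only here
            # stage 2: correct for the surrogate's error
            if rng.random() < math.exp(min(0.0, (lp_p - lp) - (lpc_p - lpc))):
                x, lp, lpc = xp, lp_p, lpc_p
        samples.append(x)
    return samples
```

The better the surrogate, the fewer expensive evaluations are wasted on proposals that would be rejected anyway, which is the source of the computational gains the abstract reports.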

  9. Measurement of top-quark pair production cross sections and calibration of the top-quark Monte-Carlo mass using LHC run I proton-proton collision data at √(s) = 7 and 8 TeV with the CMS experiment

    Energy Technology Data Exchange (ETDEWEB)

    Kieseler, Jan

    2015-12-15

    In this thesis, measurements of the production cross sections for top-quark pairs and the determination of the top-quark mass are presented. Dileptonic decays of top-quark pairs (tt̄) with two opposite-charge lepton (electron and muon) candidates in the final state are considered. The studied data samples were collected in proton-proton collisions at the CERN Large Hadron Collider with the CMS detector and correspond to integrated luminosities of 5.0 fb⁻¹ and 19.7 fb⁻¹ at center-of-mass energies of √(s) = 7 TeV and √(s) = 8 TeV, respectively. The cross sections, σ(tt̄), are measured in the fiducial detector volume (visible phase space), defined by the kinematics of the top-quark decay products, and are extrapolated to the full phase space. The visible cross sections are extracted in a simultaneous binned-likelihood fit to multi-differential distributions of final-state observables, categorized according to the multiplicity of jets associated with b quarks (b jets) and other jets in each event. The fit is performed with emphasis on a consistent treatment of correlations between systematic uncertainties and taking into account features of the tt̄ event topology. By comparison with predictions from the Standard Model at next-to-next-to-leading-order (NNLO) accuracy, the top-quark pole mass, m_t^pole, is extracted from the measured cross sections for different state-of-the-art PDF sets. Furthermore, the top-quark mass parameter used in Monte Carlo simulations, m_t^MC, is determined using the distribution of the invariant mass of a lepton candidate and the leading b jet in the event, m_lb. Being defined by the kinematics of the top-quark decay, this observable is unaffected by the description of the top-quark production mechanism. Events are selected from the data collected at √(s) = 8 TeV that contain at least two jets and one b jet in addition to the lepton candidate pair. A novel technique is

  10. Poster — Thur Eve — 14: Improving Tissue Segmentation for Monte Carlo Dose Calculation using DECT

    Energy Technology Data Exchange (ETDEWEB)

    Di Salvio, A.; Bedwani, S.; Carrier, J-F. [Centre hospitalier de l'Université de Montréal (Canada)]; Bouchard, H. [National Physical Laboratory, Teddington (United Kingdom)]

    2014-08-15

    Purpose: To improve Monte Carlo dose calculation accuracy through a new tissue segmentation technique with dual energy CT (DECT). Methods: Electron density (ED) and effective atomic number (EAN) can be extracted directly from DECT data with a stoichiometric calibration method. Images are acquired by simulating CT projections with the Monte Carlo user code egs-cbct and reconstructed using an FDK backprojection algorithm. Calibration is performed using projections of a numerical RMI phantom. A weighted-parameter algorithm then uses both EAN and ED to assign materials to voxels from the DECT simulated images. This new method is compared to standard tissue characterization from single energy CT (SECT) data, which uses a segmented calibrated Hounsfield unit (HU) to ED curve. Both methods are compared to the reference numerical head phantom. Monte Carlo simulations on uniform phantoms of different tissues using dosxyz-nrc show discrepancies in depth-dose distributions. Results: Both SECT and DECT segmentation methods show similar performance in assigning soft tissues. Performance is, however, improved with DECT in regions of higher density, such as bone, where it assigns materials correctly 8% more often than segmentation with SECT, considering the same set of tissues and simulated clinical CT images, i.e. including noise and reconstruction artifacts. Furthermore, Monte Carlo results indicate that kV photon beam depth-dose distributions can differ by a factor of two between two tissues of density higher than muscle. Conclusions: A direct acquisition of ED and the added information of EAN with DECT data improve tissue segmentation and increase the accuracy of Monte Carlo dose calculation in kV photon beams.
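
    The weighted-parameter material assignment described above can be sketched as a weighted nearest-neighbour search in (ED, EAN) space. The tissue table, reference values, and weight below are illustrative assumptions, not the values used in the study:

```python
# Hypothetical sketch of a weighted-parameter tissue assignment from DECT data.
# The tissue reference values and the weight are illustrative only.
TISSUES = {
    "adipose": (0.95, 6.3),   # (relative electron density, effective atomic number)
    "muscle":  (1.04, 7.6),
    "bone":    (1.70, 12.3),
}

def assign_tissue(ed, ean, weight=0.5):
    """Pick the tissue minimizing a weighted relative distance in (ED, EAN) space."""
    def dist(ref):
        ref_ed, ref_ean = ref
        return (weight * abs(ed - ref_ed) / ref_ed
                + (1 - weight) * abs(ean - ref_ean) / ref_ean)
    return min(TISSUES, key=lambda name: dist(TISSUES[name]))
```

    A voxel with ED near 1.7 and EAN near 12 would be labelled bone, while a voxel near (1.0, 7.5) falls to muscle; the weight balances how much each quantity contributes to the decision.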

  11. Evaluation of path-history-based fluorescence Monte Carlo method for photon migration in heterogeneous media.

    Science.gov (United States)

    Jiang, Xu; Deng, Yong; Luo, Zhaoyang; Wang, Kan; Lian, Lichao; Yang, Xiaoquan; Meglinski, Igor; Luo, Qingming

    2014-12-29

    The path-history-based fluorescence Monte Carlo method used for fluorescence tomography imaging reconstruction has attracted increasing attention. In this paper, we first validate the standard fluorescence Monte Carlo (sfMC) method using experiments with a cylindrical phantom. Then, we describe a path-history-based decoupled fluorescence Monte Carlo (dfMC) method, analyze different perturbation fluorescence Monte Carlo (pfMC) methods, and compare the calculation accuracy and computational efficiency of the dfMC and pfMC methods using the sfMC method as a reference. The results show that the dfMC method is more accurate and efficient than the pfMC method in heterogeneous media.

  12. Experimental and Monte Carlo evaluation of an ionization chamber in a {sup 60}Co beam

    Energy Technology Data Exchange (ETDEWEB)

    Perini, Ana P.; Neves, Lucio Pereira, E-mail: anapaula.perini@ufu.br [Universidade Federal de Uberlandia (INFIS/UFU), MG (Brazil). Instituto de Fisica; Santos, William S.; Caldas, Linda V.E. [Instituto de Pesquisas Energeticas e Nucleres (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    Recently a special parallel-plate ionization chamber was developed and characterized at the Instituto de Pesquisas Energeticas e Nucleares. The operational tests presented results within the recommended limits. In order to determine the influence of some components of the ionization chamber on its response, Monte Carlo simulations were carried out. The experimental and simulation results pointed out that the dosimeter evaluated in the present work has favorable properties to be applied to {sup 60}Co dosimetry at calibration laboratories. (author)

  13. Experimental and Monte Carlo evaluation of an ionization chamber in a 60Co beam

    Science.gov (United States)

    Perini, A. P.; Neves, L. P.; Santos, W. S.; Caldas, L. V. E.

    2016-07-01

    Recently a special parallel-plate ionization chamber was developed and characterized at the Instituto de Pesquisas Energeticas e Nucleares. The operational tests presented results within the recommended limits. In order to determine the influence of some components of the ionization chamber on its response, Monte Carlo simulations were carried out. The experimental and simulation results pointed out that the dosimeter evaluated in the present work has favorable properties to be applied to 60Co dosimetry at calibration laboratories.

  14. Cell-veto Monte Carlo algorithm for long-range systems

    Science.gov (United States)

    Kapfer, Sebastian C.; Krauth, Werner

    2016-09-01

    We present a rigorous, efficient event-chain Monte Carlo algorithm for long-range interacting particle systems. Using a cell-veto scheme within the factorized Metropolis algorithm, we compute each single-particle move with a fixed number of operations. For slowly decaying potentials such as Coulomb interactions, screening line charges allow us to take periodic boundary conditions into account. We discuss the performance of the cell-veto Monte Carlo algorithm for general inverse-power-law potentials and illustrate how it provides a new outlook on one of the prominent bottlenecks in large-scale atomistic Monte Carlo simulations.
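
    The per-factor accept/veto logic of the factorized Metropolis filter can be sketched as follows. This is a minimal illustration of the filter only; the cell bounds that make the full cell-veto algorithm O(1) per move are not reproduced here:

```python
import math
import random

def factorized_metropolis_accept(delta_energies, beta=1.0, rng=random.random):
    """Factorized Metropolis filter: a move passes only if every pairwise
    energy change passes its own independent Metropolis test.  In the
    cell-veto scheme, factors are bounded cell-by-cell so that only a
    handful ever need to be evaluated explicitly."""
    for dE in delta_energies:
        if dE <= 0.0:
            continue  # downhill factors always accept
        if rng() >= math.exp(-beta * dE):
            return False  # a single pairwise veto rejects the whole move
    return True
```

    Note that the overall acceptance probability, the product of per-factor probabilities, is smaller than the conventional Metropolis probability for the summed energy change; the gain is that each factor can be tested (or bounded) independently.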

  15. A separable shadow Hamiltonian hybrid Monte Carlo method.

    Science.gov (United States)

    Sweet, Christopher R; Hampton, Scott S; Skeel, Robert D; Izaguirre, Jesús A

    2009-11-07

    Hybrid Monte Carlo (HMC) is a rigorous sampling method that uses molecular dynamics (MD) as a global Monte Carlo move. The acceptance rate of HMC decays exponentially with system size. The shadow hybrid Monte Carlo (SHMC) was previously introduced to reduce this performance degradation by sampling instead from the shadow Hamiltonian defined for MD when using a symplectic integrator. SHMC's performance is limited by the need to generate momenta for the MD step from a nonseparable shadow Hamiltonian. We introduce the separable shadow Hamiltonian hybrid Monte Carlo (S2HMC) method based on a formulation of the leapfrog/Verlet integrator that corresponds to a separable shadow Hamiltonian, which allows efficient generation of momenta. S2HMC gives the acceptance rate of a fourth-order integrator at the cost of a second-order integrator. Through numerical experiments we show that S2HMC consistently gives a speedup greater than two over HMC for systems with more than 4000 atoms for the same variance. By comparison, SHMC gave a maximum speedup of only 1.6 over HMC. S2HMC has the additional advantage of not requiring any user parameters beyond those of HMC. S2HMC is available in the program PROTOMOL 2.1. A Python version, adequate for didactic purposes, is also in MDL (http://mdlab.sourceforge.net/s2hmc).
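
    For context, a minimal sketch of plain HMC with a leapfrog/Verlet trajectory — the baseline that SHMC and S2HMC refine — is given below. This is a toy one-dimensional implementation; the shadow-Hamiltonian machinery itself is not reproduced:

```python
import math
import random

def leapfrog(q, p, grad_u, eps, steps):
    """Leapfrog/Verlet integration of Hamiltonian dynamics for H = U(q) + p^2/2."""
    p -= 0.5 * eps * grad_u(q)            # initial half kick
    for i in range(steps):
        q += eps * p                       # drift
        p -= (eps if i < steps - 1 else 0.5 * eps) * grad_u(q)  # kick (half at end)
    return q, p

def hmc_step(q, u, grad_u, eps=0.1, steps=20, rng=random):
    """One HMC step: fresh Gaussian momentum, leapfrog trajectory,
    Metropolis accept/reject on the change in the Hamiltonian."""
    p = rng.gauss(0.0, 1.0)
    q_new, p_new = leapfrog(q, p, grad_u, eps, steps)
    dh = (u(q_new) + 0.5 * p_new ** 2) - (u(q) + 0.5 * p ** 2)
    return q_new if rng.random() < math.exp(min(0.0, -dh)) else q
```

    Run on a standard Gaussian target (U(q) = q²/2), the chain reproduces zero mean and unit variance; the point of S2HMC is that sampling the momenta consistently with a separable shadow Hamiltonian raises the acceptance rate without extra force evaluations.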

  16. On a full Monte Carlo approach to quantum mechanics

    Science.gov (United States)

    Sellier, J. M.; Dimov, I.

    2016-12-01

    The Monte Carlo approach to numerical problems has proven to be remarkably efficient in performing very large computational tasks since it is an embarrassingly parallel technique. Additionally, Monte Carlo methods are well known to keep their performance and accuracy with increasing dimensionality of a given problem, a rather counterintuitive peculiarity not shared by any known deterministic method. Motivated by these very peculiar and desirable computational features, in this work we depict a full Monte Carlo approach to the problem of simulating single- and many-body quantum systems by means of signed particles. In particular, we introduce a stochastic technique, based on the strategy known as importance sampling, for the computation of the Wigner kernel which, so far, has represented the main bottleneck of this method (it is equivalent to the calculation of a multi-dimensional integral, a problem whose complexity is known to grow exponentially with the dimensionality of the problem). The benefit of this stochastic technique for the kernel is twofold: firstly, it reduces the complexity of a quantum many-body simulation from non-linear to linear; secondly, it introduces an embarrassingly parallel approach to this very demanding problem. To conclude, we perform concise but indicative numerical experiments which clearly illustrate how a full Monte Carlo approach to many-body quantum systems is not only possible but also advantageous. This paves the way towards practical time-dependent, first-principle simulations of relatively large quantum systems by means of affordable computational resources.
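
    The importance-sampling idea can be illustrated on a generic multi-dimensional integral; the toy estimator below is an assumption for illustration, not the Wigner-kernel computation itself. One samples from a Gaussian proposal p and averages the weights f/p:

```python
import math
import random

def importance_estimate(d, n, sigma=1.0, seed=0):
    """Estimate I = integral of exp(-|x|^2) over R^d (exact value: pi^(d/2))
    by importance sampling from a Gaussian proposal N(0, sigma^2 I)."""
    rng = random.Random(seed)
    log_norm = d * math.log(sigma * math.sqrt(2.0 * math.pi))
    total = 0.0
    for _ in range(n):
        x = [rng.gauss(0.0, sigma) for _ in range(d)]
        r2 = sum(xi * xi for xi in x)
        log_p = -r2 / (2.0 * sigma * sigma) - log_norm  # proposal density, in log space
        total += math.exp(-r2 - log_p)                  # weight f(x) / p(x)
    return total / n
```

    The cost grows only linearly with the number of samples regardless of d, which is exactly the property the abstract exploits for the Wigner kernel.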

  17. Calibrating Flavour Tagging Algorithms using $t\\bar{t}$ events with the ATLAS Detector at $\\sqrt{s}=13$ TeV

    CERN Document Server

    Bell, Andrew Stuart; The ATLAS collaboration

    2016-01-01

    $b$-jets are identified in the ATLAS experiment using a complex multivariate algorithm. In many analyses, the performance of this algorithm in signal and background processes is estimated using Monte Carlo simulated events. As the event and detector simulation may not give a perfect account of real events, and given the large number of changes between Run-1 and Run-2, it is vital to calibrate the performance of this algorithm with data. The $t\\bar{t}$ Probability Distribution Function method has been employed to measure the $b$-jet identification efficiency in data using a combinatorial likelihood approach. Results are presented incorporating the first $3.2~\\text{fb}^{-1}$ of $pp$ collisions at $\\sqrt{s} = 13~\\text{TeV}$ collected by the ATLAS detector during Run-2.

  18. Calibrated Ultra Fast Image Simulations for the Dark Energy Survey

    CERN Document Server

    Bruderer, Claudio; Refregier, Alexandre; Amara, Adam; Berge, Joel; Gamper, Lukas

    2015-01-01

    Weak lensing by large-scale structure is a powerful technique to probe the dark components of the universe. To understand the measurement process of weak lensing and the associated systematic effects, image simulations are becoming increasingly important. For this purpose we present a first implementation of the $\\textit{Monte Carlo Control Loops}$ ($\\textit{MCCL}$; Refregier & Amara 2014), a coherent framework for studying systematic effects in weak lensing. It allows us to model and calibrate the shear measurement process using image simulations from the Ultra Fast Image Generator (UFig; Berge et al. 2013). We apply this framework to a subset of the data taken during the Science Verification period (SV) of the Dark Energy Survey (DES). We calibrate the UFig simulations to be statistically consistent with DES images. We then perform tolerance analyses by perturbing the simulation parameters and study their impact on the shear measurement at the one-point level. This allows us to determine the relative im...

  19. Design and calibration of the AWCC for measuring uranium hexafluoride

    Energy Technology Data Exchange (ETDEWEB)

    Wenz, T.R.; Menlove, H.O.; Walton, G.; Baca, J.

    1995-08-01

    An Active Well Coincidence Counter (AWCC) has been modified to measure variable-enrichment uranium hexafluoride (UF{sub 6}) in storage bottles. An active assay technique was used to measure the {sup 235}U content because of the small quantity (nominal loading of 2 kg UF{sub 6}) and nonuniform distribution of UF{sub 6} in the storage bottles. A new insert, composed of graphite containing four americium-lithium sources, was designed for the AWCC. Monte Carlo calculations were used to design the insert and to calibrate the detector. Benchmark measurements and calculations performed using uranium oxide resulted in assay values that agreed to within 2 to 3% of destructive assay values. In addition to UF{sub 6}, the detector was also calibrated for HEU ingots, billets, and alloy scrap using the standard Mode 1 end-plug configuration.

  20. Cross calibration of the H.E.S.S. telescopes

    Energy Technology Data Exchange (ETDEWEB)

    Jankowsky, David; Jung-Richardt, Ira [ECAP, Universitaet Erlangen-Nuernberg (Germany)

    2016-07-01

    The H.E.S.S. experiment consists of five imaging atmospheric Cherenkov telescopes: four smaller, identical ones with a mirror area of 108 m{sup 2} each, and a larger one with a mirror area of 614 m{sup 2}. To guarantee high-quality data and the best possible physics output, it is essential that all data are well understood. This talk presents a possible method to check the responses of such mixed telescope systems: inter- and cross-calibration. The main idea behind this calibration is to compare the reconstructed image amplitudes (number of measured photo electrons) or energies of the individual telescopes pairwise and to search for differences in the responses. To illustrate the usability of the methods and their implications for data taking, without systematic effects from the telescope array, this talk shows results obtained with the help of Monte Carlo simulations.

  1. Potential of modern technologies for improvement of in vivo calibration.

    Science.gov (United States)

    Franck, D; de Carlan, L; Fisher, H; Pierrat, N; Schlagbauer, M; Wahl, W

    2007-01-01

    In the frame of the IDEA project, a research programme has been carried out to study the potential of reconstructing numerical anthropomorphic phantoms, based on personal physiological data obtained by computed tomography (CT) and magnetic resonance imaging (MRI), for calibration in in vivo monitoring. As a result, new procedures have been developed that take advantage of recent progress in image processing codes and allow, after scanning and rapidly reconstructing a realistic voxel phantom, the whole measurement geometry to be converted into a computer file used online for MCNP (Monte Carlo N-Particle code) calculations. The present paper overviews the major abilities of the OEDIPE software studied in the frame of the IDEA project, using the examples of calibration for lung monitoring as well as whole-body counting of a real patient.

  2. Accelerated GPU based SPECT Monte Carlo simulations.

    Science.gov (United States)

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-07

    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: (99m) Tc, (111)In and (131)I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between GATE and GGEMS platform derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency.

  3. Accelerated GPU based SPECT Monte Carlo simulations

    Science.gov (United States)

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-01

    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99m Tc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between GATE and GGEMS platform derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency.

  4. Internal Water Vapor Photoacoustic Calibration

    Science.gov (United States)

    Pilgrim, Jeffrey S.

    2009-01-01

    Water vapor absorption is ubiquitous in the infrared wavelength range where photoacoustic trace gas detectors operate. This technique allows for discontinuous wavelength tuning by temperature-jumping a laser diode from one range to another within a time span suitable for photoacoustic calibration. The use of an internal calibration eliminates the need for external calibrated reference gases. Commercial applications include an improvement of photoacoustic spectrometers in all fields of use.

  5. Field calibration of cup anemometers

    DEFF Research Database (Denmark)

    Kristensen, L.; Jensen, G.; Hansen, A.

    2001-01-01

    An outdoor calibration facility for cup anemometers, where the signals from 10 anemometers, of which at least one is a reference, can be recorded simultaneously, has been established. The results are discussed with special emphasis on the statistical significance of the calibration expressions.... It is concluded that the method has the advantage that many anemometers can be calibrated accurately with a minimum of work and cost. The obvious disadvantage is that the calibration of a set of anemometers may take more than one month in order to have wind speeds covering a sufficiently large magnitude range...

  6. Radiological Calibration and Standards Facility

    Data.gov (United States)

    Federal Laboratory Consortium — PNNL maintains a state-of-the-art Radiological Calibration and Standards Laboratory on the Hanford Site at Richland, Washington. Laboratory staff provide expertise...

  7. SURF Model Calibration Strategy

    Energy Technology Data Exchange (ETDEWEB)

    Menikoff, Ralph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-10

    SURF and SURFplus are high explosive reactive burn models for shock initiation and propagation of detonation waves. They are engineering models motivated by the ignition & growth concept of hot spots and, for SURFplus, by a second slow reaction for the energy release from carbon clustering. A key feature of the SURF model is a partial decoupling between model parameters and detonation properties. This enables reduced sets of independent parameters to be calibrated sequentially for the initiation and propagation regimes. Here we focus on a methodology for fitting the initiation parameters to Pop plot data based on 1-D simulations to compute a numerical Pop plot. In addition, the strategy for fitting the remaining parameters for the propagation regime and failure diameter is discussed.
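
    Pop plot data are conventionally fitted as a straight line in log-log space (run distance to detonation versus shock pressure). A minimal sketch of such a fit, with hypothetical helper and variable names, could look like:

```python
import math

def fit_pop_plot(pressures, run_distances):
    """Least-squares fit of a Pop plot: log(x_run) = a + b * log(P).
    Illustrative of the calibration target only; the data points would
    come from experiments or 1-D reactive-burn simulations."""
    xs = [math.log(p) for p in pressures]
    ys = [math.log(x) for x in run_distances]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    b = sxy / sxx
    return ybar - b * xbar, b  # intercept a, slope b
```

    Initiation parameters can then be adjusted until the numerical Pop plot from 1-D simulations reproduces the fitted intercept and slope of the experimental one.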

  8. RX130 Robot Calibration

    Science.gov (United States)

    Fugal, Mario

    2012-10-01

    In order to create precision magnets for an experiment at Oak Ridge National Laboratory, a new reverse engineering method has been proposed that uses the magnetic scalar potential to solve for the currents necessary to produce the desired field. To make the magnet, it is proposed to use a copper-coated G10 form upon which a drill, mounted on a robotic arm, will carve wires. The accuracy required in the manufacturing of the wires exceeds nominal robot capabilities. However, due to their rigidity as well as their precision servo motors and harmonic gear drives, some robots are capable of meeting this requirement with proper calibration. Improving the accuracy of an RX130 to within 35 microns (the accuracy necessary for the wires) is the goal of this project. Using feedback from a displacement sensor or camera, together with inverse kinematics, it is possible to achieve this accuracy.

  9. AVATAR -- Automatic variance reduction in Monte Carlo calculations

    Energy Technology Data Exchange (ETDEWEB)

    Van Riper, K.A.; Urbatsch, T.J.; Soran, P.D. [and others]

    1997-05-01

    AVATAR{trademark} (Automatic Variance And Time of Analysis Reduction), accessed through the graphical user interface application, Justine{trademark}, is a superset of MCNP{trademark} that automatically invokes THREEDANT{trademark} for a three-dimensional deterministic adjoint calculation on a mesh independent of the Monte Carlo geometry, calculates weight windows, and runs MCNP. Computational efficiency increases by a factor of 2 to 5 for a three-detector oil well logging tool model. Human efficiency increases dramatically, since AVATAR eliminates the need for deep intuition and hours of tedious handwork.

  10. Monte Carlo simulations of charge transport in heterogeneous organic semiconductors

    Science.gov (United States)

    Aung, Pyie Phyo; Khanal, Kiran; Luettmer-Strathmann, Jutta

    2015-03-01

    The efficiency of organic solar cells depends on the morphology and electronic properties of the active layer. Research teams have been experimenting with different conducting materials to achieve more efficient solar panels. In this work, we perform Monte Carlo simulations to study charge transport in heterogeneous materials. We have developed a coarse-grained lattice model of polymeric photovoltaics and use it to generate active layers with ordered and disordered regions. We determine carrier mobilities for a range of conditions to investigate the effect of the morphology on charge transport.
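
    As a toy analogue of such lattice simulations (not the coarse-grained model of the abstract), a single carrier can be hopped over a one-dimensional chain of site energies with Metropolis acceptance:

```python
import math
import random

def simulate_hops(energies, n_hops, beta=1.0, seed=0):
    """Metropolis hopping of a single carrier over a 1-D chain of sites
    with given energies (reflecting ends); returns the final site index.
    A toy stand-in for lattice-model charge transport."""
    rng = random.Random(seed)
    n, site = len(energies), len(energies) // 2
    for _ in range(n_hops):
        new = site + rng.choice((-1, 1))
        if not 0 <= new < n:
            continue  # reflect at the chain ends
        dE = energies[new] - energies[site]
        if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
            site = new  # accept the hop
    return site
```

    On a strongly tilted energy landscape the carrier drifts downhill and stays near the low-energy end, whereas on a flat landscape it performs an unbiased random walk; disorder in the energies slows the drift, which is the morphology effect the simulations probe.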

  11. Cuartel San Carlos. Yacimiento veterano

    Directory of Open Access Journals (Sweden)

    Mariana Flores

    2007-01-01

    Full Text Available The Cuartel San Carlos is a national historic monument (1986) dating from the late 18th century (1785-1790), marked by the various adversities suffered during its construction and by having withstood the earthquakes of 1812 and 1900. In 2006, the body in charge of its custody, the Instituto de Patrimonio Cultural of the Ministry of Culture, carried out three stages of archaeological exploration covering the back courtyard (Traspatio), the central courtyard (Patio Central), and the east and west wings of the building. This paper reviews the analysis of the archaeological documentation obtained at the site through this project, called EACUSAC (Estudio Arqueológico del Cuartel San Carlos), which also represents the third campaign carried out at the site. The importance of this historical site lies in its role in the events that gave rise to power struggles during the emergence of the Republic, as well as in the political events of the 20th century. Likewise, a wide sample of archaeological materials was found at the site, documenting everyday military life and the internal social dynamics that took place in the San Carlos as a strategic location for the defense of the various regimes the country experienced, from the era of Spanish imperialism to the present day.

  12. Monte Carlo approach to turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Dueben, P.; Homeier, D.; Muenster, G. [Muenster Univ. (Germany). Inst. fuer Theoretische Physik; Jansen, K. [DESY, Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Mesterhazy, D. [Humboldt Univ., Berlin (Germany). Inst. fuer Physik

    2009-11-15

    The behavior of the one-dimensional random-force-driven Burgers equation is investigated in the path integral formalism on a discrete space-time lattice. We show that by means of Monte Carlo methods one may evaluate observables, such as structure functions, as ensemble averages over different field realizations. The regularization of shock solutions to the zero-viscosity limit (Hopf-equation) eventually leads to constraints on lattice parameters required for the stability of the simulations. Insight into the formation of localized structures (shocks) and their dynamics is obtained. (orig.)

  13. Luis Carlos López

    Directory of Open Access Journals (Sweden)

    Rafael Maya

    1979-04-01

    Full Text Available Among the poets of the Centenario generation, Luis Carlos López enjoyed great popularity abroad from the publication of his first book onward. I believe his work drew the attention of philosophers such as Unamuno and, if I am not mistaken, Darío referred to it in laudatory terms. In Colombia it has been praised hyperbolically by some, while others grant it no great merit.

  14. Source geometry factors for HDR 192Ir brachytherapy secondary standard well-type ionization chamber calibrations

    Science.gov (United States)

    Shipley, D. R.; Sander, T.; Nutbrown, R. F.

    2015-03-01

    Well-type ionization chambers are used for measuring the source strength of radioactive brachytherapy sources before clinical use. Initially, the well chambers are calibrated against a suitable national standard. For high dose rate (HDR) 192Ir, this calibration is usually a two-step process. Firstly, the calibration source is traceably calibrated against an air kerma primary standard in terms of either reference air kerma rate or air kerma strength. The calibrated 192Ir source is then used to calibrate the secondary standard well-type ionization chamber. Calibration laboratories are usually only equipped with one type of HDR 192Ir source. If the clinical source type is different from that used for the calibration of the well chamber at the standards laboratory, a source geometry factor, ksg, is required to correct the calibration coefficient for any change of the well chamber response due to geometric differences between the sources. In this work we present source geometry factors for six different HDR 192Ir brachytherapy sources which have been determined using Monte Carlo techniques for a specific ionization chamber, the Standard Imaging HDR 1000 Plus well chamber with a type 70010 HDR iridium source holder. The calculated correction factors were normalized to the old and new type of calibration source used at the National Physical Laboratory. With the old Nucletron microSelectron-v1 (classic) HDR 192Ir calibration source, ksg was found to be in the range 0.983 to 0.999 and with the new Isodose Control HDR 192Ir Flexisource ksg was found to be in the range 0.987 to 1.004 with a relative uncertainty of 0.4% (k = 2). Source geometry factors for different combinations of calibration sources, clinical sources, well chambers and associated source holders, can be calculated with the formalism discussed in this paper.

  15. Bayesian Calibration of the Community Land Model using Surrogates

    Energy Technology Data Exchange (ETDEWEB)

    Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi; Sargsyan, K.; Swiler, Laura P.

    2015-01-01

    We present results from the Bayesian calibration of hydrological parameters of the Community Land Model (CLM), which is often used in climate simulations and Earth system models. A statistical inverse problem is formulated for three hydrological parameters, conditioned on observations of latent heat surface fluxes over 48 months. Our calibration method uses polynomial and Gaussian process surrogates of the CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. Posterior probability densities for the parameters are developed for two sites with different soil and vegetation covers. Our method also allows us to examine the structural error in CLM under two error models. We find that accurate surrogate models can be created for CLM in most cases. The posterior distributions lead to better prediction than the default parameter values in CLM. Climatologically averaging the observations does not modify the parameters’ distributions significantly. The structural error model reveals a correlation time-scale which can potentially be used to identify physical processes that could be contributing to it. While the calibrated CLM has a higher predictive skill, the calibration is under-dispersive.
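
    A one-parameter toy analogue of the surrogate-plus-MCMC workflow can be sketched as follows; the surrogate polynomial, noise level, and prior bounds below are made-up stand-ins, not the CLM emulator or its actual parameters:

```python
import math
import random

def surrogate(theta):
    """Illustrative polynomial surrogate of the model response
    (a stand-in for an emulator of CLM; coefficients are made up)."""
    return 2.0 + 1.5 * theta - 0.5 * theta ** 2

def log_post(theta, obs, noise_sd=0.5):
    """Gaussian likelihood around the surrogate, flat prior on (-5, 5)."""
    if not -5.0 < theta < 5.0:
        return -math.inf
    r = obs - surrogate(theta)
    return -0.5 * (r / noise_sd) ** 2

def metropolis(obs, n=4000, step=0.5, seed=1):
    """Random-walk Metropolis chain over the surrogate posterior."""
    rng = random.Random(seed)
    theta, lp = 0.0, log_post(0.0, obs)
    chain = []
    for _ in range(n):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = log_post(prop, obs)
        if lp_prop - lp > math.log(rng.random()):
            theta, lp = prop, lp_prop  # accept
        chain.append(theta)
    return chain
```

    Because each likelihood evaluation calls only the cheap surrogate rather than the full land-surface model, the sampler can afford the thousands of evaluations a Markov chain needs, which is the point of the surrogate-based calibration.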

  16. Bayesian calibration of the Community Land Model using surrogates

    Energy Technology Data Exchange (ETDEWEB)

    Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi; Swiler, Laura Painton

    2014-02-01

    We present results from the Bayesian calibration of hydrological parameters of the Community Land Model (CLM), which is often used in climate simulations and Earth system models. A statistical inverse problem is formulated for three hydrological parameters, conditional on observations of latent heat surface fluxes over 48 months. Our calibration method uses polynomial and Gaussian process surrogates of the CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. Posterior probability densities for the parameters are developed for two sites with different soil and vegetation covers. Our method also allows us to examine the structural error in CLM under two error models. We find that surrogate models can be created for CLM in most cases. The posterior distributions are more predictive than the default parameter values in CLM. Climatologically averaging the observations does not modify the parameters' distributions significantly. The structural error model reveals a correlation time-scale which can be used to identify the physical process that could be contributing to it. While the calibrated CLM has a higher predictive skill, the calibration is under-dispersive.

  17. Revised absolute amplitude calibration of the LOPES experiment

    CERN Document Server

    Link, K; Apel, W D; Arteaga-Velázquez, J C; Bähren, L; Bekk, K; Bertaina, M; Biermann, P L; Blümer, J; Bozdog, H; Brancus, I M; Cantoni, E; Chiavassa, A; Daumiller, K; de Souza, V; Di Pierro, F; Doll, P; Engel, R; Falcke, H; Fuchs, B; Gemmeke, H; Grupen, C; Haungs, A; Heck, D; Hiller, R; Hörandel, J R; Horneffer, A; Huber, D; Isar, P G; Kampert, K-H; Kang, D; Krömer, O; Kuijpers, J; Łuczak, P; Ludwig, M; Mathes, H J; Melissas, M; Morello, C; Oehlschläger, J; Palmieri, N; Pierog, T; Rautenberg, J; Rebel, H; Roth, M; Rühle, C; Saftoiu, A; Schieler, H; Schmidt, A; Schoo, S; Schröder, F G; Sima, O; Toma, G; Trinchero, G C; Weindl, A; Wochele, J; Zabierowski, J; Zensus, J A

    2015-01-01

    One of the main aims of the LOPES experiment was the evaluation of the absolute amplitude of the radio signal of air showers. This is of special interest since the radio technique offers the possibility for an independent and highly precise determination of the energy scale of cosmic rays on the basis of signal predictions from Monte Carlo simulations. For the calibration of the amplitude measured by LOPES we used an external source. Previous comparisons of LOPES measurements and simulations of the radio signal amplitude predicted by CoREAS revealed a discrepancy of the order of a factor of two. The manufacturer recently re-measured the reference calibration source, this time under free-field conditions. The updated calibration values lead to a lowering of the reconstructed electric field measured by LOPES by a factor of $2.6 \\pm 0.2$ and therefore to a significantly better agreement with CoREAS simulations. We discuss the updated calibration and its impact on the LOPES analysis results.

  18. Monte Carlo techniques in radiation therapy

    CERN Document Server

    Verhaegen, Frank

    2013-01-01

    Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book-the first of its kind-addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...

  19. Methods for fitting of efficiency curves obtained by means of HPGe gamma rays spectrometers; Metodos de ajuste de curvas de eficiencia obtidas por meio de espectrometros de HPGe

    Energy Technology Data Exchange (ETDEWEB)

    Cardoso, Vanderlei

    2002-07-01

    The present work describes a few methodologies developed for fitting efficiency curves obtained by means of a HPGe gamma-ray spectrometer. The interpolated values were determined by simple polynomial fitting and polynomial fitting between the ratio of experimental peak efficiency and total efficiency, calculated by Monte Carlo technique, as a function of gamma-ray energy. Moreover, non-linear fitting has been performed using a segmented polynomial function and applying the Gauss-Marquardt method. For the peak area obtainment different methodologies were developed in order to estimate the background area under the peak. This information was obtained by numerical integration or by using analytical functions associated to the background. One non-calibrated radioactive source has been included in the curve efficiency in order to provide additional calibration points. As a by-product, it was possible to determine the activity of this non-calibrated source. For all fittings developed in the present work the covariance matrix methodology was used, which is an essential procedure in order to give a complete description of the partial uncertainties involved. (author)
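The polynomial fitting with covariance propagation described above can be illustrated with a minimal log-log efficiency fit. The energies, efficiencies, and uncertainties below are invented calibration points, not the author's data, and the quadratic-in-log(E) form is just one common choice:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented calibration points: gamma energies (keV) and full-energy-peak
# efficiencies following an assumed power law, with energy-dependent
# relative uncertainties.
energy = np.array([122., 245., 344., 662., 779., 964., 1112., 1408.])
true_eff = 0.3 * (energy / 122.0) ** -0.85
rel_u = 0.01 + 0.02 * energy / energy.max()
eff = true_eff * (1.0 + rel_u * rng.normal(size=energy.size))
sigma = rel_u * eff

# Fit log(eff) as a quadratic in log(E); in log space the standard
# uncertainty is approximately sigma/eff, so the weights are eff/sigma.
x, y, w = np.log(energy), np.log(eff), eff / sigma
coef, cov = np.polyfit(x, y, deg=2, w=w, cov=True)   # coefficient covariance

def eff_at(E):
    """Interpolated efficiency and its standard uncertainty at E (keV)."""
    lx = np.log(E)
    basis = np.array([lx**2, lx, 1.0])
    val = np.exp(basis @ coef)
    return val, val * np.sqrt(basis @ cov @ basis)   # covariance propagation

e500, u500 = eff_at(500.0)
print(f"eff(500 keV) = {e500:.4f} +/- {u500:.4f}")
```

The coefficient covariance matrix returned by the fit is what allows a complete uncertainty statement for any interpolated efficiency, in the spirit of the covariance-matrix methodology the abstract mentions.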

  20. Novel Quantum Monte Carlo Approaches for Quantum Liquids

    Science.gov (United States)

    Rubenstein, Brenda M.

    the eventual hope is to apply this algorithm to the exploration of yet unidentified high-pressure, low-temperature phases of hydrogen, I employ this algorithm to determine whether or not quantum hard spheres can form a low-temperature bcc solid if exchange is not taken into account. In the final chapter of this thesis, I use Path Integral Monte Carlo once again to explore whether glassy para-hydrogen exhibits superfluidity. Physicists have long searched for ways to coax hydrogen into becoming a superfluid. I present evidence that, while glassy hydrogen does not crystallize at the temperatures at which hydrogen might become a superfluid, it nevertheless does not exhibit superfluidity. This is because the average binding energy per p-H2 molecule poses a severe barrier to exchange regardless of whether the system is crystalline. All in all, this work extends the reach of Quantum Monte Carlo methods to new systems and brings the power of existing methods to bear on new problems. Portions of this work have been published in Rubenstein, PRE (2010) and Rubenstein, PRA (2012) [167;169]. Other papers not discussed here published during my Ph.D. include Rubenstein, BPJ (2008) and Rubenstein, PRL (2012) [166;168]. The work in Chapters 6 and 7 is currently unpublished. [166] Brenda M. Rubenstein, Ivan Coluzza, and Mark A. Miller. Controlling the folding and substrate-binding of proteins using polymer brushes. Physical Review Letters, 108(20):208104, May 2012. [167] Brenda M. Rubenstein, J.E. Gubernatis, and J.D. Doll. Comparative monte carlo efficiency by monte carlo analysis. Physical Review E, 82(3):036701, September 2010. [168] Brenda M. Rubenstein and Laura J. Kaufman. The role of extracellular matrix in glioma invasion: A cellular potts model approach. Biophysical Journal, 95(12):5661-- 5680, December 2008. [169] Brenda M. Rubenstein, Shiwei Zhang, and David R. Reichman. Finite-temperature auxiliary-field quantum monte carlo for bose-fermi mixtures. 
Physical Review A, 86

  1. A Fully Bayesian Approach to Improved Calibration and Prediction of Groundwater Models With Structure Error

    Science.gov (United States)

    Xu, T.; Valocchi, A. J.

    2014-12-01

    Effective water resource management typically relies on numerical models to analyse groundwater flow and solute transport processes. These models are usually subject to model structure error due to simplification and/or misrepresentation of the real system. As a result, the model outputs may systematically deviate from measurements, thus violating a key assumption for traditional regression-based calibration and uncertainty analysis. On the other hand, model structure error induced bias can be described statistically in an inductive, data-driven way based on historical model-to-measurement misfit. We adopt a fully Bayesian approach that integrates a Gaussian process error model, accounting for model structure error, into the calibration, prediction and uncertainty analysis of groundwater models. The posterior distributions of parameters of the groundwater model and the Gaussian process error model are jointly inferred using DREAM, an efficient Markov chain Monte Carlo sampler. We test the usefulness of the fully Bayesian approach on a synthetic case study of surface-ground water interaction under changing pumping conditions. We first illustrate through this example that traditional least squares regression without accounting for model structure error yields biased parameter estimates due to parameter compensation as well as biased predictions. In contrast, the Bayesian approach gives less biased parameter estimates. Moreover, the integration of a Gaussian process error model significantly reduces predictive bias and leads to prediction intervals that are more consistent with observations. The results highlight the importance of explicit treatment of model structure error especially in circumstances where subsequent decision-making and risk analysis require accurate prediction and uncertainty quantification. In addition, the data-driven error modelling approach is capable of extracting more information from observation data than using a groundwater model alone.
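A minimal version of the data-driven error model (a Gaussian process fitted to historical model-to-measurement residuals, then added back as a bias correction) might look like the sketch below. The "simulator", the truth, and the kernel hyperparameters are all illustrative, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic setting: a "groundwater model" with structural error -- the
# truth has a smooth trend the simulator cannot represent.
t = np.linspace(0.0, 10.0, 60)
truth = 2.0 * t + 1.5 * np.sin(0.8 * t)      # real system
model_out = 2.0 * t                           # biased simulator output
obs = truth + rng.normal(0.0, 0.1, size=t.size)
resid = obs - model_out                       # misfit carries the bias

# GP with a squared-exponential kernel fitted to the residuals, i.e. a
# data-driven model of the structural error (hyperparameters assumed fixed).
def k(a, b, amp=1.5, ell=1.5):
    return amp**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

noise = 0.1
K = k(t, t) + noise**2 * np.eye(t.size)
alpha = np.linalg.solve(K, resid)

t_new = np.linspace(0.0, 10.0, 200)
bias_pred = k(t_new, t) @ alpha               # predictive mean of the error

# Bias-corrected prediction = simulator output + GP error model.
corrected = 2.0 * t_new + bias_pred
truth_new = 2.0 * t_new + 1.5 * np.sin(0.8 * t_new)
rmse_raw = np.sqrt(np.mean((2.0 * t_new - truth_new) ** 2))
rmse_cor = np.sqrt(np.mean((corrected - truth_new) ** 2))
print(f"RMSE without correction: {rmse_raw:.3f}, with correction: {rmse_cor:.3f}")
```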

  2. Multi-Instrument Inter-Calibration (MIIC) System

    Directory of Open Access Journals (Sweden)

    Chris Currey

    2016-11-01

    In order to have confidence in the long-term records of atmospheric and surface properties derived from satellite measurements it is important to know the stability and accuracy of the actual radiance or reflectance measurements. Climate quality measurements require accurate calibration of space-borne instruments. Inter-calibration is the process that ties the calibration of a target instrument to a more accurate, preferably SI-traceable, reference instrument by matching measurements in time, space, wavelength, and view angles. A major challenge for any inter-calibration study is to find and acquire matched samples from within the large data volumes distributed across Earth science data centers. Typically less than 0.1% of the instrument data are required for inter-calibration analysis. Software tools and networking middleware are necessary for intelligent selection and retrieval of matched samples from multiple instruments on separate spacecraft. This paper discusses the Multi-Instrument Inter-Calibration (MIIC) system, a web-based software framework used by the Climate Absolute Radiance and Refractivity Observatory (CLARREO) Pathfinder mission to simplify the data management mechanics of inter-calibration. MIIC provides three main services: (1) inter-calibration event prediction; (2) data acquisition; and (3) data analysis. The combination of event prediction and powerful server-side functions reduces the data volume required for inter-calibration studies by several orders of magnitude, dramatically reducing network bandwidth and disk storage needs. MIIC provides generic retrospective analysis services capable of sifting through large data volumes of existing instrument data. The MIIC tiered design deployed at large institutional data centers can help international organizations, such as the Global Space Based Inter-Calibration System (GSICS), more efficiently acquire matched data from multiple data centers. In this paper we describe the MIIC
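The matched-sample selection step, pairing measurements that are close in time and location and keeping well under 0.1% of all possible pairs, can be sketched as follows. Instrument names are omitted and all thresholds and sample counts are illustrative:

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy matched-sample selection between two instruments A and B: keep only
# pairs close in time and location (thresholds here are illustrative).
n_a, n_b = 5_000, 5_000
t_a, t_b = rng.uniform(0, 86_400, n_a), rng.uniform(0, 86_400, n_b)   # s
lat_a, lat_b = rng.uniform(-60, 60, n_a), rng.uniform(-60, 60, n_b)   # deg
lon_a, lon_b = rng.uniform(-180, 180, n_a), rng.uniform(-180, 180, n_b)

dt_max, dxy_max = 300.0, 1.0    # 5 min; ~1 degree in lat/lon

# Sort instrument B by time so each A sample only scans a narrow window.
order = np.argsort(t_b)
t_b_sorted = t_b[order]

matches = []
for i in range(n_a):
    lo = np.searchsorted(t_b_sorted, t_a[i] - dt_max)
    hi = np.searchsorted(t_b_sorted, t_a[i] + dt_max)
    for j in order[lo:hi]:          # only time-matched candidates
        if (abs(lat_a[i] - lat_b[j]) < dxy_max
                and abs(lon_a[i] - lon_b[j]) < dxy_max):
            matches.append((i, j))

frac = len(matches) / (n_a * n_b)
print(f"{len(matches)} matches = {100 * frac:.5f}% of all possible pairs")
```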

  3. Design of a transportable high efficiency fast neutron spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    Roecker, C., E-mail: calebroecker@berkeley.edu [Department of Nuclear Engineering, University of California at Berkeley, CA 94720 (United States); Bernstein, A.; Bowden, N.S. [Nuclear and Chemical Sciences Division, Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States); Cabrera-Palmer, B. [Radiation and Nuclear Detection Systems, Sandia National Laboratories, Livermore, CA 94550 (United States); Dazeley, S. [Nuclear and Chemical Sciences Division, Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States); Gerling, M.; Marleau, P.; Sweany, M.D. [Radiation and Nuclear Detection Systems, Sandia National Laboratories, Livermore, CA 94550 (United States); Vetter, K. [Department of Nuclear Engineering, University of California at Berkeley, CA 94720 (United States); Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)

    2016-08-01

    A transportable fast neutron detection system has been designed and constructed for measuring neutron energy spectra and flux ranging from tens to hundreds of MeV. The transportability of the spectrometer reduces the detector-related systematic bias between different neutron spectra and flux measurements, which allows for the comparison of measurements above or below ground. The spectrometer will measure neutron fluxes that are of prohibitively low intensity compared to the site-specific background rates targeted by other transportable fast neutron detection systems. To measure low intensity high-energy neutron fluxes, a conventional capture-gating technique is used for measuring neutron energies above 20 MeV and a novel multiplicity technique is used for measuring neutron energies above 100 MeV. The spectrometer is composed of two Gd containing plastic scintillator detectors arranged around a lead spallation target. To calibrate and characterize the position dependent response of the spectrometer, a Monte Carlo model was developed and used in conjunction with experimental data from gamma ray sources. Multiplicity event identification algorithms were developed and used with a Cf-252 neutron multiplicity source to validate the Monte Carlo model Gd concentration and secondary neutron capture efficiency. The validated Monte Carlo model was used to predict an effective area for the multiplicity and capture gating analyses. For incident neutron energies between 100 MeV and 1000 MeV with an isotropic angular distribution, the multiplicity analysis predicted an effective area of 500 cm{sup 2} rising to 5000 cm{sup 2}. For neutron energies above 20 MeV, the capture-gating analysis predicted an effective area between 1800 cm{sup 2} and 2500 cm{sup 2}. The multiplicity mode was found to be sensitive to the incident neutron angular distribution.
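The effective-area bookkeeping follows directly from the Monte Carlo tally: the fraction of generated neutrons that are detected, times the generation area. A toy version with an invented detection probability (not the spectrometer's actual response):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy effective-area tally: neutrons are generated over an area A_gen and
# each is "detected" with an assumed (invented) energy-dependent probability;
# the effective area is A_gen times the detected fraction.
A_gen = 1.0e5                                     # cm^2, generation area
E = rng.uniform(100.0, 1000.0, size=1_000_000)    # MeV, toy incident spectrum
p_detect = 0.005 + 0.045 * (E - 100.0) / 900.0    # rises from 0.5% to 5%
detected = rng.random(E.size) < p_detect

frac = detected.mean()
A_eff = A_gen * frac
u_A = A_gen * np.sqrt(frac * (1.0 - frac) / E.size)   # binomial uncertainty
print(f"A_eff = {A_eff:.0f} +/- {u_A:.0f} cm^2")
```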

  4. Mean field simulation for Monte Carlo integration

    CERN Document Server

    Del Moral, Pierre

    2013-01-01

    In the last three decades, there has been a dramatic increase in the use of interacting particle methods as a powerful tool in real-world applications of Monte Carlo simulation in computational physics, population biology, computer sciences, and statistical machine learning. Ideally suited to parallel and distributed computation, these advanced particle algorithms include nonlinear interacting jump diffusions; quantum, diffusion, and resampled Monte Carlo methods; Feynman-Kac particle models; genetic and evolutionary algorithms; sequential Monte Carlo methods; adaptive and interacting Marko

  5. Approaching Chemical Accuracy with Quantum Monte Carlo

    OpenAIRE

    Petruzielo, Frank R.; Toulouse, Julien; Umrigar, C. J.

    2012-01-01

    International audience; A quantum Monte Carlo study of the atomization energies for the G2 set of molecules is presented. Basis size dependence of diffusion Monte Carlo atomization energies is studied with a single determinant Slater-Jastrow trial wavefunction formed from Hartree-Fock orbitals. With the largest basis set, the mean absolute deviation from experimental atomization energies for the G2 set is 3.0 kcal/mol. Optimizing the orbitals within variational Monte Carlo improves the agreem...

  6. Tectonic calibrations in molecular dating

    Institute of Scientific and Technical Information of China (English)

    Ullasa KODANDARAMAIAH

    2011-01-01

    Molecular dating techniques require the use of calibrations, which are usually fossil or geological vicariance-based. Fossil calibrations have been criticised because they result only in minimum age estimates. Based on a historical biogeographic perspective, I suggest that vicariance-based calibrations are more dangerous. Almost all analytical methods in historical biogeography are strongly biased towards inferring vicariance, hence vicariance identified through such methods is unreliable. Other studies, especially of groups found on Gondwanan fragments, have simply assumed vicariance. Although it was previously believed that vicariance was the predominant mode of speciation, mounting evidence now indicates that speciation by dispersal is common, dominating vicariance in several groups. Moreover, the possibility of speciation having occurred before the said geological event cannot be precluded. Thus, geological calibrations can under- or overestimate times, whereas fossil calibrations always result in minimum estimates. Another major drawback of vicariant calibrations is the problem of circular reasoning when the resulting estimates are used to infer ages of biogeographic events. I argue that fossil-based dating is a superior alternative to vicariance, primarily because the strongest assumption in the latter, that speciation was caused by the said geological process, is more often than not the most tenuous. When authors prefer to use a combination of fossil and vicariant calibrations, one suggestion is to report results both with and without inclusion of the geological constraints. Relying solely on vicariant calibrations should be strictly avoided.

  7. 1-D EQUILIBRIUM DISCRETE DIFFUSION MONTE CARLO

    Energy Technology Data Exchange (ETDEWEB)

    T. EVANS; ET AL

    2000-08-01

    We present a new hybrid Monte Carlo method for 1-D equilibrium diffusion problems in which the radiation field coexists with matter in local thermodynamic equilibrium. This method, the Equilibrium Discrete Diffusion Monte Carlo (EqDDMC) method, combines Monte Carlo particles with spatially discrete diffusion solutions. We verify the EqDDMC method with computational results from three slab problems. The EqDDMC method represents an incremental step toward applying this hybrid methodology to non-equilibrium diffusion, where it could be simultaneously coupled to Monte Carlo transport.

  8. Monte Carlo Treatment Planning for Advanced Radiotherapy

    DEFF Research Database (Denmark)

    Cronholm, Rickard

    and validation of a Monte Carlo model of a medical linear accelerator (i), converting a CT scan of a patient to a Monte Carlo compliant phantom (ii) and translating the treatment plan parameters (including beam energy, angles of incidence, collimator settings etc) to a Monte Carlo input file (iii). A protocol...... previous algorithms since it uses delineations of structures in order to include and/or exclude certain media in various anatomical regions. This method has the potential to reduce anatomically irrelevant media assignment. In house MATLAB scripts translating the treatment plan parameters to Monte Carlo...

  9. Cobalt source calibration

    Energy Technology Data Exchange (ETDEWEB)

    Rizvi, H.M.

    1999-12-03

    The data obtained from these tests determine the dose rate of the two cobalt sources in SRTC. Building 774-A houses one of these sources while the other resides in room C-067 of Building 773-A. The data from this experiment show the following: (1) The dose rate of the No. 2 cobalt source in Building 774-A measured 1.073 x 10{sup 5} rad/h (June 17, 1999). The dose rate of the Shepherd Model 109 Gamma cobalt source in Building 773-A measured 9.27 x 10{sup 5} rad/h (June 25, 1999). These rates come from placing the graduated cylinder containing the dosimeter solution in the center of the irradiation chamber. (2) Two calibration tests in the 774-A source placed the graduated cylinder with the dosimeter solution approximately 1.5 inches off center in the axial direction. This movement of the sample reduced the measured dose rate by 0.92%, from 1.083 x 10{sup 5} rad/h to 1.073 x 10{sup 5} rad/h. (3) A similar test in the cobalt source in 773-A placed the graduated cylinder approximately 2.0 inches off center in the axial direction. This change in position reduced the measured dose rate by 10.34%, from 1.036 x 10{sup 6} rad/h to 9.27 x 10{sup 5} rad/h. This testing used chemical dosimetry to measure the dose rate of a radioactive source. In this method, one determines the dose by the chemical change that takes place in the dosimeter. For this calibration experiment, the author used a Fricke (ferrous ammonium sulfate) dosimeter. This solution works well for dose rates up to 10{sup 7} rad/h. During irradiation of the Fricke dosimeter solution the Fe{sup 2+} ions ionize to Fe{sup 3+}. When this occurs, the solution acquires a slightly darker tint (not visible to the human eye). To determine the magnitude of the change in Fe ions, one places the solution in a UV-VIS spectrophotometer, which measures the absorbance of the solution. Dividing the absorbance by the total time (in minutes) of exposure yields the dose rate.
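The Fricke dose-rate arithmetic in the last paragraph can be written out explicitly. The molar absorptivity, solution density, and G-value below are standard handbook values for the ferrous sulphate system, and the example absorbance change is invented:

```python
# Standard handbook values for the Fricke (ferrous sulphate) system;
# treat them as assumptions of this sketch.
EPSILON = 2205.0   # L mol^-1 cm^-1, Fe3+ molar absorptivity at 304 nm, 25 C
PATH_CM = 1.0      # cm, spectrophotometer cell path length
RHO = 1.024        # kg/L, density of the dosimeter solution
G_FE3 = 1.61e-6    # mol/J, radiation chemical yield G(Fe3+) for Co-60 gammas

def dose_rate_rad_per_h(delta_absorbance, minutes):
    """Dose rate from the net absorbance change over an exposure time."""
    dose_gy = delta_absorbance / (EPSILON * PATH_CM * RHO * G_FE3)  # Gy = J/kg
    return dose_gy * 100.0 / (minutes / 60.0)                       # 1 Gy = 100 rad

# A hypothetical absorbance change of 0.39 over a 6-minute exposure
# corresponds to roughly the 1.07 x 10^5 rad/h quoted above.
print(f"{dose_rate_rad_per_h(0.39, 6.0):.3e} rad/h")
```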

  10. Calibration and measurement of {sup 210}Pb using two independent techniques

    Energy Technology Data Exchange (ETDEWEB)

    Villa, M. [Centro de Investigacion, Tecnologia e Innovacion, CITIUS, Universidad de Sevilla, Av. Reina Mercedes 4B, 41012 Sevilla (Spain)], E-mail: mvilla@us.es; Hurtado, S. [Centro de Investigacion, Tecnologia e Innovacion, CITIUS, Universidad de Sevilla, Av. Reina Mercedes 4B, 41012 Sevilla (Spain); Manjon, G.; Garcia-Tenorio, R. [Departamento de Fisica Aplicada II, E.T.S. Arquitectura, Universidad de Sevilla, Av. Reina Mercedes 2, 41012 Sevilla (Spain)

    2007-10-15

    An experimental procedure has been developed for a rapid and accurate determination of the activity concentration of {sup 210}Pb in sediments by liquid scintillation counting (LSC). Additionally, an alternative technique using {gamma}-spectrometry and Monte Carlo simulation has been developed. A radiochemical procedure, based on radium and barium sulphates co-precipitation have been applied to isolate the Pb-isotopes. {sup 210}Pb activity measurements were done in a low background scintillation spectrometer Quantulus 1220. A calibration of the liquid scintillation spectrometer, including its {alpha}/{beta} discrimination system, has been made, in order to minimize background and, additionally, some improvements are suggested for the calculation of the {sup 210}Pb activity concentration, taking into account that {sup 210}Pb counting efficiency cannot be accurately determined. Therefore, the use of an effective radiochemical yield, which can be empirically evaluated, is proposed. {sup 210}Pb activity concentration in riverbed sediments from an area affected by NORM wastes has been determined using both the proposed method. Results using {gamma}-spectrometry and LSC are compared to the results obtained following indirect {alpha}-spectrometry ({sup 210}Po) method.

  11. Calibration of semiconductor detectors in the 200-8500 keV range at VNIIM.

    Science.gov (United States)

    Tereshchenko, Evgeny E; Moiseev, Nikolay

    2012-09-01

    At the ionising radiation department of the D.I. Mendeleyev Institute for Metrology, a semiconductor detector was calibrated in the energy range 200-8500 keV using (n,2γ) and (n,γ) reactions. Separate cylindrical targets (77 mm diameter and 10 mm height) were made from mercuric sulphate, sodium chloride and metallic titanium. A (252)Cf spontaneous fission neutron source, placed in a 150 mm diameter polyethylene ball, was used to generate thermal neutrons. The optimal target dimensions were determined taking into account the thermal neutron cross-sections and gamma-radiation attenuations in the target materials. The influence of the background radiation induced by neutrons from the walls, floors and ceilings was also taken into account. The shapes of the efficiency curves for point and volume sources in the 200-8500 keV range have been investigated. The experimental results are in good agreement with Monte-Carlo calculations. The emission rate of the 6.13 MeV photons from a (238)Pu-(13)C source was determined with an expanded uncertainty, U(c), of 10% (k=2).

  12. The Advanced LIGO Photon Calibrators

    CERN Document Server

    Karki, S; Kandhasamy, S; Abbott, B P; Abbott, T D; Anders, E H; Berliner, J; Betzwieser, J; Daveloza, H P; Cahillane, C; Canete, L; Conley, C; Gleason, J R; Goetz, E; Kissel, J S; Izumi, K; Mendell, G; Quetschke, V; Rodruck, M; Sachdev, S; Sadecki, T; Schwinberg, P B; Sottile, A; Wade, M; Weinstein, A J; West, M; Savage, R L

    2016-01-01

    The two interferometers of the Laser Interferometer Gravitational-Wave Observatory (LIGO) recently detected gravitational waves from the mergers of binary black hole systems. Accurate calibration of the output of these detectors was crucial for the observation of these events, and the extraction of parameters of the sources. The principal tools used to calibrate the responses of the second-generation (Advanced) LIGO detectors to gravitational waves are systems based on radiation pressure and referred to as Photon Calibrators. These systems, which were completely redesigned for Advanced LIGO, include several significant upgrades that enable them to meet the calibration requirements of second-generation gravitational wave detectors in the new era of gravitational-wave astronomy. We report on the design, implementation, and operation of these Advanced LIGO Photon Calibrators that are currently providing fiducial displacements on the order of $10^{-18}$ m/$\\sqrt{\\textrm{Hz}}$ with accuracy and precision of better ...

  13. The Advanced LIGO photon calibrators

    Science.gov (United States)

    Karki, S.; Tuyenbayev, D.; Kandhasamy, S.; Abbott, B. P.; Abbott, T. D.; Anders, E. H.; Berliner, J.; Betzwieser, J.; Cahillane, C.; Canete, L.; Conley, C.; Daveloza, H. P.; De Lillo, N.; Gleason, J. R.; Goetz, E.; Izumi, K.; Kissel, J. S.; Mendell, G.; Quetschke, V.; Rodruck, M.; Sachdev, S.; Sadecki, T.; Schwinberg, P. B.; Sottile, A.; Wade, M.; Weinstein, A. J.; West, M.; Savage, R. L.

    2016-11-01

    The two interferometers of the Laser Interferometer Gravitational-Wave Observatory (LIGO) recently detected gravitational waves from the mergers of binary black hole systems. Accurate calibration of the output of these detectors was crucial for the observation of these events and the extraction of parameters of the sources. The principal tools used to calibrate the responses of the second-generation (Advanced) LIGO detectors to gravitational waves are systems based on radiation pressure and referred to as photon calibrators. These systems, which were completely redesigned for Advanced LIGO, include several significant upgrades that enable them to meet the calibration requirements of second-generation gravitational wave detectors in the new era of gravitational-wave astronomy. We report on the design, implementation, and operation of these Advanced LIGO photon calibrators that are currently providing fiducial displacements on the order of 10^{-18} m/√Hz with accuracy and precision of better than 1%.

  14. TIME CALIBRATED OSCILLOSCOPE SWEEP CIRCUIT

    Science.gov (United States)

    Smith, V.L.; Carstensen, H.K.

    1959-11-24

    An improved time calibrated sweep circuit is presented, which extends the range of usefulness of conventional oscilloscopes as utilized for time calibrated display applications in accordance with U. S. Patent No. 2,832,002. Principal novelty resides in the provision of a pair of separate signal paths, each of which is phase and amplitude adjustable, to connect a high-frequency calibration oscillator to the output of a sawtooth generator also connected to the respective horizontal deflection plates of an oscilloscope cathode ray tube. The amplitude and phase of the calibration oscillator signals in the two signal paths are adjusted to balance out feedthrough currents capacitively coupled at high frequencies of the calibration oscillator from each horizontal deflection plate to the vertical plates of the cathode ray tube.

  15. Automated calibration of multistatic arrays

    Energy Technology Data Exchange (ETDEWEB)

    Henderer, Bruce

    2017-03-14

    A method is disclosed for calibrating a multistatic array having a plurality of transmitter and receiver pairs spaced from one another along a predetermined path and relative to a plurality of bin locations, and further being spaced at a fixed distance from a stationary calibration implement. A clock reference pulse may be generated, and each of the transmitters and receivers of each said transmitter/receiver pair turned on at a monotonically increasing time delay interval relative to the clock reference pulse. Transmitters and receivers may then be used in sequence, such that a previously calibrated transmitter or receiver of a given transmitter/receiver pair is paired with a not-yet-calibrated transmitter or receiver of the immediately following transmitter/receiver pair, thereby calibrating the transmitter or receiver of that following pair.

  16. Error in Monte Carlo, quasi-error in Quasi-Monte Carlo

    OpenAIRE

    Kleiss, R. H. P.; Lazopoulos, A.

    2006-01-01

    While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction o...
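One practical workaround (not necessarily the estimator these authors construct) is randomized QMC: independent scramblings of a low-discrepancy sequence restore an honest, independence-based error estimate while keeping the QMC convergence. A sketch using SciPy's scrambled Sobol sampler:

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(4)

# Smooth test integrand on [0,1]^2 with known integral: ∫∫ x*y dx dy = 1/4.
f = lambda p: p[:, 0] * p[:, 1]
n = 2**12

# Plain Monte Carlo: the usual error estimator is valid because the
# points are generated independently of each other.
vals = f(rng.random((n, 2)))
mc_est, mc_err = vals.mean(), vals.std(ddof=1) / np.sqrt(n)

# Randomized QMC: the spread over independent scramblings of a Sobol
# sequence gives an error estimate that plain (deterministic) QMC lacks.
reps = np.array([
    f(qmc.Sobol(d=2, scramble=True, seed=s).random(n)).mean()
    for s in range(16)
])
qmc_est, qmc_err = reps.mean(), reps.std(ddof=1) / np.sqrt(reps.size)

print(f"MC:   {mc_est:.5f} +/- {mc_err:.1e}")
print(f"RQMC: {qmc_est:.5f} +/- {qmc_err:.1e}")
```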

  17. Improving Langley calibrations by reducing diurnal variations of aerosol Ångström parameters

    Directory of Open Access Journals (Sweden)

    A. Kreuter

    2013-01-01

    Errors in the sun photometer calibration constant lead to artificial diurnal variations, symmetric around solar noon, of the retrieved aerosol optical depth (AOD) and the associated Ångström exponent α and its curvature γ. We show in simulations that within the uncertainty of state-of-the-art Langley calibrations, these diurnal variations of α and γ can be significant in low AOD conditions, while those of AOD are negligible. We implement a weighted Monte Carlo method of finding an improved calibration constant by minimizing the diurnal variations in α and γ and apply the method to sun photometer data of a clear day in Innsbruck, Austria. The results show that our method can be used to improve the calibrations in two of the four wavelength channels by up to a factor of 3.6.
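The Langley calibration being refined here extrapolates the log-signal to zero airmass. A minimal synthetic example (the calibration constant, AOD, and noise level are hypothetical values for one channel):

```python
import numpy as np

rng = np.random.default_rng(5)

# Langley method: V = V0 * exp(-m * tau), so ln V is linear in airmass m
# and extrapolating the fit to m = 0 recovers the calibration constant V0.
V0_true, tau_true = 1.37, 0.12        # hypothetical constant and stable AOD
m = np.linspace(2.0, 6.0, 40)         # airmasses over a clear half-day
V = V0_true * np.exp(-m * tau_true) * (1.0 + 0.002 * rng.normal(size=m.size))

slope, intercept = np.polyfit(m, np.log(V), 1)
V0_fit, tau_fit = np.exp(intercept), -slope
print(f"V0 = {V0_fit:.3f}, tau = {tau_fit:.3f}")
```

An error in V0 maps into an airmass-dependent (hence diurnally symmetric) error in the retrieved tau, which is exactly the artifact the paper exploits to refine the calibration.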

  18. Improving Langley calibrations by reducing diurnal variations of aerosol Ångström parameters

    Directory of Open Access Journals (Sweden)

    A. Kreuter

    2012-09-01

    Errors in the sun photometer calibration constant lead to artificial diurnal variations, symmetric around solar noon, of the retrieved Aerosol Optical Depth (AOD) and the associated Ångström exponent α and its curvature γ. We show in simulations that within the uncertainty of state-of-the-art Langley calibrations, these diurnal variations of α and γ can be significant in low AOD conditions, while those of AOD are negligible. We implement a weighted Monte-Carlo method of finding an improved calibration constant by minimizing the diurnal variations in α and γ and apply the method to sun photometer data of a clear day in Innsbruck, Austria. The results show that our method can be used to improve the calibrations in two of the four wavelength channels by up to a factor of 3.6.

  19. Monte Carlo Euler approximations of HJM term structure financial models

    KAUST Repository

    Björk, Tomas

    2012-11-22

    We present Monte Carlo-Euler methods for a weak approximation problem related to the Heath-Jarrow-Morton (HJM) term structure model, based on Itô stochastic differential equations in infinite dimensional spaces, and prove strong and weak error convergence estimates. The weak error estimates are based on stochastic flows and discrete dual backward problems, and they can be used to identify different error contributions arising from time and maturity discretization as well as the classical statistical error due to finite sampling. Explicit formulas for efficient computation of sharp error approximation are included. Due to the structure of the HJM models considered here, the computational effort devoted to the error estimates is low compared to the work to compute Monte Carlo solutions to the HJM model. Numerical examples with known exact solution are included in order to show the behavior of the estimates. © 2012 Springer Science+Business Media Dordrecht.
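The Monte Carlo-Euler setup can be illustrated on a scalar model problem, geometric Brownian motion rather than the infinite-dimensional HJM dynamics, where the weak quantity E[S_T] is known exactly and the classical statistical error is estimated from the sample:

```python
import numpy as np

rng = np.random.default_rng(6)

# Weak approximation of E[S_T] for dS = mu*S dt + sigma*S dW (GBM),
# where the exact answer is S0 * exp(mu * T).
S0, mu, sigma, T = 1.0, 0.05, 0.2, 1.0
n_steps, n_paths = 64, 200_000
dt = T / n_steps

S = np.full(n_paths, S0)
for _ in range(n_steps):                        # Euler time stepping
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    S += mu * S * dt + sigma * S * dW

est = S.mean()
stat_err = S.std(ddof=1) / np.sqrt(n_paths)     # classical statistical error
# est - S0*exp(mu*T) also contains the O(dt) weak time-discretization error.
print(f"E[S_T] ~ {est:.5f} (exact {S0 * np.exp(mu * T):.5f}), "
      f"stat. error {stat_err:.1e}")
```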

  20. Research on GPU Acceleration for Monte Carlo Criticality Calculation

    Science.gov (United States)

    Xu, Qi; Yu, Ganglin; Wang, Kan

    2014-06-01

    The Monte Carlo neutron transport method can be naturally parallelized on multi-core architectures because particle histories are independent of one another during the simulation. The GPU+CPU heterogeneous parallel mode has become an increasingly popular way of parallelism in the field of scientific supercomputing. Thus, this work focuses on the GPU acceleration method for the Monte Carlo criticality simulation, as well as the computational efficiency that GPUs can bring. The "neutron transport step" is introduced to increase the GPU thread occupancy. In order to test the sensitivity of the acceleration to MC code complexity, a 1D one-group code and a 3D multi-group general purpose code are respectively transplanted to GPUs, and the acceleration effects are compared. The result of numerical experiments shows considerable acceleration effect of the "neutron transport step" strategy. However, the performance comparison between the 1D code and the 3D code indicates the poor scalability of MC codes on GPUs.
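The independence of particle histories is what makes the method data-parallel: each history maps onto one thread (or one lane of a vectorized batch). The same structure shows up even in a one-group toy estimate of k-infinity, written here in vectorized form with invented cross sections:

```python
import numpy as np

rng = np.random.default_rng(7)

# One-group, infinite-medium toy problem: every collision is an absorption,
# and with probability sigma_f/sigma_a it is a fission releasing nu neutrons,
# so analytically k_inf = nu * sigma_f / sigma_a.  Cross sections invented.
nu, sigma_f, sigma_a = 2.43, 0.05, 0.12
k_exact = nu * sigma_f / sigma_a

# Each history is independent, so the whole batch collapses to one
# data-parallel primitive -- the property GPU ports of MC codes exploit.
n = 1_000_000
fissions = rng.random(n) < sigma_f / sigma_a
k_est = nu * fissions.mean()          # neutrons produced per neutron absorbed
print(f"k_inf ~ {k_est:.4f} (exact {k_exact:.4f})")
```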

  1. A Monte Carlo Model of Light Propagation in Nontransparent Tissue

    Institute of Scientific and Technical Information of China (English)

    姚建铨; 朱水泉; 胡海峰; 王瑞康

    2004-01-01

    To sharpen the imaging of structures, it is vital to develop a convenient and efficient quantitative algorithm for optical coherence tomography (OCT) sampling. In this paper a new Monte Carlo model is set up, and the propagation of light in biological tissue is analyzed mathematically and physically. We study how the intensities of Class 1 and Class 2 light at different wavelengths vary with penetration depth, how the Class 1 (signal) light intensity varies with probing depth, and how the angularly resolved diffuse reflectance and diffuse transmittance vary with exit angle. The results show that the Monte Carlo simulations are consistent with theory.
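    The kernel of any such photon-transport model is the sampling of free paths between interaction events from Beer-Lambert attenuation. A minimal sketch (a generic illustration, not the paper's OCT model; `mu_t` is the assumed total interaction coefficient):

```python
import math
import random

def sample_free_paths(mu_t, n=100_000, seed=2):
    """Photon-transport kernel sketch: the free path between interaction
    events is s = -ln(U)/mu_t (Beer-Lambert attenuation), with U uniform
    on (0,1]. Returns the sample mean, which converges to 1/mu_t."""
    rng = random.Random(seed)
    total = sum(-math.log(1.0 - rng.random()) / mu_t for _ in range(n))
    return total / n
```

    In a full tissue model each sampled step is followed by a scattering-angle draw and an absorption check; the step-length sampling above is the part common to all such codes.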

  2. A Monte Carlo algorithm for simulating fermions on Lefschetz thimbles

    CERN Document Server

    Alexandru, Andrei; Bedaque, Paulo

    2016-01-01

    A possible solution of the notorious sign problem preventing direct Monte Carlo calculations for systems with non-zero chemical potential is to deform the integration region in the complex plane to a Lefschetz thimble. We investigate this approach for a simple fermionic model. We introduce an easy to implement Monte Carlo algorithm to sample the dominant thimble. Our algorithm relies only on the integration of the gradient flow in the numerically stable direction, which gives it a distinct advantage over the other proposed algorithms. We demonstrate the stability and efficiency of the algorithm by applying it to an exactly solvable fermionic model and compare our results with the analytical ones. We report a very good agreement for a certain region in the parameter space where the dominant contribution comes from a single thimble, including a region where standard methods suffer from a severe sign problem. However, we find that there are also regions in the parameter space where the contribution from multiple...

  3. Accelerated Monte Carlo simulations with restricted Boltzmann machines

    Science.gov (United States)

    Huang, Li; Wang, Lei

    2017-01-01

    Despite their exceptional flexibility and popularity, Monte Carlo methods often suffer from slow mixing times for challenging statistical physics problems. We present a general strategy to overcome this difficulty by adopting ideas and techniques from the machine learning community. We fit the unnormalized probability of the physical model to a feed-forward neural network and reinterpret the architecture as a restricted Boltzmann machine. Then, exploiting its feature detection ability, we utilize the restricted Boltzmann machine to propose efficient Monte Carlo updates to speed up the simulation of the original physical system. We implement these ideas for the Falicov-Kimball model and demonstrate an improved acceptance ratio and autocorrelation time near the phase transition point.
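    The correction that keeps such machine-learned proposals exact can be sketched with a standard independence Metropolis-Hastings step; here a deliberately offset Gaussian stands in for the trained restricted Boltzmann machine (a generic sketch, not the authors' implementation):

```python
import math
import random

def surrogate_mh(logp, propose, logq, x0, n_steps, seed=3):
    """Independence Metropolis-Hastings: propose from a cheap surrogate q
    and accept with min(1, p(x')q(x) / (p(x)q(x'))), so the chain samples
    the exact target p no matter how imperfect the surrogate is."""
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(n_steps):
        xp = propose(rng)
        log_alpha = (logp(xp) - logp(x)) + (logq(x) - logq(xp))
        if math.log(rng.random() + 1e-300) < log_alpha:
            x = xp
        chain.append(x)
    return chain

# toy target: standard normal; surrogate: a deliberately offset Gaussian
logp = lambda x: -0.5 * x * x
logq = lambda x: -0.5 * ((x - 0.5) / 1.5) ** 2
chain = surrogate_mh(logp, lambda rng: rng.gauss(0.5, 1.5), logq, 0.0, 20_000)
```

    The better the surrogate matches the target, the higher the acceptance ratio and the shorter the autocorrelation time, which is exactly the gain the RBM provides.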

  4. Accelerate Monte Carlo Simulations with Restricted Boltzmann Machines

    CERN Document Server

    Huang, Li

    2016-01-01

    Despite their exceptional flexibility and popularity, Monte Carlo methods often suffer from slow mixing times for challenging statistical physics problems. We present a general strategy to overcome this difficulty by adopting ideas and techniques from the machine learning community. We fit the unnormalized probability of the physical model to a feedforward neural network and reinterpret the architecture as a restricted Boltzmann machine. Then, exploiting its feature detection ability, we utilize the restricted Boltzmann machine for efficient Monte Carlo updates to speed up the simulation of the original physical system. We implement these ideas for the Falicov-Kimball model and demonstrate an improved acceptance ratio and autocorrelation time near the phase transition point.

  5. A calibrated Franklin chimes

    Science.gov (United States)

    Gonta, Igor; Williams, Earle

    1994-05-01

    Benjamin Franklin devised a simple yet intriguing device to measure electrification in the atmosphere during conditions of foul weather. He constructed a system of bells, one of which was attached to a conductor that was suspended vertically above his house. The device is illustrated in a well-known painting of Franklin (Cohen, 1985). The elevated conductor acquired a potential due to the electric field in the atmosphere and caused a brass ball to oscillate between two bells. The purpose of this study is to extend Franklin's idea by constructing a set of 'chimes' which will operate both in fair and in foul weather conditions. In addition, a mathematical relationship will be established between the frequency of oscillation of a metallic sphere in a simplified geometry and the potential on one plate due to the electrification of the atmosphere. Thus it will be possible to calibrate the 'Franklin Chimes' and to obtain a nearly instantaneous measurement of the potential of the elevated conductor in both fair and foul weather conditions.

  6. Mercury Continuous Emission Monitor Calibration

    Energy Technology Data Exchange (ETDEWEB)

    John Schabron; Eric Kalberer; Ryan Boysen; William Schuster; Joseph Rovani

    2009-03-12

    Mercury continuous emissions monitoring systems (CEMs) are being implemented in over 800 coal-fired power plant stacks throughout the U.S. Western Research Institute (WRI) is working closely with the Electric Power Research Institute (EPRI), the National Institute of Standards and Technology (NIST), and the Environmental Protection Agency (EPA) to facilitate the development of the experimental criteria for a NIST traceability protocol for dynamic elemental mercury vapor calibrators/generators. These devices are used to calibrate mercury CEMs at power plant sites. The Clean Air Mercury Rule (CAMR), which was published in the Federal Register on May 18, 2005 and vacated by a Federal appeals court in early 2008, required that calibration be performed with NIST-traceable standards. Despite the vacatur, future mercury emissions regulations will require NIST-traceable calibration standards, and EPA does not want to interrupt the effort towards developing NIST traceability protocols. The traceability procedures will be defined by EPA. An initial draft traceability protocol was issued by EPA in May 2007 for comment. In August 2007, EPA issued a conceptual interim traceability protocol for elemental mercury calibrators. The protocol is based on the actual analysis of the output of each calibration unit at several concentration levels, ranging initially from about 2-40 µg/m³ elemental mercury, and in the future down to 0.2 µg/m³, and this analysis will be directly traceable to analyses by NIST. The EPA traceability protocol document is divided into two separate sections. The first deals with the qualification of calibrator models by the vendors for use in mercury CEM calibration. The second describes the procedure that the vendors must use to certify the calibrators that meet the qualification specifications. The NIST-traceable certification is performance based, traceable to analysis using isotope dilution inductively coupled plasma

  7. Digital background calibration of charge pump based pipelined ADC

    Science.gov (United States)

    Singh, Anil; Agarwal, Alpana

    2016-11-01

    This work presents digital background calibration of a charge-pump-based pipelined ADC. A 10-bit 100 MS/s pipelined ADC is designed in TSMC 0.18 µm CMOS technology operating on a 1.8 V supply. A power-efficient, opamp-less charge-pump technique is chosen to achieve the desired stage voltage gain of 2, and digital background calibration is used to correct the inter-stage gain error. After calibration, the ADC achieves an SNDR of 66.78 dB and an SFDR of 79.3 dB; DNL improves to +0.6/-0.4 LSB, and INL improves from +9.3/-9.6 LSB to within ±0.5 LSB, while consuming 16.53 mW of power.
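    The essence of digital gain calibration is to recombine the per-stage digits using the estimated inter-stage gains rather than the ideal radix. A behavioral sketch with a hypothetical 1-bit-per-stage pipeline (not the paper's 1.5-bit charge-pump circuit):

```python
def encode(v, gains):
    """Behavioral 1-bit-per-stage pipeline: each stage decides a digit and
    amplifies the residue by its (imperfect) inter-stage gain."""
    digits = []
    for g in gains:
        d = 1 if v >= 0 else -1
        digits.append(d)
        v = (v - 0.5 * d) * g  # residue amplified by the actual gain
    return digits

def decode(digits, gains):
    """Digital recombination: weight each digit by the *estimated* gains.
    Passing the ideal radix (2.0) here leaves the gain error uncorrected."""
    x, w = 0.0, 0.5
    for d, g in zip(digits, gains):
        x += d * w
        w /= g
    return x
```

    Decoding with the true gains leaves only the final-residue quantization error, while assuming the ideal radix of 2 leaves the accumulated inter-stage gain error in the output.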

  8. Radio interferometric gain calibration as a complex optimization problem

    CERN Document Server

    Smirnov, Oleg

    2015-01-01

    Recent developments in optimization theory have extended some traditional algorithms for least-squares optimization of real-valued functions (Gauss-Newton, Levenberg-Marquardt, etc.) into the domain of complex functions of a complex variable. This employs a formalism called the Wirtinger derivative, and derives a full-complex Jacobian counterpart to the conventional real Jacobian. We apply these developments to the problem of radio interferometric gain calibration, and show how the general complex Jacobian formalism, when combined with conventional optimization approaches, yields a whole new family of calibration algorithms, including those for the polarized and direction-dependent gain regime. We further extend the Wirtinger calculus to an operator-based matrix calculus for describing the polarized calibration regime. Using approximate matrix inversion results in computationally efficient implementations; we show that some recently proposed calibration algorithms such as StefCal and peeling can be understood...
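    The closed-form per-antenna update that StefCal exploits can be sketched on a noiseless point-source toy problem (a pedagogical sketch under the scalar model V_ij = g_i · conj(g_j) · M_ij, not the paper's operator formalism):

```python
def stefcal(V, M, n_ant, n_iter=500):
    """StefCal-style alternating solve for V_ij ~ g_i * conj(g_j) * M_ij:
    with all other gains frozen, each g_i has a closed-form least-squares
    update; averaging successive iterates damps the even/odd oscillation."""
    g = [1.0 + 0.0j] * n_ant
    for it in range(n_iter):
        g_old = list(g)
        g_new = []
        for i in range(n_ant):
            num, den = 0.0 + 0.0j, 0.0
            for j in range(n_ant):
                if i == j:
                    continue
                a = g_old[j].conjugate() * M[i][j]
                num += V[i][j] * a.conjugate()
                den += abs(a) ** 2
            g_new.append(num / den)
        if it % 2 == 1:  # average every other iteration (damping)
            g_new = [(x + y) / 2 for x, y in zip(g_new, g_old)]
        g = g_new
    return g

# toy problem: 4 antennas, unit point-source model, known true gains
true_g = [1.2 + 0.1j, 0.9 - 0.2j, 1.05 + 0.05j, 0.95 + 0.15j]
M = [[1.0 + 0.0j] * 4 for _ in range(4)]
V = [[true_g[i] * true_g[j].conjugate() for j in range(4)] for i in range(4)]
g = stefcal(V, M, 4)
```

    The recovered gains carry the usual global phase ambiguity, so the natural check is the visibility residual g_i · conj(g_j) − V_ij rather than the gains themselves.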

  9. Cosmology with gamma-ray bursts. I. The Hubble diagram through the calibrated Ep,i-Eiso correlation

    Science.gov (United States)

    Demianski, Marek; Piedipalumbo, Ester; Sawant, Disha; Amati, Lorenzo

    2017-02-01

    Context. Gamma-ray bursts (GRBs) are the most energetic explosions in the Universe and are detectable up to very high redshifts. They may therefore be used to study the expansion rate of the Universe and to investigate the observational properties of dark energy, provided that empirical correlations between spectral and intensity properties are appropriately calibrated. Aims: We used type Ia supernova (SN) luminosity distances to calibrate the correlation between the peak photon energy, Ep,i, and the isotropic equivalent radiated energy, Eiso, in GRBs. With this correlation, we tested the reliability of applying these phenomena to measure cosmological parameters and to obtain indications on the basic properties and evolution of dark energy. Methods: Using 162 GRBs with measured redshifts and spectra as of the end of 2013, we applied a local regression technique to calibrate the Ep,i-Eiso correlation against the type Ia SN data to build a calibrated GRB Hubble diagram. We tested the possible redshift dependence of the correlation and its effect on the Hubble diagram. Finally, we used the GRB Hubble diagram to investigate the dark energy equation of state (EOS). To accomplish this, we focused on the so-called Chevallier-Polarski-Linder (CPL) parametrization of the dark energy EOS and implemented the Markov chain Monte Carlo (MCMC) method to efficiently sample the space of cosmological parameters. Results: Our analysis shows once more that the Ep,i-Eiso correlation has no significant redshift dependence. High-redshift GRBs can therefore be used as a cosmological tool to determine the basic cosmological parameters and to test different models of dark energy in a redshift region unexplored by the SN Ia and baryon acoustic oscillation data. Our updated calibrated Hubble diagram of GRBs provides some marginal indication (at the 1σ level) of an evolving dark energy EOS. A significant enlargement of the GRB sample and improvements in the accuracy of

  10. Efficiency transfer using the GEANT4 code of CERN for HPGe gamma spectrometry.

    Science.gov (United States)

    Chagren, S; Ben Tekaya, M; Reguigui, N; Gharbi, F

    2016-01-01

    In this work we apply the GEANT4 code of CERN to calculate the peak efficiency in High-Purity Germanium (HPGe) gamma spectrometry using three different procedures. The first is a direct calculation. The second corresponds to the usual case of efficiency transfer between two different configurations at constant emission energy, assuming a reference point-source detection configuration. The third, a new procedure, consists of transferring the peak efficiency between two detection configurations with the gamma ray emitted at different energies, assuming a "virtual" reference point-source detection configuration. No pre-optimization of the detector's geometrical characteristics was performed before the transfer, in order to test the ability of the efficiency transfer to reduce the effect of their uncertain true values on the quality of the transferred efficiency. The calculated and measured efficiencies were found to be in good agreement for the two investigated transfer methods. This agreement shows that the Monte Carlo method, and the GEANT4 code in particular, is an efficient tool for obtaining accurate detection efficiency values. The second transfer procedure is useful for calibrating an HPGe gamma detector at any emission energy for a voluminous source, using the detection efficiency of one point source emitting at a different energy as the reference. The calculations performed in this work were applied to the measurement exercise of the EUROMET428 project, in which the full-energy peak efficiencies in the energy range 60-2000 keV were evaluated for a typical coaxial p-type HPGe detector and several types of source configuration: point sources located at various distances from the detector and a cylindrical box containing three matrices.
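    The transfer step itself is a simple rescaling: the measured reference efficiency corrects the simulated target efficiency, so modelling errors common to both geometries largely cancel. A sketch (the numbers in the test values are hypothetical):

```python
def transfer_efficiency(eff_ref_meas, eff_ref_mc, eff_target_mc):
    """Efficiency transfer: the measured reference efficiency rescales the
    simulated target efficiency, so systematic modelling errors common to
    both geometries (e.g. an uncertain dead layer) largely cancel:
        eff_target = eff_target_mc * (eff_ref_meas / eff_ref_mc)"""
    return eff_target_mc * (eff_ref_meas / eff_ref_mc)
```

    In the "virtual reference" variant described above, the reference and target efficiencies are simply evaluated at different emission energies before applying the same ratio.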

  11. Seepage Calibration Model and Seepage Testing Data

    Energy Technology Data Exchange (ETDEWEB)

    P. Dixon

    2004-02-17

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM is developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA (see upcoming REV 02 of CRWMS M&O 2000 [153314]), which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model (see BSC 2003 [161530]). 
The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross Drift to obtain the permeability structure for the seepage model; (3) to use inverse modeling to calibrate the SCM and to estimate seepage-relevant, model-related parameters on the drift scale; (4) to estimate the epistemic uncertainty of the derived parameters, based on the goodness-of-fit to the observed data and the sensitivity of calculated seepage with respect to the parameters of interest; (5) to characterize the aleatory uncertainty of

  12. Mexican national pyranometer network calibration

    Science.gov (United States)

    Valdes, M.; Villarreal, L.; Estevez, H.; Riveros, D.

    2013-12-01

    In order to take advantage of solar radiation as an alternative energy source, it is necessary to evaluate its spatial and temporal availability. The Mexican National Meteorological Service (SMN) has a network of 136 meteorological stations, each equipped with a pyranometer for measuring global solar radiation. Some of these stations had not been calibrated in several years. The Mexican Department of Energy (SENER), in order to have a reliable evaluation of the solar resource, funded this project to calibrate the SMN pyranometer network and validate the data. Calibration of the 136 pyranometers by the intercomparison method recommended by the World Meteorological Organization (WMO) requires lengthy observations and specific environmental conditions, such as clear skies and a stable atmosphere, circumstances that determine the site and season of the calibration. The Solar Radiation Section of the Instituto de Geofísica of the Universidad Nacional Autónoma de México is a Regional Center of the WMO and is certified to carry out the calibration procedures and issue certificates. We are responsible for the recalibration of the SMN pyranometer network. A continuous-emission solar simulator with a 30 cm diameter exposure area was acquired to reduce the calibration time and remove the dependence on atmospheric conditions. We present the results of the calibration of 10 thermopile pyranometers and one photovoltaic cell by the intercomparison method, with more than 10000 observations each, and those obtained with the solar simulator.
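    The intercomparison reduction amounts to a least-squares sensitivity forced through the origin: the ratio of summed cross-products of the test-instrument signal and the reference irradiance. A sketch (a simplified version of the WMO-style reduction; the synthetic values in the test are hypothetical):

```python
def sensitivity(volts, ref_irradiance):
    """Intercomparison calibration sketch: pyranometer sensitivity k
    (V per W/m^2) from paired readings against the reference instrument,
    least squares with the line forced through the origin:
        k = sum(V * G) / sum(G * G)"""
    num = sum(v * g for v, g in zip(volts, ref_irradiance))
    den = sum(g * g for g in ref_irradiance)
    return num / den
```

    With thousands of simultaneous observations, as in the campaign described above, this ratio averages out short-term atmospheric fluctuations.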

  13. El lenguaje de Carlos Alonso

    Directory of Open Access Journals (Sweden)

    Bárbara Bustamante

    2005-10-01

    Full Text Available The talent of Carlos Alonso (Argentina, 1929) has succeeded in forging a language with a style of its own. His drawings, paintings, pastels and inks, collages and engravings fixed the projection of his subjectivity in the visual field. Both image and word express a critical vision of reality that challenges viewers, forcing them into a reflective stance committed to the message; this is the aspect most emphasized by art historians. The present study, however, aims to focus on the iconic and plastic aspects of his work.

  14. The wall correction factor for a spherical ionization chamber used in brachytherapy source calibration

    Energy Technology Data Exchange (ETDEWEB)

    Piermattei, A [Istituto di Fisica, Universita Cattolica S Cuore, Rome (Italy); Azario, L [Istituto di Fisica, Universita Cattolica S Cuore, Rome (Italy); Fidanzio, A [Istituto di Fisica, Universita Cattolica S Cuore, Rome (Italy); Viola, P [Istituto di Fisica, Universita Cattolica S Cuore, Rome (Italy); Dell' Omo, C [Istituto di Fisica, Universita Cattolica S Cuore, Rome (Italy); Iadanza, L [Centro di Riferimento Oncologico della Basilicata-Rionero in Vulture, Pz (Italy); Fusco, V [Centro di Riferimento Oncologico della Basilicata-Rionero in Vulture, Pz (Italy); Lagares, J I [Universidad de Sevilla, Facultad de Medicina, Dpto Fisiologia Medica y Biofisica, Sevilla (Spain); Capote, R [Universidad de Sevilla, Facultad de Medicina, Dpto Fisiologia Medica y Biofisica, Sevilla (Spain)

    2003-12-21

    The effect of chamber-wall attenuation and scattering is one of the most important corrections that must be determined when the linear interpolation method between two calibration factors of an ionization chamber is used. For spherical ionization chambers the corresponding correction factors A_w have to be determined from the non-linear trend of the response as a function of wall thickness. The Monte Carlo and experimental data reported here show that the A_w factors obtained for an Exradin A4 chamber, used in brachytherapy source calibration in terms of reference air-kerma rate, are up to 1.2% greater than the values obtained by the linear extrapolation method for the studied beam qualities. Using the A_w factors derived from Monte Carlo calculations, the accuracy of the calibration factor N_K,Ir for the Exradin A4, obtained by interpolation between two calibration factors, improves by about 0.6%. The discrepancy between the new calculated factor and that obtained using the complete calibration curve of the ion chamber and the ¹⁹²Ir spectrum is only 0.1%.
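    The difference between linear and non-linear extrapolation to zero wall thickness can be illustrated with a small sketch (synthetic data with an assumed quadratic wall response, not the paper's measurements):

```python
def linear_fit_at_zero(t, r):
    """Least-squares straight line through the (thickness, response) pairs,
    evaluated at zero wall thickness."""
    n = len(t)
    st, sr = sum(t), sum(r)
    stt = sum(x * x for x in t)
    strr = sum(x * y for x, y in zip(t, r))
    slope = (n * strr - st * sr) / (n * stt - st * st)
    return (sr - slope * st) / n

def quadratic_at_zero(t, r):
    """Exact quadratic through three (thickness, response) points
    (Lagrange form), evaluated at zero wall thickness."""
    (t0, t1, t2), (r0, r1, r2) = t, r
    return (r0 * t1 * t2 / ((t0 - t1) * (t0 - t2))
            + r1 * t0 * t2 / ((t1 - t0) * (t1 - t2))
            + r2 * t0 * t1 / ((t2 - t0) * (t2 - t1)))
```

    On a response with any curvature, the straight-line extrapolation under- or over-shoots the zero-thickness value, which is exactly the percent-level bias in A_w that the abstract reports.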

  15. An introduction to Monte Carlo methods

    NARCIS (Netherlands)

    Walter, J. -C.; Barkema, G. T.

    2015-01-01

    Monte Carlo simulations are methods for simulating statistical systems. The aim is to generate a representative ensemble of configurations to access thermodynamical quantities without the need to solve the system analytically or to perform an exact enumeration. The main principles of Monte Carlo simulations
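    A canonical first example of the approach described above is the hit-or-miss estimate of π, which shows the 1/sqrt(N) statistical convergence common to all Monte Carlo estimators:

```python
import random

def estimate_pi(n, seed=0):
    """Hit-or-miss Monte Carlo: the fraction of uniform points in the unit
    square that land inside the quarter circle estimates pi/4, with a
    statistical error shrinking like 1/sqrt(n)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 < 1.0)
    return 4.0 * hits / n
```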

  16. Challenges of Monte Carlo Transport

    Energy Technology Data Exchange (ETDEWEB)

    Long, Alex Roberts [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-10

    These are slides from a presentation for Parallel Summer School at Los Alamos National Laboratory. Solving discretized partial differential equations (PDEs) of interest can require a large number of computations. We can identify concurrency to allow parallel solution of discrete PDEs. Simulated particle histories can be used to solve the Boltzmann transport equation. Particle histories are independent in neutral particle transport, making them amenable to parallel computation. Physical parameters and method type determine the data dependencies of particle histories. Data requirements shape parallel algorithms for Monte Carlo. Then, Parallel Computational Physics and Parallel Monte Carlo are discussed and, finally, the results are given. The mesh passing method greatly simplifies the IMC implementation and allows simple load-balancing. Using MPI windows and passive, one-sided RMA further simplifies the implementation by removing target synchronization. The author is very interested in implementations of PGAS that may allow further optimization for one-sided, read-only memory access (e.g. OpenSHMEM). The MPICH_RMA_OVER_DMAPP option and library is required to make one-sided messaging scale on Trinitite - Moonlight scales poorly. Interconnect-specific libraries or functions are likely necessary to ensure performance. BRANSON has been used to directly compare the current standard method to a proposed method on idealized problems. The mesh passing algorithm performs well on problems that are designed to show the scalability of the particle passing method. BRANSON can now run load-imbalanced, dynamic problems. Potential avenues of improvement in the mesh passing algorithm will be implemented and explored. A suite of test problems that stress DD methods will elucidate a possible path forward for production codes.

  17. Comparative evaluation of photon cross section libraries for materials of interest in PET Monte Carlo simulations

    CERN Document Server

    Zaidi, H

    1999-01-01

    The many applications of Monte Carlo modelling in nuclear medicine imaging make it desirable to increase the accuracy and computational speed of Monte Carlo codes. The accuracy of Monte Carlo simulations strongly depends on the accuracy of the probability functions and thus on the cross section libraries used for photon transport calculations. A comparison between different photon cross section libraries and parametrizations implemented in Monte Carlo simulation packages developed for positron emission tomography and the most recent Evaluated Photon Data Library (EPDL97) developed by the Lawrence Livermore National Laboratory was performed for several human tissues and common detector materials for energies from 1 keV to 1 MeV. Different photon cross section libraries and parametrizations show quite large variations compared to the EPDL97 coefficients. The latter library is more accurate and was carefully designed in the form of look-up tables providing efficient data storage, access, and management. Toge...

  18. Carlos Restrepo. Un verdadero Maestro

    Directory of Open Access Journals (Sweden)

    Pelayo Correa

    2009-12-01

    Full Text Available Carlos Restrepo was the first professor of Pathology and an illustrious member of the group of pioneers who founded the Faculty of Medicine of the Universidad del Valle. These pioneers converged on Cali in the 1950s, possessed of a renovating and creative spirit that undertook, with great success, the task of changing the academic culture of the Valle del Cauca. They found a placid society that enjoyed the generosity of its surroundings, with no desire to break with centuries-old traditions of a simple and contented way of life. When the children had the desire and the ability to pursue university studies, especially in medicine, the family sent them to cooler climates, which supposedly favored brain function and the accumulation of knowledge. The pioneers of medical education in the Valle del Cauca, largely recruited from national and foreign universities, knew very well that the local environment was no obstacle to a first-class university education. Carlos Restrepo was the prototype of this spirit of change and of the intellectual formation of the new generations. He showed it in many ways, not least through his cheerful, extroverted, optimistic character and his easy, contagious laugh. But this amiable side of his personality did not obscure his formative mission: he demanded dedication and hard work from his students, faithfully recorded in memorable caricatures that exaggerated his occasionally explosive temper. The group of pioneers devoted themselves fully (full time and exclusive dedication) and organized the new Faculty into well-defined and structured departments: Anatomy, Biochemistry, Physiology, Pharmacology, Pathology, Internal Medicine, Surgery, Obstetrics and Gynecology, Psychiatry, and Preventive Medicine. The departments integrated their primary functions of teaching, research, and community service. The center

  19. Jet energy calibration in ATLAS

    CERN Document Server

    Schouten, Doug

    A correct energy calibration for jets is essential to the success of the ATLAS experiment. In this thesis I study a method for deriving an in situ jet energy calibration for the ATLAS detector. In particular, I show the applicability of the missing transverse energy projection fraction method. This method is shown to set the correct mean energy for jets. Pileup effects due to the high luminosities at ATLAS are also studied. I study the correlations in lateral distributions of pileup energy, as well as the luminosity dependence of the in situ calibration method.

  20. Calibrating System for Vacuum Gauges

    Institute of Scientific and Technical Information of China (English)

    MengJun; YangXiaotian; HaoBinggan; HouShengjun; HuZhenjun

    2003-01-01

    In order to measure the vacuum degree, many vacuum gauges will be used in the CSR vacuum system. We have bought several types of vacuum gauges. Different types of vacuum gauges, and even gauges of the same type, give different readings under the same conditions, so they must be calibrated. But it is impractical for us to send so many gauges to an external calibration station because of the high price, so the best choice is to build a second-class calibration station for vacuum gauges ourselves (Fig. 1).

  1. Retrodirective Radar Calibration Nanosatellite

    Science.gov (United States)

    2013-07-01

    Triple Junction solar cells with 28.3% efficiency. Power is regulated and distributed using various Maxim and Texas Instruments (TI) components such as...mission. On its nadir-facing side is an Antenna Development Corporation quadrifilar helix antenna used for receiving and transmitting RF pulse signals...orbital model is made from processing this GPS data, which is then made available to all RADCAL users, including the original range requesting

  2. Rare event simulation using Monte Carlo methods

    CERN Document Server

    Rubino, Gerardo

    2009-01-01

    In a probabilistic model, a rare event is an event with a very small probability of occurrence. Forecasting rare events is a formidable task but is important in many areas: for instance, a catastrophic failure in a transport system or in a nuclear power plant, or the failure of an information processing system in a bank or in the communication network of a group of banks, can lead to large financial losses. Being able to evaluate the probability of rare events is therefore a critical issue. Monte Carlo methods, the simulation of corresponding models, are used to analyze rare events. This book sets out to present the mathematical tools available for the efficient simulation of rare events. Importance sampling and splitting are presented along with an exposition of how to apply these tools to a variety of fields, ranging from performance and dependability evaluation of complex systems, typically in computer science or in telecommunications, to chemical reaction analysis in biology or particle transport in physics. ...
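    Importance sampling, the first of the two tools mentioned, can be illustrated on a Gaussian tail probability: sampling from a distribution shifted onto the rare set and reweighting by the likelihood ratio turns an event that naive sampling almost never sees into one hit about half the time (a textbook sketch, not an example from the book):

```python
import math
import random

def gauss_tail_is(a, n=200_000, seed=1):
    """Importance sampling for the rare event p = P(X > a), X ~ N(0,1):
    draw from the shifted proposal N(a, 1), which lands in the rare set
    about half the time, and reweight each hit by the likelihood ratio
    phi(x) / phi(x - a) = exp(a^2/2 - a*x)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(a, 1.0)
        if x > a:
            total += math.exp(0.5 * a * a - a * x)
    return total / n
```

    For a = 4 the true value is Phi(-4) ≈ 3.17e-5; a naive estimator with the same sample size would see only a handful of hits, while the shifted estimator reaches sub-percent relative error.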

  3. Atomistic Monte Carlo Simulation of Lipid Membranes

    Directory of Open Access Journals (Sweden)

    Daniel Wüstner

    2014-01-01

    Full Text Available Biological membranes are complex assemblies of many different molecules, whose analysis demands a variety of experimental and computational approaches. In this article, we explain the challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction to the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate for a concrete example how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches. We use our recently devised chain breakage/closure (CBC) local move set in the bond-/torsion-angle space with the constant-bond-length approximation (CBLA) for the phospholipid dipalmitoylphosphatidylcholine (DPPC). We demonstrate rapid conformational equilibration for a single DPPC molecule, as assessed by calculation of molecular energies and entropies. We also show the transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head-group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol.

  4. Camera calibration method of binocular stereo vision based on OpenCV

    Science.gov (United States)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented that improves the process by obtaining higher precision and efficiency. First, the camera model in OpenCV and an algorithm for camera calibration are presented, with particular attention to the influence of radial and decentering lens distortion. Then, a camera calibration procedure is designed to compute the camera parameters and calculate the calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners are also used in this part. Finally, the results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results meet the requirements of robot binocular stereo vision.

  5. Monte Carlo simulation of the LENA detector system

    Energy Technology Data Exchange (ETDEWEB)

    Howard, C., E-mail: choward@unc.edu [Department of Physics and Astronomy, University of North Carolina, Chapel Hill, NC 27599-3255 (United States); Triangle Universities Nuclear Laboratory, Durham, NC 27708-0308 (United States); Iliadis, C.; Champagne, A.E. [Department of Physics and Astronomy, University of North Carolina, Chapel Hill, NC 27599-3255 (United States); Triangle Universities Nuclear Laboratory, Durham, NC 27708-0308 (United States)

    2013-11-21

    Many nuclear astrophysics experiments use the singles energy spectrum to measure nuclear cross-sections. It has been shown in previous publications that the use of a high-purity germanium (HPGe) detector and a NaI(Tl) annulus in coincidence can lower the background, allowing the measurement of smaller cross-sections. In our previous work, our simulation was only capable of determining relative full-energy peak efficiencies. Here, we present work which extends our simulation so that we can predict absolute efficiencies as well as both coincidence gate efficiencies. We first show that the full-energy peak and total-energy singles efficiencies of our HPGe detector simulation agree well with calibration data. We then present the full-energy peak and total-energy efficiencies for the NaI(Tl) annulus. Results are presented for our coincidence efficiencies, using three examples: a comparison to the decay of the 151 keV resonance in the {sup 18}O(p, γ){sup 19}F reaction, a {sup 22}Na point-like calibration source, and {sup 26}Al nuclei distributed in a meteorite fragment. In each case, we present a comparison of data to the simulation and show that, within our uncertainties, we can accurately simulate our measured intensities. -- Highlights: •We create a simulation of our HPGe detector and NaI annulus. •We compare our model to various calibration sources. •We compare energy gating using the simulation. •The simulation predicts efficiencies as observed in the data.

  6. Monte-Carlo-simulation for measuring the radioactivity of waste material to optimize the accuracy of measurement; Monte-Carlo-Simulationsrechnungen zur Aktivitaetsbestimmung des Messgutes in Freimessanlagen zur Optimierung der Messgenauigkeit

    Energy Technology Data Exchange (ETDEWEB)

    Weggen, J.; Simiae, S.; Breckow, J. [Fachhochschule Giessen-Friedberg (DE). Inst. fuer Medizinische Physik und Strahlenschutz (IMPS)

    2009-07-01

    Associated with the dismantling of nuclear power plants is the production of a huge mass of radioactive waste material. This waste must be monitored in order to determine whether or not it can be released for exemption or clearance. In practice, the total-gamma measuring method is frequently used in order to achieve a high mass flow. Calibrating the measuring system requires considerable effort. In this paper a new approach is presented: simulating the geometry calibration with a computer program. The software EGSnrc uses Monte Carlo algorithms to simulate particle and photon transport within matter. By means of this program it is possible to calculate calibration factors which characterize the energy absorption of the measured material. The results of the simulation are plausible, so it should be possible to substitute the computer simulation for the practical calibration method. Further investigations are required, e.g. a comparison with conventional calibration methods, to consolidate the presented method. (orig.)
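
    As a much simplified illustration of the kind of photon-transport sampling such codes perform (EGSnrc itself models scattering, secondary particles and full geometry), the toy model below samples only exponential free paths through a slab and treats every interaction as absorption; the attenuation coefficient and thickness are assumed values:

```python
import math
import random

def absorbed_fraction(mu, thickness, n_photons=100000, seed=1):
    """Toy Monte Carlo: photons enter a slab at normal incidence; the
    free path is exponentially distributed with attenuation coefficient
    mu (1/cm). A photon interacting inside the slab counts as absorbed
    (no scattering, no secondaries)."""
    rng = random.Random(seed)
    absorbed = 0
    for _ in range(n_photons):
        # inverse-transform sampling of the exponential free path;
        # 1 - U avoids log(0) since random() is in [0, 1)
        path = -math.log(1.0 - rng.random()) / mu
        if path < thickness:
            absorbed += 1
    return absorbed / n_photons
```

    For this trivial geometry the result converges to the analytic value 1 - exp(-mu*t), which is a convenient sanity check before trusting a simulation on geometries without closed-form answers.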

  7. The role of research efficiency in the evolution of scientific productivity and impact: An agent-based model

    Energy Technology Data Exchange (ETDEWEB)

    You, Zhi-Qiang [Alibaba Research Center for Complexity Sciences, Hangzhou Normal University, Hangzhou 311121 (China); Institute of Information Economy and Alibaba Business College, Hangzhou Normal University, Hangzhou 311121 (China); Han, Xiao-Pu, E-mail: xp@hznu.edu.cn [Alibaba Research Center for Complexity Sciences, Hangzhou Normal University, Hangzhou 311121 (China); Institute of Information Economy and Alibaba Business College, Hangzhou Normal University, Hangzhou 311121 (China); Hadzibeganovic, Tarik, E-mail: tarik.hadzibeganovic@gmail.com [Department of Psychology, University of Graz, 8010 Graz (Austria)

    2016-02-22

    We introduce an agent-based model to investigate the effects of production efficiency (PE) and hot field tracing capability (HFTC) on productivity and impact of scientists embedded in a competitive research environment. Agents compete to publish and become cited by occupying the nodes of a citation network calibrated by real-world citation datasets. Our Monte-Carlo simulations reveal that differences in individual performance are strongly related to PE, whereas HFTC alone cannot provide sustainable academic careers under intensely competitive conditions. Remarkably, the negative effect of high competition levels on productivity can be buffered by elevated research efficiency if simultaneously HFTC is sufficiently low. - Highlights: • We study the role of production efficiency (PE) and research topic selectivity in the evolution of performance in academia. • In our model, agents compete to publish and become cited by occupying the nodes of an artificial citation network. • Our agent-based model is calibrated by using datasets from the APS journals and the arxiv.org online preprint repository. • Individual performance is strongly affected by PE, whereas topic selectivity cannot significantly enhance academic success. • With even minimal reductions of research efficiency gaps, fairly profound boosts of scientific careers can be achieved.

  8. Quantum Monte Carlo with directed loops.

    Science.gov (United States)

    Syljuåsen, Olav F; Sandvik, Anders W

    2002-10-01

    We introduce the concept of directed loops in stochastic series expansion and path-integral quantum Monte Carlo methods. Using the detailed balance rules for directed loops, we show that it is possible to smoothly connect generally applicable simulation schemes (in which it is necessary to include backtracking processes in the loop construction) to more restricted loop algorithms that can be constructed only for a limited range of Hamiltonians (where backtracking can be avoided). The "algorithmic discontinuities" between general and special points (or regions) in parameter space can hence be eliminated. As a specific example, we consider the anisotropic S=1/2 Heisenberg antiferromagnet in an external magnetic field. We show that directed-loop simulations are very efficient for the full range of magnetic fields (zero to the saturation point) and anisotropies. In particular, for weak fields and anisotropies, the autocorrelations are significantly reduced relative to those of previous approaches. The backtracking probability vanishes continuously as the isotropic Heisenberg point is approached. For the XY model, we show that backtracking can be avoided for all fields extending up to the saturation field. The method is hence particularly efficient in this case. We use directed-loop simulations to study the magnetization process in the two-dimensional Heisenberg model at very low temperatures. For L×L lattices with L up to 64, we utilize the step structure in the magnetization curve to extract gaps between different spin sectors. Finite-size scaling of the gaps gives an accurate estimate of the transverse susceptibility in the thermodynamic limit: χ⊥ = 0.0659 ± 0.0002.

  9. kmos: A lattice kinetic Monte Carlo framework

    Science.gov (United States)

    Hoffmann, Max J.; Matera, Sebastian; Reuter, Karsten

    2014-07-01

    Kinetic Monte Carlo (kMC) simulations have emerged as a key tool for microkinetic modeling in heterogeneous catalysis and other materials applications. Systems, where site-specificity of all elementary reactions allows a mapping onto a lattice of discrete active sites, can be addressed within the particularly efficient lattice kMC approach. To this end we describe the versatile kmos software package, which offers a most user-friendly implementation, execution, and evaluation of lattice kMC models of arbitrary complexity in one- to three-dimensional lattice systems, involving multiple active sites in periodic or aperiodic arrangements, as well as site-resolved pairwise and higher-order lateral interactions. Conceptually, kmos achieves a maximum runtime performance which is essentially independent of lattice size by generating code for the efficiency-determining local update of available events that is optimized for a defined kMC model. For this model definition and the control of all runtime and evaluation aspects kmos offers a high-level application programming interface. Usage proceeds interactively, via scripts, or a graphical user interface, which visualizes the model geometry, the lattice occupations and rates of selected elementary reactions, while allowing on-the-fly changes of simulation parameters. We demonstrate the performance and scaling of kmos with the application to kMC models for surface catalytic processes, where for given operation conditions (temperature and partial pressures of all reactants) central simulation outcomes are catalytic activity and selectivities, surface composition, and mechanistic insight into the occurrence of individual elementary processes in the reaction network.
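
    The efficiency-determining step described above (pick one enabled event with probability proportional to its rate, execute it, and advance the clock by an exponentially distributed increment) can be sketched independently of kmos for a toy 1-D adsorption/desorption lattice; all rates and sizes here are assumed values, and a real package replaces the linear event scan with the optimized local updates the abstract mentions:

```python
import math
import random

def kmc_adsorption(n_sites=50, k_ads=1.0, k_des=0.5, t_end=50.0, seed=7):
    """Minimal rejection-free lattice kMC (variable-step-size method)
    for adsorption/desorption on a 1-D lattice of independent sites.
    Returns the final coverage (occupied fraction)."""
    rng = random.Random(seed)
    occ = [False] * n_sites
    t = 0.0
    while t < t_end:
        # enumerate all enabled events with their rates
        events = [(i, k_des if occ[i] else k_ads) for i in range(n_sites)]
        total = sum(r for _, r in events)
        # advance the clock by an exponential waiting time
        t += -math.log(1.0 - rng.random()) / total
        # select one event with probability proportional to its rate
        u = rng.random() * total
        for i, r in events:
            u -= r
            if u <= 0.0:
                occ[i] = not occ[i]  # execute adsorption or desorption
                break
    return sum(occ) / n_sites
```

    For independent sites the equilibrium coverage is k_ads/(k_ads + k_des); lateral interactions, as supported by kmos, would make the desorption rate depend on the neighborhood.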

  10. Calibration of "Babyline" RP instruments

    CERN Multimedia

    2015-01-01

      If you have old RP instrumentation of the “Babyline” type, as shown in the photo, please contact the Radiation Protection Group (Joffrey Germa, 73171) to have the instrument checked and calibrated. Thank you. Radiation Protection Group

  11. Field calibration of cup anemometers

    Energy Technology Data Exchange (ETDEWEB)

    Kristensen, L.; Jensen, G.; Hansen, A.; Kirkegaard, P.

    2001-01-01

    An outdoor calibration facility for cup anemometers, where the signals from 10 anemometers of which at least one is a reference can be recorded simultaneously, has been established. The results are discussed with special emphasis on the statistical significance of the calibration expressions. It is concluded that the method has the advantage that many anemometers can be calibrated accurately with a minimum of work and cost. The obvious disadvantage is that the calibration of a set of anemometers may take more than one month in order to have wind speeds covering a sufficiently large magnitude range in a wind direction sector where we can be sure that the instruments are exposed to identical, simultaneous wind flows. Another main conclusion is that statistical uncertainty must be carefully evaluated since the individual 10 minute wind-speed averages are not statistically independent. (au)
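
    The closing point, that correlated 10-minute averages carry less statistical information than independent samples, can be quantified with an effective sample size; this sketch (an AR(1)-style approximation, not taken from the paper) estimates it from the lag-1 autocorrelation:

```python
def effective_n(samples):
    """Estimate the effective number of independent samples in a
    correlated series, using the AR(1) approximation
    n_eff = n * (1 - rho) / (1 + rho) with rho the lag-1
    autocorrelation (clipped to [0, 0.999])."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    if var == 0.0:
        return float(n)
    cov1 = sum((samples[i] - mean) * (samples[i + 1] - mean)
               for i in range(n - 1)) / (n - 1)
    rho = max(min(cov1 / var, 0.999), 0.0)
    return n * (1.0 - rho) / (1.0 + rho)
```

    Dividing the sample variance of the calibration residuals by n_eff instead of n gives the kind of corrected uncertainty the authors argue is necessary.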

  12. K X-ray fluorescent source for energy-channel calibration of the spectrometer

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A new K X-ray fluorescent source for calibrating the X or γ-ray multichannel analyzer spectrometer is introduced. A detailed description of the K fluorescent source device is given. The calibration method used and experimental results obtained are presented. The purity and efficiency of K fluorescence photons from this device are discussed. This new fluorescent source may be used as a substitute for radioactive nuclides for the energy-channel calibration of some MCA spectrometers.

  13. Infrasound Sensor Calibration and Response

    Science.gov (United States)

    2012-09-01

    functions with faster rise times. SUMMARY: We have documented past work on the determination of the calibration constant of the LANL infrasound sensor... Los Alamos National Laboratory (LANL) has operated an infrasound sensor calibration chamber that operates over a frequency range of 0.02 to 4 Hz. This chamber has

  14. Pressures Detector Calibration and Measurement

    CERN Document Server

    AUTHOR|(CDS)2156315

    2016-01-01

    This is a report on the first two of my three projects in NA61. I took data and analysed it in order to calibrate the pressure detectors, and then verified the calibration. The analysis was done with the ROOT software using the C++ programming language. The first part of the project was the determination of the calibration factor of the pressure sensors. Based on that result, I examined the relation between the pressure drop, the gas flow rate through a paper filter, and the filter's diameter.

  15. Beam Imaging and Luminosity Calibration

    CERN Document Server

    Klute, Markus; Salfeld-Nebgen, Jakob

    2016-01-01

    We discuss a method to reconstruct two-dimensional proton bunch densities using vertex distributions accumulated during LHC beam-beam scans. The $x$-$y$ correlations in the beam shapes are studied and an alternative luminosity calibration technique is introduced. We demonstrate the method on simulated beam-beam scans and estimate the uncertainty on the luminosity calibration associated to the beam-shape reconstruction to be below 1%.
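
    For the special case of uncorrelated (factorizable) Gaussian bunch profiles, the beam-overlap integral behind such a luminosity calibration has a closed form per transverse axis; the sketch below computes it (the paper's point is precisely that real beams have x-y correlations that break this factorizable baseline):

```python
import math

def overlap_integral(sigma1, sigma2):
    """Overlap integral of two head-on 1-D Gaussian bunch profiles:
    integral of g1(x) * g2(x) dx = 1 / sqrt(2*pi*(s1^2 + s2^2))."""
    return 1.0 / math.sqrt(2.0 * math.pi * (sigma1**2 + sigma2**2))

def specific_luminosity(sx1, sx2, sy1, sy2):
    """Luminosity per crossing per unit intensity product, assuming
    the density factorizes into independent x and y Gaussians."""
    return overlap_integral(sx1, sx2) * overlap_integral(sy1, sy2)
```

    Comparing this factorizable estimate with the value obtained from the reconstructed two-dimensional densities is one way to expose the bias that the beam-imaging method corrects.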

  16. Essay on Option Pricing, Hedging and Calibration

    DEFF Research Database (Denmark)

    da Silva Ribeiro, André Manuel

    Quantitative finance is concerned with applying mathematics to financial markets. This thesis is a collection of essays that study different problems in this field: How efficient are option price approximations for calibrating a stochastic volatility model? (Chapter 2) How different is the discretely sampled realized variance from the continuously sampled realized variance? (Chapter 3) How can we do static hedging for a payoff with two assets? (Chapter 4) Can we apply fast Fourier transform methods to efficiently use interest rate Markov-functional models? Can we extend them to accommodate other types... variance. We investigated the impact of their assumptions and we present an adjustment to their formula. Our adjustment provides a better approximation for pricing discretely sampled realized variance options under different market scenarios. Static Hedging for Two-Asset Options: In this paper we derive

  17. The ATLAS Inner Detector commissioning and calibration

    CERN Document Server

    Aad, Georges; Abdallah, Jalal; Abdelalim, Ahmed Ali; Abdesselam, Abdelouahab; Abdinov, Ovsat; Abi, Babak; Abolins, Maris; Abramowicz, Halina; Abreu, Henso; Acharya, Bobby Samir; Adams, David; Addy, Tetteh; Adelman, Jahred; Adorisio, Cristina; Adragna, Paolo; Adye, Tim; Aefsky, Scott; Aguilar-Saavedra, Juan Antonio; Aharrouche, Mohamed; Ahlen, Steven; Ahles, Florian; Ahmad, Ashfaq; Ahsan, Mahsana; Aielli, Giulio; Akdogan, Taylan; Åkesson, Torsten Paul Ake; Akimoto, Ginga; Akimov , Andrei; Aktas, Adil; Alam, Mohammad; Alam, Muhammad Aftab; Albrand, Solveig; Aleksa, Martin; Aleksandrov, Igor; Alexa, Calin; Alexander, Gideon; Alexandre, Gauthier; Alexopoulos, Theodoros; Alhroob, Muhammad; Aliev, Malik; Alimonti, Gianluca; Alison, John; Aliyev, Magsud; Allport, Phillip; Allwood-Spiers, Sarah; Almond, John; Aloisio, Alberto; Alon, Raz; Alonso, Alejandro; Alviggi, Mariagrazia; Amako, Katsuya; Amelung, Christoph; Amorim, Antonio; Amorós, Gabriel; Amram, Nir; Anastopoulos, Christos; Andeen, Timothy; Anders, Christoph Falk; Anderson, Kelby; Andreazza, Attilio; Andrei, George Victor; Anduaga, Xabier; Angerami, Aaron; Anghinolfi, Francis; Anjos, Nuno; Annovi, Alberto; Antonaki, Ariadni; Antonelli, Mario; Antonelli, Stefano; Antos, Jaroslav; Antunovic, Bijana; Anulli, Fabio; Aoun, Sahar; Arabidze, Giorgi; Aracena, Ignacio; Arai, Yasuo; Arce, Ayana; Archambault, John-Paul; Arfaoui, Samir; Arguin, Jean-Francois; Argyropoulos, Theodoros; Arik, Metin; Armbruster, Aaron James; Arnaez, Olivier; Arnault, Christian; Artamonov, Andrei; Arutinov, David; Asai, Makoto; Asai, Shoji; Silva, José; Asfandiyarov, Ruslan; Ask, Stefan; Åsman, Barbro; Asner, David; Asquith, Lily; Assamagan, Ketevi; Astvatsatourov, Anatoli; Atoian, Grigor; Auerbach, Benjamin; Augsten, Kamil; Aurousseau, Mathieu; Austin, Nicholas; Avolio, Giuseppe; Avramidou, Rachel Maria; Ay, Cano; Azuelos, Georges; Azuma, Yuya; Baak, Max; Bach, Andre; Bachacou, Henri; Bachas, Konstantinos; Backes, Moritz; Badescu, Elisabeta; 
Bagnaia, Paolo; Bai, Yu; Bain, Travis; Baines, John; Baker, Mark; Baker, Oliver Keith; Baker, Sarah; Baltasar Dos Santos Pedrosa, Fernando; Banas, Elzbieta; Banerjee, Piyali; Banerjee, Swagato; Banfi, Danilo; Bangert, Andrea Michelle; Bansal, Vikas; Baranov, Sergei; Barashkou, Andrei; Barber, Tom; Barberio, Elisabetta Luigia; Barberis, Dario; Barbero, Marlon; Bardin, Dmitri; Barillari, Teresa; Barisonzi, Marcello; Barklow, Timothy; Barlow, Nick; Barnett, Bruce; Barnett, Michael; Baroncelli, Antonio; Barr, Alan; Barreiro, Fernando; Barreiro Guimarães da Costa, João; Barrillon, Pierre; Bartoldus, Rainer; Bartsch, Detlef; Bates, Richard; Batkova, Lucia; Batley, Richard; Battaglia, Andreas; Battistin, Michele; Bauer, Florian; Bawa, Harinder Singh; Bazalova, Magdalena; Beare, Brian; Beau, Tristan; Beauchemin, Pierre-Hugues; Beccherle, Roberto; Bechtle, Philip; Beck, Graham; Beck, Hans Peter; Beckingham, Matthew; Becks, Karl-Heinz; Beddall, Ayda; Beddall, Andrew; Bednyakov, Vadim; Bee, Christopher; Begel, Michael; Behar Harpaz, Silvia; Behera, Prafulla; Beimforde, Michael; Belanger-Champagne, Camille; Bell, Paul; Bell, William; Bella, Gideon; Bellagamba, Lorenzo; Bellina, Francesco; Bellomo, Massimiliano; Belloni, Alberto; Belotskiy, Konstantin; Beltramello, Olga; Ben Ami, Sagi; Benary, Odette; Benchekroun, Driss; Bendel, Markus; Benedict, Brian Hugues; Benekos, Nektarios; Benhammou, Yan; Benjamin, Douglas; Benoit, Mathieu; Bensinger, James; Benslama, Kamal; Bentvelsen, Stan; Beretta, Matteo; Berge, David; Bergeaas Kuutmann, Elin; Berger, Nicolas; Berghaus, Frank; Berglund, Elina; Beringer, Jürg; Bernabéu , José; Bernat, Pauline; Bernhard, Ralf; Bernius, Catrin; Berry, Tracey; Bertin, Antonio; Besana, Maria Ilaria; Besson, Nathalie; Bethke, Siegfried; Bianchi, Riccardo-Maria; Bianco, Michele; Biebel, Otmar; Biesiada, Jed; Biglietti, Michela; Bilokon, Halina; Bindi, Marcello; Bingul, Ahmet; Bini, Cesare; Biscarat, Catherine; Bitenc, Urban; Black, Kevin; Blair, Robert; 
Blanchard, Jean-Baptiste; Blanchot, Georges; Blocker, Craig; Blondel, Alain; Blum, Walter; Blumenschein, Ulrike; Bobbink, Gerjan; Bocci, Andrea; Boehler, Michael; Boek, Jennifer; Boelaert, Nele; Böser, Sebastian; Bogaerts, Joannes Andreas; Bogouch, Andrei; Bohm, Christian; Bohm, Jan; Boisvert, Veronique; Bold, Tomasz; Boldea, Venera; Bondarenko, Valery; Bondioli, Mario; Boonekamp, Maarten; Bordoni, Stefania; Borer, Claudia; Borisov, Anatoly; Borissov, Guennadi; Borjanovic, Iris; Borroni, Sara; Bos, Kors; Boscherini, Davide; Bosman, Martine; Boterenbrood, Hendrik; Bouchami, Jihene; Boudreau, Joseph; Bouhova-Thacker, Evelina Vassileva; Boulahouache, Chaouki; Bourdarios, Claire; Boveia, Antonio; Boyd, James; Boyko, Igor; Bozovic-Jelisavcic, Ivanka; Bracinik, Juraj; Braem, André; Branchini, Paolo; Brandt, Andrew; Brandt, Gerhard; Brandt, Oleg; Bratzler, Uwe; Brau, Benjamin; Brau, James; Braun, Helmut; Brelier, Bertrand; Bremer, Johan; Brenner, Richard; Bressler, Shikma; Britton, Dave; Brochu, Frederic; Brock, Ian; Brock, Raymond; Brodet, Eyal; Bromberg, Carl; Brooijmans, Gustaaf; Brooks, William; Brown, Gareth; Bruckman de Renstrom, Pawel; Bruncko, Dusan; Bruneliere, Renaud; Brunet, Sylvie; Bruni, Alessia; Bruni, Graziano; Bruschi, Marco; Bucci, Francesca; Buchanan, James; Buchholz, Peter; Buckley, Andrew; Budagov, Ioulian; Budick, Burton; Büscher, Volker; Bugge, Lars; Bulekov, Oleg; Bunse, Moritz; Buran, Torleiv; Burckhart, Helfried; Burdin, Sergey; Burgess, Thomas; Burke, Stephen; Busato, Emmanuel; Bussey, Peter; Buszello, Claus-Peter; Butin, Françcois; Butler, Bart; Butler, John; Buttar, Craig; Butterworth, Jonathan; Byatt, Tom; Caballero, Jose; Cabrera Urbán, Susana; Caforio, Davide; Cakir, Orhan; Calafiura, Paolo; Calderini, Giovanni; Calfayan, Philippe; Calkins, Robert; Caloba, Luiz; Calvet, David; Camarri, Paolo; Cameron, David; Campana, Simone; Campanelli, Mario; Canale, Vincenzo; Canelli, Florencia; Canepa, Anadi; Cantero, Josu; Capasso, Luciano; Capeans 
Garrido, Maria Del Mar; Caprini, Irinel; Caprini, Mihai; Capua, Marcella; Caputo, Regina; Caramarcu, Costin; Cardarelli, Roberto; Carli, Tancredi; Carlino, Gianpaolo; Carminati, Leonardo; Caron, Bryan; Caron, Sascha; Carrillo Montoya, German D.; Carron Montero, Sebastian; Carter, Antony; Carter, Janet; Carvalho, João; Casadei, Diego; Casado, Maria Pilar; Cascella, Michele; Castaneda Hernandez, Alfredo Martin; Castaneda-Miranda, Elizabeth; Castillo Gimenez, Victoria; Castro, Nuno Filipe; Cataldi, Gabriella; Catinaccio, Andrea; Catmore, James; Cattai, Ariella; Cattani, Giordano; Caughron, Seth; Cavalleri, Pietro; Cavalli, Donatella; Cavalli-Sforza, Matteo; Cavasinni, Vincenzo; Ceradini, Filippo; Cerqueira, Augusto Santiago; Cerri, Alessandro; Cerrito, Lucio; Cerutti, Fabio; Cetin, Serkant Ali; Chafaq, Aziz; Chakraborty, Dhiman; Chan, Kevin; Chapman, John Derek; Chapman, John Wehrley; Chareyre, Eve; Charlton, Dave; Chavda, Vikash; Cheatham, Susan; Chekanov, Sergei; Chekulaev, Sergey; Chelkov, Gueorgui; Chen, Hucheng; Chen, Shenjian; Chen, Xin; Cheplakov, Alexander; Chepurnov, Vladimir; Cherkaoui El Moursli, Rajaa; Tcherniatine, Valeri; Chesneanu, Daniela; Cheu, Elliott; Cheung, Sing-Leung; Chevalier, Laurent; Chevallier, Florent; Chiefari, Giovanni; Chikovani, Leila; Childers, John Taylor; Chilingarov, Alexandre; Chiodini, Gabriele; Chizhov, Mihail; Choudalakis, Georgios; Chouridou, Sofia; Christidi, Illectra-Athanasia; Christov, Asen; Chromek-Burckhart, Doris; Chu, Ming-Lee; Chudoba, Jiri; Ciapetti, Guido; Ciftci, Abbas Kenan; Ciftci, Rena; Cinca, Diane; Cindro, Vladimir; Ciobotaru, Matei Dan; Ciocca, Claudia; Ciocio, Alessandra; Cirilli, Manuela; Clark, Allan G.; Clark, Philip James; Cleland, Bill; Clemens, Jean-Claude; Clement, Benoit; Clement, Christophe; Coadou, Yann; Cobal, Marina; Coccaro, Andrea; Cochran, James H.; Coggeshall, James; Cogneras, Eric; Colijn, Auke-Pieter; Collard, Caroline; Collins, Neil; Collins-Tooth, Christopher; Collot, Johann; Colon, 
German; Conde Muiño, Patricia; Coniavitis, Elias; Conidi, Maria Chiara; Consonni, Michele; Constantinescu, Serban; Conta, Claudio; Conventi, Francesco; Cooke, Mark; Cooper, Ben; Cooper-Sarkar, Amanda; Cooper-Smith, Neil; Copic, Katherine; Cornelissen, Thijs; Corradi, Massimo; Corriveau, Francois; Corso-Radu, Alina; Cortes-Gonzalez, Arely; Cortiana, Giorgio; Costa, Giuseppe; Costa, María José; Costanzo, Davide; Costin, Tudor; Côté, David; Coura Torres, Rodrigo; Courneyea, Lorraine; Cowan, Glen; Cowden, Christopher; Cox, Brian; Cranmer, Kyle; Cranshaw, Jack; Cristinziani, Markus; Crosetti, Giovanni; Crupi, Roberto; Crépé-Renaudin, Sabine; Cuenca Almenar, Cristóbal; Cuhadar Donszelmann, Tulay; Curatolo, Maria; Curtis, Chris; Cwetanski, Peter; Czyczula, Zofia; D'Auria, Saverio; D'Onofrio, Monica; D'Orazio, Alessia; Da Via, Cinzia; Dabrowski, Wladyslaw; Dai, Tiesheng; Dallapiccola, Carlo; Dallison, Steve; Daly, Colin; Dam, Mogens; Danielsson, Hans Olof; Dannheim, Dominik; Dao, Valerio; Darbo, Giovanni; Darlea, Georgiana Lavinia; Davey, Will; Davidek, Tomas; Davidson, Nadia; Davidson, Ruth; Davies, Merlin; Davison, Adam; Dawson, Ian; Daya, Rozmin; De, Kaushik; de Asmundis, Riccardo; De Castro, Stefano; De Castro Faria Salgado, Pedro; De Cecco, Sandro; de Graat, Julien; De Groot, Nicolo; de Jong, Paul; De Mora, Lee; De Oliveira Branco, Miguel; De Pedis, Daniele; De Salvo, Alessandro; De Sanctis, Umberto; De Santo, Antonella; De Vivie De Regie, Jean-Baptiste; Dean, Simon; Dedovich, Dmitri; Degenhardt, James; Dehchar, Mohamed; Del Papa, Carlo; Del Peso, Jose; Del Prete, Tarcisio; Dell'Acqua, Andrea; Dell'Asta, Lidia; Della Pietra, Massimo; della Volpe, Domenico; Delmastro, Marco; Delsart, Pierre-Antoine; Deluca, Carolina; Demers, Sarah; Demichev, Mikhail; Demirkoz, Bilge; Deng, Jianrong; Deng, Wensheng; Denisov, Sergey; Derkaoui, Jamal Eddine; Derue, Frederic; Dervan, Paul; Desch, Klaus Kurt; Deviveiros, Pier-Olivier; Dewhurst, Alastair; DeWilde, Burton; Dhaliwal, Saminder; 
Dhullipudi, Ramasudhakar; Di Ciaccio, Anna; Di Ciaccio, Lucia; Di Girolamo, Alessandro; Di Girolamo, Beniamino; Di Luise, Silvestro; Di Mattia, Alessandro; Di Nardo, Roberto; Di Simone, Andrea; Di Sipio, Riccardo; Diaz, Marco Aurelio; Diblen, Faruk; Diehl, Edward; Dietrich, Janet; Dietzsch, Thorsten; Diglio, Sara; Dindar Yagci, Kamile; Dingfelder, Jochen; Dionisi, Carlo; Dita, Petre; Dita, Sanda; Dittus, Fridolin; Djama, Fares; Djilkibaev, Rashid; Djobava, Tamar; do Vale, Maria Aline Barros; Do Valle Wemans, André; Doan, Thi Kieu Oanh; Dobos, Daniel; Dobson, Ellie; Dobson, Marc; Doglioni, Caterina; Doherty, Tom; Dolejsi, Jiri; Dolenc, Irena; Dolezal, Zdenek; Dolgoshein, Boris; Dohmae, Takeshi; Donega, Mauro; Donini, Julien; Dopke, Jens; Doria, Alessandra; Dos Anjos, Andre; Dotti, Andrea; Dova, Maria-Teresa; Doxiadis, Alexander; Doyle, Tony; Drasal, Zbynek; Dris, Manolis; Dubbert, Jörg; Duchovni, Ehud; Duckeck, Guenter; Dudarev, Alexey; Dudziak, Fanny; Dührssen , Michael; Duflot, Laurent; Dufour, Marc-Andre; Dunford, Monica; Duran Yildiz, Hatice; Duxfield, Robert; Dwuznik, Michal; Düren, Michael; Ebenstein, William; Ebke, Johannes; Eckweiler, Sebastian; Edmonds, Keith; Edwards, Clive; Egorov, Kirill; Ehrenfeld, Wolfgang; Ehrich, Thies; Eifert, Till; Eigen, Gerald; Einsweiler, Kevin; Eisenhandler, Eric; Ekelof, Tord; El Kacimi, Mohamed; Ellert, Mattias; Elles, Sabine; Ellinghaus, Frank; Ellis, Katherine; Ellis, Nicolas; Elmsheuser, Johannes; Elsing, Markus; Emeliyanov, Dmitry; Engelmann, Roderich; Engl, Albert; Epp, Brigitte; Eppig, Andrew; Erdmann, Johannes; Ereditato, Antonio; Eriksson, Daniel; Ermoline, Iouri; Ernst, Jesse; Ernst, Michael; Ernwein, Jean; Errede, Deborah; Errede, Steven; Ertel, Eugen; Escalier, Marc; Escobar, Carlos; Espinal Curull, Xavier; Esposito, Bellisario; Etienvre, Anne-Isabelle; Etzion, Erez; Evans, Hal; Fabbri, Laura; Fabre, Caroline; Facius, Katrine; Fakhrutdinov, Rinat; Falciano, Speranza; Fang, Yaquan; Fanti, Marcello; Farbin, Amir; 
Farilla, Addolorata; Farley, Jason; Farooque, Trisha; Farrington, Sinead; Farthouat, Philippe; Fassnacht, Patrick; Fassouliotis, Dimitrios; Fatholahzadeh, Baharak; Fayard, Louis; Fayette, Florent; Febbraro, Renato; Federic, Pavol; Fedin, Oleg; Fedorko, Woiciech; Feligioni, Lorenzo; Felzmann, Ulrich; Feng, Cunfeng; Feng, Eric; Fenyuk, Alexander; Ferencei, Jozef; Ferland, Jonathan; Fernandes, Bruno; Fernando, Waruna; Ferrag, Samir; Ferrando, James; Ferrara, Valentina; Ferrari, Arnaud; Ferrari, Pamela; Ferrari, Roberto; Ferrer, Antonio; Ferrer, Maria Lorenza; Ferrere, Didier; Ferretti, Claudio; Fiascaris, Maria; Fiedler, Frank; Filipčič, Andrej; Filippas, Anastasios; Filthaut, Frank; Fincke-Keeler, Margret; Fiolhais, Miguel; Fiorini, Luca; Firan, Ana; Fischer, Gordon; Fisher, Matthew; Flechl, Martin; Fleck, Ivor; Fleckner, Johanna; Fleischmann, Philipp; Fleischmann, Sebastian; Flick, Tobias; Flores Castillo, Luis; Flowerdew, Michael; Fonseca Martin, Teresa; Formica, Andrea; Forti, Alessandra; Fortin, Dominique; Fournier, Daniel; Fowler, Andrew; Fowler, Ken; Fox, Harald; Francavilla, Paolo; Franchino, Silvia; Francis, David; Franklin, Melissa; Franz, Sebastien; Fraternali, Marco; Fratina, Sasa; Freestone, Julian; French, Sky; Froeschl, Robert; Froidevaux, Daniel; Frost, James; Fukunaga, Chikara; Fullana Torregrosa, Esteban; Fuster, Juan; Gabaldon, Carolina; Gabizon, Ofir; Gadfort, Thomas; Gadomski, Szymon; Gagliardi, Guido; Gagnon, Pauline; Galea, Cristina; Gallas, Elizabeth; Gallo, Valentina Santina; Gallop, Bruce; Gallus, Petr; Galyaev, Eugene; Gan, K K; Gao, Yongsheng; Gaponenko, Andrei; Garcia-Sciveres, Maurice; García, Carmen; García Navarro, José Enrique; Gardner, Robert; Garelli, Nicoletta; Garitaonandia, Hegoi; Garonne, Vincent; Gatti, Claudio; Gaudio, Gabriella; Gautard, Valerie; Gauzzi, Paolo; Gavrilenko, Igor; Gay, Colin; Gaycken, Goetz; Gazis, Evangelos; Ge, Peng; Gee, Norman; Geich-Gimbel, Christoph; Gellerstedt, Karl; Gemme, Claudia; Genest, Marie-Hélène; 
Gentile, Simonetta; Georgatos, Fotios; George, Simon; Gershon, Avi; Ghazlane, Hamid; Ghodbane, Nabil; Giacobbe, Benedetto; Giagu, Stefano; Giakoumopoulou, Victoria; Giangiobbe, Vincent; Gianotti, Fabiola; Gibbard, Bruce; Gibson, Adam; Gibson, Stephen; Gilbert, Laura; Gilchriese, Murdock; Gilewsky, Valentin; Gingrich, Douglas; Ginzburg, Jonatan; Giokaris, Nikos; Giordani, MarioPaolo; Giordano, Raffaele; Giorgi, Francesco Michelangelo; Giovannini, Paola; Giraud, Pierre-Francois; Girtler, Peter; Giugni, Danilo; Giusti, Paolo; Gjelsten, Børge Kile; Gladilin, Leonid; Glasman, Claudia; Glazov, Alexandre; Glitza, Karl-Walter; Glonti, George; Godfrey, Jennifer; Godlewski, Jan; Goebel, Martin; Göpfert, Thomas; Goeringer, Christian; Gössling, Claus; Göttfert, Tobias; Goggi, Virginio; Goldfarb, Steven; Goldin, Daniel; Golling, Tobias; Gomes, Agostinho; Gomez Fajardo, Luz Stella; Gonçcalo, Ricardo; Gonella, Laura; Gong, Chenwei; González de la Hoz, Santiago; Gonzalez Silva, Laura; Gonzalez-Sevilla, Sergio; Goodson, Jeremiah Jet; Goossens, Luc; Gordon, Howard; Gorelov, Igor; Gorfine, Grant; Gorini, Benedetto; Gorini, Edoardo; Gorišek, Andrej; Gornicki, Edward; Gosdzik, Bjoern; Gosselink, Martijn; Gostkin, Mikhail Ivanovitch; Gough Eschrich, Ivo; Gouighri, Mohamed; Goujdami, Driss; Goulette, Marc Phillippe; Goussiou, Anna; Goy, Corinne; Grabowska-Bold, Iwona; Grafström, Per; Grahn, Karl-Johan; Grancagnolo, Sergio; Grassi, Valerio; Gratchev, Vadim; Grau, Nathan; Gray, Heather; Gray, Julia Ann; Graziani, Enrico; Green, Barry; Greenshaw, Timothy; Greenwood, Zeno Dixon; Gregor, Ingrid-Maria; Grenier, Philippe; Griesmayer, Erich; Griffiths, Justin; Grigalashvili, Nugzar; Grillo, Alexander; Grimm, Kathryn; Grinstein, Sebastian; Grishkevich, Yaroslav; Groh, Manfred; Groll, Marius; Gross, Eilam; Grosse-Knetter, Joern; Groth-Jensen, Jacob; Grybel, Kai; Guicheney, Christophe; Guida, Angelo; Guillemin, Thibault; Guler, Hulya; Gunther, Jaroslav; Guo, Bin; Gusakov, Yury; Gutierrez, Andrea; 
Gutierrez, Phillip; Guttman, Nir; Gutzwiller, Olivier; Guyot, Claude; Gwenlan, Claire; Gwilliam, Carl; Haas, Andy; Haas, Stefan; Haber, Carl; Hadavand, Haleh Khani; Hadley, David; Haefner, Petra; Hajduk, Zbigniew; Hakobyan, Hrachya; Haller, Johannes; Hamacher, Klaus; Hamilton, Andrew; Hamilton, Samuel; Han, Liang; Hanagaki, Kazunori; Hance, Michael; Handel, Carsten; Hanke, Paul; Hansen, Jørgen Beck; Hansen, Jorn Dines; Hansen, John Renner; Hansen, Peter Henrik; Hansl-Kozanecka, Traudl; Hansson, Per; Hara, Kazuhiko; Hare, Gabriel; Harenberg, Torsten; Harrington, Robert; Harris, Orin; Harrison, Karl; Hartert, Jochen; Hartjes, Fred; Harvey, Alex; Hasegawa, Satoshi; Hasegawa, Yoji; Hassani, Samira; Haug, Sigve; Hauschild, Michael; Hauser, Reiner; Havranek, Miroslav; Hawkes, Christopher; Hawkings, Richard John; Hayakawa, Takashi; Hayward, Helen; Haywood, Stephen; Head, Simon; Hedberg, Vincent; Heelan, Louise; Heim, Sarah; Heinemann, Beate; Heisterkamp, Simon; Helary, Louis; Heller, Mathieu; Hellman, Sten; Helsens, Clement; Hemperek, Tomasz; Henderson, Robert; Henke, Michael; Henrichs, Anna; Henriques Correia, Ana Maria; Henrot-Versille, Sophie; Hensel, Carsten; Henß, Tobias; Hernández Jiménez, Yesenia; Hershenhorn, Alon David; Herten, Gregor; Hertenberger, Ralf; Hervas, Luis; Hessey, Nigel; Higón-Rodriguez, Emilio; Hill, John; Hiller, Karl Heinz; Hillert, Sonja; Hillier, Stephen; Hinchliffe, Ian; Hines, Elizabeth; Hirose, Minoru; Hirsch, Florian; Hirschbuehl, Dominic; Hobbs, John; Hod, Noam; Hodgkinson, Mark; Hodgson, Paul; Hoecker, Andreas; Hoeferkamp, Martin; Hoffman, Julia; Hoffmann, Dirk; Hohlfeld, Marc; Holy, Tomas; Holzbauer, Jenny; Homma, Yasuhiro; Horazdovsky, Tomas; Hori, Takuya; Horn, Claus; Horner, Stephan; Hostachy, Jean-Yves; Hou, Suen; Hoummada, Abdeslam; Howe, Travis; Hrivnac, Julius; Hryn'ova, Tetiana; Hsu, Pai-hsien Jennifer; Hsu, Shih-Chieh; Huang, Guang Shun; Hubacek, Zdenek; Hubaut, Fabrice; Huegging, Fabian; Huffman, Todd Brian; Hughes, Emlyn; 
Hughes, Gareth; Hurwitz, Martina; Husemann, Ulrich; Huseynov, Nazim; Huston, Joey; Huth, John; Iacobucci, Giuseppe; Iakovidis, Georgios; Ibragimov, Iskander; Iconomidou-Fayard, Lydia; Idarraga, John; Iengo, Paolo; Igonkina, Olga; Ikegami, Yoichi; Ikeno, Masahiro; Ilchenko, Yuri; Iliadis, Dimitrios; Ince, Tayfun; Ioannou, Pavlos; Iodice, Mauro; Irles Quiles, Adrian; Ishikawa, Akimasa; Ishino, Masaya; Ishmukhametov, Renat; Isobe, Tadaaki; Issever, Cigdem; Istin, Serhat; Itoh, Yuki; Ivashin, Anton; Iwanski, Wieslaw; Iwasaki, Hiroyuki; Izen, Joseph; Izzo, Vincenzo; Jackson, Brett; Jackson, John; Jackson, Paul; Jaekel, Martin; Jain, Vivek; Jakobs, Karl; Jakobsen, Sune; Jakubek, Jan; Jana, Dilip; Jankowski, Ernest; Jansen, Eric; Jantsch, Andreas; Janus, Michel; Jarlskog, Göran; Jeanty, Laura; Jen-La Plante, Imai; Jenni, Peter; Jež, Pavel; Jézéquel, Stéphane; Ji, Weina; Jia, Jiangyong; Jiang, Yi; Jimenez Belenguer, Marcos; Jin, Shan; Jinnouchi, Osamu; Joffe, David; Johansen, Marianne; Johansson, Erik; Johansson, Per; Johnert, Sebastian; Johns, Kenneth; Jon-And, Kerstin; Jones, Graham; Jones, Roger; Jones, Tim; Jorge, Pedro; Joseph, John; Juranek, Vojtech; Jussel, Patrick; Kabachenko, Vasily; Kaci, Mohammed; Kaczmarska, Anna; Kado, Marumi; Kagan, Harris; Kagan, Michael; Kaiser, Steffen; Kajomovitz, Enrique; Kalinin, Sergey; Kalinovskaya, Lidia; Kama, Sami; Kanaya, Naoko; Kaneda, Michiru; Kantserov, Vadim; Kanzaki, Junichi; Kaplan, Benjamin; Kapliy, Anton; Kaplon, Jan; Kar, Deepak; Karagounis, Michael; Karagoz, Muge; Karnevskiy, Mikhail; Kartvelishvili, Vakhtang; Karyukhin, Andrey; Kashif, Lashkar; Kasmi, Azzedine; Kass, Richard; Kastanas, Alex; Kastoryano, Michael; Kataoka, Mayuko; Kataoka, Yousuke; Katsoufis, Elias; Katzy, Judith; Kaushik, Venkatesh; Kawagoe, Kiyotomo; Kawamoto, Tatsuo; Kawamura, Gen; Kayl, Manuel; Kayumov, Fred; Kazanin, Vassili; Kazarinov, Makhail; Keates, James Robert; Keeler, Richard; Keener, Paul; Kehoe, Robert; Keil, Markus; Kekelidze, George; 
Kelly, Marc; Kenyon, Mike; Kepka, Oldrich; Kerschen, Nicolas; Kerševan, Borut Paul; Kersten, Susanne; Kessoku, Kohei; Khakzad, Mohsen; Khalil-zada, Farkhad; Khandanyan, Hovhannes; Khanov, Alexander; Kharchenko, Dmitri; Khodinov, Alexander; Khomich, Andrei; Khoriauli, Gia; Khovanskiy, Nikolai; Khovanskiy, Valery; Khramov, Evgeniy; Khubua, Jemal; Kim, Hyeon Jin; Kim, Min Suk; Kim, Peter; Kim, Shinhong; Kind, Oliver; Kind, Peter; King, Barry; Kirk, Julie; Kirsch, Guillaume; Kirsch, Lawrence; Kiryunin, Andrey; Kisielewska, Danuta; Kittelmann, Thomas; Kiyamura, Hironori; Kladiva, Eduard; Klein, Max; Klein, Uta; Kleinknecht, Konrad; Klemetti, Miika; Klier, Amit; Klimentov, Alexei; Klingenberg, Reiner; Klinkby, Esben; Klioutchnikova, Tatiana; Klok, Peter; Klous, Sander; Kluge, Eike-Erik; Kluge, Thomas; Kluit, Peter; Klute, Markus; Kluth, Stefan; Knecht, Neil; Kneringer, Emmerich; Ko, Byeong Rok; Kobayashi, Tomio; Kobel, Michael; Koblitz, Birger; Kocian, Martin; Kocnar, Antonin; Kodys, Peter; Köneke, Karsten; König, Adriaan; Koenig, Sebastian; Köpke, Lutz; Koetsveld, Folkert; Koevesarki, Peter; Koffas, Thomas; Koffeman, Els; Kohn, Fabian; Kohout, Zdenek; Kohriki, Takashi; Kolanoski, Hermann; Kolesnikov, Vladimir; Koletsou, Iro; Koll, James; Kollar, Daniel; Kolos, Serguei; Kolya, Scott; Komar, Aston; Komaragiri, Jyothsna Rani; Kondo, Takahiko; Kono, Takanori; Konoplich, Rostislav; Konovalov, Serguei; Konstantinidis, Nikolaos; Koperny, Stefan; Korcyl, Krzysztof; Kordas, Kostantinos; Korn, Andreas; Korolkov, Ilya; Korolkova, Elena; Korotkov, Vladislav; Kortner, Oliver; Kortner, Sandra; Kostka, Peter; Kostyukhin, Vadim; Kotov, Serguei; Kotov, Vladislav; Kotov, Konstantin; Kourkoumelis, Christine; Koutsman, Alex; Kowalewski, Robert Victor; Kowalski, Henri; Kowalski, Tadeusz; Kozanecki, Witold; Kozhin, Anatoly; Kral, Vlastimil; Kramarenko, Viktor; Kramberger, Gregor; Krasny, Mieczyslaw Witold; Krasznahorkay, Attila; Kraus, James; Kreisel, Arik; Krejci, Frantisek; Kretzschmar, 
Jan; Krieger, Nina; Krieger, Peter; Kroeninger, Kevin; Kroha, Hubert; Kroll, Joe; Kroseberg, Juergen; Krstic, Jelena; Kruchonak, Uladzimir; Krüger, Hans; Krumshteyn, Zinovii; Kubota, Takashi; Kuehn, Susanne; Kugel, Andreas; Kuhl, Thorsten; Kuhn, Dietmar; Kukhtin, Victor; Kulchitsky, Yuri; Kuleshov, Sergey; Kummer, Christian; Kuna, Marine; Kunkle, Joshua; Kupco, Alexander; Kurashige, Hisaya; Kurata, Masakazu; Kurochkin, Yurii; Kus, Vlastimil; Kwee, Regina; La Rosa, Alessandro; La Rotonda, Laura; Labbe, Julien; Lacasta, Carlos; Lacava, Francesco; Lacker, Heiko; Lacour, Didier; Lacuesta, Vicente Ramón; Ladygin, Evgueni; Lafaye, Rémi; Laforge, Bertrand; Lagouri, Theodota; Lai, Stanley; Lamanna, Massimo; Lampen, Caleb; Lampl, Walter; Lancon, Eric; Landgraf, Ulrich; Landon, Murrough; Lane, Jenna; Lankford, Andrew; Lanni, Francesco; Lantzsch, Kerstin; Lanza, Agostino; Laplace, Sandrine; Lapoire, Cecile; Laporte, Jean-Francois; Lari, Tommaso; Larner, Aimee; Lassnig, Mario; Laurelli, Paolo; Lavrijsen, Wim; Laycock, Paul; Lazarev, Alexandre; Lazzaro, Alfio; Le Dortz, Olivier; Le Guirriec, Emmanuel; Le Menedeu, Eve; Lebedev, Alexander; Lebel, Céline; LeCompte, Thomas; Ledroit-Guillon, Fabienne Agnes Marie; Lee, Hurng-Chun; Lee, Jason; Lee, Shih-Chang; Lefebvre, Michel; Legendre, Marie; LeGeyt, Benjamin; Legger, Federica; Leggett, Charles; Lehmacher, Marc; Lehmann Miotto, Giovanna; Lei, Xiaowen; Leitner, Rupert; Lellouch, Daniel; Lellouch, Jeremie; Lendermann, Victor; Leney, Katharine; Lenz, Tatiana; Lenzen, Georg; Lenzi, Bruno; Leonhardt, Kathrin; Leroy, Claude; Lessard, Jean-Raphael; Lester, Christopher; Leung Fook Cheong, Annabelle; Levêque, Jessica; Levin, Daniel; Levinson, Lorne; Leyton, Michael; Li, Haifeng; Li, Xuefei; Liang, Zhihua; Liang, Zhijun; Liberti, Barbara; Lichard, Peter; Lichtnecker, Markus; Lie, Ki; Liebig, Wolfgang; Lilley, Joseph; Limosani, Antonio; Limper, Maaike; Lin, Simon; Linnemann, James; Lipeles, Elliot; Lipinsky, Lukas; Lipniacka, Anna; Liss, Tony; 
Lissauer, David; Lister, Alison; Litke, Alan; Liu, Chuanlei; Liu, Dong; Liu, Hao; Liu, Jianbei; Liu, Minghui; Liu, Tiankuan; Liu, Yanwen; Livan, Michele; Lleres, Annick; Lloyd, Stephen; Lobodzinska, Ewelina; Loch, Peter; Lockman, William; Lockwitz, Sarah; Loddenkoetter, Thomas; Loebinger, Fred; Loginov, Andrey; Loh, Chang Wei; Lohse, Thomas; Lohwasser, Kristin; Lokajicek, Milos; Long, Robin Eamonn; Lopes, Lourenco; Lopez Mateos, David; Losada, Marta; Loscutoff, Peter; Lou, Xinchou; Lounis, Abdenour; Loureiro, Karina; Lovas, Lubomir; Love, Jeremy; Love, Peter; Lowe, Andrew; Lu, Feng; Lubatti, Henry; Luci, Claudio; Lucotte, Arnaud; Ludwig, Andreas; Ludwig, Dörthe; Ludwig, Inga; Luehring, Frederick; Lumb, Debra; Luminari, Lamberto; Lund, Esben; Lund-Jensen, Bengt; Lundberg, Björn; Lundberg, Johan; Lundquist, Johan; Lynn, David; Lys, Jeremy; Lytken, Else; Ma, Hong; Ma, Lian Liang; Macana Goia, Jorge Andres; Maccarrone, Giovanni; Macchiolo, Anna; Maček, Boštjan; Machado Miguens, Joana; Mackeprang, Rasmus; Madaras, Ronald; Mader, Wolfgang; Maenner, Reinhard; Maeno, Tadashi; Mättig, Peter; Mättig, Stefan; Magalhaes Martins, Paulo Jorge; Magradze, Erekle; Mahalalel, Yair; Mahboubi, Kambiz; Mahmood, A.; Maiani, Camilla; Maidantchik, Carmen; Maio, Amélia; Majewski, Stephanie; Makida, Yasuhiro; Makouski, Mikhail; Makovec, Nikola; Malecki, Piotr; Malecki, Pawel; Maleev, Victor; Malek, Fairouz; Mallik, Usha; Malon, David; Maltezos, Stavros; Malyshev, Vladimir; Malyukov, Sergei; Mambelli, Marco; Mameghani, Raphael; Mamuzic, Judita; Mandelli, Luciano; Mandić, Igor; Mandrysch, Rocco; Maneira, José; Mangeard, Pierre-Simon; Manjavidze, Ioseb; Manning, Peter; Manousakis-Katsikakis, Arkadios; Mansoulie, Bruno; Mapelli, Alessandro; Mapelli, Livio; March , Luis; Marchand, Jean-Francois; Marchese, Fabrizio; Marchiori, Giovanni; Marcisovsky, Michal; Marino, Christopher; Marroquim, Fernando; Marshall, Zach; Marti-Garcia, Salvador; Martin, Alex; Martin, Andrew; Martin, Brian; Martin, Brian; 
Martin, Franck Francois; Martin, Jean-Pierre; Martin, Tim; Martin dit Latour, Bertrand; Martinez, Mario; Martinez Outschoorn, Verena; Martyniuk, Alex; Marzano, Francesco; Marzin, Antoine; Masetti, Lucia; Mashimo, Tetsuro; Mashinistov, Ruslan; Masik, Jiri; Maslennikov, Alexey; Massa, Ignazio; Massol, Nicolas; Mastroberardino, Anna; Masubuchi, Tatsuya; Matricon, Pierre; Matsunaga, Hiroyuki; Matsushita, Takashi; Mattravers, Carly; Maxfield, Stephen; Mayne, Anna; Mazini, Rachid; Mazur, Michael; Mc Donald, Jeffrey; Mc Kee, Shawn Patrick; McCarn, Allison; McCarthy, Robert; McCubbin, Norman; McFarlane, Kenneth; McGlone, Helen; Mchedlidze, Gvantsa; McMahon, Steve; McPherson, Robert; Meade, Andrew; Mechnich, Joerg; Mechtel, Markus; Medinnis, Mike; Meera-Lebbai, Razzak; Meguro, Tatsuma; Mehlhase, Sascha; Mehta, Andrew; Meier, Karlheinz; Meirose, Bernhard; Melachrinos, Constantinos; Mellado Garcia, Bruce Rafael; Mendoza Navas, Luis; Meng, Zhaoxia; Menke, Sven; Meoni, Evelin; Mermod, Philippe; Merola, Leonardo; Meroni, Chiara; Merritt, Frank; Messina, Andrea; Metcalfe, Jessica; Mete, Alaettin Serhan; Meyer, Jean-Pierre; Meyer, Jochen; Meyer, Joerg; Meyer, Thomas Christian; Meyer, W. 
Thomas; Miao, Jiayuan; Michal, Sebastien; Micu, Liliana; Middleton, Robin; Migas, Sylwia; Mijović, Liza; Mikenberg, Giora; Mikestikova, Marcela; Mikuž, Marko; Miller, David; Mills, Corrinne; Mills, Bill; Milov, Alexander; Milstead, David; Milstein, Dmitry; Minaenko, Andrey; Miñano, Mercedes; Minashvili, Irakli; Mincer, Allen; Mindur, Bartosz; Mineev, Mikhail; Ming, Yao; Mir, Lluisa-Maria; Mirabelli, Giovanni; Misawa, Shigeki; Misiejuk, Andrzej; Mitrevski, Jovan; Mitsou, Vasiliki A.; Miyagawa, Paul; Mjörnmark, Jan-Ulf; Moa, Torbjoern; Moed, Shulamit; Moeller, Victoria; Mönig, Klaus; Möser, Nicolas; Mohr, Wolfgang; Mohrdieck-Möck, Susanne; Moles-Valls, Regina; Molina-Perez, Jorge; Monk, James; Monnier, Emmanuel; Montesano, Simone; Monticelli, Fernando; Moore, Roger; Mora Herrera, Clemencia; Moraes, Arthur; Morais, Antonio; Morel, Julien; Morello, Gianfranco; Moreno, Deywis; Moreno Llácer, María; Morettini, Paolo; Morii, Masahiro; Morley, Anthony Keith; Mornacchi, Giuseppe; Morozov, Sergey; Morris, John; Moser, Hans-Guenther; Mosidze, Maia; Moss, Josh; Mount, Richard; Mountricha, Eleni; Mouraviev, Sergei; Moyse, Edward; Mudrinic, Mihajlo; Mueller, Felix; Mueller, James; Mueller, Klemens; Müller, Thomas; Muenstermann, Daniel; Muir, Alex; Munwes, Yonathan; Murillo Garcia, Raul; Murray, Bill; Mussche, Ido; Musto, Elisa; Myagkov, Alexey; Myska, Miroslav; Nadal, Jordi; Nagai, Koichi; Nagano, Kunihiro; Nagasaka, Yasushi; Nairz, Armin Michael; Nakamura, Koji; Nakano, Itsuo; Nakatsuka, Hiroki; Nanava, Gizo; Napier, Austin; Nash, Michael; Nation, Nigel; Nattermann, Till; Naumann, Thomas; Navarro, Gabriela; Nderitu, Simon Kirichu; Neal, Homer; Nebot, Eduardo; Nechaeva, Polina; Negri, Andrea; Negri, Guido; Nelson, Andrew; Nelson, Timothy Knight; Nemecek, Stanislav; Nemethy, Peter; Nepomuceno, Andre Asevedo; Nessi, Marzio; Neubauer, Mark; Neusiedl, Andrea; Neves, Ricardo; Nevski, Pavel; Newcomer, Mitchel; Nickerson, Richard; Nicolaidou, Rosy; Nicolas, Ludovic; Nicoletti, 
Giovanni; Nicquevert, Bertrand; Niedercorn, Francois; Nielsen, Jason; Nikiforov, Andriy; Nikolaev, Kirill; Nikolic-Audit, Irena; Nikolopoulos, Konstantinos; Nilsen, Henrik; Nilsson, Paul; Nisati, Aleandro; Nishiyama, Tomonori; Nisius, Richard; Nodulman, Lawrence; Nomachi, Masaharu; Nomidis, Ioannis; Nordberg, Markus; Nordkvist, Bjoern; Notz, Dieter; Novakova, Jana; Nozaki, Mitsuaki; Nožička, Miroslav; Nugent, Ian Michael; Nuncio-Quiroz, Adriana-Elizabeth; Nunes Hanninger, Guilherme; Nunnemann, Thomas; Nurse, Emily; O'Neil, Dugan; O'Shea, Val; Oakham, Gerald; Oberlack, Horst; Ochi, Atsuhiko; Oda, Susumu; Odaka, Shigeru; Odier, Jerome; Ogren, Harold; Oh, Alexander; Oh, Seog; Ohm, Christian; Ohshima, Takayoshi; Ohshita, Hidetoshi; Ohsugi, Takashi; Okada, Shogo; Okawa, Hideki; Okumura, Yasuyuki; Okuyama, Toyonobu; Olchevski, Alexander; Oliveira, Miguel Alfonso; Oliveira Damazio, Denis; Oliver Garcia, Elena; Olivito, Dominick; Olszewski, Andrzej; Olszowska, Jolanta; Omachi, Chihiro; Onofre, António; Onyisi, Peter; Oram, Christopher; Oreglia, Mark; Oren, Yona; Orestano, Domizia; Orlov, Iliya; Oropeza Barrera, Cristina; Orr, Robert; Ortega, Eduardo; Osculati, Bianca; Ospanov, Rustem; Osuna, Carlos; Ottersbach, John; Ould-Saada, Farid; Ouraou, Ahmimed; Ouyang, Qun; Owen, Mark; Owen, Simon; Oyarzun, Alejandro; Ozcan, Veysi Erkcan; Ozone, Kenji; Ozturk, Nurcan; Pacheco Pages, Andres; Padilla Aranda, Cristobal; Paganis, Efstathios; Pahl, Christoph; Paige, Frank; Pajchel, Katarina; Palestini, Sandro; Pallin, Dominique; Palma, Alberto; Palmer, Jody; Pan, Yibin; Panagiotopoulou, Evgenia; Panes, Boris; Panikashvili, Natalia; Panitkin, Sergey; Pantea, Dan; Panuskova, Monika; Paolone, Vittorio; Papadopoulou, Theodora; Park, Su-Jung; Park, Woochun; Parker, Andy; Parodi, Fabrizio; Parsons, John; Parzefall, Ulrich; Pasqualucci, Enrico; Passeri, Antonio; Pastore, Fernanda; Pastore, Francesca; Pásztor , Gabriella; Pataraia, Sophio; Pater, Joleen; Patricelli, Sergio; Pauly, Thilo; Peak, 
Lawrence; Pecsy, Martin; Pedraza Morales, Maria Isabel; Peleganchuk, Sergey; Peng, Haiping; Penson, Alexander; Penwell, John; Perantoni, Marcelo; Perez, Kerstin; Perez Codina, Estel; Pérez García-Estañ, María Teresa; Perez Reale, Valeria; Perini, Laura; Pernegger, Heinz; Perrino, Roberto; Persembe, Seda; Perus, Antoine; Peshekhonov, Vladimir; Petersen, Brian; Petersen, Troels; Petit, Elisabeth; Petridou, Chariclia; Petrolo, Emilio; Petrucci, Fabrizio; Petschull, Dennis; Petteni, Michele; Pezoa, Raquel; Phan, Anna; Phillips, Alan; Phillips, Peter William; Piacquadio, Giacinto; Piccinini, Maurizio; Piegaia, Ricardo; Pilcher, James; Pilkington, Andrew; Pina, João Antonio; Pinamonti, Michele; Pinfold, James; Pinto, Belmiro; Pizio, Caterina; Placakyte, Ringaile; Plamondon, Mathieu; Pleier, Marc-Andre; Poblaguev, Andrei; Poddar, Sahill; Podlyski, Fabrice; Poggioli, Luc; Pohl, Martin; Polci, Francesco; Polesello, Giacomo; Policicchio, Antonio; Polini, Alessandro; Poll, James; Polychronakos, Venetios; Pomeroy, Daniel; Pommès, Kathy; Ponsot, Patrick; Pontecorvo, Ludovico; Pope, Bernard; Popeneciu, Gabriel Alexandru; Popovic, Dragan; Poppleton, Alan; Popule, Jiri; Portell Bueso, Xavier; Porter, Robert; Pospelov, Guennady; Pospisil, Stanislav; Potekhin, Maxim; Potrap, Igor; Potter, Christina; Potter, Christopher; Potter, Keith; Poulard, Gilbert; Poveda, Joaquin; Prabhu, Robindra; Pralavorio, Pascal; Prasad, Srivas; Pravahan, Rishiraj; Pribyl, Lukas; Price, Darren; Price, Lawrence; Prichard, Paul; Prieur, Damien; Primavera, Margherita; Prokofiev, Kirill; Prokoshin, Fedor; Protopopescu, Serban; Proudfoot, James; Prudent, Xavier; Przysiezniak, Helenka; Psoroulas, Serena; Ptacek, Elizabeth; Purdham, John; Purohit, Milind; Puzo, Patrick; Pylypchenko, Yuriy; Qi, Ming; Qian, Jianming; Qian, Weiming; Qin, Zhonghua; Quadt, Arnulf; Quarrie, David; Quayle, William; Quinonez, Fernando; Raas, Marcel; Radeka, Veljko; Radescu, Voica; Radics, Balint; Rador, Tonguc; Ragusa, Francesco; Rahal, 
Ghita; Rahimi, Amir; Rajagopalan, Srinivasan; Rammensee, Michael; Rammes, Marcus; Rauscher, Felix; Rauter, Emanuel; Raymond, Michel; Read, Alexander Lincoln; Rebuzzi, Daniela; Redelbach, Andreas; Redlinger, George; Reece, Ryan; Reeves, Kendall; Reinherz-Aronis, Erez; Reinsch, Andreas; Reisinger, Ingo; Reljic, Dusan; Rembser, Christoph; Ren, Zhongliang; Renkel, Peter; Rescia, Sergio; Rescigno, Marco; Resconi, Silvia; Resende, Bernardo; Reznicek, Pavel; Rezvani, Reyhaneh; Richards, Alexander; Richter, Robert; Richter-Was, Elzbieta; Ridel, Melissa; Rijpstra, Manouk; Rijssenbeek, Michael; Rimoldi, Adele; Rinaldi, Lorenzo; Rios, Ryan Randy; Riu, Imma; Rizatdinova, Flera; Rizvi, Eram; Roa Romero, Diego Alejandro; Robertson, Steven; Robichaud-Veronneau, Andree; Robinson, Dave; Robinson, James; Robinson, Mary; Robson, Aidan; Rocha de Lima, Jose Guilherme; Roda, Chiara; Roda Dos Santos, Denis; Rodriguez, Diego; Rodriguez Garcia, Yohany; Roe, Shaun; Røhne, Ole; Rojo, Victoria; Rolli, Simona; Romaniouk, Anatoli; Romanov, Victor; Romeo, Gaston; Romero Maltrana, Diego; Roos, Lydia; Ros, Eduardo; Rosati, Stefano; Rosenbaum, Gabriel; Rosselet, Laurent; Rossetti, Valerio; Rossi, Leonardo Paolo; Rotaru, Marina; Rothberg, Joseph; Rousseau, David; Royon, Christophe; Rozanov, Alexander; Rozen, Yoram; Ruan, Xifeng; Ruckert, Benjamin; Ruckstuhl, Nicole; Rud, Viacheslav; Rudolph, Gerald; Rühr, Frederik; Ruggieri, Federico; Ruiz-Martinez, Aranzazu; Rumyantsev, Leonid; Rurikova, Zuzana; Rusakovich, Nikolai; Rutherfoord, John; Ruwiedel, Christoph; Ruzicka, Pavel; Ryabov, Yury; Ryan, Patrick; Rybkin, Grigori; Rzaeva, Sevda; Saavedra, Aldo; Sadrozinski, Hartmut; Sadykov, Renat; Safai Tehrani, Francesco; Sakamoto, Hiroshi; Salamanna, Giuseppe; Salamon, Andrea; Saleem, Muhammad; Salihagic, Denis; Salnikov, Andrei; Salt, José; Salvachua Ferrando, Belén; Salvatore, Daniela; Salvatore, Pasquale Fabrizio; Salvucci, Antonio; Salzburger, Andreas; Sampsonidis, Dimitrios; Samset, Björn Hallvard; 
Sandaker, Heidi; Sander, Heinz Georg; Sanders, Michiel; Sandhoff, Marisa; Sandhu, Pawan; Sandstroem, Rikard; Sandvoss, Stephan; Sankey, Dave; Sanny, Bernd; Sansoni, Andrea; Santamarina Rios, Cibran; Santoni, Claudio; Santonico, Rinaldo; Saraiva, João; Sarangi, Tapas; Sarkisyan-Grinbaum, Edward; Sarri, Francesca; Sasaki, Osamu; Sasao, Noboru; Satsounkevitch, Igor; Sauvage, Gilles; Savard, Pierre; Savine, Alexandre; Savinov, Vladimir; Sawyer, Lee; Saxon, David; Says, Louis-Pierre; Sbarra, Carla; Sbrizzi, Antonio; Scannicchio, Diana; Schaarschmidt, Jana; Schacht, Peter; Schäfer, Uli; Schaetzel, Sebastian; Schaffer, Arthur; Schaile, Dorothee; Schamberger, R.~Dean; Schamov, Andrey; Scharf, Veit; Schegelsky, Valery; Scheirich, Daniel; Schernau, Michael; Scherzer, Max; Schiavi, Carlo; Schieck, Jochen; Schioppa, Marco; Schlenker, Stefan; Schmidt, Evelyn; Schmieden, Kristof; Schmitt, Christian; Schmitz, Martin; Schöning, André; Schott, Matthias; Schouten, Doug; Schovancova, Jaroslava; Schram, Malachi; Schreiner, Alexander; Schroeder, Christian; Schroer, Nicolai; Schroers, Marcel; Schultes, Joachim; Schultz-Coulon, Hans-Christian; Schumacher, Jan; Schumacher, Markus; Schumm, Bruce; Schune, Philippe; Schwanenberger, Christian; Schwartzman, Ariel; Schwemling, Philippe; Schwienhorst, Reinhard; Schwierz, Rainer; Schwindling, Jerome; Scott, Bill; Searcy, Jacob; Sedykh, Evgeny; Segura, Ester; Seidel, Sally; Seiden, Abraham; Seifert, Frank; Seixas, José; Sekhniaidze, Givi; Seliverstov, Dmitry; Sellden, Bjoern; Semprini-Cesari, Nicola; Serfon, Cedric; Serin, Laurent; Seuster, Rolf; Severini, Horst; Sevior, Martin; Sfyrla, Anna; Shabalina, Elizaveta; Shamim, Mansoora; Shan, Lianyou; Shank, James; Shao, Qi Tao; Shapiro, Marjorie; Shatalov, Pavel; Shaw, Kate; Sherman, Daniel; Sherwood, Peter; Shibata, Akira; Shimojima, Makoto; Shin, Taeksu; Shmeleva, Alevtina; Shochet, Mel; Shupe, Michael; Sicho, Petr; Sidoti, Antonio; Siegert, Frank; Siegrist, James; Sijacki, Djordje; Silbert, Ohad; 
Silver, Yiftah; Silverstein, Daniel; Silverstein, Samuel; Simak, Vladislav; Simic, Ljiljana; Simion, Stefan; Simmons, Brinick; Simonyan, Margar; Sinervo, Pekka; Sinev, Nikolai; Sipica, Valentin; Siragusa, Giovanni; Sisakyan, Alexei; Sivoklokov, Serguei; Sjölin, Jörgen; Sjursen, Therese; Skovpen, Kirill; Skubic, Patrick; Slater, Mark; Slavicek, Tomas; Sliwa, Krzysztof; Sloper, John erik; Smakhtin, Vladimir; Smirnov, Sergei; Smirnov, Yuri; Smirnova, Lidia; Smirnova, Oxana; Smith, Ben Campbell; Smith, Douglas; Smith, Kenway; Smizanska, Maria; Smolek, Karel; Snesarev, Andrei; Snow, Steve; Snow, Joel; Snuverink, Jochem; Snyder, Scott; Soares, Mara; Sobie, Randall; Sodomka, Jaromir; Soffer, Abner; Solans, Carlos; Solar, Michael; Solc, Jaroslav; Solfaroli Camillocci, Elena; Solodkov, Alexander; Solovyanov, Oleg; Sondericker, John; Sopko, Vit; Sopko, Bruno; Sosebee, Mark; Soukharev, Andrey; Spagnolo, Stefania; Spanò, Francesco; Spighi, Roberto; Spigo, Giancarlo; Spila, Federico; Spiwoks, Ralf; Spousta, Martin; Spreitzer, Teresa; Spurlock, Barry; St. 
Denis, Richard Dante; Stahl, Thorsten; Stahlman, Jonathan; Stamen, Rainer; Stancu, Stefan Nicolae; Stanecka, Ewa; Stanek, Robert; Stanescu, Cristian; Stapnes, Steinar; Starchenko, Evgeny; Stark, Jan; Staroba, Pavel; Starovoitov, Pavel; Stastny, Jan; Stavina, Pavel; Steele, Genevieve; Steinbach, Peter; Steinberg, Peter; Stekl, Ivan; Stelzer, Bernd; Stelzer, Harald Joerg; Stelzer-Chilton, Oliver; Stenzel, Hasko; Stevenson, Kyle; Stewart, Graeme; Stockton, Mark; Stoerig, Kathrin; Stoicea, Gabriel; Stonjek, Stefan; Strachota, Pavel; Stradling, Alden; Straessner, Arno; Strandberg, Jonas; Strandberg, Sara; Strandlie, Are; Strauss, Michael; Strizenec, Pavol; Ströhmer, Raimund; Strom, David; Stroynowski, Ryszard; Strube, Jan; Stugu, Bjarne; Sturm, Philipp; Su, Dong; Soh, Dart-yin; Sugaya, Yorihito; Sugimoto, Takuya; Suhr, Chad; Suk, Michal; Sulin, Vladimir; Sultansoy, Saleh; Sumida, Toshi; Sun, Xiaohu; Sundermann, Jan Erik; Suruliz, Kerim; Sushkov, Serge; Susinno, Giancarlo; Sutton, Mark; Suzuki, Takuya; Suzuki, Yu; Sykora, Ivan; Sykora, Tomas; Szymocha, Tadeusz; Sánchez, Javier; Ta, Duc; Tackmann, Kerstin; Taffard, Anyes; Tafirout, Reda; Taga, Adrian; Takahashi, Yuta; Takai, Helio; Takashima, Ryuichi; Takeda, Hiroshi; Takeshita, Tohru; Talby, Mossadek; Talyshev, Alexey; Tamsett, Matthew; Tanaka, Junichi; Tanaka, Reisaburo; Tanaka, Satoshi; Tanaka, Shuji; Tapprogge, Stefan; Tardif, Dominique; Tarem, Shlomit; Tarrade, Fabien; Tartarelli, Giuseppe Francesco; Tas, Petr; Tasevsky, Marek; Tassi, Enrico; Tatarkhanov, Mous; Taylor, Christopher; Taylor, Frank; Taylor, Geoffrey; Taylor, Ryan P.; Taylor, Wendy; Teixeira-Dias, Pedro; Ten Kate, Herman; Teng, Ping-Kun; Tennenbaum-Katan, Yaniv-David; Terada, Susumu; Terashi, Koji; Terron, Juan; Terwort, Mark; Testa, Marianna; Teuscher, Richard; Therhaag, Jan; Thioye, Moustapha; Thoma, Sascha; Thomas, Juergen; Thompson, Stan; Thompson, Emily; Thompson, Peter; Thompson, Paul; Thompson, Ray; Thomson, Evelyn; Thun, Rudolf; Tic, Tomas; 
Tikhomirov, Vladimir; Tikhonov, Yury; Tipton, Paul; Tique Aires Viegas, Florbela De Jes; Tisserant, Sylvain; Toczek, Barbara; Todorov, Theodore; Todorova-Nova, Sharka; Toggerson, Brokk; Tojo, Junji; Tokár, Stanislav; Tokushuku, Katsuo; Tollefson, Kirsten; Tomasek, Lukas; Tomasek, Michal; Tomoto, Makoto; Tompkins, Lauren; Toms, Konstantin; Tonoyan, Arshak; Topfel, Cyril; Topilin, Nikolai; Torchiani, Ingo; Torrence, Eric; Torró Pastor, Emma; Toth, Jozsef; Touchard, Francois; Tovey, Daniel; Trefzger, Thomas; Tremblet, Louis; Tricoli, Alesandro; Trigger, Isabel Marian; Trincaz-Duvoid, Sophie; Trinh, Thi Nguyet; Tripiana, Martin; Triplett, Nathan; Trischuk, William; Trivedi, Arjun; Trocmé, Benjamin; Troncon, Clara; Trzupek, Adam; Tsarouchas, Charilaos; Tseng, Jeffrey; Tsiakiris, Menelaos; Tsiareshka, Pavel; Tsionou, Dimitra; Tsipolitis, Georgios; Tsiskaridze, Vakhtang; Tskhadadze, Edisher; Tsukerman, Ilya; Tsulaia, Vakhtang; Tsung, Jieh-Wen; Tsuno, Soshi; Tsybychev, Dmitri; Tuggle, Joseph; Turecek, Daniel; Turk Cakir, Ilkay; Turlay, Emmanuel; Tuts, Michael; Twomey, Matthew Shaun; Tylmad, Maja; Tyndel, Mike; Uchida, Kirika; Ueda, Ikuo; Ueno, Ryuichi; Ugland, Maren; Uhlenbrock, Mathias; Uhrmacher, Michael; Ukegawa, Fumihiko; Unal, Guillaume; Undrus, Alexander; Unel, Gokhan; Unno, Yoshinobu; Urbaniec, Dustin; Urkovsky, Evgeny; Urquijo, Phillip; Urrejola, Pedro; Usai, Giulio; Uslenghi, Massimiliano; Vacavant, Laurent; Vacek, Vaclav; Vachon, Brigitte; Vahsen, Sven; Valente, Paolo; Valentinetti, Sara; Valkar, Stefan; Valladolid Gallego, Eva; Vallecorsa, Sofia; Valls Ferrer, Juan Antonio; Van Berg, Richard; van der Graaf, Harry; van der Kraaij, Erik; van der Poel, Egge; van der Ster, Daniel; van Eldik, Niels; van Gemmeren, Peter; van Kesteren, Zdenko; van Vulpen, Ivo; Vandelli, Wainer; Vaniachine, Alexandre; Vankov, Peter; Vannucci, Francois; Vari, Riccardo; Varnes, Erich; Varouchas, Dimitris; Vartapetian, Armen; Varvell, Kevin; Vasilyeva, Lidia; Vassilakopoulos, Vassilios; 
Vazeille, Francois; Vellidis, Constantine; Veloso, Filipe; Veneziano, Stefano; Ventura, Andrea; Ventura, Daniel; Venturi, Manuela; Venturi, Nicola; Vercesi, Valerio; Verducci, Monica; Verkerke, Wouter; Vermeulen, Jos; Vetterli, Michel; Vichou, Irene; Vickey, Trevor; Viehhauser, Georg; Villa, Mauro; Villani, Giulio; Villaplana Perez, Miguel; Vilucchi, Elisabetta; Vincter, Manuella; Vinek, Elisabeth; Vinogradov, Vladimir; Viret, Sébastien; Virzi, Joseph; Vitale , Antonio; Vitells, Ofer; Vivarelli, Iacopo; Vives Vaque, Francesc; Vlachos, Sotirios; Vlasak, Michal; Vlasov, Nikolai; Vogel, Adrian; Vokac, Petr; Volpi, Matteo; von der Schmitt, Hans; von Loeben, Joerg; von Radziewski, Holger; von Toerne, Eckhard; Vorobel, Vit; Vorwerk, Volker; Vos, Marcel; Voss, Rudiger; Voss, Thorsten Tobias; Vossebeld, Joost; Vranjes, Nenad; Vranjes Milosavljevic, Marija; Vrba, Vaclav; Vreeswijk, Marcel; Vu Anh, Tuan; Vudragovic, Dusan; Vuillermet, Raphael; Vukotic, Ilija; Wagner, Peter; Walbersloh, Jorg; Walder, James; Walker, Rodney; Walkowiak, Wolfgang; Wall, Richard; Wang, Chiho; Wang, Haichen; Wang, Jin; Wang, Song-Ming; Warburton, Andreas; Ward, Patricia; Warsinsky, Markus; Wastie, Roy; Watkins, Peter; Watson, Alan; Watson, Miriam; Watts, Gordon; Watts, Stephen; Waugh, Anthony; Waugh, Ben; Weber, Marc; Weber, Manuel; Weber, Michele; Weber, Pavel; Weidberg, Anthony; Weingarten, Jens; Weiser, Christian; Wellenstein, Hermann; Wells, Phillippa; Wenaus, Torre; Wendler, Shanti; Wengler, Thorsten; Wenig, Siegfried; Wermes, Norbert; Werner, Matthias; Werner, Per; Werth, Michael; Werthenbach, Ulrich; Wessels, Martin; Whalen, Kathleen; White, Andrew; White, Martin; White, Sebastian; Whitehead, Samuel Robert; Whiteson, Daniel; Whittington, Denver; Wicek, Francois; Wicke, Daniel; Wickens, Fred; Wiedenmann, Werner; Wielers, Monika; Wienemann, Peter; Wiglesworth, Craig; Wiik, Liv Antje Mari; Wildauer, Andreas; Wildt, Martin Andre; Wilkens, Henric George; Williams, Eric; Williams, Hugh; Willocq, 
Stephane; Wilson, John; Wilson, Michael Galante; Wilson, Alan; Wingerter-Seez, Isabelle; Winklmeier, Frank; Wittgen, Matthias; Wolter, Marcin Wladyslaw; Wolters, Helmut; Wosiek, Barbara; Wotschack, Jorg; Woudstra, Martin; Wraight, Kenneth; Wright, Catherine; Wright, Dennis; Wrona, Bozydar; Wu, Sau Lan; Wu, Xin; Wulf, Evan; Wynne, Benjamin; Xaplanteris, Leonidas; Xella, Stefania; Xie, Song; Xu, Da; Xu, Neng; Yamada, Miho; Yamamoto, Akira; Yamamoto, Kyoko; Yamamoto, Shimpei; Yamamura, Taiki; Yamaoka, Jared; Yamazaki, Takayuki; Yamazaki, Yuji; Yan, Zhen; Yang, Haijun; Yang, Un-Ki; Yang, Zhaoyu; Yao, Weiming; Yao, Yushu; Yasu, Yoshiji; Ye, Jingbo; Ye, Shuwei; Yilmaz, Metin; Yoosoofmiya, Reza; Yorita, Kohei; Yoshida, Riktura; Young, Charles; Youssef, Saul; Yu, Dantong; Yu, Jaehoon; Yuan, Li; Yurkewicz, Adam; Zaidan, Remi; Zaitsev, Alexander; Zajacova, Zuzana; Zambrano, Valentina; Zanello, Lucia; Zaytsev, Alexander; Zeitnitz, Christian; Zeller, Michael; Zemla, Andrzej; Zendler, Carolin; Zenin, Oleg; Zenis, Tibor; Zenonos, Zenonas; Zenz, Seth; Zerwas, Dirk; Zevi della Porta, Giovanni; Zhan, Zhichao; Zhang, Huaqiao; Zhang, Jinlong; Zhang, Qizhi; Zhang, Xueyao; Zhao, Long; Zhao, Tianchi; Zhao, Zhengguo; Zhemchugov, Alexey; Zhong, Jiahang; Zhou, Bing; Zhou, Ning; Zhou, Yue; Zhu, Cheng Guang; Zhu, Hongbo; Zhu, Yingchun; Zhuang, Xuai; Zhuravlov, Vadym; Zimmermann, Robert; Zimmermann, Simone; Zimmermann, Stephanie; Ziolkowski, Michael; Zivkovic, Lidija; Zobernig, Georg; Zoccoli, Antonio; zur Nedden, Martin; Zutshi, Vishnu

    2010-01-01

    The ATLAS Inner Detector is a composite tracking system consisting of silicon pixels, silicon strips and straw tubes in a 2 T magnetic field. Its installation was completed in August 2008 and the detector took part in data-taking with single LHC beams and cosmic rays. The initial detector operation, hardware commissioning and in-situ calibrations are described. Tracking performance has been measured with 7.6 million cosmic-ray events, collected using a tracking trigger and reconstructed with modular pattern-recognition and fitting software. The intrinsic hit efficiency and tracking trigger efficiencies are close to 100%. Lorentz angle measurements for both electrons and holes, specific energy-loss calibration and transition radiation turn-on measurements have been performed. Different alignment techniques have been used to reconstruct the detector geometry. After the initial alignment, a transverse impact parameter resolution of 22.1 ± 0.9 μm and a relative momentum resolution σ_p/p = (4.83 ± 0.16)...

  18. FAST CONVERGENT MONTE CARLO RECEIVER FOR OFDM SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    Wu Lili; Liao Guisheng; Bao Zheng; Shang Yong

    2005-01-01

    This paper investigates the design of an optimal Orthogonal Frequency Division Multiplexing (OFDM) receiver under unknown frequency-selective fading, and proposes a fast convergent Monte Carlo receiver. In the proposed method, Markov Chain Monte Carlo (MCMC) methods are employed for blind Bayesian detection without channel estimation. By exploiting the characteristics of OFDM systems, two methods are employed to improve the convergence rate and enhance the efficiency of the MCMC algorithms. One is the integration of the posterior distribution function with respect to the associated channel parameters, which is involved in the derivation of the objective distribution function; the other is intra-symbol differential coding, which eliminates the bimodality problem resulting from the presence of unknown fading channels. Moreover, no matrix inversion is needed thanks to the orthogonality property of OFDM modulation, so the computational load is significantly reduced. Computer simulation results show the effectiveness of the fast convergent Monte Carlo receiver.
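
    The bimodality removed by intra-symbol differential coding can be illustrated with a toy sketch (BPSK symbols and an 8-subcarrier symbol are illustrative assumptions, not the paper's exact scheme): because information is carried in the phase *change* between adjacent subcarriers, a global sign flip of the unknown channel leaves the decoded bits untouched.

```python
import numpy as np

def diff_encode(bits):
    """Differentially encode BPSK bits across the subcarriers of one OFDM symbol."""
    symbols = np.empty(len(bits) + 1)
    symbols[0] = 1.0                          # reference subcarrier
    for k, b in enumerate(bits):
        # bit 0 keeps the phase, bit 1 flips it
        symbols[k + 1] = symbols[k] * (1.0 - 2.0 * b)
    return symbols

def diff_decode(symbols):
    """Recover bits from the phase change between adjacent subcarriers."""
    prod = symbols[1:] * symbols[:-1]
    return (prod < 0).astype(int)

bits = np.array([0, 1, 1, 0, 1, 0, 0, 1])
tx = diff_encode(bits)

# A blind detector cannot distinguish the channel h from -h; with differential
# coding the global sign cancels in the adjacent-subcarrier products.
assert np.array_equal(diff_decode(tx), bits)
assert np.array_equal(diff_decode(-tx), bits)   # sign-flipped channel, same bits
```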

  19. A semianalytic Monte Carlo code for modelling LIDAR measurements

    Science.gov (United States)

    Palazzi, Elisa; Kostadinov, Ivan; Petritoli, Andrea; Ravegnani, Fabrizio; Bortoli, Daniele; Masieri, Samuele; Premuda, Margherita; Giovanelli, Giorgio

    2007-10-01

    LIDAR (LIght Detection and Ranging) is an active optical remote sensing technology with many applications in atmospheric physics. Modelling LIDAR measurements is a useful approach for evaluating the effects of various environmental variables and scenarios, as well as of different measurement geometries and instrumental characteristics, and a Monte Carlo simulation model can provide a reliable answer to these requirements. A semianalytic Monte Carlo code for modelling LIDAR measurements has been developed at ISAC-CNR. The backscattered laser signal detected by the LIDAR system is calculated in the code taking into account the contributions of the main atmospheric molecular constituents and aerosol particles through processes of single and multiple scattering; the contributions of molecular absorption and of ground and cloud reflection are evaluated too. The code can simulate both monostatic and bistatic LIDAR systems. To enhance the efficiency of the Monte Carlo simulation, analytical estimates and expected-value calculations are performed. The code moreover provides artificial variance-reduction devices (forced collision, local forced collision, splitting and Russian roulette), which enable the user to drastically reduce the variance of the calculation.
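
    Of the artificial devices listed above, Russian roulette is the simplest to sketch: a photon whose statistical weight falls below a threshold is killed with probability 1 − p, and survivors have their weight divided by p, so the estimator stays unbiased while low-weight histories are pruned. The threshold and survival probability below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def roulette(weight, threshold=0.1, p_survive=0.5):
    """Apply Russian roulette to one photon weight; return the new weight (0 = killed)."""
    if weight >= threshold:
        return weight                     # heavy photons pass through untouched
    if rng.random() < p_survive:
        return weight / p_survive         # survivor carries the killed photons' weight
    return 0.0

# Unbiasedness check: the mean weight is preserved on average.
w0 = 0.05
ws = np.array([roulette(w0) for _ in range(200_000)])
print(ws.mean())
```

The printed mean stays close to the input weight 0.05, while half of the histories have been terminated and need no further tracking.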

  20. E-2C Loads Calibration in DFRC Flight Loads Lab

    Science.gov (United States)

    Schuster, Lawrence S.

    2008-01-01

    Objectives: a) Safely and efficiently perform structural load tests on the NAVAIR E-2C aircraft to calibrate strain gage instrumentation installed by NAVAIR; b) Collect load test data and derive loads equations for use in NAVAIR flight tests; and c) Assist the flight test team with the use of loads-equation measurements at PAX River.
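
    Deriving a loads equation from a loads-lab calibration is, at its core, a regression of applied loads on strain-gage outputs; the fitted coefficients form the equation used in flight test. A minimal sketch under assumed numbers (gage count, true coefficients and noise level are invented, not E-2C values):

```python
import numpy as np

rng = np.random.default_rng(1)

n_cases, n_gages = 30, 4
true_coeffs = np.array([120.0, -45.0, 60.0, 10.0])   # load per unit gage output

# Simulated calibration: gage outputs for each load case, plus measurement noise.
strains = rng.uniform(-1.0, 1.0, size=(n_cases, n_gages))
applied_load = strains @ true_coeffs + rng.normal(0.0, 0.5, n_cases)

# Least-squares fit: the loads equation is load ≈ strains @ coeffs.
coeffs, *_ = np.linalg.lstsq(strains, applied_load, rcond=None)
print(np.round(coeffs, 1))
```

In flight, the same linear combination of gage readings then yields the structural load estimate.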

  1. Lyman alpha SMM/UVSP absolute calibration and geocoronal correction

    Science.gov (United States)

    Fontenla, Juan M.; Reichmann, Edwin J.

    1987-01-01

    Lyman alpha observations from the Ultraviolet Spectrometer Polarimeter (UVSP) instrument of the Solar Maximum Mission (SMM) spacecraft were analyzed to provide instrumental calibration details. Specific values of the instrument quantum efficiency, the Lyman alpha absolute intensity, and the correction for geocoronal absorption are presented.
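
    The two calibration steps the abstract names can be sketched as follows: raw counts are converted to intensity with the instrument quantum efficiency, then scaled up by exp(τ) to undo geocoronal absorption along the line of sight. The numbers below (QE, optical depth) are assumptions for illustration, not the values derived in the paper.

```python
import numpy as np

def absolute_intensity(counts, quantum_efficiency, geocoronal_tau):
    """Counts -> absolute Lyman-alpha intensity, corrected for geocoronal absorption."""
    intensity = counts / quantum_efficiency    # undo the detector response
    return intensity * np.exp(geocoronal_tau)  # undo absorption: I_obs = I * exp(-tau)

print(absolute_intensity(1.0e4, 0.02, 0.3))
```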

  2. Monte Carlo Study of a 137Cs Calibration Field of the China Institute of Atomic Energy

    CERN Document Server

    Gao, Fei

    2015-01-01

    The MCNP code was used to study the characteristics of a gamma radiation field with collimated beam geometry. A close-to-reality simulation model of the facility was used to calculate the air kerma over the whole range of source-detector distances (SDD) along the central beam axis, and the off-axis air-kerma beam profiles at two different SDDs. The simulation results were tested against measured results acquired at the Radiation Metrology Center of CIAE. Other characteristics, such as the individual contributions of photons scattered in the collimator, floor, walls, mobile platform and other parts of the irradiation hall to the total air kerma rate on the beam axis, were calculated for the purpose of future improvement of the metrological parameters at CIAE. Finally, factors that influence the simulation results, such as detector volume effects and source density effects, were investigated.
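
    The kind of consistency check such a simulation supports can be sketched with a simple model: on a collimated beam, the primary air-kerma rate should follow the inverse-square law with SDD, with the scattered contribution (collimator, walls, floor) riding on top. The reference kerma rate and scatter fraction below are assumed placeholders, not CIAE results.

```python
def air_kerma_rate(sdd_m, k_ref=1.0, sdd_ref_m=1.0, scatter_fraction=0.02):
    """Total air-kerma rate at distance sdd_m: inverse-square primary plus room scatter."""
    primary = k_ref * (sdd_ref_m / sdd_m) ** 2
    scatter = k_ref * scatter_fraction    # roughly distance-independent room scatter
    return primary + scatter

for sdd in (1.0, 2.0, 4.0):
    print(sdd, air_kerma_rate(sdd))
```

Deviations of measured or simulated kerma from this simple law quantify exactly the scatter contributions the study set out to isolate.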

  3. MONTE CARLO CALCULATION FOR CALIBRATION FUNCTIONS IN TOTAL REFLECTION X-RAY FLUORESCENCE SPECTROMETRY

    Institute of Scientific and Technical Information of China (English)

    范钦敏; 刘亚雯; et al.

    1995-01-01

    The simulation approach includes processes such as photon emission from an X-ray tube with a given spectral distribution, total reflection on the sample support, the photoelectric effect in a thin-layer sample, and absorption and detection of the characteristic lines. The calculated results are in agreement with experimental ones.

  4. Design of Experiments, Model Calibration and Data Assimilation

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Brian J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-07-30

    This presentation provides an overview of emulation, calibration and experiment design for computer experiments. Emulation refers to building a statistical surrogate from a carefully selected and limited set of model runs to predict unsampled outputs. The standard kriging approach to emulation of complex computer models is presented. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Markov chain Monte Carlo (MCMC) algorithms are often used to sample the calibrated parameter distribution. Several MCMC algorithms commonly employed in practice are presented, along with a popular diagnostic for evaluating chain behavior. Space-filling approaches to experiment design for selecting model runs to build effective emulators are discussed, including Latin Hypercube Design and extensions based on orthogonal array skeleton designs and imposed symmetry requirements. Optimization criteria that further enforce space-filling, possibly in projections of the input space, are mentioned. Designs to screen for important input variations are summarized and used for variable selection in a nuclear fuels performance application. This is followed by illustration of sequential experiment design strategies for optimization, global prediction, and rare event inference.
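
    The MCMC calibration step described above can be illustrated with a minimal random-walk Metropolis sampler. This is a generic sketch, not the presentation's method: the stand-in "computer model", the flat prior bounds, and all numbers are invented for illustration.

    ```python
    import numpy as np

    # Minimal Bayesian calibration: constrain one uncertain model input (theta)
    # to be consistent with noisy observations, then sample the posterior with
    # random-walk Metropolis. The model here is a cheap stand-in, y = theta * x.

    def model(theta, x):
        return theta * x

    def log_posterior(theta, x, y_obs, sigma=0.5):
        """Gaussian likelihood plus a flat prior on [0, 10]."""
        if not 0.0 <= theta <= 10.0:
            return -np.inf
        resid = y_obs - model(theta, x)
        return -0.5 * np.sum((resid / sigma) ** 2)

    def metropolis(x, y_obs, n_steps=5000, step=0.2, seed=0):
        rng = np.random.default_rng(seed)
        theta = 1.0                                  # starting value
        lp = log_posterior(theta, x, y_obs)
        samples = []
        for _ in range(n_steps):
            prop = theta + step * rng.normal()       # random-walk proposal
            lp_prop = log_posterior(prop, x, y_obs)
            if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject
                theta, lp = prop, lp_prop
            samples.append(theta)
        return np.array(samples)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        x = np.linspace(0.0, 5.0, 20)
        y_obs = 2.0 * x + rng.normal(0.0, 0.5, x.size)  # synthetic "experiment"
        chain = metropolis(x, y_obs)
        print(round(chain[1000:].mean(), 2))  # posterior mean near the true slope
    ```

    In practice the expensive simulator would be replaced by a kriging emulator as described above, and chain diagnostics would be applied before trusting the posterior.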

  5. Monte Carlo model for electron degradation in methane

    CERN Document Server

    Bhardwaj, Anil

    2015-01-01

    We present a Monte Carlo model for the degradation of 1-10,000 eV electrons in an atmosphere of methane. The electron impact cross sections for CH4 are compiled, and analytical representations of these cross sections are used as input to the model. The yield spectrum, which provides information about the number of inelastic events that have taken place in each energy bin, is used to calculate the yield (or population) of the various inelastic processes. The numerical yield spectra obtained from the Monte Carlo simulations are represented analytically, thus generating the analytical yield spectra (AYS). The AYS are employed to obtain the mean energy per ion pair and the efficiencies of the various inelastic processes. The mean energy per ion pair for neutral CH4 is found to be 26 (27.8) eV at 10 (0.1) keV. The efficiency calculation showed that ionization is the dominant process at energies >50 eV, where more than 50% of the incident electron energy is used. Above 25 eV, dissociation has an efficiency of 27%. Below 10 eV, vibrational e...

  6. Discrete angle biasing in Monte Carlo radiation transport

    Energy Technology Data Exchange (ETDEWEB)

    Cramer, S.N.

    1988-05-01

    An angular biasing procedure is presented for use in Monte Carlo radiation transport with discretized scattering-angle data. As in more general studies, the method is shown to reduce statistical weight fluctuations when it is combined with the exponential transformation. This discrete-data application has a simple analytic form which is problem independent. The results from a sample problem illustrate the variance reduction and efficiency characteristics of the combined biasing procedures, and a large neutron and gamma-ray integral experiment is also calculated. A proposal is given for the possible code generation of the biasing parameter p and the preferential direction Ω̄₀ used in the combined biasing schemes.
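
    The weight-correction idea underlying such angular biasing can be sketched generically. This is not the paper's procedure: it is a plain importance-sampling illustration in which discrete scattering cosines are drawn from a biased table favoring forward angles, and each sample carries the weight p_true/p_biased so the estimate stays unbiased. The angle table and probabilities are invented.

    ```python
    import numpy as np

    # Discrete-angle biasing sketch: sample from a biased angular distribution,
    # correct with statistical weights, and recover the unbiased expectation.

    mu = np.array([-0.5, 0.0, 0.5, 0.9])          # discrete scattering cosines
    p_true = np.array([0.25, 0.25, 0.25, 0.25])   # physical distribution
    p_bias = np.array([0.10, 0.15, 0.25, 0.50])   # biased toward forward angles

    def biased_mean_cosine(n, seed=0):
        """Estimate E[mu] under p_true while sampling from p_bias."""
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(mu), size=n, p=p_bias)
        weights = p_true[idx] / p_bias[idx]        # statistical weight correction
        return np.sum(weights * mu[idx]) / n

    if __name__ == "__main__":
        est = biased_mean_cosine(200_000)
        exact = float(np.dot(p_true, mu))          # unbiased expectation
        print(abs(est - exact) < 0.01)
    ```

    The weight fluctuations that such corrections introduce are exactly what the paper's combined scheme with the exponential transformation is designed to control.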

  7. Bond-updating mechanism in cluster Monte Carlo calculations

    Science.gov (United States)

    Heringa, J. R.; Blöte, H. W. J.

    1994-03-01

    We study a cluster Monte Carlo method with an adjustable parameter: the number of energy levels of a demon mediating the exchange of bond energy with the heat bath. The efficiency of the algorithm in the case of the three-dimensional Ising model is studied as a function of the number of such levels. The optimum is found in the limit of an infinite number of levels, where the method reproduces the Wolff or the Swendsen-Wang algorithm. In this limit the size distribution of flipped clusters approximates a power law more closely than that for a finite number of energy levels.

  8. Monte Carlo simulation of a prototype photodetector used in radiotherapy

    CERN Document Server

    Kausch, C; Albers, D; Schmidt, R; Schreiber, B

    2000-01-01

    The imaging performance of prototype electronic portal imaging devices (EPIDs) has been investigated. Monte Carlo simulations have been applied to calculate the modulation transfer function (MTF(f)), the noise power spectrum (NPS(f)), and the detective quantum efficiency (DQE(f)) for different new types of EPID, which consist of a combination of a metal or polyethylene (PE) layer, a Gd₂O₂S phosphor layer, and a flat array of photodiodes. The simulated results agree well with measurements. Based on the simulated results, possible optimizations of these devices are discussed.

  9. Novel Extrapolation Method in the Monte Carlo Shell Model

    CERN Document Server

    Shimizu, Noritaka; Mizusaki, Takahiro; Otsuka, Takaharu; Abe, Takashi; Honma, Michio

    2010-01-01

    We propose an extrapolation method utilizing energy variance in the Monte Carlo shell model in order to estimate the energy eigenvalue and observables accurately. We derive a formula for the energy variance with deformed Slater determinants, which enables us to calculate the energy variance efficiently. The feasibility of the method is demonstrated for the full $pf$-shell calculation of $^{56}$Ni, and the applicability of the method to a system beyond current limit of exact diagonalization is shown for the $pf$+$g_{9/2}$-shell calculation of $^{64}$Ge.

  10. Monte Carlo Frameworks Building Customisable High-performance C++ Applications

    CERN Document Server

    Duffy, Daniel J

    2011-01-01

    This is one of the first books that describes all the steps needed to analyze, design, and implement Monte Carlo applications. It discusses the financial theory as well as the mathematical and numerical background needed to write flexible and efficient C++ code using state-of-the-art design and system patterns, object-oriented and generic programming models, in combination with standard libraries and tools. Includes a CD containing the source code for all examples. It is strongly advised that you experiment with the code by compiling it and extending it to suit your ne

  11. Lattice gauge theories and Monte Carlo simulations

    CERN Document Server

    Rebbi, Claudio

    1983-01-01

    This volume is the most up-to-date review on Lattice Gauge Theories and Monte Carlo Simulations. It consists of two parts. Part one is an introductory lecture on the lattice gauge theories in general, Monte Carlo techniques and on the results to date. Part two consists of important original papers in this field. These selected reprints involve the following: Lattice Gauge Theories, General Formalism and Expansion Techniques, Monte Carlo Simulations. Phase Structures, Observables in Pure Gauge Theories, Systems with Bosonic Matter Fields, Simulation of Systems with Fermions.

  12. Fast quantum Monte Carlo on a GPU

    CERN Document Server

    Lutsyshyn, Y

    2013-01-01

    We present a scheme for the parallelization of quantum Monte Carlo on graphical processing units, focusing on bosonic systems and variational Monte Carlo. We use asynchronous execution schemes with shared memory persistence and obtain excellent acceleration: compared with single-core execution, the GPU-accelerated code runs over 100× faster. The CUDA code is provided along with the package necessary to execute variational Monte Carlo for a system representing liquid helium-4. The program was benchmarked on several models of Nvidia GPU, including the Fermi GTX560 and M2090, and the latest Kepler-architecture K20 GPU. Kepler-specific optimization is discussed.

  13. Monte Carlo simulation for soot dynamics

    KAUST Repository

    Zhou, Kun

    2012-01-01

    A new Monte Carlo method, termed comb-like frame Monte Carlo, is developed to simulate soot dynamics. A detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas-phase solver Chemkin II to simulate soot formation in a 1-D premixed burner-stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with measurements available in the literature. The origin of the bimodal particle size distribution is revealed with quantitative proof.

  14. Monte Carlo approaches to light nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, J.

    1990-01-01

    Significant progress has been made recently in the application of Monte Carlo methods to the study of light nuclei. We review new Green's function Monte Carlo results for the alpha particle, Variational Monte Carlo studies of {sup 16}O, and methods for low-energy scattering and transitions. Through these calculations, a coherent picture of the structure and electromagnetic properties of light nuclei has arisen. In particular, we examine the effect of the three-nucleon interaction and the importance of exchange currents in a variety of experimentally measured properties, including form factors and capture cross sections. 29 refs., 7 figs.

  15. Calibrating the New Ultracam Osprey Oblique Aerial Sensor

    Science.gov (United States)

    Gruber, M.; Walcher, W.

    2014-03-01

    We present methods and results for calibrating the new oblique sensor UltraCam Osprey, which was presented for the first time at the ASPRS 2013 conference and exhibition in Baltimore, MD, in March 2013. Although this was not the first oblique sensor introduced to the market, the UltraCam Osprey shows several new conceptual details, which are illustrated in this presentation. The design of the camera focuses on two important characteristics: a metric nadir component derived from the UltraCam Lp sensor, and collection efficiency through a very large swath width. The nadir sensor consists of a 90-megapixel panchromatic camera, true-color RGB, and a near-infrared camera. Adding six oblique camera heads, with two each in the forward and backward directions, results in unmatched oblique collection efficiency. We first explain the camera and cone configuration along with the geometric layout of the sensor system. We then describe the laboratory setup for geometric calibration of the UltraCam Osprey and the calibration process, along with the actual results of one such calibration showing sub-pixel-accurate image geometry. This proves that the UltraCam Osprey is a fully calibrated metric camera system suitable for photogrammetric survey applications.

  16. Accurate calibration of RL shunts for piezoelectric vibration damping of flexible structures

    DEFF Research Database (Denmark)

    Høgsberg, Jan Becker; Krenk, Steen

    2016-01-01

    Piezoelectric RL (resistive-inductive) shunts are passive resonant devices used for damping of dominant vibration modes of a flexible structure, and their efficiency relies on precise calibration of the shunt components. In the present paper improved calibration accuracy is attained by an extension ...

  17. Calibration of piezoelectric RL shunts with explicit residual mode correction

    DEFF Research Database (Denmark)

    Høgsberg, Jan Becker; Krenk, Steen

    2016-01-01

    Piezoelectric RL (resistive-inductive) shunts are passive resonant devices used for damping of dominant vibration modes of a flexible structure and their efficiency relies on the precise calibration of the shunt components. In the present paper improved calibration accuracy is attained by an exte...

  18. Monte Carlo Comparisons to a Cryogenic Dark Matter Search Detector with low Transition-Edge-Sensor Transition Temperature

    CERN Document Server

    Leman, S W; Brink, P L; Cabrera, B; Cherry, M; Silva, E Do Couto E; Figueroa-Feliciano, E; Kim, P; Mirabolfathi, N; Pyle, M; Resch, R; Sadoulet, B; Serfass, B; Sundqvist, K M; Tomada, A; Young, B A

    2011-01-01

    We present results on phonon quasidiffusion and transition-edge sensor (TES) studies in a large, 3 inch diameter, 1 inch thick [100] high-purity germanium crystal, cooled to 50 mK in the vacuum of a dilution refrigerator and exposed to 59.5 keV gamma rays from an Am-241 calibration source. We compare calibration data with results from a Monte Carlo simulation which includes phonon quasidiffusion and the generation of phonons created by charge carriers as they are drifted across the detector by the ionization readout channels. The phonon energy is then parsed into TES-based phonon readout channels and input into a TES simulator.

  19. Hydrologic calibration of paired watersheds using a MOSUM approach

    Directory of Open Access Journals (Sweden)

    H. Ssegane

    2015-01-01

    Paired watershed studies have historically been used to quantify the hydrologic effects of land use and management practices by concurrently monitoring two neighboring watersheds (a control and a treatment) during the calibration (pre-treatment) and post-treatment periods. This study characterizes seasonal water table and flow response to rainfall during the calibration period and tests a change-detection technique based on moving sums of recursive residuals (MOSUM) to select, for each control-treatment watershed pair, calibration periods during which the regression coefficients for daily water table elevation (WTE) were most stable, thereby reducing regression model uncertainty. The control and treatment watersheds included 1–3 year-old intensively managed loblolly pine (Pinus taeda L.) with natural understory, same-age loblolly pine intercropped with switchgrass (Panicum virgatum), 14–15 year-old thinned loblolly pine with natural understory (control), and switchgrass only. Although monitoring during the calibration period spanned 2009 to 2012, silvicultural operations that occurred during this period, such as harvesting of the existing stand and site preparation for pine and switchgrass establishment, may have acted as external factors, potentially shifting the hydrologic calibration relationships between control and treatment watersheds. Results indicated that MOSUM was able to detect significant changes in the regression parameters for WTE due to silvicultural operations. This approach also minimized the uncertainty of the calibration relationships, which could otherwise mask marginal treatment effects. All calibration relationships developed using the MOSUM method were quantifiable, strong, and consistent, with Nash-Sutcliffe efficiency (NSE) greater than 0.97 for WTE and greater than 0.92 for daily flow, indicating its applicability for choosing calibration periods in paired watershed studies.
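
    The MOSUM construction described in this record can be sketched compactly: recursive residuals from a one-regressor calibration regression are summed over a moving window, and a change point is flagged where the sum drifts away from zero. This is a generic illustration, not the study's implementation; the synthetic data, burn-in, and window length are assumptions.

    ```python
    import numpy as np

    # MOSUM-of-recursive-residuals sketch for detecting a shift in a
    # calibration regression (e.g., control vs. treatment watershed).

    def recursive_residuals(x, y, burn_in=5):
        """Recursive residuals for y = a + b*x, refitting on data up to t-1."""
        X = np.column_stack([np.ones_like(x), x])
        resid = []
        for t in range(burn_in, len(y)):
            Xt, yt = X[:t], y[:t]
            beta, *_ = np.linalg.lstsq(Xt, yt, rcond=None)
            xt = X[t]
            lever = xt @ np.linalg.inv(Xt.T @ Xt) @ xt
            resid.append((y[t] - xt @ beta) / np.sqrt(1.0 + lever))
        return np.array(resid)

    def mosum(resid, window=10):
        """Moving sums of standardized recursive residuals."""
        s = resid / resid.std(ddof=1)
        return np.convolve(s, np.ones(window), mode="valid") / np.sqrt(window)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        x = np.arange(100.0)
        y = 0.5 * x + rng.normal(0.0, 1.0, 100)
        y[60:] += 8.0                          # simulated "treatment" shift
        m = mosum(recursive_residuals(x, y))
        print(int(np.argmax(np.abs(m))))       # window index near the shift
    ```

    Stable calibration periods correspond to stretches where |MOSUM| stays inside its critical bounds; in practice those bounds come from the boundary-crossing probability of the MOSUM process rather than an ad hoc threshold.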

  20. Effects of temporal variability on HBV model calibration

    Directory of Open Access Journals (Sweden)

    Steven Reinaldo Rusli

    2015-10-01

    This study aimed to investigate the effect of temporal variability on the optimization of the Hydrologiska Byråns Vattenbalansavdelning (HBV) model, as well as the calibration performance using manual optimization and average parameter values. By applying the HBV model to the Jiangwan Catchment, whose geological features include many cracks and gaps, simulations under various schemes were developed: short, medium-length, and long temporal calibrations. The results show that, with long temporal calibration, the objective function values of the Nash-Sutcliffe efficiency coefficient (NSE), relative error (RE), root mean square error (RMSE), and high flow ratio generally deliver a preferable simulation. Although NSE and RMSE are relatively stable across temporal scales, significant improvements in RE and the high flow ratio are seen with longer temporal calibration. It is also noted that the use of average parameter values does not lead to better simulation results than manual optimization. With medium-length temporal calibration, manual optimization delivers the best simulation results, with NSE, RE, RMSE, and the high flow ratio being 0.5636, 0.1223, 0.9788, and 0.8547, respectively; calibration using average parameter values delivers 0.4811, 0.4676, 1.0210, and 2.7840, respectively. Similar behavior is found with long temporal calibration, where NSE, RE, RMSE, and the high flow ratio using manual optimization are 0.5253, −0.0692, 1.0580, and 0.9800, respectively, compared with 0.4903, 0.2248, 1.0962, and 0.5479 using average parameter values. This study shows that selecting longer temporal calibration periods generally delivers better simulation for water balance analysis.
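
    The goodness-of-fit metrics cited in this record follow standard definitions and can be computed directly. The sketch below uses the conventional formulas for NSE, relative (volume) error, and RMSE; the observed/simulated arrays are illustrative only.

    ```python
    import numpy as np

    # Standard hydrological goodness-of-fit metrics for an observed/simulated pair.

    def nse(obs, sim):
        """Nash-Sutcliffe efficiency: 1 - SSE / variance about the observed mean."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def relative_error(obs, sim):
        """Relative volume error: (sum(sim) - sum(obs)) / sum(obs)."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return (sim.sum() - obs.sum()) / obs.sum()

    def rmse(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return np.sqrt(np.mean((obs - sim) ** 2))

    if __name__ == "__main__":
        obs = [1.0, 2.0, 3.0, 4.0, 5.0]
        sim = [1.1, 1.9, 3.2, 3.8, 5.1]
        print(round(nse(obs, sim), 3), round(rmse(obs, sim), 3))
    ```

    NSE = 1 indicates a perfect fit and NSE ≤ 0 means the model is no better than the observed mean, which is why values above 0.9 in the studies above indicate strong calibration relationships.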