Quantitative Monte Carlo-based holmium-166 SPECT reconstruction
Elschot, Mattijs; Smits, Maarten L. J.; Nijsen, Johannes F. W.; Lam, Marnix G. E. H.; Zonnenberg, Bernard A.; Bosch, Maurice A. A. J. van den; Jong, Hugo W. A. M. de [Department of Radiology and Nuclear Medicine, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht (Netherlands)]; Viergever, Max A. [Image Sciences Institute, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht (Netherlands)]
2013-11-15
Purpose: Quantitative imaging of the radionuclide distribution is of increasing interest for microsphere radioembolization (RE) of liver malignancies, to aid treatment planning and dosimetry. For this purpose, holmium-166 (¹⁶⁶Ho) microspheres have been developed, which can be visualized with a gamma camera. The objective of this work is to develop and evaluate a new reconstruction method for quantitative ¹⁶⁶Ho SPECT, including Monte Carlo-based modeling of photon contributions from the full energy spectrum. Methods: A fast Monte Carlo (MC) simulator was developed for simulation of ¹⁶⁶Ho projection images and incorporated in a statistical reconstruction algorithm (SPECT-fMC). Photon scatter and attenuation for all photons sampled from the full ¹⁶⁶Ho energy spectrum were modeled during reconstruction by Monte Carlo simulations. The energy- and distance-dependent collimator-detector response was modeled using precalculated convolution kernels. Phantom experiments were performed to quantitatively evaluate image contrast, image noise, count errors, and activity recovery coefficients (ARCs) of SPECT-fMC in comparison with those of an energy window-based method for correction of down-scattered high-energy photons (SPECT-DSW) and a previously presented hybrid method that combines MC simulation of photopeak scatter with energy window-based estimation of down-scattered high-energy contributions (SPECT-ppMC+DSW). Additionally, the impact of SPECT-fMC on whole-body recovered activities (A^est) and estimated radiation absorbed doses was evaluated using clinical SPECT data of six ¹⁶⁶Ho RE patients. Results: At the same noise level, SPECT-fMC images showed substantially higher contrast than SPECT-DSW and SPECT-ppMC+DSW in spheres ≥17 mm in diameter. The count error was reduced from 29% (SPECT-DSW) and 25% (SPECT-ppMC+DSW) to 12% (SPECT-fMC). ARCs in five spherical volumes of 1.96–106.21 ml were improved from 32%–63% (SPECT-DSW) and 50%–80
GPU-Monte Carlo based fast IMRT plan optimization
Yongbao Li
2014-03-01
Li Y, Shi F, Jiang S, Jia X. GPU-Monte Carlo based fast IMRT plan optimization. Int J Cancer Ther Oncol 2014; 2(2):020244. DOI: 10.14319/ijcto.0202.44
Development of a Monte-Carlo based method for calculating the effect of stationary fluctuations
Pettersen, E. E.; Demazière, C.; Jareteg, K.
2015-01-01
that corresponds to the real part of the neutron balance, and one that corresponds to the imaginary part. The two equivalent problems are in nature similar to two subcritical systems driven by external neutron sources, and can thus be treated as such in a Monte Carlo framework. The definition of these two … of light water reactor conditions in an infinite lattice of fuel pins surrounded by water. The test case highlights flux gradients that are steeper in the Monte Carlo-based transport solution than in the diffusion-based solution. Compared to other Monte Carlo-based methods earlier proposed for carrying out …
MCHITS: Monte Carlo based Method for Hyperlink Induced Topic Search on Networks
Zhaoyan Jin
2013-10-01
Hyperlink Induced Topic Search (HITS) is the most authoritative and most widely used personalized ranking algorithm on networks. The HITS algorithm ranks nodes on networks according to power iteration, and has high computational complexity. This paper models the HITS algorithm with the Monte Carlo method, and proposes Monte Carlo based algorithms for the HITS computation. Theoretical analysis and experiments show that Monte Carlo based approximate computation of the HITS ranking substantially reduces computing resources while retaining high accuracy, and significantly outperforms related work.
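The core idea, trading exact power iteration for sampled random walks, can be illustrated with a toy sketch. The walk scheme, node names, and visit-frequency estimator below are illustrative assumptions, not the MCHITS algorithm itself:

```python
import random
from collections import defaultdict

def mc_hits_authority(graph, walks=20000, steps=4, seed=0):
    """Approximate HITS authority scores by Monte Carlo random walks.

    graph: dict node -> list of successors (out-links). Each walk
    alternates a forward step (hub -> authority) with a backward step
    (authority -> hub); visit frequencies of forward arrivals estimate
    authority weight. Illustrative sketch only.
    """
    rng = random.Random(seed)
    rgraph = defaultdict(list)          # reverse adjacency for backward steps
    for u, outs in graph.items():
        for v in outs:
            rgraph[v].append(u)
    counts = defaultdict(int)
    nodes = list(graph)
    for _ in range(walks):
        u = rng.choice(nodes)
        for _ in range(steps):
            if not graph.get(u):
                break
            v = rng.choice(graph[u])    # hub -> authority
            counts[v] += 1
            if not rgraph[v]:
                break
            u = rng.choice(rgraph[v])   # authority -> hub
    total = sum(counts.values()) or 1
    return {n: c / total for n, c in counts.items()}

print(mc_hits_authority({"a": ["b", "c"], "b": ["c"], "c": ["a"]}))
```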
Monte Carlo based radial shield design of typical PWR reactor
Gul, Anas; Khan, Rustam; Qureshi, M. Ayub; Azeem, Muhammad Waqar; Raza, S.A. [Pakistan Institute of Engineering and Applied Sciences, Islamabad (Pakistan). Dept. of Nuclear Engineering]; Stummer, Thomas [Technische Univ. Wien (Austria). Atominst.]
2016-11-15
Neutron and gamma flux and dose equivalent rate distributions are analysed in the radial shields of a typical PWR type reactor using the Monte Carlo radiation transport code MCNP5. The ENDF/B-VI continuous-energy cross-section library has been employed for the criticality and shielding analysis. The computed results are in good agreement with the reference results (maximum difference is less than 56%). This implies that MCNP5 is a good tool for accurate prediction of neutron and gamma flux and dose rates in the radial shield around the core of PWR type reactors.
A Monte Carlo-based model of gold nanoparticle radiosensitization
Lechtman, Eli Solomon
The goal of radiotherapy is to operate within the therapeutic window: delivering doses of ionizing radiation to achieve locoregional tumour control, while minimizing normal tissue toxicity. A greater therapeutic ratio can be achieved by utilizing radiosensitizing agents designed to enhance the effects of radiation at the tumour. Gold nanoparticles (AuNP) represent a novel radiosensitizer with unique and attractive properties. AuNPs enhance local photon interactions, thereby converting photons into localized damaging electrons. Experimental reports of AuNP radiosensitization reveal this enhancement effect to be highly sensitive to irradiation source energy, cell line, and AuNP size, concentration and intracellular localization. This thesis explored the physics and some of the underlying mechanisms behind AuNP radiosensitization. A Monte Carlo simulation approach was developed to investigate the enhanced photoelectric absorption within AuNPs, and to characterize the escaping energy and range of the photoelectric products. Simulations revealed a 10³-fold increase in the rate of photoelectric absorption using low-energy brachytherapy sources compared to megavolt sources. For low-energy sources, AuNPs released electrons with ranges of only a few microns in the surrounding tissue. For higher energy sources, longer ranged photoelectric products travelled orders of magnitude farther. A novel radiobiological model called the AuNP radiosensitization predictive (ARP) model was developed based on the unique nanoscale energy deposition pattern around AuNPs. The ARP model incorporated detailed Monte Carlo simulations with experimentally determined parameters to predict AuNP radiosensitization. This model compared well to in vitro experiments involving two cancer cell lines (PC-3 and SK-BR-3), two AuNP sizes (5 and 30 nm) and two source energies (100 and 300 kVp). The ARP model was then used to explore the effects of AuNP intracellular localization using 1.9 and 100 nm Au
Monte Carlo-based Noise Compensation in Coil Intensity Corrected Endorectal MRI
Lui, Dorothy; Haider, Masoom; Wong, Alexander
2015-01-01
Background: Prostate cancer is one of the most common forms of cancer found in males making early diagnosis important. Magnetic resonance imaging (MRI) has been useful in visualizing and localizing tumor candidates and with the use of endorectal coils (ERC), the signal-to-noise ratio (SNR) can be improved. The coils introduce intensity inhomogeneities and the surface coil intensity correction built into MRI scanners is used to reduce these inhomogeneities. However, the correction typically performed at the MRI scanner level leads to noise amplification and noise level variations. Methods: In this study, we introduce a new Monte Carlo-based noise compensation approach for coil intensity corrected endorectal MRI which allows for effective noise compensation and preservation of details within the prostate. The approach accounts for the ERC SNR profile via a spatially-adaptive noise model for correcting non-stationary noise variations. Such a method is useful particularly for improving the image quality of coil i...
Ma, X. B.; Qiu, R. M.; Chen, Y. X.
2017-02-01
Uncertainties regarding fission fractions are essential in understanding antineutrino flux predictions in reactor antineutrino experiments. A new Monte Carlo-based method to evaluate the covariance coefficients between isotopes is proposed. The covariance coefficients are found to vary with reactor burnup and may change from positive to negative because of balance effects in fissioning. For example, between ²³⁵U and ²³⁹Pu, the covariance coefficient changes from 0.15 to -0.13. Using the equation relating fission fraction and atomic density, consistent uncertainties in the fission fraction and covariance matrix were obtained. The antineutrino flux uncertainty is 0.55%, which does not vary with reactor burnup. The new value is about 8.3% smaller.
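As a rough illustration of how such covariance coefficients can be estimated from Monte Carlo samples, the following sketch uses made-up fission-fraction draws in which ²³⁵U and ²³⁹Pu compensate each other (the balance effect); all numbers are placeholders, not the paper's reactor data:

```python
import numpy as np

# Each row is one sampled set of fission fractions (235U, 238U, 239Pu,
# 241Pu) at a fixed burnup; the covariance coefficient between isotopes
# is then just the sample correlation. Numbers below are invented.
rng = np.random.default_rng(1)
n = 50_000
u235 = rng.normal(0.56, 0.01, n)
pu239 = 0.30 - 0.6 * (u235 - 0.56) + rng.normal(0.0, 0.004, n)  # balance effect
u238 = rng.normal(0.08, 0.002, n)
pu241 = 1.0 - u235 - pu239 - u238          # fractions sum to one
samples = np.column_stack([u235, u238, pu239, pu241])
corr = np.corrcoef(samples, rowvar=False)
print(np.round(corr, 2))   # negative 235U-239Pu entry, as in the abstract
```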
A Monte-Carlo based model of the AX-PET demonstrator and its experimental validation.
Solevi, P; Oliver, J F; Gillam, J E; Bolle, E; Casella, C; Chesi, E; De Leo, R; Dissertori, G; Fanti, V; Heller, M; Lai, M; Lustermann, W; Nappi, E; Pauss, F; Rudge, A; Ruotsalainen, U; Schinzel, D; Schneider, T; Séguinot, J; Stapnes, S; Weilhammer, P; Tuna, U; Joram, C; Rafecas, M
2013-08-21
AX-PET is a novel PET detector based on axially oriented crystals and orthogonal wavelength shifter (WLS) strips, both individually read out by silicon photomultipliers. Its design decouples sensitivity and spatial resolution by reducing the parallax error, owing to the layered arrangement of the crystals. Additionally, the granularity of AX-PET enhances the capability to track photons within the detector, yielding a large fraction of inter-crystal scatter events. These events, if properly processed, can be included in the reconstruction stage, further increasing the sensitivity. Its unique features require dedicated Monte-Carlo simulations, enabling the development of the device, the interpretation of data, and the development of reconstruction codes. At the same time, the non-conventional design of AX-PET poses several challenges to the simulation and modeling tasks, mostly related to the light transport and distribution within the crystals and WLS strips, as well as the electronics readout. In this work we present a hybrid simulation tool based on an analytical model and a Monte-Carlo based description of the AX-PET demonstrator. It was extensively validated against experimental data, providing excellent agreement.
Dosimetric validation of a commercial Monte Carlo based IMRT planning system.
Grofsmid, Dennis; Dirkx, Maarten; Marijnissen, Hans; Woudstra, Evert; Heijmen, Ben
2010-02-01
Recently, a commercial Monte Carlo based IMRT planning system (Monaco version 1.0.0) was released. In this study the dosimetric accuracy of this new planning system was validated. Absolute dose profiles, depth dose curves, and output factors calculated by Monaco were compared with measurements in a water phantom. Different static on-axis and off-axis fields were tested at various source-skin distances for 6, 10, and 18 MV photon beams. Four clinical IMRT plans were evaluated in a water phantom using a linear diode detector array and another six IMRT plans for different tumor sites in solid water using a 2D detector array. In order to evaluate the accuracy of the dose engine near tissue inhomogeneities, absolute dose distributions were measured with Gafchromic EBT film in an inhomogeneous slab phantom. For an end-to-end test, a four-field IMRT plan was applied to an anthropomorphic lung phantom with a simulated tumor peripherally located in the right lung. Gafchromic EBT film, placed in and around the tumor area, was used to evaluate the dose distribution. Generally, the measured and the calculated dose distributions agreed within 2% dose difference or 2 mm distance-to-agreement. Larger dose differences were observed mainly at interfaces with bone. Based on the results of this study, the authors concluded that the dosimetric accuracy of Monaco is adequate for clinical introduction.
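The 2%/2 mm criterion quoted above is the standard gamma-index test for comparing dose distributions. A minimal 1D sketch of that comparison, with toy profiles and global normalization assumed (the abstract does not state this exact implementation), looks like this:

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.02, dta=2.0):
    """1D global gamma index: dd is the dose-difference criterion as a
    fraction of the reference maximum, dta the distance-to-agreement in mm."""
    dmax = d_ref.max()
    g = np.empty_like(d_ref)
    for i, (x, d) in enumerate(zip(x_ref, d_ref)):
        dist2 = ((x_eval - x) / dta) ** 2            # distance term
        dose2 = ((d_eval - d) / (dd * dmax)) ** 2    # dose-difference term
        g[i] = np.sqrt((dist2 + dose2).min())        # best match wins
    return g  # points pass where g <= 1

x = np.linspace(0, 100, 201)                      # position (mm)
ref = np.exp(-((x - 50) / 20) ** 2)               # toy reference profile
ev = 1.01 * np.exp(-((x - 50.5) / 20) ** 2)       # shifted, rescaled copy
print(f"pass rate: {(gamma_1d(x, ref, x, ev) <= 1).mean():.1%}")
```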
A comprehensive revisit of the ρ meson with improved Monte-Carlo based QCD sum rules
Wang, Qi-Nan; Zhang, Zhu-Feng; Steele, T. G.; Jin, Hong-Ying; Huang, Zhuo-Ran
2017-07-01
We improve the Monte-Carlo based QCD sum rules by introducing the rigorous Hölder-inequality-determined sum rule window and a Breit-Wigner type parametrization for the phenomenological spectral function. In this improved sum rule analysis methodology, the sum rule analysis window can be determined without any assumptions on OPE convergence or the QCD continuum. Therefore, an unbiased prediction can be obtained for the phenomenological parameters (the hadronic mass, width, etc.). We test the new approach in the ρ meson channel with re-examination and inclusion of α_s corrections to dimension-4 condensates in the OPE. We obtain results highly consistent with experimental values. We also discuss the possible extension of this method to some other channels. Supported by NSFC (11175153, 11205093, 11347020), the Open Foundation of the Most Important Subjects of Zhejiang Province, and the K. C. Wong Magna Fund in Ningbo University; T. G. Steele is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC); Z. F. Zhang and Z. R. Huang are grateful to the University of Saskatchewan for its warm hospitality.
Nievaart, V. A.; Daquino, G. G.; Moss, R. L.
2007-06-01
Boron Neutron Capture Therapy (BNCT) is a bimodal form of radiotherapy for the treatment of tumour lesions. Since the cancer cells in the treatment volume are targeted with ¹⁰B, a higher dose is given to these cancer cells due to the ¹⁰B(n,α)⁷Li reaction, in comparison with the surrounding healthy cells. In Petten (The Netherlands), at the High Flux Reactor, a specially tailored neutron beam has been designed and installed. Over 30 patients have been treated with BNCT in 2 clinical protocols: a phase I study for the treatment of glioblastoma multiforme and a phase II study on the treatment of malignant melanoma. Furthermore, activities concerning the extracorporeal treatment of metastases in the liver (from colorectal cancer) are in progress. The irradiation beam at the HFR contains both neutrons and gammas, which, together with the complex geometries of both patient and beam set-up, demands very detailed treatment planning calculations. A well designed Treatment Planning System (TPS) should obey the following general scheme: (1) a pre-processing phase (CT and/or MRI scans to create the geometric solid model, cross-section files for neutrons and/or gammas); (2) calculations (3D radiation transport, estimation of neutron and gamma fluences, macroscopic and microscopic dose); (3) a post-processing phase (display of the results, iso-doses and iso-fluences). Treatment planning in BNCT is performed making use of Monte Carlo codes incorporated in a framework which also includes the pre- and post-processing phases. In particular, the glioblastoma multiforme protocol used BNCT_rtpe, while the melanoma metastases protocol uses NCTPlan. In addition, an ad hoc Positron Emission Tomography (PET) based treatment planning system (BDTPS) has been implemented in order to integrate the real macroscopic boron distribution obtained from PET scanning. BDTPS is patented and uses MCNP as the calculation engine. The precision obtained by the Monte Carlo based TPSs exploited at Petten
SU-E-T-761: TOMOMC, A Monte Carlo-Based Planning Verification Tool for Helical Tomotherapy
Chibani, O; Ma, C [Fox Chase Cancer Center, Philadelphia, PA (United States)
2015-06-15
Purpose: Present a new Monte Carlo code (TOMOMC) to calculate 3D dose distributions for patients undergoing helical tomotherapy treatments. TOMOMC performs CT-based dose calculations using the actual dynamic variables of the machine (couch motion, gantry rotation, and MLC sequences). Methods: TOMOMC is based on the GEPTS (Gamma Electron and Positron Transport System) general-purpose Monte Carlo system (Chibani and Li, Med. Phys. 29, 2002, 835). First, beam models for the Hi-Art tomotherapy machine were developed for the different beam widths (1, 2.5 and 5 cm). The beam model accounts for the exact geometry and composition of the different components of the linac head (target, primary collimator, jaws and MLCs). The beam models were benchmarked by comparing calculated PDDs and lateral/transversal dose profiles with ionization chamber measurements in water. See figures 1-3. The MLC model was tuned in such a way that the tongue-and-groove effect and inter-leaf and intra-leaf transmission are modeled correctly. See figure 4. Results: By simulating the exact patient anatomy and the actual treatment delivery conditions (couch motion, gantry rotation and MLC sinogram), TOMOMC is able to calculate the 3D patient dose distribution, which is in principle more accurate than the one from the treatment planning system (TPS) since it relies on the Monte Carlo method (gold standard). Dose volume parameters based on the Monte Carlo dose distribution can also be compared to those produced by the TPS. Attached figures show isodose lines for a H&N patient calculated by TOMOMC (transverse and sagittal views). Analysis of differences between TOMOMC and TPS is ongoing work for different anatomic sites. Conclusion: A new Monte Carlo code (TOMOMC) was developed for Tomotherapy patient-specific QA. The next step in this project is implementing GPU computing to speed up the Monte Carlo simulation and make Monte Carlo-based treatment verification a practical solution.
Albin, T.; Koschny, D.; Soja, R.; Srama, R.; Poppe, B.
2016-01-01
The Canary Islands Long-Baseline Observatory (CILBO) is a double-station meteor camera system (Koschny et al., 2013; Koschny et al., 2014) that consists of 5 cameras. The two cameras considered in this report, ICC7 and ICC9, are installed on Tenerife and La Palma. They point to the same atmospheric volume between both islands, allowing stereoscopic observation of meteors. Since its installation in 2011 and the start of operation in 2012, CILBO has detected over 15000 simultaneously observed meteors. Koschny and Diaz (2002) developed the Meteor Orbit and Trajectory Software (MOTS) to compute the trajectory of such meteors. The software uses the astrometric data from the detection software MetRec (Molau, 1998) and determines the trajectory in geodetic coordinates. This work presents a Monte-Carlo based extension of the MOTS code to compute the orbital elements of simultaneously detected meteors by CILBO.
Composite biasing in Monte Carlo radiative transfer
Baes, Maarten; Lunttila, Tuomas; Bianchi, Simone; Camps, Peter; Juvela, Mika; Kuiper, Rolf
2016-01-01
Biasing or importance sampling is a powerful technique in Monte Carlo radiative transfer, and can be applied in different forms to increase the accuracy and efficiency of simulations. One of the drawbacks of the use of biasing is the potential introduction of large weight factors. We discuss a general strategy, composite biasing, to suppress the appearance of large weight factors. We use this composite biasing approach for two different problems faced by current state-of-the-art Monte Carlo radiative transfer codes: the generation of photon packages from multiple components, and the penetration of radiation through high optical depth barriers. In both cases, the implementation of the relevant algorithms is trivial and does not interfere with any other optimisation techniques. Through simple test models, we demonstrate the general applicability, accuracy and efficiency of the composite biasing approach. In particular, for the penetration of high optical depths, the gain in efficiency is spectacular for the spe...
The Specific Bias in Dynamic Monte Carlo Simulations of Nuclear Reactor
Yamamoto, Toshihisa; Endo, Hiroshi; Ishizu, Tomoko; Tatewaki, Isao
2014-06-01
During the development of a Monte-Carlo-based dynamic code system, we have encountered two major Monte-Carlo-specific problems. One is a breakdown due to "false super-criticality", caused by an accidentally large eigenvalue arising from statistical error even though the reactor is not actually supercritical. The other problem, which is the main topic of this paper, is that the statistical error in the power level computed using the reactivity calculated with a Monte Carlo code is not symmetric about its mean but always positively biased. This means that the bias accumulates as the calculation proceeds and consequently results in over-estimation of the final power level. It should be noted that the bias is not eliminated by refining the time step as long as the variance is not zero. A preliminary investigation of this matter using the one-group-precursor point kinetic equations was made, and it was concluded that the bias in power level is approximately proportional to the product of the variance in the Monte Carlo calculation and the elapsed time. This conclusion was verified with some numerical experiments. This outcome is important in quantifying the required precision of Monte-Carlo-based reactivity calculations.
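The reported proportionality of the bias to variance times elapsed time can be reproduced with a minimal numerical experiment. The prompt-only update below, with delayed neutrons neglected and all numbers arbitrary, is a simplification for illustration, not the authors' code system:

```python
import numpy as np

# Advance power with a reactivity that is zero on average but carries
# Monte Carlo statistical noise; the mean power drifts upward because
# the update is exponential in the sampled reactivity.
rng = np.random.default_rng(0)
n_hist, n_steps, dt, gen_time, sigma = 10_000, 200, 1e-3, 1e-4, 5e-3
rho = rng.normal(0.0, sigma, size=(n_hist, n_steps))   # noisy reactivity
power = np.exp((rho * dt / gen_time).cumsum(axis=1))   # prompt point kinetics
bias = power.mean(axis=0) - 1.0
# theory: bias ~ exp(k * sigma^2 * dt^2 / (2 * gen_time^2)) - 1, i.e. it
# grows with variance and elapsed time, matching the abstract's conclusion
print(f"relative bias after {n_steps} steps: {bias[-1]:.1%}")
```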
van der Graaf, E. R.; Limburg, J.; Koomans, R. L.; Tijs, M.
2011-01-01
The calibration of scintillation detectors for gamma radiation in a well characterized setup can be transferred to other geometries using Monte Carlo simulations to account for the differences between the calibration and the other geometry. In this study a calibration facility was used that is const
Monte-Carlo based prediction of radiochromic film response for hadrontherapy dosimetry
Frisson, T. [Universite de Lyon, F-69622 Lyon (France); CREATIS-LRMN, INSA, Batiment Blaise Pascal, 7 avenue Jean Capelle, 69621 Villeurbanne Cedex (France); Centre Leon Berrard - 28 rue Laennec, F-69373 Lyon Cedex 08 (France)], E-mail: frisson@creatis.insa-lyon.fr; Zahra, N. [Universite de Lyon, F-69622 Lyon (France); IPNL - CNRS/IN2P3 UMR 5822, Universite Lyon 1, Batiment Paul Dirac, 4 rue Enrico Fermi, F-69622 Villeurbanne Cedex (France); Centre Leon Berrard - 28 rue Laennec, F-69373 Lyon Cedex 08 (France); Lautesse, P. [Universite de Lyon, F-69622 Lyon (France); IPNL - CNRS/IN2P3 UMR 5822, Universite Lyon 1, Batiment Paul Dirac, 4 rue Enrico Fermi, F-69622 Villeurbanne Cedex (France); Sarrut, D. [Universite de Lyon, F-69622 Lyon (France); CREATIS-LRMN, INSA, Batiment Blaise Pascal, 7 avenue Jean Capelle, 69621 Villeurbanne Cedex (France); Centre Leon Berrard - 28 rue Laennec, F-69373 Lyon Cedex 08 (France)
2009-07-21
A model has been developed to calculate MD-55-V2 radiochromic film response to ion irradiation. This model is based on photon film response and film saturation by high local energy deposition computed by Monte-Carlo simulation. We have studied the response of the film to photon irradiation and we proposed a calculation method for hadron beams.
Uncertainties in Monte Carlo-based absorbed dose calculations for an experimental benchmark.
Renner, F; Wulff, J; Kapsch, R-P; Zink, K
2015-10-01
There is a need to verify the accuracy of general purpose Monte Carlo codes like EGSnrc, which are commonly employed for investigations of dosimetric problems in radiation therapy. A number of experimental benchmarks have been published to compare calculated values of absorbed dose to experimentally determined values. However, there is a lack of absolute benchmarks, i.e., benchmarks without normalization, which may cause some quantities to cancel. Therefore, at the Physikalisch-Technische Bundesanstalt a benchmark experiment was performed which aimed at the absolute verification of radiation transport calculations for dosimetry in radiation therapy. A thimble-type ionization chamber in a solid phantom was irradiated by high-energy bremsstrahlung and the mean absorbed dose in the sensitive volume was measured per incident electron on the target. The characteristics of the accelerator and experimental setup were precisely determined, and the results of a corresponding Monte Carlo simulation with EGSnrc are presented within this study. For a meaningful comparison, an analysis of the uncertainty of the Monte Carlo simulation is necessary. In this study uncertainties with regard to the simulation geometry, the radiation source, transport options of the Monte Carlo code and specific interaction cross sections are investigated, applying the general methodology of the Guide to the expression of uncertainty in measurement. Besides studying the general influence of changes in transport options of the EGSnrc code, uncertainties are analyzed by estimating the sensitivity coefficients of various input quantities in a first step. Secondly, standard uncertainties are assigned to each quantity known from the experiment, e.g., uncertainties for geometric dimensions. Data for more fundamental quantities such as photon cross sections and the I-value of electron stopping powers are taken from the literature. The significant uncertainty contributions are identified as
Using a Monte-Carlo-based approach to evaluate the uncertainty on fringe projection technique
Molimard, Jérôme
2013-01-01
A complete uncertainty analysis of a given fringe projection set-up has been performed using a Monte Carlo approach. In particular, the calibration procedure is taken into account. Two applications are given: at the macroscopic scale, phase noise is predominant, whilst at the microscopic scale, both phase noise and calibration errors are important. Finally, the uncertainty found at the macroscopic scale is close to that observed in some experimental tests (~100 µm).
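A minimal sketch of this kind of Monte Carlo uncertainty budget follows, using a simplified phase-to-height relation and assumed noise levels; neither the model form nor the numbers are the paper's actual set-up or calibration model:

```python
import numpy as np

# Propagate phase noise and calibration-parameter uncertainty through a
# simplified fringe-projection height equation h = phi * p / (2*pi*tan(theta)).
rng = np.random.default_rng(0)
n = 200_000
phi = rng.normal(12.0, 0.05, n)                 # unwrapped phase (rad) + noise
pitch = rng.normal(0.5, 0.002, n)               # fringe pitch (mm), calibration
theta = rng.normal(np.deg2rad(30), np.deg2rad(0.1), n)  # projection angle
h = phi * pitch / (2 * np.pi * np.tan(theta))   # height per draw
print(f"h = {h.mean():.3f} mm, 95% interval ±{1.96 * h.std():.3f} mm")
```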
Geng, Changran; Tang, Xiaobin; Gong, Chunhui; Guan, Fada; Johns, Jesse; Shu, Diyun; Chen, Da
2015-12-01
The active shielding technique has great potential for radiation protection in space exploration because it has the advantage of a significant mass saving compared with the passive shielding technique. This paper demonstrates a Monte Carlo-based approach to evaluating the shielding effectiveness of the active shielding technique using confined magnetic fields (CMFs). The International Commission on Radiological Protection reference anthropomorphic phantom, as well as the toroidal CMF, was modeled using the Monte Carlo toolkit Geant4. The penetrating primary particle fluence, organ-specific dose equivalent, and male effective dose were calculated for particles in galactic cosmic radiation (GCR) and solar particle events (SPEs). Results show that the SPE protons can be easily shielded against, even almost completely deflected, by the toroidal magnetic field. GCR particles can also be more effectively shielded against by increasing the magnetic field strength. Our results also show that the introduction of a structural Al wall in the CMF did not provide additional shielding for GCR; in fact it can weaken the total shielding effect of the CMF. This study demonstrated the feasibility of accurately determining the radiation field inside the environment and evaluating the organ dose equivalents for astronauts under active shielding using the CMF.
Monte Carlo based geometrical model for efficiency calculation of an n-type HPGe detector
Padilla Cabal, Fatima, E-mail: fpadilla@instec.c [Instituto Superior de Tecnologias y Ciencias Aplicadas, 'Quinta de los Molinos', Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba)]; Lopez-Pino, Neivy; Luis Bernal-Castillo, Jose; Martinez-Palenzuela, Yisel; Aguilar-Mena, Jimmy; D'Alessandro, Katia; Arbelo, Yuniesky; Corrales, Yasser; Diaz, Oscar [Instituto Superior de Tecnologias y Ciencias Aplicadas, 'Quinta de los Molinos', Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba)]
2010-12-15
A procedure to optimize the geometrical model of an n-type detector is described. Sixteen lines from seven point sources (²⁴¹Am, ¹³³Ba, ²²Na, ⁶⁰Co, ⁵⁷Co, ¹³⁷Cs and ¹⁵²Eu) placed at three different source-to-detector distances (10, 20 and 30 cm) were used to calibrate a low-background gamma spectrometer between 26 and 1408 keV. Direct Monte Carlo techniques using the MCNPX 2.6 and GEANT4 9.2 codes, and a semi-empirical procedure, were performed to obtain theoretical efficiency curves. Since discrepancies were found between experimental and calculated data using the manufacturer parameters of the detector, a detailed study of the crystal dimensions and the geometrical configuration was carried out. The relative deviation from experimental data decreases from a mean value of 18% to 4% after the parameters were optimized.
A Monte Carlo-based treatment-planning tool for ion beam therapy
Böhlen, T T; Dosanjh, M; Ferrari, A; Haberer, T; Parodi, K; Patera, V; Mairan, A
2013-01-01
Ion beam therapy, as an emerging radiation therapy modality, requires continuous efforts to develop and improve tools for patient treatment planning (TP) and research applications. Dose and fluence computation algorithms using the Monte Carlo (MC) technique have served for decades as reference tools for accurate dose computations for radiotherapy. In this work, a novel MC-based treatment-planning (MCTP) tool for ion beam therapy using the pencil beam scanning technique is presented. It allows single-field and simultaneous multiple-field optimization for realistic patient treatment conditions and for dosimetric quality assurance for irradiation conditions at state-of-the-art ion beam therapy facilities. It employs iterative procedures that allow for the optimization of absorbed dose and relative biological effectiveness (RBE)-weighted dose using radiobiological input tables generated by external RBE models. Using a re-implementation of the local effect model (LEM), the MCTP tool is able to perform TP studies u...
Experimental validation of a rapid Monte Carlo based micro-CT simulator
Colijn, A. P.; Zbijewski, W.; Sasov, A.; Beekman, F. J.
2004-09-01
We describe a newly developed, accelerated Monte Carlo simulator of a small animal micro-CT scanner. Transmission measurements using aluminium slabs are employed to estimate the spectrum of the x-ray source. The simulator incorporating this spectrum is validated with micro-CT scans of physical water phantoms of various diameters, some containing stainless steel and Teflon rods. Good agreement is found between simulated and real data: the normalized error of simulated projections, as compared to the real ones, is typically smaller than 0.05. The reconstructions obtained from simulated and real data are also found to be similar. Thereafter, effects of scatter are studied using a voxelized software phantom representing a rat body. It is shown that the scatter fraction can reach tens of percent in specific areas of the body and therefore scatter can significantly affect quantitative accuracy in small animal CT imaging.
Monte Carlo based dosimetry for neutron capture therapy of brain tumors
Zaidi, Lilia; Belgaid, Mohamed; Khelifi, Rachid
2016-11-01
Boron Neutron Capture Therapy (BNCT) is a biologically targeted radiation therapy for cancer which combines neutron irradiation with a tumor-targeting agent labeled with boron-10 (¹⁰B), which has a high thermal neutron capture cross section. The tumor area is subjected to the neutron irradiation. After a thermal neutron capture, the excited ¹¹B nucleus fissions into an alpha particle and a lithium recoil nucleus. The emitted high Linear Energy Transfer (LET) particles deposit their energy within a range of about 10 μm, which is of the same order as the cell diameter [1]; at the same time, other reactions due to neutron activation of body components are produced. In-phantom measurement of the physical dose distribution is very important for BNCT planning validation. Determination of the total absorbed dose requires complex calculations, which were carried out using the Monte Carlo MCNP code [2].
Monte Carlo based statistical power analysis for mediation models: methods and software.
Zhang, Zhiyong
2014-12-01
The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
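The proposed approach, estimating power as the fraction of simulated data sets in which a bootstrap confidence interval for the indirect effect excludes zero, can be sketched in a few lines. The model sizes, effect values, and replication counts below are illustrative only, and the bmem package itself is an R package, so this Python sketch is an analogy rather than its API:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ab(x, m, y):
    """Indirect effect a*b for the simple mediation model X -> M -> Y."""
    a = np.polyfit(x, m, 1)[0]                      # slope of M on X
    X = np.column_stack([m, x, np.ones_like(x)])    # Y on M and X
    b = np.linalg.lstsq(X, y, rcond=None)[0][0]
    return a * b

def power(n=100, a=0.3, b=0.3, reps=200, boots=200, alpha=0.05):
    hits = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)              # nonnormal noise possible
        y = b * m + 0.1 * x + rng.normal(size=n)
        est = np.empty(boots)
        for k in range(boots):
            idx = rng.integers(0, n, n)             # nonparametric bootstrap
            est[k] = fit_ab(x[idx], m[idx], y[idx])
        lo, hi = np.percentile(est, [100 * alpha / 2, 100 * (1 - alpha / 2)])
        hits += (lo > 0) or (hi < 0)                # CI excludes zero
    return hits / reps

print(f"estimated power: {power():.2f}")
```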
Rivard, Mark J.; Melhus, Christopher S.; Granero, Domingo; Perez-Calatayud, Jose; Ballester, Facundo [Department of Radiation Oncology, Tufts University School of Medicine, Boston, Massachusetts 02111 (United States); Radiation Oncology Department, Physics Section, 'La Fe' University Hospital, Avenida Campanar 21, E-46009 Valencia (Spain); Department of Atomic, Molecular, and Nuclear Physics, University of Valencia, C/Dr. Moliner 50, E-46100 Burjassot, Spain and IFIC (University of Valencia-CSIC), C/Dr. Moliner 50, E-46100 Burjassot (Spain)]
2009-06-15
dosimetry parameter data ≤0.1 cm was required, and the virtual brachytherapy source data set included over 5000 data points. On the other hand, the lack of consideration of the applicator heterogeneity effect caused conventional dose overestimates exceeding an order of magnitude in regions of clinical interest. This approach is justified by the improved dose estimates. In conclusion, a new technique was developed to incorporate complex Monte Carlo-based brachytherapy dose distributions into a conventional TPS. These results are generalizable to other brachytherapy source types and other TPSs.
Monte Carlo based unit commitment procedures for the deregulated market environment
Granelli, G.P.; Marannino, P.; Montagna, M.; Zanellini, F. [Universita di Pavia, Pavia (Italy). Dipartimento di Ingegneria Elettrica
2006-12-15
The unit commitment problem, originally conceived in the framework of short term operation of vertically integrated utilities, needs a thorough re-examination in the light of the ongoing transition towards the open electricity market environment. In this work the problem is re-formulated to adapt unit commitment to the viewpoint of a generation company (GENCO) which is no longer bound to satisfy its load, but is willing to maximize its profits. Moreover, with reference to the present-day situation in many countries, the presence of a GENCO (the former monopolist) which is in the position of exerting market power requires a careful analysis to be carried out considering the different perspectives of a price taker and of a price maker GENCO. Unit commitment is thus shown to lead to a couple of distinct, yet slightly different problems. The unavoidable uncertainties in load profile and price behaviour over the time period of interest are also taken into account by means of a Monte Carlo simulation. Both the forecasted loads and prices are handled as random variables with a normal multivariate distribution. The correlation between the random input variables corresponding to successive hours of the day was considered by carrying out a statistical analysis of actual load and price data. The whole procedure was tested making use of reasonable approximations of the actual data of the thermal generation units available to some actual GENCOs operating in Italy.
Monte Carlo based water/medium stopping-power ratios for various ICRP and ICRU tissues.
Fernández-Varea, José M; Carrasco, Pablo; Panettieri, Vanessa; Brualla, Lorenzo
2007-11-07
Water/medium stopping-power ratios, s(w,m), have been calculated for several ICRP and ICRU tissues, namely adipose tissue, brain, cortical bone, liver, lung (deflated and inflated) and spongiosa. The considered clinical beams were 6 and 18 MV x-rays and the field size was 10 × 10 cm². Fluence distributions were scored at a depth of 10 cm using the Monte Carlo code PENELOPE. The collision stopping powers for the studied tissues were evaluated employing the formalism of ICRU Report 37 (1984 Stopping Powers for Electrons and Positrons (Bethesda, MD: ICRU)). The Bragg-Gray values of s(w,m) calculated with these ingredients range from about 0.98 (adipose tissue) to nearly 1.14 (cortical bone), displaying a rather small variation with beam quality. Excellent agreement, to within 0.1%, is found with stopping-power ratios reported by Siebers et al (2000a Phys. Med. Biol. 45 983-95) for cortical bone, inflated lung and spongiosa. In the case of cortical bone, s(w,m) changes approximately 2% when either ICRP or ICRU compositions are adopted, whereas the stopping-power ratios of lung, brain and adipose tissue are less sensitive to the selected composition. The mass density of lung also influences the calculated values of s(w,m), reducing them by around 1% (6 MV) and 2% (18 MV) when going from deflated to inflated lung.
Iravanian, Shahriar; Kanu, Uche B; Christini, David J
2012-07-01
Cardiac repolarization alternans is an electrophysiologic condition identified by a beat-to-beat fluctuation in action potential waveform. It has been mechanistically linked to instances of T-wave alternans, a clinically defined ECG alternation in T-wave morphology, and associated with the onset of cardiac reentry and sudden cardiac death. Many alternans detection algorithms have been proposed in the past, but the majority have been designed specifically for use with T-wave alternans. Action potential duration (APD) signals obtained from experiments (especially those derived from optical mapping) possess unique characteristics, which requires the development and use of a more appropriate alternans detection method. In this paper, we present a new class of algorithms, based on the Monte Carlo method, for the detection and quantitative measurement of alternans. Specifically, we derive a set of algorithms (one an analytical and more efficient version of the other) and compare its performance with the standard spectral method and the generalized likelihood ratio test algorithm using synthetic APD sequences and optical mapping data obtained from an alternans control experiment. We demonstrate the benefits of the new algorithm in the presence of Gaussian and Laplacian noise and frame-shift errors. The proposed algorithms are well suited for experimental applications, and furthermore, have low complexity and are implementable using fixed-point arithmetic, enabling potential use with implantable cardiac devices.
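While the paper derives its own Monte Carlo detectors, the flavor of a Monte Carlo significance test for APD alternans can be conveyed with a simple permutation sketch; the synthetic APD sequence and the permutation null below are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n_beats = 128
# synthetic APD sequence (ms): 5 ms beat-to-beat alternation plus noise
apd = 200 + 5 * (-1) ** np.arange(n_beats) + rng.normal(0, 3, n_beats)

def alt_mag(seq):
    """Magnitude of the alternating (even/odd beat) component."""
    return abs(np.mean(seq * (-1) ** np.arange(len(seq))))

obs = alt_mag(apd - apd.mean())
# Monte Carlo null: shuffling beats destroys any true alternation
null = np.array([alt_mag(rng.permutation(apd) - apd.mean())
                 for _ in range(2000)])
print(f"alternans {obs:.2f} ms, p ~ {(null >= obs).mean():.4f}")
```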
Markov chain Monte Carlo based analysis of post-translationally modified VDAC gating kinetics.
Tewari, Shivendra G; Zhou, Yifan; Otto, Bradley J; Dash, Ranjan K; Kwok, Wai-Meng; Beard, Daniel A
2014-01-01
The voltage-dependent anion channel (VDAC) is the main conduit for permeation of solutes (including nucleotides and metabolites) of up to 5 kDa across the mitochondrial outer membrane (MOM). Recent studies suggest that VDAC activity is regulated via post-translational modifications (PTMs). Yet the nature and effect of these modifications is not understood. Herein, single channel currents of wild-type, nitrosated, and phosphorylated VDAC are analyzed using a generalized continuous-time Markov chain Monte Carlo (MCMC) method. This developed method describes three distinct conducting states (open, half-open, and closed) of VDAC activity. Lipid bilayer experiments are also performed to record single VDAC activity under un-phosphorylated and phosphorylated conditions, and are analyzed using the developed stochastic search method. Experimental data show significant alteration in VDAC gating kinetics and conductance as a result of PTMs. The effect of PTMs on VDAC kinetics is captured in the parameters associated with the identified Markov model. Stationary distributions of the Markov model suggest that nitrosation of VDAC not only decreased its conductance but also significantly locked VDAC in a closed state. On the other hand, stationary distributions of the model associated with un-phosphorylated and phosphorylated VDAC suggest a reversal in channel conformation from relatively closed state to an open state. Model analyses of the nitrosated data suggest that faster reaction of nitric oxide with Cys-127 thiol group might be responsible for the biphasic effect of nitric oxide on basal VDAC conductance.
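The three-state gating picture can be made concrete with a small sketch: given a generator matrix for the open, half-open, and closed states (rate constants below are made up for illustration), the stationary distribution the abstract compares across PTM conditions follows from solving pi Q = 0 with the probabilities summing to one:

```python
import numpy as np

# Hypothetical transition-rate generator (1/ms); each row sums to zero.
Q = np.array([[-2.0,  1.5,  0.5],    # from open
              [ 1.0, -1.8,  0.8],    # from half-open
              [ 0.2,  0.6, -0.8]])   # from closed

# stationary pi solves pi Q = 0 subject to sum(pi) = 1
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(dict(zip(["open", "half-open", "closed"], np.round(pi, 3))))
```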
Saha, Krishnendu; Straus, Kenneth J; Chen, Yu; Glick, Stephen J
2014-08-28
To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated for imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo Simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system element storage space was used by calculating only a subset of matrix elements and then estimating the rest of the elements by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at 45% reduced noise level and 1.5 to 3 times resolution performance improvement when compared to MLEM reconstruction using a simple line-integral model. The GATE based system matrix reconstruction technique promises to improve resolution and noise performance and reduce image distortion at FOV periphery compared to line-integral based system matrix reconstruction.
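The storage-saving symmetry argument can be sketched as follows; the array sizes and the random "simulated" block are toy stand-ins for the GATE-computed matrix elements:

```python
import numpy as np

# With polar (angularly symmetric) voxels and a ring scanner, the system
# matrix is block-circulant: only the blocks for one detector offset need
# be computed; the rest follow by rotating the angular voxel index.
n_ang = 8                      # angular sectors = rotational symmetry order
n_rad, n_det = 4, 8            # radial sectors, detector pairs per offset
rng = np.random.default_rng(0)
base = rng.random((n_det, n_rad * n_ang))   # "simulated" block for offset 0

def rows(offset):
    """Rows for detector pairs rotated by `offset` sectors: rotate the
    angular index of every polar voxel instead of re-simulating."""
    blk = base.reshape(n_det, n_rad, n_ang)
    return np.roll(blk, offset, axis=2).reshape(n_det, -1)

full = np.vstack([rows(k) for k in range(n_ang)])   # (n_ang*n_det, voxels)
print(full.shape, "stored:", base.size, "instead of", full.size)
```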
Monte Carlo based verification of a beam model used in a treatment planning system
Wieslander, E.; Knöös, T.
2008-02-01
Modern treatment planning systems (TPSs) usually separate the dose modelling into a beam modelling phase, describing the beam exiting the accelerator, followed by a subsequent dose calculation in the patient. The aim of this work is to use the Monte Carlo code system EGSnrc to study the modelling of head scatter as well as the transmission through the multi-leaf collimator (MLC) and diaphragms in the beam model used in a commercial TPS (MasterPlan, Nucletron B.V.). An Elekta Precise linear accelerator equipped with an MLC has been modelled in BEAMnrc, based on available information from the vendor regarding the material and geometry of the treatment head. The collimation in the MLC direction consists of leaves, which are complemented by a backup diaphragm. The characteristics of the electron beam, i.e., energy and spot size, impinging on the target have been tuned to match measured data. Phase spaces from simulations of the treatment head are used to extract the scatter from, e.g., the flattening filter and the collimating structures. Similar data for the source models used in the TPS are extracted from the treatment planning system, thus a comprehensive analysis is possible. Simulations in a water phantom, with DOSXYZnrc, are also used to study the modelling of the MLC and the diaphragms by the TPS. The results from this study will be helpful to understand the limitations of the model in the TPS and provide knowledge for further improvements of the TPS source modelling.
A Monte Carlo-based treatment planning tool for proton therapy
Mairani, A.; Böhlen, T. T.; Schiavi, A.; Tessonnier, T.; Molinelli, S.; Brons, S.; Battistoni, G.; Parodi, K.; Patera, V.
2013-04-01
In the field of radiotherapy, Monte Carlo (MC) particle transport calculations are recognized for their superior accuracy in predicting dose and fluence distributions in patient geometries compared to analytical algorithms which are generally used for treatment planning due to their shorter execution times. In this work, a newly developed MC-based treatment planning (MCTP) tool for proton therapy is proposed to support treatment planning studies and research applications. It allows for single-field and simultaneous multiple-field optimization in realistic treatment scenarios and is based on the MC code FLUKA. Relative biological effectiveness (RBE)-weighted dose is optimized either with the common approach using a constant RBE of 1.1 or using a variable RBE according to radiobiological input tables. A validated reimplementation of the local effect model was used in this work to generate radiobiological input tables. Examples of treatment plans in water phantoms and in patient-CT geometries together with an experimental dosimetric validation of the plans are presented for clinical treatment parameters as used at the Italian National Center for Oncological Hadron Therapy. To conclude, a versatile MCTP tool for proton therapy was developed and validated for realistic patient treatment scenarios against dosimetric measurements and commercial analytical TP calculations. It is aimed to be used in future for research and to support treatment planning at state-of-the-art ion beam therapy facilities.
Quality assessment of Monte Carlo based system response matrices in PET
Cabello, J.; Gillam, J.E. [Valencia Univ. (Spain). Inst. de Fisica Corpuscular; Rafecas, M. [Valencia Univ. (Spain). Inst. de Fisica Corpuscular; Valencia Univ. (Spain). Dept. de Fisica Atomica, Molecular y Nuclear
2011-07-01
Iterative methods are currently accepted as the gold standard image reconstruction methods in nuclear medicine. The quality of the final reconstructed image greatly depends on how well physical processes are modelled in the system response matrix (SRM). The SRM can be obtained using experimental measurements, or calculated using Monte Carlo (MC) or analytical methods. Nevertheless, independently of the method, the SRM is always contaminated by a certain level of error. MC based methods have recently gained popularity for calculation of the SRM due to the significant increase in computer power exhibited by regular commercial computers. MC methods can produce high accuracy results, but are subject to statistical noise, which affects the precision of the results. By increasing the number of annihilations simulated, the level of noise observed in the SRM decreases, at the additional cost of increased simulation time and increased file size necessary to store the SRM. The latter also has a negative impact on reconstruction time. A study of the noise of the SRM has been performed from a spatial point of view, identifying specific regions subject to higher levels of noise. This study will enable the calculation of SRMs with different levels of statistics depending on the spatial location. A quantitative comparison of images reconstructed using different SRM realizations, with similar and different levels of statistical quality, is presented.
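The statistics/size trade-off discussed above follows from simple counting: each SRM element estimated from N simulated annihilations is a binomial proportion, so its relative noise falls as the inverse square root of the expected counts. A minimal sketch with an assumed element probability:

```python
import numpy as np

p = 1e-4                                  # true element probability (toy value)
for n in (1e6, 1e7, 1e8):
    rel = np.sqrt((1 - p) / (n * p))      # relative std of p_hat = k/N
    print(f"N = {n:.0e}: relative error ~ {rel:.1%}")
```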
Baba, Justin S [ORNL]; John, Dwayne O [ORNL]; Koju, Vijay [ORNL]
2015-01-01
The propagation of light in turbid media is an active area of research with relevance to numerous investigational fields, e.g., biomedical diagnostics and therapeutics. The statistical random-walk nature of photon propagation through turbid media is ideal for computational modeling and simulation. Ready access to supercomputing resources provides a means of attaining brute-force solutions to stochastic light-matter interactions entailing scattering, by facilitating timely propagation of sufficient (>10 million) photons while tracking characteristic parameters based on the incorporated physics of the problem. One such model that works well for isotropic scatter but fails for anisotropic scatter, which is the case for many biomedical sample scattering problems, is the diffusion approximation. In this report, we address this by utilizing Berry phase (BP) evolution as a means of capturing the anisotropic scattering characteristics of samples in the preceding depth where the diffusion approximation fails. We extend the polarization-sensitive Monte Carlo method of Ramella-Roman et al. [1] to include the computationally intensive tracking of photon trajectory in addition to polarization state at every scattering event. To speed up the computations, which entail the appropriate rotations of reference frames, the code was parallelized using OpenMP. The results presented reveal that BP is strongly correlated with the photon penetration depth, thus potentiating the possibility of polarimetric depth-resolved characterization of highly scattering samples, e.g., biological tissues.
Monte Carlo-based revised values of dose rate constants at discrete photon energies
T Palani Selvam
2014-01-01
Absorbed dose rate to water at 0.2 cm and 1 cm due to a point isotropic photon source, as a function of photon energy, is calculated using the EDKnrc user-code of the EGSnrc Monte Carlo system. This code system utilizes the widely used XCOM photon cross-section dataset for the calculation of absorbed dose to water. Using the above dose rates, dose rate constants are calculated. The air-kerma strength S_k needed for deriving the dose rate constant is based on the mass-energy absorption coefficient compilations of Hubbell and Seltzer published in 1995. A comparison of absorbed dose rates in water at the above distances to the published values reflects the differences in the photon cross-section dataset in the low-energy region (the difference is up to 2% in dose rate values at 1 cm in the energy range 30-50 keV, and up to 4% at 0.2 cm at 30 keV). A maximum difference of about 8% is observed in the dose rate value at 0.2 cm at 1.75 MeV when compared to the published value. S_k calculations based on the compilation of Hubbell and Seltzer show a difference of up to 2.5% in the low-energy region (20-50 keV) when compared to the published values. The deviations observed in the values of dose rate and S_k affect the values of dose rate constants by up to 3%.
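The dose rate constant itself is just the ratio the abstract works with; a minimal sketch with placeholder numbers (not the paper's tabulated values):

```python
# Dose rate constant Lambda = absorbed dose rate to water at 1 cm on the
# transverse axis, divided by the air-kerma strength S_k of the source.
dose_rate_1cm = 1.11e-2   # cGy/h from an MC tally (assumed value)
s_k = 1.0e-2              # air-kerma strength in U = uGy m^2/h (assumed)
lam = dose_rate_1cm / s_k
print(f"dose rate constant = {lam:.3f} cGy h^-1 U^-1")
```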
Weathers, J.B. [Shock, Noise, and Vibration Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: James.Weathers@ngc.com; Luck, R. [Department of Mechanical Engineering, Mississippi State University, 210 Carpenter Engineering Building, P.O. Box ME, Mississippi State, MS 39762-5925 (United States)], E-mail: Luck@me.msstate.edu; Weathers, J.W. [Structural Analysis Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: Jeffrey.Weathers@ngc.com
2009-11-15
The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exist in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.
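A minimal sketch of the Monte Carlo covariance estimation described here, with a placeholder model in which two outputs share one systematic error source (all numbers invented):

```python
import numpy as np

# Perturb inputs with their systematic/random uncertainties, evaluate the
# comparison errors per draw, and take the sample covariance.
rng = np.random.default_rng(0)
n = 50_000
sys_err = rng.normal(0, 0.5, n)                      # shared systematic draw
e1 = 2.0 + sys_err + rng.normal(0, 0.3, n)           # comparison error, output 1
e2 = 1.0 + 0.8 * sys_err + rng.normal(0, 0.4, n)     # output 2, correlated
cov = np.cov(np.vstack([e1, e2]))
print(np.round(cov, 3))   # off-diagonal term comes from the shared systematic
```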
Monte-Carlo based Uncertainty Analysis For CO2 Laser Microchanneling Model
Prakash, Shashi; Kumar, Nitish; Kumar, Subrata
2016-09-01
CO2 laser microchanneling has emerged as a potential technique for the fabrication of microfluidic devices on PMMA (poly-methyl-methacrylate). PMMA directly vaporizes when subjected to a high-intensity focused CO2 laser beam. This process results in a clean cut and an acceptable surface finish on the microchannel walls. Overall, the CO2 laser microchanneling process is cost-effective and easy to implement. When fabricating microchannels on PMMA using a CO2 laser, the maximum depth of the fabricated microchannel is the key feature. A few analytical models are available to predict the maximum depth of the microchannels and the cut channel profile on a PMMA substrate using a CO2 laser. These models depend upon the values of the thermophysical properties of PMMA and the laser beam parameters. A number of variants of transparent PMMA are available on the market with different thermophysical properties, so applying such analytical models requires these properties to be known exactly. Although the values of the laser beam parameters are readily available, extensive experiments are required to determine the thermophysical properties of PMMA. The unavailability of exact values of these properties restricts proper control over the microchannel dimensions for a given power and scanning speed of the laser beam. In order to have dimensional control over the maximum depth of the fabricated microchannels, it is necessary to have an estimate of the uncertainty associated with the predicted microchannel depth. In this research work, the uncertainty associated with the maximum depth dimension has been determined using the Monte Carlo method (MCM). The propagation of uncertainty at different powers and scanning speeds has been predicted. The relative impact of each thermophysical property has been determined using sensitivity analysis.
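As a sketch of this kind of uncertainty propagation, the snippet below samples assumed thermophysical properties and pushes them through a placeholder depth model; the model form, nominal values, and 5% spreads are illustrative assumptions, not the study's model:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Placeholder energy-balance depth model; the paper's analytical model
    # may differ. P laser power [W], v scan speed [m/s], w beam width [m].
    def depth(P, v, rho, cp, dT, w=2.5e-4, c=0.05):
        return c * P / (rho * cp * dT * w * v)

    P, v = 30.0, 0.02                  # fixed process parameters
    # PMMA properties sampled with an assumed 5% relative uncertainty.
    rho = rng.normal(1190.0, 0.05 * 1190.0, n)   # density [kg/m^3]
    cp  = rng.normal(1466.0, 0.05 * 1466.0, n)   # specific heat [J/kg/K]
    dT  = rng.normal(360.0,  0.05 * 360.0,  n)   # vaporization rise [K]

    d = depth(P, v, rho, cp, dT)
    print(f"depth = {d.mean()*1e6:.0f} um +/- {d.std()*1e6:.0f} um (1 sigma)")

Repeating this at several (P, v) combinations gives the uncertainty-propagation maps the abstract mentions, and freezing all but one property at its nominal value gives a simple one-at-a-time sensitivity measure.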
Monte Carlo based protocol for cell survival and tumour control probability in BNCT
Ye, Sung-Joon
1999-02-01
A mathematical model to calculate the theoretical cell survival probability (nominally, the cell survival fraction) is developed to evaluate preclinical treatment conditions for boron neutron capture therapy (BNCT). A treatment condition is characterized by the neutron beam spectra, single or bilateral exposure, and the choice of boron carrier drug (boronophenylalanine (BPA) or boron sulfhydryl hydride (BSH)). The cell survival probability defined from Poisson statistics is expressed with the cell-killing yield, the (n,α) reaction density, and the tolerable neutron fluence. The radiation transport calculation from the neutron source to tumours is carried out using Monte Carlo methods: (i) reactor-based BNCT facility modelling to yield the neutron beam library at an irradiation port; (ii) dosimetry to limit the neutron fluence below a tolerance dose (10.5 Gy-Eq); (iii) calculation of the (n,α) reaction density in tumours. A shallow surface tumour could be effectively treated by single exposure, producing an average cell survival probability of - for probable ranges of the cell-killing yield for the two drugs, while a deep tumour will require bilateral exposure to achieve comparable cell kills at depth. With very pure epithermal beams eliminating thermal, low-epithermal, and fast neutrons, the cell survival can be decreased by factors of 2-10 compared with the unmodified neutron spectrum. A dominant effect of the cell-killing yield on tumour cell survival demonstrates the importance of the choice of boron carrier drug. However, these calculations do not indicate an unambiguous preference for one drug, due to the large overlap of tumour cell survival over the probable ranges of the cell-killing yield for the two drugs. The cell survival value averaged over a bulky tumour volume is used to predict the overall BNCT therapeutic efficacy, using a simple model of tumour control probability (TCP).
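As a sketch of the Poisson-statistics definition invoked above (the symbols are our assumption; the abstract does not spell out the formula): if the mean number of lethal events per cell is the product of the cell-killing yield y per (n,α) reaction and the (n,α) reaction density R evaluated at the tolerable fluence, the survival probability is the Poisson probability of zero lethal events,

    S = e^{-N}, \qquad N = y \, R(\Phi_{\mathrm{tol}}),

so survival falls exponentially with either a higher reaction density or a more lethal boron compound.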
Graves, Yan Jiang; Jia, Xun; Jiang, Steve B
2013-03-21
The γ-index test has been commonly adopted to quantify the degree of agreement between a reference dose distribution and an evaluation dose distribution. Monte Carlo (MC) simulation has been widely used for radiotherapy dose calculation for both clinical and research purposes. The goal of this work is to investigate, both theoretically and experimentally, the impact of the MC statistical fluctuation on the γ-index test when the fluctuation exists in the reference, the evaluation, or both dose distributions. To first order, we demonstrated theoretically in a simplified model that the statistical fluctuation tends to overestimate γ-index values when present in the reference dose distribution and to underestimate γ-index values when present in the evaluation dose distribution, provided the original γ-index is relatively large compared with the statistical fluctuation. Our numerical experiments using realistic clinical photon radiation therapy cases have shown that (1) when performing a γ-index test between an MC reference dose and a non-MC evaluation dose, the average γ-index is overestimated and the gamma passing rate decreases with increasing statistical noise level in the reference dose; (2) when performing a γ-index test between a non-MC reference dose and an MC evaluation dose, the average γ-index is underestimated when the doses are within the clinically relevant range, and the gamma passing rate increases with increasing statistical noise level in the evaluation dose; (3) when performing a γ-index test between an MC reference dose and an MC evaluation dose, the gamma passing rate is overestimated due to the statistical noise in the evaluation dose and underestimated due to the statistical noise in the reference dose. We conclude that the γ-index test should be used with caution when comparing dose distributions computed with MC simulation.
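For concreteness, a simplified 1-D global γ-index sketch (assumed 3%/3 mm criteria; clinical implementations work on 2-D/3-D grids with dose interpolation, so this is illustrative only):

    import numpy as np

    def gamma_1d(dose_ref, dose_eval, x, dd=0.03, dta=3.0):
        """Global 1-D gamma index of dose_eval against dose_ref on grid x [mm]."""
        d_norm = dd * dose_ref.max()      # global dose-difference criterion
        gamma = np.empty_like(dose_ref)
        for i, (xr, dr) in enumerate(zip(x, dose_ref)):
            # Generalized distance in the combined position/dose space.
            dist2 = ((x - xr) / dta) ** 2 + ((dose_eval - dr) / d_norm) ** 2
            gamma[i] = np.sqrt(dist2.min())
        return gamma

    x = np.linspace(0, 100, 201)                       # positions [mm]
    ref = 100 * np.exp(-((x - 50) / 15) ** 2)          # reference profile
    eva = ref + np.random.default_rng(2).normal(0, 1.0, x.size)  # "MC-noisy" dose
    g = gamma_1d(ref, eva, x)
    print(f"passing rate (gamma<=1): {100 * np.mean(g <= 1):.1f}%")

Moving the added noise from the evaluation profile to the reference profile in this toy setup is a quick way to probe the asymmetry the study describes.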
Monte Carlo based NMR simulations of open fractures in porous media
Lukács, Tamás; Balázs, László
2014-05-01
According to the basic principles of nuclear magnetic resonance (NMR), a measurement's free induction decay curve has an exponential characteristic, and its parameter is the transverse relaxation time T2, given by the Bloch equations in the rotating frame. In our simulations we consider the particular case in which the bulk volume is negligible relative to the whole system and vertical movement is essentially zero, so the diffusion term of the T2 relation can be omitted. Such small-aperture situations are common in sedimentary layers, and the smallness of the observed volume allows us to work with just the bulk relaxation and the surface relaxation. The simulation uses the Monte Carlo method: it is based on a random-walk generator which produces the Brownian motion of the particles from uniformly distributed pseudorandom numbers. An attached differential equation accounts for the bulk relaxation, and the initial and iterated conditions guarantee the replicability of the simulation and enable consistent estimates. We generate an initial geometry of a plane segment with known height and a given number of particles; the spatial distribution is set equal in each simulation, and the surface-to-volume ratio remains constant. It follows that, for a given thickness of the open fracture, the surface relaxivity is determinable from the fitted curve's parameter. The calculated T2 distribution curves also indicate the inconstancy in the observed fracture situations. Varying the height of the lamina at a constant diffusion coefficient also produces a characteristic anomaly, and for comparison we have run the simulation with the same initial volume, number of particles, and conditions in spherical bulks, whose profiles are clear and easy to understand. The surface relaxation enables us to estimate the interaction between the materials at the boundary with these two geometrically well-defined bulks; therefore the distribution takes as a
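A minimal random-walk sketch of the simulation idea (planar fracture of aperture h, walkers relaxed at the walls with a surface-relaxation probability; all parameter values and the kill-probability discretization are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(3)
    h = 10e-6            # fracture aperture [m] (assumed)
    D = 2.3e-9           # water self-diffusion coefficient [m^2/s]
    rho_s = 10e-6        # surface relaxivity [m/s] (assumed)
    T2_bulk = 2.0        # bulk relaxation time [s]
    dt = 1e-5            # time step [s]
    step = np.sqrt(2 * D * dt)        # 1-D random-walk step length
    n = 20000                         # number of walkers

    z = rng.uniform(0, h, n)          # walker positions across the aperture
    alive = np.ones(n, dtype=bool)
    signal = []
    for _ in range(20000):
        z += rng.choice((-step, step), n)
        hit = (z < 0) | (z > h)
        # One common discretization: kill a wall-hitting walker with
        # probability p = 2*rho_s*step/D to mimic surface relaxation.
        alive &= ~(hit & (rng.random(n) < 2 * rho_s * step / D))
        z = np.clip(z, 0, h)          # put wall-hitting walkers back inside
        alive &= rng.random(n) > dt / T2_bulk   # bulk relaxation everywhere
        signal.append(alive.mean())
    # signal approximates exp(-t/T2) with 1/T2 = 1/T2_bulk + rho_s * S/V.

Fitting an exponential to the recorded signal and using S/V = 2/h for the slab geometry recovers the surface relaxivity, which mirrors the inversion described in the abstract.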
Tadvani, Jalil Khajepour; Falamaki, Cavus
2008-07-23
It is demonstrated for the first time that mesoporous porous silicon (PS) structures obtained by the electrochemical etching of p+(100)-oriented silicon wafers can exhibit invariance of the first peak positions in their pore size distribution curves, albeit for current densities far from the electropolishing region and at constant electrolyte composition. A new Monte Carlo-based simulation model is presented that reasonably predicts the pore size distribution of the PS layers and the observed invariance of the peak position with respect to changes in current density. The main highlight of the new model is the introduction of a 'light avalanche breakdown' process in a mathematical fashion. The model is also able to predict an absolute value of 4.23 Å for the smallest pore created experimentally. It is discussed that the latter value has an exact physical meaning: it corresponds with great accuracy to the width of a void created on the surface by the exclusion of one Si atom.
Monte Carlo-based diode design for correction-less small field dosimetry
Charles, P. H.; Crowe, S. B.; Kairn, T.; Knight, R. T.; Hill, B.; Kenny, J.; Langton, C. M.; Trapp, J. V.
2013-07-01
Due to their small collecting volume, diodes are commonly used in small field dosimetry. However, the relative sensitivity of a diode increases with decreasing small field size. Conversely, small air gaps have been shown to cause a significant decrease in the sensitivity of a detector as the field size is decreased. Therefore, this study uses Monte Carlo simulations to investigate introducing air upstream of diodes such that they measure with a constant sensitivity across all field sizes in small field dosimetry. Varying thicknesses of air were introduced at the upstream end of two commercial diodes (PTW 60016 photon diode and PTW 60017 electron diode), as well as a theoretical unenclosed silicon chip, using field sizes as small as 5 mm × 5 mm. The metric $D_{w,Q}/D_{Det,Q}$ used in this study represents the ratio of the dose to a point of water to the dose to the diode active volume, for a particular field size and location. The optimal thickness of air required to provide a constant sensitivity across all small field sizes was found by plotting $D_{w,Q}/D_{Det,Q}$ as a function of introduced air gap size for various field sizes, and finding the intersection point of these plots, that is, the point at which $D_{w,Q}/D_{Det,Q}$ was constant for all field sizes. The optimal thickness of air was calculated to be 3.3, 1.15, and 0.10 mm for the photon diode, electron diode, and unenclosed silicon chip, respectively. The variation in these results is due to the different design of each detector. When calculated with the new diode design incorporating the upstream air gap, $k_{Q_{clin},Q_{msr}}^{f_{clin},f_{msr}}$ was equal to unity to within statistical uncertainty (0.5%) for all three diodes. Cross-axis profile measurements were also improved with the new detector design. The upstream air gap could be implemented on the commercial diodes via a cap consisting of the air cavity surrounded by water-equivalent material. The
Wang, Song; Gardner, Joseph K; Gordon, John J; Li, Weidong; Clews, Luke; Greer, Peter B; Siebers, Jeffrey V
2009-08-01
The aim of this study is to present an efficient method for generating imager-specific Monte Carlo (MC)-based dose kernels for amorphous-silicon electronic portal imaging device dose prediction and to determine the effective backscattering thicknesses for such imagers. EPID field-size-dependent responses were measured for five matched Varian accelerators from three institutions with 6 MV beams at a source-to-detector distance (SDD) of 105 cm. For two imagers, measurements were made with and without the imager mounted on the robotic supporting arm. Monoenergetic energy deposition kernels with 0-2.5 cm of water backscattering thickness were simultaneously computed by MC to a high precision. For each imager, the backscattering thickness required to match the measured field size responses was determined. The monoenergetic kernel method was validated by comparing measured and predicted field size responses at 150 cm SDD, 10 × 10 cm² multileaf collimator (MLC) sliding window fields created with 5, 10, 20, and 50 mm gaps, and a head-and-neck (H&N) intensity modulated radiation therapy (IMRT) patient field. Field size responses for the five different imagers deviated by up to 1.3%. When the imagers were removed from the robotic arms, response deviations were reduced to 0.2%. All imager field size responses were captured using between 1.0 and 1.6 cm of backscatter. The field size responses predicted by the imager-specific kernels matched measurements for all involved imagers with a maximal deviation of 0.34%. The maximal deviation between the predicted and measured field size responses at 150 cm SDD is 0.39%. The maximal deviation between the predicted and measured MLC sliding window fields is 0.39%. For the patient field, gamma analysis yielded that 99.0% of the pixels have gamma < 1 by the 2%, 2 mm criteria with a 3% dose threshold. Tunable imager-specific kernels can be generated rapidly and accurately in a single MC simulation. The resultant kernels are imager position
Monte Carlo-based multiphysics coupling analysis of x-ray pulsar telescope
Li, Liansheng; Deng, Loulou; Mei, Zhiwu; Zuo, Fuchang; Zhou, Hao
2015-10-01
X-ray pulsar telescope (XPT) is a complex optical payload involving optical, mechanical, electrical, and thermal disciplines. Multiphysics coupling analysis (MCA) plays an important role in improving the in-orbit performance. However, conventional MCA methods encounter two serious problems in dealing with the XPT. One is that neither the energy nor the reflectivity information of the X-rays can be taken into consideration, which misrepresents the essence of the XPT. The other is that coupling data cannot be transferred automatically among the different disciplines, leading to computational inefficiency and high design cost. Therefore, a new MCA method for the XPT is proposed based on the Monte Carlo method and total-reflection theory. The main idea, procedures, and operational steps of the proposed method are addressed in detail. Firstly, the method takes both the energy and the reflectivity information of the X-rays into consideration simultaneously, formulates the thermal-structural coupling equation and the multiphysics coupling analysis model based on the finite element method, and then implements the thermal-structural coupling analysis under different working conditions. Secondly, the mirror deformations are obtained using a construction geometry function, a polynomial function is adopted to fit the deformed mirror, and the fitting error is evaluated. Thirdly, the focusing performance of the XPT is evaluated by the RMS. Finally, a Wolter-I XPT is taken as an example to verify the proposed MCA method. The simulation results show that the thermal-structural coupling deformation is larger than the others, and the variation law of the deformation's effect on the focusing performance has been obtained. The focusing performances under thermal-structural, thermal, and structural deformations degraded by 30.01%, 14.35%, and 7.85%, respectively. The RMS values of the dispersion spot are 2.9143 mm, 2.2038 mm, and 2.1311 mm. As a result, the validity of the proposed method is verified through
Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy
Martinez-Rovira, I.; Sempau, J.; Prezado, Y. [Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain) and ID17 Biomedical Beamline, European Synchrotron Radiation Facility (ESRF), 6 rue Jules Horowitz B.P. 220, F-38043 Grenoble Cedex (France); Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain); Laboratoire Imagerie et modelisation en neurobiologie et cancerologie, UMR8165, Centre National de la Recherche Scientifique (CNRS), Universites Paris 7 et Paris 11, Bat 440., 15 rue Georges Clemenceau, F-91406 Orsay Cedex (France)
2012-05-15
Purpose: Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75 µm wide microbeams spaced by 200-400 µm) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. Methods: The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, the CT image voxel grid (a few cubic millimeter volume) was decoupled from the dose bin grid, which has micrometer dimensions in the transversal direction of the microbeams. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Results: Good agreement between MC simulations and experimental results was achieved, even at
Zhang, Junlong; Li, Yongping; Huang, Guohe; Chen, Xi; Bao, Anming
2016-07-01
Without a realistic assessment of parameter uncertainty, decision makers may encounter difficulties in accurately describing hydrologic processes and assessing relationships between model parameters and watershed characteristics. In this study, a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis (MCMC-MFA) method is developed, which can not only generate samples of parameters from a well-constructed Markov chain and assess parameter uncertainties with straightforward Bayesian inference, but also investigate the individual and interactive effects of multiple parameters on model output by measuring the specific variations of hydrological responses. A case study is conducted to address parameter uncertainties in the Kaidu watershed of northwest China. Effects of multiple parameters and their interactions are quantitatively investigated using the MCMC-MFA with a three-level factorial experiment (81 runs in total). A variance-based sensitivity analysis method is used to validate the results of the parameters' effects. Results disclose that (i) the soil conservation service runoff curve number for moisture condition II (CN2) and the fraction of snow volume corresponding to 50% snow cover (SNO50COV) are the most significant factors for the hydrological responses, implying that infiltration-excess overland flow and snow water equivalent represent important water inputs to the hydrological system of the Kaidu watershed; (ii) saturated hydraulic conductivity (SOL_K) and the soil evaporation compensation factor (ESCO) have obvious effects on the hydrological responses, implying that the processes of percolation and evaporation influence the hydrological processes in this watershed; (iii) the interactions of ESCO and SNO50COV as well as of CN2 and SNO50COV have an obvious effect, implying that snow cover can influence the generation of runoff on the land surface and the extraction of soil evaporative demand in lower soil layers. These findings can help enhance the hydrological model
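For readers unfamiliar with the MCMC machinery underlying such an analysis, a generic random-walk Metropolis sketch for sampling a parameter posterior (toy Gaussian likelihood with a flat prior, not the hydrological model above):

    import numpy as np

    rng = np.random.default_rng(5)
    data = rng.normal(2.0, 1.0, 50)          # synthetic observations

    def log_post(theta):
        # Flat prior; Gaussian likelihood with known unit variance.
        return -0.5 * np.sum((data - theta) ** 2)

    theta, chain = 0.0, []
    lp = log_post(theta)
    for _ in range(20000):
        prop = theta + rng.normal(0, 0.3)    # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis acceptance
            theta, lp = prop, lp_prop
        chain.append(theta)
    post = np.array(chain[2000:])            # discard burn-in
    print(f"posterior mean {post.mean():.2f} +/- {post.std():.2f}")

In the MCMC-MFA setting, chains like this one supply the parameter samples, and the factorial experiment then partitions the variance of the simulated hydrological responses into individual and interaction effects.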
Mueller, David S.
2017-01-01
This paper presents a method using Monte Carlo simulations for assessing uncertainty of moving-boat acoustic Doppler current profiler (ADCP) discharge measurements using a software tool known as QUant, which was developed for this purpose. Analysis was performed on 10 data sets from four Water Survey of Canada gauging stations in order to evaluate the relative contribution of a range of error sources to the total estimated uncertainty. The factors that differed among data sets included the fraction of unmeasured discharge relative to the total discharge, flow nonuniformity, and operator decisions about instrument programming and measurement cross section. As anticipated, it was found that the estimated uncertainty is dominated by uncertainty of the discharge in the unmeasured areas, highlighting the importance of appropriate selection of the site, the instrument, and the user inputs required to estimate the unmeasured discharge. The main contributor to uncertainty was invalid data, but spatial inhomogeneity in water velocity and bottom-track velocity also contributed, as did variation in the edge velocity, uncertainty in the edge distances, edge coefficients, and the top and bottom extrapolation methods. To a lesser extent, spatial inhomogeneity in the bottom depth also contributed to the total uncertainty, as did uncertainty in the ADCP draft at shallow sites. The estimated uncertainties from QUant can be used to assess the adequacy of standard operating procedures. They also provide quantitative feedback to the ADCP operators about the quality of their measurements, indicating which parameters are contributing most to uncertainty, and perhaps even highlighting ways in which uncertainty can be reduced. Additionally, QUant can be used to account for self-dependent error sources such as heading errors, which are a function of heading. The results demonstrate the importance of a Monte Carlo method tool such as QUant for quantifying random and bias errors when
Ren, Lixia; He, Li; Lu, Hongwei; Chen, Yizhong
2016-08-01
A new Monte Carlo-based interval transformation analysis (MCITA) is used in this study for multi-criteria decision analysis (MCDA) of naphthalene-contaminated groundwater management strategies. The analysis can be conducted when input data such as total cost, contaminant concentration, and health risk are represented as intervals. Compared to traditional MCDA methods, MCITA-MCDA has the advantages of (1) dealing with the inexactness of input data represented as intervals, (2) mitigating computational time through the introduction of Monte Carlo sampling, and (3) identifying the most desirable management strategies under data uncertainty. A real-world case study is employed to demonstrate the performance of this method. A set of inexact management alternatives is considered for each planning duration on the basis of four criteria. Results indicated that the most desirable management strategy was action 15 for the 5-year, action 8 for the 10-year, action 12 for the 15-year, and action 2 for the 20-year management horizon.
Bias in Dynamic Monte Carlo Alpha Calculations
Sweezy, Jeremy Ed [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Nolen, Steven Douglas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Adams, Terry R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Trahan, Travis John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-02-06
A 1/N bias in the estimate of the neutron time-constant (commonly denoted as α) has been seen in dynamic neutronic calculations performed with MCATK. In this paper we show that the bias is most likely caused by taking the logarithm of a stochastic quantity. We also investigate the known bias due to the particle population control method used in MCATK. We conclude that this bias due to the particle population control method is negligible compared to other sources of bias.
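The logarithm effect named above is easy to reproduce: for an unbiased sample mean X̄ of μ built from N histories, Jensen's inequality gives E[ln X̄] < ln μ, with a deficit of approximately σ²/(2μ²N), i.e. a 1/N bias. A toy demonstration (generic numbers, not MCATK output):

    import numpy as np

    rng = np.random.default_rng(6)
    mu, sigma = 10.0, 3.0
    for n in (10, 100):
        # Sample mean of n histories, repeated many times.
        means = rng.normal(mu, sigma, size=(200000, n)).mean(axis=1)
        bias = np.log(means).mean() - np.log(mu)
        print(f"N={n:4d}: E[ln Xbar] - ln mu = {bias:+.5f}"
              f"  (predicted {-sigma**2 / (2 * mu**2 * n):+.5f})")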
Monte Carlo based approach to the LS–NaI 4πβ–γ anticoincidence extrapolation and uncertainty.
Fitzgerald, R
2016-03-01
The 4πβ–γ anticoincidence method is used for the primary standardization of β−, β+, electron capture (EC), α, and mixed-mode radionuclides. Efficiency extrapolation using one or more γ ray coincidence gates is typically carried out by a low-order polynomial fit. The approach presented here is to use a Geant4-based Monte Carlo simulation of the detector system to analyze the efficiency extrapolation. New code was developed to account for detector resolution, direct γ ray interaction with the PMT, and implementation of experimental β-decay shape factors. The simulation was tuned to 57Co and 60Co data, then tested with 99mTc data, and used in measurements of 18F, 129I, and 124I. The analysis method described here offers a more realistic activity value and uncertainty than those indicated from a least-squares fit alone.
Discrete angle biasing in Monte Carlo radiation transport
Cramer, S.N.
1988-05-01
An angular biasing procedure is presented for use in Monte Carlo radiation transport with discretized scattering angle data. As in more general studies, the method is shown to reduce statistical weight fluctuations when it is combined with the exponential transformation. This discrete data application has a simple analytic form which is problem independent. The results from a sample problem illustrate the variance reduction and efficiency characteristics of the combined biasing procedures, and a large neutron and gamma ray integral experiment is also calculated. A proposal is given for the possible code generation of the biasing parameter p and the preferential direction $\bar{\Omega}_0$ used in the combined biasing schemes.
J. Tonttila
2013-08-01
A new method for parameterizing the subgrid variations of vertical velocity and cloud droplet number concentration (CDNC) is presented for general circulation models (GCMs). These parameterizations build on existing parameterizations that create stochastic subgrid cloud columns inside the GCM grid cells, which can be employed by the Monte Carlo independent column approximation approach for radiative transfer. The new model version adds a description of vertical velocity in individual subgrid columns, which can be used to compute cloud activation and the subgrid distribution of the number of cloud droplets explicitly. Autoconversion is also treated explicitly in the subcolumn space. This provides a consistent way of simulating the cloud radiative effects with two-moment cloud microphysical properties defined at the subgrid scale. The primary impact of the new parameterizations is to decrease the CDNC over polluted continents, while over the oceans the impact is smaller. Moreover, the lower CDNC induces a stronger autoconversion of cloud water to rain. The strongest reduction in CDNC and cloud water content over the continental areas promotes weaker shortwave cloud radiative effects (SW CREs) even after retuning the model. However, compared to the reference simulation, a slightly stronger SW CRE is seen e.g. over mid-latitude oceans, where the CDNC remains similar to the reference simulation and the in-cloud liquid water content is slightly increased after retuning the model.
Quan, Guotao; Gong, Hui; Deng, Yong; Fu, Jianwei; Luo, Qingming
2011-02-01
High-speed fluorescence molecular tomography (FMT) reconstruction for 3-D heterogeneous media is still one of the most challenging problems in diffuse optical fluorescence imaging. In this paper, we propose a fast FMT reconstruction method that is based on Monte Carlo (MC) simulation and accelerated by a cluster of graphics processing units (GPUs). Based on the Message Passing Interface standard, we modified the MC code for fast FMT reconstruction, and different Green's functions representing the flux distribution in the media are calculated simultaneously by different GPUs in the cluster. A load-balancing method was also developed to increase the computational efficiency. By applying the Fréchet derivative, a Jacobian matrix is formed to reconstruct the distribution of the fluorochromes using the calculated Green's functions. Phantom experiments have shown that only 10 min are required to obtain reconstruction results with a cluster of 6 GPUs, rather than 6 h with a cluster of multiple dual-Opteron CPU nodes. Because of the advantages of high accuracy and suitability for 3-D heterogeneous media with refractive-index-mismatched boundaries from the MC simulation, the GPU cluster-accelerated method provides a reliable approach to high-speed reconstruction for FMT imaging.
Kangasmaa, Tuija; Kuikka, Jyrki; Sohlberg, Antti
2012-01-01
Simultaneous Tl-201/Tc-99m dual isotope myocardial perfusion SPECT is seriously hampered by down-scatter from Tc-99m into the Tl-201 energy window. This paper presents and optimises the ordered-subsets expectation-maximisation (OS-EM) based reconstruction algorithm, which corrects the down-scatter using an efficient Monte Carlo (MC) simulator. The algorithm starts by first reconstructing the Tc-99m image with attenuation, collimator response, and MC-based scatter correction. The reconstructed Tc-99m image is then used as an input for an efficient MC-based down-scatter simulation of Tc-99m photons into the Tl-201 window. This down-scatter estimate is finally used in the Tl-201 reconstruction to correct the crosstalk between the two isotopes. The mathematical 4D NCAT phantom and physical cardiac phantoms were used to optimise the number of OS-EM iterations where the scatter estimate is updated and the number of MC simulated photons. The results showed that two scatter update iterations and 10^5 simulated photons are enough for the Tc-99m and Tl-201 reconstructions, whereas 10^6 simulated photons are needed to generate good quality down-scatter estimates. With these parameters, the entire Tl-201/Tc-99m dual isotope reconstruction can be accomplished in less than 3 minutes.
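A toy illustration of how such a Monte Carlo down-scatter estimate enters the iterative update (a 1-D MLEM loop with the MC simulator stubbed by a blur-and-scale operator; all shapes and factors are illustrative, not the paper's implementation):

    import numpy as np

    rng = np.random.default_rng(7)
    n_pix, n_bins = 32, 32
    A = np.maximum(rng.normal(0.5, 0.2, (n_bins, n_pix)), 0)  # toy system matrix

    def mc_downscatter(tc99m_img):
        # Stand-in for the MC simulation of Tc-99m photons scattering into
        # the Tl-201 window: here simply a blurred, scaled projection.
        p = A @ tc99m_img
        return 0.3 * np.convolve(p, np.ones(5) / 5, mode="same")

    tc99m = rng.random(n_pix) * 10        # previously reconstructed Tc-99m image
    tl201_true = rng.random(n_pix) * 5
    proj = rng.poisson(A @ tl201_true + mc_downscatter(tc99m)).astype(float)

    x = np.ones(n_pix)                    # Tl-201 image, MLEM-style updates
    s = mc_downscatter(tc99m)             # crosstalk estimate in the forward model
    for _ in range(50):
        ratio = proj / (A @ x + s + 1e-12)
        x *= (A.T @ ratio) / A.sum(axis=0)
    print("relative error:",
          np.linalg.norm(x - tl201_true) / np.linalg.norm(tl201_true))

The down-scatter term enters only the forward model (the denominator), which is exactly why refreshing it every few iterations, as optimised in the study, is sufficient.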
Fast and accurate Monte Carlo-based system response modeling for a digital whole-body PET
Sun, Xiangyu; Li, Yanzhao; Yang, Lingli; Wang, Shuai; Zhang, Bo; Xiao, Peng; Xie, Qingguo
2017-03-01
Recently, we have developed a digital whole-body PET scanner based on multi-voltage threshold (MVT) digitizers. To mitigate the impact of resolution-degrading factors, an accurate system response is calculated by Monte Carlo simulation, which is computationally expensive. To address this problem, we improve the method of using symmetries by simulating an axial wedge region. This approach takes full advantage of the intrinsic symmetries in the cylindrical PET system without significantly increasing the computational cost of processing the symmetries. A total of 4224 symmetries are exploited. It took 17 days to generate the system matrix on 160 cores of 2.5 GHz Xeon processors. Both simulation and experimental data are used to evaluate the accuracy of the system response modeling. The simulation studies show the full width at half maximum of a line source to be 2.1 mm at the center of the FOV and 3.8 mm at 200 mm from the center of the FOV. Experimental results show that the 2.4 mm rods in the Derenzo phantom image can be well distinguished.
Toropova, Alla P; Toropov, Andrey A; Veselinović, Aleksandar M; Veselinović, Jovana B; Leszczynska, Danuta; Leszczynski, Jerzy
2016-11-01
Quantitative structure-activity relationships (QSARs) for the toxicity of a large set of 758 organic compounds to Daphnia magna were built up. The simplified molecular input-line entry system (SMILES) was used to represent the molecular structure. The CORrelation And Logic (CORAL) software was utilized as a tool to develop the QSAR models. These models are built up using the Monte Carlo method and according to the principle "QSAR is a random event", checked over a group of random distributions of the data into visible training and invisible validation sets. Three distributions of the data into visible training, calibration, and invisible validation sets are examined. The predictive potentials (i.e., statistical characteristics for the invisible validation set of the best model) are as follows: n = 87, r² = 0.8377, root mean square error = 0.564. The mechanistic interpretations and the domain of applicability of the built models are suggested and discussed. Environ Toxicol Chem 2016;35:2691-2697. © 2016 SETAC.
Wierling, Christoph; Kühn, Alexander; Hache, Hendrik; Daskalaki, Andriani; Maschke-Dutz, Elisabeth; Peycheva, Svetlana; Li, Jian; Herwig, Ralf; Lehrach, Hans
2012-08-15
Cancer is known to be a complex disease and its therapy is difficult. Much information is available on molecules and pathways involved in cancer onset and progression and this data provides a valuable resource for the development of predictive computer models that can help to identify new potential drug targets or to improve therapies. Modeling cancer treatment has to take into account many cellular pathways usually leading to the construction of large mathematical models. The development of such models is complicated by the fact that relevant parameters are either completely unknown, or can at best be measured under highly artificial conditions. Here we propose an approach for constructing predictive models of such complex biological networks in the absence of accurate knowledge on parameter values, and apply this strategy to predict the effects of perturbations induced by anti-cancer drug target inhibitions on an epidermal growth factor (EGF) signaling network. The strategy is based on a Monte Carlo approach, in which the kinetic parameters are repeatedly sampled from specific probability distributions and used for multiple parallel simulations. Simulation results from different forms of the model (e.g., a model that expresses a certain mutation or mutation pattern or the treatment by a certain drug or drug combination) can be compared with the unperturbed control model and used for the prediction of the perturbation effects. This framework opens the way to experiment with complex biological networks in the computer, likely to save costs in drug development and to improve patient therapy.
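A compact sketch of the described strategy: kinetic parameters repeatedly sampled from assumed distributions, the model integrated for each draw, and perturbed ("inhibited") simulations compared with unperturbed controls. The two-species cascade, priors, and inhibition mechanism below are placeholders, not the EGF network:

    import numpy as np
    from scipy.integrate import solve_ivp

    rng = np.random.default_rng(8)

    def network(t, y, k1, k2, K=1.0):
        a, b = y
        prod_b = a**2 / (K**2 + a**2)      # saturating activation of b by a
        return [k1 - k2 * a, prod_b - 0.1 * b]

    def steady_b(k1, k2, inhibition=1.0):
        # "Drug" modeled as a fractional inhibition of the production rate k1.
        sol = solve_ivp(network, (0, 400), [0.0, 0.0], args=(k1 * inhibition, k2))
        return sol.y[1, -1]

    ratios = []
    for _ in range(500):                   # repeated parameter sampling
        k1 = 10 ** rng.uniform(-1, 1)      # assumed log-uniform priors
        k2 = 10 ** rng.uniform(-1, 1)
        ratios.append(steady_b(k1, k2, 0.5) / steady_b(k1, k2))
    print(f"predicted effect of 50% inhibition: median output ratio "
          f"{np.median(ratios):.2f} "
          f"[{np.percentile(ratios, 5):.2f}, {np.percentile(ratios, 95):.2f}]")

Reporting the perturbation effect as a distribution over parameter draws, rather than a single number, is precisely what makes the approach usable when parameter values are unknown.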
D'Amours, Michel [Departement de Radio-Oncologie et Centre de Recherche en Cancerologie de l'Universite Laval, Hotel-Dieu de Quebec, Quebec, QC (Canada); Department of Physics, Physics Engineering, and Optics, Universite Laval, Quebec, QC (Canada); Pouliot, Jean [Department of Radiation Oncology, University of California, San Francisco, School of Medicine, San Francisco, CA (United States); Dagnault, Anne [Departement de Radio-Oncologie et Centre de Recherche en Cancerologie de l'Universite Laval, Hotel-Dieu de Quebec, Quebec, QC (Canada); Verhaegen, Frank [Department of Radiation Oncology, Maastro Clinic, GROW Research Institute, Maastricht University Medical Centre, Maastricht (Netherlands); Department of Oncology, McGill University, Montreal, QC (Canada); Beaulieu, Luc, E-mail: beaulieu@phy.ulaval.ca [Departement de Radio-Oncologie et Centre de Recherche en Cancerologie de l'Universite Laval, Hotel-Dieu de Quebec, Quebec, QC (Canada); Department of Physics, Physics Engineering, and Optics, Universite Laval, Quebec, QC (Canada)
2011-12-01
Purpose: Brachytherapy planning software relies on the Task Group report 43 dosimetry formalism. This formalism, based on a water approximation, neglects various heterogeneous materials present during treatment. Various studies have suggested that these heterogeneities should be taken into account to improve the treatment quality. The present study sought to demonstrate the feasibility of incorporating Monte Carlo (MC) dosimetry within an inverse planning algorithm to improve the dose conformity and increase the treatment quality. Methods and Materials: The method was based on precalculated dose kernels in full patient geometries, representing the dose distribution of a brachytherapy source at a single dwell position using MC simulations and the Geant4 toolkit. These dose kernels are used by the inverse planning by simulated annealing tool to produce a fast MC-based plan. A test was performed for an interstitial brachytherapy breast treatment using two different high-dose-rate brachytherapy sources: the microSelectron iridium-192 source and the electronic brachytherapy source Axxent operating at 50 kVp. Results: A research version of the inverse planning by simulated annealing algorithm was combined with MC to provide a method to fully account for the heterogeneities in dose optimization, using the MC method. The effect of the water approximation was found to depend on photon energy, with greater dose attenuation for the lower energies of the Axxent source compared with iridium-192. For the latter, an underdosage of 5.1% for the dose received by 90% of the clinical target volume was found. Conclusion: A new method to optimize afterloading brachytherapy plans that uses MC dosimetric information was developed. Including computed tomography-based information in MC dosimetry in the inverse planning process was shown to take into account the full range of scatter and heterogeneity conditions. This led to significant dose differences compared with the Task Group report
Chi, Y; Li, Y; Tian, Z; Gu, X; Jiang, S; Jia, X [UT Southwestern Medical Center, Dallas, TX (United States)
2015-06-15
Purpose: Pencil-beam or superposition-convolution type dose calculation algorithms are routinely used in inverse plan optimization for intensity modulated radiation therapy (IMRT). However, due to their limited accuracy in some challenging cases, e.g. lung, the resulting dose may lose its optimality after being recomputed using an accurate algorithm, e.g. Monte Carlo (MC). The objective of this study is to evaluate the feasibility and advantages of a new method to include MC in the treatment planning process. Methods: We developed a scheme to iteratively perform MC-based beamlet dose calculations and plan optimization. In the MC stage, a GPU-based dose engine was used, and the number of particles sampled from a beamlet was proportional to its optimized fluence from the previous step. We tested this scheme on four lung cancer IMRT cases. For each case, the original plan dose, the plan dose re-computed by MC, and the dose optimized by our scheme were obtained. Clinically relevant dosimetric quantities in these three plans were compared. Results: Although the original plan achieved a satisfactory PTV dose coverage, after re-computing doses using the MC method it was found that the PTV D95% was reduced by 4.60%-6.67%. After re-optimizing these cases with our scheme, the PTV coverage was improved to the same level as in the original plan, while the critical OAR coverages were maintained at clinically acceptable levels. Regarding the computation time, it took on average 144 s per case using only one GPU card, including both the MC-based beamlet dose calculation and the treatment plan optimization. Conclusion: The achieved dosimetric gains and high computational efficiency indicate the feasibility and advantages of the proposed MC-based IMRT optimization method. Comprehensive validations in more patient cases are in progress.
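A toy sketch of the iterate-and-reoptimize loop described above (the "MC dose engine" is emulated by noise that shrinks with the number of particles allocated to each beamlet; matrices, sizes, and the least-squares optimizer are illustrative assumptions, not the authors' GPU implementation):

    import numpy as np

    rng = np.random.default_rng(4)
    n_vox, n_blt = 50, 10
    D_true = rng.random((n_vox, n_blt))            # "true" beamlet dose matrix
    rx = D_true @ rng.uniform(0.5, 1.5, n_blt)     # achievable prescription

    fluence = np.ones(n_blt)
    for it in range(5):
        # "MC" beamlet dose: per-beamlet noise ~ 1/sqrt(particles), with
        # particles allocated proportionally to the current fluence.
        particles = 1e5 * fluence / fluence.sum()
        D = D_true + 0.05 * rng.normal(size=D_true.shape) / np.sqrt(particles)
        # Optimization stage: least-squares fluence fit, clipped nonnegative.
        fluence, *_ = np.linalg.lstsq(D, rx, rcond=None)
        fluence = np.clip(fluence, 1e-3, None)
        print(f"iter {it}: residual {np.linalg.norm(D_true @ fluence - rx):.3f}")

Allocating particles in proportion to the current fluence concentrates the MC effort on the beamlets that actually matter, which is the efficiency idea behind the scheme.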
Minimising biases in full configuration interaction quantum Monte Carlo
Vigor, W. A.; Spencer, J. S.; Bearpark, M. J.; Thom, A. J. W.
2015-03-01
We show that Full Configuration Interaction Quantum Monte Carlo (FCIQMC) is a Markov chain in its present form. We construct the Markov matrix of FCIQMC for a two-determinant system and hence compute the stationary distribution. These solutions are used to quantify the dependence of the population dynamics on the parameters defining the Markov chain. Despite the simplicity of a system with only two determinants, it still reveals a population control bias inherent to the FCIQMC algorithm. We investigate the effect of simulation parameters on the population control bias for the neon atom and suggest simulation setups to, in general, minimise the bias. We show that a reweighting scheme to remove the bias caused by population control, commonly used in diffusion Monte Carlo [Umrigar et al., J. Chem. Phys. 99, 2865 (1993)], is effective and recommend its use as a post-processing step.
Petrizzi, L.; Batistoni, P.; Migliori, S. [Associazione EURATOM ENEA sulla Fusione, Frascati (Roma) (Italy); Chen, Y.; Fischer, U.; Pereslavtsev, P. [Association FZK-EURATOM Forschungszentrum Karlsruhe (Germany); Loughlin, M. [EURATOM/UKAEA Fusion Association, Culham Science Centre, Abingdon, Oxfordshire, OX (United Kingdom); Secco, A. [Nice Srl Via Serra 33 Camerano Casasco AT (Italy)
2003-07-01
In deuterium-deuterium (D-D) and deuterium-tritium (D-T) fusion plasmas, neutrons are produced, causing activation of JET machine components. For safe operation and maintenance it is important to be able to predict the induced activation and the resulting shutdown dose rates. This requires a suitable system of codes capable of simulating both the neutron-induced material activation during operation and the decay gamma radiation transport after shutdown in the proper 3-D geometry. Two methodologies to calculate the dose rate in fusion devices have been developed recently and applied to fusion machines, both using the MCNP Monte Carlo code. FZK has developed a more classical approach, the rigorous two-step (R2S) system, in which MCNP is coupled to the FISPACT inventory code with automated routing. ENEA, in collaboration with the ITER Team, has developed an alternative approach, the direct one-step (D1S) method, in which neutron and decay gamma transport are handled in one single MCNP run, using an ad hoc cross-section library. The intention was to tightly couple the neutron-induced production of a radioisotope and the emission of its decay gammas, for an accurate spatial distribution and a reliable calculated statistical error. The two methods have been used by the two Associations to calculate the dose rate at five positions of the JET machine, two inside the vacuum chamber and three outside, at cooling times between 1 second and 1 year after shutdown. The same MCNP model and irradiation conditions have been assumed. The exercise was proposed and financed in the frame of the Fusion Technological Program of the JET machine, with the aim of supplying designers with the most reliable tools and data to calculate dose rates on fusion machines. Results showed good agreement: the differences range between 5% and 35%. The next step, to be considered in 2003, will be an exercise in which the comparison will be done with dose-rate data from JET taken during and
Cassola, V. F.; Kramer, R.; Brayner, C.; Khoury, H. J.
2010-08-01
Does the posture of a patient have an effect on the organ and tissue absorbed doses caused by x-ray examinations? This study aims to answer this question, based on Monte Carlo (MC) simulations of commonly performed x-ray examinations using adult phantoms modelled to represent humans in the standing as well as in the supine posture. The recently published FASH (female adult mesh) and MASH (male adult mesh) phantoms have the standing posture. In a first step, both phantoms were updated with respect to their anatomy: glandular tissue was separated from adipose tissue in the breasts, visceral fat was separated from subcutaneous fat, cartilage was segmented in the ears, nose, and around the thyroid, and the mass of the right lung is now 15% greater than that of the left lung. The updated versions are called FASH2_sta and MASH2_sta (sta = standing). Taking into account the gravitational effects on organ position and fat distribution, supine versions of the FASH2 and MASH2 phantoms have been developed in this study, called FASH2_sup and MASH2_sup. MC simulations of external whole-body exposure to monoenergetic photons and partial-body exposure to x-rays have been made with the standing and supine FASH2 and MASH2 phantoms. For external whole-body exposure in AP and PA projection with photon energies above 30 keV, the effective dose did not change by more than 5% when the posture changed from standing to supine or vice versa. Apart from that, the supine posture is quite rare in occupational radiation protection from whole-body exposure. However, in x-ray diagnosis the supine posture is frequently used for patient examinations. Changes of organ absorbed doses of up to 60% were found in simulations of chest and abdomen radiographs when the posture changed from standing to supine or vice versa. A further increase of the differences between posture-specific organ and tissue absorbed doses with increasing whole-body mass is to be expected.
Ureba, A.; Pereira-Barbeiro, A. R.; Jimenez-Ortega, E.; Baeza, J. A.; Salguero, F. J.; Leal, A.
2013-07-01
The use of Monte Carlo (MC) methods has been shown to improve the accuracy of dose calculation compared with the analytical algorithms installed in commercial treatment planning systems, especially in the non-standard situations typical of complex techniques such as IMRT and VMAT. Our treatment planning system, called CARMEN, is based on full simulation of the beam transport both in the accelerator head and in the patient, and is designed for efficient operation in terms of the accuracy of the estimate and the required computation times. (Author)
Petroccia, Heather; Mendenhall, Nancy; Liu, Chihray; Hammer, Clifford; Culberson, Wesley; Thar, Tim; Mitchell, Tom; Li, Zuofeng; Bolch, Wesley
2017-08-01
Historical radiotherapy treatment plans lack the 3D image sets required for estimating mean organ doses to patients. Alternatively, Monte Carlo-based models of radiotherapy devices coupled with whole-body computational phantoms can permit estimates of historical in-field and out-of-field organ doses, as needed for studies associating radiation exposure with late tissue toxicities. In recreating historical patient treatments with 60Co-based systems, the major components to be modeled include the source capsule, surrounding shielding layers, collimators (both fixed and adjustable), and trimmers as needed to vary the field size. In this study, a computational model and experimental validation of the Theratron T-1000 are presented. Model validation is based upon in-field commissioning data collected at the University of Florida, published out-of-field data from the British Journal of Radiology (BJR) Supplement 25, and out-of-field measurements performed at the University of Wisconsin's Accredited Dosimetry Calibration Laboratory (UWADCL). The computational model of the Theratron T-1000 agrees with central-axis percentage depth dose data to within 2% for 6 × 6 to 30 × 30 cm² fields. Out-of-field doses were found to vary between 0.6% and 2.4% of the central-axis dose at 10 cm from the field edge and between 0.42% and 0.97% of the central-axis dose at 20 cm from the field edge, all at 5 cm depth. Absolute and relative differences between computed and measured out-of-field doses varied between ±2.5% and ±100%, respectively, at distances up to 60 cm from the central axis. The source-term model was subsequently combined with patient-morphometry-matched computational hybrid phantoms as a method for estimating in-field and out-of-field organ doses for patients treated for Hodgkin's Lymphoma. By changing field size and position, and adding patient-specific field-shaping blocks, more complex historical treatment set-ups can be recreated, particularly those
Barrera, C A; Moran, M J
2007-08-21
The Neutron Imaging System (NIS) is one of seven ignition target diagnostics under development for the National Ignition Facility. The NIS is required to record hot-spot (13-15 MeV) and downscattered (6-10 MeV) images with a resolution of 10 microns and a signal-to-noise ratio (SNR) of 10 at the 20% contour. The NIS is a valuable diagnostic since the downscattered neutrons reveal the spatial distribution of the cold fuel during an ignition attempt, providing important information in the case of a failed implosion. The present study explores the parameter space of several line-of-sight (LOS) configurations that could serve as the basis for the final design. Six commercially available organic scintillators were experimentally characterized for their light-emission decay profile and neutron sensitivity. The samples showed a long-lived decay component that makes direct recording of a downscattered image impossible. The two best candidates for the NIS detector material are EJ232 (BC422) plastic fibers or capillaries filled with EJ399B. A Monte Carlo-based end-to-end model of the NIS was developed to study the imaging capabilities of several LOS configurations and verify that the recovered sources meet the design requirements. The model includes accurate neutron source distributions, aperture geometries (square pinhole, triangular wedge, mini-penumbral, annular, and penumbral), their point spread functions, and a pixelated scintillator detector. The modeling results show that a useful downscattered image can be obtained by recording the primary peak and the downscattered images, and then subtracting a decayed version of the former from the latter. The difference images need to be deconvolved in order to obtain accurate source distributions. The images are processed using a frequency-space modified-regularization algorithm and low-pass filtering. The resolution and SNR of these sources are quantified by using two surrogate sources. The simulations show that all LOS
Improved analysis of bias in Monte Carlo criticality safety
Haley, Thomas C.
2000-08-01
Criticality safety, the prevention of nuclear chain reactions, depends on Monte Carlo computer codes for most commercial applications. One major shortcoming of these codes is the limited accuracy of the atomic and nuclear data files they depend on. In order to apply a code and its data files to a given criticality safety problem, the code must first be benchmarked against similar problems for which the answer is known. The difference between a code prediction and the known solution is termed the "bias" of the code. Traditional calculations of the bias for application to commercial criticality problems are generally full of assumptions and lead to large uncertainties, which must be conservatively factored into the bias as statistical tolerances. Recent trends in storing commercial nuclear fuel (narrowed regulatory margins of safety, degradation of neutron absorbers, the desire to use higher-enrichment fuel, etc.) push the envelope of criticality safety. They make it desirable to minimize uncertainty in the bias to accommodate these changes, and they make it vital to understand which assumptions are safe to make under which conditions. A set of improved procedures is proposed for (1) developing multivariate regression bias models, and (2) applying multivariate regression bias models. These improved procedures lead to more accurate estimates of the bias and much smaller uncertainties about this estimate, while also generally providing more conservative results. The drawback is that the procedures are not trivial and are highly labor-intensive to implement. The payback, in savings in margin to criticality and in conservatism for calculations near regulatory and safety limits, may be worth this cost. To develop these procedures, a bias model using the statistical technique of weighted least squares multivariate regression is developed in detail. Problems that can occur from a weak statistical analysis are highlighted, and a solid statistical method for developing the bias
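A small sketch of the weighted least-squares regression idea (predicting the k_eff bias of an application from benchmark parameters; the predictors, uncertainties, and data below are synthetic placeholders, not benchmark results):

    import numpy as np

    rng = np.random.default_rng(9)
    n = 40
    # Synthetic benchmark set: intercept plus two assumed predictors.
    X = np.column_stack([np.ones(n),
                         rng.uniform(2, 5, n),       # enrichment [wt%]
                         rng.uniform(50, 500, n)])   # moderation ratio H/X
    sigma = rng.uniform(0.001, 0.004, n)             # per-benchmark k_eff unc.
    beta_true = np.array([0.002, -0.0004, 1e-6])
    bias = X @ beta_true + rng.normal(0, sigma)      # observed k_eff biases

    # Weighted least squares with inverse-variance weights.
    W = np.diag(1 / sigma**2)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ bias)
    cov_beta = np.linalg.inv(X.T @ W @ X)            # parameter covariance

    x_app = np.array([1.0, 4.5, 300.0])              # application conditions
    pred = x_app @ beta
    unc = np.sqrt(x_app @ cov_beta @ x_app)          # prediction std. error
    print(f"predicted bias {pred:.5f} +/- {unc:.5f}")

Weighting by inverse variance keeps poorly characterized benchmarks from dominating the fit, which is one reason such a model can yield a tighter (yet still conservative) bias estimate than a simple pooled tolerance.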
Czarnecki, Damian; Poppe, Björn; Zink, Klemens
2017-06-01
The impact of removing the flattening filter in clinical electron accelerators on the relationship between dosimetric quantities such as beam quality specifiers and the mean photon and electron energies of the photon radiation field was investigated by Monte Carlo simulations. The purpose of this work was to determine the uncertainties introduced when the well-known beam quality specifiers or energy-based beam specifiers are used as predictors of dosimetric photon field properties once the flattening filter is removed. Monte Carlo simulations applying eight different linear accelerator head models with and without flattening filter were performed in order to generate realistic radiation sources and to calculate field properties such as restricted mass collision stopping power ratios (L̄/ρ){sub air}{sup water} and mean photon and secondary electron energies. To study the impact of removing the flattening filter on the beam quality correction factor kQ, this factor was calculated by Monte Carlo simulations for detailed ionization chamber models. Stopping power ratios (L̄/ρ){sub air}{sup water} and kQ values for different ionization chambers were calculated as a function of TPR{sup 20}{sub 10} and %dd(10)x. Moreover, mean photon energies in air and at the point of measurement in water, as well as mean secondary electron energies at the point of measurement, were calculated. The results revealed that removing the flattening filter led to a change within 0.3% in the relationship between %dd(10)x and (L̄/ρ){sub air}{sup water}, whereas the relationship between TPR{sup 20}{sub 10} and (L̄/ρ){sub air}{sup water} changed by up to 0.8% for high-energy photon beams. However, TPR{sup 20}{sub 10} was a good predictor of (L̄/ρ){sub air}{sup water} for both types of linear accelerator, whereas the mean photon energy below the linear accelerator head, as well as at the point of measurement, may not be suitable as a predictor of (L̄/ρ){sub air}{sup water} and kQ for merging the dosimetry of both linear accelerator types. It was possible to derive (L̄/ρ){sub air}{sup water} using the mean secondary electron energy
Automated Monte Carlo biasing for photon-generated electrons near surfaces.
Franke, Brian Claude; Crawford, Martin James; Kensek, Ronald Patrick
2009-09-01
This report describes efforts to automate the biasing of coupled electron-photon Monte Carlo particle transport calculations. The approach was based on weight-windows biasing. Weight-window settings were determined using adjoint-flux Monte Carlo calculations. A variety of algorithms were investigated for adaptivity of the Monte Carlo tallies. Tree data structures were used to investigate spatial partitioning. Functional-expansion tallies were used to investigate higher-order spatial representations.
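As a concrete, heavily simplified illustration of the weight-windows idea discussed above (a sketch, not the implementation described in the report), the following Python fragment derives window bounds from a precomputed adjoint-flux map and applies splitting and roulette; the normalization constant and window width are assumed placeholder values.

    import numpy as np
    rng = np.random.default_rng(0)

    def weight_window_bounds(adjoint_flux, c_norm=1.0, width=5.0):
        # Window centers are taken inversely proportional to the adjoint
        # (importance) flux; 'width' is an assumed upper/lower bound ratio.
        center = c_norm / np.maximum(adjoint_flux, 1e-12)
        return center / np.sqrt(width), center * np.sqrt(width)

    def apply_weight_window(weight, lower, upper):
        # Split heavy particles, play Russian roulette with light ones;
        # returns the list of surviving particle weights (unbiased on average).
        if weight > upper:
            n = int(np.ceil(weight / upper))
            return [weight / n] * n
        if weight < lower:
            return [lower] if rng.random() < weight / lower else []
        return [weight]

    # toy usage: a particle of weight 8 in a cell whose window is [0.5, 2.0]
    print(apply_weight_window(8.0, 0.5, 2.0))   # -> [2.0, 2.0, 2.0, 2.0]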
S. Maiti
2011-03-01
The Koyna region is well known for its triggered seismic activity since the hazardous earthquake of M=6.3 occurred around the Koyna reservoir on 10 December 1967. Understanding the shallow distribution of the resistivity pattern in such a seismically critical area is vital for mapping faults, fractures and lineaments. However, deducing the true resistivity distribution from apparent resistivity data lacks precision due to the intrinsic non-linearity in the data structures. Here we present a new technique based on Bayesian neural network (BNN) theory using a Hybrid Monte Carlo (HMC)/Markov Chain Monte Carlo (MCMC) simulation scheme. The new method is applied to invert one- and two-dimensional Direct Current (DC) vertical electrical sounding (VES) data acquired around the Koyna region in India. Prior to applying the method to actual resistivity data, it was tested on synthetic signals. In this approach the objective/cost function is optimized with an HMC/MCMC sampling-based algorithm, and each trajectory is updated by approximating the Hamiltonian differential equations through a leapfrog discretization scheme. The stability of the new inversion technique was tested in the presence of correlated red noise, and the uncertainty of the result was estimated using the BNN code. The estimated true resistivity distribution was compared with the results of conventional singular value decomposition (SVD)-based resistivity inversion. The HMC-based Bayesian neural network results are in good agreement with the existing model results; in some cases they also provide more detailed and precise results, which appear to be justified by local geological and structural details. The new HMC-based BNN approach is faster and proved to be a promising inversion scheme for interpreting complex and non-linear resistivity problems. The HMC-based BNN results
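For readers unfamiliar with the sampler used above, a generic HMC step with the leapfrog discretization looks like the sketch below; the Gaussian log-posterior is a stand-in for the BNN objective function, which is not reproduced here.

    import numpy as np
    rng = np.random.default_rng(1)

    def neg_log_post(q):            # placeholder target: standard Gaussian
        return 0.5 * q @ q

    def grad_neg_log_post(q):       # its gradient
        return q

    def hmc_step(q, eps=0.1, n_steps=20):
        # One Hybrid Monte Carlo trajectory: sample a momentum, integrate
        # Hamilton's equations with leapfrog, accept/reject via Metropolis.
        p = rng.standard_normal(q.shape)
        h_old = neg_log_post(q) + 0.5 * p @ p
        q_new = q.copy()
        p_new = p - 0.5 * eps * grad_neg_log_post(q_new)     # half kick
        for i in range(n_steps):
            q_new += eps * p_new                             # drift
            if i < n_steps - 1:
                p_new -= eps * grad_neg_log_post(q_new)      # full kick
        p_new -= 0.5 * eps * grad_neg_log_post(q_new)        # final half kick
        h_new = neg_log_post(q_new) + 0.5 * p_new @ p_new
        return q_new if rng.random() < np.exp(h_old - h_new) else q

    q = np.zeros(3)
    for _ in range(1000):
        q = hmc_step(q)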
Steinczinger, Zsuzsanna; Jóvári, Pál; Pusztai, László
2017-01-01
Neutron- and x-ray-weighted total structure factors of liquid water have been calculated on the basis of the intermolecular parts of partial radial distribution functions resulting from various computer simulations. The approach includes reverse Monte Carlo (RMC) modelling of these partials, using realistic flexible molecules, and the calculation of the experimental diffraction data, including the intramolecular contributions, from the RMC particle configurations. The procedure has been applied to ten sets of intermolecular partial radial distribution functions of water obtained from various computer simulations, including one set from ab initio molecular dynamics. It is found that modern polarizable water potentials, such as SWM4-DP and BK3, are the most successful in reproducing measured diffraction data.
Lazos, Dimitrios; Pokhrel, Damodar; Su, Zhong; Lu, Jun; Williamson, Jeffrey F.
2008-03-01
Fast and accurate modeling of cone-beam CT (CBCT) x-ray projection data can improve CBCT image quality either by linearizing projection data for each patient prior to image reconstruction (thereby mitigating detector blur/lag, spectral hardening, and scatter artifacts) or indirectly by supporting rigorous comparative simulation studies of competing image reconstruction and processing algorithms. In this study, we compare Monte Carlo-computed x-ray projections with projections experimentally acquired from our Varian Trilogy CBCT imaging system for phantoms of known design. Our recently developed Monte Carlo photon-transport code, PTRAN, was used to compute primary and scatter projections for a cylindrical phantom of known diameter (NA model 76-410), with and without bow-tie filter and antiscatter grid, for both full- and half-fan geometries. These simulations were based upon measured 120 kVp spectra, beam profiles, and the flat-panel detector (4030CB) point-spread function. Compound Poisson-process noise was simulated based upon measured beam output. Computed projections were compared to flat- and dark-field corrected 4030CB images, where scatter profiles were estimated by subtracting narrow-axial from full-axial-width 4030CB profiles. In agreement with the literature, the difference between simulated and measured projection data is of the order of 6-8%. The measurement of the scatter profiles is affected by the long tails of the detector PSF. Higher accuracy can be achieved mainly by improving the beam modeling and correcting the nonlinearities induced by the detector PSF.
Measure of Bias Cancellation in Fixed-Node Quantum Monte Carlo
Dubecký, Matúš
2016-01-01
We introduce a measure of fixed-node (FN) bias cancellation useful for a priori assessment of FN diffusion Monte Carlo (FN-DMC) energy differences, based on post-Hartree-Fock natural orbital occupation numbers. The proposed quantity reflects the non-equivalency of static correlations in trial wave functions and uncovers the nature of biases observed in some small noncovalent complexes.
Zhang, Rong; Verkruysse, Wim; Aguilar, Guillermo; Nelson, J Stuart
2005-09-07
Both diffusion approximation (DA) and Monte Carlo (MC) models have been used to simulate light distribution in multilayered human skin with or without discrete blood vessels. However, no detailed comparison of the light distribution, heat generation and induced thermal damage between these two models has been done for discrete vessels. Three models were constructed: (1) MC-based finite element method (FEM) model, referred to as MC-FEM; (2) DA-based FEM with simple scaling factors according to chromophore concentrations (SFCC) in the epidermis and vessels, referred to as DA-FEM-SFCC; and (3) DA-FEM with improved scaling factors (ISF) obtained by equalizing the total light energy depositions that are solved from the DA and MC models in the epidermis and vessels, respectively, referred to as DA-FEM-ISF. The results show that DA-FEM-SFCC underestimates the light energy deposition in the epidermis and vessels when compared to MC-FEM. The difference is nonlinearly dependent on wavelength, dermal blood volume fraction, vessel size and depth, etc. Thus, the temperature and damage profiles are also dramatically different. DA-FEM-ISF achieves much better results in calculating heat generation and induced thermal damage when compared to MC-FEM, and has the advantages of both calculation speed and accuracy. The disadvantage is that a multidimensional ISF table is needed for DA-FEM-ISF to be a practical modelling tool.
Tseung, H Wan Chan; Kreofsky, C R; Ma, D; Beltran, C
2016-01-01
Purpose: To demonstrate the feasibility of fast Monte Carlo (MC) based inverse biological planning for the treatment of head and neck tumors in spot-scanning proton therapy. Methods: Recently, a fast and accurate Graphics Processor Unit (GPU)-based MC simulation of proton transport was developed and used as the dose calculation engine in a GPU-accelerated IMPT optimizer. Besides dose, the dose-averaged linear energy transfer (LETd) can be simultaneously scored, which makes biological dose (BD) optimization possible. To convert from LETd to BD, a linear relation was assumed. Using this novel optimizer, inverse biological planning was applied to 4 patients: 2 small and 1 large thyroid tumor targets, and 1 glioma case. To create these plans, constraints were placed to maintain the physical dose (PD) within 1.25 times the prescription while maximizing target BD. For comparison, conventional IMRT and IMPT plans were created for each case in Eclipse (Varian, Inc). The same critical structure PD constraints were use...
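The linear LETd-to-biological-dose conversion assumed above can be stated in one line; the slope c below is a made-up illustrative value, since the abstract does not give one.

    def biological_dose(physical_dose, let_d, c=0.04):
        # Assumed linear model: BD = PD * (1 + c * LETd), with c in um/keV
        # chosen arbitrarily for illustration (not the paper's value).
        return physical_dose * (1.0 + c * let_d)

    # e.g. 2 Gy at LETd = 5 keV/um  ->  2 * (1 + 0.04 * 5) = 2.4 Gy(RBE)
    print(biological_dose(2.0, 5.0))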
Harsányi, I.; Pusztai, L.
2012-11-01
We report on a comparison of three interaction potential models of water (SPC/E, TIP4P-2005, and SWM4-DP) for describing the structure of concentrated aqueous lithium chloride solutions. Classical molecular dynamics simulations have been carried out and total scattering structure factors, calculated from the particle configurations, were compared with experimental diffraction data. Later, reverse Monte Carlo structural modelling was applied for refining molecular dynamics results, so that particle configurations consistent with neutron and X-ray diffraction data could be prepared that, at the same time, were as close as possible to the final stage of the molecular dynamics simulations. Partial radial distribution functions, first neighbors, and angular correlations were analysed further from the best fitting particle configurations. It was found that none of the water potential models describe the structure perfectly; overall, the SWM4-DP model seems to be the most promising. At the highest concentrations the SPC/E model appears to provide the best approximation of the water structure, whereas the TIP4P-2005 model proved to be the most successful for estimating the lithium-oxygen partial radial distribution function at each concentration.
D. G. Partridge
2012-03-01
This paper presents a novel approach to investigate cloud-aerosol interactions by coupling a Markov chain Monte Carlo (MCMC) algorithm to an adiabatic cloud parcel model. Despite the number of numerical cloud-aerosol sensitivity studies previously conducted, few have used statistical analysis tools to investigate the global sensitivity of a cloud model to input aerosol physiochemical parameters. Using numerically generated cloud droplet number concentration (CDNC) distributions (i.e., synthetic data) as cloud observations, this inverse modelling framework is shown to successfully estimate the correct calibration parameters and their underlying posterior probability distribution.
The employed analysis method provides a new, integrative framework to evaluate the global sensitivity of the derived CDNC distribution to the input parameters describing the lognormal properties of the accumulation mode aerosol and the particle chemistry. To a large extent, results from prior studies are confirmed, but the present study also provides some additional insights. There is a transition in relative sensitivity from very clean marine Arctic conditions, where the lognormal aerosol parameters representing the accumulation mode aerosol number concentration and mean radius are found to be most important for determining the CDNC distribution, to very polluted continental environments (aerosol concentration in the accumulation mode >1000 cm^{−3}), where particle chemistry is more important than both number concentration and size of the accumulation mode.
The competition and compensation between the cloud model input parameters illustrates that if the soluble mass fraction is reduced, the aerosol number concentration, geometric standard deviation and mean radius of the accumulation mode must increase in order to achieve the same CDNC distribution.
This study demonstrates that inverse modelling provides a flexible, transparent and
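A bare-bones version of the inverse-modelling loop described in this abstract (plain random-walk Metropolis rather than the adaptive MCMC sampler used by the authors) is sketched below; parcel_model is a hypothetical stand-in for the adiabatic cloud parcel model, and all numbers are invented.

    import numpy as np
    rng = np.random.default_rng(2)

    def parcel_model(theta):
        # Hypothetical forward model mapping aerosol parameters (number
        # concentration, mean radius) to a CDNC value; stands in for the
        # adiabatic cloud parcel model.
        n_accum, r_mean = theta
        return 0.5 * n_accum * np.tanh(r_mean / 0.05)

    def log_likelihood(theta, cdnc_obs, sigma=10.0):
        return -0.5 * ((parcel_model(theta) - cdnc_obs) / sigma) ** 2

    def metropolis(cdnc_obs, theta0, n_iter=5000, step=(20.0, 0.005)):
        theta, chain = np.array(theta0), []
        ll = log_likelihood(theta, cdnc_obs)
        for _ in range(n_iter):
            prop = theta + rng.normal(0.0, step)
            ll_prop = log_likelihood(prop, cdnc_obs)
            if np.log(rng.random()) < ll_prop - ll:   # accept/reject
                theta, ll = prop, ll_prop
            chain.append(theta.copy())
        return np.array(chain)

    chain = metropolis(cdnc_obs=150.0, theta0=(400.0, 0.05))
    print(chain[-1000:].mean(axis=0))   # posterior-mean parameter estimate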
Jin, L; Eldib, A; Li, J; Price, R; Ma, C [Fox Chase Cancer Center, Philadelphia, PA (United States)
2015-06-15
Purpose: Uneven nose surfaces, underlying air cavities, and the use of bolus introduce complexity and dose uncertainty when a single electron energy beam is used to plan treatment of nose skin with a pencil beam-based planning system. This work demonstrates more accurate dose calculation and more optimal planning using energy- and intensity-modulated electron radiotherapy (MERT) delivered with a pMLC. Methods: An in-house developed Monte Carlo (MC)-based dose calculation/optimization planning system was employed for treatment planning. Phase space data (6, 9, 12 and 15 MeV) were used as the input source for MC dose calculations for the linac. To reduce the scatter-caused penumbra, a short SSD (61 cm) was used. Our previous work demonstrates good agreement in percentage depth dose and off-axis dose between calculations and film measurements for various field sizes. A MERT plan was generated for treating the nose skin using a patient geometry, and a dose volume histogram (DVH) was obtained. The work also compares 2D dose distributions between a clinically used conventional single-electron-energy plan and the MERT plan. Results: The MERT plan resulted in improved target dose coverage as compared to the conventional plan, which demonstrated a target dose deficit at the field edge. The conventional plan showed higher dose to normal tissue underneath the nose skin, while the MERT plan resulted in improved conformity and thus reduced normal tissue dose. Conclusion: This preliminary work illustrates that MC-based MERT planning is a promising technique for treating nose skin, not only providing more accurate dose calculation, but also offering improved target dose coverage and conformity. In addition, this technique may eliminate the necessity of bolus, which often produces dose delivery uncertainty due to the air gaps that may exist between the bolus and the skin.
Edimo, P; Clermont, C; Kwato, M G; Vynckier, S
2009-09-01
In the present work, Monte Carlo (MC) models of electron beams (energies 4, 12 and 18 MeV) from an Elekta SL25 medical linear accelerator were simulated using the EGSnrc/BEAMnrc user code. The calculated dose distributions were benchmarked by comparison with measurements made in a water phantom for a wide range of open field sizes and insert combinations, at a single source-to-surface distance (SSD) of 100 cm. These BEAMnrc models were used to evaluate the accuracy of a commercial MC dose calculation engine for electron beam treatment planning (Oncentra MasterPlan Treatment Planning System (OMTPS) version 1.4, Nucletron) for two energies, 4 and 12 MeV. Output factors were furthermore measured in the water phantom and compared to BEAMnrc and OMTPS. The overall agreement between predicted and measured output factors was comparable for both BEAMnrc and OMTPS, except for a few asymmetric and/or small insert cutouts, where larger deviations between measurements and the values predicted by BEAMnrc as well as OMTPS were recorded. However, in the heterogeneous phantom, differences between BEAMnrc and measurements ranged from 0.5 to 2.0% between two ribs and 0.6-1.0% below the ribs, whereas the difference between OMTPS and measurements spanned the same range (0.5-4.0%) in both areas. With respect to output factors, the overall agreement between BEAMnrc and measurements was usually within 1.0%, whereas differences up to nearly 3.0% were observed for OMTPS. This paper focuses on a comparison for clinical cases, including the effects of electron beam attenuation in a heterogeneous phantom. It therefore complements previously reported data (based on measurements only) in one other paper on commissioning of the VMC++ dose calculation engine. These results demonstrate that the VMC++ algorithm is more robust in predicting dose distributions than pencil beam-based algorithms for the electron beams investigated.
Monte Carlo-based pricing of convertible bonds
赵洋; 赵立臣
2009-01-01
The paper applies the least-squares Monte Carlo method proposed by Longstaff et al. to price convertible bonds, so as to solve the problem of pricing the path-dependent clauses and American option features embedded in convertible bonds. Convertible bonds are complex hybrid securities subject to equity risk, credit risk, and interest rate risk. In the established pricing model, the assumption of constant volatility is relaxed and the volatility is estimated with a GARCH(1,1) model; following the TF model, the credit risk is represented by a credit risk spread; and the yield curve is estimated with the Nelson-Siegel method. Empirical research finds that convertible bonds in China are underpriced by 2% to 3%.
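To make the least-squares Monte Carlo idea concrete, here is a minimal Longstaff-Schwartz pricer for a plain American put under geometric Brownian motion; the GARCH volatility, credit spread and conversion features of the paper's full convertible-bond model are omitted, and all parameter values are illustrative.

    import numpy as np
    rng = np.random.default_rng(3)

    def american_put_lsm(s0=100, k=100, r=0.05, sigma=0.2, t=1.0,
                         n_steps=50, n_paths=20000):
        dt = t / n_steps
        # simulate GBM paths
        z = rng.standard_normal((n_paths, n_steps))
        s = s0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                                  + sigma * np.sqrt(dt) * z, axis=1))
        cash = np.maximum(k - s[:, -1], 0.0)            # payoff at maturity
        for i in range(n_steps - 2, -1, -1):
            cash *= np.exp(-r * dt)                     # discount one step
            itm = k - s[:, i] > 0                       # in-the-money paths only
            if itm.sum() < 3:
                continue
            x = s[itm, i]
            basis = np.column_stack([np.ones_like(x), x, x**2])
            coef, *_ = np.linalg.lstsq(basis, cash[itm], rcond=None)
            continuation = basis @ coef
            exercise = (k - x) > continuation           # exercise where payoff
            idx = np.where(itm)[0][exercise]            # beats continuation value
            cash[idx] = k - s[idx, i]
        return np.exp(-r * dt) * cash.mean()

    print(american_put_lsm())   # roughly 6.0 for these parameters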
He, Li; Huang, Gordon; Lu, Hongwei; Wang, Shuo; Xu, Yi
2012-06-15
This paper presents a global uncertainty and sensitivity analysis (GUSA) framework based on global sensitivity analysis (GSA) and generalized likelihood uncertainty estimation (GLUE) methods. Quasi-Monte Carlo (QMC) sampling is employed by GUSA to obtain realizations of the uncertain parameters, which are then input to the simulation model for analysis. Compared to GLUE, GUSA can not only evaluate the global sensitivity and uncertainty of modeling parameter sets, but also quantify the uncertainty in modeling prediction sets. A further advantage of GUSA lies in the alleviation of computational effort, since globally insensitive parameters can be identified and removed from the uncertain-parameter set. GUSA is applied to a practical petroleum-contaminated site in Canada to investigate free product migration and recovery processes under aquifer remediation operations. Results from the global sensitivity analysis show that (1) initial free product thickness has the most significant impact on total recovery volume but the least impact on residual free product thickness and recovery rate; (2) total recovery volume and recovery rate are sensitive to residual LNAPL phase saturations and soil porosity. Results from the uncertainty predictions reveal that the residual thickness would remain high and almost unchanged after about half a year of the skimmer-well scheme; the rather high residual thickness (0.73-1.56 m 20 years later) indicates that natural attenuation would not be suitable for the remediation. The largest total recovery volume would be from water pumping, followed by vacuum pumping, and then the skimmer. The recovery rates of the three schemes would rapidly decrease after 2 years (less than 0.05 m(3)/day), thus short-term remediation is not suggested.
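The quasi-Monte Carlo sampling step at the heart of such a framework can be reproduced with SciPy's Sobol generator, as in the sketch below; the parameter names and bounds are illustrative placeholders, not the values from the petroleum-site study.

    import numpy as np
    from scipy.stats import qmc

    # hypothetical uncertain parameters: porosity, residual LNAPL
    # saturation, initial free-product thickness (m)
    l_bounds = [0.25, 0.05, 0.2]
    u_bounds = [0.45, 0.25, 1.5]

    sampler = qmc.Sobol(d=3, scramble=True, seed=4)
    sample01 = sampler.random_base2(m=10)          # 2**10 low-discrepancy points
    params = qmc.scale(sample01, l_bounds, u_bounds)

    # each row is one parameter set fed to the simulation model
    print(params.shape, params[0])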
Both, J.P.; Nimal, J.C.; Vergnaud, T. (CEA Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France). Service d' Etudes des Reacteurs et de Mathematiques Appliquees)
1990-01-01
We discuss an automated biasing procedure for generating the parameters necessary to achieve efficient biasing of Monte Carlo shielding calculations. The biasing techniques considered here are the exponential transform and collision biasing, deriving from the concept of the biased game based on the importance function. We use a simple model of the importance function, with exponential attenuation of the importance as the distance to the detector increases. This importance function is generated on a three-dimensional mesh covering the geometry, using graph-theory algorithms. This scheme is currently being implemented in the third version of the neutron and gamma-ray transport code TRIPOLI-3.
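The exponential importance model described above is simple enough to sketch directly; the attenuation coefficient kappa and mesh geometry below are made-up values, and a real code such as TRIPOLI derives path distances through the geometry graph rather than the straight-line distance used here.

    import numpy as np

    def importance_map(shape, detector_idx, cell_size=1.0, kappa=0.2):
        # I(r) = exp(-kappa * d(r, detector)): exponential attenuation of
        # the importance with distance to the detector (straight-line here).
        grid = np.indices(shape).astype(float)
        d = np.sqrt(sum((grid[i] - detector_idx[i]) ** 2
                        for i in range(3))) * cell_size
        return np.exp(-kappa * d)

    imp = importance_map((40, 40, 40), detector_idx=(39, 20, 20))
    # weight-window centers for exponential-transform/collision biasing,
    # normalized at an arbitrary source cell
    ww_center = imp[20, 20, 20] / imp
    print(ww_center.min(), ww_center.max())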
Sikora, M; Dohm, O; Alber, M
2007-08-07
A dedicated, efficient Monte Carlo (MC) accelerator head model for intensity modulated stereotactic radiosurgery treatment planning is needed to afford a highly accurate simulation of tiny IMRT fields. A virtual source model (VSM) of a mini multi-leaf collimator (MLC) (the Elekta Beam Modulator (EBM)) is presented, allowing efficient generation of particles even for small fields. The VSM of the EBM is based on a previously published virtual photon energy fluence model (VEF) (Fippel et al 2003 Med. Phys. 30 301) commissioned with large-field measurements in air and in water. The original commissioning procedure of the VEF, based on large-field measurements only, leads to inaccuracies for small fields. In order to improve the VSM, it was necessary to modify the VEF model by developing (1) a method to determine the primary photon source diameter, relevant for output factor calculations, (2) a model of the influence of the flattening filter on the secondary photon spectrum and (3) a more realistic primary photon spectrum. The VSM is used to generate the source phase space data above the mini-MLC. The particles are then transmitted through the mini-MLC by a passive filter function, which significantly speeds up the generation of the phase space data after the mini-MLC used for calculation of the dose distribution in the patient. The improved VSM was commissioned for 6 and 15 MV beams. The results of the MC simulation are in very good agreement with measurements: a local difference of less than 2% between the MC simulation and diamond detector measurements of the output factors in water was achieved. The X, Y and Z profiles measured in water with an ion chamber (V = 0.125 cm(3)) and a diamond detector were used to validate the models. An overall agreement of 2%/2 mm in high-dose regions and 3%/2 mm in low-dose regions between measurement and MC simulation was achieved for field sizes from 0.8 x 0.8 cm(2) to 16 x 21 cm(2). An IMRT plan film verification
D. G. Partridge
2011-07-01
This paper presents a novel approach to investigate cloud-aerosol interactions by coupling a Markov Chain Monte Carlo (MCMC) algorithm to a pseudo-adiabatic cloud parcel model. Despite the number of numerical cloud-aerosol sensitivity studies previously conducted, few have used statistical analysis tools to investigate the sensitivity of a cloud model to input aerosol physiochemical parameters. Using synthetic data as observed values of the cloud droplet number concentration (CDNC) distribution, this inverse modelling framework is shown to successfully converge to the correct calibration parameters.
The employed analysis method provides a new, integrative framework to evaluate the sensitivity of the derived CDNC distribution to the input parameters describing the lognormal properties of the accumulation mode and the particle chemistry. To a large extent, results from prior studies are confirmed, but the present study also provides some additional insightful findings. There is a clear transition from very clean marine Arctic conditions, where the aerosol parameters representing the mean radius and geometric standard deviation of the accumulation mode are found to be most important for determining the CDNC distribution, to very polluted continental environments (aerosol concentration in the accumulation mode >1000 cm^{−3}), where particle chemistry is more important than both number concentration and size of the accumulation mode.
The competition and compensation between the cloud model input parameters illustrate that if the soluble mass fraction is reduced, the number of particles, the geometric standard deviation, and the mean radius of the accumulation mode must all increase in order to achieve the same CDNC distribution.
For more polluted aerosol conditions, with a reduction in soluble mass fraction the parameter correlation becomes weaker and more non-linear over the range of possible solutions
Fragoso, Margarida; Wen Ning; Kumar, Sanath; Liu Dezhi; Ryu, Samuel; Movsas, Benjamin; Munther, Ajlouni; Chetty, Indrin J, E-mail: ichetty1@hfhs.or [Henry Ford Health System, Detroit, MI (United States)
2010-08-21
Modern cancer treatment techniques, such as intensity-modulated radiation therapy (IMRT) and stereotactic body radiation therapy (SBRT), have greatly increased the demand for more accurate treatment planning (structure definition, dose calculation, etc) and dose delivery. The ability to use fast and accurate Monte Carlo (MC)-based dose calculations within a commercial treatment planning system (TPS) in the clinical setting is now becoming more of a reality. This study describes the dosimetric verification and initial clinical evaluation of a new commercial MC-based photon beam dose calculation algorithm, within the iPlan v.4.1 TPS (BrainLAB AG, Feldkirchen, Germany). Experimental verification of the MC photon beam model was performed with film and ionization chambers in water phantoms and in heterogeneous solid-water slabs containing bone and lung-equivalent materials for a 6 MV photon beam from a Novalis (BrainLAB) linear accelerator (linac) with a micro-multileaf collimator (m{sub 3} MLC). The agreement between calculated and measured dose distributions in the water phantom verification tests was, on average, within 2%/1 mm (high dose/high gradient) and was within {+-}4%/2 mm in the heterogeneous slab geometries. Example treatment plans in the lung show significant differences between the MC and one-dimensional pencil beam (PB) algorithms within iPlan, especially for small lesions in the lung, where electronic disequilibrium effects are emphasized. Other user-specific features in the iPlan system, such as options to select dose to water or dose to medium, and the mean variance level, have been investigated. Timing results for typical lung treatment plans show the total computation time (including that for processing and I/O) to be less than 10 min for 1-2% mean variance (running on a single PC with 8 Intel Xeon X5355 CPUs, 2.66 GHz). Overall, the iPlan MC algorithm is demonstrated to be an accurate and efficient dose algorithm, incorporating robust tools for MC
Zhang Di; Zankl, Maria; DeMarco, John J.; Cagnon, Chris H.; Angel, Erin; Turner, Adam C.; McNitt-Gray, Michael F. [David Geffen School of Medicine at UCLA, Los Angeles, California 90024 (United States); German Research Center for Environmental Health (GmbH), Institute of Radiation Protection, Helmholtz Zentrum Muenchen, Ingolstaedter Landstrasse 1, 85764 Neuherberg (Germany); David Geffen School of Medicine at UCLA, Los Angeles, California 90024 (United States)
2009-12-15
Purpose: Previous work has demonstrated that there are significant dose variations with a sinusoidal pattern on the periphery of a CTDI 32 cm phantom or on the surface of an anthropomorphic phantom when helical CT scanning is performed, resulting in the creation of "hot" spots or "cold" spots. The purpose of this work was to perform preliminary investigations into the feasibility of exploiting these variations to reduce dose to selected radiosensitive organs solely by varying the tube start angle in CT scans. Methods: Radiation doses to several radiosensitive organs (including breasts, thyroid, uterus, gonads, and eye lenses) resulting from MDCT scans were estimated using Monte Carlo simulation methods on voxelized patient models, including GSF's Baby, Child, and Irene. Dose to the fetus was also estimated using four pregnant female models based on CT images of the pregnant patients. Whole-body scans were simulated using 120 kVp, 300 mAs, both 28.8 and 40 mm nominal collimations, and pitch values of 1.5, 1.0, and 0.75 under a wide range of start angles (0-340 deg. in 20 deg. increments). The relationship between tube start angle and organ dose was examined for each organ, and the potential dose reduction was calculated. Results: Some organs exhibit a strong dose variation, depending on the tube start angle. For small peripheral organs (e.g., the eye lenses of the Baby phantom at pitch 1.5 with 40 mm collimation), the minimum dose can be 41% lower than the maximum dose, depending on the tube start angle. In general, larger dose reductions occur for smaller peripheral organs in smaller patients when wider collimation is used. Pitch 1.5 and pitch 0.75 have different mechanisms of dose reduction. For pitch 1.5 scans, the dose is usually lowest when the tube start angle is such that the x-ray tube is posterior to the patient when it passes the longitudinal location of the organ. For pitch 0.75 scans, the dose is lowest
Voigts-Rhetz, P von; Zink, K [Technische Hochschule Mittelhessen - University of Applied Sciences, Giessen, Hessen (Germany)
2014-06-01
Purpose: All present dosimetry protocols recommend well-guarded parallel-plate ion chambers for electron dosimetry. For the guard-less Markus chamber an energy-dependent fluence perturbation correction p{sub cav} is given. This perturbation correction was experimentally determined by van der Plaetsen by comparing the read-out of a Markus and a NACP chamber, the latter assumed to be "perturbation-free". The aim of the present study is a Monte Carlo-based reiteration of this experiment. Methods: Detailed models of four parallel-plate chambers (Roos, Markus, NACP and Advanced Markus) were designed using the Monte Carlo code EGSnrc and placed in a water phantom. For all chambers the dose to the active volume filled with low-density water was calculated for 13 clinical electron spectra (E{sub 0}=6-21 MeV) at the depth of maximum dose and at the reference depth under reference conditions. In all cases the chamber's reference point was positioned at the depth of measurement. Moreover, the dose to water D{sub W} was calculated in a small water voxel positioned at the same depth. Results: The calculated dose ratio D{sub NACP}/D{sub Markus}, which according to van der Plaetsen reflects the fluence perturbation correction of the Markus chamber, deviates less from unity than the values given by van der Plaetsen, but exhibits a similar energy dependence. The same holds for the dose ratios of the other well-guarded chambers. In comparison to water, however, the Markus chamber reveals the smallest overall perturbation correction, which is nearly energy-independent at both investigated depths. Conclusion: The simulations principally confirm the energy dependence of the dose ratio D{sub NACP}/D{sub Markus} as published by van der Plaetsen. But, as shown by our simulations of the ratio D{sub W}/D{sub Markus}, the conclusion drawn in all dosimetry protocols is questionable: in contrast to all well-guarded chambers, the guard-less Markus chamber reveals the smallest overall perturbation
De Smedt, B; Fippel, M; Reynaert, N; Thierens, H
2006-06-01
In order to evaluate the performance of denoising algorithms applied to Monte Carlo calculated dose distributions, conventional evaluation methods (rms difference, 1% and 2% difference) can be used. However, it is illustrated that these evaluation methods sometimes underestimate the introduction of bias, since possible bias effects are averaged out over the complete dose distribution. In the present work, a new evaluation method is introduced, based on a sliding window superimposed on a difference dose distribution (reference dose minus noisy/denoised dose). To illustrate its importance, a new denoising technique (ANRT) is presented, based upon a combination of the principles of bilateral filtering and Savitzky-Golay filters. This technique is very conservative in order to limit the introduction of bias in high dose gradient regions. ANRT is compared with IRON for three challenging cases, namely an electron and a photon beam impinging on heterogeneous phantoms and two IMRT treatment plans of head-and-neck cancer patients, to determine the clinical relevance of the obtained results. For the electron beam case, IRON outperforms ANRT concerning the smoothing capabilities, while no differences in systematic bias are observed. However, for the photon beam case, although ANRT and IRON perform equally well on the conventional evaluation tests (rms difference, 1% and 2% difference), IRON clearly introduces much more bias in the penumbral regions, while ANRT seems to introduce no bias at all. When applied to the IMRT patient cases, both denoising methods perform equally well regarding smoothing and bias introduction. This is probably caused by the summation of a large set of different beam segments, which decreases dose gradients compared to a single beam. A reduction in calculation time without introducing large systematic bias can shorten a Monte Carlo treatment planning process considerably and is therefore very useful for the initial trial-and-error phase of the treatment planning
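The sliding-window evaluation introduced above can be emulated in a few lines: a local mean of the difference distribution exposes bias that a global rms would average out. The window size and the synthetic dose arrays below are arbitrary choices for illustration.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def sliding_window_bias(dose_ref, dose_test, window=5):
        # Local mean of (reference - test) over a sliding cubic window;
        # a large local mean flags systematic bias even when rms is small.
        local_mean = uniform_filter(dose_ref - dose_test, size=window)
        return np.abs(local_mean).max()

    rng = np.random.default_rng(5)
    ref = rng.random((32, 32, 32))
    noisy = ref + rng.normal(0, 0.02, ref.shape)               # pure noise
    biased = ref + 0.05 * (np.arange(32)[:, None, None] > 16)  # local offset
    print(sliding_window_bias(ref, noisy),    # small: noise averages out
          sliding_window_bias(ref, biased))   # ~0.05: bias is exposed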
The exchange bias phenomenon in uncompensated interfaces: theory and Monte Carlo simulations.
Billoni, O V; Cannas, S A; Tamarit, F A
2011-09-28
We performed Monte Carlo simulations of a bilayer system composed of two thin films, one ferromagnetic (FM) and the other antiferromagnetic (AFM). Two lattice structures for the films were considered: simple cubic and body centered cubic (bcc). We imposed an uncompensated interfacial spin structure in both lattice structures; in particular we emulated an FeF2-FM system in the case of the bcc lattice. Our analysis focused on the incidence of the interfacial strength interactions between the films, J(eb), and the effect of thermal fluctuations on the bias field, H(EB). We first performed Monte Carlo simulations on a microscopic model based on classical Heisenberg spin variables. To analyze the simulation results we also introduced a simplified model that assumes coherent rotation of spins located on the same layer parallel to the interface. We found that, depending on the AFM film anisotropy to exchange ratio, the bias field is controlled either by the intrinsic pinning of a domain wall parallel to the interface or by the stability of the first AFM layer (quasi-domain wall) near the interface.
A Monte Carlo study of the seagrass-induced depth bias in bathymetric lidar.
Wang, Chi-Kuei; Philpot, William; Kim, Minsu; Lei, Hou-Meng
2011-04-11
A bathymetric lidar survey is the most cost efficient method of producing bathymetric maps in near shore areas where the ocean bottom is both highly variable and of greatest importance for shipping and recreation. So far, not much attention has been paid to the influence of bottom materials on the lidar signals. This study addresses this issue using a Monte Carlo modeling technique. The Monte Carlo simulation includes a plane parallel water body and a flat bottom with or without seagrass. The seagrass canopy structure is adopted from Zimmerman (2003). Both the surface of the seagrass leaves and the bottom are assumed to be Lambertian. Convolution with the lidar pulse function followed by the median operator is used to reduce the variance of the resultant lidar waveform. Two seagrass orientation arrangements are modeled: seagrass in still water with random leaf orientation and seagrass with a uniform orientation as would be expected when under the influence of a water current. For each case, two maximum canopy heights, 0.5 m and 1 m, three shoot densities, 100, 500, and 1000, and three bending angles, 5, 25, and 45 degrees, are considered. The seagrass is found to induce a depth bias that is proportional to an effective leaf area index (eLAI) and the contrast in reflectance between the seagrass and the bottom material.
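A stripped-down version of the waveform processing described above, run on synthetic return histograms, might look as follows; the Gaussian pulse and the canopy and bottom return shapes are invented for illustration and are not the study's radiative transfer model.

    import numpy as np
    rng = np.random.default_rng(6)

    t = np.arange(0.0, 60.0, 0.1)                          # time axis, ns
    pulse = np.exp(-0.5 * ((t - 3.0) / 1.0) ** 2)          # assumed lidar pulse
    impulse = 0.4 * np.exp(-0.5 * ((t - 20) / 2.5) ** 2)   # canopy return
    impulse += 1.0 * np.exp(-0.5 * ((t - 30) / 0.8) ** 2)  # bottom return

    # many Monte Carlo realizations: add photon noise, convolve with the
    # pulse, then take the median across realizations to reduce variance
    waveforms = []
    for _ in range(25):
        noisy = rng.poisson(200 * impulse) / 200.0
        waveforms.append(np.convolve(noisy, pulse, mode="same"))
    waveform = np.median(np.array(waveforms), axis=0)

    # locate the bottom return on the processed waveform
    print("bottom peak at t =", t[np.argmax(waveform)], "ns")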
Arabi, Hossein; Asl, Ali Reza Kamali; Ay, Mohammad Reza; Zaidi, Habib
2015-01-01
Objective: The purpose of this work is to evaluate the impact of optimization of magnification on performance parameters of the variable resolution X-ray (VRX) CT scanner. MethodsA realistic model based on an actual VRX CT scanner was implemented in the GATE Monte Carlo simulation platform. To evalu
Jung, J; Pelletier, C [East Carolina University, Greenville, NC (United States); Lee, C [University of Michigan, Ann Arbor, MI (United States); Kim, J [University of Pittsburgh Medical Center, Pittsburgh, PA (United States); Pyakuryal, A; Lee, C [National Cancer Institute, Rockville, MD (United States)
2015-06-15
Purpose: Organ doses for Hodgkin's lymphoma patients treated with cobalt-60 radiation were estimated using an anthropomorphic model and Monte Carlo modeling. Methods: A cobalt-60 treatment unit modeled in the BEAMnrc Monte Carlo code was used to produce phase space data. The Monte Carlo simulation was verified with percent depth dose measurements in water at various field sizes. Radiation transport through the lung blocks was modeled by adjusting the weights of the phase space data. We imported a precontoured adult female hybrid model and generated a treatment plan. The adjusted phase space data and the human model were imported into the XVMC Monte Carlo code for dose calculation. The organ mean doses were estimated and dose volume histograms were plotted. Results: The percent depth dose agreement between measurement and calculation in the water phantom was within 2% for all field sizes. The mean organ doses of the heart, left breast, right breast, and spleen for the selected case were 44.3, 24.1, 14.6 and 3.4 Gy, respectively, with a midline prescription dose of 40.0 Gy. Conclusion: Organ doses were estimated for a patient group whose three-dimensional images are not available. This development may open the door to more accurate dose reconstruction and estimates of uncertainties in secondary cancer risk for Hodgkin's lymphoma patients. This work was partially supported by the intramural research program of the National Institutes of Health, National Cancer Institute, Division of Cancer Epidemiology and Genetics.
Buividovich, P V
2015-01-01
We discuss the feasibility of applying Diagrammatic Monte-Carlo algorithms to the weak-coupling expansions of asymptotically free quantum field theories, taking the large-$N$ limit of the $O(N)$ sigma-model as the simplest example where exact results are available. We use stereographic mapping from the sphere to the real plane to set up the perturbation theory, which results in a small bare mass term proportional to the coupling $\\lambda$. Counting the powers of coupling associated with higher-order interaction vertices, we arrive at the double-series representation for the dynamically generated mass gap in powers of both $\\lambda$ and $\\log(\\lambda)$, which converges quite quickly to the exact non-perturbative answer. We also demonstrate that it is feasible to obtain the coefficients of these double series by a Monte-Carlo sampling in the space of Feynman diagrams. In particular, the sign problem of such sampling becomes milder at small $\\lambda$, that is, close to the continuum limit.
A Monte Carlo-based dosimetric study of the GZP 60Co source
王先良; 袁珂; 唐斌; 康盛伟; 黎杰; 肖明勇; 李晓兰; 李林涛; 王培
2016-01-01
Objective: To simulate and calculate the dosimetric parameters of the GZP 60Co source, which is already in clinical use for high-dose-rate brachytherapy. Methods: The EGSnrc Monte Carlo software was used to simulate and calculate the dosimetric parameters of the well-known BEBIG 60Co source (Co0.A86); the results were compared with the published parameters to verify the feasibility of the method. A Monte Carlo model of the GZP 60Co source for high-dose-rate brachytherapy was then established, and its dosimetric parameters were simulated and calculated in the same way. Results: For the BEBIG 60Co source, the results accorded well with the standard data: the air-kerma strength per unit activity (S{sub K}/A) and the dose rate constant (Λ) deviated from the standard by 0.2% and 1.0%, respectively, and the curves of the radial dose function gL(r) and the anisotropy function F(r,θ) fit well. For the GZP 60Co source, the calculated S{sub K}/A and Λ were 3.011x10{sup -7} cGy cm{sup 2} h{sup -1} Bq{sup -1} and 1.118 cGy h{sup -1} U{sup -1} for channels 1 and 2, and 3.002x10{sup -7} cGy cm{sup 2} h{sup -1} Bq{sup -1} and 1.110 cGy h{sup -1} U{sup -1} for channel 3; gL(r), F(r,θ) and the dose rate in water per unit air-kerma strength are tabulated following the AAPM recommendations. Conclusion: The results can be used in treatment planning systems for the GZP 60Co source and can serve as quality control data for this source.
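For reference, the quantities reported in this abstract (S_K, Λ, gL(r), F(r,θ)) are the inputs of the standard AAPM TG-43 dose-rate equation, which a planning system evaluates as

    \dot{D}(r,\theta) = S_K \, \Lambda \, \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)} \, g_L(r) \, F(r,\theta)

where G_L is the line-source geometry function and (r_0, θ_0) = (1 cm, 90°) is the reference point.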
Bednarz, Bryan; Athar, Basit; Xu, X. George [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02108 and Department of Mechanical Aerospace and Nuclear Engineering, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States)
2010-05-15
Purpose: A physician's decision regarding an ideal treatment approach (i.e., radiation, surgery, and/or hormonal) for prostate carcinoma is traditionally based on a variety of metrics. One of these metrics is the risk of radiation-induced second primary cancer following radiation treatments. The aim of this study was to investigate the significance of second cancer risks in out-of-field organs from 3D-CRT and IMRT treatments of prostate carcinoma compared to baseline cancer risks in these organs. Methods: Monte Carlo simulations were performed using a detailed medical linear accelerator model and an anatomically realistic adult male whole-body phantom. A four-field box treatment, a four-field box treatment plus a six-field boost, and a seven-field IMRT treatment were simulated. Using BEIR VII risk models, the age-dependent lifetime attributable risks to various organs outside the primary beam with a known predilection for cancer were calculated using organ-averaged equivalent doses. Results: The four-field box treatment had the lowest treatment-related second primary cancer risks to organs outside the primary beam, ranging from 7.3x10{sup -9} to 2.54x10{sup -5}%/MU depending on the patient's age at exposure and second primary cancer site. The risks to organs outside the primary beam from the four-field box and six-field boost and the seven-field IMRT were nearly equivalent. The risks from the four-field box and six-field boost ranged from 1.39x10{sup -8} to 1.80x10{sup -5}%/MU, and from the seven-field IMRT ranged from 1.60x10{sup -9} to 1.35x10{sup -5}%/MU. The second cancer risks in all organs considered from each plan were below the baseline risks. Conclusions: The treatment-related second cancer risks in organs outside the primary beam due to 3D-CRT and IMRT are small. New risk assessment techniques need to be investigated to address the concern of radiation-induced second cancers from prostate treatments, particularly focusing on risks to organs inside the primary beam.
Bai, Peng; Siepmann, J Ilja
2017-02-14
Particle swap moves between phases are usually the rate-limiting step for Gibbs ensemble Monte Carlo (GEMC) simulations of fluid phase equilibria at low reduced temperatures because the acceptance probabilities for these moves can become very low for molecules with articulated architecture and/or highly directional interactions. The configurational-bias Monte Carlo (CBMC) technique can greatly increase the acceptance probabilities, but the efficiency of the CBMC algorithm is influenced by multiple parameters. In this work we assess the performance of different CBMC strategies for GEMC simulations using the SPC/E and TIP4P water models at 283, 343, and 473 K, demonstrate that much higher acceptance probabilities can be achieved than previously reported in the literature, and make recommendations for CBMC strategies leading to optimal efficiency.
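The core of a configurational-bias move like those assessed above is the Rosenbluth-weighted selection of trial positions; the sketch below shows that step for a single site with an arbitrary pair potential, omitting the matching reverse-move weight needed in the full GEMC acceptance rule. The toy energy function stands in for the water-water interactions of the paper.

    import numpy as np
    rng = np.random.default_rng(7)

    def cbmc_trial_insertion(energy_fn, box, beta, k_trials=8):
        # Generate k trial positions, Boltzmann-select one; returns the
        # chosen position and the Rosenbluth factor used in the acceptance.
        trials = rng.random((k_trials, 3)) * box
        weights = np.exp(-beta * np.array([energy_fn(x) for x in trials]))
        w_sum = weights.sum()
        if w_sum == 0.0:
            return None, 0.0
        chosen = rng.choice(k_trials, p=weights / w_sum)
        return trials[chosen], w_sum / k_trials

    box = 10.0
    energy = lambda x: 0.5 * ((x - box / 2) ** 2).sum()   # toy potential
    pos, w = cbmc_trial_insertion(energy, box, beta=1.0)
    print(pos, w)
    # a full swap accepts with min(1, W_new/W_old * volume/particle terms)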
Mendenhall, Marcus H
2011-01-01
In Monte-Carlo codes such as Geant4, it is often important to adjust reaction cross sections to reduce the variance of calculations of relatively rare events, in a technique known as non-analogous Monte-Carlo. We present the theory and sample code for a Geant4 process which allows the cross section of a G4VDiscreteProcess to be scaled, while adjusting track weights so as to mitigate the effects of altered primary beam depletion induced by the cross section change. This allows us to increase the cross section of nuclear reactions by factors exceeding 10^{4} (in appropriate cases), without distorting the results of energy deposition calculations or coincidence rates. The procedure is also valid for bias factors less than unity, which is useful, for example, in problems which involve computation of particle penetration deep into a target, such as occurs in atmospheric showers or in shielding.
Mendenhall, Marcus H., E-mail: marcus.h.mendenhall@vanderbilt.edu [Vanderbilt University, Department of Electrical Engineering, P.O. Box 351824B, Nashville, TN 37235 (United States); Weller, Robert A., E-mail: robert.a.weller@vanderbilt.edu [Vanderbilt University, Department of Electrical Engineering, P.O. Box 351824B, Nashville, TN 37235 (United States)
2012-03-01
In Monte Carlo particle transport codes, it is often important to adjust reaction cross-sections to reduce the variance of calculations of relatively rare events, in a technique known as non-analog Monte Carlo. We present the theory and sample code for a Geant4 process which allows the cross-section of a G4VDiscreteProcess to be scaled, while adjusting track weights so as to mitigate the effects of altered primary beam depletion induced by the cross-section change. This makes it possible to increase the cross-section of nuclear reactions by factors exceeding 10{sup 4} (in appropriate cases), without distorting the results of energy deposition calculations or coincidence rates. The procedure is also valid for bias factors less than unity, which is useful in problems that involve the computation of particle penetration deep into a target (e.g. atmospheric showers or shielding studies).
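The weight algebra behind this kind of cross-section biasing is easy to verify in one dimension: scale the total cross-section Σ by a factor b, multiply the track weight by the ratio of analog to biased flight densities, and weighted tallies reproduce the analog expectation. The sketch below is a self-contained check of the first-collision density, not Geant4 code.

    import numpy as np
    rng = np.random.default_rng(8)

    sigma, b, n = 0.5, 10.0, 200000       # true XS, bias factor, histories
    edges = np.linspace(0.0, 1.0, 11)
    tally = np.zeros(len(edges) - 1)

    for _ in range(n):
        s = rng.exponential(1.0 / (b * sigma))     # flight with biased XS
        # weight correction = analog pdf / biased pdf at the sampled distance
        w = np.exp((b - 1.0) * sigma * s) / b
        i = np.searchsorted(edges, s) - 1
        if 0 <= i < len(tally):
            tally[i] += w

    centers = 0.5 * (edges[:-1] + edges[1:])
    analog = n * sigma * np.exp(-sigma * centers) * np.diff(edges)
    print(np.abs(tally / analog - 1.0).max())   # agrees to within a few percent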
Tripoli 3: a Monte Carlo code with a powerful and automatic biasing
Vergnaud, T.; Nimal, J.C.; Both, J.P.
1993-06-01
The TRIPOLI-3 program keeps all the capabilities of TRIPOLI-2 but offers many other possibilities. The most interesting is the automation of the biasing, which significantly simplifies the user's work. Visualization of the spatial and energetic distribution of particles during the simulation allows the biasing to be checked.
Monte Carlo Based Toy Model for Fission Process
Kurniadi, R; Viridi, S
2014-01-01
Fission yields have conventionally been calculated by two approaches, the macroscopic approach and the microscopic approach. This work proposes another calculation approach, in which the nucleus is treated as a toy model. The toy model of fission yield is a preliminary method that uses random numbers as the backbone of the calculation. Because the nucleus is a toy model, the fission process does not completely represent the real fission process in nature. A fission event is modeled by one random number, which is taken as the width of the probability distribution of nucleon positions in the compound nucleus when the fission process starts. The toy model is formed by a Gaussian distribution of random numbers that randomizes the distances between particles and a central point. The scission process starts by splitting the compound-nucleus central point into two parts, the left and right central points. These three points have different Gaussian distribution parameters, such as the means (μCN, μL, μR) and standard deviations...
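Read literally, the toy model described above amounts to sampling three Gaussian centers and splitting nucleons between them; the following sketch is one possible reading of that recipe, with every parameter value invented for illustration.

    import numpy as np
    rng = np.random.default_rng(9)

    def toy_fission(n_nucleons=236, mu_cn=0.0, mu_l=-1.0, mu_r=1.0,
                    sigma_cn=1.0, sigma_l=0.5, sigma_r=0.5, n_events=10000):
        # Toy fission: nucleon positions ~ N(mu_CN, sigma_CN); at scission
        # each nucleon joins the closer of the sampled left/right centers.
        masses = []
        for _ in range(n_events):
            x = rng.normal(mu_cn, sigma_cn, n_nucleons)
            left_c = rng.normal(mu_l, sigma_l)
            right_c = rng.normal(mu_r, sigma_r)
            n_left = int(np.sum(np.abs(x - left_c) < np.abs(x - right_c)))
            masses.append(n_left)
        # histogram of light-fragment masses: a toy "yield" curve
        return np.bincount(masses, minlength=n_nucleons + 1)

    yield_curve = toy_fission()
    print(yield_curve.argmax())   # most probable left-fragment mass number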
Development and evaluation of Monte Carlo-based SPECT reconstruction
Xiao, J.
2009-01-01
Single Photon Emission Computed Tomography (SPECT) is one of the most applied molecular imaging techniques to diagnose human diseases, e.g., of the heart, the brain or in oncology. For example, cardiac SPECT imaging plays a central role in diagnosing coronary heart diseases by providing clinicians w
A Markov Chain Monte Carlo Based Method for System Identification
Glaser, R E; Lee, C L; Nitao, J J; Hanley, W G
2002-10-22
This paper describes a novel methodology for the identification of mechanical systems and structures from vibration response measurements. It combines prior information, observational data and predictive finite element models to produce configurations and system parameter values that are most consistent with the available data and model. Bayesian inference and a Metropolis simulation algorithm form the basis for this approach. The resulting process enables the estimation of distributions of both individual parameters and system-wide states. Attractive features of this approach include its ability to: (1) provide quantitative measures of the uncertainty of a generated estimate; (2) function effectively when exposed to degraded conditions including: noisy data, incomplete data sets and model misspecification; (3) allow alternative estimates to be produced and compared, and (4) incrementally update initial estimates and analysis as more data becomes available. A series of test cases based on a simple fixed-free cantilever beam is presented. These results demonstrate that the algorithm is able to identify the system, based on the stiffness matrix, given applied force and resultant nodal displacements. Moreover, it effectively identifies locations on the beam where damage (represented by a change in elastic modulus) was specified.
A Monte Carlo Based Analysis of Optimal Design Criteria
2011-11-09
MATLAB's fmincon or SolvOpt, developed by A. Kuntsevich and F. Kappel [18, 17], was used with four variations of the constraint implementation, denoted (C1)...
[17] F. Kappel and A. V. Kuntsevich, An implementation of Shor's r-algorithm, Computational Optimization and Applications, 15 (2000), 193-205.
[18] A. Kuntsevich and F. Kappel, SolvOpt, retrieved December 2009, from http://www.kfunigraz.ac.at
A Monte Carlo based spent fuel analysis safeguards strategy assessment
Fensin, Michael L [Los Alamos National Laboratory; Tobin, Stephen J [Los Alamos National Laboratory; Swinhoe, Martyn T [Los Alamos National Laboratory; Menlove, Howard O [Los Alamos National Laboratory; Sandoval, Nathan P [Los Alamos National Laboratory
2009-01-01
Safeguarding nuclear material involves the detection of diversions of significant quantities of nuclear materials and the deterrence of such diversions by the risk of early detection. There are a variety of motivations for quantifying plutonium in spent fuel assemblies by means of nondestructive assay (NDA), including the following: strengthening the International Atomic Energy Agency's ability to safeguard nuclear facilities, shipper/receiver difference, input accountability at reprocessing facilities, and burnup credit at repositories. Many NDA techniques exist for measuring signatures from spent fuel; however, no single NDA technique can, in isolation, quantify elemental plutonium and other actinides of interest in spent fuel. A study has been undertaken to determine the best integrated combination of cost-effective techniques for quantifying plutonium mass in spent fuel for nuclear safeguards. A standardized assessment process was developed to compare the effective merits and faults of 12 different detection techniques, in order to integrate a few techniques and to down-select among them in preparation for experiments. The process involves generating a basis burnup/enrichment/cooling-time-dependent spent fuel assembly library, creating diversion scenarios, developing detector models and quantifying the capability of each NDA technique. Because hundreds of input and output files must be managed in coupling the different facets of the assessment process, a graphical user interface (GUI) was developed that automates the process. This GUI allows users to visually create diversion scenarios with varied replacement materials and to generate a MCNPX fixed-source detector assessment input file. The end result of the assembly library assessment is to select a set of common source terms and diversion scenarios for quantifying the capability of each of the 12 NDA techniques. We present here the generalized assessment process, the techniques employed to automate the coupled facets of the assessment process, and the standard burnup/enrichment/cooling-time-dependent spent fuel assembly library. We also clearly define the diversion scenarios that will be analyzed during the standardized assessments. Though this study is currently limited to generic PWR assemblies, it is expected that the assessment will yield adequate knowledge of spent fuel analysis strategies to aid the down-select process for other reactor types.
Hardware acceleration of Monte Carlo-based simulations
Echeverría Aramendi, Pedro
2011-01-01
During recent years there has been an enormous advance in FPGAs. Traditionally, FPGAs have been used mainly for prototyping, as they offer significant advantages at a suitably low cost: flexibility and ease of verification. First, their flexibility allows the implementation of different generations of a given application and gives designers room to modify implementations until the very last moment, or even to correct mistakes once the product has been released. Second, the verification of a de...
张振铎; 张彬
2012-01-01
A new Monte Carlo-based error analysis method for optoelectronic tracking and measurement systems is presented. Using coordinate transformations, an accurate Verilog-A model of the photoelectric theodolite is established, including collimation error, horizontal-axis error, vertical-axis error, sensor error and encoder error. Worst-case and Monte Carlo analyses are used to quantify the effect of each error source on system performance. The station placement for dual-station intersection measurement is then optimized: taking both theodolite errors and station-position errors into account, the Monte Carlo method computes the optimal station placement for a given ballistic trajectory. The method offers practical guidance for the design of optoelectronic tracking and measurement systems.
Monte Carlo simulations of microchannel plate detectors I: steady-state voltage bias results
Ming Wu, Craig Kruschwitz, Dane Morgan, Jiaming Morgan
2008-07-01
X-ray detectors based on straight-channel microchannel plates (MCPs) are a powerful diagnostic tool for two-dimensional, time-resolved imaging and time-resolved x-ray spectroscopy in the fields of laser-driven inertial confinement fusion and fast z-pinch experiments. Understanding the behavior of microchannel plates as used in such detectors is critical to understanding the data obtained. The subject of this paper is a Monte Carlo computer code we have developed to simulate the electron cascade in a microchannel plate under a static applied voltage. Also included in the simulation is elastic reflection of low-energy electrons from the channel wall, which is important at lower voltages. When model results were compared to measured microchannel plate sensitivities, good agreement was found. Spatial resolution simulations of MCP-based detectors are also presented and found to agree with experimental measurements.
Automatic Monte-Carlo Tuning for Minimum Bias Events at the LHC
Kama, Sami; Kolanoski, Hermann
The Large Hadron Collider near Geneva, Switzerland will ultimately collide protons at a center-of-mass energy of 14 TeV and a 40 MHz bunch crossing rate with a luminosity of 10{sup 34} cm{sup -2}s{sup -1}. At each bunch crossing about 20 soft proton-proton interactions are expected to happen. In order to study new phenomena and improve our current knowledge of the physics, these events must be understood. However, the physics of soft interactions is not completely known at such high energies. Different phenomenological models, trying to explain these interactions, are implemented in several Monte-Carlo (MC) programs such as PYTHIA, PHOJET and EPOS. Some parameters in such MC programs can be tuned to improve the agreement with the data. In this thesis a new method for tuning the MC programs, based on Genetic Algorithms and distributed analysis techniques, has been presented. This method represents the first fully automated MC tuning technique that is based on true MC distributions. It ...
Automatic Monte-Carlo tuning for minimum bias events at the LHC
Kama, Sami
2010-06-22
The Large Hadron Collider near Geneva, Switzerland will ultimately collide protons at a center-of-mass energy of 14 TeV and a 40 MHz bunch crossing rate with a luminosity of L=10{sup 34} cm{sup -2}s{sup -1}. At each bunch crossing about 20 soft proton-proton interactions are expected to happen. In order to study new phenomena and improve our current knowledge of the physics, these events must be understood. However, the physics of soft interactions is not completely known at such high energies. Different phenomenological models, trying to explain these interactions, are implemented in several Monte-Carlo (MC) programs such as PYTHIA, PHOJET and EPOS. Some parameters in such MC programs can be tuned to improve the agreement with the data. In this thesis a new method for tuning the MC programs, based on Genetic Algorithms and distributed analysis techniques, has been presented. This method represents the first fully automated MC tuning technique that is based on true MC distributions. It is an alternative to parametrization-based automatic tuning. This new method is used in finding new tunes for PYTHIA 6 and 8. These tunes are compared to the tunes found by alternative methods, such as the PROFESSOR framework and manual tuning, and found to be equivalent or better. Charged particle multiplicity, dN{sub ch}/d{eta}, Lorentz-invariant yield, transverse momentum and mean transverse momentum distributions at various center-of-mass energies are generated using default tunes of EPOS, PHOJET and the Genetic Algorithm tunes of PYTHIA 6 and 8. These distributions are compared to measurements from UA5, CDF, CMS and ATLAS in order to investigate the best model available. Their predictions for the ATLAS detector at LHC energies have been investigated both with generator-level and full detector simulation studies. Comparison with the data did not favor any model implemented in the generators, but EPOS was found to describe the investigated distributions better. New data from ATLAS and
Yüksel, Yusuf; Akıncı, Ümit
2016-12-01
Using Monte Carlo simulations, we have investigated the dynamic phase transition properties of magnetic nanoparticles with a ferromagnetic core coated by an antiferromagnetic shell. The effects of field amplitude and frequency on the thermal dependence of the magnetizations and on the magnetization reversal mechanisms during hysteresis cycles, as well as on the exchange bias and coercive fields, have been examined, and the feasibility of applying dynamic magnetic fields to the particle has been discussed for technological and biomedical purposes.
Nurse, E; The ATLAS collaboration
2010-01-01
Charged particle distributions from pp collisions at 0.9 and 7 TeV measured with the ATLAS detector are presented. The distributions are shown in diffraction-suppressed and diffraction-enhanced minimum bias event samples. The fraction of events in the diffraction-enhanced sample relative to the inclusive sample is used to constrain the relative diffractive cross section. Underlying event distributions, where the charged particle density in a region transverse to the hard interaction is plotted as a function of the transverse momentum of the leading charged particle, are also presented. In addition a new ATLAS tune to the diffraction-suppressed minimum bias and underlying event data is presented.
Monte Carlo techniques in radiation therapy
Verhaegen, Frank
2013-01-01
Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book, the first of its kind, addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...
Boyle, Paul M; Houchens, Brent C; Kim, Albert S
2013-06-01
Pressure-driven flow through a channel with membrane walls is modeled for high particulate volume fractions of 10%. Particle transport is influenced by Brownian diffusion, shear-induced diffusion, and convection due to the axial crossflow. The particles are also subject to electrostatic double layer repulsion and van der Waals attraction, from both particle-particle and particle-membrane interactions. Force Bias Monte Carlo (FBMC) simulations predict the deposition of the particles onto the membranes, where both hydrodynamics and the change in particle potentials determine the probability that a proposed move is accepted. The particle volume fraction is used to determine an apparent local viscosity observed by the continuum flow. As particles migrate, the crossflow velocity field evolves in quasi-steady fashion with each time instance appearing fully developed in the downstream direction. Particles subject to combined hydrodynamic and electric effects (electrostatic double layer repulsion and van der Waals attraction) reach a more stable steady-state as compared to systems with only hydrodynamic effects considered. As expected, at higher crossflow Reynolds numbers more particles remain in the crossflow free stream.
An automated Monte-Carlo based method for the calculation of cascade summing factors
Jackson, M. J.; Britton, R.; Davies, A. V.; McLarty, J. L.; Goodwin, M.
2016-10-01
A versatile method has been developed to calculate cascade summing factors for use in quantitative gamma-spectrometry analysis procedures. The proposed method is based solely on Evaluated Nuclear Structure Data File (ENSDF) nuclear data, an X-ray energy library, and accurate efficiency characterisations for single detector counting geometries. The algorithm, which accounts for γ-γ, γ-X, γ-511 and γ-e- coincidences, can be applied to any design of gamma spectrometer and can be expanded to incorporate any number of nuclides. Efficiency characterisations can be derived from measured or mathematically modelled functions, and can accommodate both point and volumetric source types. The calculated results are shown to be consistent with an industry standard gamma-spectrometry software package. Additional benefits including calculation of cascade summing factors for all gamma and X-ray emissions, not just the major emission lines, are also highlighted.
Visvikis, D. [INSERM U650, LaTIM, University Hospital Medical School, F-29609 Brest (France)]. E-mail: Visvikis.Dimitris@univ-brest.fr; Lefevre, T. [INSERM U650, LaTIM, University Hospital Medical School, F-29609 Brest (France); Lamare, F. [INSERM U650, LaTIM, University Hospital Medical School, F-29609 Brest (France); Kontaxakis, G. [ETSI Telecomunicacion Universidad Politecnica de Madrid, Ciudad Universitaria, s/n 28040, Madrid (Spain); Santos, A. [ETSI Telecomunicacion Universidad Politecnica de Madrid, Ciudad Universitaria, s/n 28040, Madrid (Spain); Darambara, D. [Department of Physics, School of Engineering and Physical Sciences, University of Surrey, Guildford (United Kingdom)
2006-12-20
The majority of present positron emission tomography (PET) animal systems are based on the coupling of high-density scintillators and light detectors. A disadvantage of these detector configurations is the compromise between image resolution, sensitivity and energy resolution. In addition, current combined imaging devices are based on simply placing different apparatus back-to-back and in axial alignment, without any significant level of software or hardware integration. The use of semiconductor CdZnTe (CZT) detectors is a promising alternative to scintillators for gamma-ray imaging systems. At the same time CZT detectors have the potential properties necessary for the construction of a truly integrated imaging device (PET/SPECT/CT). The aim of this study was to assess the performance of different small animal PET scanner architectures based on CZT pixellated detectors and compare their performance with that of state-of-the-art existing PET animal scanners. Different scanner architectures were modelled using GATE (Geant4 Application for Tomographic Emission). Particular scanner design characteristics included an overall cylindrical scanner format of 8 and 24 cm in axial and transaxial field of view, respectively, and a temporal coincidence window of 8 ns. Different individual detector modules were investigated, considering pixel pitches down to 0.625 mm and detector thicknesses from 1 to 5 mm. Modified NEMA NU2-2001 protocols were used in order to simulate performance based on mouse, rat and monkey imaging conditions. These protocols allowed us to directly compare the performance of the proposed geometries with the latest generation of current small animal systems. The results attained demonstrate the potential for higher NECR with CZT-based scanners in comparison to scintillator-based animal systems.
Fast online Monte Carlo-based IMRT planning for the MRI linear accelerator
Bol, G.H.; Hissoiny, S.; Lagendijk, J. J. W.; Raaymakers, B. W.
2012-01-01
The MRI accelerator, a combination of a 6 MV linear accelerator with a 1.5 T MRI, facilitates continuous patient anatomy updates regarding translations, rotations and deformations of targets and organs at risk. Accounting for these demands high speed, online intensity-modulated radiotherapy (IMRT) r
Monte-Carlo-based studies of a polarized positron source for International Linear Collider (ILC)
Dollan, Ralph; Laihem, Karim; Schälicke, Andreas
2006-04-01
The full exploitation of the physics potential of an International Linear Collider (ILC) requires the development of a polarized positron beam. New concepts of polarized positron sources are based on the development of circularly polarized photon sources. The polarized photons create electron-positron pairs in a thin target and transfer their polarization state to the outgoing leptons. To achieve a high level of positron polarization the understanding of the production mechanisms in the target is crucial. Therefore, a general framework for the simulation of polarized processes with GEANT4 is under development. In this contribution the current status of the project and its application to a study of the positron production process for the ILC is presented.
Monte Carlo based studies of a polarized positron source for international linear collider (ILC).
Schälicke, A.; Dollan, R.; Laihem, K.
2006-01-01
The full exploitation of the physics potential of an International Linear Collider (ILC) requires the development of a polarized positron beam. New concepts of polarized positron sources are based on the development of circularly polarized photon sources. The polarized photons create electron-positron pairs in a thin target and transfer their polarization state to the outgoing leptons. To achieve a high level of positron polarization the understanding of the production mechanisms in the targe...
Quantum Monte Carlo simulation
Wang, Yazhen
2011-01-01
Contemporary scientific studies often rely on the understanding of complex quantum systems via computer simulation. This paper initiates the statistical study of quantum simulation and proposes a Monte Carlo method for estimating analytically intractable quantities. We derive the bias and variance for the proposed Monte Carlo quantum simulation estimator and establish the asymptotic theory for the estimator. The theory is used to design a computational scheme for minimizing the mean square er...
Geometrical and Monte Carlo projectors in 3D PET reconstruction
Aguiar, Pablo; Rafecas López, Magdalena; Ortuno, Juan Enrique; Kontaxakis, George; Santos, Andrés; Pavía, Javier; Ros, Domènec
2010-01-01
Purpose: In the present work, the authors compare geometrical and Monte Carlo projectors in detail. The geometrical projectors considered were the conventional geometrical Siddon ray-tracer (S-RT) and the orthogonal distance-based ray-tracer (OD-RT), based on computing the orthogonal distance from the center of the image voxel to the line of response. A comparison of these geometrical projectors was performed using different point spread function (PSF) models. The Monte Carlo-based method under c...
Simulating currency substitution bias
M. Boon (Martin); C.J.M. Kool (Clemens); C.G. de Vries (Casper)
1989-01-01
The sign and size of estimates of the elasticity of currency substitution critically depend on the definition of the opportunity costs of holding money. We investigate possible biases by means of Monte Carlo experiments, as sufficient real data are not available.
Grant B. Morgan
2015-02-01
Bi-factor confirmatory factor models have been influential in research on cognitive abilities because they often better fit the data than correlated factors and higher-order models. They also instantiate a perspective that differs from that offered by other models. Motivated by previous work that hypothesized an inherent statistical bias of fit indices favoring the bi-factor model, we compared the fit of correlated factors, higher-order, and bi-factor models via Monte Carlo methods. When data were sampled from a true bi-factor structure, each of the approximate fit indices was more likely than not to identify the bi-factor solution as the best fitting. When samples were selected from a true multiple correlated factors structure, approximate fit indices were more likely overall to identify the correlated factors solution as the best fitting. In contrast, when samples were generated from a true higher-order structure, approximate fit indices tended to identify the bi-factor solution as best fitting. There was extensive overlap of fit values across the models regardless of true structure. Although one model may fit a given dataset best relative to the other models, each of the models tended to fit the data well in absolute terms. Given this variability, models must also be judged on substantive and conceptual grounds.
Ashley Rankine
2015-01-01
Step-and-shoot (S&S) intensity-modulated radiotherapy (IMRT) using the XiO treatment planning system (TPS) has been routinely used for patients receiving postprostatectomy radiotherapy (PPRT). After installing Monaco, a pilot study was undertaken with five patients to compare the XiO and Monaco (V2.03) TPSs for PPRT with respect to plan quality for S&S as well as volumetric-modulated arc therapy (VMAT). Monaco S&S showed higher mean clinical target volume (CTV) coverage (99.85%) than both XiO S&S (97.98%, P = 0.04) and Monaco VMAT (99.44%, P = 0.02). Rectal V60Gy volumes were lower for Monaco S&S compared to XiO (46.36% versus 58.06%, P = 0.001) and Monaco VMAT (46.36% versus 54.66%, P = 0.02). Rectal V60Gy volume was lowest for Monaco S&S and superior to XiO (mean 19.89% versus 31.25%, P = 0.02). Rectal V60Gy volumes were lower for Monaco VMAT compared to XiO (21.09% versus 31.25%, P = 0.02). Other organ-at-risk (OAR) parameters were comparable between TPSs. Compared to XiO S&S, Monaco S&S plans had fewer segments (78.6 versus 116.8 segments, P = 0.02), lower total monitor units (MU) (677.6 MU versus 770.7 MU, P = 0.01), and shorter beam-on times (5.7 min versus 7.6 min, P = 0.03). This pilot study suggests that Monaco S&S improves CTV coverage, OAR doses, and planning and treatment times for PPRT.
Nievaart, V.A.; Legrady, D.; Moss, R.L.; Kloosterman, J.L.; Van der Hagen, T.H.; Van Dam, H.
2007-01-01
This paper deals with the application of the adjoint transport theory in order to optimize Monte Carlo based radiotherapy treatment planning. The technique is applied to Boron Neutron Capture Therapy where most often mixed beams of neutrons and gammas are involved. In normal forward Monte Carlo simu
Jensen, Henning Tarp; Robinson, Sherman; Tarp, Finn
. For the 15 sample countries, the results indicate that the agricultural price incentive bias, which was generally perceived to exist during the 1980s, was largely eliminated during the 1990s. The results also demonstrate that general equilibrium effects and country-specific characteristics - including trade...... shares and intersectoral linkages - are crucial for determining the sign and magnitude of trade policy bias. The GE-ERP measure is therefore uniquely suited to capture the full impact of trade policies on agricultural price incentives. A Monte Carlo procedure confirms that the results are robust...
Sendhil Mullainathan; Andrei Shleifer
2002-01-01
There are two different types of media bias. One bias, which we refer to as ideology, reflects a news outlet's desire to affect reader opinions in a particular direction. The second bias, which we refer to as spin, reflects the outlet's attempt to simply create a memorable story. We examine competition among media outlets in the presence of these biases. Whereas competition can eliminate the effect of ideological bias, it actually exaggerates the incentive to spin stories.
Hewstone, Miles; Rubin, Mark; Willis, Hazel
2002-01-01
This chapter reviews the extensive literature on bias in favor of in-groups at the expense of out-groups. We focus on five issues and identify areas for future research: (a) measurement and conceptual issues (especially in-group favoritism vs. out-group derogation, and explicit vs. implicit measures of bias); (b) modern theories of bias highlighting motivational explanations (social identity, optimal distinctiveness, uncertainty reduction, social dominance, terror management); (c) key moderators of bias, especially those that exacerbate bias (identification, group size, status and power, threat, positive-negative asymmetry, personality and individual differences); (d) reduction of bias (individual vs. intergroup approaches, especially models of social categorization); and (e) the link between intergroup bias and more corrosive forms of social hostility.
Evaluation of the material assignment method used by a Monte Carlo treatment planning system.
Isambert, A; Brualla, L; Lefkopoulos, D
2009-12-01
An evaluation of the conversion process from Hounsfield units (HU) to material composition in computerised tomography (CT) images, employed by the Monte Carlo based treatment planning system ISOgray (DOSIsoft), is presented. A boundary in the HU for the material conversion between "air" and "lung" materials was determined based on a study using 22 patients. The dosimetric consequence of the new boundary was quantitatively evaluated for a lung patient plan.
Monte Carlo uncertainty analyses for integral beryllium experiments
Fischer, U; Tsige-Tamirat, H
2000-01-01
The novel Monte Carlo technique for calculating point detector sensitivities has been applied to two representative beryllium transmission experiments, with the objective to investigate the sensitivity of important responses such as the neutron multiplication and to assess the related uncertainties due to the underlying cross-section data uncertainties. As an important result, it has been revealed that the neutron multiplication power of beryllium can be predicted with good accuracy using state-of-the-art nuclear data evaluations. Severe discrepancies do exist for the spectral neutron flux distribution, which would translate into significant uncertainties in the calculated neutron spectra and in the nuclear blanket performance in blanket design calculations. In view of this, it is suggested to re-analyse the secondary energy and angle distribution data of beryllium by means of Monte Carlo-based sensitivity and uncertainty calculations. Related code development work is underway.
Visibility assessment: Monte Carlo characterization of temporal variability.
Laulainen, N.; Shannon, J.; Trexler, E. C., Jr.
1997-12-12
Current techniques for assessing the benefits of certain anthropogenic emission reductions are largely influenced by limitations in emissions data and atmospheric modeling capability and by the highly variant nature of meteorology. These data and modeling limitations are likely to continue for the foreseeable future, during which time important strategic decisions need to be made. Statistical atmospheric quality data and apportionment techniques are used in Monte-Carlo models to offset serious shortfalls in emissions, entrainment, topography, statistical meteorology data and atmospheric modeling. This paper describes the evolution of Department of Energy (DOE) Monte-Carlo based assessment models and the development of statistical inputs. A companion paper describes techniques which are used to develop the apportionment factors used in the assessment models.
Monte Carlo techniques for analyzing deep penetration problems
Cramer, S.N.; Gonnord, J.; Hendricks, J.S.
1985-01-01
A review of current methods and difficulties in Monte Carlo deep-penetration calculations is presented. Statistical uncertainty is discussed, and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing is reviewed. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multi-group Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications. 29 refs.
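To make the splitting and Russian roulette moves mentioned above concrete, here is a toy weight-window sketch in Python; the window bounds and survival weight are illustrative assumptions, not values from the review.

```python
import numpy as np

rng = np.random.default_rng(1)

W_LOW, W_HIGH, W_SURVIVE = 0.25, 4.0, 1.0  # illustrative weight-window bounds

def weight_window(particles):
    """Apply splitting (heavy particles) and Russian roulette (light ones).

    particles: list of statistical weights; returns the adjusted population.
    """
    out = []
    for w in particles:
        if w > W_HIGH:                      # split into n copies of weight w/n
            n = int(np.ceil(w / W_HIGH))
            out.extend([w / n] * n)
        elif w < W_LOW:                     # roulette: survive with p = w/W_SURVIVE
            if rng.uniform() < w / W_SURVIVE:
                out.append(W_SURVIVE)
        else:
            out.append(w)
    return out

pop = list(rng.lognormal(0.0, 2.0, 1000))
new_pop = weight_window(pop)
print(f"total weight before {sum(pop):.1f}, after {sum(new_pop):.1f} (equal on average)")
```

The key invariant, preserved in both branches, is that the expected total statistical weight is unchanged, which is what keeps the tally unbiased while the variance is reduced.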
Iba, Yukito
2000-01-01
"Extended Ensemble Monte Carlo" is a generic term that indicates a set of algorithms which are now popular in a variety of fields in physics and statistical information processing. Exchange Monte Carlo (Metropolis-Coupled Chain, Parallel Tempering), Simulated Tempering (Expanded Ensemble Monte Carlo), and Multicanonical Monte Carlo (Adaptive Umbrella Sampling) are typical members of this family. Here we give a cross-disciplinary survey of these algorithms with special emphasis on the great f...
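A minimal sketch of the Exchange Monte Carlo (parallel tempering) member of this family, on a one-dimensional double well; the temperature ladder, proposal width and sweep count are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def energy(x):
    return (x**2 - 1.0) ** 2 / 0.05   # double well with a high barrier

betas = np.array([0.2, 0.5, 1.0, 2.0, 5.0])   # inverse-temperature ladder
x = np.zeros(betas.size)                       # one walker per temperature
E = energy(x)

for sweep in range(20000):
    # Metropolis update within each replica
    for i, b in enumerate(betas):
        xp = x[i] + rng.normal(0.0, 0.5)
        Ep = energy(xp)
        if np.log(rng.uniform()) < -b * (Ep - E[i]):
            x[i], E[i] = xp, Ep
    # Exchange move between neighbouring temperatures, accepted with
    # probability min(1, exp[(beta_j - beta_i)(E_j - E_i)])
    i = rng.integers(0, betas.size - 1)
    if np.log(rng.uniform()) < (betas[i + 1] - betas[i]) * (E[i + 1] - E[i]):
        x[i], x[i + 1] = x[i + 1], x[i]
        E[i], E[i + 1] = E[i + 1], E[i]

print(f"cold replica ended at x = {x[-1]:+.2f} (exchanges let it hop between wells)")
```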
Selvam, T Palani; Vandana, S; Bakshi, A K; Babu, D A R
2016-02-01
Spencer-Attix (SA) and Bragg-Gray (BG) mass-collision-stopping-power ratios of tissue-to-air are calculated using a modified version of the EGSnrc-based SPRRZnrc user-code for the International Organization for Standardization (ISO) beta sources (147)Pm, (85)Kr, (90)Sr/(90)Y and (106)Ru/(106)Rh. The ratios are calculated at 5 and 70 µm depths along the central axis of the unit-density ICRU 4-element tissue phantom as a function of air-cavity lengths of the extrapolation chamber l = 0.025-0.25 cm. The study shows that the BG values are independent of l and agree well with the ISO-reported values for the above sources. The overall variation in the SA values is ∼0.3% for all the investigated sources when l is varied from 0.025 to 0.25 cm. As the beta energy increases, the SA stopping-power ratio for a given cavity length decreases. For example, SA values of (147)Pm are higher by ∼2% when compared with the corresponding values of the (106)Ru/(106)Rh source. SA stopping-power ratios are higher than the BG stopping-power ratios, and the degree of variation depends on the type of source and the value of l. For example, the difference is up to 0.7% at l = 0.025 cm for the (90)Sr/(90)Y source.
Quantum Monte Carlo with variable spins.
Melton, Cody A; Bennett, M Chandler; Mitas, Lubos
2016-06-28
We investigate the inclusion of variable spins in electronic structure quantum Monte Carlo, with a focus on diffusion Monte Carlo with Hamiltonians that include spin-orbit interactions. Following our previous introduction of fixed-phase spin-orbit diffusion Monte Carlo, we thoroughly discuss the details of the method and elaborate upon its technicalities. We present a proof for an upper-bound property for complex nonlocal operators, which allows for the implementation of T-moves to ensure the variational property. We discuss the time step biases associated with our particular choice of spin representation. Applications of the method are also presented for atomic and molecular systems. We calculate the binding energies and geometry of the PbH and Sn2 molecules, as well as the electron affinities of the 6p row elements in close agreement with experiments.
Quantum Monte Carlo with Variable Spins
Melton, Cody A; Mitas, Lubos
2016-01-01
We investigate the inclusion of variable spins in electronic structure quantum Monte Carlo, with a focus on diffusion Monte Carlo with Hamiltonians that include spin-orbit interactions. Following our previous introduction of fixed-phase spin-orbit diffusion Monte Carlo (FPSODMC), we thoroughly discuss the details of the method and elaborate upon its technicalities. We present a proof for an upper-bound property for complex nonlocal operators, which allows for the implementation of T-moves to ensure the variational property. We discuss the time step biases associated with our particular choice of spin representation. Applications of the method are also presented for atomic and molecular systems. We calculate the binding energies and geometry of the PbH and Sn2 molecules, as well as the electron affinities of the 6p row elements in close agreement with experiments.
Bakshi, A K; Chatterjee, S; Palani Selvam, T; Dhabekar, B S
2010-07-01
In the present study, the energy dependence of the response of some popular thermoluminescent dosemeters (TLDs), such as LiF:Mg,Ti, LiF:Mg,Cu,P and CaSO(4):Dy, has been investigated for synchrotron radiation in the energy range of 10-34 keV. The study utilised experimental, Monte Carlo and analytical methods. The Monte Carlo calculations were based on the EGSnrc and FLUKA codes. The calculated energy responses of all the TLDs using the EGSnrc and FLUKA codes show excellent agreement with each other. The analytically calculated response shows good agreement with the Monte Carlo-calculated response in the low-energy region. In the case of CaSO(4):Dy, the Monte Carlo-calculated energy response is smaller by a factor of 3 at all energies in comparison with the experimental response when polytetrafluoroethylene (PTFE) (75% by wt) is included in the Monte Carlo calculations. When PTFE is ignored in the Monte Carlo calculations, the difference between the calculated and experimental responses decreases (both responses are comparable >25 keV). For the LiF-based TLDs, the Monte Carlo-based response shows reasonable agreement with the experimental response.
Metrics for Diagnosing Undersampling in Monte Carlo Tally Estimates
Perfetti, Christopher M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Div.; Rearden, Bradley T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Div.
2015-01-01
This study explored the potential of using Markov chain convergence diagnostics to predict the prevalence and magnitude of biases due to undersampling in Monte Carlo eigenvalue and flux tally estimates. Five metrics were applied to two models of pressurized water reactor fuel assemblies and their potential for identifying undersampling biases was evaluated by comparing the calculated test metrics with known biases in the tallies. Three of the five undersampling metrics showed the potential to accurately predict the behavior of undersampling biases in the responses examined in this study.
Brown, F.B.; Sutton, T.M.
1996-02-01
This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
Bardenet, R.
2012-01-01
Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that make it possible to compute these integrals numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which rejection sampling, importance sampling and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretic...
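As a small worked example of the rejection sampling method listed above, this sketch draws from an unnormalised bimodal density using a uniform proposal and a constant envelope bound; the target and the bound are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

def target_pdf(x):
    """Unnormalised bimodal target density on [-4, 4]."""
    return np.exp(-0.5 * (x - 1.5) ** 2) + 0.5 * np.exp(-0.5 * (x + 1.5) ** 2)

BOUND = 1.5          # any constant >= max of target_pdf on the interval

samples = []
while len(samples) < 10_000:
    x = rng.uniform(-4.0, 4.0)               # propose uniformly on the support
    if rng.uniform(0.0, BOUND) < target_pdf(x):
        samples.append(x)                    # accept with prob target/envelope

print(f"mean of accepted samples: {np.mean(samples):+.3f}")
```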
Dunn, William L
2012-01-01
Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle proble
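Since the excerpt ends on the famous Buffon's needle problem, a minimal sketch of that estimator follows; the needle length, line spacing and sample count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

def buffon_pi(n, needle=1.0, spacing=1.0):
    """Estimate pi by dropping n needles on a plane ruled with parallel lines.

    A needle of length L <= d crosses a line with probability 2L / (pi d),
    so pi ~= 2 L n / (d * crossings).
    """
    centre = rng.uniform(0.0, spacing / 2.0, n)   # distance to nearest line
    theta = rng.uniform(0.0, np.pi / 2.0, n)      # acute angle to the lines
    crossings = np.sum(centre <= (needle / 2.0) * np.sin(theta))
    return 2.0 * needle * n / (spacing * crossings)

print(f"pi ~= {buffon_pi(1_000_000):.4f}")
```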
Development of a space radiation Monte Carlo computer simulation based on the FLUKA and ROOT codes
Pinsky, L; Ferrari, A; Sala, P; Carminati, F; Brun, R
2001-01-01
This NASA funded project is proceeding to develop a Monte Carlo-based computer simulation of the radiation environment in space. With actual funding only initially in place at the end of May 2000, the study is still in the early stage of development. The general tasks have been identified and personnel have been selected. The code to be assembled will be based upon two major existing software packages. The radiation transport simulation will be accomplished by updating the FLUKA Monte Carlo program, and the user interface will employ the ROOT software being developed at CERN. The end-product will be a Monte Carlo-based code which will complement the existing analytic codes such as BRYNTRN/HZETRN presently used by NASA to evaluate the effects of radiation shielding in space. The planned code will possess the ability to evaluate the radiation environment for spacecraft and habitats in Earth orbit, in interplanetary space, on the lunar surface, or on a planetary surface such as Mars. Furthermore, it will be usef...
Monte Carlo scatter correction for SPECT
Liu, Zemei
The goal of this dissertation is to present a quantitatively accurate and computationally fast scatter correction method that is robust and easily accessible for routine applications in SPECT imaging. A Monte Carlo based scatter estimation method is investigated and developed further. The Monte Carlo simulation program SIMIND (Simulating Medical Imaging Nuclear Detectors) was specifically developed to simulate clinical SPECT systems. The SIMIND scatter estimation (SSE) method was developed further using a multithreading technique to distribute the scatter estimation task across multiple threads running concurrently on multi-core CPUs to accelerate the scatter estimation process. An analytical collimator model, which introduces less noise, was used during SSE. The research includes the addition to SIMIND of charge transport modeling in cadmium zinc telluride (CZT) detectors. Phenomena associated with radiation-induced charge transport, including charge trapping, charge diffusion, charge sharing between neighboring detector pixels, as well as uncertainties in the detection process, are addressed. Experimental measurements and simulation studies were designed for scintillation crystal based SPECT and CZT based SPECT systems to verify and evaluate the expanded SSE method. Jaszczak Deluxe and Anthropomorphic Torso Phantoms (Data Spectrum Corporation, Hillsborough, NC, USA) were used for experimental measurements, and digital versions of the same phantoms were employed during simulations to mimic experimental acquisitions. This study design enabled easy comparison of experimental and simulated data. The results have consistently shown that the SSE method performed similarly to or better than the triple energy window (TEW) and effective scatter source estimation (ESSE) methods for experiments on all the clinical SPECT systems. The SSE method is proven to be a viable method for scatter estimation for routine clinical use.
Improved version of the PHOBOS Glauber Monte Carlo
Loizides, C; Steinberg, P
2014-01-01
Glauber models are used to calculate geometric quantities in the initial state of heavy ion collisions, such as impact parameter, number of participating nucleons and initial eccentricity. Experimental heavy-ion collaborations, in particular at RHIC and the LHC, use Glauber model calculations for various geometric observables. In this document, we describe the assumptions inherent to the approach, and provide an updated implementation (v2) of the Monte Carlo based Glauber model calculation, which originally was used by the PHOBOS collaboration. The main improvements w.r.t. the earlier version (arXiv:0805.4411) are the inclusion of tritium, Helium-3, and Uranium, as well as the treatment of deformed nuclei and Glauber-Gribov fluctuations of the proton in p+A collisions. A users' guide (updated to reflect changes in v2) is provided for running various calculations.
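For readers unfamiliar with the mechanics, a heavily simplified sketch of a Monte Carlo Glauber calculation of the number of participants follows; it uses a uniform hard-sphere nucleus rather than the Woods-Saxon profile of the actual PHOBOS code, and the radius and cross-section values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

SIGMA_NN = 4.2      # assumed inelastic NN cross-section in fm^2 (~42 mb)
R, A = 6.62, 208    # hard-sphere stand-in for a Pb nucleus (fm, nucleons)

def sample_nucleus(b_shift):
    """Uniform hard-sphere nucleon positions, shifted by half the impact
    parameter; only the transverse coordinates matter for the collision test."""
    r = R * rng.uniform(0, 1, A) ** (1 / 3)
    cost = rng.uniform(-1, 1, A)
    phi = rng.uniform(0, 2 * np.pi, A)
    sint = np.sqrt(1 - cost**2)
    return np.column_stack([r * sint * np.cos(phi) + b_shift,
                            r * sint * np.sin(phi)])

def n_participants(b):
    """Count nucleons that undergo at least one inelastic collision."""
    A_xy, B_xy = sample_nucleus(-b / 2), sample_nucleus(+b / 2)
    d_max2 = SIGMA_NN / np.pi                       # collide if d^2 < sigma/pi
    d2 = ((A_xy[:, None, :] - B_xy[None, :, :]) ** 2).sum(axis=2)
    hit = d2 < d_max2
    return hit.any(axis=1).sum() + hit.any(axis=0).sum()

print("Npart at b = 6 fm:", np.mean([n_participants(6.0) for _ in range(20)]))
```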
Iotti, Rita C.; Rossi, Fausto
2013-07-01
The operation of state-of-the-art optoelectronic quantum devices may be significantly affected by the presence of a nonequilibrium quasiparticle population to which the carrier subsystem is unavoidably coupled. This situation is particularly evident in new-generation semiconductor-heterostructure-based quantum emitters, operating both in the mid-infrared as well as in the terahertz (THz) region of the electromagnetic spectrum. In this paper, we present a Monte Carlo-based global kinetic approach, suitable for the investigation of a combined carrier-phonon nonequilibrium dynamics in realistic devices, and discuss its application with a prototypical resonant-phonon THz emitting quantum cascade laser design.
MCOR - Monte Carlo depletion code for reference LWR calculations
Puente Espel, Federico, E-mail: fup104@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Tippayakul, Chanatip, E-mail: cut110@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Ivanov, Kostadin, E-mail: kni1@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Misu, Stefan, E-mail: Stefan.Misu@areva.com [AREVA, AREVA NP GmbH, Erlangen (Germany)
2011-04-15
Research highlights: > Introduction of a reference Monte Carlo based depletion code with extended capabilities. > Verification and validation results for MCOR. > Utilization of MCOR for benchmarking deterministic lattice physics (spectral) codes. - Abstract: The MCOR (MCnp-kORigen) code system is a Monte Carlo based depletion system for reference fuel assembly and core calculations. The MCOR code is designed as an interfacing code that provides depletion capability to the LANL Monte Carlo code by coupling two codes: MCNP5 with the AREVA NP depletion code, KORIGEN. The physical quality of both codes is unchanged. The MCOR code system has been maintained and continuously enhanced since it was initially developed and validated. The verification of the coupling was made by evaluating the MCOR code against similar sophisticated code systems like MONTEBURNS, OCTOPUS and TRIPOLI-PEPIN. After its validation, the MCOR code has been further improved with important features. The MCOR code presents several valuable capabilities such as: (a) a predictor-corrector depletion algorithm, (b) utilization of KORIGEN as the depletion module, (c) individual depletion calculation of each burnup zone (no burnup zone grouping is required, which is particularly important for the modeling of gadolinium rings), and (d) on-line burnup cross-section generation by the Monte Carlo calculation for 88 isotopes and usage of the KORIGEN libraries for PWR and BWR typical spectra for the remaining isotopes. Besides the just mentioned capabilities, the MCOR code newest enhancements focus on the possibility of executing the MCNP5 calculation in sequential or parallel mode, a user-friendly automatic re-start capability, a modification of the burnup step size evaluation, and a post-processor and test-matrix, just to name the most important. The article describes the capabilities of the MCOR code system; from its design and development to its latest improvements and further ameliorations. Additionally
Cramer, S.N.
1984-01-01
The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.
Monte Carlo Simulation of Partially Confined Flexible Polymers
Hermsen, G.F.; de Geeter, B.A.; van der Vegt, N.F.A.; Wessling, Matthias
2002-01-01
We have studied conformational properties of flexible polymers partially confined to narrow pores of different size using configurational biased Monte Carlo simulations under athermal conditions. The asphericity of the chain has been studied as a function of its center of mass position along the por
2014-01-01
Can raising awareness of racial bias subsequently reduce that bias? We address this question by exploiting the widespread media attention highlighting racial bias among professional basketball referees that occurred in May 2007 following the release of an academic study. Using new data, we confirm that racial bias persisted in the years after the study's original sample, but prior to the media coverage. Subsequent to the media coverage though, the bias completely disappeared. We examine poten...
Monte Carlo transition probabilities
Lucy, L. B.
2001-01-01
Transition probabilities governing the interaction of energy packets and matter are derived that allow Monte Carlo NLTE transfer codes to be constructed without simplifying the treatment of line formation. These probabilities are such that the Monte Carlo calculation asymptotically recovers the local emissivity of a gas in statistical equilibrium. Numerical experiments with one-point statistical equilibrium problems for Fe II and Hydrogen confirm this asymptotic behaviour. In addition, the re...
Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments
Pevey, Ronald E.
2005-09-15
Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes--the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL.
Einstein, Gnanatheepam; Udayakumar, Kanniyappan; Aruna, Prakasarao; Ganesan, Singaravelu
2017-03-01
Protein fluorescence has been widely used in diagnostic oncology for characterizing cellular metabolism. However, the intensity of the fluorescence emission is affected by the absorbers and scatterers in tissue, which may lead to errors in estimating the exact protein content of tissue. Extraction of intrinsic fluorescence from measured fluorescence has been achieved by different methods; among them, the Monte Carlo based method yields the highest accuracy. In this work, we have attempted to generate a lookup table via Monte Carlo simulation of the fluorescence emission of protein. Furthermore, we fitted the generated lookup table with an empirical relation. The empirical relation between measured and intrinsic fluorescence is validated using tissue phantom experiments. The proposed relation can be used for estimating the intrinsic fluorescence of protein in real-time diagnostic applications, thereby improving the clinical interpretation of fluorescence spectroscopic data.
A parallel systematic-Monte Carlo algorithm for exploring conformational space.
Perez-Riverol, Yasset; Vera, Roberto; Mazola, Yuliet; Musacchio, Alexis
2012-01-01
Computational algorithms for exploring the conformational space of small molecules constitute a complex and computationally demanding field in chemoinformatics. In this paper a hybrid algorithm to explore the conformational space of organic molecules is presented. This hybrid algorithm is based on a systematic search approach combined with a Monte Carlo based method, in order to obtain an ensemble of low-energy conformations simulating the flexibility of small chemical compounds. The Monte Carlo method uses the Metropolis criterion to accept or reject a conformation, with an in-house implementation of the MMFF94s force field to calculate the conformational energy. A parallel design of this algorithm, based on the message passing interface (MPI) paradigm, was implemented. The results showed a performance increase in terms of speed and efficiency.
Accuracy of Monte Carlo simulations compared to in-vivo MDCT dosimetry
Bostani, Maryam, E-mail: mbostani@mednet.ucla.edu; McMillan, Kyle; Cagnon, Chris H.; McNitt-Gray, Michael F. [Departments of Biomedical Physics and Radiology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90024 (United States); Mueller, Jonathon W. [United States Air Force, Keesler Air Force Base, Biloxi, Mississippi 39534 (United States); Cody, Dianna D. [University of Texas M.D. Anderson Cancer Center, Houston, Texas 77030 (United States); DeMarco, John J. [Departments of Biomedical Physics and Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90024 (United States)
2015-02-15
Purpose: The purpose of this study was to assess the accuracy of a Monte Carlo simulation-based method for estimating radiation dose from multidetector computed tomography (MDCT) by comparing simulated doses in ten patients to in-vivo dose measurements. Methods: MD Anderson Cancer Center Institutional Review Board approved the acquisition of in-vivo rectal dose measurements in a pilot study of ten patients undergoing virtual colonoscopy. The dose measurements were obtained by affixing TLD capsules to the inner lumen of rectal catheters. Voxelized patient models were generated from the MDCT images of the ten patients, and the dose to the TLD for all exposures was estimated using Monte Carlo based simulations. The Monte Carlo simulation results were compared to the in-vivo dose measurements to determine accuracy. Results: The calculated mean percent difference between TLD measurements and Monte Carlo simulations was −4.9% with standard deviation of 8.7% and a range of −22.7% to 5.7%. Conclusions: The results of this study demonstrate very good agreement between simulated and measured doses in-vivo. Taken together with previous validation efforts, this work demonstrates that the Monte Carlo simulation methods can provide accurate estimates of radiation dose in patients undergoing CT examinations.
Using Supervised Learning to Improve Monte Carlo Integral Estimation
Tracey, Brendan; Alonso, Juan J
2011-01-01
Monte Carlo (MC) techniques are often used to estimate integrals of a multivariate function using randomly generated samples of the function. In light of the increasing interest in uncertainty quantification and robust design applications in aerospace engineering, the calculation of expected values of such functions (e.g. performance measures) becomes important. However, MC techniques often suffer from high variance and slow convergence as the number of samples increases. In this paper we present Stacked Monte Carlo (StackMC), a new method for post-processing an existing set of MC samples to improve the associated integral estimate. StackMC is based on the supervised learning techniques of fitting functions and cross validation. It should reduce the variance of any type of Monte Carlo integral estimate (simple sampling, importance sampling, quasi-Monte Carlo, MCMC, etc.) without adding bias. We report on an extensive set of experiments confirming that the StackMC estimate of an integral is more accurate than ...
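The following sketch illustrates the flavour of the StackMC idea (fit a surrogate on part of the samples, integrate it analytically, and correct with held-out residuals); it is a simplified stand-in, not the authors' algorithm, and the integrand, polynomial degree and fold count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)

def f(x):
    return np.sin(3 * x) * np.exp(x)      # integrand on [0, 1]; truth ~ 1.1457

x = rng.uniform(0, 1, 200)
fx = f(x)

# Plain Monte Carlo estimate of the integral over [0, 1]
plain = fx.mean()

def poly_integral(coeffs):
    """Integral over [0,1] of sum_k c_k x^k is sum_k c_k / (k + 1)."""
    return sum(c / (k + 1) for k, c in enumerate(coeffs))

# Fit a polynomial surrogate g on one fold, integrate g analytically, and
# correct with the residuals f - g evaluated on the held-out fold.
folds = np.array_split(rng.permutation(200), 2)
estimates = []
for hold, fit in (folds, folds[::-1]):
    coeffs = np.polynomial.polynomial.polyfit(x[fit], fx[fit], deg=4)
    g_hold = np.polynomial.polynomial.polyval(x[hold], coeffs)
    estimates.append(poly_integral(coeffs) + (fx[hold] - g_hold).mean())

print(f"plain MC: {plain:.4f}   surrogate-corrected: {np.mean(estimates):.4f}")
```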
Accelerating Monte Carlo Renderers by Ray Histogram Fusion
Mauricio Delbracio
2015-03-01
This paper details the recently introduced Ray Histogram Fusion (RHF) filter for accelerating Monte Carlo renderers [M. Delbracio et al., Boosting Monte Carlo Rendering by Ray Histogram Fusion, ACM Transactions on Graphics, 33 (2014)]. In this filter, each pixel in the image is characterized by the colors of the rays that reach its surface. Pixels are compared using a statistical distance on the associated ray color distributions. Based on this distance, the filter decides whether two pixels can share their rays or not. The RHF filter is consistent: as the number of samples increases, more evidence is required to average two pixels. The algorithm provides a significant gain in PSNR, or equivalently accelerates the rendering process by using many fewer Monte Carlo samples without observable bias. Since the RHF filter depends only on the Monte Carlo samples' color values, it can be naturally combined with all rendering effects.
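A minimal sketch of the histogram-comparison step at the heart of the filter; the chi-square-style distance, single-channel histograms and threshold below are simplifications of the distance actually used in the paper.

```python
import numpy as np

def histogram_distance(h1, h2):
    """Symmetric chi-square-style distance between two per-pixel ray
    histograms, used to decide whether two pixels may share their samples.
    h1, h2: arrays of bin counts over ray colour values (one pixel each)."""
    num = (h1 - h2) ** 2
    den = h1 + h2
    mask = den > 0                      # ignore bins that are empty in both
    return np.sum(num[mask] / den[mask])

# Two pixels with similar ray-colour statistics share rays; a third does not.
pixel_a = np.array([12, 30, 25, 8, 1])
pixel_b = np.array([10, 28, 27, 10, 2])
pixel_c = np.array([1, 3, 10, 30, 33])
THRESHOLD = 5.0                         # illustrative acceptance threshold
for name, h in (("a-b", pixel_b), ("a-c", pixel_c)):
    d = histogram_distance(pixel_a, h)
    print(f"{name}: distance {d:.2f} ->",
          "share rays" if d < THRESHOLD else "keep separate")
```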
Hrivnacova, I; Berejnov, V V; Brun, R; Carminati, F; Fassò, A; Futo, E; Gheata, A; Caballero, I G; Morsch, Andreas
2003-01-01
The concept of Virtual Monte Carlo (VMC) has been developed by the ALICE Software Project to allow different Monte Carlo simulation programs to run without changing the user code, such as the geometry definition, the detector response simulation or input and output formats. Recently, the VMC classes have been integrated into the ROOT framework, and the other relevant packages have been separated from the AliRoot framework and can be used individually by any other HEP project. The general concept of the VMC and its set of base classes provided in ROOT will be presented. Existing implementations for Geant3, Geant4 and FLUKA and simple examples of usage will be described.
Doronin, Alexander; Rushmeier, Holly E.; Meglinski, Igor; Bykov, Alexander V.
2016-03-01
We present a new Monte Carlo based approach for modelling the Bidirectional Scattering-Surface Reflectance Distribution Function (BSSRDF) for accurate rendering of human skin appearance. Variations in both skin tissue structure and the major chromophores are taken into account, corresponding to different ethnic and age groups. The computational solution utilizes HTML5, accelerated by graphics processing units (GPUs), and is therefore convenient for practical use on most modern computer-based devices and operating systems. Simulated human skin reflectance spectra, corresponding skin colours and examples of 3D face rendering are presented and compared with the results of phantom studies.
BIASED BEARINGS-ONLY PARAMETER ESTIMATION FOR BISTATIC SYSTEM
Xu Benlian; Wang Zhiquan
2007-01-01
According to the biased angles provided by the bistatic sensors, the necessary condition of observability and the Cramer-Rao lower bounds for the bistatic system are derived and analyzed, respectively. Additionally, a dual Kalman filter method is presented with the purpose of eliminating the effect of the biased angles on the state variable estimation. Finally, Monte-Carlo simulations are conducted in the observable scenario. Simulation results show that the proposed theory holds true, and that the dual Kalman filter method can estimate the state variables and biased angles simultaneously. Furthermore, the estimated results achieve their Cramer-Rao lower bounds.
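To illustrate the augmented-state idea behind estimating the state and a sensor bias simultaneously, here is a small Kalman filter sketch with two sensors, one of which carries an unknown constant bias; this linear scalar setup is a stand-in for the bearings-only bistatic problem, and all matrices and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two sensors observe the same scalar state x (a random walk); the second
# sensor carries an unknown constant bias beta. Augmenting the state vector
# to [x, beta] lets a single Kalman filter estimate both jointly.
F = np.eye(2)                               # beta is constant, x is a walk
H = np.array([[1.0, 0.0],                   # sensor 1: z1 = x + v1
              [1.0, 1.0]])                  # sensor 2: z2 = x + beta + v2
Q = np.diag([0.1, 0.0])                     # process noise
Rm = np.diag([0.5, 0.5])                    # measurement noise

x_true, beta_true = 0.0, 2.5
est, P = np.zeros(2), np.eye(2) * 10.0

for _ in range(300):
    x_true += rng.normal(0.0, np.sqrt(0.1))
    z = np.array([x_true, x_true + beta_true]) + rng.normal(0, np.sqrt(0.5), 2)
    est = F @ est                           # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + Rm                    # update
    K = P @ H.T @ np.linalg.inv(S)
    est = est + K @ (z - H @ est)
    P = (np.eye(2) - K @ H) @ P

print(f"estimated bias {est[1]:.2f} (true {beta_true}), sigma {np.sqrt(P[1,1]):.2f}")
```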
Modelling hadronic interactions in cosmic ray Monte Carlo generators
Pierog Tanguy
2015-01-01
Currently the uncertainty in the prediction of shower observables for different primary particles and energies is dominated by differences between hadronic interaction models. The LHC data on minimum bias measurements can be used to test Monte Carlo generators, and these new constraints will help to reduce the uncertainties in air shower predictions. In this article, after a short introduction on air showers and Monte Carlo generators, we will show the results of the comparison of the updated high-energy hadronic interaction models EPOS LHC and QGSJETII-04 with LHC data. Results for air shower simulations and their consequences on comparisons with air shower data will be discussed.
A New Bias Corrected Version of Heteroscedasticity Consistent Covariance Estimator
Munir Ahmed
2016-06-01
In the presence of heteroscedasticity, different available flavours of the heteroscedasticity consistent covariance estimator (HCCME) are used. However, the available literature shows that these estimators can be considerably biased in small samples. Cribari-Neto et al. (2000) introduce a bias adjustment mechanism and give the modified White estimator that becomes almost bias-free even in small samples. Extending these results, Cribari-Neto and Galvão (2003) present a similar bias adjustment mechanism that can be applied to a wide class of HCCMEs. In the present article, we follow the same mechanism as proposed by Cribari-Neto and Galvão to give a bias-corrected version of the HCCME, but we use an adaptive HCCME rather than the conventional HCCME. A Monte Carlo study is used to evaluate the performance of our proposed estimators.
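A small numpy sketch of the sandwich-form HCCME family discussed above; HC0 is White's original estimator and HC3 stands in for the leverage-adjusted, less biased small-sample variants (the adaptive estimator of the article is not reproduced). The data-generating process is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

# OLS with heteroscedastic errors: the error variance grows with the regressor.
n = 40
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])
beta = np.array([1.0, 0.5])
y = X @ beta + rng.normal(0, 0.2 + 0.1 * X[:, 1], n)

XtX_inv = np.linalg.inv(X.T @ X)
b_hat = XtX_inv @ X.T @ y
e = y - X @ b_hat                                      # OLS residuals
h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)            # leverage values

def hccme(weights):
    """Sandwich estimator (X'X)^-1 X' diag(weights) X (X'X)^-1."""
    meat = X.T @ (weights[:, None] * X)
    return XtX_inv @ meat @ XtX_inv

cov_hc0 = hccme(e**2)                   # White's original estimator
cov_hc3 = hccme(e**2 / (1 - h) ** 2)    # small-sample adjusted version
print("slope s.e.: HC0 =", np.sqrt(cov_hc0[1, 1]).round(4),
      " HC3 =", np.sqrt(cov_hc3[1, 1]).round(4))
```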
Chul Chung
2007-12-01
We estimate the CPI bias in Korea by employing the Engel's Law approach suggested by Hamilton (2001). This paper is the first attempt to estimate the bias using Korean panel data, the Korean Labor and Income Panel Study (KLIPS). Following Hamilton's model with a nonlinear specification correction, our estimation results show that the cumulative CPI bias over the sample period (2000-2005) was 0.7 percent annually. This CPI bias implies that about 21 percent of the inflation rate during the period can be attributed to the bias. In light of purchasing power parity, we provide an interpretation of the estimated bias.
Simundić, Ana-Maria
2013-01-01
By writing scientific articles we communicate science among colleagues and peers. By doing this, it is our responsibility to adhere to some basic principles like transparency and accuracy. Authors, journal editors and reviewers need to be concerned about the quality of the work submitted for publication and ensure that only studies which have been designed, conducted and reported in a transparent way, honestly and without any deviation from the truth get to be published. Any such trend or deviation from the truth in data collection, analysis, interpretation and publication is called bias. Bias in research can occur either intentionally or unintentionally. Bias causes false conclusions and is potentially misleading. Therefore, it is immoral and unethical to conduct biased research. Every scientist should thus be aware of all potential sources of bias and undertake all possible actions to reduce or minimize the deviation from the truth. This article describes some basic issues related to bias in research.
Germano, Fabrizio
2008-01-01
Within the spokes model of Chen and Riordan (2007) that allows for non-localized competition among arbitrary numbers of media outlets, we quantify the effect of concentration of ownership on quality and bias of media content. A main result shows that too few commercial outlets, or better, too few separate owners of commercial outlets can lead to substantial bias in equilibrium. Increasing the number of outlets (commercial and non-commercial) tends to bring down this bias; but the strongest ef...
Cecilia Maya
2004-12-01
The Monte Carlo method is applied to several cases of financial option valuation. The method yields a good approximation when its precision is compared with that of other numerical methods. The estimate produced by crude Monte Carlo can be made even more accurate by resorting to variance reduction methodologies, among which the antithetic variate and the control variate techniques are suggested. However, these methodologies require greater computational effort, so they must be evaluated in terms not only of their precision but also of their efficiency.
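As an illustration of the antithetic-variate idea, here is a minimal Python sketch for pricing a European call; geometric Brownian motion dynamics and the specific parameters are assumptions made for concreteness, not taken from the article:

```python
import numpy as np

def call_price_mc(S0, K, r, sigma, T, n=100_000, antithetic=True, seed=0):
    """Crude vs antithetic-variate Monte Carlo for a European call under
    geometric Brownian motion.  Antithetic pairing averages the payoff of
    each draw z with that of its mirror -z, reducing variance."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    disc = np.exp(-r * T)

    def payoff(zz):
        ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * zz)
        return disc * np.maximum(ST - K, 0.0)

    samples = 0.5 * (payoff(z) + payoff(-z)) if antithetic else payoff(z)
    return samples.mean(), samples.std(ddof=1) / np.sqrt(n)  # price, std error
```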
Monte Carlo and nonlinearities
Dauchet, Jérémi; Blanco, Stéphane; Caliot, Cyril; Charon, Julien; Coustet, Christophe; Hafi, Mouna El; Eymet, Vincent; Farges, Olivier; Forest, Vincent; Fournier, Richard; Galtier, Mathieu; Gautrais, Jacques; Khuong, Anaïs; Pelissier, Lionel; Piaud, Benjamin; Roger, Maxime; Terrée, Guillaume; Weitz, Sebastian
2016-01-01
The Monte Carlo method is widely used to numerically predict systems behaviour. However, its powerful incremental design assumes a strong premise which has severely limited application so far: the estimation process must combine linearly over dimensions. Here we show that this premise can be alleviated by projecting nonlinearities on a polynomial basis and increasing the configuration-space dimension. Considering phytoplankton growth in light-limited environments, radiative transfer in planetary atmospheres, electromagnetic scattering by particles and concentrated-solar-power-plant productions, we prove the real world usability of this advance on four test-cases that were so far regarded as impracticable by Monte Carlo approaches. We also illustrate an outstanding feature of our method when applied to sharp problems with interacting particles: handling rare events is now straightforward. Overall, our extension preserves the features that made the method popular: addressing nonlinearities does not compromise o...
Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a PowerPoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
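The π example mentioned in the outline can be reproduced in a few lines; this hit-or-miss Python sketch is a generic illustration, not the presentation's own code:

```python
import random

def estimate_pi(n=1_000_000, seed=42):
    """Hit-or-miss Monte Carlo: the fraction of uniform points in the unit
    square that land inside the quarter circle estimates pi/4; by the Law
    of Large Numbers the estimate converges as n grows."""
    rng = random.Random(seed)
    hits = sum(rng.random()**2 + rng.random()**2 <= 1.0 for _ in range(n))
    return 4.0 * hits / n
```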
Interpretation biases in paranoia.
Savulich, George; Freeman, Daniel; Shergill, Sukhi; Yiend, Jenny
2015-01-01
Information in the environment is frequently ambiguous in meaning. Emotional ambiguity, such as the stare of a stranger, or the scream of a child, encompasses possible good or bad emotional consequences. Those with elevated vulnerability to affective disorders tend to interpret such material more negatively than those without, a phenomenon known as "negative interpretation bias." In this study we examined the relationship between vulnerability to psychosis, measured by trait paranoia, and interpretation bias. One set of material permitted broadly positive/negative (valenced) interpretations, while another allowed more or less paranoid interpretations, allowing us to also investigate the content specificity of interpretation biases associated with paranoia. Regression analyses (n=70) revealed that trait paranoia, trait anxiety, and cognitive inflexibility predicted paranoid interpretation bias, whereas trait anxiety and cognitive inflexibility predicted negative interpretation bias. In a group comparison those with high levels of trait paranoia were negatively biased in their interpretations of ambiguous information relative to those with low trait paranoia, and this effect was most pronounced for material directly related to paranoid concerns. Together these data suggest that a negative interpretation bias occurs in those with elevated vulnerability to paranoia, and that this bias may be strongest for material matching paranoid beliefs. We conclude that content-specific biases may be important in the cause and maintenance of paranoid symptoms.
Monte Carlo Techniques for Nuclear Systems - Theory Lectures
Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications Group; Univ. of New Mexico, Albuquerque, NM (United States). Nuclear Engineering Dept.
2016-11-29
These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations
Ditto, Peter H; Wojcik, Sean P; Chen, Eric Evan; Grady, Rebecca Hofstein; Ringel, Megan M
2015-01-01
Duarte et al. are right to worry about political bias in social psychology but they underestimate the ease of correcting it. Both liberals and conservatives show partisan bias that often worsens with cognitive sophistication. More non-liberals in social psychology is unlikely to speed our convergence upon the truth, although it may broaden the questions we ask and the data we collect.
Das-Smaal, E.A.
1990-01-01
On what grounds can we conclude that an act of categorization is biased? In this chapter, it is contended that in the absence of objective norms of what categories actually are, biases in categorization can only be specified in relation to theoretical understandings of categorization. Therefore, the
Jackknife bias reduction for polychotomous logistic regression.
Bull, S B; Greenwood, C M; Hauck, W W
1997-03-15
Despite theoretical and empirical evidence that the usual MLEs can be misleading in finite samples, and some evidence that bias-reduced estimates are less biased and more efficient, such estimates have not seen wide application in practice. One can obtain bias-reduced estimates by jackknife methods, with or without full iteration, or by use of higher-order terms in a Taylor series expansion of the log-likelihood to approximate the asymptotic bias. We provide details of these methods for polychotomous logistic regression with a nominal categorical response. We conducted a Monte Carlo comparison of the jackknife and Taylor series estimates in moderate sample sizes in a general logistic regression setting, to investigate dichotomous and trichotomous responses and a mixture of correlated and uncorrelated binary and normal covariates. We found an approximate two-step jackknife and the Taylor series methods useful when the ratio of the number of observations to the number of parameters is greater than 15, but we cannot recommend the two-step and the fully iterated jackknife estimates when this ratio is less than 20, especially when there are large effects, binary covariates, or multicollinearity in the covariates.
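For readers unfamiliar with the jackknife, a generic leave-one-out sketch in Python follows (numpy assumed); it shows the textbook bias-corrected estimate, not the authors' approximate two-step variant for polychotomous logistic regression:

```python
import numpy as np

def jackknife_bias_corrected(estimator, data):
    """Leave-one-out jackknife bias correction:
    theta_J = n * theta_hat - (n - 1) * mean(theta_hat_(-i)),
    where theta_hat_(-i) is the estimate with observation i removed."""
    n = len(data)
    theta = estimator(data)
    loo = np.array([estimator(np.delete(data, i, axis=0)) for i in range(n)])
    return n * theta - (n - 1) * loo.mean(axis=0)
```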
2012-01-01
The 5th edition of the "Monts Jura Jazz Festival" will take place at the Esplanade du Lac in Divonne-les-Bains, France, on September 21 and 22. This festival, organized by the CERN Jazz Club and supported by the CERN Staff Association, is becoming a major musical event in the Geneva region. International jazz artists like Didier Lockwood and David Reinhardt are part of this year's outstanding program. The full program and e-tickets are available on the festival website. Don't miss this great festival!
Jazz Club
2012-01-01
The 5th edition of the "Monts Jura Jazz Festival" will take place on September 21 and 22, 2012, at the Esplanade du Lac in Divonne-les-Bains. This festival is organized by the "CERN Jazz Club" with the support of the "CERN Staff Association". It is a major musical event in the French/Swiss area and proposes a world-class program with jazz artists such as D. Lockwood and D. Reinhardt. More information on http://www.jurajazz.com.
LMC: Logarithmantic Monte Carlo
Mantz, Adam B.
2017-06-01
LMC is a Markov Chain Monte Carlo engine in Python that implements adaptive Metropolis-Hastings and slice sampling, as well as the affine-invariant method of Goodman & Weare, in a flexible framework. It can be used for simple problems, but the main use case is problems where expensive likelihood evaluations are provided by less flexible third-party software, which benefit from parallelization across many nodes at the sampling level. The parallel/adaptive methods use communication through MPI, or alternatively by writing/reading files, and mostly follow the approaches pioneered by CosmoMC (ascl:1106.025).
Goldman, Saul
1983-10-01
A method we call energy-scaled displacement Monte Carlo (ESDMC), whose purpose is to improve sampling efficiency and thereby speed up convergence rates in Monte Carlo calculations, is presented. The method involves scaling the maximum displacement a particle may make on a trial move to the particle's configurational energy. The scaling is such that, on average, the most stable particles make the smallest moves and the most energetic particles the largest moves. The method is compared to Metropolis Monte Carlo (MMC) and force bias Monte Carlo (FBMC) by applying all three methods to a dense Lennard-Jones fluid at two temperatures, and to hot ST2 water. The functions monitored as the Markov chains developed were, for the Lennard-Jones case, melting, radial distribution functions, internal energies, and heat capacities; for hot ST2 water, we monitored energies and heat capacities. The results suggest that ESDMC samples configuration space more efficiently than either MMC or FBMC in these systems for the biasing parameters used here. The benefit from using ESDMC seemed greatest for the Lennard-Jones systems.
Matthew Gentzkow; Jesse M. Shapiro
2005-01-01
A Bayesian consumer who is uncertain about the quality of an information source will infer that the source is of higher quality when its reports conform to the consumer's prior expectations. We use this fact to build a model of media bias in which firms slant their reports toward the prior beliefs of their customers in order to build a reputation for quality. Bias emerges in our model even though it can make all market participants worse off. The model predicts that bias will be less severe w...
Biased predecision processing.
Brownstein, Aaron L
2003-07-01
Decision makers conduct biased predecision processing when they restructure their mental representation of the decision environment to favor one alternative before making their choice. The question of whether biased predecision processing occurs has been controversial since L. Festinger (1957) maintained that it does not occur. The author reviews relevant research in sections on theories of cognitive dissonance, decision conflict, choice certainty, action control, action phases, dominance structuring, differentiation and consolidation, constructive processing, motivated reasoning, and groupthink. Some studies did not find evidence of biased predecision processing, but many did. In the Discussion section, the moderators are summarized and used to assess the theories.
Unified nonequilibrium dynamical theory for exchange bias and training effects
Zhang Kai-Cheng; Liu Bang-Gui
2009-01-01
We have investigated the exchange bias and training effect in ferromagnetic/antiferromagnetic (FM/AF) heterostructures using a unified Monte Carlo dynamical approach. The magnetization of the uncompensated AF layer is still open after the first field cycling is finished. Our simulated results show an obvious shift of the hysteresis loops (exchange bias) and a cycling dependence of the exchange bias (training effect) when the temperature is below 45 K. The exchange bias field decreases with decreasing cooling rate or with increasing temperature and number of field cyclings. Essentially, these two effects can be explained on the basis of the microscopic coexistence of both reversible and irreversible moment reversals of the AF domains. Our simulations are useful for understanding the real magnetization dynamics of such magnetic heterostructures.
Berkson’s bias, selection bias, and missing data
Westreich, Daniel
2012-01-01
While Berkson’s bias is widely recognized in the epidemiologic literature, it remains underappreciated as a model of both selection bias and bias due to missing data. Simple causal diagrams and 2×2 tables illustrate how Berkson’s bias connects to collider bias and selection bias more generally, and show the strong analogies between Berksonian selection bias and bias due to missing data. In some situations, considerations of whether data are missing at random or missing not at random is less i...
Introduction to Unconscious Bias
Schmelz, Joan T.
2010-05-01
We all have biases, and we are (for the most part) unaware of them. In general, men and women BOTH unconsciously devalue the contributions of women. This can have a detrimental effect on grant proposals, job applications, and performance reviews. Sociology is way ahead of astronomy in these studies. When evaluating identical application packages, male and female University psychology professors preferred 2:1 to hire "Brian” over "Karen” as an assistant professor. When evaluating a more experienced record (at the point of promotion to tenure), reservations were expressed four times more often when the name was female. This unconscious bias has a repeated negative effect on Karen's career. This talk will introduce the concept of unconscious bias and also give recommendations on how to address it using an example for a faculty search committee. The process of eliminating unconscious bias begins with awareness, then moves to policy and practice, and ends with accountability.
Quantifying the Effect of Undersampling in Monte Carlo Simulations Using SCALE
Perfetti, Christopher M [ORNL; Rearden, Bradley T [ORNL
2014-01-01
This study explores the effect of undersampling in Monte Carlo calculations on tally estimates and tally variance estimates for burnup credit applications. Steady-state Monte Carlo simulations were performed for models of several critical systems with varying degrees of spatial and isotopic complexity and the impact of undersampling on eigenvalue and flux estimates was examined. Using an inadequate number of particle histories in each generation was found to produce an approximately 100 pcm bias in the eigenvalue estimates, and biases that exceeded 10% in fuel pin flux estimates.
Improving PWR core simulations by Monte Carlo uncertainty analysis and Bayesian inference
Castro, Emilio; Buss, Oliver; Garcia-Herranz, Nuria; Hoefer, Axel; Porsch, Dieter
2016-01-01
A Monte Carlo-based Bayesian inference model is applied to the prediction of reactor operation parameters of a PWR nuclear power plant. In this non-perturbative framework, high-dimensional covariance information describing the uncertainty of microscopic nuclear data is combined with measured reactor operation data in order to provide statistically sound, well founded uncertainty estimates of integral parameters, such as the boron letdown curve and the burnup-dependent reactor power distribution. The performance of this methodology is assessed in a blind test approach, where we use measurements of a given reactor cycle to improve the prediction of the subsequent cycle. As it turns out, the resulting improvement of the prediction quality is impressive. In particular, the prediction uncertainty of the boron letdown curve, which is of utmost importance for the planning of the reactor cycle length, can be reduced by one order of magnitude by including the boron concentration measurement information of the previous...
Increasingly minimal bias routing
Bataineh, Abdulla; Court, Thomas; Roweth, Duncan
2017-02-21
A system and algorithm configured to generate diversity at the traffic source so that packets are uniformly distributed over all of the available paths, but to increase the likelihood of taking a minimal path with each hop the packet takes. This is achieved by configuring routing biases so as to prefer non-minimal paths at the injection point, but increasingly prefer minimal paths as the packet proceeds, referred to herein as Increasing Minimal Bias (IMB).
Biased causal inseparable game
Bhattacharya, Some Sankar
2015-01-01
Here we study a biased version of the causal inseparable game introduced in [Nat. Commun. 3, 1092 (2012), http://www.nature.com/ncomms/journal/v3/n10/full/ncomms2076.html]. Two separated parties, Alice and Bob, generate biased bits (say, input bits) in their respective local laboratories. Bob generates another biased bit (say, a decision bit) which determines their goal: whether Alice has to guess Bob's bit or vice versa. Under the assumption that events are ordered with respect to some global causal relation, we show that the success probability of this biased causal game is upper bounded, giving rise to a biased causal inequality (BCI). In the process matrix formalism, which is locally in agreement with quantum physics but assumes no global causal order, we show that there exist inseparable process matrices that violate the BCI for arbitrary bias in the decision bit. In such a scenario we also derive the maximal violation of the BCI under local operations involving tracele...
A pure-sampling quantum Monte Carlo algorithm.
Ospadov, Egor; Rothstein, Stuart M
2015-01-14
The objective of pure-sampling quantum Monte Carlo is to calculate physical properties that are independent of the importance sampling function being employed in the calculation, save for the mismatch of its nodal hypersurface with that of the exact wave function. To achieve this objective, we report a pure-sampling algorithm that combines features of forward walking methods of pure-sampling and reptation quantum Monte Carlo (RQMC). The new algorithm accurately samples properties from the mixed and pure distributions simultaneously in runs performed at a single set of time-steps, over which extrapolation to zero time-step is performed. In a detailed comparison, we found RQMC to be less efficient. It requires different sets of time-steps to accurately determine the energy and other properties, such as the dipole moment. We implement our algorithm by systematically increasing an algorithmic parameter until the properties converge to statistically equivalent values. As a proof in principle, we calculated the fixed-node energy, static α polarizability, and other one-electron expectation values for the ground-states of LiH and water molecules. These quantities are free from importance sampling bias, population control bias, time-step bias, extrapolation-model bias, and the finite-field approximation. We found excellent agreement with the accepted values for the energy and a variety of other properties for those systems.
Morillon, B.
1996-12-31
With most of the traditional and contemporary techniques, it is still impossible to solve the transport equation if one takes into account a fully detailed geometry and studies precisely the interactions between particles and matter. Only the Monte Carlo method offers such a possibility. However, with significant attenuation, the natural (analog) simulation remains inefficient: it becomes necessary to use biasing techniques, for which the solution of the adjoint transport equation is essential. The Monte Carlo code Tripoli has been using such techniques successfully for a long time with different approximate adjoint solutions; these methods require the user to find out some parameters. If these parameters are not optimal or nearly optimal, the biased simulations may yield small figures of merit. This paper presents a description of the most important biasing techniques of the Monte Carlo code Tripoli; we then show how to calculate the importance function for general geometry with multigroup cases. We present a completely automatic biasing technique where the parameters of the biased simulation are deduced from the solution of the adjoint transport equation calculated by collision probabilities. In this study we estimate the importance function through the collision probabilities method and evaluate its possibilities thanks to a Monte Carlo calculation. We compare different biased simulations with the importance function calculated by collision probabilities for one-group and multigroup problems. We have run simulations with the new biasing method for one-group transport problems with isotropic shocks and for multigroup problems with anisotropic shocks. The results show that for one-group, homogeneous-geometry transport problems the method is quite optimal without splitting and Russian roulette, but for multigroup, heterogeneous X-Y geometry problems the figures of merit are higher if we add splitting and Russian roulette. (author)
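To illustrate the basic payoff of biasing in deep-penetration problems, here is a toy Python sketch of exponentially biased free flights through a purely absorbing slab; it conveys only the likelihood-ratio weighting idea and is unrelated to Tripoli's adjoint-based importance scheme:

```python
import numpy as np

def slab_transmission(sigma=1.0, L=10.0, n=100_000, sigma_b=0.3, seed=1):
    """Toy deep-penetration problem: the probability exp(-sigma*L) that a
    particle crosses a purely absorbing slab of thickness L.  Free flights
    are drawn from a stretched exponential of rate sigma_b < sigma, so far
    more histories reach the far side, and each history is re-weighted by
    the likelihood ratio f(s)/g(s) to keep the estimator unbiased."""
    rng = np.random.default_rng(seed)
    s = rng.exponential(1.0 / sigma_b, n)                   # biased path lengths
    w = (sigma / sigma_b) * np.exp(-(sigma - sigma_b) * s)  # likelihood ratio
    score = w * (s > L)                                     # weighted transmission
    return score.mean(), score.std(ddof=1) / np.sqrt(n)
```

For sigma*L = 10 the analog answer is exp(-10), roughly 4.5e-5, so an unbiased analog run of the same size would see almost no transmitted histories; the biased run scores them routinely at reduced weight.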
Marcus, Ryan C. [Los Alamos National Laboratory
2012-07-25
MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.
Monte Carlo methods for electromagnetics
Sadiku, Matthew NO
2009-01-01
Until now, novices had to painstakingly dig through the literature to discover how to use Monte Carlo techniques for solving electromagnetic problems. Written by one of the foremost researchers in the field, Monte Carlo Methods for Electromagnetics provides a solid understanding of these methods and their applications in electromagnetic computation. Including much of his own work, the author brings together essential information from several different publications. Using a simple, clear writing style, the author begins with a historical background and review of electromagnetic theory. After addressing probability and statistics, he introduces the finite difference method as well as the fixed and floating random walk Monte Carlo methods. The text then applies the Exodus method to Laplace's and Poisson's equations and presents Monte Carlo techniques for handling Neumann problems. It also deals with whole field computation using the Markov chain, applies Monte Carlo methods to time-varying diffusion problems, and ...
Synchrotron stereotactic radiotherapy: dosimetry by Fricke gel and Monte Carlo simulations.
Boudou, Caroline; Biston, Marie-Claude; Corde, Stéphanie; Adam, Jean-François; Ferrero, Claudio; Estève, François; Elleaume, Hélène
2004-11-21
Synchrotron stereotactic radiotherapy (SSR) consists in loading the tumour with a high atomic number (Z) element and exposing it to monochromatic x-rays from a synchrotron source (50-100 keV) in stereotactic conditions. The dose distribution results from both the stereotactic monochromatic x-ray irradiation and the presence of the high-Z element. The purpose of this preliminary study was to evaluate the two-dimensional dose distribution resulting solely from the irradiation geometry, using Monte Carlo simulations and a Fricke gel dosimeter. The verification of a Monte Carlo-based dosimetry was first assessed by depth dose measurements in a water tank. We thereafter used a Fricke dosimeter to compare Monte Carlo simulations with dose measurements. The Fricke dosimeter is a solution containing ferrous ions which are oxidized to ferric ions under ionizing radiation, proportionally to the absorbed dose. A cylindrical phantom filled with Fricke gel was irradiated in stereotactic conditions over several slices with a continuous beam (beam section = 0.1 x 1 cm2). The phantom and calibration vessels were then imaged by nuclear magnetic resonance. The measured doses were fairly consistent with those predicted by Monte Carlo simulations; however, the measured maximum absolute dose was underestimated by 10% with respect to the calculation. The loss of information in the high-dose region is explained by the diffusion of ferric ions. Monte Carlo simulation is the most accurate tool for dosimetry involving complex geometries made of heterogeneous materials. Although the technique requires improvements, gel dosimetry remains an essential tool for the experimental verification of dose distribution in SSR with millimetre precision.
Diagnosing Undersampling in Monte Carlo Eigenvalue and Flux Tally Estimates
Perfetti, Christopher M [ORNL; Rearden, Bradley T [ORNL
2015-01-01
This study explored the impact of undersampling on the accuracy of tally estimates in Monte Carlo (MC) calculations. Steady-state MC simulations were performed for models of several critical systems with varying degrees of spatial and isotopic complexity, and the impact of undersampling on eigenvalue and fuel pin flux/fission estimates was examined. This study observed biases in MC eigenvalue estimates as large as several percent and biases in fuel pin flux/fission tally estimates that exceeded tens, and in some cases hundreds, of percent. This study also investigated five statistical metrics for predicting the occurrence of undersampling biases in MC simulations. Three of the metrics (the Heidelberger-Welch RHW, the Geweke Z-Score, and the Gelman-Rubin diagnostics) are commonly used for diagnosing the convergence of Markov chains, and two of the methods (the Contributing Particles per Generation and Tally Entropy) are new convergence metrics developed in the course of this study. These metrics were implemented in the KENO MC code within the SCALE code system and were evaluated for their reliability at predicting the onset and magnitude of undersampling biases in MC eigenvalue and flux tally estimates in two of the critical models. Of the five methods investigated, the Heidelberger-Welch RHW, the Gelman-Rubin diagnostics, and Tally Entropy produced test metrics that correlated strongly to the size of the observed undersampling biases, indicating their potential to effectively predict the size and prevalence of undersampling biases in MC simulations.
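Of the convergence metrics named above, the Gelman-Rubin diagnostic is the easiest to state in code. A standard Python formulation follows (numpy assumed); the implementation details inside KENO/SCALE may differ:

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for an (m, n) array holding
    m chains of n samples each.  Values near 1 suggest the chains have
    mixed; values well above 1 flag non-convergence (or undersampling)."""
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)   # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # within-chain variance
    var_plus = (n - 1) / n * W + B / n        # pooled variance estimate
    return np.sqrt(var_plus / W)
```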
Exchange bias of patterned systems: Model and numerical simulation
Garcia, Griselda [Facultad de Fisica, P. Universidad Catolica de Chile, Casilla 306, Santiago 7820436 (Chile); Centro para el Desarrollo de la Nanociencia y la Nanotecnologia, CEDENNA, Avda. Ecuador 3493, Santiago (Chile); Kiwi, Miguel, E-mail: mkiwi@puc.c [Facultad de Fisica, P. Universidad Catolica de Chile, Casilla 306, Santiago 7820436 (Chile); Centro para el Desarrollo de la Nanociencia y la Nanotecnologia, CEDENNA, Avda. Ecuador 3493, Santiago (Chile); Mejia-Lopez, Jose; Ramirez, Ricardo [Facultad de Fisica, P. Universidad Catolica de Chile, Casilla 306, Santiago 7820436 (Chile); Centro para el Desarrollo de la Nanociencia y la Nanotecnologia, CEDENNA, Avda. Ecuador 3493, Santiago (Chile)
2010-11-15
The magnitude of the exchange bias field of patterned systems exhibits a notable increase relative to the usual bilayer systems, where a continuous ferromagnetic film is deposited on an antiferromagnetic insulator. Here we develop a model, and implement a Monte Carlo calculation, to interpret the experimental observations; the results are consistent with experiment on the basis of assuming a small fraction of spins pinned ferromagnetically in the antiferromagnetic interface layer.
Metropolis Methods for Quantum Monte Carlo Simulations
Ceperley, D. M.
2003-01-01
Since its first description fifty years ago, the Metropolis Monte Carlo method has been used in a variety of different ways for the simulation of continuum quantum many-body systems. This paper will consider some of the generalizations of the Metropolis algorithm employed in quantum Monte Carlo: variational Monte Carlo, dynamical methods for projector Monte Carlo (i.e., diffusion Monte Carlo with rejection), multilevel sampling in path integral Monte Carlo, the sampling of permutations, ...
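The common kernel underlying all of these generalizations is the Metropolis acceptance test; here is a minimal random-walk Python sketch (numpy assumed, generic user-supplied target density), not any of the quantum variants discussed in the paper:

```python
import numpy as np

def metropolis(log_pi, x0, step, n, seed=0):
    """Random-walk Metropolis with symmetric Gaussian proposals: a move
    x -> x' is accepted with probability min(1, pi(x')/pi(x)), which leaves
    the target density pi invariant."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_pi(x0)
    out = np.empty(n)
    for i in range(n):
        xp = x + step * rng.standard_normal()
        lpp = log_pi(xp)
        if np.log(rng.random()) < lpp - lp:   # Metropolis acceptance test
            x, lp = xp, lpp
        out[i] = x                            # rejected moves repeat x
    return out
```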
Richet, Y
2006-12-15
Criticality Monte Carlo calculations aim at estimating the effective multiplication factor (k-effective) for a fissile system through iterations simulating neutron propagation (making a Markov chain). Arbitrary initialization of the neutron population can deeply bias the k-effective estimation, defined as the mean of the k-effective computed at each iteration. A simplified model of this cycle k-effective sequence is built, based on characteristics of industrial criticality Monte Carlo calculations. Statistical tests, inspired by Brownian bridge properties, are designed to detect stationarity of the cycle k-effective sequence. The detected initial transient is then suppressed in order to improve the estimation of the system k-effective. The different versions of this methodology are detailed and compared, first on a set of numerical tests fitted to criticality Monte Carlo calculations, and second on real criticality calculations. Finally, the best methodologies observed in these tests are selected and allow industrial Monte Carlo criticality calculations to be improved. (author)
Distinguishing Selection Bias and Confounding Bias in Comparative Effectiveness Research.
Haneuse, Sebastien
2016-04-01
Comparative effectiveness research (CER) aims to provide patients and physicians with evidence-based guidance on treatment decisions. As researchers conduct CER they face myriad challenges. Although inadequate control of confounding is the most-often cited source of potential bias, selection bias that arises when patients are differentially excluded from analyses is a distinct phenomenon with distinct consequences: confounding bias compromises internal validity, whereas selection bias compromises external validity. Despite this distinction, however, the label "treatment-selection bias" is being used in the CER literature to denote the phenomenon of confounding bias. Motivated by an ongoing study of treatment choice for depression on weight change over time, this paper formally distinguishes selection and confounding bias in CER. By formally distinguishing selection and confounding bias, this paper clarifies important scientific, design, and analysis issues relevant to ensuring validity. First is that the 2 types of biases may arise simultaneously in any given study; even if confounding bias is completely controlled, a study may nevertheless suffer from selection bias so that the results are not generalizable to the patient population of interest. Second is that the statistical methods used to mitigate the 2 biases are themselves distinct; methods developed to control one type of bias should not be expected to address the other. Finally, the control of selection and confounding bias will often require distinct covariate information. Consequently, as researchers plan future studies of comparative effectiveness, care must be taken to ensure that all data elements relevant to both confounding and selection bias are collected.
Measuring agricultural policy bias
Jensen, Henning Tarp; Robinson, Sherman; Tarp, Finn
2010-01-01
Measurement is a key issue in the literature on price incentive bias induced by trade policy. We introduce a general equilibrium measure of the relative effective rate of protection, which generalizes earlier protection measures. For our fifteen sample countries, results indicate ... The protection measure is therefore uniquely suited to capture the full impact of trade policies on relative agricultural price incentives.
Lectures on Monte Carlo methods
Madras, Neal
2001-01-01
Monte Carlo methods form an experimental branch of mathematics that employs simulations driven by random number generators. These methods are often used when others fail, since they are much less sensitive to the "curse of dimensionality", which plagues deterministic methods in problems with a large number of variables. Monte Carlo methods are used in many fields: mathematics, statistics, physics, chemistry, finance, computer science, and biology, for instance. This book is an introduction to Monte Carlo methods for anyone who would like to use these methods to study various kinds of mathemati
Graf, Peter A.; Stewart, Gordon; Lackner, Matthew; Dykes, Katherine; Veers, Paul
2016-05-01
Long-term fatigue loads for floating offshore wind turbines are hard to estimate because they require the evaluation of the integral of a highly nonlinear function over a wide variety of wind and wave conditions. Current design standards involve scanning over a uniform rectangular grid of metocean inputs (e.g., wind speed and direction and wave height and period), which becomes intractable in high dimensions as the number of required evaluations grows exponentially with dimension. Monte Carlo integration offers a potentially efficient alternative because its theoretical convergence is proportional to the inverse of the square root of the number of samples, independent of dimension. In this paper, we first report on the integration of the aeroelastic code FAST into NREL's systems engineering tool, WISDEM, and the development of a high-throughput pipeline capable of sampling from arbitrary distributions, running FAST on a large scale, and postprocessing the results into estimates of fatigue loads. Second, we use this tool to run a variety of studies aimed at comparing grid-based and Monte Carlo-based approaches to calculating long-term fatigue loads. We observe that for more than a few dimensions, the Monte Carlo approach can represent a large improvement in computational efficiency, but that as nonlinearity increases, the effectiveness of Monte Carlo is correspondingly reduced. The present work sets the stage for future research focusing on using advanced statistical methods for analysis of wind turbine fatigue as well as extreme loads.
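The dimensional argument can be demonstrated with a toy comparison; this Python sketch (numpy assumed) splits the same evaluation budget between a tensor grid and plain Monte Carlo, and is of course not the FAST/WISDEM pipeline itself:

```python
import numpy as np

def grid_vs_mc(f, dim, n_total, seed=0):
    """Estimate the mean of f over [0,1]^dim two ways with roughly the same
    budget: a uniform tensor grid, whose points per axis shrink as
    n_total**(1/dim), and plain Monte Carlo, whose error decays like
    1/sqrt(n_total) regardless of dim.  f maps an (N, dim) array to N values."""
    rng = np.random.default_rng(seed)
    per_axis = max(2, int(round(n_total ** (1.0 / dim))))
    axes = [np.linspace(0.0, 1.0, per_axis)] * dim
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, dim)
    return f(grid).mean(), f(rng.random((n_total, dim))).mean()

# Example: with n_total = 4096 and dim = 6, the grid gets only 4 points
# per axis, while Monte Carlo still spends all 4096 samples usefully.
```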
Sjöstrand, Torbjörn
2009-01-01
Given the current landscape in experimental high-energy physics, these lectures are focused on applications of event generators for hadron colliders like the Tevatron and LHC. Section 2 contains a first overview of the physics picture and the generator landscape. Thereafter section 3 describes the usage of matrix elements, section 4 the important topics of initial- and final-state showers, and section 5 how showers can be matched to different hard processes. The issue of multiparton interactions and their role in minimum-bias and underlying-event physics is introduced in section 6, followed by some comments on hadronization in section 7. The article concludes with an outlook on the ongoing generator-development work in section 8.
Zalk, Sue Rosenberg; And Others
This study investigated children's sex biased attitudes as a function of the sex, age, and race of the child as well as a geographical-SES factor. Two attitudes were measured on a 55-item questionnaire: Sex Pride (attributing positive characteristics to a child of the same sex) and Sex Prejudice (attributing negative characteristics to a child of…
Nimal, J.C.; Vergnaud, T. (CEA Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France))
1990-01-01
This paper describes the most important features of the Monte Carlo code TRIPOLI-2. This code solves the Boltzmann equation in three-dimensional geometries for coupled neutron and gamma-ray problems. Particular emphasis is devoted to the biasing techniques, which are very important for deep penetration. Future developments in TRIPOLI are described in the conclusion. (author).
Monte Carlo integration on GPU
Kanzaki, J.
2010-01-01
We use a graphics processing unit (GPU) for fast computation of Monte Carlo integrations. Two widely used Monte Carlo integration programs, VEGAS and BASES, are parallelized on the GPU. Using $W^{+}$ plus multi-gluon production processes at the LHC, we test integrated cross sections and execution times for programs in FORTRAN and C on the CPU and for those on the GPU. Integrated results agree with each other within statistical errors. Programs on the GPU run about 50 times faster than those in C...
Monte Carlo Methods in ICF (LIRPP Vol. 13)
Zimmerman, George B.
2016-10-01
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, but can be improved by roughly 50% in efficiency by angular biasing of the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.
Mahady, Kyle; Tan, Shida; Greenzweig, Yuval; Livengood, Richard; Raveh, Amir; Rack, Philip
2017-01-01
We present an updated version of our Monte-Carlo based code for the simulation of ion beam sputtering. This code simulates the interaction of energetic ions with a target, and tracks the cumulative damage, enabling it to simulate the dynamic evolution of nanostructures as material is removed. The updated code described in this paper is significantly faster, permitting the inclusion of new features, namely routines to handle interstitial atoms, and to reduce the surface energy as the structure would otherwise develop energetically unfavorable surface porosity. We validate our code against the popular Monte-Carlo code SRIM-TRIM, and study the development of nanostructures from Ne+ ion beam milling in a copper target.
Venema, Victor; Lindau, Ralf
2016-04-01
In an accompanying talk we show that well-homogenized national datasets warm more than temperatures from global collections averaged over the region of common coverage. In this poster we present auxiliary work on possible biases in the raw observations and on how well relative statistical homogenization can remove trend biases. There are several possible causes of cooling biases, which have not been studied much. Siting could be an important factor. Urban stations tend to move away from the centre to better locations. Many stations started inside urban areas and are nowadays more often outside them. Even for villages, the temperature difference between the centre and the edge can be 0.5°C. When a city station moves to an airport, which often happened around WWII, this takes the station (largely) out of the urban heat island. During the 20th century the Stevenson screen was established as the dominant thermometer screen. This screen protects the thermometer much better against radiation than earlier designs. Deficits of earlier measurement methods artificially warmed the temperatures in the 19th century. Newer studies suggest we may have underestimated the size of this bias. Currently we are in a transition to Automatic Weather Stations; the net global effect of this transition is not clear at this moment. Irrigation on average decreases the 2m-temperature by about 1 degree centigrade. At the same time, irrigation has increased significantly during the last century. People preferentially live in irrigated areas and weather stations serve agriculture. Thus it is possible that weather stations are more likely to be erected in irrigated areas than elsewhere. In this case irrigation could lead to a spurious cooling trend. In the Parallel Observations Science Team of the International Surface Temperature Initiative (ISTI-POST) we are studying the influence of the introduction of Stevenson screens and Automatic Weather Stations using parallel measurements
Olowofoyeku, AA
2016-01-01
This article addresses the issues attending common law collegiate courts' engagements with allegations of bias within their own ranks. It will be argued that, in such cases, it would be inappropriate to involve the collegiate panel or any member thereof in the decision, since such involvement inevitably encounters difficulties. The common law's dilemmas require drastic solutions, but the common law arguably is ill-equipped to implement the required change. The answer, it will be argued, is ...
Behavioral Biases in Interpersonal Contexts
N. Liu (Ning)
2017-01-01
This thesis presents evidence suggesting that the same types of biases in individual decision making under uncertainty pertain in interpersonal contexts. The chapters demonstrate in specific contexts how specific interpersonal factors attenuate, amplify, or replicate these biases.
TRIPOLI-3: a neutron/photon Monte Carlo transport code
Nimal, J.C.; Vergnaud, T. [Commissariat a l' Energie Atomique, Gif-sur-Yvette (France). Service d' Etudes de Reacteurs et de Mathematiques Appliquees
2001-07-01
The present version of TRIPOLI-3 solves the transport equation for coupled neutron and gamma-ray problems in three-dimensional geometries using the Monte Carlo method. This code is devoted both to shielding and criticality problems. Its most important features for solving the particle transport equation are the fine treatment of the physical phenomena and sophisticated biasing techniques useful for deep penetration. The code is used either for shielding design studies or as a reference and benchmark to validate cross sections. Neutronic studies are essentially cell or small-core calculations and criticality problems. TRIPOLI-3 has been used as a reference method, for example, for resonance self-shielding qualification. (orig.)
Multilevel sequential Monte Carlo samplers
Beskos, Alexandros
2016-08-29
In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort needed to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence, and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem.
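A two-level version of the telescoping identity is easy to sketch; the following Python fragment (numpy assumed, hypothetical user-supplied samplers) shows plain MLMC with coupled fine/coarse pairs, before the SMC extension the article introduces:

```python
import numpy as np

def mlmc_two_level(sample_coarse, sample_pair, n0, n1, seed=0):
    """Two-level Monte Carlo via the telescoping identity
    E[P_fine] = E[P_coarse] + E[P_fine - P_coarse]: many cheap coarse
    samples plus a few coupled fine/coarse pairs.  sample_pair must drive
    both discretizations with the *same* underlying randomness, so the
    correction term has small variance."""
    rng = np.random.default_rng(seed)
    coarse = np.array([sample_coarse(rng) for _ in range(n0)])
    pairs = np.array([sample_pair(rng) for _ in range(n1)])  # rows: (fine, coarse)
    correction = pairs[:, 0] - pairs[:, 1]
    return coarse.mean() + correction.mean()
```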
Monte Carlo Treatment Planning for Molecular Targeted Radiotherapy within the MINERVA System
Lehmann, J; Siantar, C H; Wessol, D E; Wemple, C A; Nigg, D; Cogliati, J; Daly, T; Descalle, M; Flickinger, T; Pletcher, D; DeNardo, G
2004-09-22
The aim of this project is to extend accurate and patient-specific treatment planning to new treatment modalities, such as molecular targeted radiation therapy, incorporating previously crafted and proven Monte Carlo and deterministic computation methods. A flexible software environment is being created that allows planning radiation treatment for these new modalities and combining different forms of radiation treatment with consideration of biological effects. The system uses common input interfaces, medical image sets for definition of patient geometry, and dose reporting protocols. Previously, the Idaho National Engineering and Environmental Laboratory (INEEL), Montana State University (MSU), and Lawrence Livermore National Laboratory (LLNL) had accrued experience in the development and application of Monte Carlo-based, three-dimensional, computational dosimetry and treatment planning tools for radiotherapy in several specialized areas. In particular, INEEL and MSU have developed computational dosimetry systems for neutron radiotherapy and neutron capture therapy, while LLNL has developed the PEREGRINE computational system for external beam photon-electron therapy. Building on that experience, the INEEL and MSU are developing the MINERVA (Modality Inclusive Environment for Radiotherapeutic Variable Analysis) software system as a general framework for computational dosimetry and treatment planning for a variety of emerging forms of radiotherapy. In collaboration with this development, LLNL has extended its PEREGRINE code to accommodate internal sources for molecular targeted radiotherapy (MTR), and has interfaced it with the plug-in architecture of MINERVA. Results from the extended PEREGRINE code have been compared to published data from other codes, and found to be in general agreement (EGS4 - 2%, MCNP - 10%)(Descalle et al. 2003). The code is currently being benchmarked against experimental data. The interpatient variability of the drug pharmacokinetics in MTR
Monte Carlo treatment planning for molecular targeted radiotherapy within the MINERVA system
Lehmann, Joerg [University of California, Lawrence Livermore National Laboratory, 7000 East Ave, Livermore, CA 94550 (United States); Siantar, Christine Hartmann [University of California, Lawrence Livermore National Laboratory, 7000 East Ave, Livermore, CA 94550 (United States); Wessol, Daniel E [Idaho National Engineering and Environmental Laboratory, PO Box 1625, Idaho Falls, ID 83415-3885 (United States); Wemple, Charles A [Idaho National Engineering and Environmental Laboratory, PO Box 1625, Idaho Falls, ID 83415-3885 (United States); Nigg, David [Idaho National Engineering and Environmental Laboratory, PO Box 1625, Idaho Falls, ID 83415-3885 (United States); Cogliati, Josh [Department of Computer Science, Montana State University, Bozeman, MT 59717 (United States); Daly, Tom [University of California, Lawrence Livermore National Laboratory, 7000 East Ave, Livermore, CA 94550 (United States); Descalle, Marie-Anne [University of California, Lawrence Livermore National Laboratory, 7000 East Ave, Livermore, CA 94550 (United States); Flickinger, Terry [University of California, Lawrence Livermore National Laboratory, 7000 East Ave, Livermore, CA 94550 (United States); Pletcher, David [University of California, Lawrence Livermore National Laboratory, 7000 East Ave, Livermore, CA 94550 (United States); DeNardo, Gerald [University of California Davis, School of Medicine, Sacramento, CA 95817 (United States)
2005-03-07
The aim of this project is to extend accurate and patient-specific treatment planning to new treatment modalities, such as molecular targeted radiation therapy, incorporating previously crafted and proven Monte Carlo and deterministic computation methods. A flexible software environment is being created that allows planning radiation treatment for these new modalities and combining different forms of radiation treatment with consideration of biological effects. The system uses common input interfaces, medical image sets for definition of patient geometry and dose reporting protocols. Previously, the Idaho National Engineering and Environmental Laboratory (INEEL), Montana State University (MSU) and Lawrence Livermore National Laboratory (LLNL) had accrued experience in the development and application of Monte Carlo based, three-dimensional, computational dosimetry and treatment planning tools for radiotherapy in several specialized areas. In particular, INEEL and MSU have developed computational dosimetry systems for neutron radiotherapy and neutron capture therapy, while LLNL has developed the PEREGRINE computational system for external beam photon-electron therapy. Building on that experience, the INEEL and MSU are developing the MINERVA (modality inclusive environment for radiotherapeutic variable analysis) software system as a general framework for computational dosimetry and treatment planning for a variety of emerging forms of radiotherapy. In collaboration with this development, LLNL has extended its PEREGRINE code to accommodate internal sources for molecular targeted radiotherapy (MTR), and has interfaced it with the plugin architecture of MINERVA. Results from the extended PEREGRINE code have been compared to published data from other codes, and found to be in general agreement (EGS4-2%, MCNP-10%) (Descalle et al 2003 Cancer Biother. Radiopharm. 18 71-9). The code is currently being benchmarked against experimental data. The interpatient variability of the
Quantum Monte Carlo Endstation for Petascale Computing
Lubos Mitas
2011-01-26
published papers, 15 invited talks and lectures nationally and internationally. My former graduate student and postdoc Dr. Michal Bajdich, who was supported by this grant, is currently a postdoc with ORNL in the group of Dr. F. Reboredo and Dr. P. Kent and is using the developed tools in a number of DOE projects. The QWalk package has become a truly important research tool used by the electronic structure community and has attracted several new developers in other research groups. Our tools use several types of correlated wavefunction approaches (variational, diffusion and reptation methods) and large-scale optimization methods for wavefunctions, and enable the calculation of energy differences such as cohesion and electronic gaps, but also densities and other properties; using multiple runs one can obtain equations of state for given structures and beyond. Our codes use efficient numerical and Monte Carlo strategies (high-accuracy numerical orbitals, multi-reference wave functions, highly accurate correlation factors, pairing orbitals, force-biased and correlated sampling Monte Carlo), are robustly parallelized, and run on tens of thousands of cores very efficiently. Our demonstration applications were focused on challenging research problems in several fields of materials science, such as transition metal solids. We note that our study of FeO solid was the first QMC calculation of transition metal oxides at high pressures.
Monte Carlo simulation as a tool to predict blasting fragmentation based on the Kuz Ram model
Morin, Mario A.; Ficarazzo, Francesco
2006-04-01
Rock fragmentation is considered the most important aspect of production blasting because of its direct effects on the costs of drilling and blasting and on the economics of the subsequent operations of loading, hauling and crushing. Over the past three decades, significant progress has been made in the development of new technologies for blasting applications. These technologies include increasingly sophisticated computer models for blast design and blast performance prediction. Rock fragmentation depends on many variables such as rock mass properties, site geology, in situ fracturing and blasting parameters, and as such has no complete theoretical solution for its prediction. However, empirical models for the estimation of the size distribution of rock fragments have been developed. In this study, a Monte Carlo-based blast fragmentation simulator, based on the Kuz-Ram fragmentation model, has been developed to predict the entire fragmentation size distribution, taking into account intact rock and joint properties, the type and properties of explosives, and the drilling pattern. Results produced by this simulator were quite favorable when compared with real fragmentation data obtained from a quarry blast. It is anticipated that the use of Monte Carlo simulation will increase our understanding of the effects of rock mass and explosive properties on rock fragmentation by blasting, as well as increase our confidence in these empirical models. This understanding will translate into improvements in blasting operations, their corresponding costs and the overall economics of open pit mines and rock quarries.
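The Kuz-Ram model assumes Rosin-Rammler distributed fragment sizes, which can be sampled by inverting the CDF; a minimal Python sketch follows (the simulator in the article additionally randomizes rock and blast parameters):

```python
import numpy as np

def sample_fragment_sizes(x50, n_unif, size=10_000, seed=0):
    """Inverse-transform sampling from the Rosin-Rammler passing fraction
    P(x) = 1 - exp(-0.693 * (x / x50)**n) underlying the Kuz-Ram model,
    with x50 the median fragment size and n_unif the uniformity index."""
    rng = np.random.default_rng(seed)
    u = rng.random(size)
    return x50 * (-np.log(1.0 - u) / 0.693) ** (1.0 / n_unif)
```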
A Monte Carlo model for 3D grain evolution during welding
Rodgers, Theron M.; Mitchell, John A.; Tikare, Veena
2017-09-01
Welding is one of the most widespread processes used in metal joining. However, there are currently no open-source software implementations for the simulation of microstructural evolution during a weld pass. Here we describe a Potts Monte Carlo based model implemented in the SPPARKS kinetic Monte Carlo computational framework. The model simulates melting, solidification and solid-state microstructural evolution of material in the fusion and heat-affected zones of a weld. The model does not simulate thermal behavior, but rather utilizes user input parameters to specify weld pool and heat-affected zone properties. Weld pool shapes are specified by Bézier curves, which allow for the specification of a wide range of pool shapes. Pool shapes can range from narrow and deep to wide and shallow, representing different fluid flow conditions within the pool. Surrounding temperature gradients are calculated with the aid of a closest point projection algorithm. The model also allows simulation of pulsed-power welding through time-dependent variation of the weld pool size. Example simulation results and comparisons with laboratory weld observations demonstrate microstructural variation with weld speed, pool shape, and pulsed power.
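A generic Potts Metropolis sweep conveys the flavor of such a grain-evolution kernel; this Python sketch (numpy assumed) omits the weld-pool geometry and temperature gradients that the actual SPPARKS model adds:

```python
import numpy as np

def potts_sweep(spins, T, rng):
    """One Metropolis sweep of a 2-D Q-state Potts grain model on a periodic
    lattice: a site's energy is its number of unlike nearest neighbours;
    flips that lower boundary energy are accepted, others with exp(-dE/T)."""
    ny, nx = spins.shape
    q = int(spins.max()) + 1
    for _ in range(spins.size):
        i, j = rng.integers(ny), rng.integers(nx)
        nbrs = [spins[(i - 1) % ny, j], spins[(i + 1) % ny, j],
                spins[i, (j - 1) % nx], spins[i, (j + 1) % nx]]
        old, new = spins[i, j], rng.integers(q)
        dE = sum(n != new for n in nbrs) - sum(n != old for n in nbrs)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] = new
    return spins

# Usage: rng = np.random.default_rng(0); grains = rng.integers(20, size=(64, 64))
# Repeated sweeps at low T coarsen the grain structure.
```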
Assessing Bias in Search Engines.
Mowshowitz, Abbe; Kawaguchi, Akira
2002-01-01
Addresses the measurement of bias in search engines on the Web, defining bias as the balance and representation of items in a collection retrieved from a database for a set of queries. Assesses bias by measuring the deviation from the ideal of the distribution produced by a particular search engine. (Author/LRW)
Monte Carlo modelling of Schottky diode for rectenna simulation
Bernuchon, E.; Aniel, F.; Zerounian, N.; Grimault-Jacquin, A. S.
2017-09-01
Before designing a detector circuit, the extraction of the electrical parameters of the Schottky diode is a critical step. This article is based on a Monte-Carlo (MC) solver of the Boltzmann Transport Equation (BTE) including different transport mechanisms at the metal-semiconductor contact, such as the image force effect and tunneling. The weight of the tunneling and thermionic currents is quantified according to different degrees of tunneling modelling. The I-V characteristic highlights the dependence of the ideality factor and the saturation current on bias. Harmonic Balance (HB) simulation of a rectifier circuit within the Advanced Design System (ADS) software shows that considering a non-linear ideality factor and saturation current in the electrical model of the Schottky diode does not seem essential. Indeed, bias-independent values extracted in the forward regime from the I-V curve are sufficient. However, the non-linear series resistance extracted from a small signal analysis (SSA) strongly influences the conversion efficiency at low input powers.
Highly Efficient Monte Carlo for Estimating the Unavailability of a Markov Dynamic System
XIAO Gang; DENG Li; ZHANG Ben-Ai; ZHU Jian-Shi
2004-01-01
Monte Carlo simulation has become an important tool for estimating the reliability and availability of dynamic systems, since conventional numerical methods are no longer efficient when the size of the system to solve is large. However, evaluating by simulation the probability of occurrence of very rare events means playing a very large number of histories of the system, which leads to unacceptable computing times. Highly efficient Monte Carlo schemes must therefore be worked out. In this paper, based on the integral equation describing the state transitions of a Markov dynamic system, a uniform Monte Carlo method for estimating unavailability is presented. Using a free-flight estimator, direct statistical estimation Monte Carlo is achieved. Using both the free-flight estimator and a biased probability space of sampling, weighted statistical estimation Monte Carlo is also achieved. Five Monte Carlo schemes, including crude simulation, analog simulation, statistical estimation based on crude and analog simulation, and weighted statistical estimation, are used for calculating the unavailability of a repairable Con/3/30:F system. Their efficiencies are compared with each other. The results show the weighted statistical estimation Monte Carlo has the smallest variance and the highest efficiency in very rare events simulation.
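The crude (analog) baseline in such comparisons takes only a few lines; the sketch below estimates the point unavailability of a single repairable component with assumed exponential failure and repair rates, and its large relative variance at small unavailabilities is exactly what motivates the weighted estimators above:

import numpy as np

rng = np.random.default_rng(1)
lam, mu, T = 1e-3, 1e-1, 1000.0   # failure rate, repair rate (1/h), mission time (h)

def down_at(t_end):
    # Play one history of the up/down process; report the state at t_end.
    t, up = 0.0, True
    while True:
        t += rng.exponential(1.0 / (lam if up else mu))
        if t > t_end:
            return not up
        up = not up

n = 100_000
q_hat = sum(down_at(T) for _ in range(n)) / n
print(f"MC unavailability  = {q_hat:.5f} "
      f"+/- {np.sqrt(q_hat * (1 - q_hat) / n):.5f}")
print(f"steady-state value = {lam / (lam + mu):.5f}")  # analytic lambda/(lambda+mu)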
Equilibrium Statistics: Monte Carlo Methods
Kröger, Martin
Monte Carlo methods use random numbers, or 'random' sequences, to sample from a known shape of a distribution, or to extract a distribution by other means, and, in the context of this book, to (i) generate representative equilibrated samples prior to their being subjected to external fields, or (ii) evaluate high-dimensional integrals. Recipes for both topics, and some more general methods, are summarized in this chapter. It is important to realize that Monte Carlo should be as artificial as possible to be efficient and elegant. Advanced Monte Carlo 'moves', required to optimize the speed of algorithms for a particular problem at hand, are outside the scope of this brief introduction. One particular modern example is the wavelet-accelerated MC sampling of polymer chains [406].
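As a concrete instance of the first recipe, here is a minimal Metropolis sampler that draws from a distribution specified only by its shape (log-density up to a constant); the target and proposal width are arbitrary examples:

import numpy as np

rng = np.random.default_rng(7)

def metropolis(log_p, x0, step, n):
    # Random-walk Metropolis: propose symmetrically, accept with min(1, p(y)/p(x)).
    x, samples = x0, np.empty(n)
    for i in range(n):
        y = x + rng.normal(0.0, step)
        if np.log(rng.random()) < log_p(y) - log_p(x):
            x = y
        samples[i] = x
    return samples

# Example: sample a standard Gaussian, for which log p(x) = -x^2/2 + const.
s = metropolis(lambda x: -0.5 * x * x, x0=0.0, step=1.0, n=50_000)
print("mean ~ 0:", s.mean(), " variance ~ 1:", s.var())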
Reducing Bias and Error in the Correlation Coefficient Due to Nonnormality
Bishara, Anthony J.; Hittner, James B.
2015-01-01
It is more common for educational and psychological data to be nonnormal than to be approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data, and it was compared…
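Simulations of this kind are straightforward to reproduce in outline. The sketch below compares the sampling distribution of the Pearson r for normal marginals against lognormal marginals obtained by transforming the same latent bivariate normal draws; note that the monotone transform also lowers the population Pearson correlation, so the shift shown mixes bias and attenuation. The population rho, sample size and replication count are illustrative:

import numpy as np

rng = np.random.default_rng(3)
rho, n, reps = 0.5, 20, 20_000
cov = np.array([[1.0, rho], [rho, 1.0]])

def sample_r(transform):
    # Draw one sample of size n and return its Pearson correlation.
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    return np.corrcoef(transform(z[:, 0]), transform(z[:, 1]))[0, 1]

r_norm = np.array([sample_r(lambda v: v) for _ in range(reps)])
r_logn = np.array([sample_r(np.exp) for _ in range(reps)])

print(f"normal marginals:    mean r = {r_norm.mean():.3f} (population rho = {rho})")
print(f"lognormal marginals: mean r = {r_logn.mean():.3f}")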
Anna Russo
Short peptides can be designed in silico and synthesized through automated techniques, making them advantageous and versatile protein binders. A number of docking-based algorithms allow for a computational screening of peptides as binders. Here we developed ex-novo peptides targeting the maltose site of the Maltose Binding Protein, the prototypical system for the study of protein-ligand recognition. We used a Monte Carlo-based protocol to computationally evolve a set of octapeptides starting from a polyalanine sequence. We screened the candidate peptides in silico and characterized their binding abilities by surface plasmon resonance, fluorescence and electrospray ionization mass spectrometry assays. These experiments showed the designed binders to recognize their target with micromolar affinity. We finally discuss the obtained results in the light of further improvements in the ex-novo optimization of peptide-based binders.
Stepanek, J; Laissue, J A; Lyubimova, N; Di Michiel, F; Slatkin, D N
2000-01-01
Microbeam radiation therapy (MRT) is a currently experimental method of radiotherapy which is mediated by an array of parallel microbeams of synchrotron-wiggler-generated X-rays. Suitably selected, nominally supralethal doses of X-rays delivered to parallel microslices of tumor-bearing tissues in rats can be either palliative or curative while causing little or no serious damage to contiguous normal tissues. Although the pathogenesis of MRT-mediated tumor regression is not understood, as in all radiotherapy such understanding will be based ultimately on our understanding of the relationships among the following three factors: (1) microdosimetry, (2) damage to normal tissues, and (3) therapeutic efficacy. Although physical microdosimetry is feasible, published information on MRT microdosimetry to date is computational. This report describes Monte Carlo-based computational MRT microdosimetry using photon and/or electron scattering and photoionization cross-section data in the 1 eV through 100 GeV range distrib...
Drecourt, J.-P.; Madsen, H.; Rosbjerg, Dan
2006-01-01
The colored noise filter formulation is extended to correct both time-correlated and uncorrelated model error components. A more stable version of the separate filter without feedback is presented. The filters are implemented in an ensemble framework using Latin hypercube sampling. The techniques are illustrated on a simple one-dimensional groundwater problem. The results show that the presented filters outperform the standard Kalman filter and that the implementations with bias feedback work in more general conditions than the implementations without feedback. © 2005 Elsevier Ltd. All rights reserved.
Monte Carlo Hamiltonian: Linear Potentials
LUO Xiang-Qian; LIU Jin-Jiang; HUANG Chun-Qing; JIANG Jun-Qin; Helmut KROGER
2002-01-01
We further study the validity of the Monte Carlo Hamiltonian method. The advantage of the method, in comparison with the standard Monte Carlo Lagrangian approach, is its capability to study the excited states. We consider two quantum mechanical models: a symmetric one, V(x) = |x|/2; and an asymmetric one, V(x) = ∞ for x < 0 and V(x) = x for x ≥ 0. The results for the spectrum, wave functions and thermodynamical observables are in agreement with the analytical or Runge-Kutta calculations.
Proton Upset Monte Carlo Simulation
O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.
2009-01-01
The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.
Çatlı, Serap, E-mail: serapcatli@hotmail.com [Gazi University, Faculty of Sciences, 06500 Teknikokullar, Ankara (Turkey); Tanır, Güneş [Gazi University, Faculty of Sciences, 06500 Teknikokullar, Ankara (Turkey)
2013-10-01
The present study aimed to investigate the effects of titanium, titanium alloy, and stainless steel hip prostheses on dose distribution based on the Monte Carlo simulation method, as well as the accuracy of the Eclipse treatment planning system (TPS) at 6 and 18 MV photon energies. In the present study the pencil beam convolution (PBC) method implemented in the Eclipse TPS was compared to the Monte Carlo method and ionization chamber measurements. The present findings show that if high-Z material is used in a prosthesis, large dose changes can occur due to scattering. The variance in dose observed in the present study was dependent on material type, density, and atomic number, as well as photon energy; as photon energy increased, backscattering decreased. The dose perturbation effect of hip prostheses was significant and could not be predicted accurately by the PBC method. The findings show that for accurate dose calculation a Monte Carlo-based TPS should be used in patients with hip prostheses.
Monte-Carlo scatter correction for cone-beam computed tomography with limited scan field-of-view
Bertram, Matthias; Sattel, Timo; Hohmann, Steffen; Wiegert, Jens
2008-03-01
In flat detector cone-beam computed tomography (CBCT), scattered radiation is a major source of image degradation, making accurate a posteriori scatter correction indispensable. A potential solution to this problem is provided by computerized scatter correction based on Monte Carlo simulations. Using this technique, the detected distributions of X-ray scatter are estimated for various viewing directions using Monte Carlo simulations of an intermediate reconstruction. However, as a major drawback, for standard CBCT geometries and with standard-size flat detectors such as those mounted on interventional C-arms, the scan field of view is too small to accommodate the human body without lateral truncation, and thus this technique cannot be readily applied. In this work, we present a novel method for constructing a model of the object in a laterally, and possibly also axially, extended field of view, which enables meaningful application of Monte Carlo-based scatter correction even in the case of heavy truncation. Evaluation is based on simulations of a clinical CT data set of a human abdomen, which strongly exceeds the field of view of the simulated C-arm-based CBCT imaging geometry. By using the proposed methodology, almost complete removal of scatter-caused inhomogeneities is demonstrated in reconstructed images.
A continuation multilevel Monte Carlo algorithm
Collier, Nathan
2014-09-05
We propose a novel Continuation Multi Level Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending when the required error tolerance is satisfied. CMLMC assumes discretization hierarchies that are defined a priori for each level and are geometrically refined across levels. The actual choice of computational work across levels is based on parametric models for the average cost per sample and the corresponding variance and weak error. These parameters are calibrated using Bayesian estimation, taking particular notice of the deepest levels of the discretization hierarchy, where only few realizations are available to produce the estimates. The resulting CMLMC estimator exhibits a non-trivial splitting between bias and statistical contributions. We also show the asymptotic normality of the statistical error in the MLMC estimator and justify in this way our error estimate that allows prescribing both required accuracy and confidence in the final result. Numerical results substantiate the above results and illustrate the corresponding computational savings in examples that are described in terms of differential equations either driven by random measures or with random coefficients. © 2014, Springer Science+Business Media Dordrecht.
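To make the level structure concrete, here is a plain (non-continuation) multilevel Monte Carlo estimator for E[g(X_T)] under Euler discretization of a geometric Brownian motion; CMLMC adds the tolerance continuation and Bayesian calibration described above on top of this skeleton, and every parameter below is illustrative:

import numpy as np

rng = np.random.default_rng(5)
a, b, X0, T = 0.05, 0.2, 1.0, 1.0        # dX = a*X dt + b*X dW (toy SDE)
g = lambda x: np.maximum(x - 1.0, 0.0)   # payoff, arbitrary example

def level_correction(level, n_samples, M=2):
    # Coupled fine/coarse Euler paths sharing the same Brownian increments.
    nf = M ** level
    dt = T / nf
    dW = rng.normal(0.0, np.sqrt(dt), (n_samples, nf))
    Xf = np.full(n_samples, X0)
    for k in range(nf):
        Xf = Xf + a * Xf * dt + b * Xf * dW[:, k]
    if level == 0:
        return g(Xf)                     # level 0 has no coarse partner
    Xc = np.full(n_samples, X0)
    dWc = dW.reshape(n_samples, nf // M, M).sum(axis=2)
    for k in range(nf // M):
        Xc = Xc + a * Xc * (M * dt) + b * Xc * dWc[:, k]
    return g(Xf) - g(Xc)

# Telescoping sum of level corrections estimates E[g(X_T)] on the finest level.
estimate = sum(level_correction(l, 20_000).mean() for l in range(6))
print("MLMC estimate of E[g(X_T)]:", estimate)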
Fission Matrix Capability for MCNP Monte Carlo
Carney, Sean E. [Los Alamos National Laboratory; Brown, Forrest B. [Los Alamos National Laboratory; Kiedrowski, Brian C. [Los Alamos National Laboratory; Martin, William R. [Los Alamos National Laboratory
2012-09-05
spatially low-order kernel, the fundamental eigenvector of which should converge faster than that of the continuous kernel. We can then redistribute the fission bank to match the fundamental fission matrix eigenvector, effectively eliminating all higher modes. For all computations here, biasing is not used, with the intention of comparing the unaltered, conventional Monte Carlo process with the fission matrix results. The source convergence of standard Monte Carlo criticality calculations is, to some extent, always subject to the characteristics of the problem. This method seeks to partially eliminate this problem-dependence by directly calculating the spatial coupling. The primary cost of this, which has prevented widespread use since its inception [2,3,4], is the extra storage required. To account for the coupling of all N spatial regions to every other region requires storing N{sup 2} values. For realistic problems, where a fine resolution is required for the suppression of discretization error, the storage becomes inordinate. Two factors lead to a renewed interest here: the larger memory available on modern computers and the development of a better storage scheme based on physical intuition. When the distance between source and fission events is short compared with the size of the entire system, saving memory by accounting for only local coupling introduces little extra error. We can gain other information from directly tallying the fission kernel: higher eigenmodes and eigenvalues. Conventional Monte Carlo cannot calculate this data - here we have a way to get new information for multiplying systems. In Ref. [5], higher mode eigenfunctions are analyzed for a three-region 1-dimensional problem and a 2-dimensional homogeneous problem. We analyze higher modes for more realistic problems. There is also the question of practical use of this information; here we examine a way of using eigenmode information to address the negative confidence interval bias due to inter
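Once the fission matrix has been tallied, the eigenvalue and fundamental source shape follow from its dominant eigenpair; the toy power iteration below illustrates this on an invented 3-region matrix (F[i][j] = expected fission neutrons born in region i per fission neutron born in region j):

import numpy as np

# Invented region-to-region fission kernel, for illustration only.
F = np.array([[0.50, 0.20, 0.02],
              [0.20, 0.55, 0.20],
              [0.02, 0.20, 0.50]])

s = np.ones(3) / 3.0            # initial source guess
for _ in range(200):            # power iteration toward the fundamental mode
    s_new = F @ s
    k = s_new.sum() / s.sum()   # eigenvalue estimate (the k-eff analogue)
    s = s_new / s_new.sum()

print("k =", k, " fundamental source shape =", s)

Higher modes, which the entry uses to address confidence-interval bias, can be obtained from the same matrix by deflation or a full eigendecomposition (e.g. np.linalg.eig).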
Monte Carlo Particle Lists: MCPL
Kittelmann, Thomas; Knudsen, Erik B; Willendrup, Peter; Cai, Xiao Xiao; Kanaki, Kalliopi
2016-01-01
A binary format with lists of particle state information, for interchanging particles between various Monte Carlo simulation applications, is presented. Portable C code for file manipulation is made available to the scientific community, along with converters and plugins for several popular simulation packages.
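As an illustration of the intended workflow, the snippet below reads particle records from an MCPL file using the Python module distributed with MCPL; the file name is hypothetical, and the exact attribute names (MCPLFile, nparticles, particles, ekin, ...) should be checked against the MCPL documentation for the version in use:

import mcpl

f = mcpl.MCPLFile("particles.mcpl")   # hypothetical input file
print("particles in file:", f.nparticles)
for p in f.particles:
    # each record carries particle type, kinetic energy, position and weight
    print(p.pdgcode, p.ekin, p.x, p.y, p.z, p.weight)
    break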
Applications of Monte Carlo Methods in Calculus.
Gordon, Sheldon P.; Gordon, Florence S.
1990-01-01
Discusses the application of probabilistic ideas, especially Monte Carlo simulation, to calculus. Describes some applications using the Monte Carlo method: Riemann sums; maximizing and minimizing a function; mean value theorems; and testing conjectures. (YP)
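The first of these applications is nearly a one-liner in practice; here is a classroom-style sketch of a Monte Carlo Riemann sum, where the integrand and interval are arbitrary examples:

import math
import random

def mc_integral(f, a, b, n=100_000):
    # Estimate the integral of f over [a, b] as (b - a) times the mean of f
    # evaluated at uniformly random points.
    total = sum(f(random.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

print(mc_integral(math.sin, 0.0, math.pi))   # exact value is 2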
Editorial bias in scientific publications.
Matías-Guiu, J; García-Ramos, R
2011-01-01
Many authors believe that there are biases in scientific publications. Editorial biases include publication bias, which refers to those situations where the results influence the editor's decision, and editorial bias proper, which refers to those situations where factors related to the authors or their environment influence the decision. This paper includes an analysis of the situation regarding editorial biases. One bias is that mainly articles with positive results are accepted, as opposed to those with negative results. Another is latent bias, where positive results are published before those with negative results. In order to examine editorial bias, this paper analyses the influence of where the article originated: the country or continent, the academic centre of origin, membership of cooperative groups, and the native language of the authors. The article also analyses biases in the editorial process in the publication of funded clinical trials. Editorial biases exist. Authors, when submitting their manuscript, should analyse different journals and decide where their article will receive adequate treatment. Copyright © 2010 Sociedad Española de Neurología. Published by Elsevier España. All rights reserved.
On the time scale associated with Monte Carlo simulations.
Bal, Kristof M; Neyts, Erik C
2014-11-28
Uniform-acceptance force-bias Monte Carlo (fbMC) methods have been shown to be a powerful technique for accessing longer timescales in atomistic simulations, allowing, for example, phase transitions and growth. Recently, a new fbMC method, the time-stamped force-bias Monte Carlo (tfMC) method, was derived with the inclusion of an estimated effective timescale; this timescale, however, does not seem able to explain some of the successes of the method. In this contribution, we therefore explicitly quantify the effective timescale tfMC is able to access for a variety of systems, namely a simple single-particle, one-dimensional model system, the Lennard-Jones liquid, an adatom on the Cu(100) surface, a silicon crystal with point defects and a highly defected graphene sheet, in order to gain new insights into the mechanisms by which tfMC operates. It is found that considerable boosts, up to three orders of magnitude compared to molecular dynamics, can be achieved for solid-state systems by lowering of the apparent activation barrier of occurring processes, while not requiring any system-specific input or modifications of the method. We furthermore address the pitfalls of using the method as a replacement for or complement of molecular dynamics simulations, its ability to explicitly describe correct dynamics and reaction mechanisms, and the association of timescales with MC simulations in general.
Outcome predictability biases learning.
Griffiths, Oren; Mitchell, Chris J; Bethmont, Anna; Lovibond, Peter F
2015-01-01
Much of contemporary associative learning research is focused on understanding how and when the associative history of cues affects later learning about those cues. Very little work has investigated the effects of the associative history of outcomes on human learning. Three experiments extended the "learned irrelevance" paradigm from the animal conditioning literature to examine the influence of an outcome's prior predictability on subsequent learning of relationships between cues and that outcome. All 3 experiments found evidence for the idea that learning is biased by the prior predictability of the outcome. Previously predictable outcomes were readily associated with novel predictive cues, whereas previously unpredictable outcomes were more readily associated with novel nonpredictive cues. This finding highlights the importance of considering the associative history of outcomes, as well as cues, when interpreting multistage designs. Associative and cognitive explanations of this certainty matching effect are discussed.
Chiruta, Daniel; Linares, J; Dahoo, Pierre-Richard; Dimian, Mihai
2015-01-01
In this contribution we solve the corresponding Hamiltonian for a three-dimensional SCO system, taking into account short-range and long-range interactions, using a biased Monte Carlo entropic sampling...
Haplotype association analyses in resources of mixed structure using Monte Carlo testing
Thomas Alun
2010-12-01
Background: Genomewide association studies have resulted in a great many genomic regions that are likely to harbor disease genes. Thorough interrogation of these specific regions is the logical next step, including regional haplotype studies to identify risk haplotypes upon which the underlying critical variants lie. Pedigrees ascertained for disease can be powerful for genetic analysis due to the cases being enriched for genetic disease. Here we present a Monte Carlo based method to perform haplotype association analysis. Our method, hapMC, allows for the analysis of full-length and sub-haplotypes, including imputation of missing data, in resources of nuclear families, general pedigrees, case-control data or mixtures thereof. Both traditional association statistics and transmission/disequilibrium statistics can be performed. The method includes a phasing algorithm that can be used in large pedigrees and optional use of pseudocontrols. Results: Our new phasing algorithm substantially outperformed the standard expectation-maximization algorithm that is ignorant of pedigree structure, and hence is preferable for resources that include pedigree structure. Through simulation we show that our Monte Carlo procedure maintains the correct type 1 error rates for all resource types. Power comparisons suggest that transmission-disequilibrium statistics are superior for performing association in resources of only nuclear families. For mixed-structure resources, however, the newly implemented pseudocontrol approach appears to be the best choice. Results also indicated the value of large high-risk pedigrees for association analysis, which, in the simulations considered, were comparable in power to case-control resources of the same sample size. Conclusions: We propose hapMC as a valuable new tool to perform haplotype association analyses, particularly for resources of mixed structure. The availability of meta-association and haplotype-mining modules in
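The Monte Carlo significance machinery underneath such tools is a permutation test; the self-contained toy below computes a Monte Carlo p-value for a case-control difference in haplotype carrier rates, with all counts invented:

import numpy as np

rng = np.random.default_rng(11)
# Carrier flags for one haplotype: 18/100 cases vs 9/100 controls (invented).
carrier = np.array([1] * 18 + [0] * 82 + [1] * 9 + [0] * 91)
status = np.array([1] * 100 + [0] * 100)

def stat(c, s):
    # Test statistic: absolute difference in carrier rates.
    return abs(c[s == 1].mean() - c[s == 0].mean())

obs = stat(carrier, status)
n_perm = 10_000
exceed = sum(stat(carrier, rng.permutation(status)) >= obs
             for _ in range(n_perm))
# The add-one correction keeps the Monte Carlo p-value valid.
print("Monte Carlo p-value:", (exceed + 1) / (n_perm + 1))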
Determination of the detective quantum efficiency of gamma camera systems: a Monte Carlo study.
Eriksson, Ida; Starck, Sven-Ake; Båth, Magnus
2010-01-01
The purpose of the present work was to investigate the validity of using the Monte Carlo technique for determining the detective quantum efficiency (DQE) of a gamma camera system and to use this technique in investigating the DQE behaviour of a gamma camera system and its dependency on a number of relevant parameters. The Monte Carlo-based software SIMIND, simulating a complete gamma camera system, was used in the present study. The modulation transfer function (MTF) of the system was determined from simulated images of a point source of (99m)Tc, positioned at different depths in a water phantom. Simulations were performed using different collimators and energy windows. The MTF of the system was combined with the photon yield and the sensitivity, obtained from the simulations, to form the frequency-dependent DQE of the system. As figure-of-merit (FOM), the integral of the 2D DQE was used. The simulated DQE curves agreed well with published data. As expected, there was a strong dependency of the shape and magnitude of the DQE curve on the collimator, energy window and imaging position. The highest FOM was obtained for a lower energy threshold of 127 keV for objects close to the detector and 131 keV for objects deeper in the phantom, supporting an asymmetric window setting to reduce scatter. The Monte Carlo software SIMIND can be used to determine the DQE of a gamma camera system from a simulated point source alone. The optimal DQE results in the present study were obtained for parameter settings close to the clinically used settings.
(U) Introduction to Monte Carlo Methods
Hungerford, Aimee L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-03-20
Monte Carlo methods are very valuable for representing solutions to particle transport problems. Here we describe a “cook book” approach to handling the terms in a transport equation using Monte Carlo methods. Focus is on the mechanics of a numerical Monte Carlo code, rather than the mathematical foundations of the method.
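In the cook-book spirit of the report, the core mechanics of a particle history reduce to a few sampled quantities. The toy sketch below samples exponential free-flight distances and chooses between absorption and scattering for a forward-streaming rod model; the cross sections are arbitrary assumptions:

import numpy as np

rng = np.random.default_rng(2)
sigma_s, sigma_a = 0.6, 0.4   # macroscopic scatter/absorption cross sections (1/cm)
sigma_t = sigma_s + sigma_a

absorbed_depths = []
for _ in range(100_000):
    x = 0.0
    while True:
        x += -np.log(rng.random()) / sigma_t   # sample distance to next collision
        if rng.random() < sigma_a / sigma_t:   # choose the interaction type
            absorbed_depths.append(x)          # absorbed: score and terminate
            break
        # scattered: in this 1D rod toy the particle keeps streaming forward

print("mean absorption depth:", np.mean(absorbed_depths),
      "(expected 1/sigma_a =", 1.0 / sigma_a, ")")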
Exchange bias in a generalized Meiklejohn-Bean approach
Binek, Ch. E-mail: binek@kleemann.uni-duisburg.de; Hochstrat, A.; Kleemann, W
2001-09-01
A generalized Meiklejohn-Bean model is considered in order to derive an analytic expression for the dependence of the exchange bias field on the layer thickness involved in ferromagnetic/antiferromagnetic heterosystems, on the orientation of the applied magnetic field with respect to the magnetic easy axes and on the quenched magnetization M{sub AF} of the antiferromagnetic pinning layer. While M{sub AF} is a well-known feature of field-cooled dilute antiferromagnets, it seems to occur quite generally also in pure AF pinning substrates. The new analytic expressions are successfully compared with recent experimental results and Monte Carlo investigations.
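For orientation, the textbook Meiklejohn-Bean estimate that such generalizations build on can be written (in LaTeX) as

H_{E} = -\frac{J_{\mathrm{int}}}{\mu_{0}\, M_{\mathrm{F}}\, t_{\mathrm{F}}}

where J_int is the interfacial exchange coupling energy per unit area and M_F and t_F are the magnetization and thickness of the ferromagnetic layer; the 1/t_F scaling is the layer-thickness dependence referred to above. Sign and prefactor conventions vary between authors, and the paper's generalized expression additionally involves the field orientation and the quenched antiferromagnetic magnetization M_AF.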
Bias-reduced estimation of long memory stochastic volatility
Frederiksen, Per; Nielsen, Morten Ørregaard
We propose to use a variant of the local polynomial Whittle estimator to estimate the memory parameter in volatility for long memory stochastic volatility models with potential nonstationarity in the volatility process. We show that the estimator is asymptotically normal and capable of obtaining bias reduction as well as a rate of convergence arbitrarily close to the parametric rate, n^{1/2}. A Monte Carlo study is conducted to support the theoretical results, and an analysis of daily exchange rates demonstrates the empirical usefulness of the estimators.
Importance biasing scheme implemented in the PRIZMA code
Kandiev, I.Z.; Malyshkin, G.N. [Russian Federal Nuclear Center-All-Russia Scientific-Technical Inst. of Technical Physics, Snezhinsk (Russian Federation)
1997-12-31
The PRIZMA code is intended for Monte Carlo calculations of linear radiation transport problems. The code has wide capabilities for describing geometry, sources and material composition, and for obtaining parameters specified by the user. It can follow the paths of particle cascades (including neutrons, photons, electrons, positrons and heavy charged particles), taking into account possible transmutations. An importance biasing scheme was implemented to solve problems which require the calculation of functionals related to small probabilities (for example, problems of protection against radiation, problems of detection, etc.). The scheme enables the trajectory-building algorithm to be adapted to the peculiarities of the problem.
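A generic form of such a scheme is geometric splitting with Russian roulette driven by cell importances; the toy deep-penetration problem below is invented, but the weight bookkeeping is the standard one (expected weight is conserved, so the estimator stays unbiased):

import numpy as np

rng = np.random.default_rng(4)
n_cells, p_survive = 10, 0.3                          # toy slab: chance to advance one cell
importance = [2.0 ** c for c in range(n_cells + 1)]   # importance doubles per cell

def transport():
    # Return the total weight reaching the far side of the slab.
    score, stack = 0.0, [(0, 1.0)]
    while stack:
        c, w = stack.pop()
        if rng.random() > p_survive:
            continue                     # absorbed in this cell
        c += 1
        if c == n_cells:
            score += w
            continue
        ratio = importance[c] / importance[c - 1]
        if ratio >= 1.0:                 # split into copies sharing the weight
            n_copies = int(ratio)
            stack.extend([(c, w / n_copies)] * n_copies)
        elif rng.random() < ratio:       # Russian roulette: survive with boosted weight
            stack.append((c, w / ratio))
    return score

est = np.mean([transport() for _ in range(20_000)])
print("penetration estimate:", est, "(analog answer:", p_survive ** n_cells, ")")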
Theoretical investigation of exchange bias
Xiong Zhi-Jie; Wang Huai-Yu; Ding Ze-Jun
2007-01-01
The exchange bias of bilayer magnetic films consisting of ferromagnetic (FM) and antiferromagnetic (AFM) layers in an uncompensated case is studied by use of the many-body Green's function method of quantum statistical theory. The effects of the layer thickness, the temperature and the interfacial coupling strength on the exchange bias HE are investigated. The dependence of the exchange bias HE on the FM layer thickness and temperature is qualitatively in agreement with experimental results. As temperature increases, both the coercivity HC and the exchange bias HE decrease. For each FM thickness there exists a minimum AFM thickness at which exchange bias occurs, which is called the pinning thickness.
Lu, Shih-I
2005-05-15
Ab initio calculations of the transition state structure and reaction enthalpy of the F + H2 → HF + H reaction have been carried out by the fixed-node diffusion quantum Monte Carlo method in this study. The Monte Carlo sampling is based on Ornstein-Uhlenbeck random walks guided by a trial wave function constructed from floating spherical Gaussian orbitals and spherical Gaussian geminals. The Monte Carlo calculated barrier height of 1.09(16) kcal/mol is consistent with the experimental values, 0.86(10)/1.18(10) kcal/mol, and the calculated value from a multireference-type coupled-cluster (MRCC) calculation with the aug-cc-pVQZ(F)/cc-pVQZ(H) basis set, 1.11 kcal/mol. The Monte Carlo-based calculation also gives a similar value of the reaction enthalpy, -32.00(4) kcal/mol, compared with the experimental value, -32.06(17) kcal/mol, and the calculated value from a MRCC/aug-cc-pVQZ(F)/cc-pVQZ(H) calculation, -31.94 kcal/mol. This study clearly indicates a further application of the random-walk-based approach in the field of quantum chemical calculation.
Soft QCD in ATLAS: Minimum bias and diffraction studies
Sarkisyan-Grinbaum, EK; The ATLAS collaboration
2011-01-01
We present measurements of charged particle production in proton-proton collisions at centre-of-mass energies of √s = 0.9, 2.36 and 7 TeV recorded with the ATLAS detector at the Large Hadron Collider. Events were collected using a single-arm minimum bias trigger, and charged tracks were measured with high precision in the inner tracking system. The minimum bias analysis uses data samples at all three energies, while diffractive events are studied using a sample of events at √s = 7 TeV. To study diffractive interactions, events that have hits on exactly one side of the ATLAS detector were selected. The charged particle multiplicity, pseudorapidity and transverse momentum spectra are analyzed and compared to the predictions of various Monte Carlo models.
Density matrix quantum Monte Carlo
Blunt, N S; Spencer, J S; Foulkes, W M C
2013-01-01
This paper describes a quantum Monte Carlo method capable of sampling the full density matrix of a many-particle system, thus granting access to arbitrary reduced density matrices and allowing expectation values of complicated non-local operators to be evaluated easily. The direct sampling of the density matrix also raises the possibility of calculating previously inaccessible entanglement measures. The algorithm closely resembles the recently introduced full configuration interaction quantum Monte Carlo method, but works all the way from infinite to zero temperature. We explain the theory underlying the method, describe the algorithm, and introduce an importance-sampling procedure to improve the stochastic efficiency. To demonstrate the potential of our approach, the energy and staggered magnetization of the isotropic antiferromagnetic Heisenberg model on small lattices and the concurrence of one-dimensional spin rings are compared to exact or well-established results. Finally, the nature of the sign problem...
Efficient kinetic Monte Carlo simulation
Schulze, Tim P.
2008-02-01
This paper concerns kinetic Monte Carlo (KMC) algorithms that have a single-event execution time independent of the system size. Two methods are presented—one that combines the use of inverted-list data structures with rejection Monte Carlo and a second that combines inverted lists with the Marsaglia-Norman-Cannon algorithm. The resulting algorithms apply to models with rates that are determined by the local environment but are otherwise arbitrary, time-dependent and spatially heterogeneous. While especially useful for crystal growth simulation, the algorithms are presented from the point of view that KMC is the numerical task of simulating a single realization of a Markov process, allowing application to a broad range of areas where heterogeneous random walks are the dominant simulation cost.
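The rejection variant admits a particularly compact statement: pick a site uniformly in O(1), accept with probability rate/r_max, and advance time by an exponential clock running at the total bound rate (the thinning construction guarantees the accepted events follow the true process). The per-site rates below are arbitrary:

import numpy as np

rng = np.random.default_rng(6)
rates = rng.uniform(0.1, 1.0, 1000)   # per-site event rates (toy model)
r_max, N = rates.max(), len(rates)

t, n_events = 0.0, 0
while n_events < 10_000:
    t += rng.exponential(1.0 / (N * r_max))   # time advances per attempt
    i = rng.integers(N)                       # O(1) uniform site choice
    if rng.random() < rates[i] / r_max:       # thinning/rejection step
        n_events += 1                         # the event at site i fires here

print(f"{n_events} events executed in simulated time {t:.2f}")

Grouping events into rate classes (the role of the inverted lists) keeps the acceptance probability high even when the rates are widely spread.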
Adaptive Multilevel Monte Carlo Simulation
Hoel, H
2011-08-23
This work generalizes a multilevel forward Euler Monte Carlo method introduced by Michael B. Giles (Oper. Res. 56(3):607–617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. That work proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single-level forward Euler Monte Carlo method. This work introduces an adaptive hierarchy of non-uniform time discretizations, generated by an adaptive algorithm introduced in (Anna Dzougoutov et al. Adaptive Monte Carlo algorithms for stopped diffusion. In Multiscale Methods in Science and Engineering, volume 44 of Lect. Notes Comput. Sci. Eng., pages 59–88. Springer, Berlin, 2005; Kyoung-Sook Moon et al. Stoch. Anal. Appl. 23(3):511–558, 2005; Kyoung-Sook Moon et al. An adaptive algorithm for ordinary, stochastic and partial differential equations. In Recent Advances in Adaptive Computation, volume 383 of Contemp. Math., pages 325–343. Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates stochastic, path-dependent time steps and is based on a posteriori error expansions first developed in (Anders Szepessy et al. Comm. Pure Appl. Math. 54(10):1169–1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of ϑ(TOL), from ϑ(TOL^{-3}) using a single-level version of the adaptive algorithm to ϑ((TOL^{-1} log(TOL))^2).
Bias in clinical intervention research
Gluud, Lise Lotte
2006-01-01
Research on bias in clinical trials may help identify some of the reasons why investigators sometimes reach the wrong conclusions about intervention effects. Several quality components for the assessment of bias control have been suggested, but although they seem intrinsically valid, empirical...
Cirino, Robert
Non-language elements of bias in mass media--such as images, sounds, tones of voices, inflection, and facial expressions--are invariably integrated with the choice of language. Further, they have an emotional impact that is often greater than that of language. It is essential that the teacher of English deal with this non-language bias since it is…
Sequential biases in accumulating evidence
Huggins, Richard; Dogo, Samson Henry
2015-01-01
Whilst it is common in clinical trials to use the results of tests at one phase to decide whether to continue to the next phase and to subsequently design the next phase, we show that this can lead to biased results in evidence synthesis. Two new kinds of bias associated with accumulating evidence, termed ‘sequential decision bias’ and ‘sequential design bias’, are identified. Both kinds of bias are the result of making decisions on the usefulness of a new study, or its design, based on the previous studies. Sequential decision bias is determined by the correlation between the value of the current estimated effect and the probability of conducting an additional study. Sequential design bias arises from using the estimated value instead of the clinically relevant value of an effect in sample size calculations. We considered both the fixed‐effect and the random‐effects models of meta‐analysis and demonstrated analytically and by simulations that in both settings the problems due to sequential biases are apparent. According to our simulations, the sequential biases increase with increased heterogeneity. Minimisation of sequential biases arises as a new and important research area necessary for successful evidence‐based approaches to the development of science. © 2015 The Authors. Research Synthesis Methods Published by John Wiley & Sons Ltd. PMID:26626562
Desjacques, Vincent; Schmidt, Fabian
2016-01-01
This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a pedagogical proof of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which includes the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in i...
Publication bias in epidemiological studies.
Siddiqi, Nazish
2011-06-01
Communication of research findings is the utmost responsibility of all scientists. Publication bias occurs if scientific studies with negative or null results fail to get published. This can happen due to bias in submitting, reviewing, accepting, publishing or aggregating scientific literature that fails to show positive results on a particular topic. Publication bias can make scientific literature unrepresentative of the actual research studies. This can give the reader a false impression about the beneficial effects of a particular treatment or intervention and can influence clinical decision making. Publication bias is more common than it is actually considered to be, but there are ways to detect and prevent it. This paper comments on the occurrence, types and consequences of publication bias and the strategies employed to detect and control it.
Revival of test bias research in preemployment testing.
Aguinis, Herman; Culpepper, Steven A; Pierce, Charles A
2010-07-01
We developed a new analytic proof and conducted Monte Carlo simulations to assess the effects of methodological and statistical artifacts on the relative accuracy of intercept- and slope-based test bias assessment. The main simulation design included 3,185,000 unique combinations of a wide range of values for true intercept- and slope-based test bias, total sample size, proportion of minority group sample size to total sample size, predictor (i.e., preemployment test scores) and criterion (i.e., job performance) reliability, predictor range restriction, correlation between predictor scores and the dummy-coded grouping variable (e.g., ethnicity), and mean difference between predictor scores across groups. Results based on 15 billion 925 million individual samples of scores and more than 8 trillion 662 million individual scores raise questions about the established conclusion that test bias in preemployment testing is nonexistent and, if it exists, it only occurs regarding intercept-based differences that favor minority group members. Because of the prominence of test fairness in the popular media, legislation, and litigation, our results point to the need to revive test bias research in preemployment testing.
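The core of such a simulation cell is compact; the toy below generates predictor and criterion scores for a majority and a minority group under a common true regression (no true bias) and fits the usual moderated regression whose group and interaction coefficients estimate intercept- and slope-based bias. All sample sizes and parameter values are invented, and a real study would repeat this over many replications and conditions:

import numpy as np

rng = np.random.default_rng(8)
n_maj, n_min = 200, 50
x_maj = rng.normal(0.0, 1.0, n_maj)   # majority-group test scores
x_min = rng.normal(-0.5, 1.0, n_min)  # minority group with a lower mean score
x = np.concatenate([x_maj, x_min])
g = np.concatenate([np.zeros(n_maj), np.ones(n_min)])        # dummy-coded group
y = 0.5 * x + rng.normal(0.0, np.sqrt(1.0 - 0.25), x.size)   # same model in both groups

# Moderated regression: intercept, predictor, group, and interaction terms.
X = np.column_stack([np.ones_like(x), x, g, x * g])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated intercept difference:", beta[2])
print("estimated slope difference:   ", beta[3])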
Rasch, Kevin M.; Hu, Shuming; Mitas, Lubos [Center for High Performance Simulation and Department of Physics, North Carolina State University, Raleigh, North Carolina 27695 (United States)
2014-01-28
We elucidate the origin of large differences (two-fold or more) in the fixed-node errors between the first- vs second-row systems for single-configuration trial wave functions in quantum Monte Carlo calculations. This significant difference in the valence fixed-node biases is studied across a set of atoms, molecules, and also Si, C solid crystals. We show that the key features which affect the fixed-node errors are the differences in electron density and the degree of node nonlinearity. The findings reveal how the accuracy of the quantum Monte Carlo varies across a variety of systems, provide new perspectives on the origins of the fixed-node biases in calculations of molecular and condensed systems, and carry implications for pseudopotential constructions for heavy elements.
ISAJET: a Monte Carlo event generator for pp and anti pp interactions. Version 3
Paige, F.E.; Protopopescu, S.D.
1982-09-01
ISAJET is a Monte Carlo computer program which simulates pp and anti-pp reactions at high energy. It can generate minimum bias events representative of the total inelastic cross section, high-pT hadronic events, and Drell-Yan events with a virtual γ, W±, or Z0. It is based on perturbative QCD and phenomenological models for jet fragmentation.
Administrative bias in South Africa
E S Nwauche
2005-01-01
This article reviews the interpretation of section 6(2)(a)(ii) of the Promotion of Administrative Justice Act, which makes an administrator being "biased or reasonably suspected of bias" a ground of judicial review. In this regard, the paper reviews the determination of administrative bias in South Africa, especially highlighting the concept of institutional bias. The paper notes that, in spite of the formulation of the bias ground of review, the test for administrative bias is the reasonable apprehension test laid down in the case of President of South Africa v South African Rugby Football Union (2), which on close examination is not the same thing. Accordingly, the paper urges an alternative interpretation that is based on the reasonable suspicion test enunciated in BTR Industries South Africa (Pty) Ltd v Metal and Allied Workers Union and R v Roberts. Within this context, the paper constructs a model for interpreting the bias ground of review that combines the reasonable suspicion test as interpreted in BTR Industries and R v Roberts, the possibility of the waiver of administrative bias, the curative mechanism of administrative appeal, as well as some level of judicial review exemplified by the jurisprudence of article 6(1) of the European Convention on Human Rights, especially in the light of the contemplation of the South African Magistrates' Court as a jurisdictional route of judicial review.
Cognitive Bias in Systems Verification
Larson, Steve
2012-01-01
Working definition of cognitive bias: patterns by which information is sought and interpreted that can lead to systematic errors in decisions. Cognitive bias is studied in diverse fields: economics, politics, intelligence, and marketing, to name a few. Attempts to ground cognitive science in physical characteristics of the cognitive apparatus exceed our knowledge. Studies are based on correlations; strict cause and effect is difficult to pinpoint. Effects cited in the paper and discussed here have been replicated many times over, and appear sound. Many biases have been described, but it is still unclear whether they are all distinct. There may only be a handful of fundamental biases, which manifest in various ways. Bias can affect system verification in many ways: overconfidence -> questionable decisions to deploy; availability -> inability to conceive critical tests; representativeness -> overinterpretation of results; positive test strategies -> confirmation bias. Debiasing at the individual level is very difficult. The potential effect of bias on the verification process can be managed, but not eliminated. It is worth considering at key points in the process.
Arctic Clouds and Sea Ice Inhomogeneities and Plane-parallel Biases
Rozwadowska, A.; Cahalan, R. F.
Monte Carlo simulations of the expected influence of non-uniformity in cloud structure and surface albedo on shortwave radiative fluxes in the Arctic atmosphere are presented. In particular, plane-parallel biases in cloud albedo and transmittance are studied for non-absorbing low-level all-liquid stratus clouds over sea ice. The "absolute bias" is defined as the difference between the cloud albedo or transmittance for the uniform or plane-parallel case, and the albedo or transmittance for nonuniform conditions with the same mean cloud optical thickness and the same mean surface albedo, averaged over a given area (i.e. bias > 0 means plane-parallel overestimates). Ranges of means and standard deviations of input parameters typical of Arctic conditions are determined from the FIRE-ACE/SHEBA/ARM experiment. We determine the sensitivity of the bias with respect to the following: domain-averaged means and spatial variances of cloud optical thickness and surface albedo, shape of the surface reflectance function, presence of a scattering layer under the clouds, and solar zenith angle. The simulations show that the biases in Arctic conditions are generally lower than in subtropical stratocumulus. The magnitudes of the absolute biases are unlikely to exceed 0.02 for albedo and 0.05 for transmittance. The "relative bias" expresses the absolute bias as a percentage of the actual cloud albedo or transmittance. The magnitude of the relative bias in albedo is typically below 2% over the reflective Arctic surface, while the magnitude of the relative bias in transmittance can exceed 10%. Over ice-free ocean, it is well known that the albedo bias is strictly positive, but in the Arctic it can change sign when the surface bias contribution dominates over the cloud contribution. On the other hand, the transmittance bias remains strictly negative in the Arctic, regardless of surface conditions. The influence of cloud variability on the biases strongly decreases with an
Development of Monte Carlo decay gamma-ray transport calculation system
Sato, Satoshi [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment; Kawasaki, Nobuo [Fujitsu Ltd., Tokyo (Japan); Kume, Etsuo [Japan Atomic Energy Research Inst., Center for Promotion of Computational Science and Engineering, Tokai, Ibaraki (Japan)
2001-06-01
In the DT fusion reactor, it is a critical concern to evaluate the decay gamma-ray biological dose rates after reactor shutdown exactly. In order to evaluate the decay gamma-ray biological dose rates exactly, a three-dimensional Monte Carlo decay gamma-ray transport calculation system has been developed by connecting the three-dimensional Monte Carlo particle transport calculation code and the induced activity calculation code. The developed calculation system consists of the following four functions. (1) The operational neutron flux distribution is calculated by the three-dimensional Monte Carlo particle transport calculation code. (2) The induced activities are calculated by the induced activity calculation code. (3) The decay gamma-ray source distribution is obtained from the induced activities. (4) The decay gamma-rays are generated using the decay gamma-ray source distribution, and the decay gamma-ray transport calculation is conducted by the three-dimensional Monte Carlo particle transport calculation code. In order to reduce the calculation time drastically, a biasing system for the decay gamma-ray source distribution has been developed, and this function is also included in the present system. In this paper, the outline and details of the system, and an execution example, are reported. The evaluation of the effect of the biasing system is also reported. (author)
Toward a Monte Carlo program for simulating vapor-liquid phase equilibria from first principles
McGrath, M; Siepmann, J I; Kuo, I W; Mundy, C J; Vandevondele, J; Sprik, M; Hutter, J; Mohamed, F; Krack, M; Parrinello, M
2004-10-20
Efficient Monte Carlo algorithms are combined with the Quickstep energy routines of CP2K to develop a program that allows for Monte Carlo simulations in the canonical, isobaric-isothermal, and Gibbs ensembles using a first principles description of the physical system. Configurational-bias Monte Carlo techniques and pre-biasing using an inexpensive approximate potential are employed to increase the sampling efficiency and to reduce the frequency of expensive ab initio energy evaluations. The new Monte Carlo program has been validated through extensive comparison with molecular dynamics simulations using the programs CPMD and CP2K. Preliminary results for the vapor-liquid coexistence properties (T = 473 K) of water using the Becke-Lee-Yang-Parr exchange and correlation energy functionals, a triple-zeta valence basis set augmented with two sets of d-type or p-type polarization functions, and Goedecker-Teter-Hutter pseudopotentials are presented. The preliminary results indicate that this description of water leads to an underestimation of the saturated liquid density and heat of vaporization and, correspondingly, an overestimation of the saturated vapor pressure.
Biases from neutrino bias: to worry or not to worry?
Raccanelli, Alvise; Verde, Licia; Villaescusa-Navarro, Francisco
2017-01-01
The relation between the halo field and the matter fluctuations (halo bias) in the presence of massive neutrinos depends on the total neutrino mass: massive neutrinos introduce an additional scale dependence of the bias which is usually neglected in cosmological analyses. We investigate the magnitude of the systematic effect on interesting cosmological parameters induced by neglecting this scale dependence, finding that while it is not a problem for current surveys, it is non-negligible for ...
High-resolution and Monte Carlo additions to the SASKTRAN radiative transfer model
D. J. Zawada
2015-06-01
The Optical Spectrograph and InfraRed Imaging System (OSIRIS) instrument on board the Odin spacecraft has been measuring limb-scattered radiance since 2001. The vertical radiance profiles measured as the instrument nods are inverted, with the aid of the SASKTRAN radiative transfer model, to obtain vertical profiles of trace atmospheric constituents. Here we describe two newly developed modes of the SASKTRAN radiative transfer model: a high-spatial-resolution mode and a Monte Carlo mode. The high-spatial-resolution mode is a successive-orders model capable of modelling the multiply scattered radiance when the atmosphere is not spherically symmetric; the Monte Carlo mode is intended for use as a highly accurate reference model. It is shown that the two models agree in a wide variety of solar conditions to within 0.2 %. As an example case for both models, Odin–OSIRIS scans were simulated with the Monte Carlo model and retrieved using the high-resolution model. A systematic bias of up to 4 % in retrieved ozone number density between scans where the instrument is scanning up or scanning down was identified. The bias is largest when the sun is near the horizon and the solar scattering angle is far from 90°. It was found that calculating the multiply scattered diffuse field at five discrete solar zenith angles is sufficient to eliminate the bias for typical Odin–OSIRIS geometries.
Monte Carlo approach to turbulence
Dueben, P.; Homeier, D.; Muenster, G. [Muenster Univ. (Germany). Inst. fuer Theoretische Physik; Jansen, K. [DESY, Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Mesterhazy, D. [Humboldt Univ., Berlin (Germany). Inst. fuer Physik
2009-11-15
The behavior of the one-dimensional random-force-driven Burgers equation is investigated in the path integral formalism on a discrete space-time lattice. We show that by means of Monte Carlo methods one may evaluate observables, such as structure functions, as ensemble averages over different field realizations. The regularization of shock solutions to the zero-viscosity limit (Hopf-equation) eventually leads to constraints on lattice parameters required for the stability of the simulations. Insight into the formation of localized structures (shocks) and their dynamics is obtained. (orig.)
Approaching Chemical Accuracy with Quantum Monte Carlo
Petruzielo, Frank R.; Toulouse, Julien; Umrigar, C. J.
2012-01-01
A quantum Monte Carlo study of the atomization energies for the G2 set of molecules is presented. Basis size dependence of diffusion Monte Carlo atomization energies is studied with a single determinant Slater-Jastrow trial wavefunction formed from Hartree-Fock orbitals. With the largest basis set, the mean absolute deviation from experimental atomization energies for the G2 set is 3.0 kcal/mol. Optimizing the orbitals within variational Monte Carlo improves the agreem...
Mean field simulation for Monte Carlo integration
Del Moral, Pierre
2013-01-01
In the last three decades, there has been a dramatic increase in the use of interacting particle methods as a powerful tool in real-world applications of Monte Carlo simulation in computational physics, population biology, computer sciences, and statistical machine learning. Ideally suited to parallel and distributed computation, these advanced particle algorithms include nonlinear interacting jump diffusions; quantum, diffusion, and resampled Monte Carlo methods; Feynman-Kac particle models; genetic and evolutionary algorithms; sequential Monte Carlo methods; adaptive and interacting Marko
Measuring Type Ia Supernova Populations of Stretch and Color and Predicting Distance Biases
Scolnic, Daniel
2016-01-01
Simulations of Type Ia Supernova (SNIa) surveys are a critical tool for correcting biases in the analysis of SNIa to infer cosmological parameters. Large-scale Monte Carlo simulations include a thorough treatment of observation history, measurement noise, intrinsic scatter models and selection effects. In this paper, we improve simulations with a robust technique to evaluate the underlying populations of SNIa color and stretch that correlate with luminosity. In typical analyses, the standardized SNIa brightness is determined from linear 'Tripp' relations between the light curve color and luminosity and between stretch and luminosity. However, this solution produces Hubble residual biases because intrinsic scatter and measurement noise result in measured color and stretch values that do not follow the Tripp relation. We find a 10σ bias (up to 0.3 mag) in Hubble residuals versus color and a 5σ bias (up to 0.2 mag) in Hubble residuals versus stretch in a joint sample of 920 spectroscopically confirm...
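For reference, the linear 'Tripp' standardization the abstract refers to is conventionally written (in SALT2-style notation, which may differ in detail from this paper's) as

\mu = m_B - M + \alpha\, x_1 - \beta\, c

where mu is the distance modulus, m_B the peak apparent magnitude, x_1 the stretch, c the color, and alpha, beta and M globally fitted parameters.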
Magnetic bearings with zero bias
Brown, Gerald V.; Grodsinsky, Carlos M.
1991-01-01
A magnetic bearing operating without a bias field has supported a shaft rotating at speeds up to 12,000 rpm with the usual four power supplies and with only two. A magnetic bearing is commonly operated with a bias current equal to half of the maximum current allowable in its coils. This linearizes the relation between net force and control current and improves the force slewing rate and hence the bandwidth. The steady bias current dissipates power, even when no force is required from the bearing. The power wasted is equal to two-thirds of the power at maximum force output. Examined here is the zero-bias idea. The advantages and disadvantages are noted.
MLE's bias pathology motivates MCMLE
Yatracos, Yannis G.
2013-01-01
Maximum likelihood estimates are often biased. It is shown that this pathology is inherent to the traditional ML estimation method for two or more parameters, thus motivating from a different angle the use of MCMLE.
Cognitive biases and language universals
Baronchelli, Andrea; Puglisi, Andrea
2013-01-01
Language universals have long been attributed to an innate Universal Grammar. An alternative explanation states that linguistic universals emerged independently in every language in response to shared cognitive, though not language-specific, biases. A computational model has recently shown how this could be the case, focusing on the paradigmatic example of the universal properties of color naming patterns, and producing results in accurate agreement with the experimental data. Here we investigate thoroughly the role of a cognitive bias in the framework of this model. We study how, and to what extent, the structure of the bias can influence the corresponding linguistic universal patterns. We also show that the cultural history of a group of speakers introduces population-specific constraints that act against the uniforming pressure of the cognitive bias, and we clarify the interplay between these two forces. We believe that our simulations can help to shed light on the possible mechanisms at work in the evol...
Kwee, R E; The ATLAS collaboration
2010-01-01
Since the restart of the LHC in November 2009, ATLAS has collected inelastic pp-collisions to perform first measurements of charged particle densities. These measurements will help to constrain various models that phenomenologically describe soft parton interactions. Understanding the trigger efficiencies for different event types is therefore crucial to minimize any possible bias in the event selection. ATLAS uses two main minimum bias triggers, featuring complementary detector components and trigger levels. While a hardware-based first trigger level situated in the forward regions with 2.09 < |eta| < 3.8 has been proven to select pp-collisions very efficiently, the Inner Detector-based minimum bias trigger uses a random seed on filled bunches and central tracking detectors for the event selection. Both triggers were essential for the analysis of kinematic spectra of charged particles. Their performance and trigger efficiency measurements as well as studies on possible bias sources will be presen...
Monte Carlo Treatment Planning for Advanced Radiotherapy
Cronholm, Rickard
and validation of a Monte Carlo model of a medical linear accelerator (i), converting a CT scan of a patient to a Monte Carlo compliant phantom (ii) and translating the treatment plan parameters (including beam energy, angles of incidence, collimator settings etc) to a Monte Carlo input file (iii). A protocol...... previous algorithms since it uses delineations of structures in order to include and/or exclude certain media in various anatomical regions. This method has the potential to reduce anatomically irrelevant media assignment. In house MATLAB scripts translating the treatment plan parameters to Monte Carlo...
1-D EQUILIBRIUM DISCRETE DIFFUSION MONTE CARLO
T. EVANS; ET AL
2000-08-01
We present a new hybrid Monte Carlo method for 1-D equilibrium diffusion problems in which the radiation field coexists with matter in local thermodynamic equilibrium. This method, the Equilibrium Discrete Diffusion Monte Carlo (EqDDMC) method, combines Monte Carlo particles with spatially discrete diffusion solutions. We verify the EqDDMC method with computational results from three slab problems. The EqDDMC method represents an incremental step toward applying this hybrid methodology to non-equilibrium diffusion, where it could be simultaneously coupled to Monte Carlo transport.
Preferences, country bias, and international trade
S. Roy (Santanu); J.M.A. Viaene (Jean-Marie)
1998-01-01
Analyzes international trade where consumer preferences exhibit country bias: why country biases arise, how trade can occur in the presence of country bias, and implications for the pattern of trade and specialization.
Greatbatch, Richard; Drews, Annika; Ding, Hui; Latif, Mojib; Park, Wonsun
2016-04-01
The North Atlantic cold bias, associated with a too zonal path of the North Atlantic Current and a missing "northwest corner", is a common problem in coupled climate and forecast models. The bias affects the North Atlantic and European climate mean state, variability and predictability. We investigate the use of a flow field correction to adjust the path of the North Atlantic Current as well as additional corrections to the surface heat and freshwater fluxes. Results using the Kiel Climate Model show that the flow field correction allows a northward flow into the northwest corner, largely eliminating the bias below the surface layer. A surface cold bias remains but can be eliminated by additionally correcting the surface freshwater flux, without adjusting the surface heat flux seen by the ocean model. A model version in which only the surface fluxes of heat and freshwater are corrected continues to exhibit the incorrect path of the North Atlantic Current and a strong subsurface bias. Removing the bias impacts the multi-decadal time scale variability in the model and leads to a better representation of the SST pattern associated with the Atlantic Multidecadal Variability than the uncorrected model.
Error in Monte Carlo, quasi-error in Quasi-Monte Carlo
Kleiss, R. H. P.; Lazopoulos, A.
2006-01-01
While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction o...
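To make the contrast concrete, here is a minimal sketch (my own construction, not from the paper) comparing plain Monte Carlo with scrambled-Sobol Quasi-Monte Carlo on a toy integral whose exact value is known; it assumes SciPy's scipy.stats.qmc module is available.

```python
# Plain MC vs scrambled-Sobol QMC on the integral of prod(x_i) over [0,1]^d,
# whose exact value is (1/2)^d. A toy comparison, not the paper's setup.
import numpy as np
from scipy.stats import qmc

d = 5
exact = 0.5 ** d
f = lambda x: np.prod(x, axis=1)

rng = np.random.default_rng(0)
for m in (8, 12, 16):                       # n = 2^m points per estimate
    n = 2 ** m
    mc_est = f(rng.random((n, d))).mean()   # independent pseudo-random points
    sob = qmc.Sobol(d, scramble=True, seed=0)
    qmc_est = f(sob.random_base2(m)).mean() # low-discrepancy point set
    print(f"n=2^{m:2d}  MC err={abs(mc_est - exact):.2e}  "
          f"QMC err={abs(qmc_est - exact):.2e}")
```

Note that with scrambled (randomized) QMC the error can be estimated by replicating over independent scramblings, one practical response to the estimation problem the abstract raises.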
Uncertainties in s-process nucleosynthesis in massive stars determined by Monte Carlo variations
Nishimura (西村信哉), N.; Hirschi, R.; Rauscher, T.; Murphy, A. St. J.; Cescutti, G.
2017-08-01
The s-process in massive stars produces the weak component of the s-process (nuclei up to A ∼ 90), in amounts that match solar abundances. For heavier isotopes, such as barium, production through neutron capture is significantly enhanced in very metal-poor stars with fast rotation. However, detailed theoretical predictions for the resulting final s-process abundances have important uncertainties caused both by the underlying uncertainties in the nuclear physics (principally neutron-capture reaction and β-decay rates) as well as by the stellar evolution modelling. In this work, we investigated the impact of nuclear-physics uncertainties relevant to the s-process in massive stars. Using a Monte Carlo based approach, we performed extensive nuclear reaction network calculations that include newly evaluated upper and lower limits for the individual temperature-dependent reaction rates. We found that most of the uncertainty in the final abundances is caused by uncertainties in the neutron-capture rates, while β-decay rate uncertainties affect only a few nuclei near s-process branchings. The s-process in rotating metal-poor stars shows quantitatively different uncertainties and key reactions, although the qualitative characteristics are similar. We confirmed that our results do not significantly change at different metallicities for fast rotating massive stars in the very low metallicity regime. We highlight which of the identified key reactions are realistic candidates for improved measurement by future experiments.
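As a toy illustration of the Monte Carlo variation idea (my own two-reaction chain, far simpler than the paper's full network), one can sample each rate between assumed lower and upper limits and see which rate dominates the spread of a final abundance.

```python
# Toy uncertainty propagation: chain A -> B -> C with rates r1, r2, each varied
# log-uniformly within an assumed factor-of-two uncertainty band.
import numpy as np

rng = np.random.default_rng(1)
n_samples, t_end = 10_000, 1.0
r1_0, r2_0 = 2.0, 1.0                          # nominal rates (arbitrary units)

f1 = 2.0 ** rng.uniform(-1, 1, n_samples)      # rate-uncertainty factors
f2 = 2.0 ** rng.uniform(-1, 1, n_samples)
r1, r2 = r1_0 * f1, r2_0 * f2

# Closed-form solution of the linear chain with A(0)=1, B(0)=0:
# B(t) = r1/(r2 - r1) * (exp(-r1 t) - exp(-r2 t))
B = r1 / (r2 - r1) * (np.exp(-r1 * t_end) - np.exp(-r2 * t_end))

print("B abundance: median %.4f, 90%% interval [%.4f, %.4f]"
      % tuple(np.percentile(B, [50, 5, 95])))
# Crude key-rate diagnostic: which factor correlates most with the output?
for name, f in (("rate 1", f1), ("rate 2", f2)):
    print(name, "correlation with B:", np.corrcoef(np.log(f), B)[0, 1].round(2))
```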
Calculation of photon pulse height distribution using deterministic and Monte Carlo methods
Akhavan, Azadeh; Vosoughi, Naser
2015-12-01
Radiation transport techniques used in radiation detection systems fall into one of two categories, probabilistic and deterministic. While probabilistic methods are typically used in pulse height distribution simulation, recreating the behavior of each individual particle, the deterministic approach, which approximates the macroscopic behavior of particles by solution of the Boltzmann transport equation, is being developed because of its potential advantages in computational efficiency for complex radiation detection problems. In the current work, the linear transport equation is solved using two methods: the collided components of the scalar flux algorithm, applied by iterating on the scattering source, and the ANISN deterministic computer code. This approach is presented in one dimension with anisotropic scattering orders up to P8 and angular quadrature orders up to S16. Also, the multi-group gamma cross-section library required for this numerical transport simulation is generated in an appropriate discrete form. Finally, photon pulse height distributions are indirectly calculated by deterministic methods and compare favorably with those from the Monte Carlo based codes MCNPX and FLUKA.
Tsalach, A.; Metzger, Y.; Breskin, I.; Zeitak, R.; Shechter, R.
2014-03-01
Development of techniques for continuous measurement of regional blood flow, and in particular cerebral blood flow (CBF), is essential for monitoring critical care patients. Recently, a novel technique based on ultrasound modulation of light was developed for non-invasive, continuous CBF monitoring (termed ultrasound-tagged light (UTL or UT-NIRS)), and shown to correlate with readings of {sup 133}Xe SPECT [1] and laser Doppler [2]. Coherent light is introduced into the tissue concurrently with an ultrasound (US) field. Displacement of scattering centers within the sampled volume induced by Brownian motion, blood flow, and the US field affects the photons' temporal correlation. Hence, the temporal fluctuations of the obtained speckle pattern provide dynamic information about the blood flow. We developed a comprehensive simulation, combining the effects of Brownian motion, US, and flow on the obtained speckle pattern. Photon trajectories within the tissue are generated using a Monte-Carlo based model. Then, the temporal changes in the optical path due to displacement of scattering centers are determined, and the corresponding interference pattern over time is derived. Finally, the light intensity autocorrelation function of a single speckle is calculated, from which the tissue decorrelation time is determined. The simulation's results are compared with in-vitro experiments, using a digital correlator, demonstrating decorrelation time prediction within the 95% confidence interval. This model may assist in the development of optical based methods for blood flow measurements and particularly, in methods using the acousto-optic effect.
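The decorrelation idea can be sketched with a toy model (my construction, not the authors' simulator): give each photon path a Brownian phase, sum the paths into a single-speckle field, and watch the intensity autocorrelation decay.

```python
# Single-speckle intensity autocorrelation from Brownian phase accumulation.
# All parameter values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps, dt = 2000, 4000, 1e-5
sigma = 50.0                                   # phase diffusion rate (rad/sqrt(s))

dphi = sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
phase = np.cumsum(dphi, axis=1)                # Brownian phase per path
field = np.exp(1j * phase).mean(axis=0)        # coherent sum = one speckle
inten = np.abs(field) ** 2

# g2(tau) estimated by a lagged time average (assumes ergodicity)
lags = np.arange(0, 400, 10)
g2 = [np.mean(inten[: n_steps - L] * inten[L:]) / np.mean(inten) ** 2
      for L in lags]

# For this toy, g1(tau) = exp(-sigma^2 tau / 2) and Siegert gives g2 = 1 + |g1|^2,
# so g2(0) ~ 2 and the 1/e decorrelation time of g1 is 2/sigma^2.
print("g2 at lag 0: %.2f (Siegert limit 2)" % g2[0])
print("expected decorrelation time: %.2e s" % (2 / sigma**2))
```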
Avrorin, E. N.; Tsvetokhin, A. G.; Xenofontov, A. I.; Kourbatova, E. I.; Regens, J. L.
2002-02-26
This paper presents the results of an ongoing research and development project conducted by Russian institutions in Moscow and Snezhinsk, supported by the International Science and Technology Center (ISTC), in collaboration with the University of Oklahoma. The joint study focuses on developing and applying analytical tools to effectively characterize contaminant transport and assess risks associated with migration of radionuclides and heavy metals in the water column and sediments of large reservoirs or lakes. The analysis focuses on the development and evaluation of theoretical-computational models that describe the distribution of radioactive wastewater within a reservoir and characterize the associated radiation field, as well as estimate doses received from radiation exposure. In particular, Monte Carlo-based theoretical-computational methods are applied to increase the precision of results and to reduce computing time for estimating the characteristics of the radiation field emitted from the contaminated wastewater layer. The calculated migration of radionuclides is used to estimate distributions of radiation doses that could be received by an exposed population based on exposure to radionuclides from specified volumes of discrete aqueous sources. The calculated dose distributions can be used to support near-term and long-term decisions about priorities for environmental remediation and stewardship.
Monte Carlo simulation of the kinetic effects on GaAs/GaAs(001) MBE growth
Ageev, Oleg A.; Solodovnik, Maxim S.; Balakirev, Sergey V.; Mikhaylin, Ilya A.; Eremenko, Mikhail M.
2017-01-01
The molecular beam epitaxial growth of GaAs on the GaAs(001)-(2×4) surface is investigated using a kinetic Monte Carlo-based method. The developed algorithm makes it possible to focus on the kinetic effects in a wide range of growth conditions and enables considerable computational speedup. The simulation results show that the growth rate has a dramatic influence upon both the island morphology and the Ga surface diffusion length. The average island size decreases with increasing growth rate, while the island density increases with increasing growth rate as well as with the As4/Ga beam equivalent pressure ratio. As the growth rate increases, the island density becomes less dependent upon the As4/Ga pressure ratio and approaches a saturation value. We also discuss three characteristics of Ga surface diffusion, namely the diffusion length of the first deposited Ga adatom, the average diffusion length, and the island spacing as an average distance between islands. The calculations show that the As4/Ga pressure ratio dependences of these characteristics obey the same law, but with different coefficients. An increase of the As4/Ga pressure ratio leads to a decrease in both the diffusion length and the island spacing. However, its influence becomes stronger with increasing growth rate for the first Ga adatom diffusion length and weaker for the average diffusion length and for the island spacing.
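A bare-bones kinetic Monte Carlo sketch (my 1D simplification; the paper's model treats the 2D GaAs(001) surface with As4/Ga kinetics) shows the mechanism behind the reported growth-rate dependence: atoms deposit at rate F per site, isolated adatoms hop at rate D, and attachment is irreversible.

```python
# 1D deposition-diffusion-aggregation KMC; illustrative only.
import numpy as np

rng = np.random.default_rng(3)
L, D, F = 500, 1e4, 1.0               # sites, hop rate (1/s), deposition rate (ML/s)
occ = np.zeros(L, dtype=bool)
mobile = []                           # indices of isolated (hence mobile) adatoms

deposited = 0
while deposited < 0.1 * L:            # grow to 0.1 ML coverage
    r_dep, r_hop = F * L, D * len(mobile)
    # (the physical time advance, -log(u)/(r_dep + r_hop), is omitted for brevity)
    if rng.random() < r_dep / (r_dep + r_hop):     # deposition event
        i = rng.integers(L)
        if not occ[i]:
            occ[i] = True
            deposited += 1
    else:                                          # hop of a random mobile adatom
        i = mobile[rng.integers(len(mobile))]
        occ[i] = False
        occ[(i + rng.choice((-1, 1))) % L] = True  # target is empty: i was isolated
    # rebuild the mobile list (O(L) per event -- simple, not fast)
    mobile = [i for i in np.flatnonzero(occ)
              if not occ[(i - 1) % L] and not occ[(i + 1) % L]]

islands = np.sum(occ & ~np.roll(occ, 1))           # left edges of occupied runs
print("island density:", islands / L)
```

Raising F relative to D leaves adatoms less time to diffuse before burial, so more and smaller islands nucleate, which is the trend the abstract reports.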
Application of the subgroup method to multigroup Monte Carlo calculations
Martin, Nicolas
effects of the scattering reaction consistent with the subgroup method. In this study, we generalize the Discrete Angle Technique, already proposed for homogeneous, multigroup cross sections, to isotopic cross sections in the form of probability tables. In this technique, the angular density is discretized into probability tables. Similarly to the cross-section case, a moment approach is used to compute the probability tables for the scattering cosine. (4) The introduction of a leakage model based on the B1 fundamental mode approximation. Unlike deterministic lattice packages, most Monte Carlo-based lattice physics codes do not include leakage models. However, the generation of homogenized and condensed group constants (cross sections, diffusion coefficients) requires the critical flux. This project has involved the development of a program within the DRAGON framework, written in Fortran 2003 and wrapped with a driver in C, the GANLIB 5. Choosing Fortran 2003 has permitted the use of some modern features, such as the definition of objects and methods, data encapsulation, and polymorphism. The validation of the proposed code has been performed by comparison with other numerical methods: (1) the continuous-energy Monte Carlo method of the SERPENT code; (2) the Collision Probability (CP) method and the discrete ordinates (SN) method of the DRAGON lattice code; (3) the multigroup Monte Carlo code MORET, coupled with the DRAGON code. Benchmarks used in this work are representative of some industrial configurations encountered in reactor and criticality-safety calculations: (1) Pressurized Water Reactor (PWR) cells and assemblies; (2) Canada-Deuterium Uranium Reactor (CANDU-6) clusters; (3) critical experiments from the ICSBEP handbook (International Criticality Safety Benchmark Evaluation Program).
The estimation method of GPS instrumental biases
Anonymous
2001-01-01
A model of estimating the global positioning system (GPS) instrumental biases and the methods to calculate the relative instrumental biases of satellite and receiver are presented. The calculated results of GPS instrumental biases, the relative instrumental biases of satellite and receiver, and total electron content (TEC) are also shown. Finally, the stability of the GPS instrumental biases, as well as that of the satellite and receiver instrumental biases, is evaluated, indicating that they are very stable over a period of two and a half months.
Schreiber, Eric C; Chang, Sha X
2012-08-01
Microbeam radiation therapy (MRT) is an experimental radiotherapy technique that has shown potent antitumor effects with minimal damage to normal tissue in animal studies. This unique form of radiation is currently produced only in a few large synchrotron accelerator research facilities in the world. To promote widespread translational research on this promising treatment technology we have proposed, and are in the initial development stages of, a compact MRT system based on carbon nanotube field emission x-ray technology. We report on a Monte Carlo based feasibility study of the compact MRT system design. Monte Carlo calculations were performed using EGSnrc-based codes. The proposed small animal research MRT device design includes carbon nanotube cathodes shaped to match the corresponding MRT collimator apertures, a common reflection anode with filter, and a MRT collimator. Each collimator aperture is sized to deliver a beam width ranging from 30 to 200 μm at 18.6 cm source-to-axis distance. Design parameters studied with Monte Carlo include electron energy, cathode design, anode angle, filtration, and collimator design. Calculations were performed for single and multibeam configurations. Increasing the energy from 100 kVp to 160 kVp increased the photon fluence through the collimator by a factor of 1.7. Both energies produced a largely uniform fluence along the long dimension of the microbeam, with 5% decreases in intensity near the edges. The isocentric dose rate for 160 kVp was calculated to be 700 Gy/min/A in the center of a 3 cm diameter target. Scatter contributions resulting from collimator size were found to produce only small (<7%) changes in the dose rate for field widths greater than 50 μm. Dose vs depth was weakly dependent on filtration material. The peak-to-valley ratio varied from 10 to 100 as the separation between adjacent microbeams varied from 150 to 1000 μm. Monte Carlo simulations demonstrate that the proposed compact MRT system
Monte Carlo simulations of ABC stacked kagome lattice films.
Yerzhakov, H V; Plumer, M L; Whitehead, J P
2016-05-18
Properties of films of geometrically frustrated ABC stacked antiferromagnetic kagome layers are examined using Metropolis Monte Carlo simulations. The impact of having an easy-axis anisotropy on the surface layers and cubic anisotropy in the interior layers is explored. The spin structure at the surface is shown to be different from that of the bulk 3D fcc system, where surface axial anisotropy tends to align spins along the surface [1 1 1] normal axis. This alignment then propagates only weakly to the interior layers through exchange coupling. Results are shown for the specific heat, magnetization and sub-lattice order parameters for both surface and interior spins in three and six layer films as a function of increasing axial surface anisotropy. Relevance to the exchange bias phenomenon in IrMn3 films is discussed.
Optimal mesh hierarchies in Multilevel Monte Carlo methods
Von Schwerin, Erik
2016-01-08
I will discuss how to choose optimal mesh hierarchies in Multilevel Monte Carlo (MLMC) simulations when computing the expected value of a quantity of interest depending on the solution of, for example, an Ito stochastic differential equation or a partial differential equation with stochastic data. I will consider numerical schemes based on uniform discretization methods with general approximation orders and computational costs. I will compare optimized geometric and non-geometric hierarchies and discuss how enforcing some domain constraints on parameters of MLMC hierarchies affects the optimality of these hierarchies. I will also discuss the optimal tolerance splitting between the bias and the statistical error contributions and its asymptotic behavior. This talk presents joint work with N.Collier, A.-L.Haji-Ali, F. Nobile, and R. Tempone.
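A minimal MLMC estimator in the spirit of the talk (my construction, with a fixed rather than optimized hierarchy) couples each fine Euler–Maruyama path of geometric Brownian motion with a coarse path driven by the same Brownian increments.

```python
# Multilevel Monte Carlo for E[X_T], dX = mu*X dt + sig*X dW, via the
# telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
import numpy as np

rng = np.random.default_rng(4)
mu, sig, T, x0 = 0.05, 0.2, 1.0, 1.0

def level_estimator(level, n_samples, m0=4):
    """Mean of P_l - P_{l-1} over coupled paths (P_{-1} := 0)."""
    nf = m0 * 2 ** level                        # fine-grid steps
    dt = T / nf
    dW = np.sqrt(dt) * rng.standard_normal((n_samples, nf))
    xf = np.full(n_samples, x0)
    for k in range(nf):                         # fine path
        xf = xf + mu * xf * dt + sig * xf * dW[:, k]
    if level == 0:
        return xf.mean()
    xc = np.full(n_samples, x0)
    dWc = dW.reshape(n_samples, nf // 2, 2).sum(axis=2)  # pairwise-summed increments
    for k in range(nf // 2):                    # coupled coarse path
        xc = xc + mu * xc * (2 * dt) + sig * xc * dWc[:, k]
    return (xf - xc).mean()

# Geometric decay of sample sizes across levels (a fixed schedule, not the
# optimized hierarchies discussed in the talk).
sizes = (100_000, 25_000, 6_250, 1_600)
estimate = sum(level_estimator(l, n) for l, n in enumerate(sizes))
print("MLMC estimate %.4f  vs exact %.4f" % (estimate, x0 * np.exp(mu * T)))
```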
Modelling of scintillator based flat-panel detectors with Monte-Carlo simulations
Reims, N.; Sukowski, F.; Uhlmann, N.
2011-01-01
Scintillator based flat panel detectors are state of the art in the field of industrial X-ray imaging applications. Choosing the proper system and setup parameters for the vast range of different applications can be a time consuming task, especially when developing new detector systems. Since the system behaviour cannot always be foreseen easily, Monte-Carlo (MC) simulations are key to gaining further knowledge of system components and their behaviour under different imaging conditions. In this work we used two Monte-Carlo based models to examine an indirect converting flat panel detector, specifically the Hamamatsu C9312SK. We focused on the signal generation in the scintillation layer and its influence on the spatial resolution of the whole system. The models differ significantly in their level of complexity. The first model gives a global description of the detector based on different parameters characterizing the spatial resolution. With relatively small effort a simulation model can be developed which matches the real detector with regard to signal transfer. The second model allows a more detailed insight into the system. It is based on the well established cascade theory, i.e. describing the detector as a cascade of elemental gain and scattering stages, which represent the built-in components and their signal transfer behaviour. In comparison to the first model, the influence of single components, especially the important light spread behaviour in the scintillator, can be analysed in a more differentiated way. Although the implementation of the second model is more time consuming, both models have in common that a relatively small number of manufacturer-supplied system parameters is needed. The results of both models were in good agreement with the measured parameters of the real system.
2015-01-01
We present a sophisticated likelihood reconstruction algorithm for shower-image analysis of imaging Cherenkov telescopes. The reconstruction algorithm is based on the comparison of the camera pixel amplitudes with the predictions from a Monte Carlo based model. Shower parameters are determined by maximisation of a likelihood function. Maximisation of the likelihood as a function of the shower fit parameters is performed using a numerical non-linear optimisation technique. A related reconstruction technique has already been developed by the CAT and the H.E.S.S. experiments, and provides a more precise direction and energy reconstruction of the photon-induced shower compared to second-moment analysis of the camera images. Examples are shown of the performance of the analysis on simulated gamma-ray data from the VERITAS array.
Wen, De-Qi; Liu, Wei; Gao, Fei; Lieberman, M. A.; Wang, You-Nian
2016-08-01
A hybrid model, i.e. a global model coupled bidirectionally with a parallel Monte-Carlo collision (MCC) sheath model, is developed to investigate an inductively coupled discharge with a bias source. This hybrid model can self-consistently reveal the interaction between the bulk plasma and the radio frequency (rf) bias sheath. More specifically, the plasma parameters affecting the characteristics of the rf bias sheath (sheath length and self-bias) are calculated by the global model, and the effect of the rf bias sheath on the bulk plasma is determined by the voltage drop across the rf bias sheath. Moreover, a specific number of ions is tracked in the rf bias sheath, and ultimately the ion energy distribution function (IEDF) incident on the bias electrode is obtained. To validate this model, both the bulk plasma density and the IEDF on the bias electrode in an argon discharge are compared with experimental measurements, and good agreement is obtained. The advantage of this model is that it can quickly calculate the bulk plasma density and the IEDF on the bias electrode, which are of practical interest in industrial plasma processing, and the model could easily be extended to industrial gases.
Langevin Monte Carlo filtering for target tracking
Iglesias Garcia, Fernando; Bocquel, Melanie; Driessen, Hans
2015-01-01
This paper introduces the Langevin Monte Carlo Filter (LMCF), a particle filter with a Markov chain Monte Carlo algorithm which draws proposals by simulating Hamiltonian dynamics. This approach is well suited to non-linear filtering problems in high dimensional state spaces where the bootstrap filte
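The proposal mechanism can be illustrated in isolation (this is a plain Metropolis-adjusted Langevin step on a toy target, not the full LMCF): the sample drifts along the gradient of the log-target before adding noise, which is what makes such moves effective in high-dimensional state spaces.

```python
# Metropolis-adjusted Langevin (MALA) step on a standard Gaussian target.
import numpy as np

rng = np.random.default_rng(5)

def log_p(x):                       # toy log-target: standard Gaussian
    return -0.5 * np.sum(x ** 2)

def grad_log_p(x):
    return -x

def mala_step(x, eps=0.1):
    prop = x + 0.5 * eps * grad_log_p(x) + np.sqrt(eps) * rng.standard_normal(x.shape)
    # Hastings correction for the asymmetric Langevin proposal
    def log_q(b, a):                # log density (up to a constant) of b given a
        return -np.sum((b - a - 0.5 * eps * grad_log_p(a)) ** 2) / (2 * eps)
    log_alpha = log_p(prop) + log_q(x, prop) - log_p(x) - log_q(prop, x)
    return prop if np.log(rng.random()) < log_alpha else x

x, samples = np.zeros(10), []
for _ in range(5000):
    x = mala_step(x)
    samples.append(x.copy())
print("sample variance per dim ~", np.var(samples, axis=0).mean().round(2))  # ~1
```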
An introduction to Monte Carlo methods
Walter, J. -C.; Barkema, G. T.
2015-01-01
Monte Carlo simulations are methods for simulating statistical systems. The aim is to generate a representative ensemble of configurations to access thermodynamical quantities without the need to solve the system analytically or to perform an exact enumeration. The main principles of Monte Carlo sim
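The principle can be shown with the textbook example of Metropolis sampling for the 1D Ising model, where a representative ensemble yields the mean energy without exact enumeration (a standard illustration, not taken from the paper).

```python
# Metropolis sampling of the 1D Ising chain with periodic boundaries.
import numpy as np

rng = np.random.default_rng(6)
N, beta = 100, 0.5
spins = rng.choice((-1, 1), N)

def energy(s):
    return -np.sum(s * np.roll(s, 1))          # nearest-neighbour coupling

E_samples = []
for sweep in range(2000):
    for _ in range(N):
        i = rng.integers(N)
        dE = 2 * spins[i] * (spins[(i - 1) % N] + spins[(i + 1) % N])
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i] *= -1                     # accept the flip
    if sweep >= 500:                           # discard burn-in
        E_samples.append(energy(spins))

# Exact result for the long periodic chain: <E>/N -> -tanh(beta)
print("MC: %.3f   exact: %.3f" % (np.mean(E_samples) / N, -np.tanh(beta)))
```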
Challenges of Monte Carlo Transport
Long, Alex Roberts [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-10
These are slides from a presentation for the Parallel Summer School at Los Alamos National Laboratory. Solving discretized partial differential equations (PDEs) of interest can require a large number of computations. We can identify concurrency to allow parallel solution of discrete PDEs. Simulated particle histories can be used to solve the Boltzmann transport equation. Particle histories are independent in neutral particle transport, making them amenable to parallel computation. Physical parameters and method type determine the data dependencies of particle histories. Data requirements shape parallel algorithms for Monte Carlo. Then, Parallel Computational Physics and Parallel Monte Carlo are discussed and, finally, the results are given. The mesh passing method greatly simplifies the IMC implementation and allows simple load-balancing. Using MPI windows and passive, one-sided RMA further simplifies the implementation by removing target synchronization. The author is very interested in implementations of PGAS that may allow further optimization for one-sided, read-only memory access (e.g. OpenSHMEM). The MPICH_RMA_OVER_DMAPP option and library is required to make one-sided messaging scale on Trinitite; Moonlight scales poorly. Interconnect-specific libraries or functions are likely necessary to ensure performance. BRANSON has been used to directly compare the current standard method to a proposed method on idealized problems. The mesh passing algorithm performs well on problems that are designed to show the scalability of the particle passing method. BRANSON can now run load-imbalanced, dynamic problems. Potential avenues of improvement in the mesh passing algorithm will be implemented and explored. A suite of test problems that stress DD methods will elucidate a possible path forward for production codes.
The MC21 Monte Carlo Transport Code
Sutton TM, Donovan TJ, Trumbull TH, Dobreff PS, Caro E, Griesheimer DP, Tyburski LJ, Carpenter DC, Joo H
2007-01-09
MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or "tool of last resort" and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities.
Automatic variance reduction for Monte Carlo simulations via the local importance function transform
Turner, S.A.
1996-02-01
The author derives a transformed transport problem that can be solved theoretically by analog Monte Carlo with zero variance. However, the Monte Carlo simulation of this transformed problem cannot be implemented in practice, so he develops a method for approximating it. The approximation to the zero-variance method consists of replacing the continuous adjoint transport solution in the transformed transport problem by a piecewise continuous approximation containing local biasing parameters obtained from a deterministic calculation. He uses the transport and collision processes of the transformed problem to bias distance-to-collision and selection of post-collision energy groups and trajectories in a traditional Monte Carlo simulation of "real" particles. He refers to the resulting variance reduction method as the Local Importance Function Transform (LIFT) method. He demonstrates the efficiency of the LIFT method for several 3-D, linearly anisotropic scattering, one-group, and multigroup problems. In these problems the LIFT method is shown to be more efficient than the AVATAR scheme, which is one of the best variance reduction techniques currently available in a state-of-the-art Monte Carlo code. For most of the problems considered, the LIFT method produces higher figures of merit than AVATAR, even when the LIFT method is used as a "black box". There are some problems that cause trouble for most variance reduction techniques, and the LIFT method is no exception. For example, the author demonstrates that problems with voids, or low density regions, can cause a reduction in the efficiency of the LIFT method. However, the LIFT method still performs better than survival biasing and AVATAR in these difficult cases.
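The underlying biasing idea can be illustrated generically (this is ordinary exponential-transform-style importance sampling of the free flight, not the LIFT method itself): sample the distance to collision from a stretched density and carry the likelihood ratio as a weight so the estimator stays unbiased.

```python
# Transmission through a purely absorbing slab: analog vs biased free-flight
# sampling. Parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(7)
sig, sig_b = 1.0, 0.4        # true and biased total cross sections (1/cm)
slab, n = 5.0, 100_000       # slab thickness; number of histories

def transmission(sigma_sample):
    """Estimate P(cross the slab); weights restore unbiasedness."""
    s = rng.exponential(1.0 / sigma_sample, n)     # sampled flight distances
    # weight = true pdf / sampling pdf, scored only for crossing particles
    w = np.where(s > slab,
                 (sig / sigma_sample) * np.exp(-(sig - sigma_sample) * s),
                 0.0)
    return w.mean(), w.std(ddof=1) / np.sqrt(n)

print("analog :", "%.3e +- %.1e" % transmission(sig))
print("biased :", "%.3e +- %.1e" % transmission(sig_b))
print("exact  :", "%.3e" % np.exp(-sig * slab))
```

Stretching the flight distribution pushes more histories through the slab, so the weighted estimator reaches the same answer with a smaller statistical error.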
Ma, Jianzhong; Amos, Christopher I; Warwick Daw, E
2007-09-01
Although extended pedigrees are often sampled through probands with extreme levels of a quantitative trait, Markov chain Monte Carlo (MCMC) methods for segregation and linkage analysis have not been able to perform ascertainment corrections. Further, the extent to which ascertainment of pedigrees leads to biases in the estimation of segregation and linkage parameters has not been previously studied for MCMC procedures. In this paper, we studied these issues with a Bayesian MCMC approach for joint segregation and linkage analysis, as implemented in the package Loki. We first simulated pedigrees ascertained through individuals with extreme values of a quantitative trait in the spirit of the sequential sampling theory of Cannings and Thompson [Cannings and Thompson [1977] Clin. Genet. 12:208-212]. Using our simulated data, we detected no bias in estimates of the trait locus location. However, in addition to allele frequencies, when the ascertainment threshold was higher than or close to the true value of the highest genotypic mean, bias was also found in the estimation of this parameter. When there were multiple trait loci, this bias destroyed the additivity of the effects of the trait loci, and caused biases in the estimation of all genotypic means when a purely additive model was used for analyzing the data. To account for pedigree ascertainment with sequential sampling, we developed a Bayesian ascertainment approach and implemented Metropolis-Hastings updates in the MCMC samplers used in Loki. Ascertainment correction greatly reduced biases in parameter estimates. Our method is designed for multiple, but a fixed number of, trait loci.
Gender bias in academic recruitment
Abramo, Giovanni; D’Angelo, Ciriaco Andrea; Rosati, Francesco
2016-01-01
It is well known that women are underrepresented in the academic systems of many countries. Gender discrimination is one of the factors that could contribute to this phenomenon. This study considers a recent national academic recruitment campaign in Italy, examining whether women are subject...... to more or less bias than men. The findings show that no gender-related differences occur among the candidates who benefit from positive bias, while among those candidates affected by negative bias, the incidence of women is lower than that of men. Among the factors that determine success in a competition...... for an academic position, the number of the applicant’s career years in the same university as the committee members assumes greater weight for male candidates than for females. Being of the same gender as the committee president is also a factor that assumes greater weight for male applicants. On the other hand...
Anchoring Bias in Online Voting
Yang, Zimo; Zhou, Tao
2012-01-01
Voting online with explicit ratings could largely reflect people's preferences and objects' qualities, but ratings are always irrational, because they may be affected by many unpredictable factors like mood, weather, as well as other people's votes. By analyzing two real systems, this paper reveals a systematic bias embedded in the individual decision-making processes, namely that people tend to give a low rating after a low rating, as well as a high rating following a high rating. This so-called anchoring bias is validated via extensive comparisons with null models, and numerically speaking, the extent of bias decays with interval voting number in a logarithmic form. Our findings could be applied in the design of recommender systems and considered as important complementary materials to previous knowledge about anchoring effects on financial trades, performance judgements, auctions, and so on.
Without Bias: A Guidebook for Nondiscriminatory Communication.
Pickens, Judy E., Ed.; And Others
This guidebook discusses ways to eliminate various types of discrimination from business communications. Separately authored chapters discuss eliminating racial and ethnic bias; eliminating sexual bias; achieving communication sensitive about handicaps of disabled persons; eliminating bias from visual media; eliminating bias from meetings,…
The Truth and Bias Model of Judgment
West, Tessa V.; Kenny, David A.
2011-01-01
We present a new model for the general study of how the truth and biases affect human judgment. In the truth and bias model, judgments about the world are pulled by 2 primary forces, the truth force and the bias force, and these 2 forces are interrelated. The truth and bias model differentiates force and value, where the force is the strength of…
Unpacking the Evidence of Gender Bias
Fulmer, Connie L.
2010-01-01
The purpose of this study was to investigate gender bias in pre-service principals using the Gender-Leader Implicit Association Test. Analyses of student-learning narratives revealed how students made sense of gender bias (biased or not-biased) and how each reacted to evidence (surprised or not-surprised). Two implications were: (1) the need for…
Measurement Bias Detection through Factor Analysis
Barendse, M. T.; Oort, F. J.; Werner, C. S.; Ligtvoet, R.; Schermelleh-Engel, K.
2012-01-01
Measurement bias is defined as a violation of measurement invariance, which can be investigated through multigroup factor analysis (MGFA), by testing across-group differences in intercepts (uniform bias) and factor loadings (nonuniform bias). Restricted factor analysis (RFA) can also be used to detect measurement bias. To also enable nonuniform…
Codon Pair Bias Is a Direct Consequence of Dinucleotide Bias
Dusan Kunec
2016-01-01
Codon pair bias is a remarkably stable characteristic of a species. Although functionally uncharacterized, robust virus attenuation was achieved by recoding of viral proteins using underrepresented codon pairs. Because viruses replicate exclusively inside living cells, we posited that their codon pair preferences reflect those of their host(s). Analysis of many human viruses showed, however, that the encoding of viruses is influenced only marginally by host codon pair preferences. Furthermore, examination of codon pair preferences of vertebrate, insect, and arthropod-borne viruses revealed that the latter do not utilize codon pairs overrepresented in arthropods more frequently than other viruses. We found, however, that codon pair bias is a direct consequence of dinucleotide bias. We conclude that codon pair bias does not play a major role in the encoding of viral proteins and that virus attenuation by codon pair deoptimization has the same molecular underpinnings as attenuation based on an increase in CpG/TpA dinucleotides.
The Threshold of Embedded M Collider Bias and Confounding Bias
Kelcey, Benjamin; Carlisle, Joanne
2011-01-01
Of particular import to this study is collider bias originating from stratification on pretreatment variables forming an embedded M or bowtie structural design. That is, rather than assume an M structural design, which suggests that "X" is a collider but not a confounder, the authors adopt what they consider to be a more reasonable…
A Glauber Monte Carlo Approach to Stock Market Dynamics
Castiglione, Filippo; Stauffer, Dietrich; Pandey, Ras
2001-03-01
A computer simulation model is used to study the evolution of the stock price and the distribution of price fluctuations. Effects of trading momentum and price resistance are considered by a Glauber Monte Carlo approach. The price resistance is described by an elastic energy $E_e = e \cdot x \cdot |x|$, while the momentum bias is $E_p = -b \cdot y$, where $x = (p(t) - p(0))/x_m$ and $y = (p(t+1) - p(t))/y_m$, with $p(t)$ the stock price at time $t$ and $x_m$, $y_m$ the maximum absolute values up to the current time step; $e$ and $b$ are the elastic and momentum bias coefficients. Trades are executed via the herding percolation model, with the number $n_s$ of trading groups depending on the group size $s$, i.e., $n_s = N/s^{\tau}$, where $N$ is the size of the market and $\tau = 2.5$. The probabilities to buy ($a/2$), sell ($a/2$), or remain inactive ($1-a$) depend on the activity $a$, with the execution probability to buy $W_b = e^{-E}/[1 + e^{-E}]$ and to sell $1 - W_b$. The distribution of price fluctuations $P(y)$ shows a long-time tail with a power law, $P(y) \sim y^{-\mu}$, $\mu \approx 4$ at $e = 1.0$, $b = 5$. The volatility autocorrelation function $c(\tau)$ shows reasonable (positive) behavior over several iterations.
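Read literally, the quoted formulas suggest a simulation loop like the following toy sketch (parameter choices and update order are my guesses, and the herding-percolation group structure is replaced by unit trades).

```python
# Toy Glauber-style price dynamics built from the abstract's energy terms.
import numpy as np

rng = np.random.default_rng(8)
e_coef, b_coef, a = 1.0, 5.0, 0.5
p = [100.0]
xm = ym = 1.0                               # running maxima; unit trades make 1 a safe floor

for t in range(10_000):
    x = (p[-1] - p[0]) / xm                 # normalized displacement from start
    y = (p[-1] - p[-2]) / ym if t > 0 else 0.0
    E = e_coef * x * abs(x) - b_coef * y    # E = E_e + E_p
    Wb = np.exp(-E) / (1 + np.exp(-E))      # probability an executed trade is a buy
    if rng.random() < a:                    # trader is active
        p.append(p[-1] + (1 if rng.random() < Wb else -1))
    else:
        p.append(p[-1])
    xm = max(xm, abs(p[-1] - p[0]))
    ym = max(ym, abs(p[-1] - p[-2]))

returns = np.diff(p)
print("std of price changes:", returns.std().round(3))
```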
Spike Inference from Calcium Imaging using Sequential Monte Carlo Methods
NeuroData; Paninski, L
2015-01-01
Vogelstein JT, Paninski L. Spike Inference from Calcium Imaging using Sequential Monte Carlo Methods. Statistical and Applied Mathematical Sciences Institute (SAMSI) Program on Sequential Monte Carlo Methods, 2008
Makovicka, L.; Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J. [Universite de Franche-Comte, Equipe IRMA/ENISYS/FEMTO-ST, UMR6174 CNRS, 25 - Montbeliard (France); Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J.; Salomon, M. [Universite de Franche-Comte, Equipe AND/LIFC, 90 - Belfort (France)
2009-01-15
Monte Carlo codes, precise but slow, are very important tools in the vast majority of specialities connected to Radiation Physics, Radiation Protection and Dosimetry. A discussion of some other computing solutions is carried out: solutions based not only on the enhancement of computer power, or on the 'biasing' used for the relative acceleration of these codes (in the case of photons), but on more efficient methods (A.N.N. - artificial neural networks, C.B.R. - case-based reasoning - or other computer science techniques) already and successfully used for a long time in other scientific or industrial applications, and not only in Radiation Protection or Medical Dosimetry. (authors)
Ratio Bias and Policy Preferences
Pedersen, Rasmus Tue
2016-01-01
Numbers permeate modern political communication. While current scholarship on framing effects has focused on the persuasive effects of words and arguments, this article shows that framing of numbers can also substantially affect policy preferences. Such effects are caused by ratio bias, which is ...
Perception bias in route choice
Vreeswijk, Jacob Dirk; Thomas, Tom; van Berkum, Eric C.; van Arem, Bart
2014-01-01
Travel time is probably one of the most studied attributes in route choice. Recently, perception of travel time received more attention as several studies have shown its importance in explaining route choice behavior. In particular, travel time estimates by travelers appear to be biased against
Attentional bias in math anxiety
Rubinsten, Orly; Eidlin, Hili; Wohl, Hadas; Akibli, Orly
2015-01-01
Cognitive theory from the field of general anxiety suggests that the tendency to display attentional bias toward negative information results in anxiety. Accordingly, the current study aims to investigate whether attentional bias is involved in math anxiety (MA) as well (i.e., a persistent negative reaction to math). Twenty seven participants (14 with high levels of MA and 13 with low levels of MA) were presented with a novel computerized numerical version of the well established dot probe task. One of six types of prime stimuli, either math related or typically neutral, was presented on one side of a computer screen. The prime was preceded by a probe (either one or two asterisks) that appeared in either the prime or the opposite location. Participants had to discriminate probe identity (one or two asterisks). Math anxious individuals reacted faster when the probe was at the location of the numerical related stimuli. This suggests the existence of attentional bias in MA. That is, for math anxious individuals, the cognitive system selectively favored the processing of emotionally negative information (i.e., math related words). These findings suggest that attentional bias is linked to unduly intense MA symptoms. PMID:26528208
Stereotype Formation : Biased by Association
Le Pelley, Mike E.; Reimers, Stian J.; Calvini, Guglielmo; Spears, Russell; Beesley, Tom; Murphy, Robin A.
2010-01-01
We propose that biases in attitude and stereotype formation might arise as a result of learned differences in the extent to which social groups have previously been predictive of behavioral or physical properties. Experiments 1 and 2 demonstrate that differences in the experienced predictiveness o
Sex Bias in Counseling Materials
Harway, Michele
1977-01-01
This article reviews findings of bias in counseling materials and presents results of three original studies. Indications are that textbooks used by practitioners present the sexes in stereotypical fashion, and a greater proportion of college catalog context is devoted to men than to women. (Author)
Measurement Bias in Multilevel Data
Jak, Suzanne; Oort, Frans J.; Dolan, Conor V.
2014-01-01
Measurement bias can be detected using structural equation modeling (SEM), by testing measurement invariance with multigroup factor analysis (Jöreskog, 1971;Meredith, 1993;Sörbom, 1974) MIMIC modeling (Muthén, 1989) or restricted factor analysis (Oort, 1992,1998). In educational research, data often
Monte Carlo approaches to light nuclei
Carlson, J.
1990-01-01
Significant progress has been made recently in the application of Monte Carlo methods to the study of light nuclei. We review new Green's function Monte Carlo results for the alpha particle, Variational Monte Carlo studies of {sup 16}O, and methods for low-energy scattering and transitions. Through these calculations, a coherent picture of the structure and electromagnetic properties of light nuclei has arisen. In particular, we examine the effect of the three-nucleon interaction and the importance of exchange currents in a variety of experimentally measured properties, including form factors and capture cross sections. 29 refs., 7 figs.
Monte carlo simulation for soot dynamics
Zhou, Kun
2012-01-01
A new Monte Carlo method termed Comb-like frame Monte Carlo is developed to simulate soot dynamics. A detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas-phase solver Chemkin II to simulate soot formation in a 1-D premixed burner-stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with measurements available in the literature. The origin of the bimodal particle size distribution is revealed with quantitative proof.
Lattice gauge theories and Monte Carlo simulations
Rebbi, Claudio
1983-01-01
This volume is the most up-to-date review on Lattice Gauge Theories and Monte Carlo Simulations. It consists of two parts. Part one is an introductory lecture on lattice gauge theories in general, Monte Carlo techniques and the results to date. Part two consists of important original papers in this field. These selected reprints cover the following topics: Lattice Gauge Theories; General Formalism and Expansion Techniques; Monte Carlo Simulations; Phase Structures; Observables in Pure Gauge Theories; Systems with Bosonic Matter Fields; and Simulation of Systems with Fermions.
Quantum Monte Carlo for minimum energy structures
Wagner, Lucas K
2010-01-01
We present an efficient method to find minimum energy structures using energy estimates from accurate quantum Monte Carlo calculations. This method involves a stochastic process formed from the stochastic energy estimates from Monte Carlo that can be averaged to find precise structural minima while using inexpensive calculations with moderate statistical uncertainty. We demonstrate the applicability of the algorithm by minimizing the energy of the H2O-OH- complex and showing that the structural minima from quantum Monte Carlo calculations affect the qualitative behavior of the potential energy surface substantially.
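The idea of averaging a stochastic process of noisy energy estimates can be caricatured with a noisy one-dimensional descent (my toy, not the authors' algorithm): small steps down a noisy gradient, followed by averaging the resulting trajectory, locate the minimum more precisely than any single noisy evaluation allows.

```python
# Noisy descent on a quadratic "energy surface" with iterate averaging.
import numpy as np

rng = np.random.default_rng(9)
x_min = 1.37                                  # "true" structural minimum

def noisy_grad(x, sigma=0.5):
    return 2.0 * (x - x_min) + sigma * rng.standard_normal()   # noisy dE/dx

x, path = 0.0, []
for k in range(5000):
    x -= 0.05 * noisy_grad(x)
    path.append(x)

tail = np.array(path[2500:])                  # discard transient, then average
# (the naive sqrt(n) error bar below is optimistic: the iterates are correlated)
print("averaged minimum: %.3f +- %.3f" % (tail.mean(), tail.std() / np.sqrt(len(tail))))
```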
Fast quantum Monte Carlo on a GPU
Lutsyshyn, Y
2013-01-01
We present a scheme for the parallelization of quantum Monte Carlo on graphical processing units, focusing on bosonic systems and variational Monte Carlo. We use asynchronous execution schemes with shared memory persistence, and obtain an excellent acceleration: compared with single-core execution, the GPU-accelerated code runs over 100× faster. The CUDA code is provided along with the package that is necessary to execute variational Monte Carlo for a system representing liquid helium-4. The program was benchmarked on several models of Nvidia GPU, including Fermi GTX560 and M2090, and the latest Kepler-architecture K20 GPU. Kepler-specific optimization is discussed.
High resolution and Monte Carlo additions to the SASKTRAN radiative transfer model
D. J. Zawada
2015-03-01
The OSIRIS instrument on board the Odin spacecraft has been measuring limb-scattered radiance since 2001. The vertical radiance profiles measured as the instrument nods are inverted, with the aid of the SASKTRAN radiative transfer model, to obtain vertical profiles of trace atmospheric constituents. Here we describe two newly developed modes of the SASKTRAN radiative transfer model: a high-spatial-resolution mode and a Monte Carlo mode. The high-spatial-resolution mode is a successive-orders model capable of modelling the multiply scattered radiance when the atmosphere is not spherically symmetric; the Monte Carlo mode is intended for use as a highly accurate reference model. It is shown that the two models agree in a wide variety of solar conditions to within 0.2%. As an example case for both models, Odin-OSIRIS scans were simulated with the Monte Carlo model and retrieved using the high-resolution model. A systematic bias of up to 4% in retrieved ozone number density between scans where the instrument is scanning up or scanning down was identified. It was found that calculating the multiply scattered diffuse field at five discrete solar zenith angles is sufficient to eliminate the bias for typical Odin-OSIRIS geometries.
Bias Adjusted Precipitation Threat Scores
F. Mesinger
2008-04-01
Among the wide variety of performance measures available for the assessment of skill of deterministic precipitation forecasts, the equitable threat score (ETS) might well be the one used most frequently. It is typically used in conjunction with the bias score. However, apart from its mathematical definition, the meaning of the ETS is not clear. It has been pointed out (Mason, 1989; Hamill, 1999) that forecasts with a larger bias tend to have a higher ETS. Even so, the present author has not seen this having been accounted for in any of numerous papers that in recent years have used the ETS along with bias "as a measure of forecast accuracy".
A method to adjust the threat score (TS) or the ETS so as to arrive at their values that correspond to unit bias, in order to show the model's or forecaster's accuracy in placing precipitation, has been proposed earlier by the present author (Mesinger and Brill), the so-called dH/dF method. A serious deficiency has since been noted with the dH/dF method, in that the hypothetical function it arrives at to interpolate or extrapolate the observed value of hits to unit bias can have values of hits greater than forecasts when the forecast area tends to zero. Another method is proposed here, based on the assumption that the increase in hits per unit increase in false alarms is proportional to the yet unhit area. This new method removes the deficiency of the dH/dF method. Examples of its performance for 12 months of forecasts by three NCEP operational models are given.
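For reference, the scores discussed above can be computed from a standard 2x2 contingency table; a small sketch with made-up counts follows. The unit-bias adjustment itself is specific to the paper and is not reproduced here.

```python
def precip_scores(hits, misses, false_alarms, correct_negatives):
    """Threat score, equitable threat score, and frequency bias from a 2x2
    contingency table of dichotomous precipitation forecasts."""
    n = hits + misses + false_alarms + correct_negatives
    ts = hits / (hits + misses + false_alarms)
    hits_random = (hits + misses) * (hits + false_alarms) / n   # chance hits
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    bias = (hits + false_alarms) / (hits + misses)              # forecast/observed area
    return ts, ets, bias

# Illustrative counts only
ts, ets, bias = precip_scores(hits=50, misses=30, false_alarms=60, correct_negatives=860)
print(f"TS={ts:.3f}  ETS={ets:.3f}  bias={bias:.2f}")
```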
Measurement of the $B^-$ lifetime using a simulation free approach for trigger bias correction
Aaltonen, T.; /Helsinki Inst. of Phys.; Adelman, J.; /Chicago U., EFI; Alvarez Gonzalez, B.; /Cantabria Inst. of Phys.; Amerio, S.; /INFN, Padua; Amidei, D.; /Michigan U.; Anastassov, A.; /Northwestern U.; Annovi, A.; /Frascati; Antos, J.; /Comenius U.; Apollinari, G.; /Fermilab; Appel, J.; /Fermilab; Apresyan, A.; /Purdue U. /Waseda U.
2010-04-01
The collection of a large number of B hadron decays to hadronic final states at the CDF II detector is possible due to the presence of a trigger that selects events based on track impact parameters. However, the nature of the selection requirements of the trigger introduces a large bias in the observed proper decay time distribution. A lifetime measurement must correct for this bias, and the conventional approach has been to use a Monte Carlo simulation. The leading sources of systematic uncertainty in the conventional approach are due to differences between the data and the Monte Carlo simulation. In this paper the authors present an analytic method for bias correction without using simulation, thereby removing any uncertainty between data and simulation. This method is presented in the form of a measurement of the B− lifetime using the mode B− → D0 π−. The B− lifetime is measured as τ(B−) = 1.663 ± 0.023 ± 0.015 ps, where the first uncertainty is statistical and the second systematic. This new method results in a smaller systematic uncertainty in comparison to methods that use simulation to correct for the trigger bias.
Deasy, Joseph O; Wickerhauser, M Victor; Picard, Mathieu
2002-10-01
The Monte Carlo dose calculation method works by simulating individual energetic photons or electrons as they traverse a digital representation of the patient anatomy. However, Monte Carlo results fluctuate until a large number of particles are simulated. We propose wavelet threshold de-noising as a postprocessing step to accelerate convergence of Monte Carlo dose calculations. A sampled rough function (such as Monte Carlo noise) gives wavelet transform coefficients which are more nearly equal in amplitude than those of a sampled smooth function. Wavelet hard-threshold de-noising sets to zero those wavelet coefficients which fall below a threshold; the image is then reconstructed. We implemented the computationally efficient 9,7-biorthogonal filters in the C language. Transform results were averaged over transform origin selections to reduce artifacts. A method for selecting best threshold values is described. The algorithm requires about 336 floating point arithmetic operations per dose grid point. We applied wavelet threshold de-noising to two two-dimensional dose distributions: a dose distribution generated by 10 MeV electrons incident on a water phantom with a step-heterogeneity, and a slice from a lung heterogeneity phantom. Dose distributions were simulated using the Integrated Tiger Series Monte Carlo code. We studied threshold selection, resulting dose image smoothness, and resulting dose image accuracy as a function of the number of source particles. For both phantoms, with a suitable value of the threshold parameter, voxel-to-voxel noise was suppressed with little introduction of bias. The roughness of wavelet de-noised dose distributions (according to a Laplacian metric) was nearly independent of the number of source electrons, though the accuracy of the de-noised dose image improved with increasing numbers of source electrons. We conclude that wavelet shrinkage de-noising is a promising method for effectively accelerating Monte Carlo dose calculations.
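A minimal sketch of wavelet hard-threshold de-noising in the spirit of the abstract is given below, using the PyWavelets package (assumed available; its 'bior4.4' wavelet implements a 9/7 biorthogonal filter pair). The synthetic dose slice, noise level, and threshold choice are all assumptions for illustration; the paper's threshold-selection method and transform-origin averaging are not reproduced.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(2)

# Synthetic stand-in for a noisy 2-D Monte Carlo dose slice
xx, yy = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
dose = np.exp(-4 * (xx**2 + yy**2))                 # smooth "true" dose
noisy = dose + 0.05 * rng.standard_normal(dose.shape)

coeffs = pywt.wavedec2(noisy, 'bior4.4', level=3)
thr = 3 * 0.05                        # threshold ~ k * noise sigma (sigma assumed known)
denoised_coeffs = [coeffs[0]] + [
    tuple(pywt.threshold(c, thr, mode='hard') for c in detail)
    for detail in coeffs[1:]
]
denoised = pywt.waverec2(denoised_coeffs, 'bior4.4')

print("RMS error before:", np.sqrt(np.mean((noisy - dose) ** 2)))
print("RMS error after: ", np.sqrt(np.mean((denoised - dose) ** 2)))
```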
Information environment, behavioral biases, and home bias in analysts’ recommendations
Farooq, Omar; Taouss, Mohammed
2012-01-01
Can the information environment of a firm explain home bias in analysts' recommendations? Can the extent of agency problems explain the optimism difference between foreign and local analysts? This paper answers these questions by documenting the effect of information environment on home bias in analysts' recommendations. Using a large dataset of analysts' recommendations from Asian emerging markets, we show that local analysts issue more optimistic recommendations than their foreign counterparts. However, the optimism difference between the two groups is greater for firms with a poor information environment: our results show that the difference is more than twice as large in firms with a poor information environment as in firms with a better information environment. We argue that a poor information environment poses greater information asymmetries to foreign analysts regarding local firms...
11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing
Nuyens, Dirk
2016-01-01
This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.
Film of the year - the animated film "Mont Blanc" / Verni Leivak
Leivak, Verni, 1966-
2002-01-01
The Estonian Film Journalists' Union (Eesti Filmiajakirjanike Ühing) awarded the title of best film of 2001 to Priit Tender's animated film "Mont Blanc" (Eesti Joonisfilm, 2001). Also covers film critics' preferences among the films shown in cinemas and on television in 2001.
Simulation and the Monte Carlo method
Rubinstein, Reuven Y
2016-01-01
Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov Chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...
To Monte Carlo despite the crashes / Aare Arula
Arula, Aare
2007-01-01
See also Tehnika dlja Vsehh no. 3, pp. 26-27. Karl Siitan and his crew, who set off from Tallinn on 26 January 1937 for the Monte Carlo rally, were met by adventures that very nearly cost them their lives.
Monte Carlo simulations for plasma physics
Okamoto, M.; Murakami, S.; Nakajima, N.; Wang, W.X. [National Inst. for Fusion Science, Toki, Gifu (Japan)
2000-07-01
Plasma behaviours are very complicated and the analyses are generally difficult. However, when collisional processes play an important role in the plasma behaviour, the Monte Carlo method is often employed as a useful tool. For example, in neutral beam injection heating (NBI heating), electron or ion cyclotron heating, and alpha heating, Coulomb collisions slow down highly energetic particles and pitch-angle scatter them. These processes are often studied by the Monte Carlo technique, and good agreement can be obtained with the experimental results. Recently, the Monte Carlo method has been developed to study fast particle transport associated with heating and generation of the radial electric field. Furthermore, it is applied to investigating neoclassical transport in plasmas with steep gradients of density and temperature, which is beyond the conventional neoclassical theory. In this report, we briefly summarize the research done by the present authors utilizing the Monte Carlo method. (author)
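The following sketch illustrates the kind of Monte Carlo collision operator described above: test particles are slowed down by Coulomb drag while their pitch undergoes a random walk generated by the Lorentz scattering operator. All rates and initial conditions are invented for illustration and do not correspond to any particular device.

```python
import numpy as np

rng = np.random.default_rng(3)

n_part, n_steps, dt = 5000, 2000, 1e-4
nu_s, nu_d = 5.0, 3.0            # assumed slowing-down and scattering rates (1/s)

v = np.ones(n_part)              # injected fast particles, normalized speed
xi = np.full(n_part, 0.9)        # initial pitch (v_par / v), e.g. tangential NBI

for _ in range(n_steps):
    v *= 1.0 - nu_s * dt                           # Coulomb drag slows particles down
    # Monte Carlo Lorentz pitch-angle scattering: d(xi) = -nu xi dt + sqrt((1-xi^2) nu) dW
    xi += -nu_d * xi * dt + rng.standard_normal(n_part) * np.sqrt(
        (1.0 - xi**2) * nu_d * dt)
    np.clip(xi, -1.0, 1.0, out=xi)

print("mean speed:", v.mean(), " mean pitch:", xi.mean())  # pitch isotropizes toward 0
```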
Predator trapping on Monte Vista NWR
US Fish and Wildlife Service, Department of the Interior — This letter summarizes the status of predator trapping on Monte Vista National Wildlife Refuge in light of the referendum passed in the State of Colorado banning...
Quantum Monte Carlo Calculations of Light Nuclei
Pieper, Steven C
2007-01-01
During the last 15 years, there has been much progress in defining the nuclear Hamiltonian and applying quantum Monte Carlo methods to the calculation of light nuclei. I describe both aspects of this work and some recent results.
Improved Monte Carlo Renormalization Group Method
Gupta, R.; Wilson, K. G.; Umrigar, C.
1985-01-01
An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.
Monte Carlo methods for particle transport
Haghighat, Alireza
2015-01-01
The Monte Carlo method has become the de facto standard in radiation transport. Although powerful, if not understood and used appropriately, the method can give misleading results. Monte Carlo Methods for Particle Transport teaches appropriate use of the Monte Carlo method, explaining the method's fundamental concepts as well as its limitations. Concise yet comprehensive, this well-organized text: * Introduces the particle importance equation and its use for variance reduction * Describes general and particle-transport-specific variance reduction techniques * Presents particle transport eigenvalue issues and methodologies to address these issues * Explores advanced formulations based on the author's research activities * Discusses parallel processing concepts and factors affecting parallel performance Featuring illustrative examples, mathematical derivations, computer algorithms, and homework problems, Monte Carlo Methods for Particle Transport provides nuclear engineers and scientists with a practical guide ...
Monte Vista NWR Water Use Report- 1964
US Fish and Wildlife Service, Department of the Interior — This report summarizes water use at Monte Vista NWR for 1964. The document includes summaries of 1964 water use, 1965 water program recommendations, and proposed...
Smart detectors for Monte Carlo radiative transfer
Baes, Maarten
2008-01-01
Many optimization techniques have been invented to reduce the noise that is inherent in Monte Carlo radiative transfer simulations. As the typical detectors used in Monte Carlo simulations do not take into account all the information contained in the impacting photon packages, there is still room to optimize this detection process and the corresponding estimate of the surface brightness distributions. We want to investigate how all the information contained in the distribution of impacting photon packages can be optimally used to decrease the noise in the surface brightness distributions and hence to increase the efficiency of Monte Carlo radiative transfer simulations. We demonstrate that the estimate of the surface brightness distribution in a Monte Carlo radiative transfer simulation is similar to the estimate of the density distribution in an SPH simulation. Based on this similarity, a recipe is constructed for smart detectors that take full advantage of the exact location of the impact of the photon pack...
Quantum Monte Carlo approaches for correlated systems
Becca, Federico
2017-01-01
Over the past several decades, computational approaches to studying strongly-interacting systems have become increasingly varied and sophisticated. This book provides a comprehensive introduction to state-of-the-art quantum Monte Carlo techniques relevant for applications in correlated systems. It gives a clear overview of variational wave functions and features a detailed presentation of stochastic samplings, including Markov chains and Langevin dynamics, which are developed into a discussion of Monte Carlo methods. The variational technique is described, from foundations to a detailed description of its algorithms. Further topics discussed include optimisation techniques, real-time dynamics and projection methods, including Green's function, reptation and auxiliary-field Monte Carlo, from basic definitions to advanced algorithms for efficient codes, and the book concludes with recent developments on the continuum space. Quantum Monte Carlo Approaches for Correlated Systems provides an extensive reference ...
Pheasant hunting on the Monte Vista NWR
US Fish and Wildlife Service, Department of the Interior — This letter to the Alamosa/Monte Vista NWR Refuge Manager discusses the need to alter management of pheasants in the area to halt the continued decline in population...
Ray, Robert L
2016-01-01
Two-particle correlation measurements and analysis are an important component of the relativistic heavy-ion physics program. In particular, particle pair-number correlations on two-dimensional transverse momentum ($p_t$) allow unique access to soft, semi-hard and hard-scattering processes in these collisions. Precise measurements of this type of correlation are essential for understanding the dynamics in heavy-ion collisions. However, transverse momentum correlation measurements are especially vulnerable to statistical and systematic biases. In this paper the origins of these large bias effects are explained and mathematical correlation forms are derived from mean-$p_t$ fluctuation quantities in the literature in an effort to minimize bias. Monte Carlo simulations are then used to test the degree to which each correlation definition leads to unbiased results in realistic applications. Several correlation forms are shown to be unacceptable for data analysis applications while several others are shown to reprod...
Twins like to be seen: Observational biases affecting spectroscopically selected binary stars
Cantrell, Andrew G
2014-01-01
Massive binary stars undergo qualitatively different evolution when the two components are similar in mass ('twins'), and the abundance of twin binaries is therefore important to understanding a wide range of astrophysical phenomena. We reconsider the results of Pinsonneault & Stanek (2006), who argue that a large proportion of binary stars have nearly equal-mass components; we find that their data imply a relatively small number of such 'twins.' We argue that samples of double-lined spectroscopic binaries are biased towards systems with nearly equal-brightness components. We present a Monte-Carlo model of this bias, which simultaneously explains the abundance of twins in the unevolved binaries of Pinsonneault & Stanek (2006), and the lack of twins in their evolved systems. After accounting for the bias, we find that their observed mass ratios may be consistent with a variety of intrinsic distributions, including either a flat distribution or a Salpeter distribution. We conclude that the observed over...
Bartalini, P.; Kryukov, A.; Selyuzhenkov, Ilya V.; Sherstnev, A.; Vologdin, A.
2004-01-01
We present the Monte-Carlo events Data Base (MCDB) project and its development plans. MCDB facilitates communication between authors of Monte-Carlo generators and experimental users. It also provides convenient book-keeping and easy access to generator-level samples. The first release of MCDB is now operational for the CMS collaboration. In this paper we review the main ideas behind MCDB and discuss future plans to develop this data base further within the CERN LCG framework.
Monte Carlo Algorithms for Linear Problems
DIMOV, Ivan
2000-01-01
MSC Subject Classification: 65C05, 65U05. Monte Carlo methods are a powerful tool in many fields of mathematics, physics, and engineering. It is known that these methods give statistical estimates for the functional of the solution by performing random sampling of a certain random variable whose mathematical expectation is the desired functional. Monte Carlo methods are methods for solving problems using random variables. In the book [16] edited by Yu. A. Shreider one can find the followin...
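A concrete instance of such an estimator is the classic von Neumann-Ulam random-walk scheme for a linear system x = Hx + b, sketched below under the assumption that the Neumann series for H converges. The walk scores b at every visited state and carries an importance weight that keeps the estimate unbiased; the matrix and source term here are invented test data.

```python
import numpy as np

rng = np.random.default_rng(4)

def mc_solve_component(H, b, i, n_walks=200_000, q=0.3):
    """Von Neumann-Ulam random-walk estimate of component x_i of x = H x + b."""
    n = len(b)
    total = 0.0
    for _ in range(n_walks):
        state, w = i, 1.0
        while True:
            total += w * b[state]
            if rng.random() < q:               # absorb: walk terminates
                break
            nxt = rng.integers(n)              # uniform transition to the next state
            w *= H[state, nxt] * n / (1.0 - q) # importance weight keeps it unbiased
            state = nxt
    return total / n_walks

H = np.array([[0.1, 0.2, 0.1],
              [0.0, 0.3, 0.1],
              [0.2, 0.1, 0.2]])
b = np.array([1.0, 2.0, 3.0])
x_exact = np.linalg.solve(np.eye(3) - H, b)
print("MC    x_0 =", mc_solve_component(H, b, 0))
print("exact x_0 =", x_exact[0])
```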
The Feynman Path Goes Monte Carlo
Sauer, Tilman
2001-01-01
Path integral Monte Carlo (PIMC) simulations have become an important tool for the investigation of the statistical mechanics of quantum systems. I discuss some of the history of applying the Monte Carlo method to non-relativistic quantum systems in path-integral representation. The feasibility of the method was, in principle, well established by the early eighties, and a number of algorithmic improvements have been introduced in the last two decades.
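A bare-bones path-integral Monte Carlo loop, of the kind whose feasibility the article discusses, is sketched below for a single particle in a harmonic well (an assumed toy system, m = hbar = omega = 1). Single-slice Metropolis moves sample paths that are periodic in imaginary time, and the sampled <x^2> is compared against the known continuum result; the small Trotter error from the finite number of slices is ignored.

```python
import numpy as np

rng = np.random.default_rng(5)

beta, M = 4.0, 64                 # inverse temperature, number of time slices
tau = beta / M
path = np.zeros(M)
delta = 0.5
x2_sum, n_meas = 0.0, 0

def slice_action(x_prev, x, x_next):
    # kinetic (spring) terms linking neighbours + potential term of this slice
    return ((x - x_prev) ** 2 + (x_next - x) ** 2) / (2 * tau) + tau * 0.5 * x**2

for sweep in range(20_000):
    for k in range(M):
        old, new = path[k], path[k] + delta * (rng.random() - 0.5)
        prev_, next_ = path[(k - 1) % M], path[(k + 1) % M]  # periodic in imaginary time
        if rng.random() < np.exp(slice_action(prev_, old, next_)
                                 - slice_action(prev_, new, next_)):
            path[k] = new
    if sweep > 2000:
        x2_sum += np.mean(path**2)
        n_meas += 1

exact = 0.5 / np.tanh(beta / 2)   # continuum <x^2> for the harmonic oscillator
print(f"PIMC <x^2> = {x2_sum / n_meas:.3f}   exact = {exact:.3f}")
```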
Monte Carlo Hamiltonian: Inverse Potential
LUO Xiang-Qian; CHENG Xiao-Ni; Helmut KRÖGER
2004-01-01
The Monte Carlo Hamiltonian method developed recently makes it possible to investigate the ground state and low-lying excited states of a quantum system, using a Monte Carlo (MC) algorithm with importance sampling. However, the conventional MC algorithm has some difficulties when applied to inverse potentials. We propose to use an effective potential and an extrapolation method to solve the problem. We present examples from the hydrogen system.
Self-consistent kinetic lattice Monte Carlo
Horsfield, A.; Dunham, S.; Fujitani, Hideaki
1999-07-01
The authors present a brief description of a formalism for modeling point defect diffusion in crystalline systems using a Monte Carlo technique. The main approximations required to construct a practical scheme are briefly discussed, with special emphasis on the proper treatment of charged dopants and defects. This is followed by tight binding calculations of the diffusion barrier heights for charged vacancies. Finally, an application of the kinetic lattice Monte Carlo method to vacancy diffusion is presented.
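A toy kinetic lattice Monte Carlo loop in the spirit of the formalism described above is sketched below: a single vacancy hops on a ring, hop rates follow an Arrhenius law with hypothetical barrier heights, and physical time advances with Gillespie-style exponential increments. The charged-defect treatment and tight-binding barriers of the paper are well beyond this sketch.

```python
import numpy as np

rng = np.random.default_rng(6)

kT = 0.05                                        # thermal energy (eV), assumed
barriers = rng.uniform(0.20, 0.30, size=100)     # hypothetical barriers per site (eV)
nu0 = 1e13                                       # attempt frequency (1/s), assumed

site, t = 0, 0.0
for _ in range(10_000):
    left, right = (site - 1) % 100, (site + 1) % 100
    rates = nu0 * np.exp(-np.array([barriers[left], barriers[right]]) / kT)
    total = rates.sum()
    # choose the event proportionally to its rate, then advance time a la Gillespie
    site = left if rng.random() < rates[0] / total else right
    t += -np.log(rng.random()) / total

print(f"vacancy at site {site} after simulated time {t:.3e} s")
```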
Error in Monte Carlo, quasi-error in Quasi-Monte Carlo
Kleiss, R H
2006-01-01
While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction of an estimator of stochastic nature, based on the ensemble of pointsets with a particular discrepancy value. We investigate the consequences of this choice and give some first empirical results on the suggested estimators.
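The contrast between the two error behaviors, and the idea of estimating quasi-Monte Carlo error from an ensemble, can be illustrated with scrambled Sobol points. This is a randomization-based stand-in for the discrepancy-ensemble estimator the authors propose, not their construction, and it assumes SciPy's qmc module is available.

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(7)
d, n = 5, 2**12
exact = (np.e - 1) ** d                      # integral of exp(x1+...+xd) over [0,1]^d

def f(x):
    return np.exp(x.sum(axis=1))

mc_est = f(rng.random((n, d))).mean()        # plain Monte Carlo

sobol = qmc.Sobol(d=d, scramble=True, seed=7)
qmc_est = f(sobol.random(n)).mean()          # scrambled Sobol quasi-Monte Carlo

# Scrambling makes QMC stochastic, so its error can be estimated from
# independent replicates, in the spirit of an ensemble-based estimator
reps = [f(qmc.Sobol(d=d, scramble=True, seed=s).random(n)).mean() for s in range(16)]
print(f"MC error  : {abs(mc_est - exact):.2e}")
print(f"QMC error : {abs(qmc_est - exact):.2e}")
print(f"QMC spread over replicates: {np.std(reps):.2e}")
```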
Types of Research Bias Encountered in IR.
Gabr, Ahmed; Kallini, Joseph Ralph; Desai, Kush; Hickey, Ryan; Thornburg, Bartley; Kulik, Laura; Lewandowski, Robert J; Salem, Riad
2016-04-01
Bias is a systematic error in studies that leads to inaccurate deductions. Relevant biases in the field of IR and interventional oncology were identified after reviewing articles published in the Journal of Vascular and Interventional Radiology and CardioVascular and Interventional Radiology. Biases cited in these articles were divided into three categories: preinterventional (health care access, participation, referral, and sample biases), periinterventional (contamination, investigator, and operator biases), and postinterventional (guarantee-time, lead time, loss to follow-up, recall, and reporting biases).
Probability biases as Bayesian inference
André C. R. Martins
2006-11-01
In this article, I will show how several observed biases in human probabilistic reasoning can be partially explained as good heuristics for making inferences in an environment where probabilities have uncertainties associated to them. Previous results show that the weight functions and the observed violations of coalescing and stochastic dominance can be understood from a Bayesian point of view. We will review those results and see that Bayesian methods should also be used as part of the explanation behind other known biases. That means that, although the observed errors are still errors, they can be understood as adaptations to the solution of real life problems. Heuristics that allow fast evaluations and mimic a Bayesian inference would be an evolutionary advantage, since they would give us an efficient way of making decisions. In that sense, it should be no surprise that humans reason with probability as it has been observed.
Belief bias and relational reasoning.
Roberts, Maxwell J; Sykes, Elizabeth D A
2003-01-01
When people evaluate categorical syllogisms, they tend to reject unbelievable conclusions and accept believable ones irrespective of their validity. Typically, this effect is particularly marked for invalid conclusions that are possible, but do not necessarily follow, given the premises. However, smaller believability effects can also be detected for other types of conclusion. Three experiments are reported here, in which an attempt was made to determine whether belief bias effects can manifest themselves on the relational inference task. Subjects evaluated the validity of conclusions such as "William the Conqueror was king after the Pyramids were built" (temporal task) or "Manchester is north of Bournemouth" (spatial task) with respect to their premises. All of the major findings for equivalent categorical syllogism tasks were replicated. However, the overall size of the main effect of believability appears to be related to task presentation, a phenomenon not previously identified for categorical syllogisms and which current theories of belief bias have difficulty explaining.
Mindfulness reduces the correspondence bias.
Hopthrow, Tim; Hooper, Nic; Mahmood, Lynsey; Meier, Brian P; Weger, Ulrich
2017-03-01
The correspondence bias (CB) refers to the idea that people sometimes give undue weight to dispositional rather than situational factors when explaining behaviours and attitudes. Three experiments examined whether mindfulness, a non-judgmental focus on the present moment, could reduce the CB. Participants engaged in a brief mindfulness exercise (the raisin task), a control task, or an attention to detail task before completing a typical CB measure involving an attitude-attribution paradigm. The results indicated that participants in the mindfulness condition experienced a significant reduction in the CB compared to participants in the control or attention to detail conditions. These results suggest that mindfulness training can play a unique role in reducing social biases related to person perception.
Opinion Dynamics with Confirmation Bias
Allahverdyan, A E
2014-01-01
Background: Confirmation bias is the tendency to acquire or evaluate new information in a way that is consistent with one's preexisting beliefs. It is omnipresent in psychology, economics, and even scientific practices. Prior theoretical research of this phenomenon has mainly focused on its economic implications, possibly missing its potential connections with broader notions of cognitive science. Methodology/Principal Findings: We formulate a (non-Bayesian) model for revising the subjective probabilistic opinion of a confirmationally-biased agent in the light of a persuasive opinion. The revision rule ensures that the agent does not react to persuasion that is either far from his current opinion or coincides with it. We demonstrate that the model accounts for the basic phenomenology of the social judgment theory, and allows one to study various phenomena such as cognitive dissonance and boomerang effect. The model also displays the order of presentation effect: when consecutively exposed to two opinions, the preferenc...
Are temperature reconstructions regionally biased?
Bothe, O
2012-01-01
Are temperature reconstructions possibly biased due to regionally differing density of utilized proxy-networks? This question is assessed utilizing a simple process-based forward model of tree growth in the virtual reality of two simulations of the climate of the last millennium with different amplitude of solar forcing variations. The pseudo-tree ring series cluster in high latitudes of the northern hemisphere and east Asia. Only weak biases are found for the full network. However, for a strong solar forcing amplitude the high latitudes indicate a warmer first half of the last millennium while mid-latitudes and Asia were slightly colder than the extratropical hemispheric average. Reconstruction skill is weak or non-existent for two simple reconstruction schemes, and comparison of virtual reality target and reconstructions reveals strong deficiencies. The temporal resolution of the proxies has an influence on the reconstruction task and results are sensitive to the construction of the proxy-network. Existing ...
Competition and Commercial Media Bias
A. Blasco; F. Sobbrio
2011-01-01
This paper reviews the empirical evidence on commercial media bias (i.e., advertisers' influence over media accuracy) and then introduces a simple model to summarize the main elements of the theoretical literature. The analysis provides three main policy insights for media regulators: i) Media regulators should target their monitoring efforts towards news contents upon which advertisers are likely to share similar preferences; ii) In advertising industries characterized by high correlation in ...
BEHAVIORAL BIASES IN TRADING SECURITIES
Turcan Ciprian Sebastian
2010-12-01
The main thesis of this paper is the importance and the effects that human behavior has on capital markets. It is important to see the link between asset valuation and the investor sentiment that motivates investors to pay prices above or below an asset's intrinsic value. The main behavioral aspects discussed are emotional factors such as fear of regret, overconfidence, perseverance, loss aversion, heuristic biases, misinformation and thinking errors, herding, and their consequences.
Measuring bias from unbiased observable
Lee, Seokcheon
2014-01-01
Since Kaiser introduced galaxies as a biased tracer of the underlying total mass field, the linear galaxy bias b(z) appears ubiquitously both in theoretical calculations and in observational measurements related to galaxy surveys. However, in generic approaches the galaxy density is a non-local and stochastic function of the underlying dark matter density, and it becomes difficult to obtain an analytic form for b(z). For this reason, b(z) is treated as a nuisance parameter, and effort has been made to measure bias-free observable quantities. We provide an exact and analytic function for b(z), which can also be measured from galaxy surveys using the redshift space distortion parameters, more precisely the unbiased observable β σ_gal = f σ_8. We also introduce approximate solutions for b(z) for different gravity theories. One can generalize these approximate solutions to be exact when one solves the exact evolutions for the dark matter density fluctuation of given gravity theories. These...
Response bias in plaintiffs' histories.
Lees-Haley, P R; Williams, C W; Zasler, N D; Marguilies, S; English, L T; Stevens, K B
1997-11-01
This study investigated response bias in self-reported history of factors relevant to the assessment of traumatic brain injury, toxic brain injury, and related emotional distress. Response bias refers to systematic error in self-report data. A total of 446 subjects (comprising 131 litigating and 315 non-litigating adults from five locations in the United States) completed a symptom questionnaire. Data were obtained from university faculty and students, from patients in clinics specializing in physiatry, neurology, and family medicine, and from plaintiffs undergoing forensic neuropsychological evaluations. Comparisons were made for litigant and non-litigant ratings of their past and current cognitive and emotional functioning, including life in general, ability to concentrate, memory, depression, anxiety, alcohol, drugs, ability to work or attend school, irritability, headaches, confusion, self-esteem, and fatigue. Although there is no basis for hypothesizing plaintiffs to be healthier than the general population, plaintiffs rated their pre-injury functioning as superior to that of non-plaintiffs. These findings suggest that response biases need to be taken into account by forensic examiners when relying on litigants' self-reports of pre-injury status.
Opinion dynamics with confirmation bias.
Allahverdyan, Armen E; Galstyan, Aram
2014-01-01
Confirmation bias is the tendency to acquire or evaluate new information in a way that is consistent with one's preexisting beliefs. It is omnipresent in psychology, economics, and even scientific practices. Prior theoretical research of this phenomenon has mainly focused on its economic implications, possibly missing its potential connections with broader notions of cognitive science. We formulate a (non-Bayesian) model for revising the subjective probabilistic opinion of a confirmationally-biased agent in the light of a persuasive opinion. The revision rule ensures that the agent does not react to persuasion that is either far from his current opinion or coincides with it. We demonstrate that the model accounts for the basic phenomenology of the social judgment theory, and allows one to study various phenomena such as cognitive dissonance and boomerang effect. The model also displays the order of presentation effect, whereby the preference is given to the last opinion (recency) or the first opinion (primacy) when an agent is consecutively exposed to two opinions, and relates recency to confirmation bias. Finally, we study the model in the case of repeated persuasion and analyze its convergence properties. The standard Bayesian approach to probabilistic opinion revision is inadequate for describing the observed phenomenology of the persuasion process. The simple non-Bayesian model proposed here does agree with this phenomenology and is capable of reproducing a spectrum of effects observed in psychology: the primacy-recency phenomenon, the boomerang effect and cognitive dissonance. We point out several limitations of the model that should motivate its future development.
Opinion dynamics with confirmation bias.
Armen E Allahverdyan
Confirmation bias is the tendency to acquire or evaluate new information in a way that is consistent with one's preexisting beliefs. It is omnipresent in psychology, economics, and even scientific practices. Prior theoretical research of this phenomenon has mainly focused on its economic implications, possibly missing its potential connections with broader notions of cognitive science. We formulate a (non-Bayesian) model for revising the subjective probabilistic opinion of a confirmationally-biased agent in the light of a persuasive opinion. The revision rule ensures that the agent does not react to persuasion that is either far from his current opinion or coincides with it. We demonstrate that the model accounts for the basic phenomenology of the social judgment theory, and allows one to study various phenomena such as cognitive dissonance and boomerang effect. The model also displays the order of presentation effect, whereby the preference is given to the last opinion (recency) or the first opinion (primacy) when an agent is consecutively exposed to two opinions, and relates recency to confirmation bias. Finally, we study the model in the case of repeated persuasion and analyze its convergence properties. The standard Bayesian approach to probabilistic opinion revision is inadequate for describing the observed phenomenology of the persuasion process. The simple non-Bayesian model proposed here does agree with this phenomenology and is capable of reproducing a spectrum of effects observed in psychology: the primacy-recency phenomenon, the boomerang effect and cognitive dissonance. We point out several limitations of the model that should motivate its future development.
Mapping systematic errors in helium abundance determinations using Markov Chain Monte Carlo
Aver, Erik; Skillman, Evan D
2010-01-01
Monte Carlo techniques have been used to evaluate the statistical and systematic uncertainties in the helium abundances derived from extragalactic H~II regions. The helium abundance is sensitive to several physical parameters associated with the H~II region. In this work, we introduce Markov Chain Monte Carlo (MCMC) methods to efficiently explore the parameter space and determine the helium abundance, the physical parameters, and the uncertainties derived from observations of metal poor nebulae. Experiments with synthetic data show that the MCMC method is superior to previous implementations (based on flux perturbation) in that it is not affected by biases due to non-physical parameter space. The MCMC analysis allows a detailed exploration of degeneracies, and, in particular, a false minimum that occurs at large values of optical depth in the He~I emission lines. We demonstrate that introducing the electron temperature derived from the [O~III] emission lines as a prior, in a very conservative manner, produces...
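A minimal Metropolis MCMC sampler with a Gaussian prior, illustrating the mechanics used in such analyses (the toy inference problem, prior, and all parameters below are assumptions, not the helium abundance model of the paper):

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy analogue: infer a parameter theta from noisy data with a Gaussian prior,
# the prior playing the role the [O III] temperature prior plays in the abstract
data = 2.5 + 0.3 * rng.standard_normal(40)

def log_post(theta, prior_mu=2.0, prior_sd=1.0):
    log_like = -0.5 * np.sum((data - theta) ** 2) / 0.3**2
    log_prior = -0.5 * ((theta - prior_mu) / prior_sd) ** 2
    return log_like + log_prior

chain, theta = [], 0.0
lp = log_post(theta)
for _ in range(50_000):
    prop = theta + 0.1 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:      # Metropolis acceptance
        theta, lp = prop, lp_prop
    chain.append(theta)

samples = np.array(chain[5000:])                 # drop burn-in
print(f"posterior mean = {samples.mean():.3f} +/- {samples.std():.3f}")
```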
RUDOLF STOCKAR
2010-07-01
A newly opened excavation in the Cassina beds of the Lower Meride Limestone (Monte San Giorgio UNESCO WHL, Canton Ticino, Southern Alps) has yielded a small collection of Ladinian plant fossils, together with vertebrate (mostly fish) and invertebrate remains. The flora contains at least five species; conifer remains assignable to the genera Elatocladus, Voltzia and ?Pelourdea are the most common elements. A new species, Elatocladus cassinae n. sp., is formally described. Co-occurring with the conifers are seed ferns (Ptilozamites) and a few putative cycadalean remains (?Taeniopteris). Among the identified genera, only Voltzia has previously been reported from Monte San Giorgio. The fossils presented in this paper indicate that a diversified flora thrived in the region during the Ladinian. Floral composition and preservation patterns are suggestive of a taphonomically-biased record and a relatively far-away source area.
Fung, Wing K; Yu, Kexin; Yang, Yingrui; Zhou, Ji-Yuan
2016-08-08
Monte Carlo evaluation of resampling-based tests is often conducted in statistical analysis. However, this procedure is generally computationally intensive. The pooling resampling-based method has been developed to reduce the computational burden, but the validity of the method has not been studied before. In this article, we first investigate the asymptotic properties of the pooling resampling-based method and then propose a novel Monte Carlo evaluation procedure, namely the n-times pooling resampling-based method. Theorems as well as simulations show that the proposed method can give smaller or comparable root mean squared errors and bias with much less computing time, and thus can be strongly recommended, especially for evaluating highly computationally intensive hypothesis testing procedures in genetic epidemiology.
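To see why such evaluations are computationally intensive, note the nesting involved: each Monte Carlo replicate requires a full resampling-based test. A small sketch, using a two-sample permutation test as the (assumed) resampling procedure, estimates the test's type I error; the paper's pooling idea, which reuses resamples across replicates, is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(9)

def perm_pvalue(x, y, n_perm=500):
    """Two-sample permutation test p-value for a difference in means."""
    obs = x.mean() - y.mean()
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(pooled[:len(x)].mean() - pooled[len(x):].mean()) >= abs(obs):
            count += 1
    return (count + 1) / (n_perm + 1)

# Monte Carlo evaluation of the test's type I error under the null hypothesis:
# this replicates-times-permutations nesting is what makes such studies expensive
n_rep, alpha = 400, 0.05
rejections = sum(perm_pvalue(rng.standard_normal(20), rng.standard_normal(20)) < alpha
                 for _ in range(n_rep))
print(f"estimated type I error: {rejections / n_rep:.3f} (nominal {alpha})")
```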
MUSiC A General Search for Deviations from Monte Carlo Predictions in CMS
Biallass, Philipp
2009-01-01
A model independent analysis approach in CMS is presented, systematically scanning the data for deviations from the Monte Carlo expectation. Such an analysis can contribute to the understanding of the detector and the tuning of the event generators. Furthermore, due to the minimal theoretical bias this approach is sensitive to a variety of models of new physics, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. The importance of systematic uncertainties is outlined, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving Supersymmetry and new heavy gauge bosons are used as an input to the search algorithm.
MUSIC -- An Automated Scan for Deviations between Data and Monte Carlo Simulation
CMS Collaboration
We present a model independent analysis approach, systematically scanning the data for deviations from the Monte Carlo expectation. Such an analysis can contribute to the understanding of the detector and the tuning of the event generators. Due to the minimal theoretical bias this approach is sensitive to a variety of models, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. We outline the importance of systematic uncertainties, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving supersymmetry and new heavy gauge bosons have been used as an input to the search algorithm.
MUSiC - A general search for deviations from monte carlo predictions in CMS
Biallass, Philipp A.; CMS Collaboration
2009-06-01
A model independent analysis approach in CMS is presented, systematically scanning the data for deviations from the Monte Carlo expectation. Such an analysis can contribute to the understanding of the detector and the tuning of the event generators. Furthermore, due to the minimal theoretical bias this approach is sensitive to a variety of models of new physics, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. The importance of systematic uncertainties is outlined, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving Supersymmetry and new heavy gauge bosons are used as an input to the search algorithm.
MUSiC - A Generic Search for Deviations from Monte Carlo Predictions in CMS
Hof, Carsten
2009-05-01
We present a model independent analysis approach, systematically scanning the data for deviations from the Standard Model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of the event generators. Furthermore, due to the minimal theoretical bias this approach is sensitive to a variety of models of new physics, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. We outline the importance of systematic uncertainties, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving supersymmetry and new heavy gauge bosons have been used as an input to the search algorithm.
MUSiC - A general search for deviations from Monte Carlo predictions in CMS
Biallass, Philipp A, E-mail: biallass@cern.c [Physics Institute IIIA, RWTH Aachen, Physikzentrum, 52056 Aachen (Germany)
2009-06-01
A model independent analysis approach in CMS is presented, systematically scanning the data for deviations from the Monte Carlo expectation. Such an analysis can contribute to the understanding of the detector and the tuning of the event generators. Furthermore, due to the minimal theoretical bias this approach is sensitive to a variety of models of new physics, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. The importance of systematic uncertainties is outlined, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving Supersymmetry and new heavy gauge bosons are used as an input to the search algorithm.
Acceleration of the Monte Carlo EM Algorithm
Luo Ji
2008-01-01
The EM algorithm is a data augmentation algorithm that has been widely used in recent years to estimate the posterior mode. However, obtaining an explicit expression for the integral in its E-step is sometimes difficult or even impossible, which limits the breadth of its applications. The Monte Carlo EM algorithm solves this problem well by evaluating the E-step integral with Monte Carlo simulation, which greatly enhances the algorithm's applicability. However, both the EM algorithm and the Monte Carlo EM algorithm converge only linearly, at a rate governed by the fraction of missing information; when the proportion of missing data is high, convergence is very slow. The Newton-Raphson algorithm, in contrast, converges quadratically near the posterior mode. This paper proposes an accelerated Monte Carlo EM algorithm that combines the Monte Carlo EM algorithm with the Newton-Raphson algorithm: the E-step is still carried out by Monte Carlo simulation, and the algorithm is proved to converge quadratically near the posterior mode. It thus retains the advantages of the Monte Carlo EM algorithm while improving its convergence rate. Numerical examples compare the results of the accelerated Monte Carlo EM algorithm with those of the EM algorithm and the Monte Carlo EM algorithm, further demonstrating its superior performance.
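A minimal Monte Carlo EM iteration for an assumed toy problem (the mean of right-censored normal data) shows the structure described above: the intractable E-step expectation is replaced by an average over simulated imputations, followed by the usual M-step. The Newton-Raphson acceleration of the paper is not included.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(10)

# Monte Carlo EM for the mean of N(mu, 1) data right-censored at c (toy example)
c, mu_true = 1.0, 0.8
z = mu_true + rng.standard_normal(500)
observed = z[z <= c]                 # values we see exactly
n_cens = np.sum(z > c)               # for these we only know they exceeded c

mu = 0.0
for it in range(30):
    # E-step by Monte Carlo: impute censored values from N(mu,1) truncated to (c, inf)
    imputed = truncnorm.rvs(a=c - mu, b=np.inf, loc=mu, scale=1.0,
                            size=(200, n_cens), random_state=rng)
    # M-step: complete-data MLE of mu, averaged over the simulated imputations
    mu = (observed.sum() + imputed.sum(axis=1).mean()) / (len(observed) + n_cens)

print(f"MCEM estimate of mu: {mu:.3f} (true {mu_true})")
```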
Matrilateral Bias in Human Grandmothering
Martin Daly
2017-09-01
Children receive more care and resources from their maternal grandmothers than from their paternal grandmothers. This asymmetry is the “matrilateral bias” in grandmaternal investment. Here, we synopsize the evolutionary theories that predict such a bias, and review evidence of its cross-cultural generality and magnitude. Evolutionists have long maintained that investing in a daughter’s child yields greater fitness returns, on average, than investing in a son’s child because of paternity uncertainty: the son’s putative progeny may have been sired by someone else. Recent theoretical work has identified an additional natural selective basis for the matrilateral bias that may be no less important: supporting grandchildren lightens the load on their mother, increasing her capacity to pursue her fitness in other ways, and if she invests those gains either in her natal relatives or in children of a former or future partner, fitness returns accrue to the maternal, but not the paternal, grandmother. In modern democracies, where kinship is reckoned bilaterally and no postmarital residence norms restrict grandmaternal access to grandchildren, many studies have found large matrilateral biases in contact, childcare, and emotional closeness. In other societies, patrilineal ideology and postmarital residence with the husband’s kin (virilocality) might be expected to have produced a patrilateral bias instead, but the available evidence refutes this hypothesis. In hunter-gatherers, regardless of professed norms concerning kinship and residence, mothers get needed help at and after childbirth from their mothers, not their mothers-in-law. In traditional agricultural and pastoral societies, patrilineal and virilocal norms are common, but young mothers still turn to their natal families for crucial help, and several studies have documented benefits, including reduced child mortality, associated with access to maternal, but not paternal, grandmothers. Even
Bias-correction in vector autoregressive models
Engsted, Tom; Pedersen, Thomas Quistgaard
2014-01-01
We analyze the properties of various methods for bias-correcting parameter estimates in both stationary and non-stationary vector autoregressive models. First, we show that two analytical bias formulas from the existing literature are in fact identical. Next, based on a detailed simulation study, we show that when the model is stationary this simple bias formula compares very favorably to bootstrap bias-correction, both in terms of bias and mean squared error. In non-stationary models, the analytical bias formula performs noticeably worse than bootstrapping. Both methods yield a notable improvement over ordinary least squares. We pay special attention to the risk of pushing an otherwise stationary model into the non-stationary region of the parameter space when correcting for bias. Finally, we consider a recently proposed reduced-bias weighted least squares estimator, and we find...
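For the univariate AR(1) special case, the flavor of analytical bias correction can be illustrated with the classic Kendall first-order formula E[rho_hat] ~ rho - (1 + 3 rho)/T, used here as a stand-in for the VAR formulas analyzed in the paper; all simulation settings are assumed.

```python
import numpy as np

rng = np.random.default_rng(11)

def ols_rho(x):
    """OLS estimate of the AR(1) coefficient (after demeaning)."""
    x = x - x.mean()
    return (x[1:] @ x[:-1]) / (x[:-1] @ x[:-1])

rho, T, n_rep = 0.9, 50, 5000
est, corrected = [], []
for _ in range(n_rep):
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = rho * x[t - 1] + rng.standard_normal()
    r = ols_rho(x)
    est.append(r)
    # Kendall-type analytical correction: add back the first-order bias term
    corrected.append(r + (1 + 3 * r) / T)

print(f"OLS mean       : {np.mean(est):.3f} (true {rho})")
print(f"bias-corrected : {np.mean(corrected):.3f}")
```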
The Probability Distribution for a Biased Spinner
Foster, Colin
2012-01-01
This article advocates biased spinners as an engaging context for statistics students. Calculating the probability of a biased spinner landing on a particular side makes valuable connections between probability and other areas of mathematics. (Contains 2 figures and 1 table.)
A Pharmacological Primer of Biased Agonism
Andresen, Bradley T.
2011-01-01
Biased agonism is one of the fastest growing topics in G protein-coupled receptor pharmacology; moreover, biased agonists are used in the clinic today: carvedilol (Coreg®) is a biased agonist of beta-adrenergic receptors. However, there is a general lack of understanding of biased agonism when compared to traditional pharmacological terminology. Therefore, this review is designed to provide a basic introduction to classical pharmacology as well as G protein-coupled receptor signal transductio...
Attentional bias predicts heroin relapse following treatment
M.A.E. Marissen; I.H.A. Franken; A.J. Waters; P. Blanken; W. van den Brink; V.M. Hendriks
2006-01-01
Aims Previous studies have shown that abstinent heroin addicts exhibit an attentional bias to heroin-related stimuli. It has been suggested that attentional bias may represent a vulnerability to relapse into drug use. In the present study, the predictive value of pre-treatment attentional bias on re
Using Newspapers to Study Media Bias.
Kirman, Joseph M.
1992-01-01
Suggests that students can learn to recognize media bias by studying media reports of current events or historical topics. Describes a study unit using media coverage of the second anniversary of the Palestinian uprising against Israel. Discusses lesson objectives, planning, defining bias, teaching procedures, and criteria for determining bias. (DK)
Culturally Biased Assumptions in Counseling Psychology
Pedersen, Paul B.
2003-01-01
Eight clusters of culturally biased assumptions are identified for further discussion from Leong and Ponterotto's (2003) article. The presence of cultural bias demonstrates that cultural bias is so robust and pervasive that it permeates the profession of counseling psychology, even including those articles that effectively attack cultural bias…
Theory of Finite Size Effects for Electronic Quantum Monte Carlo Calculations of Liquids and Solids
Holzmann, Markus; Morales, Miguel A; Tubmann, Norm M; Ceperley, David M; Pierleoni, Carlo
2016-01-01
Concentrating on zero temperature quantum Monte Carlo calculations of electronic systems, we give a general description of the theory of finite size extrapolations of energies to the thermodynamic limit based on one- and two-body correlation functions. We introduce new effective procedures, such as using the potential and wavefunction split up into long- and short-range functions to simplify the method, and we discuss how to treat backflow wavefunctions. We then explicitly test the accuracy of our method for correcting finite size errors on example hydrogen and helium many-body systems, and show that the finite size bias can be drastically reduced even for small systems.
Fixed-node errors in quantum Monte Carlo: interplay of electron density and node nonlinearities
Rasch, Kevin M; Mitas, Lubos
2013-01-01
We elucidate the origin of large differences (twofold or more) in valence fixed-node errors between the first- vs second-row atom systems for single-configuration trial wave functions. The differences are studied on a set of atoms, molecules, and Si, C solids. These systems are valence isoelectronic and have similar correlation energies, bond patterns, geometries, ground states, and symmetries. We show that the key reasons are the differences between the electron densities combined with the degree of node nonlinearities. The findings reveal how the accuracy of the quantum Monte Carlo varies across a variety of systems and provide new perspectives on the origins of the fixed-node biases.
First Monte Carlo analysis of fragmentation functions from single-inclusive $e^+ e^-$ annihilation
Sato, N; Melnitchouk, W; Hirai, M; Kumano, S; Accardi, A
2016-01-01
We perform the first iterative Monte Carlo (IMC) analysis of fragmentation functions constrained by all available data from single-inclusive $e^+ e^-$ annihilation into pions and kaons. The IMC method eliminates potential bias in traditional analyses based on single fits introduced by fixing parameters not well constrained by the data and provides a statistically rigorous determination of uncertainties. Our analysis reveals specific features of fragmentation functions using the new IMC methodology and those obtained from previous analyses, especially for light quarks and for strange quark fragmentation to kaons.
Monte Carlo simulations of the luminosity function of hot white dwarfs
Torres, S; Krzesinski, J; Kleinman, S J
2012-01-01
We present a detailed Monte Carlo simulation of the population of the hot branch of the white dwarf luminosity function. We used the most up-to-date stellar evolutionary models and we implemented a full description of the observational selection biases. Our theoretical results are compared with the luminosity function of hot white dwarfs obtained from the Sloan Digital Sky Survey (SDSS), for both DA and non-DA white dwarfs. For non-DA white dwarfs we find an excellent agreement with the observational data, while for DA white dwarfs our simulations show some discrepancies with the observations for the brightest luminosity bins, those corresponding to L>= 10 L_sun.
First Monte Carlo analysis of fragmentation functions from single-inclusive e+e- annihilation
Sato, Nobuo; Ethier, J. J.; Melnitchouk, W.; Hirai, M.; Kumano, S.; Accardi, A.; Jefferson Lab Angular Momentum Collaboration
2016-12-01
We perform the first iterative Monte Carlo (IMC) analysis of fragmentation functions constrained by all available data from single-inclusive e+e- annihilation into pions and kaons. The IMC method eliminates potential bias in traditional analyses based on single fits introduced by fixing parameters not well constrained by the data and provides a statistically rigorous determination of uncertainties. Our analysis reveals specific features of fragmentation functions using the new IMC methodology and those obtained from previous analyses, especially for light quarks and for strange quark fragmentation to kaons.
Approaching Chemical Accuracy with Quantum Monte Carlo
Petruzielo, F R; Umrigar, C J
2012-01-01
A quantum Monte Carlo study of the atomization energies for the G2 set of molecules is presented. Basis size dependence of diffusion Monte Carlo atomization energies is studied with a single determinant Slater-Jastrow trial wavefunction formed from Hartree-Fock orbitals. With the largest basis set, the mean absolute deviation from experimental atomization energies for the G2 set is 3.0 kcal/mol. Optimizing the orbitals within variational Monte Carlo improves the agreement between diffusion Monte Carlo and experiment, reducing the mean absolute deviation to 2.1 kcal/mol. Moving beyond a single determinant Slater-Jastrow trial wavefunction, diffusion Monte Carlo with a small complete active space Slater-Jastrow trial wavefunction results in near chemical accuracy. In this case, the mean absolute deviation from experimental atomization energies is 1.2 kcal/mol. It is shown from calculations on systems containing phosphorus that the accuracy can be further improved by employing a larger active space.
Strangeness production in minimum bias and jet data
Lancaster, Justin [Duke Univ., Durham, NC (United States)
2003-08-04
For the first time, K_S production inside jets originating from 1.8 TeV Tevatron proton-antiproton collisions is studied utilizing the CDF data at Fermilab. Prior to the study of K_S production inside jets, K_S production in Minimum Bias events is examined. The properties of K_S production, such as the values of dN_{K_S}/dη, lifetime, and invariant cross-section, are found to be consistent with other Minimum Bias publications. After this, the numbers of K_S and of tracks inside 0.7 jet cones are computed, along with the trigger, background, and efficiency corrections, for both the data and the HERWIG+QFL (event generator + detector simulator) Monte Carlo. Furthermore, the fragmentation functions are contrasted with those from the e+e- machines. In the data, the number of K_S per jet increases and then reaches a plateau as a function of the jet E_T. In particular, the number of K_S per jet within 1.5 < p_T < 10.0 GeV is determined to be 0.156 ± 0.007, 0.206 ± 0.011, and 0.199 ± 0.011 for the 20-50 GeV, 50-100 GeV, and 100-150 GeV jets. Conversely, the number of tracks per jet in the data strictly grows with the jet E_T, and its values within 1.5 < p_T < 10.0 GeV are 2.816 ± 0.008, 5.107 ± 0.009, and 5.972 ± 0.008 for the 20-50 GeV, 50-100 GeV, and 100-150 GeV cases. These data results are then compared with those from the HERWIG+QFL Monte Carlo. The HERWIG+QFL Monte Carlo results agree to within 10% as to the number of tracks per jet. Moreover, for the number of K_S per jet, the data and the Monte Carlo agree to within 5% for the 20-50 GeV case. However, the HERWIG+QFL Monte Carlo K_S per jet values are increasingly above those of the data for K_S inside the 50-100 GeV jets (around 20% too high) and 100-150 GeV jets (approximately 35% too high). We conclude that the HERWIG
A Review of Studies on Media Bias at Home
辛一丹
2015-01-01
Bias is widespread nowadays. Domestic scholars have done a lot of research on bias, especially media bias. They have studied media bias from different perspectives, such as bias in the portrayal of China's image, the bias of a particular media outlet (FOX), bias against vulnerable groups, bias against women, and so on. This paper gives a review of the domestic studies on media bias.
Opinion Dynamics with Confirmation Bias
Allahverdyan, Armen E.; Galstyan, Aram
2014-01-01
Background Confirmation bias is the tendency to acquire or evaluate new information in a way that is consistent with one's preexisting beliefs. It is omnipresent in psychology, economics, and even scientific practices. Prior theoretical research of this phenomenon has mainly focused on its economic implications, possibly missing its potential connections with broader notions of cognitive science. Methodology/Principal Findings We formulate a (non-Bayesian) model for revising the subjective probabilistic opinion of a confirmationally-biased agent in the light of a persuasive opinion. The revision rule ensures that the agent does not react to persuasion that is either far from his current opinion or coincides with it. We demonstrate that the model accounts for the basic phenomenology of the social judgment theory, and allows one to study various phenomena such as cognitive dissonance and boomerang effect. The model also displays the order of presentation effect, whereby the preference is given to the last opinion (recency) or the first opinion (primacy) when an agent is consecutively exposed to two opinions, and relates recency to confirmation bias. Finally, we study the model in the case of repeated persuasion and analyze its convergence properties. Conclusions The standard Bayesian approach to probabilistic opinion revision is inadequate for describing the observed phenomenology of the persuasion process. The simple non-Bayesian model proposed here does agree with this phenomenology and is capable of reproducing a spectrum of effects observed in psychology: the primacy-recency phenomenon, the boomerang effect and cognitive dissonance. We point out several limitations of the model that should motivate its future development. PMID:25007078
Bias in Peripheral Depression Biomarkers
Carvalho, André F; Köhler, Cristiano A; Brunoni, André R
2016-01-01
BACKGROUND: To aid in the differentiation of individuals with major depressive disorder (MDD) from healthy controls, numerous peripheral biomarkers have been proposed. To date, no comprehensive evaluation of the existence of bias favoring the publication of significant results or inflating effect sizes has been conducted. METHODS: Here, we performed a comprehensive review of meta-analyses of peripheral nongenetic biomarkers that could discriminate individuals with MDD from nondepressed controls. PubMed/MEDLINE, EMBASE, and PsycINFO databases were searched through April 10, 2015. RESULTS: From 15...
Ratio Bias and Policy Preferences
Pedersen, Rasmus Tue
2016-01-01
Numbers permeate modern political communication. While current scholarship on framing effects has focused on the persuasive effects of words and arguments, this article shows that framing of numbers can also substantially affect policy preferences. Such effects are caused by ratio bias, which is a general tendency to focus on numerators and pay insufficient attention to denominators in ratios. Using a population-based survey experiment, I demonstrate how differently framed but logically equivalent representations of the exact same numerical value can have large effects on citizens’ preferences...
Magnetoelectric switching of exchange bias.
Borisov, Pavel; Hochstrat, Andreas; Chen, Xi; Kleemann, Wolfgang; Binek, Christian
2005-03-25
The perpendicular exchange bias field, H(EB), of the magnetoelectric heterostructure Cr2O3(111)/(Co/Pt)(3) changes sign after field cooling to below the Néel temperature of Cr2O3 in either parallel or antiparallel axial magnetic and electric freezing fields. The switching of H(EB) is explained by magnetoelectrically induced antiferromagnetic single domains which extend to the interface, where the direction of their end spins controls the sign of H(EB). Novel applications in magnetoelectronic devices seem possible.
Vahid Moslemi
2011-03-01
Introduction: In brachytherapy, radioactive sources are placed close to the tumor; therefore, small changes in their positions can cause large changes in the dose distribution. This emphasizes the need for computerized treatment planning. The usual method for treatment planning of cervix brachytherapy uses conventional radiographs in the Manchester system. Nowadays, because of their advantages in locating the source positions and the surrounding tissues, CT and MRI images are replacing conventional radiographs. In this study, we used CT images in Monte Carlo-based dose calculation for brachytherapy treatment planning, using an interface software to create the geometry file required by the MCNP code. The aim of using the interface software is to facilitate and speed up the geometry set-up for simulations based on the patient’s anatomy. This paper examines the feasibility of this method in cervix brachytherapy and assesses its accuracy and speed. Material and Methods: For dosimetric measurements regarding the treatment plan, a pelvic phantom was made from polyethylene in which the treatment applicators could be placed. For simulations using CT images, the phantom was scanned at 120 kVp. Using an interface software written in MATLAB, the CT images were converted into an MCNP input file and the simulation was then performed. Results: Using the interface software, preparation time for the simulations of the applicator and surrounding structures was approximately 3 minutes; the corresponding time needed with conventional MCNP geometry entry was approximately 1 hour. The discrepancy between the simulated and measured doses to point A was 1.7% of the prescribed dose. The corresponding dose differences between the two methods in the rectum and bladder were 3.0% and 3.7% of the prescribed dose, respectively. Comparing the results of simulation using the interface software with those of simulation using the standard MCNP geometry entry showed a difference of less than 1%.
A Monte Carlo Method for Making the SDSS u-Band Magnitude More Accurate
Gu, Jiayin; Du, Cuihua; Zuo, Wenbo; Jing, Yingjie; Wu, Zhenyu; Ma, Jun; Zhou, Xu
2016-10-01
We develop a new Monte Carlo-based method to convert the Sloan Digital Sky Survey (SDSS) u-band magnitude to the south Galactic Cap of the u-band Sky Survey (SCUSS) u-band magnitude. Due to the increased accuracy of SCUSS u-band measurements, the converted u-band magnitude becomes more accurate compared with the original SDSS u-band magnitude, in particular at the faint end. The average u-magnitude error (for both SDSS and SCUSS) of numerous main-sequence stars with 0.2 < g−r < 0.8 increases as the g-band magnitude becomes fainter. When g = 19.5, the average magnitude error of the SDSS u is 0.11. When g = 20.5, the average SDSS u error rises to 0.22. However, at this magnitude, the average magnitude error of the SCUSS u is just half as much as that of the SDSS u. The SDSS u-band magnitudes of main-sequence stars with 0.2 < g−r < 0.8 and 18.5 < g < 20.5 are converted, therefore the maximum average error of the converted u-band magnitudes is 0.11. The potential application of this conversion is to derive a more accurate photometric metallicity calibration from SDSS observations, especially for the more distant stars. Thus, we can explore stellar metallicity distributions either in the Galactic halo or some stream stars.
Random Numbers and Monte Carlo Methods
Scherer, Philipp O. J.
Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages, Monte Carlo methods, which sample the integration volume at randomly chosen points, are very useful. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with a given probability distribution, which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by preferentially sampling the important configurations. Finally, the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
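As an illustration of the Metropolis algorithm mentioned in this chapter summary, the following minimal Python sketch samples a Boltzmann distribution for a one-dimensional double-well potential and estimates a thermodynamic average; the potential, temperature, and step size are invented for the example and are not taken from the book.

import math
import random

def metropolis(v, temperature, n_steps, step_size=0.5, x0=0.0):
    """Sample exp(-V(x)/T) with the Metropolis rule (symmetric proposal)."""
    x, samples = x0, []
    for _ in range(n_steps):
        x_new = x + random.uniform(-step_size, step_size)
        dv = v(x_new) - v(x)
        # Accept with probability min(1, exp(-dV/T)).
        if dv <= 0 or random.random() < math.exp(-dv / temperature):
            x = x_new
        samples.append(x)
    return samples

if __name__ == "__main__":
    v = lambda x: x**4 - 2 * x**2          # hypothetical double-well potential
    samples = metropolis(v, temperature=0.5, n_steps=100_000)
    mean_v = sum(map(v, samples)) / len(samples)
    print(f"thermodynamic average <V> ~ {mean_v:.3f}")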
SMCTC: Sequential Monte Carlo in C++
Adam M. Johansen
2009-04-01
Sequential Monte Carlo methods are a very general class of Monte Carlo methods for sampling from sequences of distributions. Simple examples of these algorithms are used very widely in the tracking and signal processing literature. Recent developments illustrate that these techniques have much more general applicability, and can be applied very effectively to statistical inference problems. Unfortunately, these methods are often perceived as being computationally expensive and difficult to implement. This article seeks to address both of these problems. A C++ template class library for the efficient and convenient implementation of very general Sequential Monte Carlo algorithms is presented. Two example applications are provided: a simple particle filter for illustrative purposes and a state-of-the-art algorithm for rare event estimation.
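To make the flavor of such algorithms concrete, here is a minimal bootstrap particle filter in Python rather than C++; the linear-Gaussian toy model, its parameters, and the multinomial resampling choice are illustrative assumptions and are far simpler than what the SMCTC library supports.

import numpy as np

rng = np.random.default_rng(0)
T, N = 50, 1000                       # time steps, number of particles

# Hidden AR(1) state and noisy observations (all parameters invented).
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + rng.normal(0.0, 1.0)
y = x_true + rng.normal(0.0, 0.5, size=T)

particles = rng.normal(0.0, 1.0, size=N)
estimates = np.zeros(T)
for t in range(T):
    if t > 0:
        # Propagate every particle through the transition kernel.
        particles = 0.9 * particles + rng.normal(0.0, 1.0, size=N)
    # Weight by the Gaussian observation likelihood, then normalize.
    w = np.exp(-0.5 * ((y[t] - particles) / 0.5) ** 2)
    w /= w.sum()
    estimates[t] = np.sum(w * particles)
    # Multinomial resampling keeps the particle set from degenerating.
    particles = rng.choice(particles, size=N, p=w)

print("filter RMSE:", np.sqrt(np.mean((estimates - x_true) ** 2)))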
Shell model the Monte Carlo way
Ormand, W.E.
1995-03-01
The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.
A brief introduction to Monte Carlo simulation.
Bonate, P L
2001-01-01
Simulation affects our life every day through our interactions with the automobile, airline and entertainment industries, just to name a few. The use of simulation in drug development is relatively new, but its use is growing as modern computers become faster. One well-known example of simulation in drug development is molecular modelling. Another use of simulation seen recently in drug development is Monte Carlo simulation of clinical trials. Monte Carlo simulation differs from traditional simulation in that the model parameters are treated as stochastic or random variables, rather than as fixed values. The purpose of this paper is to provide a brief introduction to Monte Carlo simulation methods.
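The distinction the abstract draws, fixed parameters versus parameters treated as random variables, can be illustrated with a toy sketch; the response model, the beta distribution, and all numbers below are hypothetical and not from the paper.

import numpy as np

rng = np.random.default_rng(1)
n_trials, n_patients = 2000, 100

# Traditional simulation: the response probability is a fixed value.
fixed_p = 0.30
traditional = rng.binomial(n_patients, fixed_p, size=n_trials) / n_patients

# Monte Carlo simulation: the response probability is itself random,
# drawn here from a beta distribution with mean 0.30.
p_draws = rng.beta(6.0, 14.0, size=n_trials)
monte_carlo = rng.binomial(n_patients, p_draws) / n_patients

for name, sim in (("fixed parameter", traditional),
                  ("random parameter", monte_carlo)):
    print(f"{name:17s} mean={sim.mean():.3f}  sd={sim.std():.3f}")
# The random-parameter runs show wider trial-to-trial spread, which is
# exactly the extra uncertainty Monte Carlo trial simulation captures.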
CosmoPMC: Cosmology Population Monte Carlo
Kilbinger, Martin; Cappe, Olivier; Cardoso, Jean-Francois; Fort, Gersende; Prunet, Simon; Robert, Christian P; Wraith, Darren
2011-01-01
We present the public release of the Bayesian sampling algorithm for cosmology, CosmoPMC (Cosmology Population Monte Carlo). CosmoPMC explores the parameter space of various cosmological probes, and also provides a robust estimate of the Bayesian evidence. CosmoPMC is based on an adaptive importance sampling method called Population Monte Carlo (PMC). Various cosmology likelihood modules are implemented, and new modules can be added easily. The importance-sampling algorithm is written in C, and fully parallelised using the Message Passing Interface (MPI). Due to very little overhead, the wall-clock time required for sampling scales approximately with the number of CPUs. The CosmoPMC package contains post-processing and plotting programs, and in addition a Markov chain Monte Carlo (MCMC) algorithm. The sampling engine is implemented in the library pmclib, and can be used independently. The software is available for download at http://www.cosmopmc.info.
Quantum speedup of Monte Carlo methods.
Montanaro, Ashley
2015-09-08
Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.
Adiabatic optimization versus diffusion Monte Carlo methods
Jarret, Michael; Jordan, Stephen P.; Lackey, Brad
2016-10-01
Most experimental and theoretical studies of adiabatic optimization use stoquastic Hamiltonians, whose ground states are expressible using only real nonnegative amplitudes. This raises a question as to whether classical Monte Carlo methods can simulate stoquastic adiabatic algorithms with polynomial overhead. Here we analyze diffusion Monte Carlo algorithms. We argue that, based on differences between L1 and L2 normalized states, these algorithms suffer from certain obstructions preventing them from efficiently simulating stoquastic adiabatic evolution in generality. In practice however, we obtain good performance by introducing a method that we call Substochastic Monte Carlo. In fact, our simulations are good classical optimization algorithms in their own right, competitive with the best previously known heuristic solvers for MAX-k-SAT at k = 2, 3, 4.
Self-learning Monte Carlo method
Liu, Junwei; Qi, Yang; Meng, Zi Yang; Fu, Liang
2017-01-01
Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large size systems close to the phase transition, for which local updates perform badly. In this Rapid Communication, we propose a general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from the training data generated in trial simulations and then used to speed up the actual simulation. We demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10-20 times speedup.
Monte Carlo strategies in scientific computing
Liu, Jun S
2008-01-01
This paperback edition is a reprint of the 2001 Springer edition. This book provides a self-contained and up-to-date treatment of the Monte Carlo method and develops a common framework under which various Monte Carlo techniques can be "standardized" and compared. Given the interdisciplinary nature of the topics and a moderate prerequisite for the reader, this book should be of interest to a broad audience of quantitative researchers such as computational biologists, computer scientists, econometricians, engineers, probabilists, and statisticians. It can also be used as the textbook for a graduate-level course on Monte Carlo methods. Many problems discussed in the later chapters can be potential thesis topics for master's or PhD students in statistics or computer science departments. Jun Liu is Professor of Statistics at Harvard University, with a courtesy Professor appointment at the Harvard Biostatistics Department. Professor Liu was the recipient of the 2002 COPSS Presidents' Award, the most prestigious award for statisticians.
A Simulation Platform for Quantifying Survival Bias
Mayeda, Elizabeth Rose; Tchetgen Tchetgen, Eric J; Power, Melinda C
2016-01-01
Bias due to selective mortality is a potential concern in many studies and is especially relevant in cognitive aging research, because cognitive impairment strongly predicts subsequent mortality. Biased estimation of the effect of an exposure on the rate of cognitive decline can occur when mortality is selective. We developed a simulation platform with which to quantify the expected bias in longitudinal studies of determinants of cognitive decline. We evaluated potential survival bias in naive analyses under several selective survival scenarios, assuming that exposure had no effect on cognitive decline for anyone in the population. This simulation platform provides a flexible tool for evaluating biases in studies with high mortality, as is common in cognitive aging research.
Numeracy and framing bias in epilepsy.
Choi, Hyunmi; Wong, John B; Mendiratta, Anil; Heiman, Gary A; Hamberger, Marla J
2011-01-01
Patients with epilepsy are frequently confronted with complex treatment decisions. Communicating treatment risks is often difficult because patients may have difficulty with basic statistical concepts (i.e., low numeracy) or might misconstrue statistical information depending on the way it is presented, a phenomenon known as "framing bias." We assessed numeracy and framing bias in 95 adults with chronic epilepsy and explored cognitive correlates of framing bias. Compared with normal controls, patients with epilepsy had significantly poorer performance on the Numeracy scale (P=0.02), despite a higher level of education than normal controls, and they were susceptible to framing bias. Abstract problem-solving performance correlated with the degree of framing bias (r=0.631). Poor numeracy and susceptibility to framing bias place patients with epilepsy at risk for uninformed decisions.
Al-Subeihi, Ala' A.A., E-mail: subeihi@yahoo.com [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands); BEN-HAYYAN-Aqaba International Laboratories, Aqaba Special Economic Zone Authority (ASEZA), P. O. Box 2565, Aqaba 77110 (Jordan); Alhusainy, Wasma; Kiwamoto, Reiko; Spenkelink, Bert [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands); Bladeren, Peter J. van [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands); Nestec S.A., Avenue Nestlé 55, 1800 Vevey (Switzerland); Rietjens, Ivonne M.C.M.; Punt, Ans [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands)
2015-03-01
The present study aims at predicting the level of formation of the ultimate carcinogenic metabolite of methyleugenol, 1′-sulfooxymethyleugenol, in the human population by taking variability in key bioactivation and detoxification reactions into account using Monte Carlo simulations. Depending on the metabolic route, variation was simulated based on kinetic constants obtained from incubations with a range of individual human liver fractions or by combining kinetic constants obtained for specific isoenzymes with literature reported human variation in the activity of these enzymes. The results of the study indicate that formation of 1′-sulfooxymethyleugenol is predominantly affected by variation in i) P450 1A2-catalyzed bioactivation of methyleugenol to 1′-hydroxymethyleugenol, ii) P450 2B6-catalyzed epoxidation of methyleugenol, iii) the apparent kinetic constants for oxidation of 1′-hydroxymethyleugenol, and iv) the apparent kinetic constants for sulfation of 1′-hydroxymethyleugenol. Based on the Monte Carlo simulations a so-called chemical-specific adjustment factor (CSAF) for intraspecies variation could be derived by dividing different percentiles by the 50th percentile of the predicted population distribution for 1′-sulfooxymethyleugenol formation. The obtained CSAF value at the 90th percentile was 3.2, indicating that the default uncertainty factor of 3.16 for human variability in kinetics may adequately cover the variation within 90% of the population. Covering 99% of the population requires a larger uncertainty factor of 6.4. In conclusion, the results showed that adequate predictions on interindividual human variation can be made with Monte Carlo-based PBK modeling. For methyleugenol this variation was observed to be in line with the default variation generally assumed in risk assessment. - Highlights: • Interindividual human differences in methyleugenol bioactivation were simulated. • This was done using in vitro incubations, PBK modeling
A STUDY OF MONTE CARLO TRANSPORT MODELING AND CALCULATION IN AN HTR PEBBLE BED CORE
Zuhair
2013-01-01
The VHTR energy-system concept, whether fueled with pebbles (pebble bed VHTR) or prismatic blocks (prismatic VHTR), has attracted the attention of nuclear reactor physicists. One advantage of spherical-fuel technology is that it offers a breakthrough in refueling: fuel can be loaded without halting electricity production. In addition, pebble fuel particles with uranium oxide (UO2) or uranium oxycarbide (UCO) kernels wrapped in TRISO coatings with silicon carbide (SiC) layers are considered the primary option, given their high performance at high fuel burn-up and high temperatures. This paper discusses Monte Carlo transport modeling and calculation in the core of a pebble bed HTR, a high-temperature gas-cooled, graphite-moderated reactor with cogeneration capability. Calculations were performed with the MCNP5 code at a temperature of 1200 K. The ENDF/B-V and ENDF/B-VI continuous-energy nuclear data libraries were used to complete the analysis. Overall, the calculation results are consistent, with nearly identical keff values across the nuclear data libraries used. The ENDF/B-VI (66c) library always produces a larger keff than either ENDF/B-V (50c) or ENDF/B-VI (60c), with a bias of less than 0.25%. The BCC lattice almost always predicts a smaller keff than the other lattices, particularly FCC. The BCC keff values are closer to those of the FCC lattice, with a bias of less than 0.19%, while relative to the SH lattice the calculational bias is less than 0.22%. The slightly different packing fractions (BCC = 61%, SH = 60.459%) do not make the calculational biases differ substantially. The keff estimates for the three lattice models lead to the conclusion that the BCC model is more suitable for pebble bed HTR calculations than the FCC and SH models. Verification of these estimates should be carried out with other Monte Carlo simulations or even deterministic codes in order to optimize high-temperature reactor core calculations. Keywords: kernel, TRISO, pebble fuel, pebble bed HTR
Parallel Markov chain Monte Carlo simulations.
Ren, Ruichao; Orkoulas, G
2007-06-07
With strict detailed balance, parallel Monte Carlo simulation through domain decomposition cannot be validated with conventional Markov chain theory, which describes an intrinsically serial stochastic process. In this work, the parallel version of Markov chain theory and its role in accelerating Monte Carlo simulations via cluster computing is explored. It is shown that sequential updating is the key to improving efficiency in parallel simulations through domain decomposition. A parallel scheme is proposed to reduce interprocessor communication and synchronization, which otherwise slow down parallel simulation as the number of processors increases. Parallel simulation results for the two-dimensional lattice gas model show a substantial reduction of simulation time for systems of moderate and large size.
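A single-process sketch of the sequential-updating idea, applied to a 1D Ising chain split into domains, is given below; the model, its parameters, and the emulation of per-processor domains within one loop are illustrative assumptions, and the real scheme's MPI communication and lattice gas model are omitted.

import math
import random

random.seed(2)
L, n_domains, beta = 120, 4, 0.6        # chain length, domains, 1/kT
width = L // n_domains
spins = [random.choice((-1, 1)) for _ in range(L)]

def local_energy(i):
    # Periodic boundaries couple the last domain back to the first.
    return -spins[i] * (spins[(i - 1) % L] + spins[(i + 1) % L])

for sweep in range(500):
    for d in range(n_domains):          # domains visited in a fixed sequence,
        for i in range(d * width, (d + 1) * width):  # one "processor" each
            dE = -2 * local_energy(i)   # energy change if spin i flips
            if dE <= 0 or random.random() < math.exp(-beta * dE):
                spins[i] *= -1

print("magnetization per spin:", sum(spins) / L)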
Monte Carlo Hamiltonian: Linear Potentials
Luo, Xiang-Qian; Kroeger, Helmut; et al.
2002-01-01
We further study the validity of the Monte Carlo Hamiltonian method. The advantage of the method, in comparison with the standard Monte Carlo Lagrangian approach, is its capability to study the excited states. We consider two quantum mechanical models: a symmetric one, V(x) = |x|/2, and an asymmetric one, V(x) = ∞ for x < 0 and V(x) = x/2 for x ≥ 0. The results for the spectrum, wave functions, and thermodynamical observables are in agreement with the analytical or Runge-Kutta calculations.
Monte Carlo dose distributions for radiosurgery
Perucha, M.; Leal, A.; Rincon, M.; Carrasco, E. [Sevilla Univ. (Spain). Dept. Fisiologia Medica y Biofisica; Sanchez-Doblado, F. [Sevilla Univ. (Spain). Dept. Fisiologia Medica y Biofisica]|[Hospital Univ. Virgen Macarena, Sevilla (Spain). Servicio de Oncologia Radioterapica; Nunez, L. [Clinica Puerta de Hierro, Madrid (Spain). Servicio de Radiofisica; Arrans, R.; Sanchez-Calzado, J.A.; Errazquin, L. [Hospital Univ. Virgen Macarena, Sevilla (Spain). Servicio de Oncologia Radioterapica; Sanchez-Nieto, B. [Royal Marsden NHS Trust (United Kingdom). Joint Dept. of Physics]|[Inst. of Cancer Research, Sutton, Surrey (United Kingdom)
2001-07-01
The precision of radiosurgery treatment planning systems is limited by the approximations of their algorithms and by their dosimetric input data. This fact is especially important for small fields. However, the Monte Carlo method is an accurate alternative, as it considers every aspect of particle transport. In this work an acoustic neurinoma is studied by comparing the dose distributions of a planning system and of Monte Carlo. Relative shifts have been measured and, furthermore, dose-volume histograms have been calculated for the target and adjacent organs at risk. (orig.)
Monte Carlo simulations of organic photovoltaics.
Groves, Chris; Greenham, Neil C
2014-01-01
Monte Carlo simulations are a valuable tool to model the generation, separation, and collection of charges in organic photovoltaics where charges move by hopping in a complex nanostructure and Coulomb interactions between charge carriers are important. We review the Monte Carlo techniques that have been applied to this problem, and describe the results of simulations of the various recombination processes that limit device performance. We show how these processes are influenced by the local physical and energetic structure of the material, providing information that is useful for design of efficient photovoltaic systems.
Monte Carlo simulation of neutron scattering instruments
Seeger, P.A.
1995-12-31
A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width.
The Rational Hybrid Monte Carlo Algorithm
Clark, M A
2006-01-01
The past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions. The Rational Hybrid Monte Carlo (RHMC) algorithm, where Hybrid Monte Carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel, is one of these developments. This algorithm has been found to be extremely beneficial in many areas of lattice QCD (chiral fermions, finite temperature, Wilson fermions, etc.). We review the algorithm and some of these benefits, and we compare it against other recent algorithm developments. We conclude with an update of the Berlin wall plot comparing the costs of all popular fermion formulations.
Fast sequential Monte Carlo methods for counting and optimization
Rubinstein, Reuven Y; Vaisman, Radislav
2013-01-01
A comprehensive account of the theory and application of Monte Carlo methods. Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the field.
Reducing neutron multiplicity counting bias for plutonium warhead authentication
Goettsche, Malte
2015-06-05
Confidence in future nuclear arms control agreements could be enhanced by direct verification of warheads, which would include warhead authentication: the assessment, based on measurements, of whether a declaration that a specific item is a nuclear warhead is true. An information barrier can be used to protect sensitive information during measurements. It could, for example, show whether attributes such as a fissile mass exceeding a threshold are met without revealing detailed measurement results. Neutron multiplicity measurements would be able to assess a plutonium fissile mass attribute if it were possible to show that their bias is low. Plutonium measurements have been conducted with the He-3 based Passive Scrap Multiplicity Counter. The measurement data have been used as a reference to test the capacity of the Monte Carlo code MCNPX-PoliMi to simulate neutron multiplicity measurements. The simulation results, with their uncertainties, are in agreement with the experimental results. It is essential to use cross sections which include neutron scattering with the polyethylene molecular structure of the detector. Further MCNPX-PoliMi simulations have been conducted in order to study the bias that occurs when measuring samples with large plutonium masses, such as warheads. Simulation results for solid and hollow metal spheres up to 6000 g show that the masses are underpredicted by as much as 20%. The main source of this bias has been identified in the false assumption that the neutron multiplication does not depend on the position where a spontaneous fission event occurred. The multiplication refers to the total number of neutrons leaking from a sample after a primary spontaneous fission event, taking induced fission into consideration. A correction to the analysis has been derived and implemented in a MATLAB code. It depends on four geometry-dependent correction coefficients. When the sample configuration is fully known, these can be determined exactly and remove this type of bias.
Monte Carlo methods in ab initio quantum chemistry: quantum Monte Carlo for molecules
Lester, William A; Reynolds, PJ
1994-01-01
This book presents the basic theory and application of the Monte Carlo method to the electronic structure of atoms and molecules. It assumes no previous knowledge of the subject, only a knowledge of molecular quantum mechanics at the first-year graduate level. A working knowledge of traditional ab initio quantum chemistry is helpful, but not essential. Some distinguishing features of this book are: a clear exposition of the basic theory at a level to facilitate independent study; discussion of the various versions of the theory: diffusion Monte Carlo, Green's function Monte Carlo, and released-node Monte Carlo.
Use of Monte Carlo Methods in brachytherapy; Uso del metodo de Monte Carlo en braquiterapia
Granero Cabanero, D.
2015-07-01
The Monte Carlo method has become a fundamental tool for brachytherapy dosimetry, mainly because it avoids the difficulties associated with experimental dosimetry. In brachytherapy, the main handicap of experimental dosimetry is the high dose gradient near the sources, where small uncertainties in the positioning of the detectors lead to large uncertainties in the dose. This presentation reviews mainly the procedure for calculating dose distributions around a source using the Monte Carlo method, showing the difficulties inherent in these calculations. In addition, we briefly review other applications of the Monte Carlo method in brachytherapy dosimetry, such as its use in advanced calculation algorithms, shielding calculations, or the calculation of dose distributions around applicators. (Author)
On the use of stochastic approximation Monte Carlo for Monte Carlo integration
Liang, Faming
2009-03-01
The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration via a dynamically weighted estimator by calling some results from the literature of nonhomogeneous Markov chains. Our numerical results indicate that SAMC can yield significant savings over conventional Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, for the problems for which the energy landscape is rugged. © 2008 Elsevier B.V. All rights reserved.
Propagation of Isotopic Bias and Uncertainty to Criticality Safety Analyses of PWR Waste Packages
Radulescu, Georgeta [ORNL]
2010-06-01
Burnup credit methodology is economically advantageous because significantly higher loading capacity may be achieved for spent nuclear fuel (SNF) casks based on this methodology as compared to the loading capacity based on a fresh-fuel assumption. However, the criticality safety analysis for establishing the loading curve based on burnup credit becomes increasingly complex as more parameters accounting for spent fuel isotopic compositions are introduced to the safety analysis. The safety analysis requires validation of both depletion and criticality calculation methods. Validation of a neutronic-depletion code consists of quantifying the bias and the uncertainty associated with the bias in predicted SNF compositions caused by cross-section data uncertainty and by approximations in the calculational method. The validation is based on comparison between radiochemical assay (RCA) data and calculated isotopic concentrations for fuel samples representative of the SNF inventory. The criticality analysis methodology for commercial SNF disposal allows burnup credit for 14 actinide and 15 fission product isotopes in SNF compositions. The neutronic-depletion method for disposal criticality analysis employing burnup credit is the two-dimensional (2-D) depletion sequence TRITON (Transport Rigor Implemented with Time-dependent Operation for Neutronic depletion)/NEWT (New ESC-based Weighting Transport code) with the 44GROUPNDF5 cross-section library in the Standardized Computer Analysis for Licensing Evaluation (SCALE 5.1) code system. The SCALE 44GROUPNDF5 cross-section library is based on the Evaluated Nuclear Data File/B Version V (ENDF/B-V) library. The criticality calculation code for disposal criticality analysis employing burnup credit is the General Monte Carlo N-Particle (MCNP) Transport Code. The purpose of this calculation report is to determine the bias on the calculated effective neutron multiplication factor, k{sub eff}, due to the bias and bias uncertainty associated with the predicted SNF isotopic compositions.
Observations and Models of Galaxy Assembly Bias
Campbell, Duncan A.
2017-01-01
The assembly history of dark matter haloes imparts various correlations between a halo’s physical properties and its large scale environment, i.e. assembly bias. It is common for models of the galaxy-halo connection to assume that galaxy properties are only a function of halo mass, implicitly ignoring how assembly bias may affect galaxies. Recently, programs to model and constrain the degree to which galaxy properties are influenced by assembly bias have been undertaken; however, the extent and character of galaxy assembly bias remains a mystery. Nevertheless, characterizing and modeling galaxy assembly bias is an important step in understanding galaxy evolution and limiting any systematic effects assembly bias may pose in cosmological measurements using galaxy surveys.I will present work on modeling and constraining the effect of assembly bias in two galaxy properties: stellar mass and star-formation rate. Conditional abundance matching allows for these galaxy properties to be tied to halo formation history to a variable degree, making studies of the relative strength of assembly bias possible. Galaxy-galaxy clustering and galactic conformity, the degree to which galaxy color is correlated between neighbors, are sensitive observational measures of galaxy assembly bias. I will show how these measurements can be used to constrain galaxy assembly bias and the peril of ignoring it.
Kinetic Monte Carlo simulations of void lattice formation during irradiation
Heinisch, H. L.; Singh, B. N.
2003-11-01
Over the last decade, molecular dynamics simulations of displacement cascades have revealed that glissile clusters of self-interstitial crowdions are formed directly in cascades and that they migrate one-dimensionally along close-packed directions with extremely low activation energies. Occasionally, under various conditions, a crowdion cluster can change its Burgers vector and glide along a different close-packed direction. The recently developed production bias model (PBM) of microstructure evolution under irradiation has been structured specifically to take into account the unique properties of the vacancy and interstitial clusters produced in the cascades. Atomic-scale kinetic Monte Carlo (KMC) simulations have played a useful role in understanding the defect reaction kinetics of one-dimensionally migrating crowdion clusters as a function of the frequency of direction changes. This has made it possible to incorporate the migration properties of crowdion clusters and changes in reaction kinetics into the PBM. In the present paper we utilize similar KMC simulations to investigate the significant role that crowdion clusters can play in the formation and stability of void lattices. The creation of stable void lattices, starting from a random distribution of voids, is simulated by a KMC model in which vacancies migrate three-dimensionally and self-interstitial atom (SIA) clusters migrate one-dimensionally, interrupted by directional changes. The necessity of both one-dimensional migration and Burgers vectors changes of SIA clusters for the production of stable void lattices is demonstrated, and the effects of the frequency of Burgers vector changes are described.
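A toy kinetic Monte Carlo sketch of the mechanism described above, one-dimensional glide interrupted by occasional direction changes, is given below; the lattice, the direction-change probability, and the distinct-sites-visited metric (a rough proxy for how often a migrating cluster can encounter other defects) are illustrative assumptions, not the authors' model.

import numpy as np

rng = np.random.default_rng(3)

def distinct_sites(n_steps, p_change):
    """1D glide along one axis, switching axis with probability p_change."""
    pos = np.zeros(3, dtype=int)
    axis = 0
    visited = {tuple(pos)}
    for _ in range(n_steps):
        if rng.random() < p_change:
            axis = int(rng.integers(0, 3))   # pick a new glide direction
        pos[axis] += rng.choice((-1, 1))     # one hop along the current axis
        visited.add(tuple(pos))
    return len(visited)

for p in (0.0, 0.01, 0.5):
    mean_sites = np.mean([distinct_sites(1000, p) for _ in range(200)])
    print(f"p_change={p:4.2f}  distinct sites visited ~ {mean_sites:.0f}")

# Pure 1D motion (p_change=0) revisits the same few sites; even rare
# direction changes let the cluster sweep out far more of the lattice,
# which is why the change frequency matters for the reaction kinetics.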
Thermally activated repolarization of antiferromagnetic particles: Monte Carlo dynamics
Soloviev, S. V.; Popkov, A. F.; Knizhnik, A. A.; Iskandarova, I. M.
2017-02-01
Based on the equation of motion of an antiferromagnetic moment, taking into account a random field of thermal fluctuations, we propose a Monte Carlo (MC) scheme for the numerical simulation of the evolutionary dynamics of an antiferromagnetic particle, corresponding to the Langevin dynamics in the Kramers theory for the two-well potential. Conditions for the selection of the sphere of fluctuations of random deviations of the antiferromagnetic vector at an MC time step are found. A good agreement with the theory of Kramers thermal relaxation is demonstrated for varying temperatures and heights of energy barrier over a wide range of integration time steps in an overdamped regime. Based on the developed scheme, we performed illustrative calculations of the temperature drift of the exchange bias under the fast annealing of a ferromagnet-antiferromagnet structure, taking into account the random variation of anisotropy directions in antiferromagnetic grains and their sizes. The proposed approach offers promise for modeling magnetic sensors and spintronic memory devices containing heterostructures with antiferromagnetic layers.
Forecasts: uncertain, inaccurate and biased?
Nicolaisen, Morten Skou; Ambrasaite, Inga; Salling, Kim Bang
2012-01-01
Cost Benefit Analysis (CBA) is the dominating methodology for appraisal of transport infrastructure projects across the globe. In order to adequately assess the costs and benefits of such projects, two types of forecasts are crucial to the validity of the appraisal. First are the forecasts of construction costs, which account for the majority of total project costs. Second are the forecasts of travel time savings, which account for the majority of total project benefits. The latter of these is, inter alia, determined by forecasts of travel demand, which we shall use as a proxy for the forecasting accuracy of project benefits. This paper presents results from an on-going research project on uncertainties in transport project evaluation (UNITE) that find forecasts of demand to be not only uncertain, but at times also highly inaccurate and often displaying a concerning degree of bias. Demand for road projects appears to be systematically underestimated, while demand for rail projects appears to be systematically overestimated. We compare the findings in the present study with those of previous studies and discuss the implications for the validity of project appraisal in the form of CBA.
Modeling confirmation bias and polarization
Del Vicario, Michela; Caldarelli, Guido; Stanley, H Eugene; Quattrociocchi, Walter
2016-01-01
Online users tend to select claims that adhere to their system of beliefs and to ignore dissenting information. Confirmation bias, indeed, plays a pivotal role in viral phenomena. Furthermore, the wide availability of content on the web fosters the aggregation of likeminded people where debates tend to enforce group polarization. Such a configuration might alter the public debate and thus the formation of the public opinion. In this paper we provide a mathematical model to study online social debates and the related polarization dynamics. We assume the basic updating rule of the Bounded Confidence Model (BCM) and we develop two variations a) the Rewire with Bounded Confidence Model (RBCM), in which discordant links are broken until convergence is reached; and b) the Unbounded Confidence Model, under which the interaction among discordant pairs of users is allowed even with a negative feedback, either with the rewiring step (RUCM) or without it (UCM). From numerical simulations we find that the new models (UCM and RUCM), unlike the BCM, are able to explain the coexistence of two stable final opinions, often observed in reality.
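A minimal sketch of the basic Bounded Confidence Model update rule that this paper builds on may help; the population size, confidence bound, and convergence parameter below are arbitrary choices, and the RBCM/UCM/RUCM variants are not reproduced.

import random

random.seed(6)
n, epsilon, mu = 200, 0.25, 0.5      # agents, confidence bound, convergence
opinions = [random.random() for _ in range(n)]

for _ in range(100_000):
    i, j = random.sample(range(n), 2)
    if abs(opinions[i] - opinions[j]) < epsilon:    # concordant pair only
        shift = mu * (opinions[j] - opinions[i])
        opinions[i] += shift                        # both agents move toward
        opinions[j] -= shift                        # the pair average
print("surviving opinion clusters:", sorted({round(o, 1) for o in opinions}))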
Social reward shapes attentional biases.
Anderson, Brian A
2016-01-01
Paying attention to stimuli that predict a reward outcome is important for an organism to survive and thrive. When visual stimuli are associated with tangible, extrinsic rewards such as money or food, these stimuli acquire high attentional priority and come to automatically capture attention. In humans and other primates, however, many behaviors are not motivated directly by such extrinsic rewards, but rather by the social feedback that results from performing those behaviors. In the present study, I examine whether positive social feedback can similarly influence attentional bias. The results show that stimuli previously associated with a high probability of positive social feedback elicit value-driven attentional capture, much like stimuli associated with extrinsic rewards. Unlike with extrinsic rewards, however, such stimuli also influence task-specific motivation. My findings offer a potential mechanism by which social reward shapes the information that we prioritize when perceiving the world around us.
A comparison of Monte Carlo generators
Golan, Tomasz
2014-01-01
A comparison of the GENIE, NEUT, NUANCE, and NuWro Monte Carlo neutrino event generators is presented using a set of four observables: proton multiplicity, total visible energy, most energetic proton momentum, and the $\pi^+$ two-dimensional energy vs. cosine distribution.
Monte Carlo Tools for Jet Quenching
Zapp, Korinna
2011-01-01
A thorough understanding of jet quenching on the basis of multi-particle final states and jet observables requires new theoretical tools. This talk summarises the status and prospects of the theoretical description of jet quenching in terms of Monte Carlo generators.
An Introduction to Monte Carlo Methods
Raeside, D. E.
1974-01-01
Reviews the principles of Monte Carlo calculation and random number generation, introducing the direct and rejection methods of sampling as well as variance-reduction procedures. Indicates that the increasing availability of computers makes it possible for a wider audience to learn about these powerful methods. (CC)
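A minimal Python sketch of the rejection method reviewed in the article: propose from a uniform envelope and accept with probability proportional to the target density; the particular target p(x) = 1.5x^2 on [-1, 1] is chosen only for illustration.

import random

random.seed(5)

def rejection_sample(n):
    """Draw n samples from p(x) = 1.5 * x**2 on [-1, 1]."""
    M = 1.5                                  # bound on p over the interval
    samples = []
    while len(samples) < n:
        x = random.uniform(-1.0, 1.0)        # proposal from uniform envelope
        if random.random() < (1.5 * x * x) / M:
            samples.append(x)                # accept; otherwise retry
    return samples

xs = rejection_sample(100_000)
print("sample E[x]   ~", sum(xs) / len(xs))                 # exact value: 0
print("sample E[x^2] ~", sum(x * x for x in xs) / len(xs))  # exact value: 0.6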
Variance Reduction Techniques in Monte Carlo Methods
Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.
2010-01-01
Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the introduction of computers.
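One classic variance reduction technique, antithetic variates, can be sketched in a few lines; the integrand E[exp(U)] with U ~ Uniform(0,1) is a textbook example chosen for illustration and is not taken from this chapter.

import math
import random

random.seed(4)
n = 50_000

# Crude Monte Carlo: n independent draws.
crude = [math.exp(random.random()) for _ in range(n)]

# Antithetic variates: pair each U with 1-U; the negative correlation
# between exp(U) and exp(1-U) lowers the variance of the paired average.
anti = []
for _ in range(n // 2):
    u = random.random()
    anti.append(0.5 * (math.exp(u) + math.exp(1.0 - u)))

def mean_and_se(xs):
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return m, (var / len(xs)) ** 0.5

print("exact      :", math.e - 1)
print("crude      : mean=%.4f se=%.5f" % mean_and_se(crude))
print("antithetic : mean=%.4f se=%.5f" % mean_and_se(anti))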
Scalable Domain Decomposed Monte Carlo Particle Transport
O'Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)
2013-12-05
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.
Monte Carlo methods beyond detailed balance
Schram, Raoul D.; Barkema, Gerard T.
2015-01-01
Monte Carlo algorithms are nearly always based on the concept of detailed balance and ergodicity. In this paper we focus on algorithms that do not satisfy detailed balance. We introduce a general method for designing non-detailed balance algorithms, starting from a conventional algorithm satisfying detailed balance.
An analysis of Monte Carlo tree search
James, S
2017-02-01
Monte Carlo Tree Search (MCTS) is a family of directed search algorithms that has gained widespread attention in recent years. Despite the vast amount of research into MCTS, the effect of modifications on the algorithm, as well as the manner...
Monte Carlo Simulation of Counting Experiments.
Ogden, Philip M.
A computer program to perform a Monte Carlo simulation of counting experiments was written. The program was based on a mathematical derivation which started with counts in a time interval. The time interval was subdivided to form a binomial distribution with no two counts in the same subinterval. Then the number of subintervals was extended to…
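The construction described in the abstract, subdividing the counting interval so finely that each subinterval holds at most one count, can be sketched as follows; the rate, interval, and number of subdivisions below are arbitrary illustrative values.

import random

random.seed(7)

def simulate_counts(rate, t_total, n_sub, n_repeats):
    """Counts in [0, t_total]: n_sub Bernoulli subintervals, each 0 or 1."""
    p = rate * t_total / n_sub            # per-subinterval count probability
    return [sum(1 for _ in range(n_sub) if random.random() < p)
            for _ in range(n_repeats)]

counts = simulate_counts(rate=5.0, t_total=2.0, n_sub=2000, n_repeats=2000)
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
# Both should be close to rate * t_total = 10, the Poisson limit of the
# binomial distribution formed by the subdivision.
print(f"mean = {mean:.2f}, variance = {var:.2f}")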
Biased random walks on multiplex networks
Battiston, Federico; Latora, Vito
2015-01-01
Biased random walks on complex networks are a particular type of walks whose motion is biased on properties of the destination node, such as its degree. In recent years they have been exploited to design efficient strategies to explore a network, for instance by constructing maximally mixing trajectories or by sampling homogeneously the nodes. In multiplex networks, the nodes are related through different types of links (layers or communication channels), and the presence of connections at different layers multiplies the number of possible paths in the graph. In this work we introduce biased random walks on multiplex networks and provide analytical solutions for their long-term properties such as the stationary distribution and the entropy rate. We focus on degree-biased walks and distinguish between two subclasses of random walks: extensive biased walks consider the properties of each node separately at each layer, intensive biased walks deal instead with intrinsically multiplex variables. We study the effect...
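For the single-layer building block of such walks, a short sketch of a degree-biased random walk is given below; the tiny example graph and the bias exponent alpha are illustrative assumptions, and the per-layer multiplex machinery of the paper is not reproduced.

import random
from collections import Counter

random.seed(8)
# A small undirected graph as an adjacency dict (hypothetical example).
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3, 4], 3: [0, 2], 4: [2]}
degree = {i: len(ns) for i, ns in adj.items()}

def biased_walk(start, steps, alpha):
    """Hop from i to neighbor j with probability proportional to k_j**alpha."""
    node, visits = start, Counter()
    for _ in range(steps):
        neigh = adj[node]
        weights = [degree[j] ** alpha for j in neigh]
        node = random.choices(neigh, weights=weights)[0]
        visits[node] += 1
    return visits

for alpha in (0.0, 1.0):
    v = biased_walk(0, 200_000, alpha)
    total = sum(v.values())
    print(f"alpha={alpha}:", {i: round(v[i] / total, 3) for i in sorted(adj)})
# alpha=0 recovers the unbiased walk (occupation proportional to degree);
# positive alpha pushes extra weight onto the hubs.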
Professional Culture and Climate: Addressing Unconscious Bias
Knezek, Patricia
2016-10-01
Unconscious bias reflects expectations or stereotypes that influence our judgments of others (regardless of our own group). Everyone has unconscious biases. The end result of unconscious bias can be an accumulation of advantage or disadvantage that affects the long-term career success of individuals, depending on which biases they are subject to. In order to foster a professional culture and climate, being aware of these unconscious biases and mitigating them is a first step. This is particularly important when judgments are needed, such as in recruitment, the choice of speakers for conferences, and even the review of papers submitted for publication. This presentation will cover how unconscious bias manifests itself, what evidence demonstrates its existence, and ways it can be addressed.
Symmetry as Bias: Rediscovering Special Relativity
Lowry, Michael R.
1992-01-01
This paper describes a rational reconstruction of Einstein's discovery of special relativity, validated through an implementation: the Erlanger program. Einstein's discovery of special relativity revolutionized both the content of physics and the research strategy used by theoretical physicists. This research strategy entails a mutual bootstrapping process between a hypothesis space for biases, defined through different postulated symmetries of the universe, and a hypothesis space for physical theories. The invariance principle mutually constrains these two spaces. The invariance principle enables detecting when an evolving physical theory becomes inconsistent with its bias, and also when the biases for theories describing different phenomena are inconsistent. Structural properties of the invariance principle facilitate generating a new bias when an inconsistency is detected. After a new bias is generated, this principle facilitates reformulating the old, inconsistent theory by treating the latter as a limiting approximation. The structural properties of the invariance principle can be suitably generalized to other types of biases to enable primal-dual learning.
Smart darting diffusion Monte Carlo: Applications to lithium ion-Stockmayer clusters.
Christensen, H M; Jake, L C; Curotto, E
2016-05-07
In a recent investigation [K. Roberts et al., J. Chem. Phys. 136, 074104 (2012)], we have shown that, for a sufficiently complex potential, the Diffusion Monte Carlo (DMC) random walk can become quasiergodic, and we have introduced smart darting-like moves to improve the sampling. In this article, we systematically characterize the bias that smart darting moves introduce in the estimate of the ground state energy of a bosonic system. We then test a simple approach to eliminate such bias completely from the results. The approach is applied to the determination of the ground state of lithium ion-n-dipole clusters in the n = 8-20 range. For the smaller of these, the smart darting diffusion Monte Carlo simulations find the same ground state energy and mixed distribution as the traditional approach, while for larger n systems we find that although the ground state energies agree quantitatively with or without smart darting moves, the mixed distributions can be significantly different. Some evidence is offered to conclude that introducing smart darting-like moves in traditional DMC simulations may produce a more reliable ground state mixed distribution.
Political Accountability, Electoral Control, and Media Bias
Adachi, Takanori; Hizen, Yoichi
2012-01-01
Are anti-establishment mass media really useful in preventing politicians from behaving dishonestly? This paper proposes a voting model for analyzing how differences in the direction of media bias affect politicians' behavior. In particular, the probability of corruption by an incumbent is higher (than that in the case of no media bias) if and only if the mass media have some degree of "anti-incumbent" bias (i.e., information favorable to the incumbent is converted into unfavorable news about...
Electric Control of Exchange Bias Training
Echtenkamp, W.; Binek, Ch.
2013-11-01
Voltage-controlled exchange bias training and tunability are introduced. Isothermal voltage pulses are used to reverse the antiferromagnetic order parameter of magnetoelectric Cr2O3, and thus continuously tune the exchange bias of an adjacent CoPd film. Voltage-controlled exchange bias training is initialized by tuning the antiferromagnetic interface into a nonequilibrium state incommensurate with the underlying bulk. Interpretation of these hitherto unreported effects contributes to new understanding in electrically controlled magnetism.
When Do Children Exhibit a "Yes" Bias?
Okanda, Mako; Itakura, Shoji
2010-01-01
This study investigated whether one hundred and thirty-five 3- to 6-year-old children exhibit a yes bias to various yes-no questions and whether their knowledge status affects the production of a yes bias. Three-year-olds exhibited a yes bias to all yes-no questions such as "preference-object" and "knowledge-object" questions pertaining to…
Monte Carlo radiation transport in external beam radiotherapy
Çeçen, Yiğit
2013-01-01
The use of Monte Carlo in radiation transport is an effective way to predict absorbed dose distributions. Monte Carlo modeling has contributed to a better understanding of photon and electron transport by radiotherapy physicists. The aim of this review is to introduce Monte Carlo as a powerful radiation transport tool. In this review, photon and electron transport algorithms for Monte Carlo techniques are investigated and a clinical linear accelerator model is studied for external beam radiotherapy.
Guidelines for reducing bias in nursing examinations.
Klisch, M L
1994-01-01
As our nation becomes more diversified, many schools of nursing strive to improve the recruitment and retention of English as a Second Language (ESL) and minority nursing students. An important aspect of this commitment to diversity is the reduction of biased items in nursing examinations, with the goal of making the evaluation process fair for all students. The author defines test and item bias, provides examples of biased items, and presents specific guidelines for decreasing item bias in teacher-made nursing examinations. A discussion of the related topic of whether ESL students should be given extended testing time is included.
Bayesian long branch attraction bias and corrections.
Susko, Edward
2015-03-01
Previous work on the star-tree paradox has shown that Bayesian methods suffer from a long branch attraction bias. That work is extended to settings involving more taxa and partially resolved trees. The long branch attraction bias is confirmed to arise more broadly and an additional source of bias is found. A by-product of the analysis is methods that correct for biases toward particular topologies. The corrections can be easily calculated using existing Bayesian software. Posterior support for a set of two or more trees can thus be supplemented with corrected versions to cross-check or replace results. Simulations show the corrections to be highly effective.
Attribution bias and social anxiety in schizophrenia
Amelie M. Achim
2016-06-01
Studies on attribution biases in schizophrenia have produced mixed results, whereas such biases have been more consistently reported in people with anxiety disorders. Anxiety comorbidities are frequent in schizophrenia, in particular social anxiety disorder, which could influence their patterns of attribution biases. The objective of the present study was thus to determine if individuals with schizophrenia and a comorbid social anxiety disorder (SZ+) show distinct attribution biases as compared with individuals with schizophrenia without social anxiety (SZ−) and healthy controls. Attribution biases were assessed with the Internal, Personal, and Situational Attributions Questionnaire in 41 individuals with schizophrenia and 41 healthy controls. Results revealed the lack of the normal externalizing bias in SZ+, whereas SZ− did not significantly differ from healthy controls on this dimension. The personalizing bias was not influenced by social anxiety but was in contrast linked with delusions, with a greater personalizing bias in individuals with current delusions. Future studies on attribution biases in schizophrenia should carefully document symptom presentation, including social anxiety.
Steffen, Jason H
2015-01-01
Motivated by recent discussions, both in private and in the literature, we use a Monte Carlo simulation of planetary systems to investigate sources of bias in determining the mass-radius distribution of exoplanets for the two primary techniques used to measure planetary masses: Radial Velocities (RVs) and Transit Timing Variations (TTVs). We assert that mass measurements derived from these two methods are comparably reliable, as the physics underlying their respective signals is well understood. Nevertheless, their sensitivity to planet mass varies with the properties of the planets themselves. We find that for a given planet size, the RV method tends to find planets with higher mass while the sensitivity of TTVs is more uniform. This "sensitivity bias" implies that a complete census of TTV systems is likely to yield a more robust estimate of the mass-radius distribution, provided there are not important physical differences between planets near and far from mean-motion resonance. We discuss differences in...
Modeling confirmation bias and polarization
Del Vicario, Michela; Scala, Antonio; Caldarelli, Guido; Stanley, H. Eugene; Quattrociocchi, Walter
2017-01-01
Online users tend to select claims that adhere to their system of beliefs and to ignore dissenting information. Confirmation bias, indeed, plays a pivotal role in viral phenomena. Furthermore, the wide availability of content on the web fosters the aggregation of like-minded people, where debates tend to enforce group polarization. Such a configuration might alter the public debate and thus the formation of public opinion. In this paper we provide a mathematical model to study online social debates and the related polarization dynamics. We assume the basic updating rule of the Bounded Confidence Model (BCM) and develop two variations: (a) the Rewire with Bounded Confidence Model (RBCM), in which discordant links are broken until convergence is reached; and (b) the Unbounded Confidence Model, under which interaction among discordant pairs of users is allowed even with a negative feedback, either with the rewiring step (RUCM) or without it (UCM). From numerical simulations we find that the new models (UCM and RUCM), unlike the BCM, are able to explain the coexistence of two stable final opinions, often observed in reality. Lastly, we present a mean-field approximation of the newly introduced models. PMID:28074874
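For orientation, a minimal sketch of the basic BCM update rule that the paper builds on; the confidence bound, convergence speed, population size, and step count below are illustrative assumptions.

```python
# Minimal Bounded Confidence Model (BCM): randomly paired agents move toward
# each other only if their opinions differ by less than a confidence bound.
import numpy as np

rng = np.random.default_rng(1)
eps, mu, n_agents, n_steps = 0.3, 0.5, 200, 20_000
x = rng.random(n_agents)                      # initial opinions in [0, 1]

for _ in range(n_steps):
    i, j = rng.integers(n_agents, size=2)
    if i != j and abs(x[i] - x[j]) < eps:     # interact only within the bound
        x[i], x[j] = x[i] + mu * (x[j] - x[i]), x[j] + mu * (x[i] - x[j])

print(np.round(np.sort(x)[::40], 2))          # surviving opinion clusters
```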
Zero-bias spin separation
Ganichev, Sergey D.; Bel'Kov, Vasily V.; Tarasenko, Sergey A.; Danilov, Sergey N.; Giglberger, Stephan; Hoffmann, Christoph; Ivchenko, Eougenious L.; Weiss, Dieter; Wegscheider, Werner; Gerl, Christian; Schuh, Dieter; Stahl, Joachim; de Boeck, Jo; Borghs, Gustaaf; Prettl, Wilhelm
2006-09-01
The generation, manipulation and detection of spin-polarized electrons in low-dimensional semiconductors are at the heart of spintronics. Pure spin currents, that is, fluxes of magnetization without charge current, are quite attractive in this respect. A paradigmatic example is the spin Hall effect, where an electrical current drives a transverse spin current and causes a non-equilibrium spin accumulation observed near the sample boundary. Here we provide evidence for another effect causing spin currents, one that is fundamentally different from the spin Hall effect. In contrast to the spin Hall effect, it does not require an electric current to flow: without bias, the spin separation is achieved by spin-dependent scattering of electrons in media with suitable symmetry. We show, by free-carrier absorption of terahertz (THz) radiation, that spin currents flow in a wide range of temperatures. Moreover, the experimental results provide evidence that simple electron gas heating by any means is already sufficient to yield spin separation due to spin-dependent energy-relaxation processes.
Media bias under direct and indirect government control: when is the bias smaller?
Abhra Roy
2015-01-01
We present an analytical framework to compare media bias under direct and indirect government control. In this context, we show that direct control can lead to a smaller bias and higher welfare than indirect control. We further show that the size of the advertising market affects media bias under direct control but not under indirect control.
Understanding Unconscious Bias and Unintentional Racism
Moule, Jean
2009-01-01
Unconscious biases affect one's relationships, whether they are fleeting relationships in airports or longer-term relationships between teachers and students, teachers and parents, and teachers and other educators. In this article, the author argues that understanding one's possible biases is essential for developing community in schools…
Belief biases and volatility of assets
Sun, Lei; Zou, Wen-Hui
2014-10-01
Based on an overlapping-generations model, this paper introduces noise traders with belief biases alongside rational traders. Through an equilibrium analysis, the paper examines the volatility of the risky asset. The results show that belief biases, the probability of the economy state, and the domain capability all affect the volatility of the market.
Covariation bias and the return of fear
de Jong, Peter; van den Hout, M. A.; Merckelbach, H.
1995-01-01
Several studies have indicated that phobic fear is accompanied by a covariation bias, i.e., phobic subjects tend to overassociate fear-relevant stimuli and aversive outcomes. Such a covariation bias seems to be a fairly direct and powerful way to confirm danger expectations and enhance fear. Therefore
Bounding the bias of contrastive divergence learning
Fischer, Anja; Igel, Christian
2011-01-01
Optimization based on k-step contrastive divergence (CD) has become a common way to train restricted Boltzmann machines (RBMs). The k-step CD is a biased estimator of the log-likelihood gradient relying on Gibbs sampling. We derive a new upper bound for this bias. Its magnitude depends on k...
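A minimal sketch of the CD-k estimator in question, for a small binary RBM with weights only (bias terms omitted for brevity; shapes, k, and the learning rate are illustrative assumptions).

```python
# k-step contrastive divergence (CD-k) gradient estimate for a binary RBM.
import numpy as np

rng = np.random.default_rng(2)
n_vis, n_hid, k = 6, 4, 1
W = 0.01 * rng.standard_normal((n_vis, n_hid))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sample(p):
    return (rng.random(p.shape) < p).astype(float)

def cd_k(v0):
    """Return the CD-k estimate of d log p / dW for one visible vector v0."""
    ph0 = sigmoid(v0 @ W)                 # positive phase
    v = v0.copy()
    for _ in range(k):                    # k steps of Gibbs sampling
        h = sample(sigmoid(v @ W))
        v = sample(sigmoid(h @ W.T))
    phk = sigmoid(v @ W)                  # negative phase after k steps
    return np.outer(v0, ph0) - np.outer(v, phk)

v0 = sample(np.full(n_vis, 0.5))
W += 0.1 * cd_k(v0)                       # one (biased) stochastic gradient step
```

The bias bounded by the paper is exactly the gap between this k-step estimate and the true log-likelihood gradient, which would require the Gibbs chain to reach equilibrium.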
Length-biased Weighted Maxwell Distribution
Kanak Modi
2015-12-01
The concept of length-biased distribution can be employed in the development of proper models for lifetime data. In this paper, we develop the length-biased form of the Weighted Maxwell distribution (WMD). We study the statistical properties of the derived distribution, including moments, the moment generating function, hazard rate, reverse hazard rate, Shannon entropy, and estimation of parameters.
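The length-biased construction referred to here is standard; for a generic baseline density f with finite mean, it reads as follows (the specific weighted Maxwell baseline used by the authors is not reproduced).

```latex
% Length-biased form f_L of a baseline density f with finite mean mu:
\[
  f_{L}(x) \;=\; \frac{x\, f(x)}{\mu}, \qquad
  \mu = \int_{0}^{\infty} x\, f(x)\, \mathrm{d}x ,
\]
% so moments of f_L follow directly from raw moments of f:
%   E_L[X^r] = E[X^{r+1}] / mu .
```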
Developmental Changes in the Whole Number Bias
Braithwaite, David W.; Siegler, Robert S.
2017-01-01
Many students' knowledge of fractions is adversely affected by whole number bias, the tendency to focus on the separate whole number components (numerator and denominator) of a fraction rather than on the fraction's integrated magnitude (ratio of numerator to denominator). Although whole number bias appears early in the fraction learning process…
Reducing status quo bias in choice experiments
Bonnichsen, Ole; Ladenburg, Jacob
In the stated preference literature, the tendency to choose the alternative representing the status quo situation seems to exceed real-life status quo effects. Accordingly, status quo bias can be a problem. In Choice Experiments, status quo bias is found to be strongly correlated with protest attitudes...
Reducing status quo bias in choice experiments
Bonnichsen, Ole; Ladenburg, Jacob
2015-01-01
to be superior, i.e. a status quo effect. However, in the stated preference literature, the tendency to choose the alternative representing the status quo situation seems to exceed real life status quo effects. Accordingly, status quo bias can be a problem. In the Choice Experiment literature, status quo bias...
Distinctive Characteristics of Sexual Orientation Bias Crimes
Stacey, Michele
2011-01-01
Despite increased attention in the area of hate crime research in the past 20 years, sexual orientation bias crimes have rarely been singled out for study. When these types of crimes are looked at, the studies are typically descriptive in nature. This article seeks to increase our knowledge of sexual orientation bias by answering the question:…
On Measurement Bias in Causal Inference
Pearl, Judea
2012-01-01
This paper addresses the problem of measurement errors in causal inference and highlights several algebraic and graphical methods for eliminating systematic bias induced by such errors. In particular, the paper discusses the control of partially observable confounders in parametric and nonparametric models and the computational problem of obtaining bias-free effect estimates in such models.
Understanding Implicit Bias: What Educators Should Know
Staats, Cheryl
2016-01-01
The desire to ensure the best for children is precisely why educators should become aware of the concept of implicit bias: the attitudes or stereotypes that affect our understanding, actions, and decisions in an unconscious manner. Operating outside of our conscious awareness, implicit biases are pervasive, and they can challenge even the most…
Racially Biased Policing: Determinants of Citizen Perceptions
Weitzer, Ronald; Tuch, Steven A.
2005-01-01
The current controversy surrounding racial profiling in America has focused renewed attention on the larger issue of racial bias by the police. Yet little is known about the extent of police racial bias and even less about public perceptions of the problem. This article analyzes recent national survey data on citizens' views of and reported…
Hybrid Monte Carlo with Chaotic Mixing
Kadakia, Nirag
2016-01-01
We propose a hybrid Monte Carlo (HMC) technique applicable to high-dimensional multivariate normal distributions that effectively samples along chaotic trajectories. The method is predicated on the freedom of choice of the HMC momentum distribution, and due to its mixing properties, exhibits sample-to-sample autocorrelations that decay far faster than those in the traditional hybrid Monte Carlo algorithm. We test the methods on distributions of varying correlation structure, finding that the proposed technique produces superior covariance estimates, is less reliant on step-size tuning, and can even function with sparse or no momentum re-sampling. The method presented here is promising for more general distributions, such as those that arise in Bayesian learning of artificial neural networks and in the state and parameter estimation of dynamical systems.
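For reference, a sketch of the standard Gaussian-momentum HMC step that the proposed chaotic-momentum variant modifies; the chaotic mixing itself is not reproduced here, and the step size, trajectory length, and target are illustrative assumptions.

```python
# Baseline HMC step for a multivariate normal target (Gaussian momenta).
import numpy as np

rng = np.random.default_rng(3)
dim, step, n_leap = 10, 0.1, 20
Sigma_inv = np.eye(dim)                        # assumed target precision

def grad_U(q):                                 # U(q) = 0.5 * q^T Sigma_inv q
    return Sigma_inv @ q

def hmc_step(q):
    p = rng.standard_normal(dim)               # resample momentum
    q_new = q.copy()
    p_new = p - 0.5 * step * grad_U(q)         # initial half kick
    for _ in range(n_leap):                    # leapfrog integration
        q_new = q_new + step * p_new
        p_new = p_new - step * grad_U(q_new)
    p_new = p_new + 0.5 * step * grad_U(q_new) # undo the extra half kick
    dH = (0.5 * (p_new @ p_new - p @ p)
          + 0.5 * (q_new @ Sigma_inv @ q_new - q @ Sigma_inv @ q))
    return q_new if np.log(rng.random()) < -dH else q   # Metropolis accept

q = np.zeros(dim)
for _ in range(1000):
    q = hmc_step(q)
```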
Monte Carlo study of real time dynamics
Alexandru, Andrei; Bedaque, Paulo F; Vartak, Sohan; Warrington, Neill C
2016-01-01
Monte Carlo studies involving real time dynamics are severely restricted by the sign problem that emerges from the highly oscillatory phase of the path integral. In this letter, we present a new method to compute real time quantities on the lattice using the Schwinger-Keldysh formalism via Monte Carlo simulations. The key idea is to deform the path integration domain to a complex manifold where the phase oscillations are mild and the sign problem is manageable. We use the previously introduced "contraction algorithm" to create a Markov chain on this alternative manifold. We substantiate our approach by analyzing the quantum mechanical anharmonic oscillator. Our results are in agreement with the exact ones obtained by diagonalization of the Hamiltonian. The method we introduce is generic and in principle applicable to quantum field theory, albeit very slow. We discuss some possible improvements that should speed up the algorithm.
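A toy illustration of the reweighting identity that the sign problem makes expensive, ⟨O⟩ = ⟨O e^{iθ}⟩_{|p|} / ⟨e^{iθ}⟩_{|p|}; the paper's contour deformation and contraction algorithm are not reproduced, and the sampling density and phase below are illustrative assumptions.

```python
# Phase reweighting: the average sign |<exp(i theta)>| controls the noise;
# when it is tiny, both numerator and denominator drown in variance.
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
x = rng.standard_normal(n)                 # samples from |p| (assumed Gaussian)
theta = 2.5 * x                            # an assumed, strongly oscillating phase

phase = np.exp(1j * theta)
obs = x ** 2                               # example observable
estimate = (obs * phase).mean() / phase.mean()
print("average sign:", abs(phase.mean())) # small value => severe sign problem
print("reweighted <x^2>:", estimate.real)
```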
Multilevel sequential Monte-Carlo samplers
Jasra, Ajay
2016-01-05
Multilevel Monte-Carlo methods provide a powerful computational technique for reducing the computational cost of estimating expectations to a given accuracy. They are particularly relevant for computational problems when approximate distributions are determined via a resolution parameter h, with h=0 giving the theoretical exact distribution (e.g. SDEs or inverse problems with PDEs). The method provides a benefit by coupling samples from successive resolutions, and estimating differences of successive expectations. We develop a methodology that brings Sequential Monte-Carlo (SMC) algorithms within the framework of the Multilevel idea, as SMC provides a natural set-up for coupling samples over different resolutions. We prove that the new algorithm indeed preserves the benefits of the multilevel principle, even if samples at all resolutions are now correlated.
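The telescoping identity underlying the multilevel idea, in generic notation; SMC enters by supplying the coupled samples used to estimate each correction term.

```latex
% With approximations P_0, ..., P_L of increasing resolution (P_L finest):
\[
  \mathbb{E}[P_L] \;=\; \mathbb{E}[P_0]
  \;+\; \sum_{\ell=1}^{L} \mathbb{E}\!\left[P_\ell - P_{\ell-1}\right],
\]
% each correction is estimated from coupled samples, with many cheap samples
% at coarse levels and few expensive ones at fine levels.
```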
Monte Carlo Simulation for Particle Detectors
Pia, Maria Grazia
2012-01-01
Monte Carlo simulation is an essential component of experimental particle physics in all the phases of its life-cycle: the investigation of the physics reach of detector concepts, the design of facilities and detectors, the development and optimization of data reconstruction software, the data analysis for the production of physics results. This note briefly outlines some research topics related to Monte Carlo simulation, that are relevant to future experimental perspectives in particle physics. The focus is on physics aspects: conceptual progress beyond current particle transport schemes, the incorporation of materials science knowledge relevant to novel detection technologies, functionality to model radiation damage, the capability for multi-scale simulation, quantitative validation and uncertainty quantification to determine the predictive power of simulation. The R&D on simulation for future detectors would profit from cooperation within various components of the particle physics community, and synerg...
An enhanced Monte Carlo outlier detection method.
Zhang, Liangxiao; Li, Peiwu; Mao, Jin; Ma, Fei; Ding, Xiaoxia; Zhang, Qi
2015-09-30
Outlier detection is crucial in building a highly predictive model. In this study, we proposed an enhanced Monte Carlo outlier detection method by establishing cross-prediction models based on determinate normal samples and analyzing the distribution of prediction errors individually for dubious samples. One simulated and three real datasets were used to illustrate and validate the performance of our method, and the results indicated that this method outperformed standard Monte Carlo outlier detection in outlier diagnosis. After these outliers were removed, the root mean square error of prediction for validation with Kovats retention indices decreased from 3.195 to 1.655, and the average cross-validation prediction error decreased from 2.0341 to 1.2780. This method helps establish a good model by eliminating outliers.
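A sketch of Monte Carlo cross-prediction screening loosely in the spirit of this abstract; the regression model, split ratio, repetition count, and flagging rule are illustrative assumptions rather than the authors' exact procedure.

```python
# Repeatedly split the data, fit on the training part, and accumulate each
# sample's out-of-split prediction errors; consistently large errors flag
# likely outliers.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n, p = 100, 5
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + 0.1 * rng.standard_normal(n)
y[:3] += 5.0                                  # plant three outliers

errors = [[] for _ in range(n)]
for _ in range(500):                          # Monte Carlo resampling
    train = rng.choice(n, size=int(0.7 * n), replace=False)
    test = np.setdiff1d(np.arange(n), train)
    model = LinearRegression().fit(X[train], y[train])
    for i, e in zip(test, np.abs(y[test] - model.predict(X[test]))):
        errors[i].append(e)

mean_err = np.array([np.mean(e) if e else 0.0 for e in errors])
flagged = np.argsort(mean_err)[-3:]           # highest mean prediction error
print("flagged samples:", np.sort(flagged))   # expect [0 1 2]
```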
Multilevel Monte Carlo Approaches for Numerical Homogenization
Efendiev, Yalchin R.
2015-10-01
In this article, we study the application of multilevel Monte Carlo (MLMC) approaches to numerical random homogenization. Our objective is to compute the expectation of some functionals of the homogenized coefficients, or of the homogenized solutions. This is accomplished within MLMC by considering different sizes of representative volumes (RVEs). Many inexpensive computations with the smallest RVE size are combined with fewer expensive computations performed on larger RVEs. Likewise, when it comes to homogenized solutions, different levels of coarse-grid meshes are used to solve the homogenized equation. We show that, by carefully selecting the number of realizations at each level, we can achieve a speed-up in the computations in comparison to a standard Monte Carlo method. Numerical results are presented for both one-dimensional and two-dimensional test-cases that illustrate the efficiency of the approach.
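A toy illustration of the MLMC mechanics with a generic stand-in for the level hierarchy; here, progressively finer roundings of the integrand play the role of increasing RVE sizes or finer coarse grids, and the sample allocations are illustrative assumptions.

```python
# MLMC toy: estimate E[f(X)] from a coarse baseline plus coupled corrections.
import numpy as np

rng = np.random.default_rng(6)

def f_level(x, level):
    # Level-l approximation: coarser levels round more aggressively
    # (a stand-in for a small-RVE / coarse-grid solve).
    h = 2.0 ** (-level)
    return np.sin(np.round(x / h) * h)

L, N = 4, [40_000, 10_000, 2_500, 600, 150]   # many coarse, few fine samples
est = 0.0
for level in range(L + 1):
    x = rng.standard_normal(N[level])         # coupled: same x at both levels
    if level == 0:
        est += f_level(x, 0).mean()
    else:
        est += (f_level(x, level) - f_level(x, level - 1)).mean()

print("MLMC estimate of E[sin(X)]:", est)     # exact value is 0 for X ~ N(0,1)
```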
Monte Carlo simulations on SIMD computer architectures
Burmester, C.P.; Gronsky, R. [Lawrence Berkeley Lab., CA (United States); Wille, L.T. [Florida Atlantic Univ., Boca Raton, FL (United States). Dept. of Physics
1992-03-01
Algorithmic considerations regarding the implementation of various materials science applications of the Monte Carlo technique on single instruction multiple data (SIMD) computer architectures are presented. In particular, implementation of the Ising model with nearest, next-nearest, and long-range screened Coulomb interactions on the SIMD-architecture MasPar MP-1 (DEC mpp-12000) series of massively parallel computers is demonstrated. Methods of code development which optimize processor array use and minimize inter-processor communication are presented, including lattice partitioning and the use of processor array spanning tree structures for data reduction. Both geometric and algorithmic parallel approaches are utilized. Benchmarks in terms of Monte Carlo updates per second for the MasPar architecture are presented and compared to values reported in the literature from comparable studies on other architectures.
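A NumPy checkerboard (two-color) Metropolis sweep is the textbook data-parallel analogue of the lattice-partitioned updates described here; the lattice size and temperature below are illustrative assumptions.

```python
# Checkerboard Metropolis sweep for the 2D nearest-neighbour Ising model:
# sites of one color have neighbours only of the other color, so a whole
# sublattice can be updated simultaneously (SIMD-style).
import numpy as np

rng = np.random.default_rng(7)
L, beta = 64, 0.4
spins = rng.choice([-1, 1], size=(L, L))
ii, jj = np.indices((L, L))
checker = (ii + jj) % 2

def sweep(spins):
    for color in (0, 1):                       # update one sublattice at a time
        nbr = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0)
               + np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * spins * nbr                 # energy cost of flipping each site
        flip = (rng.random((L, L)) < np.exp(-beta * dE)) & (checker == color)
        spins = np.where(flip, -spins, spins)
    return spins

for _ in range(200):
    spins = sweep(spins)
print("magnetization per site:", spins.mean())
```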
Inhomogeneous Monte Carlo simulations of dermoscopic spectroscopy
Gareau, Daniel S.; Li, Ting; Jacques, Steven; Krueger, James
2012-03-01
Clinical skin-lesion diagnosis uses dermoscopy: 10X epiluminescence microscopy. Skin appearance ranges from black to white, with shades of blue, red, gray, and orange. Color is an important diagnostic criterion for diseases including melanoma. Melanin and blood content and distribution affect the diffuse spectral remittance (300-1000 nm). Skin layers (immersion medium, stratum corneum, spinous epidermis, basal epidermis, and dermis) as well as laterally asymmetric features (e.g., melanocytic invasion) were modeled in an inhomogeneous Monte Carlo model.
Handbook of Markov chain Monte Carlo
Brooks, Steve
2011-01-01
""Handbook of Markov Chain Monte Carlo"" brings together the major advances that have occurred in recent years while incorporating enough introductory material for new users of MCMC. Along with thorough coverage of the theoretical foundations and algorithmic and computational methodology, this comprehensive handbook includes substantial realistic case studies from a variety of disciplines. These case studies demonstrate the application of MCMC methods and serve as a series of templates for the construction, implementation, and choice of MCMC methodology.
Accelerated Monte Carlo by Embedded Cluster Dynamics
Brower, R. C.; Gross, N. A.; Moriarty, K. J. M.
1991-07-01
We present an overview of the new methods for embedding Ising spins in continuous fields to achieve accelerated cluster Monte Carlo algorithms. The methods of Brower and Tamayo and of Wolff are summarized, and variations are suggested for the O(N) models based on multiple embedded Z2 spin components and/or correlated projections. Topological features are discussed for the XY model, and numerical simulations are presented for d=2, d=3, and mean-field theory lattices.
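For concreteness, a sketch of the Wolff single-cluster update for the plain 2D Ising model, the simplest member of the cluster family surveyed here; the embedding into continuous O(N) fields is not reproduced, and the parameters are illustrative assumptions.

```python
# Wolff update: grow a cluster of aligned spins, adding each aligned
# neighbour with probability 1 - exp(-2*beta), then flip the whole cluster.
import numpy as np

rng = np.random.default_rng(8)
L, beta = 32, 0.44
spins = rng.choice([-1, 1], size=(L, L))
p_add = 1.0 - np.exp(-2.0 * beta)

def wolff_update(spins):
    seed = (rng.integers(L), rng.integers(L))
    cluster_spin = spins[seed]
    stack, in_cluster = [seed], {seed}
    while stack:
        i, j = stack.pop()
        for ni, nj in ((i + 1) % L, j), ((i - 1) % L, j), \
                      (i, (j + 1) % L), (i, (j - 1) % L):
            if (ni, nj) not in in_cluster and spins[ni, nj] == cluster_spin \
                    and rng.random() < p_add:
                in_cluster.add((ni, nj))
                stack.append((ni, nj))
    for i, j in in_cluster:                   # flip the whole cluster at once
        spins[i, j] = -spins[i, j]

for _ in range(1000):
    wolff_update(spins)
print("magnetization per site:", spins.mean())
```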
Implicit Social Biases in People With Autism.
Birmingham, Elina; Stanley, Damian; Nair, Remya; Adolphs, Ralph
2015-11-01
Implicit social biases are ubiquitous and are known to influence social behavior. A core diagnostic criterion of autism spectrum disorders (ASD) is abnormal social behavior. We investigated the extent to which individuals with ASD might show a specific attenuation of implicit social biases, using Implicit Association Tests (IATs) involving social (gender, race) and nonsocial (nature, shoes) categories. High-functioning adults with ASD showed intact but reduced IAT effects relative to healthy control participants. We observed no selective attenuation of implicit social (vs. nonsocial) biases in our ASD population. To extend these results, we supplemented our healthy control data with data collected from a large online sample from the general population and explored correlations between autistic traits and IAT effects. We observed no systematic relationship between autistic traits and implicit social biases in our online and control samples. Taken together, these results suggest that implicit social biases, as measured by the IAT, are largely intact in ASD.
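For reference, studies like this typically summarize IAT performance with a D-score; a simplified sketch of that measure follows (the published scoring algorithm adds trial-exclusion and block-weighting rules omitted here, and the latencies are simulated).

```python
# Simplified IAT D-score: block-mean latency difference divided by the
# pooled standard deviation of latencies from both blocks.
import numpy as np

def iat_d(rt_compatible, rt_incompatible):
    rt_all = np.concatenate([rt_compatible, rt_incompatible])
    return (rt_incompatible.mean() - rt_compatible.mean()) / rt_all.std(ddof=1)

rng = np.random.default_rng(9)
rt_c = rng.normal(700, 120, size=60)          # simulated latencies (ms)
rt_i = rng.normal(780, 140, size=60)
print("D =", round(iat_d(rt_c, rt_i), 3))     # positive D => bias effect
```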
Adaptive Variable Bias Magnetic Bearing Control
Johnson, Dexter; Brown, Gerald V.; Inman, Daniel J.
1998-01-01
Most magnetic bearing control schemes use a bias current with a superimposed control current to linearize the relationship between the control current and the force it delivers. With the existence of the bias current, even in no load conditions, there is always some power consumption. In aerospace applications, power consumption becomes an important concern. In response to this concern, an alternative magnetic bearing control method, called Adaptive Variable Bias Control (AVBC), has been developed and its performance examined. The AVBC operates primarily as a proportional-derivative controller with a relatively slow, bias current dependent, time-varying gain. The AVBC is shown to reduce electrical power loss, be nominally stable, and provide control performance similar to conventional bias control. Analytical, computer simulation, and experimental results are presented in this paper.
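A heavily simplified sketch gesturing at the idea of a PD law with a slowly adapting bias current; the gains, adaptation rule, and numbers below are invented for illustration and are not the AVBC algorithm itself.

```python
# Illustrative PD control with a slowly adapting bias current (all values
# and the first-order adaptation rule are invented for this sketch).
kp, kd, dt = 400.0, 8.0, 1e-4                 # assumed PD gains, time step
i_bias, tau_bias = 1.0, 0.5                   # bias current, adaptation time

def control(x, x_dot, i_bias):
    i_ctrl = kp * x + kd * x_dot              # PD control current
    # Slowly track the control-current magnitude to cut the standing power
    # loss that a fixed bias would incur at no load.
    i_bias += (abs(i_ctrl) - i_bias) * dt / tau_bias
    i_bias = max(i_bias, 0.05)                # keep a floor for linearization
    return i_ctrl, i_bias

x, x_dot = 1e-4, 0.0                          # rotor displacement and velocity
for _ in range(10):
    i_ctrl, i_bias = control(x, x_dot, i_bias)
print("control current: %.3f A, bias current: %.3f A" % (i_ctrl, i_bias))
```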
Are all biases missing data problems?
Howe, Chanelle J; Cain, Lauren E; Hogan, Joseph W
2015-09-01
Estimating causal effects is a frequent goal of epidemiologic studies. Traditionally, there have been three established systematic threats to consistent estimation of causal effects. These three threats are bias due to confounders, selection, and measurement error. Confounding, selection, and measurement bias have typically been characterized as distinct types of biases. However, each of these biases can also be characterized as missing data problems that can be addressed with missing data solutions. Here we describe how the aforementioned systematic threats arise from missing data as well as review methods and their related assumptions for reducing each bias type. We also link the assumptions made by the reviewed methods to the missing completely at random (MCAR) and missing at random (MAR) assumptions made in the missing data framework that allow for valid inferences to be made based on the observed, incomplete data.
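A generic toy example of the missing-data view described above: when observation depends only on a fully observed covariate (MAR), inverse-probability weighting recovers the population mean that a naive complete-case analysis misses. The data-generating values are illustrative assumptions, not the review's own analysis.

```python
# Selection bias as missing data: IPW under MAR.
import numpy as np

rng = np.random.default_rng(10)
n = 200_000
z = rng.binomial(1, 0.5, size=n)              # fully observed covariate
y = 1.0 + 2.0 * z + rng.standard_normal(n)    # outcome

p_obs = np.where(z == 1, 0.9, 0.3)            # observation depends on z only (MAR)
observed = rng.random(n) < p_obs

naive = y[observed].mean()                    # biased: over-represents z = 1
ipw = (y[observed] / p_obs[observed]).sum() / (1.0 / p_obs[observed]).sum()
print("true mean: %.3f  naive: %.3f  IPW: %.3f" % (y.mean(), naive, ipw))
```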