2012-01-01
The 5th edition of the "Monts Jura Jazz Festival" will take place at the Esplanade du Lac in Divonne-les-Bains, France on September 21 and 22. This festival, organized by the CERN Jazz Club and supported by the CERN Staff Association, is becoming a major musical event in the Geneva region. International jazz artists like Didier Lockwood and David Reinhardt are part of this year's outstanding program. Full program and e-tickets are available on the festival website. Don't miss this great festival!
Jazz Club
2012-01-01
The 5th edition of the "Monts Jura Jazz Festival" will take place on September 21st and 22nd 2012 at the Esplanade du Lac in Divonne-les-Bains. The festival is organized by the "CERN Jazz Club" with the support of the "CERN Staff Association". It is a major musical event in the French/Swiss area and offers a world-class program with jazz artists such as D. Lockwood and D. Reinhardt. More information on http://www.jurajazz.com.
2008-01-01
The Swiss company Jura's Impressa J5 espresso machine was recognized, for the third time in a row, as the best in a test of fully automatic coffee machines. In 2007 the J5 also received a product design award and the Red Dot design award.
Energy Technology Data Exchange (ETDEWEB)
Mazurek, M. [Institute of Geological Sciences, University of Berne, Berne (Switzerland)]; de Haller, A. [Earth and Environmental Sciences, University of Geneva, Geneva (Switzerland)]
2017-04-15
Data pertinent to pore-water composition in Opalinus Clay in the Mont Terri and Mont Russelin anticlines have been collected over the last 20 years from long-term in situ pore-water sampling in dedicated boreholes, from laboratory analyses on drill cores and from the geochemical characteristics of vein infills. Together with independent knowledge on regional geology, an attempt is made here to constrain the geochemical evolution of the pore-waters. Following basin inversion and the establishment of continental conditions in the late Cretaceous, the Malm limestones acted as a fresh-water upper boundary leading to progressive out-diffusion of salinity from the originally marine pore-waters of the Jurassic low-permeability sequence. Model calculations suggest that at the end of the Palaeogene, pore-water salinity in Opalinus Clay was about half the original value. In the Chattian/Aquitanian, partial evaporation of sea-water occurred. It is postulated that brines diffused into the underlying sequence over a period of several Myr, resulting in an increase of salinity in Opalinus Clay to levels observed today. This hypothesis is further supported by the isotopic signatures of SO₄²⁻ and ⁸⁷Sr/⁸⁶Sr in current pore-waters. These are not simple binary mixtures of sea and meteoric water, but their Cl⁻ and stable water-isotope signatures can be potentially explained by a component of partially evaporated sea-water. After the re-establishment of fresh-water conditions on the surface and the formation of the Jura Fold and Thrust Belt, erosion caused the activation of aquifers embedding the low-permeability sequence, leading to the curved profiles of various pore-water tracers that are observed today. Fluid flow triggered by deformation events during thrusting and folding of the anticlines occurred and is documented by infrequent vein infills in major fault structures. However, this flow was spatially focussed and of limited duration and so did not
How hard were the Jura mountains pushed?
Energy Technology Data Exchange (ETDEWEB)
Hindle, D
2008-09-15
The mechanical twinning of calcite is believed to record past differential stress values, but validating results in the context of past tectonic situations has been rarely attempted. Using assumptions of linear gradients of stress components with depth, a stress gradient based on twinning palaeopiezometry is derived for the Swiss Molasse Basin, the indenter region to the Jura fold and thrust belt. When integrated into a model of the retrodeformed Jura-Molasse system, allowing horizontal stress concentration and conservation along the original taper geometry, the stress profile proves consistent with the position of the Jura-Molasse (ftb-indenter) transition. The model demonstrates mechanically why the Plateau Molasse portion of the Molasse Basin remained relatively undeformed when transmitting tectonic forces applied to the Jura mountains. (author)
How hard were the Jura mountains pushed?
International Nuclear Information System (INIS)
Hindle, D.
2008-01-01
The mechanical twinning of calcite is believed to record past differential stress values, but validating results in the context of past tectonic situations has been rarely attempted. Using assumptions of linear gradients of stress components with depth, a stress gradient based on twinning palaeopiezometry is derived for the Swiss Molasse Basin, the indenter region to the Jura fold and thrust belt. When integrated into a model of the retrodeformed Jura-Molasse system, allowing horizontal stress concentration and conservation along the original taper geometry, the stress profile proves consistent with the position of the Jura-Molasse (ftb-indenter) transition. The model demonstrates mechanically why the Plateau Molasse portion of the Molasse Basin remained relatively undeformed when transmitting tectonic forces applied to the Jura mountains. (author)
2012-01-01
Fifty years ago, a week-long school for physicists took place in Saint-Cergue, in the Jura mountains not far from CERN. Its focus was on using emulsion techniques, but its legacy was much more far-reaching. Last week I was in Fukuoka, Japan, on the last day of a direct descendant – the first Asia–Europe–Pacific School of High-Energy Physics (AEPSHEP). That first small school in 1962 was the precursor to the annual European Schools of High-Energy Physics, which are organised jointly by CERN and the Joint Institute for Nuclear Research (JINR) in countries that are member states of either (or both) of the organisations. They led in turn to the CERN–Latin-American School of High-Energy Physics, first held in Brazil in 2001. The aim of these schools is not only to give young particle physicists the opportunity to learn from leading experts in the field, but also to nurture from the start communication among researchers from different regions. CERN and JI...
Directory of Open Access Journals (Sweden)
Yann Dubois
2012-12-01
La construction européenne a redéfini la signification et les fonctions des frontières nationales. Cet article s’intéresse à cette mutation sous l’angle des pratiques des habitants de l’Arc jurassien franco-suisse. Une enquête par questionnaire a permis de mesurer l’intensité de certaines pratiques spatiales transfrontalières réalisées pendant le temps libre (achats, loisirs, etc.) et d’en déterminer les logiques sous-jacentes. L’effet frontière se manifeste sous la forme d’un différentiel de prix (coût de la vie, taux de change, taxation de certains produits) et d’un différentiel de connaissances (manque d’informations sur le pays voisin, habitude, etc.) qui freinent ou incitent le franchissement de la frontière. L’effet frontière est toutefois atténué dans le contexte territorial étudié par un effet de centralité impliquant un différentiel d’offre (attraction des communes françaises à vocation résidentielle par les centres urbains helvétiques). La combinaison de ces effets explique l’intensité et l’orientation des pratiques spatiales transfrontalières. European integration has been redefining the meaning and functions of national borders. This paper addresses this mutation from the perspective of the inhabitants’ spatial practices in the French-Swiss Jura region. Through a questionnaire survey we have measured the intensity of cross-border spatial practices carried out during free time (purchasing, leisure, etc.) and determined the underlying logics. The border effect appears in the form of a price differential (cost of living, exchange rate, taxation of some goods) and a knowledge differential (lack of information on the neighbouring country, habits, etc.) that curb or stimulate border crossing. The border effect is however mitigated in the spatial context under study by a centrality effect involving a supply differential (attraction of residential French municipalities by Swiss urban centres). The
Maillot, Bertrand; Caer, Typhaine; Souloumiac, Pauline; Nussbaum, Christophe
2014-05-01
The Jura fold-and-thrust belt is classically interpreted as a thin-skinned belt developed over a Triassic décollement, which itself tops Permo-Carboniferous E-W transpressive grabens delimited by N-S strike-slip faults. These faults were reactivated in Eo-Oligocene times as normal faults. Today, the basement is seismically active, suggesting that the Jura belt involves some amount of basement deformation. We tested both thin- and thick-skinned hypotheses using a simple rheological prototype with two potential décollements: a Triassic horizon extending below the Jura and Molasse basin, and the upper-lower crust interface rooted deep south of the Alpine front close to the Penninic nappes region. Using the theory of limit analysis combined with automatic adaptive meshing, we demonstrate that the main Jura Triassic décollement can be activated with the present-day topography if its friction angle is below 5°, a counter-intuitive result that was not foreseen by sandbox models. In contrast, thick-skinned deformation involving all the upper crust is possible either only south of the Jura below the topographic depression of the Molasse basin, if the upper-lower crust interface has an equivalent friction angle above 4.6°, or far beyond it towards the north, if it is weaker. Active thick-skinned thrusting within the Jura belt requires further assumptions on the existence of weak zones, for which a good candidate could be the inherited Eo-Oligocene normal faults, as previously suggested in the literature. We also demonstrate the potential major role of the topographic depression of the Molasse basin in conveying deformation from the Alps to the Jura, and in localising thick-skinned thrusting.
Prospection and catalogue of sites for geothermal probes in the canton of Jura. Final report
International Nuclear Information System (INIS)
Rieben, C.; Adatte, P.
1996-10-01
The aim of this report is to establish the different possibilities of using so-called 'Earth Probes' in karstic areas. In Switzerland, most of the existing vertical Earth Probes are located on the plateau. In calcareous regions, whether in the Jura or in the Prealps, this type of equipment has not yet been used much because of its interference with the karstic aquifers which bear important drinking water resources. However, an important demand exists in these areas, as can be witnessed in the canton of Jura, for example, where some 300 vertical probes have already been drilled. In 1996, almost 10% of the licence requests for heating installations involved geothermal probes. Unfortunately, in the absence of a coherent management policy, both local and regional water resources are being endangered by the multiplication of these geothermal probes. It therefore appears necessary to elaborate a site implantation assessment methodology specific to limestone areas. Such an assessment methodology should not only further groundwater conservation, but also promote geothermal energy use in the future. This multicriteria approach should integrate karstic aquifer specificities by taking into account both above-ground factors and underground parameters. The result of applying this approach in the canton of Jura is shown on a 1:50'000 scale map where potential Earth Probe sites are localised, and on a 1:5'000 scale land registry map of the city of Delémont. In comparison with the current unmanaged situation, this approach has the effect of restricting the use of Earth Probes, as it would forbid their use in eight townships. However, it must be underlined that 97% of the Jura population lives in built-up areas where the unrestricted exploitation of geothermal energy is permitted. The application of this approach to other karstic areas would enable a wide expansion of the implantation possibilities of this type of equipment. (author) figs
Klusen und verwandte Formen im Schweizer Jura
Directory of Open Access Journals (Sweden)
R. Hantke
4th, in view of the fact that the joints are known to have been caused by recent plate-tectonic processes, the same must be assumed for the kluses: the latter owe their genesis to complicated geologic lineaments, folds and shear faults. This fact has practical consequences: during the construction of tunnels underneath a klus, one has to take into consideration that the disturbance in the landscape represented by a klus may well reach geologically far into the basement. 5th, the erosion of the kluses occurred in parallel to the direction of the joints. In this instance, the debris produced by the tectonic processes and by frost action was removed by the mechanical and chemical action of the water. During the cold times, and cold spells during warm times, this water was mainly melt-water. 6th, special studies are necessary for the determination of the quantity of debris that was removed. The time-span available for this removal is much longer than commonly assumed: it begins with the first tectonic folding, in the Jura mountains already in the mid-Miocene, 15 Ma ago.
Natural radionuclides concentration in agricultural products and water from the Monte Alegre region
International Nuclear Information System (INIS)
Gouvea, Vandir A.; Melo, Vicente P.; Binns, Donald A.C.; Santos, Pedro L. dos
1997-01-01
Measurements to determine the content of natural radionuclides were performed on agricultural products in the Monte Alegre region of the Brazilian Central Amazon Basin, for the soil-plant transfer calculation. These measurements were concentrated in the Ingles de Souza agricultural settlement, where several uranium and thorium occurrences exist in the geological formations called Monte Alegre and Faro. The values obtained in foodstuffs cultivated in the anomalous region are 10 times higher than those observed in the Alenquer region, which was chosen as a reference because of its low level of natural radioactivity and its proximity to the anomalous region. (author). 9 refs., 4 tabs
Steen Magnussen
2009-01-01
Areas burned annually in 29 Canadian forest fire regions show a patchy and irregular correlation structure that significantly influences the distribution of annual totals for Canada and for groups of regions. A binary Markov chain Monte Carlo (MCMC) model is constructed for the purpose of joint simulation of regional areas burned in forest fires. For each year the MCMC...
Spatial variability of caesium-137 activities in soils in the Jura mountains
International Nuclear Information System (INIS)
Pimou-Heumou, G.; Lucot, E.; Crini, N.; Briot, M.; Badot, P.M.
2011-01-01
275 soil samples were taken in the catchment area of the upper part of the Doubs river, located in the Jura mountains, according to a sampling strategy designed to evaluate the extent of the spatial variability of ¹³⁷Cs activities and to identify its main sources. ¹³⁷Cs activities ranged between about 1000 and 12000 Bq·m⁻², with an average of approximately 3600 Bq·m⁻². The spatial variability of the contamination is high: ¹³⁷Cs activity shows statistically significant links with altitude, soil organic matter and land cover, whereas the other studied parameters, i.e. soil type and topographic position, do not constitute significant sources of variation. These results are discussed in terms of evaluation of the radioactive contamination on a regional scale. They show that, to be satisfactory, a sampling strategy must necessarily take into account the various types of land cover. (authors)
Energy Technology Data Exchange (ETDEWEB)
Belicev, P [Vojnotehnicki Inst., Belgrade (Yugoslavia)
1988-07-01
An outline of the problems encountered in the multigroup calculations of the neutron transport in the resonance region is given. The difference between subgroup and multigroup approximation is described briefly. The features of the Monte Carlo code SUBGR are presented. The results of the calculations of the neutron transmission and albedo for infinite iron slabs are given. (author)
Paleomagnetism and tectonics of the Jura arcuate mountain belt in France and Switzerland
Gehring, Andreas U.; Keller, Peter; Heller, Friedrich
1991-02-01
Goethite and hematite in ferriferous oolitic beds of Callovian age from the Jura mountains (Switzerland, France) carry either pre- and/or post-tectonic magnetization. The frequent pre-tectonic origin of goethite magnetization indicates a temperature range during formation of the arcuate Jura mountain belt below the goethite Néel temperature of about 100°C. The scatter of the pre-tectonic paleomagnetic directions ( D = 11.5° E, I = 55.5°; α95 = 4.7) which reside both in goethite and hematite, provides strong evidence that the arcuate mountain belt was shaped without significant rotation. The paleomagnetic results support tectonic thin-skinned models for the formation of the Jura mountain belt.
From the central Jura mountains to the molasse basin (France and Switzerland)
Energy Technology Data Exchange (ETDEWEB)
Sommaruga, A. [Institut de Géophysique, University of Lausanne, Bâtiment Amphipôle, Lausanne (Switzerland)
2011-07-01
This illustrated article discusses the geology of the area covering the Swiss Jura chain of mountains and the molasse basin which is to be found to the south-east of the mountain chain. The geological setting with the Jura Mountains and the molasse basin are described, as are the rocks to be found there. Their structures and faults are discussed in detail and their origin and formation are described. The paper presents a number of geological profiles and maps. The methods used to explore these structures are noted, which also indicated the presence of permo-carboniferous troughs in the molasse basin.
From the central Jura mountains to the molasse basin (France and Switzerland)
International Nuclear Information System (INIS)
Sommaruga, A.
2011-01-01
This illustrated article discusses the geology of the area covering the Swiss Jura chain of mountains and the molasse basin which is to be found to the south-east of the mountain chain. The geological setting with the Jura Mountains and the molasse basin are described, as are the rocks to be found there. Their structures and faults are discussed in detail and their origin and formation are described. The paper presents a number of geological profiles and maps. The methods used to explore these structures are noted, which also indicated the presence of permo-carboniferous troughs in the molasse basin
Salas, Ana; Mercado, María I.; Zampini, Iris C.; Ponessa, Graciela I.; Isla, María I.
2016-05-01
Propolis production by honey bees is the result of a selective harvest of exudates from plants in the neighborhood of the hive. This product is used in Argentina as a food supplement and alternative medicine. The aim of this study was to determine the botanical origin of propolis from the arid Monte regions of Argentina using rapid histochemical techniques and by comparing TLC and HPLC-DAD chromatographic profiles with extract profiles obtained from Zuccagnia punctata, Larrea divaricata and Larrea cuneifolia, plant species that grow in the study area in a natural community named "jarillal". Microscopical analysis revealed the presence of several Z. punctata structures, such as multicellular trichomes, leaflets, stems and young leaves. The propolis was remarkably rich in two bioactive chalcones that are also present in Z. punctata resin; these compounds can be regarded as possible markers for propolis identification and justify its use as a dietary supplement, functional food and medicinal product. This study indicates that the only source of resin used by honey bees to produce propolis in the Monte region of Argentina is Z. punctata, a native shrub widespread in this phytogeographical region, while the more abundant species in the region (L. divaricata and L. cuneifolia) were not found, indicating that this propolis can be defined as a mono-resin, Zuccagnia-type propolis.
Directory of Open Access Journals (Sweden)
Zhong Wu
2017-04-01
Since AASHTO released the Mechanistic-Empirical Pavement Design Guide (MEPDG) for public review in 2004, many highway research agencies have performed sensitivity analyses using the prototype MEPDG design software. The information provided by sensitivity analysis is essential for design engineers to better understand the MEPDG design models and to identify important input parameters for pavement design. In the literature, different studies have been carried out based on either local or global sensitivity analysis methods, and sensitivity indices have been proposed for ranking the importance of the input parameters. In this paper, a regional sensitivity analysis method, Monte Carlo filtering (MCF), is presented. The MCF method retains many advantages of global sensitivity analysis while focusing on the regional sensitivity of the MEPDG model near the design criteria rather than over the entire problem domain. It is shown that the information obtained from the MCF method is more helpful and accurate in guiding design engineers in pavement design practices. To demonstrate the proposed regional sensitivity method, a typical three-layer flexible pavement structure was analyzed at input level 3. A detailed procedure to generate Monte Carlo runs using the AASHTOWare Pavement ME Design software was provided. The results in the example show that the sensitivity ranking of the input parameters in this study reasonably matches that in a previous study under a global sensitivity analysis. Based on the analysis results, the strengths, practical issues, and applications of the MCF method were further discussed.
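The Monte Carlo filtering idea described in the abstract can be sketched on a toy model (the model, inputs and threshold below are hypothetical stand-ins, not MEPDG software outputs): sample the inputs, run the model, split the runs into "behavioral" and "non-behavioral" against the design criterion, and rank each input by how strongly that split separates its distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def ks_stat(a, b):
    """Two-sample Kolmogorov-Smirnov statistic (max ECDF distance)."""
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

n = 10_000
# Hypothetical inputs standing in for pavement design parameters
x = rng.uniform(0.0, 1.0, size=(n, 3))
# Toy response: input 0 dominates the predicted distress
y = 2.0 * x[:, 0] + 0.3 * x[:, 1] + 0.05 * x[:, 2] + rng.normal(0, 0.05, n)

ok = y < 1.0   # "behavioral" runs: distress below the design criterion

# MCF: an input is influential near the criterion when the behavioral
# split separates its distribution strongly (large KS statistic)
scores = [ks_stat(x[ok, i], x[~ok, i]) for i in range(3)]
ranking = np.argsort(scores)[::-1]
print(ranking[0])   # input 0 ranks as most influential
```

Because the split is taken at the design criterion, the ranking reflects sensitivity near the region of interest rather than over the whole input domain, which is the distinction the paper draws against global methods.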
Monte-Carlo simulations of neutron shielding for the ATLAS forward region
Stekl, I; Kovalenko, V E; Vorobel, V; Leroy, C; Piquemal, F; Eschbach, R; Marquet, C
2000-01-01
The effectiveness of different types of neutron shielding for the ATLAS forward region has been studied by means of Monte-Carlo simulations and compared with the results of an experiment performed at the CERN PS. The simulation code is based on GEANT, FLUKA, MICAP and GAMLIB. GAMLIB is a new library including processes with gamma-rays produced in (n, gamma) and (n, n'gamma) neutron reactions, and is interfaced to the MICAP code. The effectiveness of shielding against neutrons and gamma-rays composed of different materials, such as pure polyethylene, borated polyethylene, lithium-filled polyethylene, lead and iron, was compared. The results from the Monte-Carlo simulations were compared to the results obtained from the experiment. The simulation results reproduce the experimental data well. This agreement supports the correctness of the simulation code used to describe the generation, spreading and absorption of neutrons (up to thermal energies) and gamma-rays in the shielding materials....
Hindle, David; Kley, Jonas
2016-04-01
be dealt with by conditioning the top surface of the model to "trend" towards the present-day topographic profile along the cross section, as a crude proxy for erosion. In the case of the Jura-Molasse fold-thrust belt, the basal boundary condition also very likely plays a significant role in the thrust belt's evolution. A large extra component of regional basement uplift appears to have occurred across the Swiss Molasse and Jura, according to geological indicators such as the present-day position and altitude of Miocene marine sedimentary units. In general, the Jura-Molasse example is thus highly instructive regarding the difficulties of incorporating all necessary geological realities into a numerical forward model of a specific geological situation. Despite all this, we find that using a numerical forward model of minimal complexity (three rheological layers, as opposed to the at least eight suggested by the rheological stratigraphy of the chain) with no pre-existing weaknesses to predetermine the locations of faults, we easily achieve a good facsimile of at least the distribution of shortening across the Jura-Molasse system. Localisation of shortening occurs on approximately the same number of major faults as in reality, and their positions in the section are also broadly similar to those known from field data. Dynamic parameters such as stress evolution, recovered from the model, are also in broad agreement with palaeostress level indicators from the Jura-Molasse. In our first experiments, we have used a grid of variations of basic mechanical parameters (friction of the basal layer and strength of the main limestone unit) to map the model responses over a range of parameter space and search for the best-fitting response. The potential to automate such searches and continuously optimise the fit to real data is clearly also there, given sufficient computer capacity. Hence, we can envisage a time when cross-section balancing will be combined with and improved by a subsequent stage of forward
Geomorphological approach in karstic domain: importance of underground water in the Jura mountains.
Rabin, Mickael; Sue, Christian; Champagnac, Jean Daniel; Bichet, Vincent; Carry, Nicolas; Eichenberger, Urs; Mudry, Jacques; Valla, Pierre
2014-05-01
The Jura mountain belt is the north-westernmost and one of the most recent expressions of the Alpine orogeny (i.e. Mio-Pliocene times). The Jura has been well studied from a structural framework, but still remains the source of scientific debates, especially regarding its current and recent tectonic activity [Laubscher, 1992; Burkhard and Sommaruga, 1998]. It is deemed to be still in a shortening state, according to levelling data [Jouanne et al., 1998] and neotectonic observations [Madritsch et al., 2010]. However, the few GPS data available on the Jura do not show evidence of shortening, but rather a low-magnitude extension parallel to the arc [Walpersdorf et al., 2006]. Moreover, the traditionally accepted assumption of a collisional activity of the Jura raises the question of its geodynamic origin. The Western Alps are themselves in a post-collisional regime and characterized by a noticeable isostasy-related extension, due to the interaction between buoyancy forces and external dynamics [Sue et al., 2007]. Quantitative morphotectonic approaches have been increasingly used in active mountain belts to infer relationships between climate and tectonics in landscape evolution [Whipple, 2009]. In this study, we propose to apply morphometric tools to calcareous bedrock in a slowly deforming mountain belt. In particular, we have used watershed metrics and associated river-profile analysis to quantify the degree and nature of the equilibrium between tectonic forcing and the fluvial erosional agent [Kirby and Whipple, 2001]. Indeed, long-term river-profile evolution is controlled by climatic and tectonic forcing through the following expression [Whipple and Tucker, 1999]: S = (U/K)^(1/n) A^(-m/n) (with U: uplift rate; K: empirical erodibility factor, a function of hydrological and geological settings; A: drained area; m, n: empirical parameters). We present here a systematic analysis of river profiles applied to the main drainage system of the
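The steady-state stream-power expression quoted in this abstract can be sketched numerically. The parameter values below are purely illustrative (not calibrated to the Jura); the point is the qualitative behaviour: slope decays with drainage area, and for n = 1 a change in uplift rate U rescales the whole profile.

```python
import numpy as np

def steady_state_slope(A, U, K, m=0.5, n=1.0):
    """Steady-state channel slope from the stream-power law,
    S = (U/K)**(1/n) * A**(-m/n)   [Whipple and Tucker, 1999].
    A: drainage area, U: uplift rate, K: erodibility (units must be
    mutually consistent; values here are illustrative only)."""
    return (U / K) ** (1.0 / n) * A ** (-m / n)

A = np.logspace(5, 9, 5)                 # drainage areas, m^2
S = steady_state_slope(A, U=1e-4, K=1e-5)
# For n = 1, doubling the uplift rate steepens every point of the
# profile by the same factor of 2:
S2 = steady_state_slope(A, U=2e-4, K=1e-5)
print(S2 / S)   # uniform factor of 2
```

This is why, in morphometric studies of this kind, deviations of observed slope-area scaling from a reference profile are read as signals of differential uplift or erodibility.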
Weatherill, Graeme; Burton, Paul W.
2010-09-01
The Aegean is the most seismically active and tectonically complex region in Europe. Damaging earthquakes have occurred here throughout recorded history, often resulting in considerable loss of life. The Monte Carlo method of probabilistic seismic hazard analysis (PSHA) is used to determine the level of ground motion likely to be exceeded in a given time period. Multiple random simulations of seismicity are generated to calculate, directly, the ground motion for a given site. Within the seismic hazard analysis we explore the impact of different seismic source models, incorporating both uniform zones and distributed seismicity. A new, simplified seismic source model, derived from seismotectonic interpretation, is presented for the Aegean region. This is combined into the epistemic uncertainty analysis alongside existing source models for the region, and models derived by a K-means cluster analysis approach. Seismic source models derived using the K-means approach offer a degree of objectivity and reproducibility into the otherwise subjective approach of delineating seismic sources using expert judgment. Similar review and analysis are undertaken for the selection of peak ground acceleration (PGA) attenuation models, incorporating into the epistemic analysis Greek-specific models, European models and a Next Generation Attenuation model. Hazard maps for PGA on a "rock" site with a 10% probability of being exceeded in 50 years are produced, and different source and attenuation models are compared. These indicate that Greek-specific attenuation models, with their smaller aleatory variability terms, produce lower PGA hazard, whilst recent European models and the Next Generation Attenuation (NGA) model produce similar results. The Monte Carlo method is extended further to assimilate epistemic uncertainty into the hazard calculation, thus integrating across several appropriate source and PGA attenuation models. Site condition and fault-type are also integrated into the hazard
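The Monte Carlo PSHA approach described above can be sketched as follows. Everything numeric here is a toy stand-in (a single synthetic source with a truncated Gutenberg-Richter magnitude law and an invented attenuation relation, not a real GMPE or the paper's source model): simulate many 50-year catalogues, attenuate each event to the site with aleatory scatter, and read the 10%-in-50-years PGA directly off the distribution of simulated maxima.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_catalog(rate_per_yr, years, b=1.0, m_min=4.5, m_max=7.5):
    """Magnitudes for one simulated catalogue: Poisson event count,
    doubly truncated Gutenberg-Richter magnitudes via inverse CDF."""
    n = rng.poisson(rate_per_yr * years)
    u = rng.uniform(size=n)
    beta = b * np.log(10)
    span = 1.0 - np.exp(-beta * (m_max - m_min))
    return m_min - np.log(1.0 - u * span) / beta

def pga(m, sigma=0.6):
    """Toy attenuation: ln(PGA in g) linear in magnitude at a fixed
    site distance, with aleatory scatter (stands in for a real GMPE)."""
    ln_pga = -6.0 + 0.9 * m + rng.normal(0.0, sigma, size=m.size)
    return np.exp(ln_pga)

# Many 50-year simulations; keep the largest PGA seen in each
sims = 5000
max_pga = np.array([pga(sample_catalog(0.2, 50)).max(initial=0.0)
                    for _ in range(sims)])

# PGA with 10% probability of exceedance in 50 years = 90th percentile
# of the simulated 50-year maxima (~475-year return period)
pga_475 = np.quantile(max_pga, 0.90)
print(round(pga_475, 3))
```

Epistemic uncertainty enters naturally in this framework: each simulation can draw its source model and attenuation model from the alternatives, so the hazard curve integrates over them, which is the extension the abstract describes.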
Energy Technology Data Exchange (ETDEWEB)
NONE
2005-07-01
This collection of three short articles presents four examples of installations using renewable forms of energy that are operated by the Bernese power utility BKW-FMB in Switzerland. Brief details are given on the company, one of the largest electricity utilities in Switzerland. The first article deals with the photovoltaic (PV) installations on the roof of the new national soccer stadium in Berne and one of the oldest Swiss PV installations, on Mont-Soleil in the Jura mountains. Also covered are the efforts made by the utility and its partners in marketing the power under the trade name '1 to 1 energy'. Information for the general public is provided in visitor centres and also by the solar-powered boat on the Lake of Bienne. A further article deals with wind power from the Mont-Crosin site in the Jura mountains. The third article describes a hydro-electric power station on the river Aare which, thanks to measures taken in the ecological area, carries the 'Naturemade Star' label for the power it produces.
Hybrid Monte Carlo-Diffusion Method For Light Propagation in Tissue With a Low-Scattering Region
Hayashi, Toshiyuki; Kashio, Yoshihiko; Okada, Eiji
2003-06-01
The heterogeneity of the tissues in a head, especially the low-scattering cerebrospinal fluid (CSF) layer surrounding the brain, has previously been shown to strongly affect light propagation in the brain. The radiosity-diffusion method, in which the light propagation in the CSF layer is assumed to obey the radiosity theory, has been employed to predict the light propagation in head models. Although the CSF layer is assumed to be a nonscattering region in the radiosity-diffusion method, fine arachnoid trabeculae cause faint scattering in the CSF layer in real heads. A novel approach, the hybrid Monte Carlo-diffusion method, is proposed to calculate head models that include a low-scattering region in which the light propagation obeys neither the diffusion approximation nor the radiosity theory. The light propagation in the high-scattering region is calculated by means of the diffusion approximation solved by the finite-element method, and that in the low-scattering region is predicted by the Monte Carlo method. The intensity and mean time of flight of the detected light for the head model with a low-scattering CSF layer calculated by the hybrid method agreed well with those from the Monte Carlo method, whereas the results calculated by means of the diffusion approximation alone included considerable error caused by the effect of the CSF layer. In the hybrid method, the time-consuming Monte Carlo calculation is employed only for the thin CSF layer, and hence the computation time of the hybrid method is dramatically shorter than that of the Monte Carlo method.
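The Monte Carlo half of such a hybrid scheme tracks individual photons. A minimal random-walk sketch (isotropic scattering in an infinite homogeneous medium; the optical coefficients are hypothetical, not the paper's values) shows why the low-scattering CSF-like region needs photon tracking: free paths between scattering events are long there, violating the diffusion approximation's assumption of many scattering events per unit distance.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_walk(mu_s, mu_a, max_steps=10_000):
    """Track one photon in an infinite homogeneous medium: exponential
    free paths between isotropic scattering events; at each event the
    photon survives with probability mu_s / (mu_s + mu_a) (the albedo)."""
    mu_t = mu_s + mu_a
    albedo = mu_s / mu_t
    path, steps = 0.0, 0
    for _ in range(max_steps):
        path += rng.exponential(1.0 / mu_t)
        steps += 1
        if rng.uniform() > albedo:   # absorbed: walk ends
            break
    return path, steps

# Hypothetical optical coefficients (mm^-1): a CSF-like low-scattering
# region versus a brain-like high-scattering region.
csf = [random_walk(mu_s=0.3, mu_a=0.002) for _ in range(2000)]
brain = [random_walk(mu_s=10.0, mu_a=0.02) for _ in range(2000)]

mfp_csf = np.mean([p / s for p, s in csf])     # mean free path per step
mfp_brain = np.mean([p / s for p, s in brain])
print(mfp_csf > mfp_brain)
```

The hybrid method exploits exactly this contrast: the expensive per-photon tracking is confined to the thin low-scattering layer, while the finite-element diffusion solver handles the bulk.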
Monte Carlo Simulation Modeling of a Regional Stroke Team's Use of Telemedicine.
Torabi, Elham; Froehle, Craig M; Lindsell, Christopher J; Moomaw, Charles J; Kanter, Daniel; Kleindorfer, Dawn; Adeoye, Opeolu
2016-01-01
The objective of this study was to evaluate operational policies that may improve the proportion of eligible stroke patients within a population who would receive intravenous recombinant tissue plasminogen activator (rt-PA) and minimize time to treatment in eligible patients. In the context of a regional stroke team, the authors examined the effects of staff location and telemedicine deployment policies on the timeliness of thrombolytic treatment, and estimated the efficacy and cost-effectiveness of six different policies. A process map comprising the steps from recognition of stroke symptoms to intravenous administration of rt-PA was constructed using data from published literature combined with expert opinion. Six scenarios were investigated: telemedicine deployment (none, all, or outer-ring hospitals only) and staff location (center of region or anywhere in region). Physician locations were randomly generated based on their zip codes of residence and work. The outcomes of interest were onset-to-treatment (OTT) time, door-to-needle (DTN) time, and the proportion of patients treated within 3 hours. A Monte Carlo simulation of the stroke team care-delivery system was constructed based on a primary data set of 121 ischemic stroke patients who were potentially eligible for treatment with rt-PA. With the physician located randomly in the region, deploying telemedicine at all hospitals in the region (compared with partial or no telemedicine) would result in the highest rates of treatment within 3 hours (80% vs. 75% vs. 70%) and the shortest OTT (148 vs. 164 vs. 176 minutes) and DTN (45 vs. 61 vs. 73 minutes) times. However, locating the on-call physician centrally coupled with partial telemedicine deployment (five of the 17 hospitals) would be most cost-effective with comparable eligibility and treatment times. Given the potential societal benefits, continued efforts to deploy telemedicine appear warranted. Aligning the incentives between those who would have to fund
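A policy comparison of this kind can be sketched with a toy Monte Carlo over per-patient timelines. All timing distributions and parameters below are hypothetical placeholders, not the study's calibrated values:

```python
import random
import statistics

def simulate_dtn(n_patients, telemedicine, rng):
    """Toy door-to-needle (DTN) model: imaging/labs run in parallel with
    obtaining the stroke physician's input, followed by a fixed bolus-prep
    time. All distributions are illustrative assumptions, not study data."""
    times = []
    for _ in range(n_patients):
        workup = rng.gauss(35, 8)             # imaging + labs, minutes
        if telemedicine:
            consult = rng.uniform(5, 15)      # remote consult available quickly
        else:
            consult = rng.uniform(15, 60)     # physician travels from a random location
        times.append(max(workup, consult) + 10)  # slower parallel branch + bolus prep
    return times

rng = random.Random(42)
tele = simulate_dtn(5_000, True, rng)
drive = simulate_dtn(5_000, False, rng)
print(f"median DTN: telemedicine {statistics.median(tele):.0f} min, "
      f"travel {statistics.median(drive):.0f} min")
```

Even this toy version reproduces the qualitative finding that removing the travel delay shortens median DTN time.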
International Nuclear Information System (INIS)
Wiacek, U.; Krynicka, E.
2005-02-01
Monte Carlo simulations of the pulsed neutron experiment in two-region systems (two concentric spheres and two coaxial finite cylinders) are presented. The MCNP code is used. Aqueous solutions of H₃BO₃ or KCl are used in the inner region. The outer region is a Plexiglas moderator. Standard data libraries of the thermal neutron scattering cross-sections of hydrogen in hydrogenous substances are used. The time-dependent thermal neutron transport is simulated for a constant size of the inner region and a variable external size of the surrounding outer region. The time decay constant of the thermal neutron flux in the system is found in each simulation. The results of the simulations are compared with results of real pulsed neutron experiments on the corresponding systems. (author)
Modeling the cathode region of noble gas mixture discharges using Monte Carlo simulation
International Nuclear Information System (INIS)
Donko, Z.; Janossy, M.
1992-10-01
A model of the cathode dark space of DC glow discharges was developed in order to study the effects caused by mixing small amounts (≤2%) of other noble gases (Ne, Ar, Kr and Xe) to He. The motion of charged particles was described by Monte Carlo simulation. Several discharge parameters (electron and ion energy distribution functions, electron and ion current densities, reduced ionization coefficients, and current density-voltage characteristics) were obtained. Small amounts of admixtures were found to modify significantly the discharge parameters. Current density-voltage characteristics obtained from the model showed good agreement with experimental data. (author) 40 refs.; 14 figs
Seismicity, state of stress and induced seismicity in the molasse basin and Jura (N-Switzerland)
Energy Technology Data Exchange (ETDEWEB)
Deichmann, N. [Schweizerischer Erdbebendienst, ETH Zuerich, Zuerich (Switzerland); Burlini, L. [Institut of Geology, ETH Zuerich, Zuerich (Switzerland)
2010-07-01
This illustrated report for the Swiss Federal Office of Energy (SFOE) is one of a series of appendices dealing with the potential for geological sequestration of CO{sub 2} in Switzerland. This report takes a look at the seismicity, state of stress and induced seismicity in the molasse basin and Jura Mountains in northern Switzerland. Data collected since 1983 by the Swiss Earthquake Service and the National Cooperative for the Disposal of Radioactive Wastes NAGRA on the tectonics and seismic properties of North-western Switzerland is noted. The results are illustrated with a number of maps and graphical representations and are discussed in detail. Cases of induced seismicity as resulting from both natural and man-made causes are examined.
Kouznetsov, A.; Cully, C. M.
2017-12-01
During enhanced magnetic activity, large ejections of energetic electrons from the radiation belts are deposited in the upper polar atmosphere, where they play important roles in its physical and chemical processes, including the subionospheric propagation of VLF signals. Electron deposition affects D-region ionization, which is estimated from ionization rates derived from energy deposition. We present a model of D-region ion production caused by an arbitrary (in energy and pitch angle) distribution of fast (10 keV - 1 MeV) electrons. The model relies on a set of pre-calculated results obtained using a general Monte Carlo approach with the latest version of the MCNP6 (Monte Carlo N-Particle) code for explicit electron tracking in magnetic fields. By expressing those results as ionization yield functions, the pre-calculated results are extended to cover arbitrary magnetic field inclinations and atmospheric density profiles, allowing computation of ionization-rate altitude profiles between 20 and 200 km at any geographic point of interest and date/time by adopting results from an external atmospheric density model (e.g. NRLMSISE-00). The pre-calculated MCNP6 results are stored in a CDF (Common Data Format) file, and an IDL routine library provides an end-user interface to the model.
Region-oriented CT image representation for reducing computing time of Monte Carlo simulations
International Nuclear Information System (INIS)
Sarrut, David; Guigues, Laurent
2008-01-01
Purpose. We propose a new method for efficient particle transportation in voxelized geometry for Monte Carlo simulations. We describe its use for calculating dose distribution in CT images for radiation therapy. Material and methods. The proposed approach, based on an implicit volume representation named segmented volume, coupled with an adapted segmentation procedure and a distance map, allows us to minimize the number of boundary crossings, which slow down the simulation. The method was implemented with the GEANT4 toolkit and compared to four other methods: one box per voxel, parameterized volumes, octree-based volumes, and nested parameterized volumes. For each representation, we compared dose distribution, time, and memory consumption. Results. The proposed method allows us to decrease computational time by up to a factor of 15, while keeping memory consumption low, and without any modification of the transportation engine. The speed-up depends on the geometry complexity and the number of different materials used. We obtained an optimal number of steps by removing all unnecessary steps between adjacent voxels sharing a similar material. However, the cost of each step is increased. When the number of steps cannot be decreased enough, due, for example, to a large number of material boundaries, the method is not suitable. Conclusion. This feasibility study shows that optimizing the representation of an image in memory potentially increases computing efficiency. We used the GEANT4 toolkit, but other Monte Carlo simulation codes could potentially be used. The method introduces a tradeoff between speed and geometry accuracy, allowing a gain in computation time. However, simulations with GEANT4 remain slow and further work is needed to speed up the procedure while preserving the desired accuracy.
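The core idea, collapsing runs of adjacent voxels that share a material into single regions so a particle takes one step per region rather than one per voxel, can be sketched in a few lines. This is a 1-D illustration only; the paper's segmented-volume structure with distance maps is considerably richer:

```python
def merge_runs(row):
    """Collapse consecutive voxels sharing a material into single regions.

    row: list of material labels, one per voxel along a ray.
    Returns (material, length) runs, so a particle stepping through the row
    crosses one boundary per run instead of one per voxel."""
    runs = []
    for m in row:
        if runs and runs[-1][0] == m:
            runs[-1] = (m, runs[-1][1] + 1)   # extend the current run
        else:
            runs.append((m, 1))               # start a new region
    return runs

# A ray through water-bone-water tissue: 10 voxels become 3 regions.
print(merge_runs(["water"] * 5 + ["bone"] * 3 + ["water"] * 2))
```

The trade-off noted in the abstract is visible even here: fewer, larger steps, but each step now requires knowing the run length rather than a fixed voxel pitch.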
Durand, Loriane; Favre, Eliane
2017-01-01
The primary aim of my Bachelor's thesis is to survey the situation regarding violence in the mandated-support social services of the canton of Jura. Is violence a reality there? What types of violence are involved? Why does this violence occur, and what are its causes? Does this violence increase stress in the social worker's profession?
Cusnir, Ruslan; Christl, Marcus; Steinmann, Philipp; Bochud, François; Froidevaux, Pascal
2017-06-01
The interaction of trace environmental plutonium with dissolved natural organic matter (NOM) plays an important role in its mobility and bioavailability in freshwater environments. Here we explore the speciation and biogeochemical behavior of Pu in freshwaters of the karst system in the Swiss Jura Mountains. Chemical extraction and ultrafiltration methods were complemented by the diffusive gradients in thin films technique (DGT) to measure the dissolved and bioavailable Pu fraction in water. Accelerator mass spectrometry (AMS) was used to accurately determine Pu in this pristine environment. Selective adsorption of Pu (III, IV) on silica gel showed that 88% of Pu in the mineral water is found in the +V oxidation state, possibly in a highly soluble [PuO2+(CO3)n]m- form. Ultrafiltration experiments at 10 kDa yielded a similar fraction of colloid-bound Pu in the organic-rich and in the mineral water (18-25%). We also found that the concentrations of Pu measured by DGT in mineral water are similar to the bulk concentration, suggesting that dissolved Pu is readily available for biouptake. Sequential elution (SE) of Pu from aquatic plants revealed important co-precipitation of potentially labile Pu (60-75%) with the calcite fraction within the outer compartment of the plants. Hence, we suggest that plutonium is fully available for biological uptake in both mineral and organic-rich karstic freshwaters.
Olsson, Anna; Arlig, Asa; Carlsson, Gudrun Alm; Gustafsson, Agnetha
2007-09-01
The image quality of single photon emission computed tomography (SPECT) depends on the reconstruction algorithm used. The purpose of the present study was to evaluate parameters in ordered subset expectation maximization (OSEM) and to compare systematically with filtered back-projection (FBP) for reconstruction of regional cerebral blood flow (rCBF) SPECT, incorporating attenuation and scatter correction. The evaluation was based on the trade-off between contrast recovery and statistical noise using different subset sizes, numbers of iterations and filter parameters. Monte Carlo simulated SPECT studies of a digital human brain phantom were used. The contrast recovery was calculated as measured contrast divided by true contrast. Statistical noise in the reconstructed images was calculated as the coefficient of variation in pixel values. A constant contrast level was reached above 195 equivalent maximum likelihood expectation maximization iterations. The choice of subset size was not crucial as long as there were at least two projections per subset. The OSEM reconstruction was found to give 5-14% higher contrast recovery than FBP for all clinically relevant noise levels in rCBF SPECT. The Butterworth filter, power 6, achieved the highest stable contrast recovery level at all clinically relevant noise levels. The cut-off frequency should be chosen according to the noise level accepted in the image. Trade-off plots are shown to be a practical way of deciding the number of iterations and subset size for the OSEM reconstruction and can be used for other examination types in nuclear medicine.
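The MLEM update underlying OSEM is compact enough to sketch; OSEM differs only in applying the same multiplicative update to successive subsets of the projection rows. This is the generic textbook form, not the authors' exact implementation:

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum-likelihood EM reconstruction: x <- x / s * A^T (y / (A x)),
    with sensitivity image s = A^T 1. A is the system matrix
    (projections x voxels), y the measured projections.
    OSEM runs this same update over row subsets for faster convergence."""
    x = np.ones(A.shape[1])                    # uniform non-negative start
    s = A.T @ np.ones(A.shape[0])              # sensitivity per voxel
    for _ in range(n_iter):
        proj = A @ x                           # forward projection
        proj = np.where(proj > 0, proj, 1e-12) # guard the measured/estimated ratio
        x *= (A.T @ (y / proj)) / s            # multiplicative EM update
    return x
```

On a tiny noiseless, well-posed system the iteration recovers the true image, which makes the update easy to sanity-check before scaling it up.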
Confidential unit exclusion at the Regional Blood bank in Montes Claros: Fundação Hemominas
Directory of Open Access Journals (Sweden)
Caroline Nogueira Maia
2012-01-01
OBJECTIVE: This study aimed at analyzing the rate of self-exclusion at the Regional Blood Bank in Montes Claros. METHODS: Data on self-excluding donors from August 2008 to August 2010 were analyzed. The following variables were considered: age, marital status, gender, ethnic background, blood group, Rh factor, number of donations, type of donation and serologic results. RESULTS: During the analyzed period, 34,778 individuals donated blood, 215 (0.62%) of whom self-excluded; 12% of donors did not answer, 6.3% of ballots were spoilt and 13.6% of the responses were considered non-compliant. The profile of the self-excluding donors was: male (81.9%), single (50.7%), aged between 19 and 29 years (52.1%), Mulatto (48.3%), blood group O (32.1%) and Rh positive (32.1%). Most individuals were donating for the 2nd to 5th time (43.7%) and had negative serology (94.4%). CONCLUSIONS: It was not evident that self-excluding donors had higher rates of seropositivity.
Directory of Open Access Journals (Sweden)
P. Sjögren
2007-06-01
Full Text Available Many European mires show a layer of increased decomposition and minerogenic content close to the mire surface. Although the phenomenon is widely recognised, there have been few investigations of its distribution, cause and effect. In this study, nine peat profiles from the Alps and Jura Mountains in central Europe were studied to assess general trends in the upper peat stratigraphy. Analyses of pollen and fungal spore content in two profiles indicates that near-surface changes in decomposition are related to recent historical changes in grazing intensity of the surrounding landscape. Reduced trampling pressure and/or decreased nutrient input allowed partial Sphagnum regeneration in the western Alps and Jura Mountains from AD 1940–60, and in the eastern Alps from AD 1820–60. The results are considered in the context of climate and land use, and future implications for mire development in a changing environment are discussed. Many high-altitude mires in the area are now in a Sphagnum peat re-growth state, but future land use and climatic change will determine whether they will develop towards raised bog or forest carr.
International Nuclear Information System (INIS)
Kemp, A.G.; Stephen, L.
1999-01-01
This paper summarises the results of a study using the Monte Carlo simulation to examine activity levels in the regions of the UK continental shelf under different oil and gas prices. Details of the methodology, data, and assumptions used are given, and the production of oil and gas, new field investment, aggregate operating expenditures, and gross revenues under different price scenarios are addressed. The total potential oil and gas production under the different price scenarios for 2000-2013 are plotted. (UK)
DEFF Research Database (Denmark)
Lenoir, Jonathan; Gégout, J.C.; Dupouey, J.L.
2010-01-01
Question: How strong are climate warming-driven changes within mid-elevation forest communities? Observations of plant community change within temperate mountain forest ecosystems in response to recent warming are scarce in comparison to high-elevation alpine and nival ecosystems, perhaps reflecting the confounding influence of forest stand dynamics. Location: Jura Mountains (France and Switzerland). Methods: We assessed changes in plant community composition by surveying 154 Abies alba forest vegetation relevés (550-1,350 m a.s.l.) in 1989 and 2007. Over this period, temperatures increased while precipitation did not change. Correspondence analysis (CA) and ecological indicator values were used to measure changes in plant community composition. Relevés in even- and uneven-aged stands were analysed separately to determine the influence of forest stand dynamics. We also analysed changes
Cocilio, Rodrigo A. Nieva; Blanco, Graciela M.; Acosta, Juan C.
2016-01-01
Abstract This study aimed to investigate the diet of the gecko Homonota fasciata (Duméril & Bibron, 1836) in a population from Monte of San Juan Province, Argentina, and to analyze possible temporal, sexual, and ontogenetic variations in feeding behavior. We determined the total volume, number, and occurrence frequency of each prey item and calculated the relative importance indexes. We also assessed trophic diversity and trophic equity. Homonota fasciata had a generalist and diverse diet bas...
International Nuclear Information System (INIS)
Wiacek, U.
2006-06-01
Thermal neutron transport in small inhomogeneous systems is investigated, namely in two-layer systems in which the outer moderator is hydrogenous (polyethylene or Plexiglas) and the inner layer is made of other materials. The diffusion cooling of neutrons has been calculated by means of Monte Carlo simulations using the MCNP code. Because of inconsistencies between the calculated and measured data, the MCNP code library has been modified for polyethylene and Plexiglas.
Cassan, Cecile; Dione, Michel Mainack; Dereure, Jacques; Diedhiou, Souleymane; Bucheton, Bruno; Hide, Mallorie; Kako, Caroline; Gaye, Oumar; Senghor, Massila; Niang, Abdoul Aziz; Bañuls, Anne-Laure; Faye, Babacar
2016-06-01
Visceral leishmaniasis is not endemic in West Africa. However, high seroprevalence of Leishmania infantum infection (one of the Leishmania species that cause visceral leishmaniasis) was detected in dogs and humans in the Mont Rolland community (close to Thiès, Senegal), despite the lack of reports concerning human clinical cases. Our aim was to genetically characterize this L. infantum population and identify its origin. We thus conducted seven field surveys in 25 villages of the Mont Rolland community between 2005 and 2009 and blood samples were collected from 205 dogs. Serological testing indicated that 92 dogs (44.9%) were positive for Leishmania infection. L. infantum was identified as the cause of infection. Analysis of 29 L. infantum isolates from these dogs by multilocus microsatellite typing and multilocus sequence typing indicated that this population had very limited genetic diversity, low level of heterozygosity and only seven different genotypes (79.3% of all isolates had the same genotype). Multilocus sequence typing showed that the Mont Rolland isolates clustered with strains from the Mediterranean basin and were separated from East African and Asian strains. Therefore, our data suggest a quite recent and unique introduction into Senegal of a L. infantum strain from the Mediterranean basin. Copyright © 2016 Institut Pasteur. Published by Elsevier Masson SAS. All rights reserved.
Cooper, M A
2000-01-01
We present various approximations for the angular distribution of particles emerging from an optically thick, purely isotropically scattering region into a vacuum. Our motivation is to use such a distribution for the Fleck-Canfield random walk method [1] for implicit Monte Carlo (IMC) [2] radiation transport problems. We demonstrate that the cosine distribution recommended in the original random walk paper [1] is a poor approximation to the angular distribution predicted by transport theory. Then we examine other approximations that more closely match the transport angular distribution.
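As a hedged illustration of why the choice matters, the sketch below rejection-samples the emergent direction cosine from the cosine law p(μ) = 2μ and from a Milne-problem-like form ∝ μ(0.7104 + μ). The latter is a standard transport-theory approximation used here as a stand-in, not necessarily the distribution the paper recommends:

```python
import random

def sample_mu(pdf, pdf_max, rng):
    """Rejection-sample a direction cosine mu in (0, 1] from an unnormalised pdf."""
    while True:
        mu = rng.random()
        if rng.random() * pdf_max <= pdf(mu):
            return mu

def cosine_law(mu):
    return 2.0 * mu                  # Lambert cosine law; mean mu = 2/3

def milne_like(mu):
    return mu * (0.7104 + mu)        # mu-weighted Milne-type emergent intensity (approx.)

rng = random.Random(7)
n = 50_000
cos_mean = sum(sample_mu(cosine_law, 2.0, rng) for _ in range(n)) / n
mil_mean = sum(sample_mu(milne_like, 1.7104, rng) for _ in range(n)) / n
print(f"mean emergent cosine: cosine law {cos_mean:.3f}, transport-like {mil_mean:.3f}")
```

The transport-like form weights grazing exits less, giving a visibly more forward-peaked (larger mean cosine) emergent distribution than the cosine law.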
Energy Technology Data Exchange (ETDEWEB)
Brink, H.J.; Baykulov, M.; Gajewski, D.; Yoon, Mi-Kyung [Hamburg Univ. (Germany). Inst. fuer Geophysik
2008-10-23
The development of the Permian salt stocks in Schleswig-Holstein (Federal Republic of Germany) spans the Triassic, Jurassic and Tertiary periods. In the context of DGMK project 577-1, the reflection-rich salt dome in the East Holstein Jura was investigated seismically by means of the CRS method and velocity tomography. These measurements enable the interpretation of a structural style with a substantial tectonically compressive component. Magnetotelluric measurements point to potential natural gas source rocks. The movement of the Permian salt brought the New Red Sandstone into contact with the pre-salinar layers, opening migration paths that contributed to the filling of structurally high New Red Sandstone reservoirs at shallow depths.
International Nuclear Information System (INIS)
Barz, H.U.; Bertram, W.
1992-01-01
Embrittlement of pressure-vessel material caused by neutron irradiation is a very important problem for VVER-440 reactors. For the estimation of the fracture risk, highly reliable neutron fluence values are necessary. For this reason, a special theoretical determination of space-dependent neutron fluences has been performed, mainly on the basis of Monte Carlo calculations. The described method allows the accurate and efficient calculation of neutron fluences near the pressure vessel at the height of the core region for all reactor histories and loading cycles. To illustrate the accuracy of the suggested method, a comparison with experimental results was made. The calculated neutron fluence values can be used for planning the loading schemes of each reactor according to the safety requirements against brittle fracture. (orig.)
International Nuclear Information System (INIS)
Pena, J; Sanchez-Doblado, F; Capote, R; Terron, J A; Gomez, F
2006-01-01
Reference dosimetry of photon fields is a well-established subject and currently available protocols (such as the IAEA TRS-398 and AAPM TG-51) provide methods for converting the ionization chamber (IC) reading into dose to water, provided reference conditions of charged particle equilibrium (CPE) are fulfilled. But these protocols cannot deal with the build-up region, where the lack of CPE limits the applicability of the cavity theorems and so the chamber correction factors become depth dependent. By explicitly including the IC geometry in the Monte Carlo simulations, depth-dependent dose correction factors are calculated for a PTW 30001 0.6 cm³ ion chamber in the build-up region of the 6 MV photon beam. The corrected percentage depth dose (PDD) agrees within 2% with that measured using the NACP 02 plane-parallel ion chamber in the build-up region at depths greater than 0.4 cm, where the Farmer chamber wall reaches the phantom surface.
International Nuclear Information System (INIS)
Shotyk, W.; Steinmann, P.
1994-01-01
The dominant inorganic anions and cations, and dissolved organic carbon, have been measured in pore waters expressed from peat cores taken from two Sphagnum bogs in the Jura Mountains of Switzerland: Etang de la Gruyere (EGr) consists of >6 m of peat representing more than 12,000 yr of peat formation, while at La Tourbiere de Genevez (TGe) approximately 1.5 m of peat has accumulated over the past 5,000 yr. The pore-water analyses of the core taken at EGr show that the first 100 cm of the core are influenced only by atmospheric inputs. Relative to the average composition of rainwater in this area, Na⁺ is enriched throughout the pore-water profiles, K⁺ is neither enriched nor depleted, Mg²⁺ is significantly depleted in the deeper pore waters and Ca²⁺ is strongly depleted throughout the profile. The dominant process affecting the cations in these waters is ion exchange, with the peats behaving like a simple cation exchanger with ion preference decreasing in the order Ca²⁺ > Mg²⁺ > H⁺ > K⁺ ≫ Na⁺. In contrast, at TGe the pH increases from approximately 4 at the surface to 5 at 80 cm. The Cl⁻ and K⁺ concentrations are up to 10 times higher than rainwater values because of mixing of the bog pore water with nearby groundwaters. The Mg²⁺ and Ca²⁺ concentrations increase with depth to values up to 10 times higher than those of rainwater, mainly because of the increasing importance of mineral dissolution within the profile.
International Nuclear Information System (INIS)
Yeh, Chi-Yuan; Tung, Chuan-Jung; Chao, Tsi-Chain; Lin, Mu-Han; Lee, Chung-Chi
2014-01-01
The purpose of this study was to examine the dose distribution of a skull base tumor and surrounding critical structures in response to high-dose intensity-modulated radiosurgery (IMRS) with Monte Carlo (MC) simulation using a dual-resolution sandwich phantom. The measurement-based Monte Carlo (MBMC) method (Lin et al., 2009) was adopted for the study. The major components of the MBMC technique involve (1) the BEAMnrc code for beam transport through the treatment head of a Varian 21EX linear accelerator, (2) the DOSXYZnrc code for patient dose simulation and (3) an EPID-measured efficiency map which describes the non-uniform fluence distribution of the IMRS treatment beam. For the simulated case, five isocentric 6 MV photon beams were designed to deliver a total dose of 1200 cGy in two fractions to the skull base tumor. A sandwich phantom for the MBMC simulation was created based on the patient's CT scan of a skull base tumor [gross tumor volume (GTV) = 8.4 cm³] near the right 8th cranial nerve. The phantom, consisting of a 1.2-cm-thick skull base region, had a voxel resolution of 0.05×0.05×0.1 cm³ and was sandwiched in between 0.05×0.05×0.3 cm³ slices of a head phantom. A coarser 0.2×0.2×0.3 cm³ single-resolution (SR) phantom was also created for comparison with the sandwich phantom. A particle history of 3×10⁸ for each beam was used for simulations of both the SR and the sandwich phantoms to achieve a statistical uncertainty of <2%. Our study showed that the planning target volume (PTV) receiving at least 95% of the prescribed dose (VPTV95) was 96.9%, 96.7% and 99.9% for the TPS, SR, and sandwich phantom, respectively. The maximum and mean doses to large organs such as the PTV, brain stem, and parotid gland for the TPS, SR and sandwich MC simulations did not show any significant difference; however, significant dose differences were observed for very small structures like the right 8th cranial nerve, right cochlea, right malleus and right semicircular
International Nuclear Information System (INIS)
Dubi, A.; Gerstl, S.A.W.
1979-05-01
The contributon Monte Carlo method is based on a new recipe that calculates target responses by means of a volume integral of the contributon current in a region between the source and the detector. A comprehensive description of the method, its implementation in the general-purpose MCNP code, and results of the method for realistic nonhomogeneous, energy-dependent problems are presented. 23 figures, 10 tables.
Directory of Open Access Journals (Sweden)
Anne-Lise Head-König
2011-05-01
This paper aims at retracing the important phases of migration in the alpine regions and the Jura from the Middle Ages up to the middle of the 20th century. Migration has always functioned as a necessary complement to the resources of the inhabitants of the upland regions, and it increases when the economic disparity with the lowlands becomes more marked. A striking characteristic of such migration is the great diversity that can be observed: not only did the destinations of the migrants vary from community to community, but different forms of mobility coexisted within the same territory. Migration might be seasonal, pluriannual, lifelong or even definitive. It is also notable that the various types of migration were part of a plurisecular tradition, apart from some significant exceptions, such as the emigration of the Walser, enforced migrations and the new types of migration from the second half of the nineteenth century onwards. The mobility of part of the population was also a consequence of modifications deriving from changes in the prevalent type of production (animal husbandry instead of the cultivation of cereals), as well as from demographic factors. In addition to these factors, one can observe the role played by political institutions throughout the period under study: seigneurial power in the Middle Ages, the communal and cantonal authorities until the second half of the nineteenth century, and afterwards the federal authorities.
Ojala, J; Hyödynmaa, S; Barańczyk, R; Góra, E; Waligórski, M P R
2014-03-01
Electron radiotherapy is applied to treat the chest wall close to the mediastinum. The performance of the GGPB and eMC algorithms implemented in the Varian Eclipse treatment planning system (TPS) was studied in this region for 9 and 16 MeV beams, against Monte Carlo (MC) simulations, point dosimetry in a water phantom and dose distributions calculated in virtual phantoms. For the 16 MeV beam, the accuracy of these algorithms was also compared over the lung-mediastinum interface region of an anthropomorphic phantom, against MC calculations and thermoluminescence dosimetry (TLD). In the phantom with a lung-equivalent slab the results were generally congruent, the eMC results for the 9 MeV beam slightly overestimating the lung dose, and the GGPB results for the 16 MeV beam underestimating the lung dose. Over the lung-mediastinum interface, for 9 and 16 MeV beams, the GGPB code underestimated the lung dose and overestimated the dose in water close to the lung, compared to the congruent eMC and MC results. In the anthropomorphic phantom, results of TLD measurements and MC and eMC calculations agreed, while the GGPB code underestimated the lung dose. Good agreement between TLD measurements and MC calculations attests to the accuracy of "full" MC simulations as a reference for benchmarking TPS codes. Application of the GGPB code in chest wall radiotherapy may result in significant underestimation of the lung dose and overestimation of dose to the mediastinum, affecting plan optimization over volumes close to the lung-mediastinum interface, such as the lung or heart. Copyright © 2013 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
DEFF Research Database (Denmark)
Boisen, Jørn
2011-01-01
would have to be changed, since they drove sex workers underground and into the arms of the mafia. The legislation that was ruled invalid in Ontario closely resembles the Danish one, Jørn Boisen assesses. He calls this model the repressive one, as it is the most paradoxical and hypocritical model. Its intention is, namely
International Nuclear Information System (INIS)
Panettieri, Vanessa; Barsoum, Pierre; Westermark, Mathias; Brualla, Lorenzo; Lax, Ingmar
2009-01-01
Background and purpose: In tangential beam treatments, accurate calculation of the absorbed dose in the build-up region is of major importance, in particular when the target extends superficially close to the skin. In most analytical treatment planning systems (TPSs), calculations depend on the experimental measurements introduced by the user, whose accuracy might be limited by the type of detector employed. To quantify the discrepancy between the analytically calculated and delivered dose in the build-up region, near the skin of a patient, independent Monte Carlo (MC) simulations using the PENELOPE code were performed. Dose distributions obtained with MC simulations were compared with those given by the Pencil Beam Convolution (PBC) algorithm and the Analytical Anisotropic Algorithm (AAA) implemented in the commercial TPS Eclipse. Material and methods: A cylindrical phantom was used to approximate the breast contour of a patient for the MC simulations and the TPS. Absorbed doses were calculated for 6 and 18 MV beams at four angles of incidence (15°, 30°, 45° and 75°) and three field sizes (3×3 cm², 10×10 cm² and 40×40 cm²). Absorbed doses along the phantom central axis were obtained with both the PBC algorithm and the AAA and compared to those estimated by the MC simulations. Additionally, a breast patient case was calculated with two opposed 6 MV photon beams using all the aforementioned analytical and stochastic algorithms. Results: For the 6 MV photon beam in the phantom case, both the PBC algorithm and the AAA tend to underestimate the absorbed dose in the build-up region in comparison to the MC results. These differences are clinically irrelevant and fall within a 1 mm range. This tendency is also confirmed in the breast patient case. For the 18 MV beam the PBC algorithm underestimates the absorbed dose with respect to the AAA. In comparison to MC simulations the PBC algorithm tends
William Salas; Steve Hagen
2013-01-01
This presentation will provide an overview of an approach for quantifying uncertainty in spatial estimates of carbon emission from land use change. We generate uncertainty bounds around our final emissions estimate using a randomized, Monte Carlo (MC)-style sampling technique. This approach allows us to combine uncertainty from different sources without making...
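The randomized sampling approach can be sketched as follows; the three uncertainty sources and their distributions below are invented stand-ins for the presentation's actual inputs:

```python
import random

random.seed(42)

# Combine three independent uncertainty sources (hypothetical: deforested
# area, carbon density, fraction emitted) into one emissions estimate by
# repeated sampling; percentiles of the sample give the uncertainty bounds.
N = 20000
samples = []
for _ in range(N):
    area = random.gauss(1000.0, 50.0)      # deforested area, ha (invented)
    carbon = random.gauss(120.0, 15.0)     # carbon density, tC/ha (invented)
    frac = random.uniform(0.85, 1.0)       # fraction actually emitted
    samples.append(area * carbon * frac)

samples.sort()
median = samples[N // 2]
lo95 = samples[int(0.025 * N)]             # lower bound of 95% interval
hi95 = samples[int(0.975 * N)]             # upper bound of 95% interval
```

Because each source is sampled independently on every draw, sources with different distribution shapes combine without any closed-form error propagation.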
Directory of Open Access Journals (Sweden)
Ofelia Naab
To evaluate the flora used by Apis mellifera L., samples of immature honey and corbicular pollen loads were analysed from two demonstration apiaries located in the Monte Phytogeographical Province, Province of La Pampa. The samples were collected periodically during spring and analysed using conventional melissopalynological techniques. The native shrub vegetation showed the greatest abundance and the largest number of species at peak flowering in November. The most strongly represented families in the pollen spectra of immature honeys and corbicular loads were: Zygophyllaceae (Larrea divaricata Cav.), Rhamnaceae (Condalia microphylla Cav.), Solanaceae (Lycium sp.), Asteraceae (Senecio subulatus Don ex Hook. & Arn.) and Verbenaceae (Glandularia sp. - Junellia sp. - Verbena sp.). The pollen analyses showed that the native species offered nectar and pollen resources at the same time; nevertheless, a strong selection of a few floral resources was observed. The floral supply produced monofloral honeys of L. divaricata, C. microphylla and Lycium sp. The two apiaries could be differentiated by the diversity of pollen types and by the presence of certain taxa in the dominant and secondary pollen categories.
Energy Technology Data Exchange (ETDEWEB)
Clauer, N. [Laboratoire d’Hydrologie et de Géochimie de Strasbourg (CNRS-UdS), Strasbourg (France); Techer, I. [Equipe Associée, Chrome, Université de Nîmes, Nîmes (France); Nussbaum, Ch. [Swiss Geological Survey, Federal Office of Topography Swisstopo, Wabern (Switzerland); Laurich, B. [Structural Geology, Tectonics and Geomechanics, RWTH Aachen University, Aachen (Germany); Laurich, B. [Federal Institute for Geosciences and Natural Resources BGR, Hannover (Germany)
2017-04-15
calcite of the Opalinus Clay inside the Main Fault, as well as that of its microstructures and the same features of the sediments above and below. To envision a Priabonian seawater supply, there is a need for its storage without a significant evolution in its Sr isotopic composition until the final deformation of the area. The paleo-hydrogeological context calls for a possible infiltration of the seawater into a limestone karst located above the Opalinus Clay that could have acted as the storage reservoir. The karstic nature of this reservoir also explains why the {sup 87}Sr/{sup 86}Sr of the fluids was not modified significantly until expulsion. An alternative storage could have been provided by the regional faulting system that developed during the contemporary regional rifting of the Rhine Graben. The fluid expulsion started along these extensional faults during the further Upper Eocene-Lower Oligocene rifting phase. Later, the thin-skinned deformation of the Jura Belt affected the Mont Terri region in the form of the Main Fault, probably between approximately 9 and 4 Ma on the basis of preliminary K-Ar ages of nanometer-sized authigenic illite crystals recovered from gouge samples. (authors)
Leuzinger, L.; Kocsis, L.; Billon-Bruyat, J.-P.; Spezzaferri, S.; Vennemann, T.
2015-12-01
Chondrichthyan teeth (sharks, rays, and chimaeras) are mineralized in isotopic equilibrium with the surrounding water, and parameters such as water temperature and salinity can be inferred from the oxygen isotopic composition (δ18Op) of their bioapatite. We analysed a new chondrichthyan assemblage, as well as teeth from bony fish (Pycnodontiformes). All specimens are from Kimmeridgian coastal marine deposits of the Swiss Jura (vicinity of Porrentruy, Ajoie district, NW Switzerland). While the overall faunal composition and the isotopic composition of the bony fish are generally consistent with marine conditions, unusually low δ18Op values were measured for the hybodont shark Asteracanthus. These values are also lower than previously published data from older European Jurassic localities. Additional analyses on material from Solothurn (Kimmeridgian, NW Switzerland) likewise yielded comparably low δ18Op values for Asteracanthus. The data are hence interpreted to represent a so far unique, freshwater-influenced isotopic composition for this shark, which is classically considered a marine genus. While reproduction in freshwater or brackish realms is established for other hybodonts, a similar behaviour for Asteracanthus is proposed here. Regular excursions into lower-salinity waters can be linked to the age of the deposits and correspond to an ecological adaptation, most likely driven by the Kimmeridgian transgression and by competition between the hybodont shark Asteracanthus and the rapidly diversifying neoselachians (modern sharks).
Yanites, Brian J.; Becker, Jens K.; Madritsch, Herfried; Schnellmann, Michael; Ehlers, Todd A.
2017-11-01
Landscape evolution is a product of the forces that drive geomorphic processes (e.g., tectonics and climate) and the resistance to those processes. The underlying lithology and structural setting in many landscapes set the resistance to erosion. This study uses a modified version of the Channel-Hillslope Integrated Landscape Development (CHILD) landscape evolution model to determine the effect of a spatially and temporally changing erodibility in a terrain with a complex base level history. Specifically, our focus is to quantify how the effects of variable lithology influence transient base level signals. We set up a series of numerical landscape evolution models with increasing levels of complexity based on the lithologic variability and base level history of the Jura Mountains of northern Switzerland. The models are consistent with lithology (and therewith erodibility) playing an important role in the transient evolution of the landscape. The results show that the erosion rate history at a location depends on the rock uplift and base level history, the range of erodibilities of the different lithologies, and the history of the surface geology downstream from the analyzed location. Near the model boundary, the history of erosion is dominated by the base level history. The transient wave of incision, however, is quite variable in the different model runs and depends on the geometric structure of lithology used. It is thus important to constrain the spatiotemporal erodibility patterns downstream of any given point of interest to understand the evolution of a landscape subject to variable base level in a quantitative framework.
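The interplay of erodibility contrasts and transient base level can be sketched with a minimal 1-D stream-power model (a generic sketch, not the CHILD setup of the study; all rates, erodibilities and the base-level history are invented):

```python
# Minimal 1-D detachment-limited stream-power model, dz/dt = U - K(x) A^m S^n,
# showing how a resistant rock band modulates a transient wave of incision.
U, m, n = 1e-4, 0.5, 1.0                 # uplift rate (m/yr), exponents
dx, dt, steps, nn = 100.0, 100.0, 15000, 100

# Erodibility: a resistant band (low K) between nodes 40 and 60
K = [2e-6 if 40 <= i < 60 else 1e-5 for i in range(nn)]
# Drainage area from a Hack-type relation; the outlet is at node 0
A = [((nn - i) * dx) ** 1.67 for i in range(nn)]
z = [0.0] * nn

for step in range(steps):
    z[0] = -min(1e-3 * step * dt, 50.0)  # base-level fall, then stillstand
    for i in range(nn - 1, 0, -1):       # explicit update, upstream first
        S = max((z[i] - z[i - 1]) / dx, 0.0)
        z[i] += dt * (U - K[i] * A[i] ** m * S ** n)

# The resistant band holds up a steep knickzone, while the softer rocks
# downstream relax toward a gentler steady slope
band_slope = (z[60] - z[40]) / (20 * dx)
soft_slope = (z[35] - z[15]) / (20 * dx)
```

Running this shows the paper's qualitative point: the incision wave slows sharply on entering the low-K band, so the erosion history upstream depends on the geology downstream.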
Firmani, G.; Matta, J.
2012-04-01
The expansion of mining in the Pilbara region of Western Australia is resulting in the need to develop better water strategies to make below water table resources accessible, manage surplus water and deal with water demands for processing ore and construction. In all these instances, understanding the local and regional hydrogeology is fundamental to allow sustainable mining; minimising the impacts to the environment. An understanding of the uncertainties of the hydrogeology is necessary to quantify the risks and make objective decisions rather than relying on subjective judgements. The aim of this paper is to review some of the methods proposed by the published literature and find approaches that can be practically implemented in an attempt to estimate model uncertainties. In particular, this paper adopts two general probabilistic approaches that address the parametric uncertainty estimation and its propagation in predictive scenarios: the first order analysis and Monte Carlo simulations. A case example application of the two techniques is also presented for the dewatering strategy of a large below water table open cut iron ore mine in the Pilbara region of Western Australia. This study demonstrates the weakness of the deterministic approach, as the coefficients of variation of some model parameters were greater than 1.0; and suggests a review of the model calibration method and conceptualisation. The uncertainty propagation into predictive scenarios was calculated assuming the parameters with a coefficient of variation higher than 0.25 as deterministic, due to computational difficulties to achieve an accurate result with the Monte Carlo method. The conclusion of this case study was that the first order analysis appears to be a successful and simple tool when the coefficients of variation of calibrated parameters are less than 0.25.
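The contrast between the two approaches can be sketched on a toy nonlinear model (the function and parameter values are invented; the paper's dewatering model is far more complex). First-order analysis linearizes about the parameter mean, so it degrades as the coefficient of variation grows, which is the paper's 0.25 rule of thumb in miniature:

```python
import math
import random

random.seed(7)

def f(k):
    # Illustrative nonlinear model response (invented, e.g. inflow ~ K^2)
    return 3.0 * k ** 2

def fosm(mu, sigma):
    # First-order analysis: propagate mean and std through the linearization
    dfdk = 6.0 * mu
    return f(mu), abs(dfdk) * sigma          # predicted mean, predicted std

def mc(mu, sigma, n=100000):
    # Monte Carlo: sample the parameter, push every draw through the model
    ys = [f(random.gauss(mu, sigma)) for _ in range(n)]
    mean = sum(ys) / n
    var = sum((y - mean) ** 2 for y in ys) / (n - 1)
    return mean, math.sqrt(var)

# Small coefficient of variation (0.1): first order ~ Monte Carlo
m1, s1 = fosm(2.0, 0.2)
m2, s2 = mc(2.0, 0.2)
# Large coefficient of variation (0.5): first order underestimates the mean
m3, s3 = fosm(2.0, 1.0)
m4, s4 = mc(2.0, 1.0)
```

At CV = 0.1 the two agree to about a percent; at CV = 0.5 the neglected curvature term biases the first-order mean low, which is why the case study treated high-CV parameters differently.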
Monte Carlo Transport for Electron Thermal Transport
Chenhall, Jeffrey; Cao, Duc; Moses, Gregory
2015-11-01
The iSNB (implicit Schurtz-Nicolai-Busquet) multigroup electron thermal transport method of Cao et al. is adapted into a Monte Carlo transport method in order to better model the effects of non-local behavior. The end goal is a hybrid transport-diffusion method that combines Monte Carlo transport with discrete diffusion Monte Carlo (DDMC). The hybrid method will combine the efficiency of a diffusion method in short-mean-free-path regions with the accuracy of a transport method in long-mean-free-path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the method will be presented. This work was supported by Sandia National Laboratory - Albuquerque and the University of Rochester Laboratory for Laser Energetics.
Dunn, William L
2012-01-01
Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle problem"
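Buffon's needle is itself a compact Monte Carlo example: a needle of length L dropped on a floor ruled with parallel lines a distance d apart (L ≤ d) crosses a line with probability 2L/(πd), so counting crossings yields an estimate of π. A sketch (parameters arbitrary):

```python
import math
import random

random.seed(1)

def buffon_pi(n, needle=1.0, spacing=2.0):
    """Estimate pi by dropping n needles on a floor of parallel lines."""
    hits = 0
    for _ in range(n):
        x = random.uniform(0.0, spacing / 2.0)    # centre to nearest line
        theta = random.uniform(0.0, math.pi / 2)  # needle orientation
        if x <= (needle / 2.0) * math.sin(theta): # needle crosses the line
            hits += 1
    # P(hit) = 2L / (pi d)  =>  pi ~ 2 L n / (d * hits)
    return 2.0 * needle * n / (spacing * hits)

est = buffon_pi(200000)
```

With 200,000 drops the estimate typically lands within a few hundredths of π; the error shrinks only as 1/√n, the characteristic Monte Carlo convergence rate.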
Directory of Open Access Journals (Sweden)
Bardenet Rémi
2013-07-01
Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow these integrals to be computed numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which are rejection sampling, importance sampling and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretical justification of the algorithms as well as practical advice, trying to relate the two. We discuss the application of Monte Carlo in experimental physics, and point to landmarks in the literature for the curious reader.
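As a concrete instance of the first algorithm in that list, here is rejection sampling from a Beta(2,2) density using a uniform proposal (a standard textbook example, not one taken from the review):

```python
import random

random.seed(3)

# Rejection sampling from p(x) = 6x(1-x) on [0,1] (the Beta(2,2) density)
# with a uniform proposal q(x) = 1 and envelope constant M = 1.5, the
# maximum of p, so that p(x) <= M q(x) everywhere.
def sample_beta22():
    while True:
        x = random.random()                   # draw from the proposal q
        u = random.random()                   # uniform for the accept test
        if u <= 6.0 * x * (1.0 - x) / 1.5:    # accept w.p. p(x) / (M q(x))
            return x

draws = [sample_beta22() for _ in range(50000)]
mean = sum(draws) / len(draws)                # Beta(2,2) has mean 1/2
```

The overall acceptance rate is 1/M = 2/3 here; the method becomes inefficient when the envelope constant is large, which is what motivates importance sampling and MCMC for harder targets.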
Jagtap, A S; Palani Selvam, T; Patil, B J; Chavan, S T; Pethe, S N; Kulkarni, Gauri; Dahiwale, S S; Bhoraskar, V N; Dhole, S D
2016-12-01
A telecobalt unit has a wide range of applications in cancer treatment and is used widely in many countries around the world. Estimation of the surface dose of a Cobalt-60 teletherapy machine is important because the clinically useful photon beam is contaminated with electrons during patient treatment. EGSnrc along with the BEAMnrc user code was used to model the Theratron 780E telecobalt unit. Central-axis depth-dose profiles, including surface doses, have been estimated for field sizes of 0×0, 6×6, 10×10, 15×15, 20×20, 25×25 and 30×30 cm² and at source-to-surface distances (SSDs) of 60 and 80 cm. Surface dose was measured experimentally with Gafchromic RTQA2 films, and the measurements are in good agreement with the simulation results. The central-axis depth-dose data are compared with the data available from the British Journal of Radiology report no. 25. The contribution of contaminant electrons from the different parts of the Cobalt-60 head has also been calculated using Monte Carlo simulation for different field sizes and SSDs. Moreover, the depth-dose curve for a zero-area field size was calculated by extrapolation and compared with already published data; they are in good agreement. Copyright © 2016 Elsevier Ltd. All rights reserved.
Banaee, Nooshin; Asgari, Sepideh; Nedaie, Hassan Ali
2018-07-01
The accuracy of penumbra measurements in radiotherapy is pivotal because dose-planning computers require accurate data to adequately model the beams, which in turn are used to calculate patient dose distributions. The Gamma Knife is a non-invasive intracranial technique based on the principles of the Leksell stereotactic system for open deep-brain surgeries, invented and developed by Professor Lars Leksell. The aim of this study is to compare the penumbra widths of the Leksell Gamma Knife model C and Gamma ART 6000. Initially, the structure of both systems was simulated using the Monte Carlo MCNP6 code and, after validating the accuracy of the simulation, beam profiles of different collimators were plotted. MCNP6 beam-profile calculations showed that the penumbra values of the Leksell Gamma Knife model C and Gamma ART 6000 for the 18, 14, 8 and 4 mm collimators are 9.7, 7.9, 4.3, 2.6 and 8.2, 6.9, 3.6, 2.4 mm, respectively. The results of this study showed that since Gamma ART 6000 subtends a larger solid angle than Gamma Knife model C, it produces better beam-profile penumbras than Gamma Knife model C in the direct plane. Copyright © 2017 Elsevier Ltd. All rights reserved.
Rameil, Niels
2008-12-01
Early diagenetic dolomitization is a common feature in cyclic shallow-water carbonates throughout the geologic record. After their generation, dolomites may be subject to dedolomitization (re-calcification of dolomites), e.g. by contact with meteoric water during emersion. These patterns of dolomitization and subsequent dedolomitization frequently play a key role in unravelling the development and history of a carbonate platform. On the basis of excellent outcrops, detailed logging and sampling and integrating sedimentological work, high-resolution sequence stratigraphic interpretations, and isotope analyses (O, C), conceptual models on early diagenetic dolomitization and dedolomitization and their underlying mechanisms were developed for the Upper Jurassic / Lower Cretaceous Jura platform in north-western Switzerland and eastern France. Three different types of early diagenetic dolomites and two types of dedolomites were observed. Each is defined by a distinct petrographic/isotopic signature and a distinct spatial distribution pattern. Different types of dolomites are interpreted to have been formed by different mechanisms, such as shallow seepage reflux, evaporation on tidal flats, and microbially mediated selective dolomitization of burrows. Depending on the type of dolomite, sea water with normal marine to slightly enhanced salinities is proposed as dolomitizing fluid. Based on the data obtained, the main volume of dolomite was precipitated by a reflux mechanism that was switched on and off by high-frequency sea-level changes. It appears, however, that more than one dolomitization mechanism was active (pene)contemporaneously or several processes alternated in time. During early diagenesis, percolating meteoric waters obviously played an important role in the dedolomitization of carbonate rocks that underlie exposure surfaces. Cyclostratigraphic interpretation of the sedimentary succession allows for estimates on the timing of early diagenetic (de
International Nuclear Information System (INIS)
Kolbun, N.; Leveque, Ph.; Abboud, F.; Bol, A.; Vynckier, S.; Gallez, B.
2010-01-01
Purpose: The experimental determination of doses at proximal distances from radioactive sources is difficult because of the steepness of the dose gradient. The goal of this study was to determine the relative radial dose distribution for a low-dose-rate ¹⁹²Ir wire source using electron paramagnetic resonance imaging (EPRI) and to compare the results to those obtained using Gafchromic EBT film dosimetry and Monte Carlo (MC) simulations. Methods: Lithium formate and ammonium formate were chosen as the EPR dosimetric materials and were used to form cylindrical phantoms. The dose distribution of the stable radiation-induced free radicals in the lithium formate and ammonium formate phantoms was assessed by EPRI. EBT films were also inserted inside the ammonium formate phantoms for comparison. MC simulation was performed using the MCNP4C2 software code. Results: The radical signal in irradiated ammonium formate is contained in a single narrow EPR line, with an EPR peak-to-peak linewidth narrower than that of lithium formate (∼0.64 and 1.4 mT, respectively). The spatial resolution of EPR images was enhanced by a factor of 2.3 using ammonium formate compared to lithium formate because its linewidth is about 0.75 mT narrower than that of lithium formate. The EPRI results were consistent to within 1% with those of Gafchromic EBT films and MC simulations at distances from 1.0 to 2.9 mm. The radial dose values obtained by EPRI were about 4% lower at distances from 2.9 to 4.0 mm than those determined by MC simulation and EBT film dosimetry. Conclusions: Ammonium formate is a suitable material under certain conditions for use in brachytherapy dosimetry using EPRI. In this study, the authors demonstrated that the EPRI technique allows the estimation of the relative radial dose distribution at short distances for a ¹⁹²Ir wire source.
Murthy, K. P. N.
2001-01-01
An introduction to the basics of Monte Carlo is given. The topics covered include sample space, events, probabilities, random variables, mean, variance, covariance, characteristic functions, the Chebyshev inequality, the law of large numbers, the central limit theorem (stable distributions, the Levy distribution), random numbers (generation and testing), random sampling techniques (inversion, rejection, sampling from a Gaussian, Metropolis sampling), analogue Monte Carlo and importance sampling (exponential b...
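Of the sampling techniques listed, inversion is the simplest to show in code. The exponential case below is the standard one, used for free-flight distances in transport Monte Carlo (here lam plays the role of the macroscopic cross section):

```python
import math
import random

random.seed(5)

# Sampling by inversion: if U ~ Uniform(0,1), then X = -ln(1-U)/lam is
# exponentially distributed, because the CDF F(x) = 1 - exp(-lam x)
# inverts in closed form: x = F^{-1}(u) = -ln(1-u)/lam.
lam = 2.0
paths = [-math.log(1.0 - random.random()) / lam for _ in range(100000)]
mean_path = sum(paths) / len(paths)    # converges to the mean 1/lam = 0.5
```

Inversion is exact and uses one uniform per sample, but it requires an invertible CDF; rejection and Metropolis sampling cover the cases where no closed-form inverse exists.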
International Nuclear Information System (INIS)
Cramer, S.N.
1984-01-01
The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described
International Nuclear Information System (INIS)
Mercier, B.
1985-04-01
We have shown that the transport equation can be solved with particles, as in the Monte Carlo method, but without random numbers. In the Monte Carlo method, particles are created from the source and are followed from collision to collision until they are either absorbed or leave the spatial domain. In our method, particles are created from the original source with a variable weight taking into account both collision and absorption. These particles are followed until they leave the spatial domain, and we use them to determine a first-collision source. Another set of particles is then created from this first-collision source and tracked to determine a second-collision source, and so on. This process introduces an approximation which does not exist in the Monte Carlo method. However, we have analyzed the effect of this approximation and shown that it can be limited. Our method is deterministic and gives reproducible results. Furthermore, when extra accuracy is needed in some region, it is easier to direct more particles there. It has the same kind of applications: problems where streaming is dominant rather than collision-dominated problems.
Self-learning Monte Carlo (dynamical biasing)
International Nuclear Information System (INIS)
Matthes, W.
1981-01-01
In many applications the histories of a normal Monte Carlo game rarely reach the target region. An approximate knowledge of the importance (with respect to the target) may be used to guide the particles more frequently into the target region. A Monte Carlo method is presented in which each history contributes to update the importance field such that eventually most target histories are sampled. It is a self-learning method in the sense that the procedure itself: (a) learns which histories are important (reach the target) and increases their probability; (b) reduces the probabilities of unimportant histories; (c) concentrates gradually on the more important target histories. (U.K.)
Variational Monte Carlo Technique
Indian Academy of Sciences (India)
Sukanta Deb. Variational Monte Carlo Technique: Ground State Energies of Quantum Mechanical Systems. General Article, Resonance – Journal of Science Education, Volume 19, Issue 8, August 2014, pp 713-739.
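The technique can be demonstrated on the 1-D harmonic oscillator, the standard textbook case (this sketch follows the usual trial-wavefunction treatment, not necessarily the article's notation): with trial wavefunction ψ_a(x) = exp(-a x²) and ħ = m = ω = 1, the local energy is E_L(x) = a + x²(1/2 - 2a²), and the variational minimum lies at a = 1/2 with energy 1/2.

```python
import math
import random

random.seed(11)

def vmc_energy(a, n_steps=60000, step=1.0):
    """Variational MC energy of the 1-D harmonic oscillator for the
    trial wavefunction exp(-a x^2), sampled with Metropolis moves."""
    x, e_sum, n_kept = 0.0, 0.0, 0
    for i in range(n_steps):
        x_new = x + random.uniform(-step, step)
        # accept with probability |psi(x')|^2 / |psi(x)|^2
        if random.random() < math.exp(-2.0 * a * (x_new**2 - x**2)):
            x = x_new
        if i > 5000:                          # discard burn-in
            e_sum += a + x * x * (0.5 - 2.0 * a * a)
            n_kept += 1
    return e_sum / n_kept

energies = {a: vmc_energy(a) for a in (0.3, 0.5, 0.8)}
```

At a = 0.5 the local energy is constant, so the variance vanishes and the estimate equals the exact ground-state energy 1/2; away from the optimum both the energy and its variance rise, which is the signal the variational method minimizes.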
Csilléry, Katalin; Kunstler, Georges; Courbaud, Benoît; Allard, Denis; Lassègues, Pierre; Haslinger, Klaus; Gardiner, Barry
2017-12-01
Damage due to wind-storms and droughts is increasing in many temperate forests, yet little is known about the long-term roles of these key climatic factors in forest dynamics and in the carbon budget. The objective of this study was to estimate individual and coupled effects of droughts and wind-storms on adult tree mortality across a 31-year period in 115 managed, mixed coniferous forest stands from the Western Alps and the Jura mountains. For each stand, yearly mortality was inferred from management records, yearly drought from interpolated fields of monthly temperature, precipitation and soil water holding capacity, and wind-storms from interpolated fields of daily maximum wind speed. We performed a thorough model selection based on a leave-one-out cross-validation of the time series. We compared different critical wind speeds (CWSs) for damage, wind-storm, and stand variables and statistical models. We found that a model including stand characteristics, drought, and storm strength using a CWS of 25 m s⁻¹ performed the best across most stands. Using this best model, we found that drought increased damage risk only in the most southerly forests, and its effect is generally maintained for up to 2 years. Storm strength increased damage risk in all forests in a relatively uniform way. In some stands, we found positive interaction between drought and storm strength, most likely because drought weakens trees, and they became more prone to stem breakage under wind-loading. In other stands, we found negative interaction between drought and storm strength, where excessive rain likely leads to soil water saturation making trees more susceptible to overturning in a wind-storm. Our results stress that temporal data are essential to make valid inferences about ecological impacts of disturbance events, and that making inferences about disturbance agents separately can be of limited validity. Under projected future climatic conditions, the direction and strength of these
Monte Carlo codes and Monte Carlo simulator program
International Nuclear Information System (INIS)
Higuchi, Kenji; Asai, Kiyoshi; Suganuma, Masayuki.
1990-03-01
Four typical Monte Carlo codes, KENO-IV, MORSE, MCNP and VIM, have been vectorized on the VP-100 at the Computing Center, JAERI. Through this work, the problems in vector processing of Monte Carlo codes on vector processors have become clear: it is difficult to obtain good performance when vectorizing Monte Carlo codes. A Monte Carlo computing machine, which processes Monte Carlo codes with high performance, has been under development at our Computing Center since 1987. The concept of the Monte Carlo computing machine and its performance have been investigated and estimated using a software simulator. In this report, the problems in vectorization of Monte Carlo codes, the Monte Carlo pipelines proposed to mitigate these difficulties, and the results of the performance estimation of the Monte Carlo computing machine by the simulator are described. (author)
Variational Variance Reduction for Monte Carlo Criticality Calculations
International Nuclear Information System (INIS)
Densmore, Jeffery D.; Larsen, Edward W.
2001-01-01
A new variational variance reduction (VVR) method for Monte Carlo criticality calculations was developed. This method employs (a) a variational functional that is more accurate than the standard direct functional, (b) a representation of the deterministically obtained adjoint flux that is especially accurate for optically thick problems with high scattering ratios, and (c) estimates of the forward flux obtained by Monte Carlo. The VVR method requires no nonanalog Monte Carlo biasing, but it may be used in conjunction with Monte Carlo biasing schemes. Some results are presented from a class of criticality calculations involving alternating arrays of fuel and moderator regions
International Nuclear Information System (INIS)
Brown, F.B.
1981-01-01
Examination of the global algorithms and local kernels of conventional general-purpose Monte Carlo codes shows that multigroup Monte Carlo methods have sufficient structure to permit efficient vectorization. A structured multigroup Monte Carlo algorithm for vector computers is developed in which many particle events are treated at once on a cell-by-cell basis. Vectorization of kernels for tracking and variance reduction is described, and a new method for discrete sampling is developed to facilitate the vectorization of collision analysis. To demonstrate the potential of the new method, a vectorized Monte Carlo code for multigroup radiation transport analysis was developed. This code incorporates many features of conventional general-purpose production codes, including general geometry, splitting and Russian roulette, survival biasing, variance estimation via batching, a number of cutoffs, and generalized tallies of collision, tracklength, and surface-crossing estimators with response functions. Predictions of vectorized performance characteristics for the CYBER-205 were made using emulated coding and a dynamic model of vector instruction timing. Computation rates were examined for a variety of test problems to determine sensitivities to batch size and vector lengths. Significant speedups are predicted for even a few hundred particles per batch, and asymptotic speedups of about 40 over equivalent Amdahl 470V/8 scalar codes are predicted for a few thousand particles per batch. The principal conclusion is that vectorization of a general-purpose multigroup Monte Carlo code is well worth the significant effort required for stylized coding and major algorithmic changes
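The many-particles-at-once, event-based structure can be sketched with NumPy arrays standing in for vector hardware (the slab thickness, cross section and scattering ratio are invented for illustration; this is not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Event-based tracking: one event is processed for the whole batch of
# particles at once, mirroring the batch treatment used in vectorized
# Monte Carlo. 1-D slab, one energy group, isotropic scattering.
sigma_t, c, width = 1.0, 0.6, 5.0     # total xs (1/cm), scatter ratio, cm
n = 100000
x = np.zeros(n)                        # particle positions
mu = np.ones(n)                        # direction cosines, born rightward
alive = np.ones(n, dtype=bool)
transmitted = 0

while alive.any():
    idx = np.flatnonzero(alive)
    d = rng.exponential(1.0 / sigma_t, idx.size)  # all free flights at once
    x[idx] += mu[idx] * d
    escaped_right = alive & (x >= width)
    escaped_left = alive & (x <= 0.0)
    transmitted += int(escaped_right.sum())
    alive &= ~(escaped_right | escaped_left)
    idx = np.flatnonzero(alive)                   # collide the survivors
    alive[idx] = rng.random(idx.size) < c         # absorb with prob 1 - c
    idx = np.flatnonzero(alive)
    mu[idx] = rng.uniform(-1.0, 1.0, idx.size)    # isotropic re-emission

trans_prob = transmitted / n   # fraction leaking through the far face
```

Each pass through the loop performs one event (flight, escape test, collision) for every live particle simultaneously, which is exactly the restructuring that makes Monte Carlo amenable to vector pipelines.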
Nuclear Inter Jura'87 Proceedings
International Nuclear Information System (INIS)
1988-01-01
The Proceedings of the 8th Congress of the International Nuclear Law Association (INLA) contain the papers presented at the Congress and the ensuing discussions. The following topics were dealt with: new orientations of nuclear law - its convergence and discordance with other fields of law; the impact of international treaties; comparison with the legal provisions of other high-technology sectors; and the optimization of nuclear law. Finally, the last session examined the Chernobyl accident and the legal gaps it revealed. (NEA) [fr
Nuclear Inter Jura '77 Proceedings
International Nuclear Information System (INIS)
1977-01-01
These Proceedings of the Third Congress of the International Nuclear Law Association reproduce the papers presented in their original language with an abstract in English as well as the ensuing discussions. A majority of the papers have been translated into English for the Proceedings. The subjects dealt with respectively concerned contractual aspects of nuclear activities, the impact of nuclear power on the environment and public acceptance, radiological protection, third party liability and insurance, harmonisation of licensing regulations, export of nuclear equipment in relation to the Non-Proliferation Treaty and finally, computerization of nuclear law. (NEA) [fr
Indian Academy of Sciences (India)
Arnab Chakraborty. Markov Chain Monte Carlo - Examples. General Article, Resonance – Journal of Science Education, Volume 7, Issue 3, March 2002, pp 25-34. Permanent link: https://www.ias.ac.in/article/fulltext/reso/007/03/0025-0034.
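A minimal random-walk Metropolis sampler of the kind such introductory articles illustrate might look like this (the standard-normal target and uniform proposal are generic textbook choices):

```python
import math
import random

random.seed(17)

# Random-walk Metropolis sampling of a standard normal target p(x).
# Propose x' = x + Uniform(-step, step); accept with min(1, p(x')/p(x)).
def metropolis(n, step=1.0):
    x, out = 0.0, []
    for _ in range(n):
        x_new = x + random.uniform(-step, step)
        if random.random() < math.exp(0.5 * (x * x - x_new * x_new)):
            x = x_new                  # accept the move
        out.append(x)                  # on rejection, the old x repeats
    return out

chain = metropolis(100000)
burn = chain[5000:]                    # discard burn-in
mean = sum(burn) / len(burn)
var = sum((v - mean) ** 2 for v in burn) / len(burn)
```

The chain's long-run histogram matches the target, but successive states are correlated, so the effective sample size is smaller than the chain length — the central practical caveat of MCMC.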
Monte Carlo and Quasi-Monte Carlo Sampling
Lemieux, Christiane
2009-01-01
Presents essential tools for using quasi-Monte Carlo sampling in practice. This book focuses on issues related to Monte Carlo methods - uniform and non-uniform random number generation, variance reduction techniques. It covers several aspects of quasi-Monte Carlo methods.
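The difference between pseudorandom and low-discrepancy sampling is easy to demonstrate with the base-2 van der Corput sequence, the simplest quasi-Monte Carlo construction (the integrand below is an arbitrary toy choice):

```python
import random

def van_der_corput(i, base=2):
    """i-th point of the van der Corput low-discrepancy sequence:
    reflect the base-`base` digits of i about the radix point."""
    q, bk = 0.0, 1.0 / base
    while i > 0:
        q += (i % base) * bk
        i //= base
        bk /= base
    return q

# Integrate f(x) = x^2 on [0,1] (exact value 1/3) with plain MC and QMC.
random.seed(29)
n = 4096
mc_est = sum(random.random() ** 2 for _ in range(n)) / n
qmc_est = sum(van_der_corput(i) ** 2 for i in range(1, n + 1)) / n
```

Plain Monte Carlo error shrinks as O(1/√n), while the low-discrepancy estimate converges near O(log n / n); at n = 4096 the QMC estimate is already accurate to a few parts in 10⁴.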
Discrete Diffusion Monte Carlo for Electron Thermal Transport
Chenhall, Jeffrey; Cao, Duc; Wollaeger, Ryan; Moses, Gregory
2014-10-01
The iSNB (implicit Schurtz-Nicolai-Busquet) electron thermal transport method of Cao et al. is adapted to a discrete diffusion Monte Carlo (DDMC) solution method for eventual inclusion in a hybrid IMC-DDMC (Implicit Monte Carlo) method. The hybrid method will combine the efficiency of a diffusion method in short-mean-free-path regions with the accuracy of a transport method in long-mean-free-path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the iSNB-DDMC method will be presented. This work was supported by Sandia National Laboratory - Albuquerque.
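The cell-hopping idea behind DDMC can be reduced to a few lines (a generic 1-D sketch, not the iSNB-DDMC scheme): in an optically thick region a particle hops whole cells at a time, with hop probabilities derived from the diffusion operator, instead of being tracked flight by flight.

```python
import random

random.seed(23)

# Symmetric cell-to-cell hops on a uniform grid reproduce diffusive
# spreading: after time t, var(x) = 2 D t with D = dx^2 / (2 dt).
dx, dt, n_steps, n_part = 1.0, 1.0, 400, 5000
final_cells = []
for _ in range(n_part):
    c = 0
    for _ in range(n_steps):
        c += 1 if random.random() < 0.5 else -1   # hop left or right
    final_cells.append(c)

mean = sum(final_cells) / n_part
var = sum((c - mean) ** 2 for c in final_cells) / n_part
D = dx * dx / (2.0 * dt)
expected_var = 2.0 * D * n_steps * dt             # diffusion-theory value
```

One hop replaces the many sub-cell scattering flights a transport tracker would simulate, which is where the efficiency gain in short-mean-free-path regions comes from.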
Monte Carlo lattice program KIM
International Nuclear Information System (INIS)
Cupini, E.; De Matteis, A.; Simonini, R.
1980-01-01
The Monte Carlo program KIM solves the steady-state linear neutron transport equation for a fixed-source problem or, by successive fixed-source runs, for the eigenvalue problem, in a two-dimensional thermal reactor lattice. Fluxes and reaction rates are the main quantities computed by the program, from which power distribution and few-group averaged cross sections are derived. The simulation ranges from 10 MeV to zero and includes anisotropic and inelastic scattering in the fast energy region, the epithermal Doppler broadening of the resonances of some nuclides, and the thermalization phenomenon by taking into account the thermal velocity distribution of some molecules. Besides the well known combinatorial geometry, the program allows complex configurations to be represented by a discrete set of points, an approach greatly improving calculation speed
Monte Carlo principles and applications
Energy Technology Data Exchange (ETDEWEB)
Raeside, D E [Oklahoma Univ., Oklahoma City (USA). Health Sciences Center
1976-03-01
The principles underlying the use of Monte Carlo methods are explained, for readers who may not be familiar with the approach. The generation of random numbers is discussed, and the connection between Monte Carlo methods and random numbers is indicated. Outlines of two well established Monte Carlo sampling techniques are given, together with examples illustrating their use. The general techniques for improving the efficiency of Monte Carlo calculations are considered. The literature relevant to the applications of Monte Carlo calculations in medical physics is reviewed.
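One of the efficiency-improving techniques such reviews survey can be shown in a few lines. Antithetic variates pair each uniform draw u with its mirror 1 - u; for a monotone integrand the two halves are negatively correlated and much of the sampling noise cancels. The integral below is a standard textbook example, not one from the review:

```python
import math
import random

random.seed(13)

# Estimate I = integral of e^x on [0,1] = e - 1 with and without
# antithetic pairing. Each antithetic sample averages f(u) and f(1-u),
# so it costs two function evaluations.
n = 20000
plain, anti = [], []
for _ in range(n):
    u = random.random()
    plain.append(math.exp(u))
    anti.append(0.5 * (math.exp(u) + math.exp(1.0 - u)))

def mean_var(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

m_plain, v_plain = mean_var(plain)
m_anti, v_anti = mean_var(anti)    # same mean, far smaller variance
```

Even after accounting for the doubled cost per sample, the antithetic estimator's variance here is well over an order of magnitude below the plain estimator's.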
International Nuclear Information System (INIS)
Rajabalinejad, M.
2010-01-01
To reduce the cost of Monte Carlo (MC) simulations for time-consuming processes, Bayesian Monte Carlo (BMC) is introduced in this paper. The BMC method reduces the number of realizations in MC according to the desired accuracy level. BMC also makes it possible to consider more priors; in other words, different priors can be integrated into one model by using BMC to further reduce the cost of simulations. This study suggests speeding up the simulation process by considering the logical dependence of neighboring points as prior information. This information is used in the BMC method to produce a predictive tool through the simulation process. The general methodology and algorithm of the BMC method are presented in this paper. The BMC method is applied to a simplified breakwater model as well as the finite element model of the 17th Street Canal in New Orleans, and the results are compared with the MC and Dynamic Bounds methods.
Directory of Open Access Journals (Sweden)
Provost Sylvie
2010-12-01
Background: The Canadian healthcare system is currently experiencing important organizational transformations through the reform of primary healthcare (PHC). These reforms vary in scope but share a common feature of proposing the transformation of PHC organizations by implementing new models of PHC organization. These models vary in their performance with respect to client affiliation, utilization of services, experience of care and perceived outcomes of care. Objectives: In early 2005 we conducted a study in the two most populous regions of Quebec province (Montreal and Montérégie) which assessed the association between prevailing models of primary healthcare (PHC) and population-level experience of care. The goal of the present research project is to track the evolution of PHC organizational models and their relative performance through the reform process (from 2005 until 2010) and to assess factors at the organizational and contextual levels that are associated with the transformation of PHC organizations and their performance. Methods/Design: This study will consist of three interrelated surveys, hierarchically nested. The first survey is a population-based survey of randomly-selected adults from two populous regions in the province of Quebec. This survey will assess the current affiliation of people with PHC organizations, their level of utilization of healthcare services, attributes of their experience of care, reception of preventive and curative services and perception of unmet needs for care. The second survey is an organizational survey of PHC organizations assessing aspects related to their vision, organizational structure, level of resources, and clinical practice characteristics. This information will serve to develop a taxonomy of organizations using a mixed methods approach of factorial analysis and principal component analysis. The third survey is an assessment of the organizational context in which PHC organizations are
Levesque, Jean-Frédéric; Pineault, Raynald; Provost, Sylvie; Tousignant, Pierre; Couture, Audrey; Da Silva, Roxane Borgès; Breton, Mylaine
2010-12-01
The Canadian healthcare system is currently experiencing important organizational transformations through the reform of primary healthcare (PHC). These reforms vary in scope but share a common feature of proposing the transformation of PHC organizations by implementing new models of PHC organization. These models vary in their performance with respect to client affiliation, utilization of services, experience of care and perceived outcomes of care. In early 2005 we conducted a study in the two most populous regions of Quebec province (Montreal and Montérégie) which assessed the association between prevailing models of primary healthcare (PHC) and population-level experience of care. The goal of the present research project is to track the evolution of PHC organizational models and their relative performance through the reform process (from 2005 until 2010) and to assess factors at the organizational and contextual levels that are associated with the transformation of PHC organizations and their performance. This study will consist of three interrelated surveys, hierarchically nested. The first survey is a population-based survey of randomly-selected adults from two populous regions in the province of Quebec. This survey will assess the current affiliation of people with PHC organizations, their level of utilization of healthcare services, attributes of their experience of care, reception of preventive and curative services and perception of unmet needs for care. The second survey is an organizational survey of PHC organizations assessing aspects related to their vision, organizational structure, level of resources, and clinical practice characteristics. This information will serve to develop a taxonomy of organizations using a mixed methods approach of factorial analysis and principal component analysis. The third survey is an assessment of the organizational context in which PHC organizations are evolving. The five year prospective period will serve as a natural
International Nuclear Information System (INIS)
Wollaber, Allan Benton
2016-01-01
This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
International Nuclear Information System (INIS)
Creutz, M.
1986-01-01
The author discusses a recently developed algorithm for simulating statistical systems. The procedure interpolates between molecular dynamics methods and canonical Monte Carlo. The primary advantages are extremely fast simulations of discrete systems such as the Ising model and a relative insensitivity to random number quality. A variation of the algorithm gives rise to a deterministic dynamics for Ising spins. This model may be useful for high-speed simulation of non-equilibrium phenomena.
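The interpolation between molecular dynamics and canonical Monte Carlo can be sketched with Creutz's "demon" idea on a 1-D Ising chain; the lattice size, initial demon energy and step count below are arbitrary illustrative values, not taken from the abstract.

```python
import random

def demon_ising_1d(n=1000, steps=20000, demon_energy=20, seed=1):
    """Creutz's microcanonical 'demon' dynamics for a 1-D Ising chain (J = 1).

    The demon carries a non-negative energy reservoir; a spin flip is accepted
    only if the reservoir can absorb or supply the energy change, so the total
    energy (system + demon) is exactly conserved -- no random acceptance step
    with exponentials is needed, which is what makes the scheme so fast.
    """
    rng = random.Random(seed)
    spins = [1] * n
    demon_trace = []
    for _ in range(steps):
        i = rng.randrange(n)
        # Energy change of flipping spin i, with periodic boundaries.
        d_e = 2 * spins[i] * (spins[(i - 1) % n] + spins[(i + 1) % n])
        if d_e <= demon_energy:
            spins[i] = -spins[i]
            demon_energy -= d_e
        demon_trace.append(demon_energy)
    return spins, demon_trace

spins, demon_trace = demon_ising_1d()
# The demon's energy settles into an exponential distribution exp(-E/kT),
# from which the temperature of the simulated ensemble can be read off.
print(min(demon_trace) >= 0)  # True: the reservoir never goes negative
```

Because the accept/reject decision needs no transcendental functions or high-quality random reals, the method is well suited to the fast, bit-level Ising simulations the abstract mentions.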
Energy Technology Data Exchange (ETDEWEB)
Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
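The "estimating π" example from the lecture outline can be written in a few lines: sample points uniformly in the unit square and count the fraction that falls inside the quarter disc.

```python
import random

def estimate_pi(n, seed=0):
    """Monte Carlo estimate of pi: 4 times the fraction of uniform points
    in the unit square that land inside the quarter disc x^2 + y^2 < 1."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 < 1.0)
    return 4.0 * hits / n

pi_hat = estimate_pi(1_000_000)
print(pi_hat)  # close to 3.14159; the error shrinks like 1/sqrt(n)
```

The Law of Large Numbers guarantees convergence of the hit fraction, and the Central Limit Theorem supplies the 1/sqrt(n) error bar — exactly the two results the outline invokes under "Why does this even work?".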
Energy Technology Data Exchange (ETDEWEB)
Brockway, D.; Soran, P.; Whalen, P.
1985-01-01
A Monte Carlo algorithm to efficiently calculate static alpha eigenvalues, N = N₀e^(αt), for supercritical systems has been developed and tested. A direct Monte Carlo approach to calculating a static alpha is to simply follow the buildup in time of neutrons in a supercritical system and evaluate the logarithmic derivative of the neutron population with respect to time. This procedure is expensive, and the solution is very noisy and almost useless for a system near critical. The modified approach is to convert the time-dependent problem to a static α-eigenvalue problem and regress α on solutions of a k-eigenvalue problem. In practice, this procedure is much more efficient than the direct calculation, and produces much more accurate results. Because the Monte Carlo codes are intrinsically three-dimensional and use elaborate continuous-energy cross sections, this technique is now used as a standard for evaluating other calculational techniques in odd geometries or with group cross sections.
Hybrid SN/Monte Carlo research and results
International Nuclear Information System (INIS)
Baker, R.S.
1993-01-01
The neutral particle transport equation is solved by a hybrid method that iteratively couples regions where deterministic (SN) and stochastic (Monte Carlo) methods are applied. The Monte Carlo and SN regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The hybrid Monte Carlo/SN method provides a new means of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor SN is well suited for by themselves. The hybrid method has been successfully applied to realistic shielding problems. The vectorized Monte Carlo algorithm in the hybrid method has been ported to the massively parallel architecture of the Connection Machine. Comparisons of performance on a vector machine (Cray Y-MP) and the Connection Machine (CM-2) show that significant speedups are obtainable for vectorized Monte Carlo algorithms on massively parallel machines, even when realistic problems requiring variance reduction are considered. However, the architecture of the Connection Machine does place some limitations on the regime in which the Monte Carlo algorithm may be expected to perform well.
Monte Carlo method for array criticality calculations
International Nuclear Information System (INIS)
Dickinson, D.; Whitesides, G.E.
1976-01-01
The Monte Carlo method for solving neutron transport problems consists of mathematically tracing paths of individual neutrons collision by collision until they are lost by absorption or leakage. The fate of the neutron after each collision is determined by the probability distribution functions that are formed from the neutron cross-section data. These distributions are sampled statistically to establish the successive steps in the neutron's path. The resulting data, accumulated from following a large number of batches, are analyzed to give estimates of k_eff and other collision-related quantities. The use of electronic computers to produce the simulated neutron histories, initiated at Los Alamos Scientific Laboratory, made the use of the Monte Carlo method practical for many applications. In analog Monte Carlo simulation, the calculation follows the physical events of neutron scattering, absorption, and leakage. To increase calculational efficiency, modifications such as the use of statistical weights are introduced. The Monte Carlo method permits the use of a three-dimensional geometry description and a detailed cross-section representation. Some of the problems in using the method are the selection of the spatial distribution for the initial batch, the preparation of the geometry description for complex units, and the calculation of error estimates for region-dependent quantities such as fluxes. The Monte Carlo method is especially appropriate for criticality safety calculations since it permits an accurate representation of interacting units of fissile material. Dissimilar units, units of complex shape, moderators between units, and reflected arrays may be calculated. Monte Carlo results must be correlated with relevant experimental data, and caution must be used to ensure that a representative set of neutron histories is produced.
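The collision-by-collision tracking described above can be sketched in one dimension; the slab width, total cross section and absorption probability below are made-up values for illustration, not data from any criticality code.

```python
import math
import random

def neutron_histories(n, slab_width=4.0, sigma_t=1.0, p_absorb=0.3, seed=0):
    """Analog Monte Carlo in a 1-D slab: sample a free-flight distance from the
    exponential distribution set by the total cross section sigma_t, then choose
    absorption or isotropic scattering from the collision probabilities."""
    rng = random.Random(seed)
    absorbed = leaked = 0
    for _ in range(n):
        x, mu = 0.0, 1.0              # history starts at the left face, moving right
        while True:
            x += mu * (-math.log(1.0 - rng.random()) / sigma_t)
            if x < 0.0 or x > slab_width:
                leaked += 1           # neutron escapes the slab
                break
            if rng.random() < p_absorb:
                absorbed += 1         # collision ends the history
                break
            mu = rng.uniform(-1.0, 1.0)  # isotropic scatter: new direction cosine
    return absorbed / n, leaked / n

p_abs, p_leak = neutron_histories(50_000)
print(round(p_abs + p_leak, 6))  # every history ends in absorption or leakage
```

Tallying fission sites and weights over successive batches, rather than the simple counts above, is what turns this kind of random walk into an estimate of k_eff.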
Directory of Open Access Journals (Sweden)
Turpin M.
2012-02-01
This study concerns an approach for dolomite quantification and stoichiometry calculation using X-ray diffractometry coupled with cell and Rietveld refinements and supported by a newly assembled, substantial database of dolomite compositions. Greater accuracy and precision are obtained for quantifying dolomite (as well as other mineral phases) and for calculating dolomite stoichiometry compared to the classical “Lumsden line” and previous methods. The applicability of this approach is verified on a dolomite reference material (Eugui) and on Triassic (Upper Muschelkalk-Lettenkohle) carbonates from the French Jura. The approach shown here is applicable to bulk dolostones as well as to specific dolomite cements and was combined with petrographical and isotopic analyses. Upper Muschelkalk dolomites were formed during burial dolomitization under fluids characterized by increased temperature and variable isotopic composition through burial. This is clear from the Ca content of the dolomites, which gradually approaches ideal stoichiometry (from 53.16% to 51.19%) through increasing dolomitization. Lettenkohle dolostones consist of near-ideal stoichiometric (51.06% Ca) and well-ordered dolomites associated with anhydrite relicts; they originated through both sabkha and burial dolomitization. This contribution gives an improved method for the characterization of different dolomite types and their distinct traits in sedimentary rocks, which allows a better evaluation of their reservoir potential.
International Nuclear Information System (INIS)
Bossart, P.; Nussbaum, C.
2007-01-01
The international Mont Terri project started in January 1996. Research is carried out in the Mont Terri rock laboratory, an underground facility near the security gallery of the Mont Terri motorway tunnel (vicinity of St-Ursanne, Canton of Jura, Switzerland). The aim of the project is the geological, hydrogeological, geochemical and geotechnical characterisation of a clay formation, specifically of the Opalinus Clay. Twelve partners from European countries and Japan participate in the project: ANDRA, BGR, CRIEPI, ENRESA, GRS, HSK, IRSN, JAEA, NAGRA, OBAYASHI, SCK.CEN and swisstopo. Since 2006, swisstopo has acted as operator of the rock laboratory and is responsible for the implementation of the research programme decided by the partners. The three following reports are milestones in the research history of the Mont Terri project. It was the first time that an in-situ heating test with about 20 observation boreholes over a time span of several years was carried out in a clay formation. The engineered barrier emplacement experiment has been extended due to very encouraging measurement results and is still going on. The ventilation test was and is a challenge, especially in the very narrow microtunnel. All three projects were financially supported by the European Commission and the Swiss State Secretariat for Education and Research. The three important scientific and technical reports presented in the following have been provided by a number of scientists, engineers and technicians from the partners, but also from national research organisations and private contractors. Many fruitful meetings were held, at the rock laboratory and at other facilities, not to forget the weeks and months of installation and testing work carried out by the technicians and engineers. The corresponding names and organisations are listed in detail in the reports. Special thanks go to the co-ordinators of the three projects for motivating the team during
A note on simultaneous Monte Carlo tests
DEFF Research Database (Denmark)
Hahn, Ute
In this short note, Monte Carlo tests of goodness of fit for data of the form X(t), t ∈ I, are considered, which reject the null hypothesis if X(t) leaves an acceptance region bounded by an upper and a lower curve for some t in I. A construction of the acceptance region is proposed that complies with a given target level of rejection and yields exact p-values. The construction is based on pointwise quantiles, estimated from simulated realizations of X(t) under the null hypothesis.
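A minimal sketch of such a simultaneous Monte Carlo test follows, using a global rank of the most extreme pointwise standardized deviation — a simplification of the pointwise-quantile construction in the note; the null model, curve length and simulation count are invented for the example.

```python
import random
import statistics

def mc_envelope_test(observed, simulate, n_sim=999, seed=0):
    """Simultaneous Monte Carlo test: rank each curve by its most extreme
    pointwise standardized deviation from the simulated ensemble, and compute
    the rank-based p-value of the observed curve. Exact under the null."""
    rng = random.Random(seed)
    sims = [simulate(rng) for _ in range(n_sim)]
    m = len(observed)
    means = [statistics.mean(s[t] for s in sims) for t in range(m)]
    sds = [statistics.pstdev(s[t] for s in sims) or 1.0 for t in range(m)]

    def score(curve):
        # Worst pointwise deviation over all t -- the "leaves the region" event.
        return max(abs(curve[t] - means[t]) / sds[t] for t in range(m))

    obs_score = score(observed)
    rank = sum(1 for s in sims if score(s) >= obs_score)
    return (rank + 1) / (n_sim + 1)   # exact p-value, as in the abstract

# Toy null model: X(t) is i.i.d. standard normal noise at 20 time points.
sim = lambda rng: [rng.gauss(0.0, 1.0) for _ in range(20)]
null_curve = [0.0] * 20
print(mc_envelope_test(null_curve, sim) > 0.05)  # a flat curve is not rejected
```

Because the observed curve is ranked among the simulated ones, the p-value is exact under the null hypothesis, which is the property the note emphasizes.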
International Nuclear Information System (INIS)
Lupton, L.R.; Keller, N.A.
1982-09-01
The design of a positron emission tomography (PET) ring camera involves trade-offs between such things as sensitivity, resolution and cost. As a design aid, a Monte Carlo simulation of a single-ring camera system has been developed. The model includes a source-filled phantom, collimators, detectors, and optional shadow shields and inter-crystal septa. Individual gamma rays are tracked within the system materials until they escape, are absorbed, or are detected. Compton and photoelectric interactions are modelled. All system dimensions are variable within the computation. Coincidence and singles data are recorded according to type (true or scattered), annihilation origin, and detected energy. Photon fluxes at various points of interest, such as the edge of the phantom and the collimator, are available. This report reviews the basics of PET, describes the physics involved in the simulation, and provides detailed outlines of the routines.
2003-01-01
MGS MOC Release No. MOC2-387, 10 June 2003. This is a Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) wide angle view of the Charitum Montes, south of Argyre Planitia, in early June 2003. The seasonal south polar frost cap, composed of carbon dioxide, has been retreating southward through this area since spring began a month ago. The bright features toward the bottom of this picture are surfaces covered by frost. The picture is located near 57°S, 43°W. North is at the top, south is at the bottom. Sunlight illuminates the scene from the upper left. The area shown is about 217 km (135 miles) wide.
Coevolution Based Adaptive Monte Carlo Localization (CEAMCL)
Directory of Open Access Journals (Sweden)
Luo Ronghua
2008-11-01
An adaptive Monte Carlo localization algorithm based on the coevolution mechanism of ecological species is proposed. Samples are clustered into species, each of which represents a hypothesis of the robot's pose. Since the coevolution between the species ensures that multiple distinct hypotheses can be tracked stably, the problem of premature convergence when using MCL in highly symmetric environments can be solved. The sample size can be adjusted adaptively over time according to the uncertainty of the robot's pose by using a population growth model. In addition, by using the crossover and mutation operators of evolutionary computation, intra-species evolution can drive the samples towards the regions where the desired posterior density is large, so a small set of samples can represent the desired density well enough to achieve precise localization. The new algorithm is termed coevolution-based adaptive Monte Carlo localization (CEAMCL). Experiments have been carried out to demonstrate the efficiency of the new localization algorithm.
Monte Carlo Methods in Physics
International Nuclear Information System (INIS)
Santoso, B.
1997-01-01
The method of Monte Carlo integration is reviewed briefly and some of its applications in physics are explained. A numerical experiment on the random generators used in Monte Carlo techniques is carried out to show the behaviour of various methods in generating random numbers. To account for the weight function involved in Monte Carlo integration, the Metropolis method is used. The results of the experiment show no regular pattern in the numbers generated, indicating that the program generators are reasonably good, while the experimental results follow the expected statistical distribution law. Further, some applications of the Monte Carlo method in physics are given. The physical problems are chosen such that the models have solutions available, either exact or approximate, against which the Monte Carlo calculations can be compared. The comparisons show that, for the models considered, good agreement has been obtained.
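The Metropolis method mentioned above can be sketched in a minimal form for a standard normal target; the target density and proposal width are illustrative choices, not the models used in the paper.

```python
import math
import random

def metropolis(log_p, x0, n, step=1.0, seed=0):
    """Metropolis sampling with a symmetric uniform proposal: accept a move
    from x to x' with probability min(1, p(x')/p(x)). Only the unnormalized
    log-density log_p is needed, which is what makes the method so useful
    for weight functions with unknown normalizing constants."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n):
        x_prop = x + rng.uniform(-step, step)
        if math.log(1.0 - rng.random()) < log_p(x_prop) - log_p(x):
            x = x_prop
        samples.append(x)   # the current state is recorded whether or not we moved
    return samples

# Target: standard normal density, known only up to its normalizing constant.
samples = metropolis(lambda x: -0.5 * x * x, 0.0, 200_000)
burn_in = samples[50_000:]
mean = sum(burn_in) / len(burn_in)
print(abs(mean) < 0.1)  # the sample mean is near the true mean of 0
```

Discarding an initial burn-in segment, as above, is the standard precaution against dependence on the arbitrary starting point.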
International Nuclear Information System (INIS)
Bossart, P.; Bernier, F.; Birkholzer, J.
2017-01-01
Geologic repositories for radioactive waste are designed as multi-barrier disposal systems that perform a number of functions including the long-term isolation and containment of waste from the human environment, and the attenuation of radionuclides released to the subsurface. The rock laboratory at Mont Terri (canton Jura, Switzerland) in the Opalinus Clay plays an important role in the development of such repositories. The experimental results gained in the last 20 years are used to study the possible evolution of a repository and investigate processes closely related to the safety functions of a repository hosted in a clay rock. At the same time, these experiments have increased our general knowledge of the complex behaviour of argillaceous formations in response to coupled hydrological, mechanical, thermal, chemical, and biological processes. After presenting the geological setting in and around the Mont Terri rock laboratory and an overview of the mineralogy and key properties of the Opalinus Clay, we give a brief overview of the key experiments that are described in more detail in the following research papers to this Special Issue of the Swiss Journal of Geosciences. These experiments aim to characterise the Opalinus Clay and estimate safety-relevant parameters, test procedures, and technologies for repository construction and waste emplacement. Other aspects covered are: bentonite buffer emplacement, high-pH concrete-clay interaction experiments, anaerobic steel corrosion with hydrogen formation, depletion of hydrogen by microbial activity, and finally, release of radionuclides into the bentonite buffer and the Opalinus Clay barrier. In the case of a spent fuel/high-level waste repository, the time considered in performance assessment for repository evolution is generally 1 million years, starting with a transient phase over the first 10,000 years and followed by an equilibrium phase. Experiments dealing with initial conditions, construction, and waste
Energy Technology Data Exchange (ETDEWEB)
Bossart, P. [Swisstopo, Federal Office of Topography, Wabern (Switzerland); Bernier, F. [Federal Agency for Nuclear Control FANC, Brussels (Belgium); Birkholzer, J. [Lawrence Berkeley National Laboratory, Berkeley (United States); and others
2017-04-15
Geologic repositories for radioactive waste are designed as multi-barrier disposal systems that perform a number of functions including the long-term isolation and containment of waste from the human environment, and the attenuation of radionuclides released to the subsurface. The rock laboratory at Mont Terri (canton Jura, Switzerland) in the Opalinus Clay plays an important role in the development of such repositories. The experimental results gained in the last 20 years are used to study the possible evolution of a repository and investigate processes closely related to the safety functions of a repository hosted in a clay rock. At the same time, these experiments have increased our general knowledge of the complex behaviour of argillaceous formations in response to coupled hydrological, mechanical, thermal, chemical, and biological processes. After presenting the geological setting in and around the Mont Terri rock laboratory and an overview of the mineralogy and key properties of the Opalinus Clay, we give a brief overview of the key experiments that are described in more detail in the following research papers to this Special Issue of the Swiss Journal of Geosciences. These experiments aim to characterise the Opalinus Clay and estimate safety-relevant parameters, test procedures, and technologies for repository construction and waste emplacement. Other aspects covered are: bentonite buffer emplacement, high-pH concrete-clay interaction experiments, anaerobic steel corrosion with hydrogen formation, depletion of hydrogen by microbial activity, and finally, release of radionuclides into the bentonite buffer and the Opalinus Clay barrier. In the case of a spent fuel/high-level waste repository, the time considered in performance assessment for repository evolution is generally 1 million years, starting with a transient phase over the first 10,000 years and followed by an equilibrium phase. Experiments dealing with initial conditions, construction, and waste
Lectures on Monte Carlo methods
Madras, Neal
2001-01-01
Monte Carlo methods form an experimental branch of mathematics that employs simulations driven by random number generators. These methods are often used when others fail, since they are much less sensitive to the "curse of dimensionality", which plagues deterministic methods in problems with a large number of variables. Monte Carlo methods are used in many fields: mathematics, statistics, physics, chemistry, finance, computer science, and biology, for instance. This book is an introduction to Monte Carlo methods for anyone who would like to use these methods to study various kinds of mathemati
Advanced Multilevel Monte Carlo Methods
Jasra, Ajay; Law, Kody; Suciu, Carina
2017-01-01
This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature and we describe different strategies which facilitate the application of MLMC within these methods.
Advanced Multilevel Monte Carlo Methods
Jasra, Ajay
2017-04-24
This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature and we describe different strategies which facilitate the application of MLMC within these methods.
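The telescoping representation described in the abstract can be illustrated on a toy problem: estimating E[S_T] for a geometric Brownian motion under Euler discretization, where level l uses 2^l time steps and each fine/coarse pair shares its Brownian increments. All parameters and sample sizes below are invented for the sketch.

```python
import math
import random

def mlmc_gbm_mean(levels=4, n0=20000, r=0.05, sigma=0.2, s0=1.0, t_end=1.0, seed=0):
    """Multilevel Monte Carlo for E[S_T] of geometric Brownian motion under
    Euler discretization. The telescoping sum E[P_L] = E[P_0] + sum E[P_l - P_{l-1}]
    is estimated level by level; fine and coarse paths at level l are coupled
    by sharing Brownian increments, so the corrections have small variance."""
    rng = random.Random(seed)
    total = 0.0
    for l in range(levels + 1):
        n = max(n0 // 2 ** l, 1000)   # fewer samples on the costlier fine levels
        m = 2 ** l
        dt = t_end / m
        acc = 0.0
        for _ in range(n):
            dw = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(m)]
            s_fine = s0
            for w in dw:
                s_fine += r * s_fine * dt + sigma * s_fine * w
            if l == 0:
                acc += s_fine
            else:
                s_coarse = s0
                for k in range(0, m, 2):   # coarse path reuses paired increments
                    s_coarse += r * s_coarse * 2 * dt \
                        + sigma * s_coarse * (dw[k] + dw[k + 1])
                acc += s_fine - s_coarse
        total += acc / n
    return total

print(round(mlmc_gbm_mean(), 2))  # near the exact value exp(0.05) ≈ 1.05
```

Because the level-l correction P_l - P_{l-1} has small variance, far fewer samples are needed on the expensive fine levels than i.i.d. sampling at the finest level would require — the cost reduction the abstract refers to.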
Monte Carlo simulation for IRRMA
International Nuclear Information System (INIS)
Gardner, R.P.; Liu Lianyan
2000-01-01
Monte Carlo simulation is fast becoming a standard approach for many radiation applications that were previously treated almost entirely by experimental techniques. This is certainly true for Industrial Radiation and Radioisotope Measurement Applications (IRRMA). The reasons for this include: (1) the increased cost and inadequacy of experimentation for design and interpretation purposes; (2) the availability of low cost, large memory, and fast personal computers; and (3) the general availability of general purpose Monte Carlo codes that are increasingly user-friendly, efficient, and accurate. This paper discusses the history and present status of Monte Carlo simulation for IRRMA, including the general purpose (GP) and specific purpose (SP) Monte Carlo codes, and future needs, primarily from the experience of the authors.
Geology of Maxwell Montes, Venus
Head, J. W.; Campbell, D. B.; Peterfreund, A. R.; Zisk, S. A.
1984-01-01
Maxwell Montes represents the most distinctive topography on the surface of Venus, rising some 11 km above mean planetary radius. The multiple data sets of the Pioneer mission and Earth-based radar observations are analyzed to characterize Maxwell Montes. Maxwell Montes is a porkchop-shaped feature located at the eastern end of Lakshmi Planum. The main massif trends about North 20° West for approximately 1000 km, and the narrow handle extends several hundred km west-southwest (WSW) from the north end of the main massif, descending toward Lakshmi Planum. The main massif is rectilinear and approximately 500 km wide. The southern and northern edges of Maxwell Montes coincide with major topographic boundaries defining the edge of Ishtar Terra.
Adjoint electron Monte Carlo calculations
International Nuclear Information System (INIS)
Jordan, T.M.
1986-01-01
Adjoint Monte Carlo is the most efficient method for accurate analysis of space systems exposed to natural and artificially enhanced electron environments. Recent adjoint calculations for isotropic electron environments include: comparative data for experimental measurements on electronics boxes; benchmark problem solutions for comparing total dose prediction methodologies; preliminary assessment of sectoring methods used during space system design; and total dose predictions on an electronics package. Adjoint Monte Carlo, forward Monte Carlo, and experiment are in excellent agreement for electron sources that simulate space environments. For electron space environments, adjoint Monte Carlo is clearly superior to forward Monte Carlo, requiring one to two orders of magnitude less computer time for relatively simple geometries. The solid-angle sectoring approximations used for routine design calculations can err by more than a factor of 2 on dose in simple shield geometries. For critical space systems exposed to severe electron environments, these potential sectoring errors demand the establishment of large design margins and/or verification of shield design by adjoint Monte Carlo and experiment.
Monte Carlo theory and practice
International Nuclear Information System (INIS)
James, F.
1987-01-01
Historically, the first large-scale calculations to make use of the Monte Carlo method were studies of neutron scattering and absorption, random processes for which it is quite natural to employ random numbers. Such calculations, a subset of Monte Carlo calculations, are known as direct simulation, since the 'hypothetical population' of the narrower definition above corresponds directly to the real population being studied. The Monte Carlo method may be applied wherever it is possible to establish equivalence between the desired result and the expected behaviour of a stochastic system. The problem to be solved may already be of a probabilistic or statistical nature, in which case its Monte Carlo formulation will usually be a straightforward simulation, or it may be of a deterministic or analytic nature, in which case an appropriate Monte Carlo formulation may require some imagination and may appear contrived or artificial. In any case, the suitability of the method chosen will depend on its mathematical properties and not on its superficial resemblance to the problem to be solved. The author shows how Monte Carlo techniques may be compared with other methods of solution of the same physical problem.
Monte Carlo simulation of neutron scattering instruments
International Nuclear Information System (INIS)
Seeger, P.A.; Daemen, L.L.; Hjelm, R.P. Jr.
1998-01-01
A code package consisting of the Monte Carlo Library MCLIB, the executing code MC RUN, the web application MC Web, and various ancillary codes is proposed as an open standard for simulation of neutron scattering instruments. The architecture of the package includes structures to define surfaces, regions, and optical elements contained in regions. A particle is defined by its vector position and velocity, its time of flight, its mass and charge, and a polarization vector. The MC RUN code handles neutron transport and bookkeeping, while the action on the neutron within any region is computed using algorithms that may be deterministic, probabilistic, or a combination. Complete versatility is possible because the existing library may be supplemented by any procedures a user is able to code. Some examples are shown
The SGHWR version of the Monte Carlo code W-MONTE. Part 1. The theoretical model
International Nuclear Information System (INIS)
Allen, F.R.
1976-03-01
W-MONTE provides a multi-group model of neutron transport in the exact geometry of a reactor lattice using Monte Carlo methods. It is currently restricted to uniform axial properties. Material data is normally obtained from a preliminary WIMS lattice calculation in the transport group structure. The SGHWR version has been required for analysis of zero energy experiments and special aspects of power reactor lattices, such as the unmoderated lattice region above the moderator when drained to dump height. Neutron transport is modelled for a uniform infinite lattice, simultaneously treating the cases of no leakage, radial leakage or axial leakage only, and the combined effects of radial and axial leakage. Multigroup neutron balance edits are incorporated for the separate effects of radial and axial leakage to facilitate the analysis of leakage and to provide effective diffusion theory parameters for core representation in reactor cores. (author)
Multilevel sequential Monte Carlo samplers
Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan
2016-01-01
In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, that is, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.
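The telescoping identity the abstract refers to can be sketched in a toy setting; everything below (the geometric Brownian motion test equation, the parameter values) is an illustrative assumption, not from the paper. The key point is that the fine and coarse paths of each level difference share the same Brownian increments, which is what makes the differences small:

```python
import math
import random

def level_difference(T, level, rng, mu=0.05, sigma=0.2, x0=1.0):
    """One coupled sample of (P_l, P_{l-1}) for E[X(T)] of a geometric
    Brownian motion dX = mu*X dt + sigma*X dW under forward Euler.
    The coarse path is driven by pairwise sums of the fine path's
    Brownian increments."""
    n_fine = 2 ** level
    dt_f = T / n_fine
    dw = [rng.gauss(0.0, math.sqrt(dt_f)) for _ in range(n_fine)]
    xf = x0
    for w in dw:
        xf += mu * xf * dt_f + sigma * xf * w
    if level == 0:
        return xf, 0.0  # no coarser level below level 0
    dt_c = 2.0 * dt_f
    xc = x0
    for i in range(0, n_fine, 2):
        xc += mu * xc * dt_c + sigma * xc * (dw[i] + dw[i + 1])
    return xf, xc

def mlmc_estimate(T, L, n_per_level, seed=0):
    """Telescoping MLMC sum: E[P_L] = E[P_0] + sum_{l=1..L} E[P_l - P_{l-1}]."""
    rng = random.Random(seed)
    total = 0.0
    for level in range(L + 1):
        acc = 0.0
        for _ in range(n_per_level):
            xf, xc = level_difference(T, level, rng)
            acc += xf - xc
        total += acc / n_per_level
    return total
```

In a production MLMC code the number of samples per level would be chosen from the estimated level variances rather than held fixed as here.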
Sensitivity analysis for oblique incidence reflectometry using Monte Carlo simulations
DEFF Research Database (Denmark)
Kamran, Faisal; Andersen, Peter E.
2015-01-01
profiles. This article presents a sensitivity analysis of the technique in turbid media. Monte Carlo simulations are used to investigate the technique and its potential to distinguish the small changes between different levels of scattering. We present various regions of the dynamic range of optical...
Energy Technology Data Exchange (ETDEWEB)
Baker, Randal Scott [Univ. of Arizona, Tucson, AZ (United States)
1990-01-01
The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S_{N}) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S_{N} regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The hybrid method provides a new means of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S_{N} is well suited for by itself. The fully coupled Monte Carlo/S_{N} technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S_{N} calculation is to be performed. The Monte Carlo region may comprise the entire spatial region for selected energy groups, or may consist of a rectangular area that is either completely or partially embedded in an arbitrary S_{N} region. The Monte Carlo and S_{N} regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and volumetric sources. The hybrid method has been implemented in the S_{N} code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and volumetric sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S_{N} code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, the diffusion synthetic accelerator remains effective in accelerating S_{N} calculations. The special-purpose Monte Carlo routines used are essentially analog, with few variance reduction techniques employed. However, the routines have been successfully vectorized, with approximately a factor of five increase in speed over the non-vectorized version.
Monte Carlo simulation of experiments
International Nuclear Information System (INIS)
Opat, G.I.
1977-07-01
An outline of the technique of computer simulation of particle physics experiments by the Monte Carlo method is presented. Useful special-purpose subprograms are listed and described. At each stage the discussion is made concrete by direct reference to the program SIMUL8 and its variant MONTE-PION, written to assist in the analysis of the radiative decay experiments μ+ → e+ ν_e ν̄ γ and π+ → e+ ν_e γ, respectively. These experiments were based on the use of two large sodium iodide crystals, TINA and MINA, as e and γ detectors. Instructions for the use of SIMUL8 and MONTE-PION are given. (author)
Monte Carlo tree search strategies
VODOPIVEC, TOM
2018-01-01
After the breakthrough in the game of Go, Monte Carlo tree search (MCTS) methods triggered rapid progress in game-playing agents: the research community has since developed many variants and improvements of the MCTS algorithm, thereby advancing artificial intelligence not only in games but also in numerous other domains. Although MCTS methods combine the generality of random sampling with the precision of tree search, in practice they can suffer from slow conv...
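The selection rule that balances random sampling against tree precision in MCTS is typically UCB1; a minimal sketch with an invented two-move bandit standing in for a game tree (all payoffs and constants are illustrative):

```python
import math
import random

def ucb1(value_sum, visits, parent_visits, c=1.4):
    """UCB1 score used in the MCTS selection step: average payoff
    (exploitation) plus an exploration bonus for rarely tried moves."""
    if visits == 0:
        return float("inf")  # always try an unvisited move first
    return value_sum / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_child(children, parent_visits):
    """Pick the child node maximizing UCB1."""
    return max(children, key=lambda ch: ucb1(ch["value"], ch["visits"],
                                             parent_visits))

# Toy run: two candidate moves with different (hidden) win rates; after
# many iterations the better move should receive most of the visits.
rng = random.Random(1)
children = [{"value": 0.0, "visits": 0, "p": 0.3},
            {"value": 0.0, "visits": 0, "p": 0.7}]
for t in range(1, 2001):
    ch = select_child(children, t)
    reward = 1.0 if rng.random() < ch["p"] else 0.0  # random rollout result
    ch["visits"] += 1
    ch["value"] += reward
```

A full MCTS adds expansion and backpropagation along the tree path, but this selection/rollout loop is the core that the variants mentioned above modify.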
Energy Technology Data Exchange (ETDEWEB)
Vuataz, F D; Muralt, R [Centre d'Hydrogeologie, Universite de Neuchatel (Switzerland)
1997-12-01
The major goal of this study is to evaluate the potential for groundwater warmer and deeper than that presently produced at the Centre thermal of Yverdon-les-Bains. Numerous data originating from seismic lines and boreholes made it possible to obtain a good understanding of the regional structural geology. However, these data are limited in the faulted zone of Pipechat-Chamblon-Chevressy (PCC) crossing the city of Yverdon, which makes a detailed structural interpretation difficult at the site of the old thermal spring and the 600 m deep well. The latter drains the Malm limestone at about 100 m from the south fault plane, which indicates the important hydraulic role played by the main fault or by a network of associated faults. The results of a specific vibro-seismic survey carried out in the urban area of Yverdon, close enough to the Centre thermal, showed the precise location of the anticline axis formed by the PCC fault zone. Individual reflectors have been deciphered and represent the thickness and the structure of the quaternary and molassic sediments on both sides of the fault zone. (orig.)
Is Monte Carlo embarrassingly parallel?
Energy Technology Data Exchange (ETDEWEB)
Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)
2012-07-01
Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle of fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
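The per-cycle rendezvous described above can be mimicked in a toy program; threads stand in for MPI ranks and the per-history physics is a random placeholder, so this is a structural sketch only:

```python
from concurrent.futures import ThreadPoolExecutor
import random

def run_histories(seed, n_histories):
    """Worker: simulate a batch of histories and return its partial tally.
    The 'physics' is a stand-in: each history scores a random neutron
    yield uniform in [0, 2.5)."""
    rng = random.Random(seed)
    return sum(rng.random() * 2.5 for _ in range(n_histories))

def parallel_cycles(n_cycles=3, n_workers=4, histories_per_worker=20_000):
    """Each cycle fans histories out to the workers and then synchronizes:
    gathering the partial tallies is the rendezvous point that must
    complete before the next cycle (the fission source update) can start."""
    k_estimates = []
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for cycle in range(n_cycles):
            futures = [pool.submit(run_histories,
                                   cycle * n_workers + i,
                                   histories_per_worker)
                       for i in range(n_workers)]
            tallies = [f.result() for f in futures]  # blocks: the rendezvous
            k_estimates.append(sum(tallies)
                               / (n_workers * histories_per_worker))
    return k_estimates
```

The blocking gather is exactly where the speedup loss discussed in the paper accumulates: the slowest worker in each cycle sets the pace for all of them.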
Exact Monte Carlo for molecules
International Nuclear Information System (INIS)
Lester, W.A. Jr.; Reynolds, P.J.
1985-03-01
A brief summary of the fixed-node quantum Monte Carlo method is presented. Results obtained for binding energies, the classical barrier height for H + H2, and the singlet-triplet splitting in methylene are presented and discussed. 17 refs
Monte Carlo studies of high-transverse-energy hadronic interactions
International Nuclear Information System (INIS)
Corcoran, M.D.
1985-01-01
A four-jet Monte Carlo calculation has been used to simulate hadron-hadron interactions which deposit high transverse energy into a large-solid-angle calorimeter and limited solid-angle regions of the calorimeter. The calculation uses first-order QCD cross sections to generate two scattered jets and also produces beam and target jets. Field-Feynman fragmentation has been used in the hadronization. The sensitivity of the results to a few features of the Monte Carlo program has been studied. The results are found to be very sensitive to the method used to ensure overall energy conservation after the fragmentation of the four jets is complete. Results are also sensitive to the minimum momentum transfer in the QCD subprocesses and to the distribution of p_T relative to the jet axis and the multiplicities in the fragmentation. With reasonable choices of these features of the Monte Carlo program, good agreement with data at Fermilab/CERN SPS energies is obtained, comparable to the agreement achieved with more sophisticated parton-shower models. With other choices, however, the calculation gives qualitatively different results which are in strong disagreement with the data. These results have important implications for extracting physics conclusions from Monte Carlo calculations. It is not possible to test the validity of a particular model or distinguish between different models unless the Monte Carlo results are unambiguous and different models exhibit clearly different behavior
Monte Carlo - Advances and Challenges
International Nuclear Information System (INIS)
Brown, Forrest B.; Mosteller, Russell D.; Martin, William R.
2008-01-01
Abstract only, full text follows: With ever-faster computers and mature Monte Carlo production codes, there has been tremendous growth in the application of Monte Carlo methods to the analysis of reactor physics and reactor systems. In the past, Monte Carlo methods were used primarily for calculating k_eff of a critical system. More recently, Monte Carlo methods have been increasingly used for determining reactor power distributions and many design parameters, such as β_eff, l_eff, τ, reactivity coefficients, Doppler defect, dominance ratio, etc. These advanced applications of Monte Carlo methods are now becoming common, not just feasible, and bring new challenges to both developers and users: convergence of 3D power distributions must be assured; confidence interval bias must be eliminated; iterated fission probabilities are required, rather than single-generation probabilities; temperature effects including Doppler and feedback must be represented; isotopic depletion and fission product buildup must be modeled. This workshop focuses on recent advances in Monte Carlo methods and their application to reactor physics problems, and on the resulting challenges faced by code developers and users. The workshop is partly tutorial, partly a review of the current state-of-the-art, and partly a discussion of future work that is needed. It should benefit both novice and expert Monte Carlo developers and users. In each of the topic areas, we provide an overview of needs, perspective on past and current methods, a review of recent work, and discussion of further research and capabilities that are required. Electronic copies of all workshop presentations and material will be available. The workshop is structured as 2 morning and 2 afternoon segments: - Criticality Calculations I - convergence diagnostics, acceleration methods, confidence intervals, and the iterated fission probability, - Criticality Calculations II - reactor kinetics parameters, dominance ratio, temperature
HEXANN-EVALU - a Monte Carlo program system for pressure vessel neutron irradiation calculation
International Nuclear Information System (INIS)
Lux, Ivan
1983-08-01
The Monte Carlo program HEXANN and the evaluation program EVALU are intended to calculate Monte Carlo estimates of reaction rates and currents in segments of concentric angular regions around a hexagonal reactor-core region. The report describes the theoretical basis, structure and activity of the programs. Input data preparation guides and a sample problem are also included. Theoretical considerations as well as numerical experimental results suggest to the user a nearly optimal way of making use of the Monte Carlo efficiency-increasing options included in the program
(U) Introduction to Monte Carlo Methods
Energy Technology Data Exchange (ETDEWEB)
Hungerford, Aimee L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-03-20
Monte Carlo methods are very valuable for representing solutions to particle transport problems. Here we describe a “cook book” approach to handling the terms in a transport equation using Monte Carlo methods. Focus is on the mechanics of a numerical Monte Carlo code, rather than the mathematical foundations of the method.
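A "cook book" treatment of the transport terms amounts to sampling a free-flight length, then a collision type, at each step of a history. A hypothetical 1-D slab version (cross sections and geometry invented for illustration, not from the report):

```python
import math
import random

def transport_history(sigma_t, sigma_a, slab_width, rng):
    """Follow one neutron through a 1-D slab: sample a free-flight
    length from an exponential with total cross section sigma_t, then
    choose absorption vs. (isotropic) scattering at each collision.
    Returns 'absorbed', 'leaked_left' or 'leaked_right'."""
    x = 0.0   # start on the left face
    mu = 1.0  # direction cosine, initially straight into the slab
    while True:
        s = -math.log(1.0 - rng.random()) / sigma_t  # flight ~ Exp(sigma_t)
        x += mu * s
        if x < 0.0:
            return "leaked_left"
        if x > slab_width:
            return "leaked_right"
        if rng.random() < sigma_a / sigma_t:  # absorption probability
            return "absorbed"
        mu = 2.0 * rng.random() - 1.0  # isotropic scatter

def transmission(n, sigma_t=1.0, sigma_a=0.5, width=2.0, seed=0):
    """Fraction of source neutrons leaking through the right face."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if transport_history(sigma_t, sigma_a, width, rng)
               == "leaked_right")
    return hits / n
```

Each term of the transport equation maps onto one sampling step here: streaming onto the exponential flight, the collision term onto the absorption/scatter branch.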
Energy Technology Data Exchange (ETDEWEB)
Gerodetti, M.
2009-02-15
The 6'097 m long Railway Tunnel under the Mont d'Or (western Switzerland, under the Jura mountains) was constructed at the beginning of the 20{sup th} century and inaugurated on 16 May 1915. During the construction there was an important break-in of water in the tunnel that flooded the whole construction area. Since the completion of the tunnel, the water incursion is drained and conveyed to the Swiss entrance. The flow rate coming from the tunnel is constant at about 120 l/s and didn't show any variation during all the past decades. The idea of using the tunnel water energy in a turbine is thought of since a long time. Considering the present situation on the energy sector, the 'Societe electrique du Chatelard' (the local electricity utility) with the support of the municipal authority, decided now to realize this concept and to turbine the water from the tunnel, also known as 'Bief Rouge', for power generation. The 'Bief Rouge' project consists in catching the flow at the Vallorbe entrance of the tunnel and conducting it into a new penstock down to the river Orbe situated some 65 m downhill where electricity will be produced in a new small-scale power plant. The planned scheme will have an electrical power of 54.5 kW and be located in a new building near the existing sewage pumping station of Vallorbe. The total investment cost is 1.3 million CHF and includes the construction of a new headwater basin, a penstock, a power plant and a tailrace channel as well as the electro-mechanical equipment for power production. Based on a mean annual power production of some 465,000 kWh, the retail price of the kWh has been evaluated to 21 Swiss cents/kWh. (author)
International Nuclear Information System (INIS)
Macdonald, J.L.
1975-08-01
Statistical and deterministic pattern recognition systems are designed to classify the state space of a Monte Carlo transport problem into importance regions. The surfaces separating the regions can be used for particle splitting and Russian roulette in state space in order to reduce the variance of the Monte Carlo tally. Computer experiments are performed to evaluate the performance of the technique using one and two dimensional Monte Carlo problems. Additional experiments are performed to determine the sensitivity of the technique to various pattern recognition and Monte Carlo problem dependent parameters. A system for applying the technique to a general purpose Monte Carlo code is described. An estimate of the computer time required by the technique is made in order to determine its effectiveness as a variance reduction device. It is recommended that the technique be further investigated in a general purpose Monte Carlo code. (auth)
Angular biasing in implicit Monte-Carlo
International Nuclear Information System (INIS)
Zimmerman, G.B.
1994-01-01
Calculations of indirect drive Inertial Confinement Fusion target experiments require an integrated approach in which laser irradiation and radiation transport in the hohlraum are solved simultaneously with the symmetry, implosion and burn of the fuel capsule. The Implicit Monte Carlo method has proved to be a valuable tool for the two dimensional radiation transport within the hohlraum, but the impact of statistical noise on the symmetric implosion of the small fuel capsule is difficult to overcome. We present an angular biasing technique in which an increased number of low weight photons are directed at the imploding capsule. For typical parameters this reduces the required computer time for an integrated calculation by a factor of 10. An additional factor of 5 can also be achieved by directing even smaller weight photons at the polar regions of the capsule where small mass zones are most sensitive to statistical noise
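The weight correction that keeps such angular biasing unbiased can be shown in a toy 1-D model; the solid-angle fraction f and biasing probability b below are invented for illustration and are not the paper's parameters:

```python
import random

def capsule_tally(n, f=0.1, b=0.5, seed=0):
    """Toy angular biasing: the capsule subtends a solid-angle fraction
    with analog emission probability f.  We oversample it with
    probability b and multiply the photon weight by the ratio of analog
    to biased densities, so every tally stays unbiased while the
    capsule tally's variance drops."""
    rng = random.Random(seed)
    capsule, total = 0.0, 0.0
    for _ in range(n):
        if rng.random() < b:               # biased: aim at the capsule
            w = f / b                      # p_analog / p_biased
            capsule += w                   # low-weight photon scores here
        else:                              # photon misses the capsule
            w = (1.0 - f) / (1.0 - b)
        total += w
    return capsule / n, total / n          # expectations: (f, 1.0)
```

More low-weight photons reach the capsule, which is precisely how the statistical noise on the small imploding capsule is suppressed without biasing the answer.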
Isotopic depletion with Monte Carlo
International Nuclear Information System (INIS)
Martin, W.R.; Rathkopf, J.A.
1996-06-01
This work considers a method to deplete isotopes during a time- dependent Monte Carlo simulation of an evolving system. The method is based on explicitly combining a conventional estimator for the scalar flux with the analytical solutions to the isotopic depletion equations. There are no auxiliary calculations; the method is an integral part of the Monte Carlo calculation. The method eliminates negative densities and reduces the variance in the estimates for the isotope densities, compared to existing methods. Moreover, existing methods are shown to be special cases of the general method described in this work, as they can be derived by combining a high variance estimator for the scalar flux with a low-order approximation to the analytical solution to the depletion equation
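For a single isotope, combining a flux estimate with the analytic solution of the depletion equation reduces to an exponential update that can never produce a negative density; a minimal sketch (symbols and values illustrative, not from the report):

```python
import math

def deplete_step(n0, sigma_a, flux, dt):
    """Analytic single-isotope depletion over one burnup step:
    dN/dt = -sigma_a * phi * N  =>  N(t + dt) = N(t) * exp(-sigma_a*phi*dt).
    In the scheme described above, phi would come from a Monte Carlo flux
    estimator; the exponential form guarantees N stays positive."""
    return n0 * math.exp(-sigma_a * flux * dt)

def deplete(n0, sigma_a, fluxes, dt):
    """Chain the analytic solution across successive steps, each with its
    own (estimated) flux; returns the density history."""
    history = [n0]
    for phi in fluxes:
        history.append(deplete_step(history[-1], sigma_a, phi, dt))
    return history
```

A naive forward-difference update N - sigma_a*phi*N*dt can go negative for large steps; the analytic exponential is what eliminates the negative densities mentioned in the abstract.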
International Nuclear Information System (INIS)
Zimmerman, G.B.
1997-01-01
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, inflight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials. copyright 1997 American Institute of Physics
Shell model Monte Carlo methods
International Nuclear Information System (INIS)
Koonin, S.E.; Dean, D.J.; Langanke, K.
1997-01-01
We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo (SMMC) methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal and rotational behavior of rare-earth and γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. (orig.)
A contribution Monte Carlo method
International Nuclear Information System (INIS)
Aboughantous, C.H.
1994-01-01
A Contribution Monte Carlo method is developed and successfully applied to a sample deep-penetration shielding problem. The random walk is simulated in most of its parts as in conventional Monte Carlo methods. The probability density functions (pdf's) are expressed in terms of spherical harmonics and are continuous functions in direction cosine and azimuthal angle variables as well as in position coordinates; the energy is discretized in the multigroup approximation. The transport pdf is an unusual exponential kernel strongly dependent on the incident and emergent directions and energies and on the position of the collision site. The method produces the same results obtained with the deterministic method with a very small standard deviation, with as few as 1,000 Contribution particles in both analog and nonabsorption biasing modes and with only a few minutes of CPU time
Shell model Monte Carlo methods
International Nuclear Information System (INIS)
Koonin, S.E.
1996-01-01
We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs
Parallel Monte Carlo reactor neutronics
International Nuclear Information System (INIS)
Blomquist, R.N.; Brown, F.B.
1994-01-01
The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved
Elements of Monte Carlo techniques
International Nuclear Information System (INIS)
Nagarajan, P.S.
2000-01-01
The Monte Carlo method essentially mimics real-world physical processes at the microscopic level. With the incredible increase in computing speeds and ever-decreasing computing costs, the method is in widespread use for practical problems. Topics covered include algorithm-generated sequences known as pseudo-random sequences (prs), probability density functions (pdf), tests for randomness, the extension to multidimensional integration, etc
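Two of the listed elements, a pseudo-random sequence and sampling from a pdf, can be sketched together: a minimal linear congruential generator (the constants are a common textbook choice, not from this paper) feeding inverse-transform sampling of an exponential pdf.

```python
import math

def lcg(seed, a=1664525, c=1013904223, m=2 ** 32):
    """A minimal linear congruential generator: the kind of
    algorithm-generated pseudo-random sequence (prs) mentioned above."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m  # uniform variates in [0, 1)

def sample_exponential(u, lam):
    """Inverse-transform sampling: push a uniform variate u through the
    inverse CDF to draw from the pdf f(x) = lam * exp(-lam * x)."""
    return -math.log(1.0 - u) / lam

gen = lcg(seed=42)
samples = [sample_exponential(next(gen), lam=2.0) for _ in range(50_000)]
mean = sum(samples) / len(samples)  # should land near 1 / lam = 0.5
```

A test for randomness, another element on the list, would then check statistics of the uniform stream (moments, correlations) before trusting it in a real calculation.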
Adaptive Multilevel Monte Carlo Simulation
Hoel, H
2011-08-23
This work generalizes a multilevel forward Euler Monte Carlo method introduced in Michael B. Giles (Oper. Res. 56(3):607-617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. That work proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single level, forward Euler Monte Carlo method. This work introduces an adaptive hierarchy of non-uniform time discretizations, generated by an adaptive algorithm introduced in (Anna Dzougoutov et al. Adaptive Monte Carlo algorithms for stopped diffusion. In Multiscale methods in science and engineering, volume 44 of Lect. Notes Comput. Sci. Eng., pages 59-88. Springer, Berlin, 2005; Kyoung-Sook Moon et al. Stoch. Anal. Appl. 23(3):511-558, 2005; Kyoung-Sook Moon et al. An adaptive algorithm for ordinary, stochastic and partial differential equations. In Recent advances in adaptive computation, volume 383 of Contemp. Math., pages 325-343. Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates stochastic, path-dependent time steps and is based on a posteriori error expansions first developed in (Anders Szepessy et al. Comm. Pure Appl. Math. 54(10):1169-1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL), from O(TOL^-3) using a single level version of the adaptive algorithm to O((TOL^-1 log(TOL))^2).
Geometrical splitting in Monte Carlo
International Nuclear Information System (INIS)
Dubi, A.; Elperin, T.; Dudziak, D.J.
1982-01-01
A statistical model is presented by which a direct statistical approach yielded an analytic expression for the second moment, the variance ratio, and the benefit function in a model of an n surface-splitting Monte Carlo game. In addition to the insight into the dependence of the second moment on the splitting parameters the main importance of the expressions developed lies in their potential to become a basis for in-code optimization of splitting through a general algorithm. Refs
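The weight bookkeeping behind surface splitting and its Russian-roulette counterpart can be sketched as follows (the integer importance ratio is a simplifying assumption for illustration; the paper's analysis covers the general n-surface game):

```python
import random

def cross_surface(weight, importance_ratio, rng):
    """Splitting / Russian roulette at a surface between importance regions.
    Entering a more important region (ratio > 1, assumed integer here):
    split into `ratio` copies, each carrying weight / ratio.  Entering a
    less important region (ratio < 1): survive roulette with probability
    `ratio`, the survivor's weight divided by that probability.  Either
    way the expected total weight is unchanged, keeping tallies unbiased."""
    if importance_ratio >= 1.0:
        n = int(importance_ratio)
        return [weight / n] * n
    if rng.random() < importance_ratio:
        return [weight / importance_ratio]  # survives with boosted weight
    return []                               # killed by roulette
```

The second moment studied in the abstract measures how the choice of these splitting parameters trades the extra tracking cost of the copies against the variance reduction they buy.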
Extending canonical Monte Carlo methods
International Nuclear Information System (INIS)
Velazquez, L; Curilef, S
2010-01-01
In this paper, we discuss the implications of a recently obtained equilibrium fluctuation-dissipation relation for the extension of the available Monte Carlo methods on the basis of the consideration of the Gibbs canonical ensemble to account for the existence of an anomalous regime with negative heat capacities C α with α≈0.2 for the particular case of the 2D ten-state Potts model
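For background, the canonical-ensemble machinery being extended here is Metropolis sampling under the Gibbs weight e^(-βE). A minimal sketch for a small ten-state Potts chain (the 1-D chain and all parameter values are illustrative simplifications; the paper's case is the 2D ten-state Potts model):

```python
import math
import random

def potts_energy(spins):
    """1-D nearest-neighbour Potts energy: -1 for each equal adjacent pair."""
    return -sum(1 for a, b in zip(spins, spins[1:]) if a == b)

def metropolis_sweep(spins, beta, q, rng):
    """One sweep of canonical-ensemble Metropolis sampling: propose a new
    Potts state at a random site and accept with probability
    min(1, exp(-beta * dE))."""
    for _ in range(len(spins)):
        i = rng.randrange(len(spins))
        old = spins[i]
        e_old = potts_energy(spins)
        spins[i] = rng.randrange(q)
        de = potts_energy(spins) - e_old
        if de > 0 and rng.random() >= math.exp(-beta * de):
            spins[i] = old  # reject the move

rng = random.Random(3)
q, n, beta = 10, 20, 5.0
spins = [rng.randrange(q) for _ in range(n)]
for _ in range(500):
    metropolis_sweep(spins, beta, q, rng)
# at beta = 5 the chain is strongly ordered, so the energy sits near -(n - 1)
```

It is exactly this fixed-β acceptance rule that breaks down in regions of negative heat capacity, which is what motivates the extended methods of the paper.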
International Nuclear Information System (INIS)
Kennedy, D.C. II.
1987-01-01
This is an update on the progress of the BREMMUS Monte Carlo simulator, particularly in its current incarnation, BREM5. The present report is intended only as a follow-up to the Mark II/Granlibakken proceedings, and those proceedings should be consulted for a complete description of the capabilities and goals of the BREMMUS program. The new BREM5 program improves on the previous version of BREMMUS, BREM2, in a number of important ways. In BREM2, the internal loop (oblique) corrections were not treated in consistent fashion, a deficiency that led to renormalization scheme-dependence; i.e., physical results, such as cross sections, were dependent on the method used to eliminate infinities from the theory. Of course, this problem cannot be tolerated in a Monte Carlo designed for experimental use. BREM5 incorporates a new way of treating the oblique corrections, as explained in the Granlibakken proceedings, that guarantees renormalization scheme-independence and dramatically simplifies the organization and calculation of radiative corrections. This technique is to be presented in full detail in a forthcoming paper. BREM5 is, at this point, the only Monte Carlo to contain the entire set of one-loop corrections to electroweak four-fermion processes and renormalization scheme-independence. 3 figures
Statistical implications in Monte Carlo depletions - 051
International Nuclear Information System (INIS)
Zhiwen, Xu; Rhodes, J.; Smith, K.
2010-01-01
As a result of steady advances of computer power, continuous-energy Monte Carlo depletion analysis is attracting considerable attention for reactor burnup calculations. The typical Monte Carlo analysis is set up as a combination of a Monte Carlo neutron transport solver and a fuel burnup solver. Note that the burnup solver is a deterministic module. The statistical errors in Monte Carlo solutions are introduced into nuclide number densities and propagated along fuel burnup. This paper works towards an understanding of the statistical implications in Monte Carlo depletions, including both statistical bias and statistical variations in depleted fuel number densities. The deterministic Studsvik lattice physics code, CASMO-5, is modified to model the Monte Carlo depletion. The statistical bias in depleted number densities is found to be negligible compared to its statistical variations, which, in turn, demonstrates the correctness of the Monte Carlo depletion method. Meanwhile, the statistical variation in number densities generally increases with burnup. Several possible ways of reducing the statistical errors are discussed: 1) to increase the number of individual Monte Carlo histories; 2) to increase the number of time steps; 3) to run additional independent Monte Carlo depletion cases. Finally, a new Monte Carlo depletion methodology, called the batch depletion method, is proposed, which consists of performing a set of independent Monte Carlo depletions and is thus capable of estimating the overall statistical errors including both the local statistical error and the propagated statistical error. (authors)
Direct aperture optimization for IMRT using Monte Carlo generated beamlets
International Nuclear Information System (INIS)
Bergman, Alanah M.; Bush, Karl; Milette, Marie-Pierre; Popescu, I. Antoniu; Otto, Karl; Duzenli, Cheryl
2006-01-01
This work introduces an EGSnrc-based Monte Carlo (MC) beamlet dose distribution matrix into a direct aperture optimization (DAO) algorithm for IMRT inverse planning. The technique is referred to as Monte Carlo-direct aperture optimization (MC-DAO). The goal is to assess whether the combination of accurate Monte Carlo tissue inhomogeneity modeling and DAO inverse planning will improve the dose accuracy and treatment efficiency for treatment planning. Several authors have shown that the presence of small fields and/or inhomogeneous materials in IMRT treatment fields can cause dose calculation errors for algorithms that are unable to accurately model electronic disequilibrium. This issue may also affect the IMRT optimization process because the dose calculation algorithm may not properly model difficult geometries such as targets close to low-density regions (lung, air etc.). A clinical linear accelerator head is simulated using BEAMnrc (NRC, Canada). A novel in-house algorithm subdivides the resulting phase space into 2.5×5.0 mm² beamlets. Each beamlet is projected onto a patient-specific phantom. The beamlet dose contribution to each voxel in a structure-of-interest is calculated using DOSXYZnrc. The multileaf collimator (MLC) leaf positions are linked to the location of the beamlet dose distributions. The MLC shapes are optimized using direct aperture optimization (DAO). A final Monte Carlo calculation with MLC modeling is used to compute the final dose distribution. Monte Carlo simulation can generate accurate beamlet dose distributions for traditionally difficult-to-calculate geometries, particularly for small fields crossing regions of tissue inhomogeneity. The introduction of DAO results in an additional improvement by increasing the treatment delivery efficiency. For the examples presented in this paper the reduction in the total number of monitor units to deliver is ∼33% compared to fluence-based optimization methods
Monte-Carlo simulation of heavy-ion collisions
International Nuclear Information System (INIS)
Schenke, Bjoern; Jeon, Sangyong; Gale, Charles
2011-01-01
We present Monte-Carlo simulations for heavy-ion collisions combining PYTHIA and the McGill-AMY formalism to describe the evolution of hard partons in a soft background, modelled using hydrodynamic simulations. MARTINI generates full event configurations in the high-p_T region that take into account thermal QCD and QED effects as well as effects of the evolving medium. This way it is possible to perform detailed quantitative comparisons with experimental observables.
Weighted-delta-tracking for Monte Carlo particle transport
International Nuclear Information System (INIS)
Morgan, L.W.G.; Kotlyar, D.
2015-01-01
Highlights: • This paper presents an alteration to the Monte Carlo Woodcock tracking technique. • The alteration improves computational efficiency within regions of high absorbers. • The rejection technique is replaced by a statistical weighting mechanism. • The modified Woodcock method is shown to be faster than standard Woodcock tracking. • The modified Woodcock method achieves a lower variance, given a specified accuracy. - Abstract: Monte Carlo particle transport (MCPT) codes are incredibly powerful and versatile tools to simulate particle behavior in a multitude of scenarios, such as core/criticality studies, radiation protection, shielding, medicine and fusion research to name just a small subset of applications. However, MCPT codes can be very computationally expensive to run when the model geometry contains large attenuation depths and/or contains many components. This paper proposes a simple modification to the Woodcock tracking method used by some Monte Carlo particle transport codes. The Woodcock method utilizes the rejection method for sampling virtual collisions as a method to remove collision distance sampling at material boundaries. However, it suffers from poor computational efficiency when the sample acceptance rate is low. The proposed method removes rejection sampling from the Woodcock method in favor of a statistical weighting scheme, which improves the computational efficiency of a Monte Carlo particle tracking code. It is shown that the modified Woodcock method is less computationally expensive than standard ray-tracing and rejection-based Woodcock tracking methods and achieves a lower variance, given a specified accuracy
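A minimal sketch of the contrast the highlights describe, for a one-dimensional purely absorbing slab with a spatially varying cross section: standard Woodcock (delta) tracking rejects virtual collisions probabilistically, while the weighted variant carries the survival probability as a statistical weight instead. The cross-section profile and majorant below are illustrative assumptions, not from the paper.

```python
import math
import random

SIGMA_MAJ = 2.0                     # majorant cross section on the slab

def sigma_t(x):
    """Spatially varying total cross section of a purely absorbing toy
    medium; stays below SIGMA_MAJ on [0, 1]."""
    return 0.5 + 1.5 * x

def transmission(n_hist, weighted, seed=0, slab=1.0):
    """Estimate slab transmission exp(-integral of sigma_t).  Flights are
    sampled with the majorant; at each tentative collision the rejection
    scheme either kills the history (real collision) or continues
    (virtual), while the weighted scheme always continues and multiplies
    the weight by the virtual-collision probability."""
    rng = random.Random(seed)
    score = 0.0
    for _ in range(n_hist):
        x, w = 0.0, 1.0
        while True:
            x += -math.log(1.0 - rng.random()) / SIGMA_MAJ
            if x >= slab:
                score += w                    # escaped: tally the weight
                break
            p_real = sigma_t(x) / SIGMA_MAJ
            if weighted:
                w *= 1.0 - p_real             # carry survival as weight
            elif rng.random() < p_real:
                break                         # real absorption: history ends
            # else: virtual collision, continue the flight
    return score / n_hist
```

Both estimators are unbiased for the same transmission probability; the weighted one trades the random rejection for a deterministic weight reduction, which lowers the variance per history.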
Modified Monte Carlo procedure for particle transport problems
International Nuclear Information System (INIS)
Matthes, W.
1978-01-01
The simulation of photon transport in the atmosphere with the Monte Carlo method forms part of the EURASEP-programme. The specifications for the problems posed for a solution were such, that the direct application of the analogue Monte Carlo method was not feasible. For this reason the standard Monte Carlo procedure was modified in the sense that additional properly weighted branchings at each collision and transport process in a photon history were introduced. This modified Monte Carlo procedure leads to a clear and logical separation of the essential parts of a problem and offers a large flexibility for variance reducing techniques. More complex problems, as foreseen in the EURASEP-programme (e.g. clouds in the atmosphere, rough ocean-surface and chlorophyl-distribution in the ocean) can be handled by recoding some subroutines. This collision- and transport-splitting procedure can of course be performed differently in different space- and energy regions. It is applied here only for a homogeneous problem
Monte Carlo Particle Lists: MCPL
DEFF Research Database (Denmark)
Kittelmann, Thomas; Klinkby, Esben Bryndt; Bergbäck Knudsen, Erik
2017-01-01
A binary format with lists of particle state information, for interchanging particles between various Monte Carlo simulation applications, is presented. Portable C code for file manipulation is made available to the scientific community, along with converters and plugins for several popular...... simulation packages. Program summary: Program Title: MCPL. Program Files doi: http://dx.doi.org/10.17632/cby92vsv5g.1 Licensing provisions: CC0 for core MCPL, see LICENSE file for details. Programming language: C and C++ External routines/libraries: Geant4, MCNP, McStas, McXtrace Nature of problem: Saving...
Monte Carlo techniques in radiation therapy
Verhaegen, Frank
2013-01-01
Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book, the first of its kind, addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...
Mean field simulation for Monte Carlo integration
Del Moral, Pierre
2013-01-01
In the last three decades, there has been a dramatic increase in the use of interacting particle methods as a powerful tool in real-world applications of Monte Carlo simulation in computational physics, population biology, computer sciences, and statistical machine learning. Ideally suited to parallel and distributed computation, these advanced particle algorithms include nonlinear interacting jump diffusions; quantum, diffusion, and resampled Monte Carlo methods; Feynman-Kac particle models; genetic and evolutionary algorithms; sequential Monte Carlo methods; adaptive and interacting Markov chain Monte Carlo methodologies.
Monte Carlo simulations of neutron scattering instruments
International Nuclear Information System (INIS)
Åstrand, Per-Olof; Copenhagen Univ.; Lefmann, K.; Nielsen, K.
2001-01-01
A Monte Carlo simulation is an important computational tool used in many areas of science and engineering. The use of Monte Carlo techniques for simulating neutron scattering instruments is discussed. The basic ideas, techniques and approximations are presented. Since the construction of a neutron scattering instrument is very expensive, Monte Carlo software used for the design of instruments has to be validated and tested extensively. The McStas software was designed with these aspects in mind and some of the basic principles of the McStas software will be discussed. Finally, some future prospects are discussed for using Monte Carlo simulations in optimizing neutron scattering experiments. (R.P.)
Monte Carlo surface flux tallies
International Nuclear Information System (INIS)
Favorite, Jeffrey A.
2010-01-01
Particle fluxes on surfaces are difficult to calculate with Monte Carlo codes because the score requires a division by the surface-crossing angle cosine, and grazing angles lead to inaccuracies. We revisit the standard practice of dividing by half of a cosine 'cutoff' for particles whose surface-crossing cosines are below the cutoff. The theory behind this approximation is sound, but the application of the theory to all possible situations does not account for two implicit assumptions: (1) the grazing band must be symmetric about 0, and (2) a single linear expansion for the angular flux must be applied in the entire grazing band. These assumptions are violated in common circumstances; for example, for separate in-going and out-going flux tallies on internal surfaces, and for out-going flux tallies on external surfaces. In some situations, dividing by two-thirds of the cosine cutoff is more appropriate. If users were able to control both the cosine cutoff and the substitute value, they could use these parameters to make accurate surface flux tallies. The procedure is demonstrated in a test problem in which Monte Carlo surface fluxes in cosine bins are converted to angular fluxes and compared with the results of a discrete ordinates calculation.
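The cosine-cutoff practice discussed above can be sketched numerically. Crossing cosines are sampled here with density f(μ) = 2μ on (0, 1], i.e. an isotropic angular flux, for which the exact expected score per crossing is 2, so the half-cutoff substitution can be checked against a known answer; the cutoff value is an illustrative assumption.

```python
import math
import random

def surface_flux_tally(n_cross, cutoff, seed=0):
    """Surface flux estimate from crossing particles.  Each crossing
    scores 1/|mu|; crossings with |mu| below the cutoff instead score
    2/cutoff (division by half the cutoff), which bounds the otherwise
    unbounded grazing-angle scores.  For f(mu) = 2*mu the substitution is
    exactly unbiased: E[score] = 2(1 - c) + (4/c)(c**2/2) = 2."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_cross):
        mu = math.sqrt(rng.random())      # inverse CDF of f(mu) = 2*mu
        total += (1.0 / mu) if mu > cutoff else 2.0 / cutoff
    return total / n_cross
```

The unbiasedness here relies on the angular flux being linear in μ across the grazing band, which is precisely the implicit assumption the abstract says can fail for one-sided (in-going or out-going) tallies.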
On the use of stochastic approximation Monte Carlo for Monte Carlo integration
Liang, Faming
2009-01-01
The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration
Monte Carlo simulation on nuclear energy study. Annual report of Nuclear Code Evaluation Committee
International Nuclear Information System (INIS)
Sakurai, Kiyoshi; Yamamoto, Toshihiro
1999-03-01
In this report, research results discussed in the 1998 fiscal year at the Nuclear Code Evaluation Special Committee of the Nuclear Code Committee are summarised. The present status of Monte Carlo calculations in the high-energy region, investigated and discussed by the Monte Carlo simulation working-group, and the automatic compilation system for MCNP cross sections, developed by the MCNP high temperature library compilation working-group, are described. The 6 papers are indexed individually. (J.P.N.)
Super-Monte Carlo: a combined approach to x-ray beam planning
International Nuclear Information System (INIS)
Keall, P.; Hoban, P.
1996-01-01
A new accurate 3-D radiotherapy dose calculation algorithm, Super-Monte Carlo (SMC), has been developed which combines elements of both superposition/convolution and Monte Carlo methods. Currently used clinical dose calculation algorithms (except those based on the superposition method) can have errors of over 10%, especially where significant density inhomogeneities exist, such as in the head and neck, and lung regions. Errors of this magnitude can cause significant departures in the tumour control probability of the actual treatment. (author)
Deconinck , Jean-François
1984-01-01
The aim of this thesis is to describe and explain the evolution of clay mineral assemblages in the western Alps from the Oxfordian to the Upper Cretaceous, and to compare it with that known from the North Atlantic through DSDP drilling. Geological, sedimentological and geochemical study. The clay assemblages of the Upper Mesozoic (Malm - Cretaceous) of the subalpine domain and the southern Jura are studied by X-ray diffraction, transmission electron microscopy and analyses...
International Nuclear Information System (INIS)
Moore, J.G.
1974-01-01
The Monte Carlo code MONK is a general program written to provide a high degree of flexibility to the user. MONK is distinguished by its detailed representation of nuclear data in point form, i.e., the cross-section is tabulated at specific energies instead of the more usual group representation. The nuclear data are unadjusted in the point form but recently the code has been modified to accept adjusted group data as used in fast and thermal reactor applications. The various geometrical handling capabilities and importance sampling techniques are described. In addition to the nuclear data aspects, the following features are also described: geometrical handling routines, tracking cycles, neutron source and output facilities. 12 references. (U.S.)
Advanced Computational Methods for Monte Carlo Calculations
Energy Technology Data Exchange (ETDEWEB)
Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2018-01-12
This course is intended for graduate students who already have a basic understanding of Monte Carlo methods. It focuses on advanced topics that may be needed for thesis research, for developing new state-of-the-art methods, or for working with modern production Monte Carlo codes.
Nested Sampling with Constrained Hamiltonian Monte Carlo
Betancourt, M. J.
2010-01-01
Nested sampling is a powerful approach to Bayesian inference ultimately limited by the computationally demanding task of sampling from a heavily constrained probability distribution. An effective algorithm in its own right, Hamiltonian Monte Carlo is readily adapted to efficiently sample from any smooth, constrained distribution. Utilizing this constrained Hamiltonian Monte Carlo, I introduce a general implementation of the nested sampling algorithm.
Improvement of correlated sampling Monte Carlo methods for reactivity calculations
International Nuclear Information System (INIS)
Nakagawa, Masayuki; Asaoka, Takumi
1978-01-01
Two correlated Monte Carlo methods, the similar flight path and the identical flight path methods, have been improved to evaluate up to the second-order change of the reactivity perturbation. Secondary fission neutrons produced by neutrons having passed through perturbed regions in both unperturbed and perturbed systems are followed in a way that maintains a strong correlation between secondary neutrons in both systems. These techniques are incorporated into the general purpose Monte Carlo code MORSE, so that the statistical error of the calculated reactivity change can also be estimated. The control rod worths measured in the FCA V-3 assembly are analyzed with the present techniques, which are shown to predict the measured values within the standard deviations. The identical flight path method has proved more useful than the similar flight path method for the analysis of the control rod worth. (auth.)
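The correlated-sampling idea, reusing the same random numbers in the unperturbed and perturbed systems so their difference has small variance, can be shown in miniature with a purely absorbing slab standing in for the reactivity problem; all parameters here are illustrative assumptions, not from the paper.

```python
import math
import random

def transmission_diff(n_hist, sigma, d_sigma, correlated, seed=0):
    """Estimate the change in transmission through a unit slab of purely
    absorbing material when the cross section is perturbed by d_sigma.
    With correlated=True both systems reuse identical random numbers (an
    'identical flight path' in miniature): the two sampled histories then
    differ only when the shared optical path falls between the two cross
    sections, so the variance of the difference collapses."""
    rng1 = random.Random(seed)
    rng2 = random.Random(seed if correlated else seed + 1)
    diff = 0.0
    for _ in range(n_hist):
        # transmitted (1.0) if the sampled flight distance exceeds the slab
        t1 = 1.0 if -math.log(1.0 - rng1.random()) / sigma > 1.0 else 0.0
        t2 = 1.0 if -math.log(1.0 - rng2.random()) / (sigma + d_sigma) > 1.0 else 0.0
        diff += t2 - t1
    return diff / n_hist
```

With independent streams the variance of the difference is the sum of the two per-system variances; with shared streams it shrinks to the (small) probability that the two outcomes disagree.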
Monte Carlo simulation of a mammographic test phantom
International Nuclear Information System (INIS)
Hunt, R. A.; Dance, D. R.; Pachoud, M.; Carlsson, G. A.; Sandborg, M.; Ullman, G.
2005-01-01
A test phantom, including a wide range of mammographic tissue equivalent materials and test details, was imaged on a digital mammographic system. In order to quantify the effect of scatter on the contrast obtained for the test details, calculations of the scatter-to-primary ratio (S/P) have been made using a Monte Carlo simulation of the digital mammographic imaging chain, grid and test phantom. The results show that the S/P values corresponding to the imaging conditions used were in the range 0.084-0.126. Calculated and measured pixel values in different regions of the image were compared as a validation of the model and showed excellent agreement. The results indicate the potential of Monte Carlo methods in the image quality-patient dose process optimisation, especially in the assessment of imaging conditions not available on standard mammographic units. (authors)
Monte Carlo Treatment Planning for Advanced Radiotherapy
DEFF Research Database (Denmark)
Cronholm, Rickard
This Ph.d. project describes the development of a workflow for Monte Carlo Treatment Planning for clinical radiotherapy plans. The workflow may be utilized to perform an independent dose verification of treatment plans. Modern radiotherapy treatment delivery is often conducted by dynamically...... modulating the intensity of the field during the irradiation. The workflow described has the potential to fully model the dynamic delivery, including gantry rotation during irradiation, of modern radiotherapy. Three corner stones of Monte Carlo Treatment Planning are identified: Building, commissioning...... and validation of a Monte Carlo model of a medical linear accelerator (i), converting a CT scan of a patient to a Monte Carlo compliant phantom (ii) and translating the treatment plan parameters (including beam energy, angles of incidence, collimator settings etc) to a Monte Carlo input file (iii). A protocol...
The MC21 Monte Carlo Transport Code
International Nuclear Information System (INIS)
Sutton TM; Donovan TJ; Trumbull TH; Dobreff PS; Caro E; Griesheimer DP; Tyburski LJ; Carpenter DC; Joo H
2007-01-01
MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or ''tool of last resort'' and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities
Monte Carlo simulation in nuclear medicine
International Nuclear Information System (INIS)
Morel, Ch.
2007-01-01
The Monte Carlo method allows for simulating random processes by using series of pseudo-random numbers. It became an important tool in nuclear medicine to assist in the design of new medical imaging devices, optimise their use and analyse their data. Presently, the sophistication of the simulation tools allows the introduction of Monte Carlo predictions in data correction and image reconstruction processes. The ability to simulate time-dependent processes opens up new horizons for Monte Carlo simulation in nuclear medicine. In a near future, these developments will allow imaging and dosimetry issues to be tackled simultaneously, and soon case-by-case Monte Carlo simulations may become part of the nuclear medicine diagnostic process. This paper describes some Monte Carlo method basics and the sampling methods that were developed for it. It gives a referenced list of different simulation software used in nuclear medicine and enumerates some of their present and prospective applications. (author)
Clinical implementation of full Monte Carlo dose calculation in proton beam therapy
International Nuclear Information System (INIS)
Paganetti, Harald; Jiang, Hongyu; Parodi, Katia; Slopsema, Roelf; Engelsman, Martijn
2008-01-01
The goal of this work was to facilitate the clinical use of Monte Carlo proton dose calculation to support routine treatment planning and delivery. The Monte Carlo code Geant4 was used to simulate the treatment head setup, including a time-dependent simulation of modulator wheels (for broad beam modulation) and magnetic field settings (for beam scanning). Any patient-field-specific setup can be modeled according to the treatment control system of the facility. The code was benchmarked against phantom measurements. Using a simulation of the ionization chamber reading in the treatment head allows the Monte Carlo dose to be specified in absolute units (Gy per ionization chamber reading). Next, the capability of reading CT data information was implemented into the Monte Carlo code to model patient anatomy. To allow time-efficient dose calculation, the standard Geant4 tracking algorithm was modified. Finally, a software link of the Monte Carlo dose engine to the patient database and the commercial planning system was established to allow data exchange, thus completing the implementation of the proton Monte Carlo dose calculation engine ('DoC++'). Monte Carlo re-calculated plans are a valuable tool to revisit decisions in the planning process. Identification of clinically significant differences between Monte Carlo and pencil-beam-based dose calculations may also drive improvements of current pencil-beam methods. As an example, four patients (29 fields in total) with tumors in the head and neck regions were analyzed. Differences between the pencil-beam algorithm and Monte Carlo were identified in particular near the end of range, both due to dose degradation and overall differences in range prediction due to bony anatomy in the beam path. Further, the Monte Carlo reports dose-to-tissue as compared to dose-to-water by the planning system. Our implementation is tailored to a specific Monte Carlo code and the treatment planning system XiO (Computerized Medical Systems Inc
Feasibility Study of Core Design with a Monte Carlo Code for APR1400 Initial core
Energy Technology Data Exchange (ETDEWEB)
Kim, Jinsun; Chang, Do Ik; Seong, Kibong [KEPCO NF, Daejeon (Korea, Republic of)
2014-10-15
Monte Carlo calculation is becoming more popular and useful nowadays due to the rapid progress in computing power and parallel calculation techniques. There have been many recent attempts to analyze a commercial core with a Monte Carlo transport code using this enhanced computer capability. In this paper, a Monte Carlo calculation of the APR1400 initial core has been performed and the results are compared with those of a conventional deterministic code to assess the feasibility of core design using a Monte Carlo code. SERPENT, a 3D continuous-energy Monte Carlo reactor physics burnup calculation code, is used for this purpose, and the KARMA-ASTRA code system is used as the deterministic code for comparison. A preliminary investigation of the feasibility of commercial core design with a Monte Carlo code was performed in this study. Simplified core geometry modeling was performed for the reactor core surroundings, and the reactor coolant model is based on a two-region model. The reactivity differences at the HZP ARO condition between the Monte Carlo code and the deterministic code are consistent with each other, and the reactivity difference during the depletion could be reduced by adopting a realistic moderator temperature. The reactivity difference calculated at the HFP, BOC, ARO equilibrium condition was 180 ± 9 pcm, with the axial moderator temperature of a deterministic code. The computing time will be a significant burden at this time for the application of a Monte Carlo code to commercial core design, even with the application of parallel computing, because numerous core simulations are required for an actual loading pattern search. One remedy will be a combination of a Monte Carlo code and a deterministic code to generate the physics data. The comparison of physics parameters with sophisticated moderator temperature modeling and depletion will be performed in a further study.
Importance iteration in MORSE Monte Carlo calculations
International Nuclear Information System (INIS)
Kloosterman, J.L.; Hoogenboom, J.E.
1994-01-01
An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example that shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation
Monte Carlo approaches to light nuclei
International Nuclear Information System (INIS)
Carlson, J.
1990-01-01
Significant progress has been made recently in the application of Monte Carlo methods to the study of light nuclei. We review new Green's function Monte Carlo results for the alpha particle, Variational Monte Carlo studies of ¹⁶O, and methods for low-energy scattering and transitions. Through these calculations, a coherent picture of the structure and electromagnetic properties of light nuclei has arisen. In particular, we examine the effect of the three-nucleon interaction and the importance of exchange currents in a variety of experimentally measured properties, including form factors and capture cross sections. 29 refs., 7 figs
Monte Carlo simulation for soot dynamics
Zhou, Kun
2012-01-01
A new Monte Carlo method termed Comb-like frame Monte Carlo is developed to simulate the soot dynamics. Detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas phase solver Chemkin II to simulate soot formation in a 1-D premixed burner stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with the measurement available in literature. The origin of the bimodal distribution of particle size distribution is revealed with quantitative proof.
Monte Carlo Codes Invited Session
International Nuclear Information System (INIS)
Trama, J.C.; Malvagi, F.; Brown, F.
2013-01-01
This document lists 22 Monte Carlo codes used in radiation transport applications throughout the world. For each code the names of the organization and country and/or place are given. We have the following computer codes. 1) ARCHER, USA, RPI; 2) COG11, USA, LLNL; 3) DIANE, France, CEA/DAM Bruyeres; 4) FLUKA, Italy and CERN, INFN and CERN; 5) GEANT4, International GEANT4 collaboration; 6) KENO and MONACO (SCALE), USA, ORNL; 7) MC21, USA, KAPL and Bettis; 8) MCATK, USA, LANL; 9) MCCARD, South Korea, Seoul National University; 10) MCNP6, USA, LANL; 11) MCU, Russia, Kurchatov Institute; 12) MONK and MCBEND, United Kingdom, AMEC; 13) MORET5, France, IRSN Fontenay-aux-Roses; 14) MVP2, Japan, JAEA; 15) OPENMC, USA, MIT; 16) PENELOPE, Spain, Barcelona University; 17) PHITS, Japan, JAEA; 18) PRIZMA, Russia, VNIITF; 19) RMC, China, Tsinghua University; 20) SERPENT, Finland, VTT; 21) SUPERMONTECARLO, China, CAS INEST FDS Team Hefei; and 22) TRIPOLI-4, France, CEA Saclay
Advanced computers and Monte Carlo
International Nuclear Information System (INIS)
Jordan, T.L.
1979-01-01
High-performance parallelism that is currently available is synchronous in nature. It is manifested in such architectures as the Burroughs ILLIAC-IV, CDC STAR-100, TI ASC, CRI CRAY-1, ICL DAP, and many special-purpose array processors designed for signal processing. This form of parallelism has apparently not been of significant value to many important Monte Carlo calculations. Nevertheless, there is much asynchronous parallelism in many of these calculations. A model of a production code that requires up to 20 hours per problem on a CDC 7600 is studied for suitability on some asynchronous architectures that are on the drawing board. The code is described, and some of its properties and resource requirements are identified and compared with the corresponding properties and resources of some asynchronous multiprocessor architectures. Arguments are made for programmer aids and special syntax to identify and support important asynchronous parallelism. 2 figures, 5 tables
Adaptive Markov Chain Monte Carlo
Jadoon, Khan
2016-08-08
A substantial interpretation of electromagnetic induction (EMI) measurements requires quantifying the optimal model parameters and uncertainty of a nonlinear inverse problem. For this purpose, an adaptive Bayesian Markov chain Monte Carlo (MCMC) algorithm is used to assess multi-orientation and multi-offset EMI measurements in an agricultural field with non-saline and saline soil. In the MCMC simulations, the posterior distribution was computed using Bayes' rule. The electromagnetic forward model, based on the full solution of Maxwell's equations, was used to simulate the apparent electrical conductivity measured with the configurations of the EMI instrument, the CMD mini-Explorer. The model parameters and uncertainty for the three-layered earth model are investigated by using synthetic data. Our results show that in the scenario of non-saline soil, the layer thicknesses are not as well estimated as the layer electrical conductivities, because layer thickness exhibits a low sensitivity to the EMI measurements and is hence difficult to resolve. Application of the proposed MCMC-based inversion to field measurements in a drip irrigation system demonstrates that the model parameters can be estimated better for the saline soil than for the non-saline soil, and provides useful insight about parameter uncertainty for the assessment of the model outputs.
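The record above describes an adaptive MCMC inversion. As a hedged illustration of the general idea only (not the authors' algorithm; a toy one-dimensional Gaussian stands in for the EMI posterior, and the 44% target acceptance rate is the usual 1-D rule of thumb), a Metropolis sampler whose proposal scale adapts with a diminishing step size might look like:

```python
import numpy as np

def log_posterior(x):
    # Toy 1-D Gaussian "posterior" (mean 2, unit variance) standing in
    # for the real EMI inverse problem -- an assumption for illustration.
    return -0.5 * (x - 2.0) ** 2

def adaptive_metropolis(n_steps=20000, burn_in=2000, seed=0):
    rng = np.random.default_rng(seed)
    x, logp = 0.0, log_posterior(0.0)
    scale = 1.0                 # proposal standard deviation, adapted on the fly
    samples = []
    for i in range(n_steps):
        prop = x + scale * rng.normal()
        logp_prop = log_posterior(prop)
        accepted = np.log(rng.uniform()) < logp_prop - logp
        if accepted:
            x, logp = prop, logp_prop
        # Diminishing-step adaptation toward ~44% acceptance
        scale *= np.exp(((1.0 if accepted else 0.0) - 0.44) / (i + 1) ** 0.6)
        if i >= burn_in:
            samples.append(x)
    return np.array(samples)
```

The diminishing adaptation step (decaying as i^-0.6) is what keeps the chain's stationary distribution intact as the proposal scale settles.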
On-the-fly doppler broadening for Monte Carlo codes
International Nuclear Information System (INIS)
Yesilyurt, G.; Martin, W. R.; Brown, F. B.
2009-01-01
A methodology to allow on-the-fly Doppler broadening of neutron cross sections for use in Monte Carlo codes has been developed. The Monte Carlo code only needs to store 0 K cross sections for each isotope, and the method will broaden the 0 K cross sections for any isotope in the library to any temperature in the range 77 K-3200 K. The methodology is based on a combination of Taylor series expansions and asymptotic series expansions. The type of series representation was determined by investigating the temperature dependence of U{sub 3}O{sub 8} resonance cross sections in three regions: near the resonance peaks, mid-resonance, and the resonance wings. The coefficients for these series expansions were determined by a regression over the energy and temperature range of interest. Since the resonance parameters are a function of the neutron energy and target nuclide, the ψ and χ functions in the Adler-Adler multi-level resonance model can be represented by series expansions in temperature only, allowing the least number of terms to approximate the temperature-dependent cross sections within a given accuracy. The comparison of the broadened cross sections using this methodology with the NJOY cross sections was excellent over the entire temperature range (77 K-3200 K) and energy range. A Monte Carlo code was implemented to apply the combined regression model and used to estimate the additional computing cost, which was found to be less than 1%. (authors)
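The workflow described above — regress a series in temperature offline, evaluate it on the fly — can be sketched as follows. This is a toy illustration under stated assumptions: a single synthetic Lorentzian resonance whose width grows with √T stands in for the real ψ/χ Adler-Adler functions, and an ordinary cubic polynomial in √T replaces the paper's combined Taylor/asymptotic expansions:

```python
import numpy as np

def sigma_toy(E, T):
    # Synthetic "Doppler-broadened" resonance: a Lorentzian whose width
    # grows with sqrt(T). Purely illustrative, not real nuclear data.
    E0, gamma0 = 6.67, 0.02          # assumed resonance energy and base width
    gamma = gamma0 + 1e-3 * np.sqrt(T)
    return 1.0 / ((E - E0) ** 2 + gamma ** 2)

# Offline step: for every energy grid point, regress cross sections
# computed at a handful of temperatures onto a cubic in sqrt(T).
E_grid = np.linspace(6.0, 7.3, 200)
T_train = np.array([300.0, 600.0, 900.0, 1200.0, 1800.0, 2400.0, 3200.0])
x_train = np.sqrt(T_train)
coeffs = np.array([np.polyfit(x_train, sigma_toy(E, T_train), 3)
                   for E in E_grid])

def sigma_on_the_fly(i, T):
    # Online step: evaluate the stored series at any temperature,
    # instead of storing one cross-section library per temperature.
    return np.polyval(coeffs[i], np.sqrt(T))
```

The payoff is the same as in the paper: one set of fitted coefficients per energy point replaces a library of temperature-dependent tables.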
11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing
Nuyens, Dirk
2016-01-01
This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.
Quantum Monte Carlo studies in Hamiltonian lattice gauge theory
International Nuclear Information System (INIS)
Hamer, C.J.; Samaras, M.; Bursill, R.J.
2000-01-01
Full text: The application of Monte Carlo methods to the 'Hamiltonian' formulation of lattice gauge theory has been somewhat neglected, and lags at least ten years behind the classical Monte Carlo simulations of Euclidean lattice gauge theory. We have applied a Green's Function Monte Carlo algorithm to lattice Yang-Mills theories in the Hamiltonian formulation, combined with a 'forward-walking' technique to estimate expectation values and correlation functions. In this approach, one represents the wave function in configuration space by a discrete ensemble of random walkers, and application of the time development operator is simulated by a diffusion and branching process. The approach has been used to estimate the ground-state energy and Wilson loop values in the U(1) theory in (2+1)D, and the SU(3) Yang-Mills theory in (3+1)D. The finite-size scaling behaviour has been explored, and agrees with the predictions of effective Lagrangian theory, and weak-coupling expansions. Crude estimates of the string tension are derived, which agree with previous results at intermediate couplings; but more accurate results for larger loops will be required to establish scaling behaviour at weak couplings. A drawback to this method is that it is necessary to introduce a 'trial' or 'guiding wave function' to guide the walkers towards the most probable regions of configuration space, in order to achieve convergence and accuracy. The 'forward-walking' estimates should be independent of this guidance, but in fact for the SU(3) case they turn out to be sensitive to the choice of trial wave function. It would be preferable to use some sort of Metropolis algorithm instead to produce a correct distribution of walkers: this may point in the direction of a Path Integral Monte Carlo approach
Quantum Monte Carlo approaches for correlated systems
Becca, Federico
2017-01-01
Over the past several decades, computational approaches to studying strongly-interacting systems have become increasingly varied and sophisticated. This book provides a comprehensive introduction to state-of-the-art quantum Monte Carlo techniques relevant for applications in correlated systems. It offers a clear overview of variational wave functions and features a detailed presentation of stochastic samplings, including Markov chains and Langevin dynamics, which are developed into a discussion of Monte Carlo methods. The variational technique is described, from foundations to a detailed description of its algorithms. Further topics discussed include optimisation techniques, real-time dynamics and projection methods, including Green's function, reptation and auxiliary-field Monte Carlo, from basic definitions to advanced algorithms for efficient codes, and the book concludes with recent developments on the continuum space. Quantum Monte Carlo Approaches for Correlated Systems provides an extensive reference ...
Monte Carlo simulations for plasma physics
International Nuclear Information System (INIS)
Okamoto, M.; Murakami, S.; Nakajima, N.; Wang, W.X.
2000-07-01
Plasma behaviour is very complicated and its analysis is generally difficult. However, when collisional processes play an important role in the plasma behaviour, the Monte Carlo method is often employed as a useful tool. For example, in neutral beam injection (NBI) heating, electron or ion cyclotron heating, and alpha heating, Coulomb collisions slow down highly energetic particles and pitch-angle scatter them. These processes are often studied by the Monte Carlo technique, and good agreement can be obtained with experimental results. Recently, the Monte Carlo method has been developed to study fast-particle transport associated with heating and the generation of the radial electric field. Further, it is applied to investigating neoclassical transport in plasmas with steep gradients of density and temperature, which is beyond the conventional neoclassical theory. In this report, we briefly summarize the research done by the present authors utilizing the Monte Carlo method. (author)
Frontiers of quantum Monte Carlo workshop: preface
International Nuclear Information System (INIS)
Gubernatis, J.E.
1985-01-01
The introductory remarks, table of contents, and list of attendees are presented from the proceedings of the conference, Frontiers of Quantum Monte Carlo, which appeared in the Journal of Statistical Physics
To Monte Carlo in spite of accidents / Aare Arula
Arula, Aare
2007-01-01
See also Tehnika dlja Vsehh no. 3, pp. 26-27. Karl Siitan and his crew, who started from Tallinn on 26 January 1937 for the Monte Carlo rally, were met by adventures that nearly cost them their lives
Monte Carlo code development in Los Alamos
International Nuclear Information System (INIS)
Carter, L.L.; Cashwell, E.D.; Everett, C.J.; Forest, C.A.; Schrandt, R.G.; Taylor, W.M.; Thompson, W.L.; Turner, G.D.
1974-01-01
The present status of Monte Carlo code development at Los Alamos Scientific Laboratory is discussed. A brief summary is given of several of the most important neutron, photon, and electron transport codes. 17 references. (U.S.)
Experience with the Monte Carlo Method
Energy Technology Data Exchange (ETDEWEB)
Hussein, E M.A. [Department of Mechanical Engineering University of New Brunswick, Fredericton, N.B., (Canada)
2007-06-15
Monte Carlo simulation of radiation transport provides a powerful research and design tool that resembles in many aspects laboratory experiments. Moreover, Monte Carlo simulations can provide an insight not attainable in the laboratory. However, the Monte Carlo method has its limitations, which if not taken into account can result in misleading conclusions. This paper will present the experience of this author, over almost three decades, in the use of the Monte Carlo method for a variety of applications. Examples will be shown on how the method was used to explore new ideas, as a parametric study and design optimization tool, and to analyze experimental data. The consequences of not accounting in detail for detector response and the scattering of radiation by surrounding structures are two of the examples that will be presented to demonstrate the pitfall of condensed.
Experience with the Monte Carlo Method
International Nuclear Information System (INIS)
Hussein, E.M.A.
2007-01-01
Monte Carlo simulation of radiation transport provides a powerful research and design tool that resembles in many aspects laboratory experiments. Moreover, Monte Carlo simulations can provide an insight not attainable in the laboratory. However, the Monte Carlo method has its limitations, which if not taken into account can result in misleading conclusions. This paper will present the experience of this author, over almost three decades, in the use of the Monte Carlo method for a variety of applications. Examples will be shown on how the method was used to explore new ideas, as a parametric study and design optimization tool, and to analyze experimental data. The consequences of not accounting in detail for detector response and the scattering of radiation by surrounding structures are two of the examples that will be presented to demonstrate the pitfall of condensed
A continuation multilevel Monte Carlo algorithm
Collier, Nathan; Haji Ali, Abdul Lateef; Nobile, Fabio; von Schwerin, Erik; Tempone, Raul
2014-01-01
We propose a novel Continuation Multi Level Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending when the required error tolerance is satisfied.
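As a hedged sketch of the multilevel idea underlying CMLMC (this is plain MLMC with fixed sample counts, not the continuation algorithm itself; geometric Brownian motion with assumed parameters serves as the stochastic model, so the exact answer exp(rT) is known):

```python
import numpy as np

def euler_gbm(n_paths, n_steps, rng, S0=1.0, r=0.05, sig=0.2, T=1.0):
    # Simulate fine (n_steps) Euler paths of dS = r S dt + sig S dW, and,
    # when n_steps > 1, a COUPLED coarse path on the same Brownian increments.
    dt = T / n_steps
    dW = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps))
    Sf = np.full(n_paths, S0)
    for k in range(n_steps):
        Sf = Sf * (1 + r * dt + sig * dW[:, k])
    if n_steps == 1:
        return Sf, None
    Sc = np.full(n_paths, S0)
    dWc = dW[:, 0::2] + dW[:, 1::2]       # pairwise-summed increments
    for k in range(n_steps // 2):
        Sc = Sc * (1 + r * 2 * dt + sig * dWc[:, k])
    return Sf, Sc

def mlmc(levels=5, n_paths=40000, seed=1):
    # Telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]:
    # the coupled differences have small variance, so few samples suffice.
    rng = np.random.default_rng(seed)
    est = 0.0
    for l in range(levels):
        Sf, Sc = euler_gbm(n_paths, 2 ** l, rng)
        est += np.mean(Sf) if Sc is None else np.mean(Sf - Sc)
    return est
```

The coupling — coarse and fine paths driven by the same Brownian increments — is what makes the level differences cheap to estimate; the continuation algorithm of the paper additionally chooses the per-level sample counts adaptively across a sequence of tolerances.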
Film of the year - the animated film "Mont Blanc" / Verni Leivak
Leivak, Verni, 1966-
2002-01-01
The Estonian Film Journalists' Union awarded the title of best film of 2001 to Priit Tender's animated film "Mont Blanc" (Eesti Joonisfilm, 2001). Also covered are the film critics' preferences among the films shown in cinemas and on television in 2001
Simulation and the Monte Carlo method
Rubinstein, Reuven Y
2016-01-01
Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov Chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...
Hybrid Monte Carlo methods in computational finance
Leitao Rodriguez, A.
2017-01-01
Monte Carlo methods are highly appreciated and intensively employed in computational finance in the context of financial derivatives valuation or risk management. The method offers valuable advantages like flexibility, easy interpretation and straightforward implementation. Furthermore, the
Bartalini, P.; Kryukov, A.; Selyuzhenkov, Ilya V.; Sherstnev, A.; Vologdin, A.
2004-01-01
We present the Monte-Carlo events Data Base (MCDB) project and its development plans. MCDB facilitates communication between authors of Monte-Carlo generators and experimental users. It also provides convenient book-keeping and easy access to generator-level samples. The first release of MCDB is now operational for the CMS collaboration. In this paper we review the main ideas behind MCDB and discuss future plans to develop this Data Base further within the CERN LCG framework.
Multilevel Monte Carlo in Approximate Bayesian Computation
Jasra, Ajay
2017-02-13
In the following article we consider approximate Bayesian computation (ABC) inference. We introduce a method for numerically approximating ABC posteriors using the multilevel Monte Carlo (MLMC). A sequential Monte Carlo version of the approach is developed and it is shown under some assumptions that for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.
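For background, a minimal single-level ABC rejection sampler — the i.i.d. baseline that the paper's multilevel construction improves upon — can be sketched as follows (the Gaussian model, uniform prior, and tolerance value are assumptions for illustration, not taken from the paper):

```python
import numpy as np

def abc_rejection(y_obs, n_prop=50000, eps=0.05, seed=2):
    # Plain ABC rejection: draw parameters from the prior, simulate data,
    # keep draws whose summary statistic lands within eps of the observed one.
    rng = np.random.default_rng(seed)
    s_obs = y_obs.mean()                       # summary statistic
    theta = rng.uniform(-5.0, 5.0, n_prop)     # uniform prior on the unknown mean
    y_sim = rng.normal(theta[:, None], 1.0, (n_prop, y_obs.size))
    keep = np.abs(y_sim.mean(axis=1) - s_obs) < eps
    return theta[keep]                         # approximate posterior draws

rng = np.random.default_rng(0)
y_obs = rng.normal(1.5, 1.0, 100)              # synthetic observed data
posterior = abc_rejection(y_obs)
```

Shrinking eps tightens the ABC approximation but lowers the acceptance rate; the MLMC approach in the paper exploits a hierarchy of such tolerances to reach a given mean square error at lower cost than sampling at the tightest tolerance alone.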
Monte Carlo method applied to medical physics
International Nuclear Information System (INIS)
Oliveira, C.; Goncalves, I.F.; Chaves, A.; Lopes, M.C.; Teixeira, N.; Matos, B.; Goncalves, I.C.; Ramalho, A.; Salgado, J.
2000-01-01
The main application of the Monte Carlo method in medical physics is dose calculation. This paper shows some results of two dose calculation studies and two other different applications: optimisation of a neutron field for Boron Neutron Capture Therapy and optimisation of a filter for a beam tube for several purposes. The time necessary for Monte Carlo calculations - the main barrier to its intensive utilisation - is being overcome by faster and cheaper computers. (author)
Monte Carlo technique for local perturbations in multiplying systems
International Nuclear Information System (INIS)
Bernnat, W.
1974-01-01
The use of the Monte Carlo method for the calculation of reactivity perturbations in multiplying systems due to changes in geometry or composition requires a correlated sampling technique to make such calculations economical or in the case of very small perturbations even feasible. The technique discussed here is suitable for local perturbations. Very small perturbation regions will be treated by an adjoint mode. The perturbation of the source distribution due to the changed system and its reaction on the reactivity worth or other values of interest is taken into account by a fission matrix method. The formulation of the method and its application are discussed. 10 references. (U.S.)
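The core trick of correlated sampling — evaluating the perturbed and unperturbed systems on the same random histories, so that a small difference is not swamped by independent statistical noise — can be sketched with a toy one-dimensional attenuation integral (an assumed example; the fission-matrix and adjoint machinery of the paper is not reproduced here):

```python
import numpy as np

def estimate_difference(n=100000, eps=0.01, seed=3):
    # Estimate the change in mean "transmission" exp(-Sigma * u), u ~ U(0,1),
    # when the cross section Sigma is perturbed from 1.0 to 1.0 + eps.
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    base = np.exp(-1.0 * u)              # unperturbed system
    pert = np.exp(-(1.0 + eps) * u)      # perturbed system, SAME histories
    corr = np.mean(pert - base)          # correlated (common random numbers)
    u2 = rng.random(n)                   # independent histories, for contrast
    indep = np.mean(np.exp(-(1.0 + eps) * u2)) - np.mean(base)
    return corr, indep
```

The correlated estimator's per-sample spread scales with eps itself, while the independent-runs difference carries the full variance of each run, which is why small perturbations are economical only with the correlated scheme.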
Skin fluorescence model based on the Monte Carlo technique
Churmakov, Dmitry Y.; Meglinski, Igor V.; Piletsky, Sergey A.; Greenhalgh, Douglas A.
2003-10-01
A novel Monte Carlo technique for simulating the spatial fluorescence distribution within human skin is presented. The computational model of skin takes into account the spatial distribution of fluorophores following the collagen-fiber packing, whereas in the epidermis and stratum corneum the distribution of fluorophores is assumed to be homogeneous. The results of the simulation suggest that the distribution of auto-fluorescence is significantly suppressed in the NIR spectral region, while fluorescence of the sensor layer embedded in the epidermis is localized at the adjusted depth. The model is also able to simulate the skin fluorescence spectra.
Monte Carlo simulation of particle-induced bit upsets
Wrobel, Frédéric; Touboul, Antoine; Vaillé, Jean-Roch; Boch, Jérôme; Saigné, Frédéric
2017-09-01
We investigate the issue of radiation-induced failures in electronic devices by developing a Monte Carlo tool called MC-Oracle. It is able to transport the particles in the device, to calculate the energy deposited in the sensitive region of the device, and to calculate the transient current induced by the primary particle and the secondary particles produced during nuclear reactions. We compare our simulation results with experiments on SRAMs irradiated with neutrons, protons and ions. The agreement is very good and shows that it is possible to predict the soft error rate (SER) for a given device in a given environment.
Monte Carlo simulation of particle-induced bit upsets
Directory of Open Access Journals (Sweden)
Wrobel Frédéric
2017-01-01
We investigate the issue of radiation-induced failures in electronic devices by developing a Monte Carlo tool called MC-Oracle. It is able to transport the particles in the device, to calculate the energy deposited in the sensitive region of the device, and to calculate the transient current induced by the primary particle and the secondary particles produced during nuclear reactions. We compare our simulation results with experiments on SRAMs irradiated with neutrons, protons and ions. The agreement is very good and shows that it is possible to predict the soft error rate (SER) for a given device in a given environment.
Monte Carlo study of double exchange interaction in manganese oxide
Energy Technology Data Exchange (ETDEWEB)
Naa, Christian Fredy, E-mail: chris@cphys.fi.itb.ac.id [Physics Department, Faculty of Mathematics and Natural Science, Institut Teknologi Bandung, Jalan Ganesha 10 Bandung (Indonesia); Unité de Dynamique et Structure des Matériaux Moléculaires, Université du Littoral Côte d'Opale, Maison de la Recherche Blaise Pascal, 50 rue Ferdinand Buisson, Calais (France)]; Suprijadi, E-mail: supri@fi.itb.ac.id; Viridi, Sparisoma, E-mail: dudung@fi.itb.ac.id; Djamal, Mitra, E-mail: mitra@fi.itb.ac.id [Physics Department, Faculty of Mathematics and Natural Science, Institut Teknologi Bandung, Jalan Ganesha 10 Bandung (Indonesia)]; Fasquelle, Didier, E-mail: didier.fasquelle@univ-littoral.fr [Unité de Dynamique et Structure des Matériaux Moléculaires, Université du Littoral Côte d'Opale, Maison de la Recherche Blaise Pascal, 50 rue Ferdinand Buisson, Calais (France)]
2015-09-30
In this paper we study the magnetoresistance properties attributed to the double exchange (DE) interaction in manganese oxide by Monte Carlo simulation. We construct a model based on mixed-valence Mn{sup 3+} and Mn{sup 4+} for the general system Re{sub 2/3}Ae{sub 1/3}MnO{sub 3} in two dimensions. The conduction mechanism is based on the probability of e{sub g} electrons hopping from Mn{sup 3+} to Mn{sup 4+}. The dependence of the resistivity on temperature and external magnetic field is presented, and its validity against related experimental results is discussed. We use the resistivity power law to fit our data in the metallic region and basic activated behavior in the insulating region. In the metallic region, we found that our results agree well with the quantum theory of the DE interaction. From general arguments, we found that our simulation agrees qualitatively with experimental results.
Monte Carlo Volcano Seismic Moment Tensors
Waite, G. P.; Brill, K. A.; Lanza, F.
2015-12-01
Inverse modeling of volcano seismic sources can provide insight into the geometry and dynamics of volcanic conduits. But given the logistical challenges of working on an active volcano, seismic networks are typically deficient in spatial and temporal coverage; this potentially leads to large errors in source models. In addition, uncertainties in the centroid location and moment-tensor components, including volumetric components, are difficult to constrain from the linear inversion results, which leads to a poor understanding of the model space. In this study, we employ a nonlinear inversion using a Monte Carlo scheme with the objective of defining robustly resolved elements of model space. The model space is randomized by centroid location and moment tensor eigenvectors. Point sources densely sample the summit area and moment tensors are constrained to a randomly chosen geometry within the inversion; Green's functions for the random moment tensors are all calculated from modeled single forces, making the nonlinear inversion computationally reasonable. We apply this method to very-long-period (VLP) seismic events that accompany minor eruptions at Fuego volcano, Guatemala. The library of single-force Green's functions is computed with a 3D finite-difference modeling algorithm through a homogeneous velocity-density model that includes topography, for a 3D grid of nodes, spaced 40 m apart, within the summit region. The homogeneous velocity and density model is justified by the long wavelengths of the VLP data. The nonlinear inversion reveals well-resolved model features and informs the interpretation through a better understanding of the possible models. This approach can also be used to evaluate possible station geometries in order to optimize networks prior to deployment.
Interface methods for hybrid Monte Carlo-diffusion radiation-transport simulations
International Nuclear Information System (INIS)
Densmore, Jeffery D.
2006-01-01
Discrete diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Monte Carlo simulations in diffusive media. An important aspect of DDMC is the treatment of interfaces between diffusive regions, where DDMC is used, and transport regions, where standard Monte Carlo is employed. Three previously developed methods exist for treating transport-diffusion interfaces: the Marshak interface method, based on the Marshak boundary condition, the asymptotic interface method, based on the asymptotic diffusion-limit boundary condition, and the Nth-collided source technique, a scheme that allows Monte Carlo particles to undergo several collisions in a diffusive region before DDMC is used. Numerical calculations have shown that each of these interface methods gives reasonable results as part of larger radiation-transport simulations. In this paper, we use both analytic and numerical examples to compare the ability of these three interface techniques to treat simpler, transport-diffusion interface problems outside of a more complex radiation-transport calculation. We find that the asymptotic interface method is accurate regardless of the angular distribution of Monte Carlo particles incident on the interface surface. In contrast, the Marshak boundary condition only produces correct solutions if the incident particles are isotropic. We also show that the Nth-collided source technique has the capacity to yield accurate results if spatial cells are optically small and Monte Carlo particles are allowed to undergo many collisions within a diffusive region before DDMC is employed. These requirements make the Nth-collided source technique impractical for realistic radiation-transport calculations
Monte Carlo Numerical Models for Nuclear Logging Applications
Directory of Open Access Journals (Sweden)
Fusheng Li
2012-06-01
Nuclear logging is one of the most important logging services provided by many oil service companies. The main parameters of interest are formation porosity, bulk density, and natural radiation. Other services are also provided using complex nuclear logging tools, such as formation lithology/mineralogy, etc. Some parameters can be measured by using neutron logging tools and some can only be measured by using a gamma ray tool. To understand the response of nuclear logging tools, neutron transport/diffusion theory and photon diffusion theory are needed. Unfortunately, for most cases there are no analytical answers if complex tool geometry is involved. For many years, Monte Carlo numerical models have been used by nuclear scientists in the well logging industry to address these challenges. The models have been widely employed in the optimization of nuclear logging tool design, and the development of interpretation methods for nuclear logs. They have also been used to predict the response of nuclear logging systems for forward simulation problems. In this case, the system parameters (including geometry, materials and nuclear sources, etc.) are pre-defined, and the transport and interactions of nuclear particles (such as neutrons, photons and/or electrons) in the regions of interest are simulated according to detailed nuclear physics theory and their nuclear cross-section data (probability of interacting). Then the deposited energies of particles entering the detectors are recorded and tallied, and the tool responses to such a scenario are generated. A general-purpose code named Monte Carlo N-Particle (MCNP) has been the industry standard for some time. In this paper, we briefly introduce the fundamental principles of Monte Carlo numerical modeling and review the physics of MCNP. Some of the latest developments of Monte Carlo models are also reviewed. A variety of examples are presented to illustrate the uses of Monte Carlo numerical models
Successful vectorization - reactor physics Monte Carlo code
International Nuclear Information System (INIS)
Martin, W.R.
1989-01-01
Most particle transport Monte Carlo codes in use today are based on the ''history-based'' algorithm, wherein one particle history at a time is simulated. Unfortunately, the ''history-based'' approach (present in all Monte Carlo codes until recent years) is inherently scalar and cannot be vectorized. In particular, the history-based algorithm cannot take advantage of vector architectures, which characterize the largest and fastest computers at the current time, vector supercomputers such as the Cray X/MP or IBM 3090/600. However, substantial progress has been made in recent years in developing and implementing a vectorized Monte Carlo algorithm. This algorithm follows portions of many particle histories at the same time and forms the basis for all successful vectorized Monte Carlo codes that are in use today. This paper describes the basic vectorized algorithm along with descriptions of several variations that have been developed by different researchers for specific applications. These applications have been mainly in the areas of neutron transport in nuclear reactor and shielding analysis and photon transport in fusion plasmas. The relative merits of the various approach schemes will be discussed and the present status of known vectorization efforts will be summarized along with available timing results, including results from the successful vectorization of 3-D general geometry, continuous energy Monte Carlo. (orig.)
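A small illustration of the vectorized flavor described above — processing all live particle histories one event at a time as array operations, here in NumPy rather than on a vector supercomputer. The 1-D forward-attenuation slab and the per-collision absorption probability are assumptions chosen so the transmission has a known analytic value, exp(-p_abs·Σt·L):

```python
import numpy as np

def batch_transport(n=200000, sigma_t=1.0, slab=5.0, p_abs=0.3, seed=4):
    # Vectorized flavor: advance ALL live particles one free flight at a
    # time with array operations, instead of one full history at a time.
    rng = np.random.default_rng(seed)
    x = np.zeros(n)                      # depth in a purely forward 1-D slab
    alive = np.ones(n, dtype=bool)
    transmitted = 0
    while alive.any():
        # sample the next collision site for every live particle at once
        x[alive] += rng.exponential(1.0 / sigma_t, alive.sum())
        escaped = alive & (x >= slab)
        transmitted += int(escaped.sum())
        alive &= ~escaped
        # at each collision the particle is absorbed with probability p_abs
        absorbed = rng.random(alive.sum()) < p_abs
        idx = np.flatnonzero(alive)
        alive[idx[absorbed]] = False
    return transmitted / n
```

Each pass of the loop is one "event" applied to the whole surviving batch; the boolean masks play the role of the gather/scatter bookkeeping that real vectorized Monte Carlo codes perform.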
A hybrid transport-diffusion method for Monte Carlo radiative-transfer simulations
International Nuclear Information System (INIS)
Densmore, Jeffery D.; Urbatsch, Todd J.; Evans, Thomas M.; Buksas, Michael W.
2007-01-01
Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Monte Carlo particle-transport simulations in diffusive media. If standard Monte Carlo is used in such media, particle histories will consist of many small steps, resulting in a computationally expensive calculation. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Each discrete step replaces many small Monte Carlo steps, thus increasing the efficiency of the simulation. In addition, given that DDMC is based on a diffusion equation, it should produce accurate solutions if used judiciously. In practice, DDMC is combined with standard Monte Carlo to form a hybrid transport-diffusion method that can accurately simulate problems with both diffusive and non-diffusive regions. In this paper, we extend previously developed DDMC techniques in several ways that improve the accuracy and utility of DDMC for nonlinear, time-dependent, radiative-transfer calculations. The use of DDMC in these types of problems is advantageous since, due to the underlying linearizations, optically thick regions appear to be diffusive. First, we employ a diffusion equation that is discretized in space but is continuous in time. Not only is this methodology theoretically more accurate than temporally discretized DDMC techniques, but it also has the benefit that a particle's time is always known. Thus, there is no ambiguity regarding what time to assign a particle that leaves an optically thick region (where DDMC is used) and begins transporting by standard Monte Carlo in an optically thin region. Also, we treat the interface between optically thick and optically thin regions with an improved method, based on the asymptotic diffusion-limit boundary condition, that can produce accurate results regardless of the angular distribution of the incident Monte Carlo particles. Finally, we develop a technique for estimating radiation momentum deposition during the
A Monte Carlo study of the two-dimensional melting mechanism
Allen, M.P.; Frenkel, D.; Gignac, W.; McTague, J.P.
1983-01-01
We report here a Monte Carlo study of the thermodynamic and structural properties of a two-dimensional system of 2500 particles interacting by a repulsive inverse sixth power potential. Particular effort was made in the melting region, both to identify the defect structures and to ascertain the
One group neutron flux at a point in a cylindrical reactor cell calculated by Monte Carlo
Energy Technology Data Exchange (ETDEWEB)
Kocic, A [Institute of Nuclear Sciences Vinca, Beograd (Serbia and Montenegro)
1974-01-15
Mean values of the neutron flux over material regions and the neutron flux at space points in a cylindrical annular cell (one group model) have been calculated by Monte Carlo. The results are compared with those obtained by an improved collision probability method (author)
International Nuclear Information System (INIS)
Fischer, H.
1988-01-01
Glauconite investigations: the main problem in dating glauconites lies in the identification of authigenic minerals that have not been influenced by post-sedimentary processes. Age determinations on glauconites from three different tectonic units, the Jura mountains, the molasse basin and the Helvetic nappes, yield inconsistent results. K-Ar ''ages'' of glauconites from limestones of the Helvetic nappes that are up to 35% too young can be traced to partial Ar loss caused by sediment lithification and tectonic events. Sr-isotope stratigraphy: multiple analyses of recent samples from the Mediterranean Sea and from the North Atlantic show that the 87Sr/86Sr isotope ratios correspond well. In a stratigraphically ideal section from the Upper marine molasse a resolution of ... 206Pb/238U method: zircons from the Fish Canyon Tuff were measured and yielded ages of 28.49±0.10, 24.46±0.11 and 28.46±0.13 Ma. These values correspond well with the published mean zircon and apatite fission-track age of 28.4±0.7 Ma. Thus, the U-Pb method seems suitable for dating young volcanic minerals. However, the published mean value (''solid state age'') of Naeser et al. (1981) is higher than the published mean value (''gas age'') of 27.2±0.7 Ma based on biotite, sanidine, hornblende and plagioclase. (author) figs., tabs., refs
Monte Carlo strategies in scientific computing
Liu, Jun S
2008-01-01
This paperback edition is a reprint of the 2001 Springer edition. This book provides a self-contained and up-to-date treatment of the Monte Carlo method and develops a common framework under which various Monte Carlo techniques can be "standardized" and compared. Given the interdisciplinary nature of the topics and a moderate prerequisite for the reader, this book should be of interest to a broad audience of quantitative researchers such as computational biologists, computer scientists, econometricians, engineers, probabilists, and statisticians. It can also be used as the textbook for a graduate-level course on Monte Carlo methods. Many problems discussed in the later chapters can be potential thesis topics for master's or PhD students in statistics or computer science departments. Jun Liu is Professor of Statistics at Harvard University, with a courtesy Professor appointment at Harvard Biostatistics Department. Professor Liu was the recipient of the 2002 COPSS Presidents' Award, the most prestigious one for sta...
Random Numbers and Monte Carlo Methods
Scherer, Philipp O. J.
Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages Monte Carlo methods are very useful which sample the integration volume at randomly chosen points. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with given probability distribution which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
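As a minimal illustration of the Metropolis algorithm mentioned above — applied here to a single particle in a harmonic potential rather than to a many-particle system — the following sketch estimates the thermal average of x² (the potential, temperature and step size are illustrative assumptions):

```python
import math
import random

random.seed(2)

def energy(x):
    return 0.5 * x * x            # harmonic potential, in units of k_B*T

x, step = 0.0, 1.0                # current state and trial step half-width
burn_in, n_steps = 20000, 200000
samples = []
for i in range(n_steps):
    x_trial = x + random.uniform(-step, step)
    dE = energy(x_trial) - energy(x)
    # Metropolis rule: accept downhill moves always, uphill with prob exp(-dE)
    if dE <= 0.0 or random.random() < math.exp(-dE):
        x = x_trial
    if i >= burn_in:              # discard equilibration samples
        samples.append(x * x)

x2_avg = sum(samples) / len(samples)  # thermal average <x^2>; exact value is 1
```

Because the chain samples the Boltzmann distribution directly, no normalizing constant is ever computed — the same property that makes the method useful for high-dimensional many-body integrals.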
Off-diagonal expansion quantum Monte Carlo.
Albash, Tameem; Wagenbreth, Gene; Hen, Itay
2017-12-01
We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.
Reflections on early Monte Carlo calculations
International Nuclear Information System (INIS)
Spanier, J.
1992-01-01
Monte Carlo methods for solving various particle transport problems developed in parallel with the evolution of increasingly sophisticated computer programs implementing diffusion theory and low-order moments calculations. In these early years, Monte Carlo calculations and high-order approximations to the transport equation were seen as too expensive to use routinely for nuclear design but served as invaluable aids and supplements to design with less expensive tools. The earliest Monte Carlo programs were quite literal; i.e., neutron and other particle random walk histories were simulated by sampling from the probability laws inherent in the physical system without distortion. Use of such analogue sampling schemes resulted in a good deal of time being spent in examining the possibility of lowering the statistical uncertainties in the sample estimates by replacing simple, and intuitively obvious, random variables by those with identical means but lower variances
Monte Carlo simulation of Markov unreliability models
International Nuclear Information System (INIS)
Lewis, E.E.; Boehm, F.
1984-01-01
A Monte Carlo method is formulated for the evaluation of the unreliability of complex systems with known component failure and repair rates. The formulation is in terms of a Markov process, allowing dependences between components to be modeled and computational efficiencies to be achieved in the Monte Carlo simulation. Two variance reduction techniques, forced transition and failure biasing, are employed to increase the computational efficiency of the random walk procedure. For an example problem these result in improved computational efficiency by more than three orders of magnitude over analog Monte Carlo. The method is generalized to treat problems with distributed failure and repair rate data, and a batching technique is introduced and shown to result in substantial increases in computational efficiency for an example problem. A method for separating the variance due to the data uncertainty from that due to the finite number of random walks is presented. (orig.)
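A toy version of the failure-biasing idea (not the authors' Markov formulation: a single non-repairable component with assumed rates) shows how sampling failure times from a biased distribution and carrying likelihood-ratio weights reduces variance for rare failures:

```python
import math
import random

random.seed(3)

lam, T = 1.0e-3, 10.0              # assumed true failure rate (1/h), mission time (h)
exact = 1.0 - math.exp(-lam * T)   # analytic unreliability, about 1e-2

n = 50000

# Analog game: sample the true failure time; score 1 if it fails within [0, T]
analog = [1.0 if random.expovariate(lam) < T else 0.0 for _ in range(n)]

# Failure biasing: sample failure times from a faster (biased) rate and
# carry the likelihood ratio as a weight so the estimate stays unbiased
lam_b = 0.2                        # biased rate, a tunable choice
biased = []
for _ in range(n):
    t = random.expovariate(lam_b)
    weight = (lam * math.exp(-lam * t)) / (lam_b * math.exp(-lam_b * t))
    biased.append(weight if t < T else 0.0)

est_analog = sum(analog) / n
est_biased = sum(biased) / n
var_analog = sum((s - est_analog) ** 2 for s in analog) / n
var_biased = sum((s - est_biased) ** 2 for s in biased) / n
```

Both estimators converge to the same unreliability, but the weighted game scores a (small) contribution on nearly every history instead of a rare 0/1 outcome, which is where the orders-of-magnitude efficiency gains come from.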
Shell model the Monte Carlo way
International Nuclear Information System (INIS)
Ormand, W.E.
1995-01-01
The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined
SPQR: a Monte Carlo reactor kinetics code
International Nuclear Information System (INIS)
Cramer, S.N.; Dodds, H.L.
1980-02-01
The SPQR Monte Carlo code has been developed to analyze fast reactor core accident problems where conventional methods are considered inadequate. The code is based on the adiabatic approximation of the quasi-static method. This initial version contains no automatic material motion or feedback. An existing Monte Carlo code is used to calculate the shape functions and the integral quantities needed in the kinetics module. Several sample problems have been devised and analyzed. Due to the large statistical uncertainty associated with the calculation of reactivity in accident simulations, the results, especially at later times, differ greatly from deterministic methods. It was also found that in large uncoupled systems, the Monte Carlo method has difficulty in handling asymmetric perturbations
Regionalism, Regionalization and Regional Development
Directory of Open Access Journals (Sweden)
Liviu C. Andrei
2016-03-01
Full Text Available Sustainable development is a concept that, in EU practice, brings together other concepts such as regionalism, regionalization and the associated policies, including structural policies. The text below, dedicated to integration concepts, will on the other hand confine itself to regionalization, an aspect typical of Europe and of the EU. Two further aspects strengthen this field of ideas: the region(al)-regionalism-(regional) development triplet has both its own history and a precise individual outline of its terms.
Current and future applications of Monte Carlo
International Nuclear Information System (INIS)
Zaidi, H.
2003-01-01
Full text: The use of radionuclides in medicine has a long history and encompasses a large area of applications including diagnosis and radiation treatment of cancer patients using either external or radionuclide radiotherapy. The 'Monte Carlo method' describes a very broad area of science, in which many processes, physical systems, and phenomena are simulated by statistical methods employing random numbers. The general idea of Monte Carlo analysis is to create a model which is as similar as possible to the real physical system of interest, and to create interactions within that system based on known probabilities of occurrence, with random sampling of the probability density functions (pdfs). As the number of individual events (called 'histories') is increased, the quality of the reported average behavior of the system improves, meaning that the statistical uncertainty decreases. The use of the Monte Carlo method to simulate radiation transport has become the most accurate means of predicting absorbed dose distributions and other quantities of interest in the radiation treatment of cancer patients using either external or radionuclide radiotherapy. The same trend has occurred for the estimation of the absorbed dose in diagnostic procedures using radionuclides, as well as the assessment of image quality and quantitative accuracy of radionuclide imaging. As a consequence of this generalized use, many questions are being raised, primarily about the need and potential of Monte Carlo techniques, but also about how accurate they really are, what it would take to apply them clinically, and how to make them widely available to the nuclear medicine community at large. Many of these questions will be answered when Monte Carlo techniques are implemented and used for more routine calculations and for in-depth investigations. In this paper, the conceptual role of the Monte Carlo method is briefly introduced and followed by a survey of its different applications in diagnostic and therapeutic
Monte Carlo simulation applied to alpha spectrometry
International Nuclear Information System (INIS)
Baccouche, S.; Gharbi, F.; Trabelsi, A.
2007-01-01
Alpha particle spectrometry is a widely-used analytical method, in particular when we deal with pure alpha emitting radionuclides. Monte Carlo simulation is an adequate tool to investigate the influence of various phenomena on this analytical method. We performed an investigation of those phenomena using the simulation code GEANT of CERN. The results concerning the geometrical detection efficiency in different measurement geometries agree with analytical calculations. This work confirms that Monte Carlo simulation of solid angle of detection is a very useful tool to determine with very good accuracy the detection efficiency.
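The geometrical detection efficiency mentioned above can be checked against an analytic solid angle in a simple hypothetical geometry — an on-axis point source facing a disk detector (the radius and distance are assumed values; GEANT is not involved in this sketch):

```python
import math
import random

random.seed(4)

R, d = 2.0, 5.0                        # assumed detector radius and distance, cm
cos_max = d / math.sqrt(d * d + R * R)
eff_exact = 0.5 * (1.0 - cos_max)      # Omega / 4pi for an on-axis point source

n, hits = 100000, 0
for _ in range(n):
    mu = random.uniform(-1.0, 1.0)     # isotropic: cos(theta) uniform on [-1, 1]
    if mu <= 0.0:
        continue                       # emitted away from the detector plane
    sin_t = math.sqrt(1.0 - mu * mu)
    r_plane = d * sin_t / mu           # radius where the ray crosses the plane z = d
    # azimuth is irrelevant by symmetry, so only the radial test is needed
    if r_plane <= R:
        hits += 1

eff_mc = hits / n
```

The Monte Carlo estimate converges to the analytic fraction, which is the kind of agreement the abstract reports between simulated and calculated geometrical efficiencies.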
Simplified monte carlo simulation for Beijing spectrometer
International Nuclear Information System (INIS)
Wang Taijie; Wang Shuqin; Yan Wuguang; Huang Yinzhi; Huang Deqiang; Lang Pengfei
1986-01-01
The Monte Carlo method, based on functionizing the performance of the detectors and transforming the values of kinematical variables into ''measured'' ones by means of smearing, has been used to write BESMC, a FORTRAN program for Monte Carlo simulation of the performance of the Beijing Spectrometer (BES). It can be used to investigate the multiplicity, the particle types, and the four-momentum distributions of the final states of electron-positron collisions, as well as the response of the BES to these final states. It thus provides a means to examine whether the overall design of the BES is reasonable and to decide the physics topics of the BES
Burnup calculations using Monte Carlo method
International Nuclear Information System (INIS)
Ghosh, Biplab; Degweker, S.B.
2009-01-01
In recent years, interest in burnup calculations using Monte Carlo methods has gained momentum. Previous burnup codes have used multigroup transport theory based calculations followed by diffusion theory based core calculations for the neutronic portion of the codes. The transport theory methods invariably make approximations with regard to the treatment of the energy and angle variables involved in scattering, besides approximations related to geometry simplification. Cell homogenisation to produce diffusion theory parameters adds to these approximations. Moreover, while diffusion theory works for most reactors, it does not produce accurate results in systems that have strong gradients, strong absorbers or large voids. Also, diffusion theory codes are geometry limited (rectangular, hexagonal, cylindrical, and spherical coordinates). Monte Carlo methods are ideal for solving very heterogeneous reactors and/or lattices/assemblies in which considerable burnable poisons are used. The key feature of this approach is that Monte Carlo methods permit essentially 'exact' modeling of all geometrical detail, without resort to energy and spatial homogenization of neutron cross sections. The Monte Carlo method would also be better for Accelerator Driven Systems (ADS), which could have strong gradients due to the external source and a sub-critical assembly. To meet the demand for an accurate burnup code, we have developed a Monte Carlo burnup calculation code system in which a Monte Carlo neutron transport code is coupled with a versatile code (McBurn) for calculating the buildup and decay of nuclides in nuclear materials. McBurn was developed from scratch by the authors. In this article we discuss our effort in developing the continuous energy Monte Carlo burnup code, McBurn. McBurn is intended for entire reactor cores as well as for unit cells and assemblies. Generally, McBurn can do burnup of any geometrical system which can be handled by the underlying Monte Carlo transport code
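The depletion half of such a transport-depletion coupling can be sketched in its simplest possible form — a single nuclide burning out under a constant one-group flux, stepped numerically and checked against the exponential solution. All values below are illustrative assumptions, and McBurn's actual chains and solvers are of course far richer:

```python
import math

# One-nuclide burnout under a constant one-group flux: dN/dt = -sigma * phi * N
sigma = 1.0e-24            # assumed microscopic absorption cross section, cm^2
phi = 1.0e14               # assumed one-group flux, n/cm^2/s
N0, t_end = 1.0e22, 5.0e8  # initial number density (1/cm^3) and burnup time (s)

steps = 10000
dt = t_end / steps
N = N0
for _ in range(steps):
    N -= sigma * phi * N * dt          # explicit Euler depletion sub-step

N_exact = N0 * math.exp(-sigma * phi * t_end)   # analytic solution
```

A real burnup code alternates such depletion steps with Monte Carlo transport solves that update the flux and reaction rates, and handles full buildup/decay chains rather than a single loss term.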
Improvements for Monte Carlo burnup calculation
Energy Technology Data Exchange (ETDEWEB)
Shenglong, Q.; Dong, Y.; Danrong, S.; Wei, L., E-mail: qiangshenglong@tsinghua.org.cn, E-mail: d.yao@npic.ac.cn, E-mail: songdr@npic.ac.cn, E-mail: luwei@npic.ac.cn [Nuclear Power Inst. of China, Cheng Du, Si Chuan (China)
2015-07-01
Monte Carlo burnup calculation is a development trend in reactor physics, and much work remains to be done for engineering applications. Based on the Monte Carlo burnup code MOI, non-fuel burnup calculation methods and critical search suggestions are presented in this paper. For non-fuel burnup, a mixed burnup mode will improve the accuracy and efficiency of the burnup calculation. For the critical search of control rod position, a new method called ABN, based on the ABA method used by MC21, is proposed for the first time in this paper. (author)
A keff calculation method by Monte Carlo
International Nuclear Information System (INIS)
Shen, H; Wang, K.
2008-01-01
The effective multiplication factor (keff) is defined as the ratio between the numbers of neutrons in successive generations, a definition adopted by most Monte Carlo codes (e.g. MCNP). Alternatively, it can be thought of as the ratio of the neutron generation rate to the sum of the leakage rate and the absorption rate, where the latter should exclude the effect of neutron reactions such as (n, 2n) and (n, 3n). This article discusses the Monte Carlo method for keff calculation based on the second definition. A new code has been developed and the results are presented. (author)
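The first (generation-ratio) definition can be sketched for a hypothetical infinite homogeneous medium in which every source neutron is eventually absorbed; the fission probability and yield below are assumed numbers, not values from the article:

```python
import random

random.seed(6)

# Hypothetical infinite medium: each absorbed neutron causes fission with
# probability p_fission, releasing on average nu new neutrons.
p_fission, nu = 0.4, 2.43
k_exact = p_fission * nu          # expected keff = 0.972 for this toy model

n_per_gen = 20000                 # source neutrons started in each generation
ratios = []
for generation in range(25):
    produced = 0
    for _ in range(n_per_gen):
        if random.random() < p_fission:
            # sample an integer fission yield with mean nu
            produced += int(nu) + (1 if random.random() < nu - int(nu) else 0)
    ratios.append(produced / n_per_gen)   # next generation / this generation

k_eff = sum(ratios) / len(ratios)
```

Production codes additionally track leakage and multiplying reactions such as (n, 2n), which is exactly where the two definitions discussed in the abstract diverge.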
Monte Carlo electron/photon transport
International Nuclear Information System (INIS)
Mack, J.M.; Morel, J.E.; Hughes, H.G.
1985-01-01
A review of nonplasma coupled electron/photon transport using the Monte Carlo method is presented. Remarks are mainly restricted to linearized formalisms at electron energies from 1 keV to 1000 MeV. Applications involving pulse-height estimation, transport in external magnetic fields, and optical Cerenkov production are discussed to underscore the importance of this branch of computational physics. Advances in electron multigroup cross-section generation are reported, and their impact on future code development is assessed. Progress toward the transformation of MCNP into a generalized neutral/charged-particle Monte Carlo code is described. 48 refs
Monte Carlo simulation of neutron scattering instruments
International Nuclear Information System (INIS)
Seeger, P.A.
1995-01-01
A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width
Monte Carlo applications to radiation shielding problems
International Nuclear Information System (INIS)
Subbaiah, K.V.
2009-01-01
Monte Carlo methods are a class of computational algorithms that rely on repeated random sampling of physical and mathematical systems to compute their results. The basic concepts of MC are nevertheless simple and straightforward and can be learned by using a personal computer. Monte Carlo methods require large amounts of random numbers, and it was their use that spurred the development of pseudorandom number generators, which were far quicker to use than the tables of random numbers previously used for statistical sampling. In Monte Carlo simulation of radiation transport, the history (track) of a particle is viewed as a random sequence of free flights that end with an interaction event where the particle changes its direction of movement, loses energy and, occasionally, produces secondary particles. The Monte Carlo simulation of a given experimental arrangement (e.g., an electron beam coming from an accelerator and impinging on a water phantom) consists of the numerical generation of random histories. To simulate these histories we need an interaction model, i.e., a set of differential cross sections (DCS) for the relevant interaction mechanisms. The DCSs determine the probability distribution functions (pdf) of the random variables that characterize a track: 1) the free path between successive interaction events, 2) the type of interaction taking place, and 3) the energy loss and angular deflection in a particular event (and the initial state of emitted secondary particles, if any). Once these pdfs are known, random histories can be generated by using appropriate sampling methods. If the number of generated histories is large enough, quantitative information on the transport process may be obtained by simply averaging over the simulated histories. The Monte Carlo method yields the same information as the solution of the Boltzmann transport equation, with the same interaction model, but is easier to implement. In particular, the simulation of radiation
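Items 1) and 2) of the track-defining random variables can be sampled directly, as in this minimal one-speed sketch (the cross sections are assumed values; the pdfs are the standard exponential free-path and discrete interaction-type distributions):

```python
import math
import random

random.seed(7)

sigma_s, sigma_a = 0.6, 0.4        # assumed macroscopic cross sections, 1/cm
sigma_t = sigma_s + sigma_a        # total cross section

n = 100000
path_sum, absorbed = 0.0, 0
for _ in range(n):
    # 1) free path between interactions: pdf sigma_t * exp(-sigma_t * s)
    s = -math.log(1.0 - random.random()) / sigma_t
    path_sum += s
    # 2) type of interaction: chosen with probability sigma_x / sigma_t
    if random.random() < sigma_a / sigma_t:
        absorbed += 1

mean_free_path = path_sum / n      # should approach 1 / sigma_t = 1.0 cm
absorb_frac = absorbed / n         # should approach sigma_a / sigma_t = 0.4
```

Item 3), the energy loss and angular deflection, would be sampled from the differential cross sections of the chosen interaction in the same inversion or rejection style.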
Simulation of transport equations with Monte Carlo
International Nuclear Information System (INIS)
Matthes, W.
1975-09-01
The main purpose of the report is to explain the relation between the transport equation and the Monte Carlo game used for its solution. The introduction of artificial particles carrying a weight provides one with high flexibility in constructing many different games for the solution of the same equation. This flexibility opens a way to construct a Monte Carlo game for the solution of the adjoint transport equation. Emphasis is laid mostly on giving a clear understanding of what to do and not on the details of how to do a specific game
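The flexibility gained from weight-carrying particles can be illustrated by two different games that estimate the same quantity — here the uncollided transmission through a purely absorbing slab, estimated once by an analog game and once by a deterministic-weight (expected-value) game (the cross section and thickness are assumed values):

```python
import math
import random

random.seed(8)

sigma, L = 0.5, 4.0                 # assumed total cross section (1/cm), slab thickness (cm)
T_exact = math.exp(-sigma * L)      # analytic uncollided transmission, ~0.135

n = 100000

# Game 1: analog. Each particle either crosses the slab or it does not.
analog = [1.0 if -math.log(1.0 - random.random()) / sigma > L else 0.0
          for _ in range(n)]

# Game 2: weighted. Every particle "survives" but carries the survival
# probability as a weight -- same expected score, far lower variance.
weighted = [math.exp(-sigma * L) for _ in range(n)]

t_analog = sum(analog) / n
t_weighted = sum(weighted) / n      # a zero-variance game for this tally
```

Both games solve the same transport problem for this tally; choosing among such games — including ones for the adjoint equation — is precisely the freedom the report describes.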
Monte Carlo dose distributions for radiosurgery
International Nuclear Information System (INIS)
Perucha, M.; Leal, A.; Rincon, M.; Carrasco, E.
2001-01-01
The precision of radiosurgery treatment planning systems is limited by the approximations of their algorithms and by their dosimetric input data. This fact is especially important in small fields. The Monte Carlo method, however, is an accurate alternative, as it considers every aspect of particle transport. In this work an acoustic neurinoma is studied by comparing the dose distributions of a planning system and of Monte Carlo. Relative shifts have been measured and, furthermore, dose-volume histograms have been calculated for the target and adjacent organs at risk. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Benmosbah, M. [Laboratoire de Chimie Physique et Rayonnement Alain Chambaudet, UMR CEA E4, Universite de Franche-Comte, 16 route de Gray, 25030 Besancon Cedex (France); Groetz, J.E. [Laboratoire de Chimie Physique et Rayonnement Alain Chambaudet, UMR CEA E4, Universite de Franche-Comte, 16 route de Gray, 25030 Besancon Cedex (France)], E-mail: jegroetz@univ-fcomte.fr; Crovisier, P. [Service de Protection contre les Rayonnements, CEA Valduc, 21120 Is/Tille (France); Asselineau, B. [Laboratoire de Metrologie et de Dosimetrie des Neutrons, IRSN, Cadarache BP3, 13115 St Paul-lez-Durance (France); Truffert, H.; Cadiou, A. [AREVA NC, Etablissement de la Hague, DQSSE/PR/E/D, 50444 Beaumont-Hague Cedex (France)
2008-08-11
Proton recoil spectra were calculated for various spherical proportional counters using Monte Carlo simulation combined with the finite element method. Electric field lines and strength were calculated by defining an appropriate mesh and solving the Laplace equation with the associated boundary conditions, taking into account the geometry of every counter. Thus, different regions were defined in the counter with various coefficients for the energy deposition in the Monte Carlo transport code MCNPX. Results from the calculations are in good agreement with measurements for three different gas pressures at various neutron energies.
Fast sequential Monte Carlo methods for counting and optimization
Rubinstein, Reuven Y; Vaisman, Radislav
2013-01-01
A comprehensive account of the theory and application of Monte Carlo methods. Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the
Specialized Monte Carlo codes versus general-purpose Monte Carlo codes
International Nuclear Information System (INIS)
Moskvin, Vadim; DesRosiers, Colleen; Papiez, Lech; Lu, Xiaoyi
2002-01-01
The possibilities of Monte Carlo modeling for dose calculations and treatment optimization are quite limited in radiation oncology applications. The main reason is that the Monte Carlo technique for dose calculations is time consuming, while treatment planning may require hundreds of possible cases of dose simulations to be evaluated for dose optimization. The second reason is that the general-purpose codes widely used in practice require an experienced user to customize them for calculations. This paper discusses a concept of Monte Carlo code design that can avoid the main problems that are preventing widespread use of this simulation technique in medical physics. (authors)
On the use of stochastic approximation Monte Carlo for Monte Carlo integration
Liang, Faming
2009-03-01
The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration via a dynamically weighted estimator by calling some results from the literature of nonhomogeneous Markov chains. Our numerical results indicate that SAMC can yield significant savings over conventional Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, for the problems for which the energy landscape is rugged. © 2008 Elsevier B.V. All rights reserved.
International Nuclear Information System (INIS)
Densmore, Jeffery D.; Larsen, Edward W.
2001-01-01
Recently, it has been shown that the figure of merit (FOM) of Monte Carlo source-detector problems can be enhanced by using a variational rather than a direct functional to estimate the detector response. The direct functional, which is traditionally employed in Monte Carlo simulations, requires an estimate of the solution of the forward problem within the detector region. The variational functional is theoretically more accurate than the direct functional, but it requires estimates of the solutions of the forward and adjoint source-detector problems over the entire phase-space of the problem. In recent work, we have performed Monte Carlo simulations using the variational functional by (a) approximating the adjoint solution deterministically and representing this solution as a function in phase-space and (b) estimating the forward solution using Monte Carlo. We have called this general procedure variational variance reduction (VVR). The VVR method is more computationally expensive per history than traditional Monte Carlo because extra information must be tallied and processed. However, the variational functional yields a more accurate estimate of the detector response. Our simulations have shown that the VVR reduction in variance usually outweighs the increase in cost, resulting in an increased FOM. In recent work on source-detector problems, we have calculated the adjoint solution deterministically and represented this solution as a linear-in-angle, histogram-in-space function. This procedure has several advantages over previous implementations: (a) it requires much less adjoint information to be stored and (b) it is highly efficient for diffusive problems, due to the accurate linear-in-angle representation of the adjoint solution. (Traditional variance-reduction methods perform poorly for diffusive problems.) Here, we extend this VVR method to Monte Carlo criticality calculations, which are often diffusive and difficult for traditional variance-reduction methods
Parallel processing Monte Carlo radiation transport codes
International Nuclear Information System (INIS)
McKinney, G.W.
1994-01-01
Issues related to distributed-memory multiprocessing as applied to Monte Carlo radiation transport are discussed. Measurements of communication overhead are presented for the radiation transport code MCNP which employs the communication software package PVM, and average efficiency curves are provided for a homogeneous virtual machine
Monte Carlo determination of heteroepitaxial misfit structures
DEFF Research Database (Denmark)
Baker, J.; Lindgård, Per-Anker
1996-01-01
We use Monte Carlo simulations to determine the structure of KBr overlayers on a NaCl(001) substrate, a system with large (17%) heteroepitaxial misfit. The equilibrium relaxation structure is determined for films of 2-6 ML, for which extensive helium-atom scattering data exist for comparison...
The Monte Carlo applied for calculation dose
International Nuclear Information System (INIS)
Peixoto, J.E.
1988-01-01
The Monte Carlo method is shown for the calculation of absorbed dose. The trajectory of the photon is traced by simulating successive interactions between the photon and the material that constitutes the human body simulator. The energy deposited per photon in each interaction, for each organ or tissue of the simulator, is also calculated. (C.G.C.) [pt
Monte Carlo code for neutron radiography
International Nuclear Information System (INIS)
Milczarek, Jacek J.; Trzcinski, Andrzej; El-Ghany El Abd, Abd; Czachor, Andrzej
2005-01-01
The concise Monte Carlo code, MSX, for simulation of neutron radiography images of non-uniform objects is presented. The possibility of modeling the images of objects with continuous spatial distribution of specific isotopes is included. The code can be used for assessment of the scattered neutron component in neutron radiograms
Monte Carlo method in neutron activation analysis
International Nuclear Information System (INIS)
Majerle, M.; Krasa, A.; Svoboda, O.; Wagner, V.; Adam, J.; Peetermans, S.; Slama, O.; Stegajlov, V.I.; Tsupko-Sitnikov, V.M.
2009-01-01
Neutron activation detectors are a useful technique for neutron flux measurements in spallation experiments. The usefulness and the accuracy of this method in such experiments were studied with the help of the Monte Carlo codes MCNPX and FLUKA
Atomistic Monte Carlo simulation of lipid membranes
DEFF Research Database (Denmark)
Wüstner, Daniel; Sklenar, Heinz
2014-01-01
Biological membranes are complex assemblies of many different molecules, whose analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction...... of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol....
Computer system for Monte Carlo experimentation
International Nuclear Information System (INIS)
Grier, D.A.
1986-01-01
A new computer system for Monte Carlo Experimentation is presented. The new system speeds and simplifies the process of coding and preparing a Monte Carlo Experiment; it also encourages the proper design of Monte Carlo Experiments, and the careful analysis of the experimental results. A new functional language is the core of this system. Monte Carlo Experiments, and their experimental designs, are programmed in this new language; those programs are compiled into Fortran output. The Fortran output is then compiled and executed. The experimental results are analyzed with a standard statistics package such as Si, Isp, or Minitab or with a user-supplied program. Both the experimental results and the experimental design may be directly loaded into the workspace of those packages. The new functional language frees programmers from many of the details of programming an experiment. Experimental designs such as factorial, fractional factorial, or Latin square are easily described by the control structures and expressions of the language. Specific mathematical models are generated by the routines of the language
Scalable Domain Decomposed Monte Carlo Particle Transport
Energy Technology Data Exchange (ETDEWEB)
O'Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)
2013-12-05
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.
Monte Carlo methods beyond detailed balance
Schram, Raoul D.; Barkema, Gerard T.|info:eu-repo/dai/nl/101275080
2015-01-01
Monte Carlo algorithms are nearly always based on the concept of detailed balance and ergodicity. In this paper we focus on algorithms that do not satisfy detailed balance. We introduce a general method for designing non-detailed balance algorithms, starting from a conventional algorithm satisfying
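The detailed-balance baseline that this paper departs from can be made concrete with a standard Metropolis chain, a minimal sketch of the conventional construction (the illustrative two-state system and all numbers are assumptions, not from the paper):

```python
import math
import random

def metropolis_step(state, energy, beta, rng):
    """One Metropolis update for a two-state system {0, 1}.

    The acceptance probability min(1, exp(-beta*dE)) is exactly what
    detailed balance demands: pi(a) P(a->b) = pi(b) P(b->a) for
    Boltzmann weights pi(s) ~ exp(-beta * E_s).
    """
    proposal = 1 - state                     # propose the other state
    dE = energy[proposal] - energy[state]
    if rng.random() < min(1.0, math.exp(-beta * dE)):
        return proposal
    return state

# The long-run occupation ratio should match the Boltzmann factor.
energy = {0: 0.0, 1: 1.0}
rng = random.Random(42)
state, counts = 0, {0: 0, 1: 0}
for _ in range(200000):
    state = metropolis_step(state, energy, beta=1.0, rng=rng)
    counts[state] += 1
ratio = counts[1] / counts[0]               # should approach exp(-1)
```

Non-detailed-balance algorithms of the kind the paper introduces keep this stationary distribution while breaking the pairwise flow condition, which can reduce the random-walk backtracking of chains like the one above.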
Monte Carlo studies of ZEPLIN III
Dawson, J; Davidge, D C R; Gillespie, J R; Howard, A S; Jones, W G; Joshi, M; Lebedenko, V N; Sumner, T J; Quenby, J J
2002-01-01
A Monte Carlo simulation of a two-phase xenon dark matter detector, ZEPLIN III, has been achieved. Results from the analysis of a simulated data set are presented, showing primary and secondary signal distributions from low energy gamma ray events.
Biases in Monte Carlo eigenvalue calculations
Energy Technology Data Exchange (ETDEWEB)
Gelbard, E.M.
1992-12-01
The Monte Carlo method has been used for many years to analyze the neutronics of nuclear reactors. In fact, as the power of computers has increased the importance of Monte Carlo in neutronics has also increased, until today this method plays a central role in reactor analysis and design. Monte Carlo is used in neutronics for two somewhat different purposes, i.e., (a) to compute the distribution of neutrons in a given medium when the neutron source-density is specified, and (b) to compute the neutron distribution in a self-sustaining chain reaction, in which case the source is determined as the eigenvector of a certain linear operator. In (b), then, the source is not given, but must be computed. In the first case (the "fixed-source" case) the Monte Carlo calculation is unbiased. That is to say that, if the calculation is repeated ("replicated") over and over, with independent random number sequences for each replica, then averages over all replicas will approach the correct neutron distribution as the number of replicas goes to infinity. Unfortunately, the computation is not unbiased in the second case, which we discuss here.
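The generation-cycle scheme behind case (b) can be sketched with a toy infinite-medium model (an illustrative assumption, far simpler than a real transport code, and not showing the bias mechanism itself): each source neutron causes fission with some probability and releases new neutrons, k is estimated generation by generation, and the source population is renormalized each cycle.

```python
import random

def sample_yield(nu, rng):
    """Sample an integer neutron yield with mean nu (e.g. 2 or 3 for 2.5)."""
    base = int(nu)
    return base + (1 if rng.random() < nu - base else 0)

def generation_cycle_k(n_particles, p_fission, nu, n_generations, seed=0):
    """Toy generation-cycle k-eigenvalue estimate.

    Infinite-medium model: each neutron fissions with probability
    p_fission (yield mean nu) or is captured, so exactly
    k = p_fission * nu.  Each generation estimates k as
    offspring/started and renormalizes the source back to n_particles,
    mimicking the eigenvector iteration described above.
    """
    rng = random.Random(seed)
    k_estimates = []
    for _ in range(n_generations):
        offspring = sum(sample_yield(nu, rng)
                        for _ in range(n_particles)
                        if rng.random() < p_fission)
        k_estimates.append(offspring / n_particles)
    active = k_estimates[5:]          # discard a few "inactive" cycles
    return sum(active) / len(active)

k = generation_cycle_k(n_particles=10000, p_fission=0.4, nu=2.5,
                       n_generations=55)
# exact answer for this toy model: 0.4 * 2.5 = 1.0
```

In a real code the renormalization couples successive generations through the sampled fission-site distribution, and it is that coupling which produces the small (order 1/N) bias the abstract discusses.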
Biases in Monte Carlo eigenvalue calculations
Energy Technology Data Exchange (ETDEWEB)
Gelbard, E.M.
1992-01-01
The Monte Carlo method has been used for many years to analyze the neutronics of nuclear reactors. In fact, as the power of computers has increased the importance of Monte Carlo in neutronics has also increased, until today this method plays a central role in reactor analysis and design. Monte Carlo is used in neutronics for two somewhat different purposes, i.e., (a) to compute the distribution of neutrons in a given medium when the neutron source-density is specified, and (b) to compute the neutron distribution in a self-sustaining chain reaction, in which case the source is determined as the eigenvector of a certain linear operator. In (b), then, the source is not given, but must be computed. In the first case (the "fixed-source" case) the Monte Carlo calculation is unbiased. That is to say that, if the calculation is repeated ("replicated") over and over, with independent random number sequences for each replica, then averages over all replicas will approach the correct neutron distribution as the number of replicas goes to infinity. Unfortunately, the computation is not unbiased in the second case, which we discuss here.
Dynamic bounds coupled with Monte Carlo simulations
Energy Technology Data Exchange (ETDEWEB)
Rajabalinejad, M., E-mail: M.Rajabalinejad@tudelft.n [Faculty of Civil Engineering, Delft University of Technology, Delft (Netherlands); Meester, L.E. [Delft Institute of Applied Mathematics, Delft University of Technology, Delft (Netherlands); Gelder, P.H.A.J.M. van; Vrijling, J.K. [Faculty of Civil Engineering, Delft University of Technology, Delft (Netherlands)
2011-02-15
For the reliability analysis of engineering structures a variety of methods is known, of which Monte Carlo (MC) simulation is widely considered to be among the most robust and most generally applicable. To reduce the simulation cost of the MC method, variance reduction methods are applied. This paper describes a method to reduce the simulation cost even further, while retaining the accuracy of Monte Carlo, by taking into account widely present monotonicity. For models exhibiting monotonic (decreasing or increasing) behavior, dynamic bounds (DB) are defined, which in a coupled Monte Carlo simulation are updated dynamically, resulting in a failure probability estimate, as well as strict (non-probabilistic) upper and lower bounds. Accurate results are obtained at a much lower cost than an equivalent ordinary Monte Carlo simulation. In a two-dimensional and a four-dimensional numerical example, the cost reduction factors are 130 and 9, respectively, where the relative error is smaller than 5%. At higher accuracy levels, this factor increases, though this effect is expected to be smaller with increasing dimension. To show the application of the DB method to real-world problems, it is applied to a complex finite element model of a flood wall in New Orleans.
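The monotonicity idea can be sketched as follows (a simplified illustration of the principle, not the authors' DB algorithm): for a limit-state function g that is monotonically increasing in every coordinate, a sample that componentwise dominates a known safe point must be safe, and one dominated by a known failed point must fail, so the expensive model call can be skipped.

```python
import random

def mc_with_dominance_skipping(g, dim, n_samples, seed=0):
    """Crude-MC estimate of P(g(u) < 0) on the unit hypercube for a g
    that is monotonically increasing in every coordinate.

    Model calls are skipped whenever an earlier evaluation already
    determines the outcome by componentwise dominance (a hypothetical
    simplified variant of the dynamic-bounds idea).
    Returns (failure probability estimate, number of g evaluations).
    """
    rng = random.Random(seed)
    safe, failed = [], []              # evaluated points with known outcome
    n_fail = n_calls = 0
    for _ in range(n_samples):
        u = [rng.random() for _ in range(dim)]
        if any(all(u[i] >= s[i] for i in range(dim)) for s in safe):
            continue                   # dominates a safe point -> safe
        if any(all(u[i] <= f[i] for i in range(dim)) for f in failed):
            n_fail += 1                # dominated by a failed point -> fails
            continue
        n_calls += 1                   # outcome unknown: evaluate the model
        if g(u) < 0:
            n_fail += 1
            failed.append(u)
        else:
            safe.append(u)
    return n_fail / n_samples, n_calls

# Illustrative limit state: failure below the line u1 + u2 = 0.5,
# so the exact failure probability is 0.5**2 / 2 = 0.125.
pf, calls = mc_with_dominance_skipping(lambda u: u[0] + u[1] - 0.5,
                                       dim=2, n_samples=20000)
```

The estimate is identical in distribution to crude Monte Carlo (every classification is exact), but `calls` is far below `n_samples`; the paper's method additionally maintains the strict upper and lower probability bounds that the dominance regions imply.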
Dynamic bounds coupled with Monte Carlo simulations
Rajabali Nejad, Mohammadreza; Meester, L.E.; van Gelder, P.H.A.J.M.; Vrijling, J.K.
2011-01-01
For the reliability analysis of engineering structures a variety of methods is known, of which Monte Carlo (MC) simulation is widely considered to be among the most robust and most generally applicable. To reduce simulation cost of the MC method, variance reduction methods are applied. This paper
Design and analysis of Monte Carlo experiments
Kleijnen, Jack P.C.; Gentle, J.E.; Haerdle, W.; Mori, Y.
2012-01-01
By definition, computer simulation or Monte Carlo models are not solved by mathematical analysis (such as differential calculus), but are used for numerical experimentation. The goal of these experiments is to answer questions about the real world; i.e., the experimenters may use their models to
Some problems on Monte Carlo method development
International Nuclear Information System (INIS)
Pei Lucheng
1992-01-01
This is a short paper on some problems of Monte Carlo method development. The content consists of deep-penetration problems, unbounded-estimate problems, limitations of the Metropolis method, the dependency problem in the Metropolis method, random-error interference problems and random equations, and intellectualisation and vectorization problems of general software
Monte Carlo simulations in theoretical physics
International Nuclear Information System (INIS)
Billoire, A.
1991-01-01
After a presentation of the principle of the Monte Carlo method, the method is applied, first to the calculation of critical exponents in the three-dimensional Ising model, and secondly to discrete quantum chromodynamics, with computation times given as a function of computer power. 28 refs., 4 tabs
Monte Carlo method for random surfaces
International Nuclear Information System (INIS)
Berg, B.
1985-01-01
Previously two of the authors proposed a Monte Carlo method for sampling statistical ensembles of random walks and surfaces with a Boltzmann probabilistic weight. In the present paper we work out the details for several models of random surfaces, defined on d-dimensional hypercubic lattices. (orig.)
Monte Carlo simulation of the microcanonical ensemble
International Nuclear Information System (INIS)
Creutz, M.
1984-01-01
We consider simulating statistical systems with a random walk on a constant energy surface. This combines features of deterministic molecular dynamics techniques and conventional Monte Carlo simulations. For discrete systems the method can be programmed to run an order of magnitude faster than other approaches. It does not require high quality random numbers and may also be useful for nonequilibrium studies. 10 references
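Creutz's constant-energy random walk is commonly implemented with a "demon" degree of freedom; the following is a minimal sketch for a 1D Ising chain (the chain length and initial demon energy are illustrative assumptions). A flip is accepted whenever the demon can pay for, or absorb, the energy change, so spins plus demon conserve total energy and the accept/reject step itself needs no random numbers, which is why the method tolerates low-quality randomness.

```python
def demon_ising_1d(n_spins, demon_energy, n_sweeps):
    """Microcanonical ('demon') simulation of a 1D Ising chain, J = 1.

    The demon carries a non-negative energy; sweeping the lattice, a
    spin flip is accepted iff the demon can supply the required energy
    change dE, keeping E_spins + E_demon exactly constant.
    Returns the recorded demon energies.
    """
    spins = [1] * n_spins                 # start in the ground state
    demon = demon_energy
    demon_samples = []
    for _ in range(n_sweeps):
        for i in range(n_spins):
            # energy change of flipping spin i (periodic boundaries)
            dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % n_spins])
            if dE <= demon:               # demon pays (or absorbs) dE
                spins[i] = -spins[i]
                demon -= dE
            demon_samples.append(demon)
    return demon_samples

# The demon's energy ends up Boltzmann distributed, exp(-E_d / kT), so
# its average energy acts as a thermometer for the microcanonical run.
samples = demon_ising_1d(n_spins=200, demon_energy=40, n_sweeps=100)
```

Because the update is a deterministic energy bookkeeping step, it can be bit-packed and multi-spin coded, which is the source of the order-of-magnitude speedup the abstract mentions for discrete systems.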
Variance Reduction Techniques in Monte Carlo Methods
Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.
2010-01-01
Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the
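One of the simplest VRTs, antithetic variates, can be sketched in a few lines (the integrand and sample counts are illustrative choices, not from the paper): each uniform draw U is paired with 1-U, and for a monotone integrand the pair is negatively correlated, which lowers the variance of the sample mean at no extra sampling cost.

```python
import math
import random

def mc_antithetic(f, n, rng):
    """Antithetic-variates estimate of the integral of f over [0, 1].

    Each uniform u is paired with 1 - u; for monotone f the two
    evaluations are negatively correlated, reducing the estimator's
    variance relative to n independent draws.
    """
    total = 0.0
    for _ in range(n // 2):
        u = rng.random()
        total += f(u) + f(1.0 - u)
    return total / n

rng = random.Random(1)
est = mc_antithetic(math.exp, 100000, rng)
# true value of the integral of e^x over [0, 1]: e - 1 ≈ 1.71828
```

For exp on [0, 1] the antithetic pairing removes most of the variance of crude Monte Carlo; other VRTs in the survey's scope (control variates, importance sampling, stratification, conditioning) trade more analysis for larger reductions.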
Coded aperture optimization using Monte Carlo simulations
International Nuclear Information System (INIS)
Martineau, A.; Rocchisani, J.M.; Moretti, J.L.
2010-01-01
Coded apertures using Uniformly Redundant Arrays (URA) have been unsuccessfully evaluated for two-dimensional and three-dimensional imaging in Nuclear Medicine. The images reconstructed from coded projections contain artifacts and suffer from poor spatial resolution in the longitudinal direction. We introduce a Maximum-Likelihood Expectation-Maximization (MLEM) algorithm for three-dimensional coded aperture imaging which uses a projection matrix calculated by Monte Carlo simulations. The aim of the algorithm is to reduce artifacts and improve the three-dimensional spatial resolution in the reconstructed images. Firstly, we present the validation of GATE (Geant4 Application for Emission Tomography) for Monte Carlo simulations of a coded mask installed on a clinical gamma camera. The coded mask modelling was validated by comparison between experimental and simulated data in terms of energy spectra, sensitivity and spatial resolution. In the second part of the study, we use the validated model to calculate the projection matrix with Monte Carlo simulations. A three-dimensional thyroid phantom study was performed to compare the performance of the three-dimensional MLEM reconstruction with conventional correlation method. The results indicate that the artifacts are reduced and three-dimensional spatial resolution is improved with the Monte Carlo-based MLEM reconstruction.
Biases in Monte Carlo eigenvalue calculations
International Nuclear Information System (INIS)
Gelbard, E.M.
1992-01-01
The Monte Carlo method has been used for many years to analyze the neutronics of nuclear reactors. In fact, as the power of computers has increased the importance of Monte Carlo in neutronics has also increased, until today this method plays a central role in reactor analysis and design. Monte Carlo is used in neutronics for two somewhat different purposes, i.e., (a) to compute the distribution of neutrons in a given medium when the neutron source-density is specified, and (b) to compute the neutron distribution in a self-sustaining chain reaction, in which case the source is determined as the eigenvector of a certain linear operator. In (b), then, the source is not given, but must be computed. In the first case (the "fixed-source" case) the Monte Carlo calculation is unbiased. That is to say that, if the calculation is repeated ("replicated") over and over, with independent random number sequences for each replica, then averages over all replicas will approach the correct neutron distribution as the number of replicas goes to infinity. Unfortunately, the computation is not unbiased in the second case, which we discuss here
Monte Carlo studies of uranium calorimetry
International Nuclear Information System (INIS)
Brau, J.; Hargis, H.J.; Gabriel, T.A.; Bishop, B.L.
1985-01-01
Detailed Monte Carlo calculations of uranium calorimetry are presented which reveal a significant difference in the responses of liquid argon and plastic scintillator in uranium calorimeters. Due to saturation effects, neutrons from the uranium are found to contribute only weakly to the liquid argon signal. Electromagnetic sampling inefficiencies are significant and contribute substantially to compensation in both systems. 17 references
Response matrix Monte Carlo based on a general geometry local calculation for electron transport
International Nuclear Information System (INIS)
Ballinger, C.T.; Rathkopf, J.A.; Martin, W.R.
1991-01-01
A Response Matrix Monte Carlo (RMMC) method has been developed for solving electron transport problems. This method was born of the need to have a reliable, computationally efficient transport method for low energy electrons (below a few hundred keV) in all materials. Today, condensed history methods are used which reduce the computation time by modeling the combined effect of many collisions but fail at low energy because of the assumptions required to characterize the electron scattering. Analog Monte Carlo simulations are prohibitively expensive since electrons undergo coulombic scattering with little state change after a collision. The RMMC method attempts to combine the accuracy of an analog Monte Carlo simulation with the speed of the condensed history methods. Like condensed history, the RMMC method uses probability distribution functions (PDFs) to describe the energy and direction of the electron after several collisions. However, unlike the condensed history method, the PDFs are based on an analog Monte Carlo simulation over a small region. Condensed history theories require assumptions about the electron scattering to derive the PDFs for direction and energy. Thus the RMMC method samples from PDFs which more accurately represent the electron random walk. Results show good agreement between the RMMC method and analog Monte Carlo. 13 refs., 8 figs
Uncertainty analysis in Monte Carlo criticality computations
International Nuclear Information System (INIS)
Qi Ao
2011-01-01
Highlights: ► Two types of uncertainty methods for k_eff Monte Carlo computations are examined. ► Sampling method has the least restrictions on perturbation but computing resources. ► Analytical method is limited to small perturbation on material properties. ► Practicality relies on efficiency, multiparameter applicability and data availability. - Abstract: Uncertainty analysis is imperative for nuclear criticality risk assessments when using Monte Carlo neutron transport methods to predict the effective neutron multiplication factor (k_eff) for fissionable material systems. For the validation of Monte Carlo codes for criticality computations against benchmark experiments, code accuracy and precision are measured by both the computational bias and uncertainty in the bias. The uncertainty in the bias accounts for known or quantified experimental, computational and model uncertainties. For the application of Monte Carlo codes for criticality analysis of fissionable material systems, an administrative margin of subcriticality must be imposed to provide additional assurance of subcriticality for any unknown or unquantified uncertainties. Because of a substantial impact of the administrative margin of subcriticality on economics and safety of nuclear fuel cycle operations, recently increasing interests in reducing the administrative margin of subcriticality make the uncertainty analysis in criticality safety computations more risk-significant. This paper provides an overview of the two most popular k_eff uncertainty analysis methods for Monte Carlo criticality computations: (1) sampling-based methods, and (2) analytical methods. Examples are given to demonstrate their usage in the k_eff uncertainty analysis due to uncertainties in both neutronic and non-neutronic parameters of fissionable material systems.
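The sampling-based approach can be sketched as follows (a toy illustration under assumed inputs, not the paper's example): draw the uncertain nuclear-data inputs from their assumed distributions, recompute k for each draw, and read the uncertainty off the spread. A trivial point model k = p_fission * nu stands in for what would be a full Monte Carlo transport run per sample.

```python
import random
import statistics

def keff_uncertainty_sampling(n_samples, seed=0):
    """Sampling-based uncertainty propagation sketch.

    Draws the uncertain inputs (here: a fission probability with an
    assumed 1% relative uncertainty and nu-bar with 0.5%), recomputes
    the toy multiplication factor k = p_fission * nu for each draw,
    and returns the sample mean and standard deviation of k.
    """
    rng = random.Random(seed)
    ks = []
    for _ in range(n_samples):
        p_fission = rng.gauss(0.40, 0.004)    # assumed 1% relative sigma
        nu = rng.gauss(2.50, 0.0125)          # assumed 0.5% relative sigma
        ks.append(p_fission * nu)
    return statistics.mean(ks), statistics.stdev(ks)

mean_k, sd_k = keff_uncertainty_sampling(10000)
# for independent inputs, sd_k ≈ k * sqrt(0.01**2 + 0.005**2) ≈ 0.0112
```

The method places almost no restriction on the size or kind of perturbation, at the cost of one transport calculation per sample, exactly the trade-off the highlights summarize; the analytical (perturbation-theory) alternative avoids the repeated runs but only holds for small perturbations.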
Pore-scale uncertainty quantification with multilevel Monte Carlo
Icardi, Matteo; Hoel, Haakon; Long, Quan; Tempone, Raul
2014-01-01
Since there are no generic ways to parametrize the randomness in the pore-scale structures, Monte Carlo techniques are the most accessible to compute statistics. We propose a multilevel Monte Carlo (MLMC) technique to reduce the computational cost
Prospect on general software of Monte Carlo method
International Nuclear Information System (INIS)
Pei Lucheng
1992-01-01
This is a short paper on the prospect of Monte Carlo general software. The content consists of cluster sampling method, zero variance technique, self-improved method, and vectorized Monte Carlo method
Bayesian phylogeny analysis via stochastic approximation Monte Carlo
Cheon, Sooyoung; Liang, Faming
2009-01-01
in simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo algorithm, to Bayesian phylogeny analysis. Our method
Intelligent Monte Carlo phase-space division and importance estimation
International Nuclear Information System (INIS)
Booth, T.E.
1989-01-01
Two years ago, a quasi-deterministic method (QD) for obtaining the Monte Carlo importance function was reported. Since then, a number of very complex problems have been solved with the aid of QD. Not only does QD estimate the importance far faster than the (weight window) generator currently in MCNP, QD requires almost no user intervention in contrast to the generator. However, both the generator and QD require the user to divide the phase space into importance regions. That is, both methods will estimate the importance of a phase-space region, but the user must define the regions. In practice this is tedious and time consuming, and many users are not particularly good at defining sensible importance regions. To make full use of the fact that QD is capable of getting good importance estimates in tens of thousands of phase-space regions relatively easily, some automatic method for dividing the phase space will be useful and perhaps essential. This paper describes recent progress toward an automatic and intelligent phase-space divider
KAMCCO, a reactor physics Monte Carlo neutron transport code
International Nuclear Information System (INIS)
Arnecke, G.; Borgwaldt, H.; Brandl, V.; Lalovic, M.
1976-06-01
KAMCCO is a 3-dimensional reactor Monte Carlo code for fast neutron physics problems. Two options are available for the solution of 1) the inhomogeneous time-dependent neutron transport equation (census time scheme), and 2) the homogeneous static neutron transport equation (generation cycle scheme). The user defines the desired output, e.g. estimates of reaction rates or neutron flux integrated over specified volumes in phase space and time intervals. Such primary quantities can be arbitrarily combined, also ratios of these quantities can be estimated with their errors. The Monte Carlo techniques are mostly analogue (exceptions: Importance sampling for collision processes, ELP/MELP, Russian roulette and splitting). Estimates are obtained from the collision and track length estimators. Elastic scattering takes into account first order anisotropy in the center of mass system. Inelastic scattering is processed via the evaporation model or via the excitation of discrete levels. For the calculation of cross sections, the energy is treated as a continuous variable. They are computed by a) linear interpolation, b) from optionally Doppler broadened single level Breit-Wigner resonances or c) from probability tables (in the region of statistically distributed resonances). (orig.) [de
Efficient Monte Carlo Simulations of Gas Molecules Inside Porous Materials.
Kim, Jihan; Smit, Berend
2012-07-10
Monte Carlo (MC) simulations are commonly used to obtain adsorption properties of gas molecules inside porous materials. In this work, we discuss various optimization strategies that lead to faster MC simulations with CO2 gas molecules inside host zeolite structures used as a test system. The reciprocal space contribution of the gas-gas Ewald summation and both the direct and the reciprocal gas-host potential energy interactions are stored inside energy grids to reduce the wall time in the MC simulations. Additional speedup can be obtained by selectively calling the routine that computes the gas-gas Ewald summation, which does not impact the accuracy of the zeolite's adsorption characteristics. We utilize two-level density-biased sampling technique in the grand canonical Monte Carlo (GCMC) algorithm to restrict CO2 insertion moves into low-energy regions within the zeolite materials to accelerate convergence. Finally, we make use of the graphics processing units (GPUs) hardware to conduct multiple MC simulations in parallel via judiciously mapping the GPU threads to available workload. As a result, we can obtain a CO2 adsorption isotherm curve with 14 pressure values (up to 10 atm) for a zeolite structure within a minute of total compute wall time.
Applications of Monte Carlo method in Medical Physics
International Nuclear Information System (INIS)
Diez Rios, A.; Labajos, M.
1989-01-01
The basic ideas of Monte Carlo techniques are presented. Random numbers and their generation by congruential methods, which underlie Monte Carlo calculations, are shown. Monte Carlo techniques to solve integrals are discussed. The evaluation of a simple one-dimensional integral with a known answer, by means of two different Monte Carlo approaches, is discussed. The basic principles of simulating photon histories on a computer, variance reduction, and the current applications in Medical Physics are commented on. (Author)
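The two classic approaches to a one-dimensional integral with a known answer can be sketched on top of a minimal congruential generator (the integrand, constants, and sample sizes here are illustrative, not the article's): the crude (sample-mean) estimator averages f at uniform points, while hit-or-miss counts the fraction of random points falling under the curve.

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Minimal linear congruential generator yielding uniforms in [0, 1)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

def crude_mc(f, n, uniforms):
    """Crude (sample-mean) Monte Carlo for the integral of f over [0, 1]."""
    return sum(f(next(uniforms)) for _ in range(n)) / n

def hit_or_miss_mc(f, n, fmax, uniforms):
    """Hit-or-miss: fraction of points under the curve, scaled by fmax."""
    hits = sum(1 for _ in range(n)
               if f(next(uniforms)) > fmax * next(uniforms))
    return fmax * hits / n

f = lambda x: x * x                 # integral of x^2 over [0, 1] is 1/3
u = lcg(seed=12345)
crude = crude_mc(f, 100000, u)
hom = hit_or_miss_mc(f, 100000, 1.0, u)
```

Both estimators converge to 1/3 at the usual 1/sqrt(n) rate, with hit-or-miss carrying the larger variance, which is the standard motivation for the crude estimator and for the variance-reduction ideas the article goes on to discuss.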
International Nuclear Information System (INIS)
Sharma, Subhash; Ott, Joseph; Williams, Jamone; Dickow, Danny
2011-01-01
Monte Carlo dose calculation algorithms have the potential for greater accuracy than traditional model-based algorithms. This enhanced accuracy is particularly evident in regions of lateral scatter disequilibrium, which can develop during treatments incorporating small field sizes and low-density tissue. A heterogeneous slab phantom was used to evaluate the accuracy of several commercially available dose calculation algorithms, including Monte Carlo dose calculation for CyberKnife, Analytical Anisotropic Algorithm and Pencil Beam convolution for the Eclipse planning system, and convolution-superposition for the Xio planning system. The phantom accommodated slabs of varying density; comparisons between planned and measured dose distributions were accomplished with radiochromic film. The Monte Carlo algorithm provided the most accurate comparison between planned and measured dose distributions. In each phantom irradiation, the Monte Carlo predictions resulted in gamma analysis comparisons >97%, using acceptance criteria of 3% dose and 3-mm distance to agreement. In general, the gamma analysis comparisons for the other algorithms were <95%. The Monte Carlo dose calculation algorithm for CyberKnife provides more accurate dose distribution calculations in regions of lateral electron disequilibrium than commercially available model-based algorithms. This is primarily because of the ability of Monte Carlo algorithms to implicitly account for tissue heterogeneities, density scaling functions; and/or effective depth correction factors are not required.
International Nuclear Information System (INIS)
Paelinck, L; Reynaert, N; Thierens, H; Neve, W De; Wagter, C de
2005-01-01
The purpose of this study was to assess the absorbed dose in and around lung tissue by performing radiochromic film measurements, Monte Carlo simulations and calculations with superposition convolution algorithms. We considered a layered polystyrene phantom of 12 × 12 × 12 cm3 containing a central cavity of 6 × 6 × 6 cm3 filled with Gammex RMI lung-equivalent material. Two field configurations were investigated, a small 1 × 10 cm2 field and a larger 10 × 10 cm2 field. First, we performed Monte Carlo simulations to investigate the influence of radiochromic film itself on the measured dose distribution when the film intersects a lung-equivalent region and is oriented parallel to the central beam axis. To that end, the film and the lung-equivalent materials were modelled in detail, taking into account their specific composition. Next, measurements were performed with the film oriented both parallel and perpendicular to the central beam axis to verify the results of our Monte Carlo simulations. Finally, we digitized the phantom in two commercially available treatment planning systems, Helax-TMS version 6.1A and Pinnacle version 6.2b, and calculated the absorbed dose in the phantom with their incorporated superposition convolution algorithms to compare with the Monte Carlo simulations. Comparing Monte Carlo simulations with measurements reveals that radiochromic film is a reliable dosimeter in and around lung-equivalent regions when the film is positioned perpendicular to the central beam axis. Radiochromic film is also able to predict the absorbed dose accurately when the film is positioned parallel to the central beam axis through the lung-equivalent region. However, attention must be paid when the film is not positioned along the central beam axis, in which case the film gradually attenuates the beam and decreases the dose measured behind the cavity. This underdosage disappears by offsetting the film a few centimetres. We find deviations of about 3.6% between
Paelinck, L.; Reynaert, N.; Thierens, H.; DeNeve, W.; DeWagter, C.
2005-05-01
The purpose of this study was to assess the absorbed dose in and around lung tissue by performing radiochromic film measurements, Monte Carlo simulations and calculations with superposition convolution algorithms. We considered a layered polystyrene phantom of 12 × 12 × 12 cm3 containing a central cavity of 6 × 6 × 6 cm3 filled with Gammex RMI lung-equivalent material. Two field configurations were investigated, a small 1 × 10 cm2 field and a larger 10 × 10 cm2 field. First, we performed Monte Carlo simulations to investigate the influence of radiochromic film itself on the measured dose distribution when the film intersects a lung-equivalent region and is oriented parallel to the central beam axis. To that end, the film and the lung-equivalent materials were modelled in detail, taking into account their specific composition. Next, measurements were performed with the film oriented both parallel and perpendicular to the central beam axis to verify the results of our Monte Carlo simulations. Finally, we digitized the phantom in two commercially available treatment planning systems, Helax-TMS version 6.1A and Pinnacle version 6.2b, and calculated the absorbed dose in the phantom with their incorporated superposition convolution algorithms to compare with the Monte Carlo simulations. Comparing Monte Carlo simulations with measurements reveals that radiochromic film is a reliable dosimeter in and around lung-equivalent regions when the film is positioned perpendicular to the central beam axis. Radiochromic film is also able to predict the absorbed dose accurately when the film is positioned parallel to the central beam axis through the lung-equivalent region. However, attention must be paid when the film is not positioned along the central beam axis, in which case the film gradually attenuates the beam and decreases the dose measured behind the cavity. This underdosage disappears by offsetting the film a few centimetres. We find deviations of about 3.6% between
Monte Carlo computation in the applied research of nuclear technology
International Nuclear Information System (INIS)
Xu Shuyan; Liu Baojie; Li Qin
2007-01-01
This article briefly introduces Monte Carlo methods and their properties. It surveys Monte Carlo methods with emphasis on their applications to several domains of nuclear technology. Monte Carlo simulation methods and several commonly used computer software packages that implement them are also introduced. The proposed methods are demonstrated by a real example. (authors)
Energy Technology Data Exchange (ETDEWEB)
Liang, Jingang; Wang, Kan; Qiu, Yishu [Dept. of Engineering Physics, LiuQing Building, Tsinghua University, Beijing (China); Chai, Xiao Ming; Qiang, Sheng Long [Science and Technology on Reactor System Design Technology Laboratory, Nuclear Power Institute of China, Chengdu (China)
2016-06-15
Because of prohibitive data storage requirements in large-scale simulations, the memory problem is an obstacle for Monte Carlo (MC) codes in accomplishing pin-wise three-dimensional (3D) full-core calculations, particularly for whole-core depletion analyses. Various kinds of data are evaluated and quantificational total memory requirements are analyzed based on the Reactor Monte Carlo (RMC) code, showing that tally data, material data, and isotope densities in depletion are three major parts of memory storage. The domain decomposition method is investigated as a means of saving memory, by dividing spatial geometry into domains that are simulated separately by parallel processors. For the validity of particle tracking during transport simulations, particles need to be communicated between domains. In consideration of efficiency, an asynchronous particle communication algorithm is designed and implemented. Furthermore, we couple the domain decomposition method with MC burnup process, under a strategy of utilizing consistent domain partition in both transport and depletion modules. A numerical test of 3D full-core burnup calculations is carried out, indicating that the RMC code, with the domain decomposition method, is capable of pin-wise full-core burnup calculations with millions of depletion regions.
Higgs boson events and background at LEP. A Monte Carlo study
International Nuclear Information System (INIS)
Ekspong, G.; Hultqvist, K.
1982-06-01
Higgs boson production at LEP via e+e- → Z0 → H0 + e+e- has been studied by Monte Carlo generation of events with realistic measurement errors added. The results show the recoil-mass (Higgs boson mass) resolution to be reasonably good for boson masses greater than 5 GeV. The events are found to populate a phase-space region free of physical background for all boson masses below about 35 GeV. For masses above 40 GeV the Higgs boson signal merges with the physical background produced by semileptonic decays of heavy-flavour quarks while diminishing in strength to low levels. The geometrical acceptance of a detector like DELPHI is about 80 per cent for Higgs boson events. (Author)
Monte Carlo-based tail exponent estimator
Barunik, Jozef; Vacha, Lukas
2010-11-01
In this paper we propose a new approach to estimating the tail exponent in financial stock markets. We begin the study with the finite-sample behavior of the Hill estimator under α-stable distributions. Using large Monte Carlo simulations, we show that the Hill estimator overestimates the true tail exponent and can hardly be used on small samples. Building on these results, we introduce a Monte Carlo-based method of estimation for the tail exponent. Our proposed method is not sensitive to the choice of tail size and works well even on small data samples. The new estimator also gives unbiased results with symmetrical confidence intervals. Finally, we demonstrate the power of our estimator on international stock market indices, estimating the tail exponent separately for the periods 2002-2005 and 2006-2009.
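The standard Hill estimator that the abstract critiques is short enough to sketch. The following is a textbook implementation applied to an exact Pareto sample, not the authors' Monte Carlo-corrected estimator; the function name and the choice k = 1000 are illustrative assumptions.

```python
import numpy as np

def hill_estimator(x, k):
    """Textbook Hill estimator of the tail exponent alpha from the k
    largest order statistics of the sample x."""
    s = np.sort(np.abs(x))[::-1]          # descending order statistics
    logs = np.log(s[:k]) - np.log(s[k])   # k log-spacings above the threshold
    return 1.0 / np.mean(logs)

rng = np.random.default_rng(0)
# exact Pareto sample with tail exponent alpha = 2 (numpy's pareto is the
# shifted Lomax form, so add 1 to get the classical Pareto on [1, inf))
sample = rng.pareto(2.0, size=100_000) + 1.0
print(hill_estimator(sample, k=1000))     # roughly 2 for this sample
```

On α-stable data, by contrast, the slowly varying tail makes this estimator strongly dependent on k, which is the finite-sample problem the paper addresses.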
No-compromise reptation quantum Monte Carlo
International Nuclear Information System (INIS)
Yuen, W K; Farrar, Thomas J; Rothstein, Stuart M
2007-01-01
Since its publication, the reptation quantum Monte Carlo algorithm of Baroni and Moroni (1999 Phys. Rev. Lett. 82 4745) has been applied to several important problems in physics, but its mathematical foundations are not well understood. We show that their algorithm is not of typical Metropolis-Hastings type, and we specify conditions required for the generated Markov chain to be stationary and to converge to the intended distribution. The time-step bias may accumulate, and in many applications it is only the middle of the reptile that matters. We therefore propose an alternative, 'no-compromise reptation quantum Monte Carlo', to stabilize the middle of the reptile. (fast track communication)
Multilevel Monte Carlo Approaches for Numerical Homogenization
Efendiev, Yalchin R.
2015-10-01
In this article, we study the application of multilevel Monte Carlo (MLMC) approaches to numerical random homogenization. Our objective is to compute the expectation of some functionals of the homogenized coefficients, or of the homogenized solutions. This is accomplished within MLMC by considering different sizes of representative volumes (RVEs). Many inexpensive computations with the smallest RVE size are combined with fewer expensive computations performed on larger RVEs. Likewise, when it comes to homogenized solutions, different levels of coarse-grid meshes are used to solve the homogenized equation. We show that, by carefully selecting the number of realizations at each level, we can achieve a speed-up in the computations in comparison to a standard Monte Carlo method. Numerical results are presented for both one-dimensional and two-dimensional test-cases that illustrate the efficiency of the approach.
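The level-coupling idea can be illustrated on a toy problem away from homogenization: a multilevel estimator of E[S_T] for a geometric Brownian motion discretized by Euler steps, with coupled coarse/fine paths and more samples on the cheaper levels. All parameter values and per-level sample counts below are illustrative assumptions, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(1)
MU, SIGMA, T, S0 = 0.05, 0.2, 1.0, 1.0    # illustrative GBM parameters

def level_sample(level, n_paths):
    """Sample P_l - P_{l-1} (or P_0 on the coarsest level) on coupled
    Euler paths of dS = MU*S dt + SIGMA*S dW, where fine and coarse paths
    share the same Brownian increments -- the key MLMC ingredient."""
    nf = 2 ** level                        # fine-grid step count
    dt = T / nf
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, nf))
    Sf = np.full(n_paths, S0)
    for k in range(nf):
        Sf = Sf * (1.0 + MU * dt + SIGMA * dW[:, k])
    if level == 0:
        return Sf                          # plain estimator on the coarsest level
    Sc = np.full(n_paths, S0)
    dWc = dW[:, 0::2] + dW[:, 1::2]        # coarse increments from paired fine ones
    for k in range(nf // 2):
        Sc = Sc * (1.0 + MU * 2 * dt + SIGMA * dWc[:, k])
    return Sf - Sc

# telescoping sum: many cheap coarse samples, fewer expensive fine corrections
samples_per_level = [100_000, 20_000, 4_000]
est = sum(level_sample(l, n).mean() for l, n in enumerate(samples_per_level))
print(est)   # approximates E[S_T] = exp(MU*T), about 1.051
```

Because the coupled correction terms have small variance, far fewer samples are needed on the expensive fine levels, which is the source of the speed-up claimed over standard Monte Carlo.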
Status of Monte Carlo at Los Alamos
International Nuclear Information System (INIS)
Thompson, W.L.; Cashwell, E.D.
1980-01-01
At Los Alamos the early work of Fermi, von Neumann, and Ulam has been developed and supplemented by many followers, notably Cashwell and Everett, and the main product today is the continuous-energy, general-purpose, generalized-geometry, time-dependent, coupled neutron-photon transport code called MCNP. The Los Alamos Monte Carlo research and development effort is concentrated in Group X-6. MCNP treats an arbitrary three-dimensional configuration of arbitrary materials in geometric cells bounded by first- and second-degree surfaces and some fourth-degree surfaces (elliptical tori). Monte Carlo has evolved into perhaps the main method for radiation transport calculations at Los Alamos. MCNP is used in every technical division at the Laboratory by over 130 users, about 600 times a month, accounting for nearly 200 hours of CDC-7600 time.
Monte Carlo simulations in skin radiotherapy
International Nuclear Information System (INIS)
Sarvari, A.; Jeraj, R.; Kron, T.
2000-01-01
The primary goal of this work was to develop a procedure for calculating the appropriate filter shape for a brachytherapy applicator used in skin radiotherapy. In the applicator, a radioactive source is positioned close to the skin. Without a filter, the resultant dose distribution would be highly nonuniform, whereas high uniformity is usually required. This can be achieved using an appropriately shaped filter, which flattens the dose profile. Because of the complexity of the transport and geometry, Monte Carlo simulations had to be used. A 192Ir high-dose-rate photon source was used. All necessary transport parameters were simulated with the MCNP4B Monte Carlo code. A highly efficient iterative procedure was developed, which enabled calculation of the optimal filter shape in only a few iterations. The initially nonuniform dose distributions became uniform to within one percent when the filter calculated by this procedure was applied. (author)
Monte Carlo simulations on SIMD computer architectures
International Nuclear Information System (INIS)
Burmester, C.P.; Gronsky, R.; Wille, L.T.
1992-01-01
In this paper, algorithmic considerations regarding the implementation of various materials-science applications of the Monte Carlo technique on single instruction multiple data (SIMD) computer architectures are presented. In particular, implementation of the Ising model with nearest, next-nearest, and long-range screened Coulomb interactions on the SIMD architecture MasPar MP-1 (DEC mpp-12000) series of massively parallel computers is demonstrated. Methods of code development which optimize processor array use and minimize inter-processor communication are presented, including lattice partitioning and the use of processor-array spanning tree structures for data reduction. Both geometric and algorithmic parallel approaches are utilized. Benchmarks in terms of Monte Carlo updates per second for the MasPar architecture are presented and compared to values reported in the literature from comparable studies on other architectures.
Multilevel sequential Monte-Carlo samplers
Jasra, Ajay
2016-01-01
Multilevel Monte-Carlo methods provide a powerful computational technique for reducing the computational cost of estimating expectations for a given computational effort. They are particularly relevant for computational problems when approximate distributions are determined via a resolution parameter h, with h=0 giving the theoretical exact distribution (e.g. SDEs or inverse problems with PDEs). The method provides a benefit by coupling samples from successive resolutions, and estimating differences of successive expectations. We develop a methodology that brings Sequential Monte-Carlo (SMC) algorithms within the framework of the Multilevel idea, as SMC provides a natural set-up for coupling samples over different resolutions. We prove that the new algorithm indeed preserves the benefits of the multilevel principle, even if samples at all resolutions are now correlated.
Monte Carlo simulation of gas Cerenkov detectors
International Nuclear Information System (INIS)
Mack, J.M.; Jain, M.; Jordan, T.M.
1984-01-01
Theoretical study of selected gamma-ray and electron diagnostics necessitates coupling Cerenkov radiation to electron/photon cascades. A Cerenkov production model and its incorporation into a general-geometry Monte Carlo coupled electron/photon transport code are discussed. A special optical photon ray-trace is implemented using bulk optical properties assigned to each Monte Carlo zone. Good agreement exists between experimental and calculated Cerenkov data for a carbon-dioxide gas Cerenkov detector experiment. Cerenkov production and threshold data are presented for a typical carbon-dioxide gas detector that converts a 16.7 MeV photon source to Cerenkov light, which is collected by optics and detected by a photomultiplier.
Hypothesis testing of scientific Monte Carlo calculations
Wallerberger, Markus; Gull, Emanuel
2017-11-01
The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitates rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.
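A minimal instance of the idea, assuming nothing beyond the abstract: treat a Monte Carlo estimate with a known expectation as a statistical hypothesis and compute a p-value from the central limit theorem; an automated test suite would then flag implausibly large deviations. The π/4 target and the failure threshold are illustrative.

```python
import numpy as np
from math import pi, sqrt, erf

rng = np.random.default_rng(2)

def mc_quarter_circle(n):
    """Hit-or-miss Monte Carlo estimate of pi/4 with its standard error."""
    hits = rng.random(n) ** 2 + rng.random(n) ** 2 < 1.0
    p = hits.mean()
    return p, sqrt(p * (1.0 - p) / n)

def two_sided_p_value(estimate, se, truth):
    """Gaussian z-test p-value: how surprising the observed deviation
    would be if the simulation were correct (CLT regime, large n)."""
    z = abs(estimate - truth) / se
    return 1.0 - erf(z / sqrt(2.0))

est, se = mc_quarter_circle(1_000_000)
p_val = two_sided_p_value(est, se, pi / 4.0)
# an automated test would fail the run if p_val fell below a tiny threshold
print(f"estimate={est:.5f}, p-value={p_val:.3f}")
```

A deterministic unit test would demand exact equality; the statistical test instead bounds the false-alarm rate, which is the adaptation to stochastic codes the paper argues for.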
Monte Carlo Simulation for Particle Detectors
Pia, Maria Grazia
2012-01-01
Monte Carlo simulation is an essential component of experimental particle physics in all the phases of its life-cycle: the investigation of the physics reach of detector concepts, the design of facilities and detectors, the development and optimization of data reconstruction software, the data analysis for the production of physics results. This note briefly outlines some research topics related to Monte Carlo simulation, that are relevant to future experimental perspectives in particle physics. The focus is on physics aspects: conceptual progress beyond current particle transport schemes, the incorporation of materials science knowledge relevant to novel detection technologies, functionality to model radiation damage, the capability for multi-scale simulation, quantitative validation and uncertainty quantification to determine the predictive power of simulation. The R&D on simulation for future detectors would profit from cooperation within various components of the particle physics community, and synerg...
Status of Monte Carlo at Los Alamos
International Nuclear Information System (INIS)
Thompson, W.L.; Cashwell, E.D.; Godfrey, T.N.K.; Schrandt, R.G.; Deutsch, O.L.; Booth, T.E.
1980-05-01
Four papers were presented by Group X-6 on April 22, 1980, at the Oak Ridge Radiation Shielding Information Center (RSIC) Seminar-Workshop on Theory and Applications of Monte Carlo Methods. These papers are combined into one report for convenience and because they are related to each other. The first paper (by Thompson and Cashwell) is a general survey about X-6 and MCNP and is an introduction to the other three papers. It can also serve as a resume of X-6. The second paper (by Godfrey) explains some of the details of geometry specification in MCNP. The third paper (by Cashwell and Schrandt) illustrates calculating flux at a point with MCNP; in particular, the once-more-collided flux estimator is demonstrated. Finally, the fourth paper (by Thompson, Deutsch, and Booth) is a tutorial on some variance-reduction techniques. It should be required reading for a fledgling Monte Carlo practitioner.
Topological zero modes in Monte Carlo simulations
International Nuclear Information System (INIS)
Dilger, H.
1994-08-01
We present an improvement of global Metropolis updating steps, the instanton hits, used in a hybrid Monte Carlo simulation of the two-flavor Schwinger model with staggered fermions. These hits are designed to change the topological sector of the gauge field. In order to match these hits to an unquenched simulation with pseudofermions, the approximate zero mode structure of the lattice Dirac operator has to be considered explicitly. (orig.)
Handbook of Markov chain Monte Carlo
Brooks, Steve
2011-01-01
"Handbook of Markov Chain Monte Carlo" brings together the major advances that have occurred in recent years while incorporating enough introductory material for new users of MCMC. Along with thorough coverage of the theoretical foundations and algorithmic and computational methodology, this comprehensive handbook includes substantial realistic case studies from a variety of disciplines. These case studies demonstrate the application of MCMC methods and serve as a series of templates for the construction, implementation, and choice of MCMC methodology.
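As a pointer to the kind of methodology the handbook covers, here is a minimal random-walk Metropolis sampler; this is a generic sketch, not code from the book, and the step size and target density are illustrative.

```python
import numpy as np

def metropolis(log_density, x0, n_samples, step=1.0, seed=0):
    """Minimal random-walk Metropolis sampler: symmetric Gaussian
    proposal, accepted with probability min(1, pi(x')/pi(x))."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_density(x0)
    out = np.empty(n_samples)
    for i in range(n_samples):
        prop = x + step * rng.normal()
        lp_prop = log_density(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis acceptance rule
            x, lp = prop, lp_prop
        out[i] = x                                # rejected moves repeat x
    return out

# sample a standard normal target (log-density up to a constant)
chain = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_samples=50_000)
print(chain.mean(), chain.std())   # should be near 0 and 1 respectively
```

Everything beyond this skeleton (adaptation, diagnostics, multi-chain convergence checks) is where the handbook's case studies come in.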
The lund Monte Carlo for jet fragmentation
International Nuclear Information System (INIS)
Sjoestrand, T.
1982-03-01
We present a Monte Carlo program based on the Lund model for jet fragmentation. Quark, gluon, diquark and hadron jets are considered. Special emphasis is put on the fragmentation of colour-singlet jet systems, for which energy, momentum and flavour are conserved explicitly. The model for decays of unstable particles, in particular the weak decay of heavy hadrons, is described. The central part of the paper is a detailed description of how to use the FORTRAN 77 program. (Author)
Monte Carlo methods for preference learning
DEFF Research Database (Denmark)
Viappiani, P.
2012-01-01
Utility elicitation is an important component of many applications, such as decision support systems and recommender systems. Such systems query users about their preferences and give recommendations based on the system's belief about the utility function. Critical to these applications is the acquisition of a prior distribution over the utility parameters and the possibility of real-time Bayesian inference. In this paper we consider Monte Carlo methods for these problems.
Monte Carlo methods for shield design calculations
International Nuclear Information System (INIS)
Grimstone, M.J.
1974-01-01
A suite of Monte Carlo codes is being developed for use on a routine basis in commercial reactor shield design. The methods adopted for this purpose include the modular construction of codes, simplified geometries, automatic variance reduction techniques, continuous energy treatment of cross section data, and albedo methods for streaming. Descriptions are given of the implementation of these methods and of their use in practical calculations. 26 references. (U.S.)
General purpose code for Monte Carlo simulations
International Nuclear Information System (INIS)
Wilcke, W.W.
1983-01-01
A general-purpose computer code called MONTHY has been written to perform Monte Carlo simulations of physical systems. To achieve a high degree of flexibility, the code is organized like a general-purpose computer, operating on a vector describing the time-dependent state of the system under simulation. The instruction set of the computer is defined by the user and is therefore adaptable to the particular problem studied. The organization of MONTHY allows iterative and conditional execution of operations.
Autocorrelations in hybrid Monte Carlo simulations
International Nuclear Information System (INIS)
Schaefer, Stefan; Virotta, Francesco
2010-11-01
Simulations of QCD suffer from severe critical slowing down towards the continuum limit. This problem is known to be prominent in the topological charge; however, all observables are affected to varying degrees by these slow modes in the Monte Carlo evolution. We investigate the slowing down in high-statistics simulations and propose a new error analysis method, which gives a realistic estimate of the contribution of the slow modes to the errors. (orig.)
Introduction to the Monte Carlo methods
International Nuclear Information System (INIS)
Uzhinskij, V.V.
1993-01-01
Codes illustrating the use of Monte Carlo methods in high energy physics, such as the inverse transformation method, the rejection method, particle propagation through the nucleus, particle interaction with the nucleus, etc., are presented. A set of useful random number generation algorithms is given (for the binomial, Poisson, β-, γ- and normal distributions). 5 figs., 1 tab
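The two sampling techniques named in the abstract can be sketched in a few lines; the distributions chosen below (exponential for inversion, the semicircle law for rejection) are illustrative assumptions, not the report's examples.

```python
import numpy as np

rng = np.random.default_rng(3)

def exponential_by_inversion(lam, n):
    """Inverse transformation method: the exponential CDF
    F(x) = 1 - exp(-lam*x) inverts in closed form, so X = -ln(1-U)/lam
    is exponential when U is uniform on [0, 1)."""
    return -np.log(1.0 - rng.random(n)) / lam

def semicircle_by_rejection(n):
    """Rejection method: propose (x, y) uniformly in a bounding box and
    keep x when y falls under the density f(x) = (2/pi)*sqrt(1 - x^2)."""
    out = []
    while len(out) < n:
        x = rng.uniform(-1.0, 1.0, n)
        y = rng.uniform(0.0, 2.0 / np.pi, n)
        out.extend(x[y < (2.0 / np.pi) * np.sqrt(1.0 - x * x)])
    return np.asarray(out[:n])

print(exponential_by_inversion(2.0, 100_000).mean())   # near 1/lam = 0.5
print(semicircle_by_rejection(100_000).std())          # near 0.5, the semicircle std
```

Inversion needs an invertible CDF; rejection only needs the density and a dominating box, at the cost of discarding a fraction of the proposals (here about 1 - π/4).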
Sequential Monte Carlo with Highly Informative Observations
Del Moral, Pierre; Murray, Lawrence M.
2014-01-01
We propose sequential Monte Carlo (SMC) methods for sampling the posterior distribution of state-space models under highly informative observation regimes, a situation in which standard SMC methods can perform poorly. A special case is simulating bridges between given initial and final values. The basic idea is to introduce a schedule of intermediate weighting and resampling times between observation times, which guide particles towards the final state. This can always be done for continuous-...
Monte Carlo codes use in neutron therapy
International Nuclear Information System (INIS)
Paquis, P.; Mokhtari, F.; Karamanoukian, D.; Pignol, J.P.; Cuendet, P.; Iborra, N.
1998-01-01
Monte Carlo calculation codes allow one to study accurately all the parameters relevant to radiation effects, such as the dose deposition or the type of microscopic interactions, through one-by-one particle transport simulation. These features are very useful for neutron irradiations, from device development up to dosimetry. This paper illustrates some applications of these codes in Neutron Capture Therapy and Neutron Capture Enhancement of fast neutron irradiations. (authors)
Quantum Monte Carlo calculations of light nuclei
International Nuclear Information System (INIS)
Pandharipande, V. R.
1999-01-01
Quantum Monte Carlo methods provide an essentially exact way to calculate various properties of nuclear bound, and low energy continuum states, from realistic models of nuclear interactions and currents. After a brief description of the methods and modern models of nuclear forces, we review the results obtained for all the bound, and some continuum states of up to eight nucleons. Various other applications of the methods are reviewed along with future prospects
Monte-Carlo simulation of electromagnetic showers
International Nuclear Information System (INIS)
Amatuni, Ts.A.
1984-01-01
The universal ELSS-1 program for Monte Carlo simulation of high energy electromagnetic showers in homogeneous absorbers of arbitrary geometry has been written. The major processes and effects of electron and photon interaction with matter, particularly the Landau-Pomeranchuk-Migdal effect, are taken into account in the simulation procedures. The simulation results are compared with experimental data. Some characteristics of shower detectors and electromagnetic showers for energies up to 1 TeV are calculated.
Cost of splitting in Monte Carlo transport
International Nuclear Information System (INIS)
Everett, C.J.; Cashwell, E.D.
1978-03-01
In a simple transport problem designed to estimate transmission through a plane slab of x free paths by Monte Carlo methods, it is shown that m-splitting (m ≥ 2) does not pay unless exp(x) > m(m + 3)/(m - 1). In such a case, the minimum total cost in terms of machine time is obtained as a function of m, and the optimal value of m is determined
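The break-even condition quoted in the abstract is easy to evaluate numerically; the following sketch simply tabulates which splitting factors m satisfy exp(x) > m(m + 3)/(m - 1) for a thin and a thick slab (the slab thicknesses are illustrative).

```python
import math

def splitting_pays(x, m):
    """Break-even test from the abstract: m-way splitting (m >= 2) pays
    off in a slab of x free paths only when exp(x) > m*(m + 3)/(m - 1)."""
    return math.exp(x) > m * (m + 3) / (m - 1)

for x in (2.0, 5.0):
    viable = [m for m in range(2, 10) if splitting_pays(x, m)]
    print(f"x = {x}: splitting factors that pay: {viable}")
# for the thin slab (x = 2, exp(x) ~ 7.4) no m qualifies, since the
# right-hand side is at least 9; the thick slab (x = 5) admits all of 2..9
```

The right-hand side grows roughly linearly in m, so for a given slab only a bounded range of splitting factors can ever be cost-effective.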
From Monte Carlo to Quantum Computation
Heinrich, Stefan
2001-01-01
Quantum computing was so far mainly concerned with discrete problems. Recently, E. Novak and the author studied quantum algorithms for high dimensional integration and dealt with the question, which advantages quantum computing can bring over classical deterministic or randomized methods for this type of problem. In this paper we give a short introduction to the basic ideas of quantum computing and survey recent results on high dimensional integration. We discuss connections to the Monte Carl...
Monte Carlo simulation of Touschek effect
Directory of Open Access Journals (Sweden)
Aimin Xiao
2010-07-01
We present a Monte Carlo method implementation in the code elegant for simulating Touschek scattering effects in a linac beam. The local scattering rate and the distribution of scattered electrons can be obtained from the code either for a Gaussian-distributed beam or for a general beam whose distribution function is given. In addition, scattered electrons can be tracked through the beam line and the local beam-loss rate and beam halo information recorded.
Monte Carlo method for neutron transport problems
International Nuclear Information System (INIS)
Asaoka, Takumi
1977-01-01
Some methods for decreasing variances in Monte Carlo neutron transport calculations are presented together with the results of sample calculations. The general-purpose neutron transport Monte Carlo code ''MORSE'' was used for the purpose. The first method discussed in this report is the method of statistical estimation. As an example of this method, the application of the coarse-mesh rebalance acceleration method to the criticality calculation of a cylindrical fast reactor is presented. The effective multiplication factor and its standard deviation are presented as a function of the number of histories, and comparisons are made between the coarse-mesh rebalance method and the standard method. Five-group neutron fluxes at the core center are also compared with the result of an S4 calculation. The second method is the method of correlated sampling. This method was applied to the perturbation calculation of control rod worths in a fast critical assembly (FCA-V-3). Two methods of sampling (similar flight paths and identical flight paths) are tested and compared with experimental results. In every case the experimental value lies within the standard deviation of the Monte Carlo calculations. The third method is importance sampling. In this report a biased selection of particle flight directions is discussed. This method was applied to the flux calculation in a spherical fast neutron system surrounded by a 10.16 cm iron reflector. Result-direction biasing, path-length stretching, and no biasing are compared with an S8 calculation. (Aoki, K.)
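The "identical flight paths" flavour of correlated sampling can be illustrated on a one-dimensional toy slab: estimating the small change in transmission caused by a 5% cross-section perturbation, once with independent runs and once with shared random numbers. The geometry and numbers are illustrative assumptions, not the FCA-V-3 setup.

```python
import numpy as np

def transmission(sigma, n, rng):
    """Analog estimate of transmission through a slab one unit thick:
    a particle gets through if its sampled free path exceeds the width."""
    return (rng.exponential(1.0 / sigma, n) > 1.0).mean()

n = 200_000
base, pert = 1.0, 1.05            # unperturbed / perturbed total cross section

# independent runs: the ~2% effect competes with two independent noises
d_indep = (transmission(pert, n, np.random.default_rng(4))
           - transmission(base, n, np.random.default_rng(5)))

# correlated sampling with identical random numbers ("identical flight
# paths"): the same uniforms generate both sets of free paths, so the
# statistical noise largely cancels in the difference
u = np.random.default_rng(6).random(n)
d_corr = (-np.log(u) / pert > 1.0).mean() - (-np.log(u) / base > 1.0).mean()

print(d_indep, d_corr)   # both estimate exp(-1.05) - exp(-1), about -0.018
```

The correlated difference is exact path-by-path except for the few histories that straddle the surface, which is why its variance is far smaller than that of the independent-run difference.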
Biased Monte Carlo optimization: the basic approach
International Nuclear Information System (INIS)
Campioni, Luca; Scardovelli, Ruben; Vestrucci, Paolo
2005-01-01
It is well-known that the Monte Carlo method is very successful in tackling several kinds of system simulations. It often happens that one has to deal with rare events, and the use of a variance reduction technique is almost mandatory, in order to have Monte Carlo efficient applications. The main issue associated with variance reduction techniques is related to the choice of the value of the biasing parameter. Actually, this task is typically left to the experience of the Monte Carlo user, who has to make many attempts before achieving an advantageous biasing. A valuable result is provided: a methodology and a practical rule addressed to establish an a priori guidance for the choice of the optimal value of the biasing parameter. This result, which has been obtained for a single component system, has the notable property of being valid for any multicomponent system. In particular, in this paper, the exponential and the uniform biases of exponentially distributed phenomena are investigated thoroughly
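The exponential biasing the paper analyzes can be sketched as importance sampling of a rare tail probability, with the biasing parameter left as the free knob whose a priori choice is the paper's subject; the rate values below are illustrative assumptions.

```python
import numpy as np

def rare_tail_is(t, bias, n, seed=0):
    """Importance-sampling estimate of P(X > t) for X ~ Exp(1), drawing
    from an exponentially biased density Exp(bias) and reweighting by the
    likelihood ratio f(x)/g(x) = exp(-(1 - bias)*x) / bias."""
    rng = np.random.default_rng(seed)
    x = rng.exponential(1.0 / bias, n)       # biased draws (rate = bias)
    w = np.exp(-(1.0 - bias) * x) / bias     # likelihood-ratio weights
    scores = (x > t) * w
    return scores.mean(), scores.std() / np.sqrt(n)

t = 10.0                                      # true value exp(-10), about 4.5e-5
for bias in (1.0, 0.3, 0.1):                  # bias = 1.0 is unbiased analog MC
    est, err = rare_tail_is(t, bias, 100_000, seed=7)
    print(f"bias={bias}: estimate={est:.2e} +/- {err:.1e}")
```

Pushing the bias toward 1/t drives more samples into the rare region and shrinks the error bar by orders of magnitude; choosing that value a priori, rather than by trial and error, is the methodology the abstract advertises.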
Quantum Monte Carlo for vibrating molecules
International Nuclear Information System (INIS)
Brown, W.R.; Lawrence Berkeley National Lab., CA
1996-08-01
Quantum Monte Carlo (QMC) has successfully computed the total electronic energies of atoms and molecules. The main goal of this work is to use correlation function quantum Monte Carlo (CFQMC) to compute the vibrational state energies of molecules given a potential energy surface (PES). In CFQMC, an ensemble of random walkers simulates the diffusion and branching processes of the imaginary-time time-dependent Schroedinger equation in order to evaluate the matrix elements. The program QMCVIB was written to perform multi-state VMC and CFQMC calculations and was employed for several calculations of the H2O and C3 vibrational states, using 7 PESs, 3 trial wavefunction forms, and two methods of non-linear basis function parameter optimization, on both serial and parallel computers. Different wavefunction forms were required to construct accurate trial wavefunctions for H2O and C3. For C3, the non-linear parameters were optimized with respect to the sum of the energies of several low-lying vibrational states, and the Monte Carlo data were collected into blocks to stabilize the statistical error estimates. Accurate vibrational state energies were computed using both serial and parallel QMCVIB programs. Comparison of vibrational state energies computed from the three C3 PESs suggested that a non-linear equilibrium geometry PES is the most accurate and that discrete potential representations may be used to conveniently determine vibrational state energies
Lattice gauge theories and Monte Carlo simulations
International Nuclear Information System (INIS)
Rebbi, C.
1981-11-01
After some preliminary considerations, the discussion of quantum gauge theories on a Euclidean lattice takes up the definition of Euclidean quantum theory and treatment of the continuum limit; analogy is made with statistical mechanics. Perturbative methods can produce useful results for strong or weak coupling. In the attempts to investigate the properties of the systems for intermediate coupling, numerical methods known as Monte Carlo simulations have proved valuable. The bulk of this paper illustrates the basic ideas underlying the Monte Carlo numerical techniques and the major results achieved with them according to the following program: Monte Carlo simulations (general theory, practical considerations), phase structure of Abelian and non-Abelian models, the observables (coefficient of the linear term in the potential between two static sources at large separation, mass of the lowest excited state with the quantum numbers of the vacuum (the so-called glueball), the potential between two static sources at very small distance, the critical temperature at which sources become deconfined), gauge fields coupled to bosonic matter (Higgs) fields, and systems with fermions
Generalized hybrid Monte Carlo - CMFD methods for fission source convergence
International Nuclear Information System (INIS)
Wolters, Emily R.; Larsen, Edward W.; Martin, William R.
2011-01-01
In this paper, we generalize the recently published 'CMFD-Accelerated Monte Carlo' method and present two new methods that reduce the statistical error in CMFD-Accelerated Monte Carlo. The CMFD-Accelerated Monte Carlo method uses Monte Carlo to estimate nonlinear functionals used in low-order CMFD equations for the eigenfunction and eigenvalue. The Monte Carlo fission source is then modified to match the resulting CMFD fission source in a 'feedback' procedure. The two proposed methods differ from CMFD-Accelerated Monte Carlo in the definition of the required nonlinear functionals, but they have identical CMFD equations. The proposed methods are compared with CMFD-Accelerated Monte Carlo on a high dominance ratio test problem. All hybrid methods converge the Monte Carlo fission source almost immediately, leading to a large reduction in the number of inactive cycles required. The proposed methods stabilize the fission source more efficiently than CMFD-Accelerated Monte Carlo, leading to a reduction in the number of active cycles required. Finally, as in CMFD-Accelerated Monte Carlo, the apparent variance of the eigenfunction is approximately equal to the real variance, so the real error is well-estimated from a single calculation. This is an advantage over standard Monte Carlo, in which the real error can be underestimated due to inter-cycle correlation. (author)
Monte Carlo methods and models in finance and insurance
Korn, Ralf; Kroisandt, Gerald
2010-01-01
Offering a unique balance between applications and calculations, Monte Carlo Methods and Models in Finance and Insurance incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The authors separately discuss Monte Carlo techniques, stochastic process basics, and the theoretical background and intuition behind financial and actuarial mathematics, before bringing the topics together to apply the Monte Carlo methods to areas of finance and insurance. This allows for the easy identification of standard Monte Carlo tools and for a detailed focus on the main principles of financial and insurance mathematics. The book describes high-level Monte Carlo methods for standard simulation and the simulation of...
Advanced Mesh-Enabled Monte carlo capability for Multi-Physics Reactor Analysis
Energy Technology Data Exchange (ETDEWEB)
Wilson, Paul; Evans, Thomas; Tautges, Tim
2012-12-24
This project will accumulate high-precision fluxes throughout reactor geometry on a non-orthogonal grid of cells to support multi-physics coupling, in order to more accurately calculate parameters such as reactivity coefficients and to generate multi-group cross sections. This work will be based upon recent developments to incorporate advanced geometry and mesh capability in a modular Monte Carlo toolkit with computational science technology that is in use in related reactor simulation software development. Coupling this capability with production-scale Monte Carlo radiation transport codes can provide advanced and extensible test-beds for these developments. Continuous energy Monte Carlo methods are generally considered to be the most accurate computational tool for simulating radiation transport in complex geometries, particularly neutron transport in reactors. Nevertheless, there are several limitations for their use in reactor analysis. Most significantly, there is a trade-off between the fidelity of results in phase space, statistical accuracy, and the amount of computer time required for simulation. Consequently, to achieve an acceptable level of statistical convergence in high-fidelity results required for modern coupled multi-physics analysis, the required computer time makes Monte Carlo methods prohibitive for design iterations and detailed whole-core analysis. More subtly, the statistical uncertainty is typically not uniform throughout the domain, and the simulation quality is limited by the regions with the largest statistical uncertainty. In addition, the formulation of neutron scattering laws in continuous energy Monte Carlo methods makes it difficult to calculate adjoint neutron fluxes required to properly determine important reactivity parameters. Finally, most Monte Carlo codes available for reactor analysis have relied on orthogonal hexahedral grids for tallies that do not conform to the geometric boundaries and are thus generally not well
Hybrid Monte-Carlo method for ICF calculations
International Nuclear Information System (INIS)
Clouet, J.F.; Samba, G.
2003-01-01
Numerical simulation of Inertial Confinement Fusion targets in indirect drive requires an accurate description of the radiation transport flow. Laser energy is first converted to X-rays in the gold wall and then transferred to the fusion target through a hohlraum filled with gas. The emissive region is moving in the gold wall, which is rapidly expanding into the hohlraum, so that the resolution of the radiative transfer equations has to be coupled with the hydrodynamic motion. Scientific computing is actually the only tool for an accurate design of ICF targets: one of the difficulties is to compute the non-isotropic irradiation on the capsule and to control it by an appropriate balance between the energies of the different laser beams. Hence an approximate description of radiation transport is not adequate and a transport method has to be chosen. On the other hand, transport methods are known to be more or less inefficient in optically thick regions: for instance in the gold wall before it is sufficiently heated and ablated to become optically thin. In these regions, the diffusion approximation of the transfer equations is an accurate description of the physical phenomenon; moreover, it is much cheaper to solve numerically than the full transport equations. This is why we developed a hybrid method for radiation transport in which the lower part of the energy spectrum is treated in the diffusion approximation whereas the higher part is treated by a transport method. We introduced the notion of a spectral cut-off to describe this separation between the two descriptions. The method is dynamic in the sense that the spectral cut-off evolves with time and space localization. The method has been introduced in our ICF code FCI2: this is a 2D radiation hydrodynamics code in cylindrical geometry which has been used for several years at the CEA for laser studies. It is a Lagrangian code with Arbitrary Lagrangian Eulerian capabilities, flux-limited thermal (electronic and ionic
Belo Monte hydropower project: current studies; AHE Belo Monte: os estudos atuais
Energy Technology Data Exchange (ETDEWEB)
Figueira Netto, Carlos Alberto de Moya [CNEC Engenharia S.A., Sao Paulo, SP (Brazil); Rezende, Paulo Fernando Vieira Souto [Centrais Eletricas Brasileiras S.A. (ELETROBRAS), Rio de Janeiro, RJ (Brazil)
2008-07-01
This article presents the evolution of the studies of the Belo Monte Hydro Power Project (HPP), from the initial inventory studies of the Xingu River in 1979 to the current studies concluding the Technical, Economic and Environmental Feasibility Studies of the Belo Monte HPP, as authorized by the Brazilian National Congress. The current studies characterize the Belo Monte HPP with an installed capacity of 11,181.3 MW (20 units of 550 MW in the main power house and 7 units of 25.9 MW in the additional power house), connected to the Brazilian Interconnected Power Grid, allowing it to generate 4,796 average MW of firm energy without depending on any flow-rate regularization of the upstream Xingu River, while flooding only 441 km², of which approximately 200 km² correspond to the normal annual wet-season flooding of the Xingu River. (author)
Guideline of Monte Carlo calculation. Neutron/gamma ray transport simulation by Monte Carlo method
2002-01-01
This report condenses basic theories and advanced applications of neutron/gamma-ray transport calculations in many fields of nuclear energy research. Chapters 1 through 5 treat the historical progress of Monte Carlo methods, general issues of variance reduction techniques, and the cross-section libraries used in continuous-energy Monte Carlo codes. In chapter 6, the following issues are discussed: fusion benchmark experiments, design of ITER, experiment analyses of the fast critical assembly, core analyses of JMTR, simulation of a pulsed neutron experiment, core analyses of HTTR, duct streaming calculations, bulk shielding calculations, and neutron/gamma-ray transport calculations of the Hiroshima atomic bomb. Chapters 8 and 9 treat function enhancements of the MCNP and MVP codes, and the parallel processing of Monte Carlo calculations, respectively. Important references are attached at the end of this report.
Clinical dosimetry in photon radiotherapy. A Monte Carlo based investigation
International Nuclear Information System (INIS)
Wulff, Joerg
2010-01-01
-Attix theory with corresponding perturbation factors is valid. A further investigation of these conditions when measuring dose profiles was used to determine the type of detector with minimal change in response in regions of charged-particle disequilibrium and high dose gradients. In terms of penumbra broadening, radiochromic film shows the smallest deviation from dose to water. Monte Carlo simulations will replace or at least extend the existing data in clinical dosimetry protocols in order to reduce the uncertainty in radiotherapy. For corrections under non-reference conditions, as occurring in modern radiotherapy techniques, Monte Carlo calculations will be a crucial part. This work and the developed methods accordingly form an important step towards reduced uncertainties in radiotherapy for cancer treatment.
Comparison of calculations of a reflected reactor with diffusion, SN and Monte Carlo codes
International Nuclear Information System (INIS)
McGregor, B.
1975-01-01
A diffusion theory code, POW, was compared with a Monte Carlo transport theory code, KENO, for the calculation of a small C/235U cylindrical core with a graphite reflector. The calculated multiplication factors were in good agreement but differences were noted in region-averaged group fluxes. A one-dimensional spherical geometry was devised to approximate cylindrical geometry. Differences similar to those already observed were noted when the region-averaged fluxes from a diffusion theory (POW) calculation were compared with an SN transport theory (ANAUSN) calculation for the spherical model. Calculations made with SN and Monte Carlo transport codes were in good agreement. It was concluded that observed flux differences were attributable to the POW code, and were not inconsistent with inherent diffusion theory approximations. (author)
Implicit Monte Carlo methods and non-equilibrium Marshak wave radiative transport
International Nuclear Information System (INIS)
Lynch, J.E.
1985-01-01
Two enhancements to the Fleck implicit Monte Carlo method for radiative transport are described, for use in transparent and opaque media respectively. The first introduces a spectral mean cross section, which applies to pseudoscattering in transparent regions with a high frequency incident spectrum. The second provides a simple Monte Carlo random walk method for opaque regions, without the need for a supplementary diffusion equation formulation. A time-dependent transport Marshak wave problem of radiative transfer, in which a non-equilibrium condition exists between the radiation and material energy fields, is then solved. These results are compared to published benchmark solutions and to new discrete ordinate S-N results, for both spatially integrated radiation-material energies versus time and to new spatially dependent temperature profiles. Multigroup opacities, which are independent of both temperature and frequency, are used in addition to a material specific heat which is proportional to the cube of the temperature. 7 refs., 4 figs
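The random-walk treatment of opaque regions rests on diffusion being the continuum limit of a random walk. A minimal sketch of that correspondence (step size, time step and walker counts are illustrative assumptions, not the Fleck scheme itself):

```python
import math
import random

random.seed(42)

# Unbiased 1D random walk: a step of size s every dt. Its continuum limit
# is a diffusion process with coefficient D = s**2 / (2 * dt), so the
# mean-square displacement after time t should approach 2 * D * t.
s, dt = 0.1, 0.01
D = s * s / (2 * dt)
n_steps, n_walkers = 100, 20_000
t = n_steps * dt

msd = 0.0
for _ in range(n_walkers):
    x = 0.0
    for _ in range(n_steps):
        x += s if random.random() < 0.5 else -s
    msd += x * x
msd /= n_walkers  # should be close to 2 * D * t
```

With these parameters the diffusion prediction 2*D*t equals the exact random-walk value n_steps*s², so the Monte Carlo estimate agrees with it up to statistical noise.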
Statistical estimation Monte Carlo for unreliability evaluation of highly reliable system
International Nuclear Information System (INIS)
Xiao Gang; Su Guanghui; Jia Dounan; Li Tianduo
2000-01-01
Based on analog Monte Carlo simulation, statistical Monte Carlo methods for the unreliability evaluation of highly reliable systems are constructed, including a direct statistical estimation Monte Carlo method and a weighted statistical estimation Monte Carlo method. The basic element is given, and the statistical estimation Monte Carlo estimators are derived. The direct Monte Carlo simulation method, the bounding-sampling method, the forced-transitions Monte Carlo method, direct statistical estimation Monte Carlo and weighted statistical estimation Monte Carlo are used to evaluate the unreliability of the same system. By comparison, the weighted statistical estimation Monte Carlo estimator has the smallest variance and the highest computational efficiency.
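As a hedged illustration of why weighted estimators matter for highly reliable systems, the sketch below contrasts an analog estimator with an importance-sampled (weighted) one for a small failure probability; the unit-rate exponential lifetime model and all numbers are assumptions for the example, not the paper's system:

```python
import math
import random

random.seed(1)

# Unreliability p = P(T > a) for a unit-rate exponential lifetime T and
# a = 15, so p = exp(-15) ~ 3.1e-7: far too rare for analog sampling
# at modest sample sizes.
a, n = 15.0, 200_000

# Analog estimator: almost always scores zero hits at this sample size.
analog = sum(random.expovariate(1.0) > a for _ in range(n)) / n

# Weighted estimator: draw T from a slower exponential (rate mu) and
# carry the likelihood ratio f(t)/g(t) as a weight on each hit.
mu = 1.0 / a
total = 0.0
for _ in range(n):
    t = random.expovariate(mu)
    if t > a:
        total += math.exp(-t) / (mu * math.exp(-mu * t))
total_weighted = total / n
```

The weighted estimator recovers p to within a few percent here, while the analog one is essentially useless at this sample size.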
A Monte Carlo simulation of neon isotope separation in a DC discharge through a narrow capillary
International Nuclear Information System (INIS)
Niroshi Akatsuka; Masaaki Suzuki
1999-01-01
A numerical simulation was undertaken of neon isotope separation in a DC arc discharge through a narrow capillary. The mass transport of neutral particles as well as ions was treated by the direct simulation Monte Carlo (DSMC) method. The numerical results qualitatively agreed with existing experimental ones concerning not only the isotope separation phenomena, but also the pressure difference between the region of the anode and that of the cathode.
Monte Carlo simulation of medical linear accelerator using primo code
International Nuclear Information System (INIS)
Omer, Mohamed Osman Mohamed Elhasan
2014-12-01
The use of Monte Carlo simulation has become very important in the medical field, especially for calculations in radiotherapy. Various Monte Carlo codes have been developed to simulate the interactions of particles and photons with matter. One of these codes is PRIMO, which simulates radiation transport from the primary electron source of a linac to estimate the absorbed dose in a water phantom or computerized tomography (CT) volume. PRIMO is based on the PENELOPE Monte Carlo code. Measurements of the 6 MV photon beam PDD and profiles were made for an Elekta Precise linear accelerator at the Radiation and Isotopes Center Khartoum, using a computerized Blue water phantom and a CC13 ionization chamber. The Accept software was used to control the phantom and to measure and verify the dose distribution. An Elekta linac from the list of available linacs in PRIMO was tuned to model the Elekta Precise linear accelerator. Beam parameters of 6.0 MeV initial electron energy, 0.20 MeV energy FWHM and 0.20 cm focal-spot FWHM were used, and an error of 4% between calculated and measured curves was found. The depth of maximum dose (Zmax) in the buildup region was 1.40 cm, and homogeneous crossline and inline profiles were acquired. A number of studies were done to verify the model's usability. One of them examined the effect of the number of histories on the accuracy of the simulation and on the resulting profile for the same beam parameters: the effect was noticeable, and inaccuracies in the profile were reduced by increasing the number of histories. Another study examined the effect of side-step errors on the calculated dose, which was compared with the measured dose for the same setting. It was in the range of 2% for a 5 cm shift, but higher in the calculated dose because of the small difference between the tuned model and the measured dose curves. Future developments include simulating asymmetrical fields, calculating the dose distribution in a computerized tomography (CT) volume, and studying the effect of beam modifiers on beam profiles for both electron and photon beams. (Author)
Investigating the impossible: Monte Carlo simulations
International Nuclear Information System (INIS)
Kramer, Gary H.; Crowley, Paul; Burns, Linda C.
2000-01-01
Designing and testing new equipment can be an expensive and time-consuming process, or the desired performance characteristics may preclude its construction due to technological shortcomings. Cost may also prevent equipment from being purchased for other scenarios to be tested. An alternative is to use Monte Carlo simulations to make the investigations. This presentation exemplifies how Monte Carlo code calculations can be used to fill the gap. An example is given for the investigation of two sizes of germanium detector (70 mm and 80 mm diameter) at four different crystal thicknesses (15, 20, 25, and 30 mm), with predictions of how the size affects the counting efficiency and the Minimum Detectable Activity (MDA). The Monte Carlo simulations have shown that detector efficiencies can be adequately modelled using photon transport if the data are used to investigate trends. The investigation of the effect of detector thickness on the counting efficiency has shown that thickness for a fixed-diameter detector of either 70 mm or 80 mm is unimportant up to 60 keV. At higher photon energies, the counting efficiency begins to decrease as the thickness decreases, as expected. The simulations predict that the MDA of the 70 mm and 80 mm diameter detectors does not differ by more than a factor of 1.15 at 17 keV or 1.2 at 60 keV when comparing detectors of equivalent thicknesses. The MDA is slightly increased at 17 keV, and rises by about 52% at 660 keV, when the thickness is decreased from 30 mm to 15 mm. One could conclude from this information that the extra cost associated with the larger-area Ge detectors may not be justified for the slight improvement predicted in the MDA. (author)
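The thickness trend can be mimicked with a toy photon-attenuation model. This sketch assumes normally incident photons and purely illustrative attenuation coefficients (not tabulated Ge data): interaction depths are sampled from the exponential attenuation law, and a photon counts if it interacts inside the crystal.

```python
import math
import random

random.seed(7)

def intrinsic_efficiency(mu, thickness_mm, n=50_000):
    """Fraction of normally incident photons interacting in the crystal.
    Interaction depth is sampled as d = -ln(U)/mu (exponential attenuation)."""
    hits = sum(-math.log(1.0 - random.random()) / mu < thickness_mm
               for _ in range(n))
    return hits / n

# Illustrative (not tabulated) attenuation coefficients, per mm of Ge:
mu_low_e, mu_high_e = 1.0, 0.05   # "low" vs "high" photon energy

eff_low_15 = intrinsic_efficiency(mu_low_e, 15)
eff_low_30 = intrinsic_efficiency(mu_low_e, 30)
eff_high_15 = intrinsic_efficiency(mu_high_e, 15)
eff_high_30 = intrinsic_efficiency(mu_high_e, 30)
```

At the "low" energy both thicknesses stop essentially every photon, reproducing the abstract's observation that thickness is unimportant there; at the "high" energy the thicker crystal is clearly more efficient.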
International Nuclear Information System (INIS)
Lauber, Matthias; Baeyens, Bart; Bradbury, Michael H.
2000-12-01
Opalinus Clay is currently under investigation as a potential host rock for the disposal of high level and long-lived intermediate radioactive waste. A thorough physico-chemical characterisation was carried out on a bore core sample from the underground rock laboratory Mont Terri (Canton Jura). The results of these investigations indicate that the major characteristics (mineralogy, cation exchange capacity, cation occupancies, selectivity coefficients, chloride and sulphate inventories) were very similar to a different core sample, previously used for pore water modelling studies. It was concluded that the pore water compositions derived in the earlier studies were reliable and could be used in this work. The organic matter which dissolved from the Opalinus Clay rock did not consist of humic or fulvic acids, and the concentration remaining in the liquid phase in the sorption experiments was < 0.5 ppm C. The organic matter is therefore considered to have little or no influence on the sorption behaviour of the studied radionuclides. Redox potential measurements of the Opalinus Clay/synthetic pore water system inside the glove boxes indicated anoxic conditions. The main focus of the experimental work presented here is on the sorption behaviour of Cs (I), Sr (II), Ni (II), Eu (III), Th (IV), Sn (IV) and Se (IV) on Opalinus Clay equilibrated with synthetic pore waters at pH 6.3 and 8. Sorption isotherms were measured for Cs, Ni, Eu, Th and Se. Single point data were measured for Sr and Sn. For all radionuclides studied the sorption kinetics were measured first. The times required to complete the sorption on the Opalinus Clay varied between one day for Th and one month for Ni and Se. Within the concentration ranges under study the uptake of Cs, Ni, Eu and Se on Opalinus Clay was non-linear, whereas for Th a linear sorption behaviour was observed. For Ni, Eu and Th the sorption increased with increasing pH. For Cs a pH independent sorption behaviour was observed. The concentration
Monte Carlo Simulation of an American Option
Directory of Open Access Journals (Sweden)
Gikiri Thuo
2007-04-01
We implement gradient estimation techniques for sensitivity analysis of option pricing which can be efficiently employed in Monte Carlo simulation. Using these techniques we can simultaneously obtain an estimate of the option value together with the estimates of sensitivities of the option value to various parameters of the model. After deriving the gradient estimates we incorporate them in an iterative stochastic approximation algorithm for pricing an option with early exercise features. We illustrate the procedure using an example of an American call option with a single dividend that is analytically tractable. In particular we incorporate estimates for the gradient with respect to the early exercise threshold level.
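For the simpler European case, a pathwise gradient estimate can be computed alongside the price in the same Monte Carlo loop; the Black-Scholes parameters below are illustrative assumptions, and the American early-exercise feature of the abstract is not modelled here:

```python
import math
import random

random.seed(123)

# European call under Black-Scholes dynamics.
s0, k, r, sigma, t, n = 100.0, 100.0, 0.05, 0.2, 1.0, 200_000

disc = math.exp(-r * t)
price_sum = delta_sum = 0.0
for _ in range(n):
    z = random.gauss(0.0, 1.0)
    st = s0 * math.exp((r - 0.5 * sigma**2) * t + sigma * math.sqrt(t) * z)
    price_sum += max(st - k, 0.0)
    # Pathwise gradient estimate: d(payoff)/dS0 = 1{ST > K} * ST / S0
    if st > k:
        delta_sum += st / s0
price = disc * price_sum / n   # option value estimate
delta = disc * delta_sum / n   # sensitivity (delta) estimate, same paths
```

Both estimates come from the same simulated paths, which is the efficiency point the abstract makes; they can be checked against the closed-form Black-Scholes value and delta.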
Monte Carlo study of the multiquark systems
International Nuclear Information System (INIS)
Kerbikov, B.O.; Polikarpov, M.I.; Zamolodchikov, A.B.
1986-01-01
Random walks have been used to calculate the energies of the ground states in systems of N=3, 6, 9, 12 quarks. Multiquark states with N>3 are unstable with respect to spontaneous dissociation into color singlet hadrons. The modified Green's function Monte Carlo algorithm, which proved to be simpler and much more accurate than conventional few-body methods, has been employed. In contrast to other techniques, the same equations are used for any number of particles, while the computer time increases only linearly with the number of particles
Monte Carlo eigenfunction strategies and uncertainties
International Nuclear Information System (INIS)
Gast, R.C.; Candelore, N.R.
1974-01-01
Comparisons of convergence rates for several possible eigenfunction source strategies led to the selection of the ''straight'' analog of the analytic power method as the source strategy for Monte Carlo eigenfunction calculations. To ensure a fair game strategy, the number of histories per iteration increases with increasing iteration number. The estimate of eigenfunction uncertainty is obtained from a modification of a proposal by D. B. MacMillan and involves only estimates of the usual purely statistical component of uncertainty and a serial correlation coefficient of lag one. 14 references. (U.S.)
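The analytic power method that the chosen source strategy mirrors can be sketched deterministically on a small "fission matrix"; the 2x2 matrix here is a hypothetical example whose dominant eigenvalue is 1.1:

```python
# Power iteration: repeatedly apply the operator to a source guess and
# renormalize; the normalization ratio converges to the dominant
# eigenvalue (the k-effective analogue) and the source to its eigenvector.
m = [[0.9, 0.3],
     [0.2, 0.8]]          # hypothetical fission matrix, eigenvalues 1.1 and 0.6
src = [1.0, 1.0]          # initial source guess
k = 1.0
for _ in range(200):
    nxt = [m[0][0] * src[0] + m[0][1] * src[1],
           m[1][0] * src[0] + m[1][1] * src[1]]
    k = sum(nxt) / sum(src)        # eigenvalue estimate for this iteration
    src = [x / k for x in nxt]     # renormalized source for the next iteration
```

The Monte Carlo analogue replaces the matrix-vector product with sampled histories, which is why per-iteration statistical noise and the serial correlation between iterations enter the uncertainty estimate.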
ATLAS Monte Carlo tunes for MC09
The ATLAS collaboration
2010-01-01
This note describes the ATLAS tunes of underlying event and minimum bias description for the main Monte Carlo generators used in the MC09 production. For the main shower generators, pythia and herwig (with jimmy), the MRST LO* parton distribution functions (PDFs) were used for the first time in ATLAS. Special studies on the performance of these, conceptually new, PDFs for high pt physics processes at LHC energies are presented. In addition, a tune of jimmy for CTEQ6.6 is presented, for use with MC@NLO.
Markov chains analytic and Monte Carlo computations
Graham, Carl
2014-01-01
Markov Chains: Analytic and Monte Carlo Computations introduces the main notions related to Markov chains and provides explanations on how to characterize, simulate, and recognize them. Starting with basic notions, this book leads progressively to advanced and recent topics in the field, allowing the reader to master the main aspects of the classical theory. This book also features: numerous exercises with solutions as well as extended case studies; a detailed and rigorous presentation of Markov chains with discrete time and state space; an appendix presenting probabilistic notions that are nec
Atomistic Monte Carlo simulation of lipid membranes
DEFF Research Database (Denmark)
Wüstner, Daniel; Sklenar, Heinz
2014-01-01
Biological membranes are complex assemblies of many different molecules, the analysis of which demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction into the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate for a concrete example how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches...
Monte Carlo method in radiation transport problems
International Nuclear Information System (INIS)
Dejonghe, G.; Nimal, J.C.; Vergnaud, T.
1986-11-01
In neutral radiation transport problems (neutrons, photons), two quantities are important: the flux in phase space and the density of particles. Solving the problem with the Monte Carlo method leads one, among other things, to build a statistical process (called the play) and to assign a numerical value to a variable x (this assignment is called the score). Sampling techniques are presented, and the necessity of biasing the play is demonstrated. A biased simulation is carried out. Finally, current developments (the rewriting of programs, for instance) are presented, motivated among other reasons by the advent of vector computing and by photon and neutron transport in media containing voids.
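A common textbook illustration of a play, a score, and play biasing is transmission through a slab in the 1D rod model; the sketch below compares analog absorption with survival biasing. All parameters are illustrative, and the hard weight cutoff stands in for a proper Russian roulette:

```python
import random

random.seed(5)

# 1D "rod model": particles move along +/-x through a slab of optical
# thickness tau; at each collision they either scatter (probability c,
# new direction +1 or -1 at random) or are absorbed.
tau, c, n = 2.0, 0.5, 200_000

def transmission(biased):
    leaked = 0.0
    for _ in range(n):
        x, mu, w = 0.0, 1.0, 1.0
        while True:
            x += mu * random.expovariate(1.0)   # distance to next collision
            if x >= tau:
                leaked += w                     # score: leaked out the far side
                break
            if x < 0.0:
                break                           # leaked backwards, score 0
            if biased:
                w *= c                          # survival biasing: never absorb,
                if w < 1e-6:                    # just reduce the weight (cutoff
                    break                       # in place of Russian roulette)
            elif random.random() >= c:
                break                           # analog play: absorbed
            mu = 1.0 if random.random() < 0.5 else -1.0
    return leaked / n

t_analog = transmission(False)
t_biased = transmission(True)
```

Both plays give unbiased scores for the same transmission probability (up to the tiny cutoff bias); the biasing changes only how the random walk is sampled and weighted.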
Mosaic crystal algorithm for Monte Carlo simulations
Seeger, P A
2002-01-01
An algorithm is presented for calculating reflectivity, absorption, and scattering of mosaic crystals in Monte Carlo simulations of neutron instruments. The algorithm uses multi-step transport through the crystal with an exact solution of the Darwin equations at each step. It relies on the kinematical model for Bragg reflection (with parameters adjusted to reproduce experimental data). For computation of thermal effects (the Debye-Waller factor and coherent inelastic scattering), an expansion of the Debye integral as a rapidly converging series of exponential terms is also presented. Any crystal geometry and plane orientation may be treated. The algorithm has been incorporated into the neutron instrument simulation package NISP. (orig.)
Monte Carlo methods to calculate impact probabilities
Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.
2014-09-01
Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention, since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find an excellent agreement between all methods in the general case, while there appear large differences in the immediate vicinity of the singularities. With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward
MBR Monte Carlo Simulation in PYTHIA8
Ciesielski, R.
We present the MBR (Minimum Bias Rockefeller) Monte Carlo simulation of (anti)proton-proton interactions and its implementation in the PYTHIA8 event generator. We discuss the total, elastic, and total-inelastic cross sections, and three contributions from diffraction dissociation processes that contribute to the latter: single diffraction, double diffraction, and central diffraction or double-Pomeron exchange. The event generation follows a renormalized-Regge-theory model, successfully tested using CDF data. Based on the MBR-enhanced PYTHIA8 simulation, we present cross-section predictions for the LHC and beyond, up to collision energies of 50 TeV.
Spectral functions from Quantum Monte Carlo
International Nuclear Information System (INIS)
Silver, R.N.
1989-01-01
In his review, D. Scalapino identified two serious limitations on the application of Quantum Monte Carlo (QMC) methods to the models of interest in high-Tc superconductivity (HTS). One is the ''sign problem''. The other is the ''analytic continuation problem'': how to extract electron spectral functions from QMC calculations of the imaginary-time Green's functions. Throughout this Symposium on HTS, the spectral functions have been the focus of the discussion of normal-state properties, including the applicability of band theory, Fermi liquid theory, marginal Fermi liquids, and novel non-perturbative states. 5 refs., 1 fig
An analysis of Monte Carlo tree search
CSIR Research Space (South Africa)
James, S
2017-02-01
Monte Carlo Tree Search (MCTS) is a family of directed search algorithms that has gained widespread attention in recent years. Despite the vast amount of research into MCTS, the effect of modifications...
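At the root of the tree, MCTS with the usual UCT rule reduces to the UCB1 bandit policy; a minimal sketch on three actions with hidden Bernoulli payoffs (all numbers illustrative, and the "rollout" is just a coin flip rather than a game playout):

```python
import math
import random

random.seed(0)

# UCT at the root: pick the child maximizing
#   mean reward + C * sqrt(ln(total visits) / child visits).
p_win = [0.2, 0.5, 0.8]          # hidden payoff probability of each action
visits = [0, 0, 0]
wins = [0.0, 0.0, 0.0]
C = math.sqrt(2)

for it in range(1, 5001):
    if it <= 3:
        a = it - 1               # visit each child once first
    else:
        a = max(range(3), key=lambda i:
                wins[i] / visits[i] + C * math.sqrt(math.log(it) / visits[i]))
    wins[a] += random.random() < p_win[a]   # simulate a playout, back up result
    visits[a] += 1

best = max(range(3), key=lambda i: visits[i])  # recommend most-visited child
```

The exploration term guarantees every action is tried occasionally, while the visit counts concentrate on the best action, which is the standard "most visited child" recommendation rule.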
Monte Carlo simulation for the transport beamline
Energy Technology Data Exchange (ETDEWEB)
Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania (Italy); Attili, A.; Marchetto, F.; Russo, G. [INFN, Sezione di Torino, Via P.Giuria, 1 10125 Torino (Italy); Cirrone, G. A. P.; Schillaci, F.; Scuderi, V. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Institute of Physics Czech Academy of Science, ELI-Beamlines project, Na Slovance 2, Prague (Czech Republic); Carpinelli, M. [INFN Sezione di Cagliari, c/o Dipartimento di Fisica, Università di Cagliari, Cagliari (Italy); Tramontana, A. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Università di Catania, Dipartimento di Fisica e Astronomia, Via S. Sofia 64, Catania (Italy)
2013-07-26
In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have been already simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement a MC based 3D treatment planning in order to optimize shots number and dose delivery.
Monte Carlo simulation for the transport beamline
International Nuclear Information System (INIS)
Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A.; Attili, A.; Marchetto, F.; Russo, G.; Cirrone, G. A. P.; Schillaci, F.; Scuderi, V.; Carpinelli, M.; Tramontana, A.
2013-01-01
In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have been already simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement a MC based 3D treatment planning in order to optimize shots number and dose delivery
Diffusion quantum Monte Carlo for molecules
International Nuclear Information System (INIS)
Lester, W.A. Jr.
1986-07-01
A quantum mechanical Monte Carlo method has been used for the treatment of molecular problems. The imaginary-time Schroedinger equation written with a shift in zero energy [E_T - V(R)] can be interpreted as a generalized diffusion equation with a position-dependent rate or branching term. Since diffusion is the continuum limit of a random walk, one may simulate the Schroedinger equation with a function psi (note, not psi^2) as a density of ''walks.'' The walks undergo an exponential birth and death as given by the rate term. 16 refs., 2 tabs
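A minimal realization of this idea for the 1D harmonic oscillator (hbar = m = omega = 1, ground-state energy 1/2), using a textbook-style population-control scheme rather than the paper's molecular algorithm; time step, population target and feedback constant are assumptions:

```python
import math
import random

random.seed(11)

# Pure-diffusion DMC: walkers diffuse with Gaussian steps and branch with
# rate exp(-dt*(V - e_t)); the trial energy e_t is anchored to the
# walker-averaged potential plus a population-control term, and its
# long-run average estimates the ground-state energy.
def V(x):
    return 0.5 * x * x

dt, n_target, n_steps, alpha = 0.01, 500, 3000, 1.0
walkers = [random.uniform(-1.0, 1.0) for _ in range(n_target)]
e_t = sum(V(x) for x in walkers) / len(walkers)
history = []

for step in range(n_steps):
    new = []
    for x in walkers:
        x += random.gauss(0.0, math.sqrt(dt))       # diffusion move
        w = math.exp(-dt * (V(x) - e_t))            # branching rate
        for _ in range(int(w + random.random())):   # stochastic birth/death
            new.append(x)
    walkers = new[:2 * n_target] or [0.0]           # cap growth, avoid extinction
    v_bar = sum(V(x) for x in walkers) / len(walkers)
    e_t = v_bar + alpha * (1.0 - len(walkers) / n_target)
    if step >= n_steps // 2:                        # discard equilibration
        history.append(e_t)

e0 = sum(history) / len(history)                    # estimate of E0 = 0.5
```

The walker density converges to psi (not psi squared), exactly as the abstract notes, and the birth/death step implements the exponential rate term.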
Monte Carlo modelling for neutron guide losses
International Nuclear Information System (INIS)
Cser, L.; Rosta, L.; Toeroek, Gy.
1989-09-01
In modern research reactors, neutron guides are commonly used for beam conducting. The neutron guide is a well polished or equivalently smooth glass tube covered inside by a sputtered or evaporated film of natural Ni or the 58Ni isotope, in which the neutrons are totally reflected. A Monte Carlo calculation was carried out to establish the real efficiency and the spectral as well as spatial distribution of the neutron beam at the end of a glass mirror guide. The losses caused by mechanical inaccuracy and mirror quality were considered, and the effects due to the geometrical arrangement were analyzed. (author) 2 refs.; 2 figs
OGRE, Monte-Carlo System for Gamma Transport Problems
International Nuclear Information System (INIS)
1984-01-01
1 - Nature of physical problem solved: The OGRE programme system was designed to calculate, by Monte Carlo methods, any quantity related to gamma-ray transport. The system is represented by two examples - OGRE-P1 and OGRE-G. The OGRE-P1 programme is a simple prototype which calculates dose rate on one side of a slab due to a plane source on the other side. The OGRE-G programme, a prototype of a programme utilizing a general-geometry routine, calculates dose rate at arbitrary points. A very general source description in OGRE-G may be employed by reading a tape prepared by the user. 2 - Method of solution: Case histories of gamma rays in the prescribed geometry are generated and analyzed to produce averages of any desired quantity which, in the case of the prototypes, are gamma-ray dose rates. The system is designed to achieve generality by ease of modification. No importance sampling is built into the prototypes, a very general geometry subroutine permits the treatment of complicated geometries. This is essentially the same routine used in the O5R neutron transport system. Boundaries may be either planes or quadratic surfaces, arbitrarily oriented and intersecting in arbitrary fashion. Cross section data is prepared by the auxiliary master cross section programme XSECT which may be used to originate, update, or edit the master cross section tape. The master cross section tape is utilized in the OGRE programmes to produce detailed tables of macroscopic cross sections which are used during the Monte Carlo calculations. 3 - Restrictions on the complexity of the problem: Maximum cross-section array information may be estimated by a given formula for a specific problem. The number of regions must be less than or equal to 50
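The quadratic-surface boundary treatment such a general-geometry routine relies on amounts to solving a quadratic for the flight distance and keeping the smallest positive root; a sketch for the sphere case (an illustration of the principle, not the O5R routine itself):

```python
import math

# Distance along a ray o + t*d to a sphere |p - c|^2 = r^2: substitute the
# ray into the surface equation and solve the quadratic a*t^2 + b*t + q = 0.
def sphere_distance(o, d, c, r):
    oc = [o[i] - c[i] for i in range(3)]
    a = sum(d[i] * d[i] for i in range(3))
    b = 2.0 * sum(oc[i] * d[i] for i in range(3))
    q = sum(oc[i] * oc[i] for i in range(3)) - r * r
    disc = b * b - 4 * a * q
    if disc < 0:
        return None                      # ray misses the surface
    roots = [(-b - math.sqrt(disc)) / (2 * a),
             (-b + math.sqrt(disc)) / (2 * a)]
    pos = [t for t in roots if t > 1e-12]   # ignore the surface we sit on
    return min(pos) if pos else None        # nearest boundary ahead

# Particle at the origin flying along +x toward a unit sphere at (5, 0, 0):
t = sphere_distance((0, 0, 0), (1, 0, 0), (5, 0, 0), 1.0)
print(t)  # 4.0
```

Planes are the degenerate case with a = 0, and a tracking routine simply takes the minimum such distance over all bounding surfaces of the current region.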
Variation in computer time with geometry prescription in monte carlo code KENO-IV
International Nuclear Information System (INIS)
Gopalakrishnan, C.R.
1988-01-01
In most studies, the Monte Carlo criticality code KENO-IV has been compared with other Monte Carlo codes, but evaluation of its performance with different box descriptions has not been done so far. In Monte Carlo computations, any fractional savings of computing time is highly desirable. Variation in computation time with box description in KENO for two different fast reactor fuel subassemblies of FBTR and PFBR is studied. The k-eff of an infinite array of fuel subassemblies is calculated by modelling the subassemblies in two different ways (i) multi-region, (ii) multi-box. In addition to these two cases, excess reactivity calculations of FBTR are also performed in two ways to study this effect in a complex geometry. It is observed that the k-eff values calculated by multi-region and multi-box models agree very well. However the increase in computation time from the multi-box to the multi-region is considerable, while the difference in computer storage requirements for the two models is negligible. This variation in computing time arises from the way the neutron is tracked in the two cases. (author)
International Nuclear Information System (INIS)
Wagner, J. C.; Blakeman, E. D.; Peplow, D. E.
2009-01-01
This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is a variation on the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for some time to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain approximately uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented in the ADVANTG/MCNP framework and has been fully automated within the MAVRIC sequence of SCALE 6. Results of the application of the method to enabling the calculation of dose rates throughout an entire full-scale pressurized-water reactor facility are presented and discussed. (authors)
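The weight windows that FW-CADIS parameterizes act by splitting heavy particles and playing Russian roulette with light ones while preserving expected weight; a generic sketch of that mechanism (window bounds and weights are illustrative, not FW-CADIS output):

```python
import random

# Weight-window mechanics: a particle of weight w entering a region with
# window [w_low, w_high] is split if too heavy, rouletted if too light,
# and left alone if inside the window. Expected total weight is preserved.
def apply_window(w, w_low, w_high, w_survive):
    if w > w_high:                       # split into m particles of weight w/m
        m = int(w / w_high) + 1
        return [w / m] * m
    if w < w_low:                        # roulette: survive with prob w/w_survive
        if random.random() < w / w_survive:
            return [w_survive]
        return []
    return [w]

# Check expected-weight preservation over many random input weights:
random.seed(4)
n = 200_000
total_in = total_out = 0.0
for _ in range(n):
    w = random.uniform(0.001, 5.0)
    total_in += w
    total_out += sum(apply_window(w, 0.25, 1.0, 0.5))
```

FW-CADIS's contribution is choosing the space- and energy-dependent window bounds (from the forward-weighted adjoint) so that the resulting particle density, and hence the statistical uncertainty, is roughly uniform over the tally regions.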
Diffusion Monte Carlo approach versus adiabatic computation for local Hamiltonians
Bringewatt, Jacob; Dorland, William; Jordan, Stephen P.; Mink, Alan
2018-02-01
Most research regarding quantum adiabatic optimization has focused on stoquastic Hamiltonians, whose ground states can be expressed with only real non-negative amplitudes and for which destructive interference is thus not manifest. This raises the question of whether classical Monte Carlo algorithms can efficiently simulate quantum adiabatic optimization with stoquastic Hamiltonians. Recent results have given counterexamples in which path-integral and diffusion Monte Carlo fail to do so. However, most adiabatic optimization algorithms, such as for solving MAX-k-SAT problems, use k-local Hamiltonians, whereas our previous counterexample for diffusion Monte Carlo involved n-body interactions. Here we present a 6-local counterexample which demonstrates that even for these local Hamiltonians there are cases where diffusion Monte Carlo cannot efficiently simulate quantum adiabatic optimization. Furthermore, we perform empirical testing of diffusion Monte Carlo on a standard well-studied class of permutation-symmetric tunneling problems and similarly find large advantages for quantum optimization over diffusion Monte Carlo.
Monte Carlo learning/biasing experiment with intelligent random numbers
International Nuclear Information System (INIS)
Booth, T.E.
1985-01-01
A Monte Carlo learning and biasing technique is described that does its learning and biasing in the random number space rather than the physical phase-space. The technique is probably applicable to all linear Monte Carlo problems, but no proof is provided here. Instead, the technique is illustrated with a simple Monte Carlo transport problem. Problems encountered, problems solved, and speculations about future progress are discussed. 12 refs
Discrete diffusion Monte Carlo for frequency-dependent radiative transfer
International Nuclear Information System (INIS)
Densmore, Jeffery D.; Thompson, Kelly G.; Urbatsch, Todd J.
2011-01-01
Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique. (author)
Monte Carlo criticality analysis for dissolvers with neutron poison
International Nuclear Information System (INIS)
Yu, Deshun; Dong, Xiufang; Pu, Fuxiang.
1987-01-01
Criticality analysis for dissolvers with neutron poison is given on the basis of the Monte Carlo method. In Monte Carlo calculations of thermal-neutron group parameters for fuel pieces, the neutron transport length is determined using the maximum cross-section approach. A set of related effective multiplication factors (k-eff) is calculated by the Monte Carlo method for the three cases. The numerical results are quite useful for the design and operation of this kind of dissolver in criticality safety analysis. (author)
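Sampling flight lengths with the maximum cross section is, in essence, what is elsewhere called majorant or Woodcock delta tracking: distances are drawn with the largest cross section in the system, and a collision at the sampled point is accepted with probability Σ(x)/Σ_max, otherwise the flight continues. A minimal 1-D sketch for a purely absorbing two-layer slab (the geometry and cross sections are invented for illustration, not taken from the paper):

```python
import random

def transmit_delta(n, sigmas, widths, seed=2):
    """Transmission probability through a 1-D multilayer, purely absorbing
    slab, sampling path lengths with the majorant (maximum) cross section
    and rejecting virtual collisions."""
    rng = random.Random(seed)
    sig_max = max(sigmas)
    edges = [0.0]
    for w in widths:
        edges.append(edges[-1] + w)
    total_len = edges[-1]
    passed = 0
    for _ in range(n):
        x = 0.0
        while True:
            x += rng.expovariate(sig_max)        # flight with the majorant
            if x >= total_len:
                passed += 1                      # escaped through the slab
                break
            for i in range(len(widths)):         # locate the local layer
                if edges[i] <= x < edges[i + 1]:
                    sig = sigmas[i]
                    break
            if rng.random() < sig / sig_max:     # real (absorbing) collision
                break
            # else: virtual collision, keep flying
    return passed / n
```

For layers with Σ = 0.5 and 2.0 (each 1 mean-free-path-unit thick) the exact transmission is exp(-2.5) ≈ 0.0821, which the sketch reproduces without ever tracking layer boundaries explicitly.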
Temperature variance study in Monte-Carlo photon transport theory
International Nuclear Information System (INIS)
Giorla, J.
1985-10-01
We study different Monte-Carlo methods for solving radiative transfer problems, and particularly Fleck's Monte-Carlo method. We first give the different time-discretization schemes and the corresponding stability criteria. Then we write the temperature variance as a function of the variances of temperature and absorbed energy at the previous time step. Finally, we obtain stability criteria for the Monte-Carlo method in the stationary case.
Le Comte de Monte Cristo: da literatura ao cinema
Caravela, Natércia Murta Silva
2008-01-01
This dissertation discusses the dialogue established between literature and cinema in the treatment of the main character – a betrayed man who takes cruel revenge on his enemies – in the literary work Le Comte de Monte-Cristo by Alexandre Dumas and in the three film adaptations chosen: Le Comte de Monte-Cristo by Robert Vernay (1943), The Count of Monte Cristo by David Greene (1975) and The Count of Monte Cristo by Kevin Reynolds (2002). The project centres on the analysis of the ...
Odd-flavor Simulations by the Hybrid Monte Carlo
Takaishi, Tetsuya; Takaishi, Tetsuya; De Forcrand, Philippe
2001-01-01
The standard hybrid Monte Carlo algorithm is known to simulate only even numbers of quark flavors in QCD. Simulations with an odd number of flavors, however, can also be performed within the hybrid Monte Carlo framework, where the inverse of the fermion matrix is approximated by a polynomial. In this exploratory study we perform three-flavor QCD simulations. We compare the hybrid Monte Carlo algorithm with the R-algorithm, which also simulates odd numbers of flavors but has step-size errors. We find that results from our hybrid Monte Carlo algorithm are in agreement with those from the R-algorithm obtained at very small step size.
Wielandt acceleration for MCNP5 Monte Carlo eigenvalue calculations
International Nuclear Information System (INIS)
Brown, F.
2007-01-01
Monte Carlo criticality calculations use the power iteration method to determine the eigenvalue (k-eff) and eigenfunction (fission source distribution) of the fundamental mode. A recently proposed method for accelerating convergence of the Monte Carlo power iteration using Wielandt's method has been implemented in a test version of MCNP5. The method is shown to provide dramatic improvements in convergence rates and to greatly reduce the possibility of false convergence assessment. The method is effective and efficient, improving the Monte Carlo figure-of-merit for many problems. In addition, the method should eliminate most of the underprediction bias in confidence intervals for Monte Carlo criticality calculations. (authors)
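The effect of Wielandt's shift can be seen on a toy 2×2 matrix eigenproblem (a stand-in for the transport operator, not the MCNP5 implementation; the matrix and shift are invented): plain power iteration converges with the dominance ratio k2/k1, while iterating on the shifted, inverted operator (A - k_w I)^-1, with k_w slightly above the fundamental eigenvalue, shrinks that ratio dramatically.

```python
def solve_shifted(A, shift, b):
    """Solve (A - shift*I) x = b for a 2x2 matrix A by Cramer's rule."""
    a, c = A[0][0] - shift, A[0][1]
    d, e = A[1][0], A[1][1] - shift
    det = a * e - c * d
    return [(e * b[0] - c * b[1]) / det, (a * b[1] - d * b[0]) / det]

def power_iteration(A, tol=1e-9, max_iter=1000):
    """Plain power iteration; returns (eigenvalue, iterations used)."""
    x, k_old = [1.0, 0.5], 0.0
    for n in range(1, max_iter + 1):
        y = [A[0][0] * x[0] + A[0][1] * x[1],
             A[1][0] * x[0] + A[1][1] * x[1]]
        k = max(abs(v) for v in y)           # eigenvalue estimate
        x = [v / k for v in y]
        if abs(k - k_old) < tol:
            return k, n
        k_old = k
    return k, max_iter

def wielandt_iteration(A, k_w=1.05, tol=1e-9, max_iter=1000):
    """Shifted inverse iteration; recovers k from y ~ x/(k - k_w)."""
    x, k_old = [1.0, 0.5], 0.0
    for n in range(1, max_iter + 1):
        y = solve_shifted(A, k_w, x)
        k = k_w + x[0] / y[0]                # eigenvalue estimate
        m = max(abs(v) for v in y)
        x = [v / m for v in y]
        if abs(k - k_old) < tol:
            return k, n
        k_old = k
    return k, max_iter
```

For A = [[0.9, 0.1], [0.1, 0.9]] (eigenvalues 1.0 and 0.8) the shifted iteration needs roughly an order of magnitude fewer iterations, mirroring the reduced dominance ratio |k1 - k_w|/|k2 - k_w|.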
Monte Carlo shielding analyses using an automated biasing procedure
International Nuclear Information System (INIS)
Tang, J.S.; Hoffman, T.J.
1988-01-01
A systematic and automated approach for biasing Monte Carlo shielding calculations is described. In particular, adjoint fluxes from a one-dimensional discrete ordinates calculation are used to generate biasing parameters for a Monte Carlo calculation. The entire procedure of adjoint calculation, biasing parameters generation, and Monte Carlo calculation has been automated. The automated biasing procedure has been applied to several realistic deep-penetration shipping cask problems. The results obtained for neutron and gamma-ray transport indicate that with the automated biasing procedure Monte Carlo shielding calculations of spent-fuel casks can be easily performed with minimum effort and that accurate results can be obtained at reasonable computing cost
Monte Carlo techniques for analyzing deep-penetration problems
International Nuclear Information System (INIS)
Cramer, S.N.; Gonnord, J.; Hendricks, J.S.
1986-01-01
Current methods and difficulties in Monte Carlo deep-penetration calculations are reviewed, including statistical uncertainty and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multigroup Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications
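The flavor of geometry splitting with Russian roulette can be conveyed by a small example (a pure absorber with invented parameters, not any production code): particles crossing successive depth surfaces are split two-for-one with halved weight, keeping the deep-penetration tally populated, while particles whose weight falls below a cutoff are rouletted.

```python
import random

def split_transmission(histories, depth=10.0, n_surf=9, w_cut=0.01, seed=3):
    """Estimate transmission through `depth` mean free paths of a pure
    absorber (Sigma_t = 1) using 2-for-1 splitting at evenly spaced
    surfaces and Russian roulette below the weight cutoff."""
    rng = random.Random(seed)
    surfaces = [depth * (i + 1) / (n_surf + 1) for i in range(n_surf)]
    total = 0.0
    for _ in range(histories):
        bank = [(0.0, 1.0)]                    # (position, weight)
        while bank:
            x, w = bank.pop()
            if w < w_cut:                      # Russian roulette
                if rng.random() < 0.5:
                    w *= 2.0                   # survivor carries double weight
                else:
                    continue                   # killed
            x_new = x + rng.expovariate(1.0)   # exponential flight
            nxt = next((s for s in surfaces if s > x), None)
            if nxt is not None and x_new > nxt:
                # stop at the surface (valid by memorylessness) and split
                bank.append((nxt, w / 2))
                bank.append((nxt, w / 2))
            elif x_new >= depth:
                total += w                     # transmitted: tally the weight
            # otherwise the particle is absorbed
    return total / histories
```

The analog answer is exp(-10) ≈ 4.5×10^-5; an analog run of the same size would score only a few events, whereas the split population keeps the estimate statistically usable.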
Igo - A Monte Carlo Code For Radiotherapy Planning
International Nuclear Information System (INIS)
Goldstein, M.; Regev, D.
1999-01-01
The goal of radiation therapy is to deliver a lethal dose to the tumor while minimizing the dose to normal tissues and vital organs. To carry out this task, it is critical to calculate the 3-D dose delivered correctly. Monte Carlo transport methods (especially the adjoint Monte Carlo) have the potential to provide more accurate predictions of the 3-D dose than the currently used methods. IGO is a Monte Carlo code derived from the general Monte Carlo program MCNP, tailored specifically for calculating the effects of radiation therapy. This paper describes the IGO transport code, the PIGO interface and some preliminary results.
Quantum statistical Monte Carlo methods and applications to spin systems
International Nuclear Information System (INIS)
Suzuki, M.
1986-01-01
A short review is given of the quantum statistical Monte Carlo method based on the equivalence theorem that d-dimensional quantum systems are mapped onto (d+1)-dimensional classical systems. The convergence property of this approximate transformation is discussed in detail. Some applications of this general approach to quantum spin systems are reviewed. A new Monte Carlo method, the 'thermo field Monte Carlo method', is presented, which is an extension of the projection Monte Carlo method at zero temperature to finite temperatures
Applications of the Monte Carlo method in radiation protection
International Nuclear Information System (INIS)
Kulkarni, R.N.; Prasad, M.A.
1999-01-01
This paper gives a brief introduction to the application of the Monte Carlo method in radiation protection; an exhaustive review has not been attempted. The special advantage of the Monte Carlo method is brought out first. The fundamentals of the Monte Carlo method are then explained briefly, with special reference to two applications in radiation protection. Some current applications are briefly reported at the end as examples: medical radiation physics, microdosimetry, calculation of thermoluminescence intensity and probabilistic safety analysis. The limitations of the Monte Carlo method are also mentioned in passing. (author)
Multi-Index Monte Carlo (MIMC)
Haji Ali, Abdul Lateef
2016-01-06
We propose and analyze a novel Multi-Index Monte Carlo (MIMC) method for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The MIMC method is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Inspired by Giles's seminal work, instead of using first-order differences as in MLMC, we use in MIMC high-order mixed differences to reduce the variance of the hierarchical differences dramatically. Under standard assumptions on the convergence rates of the weak error, variance and work per sample, the optimal index set turns out to be of Total Degree (TD) type. When using such sets, MIMC yields new and improved complexity results, which are natural generalizations of Giles's MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence, O(TOL^-2).
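The telescoping estimator that MIMC generalizes can be sketched compactly. Below is a minimal MLMC example (not the authors' code; the SDE, payoff and sample counts are arbitrary illustrative choices): coupled coarse/fine Euler paths of a geometric Brownian motion share their Brownian increments, and the level corrections are summed.

```python
import math
import random

def euler_pair(rng, level, s0=1.0, r=0.05, sig=0.2, T=1.0):
    """One coupled (fine, coarse) Euler path pair for dS = r S dt + sig S dW;
    the coarse path consumes the summed fine Brownian increments."""
    nf = 2 ** level
    dt = T / nf
    sf = sc = s0
    dws = []
    for i in range(nf):
        dw = rng.gauss(0.0, math.sqrt(dt))
        sf += r * sf * dt + sig * sf * dw
        dws.append(dw)
        if level > 0 and i % 2 == 1:          # two fine steps = one coarse step
            sc += r * sc * (2 * dt) + sig * sc * (dws[-1] + dws[-2])
    return sf, (sc if level > 0 else None)

def mlmc_estimate(payoff, levels, samples, seed=4):
    """Telescoping MLMC sum: E[P_L] ~ E[P_0] + sum_l E[P_l - P_{l-1}]."""
    rng = random.Random(seed)
    est = 0.0
    for level, n in zip(levels, samples):
        acc = 0.0
        for _ in range(n):
            sf, sc = euler_pair(rng, level)
            acc += payoff(sf) - (payoff(sc) if sc is not None else 0.0)
        est += acc / n
    return est
```

For the linear payoff used in the test, E[S_T] = exp(rT) ≈ 1.0513 gives a check value; MIMC replaces the single level index by a multi-index and the first-order level differences by high-order mixed differences.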
Multi-Index Monte Carlo (MIMC)
Haji Ali, Abdul Lateef; Nobile, Fabio; Tempone, Raul
2016-01-01
We propose and analyze a novel Multi-Index Monte Carlo (MIMC) method for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The MIMC method is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Inspired by Giles's seminal work, instead of using first-order differences as in MLMC, we use in MIMC high-order mixed differences to reduce the variance of the hierarchical differences dramatically. Under standard assumptions on the convergence rates of the weak error, variance and work per sample, the optimal index set turns out to be of Total Degree (TD) type. When using such sets, MIMC yields new and improved complexity results, which are natural generalizations of Giles's MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence, O(TOL^-2).
Multi-Index Monte Carlo (MIMC)
Haji Ali, Abdul Lateef; Nobile, Fabio; Tempone, Raul
2015-01-01
We propose and analyze a novel Multi-Index Monte Carlo (MIMC) method for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The MIMC method is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Inspired by Giles’s seminal work, instead of using first-order differences as in MLMC, we use in MIMC high-order mixed differences to reduce the variance of the hierarchical differences dramatically. Under standard assumptions on the convergence rates of the weak error, variance and work per sample, the optimal index set turns out to be of Total Degree (TD) type. When using such sets, MIMC yields new and improved complexity results, which are natural generalizations of Giles’s MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence.
International Nuclear Information System (INIS)
Ohta, Shigemi
1996-01-01
The Self-Test Monte Carlo (STMC) method resolves the main problems in using algebraic pseudo-random numbers for Monte Carlo (MC) calculations: that they can interfere with MC algorithms and lead to erroneous results, and that such an error often cannot be detected without a known exact solution. STMC is based on good randomness of about 10^10 bits available from physical noise or transcendental numbers like π = 3.14… Various bit modifiers are available to get more bits for applications that demand more than 10^10 random bits, such as lattice quantum chromodynamics (QCD). These modifiers are designed so that (a) each of them gives a bit sequence comparable in randomness to the original if used separately, and (b) their mutual interference when used jointly in a single MC calculation is adjustable. Intermediate data of the MC calculation itself are used to quantitatively test and adjust the mutual interference of the modifiers with respect to the MC algorithm. STMC is free of systematic error and gives reliable statistical errors. It can also be easily implemented on vector and parallel supercomputers. (author)
Algorithms for Monte Carlo calculations with fermions
International Nuclear Information System (INIS)
Weingarten, D.
1985-01-01
We describe a fermion Monte Carlo algorithm due to Petcher and the present author and another due to Fucito, Marinari, Parisi and Rebbi. For the first algorithm we estimate that the number of arithmetic operations required to evaluate a vacuum expectation value grows as N^11/m_q on an N^4 lattice with fixed periodicity in physical units and renormalized quark mass m_q. For the second algorithm the rate of growth is estimated to be N^8/m_q^2. Numerical experiments are presented comparing the two algorithms on a lattice of size 2^4. With a hopping constant K of 0.15 and β of 4.0 we find the number of operations for the second algorithm is about 2.7 times larger than for the first and about 13 000 times larger than for corresponding Monte Carlo calculations with a pure gauge theory. An estimate is given of the number of operations required for more realistic calculations by each algorithm on a larger lattice. (orig.)
Quantum Monte Carlo for atoms and molecules
International Nuclear Information System (INIS)
Barnett, R.N.
1989-11-01
The diffusion quantum Monte Carlo with fixed nodes (QMC) approach has been employed in studying energy eigenstates for 1-4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H2, LiH, Li2, and H2O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li2, and H2O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schroedinger equation were studied extensively. These parameters include the time step, renormalization time and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90-100% of the correlation energy) have been computed with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions
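The core diffusion-branching loop of DMC can be shown on a trivially simple system: the 1-D harmonic oscillator (not the molecular calculations above; walker count, time step and the population-control gain are arbitrary illustrative choices, and no importance sampling or fixed nodes are used). Walkers diffuse and branch against a reference energy that is steered to hold the population constant; the steered value averages to the ground-state energy, 0.5 in oscillator units.

```python
import math
import random

def dmc_harmonic(n0=300, dt=0.05, steps=1500, seed=5):
    """Crude diffusion Monte Carlo for the 1-D harmonic oscillator
    (V = x^2/2 in units hbar = m = omega = 1); the exact ground-state
    energy is 0.5.  A sketch only: no importance sampling."""
    rng = random.Random(seed)
    walkers = [0.0] * n0
    e_ref = 0.5                                       # trial reference energy
    trace = []
    for step in range(steps):
        new_walkers = []
        for x in walkers:
            x += rng.gauss(0.0, math.sqrt(dt))        # free diffusion
            weight = math.exp(-(0.5 * x * x - e_ref) * dt)
            copies = int(weight + rng.random())       # stochastic rounding
            new_walkers.extend([x] * min(copies, 3))  # cap runaway branching
        walkers = new_walkers
        # population control: nudge e_ref to keep roughly n0 walkers alive
        e_ref += 0.1 * (1.0 - len(walkers) / n0)
        if step >= steps // 2:                        # average after relaxation
            trace.append(e_ref)
    return sum(trace) / len(trace)
```

The growth estimator (the averaged reference energy) carries a small time-step bias of order dt; production codes extrapolate dt to zero and add trial-function importance sampling, as the abstract describes.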
Monte Carlo simulation of grain growth
Directory of Open Access Journals (Sweden)
Paulo Blikstein
1999-07-01
Understanding and predicting grain growth in metallurgy is meaningful. Monte Carlo methods have been used in computer simulations in many different fields of knowledge. Grain growth simulation using this method is especially attractive as the statistical behavior of the atoms is properly reproduced; microstructural evolution depends only on the real topology of the grains and not on any kind of geometric simplification. Computer simulation has the advantage of allowing the user to visualize the procedures graphically, even dynamically and in three dimensions. Single-phase alloy grain growth simulation was carried out by calculating the free energy of each atom in the lattice (with its present crystallographic orientation) and comparing this value to one calculated with a different random orientation. When the resulting free energy is lower than or equal to the initial value, the new orientation replaces the former. The measure of time is the Monte Carlo Step (MCS), which involves a series of trials throughout the lattice. A very close relationship between experimental and theoretical values of the grain growth exponent (n) was observed.
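The flip rule described in the abstract (adopt a different random orientation whenever the local boundary energy does not increase) fits in a few lines. Lattice size, the number of orientations Q and the MCS count below are arbitrary choices for illustration, not those of the paper:

```python
import random

def potts_grain_growth(L=32, q=16, mcs=30, seed=6):
    """Zero-temperature 2-D Potts grain growth on a periodic L x L lattice:
    a random site adopts a random new orientation whenever that does not
    increase its boundary energy (count of unlike nearest neighbours)."""
    rng = random.Random(seed)
    grid = [[rng.randrange(q) for _ in range(L)] for _ in range(L)]

    def energy(i, j, s):
        return sum(grid[(i + di) % L][(j + dj) % L] != s
                   for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

    for _ in range(mcs):                       # one MCS = L*L attempted flips
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            s_new = rng.randrange(q)
            if energy(i, j, s_new) <= energy(i, j, grid[i][j]):
                grid[i][j] = s_new
    return grid

def boundary_length(grid):
    """Total grain-boundary length, counted as unlike nearest-neighbour bonds."""
    L = len(grid)
    return sum(grid[i][j] != grid[(i + 1) % L][j] for i in range(L) for j in range(L)) \
         + sum(grid[i][j] != grid[i][(j + 1) % L] for i in range(L) for j in range(L))
```

Running a few tens of MCS visibly coarsens the structure: the total boundary length drops well below its value for the initial random orientation field, which is the signature of curvature-driven grain growth.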
Multi-Index Monte Carlo (MIMC)
Haji Ali, Abdul Lateef
2015-01-07
We propose and analyze a novel Multi-Index Monte Carlo (MIMC) method for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The MIMC method is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Inspired by Giles’s seminal work, instead of using first-order differences as in MLMC, we use in MIMC high-order mixed differences to reduce the variance of the hierarchical differences dramatically. Under standard assumptions on the convergence rates of the weak error, variance and work per sample, the optimal index set turns out to be of Total Degree (TD) type. When using such sets, MIMC yields new and improved complexity results, which are natural generalizations of Giles’s MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence.
Parallel Monte Carlo Search for Hough Transform
Lopes, Raul H. C.; Franqueira, Virginia N. L.; Reid, Ivan D.; Hobson, Peter R.
2017-10-01
We investigate the problem of line detection in digital image processing: in particular, how state-of-the-art algorithms behave in the presence of noise and whether CPU efficiency can be improved by the combination of a Monte Carlo Tree Search, hierarchical space decomposition, and parallel computing. The starting point of the investigation is the method introduced in 1962 by Paul Hough for detecting lines in binary images. Extended in the 1970s to the detection of space forms, what came to be known as the Hough Transform (HT) has been proposed, for example, in the context of track fitting in the LHC ATLAS and CMS projects. The Hough Transform transfers the problem of line detection into one of optimization of the peak in a vote-counting process over cells which contain the possible points of candidate lines. The detection algorithm can be computationally expensive both in the demands made upon the processor and on memory. Additionally, it can have reduced effectiveness in detection in the presence of noise. Our first contribution is an evaluation of the use of a variation of the Radon Transform as a way of improving the effectiveness of line detection in the presence of noise. Then, parallel algorithms for variations of the Hough Transform and the Radon Transform for line detection are introduced. An algorithm for Parallel Monte Carlo Search applied to line detection is also introduced. Their algorithmic complexities are discussed. Finally, implementations on multi-GPU and multicore architectures are discussed.
Monte Carlo simulation for radiographic applications
International Nuclear Information System (INIS)
Tillack, G.R.; Bellon, C.
2003-01-01
Standard radiography simulators are based on the attenuation law complemented by build-up factors (BUF) to describe the interaction of radiation with material. The BUF assumption implies that scattered radiation only reduces the contrast in radiographic images. This simplification holds for a wide range of applications, such as weld inspection, as known from practical experience. But only a detailed description of the different underlying interaction mechanisms can explain effects like mottling and others that every radiographer has experienced in practice. Monte Carlo models can handle the primary and secondary interaction mechanisms contributing to the image-formation process, such as photon interactions (absorption, incoherent and coherent scattering including electron-binding effects, pair production) and electron interactions (electron tracing including X-ray fluorescence and bremsstrahlung production). This opens up possibilities such as the separation of influencing factors and an understanding of the functioning of intensifying screens used in film radiography. The paper discusses the opportunities in applying the Monte Carlo method to investigate special features in radiography in terms of selected examples. (orig.)
Reactor perturbation calculations by Monte Carlo methods
International Nuclear Information System (INIS)
Gubbins, M.E.
1965-09-01
Whilst Monte Carlo methods are useful for reactor calculations involving complicated geometry, it is difficult to apply them to the calculation of perturbation worths because of the large amount of computing time needed to obtain good accuracy. Various ways of overcoming these difficulties are investigated in this report, with the problem of estimating absorbing control rod worths particularly in mind. As a basis for discussion a method of carrying out multigroup reactor calculations by Monte Carlo methods is described. Two methods of estimating a perturbation worth directly, without differencing two quantities of like magnitude, are examined closely but are passed over in favour of a third method based on a correlation technique. This correlation method is described, and demonstrated by a limited range of calculations for absorbing control rods in a fast reactor. In these calculations control rod worths of between 1% and 7% in reactivity are estimated to an accuracy better than 10% (3 standard errors) in about one hour's computing time on the English Electric KDF.9 digital computer. (author)
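The correlation idea, namely evaluating the perturbed and unperturbed systems with the same random histories so that their statistical errors cancel in the difference, can be shown on a toy integral (the integrand and the 5% perturbation are invented for illustration, not the report's reactor model):

```python
import math
import random

def perturbation_worth(n, s_ref=1.0, s_pert=1.05, seed=7):
    """Estimate the small difference between I(s) = integral_0^1 exp(-s x) dx
    at two nearby parameters, with and without correlated sampling."""
    rng = random.Random(seed)
    xs = [rng.random() for _ in range(n)]
    # correlated: the same sample points evaluate both systems, so the
    # common statistical noise cancels in the difference
    correlated = sum(math.exp(-s_pert * x) - math.exp(-s_ref * x) for x in xs) / n
    # independent: difference of two separately sampled noisy estimates
    ys = [rng.random() for _ in range(n)]
    independent = (sum(math.exp(-s_pert * y) for y in ys) / n
                   - sum(math.exp(-s_ref * x) for x in xs) / n)
    return correlated, independent
```

The exact worth is (1 - e^-1.05)/1.05 - (1 - e^-1) ≈ -0.013; the correlated estimator resolves it to well under a percent with modest sample sizes, whereas the independent difference carries the full noise of both runs, which is the difficulty the report's correlation technique addresses.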
Energy Technology Data Exchange (ETDEWEB)
Garcia, Claudio; Costa, Artur; Bittencourt, Euclides [TRANSPETRO - PETROBRAS Transporte, Rio de Janeiro, RJ (Brazil)
2005-07-01
Due to the growing relevance of safety and environmental protection policies in PETROBRAS and its subsidiaries, as well as the requirements of official regulatory agencies and the population, integrity management of oil and gas pipelines became a priority activity in TRANSPETRO, involving several sectors of the company's Support Management Department. Inspection activities using intelligent PIGs, field correlations and replacement of pipeline segments are known to be high-cost operations and require complex logistics. It is therefore imperative to adopt management tools that optimize the available resources. This study presents the Monte Carlo simulation method as an additional tool for the evaluation and management of pipeline structural integrity. The method consists of forecasting the future physical condition of the most significant defects found in intelligent-PIG in-line inspections, based on a probabilistic approach. Through Monte Carlo simulation, failure probability functions for each defect are produced, helping managers decide which repairs should be executed in order to reach the desired or accepted risk level. The case that illustrates this study refers to the reconditioning of the ORSOL 14'' (355,6 mm) pipeline. This pipeline was constructed to transfer petroleum from the Urucu production fields to the Solimoes port in Coari, a city in the Brazilian Amazon region. The result of this analysis indicated critical points for repair, in addition to the results obtained by the conventional evaluation (the deterministic ASME B-31G method). Given the difficulties of mobilizing staff and executing repairs in remote areas like the Amazon forest, the probabilistic tool was extremely useful, improving the pipeline integrity level and avoiding future additional costs. (author)
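The per-defect failure-probability idea can be sketched generically. This is a toy model with invented distributions, wall thickness and failure criterion, not TRANSPETRO's ASME B-31G-based assessment: sample the reported defect depth and a corrosion growth rate, project them forward, and count exceedances of a critical depth.

```python
import random

def failure_probability(n_sims, years, seed=8):
    """Toy per-defect integrity model: sample an ILI-reported defect depth
    and a corrosion growth rate, project `years` ahead, and count cases
    exceeding a critical depth.  All numbers are illustrative assumptions."""
    rng = random.Random(seed)
    wall = 8.0                                   # assumed wall thickness, mm
    critical = 0.8 * wall                        # assumed criterion: 80% wall loss
    failures = 0
    for _ in range(n_sims):
        depth = rng.gauss(3.0, 0.5)              # reported defect depth, mm
        rate = rng.lognormvariate(-1.6, 0.5)     # growth rate, mm/year
        if depth + rate * years > critical:
            failures += 1
    return failures / n_sims
```

Running the model for several horizons yields the growing failure probability a manager would compare against the accepted risk level when scheduling repairs.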
International Nuclear Information System (INIS)
Zazula, J.M.
1984-01-01
This work concerns calculation of a neutron response, caused by a neutron field perturbed by materials surrounding the source or the detector. Solution of a problem is obtained using coupling of the Monte Carlo radiation transport computation for the perturbed region and the discrete ordinates transport computation for the unperturbed system. (author). 62 refs
International Nuclear Information System (INIS)
Altiparmakov, D.
1988-12-01
This analysis is part of the report on 'Implementation of the geometry module of the 05R code in another Monte Carlo code', chapter 6.0: establishment of future activity related to geometry in the Monte Carlo method. The introduction points out some problems in solving complex three-dimensional models, which induce the need for developing more efficient geometry modules for Monte Carlo calculations. The second part includes the formulation of the problem and of the geometry module. Two fundamental questions to be solved are defined: (1) for a given point, it is necessary to determine the material region or boundary where it belongs, and (2) for a given direction, all intersection points with material regions should be determined. The third part deals with a possible connection with Monte Carlo calculations for computer simulation of geometry objects. R-function theory enables the creation of a geometry module based on the same logic (complex regions are constructed by set operations on elementary regions) as constructive geometry codes. R-functions can efficiently replace functions of three-valued logic in all significant models. They are even more appropriate for this application since three-valued logic is not typical of digital computers, which operate in two-valued logic. This shows that there is a need for work in this field. It is shown that there is a possibility to develop an interactive code for computer modeling of geometry objects in parallel with the development of the geometry module
The Erebus Montes Debris-Apron Population: Investigation of Amazonian Landscape Evolution
van Gasselt, S.; Orgel, C.; Schulz, J.
2014-04-01
Lobate debris aprons are considered to be indicators of the presence of ice and water reservoirs on Mars and are therefore sensitive to climate variability. The northern hemisphere of Mars is characterized by three major populations of debris aprons (see, e.g. [12]): (1) the Tempe Terra/Mareotis Fossae region [2, 5], (2) the Deuteronilus/Protonilus Mensae [1, 4, 8], and (3) the Phlegra Montes (PM) [3]. The broader PM area can be subdivided into a number of smaller populations dispersed across parts of Arcadia Planitia (see figure 1), of which the Erebus Montes, located at 180-195°E, 25-41°N, form a well-confined set of features. We here focus on the age and erosional characteristics of the northern Erebus Montes (see inset in figure 1). Our study makes use of panchromatic image data obtained by the High Resolution Stereo Camera (HRSC) [9, 6] onboard Mars Express and the Context Camera (CTX) [7] onboard Mars Reconnaissance Orbiter. Image data analyses are supported by digital terrain-model data derived from HRSC stereo imaging [10] and from the Mars Orbiter Laser Altimeter (MOLA) [11]. We performed detailed geologic mapping at a scale of 1:10,000 and analysed age relationships and erosion rates following an approach similar to that outlined in [5] for the northern part of the Erebus Montes. The aim of this study is to compare feature characteristics to other populations in order to assess the timing and the overarching controls of landform evolution in the Martian northern hemisphere. The Erebus Montes compare geologically rather well with the Phlegra Montes in terms of individual feature morphologies. The concentration based on cluster analysis (figure 1) shows an up to 10 times higher concentration of remnants per 25 km^2 area, peaking at 3.4×10^-3 features for the Erebus Montes. Debris aprons show well-defined age signals ranging from 15 Myr up to 145 Myr. Some units even show continuous degradation, implying active denudation of the Noachian- to Hesperian-aged remnant massifs. Based on the
KENO V: the newest KENO Monte Carlo criticality program
International Nuclear Information System (INIS)
Landers, N.F.; Petrie, L.M.
1980-01-01
KENO V is a new multigroup Monte Carlo criticality program developed in the tradition of KENO and KENO IV for use in the SCALE system. The primary purpose of KENO V is to determine k-effective. Other calculated quantities include lifetime and generation time, energy-dependent leakages, energy- and region-dependent absorptions, fissions, fluxes, and fission densities. KENO V combines many of the efficient performance capabilities of KENO IV with improvements such as flexible data input, the ability to specify origins for cylindrical and spherical geometry regions, the capability of supergrouping energy-dependent data, a P_n scattering model in the cross sections, a procedure for matching lethargy boundaries between albedos and cross sections to extend the usefulness of the albedo feature, and improved restart capabilities. This advanced user-oriented program combines simplified data input and efficient computer storage allocation to readily solve large problems whose computer storage requirements precluded solution when using KENO IV. 2 figures, 1 table
KENO IV: an improved Monte Carlo criticality program
International Nuclear Information System (INIS)
Petrie, L.M.; Cross, N.F.
1975-11-01
KENO IV is a multigroup Monte Carlo criticality program written for the IBM 360 computers. It executes rapidly and is flexibly dimensioned so the allowed size of a problem (i.e., the number of energy groups, number of geometry cards, etc., are arbitrary) is limited only by the total data storage required. The input data, with the exception of cross sections, fission spectra and albedos, may be entered in free form. The geometry input is quite simple to prepare and complicated three-dimensional systems can often be described with a minimum of effort. The results calculated by KENO IV include k-effective, lifetime and generation time, energy-dependent leakages and absorptions, energy- and region-dependent fluxes and region-dependent fission densities. Criticality searches can be made on unit dimensions or on the number of units in an array. A summary of the theory utilized by KENO IV, a section describing the logical program flow, a compilation of the error messages printed by the code and a comprehensive data guide for preparing input to the code are presented. 14 references
Quantum Monte Carlo studies of superfluid Fermi gases
International Nuclear Information System (INIS)
Chang, S.Y.; Pandharipande, V.R.; Carlson, J.; Schmidt, K.E.
2004-01-01
We report results of quantum Monte Carlo calculations of the ground state of dilute Fermi gases with attractive short-range two-body interactions. The strength of the interaction is varied to study different pairing regimes, which are characterized by the product of the s-wave scattering length and the Fermi wave vector, ak_F. We report results for the ground-state energy, the pairing gap Δ, and the quasiparticle spectrum. In the weak-coupling regime, 1/ak_F < -1, the gap Δ is a small fraction of the Fermi-gas energy E_FG. When a > 0, the interaction is strong enough to form bound molecules with energy E_mol. For 1/ak_F ≳ 0.5, we find that weakly interacting composite bosons are formed in the superfluid gas, with Δ and the gas energy per particle approaching E_mol/2. In this region, we seem to have Bose-Einstein condensation (BEC) of molecules. The behavior of the energy and the gap in the BCS-to-BEC transition region, -0.5 < 1/ak_F < 0.5, is discussed
Recursive Monte Carlo method for deep-penetration problems
International Nuclear Information System (INIS)
Goldstein, M.; Greenspan, E.
1980-01-01
The Recursive Monte Carlo (RMC) method developed for estimating importance function distributions in deep-penetration problems is described. Unique features of the method, including the ability to infer the importance function distributions pertaining to many detectors from, essentially, a single Monte Carlo run and the ability to use the history tape created for a representative region to calculate the importance function in identical regions, are illustrated. The RMC method is applied to the solution of two realistic deep-penetration problems: a concrete shield problem and a Tokamak major-penetration problem. It is found that the RMC method can provide the importance function distributions required for importance sampling with accuracy suitable for an efficient solution of the deep-penetration problems considered, improving the solution efficiency of these two problems by one to three orders of magnitude. 8 figures, 4 tables
Directory of Open Access Journals (Sweden)
Wilson Cabral de Sousa Júnior
2010-06-01
Full Text Available The Amazon region is the final frontier and central focus of Brazilian hydro development, which raises a range of environmental concerns. The largest project in the Amazon is the planned Belo Monte Complex on the Xingu river. If constructed, it will be the second biggest hydroelectric plant in Brazil and the third largest on earth. In this study, we analyse the private and social costs and benefits of the Belo Monte project. Furthermore, we present risk scenarios, considering fluctuations in the project's feasibility that would result from variations in total costs and power. For our analysis, we create three scenarios. In the first scenario Belo Monte appears feasible, with a net present value (NPV) in the range of US$670 million and a rate of return in excess of the 12% discount rate used in this analysis. The second scenario, where we varied some of the project costs and assumptions based on other economic estimates, shows the project to be infeasible, with a negative NPV of about US$3 billion and external costs around US$330 million. We also conducted a risk analysis, allowing variation in several of the parameters most important to the project's feasibility. The simulations brought together the risks of cost overruns, construction delays, lower-than-expected generation and rising social costs. The probability of a positive NPV in these circumstances was calculated to be just 28%; that is, there is a 72% chance that the costs of the Belo Monte dam will be greater than the benefits. Several WCD recommendations are not considered in the project, especially those related to transparency, social participation in the discussion, economic analysis and risk assessment, and licensing of the project. This study underscores the importance of forming a participatory consensus, based on clear, objective information, on whether or not to build the Belo Monte dam.
International Nuclear Information System (INIS)
Allagi, Mabruk O.; Lewins, Jeffery D.
1999-01-01
In a further study of virtually processed Monte Carlo estimates in neutron transport, a shielding problem has been studied. The use of virtual sampling to estimate the importance function at a certain point in phase space depends on the presence of neutrons from the real source at that point. But in deep-penetration problems, not many neutrons will reach regions far away from the source. In order to overcome this problem, two suggestions are considered: (1) virtual sampling is used as far as the real neutrons can reach, and then fictitious sampling, distributed over all the remaining regions, is introduced; or (2) only one fictitious source is placed where the real neutrons almost terminate, and virtual sampling is then used in the same way as for the real source. Variational processing is again found to improve the Monte Carlo estimates, being best when using one fictitious source in the far regions with virtual sampling (option 2). When fictitious sources are used to estimate the importances in regions far away from the source, some optimization has to be performed for the proportion of fictitious to real sources, weighed against accuracy and computational costs. It has been found in this study that the optimum number of cells to be treated by fictitious sampling is problem dependent, but as a rule of thumb, fictitious sampling should be employed in regions where the number of neutrons from the real source falls below a specified limit for good statistics.
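The deep-penetration abstracts above share one underlying mechanism: scoring with biased sampling plus statistical weights. As a hedged, minimal sketch (not the authors' virtual/fictitious-sampling scheme), the example below estimates transmission through a 10-mean-free-path purely absorbing slab, where the analog game almost never scores, by stretching the flight-distance distribution and carrying the ratio of true to biased densities as a weight; all names are invented for this illustration.

```python
import math
import random

def transmission(n_samples, thickness=10.0, biased_rate=None, seed=7):
    """Estimate the probability that a particle penetrates a purely
    absorbing slab `thickness` mean free paths thick (exact answer:
    exp(-thickness)).  With biased_rate=None this is the analog game;
    otherwise flight distances are drawn from a stretched exponential
    with rate b < 1 and each score carries the weight
    true_pdf/biased_pdf = e^{-s} / (b e^{-b s}), keeping the estimator
    unbiased while putting samples where the real neutrons rarely go."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        if biased_rate is None:
            s = -math.log(rng.random())                 # pdf e^{-s}
            total += 1.0 if s > thickness else 0.0
        else:
            b = biased_rate
            s = -math.log(rng.random()) / b             # pdf b e^{-b s}
            if s > thickness:
                total += math.exp(-(1.0 - b) * s) / b   # importance weight
    return total / n_samples

exact = math.exp(-10.0)
print(transmission(100_000, biased_rate=0.1), exact)
```

With 10^5 histories the analog estimator expects only about five scores; the biased estimator scores on most histories and converges to the same mean with far smaller variance.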
Dragovitsch, Peter; Linn, Stephan L.; Burbank, Mimi
1994-01-01
Calorimeter Geometry * Simulations with EGS4/PRESTA for Thin Si Sampling Calorimeter * SIBERIA -- Monte Carlo Code for Simulation of Hadron-Nuclei Interactions * CALOR89 Predictions for the Hanging File Test Configurations * Estimation of the Multiple Coulomb Scattering Error for Various Numbers of Radiation Lengths * Monte Carlo Generator for Nuclear Fragmentation Induced by Pion Capture * Calculation and Randomization of Hadron-Nucleus Reaction Cross Section * Developments in GEANT Physics * Status of the MC++ Event Generator Toolkit * Theoretical Overview of QCD Event Generators * Random Numbers? * Simulation of the GEM LKr Barrel Calorimeter Using CALOR89 * Recent Improvement of the EGS4 Code, Implementation of Linearly Polarized Photon Scattering * Interior-Flux Simulation in Enclosures with Electron-Emitting Walls * Some Recent Developments in Global Determinations of Parton Distributions * Summary of the Workshop on Simulating Accelerator Radiation Environments * Simulating the SDC Radiation Background and Activation * Applications of Cluster Monte Carlo Method to Lattice Spin Models * PDFLIB: A Library of All Available Parton Density Functions of the Nucleon, the Pion and the Photon and the Corresponding αs Calculations * DTUJET92: Sampling Hadron Production at Supercolliders * A New Model for Hadronic Interactions at Intermediate Energies for the FLUKA Code * Matrix Generator of Pseudo-Random Numbers * The OPAL Monte Carlo Production System * Monte Carlo Simulation of the Microstrip Gas Counter * Inner Detector Simulations in ATLAS * Simulation and Reconstruction in H1 Liquid Argon Calorimetry * Polarization Decomposition of Fluxes and Kinematics in ep Reactions * Towards Object-Oriented GEANT -- ProdiG Project * Parallel Processing of AMY Detector Simulation on Fujitsu AP1000 * Enigma: An Event Generator for Electron-Photon- or Pion-Induced Events in the ~1 GeV Region * SSCSIM: Development and Use by the Fermilab SDC Group * The GEANT-CALOR Interface
Selection of important Monte Carlo histories
International Nuclear Information System (INIS)
Egbert, Stephen D.
1987-01-01
The 1986 Dosimetry System (DS86) for Japanese A-bomb survivors uses information describing the behavior of individual radiation particles, simulated by Monte Carlo methods, to calculate the transmission of radiation into structures and, thence, into humans. However, there are practical constraints on the number of such particle 'histories' that may be used. First, the number must be sufficiently high to provide adequate statistical precision for any calculated quantity of interest. For integral quantities, such as dose or kerma, statistical precision of approximately 5% (standard deviation) is required to ensure that statistical uncertainties are not a major contributor to the overall uncertainty of the transmitted value. For differential quantities, such as scalar fluence spectra, 10 to 15% standard deviation on individual energy groups is adequate. Second, the number of histories cannot be so large as to require an unacceptably large amount of computer time to process the entire survivor data base. Given that there are approximately 30,000 survivors, each having 13 or 14 organs of interest, the number of histories per organ must be constrained to several tens of thousands at the very most. Selection and use of the most important Monte Carlo leakage histories from among all those calculated allows the creation of an efficient house and organ radiation transmission system for use at RERF. While attempts have been made during the adjoint Monte Carlo calculation to bias the histories toward an efficient dose estimate, this effort has been far from satisfactory. Many of the adjoint histories on a typical leakage tape either start in an energy group in which there is very little kerma or dose, or leak into an energy group with very little free-field fluence to couple with. By knowing the typical free-field fluence and the fluence-to-dose factors with which the leaking histories will be used, one can select histories from a leakage tape that will contribute to the dose.
Energy Technology Data Exchange (ETDEWEB)
Han, Gi Yeong; Kim, Song Hyun; Kim, Do Hyun; Shin, Chang Ho; Kim, Jong Kyung [Hanyang Univ., Seoul (Korea, Republic of)
2014-05-15
In this study, how the geometry splitting strategy affects the calculation efficiency was analyzed, and a geometry splitting method was proposed to increase the calculation efficiency of Monte Carlo simulation. First, an analysis of the neutron distribution characteristics in a deep-penetration problem was performed. Then, considering the neutron population distribution, a geometry splitting method was devised. Using the proposed method, the FOMs for benchmark problems were estimated and compared with those of the conventional geometry splitting strategy. The results show that the proposed method can considerably increase the calculation efficiency of the geometry splitting method. It is expected that the proposed method will contribute to optimizing the computational cost as well as reducing human errors in Monte Carlo simulation. Geometry splitting in Monte Carlo (MC) calculation is one of the most popular variance reduction techniques due to its simplicity, reliability and efficiency. To use geometry splitting, the user should determine the locations of the splitting surfaces and assign the relative importance of each region. Generally, the splitting parameters are decided by the user's experience. However, in this process, the splitting parameters can be selected ineffectively or erroneously. In order to prevent this, there is a recommendation that helps the user eliminate guesswork, which is to split the geometry evenly. The importances are then estimated by a few iterations to preserve the population of particles penetrating each region. However, splitting the geometry evenly can make the calculation inefficient because of the change in the mean free path (MFP) of the particles.
International Nuclear Information System (INIS)
Han, Gi Yeong; Kim, Song Hyun; Kim, Do Hyun; Shin, Chang Ho; Kim, Jong Kyung
2014-01-01
In this study, how the geometry splitting strategy affects the calculation efficiency was analyzed, and a geometry splitting method was proposed to increase the calculation efficiency of Monte Carlo simulation. First, an analysis of the neutron distribution characteristics in a deep-penetration problem was performed. Then, considering the neutron population distribution, a geometry splitting method was devised. Using the proposed method, the FOMs for benchmark problems were estimated and compared with those of the conventional geometry splitting strategy. The results show that the proposed method can considerably increase the calculation efficiency of the geometry splitting method. It is expected that the proposed method will contribute to optimizing the computational cost as well as reducing human errors in Monte Carlo simulation. Geometry splitting in Monte Carlo (MC) calculation is one of the most popular variance reduction techniques due to its simplicity, reliability and efficiency. To use geometry splitting, the user should determine the locations of the splitting surfaces and assign the relative importance of each region. Generally, the splitting parameters are decided by the user's experience. However, in this process, the splitting parameters can be selected ineffectively or erroneously. In order to prevent this, there is a recommendation that helps the user eliminate guesswork, which is to split the geometry evenly. The importances are then estimated by a few iterations to preserve the population of particles penetrating each region. However, splitting the geometry evenly can make the calculation inefficient because of the change in the mean free path (MFP) of the particles.
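The splitting mechanism the two abstracts above take as their starting point can be sketched in a few lines. This is a hedged toy model, not the authors' proposed method: a particle crosses eight equally thick, purely absorbing regions, and at every region boundary the importance doubles, so each survivor is split in two with the weight halved, which keeps the tally unbiased while maintaining the population at depth. All names and the 8-region setup are invented for illustration.

```python
import math
import random

def split_transmission(n_source, n_regions=8, seed=3):
    """Toy 1-D deep-penetration tally with geometry splitting.

    A particle flies forward through `n_regions` regions, each one mean
    free path thick, so it survives a region with probability e^-1.  At
    each boundary the importance doubles: the survivor is split into two
    particles of half the weight.  The expected transmitted weight per
    source particle is exactly e^{-n_regions}, as in the analog game,
    but many more histories reach the deep tally."""
    p_survive = math.exp(-1.0)
    rng = random.Random(seed)
    transmitted = 0.0
    for _ in range(n_source):
        stack = [(1.0, n_regions)]          # (weight, regions left to cross)
        while stack:
            w, left = stack.pop()
            if left == 0:
                transmitted += w            # reached the far face: tally weight
                continue
            if rng.random() < p_survive:    # survived one region
                stack.append((w / 2.0, left - 1))   # 2-for-1 split,
                stack.append((w / 2.0, left - 1))   # half weight each
    return transmitted / n_source

print(split_transmission(50_000))   # analytic answer: exp(-8) ≈ 3.35e-4
```

Doubling the importance per mean free path roughly compensates the e^-1 attenuation, which is the even-splitting heuristic the abstracts discuss; a production code would also apply Russian roulette when particles move toward less important regions.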
Response decomposition with Monte Carlo correlated coupling
International Nuclear Information System (INIS)
Ueki, T.; Hoogenboom, J.E.; Kloosterman, J.L.
2001-01-01
Particle histories that contribute to a detector response are categorized according to whether they are fully confined inside a source-detector enclosure or cross and recross the same enclosure. The contribution from the confined histories is expressed using a forward problem with the external boundary condition on the source-detector enclosure. The contribution from the crossing and recrossing histories is expressed as the surface integral at the same enclosure of the product of the directional cosine and the fluxes in the foregoing forward problem and the adjoint problem for the whole spatial domain. The former contribution can be calculated by a standard forward Monte Carlo. The latter contribution can be calculated by correlated coupling of forward and adjoint histories independently of the former contribution. We briefly describe the computational method and discuss its application to perturbation analysis for localized material changes. (orig.)
Response decomposition with Monte Carlo correlated coupling
Energy Technology Data Exchange (ETDEWEB)
Ueki, T.; Hoogenboom, J.E.; Kloosterman, J.L. [Delft Univ. of Technology (Netherlands). Interfaculty Reactor Inst.
2001-07-01
Particle histories that contribute to a detector response are categorized according to whether they are fully confined inside a source-detector enclosure or cross and recross the same enclosure. The contribution from the confined histories is expressed using a forward problem with the external boundary condition on the source-detector enclosure. The contribution from the crossing and recrossing histories is expressed as the surface integral at the same enclosure of the product of the directional cosine and the fluxes in the foregoing forward problem and the adjoint problem for the whole spatial domain. The former contribution can be calculated by a standard forward Monte Carlo. The latter contribution can be calculated by correlated coupling of forward and adjoint histories independently of the former contribution. We briefly describe the computational method and discuss its application to perturbation analysis for localized material changes. (orig.)
Monte Carlo simulations of low background detectors
International Nuclear Information System (INIS)
Miley, H.S.; Brodzinski, R.L.; Hensley, W.K.; Reeves, J.H.
1995-01-01
An implementation of the Electron Gamma Shower 4 code (EGS4) has been developed to allow convenient simulation of typical gamma ray measurement systems. Coincidence gamma rays, beta spectra, and angular correlations have been added to adequately simulate a complete nuclear decay and provide corrections to experimentally determined detector efficiencies. This code has been used to strip certain low-background spectra for the purpose of extremely low-level assay. Monte Carlo calculations of this sort can be extremely successful since low background detectors are usually free of significant contributions from poorly localized radiation sources, such as cosmic muons, secondary cosmic neutrons, and radioactive construction or shielding materials. Previously, validation of this code has been obtained from a series of comparisons between measurements and blind calculations. An example of the application of this code to an exceedingly low background spectrum stripping will be presented. (author) 5 refs.; 3 figs.; 1 tab
Homogenized group cross sections by Monte Carlo
International Nuclear Information System (INIS)
Van Der Marck, S. C.; Kuijper, J. C.; Oppe, J.
2006-01-01
Homogenized group cross sections play a large role in making reactor calculations efficient. Because of this significance, many codes exist that can calculate these cross sections based on certain assumptions. However, for the application to the High Flux Reactor (HFR) in Petten, the Netherlands, the limitations of such codes imply that the core calculations would become less accurate when using homogenized group cross sections (HGCS). Therefore we developed a method to calculate HGCS based on a Monte Carlo program, for which we chose MCNP. The implementation involves an addition to MCNP and a set of small executables to perform suitable averaging after the MCNP run(s) have completed. Here we briefly describe the details of the method, and we report on two tests we performed to show the accuracy of the method and its implementation. By now, this method is routinely used in preparation of the cycle-to-cycle core calculations for the HFR. (authors)
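The averaging step the abstract above alludes to is, in its simplest reaction-rate-preserving form, a flux-weighted collapse of fine-group cross sections into broad groups. The sketch below shows that arithmetic only; it is an assumption-laden illustration, not the MCNP addition the authors describe, and the function name is invented.

```python
def collapse(sigma, flux, groups):
    """Flux-weighted homogenization of fine-group cross sections into
    broad groups: sigma_G = sum(phi_g * sigma_g) / sum(phi_g), summed
    over the fine groups g belonging to broad group G, so that the
    broad-group reaction rate sigma_G * phi_G is preserved.
    `groups` lists the fine-group indices of each broad group."""
    out = []
    for members in groups:
        phi = sum(flux[g] for g in members)
        out.append(sum(flux[g] * sigma[g] for g in members) / phi)
    return out

# two fine groups (sigma = 10 b and 2 b, flux weights 1 and 3)
# collapsed into one broad group: (10*1 + 2*3) / (1 + 3)
print(collapse([10.0, 2.0], [1.0, 3.0], [[0, 1]]))   # → [4.0]
```

In a Monte Carlo setting the fluxes would come from track-length tallies over each region and fine group, which is presumably what the post-run averaging executables consume.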
Nuclear reactions in Monte Carlo codes
Ferrari, Alfredo
2002-01-01
The physics foundations of hadronic interactions as implemented in most Monte Carlo codes are presented, together with a few practical examples. The description of the relevant physics is presented schematically, split into the major steps, in order to stress the different approaches required for a full understanding of nuclear reactions at intermediate and high energies. Due to the complexity of the problem, only a few semi-qualitative arguments are developed in this paper. The description will necessarily be schematic and somewhat incomplete, but hopefully it will be useful as a first introduction to this topic. Examples are shown mostly for the high-energy regime, where all mechanisms mentioned in the paper are at work and to which perhaps most readers are less accustomed. Examples for lower energies can be found in the references. (43 refs)
An accurate nonlinear Monte Carlo collision operator
International Nuclear Information System (INIS)
Wang, W.X.; Okamoto, M.; Nakajima, N.; Murakami, S.
1995-03-01
A three-dimensional nonlinear Monte Carlo collision model is developed based on Coulomb binary collisions, with emphasis on both accuracy and implementation efficiency. The operator, of simple form, fulfills the particle number, momentum and energy conservation laws, and is equivalent to the exact Fokker-Planck operator in that it correctly reproduces the friction coefficient and diffusion tensor; in addition, it can effectively ensure small-angle collisions with a binary scattering angle distributed in a limited range near zero. Two highly vectorizable algorithms are designed for its fast implementation. Various test simulations regarding relaxation processes, electrical conductivity, etc. are carried out in velocity space. The test results, which are in good agreement with theory, and timing results on vector computers show that the operator is practically applicable. It may be used for accurately simulating collisional transport problems in magnetized and unmagnetized plasmas. (author)
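The conservation properties claimed in the abstract above can be illustrated with the standard binary-collision construction: keep the centre-of-mass velocity and rotate the relative velocity while preserving its magnitude. The sketch below is a hedged simplification, not the authors' operator; it uses equal masses and an unrestricted isotropic scattering angle, whereas the abstract's model additionally confines the angle to small values.

```python
import math
import random

def binary_collision(v1, v2, rng):
    """Elastic binary collision of two equal-mass particles.

    The centre-of-mass velocity is unchanged and the relative velocity
    is rotated to a random isotropic direction with its magnitude
    preserved, which conserves momentum and kinetic energy exactly."""
    vcm = [(a + b) / 2.0 for a, b in zip(v1, v2)]
    g = math.dist(v1, v2)                      # relative speed |v1 - v2|
    mu = 2.0 * rng.random() - 1.0              # uniform cos(theta)
    phi = 2.0 * math.pi * rng.random()         # uniform azimuth
    s = math.sqrt(1.0 - mu * mu)
    gdir = [s * math.cos(phi), s * math.sin(phi), mu]
    v1p = [vcm[i] + 0.5 * g * gdir[i] for i in range(3)]
    v2p = [vcm[i] - 0.5 * g * gdir[i] for i in range(3)]
    return v1p, v2p

rng = random.Random(0)
v1p, v2p = binary_collision([1.0, 0.0, 0.0], [0.0, 2.0, -1.0], rng)
print(v1p, v2p)
```

Because only the direction of the relative velocity changes, the post-collision pair reproduces the pre-collision total momentum and kinetic energy to round-off, which is the conservation-law property the operator in the abstract is built around.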
Computation cluster for Monte Carlo calculations
International Nuclear Information System (INIS)
Petriska, M.; Vitazek, K.; Farkas, G.; Stacho, M.; Michalek, S.
2010-01-01
Two computation clusters based on the Rocks Clusters 5.1 Linux distribution, with Intel Core Duo and Intel Core Quad based computers, were built at the Department of Nuclear Physics and Technology. The clusters were used for Monte Carlo calculations, specifically for MCNP calculations applied in nuclear reactor core simulations. Optimization for computation speed was made on both a hardware and a software basis. Hardware cluster parameters, such as the size of the memory, network speed, CPU speed, number of processors per computation and number of processors in one computer, were tested for shortening the calculation time. For software optimization, different Fortran compilers, MPI implementations and CPU multi-core libraries were tested. Finally, the computer cluster was used to find the weighting functions of the neutron ex-core detectors of the VVER-440. (authors)
Monte Carlo stratified source-sampling
International Nuclear Information System (INIS)
Blomquist, R.N.; Gelbard, E.M.
1997-01-01
In 1995, at a conference on criticality safety, a special session was devoted to the Monte Carlo "eigenvalue of the world" problem. Argonne presented a paper at that session in which the anomalies originally observed in that problem were reproduced in a much simplified model-problem configuration, and removed by a version of stratified source-sampling. The original test problem was treated by a special code designed specifically for that purpose. Recently ANL started work on a method for dealing with more realistic "eigenvalue of the world" configurations, and has been incorporating this method into VIM. The original method has been modified to take into account real-world statistical noise sources not included in the model problem. This paper constitutes a status report on work still in progress
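The variance-reduction idea named in the abstract above, stratified source-sampling, has a one-dimensional textbook analogue: instead of drawing all source points independently, fix the number drawn from each equal-width cell. The hedged sketch below shows that analogue only (the abstract's application is to fission-source cells between generations); the function names are invented.

```python
import random

def mc_mean(f, n, rng):
    """Plain Monte Carlo estimate of the integral of f over [0, 1)."""
    return sum(f(rng.random()) for _ in range(n)) / n

def stratified_mean(f, n, rng):
    """Stratified estimate: exactly one sample per equal-width stratum.

    Fixing the number of samples taken from each subregion removes the
    cell-to-cell fluctuation in how many samples land where, which is
    the same idea stratified source-sampling applies to the fission
    source in a criticality calculation."""
    return sum(f((i + rng.random()) / n) for i in range(n)) / n

rng = random.Random(42)
f = lambda x: x * x              # exact integral over [0, 1) is 1/3
print(mc_mean(f, 1000, rng), stratified_mean(f, 1000, rng))
```

For a smooth integrand the stratified error falls much faster with the sample count than the plain 1/sqrt(n) rate, which is why controlling the source distribution per cell can suppress the generation-to-generation source fluctuations described in the abstract.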
Helminthiases in Montes Claros. Preliminary survey
Directory of Open Access Journals (Sweden)
Rina Girard Kaminsky
1976-04-01
Full Text Available A preliminary survey was conducted for the presence of helminths in the city of Montes Claros, M.G., Brazil. Three groups of persons were examined by the direct smear, Kato thick film and MIFC techniques; one group by direct smear and Kato only. General findings were: a high prevalence of hookworm, followed by ascariasis, S. mansoni, S. stercoralis and very light infections with T. trichiura. E. vermicularis and H. nana were the ranking parasites at an orphanage, with some hookworm and S. mansoni infections as well. At a pig slaughterhouse, the dominant parasites were hookworm and S. mansoni. Pig cysticercosis was an incidental finding worth mentioning for the health hazard it represents for humans as well as the economic loss. From the comparative results between the Kato and MIFC techniques, the former again proved itself a more sensitive and reliable concentration method for helminth eggs, of low cost and easy performance.
Monte Carlo simulation of a CZT detector
International Nuclear Information System (INIS)
Chun, Sung Dae; Park, Se Hwan; Ha, Jang Ho; Kim, Han Soo; Cho, Yoon Ho; Kang, Sang Mook; Kim, Yong Kyun; Hong, Duk Geun
2008-01-01
The CZT detector is one of the most promising radiation detectors for hard X-ray and γ-ray measurement. The energy spectrum of a CZT detector has to be simulated to optimize the detector design. A CZT detector was fabricated with dimensions of 5×5×2 mm³. A Peltier cooler with a size of 40×40 mm² was installed below the fabricated CZT detector to reduce the operating temperature of the detector. Energy spectra were measured with the 59.5 keV γ-ray from ²⁴¹Am. A Monte Carlo code was developed to simulate the energy spectrum measured with the planar-type CZT detector, and the result was compared with the measured one. The simulation was extended to a CZT detector with strip electrodes. (author)
Vectorization of Monte Carlo particle transport
International Nuclear Information System (INIS)
Burns, P.J.; Christon, M.; Schweitzer, R.; Lubeck, O.M.; Wasserman, H.J.; Simmons, M.L.; Pryor, D.V.
1989-01-01
This paper reports that fully vectorized versions of the Los Alamos National Laboratory benchmark code Gamteb, a Monte Carlo photon transport algorithm, were developed for the Cyber 205/ETA-10 and Cray X-MP/Y-MP architectures. Single-processor performance measurements of the vector and scalar implementations were modeled in a modified Amdahl's Law that accounts for additional data motion in the vector code. The performance and implementation strategy of the vector codes are related to architectural features of each machine. Speedups between fifteen and eighteen for Cyber 205/ETA-10 architectures, and about nine for CRAY X-MP/Y-MP architectures are observed. The best single processor execution time for the problem was 0.33 seconds on the ETA-10G, and 0.42 seconds on the CRAY Y-MP
Computation cluster for Monte Carlo calculations
Energy Technology Data Exchange (ETDEWEB)
Petriska, M.; Vitazek, K.; Farkas, G.; Stacho, M.; Michalek, S. [Dep. Of Nuclear Physics and Technology, Faculty of Electrical Engineering and Information, Technology, Slovak Technical University, Ilkovicova 3, 81219 Bratislava (Slovakia)
2010-07-01
Two computation clusters based on the Rocks Clusters 5.1 Linux distribution, with Intel Core Duo and Intel Core Quad based computers, were built at the Department of Nuclear Physics and Technology. The clusters were used for Monte Carlo calculations, specifically for MCNP calculations applied in nuclear reactor core simulations. Optimization for computation speed was made on both a hardware and a software basis. Hardware cluster parameters, such as the size of the memory, network speed, CPU speed, number of processors per computation and number of processors in one computer, were tested for shortening the calculation time. For software optimization, different Fortran compilers, MPI implementations and CPU multi-core libraries were tested. Finally, the computer cluster was used to find the weighting functions of the neutron ex-core detectors of the VVER-440. (authors)
Monte Carlo calculations of channeling radiation
International Nuclear Information System (INIS)
Bloom, S.D.; Berman, B.L.; Hamilton, D.C.; Alguard, M.J.; Barrett, J.H.; Datz, S.; Pantell, R.H.; Swent, R.H.
1981-01-01
Results of classical Monte Carlo calculations are presented for the radiation produced by ultra-relativistic positrons incident in a direction parallel to the (110) plane of Si in the energy range 30 to 100 MeV. The results all show the characteristic CR (channeling radiation) peak in the energy range 20 keV to 100 keV. Plots of the centroid energies, widths, and total yields of the CR peaks as a function of energy show power-law dependences of γ^1.5, γ^1.7, and γ^2.5, respectively. Except for the centroid energies, the power-law dependence is only approximate. Agreement with experimental data is good for the centroid energies and only rough for the widths. Adequate experimental data for verifying the yield dependence on γ do not yet exist.
Monte Carlo simulation of the ARGO
International Nuclear Information System (INIS)
Depaola, G.O.
1997-01-01
We use the GEANT Monte Carlo code to design an outline of the geometry and simulate the performance of the Argentine gamma-ray observer (ARGO), a telescope based on silicon strip detector technology. The γ-ray direction is determined by geometrical means and the angular resolution is calculated for small variations of the basic design. The results show that the angular resolution varies from a few degrees at low energies (∼50 MeV) to approximately 0.2° at high energies (>500 MeV). We also made simulations using as incoming γ-rays the energy spectra of the PKS0208-512 and PKS0528+134 quasars. Moreover, a method based on multiple scattering theory is also used to determine the incoming energy. We show that this method is applicable to an energy spectrum. (orig.)
Variational Monte Carlo study of pentaquark states
Energy Technology Data Exchange (ETDEWEB)
Mark W. Paris
2005-07-01
Accurate numerical solution of the five-body Schrödinger equation is effected via variational Monte Carlo. The spectrum is assumed to exhibit a narrow resonance with strangeness S=+1. A fully antisymmetrized and pair-correlated five-quark wave function is obtained for the assumed non-relativistic Hamiltonian, which has spin-, isospin-, and color-dependent pair interactions and many-body confining terms that are fixed by the non-exotic spectra. Gauge field dynamics are modeled via flux-tube exchange factors. The energy determined for the ground states with J=1/2 and negative (positive) parity is 2.22 GeV (2.50 GeV). A lower-energy negative-parity state is consistent with recent lattice results. The short-range structure of the state is analyzed via its diquark content.
Geometric Monte Carlo and black Janus geometries
Energy Technology Data Exchange (ETDEWEB)
Bak, Dongsu, E-mail: dsbak@uos.ac.kr [Physics Department, University of Seoul, Seoul 02504 (Korea, Republic of); B.W. Lee Center for Fields, Gravity & Strings, Institute for Basic Sciences, Daejeon 34047 (Korea, Republic of); Kim, Chanju, E-mail: cjkim@ewha.ac.kr [Department of Physics, Ewha Womans University, Seoul 03760 (Korea, Republic of); Kim, Kyung Kiu, E-mail: kimkyungkiu@gmail.com [Department of Physics, Sejong University, Seoul 05006 (Korea, Republic of); Department of Physics, College of Science, Yonsei University, Seoul 03722 (Korea, Republic of); Min, Hyunsoo, E-mail: hsmin@uos.ac.kr [Physics Department, University of Seoul, Seoul 02504 (Korea, Republic of); Song, Jeong-Pil, E-mail: jeong_pil_song@brown.edu [Department of Chemistry, Brown University, Providence, RI 02912 (United States)
2017-04-10
We describe an application of the Monte Carlo method to the Janus deformation of the black brane background. We present numerical results for three- and five-dimensional black Janus geometries with planar and spherical interfaces. In particular, we argue that the 5D geometry with a spherical interface has an application in understanding the finite-temperature bag-like QCD model via the AdS/CFT correspondence. The accuracy and convergence of the algorithm are evaluated with respect to the grid spacing. The systematic errors of the method are determined using an exact solution of 3D black Janus. This numerical approach for solving linear problems is unaffected by the initial guess of a trial solution and can handle an arbitrary geometry under various boundary conditions in the presence of source fields.
Radiation Modeling with Direct Simulation Monte Carlo
Carlson, Ann B.; Hassan, H. A.
1991-01-01
Improvements in the modeling of radiation in low-density shock waves with direct simulation Monte Carlo (DSMC) are the subject of this study. A new scheme to determine the relaxation collision numbers for excitation of electronic states is proposed. This scheme attempts to move the DSMC programs toward a more detailed modeling of the physics and more reliance on available rate data. The new method is compared with the current modeling technique, and both techniques are compared with available experimental data. The differences in the results are evaluated. The test case is based on experimental measurements from the AVCO-Everett Research Laboratory electric arc-driven shock tube of a normal shock wave in air at 10 km/s and 0.1 Torr. The new method agrees with the available data as well as the results from the earlier scheme do, and is more easily extrapolated to different flow conditions.
Monte Carlo work at Argonne National Laboratory
International Nuclear Information System (INIS)
Gelbard, E.M.; Prael, R.E.
1974-01-01
A simple model of the Monte Carlo process is described and a (nonlinear) recursion relation between fission sources in successive generations is developed. From the linearized form of these recursion relations, it is possible to derive expressions for the mean square coefficients of error modes in the iterates and for correlation coefficients between fluctuations in successive generations. First-order nonlinear terms in the recursion relation are analyzed. From these nonlinear terms an expression for the bias in the eigenvalue estimator is derived, and prescriptions for measuring the bias are formulated. Plans for the development of the VIM code are reviewed, and the proposed treatment of small sample perturbations in VIM is described. 6 references. (U.S.)
Methods for Monte Carlo simulations of biomacromolecules.
Vitalis, Andreas; Pappu, Rohit V
2009-01-01
The state of the art for Monte Carlo (MC) simulations of biomacromolecules is reviewed. Available methodologies for sampling conformational equilibria and associations of biomacromolecules in the canonical ensemble, given a continuum description of the solvent environment, are reviewed. Detailed sections are provided dealing with the choice of degrees of freedom, the efficiencies of MC algorithms and algorithmic peculiarities, as well as the optimization of simple movesets. The issue of introducing correlations into elementary MC moves, and the applicability of such methods to simulations of biomacromolecules, are discussed. A brief discussion of multicanonical methods and an overview of recent simulation work highlighting the potential of MC methods are also provided. It is argued that MC simulations, while underutilized by the biomacromolecular simulation community, hold promise for simulations of complex systems and phenomena that span multiple length scales, especially when used in conjunction with implicit solvation models or other coarse-graining strategies.
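The Metropolis sampling at the heart of such canonical-ensemble MC simulations can be sketched in a few lines. The following toy example is not from the review; the one-dimensional harmonic potential, step size and temperature are invented purely for illustration:

```python
import math
import random

def metropolis_harmonic(n_steps, step=1.0, kT=1.0, seed=0):
    """Metropolis sampling of a single coordinate with U(x) = x^2/2
    in the canonical ensemble; the stationary density is N(0, kT)."""
    rng = random.Random(seed)
    x = 0.0
    samples = []
    for _ in range(n_steps):
        x_trial = x + rng.uniform(-step, step)
        dU = 0.5 * x_trial ** 2 - 0.5 * x ** 2
        # accept with probability min(1, exp(-dU/kT))
        if dU <= 0.0 or rng.random() < math.exp(-dU / kT):
            x = x_trial
        samples.append(x)
    return samples

samples = metropolis_harmonic(200_000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

At kT = 1 the stationary distribution is the standard normal, so the sample mean and variance should approach 0 and 1 as the chain grows.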
Markov Chain Monte Carlo from Lagrangian Dynamics.
Lan, Shiwei; Stathopoulos, Vasileios; Shahbaba, Babak; Girolami, Mark
2015-04-01
Hamiltonian Monte Carlo (HMC) improves the computational efficiency of the Metropolis-Hastings algorithm by reducing its random walk behavior. Riemannian HMC (RHMC) further improves the performance of HMC by exploiting the geometric properties of the parameter space. However, the geometric integrator used for RHMC involves implicit equations that require fixed-point iterations. In some cases, the computational overhead for solving implicit equations undermines RHMC's benefits. In an attempt to circumvent this problem, we propose an explicit integrator that replaces the momentum variable in RHMC by velocity. We show that the resulting transformation is equivalent to transforming Riemannian Hamiltonian dynamics to Lagrangian dynamics. Experimental results suggest that our method improves RHMC's overall computational efficiency in the cases considered. All computer programs and data sets are available online (http://www.ics.uci.edu/~babaks/Site/Codes.html) in order to allow replication of the results reported in this paper.
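Plain HMC, on which RHMC and the proposed Lagrangian scheme build, can be sketched as follows. This is a textbook leapfrog implementation targeting a standard normal, not the paper's explicit integrator; the step size and trajectory length are illustrative:

```python
import math
import random

def hmc_standard_normal(n_samples, eps=0.2, n_leap=10, seed=1):
    """Plain HMC targeting N(0,1): U(q) = q^2/2, K(p) = p^2/2,
    leapfrog integration followed by a Metropolis accept/reject step."""
    rng = random.Random(seed)
    q, out, accepted = 0.0, [], 0
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)            # resample momentum
        q_new, p_new = q, p
        p_new -= 0.5 * eps * q_new         # initial half step (grad U = q)
        for step in range(n_leap):
            q_new += eps * p_new           # full position step
            if step != n_leap - 1:
                p_new -= eps * q_new       # full momentum step
        p_new -= 0.5 * eps * q_new         # final half step
        h_old = 0.5 * (q * q + p * p)
        h_new = 0.5 * (q_new * q_new + p_new * p_new)
        if rng.random() < math.exp(min(0.0, h_old - h_new)):
            q, accepted = q_new, accepted + 1
        out.append(q)
    return out, accepted / n_samples

samples, acc_rate = hmc_standard_normal(20_000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

Because leapfrog nearly conserves the Hamiltonian, the acceptance rate stays close to one while successive samples are almost independent, which is exactly the random-walk suppression the abstract refers to.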
Energy Technology Data Exchange (ETDEWEB)
Rodrigues, Bruno L.; Tomal, Alessandra [Universidade Estadual de Campinas (UNICAMP), Campinas, SP (Brazil). Instituto de Fisica Gleb Wataghin
2016-07-01
Mammography is the main tool for breast cancer diagnosis, and it is based on the use of X-rays to obtain images. However, the glandular tissue present within the breast is highly sensitive to ionizing radiation and therefore requires strict quality control in order to minimize the absorbed dose. The quantification of the absorbed dose in breast tissue can be done using Monte Carlo simulation, which allows a detailed study of the deposition of energy in different regions of the breast. Besides, the results obtained from the simulation can be associated with experimental data and provide quantities of dosimetric interest, such as the dose deposited in glandular tissue. (author)
PEPSI: a Monte Carlo generator for polarized leptoproduction
International Nuclear Information System (INIS)
Mankiewicz, L.
1992-01-01
We describe PEPSI (Polarized Electron Proton Scattering Interactions) a Monte Carlo program for the polarized deep inelastic leptoproduction mediated by electromagnetic interaction. The code is a modification of the LEPTO 4.3 Lund Monte Carlo for unpolarized scattering and requires the standard polarization-independent JETSET routines to perform fragmentation into final hadrons. (orig.)
Closed-shell variational quantum Monte Carlo simulation for the ...
African Journals Online (AJOL)
Closed-shell variational quantum Monte Carlo simulation for the electric dipole moment calculation of hydrazine molecule using casino-code. ... Nigeria Journal of Pure and Applied Physics ... The variational quantum Monte Carlo (VQMC) technique used in this work employed the restricted Hartree-Fock (RHF) scheme.
Efficiency and accuracy of Monte Carlo (importance) sampling
Waarts, P.H.
2003-01-01
Monte Carlo analysis is often regarded as the most simple and accurate reliability method. Besides, it is the most transparent method. The main difficulty is the trade-off between accuracy and efficiency: Monte Carlo becomes less efficient, or less accurate, when very low probabilities are to be computed.
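The low-probability difficulty, and the importance-sampling remedy named in the title, can be illustrated with a generic rare-event example (not taken from the report; the threshold and proposal are invented). Naive sampling of P(X > 4) for a standard normal needs millions of draws to see even a few hits; shifting the proposal into the tail and reweighting recovers the answer cheaply:

```python
import math
import random

def tail_prob_importance(n, a=4.0, seed=2):
    """Importance-sampling estimate of P(X > a) for X ~ N(0,1),
    drawing from the shifted proposal N(a,1) and reweighting."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = rng.gauss(a, 1.0)              # proposal centred on the rare region
        if y > a:
            # likelihood ratio phi(y)/phi(y - a) = exp(a^2/2 - a*y)
            total += math.exp(0.5 * a * a - a * y)
    return total / n

estimate = tail_prob_importance(100_000)
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))   # true tail probability
```

With 10^5 proposal draws the relative error is typically below one percent, whereas a naive estimator of the same size would usually count only a handful of exceedances.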
Exponential convergence on a continuous Monte Carlo transport problem
International Nuclear Information System (INIS)
Booth, T.E.
1997-01-01
For more than a decade, it has been known that exponential convergence on discrete transport problems was possible using adaptive Monte Carlo techniques. An adaptive Monte Carlo method that empirically produces exponential convergence on a simple continuous transport problem is described
Multiple histogram method and static Monte Carlo sampling
Inda, M.A.; Frenkel, D.
2004-01-01
We describe an approach to use multiple-histogram methods in combination with static, biased Monte Carlo simulations. To illustrate this, we computed the force-extension curve of an athermal polymer from multiple histograms constructed in a series of static Rosenbluth Monte Carlo simulations. From
A Monte Carlo approach to combating delayed completion of ...
African Journals Online (AJOL)
The objective of this paper is to unveil the relevance of Monte Carlo critical path analysis in resolving problem of delays in scheduled completion of development projects. Commencing with deterministic network scheduling, Monte Carlo critical path analysis was advanced by assigning probability distributions to task times.
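The approach can be sketched generically: assign a probability distribution to each task time, sample the network many times, and take the project duration in each trial as the longest path. The two-path network, triangular parameters and deadline below are hypothetical, not the paper's case study:

```python
import random

# Hypothetical two-path project; each task time (days) is drawn from a
# triangular(optimistic, most likely, pessimistic) distribution.
PATHS = [
    [(2, 4, 9), (3, 5, 10)],   # path 1
    [(1, 6, 7), (2, 3, 11)],   # path 2
]

def simulate(n_trials, deadline=12.0, seed=3):
    """Monte Carlo estimate of on-time completion probability:
    the project finishes when the longest (critical) path finishes."""
    rng = random.Random(seed)
    on_time, total = 0, 0.0
    for _ in range(n_trials):
        duration = max(
            sum(rng.triangular(opt, pess, mode) for opt, mode, pess in tasks)
            for tasks in PATHS)
        total += duration
        on_time += duration <= deadline
    return on_time / n_trials, total / n_trials

p_on_time, mean_duration = simulate(50_000)
```

Unlike a deterministic critical-path calculation, the simulation yields a full distribution of completion times, so a deadline can be quoted with a probability attached.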
Forest canopy BRDF simulation using Monte Carlo method
Huang, J.; Wu, B.; Zeng, Y.; Tian, Y.
2006-01-01
The Monte Carlo method is a statistical method based on random sampling, which has been widely used to simulate the Bidirectional Reflectance Distribution Function (BRDF) of vegetation canopy in the field of visible remote sensing. The random process between photons and the forest canopy was designed using the Monte Carlo method.
New Approaches and Applications for Monte Carlo Perturbation Theory
Energy Technology Data Exchange (ETDEWEB)
Aufiero, Manuele; Bidaud, Adrien; Kotlyar, Dan; Leppänen, Jaakko; Palmiotti, Giuseppe; Salvatores, Massimo; Sen, Sonat; Shwageraus, Eugene; Fratoni, Massimiliano
2017-02-01
This paper presents some of the recent and new advancements in the extension of Monte Carlo Perturbation Theory methodologies and applications. In particular, the problems discussed involve burnup calculations, perturbation calculations based on continuous energy functions, and Monte Carlo Perturbation Theory in loosely coupled systems.
A Monte Carlo algorithm for the Vavilov distribution
International Nuclear Information System (INIS)
Yi, Chul-Young; Han, Hyon-Soo
1999-01-01
Using the convolution property of the inverse Laplace transform, an improved Monte Carlo algorithm for the Vavilov energy-loss straggling distribution of the charged particle is developed, which is relatively simple and gives enough accuracy to be used for most Monte Carlo applications
Neutron point-flux calculation by Monte Carlo
International Nuclear Information System (INIS)
Eichhorn, M.
1986-04-01
A survey of the usual methods for estimating flux at a point is given. The associated variance-reducing techniques in direct Monte Carlo games are explained. The multigroup Monte Carlo codes MC for critical systems and PUNKT for point source-point detector-systems are represented, and problems in applying the codes to practical tasks are discussed. (author)
Crop canopy BRDF simulation and analysis using Monte Carlo method
Huang, J.; Wu, B.; Tian, Y.; Zeng, Y.
2006-01-01
The authors design the random process between photons and the crop canopy. A Monte Carlo model has been developed to simulate the Bi-directional Reflectance Distribution Function (BRDF) of the crop canopy. Comparing the Monte Carlo model to the MCRM model, this paper analyzes the variations of different LAD and
Monte Carlo modelling of TRIGA research reactor
El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.
2010-10-01
The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes for use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparison with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents all components of the core in detail, with literally no physical approximation. Continuous energy cross-section data from the most recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3), as well as S(α,β) thermal neutron scattering functions distributed with the MCNP code, were used. The cross-section libraries were generated using the NJOY99 system updated to its most recent patch file "up259". The consistency and accuracy of both the Monte Carlo simulation and the neutron transport physics were established by benchmarking against the TRIGA experiments. Core excess reactivity, total and integral control rod worths, as well as power peaking factors, were used in the validation process. Results of the calculations are analysed and discussed.
International Nuclear Information System (INIS)
Raskach, K.F.; Blyskavka, V; Kislitsyna, T.S.
2011-01-01
In this paper we apply the Monte Carlo method to calculating the spatial distribution of the sodium reactivity worth in the prospective Russian sodium-cooled fast reactor BN-1200. A special Monte Carlo technique applicable to calculating perturbations and derivatives of the effective multiplication factor is used. The numerical results obtained show that Monte Carlo has good prospects for dealing with such problems and for serving as a reference solution for engineering codes based on the diffusion approximation. They also allow one to conclude that, in the sodium blanket and in the neighbouring region of the core, the diffusion code used likely overestimates the sodium reactivity worth. This conclusion has to be verified in future work. (author)
Kontrola tačnosti rezultata u simulacijama Monte Karlo / Accuracy control in Monte Carlo simulations
Directory of Open Access Journals (Sweden)
Nebojša V. Nikolić
2010-04-01
The paper demonstrates the application of the method of automated independent replication of simulation experiments, with gathering of statistics of the stochastic processes, for achieving and controlling the accuracy of simulation results in Monte Carlo queuing simulations. The method is based on the basic theorems of probability theory and mathematical statistics. The accuracy of the simulation results is directly linked to the number of independent replications of the simulation experiments.
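The replication idea can be sketched generically in a few lines (a hypothetical toy model, not the paper's queuing system): keep adding independent replications until the 95% confidence-interval half-width of the estimated mean drops below a target.

```python
import math
import random

def replicate_until(run_once, target_halfwidth, z=1.96, min_reps=10, max_reps=10_000):
    """Repeat independent replications until the 95% confidence-interval
    half-width of the mean estimate falls below target_halfwidth."""
    results = []
    mean = half = float("nan")
    while len(results) < max_reps:
        results.append(run_once())
        n = len(results)
        if n >= min_reps:
            mean = sum(results) / n
            var = sum((r - mean) ** 2 for r in results) / (n - 1)
            half = z * math.sqrt(var / n)
            if half <= target_halfwidth:
                break
    return mean, half, len(results)

# toy replication: each run estimates a mean service time (true value 2.0)
rng = random.Random(4)
run_once = lambda: sum(rng.expovariate(0.5) for _ in range(100)) / 100
mean, half, n_reps = replicate_until(run_once, target_halfwidth=0.02)
```

Because replications are independent, the standard central-limit machinery applies directly, which is exactly the link between accuracy and replication count that the abstract describes.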
Monte Carlo evaluation of path integral for the nuclear shell model
International Nuclear Information System (INIS)
Lang, G.H.
1993-01-01
The authors present a path-integral formulation of the nuclear shell model using auxiliary fields; the path integral is evaluated by Monte Carlo methods. The method scales favorably with valence-nucleon number and shell-model basis: full-basis calculations are demonstrated up to the rare-earth region, which cannot be treated by other methods. Observables are calculated for the ground state and in a thermal ensemble. Dynamical correlations are obtained, from which strength functions are extracted through the Maximum Entropy method. Examples in the s-d shell, where exact diagonalization can be carried out, compare well with exact results. The ''sign problem'' generic to quantum Monte Carlo calculations is found to be absent for attractive pairing-plus-multipole interactions. The formulation is general for interacting fermion systems and is well suited for parallel computation. The authors have implemented it on the Intel Touchstone Delta System, achieving better than 99% parallelization
The Monte Carlo photoionization and moving-mesh radiation hydrodynamics code CMACIONIZE
Vandenbroucke, B.; Wood, K.
2018-04-01
We present the public Monte Carlo photoionization and moving-mesh radiation hydrodynamics code CMACIONIZE, which can be used to simulate the self-consistent evolution of HII regions surrounding young O and B stars, or other sources of ionizing radiation. The code combines a Monte Carlo photoionization algorithm that uses a complex mix of hydrogen, helium and several coolants in order to self-consistently solve for the ionization and temperature balance at any given time, with a standard first-order hydrodynamics scheme. The code can be run as a post-processing tool to get the line emission from an existing simulation snapshot, but can also be used to run full radiation hydrodynamical simulations. Both the radiation transfer and the hydrodynamics are implemented in a general way that is independent of the grid structure that is used to discretize the system, allowing it to be run both as a standard fixed-grid code and as a moving-mesh code.
Research on perturbation based Monte Carlo reactor criticality search
International Nuclear Information System (INIS)
Li Zeguang; Wang Kan; Li Yangliu; Deng Jingkang
2013-01-01
Criticality search is a very important aspect of reactor physics analysis. Owing to the advantages of the Monte Carlo method and the development of computer technologies, Monte Carlo criticality search is becoming more and more necessary and feasible. The traditional Monte Carlo criticality search method suffers from the large number of individual criticality runs required and from the uncertainty and fluctuation of Monte Carlo results. A new Monte Carlo criticality search method based on perturbation calculation is put forward in this paper to overcome the disadvantages of the traditional method. By using only one criticality run to get the initial k_eff and the differential coefficients with respect to the concerned parameter, a polynomial estimator of the k_eff response function is solved to get the critical value of the concerned parameter. The feasibility of this method was tested. The results show that the accuracy and efficiency of the perturbation-based criticality search method are quite inspiring, and the method overcomes the disadvantages of the traditional one. (authors)
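The search step can be sketched as follows. Assuming a single run has produced k_eff and its first two derivatives with respect to the search parameter (the numbers below are invented, standing in for something like a boron concentration in ppm), the quadratic Taylor model is solved for criticality by Newton iteration:

```python
def critical_value(k0, dk, d2k, c0, k_target=1.0, iters=50):
    """Solve the quadratic Taylor model of k_eff around c0,
        k(c) = k0 + dk*(c - c0) + 0.5*d2k*(c - c0)**2 = k_target,
    for the critical parameter value via Newton iteration."""
    c = c0
    for _ in range(iters):
        dc = c - c0
        f = k0 + dk * dc + 0.5 * d2k * dc * dc - k_target
        fprime = dk + d2k * dc
        c -= f / fprime
    return c

# hypothetical single-run output: k_eff = 1.02 at c0 = 600 ppm,
# slope -8.0e-5 per ppm, with a small curvature term
c_crit = critical_value(k0=1.02, dk=-8.0e-5, d2k=1.0e-9, c0=600.0)
```

The point of the method is that all three coefficients come from one criticality run (via perturbation estimates), replacing the sequence of trial-and-error runs required by the traditional search.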
Statistics of Monte Carlo methods used in radiation transport calculation
International Nuclear Information System (INIS)
Datta, D.
2009-01-01
Radiation transport calculations can be carried out using either deterministic or statistical methods. Radiation transport calculation based on statistical methods is the basic theme of Monte Carlo methods. The aim of this lecture is to describe the fundamental statistics required to build the foundations of the Monte Carlo technique for radiation transport calculation. The lecture note is organized in the following way. Section (1) describes the introduction of basic Monte Carlo and its classification in the respective fields. Section (2) describes random sampling methods, a key component of Monte Carlo radiation transport calculation. Section (3) provides the statistical uncertainty of Monte Carlo estimates, and Section (4) describes in brief the importance of variance reduction techniques while sampling particles such as photons or neutrons in the process of radiation transport
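As a minimal illustration of the random sampling and statistical uncertainty topics of Sections (2) and (3), the following sketch (a generic textbook example, not from the lecture note itself) draws exponential variates by inverse-transform sampling, as used for free-flight distances, and attaches a standard error to the Monte Carlo mean:

```python
import math
import random

def sample_exponential(n, lam=2.0, seed=5):
    """Inverse-transform sampling: if U ~ Uniform(0,1), then
    X = -ln(1 - U) / lam is exponential with rate lam."""
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / lam for _ in range(n)]

xs = sample_exponential(100_000)
mc_mean = sum(xs) / len(xs)                      # true mean is 1/lam = 0.5
var = sum((x - mc_mean) ** 2 for x in xs) / (len(xs) - 1)
std_error = math.sqrt(var / len(xs))             # statistical uncertainty
```

The quoted standard error shrinks as 1/sqrt(n), which is the fundamental convergence rate that variance reduction techniques (Section 4) aim to improve on in constant factor, not in rate.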
Reconstruction of Monte Carlo replicas from Hessian parton distributions
Energy Technology Data Exchange (ETDEWEB)
Hou, Tie-Jiun [Department of Physics, Southern Methodist University,Dallas, TX 75275-0181 (United States); Gao, Jun [INPAC, Shanghai Key Laboratory for Particle Physics and Cosmology,Department of Physics and Astronomy, Shanghai Jiao-Tong University, Shanghai 200240 (China); High Energy Physics Division, Argonne National Laboratory,Argonne, Illinois, 60439 (United States); Huston, Joey [Department of Physics and Astronomy, Michigan State University,East Lansing, MI 48824 (United States); Nadolsky, Pavel [Department of Physics, Southern Methodist University,Dallas, TX 75275-0181 (United States); Schmidt, Carl; Stump, Daniel [Department of Physics and Astronomy, Michigan State University,East Lansing, MI 48824 (United States); Wang, Bo-Ting; Xie, Ke Ping [Department of Physics, Southern Methodist University,Dallas, TX 75275-0181 (United States); Dulat, Sayipjamal [Department of Physics and Astronomy, Michigan State University,East Lansing, MI 48824 (United States); School of Physics Science and Technology, Xinjiang University,Urumqi, Xinjiang 830046 (China); Center for Theoretical Physics, Xinjiang University,Urumqi, Xinjiang 830046 (China); Pumplin, Jon; Yuan, C.P. [Department of Physics and Astronomy, Michigan State University,East Lansing, MI 48824 (United States)
2017-03-20
We explore connections between two common methods for quantifying the uncertainty in parton distribution functions (PDFs), based on the Hessian error matrix and Monte-Carlo sampling. CT14 parton distributions in the Hessian representation are converted into Monte-Carlo replicas by a numerical method that reproduces important properties of CT14 Hessian PDFs: the asymmetry of CT14 uncertainties and positivity of individual parton distributions. The ensembles of CT14 Monte-Carlo replicas constructed this way at NNLO and NLO are suitable for various collider applications, such as cross section reweighting. Master formulas for computation of asymmetric standard deviations in the Monte-Carlo representation are derived. A correction is proposed to address a bias in asymmetric uncertainties introduced by the Taylor series approximation. A numerical program is made available for conversion of Hessian PDFs into Monte-Carlo replicas according to normal, log-normal, and Watt-Thorne sampling procedures.
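The basic replica-generation idea can be sketched as follows. This is the plain symmetric-Gaussian recipe on toy numbers, not the paper's full procedure (which also treats the asymmetry of CT14 uncertainties and enforces positivity): each replica displaces the central set along every Hessian eigenvector direction by an independent standard-normal amount.

```python
import random

def hessian_to_replicas(central, f_plus, f_minus, n_rep, seed=8):
    """Build Monte Carlo replicas from Hessian eigenvector PDF sets:
        replica = central + sum_i r_i * (f_i^+ - f_i^-) / 2,  r_i ~ N(0,1).
    Symmetric-Gaussian sketch only; asymmetric errors are not handled."""
    rng = random.Random(seed)
    replicas = []
    for _ in range(n_rep):
        rep = list(central)
        for fp, fm in zip(f_plus, f_minus):
            r = rng.gauss(0.0, 1.0)
            for k in range(len(rep)):
                rep[k] += r * 0.5 * (fp[k] - fm[k])
        replicas.append(rep)
    return replicas

# toy "PDF values" on a 2-point x grid, with two Hessian eigenvector pairs
central = [1.0, 2.0]
f_plus = [[1.1, 2.0], [1.0, 2.3]]
f_minus = [[0.9, 2.0], [1.0, 1.7]]
reps = hessian_to_replicas(central, f_plus, f_minus, 5000)
mean0 = sum(r[0] for r in reps) / len(reps)
```

By construction the replica ensemble reproduces the Hessian central value and symmetric uncertainty in the Gaussian limit; the corrections discussed in the paper address precisely where this naive recipe breaks down.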
Monte Carlo Solutions for Blind Phase Noise Estimation
Directory of Open Access Journals (Sweden)
Çırpan Hakan
2009-01-01
This paper investigates the use of Monte Carlo sampling methods for phase noise estimation on additive white Gaussian noise (AWGN) channels. The main contributions of the paper are (i) the development of a Monte Carlo framework for phase noise estimation, with special attention to sequential importance sampling and Rao-Blackwellization, (ii) the interpretation of existing Monte Carlo solutions within this generic framework, and (iii) the derivation of a novel phase noise estimator. Contrary to the ad hoc phase noise estimators that have been proposed in the past, the estimators considered in this paper are derived from solid probabilistic and performance-determining arguments. Computer simulations demonstrate that, on one hand, the Monte Carlo phase noise estimators outperform the existing estimators and, on the other hand, our newly proposed solution exhibits a lower complexity than the existing Monte Carlo solutions.
Sampling from a polytope and hard-disk Monte Carlo
International Nuclear Information System (INIS)
Kapfer, Sebastian C; Krauth, Werner
2013-01-01
The hard-disk problem, the statics and the dynamics of equal two-dimensional hard spheres in a periodic box, has had a profound influence on statistical and computational physics. Markov-chain Monte Carlo and molecular dynamics were first discussed for this model. Here we reformulate hard-disk Monte Carlo algorithms in terms of another classic problem, namely the sampling from a polytope. Local Markov-chain Monte Carlo, as proposed by Metropolis et al. in 1953, appears as a sequence of random walks in high-dimensional polytopes, while the moves of the more powerful event-chain algorithm correspond to molecular dynamics evolution. We determine the convergence properties of Monte Carlo methods in a special invariant polytope associated with hard-disk configurations, and the implications for convergence of hard-disk sampling. Finally, we discuss parallelization strategies for event-chain Monte Carlo and present results for a multicore implementation
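The local Markov-chain Monte Carlo of Metropolis et al. referred to above can be sketched directly (a minimal illustration with invented parameters, not the event-chain algorithm): propose a small displacement of one disk and accept it only if it creates no overlap.

```python
import random

def min_image(d, box):
    """Minimum-image convention for a periodic box of side `box`."""
    return d - box * round(d / box)

def overlaps(pos, i, x, y, sigma, box):
    """True if disk i placed at (x, y) would overlap any other disk."""
    for j, (xj, yj) in enumerate(pos):
        if j != i:
            dx, dy = min_image(x - xj, box), min_image(y - yj, box)
            if dx * dx + dy * dy < sigma * sigma:
                return True
    return False

def hard_disk_mc(n_sweeps, n=16, sigma=0.15, box=1.0, step=0.05, seed=6):
    """Local Metropolis moves for hard disks: a trial displacement is
    accepted iff it creates no overlap (the potential is 0 or infinity)."""
    rng = random.Random(seed)
    side = int(n ** 0.5)
    pos = [((i % side + 0.5) * box / side, (i // side + 0.5) * box / side)
           for i in range(n)]               # overlap-free square lattice start
    for _ in range(n_sweeps * n):
        i = rng.randrange(n)
        x = (pos[i][0] + rng.uniform(-step, step)) % box
        y = (pos[i][1] + rng.uniform(-step, step)) % box
        if not overlaps(pos, i, x, y, sigma, box):
            pos[i] = (x, y)
    return pos

config = hard_disk_mc(200)
```

In the polytope picture of the abstract, each accepted move is one step of a random walk inside the high-dimensional region of overlap-free configurations.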
Linear filtering applied to Monte Carlo criticality calculations
International Nuclear Information System (INIS)
Morrison, G.W.; Pike, D.H.; Petrie, L.M.
1975-01-01
A significant improvement in the acceleration of the convergence of the eigenvalue computed by Monte Carlo techniques has been developed by applying linear filtering theory to Monte Carlo calculations for multiplying systems. A Kalman filter was applied to a KENO Monte Carlo calculation of an experimental critical system consisting of eight interacting units of fissile material. A comparison of the filter estimate and the Monte Carlo realization was made. The Kalman filter converged in five iterations to 0.9977. After 95 iterations, the average k-eff from the Monte Carlo calculation was 0.9981. This demonstrates that the Kalman filter has the potential of reducing the calculational effort of multiplying systems. Other examples and results are discussed
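A scalar version of the filtering idea can be sketched as follows (a generic Kalman filter run on synthetic k-eff estimates with invented noise levels, not the KENO coupling from the paper):

```python
import random

def kalman_smooth_keff(observations, q=1e-8, r=1e-4):
    """Scalar Kalman filter: the state is a (nearly) constant k-eff,
    observed each generation with Monte Carlo noise of variance r."""
    x, p = observations[0], 1.0           # initial estimate and its variance
    for z in observations[1:]:
        p += q                             # predict (slow random-walk model)
        gain = p / (p + r)                 # Kalman gain
        x += gain * (z - x)                # correct with the new generation
        p *= 1.0 - gain
    return x

# synthetic generation-by-generation k-eff estimates (invented numbers)
rng = random.Random(7)
true_k = 0.9981
obs = [true_k + rng.gauss(0.0, 0.01) for _ in range(95)]
k_hat = kalman_smooth_keff(obs)
```

The filter estimate settles near the underlying eigenvalue well before a plain running average of the noisy generation estimates would, which is the acceleration effect reported in the abstract.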
Problems in radiation shielding calculations with Monte Carlo methods
International Nuclear Information System (INIS)
Ueki, Kohtaro
1985-01-01
The Monte Carlo method is a very useful tool for solving a large class of radiation transport problems. In contrast with deterministic methods, geometric complexity is a much less significant problem for Monte Carlo calculations. However, the accuracy of Monte Carlo calculations is, of course, limited by the statistical error of the quantities to be estimated. In this report, we point out some typical problems in solving a large shielding system including radiation streaming. The Monte Carlo coupling technique was developed to settle such a shielding problem accurately. However, the variance of the Monte Carlo results obtained with the coupling technique, for detectors located outside the radiation streaming, was still not small enough. To obtain more accurate results for the detectors located outside the streaming, and also for a multi-legged-duct streaming problem, a practicable ''Prism Scattering technique'' is proposed in this study. (author)
Cluster monte carlo method for nuclear criticality safety calculation
International Nuclear Information System (INIS)
Pei Lucheng
1984-01-01
One of the most important applications of the Monte Carlo method is the calculation of nuclear criticality safety. The fair source game problem was presented at almost the same time as the Monte Carlo method was first applied to calculating nuclear criticality safety. The aim is to reduce the source iteration cost as much as possible, or to remove the need for source iteration altogether. Problems of this kind all belong to the class of fair source game problems, among which the optimal source game requires no source iteration at all. Although the single neutron Monte Carlo method solves the problem without source iteration, it still has an apparent shortcoming: it does so only in the asymptotic sense. In this work, a new Monte Carlo method, called the cluster Monte Carlo method, is given to solve the problem further.
Use of Monte Carlo codes in neutron therapy; Application de codes Monte Carlo en neutrontherapie
Energy Technology Data Exchange (ETDEWEB)
Paquis, P.; Mokhtari, F.; Karamanoukian, D. [Hopital Pasteur, 06 - Nice (France); Pignol, J.P. [Hopital du Hasenrain, 68 - Mulhouse (France); Cuendet, P. [CEA Centre d' Etudes de Saclay, 91 - Gif-sur-Yvette (France). Direction des Reacteurs Nucleaires; Fares, G.; Hachem, A. [Faculte des Sciences, 06 - Nice (France); Iborra, N. [Centre Antoine-Lacassagne, 06 - Nice (France)
1998-04-01
Monte Carlo calculation codes allow to study accurately all the parameters relevant to radiation effects, like the dose deposition or the type of microscopic interactions, through one by one particle transport simulation. These features are very useful for neutron irradiations, from device development up to dosimetry. This paper illustrates some applications of these codes in Neutron Capture Therapy and Neutron Capture Enhancement of fast neutrons irradiations. (authors)
Barão, Fernando; Nakagawa, Masayuki; Távora, Luis; Vaz, Pedro
2001-01-01
This book focuses on the state of the art of Monte Carlo methods in radiation physics and particle transport simulation and applications, the latter involving in particular the use and development of electron-gamma, neutron-gamma and hadronic codes. Besides the basic theory and the methods employed, special attention is paid to algorithm development for modeling, and to the analysis of experiments and measurements in a variety of fields ranging from particle to medical physics.
International Nuclear Information System (INIS)
Theis, Christian; Feldbaumer, Eduard; Forkel-Wirth, Doris; Jaegerhofer, Lukas; Roesler, Stefan; Vincke, Helmut; Buchegger, Karl Heinz
2010-01-01
Nowadays radiation transport Monte Carlo simulations have become an indispensable tool in various fields of physics. The applications are diversified and range from physics simulations, like detector studies or shielding design, to medical applications. Usually a significant amount of time is spent on the quite cumbersome and often error prone task of implementing geometries, before the actual physics studies can be performed. SimpleGeo is an interactive solid modeler which allows for the interactive creation and visualization of geometries for various Monte Carlo particle transport codes in 3D. Even though visual validation of the geometry is important, it might not reveal subtle errors like overlapping or undefined regions. These might eventually corrupt the execution of the simulation or even lead to incorrect results, the latter being sometimes hard to identify. In many cases a debugger is provided by the Monte Carlo package, but most often they lack interactive visual feedback, thus making it hard for the user to localize and correct the error. In this paper we describe the latest developments in SimpleGeo, which include debugging facilities that support immediate visual feedback, and apply various algorithms based on deterministic, Monte Carlo or Quasi Monte Carlo methods. These approaches allow for a fast and robust identification of subtle geometry errors that are also marked visually. (author)
Fish-eye view from the water tower towards Jura
1977-01-01
In the very front, the cooling plant for the ISR magnets followed by Storage (housing ISR electric generators)and CAO (Control Accelerator Operation) Buildings (Bld 378-377), and the main Building of the ISR Division (Bld 30). Behind stands the West Hall, followed along the neutrino beam line, by the BEBC building, the building housing the neutrino experiments WA1 and WA18, and the Gargamelle Building.
Magic turtle in the canton of Jura: marketing concept
Hauser, Magali; Perruchoud-Massy, Marie-Françoise
2012-01-01
Since June 2009, Saint-Ursanne/Clos du Doubs has been a pilot region of the Enjoy Switzerland/ASM project, which aims to foster the development of tourism in the region and raise awareness of it. In parallel, the Maison du Tourisme, a company mainly offering tourist products in the region, opened its doors last year. These two entities have worked together to develop a new tourist offer entitled "Magic turtle". The Magic turtle, conceived by...
Intergenerational Correlation in Monte Carlo k-Eigenvalue Calculation
International Nuclear Information System (INIS)
Ueki, Taro
2002-01-01
This paper investigates intergenerational correlation in the Monte Carlo k-eigenvalue calculation of the effective neutron multiplication factor. To this end, the exponential transform for path stretching has been applied to large fissionable media with localized highly multiplying regions, because in such media an exponentially decaying shape is a rough representation of the importance of source particles. The numerical results show that the difference between real and apparent variances virtually vanishes for an appropriate value of the exponential transform parameter. This indicates that the intergenerational correlation of k-eigenvalue samples could be eliminated by the adjoint biasing of particle transport. The relation between the biasing of particle transport and the intergenerational correlation is therefore investigated in the framework of collision estimators, and the following conclusion has been obtained: within the leading-order approximation with respect to the number of histories per generation, the intergenerational correlation vanishes when the immediate importance is constant, and the immediate importance under simulation can be made constant by biasing particle transport with a function adjoint to the source neutron's distribution, i.e., the importance over all future generations
Preliminary validation of a Monte Carlo model for IMRT fields
International Nuclear Information System (INIS)
Wright, Tracy; Lye, Jessica; Mohammadi, Mohammad
2011-01-01
A Monte Carlo model of an Elekta linac, validated for medium to large (10-30 cm) symmetric fields, has been investigated for small, irregular and asymmetric fields suitable for IMRT treatments. The model has been validated with field segments using radiochromic film in solid water. The modelled positions of the multileaf collimator (MLC) leaves have been validated using EBT film. In the model, electrons with a narrow energy spectrum are incident on the target and all components of the linac head are included. The MLC is modelled using the EGSnrc MLCE component module. For the validation, a number of single complex IMRT segments with dimensions of approximately 1-8 cm were delivered to film in solid water (see Fig. 1). The same segments were modelled using EGSnrc by adjusting the MLC leaf positions in the model validated for 10 cm symmetric fields. Dose distributions along the centre of each MLC leaf as determined by both methods were compared. A picket fence test was also performed to confirm the MLC leaf positions. 95% of the points in the modelled dose distribution along the leaf axis agree with the film measurement to within 1%/1 mm for dose difference and distance to agreement. Areas of most deviation occur in the penumbra region. A system has been developed to calculate the MLC leaf positions in the model for any planned field size.
Monte Carlo modeling of ion chamber performance using MCNP.
Wallace, J D
2012-12-01
Ion chambers have a generally flat energy response, with some deviations at very low and at very high (>2 MeV) energies. Some improvements in the low energy response can be achieved through use of high atomic number gases, such as argon and xenon, and higher chamber pressures. This work looks at the energy response of high pressure xenon-filled ion chambers using the MCNP Monte Carlo package to develop geometric models of a commercially available high pressure ion chamber (HPIC). The use of the F6 tally as an estimator of the energy deposited in a region of interest per unit mass, and the underlying assumptions associated with its use, are described. The effects of gas composition, chamber gas pressure, chamber wall thickness, and chamber holder wall thickness on energy response are investigated and reported. The predicted energy response curve for the HPIC was found to be similar to that reported by other investigators. These investigations indicate that improvements to flatten the overall energy response of the HPIC down to 70 keV could be achieved through use of 3 mm-thick stainless steel walls for the ion chamber.
Monte-Carlo event generation for the LHC
Siegert, Frank
This thesis discusses recent developments for the simulation of particle physics in the light of the start-up of the Large Hadron Collider. Simulation programs for fully exclusive events, dubbed Monte-Carlo event generators, are improved in areas related to the perturbative as well as non-perturbative regimes of the strong interaction. A short introduction to the main principles of event generation is given to serve as a basis for the following discussion. An existing algorithm for the correction of parton-shower emissions with the help of exact tree-level matrix elements is revisited and significantly improved, as attested by first results. In a next step, an automated implementation of the POWHEG method is presented. It allows for the combination of parton showers with full next-to-leading order QCD calculations and has been tested in several processes. These two methods are then combined into a more powerful framework which allows one to correct a parton shower with full next-to-leading order matrix elements and h...
Monte Carlo simulation of two-photon processes
International Nuclear Information System (INIS)
Daverveldt, P.H.W.M.
1985-01-01
During the last two decades e+e- collider experiments provided physicists with a wealth of important discoveries concerning elementary particle physics. This thesis explains in detail how the Monte Carlo approach can be applied to establish the comparison between two-photon experiments and theory. The author describes the main motives for and objectives of two-photon research. He defines the kinematics and pays attention to some special kinematical regions. Also a popular approximation for the exact differential cross section is reviewed. Next he discusses the calculation of the complete lowest-order cross section for processes with four leptons in the final state and for reactions such as e+e- → e+e- q anti-q and e+e- → μ+μ- q anti-q. Radiative corrections to the multiperipheral diagrams are considered. The author explains in detail the distinction between soft and hard photon corrections, which turns out to be somewhat more tricky than in the case of radiative corrections to one-photon processes. Finally, he presents some results which were obtained by using the event generators. (Auth.)
Monte Carlo dose calculation of microbeam in a lung phantom
International Nuclear Information System (INIS)
Company, F.Z.; Mino, C.; Mino, F.
1998-01-01
Full text: Recent advances in synchrotron-generated X-ray beams with high fluence rate permit investigation of the application of an array of closely spaced, parallel or converging microplanar beams in radiotherapy. The proposed technique takes advantage of the hypothesised repair mechanism of capillary cells between alternate microbeam zones, which regenerates the lethally irradiated endothelial cells. The lateral and depth doses of 100 keV microplanar beams are investigated for different beam dimensions and spacings in a tissue, lung and tissue/lung/tissue phantom. The EGS4 Monte Carlo code is used to calculate dose profiles at different depths for bundles of beams (up to 20 x 20 cm square cross section). The maximum dose on the beam axis (peak) and the minimum interbeam dose (valley) are compared at different depths, bundles, heights, widths and beam spacings. Relatively high peak-to-valley ratios are observed in the lung region, suggesting an ideal environment for microbeam radiotherapy. For a single field, the ratio at the tissue/lung interface will set the maximum dose to the target volume. However, in clinical application, several fields would be involved, allowing much greater doses to be applied for the elimination of cancer cells. We conclude therefore that multifield microbeam therapy has the potential to achieve useful therapeutic ratios for the treatment of lung cancer.
SU-E-T-202: Impact of Monte Carlo Dose Calculation Algorithm On Prostate SBRT Treatments
Energy Technology Data Exchange (ETDEWEB)
Venencia, C; Garrigo, E; Cardenas, J; Castro Pena, P [Instituto de Radioterapia - Fundacion Marie Curie, Cordoba (Argentina)
2014-06-01
Purpose: The purpose of this work was to quantify the dosimetric impact of using a Monte Carlo algorithm on SBRT prostate treatments previously calculated with a pencil beam dose calculation algorithm. Methods: A 6 MV photon beam produced by a Novalis TX (BrainLAB-Varian) linear accelerator equipped with HDMLC was used. Treatment plans were done using 9 fields with iPlan v4.5 (BrainLAB) and dynamic IMRT modality. The institutional SBRT protocol uses a total dose to the prostate of 40 Gy in 5 fractions, every other day. Dose calculation is done by pencil beam (2 mm dose resolution) with heterogeneity correction and dose volume constraints (UCLA): PTV D95%=40Gy and D98%>39.2Gy; Rectum V20Gy<50%, V32Gy<20%, V36Gy<10% and V40Gy<5%; Bladder V20Gy<40% and V40Gy<10%; femoral heads V16Gy<5%; penile bulb V25Gy<3cc; urethra and overlap region between PTV and PRV Rectum Dmax<42Gy. 10 SBRT treatment plans were selected and recalculated using Monte Carlo with 2 mm spatial resolution and mean variance of 2%. DVH comparisons between plans were done. Results: The average differences between PTV dose constraints were within 2%. However, 3 plans had differences higher than 3%, did not meet the D98% criterion (>39.2Gy) and should have been renormalized. Dose volume constraint differences for rectum, bladder, femoral heads and penile bulb were less than 2% and within tolerances. The urethra region and the overlap region between PTV and PRV Rectum showed a dose increase in all plans. The average difference for the urethra region was 2.1% with a maximum of 7.8%, and for the overlap region 2.5% with a maximum of 8.7%. Conclusion: Monte Carlo dose calculation on dynamic IMRT treatments could affect plan normalization. The dose increase in the critical urethra region and in the overlap region between PTV and PRV Rectum could have clinical consequences, which need to be studied. The use of a Monte Carlo dose calculation algorithm is limited because inverse planning dose optimization uses only pencil beam.
The vector and parallel processing of MORSE code on Monte Carlo Machine
International Nuclear Information System (INIS)
Hasegawa, Yukihiro; Higuchi, Kenji.
1995-11-01
The multigroup Monte Carlo particle-transport code MORSE has been modified for high performance computing on the Monte Carlo Machine Monte-4; the method and the results are described. Monte-4 was specially developed to realize high performance computing of Monte Carlo codes for particle transport, which had been difficult to run efficiently on conventional vector processors. Monte-4 has four vector processor units with special hardware called Monte Carlo pipelines. The vectorization and parallelization of the MORSE code and the performance evaluation on Monte-4 are described. (author)
Helminthiases in Montes Claros. Preliminary survey
Directory of Open Access Journals (Sweden)
Rina Girard Kaminsky
1976-04-01
Full Text Available A preliminary survey was conducted for the presence of helminths in the city of Montes Claros, M.G., Brazil. Three groups of persons were examined by the direct smear, Kato thick film and MIFC techniques; one group by direct smear and Kato only. General findings were: a high prevalence of hookworm, followed by ascariasis, S. mansoni, S. stercoralis and very light infections with T. trichiura. E. vermicularis and H. nana were the ranking parasites at an orphanage, with some hookworm and S. mansoni infections as well. At a pig slaughterhouse, the dominant parasites were hookworm and S. mansoni. Pig cysticercosis was an incidental finding worth mentioning for the health hazard it represents for humans as well as for the economic loss it produces. The importance of soil in the transmission of helminths in a hot, dry climate is briefly discussed. From the comparative results between the Kato and the MIF techniques, the former again proved to be the more sensitive and reliable concentration method for helminth eggs, of low cost and easy performance.
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
International Nuclear Information System (INIS)
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors
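The Fission Matrix Acceleration idea in this abstract can be sketched on a toy problem: instead of waiting for generation-by-generation source iteration to converge, one tallies a fission matrix and solves its dominant eigenpair directly. The 20-region 1-D "core" and its Gaussian coupling kernel below are illustrative assumptions standing in for a tallied matrix, not anything from the thesis.

```python
import numpy as np

# Toy 1-D core split into 20 regions. F[i, j] is a hypothetical fission
# matrix: mean number of next-generation fission neutrons born in region i
# per source neutron started in region j (a deterministic stand-in for
# what a Monte Carlo code would tally during inactive generations).
n = 20
x = np.arange(n)
F = np.exp(-((x[:, None] - x[None, :]) ** 2) / 8.0)   # short-range coupling
F *= 1.2 / F.sum(axis=0).max()                        # scale -> high dominance ratio

def power_iteration(F, generations):
    """Unaccelerated source iteration, mimicking generation-by-generation
    convergence of the MC fission source."""
    s = np.full(len(F), 1.0 / len(F))
    for _ in range(generations):
        s = F @ s
        s /= s.sum()
    return s

# Fission-matrix acceleration: solve the dominant eigenpair of the tallied
# matrix directly and take its eigenvector as the converged source shape.
vals, vecs = np.linalg.eigh(F)                        # F is symmetric here
s_fm = np.abs(vecs[:, -1])
s_fm /= s_fm.sum()

s_ref = power_iteration(F, 500)                       # well-converged reference
err_5gen = np.abs(power_iteration(F, 5) - s_ref).max()
err_fm = np.abs(s_fm - s_ref).max()
print(f"source error after 5 generations: {err_5gen:.2e}")
print(f"fission-matrix eigenvector error: {err_fm:.2e}")
```

For high dominance ratios the plain iteration converges slowly, while the eigenvector of the tallied matrix lands on the fundamental-mode source shape immediately; in a real MC code the matrix itself is noisy, which is where the thesis's noise-filtering comes in.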
Monte Carlo based diffusion coefficients for LMFBR analysis
International Nuclear Information System (INIS)
Van Rooijen, Willem F.G.; Takeda, Toshikazu; Hazama, Taira
2010-01-01
A method based on Monte Carlo calculations is developed to estimate the diffusion coefficient of unit cells. The method uses a geometrical model similar to that used in lattice theory, but does not use the assumption of a separable fundamental mode used in lattice theory. The method uses standard Monte Carlo flux and current tallies, and the continuous energy Monte Carlo code MVP was used without modifications. Four models are presented to derive the diffusion coefficient from tally results of flux and partial currents. In this paper the method is applied to the calculation of a plate cell of the fast-spectrum critical facility ZEBRA. Conventional calculations of the diffusion coefficient diverge in the presence of planar voids in the lattice, but our Monte Carlo method can treat this situation without any problem. The Monte Carlo method was used to investigate the influence of geometrical modeling as well as the directional dependence of the diffusion coefficient. The method can be used to estimate the diffusion coefficient of complicated unit cells, the limitation being the capabilities of the Monte Carlo code. The method will be used in the future to confirm results for the diffusion coefficient obtained with deterministic codes. (author)
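The core idea of deriving a diffusion coefficient from flux and current tallies can be sketched with Fick's law, J = -D dphi/dx, fitted over a mesh. The cosine flux shape, mesh, and D value below are synthetic assumptions chosen so the recovery can be checked; they are not one of the paper's four models.

```python
import numpy as np

# Synthetic 'tally output' on a coarse 1-D mesh: scalar flux and net current
# with Fick's law J = -D dphi/dx built in (D_true = 1.3), so the recovery of
# the diffusion coefficient from the tallies can be checked.  The cosine flux
# shape and all numbers are illustrative assumptions.
D_true, B = 1.3, 0.3
x = np.linspace(0.0, 5.0, 51)
phi = np.cos(B * x)                    # flux tally (noise-free here)
J = D_true * B * np.sin(B * x)         # net-current tally

# Recover D by least squares over the mesh: minimize sum (J + D dphi/dx)^2.
dphi_dx = np.gradient(phi, x)
D_est = -float(np.sum(J * dphi_dx) / np.sum(dphi_dx ** 2))
print(f"recovered D = {D_est:.4f}  (true {D_true})")
```

With real tallies both phi and J carry statistical noise, and the choice of how to combine partial currents is exactly what distinguishes the four models the paper compares.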
Present status and future prospects of neutronics Monte Carlo
International Nuclear Information System (INIS)
Gelbard, E.M.
1990-01-01
It is fair to say that the Monte Carlo method, over the last decade, has grown steadily more important as a neutronics computational tool. Apparently this has happened for assorted reasons. Thus, for example, as the power of computers has increased, the cost of the method has dropped, steadily becoming less and less of an obstacle to its use. In addition, more and more sophisticated input processors have now made it feasible to model extremely complicated systems routinely with really remarkable fidelity. Finally, as we demand greater and greater precision in reactor calculations, Monte Carlo is often found to be the only method accurate enough for use in benchmarking. Cross section uncertainties are now almost the only inherent limitations in our Monte Carlo capabilities. For this reason Monte Carlo has come to occupy a special position, interposed between experiment and other computational techniques. More and more often deterministic methods are tested by comparison with Monte Carlo, and cross sections are tested by comparing Monte Carlo with experiment. In this way one can distinguish very clearly between errors due to flaws in our numerical methods, and those due to deficiencies in cross section files. The special role of Monte Carlo as a benchmarking tool, often the only available benchmarking tool, makes it crucially important that this method should be polished to perfection. Problems relating to eigenvalue calculations, variance reduction and the use of advanced computers are reviewed in this paper. (author)
Energy Technology Data Exchange (ETDEWEB)
Terashima, Kenichi; Suzuki, Kenji; Yamaguchi, Katsuhiko, E-mail: yama@sss.fukushima-u.ac.jp
2016-04-01
Monte Carlo simulations were performed for the temperature dependence of the closure-domain parameter of a magnetic micro-torus ring cluster under a magnetic field applied over limited temperature regions. The simulation results show that a magnetic field applied over a narrow temperature region can reverse the magnetic closure-domain structure when the field is applied at a threshold temperature corresponding to the intensity of the applied field. This is one of the thermally assisted switching phenomena operating through a self-organization process. The results show how to find efficient pairings between magnetic-field intensity and temperature region for reversing the closure-domain structure, using the temperature dependence of the fluctuation of the closure-domain parameter. The Monte Carlo method used for this simulation is valuable for optimizing the design of thermally assisted switching devices.
Monte Carlo investigation of collapsed versus rotated IMRT plan verification.
Conneely, Elaine; Alexander, Andrew; Ruo, Russell; Chung, Eunah; Seuntjens, Jan; Foley, Mark J
2014-05-08
IMRT QA requires, among other tests, a time-consuming process of measuring the absorbed dose, at least to a point, in a high-dose, low-dose-gradient region. Some clinics use a technique of measuring this dose with all beams delivered at a single gantry angle (collapsed delivery), as opposed to the beams delivered at the planned gantry angles (rotated delivery). We examined, established, and optimized Monte Carlo simulations of the dosimetry for IMRT verification of treatment plans for these two different delivery modes (collapsed versus rotated). The results of the simulations were compared to the treatment planning system dose calculations for the two delivery modes, as well as to measurements taken. This was done in order to investigate the validity of the use of a collapsed delivery technique for IMRT QA. The BEAMnrc, DOSXYZnrc, and egs_chamber codes were utilized for the Monte Carlo simulations, along with the MMCTP system. A number of different plan complexity metrics were also used in the analysis of the dose distributions in a bid to qualify why verification in a collapsed delivery may or may not be optimal for IMRT QA. Following the Alfonso et al. formalism, the k_{Qclin,Q}^{fclin,fref} correction factor was calculated to correct the deviation of small fields from the reference conditions used for beam calibration. We report on the results obtained for a cohort of 20 patients. The plan complexity was investigated for each plan using the complexity metrics of homogeneity index, conformity index, modulation complexity score, and the fraction of beams from a particular plan that intersect the chamber when performing the QA. Rotated QA gives more consistent results than the collapsed QA technique. The k_{Qclin,Q}^{fclin,fref} factor deviates less from 1 for rotated QA than for collapsed QA. If the homogeneity index is less than 0.05, then the k_{Qclin,Q}^{fclin,fref} factor does not deviate from unity by more than 1%. A value this low for the homogeneity index can only be obtained
Reactor physics simulations with coupled Monte Carlo calculation and computational fluid dynamics
International Nuclear Information System (INIS)
Seker, V.; Thomas, J.W.; Downar, T.J.
2007-01-01
A computational code system based on coupling the Monte Carlo code MCNP5 and the Computational Fluid Dynamics (CFD) code STAR-CD was developed as an audit tool for lower order nuclear reactor calculations. This paper presents the methodology of the developed computer program 'McSTAR'. McSTAR is written in the FORTRAN90 programming language and couples MCNP5 and the commercial CFD code STAR-CD. MCNP uses a continuous energy cross section library produced by the NJOY code system from the raw ENDF/B data. A major part of the work was to develop and implement methods to update the cross section library with the temperature distribution calculated by STAR-CD for every region. Three different methods were investigated and implemented in McSTAR. The user subroutines in STAR-CD are modified to read the power density data and assign them to the appropriate variables in the program, and to write an output data file containing the temperature, density and indexing information to perform the mapping between MCNP and STAR-CD cells. Preliminary testing of the code was performed using a 3x3 PWR pin-cell problem. The preliminary results are compared with those obtained from a STAR-CD coupled calculation with the deterministic transport code DeCART. Good agreement in k_eff and the power profile was observed. Increased computational capabilities and improvements in computational methods have accelerated interest in high fidelity modeling of nuclear reactor cores during the last several years. High fidelity has been achieved by utilizing full core neutron transport solutions for the neutronics calculation and computational fluid dynamics solutions for the thermal-hydraulics calculation. Previous researchers have reported the coupling of 3D deterministic neutron transport methods to CFD and their application to practical reactor analysis problems. One of the principal motivations of the work here was to utilize Monte Carlo methods to validate the coupled deterministic neutron transport
Statistical errors in Monte Carlo estimates of systematic errors
Roe, Byron P.
2007-01-01
For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k^2. The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
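The unisim/multisim distinction can be made concrete with a toy linear model in the regime the paper assumes (errors in a linear region, here with no MC statistical noise so both estimators are exact). The three slope values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: the observable in one data bin responds linearly to three
# systematic parameters (slopes are illustrative, in units of one sigma).
slopes = np.array([0.8, -0.5, 0.3])
true_var = float(np.sum(slopes ** 2))       # exact total systematic variance

def mc_run(params, mc_sigma=0.0):
    """Observable from one MC run; mc_sigma would add statistical noise."""
    return float(slopes @ params) + mc_sigma * rng.normal()

# unisim: one dedicated MC run per parameter, varied by +1 sigma in turn;
# sum the squared shifts from the central run.
central = mc_run(np.zeros(3))
unisim_var = sum((mc_run(np.eye(3)[i]) - central) ** 2 for i in range(3))

# multisim: every run draws *all* parameters from their distributions;
# the spread of the results estimates the same total variance.
runs = np.array([mc_run(rng.normal(size=3)) for _ in range(20_000)])
multisim_var = runs.var(ddof=1)

print(f"true {true_var:.3f}  unisim {unisim_var:.3f}  multisim {multisim_var:.3f}")
```

Setting `mc_sigma > 0` in `mc_run` reproduces the paper's trade-off: the per-parameter unisim shifts get swamped by statistical noise before the pooled multisim variance does.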
A radiating shock evaluated using Implicit Monte Carlo Diffusion
International Nuclear Information System (INIS)
Cleveland, M.; Gentile, N.
2013-01-01
Implicit Monte Carlo [1] (IMC) has been shown to be very expensive when used to evaluate a radiation field in opaque media. Implicit Monte Carlo Diffusion (IMD) [2], which evaluates a spatial discretized diffusion equation using a Monte Carlo algorithm, can be used to reduce the cost of evaluating the radiation field in opaque media [2]. This work couples IMD to the hydrodynamics equations to evaluate opaque diffusive radiating shocks. The Lowrie semi-analytic diffusive radiating shock benchmark[a] is used to verify our implementation of the coupled system of equations. (authors)
Recommender engine for continuous-time quantum Monte Carlo methods
Huang, Li; Yang, Yi-feng; Wang, Lei
2017-03-01
Recommender systems play an essential role in the modern business world. They recommend favorable items such as books, movies, and search queries to users based on their past preferences. Applying similar ideas and techniques to Monte Carlo simulations of physical systems boosts their efficiency without sacrificing accuracy. Exploiting the quantum to classical mapping inherent in the continuous-time quantum Monte Carlo methods, we construct a classical molecular gas model to reproduce the quantum distributions. We then utilize powerful molecular simulation techniques to propose efficient quantum Monte Carlo updates. The recommender engine approach provides a general way to speed up the quantum impurity solvers.
The Monte Carlo method the method of statistical trials
Shreider, YuA
1966-01-01
The Monte Carlo Method: The Method of Statistical Trials is a systematic account of the fundamental concepts and techniques of the Monte Carlo method, together with its range of applications. Some of these applications include the computation of definite integrals, neutron physics, and in the investigation of servicing processes. This volume is comprised of seven chapters and begins with an overview of the basic features of the Monte Carlo method and typical examples of its application to simple problems in computational mathematics. The next chapter examines the computation of multi-dimensio
Neutron flux calculation by means of Monte Carlo methods
International Nuclear Information System (INIS)
Barz, H.U.; Eichhorn, M.
1988-01-01
In this report a survey of modern neutron flux calculation procedures by means of Monte Carlo methods is given. Due to the progress in the development of variance reduction techniques and the improvements of computational techniques this method is of increasing importance. The basic ideas in application of Monte Carlo methods are briefly outlined. In more detail various possibilities of non-analog games and estimation procedures are presented, problems in the field of optimizing the variance reduction techniques are discussed. In the last part some important international Monte Carlo codes and own codes of the authors are listed and special applications are described. (author)
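Two of the "non-analog games" this survey refers to, implicit capture and Russian roulette, can be shown in a minimal 1-D toy (straight-ahead scattering so the geometry stays one-dimensional; all cross sections and the roulette threshold are illustrative assumptions). Both games estimate the same slab transmission, exp(-sigma_a * L), without bias.

```python
import math

import numpy as np

rng = np.random.default_rng(2)
sigma_t, sigma_a, slab = 1.0, 0.3, 5.0   # total/absorption cross sections, slab width

def track_one(implicit):
    """Transport one straight-ahead particle and return its transmitted weight.

    Analog game: each collision absorbs with probability sigma_a/sigma_t.
    Non-analog game: implicit capture reduces the weight instead, and
    Russian roulette removes low-weight histories without bias.
    """
    x, w = 0.0, 1.0
    while True:
        x += rng.exponential(1.0 / sigma_t)       # free flight to next collision
        if x > slab:
            return w                              # escaped: score current weight
        if implicit:
            w *= 1.0 - sigma_a / sigma_t          # survive, carry reduced weight
            if w < 0.1:                           # roulette threshold (a choice)
                if rng.random() < 0.5:
                    w *= 2.0                      # survivor keeps the game fair
                else:
                    return 0.0
        elif rng.random() < sigma_a / sigma_t:
            return 0.0                            # analog absorption
        # scattering is taken as straight-ahead to keep the toy 1-D

n = 50_000
exact = math.exp(-sigma_a * slab)                 # exp(-1.5); both games unbiased
for mode in (False, True):
    scores = np.array([track_one(mode) for _ in range(n)])
    name = "implicit" if mode else "analog"
    print(f"{name}: mean={scores.mean():.4f} (exact {exact:.4f}) "
          f"std.err={scores.std(ddof=1) / math.sqrt(n):.4f}")
```

The implicit game keeps every history alive until it either escapes or loses the roulette, which is what makes such schemes attractive for deep-penetration flux estimates.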
Gluon gas viscosity in nonperturbative region
International Nuclear Information System (INIS)
Il'in, S.V.; Mogilevskij, O.A.; Smolyanskij, S.A.; Zinov'ev, G.M.
1992-01-01
Using the Green-Kubo-type formulae and the cutoff model motivated by Monte Carlo lattice gluodynamics simulations we find the temperature behaviour of shear viscosity of gluon gas in the region of deconfinement phase transition. 22 refs.; 1 fig. (author)
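The Green-Kubo route to shear viscosity, eta = V/(k_B T) * integral of the stress autocorrelation, can be sketched with a synthetic stress signal in place of lattice data. Here the off-diagonal stress is modeled as an Ornstein-Uhlenbeck process (an assumption made purely so the exact answer is known); the prefactor and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic off-diagonal stress signal: an Ornstein-Uhlenbeck process with
# autocorrelation sigma0^2 * exp(-t/tau), so the Green-Kubo integral has the
# closed form eta_exact = pref * sigma0^2 * tau.  pref stands in for the
# physical prefactor V / (k_B T); all numbers here are illustrative.
dt, tau, sigma0, nsteps, pref = 0.01, 0.5, 1.0, 500_000, 1.0
a = np.exp(-dt / tau)
s = np.empty(nsteps)
s[0] = 0.0
kicks = rng.normal(0.0, sigma0 * np.sqrt(1.0 - a * a), nsteps)
for i in range(1, nsteps):
    s[i] = a * s[i - 1] + kicks[i]

# Green-Kubo: integrate the stress autocorrelation up to a cutoff (5 tau).
max_lag = int(5 * tau / dt)
acf = np.array([np.mean(s[: nsteps - k] * s[k:]) for k in range(max_lag + 1)])
eta = pref * dt * (acf.sum() - 0.5 * (acf[0] + acf[-1]))   # trapezoid rule

print(f"Green-Kubo eta = {eta:.3f}  (exact {pref * sigma0**2 * tau:.3f})")
```

In the lattice-gluodynamics setting the correlator comes from Monte Carlo configurations rather than an analytic process, and choosing the integration cutoff against statistical noise is the delicate step.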
Muñoz-Rojas, José; Carrasco González, Rosa María; Pedraza Gilsanz, Javier de
2009-01-01
Regional Geomorphology, defined as the science concerned with describing and explaining the spatial distribution of landforms at regional and sub-regional scales, has been regarded by the more classical schools of physical planning and land-use management as the only discipline capable of analysing the "master lines" that define the complex character of the territory and the landscape. The use of the relief as a physical basis for the delimitation and definition of territorial units...
Uncertainty Propagation in Monte Carlo Depletion Analysis
International Nuclear Information System (INIS)
Shim, Hyung Jin; Kim, Yeong-il; Park, Ho Jin; Joo, Han Gyu; Kim, Chang Hyo
2008-01-01
A new formulation aimed at quantifying uncertainties of Monte Carlo (MC) tallies such as k eff, the microscopic reaction rates of nuclides and nuclide number densities in MC depletion analysis, and at examining their propagation behaviour as a function of depletion time step (DTS), is presented. It is shown that the variance of a given MC tally, used as a measure of its uncertainty in this formulation, arises from four sources: the statistical uncertainty of the MC tally, uncertainties of microscopic cross sections and nuclide number densities, and the cross correlations between them; the contribution of the latter three sources can be determined by computing the correlation coefficients between the uncertain variables. It is also shown that the variance of any given nuclide number density at the end of each DTS stems from uncertainties of the nuclide number densities (NND) and microscopic reaction rates (MRR) of nuclides at the beginning of each DTS, and these are determined by computing correlation coefficients between the two uncertain variables. To test the viability of the formulation, we conducted MC depletion analyses for two sample depletion problems involving a simplified 7x7 fuel assembly (FA) and a 17x17 PWR FA, determined number densities of uranium and plutonium isotopes and their variances as well as k ∞ and its variance as a function of DTS, and demonstrated the applicability of the new formulation for the uncertainty propagation analysis that needs to be performed in MC depletion computations. (authors)
Pseudopotentials for quantum-Monte-Carlo-calculations
International Nuclear Information System (INIS)
Burkatzki, Mark Thomas
2008-01-01
The author presents scalar-relativistic energy-consistent Hartree-Fock pseudopotentials for the main-group and 3d-transition-metal elements. The pseudopotentials do not exhibit a singularity at the nucleus and are therefore suitable for quantum Monte Carlo (QMC) calculations. The author demonstrates their transferability through extensive benchmark calculations of atomic excitation spectra as well as molecular properties. In particular, the author computes the vibrational frequencies and binding energies of 26 first- and second-row diatomic molecules using post Hartree-Fock methods, finding excellent agreement with the corresponding all-electron values. The author shows that the presented pseudopotentials give superior accuracy to other existing pseudopotentials constructed specifically for QMC. The localization error and the efficiency in QMC are discussed. The author also presents QMC calculations for selected atomic and diatomic 3d-transition-metal systems. Finally, valence basis sets of different sizes (VnZ with n=D,T,Q,5 for 1st and 2nd row; with n=D,T for 3rd to 5th row; with n=D,T,Q for the 3d transition metals) optimized for the pseudopotentials are presented. (orig.)
Parallel Monte Carlo simulation of aerosol dynamics
Zhou, K.
2014-01-01
A highly efficient Monte Carlo (MC) algorithm is developed for the numerical simulation of aerosol dynamics, that is, nucleation, surface growth, and coagulation. Nucleation and surface growth are handled with deterministic means, while coagulation is simulated with a stochastic method (the Marcus-Lushnikov stochastic process). Operator splitting techniques are used to synthesize the deterministic and stochastic parts in the algorithm. The algorithm is parallelized using the Message Passing Interface (MPI). The parallel computing efficiency is investigated through numerical examples. Nearly 60% parallel efficiency is achieved for the largest test case, with 3.7 million MC particles running on 93 parallel computing nodes. The algorithm is verified by simulating various test cases and comparing the simulation results with available analytical and/or other numerical solutions. Generally, it is found that only a small number (hundreds or thousands) of MC particles is necessary to accurately predict the aerosol particle number density, volume fraction, and so forth, that is, the low-order moments of the Particle Size Distribution (PSD) function. Accurately predicting the high-order moments of the PSD requires a dramatically larger number of MC particles. © 2014 Kun Zhou et al.
SERPENT Monte Carlo reactor physics code
International Nuclear Information System (INIS)
Leppaenen, J.
2010-01-01
SERPENT is a three-dimensional continuous-energy Monte Carlo reactor physics burnup calculation code, developed at VTT Technical Research Centre of Finland since 2004. The code is specialized in lattice physics applications, but the universe-based geometry description allows transport simulation to be carried out in complicated three-dimensional geometries as well. The suggested applications of SERPENT include generation of homogenized multi-group constants for deterministic reactor simulator calculations, fuel cycle studies involving detailed assembly-level burnup calculations, validation of deterministic lattice transport codes, research reactor applications, educational purposes and demonstration of reactor physics phenomena. The Serpent code has been publicly distributed by the OECD/NEA Data Bank since May 2009 and by RSICC in the U.S. since March 2010. The code is being used in some 35 organizations in 20 countries around the world. This paper presents an overview of the methods and capabilities of the Serpent code, with examples in the modelling of WWER-440 reactor physics. (Author)
A continuation multilevel Monte Carlo algorithm
Collier, Nathan
2014-09-05
We propose a novel Continuation Multi Level Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending when the required error tolerance is satisfied. CMLMC assumes discretization hierarchies that are defined a priori for each level and are geometrically refined across levels. The actual choice of computational work across levels is based on parametric models for the average cost per sample and the corresponding variance and weak error. These parameters are calibrated using Bayesian estimation, taking particular notice of the deepest levels of the discretization hierarchy, where only a few realizations are available to produce the estimates. The resulting CMLMC estimator exhibits a non-trivial splitting between bias and statistical contributions. We also show the asymptotic normality of the statistical error in the MLMC estimator and justify in this way our error estimate that allows prescribing both required accuracy and confidence in the final result. Numerical results substantiate the above results and illustrate the corresponding computational savings in examples that are described in terms of differential equations either driven by random measures or with random coefficients. © 2014, Springer Science+Business Media Dordrecht.
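Leaving aside the Bayesian calibration and the continuation over decreasing tolerances, the core multilevel structure can be sketched for a scalar SDE. The example below is a plain, non-adaptive MLMC estimator with fixed, made-up per-level sample counts, not the CMLMC algorithm itself: each correction level couples a fine and a coarse Euler-Maruyama path through shared Brownian increments.

```python
import math, random

def level_sample(level, rng, T=1.0, x0=1.0, mu=0.05, sigma=0.2):
    """One MLMC sample for geometric Brownian motion dX = mu X dt + sigma X dW.
    Level 0 returns P_0; level l >= 1 returns the coupled difference
    P_l - P_{l-1}, with the coarse path reusing the fine Brownian increments."""
    nf = 2 ** level
    hf = T / nf
    xf = xc = x0
    dw_pair = 0.0
    for n in range(nf):
        dw = rng.gauss(0.0, math.sqrt(hf))
        xf += mu * xf * hf + sigma * xf * dw      # fine Euler step
        if level > 0:
            dw_pair += dw
            if n % 2 == 1:                        # coarse step every 2 fine steps
                xc += mu * xc * (2.0 * hf) + sigma * xc * dw_pair
                dw_pair = 0.0
    return xf if level == 0 else xf - xc

def mlmc_estimate(max_level, n_samples, rng):
    """Telescoping-sum MLMC estimator of E[X_T]. CMLMC would additionally
    choose n_samples per level from calibrated cost/variance/weak-error
    models; here the per-level sample sizes are simply fixed."""
    est = 0.0
    for lev in range(max_level + 1):
        s = sum(level_sample(lev, rng) for _ in range(n_samples[lev]))
        est += s / n_samples[lev]
    return est
```

For this process E[X_T] = x0 * exp(mu*T), which gives a convenient sanity check on the telescoping sum.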
Radon counting statistics - a Monte Carlo investigation
International Nuclear Information System (INIS)
Scott, A.G.
1996-01-01
Radioactive decay is a Poisson process, and so the Coefficient of Variation (COV) of "n" counts of a single nuclide is usually estimated as 1/√n. This is only true if the count duration is much shorter than the half-life of the nuclide. At longer count durations, the COV is smaller than the Poisson estimate. Most radon measurement methods count the alpha decays of 222 Rn plus the progeny 218 Po and 214 Po, and estimate the 222 Rn activity from the sum of the counts. At long count durations, the chain decay of these nuclides means that every 222 Rn decay must be followed by two other alpha decays. The total number of decays is "3N", where N is the number of radon decays, and the true COV of the radon concentration estimate is 1/√N, a factor of √3 larger than the Poisson total-count estimate of 1/√(3N). Most count periods are comparable to the half-lives of the progeny, so the relationship between COV and count time is complex. A Monte Carlo estimate of the ratio of true COV to Poisson estimate was carried out for a range of count periods from 1 min to 16 h and three common radon measurement methods: liquid scintillation, scintillation cell, and electrostatic precipitation of progeny. The Poisson approximation underestimates the COV by less than 20% for count durations of less than 60 min
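The √3 factor in the long-count limit is easy to reproduce numerically. The sketch below is an idealized illustration (it ignores progeny half-lives and detector effects, and the mean count is invented): it draws the number of radon decays N from a Poisson distribution, treats the total alpha count as 3N, and compares the true COV of the activity estimate with the naive 1/√(total counts) value.

```python
import math, random

def poisson(lam, rng):
    """Knuth's exact Poisson sampler; adequate for modest means."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def cov_ratio(mean_n=400, trials=4000, rng=None):
    """In the long-count limit each 222Rn decay contributes three alpha
    counts (222Rn, 218Po, 214Po), so the total count is 3N with N Poisson.
    Returns (true COV of the activity estimate) / (naive 1/sqrt(3N) value);
    the abstract's argument predicts this ratio tends to sqrt(3)."""
    rng = rng or random.Random(1)
    ns = [poisson(mean_n, rng) for _ in range(trials)]
    mean = sum(ns) / trials
    var = sum((n - mean) ** 2 for n in ns) / (trials - 1)
    true_cov = math.sqrt(var) / mean            # COV of the Rn estimate N
    naive_cov = 1.0 / math.sqrt(3.0 * mean)     # Poisson total-count estimate
    return true_cov / naive_cov
```

With realistic count periods the ratio interpolates between 1 and √3, which is what the paper's full simulation maps out.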
Monte Carlo simulations for heavy ion dosimetry
Energy Technology Data Exchange (ETDEWEB)
Geithner, O.
2006-07-26
Water-to-air stopping power ratio (s{sub w,air}) calculations for the ionization chamber dosimetry of clinically relevant ion beams with initial energies from 50 to 450 MeV/u have been performed using the Monte Carlo technique. To simulate the transport of a particle in water, the computer code SHIELD-HIT v2 was used, which is a substantially modified version of its predecessor SHIELD-HIT v1. The code was partially rewritten, replacing formerly used single-precision variables with double-precision variables. The lowest particle-transport specific energy was decreased from 1 MeV/u down to 10 keV/u by modifying the Bethe-Bloch formula, thus widening its range for medical dosimetry applications. Optional MSTAR and ICRU-73 stopping power data were included. The fragmentation model was verified using all available experimental data and some parameters were adjusted. The present code version shows excellent agreement with experimental data. In addition to the calculations of stopping power ratios, s{sub w,air}, the influence of fragments and I-values on s{sub w,air} for carbon ion beams was investigated. The value of s{sub w,air} deviates by as much as 2.3% at the Bragg peak from the constant value of 1.130 recommended by TRS-398 for an energy of 50 MeV/u. (orig.)
The Monte Carlo calculation of gamma family
International Nuclear Information System (INIS)
Shibata, Makio
1980-01-01
The method of Monte Carlo calculation for gamma families was investigated. The effects of varying parameter values or terms on observed quantities were studied. The terms taken for the standard calculation are the scaling law for the model, a simple proton spectrum for the primary cosmic rays, a constant interaction cross section, zero probability of neutral pion production, and the bending of the curve of the primary energy spectrum. This is called the S model. Calculations were made by changing one of the above-mentioned parameters. The chamber size, the mixing of gammas and hadrons, and the family size were fitted to practical ECC data. When the model was changed from the scaling law to the CKP model, the energy spectrum of the family was expressed better by the CKP model than by the scaling law. The scaling law was better for the symmetry around the family center. The hypothesis that primary cosmic rays consist mostly of heavy particles was ruled out. An increase of the interaction cross section was necessary to account for the observed frequency of the families. (Kato, T.)
Monte Carlo Production Management at CMS
Boudoul, G.; Pol, A; Srimanobhas, P; Vlimant, J R; Franzoni, Giovanni
2015-01-01
The analysis of the LHC data at the Compact Muon Solenoid (CMS) experiment requires the production of a large number of simulated events. During Run I of the LHC (2010-2012), CMS produced over 12 billion simulated events, organized in approximately sixty different campaigns, each emulating specific detector conditions and LHC running conditions (pile-up). In order to aggregate the information needed for the configuration and prioritization of event production, to assure the book-keeping of all the processing requests placed by the physics analysis groups, and to interface with the CMS production infrastructure, the web-based service Monte Carlo Management (McM) has been developed and put into production in 2012. McM is based on recent server infrastructure technology (CherryPy + Java) and relies on a CouchDB database back-end. This contribution covers the one and a half years of operational experience managing samples of simulated events for CMS, the evolution of its functionalities and the extension of its capabi...
Atomistic Monte Carlo Simulation of Lipid Membranes
Directory of Open Access Journals (Sweden)
Daniel Wüstner
2014-01-01
Full Text Available Biological membranes are complex assemblies of many different molecules, whose analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction to the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate for a concrete example how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches. We use our recently devised chain breakage/closure (CBC) local move set in the bond-/torsion-angle space with the constant-bond-length approximation (CBLA) for the phospholipid dipalmitoylphosphatidylcholine (DPPC). We demonstrate rapid conformational equilibration for a single DPPC molecule, as assessed by calculation of molecular energies and entropies. We also show the transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol.
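The essence of a local-move Metropolis scheme, stripped of all molecular detail, can be shown on a single torsion-like coordinate with a toy harmonic potential. This is a generic illustration with invented parameters, not the CBC move set: propose a small change of one internal coordinate and accept with the Metropolis probability min(1, exp(-dU/kT)).

```python
import math, random

def metropolis_torsion(k=2.0, kT=1.0, steps=200000, delta=0.8, rng=None):
    """Single-coordinate Metropolis sampling as the simplest instance of a
    local-move set. For the toy potential U = 0.5*k*theta^2 the equilibrium
    variance of theta is kT/k, which the chain should reproduce."""
    rng = rng or random.Random(9)
    theta = 0.0
    samples = []
    for _ in range(steps):
        prop = theta + rng.uniform(-delta, delta)          # local move
        du = 0.5 * k * (prop * prop - theta * theta)       # energy change
        if du <= 0.0 or rng.random() < math.exp(-du / kT):
            theta = prop                                   # accept
        samples.append(theta)
    mean = sum(samples) / len(samples)
    return sum((t - mean) ** 2 for t in samples) / len(samples)
```

Real lipid move sets (pivot, reptation, CBC) differ only in which coordinates are perturbed and how detailed balance is preserved; the accept/reject core is the same.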
Monte Carlo benchmarking: Validation and progress
International Nuclear Information System (INIS)
Sala, P.
2010-01-01
Document available in abstract form only. Full text of publication follows: Calculational tools for radiation shielding at accelerators are faced with new challenges from the present and next generations of particle accelerators. All the details of particle production and transport play a role when dealing with huge power facilities, therapeutic ion beams, radioactive beams and so on. Besides the traditional calculations required for shielding, activation predictions have become an increasingly critical component. Comparison and benchmarking with experimental data is obviously mandatory in order to build up confidence in the computing tools, and to assess their reliability and limitations. Thin target particle production data are often the best tools for understanding the predictive power of individual interaction models and improving their performances. Complex benchmarks (e.g. thick target data, deep penetration, etc.) are invaluable in assessing the overall performances of calculational tools when all ingredients are put at work together. A review of the validation procedures of Monte Carlo tools will be presented with practical and real life examples. The interconnections among benchmarks, model development and impact on shielding calculations will be highlighted. (authors)
Rare event simulation using Monte Carlo methods
Rubino, Gerardo
2009-01-01
In a probabilistic model, a rare event is an event with a very small probability of occurrence. The forecasting of rare events is a formidable task but is important in many areas. Examples include a catastrophic failure in a transport system or a nuclear power plant, or the failure of an information processing system in a bank or in the communication network of a group of banks, leading to financial losses. Being able to evaluate the probability of rare events is therefore a critical issue. Monte Carlo methods, the simulation of corresponding models, are used to analyze rare events. This book sets out to present the mathematical tools available for the efficient simulation of rare events. Importance sampling and splitting are presented along with an exposition of how to apply these tools to a variety of fields, ranging from performance and dependability evaluation of complex systems, typically in computer science or in telecommunications, to chemical reaction analysis in biology or particle transport in physics. ...
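Importance sampling, one of the two tools named above, can be demonstrated on the textbook rare event P(Z > 4) for a standard normal Z (about 3.17e-5), where crude Monte Carlo would need on the order of a million samples just to see a handful of hits. The sketch samples from a distribution shifted onto the rare region and corrects with the likelihood ratio.

```python
import math, random

def rare_event_is(threshold=4.0, n=20000, rng=None):
    """Importance-sampling estimate of P(Z > threshold) for Z ~ N(0,1),
    drawing from the tilted density N(threshold, 1) and reweighting each
    hit by the likelihood ratio phi(x) / phi(x - threshold)."""
    rng = rng or random.Random(2)
    acc = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)
        if x > threshold:
            # phi(x)/phi(x - t) = exp(-t*x + t^2/2) for a unit-variance shift
            acc += math.exp(-threshold * x + threshold ** 2 / 2.0)
    return acc / n
```

The same change-of-measure idea underlies the more elaborate schemes in the book; splitting instead multiplies trajectories that move toward the rare set.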
The GENIE neutrino Monte Carlo generator
International Nuclear Information System (INIS)
Andreopoulos, C.; Bell, A.; Bhattacharya, D.; Cavanna, F.; Dobson, J.; Dytman, S.; Gallagher, H.; Guzowski, P.; Hatcher, R.; Kehayias, P.; Meregaglia, A.; Naples, D.; Pearce, G.; Rubbia, A.; Whalley, M.; Yang, T.
2010-01-01
GENIE is a new neutrino event generator for the experimental neutrino physics community. The goal of the project is to develop a 'canonical' neutrino interaction physics Monte Carlo whose validity extends to all nuclear targets and neutrino flavors from MeV to PeV energy scales. Currently, emphasis is on the few-GeV energy range, the challenging boundary between the non-perturbative and perturbative regimes, which is relevant for the current and near future long-baseline precision neutrino experiments using accelerator-made beams. The design of the package addresses many challenges unique to neutrino simulations and supports the full life-cycle of simulation and generator-related analysis tasks. GENIE is a large-scale software system, consisting of ∼120000 lines of C++ code, featuring a modern object-oriented design and extensively validated physics content. The first official physics release of GENIE was made available in August 2007, and at the time of the writing of this article, the latest available version was v2.4.4.
International Nuclear Information System (INIS)
2012-10-01
This document gathers numerous reports published by the CLI (Local Commission of Information) associated with the basic nuclear installation located in Brennilis, and also documents concerning this installation published by other actors (notably ASN, EDF). After a presentation of the CLI members, documents are gathered by themes: News (10 documents: reports of CLI sessions from January to October 2012, presentations by EDF and ASN), CLI operation (7 documents: creation order, members, internal regulation, site presentation, activity reports for 2009, 2010 and 2011), Works during consultation and public inquiry in 2009 (10 documents: mails exchanged between the CLI and different authorities, technical reports, report, opinion and conclusion of the inquiry commission, public meeting), consultation of the ASN on technical requirement project (notably with respect to water sampling and to releases by the power station), information on the Monts d'Arree site (22 documents: technical reports, information reports, radio-ecological investigation, environmental survey, mails by the different actors, i.e., the CLI, EDF, ASN), and reports of CLI plenary sessions (17 documents: reports from January 2009 to October 2012)
A vectorized Monte Carlo code for modeling photon transport in SPECT
International Nuclear Information System (INIS)
Smith, M.F.; Floyd, C.E. Jr.; Jaszczak, R.J.
1993-01-01
A vectorized Monte Carlo computer code has been developed for modeling photon transport in single photon emission computed tomography (SPECT). The code models photon transport in a uniform attenuating region and photon detection by a gamma camera. It is adapted from a history-based Monte Carlo code in which photon history data are stored in scalar variables and photon histories are computed sequentially. The vectorized code is written in FORTRAN77 and uses an event-based algorithm in which photon history data are stored in arrays and photon history computations are performed within DO loops. The indices of the DO loops range over the number of photon histories, and these loops may take advantage of the vector processing unit of our Stellar GS1000 computer for pipelined computations. Without the use of the vector processor the event-based code is faster than the history-based code because of numerical optimization performed during conversion to the event-based algorithm. When only the detection of unscattered photons is modeled, the event-based code executes 5.1 times faster with the use of the vector processor than without; when the detection of scattered and unscattered photons is modeled the speed increase is a factor of 2.9. Vectorization is a valuable way to increase the performance of Monte Carlo code for modeling photon transport in SPECT
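The history-based versus event-based distinction can be sketched in miniature: below, per-photon data live in arrays and each "event" is applied to the whole batch, which is exactly the organization a vector processing unit can pipeline. This is a schematic illustration in plain Python with invented parameters, not the FORTRAN77 code described; detection of unscattered photons through a uniform attenuator has the analytic answer exp(-mu * thickness), which makes a convenient check.

```python
import math, random

def transmitted_unscattered(n_photons=50000, mu=0.5, thickness=2.0, rng=None):
    """Event-based photon tally: rather than following one history at a
    time, store per-photon data in arrays and apply each event to the
    whole batch (the loops below are what a vector unit, or NumPy,
    would execute as pipelined array operations)."""
    rng = rng or random.Random(3)
    # event 1: sample every photon's distance to first interaction at once
    path = [rng.expovariate(mu) for _ in range(n_photons)]
    # event 2: tally every photon whose free path exceeds the slab thickness
    transmitted = sum(1 for d in path if d >= thickness)
    return transmitted / n_photons
```

Scattered-photon tracking adds more event types (collision, direction change, detection), each still applied batch-wise over the surviving photon arrays.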
A positive-weight next-to-leading-order Monte Carlo for Z pair hadroproduction
International Nuclear Information System (INIS)
Nason, Paolo; Ridolfi, Giovanni
2006-01-01
We present a first application of a previously published method for the computation of QCD processes that is accurate at next-to-leading order and that can be interfaced consistently to standard shower Monte Carlo programs. We have considered Z pair production in hadron-hadron collisions, a process whose complexity is sufficient to test the general applicability of the method. We have interfaced our result to the HERWIG and PYTHIA shower Monte Carlo programs. Previous work on next-to-leading order corrections in a shower Monte Carlo (the MC@NLO program) may involve the generation of events with negative weights, which are avoided with the present method. We have compared our results with those obtained with MC@NLO, and found remarkable consistency. Our method can also be used as a standalone, alternative implementation of QCD corrections, with the advantages of positivity, improved convergence, and next-to-leading logarithmic accuracy in the region of small transverse momentum of the radiated parton
Energy Technology Data Exchange (ETDEWEB)
Zakova, Jitka [Department of Nuclear and Reactor Physics, Royal Institute of Technology, KTH, Roslagstullsbacken 21, S-10691 Stockholm (Sweden)], E-mail: jitka.zakova@neutron.kth.se; Talamo, Alberto [Nuclear Engineering Division, Argonne National Laboratory, ANL, 9700 South Cass Avenue, Argonne, IL 60439 (United States)], E-mail: alby@anl.gov
2008-05-15
Modeling of prismatic high temperature reactors requires a high-precision description due to the triple heterogeneity of the core and also to the random distribution of fuel particles inside the fuel pins. On the latter issue, even with the most advanced Monte Carlo techniques, some approximation often arises while assessing the criticality level: first, a regular lattice of TRISO particles inside the fuel pins and, second, the cutting of TRISO particles by the fuel boundaries. We utilized two of the most accurate Monte Carlo codes, MONK and MCNP, which are used for licensing nuclear power plants in the United Kingdom and in the USA, respectively, to evaluate the influence of the two previous approximations on estimating the criticality level of the Gas Turbine Modular Helium Reactor. The two codes shared exactly the same geometry and nuclear data library, ENDF/B, and only modeled different lattices of TRISO particles inside the fuel pins. More precisely, we investigated the difference between a regular lattice that cuts TRISO particles and a random lattice that axially repeats a region containing over 3000 non-cut particles. We have found that both Monte Carlo codes provide similar excesses of reactivity, provided that they share the same approximations.
Halim, A. A. A.; Laili, M. H.; Salikin, M. S.; Rusop, M.
2018-05-01
Monte Carlo simulation quantifies the propagation of light inside tissue, including the absorption and scattering coefficients, by counting large numbers of photons, and serves as a preliminary study for functional near-infrared applications. The goal of this paper is to identify the optical properties using Monte Carlo simulation for non-invasive functional near-infrared spectroscopy (fNIRS) evaluation of penetration depth in human muscle. The paper describes the NIRS principle and the basis for its proposed use in Monte Carlo simulation, focusing on several important parameters including ATP and ADP and relating them to blood flow and oxygen content at a given exercise intensity. It also covers the advantages and limitations of such an application of this simulation. The results may help to show that human muscle is transparent to this near-infrared region and can deliver much information about the oxygenation level in human muscle. This might be useful as a non-invasive technique for detecting oxygen status in the muscle of living people, whether athletes or workers, and would allow many investigations of muscle physiology in the future.
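A minimal version of such a photon-transport simulation (a generic semi-infinite tissue random walk with invented optical coefficients, not the authors' model) tracks each photon's deepest excursion to gauge penetration depth:

```python
import math, random

def mean_max_depth(mu_a, mu_s, n_photons=5000, rng=None):
    """Analog photon random walk in a semi-infinite tissue:
    exponential free paths with mu_t = mu_a + mu_s, absorption with
    probability mu_a/mu_t at each interaction, isotropic scattering
    otherwise. Returns the mean of each photon's deepest z before it
    is absorbed or escapes back through the surface (z < 0)."""
    rng = rng or random.Random(4)
    mu_t = mu_a + mu_s
    total = 0.0
    for _ in range(n_photons):
        z, uz, zmax = 0.0, 1.0, 0.0        # depth, direction cosine, max depth
        while True:
            z += uz * rng.expovariate(mu_t)
            if z < 0.0:
                break                       # escaped through the surface
            zmax = max(zmax, z)
            if rng.random() < mu_a / mu_t:
                break                       # absorbed
            uz = 2.0 * rng.random() - 1.0   # isotropic: uniform direction cosine
        total += zmax
    return total / n_photons
```

The qualitative behaviour is the expected one: raising the absorption coefficient drives the mean penetration depth down, which is why the near-infrared "optical window" of low tissue absorption matters for fNIRS.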
Monte Carlo simulation of activity measurements by means of 4πβ-γ coincidence system
International Nuclear Information System (INIS)
Takeda, Mauro N.; Dias, Mauro S.; Koskinas, Marina F.
2004-01-01
The methodology for simulating all detection processes in a 4πβ-γ coincidence system by means of the Monte Carlo technique is described. The goal is to predict the behavior of the observed activity as a function of the 4πβ detector efficiency. In this approach, the information contained in the decay scheme is used for determining the contribution of all radiations emitted by the selected radionuclide, to the measured spectra by each detector. This simulation yields the shape of the coincidence spectrum, allowing the choice of suitable gamma-ray windows for which the activity can be obtained with maximum accuracy. The simulation can predict a detailed description of the extrapolation curve, mainly in the region where the 4πβ detector efficiency approaches 100%, which is experimentally unreachable due to self absorption of low energy electrons in the radioactive source substrate. The theoretical work is being developed with MCNP Monte Carlo code, applied to a gas-flow proportional counter of 4π geometry, coupled to a pair of NaI(Tl) crystals. The calculated efficiencies are compared to experimental results. The extrapolation curve can be obtained by means of another Monte Carlo algorithm, being developed in the present work, to take into account fundamental characteristics of a complex decay scheme, including different types of radiation and transitions. The present paper shows preliminary calculated values obtained by the simulation and compared to predicted analytical values for a simple decay scheme. (author)
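The key idea, that the coincidence estimator recovers the activity without knowing either detector efficiency, can be shown with a drastically simplified decay scheme: one beta followed by one gamma, with independent detection and invented efficiencies (this is an illustration of the coincidence principle, not the MCNP-based simulation described above).

```python
import random

def coincidence_activity(n_decays=200000, eps_beta=0.6, eps_gamma=0.3, rng=None):
    """Toy 4pi(beta)-gamma coincidence measurement for an idealized
    beta-gamma decay: each decay may fire the beta counter and the gamma
    counter independently. The textbook estimator N0 = Nb * Ng / Nc then
    recovers the true number of decays, since the unknown efficiencies
    cancel: (N0*eb)(N0*eg)/(N0*eb*eg) = N0."""
    rng = rng or random.Random(5)
    nb = ng = nc = 0
    for _ in range(n_decays):
        b = rng.random() < eps_beta        # beta channel fires?
        g = rng.random() < eps_gamma       # gamma channel fires?
        nb += b
        ng += g
        nc += b and g                      # coincidence
    return nb * ng / nc
```

Real decay schemes break the independence assumption (conversion electrons, angular correlations, source self-absorption), which is precisely why the extrapolation-curve simulation in the paper is needed.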
Automatic variance reduction for Monte Carlo simulations via the local importance function transform
International Nuclear Information System (INIS)
Turner, S.A.
1996-02-01
The author derives a transformed transport problem that can be solved theoretically by analog Monte Carlo with zero variance. However, the Monte Carlo simulation of this transformed problem cannot be implemented in practice, so he develops a method for approximating it. The approximation to the zero-variance method consists of replacing the continuous adjoint transport solution in the transformed transport problem by a piecewise continuous approximation containing local biasing parameters obtained from a deterministic calculation. He uses the transport and collision processes of the transformed problem to bias distance-to-collision and selection of post-collision energy groups and trajectories in a traditional Monte Carlo simulation of ''real'' particles. He refers to the resulting variance reduction method as the Local Importance Function Transform (LIFT) method. He demonstrates the efficiency of the LIFT method for several 3-D, linearly anisotropic scattering, one-group, and multigroup problems. In these problems the LIFT method is shown to be more efficient than the AVATAR scheme, which is one of the best variance reduction techniques currently available in a state-of-the-art Monte Carlo code. For most of the problems considered, the LIFT method produces higher figures of merit than AVATAR, even when the LIFT method is used as a ''black box''. There are some problems that cause trouble for most variance reduction techniques, and the LIFT method is no exception. For example, the author demonstrates that problems with voids, or low-density regions, can cause a reduction in the efficiency of the LIFT method. However, the LIFT method still performs better than survival biasing and AVATAR in these difficult cases
Longitudinal functional principal component modelling via Stochastic Approximation Monte Carlo
Martinez, Josue G.; Liang, Faming; Zhou, Lan; Carroll, Raymond J.
2010-01-01
model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order
Time step length versus efficiency of Monte Carlo burnup calculations
International Nuclear Information System (INIS)
Dufek, Jan; Valtavirta, Ville
2014-01-01
Highlights: • Time step length largely affects efficiency of MC burnup calculations. • Efficiency of MC burnup calculations improves with decreasing time step length. • Results were obtained from SIE-based Monte Carlo burnup calculations. - Abstract: We demonstrate that efficiency of Monte Carlo burnup calculations can be largely affected by the selected time step length. This study employs the stochastic implicit Euler based coupling scheme for Monte Carlo burnup calculations that performs a number of inner iteration steps within each time step. In a series of calculations, we vary the time step length and the number of inner iteration steps; the results suggest that Monte Carlo burnup calculations get more efficient as the time step length is reduced. More time steps must be simulated as they get shorter; however, this is more than compensated by the decrease in computing cost per time step needed for achieving a certain accuracy
GE781: a Monte Carlo package for fixed target experiments
Davidenko, G.; Funk, M. A.; Kim, V.; Kuropatkin, N.; Kurshetsov, V.; Molchanov, V.; Rud, S.; Stutte, L.; Verebryusov, V.; Zukanovich Funchal, R.
The Monte Carlo package for the fixed target experiment B781 at Fermilab, a third generation charmed baryon experiment, is described. This package is based on GEANT 3.21, ADAMO database and DAFT input/output routines.
Bayesian phylogeny analysis via stochastic approximation Monte Carlo
Cheon, Sooyoung
2009-11-01
Monte Carlo methods have received much attention in the recent literature of phylogeny analysis. However, the conventional Markov chain Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, tend to get trapped in a local mode in simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo (SAMC) algorithm, to Bayesian phylogeny analysis. Our method is compared with two popular Bayesian phylogeny software packages, BAMBE and MrBayes, on simulated and real datasets. The numerical results indicate that our method outperforms BAMBE and MrBayes. Among the three methods, SAMC produces the consensus trees which have the highest similarity to the true trees, and the model parameter estimates which have the smallest mean square errors, but costs the least CPU time. © 2009 Elsevier Inc. All rights reserved.
Optix: A Monte Carlo scintillation light transport code
Energy Technology Data Exchange (ETDEWEB)
Safari, M.J., E-mail: mjsafari@aut.ac.ir [Department of Energy Engineering and Physics, Amir Kabir University of Technology, PO Box 15875-4413, Tehran (Iran, Islamic Republic of); Afarideh, H. [Department of Energy Engineering and Physics, Amir Kabir University of Technology, PO Box 15875-4413, Tehran (Iran, Islamic Republic of); Ghal-Eh, N. [School of Physics, Damghan University, PO Box 36716-41167, Damghan (Iran, Islamic Republic of); Davani, F. Abbasi [Nuclear Engineering Department, Shahid Beheshti University, PO Box 1983963113, Tehran (Iran, Islamic Republic of)
2014-02-11
The paper reports on the capabilities of Monte Carlo scintillation light transport code Optix, which is an extended version of previously introduced code Optics. Optix provides the user a variety of both numerical and graphical outputs with a very simple and user-friendly input structure. A benchmarking strategy has been adopted based on the comparison with experimental results, semi-analytical solutions, and other Monte Carlo simulation codes to verify various aspects of the developed code. Besides, some extensive comparisons have been made against the tracking abilities of general-purpose MCNPX and FLUKA codes. The presented benchmark results for the Optix code exhibit promising agreements. -- Highlights: • Monte Carlo simulation of scintillation light transport in 3D geometry. • Evaluation of angular distribution of detected photons. • Benchmark studies to check the accuracy of Monte Carlo simulations.
Dosimetric measurements and Monte Carlo simulation for achieving ...
Indian Academy of Sciences (India)
Research Articles Volume 74 Issue 3 March 2010 pp 457-468 ... Food irradiation; electron accelerator; Monte Carlo; dose uniformity. ... for radiation processing of food and medical products is being commissioned at our centre in Indore, India.
Usefulness of the Monte Carlo method in reliability calculations
International Nuclear Information System (INIS)
Lanore, J.M.; Kalli, H.
1977-01-01
Three examples of reliability Monte Carlo programs developed in the LEP (Laboratory for Radiation Shielding Studies of the Nuclear Research Center at Saclay) are presented. First, an uncertainty analysis is given for a simplified spray system; a Monte Carlo program, PATREC-MC, has been written to solve the problem with the system components given in fault tree representation. The second program, MONARC 2, was written to treat the reliability of complex systems by Monte Carlo simulation; here again the system (a residual heat removal system) is given in fault tree representation. Third, the Monte Carlo program MONARC was used instead of a Markov diagram to simulate an electric power supply comprising two networks and two stand-by diesels.
Anu Välba climbs to the summit of Mont Blanc
2008-01-01
The TV journalist represents Estonian women on a joint climb by women from the EU member states, marking the start of France's EU presidency and the 200th anniversary of the first woman reaching the summit of Mont Blanc
Monte Carlo techniques for analyzing deep penetration problems
International Nuclear Information System (INIS)
Cramer, S.N.; Gonnord, J.; Hendricks, J.S.
1985-01-01
A review of current methods and difficulties in Monte Carlo deep-penetration calculations is presented. Statistical uncertainty is discussed, and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing is reviewed. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multi-group Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications
Suppression of the initial transient in Monte Carlo criticality simulations
International Nuclear Information System (INIS)
Richet, Y.
2006-12-01
Criticality Monte Carlo calculations aim at estimating the effective multiplication factor (k-effective) of a fissile system through iterations simulating neutron propagation (forming a Markov chain). Arbitrary initialization of the neutron population can strongly bias the k-effective estimate, defined as the mean of the k-effective values computed at each iteration. A simplified model of this cycle k-effective sequence is built, based on the characteristics of industrial criticality Monte Carlo calculations. Statistical tests, inspired by Brownian bridge properties, are designed to detect non-stationarity of the cycle k-effective sequence. The detected initial transient is then suppressed in order to improve the estimate of the system k-effective. The different versions of this methodology are detailed and compared, first on a set of numerical tests modeled on criticality Monte Carlo calculations, and second on real criticality calculations. Finally, the best-performing methodologies in these tests are selected and make it possible to improve industrial Monte Carlo criticality calculations. (author)
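A simplified, hypothetical version of such a transient test can be sketched with a Brownian-bridge-type statistic on the partial sums of a synthetic cycle k-effective sequence. The threshold, cut step, and synthetic sequence below are illustrative assumptions, not the thesis' actual test battery:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic cycle k-effective sequence: exponential initial transient + noise.
n = 600
keff = 1.0 + 0.05 * 0.97 ** np.arange(n) + rng.normal(0.0, 0.005, n)

def bridge_statistic(seq):
    """Kolmogorov-type statistic on the standardized partial-sum bridge.

    Under stationarity the centered partial sums behave like a Brownian
    bridge, so a large sup-norm signals a residual trend (transient)."""
    z = seq - seq.mean()
    s = np.cumsum(z)
    sigma = seq.std(ddof=1)
    return np.max(np.abs(s)) / (sigma * np.sqrt(len(seq)))

# Discard leading cycles until the bridge test no longer rejects.
THRESHOLD = 1.36          # approx. 95% point of the Kolmogorov distribution
cut = 0
while cut < len(keff) - 100 and bridge_statistic(keff[cut:]) > THRESHOLD:
    cut += 10

print(cut, keff[cut:].mean())
```

Averaging only the cycles after the detected cut removes most of the initialization bias from the k-effective estimate.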
Monte Carlo calculations of electron diffusion in materials
International Nuclear Information System (INIS)
Schroeder, U.G.
1976-01-01
By means of simulated experiments, various transport problems for 10 MeV electrons are investigated. For this purpose, a special Monte Carlo programme is developed, and with it calculations are made for several material arrangements. (orig./LN)
A Monte Carlo method for estimating the correlation exponent
Mikosch, T.; Wang, Q.A.
We propose a Monte Carlo method for estimating the correlation exponent of a stationary ergodic sequence. The estimator can be considered as a bootstrap version of the classical Hill estimator. A simulation study shows that the method yields reasonable estimates.
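The idea of a Hill-type estimate of a correlation exponent can be sketched as follows. This is a plain Monte Carlo Hill estimate on interpoint distances, not the authors' exact bootstrap scheme; the dataset, pair count, and k are illustrative assumptions. For i.i.d. uniform points in the unit square the correlation dimension is 2, which gives a known answer to check against.

```python
import numpy as np

rng = np.random.default_rng(2)

# Data with a known correlation exponent: uniform points in the unit square.
x = rng.random((4000, 2))

# Monte Carlo step: sample random pairs instead of enumerating all ~8e6 pairs.
i = rng.integers(0, len(x), 40000)
j = rng.integers(0, len(x), 40000)
mask = i != j
d = np.linalg.norm(x[i[mask]] - x[j[mask]], axis=1)

# Hill-type estimator on the k smallest distances: the lower tail obeys
# P(D <= r) ~ c * r^alpha, so log-spacings of the order statistics of the
# sampled distances estimate alpha.
k = 500
d_sorted = np.sort(d)
alpha = k / np.sum(np.log(d_sorted[k] / d_sorted[:k]))
print(alpha)
```

The estimate should land near 2, with some downward boundary bias from points close to the edge of the square.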
Combinatorial geometry domain decomposition strategies for Monte Carlo simulations
Energy Technology Data Exchange (ETDEWEB)
Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z. [Institute of Applied Physics and Computational Mathematics, Beijing, 100094 (China)
2013-07-01
Analysis and modeling of nuclear reactors can lead to memory overload of a single-core processor when it comes to refined modeling. One method to solve this problem is 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which has been developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)
Calculation of toroidal fusion reactor blankets by Monte Carlo
International Nuclear Information System (INIS)
Macdonald, J.L.; Cashwell, E.D.; Everett, C.J.
1977-01-01
A brief description of the calculational method is given. The code calculates energy deposition in toroidal geometry, but is a continuous energy Monte Carlo code, treating the reaction cross sections as well as the angular scattering distributions in great detail
The Monte Carlo simulation of the Ladon photon beam facility
International Nuclear Information System (INIS)
Strangio, C.
1976-01-01
The backward Compton scattering of laser light against high-energy electrons has been simulated with a Monte Carlo method. The main features of the produced photon beam are reported, as well as a careful description of the numerical calculation.
Monte Carlo variance reduction approaches for non-Boltzmann tallies
International Nuclear Information System (INIS)
Booth, T.E.
1992-12-01
Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed
Combinatorial nuclear level density by a Monte Carlo method
International Nuclear Information System (INIS)
Cerf, N.
1994-01-01
We present a new combinatorial method for the calculation of the nuclear level density. It is based on a Monte Carlo technique, in order to avoid a direct counting procedure which is generally impracticable for high-A nuclei. The Monte Carlo simulation, making use of the Metropolis sampling scheme, allows a computationally fast estimate of the level density for many-fermion systems in large shell model spaces. We emphasize the advantages of this Monte Carlo approach, particularly concerning the prediction of the spin and parity distributions of the excited states, and compare our results with those derived from a traditional combinatorial or a statistical method. Such a Monte Carlo technique seems very promising for determining accurate level densities over a large energy range for nuclear reaction calculations
MONTE: the next generation of mission design and navigation software
Evans, Scott; Taber, William; Drain, Theodore; Smith, Jonathon; Wu, Hsi-Cheng; Guevara, Michelle; Sunseri, Richard; Evans, James
2018-03-01
The Mission analysis, Operations and Navigation Toolkit Environment (MONTE) (Sunseri et al. in NASA Tech Briefs 36(9), 2012) is an astrodynamic toolkit produced by the Mission Design and Navigation Software Group at the Jet Propulsion Laboratory. It provides a single integrated environment for all phases of deep space and Earth orbiting missions. Capabilities include: trajectory optimization and analysis, operational orbit determination, flight path control, and 2D/3D visualization. MONTE is presented to the user as an importable Python language module. This allows a simple but powerful user interface via CLUI or script. In addition, the Python interface allows MONTE to be used seamlessly with other canonical scientific programming tools such as SciPy, NumPy, and Matplotlib. MONTE is the prime operational orbit determination software for all JPL navigated missions.
Studies of Monte Carlo Modelling of Jets at ATLAS
Kar, Deepak; The ATLAS collaboration
2017-01-01
The predictions of different Monte Carlo generators for QCD jet production, both in multijets and for jets produced in association with other objects, are presented. Recent improvements in showering Monte Carlos provide new tools for assessing systematic uncertainties associated with these jets. Studies of the dependence of physical observables on the choice of shower tune parameters and new prescriptions for assessing systematic uncertainties associated with the choice of shower model and tune are presented.
Herwig: The Evolution of a Monte Carlo Simulation
CERN. Geneva
2015-01-01
Monte Carlo event generation has seen significant developments in the last 10 years starting with preparation for the LHC and then during the first LHC run. I will discuss the basic ideas behind Monte Carlo event generators and then go on to discuss these developments, focussing on the developments in Herwig(++) event generator. I will conclude by presenting the current status of event generation together with some results of the forthcoming new version of Herwig, Herwig 7.
Clinical considerations of Monte Carlo for electron radiotherapy treatment planning
International Nuclear Information System (INIS)
Faddegon, Bruce; Balogh, Judith; Mackenzie, Robert; Scora, Daryl
1998-01-01
Technical requirements for Monte Carlo based electron radiotherapy treatment planning are outlined. The targeted overall accuracy for the estimate of the delivered dose is the least restrictive of 5% in dose and 5 mm in isodose position. A system based on EGS4 and capable of achieving this accuracy is described. Experience gained in system design and commissioning is summarized. The key obstacle to widespread clinical use of Monte Carlo is the lack of a clinically acceptable measurement-based methodology for accurate commissioning
Monte Carlo method for solving a parabolic problem
Directory of Open Access Journals (Sweden)
Tian Yi
2016-01-01
In this paper, we present a numerical method based on random sampling for a parabolic problem. The method combines the Crank-Nicolson scheme with the Monte Carlo method: we first discretize the governing equations by the Crank-Nicolson method to obtain a large sparse system of linear algebraic equations, and then use the Monte Carlo method to solve this linear system. To illustrate the usefulness of the technique, we apply it to some test problems.
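The two-stage approach can be sketched on the 1D heat equation, using a Neumann-Ulam random-walk solver for the Crank-Nicolson system. Grid size, time step, and walk counts are illustrative assumptions, and the paper's exact Monte Carlo solver may differ; the sketch only shows that a diagonally dominant Crank-Nicolson system can be solved by scoring along random walks.

```python
import numpy as np

rng = np.random.default_rng(3)

# One Crank-Nicolson step for u_t = u_xx on [0,1], u = 0 at both ends.
N, dt, dx = 19, 1e-3, 1.0 / 20
lam = dt / (2 * dx**2)
u0 = np.sin(np.pi * np.linspace(dx, 1 - dx, N))      # interior values

# CN system: (I + lam*T) u1 = (I - lam*T) u0, with T = tridiag(-1, 2, -1).
T = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
A = np.eye(N) + lam * T
b = (np.eye(N) - lam * T) @ u0

# Jacobi split A = D - R gives the fixed point u = H u + f with
# H = D^{-1} R (nonnegative, row sums < 1 here) and f = D^{-1} b,
# so the Neumann series can be sampled by absorbing random walks.
diag = 1 + 2 * lam
p = lam / diag            # transition probability to each neighbour
f = b / diag

def mc_solve_component(i, walks=4000):
    """Estimate u1[i]: score f at every visited node, move to a
    neighbour with probability p each, absorb otherwise."""
    total = 0.0
    for _ in range(walks):
        pos, score = i, 0.0
        while 0 <= pos < N:               # leaving the grid = boundary (u=0)
            score += f[pos]
            r = rng.random()
            if r < p:
                pos -= 1
            elif r < 2 * p:
                pos += 1
            else:
                break                     # absorbed
        total += score
    return total / walks

u1_mc = np.array([mc_solve_component(i) for i in range(N)])
u1_exact = np.linalg.solve(A, b)
print(np.max(np.abs(u1_mc - u1_exact)))
```

Each walk is unbiased for one solution component, so accuracy improves as the square root of the number of walks; the deterministic solve is kept only as a reference here.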
NUEN-618 Class Project: Actually Implicit Monte Carlo
Energy Technology Data Exchange (ETDEWEB)
Vega, R. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brunner, T. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2017-12-14
This research describes a new method for the solution of the thermal radiative transfer (TRT) equations that is implicit in time which will be called Actually Implicit Monte Carlo (AIMC). This section aims to introduce the TRT equations, as well as the current workhorse method which is known as Implicit Monte Carlo (IMC). As the name of the method proposed here indicates, IMC is a misnomer in that it is only semi-implicit, which will be shown in this section as well.
Monte Carlo methods and applications in nuclear physics
International Nuclear Information System (INIS)
Carlson, J.
1990-01-01
Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon interaction, charge and magnetic form factors, the Coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs
Study of the Transition Flow Regime using Monte Carlo Methods
Hassan, H. A.
1999-01-01
This NASA Cooperative Agreement presents a study of the transition flow regime using Monte Carlo methods. The topics included in this final report are: 1) New Direct Simulation Monte Carlo (DSMC) procedures; 2) The DS3W and DS2A Programs; 3) Papers presented; 4) Miscellaneous Applications and Program Modifications; 5) Solution of Transitional Wake Flows at Mach 10; and 6) Turbulence Modeling of Shock-Dominated Flows with a k-Enstrophy Formulation.
Modern analysis of ion channeling data by Monte Carlo simulations
Energy Technology Data Exchange (ETDEWEB)
Nowicki, Lech [Andrzej Sołtan Institute for Nuclear Studies, ul. Hoża 69, 00-681 Warsaw (Poland)]. E-mail: lech.nowicki@fuw.edu.pl; Turos, Andrzej [Institute of Electronic Materials Technology, Wolczynska 133, 01-919 Warsaw (Poland); Ratajczak, Renata [Andrzej Sołtan Institute for Nuclear Studies, ul. Hoża 69, 00-681 Warsaw (Poland); Stonert, Anna [Andrzej Sołtan Institute for Nuclear Studies, ul. Hoża 69, 00-681 Warsaw (Poland); Garrido, Frederico [Centre de Spectrometrie Nucleaire et Spectrometrie de Masse, CNRS-IN2P3-Universite Paris-Sud, 91405 Orsay (France)
2005-10-15
The basic scheme of Monte Carlo simulation of ion channeling spectra is reformulated in terms of statistical sampling. The McChasy simulation code is described and two examples of its application are presented: calculation of the projectile flux in a uranium dioxide crystal, and defect analysis for an ion-implanted InGaAsP/InP superlattice. Virtues and pitfalls of defect analysis using Monte Carlo simulations are discussed.
Monte Carlos of the new generation: status and progress
International Nuclear Information System (INIS)
Frixione, Stefano
2005-01-01
Standard parton shower Monte Carlos are designed to give reliable descriptions of low-pT physics. In the very high-energy regime of modern colliders, this may lead to largely incorrect predictions of the basic reaction processes. This motivated the recent theoretical efforts aimed at improving Monte Carlos through the inclusion of matrix elements computed beyond the leading order in QCD. I briefly review the progress made and discuss bottom production at the Tevatron
The Physical Models and Statistical Procedures Used in the RACER Monte Carlo Code
International Nuclear Information System (INIS)
Sutton, T.M.; Brown, F.B.; Bischoff, F.G.; MacMillan, D.B.; Ellis, C.L.; Ward, J.T.; Ballinger, C.T.; Kelly, D.J.; Schindler, L.
1999-01-01
This report describes the MCV (Monte Carlo-Vectorized) Monte Carlo neutron transport code [Brown, 1982, 1983; Brown and Mendelson, 1984a]. MCV is a module in the RACER system of codes that is used for Monte Carlo reactor physics analysis. The MCV module contains all of the neutron transport and statistical analysis functions of the system, while other modules perform various input-related functions such as geometry description, material assignment, output edit specification, etc. MCV is very closely related to the 05R neutron Monte Carlo code [Irving et al., 1965] developed at Oak Ridge National Laboratory. 05R evolved into the 05RR module of the STEMB system, which was the forerunner of the RACER system. Much of the overall logic and physics treatment of 05RR has been retained and, indeed, the original verification of MCV was achieved through comparison with STEMB results. MCV has been designed to be very computationally efficient [Brown, 1981; Brown and Martin, 1984b; Brown, 1986]. It was originally programmed to make use of vector-computing architectures such as those of the CDC Cyber-205 and Cray X-MP. MCV was the first full-scale production Monte Carlo code to effectively utilize vector-processing capabilities. Subsequently, MCV was modified to utilize both distributed-memory [Sutton and Brown, 1994] and shared-memory parallelism. The code has been compiled and run on platforms ranging from 32-bit UNIX workstations to clusters of 64-bit vector-parallel supercomputers. The computational efficiency of the code allows the analyst to perform calculations using many more neutron histories than is practical with most other Monte Carlo codes, thereby yielding results with smaller statistical uncertainties. MCV also utilizes variance reduction techniques such as survival biasing, splitting, and rouletting to permit additional reduction in uncertainties. While a general-purpose neutron Monte Carlo code, MCV is optimized for reactor physics calculations. It has the
Monte Carlo sampling of fission multiplicity
Energy Technology Data Exchange (ETDEWEB)
Hendricks, J. S. (John S.)
2004-01-01
Two new methods have been developed for fission multiplicity modeling in Monte Carlo calculations. The traditional method of sampling neutron multiplicity from fission is to sample the number of neutrons above or below the average. For example, if there are 2.7 neutrons per fission, three would be chosen 70% of the time and two would be chosen 30% of the time. For many applications, particularly {sup 3}He coincidence counting, a better estimate of the true number of neutrons per fission is required. Generally, this number is estimated by sampling a Gaussian distribution about the average. However, because the tail of the Gaussian distribution is negative and negative neutrons cannot be produced, a slight positive bias can be found in the average value. For criticality calculations, the result of rejecting the negative neutrons is an increase in k{sub eff} of 0.1% in some cases. For spontaneous fission, where the average number of neutrons emitted from fission is low, the error also can be unacceptably large. If the Gaussian width approaches the average number of fissions, 10% too many fission neutrons are produced by not treating the negative Gaussian tail adequately. The first method to treat the Gaussian tail is to determine a correction offset, which then is subtracted from all sampled values of the number of neutrons produced. This offset depends on the average value for any given fission at any energy and must be computed efficiently at each fission from the non-integrable error function. The second method is to determine a corrected zero point so that all neutrons sampled between zero and the corrected zero point are killed to compensate for the negative Gaussian tail bias. Again, the zero point must be computed efficiently at each fission. Both methods give excellent results with a negligible computing time penalty. It is now possible to include the full effects of fission multiplicity without the negative Gaussian tail bias.
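Both sampling pitfalls described above can be reproduced in a few lines. The parameter values are illustrative, and the paper's error-function offset correction is not reproduced here; the sketch only shows the traditional two-point method (which preserves the mean) versus naive clamping of the Gaussian tail (which biases it upward).

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000

# Traditional two-point sampling: nu = 2.7 -> 3 with prob 0.7, 2 with prob 0.3.
nu = 2.7
two_point = np.floor(nu) + (rng.random(n) < (nu - np.floor(nu)))
print(two_point.mean())      # preserves the average multiplicity

# Gaussian sampling with naive clamping of the negative tail: for a low
# average (e.g. spontaneous fission) the rejected negative neutrons bias
# the mean upward, which is the problem the two corrections address.
nu_sf, width = 1.0, 1.0      # illustrative values
gauss = np.rint(rng.normal(nu_sf, width, n))
clamped = np.maximum(gauss, 0.0)
print(clamped.mean())        # noticeably above nu_sf
```

With these values the clamped mean comes out several percent high, consistent with the abstract's point that the negative tail must be compensated, not just rejected.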
Dosimetry applications in GATE Monte Carlo toolkit.
Papadimitroulas, Panagiotis
2017-09-01
Monte Carlo (MC) simulations are a well-established method for studying physical processes in medical physics. The purpose of this review is to present GATE dosimetry applications on diagnostic and therapeutic simulated protocols. There is a significant need for accurate quantification of the absorbed dose in several specific applications such as preclinical and pediatric studies. GATE is an open-source MC toolkit for simulating imaging, radiotherapy (RT) and dosimetry applications in a user-friendly environment, which is well validated and widely accepted by the scientific community. In RT applications, during treatment planning, it is essential to accurately assess the deposited energy and the absorbed dose per tissue/organ of interest, as well as the local statistical uncertainty. Several types of realistic dosimetric applications are described, including molecular imaging, radio-immunotherapy, radiotherapy and brachytherapy. GATE has been efficiently used in several applications, such as Dose Point Kernels, S-values and brachytherapy parameters, and has been compared against various MC codes which have been considered standard tools for decades. Furthermore, the presented studies show reliable modeling of particle beams when comparing experimental with simulated data. Examples of different dosimetric protocols are reported for individualized dosimetry and simulations combining imaging and therapy dose monitoring, with the use of modern computational phantoms. Personalization of medical protocols can be achieved by combining GATE MC simulations with anthropomorphic computational models and clinical anatomical data. This is a review study, covering several dosimetric applications of GATE and the different tools used for modeling realistic clinical acquisitions with accurate dose assessment.
Bayesian Optimal Experimental Design Using Multilevel Monte Carlo
Ben Issaid, Chaouki; Long, Quan; Scavino, Marco; Tempone, Raul
2015-01-01
Experimental design is very important since experiments are often resource-exhaustive and time-consuming. We carry out experimental design in the Bayesian framework. To measure the amount of information which can be extracted from the data in an experiment, we use the expected information gain as the utility function, which specifically is the expected logarithmic ratio between the posterior and prior distributions. Optimizing this utility function enables us to design experiments that yield the most informative data for our purpose. One of the major difficulties in evaluating the expected information gain is that the integral is nested and can be high dimensional. We propose using Multilevel Monte Carlo techniques to accelerate the computation of the nested high dimensional integral. The advantages are twofold. First, Multilevel Monte Carlo can significantly reduce the cost of the nested integral for a given tolerance, by using an optimal sample distribution among different sample averages of the inner integrals. Second, the Multilevel Monte Carlo method imposes fewer assumptions, such as the concentration of measure required by the Laplace method. We test our Multilevel Monte Carlo technique using a numerical example on the design of sensor deployment for a Darcy flow problem governed by a one-dimensional Laplace equation. We also compare the performance of Multilevel Monte Carlo, the Laplace approximation and direct double-loop Monte Carlo.
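The nested integral that Multilevel Monte Carlo accelerates is easiest to see in the direct double-loop estimator, sketched here on a linear-Gaussian toy model with a known answer. The model and sample sizes are illustrative assumptions, not the authors' Darcy-flow example.

```python
import numpy as np

rng = np.random.default_rng(5)

# Linear-Gaussian toy model: theta ~ N(0,1), y | theta ~ N(theta, sig2).
# The expected information gain is analytic: 0.5 * log(1 + 1/sig2).
sig2 = 1.0
exact = 0.5 * np.log(1.0 + 1.0 / sig2)

def log_like(y, theta):
    return -0.5 * np.log(2 * np.pi * sig2) - 0.5 * (y - theta) ** 2 / sig2

# Direct double-loop Monte Carlo: outer samples of (theta, y); inner
# average over fresh prior draws estimates the evidence p(y).
N, M = 2000, 2000
theta = rng.normal(0.0, 1.0, N)
y = theta + rng.normal(0.0, np.sqrt(sig2), N)
theta_inner = rng.normal(0.0, 1.0, M)

# log p(y_i) ~= log mean_j p(y_i | theta'_j); the N*M cost of this nested
# integration is exactly what MLMC is designed to reduce.
ll = log_like(y[:, None], theta_inner[None, :])        # shape (N, M)
log_evidence = np.log(np.mean(np.exp(ll), axis=1))
eig = np.mean(log_like(y, theta) - log_evidence)
print(eig, exact)
```

The finite inner loop gives a small positive bias of order 1/M, which is why the inner sample size (and hence the total N*M cost) must grow with the target accuracy in the double-loop approach.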
Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments
International Nuclear Information System (INIS)
Pevey, Ronald E.
2005-01-01
Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes: the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL.
Alternative implementations of the Monte Carlo power method
International Nuclear Information System (INIS)
Blomquist, R.N.; Gelbard, E.M.
2002-01-01
We compare nominal efficiencies, i.e. variances in power shapes for equal running time, of different versions of the Monte Carlo eigenvalue computation, as applied to criticality safety analysis calculations. The two main methods considered here are "conventional" Monte Carlo and the superhistory method, and both are used in criticality safety codes. Within each of these major methods, different variants are available for the main steps of the basic Monte Carlo algorithm. Thus, for example, different treatments of the fission process may vary in the extent to which they follow, in analog fashion, the details of real-world fission, or may vary in details of the methods by which they choose next-generation source sites. In general the same options are available in both the superhistory method and conventional Monte Carlo, but there seems not to have been much examination of the special properties of the two major methods and their minor variants. We find, first, that the superhistory method is just as efficient as conventional Monte Carlo and, secondly, that use of different variants of the basic algorithms may, in special cases, have a surprisingly large effect on Monte Carlo computational efficiency.
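A toy version of the conventional Monte Carlo power method can be sketched with a small fission matrix in place of continuous transport. The matrix and population sizes are made up for illustration; the sketch shows the generational structure (sample offspring sites, estimate k per generation, resample the source back to a fixed size) that both the conventional and superhistory methods build on.

```python
import numpy as np

rng = np.random.default_rng(6)

# A small nonnegative "fission matrix": F[i, j] = expected neutrons born
# in region j per source neutron in region i (illustrative values).
F = np.array([[0.4, 0.3, 0.0],
              [0.2, 0.5, 0.2],
              [0.0, 0.3, 0.6]])
k_ref = np.max(np.linalg.eigvals(F).real)   # deterministic reference

n_particles, inactive, active = 5000, 20, 80
state = rng.integers(0, 3, n_particles)     # arbitrary initial source
k_history = []

for gen in range(inactive + active):
    s = F[state].sum(axis=1)                # expected production per particle
    k_gen = s.mean()                        # generation k-effective estimate
    # Sample each particle's offspring site with probability F[i, j]/s[i],
    # then resample the population (weighted by production s) to fixed size.
    probs = F[state] / s[:, None]
    offspring = (probs.cumsum(axis=1) > rng.random((n_particles, 1))).argmax(axis=1)
    state = offspring[rng.choice(n_particles, n_particles, p=s / s.sum())]
    if gen >= inactive:                     # discard the initial transient
        k_history.append(k_gen)

print(np.mean(k_history), k_ref)
```

The generation-averaged k converges to the dominant eigenvalue of the fission matrix; the superhistory variant differs mainly in how many generations a family is followed before the source is renormalized.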