WorldWideScience

Sample records for mont blanc laboratory

  1. The research program of the Liquid Scintillation Detector (LSD) in the Mont Blanc Laboratory

    Science.gov (United States)

    Dadykin, V. L.; Yakushev, V. F.; Korchagin, P. V.; Korchagin, V. B.; Malgin, A. S.; Ryassny, F. G.; Ryazhskaya, O. G.; Talochkin, V. P.; Zatsepin, G. T.; Badino, G.

    1985-01-01

    A massive (90 tons) liquid scintillation detector (LSD) has been running since October 1984 in the Mont Blanc Laboratory at a depth of 5,200 hg/sq cm of standard rock. The research program of the experiment covers a variety of topics in particle physics and astrophysics. The performance of the detector and the main fields of research are presented, and the preliminary results are discussed.

  2. Neutrino astronomy at Mont Blanc: from LSD to LSD-2

    International Nuclear Information System (INIS)

    Saavedra, O.; Aglietta, M.; Badino, G.

    1988-01-01

    In this paper we present the upgrading of the LSD experiment, presently running in the Mont Blanc Laboratory. The data recorded during the period when supernova 1987A exploded are analysed in detail. The research program of LSD-2, the same experiment as LSD but with a higher sensitivity in the search for neutrino bursts from collapsing stars, is also discussed.

  3. Film of the year - the animated film "Mont Blanc" / Verni Leivak

    Index Scriptorium Estoniae

    Leivak, Verni, 1966-

    2002-01-01

    The Estonian Film Journalists' Union awarded the title of best film of 2001 to Priit Tender's animated film "Mont Blanc" (Eesti Joonisfilm, 2001). Also covered are the film critics' preferences among the films shown in cinemas and on television in 2001.

  4. Stopping particles in the Mont Blanc spark chamber telescopes

    Energy Technology Data Exchange (ETDEWEB)

    Bergamasco, L; Bilokon, H; Piazzoli, B E; Mannocchi, G; Picchi, P [Consiglio Nazionale delle Ricerche, Turin (Italy). Lab. di Cosmo-Geofisica; Turin Univ. (Italy). Ist. di Fisica Generale]

    1982-02-01

    We present the final results on the ratio of stopping to traversing muons as measured by two spark chamber telescopes in the Mont Blanc Station, Italy, at 4300 hg/cm². The experimental results are in agreement with the theoretical values within the limits of error.

  5. Erosion in glacial environments: the example of the Glacier des Bossons (Mont-Blanc massif, Haute-Savoie, France)

    OpenAIRE

    Godon, Cécile

    2013-01-01

    The study presented in this PhD thesis aims to better define and quantify present-day erosion processes in the glacial and proglacial domain. The Glacier des Bossons, situated in the Mont-Blanc massif (Haute-Savoie, France), is a good example of a natural and non-anthropized system which allows us to study this topic. This glacier lies on two main lithologies (the Mont-Blanc granite and the metamorphic bedrock) and this peculiarity is used to determine the origin of the glacial sediments. The sedim...

  6. Climbing Mont Blanc - A Training Site for Energy Efficient Programming on Heterogeneous Multicore Processors

    OpenAIRE

    Natvig, Lasse; Follan, Torbjørn; Støa, Simen; Magnussen, Sindre; Guirado, Antonio Garcia

    2015-01-01

    Climbing Mont Blanc (CMB) is an open online judge used for training in energy efficient programming of state-of-the-art heterogeneous multicores. It uses an Odroid-XU3 board from Hardkernel with an Exynos Octa processor and integrated power sensors. This processor is three-way heterogeneous containing 14 different cores of three different types. The board currently accepts C and C++ programs, with support for OpenCL v1.1, OpenMP 4.0 and Pthreads. Programs submitted using the graphical user in...

  7. A first comparison of Cosmo-Skymed and TerraSAR-X data over Chamonix Mont-Blanc test-site

    OpenAIRE

    Nicolas, Jean-Marie; Trouvé, Emmanuel; Fallourd, Renaud; Vernier, Flavien; Tupin, Florence; Harant, Olivier; Gay, Michel; Moreau, Luc

    2012-01-01

    This paper presents the first results obtained with satellite image time series (SITS) acquired by Cosmo-SkyMed (CSK) over the Chamonix Mont-Blanc test-site. A CSK SITS made of 39 images is merged with a TerraSAR-X SITS made of 26 images by using the orbital information and co-registration tools developed in the EFIDIR project. The results are illustrated by the computation of speckle-free images by temporal averaging, by the generation and comparison of topographi...
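
    The temporal-averaging step mentioned above can be illustrated with a minimal sketch, assuming an already co-registered amplitude stack; the synthetic scene, the speckle model and the array sizes below are placeholders, not the EFIDIR tools or the actual CSK/TerraSAR-X data (Python).

      import numpy as np

      def temporal_average(stack):
          # stack: 3-D array (time, rows, cols) of co-registered amplitude images.
          # Speckle is roughly uncorrelated between dates, so averaging N images
          # reduces its variance by about 1/N while preserving the stable scene.
          return stack.mean(axis=0)

      # Illustrative synthetic example: 39 noisy acquisitions of the same scene.
      rng = np.random.default_rng(0)
      scene = rng.uniform(0.5, 2.0, size=(100, 100))        # "true" backscatter (assumed)
      speckle = rng.exponential(1.0, size=(39, 100, 100))   # multiplicative noise model
      stack = scene * speckle
      smoothed = temporal_average(stack)
      print(float(np.std(stack[0] - scene)), float(np.std(smoothed - scene)))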

  8. On the correlation between Mont Blanc and Baksan underground detectors in February 1987

    International Nuclear Information System (INIS)

    Chudakov, A.E.

    1989-01-01

    According to the author, there is a direct correlation between the Mont Blanc (LSD) and Baksan data, two quite similar underground scintillation detectors. The idea is: if something really happens that activates the gravitational antenna (G.A.) signal and that, after 1.2 s, gives a signal in a particular scintillator, then there should be a chance to observe a quasi-simultaneous signal in another, possibly very distant scintillator. The large distance between the Baksan and LSD detectors should exclude a common electrical power supply as a possible source of correlation. Another advantage of the suggested search could be the simplicity of the statistical analysis when the duration of the signal (in the scintillation counter) is much less than the correlation time interval (1 s). In this report, the author discusses both positive and negative evidence concerning the LSD-Baksan correlation.

  9. The Mont Blanc neutrinos from SN 1987A: Could they have been monochromatic (8 MeV) tachyons with m² = -0.38 keV²?

    Science.gov (United States)

    Ehrlich, Robert

    2018-05-01

    According to conventional wisdom the 5 h early Mont Blanc burst probably was not associated with SN 1987A, but if it was genuine, some exotic physics explanation had to be responsible. Here we consider one truly exotic explanation, namely faster-than-light neutrinos having m_ν² = -0.38 keV². It is shown that the Mont Blanc burst is consistent with the distinctive signature of that explanation, i.e., an 8 MeV antineutrino line from SN 1987A. It is further shown that a model of core collapse supernovae involving dark matter particles of mass 8 MeV would in fact yield an 8 MeV antineutrino line. Moreover, that dark matter model predicts 8 MeV ν, ν̄ and e⁺e⁻ pairs from the galactic center, a place where one would expect large amounts of dark matter to collect. The resulting e⁺ would create γ-rays from the galactic center, and a fit to MeV γ-ray data yields the model's dark matter mass, as well as the calculated source temperature and angular size. These good fits give indirect experimental support for the existence of an 8 MeV antineutrino line from SN 1987A. More direct support comes from the spectrum of N ∼ 1000 events recorded by the Kamiokande-II detector on the day of SN 1987A, which appear to show an 8 MeV line atop the detector background. This ν̄ line, if genuine, has been well-hidden for 30 years because it occurs very close to the peak of the background. This fact might ordinarily justify extreme skepticism. In the present case, however, a more positive view is called for based on (a) the very high statistical significance of the result (30σ), (b) the use of a detector background independent of the SN 1987A data using a later K-II data set, and (c) the observation of an excess above the background spectrum whose central energy and width both agree with that of an 8 MeV ν̄ line broadened by 25% resolution. Most importantly, the last observation is in accord with the prior prediction of an 8 MeV ν̄ line based on the Mont Blanc data, and
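
    The timing side of this argument can be checked with a short back-of-the-envelope computation; the distance to SN 1987A used below (roughly 51.4 kpc) is an assumed input, not a value quoted in the abstract (Python).

      import math

      M2_EV2 = 0.38e6        # |m^2| in eV^2 (0.38 keV^2)
      E_EV = 8.0e6           # neutrino energy in eV (8 MeV)
      KPC_M = 3.0857e19      # metres per kiloparsec
      C = 2.998e8            # speed of light, m/s
      D_M = 51.4 * KPC_M     # assumed distance to SN 1987A

      # For E^2 = p^2 c^2 + m^2 c^4 with m^2 < 0, v/c = sqrt(1 + |m^2|/E^2) > 1.
      beta = math.sqrt(1.0 + M2_EV2 / E_EV**2)
      # Early arrival relative to light: Delta t ~ (D/c) * |m^2| / (2 E^2).
      dt_s = (D_M / C) * M2_EV2 / (2.0 * E_EV**2)
      print(f"v/c - 1 = {beta - 1:.2e}")
      print(f"early arrival = {dt_s / 3600:.1f} h")   # about 4-5 h, comparable to the 5 h offset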

  10. Multicriteria analysis of protection actions in the case of transportation of radioactive materials: Regulating the transit of type A packages through the Mont Blanc Tunnel

    International Nuclear Information System (INIS)

    Hubert, P.; Lombard, J.; Pages, P.

    1986-09-01

    The utility function approach (decision analysis) is one of the classical decision-aiding techniques that are of interest when performing ALARA analyses. In this paper a case study serves as an illustration of this technique. The problem dealt with is the setting up of a regulation applying to the transit of small radioactive material packages (type A) through the Mont Blanc Tunnel, which is a major route between France and Italy. This case study is therefore a good example of an ALARA approach applied to a safety problem which implies both a probabilistic risk assessment and the evaluation of very heterogeneous criteria.

  11. Michel Blanc 1952-2009

    CERN Multimedia

    HR Department

    2009-01-01

    We deeply regret to announce the death of Mr Michel BLANC on 27 November 2009. Mr BLANC, who was born on 4 April 1952, was a member of the IT Department and had worked at CERN since 1 January 1978. The Director-General has sent his family a message of condolence on behalf of the CERN personnel. Social Affairs It was with great sadness that we learned of the death of our friend and colleague Michel Blanc on the evening of Friday 27 November. Everyone who knew him, especially those who had spent many years with him in the Computer Centre where he worked since his arrival at CERN in 1978, but also his more recent colleagues, will always remember his good humour, his quick wit and his amazing zest for life. He made staunch friends during his thirty years at CERN and often kept in touch with them after they left the Organization. His passion for motorcycling and for walks with his wife, his two sons and his friends were some of his great joys in life. Michel took a well-earned early retirement in Octobe...

  12. Evolution of hut access facing glacier shrinkage in the Mer de Glace basin (Mont Blanc massif, France)

    Science.gov (United States)

    Mourey, Jacques; Ravanel, Ludovic

    2016-04-01

    Given the evolution of the high mountain environment due to global warming, mountaineering routes and hut accesses are increasingly affected by glacial shrinkage and concomitant gravity processes, but almost no studies have been conducted on this relationship. The aim of this research is to describe and explain the evolution over the last century of the access to the five alpine huts around the Mer de Glace (Mont Blanc massif), the largest French glacier (length = 11.5 km, area = 30 km²), a major place for Alpine tourism since 1741 and the birthplace of mountaineering, using several methods (comparing photographs, surveying, collecting historical documents). While most of the 20th century shows no marked changes, loss of ice thickness and the associated erosion of lateral moraines have generated numerous and significant changes since the 1990s. Boulder falls, rockfalls and landslides are the main geomorphological processes that affect the access routes, while the lowering of the glacier surface makes access much longer and more unstable. The danger is thus greatly increased and the access routes must be relocated and/or equipped more and more frequently (e.g. a total of 520 m of ladders has been added). This calls into question the future accessibility of the huts, jeopardizing an important part of mountaineering and its linked economy in the Mer de Glace area.

  13. The air and noise situation in the alpine transit valleys of Fréjus, Mont-Blanc, Gotthard and Brenner

    Directory of Open Access Journals (Sweden)

    Jürg Thudium

    2009-03-01

    The consequences of road traffic, in terms of noise and air quality, were analysed and compared for four alpine transit valleys (Fréjus, Mont Blanc, Gotthard and Brenner) for the year 2004. With regard to trans-alpine traffic as a whole, considerable disparities appear between the valleys studied, but also within the individual valleys. The pollutant concentrations produced per emission unit of road traffic are two to three times higher in these alpine valleys than in open, flat country, mainly because of the particular topography and climate of these valleys. At numerous monitoring points, the air pollution thresholds were exceeded. The valleys are also severely affected by noise pollution. The "amphitheatre effect" carries the noise to higher altitudes, which would not have been exposed to as much acoustic radiation if the source had been located at the same distance in open country. In addition, protection against noise reflected from the slopes is difficult. In summary, all the transit valleys studied can be considered sensitive regions.

  14. Patrick Blanc's hanging gardens / Urmas Grišakov

    Index Scriptorium Estoniae

    Grišakov, Urmas, 1942-2013

    2010-01-01

    The green walls of French botanist and garden designer Patrick Blanc, rising toward the sky, let us admire today what is born from the combination of human creativity and knowledge. With his works the designer has proved that plants can also grow successfully vertically, one above another. Patrick Blanc's website: www.verticalgardenpatrickblanc.com

  15. Surface and thickness variations of Brenva Glacier tongue (Mont Blanc, Italian Alps) in the second half of the 20th century by historical maps and aerial photogrammetry comparisons

    Science.gov (United States)

    D'Agata, C.; Zanutta, A.; Muzzu Martis, D.; Mancini, F.; Smiraglia, C.

    2003-04-01

    The aim of this contribution is the evaluation of the volumetric and surface variations of the Brenva Glacier (Mont Blanc, Italian Alps) during the second half of the 20th century, by GIS-based processing of maps and aerial photogrammetry. The Brenva Glacier is a typical debris-covered glacier, located in a valley on the SE side of Mont Blanc. The glacier covers a surface of 7 km² and has a maximum length of 7.6 km. The glacier snout reaches 1415 m a.s.l., the lowest glacier terminus in the Italian Alps. To evaluate glacier variations, different historical maps were used: 1) the 1959 map, at the scale 1:5,000, by EIRA (Ente Italiano Rilievi Aerofotogrammetrici, Firenze), from a terrestrial photogrammetric survey, published in the Bollettino del Comitato Glaciologico Italiano, 2, n. 19, 1971; 2) the 1971 map, at the scale 1:5,000, from aerial photogrammetry (Alifoto, Torino), published in the Bollettino del Comitato Glaciologico Italiano, 2, n. 20, 1972; 3) the 1988 map, at the scale 1:10,000 (Aosta Valley Region, Regional Technical Map), from a 1983 aerial photogrammetric survey; 4) the 1999 map, at the scale 1:10,000 (Aosta Valley Region, Regional Technical Map), from a 1991 aerial photogrammetry survey. For the same purpose the following aerial photographs were used: 1) the 1975 image, CGR (Italian General Company for Aerial Surveys) flight RAVDA (Administrative Autonomous Region of Aosta Valley), at the scale 1:17,000; 2) the 1991 image, CGR flight RAVDA, at the scale 1:17,000. The aerial imagery was thus acquired over a long period, from 1975 to 1991. The black and white images were scanned at a resolution suitable for the imagery scale, and several models representing the glacier tongue area, oriented using the inner and outer orientation parameters delivered with the images, were produced. The digital photogrammetric system, after orientation and matching, produces

  16. Classification of Argentinean Sauvignon blanc wines by UV spectroscopy and chemometric methods.

    Science.gov (United States)

    Azcarate, Silvana Mariela; Cantarelli, Miguel Ángel; Pellerano, Roberto Gerardo; Marchevsky, Eduardo Jorge; Camiña, José Manuel

    2013-03-01

    Argentina is an important worldwide wine producer. In this country, there are several recognized provinces that produce Sauvignon blanc wines: Neuquén, Río Negro, Mendoza, and San Juan. The analysis of the provenance of these white wines is complex and requires the use of expensive and time-consuming techniques. For this reason, this work discusses the determination of the provenance of Argentinean Sauvignon blanc wines by the use of UV spectroscopy and chemometric methods, such as principal component analysis (PCA), cluster analysis (CA), linear discriminant analysis (LDA), and partial least squares discriminant analysis (PLS-DA). The proposed method requires low-cost equipment and short analysis times in comparison with other techniques. The results are in very good agreement with the geographical origin of the Sauvignon blanc wines. This manuscript describes a method to determine the geographical origin of Sauvignon wines from Argentina. The main advantage of this method is the use of inexpensive techniques, such as UV-Vis spectroscopy. © 2013 Institute of Food Technologists®
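
    A hedged sketch of the kind of chemometric pipeline named above, with PCA feeding a linear discriminant classifier; the synthetic UV spectra, wavelength grid and preprocessing are illustrative assumptions, not the study's data or exact workflow (Python, scikit-learn).

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(1)
      n_per_province, n_wavelengths = 20, 151     # e.g. 250-400 nm in 1 nm steps (assumed)
      provinces = ["Neuquen", "Rio Negro", "Mendoza", "San Juan"]
      # Synthetic absorbance spectra with a small province-dependent offset.
      X = np.vstack([rng.normal(loc=i * 0.05, scale=0.02, size=(n_per_province, n_wavelengths))
                     for i in range(len(provinces))])
      y = np.repeat(provinces, n_per_province)

      model = make_pipeline(StandardScaler(), PCA(n_components=10),
                            LinearDiscriminantAnalysis())
      scores = cross_val_score(model, X, y, cv=5)
      print(f"cross-validated accuracy: {scores.mean():.2f}")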

  17. The December 2008 Crammont rock avalanche, Mont Blanc massif area, Italy

    Directory of Open Access Journals (Sweden)

    P. Deline

    2011-12-01

    We describe a 0.5 Mm³ rock avalanche that occurred in 2008 in the western Alps and discuss the possible roles of controlling factors in the context of current climate change. The source is located between 2410 m and 2653 m a.s.l. on Mont Crammont and is controlled by a densely fractured rock structure. The main part of the collapsed rock mass was deposited at the foot of the rock wall. A smaller part travelled much farther, reaching horizontal and vertical travel distances of 3050 m and 1560 m, respectively. The mobility of the rock mass was enhanced by channelization and snow. The rock-avalanche volume was calculated by comparison of pre- and post-event DTMs, and a geomechanical characterization of the detachment zone was extracted from LiDAR point cloud processing. Back analysis of the rock-avalanche runout suggests a two-stage event.

    There was no previous rock avalanche activity from the Mont Crammont ridge during the Holocene. The 2008 rock avalanche may have resulted from permafrost degradation in the steep rock wall, as suggested by seepage water in the scar after the collapse in spite of negative air temperatures, and by modelling of rock temperatures that indicates warm permafrost (T > −2 °C).

  18. Fixation of radioactive cerium-144 on white blood cells. Possibilities for use in physiopathology; Fixation du cérium radioactif (¹⁴⁴Ce) sur les globules blancs. Possibilités d'emploi en physiopathologie

    Energy Technology Data Exchange (ETDEWEB)

    Aeberhardt, A [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1958-07-01

    From the study of the mode of transport of cerium in the blood of various laboratory animals, after intravenous injection of carrier-free ¹⁴⁴Ce-¹⁴⁴Pr, we have been able to show the part played by the white cells in the transport of this fission product during its passage in the blood. This observation has led to an in vitro study of the modes of cerium fixation on the white cells, with a view to determining the possibilities of using this property for white cell labelling, the methods used up to the present not being entirely satisfactory. Using the method for the separation of the formed elements of the blood that we proposed in 1956, we have studied cerium fixation under various conditions: on suspensions of white cells from the rabbit, on a suspension of human white cells, and on the white cells in whole blood from the rabbit. (author)

  19. Modelling rock wall permafrost degradation in the Mont Blanc massif from the LIA to the end of the 21st century

    Science.gov (United States)

    Magnin, Florence; Josnin, Jean-Yves; Ravanel, Ludovic; Pergaud, Julien; Pohl, Benjamin; Deline, Philip

    2017-08-01

    High alpine rock wall permafrost is extremely sensitive to climate change. Its degradation has a strong impact on landscape evolution and can trigger rockfalls constituting an increasing threat to the socio-economic activities of highly frequented areas; a quantitative understanding of permafrost evolution is crucial for such communities. This study investigates the long-term evolution of permafrost in three vertical cross sections of rock wall sites between 3160 and 4300 m above sea level in the Mont Blanc massif, from the Little Ice Age (LIA) steady-state conditions to 2100. Simulations are forced with air temperature time series, including two contrasting air temperature scenarios for the 21st century representing possible lower and upper boundaries of future climate change according to the most recent models and climate change scenarios. The 2-D finite element model accounts for heat conduction and latent heat transfers, and the outputs for the current period (2010-2015) are evaluated against borehole temperature measurements and an electrical resistivity transect: permafrost conditions are remarkably well represented. Over the past two decades, permafrost has disappeared on faces with a southerly aspect up to 3300 m a.s.l. and possibly higher. Warm permafrost (i.e. > -2 °C) has extended up to 3300 and 3850 m a.s.l. in the N- and S-exposed faces respectively. During the 21st century, warm permafrost is likely to extend at least up to 4300 m a.s.l. on S-exposed rock walls and up to 3850 m a.s.l. on the N-exposed faces. In the most pessimistic case, permafrost will disappear at depth on the S-exposed rock walls up to 4300 m a.s.l., whereas warm permafrost will extend at depth in the N faces up to 3850 m a.s.l., possibly disappearing at such elevations under the influence of a nearby S face. The results are site specific and extrapolation to other sites is limited by the imbrication of local topographical and transient effects.
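
    The study's 2-D finite-element model cannot be reproduced from the abstract alone; the sketch below only illustrates its most basic ingredient, transient heat conduction into rock driven by a periodic surface temperature, reduced to 1-D, without latent heat, and with assumed parameter values (Python).

      import numpy as np

      kappa = 1.4e-6              # thermal diffusivity of rock, m^2/s (assumed)
      depth, nz = 10.0, 101       # 10 m profile, 0.1 m grid spacing
      dz = depth / (nz - 1)
      dt = 0.4 * dz**2 / kappa    # stable explicit time step
      T = np.full(nz, -3.0)       # initial ground temperature, deg C (assumed)

      mean_T, amp = -3.0, 8.0     # mean annual and seasonal surface amplitude, deg C (assumed)
      year = 365.25 * 86400.0
      t = 0.0
      for _ in range(int(10 * year / dt)):                     # run 10 model years
          T[0] = mean_T + amp * np.sin(2 * np.pi * t / year)   # surface forcing
          T[1:-1] += kappa * dt / dz**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
          T[-1] = T[-2]           # zero-flux lower boundary
          t += dt

      print("temperature at 10 m depth after 10 years: %.2f degC" % T[-1])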

  20. Deep underground multiple muons at the Mt. Blanc station

    Energy Technology Data Exchange (ETDEWEB)

    Bergamasco, L; Bilokon, H; D'Ettorre Piazzoli, B; Mannocchi, G [Consiglio Nazionale delle Ricerche, Turin (Italy). Lab. di Cosmo-Geofisica]; Castagnoli, C; Picchi, P [Consiglio Nazionale delle Ricerche, Turin (Italy). Lab. di Cosmo-Geofisica; Turin Univ. (Italy). Ist. di Fisica Generale]

    1979-12-29

    Results on multiple events recorded at the Mt. Blanc station over the last 3 years are presented. The integral energy spectrum of muons is obtained for E_μ > 1 TeV in the size range 10⁶-10⁷, which favours a multiplicity law for hadronic interactions of the form η ∼ E^(1/4).

  1. Brief communication: 3-D reconstruction of a collapsed rock pillar from Web-retrieved images and terrestrial lidar data - the 2005 event of the west face of the Drus (Mont Blanc massif)

    Science.gov (United States)

    Guerin, Antoine; Abellán, Antonio; Matasci, Battista; Jaboyedoff, Michel; Derron, Marc-Henri; Ravanel, Ludovic

    2017-07-01

    In June 2005, a series of major rockfall events completely wiped out the Bonatti Pillar located in the legendary Drus west face (Mont Blanc massif, France). Terrestrial lidar scans of the west face were acquired after this event, but no pre-event point cloud is available. Thus, in order to reconstruct the volume and the shape of the collapsed blocks, a 3-D model has been built using photogrammetry (structure-from-motion (SfM) algorithms) based on 30 pictures collected on the Web. All these pictures were taken between September 2003 and May 2005. We then reconstructed the shape and volume of the fallen compartment by comparing the SfM model with terrestrial lidar data acquired in October 2005 and November 2011. The volume is calculated to be 292,680 m³ (±5.6 %). This result is close to the value previously assessed by Ravanel and Deline (2008) for this same rock avalanche (265,000 ± 10,000 m³). The difference between these two estimates can be explained by the rounded shape of the volume determined by photogrammetry, which may lead to a volume overestimation. However, it is not excluded that the volume calculated by Ravanel and Deline (2008) is slightly underestimated, the thickness of the blocks having been assessed manually from historical photographs.
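
    The core operation behind such volume estimates is differencing two gridded surfaces and summing the elevation change over the cell area; a minimal sketch follows, in which the grids, the scar shape and the resolution are synthetic placeholders rather than the SfM or lidar models (Python).

      import numpy as np

      cell = 0.5                                   # grid resolution in metres (assumed)
      x, y = np.meshgrid(np.linspace(0, 50, 101), np.linspace(0, 50, 101))
      pre = 100.0 + 0.2 * x                        # pre-event surface (synthetic)
      post = pre - 8.0 * np.exp(-((x - 25)**2 + (y - 25)**2) / 120.0)   # post-event scar

      dh = pre - post                              # elevation loss per cell
      volume = float(np.sum(dh) * cell**2)         # m^3, positive = material removed
      print(f"estimated collapsed volume: {volume:.0f} m^3")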

  2. David Rosenthal’s Tirant lo Blanc turns 30

    Directory of Open Access Journals (Sweden)

    Jan Reinhart

    2014-12-01

    The groundbreaking English language translation of Tirant lo Blanc by New York poet and academic David Rosenthal remains dominant three decades after its initial, and celebrated, release. Rosenthal's controversially fluid and concise rendering of the Valencian classic survived a serious challenge 20 years ago from a more literal version by a well-meaning amateur translator and journeyman academic backed by a leading U.S.-based Catalan scholar. The article reviews the controversy and compares the two versions, adding comments from some of the key critics.

  3. Fifteen years of microbiological investigation in Opalinus Clay at the Mont Terri rock laboratory (Switzerland)

    Energy Technology Data Exchange (ETDEWEB)

    Leupin, O.X. [National Cooperative for the Disposal of Radioactive Waste (NAGRA), Wettingen (Switzerland); Bernier-Latmani, R.; Bagnoud, A. [Swiss Federal Office of Technology EPFL, Lausanne (Switzerland); Moors, H.; Leys, N.; Wouters, K. [Belgian Nuclear Research Centre SCK-CEN, Mol (Belgium); Stroes-Gascoyne, S. [University of Saskatchewan, Saskatoon (Canada)

    2017-04-15

    Microbiological studies related to the geological disposal of radioactive waste have been conducted at the Mont Terri rock laboratory in Opalinus Clay, a potential host rock for a deep geologic repository, since 2002. The metabolic potential of microorganisms and their response to excavation-induced effects have been investigated in undisturbed and disturbed claystone cores and in pore- (borehole) water. Results from nearly 15 years of research at the Mont Terri rock laboratory have shown that microorganisms can potentially affect the environment of a repository by influencing redox conditions, metal corrosion and gas production and consumption under favourable conditions. However, the activity of microorganisms in undisturbed Opalinus Clay is limited by the very low porosity, the low water activity, and the largely recalcitrant nature of organic matter in the claystone formation. The presence of microorganisms in numerous experiments at the Mont Terri rock laboratory has suggested that excavation activities and perturbation of the host rock combined with additional contamination during the installation of experiments in boreholes create favourable conditions for microbial activity by providing increased space, water and substrates. Thus effects resulting from microbial activity might be expected in the proximity of a geological repository i.e., in the excavation damaged zone, the engineered barriers, and first containments (the containers). (authors)

  4. Fifteen years of microbiological investigation in Opalinus Clay at the Mont Terri rock laboratory (Switzerland)

    International Nuclear Information System (INIS)

    Leupin, O.X.; Bernier-Latmani, R.; Bagnoud, A.; Moors, H.; Leys, N.; Wouters, K.; Stroes-Gascoyne, S.

    2017-01-01

    Microbiological studies related to the geological disposal of radioactive waste have been conducted at the Mont Terri rock laboratory in Opalinus Clay, a potential host rock for a deep geologic repository, since 2002. The metabolic potential of microorganisms and their response to excavation-induced effects have been investigated in undisturbed and disturbed claystone cores and in pore- (borehole) water. Results from nearly 15 years of research at the Mont Terri rock laboratory have shown that microorganisms can potentially affect the environment of a repository by influencing redox conditions, metal corrosion and gas production and consumption under favourable conditions. However, the activity of microorganisms in undisturbed Opalinus Clay is limited by the very low porosity, the low water activity, and the largely recalcitrant nature of organic matter in the claystone formation. The presence of microorganisms in numerous experiments at the Mont Terri rock laboratory has suggested that excavation activities and perturbation of the host rock combined with additional contamination during the installation of experiments in boreholes create favourable conditions for microbial activity by providing increased space, water and substrates. Thus effects resulting from microbial activity might be expected in the proximity of a geological repository i.e., in the excavation damaged zone, the engineered barriers, and first containments (the containers). (authors)

  5. The Mont Terri rock laboratory: International research in the Opalinus Clay

    International Nuclear Information System (INIS)

    Bossart, P.

    2015-01-01

    This article reports on a visit made to the rock laboratory in Mont Terri, Switzerland, where research is being done concerning rock materials that can possibly be used for the implementation of repositories for nuclear wastes. Emphasis is placed on the project’s organisation, rock geology and on-going experiments. International organisations also involved in research on nuclear waste repositories are listed. The research facilities in tunnels built in Opalinus Clay at the Mont Terri site are described. The geology of Opalinus Clay and the structures found in the research tunnels are discussed, as is the hydro-geological setting. The research programme and various institutions involved are listed and experiments carried out are noted. The facilities are now also being used for research on topics related to carbon sequestration

  6. GRS' research on clay rock in the Mont Terri underground laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Wieczorek, Klaus; Czaikowski, Oliver [Gesellschaft fuer Anlagen- und Reaktorsicherheit gGmbH, Braunschweig (Germany)

    2016-07-15

    For constructing a nuclear waste repository and for ensuring that the safety requirements are met over very long time periods, thorough knowledge about the safety-relevant processes occurring in the coupled system of waste containers, engineered barriers, and the host rock is indispensable. For such targeted research work, the Mont Terri rock laboratory is a unique facility where repository research is performed in a clay rock environment. It is run by 16 international partners, and a great variety of questions are investigated. Some of the work in which GRS, as one of the Mont Terri partners, is involved is presented in this article. The focus is on the thermal, hydraulic and mechanical behaviour of the host rock and/or engineered barriers.

  7. On the event detected by the Mont Blanc underground neutrino detector on February 23, 1987

    Energy Technology Data Exchange (ETDEWEB)

    Dadykin, V L; Zatsepin, G T; Korchagin, V B

    1988-02-01

    The event detected by the Mont Blanc Soviet-Italian scintillation detector on February 23, 1987 at 2:52:37 is discussed. The corrected energies of the pulses of the event and the probability of imitation of the event by the background are presented.

  8. Effect of foliar nitrogen and sulphur application on aromatic expression of Vitis vinifera L. cv. Sauvignon blanc

    Directory of Open Access Journals (Sweden)

    Florian Lacroux

    2008-09-01

    Significance and impact of the study: Vine nitrogen deficiency can negatively impact grape aroma potential. Soil nitrogen application can increase vine nitrogen status, but it has several drawbacks: it increases vigour and enhances Botrytis susceptibility. This study shows that foliar N and foliar N + S applications can improve vine nitrogen status and enhance aroma expression in Sauvignon blanc wines without the negative impact on vigour and Botrytis susceptibility. Although this study was carried out on Sauvignon blanc vines, it is likely that foliar N or foliar N + S applications will have similar effects on other grapevine varieties containing volatile thiols (Colombard, Riesling, Petit Manseng and Sémillon).

  9. Protein characterization of Roditis Greek grape variety and Sauvignon blanc and changes in certain nitrogen compounds during alcoholic fermentation

    Directory of Open Access Journals (Sweden)

    Z. G. Nakopoulou

    2006-09-01

    Must and wine samples of the Greek grape variety Roditis and the French variety Sauvignon blanc were analysed in order to obtain further knowledge of the protein profile of Roditis and to follow the evolution of grape proteins during the alcoholic fermentation of Roditis and Sauvignon blanc musts. For these purposes, protein samples were isolated from must and wine samples by ammonium sulphate precipitation and subjected to sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE). Eleven and nine bands with molecular weights between 11.1 and 64.4 kDa were detected on the electropherograms of Roditis and Sauvignon blanc must and wine samples respectively, using Coomassie Brilliant Blue R-250 and silver staining methods. Two protein fractions of the must and wine samples, with molecular weights of 64.4 kDa and 34.4 kDa, were identified as glycoproteins in the profile of the Greek grape variety according to periodic acid-silver staining, while only one must and wine fraction, of 64.4 kDa, reacted positively with this stain for Sauvignon blanc. None of the low molecular weight protein fractions was found to be responsible for haze formation. A modified (Bradford) dye-binding procedure was used for the determination of the soluble proteins of musts and wines. Free amino nitrogen and the contents of neutral and acidic polysaccharides in the protein fractions after chromatography on Sephadex G-25 were also analysed.

  10. Molecular cloning and expression of gene encoding aromatic amino acid decarboxylase in 'Vidal blanc' grape berries.

    Science.gov (United States)

    Pan, Qiu-Hong; Chen, Fang; Zhu, Bao-Qing; Ma, Li-Yan; Li, Li; Li, Jing-Ming

    2012-04-01

    The pleasantly fruity and floral 2-phenylethanol is a dominant aroma compound in post-ripening 'Vidal blanc' grapes. However, to date little has been reported about its synthetic pathway in grapevine. In the present study, a full-length cDNA of VvAADC (encoding aromatic amino acid decarboxylase) was first cloned from the berries of 'Vidal blanc', an interspecific hybrid variety of Vitis vinifera × Vitis riparia. This sequence encodes a complete open reading frame of 482 amino acids with a calculated molecular mass of 54 kDa and an isoelectric point (pI) of 5.73. The deduced amino acid sequence shared about 79% identity with that of aromatic L-amino acid decarboxylases (AADCs) from tomato. Real-time PCR analysis indicated that VvAADC transcript abundance showed a small peak at 110 days after full bloom and then a continuous increase at the berry post-ripening stage, which was consistent with the accumulation of 2-phenylethanol but did not correspond to the trends of two potential intermediates, phenethylamine and 2-phenylacetaldehyde. Furthermore, phenylalanine still exhibited a continuous increase even in the post-ripening period. It is thus suggested that a 2-phenylethanol biosynthetic pathway mediated by AADC exists in grape berries, but that it possibly contributes little to the considerable accumulation of 2-phenylethanol in post-ripening 'Vidal blanc' grapes.

  11. Instability of a highly vulnerable high alpine rock ridge: the lower Arête des Cosmiques (Mont Blanc massif, France)

    Science.gov (United States)

    Ravanel, L.; Deline, P.; Lambiel, C.; Vincent, C.

    2012-04-01

    Glacier retreat and permafrost degradation are increasingly thought to explain the growing instability of rock slopes and rock ridges in high mountain environments. The hot summers with numerous rockfalls experienced over the last two decades in the Alps have indeed helped to test and strengthen the hypothesis of a strong correlation between rockfalls and global warming through these two cryospheric factors. Rockfalls from recently deglaciated and/or thawing areas may have very important economic and social implications for high mountain infrastructure and be a fatal hazard for mountaineers. At high mountain sites characterized by infrastructure that can be affected by rockfalls, the monitoring of rock slopes, permafrost and glaciers is thus an essential element for the sustainability of the infrastructure and for the knowledge and management of risks. Our study focuses on a particularly active area of the Mont Blanc massif (France), the lower Arête des Cosmiques, on which the very popular Refuge des Cosmiques (3613 m a.s.l.) is located. Since 1998, when a rockfall threatened part of the refuge and forced major stabilization works, observations have allowed the identification of 10 detachments (20 m³ to > 1000 m³), especially on the SE face of the ridge. Since 2009, this face has been surveyed yearly by terrestrial laser scanning to obtain high-resolution 3D models. Their diachronic comparison gives precise measurements of the evolution of the rock slope. Eight rock detachments have thus been documented (0.7 m³ to 256.2 m³). Rock temperature measurements at the ridge and at the nearby Aiguille du Midi (3842 m a.s.l.), and observations of the evolution of the underlying Glacier du Géant, have enabled a better understanding of the origin of the strong dynamics of this highly vulnerable area: (i) rock temperature data suggest the presence of warm permafrost (i.e. close to 0°C) from the first metres to depth in the SE face, and cold permafrost in the NW face; (ii) as suggested by the

  12. Kinetic Monte Carlo simulations of water ice porosity: extrapolations of deposition parameters from the laboratory to interstellar space

    Science.gov (United States)

    Clements, Aspen R.; Berk, Brandon; Cooke, Ilsa R.; Garrod, Robin T.

    2018-02-01

    Using an off-lattice kinetic Monte Carlo model we reproduce experimental laboratory trends in the density of amorphous solid water (ASW) for varied deposition angle, rate and surface temperature. Extrapolation of the model to conditions appropriate to protoplanetary disks and interstellar dark clouds indicates that these ices may be less porous than laboratory ices.
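
    The event-selection loop at the heart of kinetic Monte Carlo methods (rate-weighted event choice plus exponentially distributed waiting times) can be sketched in a few lines; the two events and their rates below are placeholders, not the off-lattice deposition and diffusion processes of the actual model (Python).

      import math
      import random

      rates = {"deposit": 1.0, "diffuse": 5.0}     # events per unit time (assumed)
      counts = {name: 0 for name in rates}
      t, t_end = 0.0, 1000.0
      rng = random.Random(42)

      while t < t_end:
          total = sum(rates.values())
          t += -math.log(1.0 - rng.random()) / total   # exponential waiting time
          r, acc = rng.random() * total, 0.0
          for name, rate in rates.items():             # pick an event proportional to its rate
              acc += rate
              if r <= acc:
                  counts[name] += 1
                  break

      print(counts)   # diffusion should occur roughly 5x as often as deposition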

  13. Snow control on active layer and permafrost in steep alpine rock walls (Aiguille du Midi, 3842 m a.s.l, Mont Blanc massif)

    Science.gov (United States)

    Magnin, Florence; Westermann, Sebastian; Pogliotti, Paolo; Ravanel, Ludovic; Deline, Philip

    2016-04-01

    Permafrost degradation through the thickening of the active layer and rising temperatures at depth is a crucial process for rock wall stability. The ongoing increase in rockfalls observed during hot periods in mid-latitude mountain ranges is regarded as a result of permafrost degradation. However, the short-term thermal dynamics of alpine rock walls are poorly understood since they result from complex processes related to the interaction of local climate variables, heterogeneous snow cover and heat transfers. As a consequence, steady-state conditions and long-term changes, which can be approached with simpler processes mainly related to air temperature, solar radiation and heat conduction, have been the most commonly studied dynamics so far. The effect of snow on bedrock surface temperature is increasingly investigated and has already been demonstrated to be an essential factor in permafrost distribution. Nevertheless, its effect on the year-to-year changes of the active layer thickness and of the permafrost temperature in steep alpine bedrock has not been investigated yet, partly due to the lack of appropriate data. We explore the role of snow accumulation on the active layer and permafrost thermal regime of steep rock walls at a high-elevation site, the Aiguille du Midi (AdM, 3842 m a.s.l., Mont Blanc massif, Western European Alps) by means of a multi-method approach. We first analyse six years of temperature records in three 10-m-deep boreholes. Then we describe the snow accumulation patterns on two rock faces by means of automatically processed camera records. Finally, sensitivity analyses of the active layer thickness and permafrost temperature to the timing and magnitude of snow accumulation are performed using the numerical permafrost model CryoGrid 3. The energy balance module is forced with local meteorological measurements on the AdM S face and validated with surface temperature measurements at the weather station location. The heat conduction scheme is calibrated with

  14. Climbing Mont Blanc and Scalability

    OpenAIRE

    Chavez, Christian

    2016-01-01

    This thesis details a proposed system implementation upgrade for the CMB system, accessible at climb.idi.ntnu.no, which profiles C/C++ code for its energy efficiency on an Odroid-XU3 board, which utilises a Samsung Exynos 5 Octa CPU and has an ARM Mali-T628 GPU. Our proposed system implementation improves the robustness of the code base and its execution, in addition to permitting an increased throughput of submissions profiled by the system with the implementation's dispatcher whic...

  15. Twenty years of research at the Mont Terri rock laboratory: what we have learnt

    Energy Technology Data Exchange (ETDEWEB)

    Bossart, P. [Federal Office of Topography swisstopo, Wabern (Switzerland)

    2017-04-15

    The 20 papers in this Special Issue address questions related to the safe deep geological disposal of radioactive waste. Here we summarize the main results of these papers related to issues such as: formation of the excavation damaged zone, self-sealing processes, thermo-hydro-mechanical processes, anaerobic corrosion, hydrogen production and effects of microbial activity, and transport and retention processes of radionuclides. In addition, we clarify the question of transferability of results to other sites and programs and the role of rock laboratories for cooperation and training. Finally, we address the important role of the Mont Terri rock laboratory for the public acceptance of radioactive waste disposal. (author)

  16. Water quality of the Bandama-Blanc (Côte d'Ivoire) and of its ...

    African Journals Online (AJOL)

    The ecological quality of the waters of localities subject to artisanal and clandestine gold mining along the Bandama-Blanc and its tributaries was studied between 1 and 15 April 2015. Phytoplankton was sampled using a hydrological bottle and a plankton net, while the.

  17. JPRS Report Near East & South Asia

    Science.gov (United States)

    1991-02-22

    streets of Yerevan has not forgotten the past and the genocide and keeps his eyes fixed on Mount Ararat. Birand is not mistaken. It is Yesayi... five frozen accounts (Lotus, Tulip, Mont Blanc, Svenska Inc and an unidentified account) be passed on to the Indian investigators. Even if the Cantonal... accounts of 'Tulip,' 'Lotus' and 'Mont Blanc' (these are accused numbers 11, 12 and 13 respectively), and certain public servants of the government

  18. Influence of Fermentation Temperature, Yeast Strain, and Grape Juice on the Aroma Chemistry and Sensory Profile of Sauvignon Blanc Wines.

    Science.gov (United States)

    Deed, Rebecca C; Fedrizzi, Bruno; Gardner, Richard C

    2017-10-11

    Sauvignon blanc wine, balanced by herbaceous and tropical aromas, is fermented at low temperatures (10-15 °C). Anecdotal accounts from winemakers suggest that cold fermentations produce and retain more "fruity" aroma compounds; nonetheless, studies have not confirmed why low temperatures are optimal for Sauvignon blanc. Thirty-two aroma compounds were quantitated from two Marlborough Sauvignon blanc juices fermented at 12.5 and 25 °C, using Saccharomyces cerevisiae strains EC1118, L-1528, M2, and X5. Fourteen compounds were responsible for driving differences in aroma chemistry. The 12.5 °C-fermented wines had lower 3-mercaptohexan-1-ol (3MH) and higher alcohols but increased fruity acetate esters. However, a sensory panel did not find a significant difference between fruitiness in 75% of wine pairs based on fermentation temperature, in spite of chemical differences. For wine pairs with significant differences (25%), the 25 °C-fermented wines were fruitier than the 12.5 °C-fermented wines, with high fruitiness associated with 3MH. We propose that the benefits of low fermentation temperatures are not derived from increased fruitiness but a better balance between fruitiness and greenness. Even so, since 75% of wines showed no significant difference, higher fermentation temperatures could be utilized without detriment, lowering costs for the wine industry.

  19. Monte Carlo analysis of the Neutron Standards Laboratory of the CIEMAT

    International Nuclear Information System (INIS)

    Vega C, H. R.; Mendez V, R.; Guzman G, K. A.

    2014-10-01

    The neutron field produced by the calibration sources of the Neutron Standards Laboratory of the Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas (CIEMAT) was characterized by means of Monte Carlo methods. The laboratory has two neutron calibration sources, ²⁴¹AmBe and ²⁵²Cf, which are stored in a water pool and are placed on the calibration bench using remotely controlled systems. To characterize the neutron field, a three-dimensional model of the room was built that included the stainless steel bench, the irradiation table and the storage pool. The source model included the double steel encapsulation as cladding. To determine the effect produced by the presence of the different components of the room, the neutron spectra, the total flux and the ambient dose equivalent rate at 100 cm from the source were considered during the characterization. The presence of the walls, floor and ceiling of the room causes the largest modification of the spectra and of the integral values of the flux and the ambient dose equivalent rate. (Author)
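
    A trivial free-field cross-check of the kind of quantity considered here is the fluence rate at 100 cm from an isotropic point source; the emission rate below is an assumed example value, not a CIEMAT source strength (Python).

      import math

      S = 2.0e7      # neutron emission rate of the source, n/s (assumed example)
      r = 100.0      # distance from the source, cm
      phi = S / (4.0 * math.pi * r**2)     # free-field fluence rate, n cm^-2 s^-1
      print(f"free-field fluence rate at {r:.0f} cm: {phi:.1f} n cm^-2 s^-1")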

  20. Monte Carlo simulation of muon radiation environment in China Jinping Underground Laboratory

    International Nuclear Information System (INIS)

    Su Jian; Zeng Zhi; Liu Yue; Yue Qian; Ma Hao; Cheng Jianping

    2012-01-01

    The muon radiation background of the China Jinping Underground Laboratory (CJPL) was simulated by the Monte Carlo method. Based on the Gaisser formula and the MUSIC code, a model of the cosmic-ray muons was established. The yield and the average energy of muon-induced photons and muon-induced neutrons were then simulated with FLUKA. Using a single-energy approximation, the contribution of secondary photons and neutrons to the radiation background of the shielding structure was evaluated. The estimated results show that the average energy of the residual muons is 369 GeV and the flux is 3.17 × 10⁻⁶ m⁻² · s⁻¹. The fluence rate of secondary photons is about 1.57 × 10⁻⁴ m⁻² · s⁻¹, and the fluence rate of secondary neutrons is about 8.37 × 10⁻⁷ m⁻² · s⁻¹. The muon radiation background of CJPL is lower than those of most other underground laboratories in the world. (authors)
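
    The Gaisser formula referred to above is the standard parametrization of the sea-level muon spectrum that such simulations typically sample before propagating muons through the rock overburden with MUSIC; the sketch below reproduces only that surface spectrum, not the propagation or the FLUKA stage (Python).

      def gaisser_flux(E_GeV, cos_theta):
          # Differential sea-level muon flux dN/(dE dOmega) in cm^-2 s^-1 sr^-1 GeV^-1,
          # valid roughly for E >~ 100/cos_theta GeV, where muon decay and Earth
          # curvature can be neglected.
          return 0.14 * E_GeV**-2.7 * (
              1.0 / (1.0 + 1.1 * E_GeV * cos_theta / 115.0)
              + 0.054 / (1.0 + 1.1 * E_GeV * cos_theta / 850.0)
          )

      # Example: vertical muons at 1 TeV.
      print(f"{gaisser_flux(1000.0, 1.0):.3e} cm^-2 s^-1 sr^-1 GeV^-1")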

  1. An artist intended to paint Mont Blanc

    Index Scriptorium Estoniae

    2007-01-01

    Marco Evaristti, a Danish avant-garde artist of Chilean origin who intended to paint the summit of Mont Blanc pink, was stopped by French police. Includes the artist's comments on his art project, the Mont Blanc Project.

  2. Calculation of electron transport in Ar/N2 and He/Kr gas mixtures: implications for validity of the Blanc's law method

    International Nuclear Information System (INIS)

    Wang, Y.; Van Brunt, R.J.

    1997-01-01

    The electron drift velocities and corresponding mean energies have been calculated numerically using an approximate two-term solution of the Boltzmann transport equation for Ar/N2 gas mixtures at electric field-to-gas density ratios (E/N) below 2.0 × 10⁻²⁰ V m² (20 Td) and for He/Kr mixtures at E/N below 5.0 × 10⁻²¹ V m² (5.0 Td). The results are compared with predictions obtained from a method proposed by Chiflikian based on an "analog of Blanc's law" [Phys. Plasmas 2, 3902 (1995)]. Large differences are found between the results derived from the Blanc's law method and those found here from solutions of the transport equation that indicate serious errors and limitations associated with use of the Blanc's law method to compute drift velocities in gas mixtures. copyright 1997 American Institute of Physics
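
    For reference, the classical Blanc's-law composition rule that the "analog of Blanc's law" generalizes takes only a few lines: at a fixed E/N the reciprocal mobility of a mixture is the mole-fraction-weighted sum of the reciprocal pure-gas mobilities. The numerical mobilities below are placeholders, not values for Ar/N2 or He/Kr (Python).

      def blanc_mobility(fractions, mobilities):
          # 1/K_mix = sum_i x_i / K_i, with mole fractions x_i summing to 1.
          assert abs(sum(fractions) - 1.0) < 1e-9
          return 1.0 / sum(x / k for x, k in zip(fractions, mobilities))

      # Hypothetical pure-gas mobilities (arbitrary units) for an 80/20 mixture:
      print(blanc_mobility([0.8, 0.2], [2.0, 10.0]))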

  3. Litho- and biostratigraphy of the Opalinus Clay and bounding formations in the Mont Terri rock laboratory (Switzerland)

    International Nuclear Information System (INIS)

    Hostettler, B.; Reisdorf, A. G.; Jaeggi, D.

    2017-01-01

    A 250 m-deep inclined well, the Mont Terri BDB-1, was drilled through the Jurassic Opalinus Clay and its bounding formations at the Mont Terri rock laboratory (NW Switzerland). For the first time, a continuous section from (oldest to youngest) the topmost members of the Staffelegg Formation to the basal layers of the Hauptrogenstein Formation is now available in the Mont Terri area. We extensively studied the drill core for lithostratigraphy and biostratigraphy, drawing upon three sections from the Mont Terri area. The macropaleontological, micropaleontological, and palynostratigraphical data are complementary, not only spatially but they also cover almost all biozones from the Late Toarcian to the Early Bajocian. We ran a suite of geophysical logs to determine formational and intraformational boundaries based on clay content in the BDB-1 well. In the framework of an interdisciplinary study, analysis of the above-mentioned formations permitted us to process and derive new and substantial data for the Mont Terri area in a straightforward way. Some parts of the lithologic inventory, stratigraphic architecture, thickness variations, and biostratigraphic classification of the studied formations deviate considerably from occurrences in northern Switzerland that crop out further to the east. For instance, with the exception of the Sissach Member, no further lithostratigraphic subdivision into members is proposed for the Passwang Formation. Also noteworthy is that the ca. 130 m-thick Opalinus Clay in the BDB-1 core is 20 m thinner than the equivalent section found in the Mont Terri tunnel. The lowermost 38 m of the Opalinus Clay can be attributed chronostratigraphically solely to the Aalensis Zone (Late Toarcian). Deposition of the Opalinus Clay began at the same time farther east in northern Switzerland (Aalensis Subzone, Aalensis Zone), but in the Mont Terri area the sedimentation rate was two or three orders of magnitude higher. (authors)

  4. Litho- and biostratigraphy of the Opalinus Clay and bounding formations in the Mont Terri rock laboratory (Switzerland)

    Energy Technology Data Exchange (ETDEWEB)

    Hostettler, B. [Naturhistorisches Museum der Burgergemeinde Berne, Berne (Switzerland)]; Reisdorf, A. G. [Geologisch-Paläontologisches Institut, Universität Basle, Basle (Switzerland)]; Jaeggi, D. [Swisstopo, Federal Office of Topography, Wabern (Switzerland)]; and others

    2017-04-15

    A 250 m-deep inclined well, the Mont Terri BDB-1, was drilled through the Jurassic Opalinus Clay and its bounding formations at the Mont Terri rock laboratory (NW Switzerland). For the first time, a continuous section from (oldest to youngest) the topmost members of the Staffelegg Formation to the basal layers of the Hauptrogenstein Formation is now available in the Mont Terri area. We extensively studied the drill core for lithostratigraphy and biostratigraphy, drawing upon three sections from the Mont Terri area. The macropaleontological, micropaleontological, and palynostratigraphical data are complementary, not only spatially but they also cover almost all biozones from the Late Toarcian to the Early Bajocian. We ran a suite of geophysical logs to determine formational and intraformational boundaries based on clay content in the BDB-1 well. In the framework of an interdisciplinary study, analysis of the above-mentioned formations permitted us to process and derive new and substantial data for the Mont Terri area in a straightforward way. Some parts of the lithologic inventory, stratigraphic architecture, thickness variations, and biostratigraphic classification of the studied formations deviate considerably from occurrences in northern Switzerland that crop out further to the east. For instance, with the exception of the Sissach Member, no further lithostratigraphic subdivision into members is proposed for the Passwang Formation. Also noteworthy is that the ca. 130 m-thick Opalinus Clay in the BDB-1 core is 20 m thinner than the equivalent section found in the Mont Terri tunnel. The lowermost 38 m of the Opalinus Clay can be attributed chronostratigraphically solely to the Aalensis Zone (Late Toarcian). Deposition of the Opalinus Clay began at the same time farther east in northern Switzerland (Aalensis Subzone, Aalensis Zone), but in the Mont Terri area the sedimentation rate was two or three orders of magnitude higher. (authors)

  5. The MC21 Monte Carlo Transport Code

    International Nuclear Information System (INIS)

    Sutton TM; Donovan TJ; Trumbull TH; Dobreff PS; Caro E; Griesheimer DP; Tyburski LJ; Carpenter DC; Joo H

    2007-01-01

    MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or "tool of last resort" and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities

  6. Exploration of consumer perception of Sauvignon Blanc wines with enhanced aroma properties using two different descriptive methods.

    Science.gov (United States)

    Lezaeta, Alvaro; Bordeu, Edmundo; Næs, Tormod; Varela, Paula

    2017-09-01

    The aim of this study was to evaluate consumers' perception of a complex set of stimuli such as aromatically enriched wines. For that, two consumer-based profiling methods, run concurrently with overall liking measurements, were compared: projective mapping based on choice or preference (PM-C), a newly proposed method, and check-all-that-apply (CATA) questions with an ideal sample, a more established, consumer-based method for product optimization. Reserve bottling and regular bottling of Sauvignon Blanc wines from three wineries were aromatically enriched with natural aromas collected by condensation during wine fermentation. A total of 144 consumers were enrolled in the study. The results revealed that both consumer-based methods highlighted the positive effect of aromatic enrichment on consumer perception and acceptance. However, PM-C generated a very detailed description in which consumers focused less on the sensory aspects and more on usage, attitudes, and the reasons behind their choices, providing a deeper understanding of the drivers of liking/disliking of enriched Sauvignon Blanc wines. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Experience with the Monte Carlo Method

    Energy Technology Data Exchange (ETDEWEB)

    Hussein, E M.A. [Department of Mechanical Engineering University of New Brunswick, Fredericton, N.B., (Canada)

    2007-06-15

    Monte Carlo simulation of radiation transport provides a powerful research and design tool that resembles in many aspects laboratory experiments. Moreover, Monte Carlo simulations can provide an insight not attainable in the laboratory. However, the Monte Carlo method has its limitations, which if not taken into account can result in misleading conclusions. This paper will present the experience of this author, over almost three decades, in the use of the Monte Carlo method for a variety of applications. Examples will be shown on how the method was used to explore new ideas, as a parametric study and design optimization tool, and to analyze experimental data. The consequences of not accounting in detail for detector response and the scattering of radiation by surrounding structures are two of the examples that will be presented to demonstrate the pitfall of condensed.

  8. Experience with the Monte Carlo Method

    International Nuclear Information System (INIS)

    Hussein, E.M.A.

    2007-01-01

    Monte Carlo simulation of radiation transport provides a powerful research and design tool that resembles in many aspects laboratory experiments. Moreover, Monte Carlo simulations can provide an insight not attainable in the laboratory. However, the Monte Carlo method has its limitations, which if not taken into account can result in misleading conclusions. This paper will present the experience of this author, over almost three decades, in the use of the Monte Carlo method for a variety of applications. Examples will be shown on how the method was used to explore new ideas, as a parametric study and design optimization tool, and to analyze experimental data. The consequences of not accounting in detail for detector response and the scattering of radiation by surrounding structures are two of the examples that will be presented to demonstrate the pitfall of condensed

  9. LAPP - Annecy le Vieux Particle Physics Laboratory. Activity report 1996-1997

    International Nuclear Information System (INIS)

    Colas, Jacques; Minard, Marie-Noelle; Decamp, Daniel; Marion, Frederique; Drancourt, Cyril; Riva, Vanessa; Berger, Nicole; Bombar, Claudine; Dromby, Gerard

    2004-01-01

    LAPP is a high energy physics laboratory founded in 1976 and is one of the 19 laboratories of IN2P3 (National Institute of Nuclear and Particle Physics), an institute of the CNRS (National Centre for Scientific Research). LAPP is a joint research facility of the University Savoie Mont Blanc (USMB) and the CNRS. Research carried out at LAPP aims at understanding elementary particles and the fundamental interactions between them, as well as exploring the connections between the infinitesimally small and the unbelievably big. Among other subjects, LAPP teams try to understand the origin of the mass of particles, the mystery of dark matter and what happened to the anti-matter that was present in the early universe. LAPP researchers work in close contact with phenomenologist teams from LAPTh, a theory laboratory hosted in the same building. LAPP teams have also worked for several decades on understanding neutrinos, those almost massless elementary particles with amazing transformation properties. They took part in the design and realization of several experiments. Other LAPP teams collaborate in experiments studying signals from the cosmos. This document presents the activities of the laboratory during the years 1996-1997: 1 - Presentation of LAPP; 2 - Data acquisition experiments: e⁺e⁻ annihilations at LEP (standard model and beyond the standard model - ALEPH, Study of hadronic final state events and Search for supersymmetric particles with the L3 detector); Neutrino experiments (neutrino oscillation search at 1 km from the Chooz reactors, search for neutrino oscillations at the CERN Wide Band neutrino beam - NOMAD); Quark-Gluon plasma; Hadronic spectroscopy; 3 - Experiments under preparation (CP violation study - BABAR, Anti-Matter Spectrometer in Space - AMS, Search for gravitational waves - VIRGO, Search for the Higgs boson - ATLAS and CMS); 4 - Technical departments; 5 - Theoretical physics; 6 - Other activities

  10. Monte Carlo Transport for Electron Thermal Transport

    Science.gov (United States)

    Chenhall, Jeffrey; Cao, Duc; Moses, Gregory

    2015-11-01

    The iSNB (implicit Schurtz-Nicolai-Busquet) multigroup electron thermal transport method of Cao et al. is adapted into a Monte Carlo transport method in order to better model the effects of non-local behavior. The end goal is a hybrid transport-diffusion method that combines Monte Carlo transport with discrete diffusion Monte Carlo (DDMC). The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the method will be presented. This work was supported by Sandia National Laboratory - Albuquerque and the University of Rochester Laboratory for Laser Energetics.

  11. Monte Carlo analysis of the Neutron Standards Laboratory of the CIEMAT; Analisis Monte Carlo del Laboratorio de Patrones Neutronicos del CIEMAT

    Energy Technology Data Exchange (ETDEWEB)

    Vega C, H. R. [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas (Mexico); Mendez V, R. [Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas, Av. Complutense 40, 28040 Madrid (Spain); Guzman G, K. A., E-mail: fermineutron@yahoo.com [Universidad Politecnica de Madrid, Departamento de Ingenieria Nuclear, C. Jose Gutierrez Abascal 2, 28006 Madrid (Spain)

    2014-10-15

    The neutron field produced by the calibration sources of the Neutron Standards Laboratory of the Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas (CIEMAT) was characterized by means of Monte Carlo methods. The laboratory has two neutron calibration sources, {sup 241}AmBe and {sup 252}Cf, which are stored in a water pool and placed on the calibration bench using remotely controlled systems. To characterize the neutron field, a three-dimensional model of the room was built that included the stainless steel bench, the irradiation table and the storage pool. The source model included the double steel encapsulation as cladding. In order to determine the effect produced by the presence of the different components of the room, the neutron spectra, the total flux and the ambient dose equivalent rate at 100 cm from the source were evaluated during the characterization. The presence of the walls, floor and ceiling of the room causes the largest modification of the spectra and of the integral values of the flux and the ambient dose equivalent rate. (Author)

  12. Geographical origin of Sauvignon Blanc wines predicted by mass spectrometry and metal oxide based electronic nose

    Energy Technology Data Exchange (ETDEWEB)

    Berna, Amalia Z., E-mail: Amalia.Berna@csiro.au [CSIRO Entomology and Food Futures Flagship, PO Box 1700, Canberra, ACT 2601 (Australia); Trowell, Stephen [CSIRO Entomology and Food Futures Flagship, PO Box 1700, Canberra, ACT 2601 (Australia); Clifford, David [CSIRO Mathematical and Information Sciences, Locked Bag 17, North Ryde, NSW 1670 (Australia); Cynkar, Wies; Cozzolino, Daniel [The Australian Wine Research Institute, Waite Road, Urrbrae, PO Box 197, Adelaide, SA 5064 (Australia)

    2009-08-26

    Analysis of 34 Sauvignon Blanc wine samples from three different countries and six regions was performed by gas chromatography-mass spectrometry (GC-MS). Linear discriminant analysis (LDA) showed that there were three distinct clusters or classes of wines with different aroma profiles. Wines from the Loire region in France and Australian wines from Tasmania and Western Australia were found to have similar aroma patterns. New Zealand wines from the Marlborough region as well as the Australian ones from Victoria were grouped together based on the volatile composition. Wines from South Australia region formed one discrete class. Seven analytes, most of them esters, were found to be the relevant chemical compounds that characterized the classes. The grouping information obtained by GC-MS, was used to train metal oxide based electronic (MOS-Enose) and mass spectrometry based electronic (MS-Enose) noses. The combined use of solid phase microextraction (SPME) and ethanol removal prior to MOS-Enose analysis, allowed an average error of prediction of the regional origins of Sauvignon Blanc wines of 6.5% compared to 24% when static headspace (SHS) was employed. For MS-Enose, the misclassification rate was higher probably due to the requirement to delimit the m/z range considered.
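
    The workflow described above (volatile profiles measured by GC-MS, followed by linear discriminant analysis to separate regional classes) can be illustrated with a minimal Python sketch. The feature matrix, class labels and parameters below are invented for illustration and are not the authors' data or pipeline; only the general LDA-plus-cross-validation pattern is shown.

        # Hypothetical sketch: LDA classification of wines from volatile profiles.
        # Rows of X = wine samples, columns = peak areas of selected volatiles.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.lognormal(mean=0.0, sigma=1.0, size=(34, 7))   # 34 samples, 7 volatiles (invented)
        y = np.repeat([0, 1, 2], [12, 11, 11])                 # three hypothetical regional classes

        lda = LinearDiscriminantAnalysis(n_components=2)
        print("mean CV accuracy:", cross_val_score(lda, X, y, cv=5).mean())

        # Project the samples onto the two discriminant axes to visualise clusters
        embedding = lda.fit(X, y).transform(X)
        print(embedding.shape)                                  # (34, 2)

    With random features the cross-validated accuracy hovers around chance; with real, class-separating volatile profiles the same code would reveal the regional clusters reported in the abstract.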

  13. Geographical origin of Sauvignon Blanc wines predicted by mass spectrometry and metal oxide based electronic nose

    International Nuclear Information System (INIS)

    Berna, Amalia Z.; Trowell, Stephen; Clifford, David; Cynkar, Wies; Cozzolino, Daniel

    2009-01-01

    Analysis of 34 Sauvignon Blanc wine samples from three different countries and six regions was performed by gas chromatography-mass spectrometry (GC-MS). Linear discriminant analysis (LDA) showed that there were three distinct clusters or classes of wines with different aroma profiles. Wines from the Loire region in France and Australian wines from Tasmania and Western Australia were found to have similar aroma patterns. New Zealand wines from the Marlborough region as well as the Australian ones from Victoria were grouped together based on the volatile composition. Wines from South Australia region formed one discrete class. Seven analytes, most of them esters, were found to be the relevant chemical compounds that characterized the classes. The grouping information obtained by GC-MS, was used to train metal oxide based electronic (MOS-Enose) and mass spectrometry based electronic (MS-Enose) noses. The combined use of solid phase microextraction (SPME) and ethanol removal prior to MOS-Enose analysis, allowed an average error of prediction of the regional origins of Sauvignon Blanc wines of 6.5% compared to 24% when static headspace (SHS) was employed. For MS-Enose, the misclassification rate was higher probably due to the requirement to delimit the m/z range considered.

  14. Proteomic Analysis of Sauvignon Blanc Grape Skin, Pulp and Seed and Relative Quantification of Pathogenesis-Related Proteins.

    Directory of Open Access Journals (Sweden)

    Bin Tian

    Full Text Available Thaumatin-like proteins (TLPs and chitinases are the main constituents of so-called protein hazes which can form in finished white wine and which is a great concern of winemakers. These soluble pathogenesis-related (PR proteins are extracted from grape berries. However, their distribution in different grape tissues is not well documented. In this study, proteins were first separately extracted from the skin, pulp and seed of Sauvignon Blanc grapes, followed by trypsin digestion and analysis by liquid chromatography-electrospray ionization-tandem mass spectrometry (LC-ESI-MS/MS. Proteins identified included 75 proteins from Sauvignon Blanc grape skin, 63 from grape pulp and 35 from grape seed, mostly functionally classified as associated with metabolism and energy. Some were present exclusively in specific grape tissues; for example, proteins involved in photosynthesis were only detected in grape skin and proteins found in alcoholic fermentation were only detected in grape pulp. Moreover, proteins identified in grape seed were less diverse than those identified in grape skin and pulp. TLPs and chitinases were identified in both Sauvignon Blanc grape skin and pulp, but not in the seed. To relatively quantify the PR proteins, the protein extracts of grape tissues were seperated by HPLC first and then analysed by SDS-PAGE. The results showed that the protein fractions eluted at 9.3 min and 19.2 min under the chromatographic conditions of this study confirmed that these corresponded to TLPs and chitinases seperately. Thus, the relative quantification of TLPs and chitinases in protein extracts was carried out by comparing the area of corresponding peaks against the area of a thamautin standard. The results presented in this study clearly demonstrated the distribution of haze-forming PR proteins in grape berries, and the relative quantification of TLPs and chitinases could be applied in fast tracking of changes in PR proteins during grape growth and

  15. LAPP - Annecy le Vieux Particle Physics Laboratory. Activity report 2002-2003

    International Nuclear Information System (INIS)

    Colas, Jacques; Minard, Marie-Noelle; Decamp, Daniel; Marion, Frederique; Drancourt, Cyril; Riva, Vanessa; Berger, Nicole; Bombar, Claudine; Dromby, Gerard

    2004-01-01

    LAPP is a high energy physics laboratory founded in 1976 and is one of the 19 laboratories of IN2P3 (National Institute of Nuclear and Particle Physics), an institute of the CNRS (National Centre for Scientific Research). LAPP is a joint research facility of the University Savoie Mont Blanc (USMB) and the CNRS. Research carried out at LAPP aims at understanding elementary particles and the fundamental interactions between them, as well as exploring the connections between the infinitesimally small and the unbelievably big. Among other subjects, LAPP teams try to understand the origin of the mass of particles, the mystery of dark matter and what happened to the anti-matter that was present in the early universe. LAPP researchers work in close contact with phenomenologist teams from LAPTh, a theory laboratory hosted in the same building. LAPP teams have also worked for several decades on understanding neutrinos, those almost massless elementary particles with amazing transformation properties. They took part in the design and realization of several experiments. Other LAPP teams collaborate in experiments studying signals from the cosmos. This document presents the activities of the laboratory during the years 2002-2003: 1 - Presentation of LAPP; 2 - Experimental programs: Standard model and its extensions (accurate measurements and search for new particles, The end of the ALEPH and L3 LEP experiments, ATLAS experiment at LHC, CMS experiment at LHC); CP violation (BaBar experiment on the PEPII collider at SLAC, LHCb experiment); Neutrino physics (OPERA experiment on CERN's CNGS neutrino beam); Astro-particles (AMS experiment, EUSO project on the Columbus module of the International Space Station); Search for gravitational waves - Virgo experiment; 3 - Laboratory's know-how: Skills, Technical departments (Electronics, Computers, Mechanics); R and D - CLIC and Positrons; Valorisation and industrial relations; 4 - Laboratory operation: Administration and general services; Laboratory

  16. Exploring diffusion and sorption processes at the Mont Terri rock laboratory (Switzerland): lessons learned from 20 years of field research

    Energy Technology Data Exchange (ETDEWEB)

    Leupin, O.X. [National Cooperative for the Disposal of Radioactive Waste (NAGRA), Wettingen (Switzerland); Van Loon, L.R. [Paul Scherrer Institute PSI, Villigen (Switzerland); Gimmi, T. [Institute of Geological Sciences, University of Berne, Berne (Switzerland); Gimmi, T. [Institute of Environmental Assessment and Water Research IDAEA-CSIC, Barcelona (Spain); and others

    2017-04-15

    Transport and retardation parameters of radionuclides, which are needed to perform a safety analysis for a deep geological repository for radioactive waste in a compacted claystone such as Opalinus Clay, must be based on a detailed understanding of the mobility of nuclides at different spatial scales (laboratory, field, geological unit). Thanks to steadily improving experimental designs, similar tracer compositions in different experiments and complementary small laboratory-scale diffusion tests, a unique and large database could be compiled. This paper presents the main findings of 20 years of diffusion and retention experiments at the Mont Terri rock laboratory and their impact on safety analysis. (authors)

  17. Derivation of muon range spectrum under rock from the recent primary spectrum

    International Nuclear Information System (INIS)

    Pal, P.; Bhattacharyya, D.P.

    1985-01-01

    The muon range spectra under Mont Blanc Tunnel and Kolar Gold Field rocks have been calculated from the recently measured primary cosmic ray spectrum. The scaling hypothesis of Feynman has been used for the calculation of pion and kaon spectra in the atmosphere. The meson atmospheric diffusion equation has been solved by following the method of Bugaev et al. The derived muon energy spectrum has been found to be in good agreement with the measured data of the Kiel, Durham, DEIS, and Moscow University groups. The calculated muon energy spectra at large polar angles have been compared with the different experimental results. The integral muon spectrum up to 20 TeV is in favourable agreement with the MARS burst data. Using the procedure of Kobayakawa for the muon energy loss in rock due to ionization, pair production and bremsstrahlung, together with the nuclear interaction losses from Bezrukov and Bugaev, we have constructed the range-energy relation in Mont Blanc and Kolar Gold Field rocks. The estimated range spectra have been corrected for range fluctuations and have been compared with the Mont Blanc Tunnel data of Castagnoli et al., Bergamasco et al., and Sheldon et al. and the Kolar Gold Field data compilation by Krishnaswamy et al
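
    For orientation, the range-energy relation referred to above is usually built from the standard parameterization of the average muon energy loss in rock; the expressions below are the generic textbook forms (with order-of-magnitude constants often quoted for standard rock), not necessarily the exact ones used by the authors:

        \frac{dE}{dX} = -\bigl[a(E) + b(E)\,E\bigr], \qquad R(E) \simeq \frac{1}{b}\,\ln\!\Bigl(1 + \frac{b\,E}{a}\Bigr)

    where a collects the ionization loss and b lumps together pair production, bremsstrahlung and photonuclear losses; values of roughly a ≈ 2 MeV cm²/g and b ≈ 4×10⁻⁶ cm²/g are typical for standard rock, so that muon ranges grow only logarithmically with energy above the critical energy a/b.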

  18. Mont Terri rock laboratory, 20 years of research: introduction, site characteristics and overview of experiments

    International Nuclear Information System (INIS)

    Bossart, P.; Bernier, F.; Birkholzer, J.

    2017-01-01

    Geologic repositories for radioactive waste are designed as multi-barrier disposal systems that perform a number of functions including the long-term isolation and containment of waste from the human environment, and the attenuation of radionuclides released to the subsurface. The rock laboratory at Mont Terri (canton Jura, Switzerland) in the Opalinus Clay plays an important role in the development of such repositories. The experimental results gained in the last 20 years are used to study the possible evolution of a repository and investigate processes closely related to the safety functions of a repository hosted in a clay rock. At the same time, these experiments have increased our general knowledge of the complex behaviour of argillaceous formations in response to coupled hydrological, mechanical, thermal, chemical, and biological processes. After presenting the geological setting in and around the Mont Terri rock laboratory and an overview of the mineralogy and key properties of the Opalinus Clay, we give a brief overview of the key experiments that are described in more detail in the following research papers to this Special Issue of the Swiss Journal of Geosciences. These experiments aim to characterise the Opalinus Clay and estimate safety-relevant parameters, test procedures, and technologies for repository construction and waste emplacement. Other aspects covered are: bentonite buffer emplacement, high-pH concrete-clay interaction experiments, anaerobic steel corrosion with hydrogen formation, depletion of hydrogen by microbial activity, and finally, release of radionuclides into the bentonite buffer and the Opalinus Clay barrier. In the case of a spent fuel/high-level waste repository, the time considered in performance assessment for repository evolution is generally 1 million years, starting with a transient phase over the first 10,000 years and followed by an equilibrium phase. Experiments dealing with initial conditions, construction, and waste

  19. Mont Terri rock laboratory, 20 years of research: introduction, site characteristics and overview of experiments

    Energy Technology Data Exchange (ETDEWEB)

    Bossart, P. [Swisstopo, Federal Office of Topography, Wabern (Switzerland); Bernier, F. [Federal Agency for Nuclear Control FANC, Brussels (Belgium); Birkholzer, J. [Lawrence Berkeley National Laboratory, Berkeley (United States); and others

    2017-04-15

    Geologic repositories for radioactive waste are designed as multi-barrier disposal systems that perform a number of functions including the long-term isolation and containment of waste from the human environment, and the attenuation of radionuclides released to the subsurface. The rock laboratory at Mont Terri (canton Jura, Switzerland) in the Opalinus Clay plays an important role in the development of such repositories. The experimental results gained in the last 20 years are used to study the possible evolution of a repository and investigate processes closely related to the safety functions of a repository hosted in a clay rock. At the same time, these experiments have increased our general knowledge of the complex behaviour of argillaceous formations in response to coupled hydrological, mechanical, thermal, chemical, and biological processes. After presenting the geological setting in and around the Mont Terri rock laboratory and an overview of the mineralogy and key properties of the Opalinus Clay, we give a brief overview of the key experiments that are described in more detail in the following research papers to this Special Issue of the Swiss Journal of Geosciences. These experiments aim to characterise the Opalinus Clay and estimate safety-relevant parameters, test procedures, and technologies for repository construction and waste emplacement. Other aspects covered are: bentonite buffer emplacement, high-pH concrete-clay interaction experiments, anaerobic steel corrosion with hydrogen formation, depletion of hydrogen by microbial activity, and finally, release of radionuclides into the bentonite buffer and the Opalinus Clay barrier. In the case of a spent fuel/high-level waste repository, the time considered in performance assessment for repository evolution is generally 1 million years, starting with a transient phase over the first 10,000 years and followed by an equilibrium phase. Experiments dealing with initial conditions, construction, and waste

  20. A Monte Carlo Simulation of the in vivo measurement of lung activity in the Lawrence Livermore National Laboratory torso phantom.

    Science.gov (United States)

    Acha, Robert; Brey, Richard; Capello, Kevin

    2013-02-01

    A torso phantom was developed by the Lawrence Livermore National Laboratory (LLNL) that serves as a standard for intercomparison and intercalibration of detector systems used to measure low-energy photons from radionuclides, such as americium deposited in the lungs. DICOM images of the second-generation Human Monitoring Laboratory-Lawrence Livermore National Laboratory (HML-LLNL) torso phantom were segmented and converted into three-dimensional (3D) voxel phantoms to simulate the response of high purity germanium (HPGe) detector systems, as found in the HML new lung counter using a Monte Carlo technique. The photon energies of interest in this study were 17.5, 26.4, 45.4, 59.5, 122, 244, and 344 keV. The detection efficiencies at these photon energies were predicted for different chest wall thicknesses (1.49 to 6.35 cm) and compared to measured values obtained with lungs containing (241)Am (34.8 kBq) and (152)Eu (10.4 kBq). It was observed that no statistically significant differences exist at the 95% confidence level between the mean values of simulated and measured detection efficiencies. Comparisons between the simulated and measured detection efficiencies reveal a variation of 20% at 17.5 keV and 1% at 59.5 keV. It was found that small changes in the formulation of the tissue substitute material caused no significant change in the outcome of Monte Carlo simulations.

  1. Llibre Blanc de l’Eurodistricte Català Transfronterer: creació de projecte i reestructuració territorial

    OpenAIRE

    Castañer i Vivas, Margarida

    2011-01-01

    En aquest article, s’hi presenta el procés d’elaboració, la metodologia, les reflexions i les conclusions del Llibre Blanc de l’Eurodistricte Català Transfronterer elaborat per la Mission Opérationnelle Transfrontalière (MOT) i la Universitat de Girona (UdG). L’estudi té per objectiu acompanyar la definició i l’emergència d’un projecte de territori transfronterer basat en la realitat d’un àmbit territorial compartit entre el departament dels Pirineus Orientals i les comarques de la província ...

  2. The Italian drilling project of the Mont Blanc road tunnel in the late fifties: an example of no geological care and lack of ethics in carrying out a big work.

    Science.gov (United States)

    Gosso, Guido; Croce, Giuseppe; Matteucci, Ruggero; Peppoloni, Silvia; Piacente, Sandra; Wasowski, Janusz

    2013-04-01

    In the first decade after the Second World War, Italy was rushing to recover a positive role among European countries; basic needs such as road communications with European neighbours became main priorities. The need for a rapid connection with South-eastern France, a subject already debated between the two nations for more than 50 years, then came to the fore; the two countries agreed on a joint investment for the construction of a tunnel across the international border of Mont Blanc, along the shortest track between Courmayeur and Chamonix. The political agreements were in favour of the quickest possible start of the drilling operations, and this obligation imposed on the Italian side an impoverishment of the project content, especially concerning geological issues. No surveys were performed on fracture systems, cataclastic zones and faults on the few rock ridges standing above the tunnel line and outcropping through thick talus cones, moraines, ice tongues and their related ice plateaus. Metasediments, migmatites and poorly foliated granites were to be drilled. Three Italian academics were allowed by the drilling company to track the working progress and collect rocks for comparison with other Alpine types; they mapped the lithology and the fault zones all along the freshly excavated tunnel; the results of this survey appeared only after the end of the works. Geologists from Florence University published the surface granite faulting pattern 20 years after the road tunnel became operative. Such geological care could have located the risky zones in time for the tunnel project, mitigating the catastrophic effects of the sudden drainage of subglacial water from the Vallée Blanche ice plateau (Ghiacciaio del Gigante) at progression 3800 m, which caused dramatic accidents and negatively affected the economy of the drilling. Also, the wall-rock temperature drops measured during drilling should have warned the company management of the location of dangerous fracture zones. Anxiety of

  3. An Overview of Grain Growth Theories for Pure Single Phase Systems,

    Science.gov (United States)

    1986-10-01

    the fundamental causes for these distributions. This Blanc and Mocellin (1979) and Carnal and Mocellin (1981) set out to do. 7.1 Monte-Carlo Simulations...termed event B) (in 2-D) of 3-sided grains. (2) Neighbour-switching (termed event C). Blanc and Mocellin (1979) dealt with 2-D sections through...Kurtz and Carpay (1980a). 7.2 Analytical Method to Obtain fn Carnal and Mocellin (1981) obtained the distribution of grain coordination numbers in

  4. Tests of the Monte Carlo simulation of the photon-tagger focal-plane electronics at the MAX IV Laboratory

    International Nuclear Information System (INIS)

    Preston, M.F.; Myers, L.S.; Annand, J.R.M.; Fissum, K.G.; Hansen, K.; Isaksson, L.; Jebali, R.; Lundin, M.

    2014-01-01

    Rate-dependent effects in the electronics used to instrument the tagger focal plane at the MAX IV Laboratory were recently investigated using the novel approach of Monte Carlo simulation to allow for normalization of high-rate experimental data acquired with single-hit time-to-digital converters (TDCs). The instrumentation of the tagger focal plane has now been expanded to include multi-hit TDCs. The agreement between results obtained from data taken using single-hit and multi-hit TDCs demonstrates a thorough understanding of the behavior of the detector system

  5. Tests of the Monte Carlo simulation of the photon-tagger focal-plane electronics at the MAX IV Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Preston, M.F. [Lund University, SE-221 00 Lund (Sweden); Myers, L.S. [Duke University, Durham, NC 27708 (United States); Annand, J.R.M. [University of Glasgow, Glasgow G12 8QQ, Scotland (United Kingdom); Fissum, K.G., E-mail: kevin.fissum@nuclear.lu.se [Lund University, SE-221 00 Lund (Sweden); Hansen, K.; Isaksson, L. [MAX IV Laboratory, Lund University, SE-221 00 Lund (Sweden); Jebali, R. [Arktis Radiation Detectors Limited, 8045 Zürich (Switzerland); Lundin, M. [MAX IV Laboratory, Lund University, SE-221 00 Lund (Sweden)

    2014-04-21

    Rate-dependent effects in the electronics used to instrument the tagger focal plane at the MAX IV Laboratory were recently investigated using the novel approach of Monte Carlo simulation to allow for normalization of high-rate experimental data acquired with single-hit time-to-digital converters (TDCs). The instrumentation of the tagger focal plane has now been expanded to include multi-hit TDCs. The agreement between results obtained from data taken using single-hit and multi-hit TDCs demonstrates a thorough understanding of the behavior of the detector system.

  6. Indications of the prominent role of elemental sulfur in the formation of the varietal thiol 3-mercaptohexanol in Sauvignon blanc wine.

    Science.gov (United States)

    Araujo, Leandro Dias; Vannevel, Sebastian; Buica, Astrid; Callerot, Suzanne; Fedrizzi, Bruno; Kilmartin, Paul A; du Toit, Wessel J

    2017-08-01

    Elemental sulfur is a fungicide traditionally used to control Powdery Mildew in the production of grapes. The presence of sulfur residues in grape juice has been associated with increased production of hydrogen sulfide during fermentation, which could take part in the formation of the varietal thiol 3-mercaptohexanol. This work examines whether elemental sulfur additions to Sauvignon blanc juice can increase the levels of sought-after varietal thiols. Initial trials were performed in South Africa and indicated a positive impact of sulfur on the levels of thiols. Further experiments were then carried out with New Zealand Sauvignon blanc and confirmed a positive relationship between elemental sulfur additions and wine varietal thiols. The formation of hydrogen sulfide was observed when the addition of elemental sulfur was made to clarified juice, along with an increase in further reductive sulfur compounds. When the addition of sulfur was made to pressed juice, prior to clarification, the production of reductive sulfur compounds was drastically decreased. Some mechanistic considerations are also presented, involving the reduction of sulfur to hydrogen sulfide prior to fermentation. Copyright © 2016. Published by Elsevier Ltd.

  7. Geological modeling of a fault zone in clay rocks at the Mont-Terri laboratory (Switzerland)

    Science.gov (United States)

    Kakurina, M.; Guglielmi, Y.; Nussbaum, C.; Valley, B.

    2016-12-01

    Clay-rich formations are considered to be a natural barrier against the migration of radionuclides or fluids (water, hydrocarbons, CO2). However, little is known about the architecture of faults affecting clay formations because of their quick alteration at the Earth's surface. The Mont Terri Underground Research Laboratory provides exceptional conditions to investigate an un-weathered, perfectly exposed clay fault zone architecture and to conduct fault activation experiments that allow exploring the conditions for the stability of such clay faults. Here we show first results from a detailed geological model of the Mont Terri Main Fault architecture, built with GoCad software from a detailed structural analysis of six fully cored and logged boreholes, 30 to 50 m long and spaced 3 to 15 m apart, crossing the fault zone. These high-definition geological data were acquired within the Fault Slip (FS) experiment project, which consisted of fluid injections in different intervals within the fault using the SIMFIP probe to explore the conditions for the mechanical and seismic stability of the fault. The Mont Terri Main Fault "core" consists of a thrust zone about 0.8 to 3 m wide that is bounded by two major fault planes. Between these planes, there is an assembly of distinct slickensided surfaces and various facies including scaly clays, fault gouge and fractured zones. Scaly clay including S-C bands and microfolds occurs in larger zones at the top and bottom of the Main Fault. A cm-thin layer of gouge, which is known to accommodate high strain, runs along the upper fault zone boundary. The non-scaly part mainly consists of undeformed rock blocks bounded by slickensides. Such complexity, as well as the continuity of the two major surfaces, is hard to correlate between the different boreholes even with the high density of geological data within the relatively small volume of the experiment. This may show that poor strain localization occurred during faulting, giving some perspectives about the potential for

  8. Performance of the Opalinus Clay under thermal loading: experimental results from Mont Terri rock laboratory (Switzerland)

    Energy Technology Data Exchange (ETDEWEB)

    Gens, A. [Universitat Politècnica de Catalunya, Barcelona (Spain); Wieczorek, K. [Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) GmbH, Braunschweig (Germany); Gaus, I. [National Cooperative for the Disposal of Radioactive Waste (NAGRA), Wettingen (Switzerland); and others

    2017-04-15

    The paper presents an overview of the behaviour of Opalinus Clay under thermal loading as observed in three in situ heating tests performed in the Mont Terri rock laboratory: HE-B, HE-D and HE-E. The three tests are summarily described; they encompass a broad range of test layouts and experimental conditions. Afterwards, the following topics are examined: determination of thermal conductivity, thermally-induced pore pressure generation and thermally-induced mechanical effects. The mechanisms underlying pore pressure generation and dissipation are discussed in detail and the relationship between rock damage and thermal loading is examined using an additional in situ test: SE-H. The paper concludes with an evaluation of the various thermo-hydro-mechanical (THM) interactions identified in the heating tests. (authors)

  9. The analog of Blanc's law for drift velocities of electrons in gas mixtures in weakly ionized plasma

    International Nuclear Information System (INIS)

    Chiflikian, R.V.

    1995-01-01

    The analog of Blanc's law for drift velocities of electrons in multicomponent gas mixtures in weakly ionized spatially homogeneous low-temperature plasma is derived. The obtained approximate-analytical expressions are valid for average electron energy in the 1--5 eV range typical for plasma conditions of low-pressure direct current (DC) discharges. The accuracy of these formulas is ±5%. The analytical criterion of the negative differential conductivity (NDC) of electrons in binary mixtures of gases is obtained. NDC of electrons is predicted in He:Kr and He:Xe rare gas mixtures. copyright 1995 American Institute of Physics
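
    For reference, the classical Blanc's law that the derived analog parallels expresses the mobility K of an ion species in a gas mixture in terms of its mobilities K_i in the pure component gases and the mole fractions x_i. A minimal LaTeX statement of the classical law is:

        \frac{1}{K_{\mathrm{mix}}} = \sum_i \frac{x_i}{K_i}

    The abstract above concerns an approximate analog of this additivity rule for electron drift velocities in weakly ionized plasma, together with an analytical criterion for negative differential conductivity in binary mixtures.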

  10. Hydro-mechanical evolution of the EDZ as transport path for radionuclides and gas: insights from the Mont Terri rock laboratory (Switzerland)

    Energy Technology Data Exchange (ETDEWEB)

    Paul Marschall, P.; Giger, S. [National Cooperative for the Disposal of Radioactive Waste (NAGRA), Wettingen (Switzerland); La Vassière De, R. [Agence Nationale pour la Gestion des Déchets Radioactifs ANDRA, Meuse Haute-Marne, Center RD 960, Bure (France); and others

    2017-04-15

    The excavation damaged zone (EDZ) around the backfilled underground structures of a geological repository represents a release path for radionuclides, which needs to be addressed in the assessment of long-term safety. Additionally, the EDZ may form a highly efficient escape route for corrosion and degradation gases, thus limiting the gas overpressures in the backfilled repository structures. The efficiency of this release path depends not only on the shape and extent of the EDZ, but also on the self-sealing capacity of the host rock formation and the prevailing state conditions, such as in situ stresses and pore pressure. The hydro-mechanical and chemico-osmotic phenomena associated with the formation and temporal evolution of the EDZ are complex, thus precluding a detailed representation of the EDZ in conventional modelling tools for safety assessment. Therefore, simplified EDZ models, able to mimic the safety-relevant functional features of the EDZ in a traceable manner are required. In the framework of the Mont Terri Project, a versatile modelling approach has been developed for the simulation of flow and transport processes along the EDZ with the goal of capturing the evolution of hydraulic significance of the EDZ after closure of the backfilled underground structures. The approach draws on both empirical evidence and experimental data, collected in the niches and tunnels of the Mont Terri rock laboratory. The model was benchmarked with a data set from an in situ self-sealing experiment at the Mont Terri rock laboratory. This paper summarises the outcomes of the benchmark exercise that comprises relevant empirical evidence, experimental data bases and the conceptual framework for modelling the evolution of the hydraulic significance of the EDZ around a backfilled tunnel section during the entire re-saturation phase. (authors)

  11. Hydro-mechanical evolution of the EDZ as transport path for radionuclides and gas: insights from the Mont Terri rock laboratory (Switzerland)

    International Nuclear Information System (INIS)

    Paul Marschall, P.; Giger, S.; La Vassière De, R.

    2017-01-01

    The excavation damaged zone (EDZ) around the backfilled underground structures of a geological repository represents a release path for radionuclides, which needs to be addressed in the assessment of long-term safety. Additionally, the EDZ may form a highly efficient escape route for corrosion and degradation gases, thus limiting the gas overpressures in the backfilled repository structures. The efficiency of this release path depends not only on the shape and extent of the EDZ, but also on the self-sealing capacity of the host rock formation and the prevailing state conditions, such as in situ stresses and pore pressure. The hydro-mechanical and chemico-osmotic phenomena associated with the formation and temporal evolution of the EDZ are complex, thus precluding a detailed representation of the EDZ in conventional modelling tools for safety assessment. Therefore, simplified EDZ models, able to mimic the safety-relevant functional features of the EDZ in a traceable manner are required. In the framework of the Mont Terri Project, a versatile modelling approach has been developed for the simulation of flow and transport processes along the EDZ with the goal of capturing the evolution of hydraulic significance of the EDZ after closure of the backfilled underground structures. The approach draws on both empirical evidence and experimental data, collected in the niches and tunnels of the Mont Terri rock laboratory. The model was benchmarked with a data set from an in situ self-sealing experiment at the Mont Terri rock laboratory. This paper summarises the outcomes of the benchmark exercise that comprises relevant empirical evidence, experimental data bases and the conceptual framework for modelling the evolution of the hydraulic significance of the EDZ around a backfilled tunnel section during the entire re-saturation phase. (authors)

  12. Monte Carlo work at Argonne National Laboratory

    International Nuclear Information System (INIS)

    Gelbard, E.M.; Prael, R.E.

    1974-01-01

    A simple model of the Monte Carlo process is described and a (nonlinear) recursion relation between fission sources in successive generations is developed. From the linearized form of these recursion relations, it is possible to derive expressions for the mean square coefficients of error modes in the iterates and for correlation coefficients between fluctuations in successive generations. First-order nonlinear terms in the recursion relation are analyzed. From these nonlinear terms an expression for the bias in the eigenvalue estimator is derived, and prescriptions for measuring the bias are formulated. Plans for the development of the VIM code are reviewed, and the proposed treatment of small sample perturbations in VIM is described. 6 references. (U.S.)
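
    As a hedged illustration of the type of analysis sketched above (generic notation, not the authors' exact formalism), the power iteration with statistical noise can be written as a noisy nonlinear map and linearized about its stationary source S*:

        S_{n+1} = F(S_n) + \epsilon_{n+1}, \qquad \delta S_{n+1} \approx A\,\delta S_n + \epsilon_{n+1}, \qquad A = \left.\frac{\partial F}{\partial S}\right|_{S^*}

    Powers of A then govern the correlation between fluctuations in successive generations, while the nonlinear (second-order) terms of F neglected in the linearization are what give rise to a small bias in the eigenvalue estimator, which typically decreases as the number of histories per generation increases.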

  13. Discrete Diffusion Monte Carlo for Electron Thermal Transport

    Science.gov (United States)

    Chenhall, Jeffrey; Cao, Duc; Wollaeger, Ryan; Moses, Gregory

    2014-10-01

    The iSNB (implicit Schurtz-Nicolai-Busquet) electron thermal transport method of Cao et al. is adapted to a Discrete Diffusion Monte Carlo (DDMC) solution method for eventual inclusion in a hybrid IMC-DDMC (Implicit Monte Carlo) method. The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the iSNB-DDMC method will be presented. This work was supported by Sandia National Laboratory - Albuquerque.

  14. Results of low energy background measurements with the Liquid Scintillation Detector (LSD) of the Mont Blanc Laboratory

    Science.gov (United States)

    Aglietta, M.; Badino, G.; Bologna, G. F.; Castagnoli, C.; Fulgione, W.; Galeotti, P.; Saavedra, O.; Trinchero, G. C.; Vernetto, S.; Dadykin, V. L.

    1985-01-01

    The 90 ton liquid scintillation detector (LSD) has been fully running since October 1984, at a depth of 5,200 hg/sq cm of standard rock underground. The main goal is to search for neutrino bursts from collapsing stars. The experiment is very sensitive to low energy particles and has a very good signature for the gamma rays from the (n,p) capture reaction which follows the anti-ν_e + p → n + e⁺ neutrino capture. The analysis of data is presented and the preliminary results on low energy measurements are discussed.

  15. Results of low energy background measurements with the liquid scintillation detector (LSD) of the Mont Blanc Laboratory

    International Nuclear Information System (INIS)

    Aglietta, M.; Badino, G.; Bologna, G.F.

    1985-01-01

    The 90 ton liquid scintillation detector (LSD) has been fully running since October 1984 at a depth of 5,200 hg/sq cm of standard rock underground. The main goal is to search for neutrino bursts from collapsing stars. The experiment is very sensitive to low energy particles and has a very good signature for the gamma rays from the (n,p) capture reactions which follow the anti-ν_e + p → n + e⁺ neutrino capture. The analysis of data is presented and the preliminary results on low energy measurements are discussed. 1 ref

  16. Riho Unt lühifilmide festivali žüriis

    Index Scriptorium Estoniae

    2002-01-01

    The puppet film director is a member of the jury of the short film festival in Clermont-Ferrand, France, from 1 to 9 February. His puppet films about Saamuel have been selected for the festival's special programme. The competition programme includes Priit Tender's animated film "Mont Blanc"

  17. Monte Carlo code development in Los Alamos

    International Nuclear Information System (INIS)

    Carter, L.L.; Cashwell, E.D.; Everett, C.J.; Forest, C.A.; Schrandt, R.G.; Taylor, W.M.; Thompson, W.L.; Turner, G.D.

    1974-01-01

    The present status of Monte Carlo code development at Los Alamos Scientific Laboratory is discussed. A brief summary is given of several of the most important neutron, photon, and electron transport codes. 17 references. (U.S.)

  18. Effet d'un deficit hydrique sur le trefle blanc (Trifolium repens L.). I. Role d'un apport de potassium

    OpenAIRE

    Shamsun-Noor, L.; Robin, Christophe; Guckert, Armand

    1990-01-01

    The behaviour of white clover (Trifolium repens L. cv Crau) is studied under water stress and after rehydration, in relation to potassium fertilization. The water deficit causes a major progressive decrease in leaf water potential and a rapid closure of the stomata. These responses are accompanied by a drop in photosynthetic activity and in symbiotic nitrogen fixation. In the presence of potassium, the decrease in the activity...

  19. Usefulness of the Monte Carlo method in reliability calculations

    International Nuclear Information System (INIS)

    Lanore, J.M.; Kalli, H.

    1977-01-01

    Three examples of reliability Monte Carlo programs developed in the LEP (Laboratory for Radiation Shielding Studies in the Nuclear Research Center at Saclay) are presented. First, an uncertainty analysis is given for a simplified spray system; a Monte Carlo program PATREC-MC has been written to solve the problem with the system components given in the fault tree representation. The second program MONARC 2 has been written to solve the problem of complex systems reliability by the Monte Carlo simulation, here again the system (a residual heat removal system) is in the fault tree representation. Third, the Monte Carlo program MONARC was used instead of the Markov diagram to solve the simulation problem of an electric power supply including two nets and two stand-by diesels
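
    To make the fault-tree Monte Carlo idea concrete, the sketch below samples component failures as independent Bernoulli trials and evaluates the top event through simple AND/OR gate logic. It is an illustrative toy (invented component names and probabilities), not the PATREC-MC or MONARC codes.

        # Hypothetical fault-tree Monte Carlo: top event = (pump_a AND pump_b) OR valve.
        import random

        P_FAIL = {"pump_a": 0.05, "pump_b": 0.05, "valve": 0.01}   # assumed failure probabilities

        def top_event(state):
            return (state["pump_a"] and state["pump_b"]) or state["valve"]

        def simulate(n_trials, seed=1):
            rng = random.Random(seed)
            failures = 0
            for _ in range(n_trials):
                state = {c: rng.random() < p for c, p in P_FAIL.items()}
                failures += top_event(state)
            return failures / n_trials

        estimate = simulate(200_000)
        exact = (P_FAIL["pump_a"] * P_FAIL["pump_b"] + P_FAIL["valve"]
                 - P_FAIL["pump_a"] * P_FAIL["pump_b"] * P_FAIL["valve"])
        print(f"Monte Carlo estimate: {estimate:.5f}   analytic value: {exact:.5f}")

    Production reliability codes add time-dependent failure and repair models, importance measures and variance reduction on top of this basic sampling loop.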

  20. Tirant lo Blanc o la pau no passa pels exèrcits

    Directory of Open Access Journals (Sweden)

    Antònia Carré

    2000-11-01

    Full Text Available Tirant lo Blanc o la pau no passa pels exèrcits (Tirant lo Blanc, or peace does not come through armies) is a metatext with basically two objectives: (1) to allow, through an active and creative reading of Joanot Martorell's novel, an approach to the chivalric aspects of the Middle Ages and a deeper look at the concept of literature that existed at the time, based on the reconstruction and recreation of texts through the manipulation of works that formed part of the cultural heritage of the period, and (2) to enable, through the analysis of war, one of the main thematic axes of the novel, a (telematic) reflection on the values that underpin a plural and democratic society at the dawn of the 21st century: awareness of injustice and abuses of power, tolerance, solidarity, respect for the environment, in short, valuing a culture of peace.

  1. Mont Terri Project - Proceedings of the 10 Year Anniversary Workshop

    International Nuclear Information System (INIS)

    Hugi, M.; Bossart, P.; Hayoz, P.

    2007-01-01

    This book is a compilation of 12 reports presented at the St-Ursanne workshop. The workshop was dedicated to the scientific community of the Mont Terri partner organisations, their management and scientific/technical staff, involved research organisations and key contractors. The purpose of the event was to acknowledge the excellent research work that has been performed over the last decade, to evaluate and discuss the present state of knowledge in selected research areas and to explore the potential for future research activities. The topical areas addressed in the workshop are of particular importance with regard to deep geological disposal of radioactive waste; they focused on coupled phenomena and transport processes in argillaceous rock and on the demonstration (in underground rock laboratories) of disposal feasibility. After presenting the history of the Mont Terri project and the general geology of Northwestern Switzerland, the presentations are grouped into three topics: (a) Coupled phenomena in argillaceous rock, (b) Transport processes in argillaceous rock, and (c) Demonstration of disposal feasibility in underground rock laboratories. The last chapter describes the research still needed and the Mont Terri rock laboratory

  2. L’effet de l’ovotransferrine sur les enveloppes bactériennes dans les conditions physicochimiques du blanc d’oeuf

    OpenAIRE

    Rezazadeh Ghazvini, Raheleh

    2017-01-01

    Egg white proteins such as ovotransferrin play an important role in the defence against bacterial invasion. Ovotransferrin has an iron-binding capacity, which induces a bacteriostatic activity by limiting the iron available in the bacterial environment. Besides the well-known mechanism of iron deprivation (Baron et al. 2016, for a review), several authors have suggested that the antimicrobial activity of ovotransferrin could result from its direct effect on the me...

  3. Geomechanical behaviour of Opalinus Clay at multiple scales: results from Mont Terri rock laboratory (Switzerland)

    Energy Technology Data Exchange (ETDEWEB)

    Amann, F.; Wild, K.M.; Loew, S. [Institute of Geology, Engineering Geology, Swiss Federal Institute of Technology, Zurich (Switzerland); Yong, S. [Knight Piesold Ltd, Vancouver (Canada); Thoeny, R. [Grundwasserschutz und Entsorgung, AF-Consult Switzerland AG, Baden (Switzerland); Frank, E. [Sektion Geologie (GEOL), Eidgenössisches Nuklear-Sicherheitsinspektorat (ENSI), Brugg (Switzerland)

    2017-04-15

    The paper summarizes our research projects conducted between 2003 and 2015 on the mechanical behaviour of Opalinus Clay at Mont Terri. The research covers a series of laboratory and field tests that address the brittle failure behaviour of Opalinus Clay, its undrained and effective strength, the dependency of petro-physical and mechanical properties on total suction, hydro-mechanically coupled phenomena and the development of a damage zone around excavations. On the laboratory scale, even simple laboratory tests are difficult to interpret and uncertainties remain regarding the representativeness of the results. We show that suction may develop rapidly after core extraction and can substantially modify the strength, stiffness, and petro-physical properties of Opalinus Clay. Consolidated undrained tests performed on fully saturated specimens revealed a relatively small true cohesion and confirmed the strong hydro-mechanically coupled behaviour of this material. Strong hydro-mechanically coupled processes may explain the stability of cores and tunnel excavations in the short term. Pore-pressure effects may cause effective stress states that favour stability in the short term but may cause longer-term deformations and damage as the pore-pressure dissipates. In-situ observations show that macroscopic fracturing is strongly influenced by bedding planes and fault planes. In tunnel sections where opening or shearing along bedding planes or fault planes is kinematically free, the induced fracture type is strongly dependent on the fault plane frequency and orientation. A transition from extensional macroscopic failure to shearing can be observed with increasing fault plane frequency. In zones around the excavation where bedding plane shearing/shearing along tectonic fault planes is kinematically restrained, primary extensional type fractures develop. In addition, heterogeneities such as single tectonic fault planes or fault zones

  4. Monte Carlo codes and Monte Carlo simulator program

    International Nuclear Information System (INIS)

    Higuchi, Kenji; Asai, Kiyoshi; Suganuma, Masayuki.

    1990-03-01

    Four typical Monte Carlo codes, KENO-IV, MORSE, MCNP and VIM, have been vectorized on the VP-100 at the Computing Center, JAERI. The problems involved in vector processing of Monte Carlo codes on vector processors have become clear through this work. As a result, it is recognized that there are difficulties in obtaining good performance in vector processing of Monte Carlo codes. A Monte Carlo computing machine, which processes Monte Carlo codes with high performance, has been under development at our Computing Center since 1987. The concept of the Monte Carlo computing machine and its performance have been investigated and estimated by using a software simulator. In this report the problems in vectorization of Monte Carlo codes, the Monte Carlo pipelines proposed to mitigate these difficulties and the results of the performance estimation of the Monte Carlo computing machine by the simulator are described. (author)

  5. VizieR Online Data Catalog: Project VeSElkA: HD stars atomic-line analysis (LeBlanc+, 2015)

    Science.gov (United States)

    Leblanc, F.; Khalack, V.; Yameogo, B.; Thibeault, C.; Gallant, I.

    2017-11-01

    The four stars studied here were observed with ESPaDOnS at CFHT. High-resolution (R=65000) Stokes IV spectra with large signal-to-noise ratios were obtained in the spectral range 3700-10500Å and were reduced with the software package LIBRE-ESPRIT (Donati et al. 1997MNRAS.291..658D). Two or more spectra of each star were taken to verify for any spectral variability. For the four stars studied here, no such variability is detected. Also, no strong magnetic fields were found. More details about these observations are given in Khalack & LeBlanc (2015AJ....150....2K); for instance, the exposure times and the signal-to-noise ratios are given in their table 1 for each star studied here. (4 data files).

  6. MONTE: the next generation of mission design and navigation software

    Science.gov (United States)

    Evans, Scott; Taber, William; Drain, Theodore; Smith, Jonathon; Wu, Hsi-Cheng; Guevara, Michelle; Sunseri, Richard; Evans, James

    2018-03-01

    The Mission analysis, Operations and Navigation Toolkit Environment (MONTE) (Sunseri et al. in NASA Tech Briefs 36(9), 2012) is an astrodynamic toolkit produced by the Mission Design and Navigation Software Group at the Jet Propulsion Laboratory. It provides a single integrated environment for all phases of deep space and Earth orbiting missions. Capabilities include: trajectory optimization and analysis, operational orbit determination, flight path control, and 2D/3D visualization. MONTE is presented to the user as an importable Python language module. This allows a simple but powerful user interface via CLUI or script. In addition, the Python interface allows MONTE to be used seamlessly with other canonical scientific programming tools such as SciPy, NumPy, and Matplotlib. MONTE is the prime operational orbit determination software for all JPL navigated missions.

  7. Priit Tenderi film sai Dresdenis auhinna

    Index Scriptorium Estoniae

    2002-01-01

    At the 14th Dresden film festival, Priit Tender's animated film "Mont Blanc" received the second prize in the animated film category. The main prize went to "Camouflage" by the German director Jonathan Hodgson. Third place went to Stepan Birjukov's "Sossedi", which had won the grand prix at PÖFF. Jelena Girlin's "Guff", made at Nukufilm as a diploma work, also took part.

  8. Simulation of Rossi-α method with analog Monte-Carlo method

    International Nuclear Information System (INIS)

    Lu Yuzhao; Xie Qilin; Song Lingli; Liu Hangang

    2012-01-01

    An analog Monte Carlo code based on Geant4 for simulating the Rossi-α method was developed. The prompt neutron decay constants α of six metallic uranium configurations at Oak Ridge National Laboratory were calculated. α was also calculated by the burst-neutron method and the result was consistent with the result of the Rossi-α method. There are differences between the results of the analog Monte Carlo simulation and the experiment; the reason for the differences is the gaps between the uranium layers. The influence of the gaps decreases as the sub-criticality deepens. The relative difference between the results of the analog Monte Carlo simulation and the experiment changes from 19% to 0.19%. (authors)
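
    For context, the Rossi-α analysis referred to above fits the time distribution of counts following a trigger count with a constant accidental term plus an exponentially decaying correlated term; in standard point-kinetics notation (a generic textbook form, not necessarily the exact expression used by the authors):

        p(t)\,dt = A\,dt + B\,e^{-\alpha t}\,dt, \qquad \alpha = \frac{\beta_{\mathrm{eff}} - \rho}{\Lambda}

    where A is the uncorrelated (accidental) count rate, B the amplitude of the correlated fission-chain term, β_eff the effective delayed neutron fraction, ρ the reactivity and Λ the neutron generation time; at delayed critical (ρ = 0) the prompt decay constant reduces to α = β_eff/Λ.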

  9. Monte Carlo and Quasi-Monte Carlo Sampling

    CERN Document Server

    Lemieux, Christiane

    2009-01-01

    Presents essential tools for using quasi-Monte Carlo sampling in practice. The book focuses on issues related to Monte Carlo methods: uniform and non-uniform random number generation and variance reduction techniques. It also covers several aspects of quasi-Monte Carlo methods.
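
    As an illustration of the contrast the book draws between Monte Carlo and quasi-Monte Carlo sampling, the sketch below estimates the same two-dimensional integral with pseudorandom points and with a Halton low-discrepancy sequence; the Halton generator is written out directly so that no particular library API has to be assumed.

        import numpy as np
        from math import erf, pi, sqrt

        def halton(n, base):
            """First n points of the van der Corput sequence in the given base."""
            seq = np.empty(n)
            for i in range(1, n + 1):
                f, x, k = 1.0, 0.0, i
                while k > 0:
                    f /= base
                    x += f * (k % base)
                    k //= base
                seq[i - 1] = x
            return seq

        f2 = lambda u, v: np.exp(-(u ** 2 + v ** 2))   # integrand on the unit square
        n = 4096

        # plain Monte Carlo with pseudorandom points
        rng = np.random.default_rng(42)
        mc = f2(rng.random(n), rng.random(n)).mean()

        # quasi-Monte Carlo with a 2-D Halton sequence (bases 2 and 3)
        qmc = f2(halton(n, 2), halton(n, 3)).mean()

        exact = (sqrt(pi) / 2.0 * erf(1.0)) ** 2
        print("exact %.5f   MC %.5f   QMC %.5f" % (exact, mc, qmc))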

  10. Where protons will play The new supercollider that is poised to shake up physics

    CERN Multimedia

    Holt, Jim

    2007-01-01

    Near the foot of Mont-Blanc, the greatest of the Alpine peaks, a sizable object is taking shape, also quite beautiful in its way, yet not at all dumb. In fact, its pristine geometries may be instrumental in revealing what have hitherto been some of nature's deepest secrets. (2 pages)

  11. Monte Carlo applications at Hanford Engineering Development Laboratory

    International Nuclear Information System (INIS)

    Carter, L.L.; Morford, R.J.; Wilcox, A.D.

    1980-03-01

    Twenty applications of neutron and photon transport with Monte Carlo have been described to give an overview of the current effort at HEDL. A satisfaction factor was defined which quantitatively assigns an overall return for each calculation relative to the investment in machine time and expenditure of manpower. Low satisfaction factors are frequently encountered in the calculations. Usually this is due to limitations in execution rates of present-day computers, but sometimes a low satisfaction factor is due to computer code limitations, calendar time constraints, or inadequacy of the nuclear data base. Present-day computer codes have taken some of the burden off the user. Nevertheless, it is highly desirable for the engineer using the computer code to have an understanding of particle transport including some intuition for the problems being solved, to understand the construction of sources for the random walk, to understand the interpretation of tallies made by the code, and to have a basic understanding of elementary biasing techniques.

  12. Automated-biasing approach to Monte Carlo shipping-cask calculations

    International Nuclear Information System (INIS)

    Hoffman, T.J.; Tang, J.S.; Parks, C.V.; Childs, R.L.

    1982-01-01

    Computer Sciences at Oak Ridge National Laboratory, under a contract with the Nuclear Regulatory Commission, has developed the SCALE system for performing standardized criticality, shielding, and heat transfer analyses of nuclear systems. During the early phase of shielding development in SCALE, it was established that Monte Carlo calculations of radiation levels exterior to a spent fuel shipping cask would be extremely expensive. This cost can be substantially reduced by proper biasing of the Monte Carlo histories. The purpose of this study is to develop and test an automated biasing procedure for the MORSE-SGC/S module of the SCALE system

  13. Monte Carlo method for array criticality calculations

    International Nuclear Information System (INIS)

    Dickinson, D.; Whitesides, G.E.

    1976-01-01

    The Monte Carlo method for solving neutron transport problems consists of mathematically tracing paths of individual neutrons collision by collision until they are lost by absorption or leakage. The fate of the neutron after each collision is determined by the probability distribution functions that are formed from the neutron cross-section data. These distributions are sampled statistically to establish the successive steps in the neutron's path. The resulting data, accumulated from following a large number of batches, are analyzed to give estimates of k_eff and other collision-related quantities. The use of electronic computers to produce the simulated neutron histories, initiated at Los Alamos Scientific Laboratory, made the use of the Monte Carlo method practical for many applications. In analog Monte Carlo simulation, the calculation follows the physical events of neutron scattering, absorption, and leakage. To increase calculational efficiency, modifications such as the use of statistical weights are introduced. The Monte Carlo method permits the use of a three-dimensional geometry description and a detailed cross-section representation. Some of the problems in using the method are the selection of the spatial distribution for the initial batch, the preparation of the geometry description for complex units, and the calculation of error estimates for region-dependent quantities such as fluxes. The Monte Carlo method is especially appropriate for criticality safety calculations since it permits an accurate representation of interacting units of fissile material. Dissimilar units, units of complex shape, moderators between units, and reflected arrays may be calculated. Monte Carlo results must be correlated with relevant experimental data, and caution must be used to ensure that a representative set of neutron histories is produced.
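
    The batch scheme described above can be illustrated with a deliberately simplified, hypothetical sketch: each batch follows a fixed number of source neutrons in an infinite homogeneous medium, k_eff is estimated per batch as the ratio of fission neutrons produced to neutrons started, and the first batches are discarded as in standard practice (trivially so in this zero-dimensional example, which has no spatial source to converge). All cross sections below are invented.

        import numpy as np

        rng = np.random.default_rng(7)

        # invented infinite-medium data: total, absorption, fission cross sections and nu-bar
        SIG_T, SIG_A, SIG_F, NU = 1.0, 0.4, 0.1, 2.5
        N_PER_BATCH, N_BATCHES, N_SKIP = 10_000, 60, 10

        k_batches = []
        for _ in range(N_BATCHES):
            produced = 0.0
            for _ in range(N_PER_BATCH):
                # follow one neutron collision by collision until it is absorbed
                while True:
                    if rng.random() < SIG_A / SIG_T:          # absorption ends the history
                        if rng.random() < SIG_F / SIG_A:      # absorption leading to fission
                            produced += NU
                        break
                    # otherwise the neutron scatters and keeps going
            k_batches.append(produced / N_PER_BATCH)

        active = np.array(k_batches[N_SKIP:])
        print("k_inf = %.4f +/- %.4f" % (active.mean(), active.std(ddof=1) / np.sqrt(active.size)))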

  14. Test of Blanc's law for negative ion mobility in mixtures of SF6 with N2, O2 and air

    International Nuclear Information System (INIS)

    Hinojosa, G; Urquijo, J de

    2003-01-01

    We have measured the mobility of negative ion species drifting in mixtures of SF₆ with N₂, O₂ and air, using the pulsed Townsend experiment. The conditions of the experiment, high pressures and low values of the reduced electric field E/N, ensured that the majority species drifting in the gap was SF₆⁻, to which the present mobilities are ascribed. The extrapolated zero-field mobilities for several mixture compositions were tested successfully against Blanc's law. Moreover, the measured zero-field SF₆⁻ mobilities in air could also be explained in terms of the measured mobilities for this ionic species in N₂ and O₂.
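
    Blanc's law, against which the measured zero-field mobilities were tested, states that the reciprocal of the ion mobility in a gas mixture is the mole-fraction-weighted sum of the reciprocals of the mobilities in the pure constituent gases; written out,

        \frac{1}{K_{\mathrm{mix}}} = \sum_i \frac{x_i}{K_i},

    where K_mix is the zero-field mobility of the ion in the mixture, x_i is the mole fraction of component i, and K_i is the zero-field mobility of the same ion in pure component i.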

  15. Virtual laboratory for radiation experiments

    International Nuclear Information System (INIS)

    Tiftikci, A.; Kocar, C.; Tombakoglu, M.

    2009-01-01

    Simulation of the alpha, beta and gamma radiation detection and measurement experiments that are part of real nuclear physics laboratory courses was realized with the Monte Carlo method and the Java programming language. As is well known, establishing this type of laboratory is very expensive, and the highly radioactive sources used in some experiments carry risks for students and experimentalists alike. Taking those problems into consideration, the aim of this study is to set up a virtual radiation laboratory at minimum cost and to speed up the training of students in radiation physics with no radiation risk. The software models the nature of radiation and radiation transport with the help of the Monte Carlo method. Experimental parameters can be changed manually by the user, and experimental results can be followed synchronously on an MCA (multi-channel analyzer) or an SCA (single-channel analyzer) panel, where they can also be analyzed. The virtual radiation laboratory developed in this study, with reliable results and unlimited experimentation capability, appears to be a useful educational tool. Moreover, new types of experiments can easily be integrated into the software, so the virtual laboratory can be extended.
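
    As a hypothetical illustration of the kind of spectrum such a virtual MCA panel displays, the short sketch below histograms a toy gamma-ray response (a Gaussian-broadened full-energy peak on top of a crude flat continuum) into MCA channels. All numbers are invented, and no claim is made that this reproduces the software described above.

        import numpy as np

        rng = np.random.default_rng(3)

        E_GAMMA = 662.0          # keV; an illustrative single gamma line
        FWHM = 50.0              # assumed detector resolution at this energy, keV
        N_EVENTS = 200_000
        PEAK_FRACTION = 0.35     # assumed fraction of events depositing the full energy

        in_peak = rng.random(N_EVENTS) < PEAK_FRACTION
        energies = np.where(
            in_peak,
            rng.normal(E_GAMMA, FWHM / 2.355, N_EVENTS),   # photopeak, sigma = FWHM / 2.355
            rng.uniform(0.0, 0.85 * E_GAMMA, N_EVENTS),    # crude flat stand-in for the continuum
        )

        # bin the deposited energies into a 1024-channel MCA covering 0-800 keV
        spectrum, edges = np.histogram(energies, bins=1024, range=(0.0, 800.0))
        print("total counts:", spectrum.sum(), " busiest channel:", spectrum.argmax())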

  16. Review of Hybrid (Deterministic/Monte Carlo) Radiation Transport Methods, Codes, and Applications at Oak Ridge National Laboratory

    International Nuclear Information System (INIS)

    Wagner, John C.; Peplow, Douglas E.; Mosher, Scott W.; Evans, Thomas M.

    2010-01-01

    This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10²-10⁴), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.

  17. Review of Hybrid (Deterministic/Monte Carlo) Radiation Transport Methods, Codes, and Applications at Oak Ridge National Laboratory

    International Nuclear Information System (INIS)

    Wagner, John C.; Peplow, Douglas E.; Mosher, Scott W.; Evans, Thomas M.

    2010-01-01

    This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10²-10⁴), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.

  18. Review of hybrid (deterministic/Monte Carlo) radiation transport methods, codes, and applications at Oak Ridge National Laboratory

    International Nuclear Information System (INIS)

    Wagner, J.C.; Peplow, D.E.; Mosher, S.W.; Evans, T.M.

    2010-01-01

    This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10²-10⁴), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications. (author)
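
    In CADIS, the adjoint (importance) function from a fast deterministic calculation fixes both the biased source and the weight-window centres, so that a particle's weight times its expected contribution stays roughly constant. The sketch below shows that bookkeeping with a made-up one-dimensional adjoint flux standing in for the Denovo solution; it is a schematic illustration of the idea, not the MAVRIC or ADVANTG implementation.

        import numpy as np

        # made-up forward source q(x) and adjoint flux phi_dag(x) on a 1-D mesh
        x = np.linspace(0.0, 100.0, 50)
        q = np.exp(-x / 10.0)                     # source concentrated near x = 0
        phi_dag = np.exp((x - 100.0) / 25.0)      # importance grows toward a detector at x = 100

        # estimated detector response R = sum_i q_i * phi_dag_i (discrete inner product)
        R = np.sum(q * phi_dag)

        # CADIS-style biased source and weight-window target weights
        q_biased = q * phi_dag / R                # sampled more often where importance is high
        w_target = R / phi_dag                    # birth weight equals the target weight
        w_lower = w_target / 2.5                  # window lower bound, with an assumed width factor

        print("biased source normalisation:", q_biased.sum())   # ~1 by construction
        print("weight-window lower bounds span:", w_lower.min(), "to", w_lower.max())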

  19. Monte Carlo simulations of neutron-scattering instruments using McStas

    DEFF Research Database (Denmark)

    Nielsen, K.; Lefmann, K.

    2000-01-01

    Monte Carlo simulations have become an essential tool for improving the performance of neutron-scattering instruments, since the level of sophistication in the design of instruments is defeating purely analytical methods. The program McStas, being developed at Risø National Laboratory, includes...

  20. Calibration and Monte Carlo modelling of neutron long counters

    CERN Document Server

    Tagziria, H

    2000-01-01

    The Monte Carlo technique has become a very powerful tool in radiation transport as full advantage is taken of enhanced cross-section data, more powerful computers and statistical techniques, together with better characterisation of neutron and photon source spectra. At the National Physical Laboratory, calculations using the Monte Carlo radiation transport code MCNP-4B have been combined with accurate measurements to characterise two long counters routinely used to standardise monoenergetic neutron fields. New and more accurate response function curves have been produced for both long counters. A novel approach using Monte Carlo methods has been developed, validated and used to model the response function of the counters and determine more accurately their effective centres, which have always been difficult to establish experimentally. Calculations and measurements agree well, especially for the De Pangher long counter for which details of the design and constructional material are well known. The sensitivit...

  1. Status of Monte Carlo at Los Alamos

    International Nuclear Information System (INIS)

    Thompson, W.L.; Cashwell, E.D.

    1980-01-01

    At Los Alamos the early work of Fermi, von Neumann, and Ulam has been developed and supplemented by many followers, notably Cashwell and Everett, and the main product today is the continuous-energy, general-purpose, generalized-geometry, time-dependent, coupled neutron-photon transport code called MCNP. The Los Alamos Monte Carlo research and development effort is concentrated in Group X-6. MCNP treats an arbitrary three-dimensional configuration of arbitrary materials in geometric cells bounded by first- and second-degree surfaces and some fourth-degree surfaces (elliptical tori). Monte Carlo has evolved into perhaps the main method for radiation transport calculations at Los Alamos. MCNP is used in every technical division at the Laboratory by over 130 users about 600 times a month accounting for nearly 200 hours of CDC-7600 time

  2. Parallel MCNP Monte Carlo transport calculations with MPI

    International Nuclear Information System (INIS)

    Wagner, J.C.; Haghighat, A.

    1996-01-01

    The steady increase in computational performance has made Monte Carlo calculations for large/complex systems possible. However, in order to make these calculations practical, order of magnitude increases in performance are necessary. The Monte Carlo method is inherently parallel (particles are simulated independently) and thus has the potential for near-linear speedup with respect to the number of processors. Further, the ever-increasing accessibility of parallel computers, such as workstation clusters, facilitates the practical use of parallel Monte Carlo. Recognizing the nature of the Monte Carlo method and the trends in available computing, the code developers at Los Alamos National Laboratory implemented the message-passing general-purpose Monte Carlo radiation transport code MCNP (version 4A). The PVM package was chosen by the MCNP code developers because it supports a variety of communication networks, several UNIX platforms, and heterogeneous computer systems. This PVM version of MCNP has been shown to produce speedups that approach the number of processors and thus, is a very useful tool for transport analysis. Due to software incompatibilities on the local IBM SP2, PVM has not been available, and thus it is not possible to take advantage of this useful tool. Hence, it became necessary to implement an alternative message-passing library package into MCNP. Because the message-passing interface (MPI) is supported on the local system, takes advantage of the high-speed communication switches in the SP2, and is considered to be the emerging standard, it was selected
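
    Independent-history parallelism of the kind exploited in the PVM and MPI versions of MCNP can be illustrated with a short mpi4py sketch: each rank runs its own stream of histories with a rank-dependent seed and the partial tallies are combined with a reduction. This is a generic illustration of the pattern, not MCNP's actual message-passing scheme; it would be launched with something like mpirun -n 4 python script.py.

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        N_TOTAL = 1_000_000
        n_local = N_TOTAL // size                      # histories assigned to this rank
        rng = np.random.default_rng(12345 + rank)      # independent stream per rank

        # toy "transport": score the fraction of sampled path lengths exceeding a slab thickness
        d = -np.log(rng.random(n_local))               # exponential free paths, Sigma_t = 1
        local_tally = np.array([np.count_nonzero(d > 3.0)], dtype="d")

        global_tally = np.zeros(1)
        comm.Reduce(local_tally, global_tally, op=MPI.SUM, root=0)

        if rank == 0:
            print("transmission estimate:", global_tally[0] / (n_local * size))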

  3. A Multivariate Time Series Method for Monte Carlo Reactor Analysis

    International Nuclear Information System (INIS)

    Taro Ueki

    2008-01-01

    A robust multivariate time series method has been established for the Monte Carlo calculation of neutron multiplication problems. The method is termed Coarse Mesh Projection Method (CMPM) and can be implemented using the coarse statistical bins for acquisition of nuclear fission source data. A novel aspect of CMPM is the combination of the general technical principle of projection pursuit in the signal processing discipline and the neutron multiplication eigenvalue problem in the nuclear engineering discipline. CMPM enables reactor physicists to accurately evaluate major eigenvalue separations of nuclear reactors with continuous energy Monte Carlo calculation. CMPM was incorporated in the MCNP Monte Carlo particle transport code of Los Alamos National Laboratory. The great advantage of CMPM over the traditional Fission Matrix method is demonstrated for the three space-dimensional modeling of the initial core of a pressurized water reactor

  4. 5-year chemico-physical evolution of concrete-claystone interfaces, Mont Terri rock laboratory (Switzerland)

    Energy Technology Data Exchange (ETDEWEB)

    Mäder, U.; Jenni, A. [Institute of Geological Sciences, University of Berne, Berne (Switzerland); Lerouge, C. [French Geological Survey BRGM, Orléans (France); and others

    2017-04-15

    The Cement-Opalinus Clay Interaction (CI) Experiment at the Mont Terri rock laboratory is a long-term passive diffusion-reaction experiment between contrasting materials of relevance to engineered barrier systems/near-field for deep disposal of radioactive waste in claystone (Opalinus Clay). Reaction zones at interfaces of Opalinus Clay with two different types of concrete (OPC and 'low-pH'/ESDRED) were examined by sampling after 2.2 and 4.9 years. Analytical methods included element mapping (SEM, EPMA), select spot analysis (EDAX), 14C-MMA impregnation for radiography, and powder methods (IR, XRD, clay-exchanger characterisation) on carefully extracted miniature samples (mm). The presence of aggregate grains in concrete made the application of all methods difficult. Common features are a very limited extent of reaction within claystone, and a distinct and regularly zoned reaction zone within the cement matrix that is more extensive in the low-alkali cement (ESDRED). Both interfaces feature a de-calcification zone and overprinted a carbonate alteration zone thought to be mainly responsible for the observed porosity reduction. While OPC shows a distinct sulphate enrichment zone (indicative of ingress from Opalinus Clay), ESDRED displays a wide Mg-enriched zone, also with claystone pore-water as a source. A conclusion is that substitution of OPC by low-alkali cementitious products is not advantageous or necessary solely for the purpose of minimizing the extent of reaction between claystone and cementitious materials. Implications for reactive transport modelling are discussed. (authors)

  5. Homegrown animation is flourishing / Mart Rummo

    Index Scriptorium Estoniae

    Rummo, Mart

    2001-01-01

    To mark the 70th anniversary of Estonian animation, six new animated films are being shown today at Kinomaja: the 13-part cut-out puppet film "Tulelaeva kulid" (directed by Mait Laas), Ülo Pikkov's two animated films "Superlove" and "Peata ratsanik", Priit Tender's animated film "Mont Blanc", Kaspar Jancis's animated film "Romanss", and a 5-minute trailer of Janno Põldma and Heiki Ernits's children's animated series "Lepatriinude jõulud".

  6. Neutrinos, dark matter and the universe

    International Nuclear Information System (INIS)

    Stolarcyk, T.; Tran Thanh Van, J.; Vannucci, F.; Paris-7 Univ., 75

    1996-01-01

    The meeting was organized around the general topic 'neutrinos, dark matter and the universe'. We have not yet succeeded in penetrating all of the neutrino's mysteries and in particular we still do not know its mass. Laboratory measurements involving beta disintegrations of Ni-63, Re-187, Xe-136 and tritium are being actively pursued by many teams. Astrophysical analyses have been carried out at the neutrino observatories of Kamiokande, Baksan, IMB and the Mont Blanc, but at the moment we can only give an upper limit on the neutrino mass. The problem of the 'missing' solar neutrinos cannot be dissociated from that of the neutrino mass and of the possible oscillation of one variety of neutrino into another. Dark matter shows up only through the effect of its gravitational field, and at present we have no idea of its nature.

  7. In-situ experiments on bentonite-based buffer and sealing materials at the Mont Terri rock laboratory (Switzerland)

    Energy Technology Data Exchange (ETDEWEB)

    Wieczorek, K. [Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) GmbH, Braunschweig (Germany); Gaus, I. [National Cooperative for the Disposal of Radioactive Waste (NAGRA), Wettingen (Switzerland); Mayor, J. C. [Empresa Nacional de Residuos Radiactivos SA (ENRESA), Madrid (Spain); and others

    2017-04-15

    Repository concepts in clay or crystalline rock involve bentonite-based buffer or seal systems to provide containment of the waste and limit advective flow. A thorough understanding of buffer and seal evolution is required to make sure the safety functions are fulfilled in the short and long term. Experiments at the real or near-real scale taking into account the interaction with the host rock help to make sure the safety-relevant processes are identified and understood and to show that laboratory-scale findings can be extrapolated to repository scale. Three large-scale experiments on buffer and seal properties performed in recent years at the Mont Terri rock laboratory are presented in this paper: The 1:2 scale HE-E heater experiment which is currently in operation, and the full-scale engineered barrier experiment and the Borehole Seal experiment which have been completed successfully in 2014 and 2012, respectively. All experiments faced considerable difficulties during installation, operation, evaluation or dismantling that required significant effort to overcome. The in situ experiments show that buffer and seal elements can be constructed meeting the expectations raised through small-scale testing. It was, however, also shown that interaction with the host rock caused additional effects in the buffer or seal that could not always be quantified or even anticipated from the experience of small-scale tests (such as re-saturation by pore-water from the rock, interaction with the excavation damaged zone in terms of preferential flow or mechanical effects). This led to the conclusion that testing of the integral system buffer/rock or seal/rock is needed. (authors)

  8. Monte Carlo: in the beginning and some great expectations

    International Nuclear Information System (INIS)

    Metropolis, N.

    1985-01-01

    The central theme will be on the historical setting and origins of the Monte Carlo Method. The scene was the post-war Los Alamos Scientific Laboratory. There was an inevitability about the Monte Carlo Event: the ENIAC had recently enjoyed its meteoric rise (on a classified Los Alamos problem); Stan Ulam had returned to Los Alamos; John von Neumann was a frequent visitor. Techniques, algorithms, and applications developed rapidly at Los Alamos. Soon, the fascination of the Method reached wider horizons. The first paper was submitted for publication in the spring of 1949. In the summer of 1949, the first open conference was held at the University of California at Los Angeles. Of some interest perhaps is an account of Fermi's earlier, independent application in neutron moderation studies while at the University of Rome. The quantum leap expected with the advent of massively parallel processors will provide stimuli for very ambitious applications of the Monte Carlo Method in disciplines ranging from field theories to cosmology, including more realistic models in the neurosciences. A structure of multi-instruction sets for parallel processing is ideally suited for the Monte Carlo approach. One may even hope for a modest hardening of the soft sciences.

  9. MCNP-REN a Monte Carlo tool for neutron detector design

    CERN Document Server

    Abhold, M E

    2002-01-01

    The development of neutron detectors makes extensive use of the predictions of detector response through the use of Monte Carlo techniques in conjunction with the point reactor model. Unfortunately, the point reactor model fails to accurately predict detector response in common applications. For this reason, the general Monte Carlo code developed at Los Alamos National Laboratory, Monte Carlo N-Particle (MCNP), was modified to simulate the pulse streams that would be generated by a neutron detector and normally analyzed by a shift register. This modified code, MCNP-Random Exponentially Distributed Neutron Source (MCNP-REN), along with the Time Analysis Program, predicts neutron detector response without using the point reactor model, making it unnecessary for the user to decide whether or not the assumptions of the point model are met for their application. MCNP-REN is capable of simulating standard neutron coincidence counting as well as neutron multiplicity counting. Measurements of mixed oxide fresh fuel w...

  10. Final Report: 06-LW-013, Nuclear Physics the Monte Carlo Way

    International Nuclear Information System (INIS)

    Ormand, W.E.

    2009-01-01

    This document reports the progress and accomplishments achieved in 2006-2007 with LDRD funding under the proposal 06-LW-013, 'Nuclear Physics the Monte Carlo Way'. The project was a theoretical study to explore a novel approach to dealing with a persistent problem in Monte Carlo approaches to quantum many-body systems. The goal was to implement a solution to the notorious 'sign problem', which, if successful, would permit, for the first time, exact solutions to quantum many-body systems that cannot be addressed with other methods. In this document, we outline the progress and accomplishments achieved during FY2006-2007 with LDRD funding in the proposal 06-LW-013, 'Nuclear Physics the Monte Carlo Way'. This project was funded under the Lab Wide LDRD competition at Lawrence Livermore National Laboratory. The primary objective of this project was to test the feasibility of implementing a novel approach to solving the generic quantum many-body problem, which is one of the most important problems being addressed in theoretical physics today. Instead of traditional methods based on matrix diagonalization, this proposal focused on a Monte Carlo method. The principal difficulty with Monte Carlo methods is the so-called 'sign problem'. The sign problem, which will be discussed in some detail later, is endemic to Monte Carlo approaches to the quantum many-body problem, and is the principal reason that they have not been completely successful in the past. Here, we outline our research in the 'shifted-contour method' applied to the Auxiliary Field Monte Carlo (AFMC) method.

  11. Self-sealing barriers of sand/bentonite-mixtures in a clay repository. SB-experiment in the Mont Terri Rock Laboratory. Final report

    International Nuclear Information System (INIS)

    Rothfuchs, Tilmann; Czaikowski, Oliver; Hartwig, Lothar; Hellwald, Karsten; Komischke, Michael; Miehe, Ruediger; Zhang, Chun-Liang

    2012-10-01

    Several years ago, GRS performed laboratory investigations on the suitability of clay/mineral mixtures as optimized sealing materials in underground repositories for radioactive wastes /JOC 00/ /MIE 03/. The investigations yielded promising results, so plans were developed for testing the sealing properties of those materials under representative in-situ conditions in the Mont Terri Rock Laboratory (MTRL). The project was proposed to the ''Projekttraeger Wassertechnologie und Entsorgung (PtWT+E)'', and finally launched in January 2003 under the name SB-project (''Self-sealing Barriers of Clay/Mineral Mixtures in a Clay Repository''). The project was divided in two parts, a pre-project running from January 2003 until June 2004 under contract No. 02E9713 /ROT 04/ and the main project running from January 2004 until June 2012 under contract No. 02E9894, originally with PtWT+E, later renamed PTKA-WTE. In the course of the pre-project it was decided to incorporate the SB main project as a cost-shared action of PtWT+E and the European Commission (contract No. FI6W-CT-2004-508851) into the EC Integrated Project ESDRED (Engineering Studies and Demonstrations of Repository Designs) performed by 11 European project partners within the 6th European framework programme. The ESDRED project was terminated prior to the termination of the SB project. Interim results were reported by mid 2009 in two ESDRED reports /DEB09/ /SEI 09/. This report presents the results achieved in the whole SB-project, comprising preceding laboratory investigations for the final selection of suitable material mixtures, the conduct of mock-up tests in the geotechnical laboratory of GRS in Braunschweig and the execution of in-situ experiments at the MTRL.

  12. PEREGRINE: An all-particle Monte Carlo code for radiation therapy

    International Nuclear Information System (INIS)

    Hartmann Siantar, C.L.; Chandler, W.P.; Rathkopf, J.A.; Svatos, M.M.; White, R.M.

    1994-09-01

    The goal of radiation therapy is to deliver a lethal dose to the tumor while minimizing the dose to normal tissues. To carry out this task, it is critical to calculate correctly the distribution of dose delivered. Monte Carlo transport methods have the potential to provide more accurate prediction of dose distributions than currently-used methods. PEREGRINE is a new Monte Carlo transport code developed at Lawrence Livermore National Laboratory for the specific purpose of modeling the effects of radiation therapy. PEREGRINE transports neutrons, photons, electrons, positrons, and heavy charged-particles, including protons, deuterons, tritons, helium-3, and alpha particles. This paper describes the PEREGRINE transport code and some preliminary results for clinically relevant materials and radiation sources

  13. First experience with particle-in-cell plasma physics code on ARM-based HPC systems

    Science.gov (United States)

    Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Mantsinen, Mervi; Mateo, Sergi; Cela, José M.; Castejón, Francisco

    2015-09-01

    In this work, we explore the feasibility of porting a particle-in-cell code (EUTERPE) to an ARM multi-core platform from the Mont-Blanc project. The prototype used is based on a Samsung Exynos 5 system-on-chip with an integrated GPU. It is the first such prototype that could be used for High-Performance Computing (HPC), since it supports double precision and parallel programming languages.

  14. Monte Carlo applications to radiation shielding problems

    International Nuclear Information System (INIS)

    Subbaiah, K.V.

    2009-01-01

    transport in complex geometries is straightforward, while even the simplest finite geometries (e.g., thin foils) are very difficult to deal with using the transport equation. The main drawback of the Monte Carlo method lies in its random nature: all the results are affected by statistical uncertainties, which can be reduced at the expense of increasing the sampled population, and, hence, the computation time. Under special circumstances, the statistical uncertainties may be lowered by using variance-reduction techniques. Monte Carlo methods tend to be used when it is infeasible or impossible to compute an exact result with a deterministic algorithm. The term Monte Carlo was coined in the 1940s by physicists working on nuclear weapon projects at the Los Alamos National Laboratory

  15. Monte Carlo methods

    Directory of Open Access Journals (Sweden)

    Bardenet Rémi

    2013-07-01

    Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow these integrals to be computed numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which are rejection sampling, importance sampling and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretical justification of the algorithms as well as practical advice, trying to relate the two. We discuss the application of Monte Carlo in experimental physics, and point to landmarks in the literature for the curious reader.
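
    To make the MCMC part of the review concrete, here is a minimal random-walk Metropolis sampler for a one-dimensional target density known only up to a normalising constant; it is a generic textbook sketch, not code taken from the paper.

        import numpy as np

        def log_target(x):
            # unnormalised log-density: an equal-weight two-component Gaussian mixture
            return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

        rng = np.random.default_rng(0)
        n_steps, step = 50_000, 1.5
        samples = np.empty(n_steps)
        x = 0.0

        for i in range(n_steps):
            proposal = x + step * rng.normal()
            # accept with probability min(1, pi(proposal) / pi(x))
            if np.log(rng.random()) < log_target(proposal) - log_target(x):
                x = proposal
            samples[i] = x

        print("posterior mean ≈", samples[5000:].mean())   # first 5000 steps discarded as burn-in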

  16. The application of weight windows to 'Global' Monte Carlo problems

    International Nuclear Information System (INIS)

    Becker, T. L.; Larsen, E. W.

    2009-01-01

    This paper describes two basic types of global deep-penetration (shielding) problems-the global flux problem and the global response problem. For each of these, two methods for generating weight windows are presented. The first approach, developed by the authors of this paper and referred to generally as the Global Weight Window, constructs a weight window that distributes Monte Carlo particles according to a user-specified distribution. The second approach, developed at Oak Ridge National Laboratory and referred to as FW-CADIS, constructs a weight window based on intuitively extending the concept of the source-detector problem to global problems. The numerical results confirm that the theory used to describe the Monte Carlo particle distribution for a given weight window is valid and that the figure of merit is strongly correlated to the Monte Carlo particle distribution. Furthermore, they illustrate that, while both methods are capable of obtaining the correct solution, the Global Weight Window distributes particles much more uniformly than FW-CADIS. As a result, the figure of merit is higher for the Global Weight Window. (authors)

  17. Fission track dating of zircon: a multichronometer

    International Nuclear Information System (INIS)

    Carpena, J.

    1992-01-01

    Scatter in the fission-track ages of zircons from a single rock is possible when the zircons present morphological and geochemical variations and the greatest care is not taken in the choice of the etching conditions and in the counting of tracks. The fission-track study of two heterogeneous populations of zircons, from the Mont Blanc granite and from the Gran Paradiso gneisses, shows that zircon may work as a multichronometer.

  18. SELF-ABSORPTION CORRECTIONS BASED ON MONTE CARLO SIMULATIONS

    Directory of Open Access Journals (Sweden)

    Kamila Johnová

    2016-12-01

    The main aim of this article is to demonstrate how Monte Carlo simulations are implemented in our gamma spectrometry laboratory at the Department of Dosimetry and Application of Ionizing Radiation in order to calculate the self-absorption within the samples. A model of a real HPGe detector created for MCNP simulations is presented in this paper. All of the parameters that may influence the self-absorption are first discussed theoretically and then described using the calculated results.

  19. Implementation of the full-scale emplacement (FE) experiment at the Mont Terri rock laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Müller, H.R.; Garitte, B.; Vogt, T.; and others

    2017-04-15

    Opalinus Clay is currently being assessed as the host rock for a deep geological repository for high-level and low- and intermediate-level radioactive wastes in Switzerland. Within this framework, the 'Full-Scale Emplacement' (FE) experiment was initiated at the Mont Terri rock laboratory close to the small town of St-Ursanne in Switzerland. The FE experiment simulates, as realistically as possible, the construction, waste emplacement, backfilling and early post-closure evolution of a spent fuel/vitrified high-level waste disposal tunnel according to the Swiss repository concept. The main aim of this multiple heater test is the investigation of repository-induced thermo-hydro-mechanical (THM) coupled effects on the host rock at this scale and the validation of existing coupled THM models. For this, several hundred sensors were installed in the rock, the tunnel lining, the bentonite buffer, the heaters and the plug. This paper is structured according to the implementation timeline of the FE experiment. It documents relevant details about the instrumentation, the tunnel construction, the production of the bentonite blocks and the highly compacted 'granulated bentonite mixture' (GBM), the development and construction of the prototype 'backfilling machine' (BFM) and its testing for horizontal GBM emplacement. Finally, the plug construction and the start of all 3 heaters (with a thermal output of 1350 Watt each) in February 2015 are briefly described. In this paper, measurement results representative of the different experimental steps are also presented. Tunnel construction aspects are discussed on the basis of tunnel wall displacements, permeability testing and relative humidity measurements around the tunnel. GBM densities achieved with the BFM in the different off-site mock-up tests and, finally, in the FE tunnel are presented. Finally, in situ thermal conductivity and temperature measurements recorded during the first heating months

  20. Exploring Monte Carlo methods

    CERN Document Server

    Dunn, William L

    2012-01-01

    Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle problem" ...
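
    Since the excerpt breaks off at the famous Buffon's needle problem, a compact simulation of it is a handy reminder of the kind of example such a text works through: a needle of length L dropped on a floor ruled with parallel lines a distance d ≥ L apart crosses a line with probability 2L/(πd), which, inverted, yields an estimator of π.

        import numpy as np

        rng = np.random.default_rng(2024)
        L, d, n = 1.0, 1.0, 1_000_000             # needle length, line spacing, number of drops

        y = rng.uniform(0.0, d / 2.0, n)          # distance from needle centre to the nearest line
        theta = rng.uniform(0.0, np.pi / 2.0, n)  # acute angle between needle and the lines
        hits = np.count_nonzero(y <= (L / 2.0) * np.sin(theta))

        pi_estimate = 2.0 * L * n / (d * hits)
        print("pi ≈", pi_estimate)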

  1. Specialized Monte Carlo codes versus general-purpose Monte Carlo codes

    International Nuclear Information System (INIS)

    Moskvin, Vadim; DesRosiers, Colleen; Papiez, Lech; Lu, Xiaoyi

    2002-01-01

    The possibilities of Monte Carlo modeling for dose calculations and treatment optimization are quite limited in radiation oncology applications. The main reason is that the Monte Carlo technique for dose calculations is time consuming, while treatment planning may require hundreds of possible cases of dose simulations to be evaluated for dose optimization. The second reason is that general-purpose codes, widely used in practice, require an experienced user to customize them for calculations. This paper discusses the concept of Monte Carlo code design that can avoid the main problems that are preventing widespread use of this simulation technique in medical physics. (authors)

  2. Where protons will play

    CERN Multimedia

    Holt, Jim

    2007-01-01

    "On seing the Alps for the first time, Dorothy Parket is reputed to have said, "They're beautiful, but they're dumb". Near the foot of Mont Blanc, the greatest of the alpine peaks, another sizable object is taking shape, also quite beautiful in its way, yet not at all dumb. In fact, its pristine geometries may be instrumental in revealing what have hitherto been some of nature's deepest secrets." (2 pages)

  3. Curves and Surfaces

    Science.gov (United States)

    1990-01-01

    Joint work with Björn Jawerth and Brad Lucier. Courbes et Surfaces, Chamonix - Mont Blanc, 21-27 June 1990. Quasi-interpolants of Szász type: with the weight function φ(t) = e^{ct}, c > 0, t ∈ ℝ, define C_φ(ℝ₊) = { f ∈ C(ℝ₊) : ‖f‖_φ = sup_{t ∈ ℝ₊} |f(t)| / φ(t) < +∞ }. The Szász-Mirakyan operator S_n from C_φ(ℝ₊) into ...

  4. Monte Carlo principles and applications

    Energy Technology Data Exchange (ETDEWEB)

    Raeside, D E [Oklahoma Univ., Oklahoma City (USA). Health Sciences Center

    1976-03-01

    The principles underlying the use of Monte Carlo methods are explained, for readers who may not be familiar with the approach. The generation of random numbers is discussed, and the connection between Monte Carlo methods and random numbers is indicated. Outlines of two well established Monte Carlo sampling techniques are given, together with examples illustrating their use. The general techniques for improving the efficiency of Monte Carlo calculations are considered. The literature relevant to the applications of Monte Carlo calculations in medical physics is reviewed.
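
    The two well-established sampling techniques most often outlined in such a tutorial are inverse-transform sampling and rejection sampling; the brief, generic sketch below illustrates both, with an exponential and a half-Gaussian target chosen purely for simplicity (they are not taken from the article).

        import numpy as np

        rng = np.random.default_rng(5)
        n = 100_000

        # 1) inverse-transform sampling of an exponential with rate lam:
        #    F(x) = 1 - exp(-lam*x)  =>  x = -ln(1 - u)/lam  with u ~ U(0,1)
        lam = 2.0
        u = rng.random(n)
        exp_samples = -np.log1p(-u) / lam
        print("exponential mean (expect %.3f): %.3f" % (1 / lam, exp_samples.mean()))

        # 2) rejection sampling of a half-Gaussian using an exponential envelope:
        #    target f(x) ∝ exp(-x**2/2), proposal g(x) = exp(-x), bound M so that f <= M*g
        M = np.exp(0.5)                      # max of exp(-x**2/2 + x) is exp(1/2), at x = 1
        x = -np.log(rng.random(n))           # proposals drawn from g
        accept = rng.random(n) < np.exp(-0.5 * x**2 + x) / M
        half_gauss = x[accept]
        print("acceptance rate %.2f, sample mean (expect %.3f): %.3f"
              % (accept.mean(), np.sqrt(2 / np.pi), half_gauss.mean()))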

  5. 11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing

    CERN Document Server

    Nuyens, Dirk

    2016-01-01

    This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.

  6. EURADOS action for the determination of americium in the skull by in vivo measurements and Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Lopez Ponte, M. A.; Navarro Amaro, J. F.; Perez Lopez, B.; Navarro Bravo, T.; Nogueira, P.; Vrba, T.

    2013-07-01

    Within the internal dosimetry working group WG7 of EURADOS (European Radiation Dosimetry Group e.V.), coordinated by CIEMAT, an international action has been conducted for the in vivo measurement of americium in three skull-type phantoms, using germanium detectors for gamma spectrometry together with simulation by Monte Carlo methods. The action was set up as two separate exercises, with the participation of institutions in Europe, America and Asia. Similar actions preceded this intercomparison of in vivo measurement and Monte Carlo modelling. The preliminary results and associated findings are presented in this work. The whole-body counting laboratory (CRC) of the internal personal dosimetry service (DPI) of CIEMAT was one of the participants in the in vivo measurement exercise, while the numerical dosimetry group of CIEMAT participated in the Monte Carlo simulation exercise. (Author)

  7. Comparative evaluation of photon cross section libraries for materials of interest in PET Monte Carlo simulations

    CERN Document Server

    Zaidi, H

    1999-01-01

    The many applications of Monte Carlo modelling in nuclear medicine imaging make it desirable to increase the accuracy and computational speed of Monte Carlo codes. The accuracy of Monte Carlo simulations strongly depends on the accuracy of the probability functions and thus on the cross section libraries used for photon transport calculations. A comparison between different photon cross section libraries and parametrizations implemented in Monte Carlo simulation packages developed for positron emission tomography and the most recent Evaluated Photon Data Library (EPDL97) developed by the Lawrence Livermore National Laboratory was performed for several human tissues and common detector materials for energies from 1 keV to 1 MeV. Different photon cross section libraries and parametrizations show quite large variations as compared to the EPDL97 coefficients. This latter library is more accurate and was carefully designed in the form of look-up tables providing efficient data storage, access, and management. Toge...

  8. Coupling photon Monte Carlo simulation and CAD software. Application to X-ray nondestructive evaluation

    International Nuclear Information System (INIS)

    Tabary, J.; Gliere, A.

    2001-01-01

    A Monte Carlo radiation transport simulation program, EGS Nova, and a computer aided design software, BRL-CAD, have been coupled within the framework of Sindbad, a nondestructive evaluation (NDE) simulation system. In its current status, the program is very valuable in a NDE laboratory context, as it helps simulate the images due to the uncollided and scattered photon fluxes in a single NDE software environment, without having to switch to a Monte Carlo code parameters set. Numerical validations show a good agreement with EGS4 computed and published data. As the program's major drawback is the execution time, computational efficiency improvements are foreseen. (orig.)

  9. Mission Analysis, Operations, and Navigation Toolkit Environment (Monte) Version 040

    Science.gov (United States)

    Sunseri, Richard F.; Wu, Hsi-Cheng; Evans, Scott E.; Evans, James R.; Drain, Theodore R.; Guevara, Michelle M.

    2012-01-01

    Monte is a software set designed for use in mission design and spacecraft navigation operations. The system can process measurement data, design optimal trajectories and maneuvers, and do orbit determination, all in one application. For the first time, a single software set can be used for mission design and navigation operations. This eliminates problems due to different models and fidelities used in legacy mission design and navigation software. The unique features of Monte 040 include a blowdown thruster model for GRAIL (Gravity Recovery and Interior Laboratory) with associated pressure models, as well as an updated optimal-search capability (COSMIC) that facilitated mission design for ARTEMIS. Existing legacy software lacked the capabilities necessary for these two missions. There is also a mean orbital element propagator and an osculating to mean element converter that allows long-term orbital stability analysis for the first time in compiled code. The optimized trajectory search tool COSMIC allows users to place constraints and controls on their searches without any restrictions. Constraints may be user-defined and depend on trajectory information either forward or backwards in time. In addition, a long-term orbit stability analysis tool (morbiter) existed previously as a set of scripts on top of Monte. Monte is becoming the primary tool for navigation operations, a core competency at JPL. The mission design capabilities in Monte are becoming mature enough for use in project proposals as well as post-phase A mission design. Monte has three distinct advantages over existing software. First, it is being developed in a modern paradigm: object-oriented C++ and Python. Second, the software has been developed as a toolkit, which allows users to customize their own applications and allows the development team to implement requirements quickly, efficiently, and with minimal bugs. Finally, the software is managed in accordance with the CMMI (Capability Maturity Model

  10. MCNP: a general Monte Carlo code for neutron and photon transport. Version 3A. Revision 2

    International Nuclear Information System (INIS)

    Briesmeister, J.F.

    1986-09-01

    This manual is a practical guide for the use of our general-purpose Monte Carlo code MCNP. The first chapter is a primer for the novice user. The second chapter describes the mathematics, data, physics, and Monte Carlo simulation found in MCNP. This discussion is not meant to be exhaustive - details of the particular techniques and of the Monte Carlo method itself will have to be found elsewhere. The third chapter shows the user how to prepare input for the code. The fourth chapter contains several examples, and the fifth chapter explains the output. The appendices show how to use MCNP on particular computer systems at the Los Alamos National Laboratory and also give details about some of the code internals that those who wish to modify the code may find useful. 57 refs

  11. Calibration of the identiFINDER detector for iodine measurement in the thyroid using the Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Ramos M, D.; Yera S, Y.; Lopez B, G. M.; Acosta R, N.; Vergara G, A., E-mail: dayana@cphr.edu.cu [Centro de Proteccion e Higiene de las Radiaciones, Calle 20 No. 4113 e/ 41 y 47, Playa, 10600 La Habana (Cuba)

    2014-08-15

    This work is based on the determination of the detection efficiency of the identiFINDER detector for 125I and 131I in the thyroid using the Monte Carlo method. The suitability of the calibration method is analyzed by comparing the results of the direct Monte Carlo method with those of the corrected method; the latter was chosen because its differences with respect to the real efficiency stayed below 10%. To simulate the detector, its geometric parameters were optimized using a tomographic study, which allowed the uncertainties of the estimates to be minimized. Finally, simulations of the detector with a point source were performed to find the correction factors at 5 cm, 15 cm and 25 cm, together with those corresponding to the detector-phantom arrangement, for the validation of the method and the final calculation of the efficiency. These show that, in the Monte Carlo implementation, simulating at a greater distance than that used in the laboratory measurements leads to an overestimation of the efficiency, while simulating at a shorter distance underestimates it, so the simulation should be performed at the same distance at which the measurement will actually be made. Efficiency curves and the minimum detectable activity for the measurement of 131I and 125I were also obtained. Overall, the Monte Carlo methodology was implemented for the calibration of the identiFINDER with the purpose of estimating the measured activity of iodine in the thyroid. This method is an ideal way to compensate for the lack of standard solutions and phantoms, ensuring that the capabilities of the Internal Contamination Laboratory of the Centro de Proteccion e Higiene de las Radiaciones remain calibrated for the measurement of iodine in the thyroid. (author)

  12. Correlations between muons and low energy pulses at LSD of the Mont Blanc laboratory near the time of SN1987A explosion

    International Nuclear Information System (INIS)

    Dadykin, V.L.; Khalchukov, F.F.; Korchagin, P.V.; Korolkova, E.V.; Kudryavtsev, V.A.; Mal'gin, A.S.; Ryasny, V.G.; Ryazhskaya, O.G.; Yakushev, V.F.; Zatsepin, G.T.; Aglietta, M.; Badino, G.; Bologna, G.; Castagnoli, C.; Castellina, A.; Fulgione, W.; Galeotti, P.; Saavedra, O.; Trinchero, G.; Vernetto, S.; Turin Univ.

    1989-01-01

    We have analysed the data of LSD from February 10, 1987, to March 7, 1987, in order to search for autocorrelations between all pulses detected by LSD with energy higher than 5 MeV, like those that occurred at ∼3:00 UT on February 23, 1987, between the pulses detected by 3 neutrino telescopes and 2 gravitational wave antennae. We have found 9 pairs of correlated pulses (muon + low energy pulse) from 5:42 UT to 10:13 UT on February 23, 1987. The time differences of pulses in the pairs are less than 2 s, the first pulse in the pair being either a muon or a low-energy pulse. The frequency of such random Poissonian fluctuations is ∼1/(10 years). There are no correlations beyond statistical expectation between pairs of low-energy pulses, or between pairs of muon pulses, detected by LSD during the whole time period.

  13. Verification and Validation of Monte Carlo n-Particle Code 6 (MCNP6) with Neutron Protection Factor Measurements of an Iron Box

    Science.gov (United States)

    2014-03-27

    Vehicle Code System (VCS), the Monte Carlo Adjoint SHielding (MASH) code, and the Monte Carlo n-Particle (MCNP) code. Of the three, the oldest and still most widely utilized radiation transport code is MCNP. First created at Los Alamos National Laboratory (LANL) in 1957, the code simulated neutral particle types, and previous versions of MCNP were repeatedly validated using both simple and complex geometries [12, 13].

  14. Monte Carlo treatment planning and high-resolution alpha-track autoradiography for neutron capture therapy

    Energy Technology Data Exchange (ETDEWEB)

    Zamenhof, R.G.; Lin, K.; Ziegelmiller, D.; Clement, S.; Lui, C.; Harling, O.K.

    Monte Carlo simulations of thermal neutron flux distributions in a mathematical head model have been compared to experimental measurements in a corresponding anthropomorphic gelatin-based head phantom irradiated by a thermal neutron beam as presently available at the MITR-II Research Reactor. Excellent agreement between Monte Carlo and experimental measurements has encouraged us to employ the Monte Carlo simulation technique to approach treatment planning problems in neutron capture therapy. We have also implemented a high-resolution alpha-track autoradiography technique originally developed in our laboratory at MIT. Initial autoradiograms produced by this technique meet our expectations in terms of the high resolution available and the ability to etch tracks without concomitant destruction of stained tissue. Our preliminary results with computer-aided track distribution analysis indicate that this approach is very promising in being able to quantify boron distributions in tissue at the subcellular level with a minimum amount of operator effort.

  15. Systems guide to MCNP (Monte Carlo Neutron and Photon Transport Code)

    International Nuclear Information System (INIS)

    Kirk, B.L.; West, J.T.

    1984-06-01

    The subject of this report is the implementation of the Los Alamos National Laboratory Monte Carlo Neutron and Photon Transport Code - Version 3 (MCNP) on the different types of computer systems, especially the IBM MVS system. The report supplements the documentation of the RSIC computer code package CCC-200/MCNP. Details of the procedure to follow in executing MCNP on the IBM computers, either in batch mode or interactive mode, are provided

  16. Mont Terri Project - Heater experiment, engineered barriers emplacement and ventilations tests. No 1 - Swiss Geological Survey, Bern, 2007

    International Nuclear Information System (INIS)

    Bossart, P.; Nussbaum, C.

    2007-01-01

    The international Mont Terri project started in January 1996. Research is carried out in the Mont Terri rock laboratory, an underground facility near the security gallery of the Mont Terri motorway tunnel (vicinity of St-Ursanne, Canton of Jura, Switzerland). The aim of the project is the geological, hydrogeological, geochemical and geotechnical characterisation of a clay formation, specifically of the Opalinus Clay. Twelve Partners from European countries and Japan participate in the project. These are ANDRA, BGR, CRIEPI, ENRESA, GRS, HSK, IRSN, JAEA, NAGRA, OBAYASHI, SCK.CEN and swisstopo. Since 2006, swisstopo acts as operator of the rock laboratory and is responsible for the implementation of the research programme decided by the partners. The three following reports are milestones in the research history of the Mont Terri project. It was the first time that an in-situ heating test with about 20 observation boreholes over a time span of several years was carried out in a clay formation. The engineered barrier emplacement experiment has been extended due to very encouraging measurement results and is still going on. The ventilation test was and is a challenge, especially in the very narrow microtunnel. All three projects were financially supported by the European Commission and the Swiss State Secretariat for Education and Research. The three important scientific and technical reports, which are presented in the following, have been provided by a number of scientists, engineers and technicians from the Partners, but also from national research organisations and private contractors. Many fruitful meetings were held at the rock laboratory and at other facilities, not to forget the weeks and months of installation and testing work carried out by the technicians and engineers. The corresponding names and organisations are listed in detail in the reports. Special thanks go to the co-ordinators of the three projects for their motivation of the team during

  17. On the use of stochastic approximation Monte Carlo for Monte Carlo integration

    KAUST Repository

    Liang, Faming

    2009-03-01

    The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration via a dynamically weighted estimator by calling some results from the literature of nonhomogeneous Markov chains. Our numerical results indicate that SAMC can yield significant savings over conventional Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, for the problems for which the energy landscape is rugged. © 2008 Elsevier B.V. All rights reserved.
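
    As a minimal illustration of the weighted-estimator idea referred to above (generic self-normalized importance weighting, not the SAMC algorithm itself), the following Python sketch estimates an expectation under one distribution from samples drawn under another, weighting each sample accordingly; the target, proposal and integrand are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def target_logpdf(x):          # unnormalized target: standard normal
    return -0.5 * x**2

def proposal_logpdf(x):        # proposal: normal with sigma = 2 (constants omitted)
    return -0.5 * (x / 2.0)**2 - np.log(2.0)

f = lambda x: x**2             # integrand: E[x^2] under the target equals 1

x = rng.normal(0.0, 2.0, size=100_000)          # draw from the proposal
logw = target_logpdf(x) - proposal_logpdf(x)    # importance weights (log scale)
w = np.exp(logw - logw.max())                   # stabilize before normalizing
estimate = np.sum(w * f(x)) / np.sum(w)         # self-normalized weighted estimator
print(estimate)                                 # close to 1.0
```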

  19. EURADOS action for determination of americium in skull measures in vivo and Monte Carlo simulation

    International Nuclear Information System (INIS)

    Lopez Ponte, M. A.; Navarro Amaro, J. F.; Perez Lopez, B.; Navarro Bravo, T.; Nogueira, P.; Vrba, T.

    2013-01-01

    Within Working Group 7 (internal dosimetry) of the EURADOS organization (European Radiation Dosimetry Group e.V.), which is coordinated by CIEMAT, an international action for the in vivo measurement of americium has been conducted on three skull-type phantoms, using germanium detectors for gamma spectrometry together with simulation by Monte Carlo methods. The action was organized as two separate exercises, with the participation of institutions in Europe, America and Asia. Similar actions preceded this intercomparison of in vivo measurement and Monte Carlo modelling. The preliminary results and associated findings are presented in this work. The body radioactivity counter (CRC) laboratory of the internal personal dosimetry (DPI) service of CIEMAT was one of the participants in the in vivo measurement exercise, while the numerical dosimetry group of CIEMAT participates in the Monte Carlo simulation exercise. (Author)

  20. Verification of the shift Monte Carlo code with the C5G7 reactor benchmark

    International Nuclear Information System (INIS)

    Sly, N. C.; Mervin, B. T.; Mosher, S. W.; Evans, T. M.; Wagner, J. C.; Maldonado, G. I.

    2012-01-01

    Shift is a new hybrid Monte Carlo/deterministic radiation transport code being developed at Oak Ridge National Laboratory. At its current stage of development, Shift includes a parallel Monte Carlo capability for simulating eigenvalue and fixed-source multigroup transport problems. This paper focuses on recent efforts to verify Shift's Monte Carlo component using the two-dimensional and three-dimensional C5G7 NEA benchmark problems. Comparisons were made between the benchmark eigenvalues and those output by the Shift code. In addition, mesh-based scalar flux tally results generated by Shift were compared to those obtained using MCNP5 on an identical model and tally grid. The Shift-generated eigenvalues were within three standard deviations of the benchmark and MCNP5-1.60 values in all cases. The flux tallies generated by Shift were found to be in very good agreement with those from MCNP. (authors)

  1. Improving the Terrain-Based Parameter for the Assessment of Snow Redistribution in the Col du Lac Blanc Area and Comparisons with TLS Snow Depth Data

    Science.gov (United States)

    Schön, Peter; Prokop, Alexander; Naaim-Bouvet, Florence; Nishimura, Kouichi; Vionnet, Vincent; Guyomarc'h, Gilbert

    2014-05-01

    Wind and the associated snow drift are dominating factors determining the snow distribution and accumulation in alpine areas, resulting in a high spatial variability of snow depth that is difficult to evaluate and quantify. The terrain-based parameter Sx characterizes the degree of shelter or exposure of a grid point provided by the upwind terrain, without the computational complexity of numerical wind field models. The parameter has been shown to qualitatively predict snow redistribution with good reproduction of spatial patterns, but has failed to quantitatively describe the snow redistribution, and correlations with measured snow heights were poor. The objective of our research was to a) identify the sources of poor correlations between predicted and measured snow redistribution and b) improve the parameter's ability to qualitatively and quantitatively describe snow redistribution in our research area, the Col du Lac Blanc in the French Alps. The area is at an elevation of 2700 m and particularly suited for our study due to its constant wind direction and the availability of data from a meteorological station. Our work focused on areas with terrain edges of approximately 10 m height, and we worked with 1-2 m resolution digital terrain and snow surface data. We first compared the results of the terrain-based parameter calculations to measured snow depths, obtained by high-accuracy terrestrial laser scan measurements. The results were similar to previous studies: the parameter was able to reproduce observed patterns in snow distribution, but regression analyses showed poor correlations between the terrain-based parameter and measured snow depths. We demonstrate how the correlations between measured and calculated snow heights improve if the parameter is calculated based on a snow surface model instead of a digital terrain model. We show how changing the parameter's search distance and how raster re-sampling and raster smoothing improve the results. To improve the parameter
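
    The terrain-based parameter referred to above follows the general idea of an upwind shelter/exposure index: for each grid cell, take the maximum slope angle to any cell lying within a fixed search distance in the upwind direction. A simplified, hypothetical sketch of that idea on a 1D elevation transect follows; the grid spacing, search distance and example elevations are assumptions for the illustration, not the exact published formulation.

```python
import numpy as np

def shelter_index_1d(z, dx=1.0, search_dist=30.0):
    """Max upwind slope angle (degrees) for each cell of a 1D transect.

    z: elevations along the transect, ordered downwind; dx: grid spacing (m);
    search_dist: how far upwind (m) to look. Positive values = sheltered,
    negative = exposed.
    """
    n = len(z)
    ncells = int(search_dist / dx)
    sx = np.full(n, np.nan)
    for i in range(n):
        j0 = max(0, i - ncells)
        if j0 == i:
            continue  # no upwind cells available at the transect edge
        upwind = np.arange(j0, i)
        slopes = np.degrees(np.arctan((z[upwind] - z[i]) / ((i - upwind) * dx)))
        sx[i] = slopes.max()
    return sx

# Small ridge followed by a lee slope (illustrative elevations, metres).
z = np.array([10, 11, 12, 14, 13, 11, 10, 10, 10], dtype=float)
print(np.round(shelter_index_1d(z, dx=10.0, search_dist=30.0), 1))
```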

  2. CHARACTERIZATION OF THE PERIOD OF SENSITIVITY OF FETAL MALE SEXUAL DEVELOPMENT TO VINCLOZOLIN

    Science.gov (United States)

    Characterization of the period of sensitivity of fetal male sexual development to vinclozolin.Wolf CJ, LeBlanc GA, Ostby JS, Gray LE Jr.Endocrinology Branch, MD 72, Reproductive Toxicology Division, National Health and Environmental Effects Research Laboratory, U....

  3. Monsieur Etienne Blanc Premier vice-président de la Région Auvergne-Rhône-Alpes Délégué aux finances, à l'administration générale, aux économies budgétaires et aux politiques transfrontalières

    CERN Multimedia

    Bennett, Sophia Elizabeth

    2017-01-01

    Monsieur Etienne Blanc Premier vice-président de la Région Auvergne-Rhône-Alpes Délégué aux finances, à l'administration générale, aux économies budgétaires et aux politiques transfrontalières

  4. Monte Carlo calculation of dose rate conversion factors for external exposure to photon emitters in soil

    CERN Document Server

    Clouvas, A; Antonopoulos-Domis, M; Silva, J

    2000-01-01

    The dose rate conversion factors D_CF (absorbed dose rate in air per unit activity per unit of soil mass, nGy h⁻¹ per Bq kg⁻¹) are calculated 1 m above ground for photon emitters of natural radionuclides uniformly distributed in the soil. Three Monte Carlo codes are used: 1) the MCNP code of Los Alamos; 2) the GEANT code of CERN; and 3) a Monte Carlo code developed in the Nuclear Technology Laboratory of the Aristotle University of Thessaloniki. The accuracy of the Monte Carlo results is tested by comparing the unscattered flux obtained by the three Monte Carlo codes with an independent straightforward calculation. All codes, and particularly MCNP, calculate accurately the absorbed dose rate in air due to the unscattered radiation. For the total radiation (unscattered plus scattered) the D_CF values calculated by the three codes are in very good agreement with each other. The comparison between these results and the results deduced previously by other authors indicates a good ag...

  5. Characterization of an extrapolation chamber for low-energy X-rays: Experimental and Monte Carlo preliminary results

    Energy Technology Data Exchange (ETDEWEB)

    Neves, Lucio P., E-mail: lpneves@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN-CNEN), Comissao Nacional de Energia Nuclear, Av. Prof. Lineu Prestes 2242, 05508-000 Sao Paulo, SP (Brazil); Silva, Eric A.B., E-mail: ebrito@usp.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN-CNEN), Comissao Nacional de Energia Nuclear, Av. Prof. Lineu Prestes 2242, 05508-000 Sao Paulo, SP (Brazil); Perini, Ana P., E-mail: aperini@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN-CNEN), Comissao Nacional de Energia Nuclear, Av. Prof. Lineu Prestes 2242, 05508-000 Sao Paulo, SP (Brazil); Maidana, Nora L., E-mail: nmaidana@if.usp.br [Universidade de Sao Paulo, Instituto de Fisica, Travessa R 187, 05508-900 Sao Paulo, SP (Brazil); Caldas, Linda V.E., E-mail: lcaldas@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN-CNEN), Comissao Nacional de Energia Nuclear, Av. Prof. Lineu Prestes 2242, 05508-000 Sao Paulo, SP (Brazil)

    2012-07-15

    The extrapolation chamber is a parallel-plate ionization chamber that allows variation of its air-cavity volume. In this work, an experimental study and MCNP-4C Monte Carlo code simulations of an ionization chamber designed and constructed at the Calibration Laboratory at IPEN to be used as a secondary dosimetry standard for low-energy X-rays are reported. The results obtained were within the international recommendations, and the simulations showed that the components of the extrapolation chamber may influence its response up to 11.0%. - Highlights: ► A homemade extrapolation chamber was studied experimentally and with Monte Carlo. ► It was characterized as a secondary dosimetry standard, for low energy X-rays. ► Several characterization tests were performed and the results were satisfactory. ► Simulation showed that its components may influence the response up to 11.0%. ► This chamber may be used as a secondary standard at our laboratory.

  6. Vectorized Monte Carlo

    International Nuclear Information System (INIS)

    Brown, F.B.

    1981-01-01

    Examination of the global algorithms and local kernels of conventional general-purpose Monte Carlo codes shows that multigroup Monte Carlo methods have sufficient structure to permit efficient vectorization. A structured multigroup Monte Carlo algorithm for vector computers is developed in which many particle events are treated at once on a cell-by-cell basis. Vectorization of kernels for tracking and variance reduction is described, and a new method for discrete sampling is developed to facilitate the vectorization of collision analysis. To demonstrate the potential of the new method, a vectorized Monte Carlo code for multigroup radiation transport analysis was developed. This code incorporates many features of conventional general-purpose production codes, including general geometry, splitting and Russian roulette, survival biasing, variance estimation via batching, a number of cutoffs, and generalized tallies of collision, tracklength, and surface crossing estimators with response functions. Predictions of vectorized performance characteristics for the CYBER-205 were made using emulated coding and a dynamic model of vector instruction timing. Computation rates were examined for a variety of test problems to determine sensitivities to batch size and vector lengths. Significant speedups are predicted for even a few hundred particles per batch, and asymptotic speedups by about 40 over equivalent Amdahl 470V/8 scalar codes are predicted for a few thousand particles per batch. The principal conclusion is that vectorization of a general-purpose multigroup Monte Carlo code is well worth the significant effort required for stylized coding and major algorithmic changes
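
    The event-based idea described above, processing many particle events at once rather than following one history at a time, can be illustrated with a NumPy sketch that pushes a whole batch of photons through a homogeneous slab in lock-step; the slab thickness, cross section, absorption probability and the forward-only scattering simplification are assumptions for the example, not the algorithm of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def batch_slab_transmission(n=100_000, thickness=3.0, sigma_t=1.0, p_abs=0.5):
    """Event-based Monte Carlo: all live photons advance one event per loop."""
    x = np.zeros(n)                      # positions along the slab axis
    alive = np.ones(n, dtype=bool)       # photons still being tracked
    transmitted = 0
    while alive.any():
        # Sample free-flight distances for every live photon at once.
        step = rng.exponential(1.0 / sigma_t, size=alive.sum())
        x[alive] += step
        escaped = alive & (x >= thickness)
        transmitted += escaped.sum()
        alive &= ~escaped
        # Collision analysis for the remainder: absorb or scatter.
        collided = alive.copy()
        absorbed = collided & (rng.random(n) < p_abs)
        alive &= ~absorbed
        # Surviving photons scatter; for simplicity they keep moving forward.
    return transmitted / n

print(batch_slab_transmission())
```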

  7. Deformation mechanisms and evolution of the microstructure of gouge in the Main Fault in Opalinus Clay in the Mont Terri rock laboratory (CH)

    Science.gov (United States)

    Laurich, Ben; Urai, Janos L.; Vollmer, Christian; Nussbaum, Christophe

    2018-01-01

    We studied gouge from an upper-crustal, low-offset reverse fault in slightly overconsolidated claystone in the Mont Terri rock laboratory (Switzerland). The laboratory is designed to evaluate the suitability of the Opalinus Clay formation (OPA) to host a repository for radioactive waste. The gouge occurs in thin bands and lenses in the fault zone; it is darker in color and less fissile than the surrounding rock. It shows a matrix-based, P-foliated microfabric bordered and truncated by micrometer-thin shear zones consisting of aligned clay grains, as shown with broad-ion-beam scanning electron microscopy (BIB-SEM) and optical microscopy. Selected area electron diffraction based on transmission electron microscopy (TEM) shows evidence for randomly oriented nanometer-sized clay particles in the gouge matrix, surrounding larger elongated phyllosilicates with a strict P foliation. For the first time for the OPA, we report the occurrence of amorphous SiO2 grains within the gouge. Gouge has lower SEM-visible porosity and almost no calcite grains compared to the undeformed OPA. We present two hypotheses to explain the origin of gouge in the Main Fault: (i) authigenic generation consisting of fluid-mediated removal of calcite from the deforming OPA during shearing and (ii) clay smear consisting of mechanical smearing of calcite-poor (yet to be identified) source layers into the fault zone. Based on our data we prefer the first or a combination of both, but more work is needed to resolve this. Microstructures indicate a range of deformation mechanisms including solution-precipitation processes and a gouge that is weaker than the OPA because of the lower fraction of hard grains. For gouge, we infer a more rate-dependent frictional rheology than suggested from laboratory experiments on the undeformed OPA.

  8. Adjoint electron Monte Carlo calculations

    International Nuclear Information System (INIS)

    Jordan, T.M.

    1986-01-01

    Adjoint Monte Carlo is the most efficient method for accurate analysis of space systems exposed to natural and artificially enhanced electron environments. Recent adjoint calculations for isotropic electron environments include: comparative data for experimental measurements on electronics boxes; benchmark problem solutions for comparing total dose prediction methodologies; preliminary assessment of sectoring methods used during space system design; and total dose predictions on an electronics package. Adjoint Monte Carlo, forward Monte Carlo, and experiment are in excellent agreement for electron sources that simulate space environments. For electron space environments, adjoint Monte Carlo is clearly superior to forward Monte Carlo, requiring one to two orders of magnitude less computer time for relatively simple geometries. The solid-angle sectoring approximations used for routine design calculations can err by more than a factor of 2 on dose in simple shield geometries. For critical space systems exposed to severe electron environments, these potential sectoring errors demand the establishment of large design margins and/or verification of shield design by adjoint Monte Carlo/experiment

  9. Monte Carlo: Basics

    OpenAIRE

    Murthy, K. P. N.

    2001-01-01

    An introduction to the basics of Monte Carlo is given. The topics covered include sample space, events, probabilities, random variables, mean, variance, covariance, characteristic function, Chebyshev inequality, law of large numbers, central limit theorem (stable distribution, Lévy distribution), random numbers (generation and testing), random sampling techniques (inversion, rejection, sampling from a Gaussian, Metropolis sampling), analogue Monte Carlo and Importance sampling (exponential b...
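
    Two of the sampling techniques listed above can be shown in a few lines. The sketch below draws exponential variates by inversion of the cumulative distribution and standard-normal variates by rejection from a uniform proposal over a truncated range; the rate, truncation range and sample sizes are assumptions chosen for the illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_exponential_inversion(n, rate=1.0):
    """Inversion: if U ~ Uniform(0,1), then -ln(1-U)/rate ~ Exp(rate)."""
    u = rng.random(n)
    return -np.log1p(-u) / rate

def sample_normal_rejection(n, xmax=6.0):
    """Rejection: propose x ~ Uniform(-xmax, xmax), accept with probability f(x)."""
    f = lambda x: np.exp(-0.5 * x**2)   # unnormalized N(0,1), maximum value 1
    out = []
    while len(out) < n:
        x = rng.uniform(-xmax, xmax, size=n)
        u = rng.random(n)
        out.extend(x[u < f(x)])
    return np.array(out[:n])

print(sample_exponential_inversion(5, rate=2.0))
print(sample_normal_rejection(5))
```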

  10. MORSE Monte Carlo code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described

  11. First experience with particle-in-cell plasma physics code on ARM-based HPC systems

    OpenAIRE

    Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Mantsinen, Mervi; Mateo, Sergio; Cela, José M.; Castejón, Francisco

    2015-01-01

    In this work, we will explore the feasibility of porting a Particle-in-Cell code (EUTERPE) to an ARM multi-core platform from the Mont-Blanc project. The prototype used is based on a Samsung Exynos 5 system-on-chip with an integrated GPU. It is the first prototype that could be used for High-Performance Computing (HPC), since it supports double precision and parallel programming languages. The research leading to these results has received funding from the European Community's Seventh...

  12. Recent advances in hazardous materials transportation research: an international exchange. State-of-the-art Report 3, addendum

    International Nuclear Information System (INIS)

    Hills, P.; Geysen, W.J.; Tomachevsky, E.G.; Ringot, C.; Pages, P.

    1986-01-01

    The 4 papers in the report deal with the following areas: the transport of non-nuclear toxic and dangerous wastes in the United Kingdom; the transport system of dangerous products as a risk factor for the future: the computer-aided information program on hazardous materials; a validation study of the INTERTRAN model for assessing risks of transportation accidents: road transport of uranium hexafluoride; modifying the regulation for small radioactive package transit through the Mont Blanc tunnel-assessment of the health and economic impact

  13. Review of: La vallée électrique, Foëx E. (photographs) and Broennimann T. (texts), 2006, Paris, InFolio éditions, 164 p., 210 black-and-white photographs

    OpenAIRE

    Buisson, André

    2008-01-01

    Gathered under the title La vallée électrique, 210 black-and-white photographs by the photographer Emmanuel Foëx illustrate industrial architecture, town planning and the landscape of the Alpine Arc. Like a classical work, the album is divided into three parts: the power station, the network, the transformer. The author starts from the observation that, across the Alpine arc, the valley floors are studded with plants devoted to the production of electricity (the power station). The mountain, like a castl...

  14. Monte Carlo theory and practice

    International Nuclear Information System (INIS)

    James, F.

    1987-01-01

    Historically, the first large-scale calculations to make use of the Monte Carlo method were studies of neutron scattering and absorption, random processes for which it is quite natural to employ random numbers. Such calculations, a subset of Monte Carlo calculations, are known as direct simulation, since the 'hypothetical population' of the narrower definition above corresponds directly to the real population being studied. The Monte Carlo method may be applied wherever it is possible to establish equivalence between the desired result and the expected behaviour of a stochastic system. The problem to be solved may already be of a probabilistic or statistical nature, in which case its Monte Carlo formulation will usually be a straightforward simulation, or it may be of a deterministic or analytic nature, in which case an appropriate Monte Carlo formulation may require some imagination and may appear contrived or artificial. In any case, the suitability of the method chosen will depend on its mathematical properties and not on its superficial resemblance to the problem to be solved. The authors show how Monte Carlo techniques may be compared with other methods of solution of the same physical problem
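
    As a concrete instance of the equivalence mentioned above, the expectation of a function of a uniform random variable equals a definite integral, so averaging over random draws estimates the integral. The sketch below estimates ∫₀¹ 4/(1+x²) dx = π; the sample size is an arbitrary choice for the example.

```python
import numpy as np

rng = np.random.default_rng(7)

def mc_integral(f, n):
    """Estimate the integral of f over [0, 1] and its one-sigma statistical error."""
    x = rng.random(n)
    y = f(x)
    estimate = y.mean()
    error = y.std(ddof=1) / np.sqrt(n)
    return estimate, error

est, err = mc_integral(lambda x: 4.0 / (1.0 + x**2), n=1_000_000)
print(f"pi ≈ {est:.5f} ± {err:.5f}")
```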

  15. MCNP-REN: a Monte Carlo tool for neutron detector design

    International Nuclear Information System (INIS)

    Abhold, M.E.; Baker, M.C.

    2002-01-01

    The development of neutron detectors makes extensive use of the predictions of detector response through the use of Monte Carlo techniques in conjunction with the point reactor model. Unfortunately, the point reactor model fails to accurately predict detector response in common applications. For this reason, the general Monte Carlo code developed at Los Alamos National Laboratory, Monte Carlo N-Particle (MCNP), was modified to simulate the pulse streams that would be generated by a neutron detector and normally analyzed by a shift register. This modified code, MCNP-Random Exponentially Distributed Neutron Source (MCNP-REN), along with the Time Analysis Program, predicts neutron detector response without using the point reactor model, making it unnecessary for the user to decide whether or not the assumptions of the point model are met for their application. MCNP-REN is capable of simulating standard neutron coincidence counting as well as neutron multiplicity counting. Measurements of mixed oxide fresh fuel were taken with the Underwater Coincidence Counter, and measurements of highly enriched uranium reactor fuel were taken with the active neutron interrogation Research Reactor Fuel Counter and compared to calculation. Simulations completed for other detector design applications are described. The method used in MCNP-REN is demonstrated to be fundamentally sound and shown to eliminate the need to use the point model for detector performance predictions

  16. The adaptation method in the Monte Carlo simulation for computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hyoung Gun; Yoon, Chang Yeon; Lee, Won Ho [Dept. of Bio-convergence Engineering, Korea University, Seoul (Korea, Republic of); Cho, Seung Ryong [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Park, Sung Ho [Dept. of Neurosurgery, Ulsan University Hospital, Ulsan (Korea, Republic of)

    2015-06-15

    The patient dose incurred from diagnostic procedures during advanced radiotherapy has become an important issue. Many researchers in medical physics are using computational simulations to calculate complex parameters in experiments. However, extended computation times make it difficult for personal computers to run the conventional Monte Carlo method to simulate radiological images with high-flux photons such as images produced by computed tomography (CT). To minimize the computation time without degrading imaging quality, we applied a deterministic adaptation to the Monte Carlo calculation and verified its effectiveness by simulating CT image reconstruction for an image evaluation phantom (Catphan; Phantom Laboratory, New York, NY, USA) and a human-like voxel phantom (KTMAN-2) (Los Alamos National Laboratory, Los Alamos, NM, USA). For the deterministic adaptation, the relationship between iteration numbers and the simulations was estimated and the option to simulate scattered radiation was evaluated. The processing times of simulations using the adaptive method were at least 500 times faster than those using a conventional statistical process. In addition, compared with the conventional statistical method, the adaptive method provided images that were more similar to the experimental images, which proved that the adaptive method was highly effective for a simulation that requires a large number of iterations; assuming no radiation scattering in the vicinity of the detectors minimized artifacts in the reconstructed image.

  17. Vectorization of Monte Carlo particle transport

    International Nuclear Information System (INIS)

    Burns, P.J.; Christon, M.; Schweitzer, R.; Lubeck, O.M.; Wasserman, H.J.; Simmons, M.L.; Pryor, D.V.

    1989-01-01

    This paper reports that fully vectorized versions of the Los Alamos National Laboratory benchmark code Gamteb, a Monte Carlo photon transport algorithm, were developed for the Cyber 205/ETA-10 and Cray X-MP/Y-MP architectures. Single-processor performance measurements of the vector and scalar implementations were modeled in a modified Amdahl's Law that accounts for additional data motion in the vector code. The performance and implementation strategy of the vector codes are related to architectural features of each machine. Speedups between fifteen and eighteen for Cyber 205/ETA-10 architectures, and about nine for CRAY X-MP/Y-MP architectures are observed. The best single processor execution time for the problem was 0.33 seconds on the ETA-10G, and 0.42 seconds on the CRAY Y-MP

  18. Monte Carlo Methods in Physics

    International Nuclear Information System (INIS)

    Santoso, B.

    1997-01-01

    The method of Monte Carlo integration is reviewed briefly and some of its applications in physics are explained. A numerical experiment on the random number generators used in Monte Carlo techniques is carried out to show the behaviour of the randomness of the various generation methods. To account for the weight function involved in the Monte Carlo calculation, the Metropolis method is used. The results of the experiment show no regular patterns in the numbers generated, indicating that the generators are reasonably good, while the experimental results follow the expected statistical distribution law. Some applications of Monte Carlo methods in physics are then given. The physical problems are chosen such that the models have known exact or approximate solutions, with which the Monte Carlo calculations can be compared. The comparisons show that good agreement is obtained for the models considered
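
    The Metropolis step mentioned above accepts a proposed move with probability min(1, w(x')/w(x)), where w is the (possibly unnormalized) weight function. A minimal sketch for a one-dimensional Gaussian weight follows; the step size and chain length are arbitrary choices for the illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def metropolis(logw, x0=0.0, step=1.0, n=50_000):
    """Random-walk Metropolis chain targeting the weight exp(logw)."""
    chain = np.empty(n)
    x, lw = x0, logw(x0)
    for i in range(n):
        prop = x + rng.normal(0.0, step)
        lw_prop = logw(prop)
        if np.log(rng.random()) < lw_prop - lw:   # accept with prob min(1, w'/w)
            x, lw = prop, lw_prop
        chain[i] = x
    return chain

samples = metropolis(lambda x: -0.5 * x**2)       # weight proportional to exp(-x^2/2)
print(samples.mean(), samples.var())              # close to 0 and 1
```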

  19. Monte Carlo techniques in radiation therapy

    CERN Document Server

    Verhaegen, Frank

    2013-01-01

    Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book, the first of its kind, addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...

  20. Statistical implications in Monte Carlo depletions - 051

    International Nuclear Information System (INIS)

    Zhiwen, Xu; Rhodes, J.; Smith, K.

    2010-01-01

    As a result of steady advances of computer power, continuous-energy Monte Carlo depletion analysis is attracting considerable attention for reactor burnup calculations. The typical Monte Carlo analysis is set up as a combination of a Monte Carlo neutron transport solver and a fuel burnup solver. Note that the burnup solver is a deterministic module. The statistical errors in Monte Carlo solutions are introduced into nuclide number densities and propagated along fuel burnup. This paper is aimed at understanding the statistical implications in Monte Carlo depletions, including both statistical bias and statistical variations in depleted fuel number densities. The deterministic Studsvik lattice physics code, CASMO-5, is modified to model the Monte Carlo depletion. The statistical bias in depleted number densities is found to be negligible compared to its statistical variations, which, in turn, demonstrates the correctness of the Monte Carlo depletion method. Meanwhile, the statistical variation in number densities generally increases with burnup. Several possible ways of reducing the statistical errors are discussed: 1) to increase the number of individual Monte Carlo histories; 2) to increase the number of time steps; 3) to run additional independent Monte Carlo depletion cases. Finally, a new Monte Carlo depletion methodology, called the batch depletion method, is proposed, which consists of performing a set of independent Monte Carlo depletions and is thus capable of estimating the overall statistical errors including both the local statistical error and the propagated statistical error. (authors)
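
    The batch idea in the last sentence, estimating the overall uncertainty from a set of independent runs, reduces to computing the spread of the per-run results. A generic sketch follows; the run function, its noise level and the number of batches are placeholders for illustration, not the CASMO-5/Monte Carlo coupling itself.

```python
import numpy as np

def run_depletion_batch(seed):
    """Placeholder for one independent Monte Carlo depletion run.

    Here it just returns a noisy end-of-life number density (arbitrary units);
    in practice this would be a full transport + burnup calculation.
    """
    rng = np.random.default_rng(seed)
    return 1.0e21 * (1.0 + 0.02 * rng.standard_normal())

results = np.array([run_depletion_batch(seed) for seed in range(20)])
mean = results.mean()
std_error = results.std(ddof=1) / np.sqrt(len(results))
print(f"number density = {mean:.3e} ± {std_error:.3e} (1-sigma)")
```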

  1. Hamman Philippe, Blanc Christine, 2009, Sociologie du développement urbain durable. Projets et stratégies métropolitaines françaises, Bruxelles, P.I.E Peter Lang, 260p.

    Directory of Open Access Journals (Sweden)

    Bruno Villalba

    2009-10-01

    Full Text Available This book, produced in collaboration with Flore Henninger, with a preface by Viviane Claude and an afterword by Corinne Larrue, is edited by Philippe Hamman (senior lecturer in sociology in the urban planning department of the Faculty of Social Sciences, Social Practices and Development of the University of Strasbourg; in 2008 he edited Penser le développement durable urbain : regards croisés, Paris, L'Harmattan) and Christine Blanc (sociologist and urban planner, research officer at the Centre de recherche...

  2. Monte Carlo simulation for IRRMA

    International Nuclear Information System (INIS)

    Gardner, R.P.; Liu Lianyan

    2000-01-01

    Monte Carlo simulation is fast becoming a standard approach for many radiation applications that were previously treated almost entirely by experimental techniques. This is certainly true for Industrial Radiation and Radioisotope Measurement Applications - IRRMA. The reasons for this include: (1) the increased cost and inadequacy of experimentation for design and interpretation purposes; (2) the availability of low cost, large memory, and fast personal computers; and (3) the general availability of general purpose Monte Carlo codes that are increasingly user-friendly, efficient, and accurate. This paper discusses the history and present status of Monte Carlo simulation for IRRMA including the general purpose (GP) and specific purpose (SP) Monte Carlo codes and future needs - primarily from the experience of the authors

  3. Monte Carlo simulation of boron-ion implantation into single-crystal silicon

    International Nuclear Information System (INIS)

    Klein, K.M.

    1991-01-01

    A physically based Monte Carlo boron implantation model has been developed that comprehends previously neglected but important implant parameters such as native oxide layers, wafer temperature, beam divergence, tilt angle, rotation (twist) angle, and dose, in addition to energy. This model uses as its foundation the MARLOWE Monte Carlo simulation code developed at Oak Ridge National Laboratory for the analysis of radiation effects in materials. This code was carefully adapted for the simulation of ion implantation, and a number of significant improvements have been made, including the addition of atomic pair specific interatomic potentials, the implementation of a newly developed local electron concentration dependent electronic stopping model, and the implementation of a newly developed cumulative damage model. This improved version of the code, known as UT-MARLOWE, allows boron implantation profiles to be accurately predicted as a function of energy, tilt angle, rotation angle, and dose. This code has also been used in the development and implementation of an accurate and efficient two-dimensional boron implantation model

  4. Calibration of the identiFINDER detector for the iodine measurement in thyroid using the Monte Carlo method

    International Nuclear Information System (INIS)

    Ramos M, D.; Yera S, Y.; Lopez B, G. M.; Acosta R, N.; Vergara G, A.

    2014-08-01

    This work determines the detection efficiency of the identiFINDER detector for 125I and 131I in the thyroid using the Monte Carlo method. The suitability of the calibration method is analysed by comparing the results of the direct Monte Carlo method with the corrected method; the latter was chosen because its differences from the real efficiency stayed below 10%. To simulate the detector, its geometric parameters were optimized using a tomographic study, which minimized the uncertainties of the estimates. Finally, detector/point-source geometries were simulated to obtain the correction factors at 5 cm, 15 cm and 25 cm, and those corresponding to the detector/phantom arrangement used for validation of the method and the final calculation of the efficiency. These simulations demonstrate that, in the Monte Carlo implementation, simulating at a distance greater than that used in the laboratory measurements overestimates the efficiency, while simulating at a shorter distance underestimates it, so the simulation should be performed at the same distance at which the measurement will actually be made. Efficiency curves and the minimum detectable activity for the measurement of 131I and 125I are also obtained. Overall, the Monte Carlo methodology is implemented for the identiFINDER calibration with the purpose of estimating the measured activity of iodine in the thyroid. This method is an ideal way to compensate for the lack of standard solutions and phantoms, ensuring that the capabilities of the Internal Contamination Laboratory of the Centro de Proteccion e Higiene de las Radiaciones remain calibrated for iodine measurement in the thyroid. (author)
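
    Absolute detection efficiency at a given source-detector distance is, at its geometric core, the fraction of emitted photons whose direction intersects the detector face (times the intrinsic efficiency). The toy Monte Carlo sketch below estimates that geometric factor for an isotropic point source facing a circular detector; the detector radius, distances and sample size are illustrative assumptions, not identiFINDER parameters.

```python
import numpy as np

rng = np.random.default_rng(5)

def geometric_efficiency(distance_cm, det_radius_cm=2.5, n=2_000_000):
    """Fraction of isotropically emitted photons hitting a disc detector."""
    # Sample isotropic directions: cos(theta) uniform in [-1, 1].
    cos_t = rng.uniform(-1.0, 1.0, n)
    forward = cos_t > 0.0                      # only the forward hemisphere can hit
    # A forward photon hits the disc if its polar angle is small enough:
    # tan(theta) < R/d  <=>  cos(theta) > d / sqrt(d^2 + R^2).
    cos_max = distance_cm / np.hypot(distance_cm, det_radius_cm)
    hits = forward & (cos_t > cos_max)
    return hits.mean()

for d in (5.0, 15.0, 25.0):
    print(f"{d:4.0f} cm: geometric efficiency ≈ {geometric_efficiency(d):.4f}")
```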

  5. Geology of Maxwell Montes, Venus

    Science.gov (United States)

    Head, J. W.; Campbell, D. B.; Peterfreund, A. R.; Zisk, S. A.

    1984-01-01

    Maxwell Montes represent the most distinctive topography on the surface of Venus, rising some 11 km above mean planetary radius. The multiple data sets of the Pioneer mission and Earth-based radar observations are analyzed to characterize Maxwell Montes. Maxwell Montes is a porkchop-shaped feature located at the eastern end of Lakshmi Planum. The main massif trends about North 20 deg West for approximately 1000 km and the narrow handle extends several hundred km west-southwest (WSW) from the north end of the main massif, descending toward Lakshmi Planum. The main massif is rectilinear and approximately 500 km wide. The southern and northern edges of Maxwell Montes coincide with major topographic boundaries defining the edge of Ishtar Terra.

  6. Monte Carlo analysis of the slightly enriched uranium-D2O critical experiment LTRIIA (AWBA Development Program)

    International Nuclear Information System (INIS)

    Hardy, J. Jr.; Shore, J.M.

    1981-11-01

    The Savannah River Laboratory LTRIIA slightly-enriched uranium-D2O critical experiment was analyzed with ENDF/B-IV data and the RCP01 Monte Carlo program, which modeled the entire assembly in explicit detail. The integral parameters delta 25 and delta 28 showed good agreement with experiment. However, calculated k_eff was 2 to 3% low, due primarily to an overprediction of U238 capture. This is consistent with results obtained in similar analyses of the H2O-moderated TRX critical experiments. In comparisons with the VIM and MCNP2 Monte Carlo programs, good agreement was observed for calculated reaction rates in the B² = 0 cell

  7. (U) Introduction to Monte Carlo Methods

    Energy Technology Data Exchange (ETDEWEB)

    Hungerford, Aimee L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-20

    Monte Carlo methods are very valuable for representing solutions to particle transport problems. Here we describe a “cook book” approach to handling the terms in a transport equation using Monte Carlo methods. Focus is on the mechanics of a numerical Monte Carlo code, rather than the mathematical foundations of the method.
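
    One of the "cook book" ingredients for transport problems is the treatment of the streaming and collision terms: sample a free-flight distance from the exponential distribution set by the total macroscopic cross section, then select the interaction type in proportion to the partial cross sections. A generic sketch with made-up cross-section values follows; it illustrates the mechanic, not any particular code.

```python
import numpy as np

rng = np.random.default_rng(13)

# Hypothetical macroscopic cross sections (1/cm) for one material.
SIGMA = {"scatter": 0.6, "capture": 0.3, "fission": 0.1}
SIGMA_T = sum(SIGMA.values())

def sample_flight_distance():
    """Distance to next collision: p(s) = SIGMA_T * exp(-SIGMA_T * s)."""
    return -np.log1p(-rng.random()) / SIGMA_T

def sample_interaction():
    """Pick the reaction with probability SIGMA_x / SIGMA_T."""
    xi = rng.random() * SIGMA_T
    running = 0.0
    for reaction, sigma in SIGMA.items():
        running += sigma
        if xi < running:
            return reaction
    return reaction  # numerical safety net

print(sample_flight_distance(), sample_interaction())
```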

  8. Characterization of decommissioned reactor internals: Monte Carlo analysis technique

    International Nuclear Information System (INIS)

    Reid, B.D.; Love, E.F.; Luksic, A.T.

    1993-03-01

    This study discusses computer analysis techniques for determining activation levels of irradiated reactor component hardware to yield data for the Department of Energy's Greater-Than-Class C Low-Level Radioactive Waste Program. The study recommends the Monte Carlo Neutron/Photon (MCNP) computer code as the best analysis tool for this application and compares the technique to direct sampling methodology. To implement the MCNP analysis, a computer model would be developed to reflect the geometry, material composition, and power history of an existing shutdown reactor. MCNP analysis would then be performed using the computer model, and the results would be validated by comparison to laboratory analysis results from samples taken from the shutdown reactor. The report estimates uncertainties for each step of the computational and laboratory analyses; the overall uncertainty of the MCNP results is projected to be ±35%. The primary source of uncertainty is identified as the material composition of the components, and research is suggested to address that uncertainty

  9. Monte Carlo simulation study of the muon-induced neutron flux at LNGS

    International Nuclear Information System (INIS)

    Persiani, R.; Garbini, M.; Massoli, F.; Sartorelli, G; Selvi, M.

    2011-01-01

    Muon-induced neutrons are an ultimate background for all experiments searching for rare events in underground laboratories. Several measurements and simulations have been performed concerning neutron production and propagation, but there are disagreements between experimental data and simulations. In this work we present our Monte Carlo simulation study, based on Geant4, to estimate the muon-induced neutron flux at LNGS. The obtained integral flux of neutrons above 1 MeV is 2.31 × 10⁻¹⁰ n/cm²/s.

  10. Lectures on Monte Carlo methods

    CERN Document Server

    Madras, Neal

    2001-01-01

    Monte Carlo methods form an experimental branch of mathematics that employs simulations driven by random number generators. These methods are often used when others fail, since they are much less sensitive to the "curse of dimensionality", which plagues deterministic methods in problems with a large number of variables. Monte Carlo methods are used in many fields: mathematics, statistics, physics, chemistry, finance, computer science, and biology, for instance. This book is an introduction to Monte Carlo methods for anyone who would like to use these methods to study various kinds of mathemati

  11. Extending Strong Scaling of Quantum Monte Carlo to the Exascale

    Science.gov (United States)

    Shulenburger, Luke; Baczewski, Andrew; Luo, Ye; Romero, Nichols; Kent, Paul

    Quantum Monte Carlo is one of the most accurate and most computationally expensive methods for solving the electronic structure problem. In spite of its significant computational expense, its massively parallel nature is ideally suited to petascale computers which have enabled a wide range of applications to relatively large molecular and extended systems. Exascale capabilities have the potential to enable the application of QMC to significantly larger systems, capturing much of the complexity of real materials such as defects and impurities. However, both memory and computational demands will require significant changes to current algorithms to realize this possibility. This talk will detail both the causes of the problem and potential solutions. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corp, a wholly owned subsidiary of Lockheed Martin Corp, for the US Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  12. Hybrid SN/Monte Carlo research and results

    International Nuclear Information System (INIS)

    Baker, R.S.

    1993-01-01

    The neutral particle transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S_N) and stochastic (Monte Carlo) methods are applied. The Monte Carlo and S_N regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The hybrid Monte Carlo/S_N method provides a new means of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S_N is well suited for by itself. The hybrid method has been successfully applied to realistic shielding problems. The vectorized Monte Carlo algorithm in the hybrid method has been ported to the massively parallel architecture of the Connection Machine. Comparisons of performance on a vector machine (Cray Y-MP) and the Connection Machine (CM-2) show that significant speedups are obtainable for vectorized Monte Carlo algorithms on massively parallel machines, even when realistic problems requiring variance reduction are considered. However, the architecture of the Connection Machine does place some limitations on the regime in which the Monte Carlo algorithm may be expected to perform well

  13. Experiments on thermo-hydro-mechanical behaviour of Opalinus Clay at Mont Terri rock laboratory, Switzerland

    Directory of Open Access Journals (Sweden)

    Paul Bossart

    2017-06-01

    Full Text Available Repositories for deep geological disposal of radioactive waste rely on multi-barrier systems to isolate waste from the biosphere. A multi-barrier system typically comprises the natural geological barrier provided by the repository host rock – in our case the Opalinus Clay – and an engineered barrier system (EBS). The Swiss repository concept for spent fuel and vitrified high-level waste (HLW) consists of waste canisters, which are emplaced horizontally in the middle of an emplacement gallery and are separated from the gallery wall by granular backfill material (GBM). We describe here a selection of five in-situ experiments where characteristic hydro-mechanical (HM) and thermo-hydro-mechanical (THM) processes have been observed. The first example is a coupled HM and mine-by test where the evolution of the excavation damaged zone (EDZ) was monitored around a gallery in the Opalinus Clay (ED-B experiment). Measurements of pore-water pressures and convergences due to stress redistribution during excavation highlighted the HM behaviour. The same measurements were subsequently carried out in a heater test (HE-D), where we were able to characterise the Opalinus Clay in terms of its THM behaviour. These yielded detailed data to better understand the THM behaviours of the granular backfill and the natural host rock. For a presentation of the Swiss concept for HLW storage, we designed three demonstration experiments that were subsequently implemented in the Mont Terri rock laboratory: (1) the engineered barrier (EB) experiment, (2) the in-situ heater test on key THM processes and parameters (HE-E experiment), and (3) the full-scale emplacement (FE) experiment. The first demonstration experiment has been dismantled, but the last two are ongoing.

  14. Monte Carlo simulation in nuclear medicine

    International Nuclear Information System (INIS)

    Morel, Ch.

    2007-01-01

    The Monte Carlo method allows for simulating random processes by using series of pseudo-random numbers. It became an important tool in nuclear medicine to assist in the design of new medical imaging devices, optimise their use and analyse their data. Presently, the sophistication of the simulation tools allows the introduction of Monte Carlo predictions in data correction and image reconstruction processes. The availability to simulate time dependent processes opens up new horizons for Monte Carlo simulation in nuclear medicine. In a near future, these developments will allow to tackle simultaneously imaging and dosimetry issues and soon, case system Monte Carlo simulations may become part of the nuclear medicine diagnostic process. This paper describes some Monte Carlo method basics and the sampling methods that were developed for it. It gives a referenced list of different simulation software used in nuclear medicine and enumerates some of their present and prospective applications. (author)

  15. Geochemical signature of paleofluids in microstructures from Main Fault in the Opalinus Clay of the Mont Terri rock laboratory, Switzerland

    Energy Technology Data Exchange (ETDEWEB)

    Clauer, N. [Laboratoire d’Hydrologie et de Géochimie de Strasbourg (CNRS-UdS), Strasbourg (France); Techer, I. [Equipe Associée, Chrome, Université de Nîmes, Nîmes (France); Nussbaum, Ch. [Swiss Geological Survey, Federal Office of Topography Swisstopo, Wabern (Switzerland); Laurich, B. [Structural Geology, Tectonics and Geomechanics, RWTH Aachen University, Aachen (Germany); Laurich, B. [Federal Institute for Geosciences and Natural Resources BGR, Hannover (Germany)

    2017-04-15

    The present study reports on elemental and Sr isotopic analyses of calcite and associated celestite infillings of various microtectonic features collected mostly in the Main Fault of the Opalinus Clay from Mont Terri rock laboratory. Based on a detailed microstructural description of veins, slickensides, scaly clay aggregates and gouges, the geochemical signatures of the infillings were compared to those of the leachates from undeformed Opalinus Clay, and to the calcite from veins crosscutting Hauptrogenstein, Passwang and Staffelegg Formations above and below the Opalinus Clay. Vein calcite and celestite from Main Fault yield identical ⁸⁷Sr/⁸⁶Sr ratios that are also close to those recorded in the Opalinus Clay matrix inside the Main Fault, but different from those of the diffuse Opalinus Clay calcite outside the fault. These varied ⁸⁷Sr/⁸⁶Sr ratios of the diffuse calcite evidence a lack of interaction between the associated connate waters and the flowing fluids characterized by a homogeneous Sr signature. The ⁸⁷Sr/⁸⁶Sr homogeneity at 0.70774 ± 0.00001 (2σ) for the infillings of most microstructures in the Main Fault, as well as of veins from nearby limestone layer and sediments around the Opalinus Clay, argues for an 'infinite' homogeneous marine supply, whereas the gouge infillings apparently interacted with a chemically more complex fluid. According to the known regional paleogeographic evolution, two seawater supplies were inferred and documented in the Delémont Basin: either during the Priabonian (38-34 Ma ago) from western Bresse graben, and/or during the Rupelian (34-28 Ma ago) from northern Rhine Graben. The Rupelian seawater that yields a mean ⁸⁷Sr/⁸⁶Sr signature significantly higher than those of the microstructural infillings seems not to be the appropriate source. Alternatively, Priabonian seawater yields a mean ⁸⁷Sr/⁸⁶Sr ratio precisely matching that of the leachates from diffuse

  16. Graphs of the cross sections in the recommended Monte Carlo cross-section library at the Los Alamos Scientific Laboratory

    International Nuclear Information System (INIS)

    Soran, P.D.; Seamon, R.E.

    1980-05-01

    Graphs of all neutron cross sections and photon production cross sections on the Recommended Monte Carlo Cross Section (RMCCS) library have been plotted along with local neutron heating numbers. Values for ν̄ (nu-bar), the average number of neutrons per fission, are also given

  17. CERN takes part in the GE200.CH celebrations | 30 May - 1 June

    CERN Multimedia

    2014-01-01

    Come and discover the universe of CERN at the Mont Blanc rotunda during a weekend of celebrations marking the 200th anniversary of the arrival of the confederate troops at Geneva’s Port Noir.   CERN will also be taking part in the procession through the city centre from 2.30 p.m. to 5.00 p.m. on Saturday 31 May. Starting at the Parc des Bastions, the procession will pass through the Rues Basses and along the lake to the Port Noir. More information on this event here.  

  18. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    International Nuclear Information System (INIS)

    Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.

    2015-01-01

    This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results

  19. Corrosion of carbon steel in clay environments relevant to radioactive waste geological disposals, Mont Terri rock laboratory (Switzerland)

    Energy Technology Data Exchange (ETDEWEB)

    Necib, S. [Agence Nationale pour la Gestion des Déchets Radioactifs ANDRA, Meuse Haute-Marne, Center RD 960, Bure (France); Diomidis, N. [National Cooperative for the Disposal of Radioactive Waste (NAGRA), Wettingen (Switzerland); Keech, P. [Nuclear Waste Management Organisation NWMO, Toronto (Canada); Nakayama, M. [Japan Atomic Energy Agency JAEA, Horonobe-Cho (Japan)

    2017-04-15

    Carbon steel is widely considered as a candidate material for the construction of spent fuel and high-level waste disposal canisters. In order to investigate corrosion processes representative of the long term evolution of deep geological repositories, two in situ experiments are being conducted in the Mont Terri rock laboratory. The iron corrosion (IC) experiment aims to measure the evolution of the instantaneous corrosion rate of carbon steel in contact with Opalinus Clay as a function of time, by using electrochemical impedance spectroscopy measurements. The Iron Corrosion in Bentonite (IC-A) experiment intends to determine the evolution of the average corrosion rate of carbon steel in contact with bentonite of different densities, by using gravimetric and surface analysis measurements, post exposure. Both experiments investigate the effect of microbial activity on corrosion. In the IC experiment, carbon steel showed a gradual decrease of the corrosion rate over a period of 7 years, which is consistent with the ongoing formation of protective corrosion products. Corrosion product layers composed of magnetite, mackinawite, hydroxychloride and siderite with some traces of oxidising species such as goethite were identified on the steel surface. Microbial investigations revealed thermophilic bacteria (sulphate and thiosulphate reducing bacteria) at the metal surface in low concentrations. In the IC-A experiment, carbon steel samples in direct contact with bentonite exhibited corrosion rates in the range of 2 µm/year after 20 months of exposure, in agreement with measurements in the absence of microbes. Microstructural and chemical characterisation of the samples identified a complex corrosion product consisting mainly of magnetite. Microbial investigations confirmed the limited viability of microbes in highly compacted bentonite. (authors)

  20. Belo Monte hydropower project: current studies; AHE Belo Monte: os estudos atuais

    Energy Technology Data Exchange (ETDEWEB)

    Figueira Netto, Carlos Alberto de Moya [CNEC Engenharia S.A., Sao Paulo, SP (Brazil); Rezende, Paulo Fernando Vieira Souto [Centrais Eletricas Brasileiras S.A. (ELETROBRAS), Rio de Janeiro, RJ (Brazil)

    2008-07-01

    This article presents the evolution of the studies of the Belo Monte Hydro Power Project (HPP), from the initial inventory studies of the Xingu River in 1979 to the current studies for conclusion of the Technical, Economic and Environmental Feasibility Studies of the Belo Monte Hydro Power Project, as authorized by the Brazilian National Congress. The current studies characterize the Belo Monte HPP with an installed capacity of 11,181.3 MW (20 units of 550 MW in the main power house and 7 units of 25.9 MW in the additional power house), connected to the Brazilian Interconnected Power Grid, allowing it to generate 4,796 average MW of firm energy without depending on any flow-rate regularization upstream on the Xingu River, and flooding only 441 km², of which approximately 200 km² correspond to the normal annual wet-season flooding of the Xingu River. (author)

  1. Advanced Multilevel Monte Carlo Methods

    KAUST Repository

    Jasra, Ajay

    2017-04-24

    This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of a associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature and we describe different strategies which facilitate the application of MLMC within these methods.
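
    The telescoping idea can be written as E[P_L] = E[P_0] + sum over l = 1..L of E[P_l - P_{l-1}], with each correction term estimated from coupled coarse/fine samples. The sketch below applies it to a toy problem, the terminal value of geometric Brownian motion under Euler discretization with level-dependent time steps; the drift, volatility, horizon and sample allocations are assumptions for the illustration, not a scheme from the article.

```python
import numpy as np

rng = np.random.default_rng(21)

def euler_pair(level, n, s0=1.0, mu=0.05, sigma=0.2, T=1.0):
    """Coupled coarse/fine Euler estimates of S_T on the given level.

    The fine path uses 2**level steps, the coarse path 2**(level-1) steps
    driven by the same Brownian increments; level 0 returns only the
    single-step estimate (coarse contribution set to zero).
    """
    nf = 2 ** level
    dt = T / nf
    dw = rng.normal(0.0, np.sqrt(dt), size=(n, nf))
    s_fine = s0 * np.prod(1.0 + mu * dt + sigma * dw, axis=1)
    if level == 0:
        return s_fine, np.zeros(n)
    dw_coarse = dw[:, 0::2] + dw[:, 1::2]          # sum pairs of increments
    s_coarse = s0 * np.prod(1.0 + mu * 2 * dt + sigma * dw_coarse, axis=1)
    return s_fine, s_coarse

def mlmc_estimate(L=5, n_per_level=(200_000, 100_000, 50_000, 25_000, 12_000, 6_000)):
    """Telescoping sum: E[P_L] ≈ sum over levels of mean(P_l - P_{l-1})."""
    total = 0.0
    for level in range(L + 1):
        fine, coarse = euler_pair(level, n_per_level[level])
        total += np.mean(fine - coarse)
    return total

print(mlmc_estimate())         # close to exp(0.05) ≈ 1.0513 for E[S_T]
```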

  2. Advanced Multilevel Monte Carlo Methods

    KAUST Repository

    Jasra, Ajay; Law, Kody; Suciu, Carina

    2017-01-01

    This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from coupled pairs in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature, and we describe different strategies which facilitate the application of MLMC within these methods.

  3. Monte Carlo - Advances and Challenges

    International Nuclear Information System (INIS)

    Brown, Forrest B.; Mosteller, Russell D.; Martin, William R.

    2008-01-01

    Abstract only, full text follows: With ever-faster computers and mature Monte Carlo production codes, there has been tremendous growth in the application of Monte Carlo methods to the analysis of reactor physics and reactor systems. In the past, Monte Carlo methods were used primarily for calculating k-eff of a critical system. More recently, Monte Carlo methods have been increasingly used for determining reactor power distributions and many design parameters, such as β-eff, l-eff, τ, reactivity coefficients, Doppler defect, dominance ratio, etc. These advanced applications of Monte Carlo methods are now becoming common, not just feasible, and bring new challenges to both developers and users: convergence of 3D power distributions must be assured; confidence interval bias must be eliminated; iterated fission probabilities are required, rather than single-generation probabilities; temperature effects including Doppler and feedback must be represented; isotopic depletion and fission product buildup must be modeled. This workshop focuses on recent advances in Monte Carlo methods and their application to reactor physics problems, and on the resulting challenges faced by code developers and users. The workshop is partly tutorial, partly a review of the current state-of-the-art, and partly a discussion of future work that is needed. It should benefit both novice and expert Monte Carlo developers and users. In each of the topic areas, we provide an overview of needs, perspective on past and current methods, a review of recent work, and discussion of further research and capabilities that are required. Electronic copies of all workshop presentations and material will be available. The workshop is structured as 2 morning and 2 afternoon segments: - Criticality Calculations I - convergence diagnostics, acceleration methods, confidence intervals, and the iterated fission probability, - Criticality Calculations II - reactor kinetics parameters, dominance ratio, temperature

  4. Graphs of the cross sections in the Alternate Monte Carlo Cross Section library at the Los Alamos Scientific Laboratory

    International Nuclear Information System (INIS)

    Seamon, R.E.; Soran, P.D.

    1980-06-01

    Graphs of all neutron cross sections and photon production cross sections on the Alternate Monte Carlo Cross Section (AMCCS) library have been plotted along with local neutron heating numbers. The values of ν-bar, the average number of neutrons per fission, are also plotted for appropriate isotopes

  5. The determination of beam quality correction factors: Monte Carlo simulations and measurements.

    Science.gov (United States)

    González-Castaño, D M; Hartmann, G H; Sánchez-Doblado, F; Gómez, F; Kapsch, R-P; Pena, J; Capote, R

    2009-08-07

    Modern dosimetry protocols are based on the use of ionization chambers provided with a calibration factor in terms of absorbed dose to water. The basic formula to determine the absorbed dose at a user's beam contains the well-known beam quality correction factor that is required whenever the quality of radiation used at calibration differs from that of the user's radiation. The dosimetry protocols describe the whole ionization chamber calibration procedure and include tabulated beam quality correction factors which refer to 60Co gamma radiation used as the calibration quality. They have been calculated for a series of ionization chambers and radiation qualities based on formulae which are also described in the protocols. In the case of high-energy photon beams, the relative standard uncertainty of the beam quality correction factor is estimated to amount to 1%. In the present work, two alternative methods to determine beam quality correction factors are presented: Monte Carlo simulation using the EGSnrc system, and an experimental method based on a comparison with a reference chamber. Both Monte Carlo calculations and ratio measurements were carried out for nine chambers at several radiation beams. Four chamber types are not included in the current dosimetry protocols. Beam quality corrections for the reference chamber at two beam qualities were also measured using a calorimeter at the PTB Primary Standards Dosimetry Laboratory. Good agreement between the Monte Carlo calculated (1% uncertainty) and measured (0.5% uncertainty) beam quality correction factors was obtained. Based on these results we propose that beam quality correction factors can be generated both by measurements and by Monte Carlo simulations with an uncertainty at least comparable to that given in current dosimetry protocols.
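
    For context, the relation behind the beam quality correction factor discussed above can be written as follows (our summary of the standard dosimetry-protocol formalism, with 60Co typically used as the calibration quality Q_0; not text from the paper):

$$ D_{w,Q} = M_Q \, N_{D,w,Q_0} \, k_{Q,Q_0}, \qquad k_{Q,Q_0} = \frac{\left(D_w/M\right)_Q}{\left(D_w/M\right)_{Q_0}} , $$

    so that k_{Q,Q_0} can be obtained either by calculating this ratio (e.g. by Monte Carlo) or by measuring the chamber response against a standard at both qualities.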

  6. Dynamic study of yeast species and Saccharomyces cerevisiae strains during the spontaneous fermentations of Muscat blanc in Jingyang, China.

    Science.gov (United States)

    Wang, Chunxiao; Liu, Yanlin

    2013-04-01

    The evolution of yeast species and Saccharomyces cerevisiae genotypes during spontaneous fermentations of Muscat blanc, planted in 1957 in the Jingyang region of China, was followed in this study. Using a combination of colony morphology on Wallerstein Nutrient (WLN) medium, sequence analysis of the 26S rDNA D1/D2 domain and 5.8S-ITS-RFLP analysis, a total of 686 isolates were identified at the species level. The six species identified were S. cerevisiae, Hanseniaspora uvarum, Hanseniaspora opuntiae, Issatchenkia terricola, Pichia kudriavzevii (Issatchenkia orientalis) and Trichosporon coremiiforme. This is the first report of T. coremiiforme as an inhabitant of grape must. Three new colony morphologies on WLN medium and one new 5.8S-ITS-RFLP profile are described. Non-Saccharomyces species, predominantly H. opuntiae, were found in the early stages of fermentation. Subsequently, S. cerevisiae prevailed, followed by large numbers of P. kudriavzevii that dominated at the end of the fermentations. Six native genotypes of S. cerevisiae were determined by interdelta sequence analysis. Genotypes III and IV were predominant. As a first step in exploring untapped yeast resources of the region, this study is important for monitoring the yeast ecology in native fermentations and screening indigenous yeasts that will produce wines with regional characteristics. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Monte Carlo simulation of experiments

    International Nuclear Information System (INIS)

    Opat, G.I.

    1977-07-01

    An outline of the technique of computer simulation of particle physics experiments by the Monte Carlo method is presented. Useful special-purpose subprograms are listed and described. At each stage the discussion is made concrete by direct reference to the program SIMUL8 and its variant MONTE-PION, written to assist in the analysis of the radiative decay experiments μ+ → e+ ν_e ν̄ γ and π+ → e+ ν_e γ, respectively. These experiments were based on the use of two large sodium iodide crystals, TINA and MINA, as e and γ detectors. Instructions for the use of SIMUL8 and MONTE-PION are given. (author)

  8. Fast sequential Monte Carlo methods for counting and optimization

    CERN Document Server

    Rubinstein, Reuven Y; Vaisman, Radislav

    2013-01-01

    A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the

  9. Clonal differences and impact of defoliation on Sauvignon blanc (Vitis vinifera L.) wines: a chemical and sensory investigation.

    Science.gov (United States)

    Šuklje, Katja; Antalick, Guillaume; Buica, Astrid; Langlois, Jennifer; Coetzee, Zelmari A; Gouot, Julia; Schmidtke, Leigh M; Deloire, Alain

    2016-02-01

    The aim of this study, performed on Sauvignon blanc clones SB11 and SB316, grafted on the same rootstock 101-14 Mgt (Vitis riparia × V. rupestris) and grown at two adjacent vineyards, was two-fold: (1) to study the wine chemical and sensory composition of both clones within an unaltered canopy; and (2) to determine the effect of defoliation (e.g. bunch microclimate) on wine chemical and sensory composition. Orthogonal projection to latent structures discriminant analysis (OPLS-DA) was applied to the concentration profiles of volatile compounds derived from gas chromatography-mass spectrometry data. The loading directions indicated that 3-isobutyl-2-methoxypyrazine (IBMP) discriminated control treatments (shaded fruit zone) of both clones from defoliation treatments (exposed fruit zone), whereas 3-sulfanyl-hexan-1-ol (3SH), 3-sulfanylhexyl acetate (3SHA), hexanol, hexyl hexanoate and some other esters discriminated defoliated treatments from the controls. The OPLS-DA indicated the importance of IBMP, higher alcohol acetates and phenylethyl esters for discrimination of clone SB11 from clone SB316, irrespective of the treatment. Defoliation in the fruit zone significantly decreased perceived greenness in clone SB11 and elevated fruitier aromas, whereas in clone SB316 the effect of defoliation on wine sensory perception was less noticeable regardless of the decrease in IBMP concentrations. These findings highlight the importance of clone selection and bunch microclimate to diversify the wine styles produced. © 2015 Society of Chemical Industry.

  10. Monte Carlo Treatment Planning for Advanced Radiotherapy

    DEFF Research Database (Denmark)

    Cronholm, Rickard

    This Ph.D. project describes the development of a workflow for Monte Carlo Treatment Planning for clinical radiotherapy plans. The workflow may be utilized to perform an independent dose verification of treatment plans. Modern radiotherapy treatment delivery is often conducted by dynamically modulating the intensity of the field during the irradiation. The workflow described has the potential to fully model the dynamic delivery, including gantry rotation during irradiation, of modern radiotherapy. Three cornerstones of Monte Carlo Treatment Planning are identified: building, commissioning and validation of a Monte Carlo model of a medical linear accelerator (i), converting a CT scan of a patient to a Monte Carlo compliant phantom (ii) and translating the treatment plan parameters (including beam energy, angles of incidence, collimator settings etc.) to a Monte Carlo input file (iii). A protocol

  11. Merging a Terrain-Based Parameter and Snow Particle Counter Data for the Assessment of Snow Redistribution in the Col du Lac Blanc Area

    Science.gov (United States)

    Schön, Peter; Prokop, Alexander; Naaim-Bouvet, Florence; Vionnet, Vincent; Guyomarc'h, Gilbert; Heiser, Micha; Nishimura, Kouichi

    2015-04-01

    Wind and the associated snow drift are dominating factors determining the snow distribution and accumulation in alpine areas, resulting in a high spatial variability of snow depth that is difficult to evaluate and quantify. The terrain-based parameter Sx characterizes the degree of shelter or exposure of a grid point provided by the upwind terrain, without the computational complexity of numerical wind field models. The parameter has been shown to qualitatively predict snow redistribution with good reproduction of spatial patterns. It does not, however, provide a quantitative estimate of changes in snow depths. The objective of our research was to introduce a new parameter to quantify changes in snow depths in our research area, the Col du Lac Blanc in the French Alps. The area is at an elevation of 2700 m and particularly suited for our study due to its consistently bi-modal wind directions. Our work focused on two pronounced, approximately 10 m high terrain breaks, and we worked with 1 m resolution digital snow surface models (DSM). The DSM and measured changes in snow depths were obtained with high-accuracy terrestrial laser scan (TLS) measurements. First we calculated the terrain-based parameter Sx on a digital snow surface model and correlated Sx with measured changes in snow depths (ΔSH). Results showed that ΔSH can be approximated by ΔSH_estimated = α · Sx, where α is a newly introduced parameter. The parameter α has been shown to be linked to the amount of snow deposited, which is influenced by the blowing snow flux. At the Col du Lac Blanc test site, blowing snow flux is recorded with snow particle counters (SPC). Snow flux is the number of drifting snow particles per time and area. Hence, the SPC provide data about the duration and intensity of drifting snow events, two important factors not accounted for by the terrain parameter Sx. We analyse how the SPC snow flux data can be used to estimate the magnitude of the new variable parameter α. To simulate the development
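
    The sketch below illustrates the kind of calculation involved: a Winstral-type Sx (maximum angle to the upwind surface within a search distance) computed along a one-dimensional transect aligned with the wind, followed by a through-origin least-squares fit of the scaling parameter α in ΔSH ≈ α·Sx. The transect, search distance, noise level and resulting α are entirely synthetic and are not data from the Col du Lac Blanc site.

```python
import numpy as np

rng = np.random.default_rng(1)

def sx_profile(z, dx, search_dist):
    """Winstral-type shelter/exposure parameter Sx (degrees) along a 1-D
    transect aligned with the wind (wind blowing from the left): for each
    point, the maximum angle to the snow/terrain surface within `search_dist`
    upwind. Positive Sx = sheltered, negative Sx = exposed."""
    n, nmax = len(z), int(search_dist / dx)
    sx = np.zeros(n)
    for i in range(n):
        angles = [np.degrees(np.arctan2(z[j] - z[i], (i - j) * dx))
                  for j in range(max(0, i - nmax), i)]
        sx[i] = max(angles) if angles else 0.0
    return sx

# Synthetic snow-surface transect with a ~10 m high terrain break
x = np.arange(0.0, 200.0, 1.0)
z = 10.0 * np.exp(-((x - 100.0) / 15.0) ** 2)
sx = sx_profile(z, dx=1.0, search_dist=30.0)

# Synthetic "observed" snow-depth changes and a through-origin least-squares
# fit of alpha in  dSH ~ alpha * Sx
dsh_obs = 0.08 * sx + rng.normal(0.0, 0.05, x.size)
alpha = np.sum(sx * dsh_obs) / np.sum(sx * sx)
print("fitted alpha:", round(float(alpha), 3))
```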

  12. Summary - COG: A new point-wise Monte Carlo code for burnup credit analysis

    International Nuclear Information System (INIS)

    Alesso, H.P.

    1989-01-01

    COG, a new point-wise Monte Carlo code being developed and tested at Lawrence Livermore National Laboratory (LLNL) for the Cray-1, solves the Boltzmann equation for the transport of neutrons, photons, and (in future versions) other particles. Techniques included in the code for modifying the random walk of particles make COG most suitable for solving deep-penetration (shielding) problems and a wide variety of criticality problems. COG is similar to a number of other computer codes used in the shielding community; each code differs somewhat in its geometry input and its random-walk modification options. COG is a Monte Carlo code specifically designed (in 1986) for the Cray to be as precise as the current state of physics knowledge allows. It has been extensively benchmarked and used as a shielding code at LLNL since 1986, and has recently been extended to perform criticality calculations. It will make an excellent tool for future shipping cask studies

  13. Application of the Monte Carlo technique to the study of radiation transport in a prompt gamma in vivo neutron activation system

    International Nuclear Information System (INIS)

    Chan, A.A.; Beddoe, A.H.

    1985-01-01

    A Monte Carlo code (MORSE-SGC) from the Radiation Shielding Information Centre at Oak Ridge National Laboratory, USA, has been adapted and used to model radiation transport in the Auckland prompt gamma in vivo neutron activation analysis facility. Preliminary results are presented for the slow neutron flux in an anthropomorphic phantom, which are in broad agreement with those obtained by measurement via activation foils. Since experimental optimization is not logistically feasible, and since theoretical optimization of neutron activation facilities has not previously been attempted, it is hoped that the Monte Carlo calculations can be used to provide a basis for improved system design

  14. Lawrence Livermore National Laboratory's PEREGRINE project

    Energy Technology Data Exchange (ETDEWEB)

    Hartmann-Siantar, C.L.; Bergstrom, P.M.; Chandler, W.P. [and others]

    1997-03-01

    PEREGRINE is an all-particle, first-principles 3D Monte Carlo dose calculation system designed to serve as a dose calculation engine for clinical radiation therapy treatment planning (RTP) systems. By taking advantage of recent advances in low-cost commodity computer hardware, modern symmetric multiprocessor architectures and state-of-the-art Monte Carlo transport algorithms, PEREGRINE performs high-resolution, high-accuracy Monte Carlo RTP calculations in times that are reasonable for clinical use. Because of its speed and simple interface with conventional treatment planning systems, PEREGRINE brings Monte Carlo radiation transport calculations to the clinical RTP desktop environment. Although PEREGRINE is designed to calculate dose distributions for photon, electron, fast neutron and proton therapy, this paper focuses on photon teletherapy.

  15. Monte carlo simulation for soot dynamics

    KAUST Repository

    Zhou, Kun

    2012-01-01

    A new Monte Carlo method, termed Comb-like frame Monte Carlo, is developed to simulate soot dynamics. A detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas-phase solver Chemkin II to simulate soot formation in a 1-D premixed burner-stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with measurements available in the literature. The origin of the bimodal particle size distribution is revealed with quantitative proof.

  16. Laboratory measurements of rock thermal properties

    DEFF Research Database (Denmark)

    Bording, Thue Sylvester; Balling, N.; Nielsen, S.B.

    The thermal properties of rocks are key elements in understanding and modelling the temperature field of the subsurface. Thermal conductivity and thermal diffusivity can be measured in the laboratory if rock samples can be provided. We have introduced improvements to the divided bar and needle probe methods to be able to measure both thermal conductivity and thermal diffusivity. The improvements we implement include, for both methods, a combination of fast numerical finite element forward modelling and a Markov Chain Monte Carlo inversion scheme for estimating rock thermal parameters...

  17. Monte Carlo calculations of the neutron coincidence gate utilisation factor for passive neutron coincidence counting

    International Nuclear Information System (INIS)

    Bourva, L.C.A.; Croft, S.

    1999-01-01

    The general-purpose neutron-photon-electron Monte Carlo N-Particle code, MCNP™, has been used to simulate the neutronic characteristics of the on-site laboratory passive neutron coincidence counter to be installed, under Euratom Safeguards Directorate supervision, at the Sellafield reprocessing plant in Cumbria, UK. This detector is part of a series of nondestructive assay instruments to be installed for the accurate determination of the plutonium content of nuclear materials. The present work focuses on one aspect of this task, namely the accurate calculation of the coincidence gate utilisation factor. This parameter is an important term in the interpretative model used to analyse the passive neutron coincidence count data acquired using pulse train deconvolution electronics based on the shift register technique. It accounts for the limited proportion of neutrons detected within the time interval for which the electronics gate is open. The Monte Carlo code MCF, presented in this work, represents a new evaluation technique for the estimation of gate utilisation factors. It uses the die-away profile of a neutron coincidence chamber, generated either by MCNP™ or by other means, to simulate the neutron detection arrival-time pattern originating from independent spontaneous fission events. A shift register simulation algorithm, embedded in the MCF code, then calculates the coincidence counts scored within the electronics gate. The gate utilisation factor is then deduced by dividing the coincidence counts obtained by those obtained in the same Monte Carlo run for an ideal detection system with a coincidence gate utilisation factor equal to unity. The MCF code has been benchmarked against analytical results calculated for both single and double exponential die-away profiles. These results are presented along with the development of the closed-form algebraic expressions for the two cases. Results of this validity check showed very good agreement. On this
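
    For a single-exponential die-away, the gate utilisation factor has the well-known closed form exp(-P/τ)·(1 − exp(-G/τ)) for predelay P and gate width G. The sketch below reproduces that result by direct simulation of correlated detection-time pairs; it is only a toy check of the analytic limit (with invented τ, predelay and gate values), not a reimplementation of the MCF shift-register algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

def gate_fraction_mc(tau, predelay, gate, n_pairs=200_000):
    """Monte Carlo estimate of the coincidence-gate utilisation factor for a
    single-exponential die-away (all times in microseconds): the fraction of
    correlated detection pairs whose separation falls in (predelay, predelay+gate]."""
    t = rng.exponential(tau, size=(n_pairs, 2))      # detection times of a pair
    dt = np.abs(t[:, 1] - t[:, 0])
    return float(np.mean((dt > predelay) & (dt <= predelay + gate)))

tau, pd_, g = 50.0, 4.5, 64.0                        # assumed values, not Sellafield data
analytic = np.exp(-pd_ / tau) * (1.0 - np.exp(-g / tau))
print("MC:", round(gate_fraction_mc(tau, pd_, g), 4), " analytic:", round(analytic, 4))
```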

  18. Statistical estimation Monte Carlo for unreliability evaluation of highly reliable system

    International Nuclear Information System (INIS)

    Xiao Gang; Su Guanghui; Jia Dounan; Li Tianduo

    2000-01-01

    Based on analog Monte Carlo simulation, statistical estimation Monte Carlo methods for the unreliability evaluation of highly reliable systems are constructed, including a direct statistical estimation Monte Carlo method and a weighted statistical estimation Monte Carlo method. The basic element is given, and the statistical estimation Monte Carlo estimators are derived. The direct Monte Carlo simulation method, the bounding-sampling method, the forced-transitions Monte Carlo method, direct statistical estimation Monte Carlo and weighted statistical estimation Monte Carlo are used to evaluate the unreliability of the same system. By comparison, the weighted statistical estimation Monte Carlo estimator has the smallest variance and the highest calculating efficiency.

  19. Applications of Monte Carlo method in Medical Physics

    International Nuclear Information System (INIS)

    Diez Rios, A.; Labajos, M.

    1989-01-01

    The basic ideas of Monte Carlo techniques are presented. Random numbers and their generation by congruential methods, which underlie Monte Carlo calculations, are shown. Monte Carlo techniques to solve integrals are discussed. The evaluation of a simple one-dimensional integral with a known answer by means of two different Monte Carlo approaches is discussed. The basic principles of simulating photon histories on a computer and reducing variance, and the current applications in Medical Physics, are commented on. (Author)
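
    A minimal example of the kind of one-dimensional test integral mentioned above, evaluated with two different Monte Carlo approaches (the crude sample-mean estimator and the hit-or-miss estimator). The integrand, ∫₀¹ eˣ dx = e − 1, is our own choice and not necessarily the one used in the cited work.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
exact = np.e - 1.0                       # integral of e^x over [0, 1]

# 1) Crude (sample-mean) estimator: average f(U) for U ~ Uniform(0, 1)
u = rng.random(n)
crude = float(np.exp(u).mean())

# 2) Hit-or-miss estimator: fraction of random points under the curve in [0,1]x[0,e]
x, y = rng.random(n), rng.random(n) * np.e
hit_or_miss = float(np.e * np.mean(y <= np.exp(x)))

print(f"exact {exact:.4f}  sample-mean {crude:.4f}  hit-or-miss {hit_or_miss:.4f}")
```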

  20. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan

    2016-01-01

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context. That is, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.

  1. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros

    2016-08-29

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context. That is, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.

  2. Report on the Oak Ridge workshop on Monte Carlo codes for relativistic heavy-ion collisions

    International Nuclear Information System (INIS)

    Awes, T.C.; Sorensen, S.P.

    1988-01-01

    In order to make detailed predictions for the case of purely hadronic matter, several Monte Carlo codes have been developed to describe relativistic nucleus-nucleus collisions. Although these various models build upon models of hadron-hadron interactions and have been fitted to reproduce hadron-hadron collision data, they have rather different pictures of the underlying hadron collision process and of subsequent particle production. Until now, the different Monte Carlo codes have, in general, been compared to different sets of experimental data, according to which results were readily available to the model builder or which Monte Carlo code was readily available to an experimental group. As a result, it has been difficult to draw firm conclusions about whether the observed deviations between experiments and calculations were due to deficiencies in the particular model, experimental discrepancies, or interesting effects beyond a simple superposition of nucleon-nucleon collisions. For this reason, it was decided that it would be productive to have a structured confrontation between the available experimental data and the many models of high-energy nuclear collisions in a manner in which it could be ensured that the computer codes were run correctly and the experimental acceptances were properly taken into account. With this purpose in mind, a Workshop on Monte Carlo Codes for Relativistic Heavy-Ion Collisions was organized at the Joint Institute for Heavy Ion Research at Oak Ridge National Laboratory from September 12--23, 1988. This paper reviews this workshop. 11 refs., 6 figs

  3. Biogeochemical processes in a clay formation in situ experiment: Part E - Equilibrium controls on chemistry of pore water from the Opalinus Clay, Mont Terri Underground Research Laboratory, Switzerland

    Energy Technology Data Exchange (ETDEWEB)

    Pearson, F.J., E-mail: fjpearson@gmail.com [Ground-Water Geochemistry, 5108 Trent Woods Dr., New Bern, NC 28562 (United States); Tournassat, Christophe; Gaucher, Eric C. [BRGM, B.P. 36009, 45060 Orleans Cedex 2 (France)

    2011-06-15

    Highlights: > Equilibrium models of water-rock reactions in clay rocks are reviewed. > Analyses of pore waters of the Opalinus Clay from boreholes in the Mont Terri URL, Switzerland, are tabulated. > Results of modelling with various mineral controls are compared with the analyses. > Best agreement results with calcite, dolomite and siderite or daphnite saturation, Na-K-Ca-Mg exchange and/or kaolinite, illite, quartz and celestite saturation. > This approach allows calculation of the chemistry of pore water in clays too impermeable to yield water samples. - Abstract: The chemistry of pore water (particularly pH and ionic strength) is an important property of clay rocks being considered as host rocks for long-term storage of radioactive waste. Pore waters in clay-rich rocks generally cannot be sampled directly. Instead, their chemistry must be found using laboratory-measured properties of core samples and geochemical modelling. Many such measurements have been made on samples from the Opalinus Clay from the Mont Terri Underground Research Laboratory (URL). Several boreholes in that URL yielded water samples against which pore water models have been calibrated. Following a first synthesis report published in 2003, this paper presents the evolution of the modelling approaches developed within Mont Terri URL scientific programs through the last decade (1997-2009). Models are compared to the composition of waters sampled during dedicated borehole experiments. Reanalysis of the models, parameters and database enabled the principal shortcomings of the previous modelling efforts to be overcome. The inability to model the K concentrations correctly with the measured cation exchange properties was found to be due to the use of an inappropriate selectivity coefficient for Na-K exchange; the inability to reproduce the measured carbonate chemistry and pH of the pore waters using mineral-water reactions alone was corrected by considering clay mineral equilibria. Re

  4. Guideline of Monte Carlo calculation. Neutron/gamma ray transport simulation by Monte Carlo method

    CERN Document Server

    2002-01-01

    This report condenses basic theories and advanced applications of neutron/gamma-ray transport calculations in many fields of nuclear energy research. Chapters 1 through 5 treat the historical progress of Monte Carlo methods, general issues of variance reduction techniques, and cross-section libraries used in continuous-energy Monte Carlo codes. In chapter 6, the following issues are discussed: fusion benchmark experiments, design of ITER, experiment analyses of the fast critical assembly, core analyses of JMTR, simulation of a pulsed neutron experiment, core analyses of HTTR, duct streaming calculations, bulk shielding calculations, and neutron/gamma-ray transport calculations of the Hiroshima atomic bomb. Chapters 8 and 9 treat function enhancements of the MCNP and MVP codes, and parallel processing of Monte Carlo calculations, respectively. Important references are attached at the end of this report.

  5. Summary and recommendations of a National Cancer Institute workshop on issues limiting the clinical use of Monte Carlo dose calculation algorithms for megavoltage external beam radiation therapy

    International Nuclear Information System (INIS)

    Fraass, Benedick A.; Smathers, James; Deye, James

    2003-01-01

    Due to the significant interest in Monte Carlo dose calculations for external beam megavoltage radiation therapy from both the research and commercial communities, a workshop was held in October 2001 to assess the status of this computational method with regard to use for clinical treatment planning. The Radiation Research Program of the National Cancer Institute, in conjunction with the Nuclear Data and Analysis Group at the Oak Ridge National Laboratory, gathered a group of experts in clinical radiation therapy treatment planning and Monte Carlo dose calculations, and examined issues involved in clinical implementation of Monte Carlo dose calculation methods in clinical radiotherapy. The workshop examined the current status of Monte Carlo algorithms, the rationale for using Monte Carlo, algorithmic concerns, clinical issues, and verification methodologies. Based on these discussions, the workshop developed recommendations for future NCI-funded research and development efforts. This paper briefly summarizes the issues presented at the workshop and the recommendations developed by the group

  6. Monte Carlo simulations for the optimisation of low-background Ge detector designs

    Energy Technology Data Exchange (ETDEWEB)

    Hakenmueller, Janina; Heusser, Gerd; Maneschg, Werner; Schreiner, Jochen; Simgen, Hardy; Stolzenburg, Dominik; Strecker, Herbert; Weber, Marc; Westernmann, Jonas [Max-Planck-Institut fuer Kernphysik, Saupfercheckweg 1, 69117 Heidelberg (Germany); Laubenstein, Matthias [Laboratori Nazionali del Gran Sasso, Via G. Acitelli 22, 67100 Assergi L' Aquila (Italy)

    2015-07-01

    Monte Carlo simulations for the low-background Ge spectrometer Giove at the underground laboratory of MPI-K, Heidelberg, are presented. In order to reduce the cosmogenic background at the present shallow depth (15 m w.e.), the shielding of the spectrometer includes an active muon veto and passive shielding (lead and borated PE layers). The achieved background suppression is comparable to that of Ge spectrometers operated at much greater depth. The geometry of the detector and the shielding were implemented using the Geant4-based toolkit MaGe. The simulations were successfully optimised by determining the correct diode position and active volume. With the help of the validated Monte Carlo simulation, the contribution of the individual components to the overall background can be examined. This includes a comparison between simulated results and measurements with different fillings of the sample chamber. Reproducing the measured detector background in the simulation makes it possible to improve the background further by reverse engineering the passive and active shield layers in the simulation.

  7. Monte Carlo Simulation Tool Installation and Operation Guide

    Energy Technology Data Exchange (ETDEWEB)

    Aguayo Navarrete, Estanislao; Ankney, Austin S.; Berguson, Timothy J.; Kouzes, Richard T.; Orrell, John L.; Troy, Meredith D.; Wiseman, Clinton G.

    2013-09-02

    This document provides information on software and procedures for Monte Carlo simulations based on the Geant4 toolkit, the ROOT data analysis software and the CRY cosmic ray library. These tools have been chosen for their application to shield design and activation studies as part of the simulation task for the Majorana Collaboration. This document includes instructions for installation, operation and modification of the simulation code in a high cyber-security computing environment, such as the Pacific Northwest National Laboratory network. It is intended as a living document and will be periodically updated. It is a starting point for information collection by an experimenter, and is not the definitive source. Users should consult with one of the authors for guidance on how to find the most current information for their needs.

  8. Predictive hydro-mechanical excavation simulation of a mine-by test at the Mont Terri rock laboratory

    International Nuclear Information System (INIS)

    Krug, St.; Shao, H.; Hesser, J.; Nowak, T.; Kunz, H.; Vietor, T.

    2010-01-01

    Document available in extended abstract form only. The Mont Terri rock laboratory was extended from mid-October 2007 to the end of 2008 with the goal of allowing the project partners to continue their cooperative research over the long term. The extension of the underground laboratory by the excavation of an additional 165 m long access tunnel (Gallery 08) with four niches was taken as an opportunity to conduct an instrumented mine-by test in one of the niches (Niche 2/Niche MB). The measurements during the bedding-parallel excavation provided a large amount of data as a basis for understanding the hydro-mechanical (HM) coupled behaviour of Opalinus Clay around the excavated niche. BGR was involved in the in situ investigations (seismic measurements) as a member of the experiment team consisting of five organisations (incl. NAGRA, ANDRA, GRS, Obayashi). An important issue for BGR is the application of the numerical code RockFlow (RF) for HM coupled simulations in order to understand the behaviour of Opalinus Clay, using the acquired measurement data for validation. Under the management of NAGRA, a blind prediction was carried out by a group of modellers belonging to some of the experiment team organisations. After a first comparison between the numerical results of different HM coupled models during the prediction meeting of the teams in June 2009, the measurement data were provided by NAGRA in order to validate the numerical models. The model predictions have already shown the correct tendencies and ranges of observed deformation and pore water pressure evolution, apart from some under- or overestimations. The forthcoming RF validation results, obtained after slight parameter adjustments, are intended to be presented in the paper. The excavation of Niche 2 was carried out from 13 October to 7 November 2008 at a constant excavation rate of 1.30 m per day. The orientation of the niche follows the bedding strike, which amounts to 60°. The bedding planes have an average dip of

  9. Monte Carlo alpha calculation

    Energy Technology Data Exchange (ETDEWEB)

    Brockway, D.; Soran, P.; Whalen, P.

    1985-01-01

    A Monte Carlo algorithm to efficiently calculate static alpha eigenvalues, N = n·e^(αt), for supercritical systems has been developed and tested. A direct Monte Carlo approach to calculating a static alpha is to simply follow the buildup in time of neutrons in a supercritical system and evaluate the logarithmic derivative of the neutron population with respect to time. This procedure is expensive, and the solution is very noisy and almost useless for a system near critical. The modified approach is to convert the time-dependent problem to a static α-eigenvalue problem and regress α on solutions of a k-eigenvalue problem. In practice, this procedure is much more efficient than the direct calculation, and produces much more accurate results. Because the Monte Carlo codes are intrinsically three-dimensional and use elaborate continuous-energy cross sections, this technique is now used as a standard for evaluating other calculational techniques in odd geometries or with group cross sections.
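
    Schematically (our paraphrase of the usual static formulation, not notation taken from the report), the α-eigenvalue enters the transport equation as a 1/v pseudo-absorption, and the regression amounts to solving k-eigenvalue problems for several trial values of α and locating the root of k(α) = 1:

$$ \left[\hat{\Omega}\cdot\nabla + \Sigma_t + \frac{\alpha}{v}\right]\psi \;=\; S_{\mathrm{scat}}[\psi] \;+\; \frac{1}{k(\alpha)}\,S_{\mathrm{fis}}[\psi], \qquad \text{find } \alpha \text{ such that } k(\alpha) = 1 . $$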

  10. Monte Carlo simulations of neutron scattering instruments

    International Nuclear Information System (INIS)

    Aestrand, Per-Olof; Copenhagen Univ.; Lefmann, K.; Nielsen, K.

    2001-01-01

    A Monte Carlo simulation is an important computational tool used in many areas of science and engineering. The use of Monte Carlo techniques for simulating neutron scattering instruments is discussed. The basic ideas, techniques and approximations are presented. Since the construction of a neutron scattering instrument is very expensive, Monte Carlo software used for the design of instruments has to be validated and tested extensively. The McStas software was designed with these aspects in mind, and some of the basic principles of the McStas software will be discussed. Finally, some future prospects are discussed for using Monte Carlo simulations in optimizing neutron scattering experiments. (R.P.)

  11. Linear filtering applied to Monte Carlo criticality calculations

    International Nuclear Information System (INIS)

    Morrison, G.W.; Pike, D.H.; Petrie, L.M.

    1975-01-01

    A significant improvement in the acceleration of the convergence of the eigenvalue computed by Monte Carlo techniques has been developed by applying linear filtering theory to Monte Carlo calculations for multiplying systems. A Kalman filter was applied to a KENO Monte Carlo calculation of an experimental critical system consisting of eight interacting units of fissile material. A comparison of the filter estimate and the Monte Carlo realization was made. The Kalman filter converged in five iterations to 0.9977. After 95 iterations, the average k-eff from the Monte Carlo calculation was 0.9981. This demonstrates that the Kalman filter has the potential to reduce the computational effort for multiplying systems. Other examples and results are discussed
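
    The idea can be illustrated with a scalar filter acting on noisy per-generation eigenvalue estimates. The sketch below is a generic textbook Kalman update applied to synthetic k-eff data (the true value, noise level and near-static process model are all assumptions of ours); it is not the KENO coupling used in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic per-generation k-eff estimates: true value 0.998 plus noise
k_true, sigma_meas = 0.998, 0.01
k_gen = k_true + rng.normal(0.0, sigma_meas, size=95)

# Scalar Kalman filter with a (nearly) static state model for k-eff
x, P = 1.0, 1.0              # initial estimate and its variance
Q, R = 1e-8, sigma_meas**2   # process and measurement noise variances
for z in k_gen:
    P = P + Q                # predict
    K = P / (P + R)          # Kalman gain
    x = x + K * (z - x)      # update with the new generation estimate
    P = (1.0 - K) * P

print("filtered k-eff:", round(x, 4), " plain average:", round(float(k_gen.mean()), 4))
```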

  12. Burnup calculations using Monte Carlo method

    International Nuclear Information System (INIS)

    Ghosh, Biplab; Degweker, S.B.

    2009-01-01

    In recent years, interest in burnup calculations using Monte Carlo methods has gained momentum. Previous burnup codes have used multigroup transport theory based calculations followed by diffusion theory based core calculations for the neutronic portion of the codes. The transport theory methods invariably make approximations with regard to the treatment of the energy and angle variables involved in scattering, besides approximations related to geometry simplification. Cell homogenisation to produce diffusion theory parameters adds to these approximations. Moreover, while diffusion theory works for most reactors, it does not produce accurate results in systems that have strong gradients, strong absorbers or large voids. Also, diffusion theory codes are geometry limited (rectangular, hexagonal, cylindrical, and spherical coordinates). Monte Carlo methods are ideal for solving very heterogeneous reactors and/or lattices/assemblies in which considerable burnable poisons are used. The key feature of this approach is that Monte Carlo methods permit essentially 'exact' modeling of all geometrical detail, without resort to energy and spatial homogenization of neutron cross sections. Monte Carlo methods would also be better for Accelerator Driven Systems (ADS), which can have strong gradients due to the external source and a sub-critical assembly. To meet the demand for an accurate burnup code, we have developed a Monte Carlo burnup calculation code system in which a Monte Carlo neutron transport code is coupled with a versatile code (McBurn) for calculating the buildup and decay of nuclides in nuclear materials. McBurn was developed from scratch by the authors. In this article we discuss our effort in developing the continuous energy Monte Carlo burnup code, McBurn. McBurn is intended for entire reactor cores as well as for unit cells and assemblies. Generally, McBurn can do burnup of any geometrical system which can be handled by the underlying Monte Carlo transport code
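
    The burnup side of such a coupling advances nuclide number densities over a time step using fluxes and spectrum-averaged cross sections supplied by the transport solution. The sketch below solves a deliberately tiny, simplified Bateman system (a lumped U-235/U-238/Pu-239 chain with hypothetical one-group data) by matrix exponential; it illustrates the depletion step generically and is not the algorithm used in McBurn.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical one-group data for a deliberately simplified chain:
# U-235 removal by absorption, U-238 capture feeding (lumped) Pu-239,
# Pu-239 removal by absorption. Intermediate decays (U-239, Np-239) are ignored.
barn = 1e-24                                   # cm^2
phi = 3e14                                     # assumed one-group flux, n/cm^2/s
sig_a_u235, sig_c_u238, sig_a_pu239 = 680 * barn, 2.7 * barn, 1010 * barn

# dN/dt = A N  with N = [U-235, U-238, Pu-239] number densities
A = np.array([[-sig_a_u235 * phi, 0.0,               0.0],
              [0.0,              -sig_c_u238 * phi,  0.0],
              [0.0,               sig_c_u238 * phi, -sig_a_pu239 * phi]])

N0 = np.array([1.0e21, 2.0e22, 0.0])           # assumed initial atoms/cm^3
dt = 30 * 24 * 3600.0                          # one 30-day depletion step
N1 = expm(A * dt) @ N0                         # Bateman solution over the step
print("densities after the step:", N1)
```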

  13. On the possibility of a two-bang supernova collapse

    International Nuclear Information System (INIS)

    Berezinsky, V.S.; Castagnoli, C.; Dokuchaev, V.I.; Galeotti, P.

    1988-01-01

    The possibility of a two-bang stellar collapse originating SN 1987A, and having the characteristics of the events recorded in Mont Blanc and Kamiokande, is discussed here. According to the ''standard'' collapse models of non-rotating stars, which predict the formation of a neutrino-sphere with a non-degenerate neutrino gas inside the star, the Mont Blanc and Kamiokande data for the first burst give too large a stellar mass. On the contrary, a degenerate neutrino gas with low temperature T ∼ 0.5 MeV and chemical potential μ ∼ (12-15) predicts a relatively low total energy outflow W_ν ∼ (2-6) × 10^54 erg, and a small number of expected interactions in Kamiokande. A possible scenario is suggested: a massive (M ∼ 20 M⊙) rotating star is fragmented into two pieces, one light and the other heavy, at the onset of the collapse. The massive component collapses to a black hole and produces the first burst. Neutrinos are trapped inside the collapsing star because of elastic scattering in the outer core off heavy nuclei with A ∼ 300. It is shown that neutrinos fill up the quantum states, producing a degenerate neutrino gas. The second burst is explained by coalescence of the light fragment (M ∼ (1-3) M⊙) onto the massive black hole. The time delay between the two observed bursts (4.7 h) is mostly connected with gravitational braking, as the light fragment falls onto the black hole, with an accompanying emission of gravitational waves over times of the order of hours.

  14. Monte Carlo simulations for plasma physics

    International Nuclear Information System (INIS)

    Okamoto, M.; Murakami, S.; Nakajima, N.; Wang, W.X.

    2000-07-01

    Plasma behaviours are very complicated and the analyses are generally difficult. However, when collisional processes play an important role in the plasma behaviour, the Monte Carlo method is often employed as a useful tool. For example, in neutral particle injection heating (NBI heating), electron or ion cyclotron heating, and alpha heating, Coulomb collisions slow down highly energetic particles and pitch-angle scatter them. These processes are often studied by the Monte Carlo technique, and good agreement with experimental results can be obtained. Recently, the Monte Carlo method has been developed to study fast particle transport associated with heating and the generation of the radial electric field. Further, it is applied to investigating neoclassical transport in plasmas with steep gradients of density and temperature, which is beyond the conventional neoclassical theory. In this report, we briefly summarize the research done by the present authors utilizing the Monte Carlo method. (author)

  15. Neutrino oscillation parameter sampling with MonteCUBES

    Science.gov (United States)

    Blennow, Mattias; Fernandez-Martinez, Enrique

    2010-01-01

    We present MonteCUBES ("Monte Carlo Utility Based Experiment Simulator"), a software package designed to sample the neutrino oscillation parameter space through Markov Chain Monte Carlo algorithms. MonteCUBES makes use of the GLoBES software so that the existing experiment definitions for GLoBES, describing long baseline and reactor experiments, can be used with MonteCUBES. MonteCUBES consists of two main parts: The first is a C library, written as a plug-in for GLoBES, implementing the Markov Chain Monte Carlo algorithm to sample the parameter space. The second part is a user-friendly graphical Matlab interface to easily read, analyze, plot and export the results of the parameter space sampling. Program summary - Program title: MonteCUBES (Monte Carlo Utility Based Experiment Simulator); Catalogue identifier: AEFJ_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFJ_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: GNU General Public Licence; No. of lines in distributed program, including test data, etc.: 69 634; No. of bytes in distributed program, including test data, etc.: 3 980 776; Distribution format: tar.gz; Programming language: C; Computer: MonteCUBES builds and installs on 32 bit and 64 bit Linux systems where GLoBES is installed; Operating system: 32 bit and 64 bit Linux; RAM: typically a few MBs; Classification: 11.1; External routines: GLoBES [1,2] and routines/libraries used by GLoBES; Subprograms used: Cat Id ADZI_v1_0, Title GLoBES, Reference CPC 177 (2007) 439. Nature of problem: Since neutrino masses do not appear in the standard model of particle physics, many models of neutrino masses also induce other types of new physics, which could affect the outcome of neutrino oscillation experiments. In general, these new physics imply high-dimensional parameter spaces that are difficult to explore using classical methods such as multi-dimensional projections and minimizations, such as those
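
    The core sampling idea can be sketched independently of the GLoBES/MonteCUBES C API: a Metropolis Markov Chain Monte Carlo walk over oscillation parameters targeting exp(-χ²/2). The χ² surface, parameter names, step sizes and burn-in below are invented for illustration; a real analysis would evaluate the GLoBES experiment definitions instead.

```python
import numpy as np

rng = np.random.default_rng(5)

def chi2(theta):
    """Toy chi-square surface standing in for an oscillation-experiment fit.
    The 'true' values and widths are invented for illustration only."""
    th13, dcp = theta
    return ((th13 - 0.15) / 0.02) ** 2 + ((dcp - 1.0) / 0.8) ** 2

def metropolis(chi2_fn, theta0, step, n_samples):
    """Minimal Metropolis sampler of the target density exp(-chi2/2)."""
    theta = np.array(theta0, dtype=float)
    step = np.asarray(step, dtype=float)
    chain = np.empty((n_samples, theta.size))
    c_old = chi2_fn(theta)
    for i in range(n_samples):
        prop = theta + step * rng.normal(size=theta.size)
        c_new = chi2_fn(prop)
        if rng.random() < np.exp(-(c_new - c_old) / 2.0):   # accept/reject
            theta, c_old = prop, c_new
        chain[i] = theta
    return chain

chain = metropolis(chi2, theta0=[0.1, 0.0], step=[0.01, 0.3], n_samples=20_000)
print("posterior means (after burn-in):", chain[5000:].mean(axis=0))
```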

  16. MCB. A continuous energy Monte Carlo burnup simulation code

    International Nuclear Information System (INIS)

    Cetnar, J.; Wallenius, J.; Gudowski, W.

    1999-01-01

    A code for the integrated simulation of neutronics and burnup, based upon continuous-energy Monte Carlo techniques and transmutation trajectory analysis, has been developed. Being especially well suited for studies of nuclear waste transmutation systems, the code is an extension of the well-validated MCNP transport program of Los Alamos National Laboratory. Among the advantages of the code (named MCB) is a fully integrated data treatment combined with a time-stepping routine that automatically corrects for burnup-dependent changes in reaction rates, neutron multiplication, material composition and self-shielding. Fission product yields are treated as continuous functions of incident neutron energy, using a non-equilibrium thermodynamical model of the fission process. In the present paper a brief description of the code and the applied methods is given. (author)

  17. A Monte Carlo model for 3D grain evolution during welding

    Science.gov (United States)

    Rodgers, Theron M.; Mitchell, John A.; Tikare, Veena

    2017-09-01

    Welding is one of the most widespread processes used in metal joining. However, there are currently no open-source software implementations for the simulation of microstructural evolution during a weld pass. Here we describe a Potts Monte Carlo based model implemented in the SPPARKS kinetic Monte Carlo computational framework. The model simulates melting, solidification and solid-state microstructural evolution of material in the fusion and heat-affected zones of a weld. The model does not simulate thermal behavior, but rather utilizes user input parameters to specify weld pool and heat-affected zone properties. Weld pool shapes are specified by Bézier curves, which allow for the specification of a wide range of pool shapes. Pool shapes can range from narrow and deep to wide and shallow, representing different fluid flow conditions within the pool. Surrounding temperature gradients are calculated with the aid of a closest point projection algorithm. The model also allows simulation of pulsed-power welding through time-dependent variation of the weld pool size. Example simulation results and comparisons with laboratory weld observations demonstrate microstructural variation with weld speed, pool shape, and pulsed power.
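
    The kernel of a Potts Monte Carlo microstructure model can be sketched in a few lines: sites carry grain orientations, the site energy counts unlike-oriented neighbours, and Metropolis-accepted flips drive grain growth. The sketch below is a generic isotropic 2-D Potts grain-growth loop (lattice size, number of orientations, temperature and step count are arbitrary choices of ours); it omits the weld pool, melting and temperature-gradient machinery that the SPPARKS model adds.

```python
import numpy as np

rng = np.random.default_rng(6)

def potts_grain_growth(L=64, q=32, steps=30, kT=0.3):
    """Minimal isotropic 2-D Potts Monte Carlo grain-growth loop (no melting,
    weld pool or temperature field). Site energy = number of unlike-oriented
    nearest neighbours; one Monte Carlo step attempts L*L flips, each proposing
    the orientation of a random neighbour and applying the Metropolis rule."""
    spins = rng.integers(q, size=(L, L))

    def nbrs(i, j):
        return [spins[(i - 1) % L, j], spins[(i + 1) % L, j],
                spins[i, (j - 1) % L], spins[i, (j + 1) % L]]

    for _ in range(steps):
        for _ in range(L * L):
            i, j = rng.integers(L), rng.integers(L)
            nb = nbrs(i, j)
            new = nb[rng.integers(4)]                  # candidate orientation
            dE = sum(new != n for n in nb) - sum(spins[i, j] != n for n in nb)
            if dE <= 0 or rng.random() < np.exp(-dE / kT):
                spins[i, j] = new
    return spins

grains = potts_grain_growth()
# Fraction of like-oriented neighbour bonds: ~1/q for the random start,
# approaching 1 as grains coarsen.
like = np.mean([grains == np.roll(grains, 1, axis=ax) for ax in (0, 1)])
print("like-oriented bond fraction:", round(float(like), 3))
```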

  18. Radiation Modeling with Direct Simulation Monte Carlo

    Science.gov (United States)

    Carlson, Ann B.; Hassan, H. A.

    1991-01-01

    Improvements in the modeling of radiation in low density shock waves with direct simulation Monte Carlo (DSMC) are the subject of this study. A new scheme to determine the relaxation collision numbers for excitation of electronic states is proposed. This scheme attempts to move the DSMC programs toward a more detailed modeling of the physics and more reliance on available rate data. The new method is compared with the current modeling technique and both techniques are compared with available experimental data. The differences in the results are evaluated. The test case is based on experimental measurements from the AVCO-Everett Research Laboratory electric arc-driven shock tube of a normal shock wave in air at 10 km/s and .1 Torr. The new method agrees with the available data as well as the results from the earlier scheme and is more easily extrapolated to different flow conditions.

  19. Monte Carlo methods and models in finance and insurance

    CERN Document Server

    Korn, Ralf; Kroisandt, Gerald

    2010-01-01

    Offering a unique balance between applications and calculations, Monte Carlo Methods and Models in Finance and Insurance incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The authors separately discuss Monte Carlo techniques, stochastic process basics, and the theoretical background and intuition behind financial and actuarial mathematics, before bringing the topics together to apply the Monte Carlo methods to areas of finance and insurance. This allows for the easy identification of standard Monte Carlo tools and for a detailed focus on the main principles of financial and insurance mathematics. The book describes high-level Monte Carlo methods for standard simulation and the simulation of...

  20. Monte Carlo approaches to light nuclei

    International Nuclear Information System (INIS)

    Carlson, J.

    1990-01-01

    Significant progress has been made recently in the application of Monte Carlo methods to the study of light nuclei. We review new Green's function Monte Carlo results for the alpha particle, Variational Monte Carlo studies of ¹⁶O, and methods for low-energy scattering and transitions. Through these calculations, a coherent picture of the structure and electromagnetic properties of light nuclei has arisen. In particular, we examine the effect of the three-nucleon interaction and the importance of exchange currents in a variety of experimentally measured properties, including form factors and capture cross sections. 29 refs., 7 figs

  1. Monte Carlo approaches to light nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, J.

    1990-01-01

    Significant progress has been made recently in the application of Monte Carlo methods to the study of light nuclei. We review new Green's function Monte Carlo results for the alpha particle, Variational Monte Carlo studies of ¹⁶O, and methods for low-energy scattering and transitions. Through these calculations, a coherent picture of the structure and electromagnetic properties of light nuclei has arisen. In particular, we examine the effect of the three-nucleon interaction and the importance of exchange currents in a variety of experimentally measured properties, including form factors and capture cross sections. 29 refs., 7 figs.

  2. Bio-bibliographical note + poetics + creations («Rebel·lió: problema visual», «Tarot de Marsella: poema aleatorio», «Codex mundi. Contingencia: escritura fractal II-3», «Blanc o nada. Topoemalogía (trilingüe)», «New York, Portbou, Benjamin» and «Bar 12 heures») + questionnaire (Victoria Pineda)

    Directory of Open Access Journals (Sweden)

    Ramon Dachs

    2014-03-01

    Full Text Available Bio-bibliographical note + poetics + creations («Rebel·lió: problema visual», «Tarot de Marsella: poema aleatorio», «Codex mundi. Contingencia: escritura fractal II-3», «Blanc o nada. Topoemalogía (trilingüe)», «New York, Portbou, Benjamin» and «Bar 12 heures») + questionnaire (Victoria Pineda)

  3. Performance of the SLD Warm Iron Calorimeter prototype

    International Nuclear Information System (INIS)

    Callegari, G.; Piemontese, L.; De Sangro, R.; Peruzzi, I.; Piccolo, M.; Busza, W.; Friedman, J.; Johnson, A.; Kendall, H.; Kistiakowsky, V.

    1986-03-01

    A prototype hadron calorimeter, of similar design to the Warm Iron Calorimeter (WIC) planned for the SLD experiment, has been built and its performance has been studied in a test beam. The WIC is an iron sampling calorimeter whose active elements are plastic streamer tubes similar to those used for the Mont-Blanc proton decay experiment. The construction and operation of the tubes will be briefly described together with their use in an iron calorimeter - muon tracker. Efficiency, resolution and linearity have been measured in a hadron/muon beam up to 11 GeV. The measured values correspond to the SLD design goals

  4. Simulation and the Monte Carlo method

    CERN Document Server

    Rubinstein, Reuven Y

    2016-01-01

    Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov Chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...

  5. Lecture 1. Monte Carlo basics. Lecture 2. Adjoint Monte Carlo. Lecture 3. Coupled Forward-Adjoint calculations

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J.E. [Delft University of Technology, Interfaculty Reactor Institute, Delft (Netherlands)

    2000-07-01

    The Monte Carlo method is a statistical method to solve mathematical and physical problems using random numbers. The principle of the method will be demonstrated for a simple mathematical problem and for neutron transport. Various types of estimators will be discussed, as well as generally applied variance reduction methods like splitting, Russian roulette and importance biasing. The theoretical formulation for solving eigenvalue problems for multiplying systems will be shown. Some reflections will be given about the applicability of the Monte Carlo method, its limitations and its future prospects for reactor physics calculations. Adjoint Monte Carlo is a Monte Carlo game to solve the adjoint neutron (or photon) transport equation. The adjoint transport equation can be interpreted in terms of simulating histories of artificial particles, which show properties of neutrons that move backwards in history. These particles will start their history at the detector from which the response must be estimated and give a contribution to the estimated quantity when they hit or pass through the neutron source. Application to the multigroup transport formulation will be demonstrated. Possible implementation for the continuous energy case will be outlined. The inherent advantages and disadvantages of the method will be discussed. The Midway Monte Carlo method will be presented for calculating a detector response due to a (neutron or photon) source. A derivation will be given of the basic formula for the Midway Monte Carlo method. The black absorber technique, allowing for a cutoff of particle histories when reaching the midway surface in one of the calculations, will be derived. An extension of the theory to coupled neutron-photon problems is given. The method will be demonstrated for an oil well logging problem, comprising a neutron source in a borehole and photon detectors to register the photons generated by inelastic neutron scattering. (author)
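
    To make the "simple mathematical problem" of the first lecture concrete, the sketch below estimates the uncollided transmission probability through a purely absorbing slab by analog Monte Carlo. It is not taken from the lecture notes; the cross section, slab thickness and history count are illustrative.
```python
# A minimal analog Monte Carlo sketch (not from the lecture notes): estimate the
# probability that a particle crosses a purely absorbing slab without colliding.
# The analytic answer exp(-sigma_t * thickness) makes the estimator easy to check.
import math
import random

def transmission(sigma_t, thickness, histories, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(histories):
        # Distance to first collision sampled from p(s) = sigma_t * exp(-sigma_t * s);
        # 1 - random() is used so the argument of log() is never zero.
        s = -math.log(1.0 - rng.random()) / sigma_t
        if s > thickness:            # crossed the slab without colliding
            hits += 1
    p = hits / histories
    sigma = math.sqrt(p * (1.0 - p) / histories)   # one-sigma binomial uncertainty
    return p, sigma

if __name__ == "__main__":
    est, err = transmission(sigma_t=1.0, thickness=3.0, histories=100_000)
    print(f"MC estimate {est:.4f} +/- {err:.4f}, exact {math.exp(-3.0):.4f}")
```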

  6. Lecture 1. Monte Carlo basics. Lecture 2. Adjoint Monte Carlo. Lecture 3. Coupled Forward-Adjoint calculations

    International Nuclear Information System (INIS)

    Hoogenboom, J.E.

    2000-01-01

    The Monte Carlo method is a statistical method to solve mathematical and physical problems using random numbers. The principle of the method will be demonstrated for a simple mathematical problem and for neutron transport. Various types of estimators will be discussed, as well as generally applied variance reduction methods like splitting, Russian roulette and importance biasing. The theoretical formulation for solving eigenvalue problems for multiplying systems will be shown. Some reflections will be given about the applicability of the Monte Carlo method, its limitations and its future prospects for reactor physics calculations. Adjoint Monte Carlo is a Monte Carlo game to solve the adjoint neutron (or photon) transport equation. The adjoint transport equation can be interpreted in terms of simulating histories of artificial particles, which show properties of neutrons that move backwards in history. These particles will start their history at the detector from which the response must be estimated and give a contribution to the estimated quantity when they hit or pass through the neutron source. Application to the multigroup transport formulation will be demonstrated. Possible implementation for the continuous energy case will be outlined. The inherent advantages and disadvantages of the method will be discussed. The Midway Monte Carlo method will be presented for calculating a detector response due to a (neutron or photon) source. A derivation will be given of the basic formula for the Midway Monte Carlo method. The black absorber technique, allowing for a cutoff of particle histories when reaching the midway surface in one of the calculations, will be derived. An extension of the theory to coupled neutron-photon problems is given. The method will be demonstrated for an oil well logging problem, comprising a neutron source in a borehole and photon detectors to register the photons generated by inelastic neutron scattering. (author)

  7. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    International Nuclear Information System (INIS)

    Brown, Forrest B.; Univ. of New Mexico, Albuquerque, NM

    2016-01-01

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; Doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations.

  8. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications Group; Univ. of New Mexico, Albuquerque, NM (United States). Nuclear Engineering Dept.

    2016-11-29

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; Doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations.

  9. Generalized hybrid Monte Carlo - CMFD methods for fission source convergence

    International Nuclear Information System (INIS)

    Wolters, Emily R.; Larsen, Edward W.; Martin, William R.

    2011-01-01

    In this paper, we generalize the recently published 'CMFD-Accelerated Monte Carlo' method and present two new methods that reduce the statistical error in CMFD-Accelerated Monte Carlo. The CMFD-Accelerated Monte Carlo method uses Monte Carlo to estimate nonlinear functionals used in low-order CMFD equations for the eigenfunction and eigenvalue. The Monte Carlo fission source is then modified to match the resulting CMFD fission source in a 'feedback' procedure. The two proposed methods differ from CMFD-Accelerated Monte Carlo in the definition of the required nonlinear functionals, but they have identical CMFD equations. The proposed methods are compared with CMFD-Accelerated Monte Carlo on a high dominance ratio test problem. All hybrid methods converge the Monte Carlo fission source almost immediately, leading to a large reduction in the number of inactive cycles required. The proposed methods stabilize the fission source more efficiently than CMFD-Accelerated Monte Carlo, leading to a reduction in the number of active cycles required. Finally, as in CMFD-Accelerated Monte Carlo, the apparent variance of the eigenfunction is approximately equal to the real variance, so the real error is well-estimated from a single calculation. This is an advantage over standard Monte Carlo, in which the real error can be underestimated due to inter-cycle correlation. (author)

  10. Hybrid transport and diffusion modeling using electron thermal transport Monte Carlo SNB in DRACO

    Science.gov (United States)

    Chenhall, Jeffrey; Moses, Gregory

    2017-10-01

    The iSNB (implicit Schurtz Nicolai Busquet) multigroup diffusion electron thermal transport method is adapted into an Electron Thermal Transport Monte Carlo (ETTMC) transport method to better model angular and long mean free path non-local effects. Previously, the ETTMC model had been implemented in the 2D DRACO multiphysics code and found to produce consistent results with the iSNB method. Current work is focused on a hybridization of the computationally slower but higher fidelity ETTMC transport method with the computationally faster iSNB diffusion method in order to maximize computational efficiency. Furthermore, effects on the energy distribution of the heat flux divergence are studied. Work to date on the hybrid method will be presented. This work was supported by Sandia National Laboratories and the Univ. of Rochester Laboratory for Laser Energetics.

  11. Is Monte Carlo embarrassingly parallel?

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

    2012-07-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Also other time losses in the parallel calculation are identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
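
    The per-cycle synchronization described above can be mimicked in a few lines. The toy below is not the program analyzed in the paper: Python's multiprocessing stands in for MPI and the transport of a history is reduced to drawing a random offspring count, so the only point being illustrated is that every cycle ends in a blocking gather before the cycle k-estimate and next-cycle source can be formed.
```python
# Illustrative toy only: pool.map() is a blocking rendez-vous, so every cycle all workers
# must finish before the multiplication factor and next-cycle source are available, and
# the cycle time is set by the slowest worker.  All numbers are made up.
import random
from multiprocessing import Pool

def run_batch(args):
    seed, n_histories = args
    rng = random.Random(seed)
    # Toy physics: each history produces 0-3 "fission" neutrons with mean 1.5.
    return sum(rng.choice((0, 1, 2, 3)) for _ in range(n_histories))

def power_iteration(n_workers=4, histories_per_worker=25_000, cycles=20):
    k_estimates = []
    with Pool(n_workers) as pool:
        for cycle in range(cycles):
            jobs = [(1000 * cycle + w, histories_per_worker) for w in range(n_workers)]
            produced = pool.map(run_batch, jobs)      # synchronization point every cycle
            k_cycle = sum(produced) / (n_workers * histories_per_worker)
            k_estimates.append(k_cycle)               # also feeds next-cycle population control
    return sum(k_estimates) / len(k_estimates)

if __name__ == "__main__":
    print(f"mean k over cycles: {power_iteration():.4f}")
```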

  12. Is Monte Carlo embarrassingly parallel?

    International Nuclear Information System (INIS)

    Hoogenboom, J. E.

    2012-01-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Also other time losses in the parallel calculation are identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)

  13. Validation and verification of the ORNL Monte Carlo codes for nuclear safety analysis

    International Nuclear Information System (INIS)

    Emmett, M.B.

    1993-01-01

    The process of ensuring the quality of computer codes can be very time consuming and expensive. The Oak Ridge National Laboratory (ORNL) Monte Carlo codes all predate the existence of quality assurance (QA) standards and configuration control. The number of person-years and the amount of money spent on code development make it impossible to adhere strictly to all the current requirements. At ORNL, the Nuclear Engineering Applications Section of the Computing Applications Division is responsible for the development, maintenance, and application of the Monte Carlo codes MORSE and KENO. The KENO code is used for doing criticality analyses; the MORSE code, which has two official versions, CGA and SGC, is used for radiation transport analyses. Because KENO and MORSE were very thoroughly checked out over the many years of extensive use both in the United States and in the international community, the existing codes were "baselined." This means that the versions existing at the time the original configuration plan is written are considered to be validated and verified code systems based on the established experience with them.

  14. Mean field simulation for Monte Carlo integration

    CERN Document Server

    Del Moral, Pierre

    2013-01-01

    In the last three decades, there has been a dramatic increase in the use of interacting particle methods as a powerful tool in real-world applications of Monte Carlo simulation in computational physics, population biology, computer sciences, and statistical machine learning. Ideally suited to parallel and distributed computation, these advanced particle algorithms include nonlinear interacting jump diffusions; quantum, diffusion, and resampled Monte Carlo methods; Feynman-Kac particle models; genetic and evolutionary algorithms; sequential Monte Carlo methods; adaptive and interacting Markov ...

  15. Variational Variance Reduction for Monte Carlo Criticality Calculations

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Larsen, Edward W.

    2001-01-01

    A new variational variance reduction (VVR) method for Monte Carlo criticality calculations was developed. This method employs (a) a variational functional that is more accurate than the standard direct functional, (b) a representation of the deterministically obtained adjoint flux that is especially accurate for optically thick problems with high scattering ratios, and (c) estimates of the forward flux obtained by Monte Carlo. The VVR method requires no nonanalog Monte Carlo biasing, but it may be used in conjunction with Monte Carlo biasing schemes. Some results are presented from a class of criticality calculations involving alternating arrays of fuel and moderator regions

  16. Monte Carlo Solutions for Blind Phase Noise Estimation

    Directory of Open Access Journals (Sweden)

    Çırpan Hakan

    2009-01-01

    Full Text Available This paper investigates the use of Monte Carlo sampling methods for phase noise estimation on additive white Gaussian noise (AWGN channels. The main contributions of the paper are (i the development of a Monte Carlo framework for phase noise estimation, with special attention to sequential importance sampling and Rao-Blackwellization, (ii the interpretation of existing Monte Carlo solutions within this generic framework, and (iii the derivation of a novel phase noise estimator. Contrary to the ad hoc phase noise estimators that have been proposed in the past, the estimators considered in this paper are derived from solid probabilistic and performance-determining arguments. Computer simulations demonstrate that, on one hand, the Monte Carlo phase noise estimators outperform the existing estimators and, on the other hand, our newly proposed solution exhibits a lower complexity than the existing Monte Carlo solutions.
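
    A generic sketch of the sequential importance sampling framework discussed above, applied to a Wiener phase-noise model. It is not the estimator derived in the paper, and the symbol definitions, noise levels and particle count are illustrative assumptions.
```python
# Generic bootstrap particle filter (sequential importance sampling with resampling) for a
# Wiener phase-noise model phi[k] = phi[k-1] + w[k], r[k] = s[k]*exp(j*phi[k]) + n[k].
import numpy as np

def particle_filter_phase(r, s, sigma_w, sigma_n, n_particles=500, seed=0):
    rng = np.random.default_rng(seed)
    phi = rng.uniform(-np.pi, np.pi, n_particles)        # initial phase particles
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = np.empty(len(r))
    for k in range(len(r)):
        # Propagate through the random-walk phase model (prior used as proposal).
        phi = phi + rng.normal(0.0, sigma_w, n_particles)
        # Importance weights from the complex Gaussian observation likelihood.
        residual = np.abs(r[k] - s[k] * np.exp(1j * phi)) ** 2
        weights = weights * np.exp(-residual / sigma_n**2) + 1e-300  # guard against underflow
        weights /= weights.sum()
        # Weighted circular mean of the particles as the phase estimate.
        estimates[k] = np.angle(np.sum(weights * np.exp(1j * phi)))
        # Multinomial resampling to counter weight degeneracy.
        idx = rng.choice(n_particles, n_particles, p=weights)
        phi, weights = phi[idx], np.full(n_particles, 1.0 / n_particles)
    return estimates

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, sigma_w, sigma_n = 200, 0.05, 0.3
    s = np.ones(n)                                        # known pilot symbols
    true_phi = np.cumsum(rng.normal(0.0, sigma_w, n))
    noise = sigma_n * (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
    r = s * np.exp(1j * true_phi) + noise
    est = particle_filter_phase(r, s, sigma_w, sigma_n)
    err = np.angle(np.exp(1j * (est - true_phi)))         # wrapped phase error
    print("rms phase error [rad]:", float(np.sqrt(np.mean(err**2))))
```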

  17. Monte Carlo simulations for generic granite repository studies

    Energy Technology Data Exchange (ETDEWEB)

    Chu, Shaoping [Los Alamos National Laboratory]; Lee, Joon H. [SNL]; Wang, Yifeng [SNL]

    2010-12-08

    In a collaborative study between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL) for the DOE-NE Office of Fuel Cycle Technologies Used Fuel Disposition (UFD) Campaign project, we have conducted preliminary system-level analyses to support the development of a long-term strategy for geologic disposal of high-level radioactive waste. A general modeling framework consisting of a near- and a far-field submodel for a granite GDSE was developed. A representative far-field transport model for a generic granite repository was merged with an integrated systems (GoldSim) near-field model. Integrated Monte Carlo model runs with the combined near- and far-field transport models were performed, and the parameter sensitivities were evaluated for the combined system. In addition, a subset of radionuclides that are potentially important to repository performance was identified and evaluated for a series of model runs. The analyses were conducted with different waste inventory scenarios. Analyses were also conducted for different repository radionuclide release scenarios. While the results to date are for a generic granite repository, the work establishes the method to be used in the future to provide guidance on the development of a strategy for the long-term disposal of high-level radioactive waste in a granite repository.

  18. Monitoring and modelling of thermo-hydro-mechanical processes - main results of a heater experiment at the Mont Terri underground rock laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Ingeborg, G.; Alheid, H.J. [BGR - Federal Institute for Geosciences and Natural Resources, Hannover (Germany); Jockwerz, N. [Gesellschaft fur Anlagen- und Reaktorsicherheit (GRS) - Final Repository Research Division, Braunschweig (Germany); Mayor, J.C. [ENRESA - Empresa Nacional des Residuos Radioactivos, Madrid (Spain); Garcia-Siner, J.L. [AITEMIN -Asociacion para la Investigacion y Desarrollo Industrial de los Recursos Naturales, Madrid, (Spain); Alonso, E. [CIMNE - Centre Internacional de Metodos Numerics en Ingenyeria, UPC, Barcelona (Spain); Weber, H.P. [NAGRA - National Cooperative for the Disposal of Radioactive Waste, Wettingen (Switzerland); Plotze, M. [ETHZ - Swiss Federal Institute of Technology Zurich, IGT, Zurich, (Switzerland); Klubertanz, G. [COLENCO Power Engineering Ltd., Baden (Switzerland)

    2005-07-01

    The long-term safety of permanent underground repositories relies on a combination of engineered and geological barriers, so that the interactions between the barriers in response to conditions expected in a high-level waste repository need to be identified and fully understood. Co-financed by the European Community, a heater experiment was realized on a pilot plant scale at the underground laboratory in Mont Terri, Switzerland. The experiment was accompanied by an extensive programme of continuous monitoring, experimental investigations on-site as well as in laboratories, and numerical modelling of the coupled thermo-hydro-mechanical processes. Heat-producing waste was simulated by a heater element of 10 cm diameter, held at a constant surface temperature of 100 °C. The heater element (length 2 m) operated at a depth of 4 to 6 m in a 7 m deep vertical borehole. It was embedded in a geotechnical barrier of pre-compacted bentonite blocks (outer diameter 30 cm) that were irrigated for 35 months before the heating phase (duration 18 months) began. The host rock is a highly consolidated stiff Jurassic claystone (Opalinus Clay). After the heating phase, the vicinity of the heater element was explored by seismic, hydraulic, and geotechnical tests to investigate whether the heating had induced changes in the Opalinus Clay. Additionally, rock mechanics specimens were tested in the laboratory. Finally, the experiment was dismantled to provide laboratory specimens of post-heating buffer and host rock material. The bentonite blocks were thoroughly wetted at the time of dismantling. The volume increase amounted to 5 to 9% and was thus below the potential of the bentonite. Geo-electrical measurements showed no decrease of the water content in the vicinity of the heater during the heating phase. The decreasing energy input to the heater element over time hence suggests that the bentonite dried, leading to a decrease in its thermal conductivity. Gas release during the heating period occurred

  19. Enhancement of precision and accuracy by Monte-Carlo simulation of a well-type pressurized ionization chamber used in radionuclide metrology

    International Nuclear Information System (INIS)

    Kryeziu, D.

    2006-09-01

    The aim of this work was to test and validate the Monte-Carlo (MC) ionization chamber simulation method for calculating the activity of radioactive solutions. This is required when no, or insufficient, experimental calibration figures are available, as well as to improve the accuracy of activity measurements for other radionuclides. Well-type or 4π γ ISOCAL IV ionization chambers (IC) are widely used in many national standard laboratories around the world. As secondary standard measuring systems, these radionuclide calibrators serve to maintain measurement consistency checks and to ensure the quality of standards disseminated to users for a wide range of radionuclides, many of which are of special interest in nuclear medicine as well as in other applications of radionuclide metrology. For the studied radionuclides, the calibration figures (efficiencies) and their respective volume correction factors are determined using the PENELOPE MC computer code system. The ISOCAL IV IC filled with nitrogen gas at approximately 1 MPa is simulated. The simulated models of the chamber are designed by means of reduced quadric equations and by applying the appropriate mathematical transformations. The simulations are done for various container geometries of the standard solution, which take the form of: i) a sealed Jena glass 5 ml PTB standard ampoule, ii) a 10 ml (P6) vial and iii) a 10 R Schott Type 1+ vial. The simulation of the ISOCAL IV IC is explained. The effect of density variation of the nitrogen filling gas on the sensitivity of the chamber is investigated. The code is also used to examine the effects of using lead and copper shields as well as to evaluate the sensitivity of the chamber to electrons and positrons. The Monte-Carlo simulation method has been validated by comparing the calculated calibration figures with the experimental calibration figures available from the National Physical Laboratory (NPL), England, which are deduced from the absolute activity

  20. Monte Carlo based diffusion coefficients for LMFBR analysis

    International Nuclear Information System (INIS)

    Van Rooijen, Willem F.G.; Takeda, Toshikazu; Hazama, Taira

    2010-01-01

    A method based on Monte Carlo calculations is developed to estimate the diffusion coefficient of unit cells. The method uses a geometrical model similar to that used in lattice theory, but does not use the assumption of a separable fundamental mode used in lattice theory. The method uses standard Monte Carlo flux and current tallies, and the continuous energy Monte Carlo code MVP was used without modifications. Four models are presented to derive the diffusion coefficient from tally results of flux and partial currents. In this paper the method is applied to the calculation of a plate cell of the fast-spectrum critical facility ZEBRA. Conventional calculations of the diffusion coefficient diverge in the presence of planar voids in the lattice, but our Monte Carlo method can treat this situation without any problem. The Monte Carlo method was used to investigate the influence of geometrical modeling as well as the directional dependence of the diffusion coefficient. The method can be used to estimate the diffusion coefficient of complicated unit cells, the limitation being the capabilities of the Monte Carlo code. The method will be used in the future to confirm results for the diffusion coefficient obtained with deterministic codes. (author)
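
    As an illustration of turning flux and current tallies into a diffusion coefficient, the sketch below applies a simple finite-difference form of Fick's law. It is a generic post-processing step, not necessarily one of the four models presented in the paper, and the tally values are invented.
```python
# Generic post-processing sketch: given Monte Carlo tallies of the scalar flux in two
# adjacent mesh cells and the net current on the surface between them, Fick's law
# J = -D * dphi/dx gives a finite-difference estimate of the diffusion coefficient.
def diffusion_coefficient(phi_left, phi_right, net_current, cell_width):
    """Estimate D from cell-averaged fluxes and the net surface current (per source particle)."""
    gradient = (phi_right - phi_left) / cell_width   # finite-difference flux gradient
    if gradient == 0.0:
        raise ValueError("flat flux: Fick's law gives no information on D")
    return -net_current / gradient

if __name__ == "__main__":
    # Illustrative tally values: flux falls from left to right, net current flows to the right.
    D = diffusion_coefficient(phi_left=2.0e-2, phi_right=1.6e-2, net_current=2.4e-3, cell_width=2.0)
    print(f"estimated D = {D:.3f} cm")
```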

  1. 'Odontologic dosimetric card' experiments and simulations using Monte Carlo methods

    International Nuclear Information System (INIS)

    Menezes, C.J.M.; Lima, R. de A.; Peixoto, J.E.; Vieira, J.W.

    2008-01-01

    The techniques for data processing, combined with the development of fast and more powerful computers, make the Monte Carlo method one of the most widely used tools in radiation transport simulation. For applications in diagnostic radiology, this method generally uses anthropomorphic phantoms to evaluate the absorbed dose to patients during exposure. In this paper, Monte Carlo techniques were used to simulate a testing device designed for intra-oral X-ray equipment performance evaluation, called the Odontologic Dosimetric Card (CDO, from 'Cartao Dosimetrico Odontologico' in Portuguese), for different thermoluminescent detectors. Two computational exposure models, RXD/EGS4 and CDO/EGS4, were used. In the first model, the simulation results are compared with experimental data obtained under similar conditions. The second model presents the same characteristics as the testing device studied (CDO). For the irradiations, the X-ray spectra were generated with the IPEM Report 78 spectrum processor. The attenuated spectrum was obtained for IEC 61267 qualities and various additional filters for Pantak 320 industrial X-ray equipment. The results obtained for the study of the copper filters used in the determination of the kVp were compared with experimental data, validating the model proposed for the characterization of the CDO. The results show that the CDO will be utilized in quality assurance programs in order to guarantee that the equipment fulfills the requirements of Norm SVS No. 453/98 MS (Brazil), 'Directives of Radiation Protection in Medical and Dental Radiodiagnostic'. We conclude that EGS4 is a suitable Monte Carlo code to simulate thermoluminescent dosimeters and the experimental procedures employed in the routine of a quality control laboratory in diagnostic radiology. (author)

  2. Effects of Grapevine Leafroll associated Virus 3 (GLRaV-3) and duration of infection on fruit composition and wine chemical profile of Vitis vinifera L. cv. Sauvignon blanc.

    Science.gov (United States)

    Montero, R; Mundy, D; Albright, A; Grose, C; Trought, M C T; Cohen, D; Chooi, K M; MacDiarmid, R; Flexas, J; Bota, J

    2016-04-15

    In order to determine the effects of Grapevine Leafroll associated Virus 3 (GLRaV-3) on the fruit composition and the chemical profile of juice and wine from Vitis vinifera L. cv. Sauvignon blanc grown in New Zealand, composition variables were measured on fruit from vines infected with GLRaV-3 (established or recent infections) and from uninfected vines. Physiological ripeness (20.4°Brix) was the criterion established to determine the harvest date for each of the three treatments. The date of grape ripeness was strongly affected by virus infection. In juice and wine, GLRaV-3 infection prior to 2008 reduced titratable acidity compared with the uninfected control. Differences observed in amino acids from the three infection status groups did not modify basic wine chemical properties. In conclusion, GLRaV-3 infection slowed grape ripening but, at equivalent ripeness, had minimal effects on the juice and wine chemistry. Time of infection produced differences in specific plant physiological variables. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Computer system for Monte Carlo experimentation

    International Nuclear Information System (INIS)

    Grier, D.A.

    1986-01-01

    A new computer system for Monte Carlo Experimentation is presented. The new system speeds and simplifies the process of coding and preparing a Monte Carlo Experiment; it also encourages the proper design of Monte Carlo Experiments, and the careful analysis of the experimental results. A new functional language is the core of this system. Monte Carlo Experiments, and their experimental designs, are programmed in this new language; those programs are compiled into Fortran output. The Fortran output is then compiled and executed. The experimental results are analyzed with a standard statistics package such as Si, Isp, or Minitab or with a user-supplied program. Both the experimental results and the experimental design may be directly loaded into the workspace of those packages. The new functional language frees programmers from many of the details of programming an experiment. Experimental designs such as factorial, fractional factorial, or latin square are easily described by the control structures and expressions of the language. Specific mathematical models are generated by the routines of the language.

  4. Monte Carlo calculations of the neutron coincidence gate utilisation factor for passive neutron coincidence counting

    CERN Document Server

    Bourva, L C A

    1999-01-01

    The general purpose neutron-photon-electron Monte Carlo N-Particle code, MCNP™, has been used to simulate the neutronic characteristics of the on-site laboratory passive neutron coincidence counter to be installed, under Euratom Safeguards Directorate supervision, at the Sellafield reprocessing plant in Cumbria, UK. This detector is part of a series of nondestructive assay instruments to be installed for the accurate determination of the plutonium content of nuclear materials. The present work focuses on one aspect of this task, namely, the accurate calculation of the coincidence gate utilisation factor. This parameter is an important term in the interpretative model used to analyse the passive neutron coincidence count data acquired using pulse train deconvolution electronics based on the shift register technique. It accounts for the limited proportion of neutrons detected within the time interval for which the electronics gate is open. The Monte Carlo code MCF, presented in this work, represents...
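
    For orientation, the gate utilisation factor is commonly written, for an exponential die-away of correlated neutron detections with time constant tau, predelay P and gate width G, as f = exp(-P/tau)·(1 - exp(-G/tau)). The snippet below evaluates this textbook shift-register expression; it is not a result quoted from the work above, and the parameter values are illustrative.
```python
# Textbook shift-register gate fraction under an assumed exponential die-away; the
# predelay, gate width and die-away time below are illustrative, not detector data.
import math

def gate_utilisation(predelay_us, gate_us, die_away_us):
    return math.exp(-predelay_us / die_away_us) * (1.0 - math.exp(-gate_us / die_away_us))

if __name__ == "__main__":
    print(f"f = {gate_utilisation(predelay_us=4.5, gate_us=64.0, die_away_us=50.0):.3f}")
```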

  5. Random Numbers and Monte Carlo Methods

    Science.gov (United States)

    Scherer, Philipp O. J.

    Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages Monte Carlo methods are very useful which sample the integration volume at randomly chosen points. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with given probability distribution which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
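
    In the spirit of the chapter summary above (but not taken from it), the following sketch applies the Metropolis algorithm with a symmetric random-walk proposal to a one-dimensional Gaussian density and uses the chain to estimate the second moment.
```python
# Metropolis sampling of p(x) ~ exp(-x^2/2) with a symmetric random-walk proposal;
# the samples are used to estimate <x^2>, whose exact value is 1.
import math
import random

def metropolis(n_samples, step=1.0, seed=0):
    rng = random.Random(seed)
    x = 0.0
    log_p = -0.5 * x * x
    samples = []
    for _ in range(n_samples):
        x_new = x + rng.uniform(-step, step)          # symmetric proposal
        log_p_new = -0.5 * x_new * x_new
        # Accept with probability min(1, p(x_new)/p(x)).
        if math.log(rng.random() + 1e-300) < log_p_new - log_p:
            x, log_p = x_new, log_p_new
        samples.append(x)
    return samples

if __name__ == "__main__":
    chain = metropolis(200_000)
    burn_in = 1_000
    est = sum(x * x for x in chain[burn_in:]) / (len(chain) - burn_in)
    print(f"<x^2> estimate: {est:.3f} (exact 1.0)")
```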

  6. LCG Monte-Carlo Data Base

    CERN Document Server

    Bartalini, P.; Kryukov, A.; Selyuzhenkov, Ilya V.; Sherstnev, A.; Vologdin, A.

    2004-01-01

    We present the Monte-Carlo events Data Base (MCDB) project and its development plans. MCDB facilitates communication between authors of Monte-Carlo generators and experimental users. It also provides a convenient book-keeping and an easy access to generator level samples. The first release of MCDB is now operational for the CMS collaboration. In this paper we review the main ideas behind MCDB and discuss future plans to develop this Data Base further within the CERN LCG framework.

  7. Alternative implementations of the Monte Carlo power method

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Gelbard, E.M.

    2002-01-01

    We compare nominal efficiencies, i.e. variances in power shapes for equal running time, of different versions of the Monte Carlo eigenvalue computation, as applied to criticality safety analysis calculations. The two main methods considered here are ''conventional'' Monte Carlo and the superhistory method, and both are used in criticality safety codes. Within each of these major methods, different variants are available for the main steps of the basic Monte Carlo algorithm. Thus, for example, different treatments of the fission process may vary in the extent to which they follow, in analog fashion, the details of real-world fission, or may vary in details of the methods by which they choose next-generation source sites. In general the same options are available in both the superhistory method and conventional Monte Carlo, but there seems not to have been much examination of the special properties of the two major methods and their minor variants. We find, first, that the superhistory method is just as efficient as conventional Monte Carlo and, secondly, that use of different variants of the basic algorithms may, in special cases, have a surprisingly large effect on Monte Carlo computational efficiency
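
    The bookkeeping difference between conventional cycle-by-cycle renormalization and the superhistory method can be shown on a zero-dimensional toy in which each neutron simply produces a random number of fission neutrons. The sketch below is only such a toy, with an assumed multiplication of 0.95; it is not either of the codes compared in the paper.
```python
# Zero-dimensional toy: transport is stripped away and only the bookkeeping of the two
# schemes remains.  All numbers are illustrative.
import random

K_TRUE = 0.95

def sample_offspring(rng):
    # P(1) is fixed at 0.5 and P(2) is chosen so that E[nu] = 0.5 + 2*P(2) = K_TRUE.
    p2 = (K_TRUE - 0.5) / 2.0
    u = rng.random()
    if u < p2:
        return 2
    if u < p2 + 0.5:
        return 1
    return 0

def conventional(cycles=200, population=1000, seed=0):
    """Renormalize the source to the nominal population after every cycle."""
    rng = random.Random(seed)
    k_cycles = []
    for _ in range(cycles):
        offspring = sum(sample_offspring(rng) for _ in range(population))
        k_cycles.append(offspring / population)
        # population control: the next cycle restarts from `population` sites
    return sum(k_cycles) / len(k_cycles)

def superhistory(cycles=40, population=1000, generations=5, seed=0):
    """Follow each chain for several generations before renormalizing."""
    rng = random.Random(seed)
    k_cycles = []
    for _ in range(cycles):
        produced = population
        for _ in range(generations):
            produced = sum(sample_offspring(rng) for _ in range(produced))
        # per-generation multiplication inferred from growth over the whole supergeneration
        k_cycles.append((produced / population) ** (1.0 / generations))
    return sum(k_cycles) / len(k_cycles)

if __name__ == "__main__":
    print(f"conventional k: {conventional():.4f}  superhistory k: {superhistory():.4f}")
```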

  8. Monte Carlo modelling of TRIGA research reactor

    Science.gov (United States)

    El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.

    2010-10-01

    The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes for their use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparisons with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) from Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents in detailed all components of the core with literally no physical approximation. Continuous energy cross-section data from the more recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3) as well as S( α, β) thermal neutron scattering functions distributed with the MCNP code were used. The cross-section libraries were generated by using the NJOY99 system updated to its more recent patch file "up259". The consistency and accuracy of both the Monte Carlo simulation and neutron transport physics were established by benchmarking the TRIGA experiments. Core excess reactivity, total and integral control rods worth as well as power peaking factors were used in the validation process. Results of calculations are analysed and discussed.

  9. Igo - A Monte Carlo Code For Radiotherapy Planning

    International Nuclear Information System (INIS)

    Goldstein, M.; Regev, D.

    1999-01-01

    The goal of radiation therapy is to deliver a lethal dose to the tumor, while minimizing the dose to normal tissues and vital organs. To carry out this task, it is critical to correctly calculate the 3-D dose delivered. Monte Carlo transport methods (especially the Adjoint Monte Carlo) have the potential to provide more accurate predictions of the 3-D dose than the currently used methods. IG0 is a Monte Carlo code derived from the general Monte Carlo program MCNP, tailored specifically for calculating the effects of radiation therapy. This paper describes the IG0 transport code, the PIG0 interface and some preliminary results.

  10. Monte Carlo techniques for analyzing deep-penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.; Gonnord, J.; Hendricks, J.S.

    1986-01-01

    Current methods and difficulties in Monte Carlo deep-penetration calculations are reviewed, including statistical uncertainty and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multigroup Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications
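
    As a minimal illustration of the splitting and Russian roulette games reviewed above, the sketch below implements a standard weight-window check. The window bounds are illustrative; production codes derive them from importance maps such as the adjoint-based ones mentioned in the abstract.
```python
# Weight-window population control: split heavy particles, roulette light ones.
# Both games preserve the expected weight, so the tally mean is unchanged.
import random

def weight_window(weight, w_low, w_high, w_survive, rng):
    """Return the list of weights that replace a particle of weight `weight` (may be empty)."""
    if weight > w_high:                       # split: several lighter copies
        n_copies = int(weight / w_survive) + 1
        return [weight / n_copies] * n_copies
    if weight < w_low:                        # Russian roulette: kill or promote
        survival_prob = weight / w_survive
        if rng.random() < survival_prob:
            return [w_survive]                # survivor carries the average weight
        return []                             # killed; unbiased on average
    return [weight]                           # inside the window: leave unchanged

if __name__ == "__main__":
    rng = random.Random(42)
    for w in (0.01, 0.2, 1.0, 7.5):
        after = weight_window(w, w_low=0.1, w_high=5.0, w_survive=1.0, rng=rng)
        print(f"weight {w:>5}: -> {after}")
```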

  11. Odd-flavor Simulations by the Hybrid Monte Carlo

    CERN Document Server

    Takaishi, Tetsuya; Takaishi, Tetsuya; De Forcrand, Philippe

    2001-01-01

    The standard hybrid Monte Carlo algorithm is known to simulate only even numbers of QCD flavors. Simulations of odd numbers of flavors, however, can also be performed in the framework of the hybrid Monte Carlo algorithm, where the inverse of the fermion matrix is approximated by a polynomial. In this exploratory study we perform three-flavor QCD simulations. We make a comparison of the hybrid Monte Carlo algorithm and the R-algorithm, which also simulates odd-flavor systems but has step-size errors. We find that results from our hybrid Monte Carlo algorithm are in agreement with those from the R-algorithm obtained at very small step-size.

  12. Quantum Monte Carlo approaches for correlated systems

    CERN Document Server

    Becca, Federico

    2017-01-01

    Over the past several decades, computational approaches to studying strongly-interacting systems have become increasingly varied and sophisticated. This book provides a comprehensive introduction to state-of-the-art quantum Monte Carlo techniques relevant for applications in correlated systems. It gives a clear overview of variational wave functions and a detailed presentation of stochastic sampling, including Markov chains and Langevin dynamics, which is developed into a discussion of Monte Carlo methods. The variational technique is described, from foundations to a detailed description of its algorithms. Further topics discussed include optimisation techniques, real-time dynamics and projection methods, including Green's function, reptation and auxiliary-field Monte Carlo, from basic definitions to advanced algorithms for efficient codes, and the book concludes with recent developments in continuum space. Quantum Monte Carlo Approaches for Correlated Systems provides an extensive reference ...

  13. Track 4: basic nuclear science variance reduction for Monte Carlo criticality simulations. 6. Variational Variance Reduction for Monte Carlo Criticality Calculations

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Larsen, Edward W.

    2001-01-01

    Recently, it has been shown that the figure of merit (FOM) of Monte Carlo source-detector problems can be enhanced by using a variational rather than a direct functional to estimate the detector response. The direct functional, which is traditionally employed in Monte Carlo simulations, requires an estimate of the solution of the forward problem within the detector region. The variational functional is theoretically more accurate than the direct functional, but it requires estimates of the solutions of the forward and adjoint source-detector problems over the entire phase-space of the problem. In recent work, we have performed Monte Carlo simulations using the variational functional by (a) approximating the adjoint solution deterministically and representing this solution as a function in phase-space and (b) estimating the forward solution using Monte Carlo. We have called this general procedure variational variance reduction (VVR). The VVR method is more computationally expensive per history than traditional Monte Carlo because extra information must be tallied and processed. However, the variational functional yields a more accurate estimate of the detector response. Our simulations have shown that the VVR reduction in variance usually outweighs the increase in cost, resulting in an increased FOM. In recent work on source-detector problems, we have calculated the adjoint solution deterministically and represented this solution as a linear-in-angle, histogram-in-space function. This procedure has several advantages over previous implementations: (a) it requires much less adjoint information to be stored and (b) it is highly efficient for diffusive problems, due to the accurate linear-in-angle representation of the adjoint solution. (Traditional variance-reduction methods perform poorly for diffusive problems.) Here, we extend this VVR method to Monte Carlo criticality calculations, which are often diffusive and difficult for traditional variance-reduction methods

  14. Non statistical Monte-Carlo

    International Nuclear Information System (INIS)

    Mercier, B.

    1985-04-01

    We have shown that the transport equation can be solved with particles, as in the Monte-Carlo method, but without random numbers. In the Monte-Carlo method, particles are created from the source and are followed from collision to collision until either they are absorbed or they leave the spatial domain. In our method, particles are created from the original source with a variable weight taking into account both collision and absorption. These particles are followed until they leave the spatial domain, and we use them to determine a first collision source. Another set of particles is then created from this first collision source, and tracked to determine a second collision source, and so on. This process introduces an approximation which does not exist in the Monte-Carlo method. However, we have analyzed the effect of this approximation and shown that it can be limited. Our method is deterministic and gives reproducible results. Furthermore, when extra accuracy is needed in some region, it is easier to get more particles to go there. It has the same kind of applications: problems where streaming is dominant rather than collision-dominated problems.

  15. The vector and parallel processing of MORSE code on Monte Carlo Machine

    International Nuclear Information System (INIS)

    Hasegawa, Yukihiro; Higuchi, Kenji.

    1995-11-01

    The multigroup Monte Carlo code for particle transport, MORSE, is modified for high performance computing on the Monte Carlo Machine Monte-4. The method and the results are described. Monte-4 was specially developed to realize high performance computing of Monte Carlo codes for particle transport, for which it has been difficult to obtain high performance in vector processing on conventional vector processors. Monte-4 has four vector processor units with special hardware called Monte Carlo pipelines. The vectorization and parallelization of the MORSE code and the performance evaluation on Monte-4 are described. (author)

  16. Mont Terri project, cyclic deformations in the Opalinus Clay

    International Nuclear Information System (INIS)

    Moeri, A.; Bossart, P.; Matray, J.M.; Mueller, H.; Frank, E.

    2010-01-01

    Document available in extended abstract form only. Shrinkage structures in the Opalinus Clay, related to seasonal changes in temperature and humidity, are observed on the tunnel walls of the Mont Terri Rock Laboratory. The structures open in winter, when relative humidity in the tunnel decreases to 65%. In summer the cracks close again because of the increase in the clay volume when higher humidity causes rock swelling. Shrinkage structures are monitored in the Mont Terri Rock Laboratory at two different sites within the undisturbed rock matrix and a major fault zone. The relative movements of the rock on both sides of the cracks are monitored in three directions and compared to the fluctuations in ambient relative humidity and temperature. The cyclic deformations (CD) experiment aims to quantify the variations in crack opening in relation to the evolution of climatic conditions and to identify the processes underlying these swell and shrinkage cycles. It consists of the following tasks: - Measuring and quantifying the long-term (now up to three yearly cycles) opening and closing and, if present, the associated shear displacements of selected shrinkage cracks along an undisturbed bedding plane as well as within a major fault zone ('Main Fault'). The measurements are accompanied by temperature and humidity records as well as by a long-term monitoring of tunnel convergence. - Analysing at the micro-scale the surfaces of the crack planes to identify potential relative movements, changes in the rock fabric on the crack surfaces and the formation of fault gouge material as observed in closed cracks. - Processing and analysing measured fluctuations of crack apertures and rock deformation in the time series as well as in the hydro-meteorological variables, in particular relative humidity Hr(t) and air temperature. - Studying and reconstructing the opening cycles on a drill-core sample under well-known laboratory conditions and observing potential propagation of

  17. Monte Carlo techniques for analyzing deep penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.; Gonnord, J.; Hendricks, J.S.

    1985-01-01

    A review of current methods and difficulties in Monte Carlo deep-penetration calculations is presented. Statistical uncertainty is discussed, and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing is reviewed. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multi-group Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications

  18. Monte Carlo techniques for analyzing deep penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.; Gonnord, J.; Hendricks, J.S.

    1985-01-01

    A review of current methods and difficulties in Monte Carlo deep-penetration calculations is presented. Statistical uncertainty is discussed, and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing is reviewed. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multi-group Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications. 29 refs

  19. Structural characterization of polysaccharides from Cabernet Franc, Cabernet Sauvignon and Sauvignon Blanc wines: Anti-inflammatory activity in LPS stimulated RAW 264.7 cells.

    Science.gov (United States)

    Bezerra, Iglesias de Lacerda; Caillot, Adriana Rute Cordeiro; Palhares, Lais Cristina Gusmão Ferreira; Santana-Filho, Arquimedes Paixão; Chavante, Suely Ferreira; Sassaki, Guilherme Lanzi

    2018-04-15

    The structural characterization of the polysaccharides and the in vitro anti-inflammatory properties of Cabernet Franc (WCF), Cabernet Sauvignon (WCS) and Sauvignon Blanc (WSB) wines were studied for the first time in this work. The polysaccharides of the wines gave rise to three fractions of polysaccharides, namely (WCF) 0.16%, (WCS) 0.05% and (WSB) 0.02%; the highest one (WCF) was chosen for isolation of the polysaccharides. The presence of mannan was identified, formed by a sequence of α-d-Manp (1 → 6)-linked units with side chains O-2 substituted by α-d-mannan (1 → 2)-linked; type II arabinogalactan, formed by a (1 → 3)-linked β-d-Galp main chain, substituted at O-6 by (1 → 6)-linked β-d-Galp side chains, and nonreducing end-units of arabinose 3-O-substituted; type I rhamnogalacturonan formed by repeating (1 → 4)-α-d-GalpA-(1 → 2)-α-L-Rhap groups; and traces of type II rhamnogalacturonan. The polysaccharide mixture and isolated fractions inhibited the production of inflammatory cytokines (TNF-α and IL-1β) and mediator (NO) in RAW 264.7 cells stimulated with LPS. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Biases in Monte Carlo eigenvalue calculations

    Energy Technology Data Exchange (ETDEWEB)

    Gelbard, E.M.

    1992-12-01

    The Monte Carlo method has been used for many years to analyze the neutronics of nuclear reactors. In fact, as the power of computers has increased the importance of Monte Carlo in neutronics has also increased, until today this method plays a central role in reactor analysis and design. Monte Carlo is used in neutronics for two somewhat different purposes, i.e., (a) to compute the distribution of neutrons in a given medium when the neutron source-density is specified, and (b) to compute the neutron distribution in a self-sustaining chain reaction, in which case the source is determined as the eigenvector of a certain linear operator. In (b), then, the source is not given, but must be computed. In the first case (the "fixed-source" case) the Monte Carlo calculation is unbiased. That is to say that, if the calculation is repeated ("replicated") over and over, with independent random number sequences for each replica, then averages over all replicas will approach the correct neutron distribution as the number of replicas goes to infinity. Unfortunately, the computation is not unbiased in the second case, which we discuss here.
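
    The mechanism behind the eigenvalue bias can be stripped down to the bias of a ratio estimate: normalizing a tally by a fluctuating finite population makes the estimator a ratio X/Y, and E[X/Y] differs from E[X]/E[Y] by a term of order 1/N. The sketch below demonstrates this generic effect with independent uniform variates; it illustrates the mechanism only and is not the paper's analysis.
```python
# X and Y are independent sums of N uniforms, so the unbiased value of the ratio is 1;
# the mean of X/Y over many replicas exceeds 1 by a term that shrinks like 1/N.
import random

def mean_ratio_estimate(batch_size, replicas=50_000, seed=0):
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(replicas):
        x = sum(rng.random() for _ in range(batch_size))   # "tally"
        y = sum(rng.random() for _ in range(batch_size))   # fluctuating normalization
        acc += x / y
    return acc / replicas

if __name__ == "__main__":
    for n in (5, 20, 100):
        print(f"batch size {n:>3}: mean ratio estimate {mean_ratio_estimate(n):.4f} (unbiased value 1.0)")
```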

  1. Biases in Monte Carlo eigenvalue calculations

    Energy Technology Data Exchange (ETDEWEB)

    Gelbard, E.M.

    1992-01-01

    The Monte Carlo method has been used for many years to analyze the neutronics of nuclear reactors. In fact, as the power of computers has increased the importance of Monte Carlo in neutronics has also increased, until today this method plays a central role in reactor analysis and design. Monte Carlo is used in neutronics for two somewhat different purposes, i.e., (a) to compute the distribution of neutrons in a given medium when the neutron source-density is specified, and (b) to compute the neutron distribution in a self-sustaining chain reaction, in which case the source is determined as the eigenvector of a certain linear operator. In (b), then, the source is not given, but must be computed. In the first case (the "fixed-source" case) the Monte Carlo calculation is unbiased. That is to say that, if the calculation is repeated ("replicated") over and over, with independent random number sequences for each replica, then averages over all replicas will approach the correct neutron distribution as the number of replicas goes to infinity. Unfortunately, the computation is not unbiased in the second case, which we discuss here.

  2. Biases in Monte Carlo eigenvalue calculations

    International Nuclear Information System (INIS)

    Gelbard, E.M.

    1992-01-01

    The Monte Carlo method has been used for many years to analyze the neutronics of nuclear reactors. In fact, as the power of computers has increased the importance of Monte Carlo in neutronics has also increased, until today this method plays a central role in reactor analysis and design. Monte Carlo is used in neutronics for two somewhat different purposes, i.e., (a) to compute the distribution of neutrons in a given medium when the neutron source-density is specified, and (b) to compute the neutron distribution in a self-sustaining chain reaction, in which case the source is determined as the eigenvector of a certain linear operator. In (b), then, the source is not given, but must be computed. In the first case (the ''fixed-source'' case) the Monte Carlo calculation is unbiased. That is to say that, if the calculation is repeated (''replicated'') over and over, with independent random number sequences for each replica, then averages over all replicas will approach the correct neutron distribution as the number of replicas goes to infinity. Unfortunately, the computation is not unbiased in the second case, which we discuss here

  3. Determination of factors through Monte Carlo method for Fricke dosimetry from 192Ir sources for brachytherapy

    International Nuclear Information System (INIS)

    David, Mariano Gazineu; Salata, Camila; Almeida, Carlos Eduardo

    2014-01-01

    The Laboratorio de Ciencias Radiologicas develops a methodology for the determination of the absorbed dose to water by the Fricke chemical dosimetry method for 192Ir high-dose-rate brachytherapy sources and has compared its results with those of the laboratory of the National Research Council Canada. This paper describes the determination of the correction factors by the Monte Carlo method with the Penelope code. Values for all factors are presented, with a maximum difference of 0.22% with respect to their determination by an alternative method. (author)

  4. Importance iteration in MORSE Monte Carlo calculations

    International Nuclear Information System (INIS)

    Kloosterman, J.L.; Hoogenboom, J.E.

    1994-01-01

    An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example that shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation

  5. Importance iteration in MORSE Monte Carlo calculations

    International Nuclear Information System (INIS)

    Kloosterman, J.L.; Hoogenboom, J.E.

    1994-02-01

    An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example, which shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation. (orig.)
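
    The abstract does not give the MORSE-SGC/S biasing formulas themselves; as a loose illustration only, the sketch below turns a point value (importance) into a splitting/Russian-roulette decision of the weight-window type, which is one common way such biasing parameters are applied. The window bounds, target value and survival weight are arbitrary assumptions.

      import numpy as np

      rng = np.random.default_rng(0)

      def apply_weight_window(weight, importance, target=1.0, survival=0.5):
          """Split or roulette a particle so that weight * importance stays near target.

          High-importance particles are split, low-importance ones are rouletted;
          the expected weight is preserved in both branches, so the game is unbiased.
          """
          w_center = target / importance            # window center for this region/energy
          if weight > 2.0 * w_center:               # too heavy: split
              n = int(weight / w_center)
              return [weight / n] * n
          if weight < 0.5 * w_center:               # too light: Russian roulette
              if rng.random() < weight / (survival * w_center):
                  return [survival * w_center]
              return []                             # particle killed
          return [weight]                           # inside the window: unchanged

      # A unit-weight particle entering a region whose point value is low.
      print(apply_weight_window(1.0, importance=0.05))   # -> [10.0] (survives) or [] (killed)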

  6. Neutron background measurements in the underground laboratory of Modane

    International Nuclear Information System (INIS)

    Chazal, V.; Chambon, B.; De Jesus, M.; Drain, D.; Pastor, C.; Vagneron, L.; Brissot, R.; Cavaignac, J.F.; Stutz, A.; Giraud-Heraud, Y.

    1997-07-01

Measurements of the background neutron environment, at a depth of 1780 m (4800 mWe) in the Underground Laboratory of Modane (L.S.M.), are reported. Using a 6 Li liquid scintillator, the energy spectrum of the fast neutron flux has been determined. Monte-Carlo calculations of the (α,n) and spontaneous fission processes in the surrounding rock have been performed and compared to the experimental result. In addition, using two 3 He neutron counters, the thermal neutron flux has been measured. (author)

  7. Monte Carlo simulation of expert judgments on human errors in chemical analysis--a case study of ICP-MS.

    Science.gov (United States)

    Kuselman, Ilya; Pennecchi, Francesca; Epstein, Malka; Fajgelj, Ales; Ellison, Stephen L R

    2014-12-01

    Monte Carlo simulation of expert judgments on human errors in a chemical analysis was used for determination of distributions of the error quantification scores (scores of likelihood and severity, and scores of effectiveness of a laboratory quality system in prevention of the errors). The simulation was based on modeling of an expert behavior: confident, reasonably doubting and irresolute expert judgments were taken into account by means of different probability mass functions (pmfs). As a case study, 36 scenarios of human errors which may occur in elemental analysis of geological samples by ICP-MS were examined. Characteristics of the score distributions for three pmfs of an expert behavior were compared. Variability of the scores, as standard deviation of the simulated score values from the distribution mean, was used for assessment of the score robustness. A range of the score values, calculated directly from elicited data and simulated by a Monte Carlo method for different pmfs, was also discussed from the robustness point of view. It was shown that robustness of the scores, obtained in the case study, can be assessed as satisfactory for the quality risk management and improvement of a laboratory quality system against human errors. Copyright © 2014 Elsevier B.V. All rights reserved.
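
    A minimal sketch of the kind of simulation described, with made-up probability mass functions for the three expert behaviours on a 1-5 scoring scale; none of the numbers come from the paper. The standard deviation of the simulated scores is the variability used to judge robustness.

      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical pmfs over severity scores 1..5 for three styles of expert behaviour.
      scores = np.arange(1, 6)
      pmfs = {
          "confident":  [0.00, 0.05, 0.10, 0.70, 0.15],   # sharply peaked on the elicited score
          "doubting":   [0.05, 0.15, 0.20, 0.45, 0.15],
          "irresolute": [0.15, 0.20, 0.25, 0.25, 0.15],   # nearly flat
      }

      n_sim = 100_000
      for behaviour, pmf in pmfs.items():
          sample = rng.choice(scores, size=n_sim, p=pmf)
          # Variability (standard deviation about the mean) is the robustness measure.
          print(f"{behaviour:10s} mean={sample.mean():.2f}  std={sample.std():.2f}")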

  8. Advanced Computational Methods for Monte Carlo Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-01-12

    This course is intended for graduate students who already have a basic understanding of Monte Carlo methods. It focuses on advanced topics that may be needed for thesis research, for developing new state-of-the-art methods, or for working with modern production Monte Carlo codes.

  9. Prospect on general software of Monte Carlo method

    International Nuclear Information System (INIS)

    Pei Lucheng

    1992-01-01

This is a short paper on the prospects of general Monte Carlo software. The content covers the cluster sampling method, the zero variance technique, the self-improved method, and the vectorized Monte Carlo method.

  10. Strategije drevesnega preiskovanja Monte Carlo

    OpenAIRE

    VODOPIVEC, TOM

    2018-01-01

After the breakthrough at the game of Go, Monte Carlo tree search (MCTS) methods triggered rapid progress in game-playing agents: the research community has since developed many variants and improvements of the MCTS algorithm, thereby advancing artificial intelligence not only in games but also in numerous other domains. Although MCTS methods combine the generality of random sampling with the precision of tree search, in practice they can suffer from slow conv...
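
    For readers unfamiliar with MCTS, the selection step that such work builds on is usually the UCB1 rule; a minimal sketch follows, in which the exploration constant and the child statistics are arbitrary examples rather than values from the thesis.

      import math

      def uct_select(node_children, exploration=1.41):
          """UCB1 selection step of Monte Carlo tree search (MCTS).

          Each child carries (visits, total_reward); the parent visit count is the
          sum of the children's visits. Unvisited children are expanded first.
          """
          parent_visits = sum(visits for visits, _ in node_children.values())
          def ucb1(child):
              visits, total = node_children[child]
              if visits == 0:
                  return float("inf")               # force at least one visit
              return total / visits + exploration * math.sqrt(
                  math.log(parent_visits) / visits)
          return max(node_children, key=ucb1)

      # Example: three candidate moves with (visits, total reward) statistics.
      children = {"a": (10, 6.0), "b": (3, 2.5), "c": (0, 0.0)}
      print(uct_select(children))   # -> "c" (the unvisited child is explored first)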

  11. Monte Carlo electron/photon transport

    International Nuclear Information System (INIS)

    Mack, J.M.; Morel, J.E.; Hughes, H.G.

    1985-01-01

A review of nonplasma coupled electron/photon transport using the Monte Carlo method is presented. Remarks are mainly restricted to linearized formalisms at electron energies from 1 keV to 1000 MeV. Applications involving pulse-height estimation, transport in external magnetic fields, and optical Cerenkov production are discussed to underscore the importance of this branch of computational physics. Advances in electron multigroup cross-section generation are reported, and their impact on future code development is assessed. Progress toward the transformation of MCNP into a generalized neutral/charged-particle Monte Carlo code is described. 48 refs

  12. Bayesian Optimal Experimental Design Using Multilevel Monte Carlo

    KAUST Repository

    Ben Issaid, Chaouki; Long, Quan; Scavino, Marco; Tempone, Raul

    2015-01-01

Experimental design is very important since experiments are often resource-exhaustive and time-consuming. We carry out experimental design in the Bayesian framework. To measure the amount of information that can be extracted from the data in an experiment, we use the expected information gain as the utility function, which specifically is the expected logarithmic ratio between the posterior and prior distributions. Optimizing this utility function enables us to design experiments that yield the most informative data for our purpose. One of the major difficulties in evaluating the expected information gain is that the integral is nested and can be high dimensional. We propose using Multilevel Monte Carlo techniques to accelerate the computation of the nested high dimensional integral. The advantages are twofold. First, the Multilevel Monte Carlo method can significantly reduce the cost of the nested integral for a given tolerance, by using an optimal sample distribution among different sample averages of the inner integrals. Second, the Multilevel Monte Carlo method imposes fewer assumptions, such as the concentration of measure required by the Laplace method. We test our Multilevel Monte Carlo technique using a numerical example on the design of sensor deployment for a Darcy flow problem governed by a one-dimensional Laplace equation. We also compare the performance of the Multilevel Monte Carlo, Laplace approximation and direct double loop Monte Carlo.

  13. Bayesian Optimal Experimental Design Using Multilevel Monte Carlo

    KAUST Repository

    Ben Issaid, Chaouki

    2015-01-07

Experimental design is very important since experiments are often resource-exhaustive and time-consuming. We carry out experimental design in the Bayesian framework. To measure the amount of information that can be extracted from the data in an experiment, we use the expected information gain as the utility function, which specifically is the expected logarithmic ratio between the posterior and prior distributions. Optimizing this utility function enables us to design experiments that yield the most informative data for our purpose. One of the major difficulties in evaluating the expected information gain is that the integral is nested and can be high dimensional. We propose using Multilevel Monte Carlo techniques to accelerate the computation of the nested high dimensional integral. The advantages are twofold. First, the Multilevel Monte Carlo method can significantly reduce the cost of the nested integral for a given tolerance, by using an optimal sample distribution among different sample averages of the inner integrals. Second, the Multilevel Monte Carlo method imposes fewer assumptions, such as the concentration of measure required by the Laplace method. We test our Multilevel Monte Carlo technique using a numerical example on the design of sensor deployment for a Darcy flow problem governed by a one-dimensional Laplace equation. We also compare the performance of the Multilevel Monte Carlo, Laplace approximation and direct double loop Monte Carlo.
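
    The expected-information-gain estimator itself is problem specific, so the sketch below only illustrates the multilevel idea on a toy nested expectation, I = E_X[(E_Y[X+Y])^2] with X, Y standard normal, where the inner Monte Carlo average plays the role of the inner integral; the sample sizes and level count are arbitrary assumptions.

      import numpy as np

      rng = np.random.default_rng(2)

      # Exact value of the toy nested expectation is E[X^2] = 1; an inner average
      # over m samples adds a bias of order 1/m, mimicking the nested integral.
      def mlmc(levels=6, n0=40_000):
          estimate = 0.0
          for level in range(levels + 1):
              m_fine, m_coarse = 2 ** level, 2 ** max(level - 1, 0)
              n_outer = max(n0 // 2 ** level, 200)       # crude geometric sample allocation
              diffs = np.empty(n_outer)
              for i in range(n_outer):
                  x = rng.standard_normal()
                  y = rng.standard_normal(m_fine)
                  fine = (x + y.mean()) ** 2
                  # The coarse estimator reuses the first half of the inner samples,
                  # which couples the levels and keeps the variance of the difference small.
                  coarse = (x + y[:m_coarse].mean()) ** 2 if level > 0 else 0.0
                  diffs[i] = fine - coarse
              estimate += diffs.mean()
          return estimate

      print("exact nested expectation: 1.0")
      print("MLMC estimate           :", round(mlmc(), 3))

    The telescoping sum reproduces the finest-level estimator at a fraction of its cost; only the residual inner-average bias of the finest level remains.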

  14. Le Comte de Monte Cristo: da literatura ao cinema

    OpenAIRE

    Caravela, Natércia Murta Silva

    2008-01-01

This dissertation discusses the dialogue established between literature and cinema in the treatment of the main character, a betrayed man who takes cruel revenge on his enemies, in the literary work Le Comte de Monte-Cristo by Alexandre Dumas and in three selected film adaptations: Le Comte de Monte-Cristo by Robert Vernay (1943), The Count of Monte Cristo by David Greene (1975) and The Count of Monte Cristo by Kevin Reynolds (2002). The project focuses on the analysis of the ...

  15. Charged-particle thermonuclear reaction rates: I. Monte Carlo method and statistical distributions

    International Nuclear Information System (INIS)

    Longland, R.; Iliadis, C.; Champagne, A.E.; Newton, J.R.; Ugalde, C.; Coc, A.; Fitzgerald, R.

    2010-01-01

    A method based on Monte Carlo techniques is presented for evaluating thermonuclear reaction rates. We begin by reviewing commonly applied procedures and point out that reaction rates that have been reported up to now in the literature have no rigorous statistical meaning. Subsequently, we associate each nuclear physics quantity entering in the calculation of reaction rates with a specific probability density function, including Gaussian, lognormal and chi-squared distributions. Based on these probability density functions the total reaction rate is randomly sampled many times until the required statistical precision is achieved. This procedure results in a median (Monte Carlo) rate which agrees under certain conditions with the commonly reported recommended 'classical' rate. In addition, we present at each temperature a low rate and a high rate, corresponding to the 0.16 and 0.84 quantiles of the cumulative reaction rate distribution. These quantities are in general different from the statistically meaningless 'minimum' (or 'lower limit') and 'maximum' (or 'upper limit') reaction rates which are commonly reported. Furthermore, we approximate the output reaction rate probability density function by a lognormal distribution and present, at each temperature, the lognormal parameters μ and σ. The values of these quantities will be crucial for future Monte Carlo nucleosynthesis studies. Our new reaction rates, appropriate for bare nuclei in the laboratory, are tabulated in the second paper of this issue (Paper II). The nuclear physics input used to derive our reaction rates is presented in the third paper of this issue (Paper III). In the fourth paper of this issue (Paper IV) we compare our new reaction rates to previous results.
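
    The sketch below mimics the sampling scheme on a purely hypothetical rate with one lognormally distributed resonance strength and one Gaussian S-factor term; the formulas and numbers are placeholders, not the evaluated inputs of Papers II-IV. It reports the 0.16/0.50/0.84 quantiles and the lognormal parameters mu and sigma of the sampled rate.

      import numpy as np

      rng = np.random.default_rng(3)

      # Hypothetical inputs with the distributions named in the abstract.
      n = 100_000
      omega_gamma = rng.lognormal(mean=np.log(1.0e-3), sigma=0.3, size=n)   # resonance strength
      s_factor    = rng.normal(loc=5.0, scale=0.5, size=n)                  # non-resonant term

      # Toy combination of the sampled inputs into a total rate (stand-in for the
      # actual resonant + non-resonant rate formulas).
      rate = 1.0e4 * omega_gamma + 2.0e-2 * np.clip(s_factor, 0.0, None)

      low, median, high = np.quantile(rate, [0.16, 0.50, 0.84])
      mu, sigma = np.log(rate).mean(), np.log(rate).std()    # lognormal approximation

      print(f"low/median/high rate: {low:.3f} / {median:.3f} / {high:.3f}")
      print(f"lognormal parameters: mu={mu:.3f}, sigma={sigma:.3f}")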

  16. Present status of transport code development based on Monte Carlo method

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki

    1985-01-01

The present status of Monte Carlo code development is briefly reviewed. The main items are the following: application fields; methods used in Monte Carlo codes (geometry specification, nuclear data, estimators and variance reduction techniques) and unfinished work; typical Monte Carlo codes; and the merits of continuous-energy Monte Carlo codes. (author)

  17. Quantum Monte Carlo Studies of Bulk and Few- or Single-Layer Black Phosphorus

    Science.gov (United States)

    Shulenburger, Luke; Baczewski, Andrew; Zhu, Zhen; Guan, Jie; Tomanek, David

    2015-03-01

    The electronic and optical properties of phosphorus depend strongly on the structural properties of the material. Given the limited experimental information on the structure of phosphorene, it is natural to turn to electronic structure calculations to provide this information. Unfortunately, given phosphorus' propensity to form layered structures bound by van der Waals interactions, standard density functional theory methods provide results of uncertain accuracy. Recently, it has been demonstrated that Quantum Monte Carlo (QMC) methods achieve high accuracy when applied to solids in which van der Waals forces play a significant role. In this talk, we will present QMC results from our recent calculations on black phosphorus, focusing on the structural and energetic properties of monolayers, bilayers and bulk structures. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.

  18. Deep fracturation of granitic rock mass. Fracturation profonde des massifs rocheux granitiques

    Energy Technology Data Exchange (ETDEWEB)

    Bles, J L; Blanchin, R; Bonijoly, D; Dutartre, P; Feybesse, J L; Gros, Y; Landry, J; Martin, P

    1986-01-01

This documentary study, carried out with the financial support of the European Communities and the CEA, aims at using available data to understand the evolution of natural fractures in granitic rocks from the surface down to deep underground, in connection with various feasibility studies dealing with radioactive waste disposal. The Mont Blanc road tunnel, the EDF Arc-Isere gallery, the Auriat deep borehole and the Pyrenean rock mass of Bassies are studied. The study analyses in particular the relationship between small fractures and large faults, the evolution of fracture density and direction with depth, the consequences of rock decompression, and the relationship between fracturation and groundwater.

  19. Deep fracturation of granitic rock mass

    International Nuclear Information System (INIS)

    Bles, J.L.; Blanchin, R.; Bonijoly, D.; Dutartre, P.; Feybesse, J.L.; Gros, Y.; Landry, J.; Martin, P.

    1986-01-01

This documentary study, carried out with the financial support of the European Communities and the CEA, aims at using available data to understand the evolution of natural fractures in granitic rocks from the surface down to deep underground, in connection with various feasibility studies dealing with radioactive waste disposal. The Mont Blanc road tunnel, the EDF Arc-Isere gallery, the Auriat deep borehole and the Pyrenean rock mass of Bassies are studied. The study analyses in particular the relationship between small fractures and large faults, the evolution of fracture density and direction with depth, the consequences of rock decompression, and the relationship between fracturation and groundwater.

  20. Successful vectorization - reactor physics Monte Carlo code

    International Nuclear Information System (INIS)

    Martin, W.R.

    1989-01-01

Most particle transport Monte Carlo codes in use today are based on the "history-based" algorithm, wherein one particle history at a time is simulated. Unfortunately, the "history-based" approach (present in all Monte Carlo codes until recent years) is inherently scalar and cannot be vectorized. In particular, the history-based algorithm cannot take advantage of vector architectures, which characterize the largest and fastest computers at the current time, vector supercomputers such as the Cray X/MP or IBM 3090/600. However, substantial progress has been made in recent years in developing and implementing a vectorized Monte Carlo algorithm. This algorithm follows portions of many particle histories at the same time and forms the basis for all successful vectorized Monte Carlo codes that are in use today. This paper describes the basic vectorized algorithm along with descriptions of several variations that have been developed by different researchers for specific applications. These applications have been mainly in the areas of neutron transport in nuclear reactor and shielding analysis and photon transport in fusion plasmas. The relative merits of the various approach schemes will be discussed and the present status of known vectorization efforts will be summarized along with available timing results, including results from the successful vectorization of 3-D general geometry, continuous energy Monte Carlo. (orig.)
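
    As a small illustration of the difference between the two organisations of the random walk, the sketch below implements the event-based (vectorizable) form for an analogue one-dimensional slab problem using array operations; the slab data are arbitrary and NumPy stands in for a true vector supercomputer kernel.

      import numpy as np

      rng = np.random.default_rng(4)

      def event_based_slab(n=1_000_000, sigma_t=1.0, c=0.6, thickness=5.0):
          """Event-based analogue Monte Carlo in a 1-D slab.

          Instead of following one history at a time, every iteration processes one
          'event' (free flight + collision) for the whole bank of surviving particles
          at once, which is what makes the algorithm vectorizable.
          """
          x = np.zeros(n)                       # positions
          mu = np.ones(n)                       # direction cosines (beam entering at x=0)
          alive = np.ones(n, dtype=bool)
          transmitted = 0
          while alive.any():
              idx = np.flatnonzero(alive)
              # Free flight: sample path lengths for all live particles at once.
              x[idx] += mu[idx] * rng.exponential(1.0 / sigma_t, size=idx.size)
              out = (x[idx] < 0.0) | (x[idx] > thickness)
              transmitted += np.count_nonzero(x[idx] > thickness)
              alive[idx[out]] = False
              # Collision: survive scattering with probability c, isotropic re-emission.
              idx = np.flatnonzero(alive)
              absorbed = rng.random(idx.size) >= c
              alive[idx[absorbed]] = False
              idx = np.flatnonzero(alive)
              mu[idx] = rng.uniform(-1.0, 1.0, size=idx.size)
          return transmitted / n

      print("transmission estimate:", event_based_slab())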

  1. Bayesian phylogeny analysis via stochastic approximation Monte Carlo

    KAUST Repository

    Cheon, Sooyoung; Liang, Faming

    2009-01-01

    in simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo algorithm, to Bayesian phylogeny analysis. Our method

  2. Reflections on early Monte Carlo calculations

    International Nuclear Information System (INIS)

    Spanier, J.

    1992-01-01

Monte Carlo methods for solving various particle transport problems developed in parallel with the evolution of increasingly sophisticated computer programs implementing diffusion theory and low-order moments calculations. In these early years, Monte Carlo calculations and high-order approximations to the transport equation were seen as too expensive to use routinely for nuclear design but served as invaluable aids and supplements to design with less expensive tools. The earliest Monte Carlo programs were quite literal; i.e., neutron and other particle random walk histories were simulated by sampling from the probability laws inherent in the physical system without distortion. Use of such analogue sampling schemes resulted in a good deal of time being spent in examining the possibility of lowering the statistical uncertainties in the sample estimates by replacing simple, and intuitively obvious, random variables by those with identical means but lower variances.

  3. Hydro-mechanical analysis of results acquired by video-observations and deformation measurements performed in boreholes in the Opalinus clay of the URL Mont Terri supported by laboratory investigations on the hydro-mechanical behaviour of Opalinus clay

    International Nuclear Information System (INIS)

    Seeska, R.; Rutenberg, M.; Lux, K.H.

    2012-01-01

Document available in extended abstract form only. Seven different boreholes in the Opalinus Clay formation of the Mont Terri Underground Rock Laboratory (URL Mont Terri) have been investigated by the Clausthal University of Technology (TUC) in cooperation, at different times, with different partners, namely the National Cooperative for the Disposal of Radioactive Waste (NAGRA), the Federal Institute for Geosciences and Natural Resources (BGR) as well as the Swiss Federal Institute of Technology Zurich (ETHZ) and the Swiss Federal Nuclear Safety Inspectorate (ENSI). The aim of the investigations was to gain a large amount of high-quality, significant information on rock mass behaviour that can be used to increase knowledge about, and improve understanding of, the time-dependent load-bearing and deformation behaviour of Opalinus Clay, including pore water influences. For this purpose, an axial borehole camera and a three-arm calliper have been used. High-quality information on the load-bearing and deformation behaviour of the investigated boreholes was generated by the measurement and monitoring techniques used in the research project. The recordings reveal considerable, and occasionally unexpected, differences in the load-bearing behaviour as well as in the hydro-mechanical behaviour of the observed boreholes. While most of the boreholes have proved to be rather stable with only partial failure of the borehole wall in some areas, a complete borehole wall collapse occurred in two of the observed boreholes. The differences in borehole wall stability, and in the appearance of the failure mechanisms that occur, are very likely due to the different orientations, the different locations within the URL Mont Terri, and the different facies in which the boreholes are located. Figure 1 shows the time-dependent development of a borehole wall instability in one of the observed boreholes in a borehole section where an increase of moisture could

  4. Reconstruction of Monte Carlo replicas from Hessian parton distributions

    Energy Technology Data Exchange (ETDEWEB)

    Hou, Tie-Jiun [Department of Physics, Southern Methodist University,Dallas, TX 75275-0181 (United States); Gao, Jun [INPAC, Shanghai Key Laboratory for Particle Physics and Cosmology,Department of Physics and Astronomy, Shanghai Jiao-Tong University, Shanghai 200240 (China); High Energy Physics Division, Argonne National Laboratory,Argonne, Illinois, 60439 (United States); Huston, Joey [Department of Physics and Astronomy, Michigan State University,East Lansing, MI 48824 (United States); Nadolsky, Pavel [Department of Physics, Southern Methodist University,Dallas, TX 75275-0181 (United States); Schmidt, Carl; Stump, Daniel [Department of Physics and Astronomy, Michigan State University,East Lansing, MI 48824 (United States); Wang, Bo-Ting; Xie, Ke Ping [Department of Physics, Southern Methodist University,Dallas, TX 75275-0181 (United States); Dulat, Sayipjamal [Department of Physics and Astronomy, Michigan State University,East Lansing, MI 48824 (United States); School of Physics Science and Technology, Xinjiang University,Urumqi, Xinjiang 830046 (China); Center for Theoretical Physics, Xinjiang University,Urumqi, Xinjiang 830046 (China); Pumplin, Jon; Yuan, C.P. [Department of Physics and Astronomy, Michigan State University,East Lansing, MI 48824 (United States)

    2017-03-20

    We explore connections between two common methods for quantifying the uncertainty in parton distribution functions (PDFs), based on the Hessian error matrix and Monte-Carlo sampling. CT14 parton distributions in the Hessian representation are converted into Monte-Carlo replicas by a numerical method that reproduces important properties of CT14 Hessian PDFs: the asymmetry of CT14 uncertainties and positivity of individual parton distributions. The ensembles of CT14 Monte-Carlo replicas constructed this way at NNLO and NLO are suitable for various collider applications, such as cross section reweighting. Master formulas for computation of asymmetric standard deviations in the Monte-Carlo representation are derived. A correction is proposed to address a bias in asymmetric uncertainties introduced by the Taylor series approximation. A numerical program is made available for conversion of Hessian PDFs into Monte-Carlo replicas according to normal, log-normal, and Watt-Thorne sampling procedures.
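
    A back-of-the-envelope version of the conversion, using the simple symmetric normal-sampling recipe f_k = f_0 + (1/2) * sum_j (f_j^+ - f_j^-) * r_jk with r_jk ~ N(0,1); the paper's procedures additionally preserve the asymmetry and positivity of the CT14 uncertainties, which this sketch does not. The grid values below are fabricated for illustration.

      import numpy as np

      rng = np.random.default_rng(5)

      def hessian_to_replicas(f0, f_plus, f_minus, n_rep=1000):
          """Generate Monte Carlo replicas from Hessian plus/minus eigenvector sets.

          f0      : central PDF values on some (x, Q) grid, shape (n_grid,)
          f_plus  : plus-direction eigenvector sets,  shape (n_eig, n_grid)
          f_minus : minus-direction eigenvector sets, shape (n_eig, n_grid)
          """
          n_eig = f_plus.shape[0]
          r = rng.standard_normal((n_rep, n_eig))
          return f0 + 0.5 * r @ (f_plus - f_minus)

      # Tiny fake "PDF grid" with two eigenvector directions, for illustration only.
      f0      = np.array([1.00, 0.50, 0.10])
      f_plus  = np.array([[1.05, 0.52, 0.11], [1.02, 0.55, 0.10]])
      f_minus = np.array([[0.96, 0.48, 0.09], [0.99, 0.46, 0.10]])
      replicas = hessian_to_replicas(f0, f_plus, f_minus)
      print(replicas.mean(axis=0), replicas.std(axis=0))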

  5. Sampling from a polytope and hard-disk Monte Carlo

    International Nuclear Information System (INIS)

    Kapfer, Sebastian C; Krauth, Werner

    2013-01-01

    The hard-disk problem, the statics and the dynamics of equal two-dimensional hard spheres in a periodic box, has had a profound influence on statistical and computational physics. Markov-chain Monte Carlo and molecular dynamics were first discussed for this model. Here we reformulate hard-disk Monte Carlo algorithms in terms of another classic problem, namely the sampling from a polytope. Local Markov-chain Monte Carlo, as proposed by Metropolis et al. in 1953, appears as a sequence of random walks in high-dimensional polytopes, while the moves of the more powerful event-chain algorithm correspond to molecular dynamics evolution. We determine the convergence properties of Monte Carlo methods in a special invariant polytope associated with hard-disk configurations, and the implications for convergence of hard-disk sampling. Finally, we discuss parallelization strategies for event-chain Monte Carlo and present results for a multicore implementation
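
    The local Markov-chain Monte Carlo move referred to (Metropolis et al., 1953) is simple to state; here is a minimal sketch for a dilute periodic configuration, with an arbitrary displacement size and packing chosen only for illustration.

      import numpy as np

      rng = np.random.default_rng(6)

      def local_metropolis_step(pos, box, radius, delta=0.1):
          """One local Monte Carlo move for hard disks in a periodic box.

          Pick a disk, propose a small random displacement and accept it only if the
          new position creates no overlap; otherwise the old configuration is kept.
          """
          n = len(pos)
          i = rng.integers(n)
          trial = (pos[i] + rng.uniform(-delta, delta, size=2)) % box
          d = pos - trial
          d -= box * np.round(d / box)              # minimum-image separations
          dist2 = np.einsum("ij,ij->i", d, d)
          dist2[i] = np.inf                         # ignore the moved disk itself
          if dist2.min() >= (2.0 * radius) ** 2:    # no overlap: accept
              pos[i] = trial
          return pos

      # Dilute example: 16 disks on a 4x4 grid in a periodic box of side 8.
      box, radius = 8.0, 0.4
      pos = np.array([[1.0 + 2 * i, 1.0 + 2 * j] for i in range(4) for j in range(4)])
      for _ in range(10_000):
          pos = local_metropolis_step(pos, box, radius)
      print(pos[:3])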

  6. Problems in radiation shielding calculations with Monte Carlo methods

    International Nuclear Information System (INIS)

    Ueki, Kohtaro

    1985-01-01

The Monte Carlo method is a very useful tool for solving a large class of radiation transport problems. In contrast with deterministic methods, geometric complexity is a much less significant problem for Monte Carlo calculations. However, the accuracy of Monte Carlo calculations is, of course, limited by the statistical error of the quantities to be estimated. In this report, we point out some typical problems in solving a large shielding system that includes radiation streaming. The Monte Carlo coupling technique was developed to handle such shielding problems accurately. However, for detectors located outside the radiation streaming, the variance of the Monte Carlo results obtained with the coupling technique was still not small enough. To obtain more accurate results for detectors located outside the streaming, and also for a multi-legged-duct streaming problem, a practicable "Prism Scattering" technique is proposed in this study. (author)

  7. Cluster monte carlo method for nuclear criticality safety calculation

    International Nuclear Information System (INIS)

    Pei Lucheng

    1984-01-01

One of the most important applications of the Monte Carlo method is the calculation of nuclear criticality safety. The fair source game problem was posed at almost the same time as the Monte Carlo method was first applied to nuclear criticality safety calculations: the source iteration cost should be reduced as much as possible, or no source iteration should be needed at all. Such problems all belong to the class of fair source game problems, among which the optimal source game requires no source iteration. Although the single-neutron Monte Carlo method solves the problem without source iteration, it still has an apparent shortcoming, namely that it does so only in the asymptotic sense. In this work, a new Monte Carlo method, called the cluster Monte Carlo method, is given to solve the problem further.

  8. Wielandt acceleration for MCNP5 Monte Carlo eigenvalue calculations

    International Nuclear Information System (INIS)

    Brown, F.

    2007-01-01

Monte Carlo criticality calculations use the power iteration method to determine the eigenvalue (k_eff) and eigenfunction (fission source distribution) of the fundamental mode. A recently proposed method for accelerating convergence of the Monte Carlo power iteration using Wielandt's method has been implemented in a test version of MCNP5. The method is shown to provide dramatic improvements in convergence rates and to greatly reduce the possibility of false convergence assessment. The method is effective and efficient, improving the Monte Carlo figure-of-merit for many problems. In addition, the method should eliminate most of the underprediction bias in confidence intervals for Monte Carlo criticality calculations. (authors)
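
    A deterministic matrix analogue of the shift may help fix ideas: part of the fission source, F/k_e, is moved to the left-hand side so that the dominance ratio of the iteration operator shrinks. The two-group-like matrices below are invented for illustration, and the Monte Carlo realization in MCNP5 (sampling the extra fission neutrons within the current generation) is of course different from this linear-algebra skeleton.

      import numpy as np

      def wielandt_k(M, F, k_e, n_iter=30):
          """Power iteration for the k-eigenvalue problem M*phi = (1/k)*F*phi with a
          Wielandt shift: solve (M - F/k_e)*phi_new = (1/k_old - 1/k_e)*F*phi_old.
          The shift parameter k_e must be kept above the true k_eff."""
          A = M - F / k_e
          phi, k = np.ones(M.shape[0]), 1.0
          for _ in range(n_iter):
              src_old = F @ phi
              phi = np.linalg.solve(A, (1.0 / k - 1.0 / k_e) * src_old)
              shift = (1.0 / k - 1.0 / k_e) * src_old.sum() / (F @ phi).sum()
              k = 1.0 / (shift + 1.0 / k_e)          # recover k from the shifted eigenvalue
          return k

      # Invented two-region loss and fission operators (illustration only).
      M = np.array([[1.0, -0.2],
                    [-0.3, 1.1]])
      F = np.array([[0.6, 0.5],
                    [0.3, 0.4]])
      print("reference k_eff:", np.linalg.eigvals(np.linalg.solve(M, F)).real.max())
      print("Wielandt k_eff :", wielandt_k(M, F, k_e=1.5))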

  9. Monte Carlo shielding analyses using an automated biasing procedure

    International Nuclear Information System (INIS)

    Tang, J.S.; Hoffman, T.J.

    1988-01-01

    A systematic and automated approach for biasing Monte Carlo shielding calculations is described. In particular, adjoint fluxes from a one-dimensional discrete ordinates calculation are used to generate biasing parameters for a Monte Carlo calculation. The entire procedure of adjoint calculation, biasing parameters generation, and Monte Carlo calculation has been automated. The automated biasing procedure has been applied to several realistic deep-penetration shipping cask problems. The results obtained for neutron and gamma-ray transport indicate that with the automated biasing procedure Monte Carlo shielding calculations of spent-fuel casks can be easily performed with minimum effort and that accurate results can be obtained at reasonable computing cost

  10. Applications of the Monte Carlo method in radiation protection

    International Nuclear Information System (INIS)

    Kulkarni, R.N.; Prasad, M.A.

    1999-01-01

This paper gives a brief introduction to the application of the Monte Carlo method in radiation protection; an exhaustive review has not been attempted. The special advantages of the Monte Carlo method are first brought out. The fundamentals of the method are then explained briefly, with special reference to two applications in radiation protection. Some current applications are briefly reported at the end as examples: medical radiation physics, microdosimetry, calculation of thermoluminescence intensity and probabilistic safety analysis. The limitations of the Monte Carlo method are also mentioned in passing. (author)

  11. Pore-scale uncertainty quantification with multilevel Monte Carlo

    KAUST Repository

    Icardi, Matteo; Hoel, Haakon; Long, Quan; Tempone, Raul

    2014-01-01

    . Since there are no generic ways to parametrize the randomness in the porescale structures, Monte Carlo techniques are the most accessible to compute statistics. We propose a multilevel Monte Carlo (MLMC) technique to reduce the computational cost

  12. Criticality benchmarks for COG: A new point-wise Monte Carlo code

    International Nuclear Information System (INIS)

    Alesso, H.P.; Pearson, J.; Choi, J.S.

    1989-01-01

COG is a new point-wise Monte Carlo code being developed and tested at LLNL for the Cray computer. It solves the Boltzmann equation for the transport of neutrons, photons, and (in future versions) charged particles. Techniques included in the code for modifying the random walk of particles make COG most suitable for solving deep-penetration (shielding) problems. However, its point-wise cross-sections also make it effective for a wide variety of criticality problems. COG has some similarities to a number of other computer codes used in the shielding and criticality community. These include the Lawrence Livermore National Laboratory (LLNL) codes TART and ALICE, the Los Alamos National Laboratory code MCNP, the Oak Ridge National Laboratory codes 05R, 06R, KENO, and MORSE, the SACLAY code TRIPOLI, and the MAGI code SAM. Each code is a little different in its geometry input and its random-walk modification options. Validating COG consists in part of running benchmark calculations against critical experiments as well as other codes. The objective of this paper is to present calculational results of a variety of critical benchmark experiments using COG, and to present the resulting code bias. Numerous benchmark calculations have been completed for a wide variety of critical experiments which generally involve both simple and complex physical problems. The COG results, which are reported in this paper, have been excellent.

  13. Current and future applications of Monte Carlo

    International Nuclear Information System (INIS)

    Zaidi, H.

    2003-01-01

Full text: The use of radionuclides in medicine has a long history and encompasses a large area of applications including diagnosis and radiation treatment of cancer patients using either external or radionuclide radiotherapy. The 'Monte Carlo method' describes a very broad area of science, in which many processes, physical systems, and phenomena are simulated by statistical methods employing random numbers. The general idea of Monte Carlo analysis is to create a model, which is as similar as possible to the real physical system of interest, and to create interactions within that system based on known probabilities of occurrence, with random sampling of the probability density functions (pdfs). As the number of individual events (called 'histories') is increased, the quality of the reported average behavior of the system improves, meaning that the statistical uncertainty decreases. The use of the Monte Carlo method to simulate radiation transport has become the most accurate means of predicting absorbed dose distributions and other quantities of interest in the radiation treatment of cancer patients using either external or radionuclide radiotherapy. The same trend has occurred for the estimation of the absorbed dose in diagnostic procedures using radionuclides as well as the assessment of image quality and quantitative accuracy of radionuclide imaging. As a consequence of this generalized use, many questions are being raised primarily about the need and potential of Monte Carlo techniques, but also about how accurate it really is, what would it take to apply it clinically and make it available widely to the nuclear medicine community at large. Many of these questions will be answered when Monte Carlo techniques are implemented and used for more routine calculations and for in-depth investigations. In this paper, the conceptual role of the Monte Carlo method is briefly introduced and followed by a survey of its different applications in diagnostic and therapeutic

  14. ALGOL geometrical module for reactor and reactor cell calculations in the R-Z geometry with the Monte Carlo method

    International Nuclear Information System (INIS)

    Usikov, D.A.

    1975-01-01

A description is given of a geometrical module used in a program of the ARMONT complex for Monte Carlo calculations. The geometrical module is designed to simulate the particle trajectory in R-Z geometry; it follows the particle trajectory from the start point to the next collision or flight-out point. The flight direction at the scattering point is assumed isotropic in the laboratory coordinate system, and the module does not determine the angle between the flight directions before and after the collision. The principles of the module construction are presented together with the module text in the ALGOL language. The module is optimized with respect to counting rate and is compact enough not to cause difficulties, due to translator limitations, when translated together with the other program blocks of the Monte Carlo calculation.

  15. Quantum statistical Monte Carlo methods and applications to spin systems

    International Nuclear Information System (INIS)

    Suzuki, M.

    1986-01-01

A short review is given concerning the quantum statistical Monte Carlo method based on the equivalence theorem that d-dimensional quantum systems are mapped onto (d+1)-dimensional classical systems. The convergence property of this approximate transformation is discussed in detail. Some applications of this general approach to quantum spin systems are reviewed. A new Monte Carlo method, the "thermo field Monte Carlo method," is presented, which is an extension of the projection Monte Carlo method at zero temperature to that at finite temperatures.

  16. Radiological hazard assessment at the Monte Bello Islands

    International Nuclear Information System (INIS)

    Cooper, M.B.; Martin, L.J.; Wilks, M.J.; Wiliams, G.A.

    1990-12-01

    Field and laboratory measurements are described and data presented which enabled dose assessments for exposure to artificial radionuclides at the Monte Bello Islands, the sites of U.K. atomic weapons tests in 1952 and 1956. The report focuses on quantifying the inhalation hazard as exposure via the ingestion and wound contamination pathways is considered inconsequential. Surface soil concentrations of radionuclides and particle size analyses are presented for various sampling sites. Analyses of the distribution with depth indicated that, in general, the activity is more or less uniformly mixed through the top 40 mm, although in a few cases the top 10 mm contains the bulk of the activity. The 239 Pu/ 241 Am activity ratios were measured for selected samples. The only potential hazards to health from residual radioactive contamination on the Monte Bello Islands are due to the inhalation of actinides (specifically plutonium and americium) and from the external gamma-radiation field. Only one area, in the fallout plume of HURRICANE to the north-west of Main Beach, is a potential inhalation hazard. For an average inhalable dust loading of 0.1 mg/m 3 , three days occupancy of the most contaminated site will result in a committed effective dose equivalent of 1 mSv. The two ground zeros could not be considered inhalation hazards, considering the small areas concerned and the habits of visitors (full-time occupancy, over a period of one year or more, of the most contaminated sites at either of the G1 or G2 ground zeros would be required to reach 1 mSv). 25 refs., 23 tabs., 3 figs

  17. Extraction of Pathogenesis-Related Proteins and Phenolics in Sauvignon Blanc as Affected by Grape Harvesting and Processing Conditions

    Directory of Open Access Journals (Sweden)

    Bin Tian

    2017-07-01

Thaumatin-like proteins (TLPs) and chitinases are the two main groups of pathogenesis-related (PR) proteins found in wine that cause protein haze formation. Previous studies have found that phenolics are also involved in protein haze formation. In this study, Sauvignon Blanc grapes were harvested and processed in two vintages (2011 and 2012) by three different treatments: (1) hand harvesting with whole bunch press (H-WB); (2) hand harvesting with destem/crush and 3 h skin contact (H-DC-3); and (3) machine harvesting with destem/crush and 3 h skin contact (M-DC-3). The juices were collected at three pressure levels (0.4 MPa, 0.8 MPa and 1.6 MPa), and some juices were fermented in 750 mL wine bottles to determine the bentonite requirement for the resulting wines. Results showed juices of M-DC-3 had a significantly lower concentration of proteins, including PR proteins, compared to those of H-DC-3, likely due to the greater juice yield of M-DC-3 and interactions between proteins and phenolics. Juices from the 0.8–1.6 MPa pressure and the resultant wines had the highest concentration of phenolics but the lowest concentration of TLPs. This supported the view that TLPs are released at low pressure, as they are mainly present in grape pulp, but additional extraction of phenolics, largely present in skin, occurs at higher pressing pressure. Wine protein stability tests showed a positive linear correlation between bentonite requirement and the concentration of chitinases, indicating the possibility of predicting bentonite requirement by quantification of chitinases. This study contributes to an improved understanding of the extraction of haze-forming PR proteins and phenolics that can influence the bentonite requirement for protein stabilization.

  18. LAPP. Activity report 2006-2008

    International Nuclear Information System (INIS)

    Karyotakis, Yannis; Berger, Nicole; Bombar, Claudine; Jeremie, Andrea; Lees, Jean-Pierre; Marion, Federique; Riva, Vanessa; Riordan, Sonia

    2009-01-01

LAPP is a high energy physics laboratory founded in 1976 and is one of the 19 laboratories of IN2P3 (National Institute of Nuclear and Particle Physics), an institute of CNRS (National Centre for Scientific Research). LAPP is a joint research facility of the University Savoie Mont Blanc (USMB) and the CNRS. Research carried out at LAPP aims at understanding the elementary particles and the fundamental interactions between them as well as exploring the connections between the infinitesimally small and the unbelievably big. Among other subjects, LAPP teams try to understand the origin of the mass of the particles, the mystery of dark matter and what happened to the anti-matter that was present in the early universe. LAPP researchers work in close contact with phenomenologist teams from LAPTh, a theory laboratory hosted in the same building. LAPP teams have also worked for several decades on understanding neutrinos, those elementary, almost massless particles with amazing transformation properties. They took part in the design and realization of several experiments. Other LAPP teams collaborate in experiments studying signals from the cosmos. This document presents the activities of the laboratory during the years 2006-2008: 1 - Forewords; 2 - Experimental groups: the quest for new physics at the TeV scale - ATLAS, CMS; CP violation and flavor physics - BaBar, LHCb, CKMFITTER; Revealing neutrinos' nature - OPERA; Decoding cosmos messages - VIRGO, HESS, CTA, AMS; Investing for the future - ILC, LAVISTA, CTF3, POLAR, PMM2, POSITRON; 3 - Scientific production; 4 - Teaching; 5 - Annecy-le-Vieux International Centre of High Energy Physics - CIPHEA; 6 - The calculation meso-center; 7 - The know-how: Electronics department; Computers department; Mechanics department; Valorisation; 8 - Laboratory organisation, operation and means: organisation, administration, human and financial resources; 9 - Scientific life and communication; 10 - Prizes and awards; 11 - Responsibilities in

  19. SPQR: a Monte Carlo reactor kinetics code

    International Nuclear Information System (INIS)

    Cramer, S.N.; Dodds, H.L.

    1980-02-01

The SPQR Monte Carlo code has been developed to analyze fast reactor core accident problems where conventional methods are considered inadequate. The code is based on the adiabatic approximation of the quasi-static method. This initial version contains no automatic material motion or feedback. An existing Monte Carlo code is used to calculate the shape functions and the integral quantities needed in the kinetics module. Several sample problems have been devised and analyzed. Due to the large statistical uncertainty associated with the calculation of reactivity in accident simulations, the results, especially at later times, differ greatly from those of deterministic methods. It was also found that in large uncoupled systems, the Monte Carlo method has difficulty in handling asymmetric perturbations.

  20. Monte Carlo simulation of molecular flow in a neutral-beam injector and comparison with experiment

    International Nuclear Information System (INIS)

    Lillie, R.A.; Gabriel, T.A.; Schwenterly, S.W.; Alsmiller, R.G. Jr.; Santoro, R.T.

    1981-09-01

    Monte Carlo calculations have been performed to obtain estimates of the background gas pressure and molecular number density as a function of position in the PDX-prototype neutral beam injector which has undergone testing at the Oak Ridge National Laboratory. Estimates of these quantities together with the transient and steady-state energy deposition and molecular capture rates on the cryopanels of the cryocondensation pumps and the molecular escape rate from the injector were obtained utilizing a detailed geometric model of the neutral beam injector. The molecular flow calculations were performed using an existing Monte Carlo radiation transport code which was modified slightly to monitor the energy of the background gas molecules. The credibility of these calculations is demonstrated by the excellent agreement between the calculated and experimentally measured background gas pressure in front of the beamline calorimeter located in the downstream drift region of the injector. The usefulness of the calculational method as a design tool is illustrated by a comparison of the integrated beamline molecular density over the drift region of the injector for three modes of cryopump operation

  1. Optix: A Monte Carlo scintillation light transport code

    Energy Technology Data Exchange (ETDEWEB)

    Safari, M.J., E-mail: mjsafari@aut.ac.ir [Department of Energy Engineering and Physics, Amir Kabir University of Technology, PO Box 15875-4413, Tehran (Iran, Islamic Republic of); Afarideh, H. [Department of Energy Engineering and Physics, Amir Kabir University of Technology, PO Box 15875-4413, Tehran (Iran, Islamic Republic of); Ghal-Eh, N. [School of Physics, Damghan University, PO Box 36716-41167, Damghan (Iran, Islamic Republic of); Davani, F. Abbasi [Nuclear Engineering Department, Shahid Beheshti University, PO Box 1983963113, Tehran (Iran, Islamic Republic of)

    2014-02-11

The paper reports on the capabilities of the Monte Carlo scintillation light transport code Optix, an extended version of the previously introduced code Optics. Optix provides the user with a variety of both numerical and graphical outputs with a very simple and user-friendly input structure. A benchmarking strategy has been adopted based on the comparison with experimental results, semi-analytical solutions, and other Monte Carlo simulation codes to verify various aspects of the developed code. In addition, some extensive comparisons have been made against the tracking abilities of the general-purpose MCNPX and FLUKA codes. The presented benchmark results for the Optix code exhibit promising agreements. -- Highlights: • Monte Carlo simulation of scintillation light transport in 3D geometry. • Evaluation of angular distribution of detected photons. • Benchmark studies to check the accuracy of Monte Carlo simulations.

  2. Bayesian phylogeny analysis via stochastic approximation Monte Carlo

    KAUST Repository

    Cheon, Sooyoung

    2009-11-01

    Monte Carlo methods have received much attention in the recent literature of phylogeny analysis. However, the conventional Markov chain Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, tend to get trapped in a local mode in simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo algorithm, to Bayesian phylogeny analysis. Our method is compared with two popular Bayesian phylogeny software, BAMBE and MrBayes, on simulated and real datasets. The numerical results indicate that our method outperforms BAMBE and MrBayes. Among the three methods, SAMC produces the consensus trees which have the highest similarity to the true trees, and the model parameter estimates which have the smallest mean square errors, but costs the least CPU time. © 2009 Elsevier Inc. All rights reserved.

  3. Present status and future prospects of neutronics Monte Carlo

    International Nuclear Information System (INIS)

    Gelbard, E.M.

    1990-01-01

    It is fair to say that the Monte Carlo method, over the last decade, has grown steadily more important as a neutronics computational tool. Apparently this has happened for assorted reasons. Thus, for example, as the power of computers has increased, the cost of the method has dropped, steadily becoming less and less of an obstacle to its use. In addition, more and more sophisticated input processors have now made it feasible to model extremely complicated systems routinely with really remarkable fidelity. Finally, as we demand greater and greater precision in reactor calculations, Monte Carlo is often found to be the only method accurate enough for use in benchmarking. Cross section uncertainties are now almost the only inherent limitations in our Monte Carlo capabilities. For this reason Monte Carlo has come to occupy a special position, interposed between experiment and other computational techniques. More and more often deterministic methods are tested by comparison with Monte Carlo, and cross sections are tested by comparing Monte Carlo with experiment. In this way one can distinguish very clearly between errors due to flaws in our numerical methods, and those due to deficiencies in cross section files. The special role of Monte Carlo as a benchmarking tool, often the only available benchmarking tool, makes it crucially important that this method should be polished to perfection. Problems relating to Eigenvalue calculations, variance reduction and the use of advanced computers are reviewed in this paper. (author)

  4. Diffusion Monte Carlo approach versus adiabatic computation for local Hamiltonians

    Science.gov (United States)

    Bringewatt, Jacob; Dorland, William; Jordan, Stephen P.; Mink, Alan

    2018-02-01

    Most research regarding quantum adiabatic optimization has focused on stoquastic Hamiltonians, whose ground states can be expressed with only real non-negative amplitudes and thus for whom destructive interference is not manifest. This raises the question of whether classical Monte Carlo algorithms can efficiently simulate quantum adiabatic optimization with stoquastic Hamiltonians. Recent results have given counterexamples in which path-integral and diffusion Monte Carlo fail to do so. However, most adiabatic optimization algorithms, such as for solving MAX-k -SAT problems, use k -local Hamiltonians, whereas our previous counterexample for diffusion Monte Carlo involved n -body interactions. Here we present a 6-local counterexample which demonstrates that even for these local Hamiltonians there are cases where diffusion Monte Carlo cannot efficiently simulate quantum adiabatic optimization. Furthermore, we perform empirical testing of diffusion Monte Carlo on a standard well-studied class of permutation-symmetric tunneling problems and similarly find large advantages for quantum optimization over diffusion Monte Carlo.

  5. Frequency domain Monte Carlo simulation method for cross power spectral density driven by periodically pulsed spallation neutron source using complex-valued weight Monte Carlo

    International Nuclear Information System (INIS)

    Yamamoto, Toshihiro

    2014-01-01

    Highlights: • The cross power spectral density in ADS has correlated and uncorrelated components. • A frequency domain Monte Carlo method to calculate the uncorrelated one is developed. • The method solves the Fourier transformed transport equation. • The method uses complex-valued weights to solve the equation. • The new method reproduces well the CPSDs calculated with time domain MC method. - Abstract: In an accelerator driven system (ADS), pulsed spallation neutrons are injected at a constant frequency. The cross power spectral density (CPSD), which can be used for monitoring the subcriticality of the ADS, is composed of the correlated and uncorrelated components. The uncorrelated component is described by a series of the Dirac delta functions that occur at the integer multiples of the pulse repetition frequency. In the present paper, a Monte Carlo method to solve the Fourier transformed neutron transport equation with a periodically pulsed neutron source term has been developed to obtain the CPSD in ADSs. Since the Fourier transformed flux is a complex-valued quantity, the Monte Carlo method introduces complex-valued weights to solve the Fourier transformed equation. The Monte Carlo algorithm used in this paper is similar to the one that was developed by the author of this paper to calculate the neutron noise caused by cross section perturbations. The newly-developed Monte Carlo algorithm is benchmarked to the conventional time domain Monte Carlo simulation technique. The CPSDs are obtained both with the newly-developed frequency domain Monte Carlo method and the conventional time domain Monte Carlo method for a one-dimensional infinite slab. The CPSDs obtained with the frequency domain Monte Carlo method agree well with those with the time domain method. The higher order mode effects on the CPSD in an ADS with a periodically pulsed neutron source are discussed

  6. Neutron point-flux calculation by Monte Carlo

    International Nuclear Information System (INIS)

    Eichhorn, M.

    1986-04-01

A survey of the usual methods for estimating flux at a point is given. The associated variance-reducing techniques in direct Monte Carlo games are explained. The multigroup Monte Carlo codes MC, for critical systems, and PUNKT, for point source-point detector systems, are presented, and problems in applying the codes to practical tasks are discussed. (author)

  7. Research on perturbation based Monte Carlo reactor criticality search

    International Nuclear Information System (INIS)

    Li Zeguang; Wang Kan; Li Yangliu; Deng Jingkang

    2013-01-01

Criticality search is a very important aspect of reactor physics analysis. Due to the advantages of the Monte Carlo method and the development of computer technologies, Monte Carlo criticality search is becoming more and more necessary and feasible. The traditional Monte Carlo criticality search method suffers from the large number of individual criticality runs required and from the uncertainty and fluctuation of Monte Carlo results. A new Monte Carlo criticality search method based on perturbation calculations is put forward in this paper to overcome the disadvantages of the traditional method. By using only one criticality run to obtain the initial k_eff and the differential coefficients with respect to the concerned parameter, a polynomial estimator of the k_eff response function is solved to obtain the critical value of that parameter. The feasibility of this method was tested. The results show that the accuracy and efficiency of the perturbation-based criticality search method are quite promising and that the method overcomes the disadvantages of the traditional one. (authors)
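
    A sketch of the final step under assumed numbers: suppose a single run returned k_eff and its first two derivatives with respect to a soluble boron concentration (all values invented). The quadratic Taylor estimator is then solved for the concentration at which k_eff = 1.

      import numpy as np

      # Hypothetical output of a single Monte Carlo run with perturbation tallies.
      c0      = 1000.0        # reference boron concentration, ppm
      k0      = 1.01250       # k_eff at c0
      dk_dc   = -8.0e-5       # dk/dc, 1/ppm
      d2k_dc2 = 1.5e-8        # d2k/dc2, 1/ppm^2

      # Quadratic Taylor estimator of k_eff(c) around c0:
      #   k(c) = k0 + dk_dc*(c - c0) + 0.5*d2k_dc2*(c - c0)^2
      # The critical concentration solves k(c) = 1.
      coeffs = [0.5 * d2k_dc2, dk_dc, k0 - 1.0]
      roots = np.roots(coeffs)
      delta = min((r.real for r in roots if abs(r.imag) < 1e-12), key=abs)
      print(f"estimated critical boron concentration: {c0 + delta:.1f} ppm")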

  8. Shell model the Monte Carlo way

    International Nuclear Information System (INIS)

    Ormand, W.E.

    1995-01-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined

  9. Shell model the Monte Carlo way

    Energy Technology Data Exchange (ETDEWEB)

    Ormand, W.E.

    1995-03-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.

  10. Monte Carlo learning/biasing experiment with intelligent random numbers

    International Nuclear Information System (INIS)

    Booth, T.E.

    1985-01-01

    A Monte Carlo learning and biasing technique is described that does its learning and biasing in the random number space rather than the physical phase-space. The technique is probably applicable to all linear Monte Carlo problems, but no proof is provided here. Instead, the technique is illustrated with a simple Monte Carlo transport problem. Problems encountered, problems solved, and speculations about future progress are discussed. 12 refs

  11. Temperature variance study in Monte-Carlo photon transport theory

    International Nuclear Information System (INIS)

    Giorla, J.

    1985-10-01

    We study different Monte-Carlo methods for solving radiative transfer problems, and particularly Fleck's Monte-Carlo method. We first give the different time-discretization schemes and the corresponding stability criteria. Then we write the temperature variance as a function of the variances of temperature and absorbed energy at the previous time step. Finally we obtain some stability criteria for the Monte-Carlo method in the stationary case [fr

  12. Monte Carlo code criticality benchmark comparisons for waste packaging

    International Nuclear Information System (INIS)

    Alesso, H.P.; Annese, C.E.; Buck, R.M.; Pearson, J.S.; Lloyd, W.R.

    1992-07-01

    COG is a new point-wise Monte Carlo code being developed and tested at Lawrence Livermore National Laboratory (LLNL). It solves the Boltzmann equation for the transport of neutrons and photons. The objective of this paper is to report on COG results for criticality benchmark experiments both on a Cray mainframe and on an HP 9000 workstation. COG has recently been ported to workstations to improve its accessibility to a wider community of users. COG has some similarities to a number of other computer codes used in the shielding and criticality community. The recently introduced high-performance reduced instruction set (RISC) UNIX workstations provide computational power that approaches that of mainframes at a fraction of the cost. A version of COG is currently being developed for the Hewlett Packard 9000/730 computer with a UNIX operating system. Subsequent porting operations will move COG to SUN, DEC, and IBM workstations. In addition, a CAD system for preparation of the geometry input for COG is being developed. In July 1977, Babcock & Wilcox Co. (B&W) was awarded a contract to conduct a series of critical experiments that simulated close-packed storage of LWR-type fuel. These experiments provided data for benchmarking and validating calculational methods used in predicting the K-effective of nuclear fuel storage in close-packed, neutron-poisoned arrays. Low-enriched UO2 fuel pins in water-moderated lattices in fuel storage represent a challenging criticality calculation for Monte Carlo codes, particularly when the fuel pins extend out of the water. COG and KENO calculational results for these criticality benchmark experiments are presented.

  13. Randomized quasi-Monte Carlo simulation of fast-ion thermalization

    Science.gov (United States)

    Höök, L. J.; Johnson, T.; Hellsten, T.

    2012-01-01

    This work investigates the applicability of the randomized quasi-Monte Carlo method for simulation of fast-ion thermalization processes in fusion plasmas, e.g. for simulation of neutral beam injection and radio frequency heating. In contrast to the standard Monte Carlo method, the quasi-Monte Carlo method uses deterministic numbers instead of pseudo-random numbers and has a statistical weak convergence close to O(N^-1), where N is the number of markers. We have compared different quasi-Monte Carlo methods for a neutral beam injection scenario, which is solved by many realizations of the associated stochastic differential equation, discretized with the Euler-Maruyama scheme. The statistical convergence of the methods is measured for time steps up to 2^14.
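
    A rough sketch of the comparison (assuming SciPy's scipy.stats.qmc module is available; the Ornstein-Uhlenbeck equation dX = -X dt + dW stands in for the actual thermalization dynamics) could look like this, with the same Euler-Maruyama stepping driven either by pseudo-random or scrambled Sobol points:

      import numpy as np
      from scipy.stats import norm, qmc

      def euler_maruyama_mean(uniforms, x0=5.0, T=1.0):
          # Euler-Maruyama for dX = -X dt + dW; each row of `uniforms` drives one
          # path, each column one time step.  Returns the sample mean of X_T.
          u = np.clip(uniforms, 1e-12, 1.0 - 1e-12)
          n_paths, n_steps = u.shape
          dt = T / n_steps
          dw = norm.ppf(u) * np.sqrt(dt)      # map uniforms to Gaussian increments
          x = np.full(n_paths, x0)
          for k in range(n_steps):
              x = x - x * dt + dw[:, k]
          return x.mean()

      n_steps, n_paths = 32, 2**12
      pseudo = np.random.default_rng(1).random((n_paths, n_steps))
      sobol = qmc.Sobol(d=n_steps, scramble=True, seed=1).random(n_paths)
      print(euler_maruyama_mean(pseudo), euler_maruyama_mean(sobol),
            5.0 * np.exp(-1.0))               # last value: exact E[X_T]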

  14. Nèg Blanc sa a (Aquela negra branca – desafiando as categorias de cor, nacionalidade e pertença a partir de um olhar afro-brasileiro sobre o Haiti

    Directory of Open Access Journals (Sweden)

    Renata de Melo Rosa

    2016-11-01

    Full Text Available Abstract: This article aims to analyze the centrality of the category of the person in contemporary Haiti, which is founded on the contextual meanings attributed to the notion of nèg (roughly translatable as "black"), a notion that at once precedes and founds the category of the person. However, even though the category of the person in Haiti is anchored in a "racial" nomenclature, nèg is not a category necessarily tied to skin color but to the quality of each subject's belonging to the Haitian nation. Identifying oneself and being identified as a nèg activates, in the identity process and in intersubjective dialogue, important diacritics whose meanings are given collectively and contextually within the web of meanings woven in the Haitian context. Thus, given its contextual nature and constant dynamics, it is possible that a "person" who, to Western eyes, resembles what we in Brazil understand as "black" may not, in Haiti, be immediately identified as a nèg or as a person who "belongs" to Haiti. In other words, each nèg (in order to remain a nèg and therefore a "person") must fulfil, according to the contours of Haitian culture that inscribe a nèg, the various ritual obligations of belonging to this category. Seen from this perspective, the category nèg can be ritualized by a white Haitian, provided that the rituals of belonging to the nation are also fulfilled, turning the subject into a Nèg Blanc (white black), the expression that gives this article its title. Finally, this reflection proposes that the category nèg as a synonym for the category of the person is a counter-narrative to the attempts at racial inferiorization in force during the French colonial period. Keywords: nèg; notion of the person; Haiti; nation; categories of color.

  15. A Monte Carlo algorithm for the Vavilov distribution

    International Nuclear Information System (INIS)

    Yi, Chul-Young; Han, Hyon-Soo

    1999-01-01

    Using the convolution property of the inverse Laplace transform, an improved Monte Carlo algorithm for the Vavilov energy-loss straggling distribution of the charged particle is developed, which is relatively simple and gives enough accuracy to be used for most Monte Carlo applications

  16. Integrated Tiger Series of electron/photon Monte Carlo transport codes: a user's guide for use on IBM mainframes

    International Nuclear Information System (INIS)

    Kirk, B.L.

    1985-12-01

    The ITS (Integrated Tiger Series) Monte Carlo code package developed at Sandia National Laboratories and distributed as CCC-467/ITS by the Radiation Shielding Information Center (RSIC) at Oak Ridge National Laboratory (ORNL) consists of eight codes - the standard codes, TIGER, CYLTRAN, ACCEPT; the P-codes, TIGERP, CYLTRANP, ACCEPTP; and the M-codes ACCEPTM, CYLTRANM. The codes have been adapted to run on the IBM 3081, VAX 11/780, CDC-7600, and Cray 1 with the use of the update emulator UPEML. This manual should serve as a guide to a user running the codes on IBM computers having 370 architecture. The cases listed were tested on the IBM 3033, under the MVS operating system using the VS Fortran Level 1.3.1 compiler

  17. Adaptive Multilevel Monte Carlo Simulation

    KAUST Repository

    Hoel, H

    2011-08-23

    This work generalizes the multilevel forward Euler Monte Carlo method introduced by Giles (Michael Giles. Oper. Res. 56(3):607-617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. That work proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single-level, forward Euler Monte Carlo method. The present work introduces an adaptive hierarchy of non-uniform time discretizations, generated by an adaptive algorithm introduced in (Anna Dzougoutov et al. Adaptive Monte Carlo algorithms for stopped diffusion. In Multiscale Methods in Science and Engineering, volume 44 of Lect. Notes Comput. Sci. Eng., pages 59-88. Springer, Berlin, 2005; Kyoung-Sook Moon et al. Stoch. Anal. Appl. 23(3):511-558, 2005; Kyoung-Sook Moon et al. An adaptive algorithm for ordinary, stochastic and partial differential equations. In Recent Advances in Adaptive Computation, volume 383 of Contemp. Math., pages 325-343. Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates stochastic, path-dependent time steps and is based on a posteriori error expansions first developed in (Anders Szepessy et al. Comm. Pure Appl. Math. 54(10):1169-1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost needed to achieve an accuracy of O(TOL), from O(TOL^-3) for a single-level version of the adaptive algorithm to O((TOL^-1 log(TOL))^2).
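
    For orientation, a plain (non-adaptive) multilevel Monte Carlo estimator with uniform Euler time steps, in the spirit of the Giles construction that this work generalizes, can be sketched as below for E[X_T] of a geometric Brownian motion; the per-level sample sizes are fixed rather than optimized, and the adaptive, path-dependent time stepping of the paper is not reproduced.

      import numpy as np

      rng = np.random.default_rng(0)

      def mlmc_level(l, n_samples, x0=1.0, T=1.0, mu=0.05, sigma=0.2, M=2):
          # One MLMC level for E[X_T] of dX = mu*X dt + sigma*X dW: returns samples
          # of P_l - P_{l-1}, where fine and coarse Euler paths share the same
          # Brownian increments (the control-variate coupling).
          nf = M**l                            # fine time steps on level l
          dt_f = T / nf
          dw = rng.standard_normal((n_samples, nf)) * np.sqrt(dt_f)
          xf = np.full(n_samples, x0)
          for k in range(nf):
              xf = xf + mu * xf * dt_f + sigma * xf * dw[:, k]
          if l == 0:
              return xf
          nc = nf // M                         # coarse path reuses summed increments
          dt_c = T / nc
          xc = np.full(n_samples, x0)
          for k in range(nc):
              dwc = dw[:, M*k:M*(k+1)].sum(axis=1)
              xc = xc + mu * xc * dt_c + sigma * xc * dwc
          return xf - xc

      levels = 5
      estimate = sum(mlmc_level(l, 20_000).mean() for l in range(levels))
      print("MLMC estimate:", estimate, "exact:", np.exp(0.05))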

  18. Monte Carlo computation in the applied research of nuclear technology

    International Nuclear Information System (INIS)

    Xu Shuyan; Liu Baojie; Li Qin

    2007-01-01

    This article briefly introduces Monte Carlo methods and their properties. It describes Monte Carlo methods with emphasis on their applications to several domains of nuclear technology. Monte Carlo simulation methods and several commonly used computer software packages that implement them are also introduced. The proposed methods are demonstrated by a real example. (authors)

  19. Nested Sampling with Constrained Hamiltonian Monte Carlo

    OpenAIRE

    Betancourt, M. J.

    2010-01-01

    Nested sampling is a powerful approach to Bayesian inference ultimately limited by the computationally demanding task of sampling from a heavily constrained probability distribution. An effective algorithm in its own right, Hamiltonian Monte Carlo is readily adapted to efficiently sample from any smooth, constrained distribution. Utilizing this constrained Hamiltonian Monte Carlo, I introduce a general implementation of the nested sampling algorithm.

  20. Statistics of Monte Carlo methods used in radiation transport calculation

    International Nuclear Information System (INIS)

    Datta, D.

    2009-01-01

    Radiation transport calculations can be carried out using either deterministic or statistical methods. Radiation transport calculation based on statistical methods is the basic theme of Monte Carlo methods. The aim of this lecture is to describe the fundamental statistics required to build the foundations of the Monte Carlo technique for radiation transport calculation. The lecture note is organized as follows. Section (1) introduces basic Monte Carlo and its classification in the respective fields. Section (2) describes random sampling methods, a key component of Monte Carlo radiation transport calculation. Section (3) provides the statistical uncertainty of Monte Carlo estimates. Section (4) briefly describes the importance of variance reduction techniques when sampling particles such as photons or neutrons in the process of radiation transport.
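
    The core of Section (3), the sample mean as the Monte Carlo estimate and s/sqrt(N) as its one-sigma statistical uncertainty, can be illustrated with a toy integral (a sketch, not part of the lecture note):

      import numpy as np

      rng = np.random.default_rng(42)

      def mc_estimate(f, n):
          # Plain Monte Carlo estimate of integral_0^1 f(x) dx with its
          # one-sigma statistical uncertainty s / sqrt(n).
          y = f(rng.random(n))
          return y.mean(), y.std(ddof=1) / np.sqrt(n)

      mean, err = mc_estimate(lambda x: np.exp(x), 100_000)
      print(f"{mean:.5f} +/- {err:.5f}   (exact: {np.e - 1:.5f})")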

  1. Multiple histogram method and static Monte Carlo sampling

    NARCIS (Netherlands)

    Inda, M.A.; Frenkel, D.

    2004-01-01

    We describe an approach to use multiple-histogram methods in combination with static, biased Monte Carlo simulations. To illustrate this, we computed the force-extension curve of an athermal polymer from multiple histograms constructed in a series of static Rosenbluth Monte Carlo simulations. From

  2. Forest canopy BRDF simulation using Monte Carlo method

    NARCIS (Netherlands)

    Huang, J.; Wu, B.; Zeng, Y.; Tian, Y.

    2006-01-01

    The Monte Carlo method is a random statistical method which has been widely used to simulate the Bidirectional Reflectance Distribution Function (BRDF) of vegetation canopies in the field of visible remote sensing. The random process between photons and the forest canopy was designed using the Monte Carlo method.

  3. Shell model Monte Carlo methods

    International Nuclear Information System (INIS)

    Koonin, S.E.

    1996-01-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, thermal behavior of γ-soft nuclei, and calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs

  4. Monte Carlo techniques in diagnostic and therapeutic nuclear medicine

    International Nuclear Information System (INIS)

    Zaidi, H.

    2002-01-01

    Monte Carlo techniques have become one of the most popular tools in different areas of medical radiation physics following the development and subsequent implementation of powerful computing systems for clinical use. In particular, they have been extensively applied to simulate processes involving random behaviour and to quantify physical parameters that are difficult or even impossible to calculate analytically or to determine by experimental measurements. The use of the Monte Carlo method to simulate radiation transport turned out to be the most accurate means of predicting absorbed dose distributions and other quantities of interest in the radiation treatment of cancer patients using either external or radionuclide radiotherapy. The same trend has occurred for the estimation of the absorbed dose in diagnostic procedures using radionuclides. There is broad consensus in accepting that the earliest Monte Carlo calculations in medical radiation physics were made in the area of nuclear medicine, where the technique was used for dosimetry modelling and computations. Formalism and data based on Monte Carlo calculations, developed by the Medical Internal Radiation Dose (MIRD) committee of the Society of Nuclear Medicine, were published in a series of supplements to the Journal of Nuclear Medicine, the first one being released in 1968. Some of these pamphlets made extensive use of Monte Carlo calculations to derive specific absorbed fractions for electron and photon sources uniformly distributed in organs of mathematical phantoms. Interest in Monte Carlo-based dose calculations with β-emitters has been revived with the application of radiolabelled monoclonal antibodies to radioimmunotherapy. As a consequence of this generalized use, many questions are being raised primarily about the need and potential of Monte Carlo techniques, but also about how accurate it really is, what would it take to apply it clinically and make it available widely to the medical physics

  5. Monte Carlo strategies in scientific computing

    CERN Document Server

    Liu, Jun S

    2008-01-01

    This paperback edition is a reprint of the 2001 Springer edition. This book provides a self-contained and up-to-date treatment of the Monte Carlo method and develops a common framework under which various Monte Carlo techniques can be "standardized" and compared. Given the interdisciplinary nature of the topics and a moderate prerequisite for the reader, this book should be of interest to a broad audience of quantitative researchers such as computational biologists, computer scientists, econometricians, engineers, probabilists, and statisticians. It can also be used as the textbook for a graduate-level course on Monte Carlo methods. Many problems discussed in the later chapters can be potential thesis topics for master's or PhD students in statistics or computer science departments. Jun Liu is Professor of Statistics at Harvard University, with a courtesy Professor appointment at the Harvard Biostatistics Department. Professor Liu was the recipient of the 2002 COPSS Presidents' Award, the most prestigious one for sta...

  6. Off-diagonal expansion quantum Monte Carlo.

    Science.gov (United States)

    Albash, Tameem; Wagenbreth, Gene; Hen, Itay

    2017-12-01

    We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.

  7. Dynamic bounds coupled with Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Rajabalinejad, M., E-mail: M.Rajabalinejad@tudelft.n [Faculty of Civil Engineering, Delft University of Technology, Delft (Netherlands); Meester, L.E. [Delft Institute of Applied Mathematics, Delft University of Technology, Delft (Netherlands); Gelder, P.H.A.J.M. van; Vrijling, J.K. [Faculty of Civil Engineering, Delft University of Technology, Delft (Netherlands)

    2011-02-15

    For the reliability analysis of engineering structures a variety of methods is known, of which Monte Carlo (MC) simulation is widely considered to be among the most robust and most generally applicable. To reduce the simulation cost of the MC method, variance reduction methods are applied. This paper describes a method to reduce the simulation cost even further, while retaining the accuracy of Monte Carlo, by taking into account widely present monotonicity. For models exhibiting monotonic (decreasing or increasing) behavior, dynamic bounds (DB) are defined, which in a coupled Monte Carlo simulation are updated dynamically, resulting in a failure probability estimate as well as strict (non-probabilistic) upper and lower bounds. Accurate results are obtained at a much lower cost than an equivalent ordinary Monte Carlo simulation. In a two-dimensional and a four-dimensional numerical example, the cost reduction factors are 130 and 9, respectively, where the relative error is smaller than 5%. At higher accuracy levels, this factor increases, though this effect is expected to be smaller with increasing dimension. To show the application of the DB method to real-world problems, it is applied to a complex finite element model of a flood wall in New Orleans.
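
    A toy illustration of the monotonicity bookkeeping behind dynamic bounds (not the authors' algorithm; the limit-state function g and all numbers are made up): for a g that increases in every coordinate, a new sample dominated componentwise by a known failure point must fail, and one dominating a known safe point must be safe, so the model only has to be evaluated for the undecided samples.

      import numpy as np

      rng = np.random.default_rng(3)

      def g(x):                      # limit state, increasing in every coordinate
          return x.sum() - 0.4       # failure region: g(x) < 0

      n, dim = 5_000, 2
      samples = rng.random((n, dim))
      known_fail, known_safe = [], []
      evaluations = failures = 0

      for x in samples:
          # classify for free whenever monotonicity already decides the outcome
          if any(np.all(x <= f) for f in known_fail):
              failures += 1
          elif any(np.all(x >= s) for s in known_safe):
              pass
          else:
              evaluations += 1
              if g(x) < 0:
                  failures += 1
                  known_fail.append(x)
              else:
                  known_safe.append(x)

      print("P_f estimate:", failures / n, "model evaluations:", evaluations)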

  8. Coded aperture optimization using Monte Carlo simulations

    International Nuclear Information System (INIS)

    Martineau, A.; Rocchisani, J.M.; Moretti, J.L.

    2010-01-01

    Coded apertures using Uniformly Redundant Arrays (URA) have been unsuccessfully evaluated for two-dimensional and three-dimensional imaging in Nuclear Medicine. The images reconstructed from coded projections contain artifacts and suffer from poor spatial resolution in the longitudinal direction. We introduce a Maximum-Likelihood Expectation-Maximization (MLEM) algorithm for three-dimensional coded aperture imaging which uses a projection matrix calculated by Monte Carlo simulations. The aim of the algorithm is to reduce artifacts and improve the three-dimensional spatial resolution in the reconstructed images. Firstly, we present the validation of GATE (Geant4 Application for Emission Tomography) for Monte Carlo simulations of a coded mask installed on a clinical gamma camera. The coded mask modelling was validated by comparison between experimental and simulated data in terms of energy spectra, sensitivity and spatial resolution. In the second part of the study, we use the validated model to calculate the projection matrix with Monte Carlo simulations. A three-dimensional thyroid phantom study was performed to compare the performance of the three-dimensional MLEM reconstruction with conventional correlation method. The results indicate that the artifacts are reduced and three-dimensional spatial resolution is improved with the Monte Carlo-based MLEM reconstruction.
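
    The reconstruction step can be summarized by the standard MLEM update x_j <- x_j * (sum_i A_ij y_i/(Ax)_i) / (sum_i A_ij), where A is the projection matrix that, in the paper, is filled by GATE Monte Carlo simulation. A generic sketch with a random stand-in matrix (all sizes and values hypothetical):

      import numpy as np

      rng = np.random.default_rng(7)

      def mlem(A, y, n_iter=50):
          # Maximum-Likelihood Expectation-Maximization for y ~ Poisson(A @ x).
          # A[i, j] = probability that a photon emitted in voxel j is detected
          # in bin i (here random; in the paper it comes from Monte Carlo).
          x = np.ones(A.shape[1])
          sens = A.sum(axis=0)                 # sensitivity (back-projection of ones)
          for _ in range(n_iter):
              proj = A @ x
              ratio = np.where(proj > 0, y / proj, 0.0)
              x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
          return x

      # toy problem: 40 detector bins, 10 voxels, known activity to recover
      A = rng.random((40, 10)) * 0.05
      x_true = rng.integers(0, 100, size=10).astype(float)
      y = rng.poisson(A @ x_true)
      print(np.round(mlem(A, y), 1))
      print(x_true)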

  9. Randomized quasi-Monte Carlo simulation of fast-ion thermalization

    International Nuclear Information System (INIS)

    Höök, L J; Johnson, T; Hellsten, T

    2012-01-01

    This work investigates the applicability of the randomized quasi-Monte Carlo method for simulation of fast-ion thermalization processes in fusion plasmas, e.g. for simulation of neutral beam injection and radio frequency heating. In contrast to the standard Monte Carlo method, the quasi-Monte Carlo method uses deterministic numbers instead of pseudo-random numbers and has a statistical weak convergence close to O(N^-1), where N is the number of markers. We have compared different quasi-Monte Carlo methods for a neutral beam injection scenario, which is solved by many realizations of the associated stochastic differential equation, discretized with the Euler-Maruyama scheme. The statistical convergence of the methods is measured for time steps up to 2^14. (paper)

  10. Combinatorial nuclear level density by a Monte Carlo method

    International Nuclear Information System (INIS)

    Cerf, N.

    1994-01-01

    We present a new combinatorial method for the calculation of the nuclear level density. It is based on a Monte Carlo technique, in order to avoid a direct counting procedure which is generally impracticable for high-A nuclei. The Monte Carlo simulation, making use of the Metropolis sampling scheme, allows a computationally fast estimate of the level density for many fermion systems in large shell model spaces. We emphasize the advantages of this Monte Carlo approach, particularly concerning the prediction of the spin and parity distributions of the excited states, and compare our results with those derived from a traditional combinatorial or a statistical method. Such a Monte Carlo technique seems very promising to determine accurate level densities in a large energy range for nuclear reaction calculations
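
    The Metropolis sampling scheme referred to here can be sketched generically as a random-walk sampler of an unnormalized density (a standard normal target stands in for the many-nucleon configuration space actually sampled in the paper):

      import numpy as np

      rng = np.random.default_rng(11)

      def metropolis(log_prob, x0, n_steps, step=0.5):
          # Random-walk Metropolis: propose x' = x + step*N(0,1) and accept
          # with probability min(1, p(x')/p(x)).
          x, lp = x0, log_prob(x0)
          chain = np.empty(n_steps)
          for i in range(n_steps):
              x_new = x + step * rng.standard_normal()
              lp_new = log_prob(x_new)
              if np.log(rng.random()) < lp_new - lp:
                  x, lp = x_new, lp_new
              chain[i] = x
          return chain

      # target: unnormalized standard normal; sample mean/variance should be ~0/1
      chain = metropolis(lambda x: -0.5 * x * x, 0.0, 50_000)
      print(chain[5_000:].mean(), chain[5_000:].var())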

  11. Monte Carlo variance reduction approaches for non-Boltzmann tallies

    International Nuclear Information System (INIS)

    Booth, T.E.

    1992-12-01

    Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed

  12. BOMAB phantom manufacturing quality assurance study using Monte Carlo computations

    International Nuclear Information System (INIS)

    Mallett, M.W.

    1994-01-01

    Monte Carlo calculations have been performed to assess the importance of and quantify quality assurance protocols in the manufacturing of the Bottle-Manikin-Absorption (BOMAB) phantom for calibrating in vivo measurement systems. The parameters characterizing the BOMAB phantom that were examined included height, fill volume, fill material density, wall thickness, and source concentration. Transport simulation was performed for monoenergetic photon sources of 0.200, 0.662, and 1,460 MeV. A linear response was observed in the photon current exiting the exterior surface of the BOMAB phantom due to variations in these parameters. Sensitivity studies were also performed for an in vivo system in operation at the Pacific Northwest Laboratories in Richland, WA. Variations in detector current for this in vivo system are reported for changes in the BOMAB phantom parameters studied here. Physical justifications for the observed results are also discussed

  13. Variational Monte Carlo Technique

    Indian Academy of Sciences (India)

    Variational Monte Carlo Technique: Ground State Energies of Quantum Mechanical Systems. Sukanta Deb. General Article, Resonance – Journal of Science Education, Volume 19, Issue 8, August 2014, pp. 713-739.

  14. Diffusion and retention experiment at the Mont Terri underground rock laboratory in St. Ursanne

    International Nuclear Information System (INIS)

    Leupin, O.X.; Wersin, P.; Gimmi, Th.; Van Loon, L.; Eikenberg, J.; Baeyens, B.; Soler, J.M.; Dewonck, S.; Wittebroodt, C.; Samper, J.; Yi, S.; Naves, A.

    2010-01-01

    Document available in extended abstract form only. Because of their favourable hydraulic and retention properties that limit the migration of radionuclides, indurated clays are being considered as potential host rocks for radioactive waste disposal. Migration of radionuclides by diffusion and retention is thereby one of the main concerns for safety assessment and is therefore carefully investigated at different scales. The transfer of dispersed sorption batch and diffusion data from lab experiments to the field scale is, however, not always straightforward. Thus, combined sorption and diffusion experiments at both lab and field scale are instrumental for a critical verification of the applicability of such sorption and diffusion data. The present migration field experiment 'DR' (Diffusion and Retention experiment) at the Mont Terri Rock Laboratory (Switzerland) is the continuation of a series of successful diffusion experiments. The design is based on these previous diffusion experiments and has been extended to two diffusion chambers in a single borehole drilled perpendicular to the bedding plane. The radionuclides were injected as a pulse in both the upper and lower loops, where artificial pore water is circulating. The injected tracers were tritium, iodide, bromide, sodium-22, strontium-85 and caesium (stable) for one diffusion chamber, and deuterium, caesium-137, barium-133, cobalt-60, europium-152, selenium (stable) and selenium-75 for the other. Their decrease in the circulation fluid - as they diffuse into the clay - is continuously monitored by online gamma detection and regular sampling. The goals are fourfold: (i) obtain diffusion and retention data for moderately to strongly sorbing tracers and verify the corresponding data obtained on small-scale lab samples, (ii) improve diffusion data for the rock anisotropy, (iii) quantify effects of the borehole-disturbed zone for non-reactive tracers and (iv) improve data for long-term diffusion.

  15. McSnow: A Monte-Carlo Particle Model for Riming and Aggregation of Ice Particles in a Multidimensional Microphysical Phase Space

    Science.gov (United States)

    Brdar, S.; Seifert, A.

    2018-01-01

    We present a novel Monte-Carlo ice microphysics model, McSnow, to simulate the evolution of ice particles due to deposition, aggregation, riming, and sedimentation. The model is an application and extension of the super-droplet method of Shima et al. (2009) to the more complex problem of rimed ice particles and aggregates. For each individual super-particle, the ice mass, rime mass, rime volume, and the number of monomers are predicted establishing a four-dimensional particle-size distribution. The sensitivity of the model to various assumptions is discussed based on box model and one-dimensional simulations. We show that the Monte-Carlo method provides a feasible approach to tackle this high-dimensional problem. The largest uncertainty seems to be related to the treatment of the riming processes. This calls for additional field and laboratory measurements of partially rimed snowflakes.

  16. Discrete diffusion Monte Carlo for frequency-dependent radiative transfer

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Thompson, Kelly G.; Urbatsch, Todd J.

    2011-01-01

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique. (author)

  17. Modified Monte Carlo procedure for particle transport problems

    International Nuclear Information System (INIS)

    Matthes, W.

    1978-01-01

    The simulation of photon transport in the atmosphere with the Monte Carlo method forms part of the EURASEP-programme. The specifications for the problems posed for a solution were such that the direct application of the analogue Monte Carlo method was not feasible. For this reason the standard Monte Carlo procedure was modified in the sense that additional, properly weighted branchings at each collision and transport process in a photon history were introduced. This modified Monte Carlo procedure leads to a clear and logical separation of the essential parts of a problem and offers large flexibility for variance-reducing techniques. More complex problems, as foreseen in the EURASEP-programme (e.g. clouds in the atmosphere, rough ocean surface and chlorophyll distribution in the ocean), can be handled by recoding some subroutines. This collision- and transport-splitting procedure can of course be performed differently in different space and energy regions. It is applied here only for a homogeneous problem.

  18. Uncertainty analysis in Monte Carlo criticality computations

    International Nuclear Information System (INIS)

    Qi Ao

    2011-01-01

    Highlights: ► Two types of uncertainty methods for k_eff Monte Carlo computations are examined. ► The sampling method has the fewest restrictions on perturbations but demands computing resources. ► The analytical method is limited to small perturbations of material properties. ► Practicality relies on efficiency, multiparameter applicability and data availability. - Abstract: Uncertainty analysis is imperative for nuclear criticality risk assessments when using Monte Carlo neutron transport methods to predict the effective neutron multiplication factor (k_eff) for fissionable material systems. For the validation of Monte Carlo codes for criticality computations against benchmark experiments, code accuracy and precision are measured by both the computational bias and the uncertainty in the bias. The uncertainty in the bias accounts for known or quantified experimental, computational and model uncertainties. For the application of Monte Carlo codes to criticality analysis of fissionable material systems, an administrative margin of subcriticality must be imposed to provide additional assurance of subcriticality for any unknown or unquantified uncertainties. Because of the substantial impact of the administrative margin of subcriticality on the economics and safety of nuclear fuel cycle operations, recently increasing interest in reducing this margin makes the uncertainty analysis in criticality safety computations more risk-significant. This paper provides an overview of the two most popular k_eff uncertainty analysis methods for Monte Carlo criticality computations: (1) sampling-based methods, and (2) analytical methods. Examples are given to demonstrate their usage in the k_eff uncertainty analysis due to uncertainties in both neutronic and non-neutronic parameters of fissionable material systems.
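
    The sampling-based route can be sketched as follows: input parameters are drawn from their assumed distributions, the criticality calculation is repeated for each draw, and the spread of the resulting k_eff values gives the propagated uncertainty. The keff_model function and the distributions below are invented stand-ins; in practice each evaluation would be a full Monte Carlo criticality run.

      import numpy as np

      rng = np.random.default_rng(5)

      def keff_model(density, enrichment):
          # Stand-in for a full Monte Carlo criticality run; in practice each
          # evaluation would be an MCNP/KENO/COG calculation.
          return 0.95 + 0.04 * (density - 10.0) + 3.0 * (enrichment - 0.04)

      n = 1_000
      density = rng.normal(10.0, 0.05, n)        # assumed 1-sigma of 0.05 g/cc
      enrichment = rng.normal(0.04, 0.0005, n)   # assumed 1-sigma of 0.05 wt%
      keff = keff_model(density, enrichment)
      print(f"k_eff = {keff.mean():.5f} +/- {keff.std(ddof=1):.5f}")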

  18. Quantum Monte Carlo simulations of the Ti4O7 Magnéli phase

    Science.gov (United States)

    Benali, Anouar; Shulenburger, Luke; Krogel, Jaron; Zhong, Xiaoliang; Kent, Paul; Heinonen, Olle

    2015-03-01

    Ti4O7 is ubiquitous in Ti-oxides. It has been extensively studied, both experimentally and theoretically in the past decades using multiple levels of theories, resulting in multiple diverse results. The latest DFT +SIC methods and state of the art HSE06 hybrid functionals even propose a new anti-ferromagnetic state at low temperature. Using Quantum Monte Carlo (QMC), as implemented in the QMCPACK simulation package, we investigated the electronic and magnetic properties of Ti4O7 at low (120K) and high (298K) temperatures and at different magnetic states. This research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under contract DE-AC02-06CH11357. L.S, J.K and P.K were supported through Predictive Theory and Modeling for Materials and Chemical Science program by the Office of Basic Energy Sciences (BES), Department of Energy (DOE) Sandia National Laboratories is a multiprogram laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under Contract No. DE-AC04-94AL85000.

  20. High-energy particle Monte Carlo at Los Alamos

    International Nuclear Information System (INIS)

    Prael, R.E.

    1985-01-01

    A major computational effort at Los Alamos has been the development of a code system based on the HETC code for the transport of nucleons, pions, and muons. The Los Alamos National Laboratory version of HETC utilizes MCNP geometry and interfaces with MCNP for the transport of neutrons below 20 MeV and photons at any energy. A major recent effort has been the development of the PHT code for treating the gamma cascade in excited nuclei (the residual nuclei from an HETC calculation) by the Monte Carlo method to generate a photon source for MCNP. The HETC/MCNP code system has been extensively used for design studies of accelerator targets and shielding, including the design of LAMPF-II. It is extensively used for the design and analysis of accelerator experiments. Los Alamos National Laboratory has been an active member of the International Collaboration on Advanced Neutron Sources; as such we engage in shared code development and computational efforts. In the past few years, additional effort has been devoted to the development of a Chen-model intranuclear cascade code (INCA1) featuring a cluster model for the nucleus and deuteron pickup reactions. Concurrently, the INCA2 code for the breakup of light, excited nuclei using the Fermi breakup model has been developed. Together, they have been used for the calculation of neutron and proton cross sections in the energy ranges appropriate to medical accelerators, and for the computation of tissue kerma factors

  1. An Overview of the Monte Carlo Application ToolKit (MCATK)

    Energy Technology Data Exchange (ETDEWEB)

    Trahan, Travis John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-01-07

    MCATK is a C++ component-based Monte Carlo neutron-gamma transport software library designed to build specialized applications and designed to provide new functionality in existing general-purpose Monte Carlo codes like MCNP; it was developed with Agile software engineering methodologies under the motivation to reduce costs. The characteristics of MCATK can be summarized as follows: MCATK physics – continuous energy neutron-gamma transport with multi-temperature treatment, static eigenvalue (k and α) algorithms, time-dependent algorithm, fission chain algorithms; MCATK geometry – mesh geometries, solid body geometries. MCATK provides verified, unit-tested Monte Carlo components, flexibility in Monte Carlo applications development, and numerous tools such as geometry and cross section plotters. Recent work has involved deterministic and Monte Carlo analysis of stochastic systems. Static and dynamic analysis is discussed, and the results of a dynamic test problem are given.

  2. KENO-VI: A Monte Carlo Criticality Program with generalized quadratic geometry

    International Nuclear Information System (INIS)

    Hollenbach, D.F.; Petrie, L.M.; Landers, N.F.

    1993-01-01

    This report discusses KENO-VI, a new version of the KENO Monte Carlo criticality safety program developed at Oak Ridge National Laboratory. The purpose of KENO-VI is to provide a criticality safety code similar to KENO-V.a that possesses a more general and flexible geometry package. KENO-VI constructs and processes geometry data as sets of quadratic equations. A lengthy set of simple, easy-to-use geometric functions, similar to those provided in KENO-V.a, and the ability to build more complex geometric shapes represented by sets of quadratic equations are the heart of the geometry package in KENO-VI. The code's flexibility is increased by allowing intersecting geometry regions, hexagonal as well as cuboidal arrays, and the ability to specify an array boundary that intersects the array.

  3. Efficiency and accuracy of Monte Carlo (importance) sampling

    NARCIS (Netherlands)

    Waarts, P.H.

    2003-01-01

    Monte Carlo analysis is often regarded as the simplest and most accurate reliability method. Besides, it is the most transparent method. The only problem is the accuracy in relation to the efficiency: Monte Carlo becomes less efficient or less accurate when very low probabilities are to be computed.
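
    The low-probability difficulty, and the importance-sampling remedy named in the title, can be seen on a Gaussian tail probability (a generic sketch; the shifted sampling density N(threshold, 1) is one simple choice, not the paper's):

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(2)
      n, threshold = 100_000, 4.5              # P(X > 4.5) ~ 3.4e-6 for X ~ N(0,1)

      # analog Monte Carlo: almost no samples land in the failure region
      x = rng.standard_normal(n)
      p_analog = np.mean(x > threshold)

      # importance sampling: draw from N(threshold, 1) and re-weight each sample
      # by the likelihood ratio p(x)/q(x)
      x_is = rng.normal(threshold, 1.0, n)
      w = norm.pdf(x_is) / norm.pdf(x_is, loc=threshold)
      p_is = np.mean(w * (x_is > threshold))

      print(p_analog, p_is, norm.sf(threshold))   # exact value for comparison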

  4. SU-E-J-60: Efficient Monte Carlo Dose Calculation On CPU-GPU Heterogeneous Systems

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, K; Chen, D. Z; Hu, X. S [University of Notre Dame, Notre Dame, IN (United States); Zhou, B [Altera Corp., San Jose, CA (United States)

    2014-06-01

    Purpose: It is well-known that the performance of GPU-based Monte Carlo dose calculation implementations is bounded by memory bandwidth. One major cause of this bottleneck is the random memory writing patterns in dose deposition, which leads to several memory efficiency issues on GPU such as un-coalesced writing and atomic operations. We propose a new method to alleviate such issues on CPU-GPU heterogeneous systems, which achieves overall performance improvement for Monte Carlo dose calculation. Methods: Dose deposition is to accumulate dose into the voxels of a dose volume along the trajectories of radiation rays. Our idea is to partition this procedure into the following three steps, which are fine-tuned for CPU or GPU: (1) each GPU thread writes dose results with location information to a buffer on GPU memory, which achieves fully-coalesced and atomic-free memory transactions; (2) the dose results in the buffer are transferred to CPU memory; (3) the dose volume is constructed from the dose buffer on CPU. We organize the processing of all radiation rays into streams. Since the steps within a stream use different hardware resources (i.e., GPU, DMA, CPU), we can overlap the execution of these steps for different streams by pipelining. Results: We evaluated our method using a Monte Carlo Convolution Superposition (MCCS) program and tested our implementation for various clinical cases on a heterogeneous system containing an Intel i7 quad-core CPU and an NVIDIA TITAN GPU. Comparing with a straightforward MCCS implementation on the same system (using both CPU and GPU for radiation ray tracing), our method gained 2-5X speedup without losing dose calculation accuracy. Conclusion: The results show that our new method improves the effective memory bandwidth and overall performance for MCCS on the CPU-GPU systems. Our proposed method can also be applied to accelerate other Monte Carlo dose calculation approaches. This research was supported in part by NSF under Grants CCF

  5. Suppression of the initial transient in Monte Carlo criticality simulations; Suppression du regime transitoire initial des simulations Monte-Carlo de criticite

    Energy Technology Data Exchange (ETDEWEB)

    Richet, Y

    2006-12-15

    Criticality Monte Carlo calculations aim at estimating the effective multiplication factor (k-effective) of a fissile system through iterations simulating neutron propagation (forming a Markov chain). Arbitrary initialization of the neutron population can deeply bias the k-effective estimate, defined as the mean of the k-effective values computed at each iteration. A simplified model of this cycle k-effective sequence is built, based on characteristics of industrial criticality Monte Carlo calculations. Statistical tests, inspired by Brownian bridge properties, are designed to test the stationarity of the cycle k-effective sequence. The detected initial transient is then suppressed in order to improve the estimate of the system k-effective. The different versions of this methodology are detailed and compared, first on a set of numerical tests representative of criticality Monte Carlo calculations, and second on real criticality calculations. Finally, the best methodologies observed in these tests are selected and used to improve industrial Monte Carlo criticality calculations. (author)
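
    A much-simplified version of a Brownian-bridge-style stationarity check (ignoring the cycle-to-cycle correlation that the thesis treats carefully; the threshold and the synthetic data are illustrative only) might look like this:

      import numpy as np

      def bridge_statistic(k_cycles):
          # Sup of the standardized cumulative deviations (a Brownian-bridge-like
          # statistic); large values suggest the sequence is not yet stationary.
          k = np.asarray(k_cycles, dtype=float)
          n = len(k)
          bridge = np.cumsum(k - k.mean()) / (k.std(ddof=1) * np.sqrt(n))
          return np.abs(bridge).max()

      def discard_transient(k_cycles, threshold=1.36, step=10):
          # Drop leading cycles until the remaining sequence passes the test
          # (1.36 is roughly the 95% Kolmogorov-Smirnov critical value).
          for start in range(0, len(k_cycles) - step, step):
              if bridge_statistic(k_cycles[start:]) < threshold:
                  return start
          return len(k_cycles)

      # synthetic cycle k_eff data: an exponential transient plus statistical noise
      rng = np.random.default_rng(8)
      cycles = 1.0 + 0.05 * np.exp(-np.arange(500) / 40.0) \
               + 0.002 * rng.standard_normal(500)
      start = discard_transient(cycles)
      print("discard first", start, "cycles; k_eff =", cycles[start:].mean())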

  6. Monte Carlo criticality analysis for dissolvers with neutron poison

    International Nuclear Information System (INIS)

    Yu, Deshun; Dong, Xiufang; Pu, Fuxiang.

    1987-01-01

    Criticality analysis for dissolvers with neutron poison is given on the basis of the Monte Carlo method. In Monte Carlo calculations of thermal neutron group parameters for fuel pieces, the neutron transport length is determined in terms of the maximum cross-section approach. A set of related effective multiplication factors (K_eff) is calculated by the Monte Carlo method for the three cases. The related numerical results are quite useful for the design and operation of this kind of dissolver in criticality safety analysis. (author)

  7. Improvements for Monte Carlo burnup calculation

    Energy Technology Data Exchange (ETDEWEB)

    Shenglong, Q.; Dong, Y.; Danrong, S.; Wei, L., E-mail: qiangshenglong@tsinghua.org.cn, E-mail: d.yao@npic.ac.cn, E-mail: songdr@npic.ac.cn, E-mail: luwei@npic.ac.cn [Nuclear Power Inst. of China, Cheng Du, Si Chuan (China)

    2015-07-01

    Monte Carlo burnup calculation is a development trend in reactor physics, and much work remains to be done for engineering applications. Based on the Monte Carlo burnup code MOI, non-fuel burnup calculation methods and critical search suggestions are presented in this paper. For non-fuel burnup, a mixed burnup mode improves the accuracy and efficiency of the burnup calculation. For the critical search of control rod position, a new method called ABN, based on the ABA method used by MC21, is proposed for the first time in this paper. (author)

  8. Monte Carlo dose distributions for radiosurgery

    International Nuclear Information System (INIS)

    Perucha, M.; Leal, A.; Rincon, M.; Carrasco, E.

    2001-01-01

    The precision of radiosurgery treatment planning systems is limited by the approximations of their algorithms and by their dosimetric input data. This fact is especially important for small fields. However, the Monte Carlo method is an accurate alternative, as it considers every aspect of particle transport. In this work an acoustic neurinoma is studied by comparing the dose distributions of a planning system and of Monte Carlo. Relative shifts have been measured and, furthermore, dose-volume histograms have been calculated for the target and adjacent organs at risk. (orig.)

  9. Quasi Monte Carlo methods for optimization models of the energy industry with pricing and load processes; Quasi-Monte Carlo Methoden fuer Optimierungsmodelle der Energiewirtschaft mit Preis- und Last-Prozessen

    Energy Technology Data Exchange (ETDEWEB)

    Leoevey, H.; Roemisch, W. [Humboldt-Univ., Berlin (Germany)

    2015-07-01

    We discuss progress in quasi-Monte Carlo methods for the numerical calculation of integrals or expected values and justify why these methods are more efficient than classic Monte Carlo methods. Quasi-Monte Carlo methods are found to be particularly efficient if the integrands have a low effective dimension. We therefore also discuss the concept of effective dimension and show, using the example of a stochastic optimization model from the energy industry, that such models can possess a low effective dimension. Modern quasi-Monte Carlo methods are therefore very promising for such models. [German] Wir diskutieren Fortschritte bei Quasi-Monte Carlo Methoden zur numerischen Berechnung von Integralen bzw. Erwartungswerten und begruenden warum diese Methoden effizienter sind als die klassischen Monte Carlo Methoden. Quasi-Monte Carlo Methoden erweisen sich als besonders effizient, falls die Integranden eine geringe effektive Dimension besitzen. Deshalb diskutieren wir auch den Begriff effektive Dimension und weisen am Beispiel eines stochastischen Optimierungsmodell aus der Energiewirtschaft nach, dass solche Modelle eine niedrige effektive Dimension besitzen koennen. Moderne Quasi-Monte Carlo Methoden sind deshalb fuer solche Modelle sehr erfolgversprechend.

  10. Shell model Monte Carlo methods

    International Nuclear Information System (INIS)

    Koonin, S.E.; Dean, D.J.; Langanke, K.

    1997-01-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo (SMMC) methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal and rotational behavior of rare-earth and γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. (orig.)

  11. Monte Carlo Methods in ICF

    Science.gov (United States)

    Zimmerman, George B.

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, inflight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.

  12. Monte Carlo methods in ICF

    International Nuclear Information System (INIS)

    Zimmerman, George B.

    1997-01-01

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, inflight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials

  13. Elementary and Advanced Computer Projects for the Physics Classroom and Laboratory

    Science.gov (United States)

    1992-12-01

    the language of science and engineering in industry and government laboratories (although C is becoming a powerful competitor). RM/FORTRAN (cost $400)... An AD number may be obtained from the National Technical Information Service, U.S. Department of Commerce, Springfield, Virginia 22151.

  14. BREM5 electroweak Monte Carlo

    International Nuclear Information System (INIS)

    Kennedy, D.C. II.

    1987-01-01

    This is an update on the progress of the BREMMUS Monte Carlo simulator, particularly in its current incarnation, BREM5. The present report is intended only as a follow-up to the Mark II/Granlibakken proceedings, and those proceedings should be consulted for a complete description of the capabilities and goals of the BREMMUS program. The new BREM5 program improves on the previous version of BREMMUS, BREM2, in a number of important ways. In BREM2, the internal loop (oblique) corrections were not treated in consistent fashion, a deficiency that led to renormalization scheme-dependence; i.e., physical results, such as cross sections, were dependent on the method used to eliminate infinities from the theory. Of course, this problem cannot be tolerated in a Monte Carlo designed for experimental use. BREM5 incorporates a new way of treating the oblique corrections, as explained in the Granlibakken proceedings, that guarantees renormalization scheme-independence and dramatically simplifies the organization and calculation of radiative corrections. This technique is to be presented in full detail in a forthcoming paper. BREM5 is, at this point, the only Monte Carlo to contain the entire set of one-loop corrections to electroweak four-fermion processes and renormalization scheme-independence. 3 figures

  15. PEPSI: a Monte Carlo generator for polarized leptoproduction

    International Nuclear Information System (INIS)

    Mankiewicz, L.

    1992-01-01

    We describe PEPSI (Polarized Electron Proton Scattering Interactions) a Monte Carlo program for the polarized deep inelastic leptoproduction mediated by electromagnetic interaction. The code is a modification of the LEPTO 4.3 Lund Monte Carlo for unpolarized scattering and requires the standard polarization-independent JETSET routines to perform fragmentation into final hadrons. (orig.)

  16. Importance estimation in Monte Carlo modelling of neutron and photon transport

    International Nuclear Information System (INIS)

    Mickael, M.W.

    1992-01-01

    The estimation of neutron and photon importance in a three-dimensional geometry is achieved using a coupled Monte Carlo and diffusion theory calculation. The parameters required for the solution of the multigroup adjoint diffusion equation are estimated from an analog Monte Carlo simulation of the system under investigation. The solution of the adjoint diffusion equation is then used as an estimate of the particle importance in the actual simulation. This approach provides an automated and efficient variance reduction method for Monte Carlo simulations. The technique has been successfully applied to Monte Carlo simulation of neutron and coupled neutron-photon transport in the nuclear well-logging field. The results show that the importance maps obtained in a few minutes of computer time using this technique are in good agreement with Monte Carlo generated importance maps that require prohibitive computing times. The application of this method to Monte Carlo modelling of the response of neutron porosity and pulsed neutron instruments has resulted in major reductions in computation time. (Author)

  17. Laboratory tests on neutron shields for gamma-ray detectors in space

    CERN Document Server

    Hong, J; Hailey, C J

    2000-01-01

    Shields capable of suppressing neutron-induced background in new classes of gamma-ray detectors such as CdZnTe are becoming important for a variety of reasons. These include a high cross section for neutron interactions in new classes of detector materials as well as the inefficient vetoing of neutron-induced background in conventional active shields. We have previously demonstrated through Monte-Carlo simulations how our new approach, supershields, is superior to the monolithic, bi-atomic neutron shields which have been developed in the past. We report here on the first prototype models for supershields based on boron and hydrogen. We verify the performance of these supershields through laboratory experiments. These experimental results, as well as measurements of conventional monolithic neutron shields, are shown to be consistent with Monte-Carlo simulations. We discuss the implications of this experiment for designs of supershields in general and their application to future hard X-ray/gamma-ray experiments...

  18. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations

    Energy Technology Data Exchange (ETDEWEB)

    Urbatsch, T.J.

    1995-11-01

    If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.

  19. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations

    International Nuclear Information System (INIS)

    Urbatsch, T.J.

    1995-11-01

    If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors

  20. Study on random number generator in Monte Carlo code

    International Nuclear Information System (INIS)

    Oya, Kentaro; Kitada, Takanori; Tanaka, Shinichi

    2011-01-01

    The Monte Carlo code uses a sequence of pseudo-random numbers from a random number generator (RNG) to simulate particle histories. A pseudo-random number sequence has its own period, depending on the generation method, and the period should be long enough that it is not exhausted during one Monte Carlo calculation, to ensure correctness, especially of the standard deviation of the results. The linear congruential generator (LCG) is widely used as the Monte Carlo RNG, but its period is not very long when one considers the increasing number of simulation histories per Monte Carlo calculation made possible by the remarkable enhancement of computer performance. Recently, many kinds of RNG have been developed, and some of their features are better than those of the LCG. In this study, we investigate an appropriate RNG for a Monte Carlo code as an alternative to the LCG, especially for the case of enormous numbers of histories. It is found that xorshift has desirable features compared with the LCG: a larger period, comparable speed of random number generation, better randomness, and good applicability to parallel calculation. (author)
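
    For reference, one member of the xorshift family (Marsaglia's xorshift64 with the common 13/7/17 shift triple) needs only a few operations per draw; a Python sketch, not the implementation studied in the paper:

      MASK64 = (1 << 64) - 1

      def xorshift64(seed=88172645463325252):
          # Marsaglia's xorshift64 generator: three shift/XOR steps per draw.
          # Yields uniform floats in [0, 1).
          x = seed & MASK64
          while True:
              x ^= (x << 13) & MASK64
              x ^= x >> 7
              x ^= (x << 17) & MASK64
              yield x / 2**64

      rng = xorshift64()
      print([round(next(rng), 6) for _ in range(5)])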

  1. Neutron Standards Laboratory of the CIEMAT

    International Nuclear Information System (INIS)

    Guzman G, K. A.; Mendez V, R.; Vega C, H. R.

    2014-08-01

    By means of a series of Monte Carlo calculations with the code MCNPX, the neutron fields produced by the existing calibration sources in the Neutron Standards Laboratory of the Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas (CIEMAT) were characterized. The laboratory has two neutron calibration sources, one of 241AmBe and the other of 252Cf, which are stored in a water pool. A detailed three-dimensional model of the room was built, including the stainless steel base and, in particular, the source selector, which positions the sources 4 m above the floor to be irradiated on the irradiation table, and the storage pool. Each source was defined in the model with its double steel encapsulation. The spectra were calculated for different cases in order to quantify the contribution of each element that affects neutron transport. The spectra of the calibration sources were calculated at distances of 0, 15, 35, 50 and 300 cm from the source above the base, together with the corresponding values of the ambient dose equivalent using the ICRP-74 approach. The results clearly show that the largest contribution to the modification of the spectrum is attributable to the walls and floor of the Neutron Standards Laboratory installation. (Author)

  2. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z. [Institute of Applied Physics and Computational Mathematics, Beijing, 100094 (China)

    2013-07-01

    Analysis and modeling of nuclear reactors can lead to memory overload for a single core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  3. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    International Nuclear Information System (INIS)

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z.

    2013-01-01

    Analysis and modeling of nuclear reactors can lead to memory overload for a single core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  4. An analysis on the costs of Belo Monte hydroelectric power plant; Uma analise sobre os custos da hidreletrica Belo Monte

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Marcos Vinicius Miranda da [Universidade de Sao Paulo (PIPGE/USP), SP (Brazil). Programa Interunidades de Pos-Graduacao em Energia], e-mail: energiapara@yahoo.com.br

    2008-07-01

    The Belo Monte hydropower plant's low generation cost is among the arguments used by Centrais Eletricas do Norte do Brazil (ELETRONORTE), a Brazilian state electric utility, to justify its construction. This paper shows that the generation cost presented by ELETROBRAS is very low compared with world cost patterns and is probably unrealistic. It also shows that the generation cost alone cannot be used to determine the Belo Monte dam's economic feasibility. Other costs need to be included, such as socio-environmental degradation and its control, financial compensation for the use of hydraulic resources, and transmission and thermal backup stations, in addition, evidently, to the generation cost itself, in order to assure the credibility of the economic analysis of the Belo Monte hydropower plant. (author)

  5. Monte Carlo method applied to medical physics

    International Nuclear Information System (INIS)

    Oliveira, C.; Goncalves, I.F.; Chaves, A.; Lopes, M.C.; Teixeira, N.; Matos, B.; Goncalves, I.C.; Ramalho, A.; Salgado, J.

    2000-01-01

    The main application of the Monte Carlo method in medical physics is dose calculation. This paper shows some results of two dose calculation studies and two other applications: optimisation of a neutron field for Boron Neutron Capture Therapy and optimisation of a filter for a beam tube for several purposes. The time needed for Monte Carlo calculations - the main barrier to their intensive use - is being overcome by faster and cheaper computers. (author)

  6. A radiating shock evaluated using Implicit Monte Carlo Diffusion

    International Nuclear Information System (INIS)

    Cleveland, M.; Gentile, N.

    2013-01-01

    Implicit Monte Carlo [1] (IMC) has been shown to be very expensive when used to evaluate a radiation field in opaque media. Implicit Monte Carlo Diffusion (IMD) [2], which evaluates a spatial discretized diffusion equation using a Monte Carlo algorithm, can be used to reduce the cost of evaluating the radiation field in opaque media [2]. This work couples IMD to the hydrodynamics equations to evaluate opaque diffusive radiating shocks. The Lowrie semi-analytic diffusive radiating shock benchmark[a] is used to verify our implementation of the coupled system of equations. (authors)

  7. The Monte Carlo method the method of statistical trials

    CERN Document Server

    Shreider, YuA

    1966-01-01

    The Monte Carlo Method: The Method of Statistical Trials is a systematic account of the fundamental concepts and techniques of the Monte Carlo method, together with its range of applications. Some of these applications include the computation of definite integrals, neutron physics, and in the investigation of servicing processes. This volume is comprised of seven chapters and begins with an overview of the basic features of the Monte Carlo method and typical examples of its application to simple problems in computational mathematics. The next chapter examines the computation of multi-dimensio

  8. Automated Monte Carlo biasing for photon-generated electrons near surfaces.

    Energy Technology Data Exchange (ETDEWEB)

    Franke, Brian Claude; Crawford, Martin James; Kensek, Ronald Patrick

    2009-09-01

    This report describes efforts to automate the biasing of coupled electron-photon Monte Carlo particle transport calculations. The approach was based on weight-windows biasing. Weight-window settings were determined using adjoint-flux Monte Carlo calculations. A variety of algorithms were investigated for adaptivity of the Monte Carlo tallies. Tree data structures were used to investigate spatial partitioning. Functional-expansion tallies were used to investigate higher-order spatial representations.

  9. Applicability of quasi-Monte Carlo for lattice systems

    International Nuclear Information System (INIS)

    Ammon, Andreas; Deutsches Elektronen-Synchrotron; Hartung, Tobias; Jansen, Karl; Leovey, Hernan; Griewank, Andreas; Mueller-Preussker, Michael

    2013-11-01

    This project investigates the applicability of quasi-Monte Carlo methods to Euclidean lattice systems in order to improve the asymptotic error scaling of observables for such theories. The error of an observable calculated by averaging over random observations generated from ordinary Monte Carlo simulations scales like N^(-1/2), where N is the number of observations. By means of quasi-Monte Carlo methods it is possible to improve this scaling for certain problems to N^(-1), or even further if the problems are regular enough. We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling of all investigated observables in both cases.
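
    The error-scaling contrast described here can be seen already in a toy one-dimensional integral. The following Python comparison (illustrative only, not taken from the project) integrates x^2 on [0, 1] with pseudo-random points and with a base-2 van der Corput low-discrepancy sequence; the quasi-Monte Carlo error falls off roughly like N^(-1) while the pseudo-random error follows N^(-1/2).

      # Toy comparison of Monte Carlo vs. quasi-Monte Carlo error scaling
      # for the integral of x^2 over [0, 1] (exact value 1/3).
      import random

      def van_der_corput(n, base=2):
          """First n points of the base-b van der Corput low-discrepancy sequence."""
          points = []
          for i in range(1, n + 1):
              x, denom = 0.0, 1.0
              while i > 0:
                  denom *= base
                  i, r = divmod(i, base)
                  x += r / denom
              points.append(x)
          return points

      def integrate(points):
          return sum(p * p for p in points) / len(points)

      random.seed(0)
      exact = 1.0 / 3.0
      for n in (10**2, 10**3, 10**4, 10**5):
          mc  = integrate([random.random() for _ in range(n)])
          qmc = integrate(van_der_corput(n))
          print(f"N={n:6d}   MC error={abs(mc - exact):.2e}   QMC error={abs(qmc - exact):.2e}")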

  10. Applicability of quasi-Monte Carlo for lattice systems

    Energy Technology Data Exchange (ETDEWEB)

    Ammon, Andreas [Berlin Humboldt-Univ. (Germany). Dept. of Physics; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Hartung, Tobias [King' s College London (United Kingdom). Dept. of Mathematics; Jansen, Karl [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Leovey, Hernan; Griewank, Andreas [Berlin Humboldt-Univ. (Germany). Dept. of Mathematics; Mueller-Preussker, Michael [Berlin Humboldt-Univ. (Germany). Dept. of Physics

    2013-11-15

    This project investigates the applicability of quasi-Monte Carlo methods to Euclidean lattice systems in order to improve the asymptotic error scaling of observables for such theories. The error of an observable calculated by averaging over random observations generated from ordinary Monte Carlo simulations scales like N^(-1/2), where N is the number of observations. By means of quasi-Monte Carlo methods it is possible to improve this scaling for certain problems to N^(-1), or even further if the problems are regular enough. We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling of all investigated observables in both cases.

  11. Uniform distribution and quasi-Monte Carlo methods discrepancy, integration and applications

    CERN Document Server

    Kritzer, Peter; Pillichshammer, Friedrich; Winterhof, Arne

    2014-01-01

    The survey articles in this book focus on number theoretic point constructions, uniform distribution theory, and quasi-Monte Carlo methods. As deterministic versions of the Monte Carlo method, quasi-Monte Carlo rules enjoy increasing popularity, with many fruitful applications in mathematical practice, as for example in finance, computer graphics, and biology.

  12. Clinical implementation of full Monte Carlo dose calculation in proton beam therapy

    International Nuclear Information System (INIS)

    Paganetti, Harald; Jiang, Hongyu; Parodi, Katia; Slopsema, Roelf; Engelsman, Martijn

    2008-01-01

    The goal of this work was to facilitate the clinical use of Monte Carlo proton dose calculation to support routine treatment planning and delivery. The Monte Carlo code Geant4 was used to simulate the treatment head setup, including a time-dependent simulation of modulator wheels (for broad beam modulation) and magnetic field settings (for beam scanning). Any patient-field-specific setup can be modeled according to the treatment control system of the facility. The code was benchmarked against phantom measurements. Using a simulation of the ionization chamber reading in the treatment head allows the Monte Carlo dose to be specified in absolute units (Gy per ionization chamber reading). Next, the capability of reading CT data information was implemented into the Monte Carlo code to model patient anatomy. To allow time-efficient dose calculation, the standard Geant4 tracking algorithm was modified. Finally, a software link of the Monte Carlo dose engine to the patient database and the commercial planning system was established to allow data exchange, thus completing the implementation of the proton Monte Carlo dose calculation engine ('DoC++'). Monte Carlo re-calculated plans are a valuable tool to revisit decisions in the planning process. Identification of clinically significant differences between Monte Carlo and pencil-beam-based dose calculations may also drive improvements of current pencil-beam methods. As an example, four patients (29 fields in total) with tumors in the head and neck regions were analyzed. Differences between the pencil-beam algorithm and Monte Carlo were identified in particular near the end of range, both due to dose degradation and overall differences in range prediction due to bony anatomy in the beam path. Further, the Monte Carlo reports dose-to-tissue as compared to dose-to-water by the planning system. Our implementation is tailored to a specific Monte Carlo code and the treatment planning system XiO (Computerized Medical Systems Inc

  13. Monte Carlo simulation of {beta}-{gamma} coincidence system using plastic scintillators in 4{pi} geometry

    Energy Technology Data Exchange (ETDEWEB)

    Dias, M.S. [Instituto de Pesquisas Energeticas e Nucleares: IPEN-CNEN/SP, Av. Prof. Lineu Prestes 2242, 05508-000 Sao Paulo, SP (Brazil)], E-mail: msdias@ipen.br; Piuvezam-Filho, H. [Instituto de Pesquisas Energeticas e Nucleares: IPEN-CNEN/SP, Av. Prof. Lineu Prestes 2242, 05508-000 Sao Paulo, SP (Brazil); Baccarelli, A.M. [Departamento de Fisica-PUC/SP-Rua Marques de Paranagua 111, 01303-050 Sao Paulo, SP (Brazil); Takeda, M.N. [Universidade Santo Amaro, UNISA-Rua Prof. Eneas da Siqueira Neto 340, 04829-300 Sao Paulo, SP (Brazil); Koskinas, M.F. [Instituto de Pesquisas Energeticas e Nucleares: IPEN-CNEN/SP, Av. Prof. Lineu Prestes 2242, 05508-000 Sao Paulo, SP (Brazil)

    2007-09-21

    A modified version of a Monte Carlo code called Esquema, developed at the Nuclear Metrology Laboratory in IPEN, Sao Paulo, Brazil, has been applied for simulating a 4πβ(PS)-γ coincidence system designed for primary radionuclide standardisation. This system consists of a plastic scintillator in 4π geometry, for alpha or electron detection, coupled to a NaI(Tl) counter for gamma-ray detection. The response curves for monoenergetic electrons and photons have been calculated previously by Penelope code and applied as input data to code Esquema. The latter code simulates all the disintegration processes, from the precursor nucleus to the ground state of the daughter radionuclide. As a result, the curve between the observed disintegration rate as a function of the beta efficiency parameter can be simulated. A least-squares fit between the experimental activity values and the Monte Carlo calculation provided the actual radioactive source activity, without need of conventional extrapolation procedures. Application of this methodology to 60Co and 133Ba radioactive sources is presented and showed results in good agreement with a conventional proportional counter 4πβ(PC)-γ coincidence system.

  14. Monte Carlo simulation of β-γ coincidence system using plastic scintillators in 4π geometry

    International Nuclear Information System (INIS)

    Dias, M.S.; Piuvezam-Filho, H.; Baccarelli, A.M.; Takeda, M.N.; Koskinas, M.F.

    2007-01-01

    A modified version of a Monte Carlo code called Esquema, developed at the Nuclear Metrology Laboratory in IPEN, Sao Paulo, Brazil, has been applied for simulating a 4πβ(PS)-γ coincidence system designed for primary radionuclide standardisation. This system consists of a plastic scintillator in 4π geometry, for alpha or electron detection, coupled to a NaI(Tl) counter for gamma-ray detection. The response curves for monoenergetic electrons and photons have been calculated previously by Penelope code and applied as input data to code Esquema. The latter code simulates all the disintegration processes, from the precursor nucleus to the ground state of the daughter radionuclide. As a result, the curve between the observed disintegration rate as a function of the beta efficiency parameter can be simulated. A least-squares fit between the experimental activity values and the Monte Carlo calculation provided the actual radioactive source activity, without need of conventional extrapolation procedures. Application of this methodology to 60Co and 133Ba radioactive sources is presented and showed results in good agreement with a conventional proportional counter 4πβ(PC)-γ coincidence system

  15. Exponential convergence on a continuous Monte Carlo transport problem

    International Nuclear Information System (INIS)

    Booth, T.E.

    1997-01-01

    For more than a decade, it has been known that exponential convergence on discrete transport problems was possible using adaptive Monte Carlo techniques. An adaptive Monte Carlo method that empirically produces exponential convergence on a simple continuous transport problem is described

  16. A flexible coupling scheme for Monte Carlo and thermal-hydraulics codes

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. Eduard, E-mail: J.E.Hoogenboom@tudelft.nl [Delft University of Technology (Netherlands); Ivanov, Aleksandar; Sanchez, Victor, E-mail: Aleksandar.Ivanov@kit.edu, E-mail: Victor.Sanchez@kit.edu [Karlsruhe Institute of Technology, Institute of Neutron Physics and Reactor Technology, Eggenstein-Leopoldshafen (Germany); Diop, Cheikh, E-mail: Cheikh.Diop@cea.fr [CEA/DEN/DANS/DM2S/SERMA, Commissariat a l' Energie Atomique, Gif-sur-Yvette (France)

    2011-07-01

    A coupling scheme between a Monte Carlo code and a thermal-hydraulics code is being developed within the European NURISP project for comprehensive and validated reactor analysis. The scheme is flexible as it allows different Monte Carlo codes and different thermal-hydraulics codes to be used. At present the MCNP and TRIPOLI4 Monte Carlo codes can be used and the FLICA4 and SubChanFlow thermal-hydraulics codes. For all these codes only an original executable is necessary. A Python script drives the iterations between Monte Carlo and thermal-hydraulics calculations. It also calls a conversion program to merge a master input file for the Monte Carlo code with the appropriate temperature and coolant density data from the thermal-hydraulics calculation. Likewise it calls another conversion program to merge a master input file for the thermal-hydraulics code with the power distribution data from the Monte Carlo calculation. Special attention is given to the neutron cross section data for the various required temperatures in the Monte Carlo calculation. Results are shown for an infinite lattice of PWR fuel pin cells and a 3 x 3 fuel BWR pin cell cluster. Various possibilities for further improvement and optimization of the coupling system are discussed. (author)
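
    The driver script described here can be pictured as a simple fixed-point loop. The sketch below is purely hypothetical: run_monte_carlo and run_thermal_hydraulics are dummy stand-ins, not the interfaces of MCNP, TRIPOLI4, FLICA4 or SubChanFlow and not the NURISP conversion programs; only the alternate-until-convergence structure is the point.

      # Hypothetical coupling driver: alternate a (dummy) Monte Carlo power solve and a
      # (dummy) thermal-hydraulics update until the power map stops changing.
      def run_monte_carlo(temperatures):
          # stand-in physics: hotter cells produce slightly less power
          return [1.0 / t for t in temperatures]

      def run_thermal_hydraulics(power):
          # stand-in physics: temperature follows the local power
          return [1.0 + 0.1 * p for p in power]

      def coupled_iteration(n_cells=5, tol=1e-6, max_iter=50):
          temperatures = [1.0] * n_cells             # initial guess of the TH field
          previous_power = None
          for it in range(max_iter):
              power = run_monte_carlo(temperatures)
              temperatures = run_thermal_hydraulics(power)
              if previous_power and max(abs(p - q) for p, q in zip(power, previous_power)) < tol:
                  return it + 1, power, temperatures
              previous_power = power
          return max_iter, power, temperatures

      iterations, power, temperatures = coupled_iteration()
      print("converged after", iterations, "iterations; power =", [round(p, 4) for p in power])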

  17. A flexible coupling scheme for Monte Carlo and thermal-hydraulics codes

    International Nuclear Information System (INIS)

    Hoogenboom, J. Eduard; Ivanov, Aleksandar; Sanchez, Victor; Diop, Cheikh

    2011-01-01

    A coupling scheme between a Monte Carlo code and a thermal-hydraulics code is being developed within the European NURISP project for comprehensive and validated reactor analysis. The scheme is flexible as it allows different Monte Carlo codes and different thermal-hydraulics codes to be used. At present the MCNP and TRIPOLI4 Monte Carlo codes can be used and the FLICA4 and SubChanFlow thermal-hydraulics codes. For all these codes only an original executable is necessary. A Python script drives the iterations between Monte Carlo and thermal-hydraulics calculations. It also calls a conversion program to merge a master input file for the Monte Carlo code with the appropriate temperature and coolant density data from the thermal-hydraulics calculation. Likewise it calls another conversion program to merge a master input file for the thermal-hydraulics code with the power distribution data from the Monte Carlo calculation. Special attention is given to the neutron cross section data for the various required temperatures in the Monte Carlo calculation. Results are shown for an infinite lattice of PWR fuel pin cells and a 3 x 3 fuel BWR pin cell cluster. Various possibilities for further improvement and optimization of the coupling system are discussed. (author)

  18. Isotopic depletion with Monte Carlo

    International Nuclear Information System (INIS)

    Martin, W.R.; Rathkopf, J.A.

    1996-06-01

    This work considers a method to deplete isotopes during a time- dependent Monte Carlo simulation of an evolving system. The method is based on explicitly combining a conventional estimator for the scalar flux with the analytical solutions to the isotopic depletion equations. There are no auxiliary calculations; the method is an integral part of the Monte Carlo calculation. The method eliminates negative densities and reduces the variance in the estimates for the isotope densities, compared to existing methods. Moreover, existing methods are shown to be special cases of the general method described in this work, as they can be derived by combining a high variance estimator for the scalar flux with a low-order approximation to the analytical solution to the depletion equation
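
    A one-isotope toy version of the idea (not the paper's estimator, which handles full isotope chains inside the transport run) is sketched below: a crude Monte Carlo track-length estimate of a scalar flux is combined with the analytical solution N(t) = N0*exp(-sigma*phi*t) instead of numerically differencing the depletion equation. All numbers are illustrative.

      # Toy sketch: Monte Carlo flux estimate + analytic single-isotope depletion.
      import math, random

      def estimate_flux(n_histories=100000, source_rate=1.0e14, track_mean=2.0):
          """Crude track-length estimator: mean path length per source particle."""
          random.seed(1)
          total_track = sum(random.expovariate(1.0 / track_mean) for _ in range(n_histories))
          volume = 1.0                                                  # cm^3
          return source_rate * total_track / (n_histories * volume)    # n / cm^2 / s

      sigma_a = 1.0e-24        # absorption cross section, cm^2 (about one barn)
      n0 = 1.0e22              # initial atom density, atoms / cm^3
      phi = estimate_flux()
      for days in (0, 100, 300, 1000):
          t = days * 86400.0
          print(f"t = {days:4d} d   N = {n0 * math.exp(-sigma_a * phi * t):.4e} atoms/cm^3")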

  19. Multilevel sequential Monte-Carlo samplers

    KAUST Repository

    Jasra, Ajay

    2016-01-01

    Multilevel Monte-Carlo methods provide a powerful computational technique for reducing the computational cost of estimating expectations for a given computational effort. They are particularly relevant for computational problems when approximate distributions are determined via a resolution parameter h, with h=0 giving the theoretical exact distribution (e.g. SDEs or inverse problems with PDEs). The method provides a benefit by coupling samples from successive resolutions, and estimating differences of successive expectations. We develop a methodology that brings Sequential Monte-Carlo (SMC) algorithms within the framework of the Multilevel idea, as SMC provides a natural set-up for coupling samples over different resolutions. We prove that the new algorithm indeed preserves the benefits of the multilevel principle, even if samples at all resolutions are now correlated.
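
    The telescoping-sum idea behind the multilevel approach can be shown with plain multilevel Monte Carlo (not the sequential Monte Carlo variant of this record) for E[S_T] of a geometric Brownian motion discretised by Euler steps of size T/2^l. Fine and coarse paths at each level share the same Brownian increments, so the level differences are cheap to estimate; all parameters below are illustrative.

      # Toy multilevel Monte Carlo estimate of E[S_T] for dS = mu*S dt + sig*S dW.
      import math, random

      mu, sig, S0, T = 0.05, 0.2, 1.0, 1.0

      def level_difference(level, n_samples):
          """Monte Carlo average of P_l - P_{l-1} on coupled Euler paths (P_{-1} := 0)."""
          random.seed(level)                     # levels are estimated independently
          nf = 2 ** level                        # fine time steps at this level
          dt = T / nf
          acc = 0.0
          for _ in range(n_samples):
              s_fine, s_coarse, dw_pair = S0, S0, 0.0
              for step in range(nf):
                  dw = random.gauss(0.0, math.sqrt(dt))
                  s_fine += mu * s_fine * dt + sig * s_fine * dw
                  dw_pair += dw
                  if level > 0 and step % 2 == 1:      # one coarse step per two fine steps
                      s_coarse += mu * s_coarse * 2 * dt + sig * s_coarse * dw_pair
                      dw_pair = 0.0
              acc += s_fine - (s_coarse if level > 0 else 0.0)
          return acc / n_samples

      levels = 5
      samples = [20000, 10000, 5000, 2500, 1250, 625]      # fewer samples on finer levels
      estimate = sum(level_difference(l, samples[l]) for l in range(levels + 1))
      print("MLMC estimate of E[S_T]:", round(estimate, 4), "  exact:", round(S0 * math.exp(mu * T), 4))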

  20. Monte Carlo methods in ICF

    International Nuclear Information System (INIS)

    Zimmerman, G.B.

    1997-01-01

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, inflight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials. copyright 1997 American Institute of Physics

  1. Multilevel sequential Monte-Carlo samplers

    KAUST Repository

    Jasra, Ajay

    2016-01-05

    Multilevel Monte-Carlo methods provide a powerful computational technique for reducing the computational cost of estimating expectations for a given computational effort. They are particularly relevant for computational problems when approximate distributions are determined via a resolution parameter h, with h=0 giving the theoretical exact distribution (e.g. SDEs or inverse problems with PDEs). The method provides a benefit by coupling samples from successive resolutions, and estimating differences of successive expectations. We develop a methodology that brings Sequential Monte-Carlo (SMC) algorithms within the framework of the Multilevel idea, as SMC provides a natural set-up for coupling samples over different resolutions. We prove that the new algorithm indeed preserves the benefits of the multilevel principle, even if samples at all resolutions are now correlated.

  2. Monte Carlo systems used for treatment planning and dose verification

    Energy Technology Data Exchange (ETDEWEB)

    Brualla, Lorenzo [Universitaetsklinikum Essen, NCTeam, Strahlenklinik, Essen (Germany); Rodriguez, Miguel [Centro Medico Paitilla, Balboa (Panama); Lallena, Antonio M. [Universidad de Granada, Departamento de Fisica Atomica, Molecular y Nuclear, Granada (Spain)

    2017-04-15

    General-purpose radiation transport Monte Carlo codes have been used for estimation of the absorbed dose distribution in external photon and electron beam radiotherapy patients for several decades. Results obtained with these codes are usually more accurate than those provided by treatment planning systems based on non-stochastic methods. Traditionally, absorbed dose computations based on general-purpose Monte Carlo codes have been used only for research, owing to the difficulties associated with setting up a simulation and the long computation time required. To take advantage of radiation transport Monte Carlo codes applied to routine clinical practice, researchers and private companies have developed treatment planning and dose verification systems that are partly or fully based on fast Monte Carlo algorithms. This review presents a comprehensive list of the currently existing Monte Carlo systems that can be used to calculate or verify an external photon and electron beam radiotherapy treatment plan. Particular attention is given to those systems that are distributed, either freely or commercially, and that do not require programming tasks from the end user. These systems are compared in terms of features and the simulation time required to compute a set of benchmark calculations. (orig.)

  3. Multilevel Monte Carlo in Approximate Bayesian Computation

    KAUST Repository

    Jasra, Ajay

    2017-02-13

    In the following article we consider approximate Bayesian computation (ABC) inference. We introduce a method for numerically approximating ABC posteriors using the multilevel Monte Carlo (MLMC). A sequential Monte Carlo version of the approach is developed and it is shown under some assumptions that for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.

  4. Implementation of the n-body Monte-Carlo event generator into the Geant4 toolkit for photonuclear studies

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Wen, E-mail: wenluo-ok@163.com [School of Nuclear Science and Technology, University of South China, Hengyang 421001 (China); Lan, Hao-yang [School of Nuclear Science and Technology, University of South China, Hengyang 421001 (China); Xu, Yi; Balabanski, Dimiter L. [Extreme Light Infrastructure-Nuclear Physics, “Horia Hulubei” National Institute for Physics and Nuclear Engineering (IFIN-HH), 30 Reactorului, 077125 Bucharest-Magurele (Romania)

    2017-03-21

    A data-based Monte Carlo simulation algorithm, Geant4-GENBOD, was developed by coupling the n-body Monte-Carlo event generator to the Geant4 toolkit, aiming at accurate simulations of specific photonuclear reactions for diverse photonuclear physics studies. Good comparisons of Geant4-GENBOD calculations with reported measurements of photo-neutron production cross-sections and yields, and with reported energy spectra of the 6Li(n,α)t reaction were performed. Good agreements between the calculations and experimental data were found and the validation of the developed program was verified consequently. Furthermore, simulations for the 92Mo(γ,p) reaction of astrophysics relevance and photo-neutron production of 99Mo/99mTc and 225Ra/225Ac radioisotopes were investigated, which demonstrate the applicability of this program. We conclude that the Geant4-GENBOD is a reliable tool for study of the emerging experiment programs at high-intensity γ-beam laboratories, such as the Extreme Light Infrastructure – Nuclear Physics facility and the High Intensity Gamma-Ray Source at Duke University.

  5. Monte Carlo simulation of Markov unreliability models

    International Nuclear Information System (INIS)

    Lewis, E.E.; Boehm, F.

    1984-01-01

    A Monte Carlo method is formulated for the evaluation of the unreliability of complex systems with known component failure and repair rates. The formulation is in terms of a Markov process allowing dependences between components to be modeled and computational efficiencies to be achieved in the Monte Carlo simulation. Two variance reduction techniques, forced transition and failure biasing, are employed to increase computational efficiency of the random walk procedure. For an example problem these result in improved computational efficiency by more than three orders of magnitude over analog Monte Carlo. The method is generalized to treat problems with distributed failure and repair rate data, and a batching technique is introduced and shown to result in substantial increases in computational efficiency for an example problem. A method for separating the variance due to the data uncertainty from that due to the finite number of random walks is presented. (orig.)
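
    The quantity being estimated can be fixed with a toy analog run before any variance reduction is applied. The sketch below simulates a single repairable component with constant failure and repair rates and estimates its unavailability at a mission time, for comparison with the exact two-state Markov result; the forced-transition and failure-biasing techniques of the record, which matter when failures are rare, are deliberately not shown. Rates and times are made up.

      # Analog Monte Carlo unavailability of one repairable component vs. the exact Markov result.
      import math, random

      lam, mu, T, trials = 1.0e-3, 1.0e-1, 1000.0, 200000   # failure rate, repair rate, mission time
      random.seed(42)

      failed_at_T = 0
      for _ in range(trials):
          t, up = 0.0, True
          while True:
              t += random.expovariate(lam if up else mu)    # sojourn in the current state
              if t >= T:
                  break                                     # still in this state at time T
              up = not up                                   # otherwise transition and continue
          if not up:
              failed_at_T += 1

      estimate = failed_at_T / trials
      exact = lam / (lam + mu) * (1.0 - math.exp(-(lam + mu) * T))
      print(f"analog MC unavailability {estimate:.5f}   exact {exact:.5f}")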

  6. A residual Monte Carlo method for discrete thermal radiative diffusion

    International Nuclear Information System (INIS)

    Evans, T.M.; Urbatsch, T.J.; Lichtenstein, H.; Morel, J.E.

    2003-01-01

    Residual Monte Carlo methods reduce statistical error at a rate of exp(-bN), where b is a positive constant and N is the number of particle histories. Contrast this convergence rate with 1/√N, which is the rate of statistical error reduction for conventional Monte Carlo methods. Thus, residual Monte Carlo methods hold great promise for increased efficiency relative to conventional Monte Carlo methods. Previous research has shown that the application of residual Monte Carlo methods to the solution of continuum equations, such as the radiation transport equation, is problematic for all but the simplest of cases. However, the residual method readily applies to discrete systems as long as those systems are monotone, i.e., they produce positive solutions given positive sources. We develop a residual Monte Carlo method for solving a discrete 1D non-linear thermal radiative equilibrium diffusion equation, and we compare its performance with that of the discrete conventional Monte Carlo method upon which it is based. We find that the residual method provides efficiency gains of many orders of magnitude. Part of the residual gain is due to the fact that we begin each timestep with an initial guess equal to the solution from the previous timestep. Moreover, fully consistent non-linear solutions can be obtained in a reasonable amount of time because of the effective lack of statistical noise. We conclude that the residual approach has great potential and that further research into such methods should be pursued for more general discrete and continuum systems

  7. Stochastic approximation Monte Carlo importance sampling for approximating exact conditional probabilities

    KAUST Repository

    Cheon, Sooyoung

    2013-02-16

    Importance sampling and Markov chain Monte Carlo methods have been used in exact inference for contingency tables for a long time, however, their performances are not always very satisfactory. In this paper, we propose a stochastic approximation Monte Carlo importance sampling (SAMCIS) method for tackling this problem. SAMCIS is a combination of adaptive Markov chain Monte Carlo and importance sampling, which employs the stochastic approximation Monte Carlo algorithm (Liang et al., J. Am. Stat. Assoc., 102(477):305-320, 2007) to draw samples from an enlarged reference set with a known Markov basis. Compared to the existing importance sampling and Markov chain Monte Carlo methods, SAMCIS has a few advantages, such as fast convergence, ergodicity, and the ability to achieve a desired proportion of valid tables. The numerical results indicate that SAMCIS can outperform the existing importance sampling and Markov chain Monte Carlo methods: It can produce much more accurate estimates in much shorter CPU time than the existing methods, especially for the tables with high degrees of freedom. © 2013 Springer Science+Business Media New York.

  8. Stochastic approximation Monte Carlo importance sampling for approximating exact conditional probabilities

    KAUST Repository

    Cheon, Sooyoung; Liang, Faming; Chen, Yuguo; Yu, Kai

    2013-01-01

    Importance sampling and Markov chain Monte Carlo methods have been used in exact inference for contingency tables for a long time, however, their performances are not always very satisfactory. In this paper, we propose a stochastic approximation Monte Carlo importance sampling (SAMCIS) method for tackling this problem. SAMCIS is a combination of adaptive Markov chain Monte Carlo and importance sampling, which employs the stochastic approximation Monte Carlo algorithm (Liang et al., J. Am. Stat. Assoc., 102(477):305-320, 2007) to draw samples from an enlarged reference set with a known Markov basis. Compared to the existing importance sampling and Markov chain Monte Carlo methods, SAMCIS has a few advantages, such as fast convergence, ergodicity, and the ability to achieve a desired proportion of valid tables. The numerical results indicate that SAMCIS can outperform the existing importance sampling and Markov chain Monte Carlo methods: It can produce much more accurate estimates in much shorter CPU time than the existing methods, especially for the tables with high degrees of freedom. © 2013 Springer Science+Business Media New York.

  9. Pore-water evolution and solute-transport mechanisms in Opalinus Clay at Mont Terri and Mont Russelin (Canton Jura, Switzerland)

    Energy Technology Data Exchange (ETDEWEB)

    Mazurek, M. [Institute of Geological Sciences, University of Berne, Berne (Switzerland); Haller de, A. [Earth and Environmental Sciences, University of Geneva, Geneva (Switzerland)

    2017-04-15

    Data pertinent to pore-water composition in Opalinus Clay in the Mont Terri and Mont Russelin anticlines have been collected over the last 20 years from long-term in situ pore-water sampling in dedicated boreholes, from laboratory analyses on drill cores and from the geochemical characteristics of vein infills. Together with independent knowledge on regional geology, an attempt is made here to constrain the geochemical evolution of the pore-waters. Following basin inversion and the establishment of continental conditions in the late Cretaceous, the Malm limestones acted as a fresh-water upper boundary leading to progressive out-diffusion of salinity from the originally marine pore-waters of the Jurassic low-permeability sequence. Model calculations suggest that at the end of the Palaeogene, pore-water salinity in Opalinus Clay was about half the original value. In the Chattian/Aquitanian, partial evaporation of sea-water occurred. It is postulated that brines diffused into the underlying sequence over a period of several Myr, resulting in an increase of salinity in Opalinus Clay to levels observed today. This hypothesis is further supported by the isotopic signatures of SO4^2- and 87Sr/86Sr in current pore-waters. These are not simple binary mixtures of sea and meteoric water, but their Cl^- and stable water-isotope signatures can be potentially explained by a component of partially evaporated sea-water. After the re-establishment of fresh-water conditions on the surface and the formation of the Jura Fold and Thrust Belt, erosion caused the activation of aquifers embedding the low-permeability sequence, leading to the curved profiles of various pore-water tracers that are observed today. Fluid flow triggered by deformation events during thrusting and folding of the anticlines occurred and is documented by infrequent vein infills in major fault structures. However, this flow was spatially focussed and of limited duration and so did not

  10. Contributon Monte Carlo

    International Nuclear Information System (INIS)

    Dubi, A.; Gerstl, S.A.W.

    1979-05-01

    The contributon Monte Carlo method is based on a new recipe to calculate target responses by means of a volume integral of the contributon current in a region between the source and the detector. A comprehensive description of the method, its implementation in the general-purpose MCNP code, and results of the method for realistic nonhomogeneous, energy-dependent problems are presented. 23 figures, 10 tables

  11. Bayesian Monte Carlo method

    International Nuclear Information System (INIS)

    Rajabalinejad, M.

    2010-01-01

    To reduce the cost of Monte Carlo (MC) simulations for time-consuming processes, Bayesian Monte Carlo (BMC) is introduced in this paper. The BMC method reduces the number of realizations in MC according to the desired accuracy level. BMC also provides the possibility of considering additional priors. In other words, different priors can be integrated into one model by using BMC to further reduce the cost of simulations. This study suggests speeding up the simulation process by considering the logical dependence of neighboring points as prior information. This information is used in the BMC method to produce a predictive tool through the simulation process. The general methodology and algorithm of the BMC method are presented in this paper. The BMC method is applied to a simplified breakwater model as well as to the finite element model of the 17th Street Canal in New Orleans, and the results are compared with the MC and Dynamic Bounds methods.

  12. Iterative optimisation of Monte Carlo detector models using measurements and simulations

    Energy Technology Data Exchange (ETDEWEB)

    Marzocchi, O., E-mail: olaf@marzocchi.net [European Patent Office, Rijswijk (Netherlands); Leone, D., E-mail: debora.leone@kit.edu [Institute for Nuclear Waste Disposal, Karlsruhe Institute of Technology, Karlsruhe (Germany)

    2015-04-11

    This work proposes a new technique to optimise the Monte Carlo models of radiation detectors, offering the advantage of a significantly lower user effort and therefore an improved work efficiency compared to the prior techniques. The method consists of four steps, two of which are iterative and suitable for automation using scripting languages. The four steps consist in the acquisition in the laboratory of measurement data to be used as reference; the modification of a previously available detector model; the simulation of a tentative model of the detector to obtain the coefficients of a set of linear equations; the solution of the system of equations and the update of the detector model. Steps three and four can be repeated for more accurate results. This method avoids the “try and fail” approach typical of the prior techniques.

  13. Closed-shell variational quantum Monte Carlo simulation for the ...

    African Journals Online (AJOL)

    Closed-shell variational quantum Monte Carlo simulation for the electric dipole moment calculation of hydrazine molecule using casino-code. ... Nigeria Journal of Pure and Applied Physics ... The variational quantum Monte Carlo (VQMC) technique used in this work employed the restricted Hartree-Fock (RHF) scheme.

  14. New Approaches and Applications for Monte Carlo Perturbation Theory

    Energy Technology Data Exchange (ETDEWEB)

    Aufiero, Manuele; Bidaud, Adrien; Kotlyar, Dan; Leppänen, Jaakko; Palmiotti, Giuseppe; Salvatores, Massimo; Sen, Sonat; Shwageraus, Eugene; Fratoni, Massimiliano

    2017-02-01

    This paper presents some of the recent and new advancements in the extension of Monte Carlo Perturbation Theory methodologies and applications. In particular, the problems discussed involve burnup calculation, perturbation calculation based on continuous-energy functions, and Monte Carlo Perturbation Theory in loosely coupled systems.

  15. Recommender engine for continuous-time quantum Monte Carlo methods

    Science.gov (United States)

    Huang, Li; Yang, Yi-feng; Wang, Lei

    2017-03-01

    Recommender systems play an essential role in the modern business world. They recommend favorable items such as books, movies, and search queries to users based on their past preferences. Applying similar ideas and techniques to Monte Carlo simulations of physical systems boosts their efficiency without sacrificing accuracy. Exploiting the quantum to classical mapping inherent in the continuous-time quantum Monte Carlo methods, we construct a classical molecular gas model to reproduce the quantum distributions. We then utilize powerful molecular simulation techniques to propose efficient quantum Monte Carlo updates. The recommender engine approach provides a general way to speed up the quantum impurity solvers.

  16. Rapid Monte Carlo Simulation of Gravitational Wave Galaxies

    Science.gov (United States)

    Breivik, Katelyn; Larson, Shane L.

    2015-01-01

    With the detection of gravitational waves on the horizon, astrophysical catalogs produced by gravitational wave observatories can be used to characterize the populations of sources and validate different galactic population models. Efforts to simulate gravitational wave catalogs and source populations generally focus on population synthesis models that require extensive time and computational power to produce a single simulated galaxy. Monte Carlo simulations of gravitational wave source populations can also be used to generate observation catalogs from the gravitational wave source population. Monte Carlo simulations have the advantages of flexibility and speed, enabling rapid galactic realizations as a function of galactic binary parameters with less time and computational resources required. We present a Monte Carlo method for rapid galactic simulations of gravitational wave binary populations.

  17. Acceleration of monte Carlo solution by conjugate gradient method

    International Nuclear Information System (INIS)

    Toshihisa, Yamamoto

    2005-01-01

    The conjugate gradient (CG) method was applied to accelerate Monte Carlo solutions of fixed-source problems. The equilibrium-model-based formulation enables the use of the CG scheme, as well as an initial guess, to maximize computational performance. The method is applicable to arbitrary geometry provided that the neutron source distribution in each subregion can be regarded as flat. Even if this is not the case, the method can still be used as a powerful tool to provide an initial guess very close to the converged solution. The major difference between Monte Carlo CG and deterministic CG is that the residual error is estimated by Monte Carlo sampling, so statistical error exists in the residual. This leads to a flow diagram specific to Monte Carlo CG. Three pre-conditioners were proposed for the CG scheme and their performance was compared on a simple 1-D heterogeneous slab test problem. One of them, the Sparse-M option, showed excellent convergence performance. The performance per unit cost was improved by a factor of four in the test problem. Although a direct estimate of the efficiency of the method is impossible, mainly because of the strong problem dependence of the optimized pre-conditioner in CG, the method appears to have potential as a fast solution algorithm for Monte Carlo calculations. (author)

  18. Uncertainty evaluation of the kerma in the air, related to the active volume in the ionization chamber of concentric cylinders, by Monte Carlo simulation; Avaliacao de incerteza no kerma no ar, em relacao ao volume ativo da camara de ionizacao de cilindros concentricos, por simulacao de Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Lo Bianco, A.S.; Oliveira, H.P.S.; Peixoto, J.G.P., E-mail: abianco@ird.gov.b [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Lab. Nacional de Metrologia das Radiacoes Ionizantes (LNMRI)

    2009-07-01

    To establish the primary standard of the quantity air kerma for X-rays between 10 and 50 keV, the National Metrology Laboratory of Ionizing Radiations (LNMRI) must evaluate all measurement uncertainties related to the Victtoren chamber. The uncertainty in air kerma resulting from the inaccuracy of the active volume of the chamber was therefore evaluated using Monte Carlo calculation as a tool, through the Penelope software

  19. Development of a Fully-Automated Monte Carlo Burnup Code Monteburns

    International Nuclear Information System (INIS)

    Poston, D.I.; Trellue, H.R.

    1999-01-01

    Several computer codes have been developed to perform nuclear burnup calculations over the past few decades. In addition, because of advances in computer technology, it recently has become more desirable to use Monte Carlo techniques for such problems. Monte Carlo techniques generally offer two distinct advantages over discrete ordinate methods: (1) the use of continuous energy cross sections and (2) the ability to model detailed, complex, three-dimensional (3-D) geometries. These advantages allow more accurate burnup results to be obtained, provided that the user possesses the required computing power (which is required for discrete ordinate methods as well). Several linkage codes have been written that combine a Monte Carlo N-particle transport code (such as MCNP(TM)) with a radioactive decay and burnup code. This paper describes one such code that was written at Los Alamos National Laboratory: monteburns. Monteburns links MCNP with the isotope generation and depletion code ORIGEN2. The basis for the development of monteburns was the need for a fully automated code that could perform accurate burnup (and other) calculations for any 3-D system (accelerator-driven or a full reactor core). Before the initial development of monteburns, a list of desired attributes was made and is given below.

    - The code should be fully automated (that is, after the input is set up, no further user interaction is required).
    - The code should allow for the irradiation of several materials concurrently (each material is evaluated collectively in MCNP and burned separately in ORIGEN2).
    - The code should allow the transfer of materials (shuffling) between regions in MCNP.
    - The code should allow any materials to be added or removed before, during, or after each step in an automated fashion.
    - The code should not require the user to provide input for ORIGEN2 and should have minimal MCNP input file requirements (other than a working MCNP deck).
    - The code should be relatively easy to use

  20. Calculation of stock portfolio VaR using historical data and Monte Carlo simulation data; PERHITUNGAN VaR PORTOFOLIO SAHAM MENGGUNAKAN DATA HISTORIS DAN DATA SIMULASI MONTE CARLO

    Directory of Open Access Journals (Sweden)

    WAYAN ARTHINI

    2012-09-01

    Full Text Available Value at Risk (VaR) is the maximum potential loss on a portfolio at a given probability over a certain time horizon. In this research, portfolio VaR values are calculated from historical data and from Monte Carlo simulation data. The historical data are processed to obtain stock returns, variances, correlation coefficients, and the variance-covariance matrix; the Markowitz method is then used to determine the proportion of each stock in the portfolio and the portfolio risk and return. The data are then simulated by Monte Carlo simulation, in two variants: Exact Monte Carlo Simulation and Expected Monte Carlo Simulation. Exact Monte Carlo Simulation reproduces the same returns and standard deviations as the historical data, while Expected Monte Carlo Simulation yields statistics similar to the historical data. The result of this research is the portfolio VaR for time horizons T=1, T=10 and T=22 at a confidence level of 95%, obtained from the historical data and from the Monte Carlo simulation data with the exact and expected methods. The VaR values from both Monte Carlo simulations are greater than the VaR from the historical data.
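
    The two routes compared in this record can be sketched on synthetic data. In the toy Python example below (returns, weights and sample sizes are placeholders, not the paper's data), the historical VaR is read off the empirical loss quantile, while the Monte Carlo VaR is obtained by resimulating portfolio returns as normal with the historical mean and standard deviation; the exact/expected distinction of the paper is not reproduced.

      # Toy 1-day 95% VaR: historical simulation vs. normal Monte Carlo resimulation.
      import random, statistics

      random.seed(7)
      n_days, weights = 500, (0.6, 0.4)
      # synthetic daily returns for a two-stock portfolio (placeholders, not real data)
      returns = [(random.gauss(0.0005, 0.010), random.gauss(0.0003, 0.015)) for _ in range(n_days)]
      port = [weights[0] * r1 + weights[1] * r2 for r1, r2 in returns]

      def var_from_losses(losses, alpha=0.95):
          """alpha-quantile of the loss distribution."""
          losses = sorted(losses)
          return losses[int(alpha * len(losses))]

      # (1) historical simulation: losses are the negated observed portfolio returns
      var_hist = var_from_losses(-r for r in port)

      # (2) Monte Carlo: resimulate portfolio returns with the historical moments
      mean_r, sd_r = statistics.mean(port), statistics.stdev(port)
      var_mc = var_from_losses(-random.gauss(mean_r, sd_r) for _ in range(100000))

      print(f"1-day 95% VaR   historical: {var_hist:.4f}   Monte Carlo: {var_mc:.4f}")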

  1. Assessment of the suitability of different random number generators for Monte Carlo simulations in gamma-ray spectrometry

    International Nuclear Information System (INIS)

    Cornejo Diaz, N.; Vergara Gil, A.; Jurado Vargas, M.

    2010-01-01

    The Monte Carlo method has become a valuable numerical laboratory framework in which to simulate complex physical systems. It is based on the generation of pseudo-random number sequences by numerical algorithms called random generators. In this work we assessed the suitability of different well-known random number generators for the simulation of gamma-ray spectrometry systems during efficiency calibrations. The assessment was carried out in two stages. The generators considered (Delphi's linear congruential, mersenne twister, XorShift, multiplier with carry, universal virtual array, and non-periodic logistic map based generator) were first evaluated with different statistical empirical tests, including moments, correlations, uniformity, independence of terms and the DIEHARD battery of tests. In a second step, an application-specific test was conducted by implementing the generators in our Monte Carlo program DETEFF and comparing the results obtained with them. The calculations were performed with two different CPUs, for a typical HpGe detector and a water sample in Marinelli geometry, with gamma-rays between 59 and 1800 keV. For the Non-periodic Logistic Map based generator, dependence of the most significant bits was evident. This explains the bias, in excess of 5%, of the efficiency values obtained with this generator. The results of the application-specific assessment and the statistical performance of the other algorithms studied indicate their suitability for the Monte Carlo simulation of gamma-ray spectrometry systems for efficiency calculations.
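
    One of the cheap empirical checks mentioned here (uniformity) can be written in a few lines. The sketch below bins samples from Python's built-in Mersenne Twister and computes a chi-square statistic; it is only a stand-in for the much more demanding DIEHARD battery and application-specific tests used by the authors.

      # Minimal chi-square uniformity check applied to Python's Mersenne Twister.
      import random

      random.seed(2024)
      n_samples, n_bins = 100000, 100
      counts = [0] * n_bins
      for _ in range(n_samples):
          counts[int(random.random() * n_bins)] += 1

      expected = n_samples / n_bins
      chi2 = sum((c - expected) ** 2 / expected for c in counts)
      # For 99 degrees of freedom the 5% critical value is roughly 123.2.
      print(f"chi-square = {chi2:.1f}  (suspect non-uniformity at the 5% level if > ~123.2)")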

  2. Assessment of the suitability of different random number generators for Monte Carlo simulations in gamma-ray spectrometry.

    Science.gov (United States)

    Díaz, N Cornejo; Gil, A Vergara; Vargas, M Jurado

    2010-03-01

    The Monte Carlo method has become a valuable numerical laboratory framework in which to simulate complex physical systems. It is based on the generation of pseudo-random number sequences by numerical algorithms called random generators. In this work we assessed the suitability of different well-known random number generators for the simulation of gamma-ray spectrometry systems during efficiency calibrations. The assessment was carried out in two stages. The generators considered (Delphi's linear congruential, mersenne twister, XorShift, multiplier with carry, universal virtual array, and non-periodic logistic map based generator) were first evaluated with different statistical empirical tests, including moments, correlations, uniformity, independence of terms and the DIEHARD battery of tests. In a second step, an application-specific test was conducted by implementing the generators in our Monte Carlo program DETEFF and comparing the results obtained with them. The calculations were performed with two different CPUs, for a typical HpGe detector and a water sample in Marinelli geometry, with gamma-rays between 59 and 1800 keV. For the Non-periodic Logistic Map based generator, dependence of the most significant bits was evident. This explains the bias, in excess of 5%, of the efficiency values obtained with this generator. The results of the application-specific assessment and the statistical performance of the other algorithms studied indicate their suitability for the Monte Carlo simulation of gamma-ray spectrometry systems for efficiency calculations. Copyright 2009 Elsevier Ltd. All rights reserved.

  3. Monte Carlo methods for the reliability analysis of Markov systems

    International Nuclear Information System (INIS)

    Buslik, A.J.

    1985-01-01

    This paper presents Monte Carlo methods for the reliability analysis of Markov systems. Markov models are useful in treating dependencies between components. The present paper shows how the adjoint Monte Carlo method for the continuous time Markov process can be derived from the method for the discrete-time Markov process by a limiting process. The straightforward extensions to the treatment of mean unavailability (over a time interval) are given. System unavailabilities can also be estimated; this is done by making the system failed states absorbing, and not permitting repair from them. A forward Monte Carlo method is presented in which the weighting functions are related to the adjoint function. In particular, if the exact adjoint function is known then weighting factors can be constructed such that the exact answer can be obtained with a single Monte Carlo trial. Of course, if the exact adjoint function is known, there is no need to perform the Monte Carlo calculation. However, the formulation is useful since it gives insight into choices of the weight factors which will reduce the variance of the estimator

  4. A general transform for variance reduction in Monte Carlo simulations

    International Nuclear Information System (INIS)

    Becker, T.L.; Larsen, E.W.

    2011-01-01

    This paper describes a general transform to reduce the variance of the Monte Carlo estimate of some desired solution, such as flux or biological dose. This transform implicitly includes many standard variance reduction techniques, including source biasing, collision biasing, the exponential transform for path-length stretching, and weight windows. Rather than optimizing each of these techniques separately or choosing semi-empirical biasing parameters based on the experience of a seasoned Monte Carlo practitioner, this General Transform unites all these variance techniques to achieve one objective: a distribution of Monte Carlo particles that attempts to optimize the desired solution. Specifically, this transform allows Monte Carlo particles to be distributed according to the user's specification by using information obtained from a computationally inexpensive deterministic simulation of the problem. For this reason, we consider the General Transform to be a hybrid Monte Carlo/Deterministic method. The numerical results confirm that the General Transform distributes particles according to the user-specified distribution and generally provide reasonable results for shielding applications. (author)
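
    The path-length stretching mentioned in this record is easy to demonstrate on a purely absorbing slab, where the transmission probability is exp(-tau) and an analog estimate is dominated by the few particles that get through. The toy sketch below (not the General Transform itself) samples free paths from a stretched exponential and carries the compensating weight, which keeps the estimate unbiased while cutting the statistical error.

      # Toy exponential-transform example: transmission through a purely absorbing slab.
      import math, random

      random.seed(3)
      sigma, length, n = 1.0, 5.0, 200000            # true cross section, slab thickness, histories
      exact = math.exp(-sigma * length)

      def run(sigma_sample):
          """Score transmission; the weight corrects for sampling paths from sigma_sample."""
          total, total_sq = 0.0, 0.0
          for _ in range(n):
              s = random.expovariate(sigma_sample)
              if s > length:                         # particle crosses the slab
                  w = (sigma * math.exp(-sigma * s)) / (sigma_sample * math.exp(-sigma_sample * s))
                  total += w
                  total_sq += w * w
          mean = total / n
          return mean, math.sqrt((total_sq / n - mean * mean) / n)

      for label, s_star in (("analog", 1.0), ("stretched", 0.3)):
          mean, err = run(s_star)
          print(f"{label:9s} estimate {mean:.5f} +/- {err:.5f}   (exact {exact:.5f})")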

  5. A Monte Carlo approach to combating delayed completion of ...

    African Journals Online (AJOL)

    The objective of this paper is to unveil the relevance of Monte Carlo critical path analysis in resolving problem of delays in scheduled completion of development projects. Commencing with deterministic network scheduling, Monte Carlo critical path analysis was advanced by assigning probability distributions to task times.
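
    The mechanics of the approach can be shown on a three-task toy network in which task C starts after both A and B finish. In the Python sketch below, durations are drawn from triangular (optimistic, most likely, pessimistic) distributions; the numbers are invented, not the project data of the paper, but the output illustrates why the deterministic most-likely schedule is routinely exceeded.

      # Toy Monte Carlo schedule-risk analysis of a three-task network.
      import random

      random.seed(11)
      tasks = {"A": (4, 6, 10), "B": (3, 5, 12), "C": (2, 3, 5)}   # (optimistic, likely, pessimistic) days

      def completion_time():
          d = {name: random.triangular(lo, hi, mode) for name, (lo, mode, hi) in tasks.items()}
          return max(d["A"], d["B"]) + d["C"]          # C starts after both A and B finish

      samples = sorted(completion_time() for _ in range(50000))
      deterministic = max(tasks["A"][1], tasks["B"][1]) + tasks["C"][1]   # most-likely values only
      print("deterministic (most-likely) duration:", deterministic, "days")
      print("probability of finishing within it:", sum(s <= deterministic for s in samples) / len(samples))
      print("80th-percentile duration:", round(samples[int(0.8 * len(samples))], 1), "days")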

  6. Perturbation based Monte Carlo criticality search in density, enrichment and concentration

    International Nuclear Information System (INIS)

    Li, Zeguang; Wang, Kan; Deng, Jingkang

    2015-01-01

    Highlights: • A new perturbation based Monte Carlo criticality search method is proposed. • The method could get accurate results with only one individual criticality run. • The method is used to solve density, enrichment and concentration search problems. • Results show the feasibility and good performances of this method. • The relationship between results’ accuracy and perturbation order is discussed. - Abstract: Criticality search is a very important aspect of reactor physics analysis. Due to the advantages of the Monte Carlo method and the development of computer technologies, Monte Carlo criticality search is becoming more and more necessary and feasible. Existing Monte Carlo criticality search methods need a large number of individual criticality runs and may have unstable results because of the uncertainties of criticality results. In this paper, a new perturbation based Monte Carlo criticality search method is proposed and discussed. This method only needs one individual criticality calculation with perturbation tallies to estimate the function describing the change of k_eff, using the initial k_eff and the differential coefficient results, and solves polynomial equations to get the criticality search results. The new perturbation based Monte Carlo criticality search method is implemented in the Monte Carlo code RMC, and criticality search problems in density, enrichment and concentration are carried out. Results show that this method is quite promising in accuracy and efficiency, and has advantages compared with other criticality search methods
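
    Only the final "solve the polynomial equations" step of the method lends itself to a short sketch; the perturbation tallies that supply the derivatives are the real content of the paper and are not reproduced here. In the Python fragment below, k_eff and its first two derivatives with respect to a search parameter (a boron concentration, say) are invented illustrative numbers, not RMC output.

      # Sketch of the search step: solve k0 + dk1*dx + 0.5*dk2*dx^2 = 1 for dx.
      import math

      k0  = 1.02500      # k_eff at the nominal concentration (illustrative)
      dk1 = -8.0e-5      # dk/dx   per ppm   (would come from first-order perturbation tallies)
      dk2 =  1.5e-8      # d2k/dx2 per ppm^2 (second-order tallies)

      a, b, c = 0.5 * dk2, dk1, k0 - 1.0
      disc = math.sqrt(b * b - 4 * a * c)
      dx = min(((-b + disc) / (2 * a), (-b - disc) / (2 * a)), key=abs)   # root nearest zero
      print(f"estimated change of the search parameter to reach criticality: {dx:.1f} ppm")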

  7. Determination of Asian-type call option prices with the Monte Carlo-control variate method

    Directory of Open Access Journals (Sweden)

    NI NYOMAN AYU ARTANADI

    2017-01-01

    Full Text Available An option is a contract between the writer and the holder which entitles the holder to buy or sell an underlying asset at the maturity date for a specified price known as the exercise price. An Asian option is a type of financial derivative whose payoff depends on the average of the asset price over the life of the contract. The aim of the study is to present the Monte Carlo-control variate method as an extension of standard Monte Carlo applied to the calculation of the Asian option price. Standard Monte Carlo with 10,000,000 simulations gives a standard error of 0.06 and an option price converging at Rp 160.00, while Monte Carlo-control variate with 100,000 simulations gives a standard error of 0.01 and an option price converging at Rp 152.00. This shows that the Monte Carlo-control variate method reaches a converged option price faster than standard Monte Carlo.
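
    A minimal sketch of the control-variate idea for an arithmetic Asian call, assuming geometric Brownian motion and using the discounted terminal price (whose expectation is exactly S0) as the control; the contract parameters are invented and the paper may well use a different control variate.

        import numpy as np

        rng = np.random.default_rng(42)
        S0, K, r, sigma, T, steps, n = 100.0, 100.0, 0.05, 0.2, 1.0, 50, 100_000
        dt = T / steps

        # Simulate GBM paths and the discounted arithmetic-average (Asian) call payoff
        z = rng.standard_normal((n, steps))
        paths = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
        payoff = np.exp(-r * T) * np.maximum(paths.mean(axis=1) - K, 0.0)

        # Control variate: discounted terminal price, whose exact risk-neutral expectation is S0
        cv = np.exp(-r * T) * paths[:, -1]
        c = np.cov(payoff, cv)
        beta = c[0, 1] / c[1, 1]
        adjusted = payoff - beta * (cv - S0)

        for name, est in (("plain MC", payoff), ("with control variate", adjusted)):
            print(f"{name:22s} price={est.mean():.4f}  std.err={est.std(ddof=1) / np.sqrt(n):.4f}")

    With these assumed settings the control-variate estimator typically shows a visibly smaller standard error for the same number of paths, which is the behaviour the abstract reports.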

  8. Monte Carlo numerical study of lattice field theories

    International Nuclear Information System (INIS)

    Gan Cheekwan; Kim Seyong; Ohta, Shigemi

    1997-01-01

    The authors are interested in exact first-principles calculations of quantum field theories. For quantum chromodynamics (QCD) at low energy scales, a nonperturbative method is needed, and the only known such method is the lattice method. The path integral can be evaluated by putting the system in a finite 4-dimensional volume and discretizing the space-time continuum into a finite set of points, the lattice. The continuum limit is taken by making the lattice infinitely fine. Such a finite-dimensional integral can then be estimated numerically by the Monte Carlo method. The calculation of light hadron masses in quenched lattice QCD with staggered quarks, a calculation of the 3-dimensional Thirring model, and the development of a self-test Monte Carlo method have been carried out using the RIKEN supercomputer. The motivation of this study, the lattice QCD formulation, the continuum limit, the Monte Carlo update, the hadron propagator, light hadron masses, auto-correlation and source-size dependence are described for lattice QCD. The phase structure of the 3-dimensional Thirring model has been mapped for a small 8³ lattice. The self-test Monte Carlo method is discussed further. (K.I.)

  9. Continuous energy Monte Carlo method based lattice homogenization

    International Nuclear Information System (INIS)

    Li Mancang; Yao Dong; Wang Kan

    2014-01-01

    Based on the Monte Carlo code MCNP, the continuous energy Monte Carlo multi-group constants generation code MCMC has been developed. The track length scheme has been used as the foundation of cross section generation. The scattering matrix and Legendre components require special techniques, and the scattering event method has been proposed to solve this problem. Three methods have been developed to calculate the diffusion coefficients for diffusion reactor core codes, and the Legendre method has been applied in MCMC. To satisfy the equivalence theory, the general equivalence theory (GET) and the superhomogenization method (SPH) have been applied to the Monte Carlo based group constants. The super equivalence method (SPE) has been proposed to improve the equivalence. GET, SPH and SPE have been implemented in MCMC. The numerical results showed that generating the homogenized multi-group constants via the Monte Carlo method overcomes the difficulties in geometry and treats the energy variable as continuous, and thus provides more accurate parameters. Besides, the same code and data library can be used for a wide range of applications owing to this versatility. The MCMC scheme can be seen as a potential alternative to the widely used deterministic lattice codes. (authors)

  10. Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments

    International Nuclear Information System (INIS)

    Pevey, Ronald E.

    2005-01-01

    Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes--the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL
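
    The decision rule at stake can be written in two lines; the USL of 0.9400 and the 2-sigma multiplier below are illustrative values, not numbers taken from the paper.

        def acceptable(k_calc, sigma_calc, usl=0.9400, n_sigma=2.0):
            """Accept a configuration only if k + n*sigma stays at or below the upper subcritical limit.
            The USL and the sigma multiplier here are assumed for illustration."""
            return k_calc + n_sigma * sigma_calc <= usl

        print(acceptable(0.9350, 0.0020))   # True : 0.9390 <= 0.9400
        print(acceptable(0.9350, 0.0030))   # False: 0.9410 >  0.9400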

  11. Biased Monte Carlo optimization: the basic approach

    International Nuclear Information System (INIS)

    Campioni, Luca; Scardovelli, Ruben; Vestrucci, Paolo

    2005-01-01

    It is well known that the Monte Carlo method is very successful in tackling several kinds of system simulations. It often happens that one has to deal with rare events, and the use of a variance reduction technique is almost mandatory in order to have efficient Monte Carlo applications. The main issue associated with variance reduction techniques is related to the choice of the value of the biasing parameter. In practice, this task is typically left to the experience of the Monte Carlo user, who has to make many attempts before achieving an advantageous biasing. A valuable result is provided: a methodology and a practical rule intended to establish a priori guidance for the choice of the optimal value of the biasing parameter. This result, which has been obtained for a single-component system, has the notable property of being valid for any multicomponent system. In particular, in this paper, the exponential and the uniform biases of exponentially distributed phenomena are investigated thoroughly
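
    A toy example of the issue the paper addresses: estimating the rare probability P(T > 10) for an exponential variable by sampling from a biased exponential and weighting by the likelihood ratio. The event, rates and sample sizes are assumptions, and the point is simply that the standard error depends strongly on the chosen biasing rate.

        import math, random

        LAM, T0, N = 1.0, 10.0, 50_000          # P(T > 10) = exp(-10) ~ 4.54e-5: a rare event

        def estimate(lam_bias):
            """Importance sampling with an exponential bias: sample Exp(lam_bias) and
            weight each history by the likelihood ratio f(t)/g(t)."""
            total = total_sq = 0.0
            for _ in range(N):
                t = random.expovariate(lam_bias)
                w = (LAM / lam_bias) * math.exp(-(LAM - lam_bias) * t) if t > T0 else 0.0
                total += w
                total_sq += w * w
            mean = total / N
            var = total_sq / N - mean * mean
            return mean, math.sqrt(var / N)

        for lam_b in (1.0, 0.3, 0.1, 0.05):      # 1.0 is the analog (unbiased) case
            mean, err = estimate(lam_b)
            print(f"biasing rate {lam_b:4.2f}: estimate {mean:.3e} +/- {err:.1e}")
        print("exact value:", math.exp(-LAM * T0))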

  12. ICF target 2D modeling using Monte Carlo SNB electron thermal transport in DRACO

    Science.gov (United States)

    Chenhall, Jeffrey; Cao, Duc; Moses, Gregory

    2016-10-01

    The iSNB (implicit Schurtz-Nicolai-Busquet) multigroup diffusion electron thermal transport method is adapted into a Monte Carlo (MC) transport method to better model angular and long mean free path non-local effects. The MC model was first implemented in the 1D LILAC code to verify consistency with the iSNB model. Implementation of the MC SNB model in the 2D DRACO code enables higher fidelity non-local thermal transport modeling in 2D implosions such as polar drive experiments on NIF. The final step is to optimize the MC model by hybridizing it with a MC version of the iSNB diffusion method. The hybrid method will combine the efficiency of a diffusion method in intermediate mean free path regions with the accuracy of a transport method in long mean free path regions, allowing for improved computational efficiency while maintaining accuracy. Work to date on the method will be presented. This work was supported by Sandia National Laboratories and the Univ. of Rochester Laboratory for Laser Energetics.

  13. Self-learning Monte Carlo (dynamical biasing)

    International Nuclear Information System (INIS)

    Matthes, W.

    1981-01-01

    In many applications the histories of a normal Monte Carlo game rarely reach the target region. An approximate knowledge of the importance (with respect to the target) may be used to guide the particles more frequently into the target region. A Monte Carlo method is presented in which each history contributes to update the importance field such that eventually most target histories are sampled. It is a self-learning method in the sense that the procedure itself: (a) learns which histories are important (reach the target) and increases their probability; (b) reduces the probabilities of unimportant histories; (c) concentrates gradually on the more important target histories. (U.K.)

  14. RNA folding kinetics using Monte Carlo and Gillespie algorithms.

    Science.gov (United States)

    Clote, Peter; Bayegan, Amir H

    2018-04-01

    RNA secondary structure folding kinetics is known to be important for the biological function of certain processes, such as the hok/sok system in E. coli. Although linear algebra provides an exact computational solution of secondary structure folding kinetics with respect to the Turner energy model for tiny ([Formula: see text]20 nt) RNA sequences, the folding kinetics for larger sequences can only be approximated by binning structures into macrostates in a coarse-grained model, or by repeatedly simulating secondary structure folding with either the Monte Carlo algorithm or the Gillespie algorithm. Here we investigate the relation between the Monte Carlo algorithm and the Gillespie algorithm. We prove that asymptotically, the expected time for a K-step trajectory of the Monte Carlo algorithm is equal to [Formula: see text] times that of the Gillespie algorithm, where [Formula: see text] denotes the Boltzmann expected network degree. If the network is regular (i.e. every node has the same degree), then the mean first passage time (MFPT) computed by the Monte Carlo algorithm is equal to MFPT computed by the Gillespie algorithm multiplied by [Formula: see text]; however, this is not true for non-regular networks. In particular, RNA secondary structure folding kinetics, as computed by the Monte Carlo algorithm, is not equal to the folding kinetics, as computed by the Gillespie algorithm, although the mean first passage times are roughly correlated. Simulation software for RNA secondary structure folding according to the Monte Carlo and Gillespie algorithms is publicly available, as is our software to compute the expected degree of the network of secondary structures of a given RNA sequence-see http://bioinformatics.bc.edu/clote/RNAexpNumNbors .
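
    The two simulation styles can be contrasted on a toy landscape (a hypothetical five-state chain with made-up energies standing in for an RNA folding network, not the Turner model): the Monte Carlo version advances one attempted move per unit step, while the Gillespie version draws exponential waiting times from the total exit rate.

        import math, random

        # Toy 1-D energy landscape (kT = 1); state 4 plays the role of the folded structure
        E = [0.0, 2.0, 1.0, 3.0, 0.5]
        NB = {i: [j for j in (i - 1, i + 1) if 0 <= j < len(E)] for i in range(len(E))}

        def mfpt_monte_carlo(runs=2000):
            """Metropolis Monte Carlo: pick a random neighbour, accept with min(1, exp(-dE));
            time is counted in attempted moves."""
            total = 0.0
            for _ in range(runs):
                s, t = 0, 0
                while s != len(E) - 1:
                    nxt = random.choice(NB[s])
                    if random.random() < min(1.0, math.exp(E[s] - E[nxt])):
                        s = nxt
                    t += 1
                total += t
            return total / runs

        def mfpt_gillespie(runs=2000):
            """Gillespie: each neighbour is a reaction channel; the waiting time is exponential
            with the sum of the channel rates."""
            total = 0.0
            for _ in range(runs):
                s, t = 0, 0.0
                while s != len(E) - 1:
                    rates = [min(1.0, math.exp(E[s] - E[j])) for j in NB[s]]
                    t += random.expovariate(sum(rates))
                    s = random.choices(NB[s], weights=rates)[0]
                total += t
            return total / runs

        print("MC mean first passage (steps)   :", round(mfpt_monte_carlo(), 1))
        print("Gillespie mean first passage (t):", round(mfpt_gillespie(), 2))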

  15. A NEW MONTE CARLO METHOD FOR TIME-DEPENDENT NEUTRINO RADIATION TRANSPORT

    International Nuclear Information System (INIS)

    Abdikamalov, Ernazar; Ott, Christian D.; O'Connor, Evan; Burrows, Adam; Dolence, Joshua C.; Löffler, Frank; Schnetter, Erik

    2012-01-01

    Monte Carlo approaches to radiation transport have several attractive properties such as simplicity of implementation, high accuracy, and good parallel scaling. Moreover, Monte Carlo methods can handle complicated geometries and are relatively easy to extend to multiple spatial dimensions, which makes them potentially interesting in modeling complex multi-dimensional astrophysical phenomena such as core-collapse supernovae. The aim of this paper is to explore Monte Carlo methods for modeling neutrino transport in core-collapse supernovae. We generalize the Implicit Monte Carlo photon transport scheme of Fleck and Cummings and gray discrete-diffusion scheme of Densmore et al. to energy-, time-, and velocity-dependent neutrino transport. Using our 1D spherically-symmetric implementation, we show that, similar to the photon transport case, the implicit scheme enables significantly larger timesteps compared with explicit time discretization, without sacrificing accuracy, while the discrete-diffusion method leads to significant speed-ups at high optical depth. Our results suggest that a combination of spectral, velocity-dependent, Implicit Monte Carlo and discrete-diffusion Monte Carlo methods represents a robust approach for use in neutrino transport calculations in core-collapse supernovae. Our velocity-dependent scheme can easily be adapted to photon transport.

  16. A NEW MONTE CARLO METHOD FOR TIME-DEPENDENT NEUTRINO RADIATION TRANSPORT

    Energy Technology Data Exchange (ETDEWEB)

    Abdikamalov, Ernazar; Ott, Christian D.; O' Connor, Evan [TAPIR, California Institute of Technology, MC 350-17, 1200 E California Blvd., Pasadena, CA 91125 (United States); Burrows, Adam; Dolence, Joshua C. [Department of Astrophysical Sciences, Princeton University, Peyton Hall, Ivy Lane, Princeton, NJ 08544 (United States); Loeffler, Frank; Schnetter, Erik, E-mail: abdik@tapir.caltech.edu [Center for Computation and Technology, Louisiana State University, 216 Johnston Hall, Baton Rouge, LA 70803 (United States)

    2012-08-20

    Monte Carlo approaches to radiation transport have several attractive properties such as simplicity of implementation, high accuracy, and good parallel scaling. Moreover, Monte Carlo methods can handle complicated geometries and are relatively easy to extend to multiple spatial dimensions, which makes them potentially interesting in modeling complex multi-dimensional astrophysical phenomena such as core-collapse supernovae. The aim of this paper is to explore Monte Carlo methods for modeling neutrino transport in core-collapse supernovae. We generalize the Implicit Monte Carlo photon transport scheme of Fleck and Cummings and gray discrete-diffusion scheme of Densmore et al. to energy-, time-, and velocity-dependent neutrino transport. Using our 1D spherically-symmetric implementation, we show that, similar to the photon transport case, the implicit scheme enables significantly larger timesteps compared with explicit time discretization, without sacrificing accuracy, while the discrete-diffusion method leads to significant speed-ups at high optical depth. Our results suggest that a combination of spectral, velocity-dependent, Implicit Monte Carlo and discrete-diffusion Monte Carlo methods represents a robust approach for use in neutrino transport calculations in core-collapse supernovae. Our velocity-dependent scheme can easily be adapted to photon transport.

  17. Therapeutic Applications of Monte Carlo Calculations in Nuclear Medicine

    International Nuclear Information System (INIS)

    Coulot, J

    2003-01-01

    Monte Carlo techniques are involved in many applications in medical physics, and the field of nuclear medicine has seen great development in the past ten years due to their wider use. Thus, it is of great interest to look at the state of the art in this domain, as improving computer performance allows one to obtain improved results in dramatically reduced time. The goal of this book is to provide, in 15 chapters, an exhaustive review of the use of Monte Carlo techniques in nuclear medicine, also giving key features which are not necessarily directly related to the Monte Carlo method, but are mandatory for its practical application. As the book deals with 'therapeutic' nuclear medicine, it focuses on internal dosimetry. After a general introduction on Monte Carlo techniques and their applications in nuclear medicine (dosimetry, imaging and radiation protection), the authors give an overview of internal dosimetry methods (formalism, mathematical phantoms, quantities of interest). Then, some of the more widely used Monte Carlo codes are described, as well as some treatment planning software packages. Some original techniques are also mentioned, such as dosimetry for boron neutron capture synovectomy. The book is generally well written, clearly presented, and very well documented. Each chapter gives an overview of its subject, and it is up to the reader to investigate it further using the extensive bibliography provided. Each topic is discussed from a practical point of view, which is of great help for non-experienced readers. For instance, the chapter about mathematical aspects of Monte Carlo particle transport is very clear and helps one to apprehend the philosophy of the method, which is often a difficulty with a more theoretical approach. Each chapter is put in the general (clinical) context, and this allows the reader to keep in mind the intrinsic limitation of each technique involved in dosimetry (for instance activity quantitation). Nevertheless, there are some minor remarks to

  18. Grain-boundary melting: A Monte Carlo study

    DEFF Research Database (Denmark)

    Besold, Gerhard; Mouritsen, Ole G.

    1994-01-01

    Grain-boundary melting in a lattice-gas model of a bicrystal is studied by Monte Carlo simulation using the grand canonical ensemble. Well below the bulk melting temperature T_m, a disordered liquidlike layer gradually emerges at the grain boundary. Complete interfacial wetting can be observed when the temperature approaches T_m from below. Monte Carlo data over an extended temperature range indicate a logarithmic divergence w(T) ~ -ln(T_m - T) of the width w of the disordered layer, in agreement with mean-field theory.

  19. Analysis of error in Monte Carlo transport calculations

    International Nuclear Information System (INIS)

    Booth, T.E.

    1979-01-01

    The Monte Carlo method for neutron transport calculations suffers, in part, because of the inherent statistical errors associated with the method. Without an estimate of these errors in advance of the calculation, it is difficult to decide what estimator and biasing scheme to use. Recently, integral equations have been derived that, when solved, predicted errors in Monte Carlo calculations in nonmultiplying media. The present work allows error prediction in nonanalog Monte Carlo calculations of multiplying systems, even when supercritical. Nonanalog techniques such as biased kernels, particle splitting, and Russian Roulette are incorporated. Equations derived here allow prediction of how much a specific variance reduction technique reduces the number of histories required, to be weighed against the change in time required for calculation of each history. 1 figure, 1 table

  20. PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code

    International Nuclear Information System (INIS)

    Iandola, F.N.; O'Brien, M.J.; Procassini, R.J.

    2010-01-01

    Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.

  1. Neutron flux calculation by means of Monte Carlo methods

    International Nuclear Information System (INIS)

    Barz, H.U.; Eichhorn, M.

    1988-01-01

    In this report a survey of modern neutron flux calculation procedures by means of Monte Carlo methods is given. Due to the progress in the development of variance reduction techniques and the improvements in computational techniques, this method is of increasing importance. The basic ideas in the application of Monte Carlo methods are briefly outlined. In more detail, various possibilities of non-analog games and estimation procedures are presented, and problems in the field of optimizing the variance reduction techniques are discussed. In the last part some important international Monte Carlo codes and the authors' own codes are listed and special applications are described. (author)

  2. Transport methods: general. 1. The Analytical Monte Carlo Method for Radiation Transport Calculations

    International Nuclear Information System (INIS)

    Martin, William R.; Brown, Forrest B.

    2001-01-01

    We present an alternative Monte Carlo method for solving the coupled equations of radiation transport and material energy. This method is based on incorporating the analytical solution to the material energy equation directly into the Monte Carlo simulation for the radiation intensity. This method, which we call the Analytical Monte Carlo (AMC) method, differs from the well known Implicit Monte Carlo (IMC) method of Fleck and Cummings because there is no discretization of the material energy equation since it is solved as a by-product of the Monte Carlo simulation of the transport equation. Our method also differs from the method recently proposed by Ahrens and Larsen since they use Monte Carlo to solve both equations, while we are solving only the radiation transport equation with Monte Carlo, albeit with effective sources and cross sections to represent the emission sources. Our method bears some similarity to a method developed and implemented by Carter and Forest nearly three decades ago, but there are substantive differences. We have implemented our method in a simple zero-dimensional Monte Carlo code to test the feasibility of the method, and the preliminary results are very promising, justifying further extension to more realistic geometries. (authors)

  3. Markov Chain Monte Carlo

    Indian Academy of Sciences (India)

    Markov Chain Monte Carlo - Examples. Arnab Chakraborty. General Article, Resonance – Journal of Science Education, Volume 7, Issue 3, March 2002, pp. 25-34. Permanent link: https://www.ias.ac.in/article/fulltext/reso/007/03/0025-0034

  4. Monte Carlo simulations for design of the KFUPM PGNAA facility

    CERN Document Server

    Naqvi, A A; Maslehuddin, M; Kidwai, S

    2003-01-01

    Monte Carlo simulations were carried out to design a 2.8 MeV neutron-based prompt gamma ray neutron activation analysis (PGNAA) setup for elemental analysis of cement samples. The elemental analysis was carried out using prompt gamma rays produced through capture of thermal neutrons in sample nuclei. The basic design of the PGNAA setup consists of a cylindrical cement sample enclosed in a cylindrical high-density polyethylene moderator placed between a neutron source and a gamma ray detector. In these simulations the predominant geometrical parameters of the PGNAA setup were optimized, including moderator size, sample size and shielding of the detector. Using the results of the simulations, an experimental PGNAA setup was then fabricated at the 350 kV Accelerator Laboratory of this University. The design calculations were checked experimentally through thermal neutron flux measurements inside the PGNAA moderator. A test prompt gamma ray spectrum of the PGNAA setup was also acquired from a Portland cement samp...

  5. Monte Carlo studies of high-transverse-energy hadronic interactions

    International Nuclear Information System (INIS)

    Corcoran, M.D.

    1985-01-01

    A four-jet Monte Carlo calculation has been used to simulate hadron-hadron interactions which deposit high transverse energy into a large-solid-angle calorimeter and limited solid-angle regions of the calorimeter. The calculation uses first-order QCD cross sections to generate two scattered jets and also produces beam and target jets. Field-Feynman fragmentation has been used in the hadronization. The sensitivity of the results to a few features of the Monte Carlo program has been studied. The results are found to be very sensitive to the method used to ensure overall energy conservation after the fragmentation of the four jets is complete. Results are also sensitive to the minimum momentum transfer in the QCD subprocesses and to the distribution of p_T with respect to the jet axis and the multiplicities in the fragmentation. With reasonable choices of these features of the Monte Carlo program, good agreement with data at Fermilab/CERN SPS energies is obtained, comparable to the agreement achieved with more sophisticated parton-shower models. With other choices, however, the calculation gives qualitatively different results which are in strong disagreement with the data. These results have important implications for extracting physics conclusions from Monte Carlo calculations. It is not possible to test the validity of a particular model or distinguish between different models unless the Monte Carlo results are unambiguous and different models exhibit clearly different behavior

  6. The Physical Models and Statistical Procedures Used in the RACER Monte Carlo Code

    International Nuclear Information System (INIS)

    Sutton, T.M.; Brown, F.B.; Bischoff, F.G.; MacMillan, D.B.; Ellis, C.L.; Ward, J.T.; Ballinger, C.T.; Kelly, D.J.; Schindler, L.

    1999-01-01

    This report describes the MCV (Monte Carlo - Vectorized) Monte Carlo neutron transport code [Brown, 1982, 1983; Brown and Mendelson, 1984a]. MCV is a module in the RACER system of codes that is used for Monte Carlo reactor physics analysis. The MCV module contains all of the neutron transport and statistical analysis functions of the system, while other modules perform various input-related functions such as geometry description, material assignment, output edit specification, etc. MCV is very closely related to the 05R neutron Monte Carlo code [Irving et al., 1965] developed at Oak Ridge National Laboratory. 05R evolved into the 05RR module of the STEMB system, which was the forerunner of the RACER system. Much of the overall logic and physics treatment of 05RR has been retained and, indeed, the original verification of MCV was achieved through comparison with STEMB results. MCV has been designed to be very computationally efficient [Brown, 1981, Brown and Martin, 1984b; Brown, 1986]. It was originally programmed to make use of vector-computing architectures such as those of the CDC Cyber-205 and Cray X-MP. MCV was the first full-scale production Monte Carlo code to effectively utilize vector-processing capabilities. Subsequently, MCV was modified to utilize both distributed-memory [Sutton and Brown, 1994] and shared-memory parallelism. The code has been compiled and run on platforms ranging from 32-bit UNIX workstations to clusters of 64-bit vector-parallel supercomputers. The computational efficiency of the code allows the analyst to perform calculations using many more neutron histories than is practical with most other Monte Carlo codes, thereby yielding results with smaller statistical uncertainties. MCV also utilizes variance reduction techniques such as survival biasing, splitting, and rouletting to permit additional reduction in uncertainties. While a general-purpose neutron Monte Carlo code, MCV is optimized for reactor physics calculations. It has the

  7. Monte Carlo methods and applications in nuclear physics

    International Nuclear Information System (INIS)

    Carlson, J.

    1990-01-01

    Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon-interaction, charge and magnetic form factors, the coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs

  8. LAPP. Activity report 2009-2012

    International Nuclear Information System (INIS)

    Karyotakis, Yannis; Berger, Nicole; Bombar, Claudine; Delmastro, Marco; T'Jampens, Stephane

    2013-10-01

    LAPP is a high energy physics laboratory founded in 1976 and is one of the 19 laboratories of IN2P3 (National Institute of Nuclear and Particle Physics), an institute of the CNRS (National Centre for Scientific Research). LAPP is a joint research facility of the University Savoie Mont Blanc (USMB) and the CNRS. Research carried out at LAPP aims at understanding the elementary particles and the fundamental interactions between them as well as exploring the connections between the infinitesimally small and the unbelievably big. Among other subjects, LAPP teams try to understand the origin of the mass of the particles, the mystery of dark matter and what happened to the anti-matter that was present in the early universe. LAPP researchers work in close contact with phenomenologist teams from LAPTh, a theory laboratory hosted in the same building. LAPP teams have also worked for several decades on understanding neutrinos, those elementary, almost massless particles with amazing transformation properties. They took part in the design and realization of several experiments. Other LAPP teams collaborate in experiments studying signals from the cosmos. This document presents the activities of the laboratory during the years 2009-2012: 1 - Forewords; 2 - Experimental groups: Standard model and new physics (ATLAS: the road to the Higgs boson; Babar: flavor and CP violation physics; LHCb: flavor and CP violation physics in the search for new physics; CKM-fitter: CKM matrix phenomenology; OPERA: neutrino physics and the study of flavor oscillation; future linear colliders: Micromegas R and D for calorimetry; CTF3 electron beam instrumentation; Lavista - vibratory control R and D); The universe as laboratory: Virgo, the search for gravitational waves; Gamma astronomy at LAPP: H.E.S.S./CTA; Cosmic rays study with AMS; Scientific production; 3 - Teaching activities; 4 - Services: Electronics department; Computers department and MUST; Mechanics department; Valorisation; Administration and

  9. Monte Carlo and analytic simulations in nanoparticle-enhanced radiation therapy

    Directory of Open Access Journals (Sweden)

    Paro AD

    2016-09-01

    Full Text Available Autumn D Paro,1 Mainul Hossain,2 Thomas J Webster,1,3,4 Ming Su1,4 1Department of Chemical Engineering, Northeastern University, Boston, MA, USA; 2NanoScience Technology Center and School of Electrical Engineering and Computer Science, University of Central Florida, Orlando, Florida, USA; 3Excellence for Advanced Materials Research, King Abdulaziz University, Jeddah, Saudi Arabia; 4Wenzhou Institute of Biomaterials and Engineering, Chinese Academy of Science, Wenzhou Medical University, Zhejiang, People’s Republic of China Abstract: Analytical and Monte Carlo simulations have been used to predict dose enhancement factors in nanoparticle-enhanced X-ray radiation therapy. Both simulations predict an increase in dose enhancement in the presence of nanoparticles, but the two methods predict different levels of enhancement over the studied energy, nanoparticle materials, and concentration regime for several reasons. The Monte Carlo simulation calculates energy deposited by electrons and photons, while the analytical one only calculates energy deposited by source photons and photoelectrons; the Monte Carlo simulation accounts for electron–hole recombination, while the analytical one does not; and the Monte Carlo simulation randomly samples photon or electron path and accounts for particle interactions, while the analytical simulation assumes a linear trajectory. This study demonstrates that the Monte Carlo simulation will be a better choice to evaluate dose enhancement with nanoparticles in radiation therapy. Keywords: nanoparticle, dose enhancement, Monte Carlo simulation, analytical simulation, radiation therapy, tumor cell, X-ray 

  10. Monte Carlo simulation of a coded-aperture thermal neutron camera

    International Nuclear Information System (INIS)

    Dioszegi, I.; Salwen, C.; Forman, L.

    2011-01-01

    We employed the MCNPX Monte Carlo code to simulate image formation in a coded-aperture thermal-neutron camera. The camera, developed at Brookhaven National Laboratory (BNL), consists of a 20 x 17 cm² active area ³He-filled position-sensitive wire chamber in a cadmium enclosure box. The front of the box is a coded-aperture cadmium mask (at present with three different resolutions). We tested the detector experimentally with various arrangements of moderated point-neutron sources. The purpose of using the Monte Carlo modeling was to develop an easily modifiable model of the device to predict the detector's behavior using different mask patterns, and also to generate images of extended-area sources or large numbers (up to ten) of them, that is important for nonproliferation and arms-control verification, but difficult to achieve experimentally. In the model, we utilized the advanced geometry capabilities of the MCNPX code to simulate the coded aperture mask. Furthermore, the code simulated the production of thermal neutrons from fission sources surrounded by a thermalizer. With this code we also determined the thermal-neutron shadow cast by the cadmium mask; the calculations encompassed fast and epithermal neutrons penetrating into the detector through the mask. Since the process of signal production in ³He-filled position-sensitive wire chambers is well known, we omitted this part from our modeling. Simplified efficiency values were used for the three (thermal, epithermal, and fast) neutron-energy regions. Electronic noise and the room's background were included as a uniform irradiation component. We processed the experimental- and simulated-images using identical LabVIEW virtual instruments. (author)

  11. High-efficiency wavefunction updates for large scale Quantum Monte Carlo

    Science.gov (United States)

    Kent, Paul; McDaniel, Tyler; Li, Ying Wai; D'Azevedo, Ed

    Within ab initio Quantum Monte Carlo (QMC) simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunctions. The evaluation of each Monte Carlo move requires finding the determinant of a dense matrix, which is traditionally iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. For calculations with thousands of electrons, this operation dominates the execution profile. We propose a novel rank-k delayed update scheme. This strategy enables probability evaluation for multiple successive Monte Carlo moves, with application of accepted moves to the matrices delayed until after a predetermined number of moves, k. Accepted events grouped in this manner are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency. This procedure does not change the underlying Monte Carlo sampling or the sampling efficiency. For large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order of magnitude speedups can be obtained on both multi-core CPUs and GPUs, making this algorithm highly advantageous for current petascale and future exascale computations.
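
    For reference, the conventional rank-1 step that the proposed rank-k scheme delays and batches looks like this in outline; the random matrix stands in for a Slater matrix, and nothing here is the authors' delayed-update code.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 6
        A = rng.standard_normal((n, n))          # stand-in Slater matrix (orbital values per electron)
        Ainv = np.linalg.inv(A)

        k = 2                                    # electron k proposes a move: row k gets new orbital values
        u = rng.standard_normal(n)

        # Determinant ratio for the proposed move: O(n) work instead of a fresh O(n^3) determinant
        ratio = u @ Ainv[:, k]
        assert np.isclose(ratio, np.linalg.det(np.vstack([A[:k], u[None], A[k+1:]])) / np.linalg.det(A))

        # Sherman-Morrison rank-1 update of the inverse, applied if the move is accepted
        delta = u - A[k]
        Ainv_new = Ainv - np.outer(Ainv[:, k], delta @ Ainv) / ratio
        A[k] = u
        assert np.allclose(Ainv_new @ A, np.eye(n), atol=1e-6)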

  12. The Monte Carlo Simulation Method for System Reliability and Risk Analysis

    CERN Document Server

    Zio, Enrico

    2013-01-01

    Monte Carlo simulation is one of the best tools for performing realistic analysis of complex systems as it allows most of the limiting assumptions on system behavior to be relaxed. The Monte Carlo Simulation Method for System Reliability and Risk Analysis comprehensively illustrates the Monte Carlo simulation method and its application to reliability and system engineering. Readers are given a sound understanding of the fundamentals of Monte Carlo sampling and simulation and its application for realistic system modeling.   Whilst many of the topics rely on a high-level understanding of calculus, probability and statistics, simple academic examples will be provided in support to the explanation of the theoretical foundations to facilitate comprehension of the subject matter. Case studies will be introduced to provide the practical value of the most advanced techniques.   This detailed approach makes The Monte Carlo Simulation Method for System Reliability and Risk Analysis a key reference for senior undergra...

  13. Monte Carlo simulation applied to alpha spectrometry

    International Nuclear Information System (INIS)

    Baccouche, S.; Gharbi, F.; Trabelsi, A.

    2007-01-01

    Alpha particle spectrometry is a widely-used analytical method, in particular when we deal with pure alpha emitting radionuclides. Monte Carlo simulation is an adequate tool to investigate the influence of various phenomena on this analytical method. We performed an investigation of those phenomena using the simulation code GEANT of CERN. The results concerning the geometrical detection efficiency in different measurement geometries agree with analytical calculations. This work confirms that Monte Carlo simulation of solid angle of detection is a very useful tool to determine with very good accuracy the detection efficiency.
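
    A bare-bones illustration of the geometrical-efficiency part of such a simulation, for an on-axis point source facing a circular detector (a far simpler setup than the GEANT model in the paper); the radius, distance and history count are arbitrary.

        import math, random

        R, d, N = 1.0, 2.0, 1_000_000        # detector radius, source-detector distance, histories

        hits = 0
        for _ in range(N):
            # isotropic emission: cos(theta) uniform in [-1, 1]; the azimuth is irrelevant on-axis
            mu = random.uniform(-1.0, 1.0)
            if mu <= 0.0:
                continue                      # emitted away from the detector plane
            rho = d * math.sqrt(1.0 - mu * mu) / mu   # radial distance where the ray crosses the plane
            if rho <= R:
                hits += 1

        mc = hits / N
        analytic = 0.5 * (1.0 - d / math.sqrt(d * d + R * R))   # exact solid-angle fraction
        print(f"Monte Carlo efficiency {mc:.5f}  vs analytic {analytic:.5f}")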

  14. Monte Carlo simulation of neutron scattering instruments

    International Nuclear Information System (INIS)

    Seeger, P.A.

    1995-01-01

    A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width

  15. Simulation of transport equations with Monte Carlo

    International Nuclear Information System (INIS)

    Matthes, W.

    1975-09-01

    The main purpose of the report is to explain the relation between the transport equation and the Monte Carlo game used for its solution. The introduction of artificial particles carrying a weight provides one with high flexibility in constructing many different games for the solution of the same equation. This flexibility opens a way to construct a Monte Carlo game for the solution of the adjoint transport equation. Emphasis is laid mostly on giving a clear understanding of what to do and not on the details of how to do a specific game

  16. Microcanonical Monte Carlo

    International Nuclear Information System (INIS)

    Creutz, M.

    1986-01-01

    The author discusses a recently developed algorithm for simulating statistical systems. The procedure interpolates between molecular dynamics methods and canonical Monte Carlo. The primary advantages are extremely fast simulations of discrete systems such as the Ising model and a relative insensitivity to random number quality. A variation of the algorithm gives rise to a deterministic dynamics for Ising spins. This model may be useful for high speed simulation of non-equilibrium phenomena
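
    A compact sketch of the demon (microcanonical) update for the 2-D Ising model, in the spirit of the algorithm described here; the lattice size, sweep count and initial demon energy are arbitrary choices, and the last line uses the usual demon-energy thermometer relation for this model.

        import math, random

        L, SWEEPS = 32, 600
        spin = [[1] * L for _ in range(L)]          # start in the ordered state...
        demon = 1000                                # ...with the extra energy loaded into the demon

        def delta_e(i, j):
            s = spin[i][j]
            nb = spin[(i + 1) % L][j] + spin[(i - 1) % L][j] + spin[i][(j + 1) % L] + spin[i][(j - 1) % L]
            return 2 * s * nb                       # energy change if spin (i, j) were flipped

        demon_sum = samples = 0
        for sweep in range(SWEEPS):
            for _ in range(L * L):
                i, j = random.randrange(L), random.randrange(L)
                dE = delta_e(i, j)
                if dE <= demon:                     # flip only if the demon can pay; total energy is conserved
                    spin[i][j] *= -1
                    demon -= dE
                if sweep >= SWEEPS // 2:            # measure after the energy has spread into the lattice
                    demon_sum += demon
                    samples += 1

        mean_ed = demon_sum / samples
        kT = 4.0 / math.log(1.0 + 4.0 / mean_ed)    # demon-energy thermometer for the 2-D Ising model
        print(f"mean demon energy {mean_ed:.2f}  ->  temperature estimate kT/J ~ {kT:.2f}")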

  17. A contribution Monte Carlo method

    International Nuclear Information System (INIS)

    Aboughantous, C.H.

    1994-01-01

    A Contribution Monte Carlo method is developed and successfully applied to a sample deep-penetration shielding problem. The random walk is simulated in most of its parts as in conventional Monte Carlo methods. The probability density functions (pdf's) are expressed in terms of spherical harmonics and are continuous functions in direction cosine and azimuthal angle variables as well as in position coordinates; the energy is discretized in the multigroup approximation. The transport pdf is an unusual exponential kernel strongly dependent on the incident and emergent directions and energies and on the position of the collision site. The method produces the same results obtained with the deterministic method with a very small standard deviation, with as little as 1,000 Contribution particles in both analog and nonabsorption biasing modes and with only a few minutes CPU time

  18. Response Matrix Method Development Program at Savannah River Laboratory

    International Nuclear Information System (INIS)

    Sicilian, J.M.

    1976-01-01

    The Response Matrix Method Development Program at Savannah River Laboratory (SRL) has concentrated on the development of an effective system of computer codes for the analysis of Savannah River Plant (SRP) reactors. The most significant contribution of this program to date has been the verification of the accuracy of diffusion theory codes as used for routine analysis of SRP reactor operation. This paper documents the two steps carried out in achieving this verification: confirmation of the accuracy of the response matrix technique through comparison with experiment and Monte Carlo calculations; and establishment of agreement between diffusion theory and response matrix codes in situations which realistically approximate actual operating conditions

  19. The Utopia of Cross-border Regions. Territorial transformation and Cross-Border Governance on Espace Mont-Blanc

    NARCIS (Netherlands)

    Lissandrello, E.

    2006-01-01

    Theories of globalisation, internationalisation, post-nationalism or trans-nationalism dismiss the concept of 'territoriality' within the paradigm of moving beyond 'nation-state' sovereignty. In this work, a different idea is sustained: borders and territoriality are not just lost terms within

  20. The impact of Monte Carlo simulation: a scientometric analysis of scholarly literature

    CERN Document Server

    Pia, Maria Grazia; Bell, Zane W; Dressendorfer, Paul V

    2010-01-01

    A scientometric analysis of Monte Carlo simulation and Monte Carlo codes has been performed over a set of representative scholarly journals related to radiation physics. The results of this study are reported and discussed. They document and quantitatively appraise the role of Monte Carlo methods and codes in scientific research and engineering applications.

  1. Exact Monte Carlo for molecules

    International Nuclear Information System (INIS)

    Lester, W.A. Jr.; Reynolds, P.J.

    1985-03-01

    A brief summary of the fixed-node quantum Monte Carlo method is presented. Results obtained for binding energies, the classical barrier height for H + H2, and the singlet-triplet splitting in methylene are presented and discussed. 17 refs

  2. Application of Monte Carlo simulation to the standardization of positron emitting radionuclides; Aplicacao do metodo de Monte Carlo na padronizacao de radionuclideos emissores de positrons

    Energy Technology Data Exchange (ETDEWEB)

    Tongu, Margareth Lika Onishi

    2009-07-01

    Since 1967, the Nuclear Metrology Laboratory (LNM) at the Nuclear and Energy Research Institute (IPEN) in Sao Paulo, Brazil, has developed radionuclide standardization methods and measurements of the gamma-ray emission probabilities per decay by means of a 4πβ-γ coincidence system, a high accuracy primary method for determining the disintegration rate of radionuclides of interest. In 2001 the LNM started a research field on modeling, based on the Monte Carlo method, of all the system components, including radiation detectors and radionuclide decay processes. This methodology allows the simulation of the detection process in a 4πβ-γ system, determining theoretically the observed activity as a function of the 4πβ detector efficiency, enabling the prediction of the behavior of the extrapolation curve and allowing a detailed planning of the experiment before starting the measurements. One of the objectives of the present work is the improvement of the 4π proportional counter modeling, presenting a detailed description of the source holder and radioactive source material, as well as absorbers placed around the source. The simulation of radiation transport through the detectors has been carried out using the code MCNPX. The main focus of the present work is on Monte Carlo modeling of the standardization of positron emitting radionuclides associated (or not) with electron capture and accompanied (or not) by the emission of gamma radiation. One difficulty in this modeling is to simulate the detection of the annihilation gamma rays, which arise in the process of positron absorption within the 4π detector. The methodology was applied to the radionuclides ¹⁸F and ²²Na. (author)

  3. No-compromise reptation quantum Monte Carlo

    International Nuclear Information System (INIS)

    Yuen, W K; Farrar, Thomas J; Rothstein, Stuart M

    2007-01-01

    Since its publication, the reptation quantum Monte Carlo algorithm of Baroni and Moroni (1999 Phys. Rev. Lett. 82 4745) has been applied to several important problems in physics, but its mathematical foundations are not well understood. We show that their algorithm is not of typical Metropolis-Hastings type, and we specify conditions required for the generated Markov chain to be stationary and to converge to the intended distribution. The time-step bias may add up, and in many applications it is only the middle of a reptile that is the most important. Therefore, we propose an alternative, 'no-compromise reptation quantum Monte Carlo' to stabilize the middle of the reptile. (fast track communication)

  4. Exploring cluster Monte Carlo updates with Boltzmann machines.

    Science.gov (United States)

    Wang, Lei

    2017-11-01

    Boltzmann machines are physics informed generative models with broad applications in machine learning. They model the probability distribution of an input data set with latent variables and generate new samples accordingly. Applying the Boltzmann machines back to physics, they are ideal recommender systems to accelerate the Monte Carlo simulation of physical systems due to their flexibility and effectiveness. More intriguingly, we show that the generative sampling of the Boltzmann machines can even give different cluster Monte Carlo algorithms. The latent representation of the Boltzmann machines can be designed to mediate complex interactions and identify clusters of the physical system. We demonstrate these findings with concrete examples of the classical Ising model with and without four-spin plaquette interactions. In the future, automatic searches in the algorithm space parametrized by Boltzmann machines may discover more innovative Monte Carlo updates.

  6. Monte Carlo simulation of continuous-space crystal growth

    International Nuclear Information System (INIS)

    Dodson, B.W.; Taylor, P.A.

    1986-01-01

    We describe a method, based on Monte Carlo techniques, of simulating the atomic growth of crystals without the discrete lattice space assumed by conventional Monte Carlo growth simulations. Since no lattice space is assumed, problems involving epitaxial growth, heteroepitaxy, phonon-driven mechanisms, surface reconstruction, and many other phenomena incompatible with the lattice-space approximation can be studied. Also, use of the Monte Carlo method circumvents to some extent the extreme limitations on simulated timescale inherent in crystal-growth techniques which might be proposed using molecular dynamics. The implementation of the new method is illustrated by studying the growth of strained-layer superlattice (SLS) interfaces in two-dimensional Lennard-Jones atomic systems. Despite the extreme simplicity of such systems, the qualitative features of SLS growth seen here are similar to those observed experimentally in real semiconductor systems
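
    The core move of an off-lattice (continuous-space) Metropolis simulation can be sketched as below for a small 2-D Lennard-Jones system; the particle number, box size, temperature and step size are invented, and this plain canonical sampler is only a stand-in for the growth simulation described in the abstract.

        import math, random

        N, BOX, T, STEP, SWEEPS = 30, 8.0, 0.5, 0.15, 1000   # assumed 2-D toy parameters
        pos = [[random.uniform(0, BOX), random.uniform(0, BOX)] for _ in range(N)]

        def pair_energy(i, p):
            """Lennard-Jones energy of particle i at trial position p (minimum-image convention)."""
            e = 0.0
            for j, q in enumerate(pos):
                if j == i:
                    continue
                dx = (p[0] - q[0] + BOX / 2) % BOX - BOX / 2
                dy = (p[1] - q[1] + BOX / 2) % BOX - BOX / 2
                r2 = max(dx * dx + dy * dy, 0.64)      # clamp to tame the random initial overlaps
                inv6 = 1.0 / r2 ** 3
                e += 4.0 * (inv6 * inv6 - inv6)
            return e

        accepted = 0
        for _ in range(SWEEPS * N):
            i = random.randrange(N)
            trial = [(pos[i][0] + random.uniform(-STEP, STEP)) % BOX,
                     (pos[i][1] + random.uniform(-STEP, STEP)) % BOX]
            dE = pair_energy(i, trial) - pair_energy(i, pos[i])
            if dE <= 0 or random.random() < math.exp(-dE / T):   # Metropolis acceptance, no lattice needed
                pos[i] = trial
                accepted += 1

        print("acceptance ratio:", round(accepted / (SWEEPS * N), 2))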

  7. Effect of error propagation of nuclide number densities on Monte Carlo burn-up calculations

    International Nuclear Information System (INIS)

    Tohjoh, Masayuki; Endo, Tomohiro; Watanabe, Masato; Yamamoto, Akio

    2006-01-01

    As a result of improvements in computer technology, the continuous energy Monte Carlo burn-up calculation has received attention as a good candidate for an assembly calculation method. However, the results of Monte Carlo calculations contain statistical errors. The results of Monte Carlo burn-up calculations, in particular, include statistical errors propagated through the variance of the nuclide number densities. Therefore, if the statistical error alone is evaluated, the errors in Monte Carlo burn-up calculations may be underestimated. To clarify this effect of error propagation on Monte Carlo burn-up calculations, we propose an equation that can predict the variance of the nuclide number densities after burn-up, and we verified this equation using an enormous number of Monte Carlo burn-up calculations in which only the initial random numbers were changed. We also verified the effect of the number of burn-up calculation points on Monte Carlo burn-up calculations. From these verifications, we estimated the errors in Monte Carlo burn-up calculations including both statistical and propagated errors. Finally, we clarified the effects of error propagation on Monte Carlo burn-up calculations by comparing statistical errors alone versus both statistical and propagated errors. The results revealed that the effects of error propagation on the Monte Carlo burn-up calculations of an 8 x 8 BWR fuel assembly are low up to 60 GWd/t

  8. Monte Carlo simulation of neutron counters for safeguards applications

    International Nuclear Information System (INIS)

    Looman, Marc; Peerani, Paolo; Tagziria, Hamid

    2009-01-01

    MCNP-PTA is a new Monte Carlo code for the simulation of neutron counters for nuclear safeguards applications developed at the Joint Research Centre (JRC) in Ispra (Italy). After some preliminary considerations outlining the general aspects involved in the computational modelling of neutron counters, this paper describes the specific details and approximations which make up the basis of the model implemented in the code. One of the major improvements allowed by the use of Monte Carlo simulation is a considerable reduction in both the experimental work and in the reference materials required for the calibration of the instruments. This new approach to the calibration of counters using Monte Carlo simulation techniques is also discussed.

  9. Monte Carlo methods and applications in nuclear physics

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, J.

    1990-01-01

    Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon-interaction, charge and magnetic form factors, the coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs.

  10. Research on Monte Carlo improved quasi-static method for reactor space-time dynamics

    International Nuclear Information System (INIS)

    Xu Qi; Wang Kan; Li Shirui; Yu Ganglin

    2013-01-01

    With large time steps, the improved quasi-static (IQS) method can increase the calculation speed of reactor dynamics simulations. The Monte Carlo IQS method proposed in this paper combines the advantages of both the IQS method and the MC method. Thus, the Monte Carlo IQS method is well suited to solving the space-time dynamics problems of new-concept reactors. Based on the theory of IQS, Monte Carlo algorithms for calculating the adjoint neutron flux, reactor kinetic parameters and the shape function were designed and realized. A simple Monte Carlo IQS code and a corresponding diffusion IQS code were developed and used for verification of the Monte Carlo IQS method. (authors)

  11. Kontrola tačnosti rezultata u simulacijama Monte Karlo / Accuracy control in Monte Carlo simulations

    Directory of Open Access Journals (Sweden)

    Nebojša V. Nikolić

    2010-04-01

    Full Text Available The paper presents an application of the Automated Independent Replication with Gathering Statistics of the Stochastic Processes method to achieving and controlling the accuracy of simulation results in Monte Carlo queuing simulations. The method is based on the application of the basic theorems of probability theory and mathematical statistics. The accuracy of the simulation results is linked directly to the number of independent replications of the simulation experiments.
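
    A small sketch of the replication idea, assuming an M/M/1 queue (arrival rate 0.8, service rate 1.0) and a normal-approximation confidence interval; replications are added until the half-width reaches a chosen target. None of the parameters come from the paper.

        import random, statistics

        def mm1_mean_wait(lam=0.8, mu=1.0, customers=5000):
            """One replication: average waiting time in queue for an M/M/1 system (Lindley recursion)."""
            w = total = 0.0
            for _ in range(customers):
                total += w
                w = max(0.0, w + random.expovariate(mu) - random.expovariate(lam))
            return total / customers

        def replicate_until(target_halfwidth=0.1, z=1.96, min_reps=10, max_reps=500):
            """Keep adding independent replications until the confidence interval is tight enough."""
            results = []
            while len(results) < max_reps:
                results.append(mm1_mean_wait())
                if len(results) >= min_reps:
                    half = z * statistics.stdev(results) / len(results) ** 0.5
                    if half <= target_halfwidth:
                        break
            return statistics.mean(results), half, len(results)

        mean, half, reps = replicate_until()
        # Exact M/M/1 mean wait in queue for these rates is 0.8 / (1.0 * 0.2) = 4.0
        print(f"mean wait ~ {mean:.2f} +/- {half:.2f} after {reps} replications")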

  12. Report on International Collaboration Involving the FE Heater and HG-A Tests at Mont Terri

    Energy Technology Data Exchange (ETDEWEB)

    Houseworth, Jim [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Rutqvist, Jonny [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Asahina, Daisuke [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chen, Fei [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Vilarrasa, Victor [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Liu, Hui-Hai [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Birkholzer, Jens [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2013-11-01

    Nuclear waste programs outside of the US have focused on different host rock types for geological disposal of high-level radioactive waste. Several countries, including France, Switzerland, Belgium, and Japan are exploring the possibility of waste disposal in shale and other clay-rich rock that fall within the general classification of argillaceous rock. This rock type is also of interest for the US program because the US has extensive sedimentary basins containing large deposits of argillaceous rock. LBNL, as part of the DOE-NE Used Fuel Disposition Campaign, is collaborating on some of the underground research laboratory (URL) activities at the Mont Terri URL near Saint-Ursanne, Switzerland. The Mont Terri project, which began in 1995, has developed a URL at a depth of about 300 m in a stiff clay formation called the Opalinus Clay. Our current collaboration efforts include two test modeling activities for the FE heater test and the HG-A leak-off test. This report documents results concerning our current modeling of these field tests. The overall objectives of these activities include an improved understanding of and advanced relevant modeling capabilities for EDZ evolution in clay repositories and the associated coupled processes, and to develop a technical basis for the maximum allowable temperature for a clay repository.

  13. Lattice gauge theories and Monte Carlo simulations

    International Nuclear Information System (INIS)

    Rebbi, C.

    1981-11-01

    After some preliminary considerations, the discussion of quantum gauge theories on a Euclidean lattice takes up the definition of Euclidean quantum theory and treatment of the continuum limit; analogy is made with statistical mechanics. Perturbative methods can produce useful results for strong or weak coupling. In the attempts to investigate the properties of the systems for intermediate coupling, numerical methods known as Monte Carlo simulations have proved valuable. The bulk of this paper illustrates the basic ideas underlying the Monte Carlo numerical techniques and the major results achieved with them according to the following program: Monte Carlo simulations (general theory, practical considerations), phase structure of Abelian and non-Abelian models, the observables (coefficient of the linear term in the potential between two static sources at large separation, mass of the lowest excited state with the quantum numbers of the vacuum (the so-called glueball), the potential between two static sources at very small distance, the critical temperature at which sources become deconfined), gauge fields coupled to bosonic matter (Higgs) fields, and systems with fermions
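
    To make the "Monte Carlo simulations" item above concrete, here is a minimal Metropolis sweep for compact U(1) lattice gauge theory in two dimensions with the Wilson plaquette action. It is an illustrative sketch written for this summary (lattice size, coupling and sweep count are arbitrary), not code from the lectures.

```python
import math
import random

L, beta = 8, 2.0                    # lattice size and coupling (arbitrary demo values)
rng = random.Random(1)
# link variables U = exp(i*theta) stored as angles theta[x][y][mu], mu in {0, 1}
theta = [[[0.0, 0.0] for _ in range(L)] for _ in range(L)]

def plaquette_angle(x, y):
    """Angle of the plaquette based at (x, y)."""
    xp, yp = (x + 1) % L, (y + 1) % L
    return (theta[x][y][0] + theta[xp][y][1]
            - theta[x][yp][0] - theta[x][y][1])

def action_touching(x, y, mu):
    """Sum of Wilson plaquette actions beta*(1 - cos(theta_P)) containing link (x, y, mu)."""
    xm, ym = (x - 1) % L, (y - 1) % L
    if mu == 0:   # x-directed link belongs to plaquettes based at (x, y) and (x, y-1)
        plaqs = [plaquette_angle(x, y), plaquette_angle(x, ym)]
    else:         # y-directed link belongs to plaquettes based at (x, y) and (x-1, y)
        plaqs = [plaquette_angle(x, y), plaquette_angle(xm, y)]
    return sum(beta * (1.0 - math.cos(p)) for p in plaqs)

def metropolis_sweep(step=0.5):
    for x in range(L):
        for y in range(L):
            for mu in (0, 1):
                old_action = action_touching(x, y, mu)
                old = theta[x][y][mu]
                theta[x][y][mu] = old + rng.uniform(-step, step)
                if rng.random() >= math.exp(-(action_touching(x, y, mu) - old_action)):
                    theta[x][y][mu] = old          # reject the proposal

for sweep in range(200):
    metropolis_sweep()

avg_plaq = sum(math.cos(plaquette_angle(x, y)) for x in range(L) for y in range(L)) / L**2
print("average plaquette <cos theta_P> =", round(avg_plaq, 3))
```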

  14. Applying Squeezing Technique to Clayrocks: Lessons Learned from Experiments at Mont Terri Rock Laboratory

    International Nuclear Information System (INIS)

    Fernandez, A. M.; Sanchez-Ledesma, D. M.; Tournassat, C.; Melon, A.; Gaucher, E.; Astudillo, E.; Vinsot, A.

    2013-01-01

    Knowledge of the pore water chemistry in clay rock formations plays an important role in determining radionuclide migration in the context of nuclear waste disposal. Among the different in situ and ex-situ techniques for pore water sampling in clay sediments and soils, the squeezing technique dates back 115 years. Although several studies have addressed the reliability and representativeness of squeezed pore waters, most of them were performed on high-porosity, high water content, unconsolidated clay sediments; very few tackled the analysis of squeezed pore water from low-porosity, low water content, highly consolidated clay rocks. In this work, a specially designed and fabricated one-dimensional compression cell with two-directional fluid flow was used to extract and analyse the pore water composition of Opalinus Clay core samples from Mont Terri (Switzerland). The reproducibility of the technique is good, and no ionic ultrafiltration, chemical fractionation or anion exclusion was found in the range of pressures analysed (70-200 MPa). Pore waters extracted in this pressure range do not show a decrease in concentration, which would have indicated dilution of the free pore water by mixing with the outer layers of double-layer water (Donnan water). A threshold (safety) squeezing pressure of 175 MPa was established to avoid membrane effects (ion filtering, anion exclusion, etc.) from clay particles induced by increasing pressures. Moreover, the pore waters extracted at these pressures are representative of the Opalinus Clay formation, as shown by direct comparison against in situ collected borehole waters. (Author)

  15. Applying Squeezing Technique to Clayrocks: Lessons Learned from Experiments at Mont Terri Rock Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Fernandez, A. M.; Sanchez-Ledesma, D. M.; Tournassat, C.; Melon, A.; Gaucher, E.; Astudillo, E.; Vinsot, A.

    2013-07-01

    Knowledge of the pore water chemistry in clay rock formations plays an important role in determining radionuclide migration in the context of nuclear waste disposal. Among the different in situ and ex-situ techniques for pore water sampling in clay sediments and soils, the squeezing technique dates back 115 years. Although several studies have addressed the reliability and representativeness of squeezed pore waters, most of them were performed on high-porosity, high water content, unconsolidated clay sediments; very few tackled the analysis of squeezed pore water from low-porosity, low water content, highly consolidated clay rocks. In this work, a specially designed and fabricated one-dimensional compression cell with two-directional fluid flow was used to extract and analyse the pore water composition of Opalinus Clay core samples from Mont Terri (Switzerland). The reproducibility of the technique is good, and no ionic ultrafiltration, chemical fractionation or anion exclusion was found in the range of pressures analysed (70-200 MPa). Pore waters extracted in this pressure range do not show a decrease in concentration, which would have indicated dilution of the free pore water by mixing with the outer layers of double-layer water (Donnan water). A threshold (safety) squeezing pressure of 175 MPa was established to avoid membrane effects (ion filtering, anion exclusion, etc.) from clay particles induced by increasing pressures. Moreover, the pore waters extracted at these pressures are representative of the Opalinus Clay formation, as shown by direct comparison against in situ collected borehole waters. (Author)

  16. Time step length versus efficiency of Monte Carlo burnup calculations

    International Nuclear Information System (INIS)

    Dufek, Jan; Valtavirta, Ville

    2014-01-01

    Highlights: • Time step length largely affects efficiency of MC burnup calculations. • Efficiency of MC burnup calculations improves with decreasing time step length. • Results were obtained from SIE-based Monte Carlo burnup calculations. - Abstract: We demonstrate that efficiency of Monte Carlo burnup calculations can be largely affected by the selected time step length. This study employs the stochastic implicit Euler based coupling scheme for Monte Carlo burnup calculations that performs a number of inner iteration steps within each time step. In a series of calculations, we vary the time step length and the number of inner iteration steps; the results suggest that Monte Carlo burnup calculations get more efficient as the time step length is reduced. More time steps must be simulated as they get shorter; however, this is more than compensated by the decrease in computing cost per time step needed for achieving a certain accuracy

  17. Results of laboratory and in-situ measurements for the description of coupled thermo-hydro-mechanical processes in clays

    Energy Technology Data Exchange (ETDEWEB)

    Goebel, Ingeborg; Alheid, Hans-Joachim [BGR Hannover, Stilleweg 2, D-30655 Hannover (Germany); Jockwer, Norbert [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH, Theodor-Heuss-Str. 4, 38122 Braunschweig (Germany); Mayor, Juan Carlos [ENRESA, Emilio Vargas 7, E-Madrid (Spain); Garcia-Sineriz, Jose Luis [AITEMIN, c/ Alenza, 1 - 28003 Madrid (Spain); Alonso, Eduardo [International Center for Numerical Methods in Engineering, CIMNE, Edificio C-1, Campus Norte UPC, C/Gran Capitan, s/n, 08034 Barcelona (Spain); Weber, Hans Peter [NAGRA, Hardstrasse 73, CH-5430 Wettingen (Switzerland); Ploetze, Michael [ETHZ, Eidgenoessische Technische Hochschule Zuerich, ETH Zentrum, HG Raemistrasse 101, CH-8092 Zuerich (Switzerland); Klubertanz, Georg [COLENCO Power Engineering Ltd, CPE, Taefern Str. 26, 5405 Baden-Daettwil (Switzerland); Ammon, Christian [Rothpletz, Lienhard, Cie AG, Schifflaendestrasse 35, 5001 Aarau (Switzerland)

    2004-07-01

    The Heater Experiment at the Mont Terri Underground Laboratory aims at producing a validated model of thermo-hydro-mechanically (THM) coupled processes. The experiment consists of an engineered barrier system where in a vertical borehole, a heater is embedded in bentonite blocks, surrounded by the host rock, Opalinus Clay. The experimental programme comprises permanent monitoring before, during, and after the heating phase, complemented by geotechnical, hydraulic, and seismic in-situ measurements as well as laboratory analyses of mineralogical and rock mechanics properties. After the heating, the experiment was dismantled for further investigations. Major results of the experimental findings are outlined. (authors)

  18. Monitoring the Excavation Damaged Zone in Opalinus clay by three dimensional reconstruction of the electrical resistivity in the Mont Terri gallery G-04

    Science.gov (United States)

    Lesparre, N.; Adler, A.; Nicollin, F.; Gibert, D.; Nussbaum, C.

    2012-04-01

    The characteristics of Opalinus Clay have been studied in recent years for its capacity, as a low-permeability rock, to retard radionuclide transport. This formation therefore presents suitable properties for hosting radioactive waste repository sites. The Mont Terri underground rock laboratory (Switzerland) has been excavated in the Opalinus Clay layer in order to carry out experiments improving knowledge of the physico-chemical properties of the rock. The study of electrical properties provides information on the rock structure, its anisotropy and the changes of these properties with time (Nicollin et al., 2010; Thovert et al., 2011). Here, the three-dimensional reconstruction of the electrical resistivity aims at monitoring the temporal evolution of the excavation damaged zone. Three rings of electrodes have been set up around the gallery, and voltage is measured between two electrodes while a current is injected between two others (Gibert et al., 2006). Measurements were made from July 2004 until April 2008, before, during and after the excavation of Gallery 04. In this study we develop a computational approach to reconstruct three-dimensional images of the resistivity in the vicinity of the electrodes. A finite element model is used to represent the complex geometry of the gallery. The measurements inferred from a given resistivity distribution are estimated using the software EIDORS (Adler and Lionheart, 2006); this constitutes the forward problem. The reconstruction of the medium resistivity is then implemented by fitting the estimated to the measured data, via the resolution of an inverse problem. The parameters of this inverse problem are defined by mapping the forward-problem elements onto a coarser mesh. This drastically reduces the number of unknowns and so increases the robustness of the inversion. The inversion is carried out with the conjugate gradient method, regularised by an analysis of the Jacobian singular values. The results show an

  19. Artificial neural networks, a new alternative to Monte Carlo calculations for radiotherapy

    International Nuclear Information System (INIS)

    Martin, E.; Gschwind, R.; Henriet, J.; Sauget, M.; Makovicka, L.

    2010-01-01

    In order to reduce the computing time needed by Monte Carlo codes in the field of irradiation physics, notably in dosimetry, the authors report the use of artificial neural networks in combination with preliminary Monte Carlo calculations. During the learning phase, Monte Carlo calculations are performed in homogeneous media to build up the neural network. Dosimetric calculations in heterogeneous media, unknown to the network, can then be performed by the trained network. Results of equivalent precision can be obtained in less than one minute on a simple PC, whereas several days are needed with a Monte Carlo calculation
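
    A toy version of the workflow described above (entirely illustrative, with made-up data standing in for the Monte Carlo results, and using scikit-learn rather than whatever tools the authors employed) trains a small network on noisy "Monte Carlo" depth-dose points and then returns a smooth prediction almost instantly.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

# Stand-in for 'preliminary Monte Carlo calculations': a noisy depth-dose-like
# curve in a homogeneous medium (purely synthetic, for illustration only).
depth = rng.uniform(0.0, 10.0, size=(2000, 1))                     # cm
true_dose = np.exp(-0.15 * depth) * (1.0 - np.exp(-2.0 * depth))   # arbitrary shape
mc_dose = true_dose + rng.normal(0.0, 0.02, size=depth.shape)      # MC statistical noise

# 'Learning phase': fit a small neural network to the Monte Carlo samples.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(depth, mc_dose.ravel())

# 'Use phase': the trained network answers new queries in milliseconds.
query = np.linspace(0.0, 10.0, 11).reshape(-1, 1)
for d, p in zip(query.ravel(), net.predict(query)):
    print(f"depth {d:4.1f} cm  ->  predicted relative dose {p:6.3f}")
```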

  20. Interface methods for hybrid Monte Carlo-diffusion radiation-transport simulations

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.

    2006-01-01

    Discrete diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Monte Carlo simulations in diffusive media. An important aspect of DDMC is the treatment of interfaces between diffusive regions, where DDMC is used, and transport regions, where standard Monte Carlo is employed. Three previously developed methods exist for treating transport-diffusion interfaces: the Marshak interface method, based on the Marshak boundary condition, the asymptotic interface method, based on the asymptotic diffusion-limit boundary condition, and the Nth-collided source technique, a scheme that allows Monte Carlo particles to undergo several collisions in a diffusive region before DDMC is used. Numerical calculations have shown that each of these interface methods gives reasonable results as part of larger radiation-transport simulations. In this paper, we use both analytic and numerical examples to compare the ability of these three interface techniques to treat simpler, transport-diffusion interface problems outside of a more complex radiation-transport calculation. We find that the asymptotic interface method is accurate regardless of the angular distribution of Monte Carlo particles incident on the interface surface. In contrast, the Marshak boundary condition only produces correct solutions if the incident particles are isotropic. We also show that the Nth-collided source technique has the capacity to yield accurate results if spatial cells are optically small and Monte Carlo particles are allowed to undergo many collisions within a diffusive region before DDMC is employed. These requirements make the Nth-collided source technique impractical for realistic radiation-transport calculations

  1. A User's Manual for MASH V1.5 - A Monte Carlo Adjoint Shielding Code System

    Energy Technology Data Exchange (ETDEWEB)

    C. O. Slater; J. M. Barnes; J. O. Johnson; J.D. Drischler

    1998-10-01

    The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. The current version, MASH v1.5, is the successor to the original MASH v1.0 code system initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem.

  2. Herwig: The Evolution of a Monte Carlo Simulation

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Monte Carlo event generation has seen significant developments in the last 10 years, starting with preparation for the LHC and continuing during the first LHC run. I will discuss the basic ideas behind Monte Carlo event generators and then go on to review these developments, focussing on the Herwig(++) event generator. I will conclude by presenting the current status of event generation together with some results from the forthcoming new version of Herwig, Herwig 7.

  3. Monte Carlo tests of the ELIPGRID-PC algorithm

    International Nuclear Information System (INIS)

    Davidson, J.R.

    1995-04-01

    The standard tool for calculating the probability of detecting pockets of contamination called hot spots has been the ELIPGRID computer code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM® PC. However, no known independent validation of the ELIPGRID algorithm exists. This document describes a Monte Carlo simulation-based validation of a modified version of the ELIPGRID-PC code. The modified ELIPGRID-PC code is shown to match Monte Carlo-calculated hot-spot detection probabilities to within ±0.5% for 319 out of 320 test cases. The one exception, a very thin elliptical hot spot located within a rectangular sampling grid, differed from the Monte Carlo-calculated probability by about 1%. These results provide confidence in the ability of the modified ELIPGRID-PC code to accurately predict hot-spot detection probabilities within an acceptable range of error
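
    The kind of Monte Carlo check reported above can be reproduced in a few lines. The sketch below is an independent toy (not the actual ELIPGRID algorithm or the validation code); it estimates the probability that a square sampling grid hits an elliptical hot spot whose centre is placed at random within one grid cell, with illustrative grid and ellipse dimensions.

```python
import math
import random

def detection_probability(grid_spacing=1.0, semi_major=0.6, semi_minor=0.2,
                          angle_deg=30.0, trials=100_000, seed=1):
    """Monte Carlo estimate of the chance that at least one node of a square
    sampling grid falls inside an elliptical hot spot with a random centre."""
    rng = random.Random(seed)
    c, s = math.cos(math.radians(angle_deg)), math.sin(math.radians(angle_deg))
    hits = 0
    for _ in range(trials):
        # By symmetry it is enough to place the centre uniformly in one grid cell.
        cx, cy = rng.uniform(0, grid_spacing), rng.uniform(0, grid_spacing)
        detected = False
        # Only grid nodes near this cell can possibly lie inside the ellipse.
        for ix in range(-1, 3):
            for iy in range(-1, 3):
                dx, dy = ix * grid_spacing - cx, iy * grid_spacing - cy
                u, v = c * dx + s * dy, -s * dx + c * dy   # rotate into ellipse axes
                if (u / semi_major) ** 2 + (v / semi_minor) ** 2 <= 1.0:
                    detected = True
        if detected:
            hits += 1
    return hits / trials

print("estimated detection probability:", detection_probability())
```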

  4. Improved Monte Carlo Method for PSA Uncertainty Analysis

    International Nuclear Information System (INIS)

    Choi, Jongsoo

    2016-01-01

    The treatment of uncertainty is an important issue for regulatory decisions. Uncertainties arise from knowledge limitations. A probabilistic approach has exposed some of these limitations and provided a framework to assess their significance and assist in developing a strategy to accommodate them in the regulatory process. The uncertainty analysis (UA) is usually based on the Monte Carlo method. This paper proposes a Monte Carlo UA approach to calculate the mean risk metrics accounting for the SOKC between basic events (including CCFs) using efficient random number generators and to meet Capability Category III of the ASME/ANS PRA standard. Audit calculation is needed in PSA regulatory reviews of uncertainty analysis results submitted for licensing. The proposed Monte Carlo UA approach provides a high degree of confidence in PSA reviews. All PSAs need to account for the SOKC between event probabilities to meet the ASME/ANS PRA standard

  5. Two proposed convergence criteria for Monte Carlo solutions

    International Nuclear Information System (INIS)

    Forster, R.A.; Pederson, S.P.; Booth, T.E.

    1992-01-01

    The central limit theorem (CLT) can be applied to a Monte Carlo solution if two requirements are satisfied: (1) The random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these two conditions are satisfied, a confidence interval (CI) based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by the knowledge of the Monte Carlo tally being used. The Monte Carlo practitioner has a limited number of marginal methods to assess the fulfillment of the second requirement, such as statistical error reduction proportional to 1/√N with error magnitude guidelines. Two proposed methods are discussed in this paper to assist in deciding if N is large enough: estimating the relative variance of the variance (VOV) and examining the empirical history score probability density function (pdf)
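
    The first of the two proposed quantities is easy to compute for any tally. The snippet below is an illustrative stand-alone calculation (not taken from the paper, using one common estimator for the relative variance of the variance) that shows its roughly 1/N decay for a skewed history-score distribution.

```python
import numpy as np

def relative_vov(scores):
    """Relative variance of the variance of the sample mean, using one common
    estimator: sum of 4th central moments over the squared sum of 2nd central
    moments, minus 1/N."""
    x = np.asarray(scores, dtype=float)
    n = x.size
    d = x - x.mean()
    return (d**4).sum() / (d**2).sum()**2 - 1.0 / n

rng = np.random.default_rng(0)
# A skewed, heavy-ish tailed history-score distribution as a stand-in tally.
for n in (1_000, 10_000, 100_000):
    scores = rng.lognormal(mean=0.0, sigma=1.5, size=n)
    print(f"N = {n:7d}   relative VOV = {relative_vov(scores):.2e}")
```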

  6. Improved Monte Carlo Method for PSA Uncertainty Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jongsoo [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2016-10-15

    The treatment of uncertainty is an important issue for regulatory decisions. Uncertainties arise from knowledge limitations. A probabilistic approach has exposed some of these limitations and provided a framework to assess their significance and assist in developing a strategy to accommodate them in the regulatory process. The uncertainty analysis (UA) is usually based on the Monte Carlo method. This paper proposes a Monte Carlo UA approach to calculate the mean risk metrics accounting for the SOKC between basic events (including CCFs) using efficient random number generators and to meet Capability Category III of the ASME/ANS PRA standard. Audit calculation is needed in PSA regulatory reviews of uncertainty analysis results submitted for licensing. The proposed Monte Carlo UA approach provides a high degree of confidence in PSA reviews. All PSAs need to account for the SOKC between event probabilities to meet the ASME/ANS PRA standard.

  7. Multiple-time-stepping generalized hybrid Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Escribano, Bruno, E-mail: bescribano@bcamath.org [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); Akhmatskaya, Elena [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao (Spain); Reich, Sebastian [Universität Potsdam, Institut für Mathematik, D-14469 Potsdam (Germany); Azpiroz, Jon M. [Kimika Fakultatea, Euskal Herriko Unibertsitatea (UPV/EHU) and Donostia International Physics Center (DIPC), P.K. 1072, Donostia (Spain)

    2015-01-01

    The performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications of the method not only lead to better performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo and using the natural stochasticity offered by the generalized hybrid Monte Carlo improves the stability of MTS and allows larger step sizes in the simulation of complex systems.

  8. A keff calculation method by Monte Carlo

    International Nuclear Information System (INIS)

    Shen, H; Wang, K.

    2008-01-01

    The effective multiplication factor (k_eff) is defined as the ratio between the numbers of neutrons in successive generations, a definition adopted by most Monte Carlo codes (e.g. MCNP). It can also be thought of as the ratio of the neutron generation rate to the sum of the leakage rate and the absorption rate, which should exclude the effect of neutron reactions such as (n, 2n) and (n, 3n). This article discusses a Monte Carlo method for k_eff calculation based on the second definition. A new code has been developed and the results are presented. (author)
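
    As an illustration of the second definition, the toy calculation below tallies production, absorption and leakage over a batch of source neutrons and takes their ratio. It is a one-group, isotropic, bare homogeneous sphere with made-up cross sections and a fixed central source (no fission-source iteration), invented for this summary; it is not the code developed in the paper.

```python
import math
import random

# One-group macroscopic cross sections (made-up values, cm^-1) and sphere radius (cm).
SIG_F, SIG_C, SIG_S, NU, RADIUS = 0.06, 0.04, 0.20, 2.5, 12.0
SIG_T = SIG_F + SIG_C + SIG_S
rng = random.Random(7)

def random_direction():
    mu = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(1.0 - mu * mu)
    return (s * math.cos(phi), s * math.sin(phi), mu)

production = absorption = leakage = 0.0
for _ in range(50_000):
    x = y = z = 0.0                      # source neutron born at the centre
    while True:
        ux, uy, uz = random_direction()
        d = -math.log(rng.random()) / SIG_T
        x, y, z = x + d * ux, y + d * uy, z + d * uz
        if x * x + y * y + z * z > RADIUS * RADIUS:
            leakage += 1.0               # escaped the sphere
            break
        # Collision: score the expected fission production, then sample the outcome.
        production += NU * SIG_F / SIG_T
        if rng.random() < (SIG_F + SIG_C) / SIG_T:
            absorption += 1.0            # absorbed (fission or capture ends the history)
            break
        # otherwise scattered isotropically and the flight continues

k_eff = production / (absorption + leakage)
print("k_eff = production / (absorption + leakage) =", round(k_eff, 4))
```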

  9. NOTE: Monte Carlo evaluation of kerma in an HDR brachytherapy bunker

    Science.gov (United States)

    Pérez-Calatayud, J.; Granero, D.; Ballester, F.; Casal, E.; Crispin, V.; Puchades, V.; León, A.; Verdú, G.

    2004-12-01

    In recent years, the use of high dose rate (HDR) after-loader machines has greatly increased due to the shift from traditional Cs-137/Ir-192 low dose rate (LDR) to HDR brachytherapy. The method used to calculate the required concrete and, where appropriate, lead shielding in the door is based on analytical methods provided by documents published by the ICRP, the IAEA and the NCRP. The purpose of this study is to perform a more realistic kerma evaluation at the entrance maze door of an HDR bunker using the Monte Carlo code GEANT4. The Monte Carlo results were validated experimentally. The spectrum at the maze entrance door, obtained with Monte Carlo, has an average energy of about 110 keV, maintaining a similar value along the length of the maze. The comparison of the analytical estimates with the Monte Carlo results shows that the values obtained using the albedo coefficient from the ICRP document more closely match those given by the Monte Carlo method, although the maximum value given by the MC calculations is 30% greater.

  10. 29th Rencontres de Physique de La Vallée d'Aoste

    CERN Document Server

    Rencontres de La Thuile

    2015-01-01

    This is the twenty-ninth Workshop in Particle Physics held yearly at the Planibel Hotel of La Thuile, Aosta Valley. La Thuile is a beautiful mountain village located 1450 m a.s.l., about 40 km north of the city of Aosta, on the road to the Mont Blanc. The hotel is at the bottom of a vast skiing area connected with the French ski resort La Rosiere and it is an outstanding complex for winter sports and congresses. The Rencontres will bring together about 120 active experimentalists and theorists, as well as a number of young students supported by the Organizers, to review the status and the future prospects in elementary particle physics. Will be published in: Nuovo Cimento, C

  11. Crop canopy BRDF simulation and analysis using Monte Carlo method

    NARCIS (Netherlands)

    Huang, J.; Wu, B.; Tian, Y.; Zeng, Y.

    2006-01-01

    The authors design the random process of interaction between photons and the crop canopy. A Monte Carlo model has been developed to simulate the Bi-directional Reflectance Distribution Function (BRDF) of the crop canopy. Comparing the Monte Carlo model to the MCRM model, this paper analyzes the variations of different LAD and

  12. Monte Carlo radiation transport: A revolution in science

    International Nuclear Information System (INIS)

    Hendricks, J.

    1993-01-01

    When Enrico Fermi, Stan Ulam, Nicholas Metropolis, John von Neumann, and Robert Richtmyer invented the Monte Carlo method fifty years ago, little could they imagine the far-flung consequences, the international applications, and the revolution in science epitomized by their abstract mathematical method. The Monte Carlo method is used in a wide variety of fields to solve exact computational models approximately by statistical sampling. It is an alternative to traditional physics modeling methods which solve approximate computational models exactly by deterministic methods. Modern computers and improved methods, such as variance reduction, have enhanced the method to the point of enabling a true predictive capability in areas such as radiation or particle transport. This predictive capability has contributed to a radical change in the way science is done: design and understanding come from computations built upon experiments rather than being limited to experiments, and the computer codes doing the computations have become the repository for physics knowledge. The MCNP Monte Carlo computer code effort at Los Alamos is an example of this revolution. Physicians unfamiliar with physics details can design cancer treatments using physics buried in the MCNP computer code. Hazardous environments and hypothetical accidents can be explored. Many other fields, from underground oil well exploration to aerospace, from physics research to energy production, from safety to bulk materials processing, benefit from MCNP, the Monte Carlo method, and the revolution in science

  13. Suppression of the initial transient in Monte Carlo criticality simulations

    International Nuclear Information System (INIS)

    Richet, Y.

    2006-12-01

    Criticality Monte Carlo calculations aim at estimating the effective multiplication factor (k-effective) for a fissile system through iterations simulating neutron propagation (forming a Markov chain). Arbitrary initialization of the neutron population can deeply bias the k-effective estimate, defined as the mean of the k-effective values computed at each iteration. A simplified model of this cycle k-effective sequence is built, based on characteristics of industrial criticality Monte Carlo calculations. Statistical tests, inspired by Brownian bridge properties, are designed to test the stationarity of the cycle k-effective sequence. The detected initial transient is then suppressed in order to improve the estimation of the system k-effective. The different versions of this methodology are detailed and compared, first on a set of numerical tests representative of criticality Monte Carlo calculations, and second on real criticality calculations. Finally, the best methodologies observed in these tests are selected and allow industrial Monte Carlo criticality calculations to be improved. (author)

  14. The SGHWR version of the Monte Carlo code W-MONTE. Part 1. The theoretical model

    International Nuclear Information System (INIS)

    Allen, F.R.

    1976-03-01

    W-MONTE provides a multi-group model of neutron transport in the exact geometry of a reactor lattice using Monte Carlo methods. It is currently restricted to uniform axial properties. Material data is normally obtained from a preliminary WIMS lattice calculation in the transport group structure. The SGHWR version has been required for analysis of zero energy experiments and special aspects of power reactor lattices, such as the unmoderated lattice region above the moderator when drained to dump height. Neutron transport is modelled for a uniform infinite lattice, simultaneously treating the cases of no leakage, radial leakage or axial leakage only, and the combined effects of radial and axial leakage. Multigroup neutron balance edits are incorporated for the separate effects of radial and axial leakage to facilitate the analysis of leakage and to provide effective diffusion theory parameters for core representation in whole-reactor calculations. (author)

  15. Uncertainty evaluation of the kerma in the air, related to the active volume in the ionization chamber of concentric cylinders, by Monte Carlo simulation

    International Nuclear Information System (INIS)

    Lo Bianco, A.S.; Oliveira, H.P.S.; Peixoto, J.G.P.

    2009-01-01

    To establish the primary standard of the quantity air kerma for X-rays between 10 and 50 keV, the National Metrology Laboratory of Ionizing Radiations (LNMRI) must evaluate all the measurement uncertainties related to the Victoreen chamber. Therefore, the uncertainty of the air kerma resulting from the inaccuracy in the active volume of the chamber was evaluated using Monte Carlo calculation as a tool, through the PENELOPE code

  16. TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Badal, A [U.S. Food and Drug Administration (CDRH/OSEL), Silver Spring, MD (United States); Zbijewski, W [Johns Hopkins University, Baltimore, MD (United States); Bolch, W [University of Florida, Gainesville, FL (United States); Sechopoulos, I [Emory University, Atlanta, GA (United States)

    2014-06-15

    Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation times on the order of 10^7 x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the

  17. TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging

    International Nuclear Information System (INIS)

    Badal, A; Zbijewski, W; Bolch, W; Sechopoulos, I

    2014-01-01

    Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation times on the order of 10^7 x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual

  18. PEPSI - a Monte Carlo generator for polarized leptoproduction

    International Nuclear Information System (INIS)

    Mankiewicz, L.

    1992-01-01

    We describe PEPSI (Polarized Electron Proton Scattering Interactions), a Monte Carlo program for polarized deep inelastic leptoproduction mediated by the electromagnetic interaction, and explain how to use it. The code is a modification of the Lepto 4.3 Lund Monte Carlo for unpolarized scattering. The hard virtual gamma-parton scattering is generated according to the polarization-dependent QCD cross-section at first order in α_S. PEPSI requires the standard polarization-independent JETSET routines to simulate the fragmentation into final hadrons. (orig.)

  19. Monte Carlo method for solving a parabolic problem

    Directory of Open Access Journals (Sweden)

    Tian Yi

    2016-01-01

    Full Text Available In this paper, we present a numerical method based on random sampling for a parabolic problem. The method combines the Crank-Nicolson method and the Monte Carlo method. In the numerical algorithm, we first discretize the governing equations by the Crank-Nicolson method and obtain a large sparse system of linear algebraic equations; we then use the Monte Carlo method to solve the linear algebraic equations. To illustrate the usefulness of this technique, we apply it to some test problems.
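
    A compact illustration of the combination described above (written for this summary; the paper's own algorithm and test problems may well differ) discretizes the 1D heat equation with Crank-Nicolson and then solves the resulting linear system at each time step with a terminating-random-walk Monte Carlo estimator for the Neumann series of the Jacobi iteration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Test problem: u_t = u_xx on (0,1), u(0)=u(1)=0, u(x,0)=sin(pi x) (exact solution known).
nx, nt, dt = 11, 20, 5.0e-4
x = np.linspace(0.0, 1.0, nx)
h = x[1] - x[0]
r = dt / h**2
u = np.sin(np.pi * x)

# Crank-Nicolson: (I + r/2 L) u^{n+1} = (I - r/2 L) u^n with L = tridiag(-1, 2, -1).
m = nx - 2                                   # number of interior unknowns
Lmat = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
A = np.eye(m) + 0.5 * r * Lmat
B = np.eye(m) - 0.5 * r * Lmat

# Jacobi splitting: A u = d  <=>  u = H u + c with H = I - D^{-1} A (row sums < 1 here).
D = np.diag(np.diag(A))
H = np.eye(m) - np.linalg.solve(D, A)

def mc_linear_solve(H, c, walks=1000):
    """Estimate (I - H)^{-1} c componentwise with terminating random walks."""
    P = np.abs(H)
    move_prob = P.sum(axis=1)                # probability of continuing the walk
    sol = np.zeros(len(c))
    for i in range(len(c)):
        acc = 0.0
        for _ in range(walks):
            state, weight = i, 1.0
            while True:
                acc += weight * c[state]
                if rng.random() >= move_prob[state]:
                    break                    # walk terminates
                nxt = rng.choice(len(c), p=P[state] / move_prob[state])
                weight *= H[state, nxt] / P[state, nxt]   # sign correction (here +1)
                state = nxt
        sol[i] = acc / walks
    return sol

for _ in range(nt):
    c = np.linalg.solve(D, B @ u[1:-1])
    u[1:-1] = mc_linear_solve(H, c)          # Monte Carlo replaces the direct solver

exact = np.exp(-np.pi**2 * nt * dt) * np.sin(np.pi * x)
print("max |error| vs exact solution:", float(np.abs(u - exact).max()))
```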

  20. NUEN-618 Class Project: Actually Implicit Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Vega, R. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brunner, T. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-12-14

    This research describes a new method for the solution of the thermal radiative transfer (TRT) equations that is implicit in time which will be called Actually Implicit Monte Carlo (AIMC). This section aims to introduce the TRT equations, as well as the current workhorse method which is known as Implicit Monte Carlo (IMC). As the name of the method proposed here indicates, IMC is a misnomer in that it is only semi-implicit, which will be shown in this section as well.

  1. Monte Carlo burnup codes acceleration using the correlated sampling method

    International Nuclear Information System (INIS)

    Dieudonne, C.

    2013-01-01

    For several years, Monte Carlo burnup/depletion codes have appeared which couple Monte Carlo codes, simulating the neutron transport, to deterministic methods that handle the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in such a way allows fine 3-dimensional effects to be tracked and avoids the multi-group hypotheses made by deterministic solvers. The counterpart is the prohibitive calculation time due to the Monte Carlo solver being called at each time step. In this document we present an original methodology to avoid the repetitive and time-expensive Monte Carlo simulations and to replace them by perturbation calculations: the different burnup steps may indeed be seen as perturbations of the isotopic concentrations of an initial Monte Carlo simulation. We first present this method and provide details on the perturbative technique used, namely correlated sampling. We then develop a theoretical model to study the features of the correlated sampling method and to understand its effects on depletion calculations. Thirdly, the implementation of this method in the TRIPOLI-4 code is discussed, as well as the precise calculation scheme used to bring an important speed-up of the depletion calculation. We begin by validating and optimizing the perturbed depletion scheme with the calculation of the depletion of a PWR-like fuel cell. This technique is then used to calculate the depletion of a PWR-like assembly, studied at the beginning of its cycle. After having validated the method with a reference calculation, we show that it can speed up standard Monte Carlo depletion codes by nearly an order of magnitude. (author) [fr
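
    The core idea of correlated sampling can be shown on a far simpler problem than depletion. In the sketch below (a purely illustrative example, unrelated to TRIPOLI-4), the transmission of neutrons through a purely absorbing slab is re-estimated for several perturbed cross sections by reweighting the histories of a single unperturbed simulation with the likelihood ratio of the sampled path lengths.

```python
import math
import random

rng = random.Random(3)

THICKNESS = 2.0          # slab thickness, cm (illustrative)
SIGMA_REF = 1.0          # reference total (purely absorbing) cross section, cm^-1
N_HISTORIES = 200_000

# Single unperturbed simulation: sample the distance to absorption once per history.
paths = [-math.log(rng.random()) / SIGMA_REF for _ in range(N_HISTORIES)]

def transmission_correlated(sigma_pert):
    """Reuse the reference histories; weight each by the likelihood ratio
    f_pert(s)/f_ref(s) = (sigma'/sigma) * exp(-(sigma' - sigma) * s)."""
    total = 0.0
    for s in paths:
        if s > THICKNESS:                      # history would leak through the slab
            total += (sigma_pert / SIGMA_REF) * math.exp(-(sigma_pert - SIGMA_REF) * s)
    return total / N_HISTORIES

for sigma in (0.8, 0.9, 1.0, 1.1, 1.2):
    estimate = transmission_correlated(sigma)
    print(f"sigma = {sigma:.1f}  correlated-sampling T = {estimate:.4f}"
          f"   exact exp(-sigma*T) = {math.exp(-sigma * THICKNESS):.4f}")
```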

  2. Monte Carlo Simulation in Statistical Physics An Introduction

    CERN Document Server

    Binder, Kurt

    2010-01-01

    Monte Carlo Simulation in Statistical Physics deals with the computer simulation of many-body systems in condensed-matter physics and related fields of physics and chemistry and beyond (traffic flows, stock market fluctuations, etc.). Using random numbers generated by a computer, probability distributions are calculated, allowing the estimation of the thermodynamic properties of various systems. This book describes the theoretical background to several variants of these Monte Carlo methods and gives a systematic presentation from which newcomers can learn to perform such simulations and to analyze their results. The fifth edition covers Classical as well as Quantum Monte Carlo methods. Furthermore a new chapter on the sampling of free-energy landscapes has been added. To help students in their work a special web server has been installed to host programs and discussion groups (http://wwwcp.tphys.uni-heidelberg.de). Prof. Binder was awarded the Berni J. Alder CECAM Award for Computational Physics 2001 as well ...

  3. Monte Carlo simulation in statistical physics an introduction

    CERN Document Server

    Binder, Kurt

    1992-01-01

    The Monte Carlo method is a computer simulation method which uses random numbers to simulate statistical fluctuations. The method is used to model complex systems with many degrees of freedom. Probability distributions for these systems are generated numerically, and the method then yields numerically exact information on the models. Such simulations may be used to see how well a model system approximates a real one, or to see how valid the assumptions are in an analytical theory. A short and systematic theoretical introduction to the method forms the first part of this book. The second part is a practical guide with plenty of examples and exercises for the student. Problems treated by simple sampling (random and self-avoiding walks, percolation clusters, etc.) are included, along with such topics as finite-size effects and guidelines for the analysis of Monte Carlo simulations. The two parts together provide an excellent introduction to the theory and practice of Monte Carlo simulations

  4. Geometry and Dynamics for Markov Chain Monte Carlo

    Science.gov (United States)

    Barp, Alessandro; Briol, François-Xavier; Kennedy, Anthony D.; Girolami, Mark

    2018-03-01

    Markov Chain Monte Carlo methods have revolutionised mathematical computation and enabled statistical inference within many previously intractable models. In this context, Hamiltonian dynamics have been proposed as an efficient way of building chains which can explore probability densities efficiently. The method emerges from physics and geometry and these links have been extensively studied by a series of authors through the last thirty years. However, there is currently a gap between the intuitions and knowledge of users of the methodology and our deep understanding of these theoretical foundations. The aim of this review is to provide a comprehensive introduction to the geometric tools used in Hamiltonian Monte Carlo at a level accessible to statisticians, machine learners and other users of the methodology with only a basic understanding of Monte Carlo methods. This will be complemented with some discussion of the most recent advances in the field which we believe will become increasingly relevant to applied scientists.
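
    The Hamiltonian dynamics referred to above boil down to a leapfrog integration of an auxiliary momentum followed by a Metropolis accept/reject step. The sketch below is a minimal standard HMC sampler for a correlated 2D Gaussian target, written for this summary rather than taken from the review; step size and trajectory length are arbitrary demo values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: zero-mean 2D Gaussian with strong correlation (log-density up to a constant).
cov = np.array([[1.0, 0.95], [0.95, 1.0]])
prec = np.linalg.inv(cov)

def log_target(q):
    return -0.5 * q @ prec @ q

def grad_log_target(q):
    return -prec @ q

def hmc_step(q, step=0.15, n_leapfrog=20):
    """One Hamiltonian Monte Carlo transition with a unit mass matrix."""
    p = rng.standard_normal(q.shape)                   # resample momentum
    current_h = -log_target(q) + 0.5 * p @ p           # Hamiltonian = U + K
    q_new, p_new = q.copy(), p.copy()
    p_new += 0.5 * step * grad_log_target(q_new)       # leapfrog: half momentum step
    for _ in range(n_leapfrog - 1):
        q_new += step * p_new
        p_new += step * grad_log_target(q_new)
    q_new += step * p_new
    p_new += 0.5 * step * grad_log_target(q_new)       # final half momentum step
    proposed_h = -log_target(q_new) + 0.5 * p_new @ p_new
    if rng.random() < np.exp(current_h - proposed_h):  # Metropolis acceptance
        return q_new, True
    return q, False

q, samples, accepted = np.zeros(2), [], 0
for _ in range(5000):
    q, ok = hmc_step(q)
    accepted += ok
    samples.append(q.copy())

samples = np.array(samples)
print("acceptance rate:", accepted / len(samples))
print("sample covariance:\n", np.round(np.cov(samples.T), 2))
```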

  5. Vectorizing and macrotasking Monte Carlo neutral particle algorithms

    International Nuclear Information System (INIS)

    Heifetz, D.B.

    1987-04-01

    Monte Carlo algorithms for computing neutral particle transport in plasmas have been vectorized and macrotasked. The techniques used are directly applicable to Monte Carlo calculations of neutron and photon transport, and to Monte Carlo integration schemes in general. A highly vectorized code was achieved by calculating test flight trajectories in loops over arrays of flight data, isolating the conditional branches to as few loops as possible. A number of solutions are discussed to the problem of gaps appearing in the arrays due to completed flights, which impede vectorization. A simple and effective implementation of macrotasking is achieved by dividing the calculation of the test flight profile among several processors. A tree of random numbers is used to ensure reproducible results. The additional memory required for each task may preclude using a larger number of tasks. In future machines, the limiting case of macrotasking may be possible, with each test flight, and each split test flight, being a separate task

  6. Multi-Index Monte Carlo (MIMC)

    KAUST Repository

    Haji Ali, Abdul Lateef; Nobile, Fabio; Tempone, Raul

    2015-01-01

    We propose and analyze a novel Multi-Index Monte Carlo (MIMC) method for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The MIMC method is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Inspired by Giles’s seminal work, instead of using first-order differences as in MLMC, we use in MIMC high-order mixed differences to reduce the variance of the hierarchical differences dramatically. Under standard assumptions on the convergence rates of the weak error, variance and work per sample, the optimal index set turns out to be of Total Degree (TD) type. When using such sets, MIMC yields new and improved complexity results, which are natural generalizations of Giles’s MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence.
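
    For readers new to the multilevel idea that MIMC generalizes, the sketch below shows a simple two-level MLMC estimator for E[X_T] of geometric Brownian motion under Euler-Maruyama. It is written for this summary (parameters and sample sizes are arbitrary), not taken from the paper, but it demonstrates the key ingredient: coarse and fine paths driven by the same Brownian increments so that their difference has small variance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Geometric Brownian motion dX = mu*X dt + sig*X dW, X_0 = 1; estimate E[X_T] = exp(mu*T).
MU, SIG, T = 0.05, 0.2, 1.0

def euler_paths(n_paths, n_steps, dW=None):
    """Euler-Maruyama endpoints; dW may be supplied to couple coarse/fine levels."""
    dt = T / n_steps
    if dW is None:
        dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    x = np.ones(n_paths)
    for k in range(n_steps):
        x = x + MU * x * dt + SIG * x * dW[:, k]
    return x, dW

# Level 0: crude estimator with few time steps and many cheap samples.
x0, _ = euler_paths(n_paths=200_000, n_steps=4)
level0 = x0.mean()

# Level 1 correction: fine (8-step) minus coarse (4-step) paths sharing the same Brownian path.
n_corr = 20_000
fine, dW_fine = euler_paths(n_paths=n_corr, n_steps=8)
dW_coarse = dW_fine[:, 0::2] + dW_fine[:, 1::2]      # sum pairs of fine increments
coarse, _ = euler_paths(n_paths=n_corr, n_steps=4, dW=dW_coarse)
level1 = (fine - coarse).mean()

print("MLMC estimate of E[X_T]:", level0 + level1)
print("exact value            :", np.exp(MU * T))
print("variance of the correction term:", (fine - coarse).var())
```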

  7. Multi-Index Monte Carlo (MIMC)

    KAUST Repository

    Haji Ali, Abdul Lateef

    2015-01-07

    We propose and analyze a novel Multi-Index Monte Carlo (MIMC) method for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The MIMC method is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Inspired by Giles’s seminal work, instead of using first-order differences as in MLMC, we use in MIMC high-order mixed differences to reduce the variance of the hierarchical differences dramatically. Under standard assumptions on the convergence rates of the weak error, variance and work per sample, the optimal index set turns out to be of Total Degree (TD) type. When using such sets, MIMC yields new and improved complexity results, which are natural generalizations of Giles’s MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence.

  8. Quasi-Monte Carlo methods for lattice systems. A first look

    International Nuclear Information System (INIS)

    Jansen, K.; Cyprus Univ., Nicosia; Leovey, H.; Griewank, A.; Nube, A.; Humboldt-Universitaet, Berlin; Mueller-Preussker, M.

    2013-02-01

    We investigate the applicability of Quasi-Monte Carlo methods to Euclidean lattice systems for quantum mechanics in order to improve the asymptotic error behavior of observables for such theories. In most cases the error of an observable calculated by averaging over random observations generated from an ordinary Markov chain Monte Carlo simulation behaves like N^(-1/2), where N is the number of observations. By means of Quasi-Monte Carlo methods it is possible to improve this behavior for certain problems up to N^(-1). We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling.
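
    The error scalings quoted above are easy to observe on a toy integral. The snippet below is illustrative only (unrelated to the lattice code used by the authors) and assumes a SciPy version that provides scipy.stats.qmc; it compares pseudo-random and scrambled Sobol' points for a smooth integrand with a known mean.

```python
import numpy as np
from scipy.stats import qmc   # available in SciPy >= 1.7 (assumed here)

rng = np.random.default_rng(0)
dim = 4

def integrand(u):
    """Smooth test integrand on [0,1]^4, normalized so the exact mean is 1."""
    return np.prod(np.cos(u), axis=1) / np.sin(1.0) ** dim

for n in (2**8, 2**12, 2**16):          # powers of 2 suit Sobol' sequences
    mc_est = integrand(rng.random((n, dim))).mean()
    sobol = qmc.Sobol(d=dim, scramble=True, seed=0)
    qmc_est = integrand(sobol.random(n)).mean()
    print(f"N = {n:6d}   plain MC error = {abs(mc_est - 1.0):.2e}"
          f"   Sobol QMC error = {abs(qmc_est - 1.0):.2e}")
```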

  9. Quasi-Monte Carlo methods for lattice systems. A first look

    Energy Technology Data Exchange (ETDEWEB)

    Jansen, K. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Leovey, H.; Griewank, A. [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Mathematik; Nube, A. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Mueller-Preussker, M. [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik

    2013-02-15

    We investigate the applicability of Quasi-Monte Carlo methods to Euclidean lattice systems for quantum mechanics in order to improve the asymptotic error behavior of observables for such theories. In most cases the error of an observable calculated by averaging over random observations generated from an ordinary Markov chain Monte Carlo simulation behaves like N^(-1/2), where N is the number of observations. By means of Quasi-Monte Carlo methods it is possible to improve this behavior for certain problems up to N^(-1). We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling.

  10. Monte Carlo calculations of thermodynamic properties of deuterium under high pressures

    International Nuclear Information System (INIS)

    Levashov, P R; Filinov, V S; BoTan, A; Fortov, V E; Bonitz, M

    2008-01-01

    Two different numerical approaches have been applied to calculations of shock Hugoniots and a compression isentrope of deuterium: direct path integral Monte Carlo and reactive Monte Carlo. The results show good agreement between the two methods at intermediate pressures, which is an indication of correct accounting of dissociation effects in the direct path integral Monte Carlo method. Experimental data on both shock and quasi-isentropic compression of deuterium are well described by the calculations. Thus dissociation of deuterium molecules in these experiments, together with the interparticle interaction, plays a significant role

  11. Monte Carlo simulated dynamical magnetization of single-chain magnets

    Energy Technology Data Exchange (ETDEWEB)

    Li, Jun; Liu, Bang-Gui, E-mail: bgliu@iphy.ac.cn

    2015-03-15

    Here, a dynamical Monte Carlo (DMC) method is used to study the temperature-dependent dynamical magnetization of the famous Mn2Ni system as a typical example of single-chain magnets with strong magnetic anisotropy. Simulated magnetization curves are in good agreement with experimental results under typical temperatures and sweeping rates, and simulated coercive fields as functions of temperature are also consistent with experimental curves. Further analysis indicates that the magnetization reversal is determined by both thermally activated effects and quantum spin tunneling. These results can help explore basic properties and applications of such important magnetic systems. - Highlights: • Monte Carlo simulated magnetization curves are in good agreement with experimental results. • Simulated coercive fields as functions of temperature are consistent with experimental results. • The magnetization reversal is understood in terms of the Monte Carlo simulations.

  12. LCG MCDB - a Knowledgebase of Monte Carlo Simulated Events

    CERN Document Server

    Belov, S; Galkin, E; Gusev, A; Pokorski, Witold; Sherstnev, A V

    2008-01-01

    In this paper we report on LCG Monte Carlo Data Base (MCDB) and software which has been developed to operate MCDB. The main purpose of the LCG MCDB project is to provide a storage and documentation system for sophisticated event samples simulated for the LHC collaborations by experts. In many cases, the modern Monte Carlo simulation of physical processes requires expert knowledge in Monte Carlo generators or significant amount of CPU time to produce the events. MCDB is a knowledgebase mainly to accumulate simulated events of this type. The main motivation behind LCG MCDB is to make the sophisticated MC event samples available for various physical groups. All the data from MCDB is accessible in several convenient ways. LCG MCDB is being developed within the CERN LCG Application Area Simulation project.

  13. Exponentially-convergent Monte Carlo via finite-element trial spaces

    International Nuclear Information System (INIS)

    Morel, Jim E.; Tooley, Jared P.; Blamer, Brandon J.

    2011-01-01

    Exponentially-Convergent Monte Carlo (ECMC) methods, also known as adaptive Monte Carlo and residual Monte Carlo methods, were the subject of intense research over a decade ago, but they never became practical for solving realistic problems. We believe that the failure of previous efforts may be related to the choice of trial spaces, which were global and thus highly oscillatory. As an alternative, we consider finite-element trial spaces, which have the ability to treat fully realistic problems. As a first step towards more general methods, we apply piecewise-linear trial spaces to the spatially continuous two-stream transport equation. Using this approach, we achieve exponential convergence and computationally demonstrate several fundamental properties of finite-element-based ECMC methods. Finally, our results indicate that the finite-element approach clearly deserves further investigation. (author)

  14. Monte Carlo Calculation of Sensitivities to Secondary Angular Distributions. Theory and Validation

    International Nuclear Information System (INIS)

    Perell, R. L.

    2002-01-01

    The basic methods in practical use today for solution of the transport equation are the discrete ordinates (SN) method and the Monte Carlo method. While the SN method is typically less computation-time consuming, the Monte Carlo method is often preferred for detailed and general description of three-dimensional geometries and for calculations using cross sections that are point-wise energy dependent. For analysis of experimental and calculated results, sensitivities are needed. Sensitivities to material parameters in general, and to the angular distribution of the secondary (scattered) neutrons in particular, can be calculated by well-known SN methods, using the fluxes obtained from solution of the direct and the adjoint transport equations. Algorithms to calculate sensitivities to cross-sections with Monte Carlo methods have been known for quite some time. However, only recently have we developed a general Monte Carlo algorithm for the calculation of sensitivities to the angular distribution of the secondary neutrons

  15. Application of Monte Carlo simulation to the standardization of positron emitting radionuclides

    International Nuclear Information System (INIS)

    Tongu, Margareth Lika Onishi

    2009-01-01

    Since 1967, the Nuclear Metrology Laboratory (LNM) at the Nuclear and Energy Research Institute (IPEN) in Sao Paulo, Brazil, has developed radionuclide standardization methods and measurements of the gamma-ray emission probabilities per decay by means of a 4πβ-γ coincidence system, a high-accuracy primary method for determining the disintegration rate of radionuclides of interest. In 2001 the LNM started a research field on modeling, based on the Monte Carlo method, of all the system components, including radiation detectors and radionuclide decay processes. This methodology allows the simulation of the detection process in a 4πβ-γ system, determining theoretically the observed activity as a function of the 4πβ detector efficiency, enabling the prediction of the behavior of the extrapolation curve and a detailed, optimized planning of the experiment before starting the measurements. One of the objectives of the present work is the improvement of the 4π proportional counter modeling, presenting a detailed description of the source holder and radioactive source material, as well as of absorbers placed around the source. The simulation of radiation transport through the detectors has been carried out using the code MCNPX. The main focus of the present work is the Monte Carlo modeling of the standardization of positron-emitting radionuclides associated (or not) with electron capture and accompanied (or not) by the emission of gamma radiation. One difficulty in this modeling is to simulate the detection of the annihilation gamma rays, which arise from positron absorption within the 4π detector. The methodology was applied to the radionuclides 18F and 22Na. (author)

  16. Monte Carlo treatment planning with modulated electron radiotherapy: framework development and application

    Science.gov (United States)

    Alexander, Andrew William

    Within the field of medical physics, Monte Carlo radiation transport simulations are considered to be the most accurate method for the determination of dose distributions in patients. The McGill Monte Carlo treatment planning system (MMCTP), provides a flexible software environment to integrate Monte Carlo simulations with current and new treatment modalities. A developing treatment modality called energy and intensity modulated electron radiotherapy (MERT) is a promising modality, which has the fundamental capabilities to enhance the dosimetry of superficial targets. An objective of this work is to advance the research and development of MERT with the end goal of clinical use. To this end, we present the MMCTP system with an integrated toolkit for MERT planning and delivery of MERT fields. Delivery is achieved using an automated "few leaf electron collimator" (FLEC) and a controller. Aside from the MERT planning toolkit, the MMCTP system required numerous add-ons to perform the complex task of large-scale autonomous Monte Carlo simulations. The first was a DICOM import filter, followed by the implementation of DOSXYZnrc as a dose calculation engine and by logic methods for submitting and updating the status of Monte Carlo simulations. Within this work we validated the MMCTP system with a head and neck Monte Carlo recalculation study performed by a medical dosimetrist. The impact of MMCTP lies in the fact that it allows for systematic and platform independent large-scale Monte Carlo dose calculations for different treatment sites and treatment modalities. In addition to the MERT planning tools, various optimization algorithms were created external to MMCTP. The algorithms produced MERT treatment plans based on dose volume constraints that employ Monte Carlo pre-generated patient-specific kernels. The Monte Carlo kernels are generated from patient-specific Monte Carlo dose distributions within MMCTP. The structure of the MERT planning toolkit software and

  17. Proton therapy analysis using the Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Noshad, Houshyar [Center for Theoretical Physics and Mathematics, AEOI, P.O. Box 14155-1339, Tehran (Iran, Islamic Republic of)]. E-mail: hnoshad@aeoi.org.ir; Givechi, Nasim [Islamic Azad University, Science and Research Branch, Tehran (Iran, Islamic Republic of)]

    2005-10-01

    The range and straggling data obtained from the transport of ions in matter (TRIM) computer program were used to determine the trajectories of monoenergetic 60 MeV protons in muscle tissue using the Monte Carlo technique. The appropriate profile for the shape of a proton pencil beam in proton therapy, as well as the dose deposited in the tissue, were computed. Good agreement between our results and the corresponding experimental values is presented here to demonstrate the reliability of our Monte Carlo method.
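
    A minimal sketch of the kind of range-straggling sampling such a calculation relies on: each proton is assigned a stopping depth drawn from a Gaussian straggling distribution around the mean range, which smears the distal fall-off of the beam. The range and straggling values below are placeholders for illustration, not the TRIM data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder values (not from the paper): mean projected range and range
# straggling of ~60 MeV protons in muscle-like tissue, in cm.
mean_range, sigma = 3.1, 0.05
n_protons = 100_000

# Monte Carlo: each proton stops at a depth sampled from the straggling distribution.
stop_depth = rng.normal(mean_range, sigma, n_protons)

# Fraction of protons still travelling at each depth (smeared distal fall-off).
depths = np.linspace(2.8, 3.4, 13)
for z in depths:
    fraction = (stop_depth > z).mean()
    print(f"depth {z:4.2f} cm : fraction remaining {fraction:.3f}")
```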

  18. Simplified monte carlo simulation for Beijing spectrometer

    International Nuclear Information System (INIS)

    Wang Taijie; Wang Shuqin; Yan Wuguang; Huang Yinzhi; Huang Deqiang; Lang Pengfei

    1986-01-01

    A Monte Carlo simulation of the performance of the Beijing Spectrometer (BES), named BESMC and written in FORTRAN, has been programmed. The method is based on parameterizing the performance of the detectors and transforming the values of kinematical variables into "measured" ones by smearing. It can be used to investigate the multiplicity, the particle types, and the four-momentum distributions of the final states of electron-positron collisions, as well as the response of the BES to these final states. It thus provides a means to examine whether the overall design of the BES is reasonable and to select the physics topics for the BES
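
    The smearing step that turns "true" kinematical variables into "measured" ones can be sketched in a few lines. The 2 % Gaussian momentum resolution used here is an assumed number for illustration, not a BES parameter, and the original BESMC is a FORTRAN program rather than Python.

```python
import numpy as np

rng = np.random.default_rng(3)

def smear_momentum(p_true, sigma_rel=0.02):
    """Turn 'true' momenta into 'measured' ones with a Gaussian relative
    resolution -- the essence of a parameterized (fast) detector simulation."""
    return p_true * (1.0 + rng.normal(0.0, sigma_rel, size=np.shape(p_true)))

# Hypothetical final-state particle momenta from an e+e- event generator (GeV/c)
p_true = np.array([1.25, 0.80, 0.45, 0.32])
p_measured = smear_momentum(p_true)
print(p_measured)
```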

  19. Monte Carlo simulation of gas Cerenkov detectors

    International Nuclear Information System (INIS)

    Mack, J.M.; Jain, M.; Jordan, T.M.

    1984-01-01

    Theoretical study of selected gamma-ray and electron diagnostics necessitates coupling Cerenkov radiation to electron/photon cascades. A Cerenkov production model and its incorporation into a general-geometry Monte Carlo coupled electron/photon transport code are discussed. A special optical-photon ray trace is implemented using bulk optical properties assigned to each Monte Carlo zone. Good agreement exists between experimental and calculated Cerenkov data for a carbon-dioxide gas Cerenkov detector experiment. Cerenkov production and threshold data are presented for a typical carbon-dioxide gas detector that converts a 16.7 MeV photon source to Cerenkov light, which is collected by optics and detected by a photomultiplier
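
    The Cerenkov threshold referred to above follows from the emission condition β > 1/n. A small sketch, assuming an approximate refractive index for CO2 near atmospheric pressure (the detector in the paper may operate at a different pressure, and real detectors tune the gas pressure, and hence n, to set the threshold):

```python
import math

M_E_C2_MEV = 0.511                   # electron rest energy, MeV

def cherenkov_threshold_mev(n):
    """Electron kinetic-energy threshold for Cerenkov emission in a medium
    of refractive index n (requires beta > 1/n)."""
    gamma_th = 1.0 / math.sqrt(1.0 - 1.0 / n**2)
    return M_E_C2_MEV * (gamma_th - 1.0)

# Assumed approximate refractive index of CO2 near atmospheric pressure
print(f"{cherenkov_threshold_mev(1.00045):.1f} MeV")
```

    For n ≈ 1.00045 this gives a threshold of roughly 16.5 MeV, of the same order as the 16.7 MeV photon energy mentioned above.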

  20. Solving the transport equation by Monte Carlo method and application for biological shield calculations; Resavanje transportne jednacine Monte Carlo metodom i njena primena u zadacima proracuna bioloskog stita

    Energy Technology Data Exchange (ETDEWEB)

    Kocic, A [Institute of Nuclear Sciences Boris Kidric, Vinca, Beograd (Serbia and Montenegro)]

    1977-07-01

    A general sampling Monte Carlo scheme for the neutron transport equation is described. The programme TRANSFER for neutron beam transmission analysis was used to calculate the neutron leakage spectrum, the detector efficiency and the neutron angular distribution for the example problem (author) [Serbo-Croat, translated] The paper first discusses the basic problems of solving the transport equation and how the Monte Carlo method makes it possible to overcome some of them: the multidimensionality of the problem, deep penetration, and a sufficiently fine treatment of the cross sections. It then describes experience with the application of the Monte Carlo method in the Laboratory for Nuclear Power Engineering and Technical Physics, and the application of this method to shielding problems. Finally, illustrative examples of neutron transport calculations through a flat layer of shielding material, performed with the Monte Carlo program TRANSFER, are given and analyzed (author)
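
    The analog sampling scheme for slab transmission problems of this kind can be sketched compactly: free-flight distances are drawn from an exponential distribution, each collision is classified as absorption or (isotropic) scattering, and histories are followed until leakage or absorption. The one-group cross sections and slab thickness below are hypothetical, and the sketch is not the TRANSFER programme itself.

```python
import numpy as np

rng = np.random.default_rng(4)

def transmit_fraction(thickness, sigma_t, sigma_s, n_hist=50_000):
    """Analog Monte Carlo estimate of the fraction of neutrons transmitted
    through a homogeneous slab (one-speed, isotropic scattering)."""
    transmitted = 0
    for _ in range(n_hist):
        x, mu = 0.0, 1.0                               # start at surface, moving forward
        while True:
            x += mu * rng.exponential(1.0 / sigma_t)   # free flight to next collision
            if x >= thickness:                         # leaked through the far face
                transmitted += 1
                break
            if x <= 0.0:                               # leaked back out of the entry face
                break
            if rng.random() > sigma_s / sigma_t:       # absorbed at the collision
                break
            mu = rng.uniform(-1.0, 1.0)                # isotropic scatter: new direction cosine
    return transmitted / n_hist

# Hypothetical one-group cross sections (cm^-1) and a 10 cm slab
print(transmit_fraction(thickness=10.0, sigma_t=0.30, sigma_s=0.20))
```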