The research program of the Liquid Scintillation Detector (LSD) in the Mont Blanc Laboratory
Dadykin, V. L.; Yakushev, V. F.; Korchagin, P. V.; Korchagin, V. B.; Malgin, A. S.; Ryassny, F. G.; Ryazhskaya, O. G.; Talochkin, V. P.; Zatsepin, G. T.; Badino, G.
1985-01-01
A massive (90 tons) liquid scintillation detector (LSD) has been running since October 1984 in the Mont Blanc Laboratory at a depth of 5,200 hg/sq cm of standard rock. The research program of the experiment covers a variety of topics in particle physics and astrophysics. The performance of the detector and the main fields of research are presented, and the preliminary results are discussed.
Film of the year - the animated film "Mont Blanc" / Verni Leivak
Leivak, Verni, 1966-
2002-01-01
The Estonian Film Journalists' Union awarded the title of best film of 2001 to Priit Tender's animated film "Mont Blanc" (Eesti Joonisfilm, 2001). Also discussed are the film critics' preferences among the films shown in cinemas and on television in 2001.
Neutrino astronomy at Mont Blanc: from LSD to LSD-2
International Nuclear Information System (INIS)
Saavedra, O.; Aglietta, M.; Badino, G.
1988-01-01
In this paper we present the upgrading of the LSD experiment, presently running in the Mont Blanc Laboratory. The data recorded during the period when supernova 1987A exploded are analysed in detail. The research program of LSD-2, the same experiment as LSD but with a higher sensitivity in the search for neutrino bursts from collapsing stars, is also discussed.
Stopping particles in the Mont Blanc spark chamber telescopes
Energy Technology Data Exchange (ETDEWEB)
Bergamasco, L; Bilokon, H; Piazzoli, B E; Mannocchi, G; Picchi, P [Consiglio Nazionale delle Ricerche, Turin (Italy). Lab. di Cosmo-Geofisica; Turin Univ. (Italy). Ist. di Fisica Generale]
1982-02-01
We present the final results on the ratio of stopping to traversing muons as measured by two spark chamber telescopes in the Mont Blanc Station, Italy, at 4300 hg/cm². The experimental results are in agreement with the theoretical values within the limits of error.
Natvig, Lasse; Follan, Torbjørn; Støa, Simen; Magnussen, Sindre; Guirado, Antonio Garcia
2015-01-01
Climbing Mont Blanc (CMB) is an open online judge used for training in energy efficient programming of state-of-the-art heterogeneous multicores. It uses an Odroid-XU3 board from Hardkernel with an Exynos Octa processor and integrated power sensors. This processor is three-way heterogeneous containing 14 different cores of three different types. The board currently accepts C and C++ programs, with support for OpenCL v1.1, OpenMP 4.0 and Pthreads. Programs submitted using the graphical user in...
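The energy-efficiency score at the heart of a judge like CMB comes down to integrating the board's power-sensor samples over a program's runtime. The sketch below is a minimal, hypothetical illustration of that idea; the function name and sample data are invented here and do not reflect CMB's actual implementation or the Odroid-XU3 sensor API.

```python
# Hedged sketch of the scoring idea behind an energy-profiling judge:
# integrate sampled power readings over a run to get energy in joules.
# Function name and sample data are invented for illustration; this is not
# CMB's actual code or the Odroid-XU3 sensor API.
def energy_joules(timestamps_s, power_watts):
    """Trapezoidal integration of power over time -> energy (J)."""
    total = 0.0
    for i in range(1, len(timestamps_s)):
        dt = timestamps_s[i] - timestamps_s[i - 1]
        total += 0.5 * (power_watts[i] + power_watts[i - 1]) * dt
    return total

# One second of samples at 0.25 s intervals, power ramping from 2 W to 4 W
t = [0.0, 0.25, 0.5, 0.75, 1.0]
p = [2.0, 2.5, 3.0, 3.5, 4.0]
print(energy_joules(t, p))  # 3.0 (an average of 3 W over 1 s)
```

A lower integral for the same correct output means a better-ranked submission, which is why the same algorithm mapped onto different core types of the heterogeneous processor can score very differently.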
On the correlation between Mont Blanc and Baksan underground detectors in February 1987
International Nuclear Information System (INIS)
Chudakov, A.E.
1989-01-01
According to the author, there is a direct correlation between the Mont Blanc (LSD) and Baksan data, two quite similar underground scintillation detectors. The idea is: if something really happens that activates the gravitational antenna (G.A.) signal and that, after 1.2 s, gives a signal in a particular scintillator, then there should be a chance to observe a quasi-simultaneous signal in another, possibly very distant scintillator. The large distance between the Baksan and LSD detectors should exclude a common electrical power supply as a possible source of correlation. Another advantage of the suggested search is the simplicity of the statistical analysis when the duration of the signal (in the scintillation counter) is much less than the correlation time interval (1 s). In this report, the author discusses both positive and negative evidence concerning the LSD-Baksan correlation.
Godon , Cécile
2013-01-01
The study presented in this PhD thesis aims to better define and quantify present-day erosion processes in the glacial and proglacial domain. The Glacier des Bossons, situated in the Mont-Blanc massif (Haute-Savoie, France), is a good example of a natural, non-anthropized system that allows us to study this topic. This glacier lies on two main lithologies (the Mont-Blanc granite and the metamorphic bedrock), and this peculiarity is used to determine the origin of the glacial sediments. The sedim...
Mourey, Jacques; Ravanel, Ludovic
2016-04-01
Given the evolution of the high mountain environment due to global warming, mountaineering routes and hut accesses are more and more strongly affected by glacial shrinkage and concomitant gravity processes, but almost no studies have been conducted on this relationship. The aim of this research is to describe and explain the evolution over the last century of the access routes to the five alpine huts around the Mer de Glace glacier (Mont Blanc massif), the largest French glacier (length = 11.5 km, area = 30 km²), a major place for Alpine tourism since 1741 and the birthplace of mountaineering, using several methods (comparing photographs, surveying, collecting historical documents). While most of the 20th century showed no marked changes, loss of ice thickness and the associated erosion of lateral moraines have generated numerous significant changes since the 1990s. Boulder falls, rockfalls and landslides are the main geomorphological processes affecting the access routes, while the lowering of the glacier surface makes access much longer and more unstable. The danger is thus greatly increased, and the access routes must be relocated and/or equipped more and more frequently (e.g. a total of 520 m of ladders has been added). This calls into question the future accessibility of the huts, jeopardizing an important part of mountaineering and its linked economy in the Mer de Glace area.
On the event detected by the Mont Blanc underground neutrino detector on February 23, 1987
Energy Technology Data Exchange (ETDEWEB)
Dadykin, V L; Zatsepin, G T; Korchagin, V B
1988-02-01
The event detected by the Mont Blanc Soviet-Italian scintillation detector on February 23, 1987 at 2:52:37 is discussed. The corrected energies of the pulses of the event and the probability of the event being imitated by the background are presented.
The December 2008 Crammont rock avalanche, Mont Blanc massif area, Italy
Directory of Open Access Journals (Sweden)
P. Deline
2011-12-01
We describe a 0.5 Mm³ rock avalanche that occurred in 2008 in the western Alps and discuss the possible roles of controlling factors in the context of current climate change. The source is located between 2410 m and 2653 m a.s.l. on Mont Crammont and is controlled by a densely fractured rock structure. The main part of the collapsed rock mass was deposited at the foot of the rock wall. A smaller part travelled much farther, reaching horizontal and vertical travel distances of 3050 m and 1560 m, respectively. The mobility of the rock mass was enhanced by channelization and snow. The rock-avalanche volume was calculated by comparison of pre- and post-event DTMs, and a geomechanical characterization of the detachment zone was extracted from LiDAR point cloud processing. Back analysis of the rock-avalanche runout suggests a two-stage event.
There was no previous rock avalanche activity from the Mont Crammont ridge during the Holocene. The 2008 rock avalanche may have resulted from permafrost degradation in the steep rock wall, as suggested by seepage water in the scar after the collapse in spite of negative air temperatures, and by modelling of rock temperatures that indicates warm permafrost (T > −2 °C).
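The pre-/post-event DTM comparison described above amounts to differencing two elevation grids over the detachment zone and summing the losses over the cell area. A minimal sketch, with toy grids and cell size that are purely illustrative, not the Crammont data:

```python
# Sketch of the pre-/post-event DTM differencing used for rock-avalanche
# volume estimation: sum positive elevation losses times the cell area.
# Grids and cell size are toy values, not the Crammont data.
def rockfall_volume(dtm_pre, dtm_post, cell_area_m2):
    """Volume lost (m^3) between two gridded DTMs of the detachment zone."""
    volume = 0.0
    for row_pre, row_post in zip(dtm_pre, dtm_post):
        for z_pre, z_post in zip(row_pre, row_post):
            dz = z_pre - z_post
            if dz > 0:  # count only material removed from the scar
                volume += dz * cell_area_m2
    return volume

pre  = [[2650.0, 2648.0], [2645.0, 2644.0]]   # elevations (m a.s.l.)
post = [[2640.0, 2641.0], [2645.0, 2643.5]]
print(rockfall_volume(pre, post, cell_area_m2=4.0))  # 70.0 m^3
```

In practice the two DTMs must first be co-registered; any residual misalignment shows up directly as spurious volume.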
A first comparison of Cosmo-Skymed and TerraSAR-X data over Chamonix Mont-Blanc test-site
Nicolas , Jean-Marie; Trouvé , Emmanuel; Fallourd , Renaud; Vernier , Flavien; Tupin , Florence; Harant , Olivier; Gay , Michel; Moreau , Luc
2012-01-01
This paper presents the first results obtained with satellite image time series (SITS) acquired by Cosmo-SkyMed (CSK) over the Chamonix Mont-Blanc test-site. A CSK SITS made of 39 images is merged with a TerraSAR-X SITS made of 26 images by using the orbital information and co-registration tools developed in the EFIDIR project. The results are illustrated by the computation of speckle-free images by temporal averaging, by the generation and comparison of topographi...
International Nuclear Information System (INIS)
Hubert, P.; Lombard, J.; Pages, P.
1986-09-01
The utility function approach (decision analysis) is one of the classical decision-aiding techniques that are of interest when performing an ALARA analysis. In this paper a case study serves as an illustration of this technique. The problem dealt with is the setting up of a regulation applying to the transit of small radioactive material packages (type A) through the Mont Blanc Tunnel, which is a major route between France and Italy. This case study is therefore a good example of an ALARA approach applied to a safety problem that involves both a probabilistic risk assessment and the evaluation of very heterogeneous criteria.
Ravanel, L.; Deline, P.; Lambiel, C.; Vincent, C.
2012-04-01
Glacier retreat and permafrost degradation are increasingly thought to explain the growing instability of rock slopes and rock ridges in high mountain environments. The hot summers with numerous rockfalls experienced over the last two decades in the Alps have indeed helped to test and strengthen the hypothesis of a strong correlation between rockfalls and global warming through these two cryospheric factors. Rockfalls from recently deglaciated and/or thawing areas may have very important economic and social implications for high mountain infrastructure and can be a fatal hazard for mountaineers. At high mountain sites with infrastructure that can be affected by rockfalls, the monitoring of rock slopes, permafrost and glaciers is thus an essential element for the sustainability of the infrastructure and for the knowledge and management of risks. Our study focuses on a particularly active area of the Mont Blanc massif (France), the lower Arête des Cosmiques, on which the very popular Refuge des Cosmiques (3613 m a.s.l.) is located. Since 1998, when a rockfall threatened part of the refuge and forced major stabilization works, observations have allowed the identification of 10 detachments (20 m³ to > 1000 m³), especially on the SE face of the ridge. Since 2009, this face has been surveyed yearly by terrestrial laser scanning to obtain high-resolution 3D models. Their diachronic comparison gives precise measurements of the evolution of the rock slope. Eight rock detachments have thus been documented (0.7 m³ to 256.2 m³). Rock temperature measurements on the ridge and at the nearby Aiguille du Midi (3842 m a.s.l.), together with observations of the evolution of the underlying Glacier du Géant, have enabled a better understanding of the origin of the strong dynamics of this highly vulnerable area: (i) rock temperature data suggest the presence of warm permafrost (i.e. close to 0°C) from the first metres down to depth in the SE face, and cold permafrost in the NW face; (ii) as suggested by the
Ehrlich, Robert
2018-05-01
According to conventional wisdom the 5 h early Mont Blanc burst probably was not associated with SN 1987A, but if it was genuine, some exotic physics explanation had to be responsible. Here we consider one truly exotic explanation, namely faster-than-light neutrinos having m_ν² = −0.38 keV². It is shown that the Mont Blanc burst is consistent with the distinctive signature of that explanation, i.e., an 8 MeV antineutrino line from SN 1987A. It is further shown that a model of core-collapse supernovae involving dark matter particles of mass 8 MeV would in fact yield an 8 MeV antineutrino line. Moreover, that dark matter model predicts 8 MeV ν, ν̄ and e⁺e⁻ pairs from the galactic center, a place where one would expect large amounts of dark matter to collect. The resulting e⁺ would create γ-rays from the galactic center, and a fit to MeV γ-ray data yields the model's dark matter mass, as well as the calculated source temperature and angular size. These good fits give indirect experimental support for the existence of an 8 MeV antineutrino line from SN 1987A. More direct support comes from the spectrum of N ∼ 1000 events recorded by the Kamiokande-II detector on the day of SN 1987A, which appear to show an 8 MeV line atop the detector background. This ν̄ line, if genuine, has been well hidden for 30 years because it occurs very close to the peak of the background. This fact might ordinarily justify extreme skepticism. In the present case, however, a more positive view is called for based on (a) the very high statistical significance of the result (30σ), (b) the use of a detector background independent of the SN 1987A data using a later K-II data set, and (c) the observation of an excess above the background spectrum whose central energy and width both agree with that of an 8 MeV ν̄ line broadened by 25% resolution. Most importantly, the last observation is in accord with the prior prediction of an 8 MeV ν̄ line based on the Mont Blanc data, and
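The timing consistency the abstract relies on can be checked with a one-line estimate. For a neutrino with negative mass squared, arrival is advanced relative to light over a distance D by Δt ≈ (D/c)·|m_ν²c⁴|/(2E²). Taking the standard SN 1987A distance of about 51.4 kpc, i.e. D/c ≈ 5.3 × 10¹² s (a value assumed here, not given in the abstract), and E = 8 MeV:

```latex
\Delta t \;\simeq\; \frac{D}{c}\,\frac{\lvert m_\nu^2 c^4\rvert}{2E^2}
\;=\; \left(5.3\times 10^{12}\,\mathrm{s}\right)
\frac{0.38\times 10^{-6}\,\mathrm{MeV}^2}{2\,(8\,\mathrm{MeV})^2}
\;\approx\; 1.6\times 10^{4}\,\mathrm{s} \;\approx\; 4.4\,\mathrm{h},
```

which is of the same order as the ~5 h early arrival of the Mont Blanc burst relative to the Kamiokande-II/IMB/Baksan events.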
D Agata, C.; Zanutta, A.; Muzzu Martis, D.; Mancini, F.; Smiraglia, C.
2003-04-01
The aim of this contribution is to evaluate the volumetric and surface variations of the Brenva Glacier (Mont Blanc, Italian Alps) during the second half of the 20th century, by GIS-based processing of maps and aerial photogrammetry. The Brenva Glacier is a typical debris-covered glacier, located in a valley on the SE side of Mont Blanc. The glacier covers a surface of 7 km² and has a maximum length of 7.6 km. The glacier snout reaches 1415 m a.s.l., which is the lowest glacier terminus in the Italian Alps. To evaluate glacier variations, different historical maps were used: 1) the 1959 map, at the scale 1:5,000, by EIRA (Ente Italiano Rilievi Aerofotogrammetrici, Firenze), from a terrestrial photogrammetric survey, published in the Bollettino del Comitato Glaciologico Italiano, 2, n. 19, 1971; 2) the 1971 map, at the scale 1:5,000, from aerial photogrammetry (Alifoto, Torino), published in the Bollettino del Comitato Glaciologico Italiano, 2, n. 20, 1972; 3) the 1988 map, at the scale 1:10,000 (Aosta Valley Region, Regional Technical Map), from a 1983 aerial photogrammetric survey; 4) the 1999 map, at the scale 1:10,000 (Aosta Valley Region, Regional Technical Map), from a 1991 aerial photogrammetry survey. For the same purpose the following aerial photographs were used: 1) the 1975 image, CGR (Italian General Company for Aerial Surveys) flight RAVDA (Autonomous Region of Aosta Valley), at the scale 1:17,000; 2) the 1991 image, CGR flight RAVDA, at the scale 1:17,000. Aerial imagery was thus acquired over the period from 1975 to 1991. The black and white images were scanned at a resolution suited to the imagery scale, and several models representing the glacier tongue area, oriented using the inner and outer orientation parameters delivered with the images, were produced. The digital photogrammetric system, after orientation and matching, produces
Directory of Open Access Journals (Sweden)
Jürg Thudium
2009-03-01
The environmental consequences of road transport with regard to air and noise in the transit valleys of Fréjus, Mont Blanc, Gotthard and Brenner have been analysed and compared with each other for the year 2004. With respect to the share of transalpine transport in transport as a whole, there are considerable differences between the valleys under investigation, as well as within the individual valleys. The air pollution (immission concentration) produced per emission unit of road transport is two to three times higher in these alpine valleys than in open country, mainly because of their particular topography and climate. At numerous monitoring points, the air pollution thresholds were exceeded. The valleys are also hard hit by noise pollution: the "amphitheatre effect" carries noise to higher altitudes that would not have been exposed to as much acoustic radiation had the source been located at the same distance in open country. Moreover, protection against noise reflected off the slopes is difficult. In summary, all the transit valleys studied can be considered sensitive regions.
Magnin, Florence; Westermann, Sebastian; Pogliotti, Paolo; Ravanel, Ludovic; Deline, Philip
2016-04-01
Permafrost degradation through the thickening of the active layer and rising temperatures at depth is a crucial process for rock wall stability. The ongoing increase in rock falls observed during hot periods in mid-latitude mountain ranges is regarded as a result of permafrost degradation. However, the short-term thermal dynamics of alpine rock walls are poorly understood since they result from complex processes involving the interaction of local climate variables, heterogeneous snow cover and heat transfers. As a consequence, steady-state and long-term changes, which can be approached with simpler processes mainly related to air temperature, solar radiation and heat conduction, have been the most commonly studied dynamics so far. The effect of snow on bedrock surface temperature is increasingly investigated and has already been demonstrated to be an essential factor in permafrost distribution. Nevertheless, its effect on year-to-year changes in active layer thickness and permafrost temperature in steep alpine bedrock has not yet been investigated, partly due to the lack of appropriate data. We explore the role of snow accumulations on the active layer and the permafrost thermal regime of steep rock walls at a high-elevation site, the Aiguille du Midi (AdM, 3842 m a.s.l., Mont Blanc massif, Western European Alps), by means of a multi-method approach. We first analyse six years of temperature records in three 10-m-deep boreholes. Then we describe the snow accumulation patterns on two rock faces by means of automatically processed camera records. Finally, sensitivity analyses of active layer thickness and permafrost temperature with respect to the timing and magnitude of snow accumulations are performed using the numerical permafrost model CryoGrid 3. The energy balance module is forced with local meteorological measurements on the AdM S face and validated with surface temperature measurements at the weather station location. The heat conduction scheme is calibrated with
Magnin, Florence; Josnin, Jean-Yves; Ravanel, Ludovic; Pergaud, Julien; Pohl, Benjamin; Deline, Philip
2017-08-01
High alpine rock wall permafrost is extremely sensitive to climate change. Its degradation has a strong impact on landscape evolution and can trigger rockfalls constituting an increasing threat to the socio-economic activities of highly frequented areas; a quantitative understanding of permafrost evolution is crucial for such communities. This study investigates the long-term evolution of permafrost in three vertical cross sections of rock wall sites between 3160 and 4300 m above sea level in the Mont Blanc massif, from the Little Ice Age (LIA) steady-state conditions to 2100. Simulations are forced with air temperature time series, including two contrasting air temperature scenarios for the 21st century representing possible lower and upper boundaries of future climate change according to the most recent models and climate change scenarios. The 2-D finite element model accounts for heat conduction and latent heat transfers, and the outputs for the current period (2010-2015) are evaluated against borehole temperature measurements and an electrical resistivity transect: permafrost conditions are remarkably well represented. Over the past two decades, permafrost has disappeared on faces with a southerly aspect up to 3300 m a.s.l. and possibly higher. Warm permafrost (i.e. > −2 °C) has extended up to 3300 and 3850 m a.s.l. in N- and S-exposed faces respectively. During the 21st century, warm permafrost is likely to extend at least up to 4300 m a.s.l. on S-exposed rock walls and up to 3850 m a.s.l. on N-exposed faces. In the most pessimistic case, permafrost will disappear on the S-exposed rock walls up to 4300 m a.s.l., whereas warm permafrost will extend at depth in the N faces up to 3850 m a.s.l., possibly disappearing at such elevations under the influence of a nearby S face. The results are site-specific and extrapolation to other sites is limited by the imbrication of local topographical and transient effects.
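The heat conduction at the core of such permafrost models can be illustrated with a far simpler 1-D explicit finite-difference scheme. This is a hedged sketch only: it omits the latent heat transfers, 2-D geometry and surface energy balance of the actual model, and all parameter values are invented for illustration.

```python
# Much-simplified 1-D explicit finite-difference sketch of heat conduction
# into a rock wall (no latent heat, no 2-D geometry, toy parameter values --
# not the Mont Blanc simulations). Node 0 is the surface; the last node is
# held at the initial temperature.
def conduct(temps, alpha, dx, dt, steps, t_surface):
    """March the temperature profile (deg C) forward in time."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme stability limit violated"
    for _ in range(steps):
        temps[0] = t_surface
        temps = ([temps[0]]
                 + [temps[i] + r * (temps[i + 1] - 2 * temps[i] + temps[i - 1])
                    for i in range(1, len(temps) - 1)]
                 + [temps[-1]])
    return temps

# A 10 m profile of -2 C (warm) permafrost with the surface held at 0 C for
# a year: warming diffuses downward from the surface.
profile = conduct([-2.0] * 11, alpha=1.5e-6, dx=1.0, dt=86400.0,
                  steps=365, t_surface=0.0)
```

Even this toy version shows why warm permafrost (close to 0 °C) responds within years near the surface while temperatures at depth lag, which is the transient effect the full model resolves.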
Climbing Mont Blanc and Scalability
Chavez, Christian
2016-01-01
This thesis details a proposed system implementation upgrade for the CMB system, accessible at climb.idi.ntnu.no, which profiles C/C++ code for its energy efficiency on an Odroid-XU3 board, which utilises a Samsung Exynos 5 Octa CPU and has an ARM Mali-T628 GPU. Our proposed system implementation improves the robustness of the code base and its execution, in addition to permitting an increased throughput of submissions profiled by the system with the implementation's dispatcher whic...
Guerin, Antoine; Abellán, Antonio; Matasci, Battista; Jaboyedoff, Michel; Derron, Marc-Henri; Ravanel, Ludovic
2017-07-01
In June 2005, a series of major rockfall events completely wiped out the Bonatti Pillar, located in the legendary west face of the Drus (Mont Blanc massif, France). Terrestrial lidar scans of the west face were acquired after this event, but no pre-event point cloud is available. Thus, in order to reconstruct the volume and shape of the collapsed blocks, a 3-D model has been built using photogrammetry (structure-from-motion (SfM) algorithms) based on 30 pictures collected on the Web, all taken between September 2003 and May 2005. We then reconstructed the shape and volume of the fallen compartment by comparing the SfM model with terrestrial lidar data acquired in October 2005 and November 2011. The volume is calculated to be 292,680 m³ (±5.6%). This result is close to the value previously assessed by Ravanel and Deline (2008) for this same rock avalanche (265,000 ± 10,000 m³). The difference between these two estimates can be explained by the rounded shape of the volume determined by photogrammetry, which may lead to a volume overestimation. However, it cannot be excluded that the volume calculated by Ravanel and Deline (2008) is slightly underestimated, as the thickness of the blocks was assessed manually from historical photographs.
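The gap between the two volume estimates quoted above is easy to quantify; the short check below uses only the figures given in the abstract.

```python
# Quick check on the two Drus rock-avalanche volume estimates quoted above.
v_sfm = 292_680.0    # SfM / lidar-comparison estimate (m^3)
v_prev = 265_000.0   # Ravanel and Deline (2008) estimate (m^3)
rel_diff = (v_sfm - v_prev) / v_prev
print(f"{rel_diff:.1%}")  # 10.4%: the SfM volume exceeds the earlier one
```

A ~10% discrepancy is larger than the ±5.6% quoted on the SfM value alone, which is why the abstract discusses both overestimation (rounded SfM shape) and underestimation (manually assessed block thickness) as explanations.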
Aglietta, M.; Badino, G.; Bologna, G. F.; Castagnoli, C.; Fulgione, W.; Galeotti, P.; Saavedra, O.; Trinchero, G. C.; Vernetto, S.; Dadykin, V. L.
1985-01-01
The 90-ton liquid scintillation detector (LSD) has been fully running since October 1984, at a depth of 5,200 hg/sq cm of standard rock underground. The main goal is to search for neutrino bursts from collapsing stars. The experiment is very sensitive to low energy particles and has a very good signature for the gamma rays from the (n,p) reaction that follows the ν̄_e + p → n + e⁺ capture. The analysis of the data is presented and the preliminary results on low energy measurements are discussed.
International Nuclear Information System (INIS)
Aglietta, M.; Badino, G.; Bologna, G.F.
1985-01-01
The 90-ton liquid scintillation detector (LSD) has been fully running since October 1984 at a depth of 5,200 hg/sq cm of standard rock underground. The main goal is to search for neutrino bursts from collapsing stars. The experiment is very sensitive to low energy particles and has a very good signature for the gamma rays from the (n,p) reactions that follow the ν̄_e + p → n + e⁺ capture. The analysis of the data is presented and the preliminary results on low energy measurements are discussed. 1 ref
Monte Carlo work at Argonne National Laboratory
International Nuclear Information System (INIS)
Gelbard, E.M.; Prael, R.E.
1974-01-01
A simple model of the Monte Carlo process is described and a (nonlinear) recursion relation between fission sources in successive generations is developed. From the linearized form of these recursion relations, it is possible to derive expressions for the mean square coefficients of error modes in the iterates and for correlation coefficients between fluctuations in successive generations. First-order nonlinear terms in the recursion relation are analyzed. From these nonlinear terms an expression for the bias in the eigenvalue estimator is derived, and prescriptions for measuring the bias are formulated. Plans for the development of the VIM code are reviewed, and the proposed treatment of small sample perturbations in VIM is described. 6 references. (U.S.)
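The generation-to-generation iteration analysed above can be caricatured in a few lines. In this toy version each source particle independently produces a Poisson number of fission sites and the source is renormalized to a fixed size every generation, so the per-generation k estimates are essentially independent; in a real code the spatial source distribution carries over between generations, which is precisely what produces the correlations and eigenvalue-estimator bias the abstract derives. Everything here is illustrative, not the VIM treatment.

```python
# Toy sketch of the Monte Carlo generation iteration discussed above.
# Each of n_particles source particles produces a Poisson(k_true) number of
# fission sites; the source is renormalized to n_particles each generation,
# and k is estimated as sites / parents. Illustrative only: with no spatial
# source distribution there is no inter-generation correlation here.
import math
import random

def sample_poisson(rng, mean):
    """Knuth's algorithm, adequate for small means."""
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def run_generations(n_particles, k_true, n_gen, seed=1):
    rng = random.Random(seed)
    return [sum(sample_poisson(rng, k_true) for _ in range(n_particles))
            / n_particles
            for _ in range(n_gen)]

ks = run_generations(n_particles=500, k_true=1.0, n_gen=200)
print(sum(ks) / len(ks))  # fluctuates around k_true = 1.0
```

Measuring how the sample mean of such estimates deviates from the true eigenvalue as the population size shrinks is the kind of bias measurement the abstract's prescriptions formalize.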
Gosso, Guido; Croce, Giuseppe; Matteucci, Ruggero; Peppoloni, Silvia; Piacente, Sandra; Wasowski, Janusz
2013-04-01
In the first decade after the Second World War, Italy was rushing to recover a positive role among European countries; basic needs such as road communications with European neighbours became main priorities. The necessity of a rapid connection with south-eastern France, a subject already debated between the two nations for more than 50 years, then came to the fore; the two countries agreed on a joint investment for the construction of a tunnel across the international border under Mont Blanc, along the shortest track between Courmayeur and Chamonix. The political agreements favoured the quickest possible start of the drilling operations, and this obligation imposed on the Italian side an impoverishment of the project content, especially concerning geological issues. No surveys were performed on fracture systems, cataclastic zones and faults on the few rock ridges standing above the tunnel line and outcropping through thick talus cones, moraines, ice tongues and their related ice plateaus. Metasediments, migmatites and poorly foliated granites were to be drilled. Three Italian academics were allowed by the drilling company to track the working progress and collect rocks for comparison with other Alpine types; they mapped the lithology and the fault zones all along the freshly excavated tunnel; the results of this survey appeared after the end of the works. Geologists from Florence University published the surface granite faulting pattern 20 years after the road tunnel became operative. Such geological care could have located the risky zones in time for the tunnel project, mitigating the catastrophic effects of the sudden drainage of subglacial water from the Vallée Blanche ice plateau (Ghiacciaio del Gigante) at progression 3800 m, which caused dramatic accidents and negatively affected the economics of the drilling. The wall-rock temperature drops measured during drilling should also have warned the company management about the location of dangerous fracture zones. Anxiety of
HR Department
2009-01-01
We deeply regret to announce the death of Mr Michel BLANC on 27 November 2009. Mr BLANC, who was born on 4 April 1952, was a member of the IT Department and had worked at CERN since 1 January 1978. The Director-General has sent his family a message of condolence on behalf of the CERN personnel. Social Affairs It was with great sadness that we learned of the death of our friend and colleague Michel Blanc on the evening of Friday 27 November. Everyone who knew him, especially those who had spent many years with him in the Computer Centre where he worked since his arrival at CERN in 1978, but also his more recent colleagues, will always remember his good humour, his quick wit and his amazing zest for life. He made staunch friends during his thirty years at CERN and often kept in touch with them after they left the Organization. His passion for motorcycling and for walks with his wife, his two sons and his friends were some of his great joys in life. Michel took a well-earned early retirement in Octobe...
International Nuclear Information System (INIS)
Dadykin, V.L.; Khalchukov, F.F.; Korchagin, P.V.; Korolkova, E.V.; Kudryavtsev, V.A.; Mal'gin, A.S.; Ryasny, V.G.; Ryazhskaya, O.G.; Yakushev, V.F.; Zatsepin, G.T.; Aglietta, M.; Badino, G.; Bologna, G.; Castagnoli, C.; Castellina, A.; Fulgione, W.; Galeotti, P.; Saavedra, O.; Trinchero, G.; Vernetto, S.; Turin Univ.
1989-01-01
We have analysed the data of LSD from February 10, 1987, to March 7, 1987, in order to search for autocorrelations between all pulses detected by LSD with energy higher than 5 MeV, like those that occurred at ∼ 3:00 UT on February 23, 1987, between the pulses detected by 3 neutrino telescopes and 2 gravitational wave antennae. We have found 9 pairs of correlated pulses (muon + low-energy pulse) from 5:42 UT to 10:13 UT on February 23, 1987. The time differences of the pulses in the pairs are less than 2 s, the first pulse in a pair being either a muon or a low-energy pulse. The frequency of such random Poissonian fluctuations is ∼ 1/(10 years). There are no correlations in excess of statistical expectation between low-energy and low-energy pulses, or between muon and muon pulses, detected by LSD during the whole time period.
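The quoted rate of ∼1/(10 years) for chance Poissonian fluctuations translates into a small probability of such a configuration appearing anywhere in the analysed window. A quick check, with the window length read off the dates in the abstract:

```python
# Back-of-the-envelope: at the quoted rate of ~1 chance occurrence per
# 10 years, the probability of at least one occurrence in the ~25-day
# analysis window (Feb 10 - Mar 7, 1987) follows from Poisson statistics.
# Window length is taken from the abstract's dates; the rate is the
# authors' estimate.
import math

rate_per_day = 1.0 / (10 * 365.25)
window_days = 25.0
p_at_least_one = 1.0 - math.exp(-rate_per_day * window_days)
print(f"{p_at_least_one:.4f}")  # ~0.0068
```

The chance probability for the much shorter 5:42-10:13 UT interval alone would be smaller still, which is what makes the observed clustering on February 23 notable.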
Monte Carlo applications at Hanford Engineering Development Laboratory
International Nuclear Information System (INIS)
Carter, L.L.; Morford, R.J.; Wilcox, A.D.
1980-03-01
Twenty applications of neutron and photon transport with Monte Carlo have been described to give an overview of the current effort at HEDL. A satisfaction factor was defined which quantitatively assigns an overall return for each calculation relative to the investment in machine time and expenditure of manpower. Low satisfaction factors are frequently encountered in the calculations. Usually this is due to limitations in the execution rates of present-day computers, but sometimes a low satisfaction factor is due to computer code limitations, calendar time constraints, or inadequacy of the nuclear data base. Present-day computer codes have taken some of the burden off the user. Nevertheless, it is highly desirable for the engineer using a computer code to have an understanding of particle transport, including some intuition for the problems being solved, to understand the construction of sources for the random walk, to understand the interpretation of tallies made by the code, and to have a basic understanding of elementary biasing techniques.
Energy Technology Data Exchange (ETDEWEB)
Leupin, O.X. [National Cooperative for the Disposal of Radioactive Waste (NAGRA), Wettingen (Switzerland); Bernier-Latmani, R.; Bagnoud, A. [Swiss Federal Office of Technology EPFL, Lausanne (Switzerland); Moors, H.; Leys, N.; Wouters, K. [Belgian Nuclear Research Centre SCK-CEN, Mol (Belgium); Stroes-Gascoyne, S. [University of Saskatchewan, Saskatoon (Canada)
2017-04-15
Microbiological studies related to the geological disposal of radioactive waste have been conducted at the Mont Terri rock laboratory in Opalinus Clay, a potential host rock for a deep geologic repository, since 2002. The metabolic potential of microorganisms and their response to excavation-induced effects have been investigated in undisturbed and disturbed claystone cores and in pore- (borehole) water. Results from nearly 15 years of research at the Mont Terri rock laboratory have shown that microorganisms can potentially affect the environment of a repository by influencing redox conditions, metal corrosion and gas production and consumption under favourable conditions. However, the activity of microorganisms in undisturbed Opalinus Clay is limited by the very low porosity, the low water activity, and the largely recalcitrant nature of organic matter in the claystone formation. The presence of microorganisms in numerous experiments at the Mont Terri rock laboratory has suggested that excavation activities and perturbation of the host rock combined with additional contamination during the installation of experiments in boreholes create favourable conditions for microbial activity by providing increased space, water and substrates. Thus effects resulting from microbial activity might be expected in the proximity of a geological repository i.e., in the excavation damaged zone, the engineered barriers, and first containments (the containers). (authors)
Twenty years of research at the Mont Terri rock laboratory: what we have learnt
Energy Technology Data Exchange (ETDEWEB)
Bossart, P. [Federal Office of Topography swisstopo, Wabern (Switzerland)
2017-04-15
The 20 papers in this Special Issue address questions related to the safe deep geological disposal of radioactive waste. Here we summarize the main results of these papers related to issues such as: formation of the excavation damaged zone, self-sealing processes, thermo-hydro-mechanical processes, anaerobic corrosion, hydrogen production and effects of microbial activity, and transport and retention processes of radionuclides. In addition, we clarify the question of transferability of results to other sites and programs and the role of rock laboratories for cooperation and training. Finally, we address the important role of the Mont Terri rock laboratory for the public acceptance of radioactive waste disposal. (author)
The Mont Terri rock laboratory: International research in the Opalinus Clay
International Nuclear Information System (INIS)
Bossart, P.
2015-01-01
This article reports on a visit made to the rock laboratory in Mont Terri, Switzerland, where research is being done concerning rock materials that can possibly be used for the implementation of repositories for nuclear wastes. Emphasis is placed on the project’s organisation, rock geology and on-going experiments. International organisations also involved in research on nuclear waste repositories are listed. The research facilities in tunnels built in Opalinus Clay at the Mont Terri site are described. The geology of Opalinus Clay and the structures found in the research tunnels are discussed, as is the hydro-geological setting. The research programme and the various institutions involved are listed and the experiments carried out are noted. The facilities are now also being used for research on topics related to carbon sequestration.
GRS' research on clay rock in the Mont Terri underground laboratory
Energy Technology Data Exchange (ETDEWEB)
Wieczorek, Klaus; Czaikowski, Oliver [Gesellschaft fuer Anlagen- und Reaktorsicherheit gGmbH, Braunschweig (Germany)
2016-07-15
For constructing a nuclear waste repository and ensuring that the safety requirements are met over very long time periods, thorough knowledge of the safety-relevant processes occurring in the coupled system of waste containers, engineered barriers, and host rock is indispensable. For such targeted research work, the Mont Terri rock laboratory is a unique facility where repository research is performed in a clay-rock environment. It is run by 16 international partners, and a great variety of questions are investigated. Some of the work in which GRS, as one of the Mont Terri partners, is involved is presented in this article. The focus is on the thermal, hydraulic and mechanical behaviour of the host rock and/or engineered barriers.
Clements, Aspen R.; Berk, Brandon; Cooke, Ilsa R.; Garrod, Robin T.
2018-02-01
Using an off-lattice kinetic Monte Carlo model we reproduce experimental laboratory trends in the density of amorphous solid water (ASW) for varied deposition angle, rate and surface temperature. Extrapolation of the model to conditions appropriate to protoplanetary disks and interstellar dark clouds indicates that these ices may be less porous than laboratory ices.
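The angle-porosity trend can be illustrated qualitatively with a toy on-lattice ballistic "hit-and-stick" deposition sketch. This is emphatically not the authors' off-lattice kinetic Monte Carlo model; all names and parameters below are invented for illustration. Particles freeze at the first obstruction, so oblique trajectories shadow the growing surface and trap voids:

```python
import random

def deposit(n_particles: int, width: int, slide_prob: float, seed: int = 0) -> float:
    """Toy on-lattice ballistic 'hit-and-stick' deposition (2D, periodic in x).

    slide_prob plays the role of tan(incidence angle): the probability of a
    one-cell lateral step per one-cell fall. Particles freeze at the first
    obstruction, so oblique incidence shadows the surface and traps voids.
    Returns the porosity (empty fraction of the film below its surface).
    """
    rng = random.Random(seed)
    occupied = set()
    top = -1                               # highest occupied row so far
    for _ in range(n_particles):
        x, y = rng.randrange(width), top + 2
        while True:
            nx = (x + 1) % width if rng.random() < slide_prob else x
            ny = y - 1
            if ny < 0 or (nx, ny) in occupied:
                occupied.add((x, y))       # stick where the path is blocked
                top = max(top, y)
                break
            x, y = nx, ny
    heights = [0] * width
    for cx, cy in occupied:
        heights[cx] = max(heights[cx], cy + 1)
    return 1.0 - len(occupied) / sum(heights)

dense = deposit(3000, 80, slide_prob=0.0)   # normal incidence: compact film
porous = deposit(3000, 80, slide_prob=0.7)  # oblique incidence: shadowed voids
```

At normal incidence every column fills solidly, while sideways drift leaves shadowed vacancies, which is the qualitative behaviour the laboratory ASW experiments show.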
Energy Technology Data Exchange (ETDEWEB)
Gens, A. [Universitat Politècnica de Catalunya, Barcelona (Spain); Wieczorek, K. [Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) GmbH, Braunschweig (Germany); Gaus, I. [National Cooperative for the Disposal of Radioactive Waste (NAGRA), Wettingen (Switzerland); and others
2017-04-15
The paper presents an overview of the behaviour of Opalinus Clay under thermal loading as observed in three in situ heating tests performed in the Mont Terri rock laboratory: HE-B, HE-D and HE-E. The three tests are briefly described; they encompass a broad range of test layouts and experimental conditions. Afterwards, the following topics are examined: determination of thermal conductivity, thermally induced pore pressure generation and thermally induced mechanical effects. The mechanisms underlying pore pressure generation and dissipation are discussed in detail and the relationship between rock damage and thermal loading is examined using an additional in situ test: SE-H. The paper concludes with an evaluation of the various thermo-hydro-mechanical (THM) interactions identified in the heating tests. (authors)
International Nuclear Information System (INIS)
Bossart, P.; Bernier, F.; Birkholzer, J.
2017-01-01
Geologic repositories for radioactive waste are designed as multi-barrier disposal systems that perform a number of functions including the long-term isolation and containment of waste from the human environment, and the attenuation of radionuclides released to the subsurface. The rock laboratory at Mont Terri (canton Jura, Switzerland) in the Opalinus Clay plays an important role in the development of such repositories. The experimental results gained in the last 20 years are used to study the possible evolution of a repository and investigate processes closely related to the safety functions of a repository hosted in a clay rock. At the same time, these experiments have increased our general knowledge of the complex behaviour of argillaceous formations in response to coupled hydrological, mechanical, thermal, chemical, and biological processes. After presenting the geological setting in and around the Mont Terri rock laboratory and an overview of the mineralogy and key properties of the Opalinus Clay, we give a brief overview of the key experiments that are described in more detail in the following research papers to this Special Issue of the Swiss Journal of Geosciences. These experiments aim to characterise the Opalinus Clay and estimate safety-relevant parameters, test procedures, and technologies for repository construction and waste emplacement. Other aspects covered are: bentonite buffer emplacement, high-pH concrete-clay interaction experiments, anaerobic steel corrosion with hydrogen formation, depletion of hydrogen by microbial activity, and finally, release of radionuclides into the bentonite buffer and the Opalinus Clay barrier. In the case of a spent fuel/high-level waste repository, the time considered in performance assessment for repository evolution is generally 1 million years, starting with a transient phase over the first 10,000 years and followed by an equilibrium phase. Experiments dealing with initial conditions, construction, and waste
Monte Carlo analysis of the Neutron Standards Laboratory of the CIEMAT
International Nuclear Information System (INIS)
Vega C, H. R.; Mendez V, R.; Guzman G, K. A.
2014-10-01
The neutron field produced by the calibration sources of the Neutron Standards Laboratory of the Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas (CIEMAT) was characterized by means of Monte Carlo methods. The laboratory has two neutron calibration sources, 241AmBe and 252Cf, which are stored in a water pool and are placed on the calibration bench using remotely controlled systems. To characterize the neutron field, a three-dimensional model of the room was built that included the stainless-steel bench, the irradiation table and the storage pool. The source model included the double steel encapsulation as cladding. To determine the effect of the presence of the different components of the room, the neutron spectra, the total fluence and the ambient dose equivalent rate at 100 cm from the source were considered during the characterization. The walls, floor and ceiling of the room cause the largest modification of the spectra and of the integral values of the fluence and the ambient dose equivalent rate. (Author)
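The free-field part of such a characterization is just the isotropic point-source term that the room-return components (walls, floor, ceiling, bench) then modify. A minimal sketch, with illustrative numbers that are not CIEMAT's source certificates: an assumed emission rate of 1e7 n/s, the 100 cm bench distance, and ~391 pSv·cm² as a typical spectrum-averaged fluence-to-dose coefficient for a bare 241AmBe field:

```python
import math

def fluence_rate(emission_rate_n_s: float, r_cm: float) -> float:
    """Free-field fluence rate (n cm^-2 s^-1) of an isotropic point source:
    the 1/(4 pi r^2) term that the room-return components modify."""
    return emission_rate_n_s / (4.0 * math.pi * r_cm ** 2)

def ambient_dose_rate_uSv_h(emission_rate_n_s: float, r_cm: float,
                            h_star_pSv_cm2: float) -> float:
    """H*(10) rate from a spectrum-averaged fluence-to-dose coefficient."""
    phi = fluence_rate(emission_rate_n_s, r_cm)
    return phi * h_star_pSv_cm2 * 1e-6 * 3600.0  # pSv/s -> uSv/h

# Illustrative inputs only; real calibrations use certified emission rates.
phi_100 = fluence_rate(1e7, 100.0)
dose_100 = ambient_dose_rate_uSv_h(1e7, 100.0, 391.0)
```

The Monte Carlo room model quantifies how much the scattered (room-return) component adds on top of this 1/r² baseline at each bench position.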
Monte Carlo simulation of muon radiation environment in China Jinping Underground Laboratory
International Nuclear Information System (INIS)
Su Jian; Zeng Zhi; Liu Yue; Yue Qian; Ma Hao; Cheng Jianping
2012-01-01
The muon radiation background of the China Jinping Underground Laboratory (CJPL) was simulated by the Monte Carlo method. Based on the Gaisser formula and the MUSIC code, a model of the cosmic-ray muons was established. Then the yield and the average energy of muon-induced photons and muon-induced neutrons were simulated with FLUKA. With the single-energy approximation, the contribution of secondary photons and neutrons to the radiation background of the shielding structure was evaluated. The estimated results show that the average energy of the residual muons is 369 GeV and the flux is 3.17 × 10⁻⁶ m⁻²·s⁻¹. The fluence rate of secondary photons is about 1.57 × 10⁻⁴ m⁻²·s⁻¹, and the fluence rate of secondary neutrons is about 8.37 × 10⁻⁷ m⁻²·s⁻¹. The muon radiation background of CJPL is lower than those of most other underground laboratories in the world. (authors)
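The sea-level input to such simulations, the Gaisser formula, is compact enough to sketch directly. The conventional parametrization (pion and kaon parent terms; muon decay and Earth curvature neglected) is:

```python
def gaisser_flux(E_GeV: float, cos_theta: float) -> float:
    """Conventional Gaisser parametrization of the sea-level differential
    muon intensity dN/(dE dOmega) in cm^-2 s^-1 sr^-1 GeV^-1, valid for
    E >~ 100 GeV and zenith angles below ~70 degrees."""
    pion_term = 1.0 / (1.0 + 1.1 * E_GeV * cos_theta / 115.0)    # pion parents
    kaon_term = 0.054 / (1.0 + 1.1 * E_GeV * cos_theta / 850.0)  # kaon parents
    return 0.14 * E_GeV ** -2.7 * (pion_term + kaon_term)

# The steep ~E^-3.7 high-energy spectrum is what makes the residual flux
# under 2400 m of rock so small:
f_100 = gaisser_flux(100.0, 1.0)
f_1000 = gaisser_flux(1000.0, 1.0)
```

MUSIC then propagates muons drawn from this surface spectrum through the rock overburden to obtain the residual flux and energy distribution underground.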
Geological modeling of a fault zone in clay rocks at the Mont-Terri laboratory (Switzerland)
Kakurina, M.; Guglielmi, Y.; Nussbaum, C.; Valley, B.
2016-12-01
Clay-rich formations are considered to be a natural barrier against the migration of radionuclides or fluids (water, hydrocarbons, CO2). However, little is known about the architecture of faults affecting clay formations because of their quick alteration at the Earth's surface. The Mont Terri Underground Research Laboratory provides exceptional conditions to investigate an un-weathered, perfectly exposed clay fault zone architecture and to conduct fault activation experiments that allow exploring the conditions for the stability of such clay faults. Here we show first results from a detailed geological model of the Mont Terri Main Fault architecture, built with GoCad software from a detailed structural analysis of six fully cored and logged boreholes, 30 to 50 m long and 3 to 15 m apart, crossing the fault zone. These high-definition geological data were acquired within the Fault Slip (FS) experiment project, which consisted of fluid injections in different intervals within the fault using the SIMFIP probe to explore the conditions for the fault's mechanical and seismic stability. The Mont Terri Main Fault "core" consists of a thrust zone about 0.8 to 3 m wide that is bounded by two major fault planes. Between these planes, there is an assembly of distinct slickensided surfaces and various facies including scaly clays, fault gouge and fractured zones. Scaly clay, including S-C bands and microfolds, occurs in larger zones at the top and bottom of the Main Fault. A cm-thin layer of gouge, which is known to accommodate high-strain parts, runs along the upper fault zone boundary. The non-scaly part mainly consists of undeformed rock blocks bounded by slickensides. Such complexity, as well as the continuity of the two major surfaces, is hard to correlate between the different boreholes even with the high density of geological data within the relatively small volume of the experiment. This may show that poor strain localization occurred during faulting, giving some perspectives about the potential for
Energy Technology Data Exchange (ETDEWEB)
Amann, F.; Wild, K.M.; Loew, S. [Institute of Geology, Engineering Geology, Swiss Federal Institute of Technology, Zurich (Switzerland); Yong, S. [Knight Piesold Ltd, Vancouver (Canada); Thoeny, R. [Grundwasserschutz und Entsorgung, AF-Consult Switzerland AG, Baden (Switzerland); Frank, E. [Sektion Geologie (GEOL), Eidgenössisches Nuklear-Sicherheitsinspektorat (ENSI), Brugg (Switzerland)
2017-04-15
The paper presents a summary of our research projects conducted between 2003 and 2015 on the mechanical behaviour of Opalinus Clay at Mont Terri. The research summarized covers a series of laboratory and field tests that address the brittle failure behaviour of Opalinus Clay, its undrained and effective strength, the dependency of petro-physical and mechanical properties on total suction, hydro-mechanically coupled phenomena and the development of a damage zone around excavations. On the laboratory scale, even simple laboratory tests are difficult to interpret and uncertainties remain regarding the representativeness of the results. We show that suction may develop rapidly after core extraction and substantially modifies the strength, stiffness, and petro-physical properties of Opalinus Clay. Consolidated undrained tests performed on fully saturated specimens revealed a relatively small true cohesion and confirmed the strong hydro-mechanically coupled behaviour of this material. Strong hydro-mechanically coupled processes may explain the stability of cores and tunnel excavations in the short term. Pore-pressure effects may cause effective stress states that favour stability in the short term but may cause longer-term deformations and damage as the pore pressure dissipates. In-situ observations show that macroscopic fracturing is strongly influenced by bedding planes and fault planes. In tunnel sections where opening or shearing along bedding planes or fault planes is kinematically free, the induced fracture type is strongly dependent on the fault plane frequency and orientation. A transition from extensional macroscopic failure to shearing can be observed with increasing fault plane frequency. In zones around the excavation where bedding plane shearing/shearing along tectonic fault planes is kinematically restrained, primary extensional type fractures develop. In addition, heterogeneities such as single tectonic fault planes or fault zones
Energy Technology Data Exchange (ETDEWEB)
Leupin, O.X. [National Cooperative for the Disposal of Radioactive Waste (NAGRA), Wettingen (Switzerland); Van Loon, L.R. [Paul Scherrer Institute PSI, Villigen (Switzerland); Gimmi, T. [Institute of Geological Sciences, University of Berne, Berne (Switzerland); Gimmi, T. [Institute of Environmental Assessment and Water Research IDAEA-CSIC, Barcelona (Spain); and others
2017-04-15
Transport and retardation parameters of radionuclides, which are needed to perform a safety analysis for a deep geological repository for radioactive waste in a compacted claystone such as Opalinus Clay, must be based on a detailed understanding of the mobility of nuclides at different spatial scales (laboratory, field, geological unit). Thanks to steadily improving experimental designs, similar tracer compositions in different experiments and complementary small laboratory-scale diffusion tests, a unique and large database could be compiled. This paper presents the main findings of 20 years of diffusion and retention experiments at the Mont Terri rock laboratory and their impact on safety analysis. (authors)
International Nuclear Information System (INIS)
Hostettler, B.; Reisdorf, A. G.; Jaeggi, D.
2017-01-01
A 250 m-deep inclined well, the Mont Terri BDB-1, was drilled through the Jurassic Opalinus Clay and its bounding formations at the Mont Terri rock laboratory (NW Switzerland). For the first time, a continuous section from (oldest to youngest) the topmost members of the Staffelegg Formation to the basal layers of the Hauptrogenstein Formation is now available in the Mont Terri area. We extensively studied the drill core for lithostratigraphy and biostratigraphy, drawing upon three sections from the Mont Terri area. The macropaleontological, micropaleontological, and palynostratigraphical data are complementary, not only spatially but also in covering almost all biozones from the Late Toarcian to the Early Bajocian. We ran a suite of geophysical logs to determine formational and intraformational boundaries based on clay content in the BDB-1 well. In the framework of an interdisciplinary study, analysis of the above-mentioned formations permitted us to process and derive new and substantial data for the Mont Terri area in a straightforward way. Some parts of the lithologic inventory, stratigraphic architecture, thickness variations, and biostratigraphic classification of the studied formations deviate considerably from occurrences in northern Switzerland that crop out further to the east. For instance, with the exception of the Sissach Member, no further lithostratigraphic subdivision into members is proposed for the Passwang Formation. Also noteworthy is that the ca. 130 m-thick Opalinus Clay in the BDB-1 core is 20 m thinner than the equivalent section found in the Mont Terri tunnel. The lowermost 38 m of the Opalinus Clay can be attributed chronostratigraphically solely to the Aalensis Zone (Late Toarcian). Deposition of the Opalinus Clay began at the same time farther east in northern Switzerland (Aalensis Subzone, Aalensis Zone), but in the Mont Terri area the sedimentation rate was two or three orders of magnitude higher. (authors)
Deep underground multiple muons at the Mt. Blanc station
Energy Technology Data Exchange (ETDEWEB)
Bergamasco, L; Bilokon, H; D'Ettorre Piazzoli, B; Mannocchi, G [Consiglio Nazionale delle Ricerche, Turin (Italy). Lab. di Cosmo-Geofisica]; Castagnoli, C; Picchi, P [Consiglio Nazionale delle Ricerche, Turin (Italy). Lab. di Cosmo-Geofisica; Turin Univ. (Italy). Ist. di Fisica Generale]
1979-12-29
Results on multiple events recorded at the Mt. Blanc station in the last 3 years are presented. The integral energy spectrum of muons is obtained for E_μ > 1 TeV in the size range 10⁶–10⁷, which favours a multiplicity law for hadronic interactions of the form η ≈ E^(1/4).
Energy Technology Data Exchange (ETDEWEB)
Mäder, U.; Jenni, A. [Institute of Geological Sciences, University of Berne, Berne (Switzerland); Lerouge, C. [French Geological Survey BRGM, Orléans (France); and others
2017-04-15
The Cement-Opalinus Clay Interaction (CI) Experiment at the Mont Terri rock laboratory is a long-term passive diffusion-reaction experiment between contrasting materials of relevance to engineered barrier systems/near-field for deep disposal of radioactive waste in claystone (Opalinus Clay). Reaction zones at interfaces of Opalinus Clay with two different types of concrete (OPC and 'low-pH'/ESDRED) were examined by sampling after 2.2 and 4.9 years. Analytical methods included element mapping (SEM, EPMA), selected spot analyses (EDAX), 14C-MMA impregnation for radiography, and powder methods (IR, XRD, clay-exchanger characterisation) on carefully extracted miniature samples (mm). The presence of aggregate grains in the concrete made the application of all methods difficult. Common features are a very limited extent of reaction within the claystone, and a distinct and regularly zoned reaction zone within the cement matrix that is more extensive in the low-alkali cement (ESDRED). Both interfaces feature a de-calcification zone overprinted by a carbonate alteration zone that is thought to be mainly responsible for the observed porosity reduction. While OPC shows a distinct sulphate enrichment zone (indicative of ingress from Opalinus Clay), ESDRED displays a wide Mg-enriched zone, also with claystone pore-water as the source. A conclusion is that substitution of OPC by low-alkali cementitious products is not advantageous or necessary solely for the purpose of minimizing the extent of reaction between claystone and cementitious materials. Implications for reactive transport modelling are discussed. (authors)
Implementation of the full-scale emplacement (FE) experiment at the Mont Terri rock laboratory
Energy Technology Data Exchange (ETDEWEB)
Müller, H.R.; Garitte, B.; Vogt, T.; and others
2017-04-15
Opalinus Clay is currently being assessed as the host rock for a deep geological repository for high-level and low- and intermediate-level radioactive wastes in Switzerland. Within this framework, the 'Full-Scale Emplacement' (FE) experiment was initiated at the Mont Terri rock laboratory close to the small town of St-Ursanne in Switzerland. The FE experiment simulates, as realistically as possible, the construction, waste emplacement, backfilling and early post-closure evolution of a spent fuel/vitrified high-level waste disposal tunnel according to the Swiss repository concept. The main aim of this multiple heater test is the investigation of repository-induced thermo-hydro-mechanical (THM) coupled effects on the host rock at this scale and the validation of existing coupled THM models. For this, several hundred sensors were installed in the rock, the tunnel lining, the bentonite buffer, the heaters and the plug. This paper is structured according to the implementation timeline of the FE experiment. It documents relevant details about the instrumentation, the tunnel construction, the production of the bentonite blocks and the highly compacted 'granulated bentonite mixture' (GBM), the development and construction of the prototype 'backfilling machine' (BFM) and its testing for horizontal GBM emplacement. Finally, the plug construction and the start of all three heaters (with a thermal output of 1350 W each) in February 2015 are briefly described. In this paper, measurement results representative of the different experimental steps are also presented. Tunnel construction aspects are discussed on the basis of tunnel wall displacements, permeability testing and relative humidity measurements around the tunnel. GBM densities achieved with the BFM in the different off-site mock-up tests and, finally, in the FE tunnel are presented. Lastly, in situ thermal conductivity and temperature measurements recorded during the first heating months
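A first-order feel for the temperature field such a heater test produces comes from the classical infinite line-source conduction solution, ΔT = q/(4πλ)·E1(r²/4κt). The numbers below are illustrative assumptions, not the FE design figures: 300 W per metre of heater, a conductivity of 2 W/(m·K) and a diffusivity of 9e-7 m²/s, evaluated after 90 days:

```python
import math

def exp_integral_e1(x: float, terms: int = 40) -> float:
    """E1(x) via the convergent series -gamma - ln(x) - sum((-x)^k/(k*k!)),
    adequate for the small arguments used here."""
    gamma = 0.5772156649015329
    s = -gamma - math.log(x)
    term = 1.0
    for k in range(1, terms + 1):
        term *= -x / k          # term = (-x)^k / k!
        s -= term / k
    return s

def line_source_dT(r_m: float, t_s: float, q_W_per_m: float,
                   conductivity: float, diffusivity: float) -> float:
    """Temperature rise around an infinite line heater, conduction only:
    dT = q / (4 pi lambda) * E1(r^2 / (4 kappa t))."""
    u = r_m ** 2 / (4.0 * diffusivity * t_s)
    return q_W_per_m / (4.0 * math.pi * conductivity) * exp_integral_e1(u)

# Illustrative values only (not the FE design figures).
t = 90 * 86400.0
dT_1m = line_source_dT(1.0, t, 300.0, 2.0, 9e-7)
dT_3m = line_source_dT(3.0, t, 300.0, 2.0, 9e-7)
```

The coupled THM models validated against the FE sensor data add what this conduction-only sketch omits: pore-pressure generation, anisotropy, and the buffer's evolving saturation.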
Acha, Robert; Brey, Richard; Capello, Kevin
2013-02-01
A torso phantom was developed by the Lawrence Livermore National Laboratory (LLNL) that serves as a standard for intercomparison and intercalibration of detector systems used to measure low-energy photons from radionuclides, such as americium deposited in the lungs. DICOM images of the second-generation Human Monitoring Laboratory-Lawrence Livermore National Laboratory (HML-LLNL) torso phantom were segmented and converted into three-dimensional (3D) voxel phantoms to simulate the response of high purity germanium (HPGe) detector systems, as found in the HML new lung counter, using a Monte Carlo technique. The photon energies of interest in this study were 17.5, 26.4, 45.4, 59.5, 122, 244, and 344 keV. The detection efficiencies at these photon energies were predicted for different chest wall thicknesses (1.49 to 6.35 cm) and compared to measured values obtained with lungs containing 241Am (34.8 kBq) and 152Eu (10.4 kBq). It was observed that no statistically significant differences exist at the 95% confidence level between the mean values of simulated and measured detection efficiencies. Comparisons between the simulated and measured detection efficiencies reveal a variation of 20% at 17.5 keV and 1% at 59.5 keV. It was found that small changes in the formulation of the tissue substitute material caused no significant change in the outcome of Monte Carlo simulations.
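Why the chest wall thickness matters so much more at 17.5 keV than at 59.5 keV can be sketched with just the tissue-attenuation factor exp(-µt) of the counting efficiency. The attenuation coefficients below are placeholders chosen for illustration, not reference values; real work takes µ(E) from tabulated photon attenuation data:

```python
import math

# Illustrative linear attenuation coefficients for soft tissue (cm^-1);
# treat these two numbers as assumptions, not reference values.
MU_TISSUE = {17.5: 1.1, 59.5: 0.21}

def chest_wall_transmission(energy_keV: float, thickness_cm: float) -> float:
    """The exp(-mu t) factor of the counting efficiency: the fraction of
    photons of the given energy that survive the overlying tissue."""
    return math.exp(-MU_TISSUE[energy_keV] * thickness_cm)

# Chest wall thicknesses spanning the range quoted in the study:
thin, thick = 1.49, 6.35
drop_low = chest_wall_transmission(17.5, thick) / chest_wall_transmission(17.5, thin)
drop_high = chest_wall_transmission(59.5, thick) / chest_wall_transmission(59.5, thin)
```

The low-energy efficiency collapses by orders of magnitude across the thickness range while the 59.5 keV line only loses a modest factor, which is consistent with the larger simulation-to-measurement variation reported at 17.5 keV.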
Diffusion and retention experiment at the Mont Terri underground rock laboratory in St. Ursanne
International Nuclear Information System (INIS)
Leupin, O.X.; Wersin, P.; Gimmi, Th.; Van Loon, L.; Eikenberg, J.; Baeyens, B.; Soler, J.M.; Dewonck, S.; Wittebroodt, C.; Samper, J.; Yi, S.; Naves, A.
2010-01-01
Document available in extended abstract form only. Because of their favourable hydraulic and retention properties that limit the migration of radionuclides, indurated clays are being considered as potential host rocks for radioactive waste disposal. Migration of radionuclides by diffusion and retention is thus one of the main concerns for safety assessment and is therefore carefully investigated at different scales. The transfer of dispersed sorption batch and diffusion data from laboratory experiments to the field scale is, however, not always straightforward. Thus, combined sorption and diffusion experiments at both laboratory and field scale are instrumental for a critical verification of the applicability of such sorption and diffusion data. The present migration field experiment 'DR' (Diffusion and Retention experiment) at the Mont Terri Rock Laboratory (Switzerland) is the continuation of a series of successful diffusion experiments. The design is based on these previous diffusion experiments and has been extended to two diffusion chambers in a single borehole drilled perpendicular to the bedding plane. The radionuclides were injected as a pulse into both the upper and lower loops, where artificial pore water is circulating. The injected tracers were tritium, iodide, bromide, sodium-22, strontium-85 and caesium (stable) for one diffusion chamber, and deuterium, caesium-137, barium-133, cobalt-60, europium-152, selenium (stable) and selenium-75 for the other. Their decrease in the circulation fluid - as they diffuse into the clay - is continuously monitored by online γ-detection and regular sampling. The goals are fourfold: (i) obtain diffusion and retention data for moderately to strongly sorbing tracers and verify the corresponding data obtained on small-scale laboratory samples, (ii) improve diffusion data for the rock anisotropy, (iii) quantify effects of the borehole-disturbed zone for non-reactive tracers and (iv) improve data for long-term diffusion.
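The shape of such a tracer front can be sketched with the textbook 1D solution for diffusion from a constant-concentration borehole wall into a semi-infinite half-space, C/C0 = erfc(x/2√(Da·t)). The apparent diffusion coefficient below is an illustrative assumption for a non-sorbing tracer perpendicular to bedding; the real coefficients are what the DR data constrain:

```python
import math

def tracer_profile(x_m: float, t_s: float, Da_m2_s: float) -> float:
    """Relative concentration C/C0 at depth x for diffusion from a
    constant-concentration borehole wall into a semi-infinite clay
    half-space: C/C0 = erfc(x / (2 sqrt(Da t)))."""
    return math.erfc(x_m / (2.0 * math.sqrt(Da_m2_s * t_s)))

# Illustrative Da only; depths 0-10 cm after one year of diffusion.
YEAR_S = 3.156e7
profile = [tracer_profile(x_cm / 100.0, YEAR_S, 2e-11) for x_cm in range(11)]
```

A sorbing tracer has its Da reduced by a retardation factor, which compresses this profile toward the borehole wall, consistent with the much shorter penetration expected for the strongly sorbing tracers in the experiment.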
Energy Technology Data Exchange (ETDEWEB)
Preston, M.F. [Lund University, SE-221 00 Lund (Sweden); Myers, L.S. [Duke University, Durham, NC 27708 (United States); Annand, J.R.M. [University of Glasgow, Glasgow G12 8QQ, Scotland (United Kingdom); Fissum, K.G., E-mail: kevin.fissum@nuclear.lu.se [Lund University, SE-221 00 Lund (Sweden); Hansen, K.; Isaksson, L. [MAX IV Laboratory, Lund University, SE-221 00 Lund (Sweden); Jebali, R. [Arktis Radiation Detectors Limited, 8045 Zürich (Switzerland); Lundin, M. [MAX IV Laboratory, Lund University, SE-221 00 Lund (Sweden)
2014-04-21
Rate-dependent effects in the electronics used to instrument the tagger focal plane at the MAX IV Laboratory were recently investigated using the novel approach of Monte Carlo simulation to allow for normalization of high-rate experimental data acquired with single-hit time-to-digital converters (TDCs). The instrumentation of the tagger focal plane has now been expanded to include multi-hit TDCs. The agreement between results obtained from data taken using single-hit and multi-hit TDCs demonstrates a thorough understanding of the behavior of the detector system.
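The rate-dependent loss that such a normalization must correct for can be illustrated with a toy Poisson model (the window length and rate below are made-up illustrations, not MAX IV parameters): a single-hit TDC records at most one hit per window, so its occupancy saturates at high rate, while a multi-hit TDC tracks the true mean hit count.

```python
import math
import random

def tdc_occupancies(rate, window, n_windows=100000, seed=1):
    """Toy Monte Carlo of focal-plane hits arriving as a Poisson process.
    Returns (single, multi):
      single - fraction of windows in which a single-hit TDC fires
               (expected: 1 - exp(-rate*window))
      multi  - mean hits per window seen by a multi-hit TDC
               (expected: rate*window)
    """
    rng = random.Random(seed)
    multi_hits = 0
    single_fired = 0
    for _ in range(n_windows):
        n = 0
        t = rng.expovariate(rate)      # exponential inter-arrival times
        while t < window:
            n += 1
            t += rng.expovariate(rate)
        multi_hits += n
        if n > 0:
            single_fired += 1          # single-hit TDC keeps only the first hit
    return single_fired / n_windows, multi_hits / n_windows
```

At a mean occupancy of 0.5 hits per window, the single-hit TDC already under-reports by roughly 20% (1 - e^-0.5 ≈ 0.39 versus 0.5), which is why a simulation-based correction is needed before single-hit and multi-hit data can be compared.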
International Nuclear Information System (INIS)
Preston, M.F.; Myers, L.S.; Annand, J.R.M.; Fissum, K.G.; Hansen, K.; Isaksson, L.; Jebali, R.; Lundin, M.
2014-01-01
Rate-dependent effects in the electronics used to instrument the tagger focal plane at the MAX IV Laboratory were recently investigated using the novel approach of Monte Carlo simulation to allow for normalization of high-rate experimental data acquired with single-hit time-to-digital converters (TDCs). The instrumentation of the tagger focal plane has now been expanded to include multi-hit TDCs. The agreement between results obtained from data taken using single-hit and multi-hit TDCs demonstrates a thorough understanding of the behavior of the detector system.
David Rosenthal’s Tirant lo Blanc turns 30
Directory of Open Access Journals (Sweden)
Jan Reinhart
2014-12-01
The groundbreaking English-language translation of Tirant lo Blanc by New York poet and academic David Rosenthal remains dominant three decades after its initial, and celebrated, release. Rosenthal's controversially fluid and concise rendering of the Valencian classic survived a serious challenge 20 years ago from a more literal version by a well-meaning amateur translator and journeyman academic, backed by a leading U.S.-based Catalan scholar. The article reviews the controversy and compares the two versions, adding comments from some of the key critics.
Energy Technology Data Exchange (ETDEWEB)
Vinsot, A.; Lundy, M. [Agence Nationale pour la Gestion des Déchets Radioactifs ANDRA, Meuse Haute-Marne Center, Bure (France); Appelo, C.A.J. [Dr C.A.J. Appelo, Hydrochemical Consultant, Amsterdam (Netherlands); and others
2017-04-15
Two experiments were installed at Mont Terri in 2004 and 2009 that allowed gas circulation within a borehole at a pressure between 1 and 2 bar. These experiments made it possible to observe the natural gases initially dissolved in the pore-water degassing into the borehole and to monitor the evolution of their content in the borehole over several years. They also allowed inert (He, Ne) and reactive (H{sub 2}) gases to be injected into the borehole with the aim of either determining their diffusion properties in the rock pore-water or evaluating their removal reaction kinetics. The natural gases identified were CO{sub 2}, light alkanes, He and, most importantly, N{sub 2}. The natural concentrations of four gases in Opalinus Clay pore-water were evaluated at the experiment location: N{sub 2} 2.2 mmol/L ± 25%, CH{sub 4} 0.30 mmol/L ± 25%, C{sub 2}H{sub 6} 0.023 mmol/L ± 25%, C{sub 3}H{sub 8} 0.012 mmol/L ± 25%. Retention properties of methane, ethane and propane were estimated. Ne injection tests helped to characterize the rock diffusion properties with respect to dissolved inert gases. These experimental results are highly relevant to evaluating how the fluid composition could evolve in the drifts of a radioactive waste disposal facility. (authors)
International Nuclear Information System (INIS)
Fernandez, A. M.; Sanchez-Ledesma, D. M.; Tournassat, C.; Melon, A.; Gaucher, E.; Astudillo, E.; Vinsot, A.
2013-01-01
Knowledge of the pore-water chemistry in clay rock formations plays an important role in determining radionuclide migration in the context of nuclear waste disposal. Among the different in situ and ex situ techniques for pore-water sampling in clay sediments and soils, the squeezing technique dates back 115 years. Although different studies have addressed the reliability and representativeness of squeezed pore waters, most of them were performed on high-porosity, high-water-content and unconsolidated clay sediments. Very few tackled the analysis of squeezed pore water from low-porosity, low-water-content and highly consolidated clay rocks. In this work, a specially designed and fabricated one-dimensional compression cell with two-directional fluid flow was used to extract and analyse the pore-water composition of Opalinus Clay core samples from Mont Terri (Switzerland). The reproducibility of the technique is good, and no ionic ultrafiltration, chemical fractionation or anion exclusion was found in the range of pressures analysed: 70-200 MPa. Pore waters extracted in this range of pressures do not decrease in concentration, as would be expected if the free pore water were diluted by mixing with the outer layers of double-layer water (Donnan water). A threshold (safety) squeezing pressure of 175 MPa was established for avoiding membrane effects (ion filtering, anion exclusion, etc.) from clay particles induced by increasing pressures. Besides, the pore waters extracted at these pressures are representative of the Opalinus Clay formation, based on a direct comparison against in situ collected borehole waters. (Author)
Energy Technology Data Exchange (ETDEWEB)
Fernandez, A. M.; Sanchez-Ledesma, D. M.; Tournassat, C.; Melon, A.; Gaucher, E.; Astudillo, E.; Vinsot, A.
2013-07-01
Knowledge of the pore-water chemistry in clay rock formations plays an important role in determining radionuclide migration in the context of nuclear waste disposal. Among the different in situ and ex situ techniques for pore-water sampling in clay sediments and soils, the squeezing technique dates back 115 years. Although different studies have addressed the reliability and representativeness of squeezed pore waters, most of them were performed on high-porosity, high-water-content and unconsolidated clay sediments. Very few tackled the analysis of squeezed pore water from low-porosity, low-water-content and highly consolidated clay rocks. In this work, a specially designed and fabricated one-dimensional compression cell with two-directional fluid flow was used to extract and analyse the pore-water composition of Opalinus Clay core samples from Mont Terri (Switzerland). The reproducibility of the technique is good, and no ionic ultrafiltration, chemical fractionation or anion exclusion was found in the range of pressures analysed: 70-200 MPa. Pore waters extracted in this range of pressures do not decrease in concentration, as would be expected if the free pore water were diluted by mixing with the outer layers of double-layer water (Donnan water). A threshold (safety) squeezing pressure of 175 MPa was established for avoiding membrane effects (ion filtering, anion exclusion, etc.) from clay particles induced by increasing pressures. Besides, the pore waters extracted at these pressures are representative of the Opalinus Clay formation, based on a direct comparison against in situ collected borehole waters. (Author)
Energy Technology Data Exchange (ETDEWEB)
Aeberhardt, A [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires
1958-07-01
From the study of the means of transport of cerium in the blood of various laboratory animals, after intravenous injection of carrier-free {sup 144}Ce-{sup 144}Pr, we have been able to demonstrate the part played by the white cells in the transport of this fission product during its passage in the blood. This observation has led to an in vitro study of the modes of cerium fixation on white cells, with a view to determining the possibilities of using this property for white-cell labelling, the methods used up to now not being entirely satisfactory. Using the method for the separation of the formed elements of the blood proposed by us in 1956, we have studied cerium fixation under various conditions: on suspensions of white cells from the rabbit, on a suspension of human white cells, and on the white cells in whole rabbit blood. (author)
International Nuclear Information System (INIS)
Wagner, John C.; Peplow, Douglas E.; Mosher, Scott W.; Evans, Thomas M.
2010-01-01
This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10^2-10^4), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
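The variance-reduction idea behind CADIS can be illustrated on a toy 1-D, purely absorbing, fixed-source problem (a sketch of the principle only, not the actual SCALE/ADVANTG implementation): sampling the source in proportion to the adjoint importance, carrying compensating weights, and using implicit capture drives the tally variance toward zero when the importance function is exact. The cross section, slab length and sample counts below are arbitrary assumptions.

```python
import math
import random

def slab_tally(sigma=2.0, L=3.0, n=100000, cadis=False, seed=7):
    """Tally the probability that a particle born uniformly in [0, L] of a
    purely absorbing slab reaches the detector face at x = L.
    cadis=False : analog Monte Carlo (uniform source, binary survival game).
    cadis=True  : source biased by the exact adjoint importance
                  exp(-sigma*(L-x)) with compensating weights, plus implicit
                  capture -- a toy version of the CADIS zero-variance limit.
    Returns (mean, variance) of the per-particle score."""
    rng = random.Random(seed)
    norm = (1.0 - math.exp(-sigma * L)) / sigma   # integral of the importance
    total, total_sq = 0.0, 0.0
    for _ in range(n):
        if cadis:
            # inverse-CDF sample of q(x) = exp(-sigma*(L-x)) / norm
            u = rng.random()
            x = L + math.log(math.exp(-sigma * L) + u * sigma * norm) / sigma
            w = (1.0 / L) / (math.exp(-sigma * (L - x)) / norm)  # pdf ratio
            s = w * math.exp(-sigma * (L - x))    # implicit capture: expected score
        else:
            x = rng.random() * L                  # analog: uniform source position
            s = 1.0 if rng.random() < math.exp(-sigma * (L - x)) else 0.0
        total += s
        total_sq += s * s
    mean = total / n
    return mean, total_sq / n - mean * mean
```

Both estimators converge to the analytic answer (1 - e^(-sigma*L))/(sigma*L); with the exact importance the biased score w(x)*exp(-sigma*(L-x)) is the same constant for every particle, so its variance collapses to (numerical) zero. In the real codes the adjoint comes from an approximate Denovo calculation, so the variance is small but nonzero.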
International Nuclear Information System (INIS)
Wagner, John C.; Peplow, Douglas E.; Mosher, Scott W.; Evans, Thomas M.
2010-01-01
This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10^2-10^4), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
International Nuclear Information System (INIS)
Wagner, J.C.; Peplow, D.E.; Mosher, S.W.; Evans, T.M.
2010-01-01
This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10^2-10^4), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications. (author)
Energy Technology Data Exchange (ETDEWEB)
Campo B, X.; Mendez V, R.; Embid S, M. [Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas, Av. Complutense 40, 28040 Madrid (Spain); Vega C, H. R. [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Cipres No. 10, Fracc. La Penuela, 98060 Zacatecas (Mexico); Sanz G, J., E-mail: xandra.campo@ciemat.es [Universidad Nacional de Educacion a Distancia, Escuela Tecnica Superior de Ingenieros Industriales, C. Juan del Rosal 12, 28040 Madrid (Spain)
2014-08-15
The Neutron Standards Laboratory of CIEMAT in Spain is a brand-new irradiation facility with {sup 241}Am-Be (185 GBq) and {sup 252}Cf (5 GBq) calibrated neutron sources, which are stored in a water pool with a concrete cover. From this storage place an automated system is able to take the selected source and place it in the irradiation position, 4 m above ground level and in the geometrical center of the Irradiation Room of 9 m (length) x 7.5 m (width) x 8 m (height). For calibration or irradiation purposes, detectors or materials can be placed on a bench, but it is also possible to use the pool (1.0 m x 1.5 m and more than 1.0 m deep) for long-duration irradiations in thermal neutron fields. For this reason it is essential to characterize the pool itself in terms of neutron spectrum. In this document, the main features of the facility are presented, and the characterization of the storage pool in terms of neutron fluence rate and neutron spectrum has been carried out using simulations with the MCNPX-2.7.e code. The MCNPX-2.7.e model has been validated using experimental measurements outside the pool (Berthold LB6411). Inside the pool, the fluence rate decreases and the spectrum is thermalized with distance from the {sup 252}Cf source. This source predominates, and the effect of the {sup 241}Am-Be source on these magnitudes is not seen until positions closer than 20 cm from it. (author)
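The fall-off of fast-neutron fluence with distance from a point source in water can be roughly sketched with a removal-cross-section estimate; the source strength and removal coefficient below are generic textbook-style assumptions, not measured CIEMAT values, and the model ignores the thermalized component that the MCNPX calculation resolves.

```python
import math

def fast_fluence_rate(S, r, sigma_r=0.103):
    """Removal-cross-section sketch of the fast-neutron fluence rate
    (cm^-2 s^-1) at distance r (cm) from a point source of strength
    S (n/s) in water: inverse-square geometry times exponential removal.
    sigma_r ~ 0.103 cm^-1 is a commonly quoted removal coefficient for
    fission-like neutrons in water (an assumption, not a pool-specific value).
    """
    return S * math.exp(-sigma_r * r) / (4.0 * math.pi * r * r)
```

Doubling the distance from 10 cm to 20 cm reduces the fast fluence by a factor of 4*e^(sigma_r*10) ≈ 11, which is consistent with the qualitative picture of a rapidly thermalizing, rapidly attenuating field around each source.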
International Nuclear Information System (INIS)
Campo B, X.; Mendez V, R.; Embid S, M.; Vega C, H. R.; Sanz G, J.
2014-08-01
The Neutron Standards Laboratory of CIEMAT in Spain is a brand-new irradiation facility with 241Am-Be (185 GBq) and 252Cf (5 GBq) calibrated neutron sources, which are stored in a water pool with a concrete cover. From this storage place an automated system is able to take the selected source and place it in the irradiation position, 4 m above ground level and in the geometrical center of the Irradiation Room of 9 m (length) x 7.5 m (width) x 8 m (height). For calibration or irradiation purposes, detectors or materials can be placed on a bench, but it is also possible to use the pool (1.0 m x 1.5 m and more than 1.0 m deep) for long-duration irradiations in thermal neutron fields. For this reason it is essential to characterize the pool itself in terms of neutron spectrum. In this document, the main features of the facility are presented, and the characterization of the storage pool in terms of neutron fluence rate and neutron spectrum has been carried out using simulations with the MCNPX-2.7.e code. The MCNPX-2.7.e model has been validated using experimental measurements outside the pool (Berthold LB6411). Inside the pool, the fluence rate decreases and the spectrum is thermalized with distance from the 252Cf source. This source predominates, and the effect of the 241Am-Be source on these magnitudes is not seen until positions closer than 20 cm from it. (author)
Energy Technology Data Exchange (ETDEWEB)
Wieczorek, K. [Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) GmbH, Braunschweig (Germany); Gaus, I. [National Cooperative for the Disposal of Radioactive Waste (NAGRA), Wettingen (Switzerland); Mayor, J. C. [Empresa Nacional de Residuos Radiactivos SA (ENRESA), Madrid (Spain); and others
2017-04-15
Repository concepts in clay or crystalline rock involve bentonite-based buffer or seal systems to provide containment of the waste and limit advective flow. A thorough understanding of buffer and seal evolution is required to ensure the safety functions are fulfilled in the short and long term. Experiments at real or near-real scale that take into account the interaction with the host rock help to ensure the safety-relevant processes are identified and understood, and show that laboratory-scale findings can be extrapolated to repository scale. Three large-scale experiments on buffer and seal properties performed in recent years at the Mont Terri rock laboratory are presented in this paper: the 1:2-scale HE-E heater experiment, which is currently in operation, and the full-scale engineered barrier experiment and the Borehole Seal experiment, which were completed successfully in 2014 and 2012, respectively. All experiments faced considerable difficulties during installation, operation, evaluation or dismantling that required significant effort to overcome. The in situ experiments show that buffer and seal elements can be constructed meeting the expectations raised by small-scale testing. It was, however, also shown that interaction with the host rock caused additional effects in the buffer or seal that could not always be quantified or even anticipated from the experience of small-scale tests (such as re-saturation by pore-water from the rock, or interaction with the excavation damaged zone in terms of preferential flow or mechanical effects). This led to the conclusion that testing of the integral buffer/rock or seal/rock system is needed. (authors)
Patrick Blanc's hanging gardens / Urmas Grišakov
Grišakov, Urmas, 1942-2013
2010-01-01
The skyward-rising green walls of French botanist and garden designer Patrick Blanc today let us admire what is born of the collaboration of human creativity and knowledge. With his works the designer has proven that plants can also grow successfully in vertical arrangements, one above another. Patrick Blanc's website: www.verticalgardenpatrickblanc.com
Energy Technology Data Exchange (ETDEWEB)
Clauer, N. [Laboratoire d’Hydrologie et de Géochimie de Strasbourg (CNRS-UdS), Strasbourg (France); Techer, I. [Equipe Associée, Chrome, Université de Nîmes, Nîmes (France); Nussbaum, Ch. [Swiss Geological Survey, Federal Office of Topography Swisstopo, Wabern (Switzerland); Laurich, B. [Structural Geology, Tectonics and Geomechanics, RWTH Aachen University, Aachen (Germany); Laurich, B. [Federal Institute for Geosciences and Natural Resources BGR, Hannover (Germany)
2017-04-15
The present study reports on elemental and Sr isotopic analyses of calcite and associated celestite infillings of various microtectonic features collected mostly in the Main Fault of the Opalinus Clay at the Mont Terri rock laboratory. Based on a detailed microstructural description of veins, slickensides, scaly clay aggregates and gouges, the geochemical signatures of the infillings were compared to those of the leachates from undeformed Opalinus Clay, and to the calcite from veins crosscutting the Hauptrogenstein, Passwang and Staffelegg Formations above and below the Opalinus Clay. Vein calcite and celestite from the Main Fault yield identical {sup 87}Sr/{sup 86}Sr ratios that are also close to those recorded in the Opalinus Clay matrix inside the Main Fault, but different from those of the diffuse Opalinus Clay calcite outside the fault. These varying {sup 87}Sr/{sup 86}Sr ratios of the diffuse calcite evidence a lack of interaction between the associated connate waters and the flowing fluids characterized by a homogeneous Sr signature. The {sup 87}Sr/{sup 86}Sr homogeneity at 0.70774 ± 0.00001 (2σ) for the infillings of most microstructures in the Main Fault, as well as of veins from a nearby limestone layer and sediments around the Opalinus Clay, points to an 'infinite' homogeneous marine supply, whereas the gouge infillings apparently interacted with a chemically more complex fluid. According to the known regional paleogeographic evolution, two seawater supplies were inferred and documented in the Delémont Basin: either during the Priabonian (38-34 Ma ago) from the western Bresse graben, and/or during the Rupelian (34-28 Ma ago) from the northern Rhine Graben. The Rupelian seawater, which yields a mean {sup 87}Sr/{sup 86}Sr signature significantly higher than those of the microstructural infillings, seems not to be the appropriate source. Alternatively, Priabonian seawater yields a mean {sup 87}Sr/{sup 86}Sr ratio precisely matching that of the leachates from diffuse
Directory of Open Access Journals (Sweden)
Paul Bossart
2017-06-01
Repositories for deep geological disposal of radioactive waste rely on multi-barrier systems to isolate waste from the biosphere. A multi-barrier system typically comprises the natural geological barrier provided by the repository host rock - in our case the Opalinus Clay - and an engineered barrier system (EBS). The Swiss repository concept for spent fuel and vitrified high-level waste (HLW) consists of waste canisters, which are emplaced horizontally in the middle of an emplacement gallery and are separated from the gallery wall by granular backfill material (GBM). We describe here a selection of five in-situ experiments where characteristic hydro-mechanical (HM) and thermo-hydro-mechanical (THM) processes have been observed. The first example is a coupled HM and mine-by test where the evolution of the excavation damaged zone (EDZ) was monitored around a gallery in the Opalinus Clay (ED-B experiment). Measurements of pore-water pressures and convergences due to stress redistribution during excavation highlighted the HM behaviour. The same measurements were subsequently carried out in a heater test (HE-D), where we were able to characterise the Opalinus Clay in terms of its THM behaviour. These yielded detailed data to better understand the THM behaviours of the granular backfill and the natural host rock. For a presentation of the Swiss concept for HLW storage, we designed three demonstration experiments that were subsequently implemented in the Mont Terri rock laboratory: (1) the engineered barrier (EB) experiment, (2) the in-situ heater test on key THM processes and parameters (HE-E experiment), and (3) the full-scale emplacement (FE) experiment. The first demonstration experiment has been dismantled, while the last two are ongoing.
Energy Technology Data Exchange (ETDEWEB)
Necib, S. [Agence Nationale pour la Gestion des Déchets Radioactifs ANDRA, Meuse Haute-Marne, Center RD 960, Bure (France); Diomidis, N. [National Cooperative for the Disposal of Radioactive Waste (NAGRA), Wettingen (Switzerland); Keech, P. [Nuclear Waste Management Organisation NWMO, Toronto (Canada); Nakayama, M. [Japan Atomic Energy Agency JAEA, Horonobe-Cho (Japan)
2017-04-15
Carbon steel is widely considered as a candidate material for the construction of spent fuel and high-level waste disposal canisters. In order to investigate corrosion processes representative of the long term evolution of deep geological repositories, two in situ experiments are being conducted in the Mont Terri rock laboratory. The iron corrosion (IC) experiment, aims to measure the evolution of the instantaneous corrosion rate of carbon steel in contact with Opalinus Clay as a function of time, by using electrochemical impedance spectroscopy measurements. The Iron Corrosion in Bentonite (IC-A) experiment intends to determine the evolution of the average corrosion rate of carbon steel in contact with bentonite of different densities, by using gravimetric and surface analysis measurements, post exposure. Both experiments investigate the effect of microbial activity on corrosion. In the IC experiment, carbon steel showed a gradual decrease of the corrosion rate over a period of 7 years, which is consistent with the ongoing formation of protective corrosion products. Corrosion product layers composed of magnetite, mackinawite, hydroxychloride and siderite with some traces of oxidising species such as goethite were identified on the steel surface. Microbial investigations revealed thermophilic bacteria (sulphate and thiosulphate reducing bacteria) at the metal surface in low concentrations. In the IC-A experiment, carbon steel samples in direct contact with bentonite exhibited corrosion rates in the range of 2 µm/year after 20 months of exposure, in agreement with measurements in absence of microbes. Microstructural and chemical characterisation of the samples identified a complex corrosion product consisting mainly of magnetite. Microbial investigations confirmed the limited viability of microbes in highly compacted bentonite. (authors)
International Nuclear Information System (INIS)
Krug, St.; Shao, H.; Hesser, J.; Nowak, T.; Kunz, H.; Vietor, T.
2010-01-01
Document available in extended abstract form only. The Mont Terri rock laboratory was extended from mid-October 2007 to the end of 2008 with the goal of allowing the project partners to continue their cooperative research in the long term. The extension of the underground laboratory by the excavation of an additional 165-metre-long access tunnel (Gallery 08) with four niches was taken as an opportunity to conduct an instrumented mine-by test in one of the niches (Niche 2/Niche MB). The measurements during the bedding-parallel excavation provided a large amount of data as a basis for understanding the hydro-mechanically (HM) coupled behaviour of Opalinus Clay around the excavated niche. BGR was involved in the in-situ investigations (seismic measurements) as a member of the experiment team consisting of five organisations (incl. NAGRA, ANDRA, GRS, Obayashi). An important issue for BGR is the application of the numerical code RockFlow (RF) for HM coupled simulations in order to understand the behaviour of Opalinus Clay, using the measured data for validation. Under the management of NAGRA, a blind prediction was carried out by a group of modellers belonging to some of the experiment team organisations. After a first comparison between the numerical results of the different HM coupled models at the prediction meeting of the teams in June 2009, the measurement data were provided by NAGRA in order to validate the numerical models. Basically, the model predictions have already shown the correct tendencies and ranges of the observed deformation and pore-water pressure evolution, apart from some under- or overestimations. The future RF validation results, after some slight parameter adjustments, are intended to be presented in the paper. The excavation of Niche 2 was done from 13 October to 7 November 2008 at a constant excavation rate of 1.30 m per day. The orientation of the niche follows the bedding strike, which amounts to 60°. The bedding planes have an average dip of
International Nuclear Information System (INIS)
Soran, P.D.; Seamon, R.E.
1980-05-01
Graphs of all neutron cross sections and photon production cross sections on the Recommended Monte Carlo Cross Section (RMCCS) library have been plotted along with local neutron heating numbers. Values for ν-bar, the average number of neutrons per fission, are also given
International Nuclear Information System (INIS)
Seamon, R.E.; Soran, P.D.
1980-06-01
Graphs of all neutron cross sections and photon production cross sections on the Alternate Monte Carlo Cross Section (AMCCS) library have been plotted along with local neutron heating numbers. The values of ν-bar, the average number of neutrons per fission, are also plotted for appropriate isotopes
International Nuclear Information System (INIS)
Marschall, P.; Giger, S.; De La Vassière, R.
2017-01-01
The excavation damaged zone (EDZ) around the backfilled underground structures of a geological repository represents a release path for radionuclides, which needs to be addressed in the assessment of long-term safety. Additionally, the EDZ may form a highly efficient escape route for corrosion and degradation gases, thus limiting the gas overpressures in the backfilled repository structures. The efficiency of this release path depends not only on the shape and extent of the EDZ, but also on the self-sealing capacity of the host rock formation and the prevailing state conditions, such as in situ stresses and pore pressure. The hydro-mechanical and chemico-osmotic phenomena associated with the formation and temporal evolution of the EDZ are complex, thus precluding a detailed representation of the EDZ in conventional modelling tools for safety assessment. Therefore, simplified EDZ models, able to mimic the safety-relevant functional features of the EDZ in a traceable manner are required. In the framework of the Mont Terri Project, a versatile modelling approach has been developed for the simulation of flow and transport processes along the EDZ with the goal of capturing the evolution of hydraulic significance of the EDZ after closure of the backfilled underground structures. The approach draws on both empirical evidence and experimental data, collected in the niches and tunnels of the Mont Terri rock laboratory. The model was benchmarked with a data set from an in situ self-sealing experiment at the Mont Terri rock laboratory. This paper summarises the outcomes of the benchmark exercise that comprises relevant empirical evidence, experimental data bases and the conceptual framework for modelling the evolution of the hydraulic significance of the EDZ around a backfilled tunnel section during the entire re-saturation phase. (authors)
Pan, Qiu-Hong; Chen, Fang; Zhu, Bao-Qing; Ma, Li-Yan; Li, Li; Li, Jing-Ming
2012-04-01
The pleasantly fruity and floral 2-phenylethanol is a dominant aroma compound in post-ripening 'Vidal blanc' grapes. However, to date little has been reported about its synthetic pathway in grapevine. In the present study, a full-length cDNA of VvAADC (encoding aromatic amino acid decarboxylase) was first cloned from the berries of 'Vidal blanc', an interspecific hybrid variety of Vitis vinifera × Vitis riparia. This sequence encodes a complete open reading frame of 482 amino acids with a calculated molecular mass of 54 kDa and an isoelectric point (pI) of 5.73. The deduced amino acid sequence shared about 79% identity with that of aromatic L-amino acid decarboxylases (AADCs) from tomato. Real-time PCR analysis indicated that VvAADC transcript abundance showed a small peak at 110 days after full bloom and then a continuous increase during the berry post-ripening stage, which was consistent with the accumulation of 2-phenylethanol but did not correspond to the trends of two potential intermediates, phenethylamine and 2-phenylacetaldehyde. Furthermore, phenylalanine still exhibited a continuous increase even in the post-ripening period. It is thus suggested that a 2-phenylethanol biosynthetic pathway mediated by AADC exists in grape berries, but that it possibly contributes little to the considerable accumulation of 2-phenylethanol in post-ripening 'Vidal blanc' grapes.
Classification of Argentinean Sauvignon blanc wines by UV spectroscopy and chemometric methods.
Azcarate, Silvana Mariela; Cantarelli, Miguel Ángel; Pellerano, Roberto Gerardo; Marchevsky, Eduardo Jorge; Camiña, José Manuel
2013-03-01
Argentina is an important worldwide wine producer. In this country, several provinces produce recognizable Sauvignon blanc wines: Neuquén, Río Negro, Mendoza, and San Juan. The analysis of the provenance of these white wines is complex and requires the use of expensive and time-consuming techniques. For this reason, this work discusses the determination of the provenance of Argentinean Sauvignon blanc wines by UV spectroscopy and chemometric methods, such as principal component analysis (PCA), cluster analysis (CA), linear discriminant analysis (LDA), and partial least squares discriminant analysis (PLS-DA). The proposed method requires low-cost equipment and short analysis times in comparison with other techniques. The classification results agree very well with the known geographical origins of the Sauvignon blanc wines. This manuscript describes a method to determine the geographical origin of Sauvignon wines from Argentina. The main advantage of this method is the use of inexpensive techniques, such as UV-Vis spectroscopy. © 2013 Institute of Food Technologists®
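The chemometric workflow this abstract describes (dimensionality reduction of collinear spectral variables followed by discriminant classification) can be sketched as follows. This is a minimal illustration on synthetic stand-in spectra and made-up province labels, not the paper's data or exact method.

```python
# Hypothetical sketch: classifying white-wine UV spectra by province with
# PCA followed by LDA. All spectra below are synthetic stand-ins.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_class, n_wavelengths = 20, 150   # e.g. absorbances sampled over the UV range
provinces = ["Neuquen", "Rio Negro", "Mendoza", "San Juan"]

X, y = [], []
for i, prov in enumerate(provinces):
    # each "province" gets a slightly shifted absorbance band plus noise
    base = np.exp(-((np.arange(n_wavelengths) - 40 - 8 * i) ** 2) / 500.0)
    X.append(base + 0.05 * rng.standard_normal((n_per_class, n_wavelengths)))
    y += [prov] * n_per_class
X = np.vstack(X)

# Compress the collinear spectral variables with PCA, then discriminate with LDA.
model = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
acc = cross_val_score(model, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```

PCA before LDA is the usual remedy for spectra whose hundreds of wavelengths far outnumber the samples; LDA alone would be ill-conditioned on the raw variables.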
International Nuclear Information System (INIS)
Rothfuchs, Tilmann; Czaikowski, Oliver; Hartwig, Lothar; Hellwald, Karsten; Komischke, Michael; Miehe, Ruediger; Zhang, Chun-Liang
2012-10-01
Several years ago, GRS performed laboratory investigations on the suitability of clay/mineral mixtures as optimized sealing materials in underground repositories for radioactive wastes /JOC 00/ /MIE 03/. The investigations yielded promising results, so plans were developed for testing the sealing properties of those materials under representative in-situ conditions in the Mont Terri Rock Laboratory (MTRL). The project was proposed to the ''Projekttraeger Wassertechnologie und Entsorgung (PtWT+E)'' and finally launched in January 2003 under the name SB project (''Self-sealing Barriers of Clay/Mineral Mixtures in a Clay Repository''). The project was divided into two parts: a pre-project running from January 2003 until June 2004 under contract No. 02E9713 /ROT 04/, and the main project running from January 2004 until June 2012 under contract No. 02E9894, originally with PtWT+E, later renamed PTKA-WTE. In the course of the pre-project it was decided to incorporate the SB main project, as a cost-shared action of PtWT+E and the European Commission (contract No. FI6W-CT-2004-508851), into the EC Integrated Project ESDRED (Engineering Studies and Demonstrations of Repository Designs), performed by 11 European project partners within the 6th European framework programme. The ESDRED project was terminated prior to the termination of the SB project. Interim results were reported by mid-2009 in two ESDRED reports /DEB 09/ /SEI 09/. This report presents the results achieved in the whole SB project, comprising the preceding laboratory investigations for the final selection of suitable material mixtures, the conduct of mock-up tests in the geotechnical laboratory of GRS in Braunschweig, and the execution of in-situ experiments at the MTRL.
Laurich, Ben; Urai, Janos L.; Vollmer, Christian; Nussbaum, Christophe
2018-01-01
We studied gouge from an upper-crustal, low-offset reverse fault in slightly overconsolidated claystone in the Mont Terri rock laboratory (Switzerland). The laboratory is designed to evaluate the suitability of the Opalinus Clay formation (OPA) to host a repository for radioactive waste. The gouge occurs in thin bands and lenses in the fault zone; it is darker in color and less fissile than the surrounding rock. It shows a matrix-based, P-foliated microfabric bordered and truncated by micrometer-thin shear zones consisting of aligned clay grains, as shown with broad-ion-beam scanning electron microscopy (BIB-SEM) and optical microscopy. Selected area electron diffraction based on transmission electron microscopy (TEM) shows evidence for randomly oriented nanometer-sized clay particles in the gouge matrix, surrounding larger elongated phyllosilicates with a strict P foliation. For the first time for the OPA, we report the occurrence of amorphous SiO2 grains within the gouge. Gouge has lower SEM-visible porosity and almost no calcite grains compared to the undeformed OPA. We present two hypotheses to explain the origin of gouge in the Main Fault: (i) authigenic generation consisting of fluid-mediated removal of calcite from the deforming OPA during shearing and (ii) clay smear consisting of mechanical smearing of calcite-poor (yet to be identified) source layers into the fault zone. Based on our data we prefer the first or a combination of both, but more work is needed to resolve this. Microstructures indicate a range of deformation mechanisms including solution-precipitation processes and a gouge that is weaker than the OPA because of the lower fraction of hard grains. For gouge, we infer a more rate-dependent frictional rheology than suggested from laboratory experiments on the undeformed OPA.
International Nuclear Information System (INIS)
Sergeant, C.; Vesvres, M.H.; Barsotti, V.; Stroes-Gascoyne, S.; Hamon, C.J.; Neble, S.; Shippers, A.; Le Marrec, C.; Vinsot, A.; Schwyn, B.
2010-01-01
Document available in extended abstract form only. Interest in deep subsurface microbial life has grown for very diverse reasons. One of them is that these environments are potential host rocks for radioactive waste repositories and that microorganisms may influence geochemical conditions around such sites and the migration properties of radionuclides. The Pore water Chemistry (PC) experiment was conducted at the Mont Terri RL to measure in situ the pH, Eh, and other geochemical parameters within the pore water of the Opalinus Clay formation. The borehole for PC was drilled with N2 under clean but not aseptic conditions, and filled immediately with synthetic pore water, which was circulated and monitored for five years. Soon after initiation of PC it was evident that microbial activity affected the borehole water geochemistry. Microbial analyses, including molecular biology and culturing methods, were performed repeatedly during PC (2003-2006), with detailed analysis of water and over-core clay upon termination in 2007. Results indicated the presence of heterotrophic aerobes and anaerobes, nitrate-reducers, iron-reducers, sulphate-reducers and Archaea, which together with geochemical data suggested a reducing environment with sulphate reduction in the water and adjacent clay. A black precipitate containing pyrite and a strong H2S smell confirmed the occurrence of sulphate reduction. Specific species identified (> 98% similarity) in PC water included Pseudomonas stutzeri, Bacillus licheniformis, and Desulfosporosinus sp., with similar and additional species (e.g., Trichococcus sp.; Kocuria sp.) in the clay. The origin of these (mostly anaerobic) species cannot be determined with certainty. Some species likely resulted from contamination, but others could be revived species indigenous to the Opalinus Clay. The microbial processes that occurred in PC are not representative of the processes in the undisturbed formation but illustrate the potential for microbial
Energy Technology Data Exchange (ETDEWEB)
Pearson, F.J., E-mail: fjpearson@gmail.com [Ground-Water Geochemistry, 5108 Trent Woods Dr., New Bern, NC 28562 (United States); Tournassat, Christophe; Gaucher, Eric C. [BRGM, B.P. 36009, 45060 Orleans Cedex 2 (France)
2011-06-15
Highlights:
- Equilibrium models of water-rock reactions in clay rocks are reviewed.
- Analyses of pore waters of the Opalinus Clay from boreholes in the Mont Terri URL, Switzerland, are tabulated.
- Results of modelling with various mineral controls are compared with the analyses.
- Best agreement results with calcite, dolomite and siderite or daphnite saturation, Na-K-Ca-Mg exchange and/or kaolinite, illite, quartz and celestite saturation.
- This approach allows calculation of the chemistry of pore water in clays too impermeable to yield water samples.
Abstract: The chemistry of pore water (particularly pH and ionic strength) is an important property of clay rocks being considered as host rocks for long-term storage of radioactive waste. Pore waters in clay-rich rocks generally cannot be sampled directly. Instead, their chemistry must be found using laboratory-measured properties of core samples and geochemical modelling. Many such measurements have been made on samples of the Opalinus Clay from the Mont Terri Underground Research Laboratory (URL). Several boreholes in that URL yielded water samples against which pore water models have been calibrated. Following a first synthesis report published in 2003, this paper presents the evolution of the modelling approaches developed within the Mont Terri URL scientific programmes through the last decade (1997-2009). Models are compared to the composition of waters sampled during dedicated borehole experiments. Reanalysis of the models, parameters and database enabled the principal shortcomings of the previous modelling efforts to be overcome. The inability to model the K concentrations correctly with the measured cation exchange properties was found to be due to the use of an inappropriate selectivity coefficient for Na-K exchange; the inability to reproduce the measured carbonate chemistry and pH of the pore waters using mineral-water reactions alone was corrected by considering clay mineral equilibria. Re
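The Na-K selectivity coefficient mentioned above enters a homovalent exchange mass-action law; under the Gaines-Thomas convention, K_Na/K = (β_K · a_Na)/(β_Na · a_K), where β are equivalent fractions on the exchanger and a are solution activities. A minimal sketch, with purely illustrative numbers rather than Mont Terri data, is:

```python
# Minimal sketch of homovalent Na-K exchange mass action (Gaines-Thomas
# convention). The selectivity coefficient and exchanger occupancies below
# are illustrative values, not measured Opalinus Clay parameters.

def k_activity(k_sel_na_k, beta_na, beta_k, a_na):
    """Solve K_sel = (beta_K * a_Na) / (beta_Na * a_K) for the K+ activity a_K."""
    return (beta_k * a_na) / (beta_na * k_sel_na_k)

# Illustrative inputs: 5% K+ and 50% Na+ occupancy on the exchanger,
# Na+ activity 0.2 mol/L, selectivity coefficient 10 (exchanger prefers K+).
a_k = k_activity(k_sel_na_k=10.0, beta_na=0.5, beta_k=0.05, a_na=0.2)
print(f"predicted K+ activity: {a_k:.4f} mol/L")  # -> 0.0020 mol/L
```

The sketch makes the abstract's point concrete: with the exchanger composition fixed by measurement, an inappropriate selectivity coefficient shifts the predicted pore-water K concentration in direct proportion.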
Quality of the waters of the Bandama-Blanc (Côte d'Ivoire) and of its ...
African Journals Online (AJOL)
The ecological quality of the waters of localities subjected to artisanal and clandestine gold mining along the Bandama-Blanc and its tributaries was studied between 1 and 15 April 2015. Phytoplankton samples were collected using a hydrological bottle and a plankton net, while the.
Directory of Open Access Journals (Sweden)
Florian Lacroux
2008-09-01
Significance and impact of the study: Vine nitrogen deficiency can negatively affect grape aroma potential. Soil nitrogen application can increase vine nitrogen status, but it has several drawbacks: it increases vigour and enhances Botrytis susceptibility. This study shows that foliar N and foliar N + S applications can improve vine nitrogen status and enhance aroma expression in Sauvignon blanc wines without the negative impact on vigour and Botrytis susceptibility. Although this study was carried out on Sauvignon blanc vines, it is likely that foliar N or foliar N + S applications will have similar effects on other grapevine varieties containing volatile thiols (Colombard, Riesling, Petit Manseng and Sémillon).
Energy Technology Data Exchange (ETDEWEB)
Ingeborg, G.; Alheid, H.J. [BGR - Federal Institute for Geosciences and Natural Resources, Hannover (Germany); Jockwerz, N. [Gesellschaft fur Anlagen- und Reaktorsicherheit (GRS) - Final Repository Research Division, Braunschweig (Germany); Mayor, J.C. [ENRESA - Empresa Nacional des Residuos Radioactivos, Madrid (Spain); Garcia-Siner, J.L. [AITEMIN -Asociacion para la Investigacion y Desarrollo Industrial de los Recursos Naturales, Madrid, (Spain); Alonso, E. [CIMNE - Centre Internacional de Metodos Numerics en Ingenyeria, UPC, Barcelona (Spain); Weber, H.P. [NAGRA - National Cooperative for the Disposal of Radioactive Waste, Wettingen (Switzerland); Plotze, M. [ETHZ - Swiss Federal Institute of Technology Zurich, IGT, Zurich, (Switzerland); Klubertanz, G. [COLENCO Power Engineering Ltd., Baden (Switzerland)
2005-07-01
The long-term safety of permanent underground repositories relies on a combination of engineered and geological barriers, so the interactions between the barriers in response to conditions expected in a high-level waste repository need to be identified and fully understood. Co-financed by the European Community, a heater experiment was realized on a pilot-plant scale at the underground laboratory in Mont Terri, Switzerland. The experiment was accompanied by an extensive programme of continuous monitoring, experimental investigations on-site as well as in laboratories, and numerical modelling of the coupled thermo-hydro-mechanical processes. Heat-producing waste was simulated by a heater element of 10 cm diameter, held at a constant surface temperature of 100 °C. The heater element (length 2 m) operated in a vertical borehole of 7 m depth, at 4 to 6 m depth. It was embedded in a geotechnical barrier of pre-compacted bentonite blocks (outer diameter 30 cm) that were irrigated for 35 months before the heating phase (duration 18 months) began. The host rock is a highly consolidated, stiff Jurassic claystone (Opalinus Clay). After the heating phase, the vicinity of the heater element was explored by seismic, hydraulic, and geotechnical tests to investigate whether the heating had induced changes in the Opalinus Clay. Additionally, rock-mechanics specimens were tested in the laboratory. Finally, the experiment was dismantled to provide laboratory specimens of post-heating buffer and host rock material. The bentonite blocks were thoroughly wetted at the time of the dismantling. The volume increase amounted to 5 to 9% and was thus below the bentonite potential. Geo-electrical measurements showed no decrease of the water content in the vicinity of the heater during the heating phase. The decreasing energy input to the heater element over time hence suggests that the bentonite dried, leading to a decrease of its thermal conductivity. Gas release during the heating period occurred
Energy Technology Data Exchange (ETDEWEB)
Berna, Amalia Z., E-mail: Amalia.Berna@csiro.au [CSIRO Entomology and Food Futures Flagship, PO Box 1700, Canberra, ACT 2601 (Australia); Trowell, Stephen [CSIRO Entomology and Food Futures Flagship, PO Box 1700, Canberra, ACT 2601 (Australia); Clifford, David [CSIRO Mathematical and Information Sciences, Locked Bag 17, North Ryde, NSW 1670 (Australia); Cynkar, Wies; Cozzolino, Daniel [The Australian Wine Research Institute, Waite Road, Urrbrae, PO Box 197, Adelaide, SA 5064 (Australia)
2009-08-26
Analysis of 34 Sauvignon Blanc wine samples from three different countries and six regions was performed by gas chromatography-mass spectrometry (GC-MS). Linear discriminant analysis (LDA) showed that there were three distinct clusters or classes of wines with different aroma profiles. Wines from the Loire region in France and Australian wines from Tasmania and Western Australia were found to have similar aroma patterns. New Zealand wines from the Marlborough region as well as the Australian ones from Victoria were grouped together based on the volatile composition. Wines from South Australia region formed one discrete class. Seven analytes, most of them esters, were found to be the relevant chemical compounds that characterized the classes. The grouping information obtained by GC-MS, was used to train metal oxide based electronic (MOS-Enose) and mass spectrometry based electronic (MS-Enose) noses. The combined use of solid phase microextraction (SPME) and ethanol removal prior to MOS-Enose analysis, allowed an average error of prediction of the regional origins of Sauvignon Blanc wines of 6.5% compared to 24% when static headspace (SHS) was employed. For MS-Enose, the misclassification rate was higher probably due to the requirement to delimit the m/z range considered.
Energy Technology Data Exchange (ETDEWEB)
Bleyen, N.; Smets, S. [Belgian Nuclear Research Centre SCK-CEN, Mol (Belgium); Small, J. [National Nuclear Laboratory NLL, Warrington (United Kingdom); and others
2017-04-15
At the Mont Terri rock laboratory (Switzerland), an in situ experiment is being carried out to examine the fate of nitrate leaching from nitrate-containing bituminized radioactive waste in a clay host rock for geological disposal. Such a release of nitrate may cause a geochemical perturbation of the clay, possibly affecting some of the favorable characteristics of the host rock. In this in situ experiment, the combined transport and reactivity of nitrate is studied inside anoxic and water-saturated chambers in a borehole in the Opalinus Clay. Continuous circulation of the solution from the borehole to the surface equipment allows regular sampling and online monitoring of its chemical composition. In this paper, in situ microbial nitrate reduction in the Opalinus Clay is discussed, in the presence or absence of additional electron donors relevant to the disposal concept and likely to be released from nitrate-containing bituminized radioactive waste: acetate (simulating bitumen degradation products) and H2 (originating from radiolysis and corrosion in the repository). The results of these tests indicate that, should microorganisms be active in the repository or the surrounding clay, microbial nitrate reduction can occur using electron donors naturally present in the clay (e.g. pyrite, dissolved organic matter). Nevertheless, non-reactive transport of nitrate in the clay is expected to be the main process. In contrast, when easily oxidizable electron donors are available (e.g. acetate and H2), microbial activity is strongly stimulated. In the presence of both H2 and acetate, nitrite and nitrogenous gases are predominantly produced, although some ammonium can also be formed when H2 is present. The reduction of nitrate in the clay could have an impact on the redox conditions in the pore water and might also lead to a gas-related perturbation of the host rock, depending on the electron donor used during denitrification
Directory of Open Access Journals (Sweden)
Z. G. Nakopoulou
2006-09-01
Must and wine samples of the Greek grape variety Roditis and the French variety Sauvignon blanc were analysed in order to obtain further knowledge of the protein profile of Roditis and to follow the evolution of grape proteins during the alcoholic fermentation of Roditis and Sauvignon blanc musts. For these purposes, protein samples were isolated from must and wine samples by ammonium sulphate precipitation and subjected to sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE). Eleven and nine bands with molecular weights between 11.1 and 64.4 kDa were detected on the electropherograms of Roditis and Sauvignon blanc must and wine samples, respectively, using Coomassie Brilliant Blue R-250 and silver staining methods. Two protein fractions of must and wine samples, with molecular weights of 64.4 kDa and 34.4 kDa, were identified as glycoproteins in the profile of the Greek grape variety according to periodic acid-silver staining, while only one must and wine fraction, of 64.4 kDa, reacted positively with this stain in the case of Sauvignon blanc. None of the low-molecular-weight protein fractions was found to be responsible for haze formation. A modified Bradford dye-binding procedure was used for the determination of the musts' and wines' soluble proteins. Free amino nitrogen and the contents of neutral and acidic polysaccharides in the protein fractions after chromatography on Sephadex G-25 were also analysed.
Lissandrello, E.
2006-01-01
Theories of globalisation, internationalisation, post-nationalism and trans-nationalism dismiss the concept of 'territoriality' within the paradigm of beyond-'nation-state' sovereignty. This work sustains a different idea: borders and territoriality are not just lost terms within
International Nuclear Information System (INIS)
Lesparre, N.
2011-01-01
Cosmic muons are produced in cascade processes following the interactions of cosmic rays with the atmosphere. Muons are fundamental particles with a mass about 200 times that of the electron. Their low interaction probability with matter allows them to cross the atmosphere and even the first kilometres of the Earth's crust. The muon flux is attenuated through a medium as a function of the quantity of matter crossed. The study of muon flux attenuation thus provides a direct measurement of rock opacity. This opacity corresponds to the density of the medium integrated along the muon path through the rock; the muon trajectory is indeed considered to be straight when crossing rock. It is then possible to perform geophysical tomography by setting up a sensor network around geological objects in order to determine the geometry of their internal structures. An underground muon flux model is developed herein from flux models estimated at the surface and a model of muon flux attenuation through rock. A feasibility equation for muon tomography is then established in order to determine the minimum data-acquisition time needed to distinguish heterogeneities. Four muon telescopes were built during this thesis and conditioned to withstand field installation, notably in tropical environments. These telescopes consist of two or three detection matrices made of scintillating bars coupled to photomultipliers. The detection capacity and angular resolution of the telescopes are modelled as a function of their geometrical configuration. A calibration method is also established in order to correct the signal for any distortion. Moreover, arrangements to reduce the background noise produced by low-energy particles are set up and evaluated. The development of this new tomographic method is then illustrated by two geophysical applications. The measurements realised in the Mont Terri underground laboratory (Switzerland) allowed us to benefit from stable acquisition conditions to
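The opacity defined above is the density integrated along the (assumed straight) muon path, conventionally quoted in hg/cm² (1 hg/cm² = 100 g/cm²). The toy sketch below computes it for a two-layer rock column; the density profile and the power-law flux attenuation are illustrative assumptions, not the models developed in the thesis.

```python
# Toy sketch of the opacity concept: density integrated along a straight
# muon path. Layer densities and the attenuation law are assumptions.
import numpy as np

def opacity_g_per_cm2(density_g_cm3, path_cm):
    """Trapezoidal integration of density (g/cm^3) along the path (cm)."""
    return float(np.sum(0.5 * (density_g_cm3[1:] + density_g_cm3[:-1])
                        * np.diff(path_cm)))

# 500 m of rock: 300 m at 2.65 g/cm^3 (limestone-like), then 200 m at
# 2.4 g/cm^3 (clay-like) -- purely illustrative values.
path = np.linspace(0.0, 5.0e4, 5001)            # sample points along the path, cm
density = np.where(path < 3.0e4, 2.65, 2.4)     # g/cm^3
rho = opacity_g_per_cm2(density, path)
print(f"opacity: {rho / 100.0:.0f} hg/cm^2")

# Assumed power-law attenuation of the integrated flux with opacity,
# for illustration only; real analyses use tabulated flux models.
flux_ratio = (rho / 1.0e5) ** -1.8
print(f"toy relative flux: {flux_ratio:.2f}")
```

A heterogeneity (e.g. a low-density void) reduces the opacity along the rays that cross it, raising the detected muon flux in those directions; inverting many such ray measurements yields the density tomogram.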
International Nuclear Information System (INIS)
Le Gonidec, Y.; Kergosien, B.; Schubnel, A.; Gueguen, Y.; Wassermann, J.; Gibert, D.; Sarout, J.; Nussbaum, C.
2010-01-01
Document available in extended abstract form only. In the Underground Research Laboratory (URL) at Mont Terri, a new gallery, G08, was planned to be excavated in 2008 following an original process: the excavation process allowed monitoring of the Excavation Damaged Zone (EDZ) with geophysical measurements designed and installed at the end face of the EZ-G04 gallery during the excavation from the other side, i.e. the end face of the EZ-G08 gallery. The objectives of the project concern spatio-temporal changes of the EDZ: among the methodological developments adapted for the EZ-G08 project to provide complementary information, acoustic experiments were prepared in horizontal boreholes to perform continuous acoustic monitoring of the EDZ. The acoustic measurements, performed on acoustic arrays of several receivers, were recorded during one month, following two main steps: - Active acoustic surveys: a source is introduced in a central borehole (BEZG5), allowing tomography experiments in the far field and in the near field, i.e. far from and close to BEZG5, respectively. - Acoustic emissions: during the excavation process, numerous acoustic emissions can be detected and associated with micro-seismic events due to rapid crack propagation generated by rock relaxation, or simply associated with the excavation process itself. From the tomography measurements, the acoustic wave velocity field can be estimated, with P- and S-wave velocities roughly equal to 2500-3500 m/s and 1500 m/s, respectively. The acoustic setup does not show variations of the P-wave velocity during the campaign, but rather spatial variations which could be associated with anisotropic elastic properties of the rock, with the maximum P-wave velocities close to the bedding plane. An original method based on a multifrequency approach reveals a frequency dependence of the velocity, a striking phenomenon since the wave velocity decreases with increasing frequency. This effect
Deed, Rebecca C; Fedrizzi, Bruno; Gardner, Richard C
2017-10-11
Sauvignon blanc wine, balanced by herbaceous and tropical aromas, is fermented at low temperatures (10-15 °C). Anecdotal accounts from winemakers suggest that cold fermentations produce and retain more "fruity" aroma compounds; nonetheless, studies have not confirmed why low temperatures are optimal for Sauvignon blanc. Thirty-two aroma compounds were quantitated from two Marlborough Sauvignon blanc juices fermented at 12.5 and 25 °C, using Saccharomyces cerevisiae strains EC1118, L-1528, M2, and X5. Fourteen compounds were responsible for driving differences in aroma chemistry. The 12.5 °C-fermented wines had lower 3-mercaptohexan-1-ol (3MH) and higher alcohols but increased fruity acetate esters. However, a sensory panel did not find a significant difference between fruitiness in 75% of wine pairs based on fermentation temperature, in spite of chemical differences. For wine pairs with significant differences (25%), the 25 °C-fermented wines were fruitier than the 12.5 °C-fermented wines, with high fruitiness associated with 3MH. We propose that the benefits of low fermentation temperatures are not derived from increased fruitiness but a better balance between fruitiness and greenness. Even so, since 75% of wines showed no significant difference, higher fermentation temperatures could be utilized without detriment, lowering costs for the wine industry.
International Nuclear Information System (INIS)
Thoeny, R.
2014-01-01
Clay rock formations are potential host rocks for deep geological disposal of nuclear waste. However, they exhibit relatively low strength and brittle failure behaviour. Construction of underground openings in clay rocks may lead to the formation of an excavation damage zone (EDZ) in the near-field area of the tunnel. This has to be taken into account during risk assessment for waste-disposal facilities. To investigate the geomechanical processes associated with the rock mass response of faulted Opalinus Clay during tunnelling, a full-scale ‘mine-by’ experiment was carried out at the Mont Terri Underground Rock Laboratory (URL) in Switzerland. In the ‘mine-by’ experiment, fracture network characteristics within the experimental section were characterized prior to and after excavation by integrating structural data from geological mapping of the excavation surfaces and from four pre- and post-excavation boreholes. The displacements and deformations in the surrounding rock mass were measured using geotechnical instrumentation, including borehole inclinometers, extensometers and deflectometers, together with high-resolution geodetic displacement measurements and laser scanning measurements on the excavation surfaces. Complementary data were gathered from structural and geophysical characterization of the surrounding rock mass. Geological and geophysical techniques were used to analyse the structural and kinematic relationships between the natural and excavation-induced fracture networks surrounding the ‘mine-by’ experiment. Integrating the results from seismic refraction tomography, borehole logging, and tunnel surface mapping revealed that spatial variations in fault frequency along the tunnel axis alter the rock mass deformability and strength. Failure mechanisms, orientation and frequency of excavation-induced fractures are significantly influenced by tectonic faults. On the side walls, extensional fracturing tangential to the tunnel circumference was the
Tirant lo Blanc o la pau no passa pels exèrcits
Directory of Open Access Journals (Sweden)
Antònia Carré
2000-11-01
Full Text Available Tirant lo Blanc o la pau no passa pels exèrcits is a metatext with essentially two objectives: (1) to allow, through an active and creative reading of Joanot Martorell's novel, an approach to the chivalric aspects of the Middle Ages and a deeper exploration of the concept of literature current at the time, based on the reconstruction and recreation of texts through the manipulation of works that formed part of the cultural heritage of the period; and (2) to enable, through the analysis of war, one of the main thematic axes of the novel, a (telematic) reflection on the values that underpin a plural and democratic society at the gates of the twenty-first century: sensitivity to injustices and abuses of power, tolerance, solidarity, respect for the environment and, in short, the valuing of a culture of peace.
Directory of Open Access Journals (Sweden)
Bin Tian
Full Text Available Thaumatin-like proteins (TLPs) and chitinases are the main constituents of the so-called protein hazes that can form in finished white wine and are a great concern to winemakers. These soluble pathogenesis-related (PR) proteins are extracted from grape berries; however, their distribution in different grape tissues is not well documented. In this study, proteins were separately extracted from the skin, pulp and seed of Sauvignon Blanc grapes, followed by trypsin digestion and analysis by liquid chromatography-electrospray ionization-tandem mass spectrometry (LC-ESI-MS/MS). The proteins identified comprised 75 from Sauvignon Blanc grape skin, 63 from grape pulp and 35 from grape seed, mostly functionally classified as associated with metabolism and energy. Some were present exclusively in specific grape tissues; for example, proteins involved in photosynthesis were only detected in grape skin and proteins involved in alcoholic fermentation were only detected in grape pulp. Moreover, the proteins identified in grape seed were less diverse than those identified in grape skin and pulp. TLPs and chitinases were identified in both Sauvignon Blanc grape skin and pulp, but not in the seed. For relative quantification of the PR proteins, the protein extracts of the grape tissues were first separated by HPLC and then analysed by SDS-PAGE. The results confirmed that the protein fractions eluting at 9.3 min and 19.2 min under the chromatographic conditions of this study corresponded to TLPs and chitinases, respectively. The relative quantification of TLPs and chitinases in the protein extracts was thus carried out by comparing the areas of the corresponding peaks against the area of a thaumatin standard. The results presented in this study clearly demonstrate the distribution of haze-forming PR proteins in grape berries, and the relative quantification of TLPs and chitinases could be applied for fast tracking of changes in PR proteins during grape growth and
The analog of Blanc's law for drift velocities of electrons in gas mixtures in weakly ionized plasma
International Nuclear Information System (INIS)
Chiflikian, R.V.
1995-01-01
The analog of Blanc's law for drift velocities of electrons in multicomponent gas mixtures in weakly ionized, spatially homogeneous, low-temperature plasma is derived. The approximate analytical expressions obtained are valid for average electron energies in the 1-5 eV range typical of the plasma conditions of low-pressure direct current (DC) discharges. The accuracy of these formulas is ±5%. An analytical criterion for the negative differential conductivity (NDC) of electrons in binary gas mixtures is obtained. NDC of electrons is predicted in He:Kr and He:Xe rare gas mixtures. copyright 1995 American Institute of Physics
Castañer i Vivas, Margarida
2011-01-01
This article presents the preparation process, the methodology, the reflections and the conclusions of the Llibre Blanc de l'Eurodistricte Català Transfronterer (White Paper of the Catalan Cross-Border Eurodistrict), drawn up by the Mission Opérationnelle Transfrontalière (MOT) and the University of Girona (UdG). The study aims to support the definition and emergence of a cross-border territorial project based on the reality of a territorial area shared between the department of the Pyrénées-Orientales and the comarques of the province of ...
Monte Carlo code development in Los Alamos
International Nuclear Information System (INIS)
Carter, L.L.; Cashwell, E.D.; Everett, C.J.; Forest, C.A.; Schrandt, R.G.; Taylor, W.M.; Thompson, W.L.; Turner, G.D.
1974-01-01
The present status of Monte Carlo code development at Los Alamos Scientific Laboratory is discussed. A brief summary is given of several of the most important neutron, photon, and electron transport codes. 17 references. (U.S.)
LAPP - Annecy le Vieux Particle Physics Laboratory. Activity report 1996-1997
International Nuclear Information System (INIS)
Colas, Jacques; Minard, Marie-Noelle; Decamp, Daniel; Marion, Frederique; Drancourt, Cyril; Riva, Vanessa; Berger, Nicole; Bombar, Claudine; Dromby, Gerard
2004-01-01
LAPP is a high energy physics laboratory founded in 1976 and is one of the 19 laboratories of IN2P3 (National Institute of Nuclear and Particle Physics), an institute of the CNRS (National Centre for Scientific Research). LAPP is a joint research facility of the University Savoie Mont Blanc (USMB) and the CNRS. Research carried out at LAPP aims at understanding the elementary particles and the fundamental interactions between them, as well as exploring the connections between the infinitesimally small and the unbelievably big. Among other subjects, LAPP teams try to understand the origin of the mass of the particles, the mystery of dark matter, and what happened to the anti-matter that was present in the early universe. LAPP researchers work in close contact with phenomenologist teams from LAPTh, a theory laboratory hosted in the same building. For several decades LAPP teams have also worked on understanding neutrinos, those almost massless elementary particles with amazing transformation properties. They took part in the design and realization of several experiments. Other LAPP teams collaborate in experiments studying signals from the cosmos. This document presents the activities of the laboratory during the years 1996-1997: 1 - Presentation of LAPP; 2 - Data acquisition experiments: e+e- annihilations at LEP (standard model and beyond the standard model - ALEPH, Study of hadronic final state events and Search for supersymmetric particles at the L3 detector); Neutrino experiments (neutrino oscillation search at 1 km from the Chooz reactors, search for neutrino oscillations at the CERN Wide Band neutrino beam - NOMAD); Quark-gluon plasma; Hadronic spectroscopy; 3 - Experiments under preparation (CP violation study - BABAR, Anti Matter Spectrometer in Space - AMS, Search for gravitational waves - VIRGO, Search for the Higgs boson - ATLAS and CMS); 4 - Technical departments; 5 - Theoretical physics; 6 - Other activities
Lezaeta, Alvaro; Bordeu, Edmundo; Næs, Tormod; Varela, Paula
2017-09-01
The aim of this study was to evaluate consumers' perception of a complex set of stimuli, namely aromatically enriched wines. Two consumer-based profiling methods were compared, run concurrently with overall liking measurements: projective mapping based on choice or preference (PM-C), a newly proposed method, and check-all-that-apply (CATA) questions with an ideal sample, a more established consumer-based method for product optimization. Reserve and regular bottlings of Sauvignon Blanc wines from three wineries were aromatically enriched with natural aromas collected by condensation during wine fermentation. A total of 144 consumers were enrolled in the study. The results revealed that both consumer-based methods highlighted the positive effect of aromatic enrichment on consumer perception and acceptance. However, PM-C generated a very detailed description in which consumers focused less on the sensory aspects and more on the usage, attitudes, and reasons behind their choices, providing a deeper understanding of the drivers of liking/disliking of enriched Sauvignon Blanc wines. Copyright © 2017 Elsevier Ltd. All rights reserved.
The MC21 Monte Carlo Transport Code
International Nuclear Information System (INIS)
Sutton TM; Donovan TJ; Trumbull TH; Dobreff PS; Caro E; Griesheimer DP; Tyburski LJ; Carpenter DC; Joo H
2007-01-01
MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or "tool of last resort" and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities
Test of Blanc's law for negative ion mobility in mixtures of SF6 with N2, O2 and air
International Nuclear Information System (INIS)
Hinojosa, G; Urquijo, J de
2003-01-01
We have measured the mobility of negative ion species drifting in mixtures of SF6 with N2, O2 and air, using the pulsed Townsend experiment. The conditions of the experiment, high pressures and low values of the reduced electric field E/N, ensured that the majority species drifting in the gap was SF6-, to which the present mobilities are ascribed. The extrapolated zero-field mobilities for several mixture compositions were used to test Blanc's law, successfully. Moreover, the measured zero-field SF6- mobilities in air could also be explained in terms of the measured mobilities of this ionic species in N2 and O2
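For reference, the law being tested has a simple harmonic-mean form (this is the standard statement of Blanc's law; the notation here is illustrative, not taken from the paper): for a gas mixture with mole fractions x_j, the zero-field mobility of a given ion satisfies

```latex
\frac{1}{K_{\mathrm{mix}}} = \sum_{j} \frac{x_j}{K_j},
```

where K_j is the mobility of the same ion in pure gas j at the same temperature and total gas number density.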
International Nuclear Information System (INIS)
Seeska, R.; Rutenberg, M.; Lux, K.H.
2012-01-01
Document available in extended abstract form only. Seven different boreholes in the Opalinus Clay formation of the Mont Terri Underground Rock Laboratory (URL Mont Terri) have been investigated by the Clausthal University of Technology (TUC) in cooperation, over time, with different partners, namely the National Cooperative for the Disposal of Radioactive Waste (NAGRA), the Federal Institute for Geosciences and Natural Resources (BGR), the Swiss Federal Institute of Technology Zurich (ETHZ) and the Swiss Federal Nuclear Safety Inspectorate (ENSI). The aim of the investigations was to gain a large amount of high-quality, significant information on rock mass behaviour that can be used to increase knowledge and improve understanding of the time-dependent load-bearing and deformation behaviour of Opalinus Clay, including pore water influences. For this purpose, an axial borehole camera and a three-arm calliper were used. High-quality information on the load-bearing and deformation behaviour of the investigated boreholes was generated by the measurement and monitoring techniques used in the research project. The recordings reveal great, and occasionally unexpected, differences in the load-bearing behaviour as well as in the hydro-mechanical behaviour of the observed boreholes. While most of the boreholes proved to be rather stable, with only partial failure of the borehole wall in some areas, a complete borehole wall collapse occurred in two of the observed boreholes. The differences in borehole wall stability, and also the differences between the appearances of the occurring failure mechanisms, are very likely due to the different orientations, the different locations within the URL Mont Terri, and the different facies the boreholes are located in. Figure 1 shows the time-dependent development of a borehole wall instability in one of the observed boreholes, in a borehole section where an increase of moisture could
LAPP - Annecy le Vieux Particle Physics Laboratory. Activity report 2002-2003
International Nuclear Information System (INIS)
Colas, Jacques; Minard, Marie-Noelle; Decamp, Daniel; Marion, Frederique; Drancourt, Cyril; Riva, Vanessa; Berger, Nicole; Bombar, Claudine; Dromby, Gerard
2004-01-01
LAPP is a high energy physics laboratory founded in 1976 and is one of the 19 laboratories of IN2P3 (National Institute of Nuclear and Particle Physics), an institute of the CNRS (National Centre for Scientific Research). LAPP is a joint research facility of the University Savoie Mont Blanc (USMB) and the CNRS. Research carried out at LAPP aims at understanding the elementary particles and the fundamental interactions between them, as well as exploring the connections between the infinitesimally small and the unbelievably big. Among other subjects, LAPP teams try to understand the origin of the mass of the particles, the mystery of dark matter, and what happened to the anti-matter that was present in the early universe. LAPP researchers work in close contact with phenomenologist teams from LAPTh, a theory laboratory hosted in the same building. For several decades LAPP teams have also worked on understanding neutrinos, those almost massless elementary particles with amazing transformation properties. They took part in the design and realization of several experiments. Other LAPP teams collaborate in experiments studying signals from the cosmos. This document presents the activities of the laboratory during the years 2002-2003: 1 - Presentation of LAPP; 2 - Experimental programs: Standard model and its extensions (accurate measurements and search for new particles, The end of the ALEPH and L3 LEP experiments, ATLAS experiment at LHC, CMS experiment at LHC); CP violation (BaBar experiment on the PEPII collider at SLAC, LHCb experiment); Neutrino physics (OPERA experiment on CERN's CNGS neutrino beam); Astro-particles (AMS experiment, EUSO project on the Columbus module of the International Space Station); Search for gravitational waves - Virgo experiment; 3 - Laboratory's know-how: Skills, Technical departments (Electronics, Computers, Mechanics); R and D - CLIC and Positrons; Valorisation and industrial relations; 4 - Laboratory operation: Administration and general services; Laboratory
Experience with the Monte Carlo Method
Energy Technology Data Exchange (ETDEWEB)
Hussein, E M.A. [Department of Mechanical Engineering University of New Brunswick, Fredericton, N.B., (Canada)
2007-06-15
Monte Carlo simulation of radiation transport provides a powerful research and design tool that resembles laboratory experiments in many respects. Moreover, Monte Carlo simulations can provide insight not attainable in the laboratory. However, the Monte Carlo method has its limitations, which if not taken into account can lead to misleading conclusions. This paper presents the experience of this author, over almost three decades, in the use of the Monte Carlo method for a variety of applications. Examples are shown of how the method was used to explore new ideas, as a parametric study and design-optimization tool, and to analyze experimental data. The consequences of not accounting in detail for detector response and for the scattering of radiation by surrounding structures are two of the examples presented to demonstrate the pitfall of condensed.
Monte Carlo Transport for Electron Thermal Transport
Chenhall, Jeffrey; Cao, Duc; Moses, Gregory
2015-11-01
The iSNB (implicit Schurtz-Nicolai-Busquet) multigroup electron thermal transport method of Cao et al. is adapted into a Monte Carlo transport method in order to better model the effects of non-local behavior. The end goal is a hybrid transport-diffusion method that combines Monte Carlo transport with discrete diffusion Monte Carlo (DDMC). The hybrid method will combine the efficiency of a diffusion method in short mean-free-path regions with the accuracy of a transport method in long mean-free-path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the method will be presented. This work was supported by Sandia National Laboratory - Albuquerque and the University of Rochester Laboratory for Laser Energetics.
International Nuclear Information System (INIS)
Wang, Y.; Van Brunt, R.J.
1997-01-01
The electron drift velocities and corresponding mean energies have been calculated numerically using an approximate two-term solution of the Boltzmann transport equation for Ar/N2 gas mixtures at electric field-to-gas density ratios (E/N) below 2.0x10^-20 V m^2 (20 Td) and for He/Kr mixtures at E/N below 5.0x10^-21 V m^2 (5.0 Td). The results are compared with predictions obtained from a method proposed by Chiflikian based on an "analog of Blanc's law" [Phys. Plasmas 2, 3902 (1995)]. Large differences are found between the results derived from the Blanc's-law method and those found here from solutions of the transport equation, indicating serious errors and limitations associated with the use of the Blanc's-law method to compute drift velocities in gas mixtures. copyright 1997 American Institute of Physics
Bourdet, Jean-Francois; And Others
1992-01-01
Four activities for the French language classroom are described. These include helping students discover comparative expressions by using advertisements; using horoscopes for teaching vocabulary; using a missing persons report as a source for intermediate to advanced level discussion, collective writing, and questions; and a video designed to…
Wang, Chunxiao; Liu, Yanlin
2013-04-01
The evolution of yeast species and Saccharomyces cerevisiae genotypes during spontaneous fermentations of Muscat blanc, from vines planted in 1957 in the Jingyang region of China, was followed in this study. Using a combination of colony morphology on Wallerstein Nutrient (WLN) medium, sequence analysis of the 26S rDNA D1/D2 domain and 5.8S-ITS-RFLP analysis, a total of 686 isolates were identified at the species level. The six species identified were S. cerevisiae, Hanseniaspora uvarum, Hanseniaspora opuntiae, Issatchenkia terricola, Pichia kudriavzevii (Issatchenkia orientalis) and Trichosporon coremiiforme. This is the first report of T. coremiiforme as an inhabitant of grape must. Three new colony morphologies on WLN medium and one new 5.8S-ITS-RFLP profile are described. Non-Saccharomyces species, predominantly H. opuntiae, were found in the early stages of fermentation. Subsequently, S. cerevisiae prevailed, followed by large numbers of P. kudriavzevii that dominated at the end of the fermentations. Six native genotypes of S. cerevisiae were determined by interdelta sequence analysis; genotypes III and IV were predominant. As a first step in exploring the untapped yeast resources of the region, this study is important for monitoring the yeast ecology in native fermentations and for screening indigenous yeasts that will produce wines with regional characteristics. Copyright © 2012 Elsevier Ltd. All rights reserved.
Šuklje, Katja; Antalick, Guillaume; Buica, Astrid; Langlois, Jennifer; Coetzee, Zelmari A; Gouot, Julia; Schmidtke, Leigh M; Deloire, Alain
2016-02-01
The aim of this study, performed on Sauvignon blanc clones SB11 and SB316, grafted on the same rootstock 101-14 Mgt (Vitis riparia × V. rupestris) and grown at two adjacent vineyards, was two-fold: (1) to study the wine chemical and sensory composition of both clones within an unaltered canopy; and (2) to determine the effect of defoliation (i.e. bunch microclimate) on wine chemical and sensory composition. Orthogonal projection to latent structures discriminant analysis (OPLS-DA) was applied to the concentration profiles of volatile compounds derived from gas chromatography-mass spectrometry data. The loading directions indicated that 3-isobutyl-2-methoxypyrazine (IBMP) discriminated the control treatments (shaded fruit zone) of both clones from the defoliation treatments (exposed fruit zone), whereas 3-sulfanylhexan-1-ol (3SH), 3-sulfanylhexyl acetate (3SHA), hexanol, hexyl hexanoate and some other esters discriminated the defoliated treatments from the controls. The OPLS-DA indicated the importance of IBMP, higher alcohol acetates and phenylethyl esters for discriminating clone SB11 from clone SB316, irrespective of the treatment. Defoliation in the fruit zone significantly decreased perceived greenness in clone SB11 and elevated fruitier aromas, whereas in clone SB316 the effect of defoliation on wine sensory perception was less noticeable despite the decrease in IBMP concentrations. These findings highlight the importance of clone selection and bunch microclimate for diversifying the wine styles produced. © 2015 Society of Chemical Industry.
Dunn, William L
2012-01-01
Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle proble
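The Buffon's needle problem mentioned in the abstract above is easy to sketch in code. The following is a minimal illustration (our own, not taken from the book) of the classic Monte Carlo estimate of pi, assuming the needle length l does not exceed the line spacing d:

```python
import math
import random

def buffon_pi(n_throws, needle_len=1.0, line_gap=1.0, seed=42):
    """Estimate pi with Buffon's needle: a needle of length l (l <= d),
    dropped on a floor ruled with parallel lines a distance d apart,
    crosses a line with probability 2*l / (pi*d)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_throws):
        centre = rng.uniform(0.0, line_gap / 2.0)   # distance to nearest line
        theta = rng.uniform(0.0, math.pi / 2.0)     # needle angle to the lines
        if centre <= (needle_len / 2.0) * math.sin(theta):
            hits += 1
    # invert P(cross) = 2*l / (pi*d)  =>  pi ~ 2*l*n / (d*hits)
    return 2.0 * needle_len * n_throws / (line_gap * hits)
```

The statistical error shrinks only as 1/sqrt(n), which is why the problem is a standard first illustration of Monte Carlo convergence.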
Directory of Open Access Journals (Sweden)
Bardenet Rémi
2013-07-01
Full Text Available Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that make it possible to compute these integrals numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among them rejection sampling, importance sampling and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretical justification of the algorithms as well as practical advice, trying to relate the two. We discuss the application of Monte Carlo in experimental physics, and point to landmarks in the literature for the curious reader.
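As a concrete illustration of one of the algorithms reviewed above, here is a minimal rejection-sampling sketch. The target and proposal are our own choices, not the review's: a half-normal target sampled through an Exp(1) proposal, with envelope constant M = sqrt(2e/pi) obtained by maximizing the target/proposal ratio (the maximum is at x = 1).

```python
import math
import random

def rejection_sample(target_pdf, draw_proposal, proposal_pdf, envelope_m, rng):
    """Draw one sample from target_pdf, given a proposal we can sample from
    and a constant M with target(x) <= M * proposal(x) for all x."""
    while True:
        x = draw_proposal(rng)
        # accept x with probability target(x) / (M * proposal(x))
        if rng.random() * envelope_m * proposal_pdf(x) <= target_pdf(x):
            return x

# Toy target: half-normal on x >= 0; proposal: Exp(1) via inverse CDF.
rng = random.Random(0)
half_normal = lambda x: math.sqrt(2.0 / math.pi) * math.exp(-0.5 * x * x)
exp_pdf = lambda x: math.exp(-x)
exp_draw = lambda r: -math.log(1.0 - r.random())
m_const = math.sqrt(2.0 * math.e / math.pi)
draws = [rejection_sample(half_normal, exp_draw, exp_pdf, m_const, rng)
         for _ in range(20000)]
sample_mean = sum(draws) / len(draws)   # exact half-normal mean: sqrt(2/pi)
```

The expected acceptance rate is 1/M, which is why rejection sampling becomes impractical when no tight envelope is available and importance sampling or MCMC take over.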
Murthy, K. P. N.
2001-01-01
An introduction to the basics of Monte Carlo is given. The topics covered include sample space, events, probabilities, random variables, mean, variance, covariance, characteristic functions, the Chebyshev inequality, the law of large numbers, the central limit theorem (stable distributions, the Levy distribution), random numbers (generation and testing), random sampling techniques (inversion, rejection, sampling from a Gaussian, Metropolis sampling), analogue Monte Carlo and importance sampling (exponential b...
Directory of Open Access Journals (Sweden)
Bin Tian
2017-07-01
Full Text Available Thaumatin-like proteins (TLPs) and chitinases are the two main groups of pathogenesis-related (PR) proteins found in wine that cause protein haze formation. Previous studies have found that phenolics are also involved in protein haze formation. In this study, Sauvignon Blanc grapes were harvested and processed in two vintages (2011 and 2012) by three different treatments: (1) hand harvesting with whole-bunch pressing (H-WB); (2) hand harvesting with destemming/crushing and 3 h skin contact (H-DC-3); and (3) machine harvesting with destemming/crushing and 3 h skin contact (M-DC-3). The juices were collected at three pressure levels (0.4 MPa, 0.8 MPa and 1.6 MPa), and some juices were fermented in 750 mL wine bottles to determine the bentonite requirement of the resulting wines. Results showed that juices of M-DC-3 had significantly lower concentrations of proteins, including PR proteins, compared to those of H-DC-3, likely due to the greater juice yield of M-DC-3 and to interactions between proteins and phenolics. Juices from the 0.8-1.6 MPa pressings and the resultant wines had the highest concentration of phenolics but the lowest concentration of TLPs. This supports the view that TLPs, being mainly present in grape pulp, are released at low pressure, while additional extraction of phenolics, largely present in the skin, occurs at higher pressing pressures. Wine protein stability tests showed a positive linear correlation between bentonite requirement and the concentration of chitinases, indicating the possibility of predicting bentonite requirement by quantification of chitinases. This study contributes to an improved understanding of the extraction of haze-forming PR proteins and phenolics that can influence the bentonite requirement for protein stabilization.
Shamsun-Noor, L.; Robin, Christophe; Guckert, Armand
1990-01-01
The behaviour of white clover (Trifolium repens L. cv Crau) was studied under water stress and after rehydration, in relation to potassium fertilization. The water deficit caused a substantial progressive decrease in leaf water potential and a rapid closure of the stomata. These responses were accompanied by a drop in photosynthetic activity and in symbiotic nitrogen fixation. In the presence of potassium, the decrease in the activity...
International Nuclear Information System (INIS)
Cramer, S.N.
1984-01-01
The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described
Variational Monte Carlo Technique
Indian Academy of Sciences (India)
Variational Monte Carlo Technique: Ground State Energies of Quantum Mechanical Systems. Sukanta Deb. General Article, Resonance – Journal of Science Education, Volume 19, Issue 8, August 2014, pp 713-739.
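The variational Monte Carlo technique named in the article above can be sketched in a few lines. This toy example is our own, not the article's code: Metropolis sampling of |psi|^2 for a 1D harmonic oscillator trial function psi(x) = exp(-alpha*x^2/2), whose local energy in units hbar = m = omega = 1 is E_L(x) = alpha/2 + x^2*(1 - alpha^2)/2.

```python
import math
import random

def vmc_energy(alpha, n_steps=40000, burn_in=5000, step=1.0, seed=1):
    """Variational Monte Carlo sketch for the 1D harmonic oscillator:
    Metropolis-sample x from |psi(x)|^2 with psi(x) = exp(-alpha*x^2/2),
    then average the local energy E_L(x) = alpha/2 + x^2*(1-alpha^2)/2."""
    rng = random.Random(seed)
    x = 0.0
    e_sum = 0.0
    for i in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # Metropolis acceptance with ratio |psi(x_new)|^2 / |psi(x)|^2
        if rng.random() < math.exp(-alpha * (x_new * x_new - x * x)):
            x = x_new
        if i >= burn_in:
            e_sum += alpha / 2.0 + x * x * (1.0 - alpha * alpha) / 2.0
    return e_sum / (n_steps - burn_in)
```

At alpha = 1 the trial function is the exact ground state, the local energy is constant, and the estimate is exactly E0 = 1/2; any other alpha gives a higher variational energy (e.g. the analytic value at alpha = 0.5 is 0.625), which is what one minimizes over alpha.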
Araujo, Leandro Dias; Vannevel, Sebastian; Buica, Astrid; Callerot, Suzanne; Fedrizzi, Bruno; Kilmartin, Paul A; du Toit, Wessel J
2017-08-01
Elemental sulfur is a fungicide traditionally used to control Powdery Mildew in the production of grapes. The presence of sulfur residues in grape juice has been associated with increased production of hydrogen sulfide during fermentation, which could take part in the formation of the varietal thiol 3-mercaptohexanol. This work examines whether elemental sulfur additions to Sauvignon blanc juice can increase the levels of sought-after varietal thiols. Initial trials were performed in South Africa and indicated a positive impact of sulfur on the levels of thiols. Further experiments were then carried out with New Zealand Sauvignon blanc and confirmed a positive relationship between elemental sulfur additions and wine varietal thiols. The formation of hydrogen sulfide was observed when the addition of elemental sulfur was made to clarified juice, along with an increase in further reductive sulfur compounds. When the addition of sulfur was made to pressed juice, prior to clarification, the production of reductive sulfur compounds was drastically decreased. Some mechanistic considerations are also presented, involving the reduction of sulfur to hydrogen sulfide prior to fermentation. Copyright © 2016. Published by Elsevier Ltd.
Monte Carlo codes and Monte Carlo simulator program
International Nuclear Information System (INIS)
Higuchi, Kenji; Asai, Kiyoshi; Suganuma, Masayuki.
1990-03-01
Four typical Monte Carlo codes, KENO-IV, MORSE, MCNP and VIM, have been vectorized on the VP-100 at the Computing Center, JAERI. The problems in vector processing of Monte Carlo codes on vector processors have become clear through this work. As a result, it is recognized that it is difficult to obtain good performance in vector processing of Monte Carlo codes. A Monte Carlo computing machine, which processes Monte Carlo codes with high performance, has been under development at the Computing Center since 1987. The concept of the Monte Carlo computing machine and its performance have been investigated and estimated by using a software simulator. In this report the problems in vectorization of Monte Carlo codes, the Monte Carlo pipelines proposed to mitigate these difficulties, and the results of the performance estimation of the Monte Carlo computing machine by the simulator are described. (author)
International Nuclear Information System (INIS)
Brown, F.B.
1981-01-01
Examination of the global algorithms and local kernels of conventional general-purpose Monte Carlo codes shows that multigroup Monte Carlo methods have sufficient structure to permit efficient vectorization. A structured multigroup Monte Carlo algorithm for vector computers is developed in which many particle events are treated at once on a cell-by-cell basis. Vectorization of kernels for tracking and variance reduction is described, and a new method for discrete sampling is developed to facilitate the vectorization of collision analysis. To demonstrate the potential of the new method, a vectorized Monte Carlo code for multigroup radiation transport analysis was developed. This code incorporates many features of conventional general-purpose production codes, including general geometry, splitting and Russian roulette, survival biasing, variance estimation via batching, a number of cutoffs, and generalized tallies of collision, tracklength, and surface crossing estimators with response functions. Predictions of vectorized performance characteristics for the CYBER-205 were made using emulated coding and a dynamic model of vector instruction timing. Computation rates were examined for a variety of test problems to determine sensitivities to batch size and vector lengths. Significant speedups are predicted for even a few hundred particles per batch, and asymptotic speedups by about 40 over equivalent Amdahl 470V/8 scalar codes are predicted for a few thousand particles per batch. The principal conclusion is that vectorization of a general-purpose multigroup Monte Carlo code is well worth the significant effort required for stylized coding and major algorithmic changes.
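The batch-oriented, event-based idea described above (processing many particle events at once rather than looping over individual histories) can be sketched with NumPy array operations. This is an illustrative toy, a one-flight slab transmission estimate, not Brown's multigroup algorithm:

```python
import numpy as np

def transmit_fraction(n_particles, sigma_t, thickness, seed=0):
    """Event-based sketch: sample an entire batch of free-flight
    distances in one vector operation instead of looping over
    individual particle histories."""
    rng = np.random.default_rng(seed)
    distances = rng.exponential(scale=1.0 / sigma_t, size=n_particles)
    # particles whose first flight exceeds the slab thickness escape
    return float(np.mean(distances > thickness))

# analytic answer for uncollided transmission is exp(-sigma_t * thickness)
print(transmit_fraction(1_000_000, sigma_t=1.0, thickness=2.0))
```

Processing the whole batch as one array is what makes such a code amenable to vector (and today, SIMD/GPU) hardware.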
Monte Carlo method for array criticality calculations
International Nuclear Information System (INIS)
Dickinson, D.; Whitesides, G.E.
1976-01-01
The Monte Carlo method for solving neutron transport problems consists of mathematically tracing paths of individual neutrons collision by collision until they are lost by absorption or leakage. The fate of the neutron after each collision is determined by the probability distribution functions that are formed from the neutron cross-section data. These distributions are sampled statistically to establish the successive steps in the neutron's path. The resulting data, accumulated from following a large number of batches, are analyzed to give estimates of k_eff and other collision-related quantities. The use of electronic computers to produce the simulated neutron histories, initiated at Los Alamos Scientific Laboratory, made the use of the Monte Carlo method practical for many applications. In analog Monte Carlo simulation, the calculation follows the physical events of neutron scattering, absorption, and leakage. To increase calculational efficiency, modifications such as the use of statistical weights are introduced. The Monte Carlo method permits the use of a three-dimensional geometry description and a detailed cross-section representation. Some of the problems in using the method are the selection of the spatial distribution for the initial batch, the preparation of the geometry description for complex units, and the calculation of error estimates for region-dependent quantities such as fluxes. The Monte Carlo method is especially appropriate for criticality safety calculations since it permits an accurate representation of interacting units of fissile material. Dissimilar units, units of complex shape, moderators between units, and reflected arrays may be calculated. Monte Carlo results must be correlated with relevant experimental data, and caution must be used to ensure that a representative set of neutron histories is produced.
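The collision-by-collision tracing described above can be illustrated with a toy one-group, infinite-medium model. The probabilities and fission yield below are invented for illustration only, not taken from any real cross-section data:

```python
import random

def follow_neutron(p_absorb, p_fission, nu, rng):
    """Trace one neutron collision by collision: each collision is
    either absorption (with a chance of fission) or scattering."""
    while True:
        if rng.random() < p_absorb:
            # absorbed; score the expected fission yield nu if fission occurs
            return nu if rng.random() < p_fission else 0.0
        # scattered: in this zero-dimensional toy, just keep colliding

def estimate_k(n_histories, p_absorb=0.4, p_fission=0.5, nu=2.5, seed=1):
    """Batch estimate of the multiplication factor: neutrons produced
    per neutron started (analytically k = nu * p_fission = 1.25 here)."""
    rng = random.Random(seed)
    produced = sum(follow_neutron(p_absorb, p_fission, nu, rng)
                   for _ in range(n_histories))
    return produced / n_histories

print(estimate_k(100_000))
```

A production code adds geometry, energy dependence, and variance reduction on top of exactly this random-walk skeleton.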
Indian Academy of Sciences (India)
Resonance – Journal of Science Education. Markov Chain Monte Carlo - Examples. Arnab Chakraborty. General Article, Volume 7, Issue 3, March 2002, pp 25-34. Permanent link: https://www.ias.ac.in/article/fulltext/reso/007/03/0025-0034.
Discrete Diffusion Monte Carlo for Electron Thermal Transport
Chenhall, Jeffrey; Cao, Duc; Wollaeger, Ryan; Moses, Gregory
2014-10-01
The iSNB (implicit Schurtz-Nicolai-Busquet) electron thermal transport method of Cao et al. is adapted to a Discrete Diffusion Monte Carlo (DDMC) solution method for eventual inclusion in a hybrid IMC-DDMC (Implicit Monte Carlo) method. The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the iSNB-DDMC method will be presented. This work was supported by Sandia National Laboratory - Albuquerque.
Monte Carlo and Quasi-Monte Carlo Sampling
Lemieux, Christiane
2009-01-01
Presents essential tools for using quasi-Monte Carlo sampling in practice. This book focuses on issues related to Monte Carlo methods: uniform and non-uniform random number generation and variance reduction techniques. It also covers several aspects of quasi-Monte Carlo methods.
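The simplest quasi-Monte Carlo point set such books cover is the one-dimensional van der Corput sequence; a minimal sketch (the integrand used to demonstrate it is our own example):

```python
def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput sequence, the
    simplest one-dimensional low-discrepancy (quasi-Monte Carlo) set:
    reverse the base-b digits of i about the radix point."""
    points = []
    for i in range(1, n + 1):
        x, denom, k = 0.0, 1.0, i
        while k > 0:
            denom *= base
            k, digit = divmod(k, base)
            x += digit / denom
        points.append(x)
    return points

pts = van_der_corput(1024)
# quasi-Monte Carlo estimate of the integral of x^2 over [0, 1] (exact: 1/3)
print(sum(p * p for p in pts) / len(pts))
```

Because the points fill the interval far more evenly than i.i.d. uniforms, the integration error shrinks roughly like O(log n / n) instead of O(1/sqrt(n)).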
VizieR Online Data Catalog: Project VeSElkA: HD stars atomic-line analysis (LeBlanc+, 2015)
Leblanc, F.; Khalack, V.; Yameogo, B.; Thibeault, C.; Gallant, I.
2017-11-01
The four stars studied here were observed with ESPaDOnS at CFHT. High-resolution (R=65000) Stokes IV spectra with large signal-to-noise ratios were obtained in the spectral range 3700-10500Å and were reduced with the software package LIBRE-ESPRIT (Donati et al. 1997MNRAS.291..658D). Two or more spectra of each star were taken to verify for any spectral variability. For the four stars studied here, no such variability is detected. Also, no strong magnetic fields were found. More details about these observations are given in Khalack & LeBlanc (2015AJ....150....2K); for instance, the exposure times and the signal-to-noise ratios are given in their table 1 for each star studied here. (4 data files).
Rezazadeh Ghazvini, Raheleh
2017-01-01
Egg white proteins such as ovotransferrin play an important role in the defence against bacterial invasion. Ovotransferrin has an iron-binding capacity, which induces a bacteriostatic activity by limiting the iron available in the environment of the bacteria. Beyond the well-known iron-deprivation mechanism (Baron et al. 2016, for a review), several authors have suggested that the antimicrobial activity of ovotransferrin could result from its direct effect on the me...
Monte Carlo applications to radiation shielding problems
International Nuclear Information System (INIS)
Subbaiah, K.V.
2009-01-01
transport in complex geometries is straightforward, while even the simplest finite geometries (e.g., thin foils) are very difficult to handle with the transport equation. The main drawback of the Monte Carlo method lies in its random nature: all the results are affected by statistical uncertainties, which can be reduced at the expense of increasing the sampled population and, hence, the computation time. Under special circumstances, the statistical uncertainties may be lowered by using variance-reduction techniques. Monte Carlo methods tend to be used when it is infeasible or impossible to compute an exact result with a deterministic algorithm. The term Monte Carlo was coined in the 1940s by physicists working on nuclear weapon projects at the Los Alamos National Laboratory.
Monte Carlo principles and applications
Energy Technology Data Exchange (ETDEWEB)
Raeside, D E [Oklahoma Univ., Oklahoma City (USA). Health Sciences Center
1976-03-01
The principles underlying the use of Monte Carlo methods are explained, for readers who may not be familiar with the approach. The generation of random numbers is discussed, and the connection between Monte Carlo methods and random numbers is indicated. Outlines of two well established Monte Carlo sampling techniques are given, together with examples illustrating their use. The general techniques for improving the efficiency of Monte Carlo calculations are considered. The literature relevant to the applications of Monte Carlo calculations in medical physics is reviewed.
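One of the classic sampling techniques such reviews outline is inverse transform sampling, which turns uniform random numbers into draws from another distribution; a minimal sketch for the exponential distribution (the choice of distribution is ours):

```python
import math
import random

def sample_exponential(rate, rng):
    """Inverse transform sampling: if U ~ Uniform(0, 1), then
    -ln(1 - U) / rate is Exponential(rate), because the exponential
    CDF F(x) = 1 - exp(-rate * x) inverts to F^-1(u) = -ln(1 - u) / rate."""
    return -math.log(1.0 - rng.random()) / rate

rng = random.Random(42)
samples = [sample_exponential(2.0, rng) for _ in range(200_000)]
mean = sum(samples) / len(samples)
print(mean)  # should approach 1 / rate = 0.5
```

The same recipe works for any distribution whose CDF can be inverted in closed form; rejection sampling covers the rest.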
International Nuclear Information System (INIS)
Rajabalinejad, M.
2010-01-01
To reduce the cost of Monte Carlo (MC) simulations for time-consuming processes, Bayesian Monte Carlo (BMC) is introduced in this paper. The BMC method reduces the number of realizations in MC according to the desired accuracy level. BMC also provides the possibility of considering more priors; in other words, different priors can be integrated into one model by using BMC to further reduce the cost of simulations. This study suggests speeding up the simulation process by considering the logical dependence of neighboring points as prior information. This information is used in the BMC method to produce a predictive tool through the simulation process. The general methodology and algorithm of the BMC method are presented in this paper. The BMC method is applied to a simplified breakwater model as well as the finite element model of the 17th Street Canal in New Orleans, and the results are compared with the MC and Dynamic Bounds methods.
International Nuclear Information System (INIS)
Dubi, A.; Gerstl, S.A.W.
1979-05-01
The contributon Monte Carlo method is based on a new recipe to calculate target responses by means of a volume integral of the contributon current in a region between the source and the detector. A comprehensive description of the method, its implementation in the general-purpose MCNP code, and results of the method for realistic nonhomogeneous, energy-dependent problems are presented. 23 figures, 10 tables
International Nuclear Information System (INIS)
Wollaber, Allan Benton
2016-01-01
This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
International Nuclear Information System (INIS)
Creutz, M.
1986-01-01
The author discusses a recently developed algorithm for simulating statistical systems. The procedure interpolates between molecular dynamics methods and canonical Monte Carlo. The primary advantages are extremely fast simulations of discrete systems such as the Ising model and a relative insensitivity to random number quality. A variation of the algorithm gives rise to a deterministic dynamics for Ising spins. This model may be useful for high speed simulation of non-equilibrium phenomena.
Energy Technology Data Exchange (ETDEWEB)
Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
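The lecture's opening example, estimating π, can be sketched in a few lines (an illustration of the idea, not the lecture material itself):

```python
import random

def estimate_pi(n_samples, seed=0):
    """Estimate pi by drawing points uniformly in the unit square and
    counting the fraction that land inside the quarter circle, whose
    area is pi / 4."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n_samples)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n_samples

print(estimate_pi(200_000))  # approaches 3.14159... as n_samples grows
```

The Law of Large Numbers guarantees convergence, and the Central Limit Theorem gives the familiar 1/sqrt(n) error rate.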
Energy Technology Data Exchange (ETDEWEB)
Brockway, D.; Soran, P.; Whalen, P.
1985-01-01
A Monte Carlo algorithm to efficiently calculate static alpha eigenvalues, N = n·e^(αt), for supercritical systems has been developed and tested. A direct Monte Carlo approach to calculating a static alpha is to simply follow the buildup in time of neutrons in a supercritical system and evaluate the logarithmic derivative of the neutron population with respect to time. This procedure is expensive, and the solution is very noisy and almost useless for a system near critical. The modified approach is to convert the time-dependent problem to a static α-eigenvalue problem and regress α on solutions of a k-eigenvalue problem. In practice, this procedure is much more efficient than the direct calculation, and produces much more accurate results. Because the Monte Carlo codes are intrinsically three-dimensional and use elaborate continuous-energy cross sections, this technique is now used as a standard for evaluating other calculational techniques in odd geometries or with group cross sections.
Status of Monte Carlo at Los Alamos
International Nuclear Information System (INIS)
Thompson, W.L.; Cashwell, E.D.
1980-01-01
At Los Alamos the early work of Fermi, von Neumann, and Ulam has been developed and supplemented by many followers, notably Cashwell and Everett, and the main product today is the continuous-energy, general-purpose, generalized-geometry, time-dependent, coupled neutron-photon transport code called MCNP. The Los Alamos Monte Carlo research and development effort is concentrated in Group X-6. MCNP treats an arbitrary three-dimensional configuration of arbitrary materials in geometric cells bounded by first- and second-degree surfaces and some fourth-degree surfaces (elliptical tori). Monte Carlo has evolved into perhaps the main method for radiation transport calculations at Los Alamos. MCNP is used in every technical division at the Laboratory by over 130 users about 600 times a month accounting for nearly 200 hours of CDC-7600 time
Usefulness of the Monte Carlo method in reliability calculations
International Nuclear Information System (INIS)
Lanore, J.M.; Kalli, H.
1977-01-01
Three examples of reliability Monte Carlo programs developed in the LEP (Laboratory for Radiation Shielding Studies at the Nuclear Research Center at Saclay) are presented. First, an uncertainty analysis is given for a simplified spray system; a Monte Carlo program, PATREC-MC, has been written to solve the problem with the system components given in fault tree representation. The second program, MONARC 2, has been written to solve the problem of complex-system reliability by Monte Carlo simulation; here again the system (a residual heat removal system) is given in fault tree representation. Third, the Monte Carlo program MONARC was used instead of a Markov diagram to solve the simulation problem of an electric power supply comprising two nets and two stand-by diesels.
MONTE: the next generation of mission design and navigation software
Evans, Scott; Taber, William; Drain, Theodore; Smith, Jonathon; Wu, Hsi-Cheng; Guevara, Michelle; Sunseri, Richard; Evans, James
2018-03-01
The Mission analysis, Operations and Navigation Toolkit Environment (MONTE) (Sunseri et al. in NASA Tech Briefs 36(9), 2012) is an astrodynamic toolkit produced by the Mission Design and Navigation Software Group at the Jet Propulsion Laboratory. It provides a single integrated environment for all phases of deep space and Earth orbiting missions. Capabilities include: trajectory optimization and analysis, operational orbit determination, flight path control, and 2D/3D visualization. MONTE is presented to the user as an importable Python language module. This allows a simple but powerful user interface via CLUI or script. In addition, the Python interface allows MONTE to be used seamlessly with other canonical scientific programming tools such as SciPy, NumPy, and Matplotlib. MONTE is the prime operational orbit determination software for all JPL navigated missions.
Schön, Peter; Prokop, Alexander; Naaim-Bouvet, Florence; Nishimura, Kouichi; Vionnet, Vincent; Guyomarc'h, Gilbert
2014-05-01
Wind and the associated snow drift are dominant factors determining the snow distribution and accumulation in alpine areas, resulting in a high spatial variability of snow depth that is difficult to evaluate and quantify. The terrain-based parameter Sx characterizes the degree of shelter or exposure of a grid point provided by the upwind terrain, without the computational complexity of numerical wind field models. The parameter has been shown to qualitatively predict snow redistribution with good reproduction of spatial patterns, but has failed to quantitatively describe the snow redistribution, and correlations with measured snow heights were poor. The objective of our research was to (a) identify the sources of poor correlations between predicted and measured snow redistribution and (b) improve the parameter's ability to qualitatively and quantitatively describe snow redistribution in our research area, the Col du Lac Blanc in the French Alps. The area is at an elevation of 2700 m and particularly suited for our study due to its constant wind direction and the availability of data from a meteorological station. Our work focused on areas with terrain edges of approximately 10 m height, and we worked with 1-2 m resolution digital terrain and snow surface data. We first compared the results of the terrain-based parameter calculations to measured snow depths, obtained by high-accuracy terrestrial laser scan measurements. The results were similar to previous studies: the parameter was able to reproduce observed patterns in snow distribution, but regression analyses showed poor correlations between the terrain-based parameter and measured snow depths. We demonstrate how the correlations between measured and calculated snow heights improve if the parameter is calculated based on a snow surface model instead of a digital terrain model. We show how changing the parameter's search distance and how raster re-sampling and raster smoothing improve the results. To improve the parameter
Jazz Club
2012-01-01
The 5th edition of the "Monts Jura Jazz Festival" will take place on September 21st and 22nd 2012 at the Esplanade du Lac in Divonne-les-Bains. This festival is organized by the "CERN Jazz Club" with the support of the "CERN Staff Association". This festival is a major musical event in the French/Swiss area and proposes a world-class program with jazz artists such as D. Lockwood and D. Reinhardt. More information on http://www.jurajazz.com.
2012-01-01
The 5th edition of the "Monts Jura Jazz Festival" will take place at the Esplanade du Lac in Divonne-les-Bains, France on September 21 and 22. This festival organized by the CERN Jazz Club and supported by the CERN Staff Association is becoming a major musical event in the Geneva region. International Jazz artists like Didier Lockwood and David Reinhardt are part of this year outstanding program. Full program and e-tickets are available on the festival website. Don't miss this great festival!
Schön, Peter; Prokop, Alexander; Naaim-Bouvet, Florence; Vionnet, Vincent; Guyomarc'h, Gilbert; Heiser, Micha; Nishimura, Kouichi
2015-04-01
Wind and the associated snow drift are dominant factors determining the snow distribution and accumulation in alpine areas, resulting in a high spatial variability of snow depth that is difficult to evaluate and quantify. The terrain-based parameter Sx characterizes the degree of shelter or exposure of a grid point provided by the upwind terrain, without the computational complexity of numerical wind field models. The parameter has been shown to qualitatively predict snow redistribution with good reproduction of spatial patterns. It does not, however, provide a quantitative estimate of changes in snow depths. The objective of our research was to introduce a new parameter to quantify changes in snow depths in our research area, the Col du Lac Blanc in the French Alps. The area is at an elevation of 2700 m and particularly suited for our study due to its consistently bi-modal wind directions. Our work focused on two pronounced, approximately 10 m high terrain breaks, and we worked with 1 m resolution digital snow surface models (DSM). The DSM and measured changes in snow depths were obtained with high-accuracy terrestrial laser scan (TLS) measurements. First we calculated the terrain-based parameter Sx on a digital snow surface model and correlated Sx with measured changes in snow depths (ΔSH). Results showed that ΔSH can be approximated by ΔSH_estimated = α · Sx, where α is a newly introduced parameter. The parameter α has been shown to be linked to the amount of snow deposited, which is influenced by blowing snow flux. At the Col du Lac Blanc test site, blowing snow flux is recorded with snow particle counters (SPC). Snow flux is the number of drifting snow particles per unit time and area. Hence, the SPC provide data about the duration and intensity of drifting snow events, two important factors not accounted for by the terrain parameter Sx. We analyse how the SPC snow flux data can be used to estimate the magnitude of the new variable parameter α. To simulate the development
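Given pairs of Sx and measured ΔSH values, the scaling parameter α in ΔSH_estimated = α · Sx can be estimated by regression through the origin; a sketch with synthetic data (the function name and the numbers are ours, purely for illustration):

```python
import random

def fit_alpha(sx_values, dsh_values):
    """Least-squares estimate of alpha in dSH = alpha * Sx
    (a regression through the origin)."""
    num = sum(s * d for s, d in zip(sx_values, dsh_values))
    den = sum(s * s for s in sx_values)
    return num / den

# synthetic illustration: a "true" alpha of 0.8 plus measurement noise
rng = random.Random(3)
sx = [rng.uniform(-1.0, 1.0) for _ in range(1000)]
dsh = [0.8 * s + rng.gauss(0.0, 0.05) for s in sx]
print(fit_alpha(sx, dsh))  # recovers a value close to 0.8
```

In practice each drifting-snow event would get its own α, with the SPC flux record explaining how α varies between events.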
Bezerra, Iglesias de Lacerda; Caillot, Adriana Rute Cordeiro; Palhares, Lais Cristina Gusmão Ferreira; Santana-Filho, Arquimedes Paixão; Chavante, Suely Ferreira; Sassaki, Guilherme Lanzi
2018-04-15
The structural characterization of the polysaccharides and the in vitro anti-inflammatory properties of Cabernet Franc (WCF), Cabernet Sauvignon (WCS) and Sauvignon Blanc (WSB) wines were studied for the first time in this work. The wines gave rise to three polysaccharide fractions, namely (WCF) 0.16%, (WCS) 0.05% and (WSB) 0.02%; the highest-yielding one (WCF) was chosen for isolation of polysaccharides. The following were identified: a mannan, formed by a sequence of (1 → 6)-linked α-d-Manp with side chains O-2 substituted by (1 → 2)-linked α-d-mannan; a type II arabinogalactan, formed by a (1 → 3)-linked β-d-Galp main chain substituted at O-6 by (1 → 6)-linked β-d-Galp side chains and nonreducing end-units of 3-O-substituted arabinose; a type I rhamnogalacturonan, formed by repeating (1 → 4)-α-d-GalpA-(1 → 2)-α-L-Rhap groups; and traces of type II rhamnogalacturonan. The polysaccharide mixture and the isolated fractions inhibited the production of the inflammatory cytokines TNF-α and IL-1β and of the mediator NO in RAW 264.7 cells stimulated with LPS.
International Nuclear Information System (INIS)
Lupton, L.R.; Keller, N.A.
1982-09-01
The design of a positron emission tomography (PET) ring camera involves trade-offs between such things as sensitivity, resolution and cost. As a design aid, a Monte Carlo simulation of a single-ring camera system has been developed. The model includes a source-filled phantom, collimators, detectors, and optional shadow shields and inter-crystal septa. Individual gamma rays are tracked within the system materials until they escape, are absorbed, or are detected. Compton and photoelectric interactions are modelled. All system dimensions are variable within the computation. Coincidence and singles data are recorded according to type (true or scattered), annihilation origin, and detected energy. Photon fluxes at various points of interest, such as the edge of the phantom and the collimator, are available. This report reviews the basics of PET, describes the physics involved in the simulation, and provides detailed outlines of the routines.
2003-01-01
MGS MOC Release No. MOC2-387, 10 June 2003. This is a Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) wide angle view of the Charitum Montes, south of Argyre Planitia, in early June 2003. The seasonal south polar frost cap, composed of carbon dioxide, has been retreating southward through this area since spring began a month ago. The bright features toward the bottom of this picture are surfaces covered by frost. The picture is located near 57°S, 43°W. North is at the top, south is at the bottom. Sunlight illuminates the scene from the upper left. The area shown is about 217 km (135 miles) wide.
Monte Carlo Methods in Physics
International Nuclear Information System (INIS)
Santoso, B.
1997-01-01
The method of Monte Carlo integration is reviewed briefly and some of its applications in physics are explained. A numerical experiment on the random number generators used in Monte Carlo techniques is carried out to show the behaviour of the randomness of the various generation methods. To account for the weight function involved in the Monte Carlo, the Metropolis method is used. From the results of the experiment, one can see that there are no regular patterns in the numbers generated, showing that the program generators are reasonably good, while the experimental results show a statistical distribution obeying the statistical distribution law. Further, some applications of the Monte Carlo methods in physics are given. The physical problems are chosen such that the models have available solutions, either exact or approximate, with which the calculations using the Monte Carlo method can be compared. Comparisons show that, for the models considered, good agreement has been obtained.
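The Metropolis method mentioned above can be sketched for a standard normal target (a minimal illustration; the step size and chain length are arbitrary choices of ours):

```python
import math
import random

def metropolis_chain(log_p, x0, step, n_steps, rng):
    """Metropolis method: propose symmetric random-walk moves and
    accept each with probability min(1, p(x') / p(x))."""
    x, samples = x0, []
    for _ in range(n_steps):
        proposal = x + rng.uniform(-step, step)
        if math.log(rng.random()) < log_p(proposal) - log_p(x):
            x = proposal
        samples.append(x)  # rejected moves repeat the current state
    return samples

# target weight function: standard normal, log p(x) = -x^2 / 2 up to a constant
rng = random.Random(7)
chain = metropolis_chain(lambda x: -0.5 * x * x, 0.0, 1.0, 200_000, rng)
mean = sum(chain) / len(chain)
var = sum((x - mean) ** 2 for x in chain) / len(chain)
print(mean, var)  # should approach 0 and 1 for the standard normal
```

Only ratios p(x')/p(x) appear, which is why the method handles unnormalized weight functions directly.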
Energy Technology Data Exchange (ETDEWEB)
Wersin, P., E-mail: paul.wersin@gruner.ch [NAGRA, Hardstrasse 73, 5430 Wettingen (Switzerland)] [Gruner Ltd., Gellertstrasse 55, 4020 Basel (Switzerland); Leupin, O.X. [NAGRA, Hardstrasse 73, 5430 Wettingen (Switzerland); Mettler, S. [NAGRA, Hardstrasse 73, 5430 Wettingen (Switzerland)] [Solexperts Ltd., Mettlenbachstrasse 25, 8617 Moenchaltorf (Switzerland); Gaucher, E.C. [BRGM, 3 avenue Claude Guillemin, B.P. 36009, 45060 Orleans Cedex 2 (France); Maeder, U. [University of Bern, Institute of Geological Sciences, Baltzerstrasse 3, CH-3012 Bern (Switzerland); De Canniere, P. [SCK.CEN, Waste and Disposal Project, Boeretang 200, 2400 Mol (Belgium); Vinsot, A. [ANDRA, Laboratoire de Recherche Souterrain de Meuse/Haute-Marne, RD960 BP9, 55290 Bure (France); Gaebler, H.E. [BGR, Stilleweg 2, 30655 Hannover (Germany); Kunimaro, T. [JAEA, Tokai-mura, Naka-gun, Ibaraki 319-1195 (Japan); Kiho, K. [CRIEPI, 1646 Abiko, Abiko-city Chiba 270-1194 (Japan); Eichinger, L. [Hydroisotop, 85301 Schweitenkirchen (Germany)
2011-06-15
Highlights: > The composition was affected by the complex interplay of diffusion, mineral and surface reactions. > The ¹³C signals for carbon species showed significant variations which could only be partly explained. > The main cations remained remarkably constant during the experiment. > This underlines the strong buffering via cation exchange and carbonate dissolution/precipitation. - Abstract: An in situ test in the Opalinus Clay formation, termed the porewater chemistry (PC) experiment, was carried out for a period of 5 years. It was based on the concept of diffusive equilibration whereby a traced water with a composition close to that expected in the formation was continuously circulated and monitored in a packed-off borehole. The main original focus was to obtain reliable data on the pH/pCO₂ conditions of the porewater, but because of unexpected microbiologically-induced redox reactions, the objective was extended to elucidate the biogeochemical processes occurring in the borehole and to understand their impact on pH/pCO₂ and porewater chemistry in the low-permeability clay formation. The behaviour of the conservative tracers ²H and Br⁻ could be explained by diffusive dilution in the clay and moreover the results showed that diffusive equilibration between the borehole water and the formation occurred within about 3 years' time. However, the composition and pH/pCO₂ conditions differed considerably from those of the in situ porewater. Thus, pH was lower and pCO₂ was higher than indicated by complementary laboratory investigations. The noted differences are explained by microbiologically-induced redox reactions occurring in the borehole and in the interfacial wall area which were caused by an organic source released from the equipment material. The degradation of this source was accompanied by sulfate reduction and - to a lesser extent - by methane generation, which induced a high rate of acetogenic reactions
International Nuclear Information System (INIS)
Wersin, P.; Leupin, O.X.; Mettler, S.; Gaucher, E.C.; Maeder, U.; De Canniere, P.; Vinsot, A.; Gaebler, H.E.; Kunimaro, T.; Kiho, K.; Eichinger, L.
2011-01-01
Highlights: → The composition was affected by the complex interplay of diffusion, mineral and surface reactions. → The ¹³C signals for carbon species showed significant variations which could only be partly explained. → The main cations remained remarkably constant during the experiment. → This underlines the strong buffering via cation exchange and carbonate dissolution/precipitation. - Abstract: An in situ test in the Opalinus Clay formation, termed the porewater chemistry (PC) experiment, was carried out for a period of 5 years. It was based on the concept of diffusive equilibration whereby a traced water with a composition close to that expected in the formation was continuously circulated and monitored in a packed-off borehole. The main original focus was to obtain reliable data on the pH/pCO₂ conditions of the porewater, but because of unexpected microbiologically-induced redox reactions, the objective was extended to elucidate the biogeochemical processes occurring in the borehole and to understand their impact on pH/pCO₂ and porewater chemistry in the low-permeability clay formation. The behaviour of the conservative tracers ²H and Br⁻ could be explained by diffusive dilution in the clay and moreover the results showed that diffusive equilibration between the borehole water and the formation occurred within about 3 years' time. However, the composition and pH/pCO₂ conditions differed considerably from those of the in situ porewater. Thus, pH was lower and pCO₂ was higher than indicated by complementary laboratory investigations. The noted differences are explained by microbiologically-induced redox reactions occurring in the borehole and in the interfacial wall area which were caused by an organic source released from the equipment material. The degradation of this source was accompanied by sulfate reduction and - to a lesser extent - by methane generation, which induced a high rate of acetogenic reactions corresponding to very high acetate
Lectures on Monte Carlo methods
Madras, Neal
2001-01-01
Monte Carlo methods form an experimental branch of mathematics that employs simulations driven by random number generators. These methods are often used when others fail, since they are much less sensitive to the "curse of dimensionality", which plagues deterministic methods in problems with a large number of variables. Monte Carlo methods are used in many fields: mathematics, statistics, physics, chemistry, finance, computer science, and biology, for instance. This book is an introduction to Monte Carlo methods for anyone who would like to use these methods to study various kinds of mathemati
Advanced Multilevel Monte Carlo Methods
Jasra, Ajay; Law, Kody; Suciu, Carina
2017-01-01
This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature, and we describe different strategies which facilitate the application of MLMC within these methods.
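The telescoping idea behind MLMC can be sketched in a few lines (a toy illustration with invented names, not the authors' code): the "levels" below are truncated Taylor approximations of exp(u), standing in for progressively finer and more expensive discretizations, and the same random draw couples the fine and coarse evaluations of each correction term.

```python
import random

def h(u, level):
    """Level-l approximation of exp(u): a Taylor series truncated at
    2**level + 1 terms -- a stand-in for an increasingly fine (and
    increasingly expensive) discretization in a real MLMC setting."""
    s, t = 0.0, 1.0
    for k in range(2 ** level + 1):
        s += t
        t *= u / (k + 1)
    return s

def mlmc(max_level, n0, seed=0):
    """Telescoping estimator  E[h_L] = E[h_0] + sum_l E[h_l - h_{l-1}],
    with sample counts shrinking geometrically at the finer levels."""
    rng = random.Random(seed)
    total = 0.0
    for level in range(max_level + 1):
        n = max(n0 // 2 ** level, 10)
        acc = 0.0
        for _ in range(n):
            u = rng.random()                 # the SAME draw couples both levels
            fine = h(u, level)
            coarse = h(u, level - 1) if level > 0 else 0.0
            acc += fine - coarse
        total += acc / n
    return total

est = mlmc(max_level=5, n0=100_000)
print(est)   # ≈ e - 1 ≈ 1.71828, the exact value of E[exp(U)], U ~ Uniform(0,1)
```

Because the corrections h_l − h_{l−1} have rapidly shrinking variance, the expensive fine levels need only a few samples, which is the cost saving the abstract describes.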
Monte Carlo simulation for IRRMA
International Nuclear Information System (INIS)
Gardner, R.P.; Liu Lianyan
2000-01-01
Monte Carlo simulation is fast becoming a standard approach for many radiation applications that were previously treated almost entirely by experimental techniques. This is certainly true for Industrial Radiation and Radioisotope Measurement Applications - IRRMA. The reasons for this include: (1) the increased cost and inadequacy of experimentation for design and interpretation purposes; (2) the availability of low cost, large memory, and fast personal computers; and (3) the general availability of general purpose Monte Carlo codes that are increasingly user-friendly, efficient, and accurate. This paper discusses the history and present status of Monte Carlo simulation for IRRMA including the general purpose (GP) and specific purpose (SP) Monte Carlo codes and future needs - primarily from the experience of the authors
Geology of Maxwell Montes, Venus
Head, J. W.; Campbell, D. B.; Peterfreund, A. R.; Zisk, S. A.
1984-01-01
Maxwell Montes represent the most distinctive topography on the surface of Venus, rising some 11 km above mean planetary radius. The multiple data sets of the Pioneer mission and Earth-based radar observations are analyzed to characterize Maxwell Montes. Maxwell Montes is a porkchop-shaped feature located at the eastern end of Lakshmi Planum. The main massif trends about North 20 deg West for approximately 1000 km, and the narrow handle extends several hundred km west-southwest (WSW) from the north end of the main massif, descending toward Lakshmi Planum. The main massif is rectilinear and approximately 500 km wide. The southern and northern edges of Maxwell Montes coincide with major topographic boundaries defining the edge of Ishtar Terra.
A Multivariate Time Series Method for Monte Carlo Reactor Analysis
International Nuclear Information System (INIS)
Taro Ueki
2008-01-01
A robust multivariate time series method has been established for the Monte Carlo calculation of neutron multiplication problems. The method is termed the Coarse Mesh Projection Method (CMPM) and can be implemented using coarse statistical bins for acquisition of nuclear fission source data. A novel aspect of CMPM is the combination of the general technical principle of projection pursuit from the signal processing discipline with the neutron multiplication eigenvalue problem of the nuclear engineering discipline. CMPM enables reactor physicists to accurately evaluate major eigenvalue separations of nuclear reactors with continuous-energy Monte Carlo calculation. CMPM was incorporated in the MCNP Monte Carlo particle transport code of Los Alamos National Laboratory. The great advantage of CMPM over the traditional fission matrix method is demonstrated for the three-dimensional modeling of the initial core of a pressurized water reactor
Calibration and Monte Carlo modelling of neutron long counters
Tagziria, H
2000-01-01
The Monte Carlo technique has become a very powerful tool in radiation transport as full advantage is taken of enhanced cross-section data, more powerful computers and statistical techniques, together with better characterisation of neutron and photon source spectra. At the National Physical Laboratory, calculations using the Monte Carlo radiation transport code MCNP-4B have been combined with accurate measurements to characterise two long counters routinely used to standardise monoenergetic neutron fields. New and more accurate response function curves have been produced for both long counters. A novel approach using Monte Carlo methods has been developed, validated and used to model the response function of the counters and determine more accurately their effective centres, which have always been difficult to establish experimentally. Calculations and measurements agree well, especially for the De Pangher long counter for which details of the design and constructional material are well known. The sensitivit...
Adjoint electron Monte Carlo calculations
International Nuclear Information System (INIS)
Jordan, T.M.
1986-01-01
Adjoint Monte Carlo is the most efficient method for accurate analysis of space systems exposed to natural and artificially enhanced electron environments. Recent adjoint calculations for isotropic electron environments include: comparative data for experimental measurements on electronics boxes; benchmark problem solutions for comparing total dose prediction methodologies; preliminary assessment of sectoring methods used during space system design; and total dose predictions on an electronics package. Adjoint Monte Carlo, forward Monte Carlo, and experiment are in excellent agreement for electron sources that simulate space environments. For electron space environments, adjoint Monte Carlo is clearly superior to forward Monte Carlo, requiring one to two orders of magnitude less computer time for relatively simple geometries. The solid-angle sectoring approximations used for routine design calculations can err by more than a factor of 2 on dose in simple shield geometries. For critical space systems exposed to severe electron environments, these potential sectoring errors demand the establishment of large design margins and/or verification of shield design by adjoint Monte Carlo/experiment
Monte Carlo theory and practice
International Nuclear Information System (INIS)
James, F.
1987-01-01
Historically, the first large-scale calculations to make use of the Monte Carlo method were studies of neutron scattering and absorption, random processes for which it is quite natural to employ random numbers. Such calculations, a subset of Monte Carlo calculations, are known as direct simulation, since the 'hypothetical population' of the narrower definition above corresponds directly to the real population being studied. The Monte Carlo method may be applied wherever it is possible to establish equivalence between the desired result and the expected behaviour of a stochastic system. The problem to be solved may already be of a probabilistic or statistical nature, in which case its Monte Carlo formulation will usually be a straightforward simulation, or it may be of a deterministic or analytic nature, in which case an appropriate Monte Carlo formulation may require some imagination and may appear contrived or artificial. In any case, the suitability of the method chosen will depend on its mathematical properties and not on its superficial resemblance to the problem to be solved. The authors show how Monte Carlo techniques may be compared with other methods of solution of the same physical problem
Parallel MCNP Monte Carlo transport calculations with MPI
International Nuclear Information System (INIS)
Wagner, J.C.; Haghighat, A.
1996-01-01
The steady increase in computational performance has made Monte Carlo calculations for large/complex systems possible. However, in order to make these calculations practical, order of magnitude increases in performance are necessary. The Monte Carlo method is inherently parallel (particles are simulated independently) and thus has the potential for near-linear speedup with respect to the number of processors. Further, the ever-increasing accessibility of parallel computers, such as workstation clusters, facilitates the practical use of parallel Monte Carlo. Recognizing the nature of the Monte Carlo method and the trends in available computing, the code developers at Los Alamos National Laboratory implemented the message-passing general-purpose Monte Carlo radiation transport code MCNP (version 4A). The PVM package was chosen by the MCNP code developers because it supports a variety of communication networks, several UNIX platforms, and heterogeneous computer systems. This PVM version of MCNP has been shown to produce speedups that approach the number of processors and thus, is a very useful tool for transport analysis. Due to software incompatibilities on the local IBM SP2, PVM has not been available, and thus it is not possible to take advantage of this useful tool. Hence, it became necessary to implement an alternative message-passing library package into MCNP. Because the message-passing interface (MPI) is supported on the local system, takes advantage of the high-speed communication switches in the SP2, and is considered to be the emerging standard, it was selected
Multilevel sequential Monte Carlo samplers
Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan
2016-01-01
In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level hL. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h0 > h1 > ⋯ > hL. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence, and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, that is, relative to exact sampling and Monte Carlo for the distribution at the finest level hL. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.
Vectorization of Monte Carlo particle transport
International Nuclear Information System (INIS)
Burns, P.J.; Christon, M.; Schweitzer, R.; Lubeck, O.M.; Wasserman, H.J.; Simmons, M.L.; Pryor, D.V.
1989-01-01
This paper reports that fully vectorized versions of the Los Alamos National Laboratory benchmark code Gamteb, a Monte Carlo photon transport algorithm, were developed for the Cyber 205/ETA-10 and Cray X-MP/Y-MP architectures. Single-processor performance measurements of the vector and scalar implementations were modeled in a modified Amdahl's Law that accounts for additional data motion in the vector code. The performance and implementation strategy of the vector codes are related to architectural features of each machine. Speedups between fifteen and eighteen for Cyber 205/ETA-10 architectures, and about nine for CRAY X-MP/Y-MP architectures are observed. The best single processor execution time for the problem was 0.33 seconds on the ETA-10G, and 0.42 seconds on the CRAY Y-MP
Radiation Modeling with Direct Simulation Monte Carlo
Carlson, Ann B.; Hassan, H. A.
1991-01-01
Improvements in the modeling of radiation in low density shock waves with direct simulation Monte Carlo (DSMC) are the subject of this study. A new scheme to determine the relaxation collision numbers for excitation of electronic states is proposed. This scheme attempts to move the DSMC programs toward a more detailed modeling of the physics and more reliance on available rate data. The new method is compared with the current modeling technique and both techniques are compared with available experimental data. The differences in the results are evaluated. The test case is based on experimental measurements from the AVCO-Everett Research Laboratory electric arc-driven shock tube of a normal shock wave in air at 10 km/s and 0.1 Torr. The new method agrees with the available data as well as the results from the earlier scheme and is more easily extrapolated to different flow conditions.
Directory of Open Access Journals (Sweden)
Philipp Christian
2017-01-01
Full Text Available Austria, with 1,914 ha under vine, is the third-largest Pinot blanc (Weißburgunder) producing country in the world. This area corresponds to 4.3% of Austria's viticultural area and 12.3% of the world's Pinot blanc area (15,493 ha). Only Germany (4,794 ha; 30.9%) and Italy (3,086 ha; 19.9%) grow more Pinot blanc. The aroma of dry Pinot blanc is generally discreet, with pear and apple aromas predominating, often together with a nutty note and a hint of flowers (acacia blossom) and herbs. The taste is delicate and full-bodied. Matured wines often show honey and almond notes. Ethyl trans-2-cis-4-decadienoate is known as the key aroma compound of fresh and processed pear products. This compound had not previously been described in wine. In this study, the content of ethyl trans-2-cis-4-decadienoate and other ethyl and methyl esters of the cis/trans isomers of decadienoic acid, as well as other pear-associated aroma compounds (isoamyl acetate, methyl trans-geranoate, ethyl hexanoate, ethyl octanoate, ethyl decanoate and ethyl dodecanoate), was analysed in Austrian Pinot blanc samples directly in wine by HS-SPME-SIM-MS. The analyses found relevant quantities of ethyl trans-2-cis-4-decadienoate, with concentrations in the wines examined ranging from >0.036 to 4.04 μg/L. A preceding test to determine the perception threshold by the BET 3-alternative forced-choice procedure gave a value of 2 μg/L for this compound, so the aroma compound is partly relevant to the character of the wines examined. The aroma compounds were selected on the basis of a consumer study, and the so-called "pear-specific odour activity value" was calculated by two methods. The sensory and analytical studies showed that an interplay of several of the analysed aroma compounds accounts for the character, quality and typicity
Monte Carlo simulation of experiments
International Nuclear Information System (INIS)
Opat, G.I.
1977-07-01
An outline of the technique of computer simulation of particle physics experiments by the Monte Carlo method is presented. Useful special-purpose subprograms are listed and described. At each stage the discussion is made concrete by direct reference to the program SIMUL8 and its variant MONTE-PION, written to assist in the analysis of the radiative decay experiments μ+ → e+ ν_e ν̄ γ and π+ → e+ ν_e γ, respectively. These experiments were based on the use of two large sodium iodide crystals, TINA and MINA, as e and γ detectors. Instructions for the use of SIMUL8 and MONTE-PION are given. (author)
SELF-ABSORPTION CORRECTIONS BASED ON MONTE CARLO SIMULATIONS
Directory of Open Access Journals (Sweden)
Kamila Johnová
2016-12-01
Full Text Available The main aim of this article is to demonstrate how Monte Carlo simulations are implemented in our gamma spectrometry laboratory at the Department of Dosimetry and Application of Ionizing Radiation in order to calculate the self-absorption within samples. A model of the real HPGe detector created for MCNP simulations is presented in this paper. All of the parameters which may influence the self-absorption are first discussed theoretically and later described using the calculated results.
Strategije drevesnega preiskovanja Monte Carlo
VODOPIVEC, TOM
2018-01-01
Following the breakthrough in the game of Go, Monte Carlo tree search (MCTS) methods triggered rapid progress in game-playing agents: the research community has since developed many variants and improvements of the MCTS algorithm, thereby advancing artificial intelligence not only in games but also in numerous other domains. Although MCTS methods combine the generality of random sampling with the precision of tree search, in practice they can suffer from slow convergence...
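As a concrete companion to the thesis topic, here is a bare-bones UCT-style MCTS for a toy subtraction game (take 1-3 stones; whoever takes the last stone wins). This is a hypothetical sketch with invented names, not code from the thesis; it shows the four canonical phases of selection, expansion, simulation, and backpropagation.

```python
import math, random

class Node:
    """A game state: `stones` left, `player` (0/1) to move."""
    def __init__(self, stones, player, parent=None):
        self.stones, self.player, self.parent = stones, player, parent
        self.children = []
        self.untried = [m for m in (1, 2, 3) if m <= stones]  # unexpanded moves
        self.visits = 0
        self.wins = 0.0    # credited to the player who moved INTO this node

def ucb(child, parent, c=1.4):
    """Upper confidence bound used for tree descent (UCT)."""
    return (child.wins / child.visits
            + c * math.sqrt(math.log(parent.visits) / child.visits))

def playout(stones, player, rng):
    """Uniformly random game to the end; the player taking the last stone wins."""
    while True:
        stones -= rng.choice([m for m in (1, 2, 3) if m <= stones])
        if stones == 0:
            return player
        player = 1 - player

def mcts(stones, player, iters=2000, seed=1):
    rng = random.Random(seed)
    root = Node(stones, player)
    for _ in range(iters):
        node = root
        # 1) selection: descend while fully expanded and non-terminal
        while not node.untried and node.children:
            node = max(node.children, key=lambda ch: ucb(ch, node))
        # 2) expansion: add one unexplored child
        if node.untried:
            m = node.untried.pop(rng.randrange(len(node.untried)))
            child = Node(node.stones - m, 1 - node.player, parent=node)
            node.children.append(child)
            node = child
        # 3) simulation (terminal nodes are scored directly)
        winner = (1 - node.player if node.stones == 0
                  else playout(node.stones, node.player, rng))
        # 4) backpropagation
        while node is not None:
            node.visits += 1
            if winner == 1 - node.player:
                node.wins += 1.0
            node = node.parent
    best_child = max(root.children, key=lambda ch: ch.visits)
    return root.stones - best_child.stones   # number of stones to take

best = mcts(10, player=0, iters=20000)
print(best)   # the theoretically optimal move from 10 is to take 2
```

In this game a position with a multiple of 4 stones is lost for the player to move, so from 10 stones the search should concentrate its visits on taking 2.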
Mont Terri Project - Proceedings of the 10 Year Anniversary Workshop
International Nuclear Information System (INIS)
Hugi, M.; Bossart, P.; Hayoz, P.
2007-01-01
This book is a compilation of 12 reports presented at the St-Ursanne workshop. The workshop was dedicated to the scientific community of the Mont Terri partner organisations, their management and scientific/technical staff, involved research organisations and key contractors. The purpose of the event was to acknowledge the excellent research work that has been performed over the last decade, to evaluate and discuss the present state of knowledge in selected research areas, and to explore the potential for future research activities. The topical areas addressed in the workshop are of particular importance with regard to deep geological disposal of radioactive waste and focused on the issues of coupled phenomena and transport processes in argillaceous rock and the demonstration (in underground rock laboratories) of disposal feasibility. After presenting the history of the Mont Terri project and the general geology of Northwestern Switzerland, the presentations are grouped into three topics: (a) coupled phenomena in argillaceous rock, (b) transport processes in argillaceous rock, and (c) demonstration of disposal feasibility in underground rock laboratories. The last chapter describes the research still needed and the Mont Terri rock laboratory
Mission Analysis, Operations, and Navigation Toolkit Environment (Monte) Version 040
Sunseri, Richard F.; Wu, Hsi-Cheng; Evans, Scott E.; Evans, James R.; Drain, Theodore R.; Guevara, Michelle M.
2012-01-01
Monte is a software set designed for use in mission design and spacecraft navigation operations. The system can process measurement data, design optimal trajectories and maneuvers, and do orbit determination, all in one application. For the first time, a single software set can be used for mission design and navigation operations. This eliminates problems due to different models and fidelities used in legacy mission design and navigation software. The unique features of Monte 040 include a blowdown thruster model for GRAIL (Gravity Recovery and Interior Laboratory) with associated pressure models, as well as an updated optimal-search capability (COSMIC) that facilitated mission design for ARTEMIS. Existing legacy software lacked the capabilities necessary for these two missions. There is also a mean orbital element propagator and an osculating-to-mean element converter that allows long-term orbital stability analysis for the first time in compiled code. The optimized trajectory search tool COSMIC allows users to place constraints and controls on their searches without any restrictions. Constraints may be user-defined and depend on trajectory information either forward or backwards in time. In addition, a long-term orbit stability analysis tool (morbiter) existed previously as a set of scripts on top of Monte. Monte is becoming the primary tool for navigation operations, a core competency at JPL. The mission design capabilities in Monte are becoming mature enough for use in project proposals as well as post-phase A mission design. Monte has three distinct advantages over existing software. First, it is being developed in a modern paradigm: object-oriented C++ and Python. Second, the software has been developed as a toolkit, which allows users to customize their own applications and allows the development team to implement requirements quickly, efficiently, and with minimal bugs. Finally, the software is managed in accordance with the CMMI (Capability Maturity Model Integration).
Is Monte Carlo embarrassingly parallel?
Energy Technology Data Exchange (ETDEWEB)
Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)
2012-07-01
Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
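The "embarrassingly parallel" structure, and the synchronization point that limits it, can be sketched in a few lines (a toy Python illustration using multiprocessing rather than MPI; the function names are invented): each worker runs independent histories, and the final gather-and-reduce is a rendezvous of the kind the paper identifies as a bottleneck.

```python
import random
from multiprocessing import Pool

def count_hits(args):
    """One worker's batch: count random points landing inside the unit circle."""
    n, seed = args
    rng = random.Random(seed)
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))

def parallel_pi(total=400_000, workers=4):
    n = total // workers
    with Pool(workers) as pool:
        # pool.map is the rendezvous point: every worker must finish its batch
        # before the partial sums can be reduced into the final estimate.
        hits = pool.map(count_hits, [(n, seed) for seed in range(workers)])
    return 4.0 * sum(hits) / (n * workers)

if __name__ == "__main__":
    print(parallel_pi())   # ≈ 3.14
```

A criticality calculation differs in that this rendezvous recurs every cycle (to rebuild the fission source), which is why its speedup saturates while a one-shot estimate like this scales almost linearly.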
Exact Monte Carlo for molecules
International Nuclear Information System (INIS)
Lester, W.A. Jr.; Reynolds, P.J.
1985-03-01
A brief summary of the fixed-node quantum Monte Carlo method is presented. Results obtained for binding energies, the classical barrier height for H + H2, and the singlet-triplet splitting in methylene are presented and discussed. 17 refs
Monte Carlo - Advances and Challenges
International Nuclear Information System (INIS)
Brown, Forrest B.; Mosteller, Russell D.; Martin, William R.
2008-01-01
Abstract only, full text follows: With ever-faster computers and mature Monte Carlo production codes, there has been tremendous growth in the application of Monte Carlo methods to the analysis of reactor physics and reactor systems. In the past, Monte Carlo methods were used primarily for calculating k_eff of a critical system. More recently, Monte Carlo methods have been increasingly used for determining reactor power distributions and many design parameters, such as β_eff, l_eff, τ, reactivity coefficients, Doppler defect, dominance ratio, etc. These advanced applications of Monte Carlo methods are now becoming common, not just feasible, and bring new challenges to both developers and users: convergence of 3D power distributions must be assured; confidence interval bias must be eliminated; iterated fission probabilities are required, rather than single-generation probabilities; temperature effects including Doppler and feedback must be represented; isotopic depletion and fission product buildup must be modeled. This workshop focuses on recent advances in Monte Carlo methods and their application to reactor physics problems, and on the resulting challenges faced by code developers and users. The workshop is partly tutorial, partly a review of the current state of the art, and partly a discussion of future work that is needed. It should benefit both novice and expert Monte Carlo developers and users. In each of the topic areas, we provide an overview of needs, perspective on past and current methods, a review of recent work, and discussion of further research and capabilities that are required. Electronic copies of all workshop presentations and material will be available. The workshop is structured as two morning and two afternoon segments: - Criticality Calculations I - convergence diagnostics, acceleration methods, confidence intervals, and the iterated fission probability, - Criticality Calculations II - reactor kinetics parameters, dominance ratio, temperature
Bennett, Sophia Elizabeth
2017-01-01
Mr Etienne Blanc, First Vice-President of the Auvergne-Rhône-Alpes Region, delegate for finances, general administration, budgetary savings and cross-border policies
Quand Hollywood lave plus blanc...
Directory of Open Access Journals (Sweden)
André Bleikasten
2006-03-01
Full Text Available Alexandre Dumas thought (I quote from memory) that "one may violate History provided one gives her a beautiful child". He knew what he was talking about: he had given her some rather beautiful ones. The metaphor, admittedly, is not in the best of taste, but as regards the ever-controversial question of film adaptation, the idea strikes me as fairly sound. A filmmaker may seize on a novel to mistreat it, dissect it and disfigure it, pulling it so far toward himself as to make it unrecognizable...
Boulouch, Nathalie
2008-01-01
Colour entered photographic practice only belatedly, with the industrial production of the Autochrome Lumière plate in 1907. After this first stage, the commercialization of chromogenic development processes from the mid-1930s marked a new threshold, opening the era of modern colour photography. Beyond the technical progress usually emphasized, the focus here is on the fact that colour processes introduced a new category into the...
(U) Introduction to Monte Carlo Methods
Energy Technology Data Exchange (ETDEWEB)
Hungerford, Aimee L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-03-20
Monte Carlo methods are very valuable for representing solutions to particle transport problems. Here we describe a “cook book” approach to handling the terms in a transport equation using Monte Carlo methods. Focus is on the mechanics of a numerical Monte Carlo code, rather than the mathematical foundations of the method.
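The "cook book" handling of transport-equation terms can be illustrated with a minimal 1-D slab kernel (a hypothetical sketch, not the report's code): sample a free-flight distance from an exponential with the total cross-section, then decide absorption versus isotropic scattering at each collision site.

```python
import math, random

def transmit_fraction(slab_cm, sigma_t, absorb_prob, histories=200_000, seed=0):
    """Minimal 1-D slab transport: alternate free-flight sampling with a
    collision analysis, the two basic 'recipe steps' of a transport MC."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(histories):
        x, mu = 0.0, 1.0                       # position [cm], direction cosine
        while True:
            # free flight: path length ~ Exponential(mean = 1/sigma_t)
            x += mu * (-math.log(1.0 - rng.random()) / sigma_t)
            if x >= slab_cm:
                transmitted += 1               # leaked through the far face
                break
            if x < 0.0:
                break                          # escaped out of the near face
            if rng.random() < absorb_prob:
                break                          # absorbed at the collision site
            mu = 2.0 * rng.random() - 1.0      # isotropic scatter (new cosine)
    return transmitted / histories

# Sanity check: for a pure absorber the analytic answer is exp(-sigma_t * L).
frac = transmit_fraction(slab_cm=2.0, sigma_t=1.0, absorb_prob=1.0)
print(frac)   # ≈ exp(-2) ≈ 0.135
```

Real transport codes add energy dependence, tallies, and variance reduction on top of exactly this skeleton.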
Monte Carlo: in the beginning and some great expectations
International Nuclear Information System (INIS)
Metropolis, N.
1985-01-01
The central theme will be on the historical setting and origins of the Monte Carlo Method. The scene was the post-war Los Alamos Scientific Laboratory. There was an inevitability about the Monte Carlo Event: the ENIAC had recently enjoyed its meteoric rise (on a classified Los Alamos problem); Stan Ulam had returned to Los Alamos; John von Neumann was a frequent visitor. Techniques, algorithms, and applications developed rapidly at Los Alamos. Soon, the fascination of the Method reached wider horizons. The first paper was submitted for publication in the spring of 1949. In the summer of 1949, the first open conference was held at the University of California at Los Angeles. Of some interest perhaps is an account of Fermi's earlier, independent application in neutron moderation studies while at the University of Rome. The quantum leap expected with the advent of massively parallel processors will provide stimuli for very ambitious applications of the Monte Carlo Method in disciplines ranging from field theories to cosmology, including more realistic models in the neurosciences. A structure of multi-instruction sets for parallel processing is ideally suited for the Monte Carlo approach. One may even hope for a modest hardening of the soft sciences
Monte Carlo modelling of TRIGA research reactor
El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.
2010-10-01
The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to serve the various fields of basic nuclear research, manpower training, and production of radioisotopes for use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparison with experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous-energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents all components of the core in detail, with essentially no physical approximation. Continuous-energy cross-section data from the most recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3), as well as S(α,β) thermal neutron scattering functions distributed with the MCNP code, were used. The cross-section libraries were generated using the NJOY99 system updated to its most recent patch file "up259". The consistency and accuracy of both the Monte Carlo simulation and the neutron transport physics were established by benchmarking the TRIGA experiments. Core excess reactivity, total and integral control rod worths, as well as power peaking factors were used in the validation process. Results of the calculations are analysed and discussed.
Isotopic depletion with Monte Carlo
International Nuclear Information System (INIS)
Martin, W.R.; Rathkopf, J.A.
1996-06-01
This work considers a method to deplete isotopes during a time-dependent Monte Carlo simulation of an evolving system. The method is based on explicitly combining a conventional estimator for the scalar flux with the analytical solutions to the isotopic depletion equations. There are no auxiliary calculations; the method is an integral part of the Monte Carlo calculation. The method eliminates negative densities and reduces the variance in the estimates for the isotope densities compared to existing methods. Moreover, existing methods are shown to be special cases of the general method described in this work, as they can be derived by combining a high-variance estimator for the scalar flux with a low-order approximation to the analytical solution to the depletion equation
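The key idea, using the analytical solution of the depletion equation rather than a low-order approximation, can be illustrated for a single absorbing nuclide. This is a schematic sketch with assumed one-group constants, not the estimator of the paper:

```python
import math

def deplete_analytic(n0, sigma_a, phi, dt):
    """Analytic solution of dN/dt = -sigma_a * phi * N over one time step:
    N(dt) = N0 * exp(-sigma_a * phi * dt). Always non-negative."""
    return n0 * math.exp(-sigma_a * phi * dt)

def deplete_euler(n0, sigma_a, phi, dt):
    """Explicit Euler approximation of the same step; for a large
    sigma_a * phi * dt product it overshoots and goes negative."""
    return n0 * (1.0 - sigma_a * phi * dt)
```

The analytic update can never produce the negative densities that a low-order update can, which is the property the abstract highlights.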
Zimmerman, George B.
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, inflight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.
International Nuclear Information System (INIS)
Zimmerman, G.B.
1997-01-01
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, inflight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials. copyright 1997 American Institute of Physics
Shell model Monte Carlo methods
International Nuclear Information System (INIS)
Koonin, S.E.; Dean, D.J.; Langanke, K.
1997-01-01
We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo (SMMC) methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal and rotational behavior of rare-earth and γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. (orig.)
A contribution Monte Carlo method
International Nuclear Information System (INIS)
Aboughantous, C.H.
1994-01-01
A Contribution Monte Carlo method is developed and successfully applied to a sample deep-penetration shielding problem. The random walk is simulated in most of its parts as in conventional Monte Carlo methods. The probability density functions (pdf's) are expressed in terms of spherical harmonics and are continuous functions in direction cosine and azimuthal angle variables as well as in position coordinates; the energy is discretized in the multigroup approximation. The transport pdf is an unusual exponential kernel strongly dependent on the incident and emergent directions and energies and on the position of the collision site. The method produces the same results obtained with the deterministic method with a very small standard deviation, with as little as 1,000 Contribution particles in both analog and nonabsorption biasing modes and with only a few minutes CPU time
Shell model Monte Carlo methods
International Nuclear Information System (INIS)
Koonin, S.E.
1996-01-01
We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground-state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs
Parallel Monte Carlo reactor neutronics
International Nuclear Information System (INIS)
Blomquist, R.N.; Brown, F.B.
1994-01-01
The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved
Elements of Monte Carlo techniques
International Nuclear Information System (INIS)
Nagarajan, P.S.
2000-01-01
The Monte Carlo method essentially mimics real-world physical processes at the microscopic level. With the incredible increase in computing speeds and ever-decreasing computing costs, the method is in widespread use for practical problems. Topics covered include algorithm-generated sequences known as pseudo-random sequences (prs), probability density functions (pdf), tests for randomness, extension to multidimensional integration, etc.
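As a minimal illustration of these elements (pseudo-random sequence, uniform sampling, multidimensional integration), here is a crude Monte Carlo estimate of an integral over the unit hypercube; the integrands and dimensions are chosen arbitrarily:

```python
import random

def mc_integrate(f, dim, n, seed=0):
    """Crude Monte Carlo: average f over n pseudo-random points drawn
    uniformly from the unit hypercube [0,1)^dim. The statistical error
    shrinks as 1/sqrt(n), independent of the dimension."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = [rng.random() for _ in range(dim)]
        total += f(x)
    return total / n
```

For example, mc_integrate(lambda x: x[0] ** 2, 1, 50000) should approach the exact value 1/3.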
Adaptive Multilevel Monte Carlo Simulation
Hoel, H
2011-08-23
This work generalizes the multilevel forward Euler Monte Carlo method introduced by Michael B. Giles (Oper. Res. 56(3):607–617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. Giles proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates, reducing the computational effort required by a standard, single-level forward Euler Monte Carlo method. The present work introduces an adaptive hierarchy of non-uniform time discretizations, generated by an adaptive algorithm introduced in (Anna Dzougoutov et al., Adaptive Monte Carlo algorithms for stopped diffusion, in Multiscale Methods in Science and Engineering, Lect. Notes Comput. Sci. Eng. 44, pages 59–88, Springer, Berlin, 2005; Kyoung-Sook Moon et al., Stoch. Anal. Appl. 23(3):511–558, 2005; Kyoung-Sook Moon et al., An adaptive algorithm for ordinary, stochastic and partial differential equations, in Recent Advances in Adaptive Computation, Contemp. Math. 383, pages 325–343, Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates stochastic, path-dependent time steps and is based on a posteriori error expansions first developed in (Anders Szepessy et al., Comm. Pure Appl. Math. 54(10):1169–1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL): from O(TOL⁻³) for a single-level version of the adaptive algorithm to O((TOL⁻¹ log(TOL))²).
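The telescoping idea behind multilevel Monte Carlo, correcting a coarse forward Euler estimate with coupled fine/coarse differences level by level, can be sketched for a geometric Brownian motion. Only uniform levels are shown (the adaptive time-stepping of the paper is not reproduced), and all parameters are arbitrary:

```python
import math
import random

def mlmc_level(l, n_samples, rng, x0=1.0, mu=0.05, sigma=0.2, T=1.0, M=2):
    """Estimate one telescoping term: E[P_0] at level 0, and E[P_l - P_{l-1}]
    at level l > 0, using fine and coarse forward Euler paths driven by the
    SAME Brownian increments so the difference has small variance."""
    n_fine = M ** l
    dt = T / n_fine
    total = 0.0
    for _ in range(n_samples):
        xf = xc = x0
        dw_coarse = 0.0
        for step in range(n_fine):
            dw = rng.gauss(0.0, math.sqrt(dt))
            xf += mu * xf * dt + sigma * xf * dw      # fine path, step dt
            dw_coarse += dw
            if l > 0 and (step + 1) % M == 0:         # coarse path, step M*dt
                xc += mu * xc * (M * dt) + sigma * xc * dw_coarse
                dw_coarse = 0.0
        total += xf if l == 0 else (xf - xc)
    return total / n_samples

def mlmc_estimate(levels, samples_per_level, seed=7):
    """Sum the level estimators to approximate E[X_T]."""
    rng = random.Random(seed)
    return sum(mlmc_level(l, samples_per_level[l], rng) for l in range(levels))
```

For this GBM the exact answer is E[X_T] = x0 · exp(mu · T), so the multilevel sum with decreasing sample counts per level can be checked directly.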
Geometrical splitting in Monte Carlo
International Nuclear Information System (INIS)
Dubi, A.; Elperin, T.; Dudziak, D.J.
1982-01-01
A statistical model is presented in which a direct statistical approach yields analytic expressions for the second moment, the variance ratio, and the benefit function in a model of an n-surface-splitting Monte Carlo game. In addition to the insight into the dependence of the second moment on the splitting parameters, the main importance of the expressions developed lies in their potential to become a basis for in-code optimization of splitting through a general algorithm. Refs
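The surface-splitting game analyzed above can be sketched for a purely absorbing slab with a single splitting surface. This is an illustrative toy, not the authors' analytic model; because the exponential flight distribution is memoryless, restarting each split copy at the surface is unbiased:

```python
import math
import random

def transmission_with_splitting(sigma_t, thickness, n, split_at, split_factor, seed=3):
    """Transmission through a purely absorbing slab with geometrical
    splitting: a particle crossing the surface at x = split_at is replaced
    by split_factor copies, each carrying 1/split_factor of its weight.
    More (lighter) particles reach the deep region, yet the tally stays
    unbiased because total weight is conserved at the split."""
    rng = random.Random(seed)
    score = 0.0
    for _ in range(n):
        stack = [(0.0, 1.0)]                      # (position, statistical weight)
        while stack:
            x, w = stack.pop()
            x_end = x - math.log(1.0 - rng.random()) / sigma_t  # straight-ahead flight
            if x < split_at <= x_end:             # crossed the splitting surface
                for _ in range(split_factor):     # restart each copy at the surface
                    stack.append((split_at, w / split_factor))
            elif x_end >= thickness:
                score += w                        # transmitted: tally the weight
            # otherwise the particle is absorbed inside the slab
    return score / n
```

For a pure absorber the exact answer is exp(-sigma_t · thickness), so the splitting estimate can be checked against it.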
Extending canonical Monte Carlo methods
International Nuclear Information System (INIS)
Velazquez, L; Curilef, S
2010-01-01
In this paper, we discuss the implications of a recently obtained equilibrium fluctuation-dissipation relation for the extension of the available Monte Carlo methods based on consideration of the Gibbs canonical ensemble, in order to account for the existence of an anomalous regime with negative heat capacities C α with α≈0.2, for the particular case of the 2D ten-state Potts model
Virtual laboratory for radiation experiments
International Nuclear Information System (INIS)
Tiftikci, A.; Kocar, C.; Tombakoglu, M.
2009-01-01
Simulations of the alpha, beta and gamma radiation detection and measurement experiments that form part of real nuclear physics laboratory courses were realized with the Monte Carlo method and the JAVA programming language. As is well known, establishing this type of laboratory is very expensive, and the highly radioactive sources used in some experiments carry risks for students and experimentalists alike. With these problems in mind, the aim of this study is to set up a virtual radiation laboratory at minimum cost and to speed up the training of students in radiation physics with no radiation risk. The software models the nature of radiation and radiation transport with the help of the Monte Carlo method. Experimental parameters can be changed manually by the user, and results can be followed synchronously on an MCA (multichannel analyzer) or an SCA (single-channel analyzer) panel and analyzed there. The virtual radiation laboratory developed in this study, with reliable results and unlimited experimentation capability, appears to be a useful educational tool. Moreover, new types of experiments can easily be integrated into the software, so the virtual laboratory can be extended.
International Nuclear Information System (INIS)
Mercier, B.
1985-04-01
We have shown that the transport equation can be solved with particles, as in the Monte Carlo method, but without random numbers. In the Monte Carlo method, particles are created from the source and are followed from collision to collision until they are either absorbed or leave the spatial domain. In our method, particles are created from the original source with a variable weight accounting for both collision and absorption. These particles are followed until they leave the spatial domain, and we use them to determine a first collision source. Another set of particles is then created from this first collision source and tracked to determine a second collision source, and so on. This process introduces an approximation which does not exist in the Monte Carlo method; however, we have analyzed the effect of this approximation and shown that it can be limited. Our method is deterministic and gives reproducible results. Furthermore, when extra accuracy is needed in some region, it is easier to make more particles go there. It has the same kinds of applications, being better suited to problems where streaming is dominant than to collision-dominated problems
International Nuclear Information System (INIS)
Kennedy, D.C. II.
1987-01-01
This is an update on the progress of the BREMMUS Monte Carlo simulator, particularly in its current incarnation, BREM5. The present report is intended only as a follow-up to the Mark II/Granlibakken proceedings, and those proceedings should be consulted for a complete description of the capabilities and goals of the BREMMUS program. The new BREM5 program improves on the previous version of BREMMUS, BREM2, in a number of important ways. In BREM2, the internal loop (oblique) corrections were not treated in consistent fashion, a deficiency that led to renormalization scheme-dependence; i.e., physical results, such as cross sections, were dependent on the method used to eliminate infinities from the theory. Of course, this problem cannot be tolerated in a Monte Carlo designed for experimental use. BREM5 incorporates a new way of treating the oblique corrections, as explained in the Granlibakken proceedings, that guarantees renormalization scheme-independence and dramatically simplifies the organization and calculation of radiative corrections. This technique is to be presented in full detail in a forthcoming paper. BREM5 is, at this point, the only Monte Carlo to contain the entire set of one-loop corrections to electroweak four-fermion processes and renormalization scheme-independence. 3 figures
Statistical implications in Monte Carlo depletions - 051
International Nuclear Information System (INIS)
Xu, Zhiwen; Rhodes, J.; Smith, K.
2010-01-01
As a result of steady advances in computer power, continuous-energy Monte Carlo depletion analysis is attracting considerable attention for reactor burnup calculations. The typical Monte Carlo analysis is set up as a combination of a Monte Carlo neutron transport solver and a fuel burnup solver; note that the burnup solver is a deterministic module. The statistical errors in the Monte Carlo solutions are introduced into nuclide number densities and propagated along fuel burnup. This paper works towards an understanding of the statistical implications in Monte Carlo depletions, including both statistical bias and statistical variations in depleted fuel number densities. The deterministic Studsvik lattice physics code, CASMO-5, is modified to model the Monte Carlo depletion. The statistical bias in depleted number densities is found to be negligible compared to the statistical variations, which, in turn, demonstrates the correctness of the Monte Carlo depletion method. Meanwhile, the statistical variation in number densities generally increases with burnup. Several possible ways of reducing the statistical errors are discussed: 1) increasing the number of individual Monte Carlo histories; 2) increasing the number of time steps; 3) running additional independent Monte Carlo depletion cases. Finally, a new Monte Carlo depletion methodology, called the batch depletion method, is proposed, which consists of performing a set of independent Monte Carlo depletions and is thus capable of estimating the overall statistical error, including both the local statistical error and the propagated statistical error. (authors)
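The batch depletion idea, running independent Monte Carlo depletion replicas and reading the overall statistical error off the batch spread, can be sketched with a toy model in which the transport solve is replaced by a noisy one-group flux. All constants here are invented for illustration:

```python
import math
import random
import statistics

def one_depletion_replica(n0, sigma_a, phi_true, rel_noise, steps, dt, rng):
    """One independent 'Monte Carlo depletion' history: at each burnup step
    the flux carries statistical noise (standing in for a finite-history
    transport solve), and the density is depleted with that noisy flux."""
    n = n0
    for _ in range(steps):
        phi = phi_true * (1.0 + rng.gauss(0.0, rel_noise))
        n *= math.exp(-sigma_a * phi * dt)
    return n

def batch_depletion(n_batches, **kwargs):
    """Batch method: independent replicas give both the mean density and an
    estimate of the overall (local plus propagated) statistical error."""
    finals = [one_depletion_replica(rng=random.Random(1000 + b), **kwargs)
              for b in range(n_batches)]
    return statistics.mean(finals), statistics.stdev(finals)
```

The batch standard deviation captures the error propagated through all steps, which a single depletion run cannot report on its own.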
Federal Laboratory Consortium — This laboratory contains a number of commercial off-the-shelf and in-house software packages allowing for both statistical analysis as well as mathematical modeling...
International Nuclear Information System (INIS)
Moscati, G.
1983-01-01
The foundation of a 'National Laboratory' that would support a research center in synchrotron radiation applications is proposed. The essential features of such a laboratory, which differ from those of other centers in Brazil, are presented. (L.C.) [pt
Federal Laboratory Consortium — The Geomechanics Laboratory allows its users to measure rock properties under a wide range of simulated service conditions up to very high pressures and complex load...
Directory of Open Access Journals (Sweden)
Bruno Villalba
2009-10-01
Full Text Available This volume, produced in collaboration with Flore Henninger, with a preface by Viviane Claude and an afterword by Corinne Larrue, is edited by Philippe Hamman (senior lecturer in sociology in the urban-planning department of the UFR of social sciences, social practices and development at the University of Strasbourg; in 2008 he edited Penser le développement durable urbain : regards croisés, Paris, L'Harmattan) and Christine Blanc (sociologist and urban planner, research officer at the Centre de recherche...
Buisson, André
2008-01-01
Gathered under the title La vallée électrique, 210 black-and-white photographs by the photographer Emmanuel Foëx illustrate industrial architecture, urban planning and landscape across the Alpine arc. Like a classic book, the album is divided into three parts: the power station, the grid, the transformer. The author starts from the observation that throughout the Alpine arc, the furrows of the valleys are dotted with plants devoted to the production of electricity (the power station). The mountain, like a châtea...
Zaidi, H
1999-01-01
The many applications of Monte Carlo modelling in nuclear medicine imaging make it desirable to increase the accuracy and computational speed of Monte Carlo codes. The accuracy of Monte Carlo simulations strongly depends on the accuracy of the probability functions and thus on the cross-section libraries used for photon transport calculations. A comparison between different photon cross-section libraries and parametrizations implemented in Monte Carlo simulation packages developed for positron emission tomography and the most recent Evaluated Photon Data Library (EPDL97) developed by the Lawrence Livermore National Laboratory was performed for several human tissues and common detector materials for energies from 1 keV to 1 MeV. Different photon cross-section libraries and parametrizations show quite large variations as compared to the EPDL97 coefficients. This latter library is more accurate and was carefully designed in the form of look-up tables providing efficient data storage, access, and management. Toge...
Monte Carlo Particle Lists: MCPL
DEFF Research Database (Denmark)
Kittelmann, Thomas; Klinkby, Esben Bryndt; Bergbäck Knudsen, Erik
2017-01-01
A binary format with lists of particle state information, for interchanging particles between various Monte Carlo simulation applications, is presented. Portable C code for file manipulation is made available to the scientific community, along with converters and plugins for several popular...... simulation packages. Program summary: Program Title: MCPL. Program Files doi: http://dx.doi.org/10.17632/cby92vsv5g.1 Licensing provisions: CC0 for core MCPL, see LICENSE file for details. Programming language: C and C++ External routines/libraries: Geant4, MCNP, McStas, McXtrace Nature of problem: Saving...
Monte Carlo techniques in radiation therapy
Verhaegen, Frank
2013-01-01
Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book, the first of its kind, addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...
Energy Technology Data Exchange (ETDEWEB)
Herrera, Joshua M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-03-01
This report is an analysis of the means of egress and life safety requirements for the laboratory building. The building is located at Sandia National Laboratories (SNL) in Albuquerque, NM. The report includes a prescriptive-based analysis as well as a performance-based analysis. Following the analysis are appendices which contain maps of the laboratory building used throughout the analysis. The top of all the maps is assumed to be north.
Federal Laboratory Consortium — Purpose: To conduct fundamental studies of highway materials aimed at understanding both failure mechanisms and superior performance. New standard test methods are...
Federal Laboratory Consortium — The NWFSC conducts critical fisheries science research at its headquarters in Seattle, WA and at five research stations throughout Washington and Oregon. The unique...
Federal Laboratory Consortium — The Dynamics Lab replicates vibration environments for every Navy platform. Testing performed includes: Flight Clearance, Component Improvement, Qualification, Life...
Federal Laboratory Consortium — This facility provides testing stations for computer-based assessment of cognitive and behavioral Warfighter performance. This 500 square foot configurable space can...
Federal Laboratory Consortium — FUNCTION: Evaluates and improves the operational effectiveness of existing and emerging electronic warfare systems. By analyzing and visualizing simulation results...
Federal Laboratory Consortium — The Analytical Lab specializes in Oil and Hydraulic Fluid Analysis, Identification of Unknown Materials, Engineering Investigations, Qualification Testing (to support...
Federal Laboratory Consortium — The Propulsion Lab simulates field test conditions in a controlled environment, using standardized or customized test procedures. The Propulsion Lab's 11 cells can...
Mean field simulation for Monte Carlo integration
Del Moral, Pierre
2013-01-01
In the last three decades, there has been a dramatic increase in the use of interacting particle methods as a powerful tool in real-world applications of Monte Carlo simulation in computational physics, population biology, computer sciences, and statistical machine learning. Ideally suited to parallel and distributed computation, these advanced particle algorithms include nonlinear interacting jump diffusions; quantum, diffusion, and resampled Monte Carlo methods; Feynman-Kac particle models; genetic and evolutionary algorithms; sequential Monte Carlo methods; adaptive and interacting Marko
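The interaction step common to these particle methods, resampling a weighted population so that heavy particles multiply and light ones die, can be sketched generically. This is a plain multinomial resampler, not tied to any specific algorithm in the book:

```python
import random

def resample(particles, weights, rng):
    """Multinomial resampling: draw a new population of the same size with
    probability proportional to weight, then assign every survivor an equal
    share of the (preserved) total weight."""
    n = len(particles)
    total = sum(weights)
    new_particles = rng.choices(particles, weights=weights, k=n)
    return new_particles, [total / n] * n
```

After resampling, the weighted empirical distribution is unchanged in expectation, which is what keeps sequential Monte Carlo estimators unbiased.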
Monte Carlo simulations of neutron scattering instruments
International Nuclear Information System (INIS)
Aestrand, Per-Olof; Copenhagen Univ.; Lefmann, K.; Nielsen, K.
2001-01-01
A Monte Carlo simulation is an important computational tool used in many areas of science and engineering. The use of Monte Carlo techniques for simulating neutron scattering instruments is discussed. The basic ideas, techniques and approximations are presented. Since the construction of a neutron scattering instrument is very expensive, Monte Carlo software used for the design of instruments has to be validated and tested extensively. The McStas software was designed with these aspects in mind, and some of the basic principles of the McStas software are discussed. Finally, some future prospects are discussed for using Monte Carlo simulations in optimizing neutron scattering experiments. (R.P.)
Monte Carlo surface flux tallies
International Nuclear Information System (INIS)
Favorite, Jeffrey A.
2010-01-01
Particle fluxes on surfaces are difficult to calculate with Monte Carlo codes because the score requires a division by the surface-crossing angle cosine, and grazing angles lead to inaccuracies. We revisit the standard practice of dividing by half of a cosine 'cutoff' for particles whose surface-crossing cosines are below the cutoff. The theory behind this approximation is sound, but the application of the theory to all possible situations does not account for two implicit assumptions: (1) the grazing band must be symmetric about 0, and (2) a single linear expansion for the angular flux must be applied in the entire grazing band. These assumptions are violated in common circumstances; for example, for separate in-going and out-going flux tallies on internal surfaces, and for out-going flux tallies on external surfaces. In some situations, dividing by two-thirds of the cosine cutoff is more appropriate. If users were able to control both the cosine cutoff and the substitute value, they could use these parameters to make accurate surface flux tallies. The procedure is demonstrated in a test problem in which Monte Carlo surface fluxes in cosine bins are converted to angular fluxes and compared with the results of a discrete ordinates calculation.
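The cutoff substitution discussed above can be exercised in a toy tally where crossing cosines are sampled as for an isotropic angular flux (pdf 2μ on (0,1]), for which the exact expectation of the 1/μ score is 2. This is an illustrative sketch, not the paper's test problem:

```python
import math
import random

def surface_flux_tally(n, cutoff, seed=5):
    """Surface flux estimator with a cosine cutoff: each crossing scores
    1/mu, except grazing crossings (mu < cutoff), which score 2/cutoff,
    i.e. the score is divided by half the cutoff instead of by mu."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        mu = math.sqrt(rng.random())   # crossing cosines for an isotropic flux
        total += (1.0 / mu) if mu >= cutoff else 2.0 / cutoff
    return total / n
```

For this crossing density, which is linear in μ through the grazing band, the half-cutoff rule is exactly unbiased; that is the situation the standard theory assumes, and the paper's point is that real tallies often violate it.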
On the use of stochastic approximation Monte Carlo for Monte Carlo integration
Liang, Faming
2009-01-01
The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration
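Reusing samples from a working distribution for integration under a target rests on importance reweighting. The following generic self-normalized estimator illustrates the principle (it is not the SAMC algorithm itself, whose weights come from its own region construction):

```python
import math
import random

def self_normalized_estimate(f, log_target, draw, log_proposal, n, seed=11):
    """Self-normalized importance sampling: samples drawn from a working
    (proposal) distribution are reweighted by target/proposal ratios to
    estimate E_target[f]; the target density need only be known up to a
    normalizing constant."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        x = draw(rng)
        w = math.exp(log_target(x) - log_proposal(x))
        num += w * f(x)
        den += w
    return num / den
```

For instance, with an unnormalized standard-normal target and a uniform proposal on [-5, 5], the estimator recovers E[x²] = 1.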
The application of weight windows to 'Global' Monte Carlo problems
International Nuclear Information System (INIS)
Becker, T. L.; Larsen, E. W.
2009-01-01
This paper describes two basic types of global deep-penetration (shielding) problems: the global flux problem and the global response problem. For each of these, two methods for generating weight windows are presented. The first approach, developed by the authors of this paper and referred to generally as the Global Weight Window, constructs a weight window that distributes Monte Carlo particles according to a user-specified distribution. The second approach, developed at Oak Ridge National Laboratory and referred to as FW-CADIS, constructs a weight window based on intuitively extending the concept of the source-detector problem to global problems. The numerical results confirm that the theory used to describe the Monte Carlo particle distribution for a given weight window is valid and that the figure of merit is strongly correlated to the Monte Carlo particle distribution. Furthermore, they illustrate that, while both methods are capable of obtaining the correct solution, the Global Weight Window distributes particles much more uniformly than FW-CADIS. As a result, the figure of merit is higher for the Global Weight Window. (authors)
Monte Carlo simulations of neutron-scattering instruments using McStas
DEFF Research Database (Denmark)
Nielsen, K.; Lefmann, K.
2000-01-01
Monte Carlo simulations have become an essential tool for improving the performance of neutron-scattering instruments, since the level of sophistication in the design of instruments is defeating purely analytical methods. The program McStas, being developed at Risø National Laboratory, includes...
International Nuclear Information System (INIS)
Moore, J.G.
1974-01-01
The Monte Carlo code MONK is a general program written to provide a high degree of flexibility to the user. MONK is distinguished by its detailed representation of nuclear data in point form i.e., the cross-section is tabulated at specific energies instead of the more usual group representation. The nuclear data are unadjusted in the point form but recently the code has been modified to accept adjusted group data as used in fast and thermal reactor applications. The various geometrical handling capabilities and importance sampling techniques are described. In addition to the nuclear data aspects, the following features are also described; geometrical handling routines, tracking cycles, neutron source and output facilities. 12 references. (U.S.)
Monte Carlo lattice program KIM
International Nuclear Information System (INIS)
Cupini, E.; De Matteis, A.; Simonini, R.
1980-01-01
The Monte Carlo program KIM solves the steady-state linear neutron transport equation for a fixed-source problem or, by successive fixed-source runs, for the eigenvalue problem, in a two-dimensional thermal reactor lattice. Fluxes and reaction rates are the main quantities computed by the program, from which power distribution and few-group averaged cross sections are derived. The simulation ranges from 10 MeV to zero and includes anisotropic and inelastic scattering in the fast energy region, the epithermal Doppler broadening of the resonances of some nuclides, and the thermalization phenomenon by taking into account the thermal velocity distribution of some molecules. Besides the well known combinatorial geometry, the program allows complex configurations to be represented by a discrete set of points, an approach greatly improving calculation speed
Advanced Computational Methods for Monte Carlo Calculations
Energy Technology Data Exchange (ETDEWEB)
Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2018-01-12
This course is intended for graduate students who already have a basic understanding of Monte Carlo methods. It focuses on advanced topics that may be needed for thesis research, for developing new state-of-the-art methods, or for working with modern production Monte Carlo codes.
Nested Sampling with Constrained Hamiltonian Monte Carlo
Betancourt, M. J.
2010-01-01
Nested sampling is a powerful approach to Bayesian inference ultimately limited by the computationally demanding task of sampling from a heavily constrained probability distribution. An effective algorithm in its own right, Hamiltonian Monte Carlo is readily adapted to efficiently sample from any smooth, constrained distribution. Utilizing this constrained Hamiltonian Monte Carlo, I introduce a general implementation of the nested sampling algorithm.
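The nested sampling loop itself is short; the hard part, as the abstract notes, is drawing new points from the likelihood-constrained prior. In the sketch below that step is done by naive rejection sampling on a one-dimensional uniform prior (the paper replaces exactly this step with constrained Hamiltonian Monte Carlo, which scales far better in high dimensions); the test likelihood and all parameters are illustrative.

```python
import math, random

def logaddexp(a, b):
    """Numerically stable log(exp(a) + exp(b))."""
    if a == -math.inf:
        return b
    hi, lo = max(a, b), min(a, b)
    return hi + math.log1p(math.exp(lo - hi))

def nested_sampling_evidence(loglike, n_live=400, n_iter=2400, seed=0):
    """Minimal nested sampling with a uniform prior on [0, 1]: repeatedly
    discard the worst live point, shrink the prior volume by ~1/n_live,
    and accumulate its evidence contribution."""
    rng = random.Random(seed)
    live = [rng.random() for _ in range(n_live)]
    logL = [loglike(x) for x in live]
    logZ, logX = -math.inf, 0.0               # evidence, prior volume left
    shrink = math.log(n_live / (n_live + 1))  # mean log-volume shrink per step
    for _ in range(n_iter):
        i = min(range(n_live), key=lambda j: logL[j])
        logZ = logaddexp(logZ, logL[i] + logX + math.log(1 - math.exp(shrink)))
        logX += shrink
        while True:                           # resample inside L > L_min
            x = rng.random()
            if loglike(x) > logL[i]:
                live[i], logL[i] = x, loglike(x)
                break
    for lL in logL:                           # remaining live-point mass
        logZ = logaddexp(logZ, lL + logX - math.log(n_live))
    return math.exp(logZ)
```

For a Gaussian likelihood of width 0.05 centred at 0.5, the evidence should approach 0.05 * sqrt(2 * pi) ≈ 0.125.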
Monte Carlo Treatment Planning for Advanced Radiotherapy
DEFF Research Database (Denmark)
Cronholm, Rickard
This Ph.D. project describes the development of a workflow for Monte Carlo Treatment Planning for clinical radiotherapy plans. The workflow may be utilized to perform an independent dose verification of treatment plans. Modern radiotherapy treatment delivery is often conducted by dynamically modulating the intensity of the field during the irradiation. The workflow described has the potential to fully model the dynamic delivery, including gantry rotation during irradiation, of modern radiotherapy. Three cornerstones of Monte Carlo Treatment Planning are identified: building, commissioning and validation of a Monte Carlo model of a medical linear accelerator (i), converting a CT scan of a patient to a Monte Carlo compliant phantom (ii) and translating the treatment plan parameters (including beam energy, angles of incidence, collimator settings etc.) to a Monte Carlo input file (iii). A protocol...
Monte Carlo simulation in nuclear medicine
International Nuclear Information System (INIS)
Morel, Ch.
2007-01-01
The Monte Carlo method allows random processes to be simulated using series of pseudo-random numbers. It has become an important tool in nuclear medicine to assist in the design of new medical imaging devices, optimise their use and analyse their data. Presently, the sophistication of the simulation tools allows the introduction of Monte Carlo predictions into data correction and image reconstruction processes. The ability to simulate time-dependent processes opens up new horizons for Monte Carlo simulation in nuclear medicine. In the near future, these developments will make it possible to tackle imaging and dosimetry issues simultaneously, and soon case-specific Monte Carlo simulations may become part of the nuclear medicine diagnostic process. This paper describes some Monte Carlo method basics and the sampling methods that were developed for it. It gives a referenced list of different simulation software used in nuclear medicine and enumerates some of their present and prospective applications. (author)
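One of the sampling methods the abstract alludes to, inverse-transform sampling, is the workhorse of photon transport: the distance to the next interaction follows p(x) = mu * exp(-mu*x), so a uniform deviate U maps to x = -ln(U)/mu. A minimal sketch (the value of mu is illustrative):

```python
import math, random

def sample_free_path(mu, rng):
    """Inverse-transform sampling: solve U = CDF(x) = 1 - exp(-mu*x)
    for x; using 1-U (also uniform) avoids log(0)."""
    return -math.log(1.0 - rng.random()) / mu

rng = random.Random(42)
mu = 0.2                      # attenuation coefficient in 1/cm (illustrative)
n = 100000
mean_path = sum(sample_free_path(mu, rng) for _ in range(n)) / n
# mean_path approaches the mean free path 1/mu = 5 cm
```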
Federal Laboratory Consortium — FUNCTION: Provides an environment and facilities for auditory display research. A primary focus is the performance use of binaurally rendered 3D sound in conjunction...
International Nuclear Information System (INIS)
Ephraim, D.C.; Pednekar, A.R.
1993-01-01
A target laboratory for making stripper foils for the accelerator and various targets for use in experiments has been set up at the pelletron accelerator facility. The facilities available in the laboratory, all for target preparation, are: (1) a d.c. glow discharge setup, (2) a carbon arc setup, and (3) a vacuum evaporation setup (resistance heating), together with an electron beam source and a rolling mill. They are described. The centrifugal deposition technique is used for target preparation. (author). 3 figs
Semiconductor Electrical Measurements Laboratory
Federal Laboratory Consortium — The Semiconductor Electrical Measurements Laboratory is a research laboratory which complements the Optical Measurements Laboratory. The laboratory provides for Hall...
Extending Strong Scaling of Quantum Monte Carlo to the Exascale
Shulenburger, Luke; Baczewski, Andrew; Luo, Ye; Romero, Nichols; Kent, Paul
Quantum Monte Carlo is one of the most accurate and most computationally expensive methods for solving the electronic structure problem. In spite of its significant computational expense, its massively parallel nature is ideally suited to petascale computers, which have enabled a wide range of applications to relatively large molecular and extended systems. Exascale capabilities have the potential to enable the application of QMC to significantly larger systems, capturing much of the complexity of real materials such as defects and impurities. However, both memory and computational demands will require significant changes to current algorithms to realize this possibility. This talk will detail both the causes of the problem and potential solutions. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corp., a wholly owned subsidiary of Lockheed Martin Corp., for the US Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Automated-biasing approach to Monte Carlo shipping-cask calculations
International Nuclear Information System (INIS)
Hoffman, T.J.; Tang, J.S.; Parks, C.V.; Childs, R.L.
1982-01-01
Computer Sciences at Oak Ridge National Laboratory, under a contract with the Nuclear Regulatory Commission, has developed the SCALE system for performing standardized criticality, shielding, and heat transfer analyses of nuclear systems. During the early phase of shielding development in SCALE, it was established that Monte Carlo calculations of radiation levels exterior to a spent fuel shipping cask would be extremely expensive. This cost can be substantially reduced by proper biasing of the Monte Carlo histories. The purpose of this study is to develop and test an automated biasing procedure for the MORSE-SGC/S module of the SCALE system
International Nuclear Information System (INIS)
Tabary, J.; Gliere, A.
2001-01-01
A Monte Carlo radiation transport simulation program, EGS Nova, and a computer-aided design software package, BRL-CAD, have been coupled within the framework of Sindbad, a nondestructive evaluation (NDE) simulation system. In its current status, the program is very valuable in an NDE laboratory context, as it helps simulate the images due to the uncollided and scattered photon fluxes in a single NDE software environment, without having to switch to a separate Monte Carlo code and its parameter set. Numerical validations show good agreement with EGS4-computed and published data. As the program's major drawback is the execution time, computational efficiency improvements are foreseen. (orig.)
International Nuclear Information System (INIS)
1978-01-01
This report from the Dutch Ministry of Health is an advisory document concerned with isotope laboratories in hospitals, in connection with the Dutch laws for hospitals. It discusses which hospitals should have isotope laboratories and concludes that as many hospitals as possible should have small laboratories so that emergency cases can be dealt with. It divides the Netherlands into regions and suggests which hospitals should have these facilities. The questions of how big each lab. is to be, what equipment each has, how each lab. is organised, what therapeutic and diagnostic work should be carried out by each, etc. are discussed. The answers are provided by reports from working groups for in vivo diagnostics, in vitro diagnostics, therapy, and safety and their results form the criteria for the licences of isotope labs. The results of a questionnaire for isotope labs. already in the Netherlands are presented, and their activities outlined. (C.F.)
Importance iteration in MORSE Monte Carlo calculations
International Nuclear Information System (INIS)
Kloosterman, J.L.; Hoogenboom, J.E.
1994-01-01
An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example that shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation
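The payoff of good biasing parameters can be illustrated on a one-dimensional toy problem (not the MORSE machinery itself): estimating a deep-penetration probability P(X > a) for an exponential path length X, comparing the analog estimator with an importance-sampled one that stretches the sampling density toward the detector and compensates with a statistical weight. All parameters are illustrative.

```python
import math, random

def deep_penetration(a=8.0, n=50000, seed=3):
    """Analog vs. importance-sampled estimate of P(X > a), X ~ Exp(1).
    Exact answer: exp(-a). The biased run samples from Exp(1/a) and
    multiplies each score by the density ratio (the statistical weight)."""
    rng = random.Random(seed)
    analog = sum(-math.log(1 - rng.random()) > a for _ in range(n)) / n
    lam = 1.0 / a                         # stretched sampling density
    total = 0.0
    for _ in range(n):
        x = -math.log(1 - rng.random()) / lam
        if x > a:
            # weight = true density / biased density, keeps the estimate unbiased
            total += math.exp(-x) / (lam * math.exp(-lam * x))
    return analog, total / n
```

With a = 8 the analog run scores only a handful of times in 50,000 histories, while the biased run scores roughly half the time with small weights, giving a far lower variance for the same cost.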
Monte Carlo approaches to light nuclei
International Nuclear Information System (INIS)
Carlson, J.
1990-01-01
Significant progress has been made recently in the application of Monte Carlo methods to the study of light nuclei. We review new Green's function Monte Carlo results for the alpha particle, Variational Monte Carlo studies of 16O, and methods for low-energy scattering and transitions. Through these calculations, a coherent picture of the structure and electromagnetic properties of light nuclei has arisen. In particular, we examine the effect of the three-nucleon interaction and the importance of exchange currents in a variety of experimentally measured properties, including form factors and capture cross sections. 29 refs., 7 figs
Monte Carlo approaches to light nuclei
Energy Technology Data Exchange (ETDEWEB)
Carlson, J.
1990-01-01
Significant progress has been made recently in the application of Monte Carlo methods to the study of light nuclei. We review new Green's function Monte Carlo results for the alpha particle, Variational Monte Carlo studies of {sup 16}O, and methods for low-energy scattering and transitions. Through these calculations, a coherent picture of the structure and electromagnetic properties of light nuclei has arisen. In particular, we examine the effect of the three-nucleon interaction and the importance of exchange currents in a variety of experimentally measured properties, including form factors and capture cross sections. 29 refs., 7 figs.
Importance iteration in MORSE Monte Carlo calculations
International Nuclear Information System (INIS)
Kloosterman, J.L.; Hoogenboom, J.E.
1994-02-01
An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example, which shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation. (orig.)
Monte Carlo simulation for soot dynamics
Zhou, Kun
2012-01-01
A new Monte Carlo method, termed comb-like frame Monte Carlo, is developed to simulate soot dynamics. A detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas-phase solver Chemkin II to simulate soot formation in a 1-D premixed burner-stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with the measurements available in the literature. The origin of the bimodal particle size distribution is revealed with quantitative proof.
Montero, R; Mundy, D; Albright, A; Grose, C; Trought, M C T; Cohen, D; Chooi, K M; MacDiarmid, R; Flexas, J; Bota, J
2016-04-15
In order to determine the effects of Grapevine leafroll-associated virus 3 (GLRaV-3) on fruit composition and the chemical profile of juice and wine from Vitis vinifera L. cv. Sauvignon blanc grown in New Zealand, composition variables were measured on fruit from vines either infected with GLRaV-3 (established or recent infections) or uninfected. Physiological ripeness (20.4°Brix) was the criterion used to set the harvest date for each of the three treatments. The date of grape ripeness was strongly affected by virus infection. In juice and wine, GLRaV-3 infection prior to 2008 reduced titratable acidity compared with the uninfected control. Differences observed in amino acids among the three infection-status groups did not modify basic wine chemical properties. In conclusion, GLRaV-3 infection slowed grape ripening but, at equivalent ripeness, had minimal effects on juice and wine chemistry. Time of infection produced differences in specific plant physiological variables. Copyright © 2015 Elsevier Ltd. All rights reserved.
Monte Carlo Codes Invited Session
International Nuclear Information System (INIS)
Trama, J.C.; Malvagi, F.; Brown, F.
2013-01-01
This document lists 22 Monte Carlo codes used in radiation transport applications throughout the world. For each code the names of the organization and country and/or place are given. We have the following computer codes. 1) ARCHER, USA, RPI; 2) COG11, USA, LLNL; 3) DIANE, France, CEA/DAM Bruyeres; 4) FLUKA, Italy and CERN, INFN and CERN; 5) GEANT4, International GEANT4 collaboration; 6) KENO and MONACO (SCALE), USA, ORNL; 7) MC21, USA, KAPL and Bettis; 8) MCATK, USA, LANL; 9) MCCARD, South Korea, Seoul National University; 10) MCNP6, USA, LANL; 11) MCU, Russia, Kurchatov Institute; 12) MONK and MCBEND, United Kingdom, AMEC; 13) MORET5, France, IRSN Fontenay-aux-Roses; 14) MVP2, Japan, JAEA; 15) OPENMC, USA, MIT; 16) PENELOPE, Spain, Barcelona University; 17) PHITS, Japan, JAEA; 18) PRIZMA, Russia, VNIITF; 19) RMC, China, Tsinghua University; 20) SERPENT, Finland, VTT; 21) SUPERMONTECARLO, China, CAS INEST FDS Team Hefei; and 22) TRIPOLI-4, France, CEA Saclay
Advanced computers and Monte Carlo
International Nuclear Information System (INIS)
Jordan, T.L.
1979-01-01
High-performance parallelism that is currently available is synchronous in nature. It is manifested in such architectures as the Burroughs ILLIAC-IV, CDC STAR-100, TI ASC, CRI CRAY-1, ICL DAP, and many special-purpose array processors designed for signal processing. This form of parallelism has apparently not been of significant value to many important Monte Carlo calculations. Nevertheless, there is much asynchronous parallelism in many of these calculations. A model of a production code that requires up to 20 hours per problem on a CDC 7600 is studied for suitability on some asynchronous architectures that are on the drawing board. The code is described, and some of its properties and resource requirements are identified for comparison with the corresponding properties and resources of some asynchronous multiprocessor architectures. Arguments are made for programmer aids and special syntax to identify and support important asynchronous parallelism. 2 figures, 5 tables
Adaptive Markov Chain Monte Carlo
Jadoon, Khan
2016-08-08
A substantial interpretation of electromagnetic induction (EMI) measurements requires quantifying the optimal model parameters and uncertainty of a nonlinear inverse problem. For this purpose, an adaptive Bayesian Markov chain Monte Carlo (MCMC) algorithm is used to assess multi-orientation and multi-offset EMI measurements in an agricultural field with non-saline and saline soil. In the MCMC simulations, the posterior distribution was computed using Bayes' rule. The electromagnetic forward model, based on the full solution of Maxwell's equations, was used to simulate the apparent electrical conductivity measured with the configurations of the EMI instrument, the CMD mini-Explorer. The model parameters and uncertainty for the three-layered earth model are investigated using synthetic data. Our results show that in the non-saline scenario the layer-thickness parameters are not as well estimated as the layer electrical conductivities, because layer thickness exhibits a low sensitivity to the EMI measurements and is hence difficult to resolve. Application of the proposed MCMC-based inversion to field measurements in a drip irrigation system demonstrates that the model parameters can be well estimated for the saline soil as compared to the non-saline soil, and provides useful insight about parameter uncertainty for the assessment of the model outputs.
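The Metropolis kernel at the core of such MCMC inversions fits in a few lines. The sketch below targets a standard normal with a fixed proposal width; the adaptive algorithm described above tunes this width on the fly, which is omitted here for clarity.

```python
import math, random

def metropolis_chain(n=50000, step=1.0, seed=2):
    """Random-walk Metropolis targeting pi(x) ~ exp(-x^2 / 2).
    A proposal is accepted with probability min(1, pi(x') / pi(x))."""
    rng = random.Random(seed)
    x, chain = 0.0, []
    for _ in range(n):
        prop = x + rng.uniform(-step, step)
        # acceptance ratio for the standard normal target
        if rng.random() < math.exp((x * x - prop * prop) / 2.0):
            x = prop                      # accept; otherwise keep x
        chain.append(x)
    return chain
```

The sample mean and variance of the chain converge to 0 and 1; an adaptive scheme would additionally monitor the acceptance rate and rescale `step` during burn-in.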
Directory of Open Access Journals (Sweden)
Renata de Melo Rosa
2016-11-01
This article analyses the centrality of the category of person in contemporary Haiti, which is founded on the contextual meanings attributed to the notion of nèg (roughly translatable into Portuguese as "negro/a"), a notion that both precedes and founds the category of person. However, even though the category of person in Haiti is anchored in a "racial" nomenclature, nèg is not necessarily tied to skin colour but to the quality of each subject's belonging to the Haitian nation. Identifying oneself and being identified as a nèg activates, in the identity process and in intersubjective dialogue, important diacritics whose meanings are given collectively and contextually within the web of meanings woven in the Haitian context. Given this contextual and constantly dynamic character, a "person" who to Western eyes might resemble what is understood in Brazil as "black" may, in Haiti, not be immediately identified as a nèg or as a person who "belongs" to Haiti. In other words, each nèg (in order to remain a nèg and therefore a "person") must continually enact, according to the contours of Haitian culture that inscribe a nèg, the various ritual obligations of belonging to this category. Seen from this perspective, the category nèg can be ritualised by a white Haitian, provided that the rituals of belonging to the nation are also enacted, turning the subject into a Nèg Blanc (white black), the expression that gives this article its title. Finally, this reflection proposes that the category nèg, as a synonym for the category of person, is a counter-narrative to the attempts at racial inferiorisation in force during the French colonial period. Keywords: nèg; notion of person; Haiti; nation; colour categories. Nèg Blanc sa a (Aquella negra
11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing
Nuyens, Dirk
2016-01-01
This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.
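The difference between the two families of methods in the conference title can be seen on a one-line integral: plain Monte Carlo averages f over pseudo-random points (error O(n^-1/2)), while quasi-Monte Carlo uses a deterministic low-discrepancy sequence such as the base-2 van der Corput sequence (error close to O(1/n) for smooth integrands). The integrand below is illustrative.

```python
import random

def van_der_corput(i, base=2):
    """i-th element of the base-b van der Corput low-discrepancy sequence
    (the digits of i are mirrored about the radix point)."""
    x, denom = 0.0, 1.0
    while i:
        i, digit = divmod(i, base)
        denom *= base
        x += digit / denom
    return x

f = lambda x: x * x                      # integral over [0, 1] is 1/3
n = 4096
rng = random.Random(0)
mc = sum(f(rng.random()) for _ in range(n)) / n
qmc = sum(f(van_der_corput(i)) for i in range(1, n + 1)) / n
```

With n = 4096 the quasi-Monte Carlo estimate is accurate to roughly one part in ten thousand, about an order of magnitude better than the pseudo-random average at the same cost.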
International Nuclear Information System (INIS)
Hughes, S.B.
1986-01-01
The paper concerns the work done by the Kingsbury Laboratories of Fairey Engineering Company for the nuclear industry. The services provided include: monitoring of nuclear graphite machining, specialist welding, non-destructive testing, and metallurgy testing; all are briefly described. (U.K.)
Quantum Monte Carlo approaches for correlated systems
Becca, Federico
2017-01-01
Over the past several decades, computational approaches to studying strongly-interacting systems have become increasingly varied and sophisticated. This book provides a comprehensive introduction to state-of-the-art quantum Monte Carlo techniques relevant for applications in correlated systems. It gives a clear overview of variational wave functions and a detailed presentation of stochastic sampling, including Markov chains and Langevin dynamics, which is developed into a discussion of Monte Carlo methods. The variational technique is described from its foundations to a detailed description of its algorithms. Further topics discussed include optimisation techniques, real-time dynamics and projection methods, including Green's function, reptation and auxiliary-field Monte Carlo, from basic definitions to advanced algorithms for efficient codes, and the book concludes with recent developments in continuum space. Quantum Monte Carlo Approaches for Correlated Systems provides an extensive reference ...
Monte Carlo simulations for plasma physics
International Nuclear Information System (INIS)
Okamoto, M.; Murakami, S.; Nakajima, N.; Wang, W.X.
2000-07-01
Plasma behaviours are very complicated and generally difficult to analyse. However, when collisional processes play an important role in the plasma behaviour, the Monte Carlo method is often employed as a useful tool. For example, in neutral beam injection (NBI) heating, electron or ion cyclotron heating, and alpha heating, Coulomb collisions slow down highly energetic particles and pitch-angle scatter them. These processes are often studied by the Monte Carlo technique, and good agreement can be obtained with experimental results. Recently, the Monte Carlo method has been developed to study fast-particle transport associated with heating and the generation of the radial electric field. Further, it is applied to investigating neoclassical transport in plasmas with steep density and temperature gradients, which is beyond conventional neoclassical theory. In this report, we briefly summarize the research done by the present authors utilizing the Monte Carlo method. (author)
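The pitch-angle scattering step in such codes is a stochastic update of the pitch cosine. A minimal sketch, using the Lorentz operator d(xi) = -nu*xi*dt + sqrt(nu*(1-xi^2))*dW so that the ensemble mean of xi decays as exp(-nu*t); slowing-down and the velocity dependence of nu are omitted, and all numbers are illustrative.

```python
import math, random

def mean_pitch(nu=5.0, t_end=0.2, dt=2e-4, n_hist=500, seed=9):
    """Monte Carlo pitch-angle scattering: Euler-Maruyama steps of the
    Lorentz collision operator for the pitch cosine xi, averaged over
    histories. The ensemble mean of xi relaxes as exp(-nu * t_end)."""
    rng = random.Random(seed)
    steps = int(round(t_end / dt))
    total = 0.0
    for _ in range(n_hist):
        xi = 1.0                          # particle born field-aligned
        for _ in range(steps):
            diff = math.sqrt(max(nu * (1.0 - xi * xi) * dt, 0.0))
            xi += -nu * xi * dt + rng.gauss(0.0, diff)
            xi = max(-1.0, min(1.0, xi))  # keep the cosine in range
        total += xi
    return total / n_hist
```

With nu * t_end = 1, the averaged pitch should be close to exp(-1) ≈ 0.37.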
Frontiers of quantum Monte Carlo workshop: preface
International Nuclear Information System (INIS)
Gubernatis, J.E.
1985-01-01
The introductory remarks, table of contents, and list of attendees are presented from the proceedings of the conference, Frontiers of Quantum Monte Carlo, which appeared in the Journal of Statistical Physics
Avariide kiuste Monte Carlosse / Aare Arula
Arula, Aare
2007-01-01
See also Tehnika dlja Vsehh no. 3, pp. 26-27. Karl Siitan and his crew, who set off from Tallinn for the Monte Carlo rally on 26 January 1937, were in for adventures that nearly cost them their lives.
A continuation multilevel Monte Carlo algorithm
Collier, Nathan; Haji Ali, Abdul Lateef; Nobile, Fabio; von Schwerin, Erik; Tempone, Raul
2014-01-01
We propose a novel Continuation Multi Level Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending when the required error
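The multilevel idea that CMLMC builds on can be sketched for a toy problem: estimating E[S_T] for geometric Brownian motion under Euler time stepping, with the telescoping sum E[P_L] = E[P_0] + sum over l of E[P_l - P_(l-1)] estimated from coarse/fine paths driven by the same Brownian increments. The continuation logic for choosing tolerances is not shown, and all parameters are illustrative.

```python
import math, random

def mlmc_gbm_mean(levels=4, n0=20000, seed=7, s0=1.0, r=0.05, sig=0.2, T=1.0):
    """Multilevel Monte Carlo estimate of E[S_T] for geometric Brownian
    motion dS = r*S*dt + sig*S*dW under Euler stepping. Level l uses
    2**l steps; correction terms couple a fine path with a coarse path
    built from the same Brownian increments, so their variance is small
    and few samples are needed on fine (expensive) levels."""
    rng = random.Random(seed)
    total = 0.0
    for lev in range(levels):
        n = max(n0 // 4 ** lev, 200)      # fewer samples on finer levels
        steps = 2 ** lev
        h = T / steps
        acc = 0.0
        for _ in range(n):
            dws = [rng.gauss(0.0, math.sqrt(h)) for _ in range(steps)]
            s_fine = s0
            for dw in dws:
                s_fine += r * s_fine * h + sig * s_fine * dw
            if lev == 0:
                acc += s_fine
            else:
                # coarse path: same Brownian motion, increments summed in pairs
                s_coarse, hc = s0, 2 * h
                for k in range(0, steps, 2):
                    s_coarse += r * s_coarse * hc + sig * s_coarse * (dws[k] + dws[k + 1])
                acc += s_fine - s_coarse
        total += acc / n
    return total
```

The analytic mean is s0 * exp(r * T); the multilevel estimate reaches it at a fraction of the cost of running all samples on the finest grid.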
Simulation and the Monte Carlo method
Rubinstein, Reuven Y
2016-01-01
Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov Chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...
Hybrid Monte Carlo methods in computational finance
Leitao Rodriguez, A.
2017-01-01
Monte Carlo methods are highly appreciated and intensively employed in computational finance in the context of financial derivatives valuation or risk management. The method offers valuable advantages like flexibility, easy interpretation and straightforward implementation. Furthermore, the
Bartalini, P.; Kryukov, A.; Selyuzhenkov, Ilya V.; Sherstnev, A.; Vologdin, A.
2004-01-01
We present the Monte-Carlo events Data Base (MCDB) project and its development plans. MCDB facilitates communication between authors of Monte-Carlo generators and experimental users. It also provides convenient book-keeping and easy access to generator-level samples. The first release of MCDB is now operational for the CMS collaboration. In this paper we review the main ideas behind MCDB and discuss future plans to develop this data base further within the CERN LCG framework.
Multilevel Monte Carlo in Approximate Bayesian Computation
Jasra, Ajay
2017-02-13
In the following article we consider approximate Bayesian computation (ABC) inference. We introduce a method for numerically approximating ABC posteriors using multilevel Monte Carlo (MLMC). A sequential Monte Carlo version of the approach is developed, and it is shown under some assumptions that, for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.
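The i.i.d. baseline that the multilevel method is compared against is plain ABC rejection sampling, sketched here for inferring the mean of a Gaussian. The prior, tolerance and sample sizes are all illustrative.

```python
import random

def abc_rejection(data, n_prop=20000, eps=0.05, seed=11):
    """ABC rejection: draw theta from the prior, simulate a data set
    under theta, and keep theta when the simulated summary statistic
    (here the sample mean) lands within eps of the observed one."""
    rng = random.Random(seed)
    obs = sum(data) / len(data)
    accepted = []
    for _ in range(n_prop):
        theta = rng.uniform(-2.0, 2.0)    # uniform prior on the mean
        m = sum(rng.gauss(theta, 1.0) for _ in range(len(data))) / len(data)
        if abs(m - obs) < eps:
            accepted.append(theta)
    return accepted

data_rng = random.Random(1)
data = [data_rng.gauss(0.5, 1.0) for _ in range(50)]   # synthetic "observed" data
post = abc_rejection(data)
# the accepted thetas approximate the ABC posterior, centred near the data mean
```

The low acceptance rate at small eps is exactly the cost problem that the multilevel construction attacks.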
Monte Carlo method applied to medical physics
International Nuclear Information System (INIS)
Oliveira, C.; Goncalves, I.F.; Chaves, A.; Lopes, M.C.; Teixeira, N.; Matos, B.; Goncalves, I.C.; Ramalho, A.; Salgado, J.
2000-01-01
The main application of the Monte Carlo method in medical physics is dose calculation. This paper shows some results of two dose calculation studies and two other different applications: optimisation of a neutron field for Boron Neutron Capture Therapy, and optimisation of a filter for a beam tube serving several purposes. The computation time needed by Monte Carlo calculations - the main barrier to their intensive use - is being overcome by faster and cheaper computers. (author)
Verification of the shift Monte Carlo code with the C5G7 reactor benchmark
International Nuclear Information System (INIS)
Sly, N. C.; Mervin, B. T.; Mosher, S. W.; Evans, T. M.; Wagner, J. C.; Maldonado, G. I.
2012-01-01
Shift is a new hybrid Monte Carlo/deterministic radiation transport code being developed at Oak Ridge National Laboratory. At its current stage of development, Shift includes a parallel Monte Carlo capability for simulating eigenvalue and fixed-source multigroup transport problems. This paper focuses on recent efforts to verify Shift's Monte Carlo component using the two-dimensional and three-dimensional C5G7 NEA benchmark problems. Comparisons were made between the benchmark eigenvalues and those output by the Shift code. In addition, mesh-based scalar flux tally results generated by Shift were compared to those obtained using MCNP5 on an identical model and tally grid. The Shift-generated eigenvalues were within three standard deviations of the benchmark and MCNP5-1.60 values in all cases. The flux tallies generated by Shift were found to be in very good agreement with those from MCNP. (authors)
Energy Technology Data Exchange (ETDEWEB)
Zamenhof, R.G.; Lin, K.; Ziegelmiller, D.; Clement, S.; Lui, C.; Harling, O.K.
Monte Carlo simulations of thermal neutron flux distributions in a mathematical head model have been compared to experimental measurements in a corresponding anthropomorphic gelatin-based head phantom irradiated by a thermal neutron beam as presently available at the MITR-II Research Reactor. Excellent agreement between Monte Carlo and experimental measurements has encouraged us to employ the Monte Carlo simulation technique to approach treatment planning problems in neutron capture therapy. We have also implemented a high-resolution alpha-track autoradiography technique originally developed in our laboratory at MIT. Initial autoradiograms produced by this technique meet our expectations in terms of the high resolution available and the ability to etch tracks without concomitant destruction of stained tissue. Our preliminary results with computer-aided track distribution analysis indicate that this approach is very promising in being able to quantify boron distributions in tissue at the subcellular level with a minimum of operator effort.
Simulation of Rossi-α method with analog Monte-Carlo method
International Nuclear Information System (INIS)
Lu Yuzhao; Xie Qilin; Song Lingli; Liu Hangang
2012-01-01
An analog Monte Carlo code for simulating the Rossi-α method, based on Geant4, was developed. The prompt neutron decay constant α was calculated for six metallic uranium configurations at Oak Ridge National Laboratory. α was also calculated by the burst-neutron method, and the result was consistent with that of the Rossi-α method. There is a difference between the results of the analog Monte Carlo simulation and experiment; the reason is the gaps between the uranium layers. The influence of the gaps decreases as the sub-criticality deepens, and the relative difference between simulation and experiment ranges from 19% to 0.19%. (authors)
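The essence of the Rossi-α analysis simulated above — recovering the prompt-neutron decay constant from the exponential time correlation of counts — can be sketched with synthetic data. Background correlations are omitted, and α and the gate settings are illustrative.

```python
import math, random

def fit_rossi_alpha(alpha=250.0, n=200000, nbins=20, tmax=0.02, seed=5):
    """Illustrative Rossi-alpha analysis: delays between correlated
    counts follow p(t) ~ exp(-alpha * t); histogram them and recover
    alpha from a log-linear least-squares fit over the time gate."""
    rng = random.Random(seed)
    width = tmax / nbins
    counts = [0] * nbins
    for _ in range(n):
        t = -math.log(1 - rng.random()) / alpha   # Exp(alpha) delay
        if t < tmax:
            counts[int(t / width)] += 1
    # least-squares line through (bin centre, log counts); slope = -alpha
    xs = [(i + 0.5) * width for i in range(nbins)]
    ys = [math.log(c) for c in counts]
    xbar, ybar = sum(xs) / nbins, sum(ys) / nbins
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    return -slope
```

In a real measurement the histogram sits on a flat accidental-coincidence background, which must be fitted and subtracted before taking logarithms.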
MCNP: a general Monte Carlo code for neutron and photon transport. Version 3A. Revision 2
International Nuclear Information System (INIS)
Briesmeister, J.F.
1986-09-01
This manual is a practical guide for the use of our general-purpose Monte Carlo code MCNP. The first chapter is a primer for the novice user. The second chapter describes the mathematics, data, physics, and Monte Carlo simulation found in MCNP. This discussion is not meant to be exhaustive - details of the particular techniques and of the Monte Carlo method itself will have to be found elsewhere. The third chapter shows the user how to prepare input for the code. The fourth chapter contains several examples, and the fifth chapter explains the output. The appendices show how to use MCNP on particular computer systems at the Los Alamos National Laboratory and also give details about some of the code internals that those who wish to modify the code may find useful. 57 refs
EURADOS action for determination of americium in skull measures in vivo and Monte Carlo simulation
International Nuclear Information System (INIS)
Lopez Ponte, M. A.; Navarro Amaro, J. F.; Perez Lopez, B.; Navarro Bravo, T.; Nogueira, P.; Vrba, T.
2013-01-01
Within Working Group 7 (internal dosimetry) of EURADOS (European Radiation Dosimetry Group e.V.), coordinated by CIEMAT, an international action has been conducted for the in vivo measurement of americium in three skull-type phantoms, using germanium detectors for gamma spectrometry together with simulation by Monte Carlo methods. The action was organised as two separate exercises, with the participation of institutions in Europe, America and Asia. Other similar actions preceded this in vivo intercomparison of measurement and Monte Carlo modelling [1]. The whole-body counting laboratory (CRC) of the internal personal dosimetry service (DPI) at CIEMAT was one of the participants in the in vivo measurement exercise. In addition, the numerical dosimetry group of CIEMAT participated in the Monte Carlo simulation exercise [2]. The preliminary results and associated findings are presented in this work. (Author)
MCNP-REN: a Monte Carlo tool for neutron detector design
Abhold, M E
2002-01-01
The development of neutron detectors makes extensive use of the predictions of detector response through the use of Monte Carlo techniques in conjunction with the point reactor model. Unfortunately, the point reactor model fails to accurately predict detector response in common applications. For this reason, the general Monte Carlo code developed at Los Alamos National Laboratory, Monte Carlo N-Particle (MCNP), was modified to simulate the pulse streams that would be generated by a neutron detector and normally analyzed by a shift register. This modified code, MCNP-Random Exponentially Distributed Neutron Source (MCNP-REN), along with the Time Analysis Program, predicts neutron detector response without using the point reactor model, making it unnecessary for the user to decide whether or not the assumptions of the point model are met for their application. MCNP-REN is capable of simulating standard neutron coincidence counting as well as neutron multiplicity counting. Measurements of mixed oxide fresh fuel w...
Saxton Transportation Operations Laboratory
Federal Laboratory Consortium — The Saxton Transportation Operations Laboratory (Saxton Laboratory) is a state-of-the-art facility for conducting transportation operations research. The laboratory...
International Nuclear Information System (INIS)
Handin, J.
1980-01-01
Our task is to design mined-repository systems that will adequately secure high-level nuclear waste for at least 10,000 yr and that will be mechanically stable for 50 to 100-yr periods of retrievability during which mistakes could be corrected and a valuable source of energy could be reclaimed, should national policy on the reprocessing of spent fuel ever change. The only credible path for the escape of radionuclides from the repository to the biosphere is through ground-water, and in hard rock, bulk permeability is largely governed by natural and artificial fracture systems. Catastrophic failure of an excavation in hard rock is likely to occur at the weakest links - the discontinuities in the rock mass that is perturbed first by mining and then by radiogenic heating. The laboratory can contribute precise measurements of the pertinent thermomechanical, hydrological and chemical properties and improve our understanding of the fundamental processes through careful experiments under well controlled conditions that simulate the prototype environment. Thus laboratory investigations are necessary, but they are not sufficient, for conventional sample sizes are small relative to natural defects like joints - i.e., the rock mass is not a continuum - and test durations are short compared to those that predictive modeling must take into account. Laboratory investigators can contribute substantially more useful data if they are provided facilities for testing large specimens (say, one cubic meter) and for creep testing of all candidate host rocks. Even so, extrapolations of laboratory data to the field are valid in neither space nor time without the firm theoretical foundations yet to be built. Meanwhile, in-situ measurements of structure-sensitive physical properties and access to direct observations of rock-mass character will be absolutely necessary
International Nuclear Information System (INIS)
1980-06-01
The report contains summaries of work carried out under the following headings: fusion research experiments; U.K. contribution to the JET project; supporting studies; theoretical plasma physics, computational physics and computing; fusion reactor studies; engineering and technology; contract research; external relations; staff, finance and services. Appendices cover main characteristics of Culham fusion experiments, staff, extra-mural projects supported by Culham Laboratory, and a list of papers written by Culham staff. (U.K.)
International Nuclear Information System (INIS)
Seamster, A.G.; Weitkamp, W.G.
1984-01-01
The lead plating of the prototype resonator has been conducted entirely in the plating laboratory at SUNY Stony Brook. Because of the considerable cost and inconvenience of transporting personnel and materials to and from Stony Brook, it is clearly impractical to plate all the resonators there. Furthermore, the high-beta resonator cannot be accommodated at Stony Brook without modifying the setup there. Consequently, the authors are constructing a plating lab in-house
Energy Technology Data Exchange (ETDEWEB)
Bettini, A., E-mail: Bettini@pd.infn.i [Padua University and INFN Section, Dipartimento di Fisca G. Galilei, Via Marzolo 8, 35131 Padova (Italy); Laboratorio Subterraneo de Canfranc, Plaza Ayuntamiento n1 2piso, Canfranc (Huesca) (Spain)
2011-01-21
Underground laboratories provide the low radioactive background environment necessary for frontier experiments in particle and nuclear astrophysics, as well as for other disciplines, such as geology and biology, that can profit from their unique characteristics. The cosmic silence allows one to explore the highest energy scales that cannot be reached with accelerators by searching for extremely rare phenomena. I will briefly review the facilities that are operational or in an advanced status of approval around the world.
Final Report: 06-LW-013, Nuclear Physics the Monte Carlo Way
International Nuclear Information System (INIS)
Ormand, W.E.
2009-01-01
This document reports the progress and accomplishments achieved in 2006-2007 with LDRD funding under the proposal 06-LW-013, 'Nuclear Physics the Monte Carlo Way'. The project was a theoretical study exploring a novel approach to a persistent problem in Monte Carlo treatments of quantum many-body systems. The goal was to implement a solution to the notorious 'sign problem' which, if successful, would permit, for the first time, exact solutions to quantum many-body systems that cannot be addressed with other methods. The project was funded under the Lab Wide LDRD competition at Lawrence Livermore National Laboratory. Its primary objective was to test the feasibility of implementing a novel approach to solving the generic quantum many-body problem, which is one of the most important problems being addressed in theoretical physics today. Instead of traditional methods based on matrix diagonalization, this proposal focused on a Monte Carlo method. The principal difficulty with Monte Carlo methods is the so-called 'sign problem'. The sign problem, which will be discussed in some detail later, is endemic to Monte Carlo approaches to the quantum many-body problem, and is the principal reason that they have not been completely successful in the past. Here, we outline our research on the 'shifted-contour method' applied to the Auxiliary Field Monte Carlo (AFMC) method
'Odontologic dosimetric card' experiments and simulations using Monte Carlo methods
International Nuclear Information System (INIS)
Menezes, C.J.M.; Lima, R. de A.; Peixoto, J.E.; Vieira, J.W.
2008-01-01
The techniques for data processing, combined with the development of fast and more powerful computers, make Monte Carlo methods one of the most widely used tools in radiation transport simulation. For applications in diagnostic radiology, this method generally uses anthropomorphic phantoms to evaluate the absorbed dose to patients during exposure. In this paper, Monte Carlo techniques were used to simulate a testing device designed for intra-oral X-ray equipment performance evaluation, called the Odontologic Dosimetric Card (CDO, from 'Cartao Dosimetrico Odontologico' in Portuguese), for different thermoluminescent detectors. This paper used two computational models of exposition, RXD/EGS4 and CDO/EGS4. In the first model, the simulation results are compared with experimental data obtained under similar conditions. The second model presents the same characteristics as the testing device studied (CDO). For the irradiations, the X-ray spectra were generated with the IPEM Report 78 spectrum processor. The attenuated spectrum was obtained for IEC 61267 qualities and various additional filters for a Pantak 320 X-ray industrial equipment. The results obtained for the study of the copper filters used in the determination of the kVp were compared with experimental data, validating the model proposed for the characterization of the CDO. The CDO will be utilized in quality assurance programs in order to guarantee that the equipment fulfills the requirements of Norm SVS No. 453/98 MS (Brazil), 'Directives of Radiation Protection in Medical and Dental Radiodiagnostic'. We conclude that EGS4 is a suitable Monte Carlo code to simulate thermoluminescent dosimeters and experimental procedures employed in the routine of the quality control laboratory in diagnostic radiology. (author)
Successful vectorization - reactor physics Monte Carlo code
International Nuclear Information System (INIS)
Martin, W.R.
1989-01-01
Most particle transport Monte Carlo codes in use today are based on the ''history-based'' algorithm, wherein one particle history at a time is simulated. Unfortunately, the ''history-based'' approach (present in all Monte Carlo codes until recent years) is inherently scalar and cannot be vectorized. In particular, the history-based algorithm cannot take advantage of vector architectures, which characterize the largest and fastest computers at the current time, vector supercomputers such as the Cray X/MP or IBM 3090/600. However, substantial progress has been made in recent years in developing and implementing a vectorized Monte Carlo algorithm. This algorithm follows portions of many particle histories at the same time and forms the basis for all successful vectorized Monte Carlo codes in use today. This paper describes the basic vectorized algorithm along with descriptions of several variations that have been developed by different researchers for specific applications. These applications have been mainly in the areas of neutron transport in nuclear reactor and shielding analysis and photon transport in fusion plasmas. The relative merits of the various approaches will be discussed and the present status of known vectorization efforts will be summarized along with available timing results, including results from the successful vectorization of 3-D general geometry, continuous energy Monte Carlo. (orig.)
Monte Carlo Simulation Tool Installation and Operation Guide
Energy Technology Data Exchange (ETDEWEB)
Aguayo Navarrete, Estanislao; Ankney, Austin S.; Berguson, Timothy J.; Kouzes, Richard T.; Orrell, John L.; Troy, Meredith D.; Wiseman, Clinton G.
2013-09-02
This document provides information on software and procedures for Monte Carlo simulations based on the Geant4 toolkit, the ROOT data analysis software and the CRY cosmic ray library. These tools have been chosen for their application to shield design and activation studies as part of the simulation task for the Majorana Collaboration. This document includes instructions for installation, operation and modification of the simulation code in a high cyber-security computing environment, such as the Pacific Northwest National Laboratory network. It is intended as a living document, and will be periodically updated. It is a starting point for information collection by an experimenter, and is not the definitive source. Users should consult with one of the authors for guidance on how to find the most current information for their needs.
MCB. A continuous energy Monte Carlo burnup simulation code
International Nuclear Information System (INIS)
Cetnar, J.; Wallenius, J.; Gudowski, W.
1999-01-01
A code for integrated simulation of neutronics and burnup based upon continuous energy Monte Carlo techniques and transmutation trajectory analysis has been developed. Being especially well suited for studies of nuclear waste transmutation systems, the code is an extension of the well validated MCNP transport program of Los Alamos National Laboratory. Among the advantages of the code (named MCB) is a fully integrated data treatment combined with a time-stepping routine that automatically corrects for burnup dependent changes in reaction rates, neutron multiplication, material composition and self-shielding. Fission product yields are treated as continuous functions of incident neutron energy, using a non-equilibrium thermodynamical model of the fission process. In the present paper a brief description of the code and applied methods is given. (author)
BOMAB phantom manufacturing quality assurance study using Monte Carlo computations
International Nuclear Information System (INIS)
Mallett, M.W.
1994-01-01
Monte Carlo calculations have been performed to assess the importance of and quantify quality assurance protocols in the manufacturing of the Bottle-Manikin-Absorption (BOMAB) phantom for calibrating in vivo measurement systems. The parameters characterizing the BOMAB phantom that were examined included height, fill volume, fill material density, wall thickness, and source concentration. Transport simulation was performed for monoenergetic photon sources of 0.200, 0.662, and 1.460 MeV. A linear response was observed in the photon current exiting the exterior surface of the BOMAB phantom due to variations in these parameters. Sensitivity studies were also performed for an in vivo system in operation at the Pacific Northwest Laboratories in Richland, WA. Variations in detector current for this in vivo system are reported for changes in the BOMAB phantom parameters studied here. Physical justifications for the observed results are also discussed
Monte Carlo simulations for design of the KFUPM PGNAA facility
Naqvi, A A; Maslehuddin, M; Kidwai, S
2003-01-01
Monte Carlo simulations were carried out to design a 2.8 MeV neutron-based prompt gamma ray neutron activation analysis (PGNAA) setup for elemental analysis of cement samples. The elemental analysis was carried out using prompt gamma rays produced through capture of thermal neutrons in sample nuclei. The basic design of the PGNAA setup consists of a cylindrical cement sample enclosed in a cylindrical high-density polyethylene moderator placed between a neutron source and a gamma ray detector. In these simulations the predominant geometrical parameters of the PGNAA setup were optimized, including moderator size, sample size and shielding of the detector. Using the results of the simulations, an experimental PGNAA setup was then fabricated at the 350 kV Accelerator Laboratory of this University. The design calculations were checked experimentally through thermal neutron flux measurements inside the PGNAA moderator. A test prompt gamma ray spectrum of the PGNAA setup was also acquired from a Portland cement samp...
Monte Carlo strategies in scientific computing
Liu, Jun S
2008-01-01
This paperback edition is a reprint of the 2001 Springer edition. This book provides a self-contained and up-to-date treatment of the Monte Carlo method and develops a common framework under which various Monte Carlo techniques can be "standardized" and compared. Given the interdisciplinary nature of the topics and a moderate prerequisite for the reader, this book should be of interest to a broad audience of quantitative researchers such as computational biologists, computer scientists, econometricians, engineers, probabilists, and statisticians. It can also be used as the textbook for a graduate-level course on Monte Carlo methods. Many problems discussed in the later chapters can be potential thesis topics for masters' or PhD students in statistics or computer science departments. Jun Liu is Professor of Statistics at Harvard University, with a courtesy Professor appointment at Harvard Biostatistics Department. Professor Liu was the recipient of the 2002 COPSS Presidents' Award, the most prestigious one for sta...
Random Numbers and Monte Carlo Methods
Scherer, Philipp O. J.
Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages Monte Carlo methods are very useful which sample the integration volume at randomly chosen points. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with given probability distribution which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
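The gain from sampling the important configurations preferentially, as described above, can be illustrated with a minimal sketch. The integrand exp(x) on [0,1] and the tilted proposal density p(x) = 2(1+x)/3 are illustrative choices of my own, not taken from the book:

```python
import math
import random

random.seed(1)

def plain_mc(f, n):
    """Crude Monte Carlo: uniform samples on [0, 1]."""
    vals = [f(random.random()) for _ in range(n)]
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    return mean, math.sqrt(var / n)

def importance_mc(f, n):
    """Importance sampling with density p(x) = 2(1+x)/3 on [0, 1]."""
    vals = []
    for _ in range(n):
        u = random.random()
        x = -1.0 + math.sqrt(1.0 + 3.0 * u)        # inverse CDF of (2x + x^2)/3
        vals.append(f(x) * 3.0 / (2.0 * (1.0 + x)))  # weight f(x)/p(x)
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    return mean, math.sqrt(var / n)

m1, s1 = plain_mc(math.exp, 50000)
m2, s2 = importance_mc(math.exp, 50000)
exact = math.e - 1.0   # integral of exp(x) over [0, 1]
```

Because p grows with the integrand, the weighted estimator has a markedly smaller standard error than the crude one while remaining unbiased.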
Off-diagonal expansion quantum Monte Carlo.
Albash, Tameem; Wagenbreth, Gene; Hen, Itay
2017-12-01
We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.
Reflections on early Monte Carlo calculations
International Nuclear Information System (INIS)
Spanier, J.
1992-01-01
Monte Carlo methods for solving various particle transport problems developed in parallel with the evolution of increasingly sophisticated computer programs implementing diffusion theory and low-order moments calculations. In these early years, Monte Carlo calculations and high-order approximations to the transport equation were seen as too expensive to use routinely for nuclear design but served as invaluable aids and supplements to design with less expensive tools. The earliest Monte Carlo programs were quite literal; i.e., neutron and other particle random walk histories were simulated by sampling from the probability laws inherent in the physical system without distortion. Use of such analogue sampling schemes resulted in a good deal of time being spent in examining the possibility of lowering the statistical uncertainties in the sample estimates by replacing simple, and intuitively obvious, random variables by those with identical means but lower variances
Monte Carlo simulation of Markov unreliability models
International Nuclear Information System (INIS)
Lewis, E.E.; Boehm, F.
1984-01-01
A Monte Carlo method is formulated for the evaluation of the unreliability of complex systems with known component failure and repair rates. The formulation is in terms of a Markov process allowing dependences between components to be modeled and computational efficiencies to be achieved in the Monte Carlo simulation. Two variance reduction techniques, forced transition and failure biasing, are employed to increase computational efficiency of the random walk procedure. For an example problem these result in improved computational efficiency by more than three orders of magnitude over analog Monte Carlo. The method is generalized to treat problems with distributed failure and repair rate data, and a batching technique is introduced and shown to result in substantial increases in computational efficiency for an example problem. A method for separating the variance due to the data uncertainty from that due to the finite number of random walks is presented. (orig.)
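The analog baseline that such variance-reduction techniques improve on can be sketched for a toy case: direct Markov simulation of a hypothetical two-component parallel system with no repair, where the unreliability over a mission time has a known closed form. The failure rate and mission time below are invented for the example, not taken from the paper:

```python
import math
import random

random.seed(2)

LAM = 0.5   # per-component failure rate (assumed value)
T = 2.0     # mission time (assumed value)

def system_fails_by_T():
    """Analog walk through Markov states: 2-up -> 1-up -> failed."""
    t = random.expovariate(2 * LAM)   # first failure (two components at risk)
    if t >= T:
        return False
    t += random.expovariate(LAM)      # second failure (one component left)
    return t < T

n = 200000
unrel = sum(system_fails_by_T() for _ in range(n)) / n
# closed form for two independent exponential failures, no repair
exact = (1 - math.exp(-LAM * T)) ** 2
```

Analog sampling like this becomes hopelessly slow when failures are rare, which is exactly what forced transitions and failure biasing address.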
Shell model the Monte Carlo way
International Nuclear Information System (INIS)
Ormand, W.E.
1995-01-01
The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined
Shell model the Monte Carlo way
Energy Technology Data Exchange (ETDEWEB)
Ormand, W.E.
1995-03-01
The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.
SPQR: a Monte Carlo reactor kinetics code
International Nuclear Information System (INIS)
Cramer, S.N.; Dodds, H.L.
1980-02-01
The SPQR Monte Carlo code has been developed to analyze fast reactor core accident problems where conventional methods are considered inadequate. The code is based on the adiabatic approximation of the quasi-static method. This initial version contains no automatic material motion or feedback. An existing Monte Carlo code is used to calculate the shape functions and the integral quantities needed in the kinetics module. Several sample problems have been devised and analyzed. Due to the large statistical uncertainty associated with the calculation of reactivity in accident simulations, the results, especially at later times, differ greatly from deterministic methods. It was also found that in large uncoupled systems, the Monte Carlo method has difficulty in handling asymmetric perturbations
Systems guide to MCNP (Monte Carlo Neutron and Photon Transport Code)
International Nuclear Information System (INIS)
Kirk, B.L.; West, J.T.
1984-06-01
The subject of this report is the implementation of the Los Alamos National Laboratory Monte Carlo Neutron and Photon Transport Code - Version 3 (MCNP) on the different types of computer systems, especially the IBM MVS system. The report supplements the documentation of the RSIC computer code package CCC-200/MCNP. Details of the procedure to follow in executing MCNP on the IBM computers, either in batch mode or interactive mode, are provided
Current and future applications of Monte Carlo
International Nuclear Information System (INIS)
Zaidi, H.
2003-01-01
Full text: The use of radionuclides in medicine has a long history and encompasses a large area of applications including diagnosis and radiation treatment of cancer patients using either external or radionuclide radiotherapy. The 'Monte Carlo method' describes a very broad area of science, in which many processes, physical systems, and phenomena are simulated by statistical methods employing random numbers. The general idea of Monte Carlo analysis is to create a model which is as similar as possible to the real physical system of interest, and to create interactions within that system based on known probabilities of occurrence, with random sampling of the probability density functions (pdfs). As the number of individual events (called 'histories') is increased, the quality of the reported average behavior of the system improves, meaning that the statistical uncertainty decreases. The use of the Monte Carlo method to simulate radiation transport has become the most accurate means of predicting absorbed dose distributions and other quantities of interest in the radiation treatment of cancer patients using either external or radionuclide radiotherapy. The same trend has occurred for the estimation of the absorbed dose in diagnostic procedures using radionuclides, as well as the assessment of image quality and quantitative accuracy of radionuclide imaging. As a consequence of this generalized use, many questions are being raised, primarily about the need and potential of Monte Carlo techniques, but also about how accurate it really is, what it would take to apply it clinically, and how to make it available widely to the nuclear medicine community at large. Many of these questions will be answered when Monte Carlo techniques are implemented and used for more routine calculations and for in-depth investigations. In this paper, the conceptual role of the Monte Carlo method is briefly introduced and followed by a survey of its different applications in diagnostic and therapeutic
International Nuclear Information System (INIS)
Fraass, Benedick A.; Smathers, James; Deye, James
2003-01-01
Due to the significant interest in Monte Carlo dose calculations for external beam megavoltage radiation therapy from both the research and commercial communities, a workshop was held in October 2001 to assess the status of this computational method with regard to use for clinical treatment planning. The Radiation Research Program of the National Cancer Institute, in conjunction with the Nuclear Data and Analysis Group at the Oak Ridge National Laboratory, gathered a group of experts in clinical radiation therapy treatment planning and Monte Carlo dose calculations, and examined issues involved in clinical implementation of Monte Carlo dose calculation methods in clinical radiotherapy. The workshop examined the current status of Monte Carlo algorithms, the rationale for using Monte Carlo, algorithmic concerns, clinical issues, and verification methodologies. Based on these discussions, the workshop developed recommendations for future NCI-funded research and development efforts. This paper briefly summarizes the issues presented at the workshop and the recommendations developed by the group
Monte Carlo simulation applied to alpha spectrometry
International Nuclear Information System (INIS)
Baccouche, S.; Gharbi, F.; Trabelsi, A.
2007-01-01
Alpha particle spectrometry is a widely-used analytical method, in particular when we deal with pure alpha emitting radionuclides. Monte Carlo simulation is an adequate tool to investigate the influence of various phenomena on this analytical method. We performed an investigation of those phenomena using the simulation code GEANT of CERN. The results concerning the geometrical detection efficiency in different measurement geometries agree with analytical calculations. This work confirms that Monte Carlo simulation of solid angle of detection is a very useful tool to determine with very good accuracy the detection efficiency.
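The solid-angle part of such a study can be sketched in a few lines: sample isotropic emission directions from a point source above a disc detector, count intersections, and compare with the analytic geometrical efficiency. The source-detector geometry below is an assumed toy configuration, not that of the paper:

```python
import math
import random

random.seed(3)

H, R = 1.0, 1.0   # source height above the disc and disc radius (assumed)

def hits_detector():
    """Sample an isotropic direction; return True if it crosses the disc."""
    w = random.uniform(-1.0, 1.0)          # isotropic: cos(theta) uniform
    if w >= 0.0:
        return False                       # emitted away from the detector plane
    phi = random.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(1.0 - w * w)
    t = H / (-w)                           # distance to the plane z = 0
    x = t * s * math.cos(phi)
    y = t * s * math.sin(phi)
    return x * x + y * y <= R * R          # inside the disc?

n = 200000
eff = sum(hits_detector() for _ in range(n)) / n
# analytic efficiency for a point source on the axis of a disc
exact = 0.5 * (1.0 - H / math.sqrt(H * H + R * R))
```

For off-axis sources or extended sources the closed form disappears, which is where Monte Carlo ray sampling of this kind earns its keep.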
Simplified monte carlo simulation for Beijing spectrometer
International Nuclear Information System (INIS)
Wang Taijie; Wang Shuqin; Yan Wuguang; Huang Yinzhi; Huang Deqiang; Lang Pengfei
1986-01-01
The Monte Carlo method, based on functionizing the performance of the detectors and transforming the values of kinematical variables into ''measured'' ones by means of smearing, has been used to program a Monte Carlo simulation of the performance of the Beijing Spectrometer (BES) in FORTRAN, named BESMC. It can be used to investigate the multiplicity, the particle type, and the distribution of four-momentum of the final states of electron-positron collisions, and also the response of the BES to these final states. Thus, it provides a means to examine whether the overall design of the BES is reasonable and to decide the physical topics of the BES
Self-learning Monte Carlo (dynamical biasing)
International Nuclear Information System (INIS)
Matthes, W.
1981-01-01
In many applications the histories of a normal Monte Carlo game rarely reach the target region. An approximate knowledge of the importance (with respect to the target) may be used to guide the particles more frequently into the target region. A Monte Carlo method is presented in which each history contributes to update the importance field such that eventually most target histories are sampled. It is a self-learning method in the sense that the procedure itself: (a) learns which histories are important (reach the target) and increases their probability; (b) reduces the probabilities of unimportant histories; (c) concentrates gradually on the more important target histories. (U.K.)
Burnup calculations using Monte Carlo method
International Nuclear Information System (INIS)
Ghosh, Biplab; Degweker, S.B.
2009-01-01
In recent years, interest in burnup calculations using Monte Carlo methods has gained momentum. Previous burnup codes have used multigroup transport theory based calculations followed by diffusion theory based core calculations for the neutronic portion of the codes. The transport theory methods invariably make approximations with regard to the treatment of the energy and angle variables involved in scattering, besides approximations related to geometry simplification. Cell homogenisation to produce diffusion theory parameters adds to these approximations. Moreover, while diffusion theory works for most reactors, it does not produce accurate results in systems that have strong gradients, strong absorbers or large voids. Also, diffusion theory codes are geometry limited (rectangular, hexagonal, cylindrical, and spherical coordinates). Monte Carlo methods are ideal for solving very heterogeneous reactors and/or lattices/assemblies in which considerable burnable poisons are used. The key feature of this approach is that Monte Carlo methods permit essentially 'exact' modeling of all geometrical detail, without resort to energy and spatial homogenization of neutron cross sections. The Monte Carlo method would also be better suited for Accelerator Driven Systems (ADS), which can have strong gradients due to the external source and a sub-critical assembly. To meet the demand for an accurate burnup code, we have developed a Monte Carlo burnup calculation code system in which a Monte Carlo neutron transport code is coupled with a versatile code (McBurn) for calculating the buildup and decay of nuclides in nuclear materials. McBurn was developed from scratch by the authors. In this article we discuss our effort in developing the continuous energy Monte Carlo burnup code, McBurn. McBurn is intended for the entire reactor core as well as for unit cells and assemblies. Generally, McBurn can do burnup of any geometrical system which can be handled by the underlying Monte Carlo transport code
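The buildup-and-decay bookkeeping that a depletion module like McBurn performs can be illustrated, under drastic simplification, by time-stepping a two-nuclide transmutation chain with one-group reaction rates and checking against the analytic Bateman solution. The flux and cross sections below are invented for the example:

```python
import math

PHI = 1e14               # one-group flux, n/cm^2/s (assumed value)
S1, S2 = 5e-24, 50e-24   # one-group removal cross sections, cm^2 (assumed)
l1, l2 = S1 * PHI, S2 * PHI   # effective removal rates, 1/s

def deplete(n1, n2, dt, steps):
    """Explicit Euler stepping of dN1/dt = -l1*N1, dN2/dt = l1*N1 - l2*N2."""
    h = dt / steps
    for _ in range(steps):
        n1, n2 = n1 + h * (-l1 * n1), n2 + h * (l1 * n1 - l2 * n2)
    return n1, n2

T = 3.15e7   # about one year of irradiation, s
n1, n2 = deplete(1.0, 0.0, T, 200000)

# Bateman reference solution for the same two-nuclide chain
b1 = math.exp(-l1 * T)
b2 = l1 / (l2 - l1) * (math.exp(-l1 * T) - math.exp(-l2 * T))
```

In a real coupled scheme the rates l1 and l2 are re-tallied by the Monte Carlo transport step each burnup interval, since the flux and spectrum change with composition.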
Improvements for Monte Carlo burnup calculation
Energy Technology Data Exchange (ETDEWEB)
Shenglong, Q.; Dong, Y.; Danrong, S.; Wei, L., E-mail: qiangshenglong@tsinghua.org.cn, E-mail: d.yao@npic.ac.cn, E-mail: songdr@npic.ac.cn, E-mail: luwei@npic.ac.cn [Nuclear Power Inst. of China, Cheng Du, Si Chuan (China)
2015-07-01
Monte Carlo burnup calculation is a development trend in reactor physics, and much work remains to be done for engineering applications. Based on the Monte Carlo burnup code MOI, non-fuel burnup calculation methods and critical search suggestions are presented in this paper. For non-fuel burnup, a mixed burnup mode improves the accuracy and efficiency of the burnup calculation. For the critical search of control rod position, a new method called ABN, based on the ABA method used by MC21, is proposed for the first time in this paper. (author)
A keff calculation method by Monte Carlo
International Nuclear Information System (INIS)
Shen, H; Wang, K.
2008-01-01
The effective multiplication factor (k_eff) is defined as the ratio between the numbers of neutrons in successive generations, a definition adopted by most Monte Carlo codes (e.g. MCNP). Alternatively, it can be thought of as the ratio of the neutron generation rate to the sum of the leakage rate and the absorption rate, which should exclude the effect of neutron reactions such as (n, 2n) and (n, 3n). This article discusses the Monte Carlo method for k_eff calculation based on the second definition. A new code has been developed and the results are presented. (author)
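The first (generation-ratio) definition can be sketched with a toy model: in an infinite homogeneous medium where every history ends in absorption, k_inf is the mean number of fission neutrons produced per neutron absorbed, and a generation-based tally recovers it. The fission probability and multiplicity distribution below are assumed for illustration only:

```python
import random

random.seed(4)

P_FISS = 0.4   # probability an absorption is a fission (assumed value)

def offspring():
    """Neutrons emitted per absorption: 2 or 3 with equal probability
    when fission occurs (nubar = 2.5), otherwise capture (0)."""
    if random.random() < P_FISS:
        return random.choice((2, 3))
    return 0

def k_generation(n_start, n_gen):
    """Average the births-per-neutron ratio over successive generations."""
    ks = []
    for _ in range(n_gen):
        births = sum(offspring() for _ in range(n_start))
        ks.append(births / n_start)   # generation ratio estimate of k
        # population is renormalized to n_start each generation, as
        # power-iteration Monte Carlo codes do to keep statistics stable
    return sum(ks) / len(ks)

k = k_generation(20000, 20)
# analytic k_inf = P_FISS * nubar = 0.4 * 2.5 = 1.0
```

A production code does the same bookkeeping, except that the "offspring" come from sampled flights, collisions and leakage in real geometry and cross-section data.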
Monte Carlo electron/photon transport
International Nuclear Information System (INIS)
Mack, J.M.; Morel, J.E.; Hughes, H.G.
1985-01-01
A review of nonplasma coupled electron/photon transport using the Monte Carlo method is presented. Remarks are mainly restricted to linearized formalisms at electron energies from 1 keV to 1000 MeV. Applications involving pulse-height estimation, transport in external magnetic fields, and optical Cerenkov production are discussed to underscore the importance of this branch of computational physics. Advances in electron multigroup cross-section generation are reported, and their impact on future code development is assessed. Progress toward the transformation of MCNP into a generalized neutral/charged-particle Monte Carlo code is described. 48 refs
Monte Carlo simulation of neutron scattering instruments
International Nuclear Information System (INIS)
Seeger, P.A.
1995-01-01
A library of Monte Carlo subroutines has been developed for the design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described, and the programs are used to compare instruments at continuous-wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 when time-of-flight analysis is used. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width.
Simulation of transport equations with Monte Carlo
International Nuclear Information System (INIS)
Matthes, W.
1975-09-01
The main purpose of the report is to explain the relation between the transport equation and the Monte Carlo game used for its solution. The introduction of artificial particles carrying a weight provides high flexibility in constructing many different games for the solution of the same equation. This flexibility opens a way to construct a Monte Carlo game for the solution of the adjoint transport equation. Emphasis is placed on giving a clear understanding of what to do, not on the details of how to do a specific game.
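The weighted-particle idea can be illustrated with a minimal importance-sampling sketch (an assumed toy problem, not from the report): histories are drawn from a deliberately "wrong" exponential distribution and carry the pdf-ratio weight, yet the weighted tally still reproduces the true mean free path.

```python
import math
import random

random.seed(7)

SIGMA = 2.0        # true interaction rate: path lengths ~ SIGMA*exp(-SIGMA*x)
LAMBDA = 0.5       # artificial sampling rate used to stretch the histories

total_w = total_wx = 0.0
for _ in range(200_000):
    x = random.expovariate(LAMBDA)             # sample from the *wrong* pdf
    # weight = true pdf / sampling pdf, the "artificial particle" weight
    w = (SIGMA * math.exp(-SIGMA * x)) / (LAMBDA * math.exp(-LAMBDA * x))
    total_w += w
    total_wx += w * x

mean_free_path = total_wx / total_w            # approaches 1/SIGMA = 0.5
```

The same weight bookkeeping is what lets one game solve many different equations, including the adjoint problem.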
Monte Carlo dose distributions for radiosurgery
International Nuclear Information System (INIS)
Perucha, M.; Leal, A.; Rincon, M.; Carrasco, E.
2001-01-01
The precision of radiosurgery treatment planning systems is limited by the approximations of their algorithms and by their dosimetric input data. This is especially important in small fields. The Monte Carlo method, however, is an accurate alternative, as it considers every aspect of particle transport. In this work an acoustic neurinoma is studied by comparing the dose distributions of a planning system and of Monte Carlo. Relative shifts have been measured and, furthermore, dose-volume histograms have been calculated for the target and adjacent organs at risk. (orig.)
Fast sequential Monte Carlo methods for counting and optimization
Rubinstein, Reuven Y; Vaisman, Radislav
2013-01-01
A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the
Specialized Monte Carlo codes versus general-purpose Monte Carlo codes
International Nuclear Information System (INIS)
Moskvin, Vadim; DesRosiers, Colleen; Papiez, Lech; Lu, Xiaoyi
2002-01-01
The possibilities of Monte Carlo modeling for dose calculations and treatment optimization are quite limited in radiation oncology applications. The main reason is that the Monte Carlo technique for dose calculations is time consuming, while treatment planning may require hundreds of possible cases of dose simulations to be evaluated for dose optimization. The second reason is that the general-purpose codes widely used in practice require an experienced user to customize them for calculations. This paper discusses a concept of Monte Carlo code design that can avoid the main problems that are preventing widespread use of this simulation technique in medical physics. (authors)
On the use of stochastic approximation Monte Carlo for Monte Carlo integration
Liang, Faming
2009-03-01
The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration via a dynamically weighted estimator by calling some results from the literature of nonhomogeneous Markov chains. Our numerical results indicate that SAMC can yield significant savings over conventional Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, for the problems for which the energy landscape is rugged. © 2008 Elsevier B.V. All rights reserved.
2014-03-27
Vehicle Code System (VCS), the Monte Carlo Adjoint SHielding (MASH), and the Monte Carlo N-Particle (MCNP) code. Of the three, the oldest and still most... widely utilized radiation transport code is MCNP. First created at Los Alamos National Laboratory (LANL) in 1957, the code simulated neutral... particle types, and previous versions of MCNP were repeatedly validated using both simple and complex geometries [12, 13]. Much greater discussion and
Parallel processing Monte Carlo radiation transport codes
International Nuclear Information System (INIS)
McKinney, G.W.
1994-01-01
Issues related to distributed-memory multiprocessing as applied to Monte Carlo radiation transport are discussed. Measurements of communication overhead are presented for the radiation transport code MCNP which employs the communication software package PVM, and average efficiency curves are provided for a homogeneous virtual machine
Monte Carlo determination of heteroepitaxial misfit structures
DEFF Research Database (Denmark)
Baker, J.; Lindgård, Per-Anker
1996-01-01
We use Monte Carlo simulations to determine the structure of KBr overlayers on a NaCl(001) substrate, a system with large (17%) heteroepitaxial misfit. The equilibrium relaxation structure is determined for films of 2-6 ML, for which extensive helium-atom scattering data exist for comparison...
The Monte Carlo method applied to dose calculation
International Nuclear Information System (INIS)
Peixoto, J.E.
1988-01-01
The Monte Carlo method is presented for the calculation of absorbed dose. The trajectory of the photon is traced, simulating successive interactions between the photon and the substance that constitutes the human body simulator. The energy deposited per photon in each interaction with a simulator organ or tissue is also calculated. (C.G.C.) [pt
Monte Carlo code for neutron radiography
International Nuclear Information System (INIS)
Milczarek, Jacek J.; Trzcinski, Andrzej; El-Ghany El Abd, Abd; Czachor, Andrzej
2005-01-01
The concise Monte Carlo code, MSX, for simulation of neutron radiography images of non-uniform objects is presented. The possibility of modeling the images of objects with continuous spatial distribution of specific isotopes is included. The code can be used for assessment of the scattered neutron component in neutron radiograms
Monte Carlo method in neutron activation analysis
International Nuclear Information System (INIS)
Majerle, M.; Krasa, A.; Svoboda, O.; Wagner, V.; Adam, J.; Peetermans, S.; Slama, O.; Stegajlov, V.I.; Tsupko-Sitnikov, V.M.
2009-01-01
Neutron activation detectors are a useful technique for the neutron flux measurements in spallation experiments. The study of the usefulness and the accuracy of this method at similar experiments was performed with the help of Monte Carlo codes MCNPX and FLUKA
Atomistic Monte Carlo simulation of lipid membranes
DEFF Research Database (Denmark)
Wüstner, Daniel; Sklenar, Heinz
2014-01-01
Biological membranes are complex assemblies of many different molecules, the analysis of which demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction...... of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol....
Computer system for Monte Carlo experimentation
International Nuclear Information System (INIS)
Grier, D.A.
1986-01-01
A new computer system for Monte Carlo experimentation is presented. The new system speeds and simplifies the process of coding and preparing a Monte Carlo experiment; it also encourages the proper design of Monte Carlo experiments and the careful analysis of the experimental results. A new functional language is the core of this system. Monte Carlo experiments, and their experimental designs, are programmed in this new language; those programs are compiled into Fortran output. The Fortran output is then compiled and executed. The experimental results are analyzed with a standard statistics package such as Si, Isp, or Minitab, or with a user-supplied program. Both the experimental results and the experimental design may be directly loaded into the workspace of those packages. The new functional language frees programmers from many of the details of programming an experiment. Experimental designs such as factorial, fractional factorial, or Latin square are easily described by the control structures and expressions of the language. Specific mathematical models are generated by the routines of the language
Scalable Domain Decomposed Monte Carlo Particle Transport
Energy Technology Data Exchange (ETDEWEB)
O' Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)
2013-12-05
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.
Monte Carlo methods beyond detailed balance
Schram, Raoul D.; Barkema, Gerard T.|info:eu-repo/dai/nl/101275080
2015-01-01
Monte Carlo algorithms are nearly always based on the concept of detailed balance and ergodicity. In this paper we focus on algorithms that do not satisfy detailed balance. We introduce a general method for designing non-detailed balance algorithms, starting from a conventional algorithm satisfying
Monte Carlo studies of ZEPLIN III
Dawson, J; Davidge, D C R; Gillespie, J R; Howard, A S; Jones, W G; Joshi, M; Lebedenko, V N; Sumner, T J; Quenby, J J
2002-01-01
A Monte Carlo simulation of a two-phase xenon dark matter detector, ZEPLIN III, has been achieved. Results from the analysis of a simulated data set are presented, showing primary and secondary signal distributions from low energy gamma ray events.
Biases in Monte Carlo eigenvalue calculations
Energy Technology Data Exchange (ETDEWEB)
Gelbard, E.M.
1992-12-01
The Monte Carlo method has been used for many years to analyze the neutronics of nuclear reactors. In fact, as the power of computers has increased, the importance of Monte Carlo in neutronics has also increased, until today this method plays a central role in reactor analysis and design. Monte Carlo is used in neutronics for two somewhat different purposes, i.e., (a) to compute the distribution of neutrons in a given medium when the neutron source-density is specified, and (b) to compute the neutron distribution in a self-sustaining chain reaction, in which case the source is determined as the eigenvector of a certain linear operator. In (b), then, the source is not given, but must be computed. In the first case (the "fixed-source" case) the Monte Carlo calculation is unbiased. That is to say that, if the calculation is repeated ("replicated") over and over, with independent random number sequences for each replica, then averages over all replicas will approach the correct neutron distribution as the number of replicas goes to infinity. Unfortunately, the computation is not unbiased in the second case, which we discuss here.
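One mechanism behind this eigenvalue bias is that keff is estimated as a ratio of tallies, and a ratio of unbiased estimators is itself biased for finite batch sizes (Jensen's inequality). A toy demonstration with made-up numbers:

```python
import random

random.seed(3)

TRUE_MEAN = 1.0        # E[Y]; 1/E[Y] is the "exact" answer we want
BATCH = 5              # tallies per generation in this caricature
REPLICAS = 50_000

acc = 0.0
for _ in range(REPLICAS):
    # ybar is an unbiased estimate of E[Y], but 1/ybar is a *biased*
    # estimate of 1/E[Y]; the same mechanism biases k-eff, which is
    # a ratio of correlated per-generation tallies.
    ybar = sum(random.uniform(0.5, 1.5) for _ in range(BATCH)) / BATCH
    acc += 1.0 / ybar

est = acc / REPLICAS   # noticeably above 1/TRUE_MEAN = 1 for small BATCH
```

The bias shrinks roughly like 1/BATCH, which is why small generation sizes are the dangerous regime.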
Dynamic bounds coupled with Monte Carlo simulations
Energy Technology Data Exchange (ETDEWEB)
Rajabalinejad, M., E-mail: M.Rajabalinejad@tudelft.n [Faculty of Civil Engineering, Delft University of Technology, Delft (Netherlands); Meester, L.E. [Delft Institute of Applied Mathematics, Delft University of Technology, Delft (Netherlands); Gelder, P.H.A.J.M. van; Vrijling, J.K. [Faculty of Civil Engineering, Delft University of Technology, Delft (Netherlands)
2011-02-15
For the reliability analysis of engineering structures a variety of methods is known, of which Monte Carlo (MC) simulation is widely considered to be among the most robust and most generally applicable. To reduce the simulation cost of the MC method, variance reduction methods are applied. This paper describes a method to reduce the simulation cost even further, while retaining the accuracy of Monte Carlo, by taking advantage of the monotonicity that is widely present. For models exhibiting monotonic (decreasing or increasing) behavior, dynamic bounds (DB) are defined, which in a coupled Monte Carlo simulation are updated dynamically, resulting in a failure probability estimate as well as strict (non-probabilistic) upper and lower bounds. Accurate results are obtained at a much lower cost than with an equivalent ordinary Monte Carlo simulation. In a two-dimensional and a four-dimensional numerical example, the cost reduction factors are 130 and 9, respectively, with a relative error smaller than 5%. At higher accuracy levels this factor increases, though the effect is expected to be smaller with increasing dimension. To show the application of the DB method to real-world problems, it is applied to a complex finite element model of a flood wall in New Orleans.
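The monotonicity idea can be sketched in one dimension with a hypothetical limit state (not the paper's models): once any sample is known to fail, every sample it dominates is classified without calling the model, so expensive evaluations collapse to a thin "unknown" band around the failure boundary.

```python
import random

random.seed(11)

def g(x):
    """Expensive monotone limit state (hypothetical): fails when g(x) < 0."""
    return 1.2 - x            # monotonically decreasing in the load variable

N = 100_000
fail_lo = float("inf")        # every x above this is known to fail
safe_hi = float("-inf")       # every x below this is known to be safe
fails = calls = 0
for _ in range(N):
    x = random.uniform(0.0, 2.0)
    if x >= fail_lo:          # dominated by a known failure: no model call
        fails += 1
    elif x <= safe_hi:        # dominated by a known safe point: no call
        pass
    else:                     # unknown band: evaluate and tighten a bound
        calls += 1
        if g(x) < 0.0:
            fails += 1
            fail_lo = min(fail_lo, x)
        else:
            safe_hi = max(safe_hi, x)

p_fail = fails / N            # exact failure probability here is 0.4
```

Because the bounds are exact under monotonicity, the estimate is identical to plain MC on the same samples, but `calls` stays a tiny fraction of `N`.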
Design and analysis of Monte Carlo experiments
Kleijnen, Jack P.C.; Gentle, J.E.; Haerdle, W.; Mori, Y.
2012-01-01
By definition, computer simulation or Monte Carlo models are not solved by mathematical analysis (such as differential calculus), but are used for numerical experimentation. The goal of these experiments is to answer questions about the real world; i.e., the experimenters may use their models to
Some problems on Monte Carlo method development
International Nuclear Information System (INIS)
Pei Lucheng
1992-01-01
This is a short paper on some problems of Monte Carlo method development. The content covers deep-penetration problems, unbounded estimate problems, limitations of the Metropolis method, the dependency problem in the Metropolis method, random error interference problems and random equations, and intellectualisation and vectorization problems of general software
Monte Carlo simulations in theoretical physics
International Nuclear Information System (INIS)
Billoire, A.
1991-01-01
After a presentation of the principle of the Monte Carlo method, the method is applied, first to critical-exponent calculations in the three-dimensional Ising model, and secondly to discrete quantum chromodynamics, with computation times given as a function of computer power. 28 refs., 4 tabs
Monte Carlo method for random surfaces
International Nuclear Information System (INIS)
Berg, B.
1985-01-01
Previously two of the authors proposed a Monte Carlo method for sampling statistical ensembles of random walks and surfaces with a Boltzmann probabilistic weight. In the present paper we work out the details for several models of random surfaces, defined on d-dimensional hypercubic lattices. (orig.)
Monte Carlo simulation of the microcanonical ensemble
International Nuclear Information System (INIS)
Creutz, M.
1984-01-01
We consider simulating statistical systems with a random walk on a constant energy surface. This combines features of deterministic molecular dynamics techniques and conventional Monte Carlo simulations. For discrete systems the method can be programmed to run an order of magnitude faster than other approaches. It does not require high quality random numbers and may also be useful for nonequilibrium studies. 10 references
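This constant-energy random walk can be sketched with the standard "demon" toy on a 1-D Ising ring (an illustrative setup, not necessarily the paper's): the demon absorbs or supplies each flip's energy, total energy is conserved exactly, and random numbers are needed only for site selection, not for an accept/reject probability.

```python
import random

random.seed(5)

N = 64
spins = [random.choice((-1, 1)) for _ in range(N)]      # 1-D Ising ring
demon = 8                                               # demon energy >= 0

def system_energy(s):
    """E = -sum of nearest-neighbour bonds on the periodic ring."""
    return -sum(s[i] * s[(i + 1) % N] for i in range(N))

total = system_energy(spins) + demon                    # conserved exactly
for _ in range(50_000):
    i = random.randrange(N)                             # only randomness used
    dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % N])
    if demon - dE >= 0:                                 # demon can pay: flip
        spins[i] = -spins[i]
        demon -= dE
```

Because everything is integer arithmetic, no high-quality random numbers are needed for acceptance, which is the speed advantage the abstract mentions; the demon's energy histogram yields the effective temperature.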
Variance Reduction Techniques in Monte Carlo Methods
Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.
2010-01-01
Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the
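A minimal instance of one classic VRT, antithetic variates, estimating the integral of e^x over [0, 1] = e - 1 (a toy example chosen here, not from the book): pairing each draw u with 1 - u induces negative correlation for a monotone integrand and cuts the variance sharply at the same evaluation budget.

```python
import math
import random

random.seed(2)

def f(x):
    return math.exp(x)         # estimate I = e - 1 ~ 1.71828

N = 20_000

plain = [f(random.random()) for _ in range(N)]
anti = []
for _ in range(N // 2):        # same budget: N/2 pairs = N evaluations of f
    u = random.random()
    anti.append(0.5 * (f(u) + f(1.0 - u)))   # antithetic pair average

def mean_var(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

m_plain, v_plain = mean_var(plain)
m_anti, v_anti = mean_var(anti)  # v_anti is far smaller for monotone f
```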
Coded aperture optimization using Monte Carlo simulations
International Nuclear Information System (INIS)
Martineau, A.; Rocchisani, J.M.; Moretti, J.L.
2010-01-01
Coded apertures using Uniformly Redundant Arrays (URA) have been unsuccessfully evaluated for two-dimensional and three-dimensional imaging in Nuclear Medicine. The images reconstructed from coded projections contain artifacts and suffer from poor spatial resolution in the longitudinal direction. We introduce a Maximum-Likelihood Expectation-Maximization (MLEM) algorithm for three-dimensional coded aperture imaging which uses a projection matrix calculated by Monte Carlo simulations. The aim of the algorithm is to reduce artifacts and improve the three-dimensional spatial resolution in the reconstructed images. First, we present the validation of GATE (Geant4 Application for Emission Tomography) for Monte Carlo simulations of a coded mask installed on a clinical gamma camera. The coded mask modelling was validated by comparison between experimental and simulated data in terms of energy spectra, sensitivity and spatial resolution. In the second part of the study, we use the validated model to calculate the projection matrix with Monte Carlo simulations. A three-dimensional thyroid phantom study was performed to compare the performance of the three-dimensional MLEM reconstruction with the conventional correlation method. The results indicate that the artifacts are reduced and the three-dimensional spatial resolution is improved with the Monte Carlo-based MLEM reconstruction.
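The standard MLEM update can be sketched on a hypothetical 3-pixel/2-bin system; the matrix below is a made-up stand-in for the authors' Monte Carlo-calculated projection matrix.

```python
# Toy MLEM: A maps 3 source pixels to 2 detector bins (all values made up).
A = [[0.6, 0.3, 0.1],
     [0.1, 0.3, 0.6]]
n_true = [2.0, 1.0, 3.0]
# Consistent "measured" data y = A @ n_true
y = [sum(A[i][j] * n_true[j] for j in range(3)) for i in range(2)]

s = [sum(A[i][j] for i in range(2)) for j in range(3)]   # pixel sensitivities
n = [1.0, 1.0, 1.0]                                      # flat initial image
for _ in range(500):
    # Forward-project the current image, then apply the multiplicative
    # MLEM update: n_j <- n_j * sum_i A_ij * (y_i / proj_i) / s_j
    proj = [sum(A[i][j] * n[j] for j in range(3)) for i in range(2)]
    for j in range(3):
        n[j] *= sum(A[i][j] * y[i] / proj[i] for i in range(2)) / s[j]
```

The multiplicative form keeps the image non-negative automatically, which is one reason MLEM suppresses the negative-lobe artifacts of correlation decoding.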
Monte Carlo studies of uranium calorimetry
International Nuclear Information System (INIS)
Brau, J.; Hargis, H.J.; Gabriel, T.A.; Bishop, B.L.
1985-01-01
Detailed Monte Carlo calculations of uranium calorimetry are presented which reveal a significant difference in the responses of liquid argon and plastic scintillator in uranium calorimeters. Due to saturation effects, neutrons from the uranium are found to contribute only weakly to the liquid argon signal. Electromagnetic sampling inefficiencies are significant and contribute substantially to compensation in both systems. 17 references
Federal Laboratory Consortium — Description/History: Chemistry and biology laboratories. The Bio Engineering Laboratory (BeL) is the only full spectrum biotechnology capability within the Department...
FOOTWEAR PERFORMANCE LABORATORY
Federal Laboratory Consortium — This laboratory provides biomechanical and physical analyses for both military and commercial footwear. The laboratory contains equipment that is integral to the us...
Nanotechnology Characterization Laboratory
Federal Laboratory Consortium — The Nanotechnology Characterization Laboratory (NCL) at the Frederick National Laboratory for Cancer Research performs preclinical characterization of nanomaterials...
Physical Sciences Laboratory (PSL)
Federal Laboratory Consortium — PNNL's Physical Sciences Laboratory (PSL) houses 22 research laboratories for conducting a wide-range of research including catalyst formulation, chemical analysis,...
Distributed Energy Technology Laboratory
Federal Laboratory Consortium — The Distributed Energy Technologies Laboratory (DETL) is an extension of the power electronics testing capabilities of the Photovoltaic System Evaluation Laboratory...
Uncertainty analysis in Monte Carlo criticality computations
International Nuclear Information System (INIS)
Qi Ao
2011-01-01
Highlights: ► Two types of uncertainty methods for keff Monte Carlo computations are examined. ► The sampling method has the fewest restrictions on perturbations but demands computing resources. ► The analytical method is limited to small perturbations of material properties. ► Practicality relies on efficiency, multiparameter applicability and data availability. - Abstract: Uncertainty analysis is imperative for nuclear criticality risk assessments when using Monte Carlo neutron transport methods to predict the effective neutron multiplication factor (keff) for fissionable material systems. For the validation of Monte Carlo codes for criticality computations against benchmark experiments, code accuracy and precision are measured by both the computational bias and the uncertainty in the bias. The uncertainty in the bias accounts for known or quantified experimental, computational and model uncertainties. For the application of Monte Carlo codes to criticality analysis of fissionable material systems, an administrative margin of subcriticality must be imposed to provide additional assurance of subcriticality for any unknown or unquantified uncertainties. Because of the substantial impact of the administrative margin of subcriticality on the economics and safety of nuclear fuel cycle operations, recently increased interest in reducing this margin makes uncertainty analysis in criticality safety computations more risk-significant. This paper provides an overview of the two most popular keff uncertainty analysis methods for Monte Carlo criticality computations: (1) sampling-based methods, and (2) analytical methods. Examples are given to demonstrate their usage in keff uncertainty analysis due to uncertainties in both neutronic and non-neutronic parameters of fissionable material systems.
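The sampling-based method can be sketched on a toy one-group k-infinity model (all numbers illustrative, not from the paper): nuclear-data inputs are drawn from assumed uncertainty distributions and the "solver" is rerun for each draw, so the spread of the outputs estimates the keff uncertainty with no restriction on perturbation size.

```python
import random

random.seed(9)

# Nominal one-group data for an infinite toy medium (illustrative numbers):
NU, SIG_F, SIG_A = 2.43, 0.35, 1.0        # k-inf = NU * SIG_F / SIG_A
REL_STD = 0.02                            # assumed 2% nuclear-data uncertainty

samples = []
for _ in range(10_000):
    sig_f = random.gauss(SIG_F, REL_STD * SIG_F)   # perturbed inputs
    sig_a = random.gauss(SIG_A, REL_STD * SIG_A)
    samples.append(NU * sig_f / sig_a)             # rerun the "solver"

mean_k = sum(samples) / len(samples)
var_k = sum((k - mean_k) ** 2 for k in samples) / (len(samples) - 1)
std_k = var_k ** 0.5      # compare with linear propagation: ~2.8% of k
```

In practice each "solver" call is a full Monte Carlo criticality run, which is exactly the computing-resource burden the first highlight refers to.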
Energy Technology Data Exchange (ETDEWEB)
Horber, E. [Swiss Experimental Station for Agriculture, Zurich-Oerlikon (Switzerland)
1963-09-15
Laboratory tests indicated that an X-ray dose of 3000 r was sufficient to induce sterility in male cockchafers. During two flight periods, sterilized males were released among a natural population in order to eliminate white grubs in a general farming region of north-western Switzerland. In 1950 an outbreak of this pest was reduced by a chemical treatment. Gradation had been watched during every flight from 1953 to 1962. In 1959 five areas, each with a surface of about 30 ha, were selected to serve as: (a) the treated area, where the males were captured, irradiated and released; (b) the ''bank'', where cockchafers were collected and the males were irradiated for release in area (a); and (c) control areas, where undisturbed gradation was observed. The males were irradiated in a therapeutic X-ray unit. Irradiated males were hand-painted in order to estimate the ratio of sterilized males by means of the isotopic dilution technique. In 1959, for the first treatment, about 6 l of sterilized males, representing about 50% of the male population, were released in (a). The white grub infestation sampled in grassland dropped thereafter to about two-thirds of that in the other areas. The reproduction rate was less than unity only in (a). A further reduction of the population in (a), to one-tenth of that in (b) and (c), was observed when the number of surviving cockchafers was estimated in 1962. The greatest mortality from 1959 to 1962 occurred in (a). In 1962, for the second treatment, a total of 17 l of irradiated males was released in (a). At least 76% of the male population of (a) had been sterilized. The following sampling of the white grub population showed complete eradication in (a). Some reduction was also observed in (b) and (c) due to drought in the whole region. It has been demonstrated that the sterile-male technique may successfully be applied to an insect pest in an area which is not strictly isolated geographically, the females of which mate several times and the
Pore-scale uncertainty quantification with multilevel Monte Carlo
Icardi, Matteo; Hoel, Haakon; Long, Quan; Tempone, Raul
2014-01-01
Since there are no generic ways to parametrize the randomness in the pore-scale structures, Monte Carlo techniques are the most accessible for computing statistics. We propose a multilevel Monte Carlo (MLMC) technique to reduce the computational cost
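A minimal three-level MLMC sketch (an assumed toy problem, not the paper's pore-scale setting): coarse and fine approximations are coupled through the same random draw, so the level-correction terms have small variance and need only a few expensive fine-level samples.

```python
import math
import random

random.seed(4)

def trap(u, steps):
    """Trapezoid approximation of integral_0^u e^x dx at a given level."""
    h = u / steps
    return h * (0.5 * (1.0 + math.exp(u)) +
                sum(math.exp(i * h) for i in range(1, steps)))

def level_mean(level, n):
    """Mean of Y_l - Y_(l-1) (or Y_0 itself) over n coupled samples of U."""
    acc = 0.0
    for _ in range(n):
        u = random.random()                  # the same U drives both levels
        y = trap(u, 2 ** level)
        if level > 0:
            y -= trap(u, 2 ** (level - 1))   # coupled coarse/fine correction
        acc += y
    return acc / n

# Telescoping sum: many cheap coarse samples, few expensive fine ones.
estimate = level_mean(0, 40_000) + level_mean(1, 4_000) + level_mean(2, 1_000)
# Converges to E[e^U - 1] = e - 2 as levels are added.
```

The sample counts per level are the knob MLMC optimizes: variance decays with level, so accuracy is bought mostly at the coarse, cheap levels.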
Prospect on general software of Monte Carlo method
International Nuclear Information System (INIS)
Pei Lucheng
1992-01-01
This is a short paper on the prospect of Monte Carlo general software. The content consists of cluster sampling method, zero variance technique, self-improved method, and vectorized Monte Carlo method
Bayesian phylogeny analysis via stochastic approximation Monte Carlo
Cheon, Sooyoung; Liang, Faming
2009-01-01
in simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo algorithm, to Bayesian phylogeny analysis. Our method
Applications of Monte Carlo method in Medical Physics
International Nuclear Information System (INIS)
Diez Rios, A.; Labajos, M.
1989-01-01
The basic ideas of Monte Carlo techniques are presented. Random numbers and their generation by congruential methods, which underlie Monte Carlo calculations, are shown. Monte Carlo techniques for solving integrals are discussed. The evaluation of a simple one-dimensional integral with a known answer by means of two different Monte Carlo approaches is discussed. The basic principles of simulating photon histories on a computer and of reducing variance, as well as the current applications in Medical Physics, are commented on. (Author)
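Two standard Monte Carlo approaches to such an integral are hit-or-miss and sample-mean estimation; the sketch below uses the integral of x^2 over [0, 1] = 1/3 as the known-answer example (the specific integral is an assumption, not necessarily the one used in the paper).

```python
import random

random.seed(0)

def f(x):
    return x * x               # integral over [0, 1] is exactly 1/3

N = 100_000

# Approach 1, hit-or-miss: fraction of random points in the unit square
# that fall under the curve.
hits = 0
for _ in range(N):
    x, y = random.random(), random.random()
    if y < f(x):
        hits += 1
hit_or_miss = hits / N

# Approach 2, sample mean: average f over uniform samples of x.
sample_mean = sum(f(random.random()) for _ in range(N)) / N
```

For the same number of samples the sample-mean estimator has lower variance here, which is the usual motivation for presenting the two side by side.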
Directory of Open Access Journals (Sweden)
Ramon Dachs
2014-03-01
Full Text Available Bio-bibliographical note + poetics + creative pieces («Rebel·lió: problema visual», «Tarot de Marsella: poema aleatorio», «Codex mundi. Contingencia: escritura fractal II-3», «Blanc o nada. Topoemalogía (trilingüe)», «New York, Portbou, Benjamin» and «Bar 12 heures») + questionnaire (Victoria Pineda)
Directory of Open Access Journals (Sweden)
Yuste Jesus
2017-01-01
Full Text Available Knowledge of the behaviour of different grapevine rootstocks is basic to achieving a good adaptation of the vine to its growing area. With the objective of determining the agronomic and qualitative response of cv. Sauvignon Blanc to the use of several rootstocks, a trial was established in 2006 in the D.O. Rueda. The vines are vertically trellised, with bilateral Royat cordon pruning, and the vine spacing is 2.60 m × 1.25 m. The row orientation is NNW (N-25°). The experimental design consisted of randomized blocks with 4 replications and an elemental plot of 60 vines. Over the period 2010–2014, the following 10 rootstocks (treatments) were studied: 110R, 101-14M, 420A, 3309C, 41B, 161-49C, 196-17C, Fercal, Gravesac and RGM. The rootstocks 420A and 41B showed a production more than 50% higher than 196-17C and 161-49C, and more than 100% higher than RGM, due to the number of clusters per vine and, to a greater extent, to the cluster weight. Fercal and Gravesac showed an increase in pruning wood weight of 24% with respect to 196-17C and 161-49C, and of 90% with respect to RGM, mainly due to the shoot weight. The sugar concentration increased with 101-14M, 196-17C and Fercal, and was reduced with 161-49C, 41B and RGM. The pH of the must was reduced with Fercal, whereas the titratable acidity increased, as it also did with Gravesac and 161-49C. The tartaric acid increased only slightly with Fercal and 161-49C, whereas the malic acid increased with Gravesac and Fercal and was reduced with 41B, 3309C, RGM and 101-14M. The potassium concentration increased with 196-17C, Gravesac and Fercal, and was reduced with 41B, 161-49C, 420A and 3309C. The effects observed offer alternatives for rootstock choice according to the growing conditions and objectives of the vineyard.
Monte Carlo simulations for generic granite repository studies
Energy Technology Data Exchange (ETDEWEB)
Chu, Shaoping [Los Alamos National Laboratory; Lee, Joon H [SNL; Wang, Yifeng [SNL
2010-12-08
In a collaborative study between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL) for the DOE-NE Office of Fuel Cycle Technologies Used Fuel Disposition (UFD) Campaign project, we have conducted preliminary system-level analyses to support the development of a long-term strategy for geologic disposal of high-level radioactive waste. A general modeling framework consisting of near-field and far-field submodels for a granite GDSE was developed. A representative far-field transport model for a generic granite repository was merged with an integrated systems (GoldSim) near-field model. Integrated Monte Carlo model runs with the combined near- and far-field transport models were performed, and the parameter sensitivities were evaluated for the combined system. In addition, a sub-set of radionuclides that are potentially important to repository performance were identified and evaluated for a series of model runs. The analyses were conducted with different waste inventory scenarios. Analyses were also conducted for different repository radionuclide release scenarios. While the results to date are for a generic granite repository, the work establishes the method to be used in the future to provide guidance on the development of a strategy for long-term disposal of high-level radioactive waste in a granite repository.
High-energy particle Monte Carlo at Los Alamos
International Nuclear Information System (INIS)
Prael, R.E.
1985-01-01
A major computational effort at Los Alamos has been the development of a code system based on the HETC code for the transport of nucleons, pions, and muons. The Los Alamos National Laboratory version of HETC utilizes MCNP geometry and interfaces with MCNP for the transport of neutrons below 20 MeV and photons at any energy. A major recent effort has been the development of the PHT code for treating the gamma cascade in excited nuclei (the residual nuclei from an HETC calculation) by the Monte Carlo method to generate a photon source for MCNP. The HETC/MCNP code system has been extensively used for design studies of accelerator targets and shielding, including the design of LAMPF-II. It is extensively used for the design and analysis of accelerator experiments. Los Alamos National Laboratory has been an active member of the International Collaboration on Advanced Neutron Sources; as such we engage in shared code development and computational efforts. In the past few years, additional effort has been devoted to the development of a Chen-model intranuclear cascade code (INCA1) featuring a cluster model for the nucleus and deuteron pickup reactions. Concurrently, the INCA2 code for the breakup of light, excited nuclei using the Fermi breakup model has been developed. Together, they have been used for the calculation of neutron and proton cross sections in the energy ranges appropriate to medical accelerators, and for the computation of tissue kerma factors
Characterization of decommissioned reactor internals: Monte Carlo analysis technique
International Nuclear Information System (INIS)
Reid, B.D.; Love, E.F.; Luksic, A.T.
1993-03-01
This study discusses computer analysis techniques for determining activation levels of irradiated reactor component hardware to yield data for the Department of Energy's Greater-Than-Class C Low-Level Radioactive Waste Program. The study recommends the Monte Carlo Neutron/Photon (MCNP) computer code as the best analysis tool for this application and compares the technique to direct sampling methodology. To implement the MCNP analysis, a computer model would be developed to reflect the geometry, material composition, and power history of an existing shutdown reactor. MCNP analysis would then be performed using the computer model, and the results would be validated by comparison to laboratory analysis results from samples taken from the shutdown reactor. The report estimates uncertainties for each step of the computational and laboratory analyses; the overall uncertainty of the MCNP results is projected to be ±35%. The primary source of uncertainty is identified as the material composition of the components, and research is suggested to address that uncertainty
Monte Carlo computation in the applied research of nuclear technology
International Nuclear Information System (INIS)
Xu Shuyan; Liu Baojie; Li Qin
2007-01-01
This article briefly introduces Monte Carlo methods and their properties. It surveys Monte Carlo methods with emphasis on their applications to several domains of nuclear technology. Monte Carlo simulation methods and several commonly used computer programs that implement them are also introduced. The proposed methods are demonstrated by a real example. (authors)
Clouvas, A; Antonopoulos-Domis, M; Silva, J
2000-01-01
The dose rate conversion factors D_CF (absorbed dose rate in air per unit activity per unit of soil mass, nGy h⁻¹ per Bq kg⁻¹) are calculated 1 m above ground for photon emitters of natural radionuclides uniformly distributed in the soil. Three Monte Carlo codes are used: 1) the MCNP code of Los Alamos; 2) the GEANT code of CERN; and 3) a Monte Carlo code developed in the Nuclear Technology Laboratory of the Aristotle University of Thessaloniki. The accuracy of the Monte Carlo results is tested by comparing the unscattered flux obtained by the three Monte Carlo codes with an independent straightforward calculation. All codes, and particularly MCNP, calculate accurately the absorbed dose rate in air due to the unscattered radiation. For the total radiation (unscattered plus scattered) the D_CF values calculated by the three codes are in very good agreement with one another. The comparison between these results and the results deduced previously by other authors indicates a good ag...
Report on the Oak Ridge workshop on Monte Carlo codes for relativistic heavy-ion collisions
International Nuclear Information System (INIS)
Awes, T.C.; Sorensen, S.P.
1988-01-01
In order to make detailed predictions for the case of purely hadronic matter, several Monte Carlo codes have been developed to describe relativistic nucleus-nucleus collisions. Although these various models build upon models of hadron-hadron interactions and have been fitted to reproduce hadron-hadron collision data, they have rather different pictures of the underlying hadron collision process and of subsequent particle production. Until now, the different Monte Carlo codes have, in general, been compared to different sets of experimental data, according to which results were readily available to the model builder or which Monte Carlo code was readily available to an experimental group. As a result, it has been difficult to draw firm conclusions about whether the observed deviations between experiments and calculations were due to deficiencies in the particular model, experimental discrepancies, or interesting effects beyond a simple superposition of nucleon-nucleon collisions. For this reason, it was decided that it would be productive to have a structured confrontation between the available experimental data and the many models of high-energy nuclear collisions in a manner in which it could be ensured that the computer codes were run correctly and the experimental acceptances were properly taken into account. With this purpose in mind, a Workshop on Monte Carlo Codes for Relativistic Heavy-Ion Collisions was organized at the Joint Institute for Heavy Ion Research at Oak Ridge National Laboratory from September 12--23, 1988. This paper reviews this workshop. 11 refs., 6 figs
Radiological hazard assessment at the Monte Bello Islands
International Nuclear Information System (INIS)
Cooper, M.B.; Martin, L.J.; Wilks, M.J.; Williams, G.A.
1990-12-01
Field and laboratory measurements are described and data presented which enabled dose assessments for exposure to artificial radionuclides at the Monte Bello Islands, the sites of U.K. atomic weapons tests in 1952 and 1956. The report focuses on quantifying the inhalation hazard, as exposure via the ingestion and wound contamination pathways is considered inconsequential. Surface soil concentrations of radionuclides and particle size analyses are presented for various sampling sites. Analyses of the distribution with depth indicated that, in general, the activity is more or less uniformly mixed through the top 40 mm, although in a few cases the top 10 mm contains the bulk of the activity. The ²³⁹Pu/²⁴¹Am activity ratios were measured for selected samples. The only potential hazards to health from residual radioactive contamination on the Monte Bello Islands are due to the inhalation of actinides (specifically plutonium and americium) and from the external gamma-radiation field. Only one area, in the fallout plume of HURRICANE to the north-west of Main Beach, is a potential inhalation hazard. For an average inhalable dust loading of 0.1 mg/m³, three days' occupancy of the most contaminated site will result in a committed effective dose equivalent of 1 mSv. The two ground zeros could not be considered inhalation hazards, considering the small areas concerned and the habits of visitors (full-time occupancy, over a period of one year or more, of the most contaminated sites at either of the G1 or G2 ground zeros would be required to reach 1 mSv). 25 refs., 23 tabs., 3 figs
Monte Carlo code criticality benchmark comparisons for waste packaging
International Nuclear Information System (INIS)
Alesso, H.P.; Annese, C.E.; Buck, R.M.; Pearson, J.S.; Lloyd, W.R.
1992-07-01
COG is a new point-wise Monte Carlo code being developed and tested at Lawrence Livermore National Laboratory (LLNL). It solves the Boltzmann equation for the transport of neutrons and photons. The objective of this paper is to report on COG results for criticality benchmark experiments both on a Cray mainframe and on a HP 9000 workstation. COG has been recently ported to workstations to improve its accessibility to a wider community of users. COG has some similarities to a number of other computer codes used in the shielding and criticality community. The recently introduced high-performance reduced instruction set (RISC) UNIX workstations provide computational power that approaches that of mainframes at a fraction of the cost. A version of COG is currently being developed for the Hewlett Packard 9000/730 computer with a UNIX operating system. Subsequent porting operations will move COG to SUN, DEC, and IBM workstations. In addition, a CAD system for preparation of the geometry input for COG is being developed. In July 1977, Babcock & Wilcox Co. (B&W) was awarded a contract to conduct a series of critical experiments that simulated close-packed storage of LWR-type fuel. These experiments provided data for benchmarking and validating calculational methods used in predicting K-effective of nuclear fuel storage in close-packed, neutron-poisoned arrays. Low-enriched UO2 fuel pins in water-moderated lattices in fuel storage represent a challenging criticality calculation for Monte Carlo codes, particularly when the fuel pins extend out of the water. COG and KENO calculational results of these criticality benchmark experiments are presented
Monte Carlo-based tail exponent estimator
Barunik, Jozef; Vacha, Lukas
2010-11-01
In this paper we propose a new approach to estimating the tail exponent in financial stock markets. We begin the study with the finite-sample behavior of the Hill estimator under α-stable distributions. Using large Monte Carlo simulations, we show that the Hill estimator overestimates the true tail exponent and can hardly be used on small samples. Utilizing our results, we introduce a Monte Carlo-based method of estimation for the tail exponent. Our proposed method is not sensitive to the choice of tail size and works well also on small data samples. The new estimator also gives unbiased results with symmetrical confidence intervals. Finally, we demonstrate the power of our estimator on international stock market indices. On the two separate periods of 2002-2005 and 2006-2009, we estimate the tail exponent.
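The classical Hill estimator that the paper studies can be sketched as follows. This is a generic illustration on exact Pareto draws, not the authors' code; the function name and the choice k = 2000 are ours.

```python
import random
import math

def hill_estimator(sample, k):
    """Hill estimator of the tail exponent alpha from the k largest order statistics."""
    xs = sorted(sample, reverse=True)
    # mean log-excess of the top-k values over the (k+1)-th order statistic
    h = sum(math.log(xs[i] / xs[k]) for i in range(k)) / k
    return 1.0 / h

random.seed(0)
# Pareto(alpha = 1.5) samples via inverse transform: x = u**(-1/alpha)
alpha = 1.5
sample = [random.random() ** (-1.0 / alpha) for _ in range(100000)]
est = hill_estimator(sample, k=2000)  # should be close to 1.5
```

On heavier-tailed or small samples the estimate drifts upward, which is the bias the paper's Monte Carlo study quantifies.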
No-compromise reptation quantum Monte Carlo
International Nuclear Information System (INIS)
Yuen, W K; Farrar, Thomas J; Rothstein, Stuart M
2007-01-01
Since its publication, the reptation quantum Monte Carlo algorithm of Baroni and Moroni (1999 Phys. Rev. Lett. 82 4745) has been applied to several important problems in physics, but its mathematical foundations are not well understood. We show that their algorithm is not of typical Metropolis-Hastings type, and we specify conditions required for the generated Markov chain to be stationary and to converge to the intended distribution. The time-step bias may add up, and in many applications it is only the middle of a reptile that is the most important. Therefore, we propose an alternative, 'no-compromise reptation quantum Monte Carlo' to stabilize the middle of the reptile. (fast track communication)
Multilevel Monte Carlo Approaches for Numerical Homogenization
Efendiev, Yalchin R.
2015-10-01
In this article, we study the application of multilevel Monte Carlo (MLMC) approaches to numerical random homogenization. Our objective is to compute the expectation of some functionals of the homogenized coefficients, or of the homogenized solutions. This is accomplished within MLMC by considering different sizes of representative volumes (RVEs). Many inexpensive computations with the smallest RVE size are combined with fewer expensive computations performed on larger RVEs. Likewise, when it comes to homogenized solutions, different levels of coarse-grid meshes are used to solve the homogenized equation. We show that, by carefully selecting the number of realizations at each level, we can achieve a speed-up in the computations in comparison to a standard Monte Carlo method. Numerical results are presented for both one-dimensional and two-dimensional test-cases that illustrate the efficiency of the approach.
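The telescoping structure of the MLMC estimator described above can be sketched with a toy level hierarchy. Here the family (1 + u/2^l)^(2^l) approximating e^u stands in for the RVE resolutions; the sample counts and function names are our illustrative assumptions.

```python
import random
import math

def approx_exp(u, level):
    """Level-l approximation of exp(u): (1 + u/n)^n with n = 2^level."""
    n = 2 ** level
    return (1.0 + u / n) ** n

def mlmc_estimate(counts, max_level, rng):
    """Telescoping MLMC estimator of E[exp(U)], U ~ Uniform(0, 1).

    Level 0 is sampled directly; each correction level l uses the SAME
    uniform draw at levels l and l-1 (the coupling), so the corrections
    have small variance and need far fewer samples than level 0.
    """
    n0 = counts[0]
    est = sum(approx_exp(rng.random(), 0) for _ in range(n0)) / n0
    for level in range(1, max_level + 1):
        nl = counts[level]
        corr = 0.0
        for _ in range(nl):
            u = rng.random()  # shared sample couples the two resolutions
            corr += approx_exp(u, level) - approx_exp(u, level - 1)
        est += corr / nl
    return est

rng = random.Random(42)
# geometrically decreasing sample counts: fine levels are expensive but few
counts = [40000, 10000, 2500, 625]
est = mlmc_estimate(counts, max_level=3, rng=rng)
exact = math.e - 1.0  # E[exp(U)] for U ~ Uniform(0, 1)
```

Most of the work is spent at the cheap coarse level, yet the estimate tracks the fine-level expectation, which is the speed-up mechanism the article analyzes.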
Monte Carlo simulations in skin radiotherapy
International Nuclear Information System (INIS)
Sarvari, A.; Jeraj, R.; Kron, T.
2000-01-01
The primary goal of this work was to develop a procedure for calculating the appropriate filter shape for a brachytherapy applicator used in skin radiotherapy. In the applicator a radioactive source is positioned close to the skin. Without a filter, the resultant dose distribution would be highly nonuniform; high uniformity is usually required, however. This can be achieved using an appropriately shaped filter, which flattens the dose profile. Because of the complexity of the transport and geometry, Monte Carlo simulations had to be used. An ¹⁹²Ir high dose rate photon source was used. All necessary transport parameters were simulated with the MCNP4B Monte Carlo code. A highly efficient iterative procedure was developed, which enabled calculation of the optimal filter shape in only a few iterations. The initially non-uniform dose distributions became uniform to within a percent when applying the filter calculated by this procedure. (author)
Monte Carlo simulations on SIMD computer architectures
International Nuclear Information System (INIS)
Burmester, C.P.; Gronsky, R.; Wille, L.T.
1992-01-01
In this paper algorithmic considerations regarding the implementation of various materials science applications of the Monte Carlo technique to single instruction multiple data (SIMD) computer architectures are presented. In particular, implementation of the Ising model with nearest, next nearest, and long range screened Coulomb interactions on the SIMD architecture MasPar MP-1 (DEC mpp-12000) series of massively parallel computers is demonstrated. Methods of code development which optimize processor array use and minimize inter-processor communication are presented, including lattice partitioning and the use of processor array spanning tree structures for data reduction. Both geometric and algorithmic parallel approaches are utilized. Benchmarks in terms of Monte Carlo updates per second for the MasPar architecture are presented and compared to values reported in the literature from comparable studies on other architectures
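The lattice-partitioning idea is the classic checkerboard decomposition: sites of one parity do not interact with each other, so a whole sublattice can be updated in one data-parallel step. A minimal sketch using NumPy vectorization as a stand-in for the SIMD processor array (lattice size, temperature, and sweep count are our illustrative choices):

```python
import numpy as np

def checkerboard_sweep(spins, beta, rng):
    """One Metropolis sweep of the 2D Ising model via checkerboard partitioning.

    Each parity sublattice is updated simultaneously -- the same
    data-parallel decomposition used on SIMD machines.
    """
    ii, jj = np.indices(spins.shape)
    for parity in (0, 1):
        mask = (ii + jj) % 2 == parity
        # sum of the four nearest neighbours (periodic boundaries)
        nn = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0)
              + np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * spins * nn  # energy cost of flipping each site
        accept = rng.random(spins.shape) < np.exp(-beta * dE)
        spins = np.where(mask & accept, -spins, spins)
    return spins

rng = np.random.default_rng(1)
spins = rng.choice([-1, 1], size=(32, 32))
for _ in range(200):
    spins = checkerboard_sweep(spins, beta=1.0, rng=rng)  # low temperature
# per-site energy, each bond counted once; approaches -2 when ordered
energy = -(spins * (np.roll(spins, 1, 0) + np.roll(spins, 1, 1))).mean()
```

On an actual SIMD array the two sublattice updates map onto the processor grid directly, with `np.roll` replaced by nearest-neighbour communication.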
Coevolution Based Adaptive Monte Carlo Localization (CEAMCL)
Directory of Open Access Journals (Sweden)
Luo Ronghua
2008-11-01
Full Text Available An adaptive Monte Carlo localization algorithm based on the coevolution mechanism of ecological species is proposed. Samples are clustered into species, each of which represents a hypothesis of the robot's pose. Since the coevolution between the species ensures that multiple distinct hypotheses can be tracked stably, the problem of premature convergence when using MCL in highly symmetric environments can be solved. The sample size can be adjusted adaptively over time according to the uncertainty of the robot's pose by using the population growth model. In addition, by using the crossover and mutation operators of evolutionary computation, intra-species evolution can drive the samples towards the regions where the desired posterior density is large, so a small set of samples can represent the desired density well enough for precise localization. The new algorithm is termed coevolution based adaptive Monte Carlo localization (CEAMCL). Experiments have been carried out to prove the efficiency of the new localization algorithm.
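The baseline that CEAMCL's coevolving species extend is the plain predict-weight-resample cycle of Monte Carlo localization. A minimal sketch for a hypothetical robot on a line that noisily observes its own position (all parameters and names are ours, not the paper's):

```python
import random
import math

def mcl_step(particles, control, observed_x, rng, noise=0.1, sigma=0.5):
    """One predict-weight-resample cycle of plain Monte Carlo localization."""
    # predict: apply the motion command with Gaussian actuation noise
    moved = [p + control + rng.gauss(0.0, noise) for p in particles]
    # weight: Gaussian measurement likelihood of the observation
    w = [math.exp(-((p - observed_x) ** 2) / (2 * sigma ** 2)) for p in moved]
    # resample: draw a new, equally weighted particle set
    return rng.choices(moved, weights=w, k=len(moved))

rng = random.Random(7)
true_x = 0.0
particles = [rng.uniform(-10.0, 10.0) for _ in range(1000)]
for _ in range(30):
    true_x += 1.0  # robot moves one unit per step
    obs = true_x + rng.gauss(0.0, 0.5)
    particles = mcl_step(particles, control=1.0, observed_x=obs, rng=rng)
estimate = sum(particles) / len(particles)  # posterior mean pose
```

In a symmetric environment several particle clusters would carry comparable weight; clustering them into species and resampling per species, as CEAMCL does, is what keeps the minority hypotheses alive.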
Multilevel sequential Monte-Carlo samplers
Jasra, Ajay
2016-01-01
Multilevel Monte-Carlo methods provide a powerful computational technique for reducing the computational cost of estimating expectations for a given computational effort. They are particularly relevant for computational problems when approximate distributions are determined via a resolution parameter h, with h=0 giving the theoretical exact distribution (e.g. SDEs or inverse problems with PDEs). The method provides a benefit by coupling samples from successive resolutions, and estimating differences of successive expectations. We develop a methodology that brings Sequential Monte-Carlo (SMC) algorithms within the framework of the Multilevel idea, as SMC provides a natural set-up for coupling samples over different resolutions. We prove that the new algorithm indeed preserves the benefits of the multilevel principle, even if samples at all resolutions are now correlated.
Monte Carlo simulation of gas Cerenkov detectors
International Nuclear Information System (INIS)
Mack, J.M.; Jain, M.; Jordan, T.M.
1984-01-01
Theoretical study of selected gamma-ray and electron diagnostics necessitates coupling Cerenkov radiation to electron/photon cascades. A Cerenkov production model and its incorporation into a general-geometry Monte Carlo coupled electron/photon transport code are discussed. A special optical photon ray-trace is implemented using bulk optical properties assigned to each Monte Carlo zone. Good agreement exists between experimental and calculated Cerenkov data in the case of a carbon-dioxide gas Cerenkov detector experiment. Cerenkov production and threshold data are presented for a typical carbon-dioxide gas detector that converts a 16.7 MeV photon source to Cerenkov light, which is collected by optics and detected by a photomultiplier
Hypothesis testing of scientific Monte Carlo calculations
Wallerberger, Markus; Gull, Emanuel
2017-11-01
The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitates rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.
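The kind of test the paper advocates can be sketched as a z-test of a Monte Carlo estimate against a known expectation. This is a generic illustration (estimating π/4 by the hit-or-miss method); the function name and the 3σ threshold are our choices:

```python
import random
import math

def z_test_mc(estimate, std_err, expected, z_crit=3.0):
    """Pass iff the Monte Carlo estimate is within z_crit standard errors
    of a known expectation -- a simple automatic test for a stochastic code."""
    z = abs(estimate - expected) / std_err
    return z <= z_crit

random.seed(3)
n = 100000
# hit-or-miss estimate of the quarter-circle area pi/4
hits = sum(random.random() ** 2 + random.random() ** 2 <= 1.0 for _ in range(n))
p_hat = hits / n
se = math.sqrt(p_hat * (1 - p_hat) / n)  # binomial standard error
ok = z_test_mc(p_hat, se, math.pi / 4)
```

Such a test fails with small, known probability even for a correct code, so in practice the threshold and repetition policy are chosen to balance false alarms against sensitivity to real bugs.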
Monte Carlo Simulation for Particle Detectors
Pia, Maria Grazia
2012-01-01
Monte Carlo simulation is an essential component of experimental particle physics in all the phases of its life-cycle: the investigation of the physics reach of detector concepts, the design of facilities and detectors, the development and optimization of data reconstruction software, the data analysis for the production of physics results. This note briefly outlines some research topics related to Monte Carlo simulation, that are relevant to future experimental perspectives in particle physics. The focus is on physics aspects: conceptual progress beyond current particle transport schemes, the incorporation of materials science knowledge relevant to novel detection technologies, functionality to model radiation damage, the capability for multi-scale simulation, quantitative validation and uncertainty quantification to determine the predictive power of simulation. The R&D on simulation for future detectors would profit from cooperation within various components of the particle physics community, and synerg...
Status of Monte Carlo at Los Alamos
International Nuclear Information System (INIS)
Thompson, W.L.; Cashwell, E.D.; Godfrey, T.N.K.; Schrandt, R.G.; Deutsch, O.L.; Booth, T.E.
1980-05-01
Four papers were presented by Group X-6 on April 22, 1980, at the Oak Ridge Radiation Shielding Information Center (RSIC) Seminar-Workshop on Theory and Applications of Monte Carlo Methods. These papers are combined into one report for convenience and because they are related to each other. The first paper (by Thompson and Cashwell) is a general survey about X-6 and MCNP and is an introduction to the other three papers. It can also serve as a resume of X-6. The second paper (by Godfrey) explains some of the details of geometry specification in MCNP. The third paper (by Cashwell and Schrandt) illustrates calculating flux at a point with MCNP; in particular, the once-more-collided flux estimator is demonstrated. Finally, the fourth paper (by Thompson, Deutsch, and Booth) is a tutorial on some variance-reduction techniques. It should be required reading for a fledgling Monte Carlo practitioner
Topological zero modes in Monte Carlo simulations
International Nuclear Information System (INIS)
Dilger, H.
1994-08-01
We present an improvement of global Metropolis updating steps, the instanton hits, used in a hybrid Monte Carlo simulation of the two-flavor Schwinger model with staggered fermions. These hits are designed to change the topological sector of the gauge field. In order to match these hits to an unquenched simulation with pseudofermions, the approximate zero mode structure of the lattice Dirac operator has to be considered explicitly. (orig.)
Handbook of Markov chain Monte Carlo
Brooks, Steve
2011-01-01
"Handbook of Markov Chain Monte Carlo" brings together the major advances that have occurred in recent years while incorporating enough introductory material for new users of MCMC. Along with thorough coverage of the theoretical foundations and algorithmic and computational methodology, this comprehensive handbook includes substantial realistic case studies from a variety of disciplines. These case studies demonstrate the application of MCMC methods and serve as a series of templates for the construction, implementation, and choice of MCMC methodology.
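The workhorse algorithm underlying the handbook's case studies is random-walk Metropolis. A minimal, self-contained sketch targeting a standard normal (step size and chain length are our illustrative choices):

```python
import random
import math

def metropolis_hastings(log_target, x0, n_steps, step, rng):
    """Random-walk Metropolis sampler: propose x' = x + N(0, step^2),
    accept with probability min(1, target(x')/target(x))."""
    chain = [x0]
    x, lp = x0, log_target(x0)
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_target(prop)
        # accept/reject in log space to avoid overflow
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

rng = random.Random(0)
# target: standard normal density, known only up to a constant
chain = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 50000, 1.0, rng)
burned = chain[5000:]  # discard burn-in
mean = sum(burned) / len(burned)
var = sum((x - mean) ** 2 for x in burned) / len(burned)
```

The successive states are correlated, so effective sample sizes are smaller than the chain length; diagnosing this is a large part of what the handbook covers.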
The Lund Monte Carlo for jet fragmentation
International Nuclear Information System (INIS)
Sjoestrand, T.
1982-03-01
We present a Monte Carlo program based on the Lund model for jet fragmentation. Quark, gluon, diquark and hadron jets are considered. Special emphasis is put on the fragmentation of colour singlet jet systems, for which energy, momentum and flavour are conserved explicitly. The model for decays of unstable particles, in particular the weak decay of heavy hadrons, is described. The central part of the paper is a detailed description on how to use the FORTRAN 77 program. (Author)
Monte Carlo methods for preference learning
DEFF Research Database (Denmark)
Viappiani, P.
2012-01-01
Utility elicitation is an important component of many applications, such as decision support systems and recommender systems. Such systems query the users about their preferences and give recommendations based on the system's belief about the utility function. Critical to these applications is the acquisition of a prior distribution over the utility parameters and the possibility of real-time Bayesian inference. In this paper we consider Monte Carlo methods for these problems.
Monte Carlo methods for shield design calculations
International Nuclear Information System (INIS)
Grimstone, M.J.
1974-01-01
A suite of Monte Carlo codes is being developed for use on a routine basis in commercial reactor shield design. The methods adopted for this purpose include the modular construction of codes, simplified geometries, automatic variance reduction techniques, continuous energy treatment of cross section data, and albedo methods for streaming. Descriptions are given of the implementation of these methods and of their use in practical calculations. 26 references. (U.S.)
General purpose code for Monte Carlo simulations
International Nuclear Information System (INIS)
Wilcke, W.W.
1983-01-01
A general-purpose computer program called MONTHY has been written to perform Monte Carlo simulations of physical systems. To achieve a high degree of flexibility the code is organized like a general-purpose computer, operating on a vector describing the time-dependent state of the system under simulation. The instruction set of the computer is defined by the user and is therefore adaptable to the particular problem studied. The organization of MONTHY allows iterative and conditional execution of operations
Autocorrelations in hybrid Monte Carlo simulations
International Nuclear Information System (INIS)
Schaefer, Stefan; Virotta, Francesco
2010-11-01
Simulations of QCD suffer from severe critical slowing down towards the continuum limit. This problem is known to be prominent in the topological charge, however, all observables are affected to various degree by these slow modes in the Monte Carlo evolution. We investigate the slowing down in high statistics simulations and propose a new error analysis method, which gives a realistic estimate of the contribution of the slow modes to the errors. (orig.)
Introduction to the Monte Carlo methods
International Nuclear Information System (INIS)
Uzhinskij, V.V.
1993-01-01
Codes illustrating the use of Monte Carlo methods in high energy physics such as the inverse transformation method, the ejection method, the particle propagation through the nucleus, the particle interaction with the nucleus, etc. are presented. A set of useful algorithms of random number generators is given (the binomial distribution, the Poisson distribution, β-distribution, γ-distribution and normal distribution). 5 figs., 1 tab
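Two of the techniques these lectures illustrate, the inverse transformation method and the ejection (rejection) method, can be sketched as follows. The target distributions (an exponential and a half-circle density) are our generic choices, not taken from the lectures:

```python
import random
import math

def sample_exponential(lam, rng):
    """Inverse transformation method: X = -ln(U)/lam has CDF 1 - exp(-lam*x)."""
    return -math.log(rng.random()) / lam

def sample_halfcircle(rng):
    """Ejection (rejection) method for f(x) = (2/pi) sqrt(1 - x^2) on [-1, 1]:
    throw uniform points into the bounding box and keep those under the curve."""
    fmax = 2.0 / math.pi
    while True:
        x = rng.uniform(-1.0, 1.0)
        if rng.uniform(0.0, fmax) <= fmax * math.sqrt(1.0 - x * x):
            return x

rng = random.Random(1)
exp_mean = sum(sample_exponential(2.0, rng) for _ in range(100000)) / 100000
half_mean = sum(sample_halfcircle(rng) for _ in range(20000)) / 20000
```

Inverse transformation needs an invertible CDF; the ejection method only needs the density up to a bound, at the cost of discarded draws (here about a quarter of them).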
Sequential Monte Carlo with Highly Informative Observations
Del Moral, Pierre; Murray, Lawrence M.
2014-01-01
We propose sequential Monte Carlo (SMC) methods for sampling the posterior distribution of state-space models under highly informative observation regimes, a situation in which standard SMC methods can perform poorly. A special case is simulating bridges between given initial and final values. The basic idea is to introduce a schedule of intermediate weighting and resampling times between observation times, which guide particles towards the final state. This can always be done for continuous-...
Monte Carlo codes use in neutron therapy
International Nuclear Information System (INIS)
Paquis, P.; Mokhtari, F.; Karamanoukian, D.; Pignol, J.P.; Cuendet, P.; Iborra, N.
1998-01-01
Monte Carlo calculation codes allow all the parameters relevant to radiation effects, such as the dose deposition or the type of microscopic interactions, to be studied accurately through particle-by-particle transport simulation. These features are very useful for neutron irradiations, from device development up to dosimetry. This paper illustrates some applications of these codes in Neutron Capture Therapy and Neutron Capture Enhancement of fast neutron irradiations. (authors)
Quantum Monte Carlo calculations of light nuclei
International Nuclear Information System (INIS)
Pandharipande, V. R.
1999-01-01
Quantum Monte Carlo methods provide an essentially exact way to calculate various properties of nuclear bound, and low energy continuum states, from realistic models of nuclear interactions and currents. After a brief description of the methods and modern models of nuclear forces, we review the results obtained for all the bound, and some continuum states of up to eight nucleons. Various other applications of the methods are reviewed along with future prospects
Monte-Carlo simulation of electromagnetic showers
International Nuclear Information System (INIS)
Amatuni, Ts.A.
1984-01-01
The universal ELSS-1 program for Monte Carlo simulation of high energy electromagnetic showers in homogeneous absorbers of arbitrary geometry has been written. The major processes and effects of electron and photon interaction with matter, particularly the Landau-Pomeranchuk-Migdal effect, are taken into account in the simulation procedures. The simulation results are compared with experimental data. Some characteristics of shower detectors and of electromagnetic showers for energies up to 1 TeV are calculated
Cost of splitting in Monte Carlo transport
International Nuclear Information System (INIS)
Everett, C.J.; Cashwell, E.D.
1978-03-01
In a simple transport problem designed to estimate transmission through a plane slab of x free paths by Monte Carlo methods, it is shown that m-splitting (m ≥ 2) does not pay unless exp(x) > m(m + 3)/(m - 1). In such a case, the minimum total cost in terms of machine time is obtained as a function of m, and the optimal value of m is determined
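The paper's criterion is a one-line check; a small sketch evaluating it across splitting orders for a given slab thickness (the example thicknesses are our choices):

```python
import math

def splitting_pays(x, m):
    """The paper's criterion: m-splitting (m >= 2) pays only when
    exp(x) > m(m + 3)/(m - 1), where x is the slab thickness in free paths."""
    return math.exp(x) > m * (m + 3) / (m - 1)

# for a slab of 2.25 free paths, which splitting orders pay off?
worthwhile = [m for m in range(2, 10) if splitting_pays(2.25, m)]
```

Since m(m + 3)/(m - 1) is minimized at m = 3 (value 9), splitting of any order pays only for slabs thicker than ln 9 ≈ 2.2 free paths, and near that threshold only a narrow band of m values qualifies.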
From Monte Carlo to Quantum Computation
Heinrich, Stefan
2001-01-01
Quantum computing was so far mainly concerned with discrete problems. Recently, E. Novak and the author studied quantum algorithms for high dimensional integration and dealt with the question, which advantages quantum computing can bring over classical deterministic or randomized methods for this type of problem. In this paper we give a short introduction to the basic ideas of quantum computing and survey recent results on high dimensional integration. We discuss connections to the Monte Carl...
Monte Carlo simulation of Touschek effect
Directory of Open Access Journals (Sweden)
Aimin Xiao
2010-07-01
Full Text Available We present a Monte Carlo method implementation in the code elegant for simulating Touschek scattering effects in a linac beam. The local scattering rate and the distribution of scattered electrons can be obtained from the code either for a Gaussian-distributed beam or for a general beam whose distribution function is given. In addition, scattered electrons can be tracked through the beam line and the local beam-loss rate and beam halo information recorded.
Monte Carlo method for neutron transport problems
International Nuclear Information System (INIS)
Asaoka, Takumi
1977-01-01
Some methods for decreasing variances in Monte Carlo neutron transport calculations are presented together with the results of sample calculations. The general-purpose neutron transport Monte Carlo code ''MORSE'' was used for this purpose. The first method discussed in this report is the method of statistical estimation. As an example of this method, the application of the coarse-mesh rebalance acceleration method to the criticality calculation of a cylindrical fast reactor is presented. The effective multiplication factor and its standard deviation are given as functions of the number of histories, and comparisons are made between the coarse-mesh rebalance method and the standard method. Five-group neutron fluxes at the core center are also compared with the result of an S4 calculation. The second method is the method of correlated sampling. This method was applied to the perturbation calculation of control rod worths in a fast critical assembly (FCA-V-3). Two methods of sampling (similar flight paths and identical flight paths) are tested and compared with experimental results. In every case the experimental value lies within the standard deviation of the Monte Carlo calculations. The third method is importance sampling. In this report a biased selection of particle flight directions is discussed. This method was applied to the flux calculation in a spherical fast neutron system surrounded by a 10.16 cm iron reflector. Result-direction biasing, path-length stretching, and no biasing are compared with an S8 calculation. (Aoki, K.)
Biased Monte Carlo optimization: the basic approach
International Nuclear Information System (INIS)
Campioni, Luca; Scardovelli, Ruben; Vestrucci, Paolo
2005-01-01
It is well known that the Monte Carlo method is very successful in tackling several kinds of system simulations. It often happens that one has to deal with rare events, and the use of a variance reduction technique is almost mandatory in order to have efficient Monte Carlo applications. The main issue associated with variance reduction techniques is the choice of the value of the biasing parameter. This task is typically left to the experience of the Monte Carlo user, who has to make many attempts before achieving an advantageous biasing. A valuable result is provided: a methodology and a practical rule for establishing a priori guidance on the choice of the optimal value of the biasing parameter. This result, which has been obtained for a single-component system, has the notable property of being valid for any multicomponent system. In particular, in this paper, the exponential and the uniform biases of exponentially distributed phenomena are investigated thoroughly.
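As a toy illustration of the exponential biasing the abstract refers to (our own example, not the paper's system): a rare failure probability P(T < t) for an exponential(lam) failure time can be estimated by sampling from a more failure-prone exponential(lam_b) and correcting each hit with a likelihood-ratio weight.

```python
import random, math

def biased_failure_prob(lam, t, lam_b, n=100_000, seed=1):
    """Importance sampling with an exponential bias: draw failure times from
    the biased density lam_b*exp(-lam_b*s) and weight each sample that fails
    before t by f(s)/g(s), so the estimator stays unbiased for P(T < t)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        s = -math.log(rng.random()) / lam_b   # biased exponential draw
        if s < t:
            w = (lam * math.exp(-lam * s)) / (lam_b * math.exp(-lam_b * s))
            total += w
    return total / n

# Rare event: lam = 1e-4, t = 1, true probability 1 - exp(-1e-4) ~ 1e-4.
# Analog sampling would see ~10 failures in 100,000 histories; the biased
# run scores a weighted hit on ~63% of histories.
est = biased_failure_prob(1e-4, 1.0, lam_b=1.0)
print(est)
```

The choice lam_b = 1.0 here is arbitrary; picking it well is exactly the problem the paper's a priori rule addresses.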
Quantum Monte Carlo for vibrating molecules
International Nuclear Information System (INIS)
Brown, W.R.; Lawrence Berkeley National Lab., CA
1996-08-01
Quantum Monte Carlo (QMC) has successfully computed the total electronic energies of atoms and molecules. The main goal of this work is to use correlation function quantum Monte Carlo (CFQMC) to compute the vibrational state energies of molecules given a potential energy surface (PES). In CFQMC, an ensemble of random walkers simulates the diffusion and branching processes of the imaginary-time time-dependent Schroedinger equation in order to evaluate the matrix elements. The program QMCVIB was written to perform multi-state VMC and CFQMC calculations and was employed for several calculations of the H2O and C3 vibrational states, using 7 PESs, 3 trial wavefunction forms, and two methods of non-linear basis function parameter optimization, on both serial and parallel computers. Different wavefunction forms were required to construct accurate trial wavefunctions for H2O and C3. For C3, the non-linear parameters were optimized with respect to the sum of the energies of several low-lying vibrational states, and the Monte Carlo data were collected into blocks to stabilize the statistical error estimates. Accurate vibrational state energies were computed using both the serial and parallel QMCVIB programs. Comparison of vibrational state energies computed from the three C3 PESs suggested that a non-linear equilibrium geometry PES is the most accurate and that discrete potential representations may be used to conveniently determine vibrational state energies.
Lattice gauge theories and Monte Carlo simulations
International Nuclear Information System (INIS)
Rebbi, C.
1981-11-01
After some preliminary considerations, the discussion of quantum gauge theories on a Euclidean lattice takes up the definition of Euclidean quantum theory and the treatment of the continuum limit; an analogy is made with statistical mechanics. Perturbative methods can produce useful results for strong or weak coupling. In the attempts to investigate the properties of the systems for intermediate coupling, numerical methods known as Monte Carlo simulations have proved valuable. The bulk of this paper illustrates the basic ideas underlying the Monte Carlo numerical techniques and the major results achieved with them according to the following program: Monte Carlo simulations (general theory, practical considerations), phase structure of Abelian and non-Abelian models, the observables (coefficient of the linear term in the potential between two static sources at large separation, mass of the lowest excited state with the quantum numbers of the vacuum (the so-called glueball), the potential between two static sources at very small distance, the critical temperature at which sources become deconfined), gauge fields coupled to bosonic matter (Higgs) fields, and systems with fermions.
Generalized hybrid Monte Carlo - CMFD methods for fission source convergence
International Nuclear Information System (INIS)
Wolters, Emily R.; Larsen, Edward W.; Martin, William R.
2011-01-01
In this paper, we generalize the recently published 'CMFD-Accelerated Monte Carlo' method and present two new methods that reduce the statistical error in CMFD-Accelerated Monte Carlo. The CMFD-Accelerated Monte Carlo method uses Monte Carlo to estimate nonlinear functionals used in low-order CMFD equations for the eigenfunction and eigenvalue. The Monte Carlo fission source is then modified to match the resulting CMFD fission source in a 'feedback' procedure. The two proposed methods differ from CMFD-Accelerated Monte Carlo in the definition of the required nonlinear functionals, but they have identical CMFD equations. The proposed methods are compared with CMFD-Accelerated Monte Carlo on a high dominance ratio test problem. All hybrid methods converge the Monte Carlo fission source almost immediately, leading to a large reduction in the number of inactive cycles required. The proposed methods stabilize the fission source more efficiently than CMFD-Accelerated Monte Carlo, leading to a reduction in the number of active cycles required. Finally, as in CMFD-Accelerated Monte Carlo, the apparent variance of the eigenfunction is approximately equal to the real variance, so the real error is well-estimated from a single calculation. This is an advantage over standard Monte Carlo, in which the real error can be underestimated due to inter-cycle correlation. (author)
Monte Carlo methods and models in finance and insurance
Korn, Ralf; Kroisandt, Gerald
2010-01-01
Offering a unique balance between applications and calculations, Monte Carlo Methods and Models in Finance and Insurance incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The authors separately discuss Monte Carlo techniques, stochastic process basics, and the theoretical background and intuition behind financial and actuarial mathematics, before bringing the topics together to apply the Monte Carlo methods to areas of finance and insurance. This allows for the easy identification of standard Monte Carlo tools and for a detailed focus on the main principles of financial and insurance mathematics. The book describes high-level Monte Carlo methods for standard simulation and the simulation of...
Energy Technology Data Exchange (ETDEWEB)
Lopez Ponte, M. A.; Navarro Amaro, J. F.; Perez Lopez, B.; Navarro Bravo, T.; Nogueira, P.; Vrba, T.
2013-07-01
Within Working Group 7 (internal dosimetry) of EURADOS (European Radiation Dosimetry Group e.V.), which is coordinated by CIEMAT, an international action for the in vivo measurement of americium in three skull-type phantoms has been conducted, using germanium detectors for gamma spectrometry together with simulation by Monte Carlo methods. The action was organized as two separate exercises, with the participation of institutions in Europe, America and Asia. Other similar actions precede this intercomparison of in vivo measurement and Monte Carlo modelling. The preliminary results and associated findings are presented in this work. The whole-body counter laboratory (CRC) of the internal dosimetry service (DPI) of CIEMAT was one of the participants in the in vivo measurement exercise, while the numerical dosimetry group of CIEMAT participated in the Monte Carlo simulation exercise. (Author)
PEREGRINE: An all-particle Monte Carlo code for radiation therapy
International Nuclear Information System (INIS)
Hartmann Siantar, C.L.; Chandler, W.P.; Rathkopf, J.A.; Svatos, M.M.; White, R.M.
1994-09-01
The goal of radiation therapy is to deliver a lethal dose to the tumor while minimizing the dose to normal tissues. To carry out this task, it is critical to calculate correctly the distribution of dose delivered. Monte Carlo transport methods have the potential to provide more accurate prediction of dose distributions than currently-used methods. PEREGRINE is a new Monte Carlo transport code developed at Lawrence Livermore National Laboratory for the specific purpose of modeling the effects of radiation therapy. PEREGRINE transports neutrons, photons, electrons, positrons, and heavy charged-particles, including protons, deuterons, tritons, helium-3, and alpha particles. This paper describes the PEREGRINE transport code and some preliminary results for clinically relevant materials and radiation sources
International Nuclear Information System (INIS)
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.
2015-01-01
This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000 ® problems. These benchmark and scaling studies show promising results
Bourva, L C A
1999-01-01
The general purpose neutron-photon-electron Monte Carlo N-Particle code, MCNP sup T sup M , has been used to simulate the neutronic characteristics of the on-site laboratory passive neutron coincidence counter to be installed, under Euratom Safeguards Directorate supervision, at the Sellafield reprocessing plant in Cumbria, UK. This detector is part of a series of nondestructive assay instruments to be installed for the accurate determination of the plutonium content of nuclear materials. The present work focuses on one aspect of this task, namely, the accurate calculation of the coincidence gate utilisation factor. This parameter is an important term in the interpretative model used to analyse the passive neutron coincidence count data acquired using pulse train deconvolution electronics based on the shift register technique. It accounts for the limited proportion of neutrons detected within the time interval for which the electronics gate is open. The Monte Carlo code MCF, presented in this work, represents...
Summary - COG: A new point-wise Monte Carlo code for burnup credit analysis
International Nuclear Information System (INIS)
Alesso, H.P.
1989-01-01
COG, a new point-wise Monte Carlo code being developed and tested at Lawrence Livermore National Laboratory (LLNL) for the Cray-1, solves the Boltzmann equation for the transport of neutrons, photons, and (in future versions) other particles. Techniques included in the code for modifying the random walk of particles make COG most suitable for solving deep-penetration (shielding) problems and a wide variety of criticality problems. COG is similar to a number of other computer codes used in the shielding community; each code differs somewhat in its geometry input and its random-walk modification options. COG is a Monte Carlo code specifically designed for the CRAY (in 1986) to be as precise as the current state of physics knowledge allows. It has been extensively benchmarked and used as a shielding code at LLNL since 1986, and has recently been extended to accomplish criticality calculations. It will make an excellent tool for future shipping cask studies.
Laboratory measurements of rock thermal properties
DEFF Research Database (Denmark)
Bording, Thue Sylvester; Balling, N.; Nielsen, S.B.
The thermal properties of rocks are key elements in understanding and modelling the temperature field of the subsurface. Thermal conductivity and thermal diffusivity can be measured in the laboratory if rock samples can be provided. We have introduced improvements to the divided bar and needle probe methods to be able to measure both thermal conductivity and thermal diffusivity. The improvements we implement include, for both methods, a combination of fast numerical finite element forward modelling and a Markov chain Monte Carlo inversion scheme for estimating rock thermal parameters...
Belo Monte hydropower project: actual studies; AHE Belo Monte: os estudos atuais
Energy Technology Data Exchange (ETDEWEB)
Figueira Netto, Carlos Alberto de Moya [CNEC Engenharia S.A., Sao Paulo, SP (Brazil); Rezende, Paulo Fernando Vieira Souto [Centrais Eletricas Brasileiras S.A. (ELETROBRAS), Rio de Janeiro, RJ (Brazil)
2008-07-01
This article presents the evolution of the studies of the Belo Monte Hydro Power Project (HPP), from the initial inventory studies of the Xingu River in 1979 to the current studies concluding the Technical, Economic and Environmental Feasibility Studies of the Belo Monte HPP, as authorized by the Brazilian National Congress. The current studies characterize the Belo Monte HPP with an installed capacity of 11,181.3 MW (20 units of 550 MW in the main power house and 7 units of 25.9 MW in the additional power house), connected to the Brazilian Interconnected Power Grid, allowing it to generate 4,796 average MW of firm energy without depending on any flow regularization of the upstream Xingu River, flooding only 441 km², of which approximately 200 km² correspond to the normal annual wet-season flooding of the Xingu River. (author)
Guideline of Monte Carlo calculation. Neutron/gamma ray transport simulation by Monte Carlo method
2002-01-01
This report condenses basic theories and advanced applications of neutron/gamma-ray transport calculations in many fields of nuclear energy research. Chapters 1 through 5 treat the historical progress of Monte Carlo methods, general issues of variance reduction techniques, and cross-section libraries used in continuous-energy Monte Carlo codes. In chapter 6, the following issues are discussed: fusion benchmark experiments, design of ITER, experiment analyses of a fast critical assembly, core analyses of JMTR, simulation of a pulsed neutron experiment, core analyses of HTTR, duct streaming calculations, bulk shielding calculations, and neutron/gamma-ray transport calculations of the Hiroshima atomic bomb. Chapters 8 and 9 treat function enhancements of the MCNP and MVP codes, and parallel processing of Monte Carlo calculations, respectively. Important references are attached at the end of this report.
Statistical estimation Monte Carlo for unreliability evaluation of highly reliable system
International Nuclear Information System (INIS)
Xiao Gang; Su Guanghui; Jia Dounan; Li Tianduo
2000-01-01
Based on analog Monte Carlo simulation, statistical-estimation Monte Carlo methods for the unreliability evaluation of highly reliable systems are constructed, including a direct statistical estimation Monte Carlo method and a weighted statistical estimation Monte Carlo method. The basic element is given, and the statistical estimation Monte Carlo estimators are derived. The direct Monte Carlo simulation method, the bounding-sampling method, the forced-transitions Monte Carlo method, and the direct and weighted statistical estimation Monte Carlo methods are used to evaluate the unreliability of the same system. By comparison, the weighted statistical estimation Monte Carlo estimator has the smallest variance and the highest computational efficiency.
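For context, a minimal sketch of the plain analog estimator that these statistical-estimation methods improve upon (the two-component parallel system is our own example, not the paper's):

```python
import random, math

def parallel_unreliability(lam1, lam2, T, n=200_000, seed=2):
    """Analog Monte Carlo: a two-component parallel system is unreliable over
    mission time T only if BOTH exponential components fail before T.
    Each history scores 1 (system failed) or 0."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        t1 = rng.expovariate(lam1)
        t2 = rng.expovariate(lam2)
        if t1 < T and t2 < T:
            fails += 1
    return fails / n

# Exact answer for comparison: (1 - e^{-lam1*T}) * (1 - e^{-lam2*T})
q = parallel_unreliability(0.5, 0.2, 1.0)
exact = (1 - math.exp(-0.5)) * (1 - math.exp(-0.2))
print(q, exact)
```

For highly reliable systems the score is almost always 0, which is exactly why the variance-reduced estimators of the abstract are needed.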
Federal Laboratory Consortium — Description/History: Chemistry laboratory. The Advanced Chemistry Laboratory (ACL) is a unique facility designed for working with the most super toxic compounds known...
Federal Laboratory Consortium — The Lincoln Laboratory Grid (LLGrid) is an interactive, on-demand parallel computing system that uses a large computing cluster to enable Laboratory researchers to...
Federal Laboratory Consortium — The Gun Dynamics Laboratory is a research multi-task facility, which includes two firing bays, a high bay area and a second floor laboratory space. The high bay area...
NASA Space Radiation Laboratory
Federal Laboratory Consortium — The NASA Space Radiation Laboratory (NSRL) at Brookhaven National Laboratory is a NASA funded facility, delivering heavy ion beams to a target area where scientists...
Denver District Laboratory (DEN)
Federal Laboratory Consortium — Program Capabilities: DEN-DO Laboratory is a multi-functional laboratory capable of analyzing most chemical analytes and pathogenic/non-pathogenic microorganisms found...
Investigating the impossible: Monte Carlo simulations
International Nuclear Information System (INIS)
Kramer, Gary H.; Crowley, Paul; Burns, Linda C.
2000-01-01
Designing and testing new equipment can be an expensive and time-consuming process, or the desired performance characteristics may preclude its construction due to technological shortcomings. Cost may also prevent equipment from being purchased for other scenarios to be tested. An alternative is to use Monte Carlo simulations to make the investigations. This presentation exemplifies how Monte Carlo code calculations can be used to fill the gap. An example is given for the investigation of two sizes of germanium detector (70 mm and 80 mm diameter) at four different crystal thicknesses (15, 20, 25, and 30 mm), with predictions of how the size affects the counting efficiency and the Minimum Detectable Activity (MDA). The Monte Carlo simulations have shown that detector efficiencies can be adequately modelled using photon transport if the data are used to investigate trends. The investigation of the effect of detector thickness on the counting efficiency has shown that, for a fixed-diameter detector of either 70 mm or 80 mm, thickness is unimportant up to 60 keV. At higher photon energies, the counting efficiency begins to decrease as the thickness decreases, as expected. The simulations predict that the MDAs of the 70 mm and 80 mm diameter detectors do not differ by more than a factor of 1.15 at 17 keV or 1.2 at 60 keV when comparing detectors of equivalent thicknesses. The MDA is slightly increased at 17 keV, and rises by about 52% at 660 keV, when the thickness is decreased from 30 mm to 15 mm. One could conclude from this information that the extra cost associated with the larger-area Ge detectors may not be justified by the slight improvement predicted in the MDA. (author)
Laboratory-acquired brucellosis
DEFF Research Database (Denmark)
Fabiansen, C.; Knudsen, J.D.; Lebech, A.M.
2008-01-01
Brucellosis is a rare disease in Denmark. We describe one case of laboratory-acquired brucellosis, transmitted from an index patient to a laboratory technician following exposure to an infected blood culture in a clinical microbiology laboratory.
Monte Carlo Simulation of an American Option
Directory of Open Access Journals (Sweden)
Gikiri Thuo
2007-04-01
We implement gradient estimation techniques for sensitivity analysis of option pricing which can be efficiently employed in Monte Carlo simulation. Using these techniques we can simultaneously obtain an estimate of the option value together with estimates of the sensitivities of the option value to various parameters of the model. After deriving the gradient estimates we incorporate them in an iterative stochastic approximation algorithm for pricing an option with early exercise features. We illustrate the procedure using an example of an American call option with a single dividend that is analytically tractable. In particular we incorporate estimates for the gradient with respect to the early exercise threshold level.
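As background for the Monte Carlo machinery, here is the plain European-option baseline on which such methods build (this is our own sketch, not the paper's gradient/stochastic-approximation method for early exercise):

```python
import random, math

def mc_european_call(S0, K, r, sigma, T, n=200_000, seed=3):
    """Plain Monte Carlo price of a European call under geometric Brownian
    motion: simulate terminal prices, average the discounted payoff."""
    rng = random.Random(seed)
    disc = math.exp(-r * T)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        ST = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
        total += max(ST - K, 0.0)
    return disc * total / n

# At-the-money call; the Black-Scholes value for these parameters is ~10.45.
price = mc_european_call(100, 100, 0.05, 0.2, 1.0)
print(price)
```

The paper's contribution is to differentiate such an estimator with respect to model parameters and an exercise threshold, then tune the threshold by stochastic approximation.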
Monte Carlo study of the multiquark systems
International Nuclear Information System (INIS)
Kerbikov, B.O.; Polikarpov, M.I.; Zamolodchikov, A.B.
1986-01-01
Random walks have been used to calculate the energies of the ground states in systems of N = 3, 6, 9, 12 quarks. Multiquark states with N > 3 are unstable with respect to spontaneous dissociation into color-singlet hadrons. A modified Green's function Monte Carlo algorithm, which proved to be simpler and more accurate than conventional few-body methods, has been employed. In contrast to other techniques, the same equations are used for any number of particles, while the computer time increases only linearly with the number of particles.
Monte Carlo eigenfunction strategies and uncertainties
International Nuclear Information System (INIS)
Gast, R.C.; Candelore, N.R.
1974-01-01
Comparisons of convergence rates for several possible eigenfunction source strategies led to the selection of the ''straight'' analog of the analytic power method as the source strategy for Monte Carlo eigenfunction calculations. To ensure a fair-game strategy, the number of histories per iteration increases with increasing iteration number. The estimate of the eigenfunction uncertainty is obtained from a modification of a proposal by D. B. MacMillan and involves only estimates of the usual purely statistical component of uncertainty and a serial correlation coefficient of lag one. 14 references. (U.S.)
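The deterministic analogue of this source strategy is the classical power method; a small sketch (the 2x2 matrix is a stand-in for the fission operator, chosen by us for illustration):

```python
def power_method(A, n_iter=100):
    """Power iteration: repeatedly apply the operator to a source guess and
    renormalize; the normalization factor converges to the dominant
    eigenvalue and the vector to the fundamental mode."""
    v = [1.0, 0.0]
    lam = 0.0
    for _ in range(n_iter):
        w = [sum(a * x for a, x in zip(row, v)) for row in A]
        lam = max(abs(x) for x in w)        # eigenvalue estimate
        v = [x / lam for x in w]            # renormalized source
    return lam, v

A = [[4.0, 1.0], [2.0, 3.0]]               # eigenvalues 5 and 2
lam, v = power_method(A)
print(lam)
```

In the Monte Carlo version each application of the operator is itself a noisy estimate, which is why the strategy above grows the number of histories per iteration.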
ATLAS Monte Carlo tunes for MC09
The ATLAS collaboration
2010-01-01
This note describes the ATLAS tunes of the underlying event and minimum bias description for the main Monte Carlo generators used in the MC09 production. For the main shower generators, pythia and herwig (with jimmy), the MRST LO* parton distribution functions (PDFs) were used for the first time in ATLAS. Special studies on the performance of these conceptually new PDFs for high-pT physics processes at LHC energies are presented. In addition, a tune of jimmy for CTEQ6.6 is presented, for use with MC@NLO.
Markov chains analytic and Monte Carlo computations
Graham, Carl
2014-01-01
Markov Chains: Analytic and Monte Carlo Computations introduces the main notions related to Markov chains and provides explanations on how to characterize, simulate, and recognize them. Starting with basic notions, this book leads progressively to advanced and recent topics in the field, allowing the reader to master the main aspects of the classical theory. This book also features: numerous exercises with solutions as well as extended case studies; a detailed and rigorous presentation of Markov chains with discrete time and state space; an appendix presenting probabilistic notions that are nec...
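A minimal simulation sketch in the spirit of the book's topic (the two-state chain is our own example): simulating a chain and comparing occupation frequencies against the analytic stationary distribution.

```python
import random

def simulate_chain(P, start, steps, seed=4):
    """Simulate a discrete-time Markov chain with transition matrix P and
    return the fraction of time spent in each state (occupation frequencies)."""
    rng = random.Random(seed)
    state = start
    counts = [0] * len(P)
    for _ in range(steps):
        counts[state] += 1
        u, acc = rng.random(), 0.0
        for j, p in enumerate(P[state]):   # inverse-CDF draw of next state
            acc += p
            if u < acc:
                state = j
                break
    return [c / steps for c in counts]

# Two-state chain; balance 0.1*pi0 = 0.2*pi1 gives stationary (2/3, 1/3).
P = [[0.9, 0.1], [0.2, 0.8]]
freqs = simulate_chain(P, 0, 100_000)
print(freqs)
```

This is the "Monte Carlo computation" side; the "analytic" side solves pi = pi P directly.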
Atomistic Monte Carlo simulation of lipid membranes
DEFF Research Database (Denmark)
Wüstner, Daniel; Sklenar, Heinz
2014-01-01
Biological membranes are complex assemblies of many different molecules, whose analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction into the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate for a concrete example how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches...
Monte Carlo method in radiation transport problems
International Nuclear Information System (INIS)
Dejonghe, G.; Nimal, J.C.; Vergnaud, T.
1986-11-01
In neutral radiation transport problems (neutrons, photons), two quantities are important: the flux in phase space and the density of particles. Solving the problem with the Monte Carlo method involves, among other things, building a statistical process (called the play) and assigning a numerical value to a variable x (this assignment is called the score). Sampling techniques are presented. The necessity of biasing the play is demonstrated, and a biased simulation is carried out. Finally, current developments (rewriting of programs, for instance) are presented, motivated by several reasons, two of which are the advent of vector computation and photon and neutron transport in void media [fr]
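The "play and score" terminology can be illustrated with the simplest possible transport game (absorption-only slab transmission; our own example, not from the report):

```python
import random, math

def slab_transmission(x, n=100_000, seed=6):
    """A minimal play-and-score Monte Carlo: each history (play) draws an
    exponential free path in units of mean free paths; the score is 1 if the
    particle crosses a slab x mean free paths thick (no scattering), else 0."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if rng.expovariate(1.0) > x)
    return hits / n

# Analytic answer for this toy game: exp(-x).
trans = slab_transmission(3.0)
print(trans, math.exp(-3.0))
```

Biasing the play (e.g. stretched path lengths) changes the sampling distribution and compensates in the score, which is the subject of the abstract above.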
Mosaic crystal algorithm for Monte Carlo simulations
Seeger, P A
2002-01-01
An algorithm is presented for calculating reflectivity, absorption, and scattering of mosaic crystals in Monte Carlo simulations of neutron instruments. The algorithm uses multi-step transport through the crystal with an exact solution of the Darwin equations at each step. It relies on the kinematical model for Bragg reflection (with parameters adjusted to reproduce experimental data). For computation of thermal effects (the Debye-Waller factor and coherent inelastic scattering), an expansion of the Debye integral as a rapidly converging series of exponential terms is also presented. Any crystal geometry and plane orientation may be treated. The algorithm has been incorporated into the neutron instrument simulation package NISP. (orig.)
A note on simultaneous Monte Carlo tests
DEFF Research Database (Denmark)
Hahn, Ute
In this short note, Monte Carlo tests of goodness of fit for data of the form X(t), t ∈ I are considered, which reject the null hypothesis if X(t) leaves an acceptance region bounded by an upper and a lower curve for some t in I. A construction of the acceptance region is proposed that complies with a given target level of rejection and yields exact p-values. The construction is based on pointwise quantiles, estimated from simulated realizations of X(t) under the null hypothesis.
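A rough sketch of the pointwise-quantile envelope idea (our own simplification: a fixed pointwise level is used for illustration, whereas the note calibrates the pointwise level to hit an exact global level):

```python
import random

def envelope_test(x_obs, simulate, n_sim=999, alpha=0.05, seed=5):
    """Envelope-style Monte Carlo test sketch: simulate curves under H0,
    form pointwise acceptance bounds from order statistics at each t, and
    accept if the observed curve stays inside the band for every t."""
    rng = random.Random(seed)
    sims = [simulate(rng) for _ in range(n_sim)]
    k = max(1, int(alpha / 2 * (n_sim + 1)))   # order-statistic rank
    for t in range(len(x_obs)):
        column = sorted(s[t] for s in sims)
        lo, hi = column[k - 1], column[-k]
        if not (lo <= x_obs[t] <= hi):
            return False                        # curve leaves the region
    return True

def simulate_h0(rng):
    # H0: the "curve" is 20 i.i.d. standard-normal values.
    return [rng.gauss(0.0, 1.0) for _ in range(20)]

print(envelope_test([0.0] * 20, simulate_h0))   # flat zero curve: accepted
print(envelope_test([5.0] * 20, simulate_h0))   # extreme curve: rejected
```

Because the test checks all t simultaneously, the global rejection rate exceeds the pointwise alpha; correcting for that is exactly the note's contribution.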
Monte Carlo methods to calculate impact probabilities
Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.
2014-09-01
Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find excellent agreement between all methods in the general case, while large differences appear in the immediate vicinity of the singularities. With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward
MBR Monte Carlo Simulation in PYTHIA8
Ciesielski, R.
We present the MBR (Minimum Bias Rockefeller) Monte Carlo simulation of (anti)proton-proton interactions and its implementation in the PYTHIA8 event generator. We discuss the total, elastic, and total-inelastic cross sections, and three contributions from diffraction dissociation processes that contribute to the latter: single diffraction, double diffraction, and central diffraction or double-Pomeron exchange. The event generation follows a renormalized-Regge-theory model, successfully tested using CDF data. Based on the MBR-enhanced PYTHIA8 simulation, we present cross-section predictions for the LHC and beyond, up to collision energies of 50 TeV.
Spectral functions from Quantum Monte Carlo
International Nuclear Information System (INIS)
Silver, R.N.
1989-01-01
In his review, D. Scalapino identified two serious limitations on the application of Quantum Monte Carlo (QMC) methods to the models of interest in high-Tc superconductivity (HTS). One is the ''sign problem''. The other is the ''analytic continuation problem'': how to extract electron spectral functions from QMC calculations of the imaginary-time Green's functions. Throughout this Symposium on HTS, the spectral functions have been the focus of the discussion of normal-state properties, including the applicability of band theory, Fermi liquid theory, marginal Fermi liquids, and novel non-perturbative states. 5 refs., 1 fig
An analysis of Monte Carlo tree search
CSIR Research Space (South Africa)
James, S
2017-02-01
Monte Carlo Tree Search (MCTS) is a family of directed search algorithms that has gained widespread attention in recent years. Despite the vast amount of research into MCTS, the effect of modifications...
Monte Carlo simulation for the transport beamline
Energy Technology Data Exchange (ETDEWEB)
Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania (Italy); Attili, A.; Marchetto, F.; Russo, G. [INFN, Sezione di Torino, Via P.Giuria, 1 10125 Torino (Italy); Cirrone, G. A. P.; Schillaci, F.; Scuderi, V. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Institute of Physics Czech Academy of Science, ELI-Beamlines project, Na Slovance 2, Prague (Czech Republic); Carpinelli, M. [INFN Sezione di Cagliari, c/o Dipartimento di Fisica, Università di Cagliari, Cagliari (Italy); Tramontana, A. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Università di Catania, Dipartimento di Fisica e Astronomia, Via S. Sofia 64, Catania (Italy)
2013-07-26
In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement an MC-based 3D treatment planning system in order to optimize the number of shots and the dose delivery.
Monte Carlo simulation for the transport beamline
International Nuclear Information System (INIS)
Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A.; Attili, A.; Marchetto, F.; Russo, G.; Cirrone, G. A. P.; Schillaci, F.; Scuderi, V.; Carpinelli, M.; Tramontana, A.
2013-01-01
In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement an MC-based 3D treatment planning system in order to optimize the number of shots and the dose delivery
Diffusion quantum Monte Carlo for molecules
International Nuclear Information System (INIS)
Lester, W.A. Jr.
1986-07-01
A quantum mechanical Monte Carlo method has been used for the treatment of molecular problems. The imaginary-time Schroedinger equation, written with a shift in the zero of energy [E_T - V(R)], can be interpreted as a generalized diffusion equation with a position-dependent rate, or branching, term. Since diffusion is the continuum limit of a random walk, one may simulate the Schroedinger equation with a function ψ (note, not ψ²) as a density of ''walks.'' The walks undergo exponential birth and death as given by the rate term. 16 refs., 2 tabs
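The branching-diffusion picture in this abstract can be sketched in a few lines of code. Below is a minimal, illustrative diffusion Monte Carlo run for the 1-D harmonic oscillator V(x) = x²/2, whose exact ground-state energy is 0.5; the parameter values and the simple population-control rule are assumptions made for this sketch, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def dmc_harmonic(n_walkers=2000, n_steps=2000, dt=0.01):
    """Branching random-walk (diffusion Monte Carlo) estimate of the
    ground-state energy of V(x) = x^2 / 2 (exact value: 0.5)."""
    x = rng.standard_normal(n_walkers)   # initial "walks"
    e_ref = 0.5                          # trial energy E_T (the zero shift)
    samples = []
    for step in range(n_steps):
        # diffusion: continuum limit of a random walk, variance dt
        x = x + np.sqrt(dt) * rng.standard_normal(x.size)
        # birth/death from the rate term exp(-dt * (V(x) - E_T))
        w = np.exp(-dt * (0.5 * x**2 - e_ref))
        copies = np.minimum((w + rng.random(x.size)).astype(int), 3)
        x = np.repeat(x, copies)
        # gentle population control: nudge E_T toward a steady population
        e_ref -= 0.1 * np.log(x.size / n_walkers)
        if step > n_steps // 2:          # discard the equilibration phase
            samples.append(e_ref)
    return float(np.mean(samples))

print(dmc_harmonic())   # close to the exact ground-state energy 0.5
```

The walker density converges to ψ itself (not ψ²), exactly as the abstract notes; averaging the trial energy after equilibration gives the ground-state energy up to a time-step bias of order dt.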
Monte Carlo modelling for neutron guide losses
International Nuclear Information System (INIS)
Cser, L.; Rosta, L.; Toeroek, Gy.
1989-09-01
In modern research reactors, neutron guides are commonly used for beam transport. A neutron guide is a well-polished, smooth glass tube coated on the inside with a sputtered or evaporated film of natural Ni or the ⁵⁸Ni isotope, from which the neutrons are totally reflected. A Monte Carlo calculation was carried out to establish the real efficiency and the spectral as well as spatial distribution of the neutron beam at the end of a glass mirror guide. The losses caused by mechanical inaccuracy and mirror quality were considered, and the effects due to the geometrical arrangement were analyzed. (author) 2 refs.; 2 figs
Mont Terri project, cyclic deformations in the Opalinus Clay
International Nuclear Information System (INIS)
Moeri, A.; Bossart, P.; Matray, J.M.; Mueller, H.; Frank, E.
2010-01-01
Document available in extended abstract form only. Shrinkage structures in the Opalinus Clay, related to seasonal changes in temperature and humidity, are observed on the tunnel walls of the Mont Terri Rock Laboratory. The structures open in winter, when relative humidity in the tunnel decreases to 65%. In summer the cracks close again because the higher humidity causes the clay to swell and increase in volume. Shrinkage structures are monitored in the Mont Terri Rock Laboratory at two different sites: within the undisturbed rock matrix and within a major fault zone. The relative movements of the rock on both sides of the cracks are monitored in three directions and compared to the fluctuations in ambient relative humidity and temperature. The cyclic deformations (CD) experiment aims to quantify the variations in crack opening in relation to the evolution of climatic conditions and to identify the processes underlying these swelling and shrinkage cycles. It consists of the following tasks: - Measuring and quantifying the long-term (now up to three annual cycles) opening and closing and, if present, the associated shear displacements of selected shrinkage cracks along an undisturbed bedding plane as well as within a major fault zone (the 'Main Fault'). The measurements are accompanied by temperature and humidity records as well as by long-term monitoring of tunnel convergence. - Analysing at the micro-scale the surfaces of the crack planes to identify potential relative movements, changes in the rock fabric on the crack surfaces and the formation of fault gouge material as observed in closed cracks. - Processing and analysing the measured fluctuations of crack apertures and rock deformation in the time series as well as in the hydro-meteorological variables, in particular relative humidity Hr(t) and air temperature. - Studying and reconstructing the opening cycles on a drill-core sample under well-known laboratory conditions and observing potential propagation of
Photovoltaic Characterization Laboratory
Federal Laboratory Consortium — NIST's PV characterization laboratory is used to measure the electrical performance and opto-electronic properties of solar cells and modules. This facility consists...
Federal Laboratory Consortium — The ARDEC Rapid Prototyping (RP) Laboratory was established in December 1992 to provide low cost RP capabilities to the ARDEC engineering community. The Stratasys,...
Federal Laboratory Consortium — The TVA Central Laboratories Services is a comprehensive technical support center, offering you a complete range of scientific, engineering, and technical services....
Federal Laboratory Consortium — For more than 60 years, Sandia has delivered essential science and technology to resolve the nation's most challenging security issues.Sandia National Laboratories...
Federal Laboratory Consortium — The Wireless Emulation Laboratory (WEL) is a researchtest bed used to investigate fundamental issues in networkscience. It is a research infrastructure that emulates...
FOOD SAFETY TESTING LABORATORY
Federal Laboratory Consortium — This laboratory develops screening assays, tests and modifies biosensor equipment, and optimizes food safety testing protocols for the military and civilian sector...
Federal Laboratory Consortium — The Embedded Processor Laboratory provides the means to design, develop, fabricate, and test embedded computers for missile guidance electronics systems in support...
Vehicle Development Laboratory
Federal Laboratory Consortium — FUNCTION: Supports the development of prototype deployment platform vehicles for offboard countermeasure systems.DESCRIPTION: The Vehicle Development Laboratory is...
Acoustic Technology Laboratory
Federal Laboratory Consortium — This laboratory contains an electro-magnetic worldwide data collection and field measurement capability in the area of acoustic technology. Outfitted by NASA Langley...
COGNITIVE PERFORMANCE LABORATORY
Federal Laboratory Consortium — This laboratory conducts basic and applied human research studies to characterize cognitive performance as influenced by militarily-relevant contextual and physical...
Federal Laboratory Consortium — The Space Weather Computational Laboratory is a Unix and PC based modeling and simulation facility devoted to research analysis of naturally occurring electrically...
Atmospheric Measurements Laboratory (AML)
Federal Laboratory Consortium — The Atmospheric Measurements Laboratory (AML) is one of the nation's leading research facilities for understanding aerosols, clouds, and their interactions. The AML...
Composites Characterization Laboratory
Federal Laboratory Consortium — The purpose of the Composites Characterization Laboratory is to investigate new and/or modified matrix materials and fibers for advanced composite applications both...
Microgravity Emissions Laboratory (MEL)
Federal Laboratory Consortium — The Microgravity Emissions Laboratory (MEL) utilizes a low-frequency acceleration measurement system for the characterization of rigid body inertial forces generated...
Semiconductor Laser Measurements Laboratory
Federal Laboratory Consortium — The Semiconductor Laser Measurements Laboratory is equipped to investigate and characterize the lasing properties of semiconductor diode lasers. Lasing features such...
Federal Laboratory Consortium — NETL’s Fuels Processing Laboratory in Morgantown, WV, provides researchers with the equipment they need to thoroughly explore the catalytic issues associated with...
Advanced Manufacturing Laboratory
Federal Laboratory Consortium — The Advanced Manufacturing Laboratory at the University of Maryland provides the state of the art facilities for realizing next generation products and educating the...
Virtual Training Devices Laboratory
Federal Laboratory Consortium — The Virtual Training Devices (VTD) Laboratory at the Life Cycle Software Engineering Center, Picatinny Arsenal, provides a software testing and support environment...
Federal Laboratory Consortium — The Intelligent Optics Laboratory supports sophisticated investigations on adaptive and nonlinear optics; advancedimaging and image processing; ground-to-ground and...
ANALYTICAL MICROBIOLOGY LABORATORY
Federal Laboratory Consortium — This laboratory contains equipment that performs a broad array of microbiological analyses for pathogenic and spoilage microorganisms. It performs challenge studies...
Pritchard, Jack; Braker, Clifton
1982-01-01
Pritchard discusses the opportunities for applied learning afforded by laboratories. Braker describes the evaluation of cognitive, affective, and psychomotor skills in the agricultural mechanics laboratory. (SK)
Wind Structural Testing Laboratory
Federal Laboratory Consortium — This facility provides office space for industry researchers, experimental laboratories, computer facilities for analytical work, and space for assembling components...
Geospatial Services Laboratory
Federal Laboratory Consortium — FUNCTION: To process, store, and disseminate geospatial data to the Department of Defense and other Federal agencies.DESCRIPTION: The Geospatial Services Laboratory...
Thermogravimetric Analysis Laboratory
Federal Laboratory Consortium — At NETL’s Thermogravimetric Analysis Laboratory in Morgantown, WV, researchers study how chemical looping combustion (CLC) can be applied to fossil energy systems....
Research Combustion Laboratory (RCL)
Federal Laboratory Consortium — The Research Combustion Laboratory (RCL) develops aerospace propulsion technology by performing tests on propulsion components and materials. Altitudes up to 137,000...
Combustion Research Laboratory
Federal Laboratory Consortium — The Combustion Research Laboratory facilitates the development of new combustion systems or improves the operation of existing systems to meet the Army's mission for...
Coatings and Corrosion Laboratory
Federal Laboratory Consortium — Purpose: The mission of the Coatings and Corrosion Laboratory is to develop and analyze the effectiveness of innovative coatings test procedures while evaluating the...
Laboratory of Chemical Physics
Federal Laboratory Consortium — Current research in the Laboratory of Chemical Physics is primarily concerned with experimental, theoretical, and computational problems in the structure, dynamics,...
Optical Remote Sensing Laboratory
Federal Laboratory Consortium — The Optical Remote Sensing Laboratory deploys rugged, cutting-edge electro-optical instrumentation for the collection of various event signatures, with expertise in...
Tactical Systems Integration Laboratory
Federal Laboratory Consortium — The Tactical Systems Integration Laboratory is used to design and integrate computer hardware and software and related electronic subsystems for tactical vehicles....
Federal Laboratory Consortium — As part of the Electrical and Computer Engineering Department and The Institute for System Research, the Neural Systems Laboratory studies the functionality of the...
Environmental Microbiology Laboratory
Federal Laboratory Consortium — The Environmental Microbiology Laboratory, located in Bldg. 644 provides a dual-gas respirometer for measurement of oxygen consumption and carbon dioxide evolution...
Diffusion Monte Carlo approach versus adiabatic computation for local Hamiltonians
Bringewatt, Jacob; Dorland, William; Jordan, Stephen P.; Mink, Alan
2018-02-01
Most research regarding quantum adiabatic optimization has focused on stoquastic Hamiltonians, whose ground states can be expressed with only real non-negative amplitudes and for which destructive interference is thus not manifest. This raises the question of whether classical Monte Carlo algorithms can efficiently simulate quantum adiabatic optimization with stoquastic Hamiltonians. Recent results have given counterexamples in which path-integral and diffusion Monte Carlo fail to do so. However, most adiabatic optimization algorithms, such as for solving MAX-k-SAT problems, use k-local Hamiltonians, whereas our previous counterexample for diffusion Monte Carlo involved n-body interactions. Here we present a 6-local counterexample which demonstrates that even for these local Hamiltonians there are cases where diffusion Monte Carlo cannot efficiently simulate quantum adiabatic optimization. Furthermore, we perform empirical testing of diffusion Monte Carlo on a standard well-studied class of permutation-symmetric tunneling problems and similarly find large advantages for quantum optimization over diffusion Monte Carlo.
Investigation of a Monte Carlo model for chemical reactions
International Nuclear Information System (INIS)
Hamm, R.N.; Turner, J.E.; Stabin, M.G.
1998-01-01
Monte Carlo computer simulations are in use at a number of laboratories for calculating time-dependent yields, which can be compared with experiments on the radiolysis of water. We report here on calculations to investigate the validity and consistency of the procedures used for simulating chemical reactions in our code, RADLYS. Model calculations of the rate constants themselves were performed. The rates thus determined showed an expected rapid decline over the first few hundred ps and a very gradual decline thereafter, out to the termination of the calculations at 4.5 ns. Results are reported for different initial concentrations and numbers of reactive species. Generally, the calculated rate constants are smallest when the initial concentrations of the reactants are largest. It is found that inhomogeneities that quickly develop in the initial random spatial distribution of reactants persist in time as a result of subsequent chemical reactions, and thus conditions may poorly approximate those assumed by diffusion theory. We also investigated the reaction of a single species of one type placed among a large number of randomly distributed species of another type with which it could react. The distribution of survival times of the single species was calculated using three different combinations of the diffusion constants for the two species, as is sometimes discussed in diffusion theory. The three methods gave virtually identical results. (orig.)
Monte Carlo learning/biasing experiment with intelligent random numbers
International Nuclear Information System (INIS)
Booth, T.E.
1985-01-01
A Monte Carlo learning and biasing technique is described that does its learning and biasing in the random number space rather than the physical phase-space. The technique is probably applicable to all linear Monte Carlo problems, but no proof is provided here. Instead, the technique is illustrated with a simple Monte Carlo transport problem. Problems encountered, problems solved, and speculations about future progress are discussed. 12 refs
Discrete diffusion Monte Carlo for frequency-dependent radiative transfer
International Nuclear Information System (INIS)
Densmore, Jeffery D.; Thompson, Kelly G.; Urbatsch, Todd J.
2011-01-01
Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique. (author)
Monte Carlo criticality analysis for dissolvers with neutron poison
International Nuclear Information System (INIS)
Yu, Deshun; Dong, Xiufang; Pu, Fuxiang.
1987-01-01
Criticality analysis for dissolvers with neutron poison is given on the basis of the Monte Carlo method. In Monte Carlo calculations of thermal-neutron-group parameters for fuel pieces, the neutron transport length is determined in terms of the maximum cross-section approach. A set of related effective multiplication factors (k_eff) is calculated by the Monte Carlo method for the three cases. The related numerical results are quite useful for the design and operation of this kind of dissolver in criticality safety analysis. (author)
Temperature variance study in Monte-Carlo photon transport theory
International Nuclear Information System (INIS)
Giorla, J.
1985-10-01
We study different Monte-Carlo methods for solving radiative transfer problems, particularly Fleck's Monte-Carlo method. We first give the different time-discretization schemes and the corresponding stability criteria. Then we write the temperature variance as a function of the variances of temperature and absorbed energy at the previous time step. Finally, we obtain stability criteria for the Monte-Carlo method in the stationary case. [fr]
International Nuclear Information System (INIS)
Delvin, W.L.
1977-01-01
The elements (principles) of quality assurance can be applied to the operation of the analytical chemistry laboratory, providing an effective tool for demonstrating the competence of the laboratory and for helping to upgrade competence where necessary. When used, these elements establish the planned and systematic actions necessary to provide adequate confidence in each analytical result reported by the laboratory (the definition of laboratory quality assurance). The elements, as used at the Hanford Engineering Development Laboratory (HEDL), are discussed; they are qualification of analysts, written methods, sample receiving and storage, quality control, audit, and documentation. To establish a laboratory quality assurance program, a laboratory QA program plan is prepared that specifies how the elements are to be implemented in laboratory operation. Benefits that can be obtained from using laboratory quality assurance are given. Experience at HEDL has shown that laboratory quality assurance is not a burden but a useful and valuable tool for the analytical chemistry laboratory
Le Comte de Monte Cristo: from literature to cinema
Caravela, Natércia Murta Silva
2008-01-01
This dissertation discusses the dialogue established between literature and cinema in the treatment of the main character, a betrayed man who takes cruel revenge on his enemies, in the literary work Le Comte de Monte-Cristo by Alexandre Dumas and in three chosen film adaptations: Le Comte de Monte-Cristo by Robert Vernay (1943), The Count of Monte Cristo by David Greene (1975), and The Count of Monte Cristo by Kevin Reynolds (2002). The project centres on the analysis of the ...
Odd-flavor Simulations by the Hybrid Monte Carlo
Takaishi, Tetsuya; Takaishi, Tetsuya; De Forcrand, Philippe
2001-01-01
The standard hybrid Monte Carlo algorithm is known to simulate only even numbers of flavors in QCD. Simulations with odd numbers of flavors, however, can also be performed in the framework of the hybrid Monte Carlo algorithm, where the inverse of the fermion matrix is approximated by a polynomial. In this exploratory study we perform three-flavor QCD simulations. We compare the hybrid Monte Carlo algorithm with the R-algorithm, which also simulates odd-flavor systems but has step-size errors. We find that results from our hybrid Monte Carlo algorithm are in agreement with those from the R-algorithm obtained at very small step size.
Wielandt acceleration for MCNP5 Monte Carlo eigenvalue calculations
International Nuclear Information System (INIS)
Brown, F.
2007-01-01
Monte Carlo criticality calculations use the power iteration method to determine the eigenvalue (k_eff) and eigenfunction (fission source distribution) of the fundamental mode. A recently proposed method for accelerating convergence of the Monte Carlo power iteration using Wielandt's method has been implemented in a test version of MCNP5. The method is shown to provide dramatic improvements in convergence rates and to greatly reduce the possibility of false convergence assessment. The method is effective and efficient, improving the Monte Carlo figure-of-merit for many problems. In addition, the method should eliminate most of the underprediction bias in confidence intervals for Monte Carlo criticality calculations. (authors)
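The acceleration idea can be illustrated on a plain matrix eigenproblem: shifting the iteration with a Wielandt parameter k_w slightly above k_eff compresses the dominance ratio of the iteration operator and thus speeds up power iteration. The matrix, shift value, and convergence test below are illustrative assumptions for this sketch, not MCNP5 internals.

```python
import numpy as np

def power_iteration(A, tol=1e-10, max_iter=100000):
    """Return (dominant eigenvalue, iteration count) by power iteration."""
    x = np.ones(A.shape[0])
    lam = 0.0
    for it in range(1, max_iter + 1):
        y = A @ x
        lam_new = np.linalg.norm(y) / np.linalg.norm(x)
        x = y / np.linalg.norm(y)
        if abs(lam_new - lam) < tol:
            return lam_new, it
        lam = lam_new
    return lam, max_iter

# A symmetric test matrix with dominance ratio k2/k1 close to 1 (slow case).
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
A = Q @ np.diag(np.linspace(0.5, 1.0, 50)) @ Q.T   # k_eff = 1.0, k2 ~ 0.99

k_plain, n_plain = power_iteration(A)

# Wielandt shift: iterate on B = (I - A/k_w)^{-1} A, whose spectrum
# mu = k / (1 - k/k_w) has a much smaller dominance ratio for k_w near k_eff.
k_w = 1.05
B = np.linalg.solve(np.eye(50) - A / k_w, A)
mu, n_shift = power_iteration(B)
k_shift = mu * k_w / (k_w + mu)                     # map back to k_eff

print(k_plain, n_plain)    # ~1.0, many iterations
print(k_shift, n_shift)    # ~1.0, far fewer iterations
```

In the Monte Carlo setting the shift is realized by banking a fraction of fission neutrons within the same generation rather than by a matrix inverse, but the spectral compression that drives the speedup is the same.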
Monte Carlo shielding analyses using an automated biasing procedure
International Nuclear Information System (INIS)
Tang, J.S.; Hoffman, T.J.
1988-01-01
A systematic and automated approach for biasing Monte Carlo shielding calculations is described. In particular, adjoint fluxes from a one-dimensional discrete ordinates calculation are used to generate biasing parameters for a Monte Carlo calculation. The entire procedure of adjoint calculation, biasing parameters generation, and Monte Carlo calculation has been automated. The automated biasing procedure has been applied to several realistic deep-penetration shipping cask problems. The results obtained for neutron and gamma-ray transport indicate that with the automated biasing procedure Monte Carlo shielding calculations of spent-fuel casks can be easily performed with minimum effort and that accurate results can be obtained at reasonable computing cost
Monte Carlo techniques for analyzing deep-penetration problems
International Nuclear Information System (INIS)
Cramer, S.N.; Gonnord, J.; Hendricks, J.S.
1986-01-01
Current methods and difficulties in Monte Carlo deep-penetration calculations are reviewed, including statistical uncertainty and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multigroup Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications
Igo - A Monte Carlo Code For Radiotherapy Planning
International Nuclear Information System (INIS)
Goldstein, M.; Regev, D.
1999-01-01
The goal of radiation therapy is to deliver a lethal dose to the tumor while minimizing the dose to normal tissues and vital organs. To carry out this task, it is critical to calculate correctly the 3-D dose delivered. Monte Carlo transport methods (especially the adjoint Monte Carlo) have the potential to provide more accurate predictions of the 3-D dose than the currently used methods. IGO is a Monte Carlo code derived from the general Monte Carlo program MCNP, tailored specifically for calculating the effects of radiation therapy. This paper describes the IGO transport code, the PIGO interface and some preliminary results
Quantum statistical Monte Carlo methods and applications to spin systems
International Nuclear Information System (INIS)
Suzuki, M.
1986-01-01
A short review is given concerning the quantum statistical Monte Carlo method based on the equivalence theorem that d-dimensional quantum systems are mapped onto (d+1)-dimensional classical systems. The convergence property of this approximate transformation is discussed in detail. Some applications of this general approach to quantum spin systems are reviewed. A new Monte Carlo method, the ''thermo field Monte Carlo method,'' is presented, which is an extension of the projection Monte Carlo method at zero temperature to finite temperatures
Variational Variance Reduction for Monte Carlo Criticality Calculations
International Nuclear Information System (INIS)
Densmore, Jeffery D.; Larsen, Edward W.
2001-01-01
A new variational variance reduction (VVR) method for Monte Carlo criticality calculations was developed. This method employs (a) a variational functional that is more accurate than the standard direct functional, (b) a representation of the deterministically obtained adjoint flux that is especially accurate for optically thick problems with high scattering ratios, and (c) estimates of the forward flux obtained by Monte Carlo. The VVR method requires no nonanalog Monte Carlo biasing, but it may be used in conjunction with Monte Carlo biasing schemes. Some results are presented from a class of criticality calculations involving alternating arrays of fuel and moderator regions
Applications of the Monte Carlo method in radiation protection
International Nuclear Information System (INIS)
Kulkarni, R.N.; Prasad, M.A.
1999-01-01
This paper gives a brief introduction to the application of the Monte Carlo method in radiation protection; an exhaustive review has not been attempted. The special advantage of the Monte Carlo method is first brought out. The fundamentals of the Monte Carlo method are then explained briefly, with special reference to two applications in radiation protection. Some current applications are reported briefly at the end as examples: medical radiation physics, microdosimetry, calculation of thermoluminescence intensity, and probabilistic safety analysis. The limitations of the Monte Carlo method are also mentioned in passing. (author)
Multi-Index Monte Carlo (MIMC)
Haji Ali, Abdul Lateef
2016-01-06
We propose and analyze a novel Multi-Index Monte Carlo (MIMC) method for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The MIMC method is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Inspired by Giles's seminal work, instead of using first-order differences as in MLMC, we use in MIMC high-order mixed differences to reduce the variance of the hierarchical differences dramatically. Under standard assumptions on the convergence rates of the weak error, variance and work per sample, the optimal index set turns out to be of Total Degree (TD) type. When using such sets, MIMC yields new and improved complexity results, which are natural generalizations of Giles's MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence, O(TOL^-2).
Multi-Index Monte Carlo (MIMC)
Haji Ali, Abdul Lateef; Nobile, Fabio; Tempone, Raul
2016-01-01
We propose and analyze a novel Multi-Index Monte Carlo (MIMC) method for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The MIMC method is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Inspired by Giles's seminal work, instead of using first-order differences as in MLMC, we use in MIMC high-order mixed differences to reduce the variance of the hierarchical differences dramatically. Under standard assumptions on the convergence rates of the weak error, variance and work per sample, the optimal index set turns out to be of Total Degree (TD) type. When using such sets, MIMC yields new and improved complexity results, which are natural generalizations of Giles's MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence, O(TOL^-2).
Multi-Index Monte Carlo (MIMC)
Haji Ali, Abdul Lateef; Nobile, Fabio; Tempone, Raul
2015-01-01
We propose and analyze a novel Multi-Index Monte Carlo (MIMC) method for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The MIMC method is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Inspired by Giles’s seminal work, instead of using first-order differences as in MLMC, we use in MIMC high-order mixed differences to reduce the variance of the hierarchical differences dramatically. Under standard assumptions on the convergence rates of the weak error, variance and work per sample, the optimal index set turns out to be of Total Degree (TD) type. When using such sets, MIMC yields new and improved complexity results, which are natural generalizations of Giles’s MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence.
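The first-order-difference (MLMC) building block that MIMC generalizes can be sketched directly: a telescoping sum of coupled coarse/fine Euler estimates of E[S_T] for geometric Brownian motion, whose exact value is exp(rT). The model, sample schedule, and parameters below are illustrative assumptions for this sketch.

```python
import numpy as np

rng = np.random.default_rng(42)

def euler_pair(level, n_samples, s0=1.0, r=0.05, sigma=0.2, T=1.0):
    """Coupled coarse/fine Euler paths of GBM sharing Brownian increments.

    Returns samples of the level difference P_l - P_{l-1} for payoff
    P = S_T; at level 0 it returns P_0 itself."""
    nf = 2 ** level                     # fine time steps
    dtf = T / nf
    dW = np.sqrt(dtf) * rng.standard_normal((n_samples, nf))
    sf = np.full(n_samples, s0)
    for i in range(nf):                 # fine Euler path
        sf = sf * (1 + r * dtf + sigma * dW[:, i])
    if level == 0:
        return sf
    nc = nf // 2
    dtc = T / nc
    dWc = dW[:, 0::2] + dW[:, 1::2]     # coarse increments from the same noise
    sc = np.full(n_samples, s0)
    for i in range(nc):                 # coupled coarse Euler path
        sc = sc * (1 + r * dtc + sigma * dWc[:, i])
    return sf - sc

# Telescoping estimator: E[P_L] = sum over l of E[P_l - P_{l-1}],
# with more samples on the cheap coarse levels (crude schedule).
levels = 6
n_per_level = [40000 // 2 ** l + 1000 for l in range(levels + 1)]
estimate = sum(euler_pair(l, n).mean() for l, n in enumerate(n_per_level))
print(estimate)   # close to exp(0.05) = 1.0513...
```

MIMC replaces these first-order differences with high-order mixed differences over a multi-index set, but the telescoping structure and the coarse/fine coupling of the randomness are the same.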
International Nuclear Information System (INIS)
Ohta, Shigemi
1996-01-01
The Self-Test Monte Carlo (STMC) method resolves the main problems in using algebraic pseudo-random numbers for Monte Carlo (MC) calculations: that they can interfere with MC algorithms and lead to erroneous results, and that such an error often cannot be detected without a known exact solution. STMC is based on good randomness of about 10^10 bits available from physical noise or transcendental numbers like π = 3.14... Various bit modifiers are available to get more bits for applications that demand more than 10^10 random bits, such as lattice quantum chromodynamics (QCD). These modifiers are designed so that (a) each of them gives a bit sequence comparable in randomness to the original if used separately, and (b) their mutual interference when used jointly in a single MC calculation is adjustable. Intermediate data of the MC calculation itself are used to quantitatively test and adjust the mutual interference of the modifiers with respect to the MC algorithm. STMC is free of systematic error and gives reliable statistical errors. It can also be easily implemented on vector and parallel supercomputers. (author)
Algorithms for Monte Carlo calculations with fermions
International Nuclear Information System (INIS)
Weingarten, D.
1985-01-01
We describe a fermion Monte Carlo algorithm due to Petcher and the present author and another due to Fucito, Marinari, Parisi and Rebbi. For the first algorithm we estimate that the number of arithmetic operations required to evaluate a vacuum expectation value grows as N^11/m_q on an N^4 lattice with fixed periodicity in physical units and renormalized quark mass m_q. For the second algorithm the rate of growth is estimated to be N^8/m_q^2. Numerical experiments are presented comparing the two algorithms on a lattice of size 2^4. With a hopping constant K of 0.15 and β of 4.0 we find the number of operations for the second algorithm is about 2.7 times larger than for the first and about 13 000 times larger than for corresponding Monte Carlo calculations with a pure gauge theory. An estimate is given of the number of operations required for more realistic calculations by each algorithm on a larger lattice. (orig.)
Quantum Monte Carlo for atoms and molecules
International Nuclear Information System (INIS)
Barnett, R.N.
1989-11-01
The diffusion quantum Monte Carlo with fixed nodes (QMC) approach has been employed in studying energy eigenstates for 1-4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H₂, LiH, Li₂, and H₂O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li₂, and H₂O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schroedinger equation were studied extensively. These parameters include the time step, renormalization time, and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90-100% of the correlation energy) have been obtained with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions.
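The diffusion QMC idea can be made concrete with a toy sketch: a population of walkers diffuses, branches according to the local potential, and a feedback-controlled reference energy converges to the ground-state energy. The example below treats the 1D harmonic oscillator (exact ground-state energy 0.5) without importance sampling or fixed nodes, which the actual work uses; all names and parameters are illustrative, not the author's code.

```python
import math
import random

def diffusion_mc(n_walkers=800, n_steps=1500, dt=0.01, seed=1):
    """Toy diffusion Monte Carlo for the 1D harmonic oscillator,
    V(x) = x^2 / 2; the exact ground-state energy is 0.5."""
    rng = random.Random(seed)
    walkers = [0.0] * n_walkers
    e_ref = 0.0                      # trial energy, steered by feedback
    samples = []
    for step in range(n_steps):
        new_walkers = []
        for x in walkers:
            xp = x + rng.gauss(0.0, math.sqrt(dt))    # free diffusion
            v = 0.5 * xp * xp
            w = math.exp(-(v - e_ref) * dt)           # branching weight
            for _ in range(int(w + rng.random())):    # stochastic rounding
                new_walkers.append(xp)
        walkers = new_walkers or [0.0]
        # population control: nudge e_ref back toward the target size,
        # which makes it fluctuate around the ground-state energy
        e_ref += 0.1 * math.log(n_walkers / len(walkers))
        if step > n_steps // 2:
            samples.append(e_ref)
    return sum(samples) / len(samples)
```

Averaging e_ref over the second half of the run gives an estimate of the ground-state energy, up to time-step and population-control bias.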
Monte Carlo simulation of grain growth
Directory of Open Access Journals (Sweden)
Paulo Blikstein
1999-07-01
Understanding and predicting grain growth is important in metallurgy. Monte Carlo methods have been used in computer simulations in many different fields of knowledge. Grain growth simulation using this method is especially attractive, as the statistical behavior of the atoms is properly reproduced: microstructural evolution depends only on the real topology of the grains and not on any kind of geometric simplification. Computer simulation has the advantage of allowing the user to visualize the procedures graphically, even dynamically and in three dimensions. Single-phase alloy grain growth simulation was carried out by calculating the free energy of each atom in the lattice (with its present crystallographic orientation) and comparing this value to one calculated with a different, random orientation. When the resulting free energy is lower than or equal to the initial value, the new orientation replaces the former. The measure of time is the Monte Carlo Step (MCS), which involves a series of trials throughout the lattice. A very close relationship between experimental and theoretical values of the grain growth exponent (n) was observed.
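The acceptance rule described above (adopt a new random orientation whenever the move does not increase the local free energy) can be sketched as a minimal Potts-style simulation. The lattice size, number of orientations, and unlike-neighbour energy below are illustrative assumptions, not the paper's actual model.

```python
import random

def grain_growth(L=32, q=16, mcs=25, seed=3):
    """Potts-model grain growth sketch on an L x L periodic lattice with
    q orientations: a site takes a new random orientation whenever its
    local energy (count of unlike nearest neighbours) does not increase.
    Returns the total boundary energy (each bond counted twice)."""
    rng = random.Random(seed)
    grid = [[rng.randrange(q) for _ in range(L)] for _ in range(L)]

    def energy(i, j, s):
        # number of nearest neighbours with a different orientation
        return sum(s != grid[(i + di) % L][(j + dj) % L]
                   for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

    for _ in range(mcs):                 # one MCS = L*L attempted flips
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            new = rng.randrange(q)
            if energy(i, j, new) <= energy(i, j, grid[i][j]):
                grid[i][j] = new
    return sum(energy(i, j, grid[i][j]) for i in range(L) for j in range(L))
```

Running with mcs=0 returns the (high) boundary energy of the random initial microstructure; after a few tens of MCS the boundary energy has dropped markedly as the grains coarsen.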
Multi-Index Monte Carlo (MIMC)
Haji Ali, Abdul Lateef
2015-01-07
We propose and analyze a novel Multi-Index Monte Carlo (MIMC) method for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The MIMC method is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Inspired by Giles’s seminal work, instead of using first-order differences as in MLMC, we use in MIMC high-order mixed differences to reduce the variance of the hierarchical differences dramatically. Under standard assumptions on the convergence rates of the weak error, variance and work per sample, the optimal index set turns out to be of Total Degree (TD) type. When using such sets, MIMC yields new and improved complexity results, which are natural generalizations of Giles’s MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence.
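MIMC generalizes the MLMC telescoping sum from a single scalar level to a set of multi-indices with mixed differences. A full MIMC demo is heavy, so the sketch below shows only the single-index MLMC estimator that MIMC extends: E[P_L] = E[P_0] + Σ_l E[P_l − P_{l−1}], applied to E[S_T] for geometric Brownian motion, with fine and coarse paths on each level sharing Brownian increments. All parameters are illustrative.

```python
import math
import random

def mlmc_gbm_mean(r=0.05, sigma=0.2, T=1.0, S0=1.0, levels=4,
                  n_per_level=20000, seed=4):
    """Minimal single-index MLMC estimate of E[S_T] for the SDE
    dS = r S dt + sigma S dW via Euler-Maruyama with 2^l steps on
    level l; exact answer is S0 * exp(r * T)."""
    rng = random.Random(seed)

    def euler(steps, dWs):
        s, dt = S0, T / steps
        for dw in dWs:
            s += r * s * dt + sigma * s * dw
        return s

    total = 0.0
    for level in range(levels + 1):
        nf = 2 ** level                  # fine steps on this level
        acc = 0.0
        for _ in range(n_per_level):
            dWf = [rng.gauss(0.0, math.sqrt(T / nf)) for _ in range(nf)]
            pf = euler(nf, dWf)
            if level == 0:
                acc += pf
            else:
                # coarse path reuses the fine Brownian increments pairwise,
                # so the correction P_l - P_{l-1} has a small variance
                dWc = [dWf[2 * i] + dWf[2 * i + 1] for i in range(nf // 2)]
                acc += pf - euler(nf // 2, dWc)
        total += acc / n_per_level
    return total
```

The variance of the level corrections shrinks with l, which is what lets MLMC (and, with mixed differences, MIMC) spend most samples on the cheap coarse levels.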
Parallel Monte Carlo Search for Hough Transform
Lopes, Raul H. C.; Franqueira, Virginia N. L.; Reid, Ivan D.; Hobson, Peter R.
2017-10-01
We investigate the problem of line detection in digital image processing, and in particular how state-of-the-art algorithms behave in the presence of noise and whether CPU efficiency can be improved by the combination of Monte Carlo Tree Search, hierarchical space decomposition, and parallel computing. The starting point of the investigation is the method introduced in 1962 by Paul Hough for detecting lines in binary images. Extended in the 1970s to the detection of other shapes, what came to be known as the Hough Transform (HT) has been proposed, for example, in the context of track fitting in the LHC ATLAS and CMS projects. The Hough Transform transfers the problem of line detection into one of optimization: finding the peak in a vote-counting process over cells which contain the possible points of candidate lines. The detection algorithm can be computationally expensive both in the demands made upon the processor and on memory. Additionally, its detection effectiveness can be reduced in the presence of noise. Our first contribution consists in an evaluation of the use of a variation of the Radon Transform as a means of improving the effectiveness of line detection in the presence of noise. Then, parallel algorithms for variations of the Hough Transform and the Radon Transform for line detection are introduced. An algorithm for Parallel Monte Carlo Search applied to line detection is also introduced. Their algorithmic complexities are discussed. Finally, implementations on multi-GPU and multicore architectures are discussed.
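The vote-counting process at the heart of the Hough Transform can be sketched in a few lines: each foreground point votes for every (theta, rho) cell consistent with a line rho = x·cos(theta) + y·sin(theta) through it, and the accumulator peak identifies the dominant line. Grid sizes below are illustrative, and this is the plain serial HT, not the paper's parallel or Monte Carlo variants.

```python
import math

def hough_peak(points, n_theta=180, n_rho=400, rho_max=200.0):
    """Minimal Hough Transform: accumulate votes over a (theta, rho)
    grid and return the (theta, rho) of the most-voted cell."""
    acc = [[0] * n_rho for _ in range(n_theta)]
    for x, y in points:
        for ti in range(n_theta):
            t = math.pi * ti / n_theta
            rho = x * math.cos(t) + y * math.sin(t)
            ri = int((rho + rho_max) * n_rho / (2.0 * rho_max))
            if 0 <= ri < n_rho:
                acc[ti][ri] += 1
    ti, ri = max(((i, j) for i in range(n_theta) for j in range(n_rho)),
                 key=lambda c: acc[c[0]][c[1]])
    theta = math.pi * ti / n_theta
    rho = (ri + 0.5) * (2.0 * rho_max) / n_rho - rho_max
    return theta, rho

# points on the horizontal line y = 40, i.e. theta = 90 deg, rho = 40
theta, rho = hough_peak([(x, 40) for x in range(0, 100, 2)])
```

The cost of the double loop over points and angles, and of scanning the accumulator for the peak, is what motivates the hierarchical and parallel approaches discussed in the abstract.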
Monte Carlo simulation for radiographic applications
International Nuclear Information System (INIS)
Tillack, G.R.; Bellon, C.
2003-01-01
Standard radiography simulators are based on the attenuation law complemented by build-up factors (BUF) to describe the interaction of radiation with material. The BUF assumption implies that scattered radiation only reduces the contrast in radiographic images. This simplification holds for a wide range of applications, such as weld inspection, as known from practical experience. But only a detailed description of the different underlying interaction mechanisms can explain effects like mottling and others that every radiographer has experienced in practice. Monte Carlo models can handle the primary and secondary interaction mechanisms contributing to the image formation process, such as photon interactions (absorption, incoherent and coherent scattering including electron-binding effects, pair production) and electron interactions (electron tracing including X-ray fluorescence and bremsstrahlung production). This opens up possibilities such as the separation of influencing factors and an understanding of the functioning of the intensifying screens used in film radiography. The paper discusses the opportunities in applying the Monte Carlo method to investigate special features of radiography in terms of selected examples. (orig.) [de
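The attenuation-law-plus-BUF model that standard simulators use amounts to a one-line formula, I = B · I₀ · exp(−μt), with a build-up factor B ≥ 1 lumping all scattered radiation into a single multiplier. A minimal sketch, with an illustrative attenuation coefficient (not values from the paper):

```python
import math

def transmitted_intensity(i0, mu, thickness, buildup=1.0):
    """Attenuation law with build-up factor: I = B * I0 * exp(-mu * t).
    buildup=1.0 gives the narrow-beam (primary-only) intensity; B > 1
    adds a crude, energy-independent account of scattered radiation."""
    return buildup * i0 * math.exp(-mu * thickness)

# 10 mm of steel at roughly 100 keV; mu ~ 0.29 /mm is illustrative
primary = transmitted_intensity(1000.0, 0.29, 10.0)
with_scatter = transmitted_intensity(1000.0, 0.29, 10.0, buildup=1.8)
```

The point of the abstract is precisely what this formula cannot do: B is a single number, so effects that depend on the detailed scattering physics (mottling, screen response) require the full Monte Carlo treatment.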
Reactor perturbation calculations by Monte Carlo methods
International Nuclear Information System (INIS)
Gubbins, M.E.
1965-09-01
Whilst Monte Carlo methods are useful for reactor calculations involving complicated geometry, it is difficult to apply them to the calculation of perturbation worths because of the large amount of computing time needed to obtain good accuracy. Various ways of overcoming these difficulties are investigated in this report, with the problem of estimating absorbing control rod worths particularly in mind. As a basis for discussion a method of carrying out multigroup reactor calculations by Monte Carlo methods is described. Two methods of estimating a perturbation worth directly, without differencing two quantities of like magnitude, are examined closely but are passed over in favour of a third method based on a correlation technique. This correlation method is described, and demonstrated by a limited range of calculations for absorbing control rods in a fast reactor. In these calculations control rod worths of between 1% and 7% in reactivity are estimated to an accuracy better than 10% (3 standard errors) in about one hour's computing time on the English Electric KDF9 digital computer. (author)
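The correlation technique can be illustrated with a much simpler perturbation problem than the report's multigroup control-rod calculation: estimating the small change in slab transmission probability when the total cross section is perturbed. Using the same random numbers for both the unperturbed and perturbed cases removes most of the statistical noise that would otherwise swamp the small difference; all parameters below are illustrative.

```python
import math
import random

def transmission_difference(sigma1=1.0, sigma2=1.05, thickness=2.0,
                            n=50000, correlated=True, seed=6):
    """Correlated-sampling estimate of the change in slab transmission
    when the cross section goes from sigma1 to sigma2 (pure absorber).
    Exact answer: exp(-sigma2*d) - exp(-sigma1*d)."""
    rng = random.Random(seed)
    diff = 0.0
    for _ in range(n):
        u1 = rng.random()
        u2 = u1 if correlated else rng.random()   # share random numbers
        # a neutron transmits if its sampled free path exceeds the slab
        t1 = 1.0 if -math.log(u1) / sigma1 > thickness else 0.0
        t2 = 1.0 if -math.log(u2) / sigma2 > thickness else 0.0
        diff += t2 - t1
    return diff / n
```

With correlated=True each history contributes a nonzero difference only when the perturbation actually changes its fate, so the variance of the estimated worth is far smaller than differencing two independent runs.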
International Nuclear Information System (INIS)
David, Mariano Gazineu; Salata, Camila; Almeida, Carlos Eduardo
2014-01-01
The Laboratorio de Ciencias Radiologicas has developed a methodology for the determination of the absorbed dose to water by the Fricke chemical dosimetry method for high-dose-rate ¹⁹²Ir brachytherapy sources, and has compared its results with those of the laboratory of the National Research Council Canada. This paper describes the determination of the correction factors by the Monte Carlo method with the PENELOPE code. Values for all factors are presented, with a maximum difference of 0.22% from their determination by an alternative method. (author)
Monte Carlo simulation study of the muon-induced neutron flux at LNGS
International Nuclear Information System (INIS)
Persiani, R.; Garbini, M.; Massoli, F.; Sartorelli, G.; Selvi, M.
2011-01-01
Muon-induced neutrons are an ultimate background for all experiments searching for rare events in underground laboratories. Several measurements and simulations have been performed concerning neutron production and propagation, but there are disagreements between experimental data and simulations. In this work we present our Monte Carlo simulation study, based on Geant4, to estimate the muon-induced neutron flux at LNGS. The obtained integral flux of neutrons above 1 MeV is 2.31 × 10⁻¹⁰ n/cm²/s.
Modern clinical laboratory diagnostics
International Nuclear Information System (INIS)
Balakhovskij, I.S.
1986-01-01
Laboratory diagnostics is an auxiliary medical discipline studying specific laboratory symptoms of diseases, revealed by investigations of materials taken from patients. The structure of the laboratory service in our country and abroad, the types of laboratory investigations, and organizational principles are described. Attention is given to the cost of analyses, the number of investigations conducted, methods of result presentation, and problems of accuracy, quality control and information content.
Mobile spectrometric laboratory
International Nuclear Information System (INIS)
Isajenko, K.A.; Lipinski, P.
2002-01-01
The article presents the Mobile Spectrometric Laboratory used by the Central Laboratory for Radiological Protection since the year 2000. The equipment installed in the Mobile Laboratory and its uses are described. The results of international exercises and intercalibrations in which the Laboratory participated are presented. (author)
International Nuclear Information System (INIS)
Bossart, P.; Nussbaum, C.
2007-01-01
The international Mont Terri project started in January 1996. Research is carried out in the Mont Terri rock laboratory, an underground facility near the security gallery of the Mont Terri motorway tunnel (vicinity of St-Ursanne, Canton of Jura, Switzerland). The aim of the project is the geological, hydrogeological, geochemical and geotechnical characterisation of a clay formation, specifically of the Opalinus Clay. Twelve partners from European countries and Japan participate in the project. These are ANDRA, BGR, CRIEPI, ENRESA, GRS, HSK, IRSN, JAEA, NAGRA, OBAYASHI, SCK.CEN and swisstopo. Since 2006, swisstopo acts as operator of the rock laboratory and is responsible for the implementation of the research programme decided by the partners. The three following reports are milestones in the research history of the Mont Terri project. It was the first time that an in-situ heating test with about 20 observation boreholes over a time span of several years was carried out in a clay formation. The engineered barrier emplacement experiment has been extended due to very encouraging measurement results and is still going on. The ventilation test was and is a challenge, especially in the very narrow microtunnel. All three projects were financially supported by the European Commission and the Swiss State Secretariat for Education and Research. The three important scientific and technical reports, which are presented in the following, have been provided by a number of scientists, engineers and technicians from the partners, but also from national research organisations and private contractors. Many fruitful meetings were held, at the rock laboratory and at other facilities, not to forget the weeks and months of installation and testing work carried out by the technicians and engineers. The corresponding names and organisations are listed in detail in the reports. Special thanks are going to the co-ordinators of the three projects for their motivation of the team during
MCNP-REN: a Monte Carlo tool for neutron detector design
International Nuclear Information System (INIS)
Abhold, M.E.; Baker, M.C.
2002-01-01
The development of neutron detectors makes extensive use of Monte Carlo predictions of detector response in conjunction with the point reactor model. Unfortunately, the point reactor model fails to accurately predict detector response in common applications. For this reason, the general Monte Carlo code developed at Los Alamos National Laboratory, Monte Carlo N-Particle (MCNP), was modified to simulate the pulse streams that would be generated by a neutron detector and normally analyzed by a shift register. This modified code, MCNP-Random Exponentially Distributed Neutron Source (MCNP-REN), along with the Time Analysis Program, predicts neutron detector response without using the point reactor model, making it unnecessary for the user to decide whether or not the assumptions of the point model are met for their application. MCNP-REN is capable of simulating standard neutron coincidence counting as well as neutron multiplicity counting. Measurements of mixed oxide fresh fuel were taken with the Underwater Coincidence Counter, and measurements of highly enriched uranium reactor fuel were taken with the active neutron interrogation Research Reactor Fuel Counter and compared to calculation. Simulations completed for other detector design applications are described. The method used in MCNP-REN is demonstrated to be fundamentally sound and shown to eliminate the need to use the point model for detector performance predictions.
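MCNP-REN's shift-register analysis is far more elaborate, but the core pulse-train idea can be sketched: generate a Poisson pulse stream (exponential inter-arrival times) and count, for each trigger pulse, how many other pulses fall inside a coincidence gate. For a purely random source the mean gate count is simply rate × gate (accidentals only); the rate and gate width below are illustrative.

```python
import random

def coincidence_counts(rate=1000.0, gate=1e-4, t_total=50.0, seed=7):
    """Mean number of pulses falling within a coincidence gate opened
    by each pulse of a Poisson (exponentially distributed) pulse train.
    For a random source this approaches rate * gate."""
    rng = random.Random(seed)
    # build the pulse arrival times
    times, t = [], 0.0
    while t < t_total:
        t += rng.expovariate(rate)
        times.append(t)
    # two-pointer scan: count, for each pulse, the successors in the gate
    total, j = 0, 0
    for i, ti in enumerate(times):
        j = max(j, i + 1)
        while j < len(times) and times[j] - ti <= gate:
            j += 1
        total += j - (i + 1)
    return total / len(times)
```

A correlated (multiplying) source would push the gate count above the accidental level rate × gate; detecting and quantifying that excess is what coincidence and multiplicity counting are about.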
Selection of important Monte Carlo histories
International Nuclear Information System (INIS)
Egbert, Stephen D.
1987-01-01
The 1986 Dosimetry System (DS86) for Japanese A-bomb survivors uses information describing the behavior of individual radiation particles, simulated by Monte Carlo methods, to calculate the transmission of radiation into structures and, thence, into humans. However, there are practical constraints on the number of such particle 'histories' that may be used. First, the number must be sufficiently high to provide adequate statistical precision for any calculated quantity of interest. For integral quantities, such as dose or kerma, statistical precision of approximately 5% (standard deviation) is required to ensure that statistical uncertainties are not a major contributor to the overall uncertainty of the transmitted value. For differential quantities, such as scalar fluence spectra, 10 to 15% standard deviation on individual energy groups is adequate. Second, the number of histories cannot be so large as to require an unacceptably large amount of computer time to process the entire survivor data base. Given that there are approximately 30,000 survivors, each having 13 or 14 organs of interest, the number of histories per organ must be constrained to less than several tens of thousands at the very most. Selection and use of the most important Monte Carlo leakage histories from among all those calculated allows the creation of an efficient house-and-organ radiation transmission system for use at RERF. While attempts have been made during the adjoint Monte Carlo calculation to bias the histories toward an efficient dose estimate, this effort has been far from satisfactory. Many of the adjoint histories on a typical leakage tape either start in an energy group in which there is very little kerma or dose, or leak into an energy group with very little free-field coupling. By knowing the typical free-field fluence and the fluence-to-dose factors with which the leaking histories will be used, one can select histories from a leakage tape that will contribute to dose.
Energy Materials Research Laboratory (EMRL)
Federal Laboratory Consortium — The Energy Materials Research Laboratory at the Savannah River National Laboratory (SRNL) creates a cross-disciplinary laboratory facility that lends itself to the...
Response decomposition with Monte Carlo correlated coupling
International Nuclear Information System (INIS)
Ueki, T.; Hoogenboom, J.E.; Kloosterman, J.L.
2001-01-01
Particle histories that contribute to a detector response are categorized according to whether they are fully confined inside a source-detector enclosure or cross and recross the same enclosure. The contribution from the confined histories is expressed using a forward problem with the external boundary condition on the source-detector enclosure. The contribution from the crossing and recrossing histories is expressed as the surface integral at the same enclosure of the product of the directional cosine and the fluxes in the foregoing forward problem and the adjoint problem for the whole spatial domain. The former contribution can be calculated by a standard forward Monte Carlo. The latter contribution can be calculated by correlated coupling of forward and adjoint histories independently of the former contribution. We briefly describe the computational method and discuss its application to perturbation analysis for localized material changes. (orig.)
Monte Carlo simulations of low background detectors
International Nuclear Information System (INIS)
Miley, H.S.; Brodzinski, R.L.; Hensley, W.K.; Reeves, J.H.
1995-01-01
An implementation of the Electron Gamma Shower 4 code (EGS4) has been developed to allow convenient simulation of typical gamma ray measurement systems. Coincidence gamma rays, beta spectra, and angular correlations have been added to adequately simulate a complete nuclear decay and provide corrections to experimentally determined detector efficiencies. This code has been used to strip certain low-background spectra for the purpose of extremely low-level assay. Monte Carlo calculations of this sort can be extremely successful since low background detectors are usually free of significant contributions from poorly localized radiation sources, such as cosmic muons, secondary cosmic neutrons, and radioactive construction or shielding materials. Previously, validation of this code has been obtained from a series of comparisons between measurements and blind calculations. An example of the application of this code to an exceedingly low background spectrum stripping will be presented. (author) 5 refs.; 3 figs.; 1 tab
Homogenized group cross sections by Monte Carlo
International Nuclear Information System (INIS)
Van Der Marck, S. C.; Kuijper, J. C.; Oppe, J.
2006-01-01
Homogenized group cross sections play a large role in making reactor calculations efficient. Because of this significance, many codes exist that can calculate these cross sections based on certain assumptions. However, for the application to the High Flux Reactor (HFR) in Petten, the Netherlands, the limitations of such codes imply that the core calculations would become less accurate when using homogenized group cross sections (HGCS). Therefore we developed a method to calculate HGCS based on a Monte Carlo program, for which we chose MCNP. The implementation involves an addition to MCNP and a set of small executables to perform suitable averaging after the MCNP run(s) have completed. Here we briefly describe the details of the method, and we report on two tests we performed to show the accuracy of the method and its implementation. By now, this method is routinely used in preparation of the cycle-to-cycle core calculations for the HFR. (authors)
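The averaging step behind homogenized group constants is, in its simplest textbook form, flux-volume weighting: collapse cell-wise cross sections into one group constant using the cell fluxes as weights. The paper's MCNP-based procedure is more involved; the sketch below shows only this basic weighting idea, with illustrative numbers.

```python
def homogenized_xs(fluxes, volumes, xss):
    """Flux-volume weighting used to collapse cell-wise cross sections
    into one homogenized group constant:
    sigma_hom = sum(phi_i * V_i * sigma_i) / sum(phi_i * V_i)."""
    num = sum(p * v * s for p, v, s in zip(fluxes, volumes, xss))
    den = sum(p * v for p, v in zip(fluxes, volumes))
    return num / den

# two equal-volume cells: the high-flux cell dominates the average
sigma_hom = homogenized_xs([2.0, 1.0], [1.0, 1.0], [1.0, 3.0])
```

In a Monte Carlo setting, the fluxes and reaction rates entering this quotient come from tallies, which is essentially what the authors' post-processing executables average after the MCNP runs.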
Nuclear reactions in Monte Carlo codes
Ferrari, Alfredo
2002-01-01
The physics foundations of hadronic interactions as implemented in most Monte Carlo codes are presented, together with a few practical examples. The description of the relevant physics is presented schematically, split into the major steps, in order to stress the different approaches required for a full understanding of nuclear reactions at intermediate and high energies. Due to the complexity of the problem, only a few semi-qualitative arguments are developed in this paper. The description will necessarily be schematic and somewhat incomplete, but hopefully it will be useful as a first introduction to this topic. Examples are shown mostly for the high-energy regime, where all the mechanisms mentioned in the paper are at work and to which perhaps most readers are less accustomed. Examples for lower energies can be found in the references. (43 refs.)
Angular biasing in implicit Monte-Carlo
International Nuclear Information System (INIS)
Zimmerman, G.B.
1994-01-01
Calculations of indirect drive Inertial Confinement Fusion target experiments require an integrated approach in which laser irradiation and radiation transport in the hohlraum are solved simultaneously with the symmetry, implosion and burn of the fuel capsule. The Implicit Monte Carlo method has proved to be a valuable tool for the two dimensional radiation transport within the hohlraum, but the impact of statistical noise on the symmetric implosion of the small fuel capsule is difficult to overcome. We present an angular biasing technique in which an increased number of low weight photons are directed at the imploding capsule. For typical parameters this reduces the required computer time for an integrated calculation by a factor of 10. An additional factor of 5 can also be achieved by directing even smaller weight photons at the polar regions of the capsule where small mass zones are most sensitive to statistical noise
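The angular-biasing idea can be illustrated with a generic importance-sampling sketch (not the implicit Monte Carlo code's actual scheme): sample emission directions so that a large fraction of photons is forced into the narrow cone subtending the capsule, and give each photon a weight equal to the analog-to-biased probability ratio so the tallied flux stays unbiased. The cone size and boost fraction are illustrative.

```python
import random

def capsule_flux(n=20000, mu_cut=0.95, boost=0.8, seed=8):
    """Angular biasing of an isotropic source: a fraction `boost` of the
    photons is sampled inside the cone mu > mu_cut aimed at the capsule,
    with weight = analog probability / biased probability. The weighted
    tally is an unbiased estimate of the analog cone probability."""
    rng = random.Random(seed)
    p_cone = (1.0 - mu_cut) / 2.0      # analog probability of the cone
    hit_weight = 0.0
    for _ in range(n):
        if rng.random() < boost:       # biased: sample inside the cone
            mu = mu_cut + rng.random() * (1.0 - mu_cut)
            w = p_cone / boost
        else:                          # sample the rest of the sphere
            mu = -1.0 + rng.random() * (mu_cut + 1.0)
            w = (1.0 - p_cone) / (1.0 - boost)
        if mu > mu_cut:                # photon headed for the capsule
            hit_weight += w
    return hit_weight / n
```

Because roughly boost·n low-weight photons now reach the capsule instead of p_cone·n analog ones, the statistical noise on the capsule tally drops sharply, which is the effect behind the quoted factor-of-10 saving.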
An accurate nonlinear Monte Carlo collision operator
International Nuclear Information System (INIS)
Wang, W.X.; Okamoto, M.; Nakajima, N.; Murakami, S.
1995-03-01
A three-dimensional nonlinear Monte Carlo collision model is developed based on Coulomb binary collisions, with emphasis on both accuracy and implementation efficiency. The operator, of simple form, fulfills the particle number, momentum and energy conservation laws, and is equivalent to the exact Fokker-Planck operator in that it correctly reproduces the friction coefficient and diffusion tensor; in addition, it can effectively assure small-angle collisions with a binary scattering angle distributed in a limited range near zero. Two highly vectorizable algorithms are designed for its fast implementation. Various test simulations regarding relaxation processes, electrical conductivity, etc. are carried out in velocity space. The test results, which are in good agreement with theory, and timing results on vector computers show that the operator is practically applicable. It may be used for accurately simulating collisional transport problems in magnetized and unmagnetized plasmas. (author)
Computation cluster for Monte Carlo calculations
International Nuclear Information System (INIS)
Petriska, M.; Vitazek, K.; Farkas, G.; Stacho, M.; Michalek, S.
2010-01-01
Two computation clusters based on the Rocks Clusters 5.1 Linux distribution, with Intel Core Duo and Intel Core Quad based computers, were built at the Department of Nuclear Physics and Technology. The clusters were used for Monte Carlo calculations, specifically for MCNP calculations applied in nuclear reactor core simulations. Optimization for computation speed was made on both a hardware and a software basis. Hardware cluster parameters, such as the size of the memory, network speed, CPU speed, number of processors per computation, and number of processors in one computer, were tested to shorten the calculation time. For software optimization, different Fortran compilers, MPI implementations and CPU multi-core libraries were tested. Finally, the computer cluster was used to find the weighting functions of the neutron ex-core detectors of the VVER-440. (authors)
Monte Carlo stratified source-sampling
International Nuclear Information System (INIS)
Blomquist, R.N.; Gelbard, E.M.
1997-01-01
In 1995, at a conference on criticality safety, a special session was devoted to the Monte Carlo "eigenvalue of the world" problem. Argonne presented a paper, at that session, in which the anomalies originally observed in that problem were reproduced in a much simplified model-problem configuration, and removed by a version of stratified source-sampling. The original test-problem was treated by a special code designed specifically for that purpose. Recently ANL started work on a method for dealing with more realistic "eigenvalue of the world" configurations, and has been incorporating this method into VIM. The original method has been modified to take into account real-world statistical noise sources not included in the model problem. This paper constitutes a status report on work still in progress.
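The benefit of stratified source-sampling can be shown with a generic one-dimensional demo (not the VIM implementation): instead of drawing all source points i.i.d., draw exactly n/k points from each of k equal strata, which guarantees every subregion is represented in each batch and suppresses the sampling noise that drives the "eigenvalue of the world" anomalies. The integrand and stratum count are illustrative.

```python
import random

def estimate(stratified, n=4000, k=40, seed=9):
    """Estimate the integral of f(x) = x^2 on [0, 1] (exact value 1/3)
    either with n i.i.d. points (analog sampling) or with exactly n/k
    points drawn uniformly from each of k equal strata."""
    rng = random.Random(seed)
    if stratified:
        pts = [(s + rng.random()) / k
               for s in range(k) for _ in range(n // k)]
    else:
        pts = [rng.random() for _ in range(n)]
    return sum(x * x for x in pts) / len(pts)
```

The stratified estimate eliminates the between-strata component of the variance, leaving only the (much smaller) within-stratum fluctuations; the analog estimate carries the full variance of the integrand.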
Helminthiases in Montes Claros. Preliminary survey
Directory of Open Access Journals (Sweden)
Rina Girard Kaminsky
1976-04-01
A preliminary survey was conducted for the presence of helminths in the city of Montes Claros, M.G., Brazil. Three groups of persons were examined by the direct smear, Kato thick film and MIFC techniques; one group by direct smear and Kato only. General findings were: a high prevalence of hookworm, followed by ascariasis, S. mansoni, S. stercoralis and very light infections with T. trichiura. E. vermicularis and H. nana were the ranking parasites at an orphanage, with some hookworm and S. mansoni infections as well. At a pig slaughterhouse, the dominant parasites were hookworm and S. mansoni. Pig cysticercosis was an incidental finding worth mentioning for the health hazard it represents for humans as well as for the economic loss. From the comparative results between the Kato and MIFC techniques, the former again proved to be a more sensitive and reliable concentration method for helminth eggs, of low cost and easy performance.
Monte Carlo simulation of a CZT detector
International Nuclear Information System (INIS)
Chun, Sung Dae; Park, Se Hwan; Ha, Jang Ho; Kim, Han Soo; Cho, Yoon Ho; Kang, Sang Mook; Kim, Yong Kyun; Hong, Duk Geun
2008-01-01
The CZT detector is one of the most promising radiation detectors for hard X-ray and γ-ray measurement. The energy spectrum of a CZT detector has to be simulated to optimize the detector design. A CZT detector was fabricated with dimensions of 5 × 5 × 2 mm³. A Peltier cooler with a size of 40 × 40 mm² was installed below the fabricated CZT detector to reduce the operating temperature of the detector. Energy spectra were measured with the 59.5 keV γ-rays from ²⁴¹Am. A Monte Carlo code was developed to simulate the CZT energy spectrum, which was measured with a planar-type CZT detector, and the result was compared with the measured one. The simulation was extended to a CZT detector with strip electrodes. (author)
Monte Carlo calculations of channeling radiation
International Nuclear Information System (INIS)
Bloom, S.D.; Berman, B.L.; Hamilton, D.C.; Alguard, M.J.; Barrett, J.H.; Datz, S.; Pantell, R.H.; Swent, R.H.
1981-01-01
Results of classical Monte Carlo calculations are presented for the radiation produced by ultra-relativistic positrons incident in a direction parallel to the (110) plane of Si in the energy range 30 to 100 MeV. The results all show the characteristic CR (channeling radiation) peak in the energy range 20 keV to 100 keV. Plots of the centroid energies, widths, and total yields of the CR peaks as a function of energy show power-law dependences of γ^1.5, γ^1.7, and γ^2.5, respectively. Except for the centroid energies, the power-law dependence is only approximate. Agreement with experimental data is good for the centroid energies and only rough for the widths. Adequate experimental data for verifying the yield dependence on γ do not yet exist.
Monte Carlo simulation of neutron scattering instruments
International Nuclear Information System (INIS)
Seeger, P.A.; Daemen, L.L.; Hjelm, R.P. Jr.
1998-01-01
A code package consisting of the Monte Carlo Library MCLIB, the executing code MC RUN, the web application MC Web, and various ancillary codes is proposed as an open standard for simulation of neutron scattering instruments. The architecture of the package includes structures to define surfaces, regions, and optical elements contained in regions. A particle is defined by its vector position and velocity, its time of flight, its mass and charge, and a polarization vector. The MC RUN code handles neutron transport and bookkeeping, while the action on the neutron within any region is computed using algorithms that may be deterministic, probabilistic, or a combination. Complete versatility is possible because the existing library may be supplemented by any procedures a user is able to code. Some examples are shown
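The region-based transport idea behind such a library can be sketched with a toy model: geometry as an ordered list of regions, each acting on the particle in turn. To keep the exact answer known, the regions below are purely absorbing, so the transmission through the stack is exp(−Σᵢ σᵢtᵢ); the (thickness, sigma) representation and all values are illustrative, not MCLIB's actual structures or API.

```python
import math
import random

def transmission(regions, n=20000, seed=10):
    """Fraction of particles traversing a stack of purely absorbing
    regions, each given as (thickness, sigma_total). A particle is lost
    if its sampled free path inside any region is shorter than that
    region's thickness."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(n):
        alive = True
        for thickness, sigma in regions:
            # sample the exponential free path in this region's material
            if -math.log(rng.random()) / sigma < thickness:
                alive = False          # absorbed inside this region
                break
        if alive:
            survived += 1
    return survived / n

# three-region stack; exact answer exp(-(1.0*0.5 + 0.3*1.0 + 2.0*0.2))
p = transmission([(1.0, 0.5), (0.3, 1.0), (2.0, 0.2)])
```

In a full instrument simulation each region's action would instead be an arbitrary algorithm (scattering, optics, detection), which is precisely the versatility the MCLIB architecture is designed to provide.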
Monte Carlo simulation of the ARGO
International Nuclear Information System (INIS)
Depaola, G.O.
1997-01-01
We use the GEANT Monte Carlo code to design an outline of the geometry and simulate the performance of the Argentine gamma-ray observer (ARGO), a telescope based on silicon strip detector technology. The γ-ray direction is determined by geometrical means and the angular resolution is calculated for small variations of the basic design. The results show that the angular resolution varies from a few degrees at low energies (≈50 MeV) to approximately 0.2° at high energies (>500 MeV). We also made simulations using the energy spectra of the PKS 0208-512 and PKS 0528+134 quasars as the incoming γ-rays. Moreover, a method based on multiple scattering theory is also used to determine the incoming energy. We show that this method is applicable to energy spectra. (orig.)
Variational Monte Carlo study of pentaquark states
Energy Technology Data Exchange (ETDEWEB)
Mark W. Paris
2005-07-01
Accurate numerical solution of the five-body Schrödinger equation is effected via variational Monte Carlo. The spectrum is assumed to exhibit a narrow resonance with strangeness S=+1. A fully antisymmetrized and pair-correlated five-quark wave function is obtained for the assumed non-relativistic Hamiltonian which has spin, isospin, and color dependent pair interactions and many-body confining terms which are fixed by the non-exotic spectra. Gauge field dynamics are modeled via flux tube exchange factors. The energy determined for the ground states with J=1/2 and negative (positive) parity is 2.22 GeV (2.50 GeV). A lower energy negative parity state is consistent with recent lattice results. The short-range structure of the state is analyzed via its diquark content.
Geometric Monte Carlo and black Janus geometries
Energy Technology Data Exchange (ETDEWEB)
Bak, Dongsu, E-mail: dsbak@uos.ac.kr [Physics Department, University of Seoul, Seoul 02504 (Korea, Republic of); B.W. Lee Center for Fields, Gravity & Strings, Institute for Basic Sciences, Daejeon 34047 (Korea, Republic of); Kim, Chanju, E-mail: cjkim@ewha.ac.kr [Department of Physics, Ewha Womans University, Seoul 03760 (Korea, Republic of); Kim, Kyung Kiu, E-mail: kimkyungkiu@gmail.com [Department of Physics, Sejong University, Seoul 05006 (Korea, Republic of); Department of Physics, College of Science, Yonsei University, Seoul 03722 (Korea, Republic of); Min, Hyunsoo, E-mail: hsmin@uos.ac.kr [Physics Department, University of Seoul, Seoul 02504 (Korea, Republic of); Song, Jeong-Pil, E-mail: jeong_pil_song@brown.edu [Department of Chemistry, Brown University, Providence, RI 02912 (United States)
2017-04-10
We describe an application of the Monte Carlo method to the Janus deformation of the black brane background. We present numerical results for three and five dimensional black Janus geometries with planar and spherical interfaces. In particular, we argue that the 5D geometry with a spherical interface has an application in understanding the finite temperature bag-like QCD model via the AdS/CFT correspondence. The accuracy and convergence of the algorithm are evaluated with respect to the grid spacing. The systematic errors of the method are determined using an exact solution of 3D black Janus. This numerical approach for solving linear problems is unaffected by the initial guess of a trial solution and can handle an arbitrary geometry under various boundary conditions in the presence of source fields.
Methods for Monte Carlo simulations of biomacromolecules.
Vitalis, Andreas; Pappu, Rohit V
2009-01-01
The state-of-the-art for Monte Carlo (MC) simulations of biomacromolecules is reviewed. Available methodologies for sampling conformational equilibria and associations of biomacromolecules in the canonical ensemble, given a continuum description of the solvent environment, are reviewed. Detailed sections are provided dealing with the choice of degrees of freedom, the efficiencies of MC algorithms and algorithmic peculiarities, as well as the optimization of simple movesets. The issue of introducing correlations into elementary MC moves, and the applicability of such methods to simulations of biomacromolecules, is discussed. A brief discussion of multicanonical methods and an overview of recent simulation work highlighting the potential of MC methods are also provided. It is argued that MC simulations, while underutilized by the biomacromolecular simulation community, hold promise for simulations of complex systems and phenomena that span multiple length scales, especially when used in conjunction with implicit solvation models or other coarse graining strategies.
Markov Chain Monte Carlo from Lagrangian Dynamics.
Lan, Shiwei; Stathopoulos, Vasileios; Shahbaba, Babak; Girolami, Mark
2015-04-01
Hamiltonian Monte Carlo (HMC) improves the computational efficiency of the Metropolis-Hastings algorithm by reducing its random walk behavior. Riemannian HMC (RHMC) further improves the performance of HMC by exploiting the geometric properties of the parameter space. However, the geometric integrator used for RHMC involves implicit equations that require fixed-point iterations. In some cases, the computational overhead for solving implicit equations undermines RHMC's benefits. In an attempt to circumvent this problem, we propose an explicit integrator that replaces the momentum variable in RHMC by velocity. We show that the resulting transformation is equivalent to transforming Riemannian Hamiltonian dynamics to Lagrangian dynamics. Experimental results suggest that our method improves RHMC's overall computational efficiency in the cases considered. All computer programs and data sets are available online (http://www.ics.uci.edu/~babaks/Site/Codes.html) in order to allow replication of the results reported in this paper.
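As a point of reference for the abstract above, plain HMC (the baseline the paper builds on) can be sketched in a few lines. The target distribution, step size, and trajectory length below are illustrative choices only, not the authors' Riemannian or Lagrangian scheme:

```python
import numpy as np

def hmc_sample(log_prob, grad_log_prob, x0, n_samples=2000, eps=0.1, n_leap=20, seed=0):
    """Plain Hamiltonian Monte Carlo with a leapfrog integrator (textbook sketch)."""
    rng = np.random.default_rng(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        p = rng.normal()                      # resample momentum each iteration
        x_new, p_new = x, p
        # leapfrog integration: half momentum step, alternating full steps, half step
        p_new += 0.5 * eps * grad_log_prob(x_new)
        for _ in range(n_leap - 1):
            x_new += eps * p_new
            p_new += eps * grad_log_prob(x_new)
        x_new += eps * p_new
        p_new += 0.5 * eps * grad_log_prob(x_new)
        # Metropolis accept/reject on the total energy H = -log p(x) + p^2/2
        h_old = -log_prob(x) + 0.5 * p ** 2
        h_new = -log_prob(x_new) + 0.5 * p_new ** 2
        if rng.random() < np.exp(h_old - h_new):
            x = x_new
        samples.append(x)
    return np.array(samples)

# Standard normal target: log p(x) = -x^2/2 (up to a constant).
samples = hmc_sample(lambda x: -0.5 * x ** 2, lambda x: -x, x0=0.0)
print(samples.mean(), samples.std())
```

Because each proposal follows a long simulated trajectory, successive samples are far less correlated than a random-walk Metropolis chain with the same acceptance rate, which is the behavior the paper's explicit Lagrangian integrator aims to retain at lower cost per step.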
PEPSI: a Monte Carlo generator for polarized leptoproduction
International Nuclear Information System (INIS)
Mankiewicz, L.
1992-01-01
We describe PEPSI (Polarized Electron Proton Scattering Interactions) a Monte Carlo program for the polarized deep inelastic leptoproduction mediated by electromagnetic interaction. The code is a modification of the LEPTO 4.3 Lund Monte Carlo for unpolarized scattering and requires the standard polarization-independent JETSET routines to perform fragmentation into final hadrons. (orig.)
Closed-shell variational quantum Monte Carlo simulation for the ...
African Journals Online (AJOL)
Closed-shell variational quantum Monte Carlo simulation for the electric dipole moment calculation of hydrazine molecule using casino-code. ... Nigeria Journal of Pure and Applied Physics ... The variational quantum Monte Carlo (VQMC) technique used in this work employed the restricted Hartree-Fock (RHF) scheme.
Efficiency and accuracy of Monte Carlo (importance) sampling
Waarts, P.H.
2003-01-01
Monte Carlo analysis is often regarded as the simplest and most accurate reliability method, and it is also the most transparent one. The only problem is the trade-off between accuracy and efficiency: Monte Carlo becomes less efficient, or less accurate, when very low probabilities are to be computed.
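The low-probability problem mentioned above is exactly where importance sampling pays off. A minimal sketch (estimating a standard-normal tail probability with a shifted proposal, not the paper's structural-reliability setting) is:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
t = 4.0  # tail threshold; true P(X > 4) ≈ 3.17e-5 for a standard normal

# Naive Monte Carlo: almost no samples land in the tail, so the
# estimate is dominated by a handful of hits (or is exactly zero).
x = rng.normal(size=n)
naive = np.mean(x > t)

# Importance sampling: draw from the shifted proposal N(t, 1), which puts
# half its mass in the tail, and reweight each sample by the likelihood
# ratio p(y)/q(y) = exp(t^2/2 - t*y).
y = rng.normal(loc=t, size=n)
weights = np.exp(0.5 * t ** 2 - t * y)
importance = np.mean((y > t) * weights)

print(naive, importance)
```

With the same sample budget, the reweighted estimator's relative error is orders of magnitude smaller than the naive one, which is the efficiency/accuracy trade-off the abstract refers to.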
Exponential convergence on a continuous Monte Carlo transport problem
International Nuclear Information System (INIS)
Booth, T.E.
1997-01-01
For more than a decade, it has been known that exponential convergence on discrete transport problems was possible using adaptive Monte Carlo techniques. An adaptive Monte Carlo method that empirically produces exponential convergence on a simple continuous transport problem is described
Multiple histogram method and static Monte Carlo sampling
Inda, M.A.; Frenkel, D.
2004-01-01
We describe an approach to use multiple-histogram methods in combination with static, biased Monte Carlo simulations. To illustrate this, we computed the force-extension curve of an athermal polymer from multiple histograms constructed in a series of static Rosenbluth Monte Carlo simulations. From
A Monte Carlo approach to combating delayed completion of ...
African Journals Online (AJOL)
The objective of this paper is to unveil the relevance of Monte Carlo critical path analysis in resolving problem of delays in scheduled completion of development projects. Commencing with deterministic network scheduling, Monte Carlo critical path analysis was advanced by assigning probability distributions to task times.
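The approach described above, assigning probability distributions to task times and reading the completion-time distribution off the simulation, can be sketched as follows. The four-task network and its triangular durations are hypothetical, chosen only to illustrate the technique:

```python
import random
import statistics

# Hypothetical 4-task project: A and B start together; C needs A; D needs B and C.
# Durations are triangular(optimistic, most likely, pessimistic), in days.
tasks = {
    "A": (2, 4, 8),
    "B": (3, 5, 9),
    "C": (1, 3, 6),
    "D": (2, 4, 7),
}

def simulate_completion(n_trials=20_000, seed=42):
    """Sample the project completion time by propagating random task durations
    through the precedence network."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_trials):
        d = {k: rng.triangular(lo, hi, mode) for k, (lo, mode, hi) in tasks.items()}
        finish_c = d["A"] + d["C"]                 # C waits for A
        finish_d = max(d["B"], finish_c) + d["D"]  # D waits for both B and C
        totals.append(finish_d)
    return totals

totals = sorted(simulate_completion())
p90 = totals[int(0.9 * len(totals))]
print(f"mean completion: {statistics.mean(totals):.1f} days, 90th percentile: {p90:.1f} days")
```

The deterministic schedule built from most-likely durations alone understates the mean completion time, because the `max` over parallel paths is systematically pushed upward by variability; reporting a high percentile rather than a single critical-path length is the practical payoff of the Monte Carlo treatment.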