A CERN physicist receives the Gian Carlo Wick Medal
2007-01-01
T.D. Lee, Chairman of the Gian Carlo Wick Medal selection committee, André Martin, the 2007 recipient, and Antonino Zichichi, President of the World Federation of Scientists (WFS) (Copyright: WFS)
The 2007 Gian Carlo Wick Gold Medal was presented to the CERN theoretical physicist André Martin in Erice (Italy) on 20 August. The prize is awarded each year by the WFS (World Federation of Scientists), whose president is Professor Antonino Zichichi, to a theoretical physicist for outstanding contributions to particle physics. The selection committee is composed of eminent physicists and is chaired by the Nobel Physics Prize laureate T.D. Lee. André Martin was awarded the Medal in recognition of his work on the total cross-section for interactions between two particles and his contributions to the understanding of heavy quark-antiquark (quarkonium) systems. In 1965, André Martin established a theoretical basis for the so-called...
Directory of Open Access Journals (Sweden)
Walter Zidaric
2009-11-01
Full Text Available Composer, librettist and stage director Gian Carlo Menotti changed the American musical theatre by choosing to confront burning questions of his time that are at the same time universal. In The Consul (1950), the American context of the 1950s burst onto the operatic stage. The McCarran Act and McCarthyism are reflected in this opera, which is, first of all, a metaphor for the negation of liberty. In The Saint of Bleecker Street (1954), the Little Italy microcosm suggests a new reflection on questions of difference and integration, connected to considerations of a metaphysical and religious nature. As metaphors of American society, these two operas ultimately symbolise all kinds of multicultural human societies, revealing the polysemic character of Menotti’s works.
Stigmatising Faith? Differing Modes of Sanctification in Gian-Carlo ...
African Journals Online (AJOL)
It is argued in the present article that this work, which received numerous awards after its introduction in New York and within months was being performed internationally, can be interpreted more deeply as an exploration of the evergreen theme of popular religion confronting honest doubt in an evolving social and religious ...
Tensiometer with removable wick
Gee, Glendon W.; Campbell, Melvin D.
1992-01-01
The present invention relates to improvements in tensiometers for measuring soil water tension comprising a rod-shaped wick. The rod-shaped wick is a shoestring, a rolled paper towel, a rolled glass microfiber filter, or solid ceramic. The rod-shaped wick is secured to the tensiometer by a cone washer and a threaded fitting.
Wicked problems and a 'wicked' solution.
Walls, Helen L
2018-04-13
'Wicked' is the term used to describe some of the most challenging and complex issues of our time, many of which threaten human health. Climate change, biodiversity loss, persisting poverty, the advancing obesity epidemic, and food insecurity are all examples of such wicked problems. However, there is a strong body of evidence describing the solutions for addressing many of these problems. Given that much is known about how many of these problems could be addressed - and given the risks of not acting - what will it take to create the 'tipping point' needed for effective action? A recent (2015) court ruling in The Hague held that the Dutch government's stance on climate change was illegal, ordering it to cut greenhouse gas emissions by at least 25% within 5 years (by 2020), relative to 1990 levels. The case was filed on behalf of 886 Dutch citizens, suing the government for violating human rights and climate change treaties by failing to take adequate action to prevent the harmful impacts of climate change. This judicial ruling has the potential to provide a way forward, inspiring other civil movements and creating a template from which to address other wicked problems. This judicial strategy to address the need to lower greenhouse gas emissions in the Netherlands is not a magic bullet, and requires a particular legal and institutional setting. However, it has the potential to be a game-changer - providing an example of a strategy for achieving domestic regulatory change that is likely to be replicable in some countries elsewhere, and an example of a particularly 'wicked' (in the positive, street-slang sense of the word) strategy to address seemingly intractable and wicked problems.
Anthropocene Age Wicked Challenges
DEFF Research Database (Denmark)
Edgeman, Rick; Wu, Zhaohui
2015-01-01
Grand global challenges, including wicked, human-caused or human-influenced ones that are key to sustainability, characterize the Anthropocene Age. Among these are climate change, driven by increased methane and CO2 in the atmosphere, and the consequent global warming and increasing intensity and incidence of extreme wea...
DEFF Research Database (Denmark)
Tietjen, Anne; Jørgensen, Gertrud
2016-01-01
In a time of increasing globalisation and urbanisation, shrinking peripheral rural areas have become a truly wicked planning problem in many European countries. Although a problem can be easily perceived and measured by various indicators, the precise definition of the problem is problematic. Based ..., place-based and project-oriented process directed at concrete physical outcomes. We frame strategic planning as a translation process where the interaction between human and non-human actors translates a unique, complex and contested situation into an innovated situation. We find that local physical ...-understandings, increased social capital, and follow-up projects initiated beyond the actual planning process. We conclude that local physical projects, when conceived in a collaborative and strategic manner, can contribute to sustainable adaptation to rural shrinkage.
Edmondson, Amy C
2016-06-01
Companies today increasingly rely on teams that span many industries for radical innovation, especially to solve "wicked problems." So leaders have to understand how to promote collaboration when roles are uncertain, goals are shifting, expertise and organizational cultures are varied, and participants have clashing or even antagonistic perspectives. HBS professor Amy Edmondson has studied more than a dozen cross-industry innovation projects, among them the creation of a new city, a mango supply-chain transformation, and the design and construction of leading-edge buildings. She has identified the leadership practices that make successful cross-industry teams work: fostering an adaptable vision, promoting psychological safety, enabling knowledge sharing, and encouraging collaborative innovation. Though these practices are broadly familiar, their application within cross-industry teams calls for unique leadership approaches that combine flexibility, open-mindedness, humility, and fierce resolve.
Directory of Open Access Journals (Sweden)
Hemantkumar P. Bulsara
2013-12-01
Full Text Available Entrepreneurship is one solution to the problem of unemployment in any economy. When we think of technology innovations, we normally think of engineers from top technology institutions, but innovations may also come from grassroots people. This paper presents the case study of Nature Technocrats, the small business firm of Arvindbhai, who has been supported by GIAN (Grassroots Innovations Augmentation Network), a technology business incubator for grassroots innovations in India. India has many technology business incubation centers, but the approach of GIAN is unique in that it supports grassroots innovators. This paper has come out of a larger study with the research design of a multiple embedded descriptive case study. The process of GIAN, with its unique mechanism for commercializing grassroots innovations, is described, as are the problems in this area. This case study may inspire other agencies, in India and in other countries, to work in the area of grassroots innovations to techno-entrepreneurship. Keywords: Grassroots innovations; Technology innovations; Techno-Entrepreneurship; GIAN; Technology Transfer.
Entrevista com Mathieu Dosse, Gian Luigi de Rosa e Michael Kegler
Directory of Open Access Journals (Sweden)
Andréia Guerini
2017-09-01
Full Text Available The three interviews that follow address the same theme: aspects of translation/adaptation in general and, in particular, the translation of Luiz Ruffato's novel Estive em Lisboa e lembrei de você. Ruffato's novel was written in 2009, published in Portugal (Quetzal, 2010) and translated into several languages, including Italian (La Nuova Frontiera, 2011), Spanish (Eterna Cadencia, 2011), French (Chandeigne, 2015) and German (Assoziation A, 2016); in the coming months it will be published in Finnish (Into). The book was adapted for the cinema in 2015 by the Portuguese filmmaker José Barahona and has been screened at national and international festivals. The interviews below were conducted in 2016 with the translators Gian Luigi De Rosa, Mathieu Dosse and Michael Kegler. Gian Luigi De Rosa (Italy, 1969–) holds a doctorate in "Culture e Istituzioni dei paesi di lingue iberiche in età moderna e contemporanea" and is professor of Portuguese at the University of Salento, in Lecce. He is the author of books and essays on Portuguese language and linguistics, Portuguese and Brazilian literature, and audiovisual and intersemiotic translation. He was also responsible for the Italian subtitles of the film Estive em Lisboa e lembrei de você. Mathieu Dosse (Brazil, 1978–) is a graduate of Université Paris 8, where he studied translation theory. He has translated Graciliano Ramos, Luiz Ruffato and João Guimarães Rosa into French. Michael Kegler (Germany, 1967–) studied Brazilian and Portuguese literature at the University of Frankfurt, without completing the degree. He worked as a bookseller at the Centro do Livro de Língua Portuguesa before becoming a literary translator. He has translated José Eduardo Agualusa, Moacyr Scliar and Luiz Ruffato, among other authors, into German. In July 2016, together with Ruffato, he received the Hermann Hesse literary prize in Germany for the quality of the published work and its translation.
Governance of wicked climate adaptation problems
Termeer, C.J.A.M.; Dewulf, A.; Breeman, G.E.
2013-01-01
Climate change adaptation has been called a “wicked problem par excellence.” Wicked problems are hard to define because ‘the formulation of the problem is the problem’; they are considered a symptom of another problem; they are highly resistant to solutions and extremely interconnected with other
Wicked Problems in Special and Inclusive Education
Armstrong, David
2017-01-01
This special paper provides a critical overview of wicked problems in special and inclusive education. Practically, this paper provides a strategic framework for future special issues in the "Journal of Special Educational Needs". Critical attention is also given to the concept of a wicked problem when applied to research in special and…
Machined Titanium Heat-Pipe Wick Structure
Rosenfeld, John H.; Minnerly, Kenneth G.; Gernert, Nelson J.
2009-01-01
Wick structures fabricated by machining of titanium porous material are essential components of lightweight titanium/water heat pipes of a type now being developed for operation at temperatures up to 530 K in high-radiation environments. In the fabrication of some prior heat pipes, wicks have been made by extruding axial grooves into aluminum; unfortunately, titanium cannot be extruded. In the fabrication of some other prior heat pipes, wicks have been made by in-situ sintering of metal powders shaped by the use of forming mandrels that are subsequently removed, but in the specific application that gave rise to the present fabrication method, the required dimensions and shapes of the heat-pipe structures would make it very difficult, if not impossible, to remove the mandrels because of the length and the small diameter. In the present method, a wick is made from one or more sections that are fabricated separately and assembled outside the tube that constitutes the outer heat-pipe wall. The starting wick material is a slab of porous titanium. This material is machined in its original flat configuration to form axial grooves. In addition, interlocking features are machined at the mating ends of short wick sections that are to be assembled into a full-length continuous wick structure. Once the sections have been assembled, the resulting full-length flat wick structure is rolled into a cylindrical shape and inserted in the heat-pipe tube (see figure). This wick-structure fabrication method is not limited to titanium/water heat pipes: it could be extended to other heat-pipe materials and working fluids in which the wicks could be made from materials that can be pre-formed into porous slabs.
Kuusk, Priit, 1938-
1999-01-01
London's Royal Opera House reopened on 1 December with a gala evening, after two years of rebuilding. C. G. Menotti was in Helsinki in connection with a new production of his opera "The Consul" at the Finnish National Opera. O. Messiaen's only opera, "Saint François d'Assise", was released on record. P. Ruzicka was chosen as artistic director of the Salzburg Festival from 2001.
Studies on Wicking Behaviour of Polyester Fabric
Directory of Open Access Journals (Sweden)
Arobindo Chatterjee
2014-01-01
Full Text Available This paper investigates the vertical wicking properties of polyester fabric as a function of sample direction and applied tension, and compares experimental results with theoretical predictions. Polyester fabric made from spun yarn, with four levels of pick density, was used. Theoretical values of vertical wicking were calculated using the Lucas-Washburn equation, and experimental results were recorded using the strip test method. The maximum height reached experimentally, in both the warp and weft directions, exceeds the theoretical values, and the maximum height attained in the weft direction is greater than in the warp direction. Vertical wicking increases with increasing tension. The paper focuses on wicking, which plays a vital role in determining the comfort and moisture-transport behavior of fabric.
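The Lucas-Washburn relation used above for the theoretical wicking curves can be sketched numerically. The fluid properties and the effective capillary radius below are illustrative assumptions (a water-like liquid in a ~20 μm inter-yarn channel), not values from the paper:

```python
import math

# Lucas-Washburn wicking height, ignoring gravity:
#   h(t) = sqrt(r_eff * sigma * cos(theta) * t / (2 * mu))
# The gravity-limited equilibrium (Jurin) height is
#   h_eq = 2 * sigma * cos(theta) / (rho * g * r_eff)

def washburn_height(t, r_eff, sigma=0.0728, theta=0.0, mu=1.0e-3):
    """Wicking height (m) after time t (s), gravity neglected."""
    return math.sqrt(r_eff * sigma * math.cos(theta) * t / (2.0 * mu))

def equilibrium_height(r_eff, sigma=0.0728, theta=0.0, rho=1000.0, g=9.81):
    """Equilibrium rise (m) where capillary pressure balances gravity."""
    return 2.0 * sigma * math.cos(theta) / (rho * g * r_eff)

r = 20e-6  # assumed effective capillary radius between yarns (m)
for t in (10.0, 60.0, 300.0):
    print(f"t = {t:5.0f} s  ->  h = {100 * washburn_height(t, r):.1f} cm")
print(f"equilibrium height: {100 * equilibrium_height(r):.1f} cm")
```

The square-root time dependence (doubling the height takes four times as long) is the signature the strip test is compared against; real fabrics deviate because the yarn structure is not a single uniform capillary.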
Higher-derivative Lee-Wick unification
International Nuclear Information System (INIS)
Carone, Christopher D.
2009-01-01
We consider gauge coupling unification in Lee-Wick extensions of the Standard Model that include higher-derivative quadratic terms beyond the minimally required set. We determine how the beta functions are modified when some Standard Model particles have two Lee-Wick partners. We show that gauge coupling unification can be achieved in such models without requiring the introduction of additional fields in the higher-derivative theory and we comment on possible ultraviolet completions.
Additive Manufacturing of Heat Pipe Wicks, Phase I
National Aeronautics and Space Administration — Wick properties are often the limiting factor in a heat pipe design. Current technology uses conventional sintering of metal powders, screen wick, or grooves to...
Creating public value in global wicked problems
Geuijen, K. (Karin); Moore, M. (Mark); Cederquist, A. (Andrea); Ronning, R. (Rolf); M. van Twist (Mark)
2017-01-01
This essay explores the ways in which Public Value Theory (PVT) can guide analysis and action on global wicked issues such as forced migration. We found that (1) PVT enables envisioning global, collective, public value as well as value for individuals,
Solving Wicked Problems through Action Learning
Crul, Liselore
2014-01-01
This account of practice outlines the Oxyme Action Learning Program which was conducted as part of the Management Challenge in my final year of the MSc in Coaching and Behavioral Change at Henley Business School. The central research questions were: (1) how action learning can help to solve wicked problems and (2) what the effect of an action…
The wicked character of psychosocial risks
DEFF Research Database (Denmark)
Helbo Jespersen, Anne; Hasle, Peter; Nielsen, Klaus Tranetoft
2016-01-01
Psychosocial risks constitute a significant problem in most workplaces, and they are generally considered more difficult to regulate than many other occupational health and safety risks. This article investigates the challenges of regulating psychosocial risks in the workplace. The difficulties lie in the particular nature of psychosocial risks: their complexity, uncertainty, value, and power divergences. Psychosocial risks therefore resemble ‘wicked problems’, typically characterized by unclear cause-effect relationships and uncertain solutions. We use the ‘wicked problems’ concept to show how workplace regulation, and particularly the enforcement in the form of inspection and audits of certified occupational health and safety management systems, face challenges in assessing psychosocial risks and the strategies used by regulators to overcome these challenges. While regulation has become more effective...
Climate Change as a Wicked Problem
Directory of Open Access Journals (Sweden)
John FitzGibbon
2012-05-01
Full Text Available Understanding complexity suggests that some problems are more complex than others and defy conventional solutions. These wicked problems will not be solved by the same tools and processes that are complicit in creating them. Neither will they be resolved by approaches short on explicating the complex interconnections of the multiple causes, consequences, and cross-scale actors of the problem. Climate change is one such wicked problem confronting water management in Ghana with a dilemma. The physical consequences of climate change on Ghana’s water resources are progressively worsening. At the same time, existing institutional arrangements demonstrate weak capacities to tackle climate change–related complexities in water management. Therefore, it warrants a dynamic approach imbued with complex and adaptive systems thinking, which also capitalizes on instrumental gains from prior existing institutions. Adaptive Co-Management offers such an opportunity for Ghana to adapt its water management system to climate change.
Instability of the Lee-Wick bounce
International Nuclear Information System (INIS)
Karouby, Johanna; Brandenberger, Robert; Qiu, Taotao
2011-01-01
It was recently realized [Y. F. Cai, T. t. Qiu, R. Brandenberger, and X. m. Zhang, Phys. Rev. D 80, 023511 (2009).] that a model constructed from a Lee-Wick type scalar field theory yields, at the level of homogeneous and isotropic background cosmology, a bouncing cosmology. However, bouncing cosmologies induced by pressureless matter are in general unstable to the addition of relativistic matter (i.e. radiation). Here we study the possibility of obtaining a bouncing cosmology if we add radiation coupled to the Lee-Wick scalar field. This coupling in principle would allow the energy to flow from radiation to matter, thus providing a drain for the radiation energy. However, we find that it takes an extremely unlikely fine-tuning of the initial phases of the field configurations for a sufficient amount of radiative energy to flow into matter. For general initial conditions, the evolution leads to a singularity rather than a smooth bounce.
A parametric study of porous wick in heat pipe
International Nuclear Information System (INIS)
Park, Yong Jin; Jun, Sang Ook; Jung, Ji Hun; Kim, Jeong Hwa; Lee, Dong Ho
2008-01-01
Heat pipes, which derive pumping power from capillary forces, have long been used mainly to cool heat sources in satellites. Among the types of heat pipes, the Loop Heat Pipe and the Capillary Pumped Loop usually use porous wicks, such as sintered powder and fine wicks, to circulate the working fluid. These porous wicks have many design variables that affect thermal phenomena such as the capillary driving force, disjoining pressure, drying limitation, and boiling limitation. Additionally, the fins connecting the evaporator surface with the porous wick also affect the thermal characteristics of the heat pipe. In particular, the vapor blanket thickness, a main variable in deciding the wick thickness, depends on the fin sizes. Accordingly, understanding the characteristics of the design variables of porous wicks and fins is important for design optimization of heat pipes. In this paper, analytical methods and results are discussed in terms of a parametric study.
Study of PTFE wick structure applied to loop heat pipe
International Nuclear Information System (INIS)
Wu, Shen-Chun; Gu, Tzu-Wei; Wang, Dawn; Chen, Yau-Ming
2015-01-01
This study investigated the use of sintered PTFE (polytetrafluoroethylene) particles as the wick material of a loop heat pipe (LHP), taking advantage of PTFE's low thermal conductivity to reduce the heat leakage problem during the LHP's operation. Different PTFE particle sizes were tried to find the one that resulted in the best wick; LHP performance tests were then conducted, and PTFE's potential for application to LHPs was examined. Using PTFE particles ranging from 300 to 500 μm in size, the best wick properties were an effective pore radius of 1.7 μm, a porosity of 50%, and a permeability of 6.2 × 10⁻¹² m². LHP performance tests showed that, under the typical electronic device operating temperature of 85 °C, the heat load reached 450 W, the thermal resistance was 0.145 °C/W, and the critical heat load (dryout heat load) reached 600 W. Compared to an LHP with a nickel wick, the LHP with a PTFE wick had a significantly lower operating temperature, indicating reduced heat leakage during operation, while having comparable performance; also, during the manufacturing process, a PTFE wick required a lower sintering temperature, needed a shorter sintering time, and had no need for hydrogen gas during sintering. The results of this study showed that, for high heat transfer capacity cooling devices, PTFE wicks possess great potential for applications to LHPs. - Highlights: • The performances of PTFE and nickel wicks in LHP are comparable for the first time. • PTFE wick allows for lower operating temperature and thus pressure in LHP system. • A wick requiring lower temperature and manufacturing cost and less time was made. • PTFE wick has potential to replace metal wick and enhance performance of LHP
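As a rough plausibility check on the figures quoted above, the Young-Laplace relation links the reported effective pore radius to the maximum capillary pressure the wick can sustain. The surface tension of water near 85 °C (~0.0625 N/m) and perfect wetting (zero contact angle) are assumptions, not measurements from the study:

```python
# Young-Laplace: maximum capillary pressure of a porous wick,
#   P_cap = 2 * sigma * cos(theta) / r_eff

def capillary_pressure(r_eff_m, sigma_n_per_m, cos_theta=1.0):
    """Maximum capillary pressure (Pa) for effective pore radius r_eff_m (m)."""
    return 2.0 * sigma_n_per_m * cos_theta / r_eff_m

p_cap = capillary_pressure(1.7e-6, 0.0625)  # reported r_eff = 1.7 um, assumed sigma
dT = 450.0 * 0.145  # reported heat load (W) x thermal resistance (C/W)

print(f"capillary pressure ~ {p_cap / 1000:.0f} kPa")
print(f"evaporator-to-sink temperature drop at 450 W ~ {dT:.2f} C")
```

Under these assumptions the wick can supply a capillary head of roughly 74 kPa, which is consistent with the high dryout load the study reports.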
Sintered Nickel Powder Wicks for Flat Vertical Heat Pipes
Directory of Open Access Journals (Sweden)
Geir Hansen
2015-03-01
Full Text Available The fabrication and performance of wicks for flat heat pipe applications, produced by sintering a filamentary nickel powder, have been investigated. Tape casting was used as an intermediate step in the wick production process. Thermogravimetric analysis was used to study the burn-off of the organic binder and the oxidation and reduction processes of the nickel. The wicks produced were flat and rectangular, intended for liquid transport in the upwards vertical direction. Rate-of-rise experiments using heptane were used to test the flow characteristics of the wicks. The wick porosities were measured using isopropanol. The heat transfer limitation constituted by the vapour static pressure and the capillary pressure was discussed, as was the influence of using a pore former during manufacturing on wick performance. When Pcap/Psat > 1, the use of a pore former to increase the wick permeability will always improve the wick performance. When Pcap/Psat < 1, it was shown that if the effective pore radius and the permeability increase by an equal percentage, the overall influence on the wick capacity is negative. A criterion for a successful pore former introduction is proposed and the concept of a pore former evaluation plot is presented.
Thermodynamics of the Lee-Wick partners: An alternative approach
International Nuclear Information System (INIS)
Bhattacharya, Kaushik; Das, Suratna
2011-01-01
It was pointed out some time ago that there are two variations in which the divergences of a quantum field theory can be tamed using the ideas presented by Lee and Wick. In one variation the Lee-Wick partners of the normal fields live in an indefinite-metric Hilbert space but have positive energy; in the other, the Lee-Wick partners live in a normal Hilbert space but carry negative energy. Quantum mechanically, the two variations differ mainly in the way the fields are quantized. In this article the second variation of Lee and Wick's idea is discussed. Using statistical mechanical methods, the energy density, pressure and entropy density of the negative-energy Lee-Wick fields have been calculated. The results exactly match the thermodynamic results for the conventional, positive-energy Lee-Wick fields. This sheds some light on the second variation of Lee and Wick's idea: it suggests that the thermodynamics of these theories does not depend on the way they are quantized.
Constraints on the Lee-Wick Higgs sector
International Nuclear Information System (INIS)
Carone, Christopher D.; Primulando, Reinard
2009-01-01
Lee-Wick partners to the standard model Higgs doublet may appear at a mass scale that is significantly lower than that of the remaining Lee-Wick partner states. The relevant effective theory is a two-Higgs doublet model in which one doublet has wrong-sign kinetic and mass terms. We determine bounds on this effective theory, including those from neutral B-meson mixing, b → X_s γ, and Z → bb̄. The results differ from those of conventional two-Higgs doublet models and lead to meaningful constraints on the Lee-Wick Higgs sector.
Partially Acetylated Sugarcane Bagasse For Wicking Oil From Contaminated Wetlands
Sugarcane bagasse was partially acetylated to enhance its oil-wicking ability in saturated environments while holding moisture for hydrocarbon biodegradation. The water sorption capacity of raw bagasse was reduced fourfold after treatment, which indicated considerably increased ...
The Wick-Concept for Thermal Insulation of Cold Piping
DEFF Research Database (Denmark)
Koverdynsky, Vit; Korsgaard, Vagn; Rode, Carsten
2006-01-01
The wick-concept for thermal insulation of cold piping is based on capillary suction of a fiber fabric to remove excess water from the pipe surface by transporting it to the outer surface of the insulation. From the surface of the insulation jacket, the water will evaporate to the ambient air. This will prevent long-term accumulation of moisture in the insulation material. The wick keeps the hydrophobic insulation dry, allowing it to maintain its thermal performance. The liquid moisture is kept only in the wick fabric. This article presents the principle of operation of cold pipe insulation using the wick-concept in either of two variations: the self-drying or the self-sealing system. Experiments have been carried out using different variations of the two systems to investigate the conditions for exploiting the drying capabilities of the systems, and the results are presented. The results show that these insulation systems work for pipes with temperatures above 0 °C and for ambient conditions within common ranges for industrial applications.
Nanocoatings for Wicking of Low-Viscosity Cryogens Project
Fesmire, James E.
2014-01-01
The goal of this project is to develop smart, switchable materials systems for use in thermal management systems, including the evaluation of wicking nanocoatings for use in the transport and storage of cryogens.
Nanocoatings for Wicking of Low-Viscosity Cryogens
National Aeronautics and Space Administration — The goal of this project is to investigate and develop smart, switchable materials systems for use in thermal management systems, including the evaluation of wicking...
Stochastic symmetries of Wick type stochastic ordinary differential equations
Ünal, Gazanfer
2015-04-01
We consider Wick type stochastic ordinary differential equations with Gaussian white noise. We define the stochastic symmetry transformations and Lie equations in the Kondratiev space (S)_{-1}^N. We derive the determining system of Wick type stochastic partial differential equations with Gaussian white noise. Stochastic symmetries for the stochastic Bernoulli, Riccati and general stochastic linear equations in (S)_{-1}^N are obtained. A stochastic version of canonical variables is also introduced.
Felt-metal-wick heat-pipe solar receiver
Energy Technology Data Exchange (ETDEWEB)
Andraka, C.E.; Adkins, D.R.; Moss, T.A. [Sandia National Labs., Albuquerque, NM (United States); Cole, H.M. [Porous Metal Products, Jacksboro, TX (United States); Andreas, N.H. [Bekaert Corp., Marietta, GA (United States)
1994-12-31
Reflux heat-pipe receivers have been identified as a desirable interface to couple a Stirling-cycle engine with a parabolic dish solar concentrator. The reflux receiver provides power nearly isothermally to the engine heater heads while decoupling the heater head design from the solar absorber surface design. The independent design of the receiver and engine heater head leads to higher system efficiency. Heat pipe reflux receivers have been demonstrated at approximately 65 kWt power throughput. Several 25 to 30 kWe Stirling-cycle engines are under development, and will soon be incorporated in commercial dish-Stirling systems. These engines will require reflux receivers with power throughput limits reaching 90 kWt. The extension of heat pipe technology from 60 kWt to 100 kWt is not trivial. Current heat pipe wick technology is pushed to its limits. It is necessary to develop and test advanced wick structure technologies to perform this task. Sandia has developed and begun testing a Bekaert Corporation felt metal wick structure fabricated by Porous Metal Products Inc. This wick is about 95% porous, and has a liquid permeability a factor of 2 to 8 times higher than conventional technologies for a given maximum pore radius. The wick has been successfully demonstrated in a bench-scale heat pipe, and a full-scale on-sun receiver has been fabricated. This report details the wick design, characterization and installation into a heat pipe receiver, and the results of the bench-scale tests are presented. The wick performance is modeled, and the model results are compared to test results.
Wicked ID: Conceptual Framework for Considering Instructional Design as a Wicked Problem
Directory of Open Access Journals (Sweden)
Katrin Becker
2007-02-01
Full Text Available The process of instructional design has parallels in other design disciplines. Software design is one that has experienced intense attention in the last 30 or so years, and many lessons learned there can be applied to ID. Using software design as a springboard, this concept paper seeks to propose a new approach to ID. It suggests that instructional design is almost always a Wicked Problem. The connection is formed between Wicked Problems as first described by Rittel and Webber in 1973, and the models in and processes of instructional design. The areas of social planning, organizational management and software design all possess some accepted and tested approaches to the solution of Wicked Problems. These will be described, and how they can be applied to ID will be explained. Finally, this paper will propose a meta-model for ID and explain how it can be used in the current context.
The Wick-Concept for Thermal Insulation of Cold Piping
DEFF Research Database (Denmark)
Koverdynsky, Vit; Korsgaard, Vagn; Rode, Carsten
2006-01-01
The wick-concept for thermal insulation of cold piping is based on capillary suction of a fiber fabric to remove excess water from the pipe surface by transporting it to the outer surface of the insulation. From the surface of the insulation jacket, the water will evaporate to the ambient air. This will prevent long-term accumulation of moisture in the insulation material. The wick keeps the hydrophobic insulation dry, allowing it to maintain its thermal performance. The liquid moisture is kept only in the wick fabric. This article presents the principle of operation of cold pipe insulation using the wick concept, and shows that the variations of these types of insulation systems work for pipes with temperatures above 0 °C and for ambient conditions within common ranges for industrial applications.
Deviations from Wick's theorem in the canonical ensemble
Schönhammer, K.
2017-07-01
Wick's theorem for the expectation values of products of field operators for a system of noninteracting fermions or bosons plays an important role in the perturbative approach to the quantum many-body problem. A finite-temperature version holds in the framework of the grand canonical ensemble, but not for the canonical ensemble appropriate for systems with fixed particle number such as ultracold quantum gases in optical lattices. Here we present formulas for expectation values of products of field operators in the canonical ensemble using a method in the spirit of Gaudin's proof of Wick's theorem for the grand canonical case. The deviations from Wick's theorem are examined quantitatively for two simple models of noninteracting fermions.
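The deviation described in this abstract can be illustrated with a toy computation (not taken from the paper): for a handful of noninteracting fermionic modes, the grand canonical ensemble factorizes occupation correlations exactly, while fixing the particle number does not. The mode energies, temperature, and chemical potential below are arbitrary illustrative values.

```python
from itertools import product
import math

def averages(beta, energies, mu=None, N=None):
    """Thermal averages <n_i> and <n_0 n_1> for noninteracting fermions.
    mu set -> grand canonical ensemble (all particle numbers);
    N set  -> canonical ensemble (exactly N particles)."""
    Z = 0.0
    n_avg = [0.0] * len(energies)
    n0n1 = 0.0
    for occ in product((0, 1), repeat=len(energies)):
        if N is not None and sum(occ) != N:
            continue  # canonical: keep only states with exactly N particles
        E = sum(n * e for n, e in zip(occ, energies))
        if mu is not None:
            E -= mu * sum(occ)
        w = math.exp(-beta * E)
        Z += w
        for i, n in enumerate(occ):
            n_avg[i] += n * w
        n0n1 += occ[0] * occ[1] * w
    return [x / Z for x in n_avg], n0n1 / Z

energies = [0.0, 1.0, 2.0]

# Grand canonical: Wick's theorem gives <n0 n1> = <n0><n1> exactly.
n_gc, c_gc = averages(beta=1.0, energies=energies, mu=0.5)
print(abs(c_gc - n_gc[0] * n_gc[1]))  # ~0: Wick holds

# Canonical, N = 1: <n0 n1> = 0 identically, but <n0><n1> > 0.
n_c, c_c = averages(beta=1.0, energies=energies, N=1)
print(c_c, n_c[0] * n_c[1])
```

In the grand canonical case the Boltzmann weight factorizes over modes, so the correlator factorizes to rounding error; in the canonical N = 1 sector two modes can never be occupied simultaneously, so Wick factorization visibly fails.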
Performance of solar still with a concave wick evaporation surface
International Nuclear Information System (INIS)
Kabeel, A.E.
2009-01-01
Surfaces used for evaporation and condensation play important roles in the performance of a basin-type solar still. In the present study, a concave wick surface was used for evaporation, whereas the four sides of a pyramid-shaped still were used for condensation. Use of a jute wick increased the amount of absorbed solar radiation and enhanced the evaporation surface area. A concave wick surface increases the evaporation area due to the capillary effect. Results show that the average distillate productivity in daytime was 4.1 l/m², and a maximum instantaneous system efficiency of 45% and an average daily efficiency of 30% were recorded. The maximum hourly yield was 0.5 l/(h·m²) after solar noon. The estimated cost of 1 l of distillate was $0.065 for the presented solar still.
Radiation bounce from the Lee-Wick construction?
International Nuclear Information System (INIS)
Karouby, Johanna; Brandenberger, Robert
2010-01-01
It was recently realized that matter modeled by the scalar field sector of the Lee-Wick standard model yields, in the context of a homogeneous and isotropic cosmological background, a bouncing cosmology. However, bouncing cosmologies induced by pressureless matter are in general unstable to the addition of relativistic matter (i.e. radiation). Here we study the possibility of obtaining a bouncing cosmology if we add not only radiation, but also its Lee-Wick partner, to the matter sector. We find that, in general, no bounce occurs. The only way to obtain a bounce is to choose initial conditions with very special phases of the radiation field and its Lee-Wick partner.
Lee-Wick indefinite metric quantization: A functional integral approach
International Nuclear Information System (INIS)
Boulware, D.G.; Gross, D.J.
1984-01-01
In an attempt to study the stability of the Lee-Wick indefinite metric theory, the functional integral for indefinite metric quantum field theories is derived. Theories with an indefinite classical energy may be quantized with either a normal metric and an indefinite energy in Minkowski space or an indefinite metric and a positive energy in euclidean space. However, the functional integral in the latter formulation does not incorporate the Lee-Wick prescription for assuring the unitarity of the positive energy positive metric sector of the theory, hence the stability of the theory cannot be studied non-perturbatively. (orig.)
The wicked problems of supplier-driven innovation
DEFF Research Database (Denmark)
Christensen, Poul Rind; Munksgaard, Kristin Balslev; Bang, Anne Louise
2017-01-01
Suppliers stand in the wake of a new diversified strategic momentum in the global production network, where innovation is growing in importance. The term “supplier-driven innovation” is coined in contrast to the current hype on user-driven innovation; this paper aims to discuss the wicked problems for suppliers in actively engaging in customers’ innovations.
Hyperbolic white noise functional solutions of Wick-type stochastic compound KdV-Burgers equations
International Nuclear Information System (INIS)
Han Xiu; Xie Yingchao
2009-01-01
Variable coefficient and Wick-type stochastic compound KdV-Burgers equations are investigated. By using white noise analysis, Hermite transform and the hyperbolic function method, we obtain a number of Wick versions of hyperbolic white noise functional solutions and hyperbolic function solutions for Wick-type stochastic and variable coefficient compound KdV-Burgers equations, respectively.
Influence of wick properties in a vertical LHP on remove waste heat from electronic equipment
Energy Technology Data Exchange (ETDEWEB)
Smitka, Martin; Nemec, Patrik; Malcho, Milan (University of Žilina, Faculty of Mechanical Engineering, Department of Power Engineering, Univerzitna 1, 010 26 Žilina, Slovakia)
2014-08-06
The loop heat pipe is a vapour-liquid phase-change device that transfers heat from evaporator to condenser. One of the most important parts of the LHP is the porous wick structure, which provides the capillary force to circulate the working fluid. To achieve good thermal performance of the LHP, capillary wicks with high permeability, high porosity and fine pore radius are desired. The aim of this work is to develop porous wicks of sintered nickel powder with different grain sizes. These porous wicks were used in an LHP, and a series of measurements was performed on removing waste heat from an insulated gate bipolar transistor (IGBT).
Virtuous Mess and Wicked Clarity: Struggle in Higher Education Research
McArthur, Jan
2012-01-01
This article considers the value of clarity--of theory, method and purposes--in educational research. It draws upon the work of early critical theorist, Theodor Adorno, and particularly his notion of negative dialectics and his challenge to the traditional dichotomy of theory and practice. Using the notions of virtuous mess and wicked clarity, I…
Multiphonon theory: generalized Wick's theorem and recursion formulas
International Nuclear Information System (INIS)
Silvestre-Brac, B.; Piepenbring, R.
1982-04-01
Overlaps and matrix elements of one- and two-body operators are calculated in a space spanned by multiphonons of different types, taking the Pauli principle properly into account. Two methods are developed: a generalized Wick's theorem dealing with new contractions, and recursion formulas well suited for numerical applications
The complexity of wicked problems in large scale change
Waddock, S.; Meszoely, G.; Waddell, S.; Dentoni, D.
2015-01-01
Purpose – The purpose of this paper is to extend and elaborate the notion of successful organizational change to incorporate the concept of large system change (LSC), by developing a framework that brings together complexity and wicked problems theories to understand how individual organizations and
MOOCs, Wicked Problems, and the Spirit of the Liberal Arts
McClure, Maureen W.
2014-01-01
Higher education institutions today are increasingly considered to be "means," serving as suppliers for employers, not "ends" that address "wicked" problems. This disregards their role in the generational succession of civil societies. Massive open online courses can strengthen higher education institutions by working…
Embracing Wicked Problems: The Turn to Design in Composition Studies
Marback, Richard
2009-01-01
Recent appeal to the concept of design in composition studies benefits teaching writing in digital media. Yet the concept of design has not been developed enough to fully benefit composition instruction. This article develops an understanding of design as a matter of resolving wicked problems and makes a case for the advantages of this…
Fractal Loop Heat Pipe Performance Comparisons of a Soda Lime Glass and Compressed Carbon Foam Wick
Myre, David; Silk, Eric A.
2014-01-01
This study compares the heat flux performance of a Loop Heat Pipe (LHP) wick structure fabricated from compressed carbon foam with that of a wick structure fabricated from sintered soda lime glass. Each wick was used in an LHP containing a fractal-based evaporator. The Fractal Loop Heat Pipe (FLHP) was designed and manufactured by Mikros Manufacturing Inc. The compressed carbon foam wick structure was manufactured by ERG Aerospace Inc., and machined to specifications comparable to those of the initial soda lime glass wick structure. Machining of the compressed foam as well as performance testing was conducted at the United States Naval Academy. Performance testing with the sintered soda lime glass wick structures was conducted at NASA Goddard Space Flight Center. Heat input for both wick structures was supplied via cartridge heaters mounted in a copper block. The copper heater block was placed in contact with the FLHP evaporator, which had a circular cross-sectional area of 0.88 cm². Twice-distilled, deionized water was used as the working fluid in both sets of experiments. Thermal performance data were obtained for three different condenser/subcooler temperatures under degassed conditions. Both wicks demonstrated comparable heat flux performance, with a maximum of 75 W/cm² observed for the soda lime glass wick and 70 W/cm² for the compressed carbon foam wick.
Dynamic model of heat and mass transfer in an unsaturated porous wick of capillary pumped loop
International Nuclear Information System (INIS)
Boubaker, Riadh; Platel, Vincent; Berges, Alexis; Bancelin, Mathieu; Hannezo, Edouard
2015-01-01
This paper presents a numerical study of a Capillary Pumped Loop evaporator. A two-dimensional unsteady mathematical model of a flat evaporator is developed to simulate heat and mass transfer in an unsaturated porous wick with phase change. The liquid–vapor phase change inside the porous wick is described by Langmuir's law. The governing equations are solved by the Finite Element Method. Results are then presented for a sintered nickel wick with methanol as the working fluid. The heat flux required for the transition from the all-liquid wick to the vapor–liquid wick is calculated. The dynamic and thermodynamic behavior of the working fluid in the capillary structure is discussed in this paper. - Highlights: • We develop an unsteady model of two-phase flow in a porous wick with phase change. • We describe the heat and mass transfer inside the CPL evaporator. • We study the dynamic growth of the vapor pocket inside the porous wick. • The transition from the all-liquid wick to the vapor–liquid wick is examined. • A porous wick with large porosity and conductivity reduces the parasitic flux
Wick polynomials and time-evolution of cumulants
Lukkarinen, Jani; Marcozzi, Matteo
2016-08-01
We show how Wick polynomials of random variables can be defined combinatorially as the unique choice, which removes all "internal contractions" from the related cumulant expansions, also in a non-Gaussian case. We discuss how an expansion in terms of the Wick polynomials can be used for derivation of a hierarchy of equations for the time-evolution of cumulants. These methods are then applied to simplify the formal derivation of the Boltzmann-Peierls equation in the kinetic scaling limit of the discrete nonlinear Schrödinger equation (DNLS) with suitable random initial data. We also present a reformulation of the standard perturbation expansion using cumulants, which could simplify the problem of a rigorous derivation of the Boltzmann-Peierls equation by separating the analysis of the solutions to the Boltzmann-Peierls equation from the analysis of the corrections. This latter scheme is general and not tied to the DNLS evolution equations.
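The combinatorial definition sketched in this abstract can be made concrete in the simplest (single standard Gaussian variable) case: removing all internal contractions reproduces the probabilists' Hermite polynomials, whose Gaussian expectations vanish. This is a hedged illustration of the general construction, not code from the paper.

```python
def wick_power(n):
    """Coefficient list (index = power of x) of the Wick power :x^n: of a
    standard Gaussian variable, via the unit-variance recursion
    :x^n: = x * :x^{n-1}: - (n-1) * :x^{n-2}:."""
    if n == 0:
        return [1]
    if n == 1:
        return [0, 1]
    a = [0] + wick_power(n - 1)      # multiply :x^{n-1}: by x
    b = wick_power(n - 2)            # subtract (n-1) * :x^{n-2}:
    for k, coef in enumerate(b):
        a[k] -= (n - 1) * coef
    return a

def gaussian_moment(k):
    """E[x^k] for a standard Gaussian: (k-1)!! for even k, 0 for odd k."""
    if k % 2:
        return 0
    m = 1
    for j in range(1, k, 2):
        m *= j
    return m

def expectation(poly):
    """Gaussian expectation of a polynomial given as a coefficient list."""
    return sum(c * gaussian_moment(k) for k, c in enumerate(poly))

print(wick_power(3))  # [0, -3, 0, 1], i.e. x^3 - 3x = He_3(x)
print([expectation(wick_power(n)) for n in range(1, 7)])  # [0, 0, 0, 0, 0, 0]
```

The vanishing expectations are exactly the statement that all internal contractions have been subtracted away, which is the property the paper extends to non-Gaussian cumulant expansions.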
Perturbative unitarity of Lee-Wick quantum field theory
Anselmi, Damiano; Piva, Marco
2017-08-01
We study the perturbative unitarity of the Lee-Wick models, formulated as nonanalytically Wick rotated Euclidean theories. The complex energy plane is divided into disconnected regions and the values of a loop integral in the various regions are related to one another by a nonanalytic procedure. We show that the one-loop diagrams satisfy the expected, unitary cutting equations in each region: only the physical d.o.f. propagate through the cuts. The goal can be achieved by working in suitable subsets of each region and proving that the cutting equations can be analytically continued as a whole. We make explicit calculations in the cases of the bubble and triangle diagrams and address the generality of our approach. We also show that the same higher-derivative models violate unitarity if they are formulated directly in Minkowski spacetime.
The wicked character of psychosocial risks:Implications for regulation
Helbo Jespersen, Anne; Hasle, Peter; Nielsen, Klaus Tranetoft
2016-01-01
Psychosocial risks constitute a significant problem in most workplaces, and they are generally considered more difficult to regulate than many other occupational health and safety risks. This article investigates the challenges of regulating psychosocial risks in the workplace. The difficulties lie in the particular nature of psychosocial risks: their complexity, uncertainty, value, and power divergences. Psychosocial risks therefore resemble ‘wicked problems’, typically characterized by uncl...
Intersection local times, loop soups and permanental Wick powers
Jan, Yves Le; Rosen, Jay
2017-01-01
Several stochastic processes related to transient Lévy processes with potential densities u(x,y)=u(y-x), that need not be symmetric nor bounded on the diagonal, are defined and studied. They are real valued processes on a space of measures \\mathcal{V} endowed with a metric d. Sufficient conditions are obtained for the continuity of these processes on (\\mathcal{V},d). The processes include n-fold self-intersection local times of transient Lévy processes and permanental chaoses, which are `loop soup n-fold self-intersection local times' constructed from the loop soup of the Lévy process. Loop soups are also used to define permanental Wick powers, which generalizes standard Wick powers, a class of n-th order Gaussian chaoses. Dynkin type isomorphism theorems are obtained that relate the various processes. Poisson chaos processes are defined and permanental Wick powers are shown to have a Poisson chaos decomposition. Additional properties of Poisson chaos processes are studied and a martingale extension is obt...
Analytic Model for Predicting the Permeability of Foam-type Wick
Energy Technology Data Exchange (ETDEWEB)
Ngo, Ich-Long; Byon, Chan [Yeungnam Univ., Gyeongsan (Korea, Republic of)
2016-06-15
Wicks play an important role in determining the thermal performance of heat pipes. Foam-type wicks are known to have good potential for enhancing the capillary performance of conventional wick types because of their high porosity and permeability. In this study, we develop an analytic expression for predicting the permeability of a foam-type wick based on extensive numerical work. The proposed correlation is based on the modified Kozeny-Carman equation, where the Kozeny-Carman coefficient is given as an exponential function of porosity. The proposed correlations are shown to predict previous experimental results well over an extensive parametric range. The permeability of the foam-type wick is shown to be significantly higher than that of conventional wicks because of its high porosity.
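A minimal sketch of such a correlation follows; the functional form (modified Kozeny-Carman with an exponential porosity-dependent coefficient) matches the abstract, but the constants `a` and `b` are placeholder values, not the paper's fitted ones.

```python
import math

def kozeny_carman_permeability(porosity, d, a=150.0, b=-2.0):
    """Permeability K [m^2] from a modified Kozeny-Carman relation,
        K = porosity**3 * d**2 / (C * (1 - porosity)**2),
    where the Kozeny-Carman coefficient C = a * exp(b * porosity) is an
    exponential function of porosity. a, b are illustrative placeholders."""
    if not 0.0 < porosity < 1.0:
        raise ValueError("porosity must lie in (0, 1)")
    C = a * math.exp(b * porosity)
    return porosity**3 * d**2 / (C * (1.0 - porosity)**2)

# A high-porosity foam wick is far more permeable than a typical
# sintered wick of the same characteristic pore scale:
k_foam = kozeny_carman_permeability(0.90, d=50e-6)
k_sintered = kozeny_carman_permeability(0.50, d=50e-6)
print(k_foam / k_sintered)  # ratio >> 1
```

The strong porosity dependence of the prefactor porosity³/(1 − porosity)² is what drives the large permeability advantage of foams quoted in the abstract.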
Wrong vertex displacements due to Lee-Wick resonances at LHC
International Nuclear Information System (INIS)
Alvarez, E.; Schat, C.; Rold, L. da; Szynkman, A.
2009-01-01
We show how a resonance from the recently proposed Lee-Wick Standard Model could lead to wrong vertex displacements at LHCb. We identify the 'longest-lived' Lee-Wick particle that could be created at the LHC, and we study its possible decays and detection. We conclude that there is a region of the parameter space which would give wrong vertex displacements as a unique signature of the Lee-Wick Standard Model at LHCb. Further numerical simulation shows that the LHC could explore these wrong vertex displacements through Lee-Wick leptons below 500 GeV. (author)
Wicked problems in space technology development at NASA
Balint, Tibor S.; Stevens, John
2016-01-01
Technological innovation is key to enable future space exploration missions at NASA. Technology development, however, is not only driven by performance and resource considerations, but also by a broad range of directly or loosely interconnected factors. These include, among others, strategy, policy and politics at various levels, tactics and programmatics, interactions between stakeholders, resource requirements, performance goals from component to system level, mission infusion targets, portfolio execution and tracking, and technology push or mission pull. Furthermore, at NASA, these influences occur on varying timescales and at diverse geographic locations. Such a complex and interconnected system could impede space technology innovation in this examined segment of the government environment. Hence, understanding the process through NASA's Planning, Programming, Budget and Execution cycle could benefit strategic thinking, planning and execution. Insights could be gained through suitable models, for example assessing the key drivers against the framework of Wicked Problems. This paper discusses NASA specific space technology innovation and innovation barriers in the government environment through the characteristics of Wicked Problems; that is, they do not have right or wrong solutions, only improved outcomes that can be reached through authoritative, competitive, or collaborative means. We will also augment the Wicked Problems model to account for the temporally and spatially coupled, and cyclical nature of this NASA specific case, and propose how appropriate models could improve understanding of the key influencing factors. In turn, such understanding may subsequently lead to reducing innovation barriers, and stimulating technology innovation at NASA. Furthermore, our approach can be adopted for other government-directed environments to gain insights into their structures, hierarchies, operational flow, and interconnections to facilitate circular dialogs towards
Wicked Problems in Natural Hazard Assessment and Mitigation
Stein, S.; Steckler, M. S.; Rundle, J. B.; Dixon, T. H.
2017-12-01
Social scientists have defined "wicked" problems that are "messy, ill-defined, more complex than we fully grasp, and open to multiple interpretations based on one's point of view... No solution to a wicked problem is permanent or wholly satisfying, which leaves every solution open to easy polemical attack." These contrast with "tame" problems in which necessary information is available and solutions - even if difficult and expensive - are straightforward to identify and execute. Updating the U.S.'s aging infrastructure is a tame problem, because what is wrong and how to fix it are clear. In contrast, addressing climate change is a wicked problem because its effects are uncertain and the best strategies to address them are unclear. An analogous approach can be taken to natural hazard problems. In tame problems, we have a good model of the process, good information about past events, and data implying that the model should predict future events. In such cases, we can make a reasonable assessment of the hazard that can be used to develop mitigation strategies. Earthquake hazard mitigation for San Francisco is a relatively tame problem. We understand how the earthquakes result from known plate motions, have information about past earthquakes, and have geodetic data implying that future similar earthquakes will occur. As a result, it is straightforward to develop and implement mitigation strategies. However, in many cases, hazard assessment and mitigation is a wicked problem. How should we prepare for a great earthquake on plate boundaries where tectonics favor such events but we have no evidence that they have occurred and hence how large they may be or how often to expect them? How should we assess the hazard within plates, for example in the New Madrid seismic zone, where large earthquakes have occurred but we do not understand their causes and geodetic data show no strain accumulating? How can we assess the hazard and make sensible policy when the recurrence of
Experimental study on pore structure and performance of sintered porous wick
He, Da; Wang, Shufan; Liu, Rutie; Wang, Zhubo; Xiong, Xiang; Zou, Jianpeng
2018-02-01
Porous wicks were prepared via powder metallurgy using NH4HCO3 powders as the pore-forming agent. The pore-forming agent particle size was varied to control the pore structure and the equivalent pore size distribution of the porous wick. The effect of pore-forming agent particle size on the porosity, pore structure, equivalent pore size distribution and capillary pumping performance was investigated. Results show that as the particle size of the pore-forming agent decreases, the green density and the volume shrinkage of the porous wicks gradually increase and the porosity reduces slightly. There are two types of pores inside the porous wick: large-sized prefabricated pores and small-sized gap pores. As the particle size of the pore-forming agent decreases, the size of the prefabricated pores becomes smaller and their distribution tends to be uniform. Gap pores and prefabricated pores inside the wick can make up different types of pore channels, and the equivalent pore size of the wick is closely related to the structure of these channels. Furthermore, the equivalent pore size distribution of the wick shows an obvious double-peak feature when the pore-forming agent particle size is large. As the particle size of the pore-forming agent decreases, the two peaks of the equivalent pore size distribution gradually approach each other, resulting in a single-peak feature. A porous wick with a single-peak equivalent pore size distribution possesses better capillary pumping performance.
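The capillary pumping advantage of a finer, more uniform pore distribution can be illustrated with the Young-Laplace relation, a standard estimate rather than this paper's measurement method; the fluid and pore values below are illustrative.

```python
import math

def capillary_pressure(sigma, theta_deg, r_pore):
    """Maximum capillary pumping pressure [Pa] of a wick pore from the
    Young-Laplace relation: dP = 2 * sigma * cos(theta) / r_pore."""
    return 2.0 * sigma * math.cos(math.radians(theta_deg)) / r_pore

# Water (sigma ~ 0.072 N/m) in a well-wetting wick (theta ~ 0 deg):
# halving the equivalent pore radius doubles the available pumping head,
# which is why a finer, single-peak pore distribution pumps better.
p_large = capillary_pressure(0.072, 0.0, 10e-6)  # 10 um pores
p_small = capillary_pressure(0.072, 0.0, 5e-6)   # 5 um pores
print(p_large, p_small)  # roughly 14.4 kPa vs 28.8 kPa
```

In a real wick this gain trades off against permeability, which is why the pore-structure tuning reported in the abstract matters.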
A novel approach of manufacturing Nickel Wicks for loop heat pipes ...
Indian Academy of Sciences (India)
size is a matter of concern for these bi-modal wicks, as the authors reported that the pore size in the sintered … powder cannot be injected alone; because of its very high viscosity, a binder is mixed with the powder to decrease the … Huang X and Franchi G 2008 Design and fabrication of hybrid bi-modal wick structure for heat pipe.
Wick rotations of solutions to the minimal surface equation, the zero ...
Indian Academy of Sciences (India)
Figure 1 (from left to right): the doubly periodic Scherk minimal surface, the Scherk-type zero mean curvature surface in [19], and the corresponding solution to (3). In general, solutions to the equations (1), (2) and (3) are related by changing parameters called Wick rotations. In 1954, the physicist Wick [25] argued that one is allowed to …
78 FR 62676 - Anthony E. Wicks, M.D. Decision and Order
2013-10-22
… DEPARTMENT OF JUSTICE, Drug Enforcement Administration: Anthony E. Wicks, M.D.; Decision and Order. … Administration, issued an Order to Show Cause to Anthony E. Wicks, M.D. (Applicant), of Tampa, Florida. … See 28 CFR 0.100(b). “[T]hese factors are considered in the disjunctive.” Robert A. Leslie, 68 FR …
DFR Perturbative Quantum Field Theory on Quantum Space Time, and Wick Reduction
Piacitelli, Gherardo
We discuss the perturbative approach à la Dyson to a quantum field theory with nonlocal self-interaction :φ⋆···⋆φ, according to Doplicher, Fredenhagen and Roberts (DFR). In particular, we show that the Wick reduction of nonlocally time-ordered products of Wick monomials can be performed as usual, and we discuss a very simple Dyson diagram.
Porous Foam Based Wick Structures for Loop Heat Pipes
Silk, Eric A.
2012-01-01
As part of an effort to identify cost-efficient fabrication techniques for Loop Heat Pipe (LHP) construction, NASA Goddard Space Flight Center's Cryogenics and Fluids Branch collaborated with the U.S. Naval Academy's Aerospace Engineering Department in Spring 2012 to investigate the viability of carbon foam as a wick material within LHPs. The carbon foam was manufactured by ERG Aerospace and machined to geometric specifications at the U.S. Naval Academy's Materials, Mechanics and Structures Machine Shop. NASA GSFC's Fractal Loop Heat Pipe (developed under SBIR contract #NAS5-02112) was used as the validation LHP platform. In a horizontal orientation, the FLHP system demonstrated a heat flux of 75 Watts per square centimeter with deionized water as the working fluid. Also, no failed start-ups occurred during the 6-week performance testing period. The success of this study validated that foam can be used as a wick structure. Furthermore, given the COTS status of foam materials, this study is one more step towards development of a low-cost LHP.
Gravitationally Driven Wicking for Enhanced Condensation Heat Transfer.
Preston, Daniel J; Wilke, Kyle L; Lu, Zhengmao; Cruz, Samuel S; Zhao, Yajing; Becerra, Laura L; Wang, Evelyn N
2018-04-05
Vapor condensation is routinely used as an effective means of transferring heat or separating fluids. Filmwise condensation is prevalent in typical industrial-scale systems, where the condensed fluid forms a thin liquid film due to the high surface energy associated with many industrial materials. Conversely, dropwise condensation, where the condensate forms discrete liquid droplets which grow, coalesce, and shed, results in an improvement in heat transfer performance of an order of magnitude compared to filmwise condensation. However, current state-of-the-art dropwise technology relies on functional hydrophobic coatings, for example, long-chain fatty acids or polymers, which are often not robust and therefore undesirable in industrial conditions. In addition, low-surface-tension fluid condensates, such as hydrocarbons, pose a unique challenge because common hydrophobic condenser coatings used to shed water (with a surface tension of 73 mN/m) often do not repel fluids with lower surface tensions. This work demonstrates enhanced condensation heat transfer using gravitationally driven flow through a porous metal wick, which takes advantage of the condensate's affinity to wet the surface and also eliminates the need for condensate-phobic coatings. The condensate-filled wick has a lower thermal resistance than the fluid film observed during filmwise condensation, resulting in an improved heat transfer coefficient of up to an order of magnitude and comparable to that observed during dropwise condensation. The improved heat transfer realized by this design presents the opportunity for significant energy savings in natural gas processing, thermal management, heating and cooling, and power generation.
Wicking and spreading of water droplets on nanotubes.
Ahn, Ho Seon; Park, Gunyeop; Kim, Joonwon; Kim, Moo Hwan
2012-02-07
Recently, there has been intensive research on the use of nanotechnology to improve the wettability of solid surfaces. It is well-known that nanostructures can improve the wettability of a surface, and this is a very important safety consideration in regard to the occurrence of boiling crises during two-phase heat transfer, especially in the operation of nuclear power plant systems. Accordingly, there is considerable interest in wetting phenomena on nanostructures in the field of nuclear heat transfer. Much of the latest research on liquid absorption on a surface with nanostructures indicates that liquid spreading is generated by capillary wicking. However, there has been comparatively little research on how capillary forces affect liquid spreading on a surface with nanotubes. In this paper, we present a visualization of liquid spreading on a zircaloy surface with nanotubes, and establish a simple quantitative method for measuring the amount of water absorbed by the nanotubes. We successfully describe liquid spreading on a two-dimensional surface via one-dimensional analysis. As a result, we are able to postulate a relationship between liquid spreading and capillary wicking in the nanotubes.
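The relationship between liquid spreading and capillary wicking discussed above is commonly estimated with the classical Lucas-Washburn model, sketched here with illustrative parameter values (not the paper's measured geometry or analysis).

```python
import math

def washburn_length(t, sigma, r, theta_deg, mu):
    """Lucas-Washburn capillary penetration length [m] at time t [s]:
        L(t) = sqrt(sigma * r * cos(theta) * t / (2 * mu))
    for wicking into a channel/porous layer of effective radius r."""
    return math.sqrt(sigma * r * math.cos(math.radians(theta_deg)) * t / (2.0 * mu))

# Water wicking into ~50 nm nanotube channels (illustrative numbers):
sigma, mu = 0.072, 1.0e-3  # surface tension [N/m], viscosity [Pa.s]
for t in (0.001, 0.01, 0.1):
    print(t, washburn_length(t, sigma, r=25e-9, theta_deg=20.0, mu=mu))
# spreading distance grows like sqrt(t), the classic wicking signature
```

Quadrupling the elapsed time doubles the penetration length, the square-root-of-time signature typically used to identify capillary wicking in spreading experiments.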
The Wicked Character of Psychosocial Risks: Implications for Regulation
Directory of Open Access Journals (Sweden)
Anne Helbo Jespersen
2016-10-01
Full Text Available Psychosocial risks constitute a significant problem in most workplaces, and they are generally considered more difficult to regulate than many other occupational health and safety risks. This article investigates the challenges of regulating psychosocial risks in the workplace. The difficulties lie in the particular nature of psychosocial risks: their complexity, uncertainty, value, and power divergences. Psychosocial risks therefore resemble ‘wicked problems’, typically characterized by unclear cause-effect relationships and uncertain solutions. We use the ‘wicked problems’ concept to show how workplace regulation, and particularly the enforcement in the form of inspection and audits of certified occupational health and safety management systems, face challenges in assessing psychosocial risks and the strategies used by regulators to overcome these challenges. While regulation has become more effective in several countries, a better understanding of the nature of the challenges is still needed. It is necessary to accept the uncertain nature of psychosocial risks in the search for more efficient regulation. Achieving more effective regulation should involve stakeholders in the workplace who deal with the prerogatives of management, and should help develop the competencies of the inspectors and auditors in the field.
International Nuclear Information System (INIS)
Nagaraj, G.; Hanumantha Rao, A.; Gopalachari, N.C.
1976-01-01
Wick feeding and leaf smearing methods have been compared for their relative efficiencies in root distribution studies with the tobacco plant. The applied radioactivity equilibrates within 3 days in the tobacco plant. Root sections of plants fed through the wick contained a higher quantity of the radioactivity than those of the leaf-smeared ones. Because of the ease of application and better translocation of applied radioactivity, the wick-feeding method appears to have good utility for root distribution studies with hard-stemmed plants. (author)
Searching for Lee-Wick Gauge Bosons at the LHC
Energy Technology Data Exchange (ETDEWEB)
Rizzo, Thomas G.
2007-04-30
In an extension of the Standard Model(SM) based on the ideas of Lee and Wick, Grinstein, O'Connell and Wise have found an interesting way to remove the usual quadratically divergent contributions to the Higgs mass induced by radiative corrections. Phenomenologically, the model predicts the existence of Terascale, negative-norm copies of the usual SM fields with rather unique properties: ghost-like propagators and negative decay widths, but with otherwise SM-like couplings. The model is both unitary and causal on macroscopic scales. In this paper we examine whether or not such states with these unusual properties can be uniquely identified as such at the LHC. We find that in the extended strong and electroweak gauge boson sector of the model, which is the simplest one to analyze, such an identification can be rather difficult. Observation of heavy gluon-like resonances in the dijet channel offers the best hope for this identification.
Salt creep and wicking counteract hydrophobic organic structures
Burkhardt, Juergen
2017-04-01
The hydrophobic nature of many biological and edaphic surfaces prevents wetting and water movement. Even small amounts of salts and other hygroscopic material (e.g. from aerosol deposition on leaf surfaces) may change this situation. Salts attract minute amounts of liquid water to the surface and may dynamically expand on the original surface by creeping (evaporation-driven extension of crystals). Creeping is governed by fluctuations of relative humidity and increases with time. At high, almost saturated concentrations of the salt solutions, ions from the chaotropic side of the Hofmeister series creep most efficiently. Once established, continuous salt connections may act to channel small water flows along the surface; they may act as wicks if water is removed from one side by evaporation. Stomata may in this way become 'leaky' through the accumulation of hygroscopic aerosols on the leaf surface.
Carbon Nanotube Bonding Strength Enhancement Using Metal "Wicking" Process
Lamb, James L.; Dickie, Matthew R.; Kowalczyk, Robert S.; Liao, Anna; Bronikowski, Michael J.
2012-01-01
Carbon nanotubes grown from a surface typically have poor bonding strength at the interface. A process has been developed for adding a metal coat to the surface of carbon nanotubes (CNTs) through a wicking process, which could lead to enhanced bonding strength at the interface. This process involves merging CNTs with indium as a bump-bonding enhancement. Classical capillary theory would not normally allow materials that do not wet carbon or graphite to be drawn into the spacings by capillary action, because the contact angle is greater than 90 degrees. However, capillary action can be induced through JPL's ability to fabricate oriented CNT bundles to desired spacings, and through the use of deposition techniques and temperature to control the size and mobility of the liquid metal streams and associated reservoirs. A reflow and plasma cleaning process has also been developed and demonstrated to remove indium oxide and to obtain smooth coatings on the CNT bundles.
Wicked problems: a value chain approach from Vietnam's dairy product.
Khoi, Nguyen Viet
2013-12-01
In the past few years, the dairy industry has become one of the fastest-growing sectors in the packaged food industry of Vietnam. However, the value added created by different activities in the value chain of Vietnam's dairy sector is distributed unequally. In production activities, the dairy farmers gain a low value-added rate due to high input costs, whereas the processing activities, which are managed by big companies, generate high profitability, and Vietnamese consumers seem to have few choices due to the lack of dairy companies in the market. These wicked problems have caused unsustainable development in the dairy value chain of Vietnam. This paper therefore maps and analyzes the value chain of the dairy industry in Vietnam. It also assesses the value created in each activity in order to suggest solutions for the sustainable development of Vietnam's dairy industry. M10, M11.
Evaporation effect on two-dimensional wicking in porous media.
Benner, Eric M; Petsev, Dimiter N
2018-03-15
We analyze the effect of evaporation on expanding capillary flow for losses normal to the plane of a two-dimensional porous medium using the potential flow theory formulation of the Lucas-Washburn method. Evaporation induces a finite steady-state liquid flux on capillary flows into fan-shaped domains which is significantly greater than the flux into media of constant cross section. We introduce the evaporation-capillary number, a new dimensionless quantity, which governs the frontal motion when multiplied by the scaled time. This governing product divides the wicking behavior into the simple regimes of capillary-dominated flow and evaporative steady state, as well as the intermediate regime of evaporation-influenced capillary-driven motion. We also show that flow dimensionality and evaporation reduce the propagation rate of the wet front relative to the Lucas-Washburn law. Copyright © 2017 Elsevier Inc. All rights reserved.
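The classic Lucas-Washburn law that this abstract takes as its baseline has a simple closed form. As a hedged sketch (parameter values are illustrative choices for water in a ~10 µm pore, not taken from the paper; the paper's evaporation-capillary number is not modeled here):

```python
import math

def washburn_front(t, gamma=0.072, r=1e-5, theta=0.0, mu=1e-3):
    """Lucas-Washburn wetting-front position l(t) = sqrt(gamma * r * cos(theta) * t / (2 * mu)).

    gamma: surface tension [N/m], r: pore radius [m], theta: contact angle [rad],
    mu: dynamic viscosity [Pa s]. Defaults are illustrative assumptions (water, 10 um pore).
    """
    return math.sqrt(gamma * r * math.cos(theta) * t / (2.0 * mu))

# The front advances as sqrt(t): quadrupling the time doubles the distance.
l1 = washburn_front(1.0)
l4 = washburn_front(4.0)
```

Evaporation and flow into a fan-shaped (expanding) domain both slow the front relative to this square-root-of-time scaling, which is the abstract's central point.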
Super-renormalizable or finite Lee–Wick quantum gravity
Directory of Open Access Journals (Sweden)
Leonardo Modesto
2016-08-01
Full Text Available We propose a class of multidimensional higher derivative theories of gravity without extra real degrees of freedom besides the graviton field. The propagator exhibits the usual real graviton pole at k²=0 and extra complex-conjugate poles that do not contribute to the absorptive part of the physical scattering amplitudes. Indeed, they may consistently be excluded from the asymptotic observable states of the theory by making use of the Lee–Wick and Cutkosky, Landshoff, Olive and Polkinghorne prescription for the construction of a unitary S-matrix. Therefore, the spectrum consists of the graviton and short-lived elementary unstable particles that we have named “anti-gravitons” because of their repulsive contribution to the gravitational potential at short distance. However, another interpretation of the complex-conjugate pairs is proposed, based on Calmet's suggestion: they could be understood as black hole precursors, long established in the classical theory. Since the theory is CPT invariant, the complex conjugate of the micro black hole precursor can be interpreted as a white hole precursor, consistently with 't Hooft's complementarity principle. It is proved that the quantum theory is super-renormalizable in even dimensions, i.e. only a finite number of divergent diagrams survive, and finite in odd dimensions. Furthermore, turning on a local potential of the Riemann tensor, we can make the theory finite in any dimension. The singularity-free Newtonian gravitational potential is explicitly computed for a range of higher derivative theories. Finally, we propose a new super-renormalizable or finite Lee–Wick standard model of particle physics.
Vadose Zone Soil Moisture Wicking Using Super Absorbent Polymers
Energy Technology Data Exchange (ETDEWEB)
Oostrom, Martinus; Smoot, Katherine V.; Wietsma, Thomas W.; Truex, Michael J.; Benecke, Mark W.; Chronister, Glen B.
2012-11-19
Super-absorbent polymers (SAPs) have the potential to remove water and associated contaminants from unsaturated sediments in the field. Column and flow cell experiments were conducted to test the ability of four types of SAPs to remove water from unsaturated porous media. Column experiments, with emplacement of a layer of polymer on top of unsaturated porous media, showed the ability of the SAPs to extract up to 80% of the initially emplaced water against gravity into the sorbent over periods of up to four weeks. In column experiments where the sorbent was emplaced between layers of unsaturated porous media, gel formation was observed at both sorbent-porous medium interfaces. The extraction percentages over four weeks of contact time were similar for both column configurations, and no obvious differences were observed among the four tested SAPs. Two different flow cells were used to test the wicking behavior of SAPs in two dimensions using three configurations. The largest removal percentages occurred for the horizontal sorbent layer configuration, which has the largest sorbent-porous medium interfacial area. In a larger flow cell, a woven nylon “sock” was packed with sorbent and subsequently placed between perforated metal plates, mimicking a well configuration. After one week of contact time the sock was removed and replaced by a fresh sock. The results of this experiment showed that the sorbent was able to continuously extract water from the porous media, although the rate decreased over time. The declining yield during both periods is associated with the sharp reduction in water saturation and relative permeability near the sorbent. It was also observed that the capillary pressure continued to increase over the total contact time of 14 days, indicating that the sorbent remained active over that period. This work has demonstrated the potential of soil moisture wicking using SAPs at the proof-of-principle level.
DEFF Research Database (Denmark)
Edgeman, Rick; Wu, Zhaohui
Grand global challenges include wicked sustainability problems associated with human influence. Among these are climate change in relation to the increased atmospheric presence of CO2 and methane, global warming, and the increasing intensity and incidence of extreme weather events, drought...
Effect of Bamboo Viscose on the Wicking and Moisture Management Properties of Gauze
Akbar, Abdul R.; Su, Siwei; Amjad, Bilal; Cai, Yingjie; Lin, Lina
2017-12-01
Bamboo viscose (regenerated cellulose) fibers were used to examine the effect of their absorbency on the wicking and moisture management properties of gauzes. Bamboo viscose and cotton fibers were spun into five yarn samples with different fiber proportions by ring spinning, and fifteen gauze samples were made from these yarns. The gauze samples were subjected to a wicking test to check wicking ability, and a water vapor transmission test was applied to check the vapor transmission rate. These tests were applied to measure the effectiveness of bamboo viscose, cotton, and blended gauze samples in wound healing. Pure bamboo gauzes and gauzes with a high content of bamboo fiber, i.e. 75B:25C and 50B:50C, showed better wicking and vapor transmission properties. This makes gauzes with high bamboo viscose content suitable for wound care applications because of their moisture absorbency.
Directory of Open Access Journals (Sweden)
Daniel Daia
2013-05-01
Full Text Available The paper describes the results obtained from a theoretical calculation of the kinematics of the wicket gates for the correlations ao=f(α); aor=ao/Do(α); S=f(α), and proposes analytical formulas for the ao=f(α) correlation, applicable to wicket gates with 16 or 24 blades and asymmetrical hydrofoils; numerical results compared with graphical values are also presented.
Antiseptic wick: does it reduce the incidence of wound infection following appendectomy?
LENUS (Irish Health Repository)
McGreal, Gerald T
2012-02-03
The role of prophylactic antibiotics is well established for contaminated wounds, but the use of antiseptic wound wicks is controversial. The aim of this work was to study the potential use of wound wicks to reduce the rate of infection following appendectomy. This prospective randomized controlled clinical trial was conducted at a university hospital in the department of surgery. The subjects were patients undergoing appendectomy for definite acute appendicitis. They were randomized by computer to primary subcuticular wound closure or use of an antiseptic wound wick. For the latter, ribbon gauze soaked in povidone-iodine was placed between interrupted nylon skin sutures. Wicks were soaked daily and removed on the fourth postoperative day. All patients received antibiotic prophylaxis. They were reviewed while in hospital and 4 weeks following operation for evidence of wound infection. The main outcome measures were wound infection, wound discomfort, and cosmetic result. The overall wound infection rate was 8.6% (15/174). In patients with wound wicks it was 11.6% (10/86) compared to 5.6% (5/88) in those whose wounds were closed by subcuticular sutures (p = NS). We concluded that the use of wound wicks was not associated with decreased wound infection rates following appendectomy. Subcuticular closure is therefore appropriate in view of its greater convenience and safety.
Another look at the identity of the ‘wicked woman’ in 4Q184
Directory of Open Access Journals (Sweden)
Ananda Geyser-Fouché
2016-11-01
Full Text Available In this study, I take another look at the possible identity of the ‘wicked woman’ in 4Q184. Although a number of scholars have attempted to identify the ‘wicked woman’, I would like to examine two other possibilities that (as far as I know) have not been discussed yet. The first possibility is that she can be seen as a metaphor for the city of Jerusalem. This possibility is inspected by comparing the ‘wicked terminology’ that was used to describe the ‘wicked priest(s)’ in the Habakkuk commentary with the ‘wicked terminology’ that was used in 4Q184, as well as by studying existing traditions in the Old Testament where Jerusalem was portrayed as a woman or wife. The other option is that the ‘wicked woman’ is a metaphor for foreign wisdom, specifically in the form of Hellenism and Greek philosophy or Hellenistic (non-Israelite) diviners. The fact that 4Q184 refers to ‘teaching’ and warns against her influence (this kind of wisdom), and that she can lead righteous and upright people (not foolish young people) astray, strongly suggests that the Yaḥad is being warned not to get diverted by this ‘upcoming culture’ that seems so attractive.
Web 2.0 Solutions to Wicked Climate Change Problems
Directory of Open Access Journals (Sweden)
Alanah Kazlauskas
2010-01-01
Full Text Available One of the most pressing ‘wicked problems’ facing humankind is climate change together with its many interrelated environmental concerns. The complexity of this set of problems can be overwhelming, as there is such diversity among both the interpretations of the scientific evidence and the viability of possible solutions. Among the social technologies associated with the second generation of the Internet, known as Web 2.0, there are tools that allow people to communicate, coordinate and collaborate in ways that reduce their carbon footprint, offering a potential to become part of the climate change solution. However, the way forward is not obvious or easy, as Web 2.0, while readily accepted in the chaotic social world, is often treated with suspicion in the more ordered world of business and government. This paper applies a holistic theoretical sense-making framework to research and practice on potential Web 2.0 solutions to climate change problems. The suite of issues, activities and tools involved is viewed as an ecosystem where all elements are dynamic and inter-related. Through such innovative thinking the Information Systems community can make a valuable contribution to a critical global problem and hence find a new relevance as part of the solution.
The Wicked Problem of the Intersection between Supervision and Evaluation
Directory of Open Access Journals (Sweden)
Ian M. METTE
2017-03-01
Full Text Available The purpose of this research was to explore how principals in eight high-functioning elementary schools in one American school district balanced teacher supervision and evaluation in their role as instructional leaders. Using the theoretical framework of ‘wicked problems’ to unpack the circularity used to problematize teacher supervision and evaluation, the findings analyse how the elementary principals in these eight buildings acknowledge the tensions and conflicts between supervision and evaluation, specifically as they relate to improving teacher instruction. Specifically, the results of this study highlight not only the differences between supervision and evaluation, but also the intersection between the two functions, as well as how high-performing elementary school principals serve as instructional coaches rather than managers of teachers. While the two functions of supervision and evaluation are inherently different, it is the acknowledgement of the intersection between the two functions that can allow building principals to progress as instructional coaches who can better develop human resources and create higher-functioning school systems. Overall, this study points toward the importance of elementary principals having the instructional leadership skills to differentiate the supervision and professional development needs of teachers, which in turn influences the evaluation of a teacher in her/his respective career.
Wicking and flooding of liquids on vertical porous sheets
Kim, Seong Jin; Choi, Jin Woo; Moon, Myoung-Woon; Lee, Kwang-Ryeol; Chang, Young Soo; Lee, Dae-Young; Kim, Ho-Young
2015-03-01
When one brings a wet paintbrush into contact with a vertical watercolor paper, the paint may wick into the porous sheet completely or run down to ruin the art. We study a simple model of this spreading dynamics of liquids on hydrophilic porous sheets under the effects of gravity, using a capillary as a liquid source and thin fabrics of non-woven polyethylene terephthalate. Upon finding the maximum flow rate, Qw, that can be absorbed into the fabric, we show that the model can be used to obtain an estimate of the in-plane permeability of fabrics in a simpler manner than the conventional schemes. The shape of a wetting area that grows when the flow rate exceeds Qw, leading to rivulet formation, is also given theoretically. The nose shape of the wetting front is shown to be time-invariant, while its profile depends on the properties of the liquid and the fabric. This study can be applied to understand and improve the liquid absorption behavior of hygiene items; heating, ventilation, and air-conditioning equipment; and fuel cell membranes, in addition to elucidating the mundane activity of painting.
Tackling wicked problems: how theories of agency can provide new insights.
Varpio, Lara; Aschenbrener, Carol; Bates, Joanna
2017-04-01
This paper reviews why and how theories of agency can be used as analytical lenses to help health professions education (HPE) scholars address our community's wicked problems. Wicked problems are those that resist clear problem statements, defy traditional analysis approaches, and refuse definitive resolution (e.g. student remediation, assessments of professionalism, etc.). We illustrate how theories of agency can provide new insights into such challenges by examining the application of these theories to one particular wicked problem in HPE: interprofessional education (IPE). After searching the HPE literature and finding that theories of agency had received little attention, we borrowed techniques from narrative literature reviews to search databases indexing a broad scope of disciplines (i.e. ERIC, Web of Science, Scopus, MEDLINE and PubMed) for publications (1994-2014) that: (i) examined agency, or (ii) incorporated an agency-informed analytical perspective. The lead author identified the theories of agency used in these articles, and reviewed the texts on agency cited therein and the original sources of each theory. We identified 10 theories of agency that we considered to be applicable to HPE's wicked problems. To select a subset of theories for presentation in this paper, we discussed each theory in relation to some of HPE's wicked problems. Through debate and reflection, we unanimously agreed on the applicability of a subset of theories for illuminating HPE's wicked problems. This subset is described in this paper. We present four theories of agency: Butler's post-structural formulation; Giddens' sociological formulation; cultural historical activity theory's formulation, and Bandura's social cognitive psychology formulation. We introduce each theory and apply each to the challenges of engaging in IPE. Theories of agency can inform HPE scholarship in novel and generative ways. Each theory offers new insights into the roots of wicked problems and means for
Zijp, M.C.; Posthuma, L.; Wintersen, A.; Devilee, J.; Swartjes, F.A.
2016-01-01
This paper introduces Solution-focused Sustainability Assessment (SfSA), provides practical guidance formatted as a versatile process framework, and illustrates its utility for solving a wicked environmental management problem. Society faces complex and increasingly wicked environmental problems for
Deposition of sol-gel sensor spots by nanoimprint lithography and hemi-wicking
DEFF Research Database (Denmark)
Mikkelsen, Morten Bo Lindholm; Marie, Rodolphe; Hansen, Jan H.
2011-01-01
In hemi-wicking, a deposited droplet spreads, guided by the posts, to automatically fill the imprinted structure, without being sensitive to alignment as long as it is deposited inside the patterned area. Hemi-wicking is an effective method to immobilize a low-viscosity liquid material in well-defined spots on a surface when conventional methods such as screen- or stamp-printing do not work. On length scales of the order of the microstructure period, surface tension will govern the shape of the liquid-air interface, and the liquid will climb up the pillars to keep a fixed contact angle with the sidewalls. The surface-to-volume ratio is therefore constant all over the surface of the liquid spread by hemi-wicking, when considering length scales larger than the microstructure period. Material redistribution caused by solvent evaporation, i.e. the "coffee ring effect", can therefore be avoided because the evaporation rate does ...
Capillary pressure and liquid wicking in three-dimensional nonwoven materials
Mao, N.; Russell, S. J.
2008-08-01
The capillary pressure and liquid wicking in fibrous nonwoven materials depend on the structural arrangement of fibers in three dimensions, which is influenced by the method and conditions used to manufacture the material. By adapting the hydraulic radius mechanism and drag force theory, a model is established for predicting the directional capillary pressure in three-dimensional nonwoven materials. As a case study, equations to predict the velocity of liquid wicking in a one-dimensional wicking strip test for nonwovens having a three-dimensional fiber orientation distribution are given, based on the newly established capillary pressure model. These models and equations are based on measurable structural parameters, including the fiber orientation distribution, fiber diameter, and fabric porosity.
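The hydraulic-radius reasoning mentioned above can be illustrated with a textbook sketch. This is not the paper's directional three-dimensional model; the cylindrical-fiber form of the hydraulic radius below is a standard assumption introduced here for illustration only:

```python
import math

def capillary_pressure(gamma, theta_deg, fiber_diameter, porosity):
    """Capillary pressure in a fibrous bed via a hydraulic-radius argument.

    For cylindrical fibers of diameter d_f, the surface area per unit solid
    volume is 4/d_f, so the hydraulic radius of the pore space is
        r_h = porosity * d_f / (4 * (1 - porosity)),
    and the capillary pressure is P_c = gamma * cos(theta) / r_h.
    gamma in N/m, theta_deg in degrees, fiber_diameter in m.
    """
    r_h = porosity * fiber_diameter / (4.0 * (1.0 - porosity))
    return gamma * math.cos(math.radians(theta_deg)) / r_h

# Finer fibers give smaller pores and hence a larger capillary pressure
# (water-like liquid, fully wetting, 80% porosity -- illustrative values).
p_fine = capillary_pressure(0.072, 0.0, 10e-6, 0.8)
p_coarse = capillary_pressure(0.072, 0.0, 20e-6, 0.8)
```

With the hydraulic radius linear in fiber diameter, halving the fiber diameter at fixed porosity doubles the predicted capillary pressure, which is the qualitative dependence on fiber diameter and porosity the abstract refers to.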
Dentoni, D.; Ross, R.
2013-01-01
Part Two of our Special Issue on wicked problems in agribusiness, “Towards a Theory of Managing Wicked Problems through Multi-Stakeholder Engagements: Evidence from the Agribusiness Sector,” will contribute to four open questions in the broader fields of management and policy: why, when, which and
A novel approach of manufacturing Nickel Wicks for loop heat pipes ...
Indian Academy of Sciences (India)
Sintered nickel powder is proposed to be used as porous wicks in loop heat pipes for space applications such as satellites and spacecraft. In this work, the manufacturing procedure for ...
Inequality--"Wicked Problems", Labour Market Outcomes and the Search for Silver Bullets
Keep, Ewart; Mayhew, Ken
2014-01-01
In recent years concerns about inequality have been growing in prominence within UK policy debates. The many causes of inequality of earnings and income are complex in their interactions and their tendency to reinforce one another. This makes inequality an intractable or "wicked" policy problem, particularly within a contemporary context…
Periodic solutions of Wick-type stochastic Korteweg–de Vries ...
Indian Academy of Sciences (India)
2016-09-20
Keywords. Wick-type stochastic Korteweg–de Vries equation; Hermite transform; Kudryashov method; white noise functionals. ... Using the homogeneous balance and white noise analysis method, Xie [14] obtained positonic solutions for ... method [16], homotopy perturbation method [17], F-expansion method [18] ...
Leadership Development in Governments of the United Arab Emirates: Re-Framing a Wicked Problem
Mathias, Megan
2017-01-01
Developing the next generation of leaders in government is seen as a strategic challenge of national importance in the United Arab Emirates (UAE). This article examines the wicked nature of the UAE's leadership development challenge, identifying patterns of complexity, uncertainty, and divergence in the strategic intentions underlying current…
Periodic solutions of Wick-type stochastic Korteweg–de Vries ...
Indian Academy of Sciences (India)
2016-09-20
... illustrations in two- and three-dimensional plots of the obtained solutions depending on time and space are also given with white noise functionals. Keywords. Wick-type stochastic Korteweg–de Vries equation; Hermite transform; Kudryashov method; white noise functionals. PACS Nos 02.30.lk; 02.30.Jr.
Cook, Kristin
2015-01-01
In their work with teachers and community members in Kenya, Cassie Quigley and colleagues seek to localize the "wicked problems" (Churchman in "Manag Sci" 14(4):141-142, 1967) of environmental sustainability through the use of decolonizing methods to challenge top-down approaches to solution-generation in the bountiful yet…
Small Schools in a Big World: Thinking about a Wicked Problem
Corbett, Michael; Tinkham, Jennifer
2014-01-01
The position of small rural schools is precarious in much of rural Canada today. What is to be done about small schools in rural communities which are often experiencing population decline and aging, economic restructuring, and the loss of employment and services? We argue this issue is a classic "wicked" policy problem. Small schools…
Periodic solutions of Wick-type stochastic Korteweg–de Vries ...
Indian Academy of Sciences (India)
Nonlinear stochastic partial differential equations have a wide range of applications in science and engineering. Finding exact solutions of the Wick-type stochastic equation will be helpful in the theories and numerical studies of such equations. In this paper, the Kudryashov method together with the Hermite transform ...
Women, Leadership, and Power Revisiting the Wicked Witch of the West
Kruse, Sharon D.; Prettyman, Sandra Spickard
2008-01-01
By examining the cultural images present in the popular musical "Wicked", cultural norms and biases toward women in leadership and women's leadership practices are explored. The discussion rests on conceptions of male and female leadership "styles", how power is obtained and utilised within organisational settings and how resistance and…
Police Self-Deployment at Critical Incidents: A Wicked Problem or a Part of the Solution
2017-09-01
2017. http://www.caloes.ca.gov/cal-oes-divisions/law-enforcement/mutual-aid-system. Camillus, John. Wicked Strategies: How Companies Conquer ... policy and training recommendations, including the suggestions that law enforcement embrace, rather than prohibit, self-deployment and that federally ...
Lönngren, Johanna; Ingerman, Åke; Svanström, Magdalena
2017-01-01
Wicked sustainability problems (WSPs) are an important and particularly challenging type of problem. Science and engineering education can play an important role in preparing students to deal with such problems, but current educational practice may not adequately prepare students to do so. We address this gap by providing insights related to…
van Woezik, Anne F G; Braakman-Jansen, Louise M A; Kulyk, Olga; Siemons, Liseth; van Gemert-Pijnen, Julia E W C
2016-01-01
Infection prevention and control can be seen as a wicked public health problem as there is no consensus regarding problem definition and solution, multiple stakeholders with different needs and values are involved, and there is no clear end-point of the problem-solving process. Co-creation with stakeholders has been proposed as a suitable strategy to tackle wicked problems, yet little information and no clear step-by-step guide exist on how to do this. The objectives of this study were to develop a guideline to assist developers in tackling wicked problems using co-creation with stakeholders, and to apply this guideline to practice with an example case in the field of infection prevention and control. A mixed-method approach consisting of the integration of both quantitative and qualitative research was used. Relevant stakeholders from the veterinary, human health, and public health sectors were identified using a literature scan, expert recommendations, and snowball sampling. The stakeholder salience approach was used to select key stakeholders based on 3 attributes: power, legitimacy, and urgency. Key values of stakeholders (N = 20) were derived by qualitative semi-structured interviews and quantitatively weighted and prioritized using an online survey. Our method showed that stakeholder identification and analysis are prerequisites for understanding the complex stakeholder network that characterizes wicked problems. A total of 73 stakeholders were identified of which 36 were selected as potential key stakeholders, and only one was seen as a definite stakeholder. In addition, deriving key stakeholder values is a necessity to gain insights into different problem definitions, solutions and needs stakeholders have regarding the wicked problem. Based on the methods used, we developed a step-by-step guideline for co-creation with stakeholders when tackling wicked problems. The mixed-methods guideline presented here provides a systematic, transparent method to
Wicking: a rapid method for manually inserting ion channels into planar lipid bilayers.
Costa, Justin A; Nguyen, Dac A; Leal-Pinto, Edgar; Gordon, Ronald E; Hanss, Basil
2013-01-01
The planar lipid bilayer technique has a distinguished history in electrophysiology but is arguably the most technically difficult and time-consuming method in the field. Behind this is a lack of experimental consistency between laboratories, the challenges associated with painting unilamellar bilayers, and the reconstitution of ion channels into them. While there has been a trend towards automation of this technique, there remain many instances where manual bilayer formation and subsequent membrane protein insertion is both required and advantageous. We have developed a comprehensive method, which we have termed "wicking", that greatly simplifies many experimental aspects of the lipid bilayer system. Wicking allows one to manually insert ion channels into planar lipid bilayers in a matter of seconds, without the use of a magnetic stir bar or the addition of other chemicals to monitor or promote the fusion of proteoliposomes. We used the wicking method in conjunction with a standard membrane capacitance test and a simple method of proteoliposome preparation that generates a heterogeneous mixture of vesicle sizes. To determine the robustness of this technique, we selected two ion channels that have been well characterized in the literature: CLIC1 and α-hemolysin. When reconstituted using the wicking technique, CLIC1 showed biophysical characteristics congruent with published reports from other groups; and α-hemolysin demonstrated Type A and B events when threading single stranded DNA through the pore. We conclude that the wicking method gives the investigator a high degree of control over many aspects of the lipid bilayer system, while greatly reducing the time required for channel reconstitution.
'Wicked' ethics: Compliance work and the practice of ethics in HIV research.
Heimer, Carol A
2013-12-01
Using ethnographic material collected between 2003 and 2007 in five HIV clinics in the US, South Africa, Uganda, and Thailand, this article examines "official ethics" and "ethics on the ground." It compares the ethical conundrums clinic staff and researchers confront in their daily work as HIV researchers with the dilemmas officially identified as ethical issues by bioethicists and people responsible for ethics reviews and compliance with ethics regulations. The tangled relation between ethical problems and solutions invites a comparison to Rittel and Webber's "wicked problems." Official ethics' attempts to produce universal solutions often make ethics problems even more wickedly intractable. Ethics on the ground is in part a reaction to this intractability. Copyright © 2012 Elsevier Ltd. All rights reserved.
DEFF Research Database (Denmark)
Edgeman, Rick; Eskildsen, Jacob Kjær; Blahova, Michaela
Purpose: This present age is one that is human-made, rather than defined by geologic strata. As such it has come to be called the Anthropocene Age. It is an age fraught with wicked social, ecological, and environmental challenges for which no clear resolutions exist. While many means of warfare… …economies are organizations and half are nations – so that organizational influence, whether for good or bad, can be profound (Fifka and Drabble, 2012). Approach: Enterprise approaches to business excellence, sustainability, resilience and robustness are examined with special attention dedicated to social-ecological innovation that also delivers positive economic impact. Models for social-ecological innovation (SEI) and sustainable enterprise excellence, resilience and robustness (SEER2) are briefly presented prior to their deeper consideration within organizational contexts and in light of wicked global challenges…
Energy Technology Data Exchange (ETDEWEB)
Wolfe, Larry
2009-04-22
Design of Experiments (DoE) studies were developed and performed in an effort to discover and resolve the causes of three different manufacturing issues: large panel voids after Hot Air Solder Leveling (HASL), cable hole locations out of tolerance after lamination, and delamination/solder wicking around flat flex cable circuit lands after HASL. Results from a first DoE indicated large panel voids could be eliminated by removing the pre-HASL cleaning. It also revealed eliminating the pre-HASL bake would not be detrimental when using a hard press pad lamination stackup. A second DoE indicated a reduction in hard press pad stackup lamination pressure reduced panel stretch in the y axis approximately 70%. A third DoE illustrated increasing the pre-HASL bake temperature could reduce delamination/solder wicking when using a soft press pad lamination stackup.
Stochastic Effects for the Reaction-Duffing Equation with Wick-Type Product
Directory of Open Access Journals (Sweden)
Jin Hyuk Choi
2016-01-01
Full Text Available We construct new explicit solutions of the Wick-type stochastic reaction-Duffing equation arising from mathematical physics with the help of the white noise theory and the system technique. Based on these exact solutions, we also discuss the influences of stochastic effects for dynamical behaviors according to functions h1(t), h2(t), and Brownian motion B(t), which are the solitary wave group velocities.
Conformal generally covariant quantum field theory. The scalar field and its Wick products
International Nuclear Information System (INIS)
Pinamonti, N.
2008-06-01
In this paper we generalize the construction of generally covariant quantum theories given in [R. Brunetti, K. Fredenhagen, R. Verch, Commun. Math. Phys. 237, 31 (2003)] to encompass the conformal covariant case. After introducing the abstract framework, we discuss the massless conformally coupled Klein-Gordon field theory, showing that its quantization corresponds to a functor between two certain categories. At the abstract level, the ordinary fields could be thought of as natural transformations in the sense of category theory. We show that the Wick monomials without derivatives (Wick powers) can be interpreted as fields in this generalized sense, provided a non-trivial choice of the renormalization constants is given. A careful analysis shows that the transformation law of Wick powers is characterized by a weight, and it turns out that the sum of fields with different weights breaks the conformal covariance. At this point the picture differs from the one given previously, due to the presence of a bigger group of covariance. It is furthermore shown that the construction does not depend upon the scale μ appearing in the Hadamard parametrix used to regularize the fields. Finally, we briefly discuss some further examples of more involved fields. (orig.)
Household light makes global heat: high black carbon emissions from kerosene wick lamps.
Lam, Nicholas L; Chen, Yanju; Weyant, Cheryl; Venkataraman, Chandra; Sadavarte, Pankaj; Johnson, Michael A; Smith, Kirk R; Brem, Benjamin T; Arineitwe, Joseph; Ellis, Justin E; Bond, Tami C
2012-12-18
Kerosene-fueled wick lamps used in millions of developing-country households are a significant but overlooked source of black carbon (BC) emissions. We present new laboratory and field measurements showing that 7-9% of kerosene consumed by widely used simple wick lamps is converted to carbonaceous particulate matter that is nearly pure BC. These high emission factors increase previous BC emission estimates from kerosene by 20-fold, to 270 Gg/year (90% uncertainty bounds: 110, 590 Gg/year). Aerosol climate forcing on atmosphere and snow from this source is estimated at 22 mW/m² (8, 48 mW/m²), or 7% of BC forcing by all other energy-related sources. Kerosene lamps have affordable alternatives that pose few clear adoption barriers and would provide immediate benefit to user welfare. The net effect on climate is definitively positive forcing as coemitted organic carbon is low. No other major BC source has such readily available alternatives, definitive climate forcing effects, and cobenefits. Replacement of kerosene-fueled wick lamps deserves strong consideration for programs that target short-lived climate forcers.
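As a back-of-envelope check on the abstract's own figures (an inference from the stated numbers, not additional data from the paper), dividing the 270 Gg/yr black carbon estimate by the 7-9% fuel-to-BC conversion fraction gives the implied mass of kerosene burned annually in simple wick lamps:

```python
# Implied kerosene use in simple wick lamps, derived only from the abstract's
# figures: BC emissions of ~270 Gg/yr at a 7-9% kerosene-to-BC conversion.
bc_gg_per_year = 270.0
for frac in (0.07, 0.09):
    kerosene_tg = bc_gg_per_year / frac / 1000.0   # Gg -> Tg
    print(f"{frac:.0%} conversion -> ~{kerosene_tg:.1f} Tg kerosene per year")
```

The arithmetic suggests that roughly 3-4 Tg of lamp kerosene per year is consistent with the stated emission total.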
Variational Monte Carlo Technique
Indian Academy of Sciences (India)
Variational Monte Carlo Technique: Ground State Energies of Quantum Mechanical Systems. Sukanta Deb. Resonance, General Article, August 2014. Keywords: variational methods, Monte Carlo techniques, harmonic oscillators, quantum mechanical systems. Sukanta Deb is an Assistant Professor in the…
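The technique the article describes can be sketched for the 1-D harmonic oscillator: sample |ψ_α(x)|² with a Metropolis walk and average the local energy. For the trial function ψ_α(x) = exp(-αx²) in units ħ = m = ω = 1, the local energy is E_L(x) = α + x²(1/2 − 2α²), and the variational minimum sits at α = 1/2 with E₀ = 1/2. A minimal illustration (a sketch under these assumptions, not the article's code):

```python
import math, random

def local_energy(x, alpha):
    # E_L(x) = -psi''/(2 psi) + V(x) for psi = exp(-alpha x^2), V = x^2/2
    return alpha + x * x * (0.5 - 2.0 * alpha * alpha)

def vmc_energy(alpha, n_steps=200_000, step=1.0, seed=1):
    rng = random.Random(seed)
    x, e_sum = 0.0, 0.0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # Metropolis acceptance with probability |psi(x')|^2 / |psi(x)|^2
        if rng.random() < math.exp(-2.0 * alpha * (x_new**2 - x**2)):
            x = x_new
        e_sum += local_energy(x, alpha)
    return e_sum / n_steps

# The exact ground state has alpha = 0.5 and E0 = 0.5; other trial values
# give strictly higher variational energies, per the variational principle.
for a in (0.3, 0.5, 0.8):
    print(a, round(vmc_energy(a), 3))
```

The analytic variational energy is E(α) = α/2 + 1/(8α), so the α = 0.3 and α = 0.8 estimates land near 0.567 and 0.556, bracketing the exact 0.5 from above.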
Indian Academy of Sciences (India)
Arnab Chakraborty. Keywords: Gibbs sampling, Markov chain Monte Carlo, Bayesian inference, stationary distribution, convergence, image restoration. We describe the mathematics behind the Markov chain Monte Carlo method of ...
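The Gibbs sampling named in the keywords can be illustrated on the textbook case of a standard bivariate normal with correlation ρ, where both full conditionals are closed-form normals (an illustrative sketch, not the article's example):

```python
import math, random

def gibbs_bivariate_normal(rho, n_samples=50_000, burn_in=1_000, seed=0):
    # Gibbs sampling for a standard bivariate normal with correlation rho:
    # each full conditional is x | y ~ N(rho*y, 1 - rho^2), and symmetrically for y.
    rng = random.Random(seed)
    sd = math.sqrt(1.0 - rho * rho)
    x = y = 0.0
    samples = []
    for i in range(n_samples + burn_in):
        x = rng.gauss(rho * y, sd)   # draw x from p(x | y)
        y = rng.gauss(rho * x, sd)   # draw y from p(y | x)
        if i >= burn_in:
            samples.append((x, y))
    return samples

pts = gibbs_bivariate_normal(rho=0.8)
n = len(pts)
mean_x = sum(p[0] for p in pts) / n      # ~ 0 at stationarity
corr = sum(p[0] * p[1] for p in pts) / n  # ~ rho for standardized marginals
print(round(mean_x, 2), round(corr, 2))
```

Once the chain reaches its stationary distribution, the sample mean is near 0 and the sample E[xy] is near ρ, which is the convergence behavior the article's mathematics addresses.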
Getting off the carousel: Exploring the wicked problem of curriculum reform.
Hawick, Lorraine; Cleland, Jennifer; Kitto, Simon
2017-10-01
Making substantial changes to the form and delivery of medical education is challenging. One reason for this may be misalignment between existing conceptualizations of curricula and curriculum reform in medical education, with the former perceived as 'complex' yet the latter as linear. Reframing curriculum reform as a process-driven, complex entity may enhance the possibility of change. To explore the utility of this approach, we carried out an exploratory case study of curriculum reform in a real-life context. We used a qualitative case study approach. Data were collected from 17 interviews with senior faculty involved in curriculum reform in one medical school, plus document analysis of approximately 50 documents and files, to provide background and context and aid triangulation. Data coding and analysis were initially inductive, using thematic analysis. After themes were identified, we applied the 'wicked problem' framework to highlight aspects of the data. This paper focuses on two main analytic themes. First, that multiple players hold different views and values in relation to curriculum reform, resulting in various influences on the process and outcomes of reform. Second, 'solutions' generate consequences which go beyond the anticipated advantages of curriculum reform. This is the first empirical study of curriculum reform in medical education which uses the wicked problem framework to conceptually illuminate the complex processes which occur in relation to reform. Those involved in reform must be reflective and attentive to the possibility that persistent and emerging challenges may be a result of wicked problems.
The rebuilding imperative in fisheries: Clumsy solutions for a wicked problem?
Khan, Ahmed S.; Neis, Barb
2010-10-01
There is mounting evidence that global fisheries are in crisis and about 25-30% of fish stocks are overexploited, depleted or recovering. Fish landings are increasingly coming from fully exploited and overexploited fisheries, and from intensive aquaculture that often relies indirectly on reduction fisheries. This poses severe challenges for marine ecosystems as well as food security and the livelihoods of resource-dependent coastal communities. Growing awareness of these social, economic and ecological consequences of overfishing is reflected in an expanding literature which shows that reducing fishing effort to allow fish stocks to recover has been the main focus of management efforts, but successful examples of stock recovery are few. An alternative, less explored social-ecological approach focuses on rebuilding entire ‘fish chains’ from oceans to plate. This paper supports this alternative approach. A review and synthesis of stock rebuilding initiatives worldwide suggests effective governance is central to rebuilding, and fisheries governance is a wicked problem. Wicked problems are complex, persistent or reoccurring and hard to fix because they are linked to broader social, economic and policy issues. This review and analysis implies that, due to socioeconomic and sociopolitical concerns, fisheries governance challenges are particularly wicked when dealing with collapsed fisheries and rebuilding efforts. The paper concludes that rebuilding might benefit from experimenting with clumsy solutions. Clumsy solutions are exploratory, include inputs from a broad range of stakeholders along the fish chain, and require information sharing, knowledge synthesis, and trust building. Moreover, clumsy solutions that address power relations, collective action dilemmas, and the fundamental question of ‘rebuilding for whom’ are essential for stewardship, equity and long-term resource sustainability.
Dunn, William L
2012-01-01
Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle problem"…
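The Buffon's needle problem mentioned above is the classic entry point to Monte Carlo: a needle of length L dropped on lines spaced d ≥ L apart crosses a line with probability 2L/(πd), so counting crossings estimates π. A short sketch (an illustration, not code from the book):

```python
import math, random

def buffon_pi(n_throws=1_000_000, needle=1.0, spacing=1.0, seed=42):
    # Buffon's needle: drop a needle of length L <= d onto lines spaced d apart.
    # P(cross) = 2L / (pi * d), so pi ~ 2 * L * n / (d * hits).
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_throws):
        center = rng.uniform(0.0, spacing / 2.0)   # distance to nearest line
        theta = rng.uniform(0.0, math.pi / 2.0)    # needle angle
        if center <= (needle / 2.0) * math.sin(theta):
            hits += 1
    return 2.0 * needle * n_throws / (spacing * hits)

print(buffon_pi())  # close to pi for large n_throws
```

The statistical error shrinks like 1/√n, the hallmark convergence rate of Monte Carlo estimators.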
Directory of Open Access Journals (Sweden)
Bardenet Rémi
2013-07-01
Full Text Available Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow one to compute these integrals numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which are rejection sampling, importance sampling and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretical justification of the algorithms as well as practical advice, trying to relate both. We discuss the application of Monte Carlo in experimental physics, and point to landmarks in the literature for the curious reader.
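Two of the algorithms the review covers, rejection sampling and (self-normalized) importance sampling, can be sketched on a toy target, a half-normal density with an Exp(1) proposal (an illustrative sketch under these assumptions, not the review's code):

```python
import math, random

rng = random.Random(7)

def target_pdf(x):
    # Unnormalized target: a standard normal restricted to x >= 0 (half-normal).
    return math.exp(-0.5 * x * x) if x >= 0 else 0.0

# --- Rejection sampling: propose from Exp(1), accept with ratio f(x)/(M g(x)) ---
M = math.exp(0.5)   # bound: exp(-x^2/2) <= M * exp(-x) for x >= 0 (max at x = 1)
def rejection_sample():
    while True:
        x = rng.expovariate(1.0)
        if rng.random() < target_pdf(x) / (M * math.exp(-x)):
            return x

# --- Importance sampling: estimate E_f[x] with Exp(1) as proposal ---
def importance_mean(n=100_000):
    num = den = 0.0
    for _ in range(n):
        x = rng.expovariate(1.0)
        w = target_pdf(x) / math.exp(-x)   # unnormalized weight f/g
        num += w * x
        den += w
    return num / den   # self-normalized estimate

draws = [rejection_sample() for _ in range(100_000)]
print(sum(draws) / len(draws), importance_mean())
# Both approximate E[x] for a half-normal: sqrt(2/pi) ~ 0.7979
```

Rejection sampling produces exact draws at the cost of discarded proposals (acceptance rate 1/M up to normalization), while importance sampling keeps every proposal but weights it; MCMC trades both for correlated samples from a Markov chain.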
Exact solutions for the Wick-type stochastic Kersten-Krasil'shchik coupled KdV-mKdV equations
Singh, S.; Saha Ray, S.
2017-11-01
In this article, exact solutions of Wick-type stochastic Kersten-Krasil'shchik coupled KdV-mKdV equations have been obtained by using the Jacobian elliptic function expansion method. We have used the Hermite transform for transforming the Wick-type stochastic Kersten-Krasil'shchik coupled KdV-mKdV equation into a deterministic partial differential equation. Also, we have applied the inverse Hermite transform for obtaining a set of stochastic solutions in the white noise space.
Directory of Open Access Journals (Sweden)
Hans Jørgen Timm Guthe
Full Text Available To measure colloid osmotic pressure in interstitial fluid (COP(i)) from human subcutaneous tissue with the modified wick technique, in order to determine the influence of topical application of anaesthetics, dry vs. wet wicks, and implantation time on COP(i). In 50 healthy volunteers, interstitial fluid (IF) was collected by subcutaneous implantation of multi-filamentous nylon wicks. Study subjects were allocated to two groups: one for comparing COP(i) obtained from dry and saline-soaked wicks, and one for comparing COP(i) from unanaesthetized skin and skin after application of a eutectic mixture of local anaesthetic (EMLA®, Astra Zeneca) cream. IF was sampled from the skin of the shoulders, and implantation times were 30, 60, 75, 90 and 120 min. Colloid osmotic pressure was measured with a colloid osmometer. Pain assessment during the procedure was compared for EMLA cream and no topical anaesthesia using a visual analogue scale (VAS) in a subgroup of 10 subjects. There were no significant differences between COP(i) obtained from dry compared to wet wicks, except that the values after 75 and 90 min were somewhat higher for the dry wicks. Topical anaesthesia with EMLA cream did not affect COP(i) values. COP(i) decreased from 30 to 75 min of implantation (23.2 ± 4.4 mmHg to 19.6 ± 2.9 mmHg, p = 0.008) and subsequently tended to increase until 120 min. EMLA cream resulted in a significantly lower VAS score for the procedure. COP(i) from subcutaneous tissue was easily obtained, and fluid harvesting was well tolerated when topical anaesthetic was used. The difference in COP(i) assessed by dry and wet wicks between 75 min and 90 min of implantation was in accordance with previous reports. The use of topical analgesia did not influence COP(i), and topical analgesia may make the wick technique more acceptable for subjects who dislike technical procedures, including children. ClinicalTrials.gov NCT01044979.
A comparison of field-line resonances observed at the Goose Bay and Wick radars
Directory of Open Access Journals (Sweden)
G. Provan
1997-02-01
Full Text Available Previous observations with the Goose Bay HF coherent-scatter radar have revealed structured spectral peaks at ultra-low frequencies. The frequencies of these spectral peaks have been demonstrated to be extremely consistent from day to day. The stability of these spectral peaks can be seen as evidence for the existence of global magnetospheric cavity modes whose resonant frequencies are independent of latitude. Field-line resonances occur when successive harmonics of the eigenfrequency of the magnetospheric cavity or waveguide match either the first harmonic eigenfrequency of the geomagnetic field lines or higher harmonics of this frequency. Power spectra observed at the SABRE VHF coherent-scatter radar at Wick, Scotland, during night and early morning are revealed to show similarly clearly structured spectral peaks. These spectral peaks are the result of local field-line resonances due to Alfvén waves standing on magnetospheric field lines. A comparison of the spectra observed by the Goose Bay and Wick radars demonstrates that the frequencies of the field-line resonances are, on average, almost identical, despite the different latitudinal ranges covered by the two radars. Possible explanations for the similarity of the signatures on the two radar systems are discussed.
Conical evaporator and liquid-return wick model for vapor anode, multi-tube AMTEC cells
Tournier, Jean-Michel; El-Genk, Mohamed S.
2000-01-01
A detailed, 2-D thermal-hydraulic model for conical and flat evaporators and the liquid sodium return artery in PX-type AMTEC cells was developed, which predicts incipient dryout at the evaporator wick surface. Results obtained at fixed hot and cold side temperatures showed that the flat evaporator provided a slightly lower vapor pressure, but reached the capillary limit at higher temperature. The loss of performance due to partial recondensation over up to 20% of the wick surface of the deep conical evaporators was offset by the larger surface area available for evaporation, providing a slightly higher vapor pressure. Model results matched the PX-3A cell's experimental data of electrical power output, but the predicted temperature of the cell's conical evaporator was consistently ~50 K above measurements. A preliminary analysis indicated that sodium vapor leakage in the cell (through microcracks in the BASE tubes' walls or brazes) may explain the difference between predicted and measured evaporator temperatures in PX-3A.
Passive cooling of standalone flat PV module with cotton wick structures
International Nuclear Information System (INIS)
Chandrasekar, M.; Suresh, S.; Senthilkumar, T.; Ganesh karthikeyan, M.
2013-01-01
Highlights: • A simple passive cooling system is developed for standalone flat PV modules. • 30% reduction in module temperature is observed with the developed cooling system. • 15.61% increase in output power of the PV module is found with the developed cooling system. • Module efficiency is increased by 1.4% with the cooling arrangement. • Lower thermal degradation due to narrow range of temperature characteristics. - Abstract: Typically, a PV module converts only 4–17% of the incoming solar radiation into electricity. Thus more than 50% of the incident solar energy is converted to heat, and the temperature of the PV module increases. The increase in module temperature in turn decreases the electrical yield and efficiency of the module, with permanent structural damage to the module after prolonged periods of thermal stress (also known as thermal degradation of the module). An effective way of improving efficiency and reducing the rate of thermal degradation of a PV module is to reduce its operating temperature during operation. Hence, in the present work, a simple passive cooling system with cotton wick structures is developed for standalone flat PV modules. The thermal and electrical performance of a flat PV module with a cooling system consisting of cotton wick structures in combination with water, Al2O3/water nanofluid and CuO/water nanofluid is investigated experimentally. The experimental results are also compared with the thermal and electrical performance of a flat PV module without a cooling system.
International Nuclear Information System (INIS)
Cai Yifu; Qiu Taotao; Brandenberger, Robert; Zhang Xinmin
2009-01-01
We study the cosmology of a Lee-Wick type scalar field theory. First, we consider homogeneous and isotropic background solutions and find that they are nonsingular, leading to cosmological bounces. Next, we analyze the spectrum of cosmological perturbations which result from this model. Unless either the potential of the Lee-Wick theory or the initial conditions are finely tuned, it is impossible to obtain background solutions which have a sufficiently long period of inflation after the bounce. More interestingly, however, we find that in the generic noninflationary bouncing cosmology, perturbations created from quantum vacuum fluctuations in the contracting phase have the correct form to lead to a scale-invariant spectrum of metric inhomogeneities in the expanding phase. Since the background is nonsingular, the evolution of the fluctuations is defined unambiguously through the bounce. We also analyze the evolution of fluctuations which emerge from thermal initial conditions in the contracting phase. The spectrum of gravitational waves stemming from quantum vacuum fluctuations in the contracting phase is also scale-invariant, and the tensor to scalar ratio is not suppressed.
An auto-Bäcklund transformation and exact solutions of stochastic Wick-type Sawada-Kotera equations
Energy Technology Data Exchange (ETDEWEB)
Chen Bin E-mail: journal@xznu.edu.cn; Xie Yingchao E-mail: ycxie588@public.xz.js.cn
2005-01-01
This paper shows an auto-Bäcklund transformation and soliton solutions for variable coefficient Sawada-Kotera equations, and stochastic soliton solutions of stochastic Wick-type Sawada-Kotera equations, by using the Hermite transform in the Kondratiev distribution space (S)_{-1}.
Dentoni, D.; Hospes, O.; Ross, R.
2013-01-01
Environmental degradation and biodiversity loss, persisting poverty, a mounting obesity epidemic, food insecurity and the use of biotechnology are all examples of wicked problems faced by agricultural and food organizations. Yet, managers and policy-makers often do not recognize that these problems
Amour, Laurent; Khodja, Mohamed; Nourrigat, Jean
2011-01-01
We study the Wick symbol of a solution of the time-dependent Hartree-Fock equation, under weaker hypotheses than those needed for the Weyl symbol in the first paper with the same title. With similar techniques, we prove a kind of Ehrenfest theorem for observables that are not pseudo-differential operators.
Stahl, Cynthia; Cimorelli, Alan
2013-01-01
Because controversy, conflict, and lawsuits frequently characterize US Environmental Protection Agency (USEPA) decisions, it is important that USEPA decision makers understand how to evaluate and then make decisions that have simultaneously science-based, social, and political implications. Air quality management is one category of multidimensional decision making at USEPA. The Philadelphia, Pennsylvania metropolitan area experiences unhealthy levels of ozone, fine particulate matter, and air toxics. Many ozone precursors are precursors for particulate matter and certain air toxics. Additionally, some precursors for particulate matter are air toxics. However, air quality management practices have typically evaluated these problems separately. This approach has led to the development of independent (and potentially counterproductive) implementation strategies. This is a methods article about the necessity and feasibility of using a clumsy approach on wicked problems, using an example case study. Air quality management in Philadelphia is a wicked problem. Wicked problems are those where stakeholders define or view the problem differently, there are many different ways to describe the problem (i.e., different dimensions or levels of abstraction), no efficient or optimal solutions exist, and they are often complicated by moral, political, or professional dimensions. The USEPA has developed the multicriteria integrated resource assessment (MIRA) decision analytic approach that engages stakeholder participation through transparency, transdisciplinary learning, and the explicit use of value sets; in other words, a clumsy approach. MIRA's approach to handling technical indicators, expert judgment, and stakeholder values makes it a potentially effective method for tackling wicked environmental problems. Copyright © 2012 SETAC.
Energy Technology Data Exchange (ETDEWEB)
Cramer, S.N.
1984-01-01
The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.
Does labelling frequency affect N rhizodeposition assessment using the cotton-wick method?
DEFF Research Database (Denmark)
Mahieu, S.; Fustec, J.; Jensen, Erik Steen
2009-01-01
The aim of the present study was to test and improve the reliability of the 15N cotton-wick method for measuring soil N derived from plant rhizodeposition, a critical value for assessing belowground nitrogen input in field-grown legumes. The effects of the concentration of the 15N labelling solution and the feeding frequency on assessment of nitrogen rhizodeposition were studied in two greenhouse experiments using the field pea (Pisum sativum L.). Neither the method nor the feeding frequency altered plant biomass and N partitioning, and the method appeared well adapted for assessing… …the assessment of nitrogen rhizodeposition was more reliable when plants were labelled continuously with a dilute solution of 15N urea.
Variational Monte Carlo Technique
Indian Academy of Sciences (India)
Variational Monte Carlo Technique: Ground State Energies of Quantum Mechanical Systems. Sukanta Deb. General Article, Resonance – Journal of Science Education, Volume 19, Issue 8, August 2014, pp. 713-739.
Monte Carlo codes and Monte Carlo simulator program
International Nuclear Information System (INIS)
Higuchi, Kenji; Asai, Kiyoshi; Suganuma, Masayuki.
1990-03-01
Four typical Monte Carlo codes, KENO-IV, MORSE, MCNP and VIM, have been vectorized on the VP-100 at the Computing Center, JAERI. Through this work, the problems in vector processing of Monte Carlo codes on vector processors have become clear. As a result, it is recognized that it is difficult to obtain good performance when vectorizing Monte Carlo codes. A Monte Carlo computing machine, which processes Monte Carlo codes with high performance, has been under development at our Computing Center since 1987. The concept of the Monte Carlo computing machine and its performance have been investigated and estimated using a software simulator. In this report, the problems in vectorization of Monte Carlo codes, the Monte Carlo pipelines proposed to mitigate these difficulties, and the results of the performance estimation of the Monte Carlo computing machine by the simulator are described. (author)
2009-01-01
Carlo Rubbia turned 75 on March 31, and CERN held a symposium to mark his birthday and pay tribute to his impressive contribution to both CERN and science. Carlo Rubbia, 4th from right, together with the speakers at the symposium.On 7 April CERN hosted a celebration marking Carlo Rubbia’s 75th birthday and 25 years since he was awarded the Nobel Prize for Physics. "Today we will celebrate 100 years of Carlo Rubbia" joked CERN’s Director-General, Rolf Heuer in his opening speech, "75 years of his age and 25 years of the Nobel Prize." Rubbia received the Nobel Prize along with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. During the symposium, which was held in the Main Auditorium, several eminent speakers gave lectures on areas of science to which Carlo Rubbia made decisive contributions. Among those who spoke were Michel Spiro, Director of the French National Insti...
Hrivnacova, I; Berejnov, V V; Brun, R; Carminati, F; Fassò, A; Futo, E; Gheata, A; Caballero, I G; Morsch, Andreas
2003-01-01
The concept of Virtual Monte Carlo (VMC) has been developed by the ALICE Software Project to allow different Monte Carlo simulation programs to run without changing the user code, such as the geometry definition, the detector response simulation or input and output formats. Recently, the VMC classes have been integrated into the ROOT framework, and the other relevant packages have been separated from the AliRoot framework and can be used individually by any other HEP project. The general concept of the VMC and its set of base classes provided in ROOT will be presented. Existing implementations for Geant3, Geant4 and FLUKA and simple examples of usage will be described.
Siim Nestor recommends: Wicked Beat. President of Funk. On Museum Night. At Springz Bash / Siim Nestor
Nestor, Siim, 1974-
2002-01-01
On 16 May the club Atlantis, on the banks of the Emajõgi, hosts Wicked Beat, Estonia's first nu-skool breaks night, headlined by DJ and producer Databass (Justin Owen, "chairman of the board" of the label Freakaboom). On 18 May, a house-music party at the Damtan Dance music gallery in Tartu. On the night of 17 May, trios at the Rotermann salt storage: 1. Riho Sibul, Jaak Sooäär and Raul Saaremets, and 2. Tõnis Leemets, Robert Jürjendal and Aivar Tõnso. On 17 May, DJ Dr. Koit at Von Krahl.
Metzger, E. P.; Curren, R. R.
2016-12-01
Effective engagement with the problems of sustainability begins with an understanding of the nature of the challenges. The entanglement of interacting human and Earth systems produces solution-resistant dilemmas that are often portrayed as wicked problems. As introduced by urban planners Rittel and Webber (1973), wicked problems are "dynamically complex, ill-structured, public problems" arising from complexity in both biophysical and socio-economic systems. The wicked problem construct is still in wide use across diverse contexts, disciplines, and sectors. Discourse about wicked problems as related to sustainability is often connected to discussion of complexity or complex systems. In preparation for life and work in an uncertain, dynamic and hyperconnected world, students need opportunities to investigate real problems that cross social, political and disciplinary divides. They need to grapple with diverse perspectives and values, and collaborate with others to devise potential solutions. Such problems are typically multi-causal and so entangled with other problems that they cannot be resolved using the expertise and analytical tools of any single discipline, individual, or organization. We have developed a trio of illustrative case studies that focus on energy, water and food, because these resources are foundational, interacting, and causally connected in a variety of ways with climate destabilization. The three interrelated case studies progress in scale from the local and regional, to the national and international and include: 1) the 2010 Gulf of Mexico oil spill with examination of the multiple immediate and root causes of the disaster, its ecological, social, and economic impacts, and the increasing risk and declining energy return on investment associated with the relentless quest for fossil fuels; 2) development of Australia's innovative National Water Management System; and 3) changing patterns of food production and the intertwined challenge of
Energy Technology Data Exchange (ETDEWEB)
Son, Hong Hyun; Seo, Gwang Hyeok; Kim, Sung Joong [Hanyang University, Seoul (Korea, Republic of)
2016-10-15
In pool boiling heat transfer, a smooth surface potentially reduces active nucleation of bubbles and rewetting of dry spots near the critical heat flux (CHF). Such behavior is highly likely to deteriorate the CHF. Thus, it is essential to produce appropriate microstructures on the surface to enhance the CHF. In this study, to investigate the microstructural effect of thin-film-fabricated surfaces on the pool boiling CHF, we controlled the surface roughness in a narrow range of 0.1-0.25 μm and its morphologies, in the form of micro-scratches, using a PVD sputtering technique. Specifically, for DC magnetron sputtering, pure chromium (Cr) was selected as the target material owing to its high oxidation resistance. To analyze the CHF trend with changes in roughness, we introduced existing capillary-wicking-based models, because the superhydrophilic characteristics of microstructures are closely related to capillary wicking behavior in micro flow channels. After Cr sputtering under the given conditions, the Cr-sputtered surfaces showed superhydrophilic characteristics, and this capability was further enhanced as surface roughness increased. Judging from the spreading behavior of a liquid droplet, the presence of micro-wicking channels, coupled with Cr nanostructures, effectively enhanced the advancing rate of the drop base diameter. The CHF exhibited an increasing trend with increasing surface roughness. However, the enhancement ratio agreed poorly with the predictions of the roughness-factor-based models, all of which originate from a conventional static force balance.
Variational Monte Carlo Technique
Indian Academy of Sciences (India)
ias
nonprobabilistic) problem [5]. ... In quantum mechanics, the MC methods are used to simulate many-particle systems using random ... D Ceperley, G V Chester and M H Kalos, Monte Carlo simulation of a many-fermion study, Physical Review.
Indian Academy of Sciences (India)
Home; Journals; Resonance – Journal of Science Education; Volume 7; Issue 3. Markov Chain Monte Carlo - Examples. Arnab Chakraborty. General Article Volume 7 Issue 3 March 2002 pp 25-34. Fulltext. Click here to view fulltext PDF. Permanent link: https://www.ias.ac.in/article/fulltext/reso/007/03/0025-0034. Keywords.
Leonardo Rossi
Carlo Caso (1940 - 2007) Our friend and colleague Carlo Caso passed away on July 7th, after several months of courageous fight against cancer. Carlo spent most of his scientific career at CERN, taking an active part in the experimental programme of the laboratory. His long and fruitful involvement in particle physics started in the sixties, in the Genoa group led by G. Tomasini. He then carried out several experiments using the CERN liquid hydrogen bubble chambers -first the 2000HBC and later BEBC- to study various facets of the production and decay of meson and baryon resonances. He later formed his own group and joined the NA27 Collaboration to exploit the EHS Spectrometer with a rapid-cycling bubble chamber as vertex detector. Amongst their many achievements, they were the first to measure, with excellent precision, the lifetime of the charmed D mesons. At the start of the LEP era, Carlo and his group moved to the DELPHI experiment, participating in the construction and running of the HPC electromagnetic c...
Directory of Open Access Journals (Sweden)
Abdullah Mohd Fareez Edzuan
2017-01-01
Full Text Available Compared with conventional diesel fuel, biodiesel has better lubricity and lower particulate matter (PM) emissions; however, nitrogen oxides (NOx) emissions generally increase in biodiesel-fuelled diesel engines. Strict regulation of NOx emissions is being implemented in the current Euro 6 standard and is expected to be tighter in the next standard, so an increase in NOx cannot be accepted. In this study, the effects of biodiesel unsaturation degree on NOx emissions are investigated. Canola, palm and coconut oils were selected as feedstocks based on their unsaturation degree. B20 biodiesel blends were used to fuel a single-cylinder diesel engine, and exhaust emissions were sampled directly at the exhaust tailpipe with a flue gas analyser. Biodiesel flame temperature was measured with a thermocouple from a cotton wick burned in simple atmospheric conditions. A Fourier transform infrared (FTIR) spectrometer was also used to identify the functional groups present in the biodiesel blends. Oxygen content in biodiesel may promote complete combustion, as the NOx emissions and flame temperatures increased while the carbon monoxide (CO) emissions decreased for all biodiesel blends. It is interesting to note that the NOx emissions and flame temperatures were directly proportional to the biodiesel unsaturation degree. This suggests that apart from excess oxygen and free-radical formation, higher NOx emissions can also be caused by the elevated flame temperatures due to the presence of double bonds in unsaturated biodiesel.
Binder, Andrea; Zirkelbach, Daniel; Künzel, Hartwig
2010-05-01
Applying interior insulation is often the only option for a thermal retrofit, especially where heritage buildings are concerned. In doing so, the original construction becomes colder in winter and interstitial condensation may occur. The common way to avoid harmful condensation beneath the interior insulation of the external wall is the installation of a vapor barrier. Since such a barrier works both ways, it may adversely affect the drying potential of the wall during the warmer seasons. One way to avoid the problems described is the installation of an interior insulation system without a vapor barrier on the inside. Here, the effect of capillary transport in porous hydrophilic media is used to conduct condensing moisture away from the wall/insulation interface back to the surface in contact with the indoor air. Following an increasing demand, several water-wicking insulation materials (e.g. calcium silicate, autoclaved aerated concrete-based mineral foam, hydrophilic glass fiber, cellulose fiber) have appeared on the market. In the past, different methods have been developed to measure and describe the liquid transport properties of hydrophilic porous media. However, the evaluation of the moisture transport mechanisms and their efficiency in this special field of application is very complex because of the interacting vapor and liquid moisture transfer processes. Therefore, there is no consensus yet on its determination and quantification.
Lönngren, Johanna; Ingerman, Åke; Svanström, Magdalena
2017-08-01
Wicked sustainability problems (WSPs) are an important and particularly challenging type of problem. Science and engineering education can play an important role in preparing students to deal with such problems, but current educational practice may not adequately prepare students to do so. We address this gap by providing insights related to students' abilities to address WSPs. Specifically, we aim to (I) describe key constituents of engineering students' approaches to a WSP, (II) evaluate these approaches in relation to the normative context of education for sustainable development (ESD), and (III) identify relevant aspects of learning related to WSPs. Aim I is addressed through a phenomenographic study, while aims II and III are addressed by relating the results to research literature about human problem solving, sustainable development, and ESD. We describe four qualitatively different ways of approaching a specific WSP, as the outcome of the phenomenographic study: A. Simplify and avoid, B. Divide and control, C. Isolate and succumb, and D. Integrate and balance. We identify approach D as the most appropriate approach in the context of ESD, while A and C are not. On this basis, we identify three learning objectives related to students' abilities to address WSPs: learn to use a fully integrative approach, distinguish WSPs from tame and well-structured problems, and understand and consider the normative context of SD. Finally, we provide recommendations for how these learning objectives can be used to guide the design of science and engineering educational activities.
The other Higgses, at resonance, in the Lee-Wick extension of the Standard Model
Figy, Terrance
2011-01-01
Within the framework of the Lee-Wick Standard Model (LWSM) we investigate Higgs pair production $gg \to h_0 h_0$, $gg \to h_0 \tilde p_0$ and top pair production $gg \to \bar tt$ at the Large Hadron Collider (LHC), where the neutral particles from the Higgs sector ($h_0$, $\tilde h_0$ and $\tilde p_0$) appear as possible resonant intermediate states. We investigate the signal $gg \to h_0 h_0 \to \bar b b \gamma \gamma$ and we find that the LW Higgs, depending on its mass range, can be seen not long after the LHC upgrade in 2012. More precisely this happens when the new LW Higgs states are below the top pair threshold. In $gg \to \bar tt$ the LW states, due to the wrong-sign propagator and negative width, lead to a dip-peak structure instead of the usual peak-dip structure, which gives a characteristic signal, especially for low-lying LW Higgs states. We comment on the LWSM and the forward-backward asymmetry in view of the measurement at the Tevatron. Furthermore, we present a technique which reduces the hyperbo...
Kalos, Melvin H
2008-01-01
This introduction to Monte Carlo methods seeks to identify and study the unifying elements that underlie their effective application. Initial chapters provide a short treatment of the probability and statistics needed as background, enabling those without experience in Monte Carlo techniques to apply these ideas to their research.The book focuses on two basic themes: The first is the importance of random walks as they occur both in natural stochastic systems and in their relationship to integral and differential equations. The second theme is that of variance reduction in general and importance sampling in particular as a technique for efficient use of the methods. Random walks are introduced with an elementary example in which the modeling of radiation transport arises directly from a schematic probabilistic description of the interaction of radiation with matter. Building on this example, the relationship between random walks and integral equations is outlined
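The variance-reduction theme the abstract highlights can be made concrete with a short sketch (illustrative only, not taken from the book; the integrand and the linear sampling density are assumptions): estimate ∫₀¹ eˣ dx = e − 1 both by naive uniform sampling and by importance sampling from p(x) = (1 + x)/1.5, a density that roughly follows the shape of the integrand.

```python
import math
import random

random.seed(42)
N = 100_000
exact = math.e - 1  # integral of e^x over [0, 1]

# Naive Monte Carlo: average e^U for U ~ Uniform(0, 1)
naive = [math.exp(random.random()) for _ in range(N)]

# Importance sampling: draw x from p(x) = (1 + x)/1.5 by inverse transform,
# then average the weighted integrand f(x)/p(x) = 1.5 * e^x / (1 + x)
def sample_p():
    u = random.random()
    return -1.0 + math.sqrt(1.0 + 3.0 * u)  # inverts the CDF (x + x^2/2)/1.5

weighted = [1.5 * math.exp(x) / (1.0 + x) for x in (sample_p() for _ in range(N))]

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((t - m) ** 2 for t in v) / len(v)

est_naive, est_is = mean(naive), mean(weighted)
# the weighted integrand varies much less over [0, 1], so its variance is smaller
```

Both estimators are unbiased; the importance-sampled one flattens the integrand and therefore needs fewer samples for the same error, which is the point of the book's second theme.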
Directory of Open Access Journals (Sweden)
Pedro Medina Avendaño
1981-01-01
Full Text Available Carlos Vega Duarte had the simplicity of elemental and pure beings. His heart was as clean as alluvial gold. His direct, colloquial manner revealed a Santanderean without contaminations, who loved the gleam of weapons and was dazzled by the sparkle of perfect phrases.
Energy Technology Data Exchange (ETDEWEB)
Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a PowerPoint presentation which serves as lecture material for the Parallel Computing summer school. It covers the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background; a simple example: estimating π), Why does this even work? (the Law of Large Numbers, the Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
Wormhole Hamiltonian Monte Carlo
Lan, S; Streets, J; Shahbaba, B
2014-01-01
Copyright © 2014, Association for the Advancement of Artificial Intelligence. In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, espe...
International Nuclear Information System (INIS)
Creutz, M.
1986-01-01
The author discusses a recently developed algorithm for simulating statistical systems. The procedure interpolates between molecular dynamics methods and canonical Monte Carlo. The primary advantages are extremely fast simulations of discrete systems such as the Ising model and a relative insensitivity to random number quality. A variation of the algorithm gives rise to a deterministic dynamics for Ising spins. This model may be useful for high speed simulation of non-equilibrium phenomena
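A minimal sketch of the interpolation the abstract describes, on a 1D Ising chain (a toy reconstruction of the demon idea, with assumed parameters, not the paper's code): a "demon" carrying a non-negative energy bank visits spins, and a flip is accepted exactly when the demon can pay for or absorb the energy change. Total energy is conserved, and the random number generator is used only to pick sites, never for acceptance, which illustrates the claimed insensitivity to random number quality.

```python
import random

random.seed(1)

J, N = 1, 100
spins = [1] * N          # start in the ground state; all energy sits in the demon
demon = 24               # demon's initial energy bank (must stay >= 0)

def bond_energy(s):
    return -J * sum(s[i] * s[(i + 1) % N] for i in range(N))

E_total = bond_energy(spins) + demon   # microcanonical invariant
demon_sum, steps = 0, 20_000

for _ in range(steps):
    i = random.randrange(N)            # RNG only selects the site
    # energy cost of flipping spin i (periodic boundaries)
    dE = 2 * J * spins[i] * (spins[i - 1] + spins[(i + 1) % N])
    if dE <= demon:                    # demon can pay: deterministic acceptance
        spins[i] = -spins[i]
        demon -= dE                    # demon absorbs energy released when dE < 0
    demon_sum += demon

avg_demon = demon_sum / steps          # mean demon energy acts as a thermometer
```

The demon's energy histogram is exponentially distributed, so its average can be read as a temperature, which is how the method replaces canonical sampling.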
Energy Technology Data Exchange (ETDEWEB)
Brockway, D.; Soran, P.; Whalen, P.
1985-01-01
A Monte Carlo algorithm to efficiently calculate static alpha eigenvalues, N = n·e^(αt), for supercritical systems has been developed and tested. A direct Monte Carlo approach to calculating a static α is to simply follow the buildup in time of neutrons in a supercritical system and evaluate the logarithmic derivative of the neutron population with respect to time. This procedure is expensive, and the solution is very noisy and almost useless for a system near critical. The modified approach is to convert the time-dependent problem to a static α-eigenvalue problem and regress α on solutions of a k-eigenvalue problem. In practice, this procedure is much more efficient than the direct calculation, and produces much more accurate results. Because the Monte Carlo codes are intrinsically three-dimensional and use elaborate continuous-energy cross sections, this technique is now used as a standard for evaluating other calculational techniques in odd geometries or with group cross sections.
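The "direct" approach the abstract criticizes can be illustrated with a toy branching process (purely illustrative; the offspring model and all numbers are assumptions, not the paper's physics): follow the population in time and regress ln N on t to estimate the growth rate α.

```python
import math
import random

random.seed(0)

def advance(n):
    """One time step: each neutron yields 1 or 2 offspring (mean 1.2, supercritical)."""
    return sum(2 if random.random() < 0.2 else 1 for _ in range(n))

pop, history = 1000, []
for t in range(16):
    history.append((t, math.log(pop)))
    pop = advance(pop)

# least-squares slope of ln N versus t = estimate of alpha
ts = [t for t, _ in history]
ys = [y for _, y in history]
tbar, ybar = sum(ts) / len(ts), sum(ys) / len(ys)
alpha_est = (sum((t - tbar) * (y - ybar) for t, y in history)
             / sum((t - tbar) ** 2 for t in ts))
# true growth rate here is ln(1.2) per step
```

Even in this well-behaved toy the estimate is noisy; near critical the growth signal vanishes into the noise, which motivates the paper's k-eigenvalue regression instead.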
Directory of Open Access Journals (Sweden)
Charlie Samuya Veric
2001-12-01
Full Text Available The importance of Carlos Bulosan in Filipino and Filipino-American radical history and literature is indisputable. His eminence spans the Pacific, and he is known, diversely, as a radical poet, fictionist, novelist, and labor organizer. Author of the canonical America Is in the Heart, Bulosan is celebrated for chronicling the conditions in America in his time, such as racism and unemployment. In the history of criticism on Bulosan's life and work, however, there is an undeclared general consensus that views Bulosan and his work as coherent, permanent texts of radicalism and anti-imperialism. Central to the existence of such a tradition of critical reception are the generations of critics who, in more ways than one, control the discourse on and of Carlos Bulosan. This essay inquires into the sphere of the critical reception that orders, for our time and for the time ahead, the reading and interpretation of Bulosan. What eye and seeing, the essay asks, determine the perception of Bulosan as the angel of radicalism? What is obscured in constructing Bulosan as an immutable figure of the political? What light does the reader conceive when the personal is brought into the open and situated against the political? The essay explores the answers to these questions in Bulosan's loving letters to various friends, strangers, and white American women. The presence of these interrogations, the essay believes, will ultimately secure the continuing importance of Carlos Bulosan to radical literature and history.
2009-01-01
On 7 April CERN will be holding a symposium to mark the 75th birthday of Carlo Rubbia, who shared the 1984 Nobel Prize for Physics with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. Following a presentation by Rolf Heuer, lectures will be given by eminent speakers on areas of science to which Carlo Rubbia has made decisive contributions. Michel Spiro, Director of the French National Institute of Nuclear and Particle Physics (IN2P3) of the CNRS, Lyn Evans, sLHC Project Leader, and Alan Astbury of the TRIUMF Laboratory will talk about the physics of the weak interaction and the discovery of the W and Z bosons. Former CERN Director-General Herwig Schopper will lecture on CERN’s accelerators from LEP to the LHC. Giovanni Bignami, former President of the Italian Space Agency and Professor at the IUSS School for Advanced Studies in Pavia will speak about his work with Carlo Rubbia. Finally, Hans Joachim Sch...
2009-01-01
On 7 April CERN will be holding a symposium to mark the 75th birthday of Carlo Rubbia, who shared the 1984 Nobel Prize for Physics with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. Following a presentation by Rolf Heuer, lectures will be given by eminent speakers on areas of science to which Carlo Rubbia has made decisive contributions. Michel Spiro, Director of the French National Institute of Nuclear and Particle Physics (IN2P3) of the CNRS, Lyn Evans, sLHC Project Leader, and Alan Astbury of the TRIUMF Laboratory will talk about the physics of the weak interaction and the discovery of the W and Z bosons. Former CERN Director-General Herwig Schopper will lecture on CERN’s accelerators from LEP to the LHC. Giovanni Bignami, former President of the Italian Space Agency, will speak about his work with Carlo Rubbia. Finally, Hans Joachim Schellnhuber of the Potsdam Institute for Climate Research and Sven Kul...
Tackling racism as a "wicked" public health problem: Enabling allies in anti-racism praxis.
Came, Heather; Griffith, Derek
2018-02-01
Racism is a "wicked" public health problem that fuels systemic health inequities between population groups in New Zealand, the United States and elsewhere. While the literature has examined racism and its effects on health, work describing how to intervene to address racism in public health is less developed. While the notion of raising awareness of racism through socio-political education is not new, given the way racism has morphed into new narratives in institutional health settings, it has become critical to support allies in making informed efforts to address racism as a fundamental cause of health inequities. In this paper, we make the case for anti-racism praxis as a tool to address inequities in public health, and focus on describing an anti-racism praxis framework to inform the training and support of allies. The limited work on anti-racism rarely articulates the unique challenges or needs of allies or targets of racism, but we seek to help fill that gap. Our anti-racism praxis for allies includes five core elements: reflexive relational praxis, structural power analysis, socio-political education, monitoring and evaluation, and systems-change approaches. We recognize that racism is a modifiable determinant of health and that racial inequities can be eliminated with the necessary political will and a planned system-change approach. Anti-racism praxis provides the tools to examine the interconnection and interdependence of cultural and institutional factors as a foundation for examining where and how to intervene to address racism. Copyright © 2017 Elsevier Ltd. All rights reserved.
Patterson, James J; Smith, Carl; Bellamy, Jennifer
2013-10-15
Nonpoint source (NPS) water pollution in catchments is a 'wicked' problem that threatens water quality, water security, ecosystem health and biodiversity, and thus the provision of ecosystem services that support human livelihoods and wellbeing from local to global scales. However, it is a difficult problem to manage because water catchments are linked human and natural systems that are complex, dynamic, multi-actor, and multi-scalar in nature. This in turn raises questions about understanding and influencing change across multiple levels of planning, decision-making and action. A key challenge in practice is enabling implementation of local management action, which can be influenced by a range of factors across multiple levels. This paper reviews and synthesises important 'enabling' capacities that can influence implementation of local management action, and develops a conceptual framework for understanding and analysing these in practice. Important enabling capacities identified include: history and contingency; institutional arrangements; collaboration; engagement; vision and strategy; knowledge building and brokerage; resourcing; entrepreneurship and leadership; and reflection and adaptation. Furthermore, local action is embedded within multi-scalar contexts and therefore, is highly contextual. The findings highlight the need for: (1) a systemic and integrative perspective for understanding and influencing change for managing the wicked problem of NPS water pollution; and (2) 'enabling' social and institutional arenas that support emergent and adaptive management structures, processes and innovations for addressing NPS water pollution in practice. These findings also have wider relevance to other 'wicked' natural resource management issues facing similar implementation challenges. Copyright © 2013 Elsevier Ltd. All rights reserved.
Agarwal, Shilpi; Kumar, Varun; Shakher, Chandra
2018-03-01
This paper presents the effect of magnetic fields (upward-decreasing, uniform and upward-increasing) on a wick-stabilized micro diffusion flame studied using digital holographic interferometry (DHI). The investigations reveal that under the influence of an upward-decreasing or uniform magnetic field, the temperature inside the micro flame increases in comparison to the temperature inside the micro flame without a magnetic field. This is contrary to a normal diffusion flame, where a uniform magnetic field has little or no effect on the temperature. DHI is inherently more accurate and more precise, and has better spatial resolution; it is ideally suited to the study of micro flames.
International Nuclear Information System (INIS)
Talley, T.L.; Evans, F.
1988-01-01
Prior work demonstrated the importance of nuclear scattering to fusion product energy deposition in hot plasmas. This suggests careful examination of nuclear physics details in burning plasma simulations. An existing Monte Carlo fast ion transport code is being expanded to be a test bed for this examination. An initial extension, the energy deposition of fast alpha particles in a hot deuterium plasma, is reported. The deposition times and deposition ranges are modified by allowing nuclear scattering. Up to 10% of the initial alpha particle energy is carried to greater ranges and times by the more mobile recoil deuterons. 4 refs., 5 figs., 2 tabs
International Nuclear Information System (INIS)
Nguyen, Xuan Hung; Sung, Byung Ho; Choi, Jee Hoon; Kim, Chul Ju; Yoo, Jung Hyung; Seo, Min Whan
2008-01-01
This paper investigates a plate loop heat pipe system with an evaporator fitted with a fin-wick structure to dissipate effectively the heat generated by electronic components. The heat transfer is modeled and predicted through a thermal resistance analysis of the fin-wick structure in the evaporator. The experimental approach measures the thermal resistances and the operating characteristics. The results gathered in this investigation have been used to improve the LHP system design for application in future cooling devices for electronic components.
The wicked problem of earthquake hazard in developing countries: the example of Bangladesh
Steckler, M. S.; Akhter, S. H.; Stein, S.; Seeber, L.
2017-12-01
Many developing nations in earthquake-prone areas confront a tough problem: how much of their limited resources to use mitigating earthquake hazards? This decision is difficult because it is unclear when an infrequent major earthquake may happen, how big it could be, and how much harm it may cause. This issue faces nations with profound immediate needs and ongoing rapid urbanization. Earthquake hazard mitigation in Bangladesh is a wicked problem. It is the world's most densely populated nation, with 160 million people in an area the size of Iowa. Complex geology and sparse data make assessing a possibly-large earthquake hazard difficult. Hence it is hard to decide how much of the limited resources available should be used for earthquake hazard mitigation, given other more immediate needs. Per capita GDP is $1200, so Bangladesh is committed to economic growth and resources are needed to address many critical challenges and hazards. In their subtropical environment, rural Bangladeshis traditionally relied on modest mud or bamboo homes. Their rapidly growing, crowded capital, Dhaka, is filled with multistory concrete buildings likely to be vulnerable to earthquakes. The risk is compounded by the potential collapse of services and accessibility after a major temblor. However, extensive construction as the population shifts from rural to urban provides opportunity for earthquake-risk reduction. While this situation seems daunting, it is not hopeless. Robust risk management is practical, even for developing nations. It involves recognizing uncertainties and developing policies that should give a reasonable outcome for a range of the possible hazard and loss scenarios. Over decades, Bangladesh has achieved a thousandfold reduction in risk from tropical cyclones by building shelters and setting up a warning system. Similar efforts are underway for earthquakes. Smart investments can be very effective, even if modest. Hence, we suggest strategies consistent with high
International Nuclear Information System (INIS)
Boo, Joon Hong; Chung, Won Bok
2005-01-01
A small-scale Loop Heat Pipe (LHP) with a polypropylene wick was fabricated and tested to investigate its thermal performance. The container and tubing of the system were made of stainless steel, and several working fluids were tested, including methanol, ethanol, and acetone. The heating area was 35 mm x 35 mm, and nine axial grooves were provided in the evaporator as vapor passages. The pore size of the polypropylene wick inside the evaporator was varied from 0.5 μm to 25 μm. The inner diameters of the liquid and vapor transport lines were 2.0 mm and 4.0 mm, respectively, and their length was 0.5 mm. The condenser measured 40 mm (W) x 50 mm (L) and contained ten coolant paths. Start-up characteristics as well as steady-state performance were analyzed and discussed. A minimum thermal load of 10 W (0.8 W/cm²) and a maximum thermal load of 80 W (6.5 W/cm²) were achieved using methanol as the working fluid with a condenser temperature of 20°C in the horizontal position.
Monte Carlo Methods in Physics
International Nuclear Information System (INIS)
Santoso, B.
1997-01-01
The method of Monte Carlo integration is reviewed briefly and some of its applications in physics are explained. A numerical experiment on the random generators used in Monte Carlo techniques is carried out to show the randomness behavior of the various methods of generating them. To account for the weight function involved in the Monte Carlo, the Metropolis method is used. From the results of the experiment, one can see that there are no regular patterns in the numbers generated, showing that the program generators are reasonably good, while the experimental results show a statistical distribution obeying the expected statistical distribution law. Further, some applications of the Monte Carlo methods in physics are given. The physical problems are chosen such that the models have available solutions, either exact or approximate, with which comparisons can be made against the calculations using the Monte Carlo method. The comparisons show that, for the models considered, good agreement has been obtained.
Drengenberg, Nicholas; Bain, Alan
2017-01-01
This paper addresses the wicked problem of measuring the productivity of learning and teaching in higher education. We show how fundamental validity issues and difficulties identified in educational productivity research point to the need for a qualitatively different framework when considering the entire question. We describe the work that needs…
Metropolis Methods for Quantum Monte Carlo Simulations
Ceperley, D. M.
2003-01-01
Since its first description fifty years ago, the Metropolis Monte Carlo method has been used in a variety of different ways for the simulation of continuum quantum many-body systems. This paper will consider some of the generalizations of the Metropolis algorithm employed in quantum Monte Carlo: variational Monte Carlo, dynamical methods for projector Monte Carlo ({\it i.e.} diffusion Monte Carlo with rejection), multilevel sampling in path integral Monte Carlo, the sampling of permutations, ...
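As a reminder of the base algorithm these quantum variants generalize, here is a minimal random-walk Metropolis sampler for a standard normal target (a generic textbook sketch, not taken from the paper):

```python
import math
import random

random.seed(3)

def metropolis(logp, x0, steps, width):
    """Random-walk Metropolis: symmetric uniform proposal, accept with min(1, p'/p)."""
    x, samples = x0, []
    for _ in range(steps):
        xp = x + random.uniform(-width, width)               # symmetric proposal
        if random.random() < math.exp(min(0.0, logp(xp) - logp(x))):
            x = xp                                           # accept the move
        samples.append(x)                                    # rejection repeats x
    return samples

# target: standard normal, logp up to an additive constant
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 50_000, 1.0)
mean = sum(chain) / len(chain)
var = sum((v - mean) ** 2 for v in chain) / len(chain)
```

Only the log-density up to a constant is needed, which is exactly the property the variational, diffusion and path-integral generalizations exploit.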
An Ecosocial Approach to Well-Being: A Solution to the Wicked Problems in the Era of Anthropocene
Directory of Open Access Journals (Sweden)
Arto O. Salonen
2015-07-01
Full Text Available Modern Western states have a history of a thinking tradition in which the development of human societies is seen as independent from ecological constraints. Our thinking is a social construction, a product of the human mind. It can be changed. In this article we describe a new approach to well-being called an Ecosocial Approach to Well-Being (EAW). It is a holistic and multidisciplinary approach to well-being that will facilitate the analysis and management of the world's complexity from a socio-ecological perspective. The EAW is based on the fact that without a well-functioning biosphere there can be no society, and without a society there can be no societal functions, including an economy. Fundamentally, all wicked problems in the era of the Anthropocene are global and have social and environmental backgrounds. A more holistic and multidisciplinary systems thinking is needed to analyze and manage the causal complexity of the world in which we live. The EAW asks us to focus on post-material values because they are only loosely coupled with resource consumption. On a finite planet, that raises the question of what is enough and what is good for us. The EAW leads us to maximize psychological well-being and to nurture social harmony and cohesion. The EAW holds promise not only for solving social and ecological problems but also for helping people to be happier. It emphasizes human relationships and the meaningfulness of people's unique lives. If people properly reflected on their values, especially what is ultimately good for those they care about, most of the wicked problems would be resolved. How to reference this article: Salonen, A. O., & Konkka, J. (2015). An Ecosocial Approach to Well-Being: A Solution to the Wicked Problems in the Era of Anthropocene. Foro de Educación, 13(19), 19-34. doi: http://dx.doi.org/10.14516/fde.2015.013.019.002
Parallelizing Monte Carlo with PMC
International Nuclear Information System (INIS)
Rathkopf, J.A.; Jones, T.R.; Nessett, D.M.; Stanberry, L.C.
1994-11-01
PMC (Parallel Monte Carlo) is a system of generic interface routines that allows easy porting of Monte Carlo packages of large-scale physics simulation codes to Massively Parallel Processor (MPP) computers. By loading various versions of PMC, simulation code developers can configure their codes to run in several modes: serial, Monte Carlo runs on the same processor as the rest of the code; parallel, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on other MPP processor(s); distributed, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on a different machine. This multi-mode approach allows maintenance of a single simulation code source regardless of the target machine. PMC handles passing of messages between nodes on the MPP, passing of messages between a different machine and the MPP, distributing work between nodes, and providing independent, reproducible sequences of random numbers. Several production codes have been parallelized under the PMC system. Excellent parallel efficiency in both the distributed and parallel modes results if sufficient workload is available per processor. Experiences with a Monte Carlo photonics demonstration code and a Monte Carlo neutronics package are described
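PMC itself is an interface layer over MPI on MPP machines, but the organizing idea, each worker owning an independent, reproducible random stream whose tallies are combined at the end, can be sketched in a few lines. The toy Python sketch below is mine, not PMC code: the worker map runs sequentially here (in PMC it would be distributed across nodes), and the seed-offset scheme is an illustrative assumption, not PMC's actual random-number management.

```python
import random

def worker_estimate(seed, n_samples):
    """One 'processor': estimate pi by dart-throwing with its own seeded RNG stream."""
    rng = random.Random(seed)  # independent, reproducible stream for this worker
    hits = sum(1 for _ in range(n_samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n_samples

def parallel_pi(n_workers=4, n_per_worker=100_000, base_seed=12345):
    """Combine per-worker tallies; in a real PMC-style run this map would be
    distributed across MPP nodes instead of executed sequentially."""
    seeds = [base_seed + i for i in range(n_workers)]  # illustrative seeding only
    partials = [worker_estimate(s, n_per_worker) for s in seeds]
    return sum(partials) / n_workers

estimate = parallel_pi()
```

Because each stream is fixed by its seed, rerunning the calculation with the same worker count reproduces the result bit-for-bit, which is the reproducibility property the abstract emphasizes.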
Lectures on Monte Carlo methods
Madras, Neal
2001-01-01
Monte Carlo methods form an experimental branch of mathematics that employs simulations driven by random number generators. These methods are often used when others fail, since they are much less sensitive to the "curse of dimensionality", which plagues deterministic methods in problems with a large number of variables. Monte Carlo methods are used in many fields: mathematics, statistics, physics, chemistry, finance, computer science, and biology, for instance. This book is an introduction to Monte Carlo methods for anyone who would like to use these methods to study various kinds of mathemati
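The insensitivity to dimension mentioned above is easy to see concretely: the error of plain Monte Carlo quadrature scales as n^(-1/2) regardless of dimension, while a grid-based rule needs exponentially many points. The following generic sketch (not code from the book) integrates a function over the 10-dimensional unit cube.

```python
import random

def mc_integrate(f, dim, n_samples, seed=0):
    """Plain Monte Carlo estimate of the integral of f over the unit cube [0,1]^dim."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = [rng.random() for _ in range(dim)]  # one uniform sample point
        total += f(x)
    return total / n_samples

# The integral of sum(x_i) over [0,1]^10 is exactly 5; the statistical error
# depends only on the sample count and the integrand's variance, not on dim.
estimate = mc_integrate(sum, dim=10, n_samples=20_000)
```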
Wormhole Hamiltonian Monte Carlo
Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak
2015-01-01
In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function. PMID:25861551
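The wormhole construction itself relies on Riemannian geometry, but the underlying difficulty and remedy can be caricatured in one dimension. The sketch below is a toy Metropolis sampler on a bimodal Gaussian mixture in which an occasional deterministic reflection proposal stands in for a "wormhole" between two known modes; it illustrates mode-jumping generically and is not the authors' algorithm.

```python
import math
import random

def log_target(x):
    """Bimodal target: equal mixture of N(-5, 1) and N(+5, 1), up to a constant."""
    return math.log(math.exp(-0.5 * (x - 5.0) ** 2)
                    + math.exp(-0.5 * (x + 5.0) ** 2))

def sample(n_steps=20_000, seed=7):
    rng = random.Random(seed)
    x, chain = 5.0, []
    for _ in range(n_steps):
        if rng.random() < 0.1:
            prop = -x                       # reflection 'wormhole' between modes
        else:
            prop = x + rng.gauss(0.0, 1.0)  # ordinary local random-walk step
        # Metropolis accept/reject keeps the target invariant
        if math.log(rng.random()) < log_target(prop) - log_target(x):
            x = prop
        chain.append(x)
    return chain

chain = sample()
frac_right = sum(1 for x in chain if x > 0) / len(chain)
```

Without the reflection move, a unit-scale random walk essentially never crosses the ten-sigma gap between modes; with it, the chain splits its time roughly evenly between them.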
Directory of Open Access Journals (Sweden)
Renee Newman-Storen
2014-09-01
Full Text Available Fundamental to Leadership in Sustainability, a course in the Masters in Sustainability and Climate Policy offered through the Curtin University Sustainability Policy (CUSP) Institute, is that the complexity, flexibility and vitality of sustainability are precisely why sustainability practitioners commit themselves to finding new and innovative solutions to complex problems. The course asks the student to "think differently" and to engage in debate that inspires and encourages creative thinking strategies for the planning and development of our cities and communities. This paper details what the course is about, how it is structured and what the connections are between creativity, sustainability and theories of leadership, arguing that strong and resilient leadership requires thinking differently in order to deal with "wicked problems" associated with sustainability.
Directory of Open Access Journals (Sweden)
Colleen Elaine Donnelly
2016-12-01
Full Text Available Fantasy and horror often exploit disabled people, presenting them as embodiments of terror and evil. In contemporary fantasy, we sometimes see archetypically evil characters redefined primarily by the telling of their backstories to provide rationale for their behavior and to evoke sympathy or pity from the audience. Pity often places the viewer in the position to seem benevolent while masking the ways that disabled people are often treated as inferior, different, and are isolated from the rest of society. In Wicked, Maleficent, and Game of Thrones, we are asked to confront the judgments and behaviors in which spectators and society engage. Instead of reaffirming the views and values of society, these works question and denounce our consumption of the stereotypes we have learned and our often unexamined behaviors towards those who are often treated as "others."
Li, Xingyu; Li, Jingyao; Michielsen, Stephen
2017-07-01
Bloodstain pattern analysis (BPA) of bloodstains on hard, non-porous surfaces has found widespread use in crime scene analysis and reconstruction for violent crimes in which bloodshed occurs. At many violent crime scenes, bloody clothing is also found and may be analyzed. However, to date, there are no definitive methods for analyzing bloodstains on textiles, even for simple drip stains. There are two major classes of textiles used for apparel and household textiles, weaves and knits. In this article, drip stains on two 100% cotton plain weave fabrics representative of bed sheets are analyzed. Since it is common practice in the manufacture of bed sheeting to use different types of yarn in the warp and weft direction to reduce cost, custom weaves were made from yarns produced by each of the three most common staple yarn production techniques to control this variable. It was found that porcine blood wicked into the fabrics made with ring spun yarn, but not into those made with open end or vortex spun yarns. The uneven wicking of blood into the different yarns resulted in elliptical-shaped stains on commercial bed sheeting that can be misleading when performing bloodstain pattern interpretation based on the stain morphology. This surprising result demonstrates that it is not sufficient to analyze the structure of the fabric, but one must also characterize the yarns from which the fabric is made. This study highlights the importance of a deeper characterization of the textile structure, even down to the yarn level, for BPA on textiles. Copyright © 2017 Elsevier B.V. All rights reserved.
Advanced Multilevel Monte Carlo Methods
Jasra, Ajay
2017-04-24
This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature and we describe different strategies which facilitate the application of MLMC within these methods.
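The telescoping idea can be made concrete on a standard toy problem (my sketch, not taken from this article): estimating E[S_T] for geometric Brownian motion under Euler discretization, where each correction level couples its fine and coarse paths by reusing the same Brownian increments, so that the corrections have small variance and need few samples.

```python
import math
import random

def euler_terminal(s0, mu, sigma, increments, h):
    """Euler-Maruyama terminal value of dS = mu*S dt + sigma*S dW."""
    s = s0
    for dw in increments:
        s += mu * s * h + sigma * s * dw
    return s

def mlmc_estimate(levels=(20000, 4000, 1000, 250), s0=1.0, mu=0.05,
                  sigma=0.2, T=1.0, seed=42):
    """Telescoping MLMC estimator of E[S_T]; level l uses 2^l Euler steps.
    The sample counts per level are illustrative, not optimized."""
    rng = random.Random(seed)
    total = 0.0
    for l, n_l in enumerate(levels):
        acc, n_fine = 0.0, 2 ** l
        h_f = T / n_fine
        for _ in range(n_l):
            dws = [rng.gauss(0.0, math.sqrt(h_f)) for _ in range(n_fine)]
            fine = euler_terminal(s0, mu, sigma, dws, h_f)
            if l == 0:
                acc += fine          # coarsest level: plain single-step estimator
            else:
                # coarse path reuses the SAME Brownian increments, pairwise summed
                coarse_dws = [dws[2 * i] + dws[2 * i + 1] for i in range(n_fine // 2)]
                coarse = euler_terminal(s0, mu, sigma, coarse_dws, 2 * h_f)
                acc += fine - coarse  # low-variance correction term Y_l
        total += acc / n_l
    return total

estimate = mlmc_estimate()  # exact answer is s0 * exp(mu * T)
```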
Handbook of Monte Carlo methods
National Research Council Canada - National Science Library
Kroese, Dirk P; Taimre, Thomas; Botev, Zdravko I
2011-01-01
... in rapid succession, the staggering number of related techniques, ideas, concepts and algorithms makes it difficult to maintain an overall picture of the Monte Carlo approach. This book attempts to encapsulate the emerging dynamics of this field of study"--
TARC: Carlo Rubbia's Energy Amplifier
Laurent Guiraud
1997-01-01
Transmutation by Adiabatic Resonance Crossing (TARC) is Carlo Rubbia's energy amplifier. This CERN experiment demonstrated that long-lived fission fragments, such as 99Tc, can be efficiently destroyed.
Monte Carlo simulation for IRRMA
International Nuclear Information System (INIS)
Gardner, R.P.; Liu Lianyan
2000-01-01
Monte Carlo simulation is fast becoming a standard approach for many radiation applications that were previously treated almost entirely by experimental techniques. This is certainly true for Industrial Radiation and Radioisotope Measurement Applications - IRRMA. The reasons for this include: (1) the increased cost and inadequacy of experimentation for design and interpretation purposes; (2) the availability of low cost, large memory, and fast personal computers; and (3) the general availability of general purpose Monte Carlo codes that are increasingly user-friendly, efficient, and accurate. This paper discusses the history and present status of Monte Carlo simulation for IRRMA including the general purpose (GP) and specific purpose (SP) Monte Carlo codes and future needs - primarily from the experience of the authors
Carlos Chagas: biographical sketch.
Moncayo, Alvaro
2010-01-01
Carlos Chagas was born on 9 July 1878 on the farm "Bon Retiro" located close to the City of Oliveira in the interior of the State of Minas Gerais, Brazil. He started his medical studies in 1897 at the School of Medicine of Rio de Janeiro. In the late nineteenth century, the works of Louis Pasteur and Robert Koch induced a change in the medical paradigm, with emphasis on experimental demonstrations of the causal link between microbes and disease. During the same years in Germany appeared the pathological concept of disease, linking organic lesions with symptoms. All these innovations were adopted by the reforms of the medical schools in Brazil and influenced the scientific formation of Chagas. Chagas completed his medical studies between 1897 and 1903, and his examinations during these years were always ranked with high grades. Oswaldo Cruz accepted Chagas as a doctoral candidate and directed his thesis on "Hematological studies of Malaria", which was received with honors by the examiners. In 1903 the director appointed Chagas as research assistant at the Institute. In those years, the Institute of Manguinhos, under the direction of Oswaldo Cruz, initiated a process of institutional growth and gathered a distinguished group of Brazilian and foreign scientists. In 1907, he was requested to investigate and control a malaria outbreak in Lassance, Minas Gerais. At that moment Chagas could not have imagined that this field research was the beginning of one of the most notable medical discoveries. Chagas was, at the age of 28, a Research Assistant at the Institute of Manguinhos and was studying a new flagellate parasite isolated from triatomine insects captured in the State of Minas Gerais. Chagas made his discoveries in this order: first the causal agent, then the vector and finally the human cases. These notable discoveries were carried out by Chagas in twenty months. At the age of 33 Chagas had completed his discoveries and published the scientific articles that gave him world
Adjoint electron Monte Carlo calculations
International Nuclear Information System (INIS)
Jordan, T.M.
1986-01-01
Adjoint Monte Carlo is the most efficient method for accurate analysis of space systems exposed to natural and artificially enhanced electron environments. Recent adjoint calculations for isotropic electron environments include: comparative data for experimental measurements on electronics boxes; benchmark problem solutions for comparing total dose prediction methodologies; preliminary assessment of sectoring methods used during space system design; and total dose predictions on an electronics package. Adjoint Monte Carlo, forward Monte Carlo, and experiment are in excellent agreement for electron sources that simulate space environments. For electron space environments, adjoint Monte Carlo is clearly superior to forward Monte Carlo, requiring one to two orders of magnitude less computer time for relatively simple geometries. The solid-angle sectoring approximations used for routine design calculations can err by more than a factor of 2 on dose in simple shield geometries. For critical space systems exposed to severe electron environments, these potential sectoring errors demand the establishment of large design margins and/or verification of shield design by adjoint Monte Carlo/experiment
Multilevel sequential Monte Carlo samplers
Beskos, Alexandros
2016-08-29
In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level hL. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h0 > h1 > ⋯ > hL. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context; that is, relative to exact sampling and Monte Carlo for the distribution at the finest level hL. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.
Zijp, Michiel C; Posthuma, Leo; Wintersen, Arjen; Devilee, Jeroen; Swartjes, Frank A
2016-05-01
This paper introduces Solution-focused Sustainability Assessment (SfSA), provides practical guidance formatted as a versatile process framework, and illustrates its utility for solving a wicked environmental management problem. Society faces complex and increasingly wicked environmental problems for which sustainable solutions are sought. Wicked problems are multi-faceted, and deriving of a management solution requires an approach that is participative, iterative, innovative, and transparent in its definition of sustainability and translation to sustainability metrics. We suggest to add the use of a solution-focused approach. The SfSA framework is collated from elements from risk assessment, risk governance, adaptive management and sustainability assessment frameworks, expanded with the 'solution-focused' paradigm as recently proposed in the context of risk assessment. The main innovation of this approach is the broad exploration of solutions upfront in assessment projects. The case study concerns the sustainable management of slightly contaminated sediments continuously formed in ditches in rural, agricultural areas. This problem is wicked, as disposal of contaminated sediment on adjacent land is potentially hazardous to humans, ecosystems and agricultural products. Non-removal would however reduce drainage capacity followed by increased risks of flooding, while contaminated sediment removal followed by offsite treatment implies high budget costs and soil subsidence. Application of the steps in the SfSA-framework served in solving this problem. Important elements were early exploration of a wide 'solution-space', stakeholder involvement from the onset of the assessment, clear agreements on the risk and sustainability metrics of the problem and on the interpretation and decision procedures, and adaptive management. Application of the key elements of the SfSA approach eventually resulted in adoption of a novel sediment management policy. The stakeholder
Markov Chain Monte Carlo Methods-Simple Monte Carlo
Indian Academy of Sciences (India)
Home; Journals; Resonance – Journal of Science Education; Volume 8; Issue 4. Markov Chain Monte Carlo ... New York 14853, USA. Indian Statistical Institute, 8th Mile, Mysore Road, Bangalore 560 059, India. Systat Software Asia-Pacific (P) Ltd., Floor 5, 'C' Tower, Golden Enclave, Airport Road, Bangalore 560017, India.
Exact Monte Carlo for molecules
Energy Technology Data Exchange (ETDEWEB)
Lester, W.A. Jr.; Reynolds, P.J.
1985-03-01
A brief summary of the fixed-node quantum Monte Carlo method is presented. Results obtained for binding energies, the classical barrier height for H + H2, and the singlet-triplet splitting in methylene are presented and discussed. 17 refs.
Markov Chain Monte Carlo Methods
Indian Academy of Sciences (India)
time Technical Consultant to. Systat Software Asia-Pacific. (P) Ltd., in Bangalore, where the technical work for the development of the statistical software Systat takes place. His research interests have been in statistical pattern recognition and biostatistics. Keywords. Markov chain, Monte Carlo sampling, Markov chain Monte.
Markov Chain Monte Carlo Methods
Indian Academy of Sciences (India)
Markov Chain Monte Carlo Methods. 2. The Markov Chain Case. K B Athreya, Mohan Delampady and T Krishnan. K B Athreya is a Professor at. Cornell University. His research interests include mathematical analysis, probability theory and its application and statistics. He enjoys writing for Resonance. His spare time is ...
Markov Chain Monte Carlo Methods
Indian Academy of Sciences (India)
GENERAL ! ARTICLE. Markov Chain Monte Carlo Methods. 3. Statistical Concepts. K B Athreya, Mohan Delampady and T Krishnan. K B Athreya is a Professor at. Cornell University. His research interests include mathematical analysis, probability theory and its application and statistics. He enjoys writing for Resonance.
Monte Carlo calculations of nuclei
Energy Technology Data Exchange (ETDEWEB)
Pieper, S.C. [Argonne National Lab., IL (United States). Physics Div.
1997-10-01
Nuclear many-body calculations have the complication of strong spin- and isospin-dependent potentials. In these lectures the author discusses the variational and Green`s function Monte Carlo techniques that have been developed to address this complication, and presents a few results.
Markov Chain Monte Carlo Methods
Indian Academy of Sciences (India)
ter of the 20th century, due to rapid developments in computing technology ... early part of this development saw a host of Monte ... These iterative. Monte Carlo procedures typically generate a random se- quence with the Markov property such that the Markov chain is ergodic with a limiting distribution coinciding with the ...
Is Monte Carlo embarrassingly parallel?
International Nuclear Information System (INIS)
Hoogenboom, J. E.
2012-01-01
Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
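The effect described above, speedup that peaks and then degrades as rendezvous overhead grows with processor count, can be captured by a toy Amdahl-style cost model. The parameter values below are illustrative assumptions for the sketch, not measurements from the paper.

```python
def model_speedup(p, serial_frac=0.002, sync_cost=0.0005):
    """Toy model: normalized run time on p processors when each cycle ends in a
    rendezvous whose cost grows linearly with the number of processors.
    serial_frac and sync_cost are made-up illustrative parameters."""
    time = serial_frac + (1.0 - serial_frac) / p + sync_cost * p
    return 1.0 / time

speedups = [model_speedup(p) for p in range(1, 201)]
best_p = max(range(1, 201), key=model_speedup)  # processor count at peak speedup
```

In this model the optimum sits near sqrt((1 - serial_frac) / sync_cost) processors; past that point the per-cycle synchronization term dominates and adding processors makes the run slower, matching the qualitative behavior the abstract reports.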
International Nuclear Information System (INIS)
Wagutu, Agatha W.; Thoruwa, Thomas F.N.; Chhabra, Sumesh C.; Lang'at-Thoruwa, Caroline C.; Mahunnah, R.L.A.
2010-01-01
With depletion of solid biomass fuels and their rising costs in recent years, there has been a shift towards using kerosene and liquefied petroleum gas (LPG) for domestic cooking in Kenya. However, the use of kerosene is associated with health and safety problems. Therefore, it is necessary to develop a clean, safe and sustainable liquid bio-fuel. Plant oil derivatives, fatty acid methyl esters (FAME), present such a promising solution. This paper presents the performance of a wick stove using FAME fuels derived from oil plants: Jatropha curcas L. (physic nut), Croton megalocarpus Hutch, Calodendrum capense (L.f.) Thunb., Cocos nucifera L. (coconut), soya beans and sunflower. The FAME performance tests were based on the standard water-boiling tests (WBT) and compared with kerosene. Unlike kerosene, all FAME fuels burned without odor or pungent smell, generating an average firepower of 1095 W with specific fuel consumption of 44.6 g L⁻¹ (55% higher than kerosene). The flash points of the FAME fuels obtained were typically much higher (2.3-3.3 times) than kerosene, implying that they are much safer to use than kerosene. From the results obtained, it was concluded that the FAME fuels have potential to provide a safe and sustainable cooking liquid fuel in developing countries.
Szcześ, Aleksandra; Yan, Yingdi; Chibowski, Emil; Hołysz, Lucyna; Banach, Marcin
2018-03-01
Surface free energy is one of the parameters accompanying interfacial phenomena, occurring also in biological systems. In this study the thin layer wicking method was used to determine surface free energy and its components for synthetic hydroxyapatite (HA) and natural HA obtained from pig bones. Raman, FTIR and X-ray photoelectron spectroscopy, X-ray diffraction techniques and thermal analysis showed that both samples consist of carbonated hydroxyapatite without any organic components. Surface free energy and its apolar and polar components were found to be similar for both investigated samples and equalled γS_TOT = 52.4 mJ/m², γS_LW = 40.2 mJ/m² and γS_AB = 12.3 mJ/m² for the synthetic HA and γS_TOT = 54.6 mJ/m², γS_LW = 40.3 mJ/m² and γS_AB = 14.3 mJ/m² for the natural one. The two HA samples had different electron acceptor (γS+) and electron donor (γS−) parameters. The higher value of the electron acceptor was found for the natural HA, whereas the electron donor one was higher for the synthetic HA.
Monte Carlo - Advances and Challenges
International Nuclear Information System (INIS)
Brown, Forrest B.; Mosteller, Russell D.; Martin, William R.
2008-01-01
Abstract only, full text follows: With ever-faster computers and mature Monte Carlo production codes, there has been tremendous growth in the application of Monte Carlo methods to the analysis of reactor physics and reactor systems. In the past, Monte Carlo methods were used primarily for calculating k_eff of a critical system. More recently, Monte Carlo methods have been increasingly used for determining reactor power distributions and many design parameters, such as β_eff, l_eff, τ, reactivity coefficients, Doppler defect, dominance ratio, etc. These advanced applications of Monte Carlo methods are now becoming common, not just feasible, and bring new challenges to both developers and users: convergence of 3D power distributions must be assured; confidence interval bias must be eliminated; iterated fission probabilities are required, rather than single-generation probabilities; temperature effects including Doppler and feedback must be represented; isotopic depletion and fission product buildup must be modeled. This workshop focuses on recent advances in Monte Carlo methods and their application to reactor physics problems, and on the resulting challenges faced by code developers and users. The workshop is partly tutorial, partly a review of the current state of the art, and partly a discussion of future work that is needed. It should benefit both novice and expert Monte Carlo developers and users. In each of the topic areas, we provide an overview of needs, perspective on past and current methods, a review of recent work, and discussion of further research and capabilities that are required. Electronic copies of all workshop presentations and material will be available. The workshop is structured as 2 morning and 2 afternoon segments: - Criticality Calculations I - convergence diagnostics, acceleration methods, confidence intervals, and the iterated fission probability, - Criticality Calculations II - reactor kinetics parameters, dominance ratio, temperature
(U) Introduction to Monte Carlo Methods
Energy Technology Data Exchange (ETDEWEB)
Hungerford, Aimee L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-03-20
Monte Carlo methods are very valuable for representing solutions to particle transport problems. Here we describe a “cook book” approach to handling the terms in a transport equation using Monte Carlo methods. Focus is on the mechanics of a numerical Monte Carlo code, rather than the mathematical foundations of the method.
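In the cook-book spirit described above, the mechanics reduce to sampling each term of the transport equation history by history: sample a free-flight distance, decide the collision outcome, repeat. The toy 1-D slab example below is my sketch (not from the report); for a pure absorber it reproduces the analytic transmission exp(-sigma_t * L).

```python
import random

def transmission(sigma_t, thickness, scatter_prob=0.0,
                 n_histories=200_000, seed=3):
    """Track particle histories through a 1-D slab [0, thickness]:
    exponential free paths, then absorb or scatter at each collision."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_histories):
        x, direction = 0.0, 1.0
        while True:
            x += direction * rng.expovariate(sigma_t)  # distance to next collision
            if x >= thickness:
                transmitted += 1                        # escaped the far face
                break
            if x < 0.0:
                break                                   # leaked out the near face
            if rng.random() < scatter_prob:
                direction = 1.0 if rng.random() < 0.5 else -1.0  # 1-D 'isotropic'
            else:
                break                                   # absorbed
    return transmitted / n_histories

# With no scattering the answer is analytic: exp(-sigma_t * thickness)
t_est = transmission(sigma_t=1.0, thickness=2.0)
```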
Shell model Monte Carlo methods
International Nuclear Information System (INIS)
Koonin, S.E.
1996-01-01
We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, thermal behavior of γ-soft nuclei, and calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs
Zimmerman, George B.
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, inflight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.
Adaptive Multilevel Monte Carlo Simulation
Hoel, H
2011-08-23
This work generalizes the multilevel forward Euler Monte Carlo method introduced by Michael B. Giles (Oper. Res. 56(3):607-617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. Giles proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single-level forward Euler Monte Carlo method. This work introduces an adaptive hierarchy of non-uniform time discretizations, generated by an adaptive algorithm introduced in (Anna Dzougoutov et al. Adaptive Monte Carlo algorithms for stopped diffusion. In Multiscale Methods in Science and Engineering, volume 44 of Lect. Notes Comput. Sci. Eng., pages 59-88. Springer, Berlin, 2005; Kyoung-Sook Moon et al. Stoch. Anal. Appl. 23(3):511-558, 2005; Kyoung-Sook Moon et al. An adaptive algorithm for ordinary, stochastic and partial differential equations. In Recent Advances in Adaptive Computation, volume 383 of Contemp. Math., pages 325-343. Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates stochastic, path-dependent time steps and is based on a posteriori error expansions first developed in (Anders Szepessy et al. Comm. Pure Appl. Math. 54(10):1169-1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL): from O(TOL^-3) for a single-level version of the adaptive algorithm to O((TOL^-1 log(TOL))^2).
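A minimal, uniform-timestep version of the multilevel estimator that this work generalizes can be sketched as follows: each level couples a fine and a coarse Euler grid driven by the same Brownian path, and the level means telescope to the finest-grid expectation. The geometric Brownian motion and its drift/volatility values are illustrative assumptions:

```python
import math
import random

def euler_pair(T, r, sigma, x0, n_fine, rng):
    """One coupled sample of X_T from Euler-Maruyama on a fine grid with
    n_fine steps and a coarse grid with n_fine//2 steps, driven by the same
    Brownian path (pairs of fine increments are summed for the coarse step)."""
    dt = T / n_fine
    xf = xc = x0
    for _ in range(n_fine // 2):
        dw1 = rng.gauss(0.0, math.sqrt(dt))
        dw2 = rng.gauss(0.0, math.sqrt(dt))
        xf += r * xf * dt + sigma * xf * dw1
        xf += r * xf * dt + sigma * xf * dw2
        xc += r * xc * (2 * dt) + sigma * xc * (dw1 + dw2)
    return xf, xc

def mlmc_mean(T, r, sigma, x0, levels, samples_per_level, seed=0):
    """Multilevel estimate of E[X_T] for dX = r*X dt + sigma*X dW:
    level 0 is plain 2-step Euler; level l adds the coupled correction
    E[P_l - P_(l-1)] computed on 2**(l+1) fine steps."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(samples_per_level[0]):
        xf, _ = euler_pair(T, r, sigma, x0, 2, rng)
        acc += xf
    est = acc / samples_per_level[0]
    for l in range(1, levels):
        acc = 0.0
        for _ in range(samples_per_level[l]):
            xf, xc = euler_pair(T, r, sigma, x0, 2 ** (l + 1), rng)
            acc += xf - xc                      # control-variate correction
        est += acc / samples_per_level[l]
    return est
```

Because the corrections have small variance, far fewer samples are needed on the expensive fine levels; the adaptive method of the abstract additionally makes the time steps stochastic and path dependent.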
Extending canonical Monte Carlo methods
International Nuclear Information System (INIS)
Velazquez, L; Curilef, S
2010-01-01
In this paper, we discuss the implications of a recently obtained equilibrium fluctuation-dissipation relation for the extension of the available Monte Carlo methods based on consideration of the Gibbs canonical ensemble, in order to account for the existence of an anomalous regime with negative heat capacities C < 0, with a dynamical exponent α ≈ 0.2 obtained for the particular case of the 2D ten-state Potts model
Parallel Monte Carlo reactor neutronics
International Nuclear Information System (INIS)
Blomquist, R.N.; Brown, F.B.
1994-01-01
The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved
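The reproducibility issue named in this abstract is commonly handled by giving each parallel node its own decorrelated random-number stream, so the combined tally does not depend on which physical processor ran which batch. A toy sketch of that seeding discipline (batches run sequentially here; this is illustrative and unrelated to the two codes studied in the paper):

```python
import random

def run_history_batch(seed, n):
    """One 'node' of a parallel Monte Carlo run: its own reproducible RNG
    stream, here scoring a trivial tally (points inside the unit circle)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y < 1.0:
            hits += 1
    return hits

def parallel_estimate(n_total, n_nodes, base_seed=12345):
    """Split n_total histories over n_nodes batches with distinct seeds.
    Because each batch owns its seed, the combined estimate is bit-for-bit
    reproducible regardless of batch-to-processor assignment."""
    per_node = n_total // n_nodes
    hits = sum(run_history_batch(base_seed + i, per_node) for i in range(n_nodes))
    return 4.0 * hits / (per_node * n_nodes)
```

Production codes use stronger stream-separation schemes (e.g. leapfrogged or jump-ahead generators) rather than adjacent seeds; the point here is only that per-batch streams decouple results from scheduling.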
Directory of Open Access Journals (Sweden)
J. Henrich. Bruggemann
2012-12-01
Full Text Available High-latitude coral reefs may be a refuge and area of reef expansion under climate change. As these locations are expected to become drier, and as livestock and agricultural yields decline, coastal populations may become increasingly dependent on marine resources. To evaluate this social-ecological conundrum, we examined the Grand Récif of Toliara (GRT), southwest Madagascar, which was intensively studied in the 1960s and has been highly degraded since the 1980s. We analyzed the social and ecological published and unpublished literature on this region and provide new data to assess the magnitude of the changes and evaluate the causes of reef degradation. Top-down controls were identified as the major drivers: human population growth and migrations, overfishing, and climate change, specifically decreased rainfall and rising temperature. Water quality has not changed since originally studied, and bottom-up control was ruled out. The identified network of social-ecological processes acting at different scales implies that decision makers will face complex problems that are linked to broader social, economic, and policy issues. This characterizes wicked problems, which are often dealt with by partial solutions that are exploratory and include inputs from various stakeholders along with information sharing, knowledge synthesis, and trust building. A hybrid approach based on classical fishery management options and preferences, along with monitoring, feedback and forums for searching solutions, could move the process of adaptation forward once an adaptive and appropriately scaled governance system is functioning. This approach has broad implications for resources management given emerging climate change and multiple social and environmental stresses.
Directory of Open Access Journals (Sweden)
Andreas Velthuizen
2012-07-01
Full Text Available The paper is presented against a background of the many wicked problems that confront us in the world today, such as violent crime, conflict that emanates from political power-seeking, contests for scarce resources, the growing worldwide reaction to deteriorating socio-economic conditions, and the devastation caused by natural disasters. This article argues that the challenge of violent conflict requires an innovative approach to research and problem solving, and proposes a research methodology that follows a transdisciplinary approach. The argument is informed by field research during 2006 on the management of knowledge in the Great Lakes region of Africa, including research on how knowledge of the 1994 genocide in Rwanda is managed. The paper makes recommendations on how transdisciplinary research should be used to determine the causes of violent conflict in an African context and how practitioners and academics should engage in transdisciplinarity. It was found that transdisciplinary research is required to gain better insight into the causes of violent conflict in an African context. It requires the researcher to recognise the many levels of reality that have to be integrated towards a synthesis that reveals new insights into the causes of violent conflict, including recognising the existence of a normative-spiritual realm that informs the epistemology of Africa. It furthermore requires a methodology that allows us to break out of the stifling constraints of systems thinking and linear processes into the inner space at the juncture where disciplines meet (the diversity of African communities). Keywords: Africa, conflict, Rwanda, crime, genocide, violence, transdisciplinary. Disciplines: politics, education, law, epistemology, sociology, theology, management science
Cuartel San Carlos. Yacimiento veterano
Directory of Open Access Journals (Sweden)
Mariana Flores
2007-01-01
Full Text Available The Cuartel San Carlos is a national historic monument (1986) dating from the end of the 18th century (1785-1790), marked by various adversities during its construction and by having withstood the earthquakes of 1812 and 1900. In 2006, the body responsible for its custody, the Institute of Cultural Heritage of the Ministry of Culture, carried out three stages of archaeological exploration covering the back courtyard (Traspatio), the central courtyard (Patio Central), and the east and west wings of the building. This paper reviews the analysis of the archaeological documentation obtained at the site through that project, called EACUSAC (Archaeological Study of the Cuartel San Carlos), which also represents the third campaign carried out at the site. The importance of this historic site lies in its role in the events that gave rise to power struggles during the emergence of the Republic and in the political events of the 20th century. The site likewise yielded a broad sample of archaeological materials that document everyday military life, as well as the internal social dynamics that took place in the San Carlos as a strategic location for the defense of the different regimes the country has passed through, from the era of Spanish imperialism to the present day.
Carlos Battilana: Profesor, Gestor, Amigo
Directory of Open Access Journals (Sweden)
José Pacheco
2009-12-01
Full Text Available The Editorial Committee of Anales has lost one of its most distinguished members. A brilliant teacher at our Faculty, Carlos Alberto Battilana Guanilo (1945-2009) knew how to convey knowledge and capture the attention of his audiences, whether young students or not-so-young contemporaries. He drew his students onto the path of continuing education and research, and he brought distinguished physicians together to form and lead groups devoted to science and friendship. His teaching vocation linked him to medical schools, academies and scientific societies, where he coordinated fondly remembered courses and congresses. His scientific output was devoted to nephrology, immunology, cancer, and the costs of medical treatment. His managerial skill and leadership, evident since his student days, allowed him to become regional director of a prestigious pharmaceutical laboratory, to organize a medical school, and later to serve as dean of the faculty of health sciences of that private university. Carlos was instrumental in Anales attaining a privileged position among Peruvian biomedical journals. In the profile we publish, we attempt to summarize briefly the career of Carlos Battilana, weeks after his departure without return.
Directory of Open Access Journals (Sweden)
Rafael Maya
1979-04-01
Full Text Available Among the poets of the Centenario generation, Luis Carlos López enjoyed great popularity abroad from the publication of his first book onward. I believe his work drew the attention of philosophers such as Unamuno and, if I am not mistaken, Darío referred to it in laudatory terms. In Colombia it has been praised hyperbolically by some, while others grant it no particular merit.
Antitwilight II: Monte Carlo simulations.
Richtsmeier, Steven C; Lynch, David K; Dearborn, David S P
2017-07-01
For this paper, we employ the Monte Carlo scene (MCScene) radiative transfer code to elucidate the underlying physics giving rise to the structure and colors of the antitwilight, i.e., twilight opposite the Sun. MCScene calculations successfully reproduce colors and spatial features observed in videos and still photos of the antitwilight taken under clear, aerosol-free sky conditions. Through simulations, we examine the effects of solar elevation angle, Rayleigh scattering, molecular absorption, aerosol scattering, multiple scattering, and surface reflectance on the appearance of the antitwilight. We also compare MCScene calculations with predictions made by the MODTRAN radiative transfer code for a solar elevation angle of +1°.
Carlos Restrepo. Un verdadero Maestro
Pelayo Correa
2009-01-01
Carlos Restrepo was the first professor of Pathology and an illustrious member of the group of pioneers who founded the Faculty of Medicine of the Universidad del Valle. These pioneers converged on Cali in the 1950s, possessed of a renewing and creative spirit that undertook, with great success, the task of changing the academic culture of the Valle del Cauca. They found a peaceable society that enjoyed the generosity of its surroundings, with no desire to break with centuries-old traditions...
Monte Carlo techniques in radiation therapy
Verhaegen, Frank
2013-01-01
Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book, the first of its kind, addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...
Mean field simulation for Monte Carlo integration
Del Moral, Pierre
2013-01-01
In the last three decades, there has been a dramatic increase in the use of interacting particle methods as a powerful tool in real-world applications of Monte Carlo simulation in computational physics, population biology, computer sciences, and statistical machine learning. Ideally suited to parallel and distributed computation, these advanced particle algorithms include nonlinear interacting jump diffusions; quantum, diffusion, and resampled Monte Carlo methods; Feynman-Kac particle models; genetic and evolutionary algorithms; sequential Monte Carlo methods; adaptive and interacting Markov chain Monte Carlo models...
Monte Carlo simulations of neutron scattering instruments
International Nuclear Information System (INIS)
Aestrand, Per-Olof; Copenhagen Univ.; Lefmann, K.; Nielsen, K.
2001-01-01
A Monte Carlo simulation is an important computational tool used in many areas of science and engineering. The use of Monte Carlo techniques for simulating neutron scattering instruments is discussed. The basic ideas, techniques and approximations are presented. Since the construction of a neutron scattering instrument is very expensive, Monte Carlo software used for the design of instruments has to be validated and tested extensively. The McStas software was designed with these aspects in mind, and some of the basic principles of the McStas software will be discussed. Finally, some future prospects are discussed for using Monte Carlo simulations in optimizing neutron scattering experiments. (R.P.)
Status of Monte Carlo dose planning
International Nuclear Information System (INIS)
Mackie, T.R.
1995-01-01
Monte Carlo simulation will become increasingly important for treatment planning for radiotherapy. The EGS4 Monte Carlo system, a general particle transport system, has been used most often for simulation tasks in radiotherapy, although ETRAN/ITS and MCNP have also been used. Monte Carlo treatment planning requires that the beam characteristics, such as the energy spectrum and angular distribution of particles emerging from clinical accelerators, be accurately represented. An EGS4 Monte Carlo code, called BEAM, was developed by the OMEGA Project (a collaboration between the University of Wisconsin and the National Research Council of Canada) to transport particles through linear accelerator heads. This information was used as input to simulate the passage of particles through CT-based representations of phantoms or patients using both an EGS4 code (DOSXYZ) and the macro Monte Carlo (MMC) method. Monte Carlo computed 3-D electron beam dose distributions compare well to measurements obtained in simple and complex heterogeneous phantoms. The present drawback with most Monte Carlo codes is that simulation times are slower than those of most non-stochastic dose computation algorithms. This is especially true for photon dose planning. In the future, dedicated Monte Carlo treatment planning systems like Peregrine (from Lawrence Livermore National Laboratory), which will be capable of computing the dose from all beam types, or the macro Monte Carlo (MMC) system, which is an order of magnitude faster than other algorithms, may dominate the field
Directory of Open Access Journals (Sweden)
Fernando Garavito
1981-06-01
Full Text Available The literary criticism of recent years has grown accustomed to seeing in Guillermo Valencia the emblem of an era, one that must be referred to, for better or for worse, whenever limits are to be set on the poetic activity of any of his contemporaries. And although the claim does not entirely hold for those who consider themselves his disciples, because in their case the august pride of the master of Popayán sets them apart, it does hold, and to a high degree, for Luis Carlos López, who by his tone, his themes and his "breath" has become common property, handled by all.
Monte Carlo lattice program KIM
International Nuclear Information System (INIS)
Cupini, E.; De Matteis, A.; Simonini, R.
1980-01-01
The Monte Carlo program KIM solves the steady-state linear neutron transport equation for a fixed-source problem or, by successive fixed-source runs, for the eigenvalue problem, in a two-dimensional thermal reactor lattice. Fluxes and reaction rates are the main quantities computed by the program, from which power distribution and few-group averaged cross sections are derived. The simulation ranges from 10 MeV to zero and includes anisotropic and inelastic scattering in the fast energy region, the epithermal Doppler broadening of the resonances of some nuclides, and the thermalization phenomenon by taking into account the thermal velocity distribution of some molecules. Besides the well known combinatorial geometry, the program allows complex configurations to be represented by a discrete set of points, an approach greatly improving calculation speed
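The successive fixed-source (generation-by-generation) approach to the eigenvalue problem described above can be illustrated with a deliberately simplified one-group, 1-D slab model. The cross sections, ν, slab width and population sizes below are invented for illustration and have nothing to do with KIM's lattice physics:

```python
import math
import random

def run_generation(sites, rng, sigma_t=1.0, p_scatter=0.6, p_fission=0.2,
                   nu=2.5, slab=100.0):
    """Transport one generation of neutrons in a one-group 1-D slab and
    return the banked fission sites (the next generation's source)."""
    bank = []
    for x in sites:
        mu = rng.uniform(-1.0, 1.0)                      # isotropic start
        while True:
            x += mu * (-math.log(1.0 - rng.random()) / sigma_t)
            if x < 0.0 or x > slab:                      # leaked
                break
            xi = rng.random()
            if xi < p_scatter:                           # isotropic scatter
                mu = rng.uniform(-1.0, 1.0)
            elif xi < p_scatter + p_fission:             # fission terminates
                bank.extend([x] * int(nu + rng.random()))  # mean nu offspring
                break
            else:                                        # capture
                break
    return bank

def k_eigenvalue(n_per_gen=2000, n_gen=25, n_skip=5, seed=7):
    """Power iteration by successive generations:
    k_gen = fission neutrons produced / neutrons started.
    The toy system is supercritical, so the bank never empties in practice."""
    rng = random.Random(seed)
    sites = [50.0] * n_per_gen                           # start mid-slab
    k_vals = []
    for _ in range(n_gen):
        bank = run_generation(sites, rng)
        k_vals.append(len(bank) / len(sites))
        # renormalize the population for the next generation
        sites = [rng.choice(bank) for _ in range(n_per_gen)]
    return sum(k_vals[n_skip:]) / len(k_vals[n_skip:])
```

With leakage negligible for this thick slab, the estimate should sit near the infinite-medium value ν·Σf/Σa = 2.5 × 0.2 / 0.4 = 1.25.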
Directory of Open Access Journals (Sweden)
Bárbara Bustamante
2005-01-01
Full Text Available The talent of Carlos Alonso (Argentina, 1929) has succeeded in forging a language with a style of its own. His drawings, paintings, pastels and inks, collages and prints fixed in the visual field the projection of his subjectivity. Both image and word make explicit a critical vision of reality that presses upon viewers, compelling them toward a reflective stance committed to the message; this is the aspect most emphasized by art historians. The present study, however, aims to focus on the iconic and plastic aspects of his work.
Monte Carlo simulation of experiments
International Nuclear Information System (INIS)
Opat, G.I.
1977-07-01
An outline of the technique of computer simulation of particle physics experiments by the Monte Carlo method is presented. Useful special-purpose subprograms are listed and described. At each stage the discussion is made concrete by direct reference to the program SIMUL8 and its variant MONTE-PION, written to assist in the analysis of the radiative decay experiments μ⁺ → e⁺ ν_e ν̄_μ γ and π⁺ → e⁺ ν_e γ, respectively. These experiments were based on the use of two large sodium iodide crystals, TINA and MINA, as e and γ detectors. Instructions for the use of SIMUL8 and MONTE-PION are given. (author)
Monte Carlo Simulation of Phase Transitions
Murai, N. (村井, 信行); College of General Education, Chukyo University (中京大学教養部)
1983-01-01
In the Monte Carlo simulation of phase transitions, a simple heat bath method is applied to the classical Heisenberg model in two dimensions. It reproduces the correlation length predicted by the Monte Carlo renormalization group and also computed in the non-linear σ model
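The heat-bath update draws each spin directly from its conditional equilibrium distribution given its neighbours. As a runnable illustration, here is the same idea for the 2-D Ising model, where that conditional has a simple closed form, rather than the Heisenberg model treated in the paper:

```python
import math
import random

def heat_bath_sweep(spins, L, beta, rng):
    """One heat-bath sweep of the 2-D Ising model on an L x L periodic
    lattice: each spin is redrawn from its exact conditional distribution
    given its four neighbours, P(s = +1) = 1 / (1 + exp(-2*beta*h))."""
    for i in range(L):
        for j in range(L):
            h = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                 + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * h))
            spins[i][j] = 1 if rng.random() < p_up else -1

def mean_abs_magnetization(L=16, beta=0.6, sweeps=300, burn_in=100, seed=3):
    """Average |m| per spin after discarding burn-in sweeps."""
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]
    m_acc = 0.0
    for t in range(sweeps):
        heat_bath_sweep(spins, L, beta, rng)
        if t >= burn_in:
            m_acc += abs(sum(map(sum, spins))) / (L * L)
    return m_acc / (sweeps - burn_in)
```

Below the critical temperature (beta above about 0.44) the lattice stays strongly magnetized; above it the magnetization collapses, which is the qualitative signature of the transition such simulations probe.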
Advanced Computational Methods for Monte Carlo Calculations
Energy Technology Data Exchange (ETDEWEB)
Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2018-01-12
This course is intended for graduate students who already have a basic understanding of Monte Carlo methods. It focuses on advanced topics that may be needed for thesis research, for developing new state-of-the-art methods, or for working with modern production Monte Carlo codes.
DEFF Research Database (Denmark)
Maron, Martine; Ives, Christopher D.; Kujala, Heini
2016-01-01
for biodiversity conservation outcomes, and what do we need to know to decide? We present a concise synthesis of the most contentious issues related to biodiversity offsetting, categorized as ethical, social, technical, or governance challenges. In each case, we discuss avenues for reducing disagreement over...... these issues and identify those that are likely to remain unresolved. We argue that there are many risks associated with the unscrutinized expansion of offset policy. Nevertheless, governments are increasingly adopting offset policies, so working rapidly to clarify and-where possible-to resolve these issues...
The MC21 Monte Carlo Transport Code
International Nuclear Information System (INIS)
Sutton TM; Donovan TJ; Trumbull TH; Dobreff PS; Caro E; Griesheimer DP; Tyburski LJ; Carpenter DC; Joo H
2007-01-01
MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or ''tool of last resort'' and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities
Monte Carlo simulation in nuclear medicine
International Nuclear Information System (INIS)
Morel, Ch.
2007-01-01
The Monte Carlo method allows random processes to be simulated by using series of pseudo-random numbers. It became an important tool in nuclear medicine to assist in the design of new medical imaging devices, optimise their use and analyse their data. Presently, the sophistication of the simulation tools allows the introduction of Monte Carlo predictions into data correction and image reconstruction processes. The ability to simulate time-dependent processes opens up new horizons for Monte Carlo simulation in nuclear medicine. In the near future, these developments will make it possible to tackle imaging and dosimetry issues simultaneously, and Monte Carlo simulations may soon become part of the nuclear medicine diagnostic process. This paper describes some Monte Carlo method basics and the sampling methods that were developed for it. It gives a referenced list of different simulation software used in nuclear medicine and enumerates some of their present and prospective applications. (author)
Monte carlo simulation for soot dynamics
Zhou, Kun
2012-01-01
A new Monte Carlo method, termed comb-like frame Monte Carlo, is developed to simulate soot dynamics. A detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas-phase solver Chemkin II to simulate soot formation in a 1-D premixed burner-stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with the measurements available in the literature. The origin of the bimodal particle size distribution is revealed with quantitative proof.
Monte Carlo approaches to light nuclei
International Nuclear Information System (INIS)
Carlson, J.
1990-01-01
Significant progress has been made recently in the application of Monte Carlo methods to the study of light nuclei. We review new Green's function Monte Carlo results for the alpha particle, Variational Monte Carlo studies of ¹⁶O, and methods for low-energy scattering and transitions. Through these calculations, a coherent picture of the structure and electromagnetic properties of light nuclei has arisen. In particular, we examine the effect of the three-nucleon interaction and the importance of exchange currents in a variety of experimentally measured properties, including form factors and capture cross sections. 29 refs., 7 figs.
Importance iteration in MORSE Monte Carlo calculations
International Nuclear Information System (INIS)
Kloosterman, J.L.; Hoogenboom, J.E.
1994-02-01
An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example, which shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation. (orig.)
Adaptive Markov Chain Monte Carlo
Jadoon, Khan
2016-08-08
A substantial interpretation of electromagnetic induction (EMI) measurements requires quantifying optimal model parameters and the uncertainty of a nonlinear inverse problem. For this purpose, an adaptive Bayesian Markov chain Monte Carlo (MCMC) algorithm is used to assess multi-orientation and multi-offset EMI measurements in an agricultural field with non-saline and saline soil. In the MCMC simulations, the posterior distribution was computed using Bayes' rule. The electromagnetic forward model, based on the full solution of Maxwell's equations, was used to simulate the apparent electrical conductivity measured with the configurations of the EMI instrument, the CMD Mini-Explorer. The model parameters and uncertainty for the three-layered earth model are investigated by using synthetic data. Our results show that in the scenario of non-saline soil, the layer-thickness parameters are not as well estimated as the layer electrical conductivities, because layer thickness exhibits a low sensitivity to the EMI measurements and is hence difficult to resolve. Application of the proposed MCMC-based inversion to field measurements in a drip irrigation system demonstrates that the model parameters can be estimated better for the saline soil than for the non-saline soil, and provides useful insight about parameter uncertainty for the assessment of the model outputs.
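The adaptive ingredient of such MCMC schemes, tuning the proposal during burn-in and then freezing it so the chain targets the correct posterior, can be sketched in one dimension. The standard-normal "posterior" and the 0.44 target acceptance rate (the classic optimum for 1-D random-walk proposals) are illustrative assumptions, not the EMI inversion of the paper:

```python
import math
import random

def log_target(x):
    """Unnormalised log-density of the target: a standard normal here,
    standing in for a Bayesian posterior over a model parameter."""
    return -0.5 * x * x

def adaptive_metropolis(n_samples, burn_in=2000, target_accept=0.44, seed=11):
    """Random-walk Metropolis whose proposal scale is tuned during burn-in
    by a Robbins-Monro rule pushing the acceptance rate toward the target;
    adaptation is frozen afterwards so the retained chain has the correct
    stationary distribution."""
    rng = random.Random(seed)
    x, log_scale = 0.0, 0.0
    samples = []
    for t in range(burn_in + n_samples):
        prop = x + math.exp(log_scale) * rng.gauss(0.0, 1.0)
        accept = math.log(1.0 - rng.random()) < log_target(prop) - log_target(x)
        if accept:
            x = prop
        if t < burn_in:     # diminishing adaptation of the proposal scale
            log_scale += ((1.0 if accept else 0.0) - target_accept) / math.sqrt(t + 1.0)
        else:
            samples.append(x)
    return samples
```

The retained samples should then reproduce the target's mean and variance, which is the basic check before trusting such a chain on a real inverse problem.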
Advanced computers and Monte Carlo
International Nuclear Information System (INIS)
Jordan, T.L.
1979-01-01
High-performance parallelism that is currently available is synchronous in nature. It is manifested in such architectures as the Burroughs ILLIAC-IV, CDC STAR-100, TI ASC, CRI CRAY-1, ICL DAP, and many special-purpose array processors designed for signal processing. This form of parallelism has apparently not been of significant value to many important Monte Carlo calculations. Nevertheless, there is much asynchronous parallelism in many of these calculations. A model of a production code that requires up to 20 hours per problem on a CDC 7600 is studied for suitability on some asynchronous architectures that are on the drawing board. The code is described, and some of its properties and resource requirements are identified for comparison with the corresponding properties and resources of some asynchronous multiprocessor architectures. Arguments are made for programmer aids and special syntax to identify and support important asynchronous parallelism. 2 figures, 5 tables
11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing
Nuyens, Dirk
2016-01-01
This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.
Monte Carlo simulations for plasma physics
International Nuclear Information System (INIS)
Okamoto, M.; Murakami, S.; Nakajima, N.; Wang, W.X.
2000-07-01
Plasma behaviours are very complicated and their analysis is generally difficult. However, when collisional processes play an important role in the plasma behaviour, the Monte Carlo method is often employed as a useful tool. For example, in neutral beam injection heating (NBI heating), electron or ion cyclotron heating, and alpha heating, Coulomb collisions slow down highly energetic particles and pitch-angle scatter them. These processes are often studied by the Monte Carlo technique, and good agreement can be obtained with experimental results. Recently, the Monte Carlo method has been developed to study fast-particle transport associated with heating and the generation of the radial electric field. Further, it is applied to investigating neoclassical transport in plasmas with steep gradients of density and temperature, which is beyond the conventional neoclassical theory. In this report, we briefly summarize the research done by the present authors utilizing the Monte Carlo method. (author)
Hybrid Monte Carlo methods in computational finance
Leitao Rodriguez, A.
2017-01-01
Monte Carlo methods are highly appreciated and intensively employed in computational finance in the context of financial derivatives valuation or risk management. The method offers valuable advantages like flexibility, easy interpretation and straightforward implementation. Furthermore, the
Simulation and the Monte Carlo method
Rubinstein, Reuven Y
2016-01-01
Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition over more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov Chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...
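Among the variance reduction techniques covered by such texts, antithetic variates are perhaps the simplest to demonstrate: pairing each uniform draw U with 1-U induces negative correlation for monotone integrands. A sketch for estimating E[exp(U)] = e - 1 (the integrand and sample sizes are illustrative choices, not an example from the book):

```python
import math
import random

def mc_plain(n, seed=0):
    """Crude Monte Carlo estimate of E[exp(U)], U ~ Uniform(0, 1)."""
    rng = random.Random(seed)
    vals = [math.exp(rng.random()) for _ in range(n)]
    m = sum(vals) / n
    var = sum((v - m) ** 2 for v in vals) / (n - 1)
    return m, var

def mc_antithetic(n_pairs, seed=1):
    """Antithetic variates: average f(U) with f(1 - U). Because exp is
    monotone, the pair is negatively correlated, cutting the variance
    at the same total number of function evaluations."""
    rng = random.Random(seed)
    vals = []
    for _ in range(n_pairs):
        u = rng.random()
        vals.append(0.5 * (math.exp(u) + math.exp(1.0 - u)))
    m = sum(vals) / n_pairs
    var = sum((v - m) ** 2 for v in vals) / (n_pairs - 1)
    return m, var
```

For this integrand the per-pair variance drops by roughly two orders of magnitude relative to the crude estimator.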
Carlos "the Jackal" sued his arrestor / Margo Pajuste
Pajuste, Margo
2006-01-01
Also published in: Postimees : na russkom jazõke [in Russian], 3 July, p. 11. The imprisoned notorious terrorist Carlos "the Jackal" has brought a court case against the man who once arrested him. He accuses the former head of the French intelligence service of kidnapping
Monte Carlo methods for particle transport
Haghighat, Alireza
2015-01-01
The Monte Carlo method has become the de facto standard in radiation transport. Although powerful, if not understood and used appropriately, the method can give misleading results. Monte Carlo Methods for Particle Transport teaches appropriate use of the Monte Carlo method, explaining the method's fundamental concepts as well as its limitations. Concise yet comprehensive, this well-organized text: * Introduces the particle importance equation and its use for variance reduction * Describes general and particle-transport-specific variance reduction techniques * Presents particle transport eigenvalue issues and methodologies to address these issues * Explores advanced formulations based on the author's research activities * Discusses parallel processing concepts and factors affecting parallel performance Featuring illustrative examples, mathematical derivations, computer algorithms, and homework problems, Monte Carlo Methods for Particle Transport provides nuclear engineers and scientists with a practical guide ...
Monte Carlo code development in Los Alamos
International Nuclear Information System (INIS)
Carter, L.L.; Cashwell, E.D.; Everett, C.J.; Forest, C.A.; Schrandt, R.G.; Taylor, W.M.; Thompson, W.L.; Turner, G.D.
1974-01-01
The present status of Monte Carlo code development at Los Alamos Scientific Laboratory is discussed. A brief summary is given of several of the most important neutron, photon, and electron transport codes. 17 references. (U.S.)
Quantum Monte Carlo approaches for correlated systems
Becca, Federico
2017-01-01
Over the past several decades, computational approaches to studying strongly-interacting systems have become increasingly varied and sophisticated. This book provides a comprehensive introduction to state-of-the-art quantum Monte Carlo techniques relevant for applications in correlated systems. It provides a clear overview of variational wave functions and a detailed presentation of stochastic sampling, including Markov chains and Langevin dynamics, which is developed into a discussion of Monte Carlo methods. The variational technique is described, from foundations to a detailed description of its algorithms. Further topics discussed include optimisation techniques, real-time dynamics and projection methods, including Green's function, reptation and auxiliary-field Monte Carlo, from basic definitions to advanced algorithms for efficient codes, and the book concludes with recent developments on the continuum space. Quantum Monte Carlo Approaches for Correlated Systems provides an extensive reference ...
Monte Carlo Algorithms for Linear Problems
Dimov, Ivan
2000-01-01
MSC Subject Classification: 65C05, 65U05. Monte Carlo methods are a powerful tool in many fields of mathematics, physics and engineering. It is known that these methods give statistical estimates for a functional of the solution by performing random sampling of a certain random variable whose mathematical expectation is the desired functional. Monte Carlo methods are methods for solving problems using random variables. In the book [16] edited by Yu. A. Shreider one can find the followin...
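The core statement above (the sample mean of a random variable whose expectation is the desired functional) can be shown with a minimal integration example; plain Python, illustrative only:

```python
import random

def mc_integral(f, a, b, n=100_000, seed=1):
    """Plain Monte Carlo quadrature: for U uniform on (a, b), the
    integral equals (b - a) * E[f(U)], so the sample mean of f at
    uniform random points is an unbiased estimate of the functional."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

estimate = mc_integral(lambda x: x * x, 0.0, 1.0)  # exact value is 1/3
```

The statistical error of such an estimate shrinks as 1/sqrt(n), independently of dimension, which is why the approach scales to the high-dimensional problems mentioned throughout these records.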
Multilevel Monte Carlo in Approximate Bayesian Computation
Jasra, Ajay
2017-02-13
In the following article we consider approximate Bayesian computation (ABC) inference. We introduce a method for numerically approximating ABC posteriors using multilevel Monte Carlo (MLMC). A sequential Monte Carlo version of the approach is developed and it is shown under some assumptions that for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.
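The single-level baseline that the multilevel construction improves upon is ordinary ABC rejection. A minimal sketch follows; the Bernoulli model, uniform prior, and tolerance are illustrative assumptions, not taken from the article:

```python
import random

def abc_rejection(observed_mean, n_accept=500, eps=0.05, seed=2):
    """Minimal single-level ABC rejection sampler. Prior: theta ~ U(0,1);
    data model: sample mean of 100 Bernoulli(theta) draws; accept theta
    whenever the simulated summary falls within eps of the observed one."""
    rng = random.Random(seed)
    accepted = []
    while len(accepted) < n_accept:
        theta = rng.random()                                # draw from prior
        sim = sum(rng.random() < theta for _ in range(100)) / 100.0
        if abs(sim - observed_mean) <= eps:                 # ABC accept step
            accepted.append(theta)
    return accepted

post = abc_rejection(0.7)
post_mean = sum(post) / len(post)
```

MLMC replaces this single tolerance eps with a sequence of decreasing tolerances and combines the levels so that most samples are drawn at the cheap, loose-tolerance levels.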
Bayesian statistics and Monte Carlo methods
Koch, K. R.
2018-03-01
The Bayesian approach allows an intuitive way to derive the methods of statistics. Probability is defined as a measure of the plausibility of statements or propositions. Three rules are sufficient to obtain the laws of probability. If the statements refer to the numerical values of variables, the so-called random variables, univariate and multivariate distributions follow. They lead to point estimation, by which unknown quantities, i.e. unknown parameters, are computed from measurements. The unknown parameters are random variables; in traditional statistics, which is not founded on Bayes' theorem, they are fixed quantities. Bayesian statistics therefore recommends itself for Monte Carlo methods, which generate random variates from given distributions. Monte Carlo methods, of course, can also be applied in traditional statistics. The unknown parameters are introduced as functions of the measurements, and the Monte Carlo methods give the covariance matrix and the expectation of these functions. A confidence region is derived in which the unknown parameters are situated with a given probability. Following a method of traditional statistics, hypotheses are tested by determining whether a value for an unknown parameter lies inside or outside the confidence region. The error propagation of a random vector by the Monte Carlo methods is presented as an application. If the random vector results from a nonlinearly transformed vector, its covariance matrix and its expectation follow from the Monte Carlo estimate. This saves the computation of a considerable number of derivatives, and errors of the linearization are avoided. The Monte Carlo method is therefore efficient. If the functions of the measurements are given by a sum of two or more random vectors with different multivariate distributions, the resulting distribution is generally not known. The Monte Carlo methods are then needed to obtain the covariance matrix and the expectation of the sum.
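The error-propagation idea described above (push draws through the nonlinear function instead of linearizing it) can be sketched for a hypothetical scalar example; the choice y = x^2 with a standard normal input is purely illustrative:

```python
import random

def mc_propagate(mean, sigma, f, n=200_000, seed=3):
    """Monte Carlo error propagation: draw the input from its
    distribution, push each draw through the nonlinear function f, and
    take the sample mean and variance of the outputs -- no derivatives
    or linearization needed."""
    rng = random.Random(seed)
    ys = [f(rng.gauss(mean, sigma)) for _ in range(n)]
    m = sum(ys) / n
    var = sum((y - m) ** 2 for y in ys) / (n - 1)
    return m, var

m, var = mc_propagate(0.0, 1.0, lambda x: x * x)
# for x ~ N(0,1), y = x^2 is chi-square with 1 dof: E[y] = 1, Var[y] = 2
```

A first-order (derivative-based) propagation would give zero variance here, since dy/dx vanishes at the mean; the Monte Carlo estimate avoids exactly this linearization error.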
Successful vectorization - reactor physics Monte Carlo code
International Nuclear Information System (INIS)
Martin, W.R.
1989-01-01
Most particle transport Monte Carlo codes in use today are based on the "history-based" algorithm, wherein one particle history at a time is simulated. Unfortunately, the "history-based" approach (present in all Monte Carlo codes until recent years) is inherently scalar and cannot be vectorized. In particular, the history-based algorithm cannot take advantage of vector architectures, which characterize the largest and fastest computers at the current time, vector supercomputers such as the Cray X/MP or IBM 3090/600. However, substantial progress has been made in recent years in developing and implementing a vectorized Monte Carlo algorithm. This algorithm follows portions of many particle histories at the same time and forms the basis for all successful vectorized Monte Carlo codes that are in use today. This paper describes the basic vectorized algorithm along with descriptions of several variations that have been developed by different researchers for specific applications. These applications have been mainly in the areas of neutron transport in nuclear reactor and shielding analysis and photon transport in fusion plasmas. The relative merits of the various approaches will be discussed and the present status of known vectorization efforts will be summarized along with available timing results, including results from the successful vectorization of 3-D general geometry, continuous energy Monte Carlo. (orig.)
Monte Carlo simulation of Markov unreliability models
International Nuclear Information System (INIS)
Lewis, E.E.; Boehm, F.
1984-01-01
A Monte Carlo method is formulated for the evaluation of the unreliability of complex systems with known component failure and repair rates. The formulation is in terms of a Markov process, allowing dependences between components to be modeled and computational efficiencies to be achieved in the Monte Carlo simulation. Two variance reduction techniques, forced transition and failure biasing, are employed to increase the computational efficiency of the random walk procedure. For an example problem these result in improved computational efficiency by more than three orders of magnitude over analog Monte Carlo. The method is generalized to treat problems with distributed failure and repair rate data, and a batching technique is introduced and shown to result in substantial increases in computational efficiency for an example problem. A method for separating the variance due to the data uncertainty from that due to the finite number of random walks is presented. (orig.)
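The analog baseline that forced transition and failure biasing improve upon can be sketched for a toy system. The 1-out-of-2 parallel configuration, the constant failure rate, and the absence of repair are all invented for illustration:

```python
import math
import random

def analog_unreliability(lam=0.01, mission=100.0, n=50_000, seed=4):
    """Analog Monte Carlo for a 1-out-of-2 parallel system with constant
    failure rate lam per component and no repair: sample both failure
    times and count histories in which the system (i.e. both components)
    fails within the mission time. Exact answer: (1 - exp(-lam*T))^2."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        t1 = rng.expovariate(lam)   # failure time of component 1
        t2 = rng.expovariate(lam)   # failure time of component 2
        if max(t1, t2) <= mission:  # system fails only when both are down
            failures += 1
    return failures / n

q = analog_unreliability()
exact = (1.0 - math.exp(-0.01 * 100.0)) ** 2
```

For highly reliable systems the failure count above becomes tiny, which is precisely why the abstract's variance reduction techniques are needed in practice.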
Adiabatic optimization versus diffusion Monte Carlo methods
Jarret, Michael; Jordan, Stephen P.; Lackey, Brad
2016-10-01
Most experimental and theoretical studies of adiabatic optimization use stoquastic Hamiltonians, whose ground states are expressible using only real nonnegative amplitudes. This raises a question as to whether classical Monte Carlo methods can simulate stoquastic adiabatic algorithms with polynomial overhead. Here we analyze diffusion Monte Carlo algorithms. We argue that, based on differences between L1- and L2-normalized states, these algorithms suffer from certain obstructions preventing them from efficiently simulating stoquastic adiabatic evolution in generality. In practice however, we obtain good performance by introducing a method that we call Substochastic Monte Carlo. In fact, our simulations are good classical optimization algorithms in their own right, competitive with the best previously known heuristic solvers for MAX-k-SAT at k = 2, 3, 4.
Shell model the Monte Carlo way
International Nuclear Information System (INIS)
Ormand, W.E.
1995-01-01
The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined
Off-diagonal expansion quantum Monte Carlo.
Albash, Tameem; Wagenbreth, Gene; Hen, Itay
2017-12-01
We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.
Random Numbers and Monte Carlo Methods
Scherer, Philipp O. J.
Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages Monte Carlo methods are very useful which sample the integration volume at randomly chosen points. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with given probability distribution which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
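The Metropolis algorithm mentioned at the end of the abstract can be stated in a few lines; the harmonic potential below is an illustrative choice (it makes the target the standard normal), not an example from the book:

```python
import math
import random

def metropolis(energy, x0=0.0, beta=1.0, step=1.0, n=100_000,
               burn=1000, seed=5):
    """Textbook Metropolis algorithm: propose a symmetric random step
    and accept with probability min(1, exp(-beta * dE)), which samples
    the Boltzmann weight exp(-beta * E(x))."""
    rng = random.Random(seed)
    x, samples = x0, []
    for i in range(n + burn):
        xp = x + step * (2.0 * rng.random() - 1.0)   # symmetric proposal
        dE = energy(xp) - energy(x)
        if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
            x = xp                                   # accept the move
        if i >= burn:
            samples.append(x)                        # record (even on reject)
    return samples

s = metropolis(lambda x: 0.5 * x * x)   # harmonic well -> standard normal
mean = sum(s) / len(s)
var = sum(v * v for v in s) / len(s) - mean ** 2
```

Recording the current state even when a proposal is rejected is essential: the repeats are what give each configuration its correct Boltzmann weight.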
Monte Carlo strategies in scientific computing
Liu, Jun S
2008-01-01
This paperback edition is a reprint of the 2001 Springer edition. This book provides a self-contained and up-to-date treatment of the Monte Carlo method and develops a common framework under which various Monte Carlo techniques can be "standardized" and compared. Given the interdisciplinary nature of the topics and a moderate prerequisite for the reader, this book should be of interest to a broad audience of quantitative researchers such as computational biologists, computer scientists, econometricians, engineers, probabilists, and statisticians. It can also be used as the textbook for a graduate-level course on Monte Carlo methods. Many problems discussed in the later chapters can be potential thesis topics for master's or PhD students in statistics or computer science departments. Jun Liu is Professor of Statistics at Harvard University, with a courtesy Professor appointment at the Harvard Biostatistics Department. Professor Liu was the recipient of the 2002 COPSS Presidents' Award, the most prestigious one for sta...
Simulation of transport equations with Monte Carlo
International Nuclear Information System (INIS)
Matthes, W.
1975-09-01
The main purpose of the report is to explain the relation between the transport equation and the Monte Carlo game used for its solution. The introduction of artificial particles carrying a weight provides one with high flexibility in constructing many different games for the solution of the same equation. This flexibility opens a way to construct a Monte Carlo game for the solution of the adjoint transport equation. Emphasis is laid mostly on giving a clear understanding of what to do and not on the details of how to do a specific game
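The artificial weighted particles described above enable standard tricks such as Russian roulette, which terminates low-weight histories without biasing the game. A minimal sketch (the thresholds are arbitrary choices, not from the report):

```python
import random

def russian_roulette(weights, w_min=0.1, w_survive=0.5, seed=7):
    """Russian roulette on artificial particle weights: a history with
    weight w < w_min survives with probability w / w_survive and then
    carries weight w_survive, so the expected weight is unchanged and
    the game remains unbiased."""
    rng = random.Random(seed)
    out = []
    for w in weights:
        if w >= w_min:
            out.append(w)                 # heavy enough: keep unchanged
        elif rng.random() < w / w_survive:
            out.append(w_survive)         # survivor carries boosted weight
        # else: the history is terminated (contributes nothing further)
    return out

survivors = russian_roulette([0.05] * 100_000)
mean_weight = sum(survivors) / 100_000    # expected weight 0.05, preserved
```

The freedom to reassign weights this way, while keeping expectations fixed, is exactly the flexibility the abstract exploits to build a game for the adjoint equation.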
Self-learning Monte Carlo (dynamical biasing)
International Nuclear Information System (INIS)
Matthes, W.
1981-01-01
In many applications the histories of a normal Monte Carlo game rarely reach the target region. An approximate knowledge of the importance (with respect to the target) may be used to guide the particles more frequently into the target region. A Monte Carlo method is presented in which each history contributes to update the importance field such that eventually most target histories are sampled. It is a self-learning method in the sense that the procedure itself: (a) learns which histories are important (reach the target) and increases their probability; (b) reduces the probabilities of unimportant histories; (c) concentrates gradually on the more important target histories. (U.K.)
Monte Carlo electron/photon transport
International Nuclear Information System (INIS)
Mack, J.M.; Morel, J.E.; Hughes, H.G.
1985-01-01
A review of nonplasma coupled electron/photon transport using the Monte Carlo method is presented. Remarks are mainly restricted to linearized formalisms at electron energies from 1 keV to 1000 MeV. Applications involving pulse-height estimation, transport in external magnetic fields, and optical Cerenkov production are discussed to underscore the importance of this branch of computational physics. Advances in electron multigroup cross-section generation are reported, and their impact on future code development is assessed. Progress toward the transformation of MCNP into a generalized neutral/charged-particle Monte Carlo code is described. 48 refs
A keff calculation method by Monte Carlo
International Nuclear Information System (INIS)
Shen, H; Wang, K.
2008-01-01
The effective multiplication factor (k_eff) is defined as the ratio between the numbers of neutrons in successive generations, which is the definition adopted by most Monte Carlo codes (e.g. MCNP). Alternatively, it can be thought of as the ratio of the neutron generation rate to the sum of the leakage rate and the absorption rate, where the absorption rate should exclude the effect of neutron reactions such as (n, 2n) and (n, 3n). This article discusses a Monte Carlo method for k_eff calculation based on the second definition. A new code has been developed and the results are presented. (author)
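A generation-ratio estimate in the spirit of the first definition can be sketched for an infinite homogeneous toy medium. All probabilities below are invented for illustration (analytically k_eff = nu * p_fission = 1.0 here); a real code like the one in the article tracks geometry, leakage, and energy as well:

```python
import random

def keff_generations(p_fission=0.4, nu=2.5, n0=10_000, gens=20, seed=6):
    """Toy generation-based k_eff estimate: each neutron is absorbed,
    and with probability p_fission the absorption is a fission releasing
    2 or 3 neutrons (equally likely, so the mean is nu = 2.5). k_eff is
    estimated as the average ratio of successive generation sizes."""
    rng = random.Random(seed)
    n = n0
    ratios = []
    for _ in range(gens):
        nxt = 0
        for _ in range(n):
            if rng.random() < p_fission:            # fission event
                nxt += 2 if rng.random() < 0.5 else 3
        ratios.append(nxt / n)                      # generation ratio
        n = nxt                                     # next generation
    return sum(ratios) / len(ratios)

k = keff_generations()
```

In production codes the generation size is renormalized each cycle, and it is exactly this renormalization that introduces the subtle bias discussed in the "Biases in Monte Carlo eigenvalue calculations" record below.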
Monte Carlo Treatment Planning for Advanced Radiotherapy
DEFF Research Database (Denmark)
Cronholm, Rickard
This Ph.D. project describes the development of a workflow for Monte Carlo Treatment Planning for clinical radiotherapy plans. The workflow may be utilized to perform an independent dose verification of treatment plans. Modern radiotherapy treatment delivery is often conducted by dynamically modulating the intensity of the field during the irradiation. The workflow described has the potential to fully model the dynamic delivery, including gantry rotation during irradiation, of modern radiotherapy. Three corner stones of Monte Carlo Treatment Planning are identified: Building, commissioning...
Monte Carlo dose distributions for radiosurgery
International Nuclear Information System (INIS)
Perucha, M.; Leal, A.; Rincon, M.; Carrasco, E.
2001-01-01
The precision of radiosurgery treatment planning systems is limited by the approximations of their algorithms and by their dosimetric input data. This fact is especially important for small fields. However, the Monte Carlo method is an accurate alternative, as it considers every aspect of particle transport. In this work an acoustic neurinoma is studied by comparing the dose distributions from a planning system and from Monte Carlo. Relative shifts have been measured and, furthermore, dose-volume histograms have been calculated for the target and adjacent organs at risk. (orig.)
Monte Carlo applications to radiation shielding problems
International Nuclear Information System (INIS)
Subbaiah, K.V.
2009-01-01
Monte Carlo methods are a class of computational algorithms that rely on repeated random sampling of physical and mathematical systems to compute their results. The basic concepts of MC are both simple and straightforward and can be learned by using a personal computer. Monte Carlo methods require large quantities of random numbers, and it was their use that spurred the development of pseudorandom number generators, which were far quicker to use than the tables of random numbers that had previously been used for statistical sampling. In Monte Carlo simulation of radiation transport, the history (track) of a particle is viewed as a random sequence of free flights that end with an interaction event where the particle changes its direction of movement, loses energy and, occasionally, produces secondary particles. The Monte Carlo simulation of a given experimental arrangement (e.g., an electron beam coming from an accelerator and impinging on a water phantom) consists of the numerical generation of random histories. To simulate these histories we need an interaction model, i.e., a set of differential cross sections (DCS) for the relevant interaction mechanisms. The DCSs determine the probability distribution functions (pdf) of the random variables that characterize a track: 1) the free path between successive interaction events, 2) the type of interaction taking place, and 3) the energy loss and angular deflection in a particular event (and the initial state of emitted secondary particles, if any). Once these pdfs are known, random histories can be generated by using appropriate sampling methods. If the number of generated histories is large enough, quantitative information on the transport process may be obtained by simply averaging over the simulated histories. The Monte Carlo method yields the same information as the solution of the Boltzmann transport equation, with the same interaction model, but is easier to implement. In particular, the simulation of radiation
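The history loop described above (sample a free path, pick the interaction type, update the particle state) fits in a few lines for a toy one-dimensional slab. The cross section, absorption probability, and geometry below are invented for illustration, and the scattering model is deliberately crude:

```python
import random

def slab_transmission(mu_t=1.0, p_absorb=0.5, thickness=2.0,
                      n=100_000, seed=7):
    """Minimal Monte Carlo history loop: free paths are sampled from the
    exponential pdf mu_t * exp(-mu_t * s); at each collision the particle
    is absorbed with probability p_absorb or scattered isotropically
    (direction cosine resampled uniformly). Returns the fraction of
    normally incident particles transmitted through the slab."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n):
        x, w = 0.0, 1.0                    # position, direction cosine
        while True:
            s = rng.expovariate(mu_t)      # free path between events
            x += w * s
            if x >= thickness:
                transmitted += 1           # escaped through the far face
                break
            if x < 0.0:
                break                      # leaked back out of the slab
            if rng.random() < p_absorb:
                break                      # absorbed: history ends
            w = 2.0 * rng.random() - 1.0   # isotropic scatter
    return transmitted / n

frac = slab_transmission()
```

Averaging the transmitted fraction over histories is the "simply averaging" step of the abstract; tallies of energy deposition or flux work the same way.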
Fast sequential Monte Carlo methods for counting and optimization
Rubinstein, Reuven Y; Vaisman, Radislav
2013-01-01
A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the
Use of Monte Carlo Methods in brachytherapy; Uso del metodo de Monte Carlo en braquiterapia
Energy Technology Data Exchange (ETDEWEB)
Granero Cabanero, D.
2015-07-01
The Monte Carlo method has become a fundamental tool for brachytherapy dosimetry, mainly because it avoids the difficulties associated with experimental dosimetry. In brachytherapy the main handicap of experimental dosimetry is the high dose gradient near the sources, which makes small uncertainties in the positioning of the detectors lead to large uncertainties in the dose. This presentation will mainly review the procedure for calculating dose distributions around a source using the Monte Carlo method, showing the difficulties inherent in these calculations. In addition, we will briefly review other applications of the Monte Carlo method in brachytherapy dosimetry, such as its use in advanced calculation algorithms, in shielding calculations, or in obtaining dose distributions around applicators. (Author)
Specialized Monte Carlo codes versus general-purpose Monte Carlo codes
International Nuclear Information System (INIS)
Moskvin, Vadim; DesRosiers, Colleen; Papiez, Lech; Lu, Xiaoyi
2002-01-01
The possibilities of Monte Carlo modeling for dose calculation and treatment optimization are quite limited in radiation oncology applications. The main reason is that the Monte Carlo technique for dose calculations is time consuming, while treatment planning may require hundreds of possible cases of dose simulations to be evaluated for dose optimization. The second reason is that the general-purpose codes widely used in practice require an experienced user to customize them for calculations. This paper discusses a concept of Monte Carlo code design that can avoid the main problems that are preventing widespread use of this simulation technique in medical physics. (authors)
On the use of stochastic approximation Monte Carlo for Monte Carlo integration
Liang, Faming
2009-03-01
The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration via a dynamically weighted estimator by calling some results from the literature of nonhomogeneous Markov chains. Our numerical results indicate that SAMC can yield significant savings over conventional Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, for the problems for which the energy landscape is rugged. © 2008 Elsevier B.V. All rights reserved.
Monte Carlo methods in AB initio quantum chemistry quantum Monte Carlo for molecules
Lester, William A; Reynolds, PJ
1994-01-01
This book presents the basic theory and application of the Monte Carlo method to the electronic structure of atoms and molecules. It assumes no previous knowledge of the subject, only a knowledge of molecular quantum mechanics at the first-year graduate level. A working knowledge of traditional ab initio quantum chemistry is helpful, but not essential.Some distinguishing features of this book are: Clear exposition of the basic theory at a level to facilitate independent study. Discussion of the various versions of the theory: diffusion Monte Carlo, Green's function Monte Carlo, and release n
Directory of Open Access Journals (Sweden)
Steven G. Pueppke
2018-04-01
Full Text Available The Ili River originates in the mountains of Xinjiang, China, and flows across an increasingly arid landscape before terminating in Kazakhstan’s Lake Balkhash, which has no outlet to the ocean. The river has been extensively impounded and diverted over the past half century to produce hydroelectric power and food on irrigated land. Water withdrawals are increasing to the extent that they are beginning to threaten the ecosystem, just as it is becoming stressed by altered inflows as glaciers retreat and disappear. If the Ili River ecosystem is to be preserved, it is crucial that we thoroughly understand the spatial and temporal nuances of the interrelationships between water, energy, and food—and the vulnerability of these components to climate change. The ecosystem has all of the characteristics of a classically-defined “wicked problem”, and so it warrants treatment as a complex and dynamic challenge subject to changing assumptions, unexpected consequences, and strong social and economic overtones. Research should thus focus not just on new knowledge about the water, energy, or food component, but on advancing our understanding of the ecosystem as a whole. This will require the participation of interdisciplinary teams of researchers with both tacit and specialized knowledge.
Monte Carlo method in neutron activation analysis
International Nuclear Information System (INIS)
Majerle, M.; Krasa, A.; Svoboda, O.; Wagner, V.; Adam, J.; Peetermans, S.; Slama, O.; Stegajlov, V.I.; Tsupko-Sitnikov, V.M.
2009-01-01
Neutron activation detectors are a useful technique for the neutron flux measurements in spallation experiments. The study of the usefulness and the accuracy of this method at similar experiments was performed with the help of the Monte Carlo codes MCNPX and FLUKA.
Biases in Monte Carlo eigenvalue calculations
Energy Technology Data Exchange (ETDEWEB)
Gelbard, E.M.
1992-12-01
The Monte Carlo method has been used for many years to analyze the neutronics of nuclear reactors. In fact, as the power of computers has increased the importance of Monte Carlo in neutronics has also increased, until today this method plays a central role in reactor analysis and design. Monte Carlo is used in neutronics for two somewhat different purposes, i.e., (a) to compute the distribution of neutrons in a given medium when the neutron source-density is specified, and (b) to compute the neutron distribution in a self-sustaining chain reaction, in which case the source is determined as the eigenvector of a certain linear operator. In (b), then, the source is not given, but must be computed. In the first case (the "fixed-source" case) the Monte Carlo calculation is unbiased. That is to say that, if the calculation is repeated ("replicated") over and over, with independent random number sequences for each replica, then averages over all replicas will approach the correct neutron distribution as the number of replicas goes to infinity. Unfortunately, the computation is not unbiased in the second case, which we discuss here.
Monte Carlo method for random surfaces
International Nuclear Information System (INIS)
Berg, B.
1985-01-01
Previously two of the authors proposed a Monte Carlo method for sampling statistical ensembles of random walks and surfaces with a Boltzmann probabilistic weight. In the present paper we work out the details for several models of random surfaces, defined on d-dimensional hypercubic lattices. (orig.)
Computer system for Monte Carlo experimentation
International Nuclear Information System (INIS)
Grier, D.A.
1986-01-01
A new computer system for Monte Carlo experimentation is presented. The new system speeds and simplifies the process of coding and preparing a Monte Carlo experiment; it also encourages the proper design of Monte Carlo experiments and the careful analysis of the experimental results. A new functional language is the core of this system. Monte Carlo experiments, and their experimental designs, are programmed in this new language; those programs are compiled into Fortran output. The Fortran output is then compiled and executed. The experimental results are analyzed with a standard statistics package such as Si, Isp, or Minitab or with a user-supplied program. Both the experimental results and the experimental design may be directly loaded into the workspace of those packages. The new functional language frees programmers from many of the details of programming an experiment. Experimental designs such as factorial, fractional factorial, or Latin square are easily described by the control structures and expressions of the language. Specific mathematical models are generated by the routines of the language
Monte Carlo simulation of the microcanonical ensemble
International Nuclear Information System (INIS)
Creutz, M.
1984-01-01
We consider simulating statistical systems with a random walk on a constant energy surface. This combines features of deterministic molecular dynamics techniques and conventional Monte Carlo simulations. For discrete systems the method can be programmed to run an order of magnitude faster than other approaches. It does not require high quality random numbers and may also be useful for nonequilibrium studies. 10 references
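The constant-energy random walk described above is the classic "demon" algorithm: a spin flip is accepted only if an auxiliary degree of freedom can absorb or supply the energy change, so total energy is conserved exactly. A minimal sketch for a 1D Ising chain (the chain length, demon capacity, and sweep count are illustrative assumptions):

```python
import random

def demon_ising_1d(n_spins, demon_cap, sweeps, seed=0):
    """Microcanonical (demon) Monte Carlo for a 1D Ising chain, J = 1.

    Total energy (system + demon) is conserved: a flip is accepted
    only if the demon can pay the energy change, and the demon energy
    never goes negative. No random acceptance probability is needed.
    """
    rng = random.Random(seed)
    spins = [1] * n_spins            # start in the ground state
    demon = demon_cap                # demon carries all the spare energy
    demon_sum, samples = 0, 0
    for _ in range(sweeps):
        for i in range(n_spins):
            left, right = spins[i - 1], spins[(i + 1) % n_spins]
            dE = 2 * spins[i] * (left + right)   # cost of flipping spin i
            if demon - dE >= 0:                  # demon pays (or receives) dE
                spins[i] = -spins[i]
                demon -= dE
            demon_sum += demon
            samples += 1
    # The demon's mean energy is a thermometer for the system.
    return demon_sum / samples

mean_demon = demon_ising_1d(n_spins=200, demon_cap=40, sweeps=200)
print(mean_demon)
```

Note that the inner loop uses no random numbers beyond the site visit order, which is one reason the method tolerates low-quality generators.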
Workshop: Monte Carlo computational performance benchmark - Contributions
International Nuclear Information System (INIS)
Hoogenboom, J.E.; Petrovic, B.; Martin, W.R.; Sutton, T.; Leppaenen, J.; Forget, B.; Romano, P.; Siegel, A.; Hoogenboom, E.; Wang, K.; Li, Z.; She, D.; Liang, J.; Xu, Q.; Qiu, Y.; Yu, J.; Sun, J.; Fan, X.; Yu, G.; Bernard, F.; Cochet, B.; Jinaphanh, A.; Jacquet, O.; Van der Marck, S.; Tramm, J.; Felker, K.; Smith, K.; Horelik, N.; Capellan, N.; Herman, B.
2013-01-01
This series of slides is divided into 3 parts. The first part is dedicated to the presentation of the Monte-Carlo computational performance benchmark (aims, specifications and results). This benchmark aims at performing a full-size Monte Carlo simulation of a PWR core with axial and pin-power distribution. Many different Monte Carlo codes have been used and their results have been compared in terms of computed values and processing speeds. It appears that local power values mostly agree quite well. The first part also includes the presentations of about 10 participants in which they detail their calculations. In the second part, an extension of the benchmark is proposed in order to simulate a more realistic reactor core (for instance non-uniform temperature) and to assess feedback coefficients due to change of some parameters. The third part deals with another benchmark, the BEAVRS benchmark (Benchmark for Evaluation And Validation of Reactor Simulations). BEAVRS is also a full-core PWR benchmark for Monte Carlo simulations
Monte Carlo determination of heteroepitaxial misfit structures
DEFF Research Database (Denmark)
Baker, J.; Lindgård, Per-Anker
1996-01-01
We use Monte Carlo simulations to determine the structure of KBr overlayers on a NaCl(001) substrate, a system with large (17%) heteroepitaxial misfit. The equilibrium relaxation structure is determined for films of 2-6 ML, for which extensive helium-atom scattering data exist for comparison...
Dynamic bounds coupled with Monte Carlo simulations
Rajabali Nejad, Mohammadreza; Meester, L.E.; van Gelder, P.H.A.J.M.; Vrijling, J.K.
2011-01-01
For the reliability analysis of engineering structures a variety of methods is known, of which Monte Carlo (MC) simulation is widely considered to be among the most robust and most generally applicable. To reduce simulation cost of the MC method, variance reduction methods are applied. This paper
Atomistic Monte Carlo simulation of lipid membranes
DEFF Research Database (Denmark)
Wüstner, Daniel; Sklenar, Heinz
2014-01-01
Biological membranes are complex assemblies of many different molecules of which analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction...... of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol....
Design and analysis of Monte Carlo experiments
Kleijnen, Jack P.C.; Gentle, J.E.; Haerdle, W.; Mori, Y.
2012-01-01
By definition, computer simulation or Monte Carlo models are not solved by mathematical analysis (such as differential calculus), but are used for numerical experimentation. The goal of these experiments is to answer questions about the real world; i.e., the experimenters may use their models to
Juan Carlos D'Olivo: A portrait
Aguilar-Arévalo, Alexis A.
2013-06-01
This report attempts to give a brief bibliographical sketch of the academic life of Juan Carlos D'Olivo, researcher and teacher at the Instituto de Ciencias Nucleares of UNAM, devoted to advancing the fields of High Energy Physics and Astroparticle Physics in Mexico and Latin America.
Scalable Domain Decomposed Monte Carlo Particle Transport
Energy Technology Data Exchange (ETDEWEB)
O'Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)]
2013-12-05
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.
An analysis of Monte Carlo tree search
CSIR Research Space (South Africa)
James, S
2017-02-01
Full Text Available Monte Carlo Tree Search (MCTS) is a family of directed search algorithms that has gained widespread attention in recent years. Despite the vast amount of research into MCTS, the effect of modifications on the algorithm, as well as the manner...
Parallel processing Monte Carlo radiation transport codes
International Nuclear Information System (INIS)
McKinney, G.W.
1994-01-01
Issues related to distributed-memory multiprocessing as applied to Monte Carlo radiation transport are discussed. Measurements of communication overhead are presented for the radiation transport code MCNP which employs the communication software package PVM, and average efficiency curves are provided for a homogeneous virtual machine
Monte Carlo studies of uranium calorimetry
International Nuclear Information System (INIS)
Brau, J.; Hargis, H.J.; Gabriel, T.A.; Bishop, B.L.
1985-01-01
Detailed Monte Carlo calculations of uranium calorimetry are presented which reveal a significant difference in the responses of liquid argon and plastic scintillator in uranium calorimeters. Due to saturation effects, neutrons from the uranium are found to contribute only weakly to the liquid argon signal. Electromagnetic sampling inefficiencies are significant and contribute substantially to compensation in both systems. 17 references
Coded aperture optimization using Monte Carlo simulations
International Nuclear Information System (INIS)
Martineau, A.; Rocchisani, J.M.; Moretti, J.L.
2010-01-01
Coded apertures using Uniformly Redundant Arrays (URA) have been unsuccessfully evaluated for two-dimensional and three-dimensional imaging in Nuclear Medicine. The images reconstructed from coded projections contain artifacts and suffer from poor spatial resolution in the longitudinal direction. We introduce a Maximum-Likelihood Expectation-Maximization (MLEM) algorithm for three-dimensional coded aperture imaging which uses a projection matrix calculated by Monte Carlo simulations. The aim of the algorithm is to reduce artifacts and improve the three-dimensional spatial resolution in the reconstructed images. Firstly, we present the validation of GATE (Geant4 Application for Emission Tomography) for Monte Carlo simulations of a coded mask installed on a clinical gamma camera. The coded mask modelling was validated by comparison between experimental and simulated data in terms of energy spectra, sensitivity and spatial resolution. In the second part of the study, we use the validated model to calculate the projection matrix with Monte Carlo simulations. A three-dimensional thyroid phantom study was performed to compare the performance of the three-dimensional MLEM reconstruction with the conventional correlation method. The results indicate that the artifacts are reduced and three-dimensional spatial resolution is improved with the Monte Carlo-based MLEM reconstruction.
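The MLEM update at the heart of such reconstructions can be sketched independently of how the projection matrix is obtained; the tiny matrix below is a hypothetical stand-in for the Monte Carlo-computed one:

```python
import numpy as np

def mlem(A, projections, n_iters=2000):
    """MLEM update: x <- x * A^T(p / (A x)) / (A^T 1).

    Preserves non-negativity and converges to the maximum-likelihood
    image for Poisson-distributed projection data.
    """
    x = np.ones(A.shape[1])                  # flat initial image
    sensitivity = A.T @ np.ones(A.shape[0])  # A^T 1
    for _ in range(n_iters):
        forward = A @ x
        forward[forward == 0] = 1e-12        # guard against division by zero
        x *= (A.T @ (projections / forward)) / sensitivity
    return x

# Toy system: 3 detector bins viewing 2 source voxels
A = np.array([[0.8, 0.1],
              [0.1, 0.8],
              [0.1, 0.1]])
true_x = np.array([2.0, 5.0])
p = A @ true_x                # noise-free projections
x_hat = mlem(A, p)
print(x_hat)
```

With noise-free data and a well-conditioned matrix the iterate recovers the true activities; in practice convergence is stopped early to control noise amplification.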
Uncertainty analysis in Monte Carlo criticality computations
International Nuclear Information System (INIS)
Qi Ao
2011-01-01
Highlights: ► Two types of uncertainty methods for k_eff Monte Carlo computations are examined. ► The sampling method places the fewest restrictions on the perturbation but demands computing resources. ► The analytical method is limited to small perturbations of material properties. ► Practicality relies on efficiency, multiparameter applicability and data availability. - Abstract: Uncertainty analysis is imperative for nuclear criticality risk assessments when using Monte Carlo neutron transport methods to predict the effective neutron multiplication factor (k_eff) for fissionable material systems. For the validation of Monte Carlo codes for criticality computations against benchmark experiments, code accuracy and precision are measured by both the computational bias and the uncertainty in the bias. The uncertainty in the bias accounts for known or quantified experimental, computational and model uncertainties. For the application of Monte Carlo codes to criticality analysis of fissionable material systems, an administrative margin of subcriticality must be imposed to provide additional assurance of subcriticality against any unknown or unquantified uncertainties. Because of the substantial impact of the administrative margin of subcriticality on the economics and safety of nuclear fuel cycle operations, recent growing interest in reducing the administrative margin of subcriticality makes uncertainty analysis in criticality safety computations more risk-significant. This paper provides an overview of the two most popular k_eff uncertainty analysis methods for Monte Carlo criticality computations: (1) sampling-based methods, and (2) analytical methods. Examples are given to demonstrate their usage in the k_eff uncertainty analysis due to uncertainties in both neutronic and non-neutronic parameters of fissionable material systems.
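The sampling-based approach wraps an ordinary criticality calculation in an outer loop over perturbed inputs. In this sketch a toy infinite-medium formula stands in for the full Monte Carlo k_eff calculation, and the cross-section means and standard deviations are illustrative assumptions:

```python
import random
import statistics

def k_eff(capture_xs, fission_xs, nu=2.43):
    """Toy infinite-medium multiplication factor, k = nu * Sigma_f / Sigma_a.

    A stand-in for a full Monte Carlo criticality calculation; the
    point here is the sampling loop around it, not the physics.
    """
    return nu * fission_xs / (capture_xs + fission_xs)

def sampled_k_uncertainty(n_samples=1000, seed=1):
    """Sampling-based method: draw cross sections from their (assumed
    Gaussian) uncertainty distributions, re-evaluate k, report spread."""
    rng = random.Random(seed)
    ks = []
    for _ in range(n_samples):
        capture = rng.gauss(2.0, 0.05)   # hypothetical mean and std dev
        fission = rng.gauss(1.0, 0.02)
        ks.append(k_eff(capture, fission))
    return statistics.mean(ks), statistics.stdev(ks)

mean_k, sigma_k = sampled_k_uncertainty()
print(mean_k, sigma_k)
```

The method imposes no smallness restriction on the perturbations, but each sample costs a full transport calculation, which is the trade-off the highlights describe.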
Minimum thresholds of Monte Carlo cycles for Nigerian empirical
African Journals Online (AJOL)
2012-11-03
Nov 3, 2012 ... Abstract. Monte Carlo simulation has proven to be an effective means of incorporating reliability analysis into the ... A Monte Carlo simulation cycle of 2,500 thresholds was enough to provide sufficient repeatability for ... parameters using the Monte Carlo method with the aid of MATLAB (MATrix LABoratory).
Multilevel sequential Monte-Carlo samplers
Jasra, Ajay
2016-01-05
Multilevel Monte-Carlo methods provide a powerful computational technique for reducing the computational cost of estimating expectations for a given computational effort. They are particularly relevant for computational problems when approximate distributions are determined via a resolution parameter h, with h=0 giving the theoretical exact distribution (e.g. SDEs or inverse problems with PDEs). The method provides a benefit by coupling samples from successive resolutions, and estimating differences of successive expectations. We develop a methodology that brings Sequential Monte-Carlo (SMC) algorithms within the framework of the Multilevel idea, as SMC provides a natural set-up for coupling samples over different resolutions. We prove that the new algorithm indeed preserves the benefits of the multilevel principle, even if samples at all resolutions are now correlated.
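The multilevel telescoping identity E[P_L] = E[P_0] + Σ_l E[P_l − P_{l−1}] can be sketched in its plain (non-sequential) form for a toy SDE. The drift, volatility, and level/sample counts below are illustrative assumptions; the essential point is that coarse and fine paths on each level share the same Brownian increments:

```python
import math
import random

def euler_gbm(T, n_steps, z):
    """Euler scheme for dS = mu*S dt + sigma*S dW using pre-drawn normals z."""
    mu, sigma, s = 0.05, 0.2, 1.0
    h = T / n_steps
    for k in range(n_steps):
        s += mu * s * h + sigma * s * math.sqrt(h) * z[k]
    return s

def mlmc_mean(levels, n_per_level, seed=0):
    """Multilevel estimator of E[S_T]: coupled coarse/fine paths per level.

    The coarse path reuses the fine path's randomness (pairs of fine
    normals are combined), which makes each level difference small.
    """
    rng = random.Random(seed)
    total = 0.0
    for l in range(levels + 1):
        n_fine = 2 ** l
        acc = 0.0
        for _ in range(n_per_level):
            z = [rng.gauss(0.0, 1.0) for _ in range(n_fine)]
            fine = euler_gbm(1.0, n_fine, z)
            if l == 0:
                acc += fine
            else:
                # combine pairs of fine increments into coarse increments
                zc = [(z[2 * k] + z[2 * k + 1]) / math.sqrt(2)
                      for k in range(n_fine // 2)]
                acc += fine - euler_gbm(1.0, n_fine // 2, zc)
        total += acc / n_per_level
    return total

est = mlmc_mean(levels=5, n_per_level=2000)
print(est)   # E[S_1] = exp(0.05), about 1.051
```

In a real MLMC implementation the per-level sample counts are chosen from estimated level variances rather than held constant; the sequential Monte-Carlo version of the abstract replaces the independent per-level sampling with coupled SMC samplers.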
Status of Monte Carlo at Los Alamos
International Nuclear Information System (INIS)
Thompson, W.L.; Cashwell, E.D.
1980-01-01
At Los Alamos the early work of Fermi, von Neumann, and Ulam has been developed and supplemented by many followers, notably Cashwell and Everett, and the main product today is the continuous-energy, general-purpose, generalized-geometry, time-dependent, coupled neutron-photon transport code called MCNP. The Los Alamos Monte Carlo research and development effort is concentrated in Group X-6. MCNP treats an arbitrary three-dimensional configuration of arbitrary materials in geometric cells bounded by first- and second-degree surfaces and some fourth-degree surfaces (elliptical tori). Monte Carlo has evolved into perhaps the main method for radiation transport calculations at Los Alamos. MCNP is used in every technical division at the Laboratory by over 130 users about 600 times a month, accounting for nearly 200 hours of CDC-7600 time.
Monte Carlo simulation of gas Cerenkov detectors
International Nuclear Information System (INIS)
Mack, J.M.; Jain, M.; Jordan, T.M.
1984-01-01
Theoretical study of selected gamma-ray and electron diagnostics necessitates coupling Cerenkov radiation to electron/photon cascades. A Cerenkov production model and its incorporation into a general-geometry Monte Carlo coupled electron/photon transport code is discussed. A special optical photon ray-trace is implemented using bulk optical properties assigned to each Monte Carlo zone. Good agreement exists between experimental and calculated Cerenkov data in the case of a carbon-dioxide gas Cerenkov detector experiment. Cerenkov production and threshold data are presented for a typical carbon-dioxide gas detector that converts a 16.7 MeV photon source to Cerenkov light, which is collected by optics and detected by a photomultiplier
No-compromise reptation quantum Monte Carlo
International Nuclear Information System (INIS)
Yuen, W K; Farrar, Thomas J; Rothstein, Stuart M
2007-01-01
Since its publication, the reptation quantum Monte Carlo algorithm of Baroni and Moroni (1999 Phys. Rev. Lett. 82 4745) has been applied to several important problems in physics, but its mathematical foundations are not well understood. We show that their algorithm is not of typical Metropolis-Hastings type, and we specify conditions required for the generated Markov chain to be stationary and to converge to the intended distribution. The time-step bias may add up, and in many applications it is only the middle of a reptile that is the most important. Therefore, we propose an alternative, 'no-compromise reptation quantum Monte Carlo' to stabilize the middle of the reptile. (fast track communication)
Multilevel Monte Carlo Approaches for Numerical Homogenization
Efendiev, Yalchin R.
2015-10-01
In this article, we study the application of multilevel Monte Carlo (MLMC) approaches to numerical random homogenization. Our objective is to compute the expectation of some functionals of the homogenized coefficients, or of the homogenized solutions. This is accomplished within MLMC by considering different sizes of representative volumes (RVEs). Many inexpensive computations with the smallest RVE size are combined with fewer expensive computations performed on larger RVEs. Likewise, when it comes to homogenized solutions, different levels of coarse-grid meshes are used to solve the homogenized equation. We show that, by carefully selecting the number of realizations at each level, we can achieve a speed-up in the computations in comparison to a standard Monte Carlo method. Numerical results are presented for both one-dimensional and two-dimensional test-cases that illustrate the efficiency of the approach.
EU Commissioner Carlos Moedas visits SESAME
CERN Bulletin
2015-01-01
The European Commissioner for research, science and innovation, Carlos Moedas, visited the SESAME laboratory in Jordan on Monday 13 April. When it begins operation in 2016, SESAME, a synchrotron light source, will be the Middle East’s first major international science centre, carrying out experiments ranging from the physical sciences to environmental science and archaeology. CERN Director-General Rolf Heuer (left) and European Commissioner Carlos Moedas with the model SESAME magnet. © European Union, 2015. Commissioner Moedas was accompanied by a European Commission delegation led by Robert-Jan Smits, Director-General of DG Research and Innovation, as well as Rolf Heuer, CERN Director-General, Jean-Pierre Koutchouk, coordinator of the CERN-EC Support for SESAME Magnets (CESSAMag) project and Princess Sumaya bint El Hassan of Jordan, a leading advocate of science in the region. They toured the SESAME facility together with SESAME Director, Khaled Tou...
Status of Monte Carlo at Los Alamos
International Nuclear Information System (INIS)
Thompson, W.L.; Cashwell, E.D.; Godfrey, T.N.K.; Schrandt, R.G.; Deutsch, O.L.; Booth, T.E.
1980-05-01
Four papers were presented by Group X-6 on April 22, 1980, at the Oak Ridge Radiation Shielding Information Center (RSIC) Seminar-Workshop on Theory and Applications of Monte Carlo Methods. These papers are combined into one report for convenience and because they are related to each other. The first paper (by Thompson and Cashwell) is a general survey about X-6 and MCNP and is an introduction to the other three papers. It can also serve as a resume of X-6. The second paper (by Godfrey) explains some of the details of geometry specification in MCNP. The third paper (by Cashwell and Schrandt) illustrates calculating flux at a point with MCNP; in particular, the once-more-collided flux estimator is demonstrated. Finally, the fourth paper (by Thompson, Deutsch, and Booth) is a tutorial on some variance-reduction techniques. It should be required reading for a fledgling Monte Carlo practitioner
Monte Carlo Particle Transport: Algorithm and Performance Overview
International Nuclear Information System (INIS)
Gentile, N.; Procassini, R.; Scott, H.
2005-01-01
Monte Carlo methods are frequently used for neutron and radiation transport. These methods have several advantages, such as relative ease of programming and dealing with complex meshes. Disadvantages include long run times and statistical noise. Monte Carlo photon transport calculations also often suffer from inaccuracies in matter temperature due to the lack of implicitness. In this paper we discuss the Monte Carlo algorithm as it is applied to neutron and photon transport, detail the differences between neutron and photon Monte Carlo, and give an overview of the ways the numerical method has been modified to deal with issues that arise in photon Monte Carlo simulations
Introduction to the Monte Carlo methods
International Nuclear Information System (INIS)
Uzhinskij, V.V.
1993-01-01
Codes illustrating the use of Monte Carlo methods in high energy physics such as the inverse transformation method, the ejection method, the particle propagation through the nucleus, the particle interaction with the nucleus, etc. are presented. A set of useful algorithms of random number generators is given (the binomial distribution, the Poisson distribution, β-distribution, γ-distribution and normal distribution). 5 figs., 1 tab
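The first two techniques listed — the inverse transformation method and the rejection (ejection) method — can be sketched as follows; the target densities are chosen purely for illustration:

```python
import math
import random

def sample_exponential(lmbda, rng):
    """Inverse transformation method: if U ~ Uniform(0,1), then
    X = -ln(1 - U) / lambda has CDF 1 - exp(-lambda * x)."""
    return -math.log(1.0 - rng.random()) / lmbda

def sample_rejection(pdf, pdf_max, lo, hi, rng):
    """Rejection method: propose x uniformly on [lo, hi] and accept
    with probability pdf(x) / pdf_max; repeat until accepted."""
    while True:
        x = rng.uniform(lo, hi)
        if rng.random() * pdf_max <= pdf(x):
            return x

rng = random.Random(42)
exp_mean = sum(sample_exponential(2.0, rng) for _ in range(50_000)) / 50_000

# Semicircle density on [-1, 1]; its maximum is 2/pi at x = 0.
def semicircle(x):
    return (2.0 / math.pi) * math.sqrt(1.0 - x * x)

semi_mean = sum(sample_rejection(semicircle, 2.0 / math.pi, -1.0, 1.0, rng)
                for _ in range(50_000)) / 50_000
print(exp_mean, semi_mean)   # near 1/lambda = 0.5 and 0 respectively
```

Inversion needs an invertible CDF; rejection needs only a pointwise-evaluable density and a bound, which is why both appear in every Monte Carlo toolbox.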
Monte Carlo modeling of eye iris color
Koblova, Ekaterina V.; Bashkatov, Alexey N.; Dolotov, Leonid E.; Sinichkin, Yuri P.; Kamenskikh, Tatyana G.; Genina, Elina A.; Tuchin, Valery V.
2007-05-01
Based on the presented two-layer eye iris model, the iris diffuse reflectance has been calculated by Monte Carlo technique in the spectral range 400-800 nm. The diffuse reflectance spectra have been recalculated in L*a*b* color coordinate system. Obtained results demonstrated that the iris color coordinates (hue and chroma) can be used for estimation of melanin content in the range of small melanin concentrations, i.e. for estimation of melanin content in blue and green eyes.
Handbook of Markov chain Monte Carlo
Brooks, Steve
2011-01-01
""Handbook of Markov Chain Monte Carlo"" brings together the major advances that have occurred in recent years while incorporating enough introductory material for new users of MCMC. Along with thorough coverage of the theoretical foundations and algorithmic and computational methodology, this comprehensive handbook includes substantial realistic case studies from a variety of disciplines. These case studies demonstrate the application of MCMC methods and serve as a series of templates for the construction, implementation, and choice of MCMC methodology.
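A minimal random-walk Metropolis sampler — the simplest member of the MCMC family the handbook covers — looks like this; the target density and step size are chosen for illustration:

```python
import math
import random

def metropolis(log_target, x0, step, n_samples, seed=0):
    """Random-walk Metropolis: propose x' = x + N(0, step^2), accept
    with probability min(1, pi(x') / pi(x)). The chain's stationary
    distribution is the (unnormalized) target."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        # compare log-uniform to log acceptance ratio for stability
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal, log density -x^2/2 up to a constant.
chain = metropolis(lambda x: -0.5 * x * x, x0=5.0, step=1.0, n_samples=100_000)
burned = chain[10_000:]                     # discard burn-in
mean = sum(burned) / len(burned)
var = sum((v - mean) ** 2 for v in burned) / len(burned)
print(mean, var)
```

Successive samples are correlated, so error bars must use an effective sample size rather than the raw chain length — one of the recurring practical themes of the handbook's case studies.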
Monte Carlo methods for shield design calculations
International Nuclear Information System (INIS)
Grimstone, M.J.
1974-01-01
A suite of Monte Carlo codes is being developed for use on a routine basis in commercial reactor shield design. The methods adopted for this purpose include the modular construction of codes, simplified geometries, automatic variance reduction techniques, continuous energy treatment of cross section data, and albedo methods for streaming. Descriptions are given of the implementation of these methods and of their use in practical calculations. 26 references. (U.S.)
Replica Exchange for Reactive Monte Carlo Simulations
Czech Academy of Sciences Publication Activity Database
Turner, C.H.; Brennan, J.K.; Lísal, Martin
2007-01-01
Vol. 111, No. 43 (2007), pp. 15706-15715. ISSN 1932-7447. R&D Projects: GA ČR GA203/05/0725; GA AV ČR 1ET400720409; GA AV ČR 1ET400720507. Institutional research plan: CEZ:AV0Z40720504. Keywords: Monte Carlo; simulation; reactive system. Subject RIV: CF - Physical; Theoretical Chemistry
Applications of Maxent to quantum Monte Carlo
Energy Technology Data Exchange (ETDEWEB)
Silver, R.N.; Sivia, D.S.; Gubernatis, J.E. (Los Alamos National Lab., NM (USA)); Jarrell, M. (Ohio State Univ., Columbus, OH (USA). Dept. of Physics)
1990-01-01
We consider the application of maximum entropy methods to the analysis of data produced by computer simulations. The focus is the calculation of the dynamical properties of quantum many-body systems by Monte Carlo methods, which is termed the "Analytical Continuation Problem." For the Anderson model of dilute magnetic impurities in metals, we obtain spectral functions and transport coefficients which obey "Kondo Universality." 24 refs., 7 figs.
Monte Carlo methods for preference learning
DEFF Research Database (Denmark)
Viappiani, P.
2012-01-01
Utility elicitation is an important component of many applications, such as decision support systems and recommender systems. Such systems query users about their preferences and give recommendations based on the system's belief about the utility function. Critical to these applications is the acquisition of a prior distribution over the utility parameters and the possibility of real-time Bayesian inference. In this paper we consider Monte Carlo methods for these problems.
General purpose code for Monte Carlo simulations
International Nuclear Information System (INIS)
Wilcke, W.W.
1983-01-01
A general-purpose computer code called MONTHY has been written to perform Monte Carlo simulations of physical systems. To achieve a high degree of flexibility the code is organized like a general-purpose computer, operating on a vector describing the time-dependent state of the system under simulation. The instruction set of the computer is defined by the user and is therefore adaptable to the particular problem studied. The organization of MONTHY allows iterative and conditional execution of operations
The lund Monte Carlo for jet fragmentation
International Nuclear Information System (INIS)
Sjoestrand, T.
1982-03-01
We present a Monte Carlo program based on the Lund model for jet fragmentation. Quark, gluon, diquark and hadron jets are considered. Special emphasis is put on the fragmentation of colour singlet jet systems, for which energy, momentum and flavour are conserved explicitly. The model for decays of unstable particles, in particular the weak decay of heavy hadrons, is described. The central part of the paper is a detailed description of how to use the FORTRAN 77 program. (Author)
Carlo Rosselli e il socialismo delle autonomie
Calabrò, Carmelo
2008-01-01
Carlo Rosselli's theoretical commitment can be traced back to the various minority currents (at least at the continental level) which, in the 1920s, aimed at moving beyond the doctrinal framework of Marxist socialism. In the reformist variant as much as in the maximalist one, classism, holism and collectivism are principles broadly common to the culture of Marxism; principles dichotomous with respect to liberalism and problematic with regard to democracy. Rosselli, against this tra...
Autocorrelations in hybrid Monte Carlo simulations
International Nuclear Information System (INIS)
Schaefer, Stefan; Virotta, Francesco
2010-11-01
Simulations of QCD suffer from severe critical slowing down towards the continuum limit. This problem is known to be prominent in the topological charge; however, all observables are affected to varying degrees by these slow modes in the Monte Carlo evolution. We investigate the slowing down in high statistics simulations and propose a new error analysis method, which gives a realistic estimate of the contribution of the slow modes to the errors. (orig.)
Topological zero modes in Monte Carlo simulations
International Nuclear Information System (INIS)
Dilger, H.
1994-08-01
We present an improvement of global Metropolis updating steps, the instanton hits, used in a hybrid Monte Carlo simulation of the two-flavor Schwinger model with staggered fermions. These hits are designed to change the topological sector of the gauge field. In order to match these hits to an unquenched simulation with pseudofermions, the approximate zero mode structure of the lattice Dirac operator has to be considered explicitly. (orig.)
Monte Carlo simulation of Touschek effect
Directory of Open Access Journals (Sweden)
Aimin Xiao
2010-07-01
Full Text Available We present a Monte Carlo method implementation in the code elegant for simulating Touschek scattering effects in a linac beam. The local scattering rate and the distribution of scattered electrons can be obtained from the code either for a Gaussian-distributed beam or for a general beam whose distribution function is given. In addition, scattered electrons can be tracked through the beam line and the local beam-loss rate and beam halo information recorded.
Biased Monte Carlo optimization: the basic approach
International Nuclear Information System (INIS)
Campioni, Luca; Scardovelli, Ruben; Vestrucci, Paolo
2005-01-01
It is well-known that the Monte Carlo method is very successful in tackling several kinds of system simulations. It often happens that one has to deal with rare events, and the use of a variance reduction technique is almost mandatory in order to have efficient Monte Carlo applications. The main issue associated with variance reduction techniques is related to the choice of the value of the biasing parameter. Actually, this task is typically left to the experience of the Monte Carlo user, who has to make many attempts before achieving an advantageous biasing. A valuable result is provided: a methodology and a practical rule addressed to establish an a priori guidance for the choice of the optimal value of the biasing parameter. This result, which has been obtained for a single-component system, has the notable property of being valid for any multicomponent system. In particular, in this paper, the exponential and the uniform biases of exponentially distributed phenomena are investigated thoroughly
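An exponential bias of an exponentially distributed phenomenon amounts to importance sampling of a rare event: sample from a "stretched" exponential and weight each score by the likelihood ratio. The rate, threshold, and biasing parameter below are illustrative assumptions, not the paper's optimal rule:

```python
import math
import random

def rare_event_prob(lmbda, threshold, bias, n, seed=7):
    """Estimate P(X > threshold) for X ~ Exp(lmbda) by sampling from
    Exp(bias) with bias < lmbda, weighting scores by f(x) / g(x)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(bias)                        # biased draw
        if x > threshold:
            # likelihood ratio of the true density to the biased one
            weight = (lmbda / bias) * math.exp(-(lmbda - bias) * x)
            total += weight
    return total / n

true_p = math.exp(-10.0)                 # analytic P(X > 10) for lambda = 1
p_hat = rare_event_prob(1.0, 10.0, bias=0.1, n=100_000)
print(p_hat, true_p)
```

With the unbiased distribution almost no history would ever exceed the threshold, whereas the biased run scores constantly; the paper's contribution is an a priori rule for choosing the biasing parameter instead of the trial-and-error shown here.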
Generalized hybrid Monte Carlo - CMFD methods for fission source convergence
International Nuclear Information System (INIS)
Wolters, Emily R.; Larsen, Edward W.; Martin, William R.
2011-01-01
In this paper, we generalize the recently published 'CMFD-Accelerated Monte Carlo' method and present two new methods that reduce the statistical error in CMFD-Accelerated Monte Carlo. The CMFD-Accelerated Monte Carlo method uses Monte Carlo to estimate nonlinear functionals used in low-order CMFD equations for the eigenfunction and eigenvalue. The Monte Carlo fission source is then modified to match the resulting CMFD fission source in a 'feedback' procedure. The two proposed methods differ from CMFD-Accelerated Monte Carlo in the definition of the required nonlinear functionals, but they have identical CMFD equations. The proposed methods are compared with CMFD-Accelerated Monte Carlo on a high dominance ratio test problem. All hybrid methods converge the Monte Carlo fission source almost immediately, leading to a large reduction in the number of inactive cycles required. The proposed methods stabilize the fission source more efficiently than CMFD-Accelerated Monte Carlo, leading to a reduction in the number of active cycles required. Finally, as in CMFD-Accelerated Monte Carlo, the apparent variance of the eigenfunction is approximately equal to the real variance, so the real error is well-estimated from a single calculation. This is an advantage over standard Monte Carlo, in which the real error can be underestimated due to inter-cycle correlation. (author)
Monte carlo methods and models in finance and insurance
Korn, Ralf; Kroisandt, Gerald
2010-01-01
Offering a unique balance between applications and calculations, Monte Carlo Methods and Models in Finance and Insurance incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The authors separately discuss Monte Carlo techniques, stochastic process basics, and the theoretical background and intuition behind financial and actuarial mathematics, before bringing the topics together to apply the Monte Carlo methods to areas of finance and insurance. This allows for the easy identification of standard Monte Carlo tools and for a detailed focus on the main principles of financial and insurance mathematics. The book describes high-level Monte Carlo methods for standard simulation and the simulation of...
Investigating the impossible: Monte Carlo simulations
International Nuclear Information System (INIS)
Kramer, Gary H.; Crowley, Paul; Burns, Linda C.
2000-01-01
Designing and testing new equipment can be an expensive and time-consuming process, or the desired performance characteristics may preclude its construction due to technological shortcomings. Cost may also prevent equipment being purchased for other scenarios to be tested. An alternative is to use Monte Carlo simulations to make the investigations. This presentation exemplifies how Monte Carlo code calculations can be used to fill the gap. An example is given for the investigation of two sizes of germanium detector (70 mm and 80 mm diameter) at four different crystal thicknesses (15, 20, 25, and 30 mm) and makes predictions on how the size affects the counting efficiency and the Minimum Detectable Activity (MDA). The Monte Carlo simulations have shown that detector efficiencies can be adequately modelled using photon transport if the data is used to investigate trends. The investigation of the effect of detector thickness on the counting efficiency has shown that thickness for a fixed-diameter detector of either 70 mm or 80 mm is unimportant up to 60 keV. At higher photon energies, the counting efficiency begins to decrease as the thickness decreases as expected. The simulations predict that the MDA of either the 70 mm or 80 mm diameter detectors does not differ by more than a factor of 1.15 at 17 keV or 1.2 at 60 keV when comparing detectors of equivalent thicknesses. The MDA is slightly increased at 17 keV, and rises by about 52% at 660 keV, when the thickness is decreased from 30 mm to 15 mm. One could conclude from this information that the extra cost associated with the larger area Ge detectors may not be justified for the slight improvement predicted in the MDA. (author)
Monte Carlo eigenfunction strategies and uncertainties
International Nuclear Information System (INIS)
Gast, R.C.; Candelore, N.R.
1974-01-01
Comparisons of convergence rates for several possible eigenfunction source strategies led to the selection of the "straight" analog of the analytic power method as the source strategy for Monte Carlo eigenfunction calculations. To ensure a fair-game strategy, the number of histories per iteration increases with increasing iteration number. The estimate of eigenfunction uncertainty is obtained from a modification of a proposal by D. B. MacMillan and involves only estimates of the usual purely statistical component of uncertainty and a serial correlation coefficient of lag one. 14 references. (U.S.)
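The analytic power method that this source strategy mirrors can be sketched deterministically; the 2×2 matrix below is a made-up stand-in for the transport operator, not anything from the report:

```python
import math

def power_iteration(A, n_iter=200):
    """Plain power method: repeatedly apply A and renormalise.  The
    normalisation factor converges to the dominant eigenvalue (the
    analogue of k_eff) and the vector to the dominant eigenfunction
    (the analogue of the fission-source distribution)."""
    v = [1.0] * len(A)
    lam = 0.0
    for _ in range(n_iter):
        w = [sum(a_ij * v_j for a_ij, v_j in zip(row, v)) for row in A]
        lam = math.sqrt(sum(x * x for x in w))  # L2 normalisation factor
        v = [x / lam for x in w]
    return lam, v

A = [[4.0, 1.0],
     [2.0, 3.0]]           # eigenvalues 5 and 2; dominant eigenvector (1, 1)
lam, v = power_iteration(A)
print(round(lam, 6))
```

In the Monte Carlo analogue each application of the operator is itself estimated from a batch of histories, which is why the paper grows the batch size with the iteration number.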
Atomistic Monte Carlo simulation of lipid membranes
DEFF Research Database (Denmark)
Wüstner, Daniel; Sklenar, Heinz
2014-01-01
Biological membranes are complex assemblies of many different molecules, the analysis of which demands a variety of experimental and computational approaches. In this article, we explain the challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction into the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate, for a concrete example, how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches…
Monte Carlo method in radiation transport problems
International Nuclear Information System (INIS)
Dejonghe, G.; Nimal, J.C.; Vergnaud, T.
1986-11-01
In neutral-radiation transport problems (neutrons, photons), two quantities are important: the flux in phase space and the particle density. Solving the problem with the Monte Carlo method involves, among other things, building a statistical process (called the play) and assigning a numerical value to a variable x (this assignment is called the score). Sampling techniques are presented, and the necessity of biasing the play is demonstrated. A biased simulation is carried out. Finally, current developments (rewriting of programs, for instance) are presented; they are driven by several factors, two of which are the advent of vector computation and photon and neutron transport in void media. [fr]
MBR Monte Carlo Simulation in PYTHIA8
Ciesielski, R.
We present the MBR (Minimum Bias Rockefeller) Monte Carlo simulation of (anti)proton-proton interactions and its implementation in the PYTHIA8 event generator. We discuss the total, elastic, and total-inelastic cross sections, and the three diffraction-dissociation processes that contribute to the latter: single diffraction, double diffraction, and central diffraction (double-Pomeron exchange). The event generation follows a renormalized-Regge-theory model, successfully tested using CDF data. Based on the MBR-enhanced PYTHIA8 simulation, we present cross-section predictions for the LHC and beyond, up to collision energies of 50 TeV.
Markov chains analytic and Monte Carlo computations
Graham, Carl
2014-01-01
Markov Chains: Analytic and Monte Carlo Computations introduces the main notions related to Markov chains and provides explanations on how to characterize, simulate, and recognize them. Starting with basic notions, this book leads progressively to advanced and recent topics in the field, allowing the reader to master the main aspects of the classical theory. The book also features numerous exercises with solutions as well as extended case studies, a detailed and rigorous presentation of Markov chains with discrete time and state space, and an appendix presenting the probabilistic notions that are necessary…
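The "simulate and recognize" theme can be illustrated with a minimal sketch: a made-up two-state chain whose empirical occupation frequencies are compared against the stationary distribution obtained analytically from πP = π:

```python
import random

# Two-state chain: P[i][j] is the transition probability i -> j.
P = [[0.9, 0.1],
     [0.5, 0.5]]
# Solving pi P = pi with pi[0] + pi[1] = 1 gives pi = (5/6, 1/6).

def occupation_frequencies(P, steps, seed=0):
    """Simulate the chain and return the fraction of time spent in each state."""
    rng = random.Random(seed)
    state = 0
    counts = [0] * len(P)
    for _ in range(steps):
        counts[state] += 1
        state = rng.choices(range(len(P)), weights=P[state])[0]
    return [c / steps for c in counts]

freq = occupation_frequencies(P, 100_000)
print(freq)
```

By ergodicity the long-run frequencies converge to the stationary distribution, which is the Monte Carlo counterpart of the analytic computation.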
Score Bounded Monte-Carlo Tree Search
Cazenave, Tristan; Saffidine, Abdallah
Monte-Carlo Tree Search (MCTS) is a successful algorithm used in many state-of-the-art game engines. We propose to improve an MCTS solver when a game has more than two outcomes, as is the case, for example, in games that can end in draw positions. In such games, taking into account bounds on the possible scores of a node when selecting which nodes to explore significantly improves an MCTS solver. We apply our algorithm to solving Seki in the game of Go and to Connect Four.
IN MEMORIAM CARLOS RESTREPO. UN VERDADERO MAESTRO
Pelayo Correa
2009-01-01
Carlos Restrepo was the first professor of Pathology and an illustrious member of the group of pioneers who founded the Faculty of Medicine of the Universidad del Valle. These pioneers converged on Cali in the 1950s, possessed of a renewing and creative spirit that undertook, with great success, the task of changing the academic culture of the Valle del Cauca. They found a peaceful society that enjoyed the generosity of its surroundings, with no desire to break with centuries-old traditions…
Monte Carlo study of the multiquark systems
International Nuclear Information System (INIS)
Kerbikov, B.O.; Polikarpov, M.I.; Zamolodchikov, A.B.
1986-01-01
Random walks have been used to calculate the energies of the ground states in systems of N = 3, 6, 9, 12 quarks. Multiquark states with N > 3 are unstable with respect to spontaneous dissociation into color-singlet hadrons. A modified Green's function Monte Carlo algorithm, which proved to be simpler and more accurate than conventional few-body methods, has been employed. In contrast to other techniques, the same equations are used for any number of particles, while the computer time increases only linearly with the number of particles.
by means of FLUKA Monte Carlo method
Directory of Open Access Journals (Sweden)
Ermis Elif Ebru
2015-01-01
Calculations of gamma-ray mass attenuation coefficients of various detector materials (crystals) were carried out by means of the FLUKA Monte Carlo (MC) method at different gamma-ray energies. NaI, PVT, GSO, GaAs and CdWO4 detector materials were chosen for the calculations. The calculated coefficients were also compared with National Institute of Standards and Technology (NIST) values. The results obtained with this method were in close accordance with the NIST values. It was concluded from the study that the FLUKA MC method can be an alternative way to calculate the gamma-ray mass attenuation coefficients of detector materials.
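Once a mass attenuation coefficient μ/ρ is in hand (from FLUKA or NIST tables), it is used through the Beer-Lambert law. The sketch below uses the real density of NaI (3.67 g/cm³) but an illustrative, made-up μ/ρ value, not a tabulated NIST number:

```python
import math

def transmitted_fraction(mu_rho_cm2_per_g, density_g_cm3, thickness_cm):
    """Beer-Lambert attenuation: I/I0 = exp(-(mu/rho) * rho * x)."""
    return math.exp(-mu_rho_cm2_per_g * density_g_cm3 * thickness_cm)

# Illustrative coefficient only (NOT a NIST value): mu/rho = 0.08 cm^2/g
# through 2.54 cm (one inch) of NaI at density 3.67 g/cm^3.
f = transmitted_fraction(0.08, 3.67, 2.54)
print(f"transmitted fraction: {f:.4f}")
```

The complement 1 − I/I0 is the interaction probability, which is what ties these coefficients back to detector efficiency calculations.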
Pseudo-extended Markov chain Monte Carlo
Nemeth, Christopher; Lindsten, Fredrik; Filippone, Maurizio; Hensman, James
2017-01-01
Sampling from the posterior distribution using Markov chain Monte Carlo (MCMC) methods can require an exhaustive number of iterations to fully explore the correct posterior. This is often the case when the posterior of interest is multi-modal, as the MCMC sampler can become trapped in a local mode for a large number of iterations. In this paper, we introduce the pseudo-extended MCMC method as an approach for improving the mixing of the MCMC sampler in complex posterior distributions. The pseu...
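The baseline sampler whose mixing the pseudo-extended method aims to improve is random-walk Metropolis. A minimal sketch on a standard-normal target (a stand-in chosen here for checkability, not the multi-modal posteriors of the paper):

```python
import math
import random

def metropolis(log_target, x0, step, n, seed=42):
    """Random-walk Metropolis: propose x' = x + N(0, step^2) and accept
    with probability min(1, target(x') / target(x))."""
    rng = random.Random(seed)
    x = x0
    lp = log_target(x)
    samples = []
    for _ in range(n):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_target(prop)
        if math.log(rng.random()) < lp_prop - lp:  # Metropolis acceptance
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Standard normal target, known mean 0 and variance 1.
samples = metropolis(lambda x: -0.5 * x * x, x0=0.0, step=1.0, n=50_000)
mean = sum(samples) / len(samples)
print(round(mean, 3))
```

On a well-separated mixture target the same sampler can sit in one mode for very long stretches, which is exactly the trapping behaviour the pseudo-extended construction is designed to alleviate.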
Diffusion quantum Monte Carlo for molecules
International Nuclear Information System (INIS)
Lester, W.A. Jr.
1986-07-01
A quantum mechanical Monte Carlo method has been used for the treatment of molecular problems. The imaginary-time Schroedinger equation written with a shift in zero energy [E_T − V(R)] can be interpreted as a generalized diffusion equation with a position-dependent rate or branching term. Since diffusion is the continuum limit of a random walk, one may simulate the Schroedinger equation with a function ψ (note, not ψ²) as a density of "walks." The walks undergo an exponential birth and death as given by the rate term. 16 refs., 2 tabs
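The diffusion-plus-branching picture can be sketched for the textbook case of the 1D harmonic oscillator, V(x) = x²/2 with ħ = m = 1 and exact ground-state energy 0.5; this crude walker scheme (no importance sampling, simple population control) is an illustration, not the molecular method of the report:

```python
import math
import random

def dmc_harmonic(n_walkers=500, n_steps=1000, dt=0.01, seed=7):
    """Crude diffusion Monte Carlo for V(x) = x^2 / 2.  Walkers diffuse
    (kinetic term) and branch with weight exp(-(V - E_T) dt); E_T is
    nudged to keep the population stable and settles near E0 = 0.5."""
    rng = random.Random(seed)
    walkers = [0.0] * n_walkers
    e_t = 0.5
    e_trace = []
    for step in range(n_steps):
        new = []
        for x in walkers:
            x += rng.gauss(0.0, math.sqrt(dt))       # diffusion move
            w = math.exp(-(0.5 * x * x - e_t) * dt)  # birth/death weight
            for _ in range(int(w + rng.random())):   # stochastic rounding
                new.append(x)
        walkers = new
        # Population control: shift E_T toward restoring n_walkers.
        e_t += 0.1 * math.log(n_walkers / len(walkers))
        if step >= n_steps // 2:
            e_trace.append(e_t)
    return sum(e_trace) / len(e_trace)  # average E_T over second half

e0 = dmc_harmonic()
print(round(e0, 2))
```

At equilibrium the walker density is proportional to ψ (not ψ²), exactly as the abstract notes, and the trial energy E_T fluctuates around the ground-state energy.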
Discrete diffusion Monte Carlo for frequency-dependent radiative transfer
Energy Technology Data Exchange (ETDEWEB)
Densmore, Jeffrey D.; Thompson, Kelly G.; Urbatsch, Todd J. [Los Alamos National Laboratory]
2010-11-17
Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique.
Monte Carlo criticality analysis for dissolvers with neutron poison
International Nuclear Information System (INIS)
Yu, Deshun; Dong, Xiufang; Pu, Fuxiang.
1987-01-01
Criticality analysis for dissolvers with neutron poison is given on the basis of the Monte Carlo method. In Monte Carlo calculations of thermal neutron group parameters for fuel pieces, the neutron transport length is determined in terms of a maximum-cross-section approach. A set of related effective multiplication factors (k_eff) is calculated by the Monte Carlo method for the three cases. The numerical results are quite useful for the design and operation of this kind of dissolver in criticality safety analysis. (author)
Monte Carlo Based Framework to Support HAZOP Study
DEFF Research Database (Denmark)
Danko, Matej; Frutiger, Jerome; Jelemenský, Ľudovít
2017-01-01
This study combines Monte Carlo based process-simulation features with classical hazard-identification techniques to investigate the consequences of deviations from normal operating conditions and to examine process safety. A Monte Carlo based method has been used to sample and evaluate different deviations in process parameters simultaneously, thereby improving on the Hazard and Operability (HAZOP) study, which normally considers only one deviation in process parameters at a time. Furthermore, Monte Carlo filtering was then used to identify operability and hazard issues including…
Monte Carlo simulations for instrumentation at SINQ
International Nuclear Information System (INIS)
Filges, U.; Ronnow, H.M.; Zsigmond, G.
2006-01-01
The Paul Scherrer Institut (PSI) operates a spallation source, SINQ, equipped with 11 different neutron scattering instruments. Besides the optimization of the existing instruments, extensions with new instruments and devices are continuously made at PSI. For design and performance studies, different Monte Carlo packages are used. Presently two major projects are in an advanced stage of planning. These are the new thermal-neutron triple-axis spectrometer Enhanced Intensity and Greater Energy Range (EIGER) and the ultra-cold neutron source (UCN-PSI). The EIGER instrument design is focused on an optimal signal-to-background ratio. A very important design task was to realize a monochromator shielding which combines the best shielding characteristics, low background production and high instrument functionality. The Monte Carlo package MCNPX was used to find the best choice. Because the sharp energy distribution of ultra-cold neutrons (UCN) can be Doppler-shifted towards cold-neutron energies, a UCN phase-space transformation (PST) device could produce highly monochromatic cold and very cold neutrons (VCN). The UCN-PST instrumentation project running at PSI is very timely, since a new-generation superthermal spallation source of UCN with a UCN density of 3000-4000 n·cm⁻³ is under construction at PSI. Detailed numerical simulations have been carried out to optimize the UCN density and flux. Recent results on numerical simulations of a UCN-PST-based source of highly monochromatic cold neutrons and VCN are presented.
Monte Carlo simulation for radiographic applications
International Nuclear Information System (INIS)
Tillack, G.R.; Bellon, C.
2003-01-01
Standard radiography simulators are based on the attenuation law complemented by build-up factors (BUF) to describe the interaction of radiation with material. The BUF assumption implies that scattered radiation only reduces the contrast in radiographic images. This simplification holds for a wide range of applications, such as weld inspection, as known from practical experience. But only a detailed description of the different underlying interaction mechanisms can explain effects like mottling and others that every radiographer has experienced in practice. Monte Carlo models can handle the primary and secondary interaction mechanisms contributing to the image-formation process, such as photon interactions (absorption, incoherent and coherent scattering including electron-binding effects, pair production) and electron interactions (electron tracing including X-ray fluorescence and bremsstrahlung production). This opens up possibilities such as the separation of influencing factors and an understanding of the functioning of the intensifying screens used in film radiography. The paper discusses the opportunities of applying the Monte Carlo method to investigate special features of radiography in terms of selected examples. (orig.) [de]
Multilevel Monte Carlo simulation of Coulomb collisions
Energy Technology Data Exchange (ETDEWEB)
Rosin, M.S., E-mail: msr35@math.ucla.edu [Mathematics Department, University of California at Los Angeles, Los Angeles, CA 90036 (United States); Department of Mathematics and Science, Pratt Institute, Brooklyn, NY 11205 (United States); Ricketson, L.F. [Mathematics Department, University of California at Los Angeles, Los Angeles, CA 90036 (United States); Dimits, A.M. [Lawrence Livermore National Laboratory, L-637, P.O. Box 808, Livermore, CA 94511-0808 (United States); Caflisch, R.E. [Mathematics Department, University of California at Los Angeles, Los Angeles, CA 90036 (United States); Institute for Pure and Applied Mathematics, University of California at Los Angeles, Los Angeles, CA 90095 (United States); Cohen, B.I. [Lawrence Livermore National Laboratory, L-637, P.O. Box 808, Livermore, CA 94511-0808 (United States)
2014-10-01
We present a multilevel Monte Carlo numerical method for simulating Coulomb collisions that is new to, and highly efficient for, plasma physics. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau-Fokker-Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε^-2) or O(ε^-2 (ln ε)^2), depending on the underlying discretization, Milstein or Euler-Maruyama respectively. This is to be contrasted with a cost of O(ε^-3) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10^-5. We discuss the importance of the method for problems in which collisions constitute the computational rate-limiting step, and its limitations.
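The multilevel idea of combining solutions with varying numbers of timesteps can be sketched on a stand-in problem: estimating E[S_T] for geometric Brownian motion with Euler-Maruyama (not the authors' Coulomb-collision solver; drift, volatility, and sample counts are illustrative):

```python
import math
import random

def euler_pair(rng, s0, mu, sigma, T, n_fine):
    """One GBM path with n_fine Euler steps and, reusing the SAME
    Brownian increments, the coupled coarse path with n_fine // 2 steps.
    Sharing increments is what makes the level differences small."""
    dt = T / n_fine
    s_f = s_c = s0
    for _ in range(n_fine // 2):
        dw1 = rng.gauss(0.0, math.sqrt(dt))
        dw2 = rng.gauss(0.0, math.sqrt(dt))
        s_f += mu * s_f * dt + sigma * s_f * dw1      # two fine steps
        s_f += mu * s_f * dt + sigma * s_f * dw2
        s_c += mu * s_c * 2 * dt + sigma * s_c * (dw1 + dw2)  # one coarse step
    return s_f, s_c

def mlmc_mean(levels=4, n0=20_000, s0=1.0, mu=0.05, sigma=0.2, T=1.0, seed=3):
    """Telescoping estimator E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
    with sample counts shrinking on the finer (more expensive) levels."""
    rng = random.Random(seed)
    # Level 0: a single Euler step per sample.
    est = sum(
        s0 + mu * s0 * T + sigma * s0 * rng.gauss(0.0, math.sqrt(T))
        for _ in range(n0)
    ) / n0
    for level in range(1, levels + 1):
        n_l = max(n0 // 4 ** level, 100)
        diff = 0.0
        for _ in range(n_l):
            s_f, s_c = euler_pair(rng, s0, mu, sigma, T, 2 ** level)
            diff += s_f - s_c
        est += diff / n_l
    return est

est = mlmc_mean()
print(round(est, 3))  # exact answer is exp(mu * T) = exp(0.05) ≈ 1.051
```

Because the correction terms have small variance, most samples are spent on the cheap coarse level, which is the source of the O(ε^-3) → O(ε^-2) cost reduction quoted in the abstract.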
Parallel Monte Carlo Search for Hough Transform
Lopes, Raul H. C.; Franqueira, Virginia N. L.; Reid, Ivan D.; Hobson, Peter R.
2017-10-01
We investigate the problem of line detection in digital image processing, and in particular how state-of-the-art algorithms behave in the presence of noise and whether CPU efficiency can be improved by the combination of Monte Carlo Tree Search, hierarchical space decomposition, and parallel computing. The starting point of the investigation is the method introduced in 1962 by Paul Hough for detecting lines in binary images. Extended in the 1970s to the detection of other forms, what came to be known as the Hough Transform (HT) has been proposed, for example, in the context of track fitting in the LHC ATLAS and CMS projects. The Hough Transform transfers the problem of line detection into one of optimization: finding the peak in a vote-counting process over cells which contain the possible points of candidate lines. The detection algorithm can be computationally expensive both in the demands made upon the processor and on memory. Additionally, its effectiveness can be reduced in the presence of noise. Our first contribution consists in an evaluation of the use of a variation of the Radon Transform as a way of improving the effectiveness of line detection in the presence of noise. Then, parallel algorithms for variations of the Hough Transform and the Radon Transform for line detection are introduced. An algorithm for Parallel Monte Carlo Search applied to line detection is also introduced. Their algorithmic complexities are discussed. Finally, implementations on multi-GPU and multicore architectures are discussed.
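The vote-counting process at the core of the Hough Transform can be shown with a minimal accumulator over (θ, ρ) cells; grid resolutions and the synthetic test points are illustrative choices, not the paper's setup:

```python
import math

def hough_peak(points, n_theta=180, n_rho=100, rho_max=150.0):
    """Minimal Hough accumulator: each point votes, for every discretised
    angle theta, for the cell holding rho = x cos(theta) + y sin(theta).
    The peak cell identifies the dominant line in normal form."""
    acc = [[0] * n_rho for _ in range(n_theta)]
    for x, y in points:
        for i in range(n_theta):
            t = math.pi * i / n_theta
            rho = x * math.cos(t) + y * math.sin(t)
            j = int((rho + rho_max) / (2 * rho_max) * n_rho)
            if 0 <= j < n_rho:
                acc[i][j] += 1
    # Peak of the vote count (ties broken by larger indices).
    votes, i, j = max(
        (acc[i][j], i, j) for i in range(n_theta) for j in range(n_rho)
    )
    theta = math.pi * i / n_theta
    rho = 2 * rho_max * j / n_rho - rho_max  # lower edge of the rho cell
    return votes, theta, rho

# 20 points on the horizontal line y = 40.
pts = [(x, 40.0) for x in range(0, 100, 5)]
votes, theta, rho = hough_peak(pts)
print(votes, round(theta, 2), round(rho, 1))
```

The full accumulator scan over every (point, angle) pair is what makes the transform expensive, and it is this cost that the paper attacks with hierarchical decomposition and parallel Monte Carlo search.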
Multi-Index Monte Carlo (MIMC)
Haji Ali, Abdul Lateef
2016-01-06
We propose and analyze a novel Multi-Index Monte Carlo (MIMC) method for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The MIMC method is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Inspired by Giles's seminal work, instead of using first-order differences as in MLMC, we use in MIMC high-order mixed differences to reduce the variance of the hierarchical differences dramatically. Under standard assumptions on the convergence rates of the weak error, variance and work per sample, the optimal index set turns out to be of Total Degree (TD) type. When using such sets, MIMC yields new and improved complexity results, which are natural generalizations of Giles's MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence, O(TOL^-2).
Multi-Index Monte Carlo (MIMC)
Haji Ali, Abdul Lateef
2015-01-07
We propose and analyze a novel Multi-Index Monte Carlo (MIMC) method for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The MIMC method is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Inspired by Giles’s seminal work, instead of using first-order differences as in MLMC, we use in MIMC high-order mixed differences to reduce the variance of the hierarchical differences dramatically. Under standard assumptions on the convergence rates of the weak error, variance and work per sample, the optimal index set turns out to be of Total Degree (TD) type. When using such sets, MIMC yields new and improved complexity results, which are natural generalizations of Giles’s MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence.
International Nuclear Information System (INIS)
Ohta, Shigemi
1996-01-01
The Self-Test Monte Carlo (STMC) method resolves the main problems in using algebraic pseudo-random numbers for Monte Carlo (MC) calculations: that they can interfere with MC algorithms and lead to erroneous results, and that such errors often cannot be detected without a known exact solution. STMC is based on the good randomness of about 10^10 bits available from physical noise or transcendental numbers like π = 3.14…. Various bit modifiers are available to obtain more bits for applications that demand more than 10^10 random bits, such as lattice quantum chromodynamics (QCD). These modifiers are designed so that (a) each of them gives a bit sequence comparable in randomness to the original if used separately, and (b) their mutual interference when used jointly in a single MC calculation is adjustable. Intermediate data of the MC calculation itself are used to quantitatively test and adjust the mutual interference of the modifiers with respect to the MC algorithm. STMC is free of systematic error and gives reliable statistical errors. It can also be easily implemented on vector and parallel supercomputers. (author)
Algorithms for Monte Carlo calculations with fermions
International Nuclear Information System (INIS)
Weingarten, D.
1985-01-01
We describe a fermion Monte Carlo algorithm due to Petcher and the present author and another due to Fucito, Marinari, Parisi and Rebbi. For the first algorithm we estimate that the number of arithmetic operations required to evaluate a vacuum expectation value grows as N^11/m_q on an N^4 lattice with fixed periodicity in physical units and renormalized quark mass m_q. For the second algorithm the rate of growth is estimated to be N^8/m_q^2. Numerical experiments are presented comparing the two algorithms on a lattice of size 2^4. With a hopping constant K of 0.15 and β of 4.0 we find the number of operations for the second algorithm is about 2.7 times larger than for the first and about 13 000 times larger than for corresponding Monte Carlo calculations with a pure gauge theory. An estimate is given for the number of operations required for more realistic calculations by each algorithm on a larger lattice. (orig.)
Quantum Monte Carlo for atoms and molecules
International Nuclear Information System (INIS)
Barnett, R.N.
1989-11-01
The diffusion quantum Monte Carlo with fixed nodes (QMC) approach has been employed in studying energy eigenstates for 1-4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H₂, LiH, Li₂, and H₂O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li₂, and H₂O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schroedinger equation were studied extensively. These parameters include the time step, renormalization time and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90-100% of the correlation energy is obtained) have been computed with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions.
Wielandt acceleration for MCNP5 Monte Carlo eigenvalue calculations
International Nuclear Information System (INIS)
Brown, F.
2007-01-01
Monte Carlo criticality calculations use the power iteration method to determine the eigenvalue (k_eff) and eigenfunction (fission source distribution) of the fundamental mode. A recently proposed method for accelerating convergence of the Monte Carlo power iteration using Wielandt's method has been implemented in a test version of MCNP5. The method is shown to provide dramatic improvements in convergence rates and to greatly reduce the possibility of false convergence assessment. The method is effective and efficient, improving the Monte Carlo figure-of-merit for many problems. In addition, the method should eliminate most of the underprediction bias in confidence intervals for Monte Carlo criticality calculations. (authors)
Odd-flavor Simulations by the Hybrid Monte Carlo
Takaishi, Tetsuya; Takaishi, Tetsuya; De Forcrand, Philippe
2001-01-01
The standard hybrid Monte Carlo algorithm is known to simulate only even numbers of quark flavors in QCD. Simulations with odd numbers of flavors, however, can also be performed in the framework of the hybrid Monte Carlo algorithm, where the inverse of the fermion matrix is approximated by a polynomial. In this exploratory study we perform three-flavor QCD simulations. We compare the hybrid Monte Carlo algorithm with the R-algorithm, which also simulates odd-flavor systems but has step-size errors. We find that results from our hybrid Monte Carlo algorithm are in agreement with those from the R-algorithm obtained at very small step size.
Quantum Monte Carlo Endstation for Petascale Computing
Energy Technology Data Exchange (ETDEWEB)
Lubos Mitas
2011-01-26
The NCSU research group has been focused on accomplishing the key goals of this initiative: establishing a new generation of quantum Monte Carlo (QMC) computational tools as part of the Endstation petaflop initiative for use at the DOE ORNL computational facilities and by the computational electronic structure community at large; carrying out high-accuracy quantum Monte Carlo demonstration projects applying these tools to forefront electronic structure problems in molecular and solid systems; and expanding, explaining and enhancing the impact of these advanced computational approaches. In particular, we have developed a quantum Monte Carlo code (QWalk, www.qwalk.org) which was significantly expanded and optimized using funds from this support and at present has become an actively used tool in the petascale regime by ORNL researchers and beyond. These developments have been built upon efforts undertaken by the PI's group and collaborators over the period of the last decade. The code was optimized and tested extensively on a number of parallel architectures including the petaflop ORNL Jaguar machine. We have developed and redesigned a number of code modules, such as the evaluation of wave functions and orbitals, calculations of pfaffians, and the introduction of backflow coordinates, together with the overall organization of the code and random-walker distribution over multicore architectures. We have addressed several bottlenecks such as load balancing and verified the efficiency and accuracy of the calculations with the other groups of the Endstation team. The QWalk package contains about 50,000 lines of high-quality object-oriented C++ and also includes interfaces to data files from other conventional electronic structure codes such as Gamess, Gaussian, Crystal and others. This grant supported the PI for one month during summers, a full-time postdoc and partially three graduate students over the period of the grant duration; it has resulted in 13…
Ambrose, Jay H. (Inventor); Holmes, Rolland (Inventor)
2016-01-01
A heat pipe has an evaporator portion, a condenser portion, and at least one flexible portion that is sealingly coupled between the evaporator portion and the condenser portion. The flexible portion has a flexible tube and a flexible separator plate held in place within the flexible tube so as to divide the flexible tube into a gas-phase passage and a liquid-phase artery. The separator plate and flexible tube are configured such that the flexible portion is flexible in a plane that is perpendicular to the separator plate.
'Repeal Obamacare, a wicked problem'
Tanke, MAC; Zwart, Dorien
2017-01-01
One of the first decrees Donald Trump signed after his inauguration as President of the United States was an executive order to obstruct the funding of 'Obamacare' with 'everything possible within the law'. What do Trump and the Republicans want with Obamacare, and what could this…
Monte Carlo simulation of the ARGO
International Nuclear Information System (INIS)
Depaola, G.O.
1997-01-01
We use the GEANT Monte Carlo code to design an outline of the geometry and simulate the performance of the Argentine gamma-ray observer (ARGO), a telescope based on silicon strip detector technology. The γ-ray direction is determined by geometrical means and the angular resolution is calculated for small variations of the basic design. The results show that the angular resolution varies from a few degrees at low energies (∼50 MeV) to approximately 0.2° at high energies (>500 MeV). We also made simulations using as the incoming γ-rays the energy spectra of the quasars PKS 0208-512 and PKS 0528+134. Moreover, a method based on multiple-scattering theory is used to determine the incoming energy. We show that this method is applicable to energy spectra. (orig.)
San Carlos Apache Tribe - Energy Organizational Analysis
Energy Technology Data Exchange (ETDEWEB)
Rapp, James; Albert, Steve
2012-04-01
The San Carlos Apache Tribe (SCAT) was awarded $164,000 in late 2011 by the U.S. Department of Energy (U.S. DOE) Tribal Energy Program's "First Steps Toward Developing Renewable Energy and Energy Efficiency on Tribal Lands" Grant Program. This grant funded: the analysis and selection of preferred form(s) of tribal energy organization (this Energy Organization Analysis, hereinafter referred to as "EOA"); start-up staffing and other costs associated with the Phase 1 SCAT energy organization; an intern program; staff training; and tribal outreach and workshops regarding the new organization and SCAT energy programs and projects, including two annual tribal energy summits (2011 and 2012). This report documents the analysis and selection of preferred form(s) of a tribal energy organization.
CARLOS MARTÍ ARÍS: CABOS SUELTOS
Directory of Open Access Journals (Sweden)
Ángel Martínez García-Posada
2012-11-01
Like the wind in its own title, this autumnal book waves its diverse character and multiple directions: with the appearance of a classic compilation of presentations, lectures and articles, prompted in recent years by outside causes and elective affinities, this edition gathers commentaries, prefaces and notes from scattered pages by Professor Carlos Martí, and composes a silent order, a secret self-portrait, veiled behind the weave of a dense cartography of soft but secure ties.
Methods for Monte Carlo simulations of biomacromolecules.
Vitalis, Andreas; Pappu, Rohit V
2009-01-01
The state-of-the-art for Monte Carlo (MC) simulations of biomacromolecules is reviewed. Available methodologies for sampling conformational equilibria and associations of biomacromolecules in the canonical ensemble, given a continuum description of the solvent environment, are reviewed. Detailed sections are provided dealing with the choice of degrees of freedom, the efficiencies of MC algorithms and algorithmic peculiarities, as well as the optimization of simple movesets. The issue of introducing correlations into elementary MC moves, and the applicability of such methods to simulations of biomacromolecules, is discussed. A brief discussion of multicanonical methods and an overview of recent simulation work highlighting the potential of MC methods are also provided. It is argued that MC simulations, while underutilized by the biomacromolecular simulation community, hold promise for simulations of complex systems and phenomena that span multiple length scales, especially when used in conjunction with implicit solvation models or other coarse-graining strategies.
Variational Monte Carlo study of pentaquark states
Energy Technology Data Exchange (ETDEWEB)
Mark W. Paris
2005-07-01
Accurate numerical solution of the five-body Schrodinger equation is effected via variational Monte Carlo. The spectrum is assumed to exhibit a narrow resonance with strangeness S=+1. A fully antisymmetrized and pair-correlated five-quark wave function is obtained for the assumed non-relativistic Hamiltonian which has spin, isospin, and color dependent pair interactions and many-body confining terms which are fixed by the non-exotic spectra. Gauge field dynamics are modeled via flux tube exchange factors. The energy determined for the ground states with J=1/2 and negative (positive) parity is 2.22 GeV (2.50 GeV). A lower energy negative parity state is consistent with recent lattice results. The short-range structure of the state is analyzed via its diquark content.
Monte Carlo simulation of a CZT detector
International Nuclear Information System (INIS)
Chun, Sung Dae; Park, Se Hwan; Ha, Jang Ho; Kim, Han Soo; Cho, Yoon Ho; Kang, Sang Mook; Kim, Yong Kyun; Hong, Duk Geun
2008-01-01
The CZT detector is one of the most promising radiation detectors for hard X-ray and γ-ray measurement. The energy spectrum of a CZT detector has to be simulated to optimize the detector design. A CZT detector was fabricated with dimensions of 5×5×2 mm³. A Peltier cooler with a size of 40×40 mm² was installed below the fabricated CZT detector to reduce its operating temperature. Energy spectra were measured with the 59.5 keV γ-ray from ²⁴¹Am. A Monte Carlo code was developed to simulate the CZT energy spectrum measured with a planar-type CZT detector, and the result was compared with the measured one. The simulation was then extended to a CZT detector with strip electrodes. (author)
Linear stories in Carlo Scarpa's architectural drawings
DEFF Research Database (Denmark)
Dayer, Carolina
2017-01-01
An architect guides the viewer's imagination into another, not-yet-real world that is projected much like divinatory practices of reading palms or tarot cards. The magic-real field of facts and fictions coexisting in one realm can be understood as a confabulation. A confabulation brings together both fact and fiction through fārī, a fable, meaning 'to speak'. In the field of neurology, a confabulation may be when a mental patient convinces himself that he is in Venice, although he also admits that the town he is seeing through the window is Alexandria. He knows both places, he feels both places and, despite the contradiction, both places constitute his reality. The Venetian architect and storyteller par excellence Carlo Scarpa exercised the power of confabulations throughout his practice of drawing and building. While architectural historians have attempted to explain Scarpa's work as layers coming together, very little...
Monte Carlo and detector simulation in OOP
International Nuclear Information System (INIS)
Atwood, W.B.; Blankenbecler, R.; Kunz, P.; Burnett, T.; Storr, K.M.
1990-01-01
Object-Oriented Programming techniques are explored with an eye towards applications in High Energy Physics codes. Two prototype examples are given: MCOOP (a particle Monte Carlo generator) and GISMO (a detector simulation/analysis package). The OOP programmer does no explicit or detailed memory management or other bookkeeping chores; hence, the writing, modification, and extension of the code is considerably simplified. Inheritance can be used to simplify the class definitions as well as the instance variables and action methods of each class; thus the work required to add new classes, parameters, or new methods is minimal. The software industry is moving rapidly to OOP since it has been proven to improve programmer productivity, and promises even more for the future by providing truly reusable software. The High Energy Physics community clearly needs to follow this trend.
Geometric Monte Carlo and black Janus geometries
Energy Technology Data Exchange (ETDEWEB)
Bak, Dongsu, E-mail: dsbak@uos.ac.kr [Physics Department, University of Seoul, Seoul 02504 (Korea, Republic of); B.W. Lee Center for Fields, Gravity & Strings, Institute for Basic Sciences, Daejeon 34047 (Korea, Republic of); Kim, Chanju, E-mail: cjkim@ewha.ac.kr [Department of Physics, Ewha Womans University, Seoul 03760 (Korea, Republic of); Kim, Kyung Kiu, E-mail: kimkyungkiu@gmail.com [Department of Physics, Sejong University, Seoul 05006 (Korea, Republic of); Department of Physics, College of Science, Yonsei University, Seoul 03722 (Korea, Republic of); Min, Hyunsoo, E-mail: hsmin@uos.ac.kr [Physics Department, University of Seoul, Seoul 02504 (Korea, Republic of); Song, Jeong-Pil, E-mail: jeong_pil_song@brown.edu [Department of Chemistry, Brown University, Providence, RI 02912 (United States)
2017-04-10
We describe an application of the Monte Carlo method to the Janus deformation of the black brane background. We present numerical results for three and five dimensional black Janus geometries with planar and spherical interfaces. In particular, we argue that the 5D geometry with a spherical interface has an application in understanding the finite temperature bag-like QCD model via the AdS/CFT correspondence. The accuracy and convergence of the algorithm are evaluated with respect to the grid spacing. The systematic errors of the method are determined using an exact solution of 3D black Janus. This numerical approach for solving linear problems is unaffected by the initial guess of a trial solution and can handle an arbitrary geometry under various boundary conditions in the presence of source fields.
Morse Monte Carlo Radiation Transport Code System
Energy Technology Data Exchange (ETDEWEB)
Emmett, M.B.
1975-02-01
The report contains sections describing the MORSE and PICTURE codes, input descriptions, sample problems, derivations of the physical equations, and explanations of the various error messages. The MORSE code is a multipurpose neutron and gamma-ray transport Monte Carlo code. Time dependence for both shielding and criticality problems is provided. General three-dimensional geometry may be used with an albedo option available at any material surface. The PICTURE code provides aid in preparing correct input data for the combinatorial geometry package CG. It provides a printed view of arbitrary two-dimensional slices through the geometry. By inspecting these pictures one may determine if the geometry specified by the input cards is indeed the desired geometry. 23 refs. (WRF)
Monte Carlo modeling and meteor showers
International Nuclear Information System (INIS)
Kulikova, N.V.
1987-01-01
Prediction of short-lived increases in the cosmic dust influx, the concentration in the lower thermosphere of atoms and ions of meteoric origin, and the determination of the frequency of micrometeor impacts on spacecraft are all of scientific and practical interest, and all require adequate models of meteor showers at an early stage of their existence. A Monte Carlo model of meteor matter ejection from a parent body at any point of space was worked out by other researchers. This scheme is described. According to the scheme, the formation of ten well known meteor streams was simulated and the possibility of genetic affinity of each of them with the most probable parent comet was analyzed. Some of the results are presented.
Monte Carlo simulations of medical imaging modalities
Energy Technology Data Exchange (ETDEWEB)
Estes, G.P. [Los Alamos National Lab., NM (United States)
1998-09-01
Because continuous-energy Monte Carlo radiation transport calculations can be nearly exact simulations of physical reality (within data limitations, geometric approximations, transport algorithms, etc.), it follows that one should be able to closely approximate the results of many experiments from first-principles computations. This line of reasoning has led to various MCNP studies that involve simulations of medical imaging modalities and other visualization methods such as radiography, Anger camera, computerized tomography (CT) scans, and SABRINA particle track visualization. It is the intent of this paper to summarize some of these imaging simulations in the hope of stimulating further work, especially as computer power increases. Improved interpretation and prediction of medical images should ultimately lead to enhanced medical treatments. It is also reasonable to assume that such computations could be used to design new or more effective imaging instruments.
[Chagas Carlos Justiniano Ribeiro (1879-1934)].
Pays, J F
2009-12-01
The story of the life of Carlos Chagas is closely associated with the discovery of American Human Trypanosomiasis, caused by Trypanosoma cruzi. Indeed, he worked on this for almost all of his life. Nowadays he is considered as a national hero, but, when he was alive, he was criticised more severely in his own country than elsewhere, often unjustly and motivated by jealousy, but sometimes with good reason. Cases of Chagas disease in non-endemic countries became such a concern that public health measures have had to be taken. In this article we give a short account of the scientific journey of this man, who can be said to occupy his very own place in the history of Tropical Medicine.
Angular biasing in implicit Monte-Carlo
International Nuclear Information System (INIS)
Zimmerman, G.B.
1994-01-01
Calculations of indirect drive Inertial Confinement Fusion target experiments require an integrated approach in which laser irradiation and radiation transport in the hohlraum are solved simultaneously with the symmetry, implosion and burn of the fuel capsule. The Implicit Monte Carlo method has proved to be a valuable tool for the two dimensional radiation transport within the hohlraum, but the impact of statistical noise on the symmetric implosion of the small fuel capsule is difficult to overcome. We present an angular biasing technique in which an increased number of low weight photons are directed at the imploding capsule. For typical parameters this reduces the required computer time for an integrated calculation by a factor of 10. An additional factor of 5 can also be achieved by directing even smaller weight photons at the polar regions of the capsule where small mass zones are most sensitive to statistical noise
Monte Carlo simulations on SIMD computer architectures
International Nuclear Information System (INIS)
Burmester, C.P.; Gronsky, R.; Wille, L.T.
1992-01-01
In this paper algorithmic considerations regarding the implementation of various materials science applications of the Monte Carlo technique to single instruction multiple data (SIMD) computer architectures are presented. In particular, implementation of the Ising model with nearest, next nearest, and long range screened Coulomb interactions on the SIMD architecture MasPar MP-1 (DEC mpp-12000) series of massively parallel computers is demonstrated. Methods of code development which optimize processor array use and minimize inter-processor communication are presented including lattice partitioning and the use of processor array spanning tree structures for data reduction. Both geometric and algorithmic parallel approaches are utilized. Benchmarks in terms of Monte Carl updates per second for the MasPar architecture are presented and compared to values reported in the literature from comparable studies on other architectures
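The lattice-partitioning idea behind such SIMD implementations is commonly realized as a checkerboard (two-sublattice) decomposition: sites of one colour share no nearest-neighbour bonds, so every update within a half-sweep is independent and could execute in lockstep across a processor array. The sketch below is an illustrative serial pure-Python version of a checkerboard Metropolis sweep for the nearest-neighbour 2D Ising model (names and parameters are hypothetical, not the MasPar code):

```python
import math
import random

def checkerboard_sweep(spins, beta, rng):
    """One Metropolis sweep of the 2D nearest-neighbour Ising model,
    updating the two checkerboard sublattices in turn.

    Sites of the same colour share no bonds, so each half-sweep is
    embarrassingly parallel -- the property SIMD implementations exploit.
    `spins` is an L x L list of lists holding +1/-1; periodic boundaries.
    """
    L = len(spins)
    for colour in (0, 1):
        for i in range(L):
            for j in range(L):
                if (i + j) % 2 != colour:
                    continue
                # sum over the four nearest neighbours (periodic wrap)
                nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                      + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
                dE = 2.0 * spins[i][j] * nn  # energy cost of flipping
                if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
                    spins[i][j] = -spins[i][j]
    return spins
```

On SIMD hardware each inner double loop collapses into a single vectorized update of one sublattice, with only boundary columns exchanged between processors.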
Monte Carlo modelling of TRIGA research reactor
International Nuclear Information System (INIS)
El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.
2010-01-01
The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Études Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes for their use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparisons with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents in detail all components of the core with literally no physical approximation. Continuous energy cross-section data from the most recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3) as well as S(α, β) thermal neutron scattering functions distributed with the MCNP code were used. The cross-section libraries were generated using the NJOY99 system updated to its most recent patch file 'up259'. The consistency and accuracy of both the Monte Carlo simulation and the neutron transport physics were established by benchmarking against the TRIGA experiments. Core excess reactivity, total and integral control rod worths, as well as power peaking factors were used in the validation process. Results of the calculations are analysed and discussed.
Accelerated GPU based SPECT Monte Carlo simulations.
Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris
2016-06-07
Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: ⁹⁹ᵐTc, ¹¹¹In and ¹³¹I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Moreover, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency.
Monte carlo analysis of multicolour LED light engine
DEFF Research Database (Denmark)
Chakrabarti, Maumita; Thorseth, Anders; Jepsen, Jørgen
2015-01-01
A new Monte Carlo simulation is presented as a tool for analysing colour feedback systems, quantifying the colour uncertainties and achievable stability in a multicolour dynamic LED system. The Monte Carlo analysis presented here is based on an experimental investigation of a multicolour LED...
Projector Quantum Monte Carlo without minus-sign problem
Frick, M.; Raedt, H. De
Quantum Monte Carlo techniques often suffer from the so-called minus-sign problem. This paper explores a possibility of circumventing this fundamental problem by combining the Projector Quantum Monte Carlo method with the variational principle. Results are presented for the two-dimensional Hubbard model.
Multiple histogram method and static Monte Carlo sampling
Inda, M.A.; Frenkel, D.
2004-01-01
We describe an approach to using multiple-histogram methods in combination with static, biased Monte Carlo simulations. To illustrate this, we computed the force-extension curve of an athermal polymer from multiple histograms constructed in a series of static Rosenbluth Monte Carlo simulations.
Monte Carlo methods for pricing ﬁnancial options
Indian Academy of Sciences (India)
Monte Carlo methods have increasingly become a popular computational tool to price complex financial options, especially when the underlying space of assets has a large dimensionality, as the performance of other numerical methods typically suffers from the 'curse of dimensionality'. However, even Monte-Carlo ...
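As a minimal illustration of the idea (an assumed textbook example, not drawn from the article above), a European call option can be priced by simulating terminal asset prices under geometric Brownian motion and averaging the discounted payoff:

```python
import math
import random

def mc_european_call(S0, K, r, sigma, T, n_paths, seed=0):
    """Monte Carlo price of a European call under geometric Brownian motion.

    Simulates the terminal price S_T = S0 * exp((r - sigma^2/2) T + sigma sqrt(T) Z)
    for standard normal Z, and returns the discounted average payoff.
    """
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * T
    vol = sigma * math.sqrt(T)
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)           # one draw per path (terminal value only)
        ST = S0 * math.exp(drift + vol * z)
        total += max(ST - K, 0.0)          # call payoff
    return math.exp(-r * T) * total / n_paths
```

For S0 = K = 100, r = 0.05, σ = 0.2, T = 1, the estimate should land near the Black-Scholes value of about 10.45, with statistical error shrinking as 1/√n; in high dimensions the same recipe extends path-by-path, which is what makes the method attractive for basket and path-dependent options.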
A MONTE CARLO COMPARISON OF PARAMETRIC AND ...
African Journals Online (AJOL)
kernel nonparametric method is proposed and developed for estimating low flow quantiles. Based on annual minimum low flow data and Monte Carlo Simulation Experiments, the proposed model is compared with ... Carlo simulation technique using the criteria of the descriptive ability and predictive ability of a model.
New Approaches and Applications for Monte Carlo Perturbation Theory
Energy Technology Data Exchange (ETDEWEB)
Aufiero, Manuele; Bidaud, Adrien; Kotlyar, Dan; Leppänen, Jaakko; Palmiotti, Giuseppe; Salvatores, Massimo; Sen, Sonat; Shwageraus, Eugene; Fratoni, Massimiliano
2017-02-01
This paper presents some of the recent advancements in the extension of Monte Carlo Perturbation Theory methodologies and applications. In particular, the discussed problems involve burnup calculation, perturbation calculation based on continuous energy functions, and Monte Carlo Perturbation Theory in loosely coupled systems.
Forecasting with nonlinear time series model: A Monte-Carlo ...
African Journals Online (AJOL)
In this paper, we propose a new method of forecasting with nonlinear time series model using Monte-Carlo Bootstrap method. This new method gives better result in terms of forecast root mean squared error (RMSE) when compared with the traditional Bootstrap method and Monte-Carlo method of forecasting using a ...
Exponential convergence on a continuous Monte Carlo transport problem
International Nuclear Information System (INIS)
Booth, T.E.
1997-01-01
For more than a decade, it has been known that exponential convergence on discrete transport problems was possible using adaptive Monte Carlo techniques. An adaptive Monte Carlo method that empirically produces exponential convergence on a simple continuous transport problem is described
A Monte Carlo approach to combating delayed completion of ...
African Journals Online (AJOL)
The objective of this paper is to unveil the relevance of Monte Carlo critical path analysis in resolving problem of delays in scheduled completion of development projects. Commencing with deterministic network scheduling, Monte Carlo critical path analysis was advanced by assigning probability distributions to task times.
Debating the Social Thinking of Carlos Nelson Coutinho
Directory of Open Access Journals (Sweden)
Bruno Bruziguessi
2017-10-01
Full Text Available BRAZ, Marcelo; RODRIGUES, Mavi (Org.). Cultura, democracia e socialismo: as idéias de Carlos Nelson Coutinho em debate. [Culture, democracy and socialism: The ideas of Carlos Nelson Coutinho in debate]. Rio de Janeiro: Mórula, 2016. 248 p.
Quantum Monte Carlo method for attractive Coulomb potentials
Kole, J.S.; Raedt, H. De
2001-01-01
Starting from an exact lower bound on the imaginary-time propagator, we present a path-integral quantum Monte Carlo method that can handle singular attractive potentials. We illustrate the basic ideas of this quantum Monte Carlo algorithm by simulating the ground state of hydrogen and helium.
Forest canopy BRDF simulation using Monte Carlo method
Huang, J.; Wu, B.; Zeng, Y.; Tian, Y.
2006-01-01
Monte Carlo method is a random statistic method, which has been widely used to simulate the Bidirectional Reflectance Distribution Function (BRDF) of vegetation canopy in the field of visible remote sensing. The random process between photons and forest canopy was designed using Monte Carlo method.
Crop canopy BRDF simulation and analysis using Monte Carlo method
Huang, J.; Wu, B.; Tian, Y.; Zeng, Y.
2006-01-01
This author designs the random process between photons and crop canopy. A Monte Carlo model has been developed to simulate the Bi-directional Reflectance Distribution Function (BRDF) of crop canopy. Comparing Monte Carlo model to MCRM model, this paper analyzes the variations of different LAD and
Efficiency and accuracy of Monte Carlo (importance) sampling
Waarts, P.H.
2003-01-01
Monte Carlo analysis is often regarded as the simplest and most accurate reliability method, and it is also the most transparent one. The only problem is the trade-off between accuracy and efficiency: Monte Carlo becomes less efficient, or less accurate, when very low probabilities are to be computed.
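The low-probability regime mentioned above is exactly where importance sampling helps: draw from a shifted distribution that visits the rare region often, then reweight each hit by the likelihood ratio. A hypothetical sketch (an assumed standard-normal tail example, not the paper's application) estimating P(Z > t) for Z ~ N(0, 1) is:

```python
import math
import random

def is_tail_probability(threshold, n, shift, seed=1):
    """Importance-sampling estimate of P(Z > threshold) for Z ~ N(0, 1).

    Samples are drawn from the shifted proposal N(shift, 1); each sample
    that lands in the tail contributes the likelihood ratio
    phi(x) / phi(x - shift) = exp(-shift*x + shift^2/2).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)          # proposal centred on the rare region
        if x > threshold:
            total += math.exp(-shift * x + 0.5 * shift * shift)
    return total / n
```

A crude Monte Carlo estimate of P(Z > 4) ≈ 3.2 × 10⁻⁵ with 20 000 samples would typically see zero hits; with the proposal mean shifted to the threshold, roughly half the samples hit the tail and the reweighted estimator recovers the probability with small relative error.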
Nuclear data treatment for SAM-CE Monte Carlo calculations
International Nuclear Information System (INIS)
Lichtenstein, H.; Troubetzkoy, E.S.; Beer, M.
1980-01-01
The treatment of nuclear data by the SAM-CE Monte Carlo code system is presented. The retrieval of neutron, gamma production, and photon data from the ENDF/B files is described. Integral cross sections as well as differential data are utilized in the Monte Carlo calculations, and the processing procedures for the requisite data are summarized.
Approximating Sievert Integrals to Monte Carlo Methods to calculate ...
African Journals Online (AJOL)
Radiation dose rates along the transverse axis of a miniature ¹⁹²Ir source were calculated using the Sievert Integral (considered simple and inaccurate), and by the sophisticated and accurate Monte Carlo method. Using data obtained by the Monte Carlo method as benchmark and applying least squares regression curve ...
On the Markov Chain Monte Carlo (MCMC) method
Indian Academy of Sciences (India)
In this article, we give an introduction to Monte Carlo techniques with special emphasis on Markov Chain Monte Carlo (MCMC). Since the latter needs Markov chains with state space that is R or R^d, and most textbooks on Markov chains do not discuss such chains, we have included a short appendix that gives basic ...
Neutron point-flux calculation by Monte Carlo
International Nuclear Information System (INIS)
Eichhorn, M.
1986-04-01
A survey of the usual methods for estimating flux at a point is given. The associated variance-reducing techniques in direct Monte Carlo games are explained. The multigroup Monte Carlo codes MC for critical systems and PUNKT for point-source/point-detector systems are presented, and problems in applying the codes to practical tasks are discussed. (author)
Barão, Fernando; Nakagawa, Masayuki; Távora, Luis; Vaz, Pedro
2001-01-01
This book focuses on the state of the art of Monte Carlo methods in radiation physics and particle transport simulation and applications, the latter involving, in particular, the use and development of electron-gamma, neutron-gamma and hadronic codes. Besides the basic theory and the methods employed, special attention is paid to algorithm development for modeling, and to the analysis of experiments and measurements in a variety of fields ranging from particle to medical physics.
Research on perturbation based Monte Carlo reactor criticality search
International Nuclear Information System (INIS)
Li Zeguang; Wang Kan; Li Yangliu; Deng Jingkang
2013-01-01
Criticality search is a very important aspect of reactor physics analysis. Owing to the advantages of the Monte Carlo method and the development of computer technologies, Monte Carlo criticality search is becoming more and more necessary and feasible. The traditional Monte Carlo criticality search method suffers from the large number of individual criticality runs and from the uncertainty and fluctuation of Monte Carlo results. A new Monte Carlo criticality search method based on perturbation calculation is put forward in this paper to overcome the disadvantages of the traditional method. By using only one criticality run to obtain the initial k_eff and the differential coefficients of the concerned parameter, a polynomial estimator of the k_eff response function is solved to obtain the critical value of the concerned parameter. The feasibility of this method was tested. The results show that the accuracy and efficiency of the perturbation-based criticality search method are quite inspiring and that the method overcomes the disadvantages of the traditional one. (authors)
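The final step of such a search, solving a low-order polynomial estimator of k_eff for the critical parameter value, can be sketched as follows. This is a hypothetical illustration (not the authors' code) using a second-order Taylor expansion of k_eff around the single criticality run at p0; all coefficient values are made up:

```python
import math

def critical_parameter(p0, k0, dk_dp, d2k_dp2=0.0):
    """Estimate the parameter value p* at which k_eff(p*) = 1.

    Inputs come from a single criticality run at p0: the estimated k0
    and the first (and optionally second) derivative of k_eff with
    respect to the search parameter. Solves the Taylor polynomial
    k0 + dk_dp*dp + 0.5*d2k_dp2*dp^2 = 1 for the shift dp.
    """
    a = 0.5 * d2k_dp2
    b = dk_dp
    c = k0 - 1.0
    if abs(a) < 1e-12:
        return p0 - c / b                  # linear estimator
    disc = math.sqrt(b * b - 4.0 * a * c)
    roots = ((-b + disc) / (2.0 * a), (-b - disc) / (2.0 * a))
    dp = min(roots, key=abs)               # root closest to the expansion point
    return p0 + dp
```

For example, with k_eff = 1.02 at p0 and a slope of -0.004 per unit of the parameter (say, a soluble absorber concentration), the linear estimator places the critical value 5 units above p0; the quadratic term refines this when the response is visibly curved over the search range.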
Directory of Open Access Journals (Sweden)
Mario Augusto Pagnotta
2018-01-01
Full Text Available During his lifetime, Professor Scarascia Mugnozza contributed significantly to the field of population genetics, his research ranging from wheat breeding in arid and semi-arid regions, to the conservation of forest ecosystems. He promoted regional networks across the Mediterranean, linking science and policy at national and international levels, focusing on the conservation and sustainable use of genetic diversity. In addition, he worked intensely on improvement of knowledge bases, raising awareness on how research could inform international agreements, and thus lead to evidence-based policies. The loss of biodiversity and the resulting implications for environmental, socio-economic, political, and ethical management of plant genetic resources were of major concern, and he highlighted the absolute necessity for conservation of genetic diversity, stressing the importance of building positive feedback linkages among ex situ, in situ, on-farm conservation strategies, and participatory approaches at the community level. His work emphasized the importance of access to diverse plant genetic resources by researchers and farmers, and promoted equitable access to genetic resources through international frameworks. Farmers’ rights, especially those in centres of origin and diversity of cultivated plants, were a key concern for Professor Scarascia Mugnozza, as their access to germplasm needed to be secured as custodians of diversity and the knowledge of how to use these vital resources. Consequently, he promoted the development of North-South cooperation mechanisms and platforms, including technology transfer and the sharing of information of how to maintain and use genetic resources sustainably.
Carlos Gardel, el patrimonio que sonríe [Carlos Gardel, the heritage that smiles]
Directory of Open Access Journals (Sweden)
María Julia Carozzi
2003-10-01
Full Text Available Analysing the ways in which porteños remembered Carlos Gardel in the month of the 68th anniversary of his death, the article seeks to account for how the inhabitants of the city of Buenos Aires conceive what is memorable, identify what they recognize as their own as porteños, and single out what evokes feelings of collective belonging. The work points out the central role that the miracle, mimesis, and direct contact with his body play in preserving the memory of Gardel, who embodies both the tango and its worldwide success. The case of Gardel is presented as an example of the organization of memory and identity, among porteños in particular and Argentines in general, around real persons to whom extraordinary value is assigned. Because this memory is deeply rooted in concrete human bodies, it renders problematic the local adoption of the globally accepted concepts of historical and cultural heritage.
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Energy Technology Data Exchange (ETDEWEB)
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
Present status and future prospects of neutronics Monte Carlo
International Nuclear Information System (INIS)
Gelbard, E.M.
1990-01-01
It is fair to say that the Monte Carlo method, over the last decade, has grown steadily more important as a neutronics computational tool. Apparently this has happened for assorted reasons. Thus, for example, as the power of computers has increased, the cost of the method has dropped, steadily becoming less and less of an obstacle to its use. In addition, more and more sophisticated input processors have now made it feasible to model extremely complicated systems routinely with really remarkable fidelity. Finally, as we demand greater and greater precision in reactor calculations, Monte Carlo is often found to be the only method accurate enough for use in benchmarking. Cross section uncertainties are now almost the only inherent limitations in our Monte Carlo capabilities. For this reason Monte Carlo has come to occupy a special position, interposed between experiment and other computational techniques. More and more often deterministic methods are tested by comparison with Monte Carlo, and cross sections are tested by comparing Monte Carlo with experiment. In this way one can distinguish very clearly between errors due to flaws in our numerical methods, and those due to deficiencies in cross section files. The special role of Monte Carlo as a benchmarking tool, often the only available benchmarking tool, makes it crucially important that this method should be polished to perfection. Problems relating to eigenvalue calculations, variance reduction and the use of advanced computers are reviewed in this paper. (author)
Simulation and the Monte Carlo Method, Student Solutions Manual
Rubinstein, Reuven Y
2012-01-01
This accessible new edition explores the major topics in Monte Carlo simulation Simulation and the Monte Carlo Method, Second Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition over twenty-five years ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, suc
Multiple Monte Carlo Testing with Applications in Spatial Point Processes
DEFF Research Database (Denmark)
Mrkvička, Tomáš; Myllymäki, Mari; Hahn, Ute
The rank envelope test (Myllymäki et al., Global envelope tests for spatial processes, arXiv:1307.0239 [stat.ME]) is proposed as a solution to the multiple testing problem for Monte Carlo tests. Three different situations are recognized: 1) a few univariate Monte Carlo tests, 2) a Monte Carlo test ...... for one group of point patterns, comparison of several groups of point patterns, test of dependence of components in a multi-type point pattern, and test of Boolean assumption for random closed sets....
The Monte Carlo method the method of statistical trials
Shreider, YuA
1966-01-01
The Monte Carlo Method: The Method of Statistical Trials is a systematic account of the fundamental concepts and techniques of the Monte Carlo method, together with its range of applications. Some of these applications include the computation of definite integrals, neutron physics, and in the investigation of servicing processes. This volume is comprised of seven chapters and begins with an overview of the basic features of the Monte Carlo method and typical examples of its application to simple problems in computational mathematics. The next chapter examines the computation of multi-dimensio
[Carlos Chagas Filho's choice of biological physics: reason and motivations].
de Almeida, Darcy Fontoura
2008-01-01
This study investigates the reasons and motivations behind Carlos Chagas Filho's choice to abandon the line of study developed by his father, Carlos Chagas, and brother, Evandro Chagas, both of whom had very successful careers researching tropical diseases. Though Carlos Chagas Filho first worked on anatomical pathology, he suddenly shifted his attention to the physicochemical aspects of vital processes. Extant sources show that a number of unforeseen circumstances took place early in Chagas Filho's education. There was a chance he could carry out work of similar import in a different area, and he set his sights, with uncommon luck, on the introduction of scientific research at the university.
Quantum Monte Carlo on graphical processing units
Anderson, Amos G.; Goddard, William A.; Schröder, Peter
2007-08-01
Quantum Monte Carlo (QMC) is among the most accurate methods for solving the time independent Schrödinger equation. Unfortunately, the method is very expensive and requires a vast array of computing resources in order to obtain results of a reasonable convergence level. On the other hand, the method is not only easily parallelizable across CPU clusters, but as we report here, it also has a high degree of data parallelism. This facilitates the use of recent technological advances in Graphical Processing Units (GPUs), a powerful type of processor well known to computer gamers. In this paper we report on an end-to-end QMC application with core elements of the algorithm running on a GPU. With individual kernels achieving as much as 30× speed up, the overall application performs at up to 6× faster relative to an optimized CPU implementation, yet requires only a modest increase in hardware cost. This demonstrates the speedup improvements possible for QMC in running on advanced hardware, thus exploring a path toward providing QMC level accuracy as a more standard tool. The major current challenge in running codes of this type on the GPU arises from the lack of fully compliant IEEE floating point implementations. To achieve better accuracy we propose the use of the Kahan summation formula in matrix multiplications. While this drops overall performance, we demonstrate that the proposed new algorithm can match CPU single precision.
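The Kahan summation formula the authors propose for matrix multiplications is a small, standard compensated-summation algorithm; a generic sketch (not taken from the paper's GPU kernels):

```python
def kahan_sum(values):
    """Compensated (Kahan) summation: carry a running correction term c
    that recovers the low-order bits lost in each floating-point add."""
    total = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for v in values:
        y = v - c            # apply the correction carried from last step
        t = total + y        # low-order digits of y may be lost here
        c = (t - total) - y  # algebraically zero; recovers what was lost
        total = t
    return total
```

For long accumulations the error of Kahan summation stays bounded by a few ulps regardless of the number of terms, whereas naive summation accumulates error proportional to the term count; this is why it helps in single precision on hardware lacking fully compliant IEEE arithmetic.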
Monte Carlo simulations for heavy ion dosimetry
Energy Technology Data Exchange (ETDEWEB)
Geithner, O.
2006-07-26
Water-to-air stopping power ratio (s_w,air) calculations for the ionization chamber dosimetry of clinically relevant ion beams with initial energies from 50 to 450 MeV/u have been performed using the Monte Carlo technique. To simulate the transport of a particle in water, the computer code SHIELD-HIT v2 was used, a substantially modified version of its predecessor SHIELD-HIT v1. The code was partially rewritten, replacing formerly used single precision variables with double precision variables. The lowest particle transport specific energy was decreased from 1 MeV/u down to 10 keV/u by modifying the Bethe-Bloch formula, thus widening the code's range for medical dosimetry applications. Optional MSTAR and ICRU-73 stopping power data were included. The fragmentation model was verified using all available experimental data and some parameters were adjusted. The present code version shows excellent agreement with experimental data. In addition to the calculations of stopping power ratios, the influence of fragments and I-values on s_w,air for carbon ion beams was investigated. The value of s_w,air deviates by as much as 2.3% at the Bragg peak from the constant value of 1.130 recommended by TRS-398 for an energy of 50 MeV/u. (orig.)
The GENIE neutrino Monte Carlo generator
International Nuclear Information System (INIS)
Andreopoulos, C.; Bell, A.; Bhattacharya, D.; Cavanna, F.; Dobson, J.; Dytman, S.; Gallagher, H.; Guzowski, P.; Hatcher, R.; Kehayias, P.; Meregaglia, A.; Naples, D.; Pearce, G.; Rubbia, A.; Whalley, M.; Yang, T.
2010-01-01
GENIE is a new neutrino event generator for the experimental neutrino physics community. The goal of the project is to develop a 'canonical' neutrino interaction physics Monte Carlo whose validity extends to all nuclear targets and neutrino flavors from MeV to PeV energy scales. Currently, emphasis is on the few-GeV energy range, the challenging boundary between the non-perturbative and perturbative regimes, which is relevant for the current and near future long-baseline precision neutrino experiments using accelerator-made beams. The design of the package addresses many challenges unique to neutrino simulations and supports the full life-cycle of simulation and generator-related analysis tasks. GENIE is a large-scale software system, consisting of ∼120000 lines of C++ code, featuring a modern object-oriented design and extensively validated physics content. The first official physics release of GENIE was made available in August 2007, and at the time of the writing of this article, the latest available version was v2.4.4.
Pseudopotentials for quantum-Monte-Carlo-calculations
International Nuclear Information System (INIS)
Burkatzki, Mark Thomas
2008-01-01
The author presents scalar-relativistic energy-consistent Hartree-Fock pseudopotentials for the main-group and 3d-transition-metal elements. The pseudopotentials do not exhibit a singularity at the nucleus and are therefore suitable for quantum Monte Carlo (QMC) calculations. The author demonstrates their transferability through extensive benchmark calculations of atomic excitation spectra as well as molecular properties. In particular, the author computes the vibrational frequencies and binding energies of 26 first- and second-row diatomic molecules using post-Hartree-Fock methods, finding excellent agreement with the corresponding all-electron values. The author shows that the presented pseudopotentials are more accurate than other existing pseudopotentials constructed specifically for QMC. The localization error and the efficiency in QMC are discussed. The author also presents QMC calculations for selected atomic and diatomic 3d-transition-metal systems. Finally, valence basis sets of different sizes (VnZ with n=D,T,Q,5 for the 1st and 2nd rows; with n=D,T for the 3rd to 5th rows; with n=D,T,Q for the 3d transition metals) optimized for the pseudopotentials are presented. (orig.)
Parallel Monte Carlo Simulation of Aerosol Dynamics
Directory of Open Access Journals (Sweden)
Kun Zhou
2014-02-01
Full Text Available A highly efficient Monte Carlo (MC) algorithm is developed for the numerical simulation of aerosol dynamics, that is, nucleation, surface growth, and coagulation. Nucleation and surface growth are handled with deterministic means, while coagulation is simulated with a stochastic method (Marcus-Lushnikov stochastic process). Operator splitting techniques are used to synthesize the deterministic and stochastic parts in the algorithm. The algorithm is parallelized using the Message Passing Interface (MPI). The parallel computing efficiency is investigated through numerical examples. Near 60% parallel efficiency is achieved for the maximum testing case with 3.7 million MC particles running on 93 parallel computing nodes. The algorithm is verified through simulating various testing cases and comparing the simulation results with available analytical and/or other numerical solutions. Generally, it is found that only a small number (hundreds or thousands) of MC particles is necessary to accurately predict the aerosol particle number density, volume fraction, and so forth, that is, low-order moments of the Particle Size Distribution (PSD) function. Accurately predicting the high-order moments of the PSD requires a dramatic increase in the number of MC particles.
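A minimal serial sketch of the stochastic coagulation step (a direct, Gillespie-type simulation of the Marcus-Lushnikov process, assuming a constant kernel for simplicity; the function name and setup are invented for illustration and omit the deterministic nucleation/growth parts):

```python
import random

def coagulate(n0, t_end, seed=1):
    """Direct simulation of the Marcus-Lushnikov coagulation process with a
    constant kernel: every pair of the n current particles merges at the same
    rate, so the waiting time between events is exponential with the total
    pair rate, and the merging pair is chosen uniformly."""
    rng = random.Random(seed)
    vols = [1.0] * n0                      # monodisperse initial population
    t = 0.0
    while len(vols) > 1:
        n = len(vols)
        total_rate = 0.5 * n * (n - 1)     # constant kernel K(v, w) = 1
        t += rng.expovariate(total_rate)
        if t > t_end:
            break
        i, j = rng.sample(range(n), 2)     # uniform pair for constant kernel
        merged = vols[i] + vols[j]
        for k in sorted((i, j), reverse=True):
            vols.pop(k)
        vols.append(merged)                # merger conserves total volume
    return vols
```

For a non-constant kernel the total rate and the pair selection would have to account for the pairwise kernel values, which is where the cost (and the incentive to parallelize) comes from.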
Rare event simulation using Monte Carlo methods
Rubino, Gerardo
2009-01-01
In a probabilistic model, a rare event is an event with a very small probability of occurrence. Forecasting rare events is a formidable task, but it is important in many areas: for instance, a catastrophic failure in a transport system or a nuclear power plant, or the failure of an information processing system in a bank or in the communication network of a group of banks, can lead to financial losses. Being able to evaluate the probability of rare events is therefore a critical issue. Monte Carlo methods, i.e. the simulation of the corresponding models, are used to analyze rare events. This book sets out to present the mathematical tools available for the efficient simulation of rare events. Importance sampling and splitting are presented along with an exposition of how to apply these tools to a variety of fields, ranging from performance and dependability evaluation of complex systems, typically in computer science or in telecommunications, to chemical reaction analysis in biology or particle transport in physics. ...
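Importance sampling, one of the two tools the book presents, can be sketched on a textbook case not taken from the book: estimating the Gaussian tail probability P(X > 5), which crude Monte Carlo essentially never hits.

```python
import math, random

def rare_tail_is(t, n, seed=0):
    """Importance-sampling estimate of p = P(X > t) for X ~ N(0, 1):
    draw from the shifted proposal N(t, 1), which lands in the rare region
    about half the time, and reweight each hit by the likelihood ratio
    phi(y) / phi(y - t) = exp(-t*y + t*t/2)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        y = rng.gauss(t, 1.0)
        if y > t:
            acc += math.exp(-t * y + 0.5 * t * t)
    return acc / n

def rare_tail_naive(t, n, seed=0):
    """Crude Monte Carlo for comparison: for t = 5 it almost always returns
    0.0, since P(X > 5) is about 2.9e-7."""
    rng = random.Random(seed)
    return sum(rng.gauss(0.0, 1.0) > t for _ in range(n)) / n
```

The shifted proposal reduces the relative variance from astronomical to order one, so a few hundred thousand samples give a few-percent relative error.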
Atomistic Monte Carlo Simulation of Lipid Membranes
Directory of Open Access Journals (Sweden)
Daniel Wüstner
2014-01-01
Full Text Available Biological membranes are complex assemblies of many different molecules, whose analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction to the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate, for a concrete example, how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches. We use our recently devised chain breakage/closure (CBC) local move set in the bond-/torsion-angle space with the constant-bond-length approximation (CBLA) for the phospholipid dipalmitoylphosphatidylcholine (DPPC). We demonstrate rapid conformational equilibration for a single DPPC molecule, as assessed by calculation of molecular energies and entropies. We also show the transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol.
A continuation multilevel Monte Carlo algorithm
Collier, Nathan
2014-09-05
We propose a novel Continuation Multi Level Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending when the required error tolerance is satisfied. CMLMC assumes discretization hierarchies that are defined a priori for each level and are geometrically refined across levels. The actual choice of computational work across levels is based on parametric models for the average cost per sample and the corresponding variance and weak error. These parameters are calibrated using Bayesian estimation, taking particular notice of the deepest levels of the discretization hierarchy, where only few realizations are available to produce the estimates. The resulting CMLMC estimator exhibits a non-trivial splitting between bias and statistical contributions. We also show the asymptotic normality of the statistical error in the MLMC estimator and justify in this way our error estimate that allows prescribing both required accuracy and confidence in the final result. Numerical results substantiate the above results and illustrate the corresponding computational savings in examples that are described in terms of differential equations either driven by random measures or with random coefficients. © 2014, Springer Science+Business Media Dordrecht.
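The telescoping MLMC identity E[P_L] = E[P_0] + sum_{l=1..L} E[P_l - P_{l-1}] that CMLMC builds on can be sketched on a simple geometric-Brownian-motion example. This is a generic illustration, not the CMLMC algorithm itself: the continuation over tolerances and the Bayesian calibration of the cost/variance models are omitted, and the sample counts below are fixed by hand.

```python
import math, random

def euler_pair(a, b, x0, T, nf, rng):
    """One coupled (fine, coarse) Euler path pair for dX = a*X dt + b*X dW.
    The coarse path takes half as many steps, driven by the summed Brownian
    increments of the fine path, which is what makes the correction variance
    shrink with level."""
    dtf = T / nf
    xf = xc = x0
    for _ in range(nf // 2):
        dw1 = rng.gauss(0.0, math.sqrt(dtf))
        dw2 = rng.gauss(0.0, math.sqrt(dtf))
        xf += a * xf * dtf + b * xf * dw1
        xf += a * xf * dtf + b * xf * dw2
        xc += a * xc * (2 * dtf) + b * xc * (dw1 + dw2)
    return xf, xc

def mlmc(a, b, x0, T, n_per_level, seed=0):
    """Telescoping MLMC estimator of E[X_T], with 2**level fine steps."""
    rng = random.Random(seed)
    est = 0.0
    for level, n in enumerate(n_per_level):
        nf = 2 ** level
        acc = 0.0
        for _ in range(n):
            if level == 0:          # single-step Euler on the coarsest level
                dw = rng.gauss(0.0, math.sqrt(T))
                acc += x0 + a * x0 * T + b * x0 * dw
            else:                   # coupled correction E[P_l - P_{l-1}]
                xf, xc = euler_pair(a, b, x0, T, nf, rng)
                acc += xf - xc
        est += acc / n
    return est
```

Because the corrections have small variance, most samples can be spent on the cheap coarse level, which is the source of the complexity gain over single-level Monte Carlo.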
Los motivos del lobo. Entrevista con Carlos Alazraki
Guzmán, Héctor; Alazraki, Carlos
1995-01-01
Interview with the Mexican advertising executive Carlos Alazraki, touching on humour and Mexican culture in advertising. Includes visual work by the painter Davis Birks, reproduced in black and white.
On the Markov Chain Monte Carlo (MCMC) method
Indian Academy of Sciences (India)
Abstract. Markov Chain Monte Carlo (MCMC) is a popular method used to generate samples from arbitrary distributions, which may be specified indirectly. In this article, we give an introduction to this method along with some examples.
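A minimal random-walk Metropolis sampler, the canonical MCMC example for a target specified only up to its normalizing constant (a generic sketch, not from the article):

```python
import math, random

def metropolis(log_target, x0, n, step=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + N(0, step^2) and accept with
    probability min(1, pi(x')/pi(x)).  Only the unnormalized log-density is
    needed, which is exactly the 'specified indirectly' setting of MCMC."""
    rng = random.Random(seed)
    x = x0
    chain = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)
        log_alpha = log_target(xp) - log_target(x)
        if log_alpha >= 0 or rng.random() < math.exp(log_alpha):
            x = xp                 # accept the proposal
        chain.append(x)            # rejected moves repeat the current state
    return chain

# Example target: standard normal, specified only up to a constant.
chain = metropolis(lambda x: -0.5 * x * x, x0=3.0, n=50000)
```

After discarding a burn-in prefix, the empirical mean and variance of the chain approximate those of the target distribution.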
Usefulness of the Monte Carlo method in reliability calculations
International Nuclear Information System (INIS)
Lanore, J.M.; Kalli, H.
1977-01-01
Three examples of reliability Monte Carlo programs developed in the LEP (Laboratory for Radiation Shielding Studies in the Nuclear Research Center at Saclay) are presented. First, an uncertainty analysis is given for a simplified spray system; a Monte Carlo program, PATREC-MC, has been written to solve the problem with the system components given in the fault tree representation. The second program, MONARC 2, has been written to solve the problem of complex system reliability by Monte Carlo simulation; here again the system (a residual heat removal system) is given in the fault tree representation. Third, the Monte Carlo program MONARC was used instead of the Markov diagram to solve the simulation problem of an electric power supply including two nets and two stand-by diesels.
3D SURVEY OF THE SAN CARLO THEATRE IN NAPLES
Directory of Open Access Journals (Sweden)
V. Cappellini
2012-09-01
Full Text Available The article reports the approach developed for the 3D modeling of an important monument in Naples: San Carlo Theatre, the oldest Opera House in Europe recognized as a UNESCO World Heritage site.
Carlo Ginzburg: anomaalia viitab normile / intervjueerinud Marek Tamm
Ginzburg, Carlo, 1939-
2014-01-01
An interview with the Italian historian Carlo Ginzburg on the occasion of the Estonian publication of his book "Ükski saar pole saar : neli pilguheitu inglise kirjandusele globaalsest vaatenurgast" ("No Island Is an Island: Four Glances at English Literature in a World Perspective"). The book was published by Tallinn University Press.
The Monte Carlo simulation of the Ladon photon beam facility
International Nuclear Information System (INIS)
Strangio, C.
1976-01-01
The backward Compton scattering of laser light against high-energy electrons has been simulated with a Monte Carlo method. The main features of the produced photon beam are reported, as well as a careful description of the numerical calculation.
Monte Carlo methods for the self-avoiding walk
International Nuclear Information System (INIS)
Janse van Rensburg, E J
2009-01-01
The numerical simulation of self-avoiding walks remains a significant component in the study of random objects in lattices. In this review, I give a comprehensive overview of the current state of Monte Carlo simulations of models of self-avoiding walks. The self-avoiding walk model is revisited, and the motivations for Monte Carlo simulations of this model are discussed. Efficient sampling of self-avoiding walks remains an elusive objective, but significant progress has been made over the last three decades. The model still poses challenging numerical questions however, and I review specific Monte Carlo methods for improved sampling including general Monte Carlo techniques such as Metropolis sampling, umbrella sampling and multiple Markov Chain sampling. In addition, specific static and dynamic algorithms for walks are presented, and I give an overview of recent innovations in this field, including algorithms such as flatPERM, flatGARM and flatGAS. (topical review)
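The pivot algorithm is one of the standard dynamic methods for sampling self-avoiding walks; a bare-bones square-lattice sketch (a generic illustration, not one of the flatPERM/flatGARM/flatGAS algorithms reviewed):

```python
import random

# The point-group symmetries of the square lattice (rotations and
# reflections, excluding the identity), acting on displacement vectors.
SYMMETRIES = [
    lambda x, y: (-y, x), lambda x, y: (-x, -y), lambda x, y: (y, -x),
    lambda x, y: (-x, y), lambda x, y: (x, -y),
    lambda x, y: (y, x),  lambda x, y: (-y, -x),
]

def pivot_step(walk, rng):
    """One pivot move: choose an interior pivot site and a random lattice
    symmetry, apply it to the tail of the walk about the pivot, and accept
    the move only if the transformed walk is still self-avoiding."""
    p = rng.randrange(1, len(walk) - 1)
    sym = rng.choice(SYMMETRIES)
    px, py = walk[p]
    new_tail = []
    for x, y in walk[p + 1:]:
        dx, dy = sym(x - px, y - py)
        new_tail.append((px + dx, py + dy))
    candidate = walk[:p + 1] + new_tail
    if len(set(candidate)) == len(candidate):   # self-avoidance check
        return candidate
    return walk

rng = random.Random(42)
walk = [(i, 0) for i in range(21)]   # straight 20-step walk as initial state
for _ in range(2000):
    walk = pivot_step(walk, rng)
```

Despite a low acceptance rate for long walks, accepted pivots change global observables such as the end-to-end distance drastically, which is why the algorithm decorrelates so quickly.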
Bayesian phylogeny analysis via stochastic approximation Monte Carlo
Cheon, Sooyoung
2009-11-01
Monte Carlo methods have received much attention in the recent literature of phylogeny analysis. However, the conventional Markov chain Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, tend to get trapped in a local mode in simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo (SAMC) algorithm, to Bayesian phylogeny analysis. Our method is compared with two popular Bayesian phylogeny software packages, BAMBE and MrBayes, on simulated and real datasets. The numerical results indicate that our method outperforms BAMBE and MrBayes. Among the three methods, SAMC produces the consensus trees with the highest similarity to the true trees and the model parameter estimates with the smallest mean square errors, yet costs the least CPU time. © 2009 Elsevier Inc. All rights reserved.
Combinatorial nuclear level density by a Monte Carlo method
International Nuclear Information System (INIS)
Cerf, N.
1994-01-01
We present a new combinatorial method for the calculation of the nuclear level density. It is based on a Monte Carlo technique, in order to avoid a direct counting procedure which is generally impracticable for high-A nuclei. The Monte Carlo simulation, making use of the Metropolis sampling scheme, allows a computationally fast estimate of the level density for many-fermion systems in large shell model spaces. We emphasize the advantages of this Monte Carlo approach, particularly concerning the prediction of the spin and parity distributions of the excited states, and compare our results with those derived from a traditional combinatorial or a statistical method. Such a Monte Carlo technique seems very promising to determine accurate level densities in a large energy range for nuclear reaction calculations
Monte Carlo techniques for analyzing deep penetration problems
International Nuclear Information System (INIS)
Cramer, S.N.; Gonnord, J.; Hendricks, J.S.
1985-01-01
A review of current methods and difficulties in Monte Carlo deep-penetration calculations is presented. Statistical uncertainty is discussed, and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing is reviewed. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multi-group Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications. 29 refs
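Splitting and Russian roulette, the two basic population-control games reviewed above, can be sketched in a few lines (a generic weight-window game, not MCNP's implementation; the function name is invented):

```python
import random

def split_or_roulette(w, ratio, rng):
    """Population-control game at a region boundary.  ratio > 1: split the
    particle into about `ratio` copies of weight w/ratio; ratio < 1: play
    Russian roulette, killing the particle with probability 1 - ratio and
    boosting the survivor's weight to w/ratio.  Either way the expected
    total weight is w, which is what keeps the game unbiased."""
    if ratio >= 1.0:
        n = int(ratio)
        if rng.random() < ratio - n:   # stochastic rounding of the copy count
            n += 1
        return [w / ratio] * n
    if rng.random() < ratio:
        return [w / ratio]             # survivor carries the removed weight
    return []                          # particle killed
```

Splitting spends extra work on particles headed toward the detector; roulette cheaply disposes of unimportant ones; the weight bookkeeping guarantees that neither game biases the tally.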
Time step length versus efficiency of Monte Carlo burnup calculations
International Nuclear Information System (INIS)
Dufek, Jan; Valtavirta, Ville
2014-01-01
Highlights: • Time step length largely affects efficiency of MC burnup calculations. • Efficiency of MC burnup calculations improves with decreasing time step length. • Results were obtained from SIE-based Monte Carlo burnup calculations. - Abstract: We demonstrate that efficiency of Monte Carlo burnup calculations can be largely affected by the selected time step length. This study employs the stochastic implicit Euler based coupling scheme for Monte Carlo burnup calculations that performs a number of inner iteration steps within each time step. In a series of calculations, we vary the time step length and the number of inner iteration steps; the results suggest that Monte Carlo burnup calculations get more efficient as the time step length is reduced. More time steps must be simulated as they get shorter; however, this is more than compensated by the decrease in computing cost per time step needed for achieving a certain accuracy
Herwig: The Evolution of a Monte Carlo Simulation
CERN. Geneva
2015-01-01
Monte Carlo event generation has seen significant developments in the last 10 years starting with preparation for the LHC and then during the first LHC run. I will discuss the basic ideas behind Monte Carlo event generators and then go on to discuss these developments, focussing on the developments in Herwig(++) event generator. I will conclude by presenting the current status of event generation together with some results of the forthcoming new version of Herwig, Herwig 7.
NUEN-618 Class Project: Actually Implicit Monte Carlo
Energy Technology Data Exchange (ETDEWEB)
Vega, R. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brunner, T. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2017-12-14
This research describes a new method for the solution of the thermal radiative transfer (TRT) equations that is implicit in time which will be called Actually Implicit Monte Carlo (AIMC). This section aims to introduce the TRT equations, as well as the current workhorse method which is known as Implicit Monte Carlo (IMC). As the name of the method proposed here indicates, IMC is a misnomer in that it is only semi-implicit, which will be shown in this section as well.
Studies of Monte Carlo Modelling of Jets at ATLAS
Kar, Deepak; The ATLAS collaboration
2017-01-01
The predictions of different Monte Carlo generators for QCD jet production, both in multijets and for jets produced in association with other objects, are presented. Recent improvements in showering Monte Carlos provide new tools for assessing systematic uncertainties associated with these jets. Studies of the dependence of physical observables on the choice of shower tune parameters and new prescriptions for assessing systematic uncertainties associated with the choice of shower model and tune are presented.
Multiscale Monte Carlo equilibration: Pure Yang-Mills theory
Endres, Michael G.; Brower, Richard C.; Detmold, William; Orginos, Kostas; Pochinsky, Andrew V.
2015-12-01
We present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.
The sine Gordon model perturbation theory and cluster Monte Carlo
Hasenbusch, M; Pinn, K
1994-01-01
We study the expansion of the surface thickness in the two-dimensional lattice sine-Gordon model in powers of the fugacity z. Using the expansion to order z^2, we derive lines of constant physics in the rough phase. We describe and test a VMR cluster algorithm for the Monte Carlo simulation of the model. The algorithm shows almost no critical slowing down. We apply the algorithm in a comparison of our perturbative results with Monte Carlo data.
Monte Carlo methods and applications in nuclear physics
International Nuclear Information System (INIS)
Carlson, J.
1990-01-01
Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon-interaction, charge and magnetic form factors, the coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs
Monte Carlo method for solving a parabolic problem
Directory of Open Access Journals (Sweden)
Tian Yi
2016-01-01
Full Text Available In this paper, we present a numerical method based on random sampling for a parabolic problem. This method combines the Crank-Nicolson method with the Monte Carlo method. In the numerical algorithm, we first discretize the governing equations by the Crank-Nicolson method, obtaining a large sparse system of linear algebraic equations, and then use the Monte Carlo method to solve the linear algebraic equations. To illustrate the usefulness of this technique, we apply it to some test problems.
Monte Carlos of the new generation: status and progress
International Nuclear Information System (INIS)
Frixione, Stefano
2005-01-01
Standard parton shower Monte Carlos are designed to give reliable descriptions of low-pT physics. In the very high-energy regime of modern colliders, this may lead to largely incorrect predictions of the basic reaction processes. This motivated the recent theoretical efforts aimed at improving Monte Carlos through the inclusion of matrix elements computed beyond the leading order in QCD. I briefly review the progress made and discuss bottom production at the Tevatron.
The computation of Greeks with multilevel Monte Carlo
Sylvestre Burgos; M. B. Giles
2011-01-01
In mathematical finance, the sensitivities of option prices to various market parameters, also known as the “Greeks”, reflect the exposure to different sources of risk. Computing these is essential to predict the impact of market moves on portfolios and to hedge them adequately. This is commonly done using Monte Carlo simulations. However, obtaining accurate estimates of the Greeks can be computationally costly. Multilevel Monte Carlo offers complexity improvements over standard Monte Carl...
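For a concrete single-level example of a Greek computed by Monte Carlo, the pathwise-derivative estimator of a Black-Scholes call delta is a standard sketch (a generic illustration under Black-Scholes assumptions, not the paper's multilevel method):

```python
import math, random

def mc_call_delta(s0, strike, r, sigma, T, n, seed=0):
    """Pathwise-derivative estimator of a European call delta under
    Black-Scholes dynamics: since S_T is linear in S0 along each path,
    d/dS0 [e^{-rT} max(S_T - K, 0)] = e^{-rT} 1{S_T > K} S_T / S0."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        st = s0 * math.exp((r - 0.5 * sigma ** 2) * T
                           + sigma * math.sqrt(T) * z)
        if st > strike:
            acc += math.exp(-r * T) * st / s0
    return acc / n
```

Unlike a finite-difference ("bump-and-revalue") estimate, the pathwise estimator is unbiased and needs only one simulation per sample; its estimate can be checked against the closed-form delta Phi(d1).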
Modern analysis of ion channeling data by Monte Carlo simulations
Energy Technology Data Exchange (ETDEWEB)
Nowicki, Lech [Andrzej Soltan Institute for Nuclear Studies, ul. Hoza 69, 00-681 Warsaw (Poland)]. E-mail: lech.nowicki@fuw.edu.pl; Turos, Andrzej [Institute of Electronic Materials Technology, Wolczynska 133, 01-919 Warsaw (Poland); Ratajczak, Renata [Andrzej Soltan Institute for Nuclear Studies, ul. Hoza 69, 00-681 Warsaw (Poland); Stonert, Anna [Andrzej Soltan Institute for Nuclear Studies, ul. Hoza 69, 00-681 Warsaw (Poland); Garrido, Frederico [Centre de Spectrometrie Nucleaire et Spectrometrie de Masse, CNRS-IN2P3-Universite Paris-Sud, 91405 Orsay (France)
2005-10-15
The basic scheme of Monte Carlo simulation of ion channeling spectra is reformulated in terms of statistical sampling. The McChasy simulation code is described and two examples of its applications are presented: calculation of the projectile flux in a uranium dioxide crystal, and defect analysis for an ion-implanted InGaAsP/InP superlattice. Virtues and pitfalls of defect analysis using Monte Carlo simulations are discussed.
Advances in Monte Carlo electron transport
International Nuclear Information System (INIS)
Bielajew, Alex F.
1995-01-01
Notwithstanding the success of Monte Carlo (MC) calculations for determining ion chamber correction factors for air-kerma standards and radiotherapy applications, a great challenge remains. MC is unable to calculate ion chamber response to better than 1% for low-Z and 3% for high-Z wall materials. Moreover, the two major MC code systems employed in radiation dosimetry, the EGS and ITS codes, differ in opposite directions from ion chamber experiments. The discrepancy with experiment is due to inadequacies in the underlying e⁻ condensed-history algorithms. As modeled by MC calculations, the e⁻ step-lengths in the chamber walls and the ionisation cavity differ in terms of material traversed by about three orders of magnitude. This demands that the underlying e⁻ transport algorithms be very stable over a great dynamic range. Otherwise a spurious e⁻ disequilibrium may be generated. The multiple-scattering (MS) algorithms, Moliere in the case of EGS and Goudsmit-Saunderson (GS) in the case of ITS, are either mathematically or numerically unstable in the plural-scattering environment of the ionisation cavity. Recently, a new MS theory has been developed that is an exact solution of the Wentzel small-angle formalism using a screened Rutherford cross section. This new MS theory is mathematically, physically and numerically stable from the no-scattering to the MS regimes. This theory is the small-angle equivalent of the GS equation for a Rutherford cross section. Large-angle corrections connecting this theory to GS theory have been derived by Bethe. The Moliere theory is the large-pathlength limit of this theory. The strategy for employing this new theory for ion chamber and radiotherapy calculations is described.
Monte Carlo sampling of fission multiplicity.
Energy Technology Data Exchange (ETDEWEB)
Hendricks, J. S. (John S.)
2004-01-01
Two new methods have been developed for fission multiplicity modeling in Monte Carlo calculations. The traditional method of sampling neutron multiplicity from fission is to sample the number of neutrons above or below the average. For example, if there are 2.7 neutrons per fission, three would be chosen 70% of the time and two would be chosen 30% of the time. For many applications, particularly ³He coincidence counting, a better estimate of the true number of neutrons per fission is required. Generally, this number is estimated by sampling a Gaussian distribution about the average. However, because the tail of the Gaussian distribution is negative and negative neutrons cannot be produced, a slight positive bias can be found in the average value. For criticality calculations, the result of rejecting the negative neutrons is an increase in k_eff of 0.1% in some cases. For spontaneous fission, where the average number of neutrons emitted from fission is low, the error also can be unacceptably large. If the Gaussian width approaches the average number of fissions, 10% too many fission neutrons are produced by not treating the negative Gaussian tail adequately. The first method to treat the Gaussian tail is to determine a correction offset, which then is subtracted from all sampled values of the number of neutrons produced. This offset depends on the average value for any given fission at any energy and must be computed efficiently at each fission from the non-integrable error function. The second method is to determine a corrected zero point so that all neutrons sampled between zero and the corrected zero point are killed to compensate for the negative Gaussian tail bias. Again, the zero point must be computed efficiently at each fission. Both methods give excellent results with a negligible computing time penalty. It is now possible to include the full effects of fission multiplicity without the negative Gaussian tail bias.
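A small sketch (with an illustrative average and an invented Gaussian width, not values from the paper) reproduces both points above: the traditional integer sampling about the average, and the positive bias that appears when negative Gaussian draws are simply rejected:

```python
import numpy as np

rng = np.random.default_rng(1)
nu_bar = 2.7                # illustrative average neutrons per fission

# Traditional method: choose an integer neighbour of the average
# (here 3 with probability 0.7 and 2 with probability 0.3).
def sample_traditional(n):
    lo = int(nu_bar)
    return lo + (rng.random(n) < (nu_bar - lo)).astype(int)

trad_mean = sample_traditional(1_000_000).mean()

# Gaussian sampling with naive rejection of negative draws: discarding the
# negative tail shifts the surviving mean above nu_bar.
width = 1.1                 # illustrative Gaussian width
draws = rng.normal(nu_bar, width, 1_000_000)
accepted = draws[draws >= 0.0]
naive_bias = accepted.mean() - nu_bar
```

The offset and corrected-zero-point methods described in the abstract are two ways of cancelling this `naive_bias` without reintroducing negative multiplicities.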
Monte Carlo Volcano Seismic Moment Tensors
Waite, G. P.; Brill, K. A.; Lanza, F.
2015-12-01
Inverse modeling of volcano seismic sources can provide insight into the geometry and dynamics of volcanic conduits. But given the logistical challenges of working on an active volcano, seismic networks are typically deficient in spatial and temporal coverage; this potentially leads to large errors in source models. In addition, uncertainties in the centroid location and moment-tensor components, including volumetric components, are difficult to constrain from the linear inversion results, which leads to a poor understanding of the model space. In this study, we employ a nonlinear inversion using a Monte Carlo scheme with the objective of defining robustly resolved elements of model space. The model space is randomized by centroid location and moment tensor eigenvectors. Point sources densely sample the summit area and moment tensors are constrained to a randomly chosen geometry within the inversion; Green's functions for the random moment tensors are all calculated from modeled single forces, making the nonlinear inversion computationally reasonable. We apply this method to very-long-period (VLP) seismic events that accompany minor eruptions at Fuego volcano, Guatemala. The library of single force Green's functions is computed with a 3D finite-difference modeling algorithm through a homogeneous velocity-density model that includes topography, for a 3D grid of nodes, spaced 40 m apart, within the summit region. The homogeneous velocity and density model is justified by the long wavelengths of the VLP data. The nonlinear inversion reveals well resolved model features and informs the interpretation through a better understanding of the possible models. This approach can also be used to evaluate possible station geometries in order to optimize networks prior to deployment.
Mean field theory of the swap Monte Carlo algorithm.
Ikeda, Harukuni; Zamponi, Francesco; Ikeda, Atsushi
2017-12-21
The swap Monte Carlo algorithm combines the translational motion with the exchange of particle species and is unprecedentedly efficient for some models of glass formers. In order to clarify the physics underlying this acceleration, we study the problem within the mean field replica liquid theory. We extend the Gaussian Ansatz so as to take into account the exchange of particles of different species, and we calculate analytically the dynamical glass transition points corresponding to the swap and standard Monte Carlo algorithms. We show that the system evolved with the standard Monte Carlo algorithm exhibits the dynamical transition before that of the swap Monte Carlo algorithm. We also test the result by performing computer simulations of a binary mixture of the Mari-Kurchan model, both with standard and swap Monte Carlo. This scenario provides a possible explanation for the efficiency of the swap Monte Carlo algorithm. Finally, we discuss how the thermodynamic theory of the glass transition should be modified based on our results.
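A minimal sketch of the swap move itself (a toy 1-D binary mixture with a harmonic-overlap potential, not the Mari-Kurchan model; all parameters are invented) shows the key ingredient: exchanging two particles' species labels and accepting with the Metropolis rule, which conserves composition while relaxing local packing frustration:

```python
import numpy as np

rng = np.random.default_rng(5)

N, beta = 20, 2.0
pos = np.sort(rng.random(N)) * N                  # positions on a line
diam = np.where(rng.random(N) < 0.5, 0.8, 1.2)    # two species diameters

def energy(pos, diam):
    # harmonic repulsion when neighbouring spheres overlap (toy potential)
    gap = np.diff(pos) - 0.5 * (diam[:-1] + diam[1:])
    overlap = np.clip(-gap, 0.0, None)
    return 0.5 * float((overlap ** 2).sum())

def swap_move(diam):
    i, j = rng.integers(N, size=2)
    trial = diam.copy()
    trial[i], trial[j] = trial[j], trial[i]       # exchange species labels
    dE = energy(pos, trial) - energy(pos, diam)
    if rng.random() < np.exp(-beta * dE):         # Metropolis acceptance
        return trial, True
    return diam, False

n_small_before = int((diam == 0.8).sum())
accepted = 0
for _ in range(500):
    diam, ok = swap_move(diam)
    accepted += int(ok)
```

In a production simulation these swap attempts are interleaved with ordinary translational displacements; the swap channel alone does not move particles.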
Bayesian Optimal Experimental Design Using Multilevel Monte Carlo
Ben Issaid, Chaouki
2015-01-07
Experimental design is very important since experiments are often resource-intensive and time-consuming. We carry out experimental design in the Bayesian framework. To measure the amount of information that can be extracted from the data in an experiment, we use the expected information gain as the utility function, which specifically is the expected logarithmic ratio between the posterior and prior distributions. Optimizing this utility function enables us to design experiments that yield the most informative data for our purpose. One of the major difficulties in evaluating the expected information gain is that the integral is nested and can be high dimensional. We propose using Multilevel Monte Carlo techniques to accelerate the computation of the nested high-dimensional integral. The advantages are twofold. First, Multilevel Monte Carlo can significantly reduce the cost of the nested integral for a given tolerance, by using an optimal sample distribution among different sample averages of the inner integrals. Second, the Multilevel Monte Carlo method imposes fewer assumptions, such as concentration of measure, than the Laplace method requires. We test our Multilevel Monte Carlo technique using a numerical example on the design of sensor deployment for a Darcy flow problem governed by a one-dimensional Laplace equation. We also compare the performance of Multilevel Monte Carlo, the Laplace approximation, and direct double-loop Monte Carlo.
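The telescoping idea behind Multilevel Monte Carlo can be sketched on a problem with a known answer (a driftless Euler scheme for dS = S dW with S_0 = 1, so E[S_T] = 1 at every level; the level count and per-level sample sizes below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def level_estimator(l, n_samples, T=1.0):
    """Level-l MLMC term for E[S_T] under dS = S dW via Euler steps."""
    nf = 2 ** l                                # fine time steps at level l
    dt = T / nf
    dW = rng.normal(0.0, np.sqrt(dt), (n_samples, nf))
    Sf = np.prod(1.0 + dW, axis=1)             # fine-path payoff
    if l == 0:
        return Sf.mean()
    # coarse path reuses the same Brownian increments, summed in pairs
    dWc = dW.reshape(n_samples, nf // 2, 2).sum(axis=2)
    Sc = np.prod(1.0 + dWc, axis=1)
    return (Sf - Sc).mean()                    # correction E[P_l - P_{l-1}]

# E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]; the coupled corrections have
# shrinking variance, so fewer samples are spent on the expensive fine levels.
estimate = sum(level_estimator(l, 200_000 >> l) for l in range(5))
```

The same telescoping sum is what cuts the cost of a nested integral: cheap, low-accuracy inner evaluations carry most of the samples, while expensive, accurate ones only estimate small corrections.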
Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments
International Nuclear Information System (INIS)
Pevey, Ronald E.
2005-01-01
Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes: the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL.
Present status of transport code development based on Monte Carlo method
International Nuclear Information System (INIS)
Nakagawa, Masayuki
1985-01-01
The present status of development of Monte Carlo codes is briefly reviewed. The main items are the following: application fields; methods used in Monte Carlo codes (geometry specification, nuclear data, estimators and variance reduction techniques) and unfinished works; typical Monte Carlo codes; and merits of continuous-energy Monte Carlo codes. (author)
Monte Carlo Techniques for Nuclear Systems - Theory Lectures
Energy Technology Data Exchange (ETDEWEB)
Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications Group; Univ. of New Mexico, Albuquerque, NM (United States). Nuclear Engineering Dept.
2016-11-29
These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; Doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations.
Monte Carlo systems used for treatment planning and dose verification
Energy Technology Data Exchange (ETDEWEB)
Brualla, Lorenzo [Universitaetsklinikum Essen, NCTeam, Strahlenklinik, Essen (Germany); Rodriguez, Miguel [Centro Medico Paitilla, Balboa (Panama); Lallena, Antonio M. [Universidad de Granada, Departamento de Fisica Atomica, Molecular y Nuclear, Granada (Spain)
2017-04-15
General-purpose radiation transport Monte Carlo codes have been used for estimation of the absorbed dose distribution in external photon and electron beam radiotherapy patients for several decades. Results obtained with these codes are usually more accurate than those provided by treatment planning systems based on non-stochastic methods. Traditionally, absorbed dose computations based on general-purpose Monte Carlo codes have been used only for research, owing to the difficulties associated with setting up a simulation and the long computation time required. To take advantage of radiation transport Monte Carlo codes applied to routine clinical practice, researchers and private companies have developed treatment planning and dose verification systems that are partly or fully based on fast Monte Carlo algorithms. This review presents a comprehensive list of the currently existing Monte Carlo systems that can be used to calculate or verify an external photon and electron beam radiotherapy treatment plan. Particular attention is given to those systems that are distributed, either freely or commercially, and that do not require programming tasks from the end user. These systems are compared in terms of features and the simulation time required to compute a set of benchmark calculations. (orig.)
Nonlinear Spatial Inversion Without Monte Carlo Sampling
Curtis, A.; Nawaz, A.
2017-12-01
High-dimensional, nonlinear inverse or inference problems usually have non-unique solutions. The distributions of solutions are described by probability distributions, and these are usually found using Monte Carlo (MC) sampling methods. These take pseudo-random samples of models in parameter space, calculate the probability of each sample given available data and other information, and thus map out high or low probability values of model parameters. However, such methods would converge to the solution only as the number of samples tends to infinity; in practice, MC is found to be slow to converge, convergence is not guaranteed to be achieved in finite time, and detection of convergence requires the use of subjective criteria. We propose a method for Bayesian inversion of categorical variables such as geological facies or rock types in spatial problems, which requires no sampling at all. The method uses a 2-D Hidden Markov Model over a grid of cells, where observations represent localized data constraining the model in each cell. The data in our example application are seismic properties such as P- and S-wave impedances or rock density; our model parameters are the hidden states and represent the geological rock types in each cell. The observations at each location are assumed to depend on the facies at that location only - an assumption referred to as `localized likelihoods'. However, the facies at a location cannot be determined solely by the observation at that location as it also depends on prior information concerning its correlation with the spatial distribution of facies elsewhere. Such prior information is included in the inversion in the form of a training image which represents a conceptual depiction of the distribution of local geologies that might be expected, but other forms of prior information can be used in the method as desired. The method provides direct (pseudo-analytic) estimates of posterior marginal probability distributions over each variable
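For a 1-D chain of cells the sampling-free idea can be made concrete with the standard forward-backward recursion (a toy two-facies example with invented transition and likelihood values, not the paper's 2-D model):

```python
import numpy as np

# Two hidden facies, four cells; transition prior and per-cell likelihoods
# are invented for illustration.
T_mat = np.array([[0.9, 0.1],
                  [0.2, 0.8]])        # P(facies_t | facies_{t-1})
lik = np.array([[0.7, 0.3],
                [0.6, 0.4],
                [0.1, 0.9],
                [0.2, 0.8]])          # P(obs_t | facies_t): localized likelihoods
prior0 = np.array([0.5, 0.5])

def forward_backward(prior0, T_mat, lik):
    n, k = lik.shape
    alpha = np.zeros((n, k))
    beta = np.ones((n, k))
    alpha[0] = prior0 * lik[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, n):                        # forward filtering pass
        alpha[t] = (alpha[t - 1] @ T_mat) * lik[t]
        alpha[t] /= alpha[t].sum()
    for t in range(n - 2, -1, -1):               # backward smoothing pass
        beta[t] = T_mat @ (lik[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

marginals = forward_backward(prior0, T_mat, lik)
```

Each row of `marginals` is an exact posterior over facies for one cell, obtained in a single deterministic pass rather than from MC samples; the 2-D Hidden Markov Model of the abstract generalizes this recursion across a grid.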
Khrushcheva, O; Malerba, L; Becquart, C S; Domain, C; Hou, M
2003-01-01
Several variants are possible in the suite of programs forming multiscale predictive tools to estimate the yield strength increase caused by irradiation in RPV steels. For instance, at the atomic scale, both the Metropolis and the lattice kinetic Monte Carlo methods (MMC and LKMC respectively) allow predicting copper precipitation under irradiation conditions. Since these methods are based on different physical models, the present contribution discusses their consistency on the basis of a realistic case study. A cascade debris in iron containing 0.2% of copper was modelled by molecular dynamics with the DYMOKA code, which is part of the REVE suite. We use this debris as input for both the MMC and the LKMC simulations. Thermal motion and lattice relaxation can be avoided in the MMC, making the model closer to the LKMC (LMMC method). The predictions and the complementarity of the three methods for modelling the same phenomenon are then discussed.
REVIEW: Fifty years of Monte Carlo simulations for medical physics
Rogers, D. W. O.
2006-07-01
Monte Carlo techniques have become ubiquitous in medical physics over the last 50 years with a doubling of papers on the subject every 5 years between the first PMB paper in 1967 and 2000 when the numbers levelled off. While recognizing the many other roles that Monte Carlo techniques have played in medical physics, this review emphasizes techniques for electron-photon transport simulations. The broad range of codes available is mentioned but there is special emphasis on the EGS4/EGSnrc code system which the author has helped develop for 25 years. The importance of the 1987 Erice Summer School on Monte Carlo techniques is highlighted. As an illustrative example of the role Monte Carlo techniques have played, the history of the correction for wall attenuation and scatter in an ion chamber is presented as it demonstrates the interplay between a specific problem and the development of tools to solve the problem which in turn leads to applications in other areas. This paper is dedicated to W Ralph Nelson and to the memory of Martin J Berger, two men who have left indelible marks on the field of Monte Carlo simulation of electron-photon transport.
Numerical integration of detector response functions via Monte Carlo simulations
Kelly, K. J.; O'Donnell, J. M.; Gomez, J. A.; Taddeucci, T. N.; Devlin, M.; Haight, R. C.; White, M. C.; Mosby, S. M.; Neudecker, D.; Buckner, M. Q.; Wu, C. Y.; Lee, H. Y.
2017-09-01
Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ∼1000× faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. This method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.
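The speed-up comes from tabulating the response once and reusing it: if R[i, j] holds the probability of a count in measured bin i given emission in bin j, any source spectrum folds through in a single matrix product. The sketch below uses a toy Gaussian-smearing response as a stand-in; a real column of R would be filled from the Monte Carlo simulations described above.

```python
import numpy as np

# Energy grid and a toy Gaussian-smearing response matrix; column j is the
# normalized detector response to emission in bin j (illustrative values).
n = 50
energies = np.linspace(0.0, 10.0, n)
R = np.zeros((n, n))
for j in range(n):
    w = np.exp(-0.5 * ((energies - energies[j]) / 0.4) ** 2)
    R[:, j] = w / w.sum()            # each column sums to 1: counts conserved

source = np.exp(-energies / 3.0)     # toy emission spectrum
observed = R @ source                # folded spectrum: one matmul per source
```

Once R exists, scanning thousands of candidate source spectra costs one matrix-vector product each, instead of one full transport simulation each.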
Monte Carlo studies of high-transverse-energy hadronic interactions
International Nuclear Information System (INIS)
Corcoran, M.D.
1985-01-01
A four-jet Monte Carlo calculation has been used to simulate hadron-hadron interactions which deposit high transverse energy into a large-solid-angle calorimeter and limited solid-angle regions of the calorimeter. The calculation uses first-order QCD cross sections to generate two scattered jets and also produces beam and target jets. Field-Feynman fragmentation has been used in the hadronization. The sensitivity of the results to a few features of the Monte Carlo program has been studied. The results are found to be very sensitive to the method used to ensure overall energy conservation after the fragmentation of the four jets is complete. Results are also sensitive to the minimum momentum transfer in the QCD subprocesses and to the distribution of p_T to the jet axis and the multiplicities in the fragmentation. With reasonable choices of these features of the Monte Carlo program, good agreement with data at Fermilab/CERN SPS energies is obtained, comparable to the agreement achieved with more sophisticated parton-shower models. With other choices, however, the calculation gives qualitatively different results which are in strong disagreement with the data. These results have important implications for extracting physics conclusions from Monte Carlo calculations. It is not possible to test the validity of a particular model or distinguish between different models unless the Monte Carlo results are unambiguous and different models exhibit clearly different behavior
Monte Carlo capabilities of the SCALE code system
International Nuclear Information System (INIS)
Rearden, B.T.; Petrie, L.M.; Peplow, D.E.; Bekar, K.B.; Wiarda, D.; Celik, C.; Perfetti, C.M.; Ibrahim, A.M.; Hart, S.W.D.; Dunn, M.E.; Marshall, W.J.
2015-01-01
Highlights: • Foundational Monte Carlo capabilities of SCALE are described. • Improvements in continuous-energy treatments are detailed. • New methods for problem-dependent temperature corrections are described. • New methods for sensitivity analysis and depletion are described. • Nuclear data, user interfaces, and quality assurance activities are summarized. - Abstract: SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. An overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.
Parallel MCNP Monte Carlo transport calculations with MPI
International Nuclear Information System (INIS)
Wagner, J.C.; Haghighat, A.
1996-01-01
The steady increase in computational performance has made Monte Carlo calculations for large/complex systems possible. However, in order to make these calculations practical, order of magnitude increases in performance are necessary. The Monte Carlo method is inherently parallel (particles are simulated independently) and thus has the potential for near-linear speedup with respect to the number of processors. Further, the ever-increasing accessibility of parallel computers, such as workstation clusters, facilitates the practical use of parallel Monte Carlo. Recognizing the nature of the Monte Carlo method and the trends in available computing, the code developers at Los Alamos National Laboratory implemented the message-passing general-purpose Monte Carlo radiation transport code MCNP (version 4A). The PVM package was chosen by the MCNP code developers because it supports a variety of communication networks, several UNIX platforms, and heterogeneous computer systems. This PVM version of MCNP has been shown to produce speedups that approach the number of processors and thus, is a very useful tool for transport analysis. Due to software incompatibilities on the local IBM SP2, PVM has not been available, and thus it is not possible to take advantage of this useful tool. Hence, it became necessary to implement an alternative message-passing library package into MCNP. Because the message-passing interface (MPI) is supported on the local system, takes advantage of the high-speed communication switches in the SP2, and is considered to be the emerging standard, it was selected
Introduction to Monte Carlo methods: sampling techniques and random numbers
International Nuclear Information System (INIS)
Bhati, Sharda; Patni, H.K.
2009-01-01
The Monte Carlo method describes a very broad area of science, in which many processes, physical systems and phenomena that are statistical in nature and are difficult to solve analytically are simulated by statistical methods employing random numbers. The general idea of Monte Carlo analysis is to create a model that is as similar as possible to the real physical system of interest, and to create interactions within that system based on known probabilities of occurrence, with random sampling of the probability density functions. As the number of individual events (called histories) is increased, the quality of the reported average behavior of the system improves, meaning that the statistical uncertainty decreases. Assuming that the behavior of the physical system can be described by probability density functions, the Monte Carlo simulation can proceed by sampling from these probability density functions, which necessitates a fast and effective way to generate random numbers uniformly distributed on the interval (0,1). Particles are generated within the source region and are transported by sampling from probability density functions through the scattering media until they are absorbed or escape the volume of interest. The outcomes of these random samplings, or trials, must be accumulated or tallied in an appropriate manner to produce the desired result, but the essential characteristic of Monte Carlo is the use of random sampling techniques to arrive at a solution of the physical problem. The major components of Monte Carlo methods for random sampling for a given event are described in the paper
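The path-length sampling step mentioned above is the classic inverse-transform example: for an exponential free-flight distribution p(x) = Σ e^{-Σx}, inverting the CDF maps a uniform deviate ξ in [0,1) to x = -ln(1-ξ)/Σ (the cross-section value below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

sigma_t = 2.0                        # illustrative total cross section (1/cm)
xi = rng.random(1_000_000)           # uniform deviates on [0, 1)
paths = -np.log(1.0 - xi) / sigma_t  # inverse-transform exponential samples
mean_free_path = paths.mean()        # should approach 1 / sigma_t = 0.5
```

The same recipe (uniform deviate through an inverted CDF, or a rejection scheme when the CDF cannot be inverted) drives the sampling of scattering angles, energies, and interaction types in a transport code.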
Study on random number generator in Monte Carlo code
International Nuclear Information System (INIS)
Oya, Kentaro; Kitada, Takanori; Tanaka, Shinichi
2011-01-01
The Monte Carlo code uses a sequence of pseudo-random numbers produced by a random number generator (RNG) to simulate particle histories. A pseudo-random sequence has a period that depends on the generation method, and this period should be long enough that it is not exhausted during a single Monte Carlo calculation, to ensure correct results, especially for the standard deviation of the results. The linear congruential generator (LCG) is widely used as a Monte Carlo RNG, but its period is no longer comfortably long given how rapidly the number of simulation histories per calculation has grown with the remarkable enhancement of computer performance. Recently, many kinds of RNG have been developed, and some of their features are better than those of the LCG. In this study, we investigate an appropriate RNG for a Monte Carlo code as an alternative to the LCG, especially for calculations with enormous numbers of histories. It is found that xorshift has desirable features compared with the LCG: a larger period, comparable speed in generating random numbers, better randomness, and good applicability to parallel calculation. (author)
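A minimal version of the xorshift generator discussed above (Marsaglia's 64-bit variant with the classic 13/7/17 shift triple; it has period 2^64 - 1 and the state must never be zero):

```python
MASK = (1 << 64) - 1              # keep the state in 64 bits

def xorshift64(state):
    """One step of Marsaglia's xorshift64; full period 2^64 - 1 for nonzero seeds."""
    state ^= (state << 13) & MASK
    state ^= state >> 7
    state ^= (state << 17) & MASK
    return state

s = 0x9E3779B97F4A7C15            # any nonzero seed
draws = []
for _ in range(5):
    s = xorshift64(s)
    draws.append(s / 2.0 ** 64)   # map the state to a uniform in (0, 1)
```

Because the update is a bijection on nonzero 64-bit states, independent streams for parallel histories can be made simply by jumping each worker to a distant point in the sequence.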
Monte Carlo dose calculations in advanced radiotherapy
Bush, Karl Kenneth
The remarkable accuracy of Monte Carlo (MC) dose calculation algorithms has led to the widely accepted view that these methods should and will play a central role in the radiotherapy treatment verification and planning of the future. The advantages of using MC clinically are particularly evident for radiation fields passing through inhomogeneities, such as lung and air cavities, and for small fields, including those used in today's advanced intensity modulated radiotherapy techniques. Many investigators have reported significant dosimetric differences between MC and conventional dose calculations in such complex situations, and have demonstrated experimentally the unmatched ability of MC calculations in modeling charged particle disequilibrium. The advantages of using MC dose calculations do come at a cost. The nature of MC dose calculations require a highly detailed, in-depth representation of the physical system (accelerator head geometry/composition, anatomical patient geometry/composition and particle interaction physics) to allow accurate modeling of external beam radiation therapy treatments. To perform such simulations is computationally demanding and has only recently become feasible within mainstream radiotherapy practices. In addition, the output of the accelerator head simulation can be highly sensitive to inaccuracies within a model that may not be known with sufficient detail. The goal of this dissertation is to both improve and advance the implementation of MC dose calculations in modern external beam radiotherapy. To begin, a novel method is proposed to fine-tune the output of an accelerator model to better represent the measured output. In this method an intensity distribution of the electron beam incident on the model is inferred by employing a simulated annealing algorithm. The method allows an investigation of arbitrary electron beam intensity distributions and is not restricted to the commonly assumed Gaussian intensity. In a second component of
Monte Carlo techniques for real-time quantum dynamics
International Nuclear Information System (INIS)
Dowling, Mark R.; Davis, Matthew J.; Drummond, Peter D.; Corney, Joel F.
2007-01-01
The stochastic-gauge representation is a method of mapping the equation of motion for the quantum mechanical density operator onto a set of equivalent stochastic differential equations. One of the stochastic variables is termed the 'weight', and its magnitude is related to the importance of the stochastic trajectory. We investigate the use of Monte Carlo algorithms to improve the sampling of the weighted trajectories and thus reduce sampling error in a simulation of quantum dynamics. The method can be applied to calculations in real time, as well as imaginary time for which Monte Carlo algorithms are more commonly used. The Monte Carlo algorithms are applicable when the weight is guaranteed to be real, and we demonstrate how to ensure this is the case. Examples are given for the anharmonic oscillator, where large improvements over stochastic sampling are observed.
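The core difficulty described above, that a few heavily weighted trajectories dominate the average, is usually attacked by resampling. The sketch below shows one standard remedy, systematic resampling of weighted walkers, as a generic illustration; it does not model the stochastic-gauge equations themselves, and the Gaussian/log-normal walker ensemble is invented for the demo.

```python
import math
import random

def resample(walkers, rng):
    """Systematic resampling: replace weighted walkers by equal-weight copies
    drawn in proportion to their weights, preserving ensemble averages in
    expectation (a standard variance-control trick for weighted trajectories)."""
    n = len(walkers)
    total = sum(w for _, w in walkers)
    step = total / n
    u = rng.uniform(0.0, step)
    out, cum, i = [], 0.0, 0
    for x, w in walkers:
        cum += w
        while i < n and u + i * step < cum:
            out.append((x, step))      # every survivor gets weight total/n
            i += 1
    return out

rng = random.Random(1)
walkers = [(rng.gauss(0.0, 1.0), math.exp(rng.gauss(0.0, 0.5)))
           for _ in range(1000)]
before = sum(x * w for x, w in walkers) / sum(w for _, w in walkers)
walkers = resample(walkers, rng)
after = sum(x * w for x, w in walkers) / sum(w for _, w in walkers)
```

After the step, all walkers carry the same weight while the weighted mean is preserved up to a small stratified-sampling error.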
Monte Carlo simulation of neutron counters for safeguards applications
International Nuclear Information System (INIS)
Looman, Marc; Peerani, Paolo; Tagziria, Hamid
2009-01-01
MCNP-PTA is a new Monte Carlo code for the simulation of neutron counters for nuclear safeguards applications developed at the Joint Research Centre (JRC) in Ispra (Italy). After some preliminary considerations outlining the general aspects involved in the computational modelling of neutron counters, this paper describes the specific details and approximations which make up the basis of the model implemented in the code. One of the major improvements allowed by the use of Monte Carlo simulation is a considerable reduction in both the experimental work and in the reference materials required for the calibration of the instruments. This new approach to the calibration of counters using Monte Carlo simulation techniques is also discussed.
Monte Carlo techniques in diagnostic and therapeutic nuclear medicine
International Nuclear Information System (INIS)
Zaidi, H.
2002-01-01
Monte Carlo techniques have become one of the most popular tools in different areas of medical radiation physics following the development and subsequent implementation of powerful computing systems for clinical use. In particular, they have been extensively applied to simulate processes involving random behaviour and to quantify physical parameters that are difficult or even impossible to calculate analytically or to determine by experimental measurements. The use of the Monte Carlo method to simulate radiation transport turned out to be the most accurate means of predicting absorbed dose distributions and other quantities of interest in the radiation treatment of cancer patients using either external or radionuclide radiotherapy. The same trend has occurred for the estimation of the absorbed dose in diagnostic procedures using radionuclides. There is broad consensus in accepting that the earliest Monte Carlo calculations in medical radiation physics were made in the area of nuclear medicine, where the technique was used for dosimetry modelling and computations. Formalism and data based on Monte Carlo calculations, developed by the Medical Internal Radiation Dose (MIRD) committee of the Society of Nuclear Medicine, were published in a series of supplements to the Journal of Nuclear Medicine, the first one being released in 1968. Some of these pamphlets made extensive use of Monte Carlo calculations to derive specific absorbed fractions for electron and photon sources uniformly distributed in organs of mathematical phantoms. Interest in Monte Carlo-based dose calculations with β-emitters has been revived with the application of radiolabelled monoclonal antibodies to radioimmunotherapy. As a consequence of this generalized use, many questions are being raised primarily about the need and potential of Monte Carlo techniques, but also about how accurate they really are, what it would take to apply them clinically and how to make them widely available to the medical physics
Monte Carlo simulated dynamical magnetization of single-chain magnets
Energy Technology Data Exchange (ETDEWEB)
Li, Jun; Liu, Bang-Gui, E-mail: bgliu@iphy.ac.cn
2015-03-15
Here, a dynamical Monte-Carlo (DMC) method is used to study the temperature-dependent dynamical magnetization of the well-known Mn₂Ni system as a typical example of single-chain magnets with strong magnetic anisotropy. Simulated magnetization curves are in good agreement with experimental results under typical temperatures and sweeping rates, and simulated coercive fields as functions of temperature are also consistent with experimental curves. Further analysis indicates that the magnetization reversal is determined by both thermally activated effects and quantum spin tunneling. These results can help explore basic properties and applications of such important magnetic systems. - Highlights: • Monte Carlo simulated magnetization curves are in good agreement with experimental results. • Simulated coercive fields as functions of temperature are consistent with experimental results. • The magnetization reversal is understood in terms of the Monte Carlo simulations.
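The hysteresis mechanism behind the simulated coercive fields can be illustrated with a much simpler model. The sketch below sweeps the applied field over a 1D Ising chain under Metropolis dynamics; the finite sweep rate makes the magnetization lag behind the field, producing remanence and a nonzero coercive field. It is a toy stand-in for the paper's anisotropic Mn₂Ni chain, not its actual Hamiltonian, and all parameter values are invented.

```python
import math
import random

def sweep_field(spins, fields, beta, J, rng):
    """Metropolis dynamics of a periodic 1D Ising chain while the applied
    field h is swept at a finite rate; returns the magnetization per field step."""
    n = len(spins)
    mags = []
    for h in fields:
        for _ in range(5 * n):                      # 5 sweeps per field step
            i = rng.randrange(n)
            nb = spins[i - 1] + spins[(i + 1) % n]  # periodic neighbours
            dE = 2.0 * spins[i] * (J * nb + h)      # energy cost of a flip
            if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
                spins[i] = -spins[i]
        mags.append(sum(spins) / n)
    return mags

rng = random.Random(0)
n = 200
spins = [1] * n                                  # start fully magnetized up
fields = [1.0 - 0.02 * k for k in range(101)]    # sweep h from +1 down to -1
m_down = sweep_field(spins, fields, beta=2.0, J=1.0, rng=rng)
```

At h = 0 the chain remains almost fully magnetized (remanence); only well into negative fields does nucleation and domain-wall motion reverse it.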
Exploring cluster Monte Carlo updates with Boltzmann machines.
Wang, Lei
2017-11-01
Boltzmann machines are physics informed generative models with broad applications in machine learning. They model the probability distribution of an input data set with latent variables and generate new samples accordingly. Applying the Boltzmann machines back to physics, they are ideal recommender systems to accelerate the Monte Carlo simulation of physical systems due to their flexibility and effectiveness. More intriguingly, we show that the generative sampling of the Boltzmann machines can even give different cluster Monte Carlo algorithms. The latent representation of the Boltzmann machines can be designed to mediate complex interactions and identify clusters of the physical system. We demonstrate these findings with concrete examples of the classical Ising model with and without four-spin plaquette interactions. In the future, automatic searches in the algorithm space parametrized by Boltzmann machines may discover more innovative Monte Carlo updates.
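For orientation, the hand-designed cluster update that the learned Boltzmann-machine proposals generalize is the Wolff algorithm. The sketch below is a standard, self-contained Wolff update for the 2D Ising model (it does not involve a Boltzmann machine): a cluster of aligned spins is grown with bond probability 1 − exp(−2βJ) and flipped as a whole, which beats single-spin flips near criticality.

```python
import math
import random

def wolff_update(spins, L, beta, J, rng):
    """One Wolff cluster update on an L x L periodic Ising lattice:
    grow a cluster of like spins with p = 1 - exp(-2*beta*J), flip it whole."""
    p_add = 1.0 - math.exp(-2.0 * beta * J)
    seed = (rng.randrange(L), rng.randrange(L))
    s0 = spins[seed]
    cluster, stack = {seed}, [seed]
    while stack:
        x, y = stack.pop()
        for nx, ny in ((x + 1) % L, y), ((x - 1) % L, y), (x, (y + 1) % L), (x, (y - 1) % L):
            if (nx, ny) not in cluster and spins[(nx, ny)] == s0 \
               and rng.random() < p_add:
                cluster.add((nx, ny))
                stack.append((nx, ny))
    for site in cluster:            # flip the whole cluster at once
        spins[site] = -s0
    return len(cluster)

rng = random.Random(7)
L = 8
spins = {(x, y): rng.choice((-1, 1)) for x in range(L) for y in range(L)}
for _ in range(200):
    wolff_update(spins, L, beta=1.0 / 1.5, J=1.0, rng=rng)
m = abs(sum(spins.values())) / L ** 2
```

Below the critical temperature (here T = 1.5 < Tc ≈ 2.27) a short run from a random start equilibrates to a strongly magnetized state.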
A Multivariate Time Series Method for Monte Carlo Reactor Analysis
International Nuclear Information System (INIS)
Taro Ueki
2008-01-01
A robust multivariate time series method has been established for the Monte Carlo calculation of neutron multiplication problems. The method is termed the Coarse Mesh Projection Method (CMPM) and can be implemented using coarse statistical bins for the acquisition of nuclear fission source data. A novel aspect of CMPM is the combination of the general technical principle of projection pursuit in the signal processing discipline and the neutron multiplication eigenvalue problem in the nuclear engineering discipline. CMPM enables reactor physicists to accurately evaluate major eigenvalue separations of nuclear reactors with continuous energy Monte Carlo calculation. CMPM was incorporated in the MCNP Monte Carlo particle transport code of Los Alamos National Laboratory. The great advantage of CMPM over the traditional Fission Matrix method is demonstrated for the three-dimensional modeling of the initial core of a pressurized water reactor
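The quantity at stake, the separation between the fundamental and higher eigenvalues, can be made concrete with a deterministic toy. The sketch below (not CMPM itself) applies power iteration and Hotelling deflation to an invented 2x2 symmetric "fission matrix" to obtain the first two eigenvalues and their ratio, the dominance ratio that governs source convergence.

```python
def power_iteration(A, iters=1000):
    """Dominant eigenpair of a small matrix (list of rows) by power iteration."""
    n = len(A)
    v = [1.0 / (j + 1) for j in range(n)]   # deterministic non-degenerate start
    lam = 0.0
    for _ in range(iters):
        w = [sum(a * x for a, x in zip(row, v)) for row in A]
        lam = max(abs(x) for x in w)        # max-abs normalization
        v = [x / lam for x in w]
    s = sum(x * x for x in v) ** 0.5        # unit-normalize for deflation
    return lam, [x / s for x in v]

# Toy symmetric 'fission matrix' with exact eigenvalues 3 and 1
A = [[2.0, 1.0], [1.0, 2.0]]
k1, v1 = power_iteration(A)
# Hotelling deflation removes the fundamental mode (valid for symmetric A)
B = [[A[i][j] - k1 * v1[i] * v1[j] for j in range(2)] for i in range(2)]
k2, _ = power_iteration(B)
dominance_ratio = k2 / k1   # the eigenvalue separation CMPM estimates stochastically
```

CMPM's contribution is estimating these separations from noisy Monte Carlo source data; the linear algebra underneath is the same.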
Application of biasing techniques to the contributon Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Dubi, A.; Gerstl, S.A.W.
1980-01-01
Recently, a new Monte Carlo method called the Contributon Monte Carlo Method was developed. The method is based on the theory of contributons, and uses a new recipe for estimating target responses by a volume integral over the contributon current. The analog features of the new method were discussed in previous publications. The application of some biasing methods to the new contributon scheme is examined here. A theoretical model is developed that enables an analytic prediction of the benefit to be expected when these biasing schemes are applied to both the contributon method and regular Monte Carlo. This model is verified by a variety of numerical experiments and is shown to yield satisfactory results, especially for deep-penetration problems. Other considerations regarding the efficient use of the new method are also discussed, and remarks are made as to the application of other biasing methods. 14 figures, 1 table.
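The benefit of biasing for deep penetration, the setting singled out above, is easy to demonstrate in miniature. The sketch below estimates transmission through a five mean-free-path slab two ways: analog sampling, and path-length biasing with a stretched exponential whose likelihood-ratio weight keeps the estimator unbiased. It illustrates generic importance sampling, not the contributon scheme, and the parameters are invented.

```python
import math
import random

def analog_transmission(n, sigma, depth, rng):
    """Analog MC: score 1 when a sampled free path exceeds the slab depth."""
    return sum(1 for _ in range(n) if rng.expovariate(sigma) > depth) / n

def biased_transmission(n, sigma, depth, rng, sigma_b):
    """Path-length biasing: sample from a stretched exponential (sigma_b <
    sigma) and multiply each score by the likelihood ratio (the weight)."""
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(sigma_b)
        if x > depth:
            # weight = true pdf / biased pdf at the sampled path length
            total += (sigma / sigma_b) * math.exp(-(sigma - sigma_b) * x)
    return total / n

rng = random.Random(3)
exact = math.exp(-5.0)                        # 5 mean-free-path slab
a = analog_transmission(20000, 1.0, 5.0, rng)
b = biased_transmission(20000, 1.0, 5.0, rng, sigma_b=0.2)
```

With the same number of histories, the biased estimator's relative error is roughly an order of magnitude smaller than the analog one's for this depth.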
Minimum variance Monte Carlo importance sampling with parametric dependence
International Nuclear Information System (INIS)
Ragheb, M.M.H.; Halton, J.; Maynard, C.W.
1981-01-01
An approach for Monte Carlo Importance Sampling with parametric dependence is proposed. It depends upon obtaining by proper weighting over a single stage the overall functional dependence of the variance on the importance function parameter over a broad range of its values. Results corresponding to minimum variance are adopted and other results rejected. Numerical calculations for the estimation of integrals are compared to crude Monte Carlo. The results explain the occurrence of the effective biases (even though the theoretical bias is zero) and infinite variances which arise in calculations involving severe biasing and a moderate number of histories. Extension to particle transport applications is briefly discussed. The approach constitutes an extension of a theory on the application of Monte Carlo for the calculation of functional dependences introduced by Frolov and Chentsov to biasing, or importance sampling calculations; and is a generalization which avoids nonconvergence to the optimal values in some cases of a multistage method for variance reduction introduced by Spanier. (orig.)
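The single-stage idea described above can be sketched concretely: draw one sample from a broad reference density, then reweight it to evaluate the estimator variance for every candidate importance parameter at once. The example below does this for an invented problem, estimating E[X⁴] under N(0,1) with a N(0, s²) importance family; it illustrates the principle, not the paper's exact formulation.

```python
import math
import random

def npdf(x, s):
    """Normal density with mean 0 and standard deviation s."""
    return math.exp(-x * x / (2.0 * s * s)) / (s * math.sqrt(2.0 * math.pi))

# Target: I = E[X^4] under N(0,1), exactly 3.  Importance family: N(0, s^2).
# A single sample from the broad reference q0 = N(0, s0^2) is reweighted to
# trace the estimator variance as a function of s over the whole range.
rng = random.Random(11)
s0 = 3.0
xs = [rng.gauss(0.0, s0) for _ in range(50000)]
I = 3.0

def is_variance(s):
    """Variance of the IS estimator f*phi/q_s, evaluated by reweighting
    the reference sample (second moment minus I^2)."""
    m2 = 0.0
    for x in xs:
        f_phi = (x ** 4) * npdf(x, 1.0)
        m2 += f_phi * f_phi / (npdf(x, s) * npdf(x, s0))
    return m2 / len(xs) - I * I

candidates = [0.8 + 0.2 * k for k in range(12)]        # s in [0.8, 3.0]
variances = [is_variance(s) for s in candidates]
best = candidates[min(range(len(candidates)), key=variances.__getitem__)]
```

For this target the variance-minimizing scale is near s = √5 ≈ 2.24, noticeably broader than the nominal density, and the scan finds it from one sample set.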
Fixed forced detection for fast SPECT Monte-Carlo simulation
Cajgfinger, T.; Rit, S.; Létang, J. M.; Halty, A.; Sarrut, D.
2018-03-01
Monte-Carlo simulations of SPECT images are notoriously slow to converge due to the large ratio between the number of photons emitted and detected in the collimator. This work proposes a method to accelerate the simulations based on fixed forced detection (FFD) combined with an analytical response of the detector. FFD is based on a Monte-Carlo simulation but forces the detection of a photon in each detector pixel weighted by the probability of emission (or scattering) and transmission to this pixel. The method was evaluated with numerical phantoms and on patient images. We obtained differences with analog Monte Carlo lower than the statistical uncertainty. The overall computing time gain can reach up to five orders of magnitude. Source code and examples are available in the Gate V8.0 release.
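The variance collapse that forced detection buys can be seen in a deliberately degenerate toy. Below, an "analog" photon must both head toward a small detector and survive attenuation, while the forced-detection estimator deposits that probability as a weight on every history; in this stripped-down geometry the weight is identical for all histories, so the variance vanishes. This is a schematic illustration of the FFD idea, not the Gate implementation, and all numbers are invented.

```python
import math
import random

def analog_detect(n, eps, mu, d, rng):
    """Analog MC: the photon must both head toward the small detector
    (probability eps) and survive attenuation exp(-mu*d) along the path."""
    p_survive = math.exp(-mu * d)
    hits = sum(1 for _ in range(n)
               if rng.random() < eps and rng.random() < p_survive)
    return hits / n

def forced_detect(eps, mu, d):
    """Forced detection: every history deposits the detection probability as
    a weight, so the rare analog coincidence never has to occur by chance."""
    return eps * math.exp(-mu * d)

rng = random.Random(5)
eps, mu, d = 1e-4, 0.1, 10.0
exact = eps * math.exp(-1.0)
a = analog_detect(100000, eps, mu, d, rng)
f = forced_detect(eps, mu, d)
```

With 10⁵ histories the analog tally expects only a handful of counts, while the forced estimator is exact here; real FFD retains per-pixel weights and detector response, so its variance is small rather than zero.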
MORET: Version 4.B. A multigroup Monte Carlo criticality code
International Nuclear Information System (INIS)
Jacquet, Olivier; Miss, Joachim; Courtois, Gerard
2003-01-01
MORET 4 is a three dimensional multigroup Monte Carlo code which calculates the effective multiplication factor (keff) of any configurations more or less complex as well as reaction rates in the different volumes of the geometry and the leakage out of the system. MORET 4 is the Monte Carlo code of the APOLLO2-MORET 4 standard route of CRISTAL, the French criticality package. It is the most commonly used Monte Carlo code for French criticality calculations. During the last four years, the MORET 4 team has developed or improved the following major points: modernization of the geometry, implementation of perturbation algorithms, source distribution convergence, statistical detection of stationarity, unbiased variance estimation and creation of pre-processing and post-processing tools. The purpose of this paper is not only to present the new features of MORET but also to detail clearly the physical models and the mathematical methods used in the code. (author)
Stabilization effect of fission source in coupled Monte Carlo simulations
Energy Technology Data Exchange (ETDEWEB)
Olsen, Borge; Dufek, Jan [Div. of Nuclear Reactor Technology, KTH Royal Institute of Technology, AlbaNova University Center, Stockholm (Sweden)
2017-08-15
A fission source can act as a stabilization element in coupled Monte Carlo simulations. We have observed this while studying numerical instabilities in nonlinear steady-state simulations performed by a Monte Carlo criticality solver that is coupled to a xenon feedback solver via fixed-point iteration. While fixed-point iteration is known to be numerically unstable for some problems, resulting in large spatial oscillations of the neutron flux distribution, we show that it is possible to stabilize it by reducing the number of Monte Carlo criticality cycles simulated within each iteration step. While global convergence is ensured, development of any possible numerical instability is prevented by not allowing the fission source to converge fully within a single iteration step, which is achieved by setting a small number of criticality cycles per iteration step. Moreover, under these conditions, the fission source may converge even faster than in criticality calculations with no feedback, as we demonstrate in our numerical test simulations.
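The stabilization mechanism, not letting the source converge fully within each iteration step, can be mimicked with a scalar toy. In the sketch below (an invented linear model, not a transport calculation), the "flux" relaxes toward the current feedback target by 50% per inner cycle, while the feedback map alone would oscillate; using one inner cycle per outer step damps the oscillation, whereas many inner cycles reproduce the unstable fixed-point iteration.

```python
def coupled_iteration(inner_cycles, outer_iters=60):
    """Toy coupled scheme: the 'flux' relaxes toward the feedback target by
    50% per inner cycle (standing in for Monte Carlo criticality cycles);
    the feedback is a destabilizing linear map with slope -0.9."""
    flux, history = 1.0, []
    for _ in range(outer_iters):
        target = 2.0 - 0.9 * flux          # oscillatory feedback
        for _ in range(inner_cycles):
            flux += 0.5 * (target - flux)  # one 'criticality cycle'
        history.append(flux)
    return history

fixed_point = 2.0 / 1.9
few = coupled_iteration(1)      # 1 cycle per step: heavily damped, fast
many = coupled_iteration(20)    # 20 cycles per step: near-full convergence, slow
```

With one inner cycle the effective outer map has contraction factor 0.05 and converges monotonically; with twenty it approaches the raw feedback map, whose error decays only by a factor 0.9 per step while alternating sign.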
Monte Carlo Simulation in Statistical Physics An Introduction
Binder, Kurt
2010-01-01
Monte Carlo Simulation in Statistical Physics deals with the computer simulation of many-body systems in condensed-matter physics and related fields of physics, chemistry and beyond, to traffic flows, stock market fluctuations, etc. Using random numbers generated by a computer, probability distributions are calculated, allowing the estimation of the thermodynamic properties of various systems. This book describes the theoretical background to several variants of these Monte Carlo methods and gives a systematic presentation from which newcomers can learn to perform such simulations and to analyze their results. The fifth edition covers Classical as well as Quantum Monte Carlo methods. Furthermore a new chapter on the sampling of free-energy landscapes has been added. To help students in their work a special web server has been installed to host programs and discussion groups (http://wwwcp.tphys.uni-heidelberg.de). Prof. Binder was awarded the Berni J. Alder CECAM Award for Computational Physics 2001 as well ...
Two proposed convergence criteria for Monte Carlo solutions
International Nuclear Information System (INIS)
Forster, R.A.; Pederson, S.P.; Booth, T.E.
1992-01-01
The central limit theorem (CLT) can be applied to a Monte Carlo solution if two requirements are satisfied: (1) The random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these two conditions are satisfied, a confidence interval (CI) based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by the knowledge of the Monte Carlo tally being used. The Monte Carlo practitioner has a limited number of marginal methods to assess the fulfillment of the second requirement, such as statistical error reduction proportional to 1/√N with error magnitude guidelines. Two proposed methods are discussed in this paper to assist in deciding if N is large enough: estimating the relative variance of the variance (VOV) and examining the empirical history score probability density function (pdf)
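The two proposed checks can be computed directly from a list of history scores. The sketch below implements the sample mean, its relative error, and the relative variance of the variance in the form popularized by MCNP's statistical checks (VOV = Σ(xᵢ−x̄)⁴/(Σ(xᵢ−x̄)²)² − 1/N), then contrasts a bounded tally with a heavy-tailed one whose true variance is infinite. The score distributions are invented for the demo.

```python
import random

def tally_stats(scores):
    """Sample mean, relative error, and relative variance of the variance
    (VOV) of a tally: a large VOV warns that N is not yet large enough for
    a CLT-based confidence interval."""
    n = len(scores)
    mean = sum(scores) / n
    d2 = sum((x - mean) ** 2 for x in scores)
    d4 = sum((x - mean) ** 4 for x in scores)
    rel_err = (d2 / (n - 1) / n) ** 0.5 / mean
    vov = d4 / d2 ** 2 - 1.0 / n      # MCNP-style estimator
    return mean, rel_err, vov

rng = random.Random(13)
n = 10000
well_behaved = [rng.random() for _ in range(n)]            # bounded scores
heavy_tailed = [rng.random() ** -0.6 for _ in range(n)]    # infinite variance
_, re1, vov1 = tally_stats(well_behaved)
_, re2, vov2 = tally_stats(heavy_tailed)
```

For the bounded tally the VOV decays like 1/N and is tiny; for the heavy-tailed tally it stays large, flagging that the reported relative error cannot be trusted.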
Applicability of quasi-Monte Carlo for lattice systems
Energy Technology Data Exchange (ETDEWEB)
Ammon, Andreas [Berlin Humboldt-Univ. (Germany). Dept. of Physics; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Hartung, Tobias [King's College London (United Kingdom). Dept. of Mathematics; Jansen, Karl [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Leovey, Hernan; Griewank, Andreas [Berlin Humboldt-Univ. (Germany). Dept. of Mathematics; Mueller-Preussker, Michael [Berlin Humboldt-Univ. (Germany). Dept. of Physics
2013-11-15
This project investigates the applicability of quasi-Monte Carlo methods to Euclidean lattice systems in order to improve the asymptotic error scaling of observables for such theories. The error of an observable calculated by averaging over random observations generated from ordinary Monte Carlo simulations scales like N^(-1/2), where N is the number of observations. By means of quasi-Monte Carlo methods it is possible to improve this scaling for certain problems to N^(-1), or even further if the problems are regular enough. We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling of all investigated observables in both cases.
Applicability of quasi-Monte Carlo for lattice systems
International Nuclear Information System (INIS)
Ammon, Andreas; Deutsches Elektronen-Synchrotron; Hartung, Tobias; Jansen, Karl; Leovey, Hernan; Griewank, Andreas; Mueller-Preussker, Michael
2013-11-01
This project investigates the applicability of quasi-Monte Carlo methods to Euclidean lattice systems in order to improve the asymptotic error scaling of observables for such theories. The error of an observable calculated by averaging over random observations generated from ordinary Monte Carlo simulations scales like N^(-1/2), where N is the number of observations. By means of quasi-Monte Carlo methods it is possible to improve this scaling for certain problems to N^(-1), or even further if the problems are regular enough. We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling of all investigated observables in both cases.
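The scaling difference between the two abstracts' N^(-1/2) and N^(-1) claims can be reproduced in one dimension with the van der Corput sequence, the basic building block of low-discrepancy point sets. The comparison below on a smooth test integrand is illustrative; lattice field theories need the higher-dimensional constructions discussed in the paper.

```python
import random

def van_der_corput(i, base=2):
    """Radical-inverse (van der Corput) sequence: reflect the base-b digits
    of i about the radix point to get a low-discrepancy point in [0, 1)."""
    q, denom = 0.0, 1.0
    while i:
        denom *= base
        i, r = divmod(i, base)
        q += r / denom
    return q

def integrate(points, f):
    """Equal-weight quadrature over a point set."""
    return sum(f(x) for x in points) / len(points)

f = lambda x: x * x                   # exact integral over [0, 1] is 1/3
n = 4096
rng = random.Random(2)
mc_err = abs(integrate([rng.random() for _ in range(n)], f) - 1.0 / 3.0)
qmc_err = abs(integrate([van_der_corput(i + 1) for i in range(n)], f) - 1.0 / 3.0)
```

At n = 4096 the quasi-random error is at the 10⁻⁴ level (roughly 1/n), while the pseudo-random error is typically an order of magnitude larger (roughly 1/√n).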
Estimation of flux distributions with Monte Carlo functional expansion tallies
International Nuclear Information System (INIS)
Griesheimer, D. P.; Martin, W. R.; Holloway, J. P.
2005-01-01
Monte Carlo methods provide a powerful technique for estimating the average radiation flux in a volume (or across a surface) in cases where analytical solutions may not be possible. Unfortunately, Monte Carlo simulations typically provide only integral results and do not offer any further details about the distribution of the flux with respect to space, angle, time or energy. In the functional expansion tally (FET) a Monte Carlo simulation is used to estimate the functional expansion coefficients for flux distributions with respect to an orthogonal set of basis functions. The expansion coefficients are then used in post-processing to reconstruct a series approximation to the true distribution. Discrete event FET estimators are derived and their application in estimating radiation flux or current distributions is demonstrated. Sources of uncertainty in the FET are quantified and estimators for the statistical and truncation errors are derived. Numerical results are presented to support the theoretical development. (authors)
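A minimal FET can be written in a few lines. The sketch below tallies Legendre moments of histories drawn from an invented linear "flux" shape f(x) = (1+x)/2 on [−1, 1] and reconstructs the distribution from the estimated coefficients; it omits the paper's uncertainty and truncation-error estimators.

```python
import random

def legendre(k, x):
    """Legendre polynomial P_k(x) via the three-term recurrence."""
    if k == 0:
        return 1.0
    p0, p1 = 1.0, x
    for m in range(1, k):
        p0, p1 = p1, ((2 * m + 1) * x * p1 - m * p0) / (m + 1)
    return p1

# FET sketch: histories drawn from a 'flux' shape f(x) = (1+x)/2 on [-1, 1];
# each history scores P_k(x), and the tallied coefficients reconstruct f.
rng = random.Random(9)
K, N = 4, 100000
sums = [0.0] * (K + 1)
for _ in range(N):
    x = 2.0 * rng.random() ** 0.5 - 1.0   # inverse CDF of (1+x)/2
    for k in range(K + 1):
        sums[k] += legendre(k, x)
coeffs = [(2 * k + 1) / 2.0 * s / N for k, s in enumerate(sums)]

def reconstruct(x):
    """Series approximation to the flux shape from the tallied coefficients."""
    return sum(c * legendre(k, x) for k, c in enumerate(coeffs))
```

Since f = P₀/2 + P₁/2 exactly, the tally should recover coefficients near 0.5, 0.5, 0, 0, 0, and the reconstruction should match f pointwise to within statistical noise.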
Monte Carlo simulation of continuous-space crystal growth
International Nuclear Information System (INIS)
Dodson, B.W.; Taylor, P.A.
1986-01-01
We describe a method, based on Monte Carlo techniques, of simulating the atomic growth of crystals without the discrete lattice space assumed by conventional Monte Carlo growth simulations. Since no lattice space is assumed, problems involving epitaxial growth, heteroepitaxy, phonon-driven mechanisms, surface reconstruction, and many other phenomena incompatible with the lattice-space approximation can be studied. Also, use of the Monte Carlo method circumvents to some extent the extreme limitations on simulated timescale inherent in crystal-growth techniques which might be proposed using molecular dynamics. The implementation of the new method is illustrated by studying the growth of strained-layer superlattice (SLS) interfaces in two-dimensional Lennard-Jones atomic systems. Despite the extreme simplicity of such systems, the qualitative features of SLS growth seen here are similar to those observed experimentally in real semiconductor systems
Molecular dynamics algorithms for quantum Monte Carlo methods
Miura, Shinichi
2009-11-01
In the present Letter, novel molecular dynamics methods compatible with corresponding quantum Monte Carlo methods are developed. One is a variational molecular dynamics method that is a molecular dynamics analog of the quantum variational Monte Carlo method. The other is a variational path integral molecular dynamics method, which is based on the path integral molecular dynamics method for finite temperature systems by Tuckerman et al. [M. Tuckerman, B.J. Berne, G.J. Martyna, M.L. Klein, J. Chem. Phys. 99 (1993) 2796]. These methods are applied to model systems including liquid helium-4, and are demonstrated to work satisfactorily for the tested ground-state calculations.
Grain-boundary melting: A Monte Carlo study
DEFF Research Database (Denmark)
Besold, Gerhard; Mouritsen, Ole G.
1994-01-01
Grain-boundary melting in a lattice-gas model of a bicrystal is studied by Monte Carlo simulation using the grand canonical ensemble. Well below the bulk melting temperature T(m), a disordered liquidlike layer gradually emerges at the grain boundary. Complete interfacial wetting can be observed when the temperature approaches T(m) from below. Monte Carlo data over an extended temperature range indicate a logarithmic divergence w(T) ~ -ln(T(m)-T) of the width w of the disordered layer, in agreement with mean-field theory.
Monte Carlo Form-Finding Method for Tensegrity Structures
Li, Yue; Feng, Xi-Qiao; Cao, Yan-Ping
2010-05-01
In this paper, we propose a Monte Carlo-based approach to solve tensegrity form-finding problems. It uses a stochastic procedure to find the deterministic equilibrium configuration of a tensegrity structure. The suggested Monte Carlo form-finding (MCFF) method is highly efficient because it does not involve complicated matrix operations and symmetry analysis and it works for arbitrary initial configurations. Both regular and non-regular tensegrity problems of large scale can be solved. Some representative examples are presented to demonstrate the efficiency and accuracy of this versatile method.
Utilising Monte Carlo Simulation for the Valuation of Mining Concessions
Directory of Open Access Journals (Sweden)
Rosli Said
2005-12-01
Full Text Available Valuation involves the analyses of various input data to produce an estimated value. Since each input is itself often an estimate, there is an element of uncertainty in the input. This leads to uncertainty in the resultant output value. It is argued that a valuation must also convey information on the uncertainty, so as to be more meaningful and informative to the user. The Monte Carlo simulation technique can generate the information on uncertainty and is therefore potentially useful to valuation. This paper reports on the investigation that has been conducted to apply Monte Carlo simulation technique in mineral valuation, more specifically, in the valuation of a quarry concession.
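The paper's central point, that a valuation should report uncertainty as well as a point estimate, is easy to sketch. The code below propagates invented input distributions (they are illustrative, not the paper's quarry data) through a simple value model and reports a percentile band alongside the mean.

```python
import random
import statistics

def simulate_value(n, rng):
    """MC valuation sketch: each input is itself an uncertain estimate, so
    the output is a value distribution rather than a single figure."""
    values = []
    for _ in range(n):
        tonnage = rng.gauss(1.0e6, 1.0e5)   # recoverable rock, tonnes
        price = rng.gauss(25.0, 4.0)        # sale price, $/tonne
        cost = rng.gauss(15.0, 2.0)         # extraction cost, $/tonne
        values.append(tonnage * (price - cost))
    return values

rng = random.Random(8)
vals = simulate_value(20000, rng)
point = statistics.mean(vals)            # the conventional single figure
q = statistics.quantiles(vals, n=20)
lo, hi = q[0], q[-1]                     # 5th and 95th percentiles
```

Reporting the (lo, hi) band conveys the uncertainty that a single discounted value hides, which is exactly the argument the abstract makes.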
Aspects of perturbative QCD in Monte Carlo shower models
International Nuclear Information System (INIS)
Gottschalk, T.D.
1986-01-01
The perturbative QCD content of Monte Carlo models for high energy hadron-hadron scattering is examined. Particular attention is given to the recently developed backwards evolution formalism for initial state parton showers, and the merging of parton shower evolution with hard scattering cross sections. Shower estimates of K-factors are discussed, and a simple scheme is presented for incorporating 2 → 2 QCD cross sections into shower model calculations without double counting. Additional issues in the development of hard scattering Monte Carlo models are summarized. 69 references, 20 figures
ALEPH: An optimal approach to Monte Carlo burn-up
International Nuclear Information System (INIS)
Verboomen, B.
2007-01-01
The incentive for creating Monte Carlo burn-up codes arises from their ability to provide the most accurate locally dependent spectra and flux values in realistic 3D geometries of any type. These capabilities, linked with the ability to handle nuclear data not only in its most basic but also most complex form (namely continuous energy cross sections, detailed energy-angle correlations, multi-particle physics, etc.), could make Monte Carlo burn-up codes very powerful, especially for hybrid and advanced nuclear systems (like for instance Accelerator Driven Systems). Still, such Monte Carlo burn-up codes have had limited success mainly due to the rather long CPU time required to carry out very detailed and accurate calculations, even with modern computer technology. To work around this issue, users often have to reduce the number of nuclides in the evolution chains or to consider either longer irradiation time steps and/or larger spatial burn-up cells, jeopardizing the accuracy of the calculation in all cases. There should always be a balance between accuracy and what is (reasonably) achievable. So when the Monte Carlo simulation time is as low as possible and if calculating the cross sections and flux values required for the depletion calculation takes little or no extra time compared to this simulation time, then we can actually be as accurate as we want. That is the optimum situation for Monte Carlo burn-up calculations. The ultimate goal of this work is to provide the Monte Carlo community with an efficient, flexible and easy to use alternative for Monte Carlo burn-up and activation calculations, which is what we did with ALEPH. ALEPH is a Monte Carlo burn-up code that uses ORIGEN 2.2 as a depletion module and any version of MCNP or MCNPX as the transport module. For now, ALEPH has been limited to updating microscopic cross section data only. By providing an easy to understand user interface, we also take away the burden from the user. For the user, it is as if he is
Monte carlo analysis of multicolour LED light engine
DEFF Research Database (Denmark)
Chakrabarti, Maumita; Thorseth, Anders; Jepsen, Jørgen
2015-01-01
A new Monte Carlo simulation is presented as a tool for analysing colour feedback systems, used here to analyse the colour uncertainties and achievable stability in a multicolour dynamic LED system. The Monte Carlo analysis is based on an experimental investigation of a multicolour LED light engine designed for white tuneable studio lighting. The measured sensitivities to the various factors influencing the colour uncertainty for a similar system are incorporated. The method aims to provide uncertainties in the achievable chromaticity coordinates as output over the tuneable range, e...
Novel Quantum Monte Carlo Approaches for Quantum Liquids
Rubenstein, Brenda M.
Quantum Monte Carlo methods are a powerful suite of techniques for solving the quantum many-body problem. By using random numbers to stochastically sample quantum properties, QMC methods are capable of studying low-temperature quantum systems well beyond the reach of conventional deterministic techniques. QMC techniques have likewise been indispensable tools for augmenting our current knowledge of superfluidity and superconductivity. In this thesis, I present two new quantum Monte Carlo techniques, the Monte Carlo Power Method and Bose-Fermi Auxiliary-Field Quantum Monte Carlo, and apply previously developed Path Integral Monte Carlo methods to explore two new phases of quantum hard spheres and hydrogen. I lay the foundation for a subsequent description of my research by first reviewing the physics of quantum liquids in Chapter One and the mathematics behind Quantum Monte Carlo algorithms in Chapter Two. I then discuss the Monte Carlo Power Method, a stochastic way of computing the first several extremal eigenvalues of a matrix too memory-intensive to be stored and therefore diagonalized. As an illustration of the technique, I demonstrate how it can be used to determine the second eigenvalues of the transition matrices of several popular Monte Carlo algorithms. This information may be used to quantify how rapidly a Monte Carlo algorithm is converging to the equilibrium probability distribution it is sampling. I next present the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm. This algorithm generalizes the well-known Auxiliary-Field Quantum Monte Carlo algorithm for fermions to bosons and Bose-Fermi mixtures. Despite some shortcomings, the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm represents the first exact technique capable of studying Bose-Fermi mixtures of any size in any dimension. In Chapter Six, I describe a new Constant Stress Path Integral Monte Carlo algorithm for the study of quantum mechanical systems under high pressures. While
Juan Carlos Onetti encerrado con un solo juguete: un libro
Becerra, Eduardo
2009-01-01
This profile of Juan Carlos Onetti aims to recover a guiding thread of his life, based on certain attitudes and episodes that date back to his childhood and extend to his final years. In them, a relationship between solitude, confinement and imagination can be seen that explains both certain traits of his personality and fundamental characteristics of his literature.
Dialogue concerning Carlo Revelli's "Il catalogo per soggetti"
Directory of Open Access Journals (Sweden)
Alberto Cheti
2012-04-01
Full Text Available This article presents, in the form of a conversation, the content of "Il catalogo per soggetti" (the subject catalogue) by Carlo Revelli, published in the '70s and recently reprinted. Carlo Revelli combines a systematic approach to subject cataloguing with great experience and wide-ranging theoretical and historical knowledge. The importance of this work in the context of Italian library science is beyond doubt, as it is both a working tool for libraries and a text for the training of new librarians. It is the classic cataloguing book. Despite technological advances that challenge its application, this work keeps its value intact for the understanding of subject indexing.
THE APPLICATION OF MONTE CARLO SIMULATION FOR A DECISION PROBLEM
Directory of Open Access Journals (Sweden)
Çiğdem ALABAŞ
2001-01-01
Full Text Available The ultimate goal of the standard decision tree approach is to calculate the expected value of a selected performance measure. In real-world situations, decision problems become very complex as the uncertainty factors increase. In such cases, decision analysis using the standard decision tree approach is not useful. One way of overcoming this difficulty is Monte Carlo simulation. In this study, a Monte Carlo simulation model is developed for a complex problem and statistical analysis is performed to make the best decision.
A study on the shielding element using Monte Carlo simulation
Energy Technology Data Exchange (ETDEWEB)
Kim, Ki Jeong [Dept. of Radiology, Konkuk University Medical Center, Seoul (Korea, Republic of); Shim, Jae Goo [Dept. of Radiologic Technology, Daegu Health College, Daegu (Korea, Republic of)
2017-06-15
In this research, we simulated the shielding ability of individual elements using Monte Carlo simulation, with the aim of developing a medical radiation shielding sheet that can replace the lead currently in use. The candidates were mainly metallic elements with large atomic numbers, which are known to have high shielding performance; since various composite materials have recently improved shielding performance, 21 elements were selected in consideration of weight reduction, processability, activity, etc. The simulations used the Monte Carlo method. Simulating the shielding performance of each element, the shielding ratio was estimated to be highest for tungsten and gold, at 98.82% and 98.44% respectively.
Carlos Leffler Inc. - still growing after more than 50 years
International Nuclear Information System (INIS)
Anon.
1993-01-01
In 1941, Carlos R. Leffler, then 17 years old, bought his first truck with his life savings. He used it to transport fertilizer, coal and milk to the farmers of Lebanon County, PA. With a reputation for reliability, gained from his efforts with this first unit, he was able to expand his activities to fuel oil delivery. In 1945, Leffler moved the business to Richland, PA, which is still the company's hometown, and embarked on the course of growth which is still the company's hallmark. Today, Carlos R. Leffler, Inc. serves customers in 45 of the 67 counties of Pennsylvania as well as customers in New Jersey, Maryland and Delaware
Therapeutic Applications of Monte Carlo Calculations in Nuclear Medicine
International Nuclear Information System (INIS)
Coulot, J
2003-01-01
Monte Carlo techniques are involved in many applications in medical physics, and the field of nuclear medicine has seen great development in the past ten years due to their wider use. Thus, it is of great interest to look at the state of the art in this domain, at a time when improving computer performance allows one to obtain improved results in a dramatically reduced time. The goal of this book is to make, in 15 chapters, an exhaustive review of the use of Monte Carlo techniques in nuclear medicine, also giving key features which are not necessarily directly related to the Monte Carlo method, but mandatory for its practical application. As the book deals with 'therapeutic' nuclear medicine, it focuses on internal dosimetry. After a general introduction on Monte Carlo techniques and their applications in nuclear medicine (dosimetry, imaging and radiation protection), the authors give an overview of internal dosimetry methods (formalism, mathematical phantoms, quantities of interest). Then, some of the more widely used Monte Carlo codes are described, as well as some treatment planning software packages. Some original techniques are also mentioned, such as dosimetry for boron neutron capture synovectomy. The book is generally well written, clearly presented, and very well documented. Each chapter gives an overview of its subject, and it is up to the reader to investigate it further using the extensive bibliography provided. Each topic is discussed from a practical point of view, which is of great help for non-experienced readers. For instance, the chapter about mathematical aspects of Monte Carlo particle transport is very clear and helps one to apprehend the philosophy of the method, which is often a difficulty with a more theoretical approach. Each chapter is put in the general (clinical) context, and this allows the reader to keep in mind the intrinsic limitations of each technique involved in dosimetry (for instance activity quantitation). Nevertheless, there are some minor remarks to
International Nuclear Information System (INIS)
Hoogenboom, J.E.
2000-01-01
The Monte Carlo method is a statistical method to solve mathematical and physical problems using random numbers. The principle of the method will be demonstrated for a simple mathematical problem and for neutron transport. Various types of estimators will be discussed, as well as generally applied variance reduction methods like splitting, Russian roulette and importance biasing. The theoretical formulation for solving eigenvalue problems for multiplying systems will be shown. Some reflections will be given about the applicability of the Monte Carlo method, its limitations and its future prospects for reactor physics calculations. Adjoint Monte Carlo is a Monte Carlo game to solve the adjoint neutron (or photon) transport equation. The adjoint transport equation can be interpreted in terms of simulating histories of artificial particles, which show properties of neutrons that move backwards in history. These particles will start their history at the detector from which the response must be estimated and give a contribution to the estimated quantity when they hit or pass through the neutron source. Application to the multigroup transport formulation will be demonstrated. Possible implementation for the continuous energy case will be outlined. The inherent advantages and disadvantages of the method will be discussed. The Midway Monte Carlo method will be presented for calculating a detector response due to a (neutron or photon) source. A derivation will be given of the basic formula for the Midway Monte Carlo method. The black absorber technique, allowing for a cutoff of particle histories when reaching the midway surface in one of the calculations, will be derived. An extension of the theory to coupled neutron-photon problems is given. The method will be demonstrated for an oil well logging problem, comprising a neutron source in a borehole and photon detectors to register the photons generated by inelastic neutron scattering. (author)
Stochastic simulation and Monte-Carlo methods; Simulation stochastique et methodes de Monte-Carlo
Energy Technology Data Exchange (ETDEWEB)
Graham, C. [Centre National de la Recherche Scientifique (CNRS), 91 - Gif-sur-Yvette (France); Ecole Polytechnique, 91 - Palaiseau (France); Talay, D. [Institut National de Recherche en Informatique et en Automatique (INRIA), 78 - Le Chesnay (France); Ecole Polytechnique, 91 - Palaiseau (France)
2011-07-01
This book presents some numerical probabilistic methods of simulation with their convergence speed. It combines mathematical precision and numerical developments, each proposed method belonging to a precise theoretical context developed in a rigorous and self-sufficient manner. After some recalls about the law of large numbers and the basics of probabilistic simulation, the authors introduce martingales and their main properties. Then, they develop a chapter on non-asymptotic estimations of Monte-Carlo method errors. This chapter recalls the central limit theorem and makes its convergence speed precise. It introduces the Log-Sobolev and concentration inequalities, which have been studied intensively in recent years. This chapter ends with some variance reduction techniques. In order to demonstrate in a rigorous way the simulation results of stochastic processes, the authors introduce the basic notions of probability and of stochastic calculus, in particular the essential basics of Ito calculus, adapted to each numerical method proposed. They successively study the construction and important properties of the Poisson process, of the jump and deterministic Markov processes (linked to transport equations), and of the solutions of stochastic differential equations. Numerical methods are then developed and the convergence speed results of algorithms are rigorously demonstrated. In passing, the authors describe the basics of the probabilistic interpretation of parabolic partial differential equations. Non-trivial applications to real applied problems are also developed. (J.S.)
Energy Technology Data Exchange (ETDEWEB)
Burkatzki, Mark Thomas
2008-07-01
The author presents scalar-relativistic energy-consistent Hartree-Fock pseudopotentials for the main-group and 3d-transition-metal elements. The pseudopotentials do not exhibit a singularity at the nucleus and are therefore suitable for quantum Monte Carlo (QMC) calculations. The author demonstrates their transferability through extensive benchmark calculations of atomic excitation spectra as well as molecular properties. In particular, the author computes the vibrational frequencies and binding energies of 26 first- and second-row diatomic molecules using post Hartree-Fock methods, finding excellent agreement with the corresponding all-electron values. The author shows that the presented pseudopotentials achieve greater accuracy than other existing pseudopotentials constructed specifically for QMC. The localization error and the efficiency in QMC are discussed. The author also presents QMC calculations for selected atomic and diatomic 3d-transition-metal systems. Finally, valence basis sets of different sizes (VnZ with n=D,T,Q,5 for 1st and 2nd row; with n=D,T for 3rd to 5th row; with n=D,T,Q for the 3d transition metals) optimized for the pseudopotentials are presented. (orig.)
Monte Carlo simulation of virtual compton scattering at MAMI
International Nuclear Information System (INIS)
D'Hose, N.; Ducret, J.E.; Gousset, TH.; Guichon, P.A.M.; Kerhoas, S.; Lhuillier, D.; Marchand, C.; Marchand, D.; Martino, J.; Mougey, J.; Roche, J.; Vanderhaeghen, M.; Vernin, P.; Bohm, H.; Distler, M.; Edelhoff, R.; Friedrich, J.M.; Geiges, R.; Jennewein, P.; Kahrau, M.; Korn, M.; Kramer, H.; Krygier, K.W.; Kunde, V.; Liesenfeld, A.; Merkel, H.; Merle, K.; Neuhausen, R.; Pospischil, TH.; Rosner, G.; Sauer, P.; Schmieden, H.; Schardt, S.; Tamas, G.; Wagner, A.; Walcher, TH.; Wolf, S.; Hyde-Wright, CH.; Boeglin, W.U.; Van de Wiele, J.
1996-01-01
The Monte Carlo simulation developed specially for the VCS experiments taking place at MAMI is fully described. This simulation can generate events according to the Bethe-Heitler + Born cross section behaviour and takes into account resolution-deteriorating effects. It is used to determine solid angles for the various experimental settings. (authors)
La luce pesante Carlo Rubbia, cronaca di un Nobel
Bertin, Antonio
1984-01-01
In this book, through a series of conversations with Carlo Rubbia, winner of the 1984 Nobel Prize in Physics, the authors recount the story of his discoveries, which allowed Europe to achieve a significant scientific overtaking of the United States, traditionally at the forefront of subnuclear physics.
A multi-microcomputer system for Monte Carlo calculations
International Nuclear Information System (INIS)
Hertzberger, L.O.; Berg, B.; Krasemann, H.
1981-01-01
We propose a microcomputer system which allows parallel processing for Monte Carlo calculations in lattice gauge theories, simulations of high energy physics experiments and presumably many other fields of current interest. The master-n-slave multiprocessor system is based on the Motorola MC 68000 microprocessor. One attraction of this processor is that it allows up to 16 MByte of random access memory. (orig.)
Monte Carlo simulation of fluorescence correlation spectroscopy data
Czech Academy of Sciences Publication Activity Database
Košovan, P.; Uhlík, F.; Kuldová, J.; Štěpánek, M.; Limpouchová, Z.; Procházka, K.; Benda, Aleš; Humpolíčková, Jana; Hof, Martin
2011-01-01
Roč. 76, č. 3 (2011), s. 207-222 ISSN 0010-0765 R&D Projects: GA AV ČR IAA400400621 Institutional research plan: CEZ:AV0Z40400503 Keywords : Monte Carlo Study * fluorescence * spectroscopy Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 1.283, year: 2011
Continuous energy Monte Carlo method based lattice homogenization
International Nuclear Information System (INIS)
Li Mancang; Yao Dong; Wang Kan
2014-01-01
Based on the Monte Carlo code MCNP, the continuous energy Monte Carlo multi-group constants generation code MCMC has been developed. The track length scheme has been used as the foundation of cross section generation. The scattering matrix and Legendre components require special techniques, and the scattering event method has been proposed to solve this problem. Three methods have been developed to calculate the diffusion coefficients for diffusion reactor core codes, and the Legendre method has been applied in MCMC. To satisfy the equivalence theory, the general equivalence theory (GET) and the superhomogenization method (SPH) have been applied to the Monte Carlo based group constants. The super equivalence method (SPE) has been proposed to improve the equivalence. GET, SPH and SPE have been implemented in MCMC. The numerical results showed that generating the homogenized multi-group constants via the Monte Carlo method overcomes the difficulties in geometry and treats energy as a continuum, thus providing more accurate parameters. Besides, the same code and data library can be used for a wide range of applications due to this versatility. The MCMC scheme can be seen as a potential alternative to the widely used deterministic lattice codes. (authors)
Monte Carlo Simulation of Partially Confined Flexible Polymers
Hermsen, G.F.; de Geeter, B.A.; van der Vegt, N.F.A.; Wessling, Matthias
2002-01-01
We have studied conformational properties of flexible polymers partially confined to narrow pores of different size using configurational biased Monte Carlo simulations under athermal conditions. The asphericity of the chain has been studied as a function of its center of mass position along the
Present Status and Extensions of the Monte Carlo Performance Benchmark
Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.
2014-06-01
The NEA Monte Carlo Performance benchmark started in 2011 aiming to monitor over the years the abilities to perform a full-size Monte Carlo reactor core calculation with a detailed power production for each fuel pin with axial distribution. This paper gives an overview of the contributed results thus far. It shows that reaching a statistical accuracy of 1 % for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters with common type computer nodes. However, using true supercomputers the speedup of parallel calculations is increasing up to large numbers of processor cores. More experience is needed from calculations on true supercomputers using large numbers of processors in order to predict if the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark test are well suited for further investigations of full-core Monte Carlo calculations and a need is felt for testing other issues than its computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems for evaluating fission source convergence for a system with a high dominance ratio, for coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities and to study the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition will be discussed.
Minimum Thresholds of Monte Carlo Cycles for Nigerian Empirical ...
African Journals Online (AJOL)
Monte Carlo simulation has proven to be an effective means of incorporating reliability analysis into the Mechanistic-Empirical (M-E) design process for flexible pavements. A Nigerian Empirical-Mechanistic Pavement Analysis and Design System procedure for Nigerian environments has been proposed. This work aimed at ...
Direct determination of liquid phase coexistence by Monte Carlo simulations
Zweistra, H.J.A.; Besseling, N.A.M.
2006-01-01
A formalism to determine coexistence points by means of Monte Carlo simulations is presented. The general idea of the method is to perform a simulation simultaneously in several unconnected boxes which can exchange particles. At equilibrium, most of the boxes will be occupied by a homogeneous phase.
An Overview of the Monte Carlo Methods, Codes, & Applications Group
Energy Technology Data Exchange (ETDEWEB)
Trahan, Travis John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-08-30
This report sketches the work of the Group to deliver first-principle Monte Carlo methods, production quality codes, and radiation transport-based computational and experimental assessments using the codes MCNP and MCATK for such applications as criticality safety, non-proliferation, nuclear energy, nuclear threat reduction and response, radiation detection and measurement, radiation health protection, and stockpile stewardship.
The University of San Carlos Herbarium, Cebu City, The Philippines
Seidenschwarz, F.
1990-01-01
The University of San Carlos, Cebu City ('USC') holds a botanical collection which is the fourth largest in the Philippines. The three leading herbaria of the Philippines are situated in or close to Manila. The 36-year-old USC Herbarium is the only major collection in the Philippines
A separable shadow Hamiltonian hybrid Monte Carlo method
Sweet, Christopher R.; Hampton, Scott S.; Skeel, Robert D.; Izaguirre, Jesús A.
2009-11-01
Hybrid Monte Carlo (HMC) is a rigorous sampling method that uses molecular dynamics (MD) as a global Monte Carlo move. The acceptance rate of HMC decays exponentially with system size. The shadow hybrid Monte Carlo (SHMC) was previously introduced to reduce this performance degradation by sampling instead from the shadow Hamiltonian defined for MD when using a symplectic integrator. SHMC's performance is limited by the need to generate momenta for the MD step from a nonseparable shadow Hamiltonian. We introduce the separable shadow Hamiltonian hybrid Monte Carlo (S2HMC) method based on a formulation of the leapfrog/Verlet integrator that corresponds to a separable shadow Hamiltonian, which allows efficient generation of momenta. S2HMC gives the acceptance rate of a fourth order integrator at the cost of a second-order integrator. Through numerical experiments we show that S2HMC consistently gives a speedup greater than two over HMC for systems with more than 4000 atoms for the same variance. By comparison, SHMC gave a maximum speedup of only 1.6 over HMC. S2HMC has the additional advantage of not requiring any user parameters beyond those of HMC. S2HMC is available in the program PROTOMOL 2.1. A Python version, adequate for didactic purposes, is also in MDL (http://mdlab.sourceforge.net/s2hmc).
Application of Monte Carlo Method to Steady State Heat Conduction ...
African Journals Online (AJOL)
The Monte Carlo method was used in modelling steady state heat conduction problems. The method uses the fixed and the floating random walks to determine temperature in the domain of the definition of the heat conduction equation, at a single point directly. A heat conduction problem with an irregular shaped geometry ...
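The fixed random walk mentioned in the abstract above can be sketched for a rectangular grid; the grid size, boundary function and walk count below are illustrative assumptions, not values from the paper:

```python
import random

def mc_temperature(x, y, nx, ny, boundary_temp, n_walks=20000):
    """Estimate the steady-state temperature at interior grid node (x, y)
    by the fixed random walk: each walk steps to a uniformly random
    neighbour until it hits the boundary, then scores the boundary
    temperature at the exit node. The average over walks converges to the
    solution of the discrete Laplace equation at (x, y)."""
    total = 0.0
    for _ in range(n_walks):
        i, j = x, y
        while 0 < i < nx and 0 < j < ny:      # walk until a boundary is hit
            di, dj = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            i, j = i + di, j + dj
        total += boundary_temp(i, j)          # score the exit temperature
    return total / n_walks
```

Note the property the abstract highlights: the estimate is obtained at a single point directly, without solving for the whole temperature field.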
Monte Carlo capabilities of the SCALE code system
International Nuclear Information System (INIS)
Rearden, B.T.; Petrie, L.M.; Peplow, D.E.; Bekar, K.B.; Wiarda, D.; Celik, C.; Perfetti, C.M.; Ibrahim, A.M.; Dunn, M.E.; Hart, S.W.D.
2013-01-01
SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a 'plug-and-play' framework that includes three deterministic and three Monte Carlo radiation transport solvers (KENO, MAVRIC, TSUNAMI) that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE's graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2, to be released in 2014, will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. An overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2. (authors)
A novel Monte Carlo approach to hybrid local volatility models
A.W. van der Stoep (Anton); L.A. Grzelak (Lech Aleksander); C.W. Oosterlee (Cornelis)
2017-01-01
We present, in a Monte Carlo simulation framework, a novel approach for the evaluation of hybrid local volatility [Risk, 1994, 7, 18–20], [Int. J. Theor. Appl. Finance, 1998, 1, 61–110] models. In particular, we consider the stochastic local volatility model—see e.g. Lipton et al. [Quant.
A combination of Monte Carlo Temperature Basin Paving and Graph ...
Indian Academy of Sciences (India)
theory: Water cluster low energy structures and completeness of search. RAJAN SHRIVASTAVA, AVIJIT RAKSHIT, ... Keywords: Monte Carlo sampling; water cluster; graph theory. Exploration of the energy landscape ..... use our algorithm for large water clusters, at present, it turns out that use of this for (H2O)20 would ...
Sensitivity analysis for oblique incidence reflectometry using Monte Carlo simulations
DEFF Research Database (Denmark)
Kamran, Faisal; Andersen, Peter E.
2015-01-01
profiles. This article presents a sensitivity analysis of the technique in turbid media. Monte Carlo simulations are used to investigate the technique and its potential to distinguish the small changes between different levels of scattering. We present various regions of the dynamic range of optical...
Monte Carlo radiation transport: A revolution in science
International Nuclear Information System (INIS)
Hendricks, J.
1993-01-01
When Enrico Fermi, Stan Ulam, Nicholas Metropolis, John von Neumann, and Robert Richtmyer invented the Monte Carlo method fifty years ago, little could they imagine the far-flung consequences, the international applications, and the revolution in science epitomized by their abstract mathematical method. The Monte Carlo method is used in a wide variety of fields to solve exact computational models approximately by statistical sampling. It is an alternative to traditional physics modeling methods, which solve approximate computational models exactly by deterministic methods. Modern computers and improved methods, such as variance reduction, have enhanced the method to the point of enabling a true predictive capability in areas such as radiation or particle transport. This predictive capability has contributed to a radical change in the way science is done: design and understanding come from computations built upon experiments rather than being limited to experiments, and the computer codes doing the computations have become the repository for physics knowledge. The MCNP Monte Carlo computer code effort at Los Alamos is an example of this revolution. Physicians unfamiliar with physics details can design cancer treatments using physics buried in the MCNP computer code. Hazardous environments and hypothetical accidents can be explored. Many other fields, from underground oil well exploration to aerospace, from physics research to energy production, from safety to bulk materials processing, benefit from MCNP, the Monte Carlo method, and the revolution in science
A combination of Monte Carlo Temperature Basin Paving and Graph ...
Indian Academy of Sciences (India)
The knowledge of the degree of completeness of an energy landscape search by stochastic algorithms is often lacking. A graph theory based method is used to investigate the completeness of the search performed by the Monte Carlo Temperature Basin Paving (MCTBP) algorithm for (H2O)n (n = 6, 7, and 20). In the second part.
Monte Carlo simulation of the seed germination process
International Nuclear Information System (INIS)
Gladyszewska, B.; Koper, R.
2000-01-01
The paper presents a mathematical model of the seed germination process based on the Monte Carlo method and on theoretical premises resulting from the physiology of seed germination, which suggests three consecutive stages: physical, biochemical and physiological. The model was experimentally verified by determination of germination characteristics for seeds of ground tomatoes, Promyk cultivar, within a broad range of temperatures (from 15 to 30 deg C)
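The three-stage picture described above lends itself to a simple sampling sketch. The following toy model is illustrative only: the exponential stage durations and all rates are assumptions invented here, not the authors' model, whose stage statistics and temperature dependence are not given in the abstract.

```python
import random

def germination_fraction(n_seeds, stage_rates, t_max):
    """Toy Monte Carlo model in the spirit of the three-stage picture:
    each seed must complete the physical, biochemical and physiological
    stages in sequence, with each stage duration drawn from an exponential
    distribution whose rate (a hypothetical, temperature-dependent input)
    controls how fast that stage proceeds."""
    germinated = 0
    for _ in range(n_seeds):
        time_to_germinate = sum(random.expovariate(r) for r in stage_rates)
        if time_to_germinate <= t_max:
            germinated += 1
    return germinated / n_seeds      # fraction germinated by time t_max
```

Sweeping `t_max` yields a simulated germination curve that can be compared against measured germination characteristics at each temperature.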
A novel Monte Carlo approach to hybrid local volatility models
van der Stoep, A.W.; Grzelak, L.A.; Oosterlee, C.W.
2017-01-01
We present in a Monte Carlo simulation framework, a novel approach for the evaluation of hybrid local volatility [Risk, 1994, 7, 18–20], [Int. J. Theor. Appl. Finance, 1998, 1, 61–110] models. In particular, we consider the stochastic local volatility model—see e.g. Lipton et al. [Quant. Finance,
Tackling the premature convergence problem in Monte-Carlo localization
Kootstra, Gert; de Boer, Bart
2009-01-01
Monte-Carlo localization uses particle filtering to estimate the position of the robot. The method is known to suffer from the loss of potential positions when there is ambiguity present in the environment. Since many indoor environments are highly symmetric, this problem of premature convergence is
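The abstract above names the failure mode but not the paper's remedy; a common generic countermeasure, shown here as a sketch only (hypothetical 1-D world, not the authors' algorithm), is to resample while injecting a small fraction of fresh uniformly random particles so that alternative pose hypotheses are not lost:

```python
import random

def resample_with_injection(particles, weights, n_random, world_size):
    """Systematic (low-variance) resampling plus injection of uniformly
    random particles, a widespread heuristic to delay premature
    convergence in Monte-Carlo localization. Positions are floats in
    [0, world_size]; all names here are illustrative."""
    n = len(particles)
    kept = n - n_random                  # particles drawn from the posterior
    step = sum(weights) / kept
    u = random.uniform(0.0, step)        # single random offset
    out, idx, c = [], 0, weights[0]
    for k in range(kept):
        target = u + k * step
        while c < target:                # advance to the particle whose
            idx += 1                     # cumulative weight covers target
            c += weights[idx]
        out.append(particles[idx])
    # fresh uniform particles keep alternative hypotheses alive
    out.extend(random.uniform(0.0, world_size) for _ in range(n_random))
    return out
```

In a symmetric corridor, the injected particles give the filter a chance to re-acquire a position hypothesis that resampling alone would have extinguished.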
Monte Carlo simulation models of breeding-population advancement.
J.N. King; G.R. Johnson
1993-01-01
Five generations of population improvement were modeled using Monte Carlo simulations. The model was designed to address questions that are important to the development of an advanced generation breeding population. Specifically we addressed the effects on both gain and effective population size of different mating schemes when creating a recombinant population for...
Back propagation and Monte Carlo algorithms for neural network computations
International Nuclear Information System (INIS)
Junczys, R.; Wit, R.
1996-01-01
Results of teaching procedures for a neural network with two different algorithms are presented. The first one is based on the well known back-propagation technique; the second is an adapted version of the Monte Carlo global minimum seeking method. Combination of these two, different in nature, approaches provides promising results. (author)
Genetic algorithms and Monte Carlo simulation for optimal plant design
International Nuclear Information System (INIS)
Cantoni, M.; Marseguerra, M.; Zio, E.
2000-01-01
We present an approach to optimal plant design (choice of system layout and components) under conflicting safety and economic constraints, based upon the coupling of a Monte Carlo evaluation of plant operation with a Genetic Algorithm maximization procedure. The Monte Carlo simulation model provides a flexible tool which enables one to describe relevant aspects of plant design and operation, such as standby modes and deteriorating repairs, not easily captured by analytical models. The effects of deteriorating repairs are described by means of a modified Brown-Proschan model of imperfect repair which accounts for the possibility of an increased proneness to failure of a component after a repair. The transitions of a component from standby to active, and vice versa, are simulated using a multiplicative correlation model. The genetic algorithm procedure is tasked with optimizing a profit function which accounts for the plant safety and economic performance and which is evaluated, for each possible design, by the above Monte Carlo simulation. In order to avoid an overwhelming use of computer time, for each potential solution proposed by the genetic algorithm we perform only a few hundred Monte Carlo histories and then exploit the fact that, during the evolution of the genetic algorithm population, the fit chromosomes appear repeatedly many times, so that the results for the solutions of interest (i.e. the best ones) attain statistical significance
A Monte Carlo adapted finite element method for dislocation ...
Indian Academy of Sciences (India)
P Zakian
2017-10-10
... simulations are proposed. Various comparisons are examined to illustrate the capability of both methods for random simulation of faults. Keywords: Monte Carlo simulation; stochastic modeling; split node technique; finite element method; earthquake fault dislocation. In materials science, a ...
A Monte Carlo adapted finite element method for dislocation ...
Indian Academy of Sciences (India)
Mean and standard deviation values, as well as probability density function of ground surface responses due to the dislocation are computed. Based on analytical and numerical calculation of dislocation, two approaches of Monte Carlo simulations are proposed. Various comparisons are examined to illustrate the capability ...
Faster comparison of stopping times by nested conditional Monte Carlo
Dickmann, Fabian; Schweizer, Nikolaus
2016-01-01
We show that deliberately introducing a nested simulation stage can lead to significant variance reductions when comparing two stopping times by Monte Carlo. We derive the optimal number of nested simulations and prove that the algorithm is remarkably robust to misspecifications of this number. The
Monte Carlo simulation of quantum statistical lattice models
Raedt, Hans De; Lagendijk, Ad
1985-01-01
In this article we review recent developments in computational methods for quantum statistical lattice problems. We begin by giving the necessary mathematical basis, the generalized Trotter formula, and discuss the computational tools, exact summations and Monte Carlo simulation, that will be used
Monte Carlo simulations of the stability of delta-Pu
DEFF Research Database (Denmark)
Landa, A.; Soderlind, P.; Ruban, Andrei
2003-01-01
The transition temperature (T-c) for delta-Pu has been calculated for the first time. A Monte Carlo method is employed for this purpose and the effective cluster interactions are obtained from first-principles calculations incorporated with the Connolly-Williams and generalized perturbation methods...
Closed-shell variational quantum Monte Carlo simulation for the ...
African Journals Online (AJOL)
Closed-shell variational quantum Monte Carlo simulation for the electric dipole moment calculation of the hydrazine molecule using the CASINO code. ... Although the VQMC method showed considerable fluctuation, it calculated the electric dipole moment of the hydrazine molecule as 2.0 D, which is in closer ...
Monte Carlo studies of nuclei and quantum liquid drops
Energy Technology Data Exchange (ETDEWEB)
Pandharipande, V.R.; Pieper, S.C.
1989-01-01
The progress in application of variational and Green's function Monte Carlo methods to nuclei is reviewed. The nature of single-particle orbitals in correlated quantum liquid drops is discussed, and it is suggested that the difference between quasi-particle and mean-field orbitals may be of importance in nuclear structure physics. 27 refs., 7 figs., 2 tabs.
Variational Monte Carlo calculations of few-body nuclei
International Nuclear Information System (INIS)
Wiringa, R.B.
1986-01-01
The variational Monte Carlo method is described. Results for the binding energies, density distributions, momentum distributions, and static longitudinal structure functions of the ³H, ³He, and ⁴He ground states, and for the energies of the low-lying scattering states in ⁴He are presented. 25 refs., 3 figs
Monte Carlo studies of nuclei and quantum liquid drops
International Nuclear Information System (INIS)
Pandharipande, V.R.; Pieper, S.C.
1989-01-01
The progress in application of variational and Green's function Monte Carlo methods to nuclei is reviewed. The nature of single-particle orbitals in correlated quantum liquid drops is discussed, and it is suggested that the difference between quasi-particle and mean-field orbitals may be of importance in nuclear structure physics. 27 refs., 7 figs., 2 tabs
Variational Monte Carlo calculations of few-body nuclei
Energy Technology Data Exchange (ETDEWEB)
Wiringa, R.B.
1986-01-01
The variational Monte Carlo method is described. Results for the binding energies, density distributions, momentum distributions, and static longitudinal structure functions of the ³H, ³He, and ⁴He ground states, and for the energies of the low-lying scattering states in ⁴He are presented. 25 refs., 3 figs.
A Monte Carlo adapted finite element method for dislocation ...
Indian Academy of Sciences (India)
Dislocation modelling of an earthquake fault is of great importance due to the fact that ground surface response may be predicted by the model. However, geological features of a fault cannot be measured exactly, and therefore these features and data involve uncertainties. This paper presents a Monte Carlo based random ...