WorldWideScience

Sample records for preparation running monte

  1. Running the EGS4 Monte Carlo code with Fortran 90 on a Pentium computer

    International Nuclear Information System (INIS)

    Caon, M.; Bibbo, G.; Pattison, J.

    1996-01-01

    The possibility of running the EGS4 Monte Carlo radiation transport code system for medical radiation modelling on a microcomputer is discussed. This has been done using a Fortran 77 compiler with a 32-bit memory addressing system running under a memory-extender operating system; in addition, a virtual memory manager such as QEMM386 was required. The code has also run successfully on a SUN Sparcstation2. In 1995, faster Pentium-based microcomputers became available, as did the Windows 95 operating system, which can handle 32-bit programs and multitasking and provides its own virtual memory management. The paper describes how, with simple modifications to the batch files, it was possible to run EGS4 on a Pentium under Fortran 90 and Windows 95. This combination of software and hardware is cheaper and faster than running EGS4 on a SUN Sparcstation2. 8 refs., 1 tab.

  2. Running the EGS4 Monte Carlo code with Fortran 90 on a Pentium computer

    Energy Technology Data Exchange (ETDEWEB)

    Caon, M. [Flinders Univ. of South Australia, Bedford Park, SA (Australia); University of South Australia, SA (Australia)]; Bibbo, G. [Women's and Children's Hospital, SA (Australia)]; Pattison, J. [University of South Australia, SA (Australia)]

    1996-09-01

    The possibility of running the EGS4 Monte Carlo radiation transport code system for medical radiation modelling on a microcomputer is discussed. This has been done using a Fortran 77 compiler with a 32-bit memory addressing system running under a memory-extender operating system; in addition, a virtual memory manager such as QEMM386 was required. The code has also run successfully on a SUN Sparcstation2. In 1995, faster Pentium-based microcomputers became available, as did the Windows 95 operating system, which can handle 32-bit programs and multitasking and provides its own virtual memory management. The paper describes how, with simple modifications to the batch files, it was possible to run EGS4 on a Pentium under Fortran 90 and Windows 95. This combination of software and hardware is cheaper and faster than running EGS4 on a SUN Sparcstation2. 8 refs., 1 tab.

  3. ATLAS Data Preparation in Run 2

    CERN Document Server

    Laycock, Paul; The ATLAS collaboration

    2016-01-01

    In this presentation, the data preparation workflows for Run 2 are presented. Online data quality uses a new hybrid software release that incorporates the latest offline data quality monitoring software for the online environment. This is used to provide fast feedback in the control room during a data acquisition (DAQ) run, via a histogram-based monitoring framework as well as the online Event Display. Data are sent to several streams for offline processing at the dedicated Tier-0 computing facility, including dedicated calibration streams and an "express" physics stream containing approximately 2% of the main physics stream. This express stream is processed as data arrive, allowing a first look at the offline data quality within hours of the end of a run. A prompt calibration loop starts once an ATLAS DAQ run ends, nominally defining a 48-hour period in which calibrations and alignments can be derived using the dedicated calibration and express streams. The bulk processing of the main physics stream starts on expi...

  4. Monte Carlo Generators for the Production of a $W$ or $Z/\gamma^*$ Boson in Association with Jets at ATLAS in Run 2

    CERN Document Server

    The ATLAS collaboration

    2016-01-01

    This note documents the Monte Carlo generators used by the ATLAS collaboration at the start of Run 2 for processes where a $W$ or $Z/\gamma^*$ boson is produced in association with jets. The available event generators are briefly described, and comparisons are made with ATLAS measurements of $W$ or $Z/\gamma^*$+jets performed with Run 1 data, collected at a centre-of-mass energy of 7 TeV. The model predictions are then compared at the Run 2 centre-of-mass energy of 13 TeV. A comparison is also made with an early Run 2 ATLAS $Z/\gamma^*$+jets data measurement. Investigations into tuning the parameters of the models and evaluating systematic uncertainties on the Monte Carlo predictions are also presented.

  5. Herwig: The Evolution of a Monte Carlo Simulation

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Monte Carlo event generation has seen significant developments in the last 10 years, starting with preparation for the LHC and continuing during the first LHC run. I will discuss the basic ideas behind Monte Carlo event generators and then go on to discuss these developments, focussing on the developments in the Herwig(++) event generator. I will conclude by presenting the current status of event generation together with some results from the forthcoming new version of Herwig, Herwig 7.

  6. Massively parallel Monte Carlo. Experiences running nuclear simulations on a large condor cluster

    International Nuclear Information System (INIS)

    Tickner, James; O'Dwyer, Joel; Roach, Greg; Uher, Josef; Hitchen, Greg

    2010-01-01

    The trivially-parallel nature of Monte Carlo (MC) simulations makes them ideally suited to running on a distributed, heterogeneous computing environment. We report on the setup and operation of a large, cycle-harvesting Condor computer cluster, used to run MC simulations of nuclear instruments ('jobs') on approximately 4,500 desktop PCs. Successful operation must balance the competing goals of maximizing the availability of machines for running jobs whilst minimizing the impact on users' PC performance. This requires classification of jobs according to anticipated run-time and priority, and careful optimization of the parameters used to control job allocation to host machines. To make the most of a large Condor cluster, we have created a powerful suite of tools to handle job submission and analysis, as the manual creation, submission and evaluation of large numbers (hundreds to thousands) of jobs would be too arduous. We describe some of the key aspects of this suite, which has been interfaced to the well-known MCNP and EGSnrc nuclear codes and our in-house PHOTON optical MC code. We report on our practical experiences of operating our Condor cluster and present examples of several large-scale instrument design problems that have been solved using this tool. (author)

  7. ATLAS data preparation in run 2

    CERN Document Server

    The ATLAS collaboration; Chelstowska, Magda Anna; Cuhadar Donszelmann, Tulay; Guenther, Jaroslav; Nairz, Armin Michael; Nicolaidou, Rosy; Shabalina, Elizaveta; Strandberg, Jonas; Taffard, Anyes; Wang, Song-Ming

    2017-01-01

    In this contribution, the data preparation workflows for Run 2 are presented. The challenges posed by the excellent performance and high live time fraction of the LHC are discussed, and the solutions implemented by ATLAS are described. The prompt calibration loop procedures are described and examples are given. Several levels of data quality assessment are used to quickly spot problems in the control room and prevent data loss, and to provide the final selection used for physics analysis. Finally the data quality efficiency for physics analysis is shown.

  8. Using Static Percentiles of AE9/AP9 to Approximate Dynamic Monte Carlo Runs for Radiation Analysis of Spiral Transfer Orbits

    Science.gov (United States)

    Kwan, Betty P.; O'Brien, T. Paul

    2015-06-01

    The Aerospace Corporation performed a study to determine whether static percentiles of AE9/AP9 can be used to approximate dynamic Monte Carlo runs for radiation analysis of spiral transfer orbits. Solar panel degradation is a major concern for solar-electric propulsion because solar-electric propulsion depends on the power output of the solar panel. Different spiral trajectories have different radiation environments that could lead to solar panel degradation. Because the spiral transfer orbits only last weeks to months, an average environment does not adequately address the possible transient enhancements of the radiation environment that must be accounted for in optimizing the transfer orbit trajectory. Therefore, to optimize the trajectory, an ensemble of Monte Carlo simulations of AE9/AP9 would normally be run for every spiral trajectory to determine the 95th percentile radiation environment. To avoid performing lengthy Monte Carlo dynamic simulations for every candidate spiral trajectory in the optimization, we found a static percentile that would be an accurate representation of the full Monte Carlo simulation for a representative set of spiral trajectories. For three LEO-to-GEO trajectories and one LEO-to-MEO trajectory, a static 90th percentile AP9 is a good approximation of the 95th percentile fluence with dynamics for 4-10 MeV protons, and a static 80th percentile AE9 is a good approximation of the 95th percentile fluence with dynamics for 0.5-2 MeV electrons. While the specific percentiles chosen cannot necessarily be used in general for other orbit trade studies, the concept of determining a static percentile as a quick approximation to a full Monte Carlo ensemble of simulations can likely be applied to other orbit trade studies. We expect the static percentile to depend on the region of space traversed, the mission duration, and the radiation effect considered.
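
    A rough numerical illustration of the idea in this record, with purely synthetic fluence data standing in for AE9/AP9 output (the model itself is not used here, and all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for an ensemble of dynamic Monte Carlo runs: each row is
# one scenario's per-step fluence along a spiral trajectory.
n_scenarios, n_steps = 200, 500
per_step = rng.lognormal(mean=0.0, sigma=1.0, size=(n_scenarios, n_steps))
fluence = per_step.cumsum(axis=1)          # accumulated fluence per scenario

# "Dynamic" reference: 95th percentile across the Monte Carlo ensemble.
dynamic_p95 = np.percentile(fluence, 95, axis=0)

# Candidate "static" approximations: a single percentile level of the
# per-step distribution, accumulated the same way.
for q in (80, 85, 90, 95):
    static = np.percentile(per_step, q, axis=0).cumsum()
    ratio = static[-1] / dynamic_p95[-1]
    print(f"static {q}th / dynamic 95th percentile fluence = {ratio:.2f}")
```

    Which static level best matches the dynamic 95th percentile depends on the tail of the per-step distribution, mirroring the record's caution that the chosen percentile is not universal.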

  9. TART 2000: A Coupled Neutron-Photon, 3-D, Combinatorial Geometry, Time Dependent, Monte Carlo Transport Code

    International Nuclear Information System (INIS)

    Cullen, D.E

    2000-01-01

    TART2000 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART2000 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART2000 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART2000 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART2000 and its data files.

  10. TART 2000 A Coupled Neutron-Photon, 3-D, Combinatorial Geometry, Time Dependent, Monte Carlo Transport Code

    CERN Document Server

    Cullen, D

    2000-01-01

    TART2000 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART2000 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART2000 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART2000 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART2000 and its data files.

  11. CMS operations for Run II preparation and commissioning of the offline infrastructure

    CERN Document Server

    Cerminara, Gianluca

    2016-01-01

    The restart of the LHC coincided with an intense activity for the CMS experiment. Both at the beginning of Run II in 2015 and at the restart of operations in 2016, the collaboration was engaged in an extensive re-commissioning of the CMS data-taking operations. After the long stop, the detector was fully aligned and calibrated. Data streams were redesigned to fit the priorities dictated by the physics program for 2015 and 2016. A new reconstruction software (both online and offline) was commissioned with early collisions and further developed during the year. A massive campaign of Monte Carlo production was launched to assist physics analyses. This presentation reviews the main events of this commissioning journey and describes the status of CMS physics performance for 2016.

  12. Automatic fission source convergence criteria for Monte Carlo criticality calculations

    International Nuclear Information System (INIS)

    Shim, Hyung Jin; Kim, Chang Hyo

    2005-01-01

    The Monte Carlo criticality calculations for the multiplication factor and the power distribution in a nuclear system require knowledge of the stationary or fundamental-mode fission source distribution (FSD) in the system. Because it is a priori unknown, so-called inactive cycle Monte Carlo (MC) runs are performed to determine it. The inactive cycle MC runs should be continued until the FSD converges to the stationary FSD. Obviously, if one stops them prematurely, the MC calculation results may have biases because the follow-up active cycles may be run with a non-stationary FSD. Conversely, if one performs the inactive cycle MC runs more than necessary, one is apt to waste computing time because inactive cycle MC runs are used to elicit the fundamental-mode FSD only. In the absence of suitable criteria for terminating the inactive cycle MC runs, one cannot but rely on empiricism in deciding how many inactive cycles one should conduct for a given problem. Depending on the problem, this may introduce biases into Monte Carlo estimates of the parameters one tries to calculate. The purpose of this paper is to present new fission source convergence criteria designed for the automatic termination of inactive cycle MC runs
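
    The record does not spell out its criteria, but a widely used diagnostic of the same kind is the Shannon entropy of the binned fission source, declared converged when it plateaus. A minimal sketch with a synthetic relaxing source (all numbers invented; this is not the authors' proposed criterion):

```python
import numpy as np

def shannon_entropy(counts):
    """Shannon entropy (bits) of a binned fission source distribution."""
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def plateau_reached(entropies, window=20, tol=0.05):
    """Simple plateau test: the last `window` cycle entropies all lie
    within `tol` bits of their mean."""
    if len(entropies) < window:
        return False
    recent = np.asarray(entropies[-window:])
    return bool(np.all(np.abs(recent - recent.mean()) < tol))

# Synthetic demo: a source distribution relaxing toward a stationary one.
rng = np.random.default_rng(0)
target = rng.dirichlet(np.ones(50))
current = rng.dirichlet(np.ones(50))
entropies, cycle = [], 0
while cycle < 500 and not plateau_reached(entropies):
    cycle += 1
    current = 0.9 * current + 0.1 * target    # relaxation toward stationarity
    current /= current.sum()
    counts = rng.multinomial(10_000, current)  # sampled fission sites per cycle
    entropies.append(shannon_entropy(counts))
print(f"inactive cycles could be terminated after cycle {cycle}")
```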

  13. Monte Carlo calculations of electron transport on microcomputers

    International Nuclear Information System (INIS)

    Chung, Manho; Jester, W.A.; Levine, S.H.; Foderaro, A.H.

    1990-01-01

    In the work described in this paper, the Monte Carlo program ZEBRA, developed by Berber and Buxton, was converted to run on the Macintosh computer using Microsoft BASIC to reduce the cost of Monte Carlo calculations on microcomputers. The Eltran2 program was then transferred to an IBM-compatible computer; Turbo BASIC and Microsoft QuickBASIC have been used on the IBM-compatible Tandy 4000SX computer. The paper shows the running speed of the Monte Carlo programs on the different computers, normalized to one for Eltran2 on the Macintosh SE or Macintosh Plus computer; higher values correspond proportionally to faster running times. Since Eltran2 is a one-dimensional program, it calculates the energy deposited in a semi-infinite multilayer slab. Eltran2 has been modified into a two-dimensional program called Eltran3 to compute more accurately the case of a point source, a small detector, and a short source-to-detector distance. The running time of Eltran3 is about twice that of Eltran2 for a similar case

  14. Homogenized group cross sections by Monte Carlo

    International Nuclear Information System (INIS)

    Van Der Marck, S. C.; Kuijper, J. C.; Oppe, J.

    2006-01-01

    Homogenized group cross sections play a large role in making reactor calculations efficient. Because of this significance, many codes exist that can calculate these cross sections based on certain assumptions. However, for application to the High Flux Reactor (HFR) in Petten, the Netherlands, the limitations of such codes imply that the core calculations would become less accurate when using homogenized group cross sections (HGCS). Therefore we developed a method to calculate HGCS based on a Monte Carlo program, for which we chose MCNP. The implementation involves an addition to MCNP and a set of small executables to perform suitable averaging after the MCNP run(s) have completed. Here we briefly describe the details of the method, and we report on two tests we performed to show the accuracy of the method and its implementation. By now, this method is routinely used in preparation of the cycle-to-cycle core calculations for HFR. (authors)
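
    A minimal sketch of the kind of averaging such a post-processing step performs: flux-volume-weighted condensation of fine-group cross sections onto broad groups (toy numbers; this is not the authors' MCNP addition):

```python
import numpy as np

# Illustrative fine-group data for the cells making up one homogenized node:
# track-length flux estimates phi[c, g] and cross sections sigma[c, g] from
# a Monte Carlo run, plus cell volumes V[c].
phi = np.array([[1.0, 0.8, 0.3],
                [0.9, 0.7, 0.2],
                [0.5, 0.4, 0.1]])
sigma = np.array([[0.20, 0.35, 1.10],
                  [0.22, 0.33, 1.05],
                  [0.25, 0.30, 0.95]])
V = np.array([2.0, 1.0, 1.0])

# Map three fine groups onto two broad groups: (0, 1) -> G0 and (2,) -> G1.
broad_groups = [np.array([0, 1]), np.array([2])]

def homogenize(phi, sigma, V, broad_groups):
    """sigma_G = sum_{c, g in G} sigma*phi*V / sum_{c, g in G} phi*V."""
    w = phi * V[:, None]                      # flux-volume weights
    return np.array([(sigma[:, g] * w[:, g]).sum() / w[:, g].sum()
                     for g in broad_groups])

print(homogenize(phi, sigma, V, broad_groups))   # one value per broad group
```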

  15. Parallel Monte Carlo simulations on an ARC-enabled computing grid

    International Nuclear Information System (INIS)

    Nilsen, Jon K; Samset, Bjørn H

    2011-01-01

    Grid computing opens new possibilities for running heavy Monte Carlo simulations of physical systems in parallel. The presentation gives an overview of GaMPI, a system for running an MPI-based random walker simulation on grid resources. Integrating the ARC middleware and the new storage system Chelonia with the Ganga grid job submission and control system, we show that MPI jobs can be run on a world-wide computing grid with good performance and promising scaling properties. Results for relatively communication-heavy Monte Carlo simulations run on multiple heterogeneous, ARC-enabled computing clusters in several countries are presented.

  16. Validation of uncertainty of weighing in the preparation of radionuclide standards by Monte Carlo Method

    International Nuclear Information System (INIS)

    Cacais, F.L.; Delgado, J.U.; Loayza, V.M.

    2016-01-01

    In preparing solutions for the production of radionuclide metrology standards, it is necessary to measure the quantity activity by mass. The gravimetric method by elimination is applied to perform weighings with smaller uncertainties. In this work, the uncertainty calculation approach implemented by Lourenco and Bobin according to the ISO GUM for the method by elimination is validated by the Monte Carlo method. The results obtained by the two uncertainty calculation methods were consistent, indicating that the conditions for applying the ISO GUM in the preparation of radioactive standards were fulfilled. (author)
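
    A minimal sketch of the comparison being validated: for a linear weighing-by-elimination model, the ISO GUM quadrature result should agree with a Monte Carlo (GUM Supplement 1 style) propagation. All numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Weighing by elimination: the dispensed mass is the difference between
# two balance readings (values in grams; u = standard uncertainty).
m_before, u_before = 10.482153, 5e-6
m_after,  u_after  = 10.231847, 5e-6

# ISO GUM: linear model, standard uncertainties combined in quadrature.
m_gum = m_before - m_after
u_gum = np.hypot(u_before, u_after)

# Monte Carlo: propagate the input distributions directly.
n = 1_000_000
samples = rng.normal(m_before, u_before, n) - rng.normal(m_after, u_after, n)
m_mc, u_mc = samples.mean(), samples.std(ddof=1)

print(f"GUM: m = {m_gum:.6f} g, u = {u_gum:.2e} g")
print(f"MC : m = {m_mc:.6f} g, u = {u_mc:.2e} g")
# Agreement between the two uncertainties is the consistency condition
# that validates the GUM approach for this (linear) model.
```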

  17. Research on perturbation based Monte Carlo reactor criticality search

    International Nuclear Information System (INIS)

    Li Zeguang; Wang Kan; Li Yangliu; Deng Jingkang

    2013-01-01

    Criticality search is a very important aspect of reactor physics analysis. Due to the advantages of the Monte Carlo method and the development of computer technologies, Monte Carlo criticality search is becoming more and more necessary and feasible. The traditional Monte Carlo criticality search method suffers from the large number of individual criticality runs required and from the uncertainty and fluctuation of Monte Carlo results. A new Monte Carlo criticality search method based on perturbation calculation is put forward in this paper to overcome the disadvantages of the traditional method. Using only one criticality run to get the initial k_eff and the differential coefficients of the concerned parameter, the polynomial estimator of the k_eff response function is solved to get the critical value of the concerned parameter. The feasibility of this method was tested. The results show that the accuracy and efficiency of the perturbation based criticality search method are quite inspiring and that the method overcomes the disadvantages of the traditional one. (authors)
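
    The final solve step can be illustrated in a few lines: given one run's k_eff and its derivatives with respect to the search parameter, the critical value is a root of a low-order polynomial (the derivative values below are invented for illustration):

```python
import numpy as np

# Suppose one criticality run with perturbation tallies returned, for a
# search parameter p around p0 (e.g. a boron concentration in ppm):
p0 = 1000.0
k0 = 1.02345          # initial k_eff
dk = -8.0e-5          # dk/dp at p0
d2k = 1.5e-8          # d2k/dp2 at p0

# Second-order estimator k(p) ~ k0 + dk*(p - p0) + 0.5*d2k*(p - p0)^2;
# solve k(p) = 1 for the critical parameter value.
coeffs = [0.5 * d2k, dk, k0 - 1.0]            # powers of (p - p0)
roots = np.roots(coeffs)
real = roots[np.isreal(roots)].real
p_crit = p0 + real[np.argmin(np.abs(real))]   # root nearest the start point
print(f"estimated critical parameter: {p_crit:.1f}")
```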

  18. Importance iteration in MORSE Monte Carlo calculations

    International Nuclear Information System (INIS)

    Kloosterman, J.L.; Hoogenboom, J.E.

    1994-01-01

    An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example that shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation

  19. Importance iteration in MORSE Monte Carlo calculations

    International Nuclear Information System (INIS)

    Kloosterman, J.L.; Hoogenboom, J.E.

    1994-02-01

    An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example, which shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation. (orig.)

  20. ATLAS strip detector: Operational Experience and Run1 → Run2 transition

    CERN Document Server

    NAGAI, K; The ATLAS collaboration

    2014-01-01

    The ATLAS SCT operational experience and the detector performance during the Run 1 period of the LHC will be reported. Additionally, the preparation for Run 2 during Long Shutdown 1 will be described.

  1. Neutrino oscillation parameter sampling with MonteCUBES

    Science.gov (United States)

    Blennow, Mattias; Fernandez-Martinez, Enrique

    2010-01-01

    used in GLoBES [1,2]. Solution method: MonteCUBES is written as a plug-in to the GLoBES software [1,2] and provides the necessary methods to perform Markov Chain Monte Carlo sampling of the parameter space. This allows an efficient sampling of the parameter space and has a complexity which does not grow exponentially with the parameter space dimension. The integration of the MonteCUBES package with the GLoBES software makes sure that the experimental definitions already in use by the community can also be used with MonteCUBES, while also lowering the learning threshold for users who already know GLoBES. Additional comments: A Matlab GUI for interpretation of results is included in the distribution. Running time: The typical running time varies depending on the dimensionality of the parameter space, the complexity of the experiment, and how well the parameter space should be sampled. The running time for our simulations [3] with 15 free parameters at a Neutrino Factory with O(10) samples varied from a few hours to tens of hours. References:P. Huber, M. Lindner, W. Winter, Comput. Phys. Comm. 167 (2005) 195, hep-ph/0407333. P. Huber, J. Kopp, M. Lindner, M. Rolinec, W. Winter, Comput. Phys. Comm. 177 (2007) 432, hep-ph/0701187. S. Antusch, M. Blennow, E. Fernandez-Martinez, J. Lopez-Pavon, arXiv:0903.3986 [hep-ph].
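
    The core sampling step that MonteCUBES adds can be illustrated with a generic random-walk Metropolis sampler; this sketch is not the MonteCUBES or GLoBES API, and the toy two-parameter "posterior" is invented:

```python
import numpy as np

rng = np.random.default_rng(7)

def log_posterior(theta):
    """Toy stand-in for -chi^2/2 of an oscillation fit: a correlated
    Gaussian in two parameters."""
    d = theta - np.array([0.5, 1.0])
    cov_inv = np.linalg.inv(np.array([[0.04, 0.01], [0.01, 0.09]]))
    return -0.5 * d @ cov_inv @ d

def metropolis(log_post, theta0, step, n_samples):
    """Random-walk Metropolis: cost grows with chain length rather than
    exponentially with the dimension of the parameter space."""
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = []
    for _ in range(n_samples):
        proposal = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(proposal)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            theta, lp = proposal, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

chain = metropolis(log_posterior, [0.0, 0.0], 0.2, 20_000)
print("posterior mean:", chain[5_000:].mean(axis=0))   # drop burn-in
```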

  2. One-run Monte Carlo calculation of effective delayed neutron fraction and area-ratio reactivity

    Energy Technology Data Exchange (ETDEWEB)

    Zhaopeng Zhong; Talamo, Alberto; Gohar, Yousry, E-mail: zzhong@anl.gov, E-mail: alby@anl.gov, E-mail: gohar@anl.gov [Nuclear Engineering Division, Argonne National Laboratory, IL (United States)

    2011-07-01

    The Monte Carlo code MCNPX has been utilized to calculate the effective delayed neutron fraction and reactivity by using the area-ratio method. The effective delayed neutron fraction β_eff has been calculated with the fission probability method proposed by Meulekamp and van der Marck. MCNPX was used to calculate separately the fission probability of the delayed and the prompt neutrons by using the TALLYX user subroutine of MCNPX. In this way, β_eff was obtained from the one criticality (k-code) calculation without performing an adjoint calculation. The traditional k-ratio method requires two criticality calculations to calculate β_eff, while this approach utilizes only one MCNPX criticality calculation. Therefore, the approach described here is referred to as a one-run method. In subcritical systems driven by a pulsed neutron source, the area-ratio method is used to calculate reactivity (in dollar units) as the ratio between the prompt and delayed areas. These areas represent the integral of the reaction rates induced from the prompt and delayed neutrons during the pulse period. Traditionally, application of the area-ratio method requires two separate fixed source MCNPX simulations: one with delayed neutrons and the other without. The number of source particles in these two simulations must be extremely high in order to obtain accurate results with low statistical errors because the values of the total and prompt areas are very close. Consequently, this approach is time consuming and suffers from the statistical errors of the two simulations. The present paper introduces a more efficient method for estimating the reactivity calculated with the area method by taking advantage of the TALLYX user subroutine of MCNPX. This subroutine has been developed for separately scoring the reaction rates caused by the delayed and the prompt neutrons during a single simulation. Therefore the method is referred to as a one run calculation. These methodologies have

  3. One-run Monte Carlo calculation of effective delayed neutron fraction and area-ratio reactivity

    International Nuclear Information System (INIS)

    Zhaopeng Zhong; Talamo, Alberto; Gohar, Yousry

    2011-01-01

    The Monte Carlo code MCNPX has been utilized to calculate the effective delayed neutron fraction and reactivity by using the area-ratio method. The effective delayed neutron fraction β_eff has been calculated with the fission probability method proposed by Meulekamp and van der Marck. MCNPX was used to calculate separately the fission probability of the delayed and the prompt neutrons by using the TALLYX user subroutine of MCNPX. In this way, β_eff was obtained from the one criticality (k-code) calculation without performing an adjoint calculation. The traditional k-ratio method requires two criticality calculations to calculate β_eff, while this approach utilizes only one MCNPX criticality calculation. Therefore, the approach described here is referred to as a one-run method. In subcritical systems driven by a pulsed neutron source, the area-ratio method is used to calculate reactivity (in dollar units) as the ratio between the prompt and delayed areas. These areas represent the integral of the reaction rates induced from the prompt and delayed neutrons during the pulse period. Traditionally, application of the area-ratio method requires two separate fixed source MCNPX simulations: one with delayed neutrons and the other without. The number of source particles in these two simulations must be extremely high in order to obtain accurate results with low statistical errors because the values of the total and prompt areas are very close. Consequently, this approach is time consuming and suffers from the statistical errors of the two simulations. The present paper introduces a more efficient method for estimating the reactivity calculated with the area method by taking advantage of the TALLYX user subroutine of MCNPX. This subroutine has been developed for separately scoring the reaction rates caused by the delayed and the prompt neutrons during a single simulation. Therefore the method is referred to as a one run calculation. These methodologies have been
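
    The arithmetic behind the one-run β_eff estimate reduces to a ratio of two tallies from the same run. A sketch with invented tally values (not MCNPX output; the simple error formula below also neglects the correlation between tallies that share histories):

```python
import numpy as np

# Illustrative single-run tallies: fission rates induced by delayed
# neutrons and by all neutrons, with relative standard errors.
F_delayed, rel_d = 4.92e-3, 0.010
F_total,   rel_t = 7.04e-1, 0.002

# Fission-probability estimate of the effective delayed neutron fraction.
beta_eff = F_delayed / F_total

# First-order propagation for a ratio, ignoring tally correlation.
u_beta = beta_eff * np.hypot(rel_d, rel_t)
print(f"beta_eff = {beta_eff:.3e} +/- {u_beta:.1e}")   # ~ 700 pcm here
```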

  4. ATLAS Strip Detector: Operational Experience and Run1-> Run2 Transition

    CERN Document Server

    Nagai, Koichi; The ATLAS collaboration

    2014-01-01

    The Large Hadron Collider was operated very successfully during Run 1 and provided many opportunities for physics studies. Consolidation work is currently under way toward operation at $\sqrt{s}=14\,\mathrm{TeV}$ in Run 2. The ATLAS experiment achieved excellent performance in Run 1 operation, delivering remarkable physics results. The SemiConductor Tracker contributed to the precise measurement of the momentum of charged particles. This paper describes the operational experience of the SemiConductor Tracker in Run 1 and the preparation for Run 2 operation during LS1.

  5. Perturbation based Monte Carlo criticality search in density, enrichment and concentration

    International Nuclear Information System (INIS)

    Li, Zeguang; Wang, Kan; Deng, Jingkang

    2015-01-01

    Highlights: • A new perturbation based Monte Carlo criticality search method is proposed. • The method can obtain accurate results with only one individual criticality run. • The method is used to solve density, enrichment and concentration search problems. • Results show the feasibility and good performance of this method. • The relationship between the results’ accuracy and the perturbation order is discussed. - Abstract: Criticality search is a very important aspect of reactor physics analysis. Due to the advantages of the Monte Carlo method and the development of computer technologies, Monte Carlo criticality search is becoming more and more necessary and feasible. Existing Monte Carlo criticality search methods need a large number of individual criticality runs and may give unstable results because of the uncertainties of criticality results. In this paper, a new perturbation based Monte Carlo criticality search method is proposed and discussed. This method needs only one individual criticality calculation with perturbation tallies to estimate the k_eff response function, using the initial k_eff and differential coefficient results, and solves polynomial equations to get the criticality search results. The new perturbation based Monte Carlo criticality search method is implemented in the Monte Carlo code RMC, and criticality searches in density, enrichment and concentration are carried out. Results show that this method is quite inspiring in accuracy and efficiency, and has advantages compared with other criticality search methods

  6. Validation of uncertainty of weighing in the preparation of radionuclide standards by Monte Carlo Method; Validacao da incerteza de pesagens no preparo de padroes de radionuclideos por Metodo de Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Cacais, F.L.; Delgado, J.U., E-mail: facacais@gmail.com [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Loayza, V.M. [Instituto Nacional de Metrologia (INMETRO), Rio de Janeiro, RJ (Brazil). Qualidade e Tecnologia

    2016-07-01

    In preparing solutions for the production of radionuclide metrology standards, it is necessary to measure the quantity activity by mass. The gravimetric method by elimination is applied to perform weighings with smaller uncertainties. In this work, the uncertainty calculation approach implemented by Lourenco and Bobin according to the ISO GUM for the method by elimination is validated by the Monte Carlo method. The results obtained by the two uncertainty calculation methods were consistent, indicating that the conditions for applying the ISO GUM in the preparation of radioactive standards were fulfilled. (author)

  7. Statistical implications in Monte Carlo depletions - 051

    International Nuclear Information System (INIS)

    Zhiwen, Xu; Rhodes, J.; Smith, K.

    2010-01-01

    As a result of steady advances in computer power, continuous-energy Monte Carlo depletion analysis is attracting considerable attention for reactor burnup calculations. The typical Monte Carlo analysis is set up as a combination of a Monte Carlo neutron transport solver and a fuel burnup solver; note that the burnup solver is a deterministic module. The statistical errors in Monte Carlo solutions are introduced into nuclide number densities and propagated along fuel burnup. This paper works toward an understanding of the statistical implications in Monte Carlo depletions, including both statistical bias and statistical variations in depleted fuel number densities. The deterministic Studsvik lattice physics code, CASMO-5, is modified to model the Monte Carlo depletion. The statistical bias in depleted number densities is found to be negligible compared to its statistical variations, which, in turn, demonstrates the correctness of the Monte Carlo depletion method. Meanwhile, the statistical variation in number densities generally increases with burnup. Several possible ways of reducing the statistical errors are discussed: 1) increase the number of individual Monte Carlo histories; 2) increase the number of time steps; 3) run additional independent Monte Carlo depletion cases. Finally, a new Monte Carlo depletion methodology, called the batch depletion method, is proposed, which consists of performing a set of independent Monte Carlo depletions and is thus capable of estimating the overall statistical errors, including both the local statistical error and the propagated statistical error. (authors)
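
    A toy version of the proposed batch depletion method: repeat an independent noisy depletion several times and take the spread across batches as the overall statistical error (the one-nuclide "depletion" below is invented for illustration):

```python
import numpy as np

def run_depletion_case(seed, n_steps=10):
    """One independent toy Monte Carlo depletion case: a single nuclide
    depleted with a noisy one-group reaction rate per time step."""
    r = np.random.default_rng(seed)
    n_density = 1.0
    for _ in range(n_steps):
        rate = 0.05 * (1.0 + 0.02 * r.standard_normal())  # MC noise
        n_density *= np.exp(-rate)                         # depletion step
    return n_density

# The spread over independent cases captures both the local statistical
# error and the error propagated through the burnup steps.
cases = np.array([run_depletion_case(seed) for seed in range(25)])
mean = cases.mean()
sem = cases.std(ddof=1) / np.sqrt(len(cases))
print(f"N/N0 after depletion = {mean:.5f} +/- {sem:.5f}")
```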

  8. Scalable Domain Decomposed Monte Carlo Particle Transport

    Energy Technology Data Exchange (ETDEWEB)

    O' Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  9. Concepts and Plans towards fast large scale Monte Carlo production for the ATLAS Experiment

    Science.gov (United States)

    Ritsch, E.; Atlas Collaboration

    2014-06-01

    The huge success of the physics program of the ATLAS experiment at the Large Hadron Collider (LHC) during Run 1 relies upon a great number of simulated Monte Carlo events. This Monte Carlo production takes the biggest part of the computing resources being in use by ATLAS as of now. In this document we describe the plans to overcome the computing resource limitations for large scale Monte Carlo production in the ATLAS Experiment for Run 2, and beyond. A number of fast detector simulation, digitization and reconstruction techniques are being discussed, based upon a new flexible detector simulation framework. To optimally benefit from these developments, a redesigned ATLAS MC production chain is presented at the end of this document.

  10. Concepts and Plans towards fast large scale Monte Carlo production for the ATLAS Experiment

    CERN Document Server

    Chapman, J; Duehrssen, M; Elsing, M; Froidevaux, D; Harrington, R; Jansky, R; Langenberg, R; Mandrysch, R; Marshall, Z; Ritsch, E; Salzburger, A

    2014-01-01

    The huge success of the physics program of the ATLAS experiment at the Large Hadron Collider (LHC) during run I relies upon a great number of simulated Monte Carlo events. This Monte Carlo production takes the biggest part of the computing resources being in use by ATLAS as of now. In this document we describe the plans to overcome the computing resource limitations for large scale Monte Carlo production in the ATLAS Experiment for run II, and beyond. A number of fast detector simulation, digitization and reconstruction techniques are being discussed, based upon a new flexible detector simulation framework. To optimally benefit from these developments, a redesigned ATLAS MC production chain is presented at the end of this document.

  11. Monte Carlo methods of PageRank computation

    NARCIS (Netherlands)

    Litvak, Nelli

    2004-01-01

    We describe and analyze an on-line Monte Carlo method of PageRank computation. The PageRank is estimated based on the results of a large number of short independent simulation runs initiated from each page that contains outgoing hyperlinks. The method does not require any storage of the hyperlink matrix.
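
    A sketch of this class of methods on a four-page toy graph: run many short walks that terminate with probability 1-c, and normalize visit counts (this illustrates the general idea, not necessarily the paper's exact estimator):

```python
import numpy as np

rng = np.random.default_rng(5)

# A tiny web graph as an adjacency list of outgoing hyperlinks.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n, c, walks_per_page = len(links), 0.85, 2000

visits = np.zeros(n)
for page in links:                        # short runs start from each page
    for _ in range(walks_per_page):
        node = page
        while True:
            visits[node] += 1
            # stop with probability 1-c (or at a dangling node),
            # otherwise follow a random outgoing link
            if not links[node] or rng.random() > c:
                break
            node = int(rng.choice(links[node]))

pagerank = visits / visits.sum()          # normalized visit counts
print(np.round(pagerank, 3))
```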

  12. Alternative implementations of the Monte Carlo power method

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Gelbard, E.M.

    2002-01-01

    We compare nominal efficiencies, i.e. variances in power shapes for equal running time, of different versions of the Monte Carlo eigenvalue computation, as applied to criticality safety analysis calculations. The two main methods considered here are ''conventional'' Monte Carlo and the superhistory method, and both are used in criticality safety codes. Within each of these major methods, different variants are available for the main steps of the basic Monte Carlo algorithm. Thus, for example, different treatments of the fission process may vary in the extent to which they follow, in analog fashion, the details of real-world fission, or may vary in details of the methods by which they choose next-generation source sites. In general the same options are available in both the superhistory method and conventional Monte Carlo, but there seems not to have been much examination of the special properties of the two major methods and their minor variants. We find, first, that the superhistory method is just as efficient as conventional Monte Carlo and, secondly, that use of different variants of the basic algorithms may, in special cases, have a surprisingly large effect on Monte Carlo computational efficiency

  13. Is Monte Carlo embarrassingly parallel?

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

    2012-07-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)

  14. Is Monte Carlo embarrassingly parallel?

    International Nuclear Information System (INIS)

    Hoogenboom, J. E.

    2012-01-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
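
    The rendez-vous described above is easy to see in code: a blocking collective that every process must reach before any can continue. A minimal sketch assuming mpi4py is available (run with e.g. mpiexec -n 8 python script.py; the "transport" step is faked):

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rng = np.random.default_rng(comm.Get_rank())

histories = 10_000 // comm.Get_size()     # histories per process per cycle
k_eff = 1.0
for cycle in range(100):
    # ... transport `histories` local histories, tally fission production ...
    local_production = rng.normal(1.0, 0.01, histories).sum()

    # Rendez-vous: a blocking collective at the end of every cycle. All
    # processes synchronize here; with many processors this step (plus
    # gathering the fission bank) limits the parallel speedup.
    total = comm.allreduce(local_production, op=MPI.SUM)
    k_eff = total / (histories * comm.Get_size())
    # k_eff feeds population control for the next cycle on every process.

if comm.Get_rank() == 0:
    print(f"final cycle k_eff estimate: {k_eff:.5f}")
```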

  15. The Monte Carlo photoionization and moving-mesh radiation hydrodynamics code CMACIONIZE

    Science.gov (United States)

    Vandenbroucke, B.; Wood, K.

    2018-04-01

    We present the public Monte Carlo photoionization and moving-mesh radiation hydrodynamics code CMACIONIZE, which can be used to simulate the self-consistent evolution of HII regions surrounding young O and B stars, or other sources of ionizing radiation. The code combines a Monte Carlo photoionization algorithm that uses a complex mix of hydrogen, helium and several coolants in order to self-consistently solve for the ionization and temperature balance at any given time, with a standard first-order hydrodynamics scheme. The code can be run as a post-processing tool to get the line emission from an existing simulation snapshot, but can also be used to run full radiation hydrodynamical simulations. Both the radiation transfer and the hydrodynamics are implemented in a general way that is independent of the grid structure used to discretize the system, allowing the code to be run both as a standard fixed-grid code and as a moving-mesh code.

  16. Simultaneous Monte Carlo zero-variance estimates of several correlated means

    International Nuclear Information System (INIS)

    Booth, T.E.

    1997-08-01

    Zero variance procedures have been in existence since the dawn of Monte Carlo. Previous works all treat the problem of zero variance solutions for a single tally. One often wants to get low variance solutions to more than one tally. When the sets of random walks needed for two tallies are similar, it is more efficient to do zero variance biasing for both tallies in the same Monte Carlo run, instead of two separate runs. The theory presented here correlates the random walks of particles by the similarity of their tallies. Particles with dissimilar tallies rapidly become uncorrelated whereas particles with similar tallies will stay correlated through most of their random walk. The theory herein should allow practitioners to make efficient use of zero-variance biasing procedures in practical problems

  17. Vectorization of phase space Monte Carlo code in FACOM vector processor VP-200

    International Nuclear Information System (INIS)

    Miura, Kenichi

    1986-01-01

    This paper describes the vectorization techniques for Monte Carlo codes in Fujitsu's Vector Processor System. The phase space Monte Carlo code FOWL is selected as a benchmark, and scalar and vector performances are compared. The vectorized kernel Monte Carlo routine which contains heavily nested IF tests runs up to 7.9 times faster in vector mode than in scalar mode. The overall performance improvement of the vectorized FOWL code over the original scalar code reaches 3.3. The results of this study strongly indicate that supercomputer can be a powerful tool for Monte Carlo simulations in high energy physics. (Auth.)

  18. Mesh-based weight window approach for Monte Carlo simulation

    International Nuclear Information System (INIS)

    Liu, L.; Gardner, R.P.

    1997-01-01

    The Monte Carlo method has been increasingly used to solve particle transport problems. Statistical fluctuation from random sampling is the major limiting factor of its application. To obtain the desired precision, variance reduction techniques are indispensable for most practical problems. Among various variance reduction techniques, the weight window method proves to be one of the most general, powerful, and robust. The method is implemented in the current MCNP code. An importance map is estimated during a regular Monte Carlo run, and then the map is used in the subsequent run for splitting and Russian roulette games. The major drawback of this weight window method is its lack of user-friendliness. It normally requires that users divide the large geometric cells into smaller ones by introducing additional surfaces to ensure an acceptable spatial resolution of the importance map. In this paper, we present a new weight window approach to overcome this drawback
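
    The splitting and Russian roulette games played against a weight window can be sketched in a few lines (a generic illustration, not MCNP's implementation; the window parameters are invented):

```python
import numpy as np

rng = np.random.default_rng(11)

def apply_weight_window(weight, w_low, w_up, w_survival):
    """Return the list of particle weights that continue transport.
    Below the window: Russian roulette with survival weight w_survival;
    above the window: split into equal fragments; inside: unchanged."""
    if weight > w_up:                           # split
        n = int(np.ceil(weight / w_up))
        return [weight / n] * n
    if weight < w_low:                          # Russian roulette
        if rng.random() < weight / w_survival:
            return [w_survival]                 # survives, weight raised
        return []                               # killed
    return [weight]

# Both games preserve the expected total weight:
for w in (0.01, 0.2, 1.0, 12.0):
    print(w, "->", apply_weight_window(w, w_low=0.1, w_up=0.5, w_survival=0.25))
```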

  19. RMCgui: a new interface for the workflow associated with running Reverse Monte Carlo simulations

    International Nuclear Information System (INIS)

    Dove, Martin T; Rigg, Gary

    2013-01-01

    The Reverse Monte Carlo method enables construction and refinement of large atomic models of materials that are tuned to give best agreement with experimental data such as neutron and x-ray total scattering data, capturing both the average structure and fluctuations. The practical drawback with the current implementations of this approach is the relatively complex workflow required, from setting up the configuration and simulation details through to checking the final outputs and analysing the resultant configurations. In order to make this workflow more accessible to users, we have developed an end-to-end workflow wrapped within a graphical user interface—RMCgui—designed to make the Reverse Monte Carlo method more widely accessible. (paper)
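
    Behind the workflow sits a simple acceptance rule: propose a random atomic move, recompute the fit to the data, and accept worsenings only with a Metropolis-like probability on chi-squared. A self-contained 1-D toy (all "data" synthetic; not the RMCgui workflow itself):

```python
import numpy as np

rng = np.random.default_rng(13)

def chi2(model, data, sigma):
    return float(np.sum((model - data) ** 2 / sigma ** 2))

# Toy target: fit the pair-distance histogram of "atoms" on a ring.
n_atoms, box = 64, 100.0
edges = np.linspace(0.0, box / 2, 26)
centers = 0.5 * (edges[:-1] + edges[1:])
data = np.exp(-0.5 * ((centers - 20.0) / 6.0) ** 2)   # synthetic "data"
sigma = np.full_like(data, 0.05)

def histogram(pos):
    d = np.abs(pos[:, None] - pos[None, :])
    d = np.minimum(d, box - d)[np.triu_indices(n_atoms, 1)]  # ring metric
    h, _ = np.histogram(d, bins=edges)
    return h / h.max()

pos = rng.uniform(0.0, box, n_atoms)
old = chi2(histogram(pos), data, sigma)
for _ in range(20_000):
    i = rng.integers(n_atoms)
    trial = pos.copy()
    trial[i] = (trial[i] + rng.normal(0.0, 2.0)) % box    # random move
    new = chi2(histogram(trial), data, sigma)
    # RMC acceptance: keep improvements; accept worsenings with
    # probability exp(-(new - old) / 2).
    if new < old or rng.random() < np.exp(-(new - old) / 2.0):
        pos, old = trial, new
print(f"final chi^2 = {old:.1f}")
```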

  20. Monte Carlo simulations on a 9-node PC cluster

    International Nuclear Information System (INIS)

    Gouriou, J.

    2001-01-01

    Monte Carlo simulation methods are frequently used in the fields of medical physics, dosimetry and metrology of ionising radiation. Nevertheless, the main drawback of this technique is that it is computationally slow, because the statistical uncertainty of the result improves only as the square root of the computational time. We present a method which allows the effective running time to be reduced by a factor of 10 to 20. In practice, the aim was to reduce the calculation time of the LNHB metrological applications from several weeks to a few days. This approach includes the use of a PC cluster under the Linux operating system and the PVM parallel library (version 3.4). The Monte Carlo codes EGS4, MCNP and PENELOPE have been implemented on this platform, and the last two were adapted to run under the PVM environment. The maximum observed speedup ranges from a factor of 13 to 18, depending on the code and the problem to be simulated. (orig.)

  1. Monte Carlo simulation of the microcanonical ensemble

    International Nuclear Information System (INIS)

    Creutz, M.

    1984-01-01

    We consider simulating statistical systems with a random walk on a constant energy surface. This combines features of deterministic molecular dynamics techniques and conventional Monte Carlo simulations. For discrete systems the method can be programmed to run an order of magnitude faster than other approaches. It does not require high quality random numbers and may also be useful for nonequilibrium studies. 10 references
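
    The abstract describes what is now known as the demon algorithm. A minimal sketch for the 2-D Ising model, where a single integer "demon" carries the energy so that system-plus-demon energy is exactly conserved (lattice size and parameters invented):

```python
import numpy as np

rng = np.random.default_rng(17)

L = 32
spins = np.ones((L, L), dtype=np.int8)    # cold start
demon = 16                                # initial demon energy

def flip_cost(s, i, j):
    """Energy change of flipping spin (i, j), periodic boundaries."""
    nn = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
    return 2 * int(s[i, j]) * int(nn)

demon_history = []
for _ in range(200_000):
    i, j = rng.integers(L), rng.integers(L)
    dE = flip_cost(spins, i, j)
    if dE <= demon:                       # demon can pay for the flip
        spins[i, j] *= -1
        demon -= dE                       # total energy exactly conserved
    demon_history.append(demon)

# The demon's energy is Boltzmann distributed; its average measures the
# temperature of the microcanonical ensemble. Only integer arithmetic is
# needed, which is why such updates can run very fast.
print("mean demon energy:", np.mean(demon_history[50_000:]))
```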

  2. Monte Carlo calculation with unquenched Wilson-Fermions

    International Nuclear Information System (INIS)

    Montvay, I.

    1984-01-01

    A Monte Carlo updating procedure taking into account the virtual quark loops is described. It is based on a high-order hopping parameter expansion of the quark determinant for Wilson fermions. In a first test run, Wilson-loop expectation values are measured on a 6^4 lattice at β=5.70, using a 16th-order hopping parameter expansion for the quark determinant. (orig.)

  3. Validation of Monte Carlo event generators in the ATLAS Collaboration for LHC Run 2

    CERN Document Server

    The ATLAS collaboration

    2016-01-01

    This note reviews the main steps followed by the ATLAS Collaboration to validate the properties of particle-level simulated events from Monte Carlo event generators in order to ensure the correctness of all event generator configurations and production samples used in physics analyses. A central validation procedure is adopted which permits the continual validation of the functionality and the performance of the ATLAS event simulation infrastructure. Revisions and updates of the Monte Carlo event generators are also monitored. The methodology behind the validation and the tools developed for that purpose, as well as various use cases, are presented. The strategy has proven to play an essential role in identifying possible problems or unwanted features within a restricted timescale, verifying their origin and pointing to possible bug fixes before full-scale processing is initiated.

  4. Incidence and risk factors of running-related injuries during preparation for a 4-mile recreational running event

    NARCIS (Netherlands)

    Buist, I.; Bredeweg, S. W.; Bessem, B.; van Mechelen, W.; Lemmink, K. A. P. M.; Diercks, R. L.

    Objective: In this study, the incidence and the sex-specific predictors of running-related injury (RRI) among a group of recreational runners training for a 4-mile running event were determined and identified, respectively. Design: Prospective cohort study. Methods: Several potential risk factors were

  5. Lower Three Runs Remediation Safety Preparation Strategy - 13318

    International Nuclear Information System (INIS)

    Mackay, Alexander; Fryar, Scotty; Doane, Alan

    2013-01-01

    The Savannah River Site (SRS) is a 310-square-mile United States Department of Energy (US DOE) nuclear facility located along the Savannah River near Aiken, South Carolina, that contains six primary stream/river systems. Lower Three Runs Stream (LTR) is one of the primary streams within the site and is located in the southeast portion of the Savannah River Site. It is a large blackwater stream system that originates in the northeast portion of SRS and follows a southerly direction before it enters the Savannah River. During reactor operations, secondary reactor cooling water, storm sewer discharges, and miscellaneous wastewater were discharged into and contaminated a 20-mile stretch of Lower Three Runs Stream that narrows and provides a limited buffer of US DOE property along the stream and flood-plain. Based on data collected during 2009 and 2010 under American Recovery and Reinvestment Act funding, the stream was determined to be contaminated with cesium-137 at levels that exceeded acceptable risk-based limits. In agreement with the Environmental Protection Agency and the South Carolina Department of Health and Environmental Control, three areas were identified for remediation [1] (SRNS April 2012). A comprehensive safety preparation strategy was developed for safe execution of the LTR remediation project. Contract incentives for safety encouraged the contractor to perform a complete evaluation of the work and to develop an implementation plan for performing it. Safety coverage was controlled to ensure that all work was observed and assessed by one person per work area within the project; this was necessary due to the distances, approximately 20 miles, between the fence work and the three transects being worked. Contractor management field observations were performed along with DOE assessments to ensure contractor focus on safe performance of the work. Dedicated ambulance coverage was provided for remote work activities. This effort was augmented with

  6. Lower Three Runs Remediation Safety Preparation Strategy - 13318

    Energy Technology Data Exchange (ETDEWEB)

    Mackay, Alexander; Fryar, Scotty; Doane, Alan [United States Department of Energy, Building 730-B, Aiken, SC 29808 (United States)

    2013-07-01

    The Savannah River Site (SRS) is a 310-square-mile United States Department of Energy (US DOE) nuclear facility located along the Savannah River near Aiken, South Carolina, that contains six primary stream/river systems. Lower Three Runs Stream (LTR) is one of the primary streams within the site and is located in the southeast portion of the Savannah River Site. It is a large blackwater stream system that originates in the northeast portion of SRS and follows a southerly direction before it enters the Savannah River. During reactor operations, secondary reactor cooling water, storm sewer discharges, and miscellaneous wastewater were discharged into and contaminated a 20-mile stretch of Lower Three Runs Stream that narrows and provides a limited buffer of US DOE property along the stream and flood-plain. Based on data collected during 2009 and 2010 under American Recovery and Reinvestment Act funding, the stream was determined to be contaminated with cesium-137 at levels that exceeded acceptable risk-based limits. In agreement with the Environmental Protection Agency and the South Carolina Department of Health and Environmental Control, three areas were identified for remediation [1] (SRNS April 2012). A comprehensive safety preparation strategy was developed for safe execution of the LTR remediation project. Contract incentives for safety encouraged the contractor to perform a complete evaluation of the work and to develop an implementation plan for performing it. Safety coverage was controlled to ensure that all work was observed and assessed by one person per work area within the project; this was necessary due to the distances, approximately 20 miles, between the fence work and the three transects being worked. Contractor management field observations were performed along with DOE assessments to ensure contractor focus on safe performance of the work. Dedicated ambulance coverage was provided for remote work activities. This effort was augmented with

  7. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution.

    Science.gov (United States)

    Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr

    2012-01-01

    Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap based algorithm was used to simulate the probability distribution of the efficiency gain estimates and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
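
    A condensed version of the bootstrap procedure described, on synthetic per-history scores with a heavy-tailed "correlated sampling" tally (the figures of merit and distributions are invented; computing-time factors are omitted):

```python
import numpy as np

rng = np.random.default_rng(19)

n = 50_000
scores_corr = rng.pareto(3.0, n)              # heavy-tailed, non-normal
scores_conv = rng.normal(1.0, 1.0, n) ** 2    # conventional-run tally

def efficiency(scores):
    """~ 1/(relative variance); computing-time factors omitted."""
    return scores.mean() ** 2 / scores.var(ddof=1)

gain = efficiency(scores_corr) / efficiency(scores_conv)

# Bootstrap the sampling distribution of the gain and report the
# shortest 95% interval.
boot = np.empty(2_000)
for b in range(boot.size):
    boot[b] = (efficiency(scores_corr[rng.integers(0, n, n)])
               / efficiency(scores_conv[rng.integers(0, n, n)]))
boot.sort()
k = int(0.95 * boot.size)
widths = boot[k:] - boot[:boot.size - k]
i = int(np.argmin(widths))
print(f"gain = {gain:.2f}, shortest 95% CI = [{boot[i]:.2f}, {boot[i + k]:.2f}]")
```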

  8. ATLAS simulation of boson plus jets processes in Run 2

    CERN Document Server

    The ATLAS collaboration

    2017-01-01

    This note describes the ATLAS simulation setup used to model the production of single electroweak bosons ($W$, $Z/\gamma^\ast$ and prompt $\gamma$) in association with jets in proton--proton collisions at centre-of-mass energies of 8 and 13 TeV. Several Monte Carlo generator predictions are compared in regions of phase space relevant for data analyses during LHC Run-2, or compared to unfolded data distributions measured in previous Run-1 or early Run-2 ATLAS analyses. Comparisons are made for regions of phase space with or without additional requirements on the heavy-flavour content of the accompanying jets, as well as for electroweak $Vjj$ production processes. Higher-order corrections and systematic uncertainties are also discussed.

  9. The MCLIB library: Monte Carlo simulation of neutron scattering instruments

    Energy Technology Data Exchange (ETDEWEB)

    Seeger, P.A.

    1995-09-01

    Monte Carlo is a method to integrate over a large number of variables. Random numbers are used to select a value for each variable, and the integrand is evaluated. The process is repeated a large number of times and the resulting values are averaged. For a neutron transport problem, one first selects a neutron from the source distribution, projects it through the instrument using either deterministic or probabilistic algorithms to describe its interaction whenever it hits something, and then (if it hits the detector) tallies it in a histogram representing where and when it was detected. This is intended to simulate the process of running an actual experiment (but it is much slower). This report describes the philosophy and structure of MCLIB, a Fortran library of Monte Carlo subroutines which has been developed for the design of neutron scattering instruments. A pair of programs (LQDGEOM and MC_RUN) which use the library are shown as an example.
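
    As a minimal illustration of the sampling-and-averaging idea described above (not part of MCLIB, which is Fortran), the sketch below integrates a function over a box by drawing a random value for each variable and averaging the integrand:

    import random

    def mc_integrate(f, bounds, n=100_000):
        # Estimate the integral of f over a box given as [(lo, hi), ...].
        volume = 1.0
        for lo, hi in bounds:
            volume *= hi - lo
        total = sum(f([random.uniform(lo, hi) for lo, hi in bounds])
                    for _ in range(n))
        return volume * total / n

    # Example: integrate x*y over the unit square (exact value is 0.25).
    print(mc_integrate(lambda v: v[0] * v[1], [(0.0, 1.0), (0.0, 1.0)]))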

  10. The MCLIB library: Monte Carlo simulation of neutron scattering instruments

    International Nuclear Information System (INIS)

    Seeger, P.A.

    1995-01-01

    Monte Carlo is a method to integrate over a large number of variables. Random numbers are used to select a value for each variable, and the integrand is evaluated. The process is repeated a large number of times and the resulting values are averaged. For a neutron transport problem, one first selects a neutron from the source distribution, projects it through the instrument using either deterministic or probabilistic algorithms to describe its interaction whenever it hits something, and then (if it hits the detector) tallies it in a histogram representing where and when it was detected. This is intended to simulate the process of running an actual experiment (but it is much slower). This report describes the philosophy and structure of MCLIB, a Fortran library of Monte Carlo subroutines which has been developed for the design of neutron scattering instruments. A pair of programs (LQDGEOM and MC_RUN) which use the library are shown as an example.

  11. Computer system for Monte Carlo experimentation

    International Nuclear Information System (INIS)

    Grier, D.A.

    1986-01-01

    A new computer system for Monte Carlo experimentation is presented. The new system speeds and simplifies the process of coding and preparing a Monte Carlo experiment; it also encourages the proper design of Monte Carlo experiments and the careful analysis of the experimental results. A new functional language is the core of this system. Monte Carlo experiments, and their experimental designs, are programmed in this new language; those programs are compiled into Fortran output. The Fortran output is then compiled and executed. The experimental results are analyzed with a standard statistics package such as S, ISP, or Minitab, or with a user-supplied program. Both the experimental results and the experimental design may be directly loaded into the workspace of those packages. The new functional language frees programmers from many of the details of programming an experiment. Experimental designs such as factorial, fractional factorial, or Latin square are easily described by the control structures and expressions of the language. Specific mathematical models are generated by the routines of the language.

  12. Application of inactive cycle stopping criteria for Monte Carlo Wielandt calculations

    International Nuclear Information System (INIS)

    Shim, H. J.; Kim, C. H.

    2009-01-01

    The Wielandt method is incorporated into Monte Carlo (MC) eigenvalue calculations as a way to speed up fission source convergence. To make the most of the MC Wielandt method, however, it is highly desirable to halt the inactive cycle runs in a timely manner, because executing a single MC cycle takes much longer than in conventional MC eigenvalue calculations. This paper presents an algorithm to detect the onset of the active cycles, and thereby to stop the inactive cycle MC runs automatically, based on two anterior stopping criteria. The effectiveness of the algorithm is demonstrated by applying it to a slow convergence problem. (authors)
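
    The paper's two stopping criteria are not reproduced in this abstract; as a hedged sketch of the general idea, the snippet below uses one widely used convergence diagnostic, declaring the source converged once the Shannon entropy of the binned fission-source sites stabilizes. The bin count, tolerance, and toy source model are all illustrative assumptions.

    import numpy as np

    def shannon_entropy(sites, n_bins=32, span=(-20.0, 20.0)):
        # Entropy of fission-source sites binned along one axis.
        counts, _ = np.histogram(sites, bins=n_bins, range=span)
        p = counts[counts > 0] / counts.sum()
        return -(p * np.log2(p)).sum()

    def converged(entropies, window=10, tol=0.05):
        # Stop once the entropy varies by < tol of its mean over the window.
        if len(entropies) < window:
            return False
        recent = np.asarray(entropies[-window:])
        return np.ptp(recent) < tol * recent.mean()

    # Toy demonstration: a source distribution tightening toward a plateau.
    rng = np.random.default_rng(1)
    history = []
    for cycle in range(200):
        width = 1.0 + 4.0 * np.exp(-cycle / 20.0)   # converging source
        sites = rng.normal(loc=0.0, scale=width, size=10_000)
        history.append(shannon_entropy(sites))
        if converged(history):
            print(f"inactive cycles could stop at cycle {cycle}")
            break
    else:
        print("no plateau detected within 200 cycles")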

  13. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    International Nuclear Information System (INIS)

    Brown, Forrest B.; Univ. of New Mexico, Albuquerque, NM

    2016-01-01

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; Doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate-level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations.

  14. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications Group; Univ. of New Mexico, Albuquerque, NM (United States). Nuclear Engineering Dept.

    2016-11-29

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; Doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate-level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations.

  15. ATLAS detector performance in Run1: Calorimeters

    CERN Document Server

    Burghgrave, B; The ATLAS collaboration

    2014-01-01

    ATLAS operated with excellent efficiency during the Run 1 data-taking period, recording integrated luminosities of 5.3 fb-1 at √s = 7 TeV in 2011 and 21.6 fb-1 at √s = 8 TeV in 2012. The Liquid Argon and Tile Calorimeters contributed to this effort by operating with good data-quality efficiency, which improved over the course of Run 1. This poster presents the overall Run 1 status and performance, the LS1 work, and the preparations for Run 2.

  16. Electron transport in radiotherapy using local-to-global Monte Carlo

    International Nuclear Information System (INIS)

    Svatos, M.M.; Chandler, W.P.; Siantar, C.L.H.; Rathkopf, J.A.; Ballinger, C.T.

    1994-09-01

    Local-to-Global (L-G) Monte Carlo methods are a way to make three-dimensional electron transport both fast and accurate relative to other Monte Carlo methods. This is achieved by breaking the simulation into two stages: a local calculation done over small geometries having the size and shape of the "steps" to be taken through the mesh; and a global calculation which relies on a stepping code that samples the stored results of the local calculation. The increase in speed results from taking fewer steps in the global calculation than required by ordinary Monte Carlo codes and by speeding up the calculation per step. The potential for accuracy comes from the ability to use long runs of detailed codes to compile probability distribution functions (PDFs) in the local calculation. Specific examples of successful Local-to-Global algorithms are given.

  17. Implementation of a Monte Carlo simulation environment for fully 3D PET on a high-performance parallel platform

    CERN Document Server

    Zaidi, H; Morel, Christian

    1998-01-01

    This paper describes the implementation of the Eidolon Monte Carlo program designed to simulate fully three-dimensional (3D) cylindrical positron tomographs on a MIMD parallel architecture. The original code was written in Objective-C and developed under the NeXTSTEP development environment. The steps involved in porting the software to a parallel architecture based on PowerPC 604 processors running under AIX 4.1 are presented. Basic aspects and strategies of running Monte Carlo calculations on parallel computers are described. The computing time decreased linearly with the number of computing nodes. The improved time performance resulting from parallelisation of the Monte Carlo calculations makes it an attractive tool for modelling photon transport in 3D positron tomography. The parallelisation paradigm used in this work is independent of the chosen parallel architecture.
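
    Monte Carlo transport parallelises naturally because histories are independent; as a generic illustration (not the Eidolon code, which is Objective-C), the sketch below farms independent random streams out to worker processes and combines their tallies, with a pi estimate standing in for photon transport:

    import numpy as np
    from multiprocessing import Pool

    def run_histories(args):
        # Score a batch of histories with an independent random stream.
        seed, n = args
        rng = np.random.default_rng(seed)
        x, y = rng.random(n), rng.random(n)
        return np.count_nonzero(x * x + y * y < 1.0)

    if __name__ == "__main__":
        n_workers, n_per_worker = 4, 1_000_000
        jobs = [(seed, n_per_worker) for seed in range(n_workers)]
        with Pool(n_workers) as pool:
            hits = sum(pool.map(run_histories, jobs))
        print("pi ~", 4.0 * hits / (n_workers * n_per_worker))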

  18. LHCb computing in Run II and its evolution towards Run III

    CERN Document Server

    Falabella, Antonio

    2016-01-01

    This contribution reports on the experience of the LHCb computing team during LHC Run 2 and its preparation for Run 3. A brief introduction is also given to LHCbDIRAC, the tool used to interface to the experiment's distributed computing resources for its data processing and data management operations. Run 2, which started in 2015, has already seen several changes in the data processing workflows of the experiment, most notably the ability to align and calibrate the detector between two different stages of the data processing in the high level trigger farm, eliminating the need for a second-pass processing of the data offline. In addition, a fraction of the data is immediately reconstructed to its final physics format in the high level trigger, and only this format is exported from the experiment site for physics analysis. This concept has successfully been tested and will continue to be used for the rest of Run 2. Furthermore, the distributed data processing has been improved with new concepts and techn...

  19. Similar Running Economy With Different Running Patterns Along the Aerial-Terrestrial Continuum.

    Science.gov (United States)

    Lussiana, Thibault; Gindre, Cyrille; Hébert-Losier, Kim; Sagawa, Yoshimasa; Gimenez, Philippe; Mourot, Laurent

    2017-04-01

    No unique or ideal running pattern is the most economical for all runners. Classifying the global running patterns of individuals into 2 categories (aerial and terrestrial) using the Volodalen method could permit a better understanding of the relationship between running economy (RE) and biomechanics. The main purpose was to compare the RE of aerial and terrestrial runners. Two coaches classified 58 runners into aerial (n = 29) or terrestrial (n = 29) running patterns on the basis of visual observations. RE, muscle activity, kinematics, and spatiotemporal parameters of both groups were measured during a 5-min run at 12 km/h on a treadmill. Maximal oxygen uptake (V̇O2max) and peak treadmill speed (PTS) were assessed during an incremental running test. No differences were observed between aerial and terrestrial patterns for RE, V̇O2max, and PTS. However, at 12 km/h, aerial runners exhibited earlier gastrocnemius lateralis activation in preparation for contact, less dorsiflexion at ground contact, higher coactivation indexes, and greater leg stiffness during stance phase than terrestrial runners. Terrestrial runners had more pronounced semitendinosus activation at the start and end of the running cycle, shorter flight time, greater leg compression, and a more rear-foot strike. Different running patterns were associated with similar RE. Aerial runners appear to rely more on elastic energy utilization with a rapid eccentric-concentric coupling time, whereas terrestrial runners appear to propel the body more forward rather than upward to limit work against gravity. Excluding runners with a mixed running pattern from analyses did not affect study interpretation.

  20. Comparison of ONETRAN calculations of electron beam dose profiles with Monte Carlo and experiment

    International Nuclear Information System (INIS)

    Garth, J.C.; Woolf, S.

    1987-01-01

    Electron beam dose profiles have been calculated using a multigroup, discrete ordinates solution of the Spencer-Lewis electron transport equation. This was accomplished by introducing electron transport cross-sections into the ONETRAN code in a simple manner. The authors' purpose is to "benchmark" this electron transport model and to demonstrate its accuracy and capabilities over the energy range from 30 keV to 20 MeV. Many of their results are compared with the extensive measurements and TIGER Monte Carlo data. In general the ONETRAN results are smoother, agree with TIGER within the statistical error of the Monte Carlo histograms, and require about one tenth the running time of Monte Carlo.

  1. Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments

    International Nuclear Information System (INIS)

    Pevey, Ronald E.

    2005-01-01

    Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes--the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL
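
    As a hedged sketch of the trade-off discussed above, the snippet below uses a deliberately simplified acceptance model, assumed here for illustration only: a true k-eff of exactly 1.0, a normally distributed estimate, and acceptance when the calculated k-eff plus n standard deviations falls below the USL. It shows only the false-acceptance side of the problem, not the paper's full optimization.

    from statistics import NormalDist

    def p_accept_supercritical(sigma, usl=0.95, bias=0.0, n_sigmas=2.0):
        # P(k_calc + n*sigma < USL) when the true k-eff is 1.0.
        if sigma == 0.0:
            return 0.0  # a zero-variance estimate of 1.0 can never pass
        dist = NormalDist(mu=1.0 + bias, sigma=sigma)
        return dist.cdf(usl - n_sigmas * sigma)

    for sigma in (0.0005, 0.001, 0.005, 0.01, 0.02):
        print(f"sigma = {sigma:.4f}  "
              f"P(false accept) = {p_accept_supercritical(sigma):.3e}")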

  2. Usage of burnt fuel isotopic compositions from engineering codes in Monte-Carlo code calculations

    International Nuclear Information System (INIS)

    Aleshin, Sergey S.; Gorodkov, Sergey S.; Shcherenko, Anna I.

    2015-01-01

    Burn-up calculations of VVER cores with a Monte Carlo code are a complex process and require large computational costs. This fact makes the use of Monte Carlo codes complicated for design and operating calculations. Previously prepared isotopic compositions are proposed for use in Monte Carlo (MCU) calculations of different states of a VVER core with burnt fuel. The isotopic compositions are calculated by an approximation method based on the use of a spectral functional and of reference isotopic compositions calculated by engineering codes (TVS-M, PERMAK-A). The multiplication factors and power distributions of fuel assemblies (FAs) and of a VVER core of infinite height are calculated in this work by the Monte Carlo code MCU using the previously prepared isotopic compositions. The MCU results were compared with the data obtained by the engineering codes.

  3. Monte Carlo codes and Monte Carlo simulator program

    International Nuclear Information System (INIS)

    Higuchi, Kenji; Asai, Kiyoshi; Suganuma, Masayuki.

    1990-03-01

    Four typical Monte Carlo codes, KENO-IV, MORSE, MCNP and VIM, have been vectorized on the VP-100 at the Computing Center, JAERI. The problems in vector processing of Monte Carlo codes on vector processors have become clear through this work. As a result, it is recognized that there are difficulties in obtaining good performance in vector processing of Monte Carlo codes. A Monte Carlo computing machine, which processes Monte Carlo codes with high performance, has been under development at our Computing Center since 1987. The concept of the Monte Carlo computing machine and its performance have been investigated and estimated by using a software simulator. In this report the problems in vectorization of Monte Carlo codes, the Monte Carlo pipelines proposed to mitigate these difficulties, and the results of the performance estimation of the Monte Carlo computing machine by the simulator are described. (author)

  4. Cost effective distributed computing for Monte Carlo radiation dosimetry

    International Nuclear Information System (INIS)

    Wise, K.N.; Webb, D.V.

    2000-01-01

    Full text: An inexpensive computing facility has been established for performing repetitive Monte Carlo simulations with the BEAM and EGS4/EGSnrc codes of linear accelerator beams, for calculating effective dose from diagnostic imaging procedures, and of ion chambers and phantoms used for the Australian high energy absorbed dose standards. The facility currently consists of 3 dual-processor 450 MHz PCs linked by a high speed LAN. The 3 PCs can be accessed either locally from a single keyboard/monitor/mouse combination using a SwitchView controller, or remotely via a computer network from PCs with suitable communications software (e.g. Telnet, Kermit, etc.). All 3 PCs are identically configured with the Red Hat Linux 6.0 operating system. A Fortran compiler and the BEAM and EGS4/EGSnrc codes are available on the 3 PCs. The preparation of sequences of jobs utilising the Monte Carlo codes is simplified using load-distributing software (enFuzion 6.0, marketed by TurboLinux Inc, formerly Cluster from Active Tools) which efficiently distributes the computing load amongst all 6 processors. We describe 3 applications of the system: (a) energy spectra from radiotherapy sources, (b) mean mass-energy absorption coefficients and stopping powers for absolute absorbed dose standards, and (c) dosimetry for diagnostic procedures; (a) and (b) are based on the transport codes BEAM and FLURZnrc while (c) is a Fortran/EGS code developed at ARPANSA. Efficiency gains ranged from 3 for (c) to close to the theoretical maximum of 6 for (a) and (b), with the gain depending on the amount of 'bookkeeping' needed to begin each task and the time taken to complete a single task. We have found the use of a load-balancing batch processing system with many PCs to be an economical way of achieving greater productivity for Monte Carlo calculations, or for any computer-intensive task requiring many runs with different parameters. Copyright (2000) Australasian College of Physical Scientists and

  5. Monte Carlo simulation of neutron scattering instruments

    International Nuclear Information System (INIS)

    Seeger, P.A.; Daemen, L.L.; Hjelm, R.P. Jr.

    1998-01-01

    A code package consisting of the Monte Carlo Library MCLIB, the executing code MC RUN, the web application MC Web, and various ancillary codes is proposed as an open standard for simulation of neutron scattering instruments. The architecture of the package includes structures to define surfaces, regions, and optical elements contained in regions. A particle is defined by its vector position and velocity, its time of flight, its mass and charge, and a polarization vector. The MC RUN code handles neutron transport and bookkeeping, while the action on the neutron within any region is computed using algorithms that may be deterministic, probabilistic, or a combination. Complete versatility is possible because the existing library may be supplemented by any procedures a user is able to code. Some examples are shown
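
    As a rough sketch of the particle state and region dispatch described above, the snippet below mirrors the listed attributes (vector position and velocity, time of flight, mass, charge, polarization) in Python; the class names and the drift-only region are illustrative, not the MCLIB or MC RUN interfaces.

    from dataclasses import dataclass, field
    import numpy as np

    @dataclass
    class Particle:
        position: np.ndarray                  # cm
        velocity: np.ndarray                  # cm/s
        tof: float = 0.0                      # time of flight, s
        mass: float = 1.675e-24               # g (neutron by default)
        charge: float = 0.0
        polarization: np.ndarray = field(default_factory=lambda: np.zeros(3))

    class DriftRegion:
        # A region whose action on the particle is deterministic free flight.
        def __init__(self, length):
            self.length = length

        def act(self, p: Particle):
            dt = self.length / np.linalg.norm(p.velocity)
            p.position = p.position + p.velocity * dt
            p.tof += dt

    n = Particle(position=np.zeros(3), velocity=np.array([0.0, 0.0, 2.2e5]))
    DriftRegion(length=100.0).act(n)          # 1 m drift of a thermal neutron
    print(n.position, n.tof)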

  6. LHCb : First years of running for the LHCb calorimeter system and preparation for run 2

    CERN Multimedia

    Chefdeville, Maximilien

    2015-01-01

    The LHCb experiment is dedicated to precision measurements of CP violation and rare decays of B hadrons at the Large Hadron Collider (LHC) at CERN (Geneva). It comprises a calorimeter system composed of four subdetectors: a Scintillating Pad Detector (SPD) and a Pre-Shower detector (PS) in front of an electromagnetic calorimeter (ECAL), which is followed by a hadron calorimeter (HCAL). They are used to select transverse-energy hadron, electron and photon candidates for the first trigger level, and they provide the identification of electrons, photons and hadrons as well as the measurement of their energies and positions. The calorimeter was pre-calibrated before its installation in the pit. The calibration techniques were tested with data taken in 2010 and used regularly during Run 1. For Run 2, new calibration methods have been devised to follow and correct the calorimeter response online. The design and construction characteristics of the LHCb calorimeter will be recalled. Strategies for...

  7. PDF4LHC recommendations for LHC Run II

    CERN Document Server

    Butterworth, Jon; Cooper-Sarkar, Amanda; De Roeck, Albert; Feltesse, Joel; Forte, Stefano; Gao, Jun; Glazov, Sasha; Huston, Joey; Kassabov, Zahari; McNulty, Ronan; Morsch, Andreas; Nadolsky, Pavel; Radescu, Voica; Rojo, Juan; Thorne, Robert

    2016-01-01

    We provide an updated recommendation for the usage of sets of parton distribution functions (PDFs) and the assessment of PDF and PDF+$\\alpha_s$ uncertainties suitable for applications at the LHC Run II. We review developments since the previous PDF4LHC recommendation, and discuss and compare the new generation of PDFs, which include substantial information from experimental data from the Run I of the LHC. We then propose a new prescription for the combination of a suitable subset of the available PDF sets, which is presented in terms of a single combined PDF set. We finally discuss tools which allow for the delivery of this combined set in terms of optimized sets of Hessian eigenvectors or Monte Carlo replicas, and their usage, and provide some examples of their application to LHC phenomenology.

  8. Reconstruction, Energy Calibration, and Identification of Hadronically Decaying Tau Leptons in the ATLAS Experiment for Run-2 of the LHC

    CERN Document Server

    The ATLAS collaboration

    2015-01-01

    The reconstruction algorithm, energy calibration, and identification methods for hadronically decaying tau leptons in ATLAS used at the start of Run-2 of the Large Hadron Collider are described in this note. All algorithms have been optimised for Run-2 conditions. The energy calibration relies on Monte Carlo samples with hadronic tau lepton decays, and applies multiplicative factors based on the pT of the reconstructed tau lepton to the energy measurements in the calorimeters. The identification employs boosted decision trees. Systematic uncertainties on the energy scale, reconstruction efficiency and identification efficiency of hadronically decaying tau leptons are determined using Monte Carlo samples that simulate varying conditions.

  9. Neutrino astronomy at Mont Blanc: from LSD to LSD-2

    International Nuclear Information System (INIS)

    Saavedra, O.; Aglietta, M.; Badino, G.

    1988-01-01

    In this paper we present the upgrading of the LSD experiment, presently running in the Mont Blanc Laboratory. The data recorded during the period when supernova 1987A exploded are analysed in detail. The research program of LSD-2, the same experiment as LSD but with a higher sensitivity for searching for neutrino bursts from collapsing stars, is also discussed.

  10. Spent Fuel Drying System Test Results (Dry-Run in Preparation for Run 8)

    International Nuclear Information System (INIS)

    Oliver, B.M.; Klinger, G.S.; Abrefah, J.; Marschman, S.C.; MacFarlan, P.J.; Ritter, G.A.

    1999-01-01

    The water-filled K-Basins in the Hanford 100 Area have been used to store N-Reactor spent nuclear fuel (SNF) since the 1970s. Because some leaks in the basin have been detected and some of the fuel is breached due to handling damage and corrosion, efforts are underway to remove the fuel elements from wet storage. An Integrated Process Strategy (IPS) has been developed to package, dry, transport, and store these metallic uranium fuel elements in an interim storage facility on the Hanford Site (WHC 1995). Information required to support the development of the drying processes, and the required safety analyses, is being obtained from characterization tests conducted on fuel elements removed from the K-Basins. A series of whole element drying tests (reported in separate documents, see Section 7.0) has been conducted by Pacific Northwest National Laboratory (PNNL) (a) on several intact and damaged fuel elements recovered from both the K-East and K-West Basins. This report documents the results of a test "dry run" conducted prior to the eighth and last of those tests, which was conducted on an N-Reactor outer fuel element removed from K-West canister 6513U. The system used for the dry-run test was the Whole Element Furnace Testing System, described in Section 2.0, located in the Postirradiation Testing Laboratory (PTL, 327 Building). The test conditions and methodologies are given in Section 3.0. The experimental results are provided in Section 4.0 and discussed in Section 5.0.

  11. MC21 Monte Carlo analysis of the Hoogenboom-Martin full-core PWR benchmark problem - 301

    International Nuclear Information System (INIS)

    Kelly, D.J.; Sutton, Th.M.; Trumbull, T.H.; Dobreff, P.S.

    2010-01-01

    At the 2009 American Nuclear Society Mathematics and Computation conference, Hoogenboom and Martin proposed a full-core PWR model to monitor the improvement of Monte Carlo codes in computing detailed power density distributions. This paper describes the application of the MC21 Monte Carlo code to the analysis of this benchmark model. With the MC21 code, we obtained detailed power distributions over the entire core. The model consisted of 214 assemblies, each made up of a 17x17 array of pins. Each pin was subdivided into 100 axial nodes, thus resulting in over seven million tally regions. Various cases were run to assess the statistical convergence of the model. This included runs of 10 billion and 40 billion neutron histories, as well as ten independent runs of 4 billion neutron histories each. The 40 billion neutron-history calculation resulted in 43% of all regions having a 95% confidence interval of 2% or less, implying a relative standard deviation of 1%. Furthermore, 99.7% of regions having a relative power density of 1.0 or greater have a similar confidence level. We present timing results that assess the MC21 performance relative to the number of tallies requested. Source convergence was monitored by analyzing plots of the Shannon entropy and eigenvalue versus active cycle. We also obtained an estimate of the dominance ratio. Additionally, we performed an analysis of the error in an attempt to ascertain the validity of the confidence intervals predicted by MC21. Finally, we look forward to the prospect of full-core 3-D Monte Carlo depletion by scoping out the required problem size. This study provides an initial data point for the Hoogenboom-Martin benchmark model using a state-of-the-art Monte Carlo code. (authors)

  12. Monte Carlo and Quasi-Monte Carlo Sampling

    CERN Document Server

    Lemieux, Christiane

    2009-01-01

    Presents essential tools for using quasi-Monte Carlo sampling in practice. The book focuses on issues related to Monte Carlo methods - uniform and non-uniform random number generation and variance reduction techniques - and covers several aspects of quasi-Monte Carlo methods.
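
    As a small illustration of the difference between the two sampling families the book treats, the sketch below compares plain pseudo-random points with a scrambled Sobol low-discrepancy sequence on a smooth test integral; it assumes SciPy's scipy.stats.qmc module (SciPy 1.7 or later) is available.

    import numpy as np
    from scipy.stats import qmc

    def integrand(x):
        return np.prod(x, axis=1)        # integral over [0, 1]^2 is 0.25

    n, dim = 1024, 2                     # a power of 2 suits Sobol sampling
    pseudo = np.random.default_rng(0).random((n, dim))
    sobol = qmc.Sobol(d=dim, scramble=True, seed=0).random(n)

    for name, pts in (("pseudo-random", pseudo), ("Sobol", sobol)):
        est = integrand(pts).mean()
        print(f"{name:14s} estimate = {est:.5f}  error = {abs(est - 0.25):.2e}")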

  13. CMacIonize: Monte Carlo photoionisation and moving-mesh radiation hydrodynamics

    Science.gov (United States)

    Vandenbroucke, Bert; Wood, Kenneth

    2018-02-01

    CMacIonize simulates the self-consistent evolution of HII regions surrounding young O and B stars, or other sources of ionizing radiation. The code combines a Monte Carlo photoionization algorithm that uses a complex mix of hydrogen, helium and several coolants in order to self-consistently solve for the ionization and temperature balance at any given time, with a standard first order hydrodynamics scheme. The code can be run as a post-processing tool to get the line emission from an existing simulation snapshot, but can also be used to run full radiation hydrodynamical simulations. Both the radiation transfer and the hydrodynamics are implemented in a general way that is independent of the grid structure that is used to discretize the system, allowing it to be run both as a standard fixed grid code and also as a moving-mesh code.

  14. CMS Monte Carlo production in the WLCG computing grid

    International Nuclear Information System (INIS)

    Hernandez, J M; Kreuzer, P; Hof, C; Khomitch, A; Mohapatra, A; Filippis, N D; Pompili, A; My, S; Abbrescia, M; Maggi, G; Donvito, G; Weirdt, S D; Maes, J; Mulders, P v; Villella, I; Wakefield, S; Guan, W; Fanfani, A; Evans, D; Flossdorf, A

    2008-01-01

    Monte Carlo production in CMS has received a major boost in performance and scale since the past CHEP06 conference. The production system has been re-engineered in order to incorporate the experience gained in running the previous system and to integrate production with the new CMS event data model, data management system and data processing framework. The system is interfaced to the two major computing Grids used by CMS, the LHC Computing Grid (LCG) and the Open Science Grid (OSG). Operational experience and integration aspects of the new CMS Monte Carlo production system are presented together with an analysis of production statistics. The new system automatically handles job submission, resource monitoring, job queuing, job distribution according to the available resources, data merging, registration of data into the data bookkeeping, data location, data transfer and placement systems. Compared to the previous production system, automation, reliability and performance have been considerably improved. A more efficient use of computing resources and a better handling of the inherent Grid unreliability have resulted in an increase of production scale by about an order of magnitude, capable of running in parallel on the order of ten thousand jobs and yielding more than two million events per day.

  15. Monte Carlo Production Management at CMS

    CERN Document Server

    Boudoul, G.; Pol, A; Srimanobhas, P; Vlimant, J R; Franzoni, Giovanni

    2015-01-01

    The analysis of the LHC data at the Compact Muon Solenoid (CMS) experiment requires the production of a large number of simulated events. During Run I of the LHC (2010-2012), CMS produced over 12 billion simulated events, organized in approximately sixty different campaigns, each emulating specific detector conditions and LHC running conditions (pile-up). In order to aggregate the information needed for the configuration and prioritization of the event production, to assure the book-keeping of all the processing requests placed by the physics analysis groups, and to interface with the CMS production infrastructure, the web-based service Monte Carlo Management (McM) was developed and put in production in 2012. McM is based on recent server infrastructure technology (CherryPy + java) and relies on a CouchDB database back-end. This contribution will cover the one and a half years of operational experience managing samples of simulated events for CMS, the evolution of its functionalities, and the extension of its capabi...

  16. Monte Carlo applications to core-following of the National Research Universal reactor (NRU)

    International Nuclear Information System (INIS)

    Nguyen, T.S.; Wang, X.; Leung, T.

    2014-01-01

    Reactor code TRIAD, relying on a two-group neutron diffusion model, is currently used for core-following of NRU - to track reactor assembly locations and burnups. The Monte Carlo (MCNP or SERPENT) full-reactor models of NRU can be used to provide the core power distribution for calculating fuel burnups, with WIMS-AECL providing fuel depletion calculations. The MCNP/WIMS core-following results were in good agreement with the measured data, within the expected biases. The Monte Carlo methods, still very time-consuming, need to be able to run faster before they can replace TRIAD for timely support of NRU operations. (author)

  17. HEXANN-EVALU - a Monte Carlo program system for pressure vessel neutron irradiation calculation

    International Nuclear Information System (INIS)

    Lux, Ivan

    1983-08-01

    The Monte Carlo program HEXANN and the evaluation program EVALU are intended to calculate Monte Carlo estimates of reaction rates and currents in segments of concentric angular regions around a hexagonal reactor-core region. The report describes the theoretical basis, structure and operation of the programs. Input data preparation guides and a sample problem are also included. Theoretical considerations as well as numerical experimental results suggest to the user a nearly optimal way of making use of the Monte Carlo efficiency-increasing options included in the program.

  18. The PDF4LHC report on PDFs and LHC data: Results from Run I and preparation for Run II

    CERN Document Server

    Rojo, Juan; Ball, Richard D; Cooper-Sarkar, Amanda; de Roeck, Albert; Farry, Stephen; Ferrando, James; Forte, Stefano; Gao, Jun; Harland-Lang, Lucian; Huston, Joey; Glazov, Alexander; Gouzevitch, Maxime; Gwenlan, Claire; Lipka, Katerina; Lisovyi, Mykhailo; Mangano, Michelangelo; Nadolsky, Pavel; Perrozzi, Luca; Placakyte, Ringaile; Radescu, Voica; Salam, Gavin P; Thorne, Robert

    2015-01-01

    The accurate determination of the Parton Distribution Functions (PDFs) of the proton is an essential ingredient of the Large Hadron Collider (LHC) program. PDF uncertainties impact a wide range of processes, from Higgs boson characterisation and precision Standard Model measurements to New Physics searches. A major recent development in modern PDF analyses has been to exploit the wealth of new information contained in precision measurements from the LHC Run I, as well as progress in tools and methods to include these data in PDF fits. In this report we summarise the information that PDF-sensitive measurements at the LHC have provided so far, and review the prospects for further constraining PDFs with data from the recently started Run II. This document aims to provide useful input to the LHC collaborations to prioritise their PDF-sensitive measurements at Run II, as well as a comprehensive reference for the PDF-fitting collaborations.

  19. The PDF4LHC report on PDFs and LHC data. Results from Run I and preparation for Run II

    International Nuclear Information System (INIS)

    Rojo, Juan; Ball, Richard D.; CERN, Geneva

    2015-07-01

    The accurate determination of the Parton Distribution Functions (PDFs) of the proton is an essential ingredient of the Large Hadron Collider (LHC) program. PDF uncertainties impact a wide range of processes, from Higgs boson characterisation and precision Standard Model measurements to New Physics searches. A major recent development in modern PDF analyses has been to exploit the wealth of new information contained in precision measurements from the LHC Run I, as well as progress in tools and methods to include these data in PDF fits. In this report we summarise the information that PDF-sensitive measurements at the LHC have provided so far, and review the prospects for further constraining PDFs with data from the recently started Run II. This document aims to provide useful input to the LHC collaborations to prioritise their PDF-sensitive measurements at Run II, as well as a comprehensive reference for the PDF-fitting collaborations.

  20. The PDF4LHC report on PDFs and LHC data: results from Run I and preparation for Run II

    International Nuclear Information System (INIS)

    Rojo, Juan; Accardi, Alberto; Ball, Richard D; Cooper-Sarkar, Amanda; Gwenlan, Claire; Roeck, Albert de; Mangano, Michelangelo; Farry, Stephen; Ferrando, James; Forte, Stefano; Gao, Jun; Harland-Lang, Lucian; Huston, Joey; Glazov, Alexander; Lipka, Katerina; Gouzevitch, Maxime; Lisovyi, Mykhailo; Nadolsky, Pavel

    2015-01-01

    The accurate determination of the parton distribution functions (PDFs) of the proton is an essential ingredient of the Large Hadron Collider (LHC) program. PDF uncertainties impact a wide range of processes, from Higgs boson characterization and precision Standard Model measurements to new physics searches. A major recent development in modern PDF analyses has been to exploit the wealth of new information contained in precision measurements from the LHC Run I, as well as progress in tools and methods to include these data in PDF fits. In this report we summarize the information that PDF-sensitive measurements at the LHC have provided so far, and review the prospects for further constraining PDFs with data from the recently started Run II. This document aims to provide useful input to the LHC collaborations to prioritize their PDF-sensitive measurements at Run II, as well as a comprehensive reference for the PDF-fitting collaborations. (topical review)

  1. Uncertainties in models of tropospheric ozone based on Monte Carlo analysis: Tropospheric ozone burdens, atmospheric lifetimes and surface distributions

    Science.gov (United States)

    Derwent, Richard G.; Parrish, David D.; Galbally, Ian E.; Stevenson, David S.; Doherty, Ruth M.; Naik, Vaishali; Young, Paul J.

    2018-05-01

    Recognising that global tropospheric ozone models have many uncertain input parameters, an attempt has been made to employ Monte Carlo sampling to quantify the uncertainties in model output that arise from global tropospheric ozone precursor emissions and from ozone production and destruction in a global Lagrangian chemistry-transport model. Ninety-eight quasi-randomly Monte Carlo sampled model runs were completed, and the uncertainties were quantified in the tropospheric burdens and lifetimes of ozone, carbon monoxide and methane, together with the surface distribution and seasonal cycle of ozone. The results have shown a satisfactory degree of convergence and provide a first estimate of the likely uncertainties in tropospheric ozone model outputs. There are likely to be diminishing returns in carrying out many more Monte Carlo runs in order to refine these outputs further. Uncertainties due to model formulation were separately addressed using the results from 14 Atmospheric Chemistry Coupled Climate Model Intercomparison Project (ACCMIP) chemistry-climate models. The 95% confidence ranges surrounding the ACCMIP model burdens and lifetimes for ozone, carbon monoxide and methane were somewhat smaller than for the Monte Carlo estimates. This reflects the situation where the ACCMIP models used harmonised emissions data and differed only in their meteorological data and model formulations, whereas a conscious effort was made to describe the uncertainties in the ozone precursor emissions and in the kinetic and photochemical data in the Monte Carlo runs. Attention was focussed on the model predictions of the ozone seasonal cycles at three marine boundary layer stations: Mace Head, Ireland; Trinidad Head, California; and Cape Grim, Tasmania. Despite comprehensively addressing the uncertainties due to global emissions and ozone sources and sinks, none of the Monte Carlo runs was able to generate seasonal cycles that matched the observations at all three MBL stations. Although
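
    As a hedged sketch of this sampling strategy, the snippet below draws quasi-random (Latin hypercube) parameter sets within assumed uncertainty ranges, runs a toy response function standing in for the chemistry-transport model, and summarizes the spread of the outputs; every range, parameter name, and the response itself are illustrative assumptions, not values from the study.

    import numpy as np
    from scipy.stats import qmc

    # Assumed scale-factor ranges for three uncertain inputs (illustrative):
    # NOx emissions, CO emissions, and an ozone chemical-loss rate.
    lower = np.array([0.8, 0.7, 0.9])
    upper = np.array([1.2, 1.3, 1.1])

    sampler = qmc.LatinHypercube(d=3, seed=1)
    params = qmc.scale(sampler.random(n=98), lower, upper)   # 98 runs, as above

    def ozone_burden(p):
        # Toy stand-in for the model response (Tg of tropospheric ozone).
        nox, co, loss = p
        return 340.0 * nox**0.6 * co**0.2 / loss

    burdens = np.apply_along_axis(ozone_burden, 1, params)
    lo, hi = np.percentile(burdens, [2.5, 97.5])
    print(f"burden = {burdens.mean():.0f} Tg, 95% range = [{lo:.0f}, {hi:.0f}] Tg")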

  2. Weighted-delta-tracking for Monte Carlo particle transport

    International Nuclear Information System (INIS)

    Morgan, L.W.G.; Kotlyar, D.

    2015-01-01

    Highlights:
    • This paper presents an alteration to the Monte Carlo Woodcock tracking technique.
    • The alteration improves computational efficiency within regions containing strong absorbers.
    • The rejection technique is replaced by a statistical weighting mechanism.
    • The modified Woodcock method is shown to be faster than standard Woodcock tracking.
    • The modified Woodcock method achieves a lower variance, given a specified accuracy.
    Abstract: Monte Carlo particle transport (MCPT) codes are incredibly powerful and versatile tools to simulate particle behavior in a multitude of scenarios, such as core/criticality studies, radiation protection, shielding, medicine and fusion research, to name just a small subset of applications. However, MCPT codes can be very computationally expensive to run when the model geometry contains large attenuation depths and/or many components. This paper proposes a simple modification to the Woodcock tracking method used by some Monte Carlo particle transport codes. The Woodcock method uses rejection sampling of virtual collisions to remove collision distance sampling at material boundaries; however, it suffers from poor computational efficiency when the sample acceptance rate is low. The proposed method removes rejection sampling from the Woodcock method in favor of a statistical weighting scheme, which improves the computational efficiency of a Monte Carlo particle tracking code. It is shown that the modified Woodcock method is less computationally expensive than standard ray-tracing and rejection-based Woodcock tracking methods and achieves a lower variance, given a specified accuracy.
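
    A minimal sketch of the weighted variant described above, for a toy one-dimensional slab treated as a pure absorber: flights are sampled from a majorant cross section as in standard Woodcock tracking, but instead of playing the rejection game at each tentative collision, the real-collision fraction of the particle's weight is scored and the remainder continues. Geometry, cross sections, and the weight cutoff are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    SIGMA_MAJ = 2.0                          # majorant total cross section, 1/cm

    def sigma_t(x):
        # Spatially varying total cross section (dense zone, then near-void).
        return 2.0 if x < 1.0 else 0.1

    def weighted_track(x=0.0, weight=1.0, x_max=5.0, w_cut=1e-3):
        # Track one particle through the slab, scoring absorption by weight.
        absorbed = 0.0
        while x < x_max and weight > w_cut:
            x += -np.log(rng.random()) / SIGMA_MAJ   # flight to tentative site
            if x >= x_max:
                break                                # leaked out of the slab
            p_real = sigma_t(x) / SIGMA_MAJ
            # No rejection: score the real-collision fraction of the weight
            # and let the remainder continue as if the collision were virtual.
            absorbed += weight * p_real
            weight *= 1.0 - p_real
        return absorbed

    score = np.mean([weighted_track() for _ in range(10_000)])
    print(f"mean absorbed weight per history: {score:.3f}")   # ~1 - exp(-2.4)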

  3. Implementation of the Markov Chain Monte Carlo Method for Determining Commodity Futures Contract Prices

    Directory of Open Access Journals (Sweden)

    PUTU AMANDA SETIAWANI

    2015-06-01

    The aim of this research is to implement the Markov Chain Monte Carlo (MCMC) simulation method to price futures contracts on cocoa commodities. The results show that MCMC is more flexible than the standard Monte Carlo (SMC) simulation method because MCMC uses a hit-and-run sampler to generate proposal moves that are subsequently accepted or rejected with a probability that depends on the target distribution to be achieved. This research shows that the MCMC method is suitable for simulating the model of cocoa commodity price movements. The result of this research is a simulation of futures contract prices for the next three months and of the futures contract price that must be paid at the time the contract expires. Pricing futures contracts using the MCMC method produces a cheaper contract price than standard Monte Carlo simulation.
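
    As a generic, hedged sketch of the MCMC idea the paper applies (propose a move, then accept or reject it with a probability that depends on the target density), the snippet below implements a plain random-walk Metropolis sampler rather than the paper's hit-and-run variant; the Gaussian log-return target is illustrative.

    import math
    import random

    def metropolis(log_target, x0=0.0, n=20_000, step=0.5):
        # Random-walk Metropolis: accept a proposal with prob. min(1, ratio).
        xs, x = [], x0
        lp = log_target(x)
        for _ in range(n):
            prop = x + random.gauss(0.0, step)
            lp_prop = log_target(prop)
            if math.log(random.random()) < lp_prop - lp:
                x, lp = prop, lp_prop
            xs.append(x)
        return xs

    # Illustrative target: Gaussian daily log-returns with 20% volatility.
    samples = metropolis(lambda x: -0.5 * (x / 0.2) ** 2)
    burned = samples[5_000:]                 # discard burn-in
    print(sum(burned) / len(burned))         # posterior mean, ~0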

  4. MMCTP: a radiotherapy research environment for Monte Carlo and patient-specific treatment planning

    International Nuclear Information System (INIS)

    Alexander, A; DeBlois, F; Stroian, G; Al-Yahya, K; Heath, E; Seuntjens, J

    2007-01-01

    Radiotherapy research lacks a flexible computational research environment for Monte Carlo (MC) and patient-specific treatment planning. The purpose of this study was to develop a flexible software package on low-cost hardware with the aim of integrating new patient-specific treatment planning with MC dose calculations suitable for large-scale prospective and retrospective treatment planning studies. We designed the software package 'McGill Monte Carlo treatment planning' (MMCTP) for the research development of MC and patient-specific treatment planning. The MMCTP design consists of a graphical user interface (GUI), which runs on a simple workstation connected through the standard secure-shell protocol to a cluster for lengthy MC calculations. Treatment planning information (e.g., images, structures, beam geometry properties and dose distributions) is converted into a convenient MMCTP local file storage format, designated the McGill RT format. MMCTP features include (a) DICOM RT, RTOG and CADPlan CART format imports; (b) 2D and 3D visualization views for images, structure contours, and dose distributions; (c) contouring tools; (d) DVH analysis and dose matrix comparison tools; (e) external beam editing; (f) MC transport calculation from beam source to patient geometry for photon and electron beams. The MC input files, which are prepared from the beam geometry properties and patient information (e.g., images and structure contours), are uploaded and run on a cluster using shell commands controlled from the MMCTP GUI. The visualization, dose matrix operation and DVH tools offer extensive options for plan analysis and comparison between MC plans and plans imported from commercial treatment planning systems. The MMCTP GUI provides a flexible research platform for the development of patient-specific MC treatment planning for photon and electron external beam radiation therapy. The impact of this tool lies in the fact that it allows for systematic, platform

  5. Modelling of an industrial environment, part 1.: Monte Carlo simulations of photon transport

    International Nuclear Information System (INIS)

    Kis, Z.; Eged, K.; Meckbach, R.; Voigt, G.

    2002-01-01

    After a nuclear accident releasing radioactive material into the environment, external exposures may contribute significantly to the radiation exposure of the population (UNSCEAR 1988, 2000). For urban populations, the external gamma exposure from radionuclides deposited on the surfaces of the urban-industrial environment yields the dominant contribution to the total dose to the public (Kelly 1987; Jacob and Meckbach 1990). The radiation field is naturally influenced by the environment around the sources. For calculations of the shielding effect of the structures in complex and realistic urban environments, Monte Carlo methods have turned out to be useful tools (Jacob and Meckbach 1987; Meckbach et al. 1988). Using these methods, a complex environment can be set up in which the photon transport can be solved in a reliable way. The accuracy of the methods is in principle limited only by the knowledge of the atomic cross sections and the computational time. Several papers using Monte Carlo results for calculating doses from external gamma exposures have been published (Jacob and Meckbach 1987, 1990; Meckbach et al. 1988; Rochedo et al. 1996). In these papers the Monte Carlo simulations were run in urban environments and for different photon energies. An industrial environment can be defined as an area where productive and/or commercial activity is carried out; good examples are a factory or a supermarket. An industrial environment can differ considerably from urban ones in the types, structures, and dimensions of its buildings. These variations will affect the radiation field of such an environment. Hence there is a need to run new Monte Carlo simulations designed specifically for industrial environments.

  6. Monte Carlo method for array criticality calculations

    International Nuclear Information System (INIS)

    Dickinson, D.; Whitesides, G.E.

    1976-01-01

    The Monte Carlo method for solving neutron transport problems consists of mathematically tracing paths of individual neutrons collision by collision until they are lost by absorption or leakage. The fate of the neutron after each collision is determined by the probability distribution functions that are formed from the neutron cross-section data. These distributions are sampled statistically to establish the successive steps in the neutron's path. The resulting data, accumulated from following a large number of batches, are analyzed to give estimates of k-eff and other collision-related quantities. The use of electronic computers to produce the simulated neutron histories, initiated at Los Alamos Scientific Laboratory, made the use of the Monte Carlo method practical for many applications. In analog Monte Carlo simulation, the calculation follows the physical events of neutron scattering, absorption, and leakage. To increase calculational efficiency, modifications such as the use of statistical weights are introduced. The Monte Carlo method permits the use of a three-dimensional geometry description and a detailed cross-section representation. Some of the problems in using the method are the selection of the spatial distribution for the initial batch, the preparation of the geometry description for complex units, and the calculation of error estimates for region-dependent quantities such as fluxes. The Monte Carlo method is especially appropriate for criticality safety calculations since it permits an accurate representation of interacting units of fissile material. Dissimilar units, units of complex shape, moderators between units, and reflected arrays may be calculated. Monte Carlo results must be correlated with relevant experimental data, and caution must be used to ensure that a representative set of neutron histories is produced.
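
    As a toy sketch of the analog collision-by-collision game described above, the snippet below follows batches of neutrons through a bare one-group slab and estimates the number of fission neutrons produced per source neutron; all cross sections and dimensions are illustrative assumptions, and a real criticality code would track full 3-D geometry and continuous-energy data.

    import numpy as np

    rng = np.random.default_rng(7)
    SIG_T, SIG_S, SIG_F = 1.0, 0.6, 0.15     # total, scatter, fission, 1/cm
    NU, HALF_WIDTH = 2.5, 4.0                # neutrons/fission, slab half-width cm

    def one_batch(n_source=20_000):
        # Follow one batch; return fission neutrons produced per source neutron.
        x = rng.uniform(-HALF_WIDTH, HALF_WIDTH, n_source)
        mu = rng.uniform(-1.0, 1.0, n_source)          # flight direction cosines
        alive = np.ones(n_source, dtype=bool)
        produced = 0.0
        while alive.any():
            d = -np.log(rng.random(alive.sum())) / SIG_T   # flight lengths
            x[alive] += mu[alive] * d
            alive &= np.abs(x) <= HALF_WIDTH               # leakage
            r = rng.random(n_source) * SIG_T               # collision type
            fission = alive & (r < SIG_F)
            produced += NU * fission.sum()
            captured = alive & (r >= SIG_F) & (r < SIG_T - SIG_S)
            alive &= ~(fission | captured)                 # scatters survive
            mu[alive] = rng.uniform(-1.0, 1.0, alive.sum())
        return produced / n_source

    print("k ~", np.mean([one_batch() for _ in range(5)]))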

  7. ATLAS Run 1 Pythia8 tunes

    CERN Document Server

    The ATLAS collaboration

    2014-01-01

    We present tunes of the Pythia8 Monte Carlo event generator's parton shower and multiple parton interaction parameters to a range of data observables from ATLAS Run 1. Four new tunes have been constructed, corresponding to the four leading-order parton density functions, CTEQ6L1, MSTW2008LO, NNPDF23LO, and HERAPDF15LO, each simultaneously tuning ten generator parameters. A set of systematic variations is provided for the NNPDF tune, based on the eigentune method. These tunes improve the modeling of observables that can be described by leading-order + parton shower simulation, and are primarily intended for use in situations where next-to-leading-order and/or multileg parton-showered simulations are unavailable or impractical.

  8. GRS' research on clay rock in the Mont Terri underground laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Wieczorek, Klaus; Czaikowski, Oliver [Gesellschaft fuer Anlagen- und Reaktorsicherheit gGmbH, Braunschweig (Germany)

    2016-07-15

    For constructing a nuclear waste repository and for ensuring that the safety requirements are met over very long time periods, thorough knowledge of the safety-relevant processes occurring in the coupled system of waste containers, engineered barriers, and the host rock is indispensable. For correspondingly targeted research work, the Mont Terri rock laboratory is a unique facility where repository research is performed in a clay rock environment. It is run by 16 international partners, and a great variety of questions are investigated. Some of the work which GRS, as one of the Mont Terri partners, is involved in is presented in this article. The focus is on the thermal, hydraulic and mechanical behaviour of the host rock and/or engineered barriers.

  9. Load Balancing of Parallel Monte Carlo Transport Calculations

    International Nuclear Information System (INIS)

    Procassini, R J; O'Brien, M J; Taylor, J M

    2005-01-01

    The performance of parallel Monte Carlo transport calculations which use both spatial and particle parallelism is increased by dynamically assigning processors to the most heavily worked domains. Since the particle workload varies over the course of the simulation, this algorithm determines each cycle whether dynamic load balancing would speed up the calculation. If load balancing is required, a small number of particle communications are initiated in order to achieve load balance. This method has decreased the parallel run time by more than a factor of three for certain criticality calculations.

  10. A quick and easy improvement of Monte Carlo codes for simulation

    Science.gov (United States)

    Lebrere, A.; Talhi, R.; Tripathy, M.; Pyée, M.

    The simulation of trials of independent random variables of a given distribution is a critical element of running Monte Carlo codes. This is usually performed by using pseudo-random number generators (in most cases linear congruential ones). We present here an alternative way to generate sequences with given statistical properties. These sequences are purely deterministic and are given by closed formulae, and can in some cases give better results than classical generators.
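
    As a minimal illustration of a deterministic, closed-formula sequence of the kind described above (the paper's specific construction is not reproduced here), the sketch below uses a Weyl sequence, the fractional parts of multiples of an irrational number, to estimate a simple integral:

    import math

    PHI = (math.sqrt(5.0) - 1.0) / 2.0       # irrational increment

    def weyl(n):
        # n-th term of the Weyl sequence: frac(n * phi), equidistributed on [0, 1).
        return (n * PHI) % 1.0

    # Estimate the integral of x^2 on [0, 1] (exact value is 1/3).
    n = 100_000
    print(sum(weyl(i) ** 2 for i in range(1, n + 1)) / n)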

  11. ALICE installs new hardware in preparation for the 2012 run

    CERN Multimedia

    CERN Bulletin and ALICE Matters

    2012-01-01

    2011 was a fantastic year for the heavy-ion run at ALICE despite unprecedented challenges and difficult conditions. The data collected are at least one order of magnitude greater than the 2010 data. Thanks to a planned upgrade of two subdetectors during the 2011/2012 winter shutdown and a reorganisation of ALICE’s Physics Working Groups that should allow them to better deal with the greater challenges imposed by the LHC, the collaboration is confident that the 2012 run will allow ALICE to extend its physics reach and improve its performance.   Photograph of ALICE taken by Antonio Saba during this year's winter shutdown. The annual winter shutdown has been a very intense period for the ALICE collaboration. In conjunction with the general maintenance, modifications and tests of the experiment, two major projects – the installation of 3 supermodules of the Transition Radiation Detector (TRD) and 2 supermodules of the Electromagnetic Calorimeter (EMCal) – hav...

  12. NRMC - A GPU code for N-Reverse Monte Carlo modeling of fluids in confined media

    Science.gov (United States)

    Sánchez-Gil, Vicente; Noya, Eva G.; Lomba, Enrique

    2017-08-01

    NRMC is a parallel code for performing N-Reverse Monte Carlo modeling of fluids in confined media [V. Sánchez-Gil, E.G. Noya, E. Lomba, J. Chem. Phys. 140 (2014) 024504]. This method is an extension of the usual Reverse Monte Carlo method to obtain structural models of confined fluids compatible with experimental diffraction patterns, specifically designed to overcome the problem of slow diffusion that can appear under conditions of tight confinement. Most of the computational time in N-Reverse Monte Carlo modeling is spent in the evaluation of the structure factor for each trial configuration, a calculation that can be easily parallelized. Implementation of the structure factor evaluation in NVIDIA® CUDA so that the code can be run on GPUs leads to a speed up of up to two orders of magnitude.

  13. Computational efficiency using the CYBER-205 computer for the PACER Monte Carlo Program

    International Nuclear Information System (INIS)

    Candelore, N.R.; Maher, C.M.; Gast, R.C.

    1985-09-01

    The use of the large memory of the CYBER-205 and its vector data handling logic produced speedups over scalar code ranging from a factor of 7 for unit cell calculations with relatively few compositions to a factor of 5 for problems having more detailed geometry and materials. By vectorizing the neutron tracking in PACER (the collision analysis remained in scalar code), an asymptotic value of 200 neutrons/cpu-second was achieved for a batch size of 10,000 neutrons. The complete vectorization of the Monte Carlo method as performed by Brown resulted in even higher speedups in neutron processing rates over the use of scalar code. Large speedups in neutron processing rates are beneficial not only to achieve more accurate results for the neutronics calculations which are routinely done using Monte Carlo, but also to extend the use of the Monte Carlo method to applications that were previously considered impractical because of large running times.

  14. CMS Software and Computing Ready for Run 2

    CERN Document Server

    Bloom, Kenneth

    2015-01-01

    In Run 1 of the Large Hadron Collider, software and computing was a strategic strength of the Compact Muon Solenoid experiment. The timely processing of data and simulation samples and the excellent performance of the reconstruction algorithms played an important role in the preparation of the full suite of searches used for the observation of the Higgs boson in 2012. In Run 2, the LHC will run at higher intensities and CMS will record data at a higher trigger rate. These new running conditions will provide new challenges for the software and computing systems. Over the two years of Long Shutdown 1, CMS has built upon the successes of Run 1 to improve the software and computing to meet these challenges. In this presentation we will describe the new features in software and computing that will once again put CMS in a position of physics leadership.

  15. Running the running

    OpenAIRE

    Cabass, Giovanni; Di Valentino, Eleonora; Melchiorri, Alessandro; Pajer, Enrico; Silk, Joseph

    2016-01-01

    We use the recent observations of Cosmic Microwave Background temperature and polarization anisotropies provided by the Planck satellite experiment to place constraints on the running $\\alpha_\\mathrm{s} = \\mathrm{d}n_{\\mathrm{s}} / \\mathrm{d}\\log k$ and the running of the running $\\beta_{\\mathrm{s}} = \\mathrm{d}\\alpha_{\\mathrm{s}} / \\mathrm{d}\\log k$ of the spectral index $n_{\\mathrm{s}}$ of primordial scalar fluctuations. We find $\\alpha_\\mathrm{s}=0.011\\pm0.010$ and $\\beta_\\mathrm{s}=0.027\\...

  16. RefDB: The Reference Database for CMS Monte Carlo Production

    CERN Document Server

    Lefébure, V

    2003-01-01

    RefDB is the CMS Monte Carlo Reference Database. It is used for recording and managing all details of physics simulation, reconstruction and analysis requests, for coordinating task assignments to world-wide distributed Regional Centers, Grid-enabled or not, and for tracing their progress. RefDB is also the central database that the workflow-planner contacts in order to get task instructions. It is automatically and asynchronously updated with book-keeping run summaries. Finally, it is the end-user interface to data catalogues.

  17. Zinc Enolate/Sulfinate Prepared from a Single-Run Reaction Using Zinc Dust with O-Tosylated 4-Hydroxy Coumarin and Pyrone

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Ueon Sang; Joo, Seong-Ryu; Kim, Seung-Hoi [Dankook University, Cheonan (Korea, Republic of)]

    2016-07-15

    We demonstrated the preparation of new zinc complexes, 2-oxo-2H-chromen-4-yloxy tosylzinc (I), and 6-methyl-2-oxo-2H-pyran-4-yloxy tosylzinc (II), by the oxidative addition of readily available zinc dust into the corresponding 4-tosylated coumarin (A) and pyrone (B), respectively. Of special interest, the thus-obtained zinc complexes showed an electrophile-dependent reactivity. The subsequent coupling reactions of I and II with a variety of acid chlorides provided the O-acylation product in moderate yields. More interestingly, it should be emphasized that the thus-prepared zinc complexes (I and II) functioned both as zinc enolate and zinc sulfinate, providing C(3)-disubstituted product (b) and sulfone (c), respectively, from a single-run reaction when I or II was treated with benzyl halides. Even though somewhat low yields were achieved under the nonoptimized conditions, the novel zinc complexes present another potential application for zinc reagents. Versatile applications of this discovery are currently underway.

  18. Zinc Enolate/Sulfinate Prepared from a Single-Run Reaction Using Zinc Dust with O-Tosylated 4-Hydroxy Coumarin and Pyrone

    International Nuclear Information System (INIS)

    Shin, Ueon Sang; Joo, Seong-Ryu; Kim, Seung-Hoi

    2016-01-01

    We demonstrated the preparation of new zinc complexes, 2-oxo-2H-chromen-4-yloxy tosylzinc (I), and 6-methyl-2-oxo-2H-pyran-4-yloxy tosylzinc (II), by the oxidative addition of readily available zinc dust into the corresponding 4-tosylated coumarin (A) and pyrone (B), respectively. Of special interest, the thus-obtained zinc complexes showed an electrophile-dependent reactivity. The subsequent coupling reactions of I and II with a variety of acid chlorides provided the O-acylation product in moderate yields. More interestingly, it should be emphasized that the thus-prepared zinc complexes (I and II) functioned both as zinc enolate and zinc sulfinate, providing C(3)-disubstituted product (b) and sulfone (c), respectively, from a single-run reaction when I or II was treated with benzyl halides. Even though somewhat low yields were achieved under the nonoptimized conditions, the novel zinc complexes present another potential application for zinc reagents. Versatile applications of this discovery are currently underway.

  19. Monte Carlo lattice program KIM

    International Nuclear Information System (INIS)

    Cupini, E.; De Matteis, A.; Simonini, R.

    1980-01-01

    The Monte Carlo program KIM solves the steady-state linear neutron transport equation for a fixed-source problem or, by successive fixed-source runs, for the eigenvalue problem, in a two-dimensional thermal reactor lattice. Fluxes and reaction rates are the main quantities computed by the program, from which power distribution and few-group averaged cross sections are derived. The simulation ranges from 10 MeV to zero and includes anisotropic and inelastic scattering in the fast energy region, the epithermal Doppler broadening of the resonances of some nuclides, and the thermalization phenomenon by taking into account the thermal velocity distribution of some molecules. Besides the well known combinatorial geometry, the program allows complex configurations to be represented by a discrete set of points, an approach greatly improving calculation speed.

  20. Dynamic Load Balancing of Parallel Monte Carlo Transport Calculations

    International Nuclear Information System (INIS)

    O'Brien, M; Taylor, J; Procassini, R

    2004-01-01

    The performance of parallel Monte Carlo transport calculations which use both spatial and particle parallelism is increased by dynamically assigning processors to the most worked domains. Since the particle work load varies over the course of the simulation, this algorithm determines each cycle whether dynamic load balancing would speed up the calculation. If load balancing is required, a small number of particle communications are initiated in order to achieve load balance. This method has decreased the parallel run time by more than a factor of three for certain criticality calculations.

  1. A midway forward-adjoint coupling method for neutron and photon Monte Carlo transport

    International Nuclear Information System (INIS)

    Serov, I.V.; John, T.M.; Hoogenboom, J.E.

    1999-01-01

    The midway Monte Carlo method for calculating detector responses combines a forward and an adjoint Monte Carlo calculation. In both calculations, particle scores are registered at a surface to be chosen by the user somewhere between the source and detector domains. The theory of the midway response determination is developed within the framework of transport theory for external sources and for criticality theory. The theory is also developed for photons, which are generated at inelastic scattering or capture of neutrons. In either the forward or the adjoint calculation a so-called black absorber technique can be applied; i.e., particles need not be followed after passing the midway surface. The midway Monte Carlo method is implemented in the general-purpose MCNP Monte Carlo code. The midway Monte Carlo method is demonstrated to be very efficient in problems with deep penetration, small source and detector domains, and complicated streaming paths. All the problems considered pose difficult variance reduction challenges. Calculations were performed using existing variance reduction methods of normal MCNP runs and using the midway method. The performed comparative analyses show that the midway method appears to be much more efficient than the standard techniques in an overwhelming majority of cases and can be recommended for use in many difficult variance reduction problems of neutral particle transport.

  2. Particle Communication and Domain Neighbor Coupling: Scalable Domain Decomposed Algorithms for Monte Carlo Particle Transport

    Energy Technology Data Exchange (ETDEWEB)

    O'Brien, M. J.; Brantley, P. S.

    2015-01-20

    In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of: (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain decomposed Monte Carlo particle transport on up to 2^21 = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e. the calculation is already load balanced. We also examine load imbalanced calculations where each domain’s replication level is proportional to its particle workload. In this case we show how to efficiently couple together adjacent domains to maintain within-workgroup load balance and minimize memory usage.

  3. Structural change and forecasting long-run energy prices

    International Nuclear Information System (INIS)

    Bernard, J.T.; Khalaf, L.

    2004-01-01

    Fluctuating energy prices have a significant impact on the economies of industrialized nations. A recent study has shown a strong non-linear relationship between changes in oil prices and growth in gross domestic product (GDP). In order to forecast the behaviour of energy prices, a complete model must take into account domestic and international supply and demand conditions, market regulations, technological advances and geopolitics. In 1999, Pindyck suggested that for long-term forecasting a simple model should be adopted where prices grow in real terms and at a fixed rate. This paper tests the statistical significance of Pindyck's suggested class of econometric equations that model the behaviour of long-run real energy prices. The models assume mean-reverting prices with continuous and random changes in their level and trend. They are estimated using Kalman filtering. The authors used simulation-based procedures to address the issue of non-standard test statistics and nuisance parameters. Results were reported for a standard Monte Carlo test and a maximized Monte Carlo test. Results showed statistically significant instabilities for coal and natural gas prices, but not for crude oil prices. Various models were differentiated using out-of-sample forecasting exercises. 25 refs., 3 tabs
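
    The class of models being tested can be sketched by simulation: a log price that mean-reverts toward a trend whose level and slope themselves change randomly, which is the state-space form the Kalman filter estimates. All parameter values below are illustrative assumptions, not estimates from the paper.

    ```python
    # Simulate a mean-reverting log price around a randomly drifting trend.
    import numpy as np

    rng = np.random.default_rng(42)
    n_periods, kappa = 500, 0.05        # sample length, mean-reversion speed
    level, slope = 3.0, 0.001           # unobserved trend level and growth rate
    price = level                       # observed log price
    path = []
    for _ in range(n_periods):
        slope += rng.normal(0.0, 1e-4)              # random walk in the slope
        level += slope + rng.normal(0.0, 0.01)      # random changes in the level
        price += kappa * (level - price) + rng.normal(0.0, 0.05)
        path.append(price)

    print(path[:5])
    ```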

  4. Understanding the T2 traffic in CMS during Run-1

    CERN Document Server

    Wildish, T

    2015-01-01

    In the run-up to Run-1, CMS was operating its facilities according to the MONARC model, where data transfers were strictly hierarchical in nature. Direct transfers between Tier-2 nodes were excluded, being perceived as operationally intensive and risky in an era when the network was expected to be a major source of errors. By the end of Run-1, wide-area networks were more capable and stable than originally anticipated. The original data-placement model was relaxed, and traffic was allowed between Tier-2 nodes. Tier-2 to Tier-2 traffic in 2012 already exceeded the amount of Tier-2 to Tier-1 traffic, so it clearly has the potential to become important in the future. Moreover, while Tier-2 to Tier-1 traffic is mostly upload of Monte Carlo data, the Tier-2 to Tier-2 traffic represents data moved in direct response to requests from the physics analysis community. As such, problems or delays there are more likely to have a direct impact on the user community. Tier-2 to Tier-2 traffic may also traverse parts of the WAN ...

  5. Track 4: basic nuclear science variance reduction for Monte Carlo criticality simulations. 6. Variational Variance Reduction for Monte Carlo Criticality Calculations

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Larsen, Edward W.

    2001-01-01

    We present results from a class of criticality calculations. These problems consist of alternating arrays of fuel and moderator regions, each region being 3.0 cm thick. Forward Monte Carlo calculations were run with (a) traditional Monte Carlo using a track-length estimate of k and survival biasing (SB); (b) the new VVR method without the linear spatial term (VVR1); (c) the new VVR method without the linear spatial term, but with SB (VVR1/SB); (d) the new VVR method with the linear spatial term (VVR2); and (e) the new VVR method with the linear spatial term and with SB (VVR2/SB). The traditional Monte Carlo calculation was performed with SB since this resulted in a higher figure of merit (FOM) than using analog Monte Carlo. We performed the adjoint calculation using a finite difference diffusion code with a fine-mesh size of Δx = 0.1 cm. The time required to perform the deterministic adjoint calculation was much less than the time required for the Monte Carlo calculation and evaluation of the variational functional and is not included in the FOM. For each problem, the new VVR method outperforms the traditional Monte Carlo method, and the VVR method with the linear spatial term performs slightly better. For the largest problem, the two VVR methods without survival biasing (SB) outperformed the traditional Monte Carlo method by a factor of 36. We note that the use of SB decreases the efficiency of the VVR method. This decrease in FOM is due to the extra cost per history of the VVR method and the longer history length incurred by using SB. However, the new VVR method still outperforms the traditional Monte Carlo calculation even when (non-optimally) used with SB. In conclusion, we have developed a new VVR method for Monte Carlo criticality calculations. This method employs (a) a variational functional that is more accurate than the standard direct functional, (b) a representation of the deterministically obtained adjoint flux that is especially accurate for optically thick problems with

  6. The research program of the Liquid Scintillation Detector (LSD) in the Mont Blanc Laboratory

    Science.gov (United States)

    Dadykin, V. L.; Yakushev, V. F.; Korchagin, P. V.; Korchagin, V. B.; Malgin, A. S.; Ryassny, F. G.; Ryazhskaya, O. G.; Talochkin, V. P.; Zatsepin, G. T.; Badino, G.

    1985-01-01

    A massive (90 tons) liquid scintillation detector (LSD) has been running since October 1984 in the Mont Blanc Laboratory at a depth of 5,200 hg/sq cm of standard rock. The research program of the experiment covers a variety of topics in particle physics and astrophysics. The performance of the detector and the main fields of research are presented, and the preliminary results are discussed.

  7. Non-analogue Monte Carlo method, application to neutron simulation; Methode de Monte Carlo non analogue, application a la simulation des neutrons

    Energy Technology Data Exchange (ETDEWEB)

    Morillon, B.

    1996-12-31

    With most of the traditional and contemporary techniques, it is still impossible to solve the transport equation if one takes into account a fully detailed geometry and studies precisely the interactions between particles and matter. Only the Monte Carlo method offers such a possibility. However, with significant attenuation, analogue simulation remains inefficient: it becomes necessary to use biasing techniques, for which the solution of the adjoint transport equation is essential. The Monte Carlo code Tripoli has been using such techniques successfully for a long time with different approximate adjoint solutions: these methods require the user to find out some parameters. If these parameters are not optimal or nearly optimal, the biased simulations may yield small figures of merit. This paper presents a description of the most important biasing techniques of the Monte Carlo code Tripoli; then we show how to calculate the importance function for general geometry in multigroup cases. We present a completely automatic biasing technique where the parameters of the biased simulation are deduced from the solution of the adjoint transport equation calculated by collision probabilities. In this study we estimate the importance function through the collision-probabilities method and evaluate its possibilities by means of a Monte Carlo calculation. We compare different biased simulations with the importance function calculated by collision probabilities for one-group and multigroup problems. We have run simulations with the new biasing method for one-group transport problems with isotropic shocks and for multigroup problems with anisotropic shocks. The results show that for one-group, homogeneous-geometry transport problems the method is quite optimal without the splitting and Russian roulette techniques, but for multigroup, heterogeneous X-Y geometry problems the figures of merit are higher if we add splitting and Russian roulette.

  8. Scaling up ATLAS Database Release Technology for the LHC Long Run

    International Nuclear Information System (INIS)

    Borodin, M; Nevski, P; Vaniachine, A

    2011-01-01

    To overcome scalability limitations in database access on the Grid, ATLAS introduced the Database Release technology, replicating databases in files. For years, Database Release technology assured scalable database access for Monte Carlo production on the Grid. Since the previous CHEP, Database Release technology was used successfully in ATLAS data reprocessing on the Grid. A frozen Conditions DB snapshot guarantees reproducibility and transactional consistency, isolating Grid data processing tasks from continuous conditions updates at the 'live' Oracle server. Database Release technology fully satisfies the requirements of ATLAS data reprocessing and Monte Carlo production. We parallelized the Database Release build workflow to avoid linear dependency of the build time on the length of the LHC data-taking period. In recent data reprocessing campaigns the build time was reduced by an order of magnitude thanks to a proven master-worker architecture used in Google MapReduce. We describe further Database Release optimizations scaling up the technology for the LHC long run.

  9. Monte Carlo verification of polymer gel dosimetry applied to radionuclide therapy: a phantom study

    International Nuclear Information System (INIS)

    Gear, J I; Partridge, M; Flux, G D; Charles-Edwards, E

    2011-01-01

    This study evaluates the dosimetric performance of the polymer gel dosimeter 'Methacrylic and Ascorbic acid in Gelatin, initiated by Copper' and its suitability for quality assurance and analysis of I-131-targeted radionuclide therapy dosimetry. Four batches of gel were manufactured in-house and sets of calibration vials and phantoms were created containing different concentrations of I-131-doped gel. Multiple dose measurements were made up to 700 h post preparation and compared to equivalent Monte Carlo simulations. In addition to uniformly filled phantoms the cross-dose distribution from a hot insert to a surrounding phantom was measured. In this example comparisons were made with both Monte Carlo and a clinical scintigraphic dosimetry method. Dose-response curves generated from the calibration data followed a sigmoid function. The gels appeared to be stable over many weeks of internal irradiation with a delay in gel response observed at 29 h post preparation. This was attributed to chemical inhibitors and slow reaction rates of long-chain radical species. For this reason, phantom measurements were only made after 190 h of irradiation. For uniformly filled phantoms of I-131 the accuracy of dose measurements agreed to within 10% when compared to Monte Carlo simulations. A radial cross-dose distribution measured using the gel dosimeter compared well to that calculated with Monte Carlo. Small inhomogeneities were observed in the dosimeter attributed to non-uniform mixing of monomer during preparation. However, they were not detrimental to this study where the quantitative accuracy and spatial resolution of polymer gel dosimetry were far superior to that calculated using scintigraphy. The difference between Monte Carlo and gel measurements was of the order of a few cGy, whilst with the scintigraphic method differences of up to 8 Gy were observed. A manipulation technique is also presented which allows 3D scintigraphic dosimetry measurements to be compared to polymer
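
    The sigmoid dose-response calibration described above can be reproduced with a standard nonlinear fit; in the sketch below, both the data points and the exact logistic form are illustrative assumptions, not the study's values.

    ```python
    # Fit a logistic dose-response curve to (made-up) calibration data.
    import numpy as np
    from scipy.optimize import curve_fit

    def sigmoid(dose, response_max, dose_50, slope):
        return response_max / (1.0 + np.exp(-slope * (dose - dose_50)))

    dose = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0])       # Gy
    resp = np.array([0.02, 0.08, 0.30, 0.62, 0.85, 0.95, 0.98])  # gel response

    params, _ = curve_fit(sigmoid, dose, resp, p0=[1.0, 6.0, 0.5])
    print("response_max=%.2f dose_50=%.2f slope=%.2f" % tuple(params))
    ```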

  10. New Monte Carlo approach to the adjoint Boltzmann equation

    International Nuclear Information System (INIS)

    De Matteis, A.; Simonini, R.

    1978-01-01

    A class of stochastic models for the Monte Carlo integration of the adjoint neutron transport equation is described. Some current general methods are brought within this class, thus preparing the ground for subsequent comparisons. Monte Carlo integration of the adjoint Boltzmann equation can be seen as a simulation of the transport of mathematical particles with reaction kernels not normalized to unity. This last feature is a source of difficulty: it can influence the variance of the result negatively and also often leads to the preparation of special "libraries" consisting of tables of normalization factors as functions of energy, presently used by several methods. These are the two main points that are discussed and that are taken into account to devise a nonmultigroup method of solution for a certain class of problems. Reactions considered in detail are radiative capture, elastic scattering, discrete levels and continuum inelastic scattering, for which the need for tables has been almost completely eliminated. The basic policy pursued to avoid a source of statistical fluctuations is to try to make the statistical weight of the traveling particle dependent only on its starting and current energies, at least in simple cases. The effectiveness of the sampling schemes proposed is supported by numerical comparison with other more general adjoint Monte Carlo methods. Computation of the neutron flux at a point by means of an adjoint formulation is the problem taken as a test for numerical experiments. Very good results have been obtained in the difficult case of resonant cross sections.

  11. The design of the run Clever randomized trial: running volume, -intensity and running-related injuries.

    Science.gov (United States)

    Ramskov, Daniel; Nielsen, Rasmus Oestergaard; Sørensen, Henrik; Parner, Erik; Lind, Martin; Rasmussen, Sten

    2016-04-23

    Injury incidence and prevalence in running populations have been investigated and documented in several studies. However, knowledge about injury etiology and prevention is needed. Training errors in running are modifiable risk factors, and people engaged in recreational running need evidence-based running schedules to minimize the risk of injury. The existing literature on running volume, running intensity and the development of injuries shows conflicting results. This may be related to previously applied study designs, the methods used to quantify the performed running and the statistical analysis of the collected data. The aim of the Run Clever trial is to investigate whether a focus on running intensity compared with a focus on running volume in a running schedule influences the overall injury risk differently. The Run Clever trial is a randomized trial with a 24-week follow-up. Healthy recreational runners between 18 and 65 years with an average of 1-3 running sessions per week over the past 6 months are included. Participants are randomized into two intervention groups, Schedule-I and Schedule-V. Schedule-I emphasizes a progression in running intensity by increasing the weekly volume of running at a hard pace, while Schedule-V emphasizes a progression in running volume by increasing the weekly overall volume. Data on the running performed are collected by GPS. Participants who sustain running-related injuries are diagnosed by a team of physiotherapists using standardized diagnostic criteria. The members of the diagnostic team are blinded. The study design, procedures and informed consent were approved by the Ethics Committee Northern Denmark Region (N-20140069). The Run Clever trial will provide insight into possible differences in injury risk between running schedules emphasizing either running intensity or running volume. The risk of sustaining volume- and intensity-related injuries will be compared in the two intervention groups using a competing

  12. Running Club - Nocturne des Evaux

    CERN Multimedia

    Running club

    2017-01-01

    CERN's runners once again climbed to the top steps of the podium at the inter-company race. This team race, which takes place at night in teams of 3 to 4 runners, is unique in the region for its original format: group starts every 30 seconds, and the first 3 runners must cross the finish line together. A double victory for the Running Club at the Nocturne!!!! 1st place for the women's team and 22nd in the overall ranking; 1st place for the mixed team and 4th overall, beating the event's mixed-team record by about 1 minute in the process; 10th place for the men's team. Full results at http://www.chp-geneve.ch/web-cms/index.php/nocturne-des-evaux

  13. Towards scalable parallelism in Monte Carlo particle transport codes using remote memory access

    Energy Technology Data Exchange (ETDEWEB)

    Romano, Paul K [Los Alamos National Laboratory]; Brown, Forrest B [Los Alamos National Laboratory]; Forget, Benoit [MIT]

    2010-01-01

    One forthcoming challenge in the area of high-performance computing is having the ability to run large-scale problems while coping with less memory per compute node. In this work, we investigate a novel data decomposition method that would allow Monte Carlo transport calculations to be performed on systems with limited memory per compute node. In this method, each compute node remotely retrieves a small set of geometry and cross-section data as needed and remotely accumulates local tallies when crossing the boundary of the local spatial domain. Initial results demonstrate that while the method does allow large problems to be run in a memory-limited environment, achieving scalability may be difficult due to inefficiencies in the current implementation of RMA operations.

  14. Towards scalable parallelism in Monte Carlo particle transport codes using remote memory access

    International Nuclear Information System (INIS)

    Romano, Paul K.; Forget, Benoit; Brown, Forrest

    2010-01-01

    One forthcoming challenge in the area of high-performance computing is having the ability to run large-scale problems while coping with less memory per compute node. In this work, we investigate a novel data decomposition method that would allow Monte Carlo transport calculations to be performed on systems with limited memory per compute node. In this method, each compute node remotely retrieves a small set of geometry and cross-section data as needed and remotely accumulates local tallies when crossing the boundary of the local spatial domain. Initial results demonstrate that while the method does allow large problems to be run in a memory-limited environment, achieving scalability may be difficult due to inefficiencies in the current implementation of RMA operations. (author)

  15. Monte Carlo methods

    Directory of Open Access Journals (Sweden)

    Bardenet Rémi

    2013-07-01

    Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow these integrals to be computed numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which rejection sampling, importance sampling and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretical justification of the algorithms as well as practical advice, trying to relate both. We discuss the application of Monte Carlo in experimental physics, and point to landmarks in the literature for the curious reader.
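
    Of the algorithms listed, MCMC is the easiest to demonstrate compactly. The sketch below is a minimal random-walk Metropolis sampler targeting a standard normal density, purely as an illustration of the class of methods reviewed.

    ```python
    # Minimal random-walk Metropolis sampler for a standard normal target.
    import math, random

    def log_target(x):
        return -0.5 * x * x          # log N(0, 1) density, up to a constant

    def metropolis(n_samples, step=1.0, seed=1):
        rng, x, samples = random.Random(seed), 0.0, []
        for _ in range(n_samples):
            proposal = x + rng.gauss(0.0, step)
            # accept with probability min(1, target(proposal) / target(x))
            if math.log(rng.random()) < log_target(proposal) - log_target(x):
                x = proposal
            samples.append(x)
        return samples

    draws = metropolis(50_000)
    mean = sum(draws) / len(draws)
    var = sum(d * d for d in draws) / len(draws) - mean ** 2
    print(mean, var)                 # should approach 0 and 1
    ```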

  16. Influence of treadmill acceleration on actual walk-to-run transition.

    Science.gov (United States)

    Van Caekenberghe, I; Segers, V; De Smet, K; Aerts, P; De Clercq, D

    2010-01-01

    When accelerating continuously, humans spontaneously change from a walking to a running pattern by means of a walk-to-run transition (WRT). Results of previous studies indicate that when higher treadmill accelerations are imposed, higher WRT speeds can be expected. By studying the kinematics of the WRT at different accelerations, the underlying mechanisms can be unravelled. 19 young, healthy female subjects performed walk-to-run transitions on a constantly accelerating treadmill (0.1, 0.2 and 0.5 m·s⁻²). A higher acceleration induced a higher WRT speed, by affecting the preparation of the transition as well as the actual transition step. Increasing the acceleration caused a higher WRT speed as a result of a greater step length during the transition step, which was mainly a consequence of a prolonged airborne phase. Besides this effect on the transition step, the direct preparation phase of the transition (i.e. the last walking step before transition) appeared to fulfil specific constraints required to execute the transition regardless of the acceleration imposed. This highlights an important role for this step in the debate regarding possible determinants of the WRT. In addition, spatiotemporal and kinematical data confirmed that the WRT remains a discontinuous change of gait pattern at all accelerations imposed. It is concluded that the walk-to-run transition is a discontinuous switch from walking to running which depends on the magnitude of treadmill belt acceleration. Copyright 2009 Elsevier B.V. All rights reserved.

  17. Milagro Version 2 An Implicit Monte Carlo Code for Thermal Radiative Transfer: Capabilities, Development, and Usage

    Energy Technology Data Exchange (ETDEWEB)

    T.J. Urbatsch; T.M. Evans

    2006-02-15

    We have released Version 2 of Milagro, an object-oriented C++ code that performs radiative transfer using Fleck and Cummings' Implicit Monte Carlo method. Milagro, a part of the Jayenne program, is a stand-alone driver code used as a methods research vehicle and to verify its underlying classes. These underlying classes are used to construct Implicit Monte Carlo packages for external customers. Milagro-2 represents a design overhaul that allows better parallelism and extensibility. New features in Milagro-2 include verified momentum deposition, restart capability, graphics capability, exact energy conservation, and improved load balancing and parallel efficiency. A users' guide also describes how to configure, make, and run Milagro-2.

  18. 40 CFR 1054.501 - How do I run a valid emission test?

    Science.gov (United States)

    2010-07-01

    Title 40, Protection of Environment — Test Procedures, § 1054.501: How do I run a valid emission test? (a) Applicability. This subpart is addressed to you... The provisions of 40 CFR 1065.405 describe how to prepare an engine for testing. However, you may consider...

  19. Development of Monte Carlo-based pebble bed reactor fuel management code

    International Nuclear Information System (INIS)

    Setiadipura, Topan; Obara, Toru

    2014-01-01

    Highlights: • A new Monte Carlo-based fuel management code for the OTTO-cycle pebble bed reactor was developed. • The double heterogeneity was modeled using a statistical method in the MVP-BURN code. • The code can perform analysis of the equilibrium and non-equilibrium phases. • Code-to-code comparisons for a Once-Through-Then-Out case were investigated. • The ability of the code to accommodate a void cavity was confirmed. - Abstract: A fuel management code for pebble bed reactors (PBRs) based on the Monte Carlo method has been developed in this study. The code, named Monte Carlo burnup analysis code for PBR (MCPBR), enables a simulation of the Once-Through-Then-Out (OTTO) cycle of a PBR from the running-in phase to the equilibrium condition. In MCPBR, a burnup calculation based on a continuous-energy Monte Carlo code, MVP-BURN, is coupled with an additional utility code to be able to simulate the OTTO cycle of a PBR. MCPBR has several advantages in modeling PBRs, namely its Monte Carlo neutron transport modeling, its capability of explicitly modeling the double heterogeneity of the PBR core, and its ability to model different axial fuel speeds in the PBR core. Analysis at the equilibrium condition of a simplified PBR was used as the validation test of MCPBR. The calculation results of the code were compared with the results of diffusion-based PBR fuel management codes, namely the VSOP and PEBBED codes. Using the JENDL-4.0 nuclide library, MCPBR gave 4.15% and 3.32% lower keff values compared to VSOP and PEBBED, respectively, while using JENDL-3.3, MCPBR gave 2.22% and 3.11% higher keff values compared to VSOP and PEBBED, respectively. The ability of MCPBR to analyze neutron transport in the top void of the PBR core and its effects was also confirmed.

  20. Entropic sampling in the path integral Monte Carlo method

    International Nuclear Information System (INIS)

    Vorontsov-Velyaminov, P N; Lyubartsev, A P

    2003-01-01

    We have extended the entropic sampling Monte Carlo method to the case of the path integral representation of a quantum system. A two-dimensional density of states is introduced into the path integral form of the quantum canonical partition function. The entropic sampling technique within the algorithm suggested recently by Wang and Landau (Wang F and Landau D P 2001 Phys. Rev. Lett. 86 2050) is then applied to calculate the corresponding entropy distribution. A three-dimensional quantum oscillator is considered as an example. Canonical distributions for a wide range of temperatures are obtained in a single simulation run, and exact data for the energy are reproduced.
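
    The Wang-Landau flat-histogram update at the heart of the scheme is easy to show on a toy system. The sketch below estimates the density of states of a small 2D Ising lattice rather than the paper's path-integral setting, and uses a simplified modification-factor schedule with no histogram-flatness check.

    ```python
    # Toy Wang-Landau estimate of the density of states g(E) of a 4x4 Ising model.
    import math, random

    L = 4
    rng = random.Random(0)
    spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

    def site_energy(i, j):           # bond energy of site (i, j) with 4 neighbors
        return -spins[i][j] * (spins[(i + 1) % L][j] + spins[i - 1][j]
                               + spins[i][(j + 1) % L] + spins[i][j - 1])

    def total_energy():              # each bond is counted twice, hence // 2
        return sum(site_energy(i, j) for i in range(L) for j in range(L)) // 2

    log_g, E, ln_f = {}, total_energy(), 1.0
    while ln_f > 1e-4:
        for _ in range(10_000):
            i, j = rng.randrange(L), rng.randrange(L)
            E_new = E - 2 * site_energy(i, j)        # energy after flipping (i, j)
            # accept with probability min(1, g(E) / g(E_new)) -> flat visits in E
            if log_g.get(E, 0.0) - log_g.get(E_new, 0.0) > math.log(rng.random()):
                spins[i][j] *= -1
                E = E_new
            log_g[E] = log_g.get(E, 0.0) + ln_f      # raise the running estimate
        ln_f /= 2.0                                  # simplified schedule
    print(sorted(log_g.items()))
    ```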

  1. Monte-Carlo code calculation of 3D reactor core model with usage of burnt fuel isotopic compositions, obtained by engineering codes

    Energy Technology Data Exchange (ETDEWEB)

    Aleshin, Sergey S.; Gorodkov, Sergey S.; Shcherenko, Anna I. [National Research Centre 'Kurchatov Institute', Moscow (Russian Federation)]

    2016-09-15

    A burn-up calculation of large systems by a Monte-Carlo code (MCU) is a complex process requiring large computational costs. Previously prepared isotopic compositions are proposed to be used for Monte-Carlo code calculations of different system states with burnt fuel. The isotopic compositions are calculated by an approximation method. The approximation method is based on the use of a spectral functionality and reference isotopic compositions that are calculated by the engineering codes (TVS-M, BIPR-7A and PERMAK-A). The multiplication factors and power distributions of FAs from a 3-D reactor core are calculated in this work by the Monte-Carlo code MCU using the previously prepared isotopic compositions. Separate states of the burnt core are considered. The results of the MCU calculations were compared with those obtained by the engineering codes.

  2. Transport methods: general. 2. Monte Carlo Particle Transport in Media with Exponentially Varying Time-Dependent Cross Sections

    International Nuclear Information System (INIS)

    Brown, Forrest B.; Martin, William R.

    2001-01-01

    We have investigated Monte Carlo schemes for analyzing particle transport through media with exponentially varying time-dependent cross sections. For such media, the cross sections are represented in the form Σ(t) = Σ₀e^(−at) (1), or equivalently as Σ(x) = Σ₀e^(−bx) (2), where b = a/v and v is the particle speed. For the following discussion, the parameters a and b may be either positive, for exponentially decreasing cross sections, or negative, for exponentially increasing cross sections. For most time-dependent Monte Carlo applications, the time and spatial variations of the cross-section data are handled by means of a stepwise procedure, holding the cross sections constant for each region over a small time interval Δt, performing the Monte Carlo random walk over the interval Δt, updating the cross sections, and then repeating for a series of time intervals. Continuously varying spatial- or time-dependent cross sections can be treated in a rigorous Monte Carlo fashion using delta-tracking, but inefficiencies may arise if the range of cross-section variation is large. In this paper, we present a new method for sampling collision distances directly for cross sections that vary exponentially in space or time. The method is exact and efficient and has direct application to Monte Carlo radiation transport methods. To verify that the probability density function (PDF) is correct and that the random-sampling procedure yields correct results, numerical experiments were performed using a one-dimensional Monte Carlo code. The physical problem consisted of a beam source impinging on a purely absorbing infinite slab, with a slab thickness of 1 cm and Σ₀ = 1 cm⁻¹. Monte Carlo calculations with 10 000 particles were run for a range of the exponential parameter b from −5 to +20 cm⁻¹. Two separate Monte Carlo calculations were run for each choice of b, a continuously varying case using the random-sampling procedures described earlier, and a 'conventional' case where the
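
    Given Σ(x) = Σ₀e^(−bx), the optical depth to distance s is τ(s) = (Σ₀/b)(1 − e^(−bs)), so a collision distance follows by setting τ(s) = −ln ξ and inverting. The sketch below is one way to implement and check that inversion against the absorbing-slab verification problem described above; it is an illustration, not the paper's code.

    ```python
    # Direct sampling of collision distances for Sigma(x) = sigma0 * exp(-b*x).
    import math, random

    def sample_distance(sigma0, b, rng):
        """Return a collision distance, or math.inf when the particle escapes
        (possible for b > 0, where the total optical depth is capped at sigma0/b)."""
        tau = -math.log(rng.random())        # sampled optical depth
        if b == 0.0:
            return tau / sigma0              # constant cross-section limit
        arg = 1.0 - b * tau / sigma0
        if arg <= 0.0:
            return math.inf                  # optical depth saturates: no collision
        return -math.log(arg) / b            # invert tau(s)

    # Check: transmission through a purely absorbing 1 cm slab with sigma0 = 1/cm.
    rng = random.Random(7)
    sigma0, b, thickness, n = 1.0, 5.0, 1.0, 200_000
    passed = sum(sample_distance(sigma0, b, rng) > thickness for _ in range(n))
    analytic = math.exp(-(sigma0 / b) * (1.0 - math.exp(-b * thickness)))
    print(passed / n, analytic)              # the two should agree closely
    ```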

  3. Running economy and energy cost of running with backpacks.

    Science.gov (United States)

    Scheer, Volker; Cramer, Leoni; Heitkamp, Hans-Christian

    2018-05-02

    Running is a popular recreational activity, and additional weight is often carried in backpacks on longer runs. Our aim was to examine running economy and other physiological parameters while running with 1 kg and 3 kg backpacks at different submaximal running velocities. 10 male recreational runners (age 25 ± 4.2 years, VO2peak 60.5 ± 3.1 ml·kg⁻¹·min⁻¹) performed runs of 5 minutes duration on a motorized treadmill at three different submaximal speeds of 70, 80 and 90% of the speed at the anaerobic lactate threshold (sLT), without additional weight and carrying a 1 kg or 3 kg backpack. Oxygen consumption, heart rate, lactate and RPE were measured and analysed. Oxygen consumption, energy cost of running and heart rate increased significantly while running with a backpack weighing 3 kg compared to running without additional weight at 80% sLT (p=0.026, p=0.009 and p=0.003) and at 90% sLT (p<0.001, p=0.001 and p=0.001). Running with a 1 kg backpack showed a significant increase in heart rate at 80% sLT (p=0.008) and a significant increase in oxygen consumption and heart rate at 90% sLT (p=0.045 and p=0.007) compared to running without additional weight. While running at 70% sLT, running economy and cardiovascular effort increased with weighted backpack running compared to running without additional weight; however, these increases did not reach statistical significance. Running economy deteriorates and cardiovascular effort increases while running with additional backpack weight, especially at higher submaximal running speeds. Backpack weight should therefore be kept to a minimum.

  4. Effects of a concurrent strength and endurance training on running performance and running economy in recreational marathon runners.

    Science.gov (United States)

    Ferrauti, Alexander; Bergermann, Matthias; Fernandez-Fernandez, Jaime

    2010-10-01

    The purpose of this study was to investigate the effects of a concurrent strength and endurance training program on the running performance and running economy of middle-aged runners during their marathon preparation. Twenty-two (8 women and 14 men) recreational runners (mean ± SD: age 40.0 ± 11.7 years; body mass index 22.6 ± 2.1 kg·m⁻²) were separated into 2 groups (n = 11; combined endurance running and strength training program [ES]: 9 men, 2 women; and endurance running [E]: 7 men, 4 women). Both completed an 8-week intervention period that consisted of either endurance training (E: 276 ± 108 minutes running per week) or a combined endurance and strength training program (ES: 240 ± 121 minutes running plus 2 strength training sessions per week [120 minutes]). Strength training was focused on trunk (strength endurance program) and leg muscles (high-intensity program). Before and after the intervention, subjects completed an incremental treadmill run and maximal isometric strength tests. The initial values for VO2peak (ES: 52.0 ± 6.1 vs. E: 51.1 ± 7.5 ml·kg⁻¹·min⁻¹) and anaerobic threshold (ES: 3.5 ± 0.4 vs. E: 3.4 ± 0.5 m·s⁻¹) were identical in both groups. A significant time × intervention effect was found for maximal isometric force of knee extension (ES: from 4.6 ± 1.4 to 6.2 ± 1.0 N·kg⁻¹, p < 0.05), whereas no significant changes were found in running economy at marathon running velocities (2.4 and 2.8 m·s⁻¹) or at submaximal blood lactate thresholds (2.0, 3.0, and 4.0 mmol·L⁻¹). Stride length and stride frequency also remained unchanged. The results suggest no benefits of an 8-week concurrent strength training for the running economy and coordination of recreational marathon runners despite a clear improvement in leg strength, maybe because of an insufficient sample size or a short intervention period.

  5. LHC Report: Run 1 – the final flurry

    CERN Multimedia

    Mike Lamont for the LHC team

    2013-01-01

    The proton-lead run ended early on the morning of Sunday, 10 February. The run can be considered an unqualified success and a testament to the painstaking preparation by the ion team. It was followed by a few short days of proton-proton collisions at intermediate energy, after which the final physics beams of what is now being called Run 1 (2009 – 2013) were dumped at 07:24 on Thursday, 14 February.   The five weeks of operations originally scheduled for 2013 had two main objectives: the delivery of 30 inverse nanobarns with proton-lead collisions; and around 5 inverse picobarns of proton-proton collisions at a beam energy of 1.38 TeV. Both of these objectives were met. As described in previous reports, the proton-lead run has gone remarkably well for a completely novel operational mode. However, there were some issues following the switch of beam direction on Friday, 1 February. In this exercise the ions become the clockwise beam and the experiments received lead-proton instead of ...

  6. Quantum computational finance: Monte Carlo pricing of financial derivatives

    OpenAIRE

    Rebentrost, Patrick; Gupt, Brajesh; Bromley, Thomas R.

    2018-01-01

    Financial derivatives are contracts that can have a complex payoff dependent upon underlying benchmark assets. In this work, we present a quantum algorithm for the Monte Carlo pricing of financial derivatives. We show how the relevant probability distributions can be prepared in quantum superposition, the payoff functions can be implemented via quantum circuits, and the price of financial derivatives can be extracted via quantum measurements. We show how the amplitude estimation algorithm can...
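
    For contrast, the classical Monte Carlo baseline that the quantum algorithm aims to accelerate fits in a few lines. The sketch below prices a European call under Black-Scholes dynamics; all parameters are illustrative.

    ```python
    # Plain Monte Carlo pricing of a European call (classical baseline).
    import math, random

    def mc_call_price(s0, strike, rate, vol, maturity, n_paths, seed=3):
        rng, payoff_sum = random.Random(seed), 0.0
        for _ in range(n_paths):
            z = rng.gauss(0.0, 1.0)
            # terminal price under risk-neutral geometric Brownian motion
            s_t = s0 * math.exp((rate - 0.5 * vol ** 2) * maturity
                                + vol * math.sqrt(maturity) * z)
            payoff_sum += max(s_t - strike, 0.0)
        return math.exp(-rate * maturity) * payoff_sum / n_paths

    print(mc_call_price(s0=100.0, strike=105.0, rate=0.02, vol=0.2,
                        maturity=1.0, n_paths=200_000))   # ~6.7 for these inputs
    ```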

  7. Monte Carlo simulations of plutonium gamma-ray spectra

    International Nuclear Information System (INIS)

    Koenig, Z.M.; Carlson, J.B.; Wang, Tzu-Fang; Ruhter, W.D.

    1993-01-01

    Monte Carlo calculations were investigated as a means of simulating the gamma-ray spectra of Pu. These simulated spectra will be used to develop and evaluate gamma-ray analysis techniques for various nondestructive measurements. Simulated spectra of calculational standards can be used for code intercomparisons, to understand systematic biases, and to estimate minimum detection levels of existing and proposed nondestructive analysis instruments. The capability to simulate gamma-ray spectra from HPGe detectors could significantly reduce the costs of preparing large numbers of real reference materials. MCNP was used for the Monte Carlo transport of the photons. Results from the MCNP calculations were folded in with a detector response function to produce a realistic spectrum. Plutonium spectrum peaks were produced with Lorentzian shapes for the x-rays and with Gaussian distributions. The MGA code determined the Pu isotopes and specific power of this calculated spectrum and compared it to a similar analysis of a measured spectrum.
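
    The folding step described above amounts to spreading each computed line over the energy axis with a detector response function. The sketch below uses a purely Gaussian response and made-up peak positions (the actual work also used Lorentzian shapes for the x-rays).

    ```python
    # Fold an ideal line spectrum with a Gaussian detector response.
    import numpy as np

    def fold_with_response(lines_kev, intensities, bins_kev, fwhm_kev=1.2):
        sigma = fwhm_kev / 2.355                     # FWHM -> standard deviation
        spectrum = np.zeros_like(bins_kev)
        for e0, strength in zip(lines_kev, intensities):
            spectrum += strength * np.exp(-0.5 * ((bins_kev - e0) / sigma) ** 2)
        return spectrum / (sigma * np.sqrt(2.0 * np.pi))

    bins = np.arange(90.0, 110.0, 0.1)               # keV grid
    folded = fold_with_response([94.7, 98.4, 103.7], [1.0, 0.6, 0.3], bins)
    print(bins[np.argmax(folded)])                   # strongest peak location
    ```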

  8. Overview of the MCU Monte Carlo software package

    International Nuclear Information System (INIS)

    Kalugin, M.A.; Oleynik, D.S.; Shkarovsky, D.A.

    2013-01-01

    MCU (Monte Carlo Universal) is a project on the development and practical use of a universal computer code for simulation of particle transport (neutrons, photons, electrons, positrons) in three-dimensional systems by means of the Monte Carlo method. This paper provides information on the current state of the project. The developed libraries of constants are briefly described, and the potentialities of the MCU-5 package modules and the executable codes compiled from them are characterized. Examples of important problems of reactor physics solved with the code are presented. It is shown that the MCU constructor tool is able to assemble a full-scale 3D model from templates describing single components using a simple and intuitive graphical user interface. The templates are prepared by a skilled user and stored in the constructor's template library. The ordinary user works with the graphical user interface and does not deal with MCU input data directly. At present there are template libraries for several types of reactors.

  9. Comparing Effects of Feedstock and Run Conditions on Pyrolysis Products Produced at Pilot-Scale

    Energy Technology Data Exchange (ETDEWEB)

    Dunning, Timothy C [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gaston, Katherine R [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Wilcox, Esther [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-01-19

    Fast pyrolysis is a promising pathway for mass production of liquid transportable biofuels. The Thermochemical Process Development Unit (TCPDU) pilot plant at NREL is conducting research to support the Bioenergy Technologies Office's 2017 goal of a $3 per gallon biofuel. In preparation for a down-selection of feedstock and run conditions, four different feedstocks were run at three different run conditions. The products produced were characterized extensively. Hot pyrolysis vapors and light gases were analyzed on a slip stream, and oil and char samples were characterized post-run.

  10. Application of monte-carlo method in definition of key categories of most radioactive polluted soil

    Energy Technology Data Exchange (ETDEWEB)

    Mahmudov, H M; Valibeyova, G; Jafarov, Y D; Musaeva, Sh Z [Institute of Radiation Problems, Azerbaijan National Academy of Sciences, Baku (Azerbaijan)]; and others

    2006-10-15

    Full text: The principle of analysis by the Monte Carlo method consists of a choice of random variables for the exposure dose rate coefficients and for the data on activity, within the boundaries of their individual frequency distributions of exposure dose rates. Analysis using the Monte Carlo method is useful for sensitivity analysis of the measured exposure dose rate in order to define the major factors causing uncertainty in reports. Such concepts can be valuable for the definition of key categories of radiation-polluted soil and the establishment of priorities in using resources to enhance the report. The relative uncertainty of radiation-polluted soil categories determined with the help of the Monte Carlo analysis can, where available, be applied using the more significant divergence between the average value and a confidence limit, in cases where the available resources allow estimates to be prepared for the most significant categories of sources. Use of the notion "uncertainty" in reports also allows a threshold value to be set for a key category of sources, if necessary, for exact reflection of 90 per cent uncertainty in the reports. According to radiation safety norms, a radiation background level exceeding 33 µR/hour is considered dangerous. Using the Monte Carlo method, the most dangerous sites and the sites most frequently subjected to disposal and utilization were chosen from the analyzed samples of polluted soil.

  11. Application of monte-carlo method in definition of key categories of most radioactive polluted soil

    International Nuclear Information System (INIS)

    Mahmudov, H.M; Valibeyova, G.; Jafarov, Y.D; Musaeva, Sh.Z

    2006-01-01

    Full text: The principle of analysis by the Monte Carlo method consists of a choice of random variables for the exposure dose rate coefficients and for the data on activity, within the boundaries of their individual frequency distributions of exposure dose rates. Analysis using the Monte Carlo method is useful for sensitivity analysis of the measured exposure dose rate in order to define the major factors causing uncertainty in reports. Such concepts can be valuable for the definition of key categories of radiation-polluted soil and the establishment of priorities in using resources to enhance the report. The relative uncertainty of radiation-polluted soil categories determined with the help of the Monte Carlo analysis can, where available, be applied using the more significant divergence between the average value and a confidence limit, in cases where the available resources allow estimates to be prepared for the most significant categories of sources. Use of the notion "uncertainty" in reports also allows a threshold value to be set for a key category of sources, if necessary, for exact reflection of 90 per cent uncertainty in the reports. According to radiation safety norms, a radiation background level exceeding 33 µR/hour is considered dangerous. Using the Monte Carlo method, the most dangerous sites and the sites most frequently subjected to disposal and utilization were chosen from the analyzed samples of polluted soil.

  12. Exploring Monte Carlo methods

    CERN Document Server

    Dunn, William L

    2012-01-01

    Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle proble
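
    The Buffon's needle problem mentioned above is itself a compact Monte Carlo example: a needle of length l dropped on a floor ruled with parallel lines a distance d apart (l ≤ d) crosses a line with probability 2l/(πd), which can be inverted to estimate π. A minimal sketch:

    ```python
    # Buffon's needle estimate of pi.
    import math, random

    def estimate_pi(n_drops, needle_len=1.0, line_gap=1.0, seed=11):
        rng, hits = random.Random(seed), 0
        for _ in range(n_drops):
            center = rng.uniform(0.0, line_gap / 2.0)   # distance to nearest line
            angle = rng.uniform(0.0, math.pi / 2.0)     # needle orientation
            if center <= (needle_len / 2.0) * math.sin(angle):
                hits += 1                               # the needle crosses a line
        return 2.0 * needle_len * n_drops / (line_gap * hits)

    print(estimate_pi(1_000_000))                       # ~3.14
    ```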

  13. Specialized Monte Carlo codes versus general-purpose Monte Carlo codes

    International Nuclear Information System (INIS)

    Moskvin, Vadim; DesRosiers, Colleen; Papiez, Lech; Lu, Xiaoyi

    2002-01-01

    The possibilities of Monte Carlo modeling for dose calculations and treatment optimization are quite limited in radiation oncology applications. The main reason is that the Monte Carlo technique for dose calculations is time consuming, while treatment planning may require hundreds of possible cases of dose simulations to be evaluated for dose optimization. The second reason is that the general-purpose codes widely used in practice require an experienced user to customize them for calculations. This paper discusses a concept of Monte Carlo code design that can avoid the main problems that are preventing widespread use of this simulation technique in medical physics. (authors)

  14. Talk | Physical preparation before a sports competition | 16 April

    CERN Multimedia

    2013-01-01

    In the run-up to the annual CERN Relay Race and as part of the Move! Eat better campaign, the Medical Service is organising a talk on physical preparation before a sports competition or before the start of a sporting season.     Come along to the Council Chamber on 16 April at 5:00 p.m. to discover the secrets of good physical preparation. You'll get plenty of tips, techniques and exercises, and find out how your whole sporting experience can be enhanced by good physical preparation.  This advice will be especially useful to help you prepare for the CERN Relay Race, whether you’re a casual jogger or a hardened road-racer. The talk will be moderated (in French) by: Rachel Bray, President of the CERN Fitness Club; Olivier Baldacchino, professional running coach and trainer of the CERN Running Club, who will give tips on how to prepare for races, in particular the CERN Relay Race; and Jean-Yves Le Meur, a member of the French national disa...

  15. Barefoot running: biomechanics and implications for running injuries.

    Science.gov (United States)

    Altman, Allison R; Davis, Irene S

    2012-01-01

    Despite the technological developments in modern running footwear, up to 79% of runners today get injured in a given year. As we evolved barefoot, examining this mode of running is insightful. Barefoot running encourages a forefoot strike pattern that is associated with a reduction in impact loading and stride length. Studies have shown a reduction in injuries to shod forefoot strikers as compared with rearfoot strikers. In addition to a forefoot strike pattern, barefoot running also affords the runner increased sensory feedback from the foot-ground contact, as well as increased energy storage in the arch. Minimal footwear is being used to mimic barefoot running, but it is not clear whether it truly does. The purpose of this article is to review current and past research on shod and barefoot/minimal footwear running and their implications for running injuries. Clearly more research is needed, and areas for future study are suggested.

  16. Monte Carlo principles and applications

    Energy Technology Data Exchange (ETDEWEB)

    Raeside, D E [Oklahoma Univ., Oklahoma City (USA). Health Sciences Center

    1976-03-01

    The principles underlying the use of Monte Carlo methods are explained, for readers who may not be familiar with the approach. The generation of random numbers is discussed, and the connection between Monte Carlo methods and random numbers is indicated. Outlines of two well established Monte Carlo sampling techniques are given, together with examples illustrating their use. The general techniques for improving the efficiency of Monte Carlo calculations are considered. The literature relevant to the applications of Monte Carlo calculations in medical physics is reviewed.
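
    One of the well-established sampling techniques such a review typically outlines is inverse-transform sampling; the sketch below draws exponential variates by inverting the CDF, as an illustration rather than a reproduction of the paper's examples.

    ```python
    # Inverse-transform sampling of an exponential distribution.
    import math, random

    def sample_exponential(rate, rng):
        """Invert F(x) = 1 - exp(-rate * x): x = -ln(1 - u) / rate."""
        return -math.log(1.0 - rng.random()) / rate

    rng = random.Random(5)
    draws = [sample_exponential(2.0, rng) for _ in range(100_000)]
    print(sum(draws) / len(draws))      # sample mean ~ 1/rate = 0.5
    ```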

  17. The influence of training and mental skills preparation on injury incidence and performance in marathon runners.

    Science.gov (United States)

    Hamstra-Wright, Karrie L; Coumbe-Lilley, John E; Kim, Hajwa; McFarland, Jose A; Huxel Bliven, Kellie C

    2013-10-01

    There has been a considerable increase in the number of participants running marathons over the past several years. The 26.2-mile race requires physical and mental stamina to complete successfully. However, studies have not investigated how running and mental skills preparation influence injury and performance. The purpose of our study was to describe the training and mental skills preparation of a typical group of runners as they began a marathon training program, assess the influence of training and mental skills preparation on injury incidence, and examine how training and mental skills preparation influence marathon performance. Healthy adults (N = 1,957) participating in an 18-week training program for a fall 2011 marathon were recruited for the study. One hundred twenty-five runners enrolled and received 4 surveys: pretraining, 6 weeks, 12 weeks, posttraining. The pretraining survey asked training and mental skills preparation questions. The 6- and 12-week surveys asked about injury incidence. The posttraining survey asked about injury incidence and marathon performance. Tempo runs during training preparation had a significant positive relationship to injury incidence in the 6-week survey (ρ[93] = 0.26, p = 0.01). The runners who reported incorporating tempo and interval runs, running more miles per week, and running more days per week in their training preparation ran significantly faster than those reporting fewer tempo and interval runs, miles per week, and days per week (p ≤ 0.05). Mental skills preparation did not influence injury incidence or marathon performance. To prevent injury and maximize performance during marathon training, coaches and runners should ensure that a solid foundation of running fitness and experience exists, then gradually build volume, and strategically incorporate runs of various speeds and distances.

  18. Study of TXRF experimental system by Monte Carlo simulation

    International Nuclear Information System (INIS)

    Costa, Ana Cristina M.; Leitao, Roberta G.; Lopes, Ricardo T.; Anjos, Marcelino J.; Conti, Claudio C.

    2011-01-01

    The Total-Reflection X-ray Fluorescence (TXRF) technique offers unique possibilities to study the concentrations of a wide range of trace elements in various types of samples. The TXRF technique is widely used to study trace elements in biological, medical and environmental samples due to its multielemental character as well as the simplicity of the sample preparation and quantification methods used. In general the TXRF experimental setup is not simple and might require substantial experimental effort. On the other hand, in recent years, portable TXRF experimental systems have been developed, which has motivated us to develop our own portable TXRF system. In this work we present a first step toward optimizing a TXRF experimental setup using Monte Carlo simulation with the MCNP code. The results show that the Monte Carlo simulation method can be used to investigate the design of a TXRF experimental system before its assembly. (author)

  19. 11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing

    CERN Document Server

    Nuyens, Dirk

    2016-01-01

    This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.

  20. Evolution of ATLAS conditions data and its management for LHC Run-2

    CERN Document Server

    Boehler, Michael; Formica, Andrea; Gallas, Elizabeth; Radescu, Voica

    2015-01-01

    The ATLAS detector at the LHC consists of several sub-detector systems. Both data taking and Monte Carlo (MC) simulation rely on an accurate description of the detector conditions from every subsystem, such as calibration constants, different scenarios of pile-up and noise conditions, size and position of the beam spot, etc. In order to guarantee database availability for critical online applications during data-taking, two database systems, one for online access and another one for all other database access, have been implemented. The long shutdown period has provided the opportunity to review and improve the Run-1 system: revise workflows, include new and innovative monitoring and maintenance tools and implement a new database instance for Run-2 conditions data. The detector conditions are organized by tag identification strings and managed independently by the different sub-detector experts. The individual tags are then collected and associated into a global conditions tag, assuring synchronization of var...

  1. Application of Monte-Carlo method in definition of key categories of most radioactive polluted soil

    International Nuclear Information System (INIS)

    Mahmudov, H.M.; Valibeyova, G.; Jafarov, Y.D.; Musaeva, Sh.Z.

    2006-01-01

    Full text: The principle of the Monte Carlo analysis consists in drawing random values of the exposure dose-rate coefficients and of the activity data within the bounds of their individual probability density distributions for the corresponding exposure dose rates. This procedure is repeated many times on a computer, and the results of each round of calculations build up an overall probability density distribution of the exposure dose rates. The Monte Carlo analysis can be carried out at the level of categories of radiation-polluted soil. It is useful for performing a sensitivity analysis of the measured exposure dose rate in order to identify the major factors causing uncertainty in the reports. Such insight is valuable for defining the key categories of radiation-polluted soil and for setting priorities in allocating resources to improve the reporting. The relative uncertainty of the radiation-polluted soil categories determined by the Monte Carlo analysis can be applied using the larger divergence between the mean value and a confidence limit when the bounds of the confidence interval are asymmetric. Determining the key categories of radiation-polluted soil is important for setting priorities in using the resources available for report preparation and for preparing estimates for the most significant source categories. The notion of uncertainty in reports also makes it possible, where necessary, to set a threshold value for a key category of sources, so that 90 percent uncertainty is accurately reflected in the reports. According to radiation safety norms, a radiation background level exceeding 33 mkR/hour is considered dangerous. Using the Monte Carlo method, the most dangerous sites and the sites most frequently subject to disposal and utilization were selected from the analyzed samples of
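
    The propagation scheme described above can be sketched in a few lines of Python. The input distributions below (a log-normal dose-rate coefficient and a normal activity) are hypothetical placeholders, not values from the study; the sketch only shows how repeated random draws build up the output distribution from which an asymmetric 90% confidence interval is read off.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000

        # Hypothetical input distributions for one soil category: a dose-rate
        # coefficient and a measured activity, each with its own pdf.
        coeff = rng.lognormal(mean=np.log(0.5), sigma=0.3, size=n)   # (mkR/h)/(Bq/g)
        activity = rng.normal(loc=60.0, scale=10.0, size=n)          # Bq/g

        dose_rate = coeff * activity   # mkR/h, one value per Monte Carlo round

        mean = dose_rate.mean()
        lo, hi = np.percentile(dose_rate, [5.0, 95.0])   # 90% confidence interval
        print(f"mean = {mean:.1f} mkR/h, 90% CI = [{lo:.1f}, {hi:.1f}]")
        # Asymmetric interval: report the larger divergence from the mean,
        # as suggested for key-category selection above.
        print("half-widths:", mean - lo, hi - mean)
        print("fraction above 33 mkR/h threshold:", (dose_rate > 33.0).mean())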

  2. Application of Monte-Carlo method in definition of key categories of most radioactive polluted soil

    Energy Technology Data Exchange (ETDEWEB)

    Mahmudov, H M; Valibeyova, G; Jafarov, Y D; Musaeva, Sh Z [Institute of Radiation Problems, Azerbaijan National Academy of Sciences, Baku (Azerbaijan)

    2006-11-15

    Full text: The principle of the Monte Carlo analysis consists in drawing random values of the exposure dose-rate coefficients and of the activity data within the bounds of their individual probability density distributions for the corresponding exposure dose rates. This procedure is repeated many times on a computer, and the results of each round of calculations build up an overall probability density distribution of the exposure dose rates. The Monte Carlo analysis can be carried out at the level of categories of radiation-polluted soil. It is useful for performing a sensitivity analysis of the measured exposure dose rate in order to identify the major factors causing uncertainty in the reports. Such insight is valuable for defining the key categories of radiation-polluted soil and for setting priorities in allocating resources to improve the reporting. The relative uncertainty of the radiation-polluted soil categories determined by the Monte Carlo analysis can be applied using the larger divergence between the mean value and a confidence limit when the bounds of the confidence interval are asymmetric. Determining the key categories of radiation-polluted soil is important for setting priorities in using the resources available for report preparation and for preparing estimates for the most significant source categories. The notion of uncertainty in reports also makes it possible, where necessary, to set a threshold value for a key category of sources, so that 90 percent uncertainty is accurately reflected in the reports. According to radiation safety norms, a radiation background level exceeding 33 mkR/hour is considered dangerous. Using the Monte Carlo method, the most dangerous sites and the sites most frequently subject to disposal and utilization were selected from the analyzed samples of

  3. Will ALICE run in the HL-LHC era?

    International Nuclear Information System (INIS)

    Wessels, J.P.

    2012-01-01

    We will present the perspectives for ion running in the HL-LHC era. In particular, ALICE is preparing a significant upgrade of its rate capabilities and is further extending its particle identification potential. This paves the way for heavy ion physics at unprecedented luminosities, which are expected in the HL-LHC era with the heaviest ions. Here, we outline a scenario in which ALICE will be taking data at a luminosity of L > 6×10^27 cm^-2 s^-1 for Pb-Pb with the aim of collecting at least 10 nb^-1. The potential interest of data-taking during high luminosity proton runs for ATLAS and CMS will also be commented on. (author)

  4. Interactive three-dimensional visualization and creation of geometries for Monte Carlo calculations

    International Nuclear Information System (INIS)

    Theis, C.; Buchegger, K.H.; Brugger, M.; Forkel-Wirth, D.; Roesler, S.; Vincke, H.

    2006-01-01

    The implementation of three-dimensional geometries for the simulation of radiation transport problems is a very time-consuming task. Each particle transport code supplies its own scripting language and syntax for creating the geometries. All of them are based on the Constructive Solid Geometry scheme, which requires a textual description. This makes the creation a tedious and error-prone task that is especially hard to master for novice users. The Monte Carlo code FLUKA comes with built-in support for creating two-dimensional cross-sections through the geometry, and FLUKACAD, a custom-built converter to the commercial Computer Aided Design package AutoCAD, exists for 3D visualization. For other codes, like MCNPX, a couple of different tools are available, but they are often specifically tailored to the particle transport code and its approach to implementing geometries. Complex constructive solid modeling usually requires very fast and expensive special-purpose hardware, which is not widely available. In this paper SimpleGeo is presented, which is an implementation of a generic, versatile, interactive geometry modeler using off-the-shelf hardware. It runs on Windows, with a Linux version currently under preparation. This paper describes its functionality, which allows for rapid interactive visualization as well as generation of three-dimensional geometries, and also discusses critical issues regarding common CAD systems.

  5. Efficient heterogeneous execution of Monte Carlo shielding calculations on a Beowulf cluster

    International Nuclear Information System (INIS)

    Dewar, D.; Hulse, P.; Cooper, A.; Smith, N.

    2005-01-01

    Recent work has been done in using a high-performance 'Beowulf' cluster computer system for the efficient distribution of Monte Carlo shielding calculations. This has enabled the rapid solution of complex shielding problems at low cost and with greater modularity and scalability than traditional platforms. The work has shown that a simple approach to distributing the workload is as efficient as using more traditional techniques such as PVM (Parallel Virtual Machine). In addition, when used in an operational setting this technique is fairer with the use of resources than traditional methods, in that it does not tie up a single computing resource but instead shares the capacity with other tasks. These developments in computing technology have enabled shielding problems to be solved that would have taken an unacceptably long time to run on traditional platforms. This paper discusses the BNFL Beowulf cluster and a number of tests that have recently been run to demonstrate the efficiency of the asynchronous technique in running the MCBEND program. The BNFL Beowulf currently consists of 84 standard PCs running RedHat Linux. Current performance of the machine has been estimated to be between 40 and 100 Gflop/s. When the whole system is employed on one problem up to four million particles can be tracked per second. There are plans to review its size in line with future business needs. (authors)
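
    The simple asynchronous distribution approach described above can be mimicked on a single machine with Python's standard multiprocessing pool, which hands independent Monte Carlo batches to whichever worker is free. This is a minimal sketch with a toy slab-transmission tally; it is not the actual MCBEND setup.

        import multiprocessing as mp
        import numpy as np

        def run_batch(args):
            """One independent Monte Carlo batch; seeded so batches are reproducible."""
            seed, n_histories = args
            rng = np.random.default_rng(seed)
            # Toy 'shielding' tally: fraction of exponential path lengths
            # exceeding a slab thickness of 3 mean free paths.
            return (rng.exponential(1.0, n_histories) > 3.0).mean()

        if __name__ == "__main__":
            batches = [(seed, 100_000) for seed in range(84)]  # one batch per node
            with mp.Pool() as pool:
                # imap_unordered hands out work as workers free up
                # (asynchronously), so a slow node never blocks the others.
                results = list(pool.imap_unordered(run_batch, batches))
            est = np.mean(results)
            err = np.std(results, ddof=1) / np.sqrt(len(results))
            print(f"transmission = {est:.5f} +/- {err:.5f}")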

  6. SU-E-T-180: Fano Cavity Test of Proton Transport in Monte Carlo Codes Running On GPU and Xeon Phi

    International Nuclear Information System (INIS)

    Sterpin, E; Sorriaux, J; Souris, K; Lee, J; Vynckier, S; Schuemann, J; Paganetti, H; Jia, X; Jiang, S

    2014-01-01

    Purpose: In proton dose calculation, clinically compatible speeds are now achieved with Monte Carlo codes (MC) that combine 1) adequate simplifications in the physics of transport and 2) the use of hardware architectures enabling massive parallel computing (like GPUs). However, the uncertainties related to the transport algorithms used in these codes must be kept minimal. Such algorithms can be checked with the so-called “Fano cavity test”. We implemented the test in two codes that run on specific hardware: gPMC on an nVidia GPU and MCsquare on an Intel Xeon Phi (60 cores). Methods: gPMC and MCsquare are designed for transporting protons in CT geometries. Both codes use the method of fictitious interaction to sample the step-length for each transport step. The considered geometry is a water cavity (2×2×0.2 cm^3, 0.001 g/cm^3) in a 10×10×50 cm^3 water phantom (1 g/cm^3). CPE in the cavity is established by generating protons over the phantom volume with a uniform momentum (energy E) and a uniform intensity per unit mass I. Assuming no nuclear reactions and no generation of other secondaries, the computed cavity dose should equal IE, according to Fano's theorem. Both codes were tested for initial proton energies of 50, 100, and 200 MeV. Results: For all energies, gPMC and MCsquare are within 0.3% and 0.2% of the theoretical value IE, respectively (0.1% standard deviation). Single-precision computations (instead of double) increased the error by about 0.1% in MCsquare. Conclusion: Despite the simplifications in the physics of transport, both gPMC and MCsquare successfully pass the Fano test. This ensures optimal accuracy of the codes for clinical applications within the uncertainties on the underlying physical models. It also opens the path to other applications of these codes, like the simulation of ion chamber response.
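
    The "method of fictitious interaction" mentioned in the abstract is commonly known as Woodcock or delta tracking: step lengths are sampled with a single majorant cross-section, and a candidate interaction is accepted as real with probability sigma(x)/sigma_maj. A minimal one-dimensional Python sketch, with a hypothetical cross-section profile mimicking the low-density cavity, is given below.

        import numpy as np

        rng = np.random.default_rng(1)

        def sigma(x):
            """Hypothetical macroscopic cross-section along depth x (1/cm):
            a low-density cavity between 10 and 10.2 cm, water elsewhere."""
            return 0.001 if 10.0 <= x < 10.2 else 1.0

        SIGMA_MAJ = 1.0  # majorant over the whole geometry

        def next_real_interaction(x):
            """Delta tracking: sample steps with the majorant, accept a real
            interaction with probability sigma(x)/SIGMA_MAJ, else the
            interaction is fictitious and tracking continues."""
            while True:
                x += -np.log(rng.random()) / SIGMA_MAJ
                if x >= 50.0:
                    return None            # escaped the 50 cm phantom
                if rng.random() < sigma(x) / SIGMA_MAJ:
                    return x               # real interaction site

        print([next_real_interaction(0.0) for _ in range(5)])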

  7. SU-E-T-180: Fano Cavity Test of Proton Transport in Monte Carlo Codes Running On GPU and Xeon Phi

    Energy Technology Data Exchange (ETDEWEB)

    Sterpin, E; Sorriaux, J; Souris, K; Lee, J; Vynckier, S [Universite catholique de Louvain, Brussels, Brussels (Belgium); Schuemann, J; Paganetti, H [Massachusetts General Hospital, Boston, MA (United States); Jia, X; Jiang, S [The University of Texas Southwestern Medical Ctr, Dallas, TX (United States)

    2014-06-01

    Purpose: In proton dose calculation, clinically compatible speeds are now achieved with Monte Carlo codes (MC) that combine 1) adequate simplifications in the physics of transport and 2) the use of hardware architectures enabling massive parallel computing (like GPUs). However, the uncertainties related to the transport algorithms used in these codes must be kept minimal. Such algorithms can be checked with the so-called “Fano cavity test”. We implemented the test in two codes that run on specific hardware: gPMC on an nVidia GPU and MCsquare on an Intel Xeon Phi (60 cores). Methods: gPMC and MCsquare are designed for transporting protons in CT geometries. Both codes use the method of fictitious interaction to sample the step-length for each transport step. The considered geometry is a water cavity (2×2×0.2 cm^3, 0.001 g/cm^3) in a 10×10×50 cm^3 water phantom (1 g/cm^3). CPE in the cavity is established by generating protons over the phantom volume with a uniform momentum (energy E) and a uniform intensity per unit mass I. Assuming no nuclear reactions and no generation of other secondaries, the computed cavity dose should equal IE, according to Fano's theorem. Both codes were tested for initial proton energies of 50, 100, and 200 MeV. Results: For all energies, gPMC and MCsquare are within 0.3% and 0.2% of the theoretical value IE, respectively (0.1% standard deviation). Single-precision computations (instead of double) increased the error by about 0.1% in MCsquare. Conclusion: Despite the simplifications in the physics of transport, both gPMC and MCsquare successfully pass the Fano test. This ensures optimal accuracy of the codes for clinical applications within the uncertainties on the underlying physical models. It also opens the path to other applications of these codes, like the simulation of ion chamber response.

  8. FERMILAB: Preparing to collide

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    Against the background of stringent Environment, Safety and Health (ES&H) regulations mandated by the US Department of Energy for all national Labs, Fermilab prepared to mount the next major Tevatron proton-antiproton collider run.

  9. Evolution of ATLAS conditions data and its management for LHC Run-2

    International Nuclear Information System (INIS)

    Böhler, Michael; Borodin, Mikhail; Formica, Andrea; Gallas, Elizabeth; Radescu, Voica

    2015-01-01

    The ATLAS detector at the LHC consists of several sub-detector systems. Both data taking and Monte Carlo (MC) simulation rely on an accurate description of the detector conditions from every subsystem, such as calibration constants, different scenarios of pile-up and noise conditions, size and position of the beam spot, etc. In order to guarantee database availability for critical online applications during data-taking, two database systems, one for online access and another one for all other database access, have been implemented. The long shutdown period has provided the opportunity to review and improve the Run-1 system: revise workflows, include new and innovative monitoring and maintenance tools and implement a new database instance for Run-2 conditions data. The detector conditions are organized by tag identification strings and managed independently by the different sub-detector experts. The individual tags are then collected and associated into a global conditions tag, assuring synchronization of various sub-detector improvements. Furthermore, a new concept was introduced to maintain conditions over all different data run periods in a single tag, by using Interval of Validity (IOV) dependent detector conditions for the MC database as well. This allows on-the-fly preservation of past conditions for data and MC and assures their sustainability with software evolution. This paper presents an overview of the commissioning of the new database instance, improved tools and workflows, and summarizes the actions taken during the Run-2 commissioning phase in the beginning of 2015. (paper)

  10. CDF run II run control and online monitor

    International Nuclear Information System (INIS)

    Arisawa, T.; Ikado, K.; Badgett, W.; Chlebana, F.; Maeshima, K.; McCrory, E.; Meyer, A.; Patrick, J.; Wenzel, H.; Stadie, H.; Wagner, W.; Veramendi, G.

    2001-01-01

    The authors discuss the CDF Run II Run Control and online event monitoring system. Run Control is the top level application that controls the data acquisition activities across 150 front end VME crates and related service processes. Run Control is a real-time multi-threaded application implemented in Java with flexible state machines, using JDBC database connections to configure clients, and including a user friendly and powerful graphical user interface. The CDF online event monitoring system consists of several parts: the event monitoring programs, the display to browse their results, the server program which communicates with the display via socket connections, the error receiver which displays error messages and communicates with Run Control, and the state manager which monitors the state of the monitor programs

  11. Testing Lorentz Invariance Emergence in the Ising Model using Monte Carlo simulations

    CERN Document Server

    Dias Astros, Maria Isabel

    2017-01-01

    In the context of Lorentz invariance as an emergent phenomenon at low energy scales in quantum gravity studies, a system composed of two interacting 3D Ising models (one with an anisotropy in one direction) was proposed. Two Monte Carlo simulations were run: one for the 2D Ising model and one for the target model. In both cases the observables (energy, magnetization, heat capacity and magnetic susceptibility) were computed for different lattice sizes, and a Binder cumulant was introduced in order to estimate the critical temperature of the systems. Moreover, the correlation function was calculated for the 2D Ising model.
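
    For reference, the Monte Carlo machinery used for the 2D Ising part of such a study is standard: Metropolis updates plus the Binder cumulant U = 1 - <m^4>/(3<m^2>^2), whose curves for different lattice sizes cross near the critical temperature. A minimal Python sketch with illustrative parameters (not those of the cited work) follows.

        import numpy as np

        rng = np.random.default_rng(7)

        def metropolis_sweep(s, beta):
            """One Metropolis sweep over an L x L lattice, periodic boundaries."""
            L = s.shape[0]
            for _ in range(L * L):
                i, j = rng.integers(0, L, size=2)
                nn = (s[(i+1) % L, j] + s[(i-1) % L, j]
                      + s[i, (j+1) % L] + s[i, (j-1) % L])
                dE = 2.0 * s[i, j] * nn
                if dE <= 0 or rng.random() < np.exp(-beta * dE):
                    s[i, j] = -s[i, j]

        def binder_cumulant(L, beta, n_meas=2000, n_warm=500):
            s = rng.choice([-1, 1], size=(L, L))
            m2 = m4 = 0.0
            for sweep in range(n_warm + n_meas):
                metropolis_sweep(s, beta)
                if sweep >= n_warm:
                    m = s.mean()
                    m2 += m**2
                    m4 += m**4
            m2, m4 = m2 / n_meas, m4 / n_meas
            return 1.0 - m4 / (3.0 * m2**2)

        # Curves U(beta) for different L cross near the critical point
        # (beta_c = log(1 + sqrt(2))/2 ~ 0.4407 for the 2D model).
        for L in (8, 16):
            print(L, binder_cumulant(L, 0.44))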

  12. The Effect of Training in Minimalist Running Shoes on Running Economy.

    Science.gov (United States)

    Ridge, Sarah T; Standifird, Tyler; Rivera, Jessica; Johnson, A Wayne; Mitchell, Ulrike; Hunter, Iain

    2015-09-01

    The purpose of this study was to examine the effect of minimalist running shoes on oxygen uptake during running before and after a 10-week transition from traditional to minimalist running shoes. Twenty-five recreational runners (no previous experience in minimalist running shoes) participated in submaximal VO2 testing at a self-selected pace while wearing traditional and minimalist running shoes. Ten of the 25 runners gradually transitioned to minimalist running shoes over 10 weeks (experimental group), while the other 15 maintained their typical training regimen (control group). All participants repeated submaximal VO2 testing at the end of 10 weeks. Testing included a 3 minute warm-up, 3 minutes of running in the first pair of shoes, and 3 minutes of running in the second pair of shoes. Shoe order was randomized. Average oxygen uptake was calculated during the last minute of running in each condition. The average change from pre- to post-training for the control group during testing in traditional and minimalist shoes was an improvement of 3.1 ± 15.2% and 2.8 ± 16.2%, respectively. The average change from pre- to post-training for the experimental group during testing in traditional and minimalist shoes was an improvement of 8.4 ± 7.2% and 10.4 ± 6.9%, respectively. Data were analyzed using a 2-way repeated measures ANOVA. There were no significant interaction effects, but the overall improvement in running economy across time (6.15%) was significant (p = 0.015). Running in minimalist running shoes improves running economy in experienced, traditionally shod runners, but not significantly more than when running in traditional running shoes. Improvement in running economy in both groups, regardless of shoe type, may have been due to compliance with training over the 10-week study period and/or familiarity with testing procedures. Key points: Running in minimalist footwear did not result in a change in running economy compared to running in traditional footwear.

  13. Managing the CMS Data and Monte Carlo Processing during LHC Run 2

    Science.gov (United States)

    Wissing, C.; CMS Collaboration

    2017-10-01

    In order to cope with the challenges expected during LHC Run 2, CMS put a number of enhancements into the main software packages and the tools used for centrally managed processing. In this presentation we highlight the improvements that allow CMS to deal with the increased trigger output rate, the increased pileup and the evolution in computing technology. The overall system aims at high operational flexibility and largely automated procedures. The tight coupling of workflow classes to types of sites has been drastically relaxed. Reliable and high-performing networking between most of the computing sites and the successful deployment of a data federation allow the execution of workflows using remote data access. That required the development of a largely automated system to assign workflows and to handle the necessary pre-staging of data. Another step towards flexibility has been the introduction of one large global HTCondor pool for all types of processing workflows and analysis jobs. Besides classical Grid resources, some opportunistic resources as well as Cloud resources have been integrated into that pool, which provides access to more than 200k CPU cores.

  14. Monte Carlo simulations for generic granite repository studies

    Energy Technology Data Exchange (ETDEWEB)

    Chu, Shaoping [Los Alamos National Laboratory; Lee, Joon H [SNL; Wang, Yifeng [SNL

    2010-12-08

    In a collaborative study between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL) for the DOE-NE Office of Fuel Cycle Technologies Used Fuel Disposition (UFD) Campaign project, we have conducted preliminary system-level analyses to support the development of a long-term strategy for geologic disposal of high-level radioactive waste. A general modeling framework consisting of a near-field and a far-field submodel for a granite GDSE was developed. A representative far-field transport model for a generic granite repository was merged with an integrated systems (GoldSim) near-field model. Integrated Monte Carlo model runs with the combined near- and far-field transport models were performed, and the parameter sensitivities were evaluated for the combined system. In addition, a subset of radionuclides that are potentially important to repository performance were identified and evaluated for a series of model runs. The analyses were conducted with different waste inventory scenarios. Analyses were also conducted for different repository radionuclide release scenarios. While the results to date are for a generic granite repository, the work establishes the method to be used in the future to provide guidance on the development of a strategy for long-term disposal of high-level radioactive waste in a granite repository.

  15. MCNP: a general Monte Carlo code for neutron and photon transport. Version 3A. Revision 2

    International Nuclear Information System (INIS)

    Briesmeister, J.F.

    1986-09-01

    This manual is a practical guide for the use of our general-purpose Monte Carlo code MCNP. The first chapter is a primer for the novice user. The second chapter describes the mathematics, data, physics, and Monte Carlo simulation found in MCNP. This discussion is not meant to be exhaustive - details of the particular techniques and of the Monte Carlo method itself will have to be found elsewhere. The third chapter shows the user how to prepare input for the code. The fourth chapter contains several examples, and the fifth chapter explains the output. The appendices show how to use MCNP on particular computer systems at the Los Alamos National Laboratory and also give details about some of the code internals that those who wish to modify the code may find useful. 57 refs

  16. Monte-Carlo Method Python Library for dose distribution Calculation in Brachytherapy

    Energy Technology Data Exchange (ETDEWEB)

    Randriantsizafy, R D; Ramanandraibe, M J [Madagascar Institut National des Sciences et Techniques Nucleaires, Antananarivo (Madagascar); Raboanary, R [Institut of astro and High-Energy Physics Madagascar, University of Antananarivo, Antananarivo (Madagascar)

    2007-07-01

    Cs-137 brachytherapy treatment has been performed in Madagascar since 2005. Treatment time calculation for a prescribed dose is done manually. A Monte-Carlo method Python library written at the Madagascar INSTN is experimentally used to calculate the dose distribution on the tumour and around it. The first validation of the code was done by comparing the library curves with the Nucletron company curves. To reduce the duration of the calculation, a grid of PCs was set up with a listener patch running on each PC. The library will be used to model the dose distribution in the patient's CT scan images for individualized and more accurate treatment time calculation for a prescribed dose.

  17. Monte-Carlo Method Python Library for dose distribution Calculation in Brachytherapy

    International Nuclear Information System (INIS)

    Randriantsizafy, R.D.; Ramanandraibe, M.J.; Raboanary, R.

    2007-01-01

    Cs-137 brachytherapy treatment has been performed in Madagascar since 2005. Treatment time calculation for a prescribed dose is done manually. A Monte-Carlo method Python library written at the Madagascar INSTN is experimentally used to calculate the dose distribution on the tumour and around it. The first validation of the code was done by comparing the library curves with the Nucletron company curves. To reduce the duration of the calculation, a grid of PCs was set up with a listener patch running on each PC. The library will be used to model the dose distribution in the patient's CT scan images for individualized and more accurate treatment time calculation for a prescribed dose.
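
    A minimal sketch of the kind of calculation such a library performs is shown below: Monte Carlo scoring of the radial dose profile around an isotropic Cs-137 point source in water, with all energy deposited at the first interaction site (a crude kerma approximation that ignores scattered photons and electron transport). The attenuation coefficient is an assumed nominal value; this is not the INSTN library itself.

        import numpy as np

        rng = np.random.default_rng(3)

        MU = 0.0857      # assumed attenuation coefficient of water at 662 keV, 1/cm
        E_GAMMA = 0.662  # MeV carried by each Cs-137 photon
        N = 1_000_000

        # For an isotropic point source only the radial distance matters, so we
        # sample the distance to the first interaction directly.
        r = rng.exponential(1.0 / MU, N)

        # Score deposited energy in spherical shells for a radial dose profile.
        edges = np.arange(0.0, 10.5, 0.5)
        energy, _ = np.histogram(r, bins=edges, weights=np.full(N, E_GAMMA))
        volume = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
        dose = energy / volume   # MeV per cm^3; water density 1 g/cm^3
        for r_mid, d in zip(0.5 * (edges[:-1] + edges[1:]), dose):
            print(f"r = {r_mid:4.2f} cm  dose ~ {d:.3e} MeV/cm^3")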

  18. Monte Carlo modeling of the Fastscan whole body counter response

    International Nuclear Information System (INIS)

    Graham, H.R.; Waller, E.J.

    2015-01-01

    Monte Carlo N-Particle (MCNP) was used to make a model of the Fastscan whole body counter for the purpose of calibration. Two models were made: one for the Pickering Nuclear Site, and one for the Darlington Nuclear Site. Once these models were benchmarked and found to be in good agreement, simulations were run to study the effect that different-sized phantoms had on the detected response; the shielding effect of torso fat was found to be not negligible. Simulations of a source positioned externally on the anterior or posterior of a person were also conducted to determine a ratio that could be used to decide whether a source is placed externally or internally. (author)

  19. On the use of stochastic approximation Monte Carlo for Monte Carlo integration

    KAUST Repository

    Liang, Faming

    2009-03-01

    The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration via a dynamically weighted estimator, by appealing to results from the literature on nonhomogeneous Markov chains. Our numerical results indicate that SAMC can yield significant savings over conventional Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, for problems for which the energy landscape is rugged. © 2008 Elsevier B.V. All rights reserved.
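
    As a point of comparison for the SAMC discussion above, the conventional Metropolis-Hastings estimator of an expectation E_f[h(X)] can be sketched in a few lines of Python. The bimodal target below is a made-up example of a mildly rugged energy landscape; with a small step size the chain mixes poorly between modes, which is precisely the situation dynamic weighting schemes such as SAMC aim to improve.

        import numpy as np

        rng = np.random.default_rng(11)

        def log_f(x):
            """Unnormalized log-density with two well-separated modes."""
            return np.logaddexp(-0.5 * (x - 3.0)**2, -0.5 * (x + 3.0)**2)

        def metropolis_integrate(h, n=200_000, step=1.0):
            """Estimate E_f[h(X)] with a random-walk Metropolis chain."""
            x, lf = 0.0, log_f(0.0)
            total = 0.0
            for _ in range(n):
                y = x + step * rng.normal()
                lfy = log_f(y)
                if np.log(rng.random()) < lfy - lf:
                    x, lf = y, lfy
                total += h(x)
            return total / n

        # True value is 10 for this equal-weight mixture of N(3,1) and N(-3,1);
        # slow crossings between modes make the estimate noisy.
        print("E[x^2] ~", metropolis_integrate(lambda x: x * x))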

  20. On the use of stochastic approximation Monte Carlo for Monte Carlo integration

    KAUST Repository

    Liang, Faming

    2009-01-01

    The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration

  1. ATLAS Distributed Computing in LHC Run2

    CERN Document Server

    Campana, Simone; The ATLAS collaboration

    2015-01-01

    The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run2. An increased data rate and the computing demands of the Monte-Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (ProdSys2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward the flexible computing model. Flexible computing utilization, exploiting opportunistic resources such as HPC, cloud, and volunteer computing, is embedded in the new computing model; the data access mechanisms have been enhanced with remote access, and the network topology and performance is deeply integrated into the core of the system. Moreover a new data management strategy, based on a defined lifetime for each dataset, has been defin...

  2. Responding for sucrose and wheel-running reinforcement: effect of pre-running.

    Science.gov (United States)

    Belke, Terry W

    2006-01-10

    Six male albino Wistar rats were placed in running wheels and exposed to a fixed interval 30-s schedule that produced either a drop of 15% sucrose solution or the opportunity to run for 15s as reinforcing consequences for lever pressing. Each reinforcer type was signaled by a different stimulus. To assess the effect of pre-running, animals were allowed to run for 1h prior to a session of responding for sucrose and running. Results showed that, after pre-running, response rates in the later segments of the 30-s schedule decreased in the presence of a wheel-running stimulus and increased in the presence of a sucrose stimulus. Wheel-running rates were not affected. Analysis of mean post-reinforcement pauses (PRP) broken down by transitions between successive reinforcers revealed that pre-running lengthened pausing in the presence of the stimulus signaling wheel running and shortened pauses in the presence of the stimulus signaling sucrose. No effect was observed on local response rates. Changes in pausing in the presence of stimuli signaling the two reinforcers were consistent with a decrease in the reinforcing efficacy of wheel running and an increase in the reinforcing efficacy of sucrose. Pre-running decreased motivation to respond for running, but increased motivation to work for food.

  3. Commissioning with low-intensity beams helps prepare CMS for this year’s physics run. This event is one of the first low-intensity collisions recorded in the CMS detector, during the early hours of 23 April 2016

    CERN Multimedia

    AUTHOR|(CDS)2068005

    2016-01-01

    Commissioning with low-intensity beams helps prepare CMS for this year’s physics run. This event is one of the first low-intensity collisions recorded in the CMS detector, during the early hours of 23 April 2016

  4. Vectorized Monte Carlo

    International Nuclear Information System (INIS)

    Brown, F.B.

    1981-01-01

    Examination of the global algorithms and local kernels of conventional general-purpose Monte Carlo codes shows that multigroup Monte Carlo methods have sufficient structure to permit efficient vectorization. A structured multigroup Monte Carlo algorithm for vector computers is developed in which many particle events are treated at once on a cell-by-cell basis. Vectorization of kernels for tracking and variance reduction is described, and a new method for discrete sampling is developed to facilitate the vectorization of collision analysis. To demonstrate the potential of the new method, a vectorized Monte Carlo code for multigroup radiation transport analysis was developed. This code incorporates many features of conventional general-purpose production codes, including general geometry, splitting and Russian roulette, survival biasing, variance estimation via batching, a number of cutoffs, and generalized tallies of collision, tracklength, and surface crossing estimators with response functions. Predictions of vectorized performance characteristics for the CYBER-205 were made using emulated coding and a dynamic model of vector instruction timing. Computation rates were examined for a variety of test problems to determine sensitivities to batch size and vector lengths. Significant speedups are predicted for even a few hundred particles per batch, and asymptotic speedups by about 40 over equivalent Amdahl 470V/8 scalar codes are predicted for a few thousand particles per batch. The principal conclusion is that vectorization of a general-purpose multigroup Monte Carlo code is well worth the significant effort required for stylized coding and major algorithmic changes.
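
    The event-based idea behind vectorized Monte Carlo, acting on whole banks of particles at once instead of following one history at a time, can be conveyed with a NumPy sketch of a two-group 1D slab problem. The cross-sections and the crude down-scatter rule are invented for illustration; the point is that every operation below applies to the full particle batch.

        import numpy as np

        rng = np.random.default_rng(5)

        SIGMA_T = np.array([0.8, 1.2])    # total cross-sections per group, 1/cm
        P_ABSORB = np.array([0.3, 0.5])   # absorption probability per collision
        THICKNESS = 10.0

        # State arrays: one entry per particle in the batch.
        n = 1_000_000
        x = np.zeros(n)                    # positions
        g = np.zeros(n, dtype=int)         # energy group (0 = fast, 1 = thermal)
        alive = np.ones(n, dtype=bool)

        while alive.any():
            idx = np.flatnonzero(alive)
            # Free flight for every live particle at once.
            x[idx] += rng.exponential(1.0 / SIGMA_T[g[idx]])
            escaped = idx[x[idx] > THICKNESS]
            alive[escaped] = False
            # Collision analysis, again for the whole surviving batch.
            survivors = idx[x[idx] <= THICKNESS]
            absorbed = rng.random(survivors.size) < P_ABSORB[g[survivors]]
            alive[survivors[absorbed]] = False
            g[survivors[~absorbed]] = 1    # crude down-scatter to thermal

        print("leakage fraction:", (x > THICKNESS).mean())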

  5. Report on the Oak Ridge workshop on Monte Carlo codes for relativistic heavy-ion collisions

    International Nuclear Information System (INIS)

    Awes, T.C.; Sorensen, S.P.

    1988-01-01

    In order to make detailed predictions for the case of purely hadronic matter, several Monte Carlo codes have been developed to describe relativistic nucleus-nucleus collisions. Although these various models build upon models of hadron-hadron interactions and have been fitted to reproduce hadron-hadron collision data, they have rather different pictures of the underlying hadron collision process and of subsequent particle production. Until now, the different Monte Carlo codes have, in general, been compared to different sets of experimental data, according to which results were readily available to the model builder or which Monte Carlo code was readily available to an experimental group. As a result, it has been difficult to draw firm conclusions about whether the observed deviations between experiments and calculations were due to deficiencies in the particular model, experimental discrepancies, or interesting effects beyond a simple superposition of nucleon-nucleon collisions. For this reason, it was decided that it would be productive to have a structured confrontation between the available experimental data and the many models of high-energy nuclear collisions in a manner in which it could be ensured that the computer codes were run correctly and the experimental acceptances were properly taken into account. With this purpose in mind, a Workshop on Monte Carlo Codes for Relativistic Heavy-Ion Collisions was organized at the Joint Institute for Heavy Ion Research at Oak Ridge National Laboratory from September 12--23, 1988. This paper reviews this workshop. 11 refs., 6 figs

  6. Physiological demands of running during long distance runs and triathlons.

    Science.gov (United States)

    Hausswirth, C; Lehénaff, D

    2001-01-01

    The aim of this review article is to identify the main metabolic factors which have an influence on the energy cost of running (Cr) during prolonged exercise runs and triathlons. This article proposes a physiological comparison of these 2 exercises and the relationship between running economy and performance. Many terms are used as the equivalent of 'running economy' such as 'oxygen cost', 'metabolic cost', 'energy cost of running', and 'oxygen consumption'. It has been suggested that these expressions may be defined by the rate of oxygen uptake (VO2) at a steady state (i.e. between 60 and 90% of maximal VO2) at a submaximal running speed. Endurance events such as triathlon or marathon running are known to modify biological constants of athletes and should have an influence on their running efficiency. The Cr appears to contribute to the variation found in distance running performance among runners of homogeneous level. This has been shown to be important in sports performance, especially in events like long distance running. In addition, many factors are known or hypothesised to influence Cr such as environmental conditions, participant specificity, and metabolic modifications (e.g. training status, fatigue). The decrease in running economy during a triathlon and/or a marathon could be largely linked to physiological factors such as the enhancement of core temperature and a lack of fluid balance. Moreover, the increase in circulating free fatty acids and glycerol at the end of these long exercise durations bear witness to the decrease in Cr values. The combination of these factors alters the Cr during exercise and hence could modify the athlete's performance in triathlons or a prolonged run.

  7. Adjoint electron Monte Carlo calculations

    International Nuclear Information System (INIS)

    Jordan, T.M.

    1986-01-01

    Adjoint Monte Carlo is the most efficient method for accurate analysis of space systems exposed to natural and artificially enhanced electron environments. Recent adjoint calculations for isotropic electron environments include: comparative data for experimental measurements on electronics boxes; benchmark problem solutions for comparing total dose prediction methodologies; preliminary assessment of sectoring methods used during space system design; and total dose predictions on an electronics package. Adjoint Monte Carlo, forward Monte Carlo, and experiment are in excellent agreement for electron sources that simulate space environments. For electron space environments, adjoint Monte Carlo is clearly superior to forward Monte Carlo, requiring one to two orders of magnitude less computer time for relatively simple geometries. The solid-angle sectoring approximations used for routine design calculations can err by more than a factor of 2 on dose in simple shield geometries. For critical space systems exposed to severe electron environments, these potential sectoring errors demand the establishment of large design margins and/or verification of shield design by adjoint Monte Carlo/experiment

  8. AREVA Developments for an Efficient and Reliable use of Monte Carlo codes for Radiation Transport Applications

    Science.gov (United States)

    Chapoutier, Nicolas; Mollier, François; Nolin, Guillaume; Culioli, Matthieu; Mace, Jean-Reynald

    2017-09-01

    In the context of the rise of Monte Carlo transport calculations for all kinds of applications, AREVA recently improved its suite of engineering tools in order to produce an efficient Monte Carlo workflow. Monte Carlo codes, such as MCNP or TRIPOLI, are recognized as reference codes for dealing with a large range of radiation transport problems. However the inherent drawbacks of these codes - laborious input file creation and long computation times - contrast with the maturity of their treatment of the physical phenomena. The goal of the recent AREVA developments was to reach an efficiency similar to that of other mature engineering disciplines such as finite element analysis (e.g. structural or fluid dynamics). Among the main objectives, the creation of a graphical user interface offering CAD tools for geometry creation and other graphical features dedicated to the radiation field (source definition, tally definition) has been reached. Computation times are drastically reduced compared to a few years ago thanks to the use of massively parallel runs and, above all, the implementation of hybrid variance reduction techniques. Engineering teams are now capable of delivering much more prompt support to any nuclear project dealing with reactors or fuel cycle facilities, from the conceptual phase to decommissioning.

  9. The Physical Models and Statistical Procedures Used in the RACER Monte Carlo Code

    International Nuclear Information System (INIS)

    Sutton, T.M.; Brown, F.B.; Bischoff, F.G.; MacMillan, D.B.; Ellis, C.L.; Ward, J.T.; Ballinger, C.T.; Kelly, D.J.; Schindler, L.

    1999-01-01

    This report describes the MCV (Monte Carlo - Vectorized) Monte Carlo neutron transport code [Brown, 1982, 1983; Brown and Mendelson, 1984a]. MCV is a module in the RACER system of codes that is used for Monte Carlo reactor physics analysis. The MCV module contains all of the neutron transport and statistical analysis functions of the system, while other modules perform various input-related functions such as geometry description, material assignment, output edit specification, etc. MCV is very closely related to the 05R neutron Monte Carlo code [Irving et al., 1965] developed at Oak Ridge National Laboratory. 05R evolved into the 05RR module of the STEMB system, which was the forerunner of the RACER system. Much of the overall logic and physics treatment of 05RR has been retained and, indeed, the original verification of MCV was achieved through comparison with STEMB results. MCV has been designed to be very computationally efficient [Brown, 1981, Brown and Martin, 1984b; Brown, 1986]. It was originally programmed to make use of vector-computing architectures such as those of the CDC Cyber-205 and Cray X-MP. MCV was the first full-scale production Monte Carlo code to effectively utilize vector-processing capabilities. Subsequently, MCV was modified to utilize both distributed-memory [Sutton and Brown, 1994] and shared memory parallelism. The code has been compiled and run on platforms ranging from 32-bit UNIX workstations to clusters of 64-bit vector-parallel supercomputers. The computational efficiency of the code allows the analyst to perform calculations using many more neutron histories than is practical with most other Monte Carlo codes, thereby yielding results with smaller statistical uncertainties. MCV also utilizes variance reduction techniques such as survival biasing, splitting, and rouletting to permit additional reduction in uncertainties. While a general-purpose neutron Monte Carlo code, MCV is optimized for reactor physics calculations. It has the

  10. Quality assurance for the ALICE Monte Carlo procedure

    CERN Document Server

    Ajaz, M; Hristov, Peter; Revol, Jean Pierre

    2009-01-01

    We implement the already existing macro, $ALICE_ROOT/STEER/CheckESD.C, which is run after reconstruction to compute the physics efficiency, as a task that will run in a PROOF framework like CAF. The task is implemented in a C++ class called AliAnalysisTaskCheckESD, which inherits from the AliAnalysisTaskSE base class. The function of AliAnalysisTaskCheckESD is to compute the ratio of the number of reconstructed particles to the number of particles generated by the Monte Carlo generator. The class AliAnalysisTaskCheckESD was successfully implemented. It was used during the production for first physics and made it possible to discover several problems (a missing track in the MUON arm reconstruction, low efficiency in the PHOS detector, etc.). The code is committed to the SVN repository and will become a standard tool for quality assurance.
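
    The core of such a check is the efficiency ratio itself. The actual task is a C++ class in AliRoot; the Python lines below are only a schematic of the quantity it computes, with made-up counts.

        import math

        def efficiency(n_reconstructed, n_generated):
            """Reconstruction efficiency with a simple binomial uncertainty,
            as computed per particle species by a CheckESD-style task."""
            eff = n_reconstructed / n_generated
            err = math.sqrt(eff * (1.0 - eff) / n_generated)
            return eff, err

        eff, err = efficiency(8421, 10000)   # hypothetical counts
        print(f"efficiency = {eff:.4f} +/- {err:.4f}")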

  11. Monte Carlo: Basics

    OpenAIRE

    Murthy, K. P. N.

    2001-01-01

    An introduction to the basics of Monte Carlo is given. The topics covered include sample space, events, probabilities, random variables, mean, variance, covariance, characteristic function, Chebyshev inequality, law of large numbers, central limit theorem (stable distribution, Lévy distribution), random numbers (generation and testing), random sampling techniques (inversion, rejection, sampling from a Gaussian, Metropolis sampling), analogue Monte Carlo and importance sampling (exponential b...

  12. Influence of the Lower Jaw Position on the Running Pattern.

    Directory of Open Access Journals (Sweden)

    Christian Maurer

    Full Text Available The effects of manipulated dental occlusion on body posture have been investigated quite often and discussed controversially in the literature. Far less attention has been paid to the influence of the dental occlusion position on human movement, and when human movement was analysed, it was mostly during walking rather than running. This study was therefore designed to identify the effect of lower jaw positions, corresponding to different dental occlusion positions, on running behaviour. Twenty healthy young recreational runners (mean age = 33.9±5.8 years) participated in this study. Kinematic data were collected using an eight-camera Vicon motion capture system (VICON Motion Systems, Oxford, UK). Subjects were consecutively prepared with four different dental occlusion conditions in random order and performed five running trials per test condition on a level walkway with their preferred running shoes. Vector-based pattern recognition methods, in particular cluster analysis and support vector machines (SVM), were used for movement pattern identification, as sketched below. Subjects exhibited unique movement patterns, leading to 18 clusters for the 20 subjects. No overall classification of the splint condition could be observed. Within individual subjects, different running patterns could be identified for the four splint conditions. The splint conditions led to a more symmetrical running pattern than the control condition. The influence of an occlusal splint on the running pattern is confirmed in this study. Wearing a splint increases the symmetry of the running pattern, and a more symmetrical running pattern might help to reduce the risk of injuries or improve performance. The change of the movement pattern between the neutral condition and any of the three splint conditions was significant within subjects but not across subjects. Therefore the dental splint has a measurable influence on the running pattern of subjects; however, subjects' individuality has to be considered when choosing the
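
    For readers unfamiliar with the pattern-recognition pipeline named above, the sketch below shows the generic shape of such an analysis in Python with scikit-learn: cluster analysis to group trials into movement patterns, and an SVM to test whether the splint condition can be recovered from kinematic features. The feature matrix here is random placeholder data, not the study's measurements, so the classifier should sit near chance level.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(13)

        # Hypothetical feature matrix: one row per trial, columns are kinematic
        # features (e.g. joint-angle waveforms reduced to a feature vector).
        X = rng.normal(size=(100, 12))
        splint_condition = rng.integers(0, 4, size=100)  # 4 occlusion conditions

        # Cluster analysis: do trials group into distinct movement patterns?
        clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

        # SVM: can the splint condition be recovered from the features?
        # (With random features, accuracy should sit near the 25% chance level.)
        scores = cross_val_score(SVC(kernel="rbf"), X, splint_condition, cv=5)
        print("cluster sizes:", np.bincount(clusters))
        print("cross-validated accuracy:", scores.mean())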

  13. Does a crouched leg posture enhance running stability and robustness?

    Science.gov (United States)

    Blum, Yvonne; Birn-Jeffery, Aleksandra; Daley, Monica A; Seyfarth, Andre

    2011-07-21

    Humans and birds both walk and run bipedally on compliant legs. However, differences in leg architecture may result in species-specific leg control strategies as indicated by the observed gait patterns. In this work, control strategies for stable running are derived based on a conceptual model and compared with experimental data on running humans and pheasants (Phasianus colchicus). From a model perspective, running with compliant legs can be represented by the planar spring mass model and stabilized by applying swing leg control. Here, linear adaptations of the three leg parameters, leg angle, leg length and leg stiffness during late swing phase are assumed. Experimentally observed kinematic control parameters (leg rotation and leg length change) of human and avian running are compared, and interpreted within the context of this model, with specific focus on stability and robustness characteristics. The results suggest differences in stability characteristics and applied control strategies of human and avian running, which may relate to differences in leg posture (straight leg posture in humans, and crouched leg posture in birds). It has been suggested that crouched leg postures may improve stability. However, as the system of control strategies is overdetermined, our model findings suggest that a crouched leg posture does not necessarily enhance running stability. The model also predicts different leg stiffness adaptation rates for human and avian running, and suggests that a crouched avian leg posture, which is capable of both leg shortening and lengthening, allows for stable running without adjusting leg stiffness. In contrast, in straight-legged human running, the preparation of the ground contact seems to be more critical, requiring leg stiffness adjustment to remain stable. Finally, analysis of a simple robustness measure, the normalized maximum drop, suggests that the crouched leg posture may provide greater robustness to changes in terrain height.
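
    The planar spring-mass (SLIP) model referred to above can be integrated directly. The Python sketch below simulates a single stance phase with illustrative human-like parameter values (not those fitted in the cited work), with the foot pivot at the origin and lift-off when the leg returns to its rest length.

        import numpy as np
        from scipy.integrate import solve_ivp

        M, K, L0, G = 80.0, 20000.0, 1.0, 9.81  # mass, leg stiffness, rest length, gravity

        def stance_dynamics(t, s):
            """Planar spring-mass model: point mass on a massless spring leg
            pivoting about the foot at the origin."""
            x, y, vx, vy = s
            l = np.hypot(x, y)
            f = K * (L0 - l)                  # spring force along the leg
            return [vx, vy, f * x / (l * M), f * y / (l * M) - G]

        def liftoff(t, s):
            return np.hypot(s[0], s[1]) - L0  # leg back at rest length
        liftoff.terminal, liftoff.direction = True, 1

        # Touchdown at a 68 degree leg angle, running at 5 m/s.
        alpha = np.radians(68.0)
        s0 = [-L0 * np.cos(alpha), L0 * np.sin(alpha), 5.0, -0.5]
        sol = solve_ivp(stance_dynamics, (0.0, 1.0), s0,
                        events=liftoff, max_step=1e-3)
        print("takeoff velocity (vx, vy):", sol.y[2:, -1])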

  14. The ATLAS Trigger system upgrade and performance in Run 2

    CERN Document Server

    Shaw, Savanna Marie; The ATLAS collaboration

    2017-01-01

    The ATLAS trigger has been used very successfully for the online event selection during the first part of the LHC Run-2 in 2015/16 at a centre-of-mass energy of 13 TeV. The trigger system is composed of a hardware Level-1 trigger and a software-based high-level trigger; it reduces the event rate from the bunch-crossing rate of 40 MHz to an average recording rate of about 1 kHz. The excellent performance of the ATLAS trigger has been vital for the ATLAS physics program of Run-2, selecting interesting collision events for wide variety of physics signatures with high efficiency. The trigger selection capabilities of ATLAS during Run-2 have been significantly improved compared to Run-1, in order to cope with the higher event rates and pile-up which are the result of the almost doubling of the center-of-mass collision energy and the increase in the instantaneous luminosity of the LHC. In order to prepare for the anticipated further luminosity increase of the LHC in 2017/18, improving the trigger performance remain...

  15. Understanding the T2 traffic in CMS during Run-1

    Science.gov (United States)

    Wildish, T

    2015-12-01

    In the run-up to Run-1 CMS was operating its facilities according to the MONARC model, where data-transfers were strictly hierarchical in nature. Direct transfers between Tier-2 nodes were excluded, being perceived as operationally intensive and risky in an era where the network was expected to be a major source of errors. By the end of Run-1 wide-area networks were more capable and stable than originally anticipated. The original data-placement model was relaxed, and traffic was allowed between Tier-2 nodes. Tier-2 to Tier-2 traffic in 2012 already exceeded the amount of Tier-2 to Tier-1 traffic, so it clearly has the potential to become important in the future. Moreover, while Tier-2 to Tier-1 traffic is mostly upload of Monte Carlo data, the Tier-2 to Tier-2 traffic represents data moved in direct response to requests from the physics analysis community. As such, problems or delays there are more likely to have a direct impact on the user community. Tier-2 to Tier-2 traffic may also traverse parts of the WAN that are at the 'edge' of our network, with limited network capacity or reliability compared to, say, the Tier-0 to Tier-1 traffic which goes over the LHCOPN network. CMS is looking to exploit technologies that allow us to interact with the network fabric so that it can manage our traffic better for us; this we hope to achieve before the end of Run-2. Tier-2 to Tier-2 traffic would be the most interesting use-case for such traffic management, precisely because it is close to the users' analysis and far from the 'core' network infrastructure. As such, a better understanding of our Tier-2 to Tier-2 traffic is important. Knowing the characteristics of our data-flows can help us place our data more intelligently. Knowing how widely the data moves can help us anticipate the requirements for network capacity, and inform the dynamic data placement algorithms we expect to have in place for Run-2. This paper presents an analysis of the CMS Tier-2 traffic during Run 1.

  16. MORSE Monte Carlo code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described

  17. European Decommissioning Academy (EDA) - successful 1. run in june 2015

    International Nuclear Information System (INIS)

    Slugen, V.; Hornacek, M.

    2015-01-01

    Experiences from the first run of the European Decommissioning Academy (EDA) are reported in detail. EDA was created at the Slovak University of Technology in Bratislava, Slovakia, based on discussions and needs expressed at many international meetings, including ECED2013. The first run was successfully completed by 14 participants during 7-20 June 2015. The Academy focused on decommissioning issues via lessons, practical exercises in laboratories, and on-site training at the NPP V-1 in Jaslovske Bohunice, Slovakia, as well as a 4-day technical tour to other European decommissioning facilities (Switzerland, Italy). Detailed information can be found at http://kome.snus.sk/inpe/. (authors)

  18. The running pattern and its importance in running long-distance gears

    Directory of Open Access Journals (Sweden)

    Jarosław Hoffman

    2017-07-01

    Full Text Available The running pattern is individual for each runner, regardless of distance. We can characterize it as the sum of the runner's data (age, height, training history, etc.) and the parameters of his or her run. Building proper technique should focus first and foremost on movement coordination and the runner's power. In training the correct running step, we can use tools similar to those used when working on deep (proprioceptive) sensation. The aim of this paper was to define what we can call a running pattern, what its influence is in long-distance running, and the relationship between technique training and the running pattern. The importance of a running pattern in long-distance racing is immense: the more it deviates from the norm, the greater the harm its repetition over long runs will cause to the body. Including training exercises that shape technique is therefore very important and affects the running pattern significantly.

  19. Applying graphics processor units to Monte Carlo dose calculation in radiation therapy

    Directory of Open Access Journals (Sweden)

    Bakhtiari M

    2010-01-01

    Full Text Available We investigate the potential of using a graphics processing unit (GPU) for Monte Carlo (MC)-based radiation dose calculations. The percent depth dose (PDD) of photons in a medium with known absorption and scattering coefficients is computed using an MC simulation running on both a standard CPU and a GPU. We demonstrate that the GPU's capability for massively parallel processing provides a significant acceleration in the MC calculation, and offers a significant advantage for distributed stochastic simulations on a single computer. Harnessing this potential of GPUs will help in the early adoption of MC for routine planning in a clinical environment.
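
    The record above includes no code; as a rough illustration of why MC dose calculation parallelizes so well, the sketch below computes a crude depth-dose curve with one vectorized draw per photon. It is a Python/NumPy sketch under strong simplifying assumptions (attenuation only, no scatter; mu and all other values are illustrative, not from the paper); the same array code can typically be moved to a GPU by substituting the CuPy library for NumPy.

        import numpy as np

        def pdd_monte_carlo(n_photons=1_000_000, mu=0.07, depth=30.0, nbins=60, seed=0):
            """Crude percent-depth-dose curve for a pencil photon beam.

            Each photon deposits all its energy at its exponentially sampled
            first-interaction depth (scatter ignored), so this only demonstrates
            the embarrassingly parallel structure of MC dose calculation."""
            rng = np.random.default_rng(seed)
            z = rng.exponential(1.0 / mu, size=n_photons)      # one draw per photon
            dose, edges = np.histogram(z[z < depth], bins=nbins, range=(0.0, depth))
            pdd = 100.0 * dose / dose.max()                    # normalise to percent
            return 0.5 * (edges[:-1] + edges[1:]), pdd

        depths, pdd = pdd_monte_carlo()
        print(depths[0], pdd[0])           # dose maximum sits at the surface here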

  20. Monte Carlo theory and practice

    International Nuclear Information System (INIS)

    James, F.

    1987-01-01

    Historically, the first large-scale calculations to make use of the Monte Carlo method were studies of neutron scattering and absorption, random processes for which it is quite natural to employ random numbers. Such calculations, a subset of Monte Carlo calculations, are known as direct simulation, since the 'hypothetical population' of the narrower definition above corresponds directly to the real population being studied. The Monte Carlo method may be applied wherever it is possible to establish equivalence between the desired result and the expected behaviour of a stochastic system. The problem to be solved may already be of a probabilistic or statistical nature, in which case its Monte Carlo formulation will usually be a straightforward simulation, or it may be of a deterministic or analytic nature, in which case an appropriate Monte Carlo formulation may require some imagination and may appear contrived or artificial. In any case, the suitability of the method chosen will depend on its mathematical properties and not on its superficial resemblance to the problem to be solved. The author shows how Monte Carlo techniques may be compared with other methods of solution of the same physical problem.
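
    To make the last point concrete, a deterministic problem recast in stochastic form, here is a minimal Python sketch (not taken from the article) that estimates the purely analytic constant pi from the expected behaviour of uniformly random points.

        import random

        def estimate_pi(n=1_000_000, seed=1):
            """pi/4 equals the probability that a uniform point in the unit
            square falls inside the quarter circle, so the hit fraction
            estimates pi."""
            rng = random.Random(seed)
            hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))
            return 4.0 * hits / n

        print(estimate_pi())    # ~3.14, statistical error shrinking like n**-0.5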

  1. Data assimilation using a GPU accelerated path integral Monte Carlo approach

    Science.gov (United States)

    Quinn, John C.; Abarbanel, Henry D. I.

    2011-09-01

    The answers to data assimilation questions can be expressed as path integrals over all possible state and parameter histories. We show how these path integrals can be evaluated numerically using a Markov Chain Monte Carlo method designed to run in parallel on a graphics processing unit (GPU). We demonstrate the application of the method to an example with a transmembrane voltage time series of a simulated neuron as an input, and using a Hodgkin-Huxley neuron model. By taking advantage of GPU computing, we gain a parallel speedup factor of up to about 300, compared to an equivalent serial computation on a CPU, with performance increasing as the length of the observation time used for data assimilation increases.

  2. Monte Carlo simulation for the design of industrial gamma-ray transmission tomography

    International Nuclear Information System (INIS)

    Kim, Jongbum; Jung, Sunghee; Moon, Jinho; Kwon, Taekyong; Cho, Gyuseong

    2011-01-01

    A Monte Carlo simulation and an experiment were carried out for a large-scale industrial gamma-ray tomographic scanning geometry. The geometry of the tomographic system has a moving source with 16 stationary detectors. This geometry is advantageous for the diagnosis of a large-scale industrial plant. The simulation was carried out for the phantom with 32 views, 16 detectors, and different energy bins. The simulation data were processed for use in image reconstruction, which was performed with a Diagonally-Scaled Gradient-Ascent algorithm. Experiments were conducted in a 78 cm diameter column filled with polypropylene grains. Sixteen 0.5-inch-thick, 1-inch-long NaI(Tl) cylindrical detectors and a 20 mCi 137Cs radioactive source were used. The experimental results were compared to the simulation data and found to be similar. This result showed that Monte Carlo simulation is useful for predicting the outcome of the industrial gamma tomographic scan method, and that it can also help in designing an industrial gamma tomography system and preparing the field experiment. (author)

  3. Short-Run and Long-Run Elasticities of Diesel Demand in Korea

    Directory of Open Access Journals (Sweden)

    Seung-Hoon Yoo

    2012-11-01

    Full Text Available This paper investigates the demand function for diesel in Korea covering the period 1986–2011. The short-run and long-run elasticities of diesel demand with respect to price and income are empirically examined using a co-integration and error-correction model. The short-run and long-run price elasticities are estimated to be −0.357 and −0.547, respectively. The short-run and long-run income elasticities are computed to be 1.589 and 1.478, respectively. Thus, diesel demand is relatively inelastic to price change and elastic to income change in both the short run and the long run. Therefore, demand-side management through raising the price of diesel will be ineffective, and tightening regulations so that diesel is used more efficiently appears to be more effective in Korea. The demand for diesel is expected to increase continuously as the economy grows.

  4. Integrated Tiger Series of electron/photon Monte Carlo transport codes: a user's guide for use on IBM mainframes

    International Nuclear Information System (INIS)

    Kirk, B.L.

    1985-12-01

    The ITS (Integrated Tiger Series) Monte Carlo code package developed at Sandia National Laboratories and distributed as CCC-467/ITS by the Radiation Shielding Information Center (RSIC) at Oak Ridge National Laboratory (ORNL) consists of eight codes: the standard codes TIGER, CYLTRAN, and ACCEPT; the P-codes TIGERP, CYLTRANP, and ACCEPTP; and the M-codes ACCEPTM and CYLTRANM. The codes have been adapted to run on the IBM 3081, VAX 11/780, CDC-7600, and Cray 1 with the use of the update emulator UPEML. This manual should serve as a guide to a user running the codes on IBM computers having 370 architecture. The cases listed were tested on the IBM 3033, under the MVS operating system, using the VS Fortran Level 1.3.1 compiler.

  5. Toolkit for high performance Monte Carlo radiation transport and activation calculations for shielding applications in ITER

    International Nuclear Information System (INIS)

    Serikov, A.; Fischer, U.; Grosse, D.; Leichtle, D.; Majerle, M.

    2011-01-01

    The Monte Carlo (MC) method is the most suitable computational technique of radiation transport for shielding applications in fusion neutronics. This paper shares the results of the long-term experience of the fusion neutronics group at Karlsruhe Institute of Technology (KIT) in radiation shielding calculations with the MCNP5 code for the ITER fusion reactor, with emphasis on the use of several ITER project-driven computer programs developed at KIT. Two of them, McCad and R2S, seem to be the most useful in radiation shielding analyses. The McCad graphical tool allows automatic conversion of MCNP models from the underlying CAD (CATIA) data files, while the R2S activation interface couples the MCNP radiation transport with the FISPACT activation code, allowing estimation of nuclear responses such as dose rate and nuclear heating after ITER reactor shutdown. The cell-based R2S scheme was applied in shutdown photon dose analysis for the design of the In-Vessel Viewing System (IVVS) and the Glow Discharge Cleaning (GDC) unit in ITER. The mesh-based R2S feature newly developed at KIT was successfully tested on the shutdown dose rate calculations for the upper port in the Neutral Beam (NB) cell of ITER. The merits of the McCad graphical program have been broadly acknowledged by neutronics analysts, and its continuous improvement at KIT has made its operation stable and more convenient through its Graphical User Interface. Detailed 3D ITER neutronic modeling with the MCNP Monte Carlo method requires substantial computational resources, inevitably leading to parallel calculations on clusters. Performance assessments of the MCNP5 parallel runs on the JUROPA/HPC-FF supercomputer cluster made it possible to find the optimal number of processors for ITER-type runs. (author)

  6. Monte Carlo Methods in Physics

    International Nuclear Information System (INIS)

    Santoso, B.

    1997-01-01

    The method of Monte Carlo integration is reviewed briefly and some of its applications in physics are explained. A numerical experiment on the random generators used in Monte Carlo techniques is carried out to show the randomness behavior of the various generation methods. To account for the weight function involved in the Monte Carlo integration, the Metropolis method is used. The results of the experiment show no regular pattern in the numbers generated, indicating that the program generators are reasonably good, while the experimental results follow the expected statistical distribution law. Some applications of Monte Carlo methods in physics are then given. The physical problems are chosen such that the models have available solutions, either exact or approximate, against which the Monte Carlo calculations can be compared. The comparisons show that good agreement is obtained for the models considered.

  7. Optimal Spatial Subdivision method for improving geometry navigation performance in Monte Carlo particle transport simulation

    International Nuclear Information System (INIS)

    Chen, Zhenping; Song, Jing; Zheng, Huaqing; Wu, Bin; Hu, Liqin

    2015-01-01

    Highlights: • The subdivision combines the advantages of both uniform and non-uniform schemes. • The grid models were proved to be more efficient than traditional CSG models. • Monte Carlo simulation performance was enhanced by Optimal Spatial Subdivision. • Efficiency gains were obtained for realistic whole reactor core models. - Abstract: Geometry navigation is one of the key aspects dominating Monte Carlo particle transport simulation performance for large-scale whole-reactor models. In such cases, spatial subdivision is an easily established, high-potential method to improve run-time performance. In this study, a dedicated method, named Optimal Spatial Subdivision, is proposed for generating numerically optimal spatial grid models, which are demonstrated to be more efficient for geometry navigation than traditional Constructive Solid Geometry (CSG) models. The method uses a recursive subdivision algorithm to subdivide a CSG model into non-overlapping grids, which are labeled as totally occupied, partially occupied, or not occupied at all by CSG objects. Most importantly, at each stage of subdivision, a quality factor based on a cost-estimation function is derived to evaluate the qualities of the candidate subdivision schemes, and only the scheme with the optimal quality factor is chosen as the final subdivision strategy for generating the grid model. Eventually, the model built with the optimal quality factor will be efficient for Monte Carlo particle transport simulation. The method has been implemented and integrated into the Super Monte Carlo program SuperMC developed by the FDS Team. Testing cases were used to highlight the performance gains that could be achieved. Results showed that Monte Carlo simulation runtime could be reduced significantly with the new method, even for whole reactor core model sizes.
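
    The abstract does not reproduce the actual cost-estimation function; the Python sketch below only illustrates the recursive structure it describes, scoring one candidate split scheme per axis with an assumed, toy quality factor (reduction in objects overlapping a cell) and recursing on the best one. All names are illustrative.

        from dataclasses import dataclass

        @dataclass
        class Box:                  # axis-aligned box, also used for object bounds
            lo: tuple
            hi: tuple

        def overlaps(cell, obj):
            return all(cell.lo[a] < obj.hi[a] and cell.hi[a] > obj.lo[a]
                       for a in range(3))

        def cost(cell, objs):
            # Toy cost model: expected intersection tests in this cell.
            return sum(overlaps(cell, o) for o in objs)

        def split(cell, axis):
            mid = 0.5 * (cell.lo[axis] + cell.hi[axis])
            hi1 = list(cell.hi); hi1[axis] = mid
            lo2 = list(cell.lo); lo2[axis] = mid
            return Box(cell.lo, tuple(hi1)), Box(tuple(lo2), cell.hi)

        def subdivide(cell, objs, depth=0, max_depth=8, min_objs=4):
            """At each stage, score candidate schemes with a quality factor
            (cost reduction) and recurse only on the best one."""
            local = [o for o in objs if overlaps(cell, o)]
            if depth >= max_depth or len(local) <= min_objs:
                return {"cell": cell, "objects": local}      # leaf grid cell
            schemes = [split(cell, axis) for axis in range(3)]
            quality = [cost(cell, local) - max(cost(a, local), cost(b, local))
                       for a, b in schemes]
            a, b = schemes[max(range(3), key=quality.__getitem__)]
            return {"cell": cell,
                    "children": [subdivide(a, local, depth + 1, max_depth, min_objs),
                                 subdivide(b, local, depth + 1, max_depth, min_objs)]}

        tree = subdivide(Box((0, 0, 0), (8, 8, 8)),
                         [Box((1, 1, 1), (2, 2, 2)), Box((6, 6, 6), (7, 7, 7))])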

  8. Monte Carlo techniques in radiation therapy

    CERN Document Server

    Verhaegen, Frank

    2013-01-01

    Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book, the first of its kind, addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...

  9. Monte Carlo simulation for IRRMA

    International Nuclear Information System (INIS)

    Gardner, R.P.; Liu Lianyan

    2000-01-01

    Monte Carlo simulation is fast becoming a standard approach for many radiation applications that were previously treated almost entirely by experimental techniques. This is certainly true for Industrial Radiation and Radioisotope Measurement Applications - IRRMA. The reasons for this include: (1) the increased cost and inadequacy of experimentation for design and interpretation purposes; (2) the availability of low-cost, large-memory, and fast personal computers; and (3) the general availability of general-purpose Monte Carlo codes that are increasingly user-friendly, efficient, and accurate. This paper discusses the history and present status of Monte Carlo simulation for IRRMA, including the general purpose (GP) and specific purpose (SP) Monte Carlo codes and future needs - primarily from the experience of the authors.

  10. Run Clever - No difference in risk of injury when comparing progression in running volume and running intensity in recreational runners

    DEFF Research Database (Denmark)

    Ramskov, Daniel; Rasmussen, Sten; Sørensen, Henrik

    2018-01-01

    Background/aim: The Run Clever trial investigated if there was a difference in injury occurrence across two running schedules, focusing on progression in volume of running intensity (Sch-I) or in total running volume (Sch-V). It was hypothesised that 15% more runners with a focus on progression in volume of running intensity would sustain an injury compared with runners with a focus on progression in total running volume. Methods: Healthy recreational runners were included and randomly allocated to Sch-I or Sch-V. In the first eight weeks of the 24-week follow-up, all participants (n=839) followed ... participants received real-time, individualised feedback on running intensity and running volume. The primary outcome was running-related injury (RRI). Results: After preconditioning a total of 80 runners sustained an RRI (Sch-I n=36/Sch-V n=44). The cumulative incidence proportion (CIP) in Sch-V (reference ...

  11. Monte Carlo sampling strategies for lattice gauge calculations

    International Nuclear Information System (INIS)

    Guralnik, G.; Zemach, C.; Warnock, T.

    1985-01-01

    We have sought to optimize the elements of the Monte Carlo processes for thermalizing and decorrelating sequences of lattice gauge configurations and, for this purpose, to develop computational and theoretical diagnostics to compare alternative techniques. These have been applied to speed up the generation of random matrices, to compare heat-bath and Metropolis stepping methods, and to study autocorrelations of sequences in terms of the classical moment problem. The efficient use of statistically correlated lattice data is an optimization problem depending on the relation between the computer time needed to generate lattice sequences of sufficiently small correlation and the time needed to analyze them. We can solve this problem with the aid of a representation of autocorrelation data for various step lags as moments of positive definite distributions, using methods known for the moment problem to put bounds on statistical variances, in place of estimating the variances by too-lengthy computer runs.

  12. Monte Carlo Studies of Phase Separation in Compressible 2-dim Ising Models

    Science.gov (United States)

    Mitchell, S. J.; Landau, D. P.

    2006-03-01

    Using high-resolution Monte Carlo simulations, we study time-dependent domain growth in compressible 2-dim ferromagnetic (s=1/2) Ising models with continuous spin positions and spin-exchange moves [1]. Spins interact with slightly modified Lennard-Jones potentials, and we consider a model with no lattice mismatch and one with 4% mismatch. For comparison, we repeat calculations for the rigid Ising model [2]. For all models, large systems (512^2) and long times (10^6 MCS) are examined over multiple runs, and the growth exponent is measured in the asymptotic scaling regime. For the rigid model and the compressible model with no lattice mismatch, the growth exponent is consistent with the theoretically expected value of 1/3 [1] for Model B type growth. However, we find that non-zero lattice mismatch has a significant and unexpected effect on the growth behavior. Supported by the NSF. [1] D.P. Landau and K. Binder, A Guide to Monte Carlo Simulations in Statistical Physics, second ed. (Cambridge University Press, New York, 2005). [2] J. Amar, F. Sullivan, and R.D. Mountain, Phys. Rev. B 37, 196 (1988).
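
    For the rigid-lattice reference case, the spin-exchange dynamics mentioned above can be sketched as follows. This is a hedged Python illustration of a conserved-order-parameter (Kawasaki) Metropolis sweep, not the authors' code, and it omits the continuous spin positions and Lennard-Jones interactions of the compressible models; all parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(2)

        def kawasaki_sweep(spins, beta=1.0 / 1.5, J=1.0):
            """One sweep of spin-exchange (conserved order parameter) Metropolis
            moves on a rigid 2-d Ising lattice with periodic boundaries."""
            L = spins.shape[0]
            for _ in range(L * L):
                x, y = rng.integers(L, size=2)
                dx, dy = (1, 0) if rng.random() < 0.5 else (0, 1)
                x2, y2 = (x + dx) % L, (y + dy) % L
                if spins[x, y] == spins[x2, y2]:
                    continue                 # exchanging equal spins does nothing
                dE = 0.0                     # energy change from the swap
                for (a, b), (c, d) in (((x, y), (x2, y2)), ((x2, y2), (x, y))):
                    for nx, ny in ((a + 1, b), (a - 1, b), (a, b + 1), (a, b - 1)):
                        nx, ny = nx % L, ny % L
                        if (nx, ny) != (c, d):
                            dE += 2.0 * J * spins[a, b] * spins[nx, ny]
                if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
                    spins[x, y], spins[x2, y2] = spins[x2, y2], spins[x, y]

        spins = rng.choice(np.array([-1, 1]), size=(64, 64))   # 50/50 quench
        for sweep in range(10):              # real growth studies need ~10^6 MCS
            kawasaki_sweep(spins)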

  13. Post-processing of Monte Carlo simulations for rapid BNCT source optimization studies

    International Nuclear Information System (INIS)

    Bleuel, D.L.; Chu, W.T.; Donahue, R.J.; Ludewigt, B.A.; Vujic, J.

    2000-01-01

    A great advantage of some neutron sources, such as accelerator-produced sources, is that they can be tuned to produce different spectra. Unfortunately, optimization studies are often time-consuming and difficult, as they require a lengthy Monte Carlo simulation for each source. When multiple characteristics, such as energy, angle, and spatial distribution of a neutron beam are allowed to vary, an overwhelming number of simulations may be required. Many optimization studies, therefore, suffer from a small number of datapoints, restrictive treatment conditions, or poor statistics. By scoring pertinent information from every particle tally in a Monte Carlo simulation, then applying appropriate source variable weight factors in a post-processing algorithm, a single simulation can be used to model any number of multiple sources. Through this method, the response to a new source can be modeled in minutes or seconds, rather than hours or days, allowing for the analysis of truly variable source conditions of much greater resolution than is normally possible when a new simulation must be run for each datapoint in a study. This method has been benchmarked and used to recreate optimization studies in a small fraction of the time spent in the original studies
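
    The essence of the reweighting idea can be sketched as follows (an illustrative Python fragment, not the authors' implementation): if each tally score is stored together with the source variables of the history that produced it, the response to any candidate source is a post-processed weighted average, with weights equal to the ratio of the candidate source density to the density actually sampled.

        import numpy as np

        def reweighted_response(scores, src_E, sim_pdf, new_pdf):
            """Re-score one Monte Carlo run under a different source spectrum.
            scores : per-history tally contributions from the single run
            src_E  : source energy of the history that produced each score
            sim_pdf, new_pdf : source densities (new_pdf must be normalised on
            the sampled energy range for the estimate to be unbiased)."""
            w = new_pdf(src_E) / sim_pdf(src_E)            # importance ratios
            est = np.mean(scores * w)
            err = np.std(scores * w, ddof=1) / np.sqrt(scores.size)
            return est, err

        rng = np.random.default_rng(3)
        E = rng.uniform(0.0, 10.0, 100_000)          # energies actually sampled
        scores = np.exp(-0.3 * E) * rng.random(E.size)   # fake per-history scores
        sim = lambda e: np.full_like(e, 0.1)         # flat source used in the run
        for peak in (2.0, 5.0, 8.0):                 # candidate sources, seconds each
            new = lambda e, p=peak: np.exp(-0.5 * (e - p) ** 2) / np.sqrt(2.0 * np.pi)
            print(peak, reweighted_response(scores, E, sim, new))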

  14. Continuous-time quantum Monte Carlo impurity solvers

    Science.gov (United States)

    Gull, Emanuel; Werner, Philipp; Fuchs, Sebastian; Surer, Brigitte; Pruschke, Thomas; Troyer, Matthias

    2011-04-01

    representations of quantum dots and molecular conductors and play an increasingly important role in the theory of "correlated electron" materials as auxiliary problems whose solution gives the "dynamical mean field" approximation to the self-energy and local correlation functions. Solution method: Quantum impurity models require a method of solution which provides access to both high and low energy scales and is effective for wide classes of physically realistic models. The continuous-time quantum Monte Carlo algorithms for which we present implementations here meet this challenge. Continuous-time quantum impurity methods are based on partition function expansions of quantum impurity models that are stochastically sampled to all orders using diagrammatic quantum Monte Carlo techniques. For a review of quantum impurity models and their applications and of continuous-time quantum Monte Carlo methods for impurity models we refer the reader to [2]. Additional comments: Use of dmft requires citation of this paper. Use of any ALPS program requires citation of the ALPS [1] paper. Running time: 60 s-8 h per iteration.

  15. Running and osteoarthritis.

    Science.gov (United States)

    Willick, Stuart E; Hansen, Pamela A

    2010-07-01

    The overall health benefits of cardiovascular exercise, such as running, are well established. However, it is also well established that in certain circumstances running can lead to overload injuries of muscle, tendon, and bone. In contrast, it has not been established that running leads to degeneration of articular cartilage, which is the hallmark of osteoarthritis. This article reviews the available literature on the association between running and osteoarthritis, with a focus on clinical epidemiologic studies. The preponderance of clinical reports refutes an association between running and osteoarthritis. Copyright 2010 Elsevier Inc. All rights reserved.

  16. Backward running or absence of running from Creutz ratios

    International Nuclear Information System (INIS)

    Giedt, Joel; Weinberg, Evan

    2011-01-01

    We extract the running coupling based on Creutz ratios in SU(2) lattice gauge theory with two Dirac fermions in the adjoint representation. Depending on how the extrapolation to zero fermion mass is performed, either backward running or an absence of running is observed at strong bare coupling. This behavior is consistent with other findings which indicate that this theory has an infrared fixed point.

  17. Changes in Running Mechanics During a 6-Hour Running Race.

    Science.gov (United States)

    Giovanelli, Nicola; Taboga, Paolo; Lazzer, Stefano

    2017-05-01

    To investigate changes in running mechanics during a 6-h running race. Twelve ultraendurance runners (age 41.9 ± 5.8 y, body mass 68.3 ± 12.6 kg, height 1.72 ± 0.09 m) were asked to run as many 874-m flat loops as possible in 6 h. Running speed, contact time (tc), and aerial time (ta) were measured in the first lap and every 30 ± 2 min during the race. Peak vertical ground-reaction force (Fmax), stride length (SL), vertical downward displacement of the center of mass (Δz), leg-length change (ΔL), vertical stiffness (kvert), and leg stiffness (kleg) were then estimated. Mean distance covered by the athletes during the race was 62.9 ± 7.9 km. Compared with the first lap, running speed decreased significantly from 4 h 30 min onward (mean -5.6% ± 0.3%, P < ...); tc increased during running, reaching the maximum difference after 5 h 30 min (+6.1%, P = .015). Conversely, kvert decreased after 4 h, reaching the lowest value after 5 h 30 min (-6.5%, P = .008); ta and Fmax decreased after 4 h 30 min through to the end of the race (mean -29.2% and -5.1%, respectively, P < ...), suggesting a possible time threshold that could affect performance regardless of absolute running speed.

  18. Geology of Maxwell Montes, Venus

    Science.gov (United States)

    Head, J. W.; Campbell, D. B.; Peterfreund, A. R.; Zisk, S. A.

    1984-01-01

    Maxwell Montes represents the most distinctive topography on the surface of Venus, rising some 11 km above mean planetary radius. The multiple data sets of the Pioneer mission and Earth-based radar observations are analyzed to characterize Maxwell Montes. Maxwell Montes is a porkchop-shaped feature located at the eastern end of Lakshmi Planum. The main massif trends about North 20 deg West for approximately 1000 km, and the narrow handle extends several hundred km west-southwest (WSW) from the north end of the main massif, descending toward Lakshmi Planum. The main massif is rectilinear and approximately 500 km wide. The southern and northern edges of Maxwell Montes coincide with major topographic boundaries defining the edge of Ishtar Terra.

  19. Efficient Geometry and Data Handling for Large-Scale Monte Carlo - Thermal-Hydraulics Coupling

    Science.gov (United States)

    Hoogenboom, J. Eduard

    2014-06-01

    Detailed coupling of thermal-hydraulics calculations to Monte Carlo reactor criticality calculations requires each axial layer of each fuel pin to be defined separately in the input to the Monte Carlo code, in order to assign to each volume the temperature from the TH calculation and, if the volume contains coolant, also the coolant density. This leads to huge input files for even small systems. In this paper a methodology for dynamic assignment of temperatures with respect to cross section data is demonstrated to overcome this problem. The method is implemented in MCNP5. The method is verified for an infinite lattice with 3x3 BWR-type fuel pins with fuel, cladding and moderator/coolant explicitly modeled. For each pin, 60 axial zones are considered, with different temperatures and coolant densities. The results of the axial power distribution per fuel pin are compared to a standard MCNP5 run in which all 9x60 cells for fuel, cladding and coolant are explicitly defined and their respective temperatures determined from the TH calculation. Full agreement is obtained. For large-scale application the method is demonstrated for an infinite lattice with 17x17 PWR-type fuel assemblies with 25 rods replaced by guide tubes. Again, all geometrical detail is retained. The method was used in a procedure for coupled Monte Carlo and thermal-hydraulics iterations. Using an optimised iteration technique, convergence was obtained in 11 iteration steps.

  20. The neutron instrument Monte Carlo library MCLIB: Recent developments

    International Nuclear Information System (INIS)

    Seeger, P.A.; Daemen, L.L.; Hjelm, R.P. Jr.; Thelliez, T.G.

    1998-01-01

    A brief review is given of the developments made since the ICANS-XIII meeting in the neutron instrument design codes using the Monte Carlo library MCLIB. Much of the effort has been to ensure that the library and the executing code MC RUN connect efficiently with the World Wide Web application MC-WEB as part of the Los Alamos Neutron Instrument Simulation Package (NISP). Since one of the most important features of MCLIB is its open structure and its capability to incorporate any possible neutron transport or scattering algorithm, this document describes the current procedure an outside user would follow to add a feature to MCLIB. Details of the calling sequence of the core subroutine OPERATE are discussed, and questions of style are considered and additional guidelines given. Suggestions for standardization are solicited, as well as code for new algorithms.

  1. AREVA Developments for an Efficient and Reliable use of Monte Carlo codes for Radiation Transport Applications

    Directory of Open Access Journals (Sweden)

    Chapoutier Nicolas

    2017-01-01

    Full Text Available In the context of the rise of Monte Carlo transport calculations for all kinds of applications, AREVA recently improved its suite of engineering tools in order to produce an efficient Monte Carlo workflow. Monte Carlo codes, such as MCNP or TRIPOLI, are recognized as reference codes for dealing with a large range of radiation transport problems. However, the inherent drawbacks of these codes - laborious input file creation and long computation times - contrast with the maturity of their treatment of the physical phenomena. The goal of the recent AREVA developments was to reach efficiency similar to that of other mature engineering disciplines such as finite element analysis (e.g., structural or fluid dynamics). Among the main objectives, the creation of a graphical user interface offering CAD tools for geometry creation and other graphical features dedicated to the radiation field (source definition, tally definition) has been reached. Computation times are drastically reduced compared to a few years ago, thanks to the use of massively parallel runs and, above all, the implementation of hybrid variance reduction techniques. Engineering teams are now capable of delivering much more prompt support to any nuclear project dealing with reactors or fuel cycle facilities, from the conceptual phase to decommissioning.

  2. Effects of Cycling vs. Running Training on Endurance Performance in Preparation for Inline Speed Skating.

    Science.gov (United States)

    Stangier, Carolin; Abel, Thomas; Hesse, Clemens; Claen, Stephanie; Mierau, Julia; Hollmann, Wildor; Strüder, Heiko K

    2016-06-01

    Winter weather conditions restrict regular sport-specific endurance training in inline speed skating. As a result, this study was designed to compare the effects of cycling and running training programs on inline speed skaters' endurance performance. Sixteen (8 men, 8 women) high-level athletes (mean ± SD 24 ± 8 years) were randomly assigned to 1 of 2 groups (running and cycling). Both groups trained twice a week for 8 weeks, one group on a treadmill and the other on a cycle ergometer. Training intensity and duration were individually calculated (maximal fat oxidation: ~52% of VO2peak; 500 kcal per session). Before and after the training intervention, all athletes performed an incremental specific (inline speed skating) step test and a nonspecific (cycling or running) step test according to group affiliation. In addition to blood lactate concentration, oxygen uptake (VO2), ventilatory equivalent (VE/VO2), respiratory exchange ratio (RER), and heart rate were measured. The specific posttest revealed significantly increased absolute VO2peak values (2.9 ± 0.4 vs. 3.4 ± 0.7, p = 0.01) and submaximal VO2 values (p ≤ 0.01). VE/VO2 and RER significantly decreased at maximal (46.6 ± 6.6 vs. 38.5 ± 3.4, p = 0.005; 1.1 ± 0.03 vs. 1.0 ± 0.04, p = 0.001) and submaximal intensities (p ≤ 0.04). None of the analyses revealed a significant group effect (p ≥ 0.15). The results indicate that both cycling and running exercise at ~52% of VO2peak had a positive effect on the athletes' endurance performance. The increased submaximal VO2 values indicate a reduction in the athletes' inline speed skating technique. Therefore, athletes would benefit from a focus on technique training in the subsequent period.

  3. Muscle injury after low-intensity downhill running reduces running economy.

    Science.gov (United States)

    Baumann, Cory W; Green, Michael S; Doyle, J Andrew; Rupp, Jeffrey C; Ingalls, Christopher P; Corona, Benjamin T

    2014-05-01

    Contraction-induced muscle injury may reduce running economy (RE) by altering motor unit recruitment, lowering contraction economy, and disturbing running mechanics, any of which may have a deleterious effect on endurance performance. The purpose of this study was to determine if RE is reduced 2 days after performing injurious, low-intensity exercise in 11 healthy active men (27.5 ± 5.7 years; 50.05 ± 1.67 VO2peak). Running economy was determined at treadmill speeds eliciting 65 and 75% of the individual's peak rate of oxygen uptake (VO2peak) 1 day before and 2 days after injury induction. Lower-extremity muscle injury was induced with a 30-minute downhill treadmill run (6 × 5-minute runs, 2 minutes rest, -12% grade, 12.9 km·h⁻¹) that elicited 55% VO2peak. Maximal quadriceps isometric torque was reduced immediately and 2 days after the downhill run by 18 and 10%, and a moderate degree of muscle soreness was present. Two days after the injury, steady-state VO2 and metabolic work (VO2 L·km⁻¹) were significantly greater (4-6%) during the 65% VO2peak run. Additionally, postinjury VCO2, VE, and rating of perceived exertion were greater at 65% but not at 75% VO2peak, whereas whole-blood lactate concentrations did not change from pre-injury to postinjury at either intensity. In conclusion, low-intensity downhill running reduces RE at 65% but not 75% VO2peak. The results of this and other studies indicate that the magnitude to which RE is altered after downhill running depends on the severity of the injury and the intensity of the RE test.

  4. (U) Introduction to Monte Carlo Methods

    Energy Technology Data Exchange (ETDEWEB)

    Hungerford, Aimee L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-20

    Monte Carlo methods are very valuable for representing solutions to particle transport problems. Here we describe a “cook book” approach to handling the terms in a transport equation using Monte Carlo methods. Focus is on the mechanics of a numerical Monte Carlo code, rather than the mathematical foundations of the method.

  5. Excessive Progression in Weekly Running Distance and Risk of Running-related Injuries

    DEFF Research Database (Denmark)

    Nielsen, R.O.; Parner, Erik Thorlund; Nohr, Ellen Aagaard

    2014-01-01

    Study Design: An explorative, 1-year prospective cohort study. Objective: To examine whether an association between a sudden change in weekly running distance and running-related injury varies according to injury type. Background: It is widely accepted that a sudden increase in running distance is strongly related to injury in runners, but the scientific knowledge supporting this assumption is limited. Methods: A volunteer sample of 874 healthy novice runners who started a self-structured running regimen were provided a global-positioning-system watch. After each running session during the study period, participants were categorized into 1 of the following exposure groups, based on the progression of their weekly running distance: less than 10% or regression, 10% to 30%, or more than 30%. The primary outcome was running-related injury. Results: A total of 202 runners sustained a running-related injury.

  6. A Monte Carlo method using octree structure in photon and electron transport

    International Nuclear Information System (INIS)

    Ogawa, K.; Maeda, S.

    1995-01-01

    Most of the early Monte Carlo calculations in medical physics were used to calculate absorbed dose distributions, detector responses, and efficiencies. Recently, data acquisition in Single Photon Emission CT (SPECT) has been simulated by a Monte Carlo method to evaluate scatter photons generated in the human body and the collimator. Monte Carlo simulations of SPECT data acquisition are generally based on the transport of photons only, because the photons being simulated are of low energy, so bremsstrahlung production by the generated electrons is negligible. Since the transport calculation of photons without electrons is much simpler than that with electrons, high-speed simulation is possible in a simple object with one medium. Object description is therefore important for performing photon and/or electron transport efficiently with a Monte Carlo method. The authors propose a new description method using an octree representation of an object. Even if the boundaries of each medium are represented accurately, high-speed calculation of photon transport can be accomplished because the number of cells is much smaller than in the voxel-based approach, which represents an object as a union of voxels of the same size. This Monte Carlo code using the octree representation of an object first establishes the simulation geometry by reading an octree string, which is produced before the simulation by forming an octree structure from a set of serial sections of the object; it then transports photons in that geometry. Using the code, if the user simply prepares a set of serial sections of the object in which he or she wants to simulate photon trajectories, the simulation can be performed automatically using the suboptimal geometry simplified by the octree representation, without constructing the optimal geometry by hand.
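
    A minimal Python sketch of the data structure in question, under the assumption of a cubic, power-of-two medium-ID array built from the serial sections; homogeneous regions collapse to single leaves, which is where the saving over a flat voxel array comes from. Names and the toy phantom are illustrative, not from the paper.

        import numpy as np

        def build_octree(vol):
            """Collapse a cubic medium-ID array (a stack of serial sections)
            into an octree: a single leaf where the region is one medium,
            else 8 children. Assumes a power-of-two edge length."""
            if (vol == vol.flat[0]).all():
                return int(vol.flat[0])             # homogeneous leaf: one medium
            h = vol.shape[0] // 2
            return [build_octree(vol[ix:ix + h, iy:iy + h, iz:iz + h])
                    for ix in (0, h) for iy in (0, h) for iz in (0, h)]

        def locate(tree, x, y, z, size):
            """The query a transport kernel makes at every step: walk down to
            the medium containing voxel (x, y, z)."""
            while isinstance(tree, list):
                size //= 2
                tree = tree[(x >= size) * 4 + (y >= size) * 2 + (z >= size)]
                x, y, z = x % size, y % size, z % size
            return tree

        vol = np.zeros((8, 8, 8), dtype=int)
        vol[4:, :, :] = 1                           # two-media toy phantom
        tree = build_octree(vol)
        print(locate(tree, 6, 1, 1, 8))             # -> 1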

  7. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim method was better. Exact formulas, and formulas for the simple toy models, are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k^2.

  8. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim method was better. Exact formulas, and formulas for the simple toy models, are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k^2.
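
    The toy model below (a Python sketch written for this record, not the paper's own code) reproduces the flavour of the comparison: a linear observable with K independent systematic effects plus statistical MC noise. The unisim estimate absorbs the noise variance once per parameter, while the multisim estimate absorbs it only once overall, which is why multisim wins when the statistical error is comparable to the individual systematics. All values are illustrative.

        import numpy as np

        rng = np.random.default_rng(4)
        K = 10                                   # systematic parameters
        sigma = rng.uniform(0.5, 1.5, K)         # true 1-sigma effect of each
        stat = 2.0                               # statistical error of one MC run

        def unisim():
            """One MC run per parameter, varied by 1 sigma; sum of squared shifts."""
            shifts = sigma + stat * rng.standard_normal(K)
            return np.sum(shifts ** 2)           # biased upward by K * stat**2

        def multisim(n_runs=100):
            """All parameters varied in every run; sample variance across runs."""
            draws = rng.standard_normal((n_runs, K)) @ sigma \
                    + stat * rng.standard_normal(n_runs)
            return np.var(draws, ddof=1)         # biased upward by stat**2 only

        truth = np.sum(sigma ** 2)
        print("truth    ", truth)
        print("unisim   ", np.mean([unisim() for _ in range(400)]))
        print("multisim ", np.mean([multisim() for _ in range(400)]))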

  9. Applying Monte Carlo Simulation to Launch Vehicle Design and Requirements Verification

    Science.gov (United States)

    Hanson, John M.; Beard, Bernard B.

    2010-01-01

    This paper is focused on applying Monte Carlo simulation to probabilistic launch vehicle design and requirements verification. The approaches developed in this paper can be applied to other complex design efforts as well. Typically the verification must show that requirement "x" is met for at least "y" % of cases, with, say, 10% consumer risk or 90% confidence. Two particular aspects of making these runs for requirements verification will be explored in this paper. First, there are several types of uncertainties that should be handled in different ways, depending on when they become known (or not). The paper describes how to handle different types of uncertainties and how to develop vehicle models that can be used to examine their characteristics. This includes items that are not known exactly during the design phase but that will be known for each assembled vehicle (can be used to determine the payload capability and overall behavior of that vehicle), other items that become known before or on flight day (can be used for flight day trajectory design and go/no go decision), and items that remain unknown on flight day. Second, this paper explains a method (order statistics) for determining whether certain probabilistic requirements are met or not and enables the user to determine how many Monte Carlo samples are required. Order statistics is not new, but may not be known in general to the GN&C community. The methods also apply to determining the design values of parameters of interest in driving the vehicle design. The paper briefly discusses when it is desirable to fit a distribution to the experimental Monte Carlo results rather than using order statistics.
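
    The order-statistics argument reduces, in the simplest case, to a binomial tail computation. The following Python sketch is an illustration consistent with the approach named in the abstract, not code from the paper: it returns the number of Monte Carlo runs needed to claim a coverage requirement at a given confidence level.

        import math

        def runs_needed(coverage=0.99, confidence=0.90, failures_allowed=0):
            """Smallest n such that observing <= failures_allowed requirement
            violations in n Monte Carlo runs demonstrates the requirement for
            `coverage` of cases at the given confidence level."""
            n = failures_allowed + 1
            while True:
                p = 1.0 - coverage               # worst acceptable failure rate
                tail = sum(math.comb(n, k) * p ** k * (1.0 - p) ** (n - k)
                           for k in range(failures_allowed + 1))
                if tail <= 1.0 - confidence:
                    return n
                n += 1

        print(runs_needed(0.99, 0.90))           # 230 runs with zero failures
        print(runs_needed(0.99, 0.90, 3))        # more runs if 3 failures allowed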

  10. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    Science.gov (United States)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this method of evaluating the geometric mean suffers from the numerical problem of a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used in which multiple MCMC runs are performed with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a groundwater modeling case considering four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general, and can be used for a wide range of environmental problems for model uncertainty quantification.
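
    A hedged Python sketch of the thermodynamic (power-posterior) idea on a deliberately trivial one-parameter model: chains are run at several heating coefficients beta, and the mean log-likelihood is integrated over beta to approximate the log marginal likelihood. All model choices below are illustrative, not from the paper.

        import numpy as np

        rng = np.random.default_rng(5)
        data = rng.normal(1.0, 1.0, 50)          # observations, known unit variance

        def log_prior(t):
            return -0.5 * t * t / 100.0 - 0.5 * np.log(2.0 * np.pi * 100.0)

        def log_like(t):
            return -0.5 * np.sum((data - t) ** 2) - 0.5 * data.size * np.log(2.0 * np.pi)

        def mean_loglike(beta, n=5000, step=0.8):
            """Metropolis chain targeting prior * likelihood**beta; returns the
            post-burn-in average log-likelihood at this heating coefficient."""
            t, ll, trace = 0.0, log_like(0.0), []
            for _ in range(n):
                t2 = t + step * rng.standard_normal()
                ll2 = log_like(t2)
                if np.log(rng.random()) < beta * (ll2 - ll) + log_prior(t2) - log_prior(t):
                    t, ll = t2, ll2
                trace.append(ll)
            return np.mean(trace[n // 5:])

        betas = np.linspace(0.0, 1.0, 21)        # heating coefficients 0 ... 1
        print("ln Z ~", np.trapz([mean_loglike(b) for b in betas], betas))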

  11. Monte Carlo calculations on a parallel computer using MORSE-C.G

    International Nuclear Information System (INIS)

    Wood, J.

    1995-01-01

    The general-purpose particle transport Monte Carlo code, MORSE-C.G., is implemented on a parallel transputer-based computing system having MIMD architecture. Example problems are solved which are representative of the three principal types of problem that can be solved by the original serial code, namely fixed source, eigenvalue (k-eff) and time-dependent. The results from the parallelized version of the code are compared in tables with the serial code run on a mainframe serial computer, and with an independent, deterministic transport code. The performance of the parallel computer as the number of processors is varied is shown graphically. For the parallel strategy used, the loss of efficiency as the number of processors is increased is investigated. (author)

  12. MCBooster: a library for fast Monte Carlo generation of phase-space decays on massively parallel platforms.

    Science.gov (United States)

    Alves Júnior, A. A.; Sokoloff, M. D.

    2017-10-01

    MCBooster is a header-only, C++11-compliant library that provides routines to generate and perform calculations on large samples of phase-space Monte Carlo events. To achieve superior performance, MCBooster is capable of performing most of its calculations in parallel on CUDA- and OpenMP-enabled devices. MCBooster is built on top of the Thrust library and runs on Linux systems. This contribution summarizes the main features of MCBooster. A basic description of the user interface and some examples of applications are provided, along with measurements of performance in a variety of environments.
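
    MCBooster itself is C++/Thrust; purely as an illustration of what bulk generation of phase-space decays means, here is a vectorized Python sketch of the simplest case: a two-body decay generated isotropically in the parent rest frame from the Källén breakup momentum. Masses and names are illustrative, not MCBooster's API.

        import numpy as np

        rng = np.random.default_rng(6)

        def two_body_decay(M, m1, m2, n):
            """n two-body phase-space decays M -> m1 m2 in the parent rest
            frame, generated in bulk over events rather than one at a time."""
            lam = (M**2 - (m1 + m2) ** 2) * (M**2 - (m1 - m2) ** 2)  # Kallen fn
            p = np.sqrt(lam) / (2.0 * M)                             # breakup momentum
            cos_t = rng.uniform(-1.0, 1.0, n)                        # isotropic angles
            phi = rng.uniform(0.0, 2.0 * np.pi, n)
            sin_t = np.sqrt(1.0 - cos_t**2)
            px, py, pz = p * sin_t * np.cos(phi), p * sin_t * np.sin(phi), p * cos_t
            E1 = np.full(n, np.sqrt(p**2 + m1**2))
            E2 = np.full(n, np.sqrt(p**2 + m2**2))
            return (np.stack([E1, px, py, pz], axis=1),              # (E, px, py, pz)
                    np.stack([E2, -px, -py, -pz], axis=1))

        k, pi = two_body_decay(1.86484, 0.493677, 0.13957, 1_000_000)  # D0 -> K- pi+, GeV
        print((k[:, 0] + pi[:, 0]).max())                            # == parent mass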

  13. GridPP - Preparing for LHC Run 2 and the Wider Context

    Science.gov (United States)

    Coles, Jeremy

    2015-12-01

    This paper elaborates upon the operational status and directions within the UK Computing for Particle Physics (GridPP) project as it approaches LHC Run 2. It details the pressures that have been gradually reshaping the deployed hardware and middleware environments at GridPP sites - from the increasing adoption of larger multicore nodes to the move towards alternative batch systems and cloud alternatives - as well as changes being driven by funding considerations. The paper highlights work being done with non-LHC communities and describes some of the early outcomes of adopting a generic DIRAC based job submission and management framework. The paper presents results from an analysis of how GridPP effort is distributed across various deployment and operations tasks and how this may be used to target further improvements in efficiency.

  14. Equations describing contamination of run of mine coal with dirt in the Upper Silesian Coalfield

    Energy Technology Data Exchange (ETDEWEB)

    Winiewski, J J

    1977-12-01

    Statistical analysis proved that contamination with dirt of run-of-mine coal from seams in the 200 to 600 series of the Upper Silesian Coalfield depends on the average ash content of the given raw coal. A regression equation is deduced for the coarse and fine sizes of each coal. These equations can be used to predict the degree of contamination of run-of-mine coal to an accuracy sufficient for coal preparation purposes.

  15. SRNA-2K5, Proton Transport Using 3-D by Monte Carlo Techniques

    International Nuclear Information System (INIS)

    Ilic, Radovan D.

    2005-01-01

    1 - Description of program or function: SRNA-2K5 performs Monte Carlo simulation of proton transport from a 3D source in 3D geometry of arbitrary materials. The proton transport is based on a condensed-history model and on a model of the decay of compound nuclei created in nonelastic nuclear interactions by proton absorption. 2 - Methods: The SRNA-2K5 package was developed for time-independent simulation of proton transport by Monte Carlo techniques for numerical experiments in complex geometry, using PENGEOM from PENELOPE, with different material compositions and an arbitrary spectrum of protons generated from the 3D source. The package was developed for 3D proton dose distribution calculations in proton therapy and dosimetry, and it is based on the theory of multiple scattering. Compound-nucleus decay is simulated by our model and the Russian MSDM model, using ICRU 49 and ICRU 63 data. If the proton trajectory is divided into a great number of steps, the proton's passage can be simulated according to Berger's condensed random walk model; the step length is determined by the conditions on the angular distribution and the energy-loss fluctuation. The physical picture of these processes is described by the stopping power, Moliere's angular distribution, Vavilov's distribution with Sulek's correction over all electron orbits, and Chadwick's cross sections for nonelastic nuclear interactions, obtained with his GNASH code. According to this physical picture of the proton's passage, and with the probabilities of proton transitions from one stage to the next prepared by the SRNADAT program, the simulation of proton transport in all SRNA codes runs according to the usual Monte Carlo scheme: (i) a proton with energy, position and solid angle randomly chosen from the prepared spectrum is emitted from the source; (ii) the proton loses its average energy on the step; (iii) on that step, the proton experiences a great number of collisions, and it changes its direction of movement by an angle randomly chosen from the angular distribution; (iv) a random fluctuation is added to the average energy loss; (v
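
    Steps (i)-(iv) of the scheme can be caricatured in a few lines. The Python sketch below is emphatically not SRNA: a Bragg-Kleeman-style range-energy fit stands in for the stopping power, a Gaussian for the Vavilov straggling, and a Highland-type Gaussian deflection for the Moliere distribution; all constants are rough illustrative values for water.

        import numpy as np

        rng = np.random.default_rng(7)

        def condensed_history(E0=150.0, step=1.0, a=2.585e-3, b=1.738, X0=36.1):
            """One proton followed with steps (i)-(iv): mean energy loss from a
            range fit R = a*E**b (water; E in MeV, lengths in cm), toy Gaussian
            straggling, and one sampled net deflection per step."""
            E, z, theta = E0, 0.0, 0.0
            while E > 2.0:
                dE = step * E ** (1.0 - b) / (a * b)     # (ii) mean loss = step / (dR/dE)
                dE *= 1.0 + 0.1 * rng.standard_normal()  # (iv) straggling stand-in
                theta += (14.1 / E) * np.sqrt(step / X0) * rng.standard_normal()  # (iii)
                z += step * np.cos(theta)
                E -= dE
            return z                                     # stopping depth

        depths = [condensed_history() for _ in range(1000)]
        print(np.mean(depths), np.std(depths))           # ~range in water, straggling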

  16. OGRE, Monte-Carlo System for Gamma Transport Problems

    International Nuclear Information System (INIS)

    1984-01-01

    1 - Nature of physical problem solved: The OGRE programme system was designed to calculate, by Monte Carlo methods, any quantity related to gamma-ray transport. The system is represented by two examples - OGRE-P1 and OGRE-G. The OGRE-P1 programme is a simple prototype which calculates dose rate on one side of a slab due to a plane source on the other side. The OGRE-G programme, a prototype of a programme utilizing a general-geometry routine, calculates dose rate at arbitrary points. A very general source description in OGRE-G may be employed by reading a tape prepared by the user. 2 - Method of solution: Case histories of gamma rays in the prescribed geometry are generated and analyzed to produce averages of any desired quantity which, in the case of the prototypes, are gamma-ray dose rates. The system is designed to achieve generality by ease of modification. No importance sampling is built into the prototypes, a very general geometry subroutine permits the treatment of complicated geometries. This is essentially the same routine used in the O5R neutron transport system. Boundaries may be either planes or quadratic surfaces, arbitrarily oriented and intersecting in arbitrary fashion. Cross section data is prepared by the auxiliary master cross section programme XSECT which may be used to originate, update, or edit the master cross section tape. The master cross section tape is utilized in the OGRE programmes to produce detailed tables of macroscopic cross sections which are used during the Monte Carlo calculations. 3 - Restrictions on the complexity of the problem: Maximum cross-section array information may be estimated by a given formula for a specific problem. The number of regions must be less than or equal to 50

  17. Running and Osteoarthritis: Does Recreational or Competitive Running Increase the Risk?

    Science.gov (United States)

    2017-06-01

    Exercise, like running, is good for overall health and, specifically, our hearts, lungs, muscles, bones, and brains. However, some people are concerned about the impact of running on long-term joint health. Does running lead to higher rates of arthritis in knees and hips? While many researchers find that running protects bone health, others are concerned that this exercise poses a high risk for age-related changes to hips and knees. A study published in the June 2017 issue of JOSPT suggests that the difference in these outcomes depends on the frequency and intensity of running. J Orthop Sports Phys Ther 2017;47(6):391. doi:10.2519/jospt.2017.0505.

  18. The effect of footwear on running performance and running economy in distance runners.

    Science.gov (United States)

    Fuller, Joel T; Bellenger, Clint R; Thewlis, Dominic; Tsiros, Margarita D; Buckley, Jonathan D

    2015-03-01

    The effect of footwear on running economy has been investigated in numerous studies. However, no systematic review and meta-analysis has synthesised the available literature, and the effect of footwear on running performance is not known. The aim of this systematic review and meta-analysis was to investigate the effect of footwear on running performance and running economy in distance runners, by reviewing controlled trials that compare different footwear conditions or compare footwear with barefoot. The Web of Science, Scopus, MEDLINE, CENTRAL (Cochrane Central Register of Controlled Trials), EMBASE, AMED (Allied and Complementary Medicine), CINAHL and SPORTDiscus databases were searched from inception up until April 2014. Included articles reported on controlled trials that examined the effects of footwear or footwear characteristics (including shoe mass, cushioning, motion control, longitudinal bending stiffness, midsole viscoelasticity, drop height and comfort) on running performance or running economy and were published in a peer-reviewed journal. Of the 1,044 records retrieved, 19 studies were included in the systematic review and 14 studies were included in the meta-analysis. No studies were identified that reported effects on running performance. Individual studies reported significant, but trivial, beneficial effects on running economy for comfortable and stiff-soled shoes [standardised mean difference (SMD) ...], a beneficial effect on running economy for cushioned shoes (SMD = 0.37; P < ...), a beneficial effect on running economy for training in minimalist shoes (SMD = 0.79; P < ...), and beneficial effects on running economy for light shoes and barefoot compared with heavy shoes (SMD ...); ... running was identified (P < ...) ... running economy. Certain models of footwear and footwear characteristics can improve running economy. Future research in footwear performance should include measures of running performance.

  19. Multilevel Monte Carlo methods using ensemble level mixed MsFEM for two-phase flow and transport simulations

    KAUST Repository

    Efendiev, Yalchin R.

    2013-08-21

    In multilevel Monte Carlo (MLMC) methods, more accurate (and expensive) forward simulations are run with fewer samples, while less accurate (and inexpensive) forward simulations are run with a larger number of samples. Selecting the number of expensive and inexpensive simulations based on the number of coarse degrees of freedom, one can show that MLMC methods can provide better accuracy at the same cost as Monte Carlo (MC) methods. The main objective of the paper is twofold. First, we would like to compare NLSO and LSO mixed MsFEMs. Further, we use both approaches in the context of MLMC to speed up MC calculations. © 2013 Springer Science+Business Media Dordrecht.
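
    A self-contained Python sketch of the multilevel structure referred to above, with a toy stand-in for the coarse/fine forward solvers (the MsFEM machinery is far beyond an abstract): the telescoping sum uses many samples of the cheap coarse level and geometrically fewer samples of the expensive correction terms, with each correction evaluated on the same random inputs at both levels.

        import numpy as np

        rng = np.random.default_rng(8)

        def forward(x, level, h0=0.5):
            """Toy forward model: finer levels have smaller discretization
            error. A real application would call the flow solver here."""
            h = h0 / 2 ** level
            return np.sin(x) + h * np.cos(3.0 * x)   # exact value + O(h) error

        def mlmc(levels=4, n0=40_000):
            """Telescoping estimator E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]:
            many cheap coarse samples, few expensive fine ones."""
            total = 0.0
            for l in range(levels + 1):
                n = max(n0 // 4 ** l, 10)
                x = rng.standard_normal(n)           # random model input
                y = forward(x, 0) if l == 0 else forward(x, l) - forward(x, l - 1)
                total += y.mean()                    # same x in both terms!
            return total

        print("MLMC     ", mlmc())
        print("fine-only", forward(rng.standard_normal(1000), 4).mean())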

  20. Numerical experiment on variance biases and Monte Carlo neutronics analysis with thermal hydraulic feedback

    International Nuclear Information System (INIS)

    Hyung, Jin Shim; Beom, Seok Han; Chang, Hyo Kim

    2003-01-01

    The Monte Carlo (MC) power method based on a fixed number of fission sites at the beginning of each cycle is known to cause biases in the variances of the k-eigenvalue (keff) and fission reaction rate estimates. Because of these biases, the apparent variances of the keff and fission reaction rate estimates from a single MC run tend to be smaller or larger than the real variances of the corresponding quantities, depending on the degree of inter-generational correlation of the sample. We demonstrate this through a numerical experiment involving 100 independent MC runs for the neutronics analysis of a 17 x 17 fuel assembly of a pressurized water reactor (PWR). We also demonstrate through the numerical experiment that Gelbard and Prael's batch method and Ueki et al.'s covariance estimation method enable one to estimate the approximate real variances of the keff and fission reaction rate estimates from a single MC run. We then show that using the approximate real variances from the two bias-predicting methods, instead of the apparent variances, provides an efficient MC power iteration scheme, as required in the MC neutronics analysis of a real system to determine the pin power distribution consistent with the thermal-hydraulic (TH) conditions of the individual pins of the system. (authors)
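
    The batch idea attributed to Gelbard and Prael can be sketched in a few lines of Python (an illustration of the general technique, not the authors' code): cycle estimates are averaged into batches long compared with the inter-cycle correlation, so the batch means are nearly independent and their scatter approximates the real variance. The AR(1) toy series below stands in for correlated cycle-wise keff estimates.

        import numpy as np

        def batch_variance(keff, batch_size=10):
            """Batch estimate of the real variance of the cycle-averaged mean."""
            k = np.asarray(keff)
            nb = k.size // batch_size
            means = k[:nb * batch_size].reshape(nb, batch_size).mean(axis=1)
            return means.var(ddof=1) / nb            # variance of the grand mean

        rng = np.random.default_rng(9)
        eps, rho = rng.standard_normal(5000), 0.6    # AR(1) mimics source memory
        for i in range(1, eps.size):
            eps[i] = rho * eps[i - 1] + np.sqrt(1.0 - rho**2) * eps[i]
        keff = 1.0 + 1.0e-3 * eps
        apparent = keff.var(ddof=1) / keff.size      # naive single-run estimate
        print("apparent:", apparent, " batched:", batch_variance(keff))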

  1. Recursive Monte Carlo method for deep-penetration problems

    International Nuclear Information System (INIS)

    Goldstein, M.; Greenspan, E.

    1980-01-01

    The Recursive Monte Carlo (RMC) method developed for estimating importance function distributions in deep-penetration problems is described. Unique features of the method, including the ability to infer the importance function distribution pertaining to many detectors from, essentially, a single M.C. run and the ability to use the history tape created for a representative region to calculate the importance function in identical regions, are illustrated. The RMC method is applied to the solution of two realistic deep-penetration problems - a concrete shield problem and a Tokamak major penetration problem. It is found that the RMC method can provide the importance function distributions, required for importance sampling, with accuracy that is suitable for an efficient solution of the deep-penetration problems considered. The use of the RMC method improved the solution efficiency of these two problems by one to three orders of magnitude. 8 figures, 4 tables

  2. Lectures on Monte Carlo methods

    CERN Document Server

    Madras, Neal

    2001-01-01

    Monte Carlo methods form an experimental branch of mathematics that employs simulations driven by random number generators. These methods are often used when others fail, since they are much less sensitive to the "curse of dimensionality", which plagues deterministic methods in problems with a large number of variables. Monte Carlo methods are used in many fields: mathematics, statistics, physics, chemistry, finance, computer science, and biology, for instance. This book is an introduction to Monte Carlo methods for anyone who would like to use these methods to study various kinds of mathemati
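
    The insensitivity to dimension mentioned in this blurb is easy to demonstrate: the sample-mean error of the estimator below decays like 1/sqrt(n) whatever the dimension. A minimal sketch (the unit-ball example is ours, not the book's):

        import numpy as np

        def mc_volume_unit_ball(dim, n=200_000, seed=0):
            # Estimate the volume of the unit ball in `dim` dimensions by
            # sampling the enclosing cube [-1, 1]^dim; the Monte Carlo
            # error shrinks like 1/sqrt(n) regardless of dimension.
            rng = np.random.default_rng(seed)
            pts = rng.uniform(-1.0, 1.0, size=(n, dim))
            inside = (pts ** 2).sum(axis=1) <= 1.0
            return inside.mean() * 2.0 ** dim   # hit fraction * cube volume

        for d in (2, 3, 6):
            print(d, mc_volume_unit_ball(d))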

  3. Hybrid SN/Monte Carlo research and results

    International Nuclear Information System (INIS)

    Baker, R.S.

    1993-01-01

    The neutral particle transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S N ) and stochastic (Monte Carlo) methods are applied. The Monte Carlo and S N regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The hybrid Monte Carlo/S N method provides a new means of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S N is well suited for by themselves. The hybrid method has been successfully applied to realistic shielding problems. The vectorized Monte Carlo algorithm in the hybrid method has been ported to the massively parallel architecture of the Connection Machine. Comparisons of performance on a vector machine (Cray Y-MP) and the Connection Machine (CM-2) show that significant speedups are obtainable for vectorized Monte Carlo algorithms on massively parallel machines, even when realistic problems requiring variance reduction are considered. However, the architecture of the Connection Machine does place some limitations on the regime in which the Monte Carlo algorithm may be expected to perform well

  4. Cumulative percent energy deposition of photon beam incident on different targets, simulated by Monte Carlo

    International Nuclear Information System (INIS)

    Kandic, A.; Jevremovic, T.; Boreli, F.

    1989-01-01

    Monte Carlo simulation (without secondary radiation) of the standard photon interactions (Compton scattering, photoelectric absorption and pair production) for a complex slab geometry is used in the numerical code ACCA. A typical ACCA run will yield: (a) the transmission of primary photon radiation, differential in energy, (b) the spectrum of energy deposited in the target as a function of position and (c) the cumulative percent energy deposition as a function of position. The cumulative percent energy deposition of a monoenergetic photon beam incident on simple and complex tissue slabs and on an Fe slab is presented in this paper. (author). 5 refs.; 2 figs
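
    A drastically simplified sketch of this kind of tally (invented one-dimensional physics, not the ACCA code): photons fly exponential free paths through a slab, deposit energy at interaction sites, and the cumulative percent energy deposition is accumulated per depth bin.

        import numpy as np

        rng = np.random.default_rng(2)

        MU = 0.2            # total attenuation coefficient, 1/cm (invented)
        P_ABS = 0.4         # chance an interaction is photoelectric absorption
        THICK = 10.0        # slab thickness, cm
        BINS = np.zeros(20) # energy deposited per depth bin

        def history(e=1.0):
            # One photon: fly an exponential free path, then either absorb
            # fully or (crude Compton stand-in) deposit half and scatter.
            x, direction = 0.0, 1.0
            while e > 0.01:
                x += direction * rng.exponential(1.0 / MU)
                if x < 0.0 or x > THICK:
                    return                                  # escaped the slab
                b = min(int(x / THICK * BINS.size), BINS.size - 1)
                if rng.random() < P_ABS:
                    BINS[b] += e
                    return
                BINS[b] += 0.5 * e
                e *= 0.5
                direction = rng.choice([-1.0, 1.0])         # 1-D "scatter"

        for _ in range(50_000):
            history()

        print(100.0 * np.cumsum(BINS) / BINS.sum())  # cumulative % vs depth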

  5. Monte Carlo simulation in nuclear medicine

    International Nuclear Information System (INIS)

    Morel, Ch.

    2007-01-01

    The Monte Carlo method allows for simulating random processes by using series of pseudo-random numbers. It became an important tool in nuclear medicine to assist in the design of new medical imaging devices, optimise their use and analyse their data. Presently, the sophistication of the simulation tools allows the introduction of Monte Carlo predictions in data correction and image reconstruction processes. The ability to simulate time-dependent processes opens up new horizons for Monte Carlo simulation in nuclear medicine. In the near future, these developments will make it possible to tackle imaging and dosimetry issues simultaneously, and Monte Carlo simulations may soon become part of the nuclear medicine diagnostic process. This paper describes some Monte Carlo method basics and the sampling methods that were developed for it. It gives a referenced list of different simulation software used in nuclear medicine and enumerates some of their present and prospective applications. (author)

  6. Running coupling from gluon and ghost propagators in the Landau gauge: Yang-Mills theories with adjoint fermions

    Science.gov (United States)

    Bergner, Georg; Piemonte, Stefano

    2018-04-01

    Non-Abelian gauge theories with fermions transforming in the adjoint representation of the gauge group (AdjQCD) are a fundamental ingredient of many models that describe the physics beyond the Standard Model. Two relevant examples are N=1 supersymmetric Yang-Mills (SYM) theory and minimal walking technicolor, which are gauge theories coupled to one adjoint Majorana and two adjoint Dirac fermions, respectively. While confinement is a property of N=1 SYM, minimal walking technicolor is expected to be infrared conformal. We study the propagators of ghost and gluon fields in the Landau gauge to compute the running coupling in the MiniMOM scheme. We analyze several different ensembles of lattice Monte Carlo simulations for SU(2) adjoint QCD with N_f = 1/2, 1, 3/2 and 2 Dirac fermions. We show how the running of the coupling changes as the number of interacting fermions is increased towards the conformal window.

  7. Reactor modification, preparation and operation

    International Nuclear Information System (INIS)

    Weill, J.; Furet, J.; Baillet, J.; Donvez, G.; Duchene, J.; Gras, R.; Mercier, R.; Chenouard, J.; Leconte, J.

    1962-01-01

    In the course of preparations for the dosimetry experiment at the R-B reactor, the control and safety equipment of the reactor was found to be inadequate for operation at a constant power level of several watts. After the CEA completed its study of the control and safety issues, the safety and control requirements were defined for the purpose of the Joint Dosimetry Experiment. Preparations for the Dosimetry Experiment included: installation of equipment for control and safety of the reactor; supply of 6570 kg of heavy water by the UK; reinforcement of the reactor wall on the outside of the building; construction of the protection of the control room; and start-up, measurement of the critical heavy water level, and a check of the control and safety rod worths. After the final check of the safety rod mechanisms, eight runs were performed at a power of 5 W, and then a 1 kW run was carried out, with the power held at this level for 30 min by the automatic control system

  8. Reactor modification, preparation and operation

    Energy Technology Data Exchange (ETDEWEB)

    Weill, J; Furet, J; Baillet, J; Donvez, G; Duchene, J; Gras, R; Mercier, R [Electronics Dept., Independent Section of Reactor Electronics, Saclay (France); Chenouard, J; Leconte, J [Dept. of Physical Chemistry, Stable Isotopes Section, Saclay (France)

    1962-03-01

    In the course of preparations for the dosimetry experiment at the R-B reactor, the control and safety equipment of the reactor was found to be inadequate for operation at a constant power level of several watts. After the CEA completed its study of the control and safety issues, the safety and control requirements were defined for the purpose of the Joint Dosimetry Experiment. Preparations for the Dosimetry Experiment included: installation of equipment for control and safety of the reactor; supply of 6570 kg of heavy water by the UK; reinforcement of the reactor wall on the outside of the building; construction of the protection of the control room; and start-up, measurement of the critical heavy water level, and a check of the control and safety rod worths. After the final check of the safety rod mechanisms, eight runs were performed at a power of 5 W, and then a 1 kW run was carried out, with the power held at this level for 30 min by the automatic control system.

  10. Quantum Monte Carlo: Faster, More Reliable, And More Accurate

    Science.gov (United States)

    Anderson, Amos Gerald

    2010-06-01

    The combination of Generalized Valence Bond wavefunctions, improved correlation functions, and stabilized weighting techniques for calculations run on graphics cards represents a new way of using Quantum Monte Carlo to study arbitrarily sized molecules.

  11. Triathlon: running injuries.

    Science.gov (United States)

    Spiker, Andrea M; Dixit, Sameer; Cosgarea, Andrew J

    2012-12-01

    The running portion of the triathlon represents the final leg of the competition and, by some reports, the most important part in determining a triathlete's overall success. Although most triathletes spend most of their training time on cycling, running injuries are the most common injuries encountered. Common causes of running injuries include overuse, lack of rest, and activities that aggravate biomechanical predisposers of specific injuries. We discuss the running-associated injuries in the hip, knee, lower leg, ankle, and foot of the triathlete, and the causes, presentation, evaluation, and treatment of each.

  12. Comparison of 2 accelerators of Monte Carlo radiation transport calculations, NVIDIA tesla M2090 GPU and Intel Xeon Phi 5110p coprocessor: a case study for X-ray CT Imaging Dose calculation

    International Nuclear Information System (INIS)

    Liu, T.; Xu, X.G.; Carothers, C.D.

    2013-01-01

    Hardware accelerators are currently becoming increasingly important in boosting high performance computing systems. In this study, we tested the performance of two accelerator models, the NVIDIA Tesla M2090 GPU and the Intel Xeon Phi 5110p coprocessor, using a new Monte Carlo photon transport package, ARCHER-CT, that we have developed for fast CT imaging dose calculation. The package contains three code variants, ARCHER-CT(CPU), ARCHER-CT(GPU) and ARCHER-CT(COP), to run in parallel on the multi-core CPU, GPU and coprocessor architectures respectively. A detailed GE LightSpeed Multi-Detector Computed Tomography (MDCT) scanner model and a family of voxel patient phantoms were included in the code to calculate absorbed dose to radiosensitive organs under specified scan protocols. The results from ARCHER agreed well with those from the production code Monte Carlo N-Particle eXtended (MCNPX). It was found that all the code variants were significantly faster than the parallel MCNPX running on 12 MPI processes, and that the GPU and coprocessor performed equally well, being 2.89-4.49 and 3.01-3.23 times faster than the parallel ARCHER-CT(CPU) running with 12 hyper-threads. (authors)

  13. Comparison of Two Accelerators for Monte Carlo Radiation Transport Calculations, NVIDIA Tesla M2090 GPU and Intel Xeon Phi 5110p Coprocessor: A Case Study for X-ray CT Imaging Dose Calculation

    Science.gov (United States)

    Liu, Tianyu; Xu, X. George; Carothers, Christopher D.

    2014-06-01

    Hardware accelerators are currently becoming increasingly important in boosting high performance computing systems. In this study, we tested the performance of two accelerator models, the NVIDIA Tesla M2090 GPU and the Intel Xeon Phi 5110p coprocessor, using a new Monte Carlo photon transport package called ARCHER-CT we have developed for fast CT imaging dose calculation. The package contains three code variants, ARCHER-CT(CPU), ARCHER-CT(GPU) and ARCHER-CT(COP), to run in parallel on the multi-core CPU, GPU and coprocessor architectures respectively. A detailed GE LightSpeed Multi-Detector Computed Tomography (MDCT) scanner model and a family of voxel patient phantoms were included in the code to calculate absorbed dose to radiosensitive organs under specified scan protocols. The results from ARCHER agreed well with those from the production code Monte Carlo N-Particle eXtended (MCNPX). It was found that all the code variants were significantly faster than the parallel MCNPX running on 12 MPI processes, and that the GPU and coprocessor performed equally well, being 2.89-4.49 and 3.01-3.23 times faster than the parallel ARCHER-CT(CPU) running with 12 hyper-threads.

  14. KENO IV: an improved Monte Carlo criticality program

    International Nuclear Information System (INIS)

    Petrie, L.M.; Cross, N.F.

    1975-11-01

    KENO IV is a multigroup Monte Carlo criticality program written for the IBM 360 computers. It executes rapidly and is flexibly dimensioned so the allowed size of a problem (i.e., the number of energy groups, number of geometry cards, etc., are arbitrary) is limited only by the total data storage required. The input data, with the exception of cross sections, fission spectra and albedos, may be entered in free form. The geometry input is quite simple to prepare and complicated three-dimensional systems can often be described with a minimum of effort. The results calculated by KENO IV include k-effective, lifetime and generation time, energy-dependent leakages and absorptions, energy- and region-dependent fluxes and region-dependent fission densities. Criticality searches can be made on unit dimensions or on the number of units in an array. A summary of the theory utilized by KENO IV, a section describing the logical program flow, a compilation of the error messages printed by the code and a comprehensive data guide for preparing input to the code are presented. 14 references

  15. LHC Report: Tests of new LHC running modes

    CERN Document Server

    Verena Kain for the LHC team

    2012-01-01

    On 13 September, the LHC collided lead ions with protons for the first time. This outstanding achievement was key preparation for the planned 2013 operation in this mode. Outside of two special physics runs, the LHC has continued productive proton-proton luminosity operation.   Celebrating proton-ion collisions. The first week of September added another 1 fb-1 of integrated luminosity to ATLAS’s and CMS’s proton-proton data set. It was a week of good and steady production mixed with the usual collection of minor equipment faults. The peak performance was slightly degraded at the start of the week but thanks to the work of the teams in the LHC injectors the beam brightness – and thus the LHC peak performance – were restored to previous levels by the weekend. The LHC then switched to new running modes and spectacularly proved its potential as a multi-purpose machine. This is due in large part to the LHC equipment and controls, which have been designed wi...

  16. Belo Monte hydropower project: actual studies; AHE Belo Monte: os estudos atuais

    Energy Technology Data Exchange (ETDEWEB)

    Figueira Netto, Carlos Alberto de Moya [CNEC Engenharia S.A., Sao Paulo, SP (Brazil); Rezende, Paulo Fernando Vieira Souto [Centrais Eletricas Brasileiras S.A. (ELETROBRAS), Rio de Janeiro, RJ (Brazil)

    2008-07-01

    This article presents the evolution of the studies of the Belo Monte Hydro Power Project (HPP), from the initial inventory studies of the Xingu River in 1979 up to the current studies for conclusion of the Technical, Economic and Environmental Feasibility Studies of the Belo Monte Hydro Power Project, as authorized by the Brazilian National Congress. The current studies characterize the Belo Monte HPP with an installed capacity of 11,181.3 MW (20 units of 550 MW in the main power house and 7 units of 25.9 MW in the additional power house), connected to the Brazilian Interconnected Power Grid, allowing it to generate 4,796 average MW of firm energy without depending on any flow-rate regularization of the upstream Xingu River, while flooding only 441 km2, of which approximately 200 km2 correspond to the normal annual wet-season flooding of the Xingu River. (author)

  17. Post-processing of Monte Carlo simulations for rapid BNCT source optimization studies

    International Nuclear Information System (INIS)

    Bleuel, D.L.; Chu, W.T.; Donahue, R.J.; Ludewigt, B.A.; Vujic, J.

    2000-01-01

    A great advantage of some neutron sources, such as accelerator-produced sources, is that they can be tuned to produce different spectra. Unfortunately, optimization studies are often time-consuming and difficult, as they require a lengthy Monte Carlo simulation for each source. When multiple characteristics, such as the energy, angle, and spatial distribution of a neutron beam, are allowed to vary, an overwhelming number of simulations may be required. Many optimization studies, therefore, suffer from a small number of data points, restrictive treatment conditions, or poor statistics. By scoring pertinent information from every particle tally in a Monte Carlo simulation and then applying appropriate source-variable weight factors in a post-processing algorithm, a single simulation can be used to model any number of sources. Through this method, the response to a new source can be modeled in minutes or seconds, rather than hours or days, allowing for the analysis of truly variable source conditions at much greater resolution than is normally possible when a new simulation must be run for each data point in a study. This method has been benchmarked and used to recreate optimization studies in a small fraction of the time spent in the original studies. (author)
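
    The post-processing idea reduces, at its core, to storing tallies per source bin and reweighting them, as in this hypothetical sketch (the response numbers are invented):

        import numpy as np

        # Tallies recorded once, per (source-energy-bin, detector) pair:
        # each entry is the detector response per source particle emitted
        # in that bin. Numbers are made up for illustration.
        response = np.array([[0.9, 0.2],     # thermal source bin
                             [0.5, 0.6],     # epithermal bin
                             [0.1, 0.8]])    # fast bin

        def detector_dose(spectrum):
            # Any candidate source spectrum is just a weight vector over
            # the pre-tallied bins, so evaluating a new source is a dot
            # product instead of a new transport run.
            w = np.asarray(spectrum, dtype=float)
            return (w / w.sum()) @ response

        # Sweep many candidate spectra in milliseconds.
        for spec in ([1, 0, 0], [0.2, 0.7, 0.1], [0, 0.3, 0.7]):
            print(spec, detector_dose(spec))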

  18. Habitual Minimalist Shod Running Biomechanics and the Acute Response to Running Barefoot.

    Science.gov (United States)

    Tam, Nicholas; Darragh, Ian A J; Divekar, Nikhil V; Lamberts, Robert P

    2017-09-01

    The aim of the study was to determine whether habitual minimalist shoe runners present with purported favorable running biomechanics that reduce running injury risk, such as a lower initial loading rate. Eighteen minimalist and 16 traditionally cushioned shod runners were assessed when running both in their preferred training shoe and barefoot. Ankle and knee joint kinetics and kinematics, initial rate of loading, and footstrike angle were measured. Sagittal ankle and knee joint stiffness were also calculated. A two-factor ANOVA showed no group difference in initial rate of loading when participants were running either shod or barefoot; however, the initial loading rate increased for both groups when running barefoot (p=0.008). Differences in footstrike angle were observed between groups when running shod, but not when barefoot (minimalist: 8.71±8.99 vs. traditional: 17.32±11.48 degrees, p=0.002). Lower ankle joint stiffness was found in both groups when running barefoot (p=0.025). These findings illustrate that risk factors for injury potentially differ between the two groups. Shoe construction differences do change mechanical demands; however, once habituated to the demands of a given shoe condition, certain acute favorable or unfavorable responses may be moderated. The purported benefit of minimalist running shoes in mimicking habitual barefoot running is questioned, and the risk of injury may not be attenuated. © Georg Thieme Verlag KG Stuttgart · New York.

  19. Advanced Multilevel Monte Carlo Methods

    KAUST Repository

    Jasra, Ajay

    2017-04-24

    This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature, and we describe different strategies which facilitate the application of MLMC within these methods.

  1. Electricity prices and fuel costs. Long-run relations and short-run dynamics

    International Nuclear Information System (INIS)

    Mohammadi, Hassan

    2009-01-01

    The paper examines the long-run relation and short-run dynamics between electricity prices and three fossil fuel prices - coal, natural gas and crude oil - using annual data for the U.S. for 1960-2007. The results suggest (1) a stable long-run relation between real prices for electricity and coal; (2) bi-directional long-run causality between coal and electricity prices; (3) insignificant long-run relations between electricity and crude oil and/or natural gas prices; and (4) no evidence of asymmetries in the adjustment of electricity prices to deviations from equilibrium. A number of implications are addressed. (author)
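
    As a hedged illustration of the econometric machinery involved (with synthetic data, not the paper's series), an Engle-Granger cointegration test between two price series can be run with statsmodels:

        import numpy as np
        from statsmodels.tsa.stattools import coint

        rng = np.random.default_rng(3)
        n = 48                                         # ~ annual data, 1960-2007
        coal = np.cumsum(rng.normal(0.0, 0.05, n))     # nonstationary fuel price
        elec = 0.8 * coal + rng.normal(0.0, 0.02, n)   # tied to coal long-run

        t_stat, p_value, _ = coint(elec, coal)         # Engle-Granger test
        print(p_value)                                 # small p -> cointegrated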

  2. First Results from BM@N Technical Run with Deuteron Beam

    Science.gov (United States)

    Baranov, D.; Kapishin, M.; Kulish, E.; Maksymchuk, A.; Mamontova, T.; Pokatashkin, G.; Rufanov, I.; Vasendina, V.; Zinchenko, A.

    2018-03-01

    BM@N (Baryonic Matter at Nuclotron) is the first experiment to be realized at the NICA-Nuclotron accelerator complex at JINR (Dubna). The aim of the experiment is to study interactions of relativistic heavy-ion beams with kinetic energies from 1 to 4.5 AGeV with fixed targets. The BM@N set-up at the starting phase of the experiment is introduced. First results of the analysis of minimum-bias experimental data collected in the technical run with a 4 AGeV deuteron beam on different targets are presented. The spatial, momentum and primary vertex resolution of the GEM tracker are studied. The Lambda-hyperon signal is reconstructed in the proton-pion invariant mass spectrum. The data are described by Monte Carlo simulations. The investigation has been performed at the Laboratory of High Energy Physics, JINR.
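
    The invariant-mass reconstruction underlying the Lambda search can be illustrated with a few lines of Python; the momenta below are invented, and a real analysis would loop over all track pairs in each event:

        import numpy as np

        M_P, M_PI = 0.9383, 0.1396   # GeV, proton and charged-pion masses

        def invariant_mass(p_proton, p_pion):
            # Invariant mass of a proton-pion pair from their 3-momenta
            # (GeV), the quantity in which the Lambda peak is searched for.
            p1, p2 = np.asarray(p_proton), np.asarray(p_pion)
            e1 = np.sqrt(M_P ** 2 + p1 @ p1)
            e2 = np.sqrt(M_PI ** 2 + p2 @ p2)
            p = p1 + p2
            return np.sqrt((e1 + e2) ** 2 - p @ p)

        # Illustrative numbers only, not data from the experiment.
        print(invariant_mass([0.05, 0.0, 1.2], [-0.05, 0.0, 0.3]))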

  3. Monte Carlo - Advances and Challenges

    International Nuclear Information System (INIS)

    Brown, Forrest B.; Mosteller, Russell D.; Martin, William R.

    2008-01-01

    Abstract only, full text follows: With ever-faster computers and mature Monte Carlo production codes, there has been tremendous growth in the application of Monte Carlo methods to the analysis of reactor physics and reactor systems. In the past, Monte Carlo methods were used primarily for calculating the k-eff of a critical system. More recently, Monte Carlo methods have been increasingly used for determining reactor power distributions and many design parameters, such as β-eff, l-eff, τ, reactivity coefficients, Doppler defect, dominance ratio, etc. These advanced applications of Monte Carlo methods are now becoming common, not just feasible, but bring new challenges to both developers and users: convergence of 3D power distributions must be assured; confidence interval bias must be eliminated; iterated fission probabilities are required, rather than single-generation probabilities; temperature effects including Doppler and feedback must be represented; isotopic depletion and fission product buildup must be modeled. This workshop focuses on recent advances in Monte Carlo methods and their application to reactor physics problems, and on the resulting challenges faced by code developers and users. The workshop is partly tutorial, partly a review of the current state-of-the-art, and partly a discussion of future work that is needed. It should benefit both novice and expert Monte Carlo developers and users. In each of the topic areas, we provide an overview of needs, perspective on past and current methods, a review of recent work, and discussion of further research and capabilities that are required. Electronic copies of all workshop presentations and material will be available. The workshop is structured as 2 morning and 2 afternoon segments: - Criticality Calculations I - convergence diagnostics, acceleration methods, confidence intervals, and the iterated fission probability, - Criticality Calculations II - reactor kinetics parameters, dominance ratio, temperature

  4. Monte Carlo photon transport on shared memory and distributed memory parallel processors

    International Nuclear Information System (INIS)

    Martin, W.R.; Wan, T.C.; Abdel-Rahman, T.S.; Mudge, T.N.; Miura, K.

    1987-01-01

    Parallelized Monte Carlo algorithms for analyzing photon transport in an inertially confined fusion (ICF) plasma are considered. Algorithms were developed for shared memory (vector and scalar) and distributed memory (scalar) parallel processors. The shared memory algorithm was implemented on the IBM 3090/400, and timing results are presented for dedicated runs with two, three, and four processors. Two alternative distributed memory algorithms (replication and dispatching) were implemented on a hypercube parallel processor (1 through 64 nodes). The replication algorithm yields essentially full efficiency for all cube sizes; with the 64-node configuration, the absolute performance is nearly the same as with the CRAY X-MP. The dispatching algorithm also yields efficiencies above 80% in a large simulation for the 64-processor configuration
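
    The replication strategy is the simpler of the two: every processor runs the full problem with an independent random stream, and only the final scores are combined. A toy sketch (invented transmission problem, four local processes standing in for nodes):

        import numpy as np
        from multiprocessing import Pool

        def replica(seed, n_photons=100_000):
            # One independent replica: each worker runs the whole toy
            # transport problem with its own random stream, so the only
            # communication needed is the final average.
            rng = np.random.default_rng(seed)
            paths = rng.exponential(scale=1.0, size=n_photons)
            return (paths > 5.0).mean()   # transmission past 5 mean free paths

        if __name__ == "__main__":
            with Pool(4) as pool:                     # 4 "nodes"
                scores = pool.map(replica, range(4))  # distinct seeds
            print(np.mean(scores), np.exp(-5.0))      # MC estimate vs exact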

  5. Liquidity Runs

    NARCIS (Netherlands)

    Matta, R.; Perotti, E.

    2016-01-01

    Can the risk of losses upon premature liquidation produce bank runs? We show how a unique run equilibrium driven by asset liquidity risk arises even under minimal fundamental risk. To study the role of illiquidity we introduce realistic norms on bank default, such that mandatory stay is triggered

  6. Dr. Sheehan on Running.

    Science.gov (United States)

    Sheehan, George A.

    This book is both a personal and a technical account of the experience of running by a heart specialist who began a running program at the age of 45. Its seventeen chapters present information on the spiritual, psychological, and physiological effects of running; the treatment of athletic injuries resulting from running; the effects of diet…

  7. Monte Carlo simulation of experiments

    International Nuclear Information System (INIS)

    Opat, G.I.

    1977-07-01

    An outline of the technique of computer simulation of particle physics experiments by the Monte Carlo method is presented. Useful special purpose subprograms are listed and described. At each stage the discussion is made concrete by direct reference to the program SIMUL8 and its variant MONTE-PION, written to assist in the analysis of the radiative decay experiments μ+ → e+ ν_e ν̄ γ and π+ → e+ ν_e γ, respectively. These experiments were based on the use of two large sodium iodide crystals, TINA and MINA, as e and γ detectors. Instructions for the use of SIMUL8 and MONTE-PION are given. (author)

  8. The adaptation method in the Monte Carlo simulation for computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hyoung Gun; Yoon, Chang Yeon; Lee, Won Ho [Dept. of Bio-convergence Engineering, Korea University, Seoul (Korea, Republic of); Cho, Seung Ryong [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Park, Sung Ho [Dept. of Neurosurgery, Ulsan University Hospital, Ulsan (Korea, Republic of)

    2015-06-15

    The patient dose incurred from diagnostic procedures during advanced radiotherapy has become an important issue. Many researchers in medical physics are using computational simulations to calculate complex parameters in experiments. However, extended computation times make it difficult for personal computers to run the conventional Monte Carlo method to simulate radiological images with high-flux photons such as those produced by computed tomography (CT). To minimize the computation time without degrading imaging quality, we applied a deterministic adaptation to the Monte Carlo calculation and verified its effectiveness by simulating CT image reconstruction for an image evaluation phantom (Catphan; Phantom Laboratory, New York, NY, USA) and a human-like voxel phantom (KTMAN-2) (Los Alamos National Laboratory, Los Alamos, NM, USA). For the deterministic adaptation, the relationship between iteration numbers and the simulations was estimated and the option to simulate scattered radiation was evaluated. The processing times of simulations using the adaptive method were at least 500 times faster than those using a conventional statistical process. In addition, compared with the conventional statistical method, the adaptive method provided images more similar to the experimental images, which proved that the adaptive method is highly effective for simulations that require a large number of iterations. Assuming no radiation scattering in the vicinity of the detectors minimized artifacts in the reconstructed image.

  9. Study of cold neutron sources: Implementation and validation of a complete computation scheme for research reactor using Monte Carlo codes TRIPOLI-4.4 and McStas

    International Nuclear Information System (INIS)

    Campioni, Guillaume; Mounier, Claude

    2006-01-01

    The main goal of this thesis on studies of cold neutron sources (CNS) in research reactors was to create a complete set of tools for designing CNS efficiently. The work addresses the problem of running accurate simulations of experimental devices inside a reactor reflector that remain valid for parametric studies. On one hand, deterministic codes have reasonable computation times but introduce problems in the geometrical description. On the other hand, Monte Carlo codes make it possible to compute on a precise geometry, but need computation times so long that parametric studies are impossible. To decrease this computation time, several developments were made in the Monte Carlo code TRIPOLI-4.4. An uncoupling technique is used to isolate a study zone in the complete reactor geometry. By recording boundary conditions (incoming flux), further simulations can be launched for parametric studies with a computation time reduced by a factor of 60 (case of the cold neutron source of the Orphee reactor). The short response time allows parametric studies to be led with a Monte Carlo code. Moreover, using biasing methods, the flux can be recorded on the surface of the neutron guide entries (low solid angle) with a further gain in running time. Finally, the implementation of a coupling module between TRIPOLI-4.4 and the Monte Carlo code McStas, used for condensed-matter research, makes it possible to obtain fluxes after transmission through the neutron guides, and thus the neutron flux received by the samples studied by condensed-matter scientists. This set of developments, involving TRIPOLI-4.4 and McStas, represents a complete computation scheme for research reactors: from the nuclear core, where neutrons are created, to the exit of the neutron guides and the samples of matter. This complete calculation scheme is tested against ILL4 measurements of flux in cold neutron guides. (authors)

  10. Fast sequential Monte Carlo methods for counting and optimization

    CERN Document Server

    Rubinstein, Reuven Y; Vaisman, Radislav

    2013-01-01

    A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the

  11. ATLAS Distributed Computing in LHC Run2

    International Nuclear Information System (INIS)

    Campana, Simone

    2015-01-01

    The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run-2. An increase in both the data rate and the computing demands of the Monte-Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (Prodsys-2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward a flexible computing model. Flexible computing utilization exploring the use of opportunistic resources such as HPC, cloud, and volunteer computing is embedded in the new computing model; the data access mechanisms have been enhanced with remote access, and the network topology and performance are deeply integrated into the core of the system. Moreover, a new data management strategy, based on a defined lifetime for each dataset, has been introduced to better manage the lifecycle of the data. In this note, an overview of the operational experience with the new system and its evolution is presented. (paper)

  12. Development of M3C code for Monte Carlo reactor physics criticality calculations

    International Nuclear Information System (INIS)

    Kumar, Anek; Kannan, Umasankari; Krishanani, P.D.

    2015-06-01

    The development of the Monte Carlo code (M3C) for reactor design entails the use of continuous energy nuclear data and Monte Carlo simulations for each of the neutron interaction processes. BARC has started a concentrated effort to develop a new general geometry, continuous energy Monte Carlo code for reactor physics calculations indigenously. The code development required a comprehensive understanding of the basic continuous energy cross section sets. The important features of this code are the treatment of heterogeneous lattices in general geometry, the use of point cross sections along with a unionized energy grid approach, a thermal scattering model for the low-energy treatment, and the capability of handling microscopic fuel particles dispersed randomly. This last capability is very useful for modeling High-Temperature Gas-Cooled Reactor fuels, which are composed of thousands of microscopic fuel particles (TRISO fuel particles) randomly dispersed in a graphite matrix. The Monte Carlo code for criticality calculation is a pioneering effort and has been used to study several types of lattices, including cluster geometries. The code has been verified for its accuracy against more than 60 sample problems covering a wide range from simple (like spherical) to complex geometry (like the PHWR lattice). Benchmark results show that the code performs quite well for criticality calculations. In this report, the current status of the code, its features, some benchmark results for the testing of the code, input preparation, etc. are discussed. (author)

  13. The MC21 Monte Carlo Transport Code

    International Nuclear Information System (INIS)

    Sutton TM; Donovan TJ; Trumbull TH; Dobreff PS; Caro E; Griesheimer DP; Tyburski LJ; Carpenter DC; Joo H

    2007-01-01

    MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or 'tool of last resort' and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities

  14. Monte Carlo Treatment Planning for Advanced Radiotherapy

    DEFF Research Database (Denmark)

    Cronholm, Rickard

    This Ph.D. project describes the development of a workflow for Monte Carlo Treatment Planning for clinical radiotherapy plans. The workflow may be utilized to perform an independent dose verification of treatment plans. Modern radiotherapy treatment delivery is often conducted by dynamically modulating the intensity of the field during the irradiation. The workflow described has the potential to fully model the dynamic delivery, including gantry rotation during irradiation, of modern radiotherapy. Three corner stones of Monte Carlo Treatment Planning are identified: building, commissioning and validation of a Monte Carlo model of a medical linear accelerator (i), converting a CT scan of a patient to a Monte Carlo compliant phantom (ii) and translating the treatment plan parameters (including beam energy, angles of incidence, collimator settings etc) to a Monte Carlo input file (iii). A protocol

  15. Electron run-away

    International Nuclear Information System (INIS)

    Levinson, I.B.

    1975-01-01

    The run-away effect of electrons in Coulomb scattering has been studied by Dreicer, but the question for other scattering mechanisms has not yet been studied. Meanwhile, if the scattering is quasielastic, a general criterion for run-away may be formulated; in this case, the influence of run-away on the distribution function may also be studied in a somewhat general and qualitative manner. (Auth.)

  16. Massively parallel Monte Carlo for many-particle simulations on GPUs

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Joshua A.; Jankowski, Eric [Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109 (United States); Grubb, Thomas L. [Department of Materials Science and Engineering, University of Michigan, Ann Arbor, MI 48109 (United States); Engel, Michael [Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109 (United States); Glotzer, Sharon C., E-mail: sglotzer@umich.edu [Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109 (United States); Department of Materials Science and Engineering, University of Michigan, Ann Arbor, MI 48109 (United States)

    2013-12-01

    Current trends in parallel processors call for the design of efficient massively parallel algorithms for scientific computing. Parallel algorithms for Monte Carlo simulations of thermodynamic ensembles of particles have received little attention because of the inherent serial nature of the statistical sampling. In this paper, we present a massively parallel method that obeys detailed balance and implement it for a system of hard disks on the GPU. We reproduce results of serial high-precision Monte Carlo runs to verify the method. This is a good test case because the hard disk equation of state over the range where the liquid transforms into the solid is particularly sensitive to small deviations away from the balance conditions. On a Tesla K20, our GPU implementation executes over one billion trial moves per second, which is 148 times faster than on a single Intel Xeon E5540 CPU core, enables 27 times better performance per dollar, and cuts energy usage by a factor of 13. With this improved performance we are able to calculate the equation of state for systems of up to one million hard disks. These large system sizes are required in order to probe the nature of the melting transition, which has been debated for the last forty years. In this paper we present the details of our computational method, and discuss the thermodynamics of hard disks separately in a companion paper.
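
    A serial Metropolis sketch of the hard-disk system conveys the core move (the GPU method in the paper instead updates many well-separated disks at once in a checkerboard of cells); box size, density and step size below are arbitrary demo values:

        import numpy as np

        rng = np.random.default_rng(4)
        L, R, N = 12.0, 0.5, 36          # periodic box, disk radius, count

        def any_overlap(others, trial):
            # Hard-disk constraint with periodic (minimum-image) distances.
            d = others - trial
            d -= L * np.round(d / L)
            return ((d ** 2).sum(axis=1) < (2 * R) ** 2).any()

        # Start from a grid, then do single-disk Metropolis moves; for
        # hard disks, a trial move is accepted iff it creates no overlap.
        g = int(np.ceil(np.sqrt(N)))
        pos = np.array([[i % g + 0.5, i // g + 0.5] for i in range(N)]) * (L / g)

        accepted = 0
        for step in range(20_000):
            i = rng.integers(N)
            trial = (pos[i] + rng.uniform(-0.3, 0.3, 2)) % L
            if not any_overlap(np.delete(pos, i, axis=0), trial):
                pos[i] = trial
                accepted += 1
        print("acceptance:", accepted / 20_000)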

  17. SU-E-J-144: Low Activity Studies of Carbon 11 Activation Via GATE Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Elmekawy, A; Ewell, L [Hampton University, Hampton, VA (United States); Butuceanu, C; Qu, L [Hampton University Proton Therapy Institute, Hampton, VA (United States)

    2015-06-15

    Purpose: To investigate the behavior of a Monte Carlo simulation code with low levels of activity (∼1,000 Bq). Such activity levels are expected from phantoms and patients activated via a proton therapy beam. Methods: Three different ranges for a therapeutic proton radiation beam were examined in a Monte Carlo simulation code: 13.5, 17.0 and 21.0 cm. For each range, the decay of an equivalent-length 11C source, and of additional sources of that length plus or minus one cm, was studied in a benchmark PET simulation for activities of 1000, 2000 and 3000 Bq. The ranges were chosen to coincide with a previous activation study, and the activities were chosen to coincide with the approximate level of isotope creation expected in a phantom or patient irradiated by a therapeutic proton beam. The GATE 7.0 simulation was completed on a cluster node, running Scientific Linux Carbon 6 (Red Hat©). The resulting Monte Carlo data were investigated with the ROOT (CERN) analysis tool. The half-life of 11C was extracted via a histogram fit to the number of simulated PET events vs. time. Results: The average slope of the deviation of the extracted carbon half-life from the expected/nominal value vs. activity was generally positive. This was unexpected, as the deviation should, in principle, decrease with increased activity and lower statistical uncertainty. Conclusion: For activity levels on the order of 1,000 Bq, the behavior of a benchmark PET test was somewhat unexpected. It is important to be aware of the limitations of low-activity PET images and low-activity Monte Carlo simulations. This work was funded in part by the Philips corporation.
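
    The half-life extraction step can be mimicked outside GATE/ROOT: simulate a small number of 11C decays, histogram them, and fit an exponential. With only a few thousand nuclei, the fitted half-life fluctuates noticeably, which is the low-activity regime at issue (a stand-in sketch, not the study's code):

        import numpy as np
        from scipy.optimize import curve_fit

        T_HALF = 20.36 * 60.0                  # 11C half-life in seconds
        rng = np.random.default_rng(5)

        # Low-activity sample: few decaying nuclei, large statistical error.
        n_nuclei = 2000
        decay_times = rng.exponential(T_HALF / np.log(2.0), n_nuclei)

        counts, edges = np.histogram(decay_times, bins=40, range=(0, 3 * T_HALF))
        centers = 0.5 * (edges[:-1] + edges[1:])

        def model(t, n0, t_half):
            return n0 * np.exp(-np.log(2.0) * t / t_half)

        popt, pcov = curve_fit(model, centers, counts, p0=(counts[0], T_HALF))
        print("fitted T1/2 [min]:", popt[1] / 60.0,
              "+/-", np.sqrt(pcov[1, 1]) / 60.0)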

  18. New Tools to Prepare ACE Cross-section Files for MCNP Analytic Test Problems

    International Nuclear Information System (INIS)

    Brown, Forrest B.

    2016-01-01

    Monte Carlo calculations using one-group cross sections, multigroup cross sections, or simple continuous energy cross sections are often used to: (1) verify production codes against known analytical solutions, (2) verify new methods and algorithms that do not involve detailed collision physics, (3) compare Monte Carlo calculation methods with deterministic methods, and (4) teach fundamentals to students. In this work we describe 2 new tools for preparing the ACE cross-section files to be used by MCNP® for these analytic test problems, simple_ace.pl and simple_ace_mg.pl.

  19. High throughput and accurate serum proteome profiling by integrated sample preparation technology and single-run data independent mass spectrometry analysis.

    Science.gov (United States)

    Lin, Lin; Zheng, Jiaxin; Yu, Quan; Chen, Wendong; Xing, Jinchun; Chen, Chenxi; Tian, Ruijun

    2018-03-01

    Mass spectrometry (MS)-based serum proteome analysis is extremely challenging due to its high complexity and the dynamic range of protein abundances. Developing a high-throughput and accurate serum proteomic profiling approach capable of analyzing large cohorts is urgently needed for biomarker discovery. Herein, we report a streamlined workflow for fast and accurate proteomic profiling from 1 μL of blood serum. The workflow combines an integrated technique for highly sensitive and reproducible sample preparation and a new data-independent acquisition (DIA)-based MS method. Compared with the standard data-dependent acquisition (DDA) approach, the optimized DIA method doubled the number of detected peptides and proteins with better reproducibility. Without protein immunodepletion and prefractionation, the single-run DIA analysis enables quantitative profiling of over 300 proteins with a 50 min gradient time. The quantified proteins span more than five orders of magnitude of abundance and contain over 50 FDA-approved disease markers. The workflow allowed us to analyze 20 serum samples per day, with about 358 protein groups per sample being identified. A proof-of-concept study on renal cell carcinoma (RCC) serum samples confirmed the feasibility of the workflow for large-scale serum proteomic profiling and disease-related biomarker discovery. Blood serum or plasma is the predominant specimen for clinical proteomic studies, but its high complexity makes the analysis extremely challenging. Many past efforts in serum proteomics have aimed at maximizing protein identifications, whereas few have been concerned with throughput and reproducibility. Here, we establish a rapid, robust and highly reproducible DIA-based workflow for streamlined serum proteomic profiling from 1 μL of serum. The workflow needs no protein depletion or pre-fractionation, while still being able to detect disease-relevant proteins accurately. The workflow is promising in clinical application

  20. G4-STORK: A Geant4-based Monte Carlo reactor kinetics simulation code

    International Nuclear Information System (INIS)

    Russell, Liam; Buijs, Adriaan; Jonkmans, Guy

    2014-01-01

    Highlights: • G4-STORK is a new, time-dependent, Monte Carlo code for reactor physics applications. • G4-STORK was built by adapting and expanding on the Geant4 Monte Carlo toolkit. • G4-STORK was designed to simulate short-term fluctuations in reactor cores. • G4-STORK is well suited for simulating sub- and supercritical assemblies. • G4-STORK was verified through comparisons with DRAGON and MCNP. - Abstract: In this paper we introduce G4-STORK (Geant4 STOchastic Reactor Kinetics), a new, time-dependent, Monte Carlo particle tracking code for reactor physics applications. G4-STORK was built by adapting and expanding on the Geant4 Monte Carlo toolkit. The toolkit provides the fundamental physics models and particle tracking algorithms that track each particle in space and time. It is a framework for further development (e.g. for projects such as G4-STORK). G4-STORK derives reactor physics parameters (e.g. k-eff) from the continuous evolution of a population of neutrons in space and time in the given simulation geometry. In this paper we detail the major additions to the Geant4 toolkit that were necessary to create G4-STORK. These include a renormalization process that maintains a manageable number of neutrons in the simulation even in very sub- or supercritical systems, scoring processes (e.g. recording fission locations, total neutrons produced and lost, etc.) that allow G4-STORK to calculate the reactor physics parameters, and dynamic simulation geometries that can change over the course of the simulation to elicit reactor kinetics responses (e.g. fuel temperature reactivity feedback). The additions are verified through simple simulations and code-to-code comparisons with established reactor physics codes such as DRAGON and MCNP. Additionally, G4-STORK was developed to run a single simulation in parallel over many processors using MPI (Message Passing Interface) pipes
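
    The renormalization idea, keeping the tracked population near a target size without biasing totals, can be sketched with a combined splitting/Russian-roulette step (a toy model, not the G4-STORK implementation):

        import numpy as np

        rng = np.random.default_rng(9)
        TARGET = 1000                       # desired population size

        def renormalize(weights, target=TARGET):
            # Split or roulette neutrons so each survivor carries an equal
            # share of the total statistical weight; expected totals are
            # preserved, so the estimate stays unbiased.
            total = weights.sum()
            keep_w = total / target                       # weight per survivor
            copies = np.floor(weights / keep_w).astype(int)
            copies += rng.random(weights.size) < (weights / keep_w - copies)
            return np.full(copies.sum(), keep_w)          # equal-weight bank

        # Supercritical toy system: population grows ~20% per generation.
        w = np.ones(TARGET)
        for gen in range(5):
            w = np.repeat(w, rng.poisson(1.2, w.size))    # fission multiplication
            total_before = w.sum()
            w = renormalize(w)
            print(gen, total_before, w.size, w.sum())     # size ~TARGET, sum kept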

  1. A Running Start: Resource Guide for Youth Running Programs

    Science.gov (United States)

    Jenny, Seth; Becker, Andrew; Armstrong, Tess

    2016-01-01

    The lack of physical activity is an epidemic problem among American youth today. In order to combat this, many schools are incorporating youth running programs as a part of their comprehensive school physical activity programs. These youth running programs are being implemented before or after school, at school during recess at the elementary…

  2. Monte carlo simulation for soot dynamics

    KAUST Repository

    Zhou, Kun

    2012-01-01

    A new Monte Carlo method termed Comb-like frame Monte Carlo is developed to simulate soot dynamics. A detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas phase solver Chemkin II to simulate soot formation in a 1-D premixed burner-stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with the measurements available in the literature. The origin of the bimodal shape of the particle size distribution is revealed with quantitative proof.

  3. Frequentist and Bayesian Orbital Parameter Estimation from Radial Velocity Data Using RVLIN, BOOTTRAN, and RUN DMC

    Science.gov (United States)

    Nelson, Benjamin Earl; Wright, Jason Thomas; Wang, Sharon

    2015-08-01

    For this hack session, we will present three tools used in analyses of radial velocity exoplanet systems. RVLIN is a set of IDL routines used to quickly fit an arbitrary number of Keplerian curves to radial velocity data to find adequate parameter point estimates. BOOTTRAN is an IDL-based extension of RVLIN to provide orbital parameter uncertainties using bootstrap based on a Keplerian model. RUN DMC is a highly parallelized Markov chain Monte Carlo algorithm that employs an n-body model, primarily used for dynamically complex or poorly constrained exoplanet systems. We will compare the performance of these tools and their applications to various exoplanet systems.
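
    BOOTTRAN's bootstrap idea, stripped to its core, is to refit after resampling residuals; the sketch below does this for a circular (sinusoidal) orbit with a known period, which is a stand-in for a full Keplerian fit:

        import numpy as np

        rng = np.random.default_rng(6)
        P_TRUE, K_TRUE = 17.0, 55.0             # period (d), semi-amplitude (m/s)
        t = np.sort(rng.uniform(0.0, 100.0, 40))
        rv = K_TRUE * np.sin(2 * np.pi * t / P_TRUE) + rng.normal(0.0, 5.0, t.size)

        def fit(t, rv, period):
            # Circular orbit at a known period: linear least squares in a
            # sin/cos basis (a stand-in for a full Keplerian fit).
            A = np.column_stack([np.sin(2 * np.pi * t / period),
                                 np.cos(2 * np.pi * t / period)])
            coef, *_ = np.linalg.lstsq(A, rv, rcond=None)
            return coef, A @ coef

        coef, model = fit(t, rv, P_TRUE)
        k_hat, resid = np.hypot(*coef), rv - model

        boot = []
        for _ in range(1000):                   # resample residuals, refit
            fake = model + rng.choice(resid, resid.size, replace=True)
            c, _ = fit(t, fake, P_TRUE)
            boot.append(np.hypot(*c))
        print(k_hat, np.std(boot))              # estimate and 1-sigma error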

  4. Monte Carlo dose calculation algorithm on a distributed system

    International Nuclear Information System (INIS)

    Chauvie, Stephane; Dominoni, Matteo; Marini, Piergiorgio; Stasi, Michele; Pia, Maria Grazia; Scielzo, Giuseppe

    2003-01-01

    The main goal of modern radiotherapy, such as 3D conformal radiotherapy and intensity-modulated radiotherapy, is to deliver a high dose to the target volume while sparing the surrounding healthy tissue. The accuracy of dose calculation in a treatment planning system is therefore a critical issue. Among the many algorithms developed over the last years, those based on Monte Carlo have proven to be very promising in terms of accuracy. The most severe obstacle to application in clinical practice is the long time necessary for calculations. We have studied a high-performance network of personal computers as a realistic alternative to high-cost dedicated parallel hardware, to be used routinely as an instrument for the evaluation of treatment plans. We set up a Beowulf cluster, configured with 4 nodes connected with a low-cost network, and installed the MC code Geant4 to describe our irradiation facility. The MC code, once parallelised, was run on the Beowulf cluster. The first run of the full simulation showed that the time required for calculation decreased linearly with an increasing number of distributed processes. The good scalability trend allows both statistically significant accuracy and good time performance. The scalability of the Beowulf cluster system offers a new instrument for dose calculation that could be applied in clinical practice. This would be a good support, particularly for highly challenging prescriptions that need good calculation accuracy in zones of high dose gradients and large inhomogeneities

  5. Latest LHCf results and preparation to the LHC run for 13 TeV proton–proton interactions

    Directory of Open Access Journals (Sweden)

    Bonechi L.

    2015-01-01

    The LHCf experiment is a CERN experiment dedicated to forward physics, optimized to measure the neutral particle flow at extreme pseudo-rapidity values, ranging from 8.4 up to infinity. LHCf results are extremely important for the calibration of the hadronic interaction models used for the study of the development of atmospheric showers in the Earth's atmosphere. Starting from the recent run of proton-lead interactions at the LHC, the LHCf and ATLAS collaborations have performed a common data taking which allows a combined study of the central and forward regions of the interaction. The latest results of LHCf, the upgrade of the detectors for the next 6.5 TeV + 6.5 TeV proton-proton run and the status of the LHCf-ATLAS common activities are summarized in this paper.

  6. Statistical estimation Monte Carlo for unreliability evaluation of highly reliable system

    International Nuclear Information System (INIS)

    Xiao Gang; Su Guanghui; Jia Dounan; Li Tianduo

    2000-01-01

    Based on analog Monte Carlo simulation, statistical Monte Carlo methods for the unreliability evaluation of highly reliable systems are constructed, including a direct statistical estimation Monte Carlo method and a weighted statistical estimation Monte Carlo method. The basic elements are given, and the statistical estimation Monte Carlo estimators are derived. The direct Monte Carlo simulation method, the bounding-sampling method, the forced-transitions Monte Carlo method, and the direct and weighted statistical estimation Monte Carlo methods are used to evaluate the unreliability of the same system. By comparison, the weighted statistical estimation Monte Carlo estimator has the smallest variance and the highest calculational efficiency
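
    The contrast between the direct (analog) and weighted estimators can be shown on a one-component example: sample failure times from a biased density and correct with likelihood ratios, concentrating samples in the rare failure region. Rates and times below are invented:

        import numpy as np

        rng = np.random.default_rng(7)
        LAM, T_MISSION = 1e-4, 100.0              # failure rate (1/h), mission (h)
        p_true = 1.0 - np.exp(-LAM * T_MISSION)   # exact unreliability

        n = 20_000
        # Direct (analog) estimator: indicator of failure before T_MISSION.
        direct = (rng.exponential(1.0 / LAM, n) < T_MISSION).astype(float)

        # Weighted estimator: sample from a biased exponential with a much
        # larger rate, then correct each score by the likelihood ratio.
        LAM_B = 2.0 / T_MISSION
        tb = rng.exponential(1.0 / LAM_B, n)
        w = (LAM * np.exp(-LAM * tb)) / (LAM_B * np.exp(-LAM_B * tb))
        weighted = np.where(tb < T_MISSION, w, 0.0)

        for est in (direct, weighted):
            print(est.mean(), est.std(ddof=1) / np.sqrt(n))   # mean, std error
        print("exact:", p_true)   # weighted has a much smaller variance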

  7. Applications of Monte Carlo method in Medical Physics

    International Nuclear Information System (INIS)

    Diez Rios, A.; Labajos, M.

    1989-01-01

    The basic ideas of Monte Carlo techniques are presented. Random numbers and their generation by congruential methods, which underlie Monte Carlo calculations, are discussed. Monte Carlo techniques for solving integrals are then described. The evaluation of a simple one-dimensional integral with a known answer, by means of two different Monte Carlo approaches, is discussed. The basic principles of simulating photon histories on a computer, methods to reduce variance, and the current applications in Medical Physics are also covered. (Author)
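
    The two classic approaches to a one-dimensional integral read, in Python (our example integrand, not necessarily the paper's):

        import numpy as np

        rng = np.random.default_rng(8)
        n = 100_000
        # Target: integral of f(x) = x^2 on [0, 1]; exact answer is 1/3.

        # Approach 1: sample-mean (crude) Monte Carlo.
        x = rng.uniform(0.0, 1.0, n)
        mean_est = (x ** 2).mean()

        # Approach 2: hit-or-miss, counting points under the curve in [0,1]^2.
        y = rng.uniform(0.0, 1.0, n)
        hit_est = (y < x ** 2).mean()

        print(mean_est, hit_est)   # both -> 1/3; sample mean has lower variance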

  8. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan

    2016-01-01

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.
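
    For orientation, the telescoping identity referred to above can be written in the following standard MLMC form (generic notation, not quoted from the paper):

```latex
\mathbb{E}_{\pi_L}[g] \;=\; \mathbb{E}_{\pi_0}[g]
  \;+\; \sum_{l=1}^{L} \bigl( \mathbb{E}_{\pi_l}[g] - \mathbb{E}_{\pi_{l-1}}[g] \bigr),
```

    where π_l is the distribution at discretization level h_l. Each correction term is estimated with coupled samples, so that many cheap samples are spent on the coarse levels and only a few expensive ones on the finest level.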

  9. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros

    2016-08-29

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.

  10. Guideline of Monte Carlo calculation. Neutron/gamma ray transport simulation by Monte Carlo method

    CERN Document Server

    2002-01-01

    This report condenses basic theories and advanced applications of neutron/gamma ray transport calculations in many fields of nuclear energy research. Chapters 1 through 5 treat the historical progress of Monte Carlo methods, general issues of variance reduction techniques, and cross section libraries used in continuous-energy Monte Carlo codes. In chapter 6, the following issues are discussed: fusion benchmark experiments, design of ITER, experiment analyses of a fast critical assembly, core analyses of JMTR, simulation of a pulsed neutron experiment, core analyses of HTTR, duct streaming calculations, bulk shielding calculations, and neutron/gamma ray transport calculations of the Hiroshima atomic bomb. Chapters 8 and 9 treat function enhancements of the MCNP and MVP codes, and parallel processing of Monte Carlo calculations, respectively. Important references are attached at the end of this report.

  11. GePEToS: A Geant4 Monte Carlo simulation package for positron emission tomography

    International Nuclear Information System (INIS)

    Jan, Sebastien; Collot, Johann; Gallin-Martel, Marie-Laure; Martin, Philippe; Mayet, Frederic; Tournefier, Edwige

    2003-01-01

    GePEToS is a simulation framework developed over the last few years for assessing the instrumental performance of future PET scanners. It is based on Geant4, written in object-oriented C++, and runs on Linux platforms. The validity of GePEToS has been tested on the well-known Siemens ECAT EXACT HR+ camera. The results of two application examples are presented: the design optimization of a liquid-Xe μPET camera dedicated to small-animal imaging, as well as the evaluation of the effect of a strong axial magnetic field on the image resolution of a Concorde P4 μPET camera. Index terms: Positron Emission Tomography, Monte Carlo Simulation, Geant4. (authors)

  12. TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Badal, A [U.S. Food and Drug Administration (CDRH/OSEL), Silver Spring, MD (United States); Zbijewski, W [Johns Hopkins University, Baltimore, MD (United States); Bolch, W [University of Florida, Gainesville, FL (United States); Sechopoulos, I [Emory University, Atlanta, GA (United States)

    2014-06-15

    virtual generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represents a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging. It is very common to validate new Monte Carlo simulations by replicating previously published simulation results of similar experiments. This process, however, is commonly problematic due to the lack of sufficient information in the published reports of previous work so as to be able to replicate the simulation in detail. To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are provided in full detail

  13. TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging

    International Nuclear Information System (INIS)

    Badal, A; Zbijewski, W; Bolch, W; Sechopoulos, I

    2014-01-01

    generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represents a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging. It is very common to validate new Monte Carlo simulations by replicating previously published simulation results of similar experiments. This process, however, is commonly problematic due to the lack of sufficient information in the published reports of previous work so as to be able to replicate the simulation in detail. To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are provided in full detail, with all

  14. Experience with the Monte Carlo Method

    Energy Technology Data Exchange (ETDEWEB)

    Hussein, E M.A. [Department of Mechanical Engineering University of New Brunswick, Fredericton, N.B., (Canada)

    2007-06-15

    Monte Carlo simulation of radiation transport provides a powerful research and design tool that resembles in many aspects laboratory experiments. Moreover, Monte Carlo simulations can provide an insight not attainable in the laboratory. However, the Monte Carlo method has its limitations, which if not taken into account can result in misleading conclusions. This paper will present the experience of this author, over almost three decades, in the use of the Monte Carlo method for a variety of applications. Examples will be shown on how the method was used to explore new ideas, as a parametric study and design optimization tool, and to analyze experimental data. The consequences of not accounting in detail for detector response and the scattering of radiation by surrounding structures are two of the examples that will be presented to demonstrate the pitfall of condensed.

  15. Experience with the Monte Carlo Method

    International Nuclear Information System (INIS)

    Hussein, E.M.A.

    2007-01-01

    Monte Carlo simulation of radiation transport provides a powerful research and design tool that resembles in many aspects laboratory experiments. Moreover, Monte Carlo simulations can provide an insight not attainable in the laboratory. However, the Monte Carlo method has its limitations, which if not taken into account can result in misleading conclusions. This paper will present the experience of this author, over almost three decades, in the use of the Monte Carlo method for a variety of applications. Examples will be shown on how the method was used to explore new ideas, as a parametric study and design optimization tool, and to analyze experimental data. The consequences of not accounting in detail for detector response and the scattering of radiation by surrounding structures are two of the examples that will be presented to demonstrate the pitfall of condensed

  16. A multi-transputer system for parallel Monte Carlo simulations of extensive air showers

    International Nuclear Information System (INIS)

    Gils, H.J.; Heck, D.; Oehlschlaeger, J.; Schatz, G.; Thouw, T.

    1989-01-01

    A multiprocessor computer system has been brought into operation at the Kernforschungszentrum Karlsruhe. It is dedicated to Monte Carlo simulations of extensive air showers induced by ultra-high-energy cosmic rays. The architecture consists of two independently working VMEbus systems, each with a 68020 microprocessor as host computer and twelve T800 transputers for parallel processing. The two systems are linked via Ethernet for data exchange. The T800 transputers are equipped with 4 Mbyte RAM each, sufficient to run rather large codes. The host computers are operated under UNIX 5.3. On the transputers, compilers for PARALLEL FORTRAN, C, and PASCAL are available. The simple modular architecture of this parallel computer reflects the single purpose for which it is intended. The hardware of the multiprocessor computer is described, as well as the way the user software is handled and distributed to the 24 working processors. The performance of the parallel computer is demonstrated by well-known benchmarks and by realistic Monte Carlo simulations of air showers. Comparisons with other types of microprocessors and with large universal computers are made. It is demonstrated that this system achieves a cost reduction of more than a factor of 20 compared to a universal computer. (orig.)

  17. Processing and Quality Monitoring for the ATLAS Tile Hadronic Calorimeter Data

    Science.gov (United States)

    Burghgrave, Blake; ATLAS Collaboration

    2017-10-01

    An overview is presented of Data Processing and Data Quality (DQ) Monitoring for the ATLAS Tile Hadronic Calorimeter. Calibration runs are monitored from a data quality perspective and used as a cross-check for physics runs. Data quality in physics runs is monitored extensively and continuously. Any problems are reported and immediately investigated. The DQ efficiency achieved was 99.6% in 2012 and 100% in 2015, after the detector maintenance in 2013-2014. Changes to detector status or calibrations are entered into the conditions database (DB) during a brief calibration loop between the end of a run and the beginning of bulk processing of data collected in it. Bulk processed data are reviewed and certified for the ATLAS Good Run List if no problem is detected. Experts maintain the tools used by DQ shifters and the calibration teams during normal operation, and prepare new conditions for data reprocessing and Monte Carlo (MC) production campaigns. Conditions data are stored in 3 databases: Online DB, Offline DB for data and a special DB for Monte Carlo. Database updates can be performed through a custom-made web interface.

  18. Monte Carlo alpha calculation

    Energy Technology Data Exchange (ETDEWEB)

    Brockway, D.; Soran, P.; Whalen, P.

    1985-01-01

    A Monte Carlo algorithm to efficiently calculate static alpha eigenvalues, N = n e^(αt), for supercritical systems has been developed and tested. A direct Monte Carlo approach to calculating a static alpha is to simply follow the buildup in time of neutrons in a supercritical system and evaluate the logarithmic derivative of the neutron population with respect to time. This procedure is expensive, and the solution is very noisy and almost useless for a system near critical. The modified approach is to convert the time-dependent problem to a static α-eigenvalue problem and regress α on solutions of a k-eigenvalue problem. In practice, this procedure is much more efficient than the direct calculation, and produces much more accurate results. Because the Monte Carlo codes are intrinsically three-dimensional and use elaborate continuous-energy cross sections, this technique is now used as a standard for evaluating other calculational techniques in odd geometries or with group cross sections.
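
    For reference, the quantities involved can be written as follows (a standard formulation of the α–k relationship, assumed here rather than taken from the abstract):

```latex
N(t) = n\,e^{\alpha t}, \qquad \text{find } \alpha \text{ such that } k(\alpha) = 1,
```

    where k(α) is the k-eigenvalue of the transport problem with an additional time-absorption cross section α/v. Regressing α on a few k(α) solutions, as the abstract describes, amounts to locating this root.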

  19. Monte Carlo simulations of neutron scattering instruments

    International Nuclear Information System (INIS)

    Aestrand, Per-Olof; Copenhagen Univ.; Lefmann, K.; Nielsen, K.

    2001-01-01

    A Monte Carlo simulation is an important computational tool used in many areas of science and engineering. The use of Monte Carlo techniques for simulating neutron scattering instruments is discussed. The basic ideas, techniques and approximations are presented. Since the construction of a neutron scattering instrument is very expensive, Monte Carlo software used for the design of instruments has to be validated and tested extensively. The McStas software was designed with these aspects in mind, and some of its basic principles will be discussed. Finally, some future prospects are discussed for using Monte Carlo simulations in optimizing neutron scattering experiments. (R.P.)

  20. Application of Monte Carlo method in determination of secondary characteristic X radiation in XFA

    International Nuclear Information System (INIS)

    Roubicek, P.

    1982-01-01

    Secondary characteristic radiation is excited by primary radiation from the X-ray tube and by the secondary radiation of other elements, so that excitations of several orders result. The Monte Carlo method was used to consider all these possibilities, and the resulting flux of characteristic radiation was simulated for samples of silicate raw materials. A comparison of the results of these computations with experiments makes it possible to determine the effect of sample preparation on the characteristic radiation flux. (M.D.)

  1. RUN COORDINATION

    CERN Multimedia

    C. Delaere

    2013-01-01

    Since the LHC ceased operations in February, a lot has been going on at Point 5, and Run Coordination continues to monitor closely the advance of maintenance and upgrade activities. In the last months, the Pixel detector was extracted and is now stored in the pixel lab in SX5; the beam pipe has been removed and ME1/1 removal has started. We regained access to the vactank and some work on the RBX of HB has started. Since mid-June, electricity and cooling are back in S1 and S2, allowing us to turn equipment back on, at least during the day. 24/7 shifts are not foreseen in the next weeks, and safety tours are mandatory to keep equipment on overnight, but re-commissioning activities are slowly being resumed. Given the (slight) delays accumulated in LS1, it was decided to merge the two global runs initially foreseen into a single exercise during the week of 4 November 2013. The aim of the global run is to check that we can run (parts of) CMS after several months switched off, with the new VME PCs installed, th...

  2. Monte Carlo simulation of a statistical mechanical model of multiple protein sequence alignment.

    Science.gov (United States)

    Kinjo, Akira R

    2017-01-01

    A grand canonical Monte Carlo (MC) algorithm is presented for studying the lattice gas model (LGM) of multiple protein sequence alignment, which coherently combines long-range interactions and variable-length insertions. MC simulations are used for both parameter optimization of the model and production runs to explore the sequence subspace around a given protein family. In this Note, I describe the details of the MC algorithm as well as some preliminary results of MC simulations with various temperatures and chemical potentials, and compare them with the mean-field approximation. The existence of a two-state transition in the sequence space is suggested for the SH3 domain family, and inappropriateness of the mean-field approximation for the LGM is demonstrated.

  3. Alternative Implementations of the Monte Carlo Power Method

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Gelbard, E.M.

    2002-01-01

    We compare nominal efficiencies, i.e., variances in power shapes for equal running time, of different versions of the Monte Carlo (MC) eigenvalue computation. The two main methods considered here are 'conventional' MC and the superhistory method. Within each of these major methods, different variants are available for the main steps of the basic MC algorithm. Thus, for example, different treatments of the fission process may vary in the extent to which they follow, in analog fashion, the details of real-world fission, or they may vary in details of the methods by which they choose next-generation source sites. In general the same options are available in both the superhistory method and conventional MC, but there seems not to have been much examination of the special properties of the two major methods and their minor variants. We find, first, that the superhistory method is just as efficient as conventional MC and, second, that use of different variants of the basic algorithms may, in special cases, have a surprisingly large effect on MC computational efficiency
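
    For readers unfamiliar with the "conventional" method being compared, here is a toy sketch of a Monte Carlo power iteration for k-eff in a bare one-group slab. All cross sections and dimensions are invented for illustration; the superhistory variant and the source-site options studied in the paper are not shown.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy one-group data for a bare slab, -a < x < a (illustrative numbers only).
sig_t, sig_s, sig_f, nu = 1.0, 0.6, 0.15, 2.5    # so sig_a = 0.4
a, n_src, n_gen, n_skip = 3.0, 5_000, 40, 10

sites = rng.uniform(-a, a, n_src)                # initial fission source guess
k_hist = []
for g in range(n_gen):
    new_sites, births = [], 0.0
    for x in sites:
        mu = rng.uniform(-1, 1)                  # isotropic direction cosine
        while True:
            x += mu * rng.exponential(1 / sig_t) # flight to the next collision
            if abs(x) > a:                       # leaked out of the slab
                break
            if rng.random() < sig_s / sig_t:     # scattered: new direction
                mu = rng.uniform(-1, 1)
                continue
            if rng.random() < sig_f / (sig_t - sig_s):  # fission on absorption
                births += nu                     # expected fission neutrons
                new_sites.append(x)              # store next-generation site
            break
    k_hist.append(births / len(sites))
    sites = rng.choice(new_sites, n_src)         # resample the fission source
print("k-eff estimate:", np.mean(k_hist[n_skip:]))
```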

  4. A portable, parallel, object-oriented Monte Carlo neutron transport code in C++

    International Nuclear Information System (INIS)

    Lee, S.R.; Cummings, J.C.; Nolen, S.D.

    1997-01-01

    We have developed a multi-group Monte Carlo neutron transport code using C++ and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and α-eigenvalues and is portable to and runs parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities of MC++ are discussed, along with physics and performance results on a variety of hardware, including all Accelerated Strategic Computing Initiative (ASCI) hardware. Current parallel performance indicates the ability to compute α-eigenvalues in seconds to minutes rather than hours to days. Future plans and the implementation of a general transport physics framework are also discussed

  5. Linear filtering applied to Monte Carlo criticality calculations

    International Nuclear Information System (INIS)

    Morrison, G.W.; Pike, D.H.; Petrie, L.M.

    1975-01-01

    A significant improvement in accelerating the convergence of the eigenvalue computed by Monte Carlo techniques has been achieved by applying linear filtering theory to Monte Carlo calculations for multiplying systems. A Kalman filter was applied to a KENO Monte Carlo calculation of an experimental critical system consisting of eight interacting units of fissile material. A comparison of the filter estimate and the Monte Carlo realization was made. The Kalman filter converged in five iterations to 0.9977. After 95 iterations, the average k-eff from the Monte Carlo calculation was 0.9981. This demonstrates that the Kalman filter has the potential to reduce the calculational effort for multiplying systems. Other examples and results are discussed.
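
    A minimal sketch of the filtering idea on synthetic data: a scalar Kalman filter with a random-walk model for k. The noise parameters and the stand-in data are assumptions, not those of the cited KENO study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in data: noisy generation-by-generation k estimates from a MC run.
k_true, n_gen, sigma = 0.998, 100, 0.01
k_obs = k_true + sigma * rng.normal(size=n_gen)

# Scalar Kalman filter: random-walk state (Q) observed with noise variance R.
x, P, Q, R = 1.0, 1.0, 1e-7, sigma**2
for z in k_obs:
    P += Q                 # predict: the state variance grows slightly
    K = P / (P + R)        # Kalman gain
    x += K * (z - x)       # update the k estimate with the new generation
    P *= 1 - K             # posterior variance

print(f"filtered k-eff: {x:.4f}   plain average: {k_obs.mean():.4f}")
```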

  6. Study of the Material within the Run-2 ATLAS Inner Detector

    CERN Document Server

    Cairo, Valentina; The ATLAS collaboration

    2017-01-01

    The material in the ATLAS Inner Detector (ID) is studied with several methods, using a sample of √s = 13 TeV pp collisions collected in 2015 during Run 2 of the LHC. The material within the innermost barrel regions of the ID is studied using reconstructed secondary vertices from hadronic interactions and photon conversions. The layout of the cables, cooling pipes and support structures (services) associated with the Pixel detector, in the region in front of the Silicon Microstrip detector (SCT), was modified in 2014. The material in this region was studied by measuring the efficiency with which tracks reconstructed only in the Pixel detector can be matched to tracks reconstructed in the full ID (track extension efficiency). The results of these studies are presented together with a comparison to previous measurements and a description of their impact on physics analyses and Monte Carlo simulation.

  7. Burnup calculations using Monte Carlo method

    International Nuclear Information System (INIS)

    Ghosh, Biplab; Degweker, S.B.

    2009-01-01

    In recent years, interest in burnup calculations using Monte Carlo methods has gained momentum. Previous burnup codes have used multigroup transport theory based calculations followed by diffusion theory based core calculations for the neutronic portion of the codes. The transport theory methods invariably make approximations with regard to the treatment of the energy and angle variables involved in scattering, besides approximations related to geometry simplification. Cell homogenisation to produce diffusion theory parameters adds to these approximations. Moreover, while diffusion theory works for most reactors, it does not produce accurate results in systems that have strong gradients, strong absorbers or large voids. Also, diffusion theory codes are geometry-limited (rectangular, hexagonal, cylindrical, and spherical coordinates). Monte Carlo methods are ideal for solving very heterogeneous reactors and/or lattices/assemblies in which considerable burnable poisons are used. The key feature of this approach is that Monte Carlo methods permit essentially 'exact' modeling of all geometrical detail, without resort to energy and spatial homogenization of neutron cross sections. The Monte Carlo method would also be better for Accelerator Driven Systems (ADS), which could have strong gradients due to the external source and a sub-critical assembly. To meet the demand for an accurate burnup code, we have developed a Monte Carlo burnup calculation code system in which a Monte Carlo neutron transport code is coupled with a versatile code (McBurn) for calculating the buildup and decay of nuclides in nuclear materials. McBurn was developed from scratch by the authors. In this article we discuss our effort in developing the continuous-energy Monte Carlo burnup code, McBurn. McBurn is intended for entire reactor cores as well as for unit cells and assemblies. Generally, McBurn can do burnup of any geometrical system which can be handled by the underlying Monte Carlo transport code.
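
    The depletion half of such a coupled system solves the Bateman equations over each burnup step, with the Monte Carlo flux held constant. In a generic form (standard notation, not McBurn's specific formulation):

```latex
\frac{dN_i}{dt} \;=\; \sum_{j \neq i} \bigl( \lambda_{j\to i} + \sigma_{j\to i}\,\phi \bigr) N_j
  \;-\; \bigl( \lambda_i + \sigma_i^{a}\,\phi \bigr) N_i ,
```

    where N_i is the density of nuclide i, λ the decay constants, and σφ the flux-weighted transmutation rates. The transport code supplies φ and the one-group cross sections, and the depletion solve updates the N_i for the next transport step.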

  8. A Statistical Perspective on Running with Prosthetic Lower-Limbs: An Advantage or Disadvantage?

    Directory of Open Access Journals (Sweden)

    Hossein Hassani

    2014-11-01

    Full Text Available Technological developments have led to the increased use of carbon fiber and prosthetic lower-limbs in running events at the Paralympic Games. This study aims to exploit a series of statistical techniques in order to address the vital question of whether utilizing prosthetic feet can affect an athlete's ability when running competitively at the Paralympic Games, by comparing both within and between different classifications. The study also considers the differences between running on biological limbs and prosthetic lower-limbs from a mechanical point of view. The results from the male 100 m, 200 m and 400 m events at the 2012 London Paralympic Games are the source of this investigation. The investigation provides statistical evidence to suggest that the number of prosthetic limbs used and the structure of such limbs have a significant impact on the outcome of track events at the Paralympic Games.

  9. Monte Carlo simulations for plasma physics

    International Nuclear Information System (INIS)

    Okamoto, M.; Murakami, S.; Nakajima, N.; Wang, W.X.

    2000-07-01

    Plasma behaviours are very complicated and their analysis is generally difficult. However, when collisional processes play an important role in the plasma behaviour, the Monte Carlo method is often employed as a useful tool. For example, in neutral beam injection heating (NBI heating), electron or ion cyclotron heating, and alpha heating, Coulomb collisions slow down highly energetic particles and pitch-angle scatter them. These processes are often studied by the Monte Carlo technique, and good agreement can be obtained with experimental results. Recently, the Monte Carlo method has been developed to study fast particle transport associated with heating and the generation of the radial electric field. Further, it is applied to investigating neoclassical transport in plasmas with steep gradients of density and temperature, which is beyond the conventional neoclassical theory. In this report, we briefly summarize the research done by the present authors utilizing the Monte Carlo method. (author)

  10. Monte Carlo climate change forecasts with a global coupled ocean-atmosphere model

    International Nuclear Information System (INIS)

    Cubasch, U.; Santer, B.D.; Hegerl, G.; Hoeck, H.; Maier-Reimer, E.; Mikolajwicz, U.; Stoessel, A.; Voss, R.

    1992-01-01

    The Monte Carlo approach, which has increasingly been used during the last decade in the field of extended-range weather forecasting, has been applied to climate change experiments. Four integrations with a global coupled ocean-atmosphere model have been started from different initial conditions, but with the same greenhouse gas forcing according to the IPCC scenario A. All experiments have been run for a period of 50 years. The results indicate that the time evolution of the global mean warming depends strongly on the initial state of the climate system; it can vary between 6 and 31 years. The Monte Carlo approach delivers information about both the mean response and the statistical significance of the response. While the individual members of the ensemble show a considerable variation in the climate change pattern of temperature after 50 years, the ensemble mean climate change pattern closely resembles the pattern obtained in a 100-year integration and is, at least over most of the land areas, statistically significant. The ensemble-averaged sea-level change due to thermal expansion is significant in the global mean and locally over wide regions of the Pacific. The hydrological cycle is also significantly enhanced in the global mean, but locally the changes in precipitation and soil moisture are masked by the variability of the experiments. (orig.)

  11. Parallel Monte Carlo simulation of aerosol dynamics

    KAUST Repository

    Zhou, K.

    2014-01-01

    A highly efficient Monte Carlo (MC) algorithm is developed for the numerical simulation of aerosol dynamics, that is, nucleation, surface growth, and coagulation. Nucleation and surface growth are handled with deterministic means, while coagulation is simulated with a stochastic method (the Marcus-Lushnikov stochastic process). Operator splitting techniques are used to synthesize the deterministic and stochastic parts in the algorithm. The algorithm is parallelized using the Message Passing Interface (MPI). The parallel computing efficiency is investigated through numerical examples. Near 60% parallel efficiency is achieved for the largest test case, with 3.7 million MC particles running on 93 parallel computing nodes. The algorithm is verified by simulating various test cases and comparing the simulation results with available analytical and/or other numerical solutions. Generally, it is found that only a small number (hundreds or thousands) of MC particles is necessary to accurately predict the aerosol particle number density, volume fraction, and so forth, that is, the low-order moments of the Particle Size Distribution (PSD) function. Accurately predicting the high-order moments of the PSD requires a dramatic increase in the number of MC particles. © 2014 Kun Zhou et al.
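
    To illustrate the stochastic half of the algorithm, here is a minimal Marcus-Lushnikov coagulation simulation with a constant kernel. Nucleation, surface growth, operator splitting and the MPI decomposition of the paper are all omitted; the kernel and population are assumed toy values.

```python
import numpy as np

rng = np.random.default_rng(4)

K, V = 1.0, 1.0                     # constant coagulation kernel, system volume
particles = list(np.ones(2000))     # initial monomers of unit volume
t, t_end = 0.0, 1.0

while len(particles) > 1:
    n = len(particles)
    rate = K * n * (n - 1) / (2 * V)         # total pairwise coagulation rate
    dt = rng.exponential(1 / rate)            # waiting time to the next event
    if t + dt > t_end:
        break
    t += dt
    i, j = rng.choice(n, size=2, replace=False)
    particles[i] += particles[j]              # merge the chosen pair
    particles.pop(j)

# The mean count follows Smoluchowski's n0 / (1 + K*n0*t/(2V)) for this kernel.
print("particles left:", len(particles), " mean volume:", np.mean(particles))
```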

  12. Short-run and long-run elasticities of import demand for crude oil in Turkey

    International Nuclear Information System (INIS)

    Altinay, Galip

    2007-01-01

    The aim of this study is to estimate the short-run and long-run elasticities of demand for crude oil in Turkey using the recent autoregressive distributed lag (ARDL) bounds testing approach to cointegration. As a developing country, Turkey meets its growing demand for oil principally through foreign suppliers. Thus, the study focuses on modelling the demand for imported crude oil using annual data covering the period 1980-2005. The bounds test results reveal that a long-run cointegration relationship exists between crude oil imports and the explanatory variables, nominal price and income, but not in the model that includes the real price in domestic currency. The long-run parameters are estimated through a long-run static solution of the estimated ARDL model, and the short-run dynamics are then estimated with an error correction model. The estimated models pass the diagnostic tests successfully. The findings reveal that import demand for crude oil is both income- and price-inelastic in the short run as well as in the long run.
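
    The short-run dynamics mentioned here are typically written as an error-correction model of the following generic form (standard notation, not the paper's exact specification):

```latex
\Delta \ln M_t = \alpha
  + \sum_{i=1}^{p} \beta_i\,\Delta \ln M_{t-i}
  + \sum_{j=0}^{q} \gamma_j\,\Delta \ln X_{t-j}
  + \lambda\,EC_{t-1} + \varepsilon_t ,
```

    where M_t is the crude oil import volume, X_t collects the price and income regressors, and EC_{t-1} is the lagged error-correction term from the long-run relationship; a negative, significant λ measures the speed of adjustment back to the long-run equilibrium.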

  13. Monte Carlo methods and models in finance and insurance

    CERN Document Server

    Korn, Ralf; Kroisandt, Gerald

    2010-01-01

    Offering a unique balance between applications and calculations, Monte Carlo Methods and Models in Finance and Insurance incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The authors separately discuss Monte Carlo techniques, stochastic process basics, and the theoretical background and intuition behind financial and actuarial mathematics, before bringing the topics together to apply the Monte Carlo methods to areas of finance and insurance. This allows for the easy identification of standard Monte Carlo tools and for a detailed focus on the main principles of financial and insurance mathematics. The book describes high-level Monte Carlo methods for standard simulation and the simulation of...

  14. Habituation contributes to the decline in wheel running within wheel-running reinforcement periods.

    Science.gov (United States)

    Belke, Terry W; McLaughlin, Ryan J

    2005-02-28

    Habituation appears to play a role in the decline in wheel running within an interval. Aoyama and McSweeney [Aoyama, K., McSweeney, F.K., 2001. Habituation contributes to within-session changes in free wheel running. J. Exp. Anal. Behav. 76, 289-302] showed that when a novel stimulus was presented during a 30-min interval, wheel-running rates following the stimulus increased to levels approximating those earlier in the interval. The present study sought to assess the role of habituation in the decline in running that occurs over a briefer interval. In two experiments, rats responded on fixed-interval 30-s schedules for the opportunity to run for 45 s. Forty reinforcers were completed in each session. In the first experiment, the brake and chamber lights were repeatedly activated and inactivated after 25 s of a reinforcement interval had elapsed to assess the effect on running within the remaining 20 s. Presentations of the brake/light stimulus occurred during nine randomly determined reinforcement intervals in a session. In the second experiment, a 110 dB tone was emitted after 25 s of the reinforcement interval. In both experiments, presentation of the stimulus produced an immediate decline in running that dissipated over sessions. No increase in running following the stimulus was observed in the first experiment until the stimulus-induced decline dissipated. In the second experiment, increases in running were observed following the tone in the first session as well as when data were averaged over several sessions. In general, the results concur with the assertion that habituation plays a role in the decline in wheel running that occurs within both long and short intervals. (c) 2004 Elsevier B.V. All rights reserved.

  15. Monte Carlo approaches to light nuclei

    International Nuclear Information System (INIS)

    Carlson, J.

    1990-01-01

    Significant progress has been made recently in the application of Monte Carlo methods to the study of light nuclei. We review new Green's function Monte Carlo results for the alpha particle, Variational Monte Carlo studies of ¹⁶O, and methods for low-energy scattering and transitions. Through these calculations, a coherent picture of the structure and electromagnetic properties of light nuclei has arisen. In particular, we examine the effect of the three-nucleon interaction and the importance of exchange currents in a variety of experimentally measured properties, including form factors and capture cross sections. 29 refs., 7 figs

  16. Monte Carlo approaches to light nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, J.

    1990-01-01

    Significant progress has been made recently in the application of Monte Carlo methods to the study of light nuclei. We review new Green's function Monte Carlo results for the alpha particle, Variational Monte Carlo studies of ¹⁶O, and methods for low-energy scattering and transitions. Through these calculations, a coherent picture of the structure and electromagnetic properties of light nuclei has arisen. In particular, we examine the effect of the three-nucleon interaction and the importance of exchange currents in a variety of experimentally measured properties, including form factors and capture cross sections. 29 refs., 7 figs.

  17. RUNNING INJURY DEVELOPMENT

    DEFF Research Database (Denmark)

    Johansen, Karen Krogh; Hulme, Adam; Damsted, Camma

    2017-01-01

    BACKGROUND: Behavioral science methods have rarely been used in running injury research. Therefore, the attitudes amongst runners and their coaches regarding factors leading to running injuries warrant formal investigation. PURPOSE: To investigate the attitudes of middle- and long-distance runners able to compete in national championships, and their coaches, about factors associated with running injury development. METHODS: A link to an online survey was distributed to middle- and long-distance runners and their coaches across 25 Danish Athletics Clubs. The main research question was: "Which factors do you believe influence the risk of running injuries?". In response to this question, the athletes and coaches had to click "Yes" or "No" to 19 predefined factors. In addition, they had the possibility to submit a free-text response. RESULTS: A total of 68 athletes and 19 coaches were included...

  18. Running Injury Development

    DEFF Research Database (Denmark)

    Krogh Johansen, Karen; Hulme, Adam; Damsted, Camma

    2017-01-01

    Background: Behavioral science methods have rarely been used in running injury research. Therefore, the attitudes amongst runners and their coaches regarding factors leading to running injuries warrant formal investigation. Purpose: To investigate the attitudes of middle- and long-distance runners able to compete in national championships, and their coaches, about factors associated with running injury development. Methods: A link to an online survey was distributed to middle- and long-distance runners and their coaches across 25 Danish Athletics Clubs. The main research question was: “Which factors do you believe influence the risk of running injuries?”. In response to this question, the athletes and coaches had to click “Yes” or “No” to 19 predefined factors. In addition, they had the possibility to submit a free-text response. Results: A total of 68 athletes and 19 coaches were included...

  19. Study of Gamma spectra by Monte Carlo simulation

    International Nuclear Information System (INIS)

    Cantaragiu, A.; Gheorghies, A.; Borcia, C.

    2008-01-01

    The purpose of this paper is to obtain gamma-ray spectra from a scintillation detector by applying the Monte Carlo statistical simulation method using the EGS4 program. In the Monte Carlo approach, the physical system is described by probability density functions from which random numbers are generated, and the result is taken as an average of the observed quantities. The EGS4 program allows the simulation of the following physical processes: the photoelectric effect, the Compton effect, electron-positron pair production and Rayleigh scattering. The gamma rays recorded by the detector are converted into electrical pulses, and the gamma-ray spectra are acquired and processed by means of a Nomad Plus portable spectrometer connected to a computer. 137Cs and 60Co are used as gamma-ray sources; their spectra are recorded and used to study the interaction of gamma radiation with the scintillation detector. The parameters varied during the acquisition of the gamma-ray spectra are the distance between source and detector and the measuring time. Due to the statistical processes in the detector, a peak looks like a Gaussian distribution. The gamma-quantum energy is identified from the peaks of the experimental spectra, thus gathering information about the position, width and area of each peak. By means of the EGS4 program, a simulation is run using these parameters and an 'ideal' spectrum is obtained, a spectrum which is not influenced by the statistical processes that take place inside the detector. Then, the convolution of the spectra is achieved by means of a normalised Gaussian function. There is a close match between the experimental results and those simulated with the EGS4 program, because the interactions that occur during the simulation have a statistical behaviour close to the real one. (authors)
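
    The final convolution step can be sketched as follows: an "ideal" line spectrum is broadened bin by bin with a normalised Gaussian whose width follows an assumed energy-dependent resolution model. The FWHM coefficients below are illustrative, not fitted to the detector in the paper.

```python
import numpy as np

def broaden(energies_keV, counts, a=1.0, b=2.0):
    """Convolve a spectrum with an energy-dependent normalised Gaussian."""
    out = np.zeros_like(counts, dtype=float)
    for e0, c in zip(energies_keV, counts):
        if c == 0.0:
            continue
        fwhm = a + b * np.sqrt(e0)            # assumed resolution model
        sigma = fwhm / 2.355                  # FWHM -> standard deviation
        out += c * np.exp(-0.5 * ((energies_keV - e0) / sigma) ** 2) \
                 / (sigma * np.sqrt(2.0 * np.pi))
    return out

# Toy ideal spectrum: a single full-energy peak at 662 keV (137Cs-like).
e = np.arange(1.0, 1000.0)
ideal = np.zeros_like(e)
ideal[661] = 1000.0                           # index 661 corresponds to 662 keV
print("broadened peak height:", broaden(e, ideal).max())
```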

  20. Generation and performance of a multigroup coupled neutron-gamma cross-section library for deterministic and Monte Carlo borehole logging analysis

    International Nuclear Information System (INIS)

    Kodeli, I.; Aldama, D. L.; De Leege, P. F. A.; Legrady, D.; Hoogenboom, J. E.; Cowan, P.

    2004-01-01

    As part of the IRTMBA (Improved Radiation Transport Modelling for Borehole Applications) project of the EU community's 5th Framework Programme, a special-purpose multigroup cross-section library was prepared for use in deterministic and Monte Carlo oil well logging particle transport calculations. This library is expected to improve the prediction of the neutron and gamma spectra at the detector positions of the logging tool, and their use for the interpretation of neutron logging measurements was studied. The preparation and testing of this library are described. (authors)

  1. Using of the Serpent code based on the Monte-Carlo method for calculation of the VVER-1000 fuel assembly characteristics

    Directory of Open Access Journals (Sweden)

    V. V. Galchenko

    2016-12-01

    Full Text Available The calculation scheme of a fuel assembly for the preparation of few-group characteristics is described with the help of the Serpent code. This code uses the Monte Carlo method and continuous-energy microscopic data libraries. The Serpent code is intended for the calculation of fuel assembly characteristics, burnup calculations and the preparation of few-group homogenized macroscopic cross-sections. The results of verification simulations are presented in comparison with other codes (WIMS, HELIOS, NESSEL, etc.) which are used for the neutron-physical analysis of VVER-type fuel.

  2. Simulation and the Monte Carlo method

    CERN Document Server

    Rubinstein, Reuven Y

    2016-01-01

    Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov Chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...

  3. Lecture 1. Monte Carlo basics. Lecture 2. Adjoint Monte Carlo. Lecture 3. Coupled Forward-Adjoint calculations

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J.E. [Delft University of Technology, Interfaculty Reactor Institute, Delft (Netherlands)

    2000-07-01

    The Monte Carlo method is a statistical method to solve mathematical and physical problems using random numbers. The principle of the method will be demonstrated for a simple mathematical problem and for neutron transport. Various types of estimators will be discussed, as well as generally applied variance reduction methods like splitting, Russian roulette and importance biasing. The theoretical formulation for solving eigenvalue problems for multiplying systems will be shown. Some reflections will be given about the applicability of the Monte Carlo method, its limitations and its future prospects for reactor physics calculations. Adjoint Monte Carlo is a Monte Carlo game to solve the adjoint neutron (or photon) transport equation. The adjoint transport equation can be interpreted in terms of simulating histories of artificial particles, which show properties of neutrons that move backwards in history. These particles start their history at the detector from which the response must be estimated and give a contribution to the estimated quantity when they hit or pass through the neutron source. Application to the multigroup transport formulation will be demonstrated. A possible implementation for the continuous-energy case will be outlined. The inherent advantages and disadvantages of the method will be discussed. The Midway Monte Carlo method will be presented for calculating a detector response due to a (neutron or photon) source. A derivation will be given of the basic formula for the Midway Monte Carlo method. The black absorber technique, allowing for a cutoff of particle histories when reaching the midway surface in one of the calculations, will be derived. An extension of the theory to coupled neutron-photon problems is given. The method will be demonstrated for an oil well logging problem, comprising a neutron source in a borehole and photon detectors to register the photons generated by inelastic neutron scattering. (author)
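
    Of the variance reduction methods listed, splitting and Russian roulette admit a particularly compact illustration. The weight-window thresholds below are arbitrary choices, not values from the lectures.

```python
import numpy as np

rng = np.random.default_rng(5)

W_HIGH, W_LOW, W_SURVIVE = 2.0, 0.25, 1.0    # assumed weight-window bounds

def adjust(particles):
    """particles: list of (weight, state) pairs; returns the adjusted list."""
    out = []
    for w, state in particles:
        if w > W_HIGH:                        # splitting: many lighter copies
            n = int(round(w / W_SURVIVE))
            out.extend([(w / n, state)] * n)
        elif w < W_LOW:                       # Russian roulette
            if rng.random() < w / W_SURVIVE:  # survive with probability w/Ws
                out.append((W_SURVIVE, state))
            # otherwise the particle is killed; weight is preserved on average
        else:
            out.append((w, state))
    return out

print(adjust([(4.0, "A"), (0.1, "B"), (1.0, "C")]))
```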

  4. Lecture 1. Monte Carlo basics. Lecture 2. Adjoint Monte Carlo. Lecture 3. Coupled Forward-Adjoint calculations

    International Nuclear Information System (INIS)

    Hoogenboom, J.E.

    2000-01-01

    The Monte Carlo method is a statistical method to solve mathematical and physical problems using random numbers. The principle of the method will be demonstrated for a simple mathematical problem and for neutron transport. Various types of estimators will be discussed, as well as generally applied variance reduction methods like splitting, Russian roulette and importance biasing. The theoretical formulation for solving eigenvalue problems for multiplying systems will be shown. Some reflections will be given about the applicability of the Monte Carlo method, its limitations and its future prospects for reactor physics calculations. Adjoint Monte Carlo is a Monte Carlo game to solve the adjoint neutron (or photon) transport equation. The adjoint transport equation can be interpreted in terms of simulating histories of artificial particles, which show properties of neutrons that move backwards in history. These particles start their history at the detector from which the response must be estimated and give a contribution to the estimated quantity when they hit or pass through the neutron source. Application to the multigroup transport formulation will be demonstrated. A possible implementation for the continuous-energy case will be outlined. The inherent advantages and disadvantages of the method will be discussed. The Midway Monte Carlo method will be presented for calculating a detector response due to a (neutron or photon) source. A derivation will be given of the basic formula for the Midway Monte Carlo method. The black absorber technique, allowing for a cutoff of particle histories when reaching the midway surface in one of the calculations, will be derived. An extension of the theory to coupled neutron-photon problems is given. The method will be demonstrated for an oil well logging problem, comprising a neutron source in a borehole and photon detectors to register the photons generated by inelastic neutron scattering. (author)

  5. Rocker shoe, minimalist shoe, and standard running shoe : A comparison of running economy

    NARCIS (Netherlands)

    Sobhani, Sobhan; Bredeweg, Steven; Dekker, Rienk; Kluitenberg, Bas; van den Heuvel, Edwin; Hijmans, Juha; Postema, Klaas

    Objectives: Running with rocker shoes is believed to prevent lower limb injuries. However, it is not clear how running in these shoes affects the energy expenditure. The purpose of this study was, therefore, to assess the effects of rocker shoes on running economy in comparison with standard and

  6. Monte Carlo Transport for Electron Thermal Transport

    Science.gov (United States)

    Chenhall, Jeffrey; Cao, Duc; Moses, Gregory

    2015-11-01

    The iSNB (implicit Schurtz-Nicolai-Busquet) multigroup electron thermal transport method of Cao et al. is adapted into a Monte Carlo transport method in order to better model the effects of non-local behavior. The end goal is a hybrid transport-diffusion method that combines Monte Carlo transport with discrete diffusion Monte Carlo (DDMC). The hybrid method will combine the efficiency of a diffusion method in short mean-free-path regions with the accuracy of a transport method in long mean-free-path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the method will be presented. This work was supported by Sandia National Laboratory - Albuquerque and the University of Rochester Laboratory for Laser Energetics.

  7. Safety evaluation of the ESP sludge washing baselines runs. Revision 2

    International Nuclear Information System (INIS)

    Gupta, M.K.

    1993-01-01

    The purpose is to provide the technical basis for the evaluation of the unreviewed safety question for the Extended Sludge Processing (ESP) Sludge Washing Baseline Runs, which are necessary to resolve technical questions associated with process control (sludge suspension, sludge settling, heat transfer, temperature control). The sludge is currently stored in below-ground tanks and will be prepared for processing at the Defense Waste Processing Facility as part of the Integrated Waste Removal Program for the Savannah River Site.

  8. Generalized hybrid Monte Carlo - CMFD methods for fission source convergence

    International Nuclear Information System (INIS)

    Wolters, Emily R.; Larsen, Edward W.; Martin, William R.

    2011-01-01

    In this paper, we generalize the recently published 'CMFD-Accelerated Monte Carlo' method and present two new methods that reduce the statistical error in CMFD-Accelerated Monte Carlo. The CMFD-Accelerated Monte Carlo method uses Monte Carlo to estimate nonlinear functionals used in low-order CMFD equations for the eigenfunction and eigenvalue. The Monte Carlo fission source is then modified to match the resulting CMFD fission source in a 'feedback' procedure. The two proposed methods differ from CMFD-Accelerated Monte Carlo in the definition of the required nonlinear functionals, but they have identical CMFD equations. The proposed methods are compared with CMFD-Accelerated Monte Carlo on a high dominance ratio test problem. All hybrid methods converge the Monte Carlo fission source almost immediately, leading to a large reduction in the number of inactive cycles required. The proposed methods stabilize the fission source more efficiently than CMFD-Accelerated Monte Carlo, leading to a reduction in the number of active cycles required. Finally, as in CMFD-Accelerated Monte Carlo, the apparent variance of the eigenfunction is approximately equal to the real variance, so the real error is well-estimated from a single calculation. This is an advantage over standard Monte Carlo, in which the real error can be underestimated due to inter-cycle correlation. (author)

  9. The LHCb Run Control

    Energy Technology Data Exchange (ETDEWEB)

    Alessio, F; Barandela, M C; Frank, M; Gaspar, C; Herwijnen, E v; Jacobsson, R; Jost, B; Neufeld, N; Sambade, A; Schwemmer, R; Somogyi, P [CERN, 1211 Geneva 23 (Switzerland); Callot, O [LAL, IN2P3/CNRS and Universite Paris 11, Orsay (France); Duval, P-Y [Centre de Physique des Particules de Marseille, Aix-Marseille Universite, CNRS/IN2P3, Marseille (France); Franek, B [Rutherford Appleton Laboratory, Chilton, Didcot, OX11 0QX (United Kingdom); Galli, D, E-mail: Clara.Gaspar@cern.c [Universita di Bologna and INFN, Bologna (Italy)

    2010-04-01

    LHCb has designed and implemented an integrated Experiment Control System. The Control System uses the same concepts and the same tools to control and monitor all parts of the experiment: the Data Acquisition System, the Timing and the Trigger Systems, the High Level Trigger Farm, the Detector Control System, the Experiment's Infrastructure and the interaction with the CERN Technical Services and the Accelerator. LHCb's Run Control, the main interface used by the experiment's operator, provides access in a hierarchical, coherent and homogeneous manner to all areas of the experiment and to all its sub-detectors. It allows for automated (or manual) configuration and control, including error recovery, of the full experiment in its different running modes. Different instances of the same Run Control interface are used by the various sub-detectors for their stand-alone activities: test runs, calibration runs, etc. The architecture and the tools used to build the control system, the guidelines and components provided to the developers, as well as the first experience with the usage of the Run Control will be presented

  10. runDM: Running couplings of Dark Matter to the Standard Model

    Science.gov (United States)

    D'Eramo, Francesco; Kavanagh, Bradley J.; Panci, Paolo

    2018-02-01

    runDM calculates the running of the couplings of Dark Matter (DM) to the Standard Model (SM) in simplified models with vector mediators. By specifying the mass of the mediator and the couplings of the mediator to SM fields at high energy, the code can calculate the couplings at low energy, taking into account the mixing of all dimension-6 operators. runDM can also extract the operator coefficients relevant for direct detection, namely low energy couplings to up, down and strange quarks and to protons and neutrons.

  11. Neural network-based run-to-run controller using exposure and resist thickness adjustment

    Science.gov (United States)

    Geary, Shane; Barry, Ronan

    2003-06-01

    This paper describes the development of a run-to-run control algorithm using a feedforward neural network trained with the backpropagation method. The algorithm is used to predict the critical dimension of the next lot using previous lot information. It is compared to a common prediction algorithm, the exponentially weighted moving average (EWMA), and is shown to give superior prediction performance in simulations. The manufacturing implementation of the final neural network showed significantly improved process capability compared to the case where no run-to-run control was utilised.
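
    For comparison, the EWMA baseline that the neural controller is benchmarked against has a one-line update. The smoothing weight and the sample data below are assumptions for illustration, not values from the paper.

```python
def ewma_predict(history, lam=0.3):
    """Forecast the next lot's CD with an exponentially weighted moving average."""
    pred = history[0]
    for y in history[1:]:
        pred = lam * y + (1.0 - lam) * pred   # blend the newest lot into the forecast
    return pred

cd_history = [101.2, 100.7, 101.5, 100.9, 101.1]   # critical dimensions (nm)
print("next-lot CD forecast:", round(ewma_predict(cd_history), 2))
```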

  12. Fermented soymilk increases voluntary wheel running activity and sexual behavior in male rats.

    Science.gov (United States)

    Sato, Takuya; Shinohara, Yasutomo; Kaneko, Daisuke; Nishimura, Ikuko; Matsuyama, Asahi

    2010-12-01

    Wheel running by rodents is thought to reflect voluntary exercise in humans. The present study examined the effect of fermented soymilk (FSM) on voluntary wheel running in rats. FSM was prepared from soymilk (SM) using the bacterium Leuconostoc pseudomesenteroides. The rats were fed a normal diet for 3 weeks, followed by a 3-week administration of a diet containing FSM or SM (5% w/w), and then the diets were switched back to a normal diet for 3 weeks. Voluntary wheel running activity was increased by FSM administration, although no changes were observed with SM administration. This effect was observed 2 weeks after the start of FSM administration and lasted 1 week after the withdrawal of FSM. We then evaluated the effect of FSM on sexual behavior in male rats. FSM administration for 10 days significantly increased the number of mounts. The protein expression of tyrosine hydroxylase (TH) increased in the hippocampus with FSM administration, and it is suggested that FSM may change norepinephrine or dopamine signaling in the brain. Our study provides the first evidence that FSM increases voluntary wheel running activity and sexual behavior, and suggests that TH may be involved in these effects.

  13. Symmetry in running.

    Science.gov (United States)

    Raibert, M H

    1986-03-14

    Symmetry plays a key role in simplifying the control of legged robots and in giving them the ability to run and balance. The symmetries studied describe motion of the body and legs in terms of even and odd functions of time. A legged system running with these symmetries travels with a fixed forward speed and a stable upright posture. The symmetries used for controlling legged robots may help in elucidating the legged behavior of animals. Measurements of running in the cat and human show that the feet and body sometimes move as predicted by the even and odd symmetry functions.

  14. The Robust Running Ape: Unraveling the Deep Underpinnings of Coordinated Human Running Proficiency

    Directory of Open Access Journals (Sweden)

    John Kiely

    2017-06-01

    In comparison to other mammals, humans are not especially strong, swift or supple. Nevertheless, despite these apparent physical limitations, we are among Nature's most superbly well-adapted endurance runners. Paradoxically, notwithstanding this evolutionarily bestowed proficiency, running-related injuries, and overuse syndromes in particular, are widely pervasive. The term ‘coordination’ is similarly ubiquitous within contemporary coaching, conditioning, and rehabilitation cultures. Various theoretical models of coordination exist within the academic literature. However, the specific neural and biological underpinnings of ‘running coordination,’ and the nature of their integration, remain poorly elaborated. Conventionally, running is considered a mundane, readily mastered coordination skill. This illusion of coordinative simplicity, however, is founded upon a platform of immense neural and biological complexities. This extensive complexity presents extreme organizational difficulties yet, simultaneously, provides a multiplicity of viable pathways through which the computational and mechanical burden of running can be proficiently dispersed amongst expanded networks of conditioned neural and peripheral tissue collaborators. Learning to adequately harness this available complexity, however, is a slowly emerging, practice-driven process, greatly facilitated by innate evolutionary organizing principles that serve to constrain otherwise overwhelming complexity to manageable proportions. As we accumulate running experiences, persistent plastic remodeling customizes networked neural connectivity and biological tissue properties to best fit our unique neural and architectural idiosyncrasies and personal histories: thus neural and peripheral tissue plasticity embeds coordination habits. When, however, coordinative processes are compromised, under the integrated influence of fatigue and/or accumulative cycles of injury, overuse…

  15. A PC version of the Monte Carlo criticality code OMEGA

    International Nuclear Information System (INIS)

    Seifert, E.

    1996-05-01

    A description of the PC version of the Monte Carlo criticality code OMEGA is given. The report contains a general description of the code together with a detailed input description. Furthermore, some examples are given illustrating the generation of an input file. The main field of application is the calculation of the criticality of arrangements of fissionable material. Geometrically complicated arrangements that often appear inside and outside a reactor, e.g. in a fuel storage or transport container, can be considered essentially without geometrical approximations. For example, the real geometry of assemblies containing hexagonal or square lattice structures can be described in full detail. Moreover, the code can be used for special investigations in the field of reactor physics and neutron transport. Many years of practical experience and comparison with reference cases have shown that the code together with the built-in data libraries gives reliable results. OMEGA is completely independent of other widely used criticality codes (KENO, MCNP, etc.) in both programming and data base. It is good practice to run difficult criticality safety problems with different independent codes in order to mutually verify the results. In this way, OMEGA can be used as a redundant code within the family of criticality codes. An advantage of OMEGA is the short calculation time: a typical criticality safety application takes only a few minutes on a Pentium PC. Therefore, the influence of parameter variations can easily be investigated by running many variants of a problem. (orig.)

  16. The ALICE–HMPID performance during the LHC run period 2010–2013

    Energy Technology Data Exchange (ETDEWEB)

    De Cataldo, Giacinto, E-mail: giacinto.decataldo@ba.infn.it

    2014-12-01

    The ALICE–High Momentum Particle Identification Detector (HMPID) is based on seven modules of proximity-focusing Ring Imaging Cherenkov detectors. It is equipped with CsI photo-cathodes for Cherenkov photon detection, installed in multiwire proportional chambers filled with methane. It uses liquid C₆F₁₄ as Cherenkov radiator and identifies, with three-sigma separation, charged π and K in the momentum range 1–3 GeV/c, and protons in the range 1.5–5 GeV/c. At the end of the first run period 2010–2013 of the Large Hadron Collider, the HMPID performance is analysed. No ageing effects of the CsI photo-cathodes induced by the ion avalanches have been observed, and we do not expect to reach the charge threshold during the High Luminosity LHC run period 2019–2021. Leakages of C₆F₁₄ in 4 out of 21 quartz radiator vessels, combined with 3 failing HV sectors out of 42, have limited the full detector acceptance to 72%. Improvements in the filling operation of the vessels should minimize further leakage. Finally, the particle identification performance is presented using p–Pb collisions. For Pb–Pb central collisions, where the purity and contamination of identified particles are extracted via a Monte Carlo simulation, the impact of the background on the pattern recognition algorithm is briefly discussed.

  17. Mean field simulation for Monte Carlo integration

    CERN Document Server

    Del Moral, Pierre

    2013-01-01

    In the last three decades, there has been a dramatic increase in the use of interacting particle methods as a powerful tool in real-world applications of Monte Carlo simulation in computational physics, population biology, computer sciences, and statistical machine learning. Ideally suited to parallel and distributed computation, these advanced particle algorithms include nonlinear interacting jump diffusions; quantum, diffusion, and resampled Monte Carlo methods; Feynman-Kac particle models; genetic and evolutionary algorithms; sequential Monte Carlo methods; adaptive and interacting Markov chain Monte Carlo methods…

  18. Monte Carlo tools for Beyond the Standard Model Physics , April 14-16

    DEFF Research Database (Denmark)

    Badger, Simon; Christensen, Christian Holm; Dalsgaard, Hans Hjersing

    2011-01-01

    This workshop aims to gather together theorists and experimentalists interested in developing and using Monte Carlo tools for Beyond the Standard Model Physics in an attempt to be prepared for the analysis of data focusing on the Large Hadron Collider. Since a large number of excellent tools… To identify promising models (or processes) for which the tools have not yet been constructed and start filling up these gaps. To propose ways to streamline the process of going from models to events, i.e. to make the process more user-friendly so that more people can get involved and perform serious collider…

  19. Variational Variance Reduction for Monte Carlo Criticality Calculations

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Larsen, Edward W.

    2001-01-01

    A new variational variance reduction (VVR) method for Monte Carlo criticality calculations was developed. This method employs (a) a variational functional that is more accurate than the standard direct functional, (b) a representation of the deterministically obtained adjoint flux that is especially accurate for optically thick problems with high scattering ratios, and (c) estimates of the forward flux obtained by Monte Carlo. The VVR method requires no nonanalog Monte Carlo biasing, but it may be used in conjunction with Monte Carlo biasing schemes. Some results are presented from a class of criticality calculations involving alternating arrays of fuel and moderator regions

  20. TOPIC: a debugging code for torus geometry input data of Monte Carlo transport code

    International Nuclear Information System (INIS)

    Iida, Hiromasa; Kawasaki, Hiromitsu.

    1979-06-01

    TOPIC has been developed for debugging the geometry input data of Monte Carlo transport codes. The code has the following features: (1) it debugs the geometry input data of not only MORSE-GG but also MORSE-I, which is capable of treating torus geometry; (2) its calculation results are shown in figures drawn by a plotter or COM, so that regions left undefined or doubly defined are easily detected; (3) it finds a multitude of input data errors in a single run; (4) the input data required by this code are few, so that it is readily usable on a time-sharing system of the FACOM 230-60/75 computer. Example TOPIC calculations in design studies of tokamak fusion reactors (JXFR, INTOR-J) are presented. (author)

  1. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
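
    The variance comparison can be checked numerically in a toy model. The sketch below assumes a linear response of the observable to ten systematic parameters plus Gaussian MC noise; all numbers are invented for illustration.

      # Toy numeric check of the unisim/multisim comparison described above,
      # under an assumed linear response: observable = sum_i a_i * s_i + MC noise.
      import numpy as np

      rng = np.random.default_rng(1)
      n_sys, a = 10, np.random.default_rng(42).normal(size=10)  # sensitivities
      mc_sigma = 0.5                                # statistical error of one MC run

      def mc_run(shifts):
          return a @ shifts + rng.normal(0, mc_sigma)

      nominal = mc_run(np.zeros(n_sys))

      # unisim: one run per systematic, each shifted by +1 sigma
      unisim = np.array([mc_run(np.eye(n_sys)[i]) - nominal for i in range(n_sys)])
      var_unisim = np.sum(unisim**2)

      # multisim: many runs with all systematics drawn from their priors
      m = 200
      multis = np.array([mc_run(rng.normal(size=n_sys)) - nominal for _ in range(m)])
      var_multisim = multis.var()

      print(var_unisim, var_multisim, np.sum(a**2))  # both estimate sum(a_i^2)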

  2. EnergyPlus Run Time Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Tianzhen; Buhl, Fred; Haves, Philip

    2008-09-20

    EnergyPlus is a new-generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations that integrate building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation of simulation programs, which has become a major barrier to its widespread adoption by the industry. This paper analyzes EnergyPlus run time from several perspectives to identify the key issues and challenges in speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. The paper provides recommendations for improving EnergyPlus run time from the modeler's perspective, together with guidance on adequate computing platforms. Suggestions for software code and architecture changes to improve EnergyPlus run time, based on the code profiling results, are also discussed.

  3. Monte Carlo Solutions for Blind Phase Noise Estimation

    Directory of Open Access Journals (Sweden)

    Çırpan Hakan

    2009-01-01

    This paper investigates the use of Monte Carlo sampling methods for phase noise estimation on additive white Gaussian noise (AWGN) channels. The main contributions of the paper are (i) the development of a Monte Carlo framework for phase noise estimation, with special attention to sequential importance sampling and Rao-Blackwellization, (ii) the interpretation of existing Monte Carlo solutions within this generic framework, and (iii) the derivation of a novel phase noise estimator. Contrary to the ad hoc phase noise estimators that have been proposed in the past, the estimators considered in this paper are derived from solid probabilistic and performance-determining arguments. Computer simulations demonstrate that, on one hand, the Monte Carlo phase noise estimators outperform the existing estimators and, on the other hand, our newly proposed solution exhibits a lower complexity than the existing Monte Carlo solutions.
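
    A stripped-down member of this family of estimators is the bootstrap particle filter. The sketch below tracks a Wiener phase-noise process from known pilot symbols on an AWGN channel; it omits the Rao-Blackwellization discussed in the paper, and all parameters are illustrative.

      # Minimal bootstrap particle filter for Wiener phase noise on AWGN,
      # a simplified stand-in for the sequential-importance-sampling
      # estimators discussed above (known BPSK pilots assumed).
      import numpy as np

      rng = np.random.default_rng(2)
      T, N = 200, 500                   # symbols, particles
      sig_phase, sig_noise = 0.05, 0.3  # phase random-walk and AWGN std

      theta = np.cumsum(rng.normal(0, sig_phase, T))     # true phase path
      pilots = np.ones(T)                                # BPSK pilots
      noise = (rng.normal(size=T) + 1j * rng.normal(size=T)) / np.sqrt(2)
      y = pilots * np.exp(1j * theta) + sig_noise * noise

      particles = np.zeros(N)
      est = np.empty(T)
      for t in range(T):
          particles += rng.normal(0, sig_phase, N)       # propagate random walk
          resid = np.abs(y[t] - pilots[t] * np.exp(1j * particles))
          w = np.exp(-resid**2 / sig_noise**2)           # Gaussian likelihood
          w /= w.sum()
          est[t] = np.angle(np.sum(w * np.exp(1j * particles)))  # circular mean
          particles = particles[rng.choice(N, size=N, p=w)]      # resampling

      print(np.sqrt(np.mean((np.unwrap(est) - theta) ** 2)))     # phase RMSE, rad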

  4. A parallel Monte Carlo code for planar and SPECT imaging: implementation, verification and applications in (131)I SPECT.

    Science.gov (United States)

    Dewaraja, Yuni K; Ljungberg, Michael; Majumdar, Amitava; Bose, Abhijit; Koral, Kenneth F

    2002-02-01

    This paper reports the implementation of the SIMIND Monte Carlo code on an IBM SP2 distributed memory parallel computer. Basic aspects of running Monte Carlo particle transport calculations on parallel architectures are described. Our parallelization is based on equally partitioning photons among the processors and uses the Message Passing Interface (MPI) library for interprocessor communication and the Scalable Parallel Random Number Generator (SPRNG) to generate uncorrelated random number streams. These parallelization techniques are also applicable to other distributed memory architectures. A linear increase in computing speed with the number of processors is demonstrated for up to 32 processors. This speed-up is especially significant in Single Photon Emission Computed Tomography (SPECT) simulations involving higher energy photon emitters, where explicit modeling of the phantom and collimator is required. For (131)I, the accuracy of the parallel code is demonstrated by comparing simulated and experimental SPECT images from a heart/thorax phantom. Clinically realistic SPECT simulations using the voxel-man phantom are carried out to assess scatter and attenuation correction.
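
    The parallelization pattern described, equal photon partitioning plus uncorrelated per-worker random streams, can be sketched without MPI or SPRNG. Below, Python multiprocessing stands in for MPI and numpy's SeedSequence spawning stands in for SPRNG; the "transport" is a trivial attenuation toy, not SIMIND physics.

      import numpy as np
      from multiprocessing import Pool

      def simulate_batch(args):
          n_photons, seed_seq = args
          rng = np.random.default_rng(seed_seq)
          # stand-in transport: count photons surviving exponential attenuation
          depths = rng.exponential(scale=1.0, size=n_photons)
          return np.count_nonzero(depths > 2.0)

      if __name__ == "__main__":
          n_total, n_workers = 1_000_000, 8
          seeds = np.random.SeedSequence(1234).spawn(n_workers)  # independent streams
          chunks = [(n_total // n_workers, s) for s in seeds]    # equal partitioning
          with Pool(n_workers) as pool:
              detected = sum(pool.map(simulate_batch, chunks))
          print(detected / n_total)   # compare to exp(-2) ~ 0.135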

  5. The ATLAS Tau Trigger Performance during LHC Run 1 and Prospects for Run 2

    CERN Document Server

    Mitani, T; The ATLAS collaboration

    2016-01-01

    The ATLAS tau trigger is designed to select hadronic decays of the tau leptons. Tau lepton plays an important role in Standard Model (SM) physics, such as in Higgs boson decays. Tau lepton is also important in beyond the SM (BSM) scenarios, such as supersymmetry and exotic particles, as they are often produced preferentially in these models. During the 2010-2012 LHC run (Run1), the tau trigger was accomplished successfully, which leads several rewarding results such as evidence for $H\\rightarrow \\tau\\tau$. From the 2015 LHC run (Run2), LHC will be upgraded and overlapping interactions per bunch crossing (pile-up) are expected to increase by a factor two. It will be challenging to control trigger rates while keeping interesting physics events. This paper summarized the tau trigger performance in Run1 and its prospects for Run2.

  6. Variance of measurements from a calibration function derived from data which exhibit run-to-run differences

    International Nuclear Information System (INIS)

    Liebetrau, A.M.

    1985-01-01

    The volume of liquid in a nuclear process tank is determined from a calibration equation which expresses volume as a function of liquid level. Successive calibration runs are made to obtain data from which to estimate either the calibration function or its inverse. For tanks equipped with high-precision measurement systems to determine liquid level, it frequently happens that run-to-run differences due to uncontrolled or uncontrollable ambient conditions are large relative to within-run measurement errors. In the strict sense, a calibration function cannot be developed from data which exhibit significant run-to-run differences. In practice, run-to-run differences are ignored when they are small relative to the accuracy required for measurements of the tank's contents. The use of standard statistical techniques in this situation can result in variance estimates which severely underestimate the actual uncertainty in volume measurements. This paper gives a method whereby reasonable estimates of the calibration uncertainty in volume determinations can be obtained in the presence of statistically significant run-to-run variability. 4 references, 3 figures, 1 table
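
    The underestimation can be demonstrated with a few lines of synthetic data: per-run offsets are added to a linear level-to-volume calibration, and the naive pooled residual variance is compared with the within-run and between-run components. All numbers below are invented.

      import numpy as np

      rng = np.random.default_rng(3)
      runs, pts = 6, 20
      level = np.tile(np.linspace(0, 10, pts), runs)
      run_id = np.repeat(np.arange(runs), pts)
      run_offset = rng.normal(0, 0.5, runs)          # run-to-run differences
      volume = (2.0 + 3.0 * level + run_offset[run_id]
                + rng.normal(0, 0.05, runs * pts))   # small within-run noise

      # pooled OLS calibration fit, ignoring run structure
      X = np.column_stack([np.ones_like(level), level])
      beta, *_ = np.linalg.lstsq(X, volume, rcond=None)
      resid = volume - X @ beta

      naive_var = resid.var(ddof=2)                  # treats all points as iid
      within = np.mean([resid[run_id == r].var(ddof=1) for r in range(runs)])
      between = np.var([resid[run_id == r].mean() for r in range(runs)], ddof=1)
      print(naive_var, within, between)  # between-run variance dominates here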

  7. Monte Carlo based diffusion coefficients for LMFBR analysis

    International Nuclear Information System (INIS)

    Van Rooijen, Willem F.G.; Takeda, Toshikazu; Hazama, Taira

    2010-01-01

    A method based on Monte Carlo calculations is developed to estimate the diffusion coefficient of unit cells. The method uses a geometrical model similar to that used in lattice theory, but does not use the assumption of a separable fundamental mode. The method uses standard Monte Carlo flux and current tallies, and the continuous-energy Monte Carlo code MVP was used without modifications. Four models are presented to derive the diffusion coefficient from tally results of flux and partial currents. In this paper the method is applied to the calculation of a plate cell of the fast-spectrum critical facility ZEBRA. Conventional calculations of the diffusion coefficient diverge in the presence of planar voids in the lattice, but our Monte Carlo method can treat this situation without any problem. The Monte Carlo method was used to investigate the influence of geometrical modeling as well as the directional dependence of the diffusion coefficient. The method can be used to estimate the diffusion coefficient of complicated unit cells, the limitation being the capabilities of the Monte Carlo code. The method will be used in the future to confirm results for the diffusion coefficient obtained with deterministic codes. (author)
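
    The basic tally-to-coefficient step can be illustrated with Fick's law, J = -D dφ/dx. The sketch below recovers D from synthetic flux and net-current "tallies"; it is a generic illustration and does not reproduce any of the paper's four models.

      import numpy as np

      x = np.linspace(0.0, 10.0, 51)             # tally mesh (cm)
      D_true, L = 1.2, 2.5
      phi = np.cosh((x - 5.0) / L)               # assumed flux shape
      J = -D_true * np.gradient(phi, x)          # net current consistent with Fick

      dphi_dx = np.gradient(phi, x)
      mask = np.abs(dphi_dx) > 1e-6              # avoid near-zero gradients
      D_est = np.median(-J[mask] / dphi_dx[mask])
      print(D_est)                               # recovers D_true = 1.2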

  8. The ATLAS Distributed Computing project for LHC Run-2 and beyond.

    CERN Document Server

    Di Girolamo, Alessandro; The ATLAS collaboration

    2015-01-01

    The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run 2. An increased data rate and the computing demands of the Monte-Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (ProdSys2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward the flexible computing model. Flexible computing utilization exploring opportunistic resources such as HPC, cloud, and volunteer computing is embedded in the new computing model, the data access mechanisms have been enhanced with remote access, and the network topology and performance are deeply integrated into the core of the system. Moreover a new data management strategy, based on a defined lifetime for each dataset, has been defined…

  9. Investigation of attenuation correction in SPECT using textural features, Monte Carlo simulations, and computational anthropomorphic models.

    Science.gov (United States)

    Spirou, Spiridon V; Papadimitroulas, Panagiotis; Liakou, Paraskevi; Georgoulias, Panagiotis; Loudos, George

    2015-09-01

    To present and evaluate a new methodology to investigate the effect of attenuation correction (AC) in single-photon emission computed tomography (SPECT) using textural features analysis, Monte Carlo techniques, and a computational anthropomorphic model. The GATE Monte Carlo toolkit was used to simulate SPECT experiments using the XCAT computational anthropomorphic model, filled with a realistic biodistribution of (99m)Tc-N-DBODC. The simulated gamma camera was the Siemens ECAM Dual-Head, equipped with a parallel-hole lead collimator, with an image resolution of 3.54 × 3.54 mm². Thirty-six equispaced camera positions, spanning a full 360° arc, were simulated. Projections were calculated after applying a ± 20% energy window or after eliminating all scattered photons. The activity of the radioisotope was reconstructed using the MLEM algorithm. Photon attenuation was accounted for by calculating the radiological pathlength in a perpendicular line from the center of each voxel to the gamma camera. Twenty-two textural features were calculated on each slice, with and without AC, using 16 and 64 gray levels. A mask was used to identify only those pixels that belonged to each organ. Twelve of the 22 features showed almost no dependence on AC, irrespective of the organ involved. In both the heart and the liver, the mean and SD were the features most affected by AC. In the liver, six features were affected by AC only on some slices. Depending on the slice, skewness decreased by 22-34% with AC, kurtosis by 35-50%, long-run emphasis mean by 71-91%, and long-run emphasis range by 62-95%. In contrast, gray-level non-uniformity mean increased by 78-218% compared with the value without AC and run percentage mean by 51-159%. These results were not affected by the number of gray levels (16 vs. 64) or the data used for reconstruction: with the energy window or without scattered photons. The mean and SD were the main features affected by AC. In the heart, no other feature was…

  10. Not Just Running: Coping with and Managing Everyday Life through Road-Running

    OpenAIRE

    Cook, Simon

    2014-01-01

    From the external form, running looks like running. Yet this alikeness masks a hugely divergent practice consisting of different movements, meanings and experiences. In this paper I wish to shed light upon some of these different ‘ways of running’ and in turn identify a range of the sometimes surprising, sometimes significant and sometimes banal benefits that road-running can gift its practitioners beyond simply exercise and physical fitness. Drawing on an innovative mapping and ethnographic ...

  11. Running Club

    CERN Multimedia

    Running Club

    2011-01-01

    The cross country running season has started well this autumn with two events: the traditional CERN Road Race organized by the Running Club, which took place on Tuesday 5th October, followed by the ‘Cross Interentreprises’, a team event at the Evaux Sports Center, which took place on Saturday 8th October. The participation at the CERN Road Race was slightly down on last year, with 65 runners; however, the participants maintained the tradition of a competitive yet friendly atmosphere. An ample supply of refreshments before the prize giving was appreciated by all after the race. Many thanks to all the runners and volunteers who ensured another successful race. The results can be found here: https://espace.cern.ch/Running-Club/default.aspx CERN participated successfully at the Cross Interentreprises with very good results. The teams succeeded in obtaining 2nd and 6th place in the Men's category, and 2nd place in the Mixed category. Congratulations to all. See results here: http://www.c...

  12. Change in running kinematics after cycling are related to alterations in running economy in triathletes.

    Science.gov (United States)

    Bonacci, Jason; Green, Daniel; Saunders, Philo U; Blanch, Peter; Franettovich, Melinda; Chapman, Andrew R; Vicenzino, Bill

    2010-07-01

    Emerging evidence suggests that cycling may influence neuromuscular control during subsequent running, but the relationship between altered neuromuscular control and run performance in triathletes is not well understood. The aim of this study was to determine if a 45 min high-intensity cycle influences lower limb movement and muscle recruitment during running, and whether changes in limb movement or muscle recruitment are associated with changes in running economy (RE) after cycling. RE, muscle activity (surface electromyography) and limb movement (sagittal plane kinematics) were compared between a control run (no preceding cycle) and a run performed after a 45 min high-intensity cycle in 15 moderately trained triathletes. Muscle recruitment and kinematics during running after cycling were altered in 7 of 15 (46%) triathletes. Changes in kinematics at the knee and ankle were significantly associated with the change in VO(2) after cycling (p < 0.05). These findings suggest that prior cycling alters muscle recruitment in some triathletes and that changes in kinematics, especially at the ankle, are closely related to alterations in running economy after cycling. Copyright 2010 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  13. Random Numbers and Monte Carlo Methods

    Science.gov (United States)

    Scherer, Philipp O. J.

    Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages Monte Carlo methods are very useful which sample the integration volume at randomly chosen points. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with given probability distribution which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
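
    As a concrete instance of the Metropolis algorithm mentioned above, the sketch below computes a thermal average for a quartic potential V(x) = x⁴; the exact answer ⟨V⟩ = kT/4 follows from the virial theorem and gives a check on the sampler. All parameters are illustrative.

      import numpy as np

      rng = np.random.default_rng(4)
      kT, step, n_steps = 1.0, 1.0, 200_000
      V = lambda x: x**4

      x, energies = 0.0, []
      for _ in range(n_steps):
          x_new = x + rng.uniform(-step, step)           # symmetric proposal
          if rng.random() < np.exp(-(V(x_new) - V(x)) / kT):  # Metropolis accept
              x = x_new
          energies.append(V(x))

      print(np.mean(energies[n_steps // 10:]))  # discard burn-in; exact value kT/4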

  14. LCG Monte-Carlo Data Base

    CERN Document Server

    Bartalini, P.; Kryukov, A.; Selyuzhenkov, Ilya V.; Sherstnev, A.; Vologdin, A.

    2004-01-01

    We present the Monte-Carlo events Data Base (MCDB) project and its development plans. MCDB facilitates communication between authors of Monte-Carlo generators and experimental users. It also provides a convenient book-keeping and an easy access to generator level samples. The first release of MCDB is now operational for the CMS collaboration. In this paper we review the main ideas behind MCDB and discuss future plans to develop this Data Base further within the CERN LCG framework.

  15. Financial Performance of Health Insurers: State-Run Versus Federal-Run Exchanges.

    Science.gov (United States)

    Hall, Mark A; McCue, Michael J; Palazzolo, Jennifer R

    2018-06-01

    Many insurers incurred financial losses in individual markets for health insurance during 2014, the first year of Affordable Care Act mandated changes. This analysis looks at key financial ratios of insurers to compare profitability in 2014 and 2013, identify factors driving financial performance, and contrast the financial performance of health insurers operating in state-run exchanges versus the federal exchange. Overall, the median loss of sampled insurers was -3.9%, no greater than their loss in 2013. Reduced administrative costs offset increases in medical losses. Insurers performed better in states with state-run exchanges than insurers in states using the federal exchange in 2014. Medical loss ratios are the underlying driver more than administrative costs in the difference in performance between states with federal versus state-run exchanges. Policy makers looking to improve the financial performance of the individual market should focus on features that differentiate the markets associated with state-run versus federal exchanges.

  16. Barefoot running: does it prevent injuries?

    Science.gov (United States)

    Murphy, Kelly; Curry, Emily J; Matzkin, Elizabeth G

    2013-11-01

    Endurance running has evolved over the course of millions of years and it is now one of the most popular sports today. However, the risk of stress injury in distance runners is high because of the repetitive ground impact forces exerted. These injuries are not only detrimental to the runner, but also place a burden on the medical community. Preventative measures are essential to decrease the risk of injury within the sport. Common running injuries include patellofemoral pain syndrome, tibial stress fractures, plantar fasciitis, and Achilles tendonitis. Barefoot running, as opposed to shod running (with shoes), has recently received significant attention in both the media and the market place for the potential to promote the healing process, increase performance, and decrease injury rates. However, there is controversy over the use of barefoot running to decrease the overall risk of injury secondary to individual differences in lower extremity alignment, gait patterns, and running biomechanics. While barefoot running may benefit certain types of individuals, differences in running stance and individual biomechanics may actually increase injury risk when transitioning to barefoot running. The purpose of this article is to review the currently available clinical evidence on barefoot running and its effectiveness for preventing injury in the runner. Based on a review of current literature, barefoot running is not a substantiated preventative running measure to reduce injury rates in runners. However, barefoot running utility should be assessed on an athlete-specific basis to determine whether barefoot running will be beneficial.

  17. Igo - A Monte Carlo Code For Radiotherapy Planning

    International Nuclear Information System (INIS)

    Goldstein, M.; Regev, D.

    1999-01-01

    The goal of radiation therapy is to deliver a lethal dose to the tumor while minimizing the dose to normal tissues and vital organs. To carry out this task, it is critical to calculate correctly the 3-D dose delivered. Monte Carlo transport methods (especially the adjoint Monte Carlo method) have the potential to provide more accurate predictions of the 3-D dose than the currently used methods. IG0 is a Monte Carlo code derived from the general Monte Carlo program MCNP, tailored specifically for calculating the effects of radiation therapy. This paper describes the IG0 transport code, the PIG0 interface and some preliminary results.

  18. Monte Carlo techniques for analyzing deep-penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.; Gonnord, J.; Hendricks, J.S.

    1986-01-01

    Current methods and difficulties in Monte Carlo deep-penetration calculations are reviewed, including statistical uncertainty and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multigroup Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications

  19. Odd-flavor Simulations by the Hybrid Monte Carlo

    CERN Document Server

    Takaishi, Tetsuya; Takaishi, Tetsuya; De Forcrand, Philippe

    2001-01-01

    The standard hybrid Monte Carlo algorithm is known to simulate only even numbers of QCD flavors. Simulations of odd-flavor QCD, however, can also be performed in the framework of the hybrid Monte Carlo algorithm, where the inverse of the fermion matrix is approximated by a polynomial. In this exploratory study we perform three-flavor QCD simulations. We compare the hybrid Monte Carlo algorithm with the R-algorithm, which also simulates odd-flavor systems but has step-size errors. We find that results from our hybrid Monte Carlo algorithm are in agreement with those from the R-algorithm obtained at very small step-size.

  20. Quantum Monte Carlo approaches for correlated systems

    CERN Document Server

    Becca, Federico

    2017-01-01

    Over the past several decades, computational approaches to studying strongly-interacting systems have become increasingly varied and sophisticated. This book provides a comprehensive introduction to state-of-the-art quantum Monte Carlo techniques relevant for applications in correlated systems. It provides a clear overview of variational wave functions and features a detailed presentation of stochastic sampling, including Markov chains and Langevin dynamics, which is developed into a discussion of Monte Carlo methods. The variational technique is described, from foundations to a detailed description of its algorithms. Further topics discussed include optimisation techniques, real-time dynamics and projection methods, including Green's function, reptation and auxiliary-field Monte Carlo, from basic definitions to advanced algorithms for efficient codes, and the book concludes with recent developments on the continuum space. Quantum Monte Carlo Approaches for Correlated Systems provides an extensive reference…

  1. Non statistical Monte-Carlo

    International Nuclear Information System (INIS)

    Mercier, B.

    1985-04-01

    We have shown that the transport equation can be solved with particles, as in the Monte Carlo method, but without random numbers. In the Monte Carlo method, particles are created from the source and are followed from collision to collision until either they are absorbed or they leave the spatial domain. In our method, particles are created from the original source with a variable weight taking into account both collision and absorption. These particles are followed until they leave the spatial domain, and we use them to determine a first collision source. Another set of particles is then created from this first collision source and tracked to determine a second collision source, and so on. This process introduces an approximation which does not exist in the Monte Carlo method. However, we have analyzed the effect of this approximation and shown that it can be limited. Our method is deterministic and gives reproducible results. Furthermore, when extra accuracy is needed in some region, it is easier to get more particles to go there. It has the same kind of applications: problems where streaming is dominant rather than collision-dominated problems.

  2. The LHCb Run Control

    CERN Document Server

    Alessio, F; Callot, O; Duval, P-Y; Franek, B; Frank, M; Galli, D; Gaspar, C; v Herwijnen, E; Jacobsson, R; Jost, B; Neufeld, N; Sambade, A; Schwemmer, R; Somogyi, P

    2010-01-01

    LHCb has designed and implemented an integrated Experiment Control System. The Control System uses the same concepts and the same tools to control and monitor all parts of the experiment: the Data Acquisition System, the Timing and the Trigger Systems, the High Level Trigger Farm, the Detector Control System, the Experiment's Infrastructure and the interaction with the CERN Technical Services and the Accelerator. LHCb's Run Control, the main interface used by the experiment's operator, provides access in a hierarchical, coherent and homogeneous manner to all areas of the experiment and to all its sub-detectors. It allows for automated (or manual) configuration and control, including error recovery, of the full experiment in its different running modes. Different instances of the same Run Control interface are used by the various sub-detectors for their stand-alone activities: test runs, calibration runs, etc. The architecture and the tools used to build the control system, the guidelines and components provided to the developers, as well as the first experience with the usage of the Run Control will be presented.

  3. MC-TESTER v. 1.23: A universal tool for comparisons of Monte Carlo predictions for particle decays in high energy physics

    Science.gov (United States)

    Davidson, N.; Golonka, P.; Przedziński, T.; Wąs, Z.

    2011-03-01

    Theoretical predictions in high energy physics are routinely provided in the form of Monte Carlo generators. Comparisons of predictions from different programs and/or different initialization set-ups are often necessary. MC-TESTER can be used for such tests of decays of intermediate states (particles or resonances) in a semi-automated way. Since 2002 new functionalities have been introduced into the package. In particular, it now works with the HepMC event record, the standard for C++ programs. The complete set-up for benchmarking the interfaces, such as the interface between τ-lepton production and decay, including QED bremsstrahlung effects, is shown. The example is chosen to illustrate the new options introduced into the program. From the technical perspective, our paper documents software updates and supplements previous documentation. As in the past, our test consists of two steps. Distinct Monte Carlo programs are run separately; events with decays of a chosen particle are searched for, and information is stored by MC-TESTER. Then, at the analysis step, information from a pair of runs may be compared and represented in the form of tables and plots. Updates introduced in the program up to version 1.24.4 are also documented. In particular, new configuration scripts and a script to combine results from a multitude of runs into a single information file for the analysis step are explained.
    Program summary:
    Program title: MC-TESTER, version 1.23 and version 1.24.4
    Catalog identifier: ADSM_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSM_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 250 548
    No. of bytes in distributed program, including test data, etc.: 4 290 610
    Distribution format: tar.gz
    Programming language: C++, FORTRAN77
    Tested and compiled with: gcc 3.4.6, 4…

  4. The NLstart2run study: Incidence and risk factors of running-related injuries in novice runners.

    Science.gov (United States)

    Kluitenberg, B; van Middelkoop, M; Smits, D W; Verhagen, E; Hartgens, F; Diercks, R; van der Worp, H

    2015-10-01

    Running is a popular form of physical activity, despite the high incidence of running-related injuries (RRIs). Because of methodological issues, the etiology of RRIs remains unclear. Therefore, the purposes of the study were to assess the incidence of RRIs and to identify risk factors for RRIs in a large group of novice runners. In total, 1696 runners of a 6-week supervised "Start to Run" program were included in the NLstart2run study. All participants were aged between 18 and 65, completed a baseline questionnaire that covered potential risk factors, and completed at least one running diary. RRIs were registered during the program with a weekly running log. An RRI was defined as a musculo-skeletal complaint of the lower extremity or back attributed to running and hampering running ability for three consecutive training sessions. During the running program, 10.9% of the runners sustained an RRI. The multivariable Cox regression analysis showed that higher age, higher BMI, previous musculo-skeletal complaints not attributed to sports, and no previous running experience were related to RRI. These findings indicate that many novice runners participating in a short-term running program suffer from RRIs. Therefore, the identified risk factors should be considered for screening and prevention purposes. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
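
    A multivariable Cox regression of this kind can be set up in a few lines. The sketch below uses the lifelines package on synthetic data (the study's data are not reproduced here); the covariate names mirror the risk factors reported above and the effect sizes are invented.

      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter

      rng = np.random.default_rng(5)
      n = 500
      df = pd.DataFrame({
          "age": rng.integers(18, 65, n),
          "bmi": rng.normal(25, 4, n),
          "prev_complaints": rng.integers(0, 2, n),
          "no_experience": rng.integers(0, 2, n),
      })
      # synthetic injury times over a 6-week (42-day) program
      risk = 0.02 * df["age"] + 0.05 * df["bmi"] + 0.5 * df["prev_complaints"]
      t = rng.exponential(200 / np.exp(risk - risk.mean()))
      df["duration"] = np.minimum(t, 42)
      df["injured"] = (t <= 42).astype(int)      # censored at end of program

      cph = CoxPHFitter()
      cph.fit(df, duration_col="duration", event_col="injured")
      cph.print_summary()                        # hazard ratios per risk factor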

  5. The vector and parallel processing of MORSE code on Monte Carlo Machine

    International Nuclear Information System (INIS)

    Hasegawa, Yukihiro; Higuchi, Kenji.

    1995-11-01

    The multigroup Monte Carlo particle transport code MORSE has been modified for high-performance computing on the Monte Carlo machine Monte-4, and the method and results are described. Monte-4 was specially developed to realize high-performance computing of Monte Carlo codes for particle transport, which have been difficult to run at high performance in vector processing on conventional vector processors. Monte-4 has four vector processor units with special hardware called Monte Carlo pipelines. The vectorization and parallelization of the MORSE code and the performance evaluation on Monte-4 are described. (author)

  6. Developing concepts for improved efficiency of robot work preparation

    OpenAIRE

    Essers, M.S.; Vaneker, Thomas H.J.

    2013-01-01

    SInBot [1] is a large research project that focuses on maximizing the efficient use of mobile industrial robots during medium-sized production runs. The system described in this paper focuses on the development and validation of concepts for efficient work preparation for cells of intelligent mobile robots that execute medium-sized production runs. For a wide range of products, the machining tasks will be defined at an appropriate level, enabling control over the robots' behaviour...

  7. Study of very forward jets at 13 TeV with the CASTOR calorimeter of CMS

    Energy Technology Data Exchange (ETDEWEB)

    Baur, Sebastian; Akbiyik, Melike; Baus, Colin; Katkov, Igor; Ulrich, Ralf; Woehrmann, Hauke [Institut fuer Experimentelle Kernphysik, Karlsruher Institut fuer Technologie (Germany)

    2015-07-01

    The CASTOR calorimeter of CMS measures QCD jets at pseudorapidities -5.2 ≥ η ≥ -6.6. Owing to this unique very forward acceptance, such data are very discriminating for hadronic event generators, in particular because values of Bjorken-x down to 10⁻⁶ are probed. In preparation for the upcoming LHC Run 2 at √s = 13 TeV, a Monte Carlo study of such jets is presented, including full detector simulation with GEANT4. We investigate methods of data- and Monte Carlo-driven jet energy corrections. First results of p_T-balancing and detector unfolding are presented.

  8. Running: Improving Form to Reduce Injuries.

    Science.gov (United States)

    2015-08-01

    Running is often perceived as a good option for "getting into shape," with little thought given to the form, or mechanics, of running. However, as many as 79% of all runners will sustain a running-related injury during any given year. If you are a runner-casual or serious-you should be aware that poor running mechanics may contribute to these injuries. A study published in the August 2015 issue of JOSPT reviewed the existing research to determine whether running mechanics could be improved, which could be important in treating running-related injuries and helping injured runners return to pain-free running.

  9. Transport of mass goods on the top run and bottom run of belt conveyors

    Energy Technology Data Exchange (ETDEWEB)

    Zimmermann, D

    1977-06-01

    For combined coal winning from the collieries 'General Blumenthal' and 'Ewald Fortsetzung', a large belt conveyor plant was put into operation which is able to transport 1360 tons/h in the top run and 300 tons/h of dirt in the bottom run. The different types of coal are transported separately in intermittent operation with the aid of bunker systems connected to the front and rear of the belt conveyor. Persons can be transported in the top run as well as in the bottom run.

  10. LHCb silicon detectors: the Run 1 to Run 2 transition and first experience of Run 2

    CERN Document Server

    Rinnert, Kurt

    2015-01-01

    LHCb is a dedicated experiment to study New Physics in the decays of heavy hadrons at the Large Hadron Collider (LHC) at CERN. The detector includes a high-precision tracking system consisting of a silicon-strip vertex detector (VELO) surrounding the pp interaction region, a large-area silicon-strip detector located upstream of a dipole magnet (TT), and three stations of silicon-strip detectors (IT) and straw drift tubes (OT) placed downstream. The operational transition of the silicon detectors VELO, TT and IT from LHC Run 1 to Run 2 and first Run 2 experiences will be presented. During the long shutdown of the LHC, the silicon detectors were maintained in a safe state and operated regularly to validate changes in the control infrastructure, new operational procedures, updates to the alarm systems and monitoring software. In addition, there have been some infrastructure-related challenges due to maintenance performed in the vicinity of the silicon detectors that will be discussed. The LHCb silicon dete...

  11. Progression in Running Intensity or Running Volume and the Development of Specific Injuries in Recreational Runners: Run Clever, a Randomized Trial Using Competing Risks.

    Science.gov (United States)

    Ramskov, Daniel; Rasmussen, Sten; Sørensen, Henrik; Parner, Erik Thorlund; Lind, Martin; Nielsen, Rasmus

    2018-06-12

    Study Design: Randomized clinical trial, etiology. Background: Training intensity and volume have been proposed to be associated with specific running-related injuries. If such an association exists, secondary preventive measures could be initiated by clinicians based on symptoms of a specific injury diagnosis. Objectives: To test the following hypotheses: (i) a running schedule focusing on intensity will increase the risk of sustaining Achilles tendinopathy, gastrocnemius injuries and plantar fasciitis compared with hypothesized volume-related injuries; (ii) a running schedule focusing on running volume will increase the risk of sustaining patellofemoral pain syndrome, iliotibial band syndrome and patellar tendinopathy compared with hypothesized intensity-related injuries. Methods: Healthy recreational runners were included in a 24-week follow-up, divided into 8 weeks of preconditioning and 16 weeks of specific focus training. Participants were randomized to one of two running schedules: Schedule Intensity (Sch-I) or Schedule Volume (Sch-V). Sch-I progressed the amount of high-intensity running (≥88% VO2max) each week; Sch-V progressed total weekly running volume. A global positioning system (GPS) watch or smartphone collected data on running. Running-related injuries were diagnosed based on a clinical examination. Estimates were risk differences (RD) with 95% CI. Results: Of 447 runners, a total of 80 sustained an injury (Sch-I n = 36; Sch-V n = 44). Risks of intensity injuries in Sch-I were: RD(2 weeks) = -0.8% [-5.0; 3.4]; RD(4 weeks) = -0.8% [-6.7; 5.1]; RD(8 weeks) = -2.0% [-9.2; 5.1]; RD(16 weeks) = -5.1% [-16.5; 6.3]. Risks of volume injuries in Sch-V were: RD(2 weeks) = -0.9% [-5.0; 3.2]; RD(4 weeks) = -2.0% [-7.5; 3.5]; RD(8 weeks) = -3.2% [-9.1; 2.7]; RD(16 weeks) = -3.4% [-13.2; 6.2]. Conclusion: No difference in risk of hypothesized intensity- and volume-specific running-related injuries exists between running schedules focused on progression in either running intensity or volume. Level of Evidence: Etiology, level 1.

  12. Running Boot Camp

    CERN Document Server

    Toporek, Chuck

    2008-01-01

    When Steve Jobs jumped on stage at Macworld San Francisco 2006 and announced the new Intel-based Macs, the question wasn't if, but when someone would figure out a hack to get Windows XP running on these new "Mactels." Enter Boot Camp, a new system utility that helps you partition and install Windows XP on your Intel Mac. Boot Camp does all the heavy lifting for you. You won't need to open the Terminal and hack on system files or wave a chicken bone over your iMac to get XP running. This free program makes it easy for anyone to turn their Mac into a dual-boot Windows/OS X machine. Running Bo

  13. Monte Carlo modelling of TRIGA research reactor

    Science.gov (United States)

    El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.

    2010-10-01

    The Moroccan 2 MW TRIGA MARK II research reactor at the Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes for use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparison with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous-energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents in detail all components of the core with literally no physical approximation. Continuous-energy cross-section data from the more recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3), as well as S(α,β) thermal neutron scattering functions distributed with the MCNP code, were used. The cross-section libraries were generated using the NJOY99 system updated to its most recent patch file "up259". The consistency and accuracy of both the Monte Carlo simulation and the neutron transport physics were established by benchmarking against the TRIGA experiments. Core excess reactivity, total and integral control rod worths, as well as power peaking factors were used in the validation process. Results of the calculations are analysed and discussed.

  14. Split-phase motor running as capacitor starts motor and as capacitor run motor

    Directory of Open Access Journals (Sweden)

    Yahaya Asizehi ENESI

    2016-07-01

    In this paper, the input parameters of a single-phase split-phase induction motor are used to investigate the output performance characteristics of capacitor-start and capacitor-run induction motors. The values of these input parameters are used in the design characteristics of the capacitor-run and capacitor-start motor, with each motor connected to a rated or standard capacitor in series with its auxiliary (starting) winding for the normal operational condition. The magnitudes of capacitance that develop maximum torque in the capacitor-start motor and the capacitor-run motor are investigated and determined by simulation. Each of these capacitors is connected to the auxiliary winding of the split-phase motor, thereby transforming it into a capacitor-start or capacitor-run motor. The starting current and starting torque of the split-phase motor (SPM), capacitor-run motor (CRM) and capacitor-start motor (CSM) are compared for their suitability in operational performance and applications.

  15. Design of ProjectRun21

    DEFF Research Database (Denmark)

    Damsted, Camma; Parner, Erik Thorlund; Sørensen, Henrik

    2017-01-01

    BACKGROUND: Participation in half-marathons has been steeply increasing during the past decade, and a vast number of half-marathon running schedules has surfaced. Unfortunately, the injury incidence proportion for half-marathoners has been found to exceed 30% during 1-year follow-up. The majority of running-related injuries are suggested to develop as overuse injuries, which lead to injury if the cumulative training load over one or more training sessions exceeds the runner's load capacity for adaptive tissue repair. Owing to an increase of load capacity along with adaptive running… the association between running experience or running pace and the risk of running-related injury. METHODS: Healthy runners between 18 and 65 years of age using a Global Positioning System (GPS) watch will be invited to participate in this 14-week prospective cohort study. Runners will be allowed to self-select one…

  16. Monte Carlo techniques for analyzing deep penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.; Gonnord, J.; Hendricks, J.S.

    1985-01-01

    A review of current methods and difficulties in Monte Carlo deep-penetration calculations is presented. Statistical uncertainty is discussed, and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing is reviewed. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multi-group Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications

  17. The Relationship between Running Velocity and the Energy Cost of Turning during Running

    Science.gov (United States)

    Hatamoto, Yoichi; Yamada, Yosuke; Sagayama, Hiroyuki; Higaki, Yasuki; Kiyonaga, Akira; Tanaka, Hiroaki

    2014-01-01

    Ball game players frequently perform changes of direction (CODs) while running; however, there has been little research on the physiological impact of CODs. In particular, the effect of running velocity on the physiological and energy demands of CODs while running has not been clearly determined. The purpose of this study was to examine the relationship between running velocity and the energy cost of a 180° COD and to quantify that energy cost. Nine male university students (aged 18–22 years) participated in the study. Five shuttle trials were performed in which the subjects were required to run at different velocities (3, 4, 5, 6, 7, and 8 km/h). Each trial consisted of four stages with different turn frequencies (13, 18, 24 and 30 per minute), and each stage lasted 3 minutes. Oxygen consumption was measured during the trial. The energy cost of a COD significantly increased with running velocity (except between 7 and 8 km/h, p = 0.110). The relationship between running velocity and the energy cost of a 180° COD is best represented by a quadratic function (y = -0.012 + 0.066x + 0.008x², r = 0.994, p = 0.001), but is also well represented by a linear function (y = -0.228 + 0.152x, r = 0.991, p < 0.001). These results indicate that even low running velocities impose relatively high physiological demands when the COD frequency increases, and that running velocity affects the physiological demands of CODs. These results also showed that the energy expenditure of a COD can be evaluated using only two data points. These results may be useful for estimating the energy expenditure of players during a match and designing shuttle exercise training programs. PMID:24497913
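
    The model comparison reported above is a standard polynomial regression. The sketch below regenerates illustrative data from the quoted quadratic coefficients (plus a little noise) and fits both a linear and a quadratic model with numpy; the scatter level is invented.

      import numpy as np

      v = np.array([3, 4, 5, 6, 7, 8], dtype=float)       # running velocity, km/h
      cost = -0.012 + 0.066 * v + 0.008 * v**2            # quoted quadratic model
      cost += np.random.default_rng(6).normal(0, 0.01, v.size)  # small scatter

      for deg in (1, 2):                                  # linear vs quadratic fit
          coef = np.polyfit(v, cost, deg)
          pred = np.polyval(coef, v)
          r = np.corrcoef(cost, pred)[0, 1]
          print(deg, np.round(coef, 4), round(r, 4))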

  19. Run-to-Run Optimization Control Within Exact Inverse Framework for Scan Tracking.

    Science.gov (United States)

    Yeoh, Ivan L; Reinhall, Per G; Berg, Martin C; Chizeck, Howard J; Seibel, Eric J

    2017-09-01

    A run-to-run optimization controller uses a reduced set of measurement parameters, in comparison to more general feedback controllers, to converge to the best control point for a repetitive process. A new run-to-run optimization controller is presented for the scanning fiber device used for image acquisition and display. This controller utilizes very sparse measurements to estimate a system energy measure and updates the input parameterizations iteratively within a feedforward, exact-inversion framework. Analysis, simulation, and experimental investigations on the scanning fiber device demonstrate improved scan accuracy over previous methods and automatic controller adaptation to changing operating temperature. A specific application example and quantitative error analyses are provided for a scanning fiber endoscope that maintains high image quality continuously across a 20 °C temperature rise without interruption of the 56 Hz video.
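
    The run-to-run idea (use a sparse scalar measurement from each repetition to nudge the input parameterization toward the optimum) can be caricatured in a few lines. This is a generic sketch with a stand-in quadratic plant, a finite-difference update and an assumed slow thermal drift, not the authors' exact-inversion controller:

```python
import random

random.seed(0)

def process_energy(u, drift):
    """Stand-in plant: a quadratic energy bowl whose optimum drifts with temperature."""
    u_opt = 1.0 + drift
    return (u - u_opt) ** 2 + 0.001 * random.gauss(0.0, 1.0)

u, step = 0.0, 0.2
for k in range(40):
    drift = 0.01 * k                   # assumed slow temperature drift between runs
    # finite-difference gradient estimate from two sparse measurements
    g = (process_energy(u + 1e-2, drift) - process_energy(u - 1e-2, drift)) / 2e-2
    u -= step * g                      # run-to-run update of the input parameter
    if k % 10 == 0:
        print(f"run {k:2d}: u = {u:+.3f}, target = {1.0 + drift:+.3f}")
```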

  20. Improved Monte Carlo modelling of multi-energy γ-rays penetration through thick stratified shielding slabs

    International Nuclear Information System (INIS)

    Bakos, G.C.

    2001-01-01

    This paper deals with the application of the Monte Carlo method to the calculation of the dose build-up factor for mixed 1.37 and 2.75 MeV γ-rays penetrating stratified shielding slabs. Six double-layer shielding slabs, namely 12Al+Fe, 12Al+Pb, 6Fe+Al, 6Fe+Pb, 4Pb+Al and 4Pb+Fe, were examined. Furthermore, experimental and theoretical results are also presented. The experimental results were taken from the experimental facility installed at the Universities Research Reactor Centre (Risley, UK). An activated Na2SO3 solution provided a uniform Na-24 disc source of γ-rays at both energies (1.37 and 2.75 MeV) with equal intensity. The theoretical results were calculated using the Bowman and Trubey formula, which takes into account an exponentially decaying function of the shield thickness (in mfp) up to the end point of the multi-layer slab. The experimental and theoretical results were used to evaluate the simulation results produced by a Monte Carlo program (the DUTMONCA code) developed at the Democritus University of Thrace (Xanthi, Greece). The DUTMONCA code was written in Pascal and run on an Intel PIII-800 microprocessor. The code, an improved version of an existing Monte Carlo program, produces good results for thick shielding slabs, overcoming the problems encountered with the older version. The simulation results are compared with the experimental and theoretical results; good agreement is observed, even for thick-layer shielding slabs, although there are some wayward experimental values due to sources of error associated with the experimental procedure
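
    For orientation, the quantity at stake is the dose build-up factor B, defined so that the dose behind a shield equals the uncollided exponential attenuation multiplied by B. A minimal sketch under stated assumptions (a single attenuation coefficient and a crude linear build-up, standing in for the Bowman-Trubey formula and the DUTMONCA results, which the record does not quote):

```python
import math

MU_PB = 0.55            # assumed linear attenuation coefficient of Pb at 1.37 MeV, 1/cm

def dose_behind_slab(d0, mu, t_cm, buildup):
    """Uncollided exponential attenuation scaled by the dose build-up factor B."""
    return d0 * buildup * math.exp(-mu * t_cm)

t = 4.0                 # slab thickness in cm (illustrative)
mfp = MU_PB * t         # thickness expressed in mean free paths
B = 1.0 + 0.8 * mfp     # crude assumed build-up; real B comes from fits or Monte Carlo
print(f"relative dose behind slab: {dose_behind_slab(1.0, MU_PB, t, B):.3f}")
```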

  1. The effectiveness of a preconditioning programme on preventing running-related injuries in novice runners: a randomised controlled trial.

    Science.gov (United States)

    Bredeweg, Steef W; Zijlstra, Sjouke; Bessem, Bram; Buist, Ida

    2012-09-01

    There is no consensus on the aetiology and prevention of running-related injuries in runners. Preconditioning studies among different athlete populations show positive effects on the incidence of sports injuries. The hypothesis was that a 4-week preconditioning programme in novice runners would reduce the incidence of running-related injuries. Randomised controlled clinical trial; level of evidence, 1. Novice runners (N=432) prepared for a four-mile recreational running event. Participants were allocated to the 4-week preconditioning (PRECON) group (N=211) or the control group (N=221). The PRECON group started a 4-week training programme, prior to the running programme, with walking and hopping exercises. After the 4-week period both groups started a 9-week running programme. In both groups information was registered on running exposure and running-related injuries (RRIs) using an internet-based running log. The primary outcome measure was RRIs per 100 runners. An RRI was defined as any musculoskeletal complaint of the lower extremity or lower back causing restriction of running for at least a week. The incidence of RRIs was 15.2% in the PRECON group and 16.8% in the control group. The difference in RRIs between the groups was not significant (χ² = 0.161, df = 1, p = 0.69). This prospective study demonstrated that a 4-week PRECON programme with walking and hopping exercises had no influence on the incidence of RRIs in novice runners.

  2. Western diet increases wheel running in mice selectively bred for high voluntary wheel running.

    Science.gov (United States)

    Meek, T H; Eisenmann, J C; Garland, T

    2010-06-01

    Mice from a long-term selective breeding experiment for high voluntary wheel running offer a unique model to examine the contributions of genetic and environmental factors in determining the aspects of behavior and metabolism relevant to body-weight regulation and obesity. Starting with generation 16 and continuing through to generation 52, mice from the four replicate high runner (HR) lines have run 2.5-3-fold more revolutions per day as compared with four non-selected control (C) lines, but the nature of this apparent selection limit is not understood. We hypothesized that it might involve the availability of dietary lipids. Wheel running, food consumption (Teklad Rodent Diet (W) 8604, 14% kJ from fat; or Harlan Teklad TD.88137 Western Diet (WD), 42% kJ from fat) and body mass were measured over 1-2-week intervals in 100 males for 2 months starting 3 days after weaning. WD was obesogenic for both HR and C, significantly increasing both body mass and retroperitoneal fat pad mass, the latter even when controlling statistically for wheel-running distance and caloric intake. The HR mice had significantly less fat than C mice, explainable statistically by their greater running distance. On adjusting for body mass, HR mice showed higher caloric intake than C mice, also explainable by their higher running. Accounting for body mass and running, WD initially caused increased caloric intake in both HR and C, but this effect was reversed during the last four weeks of the study. Western diet had little or no effect on wheel running in C mice, but increased revolutions per day by as much as 75% in HR mice, mainly through increased time spent running. The remarkable stimulation of wheel running by WD in HR mice may involve fuel usage during prolonged endurance exercise and/or direct behavioral effects on motivation. Their unique behavioral responses to WD may render HR mice an important model for understanding the control of voluntary activity levels.

  3. Utilization of Point Clouds Characteristics in Interpretation and Evaluation Geophysical Resistivity Surveying of Unstable Running Block

    Directory of Open Access Journals (Sweden)

    Marcel Brejcha

    2016-05-01

    Full Text Available Close to human residences there are often places where anthropogenic activity and external factors cause changes. These changes can create hazardous areas that affect the lives of nearby inhabitants. The successful design of measures to ensure the stability of unstable running blocks depends on the approaches chosen and on the preparation of primary source data, which nowadays calls for modern technologies in both data acquisition and processing. The paper describes the acquisition and processing of data for the project "Stabilization of an unstable running block" in a built-up municipal area, making efficient use of an unusual method of processing geophysical resistivity measurements.

  4. The ATLAS Muon Trigger Performance : Run 1 and initial Run 2.

    CERN Document Server

    Kasahara, Kota; The ATLAS collaboration

    2015-01-01

Events with muons in the final state are an important signature for many physics topics at the Large Hadron Collider (LHC). An efficient trigger on muons and a detailed understanding of its performance are required. In 2012, the last year of Run 1, the instantaneous luminosity of the LHC reached 7.7×10^33 cm⁻²s⁻¹ and the average number of interactions occurring in the same bunch crossing was 25. The ATLAS muon trigger has successfully adapted to this changing environment by making use of isolation requirements, combined trigger signatures with electron and jet trigger objects, and by using so-called full-scan triggers, which make use of the full event information to search for di-lepton signatures, seeded by single lepton objects. A stable and highly efficient muon trigger was vital in the discovery of the Higgs boson in 2012 and for many searches for new physics. 
The performance of muon triggers during the LHC Run 1 data-taking campaigns i...

  5. RUN COORDINATION

    CERN Multimedia

    Christophe Delaere

    2013-01-01

    The focus of Run Coordination during LS1 is to monitor closely the advance of maintenance and upgrade activities, to smooth interactions between subsystems and to ensure that all are ready in time to resume operations in 2015 with a fully calibrated and understood detector. After electricity and cooling were restored to all equipment, at about the time of the last CMS week, recommissioning activities were resumed for all subsystems. On 7 October, DCS shifts began 24/7 to allow subsystems to remain on to facilitate operations. That culminated with the Global Run in November (GriN), which took place as scheduled during the week of 4 November. The GriN has been the first centrally managed operation since the beginning of LS1, and involved all subdetectors except the Pixel Tracker, which is presently in a lab upstairs. All nights were therefore dedicated to long stable runs with as many subdetectors as possible. Among the many achievements in that week, three items may be highlighted. First, the Strip...

  6. RUN COORDINATION

    CERN Multimedia

    M. Chamizo

    2012-01-01

      On 17th January, as soon as the services were restored after the technical stop, sub-systems started powering on. Since then, we have been running 24/7 with reduced shift crew — Shift Leader and DCS shifter — to allow sub-detectors to perform calibration, noise studies, test software upgrades, etc. On 15th and 16th February, we had the first Mid-Week Global Run (MWGR) with the participation of most sub-systems. The aim was to bring CMS back to operation and to ensure that we could run after the winter shutdown. All sub-systems participated in the readout and the trigger was provided by a fraction of the muon systems (CSC and the central RPC wheel). The calorimeter triggers were not available due to work on the optical link system. Initial checks of different distributions from Pixels, Strips, and CSC confirmed things look all right (signal/noise, number of tracks, phi distribution…). High-rate tests were done to test the new CSC firmware to cure the low efficiency ...

  7. Biases in Monte Carlo eigenvalue calculations

    Energy Technology Data Exchange (ETDEWEB)

    Gelbard, E.M.

    1992-12-01

    The Monte Carlo method has been used for many years to analyze the neutronics of nuclear reactors. In fact, as the power of computers has increased the importance of Monte Carlo in neutronics has also increased, until today this method plays a central role in reactor analysis and design. Monte Carlo is used in neutronics for two somewhat different purposes, i.e., (a) to compute the distribution of neutrons in a given medium when the neutron source-density is specified, and (b) to compute the neutron distribution in a self-sustaining chain reaction, in which case the source is determined as the eigenvector of a certain linear operator. In (b), then, the source is not given, but must be computed. In the first case (the "fixed-source" case) the Monte Carlo calculation is unbiased. That is to say that, if the calculation is repeated ("replicated") over and over, with independent random number sequences for each replica, then averages over all replicas will approach the correct neutron distribution as the number of replicas goes to infinity. Unfortunately, the computation is not unbiased in the second case, which we discuss here.

  8. Biases in Monte Carlo eigenvalue calculations

    Energy Technology Data Exchange (ETDEWEB)

    Gelbard, E.M.

    1992-01-01

    The Monte Carlo method has been used for many years to analyze the neutronics of nuclear reactors. In fact, as the power of computers has increased the importance of Monte Carlo in neutronics has also increased, until today this method plays a central role in reactor analysis and design. Monte Carlo is used in neutronics for two somewhat different purposes, i.e., (a) to compute the distribution of neutrons in a given medium when the neutron source-density is specified, and (b) to compute the neutron distribution in a self-sustaining chain reaction, in which case the source is determined as the eigenvector of a certain linear operator. In (b), then, the source is not given, but must be computed. In the first case (the "fixed-source" case) the Monte Carlo calculation is unbiased. That is to say that, if the calculation is repeated ("replicated") over and over, with independent random number sequences for each replica, then averages over all replicas will approach the correct neutron distribution as the number of replicas goes to infinity. Unfortunately, the computation is not unbiased in the second case, which we discuss here.

  9. Biases in Monte Carlo eigenvalue calculations

    International Nuclear Information System (INIS)

    Gelbard, E.M.

    1992-01-01

    The Monte Carlo method has been used for many years to analyze the neutronics of nuclear reactors. In fact, as the power of computers has increased the importance of Monte Carlo in neutronics has also increased, until today this method plays a central role in reactor analysis and design. Monte Carlo is used in neutronics for two somewhat different purposes, i.e., (a) to compute the distribution of neutrons in a given medium when the neutron source-density is specified, and (b) to compute the neutron distribution in a self-sustaining chain reaction, in which case the source is determined as the eigenvector of a certain linear operator. In (b), then, the source is not given, but must be computed. In the first case (the "fixed-source" case) the Monte Carlo calculation is unbiased. That is to say that, if the calculation is repeated ("replicated") over and over, with independent random number sequences for each replica, then averages over all replicas will approach the correct neutron distribution as the number of replicas goes to infinity. Unfortunately, the computation is not unbiased in the second case, which we discuss here.
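
    The eigenvalue bias discussed in these records can be explored with a toy two-region power iteration: k is estimated each generation as the ratio of child to parent populations, and the source is renormalized by resampling a fixed number of sites. With a small population per generation, the renormalization correlates successive generations and the long-run k estimate picks up a bias of order 1/N. A sketch under these assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Mean number of fission children born in each region per parent in each
# region (a two-point toy reactor). The true k is the dominant eigenvalue.
F = np.array([[0.9, 0.3],
              [0.2, 0.8]])
k_true = np.max(np.linalg.eigvals(F).real)

def mc_power_iteration(n_source, generations=3000, skip=500):
    src = np.array([n_source // 2, n_source - n_source // 2])
    ks = []
    for _ in range(generations):
        children = rng.poisson(F @ src)      # Poisson birth counts per region
        total = children.sum()
        if total == 0:                       # restart on (rare) extinction
            src = np.array([n_source // 2, n_source - n_source // 2])
            continue
        ks.append(total / src.sum())         # generation k estimate
        # renormalize: resample exactly n_source source sites from the children
        src = rng.multinomial(n_source, children / total)
    return np.mean(ks[skip:])

print(f"true k = {k_true:.4f}")
for n in (10, 100, 1000):
    print(f"N = {n:5d}: <k> = {mc_power_iteration(n):.4f}")
```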

  10. The NLstart2run study : Economic burden of running-related injuries in novice runners participating in a novice running program

    NARCIS (Netherlands)

    Hespanhol, Luiz C.; Huisstede, Bionka M. A.; Smits, Dirk-Wouter; Kluitenberg, Bas; van der Worp, Henk; van Middelkoop, Marienke; Hartgens, Fred; Verhagen, Evert

    2016-01-01

    Objectives: To investigate the economic burden of running-related injuries (RRI) occurred during the 6-week 'Start-to-Run' program of the Dutch Athletics Federation in 2013. Design: Prospective cohort study. Methods: This was a monetary cost analysis using the data prospectively gathered alongside

  11. The Effects of Backwards Running Training on Forward Running Economy in Trained Males.

    Science.gov (United States)

    Ordway, Jason D; Laubach, Lloyd L; Vanderburgh, Paul M; Jackson, Kurt J

    2016-03-01

    Backwards running (BR) results in greater cardiopulmonary response and muscle activity compared with forward running (FR). BR has traditionally been used in rehabilitation for disorders such as stroke and lower leg extremity injuries, as well as in short bursts during various athletic events. The aim of this study was to measure the effects of sustained backwards running training on forward running economy in trained male athletes. Eight highly trained, male runners (26.13 ± 6.11 years, 174.7 ± 6.4 cm, 68.4 ± 9.24 kg, 8.61 ± 3.21% body fat, 71.40 ± 7.31 ml·kg⁻¹·min⁻¹) trained with BR while harnessed on a treadmill at 161 m·min⁻¹ for 5 weeks following a 5-week BR run-in period at a lower speed (134 m·min⁻¹). Subjects were tested at baseline, postfamiliarization, and post-BR training for body composition, a ramped VO2max test, and an economy test designed for trained male runners. Subjects improved forward running economy by 2.54% (1.19 ± 1.26 ml·kg⁻¹·min⁻¹, p = 0.032) at 215 m·min⁻¹. VO2max, body mass, lean mass, fat mass, and % body fat did not change (p > 0.05). Five weeks of BR training improved FR economy in healthy, trained male runners without altering VO2max or body composition. The improvements observed in this study suggest BR could be a beneficial form of training for an already economical population seeking to improve running economy.

  12. Oxygen transport properties estimation by classical trajectory–direct simulation Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Bruno, Domenico, E-mail: domenico.bruno@cnr.it [Istituto di Metodologie Inorganiche e dei Plasmi, Consiglio Nazionale delle Ricerche– Via G. Amendola 122, 70125 Bari (Italy); Frezzotti, Aldo, E-mail: aldo.frezzotti@polimi.it; Ghiroldi, Gian Pietro, E-mail: gpghiro@gmail.com [Dipartimento di Scienze e Tecnologie Aerospaziali, Politecnico di Milano–Via La Masa 34, 20156 Milano (Italy)

    2015-05-15

    Coupling direct simulation Monte Carlo (DSMC) simulations with classical trajectory calculations is a powerful tool to improve the predictive capabilities of computational dilute gas dynamics. The considerable increase in computational effort outlined in early applications of the method can be compensated by running simulations on massively parallel computers. In particular, Graphics Processing Unit acceleration has been found quite effective in reducing the computing time of classical trajectory (CT)-DSMC simulations. The aim of the present work is to study dilute molecular oxygen flows by modeling binary collisions, in the rigid rotor approximation, through an accurate Potential Energy Surface (PES) obtained from molecular beam scattering. The accuracy of the PES is assessed by calculating molecular oxygen transport properties via different equilibrium and non-equilibrium CT-DSMC based simulations, which yield closely agreeing values. Comparisons with available experimental data are presented and discussed in the temperature range 300–900 K, where vibrational degrees of freedom are expected to play a limited (but not always negligible) role.

  13. MC++: A parallel, portable, Monte Carlo neutron transport code in C++

    International Nuclear Information System (INIS)

    Lee, S.R.; Cummings, J.C.; Nolen, S.D.

    1997-01-01

    MC++ is an implicit multi-group Monte Carlo neutron transport code written in C++ and based on the Parallel Object-Oriented Methods and Applications (POOMA) class library. MC++ runs in parallel on and is portable to a wide variety of platforms, including MPPs, SMPs, and clusters of UNIX workstations. MC++ is being developed to provide transport capabilities to the Accelerated Strategic Computing Initiative (ASCI). It is also intended to form the basis of the first transport physics framework (TPF), which is a C++ class library containing appropriate abstractions, objects, and methods for the particle transport problem. The transport problem is briefly described, as well as the current status and algorithms in MC++ for solving the transport equation. The alpha version of the POOMA class library is also discussed, along with the implementation of the transport solution algorithms using POOMA. Finally, a simple test problem is defined and performance and physics results from this problem are discussed on a variety of platforms

  14. Verification of the Monte Carlo differential operator technique for MCNP™

    International Nuclear Information System (INIS)

    McKinney, G.W.; Iverson, J.L.

    1996-02-01

    The differential operator perturbation technique has been incorporated into the Monte Carlo N-Particle transport code MCNP and will become a standard feature of future releases. This feature includes first and second order terms of the Taylor series expansion for response perturbations related to cross-section data (i.e., density, composition, etc.). Perturbation and sensitivity analyses can benefit from this technique in that predicted changes in one or more tally responses may be obtained for multiple perturbations in a single run. The user interface is intuitive, yet flexible enough to allow for changes in a specific microscopic cross section over a specified energy range. With this technique, a precise estimate of a small change in response is easily obtained, even when the standard deviation of the unperturbed tally is greater than the change. Furthermore, results presented in this report demonstrate that first and second order terms can offer acceptable accuracy, to within a few percent, for up to 20-30% changes in a response
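
    The principle behind the technique (predict a perturbed response from first- and second-order Taylor terms without rerunning the transport calculation) can be mimicked with a stand-in analytic response. Here the derivatives are computed analytically, whereas MCNP accumulates the corresponding terms from track-length scores during the run; the numbers are purely illustrative:

```python
import math

# Stand-in response: transmission through a purely absorbing slab, two mean
# free paths thick at nominal density. rho is a relative density factor.
RHO0 = 1.0

def response(rho):
    return math.exp(-2.0 * rho)

dR = -2.0 * response(RHO0)     # first derivative at RHO0
d2R = 4.0 * response(RHO0)     # second derivative at RHO0

for pct in (5, 10, 20, 30):
    drho = RHO0 * pct / 100.0
    first = response(RHO0) + dR * drho
    second = first + 0.5 * d2R * drho ** 2
    exact = response(RHO0 + drho)
    print(f"{pct:2d}% density increase: 1st-order {first:.4f}, "
          f"1st+2nd {second:.4f}, exact {exact:.4f}")
```

    Running the comparison shows the pattern the record describes: the second-order term keeps the prediction within a few percent even for 20-30% perturbations, where the first-order term alone has begun to drift.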

  15. Progression in Running Intensity or Running Volume and the Development of Specific Injuries in Recreational Runners

    DEFF Research Database (Denmark)

    Ramskov, Daniel; Rasmussen, Sten; Sørensen, Henrik

    2018-01-01

    -training. Participants were randomized to one of two running schedules: Schedule Intensity (Sch-I) or Schedule Volume (Sch-V). Sch-I progressed the amount of high intensity running (≥88% VO2max) each week. Sch-V progressed total weekly running volume. Global positioning system watch or smartphone collected data on running...

  16. Research in heavy ion nuclear reactions

    International Nuclear Information System (INIS)

    Howell, E.H.; Liu, X.T.; Petitt, G.A.; Zhang, Z.

    1994-01-01

    The authors have been involved in several projects during the present contract period. These include participation in the RD93 test run performed last summer at the Alternating Gradient Synchrotron (AGS) at Brookhaven, analysis of the data from this run, Monte Carlo simulations using the GEANT code of the performance of the calorimeter/absorber used in RD45, and simulations of the performance of the muon detector system for the PHENIX detector at RHIC using the PISA code. They have been preparing for tests to be performed this summer at the AGS of a prototype muon identifier using limited streamer tube detectors of the type selected for use in the muon arm of the PHENIX detector at RHIC. They have begun work on Monte Carlo simulations of particle detection in the presence of intense background events for the E864 experiment which is approved for running at the AGS. Finally, the authors have completed their work on leakage from the absorber/calorimeter and have submitted a paper to Nuclear Instruments and Methods

  17. Why forefoot striking in minimal shoes might positively change the course of running injuries

    Directory of Open Access Journals (Sweden)

    Irene S. Davis

    2017-06-01

    Full Text Available It is believed that human ancestors evolved the ability to run bipedally approximately 2 million years ago. This form of locomotion may have been important to our survival and likely has influenced the evolution of our body form. As our bodies have adapted to run, it seems unusual that up to 79% of modern day runners are injured annually. The etiology of these injuries is clearly multifactorial. However, one aspect of running that has significantly changed over the past 50 years is the footwear we use. Modern running shoes have become increasingly cushioned and supportive, and have changed the way we run. In particular, they have altered our footstrike pattern from a predominantly forefoot strike (FFS) landing to a predominantly rearfoot strike (RFS) landing. This change alters the way in which the body is loaded and may be contributing to the high rate of injuries runners experience while engaged in an activity for which they were adapted. In this paper, we will examine the benefits of barefoot running (typically an FFS pattern) and compare the lower extremity mechanics between FFS and RFS. The implications of these mechanical differences, in terms of injury, will be discussed. We will then provide evidence to support our contention that FFS provides an optimal mechanical environment for specific foot and ankle structures, such as the heel pad, the plantar fascia, and the Achilles tendon. The importance of footwear will then be addressed, highlighting its interaction with strike pattern on mechanics. This analysis will underscore why footwear matters when assessing mechanics. Finally, proper preparation and safe transition to an FFS pattern in minimal shoes will be emphasized. Through the discussion of the current literature, we will develop a justification for returning to running in the way for which we were adapted to reduce running-related injuries.

  18. Advanced Computational Methods for Monte Carlo Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-01-12

    This course is intended for graduate students who already have a basic understanding of Monte Carlo methods. It focuses on advanced topics that may be needed for thesis research, for developing new state-of-the-art methods, or for working with modern production Monte Carlo codes.

  19. Monte Carlo Simulation of Ferroelectric Domain Structure and Applied Field Response in Two Dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Potter, Jr., B.G.; Tikare, V.; Tuttle, B.A.

    1999-06-30

    A 2-D, lattice-Monte Carlo approach was developed to simulate ferroelectric domain structure. The model currently utilizes a Hamiltonian for the total energy based only upon electrostatic terms involving dipole-dipole interactions, local polarization gradients and the influence of applied electric fields. The impact of boundary conditions on the domain configurations obtained was also examined. In general, the model exhibits domain structure characteristics consistent with those observed in a tetragonally distorted ferroelectric. The model was also extended to enable the simulation of ferroelectric hysteresis behavior. Simulated hysteresis loops were found to be very similar in appearance to those observed experimentally in actual materials. This qualitative agreement between the simulated hysteresis loop characteristics and real ferroelectric behavior was also confirmed in simulations run over a range of simulation temperatures and applied field frequencies.

  20. PHYSICAL AND CHEMICAL PROPERTIES AND STRUCTURAL-GROUP COMPOSITION OF STRAIGHT-RUN FRACTIONS OF IRAQI OILFIELDS

    Directory of Open Access Journals (Sweden)

    С.В. Бойченко

    2012-10-01

    Full Text Available Iraq is one of the richest countries of the Near East in terms of oil reserves; only Saudi Arabia has greater hydrocarbon reserves. Although commercial oil production has been under way since 1927, the harsh social, political and economic situation, compounded by a long-running military conflict, has largely destroyed the country's oil-processing infrastructure. One of the key tasks for the country is therefore the construction of a new, modern oil-processing infrastructure, and a comprehensive study of the oils and of their straight-run fractions is a prerequisite for preparing refinery projects that meet current stringent requirements. This material contains the results of physical and chemical studies of straight-run oil fractions: gasoline, diesel oil and heavy oils. The properties of the fractions shown below and their potential for export oil products are essential data for preparing the process flow diagram of an oil-processing plant.

  1. Can parallel use of different running shoes decrease running-related injury risk?

    Science.gov (United States)

    Malisoux, L; Ramesh, J; Mann, R; Seil, R; Urhausen, A; Theisen, D

    2015-02-01

    The aim of this study was to determine if runners who use concomitantly different pairs of running shoes are at a lower risk of running-related injury (RRI). Recreational runners (n = 264) participated in this 22-week prospective follow-up and reported all information about their running session characteristics, other sport participation and injuries on a dedicated Internet platform. A RRI was defined as a physical pain or complaint located at the lower limbs or lower back region, sustained during or as a result of running practice and impeding planned running activity for at least 1 day. One-third of the participants (n = 87) experienced at least one RRI during the observation period. The adjusted Cox regression analysis revealed that the parallel use of more than one pair of running shoes was a protective factor [hazard ratio (HR) = 0.614; 95% confidence interval (CI) = 0.389-0.969], while previous injury was a risk factor (HR = 1.722; 95%CI = 1.114-2.661). Additionally, increased mean session distance (km; HR = 0.795; 95%CI = 0.725-0.872) and increased weekly volume of other sports (h/week; HR = 0.848; 95%CI = 0.732-0.982) were associated with lower RRI risk. Multiple shoe use and participation in other sports are strategies potentially leading to a variation of the load applied to the musculoskeletal system. They could be advised to recreational runners to prevent RRI. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
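
    A hedged sketch of the kind of adjusted Cox regression reported above, fitted with the lifelines package on synthetic data; the covariate names, effect sizes and follow-up window are assumptions for illustration, not the study's data:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 200
shoes = rng.integers(0, 2, n)            # 1 = rotates several shoe pairs
prev = rng.integers(0, 2, n)             # 1 = previous injury
# Assumed effects: shoe rotation protective, previous injury harmful.
hazard = 0.02 * np.exp(-0.5 * shoes + 0.5 * prev)   # weekly injury hazard
time = rng.exponential(1.0 / hazard)

df = pd.DataFrame({
    "weeks": np.minimum(time, 22.0),     # administrative censoring at 22 weeks
    "injured": (time < 22.0).astype(int),
    "multiple_shoes": shoes,
    "previous_injury": prev,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="weeks", event_col="injured")
cph.print_summary()   # hazard ratios: HR < 1 indicates a protective factor
```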

  2. The efficacy of downhill running as a method to enhance running economy in trained distance runners.

    Science.gov (United States)

    Shaw, Andrew J; Ingham, Stephen A; Folland, Jonathan P

    2018-06-01

    Running downhill, in comparison to running on the flat, appears to involve an exaggerated stretch-shortening cycle (SSC) due to greater impact loads and higher vertical velocity on landing, whilst also incurring a lower metabolic cost. Therefore, downhill running could facilitate higher volumes of training at higher speeds whilst performing an exaggerated SSC, potentially inducing favourable adaptations in running mechanics and running economy (RE). This investigation assessed the efficacy of a supplementary 8-week programme of downhill running as a means of enhancing RE in well-trained distance runners. Nineteen athletes completed supplementary downhill (-5% gradient; n = 10) or flat (n = 9) run training twice a week for 8 weeks within their habitual training. Participants trained at a standardised intensity based on the velocity of lactate turnpoint (vLTP), with training volume increased incrementally between weeks. Changes in the energy cost of running (EC) and vLTP were assessed on both flat and downhill gradients, in addition to maximal oxygen uptake (V̇O2max). No changes in EC were observed during flat running following downhill (1.22 ± 0.09 vs 1.20 ± 0.07 kcal·kg⁻¹·km⁻¹, P = .41) or flat run training (1.21 ± 0.13 vs 1.19 ± 0.12 kcal·kg⁻¹·km⁻¹). Moreover, no changes in EC during downhill running were observed in either condition (P > .23). vLTP increased following both downhill (16.5 ± 0.7 vs 16.9 ± 0.6 km·h⁻¹, P = .05) and flat run training (16.9 ± 0.7 vs 17.2 ± 1.0 km·h⁻¹, P = .05), though no differences in responses were observed between groups (P = .53). Therefore, a short programme of supplementary downhill run training does not appear to enhance RE in already well-trained individuals.

  3. Migration of Monte Carlo simulation of high energy atmospheric showers to GRID infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Vazquez, Adolfo; Contreras, Jose Luis [Grupo de Altas EnergIas Departamento de Fisica Atomica, Molecular y Nuclear Universidad Complutense de Madrid Avenida Complutense s/n, 28040 Madrid - Spain (Spain); Calle, Ignacio de la; Ibarra, Aitor; Tapiador, Daniel, E-mail: avazquez@gae.ucm.e [INSA. IngenierIa y Servicios Aeroespaciales S.A. Paseo Pintor Rosales 34, 28008 Madrid - Spain (Spain)

    2010-04-01

    A system to run Monte Carlo simulations on a Grid environment is presented. The architectural design proposed uses the current resources of the MAGIC Virtual Organization on EGEE and can be easily generalized to support the simulation of any similar experiment, such as that of the future European planned project, the Cherenkov Telescope Array. The proposed system is based on a Client/Server architecture, and provides the user with a single access point to the simulation environment through a remote graphical user interface, the Client. The Client can be accessed via web browser, using web service technology, with no additional software installation on the user side required. The Server processes the user request and uses a database for both data catalogue and job management inside the Grid. The design, first production tests and lessons learned from the system will be discussed here.

  4. Migration of Monte Carlo simulation of high energy atmospheric showers to GRID infrastructure

    International Nuclear Information System (INIS)

    Vazquez, Adolfo; Contreras, Jose Luis; Calle, Ignacio de la; Ibarra, Aitor; Tapiador, Daniel

    2010-01-01

    A system to run Monte Carlo simulations on a Grid environment is presented. The architectural design proposed uses the current resources of the MAGIC Virtual Organization on EGEE and can be easily generalized to support the simulation of any similar experiment, such as that of the future European planned project, the Cherenkov Telescope Array. The proposed system is based on a Client/Server architecture, and provides the user with a single access point to the simulation environment through a remote graphical user interface, the Client. The Client can be accessed via web browser, using web service technology, with no additional software installation on the user side required. The Server processes the user request and uses a database for both data catalogue and job management inside the Grid. The design, first production tests and lessons learned from the system will be discussed here.

  5. Prospect on general software of Monte Carlo method

    International Nuclear Information System (INIS)

    Pei Lucheng

    1992-01-01

    This is a short paper on the prospects for general-purpose Monte Carlo software. The content covers the cluster sampling method, the zero-variance technique, self-improving methods, and vectorized Monte Carlo methods.

  6. Voluntary Wheel Running in Mice.

    Science.gov (United States)

    Goh, Jorming; Ladiges, Warren

    2015-12-02

    Voluntary wheel running in the mouse is used to assess physical performance and endurance and to model exercise training as a way to enhance health. Wheel running is a voluntary activity in contrast to other experimental exercise models in mice, which rely on aversive stimuli to force active movement. This protocol consists of allowing mice to run freely on the open surface of a slanted, plastic saucer-shaped wheel placed inside a standard mouse cage. Rotations are electronically transmitted to a USB hub so that frequency and rate of running can be captured via a software program for data storage and analysis for variable time periods. Mice are individually housed so that accurate recordings can be made for each animal. Factors such as mouse strain, gender, age, and individual motivation, which affect running activity, must be considered in the design of experiments using voluntary wheel running. Copyright © 2015 John Wiley & Sons, Inc.

  7. Monte Carlo tree search strategies

    OpenAIRE

    VODOPIVEC, TOM

    2018-01-01

    After the breakthrough at the game of Go, Monte Carlo tree search (MCTS) methods triggered rapid progress in game-playing agents: the research community has since developed many variants and improvements of the MCTS algorithm, thereby advancing artificial intelligence not only in games but also in numerous other domains. Although MCTS methods combine the generality of random sampling with the precision of tree search, in practice they can suffer from slow convergence...

  8. Monte Carlo electron/photon transport

    International Nuclear Information System (INIS)

    Mack, J.M.; Morel, J.E.; Hughes, H.G.

    1985-01-01

    A review of nonplasma coupled electron/photon transport using the Monte Carlo method is presented. Remarks are mainly restricted to linearized formalisms at electron energies from 1 keV to 1000 MeV. Applications involving pulse-height estimation, transport in external magnetic fields, and optical Cerenkov production are discussed to underscore the importance of this branch of computational physics. Advances in electron multigroup cross-section generation are reported, and their impact on future code development assessed. Progress toward the transformation of MCNP into a generalized neutral/charged-particle Monte Carlo code is described. 48 refs

  9. Application of numerical methods, derivatives theory and Monte Carlo simulation in evaluating BM&F BOVESPA's POP (Protected and Participative Investment

    Directory of Open Access Journals (Sweden)

    Giuliano Carrozza Uzêda Iorio de Souza

    2011-08-01

    Full Text Available This article presents a practical case in which two of the most efficient numerical procedures developed for derivative analysis are applied to evaluate the POP (Investment Protection with Participation), a structured operation created by the São Paulo Stock Exchange - BM&FBOVESPA. The first procedure solves the differential equation using the implicit finite-difference method. Due to its characteristics, the approach makes it possible to run sensitivity analyses as well as price estimation. In the second, the problem is solved by Monte Carlo simulation, which facilitates the identification of the probability related to the exercise of the embedded options.
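
    The Monte Carlo leg of such an evaluation can be sketched as follows. The payoff below is a generic capital-protected participation structure under geometric Brownian motion, with illustrative parameters rather than the actual POP contract terms:

```python
import math
import random

random.seed(42)

S0, K, r, sigma, T = 100.0, 100.0, 0.10, 0.25, 1.0
participation = 0.7       # assumed share of the upside retained by the investor
n_paths = 100_000

disc = math.exp(-r * T)
drift = (r - 0.5 * sigma ** 2) * T
vol = sigma * math.sqrt(T)

total = 0.0
for _ in range(n_paths):
    # terminal stock price under risk-neutral geometric Brownian motion
    s_t = S0 * math.exp(drift + vol * random.gauss(0.0, 1.0))
    # protected capital plus a participation share of the upside
    total += K + participation * max(s_t - K, 0.0)

print(f"POP value ~= {disc * total / n_paths:.2f}")
```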

  10. How to run 100 meters?

    OpenAIRE

    Aftalion, Amandine

    2016-01-01

    To appear in SIAP. The aim of this paper is to bring a mathematical justification to the optimal way of organizing one's effort when running. It is well known from physiologists that all running exercises of duration less than 3 min are run with a strong initial acceleration and a decelerating end; on the contrary, long races are run with a final sprint. This can be explained using a mathematical model describing the evolution of the velocity, the anaerobic energy, and the propulsive force: ...
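
    A minimal numerical sketch in the spirit of that model: a Keller-type system coupling velocity, anaerobic energy and propulsive force, integrated with Euler steps. All constants are illustrative and not taken from the paper:

```python
# Keller-type sprint model (illustrative constants, per unit mass):
# dv/dt = f - v/TAU,  de/dt = SIGMA - f*v,  with f <= F_MAX while energy remains.
TAU = 1.0      # s, effective friction time constant
F_MAX = 9.0    # m/s^2, maximal propulsive force per unit mass
E0 = 300.0     # initial anaerobic energy budget (arbitrary units)
SIGMA = 40.0   # aerobic energy re-supply rate
DT = 0.01      # s, Euler time step

t, x, v, e = 0.0, 0.0, 0.0, E0
while x < 100.0:
    # full force until the anaerobic budget is spent; afterwards force is
    # capped by the aerobic re-supply, producing the end-of-race deceleration
    f = F_MAX if e > 0.0 else min(F_MAX, SIGMA / max(v, 0.1))
    v += DT * (f - v / TAU)
    e = max(e + DT * (SIGMA - f * v), 0.0)
    x += DT * v
    t += DT

print(f"toy 100 m time: {t:.2f} s, finishing speed {v:.2f} m/s")
```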

  11. Comparison of fractions of inactive modules between Run1 and Run2

    CERN Document Server

    Motohashi, Kazuki; The ATLAS collaboration

    2015-01-01

    Fraction of inactive modules for each component of the ATLAS pixel detector at the end of Run 1 and the beginning of Run 2. A similar plot which uses a result of functionality tests during LS1 can be found in ATL-INDET-SLIDE-2014-388.

  12. Weekly running volume and risk of running-related injuries among marathon runners

    DEFF Research Database (Denmark)

    Rasmussen, Christina Haugaard; Nielsen, R.O.; Juul, Martin Serup

    2013-01-01

    The purpose of this study was to investigate if the risk of injury declines with increasing weekly running volume before a marathon race.

  13. Bayesian Optimal Experimental Design Using Multilevel Monte Carlo

    KAUST Repository

    Ben Issaid, Chaouki; Long, Quan; Scavino, Marco; Tempone, Raul

    2015-01-01

    Experimental design is very important since experiments are often resource-exhaustive and time-consuming. We carry out experimental design in the Bayesian framework. To measure the amount of information that can be extracted from the data in an experiment, we use the expected information gain as the utility function, which specifically is the expected logarithmic ratio between the posterior and prior distributions. Optimizing this utility function enables us to design experiments that yield the most informative data for our purpose. One of the major difficulties in evaluating the expected information gain is that the integral is nested and can be high dimensional. We propose using Multilevel Monte Carlo techniques to accelerate the computation of the nested high dimensional integral. The advantages are twofold. First, Multilevel Monte Carlo can significantly reduce the cost of the nested integral for a given tolerance, by using an optimal sample distribution among different sample averages of the inner integrals. Second, the Multilevel Monte Carlo method imposes fewer assumptions, such as the concentration of measures required by the Laplace method. We test our Multilevel Monte Carlo technique using a numerical example on the design of sensor deployment for a Darcy flow problem governed by a one-dimensional Laplace equation. We also compare the performance of Multilevel Monte Carlo, the Laplace approximation and direct double-loop Monte Carlo.

  14. Bayesian Optimal Experimental Design Using Multilevel Monte Carlo

    KAUST Repository

    Ben Issaid, Chaouki

    2015-01-07

    Experimental design is very important since experiments are often resource-exhaustive and time-consuming. We carry out experimental design in the Bayesian framework. To measure the amount of information that can be extracted from the data in an experiment, we use the expected information gain as the utility function, which specifically is the expected logarithmic ratio between the posterior and prior distributions. Optimizing this utility function enables us to design experiments that yield the most informative data for our purpose. One of the major difficulties in evaluating the expected information gain is that the integral is nested and can be high dimensional. We propose using Multilevel Monte Carlo techniques to accelerate the computation of the nested high dimensional integral. The advantages are twofold. First, Multilevel Monte Carlo can significantly reduce the cost of the nested integral for a given tolerance, by using an optimal sample distribution among different sample averages of the inner integrals. Second, the Multilevel Monte Carlo method imposes fewer assumptions, such as the concentration of measures required by the Laplace method. We test our Multilevel Monte Carlo technique using a numerical example on the design of sensor deployment for a Darcy flow problem governed by a one-dimensional Laplace equation. We also compare the performance of Multilevel Monte Carlo, the Laplace approximation and direct double-loop Monte Carlo.
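
    The telescoping idea behind Multilevel Monte Carlo is easiest to see on a plain SDE example rather than the nested information-gain integral treated here. The sketch below estimates E[S_T] for geometric Brownian motion, coupling each fine path to its coarse partner through shared Brownian increments; the per-level sample counts are arbitrary:

```python
import math
import random

random.seed(7)

S0, r, sigma, T = 1.0, 0.05, 0.2, 1.0

def euler_pair(n_steps):
    """One fine Euler path (n_steps) coupled to a coarse path (n_steps // 2)."""
    dt = T / n_steps
    s_f = s_c = S0
    for _ in range(n_steps // 2):
        dw1 = random.gauss(0.0, math.sqrt(dt))
        dw2 = random.gauss(0.0, math.sqrt(dt))
        s_f += r * s_f * dt + sigma * s_f * dw1
        s_f += r * s_f * dt + sigma * s_f * dw2
        s_c += r * s_c * (2 * dt) + sigma * s_c * (dw1 + dw2)  # same increments
    return s_f, s_c

# Level 0 is a plain coarse estimator; each level l >= 1 adds the correction
# E[f_l - f_{l-1}], with fewer samples needed as the levels get finer.
levels = [(2, 40000), (4, 10000), (8, 2500), (16, 600)]
estimate = 0.0
for l, (n_steps, n_samples) in enumerate(levels):
    acc = 0.0
    if l == 0:
        dt = T / n_steps
        for _ in range(n_samples):
            s = S0
            for _ in range(n_steps):
                s += r * s * dt + sigma * s * random.gauss(0.0, math.sqrt(dt))
            acc += s
    else:
        for _ in range(n_samples):
            s_f, s_c = euler_pair(n_steps)
            acc += s_f - s_c
    estimate += acc / n_samples

print(f"MLMC estimate of E[S_T] = {estimate:.4f} (exact {S0 * math.exp(r * T):.4f})")
```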

  15. Le Comte de Monte Cristo: from literature to cinema

    OpenAIRE

    Caravela, Natércia Murta Silva

    2008-01-01

    This dissertation discusses the dialogue established between literature and cinema in the treatment of the main character, a betrayed man who takes cruel revenge on his enemies, in the literary work Le Comte de Monte-Cristo by Alexandre Dumas and in three chosen film adaptations: Le Comte de Monte-Cristo by Robert Vernay (1943), The Count of Monte Cristo by David Greene (1975) and The Count of Monte Cristo by Kevin Reynolds (2002). The project centres on the analysis of the ...

  16. The DWPF: Results of full scale qualification runs leading to radioactive operations

    International Nuclear Information System (INIS)

    Marra, S.L.; Elder, H.H.; Occhipinti, J.H.; Snyder, D.E.

    1996-01-01

    The Defense Waste Processing Facility (DWPF) at the Savannah River Site in Aiken, SC will immobilize high-level radioactive liquid waste, currently stored in underground carbon steel tanks, in borosilicate glass. The radioactive waste is transferred to the DWPF in two forms: precipitate slurry and sludge slurry. The radioactive waste is pretreated and then combined with a borosilicate glass frit in the DWPF. This homogeneous slurry is fed to a Joule-heated melter which operates at approximately 1150 degrees C. The glass is poured into stainless steel canisters for eventual disposal in a geologic repository. The DWPF product (i.e. the canistered waste form) must comply with the Waste Acceptance Product Specifications (WAPS) in order to be acceptable for disposal. The DWPF has completed Waste Qualification Runs which demonstrate the facility's ability to comply with the waste acceptance specifications. During the Waste Qualification Runs seventy-one canisters of simulated waste glass were produced in preparation for Radioactive Operations. These canisters of simulated waste glass were produced during five production campaigns which also exercised the facility prior to beginning Radioactive Operations. The results of the Waste Qualification Runs are presented

  17. Short-run and long-run effects of unemployment on suicides: does welfare regime matter?

    Science.gov (United States)

    Gajewski, Pawel; Zhukovska, Kateryna

    2017-12-01

    Disentangling the immediate effects of an unemployment shock from the long-run relationship has a strong theoretical rationale. Different economic and psychological forces are at play in the first moment and after prolonged unemployment. This study suggests a diverse impact of short- and long-run unemployment on suicides in liberal and social-democratic countries. We take a macro-level perspective and simultaneously estimate the short- and long-run relationships between unemployment and suicide, along with the speed of convergence towards the long-run relationship after a shock, in a panel of 10 high-income countries. We also account for unemployment benefit spending, the share of the population aged 15-34, and the crisis effects. In the liberal group of countries, only a long-run impact of unemployment on suicides is found to be significant (P = 0.010). In social-democratic countries, suicides are associated with initial changes in unemployment (P = 0.028), but the positive link fades over time and becomes insignificant in the long run. Further, crisis effects are a much stronger determinant of suicides in social-democratic countries. Once the broad welfare regime is controlled for, changes in unemployment-related spending do not matter for preventing suicides. A generous welfare system seems efficient at preventing unemployment-related suicides in the long run, but societies in social-democratic countries might be less psychologically immune to sudden negative changes in their professional lives compared with people in liberal countries. Accounting for the different short- and long-run effects could thus improve our understanding of the unemployment-suicide link. © The Author 2017. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.

  18. Present status of transport code development based on Monte Carlo method

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki

    1985-01-01

    The present status of development of Monte Carlo codes is briefly reviewed. The main items are the following: application fields; methods used in Monte Carlo codes (geometry specification, nuclear data, estimators and variance reduction techniques) and unfinished work; typical Monte Carlo codes; and the merits of continuous-energy Monte Carlo codes. (author)

  19. Weekly running volume and risk of running-related injuries among marathon runners

    DEFF Research Database (Denmark)

    Rasmussen, Christina Haugaard; Nielsen, Rasmus Østergaard; Juul, Martin Serup

    2013-01-01

    PURPOSE/BACKGROUND: The purpose of this study was to investigate if the risk of injury declines with increasing weekly running volume before a marathon race.

  20. Direct utilization of information from nuclear data files in Monte Carlo simulation of neutron and photon transport

    International Nuclear Information System (INIS)

    Androsenko, P.; Joloudov, D.; Kompaniyets, A.

    2001-01-01

    Questions related to the Monte Carlo method for solving the neutron and photon transport equation are discussed, in particular problems dealing with the direct utilization of information from evaluated nuclear data files in run-time calculations. ENDF-6 format libraries have been used for the calculations. Approaches provided by the rules of ENDF-6 files 2, 3-6, 12-15, 23 and 27, and algorithms for the reconstruction of resolved and unresolved resonance region cross sections at a preset energy, are described. Comparisons with calculations made by the NJOY and GRUCON programs and with computed cross-section data are presented. Computed neutron leakage spectra for spherical benchmark experiments are also presented. (authors)

  1. Two new DOSXYZnrc sources for 4D Monte Carlo simulations of continuously variable beam configurations, with applications to RapidArc, VMAT, TomoTherapy and CyberKnife

    International Nuclear Information System (INIS)

    Lobo, Julio; Popescu, I Antoniu

    2010-01-01

    We present two new Monte Carlo sources for the DOSXYZnrc code, which can be used to compute dose distributions due to continuously variable beam configurations. These sources support a continuously rotating gantry and collimator, dynamic multileaf collimator (MLC) motion, variable monitor unit (MU) rate, couch rotation and translation in any direction, arbitrary isocentre motion with respect to the patient and variable source-to-axis distance (SAD). These features make them applicable to Monte Carlo simulations for RapidArc™, Elekta VMAT, TomoTherapy™ and CyberKnife™. Unique to these sources is the synchronization between the motion in the DOSXYZnrc geometry and the motion within the linac head, represented by a shared library (either a BEAMnrc accelerator with dynamic component modules, or an external library). The simulations are achieved in single runs, with no intermediate phase space files.

  2. Successful vectorization - reactor physics Monte Carlo code

    International Nuclear Information System (INIS)

    Martin, W.R.

    1989-01-01

    Most particle transport Monte Carlo codes in use today are based on the "history-based" algorithm, wherein one particle history at a time is simulated. Unfortunately, the "history-based" approach (present in all Monte Carlo codes until recent years) is inherently scalar and cannot be vectorized. In particular, the history-based algorithm cannot take advantage of vector architectures, which characterize the largest and fastest computers at the current time, vector supercomputers such as the Cray X/MP or IBM 3090/600. However, substantial progress has been made in recent years in developing and implementing a vectorized Monte Carlo algorithm. This algorithm follows portions of many particle histories at the same time and forms the basis for all successful vectorized Monte Carlo codes that are in use today. This paper describes the basic vectorized algorithm along with descriptions of several variations that have been developed by different researchers for specific applications. These applications have been mainly in the areas of neutron transport in nuclear reactor and shielding analysis and photon transport in fusion plasmas. The relative merits of the various approach schemes will be discussed and the present status of known vectorization efforts will be summarized along with available timing results, including results from the successful vectorization of 3-D general geometry, continuous energy Monte Carlo. (orig.)
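
    The contrast between the scalar history-based loop and an event-based, vectorized organization can be shown on a deliberately trivial transmission problem (a purely absorbing slab); the point is the control structure, not the physics:

```python
import numpy as np

rng = np.random.default_rng(3)
T, N = 3.0, 200_000      # slab thickness in mean free paths, number of histories

def history_based():
    """Scalar loop: one history at a time, as in traditional codes."""
    hits = 0
    for _ in range(N):
        if rng.exponential() > T:     # free path exceeds the slab: transmitted
            hits += 1
    return hits / N

def event_based():
    """Vectorized: the whole particle bank is processed in one operation."""
    return float(np.mean(rng.exponential(size=N) > T))

print(history_based(), event_based(), float(np.exp(-T)))
```

    In a real code the vectorized version must also compact the bank after each event (removing absorbed particles and gathering survivors), which is where most of the implementation effort described in this record goes.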

  3. CernVM Co-Pilot: a Framework for Orchestrating Virtual Machines Running Applications of LHC Experiments on the Cloud

    International Nuclear Information System (INIS)

    Harutyunyan, A; Sánchez, C Aguado; Blomer, J; Buncic, P

    2011-01-01

    CernVM Co-Pilot is a framework for the delivery and execution of the workload on remote computing resources. It consists of components which are developed to ease the integration of geographically distributed resources (such as commercial or academic computing clouds, or the machines of users participating in volunteer computing projects) into existing computing grid infrastructures. The Co-Pilot framework can also be used to build an ad-hoc computing infrastructure on top of distributed resources. In this paper we present the architecture of the Co-Pilot framework, describe how it is used to execute the jobs of the ALICE and ATLAS experiments, as well as to run the Monte-Carlo simulation application of CERN Theoretical Physics Group.

  4. Calcaneus length determines running economy: implications for endurance running performance in modern humans and Neandertals.

    Science.gov (United States)

    Raichlen, David A; Armstrong, Hunter; Lieberman, Daniel E

    2011-03-01

    The endurance running (ER) hypothesis suggests that distance running played an important role in the evolution of the genus Homo. Most researchers have focused on ER performance in modern humans, or on reconstructing ER performance in Homo erectus, however, few studies have examined ER capabilities in other members of the genus Homo. Here, we examine skeletal correlates of ER performance in modern humans in order to evaluate the energetics of running in Neandertals and early Homo sapiens. Recent research suggests that running economy (the energy cost of running at a given speed) is strongly related to the length of the Achilles tendon moment arm. Shorter moment arms allow for greater storage and release of elastic strain energy, reducing energy costs. Here, we show that a skeletal correlate of Achilles tendon moment arm length, the length of the calcaneal tuber, does not correlate with walking economy, but correlates significantly with running economy and explains a high proportion of the variance (80%) in cost between individuals. Neandertals had relatively longer calcaneal tubers than modern humans, which would have increased their energy costs of running. Calcaneal tuber lengths in early H. sapiens do not significantly differ from those of extant modern humans, suggesting Neandertal ER economy was reduced relative to contemporaneous anatomically modern humans. Endurance running is generally thought to be beneficial for gaining access to meat in hot environments, where hominins could have used pursuit hunting to run prey taxa into hyperthermia. We hypothesize that ER performance may have been reduced in Neandertals because they lived in cold climates. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. Bayesian phylogeny analysis via stochastic approximation Monte Carlo

    KAUST Repository

    Cheon, Sooyoung; Liang, Faming

    2009-01-01

    in simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo algorithm, to Bayesian phylogeny analysis. Our method

  6. Reflections on early Monte Carlo calculations

    International Nuclear Information System (INIS)

    Spanier, J.

    1992-01-01

    Monte Carlo methods for solving various particle transport problems developed in parallel with the evolution of increasingly sophisticated computer programs implementing diffusion theory and low-order moments calculations. In these early years, Monte Carlo calculations and high-order approximations to the transport equation were seen as too expensive to use routinely for nuclear design but served as invaluable aids and supplements to design with less expensive tools. The earliest Monte Carlo programs were quite literal; i.e., neutron and other particle random walk histories were simulated by sampling from the probability laws inherent in the physical system without distortion. Use of such analogue sampling schemes resulted in a good deal of time being spent in examining the possibility of lowering the statistical uncertainties in the sample estimates by replacing simple, and intuitively obvious, random variables by those with identical means but lower variances

  7. Effect of Minimalist Footwear on Running Efficiency

    Science.gov (United States)

    Gillinov, Stephen M.; Laux, Sara; Kuivila, Thomas; Hass, Daniel; Joy, Susan M.

    2015-01-01

    Background: Although minimalist footwear is increasingly popular among runners, claims that minimalist footwear enhances running biomechanics and efficiency are controversial. Hypothesis: Minimalist and barefoot conditions improve running efficiency when compared with traditional running shoes. Study Design: Randomized crossover trial. Level of Evidence: Level 3. Methods: Fifteen experienced runners each completed three 90-second running trials on a treadmill, each trial performed in a different type of footwear: traditional running shoes with a heavily cushioned heel, minimalist running shoes with minimal heel cushioning, and barefoot (socked). High-speed photography was used to determine foot strike, ground contact time, knee angle, and stride cadence with each footwear type. Results: Runners had more rearfoot strikes in traditional shoes (87%) compared with minimalist shoes (67%) and socked (40%) (P = 0.03). Ground contact time was longest in traditional shoes (265.9 ± 10.9 ms) when compared with minimalist shoes (253.4 ± 11.2 ms) and socked (250.6 ± 16.2 ms) (P = 0.005). There was no difference between groups with respect to knee angle (P = 0.37) or stride cadence (P = 0.20). When comparing running socked to running with minimalist running shoes, there were no differences in measures of running efficiency. Conclusion: When compared with running in traditional, cushioned shoes, both barefoot (socked) running and minimalist running shoes produce greater running efficiency in some experienced runners, with a greater tendency toward a midfoot or forefoot strike and a shorter ground contact time. Minimalist shoes closely approximate socked running in the 4 measurements performed. Clinical Relevance: With regard to running efficiency and biomechanics, in some runners, barefoot (socked) and minimalist footwear are preferable to traditional running shoes. PMID:26131304

  8. Overuse injuries in running

    DEFF Research Database (Denmark)

    Larsen, Lars Henrik; Rasmussen, Sten; Jørgensen, Jens Erik

    2016-01-01

    What is an overuse injury in running? This question is a cornerstone of clinical documentation and research-based evidence.

  9. Injury-free running - a utopia? Risk factors of running-related injuries in men and women

    NARCIS (Netherlands)

    Worp, M.P. van der

    2016-01-01

    Running is a popular sport worldwide and has a positive effect on health and well-being. However, the rate of running-related injuries and the associated costs are high. Van der Worp performed a systematic review to examine which factors increase the risk of running injuries, and whether this is the

  10. Preventing Running Injuries through Barefoot Activity

    Science.gov (United States)

    Hart, Priscilla M.; Smith, Darla R.

    2008-01-01

    Running has become a very popular lifetime physical activity even though there are numerous reports of running injuries. Although common theories have pointed to impact forces and overpronation as the main contributors to chronic running injuries, the increased use of cushioning and orthotics has done little to decrease running injuries. A new…

  11. Reconstruction of Monte Carlo replicas from Hessian parton distributions

    Energy Technology Data Exchange (ETDEWEB)

    Hou, Tie-Jiun [Department of Physics, Southern Methodist University,Dallas, TX 75275-0181 (United States); Gao, Jun [INPAC, Shanghai Key Laboratory for Particle Physics and Cosmology,Department of Physics and Astronomy, Shanghai Jiao-Tong University, Shanghai 200240 (China); High Energy Physics Division, Argonne National Laboratory,Argonne, Illinois, 60439 (United States); Huston, Joey [Department of Physics and Astronomy, Michigan State University,East Lansing, MI 48824 (United States); Nadolsky, Pavel [Department of Physics, Southern Methodist University,Dallas, TX 75275-0181 (United States); Schmidt, Carl; Stump, Daniel [Department of Physics and Astronomy, Michigan State University,East Lansing, MI 48824 (United States); Wang, Bo-Ting; Xie, Ke Ping [Department of Physics, Southern Methodist University,Dallas, TX 75275-0181 (United States); Dulat, Sayipjamal [Department of Physics and Astronomy, Michigan State University,East Lansing, MI 48824 (United States); School of Physics Science and Technology, Xinjiang University,Urumqi, Xinjiang 830046 (China); Center for Theoretical Physics, Xinjiang University,Urumqi, Xinjiang 830046 (China); Pumplin, Jon; Yuan, C.P. [Department of Physics and Astronomy, Michigan State University,East Lansing, MI 48824 (United States)

    2017-03-20

    We explore connections between two common methods for quantifying the uncertainty in parton distribution functions (PDFs), based on the Hessian error matrix and Monte-Carlo sampling. CT14 parton distributions in the Hessian representation are converted into Monte-Carlo replicas by a numerical method that reproduces important properties of CT14 Hessian PDFs: the asymmetry of CT14 uncertainties and positivity of individual parton distributions. The ensembles of CT14 Monte-Carlo replicas constructed this way at NNLO and NLO are suitable for various collider applications, such as cross section reweighting. Master formulas for computation of asymmetric standard deviations in the Monte-Carlo representation are derived. A correction is proposed to address a bias in asymmetric uncertainties introduced by the Taylor series approximation. A numerical program is made available for conversion of Hessian PDFs into Monte-Carlo replicas according to normal, log-normal, and Watt-Thorne sampling procedures.
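
    A rough sketch of the conversion idea (illustrative only; the function below is hypothetical, not the paper's released program) keeps the asymmetry of the Hessian error sets by choosing the + or - displacement according to the sign of each Gaussian draw:

        import numpy as np

        def hessian_to_replicas(f0, f_plus, f_minus, n_rep, seed=0):
            # f0: central PDF value(s); f_plus/f_minus: arrays indexed by
            # Hessian eigenvector direction with the PDF at the +/- ends.
            rng = np.random.default_rng(seed)
            d_plus, d_minus = f_plus - f0, f_minus - f0
            reps = []
            for _ in range(n_rep):
                r = rng.standard_normal(len(d_plus))   # one draw per direction
                shift = sum(abs(ri) * (dp if ri >= 0 else dm)
                            for ri, dp, dm in zip(r, d_plus, d_minus))
                reps.append(f0 + shift)
            return np.asarray(reps)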

  12. Sampling from a polytope and hard-disk Monte Carlo

    International Nuclear Information System (INIS)

    Kapfer, Sebastian C; Krauth, Werner

    2013-01-01

    The hard-disk problem, the statics and the dynamics of equal two-dimensional hard spheres in a periodic box, has had a profound influence on statistical and computational physics. Markov-chain Monte Carlo and molecular dynamics were first discussed for this model. Here we reformulate hard-disk Monte Carlo algorithms in terms of another classic problem, namely the sampling from a polytope. Local Markov-chain Monte Carlo, as proposed by Metropolis et al. in 1953, appears as a sequence of random walks in high-dimensional polytopes, while the moves of the more powerful event-chain algorithm correspond to molecular dynamics evolution. We determine the convergence properties of Monte Carlo methods in a special invariant polytope associated with hard-disk configurations, and the implications for convergence of hard-disk sampling. Finally, we discuss parallelization strategies for event-chain Monte Carlo and present results for a multicore implementation
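
    The local Monte Carlo move referred to above is easy to sketch; a minimal, assumed Metropolis sweep for hard disks in a periodic box accepts a trial displacement only if it creates no overlap (the hard-core constraint is the polytope restriction in disguise):

        import numpy as np

        def hard_disk_sweep(pos, sigma, box, delta, rng):
            # pos: (N, 2) disk centers; sigma: disk diameter; box: side length;
            # delta: maximum trial displacement per coordinate.
            for i in rng.permutation(len(pos)):
                trial = (pos[i] + rng.uniform(-delta, delta, 2)) % box
                d = pos - trial
                d -= box * np.rint(d / box)        # minimum-image convention
                d2 = np.einsum('ij,ij->i', d, d)
                d2[i] = np.inf                     # ignore self-distance
                if d2.min() >= sigma**2:           # accept only if no overlap
                    pos[i] = trial
            return pos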

  13. Problems in radiation shielding calculations with Monte Carlo methods

    International Nuclear Information System (INIS)

    Ueki, Kohtaro

    1985-01-01

    The Monte Carlo method is a very useful tool for solving a large class of radiation transport problems. In contrast with deterministic methods, geometric complexity is a much less significant problem for Monte Carlo calculations. However, the accuracy of Monte Carlo calculations is, of course, limited by the statistical error of the quantities to be estimated. In this report, we point out some typical problems in solving large shielding systems that include radiation streaming. The Monte Carlo coupling technique was developed to settle such shielding problems accurately. However, the variance of the Monte Carlo results obtained with the coupling technique for detectors located outside the radiation streaming was still too large. To obtain more accurate results for detectors located outside the streaming, and also for a multi-legged-duct streaming problem, a practicable ''Prism Scattering technique'' is proposed in this study. (author)

  14. Cluster monte carlo method for nuclear criticality safety calculation

    International Nuclear Information System (INIS)

    Pei Lucheng

    1984-01-01

    One of the most important applications of the Monte Carlo method is the calculation of nuclear criticality safety. The fair source game problem was presented at almost the same time as the Monte Carlo method was first applied to calculating nuclear criticality safety. In such problems the source iteration cost may be reduced as much as possible, or no source iteration is needed at all. Problems of this kind all belong to the fair source game problems, among which the optimal source game requires no source iteration. Although the single-neutron Monte Carlo method solves the problem without source iteration, it still has an apparent shortcoming: it solves the problem without source iteration only in the asymptotic sense. In this work, a new Monte Carlo method, called the cluster Monte Carlo method, is given to solve the problem further

  15. Changes in running kinematics, kinetics, and spring-mass behavior over a 24-h run.

    Science.gov (United States)

    Morin, Jean-Benoît; Samozino, Pierre; Millet, Guillaume Y

    2011-05-01

    This study investigated the changes in running mechanics and spring-mass behavior over a 24-h treadmill run (24TR). Kinematics, kinetics, and spring-mass characteristics of the running step were assessed in 10 experienced ultralong-distance runners before, every 2 h, and after a 24TR using an instrumented treadmill dynamometer. These measurements were performed at 10 km·h⁻¹, and mechanical parameters were sampled at 1000 Hz for 10 consecutive steps. Contact and aerial times were determined from ground reaction force (GRF) signals and used to compute step frequency. Maximal GRF, loading rate, downward displacement of the center of mass, and leg length change during the support phase were determined and used to compute both vertical and leg stiffness. Subjects' running pattern and spring-mass behavior significantly changed over the 24TR with a 4.9% higher step frequency on average (because of a significantly 4.5% shorter contact time), a lower maximal GRF (by 4.4% on average), a 13.0% lower leg length change during contact, and an increase in both leg and vertical stiffness (+9.9% and +8.6% on average, respectively). Most of these changes were significant from the early phase of the 24TR (fourth to sixth hour of running) and could be speculated as contributing to an overall limitation of the potentially harmful consequences of such a long-duration run on subjects' musculoskeletal system. During a 24TR, the changes in running mechanics and spring-mass behavior show a clear shift toward a higher oscillating frequency and stiffness, along with lower GRF and leg length change (hence a reduced overall eccentric load) during the support phase of running. © 2011 by the American College of Sports Medicine
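
    The derived quantities follow from standard spring-mass arithmetic; with purely hypothetical numbers (not the study's data):

        # Illustrative spring-mass calculations for one running step
        t_c, t_a = 0.25, 0.11                  # contact and aerial times (s)
        f_step = 1.0 / (t_c + t_a)             # step frequency (Hz)
        F_max = 1500.0                         # peak vertical GRF (N)
        dz, dL = 0.055, 0.14                   # COM displacement, leg length change (m)
        k_vert = F_max / dz                    # vertical stiffness (N/m)
        k_leg = F_max / dL                     # leg stiffness (N/m)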

  16. Running in a running wheel substitutes for stereotypies in mink (Mustela vison) but does it improve their welfare?

    DEFF Research Database (Denmark)

    Hansen, Steffen W; Damgaard, Birthe Marie

    2009-01-01

    This experiment investigated whether access to a running wheel affects the development of stereotypies during restricted feeding and whether selection for high or low levels of stereotypy affects the use of the running wheel. Sixty-two female mink kept in standard cages and selected for high or low levels of stereotypy were used. Thirty of these females had access to a running wheel whereas thirty-two female mink had no access to running wheels. The number of turns of the running wheel, behaviour, feed consumption, body weight and the concentration of plasma cortisol were measured during the winter period. Mink with access to a running wheel did not perform stereotypic behaviour and mink selected for a high level of stereotypies had more turns in the running wheel than mink selected for low levels of stereotypies. Mink with access to a running wheel used the running wheel for the same amount

  17. Cardiovascular responses during deep water running versus shallow water running in school children

    Directory of Open Access Journals (Sweden)

    Anerao Urja M, Shinde Nisha K, Khatri SM

    2014-03-01

    Full Text Available Overview: As school-going children, especially adolescents, need a workout routine, it is advisable that the routine be imbibed in the school's class timetable. In India, as a growing number of schools provide swimming as one of the recreational activities, school staff often fail to notice the boredom caused by the same activity. Deep as well as shallow water running can be one of the best alternatives to swimming. Hence the present study was conducted to find out the cardiovascular responses in these individuals. Methods: This was a prospective cross-sectional comparative study done in 72 healthy school-going students (males), grouped into 2 according to the interventions (deep water running and shallow water running). Cardiovascular parameters such as heart rate (HR), saturation of oxygen (SpO2), maximal oxygen consumption (VO2max) and rate of perceived exertion (RPE) were assessed. Results: Significant improvements in cardiovascular parameters were seen in both groups, i.e. with both interventions. Conclusion: Deep water running and shallow water running can be used to improve cardiac function in terms of the various outcome measures used in the study.

  18. Wielandt acceleration for MCNP5 Monte Carlo eigenvalue calculations

    International Nuclear Information System (INIS)

    Brown, F.

    2007-01-01

    Monte Carlo criticality calculations use the power iteration method to determine the eigenvalue (k-eff) and eigenfunction (fission source distribution) of the fundamental mode. A recently proposed method for accelerating convergence of the Monte Carlo power iteration using Wielandt's method has been implemented in a test version of MCNP5. The method is shown to provide dramatic improvements in convergence rates and to greatly reduce the possibility of false convergence assessment. The method is effective and efficient, improving the Monte Carlo figure-of-merit for many problems. In addition, the method should eliminate most of the underprediction bias in confidence intervals for Monte Carlo criticality calculations. (authors)
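
    The flavor of the acceleration can be conveyed by a small deterministic analogue (an illustrative sketch, not MCNP5 code): power iteration applied to the Wielandt-shifted operator, whose compressed dominance ratio is what speeds convergence:

        import numpy as np

        def wielandt_keff(A, k_e=1.2, tol=1e-10, max_it=1000):
            # If A x = k x, then M = (I - A/k_e)^(-1) A has eigenvalue
            # mu = k*k_e/(k_e - k), so k = 1/(1/k_e + 1/mu); the shift k_e
            # must exceed the true k-effective.
            M = np.linalg.solve(np.eye(len(A)) - A / k_e, A)
            x = np.ones(len(A))
            k = 0.0
            for _ in range(max_it):
                y = M @ x
                mu = np.linalg.norm(y) / np.linalg.norm(x)
                k_new = 1.0 / (1.0 / k_e + 1.0 / mu)
                x = y / np.linalg.norm(y)
                if abs(k_new - k) < tol:
                    break
                k = k_new
            return k_new, x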

  19. Monte Carlo shielding analyses using an automated biasing procedure

    International Nuclear Information System (INIS)

    Tang, J.S.; Hoffman, T.J.

    1988-01-01

    A systematic and automated approach for biasing Monte Carlo shielding calculations is described. In particular, adjoint fluxes from a one-dimensional discrete ordinates calculation are used to generate biasing parameters for a Monte Carlo calculation. The entire procedure of adjoint calculation, biasing parameter generation, and Monte Carlo calculation has been automated. The automated biasing procedure has been applied to several realistic deep-penetration shipping cask problems. The results obtained for neutron and gamma-ray transport indicate that with the automated biasing procedure Monte Carlo shielding calculations of spent-fuel casks can be easily performed with minimum effort and that accurate results can be obtained at reasonable computing cost

  20. Applications of the Monte Carlo method in radiation protection

    International Nuclear Information System (INIS)

    Kulkarni, R.N.; Prasad, M.A.

    1999-01-01

    This paper gives a brief introduction to the application of the Monte Carlo method in radiation protection. It may be noted that an exhaustive review has not been attempted. The special advantage of the Monte Carlo method is first brought out. The fundamentals of the Monte Carlo method are next explained in brief, with special reference to two applications in radiation protection. Some current applications are then briefly reported as examples: medical radiation physics, microdosimetry, calculation of thermoluminescence intensity and probabilistic safety analysis. The limitations of the Monte Carlo method are also mentioned in passing. (author)

  1. Pore-scale uncertainty quantification with multilevel Monte Carlo

    KAUST Repository

    Icardi, Matteo; Hoel, Haakon; Long, Quan; Tempone, Raul

    2014-01-01

    Since there are no generic ways to parametrize the randomness in the pore-scale structures, Monte Carlo techniques are the most accessible to compute statistics. We propose a multilevel Monte Carlo (MLMC) technique to reduce the computational cost

  2. Current and future applications of Monte Carlo

    International Nuclear Information System (INIS)

    Zaidi, H.

    2003-01-01

    Full text: The use of radionuclides in medicine has a long history and encompasses a large area of applications including diagnosis and radiation treatment of cancer patients using either external or radionuclide radiotherapy. The 'Monte Carlo method' describes a very broad area of science, in which many processes, physical systems, and phenomena are simulated by statistical methods employing random numbers. The general idea of Monte Carlo analysis is to create a model, which is as similar as possible to the real physical system of interest, and to create interactions within that system based on known probabilities of occurrence, with random sampling of the probability density functions (pdfs). As the number of individual events (called 'histories') is increased, the quality of the reported average behavior of the system improves, meaning that the statistical uncertainty decreases. The use of the Monte Carlo method to simulate radiation transport has become the most accurate means of predicting absorbed dose distributions and other quantities of interest in the radiation treatment of cancer patients using either external or radionuclide radiotherapy. The same trend has occurred for the estimation of the absorbed dose in diagnostic procedures using radionuclides as well as the assessment of image quality and quantitative accuracy of radionuclide imaging. As a consequence of this generalized use, many questions are being raised primarily about the need and potential of Monte Carlo techniques, but also about how accurate it really is, what would it take to apply it clinically and make it available widely to the nuclear medicine community at large. Many of these questions will be answered when Monte Carlo techniques are implemented and used for more routine calculations and for in-depth investigations. In this paper, the conceptual role of the Monte Carlo method is briefly introduced and followed by a survey of its different applications in diagnostic and therapeutic

  3. [Physiological differences between cycling and running].

    Science.gov (United States)

    Millet, Grégoire

    2009-08-05

    This review compares the differences in systemic responses (VO2max, anaerobic threshold, heart rate and economy) and in underlying mechanisms of adaptation (ventilatory, hemodynamic and neuromuscular responses) between cycling and running. VO2max is specific to the exercise modality. Overall, there is more physiological training transfer from running to cycling than vice versa. Several other physiological differences between cycling and running are discussed: HR is different between the two activities both for maximal and sub-maximal intensities. The delta efficiency is higher in running. Ventilation is more impaired in cycling than running due to mechanical constraints. Central fatigue and decrease in maximal strength are more pronounced after prolonged exercise in running than in cycling.

  4. Quantum statistical Monte Carlo methods and applications to spin systems

    International Nuclear Information System (INIS)

    Suzuki, M.

    1986-01-01

    A short review is given concerning the quantum statistical Monte Carlo method based on the equivalence theorem that d-dimensional quantum systems are mapped onto (d+1)-dimensional classical systems. The convergence property of this approximate transformation is discussed in detail. Some applications of this general approach to quantum spin systems are reviewed. A new Monte Carlo method, ''thermo field Monte Carlo method,'' is presented, which is an extension of the projection Monte Carlo method at zero temperature to that at finite temperatures

  5. The NLstart2run study: Training-related factors associated with running-related injuries in novice runners.

    Science.gov (United States)

    Kluitenberg, Bas; van der Worp, Henk; Huisstede, Bionka M A; Hartgens, Fred; Diercks, Ron; Verhagen, Evert; van Middelkoop, Marienke

    2016-08-01

    The incidence of running-related injuries is high. Some risk factors for injury were identified in novice runners; however, not much is known about the effect of training factors on injury risk. Therefore, the purpose of this study was to examine the associations between training factors and running-related injuries in novice runners, taking the time-varying nature of these training-related factors into account. Prospective cohort study. 1696 participants completed weekly diaries on running exposure and injuries during a 6-week running program for novice runners. Total running volume (min), frequency and mean intensity (Rate of Perceived Exertion) were calculated for the seven days prior to each training session. The association of these time-varying variables with injury was determined in an extended Cox regression analysis. The results of the multivariable analysis showed that running with a higher intensity in the previous week was associated with a higher injury risk. Running frequency was not significantly associated with injury; however, a trend towards running three times per week being more hazardous than two times could be observed. Finally, lower running volume was associated with a higher risk of sustaining an injury. These results suggest that running more than 60 min at a lower intensity is least injurious. This finding is contrary to our expectations and is presumably the result of other factors. Therefore, the findings should not be used plainly as a guideline for novices. More research is needed to establish the person-specific training patterns that are associated with injury. Copyright © 2015 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  6. OFFLINE

    CERN Document Server

    L. Silvestris

    2010-01-01

    Since the end of the December Run, the Offline Group has been involved in two main activities related to the reprocessing of the 2009 Run and to the preparation for the 2010 Run. The preparation of the first Physics Paper, on charged track multiplicity, and the commissioning of the detector required a series of complete reprocessings of the data taken, together with their associated Monte Carlo simulations (for the most part, Minimum Bias events at 900 GeV and at 2.36 TeV). The first complete reprocessing happened soon after the end of the Run, on December 14th. Following this, on December 19th, a second reprocessing was completed. Together with the full sample, various skims were made available for particular use cases, such as BeamHalo skims, ‘Interesting Events’, and other specific needs. Also, the AlcaReco samples were produced to help the commissioning of the detectors with pre-processed histograms and plots. From the technical point of view, the relevant news is that for the first...

  7. SPQR: a Monte Carlo reactor kinetics code

    International Nuclear Information System (INIS)

    Cramer, S.N.; Dodds, H.L.

    1980-02-01

    The SPQR Monte Carlo code has been developed to analyze fast reactor core accident problems where conventional methods are considered inadequate. The code is based on the adiabatic approximation of the quasi-static method. This initial version contains no automatic material motion or feedback. An existing Monte Carlo code is used to calculate the shape functions and the integral quantities needed in the kinetics module. Several sample problems have been devised and analyzed. Due to the large statistical uncertainty associated with the calculation of reactivity in accident simulations, the results, especially at later times, differ greatly from deterministic methods. It was also found that in large uncoupled systems, the Monte Carlo method has difficulty in handling asymmetric perturbations

  8. Optix: A Monte Carlo scintillation light transport code

    Energy Technology Data Exchange (ETDEWEB)

    Safari, M.J., E-mail: mjsafari@aut.ac.ir [Department of Energy Engineering and Physics, Amir Kabir University of Technology, PO Box 15875-4413, Tehran (Iran, Islamic Republic of); Afarideh, H. [Department of Energy Engineering and Physics, Amir Kabir University of Technology, PO Box 15875-4413, Tehran (Iran, Islamic Republic of); Ghal-Eh, N. [School of Physics, Damghan University, PO Box 36716-41167, Damghan (Iran, Islamic Republic of); Davani, F. Abbasi [Nuclear Engineering Department, Shahid Beheshti University, PO Box 1983963113, Tehran (Iran, Islamic Republic of)

    2014-02-11

    The paper reports on the capabilities of the Monte Carlo scintillation light transport code Optix, an extended version of the previously introduced code Optics. Optix provides the user a variety of both numerical and graphical outputs with a very simple and user-friendly input structure. A benchmarking strategy has been adopted based on the comparison with experimental results, semi-analytical solutions, and other Monte Carlo simulation codes to verify various aspects of the developed code. Besides, some extensive comparisons have been made against the tracking abilities of the general-purpose MCNPX and FLUKA codes. The presented benchmark results for the Optix code exhibit promising agreements. -- Highlights: • Monte Carlo simulation of scintillation light transport in 3D geometry. • Evaluation of angular distribution of detected photons. • Benchmark studies to check the accuracy of Monte Carlo simulations.

  9. Bayesian phylogeny analysis via stochastic approximation Monte Carlo

    KAUST Repository

    Cheon, Sooyoung

    2009-11-01

    Monte Carlo methods have received much attention in the recent literature of phylogeny analysis. However, the conventional Markov chain Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, tend to get trapped in a local mode in simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo (SAMC) algorithm, to Bayesian phylogeny analysis. Our method is compared with two popular Bayesian phylogeny software packages, BAMBE and MrBayes, on simulated and real datasets. The numerical results indicate that our method outperforms BAMBE and MrBayes. Among the three methods, SAMC produces the consensus trees which have the highest similarity to the true trees, and the model parameter estimates which have the smallest mean square errors, but costs the least CPU time. © 2009 Elsevier Inc. All rights reserved.

  10. Present status and future prospects of neutronics Monte Carlo

    International Nuclear Information System (INIS)

    Gelbard, E.M.

    1990-01-01

    It is fair to say that the Monte Carlo method, over the last decade, has grown steadily more important as a neutronics computational tool. Apparently this has happened for assorted reasons. Thus, for example, as the power of computers has increased, the cost of the method has dropped, steadily becoming less and less of an obstacle to its use. In addition, more and more sophisticated input processors have now made it feasible to model extremely complicated systems routinely with really remarkable fidelity. Finally, as we demand greater and greater precision in reactor calculations, Monte Carlo is often found to be the only method accurate enough for use in benchmarking. Cross section uncertainties are now almost the only inherent limitations in our Monte Carlo capabilities. For this reason Monte Carlo has come to occupy a special position, interposed between experiment and other computational techniques. More and more often deterministic methods are tested by comparison with Monte Carlo, and cross sections are tested by comparing Monte Carlo with experiment. In this way one can distinguish very clearly between errors due to flaws in our numerical methods, and those due to deficiencies in cross section files. The special role of Monte Carlo as a benchmarking tool, often the only available benchmarking tool, makes it crucially important that this method should be polished to perfection. Problems relating to eigenvalue calculations, variance reduction and the use of advanced computers are reviewed in this paper. (author)

  11. A new Monte Carlo code for simulation of the effect of irregular surfaces on X-ray spectra

    Energy Technology Data Exchange (ETDEWEB)

    Brunetti, Antonio, E-mail: brunetti@uniss.it; Golosio, Bruno

    2014-04-01

    Generally, quantitative X-ray fluorescence (XRF) analysis estimates the content of chemical elements in a sample based on the areas of the fluorescence peaks in the energy spectrum. Besides the concentration of the elements, the peak areas depend also on the geometrical conditions. In fact, the estimate of the peak areas is simple if the sample surface is smooth and if the spectrum shows good statistics (large-area peaks). For this reason the sample is often prepared as a pellet. However, this approach is not always feasible, for instance when cultural heritage or valuable samples must be analyzed. In this case, the sample surface cannot be smoothed. In order to address this problem, several works have been reported in the literature, based on experimental measurements on a few sets of specific samples or on Monte Carlo simulations. The results obtained with the first approach are limited by the specific class of samples analyzed, while the second approach cannot be applied to arbitrarily irregular surfaces. The present work describes a more general analysis tool based on a new fast Monte Carlo algorithm, which is virtually able to simulate any kind of surface. To the best of our knowledge, it is the first Monte Carlo code with this option. A study of the influence of surface irregularities on the measured spectrum is performed and some results reported. - Highlights: • We present a fast Monte Carlo code with the possibility to simulate any irregularly rough surface. • We show applications to multilayer measurements. • Real-time simulations are available.

  12. Diffusion Monte Carlo approach versus adiabatic computation for local Hamiltonians

    Science.gov (United States)

    Bringewatt, Jacob; Dorland, William; Jordan, Stephen P.; Mink, Alan

    2018-02-01

    Most research regarding quantum adiabatic optimization has focused on stoquastic Hamiltonians, whose ground states can be expressed with only real non-negative amplitudes and thus for whom destructive interference is not manifest. This raises the question of whether classical Monte Carlo algorithms can efficiently simulate quantum adiabatic optimization with stoquastic Hamiltonians. Recent results have given counterexamples in which path-integral and diffusion Monte Carlo fail to do so. However, most adiabatic optimization algorithms, such as for solving MAX-k-SAT problems, use k-local Hamiltonians, whereas our previous counterexample for diffusion Monte Carlo involved n-body interactions. Here we present a 6-local counterexample which demonstrates that even for these local Hamiltonians there are cases where diffusion Monte Carlo cannot efficiently simulate quantum adiabatic optimization. Furthermore, we perform empirical testing of diffusion Monte Carlo on a standard well-studied class of permutation-symmetric tunneling problems and similarly find large advantages for quantum optimization over diffusion Monte Carlo.
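
    For readers new to the method itself, a toy, assumed implementation of diffusion Monte Carlo for the 1-D harmonic oscillator shows the diffuse-and-branch structure under discussion; the growth estimator converges to the ground-state energy 0.5:

        import numpy as np

        def dmc_harmonic(n_walk=20_000, n_step=2_000, dt=1e-3, seed=0):
            # H = -(1/2) d^2/dx^2 + (1/2) x^2; exact ground-state energy 0.5.
            rng = np.random.default_rng(seed)
            x = rng.normal(0.0, 1.0, n_walk)
            E_T, trace = 0.5, []
            for _ in range(n_step):
                x = x + rng.normal(0.0, np.sqrt(dt), x.size)      # diffusion
                w = np.exp(-(0.5 * x**2 - E_T) * dt)              # branching weights
                E_growth = E_T - np.log(w.mean()) / dt            # growth estimator
                x = np.repeat(x, (w + rng.random(x.size)).astype(int))
                E_T = E_growth + 0.01 * np.log(n_walk / x.size)   # population control
                trace.append(E_growth)
            return np.mean(trace[n_step // 2:])                   # ~ 0.5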

  13. Uncertainty evaluation of a modified elimination weighing for source preparation

    Energy Technology Data Exchange (ETDEWEB)

    Cacais, F.L.; Loayza, V.M., E-mail: facacais@gmail.com [Instituto Nacional de Metrologia, Qualidade e Tecnologia, (INMETRO), Rio de Janeiro, RJ (Brazil); Delgado, J.U. [Instituto de Radioproteção e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Lab. de Metrologia das Radiações Ionizantes

    2017-07-01

    Some modifications to the elimination weighing method for radioactive source preparation made it possible to correct the weighing results without non-linearity problems, to assign to the correction an uncertainty contribution of the same order as the mass-of-drop uncertainty, and to check the weighing variability in series source preparation. The analysis focused on the achievable weighing accuracy; the uncertainty estimated by the Monte Carlo method for the mass of a 20 mg drop was at most 0.06%. (author)

  14. Frequency domain Monte Carlo simulation method for cross power spectral density driven by periodically pulsed spallation neutron source using complex-valued weight Monte Carlo

    International Nuclear Information System (INIS)

    Yamamoto, Toshihiro

    2014-01-01

    Highlights: • The cross power spectral density in ADS has correlated and uncorrelated components. • A frequency domain Monte Carlo method to calculate the uncorrelated one is developed. • The method solves the Fourier transformed transport equation. • The method uses complex-valued weights to solve the equation. • The new method reproduces well the CPSDs calculated with time domain MC method. - Abstract: In an accelerator driven system (ADS), pulsed spallation neutrons are injected at a constant frequency. The cross power spectral density (CPSD), which can be used for monitoring the subcriticality of the ADS, is composed of the correlated and uncorrelated components. The uncorrelated component is described by a series of the Dirac delta functions that occur at the integer multiples of the pulse repetition frequency. In the present paper, a Monte Carlo method to solve the Fourier transformed neutron transport equation with a periodically pulsed neutron source term has been developed to obtain the CPSD in ADSs. Since the Fourier transformed flux is a complex-valued quantity, the Monte Carlo method introduces complex-valued weights to solve the Fourier transformed equation. The Monte Carlo algorithm used in this paper is similar to the one that was developed by the author of this paper to calculate the neutron noise caused by cross section perturbations. The newly-developed Monte Carlo algorithm is benchmarked to the conventional time domain Monte Carlo simulation technique. The CPSDs are obtained both with the newly-developed frequency domain Monte Carlo method and the conventional time domain Monte Carlo method for a one-dimensional infinite slab. The CPSDs obtained with the frequency domain Monte Carlo method agree well with those with the time domain method. The higher order mode effects on the CPSD in an ADS with a periodically pulsed neutron source are discussed

  15. Neutron point-flux calculation by Monte Carlo

    International Nuclear Information System (INIS)

    Eichhorn, M.

    1986-04-01

    A survey of the usual methods for estimating flux at a point is given. The associated variance-reducing techniques in direct Monte Carlo games are explained. The multigroup Monte Carlo codes MC for critical systems and PUNKT for point-source/point-detector systems are presented, and problems in applying the codes to practical tasks are discussed. (author)
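
    The classic difficulty with flux-at-a-point estimation is visible in the next-event (point-detector) estimator itself; a minimal sketch assuming isotropic scattering in a homogeneous medium (not the MC or PUNKT codes) shows the 1/r² factor that makes the variance blow up for collisions near the detector:

        import numpy as np

        def point_detector_score(x_coll, x_det, sigma_t, sigma_s, weight):
            # Contribution scored at a collision site x_coll for a point
            # detector at x_det: scatter toward the detector (isotropic,
            # hence 1/(4*pi)) times uncollided attenuation over distance r.
            r = np.linalg.norm(np.asarray(x_det) - np.asarray(x_coll))
            return (weight * (sigma_s / sigma_t)
                    * np.exp(-sigma_t * r) / (4.0 * np.pi * r**2))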

  16. 76 FR 41819 - Notice of Intent To Prepare a Resource Management Plan Amendment for the Glade Run Recreation...

    Science.gov (United States)

    2011-07-15

    ... with American Indian tribes to identify strategies for protecting recognized traditional uses; 11... Office, New Mexico, and Associated Environmental Assessment AGENCY: Bureau of Land Management, Interior... Environmental Assessment (EA) to address recreation and travel management in the Glade Run Recreation Area (the...

  17. Development of a GUI-based RETRAN running environment and its application

    International Nuclear Information System (INIS)

    Kim, K.D.; Jeong, J.J.; Mo, S.Y.; Lee, Y.G.; Lee, C.B.

    2001-01-01

    In order to assist RETRAN users in their input preparation, code execution, and output interpretation, a visual interactive RETRAN running environment (ViRRE) has been developed. ViRRE provides dialog boxes and graphical modules for base input data generation and transient initiation on a user-friendly basis, and special graphical displays to provide an in-depth understanding of the major thermal-hydraulic phenomena during normal and accident conditions for nuclear power plants. This paper presents the main features of ViRRE and an example of its application. (authors)

  18. Shell model the Monte Carlo way

    International Nuclear Information System (INIS)

    Ormand, W.E.

    1995-01-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined

  19. Shell model the Monte Carlo way

    Energy Technology Data Exchange (ETDEWEB)

    Ormand, W.E.

    1995-03-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.

  20. Monte Carlo learning/biasing experiment with intelligent random numbers

    International Nuclear Information System (INIS)

    Booth, T.E.

    1985-01-01

    A Monte Carlo learning and biasing technique is described that does its learning and biasing in the random number space rather than the physical phase-space. The technique is probably applicable to all linear Monte Carlo problems, but no proof is provided here. Instead, the technique is illustrated with a simple Monte Carlo transport problem. Problems encountered, problems solved, and speculations about future progress are discussed. 12 refs

  1. Temperature variance study in Monte-Carlo photon transport theory

    International Nuclear Information System (INIS)

    Giorla, J.

    1985-10-01

    We study different Monte-Carlo methods for solving radiative transfer problems, and particularly Fleck's Monte-Carlo method. We first give the different time-discretization schemes and the corresponding stability criteria. Then we write the temperature variance as a function of the variances of temperature and absorbed energy at the previous time step. Finally we obtain some stability criteria for the Monte-Carlo method in the stationary case [fr]

  2. Short-run and long-run dynamics of farm land allocation

    DEFF Research Database (Denmark)

    Arnberg, Søren; Hansen, Lars Gårn

    2012-01-01

    This study develops and estimates a dynamic multi-output model of farmers’ land allocation decisions that allows for the gradual adjustment of allocations that can result from crop rotation practices and quasi-fixed capital constraints. Estimation is based on micro-panel data from Danish farmers that include acreage, output, and variable input utilization at the crop level. Results indicate that there are substantial differences between the short-run and long-run land allocation behaviour of Danish farmers and that there are substantial differences in the time lags associated with different crops...

  3. Monte Carlo applications to radiation shielding problems

    International Nuclear Information System (INIS)

    Subbaiah, K.V.

    2009-01-01

    Monte Carlo methods are a class of computational algorithms that rely on repeated random sampling of physical and mathematical systems to compute their results. The basic concepts of Monte Carlo are both simple and straightforward and can be learned by using a personal computer. Uses of Monte Carlo methods require large amounts of random numbers, and it was their use that spurred the development of pseudorandom number generators, which were far quicker to use than the tables of random numbers which had been previously used for statistical sampling. In Monte Carlo simulation of radiation transport, the history (track) of a particle is viewed as a random sequence of free flights that end with an interaction event where the particle changes its direction of movement, loses energy and, occasionally, produces secondary particles. The Monte Carlo simulation of a given experimental arrangement (e.g., an electron beam, coming from an accelerator and impinging on a water phantom) consists of the numerical generation of random histories. To simulate these histories we need an interaction model, i.e., a set of differential cross sections (DCS) for the relevant interaction mechanisms. The DCSs determine the probability distribution functions (pdfs) of the random variables that characterize a track: (1) free path between successive interaction events, (2) type of interaction taking place and (3) energy loss and angular deflection in a particular event (and initial state of emitted secondary particles, if any). Once these pdfs are known, random histories can be generated by using appropriate sampling methods. If the number of generated histories is large enough, quantitative information on the transport process may be obtained by simply averaging over the simulated histories. The Monte Carlo method yields the same information as the solution of the Boltzmann transport equation, with the same interaction model, but is easier to implement. In particular, the simulation of radiation
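
    The first pdf in that list, the free-flight length, is the canonical sampling example: inverting the exponential attenuation law gives s = -ln(1 - xi)/sigma_t for a uniform deviate xi. A short illustrative check:

        import numpy as np

        rng = np.random.default_rng(1)
        sigma_t = 0.5                            # total cross section (1/cm)
        xi = rng.random(100_000)
        s = -np.log(1.0 - xi) / sigma_t          # p(s) = sigma_t*exp(-sigma_t*s)
        print(s.mean())                          # ~ 1/sigma_t = 2.0 cm mean free path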

  4. Effects of Heavy Strength Training on Running Performance and Determinants of Running Performance in Female Endurance Athletes

    Science.gov (United States)

    Vikmoen, Olav; Raastad, Truls; Seynnes, Olivier; Bergstrøm, Kristoffer; Ellefsen, Stian; Rønnestad, Bent R.

    2016-01-01

    Purpose The purpose of the current study was to investigate the effects of adding strength training to normal endurance training on running performance and running economy in well-trained female athletes. We hypothesized that the added strength training would improve performance and running economy through altered stiffness of the muscle-tendon complex of leg extensors. Methods Nineteen female endurance athletes [maximal oxygen consumption (VO2max): 53±3 ml∙kg-1∙min-1, 5.8 h weekly endurance training] were randomly assigned to either normal endurance training (E, n = 8) or normal endurance training combined with strength training (E+S, n = 11). The strength training consisted of four leg exercises [3 x 4–10 repetition maximum (RM)], twice a week for 11 weeks. Muscle strength, 40 min all-out running distance, running performance determinants and patellar tendon stiffness were measured before and after the intervention. Results E+S increased 1RM in leg exercises (40 ± 15%) and maximal jumping height in counter movement jump (6 ± 6%) and squat jump (9 ± 7%, p < 0.05), but did not change running economy, fractional utilization of VO2max or VO2max. There was also no change in running distance during a 40 min all-out running test in either group. Conclusion Adding heavy strength training to endurance training did not affect 40 min all-out running performance or running economy compared to endurance training only. PMID:26953893

  5. Children's Fitness. Managing a Running Program.

    Science.gov (United States)

    Hinkle, J. Scott; Tuckman, Bruce W.

    1987-01-01

    A running program to increase the cardiovascular fitness levels of fourth-, fifth-, and sixth-grade children is described. Discussed are the running environment, implementation of a running program, feedback, and reinforcement. (MT)

  6. Randomized quasi-Monte Carlo simulation of fast-ion thermalization

    Science.gov (United States)

    Höök, L. J.; Johnson, T.; Hellsten, T.

    2012-01-01

    This work investigates the applicability of the randomized quasi-Monte Carlo method for simulation of fast-ion thermalization processes in fusion plasmas, e.g. for simulation of neutral beam injection and radio frequency heating. In contrast to the standard Monte Carlo method, the quasi-Monte Carlo method uses deterministic numbers instead of pseudo-random numbers and has a statistical weak convergence close to O(N⁻¹), where N is the number of markers. We have compared different quasi-Monte Carlo methods for a neutral beam injection scenario, which is solved by many realizations of the associated stochastic differential equation, discretized with the Euler-Maruyama scheme. The statistical convergence of the methods is measured for time steps up to 2¹⁴.
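
    A minimal sketch of the approach (assumed details: an Ornstein-Uhlenbeck toy SDE standing in for fast-ion slowing-down, and SciPy's scrambled Sobol generator), where each Euler-Maruyama step consumes one coordinate of a randomized low-discrepancy point:

        import numpy as np
        from scipy.stats import norm, qmc

        def rqmc_ou_mean(n_steps=64, n_paths=2**12, dt=0.01,
                         gamma=1.0, D=0.5, v0=10.0, seed=0):
            # dv = -gamma*v dt + sqrt(2D) dW, Euler-Maruyama discretization;
            # QMC dimension = number of time steps.
            sob = qmc.Sobol(d=n_steps, scramble=True, seed=seed)
            z = norm.ppf(sob.random(n_paths))    # scrambled-Sobol normal increments
            v = np.full(n_paths, v0)
            for k in range(n_steps):
                v += -gamma * v * dt + np.sqrt(2.0 * D * dt) * z[:, k]
            return v.mean()                      # randomized-QMC estimate of E[v_T]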

  7. A user's manual for the three-dimensional Monte Carlo transport code SPARTAN

    International Nuclear Information System (INIS)

    Bending, R.C.; Heffer, P.J.H.

    1975-09-01

    SPARTAN is a general-purpose Monte Carlo particle transport code intended for neutron or gamma transport problems in reactor physics, health physics, shielding, and safety studies. The code uses a very general geometry system enabling a complex layout to be described, and allows the user to obtain physics data from a number of different types of source library. Special tracking and scoring techniques are used to improve the quality of the results obtained. To enable users to run SPARTAN, brief descriptions of the facilities available in the code are given and full details of data input and job control language, as well as examples of complete calculations, are included. It is anticipated that changes may be made to SPARTAN from time to time, particularly in those parts of the code which deal with physics data processing. The load module is identified by a version number and implementation date, and updates of sections of this manual will be issued when significant changes are made to the code. (author)

  8. Modifications to the Monte Carlo neutronics code MONK

    International Nuclear Information System (INIS)

    Hutton, J.L.

    1979-09-01

    The Monte Carlo neutronics code MONK has been widely used for criticality calculations, and is one of the standard methods for assessing the safety of transport flasks and fuel storage facilities in the UK. Recently, attempts have been made to extend the range of applications of this calculational technique. In particular, studies have been carried out using Monte Carlo to analyse reactor physics experiments. In these applications various shortcomings of the standard version MONK5 became apparent. The basic data library was found to be inadequate, and additional estimates of parameters (e.g. power distribution) not normally included in criticality studies were required. These features which required improvement, primarily in the context of using the code for reactor physics calculations, are enumerated. To facilitate the use of the code as a reactor physics calculational tool, a series of modifications have been carried out. The code has been modified so that the user can use group data tabulations of the cross sections instead of the present 'point' data values. The code can now interface with a number of reactor physics group data preparation schemes but in particular it can use WIMS-E interfaces as a source of group data. Details of the changes are outlined and a new version of MONK incorporating these modifications has been created. This version is called MONK5W. This paper provides a guide to the use of this version. The data input is described along with other details required to use this code on the Harwell IBM 3033. To aid the user, examples of calculations using the new facilities incorporated in MONK5W are given. (UK)

  9. The study of thin film growth by using Monte Carlo method

    International Nuclear Information System (INIS)

    Tandogan, M.; Aktas, S.

    2010-01-01

    Thin film growth was studied by using the Monte Carlo simulation method. Three basic models were used in this study. In Model A, the gas particles used for the formation of the film were under no external effects until they stuck to the surface, or to another particle already stuck on the surface, to form the film. In Model B, the gases were drifted towards the surface by an external agent. In Model C, the gas particles in the closed container were always distributed uniformly throughout the container while in the gas state. The simulations revealed that Model C gave the best result for preparing an ideal thin film, while a thicker film of better quality could be obtained with Model B.

  10. Open-Source Development Experiences in Scientific Software: The HANDE Quantum Monte Carlo Project

    Directory of Open Access Journals (Sweden)

    J. S. Spencer

    2015-11-01

    Full Text Available The HANDE quantum Monte Carlo project offers accessible stochastic algorithms for general use for scientists in the field of quantum chemistry. HANDE is an ambitious and general high-performance code developed by a geographically-dispersed team with a variety of backgrounds in computational science. In the course of preparing a public, open-source release, we have taken this opportunity to step back and look at what we have done and what we hope to do in the future. We pay particular attention to development processes, the approach taken to train students joining the project, and how a flat hierarchical structure aids communication.

  11. Responding for sucrose and wheel-running reinforcement: effects of sucrose concentration and wheel-running reinforcer duration.

    Science.gov (United States)

    Belke, Terry W; Hancock, Stephanie D

    2003-03-01

    Six male albino rats were placed in running wheels and exposed to a fixed-interval 30-s schedule of lever pressing that produced either a drop of sucrose solution or the opportunity to run for a fixed duration as reinforcers. Each reinforcer type was signaled by a different stimulus. In Experiment 1, the duration of running was held constant at 15 s while the concentration of sucrose solution was varied across values of 0, 2.5, 5, 10, and 15%. As concentration decreased, postreinforcement pause duration increased and local rates decreased in the presence of the stimulus signaling sucrose. Consequently, the difference between responding in the presence of stimuli signaling wheel-running and sucrose reinforcers diminished, and at 2.5%, response functions for the two reinforcers were similar. In Experiment 2, the concentration of sucrose solution was held constant at 15% while the duration of the opportunity to run was first varied across values of 15, 45, and 90 s then subsequently across values of 5, 10, and 15 s. As run duration increased, postreinforcement pause duration in the presence of the wheel-running stimulus increased and local rates increased then decreased. In summary, inhibitory aftereffects of previous reinforcers occurred when both sucrose concentration and run duration varied; changes in responding were attributable to changes in the excitatory value of the stimuli signaling the two reinforcers.

  12. A Monte Carlo algorithm for the Vavilov distribution

    International Nuclear Information System (INIS)

    Yi, Chul-Young; Han, Hyon-Soo

    1999-01-01

    Using the convolution property of the inverse Laplace transform, an improved Monte Carlo algorithm for the Vavilov energy-loss straggling distribution of the charged particle is developed, which is relatively simple and gives enough accuracy to be used for most Monte Carlo applications

  13. Adaptive Multilevel Monte Carlo Simulation

    KAUST Repository

    Hoel, H

    2011-08-23

    This work generalizes a multilevel forward Euler Monte Carlo method introduced by Michael B. Giles (Oper. Res. 56(3):607–617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. That work proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single-level forward Euler Monte Carlo method. The present work introduces an adaptive hierarchy of non-uniform time discretizations, generated by an adaptive algorithm introduced in (Dzougoutov et al., Adaptive Monte Carlo algorithms for stopped diffusion, in Multiscale Methods in Science and Engineering, Lect. Notes Comput. Sci. Eng. 44, pages 59–88, Springer, Berlin, 2005; Moon et al., Stoch. Anal. Appl. 23(3):511–558, 2005; Moon et al., An adaptive algorithm for ordinary, stochastic and partial differential equations, in Recent Advances in Adaptive Computation, Contemp. Math. 383, pages 325–343, Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates stochastic, path-dependent time steps and is based on a posteriori error expansions first developed in (Szepessy et al., Comm. Pure Appl. Math. 54(10):1169–1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL): from O(TOL⁻³) for a single-level version of the adaptive algorithm to O((TOL⁻¹ log(TOL))²).
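
    A plain, non-adaptive multilevel forward Euler sketch (illustrative only; the paper's contribution is precisely to replace these uniform grids with stochastic, path-dependent steps) shows the telescoping structure, here for geometric Brownian motion:

        import numpy as np

        def mlmc_gbm(L=5, N0=10_000, T=1.0, mu=0.05, sig=0.2, x0=1.0, seed=0):
            # Level l uses 2**l Euler steps; coarse/fine paths share the same
            # Brownian increments, so the level corrections have small variance
            # and the estimates telescope to E[X_T].
            rng = np.random.default_rng(seed)
            est = 0.0
            for l in range(L + 1):
                n_f = 2**l
                N = max(N0 >> l, 100)            # fewer samples on finer levels
                dW = rng.normal(0.0, np.sqrt(T / n_f), (N, n_f))
                xf = np.full(N, x0)
                for k in range(n_f):             # fine path
                    xf += mu * xf * (T / n_f) + sig * xf * dW[:, k]
                if l == 0:
                    est += xf.mean()
                else:
                    xc = np.full(N, x0)
                    dWc = dW.reshape(N, n_f // 2, 2).sum(axis=2)
                    for k in range(n_f // 2):    # coupled coarse path
                        xc += mu * xc * (2 * T / n_f) + sig * xc * dWc[:, k]
                    est += (xf - xc).mean()      # level correction
            return est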

  14. Monte Carlo computation in the applied research of nuclear technology

    International Nuclear Information System (INIS)

    Xu Shuyan; Liu Baojie; Li Qin

    2007-01-01

    This article briefly introduces Monte Carlo methods and their properties. It surveys Monte Carlo methods with emphasis on their applications to several domains of nuclear technology. Monte Carlo simulation methods and several commonly used computer software packages to implement them are also introduced. The proposed methods are demonstrated by a real example. (authors)

  15. Nested Sampling with Constrained Hamiltonian Monte Carlo

    OpenAIRE

    Betancourt, M. J.

    2010-01-01

    Nested sampling is a powerful approach to Bayesian inference ultimately limited by the computationally demanding task of sampling from a heavily constrained probability distribution. An effective algorithm in its own right, Hamiltonian Monte Carlo is readily adapted to efficiently sample from any smooth, constrained distribution. Utilizing this constrained Hamiltonian Monte Carlo, I introduce a general implementation of the nested sampling algorithm.
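
    A bare-bones nested sampling loop (an assumed sketch: uniform unit-cube prior, and naive rejection in place of the constrained Hamiltonian Monte Carlo step, which is exactly the bottleneck the record says CHMC removes):

        import numpy as np

        def nested_sampling(loglike, ndim, n_live=200, n_iter=1000, seed=0):
            rng = np.random.default_rng(seed)
            live = rng.random((n_live, ndim))
            logL = np.array([loglike(p) for p in live])
            logZ = -np.inf
            log_width = np.log1p(-np.exp(-1.0 / n_live))  # log(1 - e^(-1/n))
            for i in range(n_iter):
                worst = np.argmin(logL)
                logZ = np.logaddexp(logZ, log_width - i / n_live + logL[worst])
                while True:   # the hard constrained-prior draw CHMC accelerates
                    p = rng.random(ndim)
                    if loglike(p) > logL[worst]:
                        break
                live[worst], logL[worst] = p, loglike(p)
            return logZ      # evidence estimate (remaining-volume term omitted)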

  16. Barefoot running survey: Evidence from the field

    OpenAIRE

    David Hryvniak; Jay Dicharry; Robert Wilder

    2014-01-01

    Background: Running is becoming an increasingly popular activity among Americans with over 50 million participants. Running shoe research and technology has continued to advance with no decrease in overall running injury rates. A growing group of runners are making the choice to try the minimal or barefoot running styles of the pre-modern running shoe era. There is some evidence of decreased forces and torques on the lower extremities with barefoot running, but no clear data regarding how thi...

  17. The effect of three surface conditions, speed and running experience on vertical acceleration of the tibia during running.

    Science.gov (United States)

    Boey, Hannelore; Aeles, Jeroen; Schütte, Kurt; Vanwanseele, Benedicte

    2017-06-01

    Research has focused on parameters that are associated with injury risk, e.g. vertical acceleration. These parameters can be influenced by running on different surfaces or at different running speeds, but the relationship between them is not completely clear. Understanding the relationship may result in training guidelines to reduce the injury risk. In this study, thirty-five participants with three different levels of running experience were recruited. Participants ran on three different surfaces (concrete, synthetic running track, and woodchip trail) at two different running speeds: a self-selected comfortable speed and a fixed speed of 3.06 m/s. Vertical acceleration of the lower leg was measured with an accelerometer. The vertical acceleration was significantly lower during running on the woodchip trail in comparison with the synthetic running track and the concrete, and significantly lower during running at lower speed in comparison with during running at higher speed on all surfaces. No significant differences in vertical acceleration were found between the three groups of runners at fixed speed. Higher self-selected speed due to higher performance level also did not result in higher vertical acceleration. These results may show that running on a woodchip trail and slowing down could reduce the injury risk at the tibia.

  18. Influence of footwear designed to boost energy return on running economy in comparison to a conventional running shoe.

    Science.gov (United States)

    Sinclair, J; Mcgrath, R; Brook, O; Taylor, P J; Dillon, S

    2016-01-01

    Running economy is a reflection of the amount of inspired oxygen required to maintain a given velocity and is considered a determining factor for running performance. Athletic footwear has been advocated as a mechanism by which running economy can be enhanced. New commercially available footwear has been developed in order to increase energy return, although their efficacy has not been investigated. This study aimed to examine the effects of energy return footwear on running economy in relation to conventional running shoes. Twelve male runners completed 6-min steady-state runs in conventional and energy return footwear. Overall, oxygen consumption (VO2), heart rate, respiratory exchange ratio, shoe comfort and rating of perceived exertion were assessed. Moreover, participants subjectively indicated which shoe condition they preferred for running. Differences in shoe comfort and physiological parameters were examined using Wilcoxon signed-rank tests, whilst shoe preferences were tested using a chi-square analysis. The results showed that VO2 and respiratory exchange ratio were significantly lower, and shoe comfort was significantly greater, in the energy return footwear. Given the relationship between running economy and running performance, these observations indicate that the energy return footwear may be associated with enhanced running performance in comparison to conventional shoes.

  19. Statistics of Monte Carlo methods used in radiation transport calculation

    International Nuclear Information System (INIS)

    Datta, D.

    2009-01-01

    Radiation transport calculations can be carried out using either deterministic or statistical methods. Radiation transport calculation based on statistical methods is the basic theme of the Monte Carlo method. The aim of this lecture is to describe the fundamental statistics required to build the foundations of the Monte Carlo technique for radiation transport calculation. The lecture note is organized as follows. Section (1) introduces basic Monte Carlo and its classification within the field. Section (2) describes random sampling methods, a key component of Monte Carlo radiation transport calculation. Section (3) covers the statistical uncertainty of Monte Carlo estimates. Section (4) briefly describes the importance of variance reduction techniques when sampling particles such as photons or neutrons in the radiation transport process.
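
    As a minimal illustration of the statistical uncertainty discussed in Section (3), the following toy Python sketch (not part of the lecture note; the integrand and sample size are assumptions) estimates an integral by plain Monte Carlo and reports a standard error, which shrinks as 1/sqrt(N):

        import random, math

        def mc_estimate(f, sampler, n):
            """Plain Monte Carlo: sample x_i, average f(x_i), report standard error."""
            total = total_sq = 0.0
            for _ in range(n):
                y = f(sampler())
                total += y
                total_sq += y * y
            mean = total / n
            # Unbiased sample variance; the standard error shrinks as 1/sqrt(n).
            var = (total_sq - n * mean * mean) / (n - 1)
            return mean, math.sqrt(var / n)

        # Example: estimate the integral of exp(-x) over [0, 1] (exact: 1 - 1/e).
        mean, se = mc_estimate(math.exp, lambda: -random.random(), 100_000)
        print(f"estimate = {mean:.5f} +/- {se:.5f}  (exact = {1 - math.exp(-1):.5f})")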

  20. Overcoming the "Run" Response

    Science.gov (United States)

    Swanson, Patricia E.

    2013-01-01

    Recent research suggests that it is not simply experiencing anxiety that affects mathematics performance but also how one responds to and regulates that anxiety (Lyons and Beilock 2011). Most people have faced mathematics problems that have triggered their "run response." The issue is not whether one wants to run, but rather…

  1. Monte Carlo code criticality benchmark comparisons for waste packaging

    International Nuclear Information System (INIS)

    Alesso, H.P.; Annese, C.E.; Buck, R.M.; Pearson, J.S.; Lloyd, W.R.

    1992-07-01

    COG is a new point-wise Monte Carlo code being developed and tested at Lawrence Livermore National Laboratory (LLNL). It solves the Boltzmann equation for the transport of neutrons and photons. The objective of this paper is to report on COG results for criticality benchmark experiments both on a Cray mainframe and on an HP 9000 workstation. COG has recently been ported to workstations to improve its accessibility to a wider community of users. COG has some similarities to a number of other computer codes used in the shielding and criticality community. The recently introduced high-performance reduced instruction set (RISC) UNIX workstations provide computational power that approaches mainframes at a fraction of the cost. A version of COG is currently being developed for the Hewlett Packard 9000/730 computer with a UNIX operating system. Subsequent porting operations will move COG to SUN, DEC, and IBM workstations. In addition, a CAD system for preparation of the geometry input for COG is being developed. In July 1977, Babcock & Wilcox Co. (B&W) was awarded a contract to conduct a series of critical experiments that simulated close-packed storage of LWR-type fuel. These experiments provided data for benchmarking and validating calculational methods used in predicting k-effective of nuclear fuel storage in close-packed, neutron-poisoned arrays. Low-enriched UO2 fuel pins in water-moderated lattices in fuel storage represent a challenging criticality calculation for Monte Carlo codes, particularly when the fuel pins extend out of the water. COG and KENO calculational results for these criticality benchmark experiments are presented

  2. Analysis of QUADOS problem on TLD-ALBEDO personal dosemeter responses using discrete ordinates and Monte Carlo methods

    International Nuclear Information System (INIS)

    Kodeli, I.; Tanner, R.

    2005-01-01

    In the scope of QUADOS, a Concerted Action of the European Commission, eight calculational problems were prepared in order to evaluate the use of computational codes for dosimetry in radiation protection and medical physics, and to disseminate 'good practice' throughout the radiation dosimetry community. This paper focuses on the analysis of the P4 problem on the 'TLD-albedo dosemeter: neutron and/or photon response of a four-element TL-dosemeter mounted on a standard ISO slab phantom'. Altogether 17 solutions were received from the participants; 14 of these transported neutrons and 15 photons. Most participants (16 out of 17) used Monte Carlo methods. These calculations are time-consuming, requiring several days of CPU time to perform the whole set of calculations and achieve good statistical precision. The possibility of using deterministic discrete ordinates codes as an alternative to Monte Carlo was therefore investigated and is presented here. In particular the capacity of the adjoint mode calculations is shown. (authors)

  3. Multiple histogram method and static Monte Carlo sampling

    NARCIS (Netherlands)

    Inda, M.A.; Frenkel, D.

    2004-01-01

    We describe an approach to use multiple-histogram methods in combination with static, biased Monte Carlo simulations. To illustrate this, we computed the force-extension curve of an athermal polymer from multiple histograms constructed in a series of static Rosenbluth Monte Carlo simulations. From

  4. Forest canopy BRDF simulation using Monte Carlo method

    NARCIS (Netherlands)

    Huang, J.; Wu, B.; Zeng, Y.; Tian, Y.

    2006-01-01

    The Monte Carlo method is a stochastic statistical method that has been widely used to simulate the Bidirectional Reflectance Distribution Function (BRDF) of vegetation canopies in the field of visible remote sensing. The random interaction process between photons and the forest canopy was modelled using the Monte Carlo method.
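
    A minimal 1-D analogue of such photon tracing, assuming a homogeneous slab with isotropic scattering rather than the paper's canopy model (optical depth and albedo values are illustrative), can be sketched in Python:

        import random, math

        def slab_reflectance(tau, albedo, n_photons=200_000):
            """Track photons through a 1-D homogeneous slab of optical depth tau
            with isotropic scattering; return the fraction escaping upward."""
            reflected = 0
            for _ in range(n_photons):
                depth, mu = 0.0, 1.0            # enter at the top, moving down
                while True:
                    # Free path ~ Exp(1) in optical-depth units.
                    depth += mu * -math.log(1.0 - random.random())
                    if depth < 0.0:
                        reflected += 1          # escaped upward: reflected
                        break
                    if depth > tau:
                        break                   # transmitted through the slab
                    if random.random() > albedo:
                        break                   # absorbed at the collision
                    mu = 2.0 * random.random() - 1.0   # isotropic re-direction
            return reflected / n_photons

        print(slab_reflectance(tau=1.0, albedo=0.9))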

  5. Street children: “Running from” or “running to”?

    Directory of Open Access Journals (Sweden)

    J. le Roux

    1997-03-01

    Full Text Available The street child phenomenon presents a complex issue resulting from a diversity of integrated factors. The problem should therefore preferably be explained and addressed holistically. A search of the available literature on street children clearly indicates that street children per se are not the primary problem. The phenomenon of street children is merely a symptom of a problem underlying the intolerable situation of these children's family and community lives. In this article it is explained that the street child phenomenon is thus symptomatic of contemporary twentieth-century conditions. "Running from" and "running to" are in fact interrelated tendencies or reactions to a complicated, polarised society: two sides of the same coin.

  6. Shell model Monte Carlo methods

    International Nuclear Information System (INIS)

    Koonin, S.E.

    1996-01-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs

  7. Discrete Diffusion Monte Carlo for Electron Thermal Transport

    Science.gov (United States)

    Chenhall, Jeffrey; Cao, Duc; Wollaeger, Ryan; Moses, Gregory

    2014-10-01

    The iSNB (implicit Schurtz-Nicolai-Busquet) electron thermal transport method of Cao et al. is adapted to a Discrete Diffusion Monte Carlo (DDMC) solution method for eventual inclusion in a hybrid IMC-DDMC (Implicit Monte Carlo-Discrete Diffusion Monte Carlo) method. The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the iSNB-DDMC method will be presented. This work was supported by Sandia National Laboratory - Albuquerque.

  8. Monte Carlo techniques in diagnostic and therapeutic nuclear medicine

    International Nuclear Information System (INIS)

    Zaidi, H.

    2002-01-01

    Monte Carlo techniques have become one of the most popular tools in different areas of medical radiation physics following the development and subsequent implementation of powerful computing systems for clinical use. In particular, they have been extensively applied to simulate processes involving random behaviour and to quantify physical parameters that are difficult or even impossible to calculate analytically or to determine by experimental measurements. The use of the Monte Carlo method to simulate radiation transport turned out to be the most accurate means of predicting absorbed dose distributions and other quantities of interest in the radiation treatment of cancer patients using either external or radionuclide radiotherapy. The same trend has occurred for the estimation of the absorbed dose in diagnostic procedures using radionuclides. There is broad consensus in accepting that the earliest Monte Carlo calculations in medical radiation physics were made in the area of nuclear medicine, where the technique was used for dosimetry modelling and computations. Formalism and data based on Monte Carlo calculations, developed by the Medical Internal Radiation Dose (MIRD) committee of the Society of Nuclear Medicine, were published in a series of supplements to the Journal of Nuclear Medicine, the first one being released in 1968. Some of these pamphlets made extensive use of Monte Carlo calculations to derive specific absorbed fractions for electron and photon sources uniformly distributed in organs of mathematical phantoms. Interest in Monte Carlo-based dose calculations with β-emitters has been revived with the application of radiolabelled monoclonal antibodies to radioimmunotherapy. As a consequence of this generalized use, many questions are being raised primarily about the need and potential of Monte Carlo techniques, but also about how accurate it really is, what would it take to apply it clinically and make it available widely to the medical physics

  9. Monte Carlo strategies in scientific computing

    CERN Document Server

    Liu, Jun S

    2008-01-01

    This paperback edition is a reprint of the 2001 Springer edition. This book provides a self-contained and up-to-date treatment of the Monte Carlo method and develops a common framework under which various Monte Carlo techniques can be "standardized" and compared. Given the interdisciplinary nature of the topics and a moderate prerequisite for the reader, this book should be of interest to a broad audience of quantitative researchers such as computational biologists, computer scientists, econometricians, engineers, probabilists, and statisticians. It can also be used as the textbook for a graduate-level course on Monte Carlo methods. Many problems discussed in the later chapters can be potential thesis topics for master's or PhD students in statistics or computer science departments. Jun Liu is Professor of Statistics at Harvard University, with a courtesy Professor appointment at the Harvard Biostatistics Department. Professor Liu was the recipient of the 2002 COPSS Presidents' Award, the most prestigious one for sta...

  10. Off-diagonal expansion quantum Monte Carlo.

    Science.gov (United States)

    Albash, Tameem; Wagenbreth, Gene; Hen, Itay

    2017-12-01

    We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.

  11. A theoretical perspective on running-related injuries.

    Science.gov (United States)

    Gallant, Jodi Lynn; Pierrynowski, Michael Raymond

    2014-03-01

    The etiology of running-related injuries remains unknown; however, an implicit theory underlies much of the conventional research and practice in the prevention of these injuries. This theory posits that the cause of running-related injuries lies in the high-impact forces experienced when the foot contacts the ground and the subsequent abnormal movement of the subtalar joint. The application of this theory is seen in the design of the modern running shoe, with cushioning, support, and motion control. However, a new theory is emerging that suggests that it is the use of these modern running shoes that has caused a maladaptive running style, which contributes to a high incidence of injury among runners. The suggested application of this theory is to cease use of the modern running shoe and transition to barefoot or minimalist running. This new running paradigm, which is at present inadequately defined, is proposed to avoid the adverse biomechanical effects of the modern running shoe. Future research should rigorously define and then test both theories regarding their ability to discover the etiology of running-related injury. Once discovered, the putative cause of running-related injury will then provide an evidence-based rationale for clinical prevention and treatment.

  12. Dynamic bounds coupled with Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Rajabalinejad, M., E-mail: M.Rajabalinejad@tudelft.n [Faculty of Civil Engineering, Delft University of Technology, Delft (Netherlands); Meester, L.E. [Delft Institute of Applied Mathematics, Delft University of Technology, Delft (Netherlands); Gelder, P.H.A.J.M. van; Vrijling, J.K. [Faculty of Civil Engineering, Delft University of Technology, Delft (Netherlands)

    2011-02-15

    For the reliability analysis of engineering structures a variety of methods is known, of which Monte Carlo (MC) simulation is widely considered to be among the most robust and most generally applicable. To reduce the simulation cost of the MC method, variance reduction methods are applied. This paper describes a method to reduce the simulation cost even further, while retaining the accuracy of Monte Carlo, by taking into account the widely present monotonicity. For models exhibiting monotonic (decreasing or increasing) behavior, dynamic bounds (DB) are defined, which are updated dynamically in a coupled Monte Carlo simulation, resulting in a failure probability estimate as well as strict (non-probabilistic) upper and lower bounds. Accurate results are obtained at a much lower cost than an equivalent ordinary Monte Carlo simulation. In a two-dimensional and a four-dimensional numerical example, the cost reduction factors are 130 and 9, respectively, where the relative error is smaller than 5%. At higher accuracy levels, this factor increases, though this effect is expected to be smaller with increasing dimension. To show the application of the DB method to real-world problems, it is applied to a complex finite element model of a flood wall in New Orleans.
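
    A minimal Python sketch of the monotonicity idea, with an assumed toy limit state g (failure when g < 0): earlier evaluations are reused as dynamic bounds so that dominated samples are classified without calling the model. The paper's actual DB bookkeeping is more refined than this brute-force scan.

        import random

        def is_failure(x, g, known_fail, known_safe):
            """Classify sample x for a limit state g that is monotonically
            decreasing in every coordinate. Earlier evaluations act as
            dynamic bounds, so dominated samples skip the model call."""
            # Componentwise >= a known failure point implies g(x) < 0.
            if any(all(xi >= fi for xi, fi in zip(x, f)) for f in known_fail):
                return True
            # Componentwise <= a known safe point implies g(x) >= 0.
            if any(all(xi <= si for xi, si in zip(x, s)) for s in known_safe):
                return False
            fail = g(x) < 0.0                   # fall back to the (expensive) model
            (known_fail if fail else known_safe).append(x)
            return fail

        # Assumed toy limit state, monotone decreasing in both coordinates.
        g = lambda x: 3.0 - x[0] - 2.0 * x[1]
        known_fail, known_safe = [], []
        n = 20_000
        hits = sum(is_failure([random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)],
                              g, known_fail, known_safe)
                   for _ in range(n))
        print("failure probability ~", hits / n)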

  13. Coded aperture optimization using Monte Carlo simulations

    International Nuclear Information System (INIS)

    Martineau, A.; Rocchisani, J.M.; Moretti, J.L.

    2010-01-01

    Coded apertures using Uniformly Redundant Arrays (URA) have been unsuccessfully evaluated for two-dimensional and three-dimensional imaging in Nuclear Medicine. The images reconstructed from coded projections contain artifacts and suffer from poor spatial resolution in the longitudinal direction. We introduce a Maximum-Likelihood Expectation-Maximization (MLEM) algorithm for three-dimensional coded aperture imaging which uses a projection matrix calculated by Monte Carlo simulations. The aim of the algorithm is to reduce artifacts and improve the three-dimensional spatial resolution in the reconstructed images. First, we present the validation of GATE (Geant4 Application for Emission Tomography) for Monte Carlo simulations of a coded mask installed on a clinical gamma camera. The coded mask modelling was validated by comparison between experimental and simulated data in terms of energy spectra, sensitivity and spatial resolution. In the second part of the study, we use the validated model to calculate the projection matrix with Monte Carlo simulations. A three-dimensional thyroid phantom study was performed to compare the performance of the three-dimensional MLEM reconstruction with the conventional correlation method. The results indicate that the artifacts are reduced and the three-dimensional spatial resolution is improved with the Monte Carlo-based MLEM reconstruction.
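
    The MLEM update itself is standard; a minimal Python sketch with an assumed toy projection matrix A (in the paper, A comes from GATE Monte Carlo simulations of the coded mask) is:

        import numpy as np

        def mlem(A, y, n_iter=50):
            """Maximum-Likelihood Expectation-Maximization for y ~ Poisson(A @ x).
            A is a (here: toy) projection matrix; x is the image estimate."""
            x = np.ones(A.shape[1])            # flat initial image
            sens = A.sum(axis=0)               # sensitivity (column sums)
            for _ in range(n_iter):
                proj = A @ x                   # forward-project current estimate
                ratio = y / np.maximum(proj, 1e-12)
                x *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative update
            return x

        # Toy 2-pixel object imaged by a 3-detector system.
        A = np.array([[0.8, 0.1],
                      [0.1, 0.8],
                      [0.1, 0.1]])
        x_true = np.array([4.0, 1.0])
        y = np.random.poisson(A @ x_true * 1000) / 1000.0
        print(mlem(A, y))                      # recovers approximately [4, 1]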

  14. Randomized quasi-Monte Carlo simulation of fast-ion thermalization

    International Nuclear Information System (INIS)

    Höök, L J; Johnson, T; Hellsten, T

    2012-01-01

    This work investigates the applicability of the randomized quasi-Monte Carlo method for simulation of fast-ion thermalization processes in fusion plasmas, e.g. for simulation of neutral beam injection and radio frequency heating. In contrast to the standard Monte Carlo method, the quasi-Monte Carlo method uses deterministic numbers instead of pseudo-random numbers and has a statistical weak convergence close to O(N^-1), where N is the number of markers. We have compared different quasi-Monte Carlo methods for a neutral beam injection scenario, which is solved by many realizations of the associated stochastic differential equation, discretized with the Euler-Maruyama scheme. The statistical convergence of the methods is measured for time steps up to 2^14. (paper)
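
    A minimal Python sketch of the randomized quasi-Monte Carlo mechanics, using a scrambled Sobol' sequence on a plain integration problem rather than the paper's stochastic differential equation (the integrand is an assumption):

        import numpy as np
        from scipy.stats import qmc

        # Estimate E[f(U)] for U uniform on [0,1)^2 with f(u) = exp(u1 + u2);
        # the exact value is (e - 1)^2.
        f = lambda u: np.exp(u[:, 0] + u[:, 1])
        exact = (np.e - 1.0) ** 2
        n = 2 ** 12                     # powers of two suit Sobol' points

        # Plain (pseudo-random) Monte Carlo.
        rng = np.random.default_rng(1)
        mc = f(rng.random((n, 2))).mean()

        # Randomized QMC: scrambling keeps the low-discrepancy structure while
        # independent randomizations allow an error estimate.
        estimates = [f(qmc.Sobol(d=2, scramble=True, seed=s).random(n)).mean()
                     for s in range(8)]
        print("MC error  :", abs(mc - exact))
        print("RQMC error:", abs(np.mean(estimates) - exact),
              "+/-", np.std(estimates) / np.sqrt(8))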

  15. Usefulness of the Monte Carlo method in reliability calculations

    International Nuclear Information System (INIS)

    Lanore, J.M.; Kalli, H.

    1977-01-01

    Three examples of reliability Monte Carlo programs developed in the LEP (Laboratory for Radiation Shielding Studies in the Nuclear Research Center at Saclay) are presented. First, an uncertainty analysis is given for a simplified spray system; a Monte Carlo program PATREC-MC has been written to solve the problem with the system components given in the fault tree representation. The second program MONARC 2 has been written to solve the problem of complex systems reliability by the Monte Carlo simulation, here again the system (a residual heat removal system) is in the fault tree representation. Third, the Monte Carlo program MONARC was used instead of the Markov diagram to solve the simulation problem of an electric power supply including two nets and two stand-by diesels

  16. Combinatorial nuclear level density by a Monte Carlo method

    International Nuclear Information System (INIS)

    Cerf, N.

    1994-01-01

    We present a new combinatorial method for the calculation of the nuclear level density. It is based on a Monte Carlo technique, in order to avoid a direct counting procedure which is generally impracticable for high-A nuclei. The Monte Carlo simulation, making use of the Metropolis sampling scheme, allows a computationally fast estimate of the level density for many-fermion systems in large shell model spaces. We emphasize the advantages of this Monte Carlo approach, particularly concerning the prediction of the spin and parity distributions of the excited states, and compare our results with those derived from a traditional combinatorial or a statistical method. Such a Monte Carlo technique seems very promising for determining accurate level densities in a large energy range for nuclear reaction calculations.

  17. Monte Carlo variance reduction approaches for non-Boltzmann tallies

    International Nuclear Information System (INIS)

    Booth, T.E.

    1992-12-01

    Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed

  18. PRECIS Runs at IITM

    Indian Academy of Sciences (India)

    PRECIS Runs at IITM. Evaluation experiment using LBCs derived from ERA-15 (1979-93). Runs (3 ensembles in each experiment) already completed with LBCs having a length of 30 years each, for: Baseline (1961-90); A2 scenario (2071-2100); B2 scenario ...

  19. Variational Monte Carlo Technique

    Indian Academy of Sciences (India)

    Variational Monte Carlo Technique: Ground State Energies of Quantum Mechanical Systems. Sukanta Deb. General Article, Volume 19, Issue 8, August 2014, pp 713-739 ...
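
    A minimal Python sketch of the variational Monte Carlo technique on the 1-D harmonic oscillator (a standard textbook case, not necessarily the article's worked example; units hbar = m = omega = 1 are assumed):

        import random, math

        def vmc_energy(alpha, n_steps=100_000, step=1.0):
            """Variational Monte Carlo with trial wavefunction exp(-alpha*x^2):
            Metropolis sampling of |psi|^2, averaging the local energy
            E_L(x) = alpha + x^2 * (1/2 - 2*alpha^2)."""
            x, e_sum = 0.0, 0.0
            for _ in range(n_steps):
                x_new = x + step * (random.random() - 0.5)
                # Metropolis acceptance: ratio |psi(x_new)|^2 / |psi(x)|^2.
                if random.random() < math.exp(-2.0 * alpha * (x_new**2 - x**2)):
                    x = x_new
                e_sum += alpha + x * x * (0.5 - 2.0 * alpha * alpha)
            return e_sum / n_steps

        for a in (0.3, 0.5, 0.7):       # alpha = 0.5 is exact: E = 0.5
            print(a, vmc_energy(a))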

  20. [Osteoarthritis from long-distance running?].

    Science.gov (United States)

    Hohmann, E; Wörtler, K; Imhoff, A

    2005-06-01

    Long distance running has become a fashionable recreational activity. This study investigated the effects of external impact loading on bone and cartilage introduced by performing a marathon race. Seven beginners were compared to six experienced recreational long distance runners and two professional athletes. All participants underwent magnetic resonance imaging of the hip and knee before and after a marathon run. Coronal T1-weighted and STIR sequences were used. The pre-run MRI served as a baseline investigation and monitored the training effect. All athletes demonstrated normal findings in the pre-run scan. All but one athlete in the beginner group demonstrated joint effusions after the race. The experienced and professional runners failed to demonstrate pathology in the post-run scans. Recreational and professional long distance runners tolerate high impact forces well. Beginners demonstrate significant changes on the post-run scans. Whether those findings are a result of inadequate training (miles and duration) warrants further study. We conclude that adequate endurance training results in adaptation mechanisms that allow the athlete to compensate for the stresses introduced by long distance running and do not predispose to the onset of osteoarthritis. Significant malalignment of the lower extremity may cause increased focal loading of joint and cartilage.

  1. Developing concepts for improved efficiency of robot work preparation

    NARCIS (Netherlands)

    Essers, M.S.; Vaneker, Thomas H.J.

    2013-01-01

    SInBot[1] is a large research project that focuses on maximizing the efficient use of mobile industrial robots during medium-sized production runs. The system described in this paper focuses on the development and validation of concepts for efficient work preparation for cells of

  2. Discrete diffusion Monte Carlo for frequency-dependent radiative transfer

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Thompson, Kelly G.; Urbatsch, Todd J.

    2011-01-01

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique. (author)

  3. The design of the run Clever randomized trial

    DEFF Research Database (Denmark)

    Ramskov, Daniel; Nielsen, Rasmus Oestergaard; Sørensen, Henrik

    2016-01-01

    BACKGROUND: Injury incidence and prevalence in running populations have been investigated and documented in several studies. However, knowledge about injury etiology and prevention is needed. Training errors in running are modifiable risk factors and people engaged in recreational running need...... evidence-based running schedules to minimize the risk of injury. The existing literature on running volume and running intensity and the development of injuries show conflicting results. This may be related to previously applied study designs, methods used to quantify the performed running...... and the statistical analysis of the collected data. The aim of the Run Clever trial is to investigate if a focus on running intensity compared with a focus on running volume in a running schedule influences the overall injury risk differently. METHODS/DESIGN: The Run Clever trial is a randomized trial with a 24-week...

  4. Modified Monte Carlo procedure for particle transport problems

    International Nuclear Information System (INIS)

    Matthes, W.

    1978-01-01

    The simulation of photon transport in the atmosphere with the Monte Carlo method forms part of the EURASEP programme. The specifications of the problems posed for solution were such that the direct application of the analogue Monte Carlo method was not feasible. For this reason the standard Monte Carlo procedure was modified in the sense that additional properly weighted branchings at each collision and transport process in a photon history were introduced. This modified Monte Carlo procedure leads to a clear and logical separation of the essential parts of a problem and offers large flexibility for variance reducing techniques. More complex problems, as foreseen in the EURASEP programme (e.g. clouds in the atmosphere, rough ocean surface and chlorophyll distribution in the ocean), can be handled by recoding some subroutines. This collision- and transport-splitting procedure can of course be performed differently in different space and energy regions. It is applied here only to a homogeneous problem
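
    The simplest instance of such properly weighted branching is survival biasing, where absorption is replaced by a weight reduction; the Python sketch below illustrates the idea on an assumed 1-D slab, not the EURASEP code itself:

        import random, math

        def transmission(tau, albedo, n=100_000, w_min=0.05):
            """Estimate transmission through a 1-D scattering/absorbing slab
            using survival biasing: instead of killing a particle on
            absorption, multiply its weight by the scattering probability.
            Russian roulette on low weights keeps the estimator unbiased."""
            score = 0.0
            for _ in range(n):
                depth, mu, w = 0.0, 1.0, 1.0
                while True:
                    depth += mu * -math.log(1.0 - random.random())
                    if depth > tau:
                        score += w           # weighted tally at the far face
                        break
                    if depth < 0.0:
                        break                # escaped backwards
                    w *= albedo              # weighted branch: survive with reduced weight
                    if w < w_min:            # Russian roulette
                        if random.random() < 0.5:
                            break
                        w *= 2.0
                    mu = 2.0 * random.random() - 1.0   # isotropic scattering
            return score / n

        print(transmission(tau=2.0, albedo=0.5))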

  5. Uncertainty analysis in Monte Carlo criticality computations

    International Nuclear Information System (INIS)

    Qi Ao

    2011-01-01

    Highlights: ► Two types of uncertainty methods for k_eff Monte Carlo computations are examined. ► The sampling method has the fewest restrictions on perturbations but needs computing resources. ► The analytical method is limited to small perturbations of material properties. ► Practicality relies on efficiency, multiparameter applicability and data availability. - Abstract: Uncertainty analysis is imperative for nuclear criticality risk assessments when using Monte Carlo neutron transport methods to predict the effective neutron multiplication factor (k_eff) for fissionable material systems. For the validation of Monte Carlo codes for criticality computations against benchmark experiments, code accuracy and precision are measured by both the computational bias and the uncertainty in the bias. The uncertainty in the bias accounts for known or quantified experimental, computational and model uncertainties. For the application of Monte Carlo codes for criticality analysis of fissionable material systems, an administrative margin of subcriticality must be imposed to provide additional assurance of subcriticality for any unknown or unquantified uncertainties. Because of the substantial impact of the administrative margin of subcriticality on the economics and safety of nuclear fuel cycle operations, recently increasing interest in reducing this margin makes uncertainty analysis in criticality safety computations more risk-significant. This paper provides an overview of the two most popular k_eff uncertainty analysis methods for Monte Carlo criticality computations: (1) sampling-based methods, and (2) analytical methods. Examples are given to demonstrate their usage in k_eff uncertainty analysis due to uncertainties in both neutronic and non-neutronic parameters of fissionable material systems.
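
    A minimal Python sketch of the sampling-based approach on an assumed zero-dimensional toy model, k_inf = nu*Sigma_f/Sigma_a; real analyses perturb full nuclear-data libraries and rerun a Monte Carlo transport code per sample. All numbers below are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 10_000

        # Toy infinite-medium multiplication factor: k_inf = nu * Sigma_f / Sigma_a.
        # Means and assumed 1-sigma uncertainties of the inputs.
        nu      = rng.normal(2.43, 2.43 * 0.002, n)
        sigma_f = rng.normal(0.05, 0.05 * 0.010, n)   # macroscopic fission xs (1/cm)
        sigma_a = rng.normal(0.12, 0.12 * 0.008, n)   # macroscopic absorption xs (1/cm)

        k = nu * sigma_f / sigma_a
        print(f"k_inf = {k.mean():.5f} +/- {k.std(ddof=1):.5f}")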

  6. Effective action and brane running

    International Nuclear Information System (INIS)

    Brevik, Iver; Ghoroku, Kazuo; Yahiro, Masanobu

    2004-01-01

    We address the renormalized effective action for a Randall-Sundrum brane running in 5D bulk space. The running behavior of the brane action is obtained by shifting the brane position without changing the background and fluctuations. After an appropriate renormalization, we obtain an effective, low energy brane world action, in which the effective 4D Planck mass is independent of the running position. We address some implications for this effective action

  7. Fitness Assessment Comparison Between the "Jackie Chan Action Run" Videogame, 1-Mile Run/Walk, and the PACER.

    Science.gov (United States)

    Haddock, Bryan; Siegel, Shannon; Costa, Pablo; Jarvis, Sarah; Klug, Nicholas; Medina, Ernie; Wilkin, Linda

    2012-06-01

    The purpose of this study was to examine whether a correlation existed among the scores of the "Jackie Chan Studio Fitness(™) Action Run" active videogame (XaviX(®), SSD Company, Ltd., Kusatsu, Japan), the 1-mile run/walk, and Progressive Aerobic Cardiovascular Endurance Run (PACER) aerobic fitness tests of the FITNESSGRAM(®) (The Cooper Institute, Dallas, TX) in order to provide a potential alternative testing method for days that are not environmentally desirable for outdoor testing. Participants were a convenience sample from physical education classes of students between the ages of 10 and 15 years. Participants (n=108) were randomly assigned to one of three groups with the only difference being the order of testing. The tests included the "Jackie Chan Action Run" active videogame, the 1-mile run/walk, and the PACER. Testing occurred on three different days during the physical education class. Rating of perceived exertion (RPE) was reported. Significant correlations (r=-0.598 to 0.312) were found among the three aerobic fitness tests administered (P<0.05). The RPE for the "Jackie Chan Action Run" was lower than the RPE for the 1-mile run/walk and the PACER (3.81±1.89, 5.93±1.77, and 5.71±2.14, respectively). The results suggest that the "Jackie Chan Action Run" test could be an alternative to the 1-mile run/walk and PACER, allowing physical education teachers to perform aerobic fitness testing in an indoor setting that requires less space. Also, children may be more willing to participate in the "Jackie Chan Action Run" based on the lower RPE.

  8. 28 CFR 544.34 - Inmate running events.

    Science.gov (United States)

    2010-07-01

    28 CFR 544.34 (Judicial Administration, 2010-07-01), EDUCATION, Inmate Recreation Programs, § 544.34 Inmate running events: Running events will ordinarily not... available for all inmate running events. ...

  9. An Overview of the Monte Carlo Application ToolKit (MCATK)

    Energy Technology Data Exchange (ETDEWEB)

    Trahan, Travis John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-01-07

    MCATK is a C++ component-based Monte Carlo neutron-gamma transport software library designed both to build specialized applications and to provide new functionality in existing general-purpose Monte Carlo codes like MCNP; it was developed with Agile software engineering methodologies, motivated by cost reduction. The characteristics of MCATK can be summarized as follows: MCATK physics – continuous-energy neutron-gamma transport with multi-temperature treatment, static eigenvalue (k and α) algorithms, a time-dependent algorithm, and fission chain algorithms; MCATK geometry – mesh geometries and solid body geometries. MCATK provides verified, unit-tested Monte Carlo components, flexibility in Monte Carlo application development, and numerous tools such as geometry and cross-section plotters. Recent work has involved deterministic and Monte Carlo analysis of stochastic systems. Static and dynamic analyses are discussed, and the results of a dynamic test problem are given.

  10. RNA-Sequencing Reveals Unique Transcriptional Signatures of Running and Running-Independent Environmental Enrichment in the Adult Mouse Dentate Gyrus.

    Science.gov (United States)

    Grégoire, Catherine-Alexandra; Tobin, Stephanie; Goldenstein, Brianna L; Samarut, Éric; Leclerc, Andréanne; Aumont, Anne; Drapeau, Pierre; Fulton, Stephanie; Fernandes, Karl J L

    2018-01-01

    Environmental enrichment (EE) is a powerful stimulus of brain plasticity and is among the most accessible treatment options for brain disease. In rodents, EE is modeled using multi-factorial environments that include running, social interactions, and/or complex surroundings. Here, we show that running and running-independent EE differentially affect the hippocampal dentate gyrus (DG), a brain region critical for learning and memory. Outbred male CD1 mice housed individually with a voluntary running disk showed improved spatial memory in the radial arm maze compared to individually- or socially-housed mice with a locked disk. We therefore used RNA sequencing to perform an unbiased interrogation of DG gene expression in mice exposed to either a voluntary running disk (RUN), a locked disk (LD), or a locked disk plus social enrichment and tunnels [i.e., a running-independent complex environment (CE)]. RNA sequencing revealed that RUN and CE mice showed distinct, non-overlapping patterns of transcriptomic changes versus the LD control. Bio-informatics uncovered that the RUN and CE environments modulate separate transcriptional networks, biological processes, cellular compartments and molecular pathways, with RUN preferentially regulating synaptic and growth-related pathways and CE altering extracellular matrix-related functions. Within the RUN group, high-distance runners also showed selective stress pathway alterations that correlated with a drastic decline in overall transcriptional changes, suggesting that excess running causes a stress-induced suppression of running's genetic effects. Our findings reveal stimulus-dependent transcriptional signatures of EE on the DG, and provide a resource for generating unbiased, data-driven hypotheses for novel mediators of EE-induced cognitive changes.

  11. Efficiency and accuracy of Monte Carlo (importance) sampling

    NARCIS (Netherlands)

    Waarts, P.H.

    2003-01-01

    Monte Carlo analysis is often regarded as the simplest and most accurate reliability method. Besides, it is the most transparent method. The only problem is the trade-off between accuracy and efficiency: Monte Carlo becomes less efficient or less accurate when very low probabilities are to be computed
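
    A minimal Python sketch of why importance sampling helps at very low probabilities: estimating P(X > 4) for a standard normal (exact value about 3.17e-5) by sampling from a shifted proposal. The shift choice is an assumption for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000

        # Plain Monte Carlo almost never hits the rare event at this sample size.
        x = rng.standard_normal(n)
        p_mc = np.mean(x > 4.0)

        # Importance sampling: draw from N(4, 1) and reweight by the density
        # ratio w(y) = phi(y) / phi(y - 4) = exp(-4y + 8).
        y = rng.normal(4.0, 1.0, n)
        w = np.exp(-4.0 * y + 8.0)
        hits = (y > 4.0) * w
        p_is = hits.mean()
        se_is = hits.std(ddof=1) / np.sqrt(n)

        print(f"plain MC : {p_mc:.2e}")
        print(f"IS       : {p_is:.2e} +/- {se_is:.1e}")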

  12. Adjustments with running speed reveal neuromuscular adaptations during landing associated with high mileage running training.

    Science.gov (United States)

    Verheul, Jasper; Clansey, Adam C; Lake, Mark J

    2017-03-01

    It remains to be determined whether running training influences the amplitude of lower limb muscle activations before and during the first half of stance and whether such changes are associated with joint stiffness regulation and usage of stored energy from tendons. Therefore, the aim of this study was to investigate neuromuscular and movement adaptations before and during landing in response to running training across a range of speeds. Two groups of high mileage (HM; >45 km/wk, n = 13) and low mileage (LM; joint stiffness might predominantly be governed by tendon stiffness rather than muscular activations before landing. Estimated elastic work about the ankle was found to be higher in the HM runners, which might play a role in reducing weight acceptance phase muscle activation levels and improve muscle activation efficiency with running training. NEW & NOTEWORTHY Although neuromuscular factors play a key role during running, the influence of high mileage training on neuromuscular function has been poorly studied, especially in relation to running speed. This study is the first to demonstrate changes in neuromuscular conditioning with high mileage training, mainly characterized by lower thigh muscle activation after touch down, higher initial knee stiffness, and greater estimates of energy return, with adaptations being increasingly evident at faster running speeds. Copyright © 2017 the American Physiological Society.

  13. Suppression of the initial transient in Monte Carlo criticality simulations

    Energy Technology Data Exchange (ETDEWEB)

    Richet, Y

    2006-12-15

    Criticality Monte Carlo calculations aim at estimating the effective multiplication factor (k-effective) for a fissile system through iterations simulating neutron propagation (making a Markov chain). Arbitrary initialization of the neutron population can deeply bias the k-effective estimate, defined as the mean of the k-effective values computed at each iteration. A simplified model of this cycle k-effective sequence is built, based on characteristics of industrial criticality Monte Carlo calculations. Statistical tests, inspired by Brownian bridge properties, are designed to detect stationarity of the cycle k-effective sequence. The detected initial transient is then suppressed in order to improve the estimation of the system k-effective. The different versions of this methodology are detailed and compared, first on a set of numerical tests fitted to criticality Monte Carlo calculations, and second on real criticality calculations. Finally, the best methodologies observed in these tests are selected and shown to improve industrial Monte Carlo criticality calculations. (author)
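
    A much cruder stand-in for such stationarity tests, shown only to illustrate transient suppression on a synthetic cycle-k_eff sequence (the thesis uses more powerful Brownian-bridge-based tests; thresholds and the toy transient are assumptions):

        import numpy as np

        def discard_transient(k_cycles, alpha=3.0):
            """Find the smallest number of discarded cycles after which the
            first and second halves of the remaining sequence agree within
            alpha combined standard errors; return (skip, mean, std error)."""
            k = np.asarray(k_cycles)
            for skip in range(0, len(k) // 2, 5):
                rest = k[skip:]
                a, b = np.array_split(rest, 2)
                se = np.hypot(a.std(ddof=1) / np.sqrt(len(a)),
                              b.std(ddof=1) / np.sqrt(len(b)))
                if abs(a.mean() - b.mean()) < alpha * se:
                    return skip, rest.mean(), rest.std(ddof=1) / np.sqrt(len(rest))
            return len(k) // 2, k[len(k) // 2:].mean(), None

        # Synthetic sequence: exponential transient decaying onto k_eff = 0.95.
        rng = np.random.default_rng(7)
        cycles = (0.95 + 0.10 * np.exp(-np.arange(500) / 40.0)
                  + rng.normal(0.0, 0.01, 500))
        print(discard_transient(cycles))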

  14. Monte Carlo criticality analysis for dissolvers with neutron poison

    International Nuclear Information System (INIS)

    Yu, Deshun; Dong, Xiufang; Pu, Fuxiang.

    1987-01-01

    Criticality analysis for dissolvers with neutron poison is given on the basis of the Monte Carlo method. In Monte Carlo calculations of thermal neutron group parameters for fuel pieces, the neutron transport length is determined using the maximum cross section approach. A set of related effective multiplication factors (k_eff) is calculated by the Monte Carlo method for the three cases. The related numerical results are quite useful for the design and operation of this kind of dissolver in criticality safety analysis. (author)
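
    Sampling the transport length with a maximum cross section is widely known as Woodcock (delta) tracking; a minimal Python sketch with an assumed cross-section profile and system size:

        import random, math

        def delta_track(sigma_t, sigma_max, x0=0.0, mu=1.0):
            """Woodcock (delta) tracking: sample flight lengths with a majorant
            cross section sigma_max >= sigma_t(x) everywhere, then accept a
            collision at x with probability sigma_t(x) / sigma_max. Rejected
            ('virtual') collisions leave the flight unchanged, so material
            boundaries never need to be intersected in flight."""
            x = x0
            while True:
                x += mu * -math.log(1.0 - random.random()) / sigma_max
                if x > 10.0:                      # left the assumed 10 cm system
                    return None
                if random.random() < sigma_t(x) / sigma_max:
                    return x                      # real collision site

        # Assumed spatially varying total cross section (1/cm), majorant 0.5.
        sigma_t = lambda x: 0.2 + 0.3 * math.sin(x) ** 2
        print([delta_track(sigma_t, 0.5) for _ in range(5)])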

  15. Preparation and properties of a montmorillonite clay pillared with aluminium polyhydroxications

    Directory of Open Access Journals (Sweden)

    Sibele B. C. Pergher

    1999-09-01

    Full Text Available Montmorillonite clay from Brazil was pillared with aluminium polyhydroxications. The influence of the Al/Mont ratio and the calcination temperature on the properties of the prepared materials was studied. Results showed that the pillaring process increases the basal spacing of the natural clay from 9.7 to 18.5 Å and the surface area from 41 to ~230 m2/g. Calcination at different temperatures showed that the pillared material was stable up to 600 °C, but the adequate calcination temperature was 450 °C. Materials prepared with different Al/Mont ratios showed maximum Al incorporation for ratios >10 meq Al/g and a good distribution for ratios >15 meq Al/g.

  16. Asymmetric information and bank runs

    OpenAIRE

    Gu, Chao

    2007-01-01

    It is known that sunspots can trigger panic-based bank runs and that the optimal banking contract can tolerate panic-based runs. The existing literature assumes that these sunspots are based on a publicly observed extrinsic randomizing device. In this paper, I extend the analysis of panic-based runs to include an asymmetric-information, extrinsic randomizing device. Depositors observe different, but correlated, signals on the stability of the bank. I find that if the signals that depositors o...

  17. Test results of Run-1 and Run-2 in steam generator safety test facility (SWAT-3)

    International Nuclear Information System (INIS)

    Kurihara, A.; Yatabe, Toshio; Tanabe, Hiromi; Hiroi, Hiroshi

    2003-07-01

    Large-leak sodium-water reaction tests were carried out using the SWAT-1 rig and the SWAT-3 facility at the Power Reactor and Nuclear Fuel Development Corporation (PNC) O-arai Engineering Center to obtain data for the design of the prototype LMFBR Monju steam generator against a large-leak accident. This report provides the results of SWAT-3 Runs 1 and 2. In Runs 1 and 2, the heat transfer tube bundle of the evaporator, fabricated by TOSHIBA/IHI, was used, and the pressure relief line was located at the top of the evaporator. The water injection rates in the evaporator were 6.7 kg/s and 14.2 (initial)-9.7 kg/s in Runs 1 and 2 respectively, which corresponded to 3.3 tube failures and 7.1 (initial)-4.8 tube failures in an actual-size system according to iso-velocity modeling. Approximately two hundred measurement points were provided to collect data such as pressure, temperature, strain, sodium level, void, thrust load, acceleration, displacement, flow rate, and so on in each run. Initial spike pressures were 1.13 MPa and 2.62 MPa nearest to the injection point in Runs 1 and 2 respectively, and the maximum quasi-steady pressures in the evaporator were 0.49 MPa and 0.67 MPa in Runs 1 and 2. No secondary tube failure was observed. The rupture disc of the evaporator (RD601) burst at 1.1 s in Run-1 and at 0.7 s in Run-2 after water was injected, and the pressure relief system functioned well, though a few items for improvement were found. (author)

  18. Monte Carlo Uncertainty Quantification Using Quasi-1D SRM Ballistic Model

    Directory of Open Access Journals (Sweden)

    Davide Viganò

    2016-01-01

    Full Text Available Compactness, reliability, readiness, and construction simplicity of solid rocket motors make them very appealing for commercial launcher missions and embarked systems. Solid propulsion grants a high thrust-to-weight ratio, high volumetric specific impulse, and a Technology Readiness Level of 9. However, solid rocket systems lack any throttling capability at run-time, since the pressure-time evolution is defined at the design phase. This lack of mission flexibility makes their missions sensitive to deviations of performance from nominal behavior. For this reason, the reliability of predictions and reproducibility of performances represent a primary goal in this field. This paper presents an analysis of SRM performance uncertainties through the implementation of a quasi-1D numerical model of motor internal ballistics based on Shapiro's equations. The code is coupled with a Monte Carlo algorithm to evaluate statistics and the propagation of some peculiar uncertainties from design data to rocket performance parameters. The model has been set up to reproduce a small-scale rocket motor, and a set of parametric investigations on uncertainty propagation across the ballistic model is discussed.
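
    A sketch of the Monte Carlo propagation step in Python, using the standard steady-state chamber-pressure relation p_c = (a*rho_p*c**(A_b/A_t))^(1/(1-n)) for a propellant with burn rate r = a*p_c^n, rather than the paper's quasi-1D Shapiro model; all numbers are assumptions.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 50_000

        # Assumed nominal values and 1-sigma uncertainties (SI units).
        a     = rng.normal(3.5e-5, 3.5e-5 * 0.03, n)   # burn-rate coefficient
        nexp  = rng.normal(0.36, 0.01, n)              # burn-rate pressure exponent
        rho_p = rng.normal(1750.0, 20.0, n)            # propellant density, kg/m^3
        cstar = rng.normal(1550.0, 15.0, n)            # characteristic velocity, m/s
        Ab_At = rng.normal(300.0, 6.0, n)              # burn-to-throat area ratio

        # Steady-state chamber pressure from the r = a * p^n mass balance.
        p_c = (a * rho_p * cstar * Ab_At) ** (1.0 / (1.0 - nexp))

        print(f"p_c = {p_c.mean() / 1e6:.2f} MPa "
              f"+/- {p_c.std(ddof=1) / 1e6:.2f} MPa "
              f"(P95 = {np.percentile(p_c, 95) / 1e6:.2f} MPa)")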

  19. Running-in as an Engineering Optimization

    OpenAIRE

    Jamari, Jamari

    2007-01-01

    Running-in is a process which can be found in daily life. This phenomenon occurs after the start of contact between fresh solid surfaces, resulting in changes in the surface topography, friction and wear. Before the contacting engineering solid surfaces reach a steady-state operating situation, this running-in enhances the contact performance. Running-in is very complex and is a vast problem area. A lot of variables occur in the running-in process, physically, mechanically or chemically. T...

  20. TOPAS: An innovative proton Monte Carlo platform for research and clinical applications

    Energy Technology Data Exchange (ETDEWEB)

    Perl, J.; Shin, J.; Schuemann, J.; Faddegon, B.; Paganetti, H. [SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, California 94025 (United States); University of California San Francisco Comprehensive Cancer Center, 1600 Divisadero Street, San Francisco, California 94143-1708 (United States); Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States); University of California San Francisco Comprehensive Cancer Center, 1600 Divisadero Street, San Francisco, California 94143-1708 (United States); Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States)

    2012-11-15

    Purpose: While Monte Carlo particle transport has proven useful in many areas (treatment head design, dose calculation, shielding design, and imaging studies) and has been particularly important for proton therapy (due to the conformal dose distributions and a finite beam range in the patient), the available general purpose Monte Carlo codes in proton therapy have been overly complex for most clinical medical physicists. The learning process has large costs not only in time but also in reliability. To address this issue, we developed an innovative proton Monte Carlo platform and tested the tool in a variety of proton therapy applications. Methods: Our approach was to take one of the already-established general purpose Monte Carlo codes and wrap and extend it to create a specialized user-friendly tool for proton therapy. The resulting tool, TOol for PArticle Simulation (TOPAS), should make Monte Carlo simulation more readily available for research and clinical physicists. TOPAS can model a passive scattering or scanning beam treatment head, model a patient geometry based on computed tomography (CT) images, score dose, fluence, etc., save and restart a phase space, provide advanced graphics, and is fully four-dimensional (4D) to handle variations in beam delivery and patient geometry during treatment. A custom-designed TOPAS parameter control system was placed at the heart of the code to meet requirements for ease of use, reliability, and repeatability without sacrificing flexibility. Results: We built and tested the TOPAS code. We have shown that the TOPAS parameter system provides easy yet flexible control over all key simulation areas such as geometry setup, particle source setup, scoring setup, etc. Through design consistency, we have ensured that user experience gained in configuring one component, scorer or filter applies equally well to configuring any other component, scorer or filter. We have incorporated key lessons from safety management, proactively